• 0 Posts
  • 8 Comments
Joined 2 years ago
Cake day: June 12th, 2023

  • But your citation gives both statements:

    “In fact, the monkey would almost surely type every possible finite text an infinite number of times.”

    and

    “The theorem can be generalized to state that any sequence of events that has a non-zero probability of happening will almost certainly occur an infinite number of times, given an infinite amount of time or a universe that is infinite in size.”

    So when you say the number of times is “unknowable”, the actual answer is “almost surely an infinite number of times”, no ? Since the probability of that can be calculated as 100%. The mindfuck part is that it is still possible that no monkey at all will type a particular text, even though the probability of that is 0.

    The probability that exactly 2 monkeys will type the text is also still 0, same as 3 monkeys, 4 monkeys, etc. - in fact, for any specific finite number of monkeys, the probability that exactly that many (and no more) type out the text is still 0 - only the probability of an infinite number of monkeys typing it out is 100% (the probabilities of all possible outcomes, even when there are infinitely many of them, still have to sum up to 1 after all).

    We just know that it will almost surely happen, but that doesn’t mean it will happen an infinite number of times.

    Basically, if we know “it will almost surely happen” then we also know just as surely (p=1) that it will happen an infinite number of times (but it might also never happen, although with p=0).
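
    To sketch that out a bit more formally (assuming each monkey is an independent trial with some fixed success probability p > 0 - which is all the theorem needs, and with unlimited time p is actually 1):

```latex
% Second Borel-Cantelli lemma, applied to independent monkeys:
%   A_i = "monkey i types out the text",   Pr(A_i) = p > 0 for every i.
\[
  \sum_{i=1}^{\infty} \Pr(A_i) = \infty
  \;\Longrightarrow\;
  \Pr\bigl(A_i \text{ happens for infinitely many } i\bigr) = 1 .
\]
% Consequently, for every finite k,
\[
  \Pr(\text{exactly } k \text{ monkeys type the text}) = 0,
  \qquad
  \Pr(\text{infinitely many do}) = 1 .
\]
```

    So “almost surely an infinite number of times” really is the precise statement, even though every individual finite count still has probability 0.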


  • Ok, this is interesting, so thanks for pointing me to it. I think it’s still safe to say “almost surely an infinite number of monkeys” as opposed to “almost surely at least one”, since the probability of both cases is still 100% (can their probability even be quantitatively compared ? is one 100% more likely than another 100% in this case ?)

    The idea that something with probability 0 can happen in an infinite set is still a bit of a mindfuck - although I understand why this is necessary (e.g. picking a random marble from an infinite set of marbles where 1 is blue and all the others are red - the probability of picking the blue marble is 0, but it is obviously still possible).
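
    The textbook way to make “probability 0 but still possible” precise actually uses a continuous draw rather than marbles (a genuinely uniform pick from a countably infinite bag of marbles turns out not to be definable, which is its own mindfuck), something like:

```latex
\[
  X \sim \mathrm{Uniform}[0,1]
  \qquad\Longrightarrow\qquad
  \Pr(X = x) = 0 \ \text{ for every single } x \in [0,1].
\]
```

    Yet every draw still lands on some x, so an event of probability 0 occurs on literally every draw.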


  • That’s the thing though, infinity isn’t “large” - that is the wrong way to think about it, large implies a size or bounds - infinity is boundless. An infinity can contain an infinite number of other infinities within itself.

    Mathematically, if the monkeys are generating truly random sequences of letters, then an infinite number (and not just “at least one”) of them will by definition immediately start typing out Hamlet, and the probability of that is 100% (not “almost surely” - edit: I was wrong on this part, 100% here does actually mean “almost surely”, see below). At the same time, every possible finite combination of letters will begin to be typed out as well, including every possible work of literature ever written, past, present or future, and each of those will begin to be typed out by an infinite number of other monkeys, with 100% probability.
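
    Here’s a quick-and-dirty finite illustration of that (a toy Python sketch - the alphabet, target and lengths are arbitrary choices, and obviously no finite simulation can capture the actual infinite case):

```python
import random
import string

# Toy, finite version of the statement above: a bunch of "monkeys" each type a
# long random string, and we count how many of them happen to produce a short
# target. Alphabet, target and lengths are arbitrary choices for illustration.
ALPHABET = string.ascii_lowercase + " "   # 27 symbols
TARGET = "cat"                            # chance per position: (1/27)**3
MONKEYS = 200
CHARS_PER_MONKEY = 50_000

random.seed(0)
hits = sum(
    TARGET in "".join(random.choices(ALPHABET, k=CHARS_PER_MONKEY))
    for _ in range(MONKEYS)
)

# Roughly 1 - (1 - (1/27)**3) ** CHARS_PER_MONKEY of the monkeys (about 92%
# here) should contain the target; longer texts push that fraction toward 1.
print(f"{hits} of {MONKEYS} monkeys typed {TARGET!r}")
```

    Crank CHARS_PER_MONKEY up and the fraction of “successful” monkeys creeps toward 100% for any fixed finite target - the infinite-monkeys statement is the limiting version of that.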


  • The first computer I used was a PDP-8 clone, which was a very primitive machine by today’s standards - it only had 4k words of RAM (hand-made magnetic core memory !) - you could actually do simple programming tasks (such as short sequences of code to load software from paper tape) by entering machine code directly into memory by flipping mechanical switches on the front panel of the machine for individual bits (for data and memory addresses)

    You could also write assembly code on paper, and then convert it into machine code by hand, and manually punch the resulting code sequence onto paper tape to then load into the machine (we had a manual paper punching device for this purpose)

    Even with only 4k words of RAM, there were actually multiple assemblers and even compilers and interpreters available for the PDP-8 (FOCAL, FORTRAN, PASCAL, BASIC) - we only had a teletype interface (that printed output on paper), no monitor/terminal, so editing code on the machine itself was challenging, although there was a line editor which you could use, generally to enter programs you wrote on paper beforehand.

    Writing assembly code is not actually the same as writing straight machine code - assemblers actually do provide a very useful layer of abstraction, such as function calls, symbolic addressing, variables, etc. - instead of having to always specify memory locations, you could use names to refer to jump points/loops, variables, functions, etc., and the assembler would then convert those into specific addresses as needed, so a small change to the code or data structures wouldn’t require a huge manual process of recalculating all the memory locations - it’s all done automatically by the assembler.

    So yeah, writing assembly code is still a lot easier than writing direct machine code - even when assembling by hand, you would generally start with assembly code, and just do the extra work that an assembler would do, but by hand.
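
    Just to illustrate what that “extra work” looks like, here’s a rough Python sketch of the two passes a simple assembler makes - using a completely made-up three-instruction machine, nothing like the real PDP-8 encoding:

```python
# Toy two-pass assembler for a made-up machine (NOT real PDP-8 opcodes).
# Pass 1 assigns an address to every label; pass 2 replaces symbolic names
# with those addresses. Doing exactly this with pencil and paper is what
# "assembling by hand" meant.
OPCODES = {"LOAD": 0o1000, "ADD": 0o2000, "JMP": 0o5000}   # invented encodings

SOURCE = [
    # (label, mnemonic, operand) - operands are symbolic names, not addresses
    ("START", "LOAD", "COUNT"),
    (None,    "ADD",  "ONE"),
    (None,    "JMP",  "START"),
    ("COUNT", "WORD", 0o0005),   # data words
    ("ONE",   "WORD", 0o0001),
]

ORIGIN = 0o200   # arbitrary load address

# Pass 1: walk the source once, recording the address of every label.
symbols = {}
for addr, (label, _, _) in enumerate(SOURCE, start=ORIGIN):
    if label:
        symbols[label] = addr

# Pass 2: emit machine words, substituting label addresses for names.
words = []
for _, op, operand in SOURCE:
    words.append(operand if op == "WORD" else OPCODES[op] | symbols[operand])

for addr, word in enumerate(words, start=ORIGIN):
    print(f"{addr:04o}: {word:04o}")
```

    The symbolic names are the whole point: insert an instruction near the top and the assembler just recomputes every address on the next run - doing it by hand, you’d be re-deriving all of them yourself.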