I’m an anarcho-communist; all states are evil.

Your local herpetology guy.

Feel free to AMA about picking a pet or about reptiles in general; I have a lot of recommendations!

  • 0 Posts
  • 30 Comments
Joined 6 months ago
Cake day: July 6th, 2024

  • You’re probabilistic too; you just have an internal verifier. You think silly things and then decide not to say them, all the time. A human being often thinks things they realize are silly before saying them… that’s an entirely unfair goal in the first place, from my perspective. Why does it have to be non-probabilistic?

    Are you not a general intelligence because sometimes your brain thinks silly things?

    o3 currently works precisely that way, by the way: it generates hundreds of possible answers, then uses something that checks whether the steps actually work before it outputs anything. In fact, they then reinforce it on the correct logical steps, so it gets better at not outputting illogical answers like you said.
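
    To make that generate-then-verify loop concrete, here’s a toy sketch in Python (the generator and verifier are made-up stand-ins; o3’s actual internals aren’t public):

    ```python
    import random

    # Toy stand-ins for the two roles: the "generator" blurts out
    # probabilistic guesses, the "verifier" deterministically checks them.

    def generate_candidate() -> int:
        """Think of a possible answer, silly ones included."""
        return random.randint(-100, 100)

    def verify(x: int) -> bool:
        """Does x actually solve x**2 - x - 56 == 0?"""
        return x * x - x - 56 == 0

    def think_before_speaking(n_samples: int = 10_000):
        """Sample lots of candidates, but only say one that checks out."""
        for _ in range(n_samples):
            candidate = generate_candidate()
            if verify(candidate):
                return candidate
        return None  # everything we thought of got rejected

    print(think_before_speaking())  # prints 8 or -7, the real roots
    ```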

    It’s interesting that you said “not on the probability of the next word, but on context and rationality”.

    Context IS precisely that: you know what’s likely to come next because of the context; that’s you understanding context. YOU as a human being don’t even always get this right. You must realize we are not perfect beings; we think of possibilities and choose the right one. I think we’re much better at this right now, but I don’t think that’s a fundamental difference between us and o3.

    Rationality is the internal verifier.
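
    You can actually watch “what’s likely to come next” shift with context. A rough sketch using the public GPT-2 checkpoint via Hugging Face transformers (any causal LM shows the same effect; the prompts are just examples I made up):

    ```python
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    def top_next_tokens(context: str, k: int = 5):
        """Return the k most likely next tokens given the context."""
        ids = tok(context, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits[0, -1]  # scores for the next token
        probs = torch.softmax(logits, dim=-1)
        top = torch.topk(probs, k)
        return [(tok.decode(int(i)), round(p.item(), 3))
                for i, p in zip(top.indices, top.values)]

    # Different contexts put probability mass on completely different words:
    print(top_next_tokens("The capital of France is"))
    print(top_next_tokens("My favourite colour is"))
    ```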

    “Something that doesn’t require thousands of hours of training to update and is instead capable of ingesting and rationalizing new information on the fly.”

    Being able to do this is… exactly what ARC-AGI was testing. That’s literally the entire point of the benchmark, and it can do it.

    I’ve done the test, by the way. I solved it by brute-forcing possible solutions in my head, then checking whether they were true… did you just divine the answers instantly?
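
    Here’s that brute-force-then-check strategy spelled out on a made-up ARC-style mini task (hypothetical transform library and grids, not real ARC data):

    ```python
    import numpy as np

    # Candidate transformations to try, one of which should explain
    # every training pair. Real ARC tasks need a far richer space;
    # this is just the guess-then-check loop made explicit.
    CANDIDATES = {
        "identity":  lambda g: g,
        "flip_lr":   np.fliplr,
        "flip_ud":   np.flipud,
        "rotate_cw": lambda g: np.rot90(g, k=-1),
        "transpose": lambda g: g.T,
    }

    train_pairs = [
        (np.array([[1, 2], [3, 4]]), np.array([[3, 1], [4, 2]])),
        (np.array([[5, 0], [0, 5]]), np.array([[0, 5], [5, 0]])),
    ]

    def solve(pairs):
        """Brute-force every candidate; keep the one that fits all pairs."""
        for name, fn in CANDIDATES.items():
            if all(np.array_equal(fn(x), y) for x, y in pairs):
                return name, fn
        return None

    match = solve(train_pairs)
    if match:
        name, fn = match
        print(name)                            # "rotate_cw"
        print(fn(np.array([[7, 8], [9, 1]])))  # apply it to an unseen grid
    ```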



  • Why does it matter?

    If it can do it through brute force, it can do it. That’s the first step towards true AGI; nobody said the first AGI would be economical. This feels like a major goalpost shift if you’re acknowledging it can do it at all. Isn’t that insane?

    Not long ago, everyone would’ve said this could never happen, that there was a natural wall simply because all it does is predict the next token. It’s been only a few years of LLMs and they’re already this good. We’re going to have AGI soon. It might not be a transformer, but billions upon billions of dollars are being thrown at this problem, there are people smart enough in the world to make it work, and this is the earliest sign that it’s coming.