• kazerniel@lemmy.world · 1 hour ago

    Yeah, sorry, I didn’t phrase that accurately: it doesn’t “pretend” anything, as that would require consciousness.

    This whole bizarre charade of explaining its own “thinking” reminds me of an article where, iirc, researchers asked an LLM to explain how it had calculated a certain number. It gave a response describing how a human would have calculated it, but with this model the researchers somehow managed to watch it working under the hood, and it was actually arriving at the answer by a completely different method than the one it described. It doesn’t know its own workings; even these meta questions are just further exercises in guessing what a plausible answer to the researchers’ question would look like.