• 0 Posts
  • 18 Comments
Joined 2 years ago
Cake day: July 2nd, 2023


  • Yes, you’re anthropomorphizing far too much. An LLM can’t understand, or recall (in the common sense of the word, i.e. have a memory), and is not aware.

    Those are all things that intelligent, thinking beings do. LLMs are none of that. They are a giant black box of math that predicts text. An LLM doesn’t even understand what a word is, or the meaning of anything it vomits out. All it knows is which text is statistically most likely to come next, with a little randomization added for “creativity”.
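
    The “statistically most likely text plus a little randomization” idea can be sketched as temperature sampling over next-token scores. This is a toy illustration of the general technique, not any particular model’s code; the vocabulary and scores are made up:

    ```python
    import math
    import random

    def sample_next_token(logits, temperature=0.8):
        """Turn raw next-token scores into probabilities (softmax with
        temperature) and draw one token at random from that distribution."""
        scaled = {tok: score / temperature for tok, score in logits.items()}
        m = max(scaled.values())  # subtract max for numerical stability
        exps = {tok: math.exp(v - m) for tok, v in scaled.items()}
        total = sum(exps.values())
        probs = {tok: e / total for tok, e in exps.items()}
        r = random.random()
        cumulative = 0.0
        for tok, p in probs.items():
            cumulative += p
            if r < cumulative:
                return tok
        return tok  # fallback for floating-point rounding

    # Hypothetical scores for what might follow "The cat sat on the"
    logits = {"mat": 3.1, "sofa": 2.4, "moon": 0.2}
    print(sample_next_token(logits))
    ```

    Lower temperatures concentrate probability on the top-scoring token (less “creativity”); higher temperatures flatten the distribution so unlikely tokens get picked more often.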





  • Eranziel@lemmy.world to Technology@lemmy.world · The GPT Era Is Already Ending · 5 months ago

    This article and discussion are specifically about massively upscaling LLMs. Go follow the links and read OpenAI’s CEO literally proposing data centers that require multiple dedicated, grid-scale nuclear reactors.

    I’m not sure what your definition of optimization and efficiency is, but that sure as heck does not fit mine.