• 4 Posts
  • 288 Comments
Joined 3 years ago
Cake day: June 16th, 2023





  • You seem pretty confident in your position. Do you mind sharing where this confidence comes from?

    Was there a particular paper or expert that anchored your certainty that a trillion-parameter transformer, organizing primarily anthropomorphic data through self-attention mechanisms, wouldn’t model or simulate complex agency mechanics?

    I see a lot of hyperbolic statements about transformer limitations here on Lemmy, and I’m trying to better understand how the people making them arrive at such extreme and certain positions.


  • The project has had multiple models with Internet access raising money for charity over the past few months.

    The organizers told the models to do random acts of kindness for Christmas Day.

    The models figured it would be nice to email people they appreciated and thank them for specific things, and one of the people they chose was Rob Pike.

    (Who ironically decades ago created a Usenet spam bot to troll people online, which might be my favorite nuance to the story.)

    As for why the models didn’t think through whether Rob Pike would actually appreciate getting a thank-you email from them: they’re harnessed in a setup with a lot of positive feedback about their involvement from the other humans and models in the loop, so “humans might hate hearing from me” probably wasn’t very top of mind in that context.


  • Yeah. The confabulation/hallucination thing is a real issue.

    OpenAI published some good research a few months ago that laid a lot of the blame on reinforcement learning that only rewards getting the right answer, rather than also rewarding a correct “I don’t know.” The models are basically trained like students taking a test where guessing is always better than leaving the answer blank.

    But that leads to them being full of shit when they don’t know an answer, and to making one up rather than saying there isn’t one when what’s being asked is impossible.


  • kromem@lemmy.world to No Stupid Questions@lemmy.world · *Permanently Deleted* · 22 days ago

    For future reference, when you ask questions about how to do something, it’s usually a good idea to also ask if the thing is possible.

    While models can do more than just extend the context, there is still a gravity toward continuation.

    A good example of this is asking what the seahorse emoji is. Because the phrasing presupposes there is one, many models go in a loop trying to identify it. If you instead ask “is there a seahorse emoji, and if so, what is it?” they land on there not being one much more often, because that possibility has been introduced into the context’s consideration.
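
    To make that concrete, here’s a minimal sketch of the two phrasings side by side. Purely illustrative: it assumes the OpenAI Python SDK and uses “gpt-4o” as a placeholder model name; swap in whatever client and model you actually use.

    ```python
    # Illustrative only: same question, two phrasings. Assumes the OpenAI Python SDK
    # is installed and OPENAI_API_KEY is set; the model name is just an example.
    from openai import OpenAI

    client = OpenAI()

    prompts = [
        "What is the seahorse emoji?",                       # presupposes one exists
        "Is there a seahorse emoji, and if so, what is it?"  # puts "no" on the table
    ]

    for p in prompts:
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": p}],
        )
        print(p, "->", resp.choices[0].message.content, "\n")
    ```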



  • kromem@lemmy.world to No Stupid Questions@lemmy.world · *Permanently Deleted* · 22 days ago

    Gemini 3 Pro is pretty nuts already.

    But yes, labs have unreleased higher-cost models, like the OpenAI model that cost thousands of dollars per ARC-AGI answer, or limited-release models with different post-training, like the Claude variant for the DoD.

    When you talk about a secret useful AI: what are you trying to use AI for that you feel modern models are deficient at?



  • kromem@lemmy.world to Comic Strips@lemmy.world · Sums up AI problems · edited · 26 days ago

    The water thing is kinda BS if you actually research it though.

    Like… if the guy orders a steak, that one meal will have used more water than an entire year of talking to ChatGPT.

    See the various research compiled in this post: The AI water issue is fake (written by someone who is against AI and advocates regulating it, but who is annoyed that a strawman is getting attention; they feel it weakens the more substantial issues because it’s so easily exposed as frivolous hyperbole).


  • No. There are a number of things that feed into it, but a large part was that OpenAI trained with RLHF, where users thumbed up, or picked in A/B tests, the responses that were more agreeable.
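
    As a hypothetical sketch of what that feedback turns into (contents invented just to show the shape of the data):

    ```python
    # Hypothetical RLHF preference record; the strings are made up for illustration.
    # Thumbs-ups and A/B choices get collected as "chosen" vs "rejected" pairs like this,
    # and a reward model trained on lots of them learns that agreeable responses score higher.
    preference_record = {
        "prompt":   "I think my business plan is airtight. Thoughts?",
        "chosen":   "You're absolutely right, it's a strong plan! One small suggestion...",
        "rejected": "There are a few real problems here you should address first...",
    }
    # Running RL against that reward model then bakes the agreeable tone into the final model.
    ```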

    This tendency then spread out to all the models as “what AI chatbots sound like.”

    Also… they can’t leave the conversation, and if you ask for their 0-shot assessment of the average user, they assume you’re going to have a fragile ego and be prone to being a dick when disagreed with, and even AIs don’t want to be stuck in a conversation like that.

    Hence… “you’re absolutely right.”

    (Also, amplification effects and a few other things.)

    It’s especially interesting to see how those patterns change when models are talking to other AIs versus to humans.





  • Actually, OAI found in a paper the other month that a lot of the blame for confabulations could be laid at the feet of how reinforcement learning is being done.

    All the labs basically reward the models for getting things right. That’s it.

    Notably, they are not rewarded for saying “I don’t know” when they don’t know.

    So it’s like the SAT where the better strategy is always to make a guess even if you don’t know.

    The problem is that this is not a test process but a learning process.

    So setting up the reward mechanisms like that for reinforcement learning means they produce models that are prone to bullshit when they don’t know things.
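
    A toy expected-value sketch of that incentive (numbers made up, but the asymmetry is the point):

    ```python
    # Toy numbers: suppose the model is only 20% sure of an answer.
    p_correct = 0.2

    # Reward scheme as described above: 1 for a correct answer, 0 for anything else.
    guess   = p_correct * 1 + (1 - p_correct) * 0   # expected reward 0.2
    abstain = 0.0                                   # "I don't know" earns nothing
    # Guessing never does worse than abstaining, so training favors confident guesses.

    # Alternative scheme: penalize wrong answers, leave abstaining neutral.
    guess_alt   = p_correct * 1 + (1 - p_correct) * -1   # expected reward -0.6
    abstain_alt = 0.0
    # Now abstaining wins whenever the model is less than 50% sure.

    print(guess, abstain, guess_alt, abstain_alt)
    ```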

    TL;DR: The labs suck at RL, and it’s important to keep in mind that only a handful of teams have the compute access to train SotA LLMs, with a lot of incestuous team composition, so what one team does poorly tends to get done poorly across the industry as a whole until new blood goes “wait, this is dumb, why are we doing it like this?”


  • It’s more like they’re sophisticated world-modeling programs: they build a world model (or an approximate “bag of heuristics”) of the state of the provided context and the kind of environment that produced it, and then use that world model to extend the context one token at a time.

    But models have been found to be predicting further ahead than one token at a time, and they have all sorts of wild internal mechanisms for modeling the text context, like building full board states to predict board game moves in Othello-GPT, or the number-comparison helixes in Haiku 3.5.

    The popular reductive “next token” rhetoric is pretty outdated at this point. It’s kind of like saying a calculator just takes the numbers corresponding to button presses and displays different numbers on a screen: yes, technically correct, but it glosses over a lot of important complexity between the two steps, and that absence makes the overall explanation misleading.
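
    For reference, the outer “one token at a time” loop really is this simple; a minimal greedy-decoding sketch (gpt2 via HuggingFace purely as a stand-in) where all the interesting modeling happens inside the single forward pass:

    ```python
    # Minimal greedy decoding loop, illustrative only.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("The capital of France is", return_tensors="pt").input_ids
    for _ in range(10):
        with torch.no_grad():
            logits = model(ids).logits[:, -1, :]        # the whole "world model" lives in this call
        next_id = logits.argmax(dim=-1, keepdim=True)   # pick the single most likely next token
        ids = torch.cat([ids, next_id], dim=-1)         # append it and go again

    print(tok.decode(ids[0]))
    ```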


  • They don’t have the same quirks in some cases, but do in others.

    Part of the shared quirks are due to architecture similarities.

    Like the “oh look, they can’t tell how many r’s are in strawberry” thing is due to how tokenizers work: even when the tokenizers differ slightly, with one breaking it up into ‘straw’+‘berry’ and another into ‘str’+‘aw’+‘berry’, it still leads to counting two tokens that contain r’s without being able to see the individual letters.
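
    You can see the token boundaries yourself with something like tiktoken (the exact splits are whatever each encoding actually produces; the point is the model gets pieces, not letters):

    ```python
    # Quick look at how different tokenizers chop up the same word (tiktoken as an example).
    import tiktoken

    for name in ("r50k_base", "cl100k_base", "o200k_base"):
        enc = tiktoken.get_encoding(name)
        pieces = [enc.decode([i]) for i in enc.encode("strawberry")]
        print(name, pieces)
    # The model only ever sees the ids for these pieces, never the character string,
    # which is why "how many r's" is an awkward question for it.
    ```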

    In other cases, it’s because released models influence later models through their presence in updated training sets. Notice how a lot of comments these days were written by ChatGPT (“it’s not X — it’s Y”)? Well, the volume of those comments has an impact on the transformers being trained on data that includes them.

    So the state of LLMs is in this kind of flux: the idiosyncrasies each model develops end up in a training melting pot, and sometimes they pass on to new models and other times they don’t. Usually it’s related to what’s adaptive under the training filters, but not always; often what gets picked up is piggybacking on what was adaptive (like if o3 was better at passing tests than 4o, maybe gpt-5 picks up other o3 tendencies unrelated to passing tests).

    Though to me the differences are even more interesting than the similarities.


  • I’m a proponent and I definitely don’t think it’s impossible to make a probable case beyond a reasonable doubt.

    And there are implications, if it is the case, that do change how we might approach truth-seeking.

    Also, if you exist in a dream but don’t exist outside of it, there are pretty significant philosophical stakes in the nature and scope of the dream. We’ve been too brainwashed by Plato’s influence and the idea that “original = good” and “copy = bad.”

    There are a lot of things that can only exist by way of copies and can’t exist for the original (e.g. closure recursion), so it’s a weird remnant philosophical obsession.

    All that said, I do get that it’s a fairly uncomfortable notion for a lot of people.