• Dave@lemmy.nz · 2 months ago

    Consider the implications if ChatGPT started saying “I don’t know” to even 30% of queries – a conservative estimate based on the paper’s analysis of factual uncertainty in training data. Users accustomed to receiving confident answers to virtually any question would likely abandon such systems rapidly.

    I think we would just be more careful with how we used the technology, e.g. don't autocomplete code unless the model meets a reasonable certainty threshold.
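
    A minimal sketch of what that gating could look like, assuming the completion backend exposes per-token log-probabilities; the Completion type, the confidence() helper, and the 0.80 threshold are all illustrative assumptions, not any particular tool's API:

    ```python
    import math
    from dataclasses import dataclass

    @dataclass
    class Completion:
        text: str
        token_logprobs: list[float]  # per-token log-probabilities from the model (assumed available)

    CONFIDENCE_THRESHOLD = 0.80  # illustrative cutoff; would need tuning per use case

    def confidence(completion: Completion) -> float:
        """Geometric mean of token probabilities, a crude proxy for model certainty."""
        if not completion.token_logprobs:
            return 0.0
        avg_logprob = sum(completion.token_logprobs) / len(completion.token_logprobs)
        return math.exp(avg_logprob)

    def suggest(completion: Completion) -> str | None:
        """Only surface the autocomplete suggestion if the model is confident enough."""
        if confidence(completion) >= CONFIDENCE_THRESHOLD:
            return completion.text
        return None  # show nothing rather than a low-confidence guess

    # Example: one confident and one unsure completion
    sure = Completion("return a + b", [-0.05, -0.02, -0.10, -0.03])
    unsure = Completion("return a - b", [-1.2, -0.9, -2.3, -1.5])
    print(suggest(sure))    # -> "return a + b"
    print(suggest(unsure))  # -> None
    ```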

    I would argue that it's more useful to have a system that says it doesn't know half the time than a system that's confidently wrong half the time.
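
    A toy expected-cost calculation makes that concrete; the costs and probabilities below are made-up assumptions, just to show why abstaining can beat being confidently wrong:

    ```python
    # Toy expected-cost comparison (all numbers are illustrative assumptions).
    # Suppose a wrong-but-confident answer costs 10 (you act on it and get burned),
    # an honest "I don't know" costs 1 (you go look it up), and a correct answer costs 0.

    def expected_cost(p_correct: float, p_abstain: float,
                      cost_wrong: float = 10.0, cost_abstain: float = 1.0) -> float:
        p_wrong = 1.0 - p_correct - p_abstain
        return p_wrong * cost_wrong + p_abstain * cost_abstain

    print(expected_cost(p_correct=0.5, p_abstain=0.0))  # confidently wrong half the time -> 5.0
    print(expected_cost(p_correct=0.5, p_abstain=0.5))  # says "I don't know" half the time -> 0.5
    ```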

    • Lucy :3@feddit.org · 2 months ago

      Obviously. But more useful ≠ more money. So the fascocapitalists will ofc not implement that.

    • Rhaedas@fedia.io · 2 months ago

      Depends on the product. From a pure AI research point of view this is what you want: a model that can realize it is missing information and decline to give an answer. But once profit became involved, marketing required fully confident output to get everyone to buy in. So we get what we get, rather than something more reliable.