• Melvin_Ferd@lemmy.world · 2 days ago

    This is crazy. I’ve literally been saying they’re fallible. You’re saying you’re a professional who fed an LLM some kind of dataset, so I can’t really say what you were trying to accomplish, but I’m just arguing that having it process data isn’t what they’re trained to do. LLMs are incredible tools, and I’m tired of people acting like they’re not just because they keep using them for things they’re not built to do. It’s not a fire-and-forget thing; it needs to be supervised and verified, and it’s not exactly an answer machine. But it’s so good at parsing text and documents, summarizing, formatting and acting like a search engine that you can communicate with rather than trying to grok some arcane sentence. Its power is in language applications.

    It’s so much fun to just play around with and figure out where it can help. I’m constantly doing things on my computer, and it’s great for instructions, especially when I hit a problem that’s kind of unique and needs a bit of discussion to solve.

    • Log in | Sign up@lemmy.world · 2 days ago

      it’s so good at parsing text and documents, summarizing

      No. Not when it matters. It makes stuff up. The less you carefully check every single fucking thing it says, the more likely you are to believe some lies it subtly slipped in as it went along. If truth doesn’t matter, go ahead and use LLMs.

      If you just want some ideas that you’re going to sift through, independently verify and check for yourself with extreme skepticism as if Donald Trump were telling you how to achieve world peace, great, you’re using LLMs effectively.

      But if you’re trusting it, you’re doing it very, very wrong and you’re going to get humiliated because other people are going to catch you out in repeating an LLM’s bullshit.

        • Melvin_Ferd@lemmy.world · 1 day ago

        If it’s as bad as you say, could you give an example of a prompt where it’ll tell you incorrect information?

          • Log in | Sign up@lemmy.world · 22 hours ago

          It’s like you didn’t listen to anything I ever said, or you discounted everything I said as fiction, but everything your dear LLM said is gospel truth in your eyes. It’s utterly irrational. You have to be trolling me now.

            • Log in | Sign up@lemmy.world · 20 hours ago

              I already told you my experience of the crapness of LLMs and even explained why I can’t share the prompt etc. You clearly weren’t listening or are incapable of taking in information.

              There’s also all the testing done by the people discussed in the article we’re talking about, which you’re also irrationally dismissing.

              You have extreme confirmation bias.

              Everything you hear that disagrees with your absurd faith in the accuracy of the extreme blagging of LLMs gets dismissed for any excuse you can come up with.