• papalonian@lemmy.world · 2 days ago

    I was using it to blow through an online math course I’d ultimately decided I didn’t need but didn’t want to drop. One step of a problem I had it solve involved finding a square root; it spat out a number that was kind of close, but functionally unusable. I told it it had made a mistake three times, and it gave a different number each time. When I finally gave it the right answer and asked, “are you running a calculation or just making up a number?” it said that if I logged in, it would use real-time calculations. I logged in on a different device and asked the same question; it again made up a number, but when I pointed it out, it corrected itself on the first try. Very janky.

    • stratoscaster@lemmy.world · 2 days ago

      ChatGPT doesn’t actually run calculations; it predicts likely text. It can generate code that will actually calculate the answer, or provide a formula, but ChatGPT itself cannot do math.
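      A minimal sketch of the difference (the number here is hypothetical, not from the thread): the model guesses digits token by token, whereas code it generates actually computes the value.

```python
import math

# What the model does internally: predict plausible-looking digits.
# What generated code does instead: actually compute the value.
def real_sqrt(x: float) -> float:
    """Compute the square root instead of guessing one."""
    return math.sqrt(x)

# Deterministic: the same exact float every run, unlike a sampled guess.
print(real_sqrt(1524))
```

      This is why "ask it to write and run code" tends to beat "ask it for the number" on anything numeric.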

    • OpenStars@piefed.social · 2 days ago

      So it forced you to ask it many times? Now imagine that you paid for it each time. For the creator then, mission fucking accomplished.

    • vivendi@programming.dev · 2 days ago

      You need multi-shot prompting when it comes to math. Either the motherfucker gets it right, or in a lot of cases you will not be able to course-correct it. Once a token is in the context, it’s in the context and you’re fucked.

      Alternatively, you could edit the context, correct the parameters, and then run it again.
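      A sketch of that edit-and-rerun idea, treating the chat context as a plain list of messages (the message format and contents here are just illustrative, not any particular API):

```python
# Hypothetical chat transcript. Appending "that's wrong, try again"
# leaves the bad answer in the context, where it keeps polluting
# later generations.
messages = [
    {"role": "user", "content": "What is the square root of 1524?"},
    {"role": "assistant", "content": "About 41.2"},  # made-up number
]

# Edit the context: delete the bad assistant turn entirely
# instead of arguing with it.
messages = [m for m in messages if m["role"] != "assistant"]

# Optionally tighten the prompt before re-running the model.
messages[0]["content"] = "Compute sqrt(1524) to 4 decimal places. Show your work."

print(messages)
```

      Re-running on the cleaned context starts the model fresh, rather than asking it to contradict a token it already committed to.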

      On the other side of the shit aisle

      Shoutout to my man Mistral Small 24B, who is so insecure it will talk itself out of correct answers. It’s so much like me in not having any self-worth or confidence.