In a recent survey, we explored gamers’ attitudes towards the use of Gen AI in video games and whether those attitudes varied by demographics and gaming motivations. The overwhelmingly negative attitude stood out compared to other surveys we’ve run over the past decade.

In an optional survey (N=1,799) we ran from October through December 2025 alongside the Gamer Motivation Profile, we invited gamers to answer additional questions after they had looked at their profile results. Some of these questions were specifically about attitudes towards Gen AI in video games.

Overall, the attitude towards the use of Gen AI in video games is very negative: 85% of respondents reported a below-neutral attitude, and a highly skewed 63% selected the most negative response option.

Such a highly skewed negative response is rare in the many years we’ve conducted survey research among gamers. As a point of comparison, in 2024 Q2-Q4 we collected survey data on attitudes towards a variety of game features. The chart below shows the % of negative (i.e., below-neutral) responses for each of those features. In that survey, 79% had a negative attitude towards blockchain-based games, which helps anchor where the attitude towards Gen AI currently sits. We’ll come back to the “AI-generated quests/dialogue” feature later in this blog post, since another survey question breaks down that specific AI use.

  • P03 Locke@lemmy.dbzer0.com · 1 day ago

    I think we, as a society, need to do a better job separating out the real issue. The real issue isn’t AI. The issue is laziness. It’s the “slop” part, not the “AI” part.

    This is just CGI arguments all over again. People fucking hated CGI back in the 90s and 2000s. They hated how it was a crutch for VFX, hated how people wouldn’t bother hiring an animal to put into a simple scene, but they’d spend $10K to make a CGI sheep for a few seconds. Practical effects were suddenly novel. People praised Mad Max: Fury Road for its practical effects, but completely ignored the fact that Fury Road very much had CGI effects throughout.

    And that’s the secret: people stopped talking about CGI when it became invisible. If you can’t tell it’s CGI, then CGI has done its job. If you can’t tell it’s AI, then AI has done its job.

    But, quite often, you can tell it’s AI, because lazy hacks pretend it’s supposed to replace things that it’s not made for. They spend five minutes trying to generate something, and call it “good enough”. The creative art/video models are getting there, but they aren’t there yet. It still requires a ton of work to get certain styles out of the uncanny valley, and inpainting isn’t perfect. Voice models are okay, and better than the old TTS ones, but they don’t know how to act out a scene well enough. 3D modeling might get somewhere, but it shouldn’t be used for primary characters.

    This hype train needs to crash into a brick wall, so that we can use it in a more reserved manner. Some companies are quietly doing so, but that’s not what pushes the headlines.

    • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 🇮 @pawb.social · 2 hours ago

      Most of the CGI in Fury Road is used for the backdrops, not the main focus of the action. You wouldn’t even notice it unless it’s pointed out and you see the real world place they filmed vs what they made it look like with CGI in the film.

      Compare that to, say, the crappy alligators in Eraser. Or even the dinosaurs in Jurassic Park.

      • P03 Locke@lemmy.dbzer0.com · 1 hour ago

        You wouldn’t even notice it unless it’s pointed out and you see the real world place they filmed vs what they made it look like with CGI in the film.

        That’s my point. If it’s invisible, it’s done its job.

        • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 🇮 @pawb.social · 1 hour ago

          And that’s my point; back in the 90s, that shit wasn’t invisible. It was used in the foreground, not the background. Practical effects still look better for foreground action.

          And I think my app is bugging out; I only saw the first two paragraphs originally. Now there are two more.

    • leftzero@lemmy.dbzer0.com · 23 hours ago

      No, laziness is good. Laziness begets engineering.

      The issue is that “generative AI” (which is neither generative nor intelligence) is built upon the stolen works of countless artists.

      The issue is that it consumes massive amounts of resources and energy to produce mediocre results at best.

      The issue is that it threatens the livelihood of whole segments of society, especially the ones who contribute the most to human culture.

      The issue is that it’s not sustainable. Once it runs out of new content to plagiarize it will be unable to produce anything new. It can’t replace what it’s destroying.

      The issue is that it’s so vastly inefficient that the data centres needed to sustain it are becoming a major contributor to global warming.

      The issue is that its bubble is causing massive price increases in consumer computer parts.

      The issue is that when it pops it’ll take the rest of the economy with it.

      The issue is that it’s a gateway drug. It’s being sold at a loss to destroy the human competition, and will inevitably increase massively in price once it’s become a necessary part of everyone’s process.

      The issue is that it’s being forced everywhere regardless of its uselessness for the task, replacing technologies that were actually useful and making everything less useable and more inefficient.

      The issue is that it’s making everything less reliable, and will inevitably cause massive damage and loss of life.

      The issue is that LLM use has been demonstrated to cause brain damage, yet they elude regulation and the companies selling them have yet to face consequences.

      The issue is that all of this makes it an existential threat to humanity, and a significant contributor to the ones we were already facing.

      The issue is that, once you’ve taken into account all the pros and cons, doing everything possible to ensure it ceases to exist as soon as possible in any way, shape, or form, together with the companies selling it and the CEOs responsible for them and any politicians and investors enabling them, becomes an evident moral and ethical imperative.

      • P03 Locke@lemmy.dbzer0.com · 17 hours ago

        If looking at a picture is stealing, then I’m doing so every day when I browse a web page or Google Image Search.

        Energy needs are a concern, but they’re not a new problem. Our energy needs have been ramping up ever since we learned how to make electricity.

        The ones who “contribute to human culture” are the 1% who are lucky enough to make a career out of making art or making music or whatever other creative talent they had. The problem is oversaturation, not AI. AI makes the problem worse, but so does the Internet and every other technological leap we’ve seen.

        “Once it runs out of new content to plagiarize it will be unable to produce anything new.” Sounds like humans in a nutshell. Good artists borrow, great artists steal. Creativity itself is not sustainable at the rate we consume it. Every new thing is drilled into the ground, and beaten into a bloody pulp, until we find the next new thing. This is not a new problem.

        Capitalism is the enemy of humanity. Capitalists wield AI as a weapon, and we treat it as a scapegoat. We think that we can just get rid of AI, and then the enemy is gone. But, AI isn’t going away, and the same enemy we’ve always had still exists.

        Use it to your advantage. Use local models. Support open source LLMs. The biggest failure of rich capitalist assholes is sheer, absolute overconfidence and an inability to relate to the people they are trying to fleece.

        • Deyis@beehaw.org · 3 hours ago

          If looking at a picture is stealing. . .

          Except that’s not what AI and those who use it are doing. This is a deliberate oversimplification to try to excuse derivative and copied works of artists who have had their art stolen. When you do it, it’s copyright infringement. When AI does it, you get a deluge of people who lack the patience and discipline to actually produce any creative work trying to excuse it.

          • P03 Locke@lemmy.dbzer0.com · 3 hours ago

            This is a deliberate oversimplification to try to excuse derivative and copied works of artists who have had their art stolen.

            It’s not. You misunderstand both copyright law and how LLMs work.

            Models are GBs of weights, typically in the 4GB to 24GB range. They don’t look at a picture and then copy that picture into the model; there isn’t enough disk space to do something like that. The image is only used for training, adjusting weights here and there based on how the image relates to its description. You can’t just say “recreate the Mona Lisa” and have it give you a pixel-perfect copy of the original.
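
            To put that in toy form (a heavily simplified, hypothetical sketch, not any real model’s training code): each image only nudges a fixed-size block of weights and is then thrown away, so the model stays the same size no matter how many images it has seen.

            import numpy as np

            rng = np.random.default_rng(0)
            weights = rng.normal(size=1000)            # the whole "model": a fixed-size array of weights

            def training_step(weights, image, caption_embedding, lr=1e-3):
                # Nudge the weights so the image's features line up better with its caption.
                features = image.flatten()             # 1,000 fake pixel values
                error = weights * features - caption_embedding
                gradient = error * features            # gradient of 0.5 * error**2 w.r.t. the weights
                return weights - lr * gradient         # slightly adjusted weights; the image is discarded

            # One step with made-up data; real training repeats this across millions of images.
            image = rng.random((40, 25))               # a fake 40x25 "image"
            caption_embedding = rng.normal(size=1000)  # a fake embedding of its description
            weights = training_step(weights, image, caption_embedding)
            print(weights.shape)                       # still (1000,), no matter how many images were seen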

            When you do it, it’s copyright infringement.

            It’s not copyright infringement to copy a style. People do it all the time. You wouldn’t believe the amount of times I’ve seen something that I thought was some unique style, and thought that one artist did it, but it turns out it’s just another copycat artist “inspired by” the more popular artist.

            Because that’s what people do to something unique, or even remotely rare: copy it a thousand times and drive it into the ground until you’re fucking sick of it.

            • Deyis@beehaw.org · 2 hours ago

              uMmMmM aCkTuALLy

              Taking the work of artists without compensating them, for your own commercial gain, is ethically bankrupt and theft. The fact that you keep likening an AI model to an actual person demonstrates that this isn’t a conversation worth continuing.

    • Gabadabs@lemmy.blahaj.zone · 1 day ago

      I get your overall point, but I do think that the issue isn’t laziness; the issue is the use of AI itself. I think it’s a problem when AI is used whether the result looks good or not, because of how those models are trained, the environmental impact of their data centers, and other issues. For example, the current RAM shortage is a direct result of these data centers.

      Overall, we’re also talking about people’s jobs. And as much as I’m for degrowth and reducing the amount of work that people do, I also think it’s important that artists, who are typically underpaid anyways, are able to keep their paying jobs. I’ve seen so many programming positions reduced to minimum-wage AI prompt writer positions, and that same shit is happening to real artists who have rent to pay and kids to raise… We already have tools to make these jobs more efficient, but the last thing video games really need is more cost-cutting measures.