

Freedom is the right to tell people what they do not want to hear.
If you don’t like LLMs then why can’t you just not use them? Why do you have to start avoiding everything LLM like the plague?
AI is not the money maker people claimed it was going to be.
When people make claims about what “AI” is going to do in the future they’re talking about Artificial General Intelligence - not Large Language Models.
Or more likely it was a shitty job that shouldn’t have been done by a human in the first place.
In my experience you don’t even need camo to blend in. I’m a self-proclaimed military autist and my entire wardrobe is earth tones. That alone is enough to startle people on the trails when they’re not looking ahead and I suddenly appear in front of them on my bike. Camo obviously helps, but it’s not about perfectly blending in - it’s about not standing out. When I spot people in the woods it’s always because of a pale face or hands, or some brightly colored or white piece of clothing. Black stands out a lot too, but anything gray, brown, or green usually disappears into the background unless you’re staring right at it.
Because of climate change? That’s not even in the ballpark when it comes to the worst-case estimates of excess deaths caused by it. No wonder young people are so stressed out…
Almost every single post from you mentions that you’re a single mom to a 13-year-old boy, and you ask about things like co-sleeping, whether it’s okay to wear bikinis in front of your son and his friends, how to teach them about bodily changes during puberty, and now you’re a stripper as well.
I’ve been seeing you around here for a while now, and there seems to be a theme here.
It is a big part of the issue, but as Lemmy clearly demonstrates, that issue doesn’t go away even when you remove the algorithm entirely.
I see it a lot like driving cars - no matter how much better and safer we make them, accidents will still happen as long as there’s an ape behind the wheel, and probably even after that. That’s not to say things can’t be improved - they definitely can - but I don’t think it can ever be “fixed,” because the problem isn’t the platform - it’s us. You can’t fix humans by tweaking the code on social media.
Of course not. The issue with social media is the people. Algorithms just bring out the worst in us, but they didn’t make us like this - we already were.
They also mindlessly broadcast whatever lies the CEOs of the big tech companies are telling.
LLMs? Want to give an example of this? When ChatGPT 5 came out I asked it what the “thinking built in” means and it told me it’s probably just a marketing term.
no one else sees that the media is driving a weird campaign against AI
Media criticism tends to fly out the window the moment the narrative aligns with someone’s personal views. Echo chambers can be hard to recognize once you’re inside one. If I didn’t come to Lemmy, I’d barely even know that there are people who’ve made hating AI their whole identity - they seem to be nowhere to be found in the real world.
For me LLMs have been the biggest thing since podcasts. Feels almost like gaslighting to read this AI hate here virtually every day as it doesn’t even remotely align with my personal experience of it.
I disagree with the premise, but I’d wager that people who say “suffering is good” are probably talking about things like lifting heavy at the gym or working long hours - not spitting blood in a ditch.
By what logic?
You think you have - but there’s really no way of knowing.
Just because someone writes like a bot doesn’t mean they actually are one. Feeling like “you’ve caught one” doesn’t mean you did - it just means you think you did. You might have been wrong, but you never got confirmation to know for sure, so you have no real basis for judging how good your detection rate actually is. It’s effectively begging the question - using your original assumption as “proof” without actual verification.
And then there’s the classic toupee fallacy: “All toupees look fake - I’ve never seen one that didn’t.” That just means you’re good at spotting bad toupees. You can’t generalize from that and claim you’re good at detecting toupees in general, because all the good ones slip right past you unnoticed.
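To put rough numbers on that selection bias (all made up, just to illustrate), here’s a toy simulation of a reader who reliably spots clumsy bots but almost never notices convincing ones:

```python
import random

random.seed(42)

# Hypothetical population: 1,000 bot comments, 70% clumsy, 30% convincing.
bots = ["clumsy"] * 700 + ["convincing"] * 300

# Assumed reader: spots 90% of clumsy bots but only 5% of convincing ones.
caught = [b for b in bots if random.random() < (0.90 if b == "clumsy" else 0.05)]

clumsy_share = sum(b == "clumsy" for b in caught) / len(caught)
print(f"Bots caught: {len(caught)} of {len(bots)}")
print(f"Share of caught bots that were clumsy: {clumsy_share:.0%}")
print(f"Convincing bots that slipped past: {300 - sum(b == 'convincing' for b in caught)}")
```

Nearly every bot this reader catches looks clumsy, so from their point of view “all bots are obvious” - while hundreds of convincing ones sail past without ever showing up in the tally.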
That’s not what they mean by “happiness” when they say Finland is the happiest country in the world. It’s more about overall life satisfaction, and I can assure you that this isn’t achievable by drugs alone.
I hear you - you’re reacting to how people throw around the word “intelligence” in ways that make these systems sound more capable or sentient than they are. If something just stitches words together without understanding, calling it intelligent seems misleading, especially when people treat its output as facts.
But here’s where I think we’re talking past each other: when I say it’s intelligent, I don’t mean it understands anything. I mean it performs a task that normally requires human cognition: generating coherent, human-like language. That’s what qualifies it as intelligent. Not generally so, like a human, but a narrow/weak intelligence. The fact that it often says true things is almost accidental. It’s a side effect of having been trained on a lot of correct information, not the result of human-like understanding.
So yes, it just responds with statistical accuracy but that is intelligent in the technical sense. It’s not understanding. It’s not reasoning. It’s just really good at speaking.
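If it helps, here’s a deliberately tiny sketch of what “statistical language generation” looks like at its most primitive - a bigram model over a made-up corpus (a real LLM is enormously more sophisticated, but the principle of predicting a plausible next word is the same):

```python
import random
from collections import defaultdict

random.seed(0)

# A tiny made-up corpus; real models train on trillions of words.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# The model's entire "knowledge": which word tends to follow which.
following = defaultdict(list)
for word, next_word in zip(corpus, corpus[1:]):
    following[word].append(next_word)

# Generate text by repeatedly sampling a statistically likely next word.
word, output = "the", ["the"]
for _ in range(12):
    word = random.choice(following[word])
    output.append(word)

print(" ".join(output))
```

It produces grammatical-looking output without representing what a cat or a mat is - it only knows what tends to come next. Scale that idea up by many orders of magnitude and you get something that often says true things as a by-product of its training data.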
A linear regression model isn’t an AI system.
The term AI didn’t lose its value - people just realized it doesn’t mean what they thought it meant. When a layperson hears “AI,” they usually think AGI, but while AGI is a type of AI, it’s not synonymous with the term.
I’ve had this discussion countless times, and more often than not, people argue that an LLM isn’t intelligent because it hallucinates, confidently makes incorrect statements, or fails at basic logic. But that’s not a failure on the LLM’s part - it’s a mismatch between what the system is and what the user expects it to be.
An LLM isn’t an AGI. It’s a narrowly intelligent system, just like a chess engine. It can perform a task that typically requires human intelligence, but it can only do that one task, and its intelligence doesn’t generalize across multiple independent domains. A chess engine plays chess. An LLM generates natural-sounding language. Both are AI systems and both are intelligent - just not generally intelligent.
What does history have to do with it? We’re talking about the definition of terms - and a machine learning system like an LLM clearly falls within the category of Artificial Intelligence. It’s an artificial system capable of performing a cognitive task that’s normally done by humans: generating language.
What do you mean they don’t give you a choice? You always have the choice not to use it. DDG gives me AI summaries and I never read them. WhatsApp has an LLM button I’ve never pressed. Twitter has Grok, never tried it. Android probably has Gemini somewhere, and I don’t even know how to access it. As for Proton’s LLM, I hadn’t even heard of it despite paying for their email for a decade. I just don’t see how something existing as a feature in a service I already use somehow forces me to engage with it.
If someone is so deeply anti-LLM that they want to avoid all this on principle, I don’t necessarily have an issue with that. But personally, I genuinely struggle to grasp the logic behind it. People seem to have a strong emotional response to LLMs - your reply makes that pretty clear - and that’s the part that really boggles my mind.