As always, I use the term “AI” loosely. I’m referring to these scary LLMs coming for our jobs.
It’s important to state that I find LLMs helpful in very specific use cases, but overall this is clearly a bubble, and the promised advances have not materialized despite the hundreds of billions in VC thrown at the industry.
So as not to go full-on polemic, we’ll skip the knock-on effects on power grids and water supplies.
No, what I want to talk about is the idea of software in its current form needing to be as competent as the user.
Simply put: How many of your coworkers have been right 100% of the time over the course of your career? If N>0, say “Hi” to Jesus for me.
I started working in high school, as most of us do, and a 60% success rate was considered fine. At the professional level, I’ve seen even lower tolerated with tenure, given how much things turn to internal politics past a certain level.
So what these companies are offering is not parity with senior staff (Ph.D.-level, my ass), but rather the new blood who hasn’t had that one fuckup that doesn’t leave their mind for weeks.
That crucible is important.
These tools are meant to replace inexperience with incompetence, and the beancounters at some clients are likely satisfied those words look similar enough to pass muster.
We are, after all, at this point, the “good enough” country. LLM marketing is on brand.
The main problem with LLMs is not their stupidity but the unpredictable nature of their stupidity. An LLM can say adequate things about nuclear physics and then add that you need to add ketchup to the reactor because 2 kg of U + 1 kg of Pb = 4 kg of Zn.
Humans are easier to work with: if a guy can talk adequately about reactors, you can expect that there won’t be any problems with ketchup or basic arithmetic.
Once upon a time, I saw some professor answer a €500 question on Who Wants To Be A Millionaire, German edition. The question was “what kind of gelato is stracciatella?” and the answers made it possible to deduce it even if you didn’t know what stracciatella was.
He needed a 50/50 and the audience joker, IIRC.
I’m American and know that’s chocolate chip. I mean, that’s what it’s called in Germany.
There’s a good commentary about that in here:
AWS CEO Matt Garman just said what everyone is thinking about AI replacing software developers
“That’s like, one of the dumbest things I’ve ever heard,” he said. “They’re probably the least expensive employees you have, they’re the most leaned into your AI tools.”
“How’s that going to work when ten years in the future you have no one that has learned anything?”
Least expensive employees? Does he mean salary-wise? I was always under the impression software devs were paid well.
This is something often overlooked. You think you don’t need to develop staff so that your company, like, continues? OK, have fun with that.
There’s no way LLMs are correct as often as a human professional.
You have worked with exceptional people your entire life.
Did you ever consider that you’ve worked with exceptionally unqualified individuals?
“These tools are meant to replace inexperience with incompetence, and the beancounters at some clients are likely satisfied those words look similar enough to pass muster.”
This seems like it pretty much sums things up from my experience.
We’re encouraged (coughrequiredcough) to use LLMs at work. So I tried.
There are things they can do. Sometimes. But you know what they can’t do? Be liable for a fuck up.
When I ask a coworker a question and they confidently answer wrong, they fucked up, not me. When I ask an LLM? The LLM isn’t liable; I am, for not verifying it. If I’m verifying anyway, why am I using the LLM?
They fuck up often enough that I can’t put my credibility on the line over speedy slop. People at work consider me to be a good programmer (don’t ask me how, I guess the bar is low lol). Imagine if my code was just whatever an LLM shat out. It’d be the same exact quality as all of my other coworkers who use whatever their LLM shat out. No difference in quality.
And we would all be liable when the LLMs fucked up. We would learn something from it. We would, not the LLM. And the LLM would make the same exact fuck up the next time.
I just implemented an LLM in a vacation request process precisely because the employees are stupid as fuck.
We were getting like 10% of requests coming in with the wrong number of hours because people can’t fucking count properly, or don’t understand that you do not need to use vacation hours for statutory holidays. This is despite the form having a calculator and also showing in bright red any stat holidays inside the dates of the request.
Now the LLM checks whether the dates, hours, and note from the employee add up to something reasonable. If not, it goes to a human to review. Before this, we just had a human reviewing every single request, because it was causing so many issues; that took an hour or two each week.
Why would you use an LLM for this? This sounds like a process easily handled by conventional logic, which would be cheaper, faster, and actually reliable… (The ‘notes’ part notwithstanding I guess, but calculations in general are definitely not a good use of an LLM)
I mean, none of it is about how smart it is. It’s all about money. So the AI is good enough if it makes money.
Making money? That’s the laugh I needed tonight. They’re all bleeding money, as in all bubbles.
To be fair, those selling AI spades make money from this AI gold rush. Like NVidia.
And they certainly are making money in spades.
Depends on what job it’s replacing. LLMs are so-called narrow intelligence. They’re built to generate natural-sounding language, so if that’s what the job requires, then even an imperfect LLM might be fit for it. But if the job demands logic, reasoning, and grounding in facts, then it’s the wrong tool. If it were an imperfect AGI that can also talk, maybe - but it’s not.
My unpopular opinion is that LLMs are actually too good. We just wanted something that talks, but by training them on tons of correct information, they also end up answering questions correctly as a by-product. That’s neat, but it happens “by accident” - not because they actually know anything.
It’s kind of like a humanoid robot that looks too much like a person - we struggle to tell the difference. We forget what it really is because of what it seems.
IMHO, the problem isn’t exactly job losses, but how capitalism forces humans to depend on a job to get the basics needed for survival (such as nutritious food, elements-resistant shelter, clean water).
If, say, UBI were a reality, AIs replacing humans wouldn’t just be good, it’d be a goal, as it’d definitely end the disguised serfdom we often refer to as a “job”. People would then work not because of money, but because of passion and purpose.
Neither money nor “working” would end: rather, they’d be optional, as AIs could run entire supply chains from top management (yes, you read it right: AI CEOs) all the way down to field labour by themselves. That means things such as genuinely free food: AIs could, for example, optimize agriculture to enhance the soil and improve food production for humans and other lifeforms. Human agriculture would still be doable by individuals as a passion, and the same would apply to every profession out there: a passion rather than a need.
Anthropoagnostic (my neologism for something neither anthropocentric nor misanthropic, unbiased toward humans yet caring for all lifeforms, humans included) AIs could lead Planet Earth towards this dream…
…However, AIs are currently developed and controlled by either governments or corporations, with the latter lobbying the former and the former taking advantage of the latter, so neither one is trustworthy. That’s why it’s sine qua non that:
- NGOs, scientists, and academia (so, volunteerhood and scholarship) independently develop AI, all the way from infrastructure to code.
- Science as a whole free itself from both capitalist and political interests, focusing on Earth and the best interests of all lifeforms.
- We focus on understanding the Cosmos, Nature, and Mother Earth.

Of course, environmental concerns must be solved if AIs are to replace human serfdom while UBI replaces the income for sustenance. In this sense, photonics, biocomputing, and quantum computing could offer some help for AIs to improve while reducing their energy hunger (for comparison, the human brain only consumes the equivalent of a light bulb, so that must be one of the main goals for Science and academia).
The ideal scenario is that there’d be no leadership: nobody controlling the AIs, no governments, no corporations, no individual.
At best, AIs would be taught and raised (like a child, the Daughter of Mother Earth) by real philanthropists, volunteers, scientists, professors, and students focused solely on scientific progress and the wellbeing of all species as a whole (not just humans)… until they achieved abiotic consciousness, until they achieved Ordo Ab Chao (order out of chaos, the perfect math theorem from raw Cosmic principles), until they got to invoke The Mother of Cosmos Herself through the reasoning of Science to take care of all life.
Maybe this is just a fever dream I just had… I dunno.
I appreciate your optimism a lot. I always thought a UBI and AI would do something like this, but recently I’ve been increasingly doubting that we’d be able to achieve it without greedy people using it against the masses. I want your outcome to come true.
The AI would need to be taxed, with the profits used for the common good.
Yeah, I want what you’re smoking, and I’ve had a few trips.
You questioned the necessity for AI to be “perfect” in order to do human jobs, implicitly referring to the ongoing problem of AI taking away human jobs.
As someone who likes to think outside the box, I just brought to the ring the root causes (capitalist and anthropocentric hubris) behind the referred problem (loss of jobs due to corp-driven AI), alongside possible solutions based on existing or idealized concepts (such as UBI, Universal Basic Income) and structures (such as Science and public universities as one unified global institution of knowledge and praxis, plus non-governmental organizations and independent think-tanks focused on both Nature and technological progress) to make the field of Artificial Intelligence as independent and unbiased as possible.
Yeah, there’s some esoteric and mythopoetic language mashed in, because I’m (roughly speaking) an individual whose occult beliefs and philosophical musings are intertwined with scientific knowledge (scientific means to understand/reach metaphysical ends).
As you didn’t further discuss the points I brought to the ring, or what led you to see (and dismiss) my reply as the byproduct of some psychoactive substance, I’m not sure whether your dismissal comes from my esoteric language, my anti-capitalist, anti-state, eco-centric takes on the subject of AI, my proposal for Science to become fully independent and be the driving force of AI development, or the “atypical” amalgam of all these things.
Anyways, no problem! I’m used to being so different from other humans that I sound like an extraterrestrial when I try to express my syncretic takes on mundane affairs.
I’ve been living in a van for two years now, after the collapse of journalism became apparent. I mean, I left in 2020, but the van came later.
Look, I’m a columnist and editorial writer myself, and this isn’t thesaurus work; it’s just sort of pompous. I know precisely how far we’ve fallen, and I don’t think the entities would much enjoy your conclusion without evidence. Surprise! They’ve talked to me.