What is your definition of reasoning?
It’s not shoving AI slop into it again to get a new AI slop? Until it stops, because it reached the point where it’s just done?
What ancient wizardry do you use for your reasoning at home, if not that?
But like look, we’ve had shit like this since forever, it’s increasingly obvious that most people will cheer for anything, so the new ideas just get bigger and bigger. Can’t wait for the replacement, I dare not even think about what’s next. But for the love of fuck, don’t let it be quantums. Please, I beg the world.
Why not quanta? Don’t you believe in the power of the crystals? Quantum vibrations of the Universe from negative ions from the Himalayan salt lamps give you 153.7% better spiritual connection with the soul of the cosmic rays of the Unity!
…what makes me sadder about the generative models is that the underlying tech is genuinely interesting. For example, for languages with a large presence online they get the grammar right, so stuff like “give me a [declension | conjugation] table for [noun | verb]” works great, and in any application where accuracy isn’t a big deal (like “give me ideas for [thing]”) you’ll probably get some interesting output. But it certainly won’t give you reliable info about most stuff, unless it’s directly copied from elsewhere.
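And honestly, for regular verbs you don’t even need a model; a toy, hand-written sketch (no LLM involved, regular Spanish -ar verbs only, just to show how mechanical this part of grammar is) could look like:

```python
# Toy sketch: present-tense table for a regular Spanish -ar verb.
# This is purely illustrative; irregular verbs need real data.
ENDINGS = {"yo": "o", "tú": "as", "él/ella": "a",
           "nosotros": "amos", "vosotros": "áis", "ellos/ellas": "an"}

def conjugate_ar(verb: str) -> dict:
    """Build a conjugation table by gluing endings onto the stem."""
    assert verb.endswith("ar"), "regular -ar verbs only"
    stem = verb[:-2]
    return {person: stem + ending for person, ending in ENDINGS.items()}

for person, form in conjugate_ar("hablar").items():
    print(f"{person}: {form}")  # hablo, hablas, habla, ...
```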
It’s a bit fucking expensive for a grammar tool.
I get that it gets logarithmically more expensive for every last bit of grammar, and some languages have truly ridiculous, nonsensical rules.
But I wish it had some broader use, that would justify its cost.
Yes, it is expensive. But most of that cost is not because of simple applications, like in my example with grammar tables. It’s because those models have been scaled up to a bazillion parameters and “trained” on a gorillabyte of scraped data, in the hopes they’ll magically reach sentience and stop telling you to put glue on pizza. It’s because of meaning (semantics and pragmatics), not grammar.
Also, natural languages don’t really have nonsensical rules; sure, sometimes you see some weird stuff (like Italian genderbending plurals, or English question formation), but even those are procedural: “if X, do Y”. LLMs are actually rather good at reproducing those procedural rules from examples in the data.
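To show what I mean by “if X, do Y”: even English question formation can be sketched as a mechanical rule. This is a toy, hand-written illustration (not an LLM, and it ignores do-support and embedded clauses); it just fronts the first auxiliary of a simple declarative sentence:

```python
# Toy "if X, do Y" rule: subject-auxiliary inversion.
# If the sentence contains a known auxiliary, move it to the front.
AUXILIARIES = {"is", "are", "was", "were", "can", "will", "should"}

def to_question(sentence: str) -> str:
    """Turn a simple declarative sentence into a yes/no question."""
    words = sentence.rstrip(".").split()
    for i, w in enumerate(words):
        if w.lower() in AUXILIARIES:
            aux = words.pop(i)           # remove the auxiliary...
            words[0] = words[0].lower()  # ...and demote the old first word
            return f"{aux.capitalize()} {' '.join(words)}?"
    return sentence  # no auxiliary found; the rule doesn't apply

print(to_question("The cat is on the mat."))  # → "Is the cat on the mat?"
```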
I wish they’d cut the costs down based on the current uses. Small models for specific applications, dirt cheap in both training and running costs.
(In both our cases, it’s about matching cost vs. use.)
But that won’t happen, since the bubble rose on promises of gorillions of returns, and those have not manifested yet.
We are so fucking stupid, I hate this timeline.
I work in this field. In my company, we use smaller, specialized models all the time. Ignore the VC hype bubble.
There are many interesting AI applications, LLM or otherwise, but I’m talking about the IT bubble, which is growing so big it will eventually consume the industry. If it ever pops, the correction will not be pretty. For anyone.
I’ve evaded the BS so far, but it feels like I won’t be able to hide much longer. And it saddens me. I used to love IT :(
Most of us have no use for quantum computers. That’s a government/research thing. I have no idea what the next disruptive technology will be. They are working hard on AGI, which has the potential to be genuinely disruptive and world changing, but LLMs are not the path to get there and I have no idea whether they are anywhere close to achieving it.
Surprise surprise, most of us have no use for LLMs.
And yet everyone and their grandma is using them for everything.
People asked GPT who the next pope would be.
Or which car to buy.
Or what’s a good local salary.
I’m so fucking tired of all the shit.