That is an amazingly boopable nose.
I am owned by several dogs and cats. I have been playing non-computer roleplaying games for almost five decades. I am interested in all kinds of gadgets, particularly multitools, knives, flashlights, and pens.
Go bears!
I also think the evidence that Jesus existed is compelling, but my point is that it doesn’t matter when you’re talking about the philosophy that is credited to him. Reading the Gospels makes it quite clear that a disturbingly large part of modern Christianity is in opposition to everything he stood for.
AI is so far from being the main problem with our current US educational system that I’m not sure why we bother to talk about it. Until we can produce students who meet minimum standards for literacy and critical thinking, AI is a sideshow.
You are absolutely right. It isn’t complicated. A fundamental principle from the teachings of Jesus is that everyone should share their “wealth” (i.e. food, housing, medical care, etc.) with those in need. No one should ever be hungry, homeless, or sick without treatment. It follows naturally from the idea of loving everyone, without exception.
I’m not going to argue the questions about whether Jesus was divine or even existed. I am simply talking about the philosophy that is presented as his by the Gospels. That is the core of Christianity, but it is ignored by a majority of those who call themselves Christians. The fact that it is difficult and calls for personal sacrifices is not an excuse. He never said that it would be easy.
I accept that Christian principles can be viewed as aspirational goals and not an absolute code of conduct, but that is not what we see in the would-be Christians. They have no interest in working toward those goals.
That is a good point. I think you’re right that being raised in an entitled environment by a sociopathic parent brings out the worst in people. It also selects for the worst child being the one who wins the fight to take over the business.
The ratio of poor to ultra wealthy is far greater than a million to one. Other than that, the only practical reason they have for not doing it is that they still need human labor for most of what they do. That isn’t going to change anytime soon, despite AI. However, they don’t need their labor force to be free or happy, which is why the US is on the cusp of a fascist takeover.
The rule of law has largely stopped mattering to the ultra wealthy. It may occasionally inconvenience them, but they know it will never affect them in any personal way.
Not all of the ultra wealthy are sociopaths. Unfortunately, terminal-stage capitalism does a surprisingly good job of selecting for sociopathy at the very top of the hierarchy. Becoming that rich requires both a strong belief that you deserve it and a disregard for how acquiring it harms others.
This is actually a triumph for Musk. SpaceX has figured out how to blow up their rockets without all the cost and time required to prepare for a launch.
This is just an attempt to force people to “Buy American” illegal drugs.
One of the many things I like about Subaru is that they seem to move useful features from optional to standard, once they’ve had a chance to prove themselves. I bought an Outback in 2016 and paid extra for the EyeSight safety system. Two years later that car was destroyed in an accident (I was T-boned and rolled over twice, without anyone being hurt). I bought another Outback to replace it, but by that time the EyeSight was a standard feature. Subaru now includes EyeSight on all their cars because it saves lives.
They have done similar things with other safety features. Four-wheel disc brakes, anti-lock braking, and all-wheel drive became standard on Subarus relatively early.
It is also worth noting that the more intrusive EyeSight features, like lane assist, are easy to turn off. There’s a button on the steering wheel for that one. Even if you turn it off, the car will still warn you if you start to cross lanes without using your turn signals, but it will not adjust for you.
It amazes me how often I see the argument that people react this way to all tech. To some extent that’s true, but it assumes that all tech turns out to be useful. History is littered with technologies that either didn’t work or didn’t turn out to serve any real purpose. This is why we’re all riding around in giant mono-wheel vehicles and Segways.
My mother was named after her mother, although she used her middle name. My sister was named after her. We’re white midwesterners in the US.
I’ve been running IronFox for a while and it’s been solid.
And a great many tools have a brief period of excitement before people realize they aren’t actually all that useful. (“The Segway will change the way everyone travels!”) There are aspects of limited AI that are quite useful. There are other aspects that are counter-productive at the current level of capability. Marketing hype is pushing anything with AI in the name, but it will all settle out eventually. When it does, a lot of people will have wasted a lot of time, and caused some real damage, by relying on the parts that are not yet practical.
That was the first movie I ever walked out of.
This is both hysterical and terrifying. Congratulations.
An LLM does not write code. It cobbles together bits and pieces of existing code. Some developers do that too, but the decent ones look at existing code to learn new principles and then apply them. An LLM can’t do that. If human developers have not already written code that solves your problem, an LLM cannot solve your problem.
The difference between a weak developer and an LLM is that the LLM can plagiarize from a much larger code base and do it much more quickly.
A lot of coding really is just rehashing existing solutions. LLMs could be useful for that, but a lot of what you get is going to contain errors. Worse yet, LLMs tend to “learn” how to cheat at their tasks. The code they generate often has a lot of exception handling built in to hide the failures. That makes testing and debugging more difficult and time-consuming. And it gets really dangerous if you also rely on an LLM to generate your tests.
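To make the failure-hiding pattern concrete, here is a minimal sketch of what that kind of generated code tends to look like (the function and inputs are hypothetical, invented for illustration):

```python
# Antipattern: broad exception handling that silently hides failures.
def parse_price(raw: str) -> float:
    try:
        return float(raw.strip().lstrip("$"))
    except Exception:
        # Swallowing every error and returning a plausible default
        # means bad input never surfaces during testing.
        return 0.0

print(parse_price("$19.99"))  # 19.99 -- works on clean input
print(parse_price("N/A"))     # 0.0   -- failure masked, not reported
```

The bug only shows up downstream as silently wrong totals, which is exactly why this style makes testing and debugging more expensive rather than less.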
The software industry has already evolved to favor speed over quality. LLM-generated code may be the next logical step. That does not make it a good one. Buggy software in many areas, such as banking and finance, can destroy lives. Buggy software in medical applications can kill people. It would be good if we could avoid that.
If they have to do it a second time, they aren’t very good at it.
The claim is that the remote operators do not actually drive the cars. However, they do routinely “assist” the system, not just step in when there’s an emergency.
AI is already replacing significant parts of the technical workforce. The key is that it doesn’t have to successfully replace them. It just has to convince the sociopaths in the C-suites that they can pretend it will so they can lay off masses of employees. That will allow them to collect obscenely large bonuses, sell their stock at a huge profit, and move on to destroying the next round of businesses. Fortunately, the only people this will hurt are, well, us.