Things are undoubtedly bad at Tesla. Its sales are dwindling. Its profits are plunging, as is its share price. There are regular protests outside its showrooms. The Cybertruck is a flop. And somehow, it’s actually a lot worse than that.
The 71% drop in net income it just reported may have been overshadowed by CEO Elon Musk’s announcement that he would be stepping back from his controversial duties at the Department of Government Efficiency (DOGE). But that drop is just one indication of serious financial sickness at the EV maker, brought on by the first sales decline in its history and by falling prices for electric vehicles.
The bottom line problem at Tesla is its vanishing bottom line. A deeper look at its first quarter report shows it’s now losing money on its ostensible reason for existence – selling cars.
It was only able to post a $409 million profit in the quarter thanks to the sale of $595 million worth of regulatory credits to other automakers.
But if the Trump administration gets its way, Tesla can kiss those regulatory credits – the thing keeping it in the black – goodbye, too.
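A quick back-of-the-envelope check, using only the two figures reported above and ignoring tax effects, shows how thin the margin is:

```python
# Figures cited above from Tesla's Q1 report (millions of USD).
net_income = 409           # reported quarterly profit
regulatory_credits = 595   # credits sold to other automakers

# Strip the credit revenue out (a simplification - real accounting
# would also adjust taxes) and the rest of the business is in the red.
income_excluding_credits = net_income - regulatory_credits
print(f"Quarter without credits: ${income_excluding_credits}M")  # -> $-186M
```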
I get what you mean, but it’s still stuck at level 2 and it always will be. No matter how good it is, if you move your eyes from the road, it will eventually kill you. Cameras alone are not sufficient for autonomous driving.
I disagree with that assertion, because they’re right that the only beings that can currently drive rely on vision. Vision alone is evidently sufficient for driving.
But autonomous driving really hasn’t succeeded yet. We still have no idea what is required for autonomous driving or whether we can do it at all, regardless of sensors.
So you’re implying that we can definitely do autonomous driving but can’t do it the way humans do, whereas I say we won’t know the requirements until we find some approach that succeeds – and we may never find one.
Yeah, sure. If you want the same bad results that humans deliver in terms of crash rates, then it’s possible. I wouldn’t trust it. Also, human vision and processing are completely different from computer vision and processing.
Presumably we have the intelligence to set requirements before something can be called self-driving - that’s usually what the fuss is about, whether the marketing is claiming it’s something it’s not.
If they fail with their approach, I’m fine with that, just like I’m fine if Waymo fails with theirs. If either succeeds, why should I care how? Obviously there’s a problem if a car runs over some old lady at a stop sign and drags her down the street, but that’s clearly a failure on their part.
We already have that https://www.sae.org/blog/sae-j3016-update
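For anyone not following the link: J3016 boils the taxonomy down to six levels. Here’s a rough paraphrase as data – the descriptions are abbreviated, not the standard’s exact wording:

```python
# SAE J3016 driving automation levels, paraphrased (not official wording).
SAE_LEVELS = {
    0: "No Driving Automation - human does everything",
    1: "Driver Assistance - steering OR speed support (e.g. adaptive cruise)",
    2: "Partial Automation - steering AND speed support; driver must supervise",
    3: "Conditional Automation - system drives in limited conditions; "
       "driver must take over when requested",
    4: "High Automation - no takeover needed within its operational domain",
    5: "Full Automation - drives everywhere a human could",
}

def requires_driver_attention(level: int) -> bool:
    """Levels 0-2 require constant supervision; at 3+ the system is
    responsible while inside its operational design domain."""
    return level <= 2
```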
Yes, we have the definitions, but I haven’t read anything about whether they’re effectively required. Is there a test, a certification authority, rules for liability or revocation? Have we established a way to actually require it?
I hope we wouldn’t let manufacturers self-certify, although historical data is important evidence. And I hope we don’t prop up manufacturers’ profitability by limiting their liability or by creating a path to justice that’s doomed to fail.
This stuff is highly regulated https://en.m.wikipedia.org/wiki/Regulation_of_self-driving_cars
Mercedes sells the first autonomous (L3) car you can buy, though you can only activate it at low speeds on certain roads in Germany. It’s only possible because of its lidar sensors, and when it’s active you’re legally allowed to look at your phone, as long as you can take over within 10 or so seconds (the hand-over logic is sketched below).
You aren’t allowed to do this in a Tesla, since the government doesn’t categorize Teslas as autonomous vehicles; that classification requires L3 or above.
No car manufacturer can sell genuinely autonomous vehicles without government approval. Tesla’s FSD is just marketing BS. Others are far ahead in autonomous driving tech.
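To make the Level 3 hand-over described above concrete, here’s a hypothetical sketch of the gating logic such a system implies. The names and thresholds are illustrative only – not Mercedes’ actual certified parameters, which have also changed over time:

```python
from dataclasses import dataclass

# Illustrative thresholds only - NOT Mercedes' actual certified values.
MAX_ACTIVATION_SPEED_KPH = 60
TAKEOVER_GRACE_SECONDS = 10

@dataclass
class VehicleState:
    speed_kph: float
    on_approved_road: bool   # e.g. specific mapped motorway segments
    lidar_healthy: bool      # the L3 approval leaned on lidar redundancy
    driver_reachable: bool   # driver present and able to take over

def l3_may_engage(s: VehicleState) -> bool:
    """Level 3 only engages inside a narrow operational design domain."""
    return (s.speed_kph <= MAX_ACTIVATION_SPEED_KPH
            and s.on_approved_road
            and s.lidar_healthy
            and s.driver_reachable)

def on_domain_exit(seconds_since_takeover_request: float) -> str:
    """When the system leaves its domain, the driver gets a grace period;
    if they don't respond, the car must reach a safe state on its own."""
    if seconds_since_takeover_request <= TAKEOVER_GRACE_SECONDS:
        return "request takeover (driver may stop looking at phone)"
    return "execute minimal-risk maneuver (e.g. controlled stop)"
```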
The thing is, humans are horrible drivers, exacting a huge toll in lives and property every year.
We may already be at the point where we need to weigh the ethics of imperfect self-driving causing some accidents against humans causing more. We can clearly see the shortcomings of all self-driving technology so far, but is it ethical to block immature technology if it saves lives overall?
Maybe it’s the trolley problem: should we take the branch that leads to deaths, or the branch that leads to more deaths?
True
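One way to make that trolley framing concrete is to compare fatality rates directly. A rough sketch: the human rate below is NHTSA’s approximate recent US figure, while the self-driving rate is a pure placeholder, since nobody has a trustworthy fleet-wide number yet:

```python
# Rough expected-fatality comparison. The human rate approximates NHTSA's
# recent US figure; the AV rate is a PLACEHOLDER, not a measured number.
HUMAN_DEATHS_PER_100M_MILES = 1.3
AV_DEATHS_PER_100M_MILES = 0.9   # hypothetical assumption

MILES_PER_YEAR = 3.2e12  # ~3.2 trillion US vehicle miles traveled annually

def expected_deaths(rate_per_100m_miles: float, miles: float) -> float:
    return rate_per_100m_miles * miles / 100e6

human = expected_deaths(HUMAN_DEATHS_PER_100M_MILES, MILES_PER_YEAR)
av = expected_deaths(AV_DEATHS_PER_100M_MILES, MILES_PER_YEAR)
print(f"Humans: ~{human:,.0f}/yr vs placeholder AV rate: ~{av:,.0f}/yr")
# ~41,600 vs ~28,800: even a modest rate reduction would mean thousands
# of lives per year - the whole weight behind the 'other track' argument.
```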
Are you talking about Waymo vs. human drivers? It isn’t currently economical (and may never be) to roll that out globally. It would cost trillions and probably wouldn’t even be feasible everywhere.
Teslas aren’t autonomous, just driver assistants, so you can’t compare them. Otherwise you’d also have to include the Mercedes models (which, by the way, include the first commercial Level 3 car), BMWs, BYDs, …
It would be very unethical to allow companies to profit from unsafe technology that kills people.
No manufacturer does good self-driving yet.
Several manufacturers, including Tesla, make driver assistants that are more reliable than humans in at least some situations, possibly most of the time.
It’s easy to say you don’t want to let companies profit from unsafe technology that kills people, but what’s the other choice? If you send the trolley down the other track, you’re choosing different deaths at the hands of unsafe humans. We will soon be at the point – or are already there – where your choice kills more people. Is that really such an easy choice?