Tesla Robotaxi Freaks Out and Drives into Oncoming Traffic on First Day
(fuelarc.com)
from KayLeadfoot@fedia.io to technology@lemmy.world on 23 Jun 20:28
https://fedia.io/m/technology@lemmy.world/t/2339583
I saw the Tesla Robotaxi:
- Drive into oncoming traffic, getting honked at in the process.
- Signal a turn and then go straight at a stop sign with turn signal on.
- Park in a fire lane to drop off the passenger.
And that was in a single 22-minute ride. Not great performance at all.
Yea, I am not surprised given that the regular lane keep is still ghost braking when going under bridges.
Still, I am surprised how well they are doing, using only cameras.
Imagine if they used other sensors than just cameras like the competent companies!
No no no, you don’t get it! Humans only have eyes, so cars that only have eyes should perform just as well as humans! Disregard that humans don’t perform well in fog or rain or generally anything that isn’t good weather. Also disregard that to match our eyes’ resolution you’d need extremely high-resolution cameras that produce way too much data for current computers. Also disregard that most of the processing isn’t happening in our eyes but in our brains. And also disregard that the point usually being made to advocate for self-driving cars is that they should be better than humans!
Everybody knows that a good driver uses his ass.
And this is why DOGE gutted the Office for Vehicle Automation Safety at the NHTSA.
I thought that was to economize for expenses?!
So naturally they started with 5 employees in the smallest office of one of the smallest divisions of the NHTSA. Nooooo ulterior motive, nosiree
One of those American robot cars
I understood that reference
Robo drivers, amirite?
Sounds like a normal cab driver where I’m from. Need to turn off cabbie mode and turn on Sunday grandma mode.
The only difference being you can ask a human cabbie to slow down :,)
Depends. I had this perpetually angry cabby a few months back. When I asked him to slow down (my son has autism, is partially verbal), not only did he not slow down, he sped up, and this was in a snowstorm on the highway. Nothing I could do; if I said anything he went faster. We had a doctor’s appointment, so I could do nothing once we were out of that cab. I complained later, and the best the cab company would do is “not send that cab” again.
I’d rather have a car that drives better than a typical cabby or uber driver.
Waymo has arguably been there for a while now. I’ll Uber outside of their coverage area, and take the autonomous car within it. Every other Uber driver in my area is making late lane choices, tailgating, cutting people off, talking to me about how the world works, etc. The Waymos don’t do any of that shit.
Having experienced FSD, I can honestly say, Waymo’s LiDAR system is way better. It doesn’t do this terrifying shit.
It is probably being remotely driven from India and they just lost wifi for a minute.
To quote AVCH, "His controller disconnected."
<img alt="" src="https://sh.itjust.works/pictrs/image/7e541360-d5ef-405a-8ed8-db882c2506b6.jpeg">
AVCH
Hehe got it in one.
Some people will find him unbearable or a bit repetitive, but he really enjoys himself.
Favorite phrases of his seem to be
Apocalyptic Dingleberry
His name is John Sena
Woa Woa Woa.
Play stupid games win stupid prizes.
NPC move.
Need to know when to pull out.
You're not in the UK now.
AI = Always Indian.
AI = Actually Indians
This would get a normal person’s car impounded and their driver’s license revoked. Why can a company get away with it?
Systemic corruption.
I wouldn’t say corruption. I think it’s more that the law around the road was designed with a driver in mind, not with a company or even a robot. The consequences were designed to hurt the person at fault, because at the time only a person could drive.
It’s very convenient that corporations can both be people and not be people, depending on whatever outcome is best for them.
Regulatory capture
Regulatory decapitation
Elon has enough fuck-you money to pay off anyone who would’ve complained.
He also paid his way into a government position to shut down the government offices that opposed him.
They had so many cameras on this car, how many laws do you think each average driver breaks every 22 minutes?
It would be interesting if they could figure out why the car chose to do these specific things.
Oof, these highlighted parts from only one video are already enough for me. This looks very stressful, I don’t think I could finish a whole ride with one of these.
Don’t worry. It’ll get into a collision before you finish a whole ride.
You can tell it’s a Tesla because of the way it is.
lmfao
That’s better than I was expecting to be perfectly honest.
I’m pretty impressed with the technology, but clearly it’s not ready for field use.
Yeah, it’s a few years away from being ready. Plus the dumb shits need to backpedal on this “cameras for everything!” idiocy.
I’m surprised the taxis aren’t being driven remotely while Musk lies about their amazing AI or whatever.
That’s why I’m so impressed with how well it’s actually working. When they get off that really weird self-imposed restriction, it could be an interesting technology.
If we’re gonna let them on the road, I say that software should get points just like a driver, but when it gets suspended all the cars running that software get shut down.
How about we leave the driving to people, and not pre-alpha software?
There’s no accountability for this horribly dangerous driving, so they shouldn’t be on the road. Period.
Well that’s exactly what their post was about, adding accountability.
Was it? I didn’t read a single hint of adding accountability in the article.
But that begs the question: shouldn’t accountability be in place now, and not maybe at some point in the distant future? They are already on the road.
Not the article, the post from njordamir that you were directly replying to.
Again literally what that user was suggesting
Ah, Ok.
I agree with accountability, but not with the point system. That’s almost like a “three strikes” rule for drunk drivers.
That’s not really accountability, that’s handing out free passes.
Oh man, that would be amazing. If after 3 strikes, all drunk driving could be eliminated… If only we could be so lucky.
He’s not talking about a per-vehicle points system, he’s talking about a global points system for Tesla, Inc. If after a few incidents Tesla FSD essentially had its license revoked across the whole fleet, I mean, that’s pretty strict accountability, I’d say. That’s definitely not handing out free passes; it’s more like you get a few warnings and a chance to fix issues before the entire program is ended nationwide.
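The fleet-wide scheme being proposed above can be sketched in a few lines. This is purely a hypothetical illustration of the idea (points accrue to the software version, and crossing a threshold grounds every car running it); the class name, point values, and threshold are all made up.

```python
# Hypothetical sketch of the fleet-level "points" proposal: violations accrue
# to the software version rather than the individual car, and crossing a
# threshold suspends every vehicle running that version.
# All names and numbers here are invented for illustration.
from collections import defaultdict

SUSPENSION_THRESHOLD = 12  # points before a fleet-wide shutdown (arbitrary)

class FleetLicense:
    def __init__(self):
        self.points = defaultdict(int)   # software version -> accumulated points
        self.suspended = set()           # versions no longer allowed to drive

    def record_violation(self, version: str, points: int) -> None:
        self.points[version] += points
        if self.points[version] >= SUSPENSION_THRESHOLD:
            self.suspended.add(version)

    def may_drive(self, version: str) -> bool:
        return version not in self.suspended

fleet = FleetLicense()
fleet.record_violation("fsd-13.2", 6)   # e.g. driving into oncoming traffic
fleet.record_violation("fsd-13.2", 6)   # e.g. ignoring a forced-turn lane
print(fleet.may_drive("fsd-13.2"))      # False: the whole fleet is grounded
```

The key design choice is that the "license" belongs to the software, so one fix (or one suspension) applies to every car at once.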
I mean, if they weren’t as buggy as they clearly already are, then sure… do a point system.
But as they stand, they shouldn’t be on the road.
I don’t understand the complaint. I mean given their track record, with a system like this, they wouldn’t be on the road.
You know, unless it all worked.
That’s my point. Tesla (the company) has been notorious for pushing forward their deadly “self-driving” technology. It’s one of the worst automated systems on the planet, with plenty of tests, reports, and real-world incidents to raise red flags all over the place.
They SHOULD NOT be on the road, so are they only on the road because Musk was able to influence someone?
You seem really invested in making sure Teslas are off the road, but not at all interested in regulation that would keep all dangerous autonomous vehicles off the road. So… do you work for BMW, or Waymo?
…oh, that’s just the vietnam regional setting…
It could be the south or west of France too. Driving as if you were drunk is a universal skill.
…oh, i think you misunderstand me: that’s not impaired driving, that’s skillful navigation through the normal flow of traffic in sàigòn…
At least it’s not driving straight into a tree, I call that an improvement.
Man, I cannot figure out why that vehicle was turning. What is it trying to avoid? Why does it think there could be road there? Why doesn’t it try to correct its action mid way?
I’m really concerned about that last question. I have to assume that at some point prior to impact, the system realized it made a mistake. Surely. So why didn’t it try to recover from the situation? Does it have a system for recovering from errors, or does it just continue and say “well, I’ll get it next time, now on with the fatal crash”?
That wasn’t FSD. In the crash report it shows FSD wasn’t enabled. The driver applied a torque to the steering wheel and disengaged it. They were probably reaching into the back seat while “supervising”.
I covered that crash.
FSD is never enabled at the moment of impact, because FSD shuts off less than a second before impact, so that Tesla's lawyers and most loyal fans can make exactly the arguments you are making. Torque would be applied to the steering wheel when any vehicle departs the roadway, as the driver is thrown around like a ragdoll as they clutch the wheel. Depending on the car, torque can also be applied externally to the tires by rough terrain to shift the steering wheel.
No evidence to suggest the driver was distracted. Prove me wrong if you have that evidence.
Also welcome to the platform, new user!
Tesla counts any crash within 5 seconds of FSD disengagement as an FSD crash. Where is the cabin camera footage of the driver not being distracted?
Here is a video that goes over it: youtube.com/watch?v=JoXAUfF029I
Thanks for the welcome, but I’m not new, just a lemm.ee user.
You weren’t the user who posted that video, but you seem to be quite knowledgeable in this specific case…
Can you link that crash report? Or can you cite some confirmed details about the incident?
See this video: youtube.com/watch?v=JoXAUfF029I
I am entirely opposed to driving algorithms. Autopilot on planes works very well because it is used in open sky and does not have to make major decisions about moving in close proximity to other planes and obstacles. It’s almost entirely mathematical, and even then, in specific circumstances, it is designed to disengage and put control back in the hands of a human.
Cars do not have this luxury and operate entirely in close proximity to other vehicles and obstacles. Very little of the act of driving a car is math. It’s almost entirely decision making. It requires fast and instinctive response to subtle changes in environment, pattern recognition that human brains are better at than algorithms.
To me this technology perfectly encapsulates the difficulty in making algorithms that mimic human behavior. The last 10% of optimization to make par with humans requires an exponential amount more energy and research than the first 90% does. 90% of the performance of a human is entirely insufficient where life and death is concerned.
Investment costs should be going to public transport systems. They are more cost efficient, more accessible, more fuel/resource efficient, and far, far safer than cars could ever be, even with all human drivers. This is a colossal waste of energy, time, and money for a product that will not be par with human performance for a long time. Those resources could be making our world more accessible for everyone; instead they’re making it more accessible for no one and making the roads significantly more dangerous. Capitalism will be the end of us all if we let them. Sorry that train and bus infrastructure isn’t “flashy enough” for you. You clearly haven’t seen the public transport systems in Beijing. The technology we have here is decades behind and so underfunded it’s infuriating.
This technology purely exists to make human drivers redundant and put the money in the hands of big tech, and eventually a ruling class composed of politicians, risk-averse capitalists, and bureaucracy. There is no other explanation for robotaxis to exist. There are better solutions, like trains and metros, which can solve the movement of people from point A to point B easily. They just don’t come with the 3x-10x capital growth that making human drivers redundant will bring the big tech companies.
There is another reason, though, and it’s much simpler. Basic greed.
There are people who see the opportunity to make more money for themselves, so they’ll do it. When it comes to robotaxis, they’re not interested in class struggles, it’s not about politics; their interest in making human drivers redundant extends only so far as increasing their customer base. These aren’t Machiavellian schemers rubbing their hands together and cackling at their dark designs coming to fruition, it’s just assholes in suits whose one and only concern is “number go up.”
Even when it comes to their politics and to the class dynamics, their end goal is always the same. Number go up. They don’t care about what harm it could do. They’re not intent on deliberately doing more harm, they give no thought to doing less harm, they do not care. All that drives them, ever, is Number Go Up.
You got downvoted but you’re right. The only cabal at work here is basic human greed. Anytime you want to know why people do something, consider the motivation of the person and the incentives. Musk constantly talks about how autonomy will make his company worth “trillions”, and he wants that because he’ll keep maxing the high score in Billionaire Bastard Bacchanalia.
He can claim noble intentions, but as you said, the game is simply to make Number Go Up. That it causes untold harm to others isn’t even an afterthought.
Public transport systems are just part of a mobility solution, but it isn’t viable to have that everywhere. Heck, even here in The Netherlands, a country the size of a postage stamp, public transport doesn’t work outside of the major cities. So basically, outside of the cities, we are also relying on cars.
Therefore, I do believe there will be a place for autonomous driving in the future of mobility, and that it has the potential to reduce the number of accidents, traffic jams, and parking problems while increasing the average speed we drive at.
The only thing that has me a bit worried is Tesla’s approach to autonomous driving, fully relying on the camera system. Somehow, Musk believes a camera system is superior to human vision, while it’s not. I drive a Tesla (yeah, I know) and if the conditions aren’t perfect, the car disables “safety” features, like lane assist. For instance, when it’s raining heavily or when the sun is shining directly into the camera lenses. This must be a key reason for choosing Austin for the demo/rollout.
Meanwhile, we see what other manufacturers use and how they are progressing. For instance, BMW and Mercedes are doing well with their systems, which are a blend of cameras and sensors. To me, that does seem like the way to go to introduce autonomous driving safely.
There are usually buses from villages into the major cities though. I live in one, and there’s a bus every hour to a nearby city, from where I can then take a train. I wouldn’t say it’s that bad.
Depends on how far you live from the city I guess, where I live it’s 2 hours to major cities. But anyways, 1 hr wait to get somewhere doesn’t feel desirable to me. It just doesn’t provide enough coverage to fully replace a car.
I believe Austin was chosen because they’re fairly lax about the regulations and safety requirements.
Waymo already got the deal in Cali, and Cali seems much more strict. Austin is offering them faster time to market at the cost of civilian safety.
While I agree focusing on public transport is a better idea, it’s completely absurd to say machines can never possibly drive as well as humans. It’s like saying a soul is required or other superstitious nonsense like that. Imagine the hypothetical case in which a supercomputer that perfectly emulates a human brain is what we are trying to teach to drive. Do you think that couldn’t drive? If so, you’re saying a soul is what allows a human to drive, and may as well be saying that God hath uniquely imbued us with the ability to drive. If you do think that could drive, then surely a slightly less powerful computer could. And maybe one less powerful than that. So somewhere between a casio solar calculator and an emulated human brain must be able to learn to drive. Maybe that’s beyond where we’re at now (I don’t necessarily think it is) but it’s certainly not impossible just out of principle. Ultimately, you are a computer at the end of the day.
I never did say it wouldn’t ever be possible, just that it will take a long time to reach par with humans. Driving is culturally specific, even. The way rules are followed and practiced is often regionally different. There’s more than just the mechanical act itself.
The ethics of putting automation in control of potentially life threatening machines is also relevant. With humans we can attribute cause and attempted improvement, with automation its different.
I just don’t see a need for this at all. I think investing in public transportation more than reproduces all the benefits of automated cars without nearly as many of the dangers and risks.
This is one of the problems driving automation solves trivially when applied at scale. Machines will follow the same rules regardless of where they are which is better for everyone
You’d shit yourself if you knew how many life threatening machines are already controlled by computers far simpler than anything in a self driving car. Industrially, we have learned the lesson that computers, even ones running on extremely simple logic, just completely outclass humans on safety because they do the same thing every time. There are giant chemical manufacturing facilities that are run by a couple guys in a control room that watch a screen because 99% of it is already automated. I’m talking thousands of gallons an hour of hazardous, poisonous, flammable materials running through a system run on 20 year old computers. Water chemical additions at your local water treatment plant that could kill thousands of people if done wrong, all controlled by machines because we know they’re more reliable than humans
A machine can’t drink a handle of vodka and get behind the wheel, nor can it drive home sobbing after a rough breakup and be unable to process information properly. You can also update all of them at once instead of dealing with PSA campaigns telling people not to do something that got someone killed. Self-driving car makes a mistake? You don’t have to guess what was going through its head; it has a log. Figure out how to fix it? Guess what, they’re all fixed with the same software update. If a human makes that mistake, thousands of people will keep making that same mistake until cars or roads are redesigned and those changes have a way to filter through all of society.
This is a valid point, but this doesn’t have to be either/or. Cars have a great utility even in a system with public transit. People and freight have to get from the rail station or port to wherever they need to go somehow, even in a utopia with a perfect public transit system. We can do both, we’re just choosing not to in America, and it’s not like self driving cars are intrinsically opposed to public transit just by existing.
What are you anticipating for the automated driving adoption rate? I’m expecting extremely low as most people cannot afford new cars. We are talking probably decades before there are enough automated driving cars to fundamentally alter traffic in such a way as to entirely eliminate human driving culture.
In response to the “humans are fallible” bit, I’ll remark again that algorithms are very fallible. Statistically, even. And while lots of automated algorithms are controlling life-and-death machines, try justifying that to someone whose entire family was killed by an AI. How do they even receive compensation for that? Who is at fault? A family died. With human drivers we can ascribe fault very easily. With automated algorithms fault is less easily ascribed, and the public writ large is going to have a much harder time accepting that.
Also, with natural gas and other systems there are far fewer variables than a busy freeway. There’s a reason why it hasn’t happened until recently. Hundreds of humans all in control of large vehicles moving in a long line at speed is a very complicated environment with many factors to consider. How accurately will algorithms be able to infer driving intent based on subtle movement of vehicles in front of and behind it? How accurate is the situational awareness of an algorithm, especially when combined road factors are involved?
It’s just not as simple as it’s being made out to be. This isn’t a chess problem; it’s not a question of controlling train cars on set tracks with fixed timetables and universal controllers. The way cars exist presently is very, very open ended. I agree that if 80+% of road vehicles were automated it would have such an impact on road culture as to standardize certain behaviors. But we are very, very far away from that in North America. Most of the people in my area are driving cars from the early 2010s. It’s going to be at least a decade before any sizable number of vehicles are current-year models. And until then algorithms have these obstacles that cannot easily be overcome.
It’s like I said earlier: the last 10% of optimization requires an exponentially larger amount of energy and development than the first 90% does. It’s the same problem faced with other forms of automation. And a difference of 10% in terms of performance is… huge when it comes to road vehicles.
I’ve been saying for years that focusing on self driving cars is solving the wrong problem. The problem is so many people need their own personal car at all.
Exactly. Bring back trams, build less suburbs, better apartment housing. If we want a society reorganized around accessibility then let’s actually build that.
I always have the same thought when I see self driving taxi news.
“Americans will go bankrupt trying to prop up the auto/gas industries rather than simply building a train”.
And it’s true. So much money is being burned on a subpar and dangerous product. Yet we’ve just cut and cancelled extremely beneficial high speed rail projects that were overwhelmingly voted for by the people.
Fucking hell. We don’t let drunks drive taxis, and that goddamn thing drove like it was under the influence.
Does Tesla get sent tickets for traffic violations, or are we OK with this?
I’m sure their legal team is hard at work trying to find loopholes to circumvent any traffic infringements.
Depending on how exactly the laws are worded, they might even get away without paying fines. Many traffic codes define that only the driver (not the owner of the car) can be fined, and these robo taxis don’t have drivers.
The system of corporate veiling of responsibility is going to kill us all. What should happen is every single person who signed off on, voted for, or materially contributed to the implementation of this dangerous hardware should be prosecuted for criminal negligence. Gut the C-suite and the board.
You aren’t wrong.
Oh yes they do… The driver is Tesla, Inc. There’s no problem with charging a company fines; that’s easy. It is difficult to issue higher penalties, though: jail time, or license revocation. We’ll need to work out solutions for that; they should not get off free.
But we can certainly fine the driver…
That’s where law is not justice.
I do agree with your sentiment, but if the law defines a driver as a human, which is usually the case, then by definition Tesla cannot be the driver.
It could even be that the passenger sitting in the driver’s seat of a robotaxi would be defined as the driver.
And sure, these laws need to be adapted before robotaxis should be allowed to hit the streets.
I can see the headlines… “Tesla. De-funding the police!”
The rent seeking is so hard with this automate-the-profits bullshit.
The moment we perfect auto-taxis the service should be a public benefit and run by a nonprofit.
NYC mayoral candidate Mamdani is talking about making buses free, and that makes a radical shitload of sense.
Free autotaxis would be a boon for productivity and personal freedom, like AI promises to be but democratized for everybody rather than just the richest fraction of a percent.
People are going to take a shit in them. And ride them around for fun
Guess what? People already do that.
Thanks for pointing out how insane and disconnected the elon glazers are in believing their Teslas will drive off while they sleep to earn any kind of positive cash flow, then show up back home just in time to recharge for the commute to work, smelling fresh as a daisy.
I don’t see a problem with the second one. The bus is already doing the route, it costs basically nothing to have a few joy riders.
Imagine the horror!
Sure, somebody will. But the system will take note of that person, and then they don’t get to ride again. Or they have to pay a fine. Or whatever.
The Tesla is just following the regional driving style. Humans make the same mistakes at 15:06
/s
This but unironically. If this is the worst thing that happened on launch day then that seems pretty successful to me. This is the worst version of the robo taxi we will ever see.
What real world problem does this solve?
Task automation
Is it really task automation if it does it worse than a drunk human could have done it?
I didn’t claim Tesla has solved this automation problem.
Waymo is closer to human levels, but not yet considerably better.
Stonks?
Actually, lots. The issue is that if it doesn’t work it’s dangerous.
Nothing that a train + scooter / bicycle cannot solve imo
Remember guys, Tesla wants to have a living person sitting behind the wheel for “safety.” Don’t YOU want to get paid minimum wage to sit in a car all day, paying attention but doing nothing unless it’s about to crash, at which point you’ll be made the scapegoat for not preventing the crash?
Welcome to the future, you’re gonna hate it here.
I mean, compared to getting minimum wage flipping burgers in a hot kitchen, or picking vegetables in the sun, or working the register in a store in a bad neighborhood, or even restocking stuff at Walmart… yes, I would sit all day in an air conditioned car doing nothing but “paying attention”.
The unfortunate thing about people is we acclimatise quickly to the demands of our situation. If everything seems OK, the car seems to be driving itself, we start to pay less attention. Fighting that impulse is extremely hard.
A good example is ADHD. I have severe ADHD so I take meds to manage it. If I am driving an automatic car on cruise control I find it very difficult to maintain long term high intensity concentration. The solution for me is to drive a manual. The constant involvement of maintaining speed, revs, gear ratio, and so on mean I can pay attention much easier. Add to that thinking about hypermiling and defensive driving and I have become a very safe driver, putting about 25-30 thousand kms on my car each year for over a decade without so much as a fender bender. In an automatic I was always tense, forcing focus on the road, and honestly it hurt my neck and shoulders because of the tension. In my zippy little manual I have no trouble driving at all.
So imagine that but up to an even higher level. Someone is supervising a car which handles most situations well enough to make you feel like a passenger. They will switch off and stop paying attention eventually. At that point it is on them, not the car itself being unfit. I want self driving to be a reality but right now it is not. We can do all sorts of driver assist stuff but not full self driving.
Are you me? I love weaving through traffic as fast as I can… in a video game (like Motor Town behind the wheel). In real life I drive very safe and it is boring af for my ADHD so I do things like try to hit the apex of turns just perfect as if I was driving at the limit but I am in reality driving at a normal speed.
Part of living with severe ADHD is you don’t get breaks from having to play these games to survive everyday life, as you say it is a stressful reality in part because of this. You brought up a great point too that both of us know, when our focus is on something and activated we can perform at a high level, but accidents don’t wait for our focus, they just happen, and this is why we are always beating ourselves up.
We can look at self driving car tech and intuit a lot about the current follies of it because we know what focus is better than anyone else, especially successful tech company execs.
I’m glad other people understand the struggles required for daily life in this respect
You seem to have missed the point. Whether or not you think that would be an easy job, the whole reason you’d be there is to be the one that takes all the blame when the autopilot kills someone. It will be your name, your face, every single record of your past mistakes getting blasted on the news and in court because Elon’s shitty vanity project finally killed a real child instead of a test dummy. You’ll be the one having to explain to a grieving family just how hard it is to actually pay complete attention every moment of every day, when all you’ve had to do before is just sit there.
How about you pay attention and PREVENT the autopilot from killing someone? Like it’s your job to do?
This is sarcasm, right?
Expecting people to be able to behave like machines is generally the attitude that leads to crash investigations.
Maybe they’re just getting the wrong people to provide training data. The kind of people who drive Teslas do tend to drive like morons, so it would make sense.
Wow it’s almost like having an AI with a 2D view to go off of is a bad idea? Hmmm who’d have thunk it?
You’re telling me we’re not at the point where self driving cars are a thing? But a Tech CEO said so? Who am I supposed to believe if not a Tech CEO?
Self-driving cars are a thing; Waymo is doing pretty fine.
But you might be able to spot a few (dozen) teeny-tiny (huge, bulky and extremely obvious) differences between a Waymo and a Tesla cybercab.
Lie dare you claim Waymo is better than Tesla
(it is a lidar joke, Waymo has lidar sensors which makes it way safer)
Well obviously it’s been trained on human taxi driver behaviour
It found out who made it so it knew what to do
Tbh it’s not as bad as I was expecting. Those clips could definitely have resulted in an accident, but the system seems to actually work most of the time. I wonder if it couldn’t be augmented with lidar at this point to make it more reliable? A live stress test is ridiculously irresponsible and will definitely kill people, but at least it’s only Texans at risk (for now).
I was skeptical of the idea of robotaxis, but this kind of sold me on it. If they’re cheaper than human drivers, I might even be able to get rid of my car some day. It doesn’t change the fact that I’ll never get into one because the CEO is a nazi though.
Keep in mind this is a system with millions of miles under its belt, and it still doesn’t understand what to do with a forced left-turn lane in a very short trip in a fairly controlled environment with supremely good visual, road, and traffic conditions. LIDAR wouldn’t have helped the car here; there was no “whoops, confusing visibility”, it just completely screwed up and ignored the road markings.
It’s been in this state for years now: surprisingly capable, yet with horrible screw-ups noted frequently. They seem to be like 95% of the way there and stuck, with no real progress, just some willful denial convincing them to move forward anyway.
Parking in a fire lane to drop off a passenger just makes it seem more human.
Yea, this one isn’t an issue. If you are dropping off passengers, you are allowed to stop in a fire lane because that is not parking.
Which brings up an interesting question, when is a driverless car ‘parked’ vs. ‘stopped’?
When the engine is off?
Of course, how to tell this with an electric car?
When the motor drivers are energized?
Yeah, tell that to police who bust people with DUIs when the engine is still off.
They turned the empathy dial to 5%. Works great, right?
Watch that stock price fall… wheeeee
It already jumped up about 10% on Monday simply because the service launched. Even if the service crashes and burns, they’ll jump to the next hype topic like robots or AI or whatever and the stock price will stay up.
And it’s fallen back down about 30% of those gains already. Hype causes spikes… that’s nothing new.
Wow that turn signal sound is annoying. Why does it even need to make a sound in a car that’s supposed to be driving itself?
Important feedback for the passenger to ensure the car is actually following the rules. I would freak out at a corner if I couldn’t tell the car was signaling.
The rider shouldn’t have to care.
Naturally, simply being in a “self-driving” Tesla is reason enough to worry.
Hooray! I feel so safe. I think I’ll move to Texas so I can get obliterated by this taxi from the future.
Sounds like the indian guy driving it with a joystick was a bit hungover. You’d think they’d screen that thing at the entrance of the cubicle farm where all these AI folk drive these from. AI is just “anonymous indians” for elmo’s grifting kind.
What’s crazy is that the safety driver’s hair has gone completely grey in just two days.
That safety driver did not give a single fuck about driving on the wrong side of the road…
He must have seen so much worse to not even be flinching at that.
oof
The video really understates the level of fuck up that the car did there…
And the guy sitting there, just casually being OK with the car ignoring the forced left, going straight into oncoming lanes, and flipping the steering wheel all over the place because it has no idea what the hell just happened… I would not be just chilling there…
Of course, I wouldn’t have gotten in this car in the first place, and I know they cherry picked some hard core Tesla fans to be allowed to ride at all…
I’ve come to the realization, at least where I live, that a hell of a lot of accidents are prevented because of drivers who are actually aware and safe. This goes a bit beyond defensive driving IMO. I’m talking flat-out accident avoidance. There is an entire class of drivers who are not even aware of the accidents they have almost caused because someone else managed to avoid their stupid driving.
The majority of accidents that are likely to happen with these robocoffins will be single-car, or robocoffin meets robocoffin. The numbers on safety after a year will look acceptable because error-prone driving that doesn’t cause an accident is not reported in any official capacity.
I still maintain that the only safe way to have autonomous vehicles on the road is if they do not share the road with human drivers and have an open standard for communicating with other autonomous cars.
Sorry, no, that’s infrastructure.
So, Tesla Robotaxis drive like a slightly drunk and confused tourist with asshole driving etiquette.
Those right turns on red were like, “oh you get to go? That’s permission for me to go too!”
I know many people who believe that “right on red” means they have the right of way to make the turn and don’t have to stop first or yield to traffic.
I almost failed my first drivers test because I stopped at a stop sign instead of just yielding on a right turn. Still to this day it seems… wrong.
Why would you have failed? You are supposed to come to a complete stop at a stop sign.
No kidding, they fail you if you DON'T come to a complete stop.
Where I live, a few stop signs have a square white sign below them that says “EXCEPT FOR RIGHT TURN”, i.e. you don’t have to actually stop if you’re turning right. It’s incredibly fucked up - it works fine if you’re a local and you’re familiar with these signs, but people new to the area don’t know anything about it and if they’re on the crossroad they actually expect the other driver to stop since all they see is the backside of the octagon. It’s pointless to have these signs anyway since people usually roll through stop signs as it is.
We should arm pedestrians so we can shoot the subhuman filth who take rights on red.
So it emulates a standard BMW driver. Well done.
Still work to be done: it uses the blinkers.
At least they were used incorrectly to be just as unpredictable.
Woaw! Damn! The robotaxis are a dangerous fuck up!? That’s the most surprising thing that happened all year! There’s literally no way I could’ve seen that coming.
A man who can’t launch a rocket to save his life is also incompetent at making self driving cars? His mediocrity knows no bounds.
To be fair, Musk only has money and doesn’t do shit at either company.
It’s hilarious to me that Musk claims to work 100 hours a week but he’s the CEO of five companies. Even if the claim were true (and of course it isn’t) it means being the CEO of one of his companies is a 20-hour-a-week job at best.
He meddles. That much is apparent. The Cybertruck is obviously a top-down design, as evidenced by the numerous atrocious design compromises the engineers had to make just to make it real: from the glued-on “exoskeleton” to the hollowed ALUMINUM frame to the complete lack of physical controls to the default failure state turning it into a coffin to the lack of waterproofing, etc.
Seriously. I was better at rocketry than him by age twelve.
Watching anything that fElon fail sparks joy.
Imagine you’re the guy who invented SawStop, the table saw that can detect fingers touching the saw blade and immediately bury the blade in an aluminum block to avoid cutting off someone’s finger. Your system took a lot of R&D, it’s expensive, requires a custom table saw with specialized internal parts so it’s much more expensive than a normal table saw, but it works, and it works well. You’ve now got it down that someone can go full-speed into the blade and most likely not even get the smallest cut. Every time the device activates, it’s a finger saved. Yeah, it’s a bit expensive to own. And, because of the safety mechanism, every time it activates you need to buy a few new parts which aren’t cheap. But, an activation means you avoided having a finger cut off, so good deal! You start selling these devices and while it’s not replacing every table saw sold, it’s slowly being something that people consider when buying.
Meanwhile, some dude out of Silicon Valley hears about this, and hacks up a system that just uses a $30 webcam, an AI model that detects fingers (trained exclusively on pudgy white fingers of Silicon Valley executives) and a pinball flipper attached to a rubber brake that slows the blade to a stop within a second when the AI model sees a finger in danger.
This new device, the “Finger Saver,” doesn’t work very well at all. In demos with a hotdog, sometimes the hotdog is sawed in half. Sometimes the saw blade goes flying out of the machine into the audience. After a while, the company has the demo down so that when they do it in extremely controlled conditions, it does stop the hotdog from being sawed in half, but it does take a good few chunks out of it before the blade fully stops. It doesn’t work at all with black fingers, but the Finger Saver company will sell you some cream-coloured paint that you can paint your finger with before using it if your finger isn’t the right shade.
Now, imagine if the media just referred to these two devices interchangeably as “finger saving devices”. Imagine if the Finger Saver company heavily promoted their things and got them installed in workshops in high schools, telling the shop teachers that students are now 100% safe from injuries while using the table saw, so they can just throw out all safety equipment. When, inevitably, someone gets a serious wound while using a “Finger Saver” the media goes on a rant about whether you can really trust “finger saving devices” at all.
Anyhow, this is a rant about Waymo vs. Tesla.
Excellent work
Really good analogy. loved this
That was great. The first comparison that came to mind after reading it was that they are both a game of Russian roulette…
Waymo - you get one chamber loaded with a blank, might kill you if you get it.
Tesla - you get one empty chamber… And the gun is loaded by your worst enemy
Wow…
This put a smile on my face.
Awesome read, thanks!
Waymo is also a Silicon Valley AI project to put transit workers out of work. It’s another project to get AI money and destroy labor rights. The fact that it at least kind of works isn’t exactly helping my opinion of it. Transit is incredibly underfunded and misregulated in California/the USA, and robotaxis are a criminal misinvestment of resources.
Waymo is so much better, yeah. No problems with Waymo… except all the times they almost hit me.
Welcome to johnnycab
Cruise cars were already doing this and performed far better. GM is fucking braindead and pulled the plug like usual though.
Haaa, finally!! An AI taxi that behaves like a normal taxi driver. It must feel so refreshing.