Are self-driving cars that aren't human-controllable even legal? Does the government place that much trust in AI? AI can make mistakes, not that humans can't, but moreover, the only people who know exactly how the AI works are Tesla themselves, and who knows if they did a perfect job? Just the idea of leaving all control in the hands of a machine that will have to decide what to do if it encounters a situation where bodily harm is unavoidable, with no means to take control from it, is scary to say the least.
...and exactly where would that sort of thinking lead? I've heard that when the first automobiles appeared, there was a law requiring every car to be preceded by a pedestrian waving a red flag to signal that a car was coming through. Most people (then as well as now) don't know how their engine works either, and when I'm on an airplane or a train I don't expect the pilot/driver to be able to disassemble and reassemble the entire thing either. And you said it yourself: AI can make mistakes, but so can humans. But computers don't get tired after long shifts, they're better equipped for the multitasking of monitoring everything around the car at once, and as much as I hate to say it: they're far better at learning than humans.
It's not the same thing as an elevator, because an elevator is not AI, and if it falls it's because a mechanic/electrician didn't do their job or the manager of the building didn't bother to pay for maintenance work. You can't blame it on the elevator.
Elevators are much simpler, yes. But I feel you're overplaying the role of AI in a car. It is not a robot trying to mimic a human that happens to drive a car; it is making sure the car follows the traffic code and doesn't hit anything.
But if an AI fails to respond to a situation in a way that is acceptable to most humans (and this is a really tough decision to make when designing AI, because what in the hell do you make it do if someone is driving down the wrong side of the highway, there's a steep drop at the side, and it's either your life or theirs?), then people are going to blame the AI, and on top of that they're not likely to trust the company behind it anymore.
Not to say a real person would do any better at making that decision than a machine would because it's just a bad situation overall, but that's just an example.
That's what I already said: humans are biased towards humans. That's why the laws are currently what they are, and even if that were to change, Tesla would still put steering wheels in their cars, knowing full well that the human driver is the least safe part of the whole system.
Heck... I even read an article once about how there had been X number of traffic accidents involving AI since its introduction. Somewhere buried in the article, almost as an afterthought, was the mention that A) per kilometer driven, the AI would beat even the most careful human driver, and B) in all but two cases, it was actually the other car's fault. The result? All the focus was on manufacturers patching the programs so these instances couldn't happen again. But the couple hundred accidents caused by humans driving into AI-driven cars were forgotten... trivia at best.
I also like the line of questioning that gets aimed at AI. Yeah, indeed: WHAT IF someone is driving directly at you on an extremely high highway bridge with no wiggle room? Yeah, sure: I bet the AI isn't accounting for THAT situation! But if you're implying that human drivers would instinctively know what to do in that situation in a split second, then I've got news for you: we'd be equally dead.
If a real person made a bad decision while driving that resulted in bodily harm, they would likely lose their license, and that's the problem solved. If lots of people have self-driving cars and one of the cars makes a bad decision that results in an accident, then every single self-driving car from that brand is suspect. And if the cars have no way for a human to take control, it's doubly bad, as then the cars become essentially worthless. It only takes one accident like this to ruin it for everyone. At least if the car has a wheel, you could argue that the human behind it is also at fault for not noticing the issue and taking control (given that they had reasonable time to do so, of course), and having that option gives some extra safety and peace of mind, even if you are right that the driver will pay less attention when they are not actively driving.
One word: patches.
(Besides... I hope it's not as bad on your end, but in Belgium I can recall quite a few scandals where someone was re-granted their driver's license after some years, even after fatal accidents. So the "problem solved" part isn't true. Like it or not, some places are near impossible to reach without a car.)
The thing is that AI is such a complex thing that we can never really trust it fully with our lives, because intentionally or not, AI will make mistakes just like humans do. And if we can't trust it fully with our lives, it makes no sense whatsoever to take away that human control. Just like if someone is a bad driver and gets into accidents, it makes no sense to keep letting them drive. If you're a passenger in a car and the driver is falling asleep, you wouldn't just let them continue driving, right? You would ask them to pull over, either to get some sleep or to let you take the wheel. That could be applied as a metaphor to self-driving cars. Now imagine they tell you "no", continue driving, eventually fall asleep, and get into an accident. They'd lose their license just like that. What if that was a self-driving car and you couldn't take the wheel even if you wanted to, even as you see an accident about to occur? Always have a plan B so that this stuff is less likely to happen. And if it happens anyway, the driver is as much to blame as the AI.
I already responded to this one: by the time you see a potential accident and register it as such, it is simply too late to take control of the car to begin with. At best you can shout "STOP!!!" or something. But that is something that could easily be hardcoded into the program, whereas I honestly don't see how being able to take the wheel at any time and steer/speed up/slow down would INCREASE safety. I mean... if it did, there should be some record of car passengers who could have prevented accidents their drivers made. But I don't believe that happens.
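And hardcoding the shout really is the easy part: once the cabin microphone's speech recognizer hands you a transcript, the decision is a few lines of logic. A toy sketch in Python, just to illustrate (everything here is made up by me for the sake of argument: `is_stop_command`, `on_speech`, and the `brake` hook are not any real car's API; a real vehicle would use a proper speech recognizer and a certified brake controller):

```python
# Toy sketch: treat any recognized utterance containing a stop word
# as an emergency-brake command. All names here are hypothetical.

STOP_WORDS = {"stop", "halt", "brake"}

def is_stop_command(transcript: str) -> bool:
    """Return True if the transcribed speech contains a stop word."""
    words = transcript.lower().split()
    return any(word.strip("!?.,") in STOP_WORDS for word in words)

def on_speech(transcript: str, brake) -> None:
    """Callback fired whenever the cabin microphone transcribes speech."""
    if is_stop_command(transcript):
        brake()  # hand off to whatever controlled-stop routine the car has

# Example: a passenger shouting mid-manoeuvre
if __name__ == "__main__":
    on_speech("STOP!!!", brake=lambda: print("controlled emergency stop"))
```

The point being: a shouted "STOP" is a single, unambiguous input the car can act on instantly, unlike a panicked grab at the wheel.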
Oh, that reminds me: I can tell a personal tale about this one. Not too long ago, my girlfriend and I visited a friend of mine. We were parked in his street, which is rather narrow. My girlfriend didn't pay too much attention to this, so when we were about to head home, she started the car, put it in reverse...
...it was only a fraction of a second, but I really felt that we were accelerating backwards way too fast. I wanted to shout at her to stop, but the rear object detector was faster than me and started beeping that there was an object nearby...
...but even that sensor was too slow for my girlfriend to react to the situation, because we immediately hit the parked car at the other end of the street (which, incidentally, killed our loyal rear object sensor).
If vehicles were intelligently linked together and every car knew where every other car nearby was, this would be less of an issue, as the car would be able to spot a bad driver or a potential accident ahead of time and avoid it before it ever became a problem. Of course that doesn't help them detect nearby pedestrians, but in places where pedestrians cross, the speed limit should be low enough to avoid deaths even in a worst-case scenario. Maybe this will become a thing once everyone has a self-driving car, and at that point we could probably say the technology is safe enough that no human control would ever be required, just frequent maintenance to make sure all the sensors and such are in order. With every car knowing the position of every other car, and no potential bad drivers to cause accidents or the almost-accidents that force the AI into less-than-savory decisions, eliminating the human element completely might actually work. But as long as there are some human drivers on the road, there is always that risk of potential accidents that the AI may not be able to handle.
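To make the "every car knows where every other car is" idea concrete, here's a minimal sketch of how I imagine it (my own invention, not any real vehicle-to-vehicle protocol): each car periodically broadcasts its position and velocity, and a receiving car extrapolates both trajectories a few seconds ahead to flag a conflict long before its own sensors could see it:

```python
import math
from dataclasses import dataclass

# Hypothetical V2V beacon: in a real system this would be a radio
# message; here it's just a dataclass for illustration.
@dataclass
class Beacon:
    car_id: str
    x: float   # position, metres
    y: float
    vx: float  # velocity, metres/second
    vy: float

def min_future_gap(a: Beacon, b: Beacon, horizon_s: float = 5.0,
                   step_s: float = 0.1) -> float:
    """Smallest predicted distance between two cars over the horizon,
    assuming both keep their current velocity (a crude extrapolation)."""
    gap = float("inf")
    t = 0.0
    while t <= horizon_s:
        dx = (a.x + a.vx * t) - (b.x + b.vx * t)
        dy = (a.y + a.vy * t) - (b.y + b.vy * t)
        gap = min(gap, math.hypot(dx, dy))
        t += step_s
    return gap

# Example: oncoming car 100 m ahead in our lane, closing at 50 m/s combined
me    = Beacon("me",    0.0,   0.0,  25.0, 0.0)
other = Beacon("other", 100.0, 0.0, -25.0, 0.0)
if min_future_gap(me, other) < 10.0:
    print("predicted conflict -- slow down / plan evasive action early")
```

With that kind of shared picture, the "bridge with no wiggle room" scenario gets defused seconds earlier, before it is ever a split-second decision.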
But... cars (even ones that aren't self-driving) already scan for objects in their proximity. They don't have to be linked together for that.
I've read somewhere that linking AI cars together somewhat solves a seemingly totally different problem: traffic jams. By adjusting their speed before the jam, they effectively stop both the linked and the manually driven cars behind them from making the jam grow, which ultimately dissolves it before it can get large. But that's really a different topic.
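The mechanism, as I loosely remember it (this is my own reconstruction, not the article's actual model, and all the numbers are made up), is that a linked car told about a jam ahead lowers its speed early so it arrives roughly as the jam clears, instead of driving full speed into the back of the queue:

```python
# Loose reconstruction of the jam-damping idea: if a jam lies
# dist_to_jam_m metres ahead and is expected to clear in jam_clear_s
# seconds, drive at the speed that gets you there as it clears,
# instead of at full speed into the queue.

def smoothed_speed(dist_to_jam_m: float, jam_clear_s: float,
                   cruise_mps: float) -> float:
    """Speed (m/s) that spreads the remaining distance over the time
    the jam needs to dissolve, capped at normal cruising speed."""
    if jam_clear_s <= 0:
        return cruise_mps  # jam already gone, resume cruising
    return min(cruise_mps, dist_to_jam_m / jam_clear_s)

# Example: jam 2 km ahead, expected to clear in 120 s, cruising at 33 m/s
print(smoothed_speed(2000.0, 120.0, 33.0))  # ~16.7 m/s instead of 33
```

And since the manually driven cars stuck behind the linked car can't overtake it in a jam, they're forced down to the same smoothed speed, which is why even a fraction of linked cars would keep the queue from growing.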