# Great Job, TESLA, we appreciate this!



## Alexander1970 (Jun 25, 2019)

https://www.reddit.com/r/IdiotsInCa...uldnt_believe_it_asleep_in_heavy_friday_rush/

https://twitter.com/SethWageWar/status/1102712751313498112/video/1



Idiots.

I love that technology from TESLA.
Go on like this and we'll have a "sleeping" and "smartphoning" car-driving society.


----------



## Xzi (Jun 25, 2019)

Yeah I doubt these people have the level of autonomous driver assist necessary to be asleep at the wheel.  For anyone unaware, there are five levels of driver assist available in Teslas, level 5 being fully autonomous, but it's extremely pricey (on top of the already-high price of the cars themselves).  The crazy thing is that, as far as I know, neither of these incidents resulted in a crash, so the technology is definitely improving.  The sooner we can all sleep in our cars without having to worry about human error causing our deaths, the better.


----------



## Alexander1970 (Jun 25, 2019)

Xzi said:


> Yeah I doubt these people have the level of autonomous driver assist necessary to be asleep at the wheel.  For anyone unaware, there are five levels of driver assist available in Teslas, level 5 being fully autonomous, but it's extremely pricey (on top of the already-high price of the cars themselves).  The crazy thing is that, as far as I know, neither of these incidents resulted in a crash, so the technology is definitely improving.  The sooner we can all sleep in our cars without having to worry about human error causing our deaths, the better.



Do you know "Terminator - Rise of the Machines"?


----------



## Xzi (Jun 25, 2019)

alexander1970 said:


> Do you know "Terminator - Rise of the Machines"?


Rogue AI can potentially exterminate us with or without autonomous cars, I think it's worth the risk if it means no more drunk/high drivers on the road.


----------



## Alexander1970 (Jun 25, 2019)

Xzi said:


> I think it's worth the risk if it means no more drunk/high drivers on the road.



As long as they stay *anonymous* and get therapy.


----------



## notimp (Jun 25, 2019)

Seems like an engineering problem. If the driver is asleep, the car should probably stop - if that's safe? Or rather: beep like a mofo, so the driver wakes up again. (Doesn't work if the driver is deaf; build in a contingency.)

The safety feature so far is "you have to keep your hands on the wheel". The engineers apparently haven't considered that, well, you can do that while sleeping.
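Roughly that escalation can be sketched in a few lines of Python. The thresholds and action names here are made up for illustration, and this is obviously not Tesla's actual logic:

```python
# Hypothetical driver-attention escalation: nag, then alarm, then a
# controlled stop. Thresholds are illustrative, not any real vendor's values.

def next_action(seconds_without_input: float) -> str:
    """Map the time since the last driver input to an escalating response."""
    if seconds_without_input < 15:
        return "none"              # driver appears engaged
    if seconds_without_input < 30:
        return "visual_nag"        # flash a reminder on the screen
    if seconds_without_input < 45:
        return "audible_alarm"     # beep loudly (plus seat vibration for deaf drivers)
    return "controlled_stop"       # slow down, hazard lights on, pull over if safe

print(next_action(10))   # none
print(next_action(40))   # audible_alarm
print(next_action(90))   # controlled_stop
```

The last tier matches what Musk later described as Autopilot's default behavior when driver input stops entirely.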


----------



## DBlaze (Jun 25, 2019)

While I agree they should build in some safety measure for this, I can't really blame it all on Tesla, users of anything are sometimes top tier retarded.


----------



## notimp (Jun 25, 2019)

*wakes up in Oregon* (/another state), as the first Twitter comment suggests, is probably what would happen though. If the battery doesn't run out first... in which case I'm pretty sure the car would start beeping. 

On interstates the Teslas are already supposed to be "pretty safe", as in "can follow the road markings".

--------------------- MERGED ---------------------------



DBlaze said:


> While I agree they should build in some safety measure for this, I can't really blame it all on Tesla, users of anything are sometimes top tier retarded.


Engineers still should have thought of that. 

It's not an easy thing to fix though. You don't want to "surveil" users, and if you annoy them with a beep too often (false positives), they start to complain.


----------



## DBlaze (Jun 25, 2019)

notimp said:


> *wakes up in oregon* (/another state) as the first twitter comment suggests is probably what would happen though.  If the battery doesnt run out first.. in which case I'm pretty sure the car would start beeping.
> 
> On interstates the teslas are supposed to be "pretty safe" as in "can follow the roadmarkings.. " already..
> 
> ...


They probably did, and are still thinking about it, hence the hands-on thing. I personally wouldn't be able to keep my hands on the wheel the way it periodically requires if I were falling asleep, but that's just me.
Then again, I would also not sit behind the wheel if I were tired enough to actually fall asleep while driving; if I notice myself drifting off, I stop at the first place I can.

It's a hard thing to crack, yeah. Obviously the hands-on thing isn't enough, and other measures are risky/annoying for users.
Maybe there should be a mandatory button you actually need to press instead of the hands-on check, or a combination of both.

But what do I know about car engineering; not much, if anything at all lol


----------



## notimp (Jun 25, 2019)

Tesla CEO Musk to the rescue with more details:

> Exactly. Default Autopilot behavior, if there's no driver input, is to slow gradually to a stop & turn on hazard lights. Tesla service then contacts the owner. Looking into what happened here.
> - Elon Musk (@elonmusk), December 3, 2018

After a first incident of that sort.


----------



## Taleweaver (Jun 25, 2019)

Sorry, but I fail to see the problem here.

It's not that I don't understand the issue, but you've got to approach these situations rationally rather than emotionally. And that paints a picture that is arguably much darker, but can also be seen as optimistic.

Let's start with the obvious: drunk driving is prohibited and highly frowned upon. Why? Not because drunk people actively drive into sidewalks or cause accidents (that happens, of course, but for that you've got to be far more drunk than necessary), but because even small amounts of alcohol slow your reaction time to the point where it is simply inadequate to react properly to upcoming situations (assuming a normal driving speed).
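To put a rough number on that: standard stopping-distance kinematics show how much extra road even half a second of added reaction time costs at highway speed. The deceleration value below is an illustrative dry-road figure, not a measurement:

```python
def stopping_distance_m(speed_kmh: float, reaction_s: float,
                        decel_ms2: float = 7.0) -> float:
    """Distance covered during the reaction delay plus the braking distance.

    Assumes constant deceleration; ~7 m/s^2 is a typical dry-road figure.
    """
    v = speed_kmh / 3.6                      # convert km/h to m/s
    return v * reaction_s + v * v / (2 * decel_ms2)

sober = stopping_distance_m(100, reaction_s=1.0)
impaired = stopping_distance_m(100, reaction_s=1.5)
print(round(impaired - sober, 1))  # 13.9 extra meters from +0.5 s reaction time
```

Fourteen meters is several car lengths, which is the whole argument against impaired (or passively monitoring) drivers in one number.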

And guess what: the same happens when your car drives for you. Sure, _in theory_ the driver can instantly spot any mistake the computer makes, take over the wheel, and stop and/or maneuver the car to avoid a collision, driving off the road, or whatever else the computer attempted that wasn't safe. In reality, your mind numbs down when it isn't actively processing the input. It's the same risk truck drivers face when they drive the same highway for hours on end: the passivity of monotony can put them into a slow-motion trance (they still see the road and can thus 'wake up' whenever something requiring a maneuver draws near, but their reaction is seriously diminished). I haven't read much about traffic incidents involving computer-driven cars(1), but in the one instance I did read about, it was exactly as I would have predicted: the driver had become so used to the computer driving spotlessly that he completely failed to react when that was needed.

So all in all, the question is wrong. It shouldn't be "why is that guy being dangerous sleeping behind his self-driving car?", but rather "why do self-driving cars still have a wheel?".


The answer to that latter question is rather interesting. It is that Tesla is smart enough to know that they're selling cars to _humans_, and that there will be _humans_ on the side of the road (the OP video nicely proves that last one, btw). Whenever we get in an elevator to go more than, say, two floors up, we pretty much trust the machine with our lives (if it fails, we're dead). But because we're so used to it and the potential danger isn't visible, our guard is down. Meanwhile, we've driven cars manually for many, many years now. Surely we can't blindly trust a machine to drive better than ourselves... right?

The sad truth is that it can. And it is busy doing so. But that's thinking rationally. Emotionally, each and every driver claims to have above-average driving skill. And I'm sure each and every driver will also put their own skills above the computer's, so there is no way anyone is going to buy a car (yet) without a wheel.
The same goes for bystanders. That little clip could've been part of a horror movie. I won't deny it: it really looks scary to me. And it does so because I've seen people drive cars my entire life, and I've driven quite a bit since becoming an adult. But I bet this sort of thing could become pretty common to the next generation...



(1): which, considering the number of kilometers traveled, actually puts them well above human drivers


----------



## Alexander1970 (Jun 25, 2019)

Taleweaver said:


> Sorry, but I fail to see the problem here.
> 
> It's not that I don't understand the issue, but you've got to approach these situations rationally rather than emotionally. And that paints a picture that is arguably much darker, but can also be seen as optimistic.
> 
> ...



Do you know the Pixar/Disney movie "WALL·E"?

Then maybe you'll understand MY problem.


----------



## Kioku_Dreams (Jun 25, 2019)

...and yet it's far safer than a lot of the people driving like complete assholes. Meh, I fail to see an issue here. Read the reddit thread; there seems to be a better comprehension of the situation there than what can be grasped here in temptown.

People fall asleep while driving all of the time. Maybe the issue is the complacency that will arise from this technology?


----------



## Alexander1970 (Jun 25, 2019)

...and they don't see where it leads......
As predicted in "old" movies.......

A brainless, dependent, unthinking human society.
Without resistance.....


----------



## Taleweaver (Jun 25, 2019)

alexander1970 said:


> Do you know the Pixar/Disney Movie "WALL·E" ?
> 
> Then you mabye understand MY problem.


People getting so spoiled by technology that the most basic tasks become impossible achievements because they're all so obese? 
Yeah, I've seen it. But if you think this is a telltale sign, then I don't believe you. I mean... I'm driven to work on a daily basis (I take the train), but commuters like me aren't any fatter than people who drive a car.


----------



## 8BitWonder (Jun 25, 2019)

alexander1970 said:


> ...and they don´t see where it leads......
> As predicted in "old" Movies.......
> 
> A brainless,unindependent and not thinking human society.
> Without resistance.....


Self-driving cars aren't going to lead to a brainless society, they're only a means of transportation.
Additionally, I think you're forgetting that a lot of people will likely not opt for or use auto-piloting technology for cars.


----------



## dAVID_ (Jun 27, 2019)

A solution would be to put a camera in the car and use an AI-based face recognition system to determine whether the driver is asleep. The only problem with this solution is that it raises privacy concerns.
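For what it's worth, that is roughly how in-cabin drowsiness detectors tend to work: a face-landmark model finds the eyes, and a simple "eye aspect ratio" (EAR) heuristic flags when they stay closed too long. A minimal sketch, assuming the six standard eye landmarks have already been extracted by some face model (the camera and model parts are omitted):

```python
from math import dist

def eye_aspect_ratio(eye: list[tuple[float, float]]) -> float:
    """EAR from the 6 standard eye landmarks: (sum of vertical gaps) / (2 * width).

    Landmark order: p1 outer corner, p2/p3 upper lid, p4 inner corner,
    p5/p6 lower lid. Low values mean the eye is (nearly) closed.
    """
    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2 * dist(p1, p4))

def looks_drowsy(ears: list[float], threshold: float = 0.2,
                 min_frames: int = 48) -> bool:
    """Flag drowsiness when EAR stays below threshold for min_frames
    consecutive video frames (~2 seconds at 24 fps)."""
    run = 0
    for ear in ears:
        run = run + 1 if ear < threshold else 0
        if run >= min_frames:
            return True
    return False
```

A blink only dips the EAR for a handful of frames, which is why the detector requires a sustained run below the threshold rather than a single low reading.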


----------



## ThoD (Jun 27, 2019)

Gonna just drop this here...


Basically, the fewer people drive, the safer and better things will be. Self-driving cars are a good thing; to be honest they shouldn't even allow human input, but we aren't at that point yet...

As for the OP, I don't see anything wrong. It's not like it's high-speed driving; the AI is more than capable of handling slow driving safely, so why all the debate? :/


----------



## Alexander1970 (Jun 27, 2019)

ThoD said:


> Gonna just drop this here...
> 
> 
> As for OP, I don't see anything wrong, it's not like it's high speed driving, the AI is more than capable of handling slow driving safely, so why all the debate?:/




Not about that, but about people becoming more and more unable to do things *themselves*.


----------



## The Real Jdbye (Jun 27, 2019)

8BitWonder said:


> Self-driving cars aren't going to lead to a brainless society, they're only a means of transportation.
> Additionally, I think you're forgetting that a lot of people will likely not opt for or use auto-piloting technology for cars.


Not everyone is going to switch to autopilot cars right now, but future generations of drivers will likely not even bother learning how to drive if they can just buy a self-driving car that does it for them for not much more. Eventually every car will be self-driving as that becomes the mainstream thing, and manually driven cars will be seen as archaic.


Taleweaver said:


> Sorry, but I fail to see the problem here.
> 
> It's not that I don't understand the issue, but you've got to approach these situations rationally rather than emotionally. And that paints a picture that is arguably much darker, but can also be seen as optimistic.
> 
> ...


Are self-driving cars that aren't human-controllable even legal? Does the government place that much trust in AI? AI can make mistakes, not that humans can't, but moreover, the only people who know exactly how the AI works are Tesla themselves, and who knows if they did a perfect job? Just the idea of leaving all control in the hands of a machine that will have to decide what to do when bodily harm is unavoidable, with no means to take control from it, is scary to say the least.

It's not the same thing as an elevator because an elevator is not AI and if it falls down it's because a mechanic/electrician didn't do their job or the manager of the building didn't bother to pay a mechanic/electrician for maintenance work. You can't blame it on the elevator.

But if an AI fails to respond to a situation in a way that is acceptable to most humans (and this is a really tough decision to make when designing AI because what in the hell do you make it do if someone is driving down the wrong side of the road on the highway and it's a steep fall down the side and it's either your life or theirs?) then people are gonna blame the AI and additionally people are not likely to trust the company behind it anymore.
Not to say a real person would do any better at making that decision than a machine would because it's just a bad situation overall, but that's just an example.

If a real person made a bad decision when driving that resulted in bodily harm, they would likely lose their license, and that's the problem solved. If lots of people have self-driving cars and one of the cars makes a bad decision that results in an accident, then every single self-driving car from that brand is suspect. And if there is no way for a human to take control, it's doubly bad, as the cars then become essentially worthless. It only takes one accident like this to ruin it for everyone. At least if the car has a wheel, you could argue that the human behind it is also at fault for not noticing the issue and taking control (given that they had reasonable time to do so, of course), and having that option gives some extra safety and peace of mind, even if you are right that the driver will pay less attention when not actively driving.

The thing is that AI is such a complex thing that we can never really trust it fully with our lives, because, intentionally or not, AI will make mistakes just like humans do. And if we can't trust it fully with our lives, it makes no sense whatsoever to take away that human control. Just like if someone is a bad driver and gets into accidents, it makes no sense to keep letting them drive. If you're a passenger in a car and the driver is falling asleep, you wouldn't just let them continue driving, right? You would ask them to pull over, either to get some sleep or to let you take the wheel. That could be applied as a metaphor to self-driving cars. Now imagine that they tell you "no", continue driving, eventually fall asleep and get into an accident. They'd lose their license just like that. What if that was a self-driving car and you couldn't take the wheel even if you wanted to, even as you watch an accident about to occur? Always have a plan B so that this stuff is less likely to happen. And if it happens anyway, the driver is as much to blame as the AI.

If vehicles were intelligently linked together and every car knew where every other nearby car was, this would be less of an issue, as the car would be able to spot a bad driver or a potential accident ahead of time and avoid it before it ever becomes a problem. Of course, that doesn't help them detect nearby pedestrians, but in places where pedestrians cross, the speed limit should be low enough to avoid deaths even in a worst-case scenario. Maybe this will become a thing once everyone has a self-driving car, and at that point we could probably say the technology is safe enough that no human control would ever be required, just frequent maintenance to make sure all the sensors and such are in order. With every car knowing the position of every other car, and no potential bad drivers causing accidents or near-accidents that force the AI into less than savory decisions, eliminating the human element completely might actually work. But as long as there are *some* human drivers on the road, there is always the risk of accidents that the AI may not be able to handle.


----------



## Alexander1970 (Jun 27, 2019)

> The thing is that AI is such a complex thing that we can never really trust it fully with our lives because intentionally or not AI will make mistakes just like humans do. And if we can't trust it fully with our lives it makes no sense whatsoever to take away that human control. Just like if someone is a bad driver and gets into accidents it makes no sense to keep letting them drive. If you're a passenger in a car and you have a license and the driver is falling asleep you wouldn't just let them continue driving right? You would ask them to pull over either to get some sleep or to let you take the wheel. That could be applied as a metaphor to self driving cars. Now imagine that they tell you "no" and continue driving and eventually fall asleep and get into an accident. They'd lose their license just like that. Always have a plan B so that this stuff is less likely to happen.



A foresighted, farsighted human he is.


----------



## Searinox (Jun 27, 2019)

Can you imagine if someone had a heart attack and died in their car, and their car would drive them on until either they show up at someone's place dead or go missing and are discovered much later at fuck knows where the car eventually decided to stop or ran out of gas? Creepy!


----------



## Taleweaver (Jun 28, 2019)

The Real Jdbye said:


> Are self driving cars that aren't human controllable even legal? Does the government place that much trust in AI? AI can make mistakes, not that humans can't, but moreover, the only people that know exactly how the AI works is Tesla themselves, and who knows if they did a perfect job? Just the idea of leaving all control in the hands of a machine that will have to decide what to do if it encounters a situation where bodily harm is unavoidable and not having any means to take control from it is scary to say the least.


...and where exactly would that sort of thinking lead? I've heard of a law from the era of the first automobiles requiring that every car be preceded by a pedestrian waving a red flag to announce it coming through. Most people (then as well as now) don't know how their engine works either, and when I'm in an airplane or a train, I don't think the pilot/driver can disassemble and reassemble the entire thing either. And you put it yourself: "AI can make mistakes, but humans as well". But computers don't get tired after long shifts, are better equipped for the multitasking that is checking all positions, and, as much as I hate to say it, they're far better at learning than humans.


The Real Jdbye said:


> It's not the same thing as an elevator because an elevator is not AI and if it falls down it's because a mechanic/electrician didn't do their job or the manager of the building didn't bother to pay a mechanic/electrician for maintenance work. You can't blame it on the elevator.


Elevators are much simpler, yes. But I feel you're overplaying the role of AI in a car. It is not a robot trying to mimic a human that happens to drive a car; it is making sure that the car lives up to the traffic code as well as ensure that it doesn't hit anything.



The Real Jdbye said:


> But if an AI fails to respond to a situation in a way that is acceptable to most humans (and this is a really tough decision to make when designing AI because what in the hell do you make it do if someone is driving down the wrong side of the road on the highway and it's a steep fall down the side and it's either your life or theirs?) then people are gonna blame the AI and additionally people are not likely to trust the company behind it anymore.
> Not to say a real person would do any better at making that decision than a machine would because it's just a bad situation overall, but that's just an example.


That's what I already said: humans are biased towards humans. That's why the laws currently are what they are, and even if that changed, Tesla would still put wheels in their cars, well knowing that the human driver is the least safe part of the whole.
Heck... I once read an article about X number of traffic accidents involving AI since its introduction. Buried somewhere in the article, almost as an afterthought, was the mention that A) for the number of kilometers driven per accident, the AI would beat even the most careful human driver, and B) in all but two cases, it was actually the other car's fault. The result? All the focus was on manufacturers patching the programs so these instances couldn't happen again. But the couple hundred accidents caused by humans driving into AI-driven cars were forgotten... simple trivia at best.

I also like the line of questioning that befalls on AI. Yeah, indeed: WHAT IF someone is driving directly at you on an extremely high highway bridge with no wiggle room? Yeah, sure: I bet the AI isn't accounting for THAT situation! But if you're implying that human drivers would instinctively know what to do in that situation in a split second, then I've got news for you: we'd be equally dead in that situation.




The Real Jdbye said:


> If a real person made a bad decision when driving that resulted in bodily harm they would likely lose their license and that's the problem solved. If lots of people have self driving cars and one of the cars makes a bad decision that results in an accident then every single self driving car from that brand is suspect. And if the cars have no way for a human to take control it's doubly bad as then the cars become essentially worthless. It only takes one accident like this to ruin it for everyone. At least if the car has a wheel you could argue that the human behind the wheel is also at fault for not noticing the issue and taking control (given that they had reasonable time to do so of course) and having that option gives some extra safety and peace of mind even if you are right in that the driver will pay less attention when they are not actively driving.


One word: patches.

(Besides... I hope it's not as bad on your end, but in Belgium I can recount quite a few scandals where someone's driver's license was reinstated after some years, even after fatal accidents. So "problem solved" isn't true. Like it or not, some places are near impossible to reach without a car.)




The Real Jdbye said:


> The thing is that AI is such a complex thing that we can never really trust it fully with our lives because intentionally or not AI will make mistakes just like humans do. And if we can't trust it fully with our lives it makes no sense whatsoever to take away that human control. Just like if someone is a bad driver and gets into accidents it makes no sense to keep letting them drive. If you're a passenger in a car and the driver is falling asleep you wouldn't just let them continue driving right? You would ask them to pull over either to get some sleep or to let you take the wheel. That could be applied as a metaphor to self driving cars. Now imagine that they tell you "no" and continue driving and eventually fall asleep and get into an accident. They'd lose their license just like that. What if that was a self driving car and you couldn't take the wheel even if you wanted to even as you see an accident is about to occur. Always have a plan B so that this stuff is less likely to happen. And if it happens anyway, the driver is as much to blame as the AI.


I already responded to this one: by the time you see a potential accident and register it as such, it is simply too late to take control of the car to begin with. At best you can shout "STOP!!!" or something. But that is something that could easily be hardcoded into the program, whereas I honestly don't see how being able to take the wheel at any time and steer/speed up/slow down would INCREASE safety. I mean... if it did, there should be some record of car passengers who could have prevented accidents that the driver caused. But I don't believe that happens.

Oh, that reminds me: I can tell a personal tale about this one. Not too long ago, my girlfriend and I visited a friend of mine. We were parked in his street, which is rather narrow. My girlfriend didn't pay much attention to this, so when we were about to head home, she started the car, put it in reverse...

...it was only a fraction of a second, but I really felt we were accelerating backwards way too fast. I wanted to shout at her to stop, but the rear object detector was faster than me and started beeping that there was an object nearby...

...but even that sensor was too slow for my girlfriend to react to the situation. We immediately hit the parked car at the other end of the street (which incidentally killed our loyal rear object sensor  ).



The Real Jdbye said:


> If vehicles were intelligently linked together and every car knew where every other car nearby was this would be less of an issue as the car would be able to spot a bad driver or a potential accident ahead of time and avoid it before it ever becomes a problem. Course that doesn't help them detect nearby pedestrians but in places that have pedestrians crossing, the speed limit should be low enough to avoid deaths even in a worst case scenario. Maybe this will become a thing once everyone has a self driving car and at that point we could probably say the technology is safe enough that no human control would ever be required, just frequent maintenance to make sure all the sensors and such are in order. With every car knowing the position of every other car and no potential bad drivers to cause accidents or almost-accidents that result in the AI making less than savory decisions, eliminating the human element completely might actually work. But as long as there are *some* human drivers on the road there is always that risk of potential accidents that the AI may not be able to handle.


But... cars (not even self-driving ones) already scan for objects in their proximity. They don't have to be linked together for that.

I've read somewhere that 'linking AI cars' together somewhat solves a seemingly unrelated problem: traffic jams. By adjusting their speed before the jam, they effectively stop both the linked and the manually driven cars behind them from making the jam grow, which ultimately dissolves it before it can become large enough. But that's really a different topic.
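That jam effect is easy to see in a toy model: if each follower overreacts to the car ahead, a small braking shock grows as it travels down the chain (the classic "phantom jam"); if each follower smooths its response, the shock dies out. The gain values below are purely illustrative:

```python
# Toy illustration of stop-and-go wave damping in a chain of following cars.
# Each car reacts to the speed drop of the car ahead, scaled by a "gain":
# gain > 1 models a nervous human who brakes harder than the car in front,
# gain < 1 models a smoothing controller that absorbs part of the shock.

def propagate(shock_kmh: float, cars: int, gain: float) -> float:
    """Speed drop felt by the last car after the shock passes down the chain."""
    for _ in range(cars):
        shock_kmh *= gain
    return shock_kmh

print(round(propagate(20.0, 10, 1.1), 1))  # 51.9 -> the 20 km/h dip amplifies
print(round(propagate(20.0, 10, 0.8), 1))  # 2.1  -> the dip dissolves
```

With human-like overreaction, a 20 km/h dip ten cars back has more than doubled (enough to stop traffic); with smoothing, it has nearly vanished, which is the mechanism behind the jam-dissolving claim.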


----------



## Alexander1970 (Jul 9, 2019)

Self-ignition included.
Also not bad.



_*Tesla claims that this is not a systemic defect, i.e. not a general weakness of the Model S vehicles. In short: it should not happen again, according to the Tesla experts.*_


----------



## The Real Jdbye (Jul 9, 2019)

alexander1970 said:


> Self-Ignition included.
> Also not bad.
> 
> 
> ...



Damn Chinese Tesla knockoffs


----------



## osm70 (Jul 9, 2019)

Honestly, the one who was recording the video was in much higher danger than the sleeping "driver".


----------



## Alexander1970 (Aug 9, 2019)

*Tesla exaggerates wildly in crash test results*

Source: Nadine Dressler
The electric car manufacturer Tesla should finally stop citing the transport safety authority NHTSA to present its vehicles as the safest cars on the market, because the scores simply do not support that claim. The authority is also indignant because this is not the first incident of its kind.

https://www.plainsite.org/documents/fnrhg/tesla-nhtsa-foia-response/
(.pdf - you can download or read it online)


The FTC is brought in
Given the prehistory, even this is a downright mild response, because Tesla and the NHTSA already clashed in 2013. Back then the same problem arose - except that at the time Tesla was advertising its Model S by claiming the car had set a new occupant safety record.

Because Tesla once again pairs the NHTSA ratings with exaggerated advertising claims, the agency no longer wants to limit itself to asking the company for corrections. Instead, the matter has now been handed over to the US FTC, which investigates misleading advertising and any resulting violation of competition law.





*Tesla owner complains about allegedly manipulated batteries*

The owner of a Tesla electric vehicle has filed suit against the US company for manipulating the batteries of certain models. The plaintiff accuses Tesla of reducing the range of older Model S and Model X vehicles via software updates.

From the plaintiff's perspective, the company wants to avoid costly recalls due to defective batteries. The case potentially affects thousands of Teslas.


----------



## supersonicwaffle (Aug 21, 2019)

Taleweaver said:


> ...and exactly where would that sort of thinking lead to? I've heard of a law during the first cars (automobiles) that they should always be preceded by a pedestrian waving a red flag that would indicate a car coming through. Most people (then as well as now) don't know how their engine works either, and when I'm in an airplane or a train I don't think the pilot/driver can disassemble and reassemble the entire thing either. And you put it yourself: "AI can make mistakes, but humans as well". But computers don't get tired after long shifts, are better equipped for the multitasking that is checking all positions and as much as I hate to say it: they're far better at learning that humans.



Sorry for responding to an old post, but I just stumbled upon it.
You're falling for some fallacies here, I believe. Computers are infinitely more capable of taking in experiences, that is correct, but they're hilariously bad at interpreting and drawing conclusions from them. So the assertion that they're simply better at learning isn't really tenable, as any AI will only be as good as its training data, and any unforeseen situation will send the AI more or less into trial-and-error mode.



Taleweaver said:


> That's what I already said: humans are biased towards humans. That's why the laws are currently what they are, and even if that would change, tesla still would put wheels in their cars, well knowing that the human driver would be the unsafest part of the entity.
> Heck...I've even read an article once about how there were X amounts of trafic accidents involving AI since its introduction. Somewhere buried in the article, almost as an afterthought, was the mention that A) for the amount of kilometers driven before the accident, the AI would beat even the most careful human driver, and B) in all but two cases, it was actually the other car's fault. The result? All the focus was on manufacturers patching the programs so these instances couldn't happen again. But the couple hundred accidents caused by humans driving into AI driven cars were forgotten...a simple trivia at best.



The accident data is worthless on its own as self driving cars are always monitored by humans. It'd be much more sensible to use human intervention data where a human was needed to prevent an accident, of course this won't be perfect as the human monitoring the car will sometimes intervene in a situation where the car wouldn't crash but it's a much better indicator than accident data on its own. For the best self driving system with regards to human intervention (Waymo) a human felt the need to intervene roughly 20 times more than a human would crash.
On top of that, the University of Michigan Transportation Research Institute found in 2015 that autonomous cars got into accidents more than twice as often as human-operated cars, though the injuries were much less serious; the most common accident for an autonomous vehicle was being rear-ended at low speed. Of course, the technology has also improved since then.

However, explaining away the system's shortcomings as bias towards humans seems ill-informed at best.



Taleweaver said:


> I also like the line of questioning that gets aimed at AI. Yeah, indeed: WHAT IF someone is driving directly at you on an extremely high highway bridge with no wiggle room? Yeah, sure: I bet the AI isn't accounting for THAT situation! But if you're implying that human drivers would instinctively know what to do in that situation in a split second, then I've got news for you: we'd be equally dead in that situation.



You need to realize you're constructing an edge case in which a computer might make the better decision, while these systems demonstrably have trouble distinguishing a road painted on the back of a truck from a real road, and yet Elon Musk is telling you that lidar is wasted money.

We can also look at real world examples where AI is already implemented to get an idea of what it's capable of:

- Amazon stopped using AI to assist its HR department after it was found that the system significantly favored male hires.
- Multiple social media sites show the same problem with AI moderation: algorithms meant to remove Islamic extremism also hit non-extremist Muslims, and algorithms meant to remove white-supremacist content also removed critiques of white supremacy. The AI is simply incapable of understanding context.
I'm not convinced that autonomous vehicles can reach a reasonable safety standard without the system being fed better information. One problem the industry struggles with is recognizing red traffic lights, because traffic scenes are full of red lights (basically every vehicle's rear and brake lights).
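To illustrate why this is a context problem rather than a color problem, here's a naive sketch. Every detection below is "a red light source", and a color filter alone can't tell a mounted signal from a brake light; the fields and the height threshold are invented for illustration and bear no resemblance to a real perception pipeline.

```python
# Hypothetical detections: a colour filter flags all three as "red light".
detections = [
    {"height_m": 5.5, "moving": False},  # signal mounted over the road
    {"height_m": 1.0, "moving": True},   # brake light of a moving car
    {"height_m": 0.9, "moving": False},  # rear light of a parked car
]

def plausible_traffic_signals(dets, min_height_m=3.0):
    """Keep only stationary red sources mounted high enough to be a signal."""
    return [d for d in dets if d["height_m"] >= min_height_m and not d["moving"]]

print(len(plausible_traffic_signals(detections)))  # only 1 of 3 survives
```

Even this crude filter needs extra information (height, motion) beyond "it's red", which is exactly the kind of context a camera-only colour classifier doesn't inherently have.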

I've said this before and I'll say it again: the people communicating this stuff to the public are vastly overselling what it can do! As it stands, autonomous vehicles, especially those limited to camera-only technology, are only one unforeseen, innovative ad campaign away from killing you. On top of that, regulation for trucks needs to be much more stringent still, because kinetic energy scales as E = ½mv² and trucks bring a lot of m.

EDIT:
Let me add to my previous post to make things more clear.
I've brought up the hype cycle a lot on this forum; it was popularized by Gartner, a market-research firm in the tech industry.
Basically the hype cycle works as follows:

1. There is a technology trigger, i.e. some new technology opens the way for new kinds of applications.
2. Engineers fantasize about what could be possible with the technology, which massively inflates expectations.
3. Engineers discover the limitations and/or unreliability of the technology, leading to disillusionment.
4. What was learned in steps 2 and 3 is applied more conservatively and practically.
5. What was implemented in step 4 is improved upon.
Here's what Gartner's hype cycle prediction from 2018 regarding AI looks like:

[image: Gartner 2018 AI hype cycle chart]
As you can see, "Autonomous Driving Level 5" is about halfway up the "Peak of Inflated Expectations" and is predicted to reach the "Plateau of Productivity" only in more than ten years.
Quantum computing, for example, is higher up the peak and expected to plateau within ten years. IMO that will have a much bigger impact on autonomous-driving viability, but it will also push timelines back, as software engineers will first have to learn how to develop for these computers.

This also ties into politics regarding expectations from automation and proposals like UBI. We should keep thinking about it for sure, but Andrew Yang, for example, is anticipating a level of automation, especially for truck drivers, that aligns more closely with Tesla's PR department than with reality.

EDIT2: Just noticed "Autonomous Driving Level 4" on the hype cycle, which is on its way into disillusionment but still more than ten years away from reaching productivity.


----------

