• Friendly reminder: The politics section is a place where a lot of differing opinions are raised. You may not like what you read here but it is someone's opinion. As long as the debate is respectful you are free to debate freely. Also, the views and opinions expressed by forum members may not necessarily reflect those of GBAtemp. Messages that the staff consider offensive or inflammatory may be removed in line with existing forum terms and conditions.

Great job, TESLA, we appreciate this!

Alexander1970

XP not matters.
OP
Member
Joined
Nov 8, 2018
Messages
14,973
Trophies
3
Location
Austria
XP
2,499
Country
Austria
The thing is that AI is such a complex thing that we can never really trust it fully with our lives because intentionally or not AI will make mistakes just like humans do. And if we can't trust it fully with our lives it makes no sense whatsoever to take away that human control. Just like if someone is a bad driver and gets into accidents it makes no sense to keep letting them drive. If you're a passenger in a car and you have a license and the driver is falling asleep you wouldn't just let them continue driving right? You would ask them to pull over either to get some sleep or to let you take the wheel. That could be applied as a metaphor to self driving cars. Now imagine that they tell you "no" and continue driving and eventually fall asleep and get into an accident. They'd lose their license just like that. Always have a plan B so that this stuff is less likely to happen.

A foresighted, farsighted human he is.:D
 

Searinox

"Dances" with Dragons
Member
Joined
Dec 16, 2007
Messages
2,073
Trophies
1
Age
36
Location
Bucharest
XP
2,203
Country
Romania
Can you imagine if someone had a heart attack and died in their car, and their car would drive them on until either they show up at someone's place dead or go missing and are discovered much later at fuck knows where the car eventually decided to stop or ran out of gas? Creepy!
 
  • Like
Reactions: Alexander1970

Taleweaver

Storywriter
Member
Joined
Dec 23, 2009
Messages
8,689
Trophies
2
Age
43
Location
Belgium
XP
8,090
Country
Belgium
Are self driving cars that aren't human controllable even legal? Does the government place that much trust in AI? AI can make mistakes, not that humans can't, but moreover, the only people that know exactly how the AI works is Tesla themselves, and who knows if they did a perfect job? Just the idea of leaving all control in the hands of a machine that will have to decide what to do if it encounters a situation where bodily harm is unavoidable and not having any means to take control from it is scary to say the least.
...and exactly where would that sort of thinking lead? I've heard that in the early days of the automobile there was a law that every car had to be preceded by a pedestrian waving a red flag to signal a car coming through. Most people (then as well as now) don't know how their engine works either, and when I'm in an airplane or a train I don't think the pilot/driver can disassemble and reassemble the entire thing either. And you put it yourself: "AI can make mistakes, but humans as well". But computers don't get tired after long shifts, they are better equipped for the multitasking of checking all positions and, as much as I hate to say it, they're far better at learning than humans.
It's not the same thing as an elevator because an elevator is not AI and if it falls down it's because a mechanic/electrician didn't do their job or the manager of the building didn't bother to pay a mechanic/electrician for maintenance work. You can't blame it on the elevator.
Elevators are much simpler, yes. But I feel you're overplaying the role of AI in a car. It is not a robot trying to mimic a human that happens to drive a car; it is making sure that the car lives up to the traffic code as well as ensuring that it doesn't hit anything.

But if an AI fails to respond to a situation in a way that is acceptable to most humans (and this is a really tough decision to make when designing AI because what in the hell do you make it do if someone is driving down the wrong side of the road on the highway and it's a steep fall down the side and it's either your life or theirs?) then people are gonna blame the AI and additionally people are not likely to trust the company behind it anymore.
Not to say a real person would do any better at making that decision than a machine would because it's just a bad situation overall, but that's just an example.
That's what I already said: humans are biased towards humans. That's why the laws are currently what they are, and even if that were to change, Tesla would still put steering wheels in their cars, knowing full well that the human driver would be the least safe part of the whole.
Heck... I've even read an article once about how there had been X amount of traffic accidents involving AI since its introduction. Somewhere buried in the article, almost as an afterthought, was the mention that A) for the number of kilometers driven before an accident, the AI would beat even the most careful human driver, and B) in all but two cases, it was actually the other car's fault. The result? All the focus was on manufacturers patching the programs so these instances couldn't happen again. But the couple hundred accidents caused by humans driving into AI-driven cars were forgotten... simple trivia at best.

I also like the line of questioning that gets aimed at AI. Yeah, indeed: WHAT IF someone is driving directly at you on an extremely high highway bridge with no wiggle room? Yeah, sure: I bet the AI isn't accounting for THAT situation! But if you're implying that human drivers would instinctively know what to do in that situation in a split second, then I've got news for you: we'd be equally dead in that situation.


If a real person made a bad decision when driving that resulted in bodily harm they would likely lose their license, and that's the problem solved. If lots of people have self driving cars and one of the cars makes a bad decision that results in an accident, then every single self driving car from that brand is suspect. And if the cars have no way for a human to take control it's doubly bad, as then the cars become essentially worthless. It only takes one accident like this to ruin it for everyone. At least if the car has a wheel you could argue that the human behind the wheel is also at fault for not noticing the issue and taking control (given that they had reasonable time to do so, of course), and having that option gives some extra safety and peace of mind, even if you are right that the driver will pay less attention when they are not actively driving.
One word: patches.

(Besides... I hope it's not as bad on your end, but in Belgium I can recall quite a few scandals where someone got his driver's license back after some years, even after fatal accidents. So the "problem solved" part isn't true. Like it or not, some places are near impossible to reach without a car.)


The thing is that AI is such a complex thing that we can never really trust it fully with our lives because intentionally or not AI will make mistakes just like humans do. And if we can't trust it fully with our lives it makes no sense whatsoever to take away that human control. Just like if someone is a bad driver and gets into accidents it makes no sense to keep letting them drive. If you're a passenger in a car and the driver is falling asleep you wouldn't just let them continue driving right? You would ask them to pull over either to get some sleep or to let you take the wheel. That could be applied as a metaphor to self driving cars. Now imagine that they tell you "no" and continue driving and eventually fall asleep and get into an accident. They'd lose their license just like that. What if that was a self driving car and you couldn't take the wheel even if you wanted to, even as you see an accident about to occur? Always have a plan B so that this stuff is less likely to happen. And if it happens anyway, the driver is as much to blame as the AI.
I already responded to this one: by the time you see a potential accident and register it as such, it is simply too late to take control of the car to begin with. At best you can shout "STOP!!!" or something. But that is something that could easily be hardcoded in the program, whereas I honestly don't see how being able to take the wheel at any time and steer/speed up/slow down would INCREASE safety. I mean... if it did, there should be some record of car passengers who could have prevented accidents the driver made. But I don't believe that's the case.
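To make that "shout STOP" idea a little more concrete, here is a hypothetical sketch (my own illustration, not from any real Tesla or other driving stack; every name and rule in it is made up): the passenger can demand a stop, but how to stop safely stays with the automation, instead of handing over the wheel.

```python
# Hypothetical sketch of a passenger emergency-stop override: the passenger
# can request a stop, but the (imaginary) driving stack decides HOW to stop.
# All names and thresholds here are invented for illustration.

from dataclasses import dataclass

@dataclass
class CarState:
    speed_mps: float          # current speed in metres per second
    shoulder_clear: bool      # whether the system judges the shoulder safe to use

def control_step(state: CarState, passenger_stop_requested: bool) -> str:
    """Return the manoeuvre this toy driving stack would execute next."""
    if passenger_stop_requested:
        # The passenger can demand a stop, but the braking profile and
        # lane choice stay with the automation, not with a manual wheel.
        if state.shoulder_clear:
            return "pull over to the shoulder and stop"
        return "brake smoothly to a stop in lane, hazards on"
    return "continue normal driving"

# Usage example
print(control_step(CarState(speed_mps=33.0, shoulder_clear=True), True))
print(control_step(CarState(speed_mps=33.0, shoulder_clear=False), True))
print(control_step(CarState(speed_mps=33.0, shoulder_clear=True), False))
```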

Oh, that reminds me: I can tell a personal tale on this one. Not too long ago, my girlfriend and I visited a friend of mine. We were parked in his street, which is a rather narrow street. My girlfriend didn't pay too much attention to this, so when we were about to head home, she started the car, put it in reverse...

...it was only a fraction of a second, but I really felt that we were accelerating backwards way too fast. I wanted to shout at her to stop, but the rear object detector was faster than me and started beeping that there was an object nearby...

...but even that warning came too late for my girlfriend to react to the situation, because we immediately hit the parked car at the other end of the street (which incidentally killed our loyal rear object sensor :( ).

If vehicles were intelligently linked together and every car knew where every other car nearby was this would be less of an issue as the car would be able to spot a bad driver or a potential accident ahead of time and avoid it before it ever becomes a problem. Course that doesn't help them detect nearby pedestrians but in places that have pedestrians crossing, the speed limit should be low enough to avoid deaths even in a worst case scenario. Maybe this will become a thing once everyone has a self driving car and at that point we could probably say the technology is safe enough that no human control would ever be required, just frequent maintenance to make sure all the sensors and such are in order. With every car knowing the position of every other car and no potential bad drivers to cause accidents or almost-accidents that result in the AI making less than savory decisions, eliminating the human element completely might actually work. But as long as there are some human drivers on the road there is always that risk of potential accidents that the AI may not be able to handle.
But... cars (even ones that aren't self-driving) already scan for objects in their proximity. They don't have to be linked together for that.

I've read somewhere that 'linking AI cars' together somewhat solves a seemingly totally different problem: traffic jams. By adjusting their speed before the jam, they effectively stop both the linked and the manually driven cars from making the jam grow, which ultimately dissolves it before it can get too big. But that's really a different topic.
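To make that jam-smoothing idea a bit more concrete, here is a toy sketch (my own illustration, not taken from any real vehicle system; every constant in it is made up). It compares a column of cars that only react, with an overreaction, to the car directly ahead against cars that are "warned early" and close the speed gap gradually: the reactive chain amplifies a brief slowdown into a full standstill, while the smoothed chain keeps everyone moving.

```python
# Toy illustration of why easing off early can stop a jam from growing.
# A single lane of cars follows a very simple car-following rule; the lead
# car brakes briefly. "Reactive" cars overreact to the car ahead, which
# amplifies the slowdown into a stop-and-go wave; "linked" cars are warned
# early and close only half the speed gap per step. All numbers are made up.

def simulate(linked, n_cars=10, steps=120, dt=0.5):
    v_max = 30.0                      # m/s, free-flow speed (assumed)
    speeds = [v_max] * n_cars         # car 0 is the lead car
    min_speeds = [v_max] * n_cars     # slowest speed each car ever reaches

    for t in range(steps):
        # Lead car brakes between t=10 and t=20, then recovers gently.
        if 10 <= t < 20:
            speeds[0] = 10.0
        else:
            speeds[0] = min(v_max, speeds[0] + 2.0 * dt)

        for i in range(1, n_cars):
            ahead = speeds[i - 1]
            if linked:
                # Warned early: close half the speed gap per step,
                # spreading the slowdown out instead of amplifying it.
                speeds[i] += 0.5 * (ahead - speeds[i])
            else:
                # Reactive driver: brakes harder than strictly needed.
                gap = ahead - speeds[i]
                speeds[i] += gap * (1.5 if gap < 0 else 0.5)
            speeds[i] = max(0.0, min(v_max, speeds[i]))
            min_speeds[i] = min(min_speeds[i], speeds[i])

    return min_speeds

for mode in (False, True):
    slowest = simulate(linked=mode)
    label = "linked  " if mode else "reactive"
    print(f"{label} cars, slowest speed reached per car:",
          [round(s, 1) for s in slowest[1:]])
```

With these assumed constants the reactive column grinds to a complete stop while the linked column never drops far below the lead car's slowest speed, which is the whole point of adjusting speed before the jam.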
 

Alexander1970

XP not matters.
OP
Member
Joined
Nov 8, 2018
Messages
14,973
Trophies
3
Location
Austria
XP
2,499
Country
Austria
Self-Ignition included.
Also not bad.:rofl2:



Tesla claims that this is not a systemic defect, so it is not a general weakness of the Model S vehicles. In short: it should not happen again, according to the Tesla experts.
 
Last edited by Alexander1970,

Alexander1970

XP not matters.
OP
Member
Joined
Nov 8, 2018
Messages
14,973
Trophies
3
Location
Austria
XP
2,499
Country
Austria
Tesla exaggerates wildly in crash test results

Source: Nadine Dressler
According to the US transport safety authority NHTSA, the electric car manufacturer Tesla should finally stop citing the NHTSA while presenting its vehicles as the safest cars on the market, because the agency's scores do not support that claim at all. The authority is also indignant because this is not the first incident of this kind.

https://www.plainsite.org/documents/fnrhg/tesla-nhtsa-foia-response/
(.pdf - you can download or read it online)


The FTC is brought in
Even this, given the history, is a downright mild response, because Tesla and the NHTSA already clashed back in 2013. The same problem came up then - at the time, Tesla was advertising its Model S by claiming the car had set a new occupant safety record.

Because Tesla once again pairs the NHTSA ratings with exaggerated advertising claims, the agency no longer wants to leave it at asking the company for appropriate corrections. Instead, the matter has now been handed over to the US FTC, which is investigating misleading advertising and a possible resulting violation of competition law.





Tesla owner complains about allegedly manipulated batteries

The owner of a Tesla electric vehicle has filed suit against the US company for allegedly manipulating the batteries of certain models. The plaintiff accuses Tesla of reducing the range of older Model S and Model X vehicles via software updates.

In the plaintiff's view, the company wants to avoid costly recalls due to defective batteries. The case potentially affects thousands of Teslas.
 
  • Like
Reactions: IncredulousP

supersonicwaffle

Well-Known Member
Member
Joined
Oct 15, 2018
Messages
262
Trophies
0
Age
37
XP
458
Country
Germany
...and exactly where would that sort of thinking lead? I've heard that in the early days of the automobile there was a law that every car had to be preceded by a pedestrian waving a red flag to signal a car coming through. Most people (then as well as now) don't know how their engine works either, and when I'm in an airplane or a train I don't think the pilot/driver can disassemble and reassemble the entire thing either. And you put it yourself: "AI can make mistakes, but humans as well". But computers don't get tired after long shifts, they are better equipped for the multitasking of checking all positions and, as much as I hate to say it, they're far better at learning than humans.

Sorry for responding to an old post but I just stumbled upon it.
You're falling for some fallacies here, I believe. Computers are infinitely more capable of taking in experiences, that is correct, but they're hilariously bad at interpreting and drawing conclusions from them. So the assertion that they're simply better at learning isn't really tenable, as any AI will only be as good as its training data, and any unforeseen situation will result in the AI more or less going into trial-and-error mode.

That's what I already said: humans are biased towards humans. That's why the laws are currently what they are, and even if that were to change, Tesla would still put steering wheels in their cars, knowing full well that the human driver would be the least safe part of the whole.
Heck... I've even read an article once about how there had been X amount of traffic accidents involving AI since its introduction. Somewhere buried in the article, almost as an afterthought, was the mention that A) for the number of kilometers driven before an accident, the AI would beat even the most careful human driver, and B) in all but two cases, it was actually the other car's fault. The result? All the focus was on manufacturers patching the programs so these instances couldn't happen again. But the couple hundred accidents caused by humans driving into AI-driven cars were forgotten... simple trivia at best.

The accident data is worthless on its own, as self-driving cars are always monitored by humans. It'd be much more sensible to use human intervention data, i.e. cases where a human was needed to prevent an accident. Of course this won't be perfect either, as the human monitoring the car will sometimes intervene in a situation where the car wouldn't have crashed, but it's a much better indicator than accident data alone. For the self-driving system with the best record on human intervention (Waymo), a human felt the need to intervene roughly 20 times more often than a human driver would crash.
On top of that, the University of Michigan Transportation Research Institute found in 2015 that autonomous cars got into accidents more than twice as frequently as human-operated cars; however, the injuries were much less serious, and the most common accident for an autonomous vehicle was getting rear-ended at low speed. Of course, the technology has also improved since then.
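To make clear what that "roughly 20 times" comparison means, here is a tiny back-of-the-envelope sketch. The mileage figures in it are hypothetical round numbers picked purely to show how such a ratio is formed; the post above does not give the underlying data.

```python
# Hypothetical illustration of the intervention-vs-crash comparison above.
# Both mileage figures are made-up round numbers, NOT real Waymo/NHTSA data;
# they only show how a "roughly 20x" ratio would be computed.

human_miles_per_crash = 250_000       # assumed: one human-driver crash per 250k miles
av_miles_per_intervention = 12_500    # assumed: one safety-driver intervention per 12.5k miles

ratio = human_miles_per_crash / av_miles_per_intervention
print(f"Safety drivers intervene about {ratio:.0f}x more often (per mile) "
      f"than human drivers crash")    # -> about 20x with these assumed inputs
```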

However, explaining away the system's shortcomings with bias towards humans seems ill-informed at best.

I also like the line of questioning that gets aimed at AI. Yeah, indeed: WHAT IF someone is driving directly at you on an extremely high highway bridge with no wiggle room? Yeah, sure: I bet the AI isn't accounting for THAT situation! But if you're implying that human drivers would instinctively know what to do in that situation in a split second, then I've got news for you: we'd be equally dead in that situation.

You need to realize you're constructing an edge case in which a computer might make better decisions, while these systems demonstrably have trouble distinguishing a road painted on the back of a truck from a real road, and yet Elon Musk is telling you that lidar is wasted money.

We can also look at real world examples where AI is already implemented to get an idea of what it's capable of:
  • Amazon stopped using AI to assist its HR department because it was found that the AI significantly favored male candidates.
  • Multiple social media sites show the same problem with AI moderation, e.g. algorithms meant to remove Islamic extremism also hitting non-extremist Muslims, or algorithms meant to remove white supremacist content also removing critiques of white supremacy. The AI is simply incapable of understanding the context.
I'm not convinced that autonomous vehicles are possible at a reasonable safety standard without the system being fed more complete information. One of the issues the industry has is recognizing red traffic lights, as there are a ton of red lights in traffic situations (basically every vehicle's rear or brake lights).
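As a toy illustration of why that is hard (my own sketch, not how any real perception stack works; all pixel values are invented), here is what a naive "find the bright red blob" detector does: it fires on brake lights and billboards just as readily as on an actual red signal, so the distinguishing context has to come from somewhere else.

```python
# Toy illustration of the "too many red lights" problem described above:
# a naive detector that just looks for bright red pixels cannot tell a red
# traffic light from a brake light. Scenes and pixel values are made up.

def looks_like_red_light(pixel):
    r, g, b = pixel
    return r > 200 and g < 80 and b < 80

# A made-up "scene": a list of (label, representative pixel) pairs.
scene = [
    ("red traffic light, upper part of frame", (250, 30, 30)),
    ("lead car's brake light",                 (245, 40, 35)),
    ("red billboard",                          (230, 50, 60)),
    ("green traffic light",                    (30, 220, 40)),
]

for label, pixel in scene:
    flagged = looks_like_red_light(pixel)
    print(f"{label:40s} -> {'STOP?' if flagged else 'ignore'}")

# Everything bright red gets flagged. A real system has to add context the
# colour alone doesn't carry: position in the image, map data about where
# signals actually are, the object's shape and motion, and so on.
```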

I've said this before and I'll say it again: the people communicating this stuff to the public are vastly overselling what it can do! As it stands, autonomous vehicles, especially those limited to camera-only technology, are only an unforeseen and innovative ad campaign away from killing you. Now add to that that regulation regarding trucks needs to be much more stringent still, because of the sheer kinetic energy involved (the whole ½mv² thing).

EDIT:
Let me add to my previous post to make things clearer.
I've brought up the hype cycle a lot on this forum; it has been popularized by Gartner, a firm that does market research in the tech industry.
Basically the hype cycle works as follows:
  1. there is a technology trigger, i.e. some new technology that opens the way for new kinds of applications
  2. engineers are fantasizing about what could be possible with the technology which massively inflates expectations
  3. engineers find the limitations and/or unreliability of the technology leading to disillusionment
  4. what has been learned in 2 and 3 will be applied more conservatively in a practical way
  5. what's been implemented in 4 is improved upon
Here's what Gartner's hype cycle prediction from 2018 regarding AI looks like:
[Image: Gartner Hype Cycle for Emerging Technologies, 2018]


As you can see, "Autonomous Driving Level 5" is about halfway up the "Peak of Inflated Expectations" and is predicted to reach the "Plateau of Productivity" only in more than 10 years.
Quantum computing, for example, is higher up the peak and expected to plateau within 10 years. IMO that will have a much bigger impact on the viability of autonomous driving, but it will also push it back, as software engineers will first have to learn how to develop for these computers.

This also ties into politics regarding expectations from automation and proposals like UBI. We should keep thinking about it for sure, but Andrew Yang, for example, is anticipating a level of automation, especially for truck drivers, that aligns more closely with Tesla's PR department than with reality.

EDIT2: Just noticed "Autonomous Driving Level 4" on the hype cycle, which is on its way to disillusionment but is still more than 10 years away from reaching productivity.
 
Last edited by supersonicwaffle,
  • Like
Reactions: Alexander1970
