Great job, TESLA, we appreciate this!

Discussion in 'World News, Current Events & Politics' started by alexander1970, Jun 25, 2019.

  1. alexander1970

    alexander1970 GBAtemp allows me to be here

    Nov 8, 2018
A foresighted man he is. :D
  2. Searinox

    Searinox Dances with Dragons

    Dec 16, 2007
Can you imagine if someone had a heart attack and died in their car, and the car just drove them on until they either showed up at someone's place dead, or went missing and were discovered much later at fuck knows where, once the car eventually decided to stop or ran out of gas? Creepy!
    alexander1970 likes this.
  3. Taleweaver

    Taleweaver Storywriter

    Dec 23, 2009
    ...and exactly where would that sort of thinking lead? I've heard of a law from the era of the first automobiles that required every car to be preceded by a pedestrian waving a red flag to indicate a car coming through. Most people (then as well as now) don't know how their engine works either, and when I'm in an airplane or a train I don't assume the pilot/driver can disassemble and reassemble the entire thing either. And you put it yourself: "AI can make mistakes, but humans as well". But computers don't get tired after long shifts, are better equipped for the multitasking that is checking all positions, and, as much as I hate to say it: they're far better at learning than humans.
    Elevators are much simpler, yes. But I feel you're overplaying the role of AI in a car. It is not a robot trying to mimic a human that happens to drive a car; it is software making sure that the car follows the traffic code and doesn't hit anything.

    That's what I already said: humans are biased towards humans. That's why the laws are currently what they are, and even if that were to change, Tesla would still put steering wheels in their cars, knowing full well that the human driver is the least safe part of the whole.
    Heck... I've even read an article once about how there had been X number of traffic accidents involving AI since its introduction. Somewhere buried in the article, almost as an afterthought, was the mention that A) for the amount of kilometers driven before an accident, the AI would beat even the most careful human driver, and B) in all but two cases, it was actually the other car's fault. The result? All the focus was on manufacturers patching the programs so these instances couldn't happen again. But the couple hundred accidents caused by humans driving into AI-driven cars were forgotten... a bit of trivia at best.

    I also like the line of questioning that gets aimed at AI. Yeah, indeed: WHAT IF someone is driving directly at you on an extremely high highway bridge with no wiggle room? Yeah, sure: I bet the AI isn't accounting for THAT situation! But if you're implying that human drivers would instinctively know what to do in that situation in a split second, then I've got news for you: we'd be equally dead either way.

    One word: patches.

    (Besides... I hope it's not as bad on your end, but in Belgium I can recount quite a few scandals where someone had his driver's license reinstated after some years, even after fatal accidents. So "problem solved" isn't true. Like it or not, some places are near impossible to reach without a car.)

    I already responded to this one: by the time you see a potential accident and register it as such, it is simply too late to take control of the car to begin with. At best you can shout "STOP!!!" or something. But that is something that could easily be hardcoded in the program, whereas I honestly don't see how being able to take the wheel at any time and steer/speed up/slow down would INCREASE safety. I mean... if it did, then there should be some record of car passengers preventing accidents that the driver was about to make. But I don't believe that'll happen.

    Oh, that reminds me: I can tell a personal tale on this one. Not too long ago, my girlfriend and I visited a friend of mine. We were parked in his street, which is a rather narrow one. My girlfriend didn't pay much attention to this, so when we were about to head home, she started the car, put it in reverse... it was only a fraction of a second, but I really felt that we were accelerating backwards way too fast. I wanted to shout at her to stop, but the rear object detector was faster than me and started beeping that there was an object nearby...

    ...but even that sensor was too slow for my girlfriend to react to the situation, because we immediately hit the parked car at the other end of the street (which incidentally killed our loyal rear object sensor :( ). My point being: cars (not even self-driving ones) already scan the proximity of other objects around them. They don't have to be linked together for that.

    I've read somewhere that 'linking AI cars' together somewhat solves a seemingly totally different problem: traffic jams. By adjusting the speed before the jam, they effectively stop both the linked and the manually driven cars from making the jam grow, which ultimately dissolves it before it can become large enough. But that's really a different topic.
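    The smoothing idea above can be sketched with a toy queueing model: a jam shrinks when cars leave its head faster than new cars arrive at its tail, and slowing linked cars upstream lowers that arrival rate. All numbers here are made up for illustration, not real traffic data.

    ```python
    # Toy queueing model of a traffic jam (hypothetical numbers, not real data).
    # A jam shrinks when cars leave its head faster than new cars arrive at its
    # tail; linked cars can be slowed upstream, lowering the arrival rate.

    def jam_length_over_time(initial_cars, inflow_per_min, outflow_per_min, minutes):
        """Return the number of cars stuck in the jam after each minute."""
        lengths = []
        cars = initial_cars
        for _ in range(minutes):
            cars = max(0, cars + inflow_per_min - outflow_per_min)
            lengths.append(cars)
        return lengths

    # Without speed smoothing: cars arrive faster than the jam drains -> it grows.
    growing = jam_length_over_time(50, inflow_per_min=12, outflow_per_min=10, minutes=10)

    # With smoothing: upstream cars are slowed, so fewer reach the tail per minute.
    shrinking = jam_length_over_time(50, inflow_per_min=7, outflow_per_min=10, minutes=10)

    print(growing[-1])   # 70 - the jam grew
    print(shrinking[-1]) # 20 - the jam shrank
    ```

    The point of the sketch: nobody has to stop the jam directly; merely changing the inflow rate upstream flips the jam from growing to dissolving.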
    Reiten and alexander1970 like this.
  4. alexander1970

    alexander1970 GBAtemp allows me to be here

    Nov 8, 2018
    Self-ignition included.
    Also not bad. :rofl2:

    Tesla claims that this is not a systemic defect, i.e. not a general weakness of the Model S vehicles. In short: it should not happen again, according to the Tesla experts.
    Last edited by alexander1970, Jul 9, 2019
  5. The Real Jdbye

    The Real Jdbye Always Remember 30/07/08

    GBAtemp Patron
    The Real Jdbye is a Patron of GBAtemp and is helping us stay independent!

    Our Patreon
    Mar 17, 2010
    Damn Chinese Tesla knockoffs :P
    ThoD and alexander1970 like this.
  6. osm70

    osm70 GBAtemp Maniac

    Apr 17, 2011
    Czech Republic
    Honestly, the one who was recording the video was in much higher danger than the sleeping "driver".
    ThoD and alexander1970 like this.
  7. alexander1970

    alexander1970 GBAtemp allows me to be here

    Nov 8, 2018
    Tesla exaggerates wildly in crash test results

    Source: Nadine Dressler
    The electric car manufacturer Tesla has been told to finally stop presenting its vehicles as the safest cars on the market while citing the transport safety authority NHTSA, because the agency's scores do not support that claim at all. The authority is also indignant because this is not the first incident of its kind.
    (.pdf - you can download or read it online)

    The FTC is brought in
    Given the history, even this is a downright mild step, because Tesla and the NHTSA had already clashed back in 2013. The same problem had to be addressed then, except that Tesla was advertising its Model S at the time, claiming that the car had set a new occupant safety record.

    Because Tesla has once again combined the NHTSA ratings with exaggerated advertising claims, the agency no longer wants to limit itself to asking the company for appropriate corrections. Instead, the matter has now been handed over to the US FTC, which is investigating misleading advertising and a possible resulting violation of competition law.

    Tesla owner complains about allegedly manipulated batteries

    The owner of a Tesla electric vehicle has filed suit against the US company for manipulating the batteries of certain models. The plaintiff accuses Tesla of reducing the range of older Model S and Model X vehicles via software updates.

    From the plaintiff's perspective, the group wants to avoid costly recalls due to defective batteries. The case potentially affects thousands of Teslas.
    IncredulousP likes this.
  8. supersonicwaffle

    supersonicwaffle GBAtemp Regular

    Oct 15, 2018
    Sorry for responding to an old post but I just stumbled upon it.
    You're falling for some fallacies here, I believe. Computers are infinitely more capable of taking in experiences, that is correct, but they're hilariously bad at interpreting them and drawing conclusions from them. So the assertion that they're simply better at learning isn't really tenable, as any AI will only be as good as its training data, and any unforeseen situation will result in the AI more or less going into trial-and-error mode.

    The accident data is worthless on its own, as self-driving cars are always monitored by humans. It'd be much more sensible to use human-intervention data, where a human was needed to prevent an accident. Of course this won't be perfect, as the human monitoring the car will sometimes intervene in a situation where the car wouldn't have crashed, but it's a much better indicator than accident data on its own. For the best self-driving system with regard to human intervention (Waymo), a human felt the need to intervene roughly 20 times more often than a human driver would crash.
    On top of that, the University of Michigan Transportation Research Institute found in 2015 that autonomous cars got into accidents more than twice as frequently as human-operated cars; however, the injuries were much less serious, and the most common accident for an autonomous vehicle was getting rear-ended at low speed. Of course, technology has also improved since then.
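    The "roughly 20 times" comparison is just a ratio of two rates, which a back-of-the-envelope calculation makes concrete. The mileage figures below are illustrative assumptions chosen to produce that ratio, not official statistics.

    ```python
    # Back-of-the-envelope for the "~20x" intervention-vs-crash comparison.
    # Both mileage figures are illustrative assumptions, not official data.

    human_miles_per_crash = 220_000        # assumed: one human crash per ~220k miles
    waymo_miles_per_disengagement = 11_000 # assumed: one intervention per ~11k miles

    # A smaller miles-per-event number means the event happens more often.
    ratio = human_miles_per_crash / waymo_miles_per_disengagement
    print(ratio)  # 20.0 - interventions ~20x more frequent than human crashes
    ```

    The caveat from the post applies to any such ratio: an intervention is not a crash that would certainly have happened, so this overstates how often the car would actually have failed.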

    However, explaining away the system's shortcomings with bias towards humans seems ill-informed at best.

    You need to realize you're constructing an edge case in which a computer might make better decisions, while such systems are demonstrably having trouble distinguishing a road painted on the back of a truck from a real road, and yet Elon Musk is telling you that lidar is wasted money.

    We can also look at real world examples where AI is already implemented to get an idea of what it's capable of:
    • Amazon stopped using AI to assist its HR department because it was found that the AI significantly favored male hires
    • Multiple social media sites show the same problem with AI moderation, like the algorithms meant to remove Islamic extremism also impacting non-extremist Muslims, or the algorithms meant to remove white-supremacist content also removing critiques of white supremacy. The AI is simply incapable of understanding context.
    I'm not convinced that autonomous vehicles are possible at a reasonable safety standard without the system being fed more perfect information. One of the issues the industry has is recognizing red traffic lights, as there are a ton of red lights in traffic situations (basically every vehicle's rear or brake lights).
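    The red-light ambiguity can be illustrated with a toy classifier: a naive "is it red?" check flags brake lights too, and only extra context (here, mounting height) separates them. Every object, field name, and threshold below is hypothetical; real perception stacks fuse shape, position, and map data rather than a single rule.

    ```python
    # Toy illustration of the red-light ambiguity described above.
    # All objects, fields, and thresholds are hypothetical.

    def naive_red_detector(obj):
        """Flags anything red - including every brake light on the road."""
        return obj["color"] == "red"

    def context_aware_detector(obj):
        """Adds one piece of context: signals hang well above bumper height."""
        return obj["color"] == "red" and obj["height_m"] > 2.5

    scene = [
        {"name": "traffic_light", "color": "red", "height_m": 5.0},
        {"name": "brake_light",   "color": "red", "height_m": 0.8},
        {"name": "rear_light",    "color": "red", "height_m": 0.9},
    ]

    print([o["name"] for o in scene if naive_red_detector(o)])
    # ['traffic_light', 'brake_light', 'rear_light'] - all three flagged
    print([o["name"] for o in scene if context_aware_detector(o)])
    # ['traffic_light'] - only the actual signal
    ```

    The single height rule is of course far too crude for a real system (overhead gantry brake lights, low-hung signals), which is exactly the post's point: the raw sensor signal alone is ambiguous.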

    I've said this before and I'll say it again: the people communicating this stuff to the public are vastly overselling what it can do! As it stands, autonomous vehicles, especially those limited to camera-only technology, are only one unforeseen, innovative ad campaign away from killing you. Add to that that regulation regarding trucks still needs to be much more stringent, because kinetic energy scales with mass (E = ½mv²).

    Let me add to my previous post to make things more clear.
    I've brought up the hype cycle a lot on this forum; it was popularized by Gartner, a firm doing market research in the tech industry.
    Basically the hype cycle works as follows:
    1. there is a technology trigger, i.e. some new technology that opens the way for new kinds of applications
    2. engineers are fantasizing about what could be possible with the technology which massively inflates expectations
    3. engineers find the limitations and/or unreliability of the technology leading to disillusionment
    4. what has been learned in 2 and 3 will be applied more conservatively in a practical way
    5. what's been implemented in 4 is improved upon
    Here's what Gartner's hype cycle prediction from 2018 regarding AI looks like:

    As you can see, "Autonomous Driving Level 5" is about halfway up the "Peak of Inflated Expectations" and is predicted to only reach the "Plateau of Productivity" in more than 10 years.
    Quantum Computing for example is higher up the peak and expected to plateau within 10 years. IMO that will have a much bigger impact on autonomous driving viability but will also push it back as software engineers will first have to learn how to develop for these computers.

    This also ties into politics regarding expectations from automation and proposals like UBI. We should keep thinking about it for sure, but Andrew Yang, for example, is anticipating a level of automation, especially for truck drivers, that more closely aligns with Tesla's PR department than with reality.

    EDIT2: Just noticed the "Autonomous Driving Level 4" on the hype cycle, which is on its way to disillusionment but still more than 10 years away from reaching productivity.
    Last edited by supersonicwaffle, Aug 22, 2019
    alexander1970 likes this.