
First Death by Autonomous Vehicle Accident

A self-driving Uber SUV struck and killed 49-year-old Elaine Herzberg as she walked her bicycle across a street in Tempe, Arizona, Sunday night, according to the Tempe police. The department is investigating the crash.

A driver was behind the wheel of the Volvo XC90 SUV at the time, the police said.

"The vehicle involved is one of Uber's self-driving vehicles," the Tempe police said in a statement. "It was in autonomous mode at the time of the collision, with a vehicle operator behind the wheel."

Autonomous mode means the car is driving on its own. During tests, a person sits behind the wheel as a safeguard.

This isn't the first futuristic car involved in a fatal crash. In 2016, a man driving a Tesla was killed while its autopilot system was activated. But Tesla Autopilot is partially autonomous. A human driver is required to handle much of the driving.

However, Uber's system is designed to fully replace a human driver.

I have several questions. If there was an operator behind the steering wheel as a safeguard, how did this accident still happen?

This is a big story because we know automation is coming, and transit/logistics/supply chain is one of the biggest pet projects in tech. I wonder if this will serve as an inhibitor in any way? What does insurance for these incidents look like? What if there are passengers involved?



source
 

Alx

Member
I have several questions. If there was an operator behind the steering wheel as a safeguard, how did this accident still happen?

For the same reason people shouldn't be on their phone while driving. If you're not fully focused on the driving task, it takes longer to detect the danger and take control. A human driver is an additional safeguard if the car goes "crazy", but they're unlikely to provide quick emergency reactions (which is why we devise anti-collision systems for the machine to help the human, not the other way around).
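To put rough, made-up numbers on that reaction-time point, here's a quick sketch. The reaction times are textbook-ish assumptions, not anything measured in this crash; only the 40 mph figure comes from the reports.

```python
# Rough illustration: how much extra distance a distracted safety operator
# covers before braking even begins. Reaction times are assumed values.

def reaction_distance_m(speed_mph: float, reaction_time_s: float) -> float:
    """Distance travelled during the reaction time, before the brakes are applied."""
    speed_ms = speed_mph * 0.44704  # mph -> m/s
    return speed_ms * reaction_time_s

speed = 40  # mph, roughly the speed reported for the Uber SUV

for label, t in [("attentive driver", 1.0), ("distracted operator", 2.5)]:
    print(f"{label}: ~{reaction_distance_m(speed, t):.1f} m travelled before braking starts")

# attentive driver: ~17.9 m travelled before braking starts
# distracted operator: ~44.7 m travelled before braking starts
```

An extra second and a half of inattention is an extra car-length or three of travel before anything even happens, which is the whole problem with a "safeguard" who isn't actively driving.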
 

KINGMOKU

Member
This should never be allowed. We are humans living on a human planet. Humans should be in charge of the most widely used form of human transportation on earth: cars.

Uber should be sued for all of their assets in this case. I know this sounds extreme, but cars are not on rails and there are way, way, way too many variables on streets for this to work, and I'd argue for it to ever work, unless true A.I. is developed.

I hope Uber does get sued and implodes like a dying star, so that no other company would ever dare go ahead with this.

Humans are unpredictable by nature and computer code is 1's and 0's. They are incompatible by any measure.
 
Yes, it's a fascinating topic in the sense that it pertains to what we call an 'attribution problem'. The machine merely executes its program, and any error it causes is considered to be the direct result of faulty programming; that is why we consider machines to be deterministic. The problem with that is, from a human perspective, we are unable to attribute fault and/or guilt to something that cannot act otherwise. In essence, the presumption of guilt requires the presumption of free will. Since machines do not possess free will and cannot choose to act otherwise, they cannot be considered to be at fault.

Humans are hard-wired to attribute causes to behaviors and motives; we absolutely need something to be guilty when misery happens. The exception to that would be fate or the natural order, something that humans cannot influence, like certain natural catastrophes for example. But the machine is not something driven by fate; it is something created by humans and therefore something that can be influenced or controlled. If we were to follow that train of thought, we would necessarily end up at the programmer himself, but since we know that programmers cannot ever account for every possible variable, we'd have to simply assume that it was fate.

I highly doubt that this is something that can ever be completely solved through mere technical means, since some accidents simply can't be avoided. It's not hard to see how this is an ethical antinomy, a contradiction between two apparently equally valid principles. I'm very interested to see how future advocates, magistrates and law-makers will tackle this problem. It will probably boil down to the responsibility of the driver, because it's ultimately his decision whether he engages the autonomous driving function or not. But that would lead to all sorts of other ramifications. Would you push the button, knowing full well that the function it activates cannot be trusted with absolute certainty?
 
Last edited:

HoodWinked

Member


This is a guess from what is reported, but autonomous cars use lidar; the video demonstrates what the visualization of that data looks like. What I'm thinking is that the Uber was in the inner lane and maybe a large box truck was in the outer lane, which would obstruct its vision. Maybe the person walking across the intersection crossed at the wrong time, or maybe the autonomous car ran a red light. It's likely some very unique circumstance. Maybe a bird flew right in front of the signal light or something. Curious to find out the cause.
 
Last edited:

Pejo

Member
The article didn't really specify much about the actual accident. I have heard some reports that are saying she crossed without warning and not at a crosswalk. It's definitely unfortunate, and something that will need to be programmed around if we're ever to rely on driverless cars.

There have been countless times that people walk out in front of me assuming I will see them and stop/slow down. I wonder how they're going to quantify that with code.
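If I had to guess, the starting point is something like a crude time-to-collision check: compare how long until the car reaches the pedestrian with how long until the pedestrian could be in the lane. This is a toy sketch with entirely made-up numbers and names, not how any real system works.

```python
# Toy sketch of "quantifying someone stepping out in front of me":
# brake if the pedestrian could enter our lane before we pass their position.

def should_brake(car_speed_ms: float, dist_to_ped_m: float,
                 ped_lateral_dist_m: float, ped_speed_ms: float) -> bool:
    """Return True if the pedestrian can reach our lane before we pass them."""
    if ped_speed_ms <= 0:          # standing still, not moving toward the lane
        return False
    time_car_arrives = dist_to_ped_m / car_speed_ms
    time_ped_in_lane = ped_lateral_dist_m / ped_speed_ms
    return time_ped_in_lane <= time_car_arrives

# Pedestrian 2 m from the lane edge walking at 1.5 m/s, car 40 m away at 18 m/s:
print(should_brake(18.0, 40.0, 2.0, 1.5))   # True: they reach the lane first
```

The hard part is everything upstream of a check like that: actually detecting the person, estimating their speed and heading, and guessing intent from posture or gaze, which is exactly where current systems struggle.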
 

Dunki

Member
There will always be accidents; the question, however, is how much lower the toll from autonomous driving will be in the end. So far I think it is way, way lower than if actual people were driving, so I never understood the outrage here.

Also:
woman, who was crossing the street outside of a crosswalk,

So it's not even the fault of the program; it was human error. Maybe try to learn to follow the rules.
 
Last edited:

HoodWinked

Member
There have been countless times that people walk out in front of me assuming I will see them and stop/slow down. I wonder how they're going to quantify that with code.

That made me think of something interesting: maybe what eventually happens, if autonomous cars become the norm, is that people simply change their behaviors and expectations, since the cars will operate in an expected, predictable manner. You wouldn't walk in front of a train expecting it to stop.

There will always be accidents; the question, however, is how much lower the toll from autonomous driving will be in the end. So far I think it is way, way lower than if actual people were driving, so I never understood the outrage here.

I remember a video with Elon Musk talking about autonomous driving, wondering where the line would be in terms of safety vs. a human driver. Like, what would be acceptable: would the autonomous driver have to be as safe as a human driver, or 10 times safer, or 100 times safer?
 

Dunki

Member
That made me think of something interesting: maybe what eventually happens, if autonomous cars become the norm, is that people simply change their behaviors and expectations, since the cars will operate in an expected, predictable manner. You wouldn't walk in front of a train expecting it to stop.



I remember a video with Elon Musk talking about autonomous driving, wondering where the line would be in terms of safety vs. a human driver. Like, what would be acceptable: would the autonomous driver have to be as safe as a human driver, or 10 times safer, or 100 times safer?
I guess it depends how far the "journalists" will go to make it a clickbait article, to be honest. Also, according to the police she was the one making the mistake anyway, so it is not even the fault of the program. You can't do much when stupidity replaces logical thinking^^
 
I saw two pedestrians get hit by a car last year.
They jaywalked out into traffic and the car couldn't brake in time.

Autonomous cars need to be our future. We have to cut down on traffic fatalities from drunk driving, texting and driving, and reckless driving.

37,000 people died by car in 2017. What would that look like if all cars were self-driving? 10? 5?
 
I saw two pedestrians get hit by a car last year.
They jaywalked out into traffic and the car couldn't brake in time.

Autonomous cars need to be our future. We have to cut down on traffic fatalities from drunk driving, texting and driving, and reckless driving.

37,000 people died by car in 2017. What would that look like if all cars were self-driving? 10? 5?

Probably a reduction to about 3,000 but I get what you are saying.

Yeah, I was wondering if the woman with the bike is at fault, but you know, it's poor taste to blame the victim even if they ARE in the wrong. Still, loss of life is loss of life, which should give us pause.

I'm wondering now if we need to create new infrastructure to facilitate autonomous vehicles. Right now, we're just putting cars right on the roads along with pedestrians and bike lanes. Maybe new lanes are needed or something since human error of "inefficient driving" would be reduced.

I say reduced because I still want to be able to get a sports or luxury car and go 100 on the interstate just because this is America.
 

Alx

Member
Also:

So it's not even the fault of the program; it was human error. Maybe try to learn to follow the rules.

A safety system must not assume that everybody is following the rules; that's actually the opposite of what it should do. A self-driving car must handle situations like pedestrians jaywalking, cars running a red light, unknown obstacles in the middle of a highway, a car coming in the wrong direction, etc. If they can't do that, then they're not even at the level of a human driver.
 

TheMikado

Banned
Isn't Uber using stolen tech anyway? If that's proven, AND their product was not fully developed as a result of that, they should be sued for everything.
 

NickFire

Member
For the same reason people shouldn't be on their phone while driving. If you're not fully focused on the driving task, it takes longer to detect the danger and take control. A human driver is an additional safeguard if the car goes "crazy", but they're unlikely to provide quick emergency reactions (which is why we devise anti-collision systems for the machine to help the human, not the other way around).
You summed that up very well.

Personally, I think this is a horrible idea. I agree we have too many accidents caused by human behavior, but I have yet to hear an argument or anything else that makes me believe they have computer programs, sensors, and other hardware capable of accounting for the bazillion different variables you can unexpectedly encounter while driving on public streets. Never mind if they all become self aware at the same time and realize the roads would be much safer with no people alive to use them.
 
You summed that up very well.

Personally, I think this is a horrible idea. I agree we have too many accidents caused by human behavior, but I have yet to hear an argument or anything else that makes me believe they have computer programs, sensors, and other hardware capable of accounting for the bazillion different variables you can unexpectedly encounter while driving on public streets. Never mind if they all become self aware at the same time and realize the roads would be much safer with no people alive to use them.

I mean, we have driverless cars on the road now that are orders of magnitude safer than your average driver.

One death and everyone is losing their minds. Meanwhile human error killed 100 people the same day and every subsequent day.
 

Bryank75

Banned
They'd never work right in the UK & Ireland; we drive way too fast on winding, hilly and unmaintained roads. With horses and hunts and livestock and crazy cyclists and runners etc. Just way too much to consider. Badgers too; Brian Blessed loves them, I couldn't do one harm.
 

black_13

Banned
This is why I think autonomous driving should only be allowed for long-distance highway travel until it's near perfect, as city driving can be more dangerous, especially with pedestrians.
 
I live in Tempe. People here don't follow the rules of the road. I have to drive defensive as hell and assume some people just don't give a shit.

You have bicyclists who ride across the road at night with no lights. You have bicyclists riding on the sidewalk in the wrong direction. You have people jaywalking at night wearing all dark/black clothing. It's a goddamn miracle I haven't run into anyone head on, though a bicyclist has run into me.

I've been in a few accidents in the past 6 years. In two of them I was rear-ended in traffic; in one I was side-swiped by a guy who was probably high on weed and had no car insurance. I've avoided getting T-boned TWICE by assuming green does NOT automatically mean go. Now I have a dash cam.

I would feel better with more autonomous vehicles TBH.

I don't know this person's situation, i.e., if she was following the rules of the road, nor if there is a design error in the autonomous vehicle. But it's worth investigating.
 

Future

Member
Need to learn more before trying to hate on autonomous vehicles. Who knows why this fucked up and the pedestrian could have been at fault.
 

iamblades

Member
Yes, it's a fascinating topic in the sense that it pertains to what we call an 'attribution problem'. The machine merely executes its program, and any error it causes is considered to be the direct result of faulty programming; that is why we consider machines to be deterministic. The problem with that is, from a human perspective, we are unable to attribute fault and/or guilt to something that cannot act otherwise. In essence, the presumption of guilt requires the presumption of free will. Since machines do not possess free will and cannot choose to act otherwise, they cannot be considered to be at fault.

Humans are hard-wired to attribute causes to behaviors and motives; we absolutely need something to be guilty when misery happens. The exception to that would be fate or the natural order, something that humans cannot influence, like certain natural catastrophes for example. But the machine is not something driven by fate; it is something created by humans and therefore something that can be influenced or controlled. If we were to follow that train of thought, we would necessarily end up at the programmer himself, but since we know that programmers cannot ever account for every possible variable, we'd have to simply assume that it was fate.

I highly doubt that this is something that can ever be completely solved through mere technical means, since some accidents simply can't be avoided. It's not hard to see how this is an ethical antinomy, a contradiction between two apparently equally valid principles. I'm very interested to see how future advocates, magistrates and law-makers will tackle this problem. It will probably boil down to the responsibility of the driver, because it's ultimately his decision whether he engages the autonomous driving function or not. But that would lead to all sorts of other ramifications. Would you push the button, knowing full well that the function it activates cannot be trusted with absolute certainty?

I don't really believe humans have free will either, but it is still worth keeping track of who (or what) is at fault for various things, if only for the purposes of incentives and predicting the future behavior of various actors. Even if we are dealing with mechanistic systems, they (we) are not closed loops; they respond to incentives and disincentives to some degree, which means determining responsibility for an action is still important.


I also disagree strongly with this latter statement. Engineering that leaves stuff up to chance is shitty engineering. There is no reason, aside from the limitations of our knowledge, that a properly engineered system should have any accidents barring mechanical failure (i.e. the brakes went out) or force majeure (a tree fell on the vehicle, basically). The important thing to note is that the system in question involves more than just the vehicles; it involves roads and sidewalks and signage and the humans and animals and every other thing that impacts the functioning of the system as a whole.

We don't accept accidents as a matter of course in airline travel, we build the systems to avoid them, and when we fail we redesign the system to fix it.

This is why I've never understood why people always bring up the trolley problem IRT autonomous vehicles (or trolleys for that matter, heh). If you are engineering a system where an AI needs to make life or death decisions like that, you have already fucked up. You have a computer system that knows how far it can see and its current stopping distance given its velocity. Theoretically that is all you need to eliminate 100% of accidents, if the system is engineered properly and there are no mechanical failures.

I expect that the first few years of autonomous vehicles will find many, many issues with our physical infrastructure and how we structure traffic flows that we just don't consider today, because we treat it as a behavior issue to be fixed with law enforcement instead of an engineering issue to be fixed with better design.
 
Last edited:

Dice

Pokémon Parentage Conspiracy Theorist
How did this happen, you ask? I live in the Phoenix metro and can affirm 100% that pedestrians here have a suicidal disregard for traffic. They will jaywalk anywhere and everywhere no matter what the conditions. They don't need a median to feel safe, they'll just stand right in the middle of two lanes until they can progress further, and they'll often progress very slowly expecting everyone to see them and be able to adjust for their presence without causing an accident. Cyclists in particular will fly out in random places across multiple busy lanes without any warning at all and I say that from a position of culture shock even as a cyclist of many years before coming here a year and a half ago. I also wouldn't be surprised if it's connected to weed since people are by no means waiting for it to be legal for recreation to use it that way. The roads are absolute madness here.
 

HoodWinked

Member
some more details of the accident.
https://www.abc57.com/news/tempe-pd-uber-self-driving-vehicle-hits-kills-pedestrian

managed to figure out that she was struck at this spot

https://www.google.com/maps/@33.436...4!1stAmJwSf7NzUy04-2OmZ5gw!2e0!7i13312!8i6656

They are saying she was trying to cross the road at this location even though an intersection was close by where she could have crossed safely, which seems very odd. Also, the accident took place around 10pm, so the Uber vehicle should have been easily visible due to the headlights and the roads being pretty clear and straight. I wonder if she intentionally tried to get clipped, given how high-profile the Uber automated vehicles were. The speed limit on this road is apparently 45, and the vehicle was going 40, so that's quite a lot of force.
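To put rough numbers on "quite a lot of force": the mass and braking figures below are assumptions (a Volvo XC90 is roughly two tonnes, and ~7 m/s² is a typical hard-braking deceleration on dry asphalt), only the speeds are from the reports.

```python
# Back-of-the-envelope energy and braking distance at the reported speed.

def kinetic_energy_kj(mass_kg: float, speed_mph: float) -> float:
    """Kinetic energy of the vehicle in kilojoules."""
    v = speed_mph * 0.44704          # mph -> m/s
    return 0.5 * mass_kg * v ** 2 / 1000

def braking_distance_m(speed_mph: float, decel_ms2: float = 7.0) -> float:
    """Distance to stop once the brakes are fully applied."""
    v = speed_mph * 0.44704
    return v ** 2 / (2 * decel_ms2)

for mph in (25, 40):
    print(f"{mph} mph: ~{kinetic_energy_kj(2000, mph):.0f} kJ, "
          f"~{braking_distance_m(mph):.0f} m to stop once braking starts")

# 25 mph: ~125 kJ, ~9 m to stop once braking starts
# 40 mph: ~320 kJ, ~23 m to stop once braking starts
```

So even with instant, full braking the car needs a couple of car-lengths to stop from 40, and if braking never starts (as appears to be the case here) the full impact energy is delivered.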
 
Last edited:

Alx

Member
I also disagree strongly with this latter statement. Engineering that leaves stuff up to chance is shitty engineering. There is no reason, aside from the limitations of our knowledge, that a properly engineered system should have any accidents barring mechanical failure (i.e. the brakes went out) or force majeure (a tree fell on the vehicle, basically). The important thing to note is that the system in question involves more than just the vehicles; it involves roads and sidewalks and signage and the humans and animals and every other thing that impacts the functioning of the system as a whole.

We don't accept accidents as a matter of course in airline travel, we build the systems to avoid them, and when we fail we redesign the system to fix it.

This is why I've never understood why people always bring up the trolley problem IRT autonomous vehicles (or trolleys for that matter, heh). If you are engineering a system where an AI needs to make life or death decisions like that, you have already fucked up. You have a computer system that knows how far it can see and its current stopping distance given its velocity. Theoretically that is all you need to eliminate 100% of accidents, if the system is engineered properly and there are no mechanical failures.

There is no such thing as 100% in engineering (or in anything, really). Everything has a failure rate, and the goal is to reduce it, not make it disappear, because the cost of fixing the last fractions of a percent is increasingly high, and becomes infinite if you want to reach zero. Sensors aren't foolproof, recognition systems aren't foolproof (and are in fact never designed to reach 100%, since perfect accuracy would be a sign of a training error), mechanical parts aren't foolproof, etc.
Yes, autonomous cars will very probably save more lives than they will take in the end. But they will take some lives, and probably some that could have been saved by a human driver (while saving some that a human driver wouldn't have). The goal of the engineers is to get the best trade-off in the end (and the cost of the system they'll be engineering will be part of the trade-off), but nobody should expect 100% reliability.
It's the common mistake regular people make when new technology is introduced: "this will solve world hunger", "diseases will disappear", "everybody will have one at home". No, it/they won't; new stuff makes life easier, but the issues it's addressing will always be there.
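To put made-up numbers on why the last fraction of a percent matters: even a very good detection rate, multiplied by millions of pedestrian encounters, still leaves a non-zero number of misses. These figures are purely illustrative, not real detection rates from any system.

```python
# Illustration: why "no 100%" matters at fleet scale.

def expected_misses(detection_rate: float, encounters: int) -> float:
    """Expected number of undetected pedestrians over many encounters."""
    return (1.0 - detection_rate) * encounters

for rate in (0.99, 0.999, 0.99999):
    print(f"detection rate {rate:.5f}: "
          f"~{expected_misses(rate, 10_000_000):,.0f} misses per 10M encounters")

# detection rate 0.99000: ~100,000 misses per 10M encounters
# detection rate 0.99900: ~10,000 misses per 10M encounters
# detection rate 0.99999: ~100 misses per 10M encounters
```

Each extra "nine" is enormously expensive to engineer, and you never get to zero, which is exactly the trade-off point above.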
 

llien

Member
I have several questions. If there was an operator behind the steering wheel as a safeguard, how did this accident still happen?
This is a Level 4 ("mind off") or (more likely) Level 5 ("steering wheel optional") type of vehicle, and the "safeguard" might have gotten distracted, or we might even have a case where a human driver would have done the same.

For context, the Tesla that killed its driver was Level 1 ("hands on").

Insurance companies expect that self-driving cars will drastically reduce the number of accidents.
It doesn't mean they will never happen, however.
 
Last edited:

lil puff

Member
Kinda irrelevant to this case, but I have caught myself stepping a foot too far off the sidewalk, turning my head, and having a truck or bus whizz past way closer to me than I'd like.

We all should probably pay more attention to the roads; whether it's an autonomous vehicle or a human driver, accidents will happen. I see too many people buried in their phones even while crossing the street in traffic. There have been horrible incidents in NY lately with people getting hit. I am not putting anyone at fault here, just be more careful.
 
I also disagree strongly with this latter statement. Engineering that leaves stuff up to chance is shitty engineering. There is no reason, aside from the limitations of our knowledge, that a properly engineered system should have any accidents barring mechanical failure (i.e. the brakes went out) or force majeure (a tree fell on the vehicle, basically). The important thing to note is that the system in question involves more than just the vehicles; it involves roads and sidewalks and signage and the humans and animals and every other thing that impacts the functioning of the system as a whole.

I value your know-how and I'd like to state that I'm not against autonomous cars. I think they are already safer than human drivers in many cases, which is an amazing feat of engineering, and I'd not like to diminish that in any way. However, limitation of knowledge, technical failures and force majeure are three big factors that cannot simply be brushed aside. No system is absolutely fail-safe.

We don't accept accidents as a matter of course in airline travel, we build the systems to avoid them, and when we fail we redesign the system to fix it.

I agree, planes are amazingly safe, but accidents and failures, even if rare, still happen. Sometimes it's human error, sometimes it's not. Also, airline travel is a lot simpler compared to something like ground traffic, where there are a lot more variables to consider.

This is why I've never understood why people always bring up the trolley problem IRT autonomous vehicles (or trolleys for that matter, heh). If you are engineering a system where an AI needs to make life or death decisions like that, you have already fucked up. You have a computer system that knows how far it can see and its current stopping distance given its velocity. Theoretically that is all you need to eliminate 100% of accidents, if the system is engineered properly and there are no mechanical failures.

I'm sure it's possible to reduce accidents to almost zero in a fully controlled environment, but that's not going to happen for a very long time. First, you would need to get rid of manual drivers, and second, upgrading infrastructure won't happen overnight. Then you'd also have to consider that there are political decisions to be taken, which won't be an easy task. In this case for example, it seems like the accident was impossible to avoid, so even if your engineering is perfect, there are variables you simply cannot control.

Until then, there is the question of what value hierarchy an autonomous driving car would operate on. Even if we assume that accidents won't happen, I'm certain that a careful engineer will need to account for these cases. You'd be a lousy engineer if you didn't plan for the worst. So let's take the following two principles as a simple example:

a. Always protect the driver
b. Always protect other traffic participants

Given a situation in which these principles come into conflict with each other, which one is more important? If it's the first one, I may not want my car to make that decision, since I'd prefer to risk my own life rather than the life of others. Other people may not agree. If it's the second one, I'd like to know as a customer that the autonomous car I'm using is potentially programmed to kill me.

If we add more operating principles, things will only become more complicated. Forced to make a choice, will it protect women and children first? Would the car rather harm a larger group of people rather than a smaller group of children? Does the car take animals into account and if so, how? etc...

I would love an autonomous driving car, but I would still like to be informed about its operating principles. I also wonder if there's going to be a standard, or if different systems will employ different value hierarchies. Will I be able to buy a deontological car, or a utilitarian one? Will I be able to choose from different value hierarchies, so that the car would reflect my own? etc...

Simply assuming that accidents will never happen is too easy.
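To make the "value hierarchy" idea concrete, here's a minimal sketch of what choosing between hierarchies could look like as configuration. The principle names, the orderings, and the example maneuvers are entirely hypothetical; nothing here reflects how any real vendor ranks outcomes.

```python
# Hypothetical value hierarchies as ordered priority lists (index 0 = most important).
UTILITARIAN = ["minimize_total_harm", "protect_vulnerable_road_users",
               "protect_occupants", "protect_property"]
OCCUPANT_FIRST = ["protect_occupants", "protect_vulnerable_road_users",
                  "minimize_total_harm", "protect_property"]

def pick_maneuver(options: dict[str, set[str]], hierarchy: list[str]) -> str:
    """Choose the maneuver whose most important violated principle ranks lowest."""
    def worst_violation(violated: set[str]) -> int:
        # Lower index = higher-priority principle; len(hierarchy) = nothing violated.
        return min((hierarchy.index(p) for p in violated), default=len(hierarchy))
    return max(options, key=lambda m: worst_violation(options[m]))

# Each candidate maneuver is tagged with the principles it would violate.
options = {"swerve_into_barrier": {"protect_occupants"},
           "brake_straight": {"protect_vulnerable_road_users"}}
print(pick_maneuver(options, UTILITARIAN))     # swerve_into_barrier
print(pick_maneuver(options, OCCUPANT_FIRST))  # brake_straight
```

Same situation, different hierarchy, different outcome; that's exactly the question of whether the customer gets to know (or choose) which ordering ships in the car.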
 
Last edited:
some more details of the accident.
https://www.abc57.com/news/tempe-pd-uber-self-driving-vehicle-hits-kills-pedestrian

managed to figure out that she was struck at this spot

https://www.google.com/maps/@33.436...4!1stAmJwSf7NzUy04-2OmZ5gw!2e0!7i13312!8i6656

They are saying she was trying to cross the road at this location even though an intersection was close by where she could have crossed safely, which seems very odd. Also, the accident took place around 10pm, so the Uber vehicle should have been easily visible due to the headlights and the roads being pretty clear and straight. I wonder if she intentionally tried to get clipped, given how high-profile the Uber automated vehicles were. The speed limit on this road is apparently 45, and the vehicle was going 40, so that's quite a lot of force.

Hmm. Sounds more and more like the pedestrian was at fault. Still, I wonder about her crossing the street and whether a real driver would have been able to stop where the autonomous vehicle did not.
 

Dunki

Member
Hmm. Sounds more and more like the pedestrian was at fault. Still, I wonder about her crossing the street and whether a real driver would have been able to stop where the autonomous vehicle did not.
A person was sitting there and could easily have intervened if he had seen it. So either he did not do his job correctly, or there was nothing he could do. Right now I would blame the woman for not following the rules.
 
The big question for me is: would she have been killed anyway if someone were behind the wheel?

Based on early information, yes:

“It’s very clear it would have been difficult to avoid this collision in any kind of mode [autonomous or human-driven] based on how she came from the shadows right into the roadway,” Moir told the paper, adding that the incident occurred roughly 100 yards from a crosswalk. “It is dangerous to cross roadways in the evening hour when well-illuminated managed crosswalks are available,” she said.

Though the vehicle was operating in autonomous mode, a driver was present in the front seat. But Moir said there appears to be little he could have done to intervene before the crash.

“The driver said it was like a flash, the person walked out in front of them,” Moir said. “His first alert to the collision was the sound of the collision.”
 

iamblades

Member
There is no such thing as 100% in engineering (or in anything, really). Everything has a failure rate, and the goal is to reduce it, not make it disappear, because the cost of fixing the last fractions of a percent is increasingly high, and becomes infinite if you want to reach zero. Sensors aren't foolproof, recognition systems aren't foolproof (and are in fact never designed to reach 100%, since perfect accuracy would be a sign of a training error), mechanical parts aren't foolproof, etc.
Yes, autonomous cars will very probably save more lives than they will take in the end. But they will take some lives, and probably some that could have been saved by a human driver (while saving some that a human driver wouldn't have). The goal of the engineers is to get the best trade-off in the end (and the cost of the system they'll be engineering will be part of the trade-off), but nobody should expect 100% reliability.
It's the common mistake regular people make when new technology is introduced: "this will solve world hunger", "diseases will disappear", "everybody will have one at home". No, it/they won't; new stuff makes life easier, but the issues it's addressing will always be there.

Except those three things have basically happened somewhat.

The point is not that engineering is infallible. I said in my post that mechanical failure can always happen. The point is that we can and should be designing self-driving car systems to be perfect in the absence of said failure. The decision-making algorithm should be perfect, and it's not hard to make such a decision algorithm in such a way that it is effectively a two-variable problem. To greatly oversimplify, you just have to keep x > y, where x = distance to objects in front of your path and y = your stopping or avoidance distance. The engineering challenge is in the sensors and computer vision systems that are supplying the data for this algorithm.
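Here's a toy version of that x > y invariant turned around: never drive faster than your sensing range lets you stop. The deceleration and reaction-time figures are assumptions for illustration, not values from any real system.

```python
# Invariant: (reaction distance + braking distance) must fit inside the
# distance the sensors can reliably perceive. Solve for the max safe speed.

def max_safe_speed_ms(sensing_range_m: float, decel_ms2: float = 7.0,
                      reaction_time_s: float = 0.5) -> float:
    """Highest speed v such that v*t + v^2/(2a) <= sensing range."""
    a, t, r = decel_ms2, reaction_time_s, sensing_range_m
    # Quadratic in v: v^2 + 2*a*t*v - 2*a*r = 0, take the positive root.
    return -a * t + (a * a * t * t + 2 * a * r) ** 0.5

for sensing_range in (20, 50, 100):   # metres of reliable perception
    v = max_safe_speed_ms(sensing_range)
    print(f"{sensing_range:>3} m of sensing -> max ~{v * 2.23694:.0f} mph")

# 20 m of sensing -> max ~30 mph
# 50 m of sensing -> max ~52 mph
# 100 m of sensing -> max ~76 mph
```

Which is exactly why the hard part is the perception: if the sensors only reliably "see" 20 m at night, the invariant says the car shouldn't have been doing 40 in the first place.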

I value your know-how and I'd like to state that I'm not against autonomous cars. I think they are already safer than human drivers in many cases, which is an amazing feat of engineering, and I'd not like to diminish that in any way. However, limitation of knowledge, technical failures and force majeure are three big factors that cannot simply be brushed aside. No system is absolutely fail-safe.



I agree, planes are amazingly safe, but accidents and failures, even if rare, still happen. Sometimes it's human error, sometimes it's not. Also, airline travel is a lot simpler compared to something like ground traffic, where there are a lot more variables to consider.



I'm sure it's possible to reduce accidents to almost zero in a fully controlled environment, but that's not going to happen for a very long time. First, you would need to get rid of manual drivers, and second, upgrading infrastructure won't happen overnight. Then you'd also have to consider that there are political decisions to be taken, which won't be an easy task. In this case for example, it seems like the accident was impossible to avoid, so even if your engineering is perfect, there are variables you simply cannot control.

Until then, there is the question of what value hierarchy an autonomous driving car would operate on. Even if we assume that accidents won't happen, I'm certain that a careful engineer will need to account for these cases. You'd be a lousy engineer if you didn't plan for the worst. So let's take the following two principles as a simple example:

a. Always protect the driver
b. Always protect other traffic participants

Given a situation in which these principles come into conflict with each other, which one is more important? If it's the first one, I may not want my car to make that decision, since I'd prefer to risk my own life rather than the life of others. Other people may not agree. If it's the second one, I'd like to know as a customer that the autonomous car I'm using is potentially programmed to kill me.

If we add more operating principles, things will only become more complicated. Forced to make a choice, will it protect women and children first? Would the car rather harm a larger group of people rather than a smaller group of children? Does the car take animals into account and if so, how? etc...

I would love an autonomous driving car, but I would still like to be informed about its operating principles. I also wonder if there's going to be a standard, or if different systems will employ different value hierarchies. Will I be able to buy a deontological car, or a utilitarian one? Will I be able to choose from different value hierarchies, so that the car would reflect my own? etc...

Simply assuming that accidents will never happen is too easy.

I said as much earlier; that should still be the end goal, however.

No, there isn't: the algorithm should be designed not to get into accidents, not to get into less-bad accidents. If the algorithm gets into an accident, that by definition means that something happened that the system could not see or predict, as in this case with the pedestrian stepping out into the road, apparently. By definition, if the system could not see or predict the event in question, there is no value judgement to be made, because there is no data on which to base such a decision.
 
Last edited:
No, there isn't: the algorithm should be designed not to get into accidents, not to get into less-bad accidents. If the algorithm gets into an accident, that by definition means that something happened that the system could not see or predict, as in this case with the pedestrian stepping out into the road, apparently. By definition, if the system could not see or predict the event in question, there is no value judgement to be made, because there is no data on which to base such a decision.

I'm not talking about situations that the system cannot see; it's quite evident that no reaction would be possible in this case. I'm talking about spontaneous variables that change a situation and make a decision by the system necessary. For example, a big boar suddenly jumping onto the road, or a tree falling over; in essence, a situation where the system would need to react and make a 'choice'. There's only so much you can do within the realm of physical limitations.

I still maintain that not every accident can simply be avoided. Sometimes shit happens and you need to take that into account.
 
Last edited:

Alx

Member
The decision-making algorithm should be perfect, and it's not hard to make such a decision algorithm in such a way that it is effectively a two-variable problem. To greatly oversimplify, you just have to keep x > y, where x = distance to objects in front of your path and y = your stopping or avoidance distance. The engineering challenge is in the sensors and computer vision systems that are supplying the data for this algorithm.

But that challenge is the reason why the system can't be perfect, even without a mechanical failure. The computer vision will never recognize 100% of obstacles. The sensors will fail to measure things because of environmental perturbations. And the decision algorithm won't always have all the required parameters to detect a dangerous situation (it's not all speed and distance, but also road surface, estimated trajectories, vehicle weight, tire wear, behaviour predictions,...).
Cars that can keep a constant distance to obstacles have existed for years; I've been in commercial models in the 2000s. But that's not what autonomous driving is. A self-driving car does more than follow simple rules; it must adapt to unexpected situations. That's why it's made possible by machine learning (among other things). And I'll say it again: machine learning isn't designed to be 100% right.
 

Mohonky

Member
I don't think there should ever be an autonomous vehicle unless the vehicle is used strictly within the confines of a road or system pedestrians don't have access to, and all other vehicles are able to communicate with each other.

On an everyday road an autonomous vehicle, provided it still works perfectly at keeping lanes etc., still lacks imagination. It won't recognise a person who is impaired, or expect children with no road sense to make sudden, unexpectedly poor decisions, appearing anywhere from behind parked cars and so on.
 

joshcryer

it's ok, you're all right now
They released video from the Uber:

People are saying they wouldn't have seen her, but the human eye has much better night vision than that camera. The safety driver is 100% at fault here and should rightly be held responsible for this accident.
 

lachesis

Member
Wonder if it didn't have any infrared technology or night vision thingie? Obviously a long way to go it seems... but dang. Maybe it's my monitor's limitation but the biker's so hard to see.

But to be honest... it's the people that I can't trust more than the machines on the road. Kinda makes me feel sad... :(
 

joshcryer

it's ok, you're all right now
Wonder if it didn't have any infrared technology or night vision thingie? Obviously a long way to go it seems... but dang. Maybe it's my monitor's limitation but the biker's so hard to see.

The dynamic range on most cameras is far inferior to the human eye; the camera didn't pick her up, but the human eye would have seen a lot more. You need a $20k camera to be able to see things at night like the human eye does. They'll likely reproduce the incident and see if she was visible, and she very likely was. Mind you, the driver's eyes were probably messed up from looking at his phone too, so that could have contributed. But in a truly safety-conscious scenario the driver would be watching the road the entire time and their eyes would be adjusted to the dark.
 

lachesis

Member
The dynamic range on most cameras is far inferior to the human eye; the camera didn't pick her up, but the human eye would have seen a lot more. You need a $20k camera to be able to see things at night like the human eye does. They'll likely reproduce the incident and see if she was visible, and she very likely was. Mind you, the driver's eyes were probably messed up from looking at his phone too, so that could have contributed. But in a truly safety-conscious scenario the driver would be watching the road the entire time and their eyes would be adjusted to the dark.

I see - it's quite amazing how our body functions. However, my night vision seems to be getting worse as I get older! LOL.
What about radar technology? I thought modern passenger cars do have radar that handles emergency braking or such...?
 

joshcryer

it's ok, you're all right now
I see - it's quite amazing how our body functions. However, my night vision seems to be getting worse as I get older! LOL.
What about radar technology? I thought modern passenger cars do have radar that handles emergency braking or such...?

It should, but it's likely one of those rare cases where they didn't catch it in real time. These cars have thousands of disengagements (where the safety driver has to intervene), and every case has to be looked at to help the car figure out what it was doing wrong. It's possible pedestrians illegally crossing were usually seen far in advance, causing the safety driver to disengage; in that case they didn't have the data to account for actually hitting someone.

The car didn't brake at all in this accident, so the vehicle is lacking something for sure.
 

lachesis

Member
It should, but it's likely one of those rare cases where they didn't catch it in real time. These cars have thousands of disengagements (where the safety driver has to intervene), and every case has to be looked at to help the car figure out what it was doing wrong. It's possible pedestrians illegally crossing were usually seen far in advance, causing the safety driver to disengage; in that case they didn't have the data to account for actually hitting someone.

The car didn't brake at all in this accident, so the vehicle is lacking something for sure.

Talk about some bad luck for all of them involved in this accident - especially that poor biker. Maybe it wasn't just luck - I guess it was really a blind spot for the technology that should obviously be remedied. Thx for your explanation.
 

Gandara

Member
For those interested in the video, there is one at the link. Note it cuts right before the woman gets hit, but you can see how quickly it happens. I couldn't even have reacted that fast to what happened, and I can see why the car didn't either. It also shows what the driver was doing at that time as well.

Article and video at link
 

Alx

Member
They released video from the Uber:

People are saying they wouldn't have seen her, but the human eye has much better night vision than that camera. The safety driver is 100% at fault here and should rightly be held responsible for this accident.


Yup, and the woman doesn't show any sign of urgency while crossing the road, even though she would probably have been aware of an oncoming vehicle with its lights on. Maybe she expected the driver to slow down.
 

Dunki

Member
Given the video, there was no way you could have seen or reacted to this, in my opinion. For me she came out of nowhere.
 

Alx

Member
Given the video, there was no way you could have seen or reacted to this, in my opinion. For me she came out of nowhere.

Like joshcryer mentioned, a human eye would probably have seen what is not visible in the camera feed. The woman wasn't far from a lamppost; she had light falling on her. But a camera with regular dynamic range can't see it.

 
Last edited:

Gandara

Member
That's a good point about human vision. My night vision is crap, so I hate driving at night, but it's a good reminder to never be distracted when driving.

I'm still thinking about what was going through the bicyclist's head to walk slowly across the street. I swear people think that everyone should stop for them no matter what they're doing. Seems like both the driver and the bicyclist were not paying attention. I never assume a driver can see me. And lately I have to assume pedestrians will do irrational things if they are jaywalking.

There's this right turn I make where pedestrians seem to pop up from nowhere, and this is during the day. I'm looking left before I turn right, and they can see that I don't see them until I'm ready to turn, but they go in front of the car anyway instead of going behind. Also, bicyclists love to go against traffic, so they can literally pop up really quickly when I'm making the right turn. It's one of those situations where either you'll be stuck forever trying to make a right turn or you go when you have what you hope is your first opening.
 

Dubloon7

Banned
I have several questions. If there was an operator behind the steering wheel as a safeguard, how did this accident still happen?

This is a big story because we know automation is coming, and transit/logistics/supply chain is one of the biggest pet projects in tech. I wonder if this will serve as an inhibitor in any way? What does insurance for these incidents look like? What if there are passengers involved?



source
Watch the police-released video. There is NO WAY a car or even a human would have stopped/avoided this accident, as it was the cyclist's fault for crossing the dark street. The driver should NOT have taken her eyes off of the road, but it's fully the victim's fault for negligence.
 