I also disagree strongly with this latter statement. Engineering that leaves stuff up to chance is shitty engineering. There is no reason aside from the limitations of our knowledge that a properly engineered system should have any accidents barring mechanical failure (i.e. the brakes went out) or force majeure (a tree fell on the vehicle, basically). The important thing to note is that the system in question involves more than just the vehicles; it involves roads and sidewalks and signage and the humans and animals and every other thing that impacts the functioning of the system as a whole.
I value your know-how and I'd like to state that I'm not against autonomous cars. I think they are already safer than human drivers in many cases, which is an amazing feat of engineering, and I don't want to diminish that in any way. However, limitations of knowledge, technical failures and force majeure are three big factors that cannot simply be brushed aside. No system is absolutely fail-safe.
We don't accept accidents as a matter of course in airline travel; we build the systems to avoid them, and when we fail, we redesign the system to fix it.
I agree, planes are amazingly safe, but accidents and failures, even if rare, still happen. Sometimes it's human error, sometimes it's not. Also, airline travel is a lot simpler compared to something like ground traffic, where there are a lot more variables to consider.
This is why I've never understood why people always bring up the trolley problem IRT autonomous vehicles (or trolleys, for that matter, heh). If you are engineering a system where an AI needs to make life-or-death decisions like that, you have already fucked up. You have a computer system that knows how far it can see and its current stopping distance given its velocity. Theoretically that is all you need to eliminate 100% of accidents if the system is engineered properly and there are no mechanical failures.
I'm sure it's possible to reduce accidents to almost zero in a fully controlled environment, but that's not going to happen for a very long time. First, you would need to get rid of manual drivers, and second, upgrading infrastructure won't happen overnight. Then you'd also have to consider that there are political decisions to be made, which won't be an easy task. In this case, for example, it seems like the accident was impossible to avoid, so even if your engineering is perfect, there are variables you simply cannot control.
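To put the "how far it can see and its current stopping distance" idea into numbers, here's a back-of-the-envelope sketch. It's just basic kinematics; every parameter value (reaction time, deceleration, sensor range) is made up by me for illustration and isn't taken from any real vehicle:

```python
# Back-of-the-envelope: a car that never drives faster than it can stop
# within its own sensor range. All numbers are illustrative, not real AV specs.

def stopping_distance(speed_mps, reaction_time_s=0.2, max_decel_mps2=6.0):
    """Distance covered during the reaction time plus braking distance v^2 / (2a)."""
    return speed_mps * reaction_time_s + speed_mps ** 2 / (2 * max_decel_mps2)

def max_safe_speed(sensor_range_m, reaction_time_s=0.2, max_decel_mps2=6.0):
    """Highest speed whose stopping distance still fits inside the sensor range.
    Solves v*t + v^2/(2a) = range for v via the quadratic formula."""
    a = 1.0 / (2 * max_decel_mps2)
    b = reaction_time_s
    c = -sensor_range_m
    return (-b + (b * b - 4 * a * c) ** 0.5) / (2 * a)

print(stopping_distance(20.0))   # ~37 m needed to stop from 20 m/s (72 km/h)
print(max_safe_speed(50.0))      # ~23 m/s is the ceiling for a 50 m sensor range
```

That rule covers everything the car can already see. It says nothing about something that enters the sensor range with less clearance than the stopping distance, which is exactly the kind of variable you cannot engineer away.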
Until then, there is the question of what value hierarchy an autonomous driving car would operate on. Even if we could assume that accidents won't happen, I'm certain that a careful engineer would need to account for these cases. You'd be a lousy engineer if you didn't plan for the worst. So let's take the following two principles as a simple example:
a. Always protect the driver
b. Always protect other traffic participants
Given a situation in which these principles come into conflict with each other, which one is more important? If it's the first one, I may not want my car to make that decision, since I'd prefer to risk my own life rather than the lives of others. Other people may not agree. If it's the second one, I'd like to know as a customer that the autonomous car I'm using is potentially programmed to kill me.
If we add more operating principles, things will only become more complicated. Forced to make a choice, will it protect women and children first? Would the car rather harm a larger group of people than a smaller group of children? Does the car take animals into account and if so, how? etc...
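To make it concrete what I mean by a value hierarchy, here is a toy sketch: an ordered set of principles the car consults when every remaining option harms someone. The rules, names and scenario are entirely made up by me; I'm not claiming any manufacturer implements it this way:

```python
# Toy sketch of a "value hierarchy": rank bad outcomes by an ordered tuple of
# harms. Purely hypothetical -- not any vendor's actual policy.

from dataclasses import dataclass

@dataclass
class Outcome:
    occupants_harmed: int
    pedestrians_harmed: int
    animals_harmed: int

# Principle a: protect the driver/occupants first, then other traffic participants.
def protect_occupants_first(o: Outcome):
    return (o.occupants_harmed, o.pedestrians_harmed, o.animals_harmed)

# Principle b: protect other traffic participants first, then the occupants.
def protect_others_first(o: Outcome):
    return (o.pedestrians_harmed, o.occupants_harmed, o.animals_harmed)

def choose(outcomes, hierarchy):
    """Pick the outcome that ranks best (lowest) under the given value hierarchy."""
    return min(outcomes, key=hierarchy)

# Two bad options: swerve (harms the occupant) or brake straight (harms a pedestrian).
swerve = Outcome(occupants_harmed=1, pedestrians_harmed=0, animals_harmed=0)
brake = Outcome(occupants_harmed=0, pedestrians_harmed=1, animals_harmed=0)

print(choose([swerve, brake], protect_occupants_first))  # picks "brake": pedestrian harmed
print(choose([swerve, brake], protect_others_first))     # picks "swerve": occupant harmed
```

The same two bad options lead to opposite decisions depending purely on which hierarchy is installed, which is why I'd want to know which one my car ships with.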
I would love an autonomous driving car, but I would still like to be informed about its operating principles. I also wonder if there's going to be a standard, or if different systems will employ different value hierarchies. Will I be able to buy a deontological car, or a utilitarian one? Will I be able to choose from different value hierarchies, so that the car reflects my own values? etc...
Simply assuming that accidents will never happen is too easy.