Hi,
I've done some research into this in the last couple of weeks.
On Fri, 28 Apr 2017 20:21:00 +0300, Bill Kontos <vkontogpls@gmail.com> wrote:
> On self-driving cars, at the moment, the driver is required to sit in the driver's seat, ready to take the controls. The moment the driver touches the gas pedal, the car is under his control. So the system is designed in such a way that the driver is actually in control. In the only accident so far in the history of Tesla, the driver was actually sleeping instead of paying attention.
This kind of car is what the SAE defines as a Level 2 car; full autonomy is Level 5. See standard J3016 [1] (costs nothing, but requires an account).
> Also, the issue of preventing the AI from optimising out some edge cases can be solved by carefully planning the tests that the neural network is trained on, which includes hitting the cyclist instead of the people at a bus stop, or hitting the tree instead of the animal, etc. I'm confident this stuff has already been taken care of, but of course I would love it if Tesla's code were open source. Although I fail to see how they could continue making revenue if they open-sourced their code (as that is basically 50% of what they are selling).
I'm sad to say that it isn't even close to solved. The only two concrete ideas I found are:
1. Egoism. The driver of the car always wins.
2. Utilitarianism. "The Greater Good": the best outcome for the most people.
There is also a third one, which sidesteps most of the problem:
3. Random. Generating the different possible crash scenarios and selecting one at random.
Even if we settled on utilitarianism as a good choice, we would have to compute, for every possible outcome of the accident, the sum over participants of the probability of harm times the value assigned to the harmed participant. Our sensors aren't even close to good enough to estimate those probabilities, and we have no idea what value to assign to a participant. And on top of that, the sensors and the computer would have to enumerate how many outcomes there could be, calculate that value sum for each, and then pick the best one.
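Just to make the shape of that calculation concrete, here is a minimal sketch of what such a utilitarian selector would look like. Everything in it is hypothetical: the participant "values", the harm probabilities, and the outcome labels are invented for illustration, and assigning those numbers in reality is exactly the unsolved part.

```python
from dataclasses import dataclass

@dataclass
class Participant:
    name: str
    value: float  # how to assign this is the open ethical question

@dataclass
class Outcome:
    label: str
    harms: list  # list of (Participant, probability of harm) pairs

def expected_harm(outcome):
    """Sum over participants of P(harm) * value of that participant."""
    return sum(p.value * prob for p, prob in outcome.harms)

def utilitarian_choice(outcomes):
    """Pick the outcome with the lowest expected harm."""
    return min(outcomes, key=expected_harm)

# Invented example: both participants valued equally, probabilities guessed.
driver = Participant("driver", 1.0)
cyclist = Participant("cyclist", 1.0)

outcomes = [
    Outcome("swerve into tree", [(driver, 0.4)]),
    Outcome("hit cyclist", [(cyclist, 0.9)]),
]
print(utilitarian_choice(outcomes).label)  # "swerve into tree": 0.4 < 0.9
```

Note that every input to this toy (the enumeration of outcomes, the probabilities, the values) is exactly what the sensors and the ethics can't deliver today.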
Then you have to consider that in many countries, programming such a targeting algorithm, one that decides who gets killed, would count as planning a murder. And every casualty in an accident would have been murdered by the people who created the algorithm, because it is no longer a reaction in the moment, as it is for human drivers, but something precalculated and planned.
Then there is the problem of who would buy utilitarian cars. [2]
Greetings Hannes
[1] http://standards.sae.org/j3016_201609/
[2] https://arxiv.org/abs/1510.03346