<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">2017-04-28 16:17 GMT+02:00 Luke Kenneth Casson Leighton <span dir="ltr"><<a href="mailto:lkcl@lkcl.net" target="_blank">lkcl@lkcl.net</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
the rest of the article makes a really good point, which has me<br>
deeply concerned now that there are fuckwits out there making<br>
"driverless" cars, toying with people's lives in the process. you<br>
have *no idea* what unexpected decisions are being made, what has been<br>
"optimised out".<br></blockquote><div> </div><div>That's no different from regular "human" programming. If you employ IA programming you still can validate the code like you would that of a normal human. </div><div><br></div><div>Or build a second independ IA for the "four" eye principle.</div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
with aircraft it's a different matter: the skies are clear, it's a<br>
matter of physics and engineering, and the job of taking off, landing<br>
and changing direction is, if extremely complex, actually just a<br>
matter of programming. also, the PILOT IS ULTIMATELY IN CHARGE.<br>
<br>
cars - where you could get thrown unexpected completely unanticipated<br>
scenarios involving life-and-death decisions - are a totally different<br>
matter.<br>
<br>
the only truly ethical way to create "driverless" cars is to create<br>
an actual *conscious* machine intelligence with which you can have a<br>
conversation, and *TEACH* it - through a rational conversation - what<br>
the actual parameters are for (a) the laws of the road (b) moral<br>
decisions regarding life-and-death situations.<br></blockquote><div><br></div><div>The problem is nuance. Suppose a cyclist crosses your path, and the only way to avoid a collision is to drive into a group of people waiting to cross behind you. The choice seems logical: hit the cyclist. Many are saved by killing/injuring/bumping one.</div><div><br></div><div>Humans are notoriously bad at making those decisions themselves. We only consider the cyclist; that's our focus. The group becomes a secondary objective.</div><div><br></div><div>Many people are killed or injured trying to avoid hitting animals: you swerve to avoid the collision, only to find your vehicle becoming uncontrollable, or a new obstacle on your new trajectory, usually a tree.</div><div><br></div><div>The real crisis comes from outside control. The car can be hacked and weaponized. That works with humans as well, but it is more difficult and takes more time. Programming humans takes time.</div><div><br></div><div>Or some other Asimov-related issue ;-)</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
applying genetic algorithms to driving of vehicles is a stupid,<br>
stupid idea because you cannot tell what has been "optimised out" -<br>
just as the guy from this article says.<br>
<div class="HOEnZb"><div class="h5"><br>
l.<br>
<br>
______________________________<wbr>_________________<br>
arm-netbook mailing list <a href="mailto:arm-netbook@lists.phcomp.co.uk">arm-netbook@lists.phcomp.co.uk</a><br>
<a href="http://lists.phcomp.co.uk/mailman/listinfo/arm-netbook" rel="noreferrer" target="_blank">http://lists.phcomp.co.uk/<wbr>mailman/listinfo/arm-netbook</a><br>
Send large attachments to <a href="mailto:arm-netbook@files.phcomp.co.uk">arm-netbook@files.phcomp.co.uk</a></div></div></blockquote></div><br></div></div>