On self-driving cars: at the moment the driver is required to sit in the driver's seat, ready to take over the controls, and the moment the driver touches the gas pedal the car is back under his control. So the system is designed such that the driver is always actually in control. In the one widely-reported fatal Autopilot accident so far, the driver was reportedly not paying attention to the road.

Also, the issue of the AI optimising out certain edge cases can be addressed by carefully planning the scenarios the neural network is trained and tested on -- hitting the cyclist instead of the people at a bus stop, hitting the tree instead of the animal, and so on. I'm confident this has already been taken care of, but of course I would love it if Tesla's code were open source. Although I fail to see how they could keep making revenue if they open-sourced it (as that is basically 50% of what they are selling).
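As a rough sketch of what I mean by planned edge-case tests (the scenario names, harm costs and check_policy() interface here are all made up for illustration, not anything Tesla actually does):

    # Hypothetical harm-weighted scenario tests for a driving policy.
    # Scenario names, costs and the policy interface are illustrative only.
    SCENARIOS = {
        # scenario: {action: assumed harm cost}
        "cyclist_vs_bus_stop": {"hit_cyclist": 1, "hit_bus_stop_crowd": 5},
        "animal_vs_tree":      {"hit_tree": 1, "hit_animal": 2},
    }

    def check_policy(policy):
        # fail loudly if the policy ever picks anything but the
        # least-harm action in a scenario we planned for
        for scenario, options in SCENARIOS.items():
            chosen = policy(scenario, sorted(options))
            least = min(options, key=options.get)
            assert options[chosen] == options[least], \
                "policy chose %r in %r" % (chosen, scenario)

    # usage with a trivial stand-in policy (a real one would be the
    # trained network deciding from sensor input):
    stand_in = {"cyclist_vs_bus_stop": "hit_cyclist",
                "animal_vs_tree": "hit_tree"}
    check_policy(lambda scenario, actions: stand_in[scenario])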
On Fri, Apr 28, 2017 at 5:55 PM, Luke Kenneth Casson Leighton <lkcl@lkcl.net> wrote:
On Fri, Apr 28, 2017 at 3:45 PM, mike.valk@gmail.com <mike.valk@gmail.com> wrote:
2017-04-28 16:17 GMT+02:00 Luke Kenneth Casson Leighton <lkcl@lkcl.net>:
the rest of the article makes a really good point, which has me deeply concerned now that there are fuckwits out there making "driverless" cars, toying with people's lives in the process. you have *no idea* what unexpected decisions are being made, what has been "optimised out".
That's no different from regular "human" programming.
it's *massively* different. a human will follow their training, deploy algorithms and have an *understanding* of the code and what it does.
with monte-carlo-generated iterative algorithms you *literally* have no idea what the code does or how it does it. the only guarantee that you have is that *for the set of inputs CURRENTLY tested to date* you have "known behaviour".
but for the cases which you haven't catered for you *literally* have no way of knowing how the code is going to react.
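a tiny illustration (made-up curve-fitting, nothing to do with any real autopilot): fit a model so that it passes five tests *exactly*, then ask it about an input that was never tested:

    import numpy as np

    # five "tests" that the generated model must pass: roughly y = x
    xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    ys = np.array([0.0, 1.1, 1.9, 3.2, 3.9])

    # a degree-4 polynomial interpolates all five points exactly,
    # so every test in the suite passes...
    coeffs = np.polyfit(xs, ys, deg=4)
    assert np.allclose(np.polyval(coeffs, xs), ys)

    # ...but nothing constrains behaviour on untested inputs:
    print(np.polyval(coeffs, 10.0))   # about -305, nowhere near the
                                      # ~10 the pattern would suggest

the suite is green, yet off-suite behaviour is arbitrary: "known behaviour" only for the inputs tested to date.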
now this sounds very very similar to the human case: yes you would expect human-written code to also have to pass test suites.
but the real difference is highlighted by the following question: when it comes to previously-undiscovered bugs, how the heck are you supposed to "fix" bugs in code when you have *LITERALLY* no idea how that code even works?
and that's what it really boils down to:
(a) in unanticipated circumstances you have literally no idea what the code will do. it could do something incredibly dangerous.
(b) in unanticipated circumstances the chances of *fixing* the bug in the genetically-derived code are precisely zero. the only option is to re-run the generation process with a new set of criteria, producing an entirely new algorithm which is *again* in the same (dangerous) category.
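to illustrate (b): the only available "fix" is a blind regenerate-and-retest loop. a minimal sketch, assuming an entirely hypothetical generate() search standing in for the real monte-carlo / genetic process:

    import random

    def generate(tests):
        # stand-in for a monte-carlo / genetic search: keep trying
        # candidates until one passes every known test
        while True:
            lookup = dict(tests)         # memorises the tested inputs
            default = random.random()    # arbitrary off-test behaviour
            candidate = lambda x, t=lookup, d=default: t.get(x, d)
            if all(candidate(i) == o for i, o in tests):
                return candidate

    def fix_bug(tests, failing_input, expected_output):
        # the only available "fix": extend the suite and re-derive
        # from scratch -- a brand-new, equally opaque algorithm
        return generate(tests + [(failing_input, expected_output)])

    suite = [(1, 2), (2, 4)]
    f = fix_bug(suite, 3, 6)
    assert f(3) == 6    # the known case is now "fixed"...
    print(f(99))        # ...but off-suite behaviour is still arbitrary

note that the returned function is not the old one patched: it is a different artefact entirely, and everything outside the suite is unknown all over again.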
l.
arm-netbook mailing list
arm-netbook@lists.phcomp.co.uk
http://lists.phcomp.co.uk/mailman/listinfo/arm-netbook
Send large attachments to arm-netbook@files.phcomp.co.uk