Who is to blame when a self-driving car crashes? This question is posed by Elmar Degenhart, chief executive of Continental (Companies & Markets, March 7), and it is easier to answer than he implies. Just as in a “regular” traffic accident, the movements of each car will need to be scrutinised and compared against the relevant traffic rules. If it turns out that I caused the accident with my vehicle, whether it was self-driving or not, then I (or my insurer) will have to pay the bill.
If my car was driving itself and I believe the “driver” could and should have avoided the accident, then the manufacturer will face a damages claim, and justifiably so. A judge will then decide what I was entitled to expect of my self-driving car, relying among other things on a large pile of legal disclaimers and on my own behaviour. After all, as long as the self-driving car’s vulnerability to malfunctions, hacking and miscalculations isn’t clear, manual intervention needs to remain possible anyway, just as with an aircraft’s autopilot.
What the debate should really be about is an ethical dilemma: what should my self-driving car do if, in an accident, it could save my life by taking that of another (or more than one other) road user?
If self-driving car manufacturers think they can answer that question, let them pick up the bill, too.
This article was published in the Financial Times.