You may remember that I have previously written a few times about ‘driverless’ cars (see links to previous posts at the bottom of the page). Well, not surprisingly, as the vehicles edge closer to public reality and the technology becomes more developed, there is more news about them. The latest? Morality. Specifically, to what degree should morality be built into a car’s decision-making process?
Should manufacturers create vehicles with various degrees of morality programmed into them, depending on what a consumer wants? Should the government mandate that all self-driving cars share the same value of protecting the greatest good, even if that’s not so good for a car’s passengers? And what exactly is the greatest good? “Is it acceptable for an A.V. (autonomous vehicle) to avoid a motorcycle by swerving into a wall, considering that the probability of survival is greater for the passenger of the A.V., than for the rider of the motorcycle? Should A.V.s take the ages of the passengers and pedestrians into account?” wrote Jean-François Bonnefon, of the Toulouse School of Economics in France.
In order to try to answer some of the above questions, Dr. Iyad Rahwan, of the Media Laboratory at the Massachusetts Institute of Technology, designed a series of surveys to find out what people actually want. Oh dear … that sounds like a recipe for disaster in and of itself! Turns out, long story short, people want exactly what you would expect: to save themselves first. It may be a product of the “me-generation”, or it may simply be the human survival instinct.
Dr. Rahwan and his team presented different scenarios in their surveys, such as varying the number of pedestrians that could be saved, adding family members to the mix, etc. This led to another line of thinking: what if the manufacturer were to offer different options, or versions, of its ‘moral algorithm’? Say Mr. Jones selects ‘option A’, the one that is most protective of the passengers in the car, regardless of risk levels, ages of pedestrians, etc. Now, if the car hits and kills a mother pushing a stroller with her young child, whereas the alternative would likely have been only a few bumps and bruises to the sole passenger in the car, who is held liable? The owner? The manufacturer? Or some third party, i.e. an insurer? Getting a bit dizzying in scope?
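Purely as an illustration of what an ‘option A’ versus an ‘everyone counts equally’ setting might mean in practice, here is a minimal sketch. Nothing in it comes from Dr. Rahwan’s work; the function, the risk numbers, and the “occupant weight” dial are all invented for this example.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    action: str
    occupant_risk: float    # estimated chance of serious harm to the car's passengers
    pedestrian_risk: float  # estimated chance of serious harm to people outside

def choose_action(outcomes, occupant_weight):
    """Pick the action with the lowest weighted expected harm.

    occupant_weight > 1 favors the passengers (the hypothetical 'option A');
    occupant_weight == 1 weighs everyone's safety equally.
    """
    def cost(o):
        return occupant_weight * o.occupant_risk + o.pedestrian_risk
    return min(outcomes, key=cost)

# An invented scenario: swerve into a wall, or stay on course toward a pedestrian.
scenario = [
    Outcome("swerve into wall", occupant_risk=0.3, pedestrian_risk=0.0),
    Outcome("stay on course",   occupant_risk=0.0, pedestrian_risk=0.9),
]

print(choose_action(scenario, occupant_weight=1.0).action)  # equal weighting -> "swerve into wall"
print(choose_action(scenario, occupant_weight=5.0).action)  # passenger-first -> "stay on course"
```

The unsettling point the sketch makes concrete is that the entire moral question collapses into a single tunable number, and whoever sets that number is making the ethical choice in advance.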
Now, since we are already into what I think of as Twilight Zone stuff here, let us take it one step further. I have read there may be a possibility for the Artificial Intelligence (AI) in the car’s program to determine ages and other data about the persons involved. I have my doubts, but let us buy into that premise for a moment and project even further out. Suppose it is possible for the AI to also detect skin tone or colour? Or gender? I was going to include religion, gender identity, or political affiliation, but for now that may be a step too far into the Twilight Zone, as those are not necessarily associated with any physical characteristics. Still … you see where I am going with this? Customer: “Oh yes, I would like option ‘R’, the one that will always put my life and safety first, especially if the ‘other guy’ is African-American, or a female.”
All rather far-fetched? Sure, but then 25 years ago the concept of self-driving cars was the stuff of a Ray Bradbury novel, not something we would actually see come to fruition in our own lifetimes! For my money, the car should always attempt to avoid pedestrians and other vehicles. After all, the passenger in the ‘driverless’ car still has the option to take over the controls if he sees a situation that might benefit from human intervention. And, though I am technically a product of what is known as the ‘Me Generation’, I would drive off a 50-ft cliff in order to avoid hitting a pedestrian, no matter what size, shape, colour or gender. Happily, I will never own one of these cars.
Previously on Filosofa’s Word: