The ethics of automation
We’d do far better to establish an ethical framework within which driverless cars must operate than to force them to comply with human laws
All of a sudden, so many of our
appliances—televisions, coffee machines, bathroom scales, light bulbs—have
grown smart and integrated themselves into an enormous hive-mind that
miraculously keeps us fit, our home temperatures controlled and our lives
organised. It seems we are on the threshold of a new age, but even at this early stage, the enormous strains that this fundamental technological shift will place on the world are already becoming visible. We are struggling to deal with the vast amounts of data that our personal devices collect and their impact on our privacy. If you layer artificial intelligence on top of that, and machines empowered to make our choices for us, the impact multiplies many times over.
Nowhere is this autonomy more evident
than in the automobile industry. Google Cars have been driving themselves
around the Bay Area for years now and even though they have been generally
incident-free, stray accidents have caused disproportionate anxiety. Wired magazine featured a story last year in which two security researchers remotely gained access to a Jeep driving on the highway and shut its engine down by hacking into its connected systems—a scary demonstration of the additional risks that a connected future could pose. But it wasn’t until Tesla released an over-the-air update that allowed its cars to automagically park themselves that the reality of connected driverless cars really sank in.
Conventional wisdom says we should
regulate autonomous cars by seeing to it that they are capable of complying
with existing laws—ensuring that they are intelligent enough to abide by
traffic regulations and can stick to the speed limit. This approach, in my
opinion, is flawed. Our motor vehicle laws were designed to guard against human failure—essentially, to protect us from ourselves. Laws against drunk-driving, using cellphones and over-speeding exist solely to ensure that when we take control of a powerful metal capsule capable of travelling at insane speeds, we don’t end up killing ourselves. To regulate intelligent, networked cars that are perfectly aware of each other’s location, speed and direction under the same essentially human framework is pointless.
Instead of making autonomous cars
behave more like us, what we should really be concerned about is how these cars
are programmed to make decisions. Whenever I think of tough choices, I am
reminded of Philippa Foot’s ethics conundrum: the Trolley Problem. It describes a situation in which a tramcar is rolling downhill towards five people tied to the tracks, unable to move and staring at certain death. You can choose to switch the trolley to a siding, but if you do so, it will run over an innocent bystander. What do you do?
One approach would be to do what causes the least harm—switching the trolley to the siding would kill one person instead of five. But would your decision be any different if there were a child on the siding—and if so, how many adult lives is one child’s life worth? Isn’t there a moral difference between allowing people to die through your inaction and wilfully switching tracks to cause the death of a human being?
Autonomous cars will be faced with
decisions like these every day. And while a fallible human at the switching
yard can assuage his guilt by convincing himself that he only had a split
second to decide, autonomous cars will decide based on premeditated risk-balancing
programs, consciously designed by their manufacturers. I wonder whether these
moral choices should be left to corporate whim. If we don’t intervene, our cars
may end up being programmed to protect their passengers at all costs—even at
the cost of the lives of innocent bystanders.
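To make that worry concrete, here is a minimal, purely hypothetical sketch of what such a risk-balancing program might look like. Every name and number in it—the Manoeuvre type, the harm scores, the passenger_weight parameter—is an assumption for illustration, not any manufacturer’s actual code; the point is that a single constant chosen by a programmer quietly encodes the moral choice.

```python
# Hypothetical risk-balancing routine for an autonomous car.
# All names, weights and scores are illustrative assumptions only;
# no real manufacturer's code is being described here.

from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    passenger_deaths: int  # expected passenger fatalities if chosen
    bystander_deaths: int  # expected bystander fatalities if chosen

def expected_harm(option: Manoeuvre, passenger_weight: float) -> float:
    """Harm score: each bystander death counts as 1; each passenger
    death is scaled by a manufacturer-chosen weight."""
    return option.bystander_deaths + passenger_weight * option.passenger_deaths

def choose(options: list[Manoeuvre], passenger_weight: float) -> Manoeuvre:
    # The car simply picks the manoeuvre with the lowest harm score.
    return min(options, key=lambda o: expected_harm(o, passenger_weight))

# A trolley-problem-like dilemma: stay the course and hit five
# bystanders, or swerve into a barrier and risk the lone passenger.
options = [
    Manoeuvre("stay on course", passenger_deaths=0, bystander_deaths=5),
    Manoeuvre("swerve into barrier", passenger_deaths=1, bystander_deaths=0),
]

# With passenger_weight = 1.0, all lives count equally: the car swerves.
print(choose(options, passenger_weight=1.0).name)   # swerve into barrier

# A manufacturer that protects passengers "at all costs" need only
# raise one constant, and the same code sacrifices the bystanders.
print(choose(options, passenger_weight=10.0).name)  # stay on course
```

The ethical framework I am arguing for would, in effect, constrain how that one constant may be set.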
It is here, I believe, that legislators need to focus their efforts. Cars of the future will take smarter decisions and will be able to base them on inputs from millions of sensors—in their chassis, on the roads they drive on and in the vehicles they interact with. They will have the benefit of machine-to-machine communication, AI and big data algorithms that will allow them to simulate millions of potential outcomes in the time they have to take appropriate action. But even with that kind of assistance, their choices will only be as sound as the programmatic basis on which they are taken.
Car manufacturers are already making these choices for us and will continue to build them into their vehicles’ programming. We’d do far better to establish an ethical framework within which autonomous cars must operate than to force them to comply with human laws.
If it were left to me, I’d take the
responsibility of regulating autonomous cars away from the Motor Vehicles
Authority and hand it off to the Ministry of Robotics.
Source | Mint – The Wall Street Journal | 16 June 2016