Self-driving cars are already cruising the streets. But before they can become widespread, car makers must solve an impossible ethical dilemma of algorithmic morality.

When it comes to automotive technology, self-driving cars are all the rage. Standard features on many ordinary cars include intelligent cruise control, parallel-parking programs, and even automatic overtaking, features that allow you to sit back, albeit a little uneasily, and let a computer do the driving. So it'll come as no surprise that many car manufacturers are beginning to think about cars that take the driving out of your hands altogether (see "Drivers Push Tesla's Autopilot Beyond Its Abilities").

These cars will be safer, cleaner, and more fuel-efficient than their manual counterparts. And yet they can never be perfectly safe, and that raises some difficult questions. How should the car be programmed to act in the event of an unavoidable accident? Should it minimize the loss of life, even if that means sacrificing its own occupants?