Artificial intelligence, like many areas of computing, raises ethical problems alongside the technical ones. For example, should a self-driving car prioritize the safety of the driver or the safety of other road users? Let's say the self-driving car finds itself in a situation where the only way to minimize damage to itself and the driver is to swerve into a motorcyclist, reducing harm to the driver and the vehicle but severely injuring the motorcyclist. I will include multiple examples below.
The first scenario poses the classic question: preserve the driver, or preserve the safety of others? The second scenario is a bit more complicated.
In this scenario the car must choose which people essentially have more value, or which people it is more "ethical" to injure. Below is a great video explaining these questions. It is also very interesting to note that most consumers would prefer to buy an autonomous vehicle that prioritizes driver safety over the safety of others.
As outlined in the video, self-driving cars will reduce human error, theoretically leading to fewer situations in which the car has to make such ethical decisions. However, accidents and malfunctions will always happen, so the questions remain valid.
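To make the dilemma concrete, here is a toy Python sketch of a "minimize weighted harm" rule. This is purely illustrative, not any manufacturer's actual logic, and every name and number in it is made up. The interesting part is the single driver_weight parameter: changing it flips the car's decision, which is exactly where the ethical question lives.

```python
# Toy illustration only: NOT how any real autonomous vehicle decides.
# It encodes a "minimize weighted total harm" rule, with a tunable
# weight showing how prioritizing the driver changes the outcome.
# All names and harm values here are hypothetical.

from dataclasses import dataclass


@dataclass
class Outcome:
    name: str
    driver_harm: float  # estimated harm to the car's occupants (0 to 1)
    others_harm: float  # estimated harm to other road users (0 to 1)


def choose_action(outcomes, driver_weight=1.0):
    """Pick the outcome with the lowest weighted total harm.

    driver_weight > 1 prioritizes the occupants, mirroring the
    preference most consumers reportedly express as buyers.
    """
    def cost(o: Outcome) -> float:
        return driver_weight * o.driver_harm + o.others_harm
    return min(outcomes, key=cost)


# The motorcyclist scenario: stay the course (harming the driver)
# or swerve (harming the motorcyclist).
options = [
    Outcome("stay on course", driver_harm=0.8, others_harm=0.0),
    Outcome("swerve into motorcyclist", driver_harm=0.1, others_harm=0.9),
]

print(choose_action(options, driver_weight=1.0).name)  # -> stay on course
print(choose_action(options, driver_weight=2.0).name)  # -> swerve into motorcyclist
```

Note that the code itself is trivial; all of the ethics is hidden in the harm estimates and the weight, which is why these questions cannot be engineered away.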


