With the advent of self-driving cars, we see AI starting to make ethical decisions. How, you may ask?
These automobiles will have to decide what to do when human lives are at stake. For example, let's say a couple is in a hurry. Instead of waiting for the traffic signal, they step into the street without seeing the oncoming vehicle, which happens to be a self-driving car about to hit them. The AI calculates that it does not have enough time to come to a safe stop. In a millisecond it reviews the scenario. Should the ethics algorithms decide to hit the single pedestrian on the sidewalk instead? Hit the couple crossing the street? Or risk the lives of the passengers in the car, as well as those in the neighboring office building, by careening into it?
Let's take this concept a step further. What if, via face recognition, the AI knows the identity of every human in danger? And what if its algorithms instantly calculate that one of the people crossing the street has a high proclivity for criminal behavior?
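To make the dilemma concrete, here is a minimal sketch in Python of what such a calculation might look like if it were reduced to minimizing expected harm. Everything in it, the maneuvers, the probabilities, the per-person weight that a face-recognition risk score would adjust, is a hypothetical illustration, not how any real vehicle's software works.

```python
# Purely hypothetical sketch of an "ethics algorithm": score each
# possible maneuver by expected harm and pick the minimum.
# All names and numbers below are invented for illustration.

from dataclasses import dataclass


@dataclass
class Outcome:
    maneuver: str
    people_at_risk: int
    probability_of_harm: float  # crude estimate from the car's sensors
    # Hypothetical per-person weight. A criminal-risk score from face
    # recognition would slot in here, the ethically fraught part.
    weight: float = 1.0


def expected_harm(o: Outcome) -> float:
    return o.people_at_risk * o.probability_of_harm * o.weight


# The three options from the scenario above, with made-up numbers.
options = [
    Outcome("swerve toward sidewalk", people_at_risk=1, probability_of_harm=0.9),
    Outcome("brake, hit the couple", people_at_risk=2, probability_of_harm=0.8),
    Outcome("careen into building", people_at_risk=4, probability_of_harm=0.5),
]

choice = min(options, key=expected_harm)
print(choice.maneuver)  # -> "swerve toward sidewalk" with these numbers
```

Notice that the whole moral question hides inside that weight parameter: the moment it is anything other than 1.0 for everyone, the car is effectively ranking lives.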
Once intelligent machines start making, at least hypothetically, better and more informed ethical decisions based on data, will we slowly start handing our own ethical decisions over to them? Why not let an AI that has access to massive amounts of information, and can process it a thousand times faster than we can, at least assist us in those dilemmas?