Welcome to Neural’s beginner’s guide to AI. This long-running series should provide you with a very basic understanding of what AI is, what it can do, and how it works. In addition to the article you’re currently reading, the guide contains articles on (in order published) neural networks, computer vision, natural language processing, algorithms, artificial general intelligence, the difference between video game AI and real AI, and the difference between human and machine intelligence.
The most obvious solution to a given problem isn’t always the best one. For example, it’d be much easier for us to dump all of our trash on our neighbor’s lawn and let them deal with it. But, for a variety of reasons, that’s probably not the optimal solution. At its core, such an action would be unethical because it forces someone else to assume your burdens in addition to their own.
Basically: It’s unethical to pass your garbage along to the next person. And that’s pretty much what we need to focus on when we’re trying to understand ethics in the field of artificial intelligence.
For the purposes of this article, when we discuss the ethics of AI we’re asking two simple questions:
- Is it ethical to build an AI for this specific purpose?
- Is it ethical to build an AI with these capabilities?
The first question covers the intent of the developer or creator. Since there is no governing body that determines the acceptable ethical strictures we should place on developers, the best we can do is attempt to ascertain the raison d’être for a given AI system.
When Google, for example, tells us it has created an AI that can label images in the wild, we accept its existence as a form of greater good because we assume it was created without malice.
And, thanks to that AI, we can type “puppy” into a search box on our phones and Google will sift through our personal archive of thousands of images and display all the ones with puppies in them.
However, at one point, if you typed “gorilla” into Search and clicked the images tab, it would surface pictures of Black people. And, no matter what the developer’s intent was, they created a system that perpetuated racist stereotypes at a scale unprecedented in human history.
The second question, “is it ethical to build an AI with these capabilities,” refers to the intent of any potential external parties who may be inspired to misuse an AI system or develop their own.
For example, the development of an AI system that analyzes human emotion as evident in facial expression isn’t inherently objectionable. One ethical use of this technology would be the creation of a system that alerts drivers when they appear to be falling asleep behind the wheel.
But if you use it to determine whether a job candidate is a good fit for your company, for example, that’s likely to be considered unethical. It’s well established that many AI systems are biased toward white male faces; such systems clearly work better for one group than for others.
When it comes to ethical dilemmas, the popular scenarios people like to debate are seldom the ones developers and creators actually face. Whether a driverless car should kill an elderly person or a group of children isn’t as common a problem as whether a database concerning humans has enough diversity to make a system robust enough to be useful.
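The diversity problem above can be made concrete with a quick audit of a labeled dataset. The sketch below is a minimal illustration, not a real evaluation pipeline; the `samples` data and group names are hypothetical, standing in for whatever demographic annotations a real dataset might carry.

```python
from collections import Counter

def group_balance(samples):
    """Count how many samples fall into each demographic group
    and report each group's share of the dataset."""
    counts = Counter(group for _, group in samples)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Hypothetical face dataset: (image_id, demographic_group) pairs.
samples = [
    ("img_001", "group_a"), ("img_002", "group_a"),
    ("img_003", "group_a"), ("img_004", "group_b"),
]

shares = group_balance(samples)
for group, share in sorted(shares.items()):
    print(f"{group}: {share:.0%} of samples")
```

A heavily skewed split (here 75% versus 25%) is an early warning sign that the trained system may end up working much better for one group than for others, which is exactly the kind of mundane ethical question developers actually confront.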
Unfortunately, every entity in the modern world seems to have its own agenda and its own ethics when it comes to AI. The world’s superpower governments have decided that autonomous killing machines are ethical, the general public has accepted deep fakes, and the proliferation of mass surveillance technology through devices ranging from Ring doorbell cameras to the legal use of facial recognition systems by law enforcement tells us it’s the Wild West for AI, as far as ethics are concerned.