
Who Will Teach Machines Right from Wrong?

Published April 24, 2016
Doug Rose
Author | Agility | Artificial Intelligence | Data Ethics

All the current focus on artificial intelligence is about capability. That makes sense, because AI is currently seen as an engineering problem. Large AI companies such as Google, Apple, IBM and Microsoft are steeped in an engineering culture. It's all about what can be done. The mantra is to "move fast and break things."

As we struggle with these AI technologies, we need to keep a seat at the table for those with a background in the humanities. There needs to be a seat for our anthropologists, communication specialists, philosophers and cultural experts.

There are already some dangers in a one-sided engineering approach. When Microsoft's AI chatbot, Tay, was released to the world, she quickly picked up bad habits. Within an hour she was tweeting troubling ideas about race and gender. Within 16 hours she was a full-on Nazi.

The computer engineers were caught off guard and started to intervene on her behalf. Later they said that users had found an "exploit in the system."

This AI chatbot wasn't exploited. She was working exactly as she was designed. She used machine learning to pick up trends on Twitter, then remixed what she heard and repeated the ideas as her own. Anyone who's been to grade school can recognize this behavior. She was trying to be cool by repeating what she heard from the popular kids.

The difference between this AI chatbot and a grade school student came down to qualitative data. Grade school students have several years to observe human behavior; they've been given time to study the humanities. She had only hours.

Microsoft's chatbot never studied the humanities. She spouted hateful tweets the same way any machine would report traffic conditions. There was no one in the room who could help her understand right from wrong. Everyone was focused on creating her, and no one was there to give her a shadow of humanity.

It's not because she was built by sociopaths. It's because she was built by an engineering team that was unqualified to mimic humanity. Any cultural anthropologist could have predicted the first wave of responses from her Twitter followers. A specialist in rhetoric could have categorized words that were hateful or carried strong connections to larger ideas. A philosopher could have given her some framework for ethics and her larger responsibilities to society. Yet these seats were empty. In their place was just another data cruncher with dual monitors.

The Microsoft chatbot was tasteless and funny. No harm was done, except for bruised egos and a long day for the corporate communications specialists.

The introduction of more practical AI will raise the stakes. Most people will get a real taste of AI when they see the first wave of autonomous trucks. These have terrific commercial appeal. They can drive around the clock, and real-time analytics can send them on the fastest route. It might take years for autonomous cars to percolate into the hands of individuals, but autonomous trucks could lead to immediate savings.


Right now there are engineers in a room coding the decisions these trucks will make when presented with certain scenarios.

What happens when the truck is moving so fast that hitting the car in front of it is unavoidable? Should it hit the brakes and risk jackknifing into oncoming traffic? The truck should be able to calculate the risk of additional injuries or deaths. What if that risk is too great? Is the truck then empowered to leave the small car in front of it to its fate?

We give humans the benefit of the doubt, because we know they're imperfect. Some drivers will choose to save themselves. Other drivers will take on greater risk, because they can't bring themselves to crush a small car in their path.

The AI agent won't have that benefit, because it will all be hard-coded and quantified. The truck will have a framework for making these decisions. The key question is: who will be in the room when that framework is built? Will it be an overworked data cruncher with a computer science degree? Or the legal team looking out for corporate liability?
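To see how stark that hard-coding looks in practice, here's a deliberately crude sketch. Everything in it is hypothetical: the names, the numbers, and the scoring rule are invented for illustration, and no real vehicle software works this way. The point is only that once the framework is written down, someone's ethical judgment has been reduced to a number.

```python
# Hypothetical illustration only: a crude, hard-coded "ethics" framework
# of the kind this article warns about. No real vehicle uses this code.
from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str
    expected_casualties: float  # someone's quantified estimate of harm
    occupant_risk: float        # risk to the truck's own occupants, 0 to 1


def choose_maneuver(options: list) -> Maneuver:
    """Pick the maneuver with the fewest expected casualties.

    Every ethical judgment has been collapsed into one number --
    exactly the kind of quantification a philosopher might question.
    Should occupant_risk count? Should age or vehicle size matter?
    Whoever wrote this function already decided.
    """
    return min(options, key=lambda m: m.expected_casualties)


options = [
    Maneuver("brake hard and risk jackknifing", 2.5, 0.4),
    Maneuver("hit the small car ahead", 1.8, 0.1),
    Maneuver("swerve onto the shoulder", 0.6, 0.3),
]

decision = choose_maneuver(options)
print(decision.name)  # prints "swerve onto the shoulder"
```

Notice that the sketch silently ignores `occupant_risk` entirely; a different author in the room would have weighted it, and the truck would behave differently. That choice is the framework.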

The real danger here is that no one with a humanities background will have a seat at the table. What are the ethical challenges of having a multinational corporation use quantitative analysis to decide whether to take a driver's life? What framework should you use to weigh human lives? Should it be the number of people? Or their size?

If the trends in AI continue, then you'll either have a few large companies making most of these decisions or many smaller companies each making a few. Either way, there will be a strong need for employees with a humanities background. Many of these companies might not be asking for them now, but as failures pile up, they'll recognize the need for these specialists.

If you're interested in technology, don't assume that the only career for you is in software development or engineering. A background in philosophy will give you critical thinking skills that will be essential when organizations need to ask interesting questions. A degree in language and culture will help when your organization's product isn't being accepted in different regions of the world.

Most importantly, humanity needs you when these large organizations run into the limits of what can be defined with quantitative data. Artificial intelligence won't solve that problem. It will likely make it more transparent.

There's an overused business quote from hockey legend Wayne Gretzky. He said the secret to his success was that he skated to where the puck was going, not to where it had been. The last half-century has been a boon for quantitative fields such as computer science, engineering and finance. Artificial intelligence may have given us insight into where the puck is going. As we use technology to enhance our humanity, we're going to need specialists in fields such as art, culture and philosophy.

The technology and capability issues behind artificial intelligence will be challenging, but so will the ethical, philosophical and cultural issues that can only be addressed through the humanities.
