Should We Be Afraid of Artificial Intelligence?

Published August 8, 2021
Doug Rose
Author | Agility | Artificial Intelligence | Data Ethics

Some people are afraid of machine learning and artificial intelligence (AI), and some of the people most afraid of AI work in the field itself. As for me, I don't fear AI itself as much as I fear what people might do with it. In fact, most AI fears stem from questions about its ethical use by people. In this post, I cover these fears and a few others.

Institutional Discrimination

We are already seeing early warning signs of AI abuse at the national level. According to a report in The New York Times, "Chinese start-ups have built algorithms that the government uses to track members of a largely Muslim minority group" through facial recognition technology. Along with its expansive network of surveillance cameras, China uses facial recognition to identify Uighurs (members of that Muslim minority) and then keeps a searchable record of where they go, what they do, and the people they associate with.

Given that China has already held as many as a million Uighurs in detention camps, the use of technology to single out and monitor them more closely is an alarming development. As New York Times reporter Paul Mozur puts it, "The practice makes China a pioneer in applying next-generation technology to watch its people, potentially ushering in a new era of automated racism."

Big data and machine learning may also drive more subtle forms of discrimination. For example, lenders, insurance providers, employers, and colleges might use the insights gleaned from big data to deny mortgage loans, insurance coverage, jobs, or college admission to certain applicants based on their online search history, their spending patterns, the products they purchase, or even the types of books they read and the music they listen to.

Loss of Privacy

Big data and machine learning are primarily responsible for enabling personalization — products and services tailored to each user's needs and preferences. Search engines personalize your search results to make them more relevant to your interests and location. GPS personalizes directions to a destination based on your current location. Facebook can notify you if any of your friends happen to be nearby. Netflix can recommend movies based on your viewing history.
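To make personalization concrete, here is a minimal Python sketch of one simple approach: scoring unseen titles by the tags they share with a viewer's history. The catalog and titles are entirely hypothetical, and real recommender systems are far more sophisticated.

```python
from collections import Counter

# Hypothetical catalog mapping each title to descriptive tags.
CATALOG = {
    "Space Saga":      {"sci-fi", "adventure"},
    "Robot Uprising":  {"sci-fi", "thriller"},
    "Courtroom Drama": {"drama", "legal"},
    "Galaxy Quest II": {"sci-fi", "comedy"},
}

def recommend(viewing_history, top_n=2):
    """Score unseen titles by how many tags they share with watched titles."""
    watched_tags = Counter()
    for title in viewing_history:
        watched_tags.update(CATALOG.get(title, set()))
    scores = {
        title: sum(watched_tags[tag] for tag in tags)
        for title, tags in CATALOG.items()
        if title not in viewing_history
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend(["Space Saga", "Robot Uprising"]))
# ['Galaxy Quest II', 'Courtroom Drama'] -- the sci-fi title scores highest
```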

While there is certainly nothing to fear about these applications of big data and machine learning, the threat they pose to personal privacy raises concerns, particularly if personal data is not anonymized — which may not be entirely possible. (Anonymizing data involves removing any data that could be used to identify an individual, such as a person's name, address, phone number, Social Security number, and driver's license number.)
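As a toy illustration of that parenthetical, here is a minimal Python sketch of field-level anonymization. The field names are hypothetical, and as noted above, stripping direct identifiers alone may not be enough, since combinations of the remaining fields can still re-identify someone.

```python
# Direct identifiers to strip; the field names here are hypothetical.
PII_FIELDS = {"name", "address", "phone", "ssn", "drivers_license"}

def anonymize(record: dict) -> dict:
    """Return a copy of the record with directly identifying fields removed."""
    return {key: value for key, value in record.items() if key not in PII_FIELDS}

record = {
    "name": "Jane Doe",              # direct identifier: dropped
    "ssn": "000-00-0000",            # direct identifier: dropped
    "zip_code": "97008",             # quasi-identifier: kept, but still risky
    "purchases": ["books", "music"],
}
print(anonymize(record))
# {'zip_code': '97008', 'purchases': ['books', 'music']}
```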

Loss of privacy is such a big concern that governments are developing legislation to protect it. For example, in the European Union, the General Data Protection Regulation (GDPR) protects the privacy of all individuals in the EU and the European Economic Area. The GDPR even contains a "right to be forgotten" clause, enabling individuals to have their personal data deleted upon request, provided there are no legal grounds for retaining it.

Unemployment

One of the top AI fears is that automation will eliminate so many jobs that unemployment will become a bigger problem than it already is. Countries around the world are trying to lessen the impact by preparing their workforces for higher-level jobs that AI is ill-equipped to handle. However, it remains to be seen whether the creation of new jobs will keep pace with the loss of old ones. In addition, not everyone is equipped to fill the job openings of the future, which generally require more education, skill, and talent than the jobs technology is eliminating.

Financial Inequality

Related to unemployment is the resulting financial inequality. As more jobs are automated, the people who benefit most financially are those who own the machines. As a result, the rich get richer, the poor get poorer, and a large portion of the middle class slips into poverty.

Governments attempt to redistribute income in various ways through taxes and social programs (taking from the rich to give to the poor), but simply redistributing wealth can lead to a host of other socio-economic problems.

The question is how we can all benefit when the workload shifts from humans to robots. Ideally, we should all have more time for leisure and higher-level pursuits, such as art, literature, and philosophy. Unfortunately, financial inequality more often than not results in increased suffering and crime.

Rogue Robots

Doomsday scenarios in which robots develop self-awareness and self-determination and turn against humans certainly give rise to some AI fears, but these scenarios are still solidly in the realm of science fiction. Currently, and for the foreseeable future, robots pretty much do what they're told. Machine learning enables them to hone certain skills, such as the ability to communicate in spoken language, but they are not even aware they are doing so. Natural language processing (NLP) is still largely an exercise in pattern-matching.
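To give a feel for what "pattern-matching" means here, below is a minimal Python sketch in the spirit of early chatbots such as ELIZA. The patterns and replies are entirely hypothetical, and nothing in the program "understands" anything; it simply maps surface patterns to canned responses.

```python
import re

# Hypothetical pattern-to-reply rules; the first match wins.
RULES = [
    (re.compile(r"\bweather\b", re.IGNORECASE), "Looks sunny today."),
    (re.compile(r"\b(hello|hi)\b", re.IGNORECASE), "Hello! How can I help?"),
]

def respond(utterance: str) -> str:
    """Return the canned reply for the first pattern found in the utterance."""
    for pattern, reply in RULES:
        if pattern.search(utterance):
            return reply
    return "Sorry, I don't understand."

print(respond("Hi there, what's the weather like?"))  # Looks sunny today.
print(respond("Tell me a joke"))                      # Sorry, I don't understand.
```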

The biggest threat in this area comes from heavy-duty robotic machinery that people accidentally get too close to. Many robots operate inside cages to prevent such accidents. However, even in this application of AI, developers are working toward making robots safer — for example, enabling robots to detect animate beings nearby and automatically shut themselves down to prevent injury.
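As a rough illustration of that idea, here is a minimal Python sketch of a proximity-based shutdown loop. The sensor reading is simulated, and the distances, polling rate, and interfaces are hypothetical stand-ins; a real system would rely on certified safety hardware, not application code.

```python
import random
import time

SAFE_DISTANCE_M = 1.5  # hypothetical minimum clearance, in meters

def read_nearest_distance() -> float:
    """Stand-in for a depth-camera or lidar reading (simulated here)."""
    return random.uniform(0.5, 5.0)

def control_step(moving: bool) -> bool:
    """Halt if anyone is too close; run only while the area is clear."""
    distance = read_nearest_distance()
    if distance < SAFE_DISTANCE_M:
        if moving:
            print(f"Person detected at {distance:.2f} m -- emergency stop")
        return False            # stopped
    return True                 # safe to operate

if __name__ == "__main__":
    moving = True
    for _ in range(20):
        moving = control_step(moving)
        time.sleep(0.05)        # poll the sensor about 20 times per second
```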

Unintended Consequences

Of course, the law of unintended consequences holds true in AI as it does in other complex domains. What experts fear about rogue robots is really a fear of unintended consequences; for example, artificial neural networks could become so complex that robots turn self-aware and self-directed and decide that humans need to be enslaved or wiped off the face of the planet. There could also be unintended consequences that produce outcomes we have yet to imagine.

So, the short answer to the question "Should we be afraid of artificial intelligence?" is yes. But keep in mind that AI is a tool, which, like many powerful tools, can be used or abused. As long as we are aware of the threats and address them appropriately, I think we have little to fear and great progress to look forward to.
