
What is Deep Learning?

Published August 9, 2021
Doug Rose
Author | Agility | Artificial Intelligence | Data Ethics

In a previous article, What Is Machine Learning?, I defined machine learning as "the science of getting computers to perform tasks they weren't specifically programmed to do." So what is deep learning? Deep learning is a subset of machine learning (ML), which is a subset of artificial intelligence (AI):

  • Artificial intelligence is any technique or combination of techniques that enable computers to simulate human intelligence. Techniques may include logic (if-then statements), rules, decision trees, and machine learning, to name a few.
  • Machine learning is a subset of AI that relies on statistical approaches to enable machines to perform a function on the data they are fed, and then become progressively better at performing that function. An example of a machine learning product is a system that recommends products by associating a shopper's behaviors with those of other shoppers with similar interests and purchase histories.
  • Deep learning is a subset of machine learning that involves the use of multi-layered artificial neural networks with a structure and function loosely modeled on biological brains, enabling the system to train itself to perform and improve at complex tasks. For example, Google's DeepMind built a program called AlphaGo, powered by deep neural networks, that learned to play the board game Go well enough to defeat professional Go players.

A Brief History Lesson

In 1958, Cornell researcher Frank Rosenblatt created an early version of an artificial neural network composed of interconnected perceptrons. Like the nodes in modern artificial neural networks, a perceptron takes in binary inputs and performs a calculation on those inputs to produce an output. Note that with a perceptron both the inputs and outputs are binary; for example, zero/one, on/off, in/out.
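To make the calculation concrete, here is a minimal sketch of a single perceptron in Python. The inputs, weights, and threshold are illustrative assumptions, not Rosenblatt's original values.

```python
def perceptron(inputs, weights, threshold):
    """Weighted sum of binary inputs followed by a binary step."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

# Illustrative values only: two binary inputs, hand-picked weights and threshold.
print(perceptron([1, 0], weights=[0.6, 0.4], threshold=0.5))  # -> 1
print(perceptron([0, 1], weights=[0.6, 0.4], threshold=0.5))  # -> 0
```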

Rosenblatt's machine, the Mark I Perceptron, had small cameras and was designed to learn how to tell the difference between two images. Unfortunately, it took thousands of tries, and even then the Mark I had difficulty distinguishing basic images. In other words, the Mark I Perceptron wasn't a very good student. It could not develop a skill that is relatively easy for humans to learn.

The Mark I Perceptron had a couple of flaws: it had only one layer of perceptrons, and those perceptrons used binary step functions. As a result, this artificial neural network could solve only linearly separable problems and had no easy and effective way to adjust the strength of the connections between neurons, which is required for learning to take place.
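A classic illustration of that linearity limit (a standard textbook example, not one from the original Mark I experiments) is the XOR function: no single perceptron, whatever its weights and threshold, can reproduce it, because its outputs are not linearly separable. A small brute-force search makes the point:

```python
import itertools

def perceptron(inputs, weights, threshold):
    return 1 if sum(x * w for x, w in zip(inputs, weights)) > threshold else 0

xor_table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# Grid-search two weights and a threshold; no combination reproduces XOR.
candidates = [v / 10 for v in range(-20, 21)]
solved = any(
    all(perceptron(inp, (w1, w2), t) == out for inp, out in xor_table.items())
    for w1, w2, t in itertools.product(candidates, repeat=3)
)
print(solved)  # False
```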

These problems were solved primarily in the mid-1980s, when Carnegie Mellon professor Geoff Hinton and his colleagues popularized backpropagation for training networks with hidden layers, and when binary step functions were replaced with the sigmoid function, which increased the variation in outputs while keeping them between zero and one.
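For reference, the sigmoid function is only a couple of lines of Python; the sample inputs below are arbitrary.

```python
import math

def sigmoid(z):
    """Squash any real-valued input into the open interval (0, 1)."""
    return 1 / (1 + math.exp(-z))

# Unlike a binary step, the output varies smoothly between 0 and 1.
for z in (-4, -1, 0, 1, 4):
    print(z, round(sigmoid(z), 3))  # e.g. 0 -> 0.5, 4 -> 0.982
```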

These additions enabled the artificial neural network to tackle much more complicated challenges. However, these early artificial neural networks continued to struggle; they were slow, having to review a problem several times before becoming "smart" enough to solve it. 

Later, in the 1990s, Hinton started working in a new field called deep learning — an approach that added many more hidden layers between the input and output layers of the neural network.
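As a rough sketch of what "many more hidden layers" means in code, here is a tiny forward pass through a stack of fully connected layers. The layer sizes, random weights, and input values are made up for illustration; no training happens here.

```python
import math
import random

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def dense_layer(inputs, weights, biases):
    """One fully connected layer: weighted sums followed by sigmoid activations."""
    return [
        sigmoid(sum(x * w for x, w in zip(inputs, neuron_weights)) + b)
        for neuron_weights, b in zip(weights, biases)
    ]

# Made-up architecture: 3 inputs, two hidden layers of 4 neurons, 1 output.
random.seed(0)
sizes = [3, 4, 4, 1]
weights = [
    [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    for n_in, n_out in zip(sizes, sizes[1:])
]
biases = [[0.0] * n_out for n_out in sizes[1:]]

activations = [0.2, 0.7, 0.1]        # arbitrary input values
for w, b in zip(weights, biases):    # data flows layer by layer
    activations = dense_layer(activations, w, b)
print(activations)                   # the untrained network's output
```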

A Black Box

The hidden layers of a neural network function like a black box, swirling together computation and data to find answers and solutions. No human knows exactly how the network arrives at its decisions. For example, in 2012, Google's Brain project wanted to see how a deep learning neural network might perceive video data. Developers fed 10 million random images from YouTube videos into a network that had over 1 billion neural connections running on 16,000 processors. They didn't label any of the data, so the network didn't know what it meant to be a cat, a human being, or a car. Instead, the network just looked through the images and came up with its own clusters.

It found that one very similar cluster appeared in many of the videos. As a human being, you would recognize that cluster as the face of a cat, but to the neural network it was just a very common pattern it detected across the videos. In a sense, it invented its own interpretation of a cat. After performing this exercise, the network was able to identify a cat in an image 74.8% of the time.

While it is certainly intriguing to see an artificial neural network recognize objects without ever being trained to do so, the real mystery is how the network accomplishes such a feat. We know that the machine adjusts the strengths of the connections between neurons, but we cannot describe its "thought process" in a way that explains the conclusions it draws.

The black box nature of hidden layers is important to keep in mind when designing artificial neural networks, because you may be "flying blind" when you're developing your initial design. Success depends a great deal on taking an empirical approach — trying different arrangements of neurons, starting with different weights and biases, trying different activation functions, and then looking at the results and making adjustments.
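As a concrete sketch of that empirical loop, the snippet below tries a few layer arrangements and activation functions and compares the results. It assumes scikit-learn and a toy dataset, neither of which is mentioned in this article; it illustrates the workflow rather than a recommended design.

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Toy dataset standing in for a real problem.
X, y = make_moons(n_samples=500, noise=0.25, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Try different arrangements of neurons and activation functions, then compare.
for hidden in [(4,), (16,), (16, 16)]:
    for activation in ("logistic", "relu"):
        model = MLPClassifier(hidden_layer_sizes=hidden, activation=activation,
                              max_iter=2000, random_state=42)
        model.fit(X_train, y_train)
        print(hidden, activation, round(model.score(X_test, y_test), 3))
```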
