What Is Machine Learning?

Published August 3, 2021
Doug Rose
Author | Agility | Artificial Intelligence | Data Ethics

Early attempts at artificial intelligence (AI) produced computers that were proficient at solving specific types of problems, along with expert systems that could perform tasks normally requiring human intelligence. These tasks might include finding the fastest route from point A to point B, for example, or translating text from one language to another.

These AI applications were generally built on the foundation of physical symbol systems. A physical symbol system is any device that stores a set of patterns (symbols) and uses a number of processes to create, modify, copy, combine, and delete symbols. Think of symbols as the mental images that form in your brain as you observe and experience the world.

AI applications rely primarily on pattern-matching to do their jobs. For example, a translation program stores words, phrases, and grammar rules for two or more languages. When a user requests a translation, say from English to Spanish, the application looks up the English words or phrases that the user supplied, finds the Spanish equivalents, and then attempts to stitch together the Spanish words and phrases, following the grammar rules programmed into the system.
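The lookup step described above can be sketched in a few lines. This is a toy illustration, not a real translation engine; the phrase table and its entries are made up for the example:

```python
# Toy sketch of fixed pattern-matching translation (hypothetical data).
# The system can only look up phrases it already stores; it cannot learn new ones.
PHRASE_TABLE = {
    "good morning": "buenos dias",
    "thank you": "gracias",
    "the red house": "la casa roja",  # stored with the noun-adjective rule applied
}

def translate(english: str) -> str:
    """Translate by direct lookup; unknown phrases simply fail."""
    key = english.lower().strip()
    return PHRASE_TABLE.get(key, "<no translation stored>")

print(translate("Thank you"))         # gracias
print(translate("thank you kindly"))  # <no translation stored>
```

Note that the second call fails even though it is nearly identical to a stored phrase; the system has no way to generalize beyond its programmed patterns.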

The limitation with such a system is that it is fixed — it cannot learn or adapt on its own. In the context of a translation program, the system may not choose the most accurate word based on the intended meaning in the context of a sentence, and it will continue to make the same mistake in future translations unless the programmer steps in and makes an adjustment.

This limitation becomes more of an issue in situations in which the environment changes rapidly; for example, when anti-malware software must adapt quickly to identify, block, and eliminate evolving threats. Such threats evolve too quickly for anti-malware developers to update their databases. They need a way for the software to automatically identify new potential threats and adjust accordingly. In other words, the anti-malware must learn.

The Birth of Machine Learning

To overcome the limitations of early AI, researchers started to wonder whether computers could be programmed to learn new patterns. Their curiosity led to the birth of machine learning — the science of getting computers to perform tasks they weren't specifically programmed to do. So what is machine learning?

Machine learning got its start shortly after the first AI conference in 1956. In 1959, AI researcher Arthur Samuel created a program that could play checkers. This program was different: it was designed to play against itself to improve its performance. It learned new strategies from each game it played, and after a short period of time it began to consistently beat its own programmer.

A key advantage of machine learning is that it doesn't require an expert to create symbolic patterns and list out all the possible responses to a question or statement. On its own, the machine creates and maintains the list, identifying patterns and adding them to its database.

One interesting application of machine learning is in the area of fraud detection and prevention. Your credit card company, for example, monitors your charges—where you use your card, what you buy, the average amount you charge to the card, and so on. If the system detects anything that breaks the pattern of your typical card use, it triggers a fraud alert and may even automatically place your account on hold. 
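A crude version of "breaks the pattern" can be expressed statistically. The sketch below (assumed, toy data; real fraud systems use far richer models) flags a charge that sits several standard deviations above the cardholder's usual spending:

```python
# Hypothetical sketch: flag a charge as suspicious if it deviates far from
# the cardholder's typical spending pattern (a simple z-score test).
from statistics import mean, stdev

def is_suspicious(history, new_charge, threshold=3.0):
    """Return True if new_charge is more than `threshold` standard
    deviations above the mean of past charges."""
    mu, sigma = mean(history), stdev(history)
    return (new_charge - mu) / sigma > threshold

typical = [42.0, 18.5, 55.0, 23.0, 61.0, 35.0, 47.5, 29.0]
print(is_suspicious(typical, 40.0))   # False: within the usual pattern
print(is_suspicious(typical, 900.0))  # True: breaks the pattern
```

A learned system goes further than this fixed rule: it updates its notion of "typical" as your spending habits change, without a programmer adjusting the threshold.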

Driving Machine Learning with Big Data

Machine learning has become one of the fastest-growing areas in AI, at least partially because the cost of data storage and processing has dropped dramatically. With virtually unlimited storage and compute available via the cloud, companies can now create and store extremely large data sets, which clusters of high-speed processors can analyze to identify patterns, trends, and associations.

Machine learning enables companies to extract valuable information and insight from their data — information and insight that they may never have imagined was there.

Machine Learning Algorithms

Developers use various algorithms to enable machine learning. (An algorithm is a process or set of rules to be followed in calculations or other problem-solving operations.) Machine-learning algorithms enable AI applications to identify statistical patterns in data sets. Depending on the algorithm used, machines can learn in one of the following three ways:

  • Supervised learning: With supervised learning, a trainer feeds a training set of labeled data into the computer to enable the computer to identify patterns in that data, primarily for classification purposes. The algorithm creates a function that can identify those same patterns in any future input for the purpose of assigning data inputs to the different classes.
  • Unsupervised learning: With unsupervised learning, data that is neither classified nor labeled is fed into the system, and it identifies hidden patterns in the data that humans may be unable to detect or may have overlooked.
  • Semi-supervised learning: This is a cross between supervised and unsupervised learning. Supervised learning is used initially to train the system on a small data set, then a large amount of unlabeled data is fed into the system to increase its accuracy.
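The supervised case can be made concrete with one of the simplest possible classifiers, a 1-nearest-neighbor sketch over made-up labeled points. The labeled training set plays the role of the trainer's data; the function then assigns a class to any future input:

```python
# Minimal sketch of supervised learning (toy, hypothetical data):
# a 1-nearest-neighbor classifier. Labeled examples define the pattern;
# the resulting function classifies new, unseen inputs.
import math

def nearest_neighbor(training, point):
    """training: list of ((x, y), label); return the label of the
    example closest to `point` in Euclidean distance."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    _, label = min(training, key=lambda example: dist(example[0], point))
    return label

labeled = [((1.0, 1.0), "small"), ((1.2, 0.8), "small"),
           ((8.0, 9.0), "large"), ((9.1, 8.4), "large")]

print(nearest_neighbor(labeled, (1.1, 0.9)))  # small
print(nearest_neighbor(labeled, (8.5, 8.8)))  # large
```

An unsupervised algorithm would receive the same points without the "small"/"large" labels and would have to discover the two clusters on its own; a semi-supervised approach would start from a few labeled points and use the rest unlabeled.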

Just as learning is key to human intelligence, machine learning is a key element of artificial intelligence. Without learning, a machine only has the potential to perform the tasks it was programmed to do. With machine learning, machines can take the next step — increasing their knowledge, sharpening their skills, and developing new skills beyond what they were programmed to have.
