In an interesting post on HackerNoon.com entitled "Is Another AI Winter Coming?", Thomas Nield argues that the expectations for artificial intelligence (AI) may exceed its potential, and that if we do not "temper our expectations and stop hyping 'deep learning' capabilities . . . we may find ourselves in another AI Winter." He expects the growing skepticism over AI capabilities to "go mainstream as soon as 2020."
Although AI does seem overhyped in some circles, I doubt an AI winter is coming any time soon, if ever. I base this doubt on the following points:
An AI winter is a period during which interest in and funding for AI research slump. The two major AI winters, in 1974–1980 and 1987–1993, followed disappointing results that led to decreased funding for research.
Each of these periods was preceded by an AI spring, a period of significant activity and progress in the field. The first spring occurred in 1956–1974, a period that included the invention of the perceptron by Frank Rosenblatt in 1958, along with programs that could solve algebra word problems, prove theorems in geometry, and converse in English. The second AI spring occurred in 1980–1987 and coincided with the development of the first expert systems, which were based on the physical symbol system hypothesis formulated by Allen Newell and Herbert A. Simon.
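To make the perceptron reference concrete, here is a minimal sketch of Rosenblatt's learning rule in Python. It is a hypothetical toy example, not drawn from the article: a single artificial neuron nudges its weights whenever it misclassifies a sample, which is enough to learn a linearly separable function such as logical AND.

```python
# Minimal perceptron sketch (toy example): one neuron with a step
# activation that updates its weights on every misclassified sample.

def train_perceptron(samples, labels, epochs=10, lr=0.1):
    """Learn weights and a bias for binary-labelled 2-D points."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Step activation: fire (1) if the weighted sum crosses zero.
            pred = 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
            error = y - pred
            # Update rule: shift the weights toward misclassified inputs.
            w[0] += lr * error * x[0]
            w[1] += lr * error * x[1]
            b += lr * error
    return w, b

# Learn the logical AND function from its four input/output pairs.
points = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]
weights, bias = train_perceptron(points, targets)
print(weights, bias)
```

A single perceptron can only learn linearly separable functions, which is why this toy example uses AND rather than, say, XOR.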
We are currently experiencing an AI spring, which began in 1993, driven primarily by the development of the intelligent agent: an autonomous entity that perceives its environment and acts on what it senses, such as a self-driving car. The current spring gained momentum in the early 2000s with advances in machine learning and statistical AI, along with the increased availability of big data and significant increases in processing power.
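The perceive-and-act loop that defines an intelligent agent is easy to sketch. The following is a minimal, hypothetical example in Python (a thermostat rather than a self-driving car, purely to keep it self-contained); it illustrates the idea rather than any particular system's implementation.

```python
# Minimal sketch of an intelligent agent's perceive-decide-act loop,
# using a hypothetical thermostat as the agent.

class ThermostatAgent:
    """Keeps a room near a target temperature by reacting to what it senses."""

    def __init__(self, target=21.0):
        self.target = target

    def perceive(self, room_temperature):
        # Sensing step: read the current state of the environment.
        return room_temperature

    def act(self, perceived_temperature):
        # Decision step: map the percept to an action.
        if perceived_temperature < self.target - 0.5:
            return "heat"
        if perceived_temperature > self.target + 0.5:
            return "cool"
        return "idle"

agent = ThermostatAgent()
temperature = 18.0
for _ in range(5):
    action = agent.act(agent.perceive(temperature))
    print(f"sensed {temperature:.1f} -> {action}")
    # Crude environment model: the chosen action nudges the temperature.
    temperature += {"heat": 1.5, "cool": -1.5, "idle": 0.0}[action]
```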
Whether AI is overhyped depends on how you define "artificial intelligence." If you define it as a machine's ability to perform tasks traditionally thought to require human intelligence, then we have already achieved AI. We now have self-driving cars, automated investment advisors, and systems that can match or exceed doctors' accuracy in diagnosing certain cancers and other diseases.
However, if you define AI as a machine or computer system that possesses self-consciousness and self-determination, then AI may be unattainable. We may never see robots that think like humans. In that sense, we may be in a perpetual AI winter characterized by boom-and-bust cycles in research funding.
Nearly every ground-breaking technology experiences ebbs and flows. A few eventually meet or even exceed expectations; some never do. I think AI is somewhat different. I like to compare it to space exploration, where, it seems to me, the journey is more important than the destination.
Maybe AI is a pipe dream, an unattainable goal. But maybe that's unimportant. Maybe this unattainable goal that we foolishly believe is attainable is the inspiration that motivates us to explore. And that is good enough, because in our drive to achieve the unattainable, we improve the human condition through increased knowledge and skills, amazing new technologies, and innovative products.
I think we have learned that lesson over the years, over the course of several AI springs and winters, and I believe that lesson makes a future AI winter much less likely.