
Artificial Intelligence Specialists

History

The first step toward artificial intelligence as we know it today occurred in 1943, when Warren McCulloch and Walter Pitts, two researchers at the University of Chicago, proposed the first mathematical model of a neural network. The analytics software developer SAS describes neural networks as “computing systems with interconnected nodes that work much like neurons in the human brain. Using algorithms [sets of instructions that allow a computer to perform a specific task or group of tasks], they can recognize hidden patterns and correlations in raw data, cluster and classify it, and—over time—continuously learn and improve.” The work of McCulloch and Pitts prompted research both into the biological processes of the brain and into the application of neural networks to what would eventually be called artificial intelligence.
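The idea behind the McCulloch-Pitts neuron can be sketched in a few lines of code: the unit "fires" only when the weighted sum of its binary inputs reaches a threshold. The function name, weights, and threshold below are illustrative choices, not part of the original 1943 paper.

```python
# A minimal sketch of a McCulloch-Pitts neuron: it outputs 1 ("fires")
# only when the weighted sum of its binary inputs meets a threshold.
def mcp_neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With equal weights of 1 and a threshold of 2, the neuron behaves
# like a logical AND gate over two binary inputs.
print(mcp_neuron([1, 1], [1, 1], 2))  # fires: 1
print(mcp_neuron([1, 0], [1, 1], 2))  # does not fire: 0
```

By varying only the weights and threshold, the same unit can imitate other logic gates (for example, a threshold of 1 yields OR), which is what made the model a plausible building block for computation in the brain.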

In 1956, Allen Newell, Herbert A. Simon, and John Clifford Shaw of the RAND Corporation created a computer program called the Logic Theorist. It was the “first program deliberately engineered to mimic the problem solving skills of a human being,” according to Jeremy Norman’s HistoryofInformation.com.

The mathematics professor John McCarthy coined the term “artificial intelligence” in 1956 at the Dartmouth Summer Research Project on Artificial Intelligence workshop. In his plan for the workshop, McCarthy stated that the event was “to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

The U.S. Department of Defense began training computers to mimic basic human reasoning in the 1960s and 1970s.

Improvements in computing power and storage, advanced algorithms, and the increasing ability to collect and analyze large amounts of data have fueled the increasing use of AI in a variety of industries. Seventy-six percent of Americans surveyed by Northeastern University and Gallup in 2017 “strongly agreed” or “agreed” that AI would fundamentally alter the way people lived and worked in the next decade. Seventy-seven percent were “very positive” or “mostly positive” about the effects AI would have on the workplace and everyday life.

The development of deep learning techniques—in which huge neural networks with many layers of processing units are used to teach computers to recognize speech, identify images, and even make predictions—has opened up even further possibilities in the field. These ongoing developments suggest that we are only at the beginning of the potential uses and applications for artificial intelligence.
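The "many layers" idea described above can be illustrated with a small sketch: each layer applies weights to its inputs and passes the results through a nonlinear activation, and the output of one layer feeds the next. The weights below are arbitrary placeholders chosen for illustration; in a real deep learning system they are learned from data.

```python
import math

# Sigmoid activation: squashes any value into the range (0, 1),
# giving each layer the nonlinearity that lets stacked layers
# represent patterns a single layer cannot.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# One layer: each row of weights produces one output unit,
# computed as an activated weighted sum of the inputs.
def layer(inputs, weights):
    return [sigmoid(sum(i * w for i, w in zip(inputs, row)))
            for row in weights]

# A "deep" network is just layers applied in sequence:
# each layer's output becomes the next layer's input.
def forward(inputs, network):
    for weights in network:
        inputs = layer(inputs, weights)
    return inputs

# Illustrative two-layer network: 2 inputs -> 2 hidden units -> 1 output.
network = [
    [[0.5, -0.2], [0.3, 0.8]],  # hidden layer weights
    [[1.0, -1.0]],              # output layer weights
]
print(forward([1.0, 0.0], network))
```

Modern deep networks follow this same layered structure, only with millions of learned weights and many more layers, which is what enables tasks such as speech recognition and image identification.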

Related Professions