The following is a general timeline of AI development milestones, from the field's inception to the time of writing (early 2023), presented in roughly chronological order:
In 1951, Marvin Minsky and Dean Edmonds built the first Neural Network machine called the SNARC (Stochastic Neural Analog Reinforcement Calculator).
The first AI program, the Logic Theorist, was written by Allen Newell, Herbert Simon, and Cliff Shaw in 1955; it could prove mathematical theorems.
The term "Artificial Intelligence" was coined by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon in their proposal for the 1956 Dartmouth Conference.
In 1958, John McCarthy developed the programming language LISP (List Processing), which became the primary language used in AI research.
In 1958, Frank Rosenblatt invented the perceptron, a type of Neural Network that could learn from experience.
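To give a concrete sense of the idea, the short sketch below (a modern Python illustration, not Rosenblatt's original hardware, with toy data and a learning rate chosen purely for the example) applies the perceptron learning rule to the logical AND problem: the weights are nudged only when an example is misclassified.

    import numpy as np

    # Perceptron learning rule on a linearly separable toy problem (logical AND).
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 0, 0, 1])            # AND of the two inputs

    w = np.zeros(2)                       # weights
    b = 0.0                               # bias
    lr = 0.1                              # learning rate (arbitrary choice)

    for epoch in range(20):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            error = target - pred         # zero when the example is already correct
            w += lr * error * xi          # nudge the weights toward the right answer
            b += lr * error

    print([1 if xi @ w + b > 0 else 0 for xi in X])   # -> [0, 0, 0, 1]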
In 1960, J.C.R. Licklider proposed the concept of "man-computer symbiosis", in which humans and computers would collaborate to solve problems.
In 1962, Steve Russell developed "Spacewar!", one of the earliest computer games and an influential early demonstration of interactive computing.
In 1964, Daniel Bobrow created STUDENT, one of the first natural language processing programs, which could solve algebra word problems.
In 1965, Edward Feigenbaum, Joshua Lederberg, and colleagues at Stanford began work on Dendral, generally regarded as the first expert system; it could identify the molecular structure of organic compounds from mass spectrometry data.
In 1970, Terry Winograd developed SHRDLU, a program that could understand natural language commands and manipulate blocks in a simulated world.
In 1979, the Stanford Cart, a camera-guided mobile robot worked on by Hans Moravec and his team, autonomously navigated a chair-filled room, an early milestone in vision-based robot navigation.
In the mid-1980s, Ernst Dickmanns and his team developed VaMoRs, a vision-guided van regarded as one of the first true autonomous vehicles; it drove on roads without human steering input in 1986.
In 1986, David Rumelhart, Geoffrey Hinton, and Ronald Williams published their influential paper on the backpropagation algorithm, which made it practical to train multi-layer Neural Networks and reshaped Neural Network design and learning.
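As a present-day illustration of the technique rather than the original 1986 formulation, the sketch below trains a tiny two-layer network on the XOR problem in NumPy; the layer sizes, learning rate, and iteration count are illustrative assumptions. The key step is the backward pass, where the prediction error is propagated through the layers via the chain rule to obtain weight updates.

    import numpy as np

    # A tiny two-layer network trained with backpropagation on XOR.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(0.0, 1.0, (2, 4))     # input -> hidden weights
    b1 = np.zeros((1, 4))
    W2 = rng.normal(0.0, 1.0, (4, 1))     # hidden -> output weights
    b2 = np.zeros((1, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0
    for step in range(10000):
        # Forward pass
        h = sigmoid(X @ W1 + b1)                  # hidden activations
        out = sigmoid(h @ W2 + b2)                # network predictions

        # Backward pass: propagate the squared-error gradient through each layer
        d_out = (out - y) * out * (1 - out)       # gradient at the output layer
        d_h = (d_out @ W2.T) * h * (1 - h)        # gradient at the hidden layer

        # Gradient-descent weight updates
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0, keepdims=True)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0, keepdims=True)

    print(np.round(out, 2))                       # approaches [[0], [1], [1], [0]]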
In 1987, Sirovich and Kirby introduced the Eigenface technique, a compact mathematical representation of face images that became the basis of early automated facial recognition systems.
In 1990, the first search engine, called Archie, was created by Alan Emtage.
In 1993, NVIDIA was founded by Jensen Huang, Chris Malachowsky, and Curtis Priem with a vision of bringing 3D graphics to the gaming and multimedia markets. The GPUs and datacenter system architectures the company developed over the following decades went on to form the computing foundation of most large-scale AI deployments of the 2010s and early 2020s.
In 1996, IBM's Deep Blue won a game against world chess champion Garry Kasparov, the first time a computer had beaten a reigning world champion under standard tournament conditions, although Kasparov won the six-game match.
In 1997, an upgraded Deep Blue defeated Kasparov in a rematch, becoming the first computer to win a full match against a reigning world chess champion.
In 2002, a Spiking Neural Network developed by Simon Thorpe was publicly demonstrated at the Information Society Technologies (IST) Event in Copenhagen.
In 2004, the DARPA Grand Challenge, a competition for autonomous vehicles, was held for the first time.
In 2004, the company BrainChip was established to pioneer the development of neuromorphic processors using Spiking Neural Networks.
In 2005, Honda introduced an upgraded version of its humanoid robot ASIMO (first unveiled in 2000), capable of running and interacting with people.
In 2010, Google publicly revealed its self-driving car project, which had begun the previous year.
In 2011, IBM's Watson defeated two Jeopardy! champions, Brad Rutter and Ken Jennings, in a televised match.
In 2012, AlexNet, a deep convolutional Neural Network developed by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, won the ImageNet competition by a wide margin, achieving state-of-the-art performance in image recognition and igniting the modern deep learning boom.
In 2013, DeepMind (acquired by Google in 2014) demonstrated a deep reinforcement learning agent that learned to play Atari games at superhuman levels directly from raw screen pixels, without any game-specific programming.
In 2016, Microsoft's Tay chatbot was released on Twitter but was quickly shut down after it began generating racist and offensive comments learned from its interactions with users.
In 2016, Google DeepMind's AlphaGo defeated world champion Lee Sedol at the game of Go, a complex strategy game with more possible board positions than there are atoms in the observable universe.
In 2016, Google publicly announced its Tensor Processing Unit (TPU) at Google I/O, noting that the chip had already been in use inside its datacenters for over a year. The TPU is a custom accelerator for Neural Network workloads, serving a role comparable to the NVIDIA GPUs used for AI data processing.
In 2017, OpenAI's Dota 2 bot defeated professional players in one-on-one matches at the video game Dota 2, demonstrating advanced strategy and decision-making abilities.
In 2017, researchers at Google published the paper “Attention Is All You Need”, introducing the Transformer, a Neural Network architecture that has since become the foundation of most modern large language models.
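For illustration, the core operation introduced in that paper, scaled dot-product attention, can be written in a few lines of NumPy; the random inputs and toy dimensions below are assumptions for the sketch, and real Transformers wrap this step in multiple heads, learned projections, masking, and feed-forward layers.

    import numpy as np

    # Scaled dot-product attention, the core building block of the Transformer.
    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def attention(Q, K, V):
        d_k = K.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)       # how strongly each query attends to each key
        weights = softmax(scores, axis=-1)    # each row sums to 1
        return weights @ V                    # weighted mixture of the value vectors

    rng = np.random.default_rng(0)
    seq_len, d_model = 5, 8                   # toy dimensions, assumed for the sketch
    Q = rng.normal(size=(seq_len, d_model))
    K = rng.normal(size=(seq_len, d_model))
    V = rng.normal(size=(seq_len, d_model))

    print(attention(Q, K, V).shape)           # (5, 8): one output vector per position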
In 2018, Google's Duplex AI made headlines for being able to make phone calls and schedule appointments like a human, using natural language processing and speech synthesis.
In 2019, GPT-2, a language model developed by OpenAI, made headlines for its ability to generate coherent, human-like text with impressive fluency.
In 2020, GPT-3, the successor to GPT-2, was released by OpenAI. At release it was one of the most advanced language models available, capable of generating natural language text, answering questions, and even writing code.
In 2020, AI was used extensively in the fight against COVID-19, from analyzing medical data to accelerating the search for treatments and vaccines.
In 2021, OpenAI released DALL-E, a generative model that can create images from textual descriptions, such as "a cat in a spacesuit walking on the moon".
In 2021, Stability AI began building a fully open-source AI platform.
In 2021, DeepMind's AlphaFold was used to predict the 3D structure of proteins with unprecedented accuracy, overcoming a long-standing challenge in biology.
In 2022, a new AI-powered warehouse robot named "Stretch" was brought to market by Boston Dynamics, capable of tasks such as unloading trucks and moving boxes and cases in warehouses and factories.
In 2022, the independent research lab Midjourney opened public access to its AI text-to-image generation service.
In 2022, Stability AI released the first version of Stable Diffusion, a pioneering open-source generative text-to-image model.
In 2023, Microsoft announced a multiyear investment in OpenAI, reported to be around US$10 billion, to fund the development of AI technologies, and commenced integrating OpenAI's models into Bing search.
In 2023, GPT-4, the successor to GPT-3, was released by OpenAI. Arguably the most advanced AI model publicly available at the time of writing, it can accept both text and image inputs and generate text, including programming code.
In 2023, Google announced a generative AI platform called Bard that rivals OpenAI’s ChatGPT.
In 2023, NVIDIA announced Picasso, a cloud service for building generative AI models that produce text-prompted visual content such as images, videos, and 3D models, running on NVIDIA GPUs within datacenters built from clusters of the company's DGX and HGX AI supercomputing systems.
At the GTC 2023 keynote presentation, Jensen Huang, CEO of NVIDIA, said:
“Generative AI is a new kind of computer – one that we program in human language. This ability has profound implications. Everyone can direct a computer to solve problems. This was a domain only for computer programmers. Now everyone is a programmer.”
“AI is at an inflection point as Generative AI has started a new wave of opportunities, driving a step-function increase in inference workloads. AI can now generate diverse data, spanning voice, text, images, video, and 3D graphics to proteins and chemicals.”
For further details, see 'Appendix 5 - GTC 2023 Keynote with NVIDIA'.
Overall, the field of AI has made significant progress in recent decades, with many impressive achievements in natural language processing, image recognition, game playing, robotics, autonomous vehicle control, and more.
The rate of advancement in AI has accelerated sharply in the past few years, as adoption of the technology enters what appears to be the steep initial climb of the Gartner Hype Cycle for new technology adoption. These AI advancements are genuinely overcoming previously unsolved challenges and creating new opportunities across industries including healthcare, finance, transportation, and manufacturing.