Artificial Intelligence (AI) is a discipline within computer science dedicated to creating computers that can perform tasks associated with human intelligence, such as learning and recognizing patterns, understanding and generating natural language, recognizing and creating visual images, writing software, making decisions, solving problems, and controlling and navigating physical machines [65].
The development of AI can be traced back to the 1940s, and since then the field has passed many significant milestones and breakthroughs that have transformed various industries.
Over several decades, AI has gone through many stages of software and hardware development, including:
statistical model systems
expert systems
descriptive, predicate, and symbolic logic programming
cellular automata systems
simulated annealing systems
analog computers
fuzzy logic systems
However, the most effective approach has been analog and digital implementations of artificial Neural Networks.
Neural Networks have proven to be extremely flexible and effective at learning from information, typically using two main types of training methods:
Supervised Learning - both the training Input information and the corresponding desired Output information are provided to the Neural Network, which learns the relationships between these Input-Output pairs by calculating errors and adjusting weights, most typically using a mathematical process called backpropagation, popularized in the work of David Rumelhart and James McClelland [6]. The goal of Supervised Learning is to learn a mapping function from Input information to Output information. Once training on the sets of Input-Output information is complete, this mapping function can be used to predict Output information for new, unseen Input information. For example, a Supervised Learning algorithm can be trained on a set of labeled images to classify new images into categories such as cats or dogs; once training is complete, when an image of an entirely new and different type of dog is presented to the Neural Network, it can usually correctly produce the output: dog.
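The Supervised Learning loop described above can be sketched as a tiny network trained with backpropagation. The 2-2-1 layer sizes, the XOR task, the sigmoid activation, and the learning rate below are illustrative assumptions, not a prescription:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # training Inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired Outputs (XOR)

W1 = rng.normal(size=(2, 2)); b1 = np.zeros(2)  # hidden-layer weights
W2 = rng.normal(size=(2, 1)); b2 = np.zeros(1)  # output-layer weights
lr = 1.0                                        # learning rate (illustrative)

def loss():
    h = sigmoid(X @ W1 + b1)
    return float(np.mean((sigmoid(h @ W2 + b2) - y) ** 2))

initial = loss()
for _ in range(5000):
    # forward pass: compute the network's current predictions
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: propagate the error gradient layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # adjust the weights to reduce the error (gradient descent)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(initial, loss())  # the training error shrinks as the weights are adjusted
```

After training, the learned mapping function (the weights) is reused unchanged to predict outputs for new inputs.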
Unsupervised Learning - only training Input information is provided to the Neural Network, which learns to organize and group that information based on its similarities and differences. Because the Input information is not labeled or categorized in any way, the Neural Network's objective in Unsupervised Learning is to discover patterns or structure, such as clusters or groups of similar information. Unsupervised Learning is used to build Autoencoder Networks, which are very useful and powerful; one of the earliest related designs, the self-organizing map, was proposed in 1982 by Teuvo Kohonen [20]. Unsupervised Learning can be used for tasks such as data compression, difference/anomaly detection, and data visualization. For example, Neural Networks trained with Unsupervised Learning on billions of unlabeled images can identify similar and dissimilar images and automatically separate them into clustered groups based on visual features such as shapes and colors.
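The clustering idea above, grouping unlabeled inputs purely by similarity, can be sketched with k-means clustering (a classical algorithm used here as a simple stand-in for an unsupervised Neural Network). The two synthetic "blobs" of 2-D feature vectors, k = 2, and the deterministic initialization are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
# unlabeled 2-D feature vectors drawn from two distinct regions
data = np.vstack([rng.normal(0.0, 0.3, (20, 2)),   # region A
                  rng.normal(3.0, 0.3, (20, 2))])  # region B

k = 2
# deterministic initialization for this sketch (real k-means uses random init)
centers = data[[0, 20]].copy()
for _ in range(10):
    # assign every point to its nearest cluster center
    dists = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # move each center to the mean of the points assigned to it
    centers = np.array([data[labels == j].mean(axis=0) for j in range(k)])

print(labels)  # points with similar features end up in the same cluster
```

No labels were ever provided; the grouping emerges entirely from the structure of the input data, which is the essence of Unsupervised Learning.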
Neural Networks are computational systems inspired by the structure and function of the Human Brain. They consist of interconnected artificial neurons, organized into logically structured layers, each of which receives and processes input information and produces output predictions or classifications.
The advantage of Neural Networks is their ability to learn from information and improve their performance over time without being explicitly programmed with logic rules. During training, a Neural Network adjusts the strength of the connections between its neurons, through adjustable weights, to minimize the error between its predicted outputs and the desired outputs.
There are many types of Neural Networks, each designed for a specific task or application, such as image recognition, natural language processing, and speech synthesis. Some popular architectures include feedforward networks, convolutional networks, and recurrent networks; deep learning refers to networks that stack many such layers.
Reference: [69] What are neural networks? - IBM
Reference: [68] Why Neural Networks can learn (almost) anything - Emergent Garden - https://www.youtube.com/watch?v=0QczhVg5HaI - Published: 13 Mar 2022