The following is a brief general list of the most commonly used types of Neural Networks developed over the years for various AI applications. For further details on each of these please refer to ‘Appendix 8 – Neural Network types’.
DNNs can refer to any Neural Network architecture that has more than one hidden layer, and as such can be considered a general category that includes many of the other types of Neural Networks in the following list. DNNs can take many different forms and be used for a variety of tasks.
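As a rough illustration of the idea of stacked hidden layers, the following numpy sketch runs a forward pass through a small multi-layer network (all weights and sizes here are arbitrary, for illustration only):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def dnn_forward(x, layers):
    """Forward pass through a stack of (weight, bias) layers."""
    for w, b in layers[:-1]:
        x = relu(x @ w + b)          # hidden layers apply a nonlinearity
    w, b = layers[-1]
    return x @ w + b                 # linear output layer

rng = np.random.default_rng(0)
layers = [(rng.standard_normal((4, 8)), np.zeros(8)),   # input -> hidden 1
          (rng.standard_normal((8, 8)), np.zeros(8)),   # hidden 1 -> hidden 2
          (rng.standard_normal((8, 2)), np.zeros(2))]   # hidden 2 -> output
y = dnn_forward(rng.standard_normal((3, 4)), layers)    # batch of 3 inputs
```

With two hidden layers this qualifies as "deep" in the sense used above; real DNNs differ mainly in scale and in how the weights are trained.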
LNNs are a highly computationally efficient, time-continuous variation of the Recurrent Neural Network (RNN) that processes data sequentially, retains a memory of past inputs, adjusts its behavior based on new inputs, and can handle variable-length inputs, enhancing the task-understanding capabilities of NNs. By using Ordinary Differential Equations (ODEs) for computation between nodes, the LNN architecture differs from traditional Neural Networks in its ability to process continuous or time-series data effectively. When new data arrives, LNNs can change the number of neurons and connections per layer.
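To make the ODE view concrete, the toy sketch below integrates a simple continuous-time neuron state with Euler steps. This is only a minimal illustration of the idea of ODE-driven dynamics, not the full liquid time-constant model; the state equation and all parameter names are assumptions for illustration:

```python
import numpy as np

def lnn_step(x, u, W_in, W_rec, tau, dt=0.01):
    """One Euler step of a toy continuous-time neuron state:
    dx/dt = -x/tau + tanh(W_rec @ x + W_in @ u)"""
    dxdt = -x / tau + np.tanh(W_rec @ x + W_in @ u)
    return x + dt * dxdt

rng = np.random.default_rng(1)
n, m = 5, 2                                    # 5 neurons, 2 inputs
x = np.zeros(n)                                # initial state
W_in = rng.standard_normal((n, m))
W_rec = 0.1 * rng.standard_normal((n, n))
tau = np.ones(n)                               # per-neuron time constants
for t in range(100):                           # integrate over an input signal
    x = lnn_step(x, np.array([np.sin(t * 0.1), 1.0]), W_in, W_rec, tau)
```

The key point is that the state evolves continuously in time under an ODE, so irregularly sampled or continuous inputs can be handled naturally.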
Transformers are a type of Neural Network that has become popular for natural language processing tasks such as language translation, due to their ability to process input sequences in parallel and capture long-term dependencies. They rely on self-attention mechanisms to focus on relevant parts of the input sequence.
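The self-attention mechanism mentioned above can be sketched in a few lines of numpy. This is a single attention head without masking, positional encodings, or multi-head projection, with arbitrary illustrative sizes:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X (seq_len, d_model)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # pairwise relevance scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ V                              # weighted mix of values

rng = np.random.default_rng(2)
X = rng.standard_normal((6, 16))                    # 6 tokens, 16-dim each
Wq, Wk, Wv = (rng.standard_normal((16, 16)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
```

Because every position attends to every other position in one matrix product, the whole sequence is processed in parallel, which is what distinguishes Transformers from step-by-step recurrent models.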
SNNs are a type of Neural Network that attempts to model the behavior of biological neurons more closely than traditional Neural Networks. In SNNs, neurons communicate with each other by sending discrete, spike-like signals rather than continuous activation values. SNNs are a relatively recent development in Neural Network research and are still an area of active study. They have shown promise in areas such as neuromorphic computing and robotics, but are not yet widely used in practical applications.
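A minimal sketch of the spiking idea is the leaky integrate-and-fire neuron: the membrane potential leaks over time, integrates input current, and emits a discrete spike when it crosses a threshold. The threshold, leak, and input values below are illustrative assumptions:

```python
def lif_run(current, v_thresh=1.0, v_reset=0.0, leak=0.9):
    """Leaky integrate-and-fire neuron: emits discrete spikes (1/0)
    rather than continuous activation values."""
    v, spikes = 0.0, []
    for i in current:
        v = leak * v + i           # leak, then integrate the input
        if v >= v_thresh:
            spikes.append(1)       # discrete spike
            v = v_reset            # reset after firing
        else:
            spikes.append(0)
    return spikes

spikes = lif_run([0.3] * 20)       # constant input drives periodic spiking
```

The output is a binary spike train over time, which is the kind of signal SNNs (and neuromorphic hardware) operate on.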
GANs are a type of Neural Network that consists of two sub-networks: a generator network and a discriminator network. The generator network learns to generate synthetic data that is similar to the real data, while the discriminator network learns to distinguish between the real and synthetic data. These networks are commonly used in tasks such as image generation and style transfer.
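To illustrate the adversarial setup, the sketch below computes the two opposing losses for a deliberately simplified pair of "networks": the generator is just a shift applied to noise and the discriminator a single logistic unit. These stand in for real neural networks purely to show how the objectives pull in opposite directions; all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def generator(z, theta):
    """Toy generator: shifts noise toward the data distribution."""
    return z + theta

def discriminator(x, w, b):
    """Toy discriminator: probability that x is real (one logistic unit)."""
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

real = rng.normal(4.0, 1.0, size=64)               # "real" data centred at 4
fake = generator(rng.normal(0.0, 1.0, 64), theta=1.0)
w, b = 1.0, -2.0
# Discriminator wants D(real) -> 1 and D(fake) -> 0; the generator wants
# the discriminator to score its fakes as real.
d_loss = -np.mean(np.log(discriminator(real, w, b)) +
                  np.log(1.0 - discriminator(fake, w, b)))
g_loss = -np.mean(np.log(discriminator(fake, w, b)))
```

Training alternates gradient steps on these two losses, which is what drives the generator's samples toward the real data distribution.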
RNNs are a type of Neural Network that is designed to handle sequential data, such as time series data or natural language text. RNNs have a feedback loop that allows information to be passed from one time step to the next, allowing them to capture temporal dependencies.
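The feedback loop described above can be sketched as a simple Elman-style recurrence, where the hidden state computed at one time step is fed back in at the next (sizes and weights here are arbitrary illustrations):

```python
import numpy as np

def rnn_forward(seq, W_x, W_h, b):
    """Elman-style RNN: the hidden state h carries information across steps."""
    h = np.zeros(W_h.shape[0])
    for x in seq:
        h = np.tanh(W_x @ x + W_h @ h + b)   # feedback loop over time
    return h

rng = np.random.default_rng(4)
seq = rng.standard_normal((10, 3))           # 10 time steps, 3 features each
h = rnn_forward(seq,
                rng.standard_normal((8, 3)),        # input weights
                0.1 * rng.standard_normal((8, 8)),  # recurrent weights
                np.zeros(8))
```

Because `h` at each step depends on all earlier inputs, the final state summarizes the whole sequence, which is how RNNs capture temporal dependencies.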
CNNs are a type of Neural Network that is commonly used for image and video processing tasks. CNNs are designed to learn spatial features in the data by applying convolutional filters to the input image or video.
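The core convolution operation can be sketched directly: slide a small filter over the image and sum the element-wise products at each position (as in most deep learning libraries, this is technically cross-correlation). The edge-detecting kernel below is a standard Sobel filter used purely for illustration:

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D convolution: slide the kernel over the image and sum."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((6, 6))
image[:, 3:] = 1.0                                      # vertical edge
sobel_x = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], float)
edges = conv2d(image, sobel_x)                          # responds at the edge
```

In a CNN the kernel values are not hand-picked like this; they are learned, so the network discovers which spatial features matter for the task.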
AlexNet is a specific Convolutional Neural Network (CNN) with a deep (DNN-style) layered design that was developed in 2012, and is often cited as one of the key discontinuous advancements in AI research and the AI industry.
DBNs are a type of Neural Network that consists of multiple layers of hidden units. DBNs are typically used for unsupervised learning tasks, such as feature learning and dimensionality reduction.
Autoencoder Networks are a type of Neural Network that is used for unsupervised learning tasks such as data compression and image denoising. Autoencoders consist of an encoder network that maps the input data to a low-dimensional representation, and a decoder network that maps the low-dimensional representation back to the original data.
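The encoder/decoder structure can be sketched as follows: the encoder maps an 8-dimensional input down to a 2-dimensional code (the "bottleneck"), and the decoder maps it back. The sizes and weights are arbitrary illustrations; in practice both halves are trained so the reconstruction matches the input:

```python
import numpy as np

def autoencode(x, W_enc, W_dec):
    """Encoder compresses x to a low-dimensional code; decoder reconstructs."""
    code = np.tanh(W_enc @ x)       # bottleneck representation
    recon = W_dec @ code            # reconstruction of the input
    return recon, code

rng = np.random.default_rng(5)
W_enc = rng.standard_normal((2, 8))     # 8-dim input -> 2-dim code
W_dec = rng.standard_normal((8, 2))     # 2-dim code -> 8-dim output
x = rng.standard_normal(8)
recon, code = autoencode(x, W_enc, W_dec)
```

Forcing the data through the narrow code is what makes autoencoders useful for compression and denoising: only the dominant structure of the input survives the bottleneck.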
Perceptron Networks are among the most basic types of Neural Network; developed in the 1950s, they are one of the earliest Neural Network architectures. Perceptrons are simple feedforward networks that consist of a single layer of nodes with binary activations, and were originally developed for binary classification tasks.
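The classic Rosenblatt perceptron fits in a few lines: a single weighted sum, a binary step activation, and an error-driven update rule. The sketch below trains it on the AND function (a linearly separable binary classification task); the learning rate and epoch count are illustrative:

```python
def perceptron_train(data, epochs=10, lr=0.1):
    """Rosenblatt perceptron: single layer, step activation,
    weights nudged whenever the binary output is wrong."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The linearly separable AND function
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = perceptron_train(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

A single-layer perceptron can only learn linearly separable functions (famously, it cannot learn XOR), which is one reason research moved toward multi-layer networks.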
Gradient Descent Algorithm refers to the use of gradient descent as a mathematical optimization algorithm, and is used with many of the different types of Neural Networks in this list.
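The algorithm itself is simple to sketch: repeatedly step in the direction opposite the gradient until a minimum is reached. The example function and learning rate below are illustrative:

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step opposite the gradient to minimise a function."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)          # move downhill
    return x

# Minimise f(x) = (x - 3)^2, whose gradient is 2 * (x - 3); minimum at x = 3
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

In Neural Network training, `x` is the full set of network weights and the gradient is computed over the training loss, but the update rule is the same.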
Backpropagation is a methodical application of the Gradient Descent Algorithm used for Supervised Learning in Neural Networks: the chain rule is applied layer by layer, propagating the output error backward through the network to compute the gradient of the loss with respect to every weight.
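A hand-derived sketch for a one-hidden-layer network makes this concrete: the output error is pushed back through the output weights, then through the tanh nonlinearity, yielding the gradients gradient descent needs. The sizes, loss (squared error), and activation here are illustrative choices:

```python
import numpy as np

def forward(x, W1, W2):
    h = np.tanh(W1 @ x)            # hidden layer
    return W2 @ h, h               # linear output, plus hidden state

def backprop(x, y, W1, W2):
    """Chain rule layer by layer for L = 0.5 * ||y_hat - y||^2."""
    y_hat, h = forward(x, W1, W2)
    err = y_hat - y                          # dL/dy_hat
    dW2 = np.outer(err, h)                   # gradient for output weights
    dh = W2.T @ err                          # error pushed back through W2
    dW1 = np.outer(dh * (1 - h ** 2), x)     # tanh'(z) = 1 - tanh(z)^2
    return dW1, dW2

rng = np.random.default_rng(6)
x, y = rng.standard_normal(4), rng.standard_normal(2)
W1, W2 = rng.standard_normal((3, 4)), rng.standard_normal((2, 3))
dW1, dW2 = backprop(x, y, W1, W2)
```

These gradients can be checked against finite differences of the loss, which is the standard sanity test for a hand-written backward pass.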
In addition to the Gradient Descent Algorithm there are many other types of Neural Network optimization algorithms, each with its own advantages and disadvantages.