Today, AI information processing systems use massive numbers of CPUs and GPUs, usually housed in large datacenters located around the world. Over time, it is expected these may be replaced by highly specialized AI accelerators designed exclusively for high-performance AI information processing. These specialized AI chips are sometimes referred to as Neuromorphic Processors.
Many AI applications require rapid training or retraining of AI models [16.2], and training is very time consuming, power demanding, and expensive. It is also difficult to correctly configure an AI system to ensure its output is completely safe and reliable. For example, training AI systems for the operation of Autonomous Vehicles requires the ability to train and retrain a Neural Network on enormous amounts of streamed video camera and LiDAR sensor information collected in the field, combined with extremely detailed Digital Maps and Advanced Driver Assistance Systems (ADAS) information. The accurate and reliable training and operation of these AI systems is life-critical for the drivers and passengers of Autonomous Vehicles, and for the pedestrians and vehicles moving around them.
Demand for AI processing in datacenters is continually growing. DNNs continue to grow in size and complexity, doubling every 3-5 months [16.3]. Similarly, there is growing demand for low-power AI chips for applications out in the field, at the ‘edge’, ranging from surveillance cameras with facial and behavior recognition to complex situational awareness for autonomous vehicles.
AI has an almost unlimited number of applications. AI can help choose a movie, pick stocks, diagnose a medical X-ray, target a missile, or help find a cure for a virus [16.4]. The chip Intel acquired from Habana Labs can identify the faces of 15,000 people every second [16.5]. AI is significant at every level of society and the world, including global economic and global security levels. If data is considered the “New Oil” of the global economy, then AI is on the cusp of becoming the “Faster Than Light Drive” of everything on Earth. Access to AI information processing will build the economies and Human societies of the future.
Neuromorphic Processors are adapted from the structure and function of real biological Neural Network based Brains. They use artificial Neural Networks that are designed to mimic the behavior of biological neurons and their interconnecting interfaces, called synapses.
Traditional semiconductor computing architectures rely on digital circuits to perform arithmetic operations; neuromorphic computing, however, can use analog circuits that are designed to simulate the behavior of biological neurons. These circuits are often implemented using existing semiconductor technologies, such as CMOS, memristors, and emerging spintronics computing technology.
One of the key features of neuromorphic computing is the ability to perform parallel processing. Just as the Human Brain can perform multiple tasks simultaneously, neuromorphic systems can process multiple streams of information in parallel. This makes them extremely well-suited for AI applications such as image processing and language recognition, which require the processing of large amounts of information, often in real time.
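To make the parallelism concrete, the following is a minimal sketch (illustrative only; the `process_stream` function and its threshold are invented for this example) of several independent sensor streams being handled concurrently, in the spirit of a neuromorphic system processing many inputs at once:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-stream workload: count "events" above a threshold
# in a stream of sensor readings (a stand-in for vision/audio processing).
def process_stream(readings, threshold=0.5):
    return sum(1 for r in readings if r > threshold)

streams = [
    [0.1, 0.7, 0.9, 0.2],   # e.g. camera 1
    [0.6, 0.6, 0.1, 0.8],   # e.g. camera 2
    [0.4, 0.3, 0.2, 0.9],   # e.g. microphone
]

# Process all streams concurrently, mirroring how a neuromorphic
# system handles many input channels at the same time.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(process_stream, streams))

print(results)  # [2, 3, 1]
```

A real neuromorphic chip performs this kind of parallelism in hardware, across thousands of neuron circuits, rather than in software threads.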
Neuromorphic computing also has the ability to learn and adapt, in generally the same way that most AI systems work today. This is achieved by modifying the weighted connections between neurons based on the information the neural network receives. This allows the AI to learn from experience and improve its performance over time.
Neuromorphic computing architectures could potentially enable vastly more efficient and scalable AI by mimicking the way that the Brain electrically and computationally processes information. This is particularly important in relation to the use of a special AI design called a spiking neural network, which can be significantly faster at learning than the backpropagation algorithm.
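As an illustration of the spiking model, the following sketch simulates a single leaky integrate-and-fire neuron, the basic building block of a spiking neural network (the leak factor and threshold are arbitrary values chosen for this example, not taken from any specific chip):

```python
# Leaky integrate-and-fire (LIF) neuron: the membrane potential leaks
# over time, integrates incoming current, and emits a spike (then resets)
# when it crosses a threshold. Information is carried by spike timing
# rather than by continuous activation values.

def lif_run(currents, leak=0.9, threshold=1.0):
    v = 0.0
    spikes = []
    for i in currents:
        v = leak * v + i          # leak, then integrate the input current
        if v >= threshold:        # threshold crossing -> emit a spike
            spikes.append(1)
            v = 0.0               # reset the membrane potential
        else:
            spikes.append(0)
    return spikes

# A steady input current drives periodic spiking.
print(lif_run([0.4] * 8))  # [0, 0, 1, 0, 0, 1, 0, 0]
```

Because neurons only communicate when they spike, a network of such units can be event-driven and far more power-efficient than continuously clocked arithmetic.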
Neuromorphic computing will be used for a wide range of applications, from robotics, autonomous vehicles and drones, to healthcare and finance. However, there are still many technical and commercial challenges to overcome.
Over the past several years, many new companies have emerged to meet the challenge of the increasing need for AI information processing, and as a result many are focused on building specialized neuromorphic processors [16.11]. These AI accelerators generally comprise proprietary chips built into proprietary server systems, along with complex custom software tools to program the hardware with a range of Neural Network designs.
In 2015, Amazon acquired a company called Annapurna Labs for USD $350 million, which became the basis for its Inferentia AI chip used in AWS [16.8]. In addition, Alibaba, Huawei, Baidu, and Bitmain are all developing neuromorphic processors [16.9][16.10].
Early pioneering companies such as Wave Computing worked to build a new massively parallel dataflow architecture for AI applications. There were also a few earlier attempts, such as Tilera (founded in 2004 and acquired for $130M in 2014), but these were mostly focused on replacing more operationally expensive Field Programmable Gate Arrays (FPGAs).
There have been many investments in start-up companies developing proprietary neuromorphic processors for AI applications.
The following is a list of just a few companies developing neuromorphic processors and Computers in various countries around the world:
AIStorm
Brief description of the technology architecture: AIStorm develops AI chips that use a unique analog architecture to perform edge computing tasks. The chips are designed to process sensor data directly, rather than relying on a digital signal processor (DSP) or microcontroller. This allows for low-power and high-speed processing of sensor data for applications such as object detection, gesture recognition, and voice recognition.
Founded: 2017
Total investments to date: Over $13 million
Head office: Sunnyvale, California, USA
Annapurna Labs
Brief description of the technology architecture: Annapurna develops custom-designed chips and systems for cloud infrastructure, storage, and networking. Their technology architecture is optimized for high performance, low power consumption, and scalability.
Founded: 2011
Total investments to date: $70 million
Head office: San Jose, California, USA
Blaize
Brief description of the technology architecture: Blaize develops programmable and low-power AI chips that use a graph-based architecture to accelerate AI workloads. The chips are designed for edge computing devices and can perform a range of tasks, including Computer vision and natural language processing.
Founded: 2010
Total investments to date: Over $87 million
Head office: El Dorado Hills, California, USA
BrainChip
Brief description of the technology architecture: BrainChip has developed an AI processor called Akida, which is designed to provide high performance and low power consumption for edge applications. The processor uses neuromorphic computing technology, which mimics the way the human brain processes information.
Founded: 2011
Total investments to date: Over $27 million
Head office: Aliso Viejo, California, USA
Cerebras Systems
Brief description of the technology architecture: Cerebras Systems has developed a chip called the Wafer Scale Engine (WSE) that is the largest Computer chip ever built, measuring 8.5 inches by 8.5 inches. The WSE is designed specifically for AI applications, and contains 1.2 trillion transistors and 400,000 AI cores, making it significantly faster than traditional AI hardware. [16.12]
Founded: 2016
Total investments to date: Over $680 million
Head office: Los Altos, California, USA
DeePhi
Brief description of the technology architecture: DeePhi is a deep learning startup that develops software and hardware solutions to enable efficient and high-performance deep learning on edge devices. The company's technology architecture is built around deep compression algorithms and low-power processing units for edge devices.
Founded: 2016
Total investments to date: $12.5 million
Head office: Beijing, China
Esperanto
Brief description of the technology architecture: Esperanto's technology architecture is based on the RISC-V open-source instruction set architecture, which allows for more efficient and customizable chip design. Esperanto has developed a range of custom processors and accelerators that can be used for AI and ML workloads, as well as general-purpose computing tasks.
Founded: 2014
Total investments to date: Not publicly disclosed
Head office: Mountain View, California, USA
Eta Compute
Brief description of the technology architecture: Eta Compute develops ultra-low power AI edge computing solutions using its patented Continuous Voltage Frequency Scaling (CVFS) technology that allows devices to operate at subthreshold voltages, resulting in up to 10 times less power consumption compared to other architectures.
Founded: 2015
Total investments to date: $27 million
Head office: Westlake Village, California, USA
Google
Brief description of the technology architecture: Google is a multinational technology company that has developed the Tensor Processing Unit (TPU), which is a custom-designed chip specifically optimized for machine learning applications. The TPU is designed to accelerate machine learning workloads and enable more efficient and powerful data processing.
Founded: 1998
Total investments to date: Not publicly disclosed
Head office: Mountain View, California, USA
Graphcore
Brief description of the technology architecture: Graphcore has developed a new processor for AI and machine learning called the Intelligence Processing Unit (IPU). The IPU is designed to handle both training and inference in neural networks, and offers significant improvements in performance and efficiency compared to traditional hardware.
Founded: 2016
Total investments to date: Over $700 million
Head office: Bristol, UK
Groq
Brief description of the technology architecture: Groq has developed a chip called the Tensor Streaming Processor (TSP) that is designed to accelerate machine learning workloads. The TSP is designed to handle both training and inference in neural networks, and offers high performance and efficiency compared to traditional hardware.
Founded: 2016
Total investments to date: Over $347 million
Head office: Palo Alto, California, USA
Habana Labs
Brief description of the technology architecture: Habana specializes in developing deep learning processors and training accelerators based on their unique Gaudi architecture. The architecture features a scalable, programmable, and efficient design that leverages Habana's extensive experience in software, algorithms, and hardware engineering.
Founded: 2016
Total investments to date: Over $300 million
Head office: Tel Aviv, Israel
Hailo
Brief description of the technology architecture: Hailo develops specialized AI chips that use a unique architecture to optimize performance and energy efficiency. The chips are designed to accelerate AI workloads and can be used in a wide range of devices, from cameras to self-driving cars.
Founded: 2017
Total investments to date: Over $88 million
Head office: Tel Aviv, Israel
Horizon Robotics
Brief description of the technology architecture: Horizon Robotics develops AI edge computing processors that combine AI algorithms and computing power in a single chip. Their architecture features an AI accelerator engine that can support various types of neural networks, as well as a highly parallel computing engine that can process data in real-time.
Founded: 2015
Total investments to date: Over $900 million
Head office: Beijing, China
IBM
Brief description of the technology architecture: The TrueNorth neuromorphic chip is a Brain-inspired computing architecture developed by IBM. It uses a network of simple processing units that communicate through spikes, mimicking the way neurons in the Brain communicate.
Founded: 1911
Total investments to date: Not publicly disclosed
Head office: Armonk, New York, USA
Nervana Systems
Brief description of the technology architecture: Nervana Systems developed a deep learning acceleration platform called Nervana Engine, which is designed to provide high performance and energy efficiency for AI workloads. The platform uses custom hardware and software optimized for deep learning algorithms.
Founded: 2014
Total investments to date: $89 million before being acquired by Intel in 2016.
Head office: Prior to acquisition - San Diego, California, USA.
Kneron
Brief description of the technology architecture: Kneron's technology architecture includes both hardware and software components that enable edge AI solutions. Its hardware includes a chip that can run AI algorithms efficiently on the edge, while its software includes an AI model compression and optimization tool called "Neural Processing Unit Compiler" (NPC).
Founded: 2015
Total investments to date: $73 million
Head office: San Diego, California, USA; Taipei, Taiwan; Shenzhen, China
Mythic
Brief description of the technology architecture: Mythic has developed a chip-based solution for AI and machine learning called the M1108 Analog Matrix Processor (AMP). The AMP is designed to provide high performance and energy efficiency by using analog computation rather than digital.
Founded: 2012
Total investments to date: Over $56 million
Head office: Redwood City, California, USA.
SambaNova Systems
Brief description of the technology architecture: SambaNova Systems has developed a scalable, hardware and software platform for AI and machine learning called the SambaFlow System. The platform combines custom hardware and software to enable efficient training and inference of large AI models.
Founded: 2017
Total investments to date: Over $1 billion
Head office: Palo Alto, California, USA
Quadric.io
Brief description of the technology architecture: Quadric.io is developing an edge computing platform that combines hardware and software to accelerate AI workloads in autonomous systems, such as drones, robots, and autonomous cars.
Founded: 2016
Total investments to date: $15.5 million
Head office: San Jose, California, USA
Wave Computing
Brief description of the technology architecture: Wave Computing develops custom-built AI systems and processors for various applications that require accelerated computing, such as deep learning, data analytics, and autonomous driving. The company's technology architecture is designed to accelerate AI workloads by processing data in parallel across multiple cores, enabling higher performance and lower power consumption.
Founded: 2010
Total investments to date: Over $500 million (entered Chapter 11 proceedings). Wave has since emerged from Chapter 11 and adopted the name and brand of MIPS, the company it had earlier acquired.
Head office: Campbell, California, USA (prior to chapter 11)