There are many areas that have historically limited, and continue to limit, AI, and they can be classified into the following categories:
Despite significant advances in hardware technologies such as GPUs, TPUs, and CPUs, there is no doubt that AI systems still require vast amounts of processing power to run, particularly while their Neural Networks are in the learning stage of operation. Even with the availability of large datacenters containing clusters of more power-efficient and optimised GPU stacks (e.g. NVIDIA’s latest HGX GPU products), there are still very significant information processing limitations. As a result, training large AI systems is very time-consuming and expensive, which extends the time frame over which the complexity and scale of AI can be developed. It should be noted that once an AI system’s Neural Network has completely learned an entire information set, it can operate far more cheaply in a purely feed-forward mode, which is computationally much less expensive than training.
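To make the asymmetry between training and feed-forward operation concrete, the following Python sketch compares the two using common rule-of-thumb estimates for dense models (roughly 6 FLOPs per parameter per training token versus roughly 2 FLOPs per parameter per generated token). The model and dataset sizes are hypothetical, chosen only for illustration.

```python
# A minimal sketch of why training dwarfs feed-forward inference in compute.
# The constants are widely used rule-of-thumb estimates for dense models;
# the model/dataset sizes below are hypothetical.

def training_flops(num_params: float, num_tokens: float) -> float:
    """Approximate total FLOPs to train a dense model (~6 * N * D)."""
    return 6.0 * num_params * num_tokens

def inference_flops_per_token(num_params: float) -> float:
    """Approximate FLOPs for one feed-forward pass per token (~2 * N)."""
    return 2.0 * num_params

if __name__ == "__main__":
    n_params = 70e9   # hypothetical 70-billion-parameter model
    n_tokens = 2e12   # hypothetical 2-trillion-token training set

    train = training_flops(n_params, n_tokens)
    infer = inference_flops_per_token(n_params)

    print(f"Training:  ~{train:.2e} FLOPs in total")
    print(f"Inference: ~{infer:.2e} FLOPs per generated token")
    print(f"Ratio:     training costs about {train / infer:.1e} tokens of inference")
```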
The size of AI systems is increasing rapidly, and current datacenter hardware infrastructure typically lacks the cost-effective memory capacity necessary to fully accommodate these very large systems. This can result in slower training times, or the need to break the AI down into smaller functional parts and train in processing batches, which can reduce the accuracy and efficiency of the AI’s Neural Network training process.
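The sketch below illustrates why model size outruns accelerator memory so quickly. The per-parameter byte counts follow a widely used mixed-precision training estimate (fp16 weights and gradients plus fp32 master weights and two Adam optimizer states, roughly 16 bytes per parameter); the model and GPU sizes are hypothetical placeholders.

```python
# A minimal sketch of training-state memory versus accelerator capacity.
# Byte counts follow a common mixed-precision + Adam estimate and ignore
# activation memory; all sizes below are hypothetical.

import math

BYTES_PER_PARAM_TRAINING = 16   # fp16 weights (2) + grads (2) + fp32 master (4) + Adam states (8)
BYTES_PER_PARAM_INFERENCE = 2   # fp16 weights only

def training_memory_gb(num_params: float) -> float:
    """Approximate device memory needed to hold the full training state."""
    return num_params * BYTES_PER_PARAM_TRAINING / 1e9

def min_gpus_needed(num_params: float, gpu_memory_gb: float) -> int:
    """Lower bound on GPUs just to hold the training state."""
    return math.ceil(training_memory_gb(num_params) / gpu_memory_gb)

if __name__ == "__main__":
    n_params = 70e9   # hypothetical 70-billion-parameter model
    gpu_gb = 80.0     # hypothetical 80 GB accelerator

    print(f"Training state:    ~{training_memory_gb(n_params):.0f} GB")
    print(f"Inference weights: ~{n_params * BYTES_PER_PARAM_INFERENCE / 1e9:.0f} GB")
    print(f"GPUs needed just to hold training state: >= {min_gpus_needed(n_params, gpu_gb)}")
```

The gap between the training and inference figures is one reason a model that must be split across many devices to train can later run on far less hardware.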
Training large AI systems consumes very significant energy to power the datacenters, carries high operational costs, and raises associated environmental concerns. This limits the complexity and scale of AI that can be trained, particularly where budgets for initial training and ongoing operations are constrained.
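The underlying arithmetic is simple, as the sketch below shows. Every figure in it (GPU count, per-GPU power draw, run length, datacenter PUE, and electricity price) is a hypothetical placeholder chosen only for illustration.

```python
# A minimal sketch of the arithmetic behind training energy costs.
# All figures below are hypothetical placeholders.

def training_energy_mwh(num_gpus: int, watts_per_gpu: float,
                        hours: float, pue: float = 1.2) -> float:
    """Total facility energy in MWh; PUE scales GPU power to whole-datacenter power."""
    return num_gpus * watts_per_gpu * hours * pue / 1e6

if __name__ == "__main__":
    energy = training_energy_mwh(num_gpus=4096, watts_per_gpu=700,
                                 hours=24 * 30, pue=1.2)   # a month-long run
    cost = energy * 1000 * 0.10   # assume $0.10 per kWh

    print(f"Energy: ~{energy:,.0f} MWh")
    print(f"Electricity cost: ~${cost:,.0f}")
```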
Even though there has been significant progress in developing AI that performs well on several classes of abilities, these systems often struggle to generalize and infer new concepts that provide useful suggested solutions in entirely new situations outside the information on which they have been trained. This is particularly true for tasks that require what Humans consider to be “common sense” reasoning. Common sense remains a challenge for AI to demonstrate, which really is a huge ‘red flag’ for Humanity. Notably, however, common sense is also often a huge challenge for some Humans, including but not limited to the very young.
Current AI systems require extremely large sets of information to be trained effectively. However, this information is often limited in size, and some of it can also contain very undesirable biases. This can lead to AI that performs poorly because the diversity of the information is too small to be fully representative of the complete subject matter. It can also reinforce very undesirable biases that may exist in the information, particularly where it is sourced from social media platforms rather than from significantly better authoritative information sets, such as professionally verified medical research information.
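One simple, if crude, check for an unrepresentative training set is to measure how skewed its label distribution is, as the Python sketch below illustrates. The labels and threshold are hypothetical, chosen only for illustration; real bias audits are far more involved.

```python
# A minimal sketch of one basic representativeness check: flagging labels
# that carry too small a share of the training set. Labels and the
# threshold below are hypothetical.

from collections import Counter

def label_shares(labels: list[str]) -> dict[str, float]:
    """Fraction of the dataset carried by each label."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

def underrepresented(labels: list[str], min_share: float = 0.10) -> list[str]:
    """Labels whose share falls below a chosen representativeness threshold."""
    return [label for label, share in label_shares(labels).items()
            if share < min_share]

if __name__ == "__main__":
    sample = ["positive"] * 900 + ["negative"] * 80 + ["neutral"] * 20
    print(label_shares(sample))       # {'positive': 0.9, 'negative': 0.08, 'neutral': 0.02}
    print(underrepresented(sample))   # ['negative', 'neutral']
```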
Advances in information processing hardware and software technologies will continue to expand the Capabilities of AI. Over time, it may be possible that AI itself develops methods to overcome the significant challenges that remain in terms of computational power, memory limitations, energy use, inferential generalization, and information limitations.