Intel, ARM, & Nvidia Propose a New Hardware Standard for AI

03-10-2022 | By Robin Mitchell

As the importance of AI technologies continues to grow, the lack of standardisation in hardware can make it difficult to run AI algorithms across different platforms, but a new standard being proposed by Intel, ARM, and Nvidia hopes to change this. What challenges does AI face regarding hardware, what does the trio propose, and how could this help future devices?

What challenges does AI hardware face?

Surprisingly, the idea behind neural networks in AI algorithms is relatively straightforward when considering the capabilities of AI. A neural network consists of nodes that simply add the values of all their inputs (each individually weighted); if this sum exceeds some activation level, the node emits its own output signal. Nodes are connected to other nodes, and layers of these nodes are linked together. With enough nodes and layers, a neural network can recognise all kinds of patterns, ranging from text to speech and images.
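The node behaviour described above can be sketched in a few lines of Python. The names, weights, and the simple step activation are illustrative choices, not part of the proposed standard:

```python
# Minimal sketch of a neural-network node: sum the weighted inputs and
# fire (output 1) only if the total exceeds an activation threshold.

def node_output(inputs, weights, threshold=0.5):
    """Return 1 if the weighted sum of inputs exceeds the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# A tiny two-layer network: two hidden nodes feeding one output node.
hidden = [
    node_output([1.0, 0.0], [0.6, 0.4]),   # fires: 0.6 > 0.5
    node_output([1.0, 0.0], [0.3, 0.9]),   # silent: 0.3 is not > 0.5
]
final = node_output(hidden, [0.7, 0.7])    # fires: 1*0.7 > 0.5
```

Real networks use smooth activation functions and far more nodes, but the core operation per node is exactly this multiply-and-accumulate loop.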

While a neural net may be simple in principle, such designs face major hardware challenges. Primarily, the computational effort needed to calculate the value of each node can take massive amounts of CPU time (each input must be multiplied by its weight, and a network can contain thousands of nodes). As such, trying to use standard programming models with off-the-shelf hardware can make real-time AI challenging (something that is needed in vehicles, security cameras, and digital assistants).

One solution that is becoming more mainstream is the use of accelerator circuits that utilise GPU technologies. Arrays of fast multipliers combined with adders can quickly compute the value of nodes, while massive parallelism can perform numerous operations simultaneously. But while this does help to boost performance, the variations in hardware across manufacturers can make it challenging for AI developers to port algorithms. Thus, algorithms designed to work on one platform may fail to take full advantage of another or be outright incompatible.
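The speed-up from accelerators comes from recognising that a whole layer of nodes is one matrix-vector product, which parallel multiplier and adder arrays can compute at once. A hedged sketch (sizes are illustrative, and NumPy on a CPU merely stands in for dedicated hardware):

```python
import numpy as np

# Illustrative layer: 512 nodes, each weighting 1024 inputs.
rng = np.random.default_rng(0)
inputs = rng.standard_normal(1024)          # outputs of the previous layer
weights = rng.standard_normal((512, 1024))  # one weight row per node

# Loop form: the multiply-accumulate work done node by node.
loop_out = np.array([w_row @ inputs for w_row in weights])

# Vectorised form: the same arithmetic expressed as one matrix-vector
# product, which parallel hardware (e.g. a GPU) can evaluate at once.
vec_out = weights @ inputs

assert np.allclose(loop_out, vec_out)       # identical results
```

Both forms perform the same 512 × 1024 multiplies; the difference is that the second exposes all of them to the hardware simultaneously.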

Intel, ARM, and Nvidia propose new AI standards

Recognising these AI hardware challenges, Intel, ARM, and Nvidia recently announced a proposal that could help hardware engineers create AI platforms that share common design practices. At the same time, the proposed standard would help AI engineers create neural networks built on common technologies, aiding portability while striking an optimised balance between memory usage and accuracy.

The standard focuses on the use of 8-bit floating-point numbers for node weights, which is claimed to strike an ideal balance between memory use and accuracy. Simply put, the weights between nodes are digital numbers stored with a fixed bit-width. The larger the bit-width, the more accurate the overall network can be, as more numbers can be represented (thereby encoding more distinct states), but larger bit-widths demand more complex hardware and more memory. On the flip side, neural networks have been built around 4-bit numbers that are lightning-fast and use very little memory, but such low-resolution numbers suffer accuracy issues due to the limited number of states they can represent.
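To make the trade-off concrete, here is a hedged sketch of one plausible 8-bit floating-point layout: 1 sign bit, 4 exponent bits, 3 mantissa bits, with an exponent bias of 7 (an "E4M3"-style format). The article does not spell out the encoding, so this layout and the omission of special values (infinities, NaN) are assumptions for illustration only:

```python
# Decode an assumed 1-4-3 layout 8-bit float into a Python float.
# Layout (assumption): [sign:1][exponent:4][mantissa:3], bias = 7.

def decode_fp8(byte):
    sign = -1.0 if byte & 0x80 else 1.0
    exp = (byte >> 3) & 0x0F
    mant = byte & 0x07
    if exp == 0:                             # subnormal: no implicit leading 1
        return sign * (mant / 8) * 2 ** -6
    return sign * (1 + mant / 8) * 2 ** (exp - 7)

print(decode_fp8(0x38))   # 0b0_0111_000 -> 1.0
print(decode_fp8(0x3C))   # 0b0_0111_100 -> 1.5
print(decode_fp8(0xC0))   # 0b1_1000_000 -> -2.0
```

Such a format offers 256 bit patterns against only 16 for a 4-bit weight, while needing a quarter of the memory of a 32-bit float, which is the balance the proposal is aiming for.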

The three parties plan to release the standard to the public for free, with no licensing requirements. While it is true that companies such as Nvidia would benefit from such a standard, it also makes sense that precisions beyond 8 bits may be too memory-hungry for common AI tasks. Furthermore, Nvidia has plenty of experience in the field of AI and specialises in hardware that works closely with ARM and Intel platforms.

How could such standardisation help with future AI?

One of the biggest advantages of standardised AI hardware is that it will help AI designers target multiple platforms without needing to make numerous code changes. At the same time, ensuring that the vast majority of AI hardware uses the same level of precision helps maintain accuracy when the same network is deployed on different hardware.

The use of open standards also helps to encourage open-source hardware development, such as RISC-V. The combination of an open-source AI platform and a RISC-V processor would help engineers eliminate the licenses, manufacturer dependencies, and royalties that can all raise the price of products, while the open-source nature can help build trust in those products.

Overall, an open standard for AI would massively benefit engineers and users alike, and the choice of 8-bit floating-point data suits modern technology well, providing a balance between accuracy and memory usage.


By Robin Mitchell

Robin Mitchell is an electronic engineer who has been involved in electronics since the age of 13. After completing a BEng at the University of Warwick, Robin moved into the field of online content creation, developing articles, news pieces, and projects aimed at professionals and makers alike. Currently, Robin runs a small electronics business, MitchElectronics, which produces educational kits and resources.