How is Heterogeneous Computing Deployed? – Why it matters

10-11-2021 | By Robin Mitchell

As improvements in silicon begin to slow down, new computing methods are needed to continue the trend of improving performance. What is heterogeneous computing, how has it been deployed in the past and present, and what could future devices look like if the concept is taken to its limits?


What is heterogeneous computing?


A scientific definition of heterogeneous computing would describe a system containing multiple Instruction Set Architectures (ISAs), each specialised for a particular class of task, with tasks assigned to whichever ISA suits them best. A more straightforward description is a computing platform that delegates different tasks and processes to multiple processors. However, unlike a typical multicore processor, heterogeneous computing generally refers to the use of dedicated cores that specialise in specific tasks.

The advantage of heterogeneous computing over homogeneous computing (where each computational unit is identical) is that workloads consisting of different types of tasks (such as graphical processing and advanced mathematics) can be split up and sent to the units that specialise in each task. Splitting a workload this way can reduce both processing time and overall energy usage.
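The delegation idea above can be sketched in a few lines of Python. This is a purely illustrative model, not real hardware code: each "unit" is a hypothetical specialised processor represented by a plain function, and the scheduler routes each task to the unit that specialises in it.

```python
def gpu_render(task):
    # Stand-in for a graphics unit: scale every pixel value
    return [p * 2 for p in task["pixels"]]

def fpu_compute(task):
    # Stand-in for a maths co-processor: sum of squares
    return sum(x * x for x in task["values"])

# The scheduler delegates each task to the unit specialised for it
UNITS = {"graphics": gpu_render, "math": fpu_compute}

def dispatch(task):
    return UNITS[task["kind"]](task)

tasks = [
    {"kind": "graphics", "pixels": [1, 2, 3]},
    {"kind": "math", "values": [1.0, 2.0]},
]
results = [dispatch(t) for t in tasks]
print(results)  # [[2, 4, 6], 5.0]
```

In a real heterogeneous system the dispatch decision is made by the operating system, a driver, or the programmer (for example, when choosing to run a kernel on the GPU), but the principle is the same: match the task type to the unit built for it.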


How is heterogeneous computing currently deployed?


Heterogeneous computing has been around far longer than many may think. The earliest computers were homogeneous, as the focus was on improving processing power and running all tasks on the central processor. However, by the 1970s computers began to ship with co-processors for executing floating-point mathematics, and these can be counted as an early form of heterogeneous computing.

Many early computers used the CPU to process graphics routines, but GPUs quickly became mainstream once it became clear that the CPU was better used for running user applications. GPUs include their own instruction architectures and execution methods that significantly speed up graphics-related code, making them yet another example of heterogeneous computing in widespread use.

The ever-increasing number of security threats has also led to the development of hardware security chips that can analyse data busses in real-time, provide cryptographic functions, and generate keys on the fly. Again, these devices, which are now commonplace in modern machines, demonstrate the advantages of heterogeneous computing.

Cryptographic functions are often complex and mathematically demanding. Moving these functions to an external device therefore frees the CPU to process other tasks.
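The offload pattern can be sketched as follows. This is a hedged software analogy, not real driver code: the worker thread stands in for a dedicated security chip, and the hash call stands in for its hardware crypto engine, leaving the main thread (the "CPU") free while the work happens elsewhere.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def hw_sha256(data: bytes) -> str:
    # Stand-in for a hardware crypto engine computing a digest
    return hashlib.sha256(data).hexdigest()

# The single-worker pool models a dedicated crypto co-processor
with ThreadPoolExecutor(max_workers=1) as crypto_unit:
    future = crypto_unit.submit(hw_sha256, b"sensitive data")
    # The main thread is free to do other work here while the
    # "co-processor" computes the digest in the background
    digest = future.result()

print(digest)
```

On real hardware this role is played by devices such as a TPM or a crypto accelerator, which additionally keep key material out of main memory entirely, something a software sketch like this cannot capture.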


How far could heterogeneous computing be taken?


When it comes to heterogeneous computing, modern technology has only scratched at the surface of what a heterogeneous system could be. As the number of tasks needed to be run by users continues to increase, the need for more CPU cores will also increase. However, tasks themselves may not be growing in complexity, and as such, the cores being integrated into modern CPUs could be made simpler by design. A simpler core can be made physically smaller, allowing more cores to be fitted onto the same silicon die. Thus, future CPUs could have hundreds of RISC cores that, on their own, are not very powerful, but their combination allows for thousands of concurrent tasks with ease.
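The many-simple-cores idea can be modelled with a worker pool, where each lightweight worker stands in for a simple RISC core handling one small task at a time. This is an illustrative sketch only; the core count and workload are arbitrary stand-ins.

```python
from concurrent.futures import ThreadPoolExecutor

def simple_core(task_id: int) -> int:
    # Each "core" performs only a trivial amount of work,
    # mirroring the idea of many weak but numerous cores
    return task_id * task_id

# 100 workers stand in for 100 simple cores sharing a die
with ThreadPoolExecutor(max_workers=100) as cores:
    results = list(cores.map(simple_core, range(1000)))

print(results[:5])  # [0, 1, 4, 9, 16]
```

The point of the sketch is the throughput argument: no single worker is powerful, but a large pool of them clears a long queue of small independent tasks with ease, which is exactly the trade-off the paragraph above describes.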

The introduction of resource-heavy tasks such as AI will also see the introduction of more specialised chips. Thus, future devices may start to rely less on the sheer power of the CPU and instead begin to see reliance on co-processors that can be assigned specific tasks.

If taken to its logical conclusion, this trend could see future computer systems give individual applications their own hardware cores to run on. For example, a rack system with many hundreds of slots could allow applications, delivered in the form of individual processing cores, to be inserted as an installation. Applications such as word processors, graphics editors, and personal assistants would then execute on their own individual cores, ensuring that no process ever interferes with another while each runs as efficiently as possible.

Overall, heterogeneous computing presents many advantages, including increased energy efficiency, improved performance, and freeing up main system resources. Exactly how far heterogeneous computing will be taken is unknown, but future computing systems could differ significantly from what we use today.


By Robin Mitchell

Robin Mitchell is an electronic engineer who has been involved in electronics since the age of 13. After completing a BEng at the University of Warwick, Robin moved into the field of online content creation, developing articles, news pieces, and projects aimed at professionals and makers alike. Currently, Robin runs a small electronics business, MitchElectronics, which produces educational kits and resources.