How software needs to pick up the slack in Moore’s Law

10-09-2020 | By Robin Mitchell

Software engineers have long relied on Moore's Law to wring more processing power out of semiconductors, but the steady growth in instructions per second is nearing its end. What problems does the semiconductor industry face, why can't software engineers count on ever-shorter instruction cycle times, and how should software adapt to changes in computer architecture?

Why is Moore’s Law ending?

Since the dawn of the transistor, the number of transistors that can fit on a single chip has roughly doubled every two years, and this trend has essentially defined the capabilities of electronics. The observation is known as Moore's Law, and it has greatly shaped the CPU industry, with each generation of CPU packing in more transistors and thus delivering more performance. Alongside it, a second trend held: CPU clock rates also rose as semiconductor technology improved. This increase in clock rate directly increased the number of instructions executed per second, and it was this effect that drove the major improvements seen in software.


However, shrinking transistors is becoming increasingly difficult, and it won't be long before further reductions are all but impossible. When that happens, designers expecting rising transistor densities to deliver greater performance will be in for a shock and will have to look to alternative methods. There is also a second problem with current semiconductor technology: it is already clocked about as fast as it can go. When a CPU's frequency increases from 100MHz to 200MHz, the number of instructions it can execute per second doubles, allowing it to run two identical tasks simultaneously or a single task twice as fast. However, for the past decade, consumer CPU speeds have been unable to push past the 5GHz boundary, and 8GHz has only been achieved with advanced liquid nitrogen cooling. As a result, improved computer performance has come from other methods, such as integrating multiple cores, speeding up memory transfers, and moving software routines into specialised hardware accelerators.
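As a rough sketch of the arithmetic (the figures below are illustrative assumptions, not measurements from any real CPU), peak instruction throughput is simply clock rate multiplied by instructions per cycle, and extra cores multiply the total without making any single instruction stream finish sooner:

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical figures for illustration: a simple in-order core
     * retiring one instruction per clock cycle (IPC = 1). */
    double clock_hz = 200e6;  /* 200 MHz core clock      */
    double ipc      = 1.0;    /* instructions per cycle  */
    int    cores    = 4;      /* independent cores       */

    /* Per-core throughput scales linearly with clock speed... */
    double per_core = clock_hz * ipc;

    /* ...while extra cores multiply total throughput, but do nothing
     * to make an individual instruction execute any faster. */
    double total = per_core * cores;

    printf("Per-core: %.0f MIPS\n", per_core / 1e6);
    printf("Total:    %.0f MIPS\n", total / 1e6);
    return 0;
}
```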

How has software improved over time?

While software has gained better features and capabilities over time (such as graphics rendering), most of that improvement has come from the hardware underneath. For example, the mathematics for solving a complex matrix calculation has long been understood, and expressing it in software has been possible since the first computers. It is improvements in hardware (such as hardware multipliers and matrix solvers), not better ways of writing the software, that allow modern applications to solve such problems quickly.
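A minimal sketch makes the point: the triple-loop matrix multiply below is essentially the same algorithm a programmer could have written decades ago, and it runs quickly today largely because modern CPUs can map its inner loop onto SIMD and fused multiply-add hardware, not because the code itself has changed.

```c
#include <stddef.h>

/* Textbook matrix multiply: C = A * B for square n x n matrices
 * stored row-major. The algorithm is unchanged from the earliest
 * computers; what changed is the hardware underneath. A modern
 * compiler can vectorise the inner loop, executing several of these
 * multiply-accumulate steps per clock cycle. */
void matmul(size_t n, const double *a, const double *b, double *c) {
    for (size_t i = 0; i < n; i++) {
        for (size_t j = 0; j < n; j++) {
            double sum = 0.0;
            for (size_t k = 0; k < n; k++)
                sum += a[i * n + k] * b[k * n + j];  /* one fused multiply-add on modern CPUs */
            c[i * n + j] = sum;
        }
    }
}
```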

When considering operating systems, it is remarkable that, despite a modern computer having eight times the cores and sixteen times the RAM of one from 20 years ago, booting Windows 7 can take longer than booting Windows XP did. Granted, this is partly because modern operating systems load a large number of processes and background services (my system, with only Word open, is clocking in at 110 processes). More cores allow more processes to run simultaneously, but they do not execute individual instructions any faster.

Thus, to say that software itself has improved is a stretch: while feature-rich applications are now available, most of those features are the result of better-performing hardware.

How is semiconductor hardware adapting to this change?

Since shrinking transistors is becoming less of an option, semiconductor designers have several paths to take. One is to build 3D dies that stack multiple layers of transistors, increasing the transistor count and thus the number of cores. However, as noted earlier, simply adding cores does not make individual instructions execute faster, and transistors cannot switch any faster than they already do. So instead of attacking instruction throughput directly, hardware engineers are moving work from the software realm into hardware, creating dedicated circuits that perform in moments tasks that software takes a long time to complete.

For example, software multiplication on a CPU without a multiplier involves repeatedly adding one number to an accumulator, with each addition consuming an instruction cycle. A hardware multiplier, by contrast, can multiply two numbers in as little as a single clock cycle, greatly improving performance, as the sketch below illustrates. Another example of hardware acceleration is cryptographic accelerators, which can perform tasks such as AES encryption and key generation far faster than software routines.
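The contrast is easy to see in C. In the sketch below, mul_repeated_add is an illustrative software routine of the kind an early CPU had to run, while the plain a * b in mul_hardware compiles to a single multiply instruction on any CPU that has a hardware multiplier:

```c
#include <stdint.h>

/* Software multiplication by repeated addition: adds 'a' to an
 * accumulator 'b' times, so the cost grows with the value of 'b'.
 * CPUs without multiply hardware had to loop like this (or use the
 * faster shift-and-add variant). */
uint32_t mul_repeated_add(uint32_t a, uint32_t b) {
    uint32_t result = 0;
    while (b--)
        result += a;   /* one add instruction per iteration */
    return result;
}

/* On a CPU with a hardware multiplier, the same operation compiles
 * to a single multiply instruction, completing in a fixed handful of
 * cycles regardless of the operand values. */
uint32_t mul_hardware(uint32_t a, uint32_t b) {
    return a * b;
}
```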

What does software need to do?

For software to improve, the first task is to make the most of hardware acceleration. This can be a problem with generic compilers, which may fall back on software routines instead of the hardware equivalents. Even when a software engineer does make use of hardware accelerators, targeting as many platforms as possible becomes harder, because not all CPUs support the same hardware-accelerated features. Clever software systems (such as operating systems) can detect what the hardware supports and tailor their behaviour to the machine, but such detection is harder to integrate into user applications; a minimal sketch of the idea follows below. Writing efficient code for microcontrollers is far easier, as such systems rarely have privilege mechanisms, and even when they do, the programmer will most likely have access to privileged areas, enabling maximum use of hardware accelerators.
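As a minimal sketch of runtime feature detection, the snippet below uses the GCC/Clang builtins __builtin_cpu_init() and __builtin_cpu_supports() (x86-only and compiler-specific); the two encrypt routines are hypothetical placeholders for an application's own code paths:

```c
#include <stdio.h>

/* Hypothetical stand-ins for an application's real encryption paths. */
static void encrypt_aesni(void)    { puts("using AES-NI instructions"); }
static void encrypt_software(void) { puts("using a software AES routine"); }

int main(void) {
    /* Populate the compiler's CPU feature flags, then ask whether
     * this particular CPU supports the AES-NI instructions. */
    __builtin_cpu_init();
    if (__builtin_cpu_supports("aes"))
        encrypt_aesni();      /* take the hardware-accelerated path */
    else
        encrypt_software();   /* fall back to a portable routine    */
    return 0;
}
```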

Overall, for software to help pick up the slack left by the end of Moore's Law, both software engineers and operating system developers need to rethink how software is run. Operating system designers first need to accept that running 100 background services may not be ideal, and that keeping an operating system backwards compatible with DOS applications means dragging ancient architecture along underneath. Software engineers, meanwhile, may need to start considering the language they code in, how that language is executed or interpreted, and whether hardware accelerators are available to them.

By Robin Mitchell

Robin Mitchell is an electronic engineer who has been involved in electronics since the age of 13. After completing a BEng at the University of Warwick, Robin moved into the field of online content creation, developing articles, news pieces, and projects aimed at professionals and makers alike. Currently, Robin runs a small electronics business, MitchElectronics, which produces educational kits and resources.