12-12-2022 | By Robin Mitchell
In continuation with Moore’s Law, Intel has recently announced that it fully intends to achieve 1-trillion-transistor devices by 2030. What challenges do engineers face when shrinking transistor sizes, how would Intel achieve a 1-trillion-transistor device, and what would such a device be capable of?
As engineers begin to approach the atomic scale in transistor design, sensational news articles declaring Moore’s Law dead are constantly being published, and yet engineers continue to shrink transistors. While the rate of shrinking seems to have slowed marginally, engineers continue to reduce feature sizes, with the latest devices hitting the 3nm mark as of 2022. But to say that Moore’s Law is dead is simply untrue, and even if engineers hit the theoretical limit for transistor sizes, other solutions, such as chiplets and 3D stacking, will allow transistor counts to increase.
Remember, Moore’s Law simply states that the number of transistors on a chip will double roughly every two years, but it never specifies how that might be achieved. But when trying to shrink transistors down, there are many challenges that engineers have to solve, and with each new generation of technology, the challenges become exponentially harder to solve.
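To see why Intel’s 2030 target is consistent with that doubling trend, a quick back-of-the-envelope projection helps. The starting figure below (~100 billion transistors in 2022, in the ballpark of the largest chips of the time) is an illustrative assumption, not an Intel figure:

```python
# Rough Moore's Law projection: transistor count doubling roughly every two years.
# The 2022 starting count is an illustrative assumption, not a quoted figure.

def project_transistors(start_year, start_count, end_year, doubling_period=2):
    """Return the projected transistor count at end_year under steady doubling."""
    doublings = (end_year - start_year) / doubling_period
    return start_count * 2 ** doublings

count_2030 = project_transistors(2022, 100e9, 2030)
print(f"Projected 2030 count: {count_2030:.1e}")  # 1.6e+12, past the 1 trillion mark
```

Four doublings between 2022 and 2030 take ~100 billion transistors to ~1.6 trillion, so a 1-trillion-transistor device by 2030 sits comfortably on the historical curve.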
One such challenge is the shrinking of die patterns down to the atomic scale. Historically, exposing patterns into wafers has been done using lithography, where a large mask pattern is projected onto the wafer through a reduction lens. However, as transistor features became smaller than the wavelength of light being used to create them, traditional photomasks were not able to provide accurate results. As such, engineers have had to turn to extremely complex masks that utilise multiple layers and patterns to produce the desired shape, but creating these masks is a speciality in its own right.
Another challenge engineers face when shrinking transistor sizes is the numerous quantum effects that can affect transistor performance. For example, macro-sized transistors behave in accordance with classical physics, whereby insulation barriers insulate against electrical current. However, transistors on the atomic scale are subject to quantum phenomena such as quantum tunnelling, and this makes controlling such transistors challenging. Furthermore, such effects make it difficult to pack transistors too closely together; otherwise, they can interfere. This interference has already been exploited by hackers in Rowhammer-style DRAM attacks that can read the contents of neighbouring cells as well as force values by toggling DRAM bits in adjacent cells.
Finally, shrinking transistor sizes introduces numerous thermal challenges with chips. Even though shrinking transistor sizes usually results in less energy consumption per device, the increased density of transistors can make it difficult to extract waste heat, and this will impact the performance of the device.
These are but a handful of the challenges engineers face when shrinking transistors, yet engineers continue to solve these challenges in the most brilliant of ways.
In keeping with the trend set out by Moore’s Law, Intel recently announced that it fully intends to continue the trend well into 2030 with plans to release a 1-trillion-transistor device by that date. According to Intel, it will likely exploit materials with a thickness of just three atoms to replace silicon as the transistor channel, and such sizes will allow for extremely dense devices. At the same time, Intel also expects to fully utilise its 3D chip technologies to further help increase transistor densities.
Slides from a recent presentation describe how future devices will likely use molybdenum disulphide (MoS2) as the active channel, and the use of RibbonFET technology, whereby individual channel slices are surrounded entirely by the gate, will enable such transistors to operate. Additionally, gold would be used for both top and side contacts with the MoS2 structure. But, even though such a device would utilise a monolayer of MoS2, the feature size of the device would not be three atoms, as the rest of the transistor structure would still be comparatively bulky.
It is hard to imagine what a 1-trillion-transistor device would be capable of, but whether such devices would be commonplace is yet to be determined. It might be possible that such devices are made available only to high-end customers and researchers, but it’s also possible that these devices could be used in mobile applications.
For comparison, a modern processor contains around 2 billion transistors, and a modern GPU contains as many as 28 billion transistors. That would mean, by modern standards, a 1-trillion-transistor device would be able to contain 500 modern processors or around 35 modern GPUs on a single device. If such a device were made into a commercial off-the-shelf chip, a single computer would have the processing capability of a small data centre and the graphics processing of an entire modern film studio several times over.
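The comparison above is simple integer arithmetic on the article’s own figures, sketched below:

```python
# Back-of-the-envelope comparison: how many of today's chips fit in a
# 1-trillion-transistor budget, using the figures quoted in the article.

TARGET = 1_000_000_000_000   # 1 trillion transistors
CPU = 2_000_000_000          # ~2 billion (modern processor)
GPU = 28_000_000_000         # ~28 billion (modern GPU)

print(TARGET // CPU)  # 500 processor-equivalents
print(TARGET // GPU)  # 35 GPU-equivalents (1e12 / 28e9 ~= 35.7)
```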
However, it is more than likely that such a device would integrate all components of a computer on a single chip, including RAM, peripherals, co-processors, GPU, and even storage. This would result in a massive speed increase as all system components are extremely close together. In fact, such technology may see DRAM replaced with SRAM to provide high-speed, low-energy memory or, better, universal memory where everything, including RAM, is stored in the same memory space.
In addition, such a device would also integrate large quantities of programmable hardware to support neural networks. The use of programmable hardware allows for improved training of AI while retaining the speed of hardware-driven AI, and such an array could even be made into a general-purpose scratch pad. It is also possible that future software with specific processing requirements could configure the CPU with software-defined co-processors to accelerate its performance.
The idea of a 1-trillion-transistor device is truly mind-boggling, and I, for one, cannot wait to see what such a device can do. It is possible that such devices will not be accessible to the general public, but one can hope!