Samsung announces LPDDR5X DRAM for metaverse and mobile AI applications

15-11-2021 | By Robin Mitchell

Recently, Samsung announced its latest memory technology that will be targeted towards the metaverse and mobile AI applications. What features does the new memory have, why is RAM so important in AI applications, and could dedicated AI hardware make such devices redundant?


Samsung announces LPDDR5X


Recently, Samsung announced its latest memory technology, LPDDR5X, which it says will help power the next generation of technology, including the metaverse and mobile AI. The new memory device, built on a 14nm silicon process, has a per-die capacity of 16Gb and is approximately 1.3x faster than the previous LPDDR5 generation. Samsung also states that LPDDR5X uses up to 20% less power than previous LPDDR5 solutions, making it ideal for mobile applications where energy efficiency is paramount.

The new memory technology will also offer data rates of 8.5Gbps (compared to LPDDR5's 6.4Gbps) and will be available in packages with capacities of up to 64GB. The large capacity and high speed make the new memory suitable for mobile AI applications as well, including self-driving vehicles, drone automation, and personal assistants on smartphones.
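To put those figures in perspective, the short sketch below (Python) checks the per-pin speedup implied by the quoted data rates and estimates aggregate bandwidth; the 64-bit bus width is an assumption for illustration, not a figure from the announcement.

```python
# Rough bandwidth comparison of LPDDR5 vs LPDDR5X (rates from the announcement).
LPDDR5_RATE_GBPS = 6.4    # per-pin data rate, Gbps
LPDDR5X_RATE_GBPS = 8.5   # per-pin data rate, Gbps
BUS_WIDTH_BITS = 64       # assumed package bus width, illustration only

speedup = LPDDR5X_RATE_GBPS / LPDDR5_RATE_GBPS
print(f"Per-pin speedup: {speedup:.2f}x")  # ~1.33x, matching the ~1.3x claim

# Aggregate bandwidth in gigabytes per second for the assumed 64-bit bus
bandwidth_gbs = LPDDR5X_RATE_GBPS * BUS_WIDTH_BITS / 8
print(f"Aggregate bandwidth: {bandwidth_gbs:.1f} GB/s")  # 68.0 GB/s
```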

Why is RAM so important in AI applications?


The announcement by Samsung contains many buzzwords, including metaverse and AI, and there is a good reason for this; such technologies are heavily dependent on RAM. While DRAM is not the fastest memory technology (SRAM holds that record), it offers a very good trade-off between speed, size, and cost. DRAM is complex to interface with, requires refresh cycles, and its reads are destructive (data must be rewritten after each read). Still, it can provide large memory sizes at a very low cost, as each memory cell is just a single transistor and capacitor.

These advantages of DRAM play very well with AI when considering how AI operates. The most common method used for implementing AI is a neural network, which involves a series of nodes connected to each other via links. Signals propagate through the network from the input to the output, travelling across the various nodes and links. Each node weights its inputs and combines them to produce an output. The result is a network whose weighted values can be adjusted (i.e., trained) to produce the correct response the next time it receives a particular signal.
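To make the idea concrete, here is a minimal sketch (in Python, with hypothetical names) of a single node: it multiplies each input signal by its learned weight, sums the results, and passes the sum through an activation function.

```python
import math

def node_output(inputs, weights, bias=0.0):
    """One neural-network node: a weighted sum of inputs through an activation."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-weighted_sum))  # sigmoid activation

# Signals propagate by feeding one layer's outputs into the next layer's nodes.
signals = [0.5, 0.8, 0.1]    # outputs from the previous layer
weights = [0.4, -0.6, 1.2]   # per-link weights, adjusted during training
print(node_output(signals, weights))
```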

So why is RAM important in this case? Each node in a neural network is generally a simple mathematical operation that sums the value of all its inputs and then produces an output. This is computationally simple, but the challenge is the sheer number of connections involved. Each of these connections needs data stored about itself, including its weighting factor and the signal it’s currently carrying.

A neural network with a few nodes is trivial, but scaling up to thousands of nodes sees the RAM requirements skyrocket. For example, a 50-layer neural net can have up to 26 million weighted parameters; if each parameter is an 8-bit number, the weights alone need 26 million bytes, or 26MB. The problem worsens on hardware such as GPUs that work with 32-bit numbers: the same weights then occupy over 100MB, and once activations, gradients, and working buffers are held in memory as well, the total requirement can quickly scale into the gigabytes for AI that must respond promptly.
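A quick back-of-the-envelope calculation (Python, using the figures above) shows how parameter storage grows with numeric precision; note it covers the weights only, which is why real-world totals climb far higher.

```python
# Back-of-the-envelope memory estimate for a network's weights.
# Figures match the example in the text; 1 MB taken as 1e6 bytes.
PARAMETERS = 26_000_000   # ~26 million weights in a 50-layer network

for bits in (8, 32):
    megabytes = PARAMETERS * bits / 8 / 1e6
    print(f"{bits}-bit weights: {megabytes:.0f} MB")
# 8-bit weights:  26 MB
# 32-bit weights: 104 MB
# Activations, gradients, and working buffers multiply this further,
# which is how total memory use climbs into the gigabytes.
```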


Will dedicated AI hardware make such RAM obsolete?


The main challenge with integrating AI into modern tech is that AI workloads are fundamentally different from how computers traditionally function. Digital computers operate by executing instructions line by line, whereas neural networks work more like analogue electronics, with each node reading voltage inputs and producing a corresponding voltage output.

Because of this, many companies continue to develop hardware specifically designed to execute neural networks. Such devices operate as co-processors alongside a traditional CPU, freeing the CPU to handle other tasks while the AI co-processor focuses solely on AI execution. Furthermore, dedicated hardware can almost always execute neural nets far more efficiently than CPUs and GPUs, which is ideal for mobile AI applications looking to reduce energy usage.

Even so, high-capacity memory such as LPDDR5X won't be replaced by AI chips; it will instead be used in conjunction with them. External memory will remain important unless an AI processor uses memristor technology, whereby the network's weights can be programmed into the chip itself. The LPDDR5X developed by Samsung will remain an essential part of modern AI applications.


By Robin Mitchell

Robin Mitchell is an electronic engineer who has been involved in electronics since the age of 13. After completing a BEng at the University of Warwick, Robin moved into the field of online content creation, developing articles, news pieces, and projects aimed at professionals and makers alike. Currently, Robin runs a small electronics business, MitchElectronics, which produces educational kits and resources.