Intel Memory Market Reboot: SoftBank Partnership Plan

Insights | 13-06-2025 | By Robin Mitchell

Key Takeaways:

  • AI memory bottlenecks are slowing the progress of large-scale models, with current GPU limitations at the centre of the problem.
  • Standard memory technologies like DDR4/5 can’t deliver the bandwidth AI workloads demand, prompting a shift to specialised memory solutions.
  • Intel and SoftBank have formed Saimemory to develop a power-efficient alternative to High Bandwidth Memory (HBM), targeting deployment by 2027.
  • This move could mark Intel’s strategic re-entry into the memory market, aligning with Japan’s goal to revitalise domestic chip production.

Artificial intelligence continues to push technological boundaries, but its advancement depends as much on hardware as it does on algorithms. As models grow in complexity and scale, one persistent bottleneck remains: memory. The current solutions—though powerful—struggle to keep pace with AI’s escalating demands for bandwidth, speed, and energy efficiency.

What are the limits of existing AI memory hardware, how are companies like Intel and SoftBank tackling these challenges, and could this mark the beginning of a new era in memory architecture?

The Challenge with AI Memory

Artificial intelligence has established itself as both profitable and beneficial, despite still being in the early stages of development. While its current impact is impressive, it remains a rapidly evolving technology: both software and hardware systems continue to be refined to meet AI's growing demands. One of the most persistent bottlenecks is memory.

On the hardware front, graphics processing units (GPUs) have become the industry standard for executing AI workloads. Their massively parallel architecture makes them ideal for handling the matrix and tensor operations that AI models depend on. However, with the ever‑increasing scale of AI models and their parameter counts, GPUs are hitting a wall. The data throughput and memory capacity required to train and run these models efficiently are outpacing what most commercially available GPUs can deliver.

The core issue is that GPUs do not follow the same modular upgrade path as traditional computing systems. You cannot simply add a DDR module to a GPU the way you would on a PC motherboard. The memory that comes with a GPU is soldered directly to the board, which means you are locked into whatever capacity the manufacturer provides. In most cases, this is nowhere near enough for modern large‑scale AI workloads.

This limitation is compounded by the fact that general‑purpose memory technologies like DDR4 and even DDR5 cannot keep up with the speed and bandwidth requirements of AI workloads. When memory bandwidth becomes a bottleneck, performance collapses. At best, this results in longer processing times. At worst, it means certain models cannot be run at all.
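The scale of the gap is easy to sketch with rough arithmetic. Generating a single token with a large language model requires streaming essentially every weight from memory, so throughput is capped by bandwidth divided by model size. The figures below are illustrative round numbers, not vendor specifications:

```python
# Back-of-envelope: why memory bandwidth, not compute, caps large-model
# inference. All figures are illustrative round numbers, not vendor specs.

def tokens_per_second(params_billion: float, bytes_per_param: int,
                      bandwidth_gb_s: float) -> float:
    """Each generated token requires streaming every weight from memory
    once, so throughput is roughly bandwidth / model size in bytes."""
    model_bytes = params_billion * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / model_bytes

# A 70-billion-parameter model in FP16 (2 bytes per weight) = ~140 GB.
ddr5 = tokens_per_second(70, 2, 80)     # ~80 GB/s: dual-channel DDR5-class
hbm3 = tokens_per_second(70, 2, 3000)   # ~3 TB/s: HBM3-class GPU memory

print(f"DDR5-class: {ddr5:.2f} tokens/s")
print(f"HBM3-class: {hbm3:.1f} tokens/s")
```

On these assumed numbers, a DDR5-class system manages roughly half a token per second, while HBM-class bandwidth delivers over twenty: the difference between an unusable model and a responsive one.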

To address this, engineers and hardware vendors have turned to more specialised solutions. High Bandwidth Memory (HBM) and GDDR variants provide significantly faster performance than standard memory technologies. While these are often paired with high-end GPUs, they come at a cost: they are expensive to produce, difficult to scale, and power-hungry. That power demand translates directly into heat, which in turn requires more aggressive cooling and drives up energy consumption.

The result is a compound problem: high-power memory systems exacerbate the energy and thermal challenges already faced by data centers and edge AI deployments.

Intel and SoftBank Partner to Develop Power‑Efficient DRAM Substitute for AI Data Centers

In a move to enhance the efficiency of AI data centers, Intel and SoftBank have joined forces to develop a stacked‑DRAM substitute that could potentially replace HBM chips. According to a recent report by Nikkei Asia, the two companies created Saimemory, a joint venture aimed at creating a prototype based on Intel’s technology and patents from Japanese academics, including the University of Tokyo.

Timeline update: The partners are now targeting first silicon in 2027, with mass-production viability studies commencing the same year, one year later than initially signposted.

Saimemory is launching with an initial capital outlay of ¥10 billion (about US $70 million), and Japan’s Ministry of Economy, Trade and Industry (METI) has indicated it may provide additional backing. If all goes to plan, the JV will evaluate mass production on Rapidus’s forthcoming 2 nm process line, slated for 2028.

The goal is to halve power consumption versus comparable HBM chips by wiring DRAM stacks more efficiently. If successful, Saimemory plans to prioritise supply to the Japanese data‑center market.

Currently, only three companies produce HBM chips: Samsung Electronics, SK Hynix, and Micron. (Memory rival Kioxia has announced plans to enter HBM production in 2026.) The surge in AI demand has strained supply chains, making it difficult for data centers to obtain enough HBM. By developing an alternative, Saimemory aims to relieve that bottleneck and help Japan reclaim a role it last dominated in the 1980s, when it controlled 70% of global DRAM output.

This collaboration marks Intel’s first venture into Japan’s memory chip industry in more than 20 years. Intel’s involvement underscores the growing importance of AI data centers and the search for power‑efficient solutions. The partnership demonstrates both companies’ commitment to innovation and to tackling the challenges faced by the AI industry.

Could This Be Intel’s Ticket Back into the Memory Game?

Intel’s history is long, and despite its modern association with CPUs, its origin story is rooted in memory. Before the world recognised Intel as the powerhouse behind the x86 architecture, it was building memory chips, specifically SRAM and DRAM. These early products laid the foundation for the company’s rise, though they were soon eclipsed by the overwhelming success of Intel’s microprocessors.

With the CPU market becoming the company’s primary focus for decades, memory development gradually fell off Intel’s strategic roadmap. That decision made sense at the time, as processor innovation was driving demand and margins were higher. But the landscape has since shifted: AI is no longer a niche application, and general‑purpose CPUs are no longer the only architecture of relevance. The computational requirements of AI are forcing a re‑evaluation of where performance gains must come from, and memory is now a key part of that equation.

Re‑entering the memory market—specifically with a new architecture aimed at AI workloads—is not just timely; it is potentially transformative. Intel has decades of experience in silicon process design, packaging, and vertical integration. If leveraged correctly, this technical foundation gives the company a head start. Intel is uniquely positioned to design AI accelerators and pair them with memory subsystems optimised for tight integration. Unlike third‑party vendors that must conform to standard interfaces, Intel can build tightly coupled hardware stacks optimised down to the transistor level.

The partnership with SoftBank through Saimemory could be the first significant move in this direction. By aligning itself with academic research and focusing on efficiency rather than bandwidth alone, Intel could leapfrog the conventional limitations of high-bandwidth memory. If the company moves fast, it may have a real chance to shape the future of AI memory, not just follow it.

That said, competition is fierce. The memory landscape is no longer what it was when Intel first exited. South Korean and Taiwanese firms dominate the market with vertically scaled supply chains and highly optimised manufacturing. Even newer players are entering with specialised solutions aimed directly at AI workloads. Intel will have to execute flawlessly, and that includes managing cost, yield, power efficiency, and supply‑chain logistics—all while keeping up with AI’s rapid evolution.

Still, this moment may be as close as Intel will get to a natural inflection point for re‑entry. The opportunity is real, and if Intel acts with precision, it could re‑establish itself not just as a CPU leader but as a critical architect of modern computing infrastructure. The company once led the industry by making bold, technically sound decisions. Whether it can still do that in the current environment will soon be tested.

By Robin Mitchell

Robin Mitchell is an electronic engineer who has been involved in electronics since the age of 13. After completing a BEng at the University of Warwick, Robin moved into the field of online content creation, developing articles, news pieces, and projects aimed at professionals and makers alike. Currently, Robin runs a small electronics business, MitchElectronics, which produces educational kits and resources.