Why memory chips are the new frontier of the AI revolution
Innovation has helped push SK Hynix ahead of Samsung — and put chipmakers in the middle of growing US-China tensions
Christian Davies in Icheon
High bandwidth memory designs, such as the HBM3E produced by both SK Hynix and Samsung, are transforming the industry and shifting the historic order of the top players © FT montage/Bloomberg

At SK Hynix’s vast M14 chip fabrication plant, workers in white, pink and blue clean-room suits inspect rows of machines as 700 robots zip along overhead rails, carrying silicon wafers between different stages of the manufacturing process.
The factory, at the South Korean company’s main campus in the city of Icheon, produces high bandwidth memory (HBM) chips, capable of transferring data equivalent to 200 feature-length movies every second.
For decades, memory chips were the unglamorous end of the semiconductor industry, overshadowed by the logic or processor chips designed and produced by companies such as AMD, Qualcomm, Nvidia and TSMC to conduct calculations and control an electronic device’s operations.
But HBM designs, such as the HBM3E produced at the Icheon factory, are transforming the memory industry. Joon-yong Choi, vice-president and head of HBM business planning at SK Hynix, notes that whereas in conventional dynamic random-access memory (Dram), “cost was prioritised by customers over power and performance, with HBM power and performance are prioritised over cost”.
They are helping developers of so-called large language models alleviate the effects of the “memory wall” — where limitations in storing and retrieving data are an impediment to improving performance — as well as boosting efficiency and lowering costs at thousands of data centres under construction around the world.
The growing importance of memory to AI has placed the sector at the centre of intensifying competition between Washington, which is trying to restrict Chinese access to cutting-edge technology, and Beijing, which is nurturing a domestic semiconductor sector that it hopes can go toe-to-toe with global rivals.
It has also shifted the historic order of the industry’s top players.
SK Hynix’s revenues from Dram, of which HBM is a subset, swelled from Won7.5tn ($5.4bn) in the second quarter of 2021 to Won17.1tn in the same quarter of 2025, pushing them past those of its great rival Samsung for the first time since the two Korean groups started competing in the memory market in the 1980s.
“The idea that SK Hynix could surpass Samsung would have been unthinkable as recently as five years ago,” says Chris Miller, associate professor at Tufts University and author of Chip War.
“It would be like Dr Pepper suddenly becoming more popular than Coca-Cola.”
“It became clear decades ago that the commodity dynamic in the memory market would make it very hard to make outsized profits,” says Miller.
That prompted many of the brightest minds, and ambitious entrepreneurs like Nvidia’s Jensen Huang and Qualcomm’s Irwin Jacobs, to turn their attention to processor chips, he adds.
“But now, memory is back.”
Intel began life in the 1960s as a memory chip company, but exited the Dram sector in the 1980s under pressure from Japanese rivals Toshiba and NEC.
They, in turn, were supplanted in the 1990s by Samsung and the chip division of Hyundai Electronics, or Hynix, which would later be acquired by the SK conglomerate.
The two Korean groups and Micron, of the US, have dominated the sector ever since.
Samsung was until recently the undisputed leader of the heavily commoditised market in Dram chips, which store data temporarily while a device is powered on.
It used its superior scale to invest in production capacity during the cyclical industry’s regular downturns.
Choi explains that while Dram and lower-value Nand chips — which store data for longer periods without power — were the dominant technologies, companies also experimented with more niche products.
HBM chips, which Hynix began developing in 2013, were among them.
They involved stacking layers of Dram units connected by copper wires a tenth of the thickness of a human hair, like a multistorey library with lifts to quickly transport piles of books between floors.
That means HBM chips can offer 1,024 pathways for sending data to and from a processor, Choi explains, compared with 64 for conventional advanced Dram chips.
“Think of it like the number of taps filling a water tank, or the number of lanes on a highway,” he says.
“When it comes to the memory requirements of AI, nothing comes close to HBM.”
Ray Wang, lead semiconductor analyst at the Futurum Group consultancy, also points to Hynix’s early adoption of an advanced bonding technology called mass reflow-molded underfill, or MR-MUF, as key to its HBM success.
It involves the use of a special resin-based insulation material to prevent overheating, crucial when stacking up to 16 Dram chips on top of each other.
Hynix’s exclusive contract for the material with its Japanese supplier Namics Corporation forced Samsung and Micron to settle for an inferior manufacturing process involving high temperatures and strong forces, both of which can crack the silicon layers and result in a higher failure rate.
Its superior product helped Hynix to secure its position as the principal supplier of HBM chips to Nvidia and allowed it to ride on the US company’s coat-tails as demand for AI chips exploded after OpenAI’s ChatGPT chatbot was released in late 2022.
HBM’s share of Hynix’s overall Dram revenues went from about 5 per cent in the final quarter of 2022, according to Bernstein research, to more than 40 per cent by the first quarter of 2025.
Myron Xie of the consultancy SemiAnalysis notes that while Micron’s HBM3E chips have now passed the stringent qualification tests for use in Nvidia’s most advanced AI chips, Samsung’s equivalent is yet to do so.
People close to the company say its HBM3E chip is set to pass Nvidia’s tests “imminently”.
But they also acknowledge that, as the dominant player in what was until recently a commoditised market, it was caught cold by the needs of AI players for more customised memory solutions tailored to their specific requirements.
“Samsung is also struggling with the bread-and-butter task of producing the cutting-edge Dram chips that end up getting stacked into HBM,” says Xie.
“Micron has done well, but it’s also pretty damning against Samsung for it to be in third place.”
Miller adds that Samsung “was optimised for the smartphone era” rather than the AI one.
“I think the entire organisation has struggled to conceive of what the world would look like when the smartphone was no longer the dominant product.”
It has proved a costly stumble.
Peter Lee, a Seoul-based semiconductor analyst at Citigroup, notes that HBM chips offer profit margins of about 50-60 per cent, compared with about 30 per cent for conventional Dram units.
Because each HBM chip needs to be designed to fit the specific AI graphics processing unit with which it is paired, orders must be placed a year before production, typically on one-year contracts.
“That gives memory companies much more pricing leverage over potential customers than when selling conventional Dram, which can be bought at a month’s or even a day’s notice and easily swapped for the product of a rival chipmaker,” adds Lee.
Wang estimates that Samsung’s mis-steps have cost it tens of billions of dollars a year in lost revenue since ChatGPT’s release.
“They should have been aware of the implications for memory demand of the rise of machine learning,” he says.
“Underestimating the potential of HBM was a huge strategic mistake.”
If the rise of HBM has disrupted the old order at the top end of the memory market, another source of disruption is emerging from below: Chinese memory champion ChangXin Memory Technologies, or CXMT.
Based in Hefei, in the eastern province of Anhui, CXMT has increased its share of the global Dram market from close to zero in 2020 to 5 per cent last year, according to Shenzhen-based consultancy Qianzhan.
It remains unclear whether CXMT’s progress in conventional Dram will allow it to catch up with Hynix, Samsung and Micron in mass producing cutting-edge HBM chips — a development that would potentially reduce the dependence of Chinese AI developers and chipmakers on foreign companies for critical components.
The Financial Times reported last month that CXMT was testing samples of HBM3 products — one generation behind HBM3E — with a target to launch next year.
But analysts and industry insiders remain sceptical that, without access to key equipment and materials that are subject to US export controls, CXMT can close the HBM gap in the near future.
“CXMT stockpiled a lot of the equipment it needed before the most recent round of controls,” says Futurum Group’s Wang.
“But it cannot access extreme ultraviolet machines, and it is not clear that they have enough equipment to mass produce advanced HBM products at a similar scale to the leading memory players.”
He estimates CXMT is “three to four years behind” in HBM development.
Last week, the US government revoked waivers that had allowed Hynix and Samsung to send chipmaking equipment to their manufacturing facilities in China without a licence, a decision that Wang says “underscores Washington’s intent to further limit China’s access to memory technology”.
In a sign of the importance of HBM for China’s AI ambitions, the country’s tech groups and chipmakers rushed to stockpile Korean HBM before its availability was restricted by US export controls in December last year.
Despite those controls, many experts argue that US policymakers have been slow to recognise memory’s centrality to AI performance, leaving Chinese companies with access to cutting-edge memory technologies long after restrictions were imposed on the most advanced processing chips.
While individual HBM chips equivalent to the HBM2E standard and above can no longer be exported to China, more advanced chips can be if they have been pre-packaged into AI chips that do not exceed certain performance criteria.
Xie, of SemiAnalysis, cites the example of Nvidia’s H20 chip, which US President Donald Trump has permitted to be sold into China on the basis that it is “obsolete”.
While the H20’s processing power is clearly inferior to the H100, its six on-board Samsung HBM3 chips actually provide better memory performance: four terabytes per second of memory bandwidth, compared with 3.4TBps in the H100 and 3.2TBps in Huawei’s flagship Ascend 910c.
Xie notes that while compute performance is more important for training AI models, memory is widely considered more important for deployment, also known as inference.
“In some ways, the H20 is a better chip than the H100 because there’s more memory capacity and bandwidth,” says Xie.
“People still overly focus on compute, but that’s only one aspect of performance.”
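Xie’s point about inference can be illustrated with a widely used rule of thumb: at small batch sizes, every generated token must stream the full set of model weights from memory, so token throughput is capped by bandwidth divided by model size. The model size and precision below are assumptions for illustration, not figures from the chips’ makers:

```python
# Rough rule of thumb for memory-bound LLM inference at small batch size:
# tokens/sec is capped at memory bandwidth / model size in bytes.
# Model size (70B params) and 16-bit precision are illustrative assumptions.

def max_tokens_per_sec(bandwidth_tb_s: float, params_billion: float,
                       bytes_per_param: int = 2) -> float:
    """Upper bound on token throughput for a memory-bound model."""
    model_bytes = params_billion * 1e9 * bytes_per_param
    return bandwidth_tb_s * 1e12 / model_bytes

# Memory bandwidth figures cited in the article
for name, bw in [("H20", 4.0), ("H100", 3.4), ("Ascend 910c", 3.2)]:
    print(f"{name}: ~{max_tokens_per_sec(bw, 70):.1f} tokens/s ceiling")
```

On this crude measure, the “obsolete” H20’s higher bandwidth gives it a higher inference ceiling than the H100, which is precisely why memory, not compute, is the binding constraint for deployment.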
SK Hynix’s current dominance of HBM could yet come under pressure from other quarters, analysts say.
The Korean company plans to begin mass production this year of its next-generation HBM4 chip that is set to be used in Nvidia’s forthcoming Rubin platform, offering LLM developers a substantial leap forward in performance.
Whereas HBM3E and its predecessors use relatively unsophisticated Dram chips as their “logic die” — a base chip that regulates the HBM stack’s operations — in the new design that task will be performed by an advanced processor chip produced by TSMC.
Samsung’s HBM4 will also use an advanced processor chip, produced by its own foundry division.
A person familiar with its thinking told the FT that as the only company with cutting-edge capabilities in both processor and memory chips as well as advanced packaging — the process of integrating multiple chips closer together — it could offer customers a “one-stop shop”.
The person adds that the company is also in “active discussions with key clients” about a technique called “hybrid bonding”, an improved method of connecting the stacked Dram chips that could potentially offer greater bandwidth, power and signal integrity.
Wang, at Futurum Group, argues that whoever manages to incorporate hybrid bonding first “will decide who has leadership in the next generation of HBM”.
He says Chinese companies are also investing heavily in related research and accelerating hybrid bonding-related patent filings.
Lee of Citigroup notes that in addition to improving performance, the use of processor chips as the logic die will increasingly allow HBM products to be tailored for specific tasks, making it even harder for customers to switch between suppliers.
That raises the question of whether HBM4 will pave the way for Samsung to make a comeback.
The larger company has had time to correct in its HBM4 designs the mistakes it made with HBM3E, meaning it is likely to qualify as a supplier to Nvidia, says Lee.
But Wang notes that Hynix will continue to benefit from its recent close co-operation with Nvidia, as well as its long-standing relationship with industry leader TSMC.
On Wednesday, Hynix unveiled high numerical aperture extreme ultraviolet lithography machines it has acquired from ASML, giving it a further edge over its HBM rivals.
In contrast, Xie at SemiAnalysis notes that Samsung’s foundry and memory businesses have both been dogged by quality and production issues.
“There’s not much value to being a one-stop shop if all the individual parts are inferior.”
Samsung “has continued to invest in the HBM business and is focusing on developing next-generation memory technologies”, the company said in a statement.
In the meantime, Chinese AI chipmakers unable to circumvent US export controls are looking for ways to reduce their dependence on HBM as a means to boost performance.
Huawei last month launched new AI software designed to allocate different memory tasks to different kinds of memory chip, thereby reducing HBM reliance.
Last week, the Chinese tech giant also unveiled three new “AI solid state drives” as alternative memory solutions.
Miller, of Tufts University, says that with HBM still relatively expensive and energy intensive, and with memory capacity so critical to AI performance, many tech companies are trying to develop an alternative.
They include Japanese tech group SoftBank, which is working with Intel to develop a stacked Dram product that utilises a different wiring system to HBM.
Most analysts agree that HBM will probably dominate memory solutions for the next five years at least.
But greater customisation is likely to mean deeper involvement of foundry companies, chip designers and the customers themselves in the design and manufacturing process.
That could threaten the memory companies’ ambitions to capture more value from the supply chain, warns Xie.
“The more parts of the HBM stack get outsourced to TSMC and the fabless design companies, the higher the risk for the memory companies that the sexy guys end up doing the hardest stuff,” he says.
“If that happens, then this moment could prove bittersweet.”