
Samsung is throwing its hat into the GDDR6 ring and joining Micron in ramping the new memory technology for upcoming GPU products. It's not a surprising move, but it does suggest that GDDR6 will be more widely adopted than its predecessor, GDDR5X.

Samsung is touting the new memory as being built on a 10nm* process at double the density of its previous RAM. 16Gbit chips mean scaling up to 2GB of RAM per GDDR6 chip. This also clears the way for much higher amounts of RAM onboard GPUs over time, though I doubt we'll see many 24GB GPUs in the near future. Even advanced 4K titles with HDR and other bells and whistles don't push that kind of envelope (for now).
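The capacity math above is straightforward to check. A minimal sketch (the 256-bit and 384-bit card configurations are illustrative assumptions, not products Samsung announced):

```python
BITS_PER_BYTE = 8

# A 16Gbit GDDR6 chip works out to 2GB of capacity.
chip_gbit = 16
chip_gb = chip_gbit / BITS_PER_BYTE  # 2.0 GB per chip

# Each GDDR6 chip has a 32-bit interface, so a hypothetical 256-bit card
# uses eight chips, and a 384-bit card would use twelve.
print(8 * chip_gb)   # 16.0 GB on a 256-bit card
print(12 * chip_gb)  # 24.0 GB on a 384-bit card
```

That twelve-chip case is where the 24GB figure in the text comes from: it takes a wide, high-end memory bus before GDDR6 capacities reach that level.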

One potential advantage of the GDDR6 push is that we should finally see 2GB cards dropping off the map this generation. With Intel now fielding 4GB GPUs on its Radeon-integrated hardware, hopefully we'll see a shift to larger RAM buffers across the board.

Samsung is claiming its GDDR6 can scale up to 72GB/s per channel (18Gbps per pin), which is more than twice as fast as the old GDDR5 standard and its 8Gbps performance. This ignores GDDR5X, of course, but since Samsung never built that type of RAM it can get away with skipping it as a point of comparison. Early GPUs are likely to opt for lower-clocked RAM, but a 72GB/s channel transfer rate is impressive, implying that a 256-bit GPU could hit 576GB/s of memory bandwidth. The next generation of midrange cards from AMD and Nvidia should be strong competitors for this reason alone.
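The bandwidth figures above all fall out of one formula: per-pin data rate times bus width, divided by eight bits per byte. A quick sketch of that arithmetic:

```python
def bandwidth_gbs(gbps_per_pin: float, bus_width_bits: int) -> float:
    """Aggregate memory bandwidth in GB/s from per-pin rate and bus width."""
    return gbps_per_pin * bus_width_bits / 8

# One 32-bit GDDR6 chip at 18Gbps per pin -> Samsung's 72GB/s figure.
print(bandwidth_gbs(18, 32))   # 72.0

# A 256-bit card built from such chips -> the 576GB/s figure in the text.
print(bandwidth_gbs(18, 256))  # 576.0

# Same bus width with 8Gbps GDDR5, for comparison.
print(bandwidth_gbs(8, 256))   # 256.0
```

The GDDR5 comparison line makes the "more than twice as fast" claim concrete: 576GB/s versus 256GB/s on an identical 256-bit bus.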

GDDR5X compared to GDDR6.

The fact that Samsung is picking up GDDR6 also suggests robust demand from multiple companies. GDDR5X was an Nvidia-Micron play, but never saw wider adoption in the marketplace. With multiple companies ramping GDDR6, it's clear it'll be more popular.

One big question is how GDDR6 will compare with HBM2 at high clocks and wide channels. HBM2's higher cost and more difficult manufacturing process are balanced, to an extent, by its significantly lower power consumption and smaller, simpler board layouts. This allows for GPUs like AMD's Radeon Nano, and improves overall efficiency. If GDDR6 can match or approach these outcomes, we may see HBM2 shrink back or vanish altogether. On the other hand, Samsung is touting its 2.4Gbps HBM2 stack as well, which would give GPUs based on it a substantial performance boost of their own.

Samsung claims its new standard offers a 35 percent improvement in power consumption compared with GDDR5, with 30 percent higher yields per wafer on GDDR6 compared with GDDR5 thanks to smaller process geometries. There's no word on product introductions, but 2018 launches seem likely.

* – Samsung calls its 10nm manufacturing for GDDR6 "10nm-class" rather than "10nm." The term denotes a process node between 10nm and 19nm, which is to say, it doesn't define a process node at all. "10nm-class" or equivalent terms from all vendors should be considered fictional labels, not meaningful product classifications.