Next-Generation DDR4 Memory to Reach 4.266GHz - Report

The next-generation DDR4 SDRAM memory will bring substantial performance improvements to desktops and laptops as well as servers and workstations. But the new performance heights will demand a rather radical change to the topology of the memory sub-system.

At a recent MemCon conference in Tokyo, Japan, Bill Gervasi, vice president of engineering at US Modular and a member of the JEDEC board of directors, revealed that the target effective clock-speeds for DDR4 memory would be 2133MHz - 4266MHz, an increase from previously discussed frequencies. Apparently, JEDEC and memory manufacturers decided that the progress of DDR3 leaves no space for DDR4 data rates below 2133Mb/s.
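For a back-of-the-envelope sense of what those data rates mean, here is a rough sketch converting them to theoretical peak bandwidth per standard 64-bit channel (assuming the usual 8 bytes per transfer; real-world throughput is always lower):

```python
# Theoretical peak bandwidth of one 64-bit DDR channel (illustrative only).
# 8 bytes are transferred per beat; actual throughput is lower in practice.
BYTES_PER_BEAT = 8

for data_rate_mt_s in (2133, 4266):
    bandwidth_gb_s = data_rate_mt_s * BYTES_PER_BEAT / 1000
    print(f"DDR4-{data_rate_mt_s}: ~{bandwidth_gb_s:.1f} GB/s per channel")
```

That works out to roughly 17 GB/s per channel at the low end and roughly 34 GB/s at the top of the proposed range.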

The designers of DDR4 memory are aiming at 1.2V and 1.1V voltage settings for the new memory type, and are even considering a 1.05V option to greatly reduce the power consumption of forthcoming systems. It is expected that manufacturers of dynamic random access memory (DRAM) will have to use advanced fabrication technology to make the DDR4 chips. The first chips are likely to be made using 32nm or 36nm process technologies.

At present JEDEC expects to finalize the DDR4 specification in 2011 and start commercial production in 2012. Actual mass transition to the next-generation memory is projected to occur towards 2015.

But the extreme performance will require a trade-off. In DDR4 memory sub-systems every memory channel will support only one memory module, reports the PC Watch web-site, since the developers have abandoned the current multi-drop bus in favour of a point-to-point topology. To overcome the potential inability to install an appropriate amount of memory into high-end client machines as well as servers, the developers have reportedly presented two approaches:

* DRAM manufacturers will need to dramatically increase the capacities of memory chips by using a multi-layer technique with through-silicon via (TSV) technology. As a result, very high-density DDR4 memory chips will become relatively inexpensive. Naturally, this will make memory upgrades slightly more complicated: in order to sustain multi-channel memory performance, all memory modules will have to be replaced with more advanced DIMMs (see the sketch after this list).
* In the case of servers, the multi-layer DRAM IC approach alone will not be viable for high-end machines. As a result, it is proposed that special switches be installed on mainboards to allow multiple memory modules to work on a single memory channel.
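For illustration only (the module capacities and the stacking factor below are assumptions, not JEDEC or PC Watch figures), a rough sketch of why per-module density has to grow when each channel can hold only one DIMM:

```python
# With a point-to-point bus, total capacity is simply
# channels x (the single module each channel can hold). All figures assumed.
def system_capacity_gb(channels, module_gb):
    return channels * module_gb

print(system_capacity_gb(channels=4, module_gb=8))   # 32 GB with ordinary DIMMs
print(system_capacity_gb(channels=4, module_gb=32))  # 128 GB if TSV stacking quadruples module density
```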

The transition to DDR3 memory has already taken a long time and will take a couple more years to complete. But the transition to DDR4 memory will take even longer, since it will be much more complicated for all participants in the ecosystem: DRAM chip makers, memory module manufacturers, mainboard makers, microprocessor producers, system builders and end-users.

News Source: http://www.xbitlabs.com/news/memory...ion_DDR4_Memory_to_Reach_4_266GHz_Report.html
 
I'm all for technological advancement but there's no point in this for the desktop. Tri-channel DDR3 is more than enough for every current desktop workload that I'm aware of, let alone quad-channel coming with Sandy Bridge on Waimea Bay.
 
I'm all for technological advancement but there's no point in this for the desktop. Tri-channel DDR3 is more than enough for every current desktop workload that I'm aware of, let alone quad-channel coming with Sandy Bridge on Waimea Bay.

You'd want that when you have GPU embedded inside a CPU.
 
I'm all for technological advancement but there's no point in this for the desktop. Tri-channel DDR3 is more than enough for every current desktop workload that I'm aware of, let alone quad-channel coming with Sandy Bridge on Waimea Bay.
Using fewer channels of faster memory makes motherboards simpler/cheaper/more reliable, in theory at least.
 
obviously there's the latency wall, which ddr4 or any other new memory can't possibly break.
what's funny is ddr2 is still massively popular for new personal computers, as well as for spare parts and non-PC uses.

so I guess ddr2, ddr3 and ddr4 will coexist for a while.
if anything the 2015 date may be when ddr2 gets dropped to the legacy status ddr1 somewhat has.
 
Actually, with CPU clock speeds saturated, I would have thought that memory latency (in terms of CPU clocks) would decrease as memory technology advances. But so far there doesn't seem to have been much progress in this regard.
 
Faking an evolution of the ddr standard is the best that we can do?
It's always the same: higher latencies, a bit more bandwidth, and for at least a year the new, more powerful standard is weaker and costlier than the old one.

what happened to native FB-DIMMs?
 
there are physical limits, even the speed of light. much like the situation with hard drives, or internet latency.

Maybe chip stacking will be used for that purpose?
i.e. imagine some cheap computer from the year 2020. along with your main processing die's various L1, L2 and L3 caches you've got 1GB or more of low latency, stacked memory, plus 16GB or 32GB memory in regular ddr4 sticks on the motherboard.
 
there are physical limits, even the speed of light. much like the situation with hard drives, or internet latency.

Maybe chip stacking will be used for that purpose?
i.e. imagine some cheap computer from the year 2020. along with your main processing die's various L1, L2 and L3 caches you've got 1GB or more of low latency, stacked memory, plus 16GB or 32GB memory in regular ddr4 sticks on the motherboard.

For desktops, the limitation is heat dissipation. It's difficult enough cooling these processors without having a DRAM module stacked on top.

Most embedded processors already use PoP memory.
 
I would have thought that memory latency (in terms of CPU clocks) would decrease as memory technology advances. But so far there doesn't seem to have been much progress in this regard.
It's due to the way DRAM works internally. It takes (pretty much) the same amount of time to precharge banks in a 15-year-old EDO chip as it does in today's DDR3 modules, more or less the same amount of time for the sense amps to read out bits, and so on... Some progress has been made of course, but nowhere near as much as raw bandwidth has improved.

Most of the latency improvements in DRAM tech seem to come from things like pipelining etc., but that only helps with regular, uniform access patterns. Random accesses scattered across random memory pages are still terribly slow.
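To put rough numbers on it (ballpark, commonly quoted timings, not measurements), absolute CAS latency in nanoseconds has barely budged even as the interface clock climbed:

```python
# Convert CAS latency (in bus clock cycles) to nanoseconds.
# Ballpark timings for illustration - not measured values.
def cas_latency_ns(cas_cycles, data_rate_mt_s):
    bus_clock_mhz = data_rate_mt_s / 2        # DDR transfers twice per bus clock
    return cas_cycles / bus_clock_mhz * 1000  # cycles / MHz -> ns

print(cas_latency_ns(2, 266))   # old DDR-266 at CL2 -> ~15 ns
print(cas_latency_ns(9, 1333))  # DDR3-1333 at CL9   -> ~13.5 ns
```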

If we want faster memory, we're going to have to come up with a different technology to store data. Perhaps SRAM will become viable again at some point. After all, if it was good enough for the C64, it ought to be good enough for PCs too... ;)
 
It's due to the way DRAM works internally. It takes (pretty much) the same amount of time to precharge banks in a 15-year-old EDO chip as it does in today's DDR3 modules, more or less the same amount of time for the sense amps to read out bits, and so on...

I don't believe it. When I look at Hennessy/Patterson, I find that memory access time halves every 12-14 years. Though they mention that memory advances are slowing.
 
For desktops, the limitation is heat dissipation. It's difficult enough cooling these processors without having a DRAM module stacked on top.

Most embedded processors already use PoP memory.

maybe one of the magical cooling solutions we can sometimes read about will work out on an industrial scale. my favorite one was the ultra-thin nanotube-based solution that even turned heat into electric power; die-stacked or "3D" chips are often hyped as the solution of the future.

no doubt there's a lot of hot air and hurdles, but I can't rule it out. seeing that current stuff works at all despite the infinitely complex research, engineering and issues, and that 28nm and 22nm are likely to work, I'm amazed already.
 
wow I didn't know of SRAM on the commodore 64, but after a search it turns out it had 512 bytes of it, for "Colour-RAM".
:p
All the RAM in the C64 should be SRAM, since it did not have a DRAM controller. Memory was basically hooked up straight to the main CPU, with maybe a bit of glue logic for banking etc... IIRC of course. Also, I'm pretty certain there was no need to take memory refresh into account when programming the old breadbox in assembler - you just had a certain number of cycles/memory accesses per raster line, and that was it.

Yes, but the point is that latency is decreasing, and is not constant at all.
Latency IS decreasing, extremely slowly. 2x in a decade and a half means access hasn't moved much at all. It's moved from "very slow" to "very slow".

"Even slower" actually, since CPU clock speeds have increased roughly 700-1000 percent in the same time period...!
 
All the RAM in the C64 should be SRAM, since it did not have a DRAM controller. Memory was basically hooked up straight to the main CPU, with maybe a bit of glue logic for banking etc... IIRC of course. Also, I'm pretty certain there was no need to take memory refresh into account when programming the old breadbox in assembler - you just had a certain number of cycles/memory accesses per raster line, and that was it.

It had DRAM, the VIC-II spent 5 cycles every scanline refreshing it.
 
Are they talking reasonable DDR4 availability in 2012 or 2014+?
Point to point bus should make for more efficient operation but I kinda don't like the sound of one DIMM per channel.

Now that I think about it though, most of the time in recent years I've only had two DIMMs because my RAM upgrades have normally been faster clocked as well as bigger, so I have removed the older modules.

While GDDR isn't directly comparable to DDR, GDDR4 & 5 came along quite soon after GDDR3; any sign of GDDR6 yet?
 
It had DRAM, the VIC-II spent 5 cycles every scanline refreshing it.
Oh! Then I stand corrected... :D Thanks for the reminder.

Are they talking reasonable DDR4 availability in 2012 or 2014+?
Seems to me the PC hardware market's evolution is slowing down. Manufacturers seem more and more unwilling to embrace new standards (unless they have tangible cost benefits - such as replacing cumbersome IDE interfaces with SATA).

Hoping for "reasonable" (which is a rather loose term) DDR4 in 2012 might be too much to hope for - if one by "reasonable" means somewhat comparable cost/MB - but 2014 is a damn long way off. Perhaps a compromise? 2013 for cost parity? :)

Point to point bus should make for more efficient operation but I kinda don't like the sound of one DIMM per channel.
I think it sounds great. Should improve stability and reliability as well as performance.

in recent years I've only had two DIMMs because my RAM upgrades have normally been faster clocked as well as bigger, so I have removed the older modules.
I run OCZ Platinum DDR3 7-7-7-24 1600MHz modules at 7-7-7-20 1660MHz in all 6 slots on my mobo. Works great as long as the RAM is actively cooled (by the CPU fan in this case...) :D If you have quality RAM and don't go totally overboard with the clocking, it's not a problem with two modules per channel.

While GDDR isn't directly comparable to DDR, GDDR4 & 5 came along quite soon after GDDR3; any sign of GDDR6 yet?
Dunno about GDDR6, but I wonder how similar DDR4 signaling is to GDDR5's, and whether the two share the same ECC protection capability and so on. I suppose there would have to be ECC in DDR4 at those frequencies or things might get dicey.
 