Will there be GDDR6?

It doesn't. DRAMs actually read out a lot more than they send. Basically, they read out the whole 8n prefetch in a single cycle and then transfer that out. To enable back-to-back data out at data rates higher than an individual bank can sustain, GDDR5 uses the concept of bank groups and forced interleaving between the bank groups. There are 4 defined bank groups, each containing 4 banks. Read commands are not allowed back to back within a bank group but can be issued between bank groups. There is a parameter that determines the minimum timing interval for subsequent reads to the same bank group, as well as a parameter for the minimum command timing between reads to different bank groups.
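To make the interleaving concrete, here is a minimal Python sketch of how a controller might space READ commands. The tCCDL/tCCDS names follow the usual same-group/different-group convention, and the cycle counts are invented for illustration, not taken from any datasheet.

```python
# Minimal sketch of bank-group READ spacing (timing values invented).
T_CCD_L = 4  # min cycles between READs to the SAME bank group (assumed)
T_CCD_S = 2  # min cycles between READs to DIFFERENT bank groups (assumed)

def earliest_read(last_cycle, last_group, group):
    """Earliest cycle a READ to `group` may issue after the previous READ."""
    gap = T_CCD_L if group == last_group else T_CCD_S
    return last_cycle + gap

# Interleaving bank groups sustains twice the command rate of hammering one:
for pattern, name in ([0, 1, 0, 1], "interleaved"), ([0, 0, 0, 0], "same group"):
    cycle, last_group = 0, pattern[0]
    for group in pattern[1:]:
        cycle = earliest_read(cycle, last_group, group)
        last_group = group
    print(f"{name}: 4th READ issues at cycle {cycle}")  # 6 vs. 12
```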
So you mean instead of the 8-bank design in DDR3, we have 16 now?

Alright, as an example let's take a GDDR5 operating at 1 GHz (4 Gbit/s). Is it really operating at that speed? To me a 500 MHz memory core speed seems more likely, considering the 8n prefetch buffer.
 
So you mean instead of the 8-bank design in DDR3, we have 16 now?

Alright, as an example let's take a GDDR5 operating at 1 GHz (4 Gbit/s). Is it really operating at that speed? To me a 500 MHz memory core speed seems more likely, considering the 8n prefetch buffer.

4 Gbps GDDR5 has an internal core running at ~250 MHz:

Prefetch of 8, so a 500 MHz max CAS rate. Then you have the bank groups, which means that the same set of 4 banks cannot CAS back to back.

Another way to look at it: 4 separate 4-bank DRAMs running at 250 MHz, all sharing a multiplexed interface running at 1 GHz.

Some of this is assumption based on a somewhat older datasheet that leaked/was once available. Samsung/SK Hynix don't have full datasheets publicly available for current-gen GDDR5.
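The arithmetic behind those numbers, as a quick back-of-the-envelope check (the rates are the ones quoted above):

```python
# "4 Gbps GDDR5" worked out (numbers from the post above):
data_rate = 4e9                  # bits/s per DQ pin
prefetch  = 8                    # GDDR5 8n prefetch
cas_rate  = data_rate / prefetch # max column-command rate
print(cas_rate / 1e6, "MHz max CAS rate")        # 500.0

# No back-to-back CAS within a bank group, so one group sustains at
# most every other CAS slot:
print(cas_rate / 2 / 1e6, "MHz per bank group")  # 250.0
```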
 
Figure 2-4 wide-I/O/stacked DRAMs, each with something like 64-256 GB/s of bandwidth and a total capacity of 1-2 GB (2 Gb/4 Gb DRAMs). So total bandwidth on the order of 256-512 GB/s. If you need more capacity, you use normal DDR4 or GDDR5. Done.
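A quick unit check on those figures (a sketch; the per-DRAM bandwidth of ~128 GB/s is taken from the middle of the range quoted above):

```python
# Capacity: 2-4 DRAMs of 2-4 Gbit each (Gbit -> GB is a divide by 8):
for n_drams, gbit in ((2, 4), (4, 4)):
    print(f"{n_drams} x {gbit} Gbit = {n_drams * gbit / 8} GB")  # 1.0, 2.0 GB

# Aggregate bandwidth at ~128 GB/s per DRAM lands in the quoted range:
for n_drams in (2, 4):
    print(f"{n_drams} DRAMs x 128 GB/s = {n_drams * 128} GB/s")  # 256, 512
```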

Current GPUs already reach those speeds. Seems like an awful lot of effort and not much in return. If you are stacking stuff, at least get loads more bandwidth.
 
Current GPUs already reach those speeds. Seems like an awful lot of effort and not much in return. If you are stacking stuff, at least get loads more bandwidth.

Current GPUs with 384-bit buses reach ~256 GB/s of bandwidth while burning substantial amounts of power. Stacked and wide-I/O DRAM will deliver upwards of 128-256 GB/s PER DRAM and do it at lower power. Total bandwidth will scale upwards of 1 TB/s if you wanted to connect 4-8 DRAMs.
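As a sanity check on those figures (the per-pin rate below is an assumption; GDDR5 cards of that era ran roughly 5-6 Gbps per pin):

```python
# ~256 GB/s class from a 384-bit GDDR5 bus (pin rate assumed):
bus_bits = 384
pin_gbps = 5.5                           # assumed per-pin data rate
print(bus_bits * pin_gbps / 8, "GB/s")   # 264.0, i.e. the ~256 GB/s class

# 4-8 stacked/wide-I/O DRAMs at 128-256 GB/s each:
for n in (4, 8):
    print(f"{n} DRAMs: {n * 128}-{n * 256} GB/s")  # up past 1 TB/s
```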
 
Current GPUs with 384-bit buses reach ~256 GB/s of bandwidth while burning substantial amounts of power. Stacked and wide-I/O DRAM will deliver upwards of 128-256 GB/s PER DRAM and do it at lower power. Total bandwidth will scale upwards of 1 TB/s if you wanted to connect 4-8 DRAMs.

Is there anywhere I can actually look at this technology and its capabilities?

Would stacked memory be the end of upgradeable memory?
 
4 Gbps GDDR5 has an internal core running at ~250 MHz:

Prefetch of 8, so a 500 MHz max CAS rate. Then you have the bank groups, which means that the same set of 4 banks cannot CAS back to back.

Another way to look at it: 4 separate 4-bank DRAMs running at 250 MHz, all sharing a multiplexed interface running at 1 GHz.

Some of this is assumption based on a somewhat older datasheet that leaked/was once available. Samsung/SK Hynix don't have full datasheets publicly available for current-gen GDDR5.
Thanks a lot.
 
I think we are all getting confused.
Why would VRzone bother writing their article about GDDR6 coming in 2014?
And second: how will you stack memory on your motherboard? How will you upgrade this memory?

There is a need for faster classic memory.

Not really. GPUs don't have upgradeable memory, so stacked memory isn't a problem there. CPUs pretty much have 2x the bandwidth they need right now, and DDR4 will only increase that gap. Top-end CPUs don't scale much beyond 2 channels of DDR3-1600, even though quad channel and up to DDR3-2400 exist. That's 3 times beyond where reasonable scaling stops. Servers still use DDR3-1333 or even 1066; if they needed the bandwidth, they would have migrated to DDR3-1600 or higher years ago when it came out.

GPUs can use stacked memory; it's pretty much designed with them in mind. CPUs can go on for a loooong time with DDR4, and the fact that nobody is hurrying with DDR4 shows that it really isn't needed that badly.
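For reference, the "3 times" figure works out as peak theoretical bandwidth (a rough sketch; sustained numbers are lower):

```python
# Peak theoretical bandwidth: channels x 8 bytes/transfer x MT/s
dual_1600 = 2 * 8 * 1600 / 1000   # GB/s
quad_2400 = 4 * 8 * 2400 / 1000
print(dual_1600, "GB/s")  # 25.6 - where scaling reportedly stops
print(quad_2400, "GB/s")  # 76.8 - quad-channel DDR3-2400, exactly 3x
```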
 
Current GPUs with 384-bit buses reach ~256 GB/s of bandwidth while burning substantial amounts of power. Stacked and wide-I/O DRAM will deliver upwards of 128-256 GB/s PER DRAM and do it at lower power. Total bandwidth will scale upwards of 1 TB/s if you wanted to connect 4-8 DRAMs.

Aaron, what is a realistic timetable for this technology?

When could it be pushed out for a niche product that wants to bring this to market ASAP?

When could it be pushed out for a mass-fabbed product not dependent on industry standards?

When will we see the first PCs with stacked memory?

When will it become mainstream?

Importantly, not just when but how many DRAMs AND how dense? Are we talking 4 DRAMs totaling 256 MB and ~512 GB/s? Or are we looking at a much lower bound in, say, 2014, of 2 DRAMs at most, 128 GB/s, and 128 MB density?

I am excited about HMC and such, but there doesn't appear to be a solid roadmap, although it seems GPUs are really screaming for relief in memory bandwidth and power. It looks unfortunate that consoles are going to release so soon that they will probably miss this tech in 2013.
 
Servers still use DDR3-1333 or even 1066; if they needed the bandwidth, they would have migrated to DDR3-1600 or higher years ago when it came out.

Servers do need more bandwidth and will continue to need more bandwidth. This has been a recurring issue with servers for the past 5 years or so and will only get worse moving forward.
 
Servers do need more bandwidth and will continue to need more bandwidth. This has been a recurring issue with servers for the past 5 years or so and will only get worse moving forward.

They will, yeah, but they have everything up to DDR3-2133 available, almost double what they typically use now, if they want it. All they would have to do is ask for ECC versions and the DRAM companies would have them ready ASAP. By the time they get to that point, DDR4 will be out, going up to 3200 and eventually 4266. I think that will take us nearly to 2020, for CPUs. Server-size dies also have room for at least two more channels (maybe even four) to stretch current memory even further, again, if they actually need it, which I don't think it will come to, personally.
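Peak numbers per speed grade, for scale (a sketch; sustained bandwidth under server loads is far lower):

```python
# Peak theoretical GB/s: channels x 8 bytes/transfer x MT/s / 1000
def ddr_bw(channels, mt_s):
    return channels * 8 * mt_s / 1000

for mt in (1066, 1333, 1600, 2133, 3200, 4266):
    print(f"DDR-{mt}: {ddr_bw(2, mt):5.1f} GB/s dual, "
          f"{ddr_bw(4, mt):5.1f} GB/s quad")
```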
 
They will, yeah, but they have everything up to DDR3-2133 available, almost double what they typically use now, if they want it.

No, no they don't. Seriously, people: if you don't have a clue what you are talking about, then don't.
In reality, DDR3 cannot run even at 1600 with 3 DIMMs and 4 ranks per DIMM.

2133 is pretty much restricted to 1, maybe 2, loads.

All they would have to do is ask for ECC versions and the DRAM companies would have them ready ASAP.

That's not how ECC works in servers.
 