How is Sony going to implement 8GB of GDDR5 in the PS4? *spawn

It's just a guess based on Samsung's GDDR5 power consumption at 1.35V. Just a ballpark figure. They claim 256bit at 4GHz is 8.7W and 128bit at 4GHz is 4.3W.
http://originus.samsung.com/us/business/oem-solutions/pdfs/Green-GDDR5.pdf

So, if you look at those numbers, they appear to be saying that 2GB of GDDR5 at 4GHz on a 256bit bus is 8.7W, right? For the PS4's 8GB at 5.5GHz built from the same parts, assuming linear scaling with capacity and clock speed: 8.7*4 = 34.8W, then 4/5.5 = 0.7272, and 34.8/0.7272 is almost 48W. Of course clock scaling usually isn't linear...
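Written out as a quick sanity check (this just restates the back-of-the-envelope assumption above, i.e. that power scales linearly with both capacity and clock; the only real number is the Samsung figure already quoted):

```c
#include <stdio.h>

int main(void)
{
    /* Samsung's published point: 2GB on a 256bit bus at 4GHz, 1.35V. */
    const double reference_power_w = 8.7;
    const double capacity_factor   = 8.0 / 2.0;  /* PS4's 8GB vs the 2GB reference */
    const double clock_factor      = 5.5 / 4.0;  /* 5.5GHz vs 4GHz */

    /* Scale linearly with both capacity and clock, as assumed above. */
    double estimate_w = reference_power_w * capacity_factor * clock_factor;
    printf("~%.1f W\n", estimate_w);             /* ~47.9 W, i.e. "almost 48W" */
    return 0;
}
```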
 

That Samsung part is an old chip, not the PS4's GDDR5 chip.
 
More RAM isn't linear either. How many chips, how many volts, how high is it clocked?

Exactly. Now, they might have a 30nm "class" shrink, which may account for 20-25% savings, but taking into account other factors I think that ends up back where you started.

*I say "class" because IIRC 40nm "class" was 46nm and If I remember right 30nm "class" was 39nm, just off the top of my head. RAM marketers are a tricky bunch. ;)
 
What you calculated would be an 8GB GDDR5 setup on a 1024bit bus, and I have no doubt that would consume 48W: multiplying the 2GB/256bit figure by four quadruples the interface as well as the capacity. It's the interface that consumes the power, and it dwarfs the DRAM array.

If we accept the two numbers that Samsung provided for 1.35V, 46nm, 4GHz:
2GB 128bit = 4.3W
2GB 256bit = 8.7W

Logically, using the 128bit example twice would be:
4GB 256bit = 8.6W

It means capacity has essentially zero impact on power, and bus width scales it linearly, so by extension, with 4Gb chips instead of 2Gb chips it's still 8.6W.
With your linear scaling from 4GHz to 5.5GHz you get 11.8W.
Remove your 20% for the 30nm shrink and it's 9.4W.
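
Here is that interface-dominated model written out as a minimal sketch (my own illustration of the reasoning above; the function and the 20% shrink saving are assumptions from this thread, not datasheet values):

```c
#include <stdio.h>

/* Interface-dominated model: power scales with bus width and clock;
 * capacity is treated as having negligible impact. */
static double gddr5_power_w(double bus_bits, double clock_ghz, double shrink_saving)
{
    const double ref_power_w   = 4.3;    /* Samsung: 128bit at 4GHz, 1.35V, 46nm */
    const double ref_bus_bits  = 128.0;
    const double ref_clock_ghz = 4.0;

    double scaled = ref_power_w * (bus_bits / ref_bus_bits)
                                * (clock_ghz / ref_clock_ghz);
    return scaled * (1.0 - shrink_saving);
}

int main(void)
{
    printf("%.1f W\n", gddr5_power_w(256, 4.0, 0.0)); /* 8.6 W, matches the 256bit figure */
    printf("%.1f W\n", gddr5_power_w(256, 5.5, 0.0)); /* ~11.8 W at PS4 clocks */
    printf("%.1f W\n", gddr5_power_w(256, 5.5, 0.2)); /* ~9.5 W with the assumed 20% shrink saving */
    return 0;
}
```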

Sure, it looks too low, but those are the numbers I got from Samsung. Is there any other source for GDDR5 power consumption?
 

Been trying to figure this out as well. So far all I can come up with is that with GTX 680 cards, 4GB models use 5-8 watts more than similar 2GB models, so I suspect that the number of chips/density scales pretty close to linearly.
 
Don't trust whole-system benchmarks when using different cards unless the 4GB version has absolutely identical benchmark results; otherwise it will burn more power just because the GPU is better used with the higher memory amount, and a 1% higher frame rate will skew the result. The difference between the 680 2GB and 4GB was 399W versus 395W for the whole system at maximum load. Benchmarks are a tiny bit higher with the 4GB, so the whole system is working a bit harder. That is within 1% statistical noise.
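
To put that noise argument in numbers (the GPU load figure below is made up, purely to show the scale of the confound):

```c
#include <stdio.h>

int main(void)
{
    const double system_2gb_w = 395.0;   /* whole-system load, GTX 680 2GB */
    const double system_4gb_w = 399.0;   /* whole-system load, GTX 680 4GB */

    double delta_w  = system_4gb_w - system_2gb_w;   /* 4 W */
    double relative = delta_w / system_2gb_w;        /* ~1% */

    /* If the 4GB card also scores ~1% higher, a comparable amount of extra
     * power can come from the GPU simply working harder (hypothetical
     * ~200 W GPU load), which swamps any memory-only difference. */
    const double gpu_load_w  = 200.0;
    double extra_gpu_w = gpu_load_w * 0.01;

    printf("measured delta: %.0f W (%.1f%% of system power)\n",
           delta_w, relative * 100.0);
    printf("plausible GPU-side contribution alone: ~%.0f W\n", extra_gpu_w);
    return 0;
}
```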

So far there are clear indications that higher-density chips have a negligible impact on power.
As for the number of chips, the PS4 would be 16 chips in clamshell mode. The Samsung example would also have to be 16 chips in clamshell for the 128bit bus.
 
That's not GDDR5 but an alternative, as yet nameless, Rambus tech. Could be called DDR3 Hyperboost or something. Clarification that Sony are using GDDR5 means Rambus's tech is off the cards.
 
The Hynix list above also says GDDR5M is only sampling in Q3, so it's impossible for it to be ready in time.
It's just plain GDDR5, nothing special except for the fact that it's the best possible type of memory available. You win again, rationality!!
 
You win again, rationality!!

Only if you let it....

Have there been any prior memory techs that have had the same latency/bandwidth contrast as GDDR5/DDR3, and did it affect general-purpose code much? I remember around the DDR/DDR2 transition there was some 'Henny Penny, the sky is falling' stuff out there about the higher latencies of DDR2 over DDR1. Did anyone actually see a real-world decrease in performance?

As for mitigation, from what I've read it's all on the coder to select better data types and to ensure they have the right data in the right places to avoid misses. Does anyone know if the compiler can help here, or is it really down to the dev team alone?
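
On "the right data in the right places": the usual coder-side mitigation people describe is data layout, e.g. structure-of-arrays instead of array-of-structures, so a streaming pass only pulls in the bytes it actually uses. A generic sketch of that idea, not tied to any particular console SDK or compiler:

```c
#include <stddef.h>

/* Array-of-structures: updating positions drags the unused fields
 * through the cache along with the data you actually want. */
typedef struct {
    float pos[3];
    float vel[3];
    float mass;
    int   flags;      /* untouched by the update, but fetched anyway */
} ParticleAoS;

void update_aos(ParticleAoS *p, size_t n, float dt)
{
    for (size_t i = 0; i < n; ++i) {
        p[i].pos[0] += p[i].vel[0] * dt;
        p[i].pos[1] += p[i].vel[1] * dt;
        p[i].pos[2] += p[i].vel[2] * dt;
    }
}

/* Structure-of-arrays: each stream is contiguous, so every cache line
 * fetched is fully used and the access pattern is easy to prefetch. */
typedef struct {
    float *pos_x, *pos_y, *pos_z;
    float *vel_x, *vel_y, *vel_z;
} ParticlesSoA;

void update_soa(ParticlesSoA *p, size_t n, float dt)
{
    for (size_t i = 0; i < n; ++i) {
        p->pos_x[i] += p->vel_x[i] * dt;
        p->pos_y[i] += p->vel_y[i] * dt;
        p->pos_z[i] += p->vel_z[i] * dt;
    }
}
```

As far as I know, compilers generally won't restructure your data layout for you; they can reorder instructions and prefetch, but choosing the layout is still on the developer.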
 
That's not GDDR5 but an alternative, as yet nameless, Rambus tech. Could be called DDR3 Hyperboost or something. Clarification that Sony are using GDDR5 means Rambus's tech is off the cards.

Is it possible that Sony used the name GDDR5 only because they wanted the public to clearly see a difference from Durango's 8GB of DDR3 RAM? If they had said they were using 8GB of stacked DDR3 RAM, wouldn't the average person say the RAM was the same? It would be a lot harder to differentiate their RAM from the competition's, wouldn't it?
 
I would think it would be cooler to say 8GB of Awesomesauce RAM rather than tell a very specific lie and then get called on it.
 
Is it possible that Sony used the name GDDR5 only because they wanted the public to clearly see a difference from Durango's 8GB of DDR3 RAM? If they had said they were using 8GB of stacked DDR3 RAM, wouldn't the average person say the RAM was the same? It would be a lot harder to differentiate their RAM from the competition's, wouldn't it?

No, it is not possible. In fact, it is rather worrying that you would take it as a possibility.
 
When Mark Cerny said GDDR5 on stage, the cameraman should have zoomed in on Cerny's face (plus an extra spotlight or nightshade's favorite, god rays) to make sure every one of us imprinted the image in our minds. :devilish:

Only if you let it....

Have there been any prior memory techs that have had the same latency/bandwidth contrast as GDDR5/DDR3, and did it affect general-purpose code much? I remember around the DDR/DDR2 transition there was some 'Henny Penny, the sky is falling' stuff out there about the higher latencies of DDR2 over DDR1. Did anyone actually see a real-world decrease in performance?

As for mitigation, from what I've read it's all on the coder to select better data types and to ensure they have the right data in the right places to avoid misses. Does anyone know if the compiler can help here, or is it really down to the dev team alone?

Hmm... probably more effective and general as a collection of libraries, tools, design patterns, and best practices.

Let's hope we hear more at GDC. In the PS3's early days, we got lots of presentations on how to exploit the SPUs.
 
I keep searching for a nice write-up comparing/contrasting DDR3 with GDDR5. I've learned a lot, but not what I was really after... latency! In fact, the best thing I found was a link directly here, so I'll quote this great post from bobblehead:

"GDDR5 uses similar signaling to GDDR3. Pseudo open drain and pull up termination, but at lower voltage (1.2-1.5V rather than 1.8V). In order to push the interface faster there is additional overhead on the sending and receiving sides and logical changes to the interface. That overhead adds to the base latency. The DRAM core has roughly the same latency as DDR3 but the GDDR5 IO layer imposes that extra latency penalty. For that extra cost you gain the ability to send data a lot faster. As a result, GDDR5 latency is a bit higher than DDR3 latency in absolute terms, but it's not a huge difference."

Perhaps people are blowing latency out of proportion; humanity seems to have a nice record of doing so...
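
As a rough worked example of that "not a huge difference" point (the CAS latencies and clocks below are hypothetical round numbers chosen only to show the unit conversion, not datasheet values):

```c
#include <stdio.h>

/* Absolute latency = cycles / command-clock frequency. GDDR5 clocks its
 * interface faster, so a higher cycle count can still land in the same
 * ballpark of nanoseconds as DDR3. Figures are illustrative only. */
static double latency_ns(double cas_cycles, double command_clock_mhz)
{
    return cas_cycles / command_clock_mhz * 1000.0;
}

int main(void)
{
    /* Hypothetical parts: DDR3-1600 at CL11 (800 MHz command clock) versus
     * GDDR5 at 5.5 Gbps (1375 MHz command clock) with a CAS around 20. */
    printf("DDR3 : ~%.1f ns\n", latency_ns(11.0, 800.0));   /* ~13.8 ns */
    printf("GDDR5: ~%.1f ns\n", latency_ns(20.0, 1375.0));  /* ~14.5 ns */
    return 0;
}
```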

P.S. I've been lurking here for around two years and would like to thank all the people who take the time explaining, breaking down... and explaining again all of the esoteric information I wouldn't understand otherwise.
 