I guess the real question is if MS knew GDDR5 would be available at 8GB in time for launch, would they even have gone with ESRAM.
MS probably told developers what they think is the best way to use it.

Isn't the real question that needs to be answered how much extra work the ESRAM will require in order to work around the supposedly slow DDR3 RAM?
I am talking about 3rd party multiplatform titles here. And considering that 360/PS3 multiplatform games ended up looking pretty much identical, which I would guesstimate required far more work given the differences between the PS3 and 360 hardware, I am not worried about Xbox One games.
I would, on the other hand, expect 1st party games to take advantage of the ESRAM for stuff (effects, whatever) that will be hard to duplicate on other platforms.
That's one we won't know, but there is a chance that even if they knew they could use 8 GB of GDDR5 they may still have gone the ESRAM route, simply due to cost.
It's already 365 mm²; how big would it be with 18 CUs and 32 ROPs?
Not really a practical question, more of looking at the memory subsystem from a different perspective.
People can't separate the ESRAM choice from the lower number of CUs, but I wonder: if there were more compute resources on the chip, would people look at the DDR3/ESRAM choice as having more flexibility and greater performance over more varied workloads than a single pool of GDDR5? It might actually be viewed as preferable if it weren't always framed as a CU tradeoff.
The ESRAM was there before the 8GB of RAM. The size of the RAM pool was not responsible for the decision to use ESRAM.

I've always seen their need for 8GB of memory as driving the decision to use ESRAM, and Kinect as the reason for getting the cost down elsewhere in the design, which led to fewer CUs. Any idea what CU and ROP parity would have done to cost?
Highly unlikely. GDDR5 is quite expensive compared to DDR3, not only in monetary costs but in power costs as well.
Regards,
SB
The ESRAM was there before the 8GB of RAM. The size of the RAM pool was not responsible for the decision to use ESRAM.
aaand.... there it is. :smile:
I can't find any reliable information on GDDR5 being significantly more power hungry given they're using the same manufacturing process.
If anything, the manufacturers actually claim that GDDR5 is less power consuming, like on page 19 of this document
http://www.elpida.com/pdfs/E1600E10.pdf
GDDR5 can also down-volt itself at lower utilization, so it could end up consuming less power than DDR3.
Well, that could still mean it was part of the design due to it being DDR3 rather than GDDR5; in that regard the amount of main memory is less important than the bandwidth.
Although I still subscribe to the view that it was part of the design more so than some sort of band aid.
Well, the ESRAM was tied more to the type of RAM used than to the capacity chosen, since bandwidth was the concern in the first place anyway.
It was pretty clear that it was either a large pool of high speed GDDR5, or a huge pool of DDR3 + an eSRAM. 8GB of it was just a choice of "how large is enough."
I mean, is that a wrong way to describe system bandwidth?
Here are PR numbers from the same company...
GDDR5 (Samsung)
http://originus.samsung.com/us/business/oem-solutions/pdfs/Green-GDDR5.pdf
2 Gb module (Samsung 40 nm "class")
8.7W with a 256 bit interface
4.3W with a 128 bit interface
2 GB (eight 2 Gb chips) would then be 69.6W for a 256 bit interface and 34.4W for a 128 bit interface.
DDR3 (Samsung)
http://www.samsung.com/global/business/semiconductor/file/support/memory/green_ddr3_jun_11.pdf
DDR3 (choosing the numbers for the 40 nm "class" chips to keep it equivalent to the GDDR5 sample)
48 GB of 1 Gb chips - 41W - 1.7W per 2 GB, or 0.2125W per 2 Gb
48 GB of 2 Gb chips - 34W - 1.4W per 2 GB, or 0.175W per 2 Gb
So... at the 40 nm "class" node for Samsung, 2 Gb DDR3 is 24.6x to 49.9x more power efficient than GDDR5, depending on whether you compare against the low bit width or the high bit width GDDR5 configuration.
That's a rather massive difference. Of course, the testing methodology is different (server workload versus graphics workload) and the GDDR5 is obviously massively faster.
Regards,
SB
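To make the arithmetic in the post above easier to follow, here is a minimal Python sketch that reproduces the comparison. It assumes, as the post does, that the Samsung GDDR5 figures are per 2 Gb chip and that the DDR3 figures are totals for a 48 GB configuration; later posts in the thread dispute the per-chip reading of the GDDR5 numbers, so treat this as an illustration of the calculation, not of the correct interpretation.

# Power comparison sketch based on the Samsung PR figures quoted above.
# Assumption (the post's reading): the GDDR5 figures are per 2 Gb chip.
GDDR5_W_PER_CHIP_256BIT = 8.7   # W, 2 Gb chip, 256 bit interface figure
GDDR5_W_PER_CHIP_128BIT = 4.3   # W, 2 Gb chip, 128 bit interface figure
CHIPS_PER_2GB = 8               # 2 GB = 8 x 2 Gb chips

gddr5_256 = GDDR5_W_PER_CHIP_256BIT * CHIPS_PER_2GB   # 69.6 W per 2 GB
gddr5_128 = GDDR5_W_PER_CHIP_128BIT * CHIPS_PER_2GB   # 34.4 W per 2 GB

# DDR3 figures: total power for a 48 GB server configuration.
ddr3_48gb_1gb_chips = 41.0                        # W, 1 Gb chips
ddr3_48gb_2gb_chips = 34.0                        # W, 2 Gb chips
ddr3_per_2gb_1gb = ddr3_48gb_1gb_chips / 24       # ~1.7 W per 2 GB
ddr3_per_2gb_2gb = ddr3_48gb_2gb_chips / 24       # ~1.4 W per 2 GB

print(f"GDDR5 per 2 GB: {gddr5_256:.1f} W (256 bit), {gddr5_128:.1f} W (128 bit)")
print(f"DDR3  per 2 GB: {ddr3_per_2gb_1gb:.2f} W (1 Gb chips), "
      f"{ddr3_per_2gb_2gb:.2f} W (2 Gb chips)")
print(f"Ratio vs 2 Gb DDR3: {gddr5_128 / ddr3_per_2gb_2gb:.1f}x to "
      f"{gddr5_256 / ddr3_per_2gb_2gb:.1f}x")
# roughly the 24.6x to 49.9x quoted above (small rounding differences)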
You're seriously not suggesting PS4 with 8GBs will run at 200W+ for the memory alone.
40 nm 2 Gb GDDR5 @ 256 bits will be 2 GB of RAM consuming 8.7W, i.e. around 0.5W to 1W per module.
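For contrast, here is the same arithmetic under the reading this post is arguing for: the 8.7W figure covers the whole 256 bit configuration (eight 2 Gb chips with 32 bit interfaces), not a single chip. This is only a sketch of that interpretation, not a statement of which reading Samsung intended.

# Alternative reading: 8.7 W is for the full 256 bit / 2 GB configuration.
W_PER_256BIT_CONFIG = 8.7       # W for eight 2 Gb chips (2 GB total)
CHIPS_PER_CONFIG = 8

w_per_chip = W_PER_256BIT_CONFIG / CHIPS_PER_CONFIG    # ~1.1 W per module
w_for_8gb = w_per_chip * 32                            # 32 x 2 Gb chips = 8 GB

print(f"~{w_per_chip:.2f} W per 2 Gb module, ~{w_for_8gb:.1f} W for 8 GB")
# Under the per-chip reading the same 8 GB would come to 32 * 8.7 = 278.4 W,
# which is the 200W+ figure this poster is objecting to.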
4 Gb chips should be more power efficient than 2 Gb chips. As well, I'm assuming it's on a newer process node, which will reduce power consumption too. But then again the DDR3 modules will be on larger capacity chips on a smaller process node as well, so the ratio should remain relatively the same.
And yes GDDR5 does use a LOT of power. It's a rather large power consumer on desktop graphics boards.
And those numbers that Samsung used are pretty much in line with desktop graphics cards using those chips at that time, so I don't see anything wonky with their numbers.
[edit] For example, in the DDR3 PDF a 2 Gb DDR3 module uses 71% more power than a 4 Gb DDR3 module for the same amount of memory. So while each individual 4 Gb chip uses more power than a 2 Gb chip, you need twice as many 2 Gb chips to reach the same memory capacity.
Regards,
SB
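As a rough illustration of the chip-density point, here is a small sketch combining the 71% figure quoted from the DDR3 PDF with the 34W / 48 GB number from the earlier table; the 4 Gb total here is an inferred value for illustration, not one taken from the datasheet.

# For the same capacity, a config built from 2 Gb chips needs twice as many
# chips as one built from 4 Gb chips. Per the PDF figure quoted above, the
# 2 Gb config draws 71% more power than the 4 Gb config.
ddr3_48gb_2gb_chips = 34.0                         # W, from the Samsung table above
ddr3_48gb_4gb_chips = ddr3_48gb_2gb_chips / 1.71   # ~19.9 W (inferred, for illustration)

print(f"48 GB with 2 Gb chips: {ddr3_48gb_2gb_chips:.1f} W "
      f"-> with 4 Gb chips: ~{ddr3_48gb_4gb_chips:.1f} W")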
A 2 Gb module is not going to eat up 8.7W or have a 256 bit interface. Eight of them, sure.