How is Sony going to implement 8GB of GDDR5 in PS4? *spawn

Perhaps Sony committed to GDDR5 banking on DDR3 stacking reducing their costs and board complexity significantly a year or so after launch?

Is that feasible? Could 2.5D stacking emulate the properties of the RAM they want inside the console?
 
HBM memory on an interposer would be lower latency and faster, so they could still design the memory controller to emulate the timings and clock it for exactly the same bandwidth. The question is at which point in time this kind of memory will cost less than GDDR5. They need 8GB and some very high-tech packaging technology. There's also the question of yield for such a huge amount of silicon in a single package.
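Just to put rough numbers on that (my own back-of-envelope; I'm assuming first-generation HBM's 1024-bit stack interface at 1 Gbps per pin, which is not anything Sony has talked about):

```python
# Back-of-envelope: how much HBM would it take to match a 256-bit GDDR5
# setup? Assumed figures: first-gen HBM at 1 Gbps/pin on a 1024-bit
# stack interface. Illustrative only.

def bandwidth_gbs(bus_width_bits, pin_rate_gbps):
    """Total bandwidth in GB/s: width (bits) * per-pin rate (Gbps) / 8."""
    return bus_width_bits * pin_rate_gbps / 8

gddr5 = bandwidth_gbs(256, 5.5)       # 176.0 GB/s, the announced figure
hbm_stack = bandwidth_gbs(1024, 1.0)  # 128.0 GB/s per assumed stack

print(gddr5 / hbm_stack)  # ~1.38, so two stacks would comfortably cover it
```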
 
Writing 1MB per second to the hard drive is hardly a problem worth engineering around. If any smartphone or point-and-shoot camera can manage to effortlessly encode and save 1080p video with their meager RAM and storage speeds, the PS4 won't even feel it.

And even 1MB/s (8 Mbps) is generous. I think they will most probably aim for 720p at 3-5 Mbps.
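The conversion is trivial but worth spelling out (bitrates as assumed above):

```python
# Video bitrate -> sustained disk write load.
def write_load_mb_per_s(bitrate_mbps):
    """A bitrate in Mbps means bitrate/8 in MB/s of writes."""
    return bitrate_mbps / 8

print(write_load_mb_per_s(8))  # 1.0 MB/s   (1080p at 8 Mbps)
print(write_load_mb_per_s(5))  # 0.625 MB/s (720p at 5 Mbps)
print(write_load_mb_per_s(3))  # 0.375 MB/s (720p at 3 Mbps)
```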
 
If it's a conventional hard disk, then the issue is more about thrashing. This is why you have two different kinds of HDD on the market (desktop vs. server). If a game caches an asset to load from the HDD, this sort of thrashing would affect it.
 
We've had knowledgeable folk tell us 8 Gbit GDDR5 chips aren't available, so Sony would need to double the bus width or clamshell the RAM or something decidedly non-trivial. The 8GB of GDDR5 was the real surprise and eye-opener. After that I should have switched off instead of sitting through an hour of old PR videos... :p

If the console had initially been designed around 4GB on a 256-bit bus, as seems to be the case, then they would already have been using clamshell, assuming 2Gb chips were what they planned to include. So it could be a fairly trivial change from a design point of view if they were simply to secure a supply of future 4Gb chips.

I'd say that's definitely more realistic than Sony implementing a 512-bit bus, as the bandwidth would then be much higher than it is. I doubt 4Gb chips would even be available at speeds as low as the 2.75 Gbps per pin that bandwidth would require on a 512-bit bus.
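The arithmetic behind that (my own numbers; the pin rates are just common GDDR5 grades):

```python
# Bandwidth for a given bus width and per-pin data rate.
def bandwidth_gbs(bus_width_bits, pin_rate_gbps):
    return bus_width_bits * pin_rate_gbps / 8

print(bandwidth_gbs(256, 5.5))  # 176.0 GB/s, the announced figure
print(bandwidth_gbs(512, 4.0))  # 256.0 GB/s: even slow chips overshoot on 512-bit
print(176 * 8 / 512)            # 2.75 Gbps per pin to hit only 176 GB/s
```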
 
4Gb GDDR5 chips will be used because SK Hynix will have them ready by 1Q2013 and Samsung has them listed as in mass production.

Hynix link (PDF): https://www.skhynix.com/inc/pdfDownload.jsp?path=/datasheet/Databook/Databook_1Q%272013_GraphicsMemory.pdf

Samsung link: http://www.samsung.com/global/busin...t/graphic-dram/detail?productId=7824&iaId=759

I hope my first post was helpful.

I'd say that was an excellent first post, I just spent 10 minutes searching Google for just that information with no luck ;) Cheers!
 
Any guess at the cost/power difference, since they are using the same number of chips, just going from 2Gbit to 4Gbit?

Seems like it was an easy change to make, unlike what some debate on here earlier suggested about having to redesign the hardware.
 
That's an interesting notion.

In fact they could have designed it around 16 1Gb GDDR5 chips in clamshell mode (2GB, per the earliest rumors), then upped it to 4GB with 2Gb chips, and eventually, when 4Gb chips came into play, upped it to 8GB.

No design changes needed at all :D

Power difference is nada. There is no voltage difference between the different densities according to Hynix's data sheets.
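That progression is easy to sanity-check (my own arithmetic):

```python
# Capacity scaling on a fixed 16-chip clamshell layout.
CHIPS = 16  # 8 per PCB side in x16 clamshell mode, 256-bit bus overall

for density_gbit in (1, 2, 4):
    capacity_gb = CHIPS * density_gbit / 8  # Gbit -> GByte
    print(f"{CHIPS} x {density_gbit}Gb = {capacity_gb:.0f} GB")
# 16 x 1Gb = 2 GB, 16 x 2Gb = 4 GB, 16 x 4Gb = 8 GB
```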
 
Wow, that is very interesting. No power difference would mean really no redesign of any hardware at all.

I thought they would have at least had to redesign the cooling, and maybe that is why they didn't show the console itself.
 
I've posted this in other threads, but I should have kept it here.
Better late than never.

The chips that the PS4 would be using seem to be at the low end on the voltage side.

Instead of using 6 Gbps speeds (192 GB/s × 8 / 256 pins = 6 Gbps per pin),
they're using 5.5 Gbps speeds (176 GB/s × 8 / 256 pins = 5.5 Gbps per pin).
The 5.5 Gbps chips with 4Gb density from Hynix (H5GC4H24MFR-T3C) require less voltage than the 6 Gbps ones (H5GQ4H24MFR-R2C).

To be specific, there seem to be three voltage ratings: 1.6V, 1.5V, and 1.35V.

1.6V is only available on two older high-performance chips; everything else is either 1.5V or 1.35V.
The one from Hynix that completely fits Sony's bill is rated at 1.35V.
Low power :D

http://www.skhynix.com/inc/pdfDownload.jsp?path=/datasheet/Databook/Databook_1Q%272013_GraphicsMemory.pdf
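Spelled out, the per-pin arithmetic quoted above is just this (bandwidth is in bytes, pins carry bits, hence the factor of 8):

```python
# Per-pin data rate from total bandwidth and bus width.
def pin_rate_gbps(bandwidth_gbs, bus_width_bits):
    return bandwidth_gbs * 8 / bus_width_bits

print(pin_rate_gbps(192, 256))  # 6.0 Gbps: would need the higher-voltage parts
print(pin_rate_gbps(176, 256))  # 5.5 Gbps: the 1.35V H5GC4H24MFR-T3C fits
```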
 
Good find; at 1.35V it should be only about 10W for the whole 8GB.

I'm not sure how the clamshell configuration works, but the PS4 would be using these x32 chips in x16 mode. I think it's the same chips either way; there aren't chips specifically marked as x16.

What I'm thinking is that maybe they didn't initially plan a clamshell configuration for the console: they might have had a PCB designed to allow clamshell for the devkits (8GB) and all chips on one side for the console (4GB), so they would source the exact same part number for either one. But now, with 8GB, that would require a complete rework of the casing because they'd need airflow under the board. Hence the delay for the enclosure.
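For what that 10W guess works out to per chip (my own rough split; actual draw depends on activity and isn't in the databook summary):

```python
# Rough per-chip figures behind the ~10W estimate above.
TOTAL_W = 10.0
CHIPS = 16      # clamshell: 8 chips on each side of the PCB
VOLTAGE = 1.35  # the low-voltage grade from the Hynix databook

per_chip_w = TOTAL_W / CHIPS
per_chip_a = per_chip_w / VOLTAGE
print(f"~{per_chip_w:.3f} W and ~{per_chip_a:.2f} A per chip")
# ~0.625 W / ~0.46 A per chip: modest for a 5.5 Gbps GDDR5 device
```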
 
There's no other way to make this work on a 256-bit bus, unless they were always going with 4Gbit chips.

Most people believe they just moved from 2Gbit to 4Gbit. If that was the case, it always was a clamshell design.
 
But 8GB in dev kits implies that 4Gb-density chips would already be in use. Maybe that's the case (early samples?), but isn't it also possible that the dev kits use a combination of DDR3 and GDDR5 in a discrete setup (which would be the source of the APU + discrete GPU rumours)?
 
So I guess we can close the thread now ;)
 
Clamshell automatically uses x16 mode for these x32 chips. It's in the documentation.


[Figure 5: clamshell configuration diagram from the Elpida GDDR5 datasheet]


Clamshell Mode
The GDDR5 SGRAM can operate in a x32 mode or a x16 (clamshell) mode to allow a clamshell configuration as shown in Figure 5. The mode is set at power-up.

The benefit of clamshell mode is that users are able to quickly react on changing market conditions by easily creating new product variations. E.g., by taking the same component from the inventory, utilizing the same controller, PCB layout and memory channel width, the user can decide on the actual framebuffer size at a very late stage of the manufacturing process by

• either populating only one side of the PCB and configuring the GDDR5 to x32 mode, which results e.g. in a 1GB framebuffer by using 8 pieces of 1Gbit with a 256-bit wide memory interface at the controller;
• or populating both sides of the PCB and configuring the GDDR5 to x16 mode, which results e.g. in a 2GB framebuffer by using 16 pieces of 1Gbit with a 256-bit wide memory interface at the controller.

Clamshell mode has no performance penalty because it preserves the point-to-point connection on the high-speed data bus. The shared address and command interface can easily be connected by vias in the PCB and the use of mirror function mode, which lets these pins appear at the exact opposite locations.


Taken from
http://www.elpida.com/pdfs/E1600E10.pdf
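The datasheet's two population options, plus the rumoured PS4 case, as a quick calculation (my own sketch; the datasheet examples use 1Gbit parts):

```python
# Framebuffer size for the Elpida datasheet's two board-population options.
def framebuffer_gb(chips, density_gbit):
    return chips * density_gbit / 8

print(framebuffer_gb(8, 1))   # 1.0 GB: one side populated, chips in x32 mode
print(framebuffer_gb(16, 1))  # 2.0 GB: both sides populated, chips in x16 mode
print(framebuffer_gb(16, 4))  # 8.0 GB: the PS4 configuration, with 4Gb parts
```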
 