PlayStation 4 (codename Orbis) technical hardware investigation (news and rumours)

GDDR5 timings as provided by Hynix datasheet:
CAS latency = 10.6ns
tRCD = 12ns
tRP = 12ns
tRAS = 28ns
tRC = 40ns


DDR3 timings for some Corsair DDR3-2133 RAM (11-11-11-28):
CAS = 10.3ns
tRCD = 10.3ns
tRP = 10.3ns
tRAS = 26.2ns
Apples to oranges: this is comparing boutique DDR3 to stock GDDR5 specs.

Comparing datasheet to datasheet, Micron parts are CL14 at 2133, so 13.1ns. That would give a 10% to 20% edge to GDDR5.
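
For anyone wanting to sanity-check the conversion, here's a minimal sketch (plain Python; the only inputs are the CL and speed figures already quoted in this thread):

```python
# Convert a CAS latency given in clock cycles to nanoseconds.
# The DDR command clock runs at half the MT/s figure.

def cas_ns(cl_cycles, data_rate_mtps):
    command_clock_mhz = data_rate_mtps / 2
    return cl_cycles / command_clock_mhz * 1000

ddr3 = cas_ns(14, 2133)  # the Micron DDR3-2133 CL14 part mentioned above
gddr5 = 10.6             # the Hynix datasheet figure quoted above

print(f"DDR3-2133 CL14 = {ddr3:.1f}ns")        # ~13.1ns
print(f"GDDR5 edge = {1 - gddr5 / ddr3:.0%}")  # ~19% lower
```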
 
Or, as said before: the interface to the memory cells doesn't play much of a role when it comes to DRAM latencies. They are always roughly the same, as they are for the most part determined by the DRAM arrays themselves, not by the fancy IO logic used to increase the bandwidth available on low-pin-count interfaces.
 
Thanks, that makes more sense.

I'm a bit off topic... but I wonder why CAS latency never really changed since DDR1.
DDR1-400 CL3
DDR2-800 CL5
DDR3-1600 CL11
DDR3-2133 CL14
DDR4-2400 CL16
DDR4-3500 CL20

Everything has been somewhere around 12ns CAS. Despite the huge process improvements and faster speeds, it's all the same latency. :???:
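
To put numbers on that (same cycles-to-ns conversion as above, using the speed/CL pairs just listed):

```python
# CAS latency in wall-clock ns for each part listed above;
# the command clock is half the MT/s rating.
parts = [("DDR1-400", 400, 3), ("DDR2-800", 800, 5),
         ("DDR3-1600", 1600, 11), ("DDR3-2133", 2133, 14),
         ("DDR4-2400", 2400, 16), ("DDR4-3500", 3500, 20)]

for name, mtps, cl in parts:
    print(f"{name} CL{cl}: {cl / (mtps / 2) * 1000:.1f}ns")
# Everything lands between roughly 11ns and 15ns.
```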
 
My first guess would be that the capacitor of each cell also shrinks with each process step. So there is less and less charge (in absolute terms) to change the state of the bitlines.
 
This would be the correct answer. Since the work that needs to be done by the sense amps increases as the tech shrinks, DRAM just doesn't get any faster with process shrinks.
 
Would voltage play a role then?
New processes are driven at a lower voltage, the sense-amp slew rate is impacted by the available charge, and that charge scales with voltage (the stored energy with voltage squared). It would explain why the only parts with improved latency (boutique RAM) are overvolted above the "nominal" voltage the DRAM cells were designed for.
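
A rough sketch of that scaling; the capacitance here is a made-up ballpark purely for illustration, not from any datasheet:

```python
# How cell charge and stored energy scale with voltage for a fixed capacitance.
C = 25e-15  # farads -- an assumed, purely illustrative DRAM cell capacitance

for v in (1.2, 1.35, 1.5, 1.65):
    q = C * v            # charge Q = C*V, linear in voltage
    e = 0.5 * C * v**2   # stored energy E = C*V^2/2, quadratic in voltage
    print(f"{v:.2f}V: Q = {q * 1e15:.2f}fC, E = {e * 1e15:.2f}fJ")
```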
 
He said, "it's an older devkit so it's not as fast as the current ones. It's about ten percent slower."

I just had another thought on this that's kinda out there but the numbers match up with the older Devkits being 10% slower.

What if the PS4 GPU is a modified HD 7870 and the yields were so good that they decided to use all 20 CUs?

18 CUs is 10% slower than 20 CUs.
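
The arithmetic behind that, with the GPU clock as an assumption (the widely rumoured 800MHz; nothing here confirms it):

```python
# Peak ALU throughput for 18 vs 20 CUs at an assumed 800 MHz GPU clock.
clock_hz = 800e6
flops_per_cu_per_clock = 64 * 2  # 64 lanes per CU, 2 FLOPs per FMA

for cus in (18, 20):
    tflops = cus * flops_per_cu_per_clock * clock_hz / 1e12
    print(f"{cus} CUs: {tflops:.3f} TFLOPS")

# 18/20 = 0.9, hence "10% slower" -- true at any clock, since it cancels out.
```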
 
Quick question. It's widely known that Orbis has 176GB/s peak memory bandwidth, but has the real-world/sustained figure been measured, or failing that, educationally guessed at? (by people with an education in the subject...!)
 
I just had another thought on this that's kinda out there but the numbers match up with the older Devkits being 10% slower.

What if the PS4 GPU is a modified HD 7870 and the yields were so good that they decided to use all 20 CUs?

18 CUs is 10% slower than 20 CUs.

Interesting, but is there anything to substantiate this? I'd think that if this were true Sony would be blasting it from the rooftops.
 
Neither party has yet committed to "everything is set in stone"; even now they're saying that the final specs are subject to change. I don't think it's likely, but I can imagine a game of cat and mouse being played by MS and Sony right now.

Leaving any unannounced spec changes until the very last minute means the opposition has no time to respond. We've already seen MS try to reduce the performance gap by modifying the CPU/GPU clocks. Not that it makes any significant difference.

Sony are more tightly constrained by hardware design, a smaller case, etc. They've also shown no interest in increasing their performance lead. But you never know :devilish:
 
I thought that the Xbone SoC had the current highest transistor count out there @ 5,000,000,000?

According to Wiki the PS4 is actually higher at 6,000,000,000. So an entire billion transistors more... If that's true, what are they being used for? The extra CUs wouldn't account for that, would they?

Will we find out that Sony included ESRAM on the SoC and then ditched it because of yield issues?
 
Quick question. It's widely known that Orbis has 176GB/s peak memory bandwidth, but has the real-world/sustained figure been measured, or failing that, educationally guessed at? (by people with an education in the subject...!)
Taking the fillrate benchmarks of hardware.fr as reference (which of course use the ROPs to access the memory; read or write benchmarks limited by the memory bandwidth through the TMUs don't exist), the HD 7850 and 7870 (also 32 ROPs and a 256-bit GDDR5 interface made by AMD) achieve about 93% of the peak bandwidth with writes and about 91% of the peak bandwidth with blending (mixed reads and writes, but longer bursts).
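
Applying those efficiency figures to the quoted Orbis setup gives a ballpark answer to the question above; the 256-bit @ 5.5Gbps configuration is the widely reported one, not officially confirmed:

```python
# Peak bandwidth from bus width and per-pin data rate, then scaled by the
# hardware.fr efficiency figures quoted above.
bus_bits = 256
gbps_per_pin = 5.5
peak_gbs = bus_bits / 8 * gbps_per_pin  # 176 GB/s

for workload, efficiency in (("writes", 0.93), ("blending", 0.91)):
    print(f"{workload}: ~{peak_gbs * efficiency:.0f} GB/s of {peak_gbs:.0f} GB/s peak")
```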
 
I thought that the Xbone SoC had the current highest transistor count out there @ 5,000,000,000?

According to Wiki the PS4 is actually higher at 6,000,000,000. So an entire billion transistors more... If that's true, what are they being used for? The extra CUs wouldn't account for that, would they?
Don't trust Wikipedia on such things. It's simply bullshit and probably edited in by some stupid fanboy.
 
Wiki is mistaken.

Radeon 7870 has 20 CUs [most likely the PS4 has them as well; without that buffer zone Sony wouldn't have the "phenomenal yields" Tretton mentioned], and it has a die size of 212mm² and a total transistor count of 2.8 billion. The CPU section and other modules will not add much to those numbers.
 
I thought that the Xbone SoC had the current highest transistor count out there @ 5,000,000,000?

According to Wiki the PS4 is actually higher at 6,000,000,000. So an entire billion transistors more... If that's true, what are they being used for? The extra CUs wouldn't account for that, would they?

Will we find out that Sony included ESRAM on the SoC and then ditched it because of yield issues?

Trusting Wiki completely is like trusting a 5-year-old around a plate of cookies (my son ate all my cookies :cry:). Anyhow, that page was recently edited, so anyone could have done some funny business. Not saying the PS4 SoC isn't that size... I just don't trust Wiki as being the ultimate source for info, especially for something Sony has never stated officially.
 
Radeon 7870 has 20 CUs [most likely the PS4 has them as well; without that buffer zone Sony wouldn't have the "phenomenal yields" Tretton mentioned], and it has a die size of 212mm² and a total transistor count of 2.8 billion. The CPU section and other modules will not add much to those numbers.
The CPU section and northbridge are probably adding (significantly) more than 60mm² to it. Maybe Sony took slightly larger PHYs to get to 5.5Gbps instead of the ~5Gbps limit of Pitcairn. In the end, we are likely looking at a die size around 300mm² for the PS4, maybe even above it, depending on the unknown bits.
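
As a back-of-envelope version of that estimate (every input is a guess taken from the posts above, not an official figure):

```python
# Additive die-size guess for the PS4 APU from this thread's numbers.
pitcairn_mm2 = 212      # Radeon 7870 die: 20 CUs, 32 ROPs, 256-bit interface
cpu_and_nb_mm2 = 80     # assumed, per the "(significantly) more than 60mm^2" above
custom_blocks_mm2 = 15  # assumed: audio, (de)compression, other fixed-function bits

total_mm2 = pitcairn_mm2 + cpu_and_nb_mm2 + custom_blocks_mm2
print(f"~{total_mm2}mm^2")  # lands right around the ~300mm^2 ballpark
```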
 
Trusting Wiki completely is like trusting a 5-year-old around a plate of cookies (my son ate all my cookies :cry:). Anyhow, that page was recently edited, so anyone could have done some funny business. Not saying the PS4 SoC isn't that size... I just don't trust Wiki as being the ultimate source for info, especially for something Sony has never stated officially.

Ummm. Cookie...

I have no intention of actually ever trusting Wiki. That would be like paying a complete stranger to look after your luggage at an airport...

But it would be good to have a dig around to try and find the t-count for the PS4 APU: 8-core CPU, GPU, and various other custom hardware for sound, (de)compression, etc.
 