PlayStation 5 [PS5] [Release November 12, 2020]

I brought the conversation here to keep it on-topic:


I see where you're going with that, but isn't it a simpler explanation to just compare the XSX's flops:bandwidth ratio to the PS5's, where we discover they are both pretty much equal?
GPU compute isn't the only big consumer of memory bandwidth. AFAIK, ROP count is similar between the two, and the PS5 has >20% higher clocks. It's possible that the PS5's bandwidth requirements are similar to or even higher than the SeriesX's.


Your suggestion that the PS5 has IC on top of that would give it bandwidth far in excess of the XSX.
It would give it bandwidth in excess of the SeriesX, but only for the operations that fit in the small cache. The 100GB/s duplex of the XBone's ESRAM, if added to the DDR3's 60GB/s, results in a significantly higher "total bandwidth" than the PS4's 176GB/s, yet we never saw the XBone behaving like a 260GB/s system.
That said, the PS5 having something like 56MB of Infinity Cache + 448GB/s GDDR6 wouldn't mean they have a bandwidth advantage over the SeriesX with 560GB/s GDDR6.
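To make that point concrete, here's a toy calculation (my own sketch: the serialized `effective_bw` model and the 50% hit rate are assumptions, with the bandwidth figures taken from the post above) of why the two pools can't simply be summed:

```python
def effective_bw(fast_bw, slow_bw, hit_rate):
    # Each byte is served either by the fast pool (probability hit_rate)
    # or by the slow pool; the pools are not overlapped. This is pessimistic,
    # but it shows why peak figures can't simply be added together.
    return 1.0 / (hit_rate / fast_bw + (1.0 - hit_rate) / slow_bw)

esram_bw = 200.0  # the post's 100 GB/s duplex, counted in both directions
ddr3_bw = 60.0

naive_total = esram_bw + ddr3_bw                 # 260 GB/s "on paper"
blended = effective_bw(esram_bw, ddr3_bw, 0.5)   # ~92 GB/s at a 50% hit rate
print(naive_total, round(blended, 1))
```

Even with a generous hit rate, the blended figure sits far below the naive 260GB/s sum, which matches what we actually saw from the XBone.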


If it had that, it's pretty likely Sony would have mentioned it by now.
Why would they? Infinity Cache is apparently completely transparent to the developer (therefore there's no need to mention it in the Road to PS5 presentation), and the existence of Infinity Cache on RDNA2 would have been under NDA up until 5 days ago.


It seems to me that it would have been an incredibly risky strategy for Sony to design a next-gen console that, from the get-go, was effectively unable to render at 4K in next-gen games.
"Not designed for maximum IQ at 4K" != "unable to render at 4K".
The console is able to render at 4K. There are plenty of release date games coming up in 10 days that will be rendering at 4K.
 
It would give it bandwidth in excess of the SeriesX, but only for the operations that fit in the small cache. The 100GB/s duplex of the XBone's ESRAM, if added to the DDR3's 60GB/s, results in a significantly higher "total bandwidth" than the PS4's 176GB/s, yet we never saw the XBone behaving like a 260GB/s system.
That said, the PS5 having something like 56MB of Infinity Cache + 448GB/s GDDR6 wouldn't mean they have a bandwidth advantage over the SeriesX with 560GB/s GDDR6.

How do we know that? Perhaps the XBO was never bandwidth-limited, but because it had significantly less core power than the PS4, the extra bandwidth couldn't help it close the performance gap?

What we do know is that early indications at least suggest the Infinity Cache can make a 512GB/s RX 6900 perform like a 936GB/s RTX 3090. So it would seemingly give the PS5 an enormous bandwidth advantage over the XSX if it were present.

Why would they? Infinity Cache is apparently completely transparent to the developer (therefore there's no need to mention it in the Road to PS5 presentation), and the existence of Infinity Cache on RDNA2 would have been under NDA up until 5 days ago.

For the reason I stated above: an "enormous bandwidth advantage over the XSX" seems like something worth shouting about to me. I don't remember Microsoft being shy about that with either of its last two consoles.

"Not designed for maximum IQ at 4K" != "unable to render at 4K".
The console is able to render at 4K. There are plenty of release date games coming up in 10 days that will be rendering at 4K.

Yes, of course it is. But if you were going to put IC in there to boost its peak performance, it'd be a bit strange IMO to only put in enough to boost peak performance at a resolution below 4K, given that, as you state above, plenty of games are targeting 4K on the console, and let's face it, it is being marketed as a 4K console.
 
Has there been any mention of whether the PS5 will have universal/system-level downsampling for 1080p displays?
 
https://www.resetera.com/goto/post?id=50112970

Someone noticed there's a dedicated Realtek audio chip inside the DualSense.
Isn't it supposed to have one?

The controller needs amplified mono speaker output, amplified headphone output for the audio jack, and multi-channel audio input with noise cancellation.
An audio codec from Realtek like the ALC264 would have all of that already integrated, and AFAIK these can be pretty cheap.


What we do know is that early indications at least suggest the Infinity Cache can make a 512GB/s RX 6900 perform like a 936GB/s RTX 3090. So it would seemingly give the PS5 an enormous bandwidth advantage over the XSX if it were present.
LLC bandwidth will scale with cache amount if you're adding up the bandwidth of each cache slice.
If, in the case of RX6900 vs. RTX3090, 128MB of Infinity Cache equates to +424GB/s, then 56MB worth of cache slices will equate to +185.5GB/s.
On the PS5, 448GB/s (- 50GB/s) + 185.5GB/s = 583.5GB/s of "effective GPU bandwidth", which then compares to the SeriesX's 560GB/s - 50GB/s = 510GB/s. In practice, we'd be looking at only ~14% more effective bandwidth on the PS5, which has 18% lower compute resources but 20% higher fillrate, as well as an I/O subsystem and audio processor that probably consume more bandwidth. Tempest could be especially bandwidth-hungry since it's an entire CU that has no caches.
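For reference, the same back-of-envelope math as a quick script (the linear scaling-with-slice-count and the ~50GB/s system reservation are the assumptions stated above, not confirmed figures):

```python
# Bandwidth delta the comparison attributes to 128 MB of Infinity Cache:
ic_delta_128mb = 936 - 512                 # 424 GB/s (RTX 3090 vs RX 6900)

# Linear scaling down to a hypothetical 56 MB of cache slices:
ic_delta_56mb = ic_delta_128mb * 56 / 128  # 185.5 GB/s

# "Effective GPU bandwidth", reserving ~50 GB/s for the rest of the system:
ps5_eff = 448 - 50 + ic_delta_56mb         # 583.5 GB/s
xsx_eff = 560 - 50                         # 510 GB/s

print(ic_delta_56mb, ps5_eff, xsx_eff)
print(round((ps5_eff / xsx_eff - 1) * 100, 1))  # ~14.4% advantage
```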


For the reason I stated above: an "enormous bandwidth advantage over the XSX" seems like something worth shouting about to me. I don't remember Microsoft being shy about that with either of its last two consoles.
When was the last time Sony made a big deal about comparing their hardware to Microsoft's? When the 2013 consoles came out, I don't recall Sony bragging about the PS4 having much more compute throughput and system memory bandwidth than the XBone.
All I remember is Cerny warning about TFLOP count not being the only measurement to consider when comparing real performance between two systems, and if DF's latest statements are anything to go by, it seems he was right.


But if you were going to put IC in there to boost its peak performance, it'd be a bit strange IMO to only put in enough to boost peak performance at a resolution below 4K, given that, as you state above, plenty of games are targeting 4K on the console, and let's face it, it is being marketed as a 4K console.
The PS4 Pro is also marketed as a 4K console yet it very rarely renders at full 4K.
And just like the PS4 Pro, the PS5 could be a console that is optimized to render at 1440p-1800p + temporal / checkerboard / AI upscaling.
 

Two voice coils and two motors for the triggers... battery life will be crap. I'll buy some light USB-C cable and play with the cable attached.

edit: regardless of battery life, DualSense looks cool and is packed with innovative stuff, well done Sony.
 
The PS5 die looks a bit too small for any kind of meaningful cache to be there. AquariusZ said it's ~300mm² back in February, with the XSX being ~350mm² (and Big Navi being 505mm²), so I would assume, with everything we have for now (and Cerny not saying a thing about it), the consoles are probably "Infinity Cache"-free.
As I've alluded to earlier, all cache is meaningful; all chips could use more of it.
The only question you really need to ask is whether Infinity Cache is something different from regular cache. Size doesn't seem that relevant, since we always demand more.
 
The PS5 die looks a bit too small for any kind of meaningful cache to be there. AquariusZ said it's ~300mm² back in February, with the XSX being ~350mm² (and Big Navi being 505mm²), so I would assume, with everything we have for now (and Cerny not saying a thing about it), the consoles are probably "Infinity Cache"-free.

Considering the CU counts for both and dividing the available mm² between them, there is definitely a discrepancy.

350/60=5.8
300/40=7.5

They should come out largely the same. I realise this shouldn't be taken as accurate, since there is other hardware on the die, but it does show an inconsistency. Assuming they have the same CPU and similar memory controllers, is the rest of that area attributable to the I/O processors and audio?
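The same division as a quick script, plus a variant that first subtracts a shared non-GPU block; the 150mm² figure is purely a made-up placeholder for illustration, not a known number:

```python
xsx_area, xsx_cus = 350, 60   # die sizes (mm²) and CU counts from the post
ps5_area, ps5_cus = 300, 40

print(round(xsx_area / xsx_cus, 1))   # 5.8 mm² per CU
print(round(ps5_area / ps5_cus, 1))   # 7.5 mm² per CU

# If both dies share a fixed non-GPU block (CPU, I/O, media engines),
# the naive per-CU figures converge. 150 mm² is a hypothetical placeholder:
shared = 150
print(round((xsx_area - shared) / xsx_cus, 2))  # ~3.33 mm² per CU
print(round((ps5_area - shared) / ps5_cus, 2))  # 3.75 mm² per CU
```

The naive per-CU ratio overstates the gap exactly because the shared logic gets spread over fewer CUs on the smaller chip.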
 
Two voice coils and two motors for the triggers... battery life will be crap. I'll buy some light USB-C cable and play with the cable attached.

edit: regardless of battery life, DualSense looks cool and is packed with innovative stuff, well done Sony.
Bigger battery, and most are reporting better overall life, even playing a game that uses all the features (Astro).
 
Considering the CU counts for both and dividing the available mm² between them, there is definitely a discrepancy.

350/60=5.8
300/40=7.5

They should come out largely the same. I realise this shouldn't be taken as accurate, since there is other hardware on the die, but it does show an inconsistency. Assuming they have the same CPU and similar memory controllers, is the rest of that area attributable to the I/O processors and audio?
Are we 100% sure they're both made on the same process node?
 
Considering the CU counts for both and dividing the available mm² between them, there is definitely a discrepancy.

350/60=5.8
300/40=7.5

They should come out largely the same. I realise this shouldn't be taken as accurate, since there is other hardware on the die, but it does show an inconsistency. Assuming they have the same CPU and similar memory controllers, is the rest of that area attributable to the I/O processors and audio?

I also think the PS5 is >300mm² - for some reason 316 rings a bell.
 
Yeah, I have no idea.

Obviously they're both on some variant of 7nm, and most likely the same one, but while watching the XBSX Hot Chips presentation the other day I noticed the MS speaker was reluctant to reveal which version of 7nm the Series X uses. I found it a little odd; he referenced something about not being able to speak on it because of AMD.
 
Considering the CU counts for both and dividing the available mm² between them, there is definitely a discrepancy.

350/60=5.8
300/40=7.5

They should come out largely the same. I realise this shouldn't be taken as accurate, since there is other hardware on the die, but it does show an inconsistency. Assuming they have the same CPU and similar memory controllers, is the rest of that area attributable to the I/O processors and audio?
If we are pedantic, it's 359/56 = 6.4.

In addition, the portion of the SoC that is not GPU is pretty much the same on both, and the PS5 has more mm² per CU reserved for the memory PHY (the XSX has 40% more CUs but only 25% more PHY space, i.e. one additional 64-bit PHY).

Add to that one CU reserved for the Tempest engine, plus any additional logic for the cache scrubbers, and it is actually pretty much a fit.

16 CUs + a 64-bit PHY should not end up being more than 50mm².
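Sketching that recount out (die sizes, CU counts and bus widths as quoted in the thread; the 50mm² ceiling is the post's own estimate, not a measured figure):

```python
xsx_cus, ps5_cus = 56, 40       # physical CUs, including disabled ones
xsx_bus, ps5_bus = 320, 256     # memory bus widths in bits

print(round(359 / 56, 1))                # 6.4 mm² per physical CU on the XSX
print(round(xsx_cus / ps5_cus - 1, 2))   # 0.4  -> 40% more CUs
print(round(xsx_bus / ps5_bus - 1, 2))   # 0.25 -> 25% more PHY width

# If the 16 extra CUs plus one extra 64-bit PHY cost at most ~50 mm²,
# the implied PS5 die lands right around the ~310-315 mm² estimates:
print(359 - 50)                          # 309 mm²
```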
 
If we are pedantic, it's 359/56 = 6.4.

In addition, the portion of the SoC that is not GPU is pretty much the same on both, and the PS5 has more mm² per CU reserved for the memory PHY (the XSX has 40% more CUs but only 25% more PHY space, i.e. one additional 64-bit PHY).

Add to that one CU reserved for the Tempest engine, plus any additional logic for the cache scrubbers, and it is actually pretty much a fit.

16 CUs + a 64-bit PHY should not end up being more than 50mm².

I included the redundant CUs for both, and both die-size figures are ±10mm². If anything that adds more to the PS5, as 10mm² is proportionally more of its die.

It could definitely be calculated better if someone knows the size of the CPU, etc.

But to @VitaminB6's point, it could just be down to manufacturing nodes.

Edit: no matter how you cut it, there is a valid discrepancy here.
 
According to many, judging by the images, the PS5 die is about 310-315mm² (which should ring a bell indeed, but that's not the point). I believe I read here a thorough prediction of the PS5 die size giving a ~315mm² result. That's without additional cache.

The only way for the PS5 to have a meaningful amount of Infinity Cache is to use 7nm EUV lithography instead of 7nm P. But I think that's highly improbable.
 
I included the redundant CUs for both, and both die-size figures are ±10mm². If anything that adds more to the PS5, as 10mm² is proportionally more of its die.

It could definitely be calculated better if someone knows the size of the CPU, etc.

But to @VitaminB6's point, it could just be down to manufacturing nodes.
But the XSX has 56 CUs (4 of them redundant), and the Tempest engine is a stripped-down CU separate from the GPU; it is not counted in the GPU part of the die.

There is no discrepancy, that is the thing. We know the size of an additional 64-bit bus and of 16 CUs.
 