PlayStation 5 [PS5] [Release November 12, 2020]

I think a lot of people would call the PS4 slow based on the GPU specs alone, particularly in the PC realm.

Now yes, not at the time. A 7870 was an upper mid-range GPU. The CPU was slow for the time, but at least they could say '8 cores'.
The move from HDD to SSD is going to enable faster loading, streaming, and things like quick resume, plus at least a five-fold increase in GPU power and a real CPU this time around.
 
So when Cerny explains that the RDNA2 compute unit is approx 60% bigger than the GCN CU, is that extra size due only to the ray tracing addition, or is there other stuff adding to that increase in transistor count? Does anyone know how much extra space the RT hardware takes up on the CU?
I'm trying to see what Cerny was getting at when he said that the 36 CUs in the GPU were the equivalent of 58 GCN CUs. Is he saying that performance-wise it's the equivalent of 58 GCN CUs, or is it just from a size point of view?
I expect the RDNA2 CUs to have a 50% performance-per-clock increase over the PS4's compute units.
So in essence the PS5 could perform at the equivalent of a 15.4 TFLOPS GCN GPU.
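For what it's worth, the arithmetic checks out; here's a quick back-of-envelope sketch (the flat 1.5x per-clock uplift is my assumption from the 36-vs-58 comparison, not a confirmed figure):

```c
/* Back-of-envelope check of the GCN-equivalent figure above. Assumes 64
   FP32 lanes per CU and 2 ops per cycle (FMA); the 1.5x per-clock uplift
   for RDNA2 over GCN is an assumption, not a confirmed number. */
#include <stdio.h>

int main(void) {
    const double cus       = 36;    /* PS5 active compute units  */
    const double lanes     = 64;    /* FP32 lanes per CU         */
    const double ops       = 2;     /* FMA = 2 FLOPs per cycle   */
    const double clock_ghz = 2.23;  /* PS5 max GPU clock         */
    const double uplift    = 1.5;   /* assumed RDNA2-vs-GCN gain */

    double tflops = cus * lanes * ops * clock_ghz / 1000.0;
    printf("PS5 peak:       %.2f TFLOPS\n", tflops);           /* ~10.28 */
    printf("GCN-equivalent: %.2f TFLOPS\n", tflops * uplift);  /* ~15.4  */
    return 0;
}
```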
 
Maybe if the advantage of the Tempest architecture is its low latency and guaranteed operation time, it could be useful for VR, and not just for the 3D audio aspect. Tracking processing has critical latency requirements; in fact, I can't think of anything in gaming where low latency and a guaranteed operation deadline matter more.
If the resource is accessible at very low latency, this could be potentially useful. Generally the CPU cores have promised the lowest latency, with even Jaguar being cited as potentially going below 1 ms in Sony's audio presentation.
If there is some cost to maintaining that latency, such as having to isolate the audio workload on an underutilized CPU core, perhaps a consistently low-latency, dedicated engine can help.
Whether Tempest can offer that to the developer may depend on how it is configured, and whether some of the latency-adders in the PS4 still exist. The DSP in the PS4 had a secure API that added latency, for example.
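To put that sub-millisecond figure in context, here's a rough illustration of what a 1 ms deadline means in audio buffer sizes at 48 kHz (my numbers for illustration, not from Sony's presentation):

```c
/* How much audio fits in a given deadline at 48 kHz. Anything over
   48 samples per block already exceeds a 1 ms budget, which is why
   guaranteed scheduling matters for this workload. Illustrative only. */
#include <stdio.h>

int main(void) {
    const double sample_rate = 48000.0;           /* samples per second */
    const int sizes[] = { 32, 48, 64, 128, 256 }; /* common block sizes */
    for (int i = 0; i < 5; i++) {
        double ms = sizes[i] / sample_rate * 1000.0;
        printf("%4d-sample block = %.2f ms\n", sizes[i], ms);
    }
    return 0;
}
```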

So when Cerny explains that the RDNA2 compute unit is approx 60% bigger than the GCN CU, is that extra size due only to the ray tracing addition, or is there other stuff adding to that increase in transistor count?
The comparison was between an RDNA2 CU and a PS4-era Sea Islands GCN CU. GCN saw steady feature growth every generation even before the jump to RDNA, and RDNA2 presumably grows the footprint of RDNA further.
As an example of growth between GCN generations, Vega carried an extra 3.9 billion transistors over Fiji as a result of extra features and resources. The largest single consumer of those was apparently the implementation changes needed to increase clock speed.
https://www.anandtech.com/show/11717/the-amd-radeon-rx-vega-64-and-56-review/2
"Talking to AMD’s engineers, what especially surprised me is where the bulk of those transistors went; the single largest consumer of the additional 3.9B transistors was spent on designing the chip to clock much higher than Fiji."

Does anyone know how much extra space the RT hardware takes up on the CU?
I'm trying to see what Cerny was getting at when he said that the 36 CUs in the GPU were the equivalent of 58 GCN CUs. Is he saying that performance-wise it's the equivalent of 58 GCN CUs, or is it just from a size point of view?
I'm sure someone knows the cost of the RT hardware, but nothing is public.
The comparison was just in terms of transistor count, to better explain why CU count alone wasn't an accurate measure of how much the GPU had grown. Cerny indicated performance depends on more than CU count and TFLOPS, and that the new architecture could do more than those single numbers would suggest. He didn't commit to a specific amount of improvement.
 
Now yes, not at the time. A 7870 was an upper mid-range GPU. The CPU was slow for the time, but at least they could say '8 cores'.
The move from HDD to SSD is going to enable faster loading, streaming, and things like quick resume, plus at least a five-fold increase in GPU power and a real CPU this time around.

And to think the R9 290X, roughly 3x more powerful, released at the same time as the PS4. But in all fairness, it took a new generation to get us better-looking games. I wonder when MS and Sony started considering mid-gen refreshes and whose idea it actually was (maybe AMD's?). It could've been planned from the beginning to play out as newer process technologies made it feasible to fit more resources in the same die area. Ultimately, it seems die area played the biggest part in the console APU designs, since the APU would be the single most expensive component and yields would be critical.

This gen was definitely an exercise in regrouping and reevaluating what consoles should and could be, and having an affordable BOM was a big part of that.

In reference to the next gen, I wonder how adopting SSDs or hybrid drives with high-speed I/O in the current gen could've affected the memory architectures of the PS4 and Xbone, assuming it would've been affordable. Either system needing 8 GB arguably had more to do with OS back-end operation and content sharing than with actual game rendering needs. I bet they could've gotten by with 6 GB, or even the originally slated 4 GB, had they been equipped with high-speed SSDs.
 
I wonder when MS and Sony started considering mid-gen refreshes and whose idea it actually was (maybe AMD's?). It could've been planned from the beginning to play out as newer process technologies made it feasible to fit more resources in the same die area.

It would have been almost right away, as soon as the base consoles launched: they would have needed to get things in place for compatibility, and it takes a long lead time to put together the customizations they incorporated. The Scorpio DF interview indicated they were running simulations for potential designs before they went to AMD. For MS, it might have been an experiment within their overarching compatibility strategy.

Neo, of course, was a year ahead in release, but IIRC, they were concerned about swaying folks to stay with consoles instead of upgrading to PC? I forget.
 
I think it's pretty obvious Sony got caught out by the XSX being so powerful this next gen. Nobody would design a system from scratch to have variable frequency. It's not a preferred thing, and is evidence of a reactionary step.
The PS4 and PS4 Pro were quite ordinary from a specs point of view. At the time the PS4 released, a 1.8 TFLOPS GPU was pretty average, and the Jaguar cores were weak. Even when the PS4 Pro came out, it was quite average spec-wise. It was only the fact that the Xbox stuffed up with Kinect that the PS looked good.
So a 36 CU, 9.2 TFLOPS GPU for the PS5 seems in line with the PS4 to me. However, MS went balls out to make sure they got the power crown again.
Sony got wind of the XSX power levels and worked out they could not go into the next gen with such a power difference, especially since it was single digit vs double digit (always looks worse).
Because they were using one of the excess CUs for their Tempest Engine, they couldn't enable the extra four CUs on the die to take it up to 40 CUs, so the only option was to push the clocks as far as they could.
Sony come across as a bit shell-shocked, and their repeated mantra of SSD and 3D audio is telling to me.
However, this shortfall in power has an upside for people like me who love tech: it is impressive to see a console clocked so high that nobody thought it possible. I am also really interested to see what cooling solution they have come up with. Cooling a smaller APU at such high clocks is going to take some smarts. I don't doubt Sony's ability here.
So while I have no doubt the XSX is a far superior machine in almost every way, I am actually far more interested to learn more about the PS5 than the XSX now.
 
Neo, of course, was a year ahead in release, but IIRC, they were concerned about swaying folks to stay with consoles instead of upgrading to PC? I forget.

At least Sony has a nice library of exclusives, which makes me question the move to release Horizon on PC. It would be interesting to see Sony give their exclusives the PC treatment once the titles have already met their 95% potential lifetime predicted sell-through rate. It's a pretty easy way to gain a new audience (and attract buyers to purchase a console for timed exclusives) and make some extra moolah. Too many PS4 games are begging for a PC release... Killzone: Shadow Fall being one of them, since it controls like ass on a gamepad and has some framerate issues. Imagine how crazy it would be to see the original Killzone released on PC like it was originally intended in early development!

Horizon and Uncharted 4 control pretty damn well and are rock solid performance-wise, making a move to PC less needed. Horizon on PC might have more to do with PC support in Decima for Death Stranding? I wonder how well the GPGPU-based world-building system will translate to CPU, unless it's been ported to DX or Vulkan.
 
Then again, it would be simpler for them to do a remaster treatment and not have to deal with the wide variety of hardware (support costs), while also having to reconsider the entire design of the UI for mouse/KB.
 
If the resource is accessible at very low latency, this could be potentially useful. Generally the CPU cores have promised the lowest latency, with even Jaguar being cited as potentially going below 1 ms in Sony's audio presentation.
If there is some cost to maintaining that latency, such as having to isolate the audio workload on an underutilized CPU core, perhaps a consistently low-latency, dedicated engine can help.
Whether Tempest can offer that to the developer may depend on how it is configured, and whether some of the latency-adders in the PS4 still exist. The DSP in the PS4 had a secure API that added latency, for example.
There have been a few additions aimed at preserving or micro-managing cache, back with the PS4 and the volatile flag, and now with the SSD loading, where they talked about doing precise cache scrubbing. Would there be an advantage to an SPU-like access mode that helps avoid thrashing the cache? With 20 GB/s I was thinking it might be useful to completely bypass the caches if it's probably read-once, write-once sort of data. Or at least some control from that perspective.
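On the PC side, the closest analogue to that read-once/write-once bypass is non-temporal (streaming) loads/stores. A minimal sketch with SSE2 intrinsics, purely as an illustration of the concept; it says nothing about how Tempest or the PS5's cache scrubbers actually work:

```c
/* Streaming stores write around the cache hierarchy, so write-once data
   doesn't evict hot lines. dst must be 16-byte aligned and n a multiple
   of 4; illustrative PC-side analogue only. */
#include <emmintrin.h>  /* SSE2: _mm_loadu_si128, _mm_stream_si128, _mm_sfence */
#include <stddef.h>
#include <stdint.h>

void copy_write_once(const int32_t *src, int32_t *dst, size_t n) {
    for (size_t i = 0; i < n; i += 4) {
        __m128i v = _mm_loadu_si128((const __m128i *)(src + i));
        _mm_stream_si128((__m128i *)(dst + i), v);  /* bypasses cache */
    }
    _mm_sfence();  /* order streaming stores before later loads/stores */
}
```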
 
Then again, it would be simpler for them to do a remaster treatment and not have to deal with the wide variety of hardware (support costs), while also having to reconsider the entire design of the UI for mouse/KB.

Even better: what if AMD were to finance ports that run exclusively on their graphics hardware? They wouldn't have the balls to do it, but imagine the ****storm. I do wish Sony and MS would be more open to mouse/KB support in their games for single player.
 
Even better: what if AMD were to finance ports that run exclusively on their graphics hardware? They wouldn't have the balls to do it, but imagine the ****storm. I do wish Sony and MS would be more open to mouse/KB support in their games for single player.
I'm pretty sure almost every Microsoft first-party game released since KB/mouse support was added has some sort of support. Not sure about Ori 2 or Crackdown 3. I'm actually very surprised that there seems to be no support for mice using the Adaptive Controller. You could easily build a keyboard using a fight-stick encoder and some Cherry switches, but mouse support is the real thing holding back console games if you're going for PC-style inputs.
 
Sorry for the Fisking, but I’m bored.

I think it's pretty obvious Sony got caught out by the XSX being so powerful this next gen. Nobody would design a system from scratch to have variable frequency. It's not a preferred thing, and is evidence of a reactionary step.
Really? Assuming I understand it correctly, it would seem to make sense to let devs specify whether they'd like more CPU or more GPU within the console's power budget, and it would allow for (slightly?) higher clocks than clocking to the lowest common denominator (max CPU + GPU usage). It sounds like a valid design philosophy (that maybe they've had to emphasize more than it's worth just because there's not much else to differentiate the two consoles), though it's undermined a bit by the XSX CPU's minimum speed being higher than the PS5's maximum. Hard to say if it's better or worse from a manufacturing cost POV. I'd say initial price and the first price drop will determine who got it right, but if Sony lead sales again, they might just hold price steady again (barring COVID-19 reactions).

MS' rumored baby XSX is slightly confusing things in that it's looking down the performance ladder. Who's to say Sony won't be behind in sales by the time of the next mid-gen refresh / early next gen, and so be forced to go bigger than MS in four years?

The PS4 and PS4 Pro were quite ordinary from a specs point of view. At the time the PS4 released, a 1.8 TFLOPS GPU was pretty average, and the Jaguar cores were weak.
The PS4, at launch, had 50% more GPU than the Xb1. The XSX will have ballpark 30% more GPU than the PS5.

Even when the PS4 Pro came out, it was quite average spec-wise.
Sony was killing two birds with one stone: PSVR and 4K. I would have liked more power, but I can see what they were going for, and it may have been for the best that they didn't have to cool anything more powerful. Hopefully the PS5 is much quieter.

Because they were using one of the excess CUs for their Tempest Engine
Do we know this for sure? It seems like it would defeat the purpose of disabling CUs for yields, or at least require further modifications to at least one other CU to mimic what seems to be some Tempest-specific functionality.

Sony come across as a bit shell-shocked, and their repeated mantra of SSD and 3D audio is telling to me.
Eh, it's not like they had much to differentiate themselves from MS hardware-wise. Sony are going into the next gen in a dominant position, so I always thought they'd wait for MS to show first. Then it's a question of maximizing ROI. Are Sony taking so long to show their cooling solution because this was their plan all along, or are they really eating into their engineering margins for a few more MHz?
 
Sorry for the Fisking, but I’m bored.


Really? Assuming I understand it correctly, it would seem to make sense to let devs specify whether they'd like more CPU or more GPU within the console's power budget, and it would allow for (slightly?) higher clocks than clocking to the lowest common denominator (max CPU + GPU usage). It sounds like a valid design philosophy (that maybe they've had to emphasize more than it's worth just because there's not much else to differentiate the two consoles), though it's undermined a bit by the XSX CPU's minimum speed being higher than the PS5's maximum. Hard to say if it's better or worse from a manufacturing cost POV. I'd say initial price and the first price drop will determine who got it right, but if Sony lead sales again, they might just hold price steady again (barring COVID-19 reactions).
I would be pretty sure developers would rather not have to deal with variable clock speeds. If they had to choose, I assume they would rather know up front what they were working with, and would rather avoid having to divert power from the CPU to the GPU and so on.
The fact that they do have variable clocks shows that it was an add-on.
We all know that power usage isn't linear with clock speed. The PS5 GPU at 2.23 GHz is going to be anything but efficient. It will affect yields in production and will require a more expensive cooling system as well.
We also know that initially they were going with a 2.0 GHz native clock speed.
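As a rough illustration of that non-linearity: dynamic power scales with roughly f x V^2, and voltage tends to rise with frequency, so a cubic rule of thumb is common. A generic model, not measured PS5 data:

```c
/* Rule-of-thumb dynamic power scaling: P ~ f * V^2, with V roughly
   tracking f, giving P ~ f^3. Generic model, not PS5 measurements. */
#include <math.h>
#include <stdio.h>

int main(void) {
    const double f0 = 2.00, f1 = 2.23;  /* GHz: rumored base vs shipped max */
    const double r = f1 / f0;
    printf("clock increase:        +%.1f%%\n", (r - 1.0) * 100.0);           /* ~11.5 */
    printf("power increase (~f^3): +%.1f%%\n", (pow(r, 3.0) - 1.0) * 100.0); /* ~39   */
    return 0;
}
```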

MS' rumored baby XSX is slightly confusing things in that it's looking down the performance ladder. Who's to say Sony won't be behind in sales by the time of the next mid-gen refresh / early next gen, and so be forced to go bigger than MS in four years?
I'm not sure there will be a refresh this time.
Last gen there was the jump to 4K TVs, and when they did the upgrades they were able to increase GPU power by 2.25x in the PS4 Pro and 4.25x in the X. In a few years' time they won't be able to release a new console with those kinds of increases.
The PS4, at launch, had 50% more GPU than the Xb1. The XSX will have ballpark 30% more GPU than the PS5.
I was referring to the power of the PS4 compared to PC GPUs at the time. The PS4 was quite average spec-wise compared to them.

Sony was killing two birds with one stone: PSVR and 4K. I would have liked more power, but I can see what they were going for, and it may have been for the best that they didn't have to cool anything more powerful. Hopefully the PS5 is much quieter.
Amen to that. My brother's PS4 Pro sounds like a jumbo jet when playing COD.
Do we know this for sure? It seems like it would defeat the purpose of disabling CUs for yields, or at least require further modifications to at least one other CU to mimic what seems to be some Tempest-specific functionality.
Yeah, it is.
"The Tempest Engine is a re-purposed GPU compute unit, inspired by the PS3's SPUs, with SIMD performance and bandwidth comparable to eight PS4 CPU cores combined."
I actually think it was a smart thing to do. Just imagine what else you could repurpose one to do.
Eh, it's not like they had much to differentiate themselves from MS hardware-wise. Sony are going into the next gen in a dominant position, so I always thought they'd wait for MS to show first. Then it's a question of maximizing ROI. Are Sony taking so long to show their cooling solution because this was their plan all along, or are they really eating into their engineering margins for a few more MHz?
I don't think they have sorted everything out just yet. They are probably still finalizing designs, etc. I mean, who shows their controller off before the console?
 
I think it's pretty obvious Sony got caught out by the XSX being so powerful this next gen. Nobody would design a system from scratch to have variable frequency. It's not a preferred thing, and is evidence of a reactionary step.
The PS4 and PS4 Pro were quite ordinary from a specs point of view. At the time the PS4 released, a 1.8 TFLOPS GPU was pretty average, and the Jaguar cores were weak. Even when the PS4 Pro came out, it was quite average spec-wise. It was only the fact that the Xbox stuffed up with Kinect that the PS looked good.
So a 36 CU, 9.2 TFLOPS GPU for the PS5 seems in line with the PS4 to me. However, MS went balls out to make sure they got the power crown again.
Sony got wind of the XSX power levels and worked out they could not go into the next gen with such a power difference, especially since it was single digit vs double digit (always looks worse).
Because they were using one of the excess CUs for their Tempest Engine, they couldn't enable the extra four CUs on the die to take it up to 40 CUs, so the only option was to push the clocks as far as they could.
Sony come across as a bit shell-shocked, and their repeated mantra of SSD and 3D audio is telling to me.
However, this shortfall in power has an upside for people like me who love tech: it is impressive to see a console clocked so high that nobody thought it possible. I am also really interested to see what cooling solution they have come up with. Cooling a smaller APU at such high clocks is going to take some smarts. I don't doubt Sony's ability here.
So while I have no doubt the XSX is a far superior machine in almost every way, I am actually far more interested to learn more about the PS5 than the XSX now.
It's an 18% difference in theoretical compute; mitigate some of that with the PS5's faster clock and you might end up with only a 10-15% difference. Not to mention the split pool of memory might get in the way of reaching maximum throughput on the X, which might further drag the difference down. The SSD might offer better visual output in specific ways on the PS5 too.
Truth is there will be a tiny, tiny difference when all is said and done; the hardware power difference is likely nowhere near as drastic as you mentioned.
 
It's an 18% difference in theoretical compute; mitigate some of that with the PS5's faster clock and you might end up with only a 10-15% difference. Not to mention the split pool of memory might get in the way of reaching maximum throughput on the X, which might further drag the difference down. The SSD might offer better visual output in specific ways on the PS5 too.
Truth is there will be a tiny, tiny difference when all is said and done; the hardware power difference is likely nowhere near as drastic as you mentioned.
There could be a 15% difference anyway, depending on what you use as a baseline: 10.28 is roughly 15% less than 12.16, while 12.16 is roughly 18% more than 10.28. The real wild card with the PS5 is the variable clocks, because we don't know the range. 2.23 GHz is the max, and 82% of that roughly matches the Series X GPU clock, so they have to stay above that to maintain any sort of clock advantage over the Series X. Also, we don't know how ray tracing is affected by frequency versus CU count, so one console could have an advantage over the other there. We also don't know how many ROPs each has. If they are equal, the PS5 could have a pretty significant fillrate advantage. If they have roughly equal fillrate at max clocks, then the PS5 will fall behind there when its clocks drop.
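The baseline point in numbers, for anyone who wants to check:

```c
/* Same gap, two baselines: PS5 relative to XSX and vice versa, plus the
   clock floor at which PS5 roughly matches the Series X GPU clock. */
#include <stdio.h>

int main(void) {
    const double ps5 = 10.28, xsx = 12.16;  /* TFLOPS, as quoted above */
    printf("PS5 below XSX: %.1f%%\n", (1.0 - ps5 / xsx) * 100.0);  /* ~15.5 */
    printf("XSX above PS5: %.1f%%\n", (xsx / ps5 - 1.0) * 100.0);  /* ~18.3 */
    printf("82%% of 2.23 GHz = %.2f GHz (XSX GPU: 1.825 GHz)\n", 0.82 * 2.23);
    return 0;
}
```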

I don't think we'll see a dramatic difference in the first year or two, though. Developers aren't going to fully leverage the PS5's storage speeds or the Series X's FLOPS advantage, because we will mostly be getting cross-generational games: if not because they are targeting the older platforms, then because those games will have been developed with current-generation engines, tools, and design sensibilities.
 
I think it's pretty obvious Sony got caught out by the XSX being so powerful this next gen. Nobody would design a system from scratch to have variable frequency. It's not a preferred thing, and is evidence of a reactionary step.
The PS4 and PS4 Pro were quite ordinary from a specs point of view. At the time the PS4 released, a 1.8 TFLOPS GPU was pretty average, and the Jaguar cores were weak. Even when the PS4 Pro came out, it was quite average spec-wise. It was only the fact that the Xbox stuffed up with Kinect that the PS looked good.
So a 36 CU, 9.2 TFLOPS GPU for the PS5 seems in line with the PS4 to me. However, MS went balls out to make sure they got the power crown again.
Sony got wind of the XSX power levels and worked out they could not go into the next gen with such a power difference, especially since it was single digit vs double digit (always looks worse).
Because they were using one of the excess CUs for their Tempest Engine, they couldn't enable the extra four CUs on the die to take it up to 40 CUs, so the only option was to push the clocks as far as they could.
Sony come across as a bit shell-shocked, and their repeated mantra of SSD and 3D audio is telling to me.
However, this shortfall in power has an upside for people like me who love tech: it is impressive to see a console clocked so high that nobody thought it possible. I am also really interested to see what cooling solution they have come up with. Cooling a smaller APU at such high clocks is going to take some smarts. I don't doubt Sony's ability here.
So while I have no doubt the XSX is a far superior machine in almost every way, I am actually far more interested to learn more about the PS5 than the XSX now.
The only conclusions you can draw about the PS5 at this point are that Sony really seems to be placing an emphasis on the SSD, backward compatibility with the PS4, and next-generation audio.
 
It's an 18% difference in theoretical compute; mitigate some of that with the PS5's faster clock and you might end up with only a 10-15% difference. Not to mention the split pool of memory might get in the way of reaching maximum throughput on the X, which might further drag the difference down. The SSD might offer better visual output in specific ways on the PS5 too.
Truth is there will be a tiny, tiny difference when all is said and done; the hardware power difference is likely nowhere near as drastic as you mentioned.
I totally agree that there will be little perceivable difference between the two consoles side by side. The developer will matter more than the hardware this gen.
My point isn't that 12.1 TFLOPS will be a big difference from 10.28 TFLOPS. My point is that Sony initially aimed lower than MS did. Sony was originally running with a 9.2 TFLOPS GPU and then made the decision to ramp that up to a number closer to the XSX's. They have been reacting to MS, and it shows.
 
The only conclusions you can draw about the PS5 at this point are that Sony really seems to be placing an emphasis on the SSD, backward compatibility with the PS4, and next-generation audio.
They are certainly pushing the SSD and 3D audio, but the fact that they have variable clocks also shows they tried to up the power of their console beyond what was originally planned.
The end result is a better console for PS owners than they might otherwise have ended up with.
 