PS5 Pro *spawn

That seems to be talking about 'colour resolution' and wouldn't be applicable to upscaling 2D images.

Yeah, and skimming it I saw no mention of depth and motion vector buffers, or temporal re-projection or any of the basic elements of DLSS, XeSS, FSR2.
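For anyone skimming along, these are the basic per-frame inputs those techniques consume. A minimal C++ sketch with illustrative names, not any vendor's actual API:

Code:
// Colour alone isn't enough for temporal upscaling: depth and motion
// vectors are what let the previous frames' history be re-projected and
// validated against the current frame.
struct TemporalUpscaleInputs {
    const float* colour;           // current low-res colour buffer
    const float* depth;            // depth buffer, used for disocclusion tests
    const float* motionVectors;    // per-pixel screen-space motion, for re-projection
    const float* historyColour;    // accumulated output from previous frames
    float        jitterX, jitterY; // sub-pixel camera jitter applied this frame
    int          renderWidth, renderHeight;  // internal render resolution
    int          outputWidth, outputHeight;  // upscaled target resolution
};

Nothing like that appears in the document being discussed.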

Because he said the difference in compute power is low. If Sony makes the effort to hand-tune the generated code for dual issue, rather than letting the compiler do the job for this part, then the dual-issue performance will be there. And I am sure they will do it. This is a software problem, not a hardware problem.

You think Sony is going to hand-tune shader code for all the PS5 Pro versions of PS5 games?

My personal expectation is generally that if a new hardware feature is for generalised use, and it takes up a very small amount of additional silicon, the performance improvements will be rather modest. Generalised performance, area, and speedup are pretty much always in tension.
 
No, this is not per game. Basically, there are certain situations where the compiler doesn't know it can dual issue. It would be hand-tuned inside the SDK/API, not per game. They won't replace game developer code; they'll improve the PSSL compiler for certain instructions and situations.

If it had to be done per game, it would be impossible.
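To illustrate the kind of case I mean, a small C++ sketch, with pointer aliasing standing in for what a shader compiler faces (illustrative only):

Code:
// Through plain pointers the compiler must generally assume 'a' and 'b'
// might alias, so it can't freely reorder or co-issue the two writes.
void blend(float* a, float* b, const float* src, int n) {
    for (int i = 0; i < n; ++i) {
        a[i] += src[i];   // possibly overlaps the write below...
        b[i] *= 0.5f;     // ...so pairing them isn't provably safe
    }
}

// With a no-alias guarantee the two ops per iteration are provably
// independent and become candidates for dual issue.
void blend_tuned(float* __restrict a, float* __restrict b,
                 const float* __restrict src, int n) {
    for (int i = 0; i < n; ++i) {
        a[i] += src[i];
        b[i] *= 0.5f;
    }
}

A human, or a hand-tuned compiler path, can supply that guarantee; a generic compiler can't invent it.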
 
I don't have your faith in the process. The hardware requires assembly code for the shaders. These shaders aren't in the SDK but in the game, authored by the devs. They use a high level shader language (or even worse, graphical interfaces). Those high-level shaders need to be reprogrammed, or at least the compiled code tweaked, in low-level GPU assembler to utilise the dual issue. Who does that any more? Maybe some Sony 1st parties will, but I can't see any magic solution for Sony to provide a shader library for devs to use. Even if they did, games would have to already be using this library instead of authoring their own shaders for this to be a drop-in replacement for PS5 games.

Furthermore, reading the article you linked to, dual issue is very limited in application:

A RDNA 3 VOPD instruction is encoded in eight bytes, and supports two sources and one destination for each of the two operations. That excludes operations that require three inputs, like the generic fused multiply add operation. Dual issue opportunities are further limited by available execution units, data dependencies, and register file bandwidth.
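To make that encoding limit concrete, a toy C++ sketch (the v_dual_* names below are from AMD's published RDNA 3 ISA; whether a compiler actually emits them for code like this is exactly the open question):

Code:
// Each half of a VOPD pair gets two sources and one destination.
float pairable(float a, float b, float c, float d) {
    float x = a + b;   // 2 sources, 1 dest: fits one slot (v_dual_add_f32)
    float y = c * d;   // 2 sources, 1 dest: fits the other (v_dual_mul_f32)
    return x + y;
}

// A generic FMA reads three distinct inputs, so it can't be encoded in a
// VOPD slot. Only the accumulate form (dst = dst + a*b, v_dual_fmac_f32)
// fits, because the destination doubles as the third operand.
float fma_like(float a, float b, float c) {
    return a * b + c;
}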
The actual benefits are shown in the red versus the pink if I'm reading this right...

[Image: per-instruction throughput chart from the linked article, RDNA 3 vs RDNA 2]

You get double the throughput on FP32 add and 50% more FP32 mad(). The rest is pretty similar, with some ops being slower than on RDNA2.

In short, there's only a little speed boost over raw compute power even if dual issue can be exploited in the most ideal way in every game without developer involvement. We shouldn't be looking at 'dual issue' as something significant and should just treat the raw compute power as a fair indicator of relative performance.

Edit: I see you are referencing PSSL as the shader authoring language and, as such, think Sony could do something with that. The only advantage PSSL has in the titles where it's used is that Sony handles the compiler. The report is that compilers aren't capable and a human eye is needed. Again, this is per shader. Even if the shader is authored in PSSL, the compiler won't handle dual issue and a person will need to be involved. As every game has its own shaders, trillions of them, short of a standardised library of shaders being used there's no way the human involvement in one shader in one game can be shared with many games.
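As a concrete example of what that per-shader human involvement looks like, here's the same trivial operation before and after hand restructuring (a C++ stand-in for shader code; whether a given compiler needs the help, and whether the reshuffle actually pays off, is exactly what needs a human eye per shader):

Code:
// One op per iteration: nothing independent to pair.
void scale(float* v, float s, int n) {
    for (int i = 0; i < n; ++i)
        v[i] *= s;
}

// Two elements per iteration: each loop body now holds adjacent,
// independent FP32 ops that a dual-issue unit could pair.
void scale_tuned(float* v, float s, int n) {
    int i = 0;
    for (; i + 1 < n; i += 2) {
        v[i]     *= s;   // independent of the line below...
        v[i + 1] *= s;   // ...so the pair is a dual-issue candidate
    }
    if (i < n) v[i] *= s;   // leftover tail element
}

Multiply that by hundreds of shaders per game, across a whole catalogue, and the scale of the problem is clear.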
 
Kepler already leaked that RDNA 3.5 will get dual-issue improvements compared to RDNA 3, and that except for a few things like the scheduler, Viola should be using a good deal of RDNA 4 tech improvements.
 
The graph you're using is from OpenCL, where the compiler is unable to find dual-issue opportunities... We're back to square one. And I know there are other bottlenecks besides the TFLOPs. If that weren't the case, the Xbox Series X would beat the PS5 in all games...

Unfortunately, testing through OpenCL is difficult because we’re relying on the compiler to find dual issue opportunities.

At the beginning of the article:

RDNA 3 represents the third iteration of AMD’s RDNA architecture, which replaces GCN in their consumer graphics lineup. At a high level, RDNA 3 aims to massively scale up compared to RDNA 2. The cache setup is tweaked at all levels to deliver increased bandwidth. To scale compute throughput beyond just adding more WGPs, AMD implemented dual issue capability for a subset of common instructions.
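And a tiny example of what "relying on the compiler to find dual issue opportunities" means in practice: the shape of the code decides whether there is anything to pair at all (C++ as a stand-in for kernel code, illustrative only):

Code:
// One serial dependency chain: every add needs the previous result, so
// there is never a second independent op to co-issue.
float sum_serial(const float* v, int n) {
    float acc = 0.0f;
    for (int i = 0; i < n; ++i)
        acc += v[i];
    return acc;
}

// Two independent accumulator chains: the two adds per iteration have no
// dependency on each other, giving dual issue something to work with.
float sum_paired(const float* v, int n) {
    float acc0 = 0.0f, acc1 = 0.0f;
    int i = 0;
    for (; i + 1 < n; i += 2) {
        acc0 += v[i];
        acc1 += v[i + 1];
    }
    if (i < n) acc0 += v[i];   // leftover tail element
    return acc0 + acc1;
}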

For the library, I am sure it is the future, at least depending on the team; leave lower-level access only to the best teams. For example, if you read the screenshot, the GPU AI capability is only usable through a library... Likewise, the I/O capability of the PS5 is only available through an API... And I think it is the same for the Tempest engine.

[Image: screenshot of leaked PS5 Pro documentation]


EDIT: I'm not saying the 33.5 Tflops number is accurate at all... but if it can give a push to better compute performance, it could be good.
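For reference, this is the arithmetic that produces a 33.5 figure under this thread's rumoured specs (60 CUs at 2.18 GHz, neither confirmed):

Code:
#include <cstdio>

int main() {
    const double cus = 60, lanes = 64, ghz = 2.18;
    double base = cus * lanes * 2 /* FLOPs per FMA */ * ghz / 1000.0;
    double dual = base * 2;   // dual issue doubles the theoretical peak
    std::printf("base %.1f TF, dual-issue peak %.1f TF\n", base, dual);
    // prints: base 16.7 TF, dual-issue peak 33.5 TF
}

So the headline number is just the paper peak with the dual-issue doubling baked in.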
 

Insider Gaming, who was also shared documentation from the developer portal under the condition that it's not shared publicly or privately, can also confirm that devkits have been available to first-party studios since September 2023, third-party since January 2024, and from Spring 2024 testkits will also be available, which will be identical to the final product.

EDIT: For frequency, this is a continuous boost with variable frequency depending on the part of the GPU in use: only 2180 MHz for FP32/FP16, and 2450 MHz for INT8.
 
Will this be digital-only at $499 thanks to the 5nm chip? You'd need to buy a disc drive separately...
 
The graph you're using is from OpenCL, where the compiler is unable to find dual-issue opportunities... We're back to square one.
Sure, but there are still plenty of caveats on when dual issue can be used, plus it needs a developer to hand-tune every shader.
For the library, I am sure it is the future, at least depending on the team; leave lower-level access only to the best teams. For example, if you read the screenshot, the GPU AI capability is only usable through a library... Likewise, the I/O capability of the PS5 is only available through an API... And I think it is the same for the Tempest engine.
But devs are using the system libraries to access those. They aren't for GPU shaders. Those are authored by the developers, hundreds of shaders per game. They are written, compiled and saved. After compilation, you'd need someone to sift through and tweak them all. Or you need the compiler to compile them better. Or you need some universal system-level shaders that all the devs use instead of writing their own.

EDIT: I'm not saying the 33.5 Tflops number is accurate at all... but if it can give a push to better compute performance, it could be good.
Sure, any gain is nice, but you were raising this Dual Issue as a significant factor:

Because he said the difference in compute power is low.
The impact of Dual Issue will be low, so comparisons of rumoured PS5 Pro with PS5 compute are pretty reasonable. The difference in compute power won't be increased dramatically over the base CU*MHz rate due to dual issue.

Comparing XBSX to PS5, we don't factor in some extra virtual TFlops for XBSX because of VRS. VRS's impact is minimal. Likewise, 80% more TFlops (or whatever it is) on paper should be considered 80% without hard evidence showing otherwise.
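To put rough numbers on that, counting both machines the same way with no dual-issue multiplier (PS5 figures are official; the Pro's 60 CUs and 2.18 GHz are this thread's rumours):

Code:
#include <cstdio>

int main() {
    double ps5 = 36 * 64 * 2 * 2.23 / 1000.0;   // ~10.3 TF
    double pro = 60 * 64 * 2 * 2.18 / 1000.0;   // ~16.7 TF
    std::printf("PS5 %.1f TF, Pro %.1f TF -> %.0f%% more\n",
                ps5, pro, (pro / ps5 - 1) * 100);
}

On that like-for-like basis the gap is around 60-odd percent, not the doubled-up paper figure.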
 
Even worse than I thought. Not even 50% faster. That PSSR better be at least as good as DLSS.
And think of how much less interesting the PS6 will be on top of it.

PS5 > PS6 would have been a nice jump. Pro to PS6 won't be as impressive. Though I suppose by then it will all be "AI this and AI that"...
 
Even worse than I thought. Not even 50% faster. That PSSR better be at least as good as DLSS.

If it is, then the 'end result' would be a lot more than 50% though - performance modes could end up looking better than existing Quality modes while delivering over 60fps. It all comes down to the price as well; if there was the possibility of an 80% raster improvement but it ended up costing $100 more, that doesn't mean much.


Don't get Tom's concern of "no major first party games" though - it's a Pro model, you're not going to get extremely tailored titles for this, the point is to make the upgrades easy for devs. I'm sure there will be plenty of Pro patches for existing games at launch, better upscaling, better RT effects. That's all you can expect, but the difference may be significant depending on how good the reconstruction is.
 
Repeating PS4 Pro's mistake by not giving a decent increase in RAM capacity is a poor choice by Sony.

Especially as they say the upscaling tech uses 250MB of RAM, and then the increased RT performance is likely to create a need for higher-quality BVHs, which also require more RAM.
 
2180 MHz for FP32/FP16, and 2450 MHz for INT8.
It seems 2180 MHz is the typical frequency for a dev kit, just like the 9.2TF @ 2GHz of the PS5 dev kit.

The retail PS5 has a variable frequency of 2.23 GHz max. It's likely the PS5 Pro also has a variable frequency higher than the dev kit's 2.18 GHz. 2450 MHz may be the maximum of that variable range.
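A quick illustration of how much that final clock matters for the paper specs, using this thread's rumoured 60 CUs (nothing here confirmed):

Code:
#include <cstdio>

int main() {
    const double cus = 60, lanes = 64;
    const double clocks[] = {2.18, 2.23, 2.45};   // dev kit, PS5-style retail, rumoured max
    for (double ghz : clocks)
        std::printf("%.2f GHz -> %.1f TF (no dual-issue doubling)\n",
                    ghz, cus * lanes * 2 * ghz / 1000.0);
}

Every 100 MHz on the final clock is worth roughly 0.77 TF at that CU count.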
 
Don't get what "no major first party games" means though - it's a Pro model, you're not going to get extremely tailored titles for this, the point is to make the upgrades easy for devs. I'm sure there will be plenty of Pro patches for existing games at launch, better upscaling, better RT effects. That's all you can expect, but the difference may be significant depending on how good the reconstruction is.
It simply means that there's no major release planned to help push/market the console's features. It doesn't need games specifically tailored to it... just something new, never seen before, running on it. It's not as effective when you basically show what it can do for a bunch of old games. People want to see new games. GTA6 will be a huge driver for the Pro anyway.
 
Repeating PS4 Pro's mistake by not giving a decent increase in RAM capacity is a poor choice by Sony.

Especially as they say the upscaling tech uses 250MB of RAM, and then the increased RT performance is likely to create a need for higher-quality BVHs, which also require more RAM.

Maybe the quality won't increase that much, but the framerates will.
 
Repeating PS4 Pro's mistake by not giving a decent increase in RAM capacity is a poor choice by Sony.

Especially as they say the upscaling tech uses 250MB of RAM, and then the increased RT performance is likely to create a need for higher-quality BVHs, which also require more RAM.

Potentially yeah, but there's no way around that other than to widen the APU's bus - and then you've likely overshot your price target. I always felt the most 'out there' rumour was the 320-bit bus variant because of this.
 
If it is, then the 'end result' would be a lot more than 50% though - performance modes could end up looking better than existing Quality modes while delivering over 60fps. It all comes down to the price as well; if there was the possibility of an 80% raster improvement but it ended up costing $100 more, that doesn't mean much.
Except there is no guarantee most devs will use it. We saw what happened with PS4 Pro and its ID buffer/checkerboard combo. I expect we will see a very large number of games just up the FSR 2 quality setting.
 