PS5 Pro *spawn

Wait, the PS5 Pro supports path tracing?
Technically, the base PS5 can run path tracing in something like Cyberpunk 2077, at around 7 fps. So if the Pro has major advancements in ray tracing acceleration, I could see some games managing path tracing at 30 fps without looking like mud. Minecraft is another good candidate. But I'd wait for the PS6 generation for good ray tracing on console.
 
Unfortunately, there is no console version of Minecraft that does ray tracing 😅
 
It was shown four years ago on the Series X; what are they doing? 🙄
Was it really? I'm suspicious of a lot of RT presentations, e.g. Nvidia's racer demo: the most amazing-looking thing ever, apparently running on GPUs back then, yet where is it? And other games thrashing current GPUs aren't looking that good.

I feel there's been some optimistic representation going on. Maybe gameplay was recorded at 8 fps and then re-run one frame at a time to create the videos we saw, with the devs hoping they'd be able to optimise the framerate to something playable and then finding they couldn't, hence no releases.
 
Digital Foundry saw it before the Series X launch, and it was running at 1080p at 45 fps, if I remember correctly. I'm sure it could be done today.
 
Yeah, it will be an interesting next gen, because Xbox and Sony both want their consoles to be the most powerful.
 
Honestly, the power race is over. Microsoft doesn't want to lose money on hardware anymore, and Sony isn't even reducing the cost of its console three and a half years in.
True. All I want to see in the future is games that look just like cutscenes, or at least really close to it.
 
What does the PS5 Pro being close to a 7700 XT mean for graphics?

Largely, we're going to see straightforward resolution increases. We saw that last generation with the PS4 and PS4 Pro, where many developers didn't bother with e.g. checkerboard rendering and simply rendered their 1080p (2,073,600 pixels) game at 1440p (3,686,400 pixels).
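
As a quick sanity check on those pixel counts (a trivial sketch; nothing here beyond the two resolutions named above):

```python
# Pixel counts for the two resolutions mentioned above.
p1080 = 1920 * 1080   # 2,073,600 pixels
p1440 = 2560 * 1440   # 3,686,400 pixels
print(p1440 / p1080)  # ~1.78: 1440p pushes roughly 78% more pixels than 1080p
```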

PSSR should function akin to Nvidia's DLSS, although we can't yet be certain how similar in quality they will be. Nvidia's will likely be superior, given the maturity of its tech, but the size of the gap is unknowable.

The promising thing here is that multiplatform games which use DLSS on PC will already have the requisite tech, such as motion vectors, incorporated, so PSSR should be fairly easy for developers to adopt. We may also see frame interpolation taking e.g. 60 fps to 120 fps.

There are also the rumoured architectural improvements to ray tracing, which should improve the quality of reflections and global illumination.

The last I heard of the rumoured specs, we're looking at a 60 CU, RDNA 3.5-ish GPU, although clock speeds seem to be a bit up in the air, with some suggestion that they may be a touch lower. I don't personally buy that, but we'll see; I anticipate either the same clocks or a very slight bump.

There's also the matter of RDNA 3's dual-issue SIMD feature, which can theoretically double the performance of each CU. But that's purely theoretical; in reality, I think the improvement comes to something more like ~40% more performance per CU. Someone here smarter than I am can probably offer a more thorough explanation and a more accurate figure, though.

So in terms of theoretical FP32 throughput (FLOPs per CU per clock × CUs × clock in GHz):
PS5 = 128 × 36 × 2.23 = 10,275.84 GFLOPS ≈ 10.28 TFLOPS
PS5 Pro = 128 × 60 × 2.23 = 17,126.4 GFLOPS ≈ 17.13 TFLOPS
PS5 Pro incl. dual-issue = 256 × 60 × 2.23 = 34,252.8 GFLOPS ≈ 34.25 TFLOPS
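
The same arithmetic as a small sketch (the Pro's 60 CU count and 2.23 GHz clock are rumoured, not confirmed; 128 FLOPs/CU/clock is 64 shader ALUs × 2 FLOPs per FMA):

```python
# Theoretical FP32 throughput: FLOPs/CU/clock x CU count x clock (GHz) -> TFLOPS.
# The Pro's CU count and clock are rumoured figures, not confirmed specs.
def tflops(flops_per_cu_clock: int, cus: int, ghz: float) -> float:
    return flops_per_cu_clock * cus * ghz / 1000  # GFLOPS -> TFLOPS

print(tflops(128, 36, 2.23))  # PS5:                ~10.28
print(tflops(128, 60, 2.23))  # PS5 Pro:            ~17.13
print(tflops(256, 60, 2.23))  # PS5 Pro dual-issue: ~34.25 (theoretical ceiling)
```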

TLDR: higher resolutions, higher perceived framerates, and higher quality ray tracing. Like a GPU upgrade in a PC.
 
Maybe, but that would be a huge mistake. People don't care about resolution anymore; they care about the quality of what they're seeing in their games when in motion. The resolution metric is now worthless in a world where most games use FSR 2 or DLSS, and where perceived quality can vary a lot between the two (and with the settings those techs use, including whether frame generation is enabled).
 
The sum total of RDNA 3 improvements was 5%.
 
I agree it would be a mistake; I just think it's the most likely outcome, based on the PS4 to PS4 Pro increases we saw last time. I'd love to be proven wrong, though: rendering at the same resolution and using PSSR to upscale while bumping up other settings would be great. Fingers crossed!


Isn't that due in some part to RDNA 3's scheduler? IIRC, when Nvidia made the switch to dual-issue SIMD, it fared better. On a fixed platform, I think dual-issue stands a greater chance of being utilised in spite of any limitations of RDNA 3's scheduler.

So although 200% of current CU performance isn't realistically going to happen, I think 105% (i.e. just a 5% gain) is pretty unlikely too.
 
Dual-issue can only occur in very specific scenarios. I'm sure the scheduler plays a substantial role, but I'd be surprised if there weren't a myriad of other factors as well; unfortunately, AMD has not detailed these limitations. The PS5 Pro will not have a large market share and therefore won't see much benefit from being a fixed platform. Look at how developers utilized the PS4 Pro, and keep in mind this is only 40% of the performance increase that was. That Sony's own internal documents reportedly claim a 40-50% improvement in raw performance from almost 250% more TFLOPS says it all.
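
Putting numbers on that gap, using the theoretical figures from earlier in the thread (the 40-50% figure is from reportedly leaked documents, so treat it as an assumption):

```python
# Theoretical vs claimed uplift; the 40-50% rendering improvement is a
# reportedly leaked claim, not a confirmed spec.
ps5_tflops = 10.28   # base PS5, from the earlier post
pro_tflops = 34.25   # PS5 Pro incl. dual-issue, from the earlier post

theoretical_gain = pro_tflops / ps5_tflops - 1
print(f"{theoretical_gain:.0%} more theoretical FLOPS")  # ~233%, i.e. "almost 250%"
print("vs a claimed 40-50% real rendering improvement")
```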
 
Wave64 mode already utilizes dual-issue in RDNA 3 by default, reaching the maximum FLOPS rate. Some extra performance can be squeezed from dual-issue in Wave32 mode in cases of register pressure; something like 10%.
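
A toy model of that description (my own illustration of the claim above, not AMD documentation; the 10% figure is the poster's estimate):

```python
# Toy model: wave64 keeps both halves of the dual-issue SIMD fed by default,
# while wave32 starts at half rate and claws back whatever fraction of
# instructions the compiler manages to pair up for dual-issue.
def effective_tflops(peak: float, wave_mode: str, dual_issue_fraction: float = 0.0) -> float:
    if wave_mode == "wave64":
        return peak                                  # already at the max FLOPS rate
    return peak * 0.5 * (1.0 + dual_issue_fraction)  # wave32

peak = 34.25  # PS5 Pro theoretical ceiling from the earlier post
print(effective_tflops(peak, "wave64"))        # 34.25
print(effective_tflops(peak, "wave32"))        # ~17.13
print(effective_tflops(peak, "wave32", 0.10))  # ~18.84: the "something like 10%"
```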
 
You're forgetting that *hardware performance* is also a function of fixed-function unit throughput (rasterizers, depth and blending state machines, texture samplers) and of the memory system (caches, memory bandwidth, and traffic compression). Increasing raw compute power alone, out of proportion to those other major parameters, means you won't see very high scaling unless the graphics workload in question is often compute-bound.

The PS5 isn't too far off the Series X, despite both featuring the same number of rasterizers/ROPs, because the PS5 drives them at higher clocks, which is advantageous for specific rendering passes (shadows, G-buffer fill, etc.).
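
A crude "weakest link" sketch of that scaling argument (my own illustration with made-up example numbers; each fraction of frame time scales only with the resource it leans on):

```python
# Amdahl-style blend: frame time is split across compute-, bandwidth-, and
# fixed-function-bound work, and each part scales only with its own resource.
def expected_speedup(compute_x: float, bandwidth_x: float, fixed_x: float,
                     compute_share: float, bandwidth_share: float) -> float:
    fixed_share = 1.0 - compute_share - bandwidth_share
    new_time = (compute_share / compute_x
                + bandwidth_share / bandwidth_x
                + fixed_share / fixed_x)
    return 1.0 / new_time

# Hypothetical PS5 -> Pro example: ~1.67x compute (36 -> 60 CUs), a modest
# bandwidth bump, unchanged fixed-function rate at the same clocks.
print(expected_speedup(1.67, 1.3, 1.0, compute_share=0.5, bandwidth_share=0.3))  # ~1.37x
```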
 
I largely agree in terms of Pro utilisation mirroring that of the PS4 Pro. Quick and dirty is going to be the most prevalent approach, by far.

Which internal documents are you referring to for that 40-50% improvement in raw performance? I was under the impression that we were looking at a jump from 36 active CUs to 60, so even with quick-and-dirty ports I was expecting ~1.67x (60/36) the raw performance of the base PS5.

All good points. The reason I cited a ~40% per-CU increase is that it's the best-case, real-world figure I recall being brought up in either a forum or an article discussing (I think) Nvidia's implementation.

I'm not familiar enough with dual-issue SIMD to speak authoritatively on the matter. From the bit of reading I've done, Nvidia's implementation fares better on PC, but there's speculation that shaders tailored to AMD's would improve performance on AMD cards. Naturally, that won't happen in the PC space, because Nvidia is the big dog there.

Certainly, doubling performance per CU won't happen, although it'll probably end up being marketed that way. But I'm also not terribly convinced that current PC data paints the full picture. More than the 5-10% improvement mentioned in the posts above seems likely (albeit not for all games/engines), but ~40% seems to be the absolute ceiling.
 