Do you think there will be a mid-gen refresh console from Sony and Microsoft?

I don't disagree, but I would add that consoles have often punched above their weight in terms of what performance the hardware delivers because devs can hone for specific configurations.

I agree that they can, but this seems to be more a case of games that are heavily optimised for a single console and then ported to PC in a less optimised state (which can also happen in the other direction, albeit less often), rather than a fundamental difference in, say, API or system overheads, as may have been the case in previous generations. More specifically, this seems to happen with PS5-to-PC ports in a small number of cases. I'm not sure we've ever seen the XBSX punching above its weight relative to the PC in the same way? Generally the PS5 does perform in the 2070S-2080 region, as per its spec, when settings are properly matched, but there are obvious exceptions like TLOU and U4 that were heavily optimised for the console, where it goes further.

However, the important point for this thread, I think, is that the Pro consoles are generally unlikely to enjoy those optimisation advantages to any serious degree, because all the heavy optimisation goes into the base consoles, with the extra power of the Pro units used for extra resolution/frame rate and PC-like higher-fidelity effects. Particularly if the Pro units are on a newer/different architecture with a different RT implementation to the base units. And even more so because there are likely no true console exclusives anymore; even the big Sony games are really only timed exclusives on the console and so will likely be developed from day one with the PC in mind, perhaps to the detriment of the console's optimisation. This may also be why we haven't seen the XBSX punching above its weight like the PS5 has in Sony titles that were originally designed as true PlayStation exclusives.
 
Any PS5 Pro is likely going to exist to ensure that all games can hit a 60fps target without dialling the graphics way back. The Digital Foundry guys, in several weekly podcasts over the past couple of months, have aired opinions that more games on console will target 30fps but at the same time see there is no need for a Pro console.

This is a somewhat baffling dissociation: saying consoles aren't powerful enough, but that there is no need for more powerful consoles. Folks who like consoles but, having got used to 60fps (again), don't want to revert to 30fps no matter how good the per-object motion blur is, will be the market.

Higher framerates and stronger RT? Those are pretty much the only stretch points for Series X and PS5 at the moment. Nobody is chasing 8K displays, despite what Sony misleadingly print on the box. :runaway:
DF have been consistent in saying 30fps with good frame pacing is acceptable and a realistic compromise for console hardware that remains static for so long (and 40fps in a 120Hz container is a nice new compromise for some games). They're also fairly consistent in saying they prefer discrete console generations, but given their tech focus they may be more interested in seeing new tech / engines.

On PC, ray tracing and path tracing changed the equation; there are dozens of PC games with next-gen graphics relying on these techniques now.
"There are dozens of us!" (Sorry, couldn't help it. I think RT is exciting enough to be every GPU's main focus.)

PS4 Pro was geared toward the new 4K standard and coincided nicely with PSVR. I guess we'll find out soon enough whether PS5 Pro is aimed more toward shoring up "4K"/PSVR2 performance or RT specifically. While better general perf would be nice, improving RT performance would be more forward-looking, as I'm not sure the PS5 is seen as lacking vs the XSX in this era of "up to 4K" DRS the way the X1 was vs the PS4, with their more discrete, generally fixed 900p vs 1080p resolutions. I'd still like to know if memory bandwidth / MT/s is as limiting as die size for any Pro console.
 
The point is probably similar to the PS4 Pro: have an option so people don't switch to PC as the generation goes on.
And did that work? People still seem to go where they want at the same pace. Hardware isn't generally what makes consoles attractive; the software is.
 
You're missing the point: it will be RDNA3 (and maybe RT closer to RDNA4). The RX 7900 XT has 84 CUs / 5376 SPs / 51 TF; keeping the same clocks, a PS5 Pro will be around 35 TF. That should be around RTX 4070 / RX 6800 XT performance, close to double in perf.

Would it be RDNA3 though? I could see some elements of RDNA3 and maybe even RDNA 4 (and don't forget RDNA 2) making their way in, but would Sony really want to move away from RDNA1 ROPs and geometry engine when games may rely on the former and gain little from the latter without additional work?

Doubling the performance of the PS5 would need more bandwidth than just going to 18 Gbps RAM would get you on a 256-bit bus, so you'd probably need at least something like a small Infinity Cache there; maybe 64 MB to keep die size under control?
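To put rough numbers on the bandwidth argument, here's a quick sketch in Python (assuming the launch PS5's 14 Gbps GDDR6 on a 256-bit bus; the 18 Gbps figure is the speculated upgrade, not a confirmed spec):

```python
# Peak GDDR bandwidth = (bus width in bits / 8) bytes per transfer * per-pin rate.
def bandwidth_gbs(bus_width_bits, gbps_per_pin):
    """Peak memory bandwidth in GB/s for a GDDR bus."""
    return bus_width_bits / 8 * gbps_per_pin

base = bandwidth_gbs(256, 14)  # launch PS5: 448.0 GB/s
fast = bandwidth_gbs(256, 18)  # same bus with 18 Gbps chips: 576.0 GB/s
print(base, fast, round(fast / base, 2))  # only a ~1.29x uplift, well short of 2x
```

Hence the argument for on-die cache: faster chips alone on the same 256-bit bus get you nowhere near double the bandwidth.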
 
You're missing the point: it will be RDNA3 (and maybe RT closer to RDNA4). The RX 7900 XT has 84 CUs / 5376 SPs / 51 TF; keeping the same clocks, a PS5 Pro will be around 35 TF. That should be around RTX 4070 / RX 6800 XT performance, close to double in perf.
I don't actually think there's a large difference between RDNA3 and RDNA2 in terms of changes to the compute units; this has been discussed heavily on the PC side of the forum here.

I think the only improvement here will be on the RT side in terms of architecture change. I'm not expecting much of anything else unless they implement Infinity Cache.
 
And did that work? People still seem to be going to where they want at the same pace. Hardware isn't generally what makes consoles attractive but the software

Bear in mind that was literally Sony's stated rationale - at least in part - for the introduction of the PS4 Pro.

PlayStation president Andrew House said:
I saw some data that really influenced me. It suggested that there's a dip mid-console lifecycle where the players who want the very best graphical experience will start to migrate to PC, because that's obviously where it's to be had. We wanted to keep those people within our ecosystem by giving them the very best and very highest [performance quality]. So the net result of those thoughts was PlayStation 4 Pro - and, by and large, a graphical approach to game improvement.

I don't necessarily believe that was the main driving factor myself, either for Sony or consumers. I think the two markets are still too distinct in userbase for significant portions of either to consider 'switching' just because they want higher resolutions/framerates, though there is a small segment this would likely impact. Rather, I think there are just enough console owners, more sensitive to technical issues than the majority, who would consider another $500 outlay to increase their enjoyment; they're not necessarily threats to jump to PC, though, given their current software library and likely preference for the console's ease of use.

I think on the PC side, if there's going to be any 'threat' to consoles, it's likely going to come from more powerful and affordable APUs. When there's a host of ~$1k notebooks that will give you Radeon ~6700+ performance, that's when the value proposition starts to shift to PCs. Most people still need PCs, they predominantly choose notebooks these days, and when you can pay just a small premium for a massively better gaming experience, without sacrificing noise/battery life to the extent you have to today with discrete GPUs in gaming notebooks, that's when people may start to question why they don't just hook their notebook up to their TV instead of getting a console. Maybe. A lot of ifs; we'll see what AMD brings with Strix/Halo in 2024.
 
3 shader engines, each with 10 WGPs and 1 WGP disabled. 54 active CUs in total.

2 shader engines disabled for PS4 BC. 1 shader engine disabled for PS4 Pro and PS5 BC.

54 CU x 2500 MHz x 64 FLOPs per clock per CU x 4 (2 for FMA x 2 for dual issue) =~ 35 teraflops.

35 TF RDNA3 should be roughly equivalent to ~22 teraflops RDNA2.

So double the performance on paper but should be much better in RT workloads.
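The arithmetic above can be sanity-checked with a quick sketch (all figures are the speculated ones from this post, not confirmed specs):

```python
# Theoretical FP32 throughput = CUs * clock * flops per clock per CU.
cus = 54                          # speculated active compute units
clock_hz = 2.5e9                  # 2500 MHz
flops_per_clock_per_cu = 64 * 4   # 64 lanes x 2 (FMA) x 2 (dual issue)

tflops = cus * clock_hz * flops_per_clock_per_cu / 1e12
print(round(tflops, 2))  # -> 34.56, i.e. the "~35 TF" figure
```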

@snc Anything to add?
 
3 shader engines, each with 10 WGPs and 1 WGP disabled. 54 active CUs in total.

2 shader engines disabled for PS4 BC. 1 shader engine disabled for PS4 Pro and PS5 BC.

54 CU x 2500 MHz x 64 FLOPs per clock per CU x 4 (2 for FMA x 2 for dual issue) =~ 35 teraflops.

35 TF RDNA3 should be roughly equivalent to ~22 teraflops RDNA2.

So double the performance on paper but should be much better in RT workloads.

@snc Anything to add?
You're saying an RDNA2 TF is more efficient than an RDNA3 one?!
 
I don't actually think there's a large difference between RDNA3 and RDNA2 in terms of changes to the compute units; this has been discussed heavily on the PC side of the forum here.

I think the only improvement here will be on the RT side in terms of architecture change. I'm not expecting much of anything else unless they implement Infinity Cache.
I mean sure, but you should understand the perf difference.
 
Now, let's decipher that 18,000 MT/s memory into bandwidth on a 256- or 384-bit bus.

I highly doubt it's a 384-bit bus. They don't need 24GB of memory and they certainly don't want to mess with split bandwidth like the Series X.

I think it'll be the existing 256-bit bus for 576GB/s, possibly along with some Infinity Cache. 3GB of the 3.5GB OS RAM can be offloaded to an enlarged DDR4/5 pool; currently the DDR4 pool on the PS5 is 512MB. This would increase title RAM from 12.5GB to 15.5GB.

It could also be a 320-bit setup for 20GB of RAM at 720GB/s.
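For reference, the bus options mentioned here work out as follows (a sketch assuming 18 Gbps GDDR6 and 16Gbit chips throughout; none of this is a confirmed spec):

```python
# Peak GDDR bandwidth = (bus width in bits / 8) bytes per transfer * per-pin rate.
def bandwidth_gbs(bus_bits, gbps):
    return bus_bits / 8 * gbps

# (bus width, capacity with 2GB chips, one chip per 32-bit channel)
for bus, capacity_gb in [(256, 16), (320, 20), (384, 24)]:
    print(f"{bus}-bit / {capacity_gb} GB: {bandwidth_gbs(bus, 18):.0f} GB/s")
# 256-bit -> 576 GB/s, 320-bit -> 720 GB/s, 384-bit -> 864 GB/s
```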
 
Sign me up, but I always want the latest and greatest console.

I think this will be a fairly decent upgrade. Based on the Ryzen 9 7940HS laptop silicon, AMD is able to hit fairly high clock rates for both CPU and GPU within reasonable power envelopes in this APU, so I would expect this to translate well into perf/watt gains for a PS5 Pro over the base model. 1.75-2.0x gains in raw flops is what I expect, and hopefully more if the architecture is improved.

I am not concerned about forward/backward compatibility issues with PS5, Pro, PS6, and beyond. Based on the success of the PS4/PS4 Pro, and given their limitations, I would have expected PS5 to be designed with a Pro model and maybe even a next-gen PS6 in mind. Even before the PS5 hit the shelves, I think Sony knew what the PS5 Pro was going to be. I think the days of "we're going to build it from scratch" are gone, and consoles will just evolve more gracefully.

At least if I were in charge, I would expect to ask how the design scales forward into an intermediate product (Pro) and the next generation.

The only questions for me are: 1. What is the RT implementation in PS5 Pro: just more of the PS5 solution, or something new, either co-developed with Sony or on the AMD roadmap? And 2. Will there be any dedicated AI inference hardware to help with upscaling and frame interpolation? (I think this could be a huge benefit.)

I don't know how well this fits with modern Sony, but the company did a lot of innovative processor design in the past with the PS2 and PS3. I would hope they are helping drive the future of RDNA for AMD rather than simply taking what AMD comes up with on its own.
 
I highly doubt it's a 384-bit bus. They don't need 24GB of memory and they certainly don't want to mess with split bandwidth like the Series X.

I think it'll be the existing 256-bit bus for 576GB/s, possibly along with some Infinity Cache. 3GB of the 3.5GB OS RAM can be offloaded to an enlarged DDR4/5 pool; currently the DDR4 pool on the PS5 is 512MB. This would increase title RAM from 12.5GB to 15.5GB.

It could also be a 320-bit setup for 20GB of RAM at 720GB/s.

A very reasonable prediction; it even checks out with probable target modes. The only visible wildcard is exactly where the balance between GDDR type and Infinity Cache would land.
 
I don’t actually think there’s a large difference in rdna3 and 2 in terms of changes to the compute units, this has been discussed heavily in the PC side of the forum here.

I think the only improvement will be on the RT side here in terms of architecture change. I’m not expecting much of anything else unless they are implementing infinity cache
In the end, what matters is perf. It looks like close to 2x in raster and more with RT; IMO nothing surprising here.
 
DF have been consistent in saying 30fps with good frame pacing is acceptable and a realistic compromise for console hardware that remains static for so long (and 40fps in a 120Hz container is a nice new compromise for some games). They're also fairly consistent in saying they prefer discrete console generations, but given their tech focus they may be more interested in seeing new tech / engines.
DF might find it acceptable, but I'm a 60fps convert and I am not willing to go back. Whilst I can adapt to the visuals, the latency for many 30fps games is just too much.
 
However, the important point for this thread, I think, is that the Pro consoles are generally unlikely to enjoy those optimisation advantages to any serious degree, because all the heavy optimisation goes into the base consoles, with the extra power of the Pro units used for extra resolution/frame rate and PC-like higher-fidelity effects.
Many (perhaps most) PS4 games running on Pro used the GPU customisations for checkerboard rendering, and everything else was just dialling up some settings. I've seen enough Alex dissections to have a feel for where console graphics settings differ from those available on PC, other than when there are weird middle-ground settings used on consoles that aren't on PC, like crowd and traffic density in Spider-Man.

The base optimisation for the underlying hardware and console APIs is where the real gains are, and these are the same for both hardware tiers. If Pro is aiming for 4K/60fps always, that's not a stretch.
 
Why are we all thinking that the older RDNA2 is more efficient than the newer RDNA3?
Can somebody elaborate on that? I don't get it...
 
Why are we all thinking that the older RDNA2 is more efficient than the newer RDNA3?
Can somebody elaborate on that? I don't get it...
They're talking about performance efficiency, not power efficiency.

So performance per TF of theoretical compute power.

When Ampere used a setup where an individual CUDA core could do either 1x FP32 + 1x INT32 operation OR 2x FP32 operations, the latter option meant a theoretical doubling of FP32 (single-precision) compute for every core of the GPU. So if you had 10TF from 2000 cores before, you now had a theoretical 20TF from the same 2000 cores. RDNA3 is doing something kinda similar.

But as you keep seeing me say 'theoretical', that's because this strategy never really works out like that in practice. You may have 20TF of theoretical compute power instead of 10, but you're not actually doubling performance, or getting anywhere near it. As with Ampere, gaming workloads are always still going to need portions of the GPU running in the 1x FP32 + 1x INT32 config, as INT operations are still important. So for a doubling of theoretical FP32 teraflops, you're maybe only gaining 40% performance, which means a big regression in performance per flop (performance efficiency).

But RDNA3 is even worse in this regard, as it seems they're really struggling to get much benefit at all out of the dual-issue compute units. An RX 7600 has 22TF of FP32 compute, roughly the same as the 6800 XT's 21TF. Yet despite this, the 6800 XT is about 75% faster at 1440p. Some of this can be attributed to less bandwidth and less L3 cache, but even at 1080p the 6800 XT is still some 60% faster.

So RDNA3's performance efficiency is actually kinda terrible. As somebody posted above, a ~35TF RDNA3 part would likely perform about the same as a ~20TF RDNA2 part.
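To make the per-TF comparison concrete, here's a quick sketch using the rough figures quoted above (the ~75% gap at 1440p; these are the post's numbers, not fresh benchmarks):

```python
# Theoretical FP32 TF for each card (per the post above).
tf_7600, tf_6800xt = 22.0, 21.0
# Relative gaming performance at 1440p, normalised to the RX 7600.
rel_perf_7600, rel_perf_6800xt = 1.00, 1.75

# "Performance efficiency" = performance delivered per theoretical TF.
eff_7600 = rel_perf_7600 / tf_7600
eff_6800xt = rel_perf_6800xt / tf_6800xt
print(f"RDNA3 per-TF efficiency is {eff_7600 / eff_6800xt:.0%} of RDNA2's")  # -> 55%
```

Which is roughly why ~35 theoretical RDNA3 TF maps onto something like ~20 RDNA2 TF of real performance.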
 