Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

Except Cerny said that they (Sony and AMD) designed a new Geometry Engine, added cache scrubbers, a coherency engine, etc. That's custom! Those are things that won't appear on the XBSX SoC, from my understanding of what Cerny explained.

So I doubt RDNA1 will be better than Sony's custom SoC, even if it does share some of the tech.
I am specifically talking about the RDNA1/RDNA2 difference, nothing about what Sony has or what Cerny talked about.

Cerny hasn't talked about the GE at great length, though, and what he said could apply to what is already in RDNA1 (but doesn't have to be - we don't know).
 
One question: what is the impact of having additional CUs per shader array? I was reading a piece today, after the 6800 reviews came out, that described all the known configurations of RDNA2-based GPUs, including the PS5 and XBSX. One thing that stood out is that the PS5 and the AMD cards have 10 CUs per SA, while the XBSX has 14. Maybe that is the distinction developers still need to optimize for? What's the advantage of that configuration vs the standard one (if any)?
hum....

It would be more work for the front end to distribute a workload over the shader arrays. The L1 cache, which is per shader array, is seemingly a fixed amount, so the cache hit rate would go down somewhat, leading to more pressure on external bandwidth. I'm not sure if setting a higher native resolution would offset that somewhat, since the pixel-to-texel ratio increases as well (for a given LOD), so you get a different performance curve over a range of resolutions.

Similarly, the new RB+ / ROPs may have been designed with higher resolution targets in mind, owing to the pixel-to-triangle size ratio and the pixel coverage output of the RB+.

I think maybe it's just that you see less benefit in lowering the resolution to win back performance.

(Just guessing though - I don't really know)
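To put rough numbers on the cache-pressure point, here's a napkin-math sketch. It assumes the 128 KB per-shader-array graphics L1 from AMD's RDNA whitepapers applies unchanged to both console chips, and that both use four shader arrays - treat it as illustrative only:

```python
# Napkin math: how many active CUs share each shader array's graphics L1.
# Assumes 128 KB of L1 per shader array (AMD's RDNA whitepaper figure) and
# four shader arrays on both console chips.

L1_PER_ARRAY_KB = 128

configs = {
    "PS5 (36 active CUs / 4 arrays)": 36 / 4,   # ~9 active CUs per array
    "XSX (52 active CUs / 4 arrays)": 52 / 4,   # ~13 active CUs per array
    "RX 6800 XT (72 CUs / 8 arrays)": 72 / 8,   # ~9 CUs per array
}

for name, cus_per_array in configs.items():
    kb_per_cu = L1_PER_ARRAY_KB / cus_per_array
    print(f"{name}: {cus_per_array:.0f} CUs share one 128 KB L1 "
          f"-> ~{kb_per_cu:.1f} KB per active CU")
```

If those assumptions hold, each XSX CU gets noticeably less L1 behind it than a PS5 or 6800 XT CU, which is one way to read the "more pressure on external bandwidth" point above.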
 
I am specifically talking about the RDNA1/RDNA2 difference, nothing about what Sony has or what Cerny talked about.

Cerny hasn't talked about the GE at great length, though, and what he said could apply to what is already in RDNA1 (but doesn't have to be - we don't know).
Well, Cerny says that the GPU is a custom RDNA2 GPU, not a custom RDNA1 GPU. So sorry if I don't believe the crap internetz people put out about it being RDNA1/RDNA2.

It might not have all the features of RDNA2, but that doesn't mean it should be classified as RDNA1.5, as some are calling it.

Regarding the bolded part: RDNA1 being a base for RDNA2 is logical. But that doesn't mean RDNA2 tech is RDNA1. RDNA2 WILL be more efficient - unless you're saying the 5700 XT and the new 6800 are similar and the newer GPU isn't more efficient.

 
Well, Cerny says that the GPU is a custom RDNA2 GPU, not a custom RDNA1 GPU. So sorry if I don't believe the crap internetz people put out about it being RDNA1/RDNA2.

It might not have all the features of RDNA2, but that doesn't mean it should be classified as RDNA1.5, as some are calling it.

Regarding the bolded part: RDNA1 being a base for RDNA2 is logical. But that doesn't mean RDNA2 tech is RDNA1. RDNA2 WILL be more efficient - unless you're saying the 5700 XT and the new 6800 are similar and the newer GPU isn't more efficient.

Honestly, I am not really sure what your point is, so I will drop it.
 
How do we know the Xbox performs worse than the 5700 XT? In TechPowerUp's Valhalla benchmark at 1440p, the 5700 XT averages 53 fps, so it's worse than the XSX, but we don't know the graphics settings on PC vs consoles. @Dictator, do you know how Valhalla on the next-gen consoles compares to the 5700 XT?

I'm pretty sure you'll need to wait for the DF article to get an answer to that. I'm reasonably certain the consoles won't be running at the same settings, though, so comparing frame rates at this stage is fairly meaningless.

I think John did mention in the recent DF console video for Valhalla that the XSX and PS5 are pretty much running at last-gen console settings, with resolution and frame rate being the main differentiators.
 
How do we know the Xbox performs worse than the 5700 XT? In TechPowerUp's Valhalla benchmark at 1440p, the 5700 XT averages 53 fps, so it's worse than the XSX, but we don't know the graphics settings on PC vs consoles. @Dictator, do you know how Valhalla on the next-gen consoles compares to the 5700 XT?

Probably...

If I recall correctly, Unity was heavily CPU-bound, so that may be the case here as well.
 
The PS5 just seems to be the better-designed console of the two. Most expected the XSX to actually perform 10 to 20% better, but it seems to be the other way around.
A PS5 and a PC seem like a really nice combo, as always.
 
Just watched the Valhalla video. The tessellated snow trails are good; the tessellated terrain, though, not so much. Still similar to Odyssey. Object motion blur without camera motion blur is great, however. Looking forward to the upcoming PC comparison. I agree with John WRT Unity, but for me the starkest difference is in animation.

 
The PS5 just seems to be the better-designed console of the two. Most expected the XSX to actually perform 10 to 20% better, but it seems to be the other way around.
A PS5 and a PC seem like a really nice combo, as always.

PS5 better, no! XBSX better, no! These systems will be trading blows like well-rounded boxers for the next 6-8 years, and neither is in its prime.
 
PS5 better, no! XBSX better, no! These systems will be trading blows like well-rounded boxers for the next 6-8 years, and neither is in its prime.
But the PS5 is arguably the better-designed system. They managed to get similar performance from a smaller chip and in turn put that difference into parts where they can make a difference (the SSD/IO and the controller). For the SSD it remains to be seen, but the controller is a terrific addition to next gen. Almost a must-have feature!
 
Ah, so 32 double-pumped ROPs would have the raw throughput of 64, but you would lose efficiency on small tris?
They could just be wider. RBEs generally had 4 ROPs each, and making them have 8 seems more straightforward than double-pumping silicon whose base clock is >2GHz.
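Just to make the raw-throughput side of that question concrete, here's a trivial illustrative calculation (it deliberately ignores exactly the small-triangle / coverage efficiency being debated, and uses the XSX clock purely as an example number):

```python
# Illustrative only: peak colour fill rate for "more ROPs" vs "fewer ROPs
# doing two output cycles per clock". The raw peaks match; the open
# question above is how efficiency differs on small triangles.

def peak_fill_gpix(rops, clock_ghz, ops_per_clock=1):
    """Peak fill rate in Gpixels/s = ROPs x outputs per clock x clock."""
    return rops * ops_per_clock * clock_ghz

clock = 1.825  # using the XSX GPU clock as an example
print(f"64 ROPs, 1 op/clk : {peak_fill_gpix(64, clock):.1f} Gpix/s")
print(f"32 ROPs, 2 ops/clk: {peak_fill_gpix(32, clock, 2):.1f} Gpix/s")
```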

There are some hints from developers like Matt Hargett that the cache system is very good on PS5 and that bandwidth is not a problem for this console.
Perhaps, although the only statements I've seen of this sort seemed pretty generic about having to stay within caches, which is universally applicable.

But the PS5 is arguably the better-designed system. They managed to get similar performance from a smaller chip and in turn put that difference into parts where they can make a difference (the SSD/IO and the controller). For the SSD it remains to be seen, but the controller is a terrific addition to next gen. Almost a must-have feature!
There's a lot of missing information that is letting people fill in the blanks in the way they see fit, and I think it's better to wait a bit beyond the first post-launch week for the inevitable teething pains to work themselves out. I think it's too soon to know whether there are some easily resolved issues that could change the picture one way or the other, though at least there's no clear sign of a crippling deficit for the top-end consoles at any rate. Even stating that one is arguably better designed allows for the possibility that it can be argued not to be.
 
Is it possible to deduce from the power draw whether the final retail units are in fact currently downclocked?
I looked at all the game comparisons, and it appears the PS5 has a solid 20% performance lead in a lot of situations. The PS5 should be ~20% weaker on paper, so the 40% performance delta cannot be attributed to bad MS SDK tools alone; I believe the Series X is probably downclocked 30% compared to what was marketed earlier.
It can really depend on what the workload is doing, or what portions of the GPU pipeline are the bottleneck.
There could be other factors, like expected workloads that might take advantage of more parallel resources, or features that haven't been heavily utilized yet.
However, I would not count out software or tool issues. There's not really a ceiling to how debilitating dumb software problems can be.
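For reference, the "on paper" gap from the publicly quoted specs works out as below. This is napkin math only - it says nothing about bandwidth, front-end limits, tools, or how the PS5's variable clock behaves under load:

```python
# Napkin math on the theoretical FP32 gap using the publicly quoted specs.
# PS5's 2.23 GHz is its quoted peak clock (it is variable); XSX is fixed.

def tflops(cus, clock_ghz):
    # 64 FP32 lanes per CU, 2 ops per lane per clock (FMA)
    return cus * 64 * 2 * clock_ghz / 1000

xsx = tflops(52, 1.825)
ps5 = tflops(36, 2.23)
print(f"XSX ~{xsx:.2f} TF vs PS5 ~{ps5:.2f} TF "
      f"-> ~{(xsx / ps5 - 1) * 100:.0f}% on-paper advantage for XSX")
# ~12.1 TF vs ~10.3 TF, i.e. roughly 18% -- which by itself doesn't say
# anything about a 40% swing in the other direction.
```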
 
The way dips happen on XSX almost makes you think it would be CPU-related... but that doesn't really make sense given the hardware...

Is it possible the PS5's CPU is utilizing SMT but the XSX's is not yet? I thought I remembered reading something about the XSX having it disabled in the beginning.
 
The way dips happen on XSX almost makes you think it would be CPU-related... but that doesn't really make sense given the hardware...

Is it possible the PS5's CPU is utilizing SMT but the XSX's is not yet? I thought I remembered reading something about the XSX having it disabled in the beginning.

On the CPU front, the Xbox consoles have developer-toggleable SMT (on or off); the PS5 just has SMT on, as far as we know.
 
On the CPU front, the Xbox consoles have developer-toggleable SMT (on or off); the PS5 just has SMT on, as far as we know.

Here's what I remember reading:

[As expected, we're getting eight CPU cores and 16 threads, delivered via two quad-core units on the silicon, with one CPU core (or two threads) reserved for running the underlying operating system and the front-end 'shell'. Microsoft is promising a 4x improvement in both single-core and overall throughput over Xbox One X - and CPU speeds are impressive, with a peak 3.8GHz frequency. This is when SMT - or hyper-threading - is disabled. Curiously, developers can choose to run with eight physical cores at the higher clock, or all cores and threads can be enabled with a lower 3.6GHz frequency. Those frequencies are completely locked and won't adjust according to load or thermal conditions - a point Microsoft emphasised several times during our visit.

In our PC-based tests, having SMT enabled can deliver up to 30 per cent - or more - of additional performance in well-threaded applications. However, for launch titles at least, Microsoft expects developers to opt for the higher 3.8GHz mode with SMT disabled. "From a game developer's perspective, we expect a lot of them to actually stick with the eight cores because their current games are running with the distribution often set to seven cores and seven worker threads," explains Microsoft technical fellow and Xbox system architect Andrew Goossen. "And so for them to go wider, for them to go to 14 hardware threads, it means that they have the system to do it, but then, you have to have workloads that split even more effectively across them. And so we're actually finding that the vast majority of developers - talking with them about their choices for launch - the vast majority are going to go with the SMT disabled and the higher clock."]

https://www.eurogamer.net/articles/digitalfoundry-2020-inside-xbox-series-x-full-specs

So one possible explanation is that the PS5's CPU is being more fully exploited than the XSX's currently is (assuming they can't toggle it on PS5...).
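A rough way to see the trade-off described in that quote - the 30% SMT uplift is the upper end of the PC observation DF mentions and is entirely workload dependent, so treat this as a sketch:

```python
# Rough view of the two XSX CPU modes described above: 8 cores @ 3.8 GHz
# with SMT off vs 16 threads @ 3.6 GHz with SMT on. The 0.30 "smt_gain"
# is an assumed upper bound, not a measured console figure.

def relative_throughput(clock_ghz, smt_gain=0.0):
    return clock_ghz * (1.0 + smt_gain)

smt_off = relative_throughput(3.8)
smt_on = relative_throughput(3.6, smt_gain=0.30)
print(f"SMT off @ 3.8 GHz: {smt_off:.2f}")
print(f"SMT on  @ 3.6 GHz: {smt_on:.2f} "
      f"(~{(smt_on / smt_off - 1) * 100:.0f}% ahead, but only if the game "
      f"actually scales past 7-8 threads)")
```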
 
Didn't the Dirt 5 developer interview with DF say that SMT is what made it possible to do 120 fps on the new current-gen consoles?
 
Here's what I remember reading:

[As expected, we're getting eight CPU cores and 16 threads, delivered via two quad-core units on the silicon, with one CPU core (or two threads) reserved for running the underlying operating system and the front-end 'shell'. Microsoft is promising a 4x improvement in both single-core and overall throughput over Xbox One X - and CPU speeds are impressive, with a peak 3.8GHz frequency. This is when SMT - or hyper-threading - is disabled. Curiously, developers can choose to run with eight physical cores at the higher clock, or all cores and threads can be enabled with a lower 3.6GHz frequency. Those frequencies are completely locked and won't adjust according to load or thermal conditions - a point Microsoft emphasised several times during our visit.

In our PC-based tests, having SMT enabled can deliver up to 30 per cent - or more - of additional performance in well-threaded applications. However, for launch titles at least, Microsoft expects developers to opt for the higher 3.8GHz mode with SMT disabled. "From a game developer's perspective, we expect a lot of them to actually stick with the eight cores because their current games are running with the distribution often set to seven cores and seven worker threads," explains Microsoft technical fellow and Xbox system architect Andrew Goossen. "And so for them to go wider, for them to go to 14 hardware threads, it means that they have the system to do it, but then, you have to have workloads that split even more effectively across them. And so we're actually finding that the vast majority of developers - talking with them about their choices for launch - the vast majority are going to go with the SMT disabled and the higher clock."]

https://www.eurogamer.net/articles/digitalfoundry-2020-inside-xbox-series-x-full-specs

So one possible explanation is that the PS5's CPU is being more fully exploited than the XSX's currently is (assuming they can't toggle it on PS5...).
It would be odd for them to not use SMT on Xbox if they’ve already done the work to support it on PS5.
 
It seems that Nvidia, AMD and MSFT have gone "slow and wide". There are 2.8x more CUDA cores in the 3080 over the 2080 Super... there are 1.8x more CUs in the 6800 XT over the 5700 XT... and the trend continues down the product lines. All over a two-year period.


We know that MSFT aims to unify Xbox and PC development, so it would make sense that they are all taking the same approach. So it's fair to say it will take a while for the new cards to reach their full potential.
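The unit counts behind the "slow and wide" observation, just to make the ratios explicit (publicly listed figures; note that Ampere counts its doubled FP32 lanes as CUDA cores, so the cross-vendor comparison isn't apples to apples):

```python
# Shader-unit count ratios quoted above, computed from the listed specs.

unit_counts = {
    "RTX 2080 Super -> RTX 3080 (CUDA cores)": (3072, 8704),
    "RX 5700 XT -> RX 6800 XT (Compute Units)": (40, 72),
}

for name, (old, new) in unit_counts.items():
    print(f"{name}: {new / old:.1f}x in roughly two years")
```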
 
But the PS5 is arguably the better-designed system. They managed to get similar performance from a smaller chip and in turn put that difference into parts where they can make a difference (the SSD/IO and the controller). For the SSD it remains to be seen, but the controller is a terrific addition to next gen. Almost a must-have feature!

Maybe, maybe not. I'm just not into calling "winners" based simply on a handful of games. There is no doubt the PS5 is showing excellent performance, but that was a given, since many developers and trusted media sources said as much. The XBSX is showing great performance as well, but there are certain issues which can be resolved over time.

Honestly, I’m just tired of the internet noise from console gamers...
 
Take it with a grain of salt:
Someone told me that, under some situations, the XBSX can only access the 560 GB/s 10 GB pool at 244 GB/s.
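For context, a sketch of the publicly documented XSX memory arrangement; the 244 GB/s figure itself is unverified and can't be confirmed from this, it only shows why the two pools can contend at all:

```python
# Publicly documented XSX memory layout: ten 14 Gbps GDDR6 chips on a
# 320-bit bus. The 10 GB "GPU optimal" pool stripes across all ten chips;
# the 6 GB "standard" pool sits on the upper half of the six 2 GB chips.

GBPS_PER_CHIP = 14 * 32 / 8   # 14 Gbps x 32-bit device / 8 = 56 GB/s

fast_pool = 10 * GBPS_PER_CHIP   # 560 GB/s when all ten chips serve it
slow_pool = 6 * GBPS_PER_CHIP    # 336 GB/s across the six 2 GB chips
print(f"Fast pool peak: {fast_pool:.0f} GB/s, slow pool peak: {slow_pool:.0f} GB/s")

# Any traffic to the slow pool occupies six of the same chips, so the fast
# pool cannot sustain its full 560 GB/s at the same moment; the effective
# figure depends entirely on the access mix.
```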
 