Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

The difference between the XSX and PS5 in every department is way too small to call either of them a 'victor', presumably for the whole generation.
Based on the information available, the XSX has more powerful hardware than the PS5. The SFS (Sampler Feedback Streaming) memory management technology presented by MS and the hardware-supported machine learning should bring significant performance gains to Xbox Series consoles. These are not yet included in current development environments, but the techniques are expected to show up in MS console games from the second or third wave onwards.
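
As a rough illustration of what SFS-style streaming buys you, here is a conceptual toy sketch: the GPU records which texture mips were actually sampled, and the streamer only loads those. This is not the real D3D12 Sampler Feedback API; all names below are made up.

```python
# Toy sketch of feedback-driven texture streaming (conceptual only).
requested = {("rock_albedo", 0), ("rock_albedo", 1), ("grass_albedo", 3)}  # mips the GPU actually sampled
resident = {("rock_albedo", 1)}  # mips already in memory

for texture, mip in sorted(requested - resident):
    print(f"stream {texture} mip {mip}")  # load only what sampling touched
```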
 
One thing I wonder about is what the 100GB reservation is even needed for - can't it just be part of the game install itself, since that contains all the assets already?

What 100GB reservation are you talking about?

If you're going off the Velocity Architecture, that was just a marketing number thrown out there based on some game install sizes. Games can make use of more than 100GB of directly accessible storage; they even said "more than 100GB", hence why this thread is titled as such: https://forum.beyond3d.com/threads/...e-than-100gb-available-for-game-assets.61708/

There is no actual space reserved on the system drive that game data gets moved in and out of. Memory can be mapped to the actual game installation.
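
For anyone unfamiliar with the mechanism being described, here is a minimal sketch of memory-mapping a file so its pages are faulted in from storage on first access; the file path is hypothetical. The point is that mapped pages come straight from the install, so no separate 100GB staging area is required.

```python
import mmap
import os

# Map a (hypothetical) game asset file directly into the address space.
path = "assets/textures.pak"
with open(path, "rb") as f:
    size = os.path.getsize(path)
    with mmap.mmap(f.fileno(), size, access=mmap.ACCESS_READ) as view:
        header = view[:16]  # touching these bytes pages them in on demand
        print(header.hex())
```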
 

The claim that the XSX simply has more powerful hardware really isn't as cut and dried as people would think. There are many gray areas where each system has certain advantages. Yes, on paper the XBSX has the obvious GPU and bandwidth advantage, but there may be areas where inefficiencies exist (this applies to the PS5 as well). And yes, Microsoft's XBSX GDK toolsets will get better, just like Sony's SDK toolsets will get better.

Personally, I'm going with the initial rumors from developers that both systems are more similar in performance than in prior generations. In the end, who's the winner... gamers!
 

It's what we are seeing now, and probably what we will be seeing the whole generation. The PS5 has 10.2TF, but due to its much higher clocks it's closer to the XSX's 12TF than the numbers suggest. They will most likely trade blows in different areas.

What I'd like to see is compute and RT: how do they compare there? The XSX has more CUs, while the PS5 can fill its CUs faster. CoD: Cold War could be another interesting comparison.
 
Here is your Cold War comparison
 
The YT comments section, as always, is a hot mess. But yes, it seems both are performing well, with the PS5 having fewer heavy FPS drops.
Well, it is not a like-for-like comparison (e.g. bots vs. players, etc.) and not the same spots. But it really seems like the Xbox currently has some kind of API-overhead problem. It should always run better at the same settings.
 

Once again, this is very dependent on the game engine. There will be situations (games) where PS5 will edge out XBSX, and vice versa. No different than the standard PS4 and XBO.
 
I don't think that is a legit comparison (as in, I don't think it's an actual comparison at all, it's a spoof: the drops look completely arbitrary for both).

As we are on a next-gen hardware speculation thread, I will speculate that the XSX will outperform the PS5 going forward, but the margin will be relatively small. MS got "Jebaited" hard IMO.

Basically, they both aimed at ~200-210W hardware, but MS left the door open by going for the best perf/watt ratio and a wider GPU, probably thinking Sony would either match them with a similar chip or, with a smaller chip, wouldn't be able to go above a certain GHz threshold without running into poor perf-per-watt returns (as Cerny confirmed anyway).

In the end, they could go over 2.0GHz, well over it, just by turning the standard console power-supply strategy on its head. It's not like MS could do anything more here: they picked a chip size that fits perfectly into a ~200W limit at its game clocks. But Sony basically got much more bang for their buck: with standard power delivery their chip would have targeted a ~170W console, yet they had the headroom to clock it high enough to mostly nullify the shader advantage MS had.

The way I see it, with standard power delivery :

36CU @ ~1.8GHz - 170W console
52CU @ ~1.8GHz - 210W console
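
For reference, here is the napkin math behind the TF figures thrown around in this thread, using the standard RDNA figure of 64 FP32 lanes x 2 ops per clock per CU:

```python
# Theoretical FP32 throughput: CUs * 64 lanes * 2 ops (FMA) per clock.
def tflops(cus: int, ghz: float) -> float:
    return cus * 64 * 2 * ghz / 1000

print(f"{tflops(36, 1.8):.1f}")    # ~8.3 TF  (36CU @ 1.8GHz)
print(f"{tflops(52, 1.8):.1f}")    # ~12.0 TF (52CU @ 1.8GHz)
print(f"{tflops(36, 2.0):.1f}")    # ~9.2 TF  (the 36CU @ 2.0GHz case discussed below)
print(f"{tflops(36, 2.23):.1f}")   # ~10.3 TF (PS5's announced max clock)
print(f"{tflops(52, 1.825):.1f}")  # ~12.1 TF (XSX's fixed clock)
```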

But what if you shot for a 200W console with 36 CUs? Clock it at 2.0GHz? Perhaps, but that could spike above 200W in future titles, would yield "only" 9.2TF in the best case, and the advantages in cache behaviour and pixel fillrate would not be enough to close in on MS's 200W console.

Here comes the power-limit idea, which is absolutely great (if it is actually working as well as we are seeing, and it's not the dev environment causing the perf results): provide ~200W at all times, and whenever that limit is spiked for a few ms, downclock the chip just low enough to breathe, then get back to max clocks. A brilliant idea, but it only works for 36 CUs at 200W; it would not work in MS's case. For MS it would have meant going with ~230-240W to hold a constant clock at around 2.0GHz or higher, as 200W would not do: the chip would downclock way too often at higher clocks.
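
A toy model of that power-limit loop, with made-up numbers and a made-up algorithm (Sony has not detailed the actual one):

```python
# Hold a fixed power budget; when a sample spikes past it, step the clock
# down briefly, then climb back to max. Purely illustrative.
def next_clock(clock_ghz, power_w, budget_w=200.0, lo=1.8, hi=2.23, step=0.05):
    if power_w > budget_w:
        return max(clock_ghz - step, lo)  # downclock just enough to breathe
    return min(clock_ghz + step, hi)      # return to max clocks

clock = 2.23
for power in (180, 205, 212, 195, 185):   # hypothetical per-ms power samples
    clock = next_clock(clock, power)
    print(f"{power}W -> {clock:.2f} GHz")
```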

From there on, it's standard stuff. A bigger chip, more money spent on transistors and less on the SSD/controller, yet the performance advantage is not really there. Even if the PS5 didn't have its advantage in pixel fillrate or in ops that make frequent cache trips, MS's advantage might still not be big enough. In this case, it is relatively small.

To make things worse, they had a two-SKU option precisely to guarantee that the high-end model has a performance advantage. They should probably have been a bit less conservative IMO, but hindsight is 20/20. Tbh, maybe MS's SSD/IO and software stack will turn out to be much closer to Sony's than we thought, and in that case these would be two very close consoles with only the controller as a differentiator.
 
Long term, the trend is always towards more math per pixel, and similarly to GPUs in the PC space, I expect the wider GPU, heavier in ALU, BW and (at least in part) cache, to age a little better. But this won't show fully until we leave last-gen pixel shaders behind, which by necessity have a lower math/pixel ratio. I don't think that's what we're really looking at right now, though.

Right now, I think what we're really seeing is a difference in tools and a difference in philosophy. When you have 30%, 40% or greater dips in performance that aren't there on a very similar platform, it's a sign of a practical inability (e.g. time, tools) to optimise fully.

A stable, feature-complete development environment is preferable to a relatively small increase in flops, and it makes a really big impact on the outcomes. It's so easy to lose performance, and to increase variation in frame time, with seemingly minor changes to what you're doing.
 
It would only always run better at the same settings if both consoles had exactly the same design, which is not the case here. We have wide-and-slow vs. narrow-and-fast. But I think the biggest problem of the XSX (ignoring the possibility of a unified L3 cache on the PS5 CPU) is that it has 14 CUs per shader array, which is unique among AMD RDNA GPUs AFAIK.
 
Well, that would require way too much silicon: going with 4 shader engines doubles the amount of fixed-function hardware, and then you'd need to feed it, which AMD solved on its big desktop RDNA2 part by adding a 128MB cache. The silicon budget wasn't there to support this, so they instead opted to go ALU-heavy, likely in hopes that as games continue to move towards compute shaders and away from fixed function, they would extract more performance from the Xbox over time.
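
For reference, the napkin math behind the 14-per-array point, assuming the commonly reported die layouts: the XSX spreads 56 physical CUs over 2 shader engines with 2 shader arrays each, i.e. 14 CUs per array (13 active once 4 are disabled), while Navi 10 and the PS5 put 40 CUs over 4 arrays, i.e. 10 per array.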
 
Roughly speaking, any load time over 3 seconds on PS5 or 6 seconds on Series X means that the game is unoptimized, doing something on the network, idly showing splash screens, or doing some kind of calculations or housekeeping.

Because filling the 16GB of RAM (minus the OS reserve) should normally never take more than that at 5.5GB/s or 2.5GB/s respectively.
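
The arithmetic behind that rule of thumb, assuming roughly 13.5GB of the 16GB is game-visible (the exact OS reserve is an assumption):

```python
# Time to fill game-visible RAM at the raw SSD speeds quoted above.
usable_gb = 13.5
for name, raw_gbps in (("PS5", 5.5), ("Series X", 2.5)):
    print(f"{name}: {usable_gb / raw_gbps:.1f}s to fill RAM at {raw_gbps} GB/s")
# PS5: ~2.5s, Series X: ~5.4s -- anything well beyond that isn't raw I/O.
```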

Exactly, it seems there are other bottlenecks that are yet to be discovered. Most likely down the road we'll be seeing load times consistently below 4 seconds on either system. The best example of next-gen loading times I've seen is NBA 2K21, where loading into a game takes less than 4 seconds on either system. Once the bottlenecks are ironed out, I don't see why under 4 seconds shouldn't be consistently achievable in next-gen games.
 
Because game assets passing through the interface isn't all that's happening during a loading screen. Think back to PS2 loading times. They were pretty long, and usually longer than on GC or Xbox. The PS2's optical drive could read 5 MB/s and it had 32MB of RAM, so in theory it could fill memory in about 6-7 seconds, yet there probably isn't a single game that actually loaded that fast on that system.

Think about it like a bakery. Just because you can transport the flour and sugar from the storage room instantly doesn't mean the cookies are immediately baked. That's what's happening: game assets are being moved into RAM, but that isn't all that needs to be done. Think of a game where NPCs move about a city, the time of day changes, and weather conditions are randomized by day and season. Even if all the game assets take 6 seconds to load into RAM, the game engine still has to compute the locations of those NPCs and calculate the time of day and weather conditions.
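
A toy model of that point, with entirely made-up numbers, just to show why load time can exceed raw transfer time:

```python
# Total load time = streaming the data + CPU-side setup once it's in RAM.
io_seconds = 13.5 / 5.5  # time to stream assets at raw SSD speed
cpu_setup = {            # hypothetical post-load simulation work
    "deserialize_world": 1.2,
    "place_npcs": 0.8,
    "roll_time_of_day_and_weather": 0.3,
}
total = io_seconds + sum(cpu_setup.values())
print(f"I/O {io_seconds:.1f}s + setup {sum(cpu_setup.values()):.1f}s = {total:.1f}s")
```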
 

Okay, that makes sense.
 
PS5 120fps 94% of the time, 91% for Series X. 60fps pretty much the same.


Link to data https://docs.google.com/spreadsheets/d/1CXW167MudfhuhOjscNSc8Vvs6YEuQ21dXDWkjJCpJQk/edit#gid=0
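
For the curious, here is a minimal sketch of how a "percent of time at target fps" figure like those above can be derived from a frametime capture; the tolerance and the sample data are made up.

```python
# Share of total time spent on frames that hit the target frame budget.
def pct_at_target(frametimes_ms, target_fps=120, tolerance_ms=0.5):
    budget = 1000 / target_fps + tolerance_ms
    on_time = sum(t for t in frametimes_ms if t <= budget)
    return 100 * on_time / sum(frametimes_ms)

sample = [8.3] * 940 + [16.7] * 30  # hypothetical capture: mostly 120fps frames
print(f"{pct_at_target(sample):.1f}% of time at 120fps")  # ~94.0%
```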

Looking at the video, the only real difference is in the 120fps mode, and it is almost always during heavy explosions with smoke effects, where the Xbox drops a little lower, although at 120fps a 5fps difference isn't much. In 60fps mode they are both locked; the PS5 had a single drop somewhere, and it was really small.

Visually, the only difference I can see on my phone is that on PS5 you get some smoke effects from your gun that you don't get on the Xbox, in either mode. From that it would still seem there is a slight difference in available bandwidth. Slight, because both systems drop some frames in 120fps mode, and bandwidth, because it's explosions/smoke related.

My guess would be that right now the PS5 is saving more bandwidth on things like asset loading (meshes, textures, maybe even sounds) than the Xbox can make up for with its faster 10GB of RAM. That is probably slightly more likely than my initial thought, that the remaining 3.5GB being slower is causing issues there, but you never know for sure in these early days. It could even be both, or neither, but the explosions do suggest bandwidth, and so would the absence of the gun smoke on Xbox if that omission was deliberate.
 