Xbox Series X [XBSX] [Release November 10 2020]

If the PS5 has more die area dedicated to ROPs proportionally speaking, that's because it has a smaller APU than the Series X, but ROP units aren't going to change in size to scale with the CU count. I mean, they're ROPs: they have their function and a set silicon/transistor budget that's going to stay more or less fixed.

It would only seem like the Series X's are smaller because its ROPs are contained in a larger APU (due to the higher CU count).

It's not proportionally larger, it's larger in absolute terms. The PS5 has more absolute die area dedicated to ROPs than the Series X.

On a "macroscopic" level, it looks like the SeriesX is using a similar arrangement as the Navi 2x chips ("RB+"), which has 2 depth/stencil ROPs per color ROP, whereas the PS5 has 4 depth/stencil ROPs per color ROP which is similar to previous GPUs (I'm tracking that proportion back to at least VLIW4 Cayman).


This seems like an area saving procedure as we effectively saw the depth/stencil ROPs being halved from Navi 10 to Navi 22 without a substantial loss of performance (though it could change depending on the load).

Of course, on the PS5 side these are conjectures based on photographs where each pixel corresponds to >1500 transistors, so AFAIK we don't really have any means to be sure.
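To put rough numbers on that conjecture, here's a minimal sketch, assuming the commonly cited 64 color ROPs on each console and the ratios described above (none of this is confirmed by either vendor):

```python
# Implied depth/stencil unit counts under the die-shot conjecture above (illustrative only).
color_rops = 64  # commonly cited figure for both consoles

xsx_depth_stencil = color_rops * 2  # assumed RDNA2-style "RB+": 2 depth/stencil per color ROP -> 128
ps5_depth_stencil = color_rops * 4  # assumed legacy ratio (back to Cayman): 4 per color ROP -> 256

print(xsx_depth_stencil, ps5_depth_stencil)
```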
 
It's not proportionally larger, it's larger in absolute terms. The PS5 has more absolute die area dedicated to ROPs than the Series X.

On a "macroscopic" level, it looks like the SeriesX is using a similar arrangement as the Navi 2x chips ("RB+"), which has 2 depth/stencil ROPs per color ROP, whereas the PS5 has 4 depth/stencil ROPs per color ROP which is similar to previous GPUs (I'm tracking that proportion back to at least VLIW4 Cayman).


This seems like an area saving procedure as we effectively saw the depth/stencil ROPs being halved from Navi 10 to Navi 22 without a substantial loss of performance (though it could change depending on the load).

Of course, on the PS5 side these are conjectures based on photographs where each pixel corresponds to >1500 transistors, so AFAIK we don't really have any means to be sure.
The RDNA 2 and XSX ROPs have the changes that are required for Variable Rate Shading to work.
Obviously, with the PS5 not having VRS, maybe there was no need to move to the RDNA 2 ROPs over the RDNA 1 ones.
RDNA 2's changes over RDNA 1 are twofold: one is the new hardware features such as ray tracing, VRS, mesh shaders etc., and the other is power efficiency gains.
From everything I can gather, there isn't an increase in performance from an RDNA 2 teraflop vs an RDNA 1 teraflop.
 
The ROP implementations changed between RDNA1 and RDNA2. I believe that is what is being referenced there.

Yes, that seems to be the case; someone else posted more info clarifying things for me.

It's not proportionally larger, it's larger in absolute terms. The PS5 has more absolute die area dedicated to ROPs than the Series X.

On a "macroscopic" level, it looks like the SeriesX is using a similar arrangement as the Navi 2x chips ("RB+"), which has 2 depth/stencil ROPs per color ROP, whereas the PS5 has 4 depth/stencil ROPs per color ROP which is similar to previous GPUs (I'm tracking that proportion back to at least VLIW4 Cayman).


This seems like an area saving procedure as we effectively saw the depth/stencil ROPs being halved from Navi 10 to Navi 22 without a substantial loss of performance (though it could change depending on the load).

Of course, on the PS5 side these are conjectures based on photographs where each pixel corresponds to >1500 transistors, so AFAIK we don't really have any means to be sure.

Well, this would just back up the implementation differences in the back ends between the two platforms, but I agree: the differences here overall in terms of depth/stencil count and arrangement don't have a perceivable gain or knock to rasterization performance in and of themselves.

That's where clocks come into the picture and that's the main reason PS5 has the higher pixel fillrate of the two systems.
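For reference, a back-of-the-envelope fillrate check (a sketch using the public headline figures: 64 color ROPs on each console, 2.23 GHz peak for PS5 vs 1.825 GHz for Series X):

```python
# Peak pixel fillrate = color ROPs x clock (PS5's clock is variable, so 2.23 GHz is its ceiling).
rops = 64
ps5_fill = rops * 2.23    # ~142.7 Gpixels/s
xsx_fill = rops * 1.825   # ~116.8 Gpixels/s
print(f"PS5 peak fillrate advantage: {ps5_fill / xsx_fill - 1:.0%}")  # ~22%
```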
 
The implementation, yes, but not necessarily the size. At least as far as public knowledge goes. I can't picture why the size of the ROPs would change, let alone shrink, from an older implementation to the newer one.

They are just ROPs, after all; there's a somewhat general design for them across AMD, Nvidia, even Intel (architecture-wise, Intel is more different from the other two than AMD and Nvidia generally are).
It is smaller by 1/2, IIRC. The RBEs on RDNA 2 are double pumped. This is okay and will produce similar results to a full-sized ROP from RDNA 1, but I believe some higher-precision math cannot be double pumped, so it runs at 1/2 rate. That is essentially the trade-off.
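Purely as an illustration of that trade-off (the exact formats affected on the consoles aren't public; the assumption here, based on how RDNA 2's RB+ is generally described, is that narrow 32-bit color formats get the double-pumped rate while wider/higher-precision formats fall back to half rate):

```python
# Pixels exported per clock under the double-pump idea described above (illustrative only).
def pixels_per_clock(base_rops: int, double_pumped: bool, wide_format: bool) -> int:
    if not double_pumped:
        return base_rops                # e.g. a full-sized RDNA 1-style ROP array
    return base_rops if wide_format else base_rops * 2

print(pixels_per_clock(32, double_pumped=True, wide_format=False))  # 64: matches a full 64-ROP array
print(pixels_per_clock(32, double_pumped=True, wide_format=True))   # 32: the half-rate case
```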
 
It is smaller by 1/2, IIRC. The RBEs on RDNA 2 are double pumped. This is okay and will produce similar results to a full-sized ROP from RDNA 1, but I believe some higher-precision math cannot be double pumped, so it runs at 1/2 rate. That is essentially the trade-off.

Right; overall though it seems like a negligible one. In MS's case this was also likely taken into consideration when customizing for the INT8/INT4 DirectML-based additions to their GPU(s).
 
Right; overall though it seems like a negligible one. In MS's case this was also likely taken into consideration when customizing for the INT8/INT4 DirectML-based additions to their GPU(s).
Not sure if it's negligible. It sounds title-dependent. Technically speaking, we've seen the Series consoles really suffer with particular alpha effects, and being 1/2 rate relative to the PS5 could be something to investigate with respect to these types of deltas.
 
Not sure if it's negligible. It sounds title-dependent. Technically speaking, we've seen the Series consoles really suffer with particular alpha effects, and being 1/2 rate relative to the PS5 could be something to investigate with respect to these types of deltas.

But isn't it also generally agreed that in those instances said games are not really leveraging significant compute-driven approaches to the rendering pipeline? The depth/stencil rate might be 1/2, but the actual peak difference is only 22% between them.

Which would support both the idea of a lack of fuller GPU saturation on the Series X for said games and a lack of leveraging compute-driven rendering to offset the platform's lower peak pixel fillrate in traditional rasterization (most likely because the game engine is not suited for it and/or a lack of resources for whatever team is handling those versions, especially if the Series consoles are not the lead platform).
 
But isn't it also generally agreed that in those instances said games are not really leveraging significant compute-driven approaches to the rendering pipeline? The depth/stencil rate might be 1/2, but the actual peak difference is only 22% between them.
Yes, there will always be some more optimal method of doing things. But many will do sub-optimal things for a variety of reasons, chiefly that if it is not a major bottleneck, they can focus their attention elsewhere. So there's no way around that if developers choose not to leverage it.

Which would support both the idea of a lack of fuller GPU saturation on the Series X for said games and a lack of leveraging compute-driven rendering to offset the platform's lower peak pixel fillrate in traditional rasterization (most likely because the game engine is not suited for it and/or a lack of resources for whatever team is handling those versions, especially if the Series consoles are not the lead platform).

You would still have to use those ROPs, for instance, if you're using VRS. Pros and cons.

Largely, Xbox provided options to a variety of developers, and ultimately developers are going to choose the path that's best for them. These types of tradeoffs allowed Xbox to bias their device, but as we can see from the data, these compromises have been dragging on the XSX perhaps more than its fanbase has desired; at least with respect to the marketing campaign around the device.
 
Or to put it another way...

While "lazy devs" is a modern day meme, never underestimate a developer's ability to not spend time optimizing something if they feel it's "good enough". :p Just look at the recent user fix for RDR2 ... which the developers then incorporated into a patch for the game now that someone was kind enough to optimize it for them. :p

Just because there's a more optimal way to do something doesn't necessarily mean a developer will do that thing in the most optimal way. Whether because of time, skill, or apathy, a lot of non-optimal code ends up in many products' (not just games') code bases.

Regards,
SB
 
Just look at the recent user fix for RDR2. :p

I wouldn't pin that one on the devs. A studio famous for long-term death-march overtime having a few quadratic-scaling performance bugs slip in (ones that are hard to notice with small test datasets, but incredibly common to encounter in the wild) is totally to be expected.
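As a toy illustration of that failure mode (hypothetical code, not the actual RDR2 issue), a duplicate check that scans a list inside a loop looks instant on a small test dataset but balloons once real-world data gets large:

```python
# O(n^2): the 'in' test scans a growing list on every iteration.
def dedupe_quadratic(items):
    seen = []
    for item in items:
        if item not in seen:
            seen.append(item)
    return seen

# O(n): same result, but set membership checks are O(1) on average.
def dedupe_linear(items):
    seen, out = set(), []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out
```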

Ultimately though, performance is usually a question of resources: do we spend months totally remaking the engine and risk bigger bugs cropping up, or do we ship the version that will definitely work but may not perform 100% on this device?
 
these compromises have been dragging on the XSX perhaps more than its fanbase has desired; at least with respect to the marketing campaign around the device.
That's a fair point.

What I would add, though, is that I'm not surprised that early cross-gen games see higher benefits from clock speed than from compute power, and it's still holding its own pretty well, with some "wins" and some "losses". I'm aware you're talking about fans, not yourself.

Regarding developer use of XSX feature set, this is where MS was smart in baselining DX12U across PC and consoles. That will give devs compelling reasons to support it.

Then you have the XSS, where the only way to make good use of it may be to make use of those features. Devs won't want to put out bad games.
Much like the ESRAM: they used it because they had to on the XO, not because they liked it.

Especially if some games perform and look decent on XSS, and then others don't because they never used XVA, for example.
 
What I would add, though, is that I'm not surprised that early cross-gen games see higher benefits from clock speed than from compute power, and it's still holding its own pretty well, with some "wins" and some "losses". I'm aware you're talking about fans, not yourself.
oh it's okay ;) I was referring to myself. lol.
I did expect more. But the data isn't showing that. Either it changes once last generation drops off, or it's forever locked around this performance profile. I think as long as the consoles can still run next-gen-looking titles, it's really not an issue; even the XSS at times is honestly looking pretty good. But yes, I was expecting more from it because I thought I spent enough time looking at historical trends to call this one, but it hasn't been the case.

At a certain point in time one needs to accept they were wrong. And even if it does turn around to what I expected later on, that means my knowledge was incomplete and there was a glaring gap I didn't account for in terms of weighting its importance. Which still means I'm wrong. And that's okay; something to think upon for Gen10? predictions.
 
But yes, I was expecting more from it because I thought I spent enough time looking at historical trends to call this one, but it hasn't been the case.
That also depends on how much you believe the 'tools' point. There was no way for you to know about the wholesale change from the XDK to the GDK, for example.

I'm mostly in the same boat as you, though.
But then there are also things I would never have foreseen from the other side, like M.2, VRR, and cold storage of current-gen games; these are things I would've bet my house on being there at launch.

Both had reasons for what they did, but it definitely makes predictions and assumptions interesting :D
 
That also depends on how much you believe the 'tools' point. There was no way for you to know about the wholesale change from the XDK to the GDK, for example.
I mean, I think tools had a hand in the launch titles. It seems like that's largely been addressed since then. We are now just beyond the 6-month mark, so the tools should have been addressed, and we can see it's performing more consistently than before as a whole.

I think we're looking at, at this point in time, just the variation in how engines maximize different parts of the pipeline, and the bottlenecks show up more on XSX (imbalance of RBEs, imbalance of compute to front-end hardware) than they do on PS5, because clocks largely keep the bottlenecks uniform since everything is being sped up equally (except for memory latency and bandwidth). A rough ratio sketch of that imbalance follows below.

So it may better explain the hit-and-miss nature of the XSX: most of the time landing around the PS5, sometimes below, and sometimes well ahead.
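The rough ratio sketch mentioned above, using only the public headline specs (52 CUs @ 1.825 GHz and 64 ROPs for Series X vs 36 CUs @ up to 2.23 GHz and 64 ROPs for PS5); real workloads are far messier, so treat this purely as an illustration of the compute-to-fillrate balance:

```python
# Headline FP32 throughput and pixel fillrate from the public specs (illustrative only).
def gflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz     # 64 FP32 lanes per CU, 2 ops per FMA

xsx_gflops, ps5_gflops = gflops(52, 1.825), gflops(36, 2.23)  # ~12147 vs ~10276
xsx_fill, ps5_fill = 64 * 1.825, 64 * 2.23                    # Gpixels/s: ~116.8 vs ~142.7
print(round(xsx_gflops / xsx_fill), round(ps5_gflops / ps5_fill))  # ~104 vs ~72 FLOPs per peak pixel
```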
 
I mean, I think tools had a hand in the launch titles. It seems like that's largely been addressed since then. We are now just beyond the 6-month mark, so the tools should have been addressed, and we can see it's performing more consistently than before as a whole.

The question is how soon after the tools have been sorted will we see it reflected in shipping games? If a game has code that was optimal in the previous development environment which was written say a year ago, but isn't optimal for the current development environment, will the developer go back and rewrite it? Will they know to go back and rewrite something they had assumed was finished and checked off? Would it have unintended consequences to cross platform code modifying it after the fact?

It all seems a bit messy how the whole thing ended up on XBS. It's one thing to tell developers "this is what will be used primarily for next gen consoles and PC" but then just before the consoles launch to tell them it's not in a state suitable for shipping a game. It must have been a frustrating situation all around. From the limited accounts we've heard from devs so far, everything was much smoother on the PS5 side.

I've no question that because of this, launch titles were generally just in a better place for PS5. Now the question is, how long will it take before the whole thing is just as smooth on the XBS side and how long will the impact from the rough dev. environment around the XBS linger? Or is it at a point now that multiplat. developers will focus on PS5 and then just do what they can for XBS because of how the generation started out?

Regards,
SB
 
Not sure if it's negligible. It sounds title-dependent. Technically speaking, we've seen the Series consoles really suffer with particular alpha effects, and being 1/2 rate relative to the PS5 could be something to investigate with respect to these types of deltas.

I consider game/middleware/DirectX/driver issues here far more likely than hardware. Even if the PS5 can theoretically do some alpha effects faster, how likely is it to have such a large *real* impact?
 
I consider game/middleware/DirectX/driver issues here far more likely than hardware. Even if the PS5 can theoretically do some alpha effects faster, how likely is it to have such a large *real* impact?
Hmm. The way I see it is that it's 64 ROPs for the PS5,
and 32 ROPs for the XSX, double pumped to give an equivalent output to 64 ROPs, but double pumping only certain types of formats.
 
I am not seeing examples of alpha effects tanking performance on the Series consoles. Certainly not in games released of late, or since around March. Most of these games seem to run at equal performance on both consoles, with the XSX usually also running at higher resolution compared to the PS5. Come the end of this year, we will start to see games released that are developed primarily with these current-gen consoles in mind, with the last-gen versions developed elsewhere.
 
I am not seeing examples of alpha effects tanking performance on the Series consoles. Certainly not in games released of late, or since around March. Most of these games seem to run at equal performance on both consoles, with the XSX usually also running at higher resolution compared to the PS5. Come the end of this year, we will start to see games released that are developed primarily with these current-gen consoles in mind, with the last-gen versions developed elsewhere.
The end of the year is a bit optimistic; give it some more time. Also, because of the pandemic, the old gen will live a bit longer (IMHO). But it should get better over time.
 