Xbox Series X [XBSX] [Release November 10 2020]

Discussion in 'Console Technology' started by Megadrive1988, Dec 13, 2019.

  1. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    18,004
    Likes Received:
    8,200
    Not surprising. Designing GPUs is as much art as science. It's about finding a balance of components within the limitations of the transistor budget and ancillary technologies (like memory speeds).

    No matter what balance is struck, any given GPU will perform better or worse depending on what parts/components of the GPU are stressed in any given game/scenario.

    Because GPUs are a balancing act, scaling one factor up without also scaling up the other facets of the GPU is unlikely to show linear scaling (that factor will become increasingly limited by the others). And on the rare occasions you do get linear scaling, it won't scale linearly forever (eventually the factor you are scaling becomes limited by the other factors).
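    A toy model makes that saturation point concrete. This is a sketch with invented numbers (the stage names and per-frame demands are purely illustrative, not measured from any real GPU):

        # Frame rate is bounded by whichever stage takes longest to finish
        # its share of the frame's work (demands are in arbitrary units).
        def relative_fps(compute, bandwidth, frontend):
            demand = {"compute": 100.0, "bandwidth": 80.0, "frontend": 20.0}
            return 1.0 / max(demand["compute"] / compute,
                             demand["bandwidth"] / bandwidth,
                             demand["frontend"] / frontend)

        base = relative_fps(1.0, 1.0, 1.0)
        for scale in (1.0, 1.5, 2.0, 4.0):
            # Scale compute alone; bandwidth and the front end stay fixed.
            print(f"compute x{scale}: {relative_fps(scale, 1.0, 1.0) / base:.2f}x")
        # Prints 1.00x, 1.25x, 1.25x, 1.25x: the gains stop dead once the
        # workload becomes bandwidth-bound.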

    Both the PS5 and XBS systems are engineered with a certain balance of features that each engineering team felt was the best way to spend their transistors, combined with the level and cost of the ancillary technology to be used. Each is likely targeting different ideas about how game rendering will evolve over the generation. Neither architecture will be perfectly utilized, because each individual game developer has different ideas about how they want to render their games, or even what their game's rendering budget requires.

    In other words, overclocking an existing GPU (to use an easy example) is unlikely to show how effective the increased clocks of the PS5 are. Likewise, examining increased CU counts on PC GPUs is unlikely to show how effective the wider architecture of the XBS-X is; especially if the comparison is between a fully enabled GPU and a cut-down salvage GPU, and even less informative if it's between different GPUs of the same family. It may or may not give some hints of how the different choices will impact rendering, but it's unlikely to illuminate how they actually affect rendering.

    What I find most interesting thus far is how evenly the different architectures perform despite the differences in engineering choices. Much of that comes down to the fact that the architectures are far more similar than they are different. But part of it is also that, for multiplatform games (the only ones where direct comparisons are possible), it's in the developers' best interest to have the game run well on all the platforms it releases on.

    It does make me wonder if we'll see distinctly different approaches to how a scene is rendered in platform-exclusive games towards the mid-to-late part of the generation; basically, the exclusives that start development after the launch of the current generation of consoles.

    Regards,
    SB
     
    thicc_gaf and invictis like this.
  2. invictis

    Newcomer

    Joined:
    May 28, 2013
    Messages:
    109
    Likes Received:
    67
    And the other factor is that both the XSX and PS5 are using some RDNA 1 and some RDNA 2 parts, whether it's the front end, the back end, or the ROPs.
    So that introduces another variable.

    I think from a typical GPU and CPU point of view there isn't much between them, and that is being shown so far by the actual games.
    What I am interested to see is whether other features, such as Mesh Shaders, VRS, SFS, and ML on the XSX, as well as Primitive Shaders on the PS5 (I don't have much more to add on the PS5 side, as Sony just won't talk about their console tech), get adopted by devs and incorporated into engines, and whether that may give an advantage beyond just the raw GPU difference.
    But there are no guarantees they will be. Devs are generally slow to adopt new features; see the slow uptake of DLSS and Ray Tracing, for instance.

    From everything you hear, I expect that Turn 10 have incorporated a lot of these features into the ForzaTech engine, and you would expect id Tech 7 to be built around showcasing DX12U features.

    Fun times ahead.
     
    thicc_gaf likes this.
  3. thicc_gaf

    Regular Newcomer

    Joined:
    Oct 9, 2020
    Messages:
    334
    Likes Received:
    257
    If the PS5 has more die area dedicated to ROPs proportionally speaking, that's because it has a smaller APU than the Series X, and ROP units aren't going to change in size to scale with CU counts. I mean, they're ROPs; they have their function and a set silicon/transistor budget that's going to stay more or less fixed.

    It would only seem like the Series X's are smaller because its ROPs are contained in a larger APU (due to the higher CU count).

    Performance actually can scale with clock increases; the real point, however, is that the scaling is not linear, and at some point you hit a wall where you're expending a lot more power for minimal performance gains. On some GPUs, pushing that far can actually crater performance long-term.
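    A back-of-envelope sketch of that power wall. The voltage/frequency pairs below are invented for illustration; real V/f curves are chip-specific:

        # Dynamic power scales roughly as C * V^2 * f, and higher clocks
        # generally demand higher voltage, so power climbs much faster
        # than the (at best linear) performance gain from frequency.
        def dynamic_power(volts, freq_ghz, capacitance=1.0):
            return capacitance * volts ** 2 * freq_ghz

        base = dynamic_power(1.00, 1.8)
        for freq, volts in [(1.8, 1.00), (2.0, 1.07), (2.2, 1.15), (2.4, 1.25)]:
            print(f"{freq:.1f} GHz: {freq / 1.8:.2f}x perf at best, "
                  f"{dynamic_power(volts, freq) / base:.2f}x power")
        # Here the last step buys ~33% more clock for roughly 2x the power.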

    The PS5 still has to adhere to these laws of physics, even though it uses supplementary features like SmartShift to handle distribution of the power load between the system's CPU and GPU.
     
    PSman1700 likes this.
  4. BRiT

    BRiT (>• •)>⌐■-■ (⌐■-■)
    Moderator Legend Alpha

    Joined:
    Feb 7, 2002
    Messages:
    19,105
    Likes Received:
    21,726
    The ROP implementations changed between RDNA1 and RDNA2. I believe that is what is being referenced there.
     
    iroboto and PSman1700 like this.
  5. ToTTenTranz

    Legend Veteran

    Joined:
    Jul 7, 2008
    Messages:
    12,235
    Likes Received:
    7,190
    It's not proportionally larger; it's larger in absolute terms. The PS5 has more absolute die area dedicated to ROPs than the Series X does.

    On a "macroscopic" level, it looks like the Series X is using a similar arrangement to the Navi 2x chips ("RB+"), which have 2 depth/stencil ROPs per color ROP, whereas the PS5 has 4 depth/stencil ROPs per color ROP, similar to previous GPUs (I'm tracking that proportion back to at least VLIW4 Cayman).



    This seems like an area-saving measure, as we effectively saw the depth/stencil ROPs halved from Navi 10 to Navi 22 without a substantial loss of performance (though that could change depending on the load).

    Of course, on the PS5 side these are conjectures based on photographs where each pixel corresponds to >1500 transistors, so AFAIK we don't really have any means to be sure.
     
    snc likes this.
  6. invictis

    Newcomer

    Joined:
    May 28, 2013
    Messages:
    109
    Likes Received:
    67
    The RDNA 2 and XSX ROPs have the changes that are required for Variable Rate Shading to work.
    Obviously, with the PS5 not having VRS, maybe there was no need to move to the RDNA 2 ROPs over the RDNA 1 ones.
    The RDNA 2 changes over RDNA 1 are twofold: one set for the new hardware features such as Ray Tracing, VRS, Mesh Shaders, etc., and the other for power-efficiency gains.
    From everything I can gather, there isn't an increase in performance from an RDNA 2 teraflop vs. an RDNA 1 teraflop.
     
    PSman1700 likes this.
  7. thicc_gaf

    Regular Newcomer

    Joined:
    Oct 9, 2020
    Messages:
    334
    Likes Received:
    257
    Yes, it's seeming that was the case; someone else posted more info clarifying things for me.

    Well, this would just back up the implementation differences in the back end between the two platforms, but I agree: the differences here in depth/stencil counts and arrangement don't, in and of themselves, have a perceivable gain or knock on rasterization perf.

    That's where clocks come into the picture, and that's the main reason the PS5 has the higher pixel fillrate of the two systems.
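    The arithmetic behind that, using the commonly reported specs (both GPUs carry 64 color ROPs, so peak pixel fillrate differs only by clock):

        # Peak pixel fillrate = color ROPs * clock.
        rops = 64
        ps5_fill = rops * 2.23    # PS5 max boost clock (GHz) -> ~142.7 Gpix/s
        xsx_fill = rops * 1.825   # XSX fixed clock (GHz)     -> ~116.8 Gpix/s
        print(f"PS5 {ps5_fill:.1f} vs XSX {xsx_fill:.1f} Gpix/s, "
              f"a {ps5_fill / xsx_fill - 1:.0%} PS5 advantage")  # ~22%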
     
    #2727 thicc_gaf, May 6, 2021
    Last edited: May 6, 2021
  8. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    13,500
    Likes Received:
    16,528
    Location:
    The North
    It is smaller by 1/2, IIRC. The RBEs on RDNA 2 are double pumped. That's okay and will produce results similar to a full-sized ROP from RDNA 1, but I believe some precision formats cannot be double pumped, so they run at 1/2 rate. That's essentially the trade-off.
     
    BRiT and PSman1700 like this.
  9. mr magoo

    Newcomer

    Joined:
    May 31, 2012
    Messages:
    249
    Likes Received:
    402
    Location:
    Stockholm


    Sounds good
     
    iceberg187 likes this.
  10. thicc_gaf

    Regular Newcomer

    Joined:
    Oct 9, 2020
    Messages:
    334
    Likes Received:
    257
    Right; overall, though, it seems like a negligible one. In MS's case this was also likely taken into consideration when customizing for the INT8/INT4 DirectML-based additions to their GPU(s).
     
    Kugai Calo likes this.
  11. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    13,500
    Likes Received:
    16,528
    Location:
    The North
    Not sure it's negligible; it sounds title-dependent. Technically speaking, we've seen the Series consoles really suffer with particular alpha effects, and being 1/2 rate relative to the PS5 could be something to investigate with respect to those types of deltas.
     
    BRiT and Silent_Buddha like this.
  12. thicc_gaf

    Regular Newcomer

    Joined:
    Oct 9, 2020
    Messages:
    334
    Likes Received:
    257
    But isn't it also generally agreed that, in those instances, said games aren't really leveraging significant compute-driven approaches to the rendering pipeline? The depth/stencil rate might be 1/2, but the actual peak difference between them is only 22%.

    That would support both ideas: a lack of fuller GPU saturation on the Series X in said games, and a failure to leverage compute-driven rendering to offset the platform's lower peak pixel fillrate in traditional rasterization (most likely because the game engine isn't suited to it, and/or a lack of resources on whatever team is handling those versions, especially if the Series consoles aren't the lead platform).
     
  13. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    13,500
    Likes Received:
    16,528
    Location:
    The North
    Yes, there will always be some more optimal method of doing things. But many will do sub-optimal things for a variety of reasons, chiefly that if something is not a major bottleneck, they can focus their attention elsewhere. So there's no way around that if developers choose not to leverage it.

    You would still have to use those ROPs, for instance, if you're using VRS. Pros and cons.

    Largely, Xbox provided options to a variety of developers, and ultimately developers are going to choose the path that's best for them. These types of tradeoffs allowed Xbox to bias their device, but as we can see from the data, the compromises have been dogging the XSX perhaps more than its fanbase would like; at least with respect to the marketing campaign around the device.
     
    #2733 iroboto, May 6, 2021
    Last edited: May 6, 2021
    PSman1700, BRiT and cwjs like this.
  14. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    18,004
    Likes Received:
    8,200
    Or to put it another way...

    While "lazy devs" is a modern-day meme, never underestimate a developer's ability to not spend time optimizing something they feel is "good enough". :p Just look at the recent user fix for RDR2 ... which the developers then incorporated into a patch for the game, now that someone was kind enough to optimize it for them. :p

    Just because there's a more optimal way to do something doesn't necessarily mean a developer will do it that way. Whether because of time, skill, or apathy, a lot of non-optimal code ends up in the code bases of many products (not just games).

    Regards,
    SB
     
  15. cwjs

    Newcomer

    Joined:
    Nov 17, 2020
    Messages:
    236
    Likes Received:
    472
    I wouldn't pin that one on the devs. A studio famous for long-term death-march overtime having a few quadratic-scaling performance bugs slip in (ones that are hard to notice with small test datasets, but incredibly common to encounter in the wild) is totally to be expected.
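    For anyone unfamiliar with the pattern, here's a minimal Python illustration of that class of bug (not the actual RDR2 code, just the same shape of mistake):

        import time

        def parse_items_quadratic(blob):
            # Each iteration copies the entire remaining string, so the
            # total work grows as O(n^2) with input size.
            items = []
            while blob:
                head, _, blob = blob.partition(",")
                items.append(head)
            return items

        def parse_items_linear(blob):
            return blob.split(",")  # one pass over the input: O(n)

        for n in (1_000, 30_000):
            data = ",".join("x" * 8 for _ in range(n))
            t0 = time.perf_counter(); parse_items_quadratic(data)
            t1 = time.perf_counter(); parse_items_linear(data)
            t2 = time.perf_counter()
            print(f"n={n}: quadratic {t1 - t0:.3f}s, linear {t2 - t1:.3f}s")
        # Both versions look instant on the small test set; only the big
        # input exposes the quadratic one.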

    Ultimately, though, performance is usually a question of resources: do we spend months totally remaking the engine and risk bigger bugs cropping up, or do we ship the version that will definitely work but may not perform 100% on this device?
     
  16. Jay

    Jay
    Veteran Regular

    Joined:
    Aug 3, 2013
    Messages:
    3,630
    Likes Received:
    2,955
    That's a fair point.

    What I would add, though, is that I'm not surprised early cross-gen games see bigger benefits from clock speed than from compute power, and the XSX is still holding its own pretty well: some "wins", some "losses". I'm aware you're talking about fans, not yourself.

    Regarding developer use of XSX feature set, this is where MS was smart in baselining DX12U across PC and consoles. That will give devs compelling reasons to support it.

    Then you have the XSS, where the only way to make good use of it may be to use those features; devs won't want to put out bad games.
    Much like the eSRAM on the XO: they used it because they had to, not because they liked it.

    Especially if some games perform and look decent on XSS while others don't, because they never used XVA, for example.
     
  17. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    13,500
    Likes Received:
    16,528
    Location:
    The North
    oh it's okay ;) I was referring to myself. lol.
    I did expect more, but the data isn't showing it. Either things change once last generation drops off, or it's forever locked around this performance profile. I think as long as the consoles can still run next-gen-looking titles it's really not an issue; even the XSS at times is honestly looking pretty good. But yes, I was expecting more from it, because I thought I had spent enough time looking at historical trends to call this one, and it hasn't been the case.

    At a certain point one needs to accept they were wrong. And even if it does turn around later to what I expected, that means my knowledge was incomplete and there was a glaring gap I didn't account for in terms of weighting its importance, which still means I'm wrong. And that's okay; something to think upon for Gen10? predictions.
     
    BRiT and Jay like this.
  18. Jay

    Jay
    Veteran Regular

    Joined:
    Aug 3, 2013
    Messages:
    3,630
    Likes Received:
    2,955
    That also depends on how much you believe the 'tools' point. There was no way for you to know about the wholesale change from the XDK to the GDK, for example.

    I'm mostly in the same boat as you though.
    But then there are also things I would never have foreseen from the other side, like M.2, VRR, and cold storage of current-gen games; these are things I would've bet my house on being there at launch.

    Both had reasons for what they did, but it definitely makes predictions and assumptions interesting :-D
     
    iroboto likes this.
  19. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    13,500
    Likes Received:
    16,528
    Location:
    The North
    I mean, I do believe tools had a hand in the launch titles. It seems like that's largely been addressed since then; we are now just beyond the 6-month mark, so the tools should have been sorted, and we can see the console performing more consistently than before as a whole.

    I think what we're looking at, at this point, is just variation in which parts of the pipeline each engine stresses; the bottlenecks show up more on the XSX (imbalance of RBEs, imbalance of compute to front-end hardware) than on the PS5, because higher clocks largely keep the bottlenecks uniform, since everything is sped up equally (except for memory latency and bandwidth).
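    A sketch of that intuition, reusing the toy bottleneck model from earlier in the thread (workload numbers invented): scaling every stage by the same clock factor speeds up any workload by that factor, while widening only compute helps only the compute-bound ones.

        # Frame rate bounded by the slowest stage for a given workload.
        def fps(w, compute, frontend, rbe):
            return 1.0 / max(w["compute"] / compute,
                             w["frontend"] / frontend,
                             w["rbe"] / rbe)

        workloads = {
            "compute-bound": {"compute": 100.0, "frontend": 20.0, "rbe": 40.0},
            "rbe-bound":     {"compute": 40.0, "frontend": 20.0, "rbe": 100.0},
        }
        for name, w in workloads.items():
            base   = fps(w, 1.0, 1.0, 1.0)
            clocks = fps(w, 1.22, 1.22, 1.22)  # everything clocked +22%
            wider  = fps(w, 1.44, 1.0, 1.0)    # only compute widened +44%
            print(f"{name}: clocks {clocks / base:.2f}x, wider {wider / base:.2f}x")
        # Clocks give a uniform 1.22x on both workloads; the wider config
        # gives 1.44x on one and 1.00x on the other.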

    So that may better explain the hit-and-miss nature of the XSX: most of the time landing around the PS5, sometimes below, and sometimes well ahead.
     
    PSman1700 likes this.
  20. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    18,004
    Likes Received:
    8,200
    The question is how soon after the tools have been sorted we'll see it reflected in shipping games. If a game has code that was optimal in the previous development environment, written say a year ago, but isn't optimal in the current one, will the developer go back and rewrite it? Will they know to go back and rewrite something they had assumed was finished and checked off? Would modifying cross-platform code after the fact have unintended consequences?

    It all seems a bit messy how the whole thing ended up on XBS. It's one thing to tell developers "this is what will be used primarily for next gen consoles and PC" but then just before the consoles launch to tell them it's not in a state suitable for shipping a game. It must have been a frustrating situation all around. From the limited accounts we've heard from devs so far, everything was much smoother on the PS5 side.

    I've no question that, because of this, launch titles were generally just in a better place on the PS5. Now the questions are: how long will it take before the whole thing is just as smooth on the XBS side, and how long will the impact of the rough dev environment around the XBS linger? Or are we at the point now where multiplat developers focus on the PS5 first and then just do what they can for the XBS, because of how the generation started out?

    Regards,
    SB
     
    mr magoo and iroboto like this.