Xbox Series X [XBSX] [Release November 10 2020]

Discussion in 'Console Industry' started by Megadrive1988, Dec 13, 2019.

  1. snc

    snc
    Veteran Newcomer

    Joined:
    Mar 6, 2013
    Messages:
    1,232
    Likes Received:
    913
    it would be very surprising if higher clocks didn't translate into better performance in RT games on Radeon cards ;)
     
  2. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    13,668
    Likes Received:
    16,876
    Location:
    The North
    they have a massive cache, right ;) whose bandwidth increases in line with the clock boost to calculation rate.

    Once you hit a memory latency/memory bottleneck, I don't know how much more improvement you're going to see increasing clockspeed further.
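
    To put rough numbers on it, a minimal sketch, assuming a hypothetical 60% hit rate, 1 TB/s of cache bandwidth at 2 GHz, and 512 GB/s of fixed DRAM bandwidth (illustrative figures, not measurements):

    ```python
    # Minimal sketch, assuming hypothetical figures: on-die cache bandwidth
    # tracks core clock, external GDDR6 bandwidth does not, so effective
    # bandwidth rises with clock only in proportion to the cache hit rate.
    def effective_bandwidth(hit_rate, cache_bw_gbs, dram_bw_gbs):
        # Weighted model: hits served at cache speed, misses at DRAM speed.
        return hit_rate * cache_bw_gbs + (1.0 - hit_rate) * dram_bw_gbs

    for clock_ghz in (1.8, 2.0, 2.2, 2.4):
        cache_bw = 1000.0 * (clock_ghz / 2.0)  # assumed 1 TB/s at 2 GHz, scales with clock
        dram_bw = 512.0                        # assumed fixed external bandwidth
        print(f"{clock_ghz:.1f} GHz -> ~{effective_bandwidth(0.6, cache_bw, dram_bw):.0f} GB/s effective")
    ```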
     
  3. ToTTenTranz

    Legend Veteran

    Joined:
    Jul 7, 2008
    Messages:
    12,485
    Likes Received:
    7,479
    The context of my post is quoting a statement that said "the Xbox will be doing increasingly better than the PS5 this generation".
    Had the comment said "increasingly similar" instead, I'd agree.

    This game developer does.
     
  4. snc

    snc
    Veteran Newcomer

    Joined:
    Mar 6, 2013
    Messages:
    1,232
    Likes Received:
    913
    once you hit a memory bottleneck, I don't know how much more improvement you're going to see from increasing CU count ;) I'd also recommend the ComputerBase 6700 XT review; CU count doesn't scale linearly into a performance advantage
     
  5. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    13,668
    Likes Received:
    16,876
    Location:
    The North
    yup. That's usually why they scale bandwidth with CUs on the design.

    Scaling the CUs has to do with workload, right? If you're designing shaders for 36 CUs, it's going to perform optimally there. For older games, designed around smaller CU counts, in the case of the most popular card at the time (the 1070), that made a lot of sense. I don't know what going forward will look like in terms of performance targets. It could be perfectly okay, or five years down the road, when PC has moved on to 80 CUs as a baseline, you're probably going to want to take even more advantage of the CUs, not less. I can't predict the future, but usually things get bigger.
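
    As a toy illustration of the workload-fit point (a deliberately crude quantization model, not a real GPU scheduler): a dispatch of 72 workgroups drains in two full waves on 36 CUs, and still needs two waves on 52 CUs, so the wider part gains nothing on that dispatch:

    ```python
    import math

    def utilization(workgroups, cus):
        # Toy quantization model (hypothetical, not a real GPU scheduler):
        # workgroups are issued in full waves across all CUs.
        waves = math.ceil(workgroups / cus)  # passes needed to drain the dispatch
        return workgroups / (waves * cus)    # fraction of CU-slots doing work

    for cus in (36, 52):
        print(f"{cus} CUs -> {utilization(72, cus):.0%} of slots busy")
    # 36 CUs -> 100%, 52 CUs -> 69%: same two waves, so no speedup here.
    ```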
     
  6. snc

    snc
    Veteran Newcomer

    Joined:
    Mar 6, 2013
    Messages:
    1,232
    Likes Received:
    913
    at best the real performance advantage from more CUs matches the theoretical one; in practice you won't see a linear increase as CU count increases, and clocks are always at least as important
     
  7. cwjs

    Regular Newcomer

    Joined:
    Nov 17, 2020
    Messages:
    272
    Likes Received:
    537
    Weird way to write 'reaction youtuber'. Ratchet looks great, but it's not the first game to include a good hair renderer or object-based motion blur.

    edit: OK, I watched the video; what did he say was technically special? He spent the whole time talking about polish.

    And he spent a while looking at a scene saying 'nothing looks like a realtime renderer', which is extreme hyperbole; there are a zillion details in that shot that show the limitations. It's a credit to the art team that they don't draw a naive eye.
     
    Johnny Awesome and Kugai Calo like this.
  8. Tkumpathenurpahl

    Tkumpathenurpahl Oil Monsieur Geezer
    Veteran Newcomer

    Joined:
    Apr 3, 2016
    Messages:
    1,909
    Likes Received:
    1,927
    I don't follow what you mean by the first part, sorry. Could you clarify it for me please?

    As for the ML part, it's true that MS has yet to implement it in any meaningful way, but we have evidence by way of DLSS that ML hardware does have a use. It's still a bit of an unknown quantity, and it may transpire that the RDNA2 ML capabilities aren't good enough to be leveraged in place of compute shader upscaling/reconstruction techniques, but in theory, the XSX should fare better here.

    Beyond upscaling/reconstruction, I don't know what gaming applications ML has though. I'd be grateful if anyone here can point me to any articles or, better yet, just give me some ideas.

    What do you mean by better performance on the back end? Sorry, I'm a bit dozy right now, and trying to figure out just about anything might just be the end of me.

    Please correct me if I'm wrong, but every argument I've seen in favour of the PS5's architecture seems to indicate that its higher clocks mean it will consistently be somewhat superior when it comes to traditional rasterisation. However, the CU and bandwidth advantages turn the tables in favour of the XSX when it comes to RT performance.

    I pretty much agree. There's certainly no clear-cut long-term winner right at this moment, as all of the XSX's advantages are dependent upon game engines moving towards RT and ML-based upscaling/reconstruction. Both consoles will be fine, and both will produce beautiful games. I just think that after a few years, the PS5 will have slightly worse RT and run at a slightly, but perceptibly, lower rendering resolution.

    Except the average Joe still won't give a toss and they'll carry on ruining the gaming industry by throwing money at Fortnite.

    I'm not, I acknowledged them in my second paragraph. Arguably not clearly enough though.

    I woke up in the wee hours of the morning today, and I've just finished exercising and eating, so I'm rather sleepy and struggling to properly articulate my thoughts, but my impression of the two main consoles is as follows:
    The XSX has more of a CU quantity advantage than the PS5 has a clockspeed advantage.
     
    Johnny Awesome likes this.
  9. Jay

    Jay
    Veteran Regular

    Joined:
    Aug 3, 2013
    Messages:
    3,697
    Likes Received:
    3,045
    Also, on PC you have the Infinity Cache, where the higher the frequency of the GPU, the bigger the benefit.
    In the console space, that could mean the PS5 is hurt more by lacking it than the XS is.

    I'm unsure what the discussion is, though: would faster be beneficial? Well, sure, if all things remain equal. But that's not the case, as you've pointed out.
     
  10. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    13,668
    Likes Received:
    16,876
    Location:
    The North
    Clockspeed will always help, but how it improves performance will differ from one setup to another. But overall, yes, clockspeed is an ideal thing to have, though without benchmarking different loads you're not going to know. Like, if we benchmark 128-bit or 64-bit floats, you're going to want something more than just clockspeed to really improve performance, right?
     
  11. snc

    snc
    Veteran Newcomer

    Joined:
    Mar 6, 2013
    Messages:
    1,232
    Likes Received:
    913
    yes, it's 12 TF vs 10 TF, but I'm not sure why we're going from that to CU count ;)
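
    For reference, both headline figures are just CU count x 64 lanes x 2 FLOPs per clock x clock, which is why CU count and TF keep getting run together. A quick worked check using the published specs:

    ```python
    def tflops(cus, clock_mhz):
        # Peak FP32 = CUs x 64 SIMD lanes x 2 FLOPs per clock (FMA) x clock.
        return cus * 64 * 2 * clock_mhz * 1e6 / 1e12

    xsx = tflops(52, 1825)  # published Series X spec
    ps5 = tflops(36, 2230)  # published PS5 peak clock
    print(f"XSX {xsx:.2f} TF, PS5 {ps5:.2f} TF, ratio {xsx / ps5:.2f}x")
    # -> XSX 12.15 TF, PS5 10.28 TF, ratio 1.18x
    ```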
     
  12. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    18,122
    Likes Received:
    8,395
    Assuming, of course, that you have the memory bandwidth to feed it. Wider doesn't help unless you have more bandwidth; hence why MS has the non-uniform (speed) memory configuration.

    Higher clocks can also be beneficial if memory bandwidth can feed them. Here the PS5 operates at a deficit when compared to the XBS-X.

    RT makes use of high bandwidth in order to keep things fed. NV sees a benefit when increasing core clock speed because they also have extremely high bandwidth.

    Neither the XBS-X nor the PS5 has bandwidth approaching that of NV. The question here is how much the bandwidth of a respective system will hold it back WRT RT. If RT is the limiting factor, then it doesn't matter how much faster or wider you make something. If memory bandwidth is the limiting factor for RT, then the XBS-X will have an advantage, as it'll be able to feed the GPU more data in a given period of time. If it isn't, then other parts of the architecture come into play.

    And that's what Iroboto has been saying. We just don't know at the moment because we don't have enough data to even take a guess currently as to how this will play out on consoles.

    It's far more complex than just saying:
    • More clock speed = better.
    • More CU = better.
    Each of those "could" be true depending on whether the GPU can be fed fast enough from main memory, when main memory needs to be accessed. There will be situations where clock speed is more beneficial, situations where more CUs are better, and situations where neither matters and it's all about how much bandwidth you have.
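
    A minimal roofline-style sketch of that, using the published peak figures and treating arithmetic intensity (FLOPs per byte fetched) as the free variable:

    ```python
    def attainable_gflops(peak_gflops, bw_gbs, flops_per_byte):
        # Roofline: capped by the smaller of peak compute and
        # bandwidth x arithmetic intensity.
        return min(peak_gflops, bw_gbs * flops_per_byte)

    for intensity in (4, 16, 64):  # FLOPs per byte fetched (illustrative)
        wide = attainable_gflops(12150, 560, intensity)  # wider GPU, more bandwidth
        fast = attainable_gflops(10280, 448, intensity)  # narrower GPU, faster clocks
        print(f"intensity {intensity:2d}: {wide:6.0f} vs {fast:6.0f} GFLOP/s ({wide / fast:.2f}x)")
    # Bandwidth-bound (low intensity): the gap is the bandwidth ratio, 1.25x.
    # Compute-bound (high intensity): the gap is the compute ratio, 1.18x.
    ```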

    Regards,
    SB
     
    mr magoo and iroboto like this.
  13. snc

    snc
    Veteran Newcomer

    Joined:
    Mar 6, 2013
    Messages:
    1,232
    Likes Received:
    913
    reminder: the XSX has a 25% bandwidth advantage, slightly more than its 20% theoretical compute advantage, so bandwidth scales almost linearly with the compute increase. I still don't know how, from 20% more flops and 25% more bandwidth, many here jump to writing about the CU count advantage as super important while ignoring the real compute and bandwidth advantages ;)
     
  14. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    13,668
    Likes Received:
    16,876
    Location:
    The North
    If you look at the graphics pipeline as a whole, more CUs do not lead to linear increases in performance, because workloads don't hit the pipeline stages equally in terms of saturation. But if you're going to target the discussion at a very specific function within the pipeline, then it's apt to consider the exact factors that come into play there.
     
  15. dobwal

    Legend Veteran

    Joined:
    Oct 26, 2005
    Messages:
    5,740
    Likes Received:
    2,012
    Bandwidth plays a part but so does latency. Double the frequency of a processor and it will take just about twice as many cycles for system or video memory to service a request.
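
    The same point in numbers, assuming an illustrative ~300 ns round trip to memory:

    ```python
    def stall_cycles(latency_ns, clock_ghz):
        # Fixed wall-clock latency costs more core cycles as frequency
        # rises (ns x cycles-per-ns).
        return latency_ns * clock_ghz

    for ghz in (1.0, 2.0):
        print(f"{ghz:.1f} GHz: ~{stall_cycles(300, ghz):.0f} cycles per access")
    # Doubling the clock doubles the cycles spent waiting on each miss.
    ```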
     
    Silent_Buddha and iroboto like this.
  16. eastmen

    Legend Subscriber

    Joined:
    Mar 17, 2008
    Messages:
    12,871
    Likes Received:
    3,754
    https://ibb.co/JC1trcQ

    So it looks like we are getting another 50+ titles soon that use FPS Boost on Xbox.

    What's the over/under that MS starts to use ML resolution scaling on these older titles? It seems like there is so much power in these next-gen consoles that just boosting the frame rate is leaving a lot on the table.
     
    Pete likes this.
  17. Jay

    Jay
    Veteran Regular

    Joined:
    Aug 3, 2013
    Messages:
    3,697
    Likes Received:
    3,045
    Considering the upscaling would need to be implemented in engine?
    Just rendering at a higher resolution like the x360 & OG Xbox BC would be a huge benefit.

    They actually demoed it on the Gears remaster to DF during the initial reveal, but nothing's come of it yet.

    Some of those 120fps boost titles could've done with a resolution bump instead.
    Some titles could even do both.
     
  18. eastmen

    Legend Subscriber

    Joined:
    Mar 17, 2008
    Messages:
    12,871
    Likes Received:
    3,754
    I'm wondering if they could simply enable it through the emulation software they're using and force it.
     
  19. Jay

    Jay
    Veteran Regular

    Joined:
    Aug 3, 2013
    Messages:
    3,697
    Likes Received:
    3,045
    I wouldn't hold my breath for ML upscaling that doesn't also require motion vectors as input. But I'm no expert.

    It must've run into problems, but as I said, they did have a way to render XO games at higher resolutions. Probably more demanding than ML upscaling, but it doesn't need to be done in engine, and it would give good results.

    It's really needed for XO/1S games running on XSX and especially XSS. 1X titles aren't as bad.
     
    BRiT likes this.
  20. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    18,122
    Likes Received:
    8,395
    Sure, but in any given period of time, if your processing is limited by how much data you can fetch in X amount of time, it doesn't matter if you process items more quickly due to clock speed or due to more CUs.

    Basically if your SOC is sitting idle because you can only get X amount of data to the SOC then it does not matter how quickly it can process that data. The faster you process it, the faster you get to sit there and wait for more data to come in.

    In that case, regardless of the system architecture, the system that can provide more data to the SOC will have an advantage.

    That's what happens if you become limited by your bandwidth. Your SOC could be operating at 10 GHz instead of 3 GHz, but both are rendering at the same speed if they both have the same bandwidth and are limited by it. Another SOC could have 1000 CUs or 300 CUs, but they are both rendering a scene at the same speed if they are limited by the same bandwidth.

    In situations like this, where pure bandwidth is the main limitation, whatever system has more bandwidth will have the advantage, regardless of how much faster or wider one SOC is than another. It doesn't matter how quickly your SOC can process the data if your system can get less data to the SOC than another architecture can.
    • The PS5 can, in ideal situations, pull 448 GB/s of data from main memory. The SOC can process more than that depending on the workload, but that is the absolute limit if it has to access main memory.
    • The XBS-X can, in ideal situations, pull 560 GB/s of data from main memory. The SOC can process more than that depending on the workload, but that is the absolute limit if it has to access main memory.
    Bandwidth-limited situations mean the SOC could process more than those data rates at that point in time, but it is limited to those rates. When this situation arises, it doesn't matter that you have similar or greater bandwidth per CU; your system is still sitting there twiddling its thumbs if it's already pulling in the maximum amount of data that it can.

    Bandwidth obviously isn't the only limitation you can run into, so how much of a limitation it is depends on how often your system needs more data than it can pull from main memory.
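
    As a toy model (assumed workload figures, not measurements): once a frame's data-movement time exceeds its math time, extra compute, whether from clocks or CUs, can't lower the floor:

    ```python
    def frame_floor_ms(bytes_moved, bw_gbs, flops, gflops):
        # A frame can finish no faster than the slower of data movement
        # (bytes / bandwidth) and math (FLOPs / peak rate).
        move_ms = bytes_moved / (bw_gbs * 1e9) * 1e3
        math_ms = flops / (gflops * 1e9) * 1e3
        return max(move_ms, math_ms)

    # Assumed frame: touches 8 GB of data, does 5 GFLOPs of math.
    # At 448 GB/s it is bandwidth-bound, so 3x more compute changes nothing:
    for gf in (3_000, 10_000):
        print(f"{gf} GFLOP/s -> {frame_floor_ms(8e9, 448, 5e9, gf):.1f} ms floor")
    # Both print ~17.9 ms: the SOC waits on data either way.
    ```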

    Regards,
    SB
     
    mr magoo likes this.