Xbox Series X [XBSX] [Release November 10 2020]

Sure, all things being equal, going faster is going to be better than going slower. But the clockspeed improvements on Nvidia's side may not translate to what we have on the console side of things. Nvidia's cards run higher memory bandwidth, are wider, and run fairly fast as well, with dedicated RT cores. Both consoles are hamstrung on width, have limited cache, and their bandwidth is much lower than Nvidia's offerings. Nvidia's whole pipeline is larger in general, so improving clockspeeds is less likely to hit a bottleneck earlier: in particular I'm looking at memory.

I'm not sure how the consoles will react here. The 6800+ series of GPUs have big caches that could possibly fit a portion of the BVH. I really don't know what to expect without more data.
It would be very surprising if higher clocks didn't result in better performance in RT games on Radeon cards ;)
 
It would be very surprising if higher clocks didn't result in better performance in RT games on Radeon cards ;)
They have a massive cache, right ;) whose bandwidth increases in line with the clock boost.

Once you hit a memory latency/memory bottleneck, I don't know how much more improvement you're going to see increasing clockspeed further.
 
You linked a gameplay video of a game which looks (I'm saying) way better than anything that will release exclusively on XSX this year.
The context of my post is quoting a statement that said "the Xbox will be doing increasingly better than the PS5 this generation".
Had the comment said "increasingly similar" instead, I'd agree.

But that's because it's a good art team; I don't see anything technically dazzling in Ratchet.
This game developer does.
 
They have a massive cache, right ;) whose bandwidth increases in line with the clock boost.

Once you hit a memory latency/memory bottleneck, I don't know how much more improvement you're going to see increasing clockspeed further.
Once you hit a memory bottleneck I don't know how much more improvement you are going to see increasing CU number ;) Also, I recommend the ComputerBase 6700 XT review; CU count does not scale linearly in terms of performance advantage.
 
Once you hit a memory bottleneck I don't know how much more improvement you are going to see increasing CU number ;) Also, I recommend the ComputerBase 6700 XT review; CU count does not scale linearly in terms of performance advantage.
yup. That's usually why they scale bandwidth with CUs on the design.

Scaling the CUs has to do with workload, right? If you're designing shaders for 36 CUs, it's going to perform optimally there. Older games were designed for smaller CU counts, in the case of the most popular card at the time (the 1070), which made a lot of sense. I don't know what going forward will look like in terms of performance targets. It could be perfectly okay, or 5 years down the road when PC has moved onto 80 CUs as a base number, you're probably going to want to take even more advantage of it, not less. I can't predict the future, but usually things get bigger.
 
yup. That's usually why they scale bandwidth with CUs on the design.

Scaling the CUs has to do with workload, right? If you're designing shaders for 36 CUs, it's going to perform optimally there. Older games were designed for smaller CU counts, in the case of the most popular card at the time (the 1070), which made a lot of sense. I don't know what going forward will look like in terms of performance targets. It could be perfectly okay, or 5 years down the road when PC has moved onto 80 CUs as a base number, you're probably going to want to take even more advantage of it, not less. I can't predict the future, but usually things get bigger.
At best the real performance advantage of a higher CU count matches the theoretical one; in practice you will not see a linear increase with CU increase. Still, clocks are always at least as important.
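One toy way to see why CU scaling ends up sub-linear, ignoring bandwidth completely: more CUs only help if the dispatch has enough workgroups to keep them all busy. A rough sketch with made-up dispatch sizes, pretending each CU chews through one workgroup per "pass" (real CUs run many waves concurrently, so this only shows the quantisation effect):

```python
# Toy model of CU-count scaling: time is proportional to the number of
# passes needed to drain a dispatch, i.e. ceil(workgroups / CUs).
import math

def passes(workgroups: int, num_cus: int) -> int:
    """How many rounds it takes to run every workgroup (lower is faster)."""
    return math.ceil(workgroups / num_cus)

for wg in (36, 500, 10000):
    t36, t52 = passes(wg, 36), passes(wg, 52)
    print(f"{wg:>6} workgroups: 36 CUs -> {t36:>3} passes, "
          f"52 CUs -> {t52:>3} passes, speedup {t36 / t52:.2f}x "
          f"(theoretical {52 / 36:.2f}x)")
```

Only the very large dispatch gets close to the theoretical 1.44x; small dispatches see little or none of it, and in real games bandwidth and fixed-function limits eat into it further.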
 

Weird way to write 'reaction youtuber'. Ratchet looks great, but it's not the first game to include a good hair renderer or object-based motion blur.

Edit: OK, I watched the video. What did he say was technically special? He spent the whole time talking about art polish.

And he spent a while looking at a scene saying 'nothing looks like a realtime renderer', which is extreme hyperbole; there are a zillion details in that shot that show the limitations. It's a credit to the art team that they don't draw a naive eye.
 
Yes, but as the latest examples have shown, the number of simulated rays doesn't translate linearly into better gaming performance, and for ML Microsoft has yet to implement it in any meaningful way in a released game.

I don't follow what you mean by the first part, sorry. Could you clarify it for me please?

As for the ML part, it's true that MS have yet to implement it in any meaningful way, but we have evidence by way of DLSS that ML hardware does have a use. It's still a bit of an unknown quantity, and it may transpire that the RDNA2 ML capabilities aren't good enough to be leveraged in place of compute shader upscaling/reconstruction techniques, but in theory, the XSX should fare better here.

Beyond upscaling/reconstruction, I don't know what gaming applications ML has though. I'd be grateful if anyone here can point me to any articles or, better yet, just give me some ideas.

The compute advantage of the SeriesX does give it the edge in simulating more rays/second, but OTOH the higher clocks on the PS5 should give it better performance on the back-end. The SeriesX has higher memory bandwidth, but the PS5 has faster I/O and no memory contention issues. They both have the same number of shader engines, so it's likely that the PS5 gets a bit better utilization out of its shader processors (fewer ALUs per shader engine).

What do you mean by better performance on the back end? Sorry, I'm a bit dozy right now, and trying to figure out just about anything might just be the end of me.

Please correct me if I'm wrong, but every argument I've seen in favour of the PS5's architecture seems to indicate that its higher clocks mean it will consistently be somewhat superior when it comes to traditional rasterisation. However, the CU and bandwidth advantages turn the tables in favour of the XSX when it comes to RT performance.

In the end they're both very close. There's no clear cut "long-term winner" here. If 1st party and high-profile 3rd party developers do their job well, both consoles will get gorgeous titles this gen.

I pretty much agree. There's certainly no clear-cut long-term winner right at this moment, as all of the XSX's advantages are dependent upon game engines moving towards RT and ML-based upscaling/reconstruction. Both consoles will be fine, both consoles will produce beautiful games. I just think that after a few years, the PS5 will have slightly worse RT and run at a slightly, but perceptibly, lower rendering resolution.

Except the average Joe still won't give a toss and they'll carry on ruining the gaming industry by throwing money at Fortnite.

Why are we ignoring clocks here? Higher clocks also lend themselves well to RT and ML tasks.

I'm not, I acknowledged them in my second paragraph. Arguably not clearly enough though.

I woke up in the wee hours of the morning today, and I've just finished exercising and eating, so I'm rather sleepy and struggling to properly articulate my thoughts now, but my impression of the two main consoles is as follows:
The XSX has more of a CU quantity advantage than the PS5 has a clockspeed advantage.
 
They have a massive cache, right ;) whose bandwidth increases in line with the clock boost.

Once you hit a memory latency/memory bottleneck, I don't know how much more improvement you're going to see increasing clockspeed further.
Also, on PC you have the Infinity Cache, where the higher the frequency of the GPU, the bigger the benefit.
In console space that could mean the PS5 is hurt more by lacking it than the Xbox Series consoles are.

I'm unsure what the discussion is though: would faster be beneficial? Well sure, if all things remained equal. But that's not the case, as you've pointed out.
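To put rough numbers on the cache point: a big on-die cache serves part of the traffic at a bandwidth that scales with the GPU clock, while the DRAM part stays fixed. A sketch with a made-up hit rate and cache width (not real RDNA2 figures):

```python
# Toy model: effective bandwidth of a GPU with a large on-die cache.
# The cache's bandwidth scales with core clock; DRAM bandwidth does not.
DRAM_BW_GBPS = 512           # illustrative off-chip bandwidth, fixed
CACHE_BYTES_PER_CLK = 1024   # illustrative bytes the cache delivers per clock
HIT_RATE = 0.6               # assumed fraction of traffic served from the cache

def effective_bw(clock_ghz: float) -> float:
    cache_bw = CACHE_BYTES_PER_CLK * clock_ghz            # GB/s, clock-scaled
    return HIT_RATE * cache_bw + (1 - HIT_RATE) * DRAM_BW_GBPS

for clk in (1.8, 2.2):
    print(f"{clk} GHz -> ~{effective_bw(clk):.0f} GB/s effective")
```

The higher the clock, the bigger the share of bandwidth that scales with it, which is why a design like this rewards high frequencies, and why a console without it leans entirely on its fixed GDDR6 bandwidth.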
 
At best the real performance advantage of a higher CU count matches the theoretical one; in practice you will not see a linear increase with CU increase. Still, clocks are always at least as important.
Clockspeed will always help, but how much it improves performance will differ from one setup to another. Overall, yes, clockspeed is an ideal thing to have, but without benchmarking different loads you're not going to know. Like if we benchmark 128-bit floats versus 64-bit floats, you're going to want something more than just clockspeed to really improve performance, right?
 
I woke up in the wee hours of the morning today, and I've just finished exercising and eating, so I'm rather sleepy and struggling to properly articulate my thoughts now, but my impression of the two main consoles is as follows:
The XSX has more of a CU quantity advantage than the PS5 has a clockspeed advantage.
Yes, it's 12 TF vs 10 TF, but I'm not sure why we're going from this to CU number ;)
 
At best the real performance advantage of a higher CU count matches the theoretical one; in practice you will not see a linear increase with CU increase. Still, clocks are always at least as important.

Assuming, of course, that you have the memory bandwidth to feed it. Wider doesn't help unless you have more bandwidth. Hence, why MS has the non-uniform (speed) memory configuration.

Higher clocks can also be beneficial if memory bandwidth can feed them. Here the PS5 operates at a deficit when compared to the XBS-X.

RT makes use of high bandwidth in order to keep things fed. NV sees a benefit when increasing core clock speed because they also have extremely high bandwidth.

Neither the XBS-X nor the PS5 have bandwidth approaching that of NV. The question here is how much the bandwidth of a respective system will hold it back WRT RT. If RT is the limiting factor then it doesn't matter how much faster or wider you make something. If memory bandwidth is the limiting factor for RT, then the XBS-X will have an advantage as it'll be able to feed the GPU more data in a given period of time. If it isn't, then other parts of the architecture come into play.

And that's what Iroboto has been saying. We just don't know at the moment because we don't have enough data to even take a guess currently as to how this will play out on consoles.

It's far more complex than just saying:
  • More clock speed = better.
  • More CU = better.
Each of those "could" be true depending on whether they can get fed fast enough from main memory if main memory needs to be accessed. There are going to be situations where clock speed will be more beneficial and there will be situations where more CUs will be better. And there will be situations where neither matter and it's all about how much bandwidth you have.

Regards,
SB
 
Assuming, of course, that you have the memory bandwidth to feed it. Wider doesn't help unless you have more bandwidth. Hence, why MS has the non-uniform (speed) memory configuration.

Higher clocks can also be beneficial if memory bandwidth can feed them. Here the PS5 operates at a deficit when compared to the XBS-X.

RT makes use of high bandwidth in order to keep things fed. NV sees a benefit when increasing core clock speed because they also have extremely high bandwidth.

Neither the XBS-X nor the PS5 have bandwidth approaching that of NV. The question here is how much the bandwidth of a respective system will hold it back WRT RT. If RT is the limiting factor then it doesn't matter how much faster or wider you make something. If memory bandwidth is the limiting factor for RT, then the XBS-X will have an advantage as it'll be able to feed the GPU more data in a given period of time. If it isn't, then other parts of the architecture come into play.

And that's what Iroboto has been saying. We just don't know at the moment because we don't have enough data to even take a guess currently as to how this will play out on consoles.

It's far more complex than just saying:
  • More clock speed = better.
  • More CU = better.
Each of those "could" be true depending on whether they can get fed fast enough from main memory if main memory needs to be accessed. There are going to be situations where clock speed will be more beneficial and there will be situations where more CUs will be better. And there will be situations where neither matter and it's all about how much bandwidth you have.

Regards,
SB
Reminder: the XSX has a 25% bandwidth advantage, slightly more than its 20% theoretical compute advantage, so the bandwidth increase is almost linear with the compute increase. I still don't know how, from this 20% more flops and 25% more bandwidth, many here make the jump to writing about the CU count advantage as super important while ignoring the real compute and bandwidth advantage ;)
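For reference, the back-of-the-envelope math from the public specs (rounded):

```python
# Rough figures from the published specs: CUs x 64 lanes x 2 FLOPs/clock x clock.
xsx_tf = 52 * 64 * 2 * 1.825e9 / 1e12   # 52 CUs at 1825 MHz
ps5_tf = 36 * 64 * 2 * 2.23e9 / 1e12    # 36 CUs at up to 2230 MHz

xsx_bw, ps5_bw = 560, 448               # GB/s (XSX figure is the fast 10 GB pool)

print(f"Compute:   {xsx_tf:.1f} vs {ps5_tf:.1f} TF   -> {xsx_tf / ps5_tf - 1:+.0%}")
print(f"Bandwidth: {xsx_bw} vs {ps5_bw} GB/s -> {xsx_bw / ps5_bw - 1:+.0%}")
```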
 
Reminder: the XSX has a 25% bandwidth advantage, slightly more than its 20% theoretical compute advantage, so the bandwidth increase is almost linear with the compute increase. I still don't know how, from this 20% more flops and 25% more bandwidth, many here make the jump to writing about the CU count advantage as super important while ignoring the real compute and bandwidth advantage ;)

If you look at the graphics pipeline as a whole, more CUs do not lead to linear increases in performance, because graphics workloads don't hit the pipeline equally in terms of load or saturation. But if you're going to target the discussion around a very specific function within the pipeline, then it's apt to consider the exact factors that come into play.
 
https://ibb.co/JC1trcQ

So it looks like we are getting another 50+ titles soon that use FPS Boost on Xbox.

What's the over/under that MS starts to use ML resolution scaling on these older titles? It seems like there is so much power in these next-gen consoles that just boosting the frame rate is leaving a lot on the table.
 
What's the over/under that MS starts to use ML resolution scaling on these older titles? It seems like there is so much power in these next-gen consoles that just boosting the frame rate is leaving a lot on the table.
Considering the upscaling would need to be implemented in engine?
Just rendering at a higher resolution, like the X360 & OG Xbox BC, would be a huge benefit.

They actually demoed it on the Gears remaster to DF during the initial reveal to them, but nothing's come of it yet.

Some of those 120fps boost titles could've done with a resolution bump instead.
Some titles could even do both.
 
Considering the upscaling would need to be implemented in engine?
Just rendering at a higher resolution, like the X360 & OG Xbox BC, would be a huge benefit.

They actually demoed it on the Gears remaster to DF during the initial reveal to them, but nothing's come of it yet.

Some of those 120fps boost titles could've done with a resolution bump instead.
Some titles could even do both.

I'm wondering if they could just enable it through the emulation software they are using and just force it.
 
I'm wondering if they could just enable it through the emulation software they are using and just force it.
I wouldn't hold your breath for ML Upscaling that doesn't also require motion vectors as input. But I'm no expert.

It must've run into problems, but as I said they did have a way to render XO games at higher resolutions. Probably more demanding than ML upscaling but doesn't need to be done in engine, and would give good results.

It's really needed for XO/1S games running on XSX and especially XSS. 1X titles aren't as bad.
 
Reminder: the XSX has a 25% bandwidth advantage, slightly more than its 20% theoretical compute advantage, so the bandwidth increase is almost linear with the compute increase. I still don't know how, from this 20% more flops and 25% more bandwidth, many here make the jump to writing about the CU count advantage as super important while ignoring the real compute and bandwidth advantage ;)

Sure, but in any given period of time, if your processing is limited by how much data you can fetch in X amount of time, it doesn't matter if you process items more quickly due to clock speed or due to more CUs.

Basically if your SOC is sitting idle because you can only get X amount of data to the SOC then it does not matter how quickly it can process that data. The faster you process it, the faster you get to sit there and wait for more data to come in.

In that case, regardless of the system architecture, the system that can provide more data to the SOC will have an advantage.

That's what happens if you become limited by your bandwidth. Your SOC could be operating at 10 GHz instead of 3 GHz, but both are rendering at the same speed if they both have the same bandwidth and are limited by it. Another SOC could have 1000 CUs or 300 CUs, but they are both rendering a scene at the same speed if they are limited by the same bandwidth.

In situations like this where pure bandwidth is the main limitation, then whatever system has more bandwidth will have the advantage regardless how much faster or wider an SOC is than another SOC. It doesn't matter how quickly your SOC can process the data if your system can get less data to the SOC than another architecture.
  • The PS5 can, in ideal situations, process 448 GB/s of data from main memory. The SOC can process more than that depending on the workload, but that is the absolute limit if it has to access main memory.
  • The XBS-X can, in ideal situations, process 560 GB/s of data from main memory. The SOC can process more than that depending on the workload, but that is the absolute limit if it has to access main memory.
A bandwidth-limited situation means the SOC could process more than those data rates at that point in time, but it's limited to processing data at those rates. When this situation arises, it doesn't matter that you have similar or greater bandwidth per CU; your system is still sitting there twiddling its thumbs if it's already pulling in the maximum amount of data that it can.

Bandwidth obviously isn't the only limitation you can run into, so how much of a limitation it is depends on how often your system needs more data than it can pull from main memory.
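A toy roofline-style sketch of the idea, with completely made-up per-frame workload numbers just to show the shape of it:

```python
# Toy roofline: a frame needs some math and some data from main memory,
# and the frame time is whichever of the two takes longer.
def frame_ms(tflops: float, bw_gbps: float,
             work_tflop: float = 0.2, traffic_gb: float = 10.0) -> float:
    compute_ms = work_tflop / tflops * 1000   # time to do the math
    memory_ms = traffic_gb / bw_gbps * 1000   # time to move the data
    return max(compute_ms, memory_ms)

print("PS5-ish (10.3 TF, 448 GB/s):", round(frame_ms(10.3, 448), 1), "ms")
print("XSX-ish (12.1 TF, 560 GB/s):", round(frame_ms(12.1, 560), 1), "ms")
# Double the compute with the same bandwidth and nothing changes,
# because the memory side already dominates this hypothetical frame:
print("2x compute, same bandwidth :", round(frame_ms(24.2, 560), 1), "ms")
```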

Regards,
SB
 