Digital Foundry Microsoft Xbox Scorpio Reveal [2017: 04-06, 04-11, 04-15, 04-16]

Another thing to throw into the mix is the possible hit from contention with the CPU. PS4 developer slides showed CPU-prioritised access slicing up to 40 GB/s off the PS4's real-world BW figures. If that were to scale with clock speed, and there were no mitigating factors, could we be seeing PS4 Pro losing up to ~53 GB/s and Scorpio losing up to ~58 GB/s from the real-world BW available to the GPU?

As I understand it (or potentially misunderstand it), in addition to higher total BW, a wider bus might be more likely to keep at least some needed data flowing to the GPU and therefore avoid stalls ... though I'm not certain, and have even less idea whether a potential theoretical advantage from additional width would make a real-world difference.

Whatever the case, based on BW alone, Scorpio would seem to be in a better place to keep the CUs in the GPU busy even under adverse CPU contention. Speaking theoretically. And naively. And possibly ignorantly.
Maybe the CPU contention drops a lot now that draw calls are almost "free". The true test for bandwidth is seeing how many games run at 4K60. I remember one of the best games in history, Ninja Gaiden Black on the original Xbox, and I couldn't believe it. It made you wonder ... how can the Xbox be moving these graphics at 60 fps? Yet it ran almost flawlessly. I guess smart developers like sebbbi and so on could make games run at 4K60 with fine graphics, a la Ninja Gaiden. A 60 fps game competing against the best 30 fps games back then ... was awesome.
 
Another point I'm curious about is the bonus feature of forcing all filtering to AF (ed: for older titles), in case some code using bilinear and trilinear is actually depending on behaviors from that level of filtering.
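To make that concern concrete, here's a toy 1-D bilinear sampler (purely illustrative, not any console's actual filtering path): a shader that samples exactly between two texels relies on getting their average back in a single tap, so a system-level override to a different filter kernel with different tap weights could break the trick.

```python
# Toy 1-D bilinear filter. Some old tricks fetch between texels to get a
# free average in one sample; a forced switch to a wider anisotropic
# kernel could weight the taps differently and change the result.

def bilinear_1d(texels, u):
    """Sample a 1-D 'texture' with linear filtering at coordinate u in [0, 1]."""
    x = u * (len(texels) - 1)
    i = int(x)
    i2 = min(i + 1, len(texels) - 1)   # clamp at the edge
    frac = x - i
    return texels[i] * (1 - frac) + texels[i2] * frac

# Sampling exactly between two texels yields their average: 0.5 here.
print(bilinear_1d([0.0, 1.0], 0.5))
```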

At the end of the BC article there's an interesting comment:

"Some of the enhancements may cause compatibility issues on a very small percentage of titles, meaning that certain improvements listed above may not apply to all games."

Andrew Goossen specifically mentions the number of CUs in play as an example, but perhaps they have tools to detect how data is being used so they can avoid problems of the kind you're describing?
 
Maybe the CPU contention drops a lot now that draw calls are almost "free". The true test for bandwidth is seeing how many games run at 4K60. I remember one of the best games in history, Ninja Gaiden Black on the original Xbox, and I couldn't believe it. It made you wonder ... how can the Xbox be moving these graphics at 60 fps? Yet it ran almost flawlessly. I guess smart developers like sebbbi and so on could make games run at 4K60 with fine graphics, a la Ninja Gaiden. A 60 fps game competing against the best 30 fps games back then ... was awesome.

MS says they've done tons of analysis on how all parts of the X1 interact while running real software; hopefully they'll go into more depth on what they found at Hot Chips or something ...
 
This is good. This will really benefit games with unstable framerates that fluctuate from, say, 30 down to 27-25 fps, or from 60 down to 50 fps. It will make those dips almost imperceptible. This is really nice.

Also, this is probably handled at the system level given that previous X1 games and X360 games are supported.
 
Not sure why Microsoft chose the Nürburgring as the test case, given it's the only track you have to pay extra for in Forza 6 Apex on PC, grrrr. So unfortunately direct comparisons are difficult, but at the same Ultra 4K, 4xMSAA settings I seem to be getting around the low to mid 80s in terms of percentage GPU usage from the starting grid, with some peaks into the low 90s. No drops below 60fps though, unless you drive on a rainy course. Then it's consistently in the 50s.
 
direct comparisons are difficult but at the same Ultra 4K, 4xMSAA settings I seem to be getting around the low to mid 80's
Direct comparisons to Scorpio are not very useful; DF states that only settings that affected the GPU were set to Ultra on Scorpio. CPU-centric settings were not set to Ultra owing to the much weaker Jaguar CPU. Also, I don't think they ran with any MSAA at all.
 
Direct comparisons to Scorpio are not very useful; DF states that only settings that affected the GPU were set to Ultra on Scorpio. CPU-centric settings were not set to Ultra owing to the much weaker Jaguar CPU. Also, I don't think they ran with any MSAA at all.
There are CPU-centric settings in Apex? I thought they showed maxed cars in that demo?
 
Direct comparisons to Scorpio are not very useful; DF states that only settings that affected the GPU were set to Ultra on Scorpio. CPU-centric settings were not set to Ultra owing to the much weaker Jaguar CPU. Also, I don't think they ran with any MSAA at all.

They used 4x MSAA with EQAA for effectively 8x AA. It goes by different names: 4xEQ, or 4+4 (4x MSAA + 4 coverage samples), or, as used in the article, 8:4x.

"The crazy story here is that we've gone over our PC ultra settings and for everything that's GPU-related, we've been able to max it - and that's what we're running at, 88 per cent," says Tector, pointing to the utilisation data at the top of the screen. Right beneath it is the anti-aliasing setting - 4x, or rather 8:4x using the Radeon EQAA hardware AA.
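A rough way to picture the 8:4x naming (illustrative numbers only, not from the article): EQAA decouples coverage samples from stored colour samples, so 4xEQ tracks 8 coverage positions per pixel while paying the framebuffer cost of only 4 colour samples.

```python
# Sketch of why 4xEQ ("8:4x") gets 8x-style coverage at 4x colour cost.
# EQAA stores extra coverage samples without extra colour/depth storage.

modes = {
    "4x MSAA":     {"coverage": 4, "colour": 4},
    "4xEQ (8:4x)": {"coverage": 8, "colour": 4},
    "8x MSAA":     {"coverage": 8, "colour": 8},
}

BYTES_PER_COLOUR_SAMPLE = 4  # e.g. RGBA8, purely for illustration
for name, m in modes.items():
    cost = m["colour"] * BYTES_PER_COLOUR_SAMPLE
    print(f"{name}: {m['coverage']} coverage samples, "
          f"{cost} bytes of colour per pixel")
```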

Regards,
SB
 
At the end of the BC article there's an interesting comment:

"Some of the enhancements may cause compatibility issues on a very small percentage of titles, meaning that certain improvements listed above may not apply to all games."

Andrew Goossen specifically mentions the number of CUs in play as an example, but perhaps they have tools to detect how data is being used so they can avoid problems of the kind you're describing?

The description of the feature indicates the hardware is designed to override fetches to 16x AF, which I would expect to have less visibility into the use context of the resource in the application. The description of their testing process, and of the platform holder being responsible for fixing issues, seems to indicate that there's some higher-level testing and decision making being done, perhaps fed into the back-compat engine. I didn't see a description of what the back-compat engine is, or how backwards compatible the hardware is at a base level. Perhaps I missed it, but if not, it was seemingly a mistake to just throw the term into the article without elaboration.

Even if they use testing suites or tools, performance drops might be held to a threshold, since those can be empirically measured. Knowing what the developer considered an undesirable output may not be something tools, or the judgement of the platform holder, can determine.
 
You can change the skill level and aggressiveness of the AI as well as collisions on/off. I guess all that would have an impact on CPU.
Oh. They've got these on XBO as well. Not sure if ramping up their skill levels and their aggression/collision behaviour constitutes additional CPU load. If it does, I never noticed frame dips with all these features enabled on XBO.
 
Oh. They've got these on XBO as well. Not sure if ramping up their skill levels and their aggression/collision behaviour constitutes additional CPU load. If it does, I never noticed frame dips with all these features enabled on XBO.

I've just checked, and conveniently the game tells you what each setting impacts (it really is a superb port). So Reflections quality apparently has a "significant impact on both CPU and GPU". That's pretty interesting, as it potentially means there was a lower GPU load in the Xbox demo than at max PC settings if they had that dialled back.
 
I've just checked, and conveniently the game tells you what each setting impacts (it really is a superb port). So Reflections quality apparently has a "significant impact on both CPU and GPU". That's pretty interesting, as it potentially means there was a lower GPU load in the Xbox demo than at max PC settings if they had that dialled back.
good finds!
I should try these features out myself LOL. I have it installed but have never bothered to try it yet.
 
I know this is about Zen and not Scorpio, but it made me think about some of the latency-reducing and efficiency-increasing customisations that MS claims to have made to Scorpio's CPU:

https://www.pcper.com/reviews/Proce...Core-i5/CCX-Latency-Testing-Pinging-between-t

There may be much better solutions for communicating between modules available to MS from AMD. Looking at how much inter-CCX latency drops with faster memory (why - is it tied to the same clock?), MS might have had options to really push down latency and increase IPC for any workloads that require off-module cache access.
 
There may be much better solutions for communicating between modules available to MS from AMD. Looking at how much inter-CCX latency drops with faster memory (why - is it tied to the same clock?), MS might have had options to really push down latency and increase IPC for any workloads that require off-module cache access.

For Zen, currently, the data fabric runs at the non-doubled (non-DDR) clock of the memory bus. There are lower fractions, and a debugging mode where the fabric can run at the doubled clock, but there's a link to the memory controller's clock speed.

Scorpio was not mentioned as adopting AMD's Infinity Fabric. However, since the CPU, GPU, and memory bus are all faster, even just bumping the uncore proportionately would likely help.
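A tiny model of that relationship (the 100-cycle hop cost is invented purely for illustration): if the fabric runs at the memory clock, i.e. half the DDR transfer rate, then the same number of fabric cycles takes less wall-clock time with faster DRAM.

```python
# Illustrative model: fabric clock = MEMCLK (half the DDR transfer rate),
# so inter-CCX hop latency in nanoseconds falls as DRAM speed rises.

def fabric_clock_mhz(ddr_transfer_rate_mt_s):
    """Fabric runs at the non-doubled memory clock."""
    return ddr_transfer_rate_mt_s / 2

def inter_ccx_latency_ns(ddr_transfer_rate_mt_s, fabric_cycles=100):
    # fabric_cycles is a made-up hop cost, purely for illustration
    clock_ghz = fabric_clock_mhz(ddr_transfer_rate_mt_s) / 1000
    return fabric_cycles / clock_ghz

for rate in (2133, 2667, 3200):
    print(f"DDR4-{rate}: fabric {fabric_clock_mhz(rate):.0f} MHz, "
          f"~{inter_ccx_latency_ns(rate):.0f} ns per 100-cycle hop")
```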
 
I should try these features out myself LOL, I have it installed but never bothered to try it yet.
There are also Mirror Quality and Windshield Reflection Quality settings besides the general Reflections quality setting (which I suppose controls reflections on the surfaces of the cars). Another option which I believe has an impact on CPU is Particle Effects Quality, which controls the density and number of particles.
 
Nothing was mentioned, but the ID buffer was specifically pointed out as a custom feature added to the PS4 Pro in Mark Cerny's interview with DF, so I think that specific form of primitive tracking is not likely to show up.
The wording here would seem to suggest that the ID buffer (or at least something very similar) will be present, since there's not much else that would count as "hardware support" for checkerboarding.
Has this already been addressed in this thread? As for PS4 Pro, checkerboard rendering is patented and the ID buffer has a patent application:

Gradient adjustment for texture mapping to non-orthonormal grid
https://patents.google.com/patent/US9495790B2/en
Priority date 2014-04-05 Filing date 2014-04-05 Publication date 2016-11-15 Grant date 2016-11-15

Graphics processing enhancement by tracking object and/or primitive identifiers
https://patents.google.com/patent/US20150287239A1/en
Priority date 2014-04-05 Filing date 2014-04-05 Publication date 2015-10-08

(The list is from the "Should Sony have waited with PS4 Pro?" thread.)
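For what it's worth, the core idea behind an ID buffer in checkerboard reconstruction can be sketched in a few lines (a toy model, not taken from the patent text): the per-pixel primitive ID tells the resolve pass whether the previous frame's sample still belongs to the same surface.

```python
# Toy checkerboard fill using a primitive-ID buffer: if the ID at a missing
# pixel matches the previous frame's ID, temporal reuse is safe; on a
# mismatch (disocclusion), fall back to blending current-frame neighbours.

def reconstruct_pixel(id_now, id_prev, colour_prev, neighbours):
    """Fill one missing checkerboard pixel."""
    if id_now == id_prev:
        return colour_prev                    # same primitive: history is valid
    return sum(neighbours) / len(neighbours)  # disocclusion: spatial blend

# Same primitive -> temporal reuse; ID mismatch -> neighbour average.
print(reconstruct_pixel(7, 7, 0.8, [0.2, 0.4]))
print(reconstruct_pixel(7, 3, 0.8, [0.2, 0.4]))
```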
 
They could have their own variation on the ID buffer; it may not work precisely the same way. I have no idea if there's a different enough way of implementing something that would help checkerboarding in a similar way, though.

They are very big on dynamic resolution, I wonder if they've added anything in the hardware to support it even more.
Something like GPU counters at particular points of the rendering pipeline that could be used to give early warning that the rendering budget will be blown? If so, depending on where in the pipeline it's happening, the system could make different choices about what to scale back to get within budget for the next frame.
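Something along these lines, maybe (entirely made-up numbers; this is just the standard dynamic-resolution feedback loop, not anything MS has described):

```python
# Minimal sketch of a dynamic-resolution controller driven by measured
# per-frame GPU time. All thresholds and step sizes are invented.

FRAME_BUDGET_MS = 16.6            # 60 fps target
SCALE_STEP = 0.05
MIN_SCALE, MAX_SCALE = 0.7, 1.0

def next_resolution_scale(current_scale, gpu_frame_time_ms):
    """Adjust the render scale for the next frame from measured GPU time."""
    if gpu_frame_time_ms > FRAME_BUDGET_MS:        # over budget: shrink
        return max(MIN_SCALE, current_scale - SCALE_STEP)
    if gpu_frame_time_ms < 0.9 * FRAME_BUDGET_MS:  # clear headroom: grow
        return min(MAX_SCALE, current_scale + SCALE_STEP)
    return current_scale                           # close to budget: hold

scale = 1.0
for t in (18.0, 17.5, 16.0, 14.0):  # simulated GPU frame times in ms
    scale = next_resolution_scale(scale, t)
    print(f"GPU time {t} ms -> render scale {scale:.2f}")
```

Per-stage counters could refine this further, e.g. shrinking resolution only when the pixel-heavy stages are the ones over budget.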
 
Direct comparisons to Scorpio is not very useful, DF states that only settings that affected the GPU were set to Ultra on the Scorpio. CPU centric settings were not set to Ultra owing to the much weaker Jaguar CPU. Also don't think they ran with any MSAA at all.
Yeah, they used "AMD hardware EQAA", I think at 2x or 4x.
 