Xbox Series X [XBSX] [Release November 10 2020]

For some reason I thought I remembered XB1 being a higher version than DX12.0. Thanks for the clarification and additional info.
Yeah, well, it would be a dramatically different story for our esram friend if it supported FL 12_1. It would have enabled XBO to use some more efficient algorithms and ideally kick-start the whole conservative rasterization, UAV, ROV and executeIndirect movement years earlier, instead of seeing these types of titles only arrive in 2020.

Then there is a secondary grouping titled Shader Model Support, which can be related to Feature Levels, but a new feature level isn't necessary to support a new shader model.
Support is listed below:
  • Shader Model 5.1 — GCN 1.0 and Fermi+, DirectX 12 (11_0 and 11_1) with WDDM 2.0.
  • Shader Model 6.0 — GCN 2.0+ and Maxwell 2+, DirectX 12 (12_0 and 12_1) with WDDM 2.1.
  • Shader Model 6.1 — GCN 2.0+ and Maxwell 2+, DirectX 12 (12_0 and 12_1) with WDDM 2.3.
  • Shader Model 6.2 — GCN 2.0+ and Maxwell 2+, DirectX 12 (12_0 and 12_1) with WDDM 2.4.
  • Shader Model 6.3 — GCN 2.0+ and Maxwell 2+, DirectX 12 (12_0 and 12_1) with WDDM 2.5.
  • Shader Model 6.4 — GCN 5.0+, Maxwell 2+ and Skylake+, DirectX 12 (12_1) with WDDM 2.6.
  • Shader Model 6.5 — Pascal+ and Skylake+, DirectX 12 (12_1) with WDDM 2.7.
Not sure where XBO stood in terms of shader model. That part isn't talked about much. I'm going to assume 6.0 as a baseline; not sure if it went higher to something like 6.1. I don't even think it supports most of what is in 6.0.

6.3 is where we get DXR 1.0.
6.4 is where they added the machine learning intrinsics.
6.5 is where we get DXR 1.1 and basically all the other stuff that comes with DX12 Ultimate.
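For reference on the PC side, this is roughly how you ask the runtime what a given GPU actually reports; just a minimal sketch against the public D3D12 caps API (the function name and the 6_5 target are my own choices for illustration):

#include <d3d12.h>

// Minimal caps query: which feature level and shader model does the device report?
void ReportCaps(ID3D12Device* device)
{
    const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
        D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS fl = {};
    fl.NumFeatureLevels = sizeof(requested) / sizeof(requested[0]);
    fl.pFeatureLevelsRequested = requested;
    device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS, &fl, sizeof(fl));
    // fl.MaxSupportedFeatureLevel is the highest of the requested levels the GPU supports.

    D3D12_FEATURE_DATA_SHADER_MODEL sm = { D3D_SHADER_MODEL_6_5 };
    device->CheckFeatureSupport(D3D12_FEATURE_SHADER_MODEL, &sm, sizeof(sm));
    // sm.HighestShaderModel comes back clamped to the model you asked about, so ask for
    // the newest one you care about (6_5 here, the DXR 1.1 / DX12 Ultimate tier).
}

Feature level and shader model are reported independently, which is the point above: a new shader model doesn't require a new feature level.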
 
hmmm...
Well, a lot of older GPUs support DX12. The mono driver and DX12 is where they wanted to head, but XBO was just so delayed; everything was just so delayed. You can really see how MS turned it around with this coming generation: APIs, feature set, hardware, targets being hit, marketing that makes sense, services moving forward, all sorts of compatibility, all of it coming together in a very strong cadence.

In this case, on the topic of XBO hardware feature support, XBO only supported features up to 12_0. There were some specific Xbox-only features that it supported, in particular some additional microcode around the executeIndirect function. But outside of that, we didn't see any specific hardware support beyond what GCN already supported, in either tiers or feature levels.

A _sharp_ contrast to what we have coming.

I expect XSX to continue its development of the command processor, incorporating the additional work they've been doing there from XBO to X1X and now XSX. They have a different tier of VRS that is not covered by DX12U (if our patent understanding is correct); the AMD and Nvidia implementations also differ in what is offered here. Not sure if MS has more up its sleeve. But as a baseline, yes, all three should have VRS.
The DF article mentioned something about "the Series X GPU allows for work to be shared between shaders without involvement from the CPU, saving a large amount of work for the Zen 2 cores, with data remaining on the GPU". Definitely sounds like more development on the command processor being able to generate work without the CPU.
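For context on what the tiers mean in practice on PC, here's a minimal sketch of the public D3D12 VRS API (the function name is mine, and it assumes a device and command list already exist); Tier 1 is per-draw only, Tier 2 adds per-primitive rates and a screen-space rate image:

#include <d3d12.h>

// Query the VRS tier, then set a coarse 2x2 per-draw shading rate if it's supported.
void SetCoarseShadingRate(ID3D12Device* device, ID3D12GraphicsCommandList5* cmdList)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6, &opts, sizeof(opts));
    if (opts.VariableShadingRateTier >= D3D12_VARIABLE_SHADING_RATE_TIER_1)
    {
        // Tier 2 hardware would also let you combine this per-draw rate with
        // per-primitive rates and an RSSetShadingRateImage screen-space image.
        const D3D12_SHADING_RATE_COMBINER combiners[2] = {
            D3D12_SHADING_RATE_COMBINER_PASSTHROUGH,
            D3D12_SHADING_RATE_COMBINER_PASSTHROUGH
        };
        cmdList->RSSetShadingRate(D3D12_SHADING_RATE_2X2, combiners);
    }
}

Whatever extra tier MS may have patented would presumably sit on top of (or outside) this, which is why it wouldn't show up in the DX12U spec.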
 
The DF article mentioned something about "the Series X GPU allows for work to be shared between shaders without involvement from the CPU, saving a large amount of work for the Zen 2 cores, with data remaining on the GPU". Definitely sounds like more development on the command processor being able to generate work without the CPU.
It would be a major win for the current generation if the next generation pushed developers to go this route. Having executeIndirect with state changes is the only forward-looking feature they have in the console, and if programming moves in this direction it could prolong the lifespan of these consoles, as Jaguar is significantly weaker compared to Zen 2. But if you're reducing the amount of work it needs to do, Jaguar might just be enough to keep up.

Traditionally, I believe executeIndirect can be used to generate work for itself. In the demo Max presented, the GPU used executeIndirect to send the near-finished results over to the iGPU to finish the buffer off and send it out for processing without CPU intervention.
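For anyone who hasn't poked at the API, this is roughly the shape of "executeIndirect with state changes" on PC; a minimal sketch of a command signature whose GPU-written records each set a root constant before their draw (the struct layout and names are mine, not from Max's demo):

#include <d3d12.h>

// One GPU-written record per draw: a root constant (the state change) plus draw args.
struct IndirectRecord
{
    UINT objectIndex;               // consumed as root constant 0
    D3D12_DRAW_ARGUMENTS drawArgs;  // standard non-indexed draw arguments
};

// Command signature meaning "set root constant, then draw" for every record in the buffer.
void CreateDrawSignature(ID3D12Device* device, ID3D12RootSignature* rootSig,
                         ID3D12CommandSignature** outSig)
{
    D3D12_INDIRECT_ARGUMENT_DESC args[2] = {};
    args[0].Type = D3D12_INDIRECT_ARGUMENT_TYPE_CONSTANT;
    args[0].Constant.RootParameterIndex = 0;  // assumes root parameter 0 is a 32-bit constant
    args[0].Constant.DestOffsetIn32BitValues = 0;
    args[0].Constant.Num32BitValuesToSet = 1;
    args[1].Type = D3D12_INDIRECT_ARGUMENT_TYPE_DRAW;

    D3D12_COMMAND_SIGNATURE_DESC desc = {};
    desc.ByteStride = sizeof(IndirectRecord);
    desc.NumArgumentDescs = 2;
    desc.pArgumentDescs = args;
    device->CreateCommandSignature(&desc, rootSig, IID_PPV_ARGS(outSig));
}

A compute pass can then fill a buffer of IndirectRecords and the CPU never has to know what ended up in it.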
 
generation pushed developers to go this route. Having executeIndirect with state changes is the only forward-looking feature
Bet you can't wait. It'll be like living your wet dream, how much you've been wanting this over the years.
May finally be at an implementation that is widely usable.
 
Bet you can't wait. It'll be like living your wet dream, how much you've been wanting this over the years.
May finally be at an implementation that is widely usable.
Lol. I'm just curious to see the process play out as expected. We've had a lot of dud features over the years. I do recall reading about some developers taking it on; it's one step closer to complete separation of CPU and GPU activities. None of it really means anything for me; I'm just interested in seeing this programming paradigm play out. XSX might just be the closest they'll get to that, with the GPU being able to call textures directly from the SSD. It may need very little CPU intervention. It's the sort of thing where, if we are successful here, you can move forward further on mGPU as well.

Ideally we'll see much higher saturation of the CUs because you're not waiting on the CPU for the next set of instructions. You just keep mowing through everything without CPU intervention.
 
Ideally we'll see much higher saturation of the CUs because you're not waiting on the CPU for the next set of instructions. You just keep mowing through everything without CPU intervention.

Additionally with the potential to use the CUs for ML work, I think MS won't have much trouble getting high usage of the CUs. It'll be interesting to see how the hardware ends up getting used as this generation moves away from the pure CPU/GPU (rasterization) paradigm that's dominated gaming since 3D hardware acceleration hit.

Regards,
SB
 
Additionally with the potential to use the CUs for ML work, I think MS won't have much trouble getting high usage of the CUs. It'll be interesting to see how the hardware ends up getting used as this generation moves away from the pure CPU/GPU (rasterization) paradigm that's dominated gaming since 3D hardware acceleration hit.

Regards,
SB
Or at the least, if I'm going to reuse Mark Cerny's words here:

"It's a very different model from what the GPU does; the GPU has caches, which are wonderful in some ways but also can result in stalling when it is waiting for the cache line to get filled. GPUs also have stalls for other reasons, there are many stages in a GPU pipeline and each stage needs to supply the next. As a result, with the GPU if you're getting 40 per cent VALU utilisation, you're doing pretty damn well. By contrast, with the Tempest engine and its asynchronous DMA model, the target is to achieve 100 percent VALU utilisation in key pieces of code."

Ideally, on top of reducing CPU utilization, we're going to see a drop in GPU stalling (because the command processor is feeding everything it needs to do next without the CPU needing to do as much hand-holding), leading to increased utilization.

Quoting sebbbi again:
IMHO the biggest advantage of the indirect draws and multidraw (with indirect draw count argument) is the ability to prepare the draws solely on GPU side. This allows late culling on GPU side, based on GPU known data (such as the depth buffer). This is especially important for shadow map rendering, as you can achieve dramatic cost savings by doing fine grained culling of shadows based on currently visible surfaces (depth buffer = all visible surface pixels). Both me and Ulrich (from AC:Unity team) are talking about our shadow rendering optimizations in our Siggraph 2015 presentation: http://advances.realtimerendering.c...siggraph2015_combined_final_footer_220dpi.pdf. Not many cross platform games are yet using optimizations like this... most likely because of the PC graphics API issues discussed above.
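Rough sketch of the API shape sebbbi is describing, assuming a GPU culling pass has already appended the surviving draw records and written the draw count (buffer and function names are mine); the CPU records a single call and the GPU decides how many draws actually run:

#include <d3d12.h>

// One CPU-recorded call; both the draw records and the count were written on the GPU.
void SubmitGpuCulledDraws(ID3D12GraphicsCommandList* cmdList,
                          ID3D12CommandSignature* drawSignature,
                          ID3D12Resource* argsWrittenByCullingShader,
                          ID3D12Resource* countWrittenByCullingShader,
                          UINT maxDraws)
{
    cmdList->ExecuteIndirect(
        drawSignature,
        maxDraws,                          // upper bound the argument buffer was sized for
        argsWrittenByCullingShader, 0,     // argument buffer + offset
        countWrittenByCullingShader, 0);   // count buffer + offset: the GPU-side draw count
}

So after the barrier on those two buffers, nothing on the CPU ever needs to know how many objects survived the depth-based culling.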

Oh yeah, goodbye API issues, hopefully.

As always, going to reference this asteroid demo made by a moderator on this board (forgot the username; he works at Intel now, good guy):
https://www.dsogaming.com/news/dire...proves-performance-greatly-reduces-cpu-usage/

DX11: 29 fps
DX12: 75 fps
DX12 + ExecuteIndirect: 90 fps

Each step took more CPU work out of the way, allowing the GPU to push further.
For me this is a removal of stalls; there's no other change to the demo. So I see this as increased utilization. In this case, nearly a 20% improvement in this particular demo (that's how I get to the ~60% utilization scenario above).

There might be further potential to be realized!
 
Not sure if MS has more up its sleeve.
The mesh shader implementation apparently goes beyond the PC DX12 spec in at least one way, too:

[attached image]


(from here)
 
Apparently some possible XSX screenshots here, running with photogrammetry assets from (probably) Quixel:
This brand new technology created by Quixel allows us to design open procedural worlds using VRS and Ray-Tracing at full scale, creating a totally Photo-Realistic environment that leaves anyone jaw-dropping. This system was implemented in Unreal Engine 4, and is being used in Senua's Saga: Hellblade II, shown during the 2019 Game Awards running on Xbox Series X.


[attached screenshots]
 
Is that from a game in development? It's at least Hellblade 2 level, I think, possibly even matching the Unreal tech demos from a few years back.

His title is

"Psycobaker 12Tflops RDNA2 VRS DirectML Mesh Shader"

Are they using all of that tech?
Some of it is from FH4. These are just untouched hi-res variants of their artwork before it goes into the game. The last piece was made in UE4, which is not Playground's engine.

I think they are just playing around with it, but it's interesting to see how far Quixel has come. I do feel like if everyone goes Quixel there's going to be a lot of homogeneity.
 
I find this boring; it looks like a high-resolution version of Google Maps. OK, next-gen consoles can display crisp and detailed graphics, but that was to be expected. What is the point in putting photos from a camera onto a geometry model and running around a Google Maps scenario?
 
I find this boring; it looks like a high-resolution version of Google Maps. OK, next-gen consoles can display crisp and detailed graphics, but that was to be expected. What is the point in putting photos from a camera onto a geometry model and running around a Google Maps scenario?

I think these are just some early stills from testing/playing around. The actual game is most likely going to have a different art style. Those details/textures in-game would be very impressive.
 
That can't be running in real time on a GPU available now, can it? I think it was the Heretic demo running in real time on a 2080 Ti. Looking at them again, the Heretic demo looks even more impressive.
 
That can't be running in real time on a GPU available now, can it? I think it was the Heretic demo running in real time on a 2080 Ti. Looking at them again, the Heretic demo looks even more impressive.

I am not expecting next-gen games to look like that, but if they do I'll be pleased. I'm just linking it because it's a large photogrammetry project.
 
MS really hit it out of the park with the XSX. They showed with the X1X that they were serious about the power crown from then on, and they have followed through with the XSX.
I also think MS isn't really getting the kudos they deserve for the customizations they have done either. It's more than just the TFLOPs. With the CPU they allowed devs to choose between having SMT enabled or not, giving a speed boost if they choose not to enable it. Such a simple thing that will have a payoff, even if slight.
The customization to the GPU to allow machine learning wouldn't have cost much to do, and they have the API to benefit from it, so why not? Again, an excellent decision.
Sampler Feedback Streaming, again another customization that will help with efficiencies.
VRS. MS has its own patented version of VRS, which AMD has announced full support for with DirectX 12 Ultimate. Haven't heard anything from Sony about this, and you would have expected them to have thrown those three letters into the mix when talking about the Geometry Engine. They know people are asking about it, they know MS has been boasting about it, so there would be every reason for them to say "oh, and we have VRS as well".
The Velocity Architecture is again all about efficiencies and milking every bit of extra out of the console.
Mesh Shading will increase GPU and CPU performance. The demo showed off by MS gave an indication of how well it will work.
I expect Sony's Geometry Engine to be the same.
But all in all MS did pretty much all it could do.
I don't see Sony having anything that will outdo what MS put together. Sony will sing about the areas they believe they have the advantage in, and they have been singing about the SSD and 3D audio.
Now don't take this as Sony bashing, as I think the developers will matter more than the hardware, and in this era of diminishing returns with graphics, I doubt anyone other than DF with all their tools will be able to pick out any differences with them running side by side.
What this is about is seeing how MS has gone all out to bring the best console they could to the market.
Good on them, it has put Sony on the back foot, and this can only drive competition and innovation.
 
MS has its own patented version of VRS, which AMD has announced full support for with DirectX 12 Ultimate. Haven't heard anything from Sony about this, and you would have expected them to have thrown those three letters into the mix when talking about the Geometry Engine.

What does variable rate shading have to do with the geometry engine? The two features operate on two different stages of the rendering pipeline AFAIK.
 
What does variable rate shading have to do with the geometry engine? The two features operate on two different stages of the rendering pipeline AFAIK.
That was the area of the talk where Cerny started talking about other GPU features, beyond the SSD or the 3D audio engine. It's when he would have talked about VRS alongside the GE, ray tracing, etc.
 