Xbox Series X [XBSX] [Release: November 10, 2020]

It doesn't matter anymore: the RT solution in Series X is less powerful than Turing's. It's now confirmed to share resources with the texture units, so each CU does either 4 ray ops per clock or 4 texture ops per clock.
Nvm, sorry, I didn't see that slide. Yes, 4 or 4.


I think as a requirement of DXR 1.1 (or whatever it's called), you need to be able to run ray queries inline from a compute shader.
Nvidia was also doing this serially, as I understand it, because of how DXR 1.0 is specced. This is solved in 1.1.

Let me research more into this.
 
DXR 1.1
Inline raytracing
(link to spec)

Inline raytracing is an alternative form of raytracing that doesn’t use any separate dynamic shaders or shader tables. It is available in any shader stage, including compute shaders, pixel shaders etc. Both the dynamic-shading and inline forms of raytracing use the same opaque acceleration structures.

Inline raytracing in shaders starts with instantiating a RayQuery object as a local variable, acting as a state machine for ray query with a relatively large state footprint. The shader interacts with the RayQuery object’s methods to advance the query through an acceleration structure and query traversal information.

The API hides access to the acceleration structure (e.g. data structure traversal, box, triangle intersection), leaving it to the hardware/driver. All necessary app code surrounding these fixed-function acceleration structure accesses, for handling both enumerated candidate hits and the result of a query (e.g. hit vs miss), can be self-contained in the shader driving the RayQuery.

The RayQuery object is instantiated with optional ray flags as a template parameter. For example in a simple shadow scenario, the shader may declare it only wants to visit opaque triangles and to stop traversing at the first hit.
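
To make that concrete, here is a minimal sketch of the simple shadow scenario in HLSL (Shader Model 6.5+). The Scene binding and the CastShadowRay wrapper are illustrative assumptions, not part of the spec:

    RaytracingAccelerationStructure Scene : register(t0);  // assumed TLAS binding

    float CastShadowRay(float3 origin, float3 toLight, float maxDist)
    {
        RayDesc ray;
        ray.Origin    = origin;
        ray.Direction = toLight;
        ray.TMin      = 0.001f;
        ray.TMax      = maxDist;

        // Template ray flags: visit only opaque triangles and stop traversal
        // at the first hit, as in the shadow example above.
        RayQuery<RAY_FLAG_CULL_NON_OPAQUE |
                 RAY_FLAG_ACCEPT_FIRST_HIT_AND_END_SEARCH> q;

        q.TraceRayInline(Scene, RAY_FLAG_NONE, 0xFF, ray);

        // With these flags there are no candidate hits for the shader to
        // resolve, so a single Proceed() call drives the fixed-function
        // traversal to completion.
        q.Proceed();

        return (q.CommittedStatus() == COMMITTED_TRIANGLE_HIT) ? 0.0f : 1.0f;
    }

Since this is just a local object and a function call, the same code can run from a compute shader, a pixel shader, or any other stage, with no raytracing pipeline state or shader tables involved.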

Motivation
Inline raytracing gives developers the option to drive more of the raytracing process, as opposed to handing work scheduling entirely to the system. This could be useful for many reasons:

  • Perhaps the developer knows their scenario is simple enough that the overhead of dynamic shader scheduling is not worthwhile, for example a well-constrained way of calculating shadows.
  • It could be convenient/efficient to query an acceleration structure from a shader that doesn’t support dynamic-shader-based rays, like a compute shader.
  • It might be helpful to combine dynamic-shader-based raytracing with the inline form. Some raytracing shader stages, like intersection shaders and any-hit shaders, don’t even support tracing rays via dynamic-shader-based raytracing, but the inline form is available everywhere.
  • Another combination is to switch to the inline form for simple recursive rays. This enables the app to declare there is no recursion for the underlying raytracing pipeline, given that inline raytracing is handling recursive rays. The simpler dynamic scheduling burden on the system might yield better efficiency. This trades off against the large state footprint in shaders that use inline raytracing.
The basic assumption is that scenarios with many complex shaders will run better with dynamic-shader-based raytracing, as opposed to using massive inline raytracing uber-shaders, and that scenarios with very minimal shading complexity and/or very few shaders might run better with inline raytracing.

Where to draw the line between the two isn’t obvious in the face of varying implementations. Furthermore, this basic framing of extremes doesn’t capture all factors that may be important, such as the impact of ray coherence. Developers need to test real content to find the right balance among tools, of which inline raytracing is simply one.
 
Impressive specs XSX :)

I agree. I'm not sure what the recent FUD about the XSX power was about. It's clearly a very powerful and very well thought out machine. All of MS's problems are on the software side it seems...

Other than the original Xbox One, MS has always delivered on the hardware front in the console space. The OG Xbox, X360, and X1X were all great hardware. The X1 was unfortunately a victim of business decisions driven by bad market insight.
 
I agree. I'm not sure what the recent FUD about the XSX power was about. It's clearly a very powerful and very well thought out machine. All of MS's problems are on the software side it seems...

Other than the original Xbox One, MS has always delivered on the hardware front in the console space. The OG Xbox, X360, and X1X were all great hardware. The X1 was unfortunately a victim of business decisions driven by bad market insight.

Yes, as a PC gamer, I have to admit those specs are really awesome, even if it's going to cost 599. We Swedes are going to have to pay 6500 kr (over 600 dollars) for the XSX, according to new rumors, but it's still worth it: a 12+ TF GPU with gobs of bandwidth, a 3.8 GHz Zen 2 CPU, ray tracing, 3D audio hardware, HDMI 2.1 support, and a 1 TB NVMe SSD with some serious decoding power, enabling faster loading than most PC setups today.

MS is a software company... they should have their shit together, or get it together, then :p
 
So 4 ray ops/clock. Okay I can do this guys!

1825 MHz × 52 CUs × 4 ray ops/clock = 3.796 × 10^11 ray operations per second

They said 95 G/s ray-tri at peak

so...
3.796 × 10^11 ray ops / 95 × 10^9 ray-tri ≈ 4.0 ray ops per ray-tri

So it takes 4 ray operations to complete a ray-tri intersection at peak rate according to this.

Now we need to see how many operations it takes RDNA 2.0 to complete a single ray-tri intersection. If they are different, then it is custom hardware; if it is the same, it is the same hardware.

Or the ray ops can be issued every clock, but it takes 4 cycles for each to progress through memory to a hit. This might make more sense actually.
 
I agree. I'm not sure what the recent FUD about the XSX power was about. It's clearly a very powerful and very well thought out machine. All of MS's problems are on the software side it seems...

Other than the original Xbox One, MS has always delivered on the hardware front in the console space. The OG Xbox, X360, and X1X were all great hardware. The X1 was unfortunately a victim of business decisions driven by bad market insight.

I think what you see is people tend to attribute everything to hardware, so if the Halo Infinite demo on Xbox Series X is not impressive, it's because the hardware is bad. Conversely if you demo an impressive piece of software, it's because the hardware is good.
 
disappointment from the games they've shown so far, especially Halo, which seems to look like an X1 game.

'They' must mean the last show then, because otherwise I think, graphically, they have some really impressive stuff going on. FS2020 certainly looks next gen; it's one of the most impressive next-gen titles out there so far. The Hellblade 2 trailer was at least Forbidden West level, and the Gears 5 BC was very impressive too; it lifted the game above most other current Xbox titles, if not all. Minecraft ray tracing on XSX was also very impressive; people seem to find MC RT amazing.
Forza also did next-gen stuff, up there, at least, with GT7!



So nah, I don't agree, and only if you look at just that one last show. In total, yes, they have shown stuff not possible on current gen, which means it's... next-generation stuff. Halo Infinite is an Apex/Fortnite style of online shooter that should run on older hardware too; it's never going to match AAA exclusives.
Also, longer range, I'm sure Fable etc. are going to massively impress, both gameplay- and graphics-wise.
You know what FUD against MS/Xbox is? Posting things like 'MS hasn't shown/isn't going to deliver next gen'. That's just being blind.

Conversely if you demo an impressive piece of software, it's because the hardware is good.

For the average Joe who doesn't know anything about tech, yes; people who post on B3D know better (or should).
 
I think what you see is people tend to attribute everything to hardware, so if the Halo Infinite demo on Xbox Series X is not impressive, it's because the hardware is bad. Conversely if you demo an impressive piece of software, it's because the hardware is good.
What does it mean when it's not shown on the hardware at all?
 
not sure how to compare RT performance either. Is there a standardized metric?

No. RT performance fundamentally depends on the scene: both the depth of the trees you are walking and the cache hit rates you get on them. So there is no inherent metric you can use to compare; in the end you just have to benchmark against some arbitrary scene. No widely used "standard" benchmark exists yet.

Check this out:
10 Giga Rays/s for a 2080 Ti

But here we are reading 95 G/s ray-tri peak?
If a ray-triangle intersection equals a ray, then this is weird, because I'm reading 10 vs 95.
That seems unlikely. Needs investigation.

nVidia is telling you how many rays they can shoot at some specific scene per second; MS is telling you how many intersection tests between a ray and a tri/box they can do per second. You need many intersection tests per ray. How many? It depends. Which of those two numbers is actually better? I'd guess NV's, but I wouldn't bet a lot of money on it.
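
To put rough, purely illustrative numbers on it: if an average ray needs somewhere around 20-40 box/triangle tests against a typical BVH (an assumption, not a published figure), then

95 × 10^9 tests/s / ~30 tests/ray ≈ 3 × 10^9 rays/s

which would land the XSX number in the same ballpark as, but below, the 2080 Ti's quoted 10 Giga Rays/s. The tests-per-ray count depends entirely on the scene and ray coherence, so treat this as a sanity check, not a comparison.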

Curious as to why they say Zen 2 server-class cores. Do lower-end Zen 2s normally cut server features, or are they just fluffing up their spec slide?

They are actually not really server-class, since they cut L3 cache like the APUs do.

It's marketed as server-class mostly to highlight the massive difference between Jaguar and Zen2, which deserves mention. As to why those specific words, probably the same reason some previous consoles have been "supercomputer on chip".
 
nVidia is telling you how many rays they can shoot at some specific scene per second; MS is telling you how many intersection tests between a ray and a tri/box they can do per second. You need many intersection tests per ray. How many? It depends. Which of those two numbers is actually better? I'd guess NV's, but I wouldn't bet a lot of money on it.
Which metric makes more sense, in your opinion?
Wouldn't ray-tri be better than rays shot, in terms of understanding how much performance is available? You can shoot rays into nothingness and never get a hit back from your intersection test.
 
I think what you see is people tend to attribute everything to hardware, so if the Halo Infinite demo on Xbox Series X is not impressive, it's because the hardware is bad. Conversely if you demo an impressive piece of software, it's because the hardware is good.

It's like everyone forgets the start of the last generation. I can't remember a generational change where there were really groundbreaking launch titles that looked above and beyond what the other consoles could do. I think the only one might have been the Dreamcast, imo.
 
There was FUD about the Series X's performance?
All I remember from the recent Internets is disappointment from the games they've shown so far, especially Halo, which seems to look like an X1 game.


It's almost like Halo Infinite is a 60 FPS game (with 1/4 the rendering time for effects of a 30 FPS game, according to a Naughty Dog dev on Twitter) designed around 1.2 TFLOP base hardware! Almost.

I still thought it looked great, and personally I'm mad it was delayed.

For the XSX Hot Chips talk, I notice they are blathering on about Series X targeting 8K and/or 120 FPS in one slide, sigh. The majority of games likely won't even be native 4K, so why do they do this? I hope they don't actually believe it (marketing?).
 
Their VRS patents are software in nature.
Hmm, how does one prove that? I've been following it, and I tracked it down to their HoloLens patent, which is clearly hardware, since nothing else is really like HoloLens.

https://patents.google.com/patent/US10147227B2/en


There are continuing increases in pixel density and display resolution, and a continuing desire for power reduction in mobile display devices, like the HOLOLENS holographic headset device by MICROSOFT CORPORATION. Therefore, there is a need in the art for more efficient graphics processing in a computer device.

If it were software-based, we'd see it in other places, I suspect.
 