Xbox Series X [XBSX] [Release November 10 2020]

No, because the OS would immediately crash after losing half the logical processors it was using. This is why it's usually a BIOS setting. There are other issues too, such as cooling profiles being predicated on a CPU configuration that cannot change after boot in most BIOS/OS combinations. But from a purely hardware perspective, there is no reason a Zen 2 CPU cannot hot-flip SMT support. It just needs to be in an environment expecting that possibility - like a custom OS :yes:
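For what it's worth, mainline Linux already treats SMT as something that can be flipped at runtime rather than a boot-only BIOS toggle; a minimal sketch (Linux-specific, needs root, and purely an illustration of "an OS expecting the possibility" - nothing to do with the Xbox OS itself):

```python
# Minimal illustration: Linux exposes runtime SMT control through sysfs
# (kernels built with CONFIG_HOTPLUG_SMT). This is only an example of an OS
# designed to handle SMT changing after boot, not anything Xbox-specific.
SMT_CONTROL = "/sys/devices/system/cpu/smt/control"

def smt_status() -> str:
    """Returns e.g. 'on', 'off', 'forceoff' or 'notsupported'."""
    with open(SMT_CONTROL) as f:
        return f.read().strip()

def set_smt(enabled: bool) -> None:
    """Asks the kernel to online/offline the sibling threads (needs root)."""
    with open(SMT_CONTROL, "w") as f:
        f.write("on" if enabled else "off")

if __name__ == "__main__":
    print("SMT is currently:", smt_status())
```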

https://community.amd.com/community/gaming/blog/2017/11/27/asdasd

Not saying anything about consoles, but Precision Boost 2 allows variable frequency among individual cores depending on workload and the number of threads in flight.
 
Must the moderators start disciplining people who insist on talking about non-Microsoft or non-Series X hardware in this thread?

Please pardon the interruption while we remove a few pure-noise posts. Thank you.
 
The XSX can generate 380 billion intersections per second for its ray tracing, according to Microsoft.
From what I can tell that's quite a good deal of ray tracing.
Do we know how this compares to the Nvidia cards?
 
The XSX can generate 380 billion intersections per second for its ray tracing, according to Microsoft.
From what I can tell that's quite a good deal of ray tracing.
Do we know how this compares to the Nvidia cards?

Intersections per second does have the capability to replace TFLOPs as the meaningless willy waving comparison of choice. :D
 
Are you denying maths, my good man?
Which part exactly of your sentence quoting MS's number can be considered maths?

To reply to your question, those intersection numbers can't lead to any direct comparison with Nvidia's number, especially when there are still so many unknowns regarding RDNA2 RT.
 
those intersection numbers can't lead to any direct comparison with Nvidia's number, especially when there are still so many unknowns regarding RDNA2 RT.

This is true, we can't really conclude yet how RDNA2 will perform in ray tracing until those GPUs are out; we can only guess, and I'm guessing it won't be so far off from what NV is doing with Turing, judging by the path-traced DXR demo on XSX. It will also be interesting to see how the RTX 3000 series does in RT and how much they could improve since 2018. Will it be mainly performance increases, or maybe other important changes?
 
The XSX can generate 380 billion intersections per second for its ray tracing, according to Microsoft.
From what I can tell that's quite a good deal of ray tracing.
Do we know how this compares to the Nvidia cards?
link?
 
That sounds like a blurb from the DF deep-dive article. I think it's just CU count × clock speed × a constant conversion factor.
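Quick back-of-the-envelope check of that guess; the 4-tests-per-CU-per-clock constant below is my own assumption, not something Microsoft has stated:

```python
# Sanity check: does CU count x clock x a small constant land on ~380 billion?
# The 4 tests/CU/clock figure is an assumed constant, not an official number.
cus = 52                     # active CUs in Series X
clock_ghz = 1.825            # Series X GPU clock
tests_per_cu_per_clock = 4   # assumed constant

per_second = cus * clock_ghz * 1e9 * tests_per_cu_per_clock
print(f"{per_second / 1e9:.1f} billion intersection tests/s")  # ~379.6 billion
```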
hmm right, yeah, I found it:
380 billion intersections per second.

It's hard to compare this to Nvidia; Nvidia rates things in rays cast. They claim 10 Gigarays of performance on the 2080 Ti, and rate 1 Gigaray at 10 TFLOPs of compute performance.

XSX is talking about intersections, a completely different metric. With Nvidia's number you can take a resolution and figure out how many rays per pixel, how many passes of ray tracing you can do, etc. I don't know what to make of Xbox's number.

You'd have to figure out how many triangles there are in a typical scene. Say there are 100 million triangles in a scene: XSX has enough intersection-test throughput to pass over it about 3,800 times a second. This actually might be an easier way to rate it than Nvidia's, since Nvidia's numbers are tied to resolution.
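Rough napkin math for both ways of slicing it (the 100-million-triangle scene and the 4K60 target are just illustrative assumptions, and real per-ray traversal cost varies a lot):

```python
# Napkin math for the two metrics; scene size and render target are
# illustrative assumptions only.

# Nvidia-style: rays per pixel per frame implied by a Gigarays figure
giga_rays_per_sec = 10e9                 # 2080 Ti's quoted figure
pixels = 3840 * 2160                     # 4K
fps = 60
print(f"~{giga_rays_per_sec / (pixels * fps):.0f} rays per pixel per frame at 4K60")  # ~20

# XSX-style: how often 380B tests/s could sweep a 100M-triangle scene
intersections_per_sec = 380e9
triangles = 100e6
print(f"~{intersections_per_sec / triangles:.0f} full passes per second")  # ~3800
```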
 
Which part exactly of your sentence quoting MS's number can be considered maths?

To reply to your question, those intersection numbers can't lead to any direct comparison with Nvidia's number, especially when there are still so many unknowns regarding RDNA2 RT.
It was a joke.
Nvidia have said their TU102 GPU can do 10 billion intersections per second, so it would appear they are using the same metric as Microsoft?
https://www.anandtech.com/show/1334...tx-2080-ti-and-2080-founders-edition-review/3
 
hmm right, yeah, I found it:
380 billion intersections per second.

It's hard to compare this to Nvidia; Nvidia rates things in rays cast. They claim 10 Gigarays of performance on the 2080 Ti, and rate 1 Gigaray at 10 TFLOPs of compute performance.

XSX is talking about intersections, a completely different metric. With Nvidia's number you can take a resolution and figure out how many rays per pixel, how many passes of ray tracing you can do, etc. I don't know what to make of Xbox's number.
10 Gigarays = 10 billion intersections a second, apparently.
https://www.anandtech.com/show/1334...tx-2080-ti-and-2080-founders-edition-review/3
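If the two figures really did measure the same thing, the naive ratio works out like this (almost certainly not an apples-to-apples comparison):

```python
# Naive ratio, assuming (dubiously) that both vendors count the same thing.
xsx = 380e9      # Microsoft's quoted intersections/s
tu102 = 10e9     # the TU102 figure quoted above
print(f"{xsx / tu102:.0f}x")  # 38x
```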
 
Nah... Nvidia rates 1 Gigaray as being 10 TF of performance. Something is off with someone's calculations.

Andrew says:
"Without hardware acceleration, this work could have been done in the shaders, but would have consumed over 13 TFLOPs alone," says Andrew Goossen. "For the Series X, this work is offloaded onto dedicated hardware and the shader can continue to run in parallel with full performance. In other words, Series X can effectively tap the equivalent of well over 25 TFLOPs of performance while ray tracing."

So... maybe the equivalent of about 1 Gigaray. But that doesn't make a lot of sense either; that it was able to run the Minecraft demo with a tenth of the power doesn't make a lot of sense. A 2060 does 6 Gigarays by comparison.
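Napkin math for that equivalence, taking both marketing figures at face value (which they probably shouldn't be):

```python
# Converting DF's "over 13 TFLOPs" of offloaded RT shader work into Nvidia's
# loose "~10 TFLOPs per Gigaray" rule of thumb. Marketing numbers on both sides.
offloaded_tflops = 13.0
tflops_per_gigaray = 10.0
print(f"~{offloaded_tflops / tflops_per_gigaray:.1f} Gigarays-equivalent")  # ~1.3
```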
 
Compare XBox 'triangles per second' to PS2 'triangles per second'. Both were counting the same metric but PS2's triangles were far simpler than XB's. There's clearly no way XBSX is managing 38x more of the same work as the RTX 2080 Ti.
 
Didn't they, in the PS2's case, count them untextured, or was it something else? The comparison of paper specs for 6th-gen consoles was basically useless, I remember.
Hmm... bounding box intersections, maybe.

I recall 2 numbers from the GitHub leak. One was triangle, the other was box.
Box was nearly 4x more.

Can someone pull it up? I don't know where to find those GitHub leak images anymore.
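Until someone digs those images up, a rough reconstruction under two assumptions (that the 380 billion figure is the box-test rate, and that the leaked box number really was ~4x the triangle number, as recalled above):

```python
# Rough reconstruction only: assumes 380B/s is the box-test rate and that the
# leaked box figure was ~4x the triangle figure.
box_tests_per_sec = 380e9
triangle_tests_per_sec = box_tests_per_sec / 4
print(f"~{triangle_tests_per_sec / 1e9:.0f} billion triangle tests/s")  # ~95
```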
 
[Image: xbox-series-x-logo-trademark.png]


new logo
 
I recall us having a thread about what it would cost MS to include Azure features, or features from the Azure team; this was the cost:

The exact same Series X processor is used in the Project Scarlett cloud servers that'll replace the Xbox One S-based xCloud models currently being used. For this purpose, AMD built in ECC error correction for GDDR6 with no performance penalty (there is actually no such thing as ECC-compatible G6, so AMD and Microsoft are rolling their own solution), while virtualisation features are also included. And this leads us on to our first mic-drop moment: the Series X processor is actually capable of running four Xbox One S game sessions simultaneously on the same chip, and contains a new internal video encoder that is six times as fast as the more latent, external encoder used on current xCloud servers.

So the ECC cost them here; not sure if the memory must also be ECC if the correction is done on-chip. I suspect there to be 32GB on the server boards, though. It would be neat to run 2 separate XBO instances on an XSX locally.
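Quick memory sanity check on the "four Xbox One S sessions" claim; the per-session allocation and any server RAM figure here are pure guesses:

```python
# Pure speculation: naive RAM requirement if each of four One S sessions got
# a full One S memory allocation, vs. the retail Series X pool.
one_s_ram_gb = 8
sessions = 4
retail_xsx_ram_gb = 16

print(f"naive need: {one_s_ram_gb * sessions} GB vs {retail_xsx_ram_gb} GB retail")
# -> 32 GB, which is why a larger pool on the server boards seems plausible
```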
 