Xbox Series X [XBSX] [Release November 10 2020]

Well, I'm out on a limb here speculating, but from reading the Sampler Feedback stuff for DX12U I think on PC you literally get feedback on which texture pages were hit in memory and which were missed. So even with just that, through either shaders or the CPU with API feedback, you could then choose what to do and implement some kind of software-driven mitigation strategy. It's unlikely that Nvidia's DX12U stuff - as awesome as it is - has an equivalent to MS's custom hardware solution for seemingly "free" handling of transitions. Obviously, the more that's handled in software, the greater the overheads are likely to be. But with a bit of time, a bit more hardware, and some creative thinking, the PC has proven for about the last 25 years that it'll find a way in some form or another...
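Purely as a sketch of what that software-driven mitigation might look like - this is not the actual D3D12 streaming path, and the buffer layout, helper names and sentinel handling are my assumptions - the CPU side could boil down to comparing what the GPU asked for against what's resident and queueing the difference:
Code:
// Hypothetical CPU-side loop driven by a decoded MIN_MIP sampler feedback map.
// Assumes the feedback texture has already been resolved/decoded into one byte
// per streaming region (0xFF treated here as "region not sampled this frame");
// all names are illustrative, not a real engine or runtime interface.
#include <cstdint>
#include <vector>

struct TileRequest { uint32_t x, y, mip; };

void UpdateResidency(const std::vector<uint8_t>& minMipRequested, // decoded feedback
                     const std::vector<uint8_t>& minMipResident,  // what's actually in memory
                     uint32_t regionsX, uint32_t regionsY,
                     std::vector<TileRequest>& streamQueue)
{
    for (uint32_t y = 0; y < regionsY; ++y)
    {
        for (uint32_t x = 0; x < regionsX; ++x)
        {
            const size_t i = size_t(y) * regionsX + x;
            if (minMipRequested[i] == 0xFF)      // region wasn't sampled; nothing to do
                continue;
            // The GPU wanted a finer mip than is currently resident: queue the
            // missing tile to be streamed in. Until it arrives, sampling just
            // falls back to the coarser mip already in memory.
            if (minMipRequested[i] < minMipResident[i])
                streamQueue.push_back({ x, y, minMipRequested[i] });
        }
    }
}

The open question on PC is how much of that miss handling, and the later blend-in, ends up in developer shader/CPU code like this versus in the driver or hardware.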

Now on XSX, I *think* that through some combination of hardware and the API back end you can set the system up to automatically request and then begin to blend in the higher-res mip once it's no longer "missed". We know from MS that hardware is definitely involved in managing both the fetching and the blending, so there reasonably has to be more to it than just Sampler Feedback and trilinear filtering. Maybe it's a hardware shortcut that saves writing a shader or CPU code to handle it, cutting the overhead and reducing latency. I dunno!

So in other words, in DX12U on PC, I'm guessing that the Sampler Feedback data on individual page misses or hits is also available, but that more is left up to the developer to manage / evaluate, through either shaders or some kind of feedback to the CPU. Remember that this isn't all on MS through the DX12U API; it's also on Nvidia, Intel and AMD, their respective hardware, and what they want to implement in their drivers. And some of the hardware that can support this (well, basically Nvidia's) is knocking on for a couple of years old now, a lot older than MS and their SFS reveal.

So in other words, if you know what developers on console are likely to want to do with Sampler Feedback on your particular console, you can probably build in some specific hardware acceleration. But there's normally another option, especially if time is on your side ...
Sampler feedback is also used for texture-space shading. In fact, that is how NVIDIA initially proposed it be used. Its use for triggering page fetches during streaming was first introduced in a DirectX devblog entry. (Quelle coincidence!)
 
Seagate made a site for their Series X Storage Expansion Card. Nothing really new other than maybe a new pic & it has a 3-year limited warranty. It does let you sign up to be notified of new info, but after registering it says to keep an eye out on the Twitter page for up to the minute info.

https://www.seagate.com/consumer/play/storage-expansion-for-xbox-series-x/

Tommy McClain

At the bottom:
Code:
Flash Memory    Custom PCIe Gen4x2 NVMe

I don't remember seeing Gen4x2 confirmed before.
 
I mean the x2 part. Some people already speculated it was 2 lanes of Gen4 for the internal and 2 lanes for the external, but it's the first time I've seen an official source spell it out.
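For a rough sense of what the x2 gives you (my own back-of-the-envelope numbers, not anything from Seagate's page): PCIe Gen4 runs 16 GT/s per lane with 128b/130b encoding, so two lanes is roughly 3.9 GB/s each way - comfortable headroom over the 2.4 GB/s raw figure MS quotes for the internal drive, and compressed data crosses the link still compressed.
Code:
// Back-of-the-envelope PCIe Gen4 x2 bandwidth (illustrative arithmetic only).
constexpr double kGen4GTs    = 16.0;                        // GT/s per lane
constexpr double kEncoding   = 128.0 / 130.0;               // 128b/130b line code
constexpr double kPerLaneGBs = kGen4GTs * kEncoding / 8.0;  // ~1.97 GB/s per lane, per direction
constexpr double kGen4x2GBs  = 2.0 * kPerLaneGBs;           // ~3.94 GB/s for Gen4x2
// vs. the 2.4 GB/s raw throughput quoted for the Series X SSD.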
 
Why does the XSX have a split pool of RAM with different speeds? Is it because of backwards compatibility? Why was it not able to have the same speed throughout the whole pool? Will this hamper development in any way?
 
Why does the XSX have a split pool of RAM with different speeds? Is it because of backwards compatibility? Why was it not able to have the same speed throughout the whole pool? Will this hamper development in any way?
From the Digital Foundry analysis and interview with Xbox system architect Andrew Goossen:

"Memory performance is asymmetrical - it's not something we could have done with the PC," explains Andrew Goossen "10 gigabytes of physical memory [runs at] 560GB/s. We call this GPU optimal memory. Six gigabytes [runs at] 336GB/s. We call this standard memory. GPU optimal and standard offer identical performance for CPU audio and file IO. The only hardware component that sees a difference in the GPU."

In terms of how the memory is allocated, games get 13.5GB in total, which encompasses all 10GB of GPU optimal memory and 3.5GB of standard memory. This leaves 2.5GB of GDDR6 memory from the slower pool for the operating system and the front-end shell. From Microsoft's perspective, it is still a unified memory system, even if performance can vary. "In conversations with developers, it's typically easy for games to more than fill up their standard memory quota with CPU, audio data, stack data, and executable data, script data, and developers like such a trade-off when it gives them more potential bandwidth," says Goossen.

It sounds like a somewhat complex situation, especially when Microsoft itself has already delivered a more traditional, wider memory interface in Xbox One X - but the notion of working with much faster GDDR6 memory presented some challenges. "When we talked to the system team there were a lot of issues around the complexity of signal integrity and what-not," explains Goossen. "As you know, with the Xbox One X, we went with the 384[-bit interface] but at these incredible speeds - 14gbps with the GDDR6 - we've pushed as hard as we could and we felt that 320 was a good compromise in terms of achieving as high performance as we could while at the same time building the system that would actually work and we could actually ship."​
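For what it's worth, the two bandwidth figures fall straight out of the chip layout reported elsewhere in that piece (ten GDDR6 chips on the 320-bit bus: six 2GB and four 1GB). The first 10GB is interleaved across all ten chips; the upper 6GB only exists on the six larger chips, so it only ever sees a 192-bit slice of the bus. Rough arithmetic, assuming that layout:
Code:
// XSX memory bandwidth arithmetic, assuming the reported 6x2GB + 4x1GB GDDR6 layout at 14Gbps.
constexpr double kPinRateGbps = 14.0;                                    // per-pin data rate
constexpr double kChipBusBits = 32.0;                                    // bus width per GDDR6 chip
constexpr double kGpuOptimalGBs = 10 * kChipBusBits * kPinRateGbps / 8;  // 320-bit stripe -> 560 GB/s
constexpr double kStandardGBs   =  6 * kChipBusBits * kPinRateGbps / 8;  // 192-bit stripe -> 336 GB/s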
 
I don't know how I missed this:

So Shift Geezer was right: the XSX doesn't stream directly from virtual memory (SSD).
However, the next couple of tweets illustrate that:
1. SFS is purely reactive and eliminates prefetching by definition. You don't know that a texture page is needed until the GPU actually attempts to sample it.
2. Since you don't know a page is required until it's actually needed, if it's not immediately available in memory the GPU would normally stall. And since prefetching is not being done, the page will 100% not be found in memory and will have to be streamed from the SSD. In the meantime, so as not to stall, the GPU has to make do with the next best approximation (the determination of which is described in the SFS patent and is also hardware accelerated). The texture filters are then used to seamlessly blend in the higher mip when it becomes available in memory. The upshot is that those texture filters are going to be in constant use and are thus essential.
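
A minimal sketch of that reactive flow - the residency flags, stream request and blend weight here are stand-ins I've made up to illustrate the behaviour described above, not the actual SFS hardware interface (on XSX the miss handling and the blend are described as hardware-assisted):
Code:
#include <algorithm>
#include <array>

struct Color { float r, g, b, a; };

// Minimal stand-in for per-texture streaming state; purely hypothetical.
struct StreamedTexture
{
    static constexpr int kMips = 12;
    std::array<bool,  kMips> resident{};   // per-mip: is it in memory? (coarsest assumed always resident)
    std::array<float, kMips> progress{};   // 0..1 as a newly fetched mip becomes usable

    Color SampleMip(int mip, float /*u*/, float /*v*/) const
    {
        // Placeholder: a real sampler would fetch and filter texels here.
        return { float(mip), float(mip), float(mip), 1.0f };
    }
    void RequestMipStream(int /*mip*/) { /* queue an SSD fetch for the missing tile(s) */ }
};

// Reactive sampling: if the wanted mip isn't resident, record the miss, kick
// off the fetch and fall back to the best coarser mip; once the finer mip
// arrives, blend it back in so there's no visible pop.
Color SampleWithFallback(StreamedTexture& tex, int wantedMip, float u, float v)
{
    const int coarser = std::min(wantedMip + 1, StreamedTexture::kMips - 1);

    if (!tex.resident[wantedMip])
    {
        tex.RequestMipStream(wantedMip);               // the "miss" drives streaming
        int fallback = coarser;
        while (fallback < StreamedTexture::kMips - 1 && !tex.resident[fallback])
            ++fallback;
        return tex.SampleMip(fallback, u, v);          // coarser-but-valid result, no stall
    }

    // The wanted mip is resident (or just arrived): lerp from the coarser mip
    // towards the finer one so the transition never pops.
    const float t  = std::clamp(tex.progress[wantedMip], 0.0f, 1.0f);
    const Color lo = tex.SampleMip(coarser, u, v);
    const Color hi = tex.SampleMip(wantedMip, u, v);
    return { lo.r + (hi.r - lo.r) * t, lo.g + (hi.g - lo.g) * t,
             lo.b + (hi.b - lo.b) * t, lo.a + (hi.a - lo.a) * t };
}

The key point is the last step: a miss always produces a coarser-but-valid result first, and the filter-assisted blend is what hides the transition when the real data lands.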


 
As someone with a server background, you don't put substandard anything into a server farm. You need the absolute highest quality and maximum efficiency from every single component, because the costs of operating a farm are astronomical. Having a bunch of blades running at anything other than peak efficiency brings me out in a cold sweat. :runaway:

If a perfect XSX chip can run 4 instances of Xbox One on xCloud, then a defective one that can only run 3 instances at once would still be a 3x increase in the number of xCloud instances over the Xbox One servers they are using. Also remember the XSX chip has the new encoder, so they wouldn't need an external one.

I mean, after all, what is the difference with Intel and AMD server chips? They take one that doesn't bin as the highest-end part, lower its clocks or disable a core, and sell it as a lower-end chip.
 
If a perfect XSX chip can run 4 instances of Xbox One on xCloud, then a defective one that can only run 3 instances at once would still be a 3x increase in the number of xCloud instances over the Xbox One servers they are using. Also remember the XSX chip has the new encoder, so they wouldn't need an external one.
No, it's 25% fewer Xbox instances than the number the server could and should be supporting. You're looking at this all wrong - not at what it does, but at what it doesn't. Again, server farms are crazy expensive and you can't afford to have a server box with some of the cores not operable :nope:
 
No, it's 25% fewer Xbox instances than the number the server could and should be supporting. You're looking at this all wrong - not at what it does, but at what it doesn't. Again, server farms are crazy expensive and you can't afford to have a server box with some of the cores not operable :nope:

Like I said, there's an array of different CPUs and GPUs that can be used in a server. Cost and availability also have to be factored in.
 
Like I said, there's an array of different CPUs and GPUs that can be used in a server. Cost and availability also have to be factored in.
When it comes to measuring server cost, it's entirely about running costs, not capital outlay, which is why every possible core working for the next 5-10 years is critical. You're measuring efficiency by cost rather than by tasks/users supported. Microsoft aren't amateurs at this; the cost of tossing a bad CPU in the bin is less than the cost of putting that CPU into a server farm for years.
 
When it comes to measuring server cost, it's entirely about running costs, not capital outlay, which is why every possible core working for the next 5-10 years is critical. You're measuring efficiency by cost rather than by tasks/users supported. Microsoft aren't amateurs at this; the cost of tossing a bad CPU in the bin is less than the cost of putting that CPU into a server farm for years.
I understand. But MS is going to rapidly build out the XSX blades they are using, and they will be used for more than just gaming. So a chip that may not be ideal for a console will pass for their servers. The efficiency gain over the servers already there is quite high.
 
I understand. But MS is going to rapidly build out the XSX blades they are using, and they will be used for more than just gaming. So a chip that may not be ideal for a console will pass for their servers. The efficiency gain over the servers already there is quite high.
Seriously, you have to listen to my words - they come from somebody who managed a server farm. There is no way failed chips are going into a billion-dollar Microsoft server facility; they're going into the desktop PC that prints out the security badges for visitors. :nope:

There is a reason that server CPUs, server RAM (ECC) and server-grade networking cable all cost what they do: because they've been tested to hell and back and will require near-to-no maintenance for years.
 
I think what he means is that they are chips with just one defective CU that are rigorously tested to work for years requiring near-to-no maintenance, but just running two instances of Xbox One.
 
I think what he means is that they are chips with just one defective CU that are rigorously tested to work for years requiring near-to-no maintenance, but just running two instances of Xbox One.
I understand this and it doesn't change anything. Server infrastructure is built to maximise efficiency, which is why you don't save a few quid by using a chip with some broken cores. It's cheaper to toss it and put in a fully working processor - now you can support one or more extra users. The I'll-save-that-it-might-come-in-handy mentality is not compatible with how servers operate. :nope:
 