Xbox Series X [XBSX] [Release November 10 2020]

Don't know if this was already here



A rumor so far, but it would really be the right way to reach 4K on next-gen consoles. DLSS v1 was really disappointing, but v2 is much better and shows what can be done with good algorithms (I really don't want to call it AI because it isn't intelligent ^^)
Pretty sure I heard this elsewhere on this forum but not in a tweet. Not sure if it’s the same rumour just circulating around.
 
Pretty sure I heard this elsewhere on this forum but not in a tweet. Not sure if it’s the same rumour just circulating around.
Pretty sure when all the talk about ML-compressed textures and ML upscaling was being discussed, and it was being reported that it's already in use, there was speculation it was in the next Ninja Theory game or Fable.

So I'm assuming this is just the old rumor about which game it's in, coming around again by the looks of it.
 

Ronald interview with him mentioning "Sampler Feedback Streaming, where the I/O subsystem is so fast that we can actually stream data off of the I/O without putting it directly in memory".

Wouldn't this tie in with the old "100GB of instantly accessible data on the SSD" claim? You have roughly 4.8GB/s of peak bandwidth from the SSD. Since SF allows you to request only the (potentially small) part of a texture that you need, it should be more than feasible to pull that in from the SSD in the space of a single frame: you could theoretically load 80MB in the space of a single 60fps frame, and you may only be requesting KB worth of texture, or at worst single-digit MB.

You obviously won't be rendering everything, or even a large percentage of your workload directly from the SSD, but for individual or small numbers of cases like the above it sounds perfectly feasible.
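As a sanity check on those numbers, here's a quick back-of-envelope sketch (Python, with the figures from the post taken as assumptions, not measured data):

```python
# Back-of-envelope check of the per-frame SSD budget quoted above.
# Assumed figures: ~4.8 GB/s peak compressed throughput and a 60 fps target.
peak_bandwidth_mb_s = 4.8 * 1000        # ~4.8 GB/s expressed in MB/s
frame_time_s = 1 / 60                   # ~16.7 ms per frame at 60 fps

budget_per_frame_mb = peak_bandwidth_mb_s * frame_time_s
print(f"Per-frame SSD budget: {budget_per_frame_mb:.0f} MB")   # -> 80 MB

# A single 64 KB tile (the standard tiled-resource tile size) is a tiny
# fraction of that budget, so KB-to-MB requests fit comfortably in a frame:
tile_kb = 64
tiles_per_frame = budget_per_frame_mb * 1024 // tile_kb
print(f"64KB tiles per frame at peak: {tiles_per_frame:.0f}")  # -> 1280
```

Which matches the 80MB-per-frame figure above exactly.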

On the ML rumour, I'll put that one into the "I'll believe it when I see it" category.
 
Wouldn't this tie in with the old "100GB of instantly accessible data on the SSD" claim? You have roughly 4.8GB/s of peak bandwidth from the SSD. Since SF allows you to request only the (potentially small) part of a texture that you need, it should be more than feasible to pull that in from the SSD in the space of a single frame: you could theoretically load 80MB in the space of a single 60fps frame, and you may only be requesting KB worth of texture, or at worst single-digit MB.
I guess the part that is interesting for me is being able to bypass memory and put the textures directly into the cache/registers. I was always under the assumption that textures need to be placed into memory. Let's be real here, people don't move that fast, so you're going to be reusing that texture for at least 60-600 frames. If that texture isn't in memory, it's constantly pulling from the SSD, unless you write back to memory eventually.

If there is paging happening behind the scenes for this, this is interesting. Curious to see how that works.
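To illustrate the reuse argument, here's a toy sketch (purely hypothetical; all names are made up and this has nothing to do with the real XSX memory system) of why writing streamed tiles back to memory matters:

```python
from collections import OrderedDict

class TileCache:
    """Toy LRU residency cache for streamed texture tiles (hypothetical sketch).

    Models the reuse argument above: once a tile has been pulled from the
    SSD it gets written back to memory, so the hundreds of frames that
    reuse it hit RAM instead of re-reading the drive."""
    def __init__(self, capacity_tiles):
        self.capacity = capacity_tiles
        self.tiles = OrderedDict()           # tile_id -> tile payload
        self.ssd_reads = 0

    def fetch(self, tile_id):
        if tile_id in self.tiles:            # resident: cheap reuse from RAM
            self.tiles.move_to_end(tile_id)
            return self.tiles[tile_id]
        self.ssd_reads += 1                  # miss: pull from the SSD
        payload = f"tile-{tile_id}"          # stand-in for real texel data
        self.tiles[tile_id] = payload
        if len(self.tiles) > self.capacity:  # evict least-recently-used tile
            self.tiles.popitem(last=False)
        return payload

cache = TileCache(capacity_tiles=4)
for frame in range(120):                     # same tile reused across 120 frames
    cache.fetch(tile_id=7)
print(cache.ssd_reads)                       # -> 1: only frame 1 touched the SSD
```

Without the write-back, every one of those 120 frames would be an SSD read.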
 
Wouldn't this tie in with the old "100GB of instantly accessible data on the SSD" claim?
Don't see why?
I don't see why the whole game installation/package wouldn't be able to be mapped if that's the option the dev took. That 100GB was mentioned once as an off-the-cuff example as far as I'm concerned, until shown otherwise.
On the ML rumour, I'll put that one into the "I'll believe it when I see it" category.
The ML texture compression is confirmed.
ML upscaling, out of curiosity why do you feel this way?
It doesn't need to be as performant as DLSS 2.0 and tensor cores to be a useful option.
 
Don't know if this was already here



A rumor so far, but it would really be the right way to reach 4K on next-gen consoles. DLSS v1 was really disappointing, but v2 is much better and shows what can be done with good algorithms (I really don't want to call it AI because it isn't intelligent ^^)
My hypothesis has always been that AMD wouldn't make their own competitor to DLSS, but that Microsoft would make their own competitor for Xbox and then roll it out to PC as a component of DirectX. I really hope this is what happens, but I could totally see Microsoft only implementing it on Game Pass games in order to get more subscriptions.
 
My hypothesis has always been that AMD wouldn't make their own competitor to DLSS, but that Microsoft would make their own competitor for Xbox and then roll it out to PC as a component of DirectX. I really hope this is what happens, but I could totally see Microsoft only implementing it on Game Pass games in order to get more subscriptions.
Not really the way MS would segregate it.
If anything they would add it to the PlayFab(?) tool set if they wanted to go that route.
 
Don't see why?
I don't see why the whole game installation/package wouldn't be able to be mapped if that's the option the dev took. That 100GB was mentioned once as an off-the-cuff example as far as I'm concerned, until shown otherwise.

I'm not sure if you misunderstood my meaning, but I wasn't talking about a fixed 100GB, merely that MS have effectively claimed before that they can read data from the SSD into the GPU "instantly", which ties into the idea of bypassing VRAM.

The ML texture compression is confirmed.

Indeed but that's something entirely different and far simpler.

ML upscaling, out of curiosity why do you feel this way?
It doesn't need to be as performant as DLSS 2.0 and tensor cores to be a useful option.

Because based on its INT4/8 capabilities, the XSX is half as fast at ML as an RTX 2060, and a 4K upscale on an RTX 2060 takes 2.5ms. That means it will take ~5ms on the XSX, which is nearly a third of the 16.7ms frame time at 60fps. This may still be worth it, but it's also dependent on this single dev studio creating a model that's comparable in quality and performance to Nvidia's, with all their billions in R&D, access to the world's fastest ML supercomputers, synergy with their own hardware/tensor cores and massive ML experience (being the world leaders in the hardware that runs it and all).

So while I'm not saying it's impossible, I do think it's sensible to take any such claims with a massive pinch of salt until real-world results have been shown and independently verified. After all, "ML upscaling" could mean almost anything and doesn't necessarily have to be comparable to DLSS, which is outputting anywhere between 2.25x and 9x the original number of pixels.
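For what it's worth, the arithmetic behind that estimate can be sketched like this (the 2.5ms figure and the 2x scaling factor are the post's assumptions, not measured data):

```python
# Rough scaling estimate from the post: a 4K upscale pass costs ~2.5 ms on an
# RTX 2060, and the XSX is taken to have roughly half the INT8 throughput,
# so the pass is scaled by 2x. Both inputs are assumptions, not benchmarks.
rtx2060_upscale_ms = 2.5
xsx_scale_factor = 2.0                   # assumption: ~half the ML throughput
xsx_upscale_ms = rtx2060_upscale_ms * xsx_scale_factor   # -> 5.0 ms

for fps in (30, 60, 120):
    frame_ms = 1000 / fps
    share = xsx_upscale_ms / frame_ms
    print(f"{fps:3d} fps: {frame_ms:5.1f} ms frame, upscale eats {share:.0%}")
# At 60 fps the 5 ms estimate is ~30% of the 16.7 ms budget; at 120 fps it
# would swallow 60% of the frame, which is why the target framerate matters.
```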
 
Because based on its INT4/8 capabilities, the XSX is half as fast at ML as an RTX 2060, and a 4K upscale on an RTX 2060 takes 2.5ms. That means it will take ~5ms on the XSX, which is nearly a third of the 16.7ms frame time at 60fps. This may still be worth it, but it's also dependent on this single dev studio creating a model that's comparable in quality and performance to Nvidia's, with all their billions in R&D, access to the world's fastest ML supercomputers, synergy with their own hardware/tensor cores and massive ML experience (being the world leaders in the hardware that runs it and all).

So while I'm not saying it's impossible, I do think it's sensible to take any such claims with a massive pinch of salt until real-world results have been shown and independently verified. After all, "ML upscaling" could mean almost anything and doesn't necessarily have to be comparable to DLSS, which is outputting anywhere between 2.25x and 9x the original number of pixels.
It all depends on what the inputs are and how deep the network is in terms of layers. A lot of that will depend on the internal rendering resolution before adjusting for the upscale.
And it will also matter at what stage the ML upscale is incorporated into the pipeline. If this is a feature that is to be added to DirectX, then that makes for an interesting discussion.
60fps gaming is 16.6ms right now, and 30fps gaming is 33.3ms; XSX and PS5 can process up to 120fps at about 1440p, which is 8.3ms. I think when you look at what is needed here, there is still sufficient time to do this.
Also, to be clear, Nvidia does SSAA first, before the upscale, so it's actually running through two networks before the final output.
There are things that MS can cut out to just look at the upscaling part of it.

I know the potential is there, but it's not a topic I'm interested in diving into until they tell me the first title that will release with it. DL models can be ready tomorrow or 5 years from now. Since the time span is too large and there has been no communication about it, it's just too early to have that convo.
 
I'm not sure if you misunderstood my meaning, but I wasn't talking about a fixed 100GB, merely that MS have effectively claimed before that they can read data from the SSD into the GPU "instantly", which ties into the idea of bypassing VRAM.
You're right, I did misunderstand you. I thought it was bringing up a separate SSD partition etc. again.
So while I'm not saying it's impossible, I do think it's sensible to take any such claims with a massive pinch of salt until real-world results have been shown and independently verified. After all, "ML upscaling" could mean almost anything and doesn't necessarily have to be comparable to DLSS, which is outputting anywhere between 2.25x and 9x the original number of pixels.
Yea, I'm of the same mindset in this regard.
People take the mention of ML upscaling by MS as an indication that it's definitely on the cards, when it doesn't mean that at all.

I do think it's more likely than you do though.

I reckon this means that all games are available to people who have an XSX|S as of now.
The unboxing embargoes were up today on both consoles.
 
Damnit. I realized I still have a lot of Kinect titles to start and finish from X360. I should probably do something about that this weekend before I tear down for the new setup on November 10th. :LOL:
 
We could really use a list of Series X|S enhanced BC titles.

On top of resolution and framerate increases, Auto HDR, etc., I would still really like to see texture changes.
 
This may still be worth it, but it's also dependent on this single dev studio creating a model that's comparable in quality and performance to Nvidia's, with all their billions in R&D, access to the world's fastest ML supercomputers, synergy with their own hardware/tensor cores and massive ML experience (being the world leaders in the hardware that runs it and all).
Forgot to reply to this section.
I don't foresee a single dev doing this either.
But MS, I could.
 
I guess the part that is interesting for me is being able to bypass memory and put the textures directly into the cache/registers. I was always under the assumption that textures need to be placed into memory. Let's be real here, people don't move that fast, so you're going to be reusing that texture for at least 60-600 frames. If that texture isn't in memory, it's constantly pulling from the SSD, unless you write back to memory eventually.

If there is paging happening behind the scenes for this, this is interesting. Curious to see how that works.

I would imagine that if it were being fed directly into the GPU, bypassing the console's main memory, the GPU would write back to memory whatever the results were from using that texture fragment.

Alternatively, or in addition to that, while the GPU is using that texture fragment the system could be loading the entire mip level into memory in case the GPU needs it in the future. So you essentially get virtually instant access to the texture fragment that you need in the next frame, while the whole texture takes more than one frame to load into memory; at that point, if the GPU needs further fragments from that mip level in subsequent frames, it's already resident in memory.
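That two-tier idea could be sketched roughly like this (purely hypothetical Python; the real mechanism would live in the GPU/DirectStorage stack, and all names here are made up):

```python
import threading
import time

# Hypothetical sketch of the two-tier strategy above: serve the needed tile
# straight from the SSD for this frame, while a background load brings the
# whole mip level into memory for subsequent frames.

resident_mips = {}                     # mip_level -> full set of tiles in RAM

def read_tile_from_ssd(mip, tile):
    return f"mip{mip}/tile{tile}"      # stand-in for a real SSD tile read

def prefetch_mip(mip, tile_count):
    time.sleep(0.01)                   # whole mip takes more than one frame
    resident_mips[mip] = {t: read_tile_from_ssd(mip, t)
                          for t in range(tile_count)}

def fetch_tile(mip, tile, tile_count=16):
    if mip in resident_mips:                       # later frames: hit RAM
        return resident_mips[mip][tile]
    # Kick off the background load of the full mip level...
    threading.Thread(target=prefetch_mip, args=(mip, tile_count)).start()
    # ...while serving this frame's fragment directly from the SSD.
    return read_tile_from_ssd(mip, tile)

first = fetch_tile(mip=3, tile=5)      # frame N: served directly from SSD
time.sleep(0.05)                       # a few frames later...
later = fetch_tile(mip=3, tile=9)      # frame N+k: the mip is now resident
print(first, later)
```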

Regards,
SB
 