Xbox Series X [XBSX] [Release November 10, 2020]

If the XSX took 5ms on average to upscale from 1080p to 4K, then any game that natively runs at 1080p with a frame time of 11.6ms could be upscaled to 4K 60fps. It becomes a matter of giving up 30% of your fps in return for 4X the resolution. At 4K50 (VRR is now a thing) it's just a matter of giving up 25% of your fps for the 4X boost in resolution. It's the difference between 4K at 30 fps and 1080p at 35 fps.
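To make that arithmetic concrete, here's a minimal sketch; the fixed 5 ms upscale cost is the hypothetical figure from above, and the function name is made up:

```python
# Back-of-the-envelope math: a hypothetical fixed 5 ms upscale pass
# added on top of the native 1080p frame time.

UPSCALE_MS = 5.0  # assumed XSX 1080p -> 4K upscale cost (hypothetical)

def fps_after_upscale(native_fps: float, upscale_ms: float = UPSCALE_MS) -> float:
    """FPS once the fixed upscale pass is added to each frame."""
    frame_ms = 1000.0 / native_fps
    return 1000.0 / (frame_ms + upscale_ms)

print(fps_after_upscale(86.2))  # 11.6 ms native -> ~60 fps at 4K (30% of fps given up)
print(fps_after_upscale(66.7))  # 15.0 ms native -> ~50 fps at 4K (25% given up)
print(fps_after_upscale(35.0))  # 28.6 ms native -> ~30 fps at 4K
```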

Agreed that 1080p -> 4K upscale is still likely worth it in terms of frame time vs native rendering at 60fps and definitely at 30fps.

Although this presents a potential problem for the XSS on which the same upscale would take about 15ms and thus be unfeasible. So if you can't upscale on the XSS but can on the XSX, your resolution differential is gone. So what will they have to sacrifice on the XSS if not resolution in order to keep the same game playable?

And Xbox devs wouldn't necessarily be forced to create their own solutions.

https://venturebeat.com/2020/02/03/...t-generation-of-games-and-game-development/2/

MS is at least considering offering this as a part of GameStack.

Yes agreed, if anyone is going to do it then it will be Microsoft and/or AMD. That's why I'm doubtful when a single dev studio claims to have pulled it off on their own.
 
Although this presents a potential problem for the XSS on which the same upscale would take about 15ms and thus be unfeasible. So if you can't upscale on the XSS but can on the XSX, your resolution differential is gone. So what will they have to sacrifice on the XSS if not resolution in order to keep the same game playable?
Going from 720p to 1080p/1440p.
You're working with a lot fewer pixels, but to be fair, it may be better to drop RT reflections and just render at 900p without ML upscale.
These sorts of decisions will be made at the optimization stage: what to do to get the best result.
Dropping RT reflections, and therefore not needing to denoise etc., may even make native 1080p possible with all the savings?

Will be interesting to see the decisions taken.
 
Agreed that 1080p -> 4K upscale is still likely worth it in terms of frame time vs native rendering at 60fps and definitely at 30fps.

Although this presents a potential problem for the XSS on which the same upscale would take about 15ms and thus be unfeasible. So if you can't upscale on the XSS but can on the XSX, your resolution differential is gone. So what will they have to sacrifice on the XSS if not resolution in order to keep the same game playable?



Yes agreed, if anyone is going to do it then it will be Microsoft and/or AMD. That's why I'm doubtful when a single dev studio claims to have pulled it off on their own.

But you don't need to upscale to 4K but rather to 1440p, which is roughly 40% of the pixels needed for 4K. Simple math would put the XSS at around 6.5 ms for 1080p to 1440p. If that performance figure holds, then relatively speaking you can get pretty close to XSX 1080p settings before upscale.

Or go with 900p before upscale, where you're only working with a third of the pixels of the native resolution.
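For reference, the pixel counts behind those figures — a quick sketch assuming the upscale cost scales linearly with output pixels (exactly the assumption the next post questions):

```python
# Pixel counts behind the "roughly 40%" figure and the scaled cost guess.
RES = {"900p": (1600, 900), "1080p": (1920, 1080),
       "1440p": (2560, 1440), "4K": (3840, 2160)}
pixels = {name: w * h for name, (w, h) in RES.items()}

print(pixels["1440p"] / pixels["4K"])  # ~0.44, i.e. "roughly 40%"

# If 1080p -> 4K takes ~15 ms on XSS (figure from earlier in the thread),
# a linear-in-output-pixels model gives roughly:
print(15.0 * pixels["1440p"] / pixels["4K"])  # ~6.7 ms for 1080p -> 1440p
```

A linear model lands around 6.7 ms, in the same ballpark as the 6.5 ms above.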
 
But you don't need to upscale to 4K but rather to 1440p, which is roughly 40% of the pixels needed for 4K. Simple math would put the XSS at around 6.5 ms for 1080p to 1440p. If that performance figure holds, then relatively speaking you can get pretty close to XSX 1080p settings before upscale.

Or go with 900p before upscale, where you're only working with a third of the pixels of the native resolution.

If that's the way it works, that is. I haven't had the impression that DLSS cost scales linearly with output resolution like that. I do think it's more expensive at higher output resolutions, but not necessarily that much more expensive.
 
If that's the way it works, that is. I haven't had the impression that DLSS cost scales linearly with output resolution like that. I do think it's more expensive at higher output resolutions, but not necessarily that much more expensive.
DL computation cost is correlated with the depth of the network: more layers, more calculations.
If the input is closer to the output, the number of layers needed for a high-quality upscale could be lower.
If your input were anywhere from 200p to 1440p but the network layers were the same all the way through to outputting 4K, the computation should be nearly the same.
But the quality differences would be massive.
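A toy illustration of that point, with entirely made-up layer sizes: if every conv layer runs at the output resolution, the FLOP count depends on output size and depth, not on the input resolution:

```python
# FLOP estimate for a toy convolutional upscaler whose layers all run
# at the output resolution (channel counts and kernel size are made up).

def conv_stack_flops(out_w, out_h, layers, kernel=3):
    """Multiply-accumulate FLOPs for a stack of convs at out_w x out_h."""
    return sum(2 * out_w * out_h * kernel * kernel * c_in * c_out
               for c_in, c_out in layers)

layers = [(3, 32), (32, 64), (64, 64), (64, 32), (32, 3)]  # fixed depth

# Same depth, same 4K output: cost is identical whether the input was
# 200p or 1440p -- only the output quality differs.
print(conv_stack_flops(3840, 2160, layers) / 1e9, "GFLOPs at 4K output")
print(conv_stack_flops(2560, 1440, layers) / 1e9, "GFLOPs at 1440p output")
```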

Hope that helps.
 
It seems Italy (and other European countries) is going into a new (long) lockdown from Monday 9 November. If someone in charge at MS is reading, please bring the day one forward a few days... even Friday 6 would be great. Otherwise it will be mayhem in Europe for people waiting for the new console. Shops like GameStop are calling people right now to schedule deliveries at different times of day for November 10. I don't know how the logistics work, but bringing it forward just a few days may be possible.
 

Just a few things I picked up that I found interesting:

6:30 - "The huge number of 320 dram banks allows for higher utilisable bandwidth and lower CPU latency despite the very high graphics bandwidth that's beating on the dram."

So there we go: the large number of memory banks helps with utilisation and also lowers latency (and therefore increases effective IPC) for the CPU. I think that was assumed by many folks here, but it's nice to hear the engineers say it.

Also:

- Hardware display processor improved to remove the work of resizing and light processing (now all linear, not gamma encoded) for the various hardware display planes that the GPU has.

- MS had been pushing for years for the HDMI standard to include Auto Low Latency Mode, VRR and Display Stream Compression; very pleased to have it and to be able to support it with XSX.

- New Opus audio compression engine also allows savings on SSD use.

Now beard man:

- Multi-core command processor is "dual stream" (is this a common RDNA thing?)

- Custom GPU firmware (a given, I suppose)

- VRS: on average you only need to shade every other pixel, but doing it everywhere would compromise quality.

- 19:15 "VRS can be used alongside other resolution enhancing techniques including temporal AA, super resolution and even chequerboarding."

- 19:30 - SFS starts by allocating virtual memory space for the entire texture, which is pretty cheap and fast. Then load all the coarsest mip map pages, and validate. (A rough sketch of the whole loop follows this list.)

- Shader issues a single macro sample instruction that combines lookup of the current detail level and fetch of the actual texture page. Records the texture LOD that's requested/needed. (Requests can skip LODs; apparently you don't need to work through every step in the chain.)

- Evict unused pages, then load requested ones.

- When loading a page from an LOD, it will also load the corresponding page in the next lower LOD, to provide the correct LOD in the surrounding area.

- Separate mode of residency map to allow texture space shading of pages in memory on demand.

- Want tile maps to stay on die for low latency access, so ideally 1 pixel per tile for tile maps

- Custom SFS blending mode automatically moves blending between higher-detailed and coarser pages entirely onto the coarser page [where detail isn't needed], so this transition artefact "is completely avoided".

- "SFS gives the same or better level of visual detail, with a lot lower latency, and a lot lower memory cost".

Hopefully my notes are fairly accurate and legible. It's surprisingly time-consuming going through a presentation and typing! Any corrections are welcome.

Ray tracing is up next but I'm taking a break now. Suffice to say, this Hotchips presentation is a lot more interesting when you have the dudes behind the XSX filling in some of the gaps! Definitely worth a listen IMO.
 
Isn't dual stream just implying vertex and pixel shaders? That the multi-core command processors are just more efficient at handling them than the prior GCN architecture was. I haven't kept up with the 'tech' stuff happening at AMD, so my memory of their architectures' history is quite hazy.
 
I am a bit perplexed that we see barely any of the Series X's features promoted or demonstrated in actual exclusive games.

For the PlayStation 5 we see games running and interviews talking about features implemented in actual games.

The Series X is promoted as a machine that has these technologies and plays games, but barely any games have been demoed.
 
I am a bit perplexed that we see barely any of the Series X's features promoted or demonstrated in actual exclusive games.

For the PlayStation 5 we see games running and interviews talking about features implemented in actual games.

The Series X is promoted as a machine that has these technologies and plays games, but barely any games have been demoed.

Hopefully, December's release of 'The Medium' will give a good glimpse into the system's capabilities.
 
Isn't dual stream just implying vertex and pixel shaders? That the multi-core command processors are just more efficient at handling them than the prior GCN architecture was. I haven't kept up with the 'tech' stuff happening at AMD, so my memory of their architectures' history is quite hazy.
I don't believe multi-core and dual streams are referring to the same thing.

I think @3dilettante mentioned in the past that as the GCPs were becoming more complex they may have gained more cores over time. But I don't think that's the same as dual stream. Not sure what dual stream is.
 
I am a bit perplexed that we see barely any of the Series X's features promoted or demonstrated in actual exclusive games.

For the PlayStation 5 we see games running and interviews talking about features implemented in actual games.

The Series X is promoted as a machine that has these technologies and plays games, but barely any games have been demoed.
You'll have to wait a little longer; the XDK apparently only became available in June, but it's a safe bet that devs will use these resources and create very impressive games.
 
Gears Tactics and Gears 5, though I don't know whether Gears 5 uses VRS; we know Tactics does.
 
Gears Tactics and Gears 5, though I don't know whether Gears 5 uses VRS; we know Tactics does.
Somewhere recently (yesterday, maybe) I came across something that said Gears used VRS, but I can't remember if it was just implied.
It may have been from an Xbox Wire post or something.

Dual command processor: runs different levels of DX, one for the game and a lower-priority one for the OS.
 