Xbox Series X [XBSX] [Release November 10 2020]

Software virtual texturing gets far better efficiencies and with low-latency storage, I expect that (full VT) to be the method of choice for texturing eventually. I think Sampler Feedback's place will be more subtle than raw BW reduction, certainly in next-gen only titles optimised for next-gen architectures. We might see its importance change over time.
 
Software virtual texturing gets far better efficiencies and with low-latency storage, I expect that (full VT) to be the method of choice for texturing eventually. I think Sampler Feedback's place will be more subtle than raw BW reduction, certainly in next-gen only titles optimised for next-gen architectures. We might see its importance change over time.

Quoting from the SFS patent:

"Migrating elements of texture streaming implementations from mip-based streaming (i.e. loading entire levels of detail) to tile-based streaming and partial residency can be an effective mitigation to performance issues. Techniques using partial residency can allow content complexity to continue to grow without a corresponding increase in load times or memory footprint. Tiled resources (also known as partially resident textures or PRTs) can be improved so that these PRTs can be widely adopted while minimizing implementation difficulty and performance overhead for GPUs. These improvements include hardware residency map features and texture sample operations referred to herein as “residency samples,” among other improvements.

A first enhancement includes a hardware residency map feature comprising a low-resolution residency map that is paired with a much larger PRT, and both are provided to hardware at the same time. The residency map stores the mipmap level of detail resident for each rectangular region of the texture. PRT textures are currently difficult to sample given sparse residency. Software-only residency map solutions typically perform two fetches of two different buffers in the shader, namely the residency map and the actual texture map. The primary PRT texture sample is dependent on the results of a residency map sample. These solutions are effective, but require considerable implementation changes to shader and application code, especially to perform filtering the residency map in order to mask unsightly transitions between levels of detail, and may have undesirable performance characteristics. The improvements herein can streamline the concept of a residency map and move the residency map into a hardware implementation."
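
To make the patent's "two fetches of two different buffers" concrete, this is roughly what the software path looks like, sketched CPU-side (all the names and types below are mine, purely illustrative):

```cpp
// Sketch only: a software residency map paired with a sparse (PRT) texture,
// as per the quoted patent. All names/types here are illustrative.
#include <algorithm>
#include <cstdint>
#include <vector>

struct ResidencyMap {
    int tilesX = 1, tilesY = 1;           // coarse grid covering the whole texture
    std::vector<uint8_t> minResidentMip;  // finest mip actually streamed in, per region
};

// Fetch 1: which mip level is resident for this UV region?
int ResidentMip(const ResidencyMap& rm, float u, float v) {
    int tx = std::clamp(int(u * rm.tilesX), 0, rm.tilesX - 1);
    int ty = std::clamp(int(v * rm.tilesY), 0, rm.tilesY - 1);
    return rm.minResidentMip[ty * rm.tilesX + tx];
}

// Fetch 2 is the real texture sample; here we only compute the LOD it must use.
// The shader-computed LOD is clamped so we never touch a tile that isn't
// resident. A real shader also filters neighbouring residency values to hide
// LOD seams -- the part the patent moves into hardware.
float ClampedLod(const ResidencyMap& rm, float u, float v, float desiredLod) {
    return std::max(desiredLod, float(ResidentMip(rm, u, v)));
}
```

As I read the patent, SFS essentially bakes that lookup, the clamp, and the seam filtering into the texture unit, on top of the feedback writes that tell the streamer which tiles to fetch next.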

SFS is not a solution searching for a problem. Silicon real estate does not come cheap.
 
The Partially Resident Texture feature never really seemed to get used last gen. I wonder if that was because of streaming performance from the HDD. Rage had terrible pop-in on PS360 from what I remember, and maybe Xbox One and PS4 would have had the same issue. Curious to see if SFS with its filtering units in combination with an actual SSD will make PRT use a reality. Microsoft seems to be banking a lot on it.
 
The Partially Resident Texture feature never really seemed to get used last gen. I wonder if that was because of streaming performance from the HDD. Rage had terrible pop-in on PS360 from what I remember, and maybe Xbox One and PS4 would have had the same issue. Curious to see if SFS with its filtering units in combination with an actual SSD will make PRT use a reality. Microsoft seems to be banking a lot on it.
I had the impression that part of the problem was that consoles only supported Tier 1 :?: Anyway, there were some caveats with virtual/megatexture approaches that id Software wanted to get away from too (I'm sure you know about that :p).
 
SFS is not a solution searching for a problem.
I didn't say it was. :???: I said bandwidth savings in your examples aren't as good as VT gets.

Software VT should be able to texture a 4K display at 60-100 MB/s based on what's possible already. As your quote says, VT requires work that SFS helps with. It's not about reducing the BW more than VT does, but about reducing the processing costs and workload of implementing texture streaming.

"These solutions are effective, but require considerable implementation changes to shader and application code"

If games are already developed with this in mind, there aren't "changes to shaders and application code" costs. If your game doesn't implement VT already, SFS helps implement it. If your engine already supports VT, SFS can reduce the processing overhead and perhaps improve quality. In terms of pure BW savings though, which is what you were talking about, all games using VT, whether in hardware or software, will have very low texture loading BW requirements, maybe a couple hundred MB/s tops.
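
As a rough back-of-envelope for those figures (every input below is my own assumption: roughly one unique texel per rendered pixel, about three material layers, BC compression at ~1 byte/texel, and only a few percent of the visible tile set changing each frame):

```cpp
// Back-of-envelope only; the inputs are assumptions, not measured data.
#include <cstdio>

int main() {
    const double pixels4K        = 3840.0 * 2160.0; // ~8.3 Mpix
    const double layers          = 3.0;             // albedo + normal + roughness-ish
    const double bytesPerTexel   = 1.0;             // BC-compressed
    const double visibleSetMB    = pixels4K * layers * bytesPerTexel / (1024.0 * 1024.0);
    const double changedPerFrame = 0.05;            // guess: ~5% of tiles newly visible
    const double fps             = 60.0;

    std::printf("Visible unique texel data: ~%.0f MB\n", visibleSetMB);
    std::printf("Streaming rate: ~%.0f MB/s\n", visibleSetMB * changedPerFrame * fps);
    return 0;
}
```

That lands in the same tens-to-low-hundreds of MB/s ballpark, which is why raw sequential SSD throughput isn't the interesting number for textures.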
 
The Partially Resident Texture feature never really seemed to get used last gen. I wonder if that was because of streaming performance from the HDD. Rage had terrible pop-in on PS360 from what I remember, and maybe Xbox One and PS4 would have had the same issue. Curious to see if SFS with its filtering units in combination with an actual SSD will make PRT use a reality. Microsoft seems to be banking a lot on it.
Epic have just showcased virtual texturing already on a console without SFS, so it's not much of a gamble. ;) Every UE5 game will support VT, and I expect all major engines to get upgrades to this as it's the smart future for texturing and gets over the limited RAM capacities of next-gen consoles.
 
I didn't say it was. :???: I said bandwidth savings in your examples aren't as good as VT gets.

Software VT should be able to texture a 4K display at 60-100 MB/s based on what's possible already. As your quote says, VT requires work that SFS helps with. It's not about reducing the BW more than VT does, but about reducing the processing costs and workload of implementing texture streaming.

"These solutions are effective, but require considerable implementation changes to shader and application code"

If games are already developed with this in mind, there aren't "changes to shaders and application code" costs. If your game doesn't implement VT already, SFS helps implement it. If your engine already supports VT, SFS can reduce the processing overhead and perhaps improve quality. In terms of pure BW savings though, which is what you were talking about, all games using VT, whether in hardware or software, will have very low texture loading BW requirements, maybe a couple hundred MB/s tops.

Maybe it's a failing on my side... but what I was in fact addressing is the adequacy of the XSX's SSD for implementing a top-end texture streaming solution. Both a competent VT solution and SFS will decrease BW requirements, and that's the point (but SFS will be the more efficient and effective solution because of a much lower computational overhead due to being a hardware implementation). Boasting about having an X Gb/s capable SSD is meaningless.
 
Okay.
Boasting about having an X Gb/s capable SSD is meaningless.
Only in regard to textures. ;) There's a lot more data to be read, and written. Something like Nanite might stress it to the max. In fact, you'll never have an excess resource; devs will always use as much as you give them and then want more. :mrgreen:
 
Okay.
Only in regard to textures. ;) There's a lot more data to be read, and written. Something like Nanite might stress it to the max. In fact, you'll never have an excess resource; devs will always use as much as you give them and then want more. :mrgreen:

Once you've got by far the main offender in terms of memory bandwidth hogging under control, the rest becomes much easier to deal with.
 
Epic have just showcased virtual texturing already on a console without SFS, so it's not much of a gamble. ;) Every UE5 game will support VT, and I expect all major engines to get upgrades to this as it's the smart future for texturing and gets over the limited RAM capacities of next-gen consoles.

Well, it's a question of how it's implemented and whether it actually takes advantage of SFS. I imagine that's something Microsoft could contribute to UE for use with the XDK.
 
Well, it's a question of how it's implemented and whether it actually takes advantage of SFS. I imagine that's something Microsoft could contribute to UE for use with the XDK.

It has likely already been done, as there are XGS studios that have already announced they will develop using UE5. Unreal Engine seems, for whatever reason, to be the engine of choice for Microsoft studios.
 
Well, it's likely been contributed or slated to be contributed back to Epic in the UE4.x branch; then it's a matter of them pulling that into their UE5 branch.
 
You just never know what they're doing with Nanite in UE5. There may be some oddities about how textures are mapped that make the virtual texturing system different. I really don't know enough. I just wonder if hardware PRT is somewhat restrictive. I'm not saying it is, I just don't know that it isn't.
 
I think it's just been held back by HDDs. JIT texture fetches are too slow, requiring lots of caching and adding complexity for minimal gains. Trials showed VT for all objects and had no issues, proving the theory; RAGE had to limit its megatexturing to scenery. I reckon it'll become the norm, and both console companies are counting on streamed assets to offset the relatively low RAM pools.
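
To put some ballpark numbers on "JIT texture fetches are too slow" (the figures below are my own assumptions, not console specs):

```cpp
// Ballpark figures assumed here: ~10 ms seek / ~100 MB/s for an HDD,
// ~0.1 ms / ~2000 MB/s for NVMe, 64 KB PRT tiles, 60 fps frames.
#include <cstdio>

double TileFetchMs(double seekMs, double tileKB, double throughputMBps) {
    return seekMs + (tileKB / 1024.0) / throughputMBps * 1000.0;
}

int main() {
    const double frameMs = 1000.0 / 60.0;  // ~16.7 ms
    const double hdd  = TileFetchMs(10.0, 64.0, 100.0);
    const double nvme = TileFetchMs(0.1, 64.0, 2000.0);
    std::printf("HDD : %.2f ms per random tile (%.2f frames)\n", hdd, hdd / frameMs);
    std::printf("NVMe: %.2f ms per random tile (%.3f frames)\n", nvme, nvme / frameMs);
    return 0;
}
```

A single random tile fetch already eats most of a 60 fps frame on an HDD, and a frame's worth of VT needs hundreds of tiles, hence all the caching and prefetch heuristics last gen; on NVMe the same request is effectively free.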

I suppose RTRT throws a spanner in the works as sampling is all over the shop. If models have enough detail, I wonder if we can't just use vertex data for reflections? Or maybe some large reflection texture-atlases of low fidelity textures? But I think within a few years, 50% of (larger budget) titles could be streaming textures at a minimum.
 
Or maybe some large reflection texture-atlases of low fidelity textures?

Apologies if I'm asking a stupid question, but it's just occurred to me: would it be necessary to have duplicate textures (high fidelity for traditional texturing, low fidelity for reflections) or could the same, high fidelity texture have a certain set of pixels flagged as the low fidelity version?
 
No, you can't access textures in that partial way. This is why we have mip maps, with lower-quality textures for distant rendering. You'd sample from a lower LOD mip. VT on reflections could try and sample tiles from a low LOD version, but the scattering of ray locality could really stress the system as it violates the principle of fetching one texel per drawn pixel with local geometry.
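
Something like this is what I have in mind for the reflection case; the distance-based LOD pick is my own simplification, not anything from an actual API:

```cpp
// Sketch of "sample reflections from a low, always-resident mip"; the
// heuristic here is illustrative only.
#include <algorithm>
#include <cmath>

// The further the hit and the wider the ray spread, the coarser the mip.
// Clamp to a floor that is guaranteed resident, so scattered reflection
// rays never trigger tile faults in the VT system.
float ReflectionMip(float hitDistance, float raySpread, float texelsPerUnit,
                    float alwaysResidentMip, float maxMip) {
    float footprintTexels = std::max(hitDistance * raySpread * texelsPerUnit, 1.0f);
    float lod = std::log2(footprintTexels);
    return std::clamp(lod, alwaysResidentMip, maxMip);
}
```

The catch is exactly the locality problem above: rays from one pixel neighbourhood can land on completely unrelated tiles, so that always-resident floor has to be fairly coarse.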
 
The discussion about cache scrubbers for the PS5 has got me thinking about the importance of latency for shader performance. Obviously, this was a particular area of focus for Cerny and SIE R&D. It turns out that MS was also fretting about latency and cache misses but, in a fashion typical of engineering, went about providing a solution in a completely different way. Andrew Goosen (that guy again), Ivan Nevraev and others came up with a novel method to improve cache prefetching in a patent titled "Prefetching for a Graphics Shader" (US10346943B2).
They claim to be able to "greatly reduce or eliminate the latency issues by ensuring that the subsequent cache lines of the shader are available in a cache before the shader executes those subsequent cache lines without pauses/delays in the execution of the shader for any further requests of cache lines" by using a "purposely configured GPU" with a prefetcher block that can execute contemporaneously with the shader.
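
There's no public API for that prefetcher block, but the latency-hiding idea is the same one you see on the CPU: start fetching what you'll need next while you're still doing useful work. A rough CPU-only analogy (not the patent's mechanism, just the concept):

```cpp
// CPU-only analogy for the latency-hiding idea: kick off the fetch for data
// you'll need soon while still working on the current element, so execution
// never stalls on a cache miss. The patent applies the same idea to the
// shader's upcoming cache lines, in hardware; this is just the concept.
#include <cstddef>
#include <vector>

double SumWithPrefetch(const std::vector<double>& data) {
    constexpr std::size_t kAhead = 16;  // how far ahead to prefetch (a tuning guess)
    double sum = 0.0;
    for (std::size_t i = 0; i < data.size(); ++i) {
        if (i + kAhead < data.size())
            __builtin_prefetch(&data[i + kAhead]);  // GCC/Clang builtin
        sum += data[i];                             // useful work overlaps the fetch
    }
    return sum;
}
```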
 
Just on the topic of whether a console has or does not have a given feature:
They don't market everything. They only want to market what they want to market. Doesn't mean their competitor does not have that feature. I'm not saying XSX has cache scrubbers, but it also doesn't mean they don't have something equivalent either. Just really hard to market cache scrubbers when it's meaningless to their consumers. It either works or it doesn't work.
 
No, you can't access textures in that partial way. This is why we have mip maps, with lower-quality textures for distant rendering. You'd sample from a lower LOD mip. VT on reflections could try and sample tiles from a low LOD version, but the scattering of ray locality could really stress the system as it violates the principle of fetching one texel per drawn pixel with local geometry.
I believe the method now is to have one asset shipped with each title (PC has a different issue with varying resolutions). I believe the process is that we load the texture (or tile) and generate the mips using a compute shader now (edit: WRT what UE5 did), where it samples the texture at that distance and generates mips 0 -> 10, for instance, and all the mips stay resident in memory until they're no longer needed. Standard SVT has the mips stored offline. If it's doing it in real time, they may sample the texture at its distance and use a compute shader to generate that mip just before it's needed. They must have some way of wrapping the textures to altering geometry.

Ergo: if you're going to go the software raster route and you're sending blocks of work to be done using compute shaders, why not calculate both the geometry and texture for that block simultaneously?
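
For the mip-generation step, a minimal CPU-side sketch (single-channel texels and a plain 2x2 box filter, purely to illustrate; a real engine would do this in a compute shader on multi-channel, block-compressed data):

```cpp
// Minimal sketch of runtime mip generation: downsample one level with a
// 2x2 box filter. Grayscale texels keep it short; illustrative only.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

std::vector<uint8_t> NextMip(const std::vector<uint8_t>& src, int w, int h) {
    const int nw = std::max(w / 2, 1), nh = std::max(h / 2, 1);
    std::vector<uint8_t> dst(static_cast<std::size_t>(nw) * nh);
    for (int y = 0; y < nh; ++y) {
        for (int x = 0; x < nw; ++x) {
            const int sx = x * 2, sy = y * 2;
            const int sx1 = std::min(sx + 1, w - 1), sy1 = std::min(sy + 1, h - 1);
            const int sum = src[sy * w + sx] + src[sy * w + sx1]
                          + src[sy1 * w + sx] + src[sy1 * w + sx1];
            dst[y * nw + x] = static_cast<uint8_t>(sum / 4);
        }
    }
    return dst;
}
// Calling NextMip repeatedly on its own output builds the mip 0 -> N chain
// described above, generated on demand rather than stored offline.
```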
 
Just on the topic of whether a console has or does not have a given feature:
They don't market everything. They only want to market what they want to market. Doesn't mean their competitor does not have that feature. I'm not saying XSX has cache scrubbers, but it also doesn't mean they don't have something equivalent either. Just really hard to market cache scrubbers when it's meaningless to their consumers. It either works or it doesn't work.
They could have mentioned a custom solution to this problem when talking to DF about the cool stuff in XSX. It's not like there was a word limit on the DF article. :LOL:
 