The scalability and evolution of game engines *spawn*

The target goals for XSX are 4K and 8K.
The target goals for XSS are 1080p and 1440p, or 4K with upscaling.
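For a sense of scale, here's the raw pixel math behind those targets (just arithmetic; the resolutions are the ones listed above):

```python
# Quick pixel-count comparison of the quoted resolution targets.
# Nothing platform-specific here, just arithmetic.
targets = {
    "XSX 4K":    (3840, 2160),
    "XSX 8K":    (7680, 4320),
    "XSS 1080p": (1920, 1080),
    "XSS 1440p": (2560, 1440),
}

base = 3840 * 2160  # use native 4K as the reference point
for name, (w, h) in targets.items():
    pixels = w * h
    print(f"{name}: {pixels / 1e6:.1f} MPix ({pixels / base:.2f}x of 4K)")
```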

If you're asking about when XSX wants to run something at 1080p and upscale to 4K, I'm guessing?
Yeah, that's going to be a hurting time on XSS; it'll need to cut some features to keep up.

It's certainly well ahead of Xbox One X though. The GPU feature set alone is miles in front, and the CPU and SSD are, once again, in a completely different ballpark.


So let me ask you. What happens when a game demands 1440p on XBSX?

We even saw such an example: the UE5 tech demo (let's forget about the SSD tech there for the argument). It ran on PS5 at 1440p 30fps, I believe?
 
So let me ask you. What happens when a game demands 1440p on XBSX?
Cut back and upscale if required would be my typical response.
But I would say next-gen features offer a variety of methods to deal with resolution scaling problems: you have TAAU, you have checkerboarding, now you have ML upscaling (maybe, now that I see "4K Upscale"... hmm), and you have VRS, which also allows you to scale the shading rate across parts of the screen.
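As a toy illustration of how those upscalers typically get fed, here's a minimal dynamic-resolution loop; every number and step size in it is invented:

```python
# Minimal dynamic-resolution sketch: drop the internal render resolution
# when the GPU frame time exceeds the budget, then rely on an upscaler
# (TAAU / checkerboard / ML) to reconstruct the output resolution.
# Numbers and step sizes are made up for illustration.

OUTPUT = (3840, 2160)   # what the display gets after upscaling
BUDGET_MS = 16.6        # 60 fps target

def next_scale(current_scale: float, gpu_ms: float) -> float:
    """Nudge the resolution scale up or down based on last frame's cost."""
    if gpu_ms > BUDGET_MS:          # over budget: render fewer pixels
        return max(0.5, current_scale - 0.05)
    if gpu_ms < BUDGET_MS * 0.85:   # comfortably under: claw quality back
        return min(1.0, current_scale + 0.05)
    return current_scale

scale = 1.0
for gpu_ms in [18.0, 17.5, 16.9, 15.2, 13.0]:   # fake frame timings
    scale = next_scale(scale, gpu_ms)
    w, h = int(OUTPUT[0] * scale), int(OUTPUT[1] * scale)
    print(f"gpu={gpu_ms:4.1f} ms -> render {w}x{h}, upscale to {OUTPUT[0]}x{OUTPUT[1]}")
```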

Typically the hardest hitting titles are at the end of the generation and not at the start. So there's time there to see how the technology will play out.

TL;DR: I don't know for sure, but there are options.

Even TLOU2 targeted 1080p on PS4, so they may stick to certain criteria and goals.
Perhaps the goal is to never drop below 1080p on XSS and never below a certain resolution on XSX.
PS5 may have such goals as well, i.e. I don't think we've seen a 720p game on PS4, for instance; 900p is the lowest they've ever gone, and the majority of titles are 1080p.
 
https://techreport.com/review/21404/crysis-2-tessellation-too-much-of-a-good-thing/

On a separate and possibly OT note, does anyone think we're going to see "Crysis 2 renders million-tri Barco barriers and an unseen tessellated sea under the world at all times because DX11" levels of stupid feature-ticking this gen? Inspired by chatting with my mates about new GPU features and how they are often badly shoehorned into existing tech and engines for no good reason, so if you don't have a burning need for a new GPU, wait 18 months for v2.0 of the hardware and for games/engines to take useful advantage of it. I'd plumb forgotten about Crysis 2 and its magic tessellated Barco barriers.

Which engine and which feature?
 
So let me ask you. What happens when a game demands 1440p on XBSX?

We even saw such an example: the UE5 tech demo (let's forget about the SSD tech there for the argument). It ran on PS5 at 1440p 30fps, I believe?

As I already replied above, it runs with a resolution somewhere between 720p and 1080p. (Probably rendering at 1080p, with variable-rate shading doing the rest.)
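A quick back-of-envelope on that guess, assuming a 1080p render target with an invented fraction of the screen shaded at 2x2 rate:

```python
# Back-of-envelope for "renders at 1080p, VRS does the rest".
# Assume some fraction of the screen is shaded at 2x2 rate (one shading
# sample per four pixels) and the rest at full rate; the split is invented.
import math

W, H = 1920, 1080
full_rate_fraction = 0.6          # e.g. characters, edges, high-detail areas
coarse_fraction = 1.0 - full_rate_fraction

pixels = W * H
shaded = pixels * full_rate_fraction + pixels * coarse_fraction * (1 / 4)

# express that shading work as an "equivalent" full-rate 16:9 resolution
equiv_h = math.sqrt(shaded * 9 / 16)
print(f"shaded samples: {shaded / 1e6:.2f}M, "
      f"~equivalent to {int(equiv_h * 16 / 9)}x{int(equiv_h)} at full rate")
```

With those made-up fractions that works out to roughly 1600x900 worth of shading, i.e. somewhere between 720p and 1080p in effective work.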

That's still not terrible, and better than what a lot of people are used to on the lower-end current-gen consoles.
 
We even saw such an example: the UE5 tech demo (let's forget about the SSD tech there for the argument). It ran on PS5 at 1440p 30fps, I believe?
I think on this one Epic cited that there are a bunch of toggle switches in Lumen and Nanite to control the level of fidelity and scale across systems.
Since Lumen is the largest performance hog, there might be some settings there that can be dialed down for big performance wins.

Once again, the feature sets are all there, at least the critical ones; the engines are the ones responsible for the scaling.
 
Since it's Series S day, I couldn't help but think of that when watching DF's video. It'll be a fascinating comparison vs. the One X for the uncapped performance mode.
 
My concern about LH is less about the sprinkling of ray tracing that we'll be getting with even PS5 or Series X in comparison to PC GPUs, and more about memory capacity if it's only 10 GB total. I can see where you can save a good part of it from scaling render targets, but we don't know much about what they'll need for BVH/RT usage. I have to assume they figure textures will scale down by at least 50%, and that streaming and SFS will allow them to save enough that capacity isn't an issue.

I hope the BVH doesn't chew up RAM, or does it scale with resolution too? Though I'd kind of figured it would live in world space and not screen space.

I imagine you could adjust your RT LODs for complexity or distance, and that would make things easier on memory: fewer polygons in the acceleration structure, and possibly even fewer levels in the hierarchy?

Traversals of the BVH tree would naturally scale with resolution, I reckon.
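For a rough feel of the numbers, here's a toy BVH memory estimate; the node size and triangles-per-leaf are pure assumptions, since real builders vary a lot:

```python
# Very rough BVH memory estimate, just to reason about scaling with RT LOD.
# Real sizes depend heavily on the builder (node layout, tris per leaf,
# compression), so treat every constant here as an assumption.

NODE_BYTES = 32          # e.g. an AABB (24B) plus child/leaf indices
TRIS_PER_LEAF = 4        # typical-ish leaf size, purely an assumption

def bvh_bytes(triangles: int) -> int:
    leaves = max(1, triangles // TRIS_PER_LEAF)
    nodes = 2 * leaves - 1          # full binary tree: internal + leaf nodes
    return nodes * NODE_BYTES

for tris in (10_000_000, 2_500_000, 500_000):   # full LOD vs reduced RT LODs
    print(f"{tris / 1e6:4.1f}M tris -> ~{bvh_bytes(tris) / 1e6:.0f} MB of BVH nodes")
```

The point being that halving the geometry fed into the acceleration structure roughly halves the node memory, whatever the exact constants turn out to be.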
 
He's referring to screen resolution. I think only the cost of emitting primary rays is even usable as a metric, and even then I wouldn't call it fixed; I believe only ML algorithms are actually fixed in cost (any input leads to an output within the same computation budget). Secondary and incoherent rays are much more taxing. If you're emitting rays per pixel, it's a fair assumption that higher resolutions lead to more rays being required to get the same job done.

I should have added that when I'm talking about fixed cost, it is with a given quality expected independently of resolution. Like everything in software it is configurable, at a cost. But RT being such a demanding technique, are we really expecting devs to turn it on massively (if at all) on the XBSS? I'm not.
 
I should have added that when I'm talking about fixed cost, it is with a given quality expected. Like everything in software it is configurable, at a cost. But RT being such a demanding technique, are we really expecting devs to turn it on massively (if at all) on the XBSS? I'm not.
I think the recent DF video on ray tracing on Xbox One X and PS4 Pro gives us a glimpse into the future of how hybrid techniques will be used to create the final image. It doesn't necessarily have to be used in the typical way we have done RT.
 
So let me ask you. What happens when a game demands 1440p on XBSX?

We even saw such an example: the UE5 tech demo (let's forget about the SSD tech there for the argument). It ran on PS5 at 1440p 30fps, I believe?

So let me ask you. What happens when a game runs at 1440p on RTX 2080 Ti and you want to run it on a Radeon 5500 XT?

It's simple: you scale some things down until the 5500 XT runs the game fine at the resolution you want. It could even be 4K on the 5500 XT if you wanted. Not at the same IQ settings, obviously, but that's the point.

You can scale just about everything graphics related in order to hit the target performance you want at the resolution you want.

And XBSX to XBSS will be far, FAR simpler to scale than going from an RTX 2080 Ti to a Radeon 5500 XT, as that pair has features that exist on one GPU but not on the other.

The only difference is that on PC, the user does the scaling. On XBSS, the developers will choose what is scaled down (resolution, IQ levels, etc.), so a 1440p XBSX title could easily run at 1080p on XBSS with the reduced resolution and perhaps reduced shadow quality, less dense foliage, or whatever other graphical tweak the developers feel would be the least noticeable difference.
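Conceptually something like this, a made-up per-platform settings table rather than any real engine's config:

```python
# Hypothetical per-platform scalability presets; names and values invented.
# The point is only that the developer picks the preset, not the user.
PRESETS = {
    "XBSX": {"render_res": (2560, 1440), "shadow_quality": "high",
             "foliage_density": 1.0, "rt_reflections": True},
    "XBSS": {"render_res": (1920, 1080), "shadow_quality": "medium",
             "foliage_density": 0.7, "rt_reflections": False},
}

def settings_for(platform: str) -> dict:
    """Return the developer-chosen graphics preset for a console SKU."""
    return PRESETS[platform]

print(settings_for("XBSS"))
```

Same game content and simulation on both; only the preset differs.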

Graphics are the ONLY thing that has to scale as the CPU is exactly the same. So, physics, for example, wouldn't have to be touched. 3D audio wouldn't have to be touched. AI wouldn't have to be touched. Etc.

Again, if a developer can't target the XBSX and scale down to XBSS easily, then that's a failure on the developer's part.

[edit] Sorry if this should have gone into another thread. I was responding to a post that was a page or two before Brit's post mentioning the scaling thread.

Regards,
SB
 
PS5 isn't even out yet and it's already got a 1440p 30 FPS UE5 demo

Without wanting to cross over into the scaling-engines thread, the UE5 demo is so early doors that it's not indicative of anything yet. What they have said is that there's plenty of optimisation still to go, it scales well with resolution, and there are various quality settings.

With regard to 1440p, DF had this to say about it:

We've spent a long time poring over a range of 3840x2160 uncompressed PNG screenshots supplied by the firm. They defy pixel-counting, with resolution as a metric pretty much as meaningless as it is for, say, a Blu-ray movie

That all strikes me as a pretty optimistic outlook for UE5 titles looking alright at 1080p on XSS.
 
Thank you, captain obvious. What you are really saying is that they can control the number of rays to increase performance. But there is zero direct correlation between resolution and ray tracing. As in, just because you reduce resolution, the number of rays does not automatically reduce, which was my point.

If you're using your render target as the basis for generating rays to cast, such as for reflections in a hybrid renderer, or for a "full fat" renderer like RTX Minecraft or RTX Quake 2 or RT in Blender, then you naturally have a direct relationship.

So for example, if you're running a pixel shader and decide to spawn a ray for a reflection, you're going to use a position particular to the pixel you're working on. Here there is a direct relationship. Likewise, if in Blender I'm rendering something using the ray tracer, I select n rays per pixel, and the number of rays that start out is pixels multiplied by rays per pixel.
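Just to put numbers on that, a trivial sketch of the pixels-times-rays-per-pixel relationship (the sample count is arbitrary):

```python
# Primary ray count when rays are spawned per pixel: pixels x rays-per-pixel.
def primary_rays(width: int, height: int, rays_per_pixel: int) -> int:
    return width * height * rays_per_pixel

for w, h in [(3840, 2160), (2560, 1440), (1920, 1080)]:
    n = primary_rays(w, h, rays_per_pixel=1)
    print(f"{w}x{h} @ 1 rpp: {n / 1e6:.1f}M rays per frame")

# Halving both dimensions (4K -> 1080p) cuts the ray count by 4x,
# which is exactly the direct relationship being described.
```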

RTX Minecraft, for example, seems to do the same thing: rays are inherently tied to, and calculated based upon, resolution.

In everything I can think of, RT is set up to scale with resolution. Any game designed for multiplatform or PC will have some default level of scaling, perhaps with some tweakables, e.g. quarter res + denoise or whatever.

Plus if you are reducing the number of rays, you will have less accuracy in the ray-traced effects produced, therefore the ray tracing quality will NOT be the same, maybe even to the extent that it is completely pointless and does not add much over traditional rendering, which was my whole point of seeing XBSS as a not very viable machine for RT. Time and DF will tell if I'm wrong or not, but I expect XBSS to have significantly fewer ray tracing effects than XBSX.

Well if you're reducing resolution you have less accuracy, and more aliasing, on everything. I think it's more likely RT will be sacrificed to allow for increases in resolution than because there's a realistic threshold where it just stops working.

Regarding the article, there is a data point missing: RTX off and DLSS on. Otherwise it's impossible to distinguish whether DLSS is benefiting RTX more, the normal rendering more, or both equally.

For the purpose of illustrating the direct relationship between the number of pixels you're casting rays from and the performance (which Minecraft RTX does; it scales automatically with resolution), I don't think performance gains that could be achieved with DLSS turned on are relevant.
 
If you're using your render target as the basis for generating rays to cast, such as for reflections in a hybrid renderer, or for a "full fat" renderer like RTX Minecraft or RTX Quake 2 or RT in Blender, then you naturally have a direct relationship.

So for example, if you're running a pixel shader and decide to spawn a ray for a reflection, you're going to use a position particular to the pixel you're working on. Here there is a direct relationship. Likewise, if in Blender I'm rendering something using the ray tracer, I select n rays per pixel, and the number of rays that start out is pixels multiplied by rays per pixel.

RTX Minecraft, for example, seems to do the same thing: rays are inherently tied to, and calculated based upon, resolution.

In everything I can think of, RT is set up to scale with resolution. Any game designed for multiplatform or PC will have some default level of scaling, perhaps with some tweakables, e.g. quarter res + denoise or whatever.



Well if you're reducing resolution you have less accuracy, and more aliasing, on everything. I think it's more likely RT will be sacrificed to allow for increases in resolution than because there's a realistic threshold where it just stops working.



For the purpose of illustrating the direct relationship between the number of pixels you're casting rays from and the performance (which Minecraft RTX does; it scales automatically with resolution), I don't think performance gains that could be achieved with DLSS turned on are relevant.

OK, I understand better now. I was under the impression that ray tracing was done from the light sources in the scene, as the actual sources of light, but apparently the rays are cast from the camera PoV, as it seems to be less computationally intensive that way. So despite the fact that we have ray tracing, we are very far away from an actual real-life "lighting simulator".

Edit: then again, I've just read that RTX, and by extension DXR, is not doing primary ray tracing from the camera, but actually secondary ray tracing from light sources?
 
I don't want to open a can of worms here, but this will hold back the XSX/generation, right? I mean, the GPU does more than just graphics? GPGPU stuff.
 
I don't want to open a can of worms here, but this will hold back the XSX/generation, right? I mean, the GPU does more than just graphics? GPGPU stuff.
No, not particularly. Not when you have a CPU as strong as it is. I don't expect there to be much GPGPU stuff.

Anything that has to do with rendering will stay on the GPU; sure, that's GPGPU in the sense of doing, say, culling, but it belongs there, and the amount culled is dependent on assets and resolution (which should scale).

But if you're talking about GPGPU for animation or other things, I don't see that moving to the GPU unless the CPU is not up to the task.
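For instance, screen-size culling is one place where the amount culled genuinely depends on both assets and resolution; a toy sketch, with invented objects and thresholds:

```python
# Toy screen-size culling: drop objects whose projected size falls below a
# pixel threshold. The threshold is in pixels, so the same scene culls more
# aggressively at a lower render resolution. All numbers are invented.
import math

MIN_PIXELS = 4.0   # cull anything that would cover fewer pixels than this

def projected_pixels(radius: float, distance: float, screen_height: int,
                     fov_y_deg: float = 60.0) -> float:
    """Approximate screen-space diameter (in pixels) of a sphere."""
    angular = 2.0 * math.atan2(radius, distance)   # angular size in radians
    return angular / math.radians(fov_y_deg) * screen_height

# (bounding-sphere radius, distance from camera) for a few fake objects
objects = [(0.5, 10.0), (0.5, 80.0), (0.1, 68.0), (0.05, 60.0)]

for screen_h in (2160, 1080):
    visible = sum(projected_pixels(r, d, screen_h) >= MIN_PIXELS
                  for r, d in objects)
    print(f"{screen_h}p: {visible}/{len(objects)} objects survive the size cull")
```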
 