The scalability and evolution of game engines *spawn*

On one side of things, developers' lives have become far easier. With nearly identical target platforms, gone are the days of esoteric hardware. I think they're extremely happy about next-gen. For most it's really no different from doing PC game development.

The minor price they have to pay is having to build scalable game engines for consoles if they opt for multiplatform. They only need to adapt to 2 different CPU ranges, 3 different GPU performance ranges, 2 different I/O ranges, and possibly 3 (maybe only 2) different memory capacities. Far easier than previous console generations.

Maybe easier than PC, because you're not dealing with 3 different GPU providers each with their own quirks, like Intel, Nvidia, and AMD?

I believe that if game devs had been presented with this option in the PS2 era, they'd have jumped on it in a heartbeat.
 
It's a dynamic clock range, so it could be more, right?

It could, when the CPU is not as needed. I'm not denying that the PS5 will have issues with the dynamic clock. But it's still a world of difference from LH. Comparing the two based on CPU clock speed alone is a joke.

In all honesty, I don't believe the XBSX will be that much impaired by the XBSS. That observation was in answer to someone who said there was no difference between what Microsoft is going to do and what Sony and Microsoft did last generation. Launching two SKUs at once is very different from launching one SKU later on, and this time engines should be better prepared for it.

My main point is about LH being a shit show when it comes to supporting new features decently, in a way that really makes it stand out from last generation. Especially at a $299 price point, over 50% more than the magical $199 impulse buy.
 
My concern about LH is less about the sprinkling of ray tracing that we'll be getting even with PS5 or Series X in comparison to PC GPUs, and more about memory capacity if it's only 10 GB total. I can see where you can save a good part of it by scaling render targets, but we don't know much about what they'll need for BVH/RT usage. I have to assume they figure textures will scale down by at least 50%, and that streaming and SFS will let them save enough that capacity isn't an issue.

I hope BVH doesn't chew through RAM, or does it scale with resolution too? Though I kind of figured that would be in world space and not screen space.
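For what it's worth, here's a back-of-envelope sketch of the render-target and texture side of that (Python, and every number in it is my own guess rather than anything from MS or devs): halving render resolution or texture dimensions roughly quarters their footprint, which is where those savings would come from. It says nothing about BVH cost, which is the open question.

```python
# Rough memory sketch -- all sizes and counts below are illustrative guesses, not platform figures.

def render_target_mb(width, height, bytes_per_pixel=8):
    """One render target; 8 bytes/pixel as a stand-in for a fat HDR/G-buffer layer."""
    return width * height * bytes_per_pixel / 2**20

def texture_mb(size, bytes_per_texel=1):
    """Square BC-compressed texture with a full mip chain (~1.33x the base level)."""
    return size * size * bytes_per_texel * (4 / 3) / 2**20

# Render targets: assume ~6 G-buffer/intermediate targets (a guess).
print(f"6 targets at 4K:    {6 * render_target_mb(3840, 2160):.0f} MB")
print(f"6 targets at 1080p: {6 * render_target_mb(1920, 1080):.0f} MB")

# Textures: dropping the top mip (4096 -> 2048) cuts each texture to ~1/4 of its size.
print(f"4K texture: {texture_mb(4096):.1f} MB, 2K texture: {texture_mb(2048):.1f} MB")
```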
 
My concern about LH is less about the sprinkling of ray tracing that we'll be getting even with PS5 or Series X in comparison to PC GPUs, and more about memory capacity if it's only 10 GB total. I can see where you can save a good part of it by scaling render targets, but we don't know much about what they'll need for BVH/RT usage. I have to assume they figure textures will scale down by at least 50%, and that streaming and SFS will let them save enough that capacity isn't an issue.

I hope BVH doesn't chew through RAM, or does it scale with resolution too? Though I kind of figured that would be in world space and not screen space.

That's my understanding as well.
 
You don't think the digital edition can be $100 cheaper?
The drive costs less for sure, and the extra they make up in digital sales will make it a good solution. I think $300 is great value for those on a budget. My only concern is why you would upgrade with no exclusives, and if you already have an X that's likely because you have a 4K TV.

But the price will be appealing, especially for parents.


Because it's running at a lower resolution it won't need as much RT power (as I understand it). Much like the PS5 will likely run at a lower res than XSX, but the RT will scale.

As I understand it, RT is not based on screen space but world space. How else would you have reflections of things that are not visible on the screen if you are culling those objects before applying RT? Plus, resolution itself hardly matters for RT. RT is applied before the image is rasterised, and it should not concern itself with texture resolution either? All it does is bounce rays around geometry? I guess the only thing that could have a big impact would be geometry levels. Less geometry, less costly RT? So LH can trade less detailed worlds for RT. Not sure it's a good compromise...
 
As I understand it, RT is not based on screen space but world space. How else would you have reflections of things that are not visible on the screen if you are culling those objects before applying RT?
If the world is rendered at 1/4 of the resolution, surely the RT only needs to be 1/4 of the detail?
 
My concern about LH is less about the sprinkling of ray tracing that we'll be getting even with PS5 or Series X in comparison to PC GPUs, and more about memory capacity if it's only 10 GB total. I can see where you can save a good part of it by scaling render targets, but we don't know much about what they'll need for BVH/RT usage. I have to assume they figure textures will scale down by at least 50%, and that streaming and SFS will let them save enough that capacity isn't an issue.

I hope BVH doesn't chew through RAM, or does it scale with resolution too? Though I kind of figured that would be in world space and not screen space.
This has always been my one concern, after hearing the CPU is the same; even slightly slower is OK.

But I've accepted that I personally don't have a good enough idea of what the memory breakdown is at that level. So it's more of a raised eyebrow for now until we get more details, not only about the XSS but about memory usage in games in general.

Compromises were necessary to get to that price point, and I think the memory was the biggest one.
I'm hopeful that MS profiled things and took future developments into account well enough that memory is simply where devs will have to optimize the most, rather than having to rethink things.

I suspect memory may need more optimization and thought than the GPU.
 
If the world is rendered at 1/4 of the resolution, surely the RT only needs to be 1/4 of the detail?

Let's go like this: what do you think RT is doing, and why do you think resolution matters? I've just told you RT has nothing to do with resolution because it happens before the image is even rasterised. Tell me why you think it does.
 
DigitalFoundry are going to be having fun with all these Xboxes. It'll be a full-time job just determining the resolutions and settings between the four Xbox consoles.

Microsoft's messaging for which Xbox to use under which scenario will be interesting.

I hazard a guess that the Series S will have two options: 1. next-gen graphics at last-gen resolutions; 2. last-gen graphics at next-gen resolutions.

If an Unreal Engine game appeared at 1440p on Series X where does that leave the other three boxes?
 
Let's go like this: what do you think RT is doing, and why do you think resolution matters? I've just told you RT has nothing to do with resolution because it happens before the image is even rasterised. Tell me why you think it does.
:sleep:
 
Let's go like this: what do you think RT is doing, and why do you think resolution matters? I've just told you RT has nothing to do with resolution because it happens before the image is even rasterised. Tell me why you think it does.
?
Of course it makes a difference. Sure, each ray has the same amount of work to do (i.e. the triangle count isn't reduced), but you are sampling fewer pixels to start with.
For evidence, run any ray tracer at 400x300 and then at 800x600 and watch it run a lot slower.
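If anyone wants to try that claim at home, here's a toy single-sphere ray caster in Python (one primary ray per pixel, no BVH, nothing like real hardware RT, purely to show that the work grows with the number of pixels you shoot rays from):

```python
# Toy ray caster: one sphere, one primary ray per pixel, pure Python, no acceleration
# structure. Only meant to show that cost scales with the pixel count, nothing more.
import time

def trace(width, height):
    cx, cy, cz, radius_sq = 0.0, 0.0, 3.0, 1.0   # sphere at z=3, radius 1, camera at origin
    hits = 0
    for y in range(height):
        for x in range(width):
            # Primary ray direction through this pixel (camera looking down +z).
            dx = (x / width - 0.5) * 2.0
            dy = (y / height - 0.5) * 2.0
            dz = 1.0
            # Ray/sphere intersection via the quadratic discriminant.
            qa = dx * dx + dy * dy + dz * dz
            qb = 2.0 * (dx * -cx + dy * -cy + dz * -cz)
            qc = cx * cx + cy * cy + cz * cz - radius_sq
            if qb * qb - 4.0 * qa * qc >= 0.0:
                hits += 1
    return hits

for w, h in [(400, 300), (800, 600)]:
    start = time.perf_counter()
    trace(w, h)
    print(f"{w}x{h}: {time.perf_counter() - start:.2f}s")   # 4x the pixels -> roughly 4x the time
```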
 
?
Of course it makes a difference. Sure, each ray has the same amount of work to do (i.e. the triangle count isn't reduced), but you are sampling fewer pixels to start with.
For evidence, run any ray tracer at 400x300 and then at 800x600 and watch it run a lot slower.

Are you talking about actual on-screen resolution or texture resolution? Regardless, the cost of running the rays is a fixed cost that does not change with resolution at all, and is arguably a big part of the performance hit?
 
Are you talking about actual on-screen resolution or texture resolution? Regardless, the cost of running the rays is a fixed cost that does not change with resolution at all, and is arguably a big part of the performance hit?
I think the confusion comes from the fact that some early RT games (Battlefield V, for example) had RT effects that were very much tied to resolution, so if you increased the resolution you also increased the number of rays. Pretty sure this is no longer the case and that RT now does not scale linearly with resolution, but again the confusion is quite understandable.
 
Are you talking about actual on-screen resolution or texture resolution? Regardless, the cost of running the rays is a fixed cost that does not change with resolution at all, and is arguably a big part of the performance hit?

The cost of ray tracing changes largely with the number of rays. If you're reducing the resolution and not reducing the number of rays, why on earth did you have that number of rays in the first place? That goes for shadows, reflections and full-on path tracing. And given the cost of ray tracing, you can bet that it will be made not only to scale with resolution, but to scale dynamically with dynamic resolution. And in a hybrid rasteriser the number of rays can even change with what's on screen and what initiates an RT job.

The whole point of DLSS is that you can reduce the resolution (including the RT load) and therefore run faster, and then construct a higher res image.

Some things, like GI probes or whatever, might scale independently of resolution, but they'll still be scalable and you'll probably scale them based on your performance profile.

Here's Minecraft RTX on a 2080. Look for the RTX on, DLSS off, average fps numbers. 1440p vs 1080p.

https://www.pcgamer.com/uk/minecraft-with-rtx-preview-performance/

1440p -> 30 fps average
1080p -> 50 fps average

44% fewer pixels at 1080p, which should give you 53 fps if scaling directly with resolution. Actual number: 50 fps. That's pretty bloody close to scaling directly with resolution!
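Spelling out that arithmetic:

```python
# The scaling check above, using the figures from the PCGamer link.
pixels_1440p = 2560 * 1440                      # 3,686,400
pixels_1080p = 1920 * 1080                      # 2,073,600
print(f"{1 - pixels_1080p / pixels_1440p:.0%} fewer pixels at 1080p")        # ~44%

fps_1440p = 30
expected = fps_1440p * pixels_1440p / pixels_1080p
print(f"expected at 1080p if purely resolution-bound: {expected:.0f} fps")   # ~53 fps
print("measured at 1080p: 50 fps")
```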
 
Are you talking about actual on-screen resolution or texture resolution? Regardless, the cost of running the rays is a fixed cost that does not change with resolution at all, and is arguably a big part of the performance hit?
He's referring to screen resolution. I think only the cost of emitting primary rays is even usable as a metric, and even then I wouldn't call it fixed; I believe only ML algorithms are actually fixed in cost (all inputs lead to an output within the same computational cost regardless). Secondary and incoherent rays are much more taxing. If you're emitting rays per pixel, it's naturally a fair assumption that higher resolutions lead to more rays required to get the same job done.
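A quick sketch of that "rays per pixel" framing (the rays-per-pixel and bounce counts below are made-up illustrative values, not from any console or API):

```python
# Counting rays only -- a toy model, not a cost model of real hardware.
def ray_counts(width, height, rays_per_pixel=1, avg_secondary_per_hit=2):
    primary = width * height * rays_per_pixel
    secondary = primary * avg_secondary_per_hit   # shadow/reflection/GI rays spawned per primary hit
    return primary, secondary

for label, (w, h) in [("4K", (3840, 2160)), ("1440p", (2560, 1440)), ("1080p", (1920, 1080))]:
    p, s = ray_counts(w, h)
    print(f"{label}: {p / 1e6:.1f}M primary rays, ~{s / 1e6:.1f}M secondary rays per frame")
```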
 
DigitalFoundry are going to be having fun with all these Xboxes. It'll be a full-time job just determining the resolutions and settings between the four Xbox consoles.

Microsoft's messaging for which Xbox to use under which scenario will be interesting.

I hazard a guess that the Series S will have two options: 1. next-gen graphics at last-gen resolutions; 2. last-gen graphics at next-gen resolutions.

If an Unreal Engine game appeared at 1440p on Series X where does that leave the other three boxes?
There aren't four Xboxes for next-gen games. There are just the two. Cross-gen games are not targeting Series X and porting down; they started life being made for Xbox One and have been extended upwards. MS's commitment to cross-gen was only ever in that context, not artificially creating exclusives out of games that underwent years of development targeting Xbox One. It never was that much of a commitment. Halo Infinite is really the only first-party title this meaningfully applies to.

And all of the big third parties are going to be doing cross-gen with PS4 and PS5 too. There are three PlayStations by that logic.
 
Thought we were comparing the XB1X and the XBSS, not the vanilla last gen versions.

By all accounts, we have already factored in the new tech by setting the new 4TF as roughly equal to 6TF old tech.
CPU is a step up for sure, but I doubt that's going to do much if games are to run on Xbox One X/S too. Maybe more stable framerates.

Not sure why people are asserting that the XBSS has significantly more performance. 1080p isn't a luxury that the last-gen consoles can't target. If devs don't specifically target 1080p for the XB1X now, what's forcing them to do so in the future?

Right now the XOS costs $300. The XSS is going to cost $300. The XOX went as low as $400. If you own an XOX and want an upgrade, the console you would look at is the XSX at $500, not the XSS at $300.

The XB1X is a dead platform, and the further we get from the XSS/X launch, the less that console will matter, until nothing is developed for it anymore. The XSS will get support for a long time after that.
 
I think MS shot themselves in the foot with their cross-gen comments; the subtleties of the message were not conveyed at the same "4 THE GAMERZ!!!!!" volume as the "play anywhere" mantra. The fact that the cross-gen promise is time-limited and applies only to MS-published titles was something folks in here had to clue me into, and I'm a pretty engaged guy with this stuff. I do wonder how many X1S customers who might have been interested in the XSS at launch have held off, thinking that the cross-gen promise is broader than it really is.
 
Just to be clear, I expected that if the XBSS existed it would have to exist in the $300 region, so there's no surprise for me there.

https://forum.beyond3d.com/posts/2149389/

It's just that, at the same time, it's difficult for me to see how this product can differentiate itself significantly from the current Xbox One X.

Oh, so we're suddenly declaring that all games for XBSX run at 4K and XBSS runs at 1080p?
So if along comes an XBSX game that has to target 1440p, we'll be seeing 720p games? Yikes?
The target goals for XSX are 4K and 8K.
The target goals for XSS are 1080p and 1440p, or 4K with upscaling.

If you're asking about the case where XSX wants to run something at 1080p and upscale to 4K, I'm guessing?
Yeah, that's going to be a hurting time on XSS; it'll need to cut some features to keep up.

It's certainly well ahead of the Xbox One X though. The GPU feature set alone is miles in front. The CPU and SSD are, once again, in a completely different ballpark.
 