[Alpha Point] - UE5 Tech Demo for Xbox Series X|S

Rift Apart benefits fairly little from switching from 60fps to 30fps mode. Clearly the 1080p->4k reconstruction method Insomniac uses is very good.

But a developer would give themselves twice the frame-time budget if they targeted their game to do 30fps at 1080p->4K reconstruction. You can *always* do a lot more at 30fps relative to a given 60fps situation. Always. And I can never, ever see developers just giving up this competitive advantage. I think people underestimate just how much of an arms race exists between developers in the graphics area, as this totally sells games.
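
To put rough numbers on that "twice the frame-time budget" point, a trivial back-of-envelope check (not from any of the posts above):

```python
# Frame budget doubles when the target drops from 60 fps to 30 fps.
budget_60fps_ms = 1000.0 / 60   # ~16.7 ms to render each frame at 60 fps
budget_30fps_ms = 1000.0 / 30   # ~33.3 ms to render each frame at 30 fps
print(budget_30fps_ms / budget_60fps_ms)  # 2.0 -> twice the GPU/CPU time per frame
```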

Maybe console gamers will just get so used to 60fps that they can never go back, but I really doubt it. Console gamers love to drool over graphics. And when even I can enjoy a 30fps game just fine, despite playing the vast majority of games at 60fps on PC, I'm pretty sure people who primarily play on consoles will be able to adjust back just fine.


This was always a very silly expectation.

And you always have to trade fidelity for framerates. You can never have it all in a fixed spec scenario. There is genuinely no way around this. It will be the same situation with the next generation in 2028 or whatever.
30fps and 40fps modes are usually running at native 4K and drop (resolution) very little when they do. 60fps modes can run down to 1080p, but they mostly run at 1440p (or not far below) and are then reconstructed to 4K.

You clearly didn't play the game yourself.
 
Isn't RTXGI pretty much unusable on anything that isn't a RTX graphics card from Nvidia, meant to be coupled with DLSS that is also only available for RTX graphics cards from Nvidia?


How does that help the Series consoles with RDNA2 GPUs from AMD?

It would/will/does work on consoles, just much slower.
 
Isn't RTXGI pretty much unusable on anything that isn't a RTX graphics card from Nvidia, meant to be coupled with DLSS that is also only available for RTX graphics cards from Nvidia?
RTXGI is only about a probe grid for looking up indirect lighting, so DLSS has nothing to do with it; screen space and resolution are not really relevant.
If performance differs across HW, it can affect the lag of the indirect lighting, but it does not have to affect FPS.
I think it's very practical for any HW, but I don't know of a game using it yet(?).
Exodus uses a similar system, but also does a lot of other RT work, so we can't use it to indicate isolated RTXGI performance.
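
For anyone unfamiliar with what "probe grid" means here, a minimal sketch of the lookup side, assuming a uniform world-space grid (all names and numbers below are made up, and this is not NVIDIA's actual SDK API; the real thing also weights probes by direction and visibility to avoid light leaking):

```python
import numpy as np

# Minimal sketch of a DDGI/RTXGI-style probe-grid lookup: indirect lighting is
# fetched from a world-space grid of probes, so the lookup depends on probe
# spacing and grid size, not on screen resolution.

GRID_ORIGIN = np.array([0.0, 0.0, 0.0])  # world-space corner of the probe volume (made up)
PROBE_SPACING = 1.0                      # metres between neighbouring probes (made up)
GRID_DIMS = (16, 8, 16)                  # probes per axis (made up)

# Placeholder per-probe RGB irradiance; in the real system this is ray traced
# and accumulated into textures each frame.
irradiance = np.random.rand(*GRID_DIMS, 3).astype(np.float32)

def sample_indirect(world_pos):
    """Trilinearly interpolate irradiance from the 8 probes surrounding world_pos."""
    p = (np.asarray(world_pos, dtype=np.float64) - GRID_ORIGIN) / PROBE_SPACING
    base = np.clip(np.floor(p).astype(int), 0, np.array(GRID_DIMS) - 2)
    t = np.clip(p - base, 0.0, 1.0)      # fractional position inside the probe cell
    result = np.zeros(3)
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((t[0] if dx else 1.0 - t[0]) *
                     (t[1] if dy else 1.0 - t[1]) *
                     (t[2] if dz else 1.0 - t[2]))
                result += w * irradiance[base[0] + dx, base[1] + dy, base[2] + dz]
    return result

print(sample_indirect([3.4, 2.1, 7.8]))
```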
 
Exodus uses a similar system, but also does a lot of other RT work, so we can't use it to indicate isolated RTXGI performance.

Why not? As you said updates to the probe grid are independent of resolution. So all we need to do is look at how well the game scales with resolution to determine if the probe update is a significant bottleneck.

The 3090 is twice as fast in Exodus at 1080p vs 4K. The 6900xt scales even better and is 130% faster. Which means the game is scaling pretty well with resolution and performance isn’t dominated by the probe update. So we can surmise that DDGI works very well on both Ampere and RDNA2 and the fixed cost per frame is relatively low.

https://www.computerbase.de/2021-05...mm-metro-exodus-pc-enhanced-edition-1920-1080
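
A quick way to turn that scaling observation into a rough fixed-cost estimate. This is my own crude sketch, assuming frame time splits into a resolution-independent part and a part that scales linearly with pixel count (real GPUs don't scale that cleanly):

```python
# Model: frame_time = F + P * pixels, with pixels normalised so that 1080p = 1 and 4K = 4.
# Given the measured fps(1080p)/fps(4K) ratio, solve for how big F is relative to the 4K frame.

def fixed_cost_share_at_4k(speedup_1080p_vs_4k, pixel_ratio=4.0):
    """Fraction of the 4K frame time that is resolution-independent under the linear model."""
    # t_4k / t_1080 = speedup  =>  (F + pixel_ratio*P) / (F + P) = speedup
    # =>  F / P = (pixel_ratio - speedup) / (speedup - 1)
    f_over_p = (pixel_ratio - speedup_1080p_vs_4k) / (speedup_1080p_vs_4k - 1.0)
    return f_over_p / (f_over_p + pixel_ratio)

print(fixed_cost_share_at_4k(2.0))  # RTX 3090 (2x faster at 1080p): ~0.33 of the 4K frame is fixed
print(fixed_cost_share_at_4k(2.3))  # RX 6900 XT (2.3x faster):      ~0.25 of the 4K frame is fixed
```

Under that (admittedly very rough) model, the resolution-independent work, of which the probe update is only one piece, lands somewhere around a quarter to a third of the 4K frame time.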
 
The 3090 is twice as fast in Exodus at 1080p vs 4K. The 6900xt scales even better and is 130% faster.
Hmm... quite telling to those who think gfx performance scales almost linearly with pixels.

But we can't tell how much RTXGI costs are in comparison to other resolution independent tasks, like BVH, skinning and what not.

Tried to find some resources from NV, but the answer really seems to be 'scale as you want'.
[Image: NVIDIA RTXGI performance chart]

However, in all those examples RTXGI is much more expensive than what I thought. I remember McGuire tweeting 2ms at first, then the paper had examples with larger scenes at 4ms (iirc). Seems in practice it's more like a whopping 8ms.
Ofc. it depends mostly on grid resolution and update rates, so it can be cheap or expensive.
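
To make the "grid resolution and update rates" dependence concrete, a back-of-envelope sketch (the probe counts, ray counts and update fractions below are made-up examples, not taken from NVIDIA's documentation):

```python
# The per-frame probe-update work is roughly the rays traced for probe updates:
#   probes_updated_this_frame * rays_per_probe
# which is independent of screen resolution, so cost is tuned via grid size and update rate.

def probe_update_rays(grid_dims, rays_per_probe=144, fraction_updated_per_frame=1.0):
    """Rays spent on probe updates in one frame (illustrative only)."""
    num_probes = grid_dims[0] * grid_dims[1] * grid_dims[2]
    return int(num_probes * fraction_updated_per_frame) * rays_per_probe

print(probe_update_rays((16, 8, 16)))                                    # ~295k rays: small volume, full update
print(probe_update_rays((32, 16, 32), fraction_updated_per_frame=0.25))  # ~590k rays: bigger volume, amortised over 4 frames
```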
 
Hmm... quite telling to those who think gfx performance scales almost linearly with pixels.

I don’t know anyone who thinks that.

But we can't tell how much RTXGI costs are in comparison to other resolution independent tasks, like BVH, skinning and what not.

Exactly. But we know that the total cost of all the fixed cost items isn’t that bad. The probe update is only a part of that so also can’t be that bad.
 
I genuinely never heard one console player in the world complain about Uncharted 4, God of War, Horizon Zero Dawn, Spiderman, Red Dead Redemption 2, Final Fantasy VII Remake, The Last of Us 2 etc etc only being 30fps. Literally not once. But what I did hear was endless praise for these games and how amazing they looked. The hype behind these games was built heavily on how amazing they looked as well, so it was a huge part of their success.

You can give me a call and I can rant for several hours about the benefits of 60 fps console games.
 

All those "we literally have to work around nanite to get it to do what artists want at all" details suggest the system was made as an interesting challenge and because John Carmack talked about doing something like it a decade ago, rather than with a ton of input from artists and asking them what would make their job easier and faster. Which is to say, it's a technically brilliant piece of programming that seems to solve few if any of the truly pressing issues of high end game development today, which is ballooning budgets from needing an ever growing legion of artists for triple a games. Iteration speed ups from not having to bake LODs seem entirely negated by not supporting a legion of tools artists have gotten over the last 20 years; and can eat up far more of the memory budget than any set of texture blending tools and procedural meshes ever would.

Oh well, at least Lumen, expensive as it can be at the moment (edit- it's really fast on XSX somehow, cool), lets you light things quickly and has mostly great looking results. It's interesting to compare and contrast his reactions to the two. Nanite: "this is how we worked around all these difficulties we encountered". Lumen: "it's awesome and saves a ton of time."
 
Good video, especially with all the background on what they are doing and why.
But somehow this worries me, as he states that artists have much, much more to do. So instead of the new visual improvements decreasing dev time, games will now take even longer to develop...
So I expect many more games with a comic look, as that style requires much less detail and should therefore be less time-intensive to create content for.
 
Well, that was a bit underwhelming IMO, mostly because I was expecting a Series S target.
It was basically walking through a tight corridor with a small room at the end. Though kudos to The Coalition for showing the first tech demo not made by Epic.

It ran at 1080p-1440p30 + TSR up to 4K on the Series X, but according to them it averages 46 FPS, so it needs a ~30% performance boost to reach 60 FPS.
They didn't show it on the Series S because the internal resolution needed to maintain performance parity with the Series X would be too low. They mentioned some problem with Virtual Shadow Maps and a Hierarchical Z-Buffering issue on the Series S... could this be a problem with RAM amount? They specifically mention memory on Lockhart as something "to keep an eye on".
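
Quick sanity check on the required boost, just arithmetic on the 46 FPS figure quoted above:

```python
# Speedup needed to go from the reported 46 fps average to a locked 60 fps.
avg_fps = 46.0
target_fps = 60.0
print(f"{target_fps / avg_fps - 1.0:.0%}")  # ~30% more performance (or an equivalent cost reduction)
```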
 
I was expecting this, that the demo itself would not be very impressive to average gamers, as we don't really comprehend what is going on. The full presentation was very good; really fascinating how they manage to combine Nanite and non-Nanite geometry to overcome obstacles, integrate Lumen + reflections, and more custom stuff. Really happy to see them excited about UE5 and Virtual Texturing, and that they already found areas where they can optimise to get more perf. Exciting times ahead.
 
30fps and 40fps modes are usually running at native 4K and drop (resolution) very little when they do. 60fps modes can run down to 1080p, but they mostly run at 1440p (or not far below) and are then reconstructed to 4K.

You clearly didn't play the game yourself.
You didn't correct anything I said, nor does me having played the game seem to matter here. My point doesn't change at all.

The point is that the reconstruction technique they use means that 60fps mode doesn't look much worse than 30/40fps modes whatsoever. So there's not a lot of incentive to take the hit to framerate for such a negligible graphics improvement in this situation.
 
It ran at 1080p-1440p30 + TSR up to 4K on the Series X.
They worded this a bit confusingly in the presentation. For the demo video out on Youtube it is locked at 30 fps and uses DRS to up image quality as much as possible, hence the floating internal resolution.

In the second half of the presentation, when they mention the 46 fps average, it is actually essentially running at 1080p (they call it 50% of 4K, which is really a 50% axis scale of 4K) the entire time. They kind of say it in a roundabout way, probably because saying it is running at 1080p has bad PR optics, but it makes a lot of sense for UE5.

That 46 fps average is basically at 1080p TSR-> 4K. That is why they spent the other portion of the presentation talking about 1080p TSR-> 4K quality, and gave out all the Lumen, Nanite, etc. performance numbers from 1080p TSR internal res.
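
For clarity on the "50% axis scale" wording, a trivial worked example (nothing beyond what the post already says):

```python
# "50%" is applied per axis, so the internal pixel count is a quarter of native 4K.
native = (3840, 2160)
axis_scale = 0.5
internal = (int(native[0] * axis_scale), int(native[1] * axis_scale))
print(internal)                                              # (1920, 1080)
print(internal[0] * internal[1] / (native[0] * native[1]))   # 0.25 -> 25% of the pixels
```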
 