[Alpha Point] - UE5 Tech Demo for Xbox Series X|S

They worded this a bit confusingly in the presentation. The demo video out on YouTube is locked at 30 fps and uses DRS to push image quality as high as possible, hence the floating internal resolution.

In the second half of the presentation, when they mention a 46 fps average, it is essentially running at 1080p the entire time (they call it 50% of 4K, which is actually a 50% per-axis scale of 4K). They say it in a roundabout way, probably because saying it is running at 1080p has bad PR optics, but it makes a lot of sense for UE5.

That 46 fps average is basically 1080p with TSR up to 4K. That is why they spent the other portion of the presentation talking about 1080p-TSR-to-4K quality, and gave out all the Lumen, Nanite, etc. performance numbers from a 1080p TSR internal resolution.
The demo they showed is probably running Lumen on the Epic setting at a dynamic 1440p internal resolution at 30 fps, while their current performance-target tweaks run Lumen on the High setting at 1080p at 46 fps.
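
As a quick sanity check of those numbers (my own arithmetic, not figures from the talk): a 50% per-axis scale of 4K is exactly 1080p, which is only a quarter of the pixels, and a 46 fps average works out to roughly a 21.7 ms frame budget.

```python
# Sanity-checking the resolution and frame-time figures
# (my own arithmetic, not numbers from the talk).
out_w, out_h = 3840, 2160             # 4K output target
in_w, in_h = out_w // 2, out_h // 2   # 50% per-axis scale -> 1920x1080
pixel_fraction = (in_w * in_h) / (out_w * out_h)
frame_budget_ms = 1000 / 46           # 46 fps average

print(in_w, in_h)                 # 1920 1080
print(pixel_fraction)             # 0.25 -> TSR reconstructs 4x the rendered pixels
print(round(frame_budget_ms, 1))  # 21.7
```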
 
All those "we literally have to work around nanite to get it to do what artists want at all" details suggest the system was made as an interesting challenge and because... rather than with a ton of input from artists and asking them what would make their job easier and faster.

No way dude. This is a totally normal tech art talk. This is just what technical artists do, there's a ton of invisible workflow hacks holding up every game you've ever played. Seeing somebody has kicked the tires on this, and this is their result, makes me more confident in nanite, not less. I can't even see a major/significant hack in the video. What in particular worried you?

My only concern at all is their tiling ground texture workflow, which sounds both clunky and like it ought to be very bad for performance, but I suspect there are several good solutions to that which they haven't pursued yet (and Epic really needs to try to produce in-editor tools to handle some of this, like baking height-map variations into Nanite meshes).

(also -- this was a fantastic talk, so glad they shared. Dunno why anybody is talking about performance -- this is an internal initial test on an alpha engine. 60 fps at 1080p is totally within reach.)
 
The demo was just a test of what the engine can do in its current state, in the hands of developers with UE4 knowledge who have just updated to UE5.

And we saw again that some people on the internet hype everything up to a point where they can only be disappointed.

Some very interesting stuff from the video are the memory sizes. Even though this demo was very short and just a small corridor (meant to explore different features and their boundaries, which is what this demo was for in the first place), it still took up more than 1 GB on the SSD for just that small area. This highlights how big games with this level of detail can get (even with reusing assets over and over again) once you have more than just this one corridor. SSD capacity becomes the limit quite fast.
 
I was really underwhelmed myself, but as a non-developer I can appreciate that it wasn't targeted at me. So many of us Xbox gamers were hoping it was a MS/Xbox answer to the UE5/PS5 demo, but since it was part of GDC, we should have known better. It doesn't change my opinion that The Coalition is still one of Microsoft's top-tier developers. I'm plenty confident they will wow us with their next game, whatever they decide to do. Hopefully with more time with UE5 they can show us something next year.

Tommy McClain
 
Original reveal demo: 2496x1404 at 4.5 ms for Nanite on a PS5

Alpha Point: 1920x1080 at 2.1 ms for Nanite on an Xbox

This was about what I'd expect?
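
For context (my own back-of-the-envelope math, not from either presentation), normalising those timings by pixel count puts the two results in the same ballpark:

```python
# Back-of-the-envelope Nanite cost per pixel (my arithmetic, not from the talks).
ps5_px = 2496 * 1404     # original reveal demo internal resolution
xsx_px = 1920 * 1080     # Alpha Point internal resolution

ps5_ns_per_px = 4.5e6 / ps5_px   # 4.5 ms -> ~1.28 ns per pixel
xsx_ns_per_px = 2.1e6 / xsx_px   # 2.1 ms -> ~1.01 ns per pixel

print(round(ps5_ns_per_px, 2), round(xsx_ns_per_px, 2))  # 1.28 1.01
```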
 
[screenshot]

Lighting in the screenshot above is incredibly hard to get right!
Bright and dark/hard shadows, bounce lighting, sky lighting, reflection lighting, particles, etc. coming together like this. Look at that left pillar!

Maybe they didn't do enough Michael Bay and JJ Abrams, but this is pretty sick to be happening in real time.

Super impressive to me... this is exactly how I do my spectrum analysis.

gooooo FFT convolutional bloom! (timed link)
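
For anyone unfamiliar: FFT convolution bloom computes the glare as a genuine convolution of the HDR frame with a large kernel, done in the frequency domain where convolution becomes a pointwise multiply. A minimal sketch (my own illustration, not Unreal's actual implementation):

```python
# Minimal sketch of FFT convolution bloom (my own illustration, not
# Unreal's actual implementation). For the huge glare kernels bloom wants,
# frequency-domain convolution is far cheaper than direct convolution:
# bloom = IFFT(FFT(image) * FFT(kernel)).
import numpy as np

def fft_bloom(hdr: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Convolve a single-channel HDR image with a glare kernel via FFT.
    Zero-pads to avoid circular-convolution wraparound; the kernel-centre
    offset of the crop is ignored here for brevity."""
    h, w = hdr.shape
    H = np.fft.rfft2(hdr, s=(2 * h, 2 * w))
    K = np.fft.rfft2(kernel, s=(2 * h, 2 * w))
    bloom = np.fft.irfft2(H * K, s=(2 * h, 2 * w))
    return bloom[:h, :w]
```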
 
I was really underwhelmed myself, but as a non-developer I can appreciate that it wasn't targeted at me.
I always felt that they should not have, in essence, promoted it.
The tweets beforehand were aimed at the general gaming public.
People know what The Coalition are capable of, so they had expectations based on that.

They should have put it out without the buildup that Xbox gave it. Then no one would have had expectations, and everyone would have watched it in context.
Even if the context was: it's a small 15-second corridor clip.
As it was, everyone was eagerly waiting for it even though they knew it was a dev/tech talk.

Most people here understand, but it just feels like they hyped it for no reason and thereby under-delivered in that context.
 
Most people here understand, but it just feels like they hyped it for no reason and thereby under-delivered in that context.

Yeah, I think this is another example of XGS (like, the management, not the individual teams they own) not doing quite as good a job as they ought to.
 
I always felt that they should not have, in essence, promoted it.
The tweets beforehand were aimed at the general gaming public.
People know what The Coalition are capable of, so they had expectations based on that.

They should have put it out without the buildup that Xbox gave it. Then no one would have had expectations, and everyone would have watched it in context.
Even if the context was: it's a small 15-second corridor clip.
As it was, everyone was eagerly waiting for it even though they knew it was a dev/tech talk.

Most people here understand, but it just feels like they hyped it for no reason and thereby under-delivered in that context.
Well, the problem with this is that the developers tweet not only for the public but also for other developers. It is hard to get this right if you want to openly communicate a "for developers" tech demo and its results.
Fanboys around the world just have to adapt to the fact that not everything is for them.

The PS5 UE5 tech demo was intended for the general public; this wasn't. The Valley tech demo was intended to show what is possible with the engine (like the old Unreal or 3DMark tech demos). This tech demo was just about what developers can do with it in practice (in the limited time they had with the engine), just to get the numbers they can work with.
So they now have an internal demo in which they can test different algorithms, features, and so on, and see whether each runs better or worse, looks better or worse, etc.
 
Great to see The Coalition is still targeting 60 fps on both X and S; I don't think many expected that given the performance we've seen so far from UE5.
 
All those "we literally have to work around nanite to get it to do what artists want at all" details suggest the system was made as an interesting challenge and because John Carmack talked about doing something like it a decade ago, rather than with a ton of input from artists and asking them what would make their job easier and faster. Which is to say, it's a technically brilliant piece of programming that seems to solve few if any of the truly pressing issues of high end game development today, which is ballooning budgets from needing an ever growing legion of artists for triple a games. Iteration speed ups from not having to bake LODs seem entirely negated by not supporting a legion of tools artists have gotten over the last 20 years; and can eat up far more of the memory budget than any set of texture blending tools and procedural meshes ever would.

Oh well, at least Lumen, expensive as it can be at the moment (edit: it's really fast on XSX somehow, cool), lets you light things quickly and has mostly great-looking results. It's interesting to compare and contrast his reactions to the two. Nanite: "this is how we worked around all these difficulties we encountered." Lumen: "it's awesome and saves a ton of time."

DCC tool workflows need to evolve, but when that work is done it will improve a lot and help artists go faster.
 
You didn't correct anything I said, nor does me having played the game seem to matter here. My point doesn't change at all.

The point is that the reconstruction technique they use means the 60 fps mode doesn't look much worse than the 30/40 fps modes at all. So there's not a lot of incentive to take the hit to framerate for such a negligible graphics improvement in this situation.

You're likely to be sorely disappointed if you think 30 FPS mode is going to look massively better than 60 FPS mode this generation, especially if the developer is good. I'd expect the only titles that look massively better at 30 FPS versus 60 FPS will be ones released by 3rd-party developers that are on a tight time budget or lack the expertise to optimize for 16.7 ms frame times. And even then, the lower temporal resolution of 30 FPS is going to counter most of the benefits of increased static resolution. And that will be exacerbated if they use any form of temporal reconstruction (like checkerboard rendering or any form of temporal accumulation), as these will always look better in motion at higher frame rates. Sure, static screenshots will look better, but as soon as you introduce motion, artifacts will be far more noticeable at 30 FPS.

Regards,
SB
 
Those are very good points. I'm not someone who has minded the trade-off for 30 fps in the past, but once you introduce temporal accumulation and the like, 30 fps might not make as much sense anymore.
 
No way dude. This is a totally normal tech art talk. This is just what technical artists do, there's a ton of invisible workflow hacks holding up every game you've ever played. Seeing somebody has kicked the tires on this, and this is their result, makes me more confident in nanite, not less. I can't even see a major/significant hack in the video. What in particular worried you?

My only concern at all is their tiling ground texture workflow, which sounds both clunky and like it ought to be very bad for performance, but I suspect there are several good solutions to that which they haven't pursued yet (and Epic really needs to try to produce in-editor tools to handle some of this, like baking height-map variations into Nanite meshes).

(also -- this was a fantastic talk, so glad they shared. Dunno why anybody is talking about performance -- this is an internal initial test on an alpha engine. 60 fps at 1080p is totally within reach.)

Ok, let me try again.

Nanite doesn't fix having to work around and hack the things artists want to do. I hope that's clear. The point of comparing Lumen ("it works and makes everything faster and better!") and Nanite ("we had to redo our entire workflow yet again and found no particular time savings so far") was to illuminate the above point. I'd have loved to see Nanite, or some new geometry platform, get the same reaction as Lumen: "this made everything easier, it does everything we want it to do and more, we'll save so much time with this!" Rather than what it actually got, which follows the same curve of needing exponentially more artist time.

In fact, that's exactly the reaction Valve's Source 2 got over a year ago. Artists and designers going "whoa, this level editor and these UV tools etc. are great. Hey, look at all these quality-of-life improvements. Look, they put in the sort of worldspace triplanar shaders and other stuff we usually do ourselves, and then hooked it up to the geometry editor at the same time, that's so cool!"

UE was going in this direction too. They had in-engine modelling tools, better support for spline meshes, etc. Now maybe they'll pick that up again, but it doesn't seem exactly "compatible" with how Nanite works, and especially not with the asset sizes Nanite generates. The "click a button to generate a Nanite mesh from in-engine modelling" idea, for example, could quickly lead to 1 TB+ install sizes.
 
You're likely to be sorely disappointed if you think 30 FPS mode is going to look massively better than 60 FPS mode this generation, especially if the developer is good. I'd expect the only titles that look massively better at 30 FPS versus 60 FPS will be ones released by 3rd-party developers that are on a tight time budget or lack the expertise to optimize for 16.7 ms frame times. And even then, the lower temporal resolution of 30 FPS is going to counter most of the benefits of increased static resolution. And that will be exacerbated if they use any form of temporal reconstruction (like checkerboard rendering or any form of temporal accumulation), as these will always look better in motion at higher frame rates. Sure, static screenshots will look better, but as soon as you introduce motion, artifacts will be far more noticeable at 30 FPS.

Regards,
SB
But this has always been the case.

It's weird that people are treating this as some new situation, when it's really the same exact duality as always.

On ANY given fixed-spec device, you can always do a certain amount more at 30 fps than at 60 fps. This is a rule that exists no matter what hardware we're talking about.

I also think it's ridiculous how people think 'reconstruction' = 60fps, when there's absolutely no reason for such an assumption. The way to really maximize the hardware is to do reconstruction + 30fps.
 
But this has always been the case.

It's weird that people are treating this as some new situation, when it's really the same exact duality as always.

On ANY given fixed-spec device, you can always do a certain amount more at 30 fps than at 60 fps. This is a rule that exists no matter what hardware we're talking about.

I also think it's ridiculous how people think 'reconstruction' = 60fps, when there's absolutely no reason for such an assumption. The way to really maximize the hardware is to do reconstruction + 30fps.

I'm assuming you know that reconstruction works because it accumulates (reconstructs) the data over X number of frames?

Let's say some game reconstructs an image based on 4 frames of data. At 30 FPS, that's 4/30ths of a second spent accumulating the data. If you have fast motion you will easily be able to see artifacts introduced by this. At 60 FPS, 4 frames only take up 2/30ths of a second, leaving a much smaller window for any artifacts to accumulate or be visible.

Reconstruction is nothing new; there have been various forms of it for a long, long time now. It's generally recognized that the best results are at 60 FPS or more. Lower than that, and motion artifacts due to temporal accumulation become too easily visible.

Sure, you can hide some of that with heavy use of full-screen motion blur, but full-screen motion blur isn't always desirable (I personally always disable it because it looks so bad and unnatural). And if you are using full-screen motion blur to hide the temporal accumulation/reconstruction artifacts at 30 FPS, then you're already losing any detail that you've gained from going to 30 FPS by purposely blurring everything to hide the artifacts.

Using Control as an example, it looks OK-ish at 30 FPS with their motion blur, but significantly better at 60 FPS. Turn off motion blur and suddenly you have greater image fidelity, but also impossible-to-ignore reconstruction artifacts at 30 FPS. At 60 FPS it's better, but their reconstruction is so aggressive that it's still noticeable, albeit to a far lesser degree.

So, that's the trade-off for reconstruction at 30 FPS. When standing still, you have good IQ. When in motion, you either have loss of detail due to blurring from motion blur or very visible artifacts. You can, of course, use less aggressive reconstruction, with a lower potential increase in IQ but also fewer artifacts at lower frame rates, but even that will still look better at 60 FPS, even at reduced rendering quality.
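
To make the accumulation point concrete, here is a minimal sketch of the idea (my own illustration, not any engine's actual reconstruction): each frame is blended into a history buffer, so for the same blend factor, stale history lingers on screen twice as long at 30 FPS as at 60 FPS.

```python
# Minimal sketch of temporal accumulation (illustrative only, not any
# engine's actual reconstruction). Each new frame is blended into a
# running history buffer; detail accumulates over several frames, and
# stale history is what shows up as ghosting once the camera moves.
import numpy as np

def accumulate(history: np.ndarray, sample: np.ndarray, alpha: float = 0.25) -> np.ndarray:
    """Exponential moving average: keep (1 - alpha) of the old history.
    With alpha = 0.25, roughly 4 frames of old data linger in the image:
    ~133 ms of potential ghosting at 30 fps, but only ~67 ms at 60 fps."""
    return alpha * sample + (1.0 - alpha) * history
```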

Regards,
SB
 
the point of comparing Lumen "it works and makes everything faster and better!"

This video highlights multiple Lumen issues and workarounds too, though.

In fact, that's exactly the reaction Valve's Source 2 got over a year ago.

I don't know; Valve putting into the level editor some QoL tools that everyone has had in Blender and Max etc. (I personally wrote a MaxScript to do this workflow years earlier, albeit not as you modeled, as did many others) is a statement of intent, but it's not any better. I don't think three tweets from one hobbyist demonstrate that Valve changed the world.

UE was going in this direction too. They had in-engine modelling tools, better support for spline meshes, etc. Now maybe they'll pick that up again, but it doesn't seem exactly "compatible" with how Nanite works, and especially not with the asset sizes Nanite generates. The "click a button to generate a Nanite mesh from in-engine modelling" idea, for example, could quickly lead to 1 TB+ install sizes.

'Generate a Nanite mesh from in-engine modelling' would make things smaller. Nanite is an (incredibly effective) compression format -- it has to be in order to render fast. I feel like people keep misunderstanding that. Ultimately, though, those tools are still there! You can still model geometry in UE5 for blockouts just like in UE4! It's just that nobody is going to have any of those models in their shipped games in UE5, just like nobody had them in their shipped games in UE4. UE5 is opening the doors for much better-looking and ultimately slightly easier workflows; they just target professionals with real pipelines. If those professionals want to be able to extrude a cube and ten intersecting wall meshes out of it, they can make that themselves in a weekend in Houdini or something.
 
But this has always been the case.

It's weird that people are treating this as some new situation, when it's really the same exact duality as always.

On ANY given fixed-spec device, you can always do a certain amount more at 30 fps than at 60 fps. This is a rule that exists no matter what hardware we're talking about.

I also think it's ridiculous how people think 'reconstruction' = 60fps, when there's absolutely no reason for such an assumption. The way to really maximize the hardware is to do reconstruction + 30fps.
I think we have reached a point where hardware performance and features are at a level where, no matter the frame rate, image quality will be limited mostly by the artists. I know we are early in this generation, but the best-looking games on PS5 and Series consoles that have a 30/60 FPS toggle don't have a huge difference in visual quality. The image quality is closer between the 30/60 modes in most of these games than the difference between, say, NASCAR 98 on PS1 and Saturn.
 
Something of an oddity, but apparently TSR is more expensive upsampling from 1440p (~2.7 ms) than from 1080p (~2.2 ms).
And they also note they have some Xbox-specific optimizations for TSR to pass back to Epic, or something to that effect.
 
Something of an oddity, but apparently TSR is more expensive upsampling from 1440p (~2.7 ms) than from 1080p (~2.2 ms).
And they also note they have some Xbox-specific optimizations for TSR to pass back to Epic, or something to that effect.
Why is that odd? If you have 1440p you have more information to process than with 1080p.
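
Worth noting, though (my arithmetic, not from the talk), that the cost doesn't scale linearly with the input: 1440p has ~78% more pixels than 1080p but only costs ~23% more, presumably because part of TSR's work scales with the fixed 4K output rather than with the input.

```python
# How the reported TSR cost scales with input resolution
# (my arithmetic, not numbers from the talk).
px_1440p = 2560 * 1440   # 3,686,400 input pixels
px_1080p = 1920 * 1080   # 2,073,600 input pixels

pixel_ratio = px_1440p / px_1080p   # ~1.78x the input pixels
cost_ratio = 2.7 / 2.2              # ~1.23x the reported TSR cost

print(round(pixel_ratio, 2), round(cost_ratio, 2))  # 1.78 1.23
```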
 