Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

The challenge, even on top-end PCs, will be lighting. Some VR-specific lighting solutions would be good, especially for PSVR2.

Do you mean in terms of the performance cost? If so, then by the time high-quality UE5 games are hitting VR on PC we'll likely be well into the 40xx/7x00 generation of GPUs, which I'd expect to handle this level of graphics at 90fps+ at the high end. And hopefully, if they incorporate DL-based upsampling methods (DLSS does support VR now, I believe), the image quality could even exceed what we're seeing here. And that's without considering any potential foveated rendering gains.
 
I wonder if there are small variations in the mesh normals between different LODs/meshlets. There are quite severe changes in lighting on flat surfaces.

I'm still trying to nail down whether it's lighting conditions, LOD/mesh transparency issues, or screen-space reflection instabilities that cause a sudden odd flicker (or delayed reflection) in car windows/windshields when approaching them from a short distance (especially in a full parking lot).
 
Do you mean in terms of the performance cost? If so, then by the time high-quality UE5 games are hitting VR on PC we'll likely be well into the 40xx/7x00 generation of GPUs, which I'd expect to handle this level of graphics at 90fps+ at the high end. And hopefully, if they incorporate DL-based upsampling methods (DLSS does support VR now, I believe), the image quality could even exceed what we're seeing here. And that's without considering any potential foveated rendering gains.

A bold statement, my friend! :oops:

Seriously, the Valley of the Ancient demo at high/epic settings still requires a lot of concessions (i.e., 1080/1440p temporal reconstruction) to get a stable 4K/30fps presentation on an RTX 3090. IIRC, my RTX 3090 setup ran the demo in the mid-40s or slightly lower at the highest settings.


As Alex stated...
We're seeing some groundbreaking technology here and inevitably, there is a price to pay. Not even an overclocked RTX 3090 can run the demo fully locked at 60 frames per second at 1080p TSRed up to 4K.
 
Do you mean in terms of the performance cost? If so, then by the time high-quality UE5 games are hitting VR on PC we'll likely be well into the 40xx/7x00 generation of GPUs, which I'd expect to handle this level of graphics at 90fps+ at the high end. And hopefully, if they incorporate DL-based upsampling methods (DLSS does support VR now, I believe), the image quality could even exceed what we're seeing here. And that's without considering any potential foveated rendering gains.

Yeah, very probable, seeing what mid/low-end hardware (consoles) can do today with UE5 without ML upscaling. One would imagine that with ML/DLSS etc. on the 40xx/7x00 generation of hardware, things could come a long way, especially with optimized settings.
 
A bold statement, my friend! :oops:

Seriously, the Valley of the Ancient demo at high/epic settings still requires a lot of concessions (i.e., 1080/1440p temporal reconstruction) to get a stable 4K/30fps presentation on an RTX 3090. IIRC, my RTX 3090 setup ran the demo in the mid-40s or slightly lower at the highest settings.


As Alex stated...

It seems Valley of the Ancient is quite a bit heavier than the Matrix demo despite not looking as good. I based my estimate on the consoles hitting 30fps in that demo. I think it's realistic to expect the 4090 or its RDNA equivalent to triple that, at least based on the current rumours of a massive performance uplift over the current generation.
 
A bold statement, my friend! :oops:

Seriously, the Valley of the Ancient demo at high/epic settings still requires a lot of concessions (i.e., 1080/1440p temporal reconstruction) to get a stable 4K/30fps presentation on an RTX 3090. IIRC, my RTX 3090 setup ran the demo in the mid-40s or slightly lower at the highest settings.


As Alex stated...

It seems premature to judge the performance of a graphics engine by a demo that was released almost a year before the engine is to be released for use by developers. I think most here would find it odd if Epic isn't able to make Lumen, Nanite and/or UE5 more performant over time.
 
It seems premature to judge the performance of a graphics engine by a demo that was released almost a year before the engine is to be released for use by developers. I think most here would find it odd if Epic isn't able to make Lumen, Nanite and/or UE5 more performant over time.

No doubt UE5 will improve by the time it fully arrives, but I'm almost certain 30fps and sub-1440p rendering (4K with temporal reconstruction) will be the flavor of the day for console gamers wanting anything looking remotely like The Matrix Awakens demo, especially one with full-fledged open-world game mechanics and logic. And I wouldn't be surprised if the next-generation AMD/Nvidia GPUs still struggle to achieve a native 4K image at 60fps without PC gamers making concessions on IQ settings and various effects (and if DLSS or FidelityFX isn't implemented in such a game, achieving that native 4K presentation becomes even more problematic).
 
This demonstrates really well an uncanny valley for inanimate objects. The vehicles are perfectly spaced apart and look eerily identical - because, of course, they are.

Introducing variations in orientation/spacing and cleanliness (dust, dirt, grime) is the next step.

This would seem to be something ML could be quite good at. Have it generate random wear and grime patterns when a random vehicle is created. For random vehicles, there isn't even a need to store that particular configuration of wear and grime long term. With a good ML model, it should be able to generate a reasonably realistic, near-infinite set of variations (not literally infinite, but with enough variation to appear so).
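To make the "no long-term storage" point concrete, here is a minimal sketch in plain C++ (not actual UE5 code; all names are illustrative, and a trained generative model would stand in for the simple RNG): hash a stable vehicle ID into a seed and derive the grime parameters deterministically, so the same car always regenerates the same wear without anything being saved.

```cpp
#include <cstdint>
#include <random>

// Illustrative grime parameters a material system might consume.
struct GrimeParams {
    float dirtAmount;     // 0..1 overall dirt coverage
    float rustAmount;     // 0..1 rust/wear intensity
    float scratchDensity; // scratches per unit area (illustrative)
    uint32_t noiseSeed;   // seed for a detail-noise texture lookup
};

// Derive everything from a stable per-vehicle ID: the same vehicle
// always regenerates identical grime, so nothing is stored long term.
GrimeParams MakeGrimeForVehicle(uint64_t vehicleId) {
    std::mt19937 rng(static_cast<uint32_t>(vehicleId ^ (vehicleId >> 32)));
    std::uniform_real_distribution<float> unit(0.0f, 1.0f);

    GrimeParams p;
    p.dirtAmount     = unit(rng);
    p.rustAmount     = unit(rng) * unit(rng); // bias toward lightly rusted cars
    p.scratchDensity = 5.0f * unit(rng);
    p.noiseSeed      = rng();
    return p;
}
```

An ML model would map the same seed (plus the car's base mesh/materials) to far richer patterns, but the storage argument is identical: regenerate on demand, never persist.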

While that can also be accomplished with decals and such, close inspection will often reveal similarities between decals, especially with the greater fidelity that modern rendering brings, or will bring, to the table.

Regards,
SB
 
No doubt UE5 will improve by the time it fully arrives, but I'm almost certain 30fps and sub-1440p rendering (4K with temporal reconstruction) will be the flavor of the day for console gamers wanting anything looking remotely like The Matrix Awakens demo, especially one with full-fledged open-world game mechanics and logic. And I wouldn't be surprised if the next-generation AMD/Nvidia GPUs still struggle to achieve a native 4K image at 60fps without PC gamers making concessions on IQ settings and various effects (and if DLSS or FidelityFX isn't implemented in such a game, achieving that native 4K presentation becomes even more problematic).

I guess I have more faith than you, because a small team using a not-ready-for-commercial-use engine to produce a real-time demo on new console hardware that's barely a year old seems like a poor barometer of what's possible on these consoles.

It’s not hard for me to imagine a quality dev with hundreds of millions of dollars’ worth of resources, a few years of development time and more intimate knowledge of the hardware pulling off something that performs better in a few years’ time.
 
This would seem to be something ML could be quite good at. Have it generate random wear and grime patterns when a random vehicle is created. For random vehicles, there isn't even a need to store that particular configuration of wear and grime long term. With a good ML model, it should be able to generate a reasonably realistic, near-infinite set of variations (not literally infinite, but with enough variation to appear so).

While that can also be accomplished with decals and such, close inspection will often reveal similarities between decals, especially with the greater fidelity that modern rendering brings, or will bring, to the table.

Regards,
SB

Couldn't the same logic and techniques used to generate random, unique MetaHumans be applied to cars (a sort of MetaCar)? Example: a base car model or game asset, like a Ford F-150, which can be generated hundreds of times with a multitude of unique imperfections, scuffs, scratches, dirt, and so on. Correct me if I'm wrong, but didn't they use a Carrie-Anne Moss model/scan as the base for generating certain unique MetaHumans from her likeness?

Just a thought anyway...
 
This would seem to be something ML could be quite good at.

I'm sure it would, but just some mild randomness in placement and orientation, plus applying a variety of dirt/grime, would achieve the same result (sketched below).

Sledgehammers and nuts etc.
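A minimal sketch of that lighter-weight approach (again plain C++ with illustrative names, not engine code): hash each parking-spot index into a small position/rotation offset, so identical cars stop lining up perfectly and no ML is involved at all.

```cpp
#include <cstdint>

// Cheap integer hash (xorshift-multiply style) -> float in [0,1).
static float Hash01(uint32_t x) {
    x ^= x >> 16; x *= 0x7feb352dU;
    x ^= x >> 15; x *= 0x846ca68bU;
    x ^= x >> 16;
    return (x >> 8) * (1.0f / 16777216.0f);
}

struct SpotJitter {
    float dx, dy;     // metres of positional offset within the spot
    float yawDegrees; // slight rotation away from perfectly straight
};

// Deterministic per-spot jitter: the lot looks hand-parked, and the
// same spot always produces the same imperfection across sessions.
SpotJitter JitterForSpot(uint32_t spotIndex) {
    SpotJitter j;
    j.dx         = (Hash01(spotIndex * 3u + 0u) - 0.5f) * 0.4f; // +/- 20 cm
    j.dy         = (Hash01(spotIndex * 3u + 1u) - 0.5f) * 0.6f; // +/- 30 cm
    j.yawDegrees = (Hash01(spotIndex * 3u + 2u) - 0.5f) * 6.0f; // +/- 3 degrees
    return j;
}
```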
 
... sub-1440p rendering (4K with temporal reconstruction) will be the flavor of the day for console gamers wanting anything looking remotely like The Matrix Awakens demo ... I wouldn't be surprised if the next-generation AMD/Nvidia GPUs still struggle to achieve a native 4K image at 60fps without PC gamers making concessions on IQ settings
Honestly, I don't think 4K "native" (if such a thing even has a well-defined meaning these days) is even a target anymore. It's just not a good use of FLOPS compared to smart temporal upscaling. Yes, there will always be cases where one can find undersampling and aliasing (even at 4K or any other sampling resolution), but modern frames are already a combination of so many different passes and sampling rates that the notion there's even such a thing as "native" is somewhat dated. I don't think it's even fair to call it a "concession" any more.

I imagine no upscaling will remain the realm of "sure, I have extra performance to spare in this particular game, so why not" on PC, but as far as games and engines are able to scale their own quality settings into the appropriate performance profile, that will almost always be preferable to 4K "native" for image quality, with some rare exceptions.
 
I guess the logic behind Nanite came with a trade-off that brings more benefits than costs. Its LOD and culling system, since it treats polygons like pixels, I suppose changed the way we try to extract detail: it punches in more detail than we could ever imagine, at multiple distances, consistently, because it's now driven by pixel density. If we had continued with traditional engines, surely we could have reached 4K at 60fps or 30fps, but would that have produced the same detailed environments, or any significantly perceived detail, considering the diminishing returns from higher resolution? The resolution would have been there, but the detail to accompany it would have been less.
Now 1440p may be the sweet spot for this engine, as it also dictates the maximum number of polygons visible on screen at any time, for static assets at least. And that's a lot of geometry.
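A rough sketch of that "detail driven by pixel density" idea (plain C++, heavily simplified compared to how Nanite actually works): pick the coarsest LOD whose geometric error projects to under about one pixel, so on-screen triangle density tracks output resolution rather than fixed per-asset LOD distances.

```cpp
#include <cmath>

// Pick the coarsest LOD whose geometric error projects to < ~1 pixel.
// worldErrors[i] = max simplification error (metres) of LOD i, ordered
// from finest (smallest error) to coarsest (largest error).
int SelectLod(const float* worldErrors, int lodCount,
              float distanceToCluster, float screenHeightPx, float fovYRadians)
{
    // Pixels per metre of world-space error at this distance:
    const float pxPerMetre =
        screenHeightPx / (2.0f * distanceToCluster * std::tan(fovYRadians * 0.5f));

    int chosen = 0; // fall back to the finest LOD if nothing coarser fits
    for (int i = lodCount - 1; i >= 0; --i) {  // try coarsest first
        if (worldErrors[i] * pxPerMetre < 1.0f) { // error stays sub-pixel?
            chosen = i;
            break;
        }
    }
    return chosen;
}
```

Because the threshold is expressed in pixels, rendering at 1440p instead of 4K automatically shrinks the geometry budget as well, which is part of why the output resolution also dictates how many polygons end up on screen.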
 
Honestly, I don't think 4K "native" (if such a thing even has a well-defined meaning these days) is even a target anymore. It's just not a good use of FLOPS compared to smart temporal upscaling. Yes, there will always be cases where one can find undersampling and aliasing (even at 4K or any other sampling resolution), but modern frames are already a combination of so many different passes and sampling rates that the notion there's even such a thing as "native" is somewhat dated. I don't think it's even fair to call it a "concession" any more.

I imagine no upscaling will remain the realm of "sure, I have extra performance to spare in this particular game, so why not" on PC, but as far as games and engines are able to scale their own quality settings into the appropriate performance profile, that will almost always be preferable to 4K "native" for image quality, with some rare exceptions.

More samples are always nice. Tons of speckly ghost trails everywhere from old shading data and whatnot are distracting, not to mention trying to get to Hollywood-quality image stability. But between looking like a PS4-era game with a lot of samples and a PS5-era game, I'm guessing most people would take the latter.

Besides, more can always be done for better AA. Better sampling noise patterns, better history rejection, new filtering stuff; the tricubic filtering for volumetric stuff shown off in Ghost of Tsushima is great. Between being hyper-sharp and actually utilizing "4K", and having a blurrier but stabler image, I imagine most people would take the latter as well.
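For the "better history rejection" part, the baseline most TAA variants build on is neighbourhood clamping: the reprojected history colour is constrained to the colour range of the current frame's 3x3 neighbourhood, which is exactly what keeps stale shading data from leaving long ghost trails. A hedged sketch in plain C++ (a real implementation lives in a shader, and production versions usually clip in YCoCg space rather than clamp in RGB):

```cpp
#include <algorithm>

struct Color { float r, g, b; };

// Clamp the reprojected history sample to the min/max colour box of the
// current frame's 3x3 neighbourhood, then blend. History colours outside
// the box are almost certainly stale (disocclusion, lighting change),
// which is what causes the speckly ghost trails mentioned above.
Color ResolveTaa(const Color n[9] /* 3x3 neighbourhood, n[4] = centre */,
                 Color history, float blend /* e.g. 0.9 = 90% history */)
{
    Color lo = n[0], hi = n[0];
    for (int i = 1; i < 9; ++i) {
        lo = { std::min(lo.r, n[i].r), std::min(lo.g, n[i].g), std::min(lo.b, n[i].b) };
        hi = { std::max(hi.r, n[i].r), std::max(hi.g, n[i].g), std::max(hi.b, n[i].b) };
    }
    Color h = { std::clamp(history.r, lo.r, hi.r),
                std::clamp(history.g, lo.g, hi.g),
                std::clamp(history.b, lo.b, hi.b) };
    const Color& c = n[4];
    return { c.r + (h.r - c.r) * blend,
             c.g + (h.g - c.g) * blend,
             c.b + (h.b - c.b) * blend };
}
```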
 
Besides, more can always be done for better AA.
Yes, like AI upsampling. ;)

We've seen how good DLSS is in this regard. We just need other reconstruction techniques to catch up. I think it'll be a continuing development this gen and become the norm before too long. Looking at UE5's TAAU, or what Insomniac were doing with Ratchet and Clank, I think we're in pretty good shape here.
 
Besides, more can always be done for better AA. Better sampling noise patterns, better history rejection, new filtering stuff; the tricubic filtering for volumetric stuff shown off in Ghost of Tsushima is great. Between being hyper-sharp and actually utilizing "4K", and having a blurrier but stabler image, I imagine most people would take the latter as well.
Absolutely, and that's part of the point. Even if I wanted to "shade" (let's pretend that's just a single sampling rate again for laughs) 4k worth of pixels, it will still look much better if they are not in a regular grid. This can be sort of subtle with primary rays but I remember being struck back in the day when I compared filterable shadows w/ 4x MSAA to just doubling the resolution (i.e. same sample count but a regular grid). Despite the fact that the former were effectively "down-sampled" to half res before being used, they looked *way* better than the same number of pixels on a regular grid, especially in motion.
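The win from the irregular pattern is easy to show numerically: a 4x rotated-grid pattern has four distinct projections on each axis, while a 2x2 regular grid has only two, so near-horizontal and near-vertical edges get twice the coverage gradations from the same sample count. A small illustrative sketch (plain C++; the offsets are the classic RGSS-style pattern):

```cpp
#include <cstdio>

// Sample offsets within a pixel, in [0,1)^2.
// 2x2 regular grid: only 2 distinct x and 2 distinct y positions.
static const float kRegular[4][2] = {
    {0.25f, 0.25f}, {0.75f, 0.25f},
    {0.25f, 0.75f}, {0.75f, 0.75f},
};
// 4x rotated grid (RGSS-style): 4 distinct x and 4 distinct y positions.
static const float kRotated[4][2] = {
    {0.125f, 0.625f}, {0.375f, 0.125f},
    {0.625f, 0.875f}, {0.875f, 0.375f},
};

// Count samples below a horizontal edge at height edgeY in the pixel:
// this is the pixel's coverage value for a near-horizontal silhouette.
static int Coverage(const float pattern[4][2], float edgeY) {
    int covered = 0;
    for (int i = 0; i < 4; ++i)
        if (pattern[i][1] < edgeY) ++covered;
    return covered;
}

int main() {
    // Sweep the edge through the pixel: the regular grid only ever
    // reports 0, 2 or 4 covered samples (3 gradations), while the
    // rotated grid reports 0..4 (5 gradations) from the same 4 samples.
    for (float y = 0.0f; y <= 1.0f; y += 0.125f)
        std::printf("edgeY=%.3f regular=%d rotated=%d\n",
                    y, Coverage(kRegular, y), Coverage(kRotated, y));
    return 0;
}
```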

There are definitely some fundamental tradeoffs between noise, ghosting and stability that we will be dealing with probably for decades to come, but I think it's already pretty clear that we can do much better than even a "raw native 4K raytraced" image, which of course looks quite aliased.
 