Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

Really would be lovely if companies could release higher-quality versions of their videos for stuff like this. Streaming is convenient, but for those of us who want the best quality, it should be well within their means to make a 4K, HDR, 10-bit video available in x265 at a high bitrate.

The Vimeo video is 4K, although I'm not sure of the bitrate. It looks good even running on my 8K monitor.

The two things that bothered me were the water effects and climbing, if we have to nitpick.
 
I don't disagree with your healthy scepticism, but did you watch the post-video interview? This is a playable tech demo that plays differently each time. It's not 100% scripted and pre-rendered. Sure, the insane ZBrush models and 8K textures are likely not representative of final games, but otherwise this is the tech demo that comes closest to approximating actual games that we've seen in a long time.
Star Wars Reflections is playable, Unity's Book of the Dead (GDC 2018) is playable too, etc. Tech demos are glorified interactive real-time cinematics.
 
The two things that bothered me were the water effects and climbing, if we have to nitpick.
A few of us weren't fans of the water displacement. I wasn't watching the climbing animation; I was looking at all the insanely high-definition geometry.
 
If the PS5 version is mostly around 1440p, a 4TF system would be mostly around 900p everything else being equal. Drop the settings a notch and 1080p should be doable.

Everything in this engine is coming to the likes of AMD and Nvidia's mainstream cards, so there's no question that it can be done.
There may also be less pressure on the storage system as presumably LH would use lower resolution assets.
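As a rough back-of-the-envelope check of that claim, here's a sketch assuming pixel throughput scales linearly with compute and taking ~10.3 TF for the PS5 (both are simplifying assumptions, not established facts):

```python
import math

# Rough sketch: estimate a render height if pixel count scales linearly with
# compute. The 10.3 TF reference figure and linear scaling are assumptions.
def scaled_height(ref_height_px, ref_tflops, target_tflops):
    # Pixel count scales with compute; at a fixed aspect ratio,
    # height scales with the square root of the pixel count.
    return ref_height_px * math.sqrt(target_tflops / ref_tflops)

print(round(scaled_height(1440, 10.3, 4.0)))  # roughly 900p
```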
 
I need to know if I'm the only one who hates and absolutely despises this trope of "the world is falling on your head". Why is this so common in games, and why do game developers think it looks cool? It's a waste of work and effort, IMO.
Isn't it to create urgency? To adjust the pace of gameplay so it doesn't become pedestrian.
 
I'm afraid I'm missing your point. Are you saying that a game running UE4, like FF7R, should run on iOS and Android just because the engine supports them?

Regardless, I'm not disputing that the engine supports low-tier devices, but obviously those devices won't be able to do everything top-tier ones can. You are apparently saying they can.

If that's so, what's the point of a new generation? Just keep pumping games for the current one with UE5!!
I'm just saying you can down-port by reducing the graphical quality.
So the same set of technologies that works on PS5 would work the same way on an Android device, just scaled down significantly. But the pipeline is the same.
Libreri himself has an award-winning background in the movie business, an area where Unreal Engine 4 is proving increasingly influential. He's particularly enthused about the idea of a universal workflow that allows for assets of identical quality to be used in all areas. "A lot of this came from the fact that we have these two extremes. We have people making mobile games on UE4 and we have people making the Mandalorian on UE4 and trying to work out how can we have a unified way of everybody working, so there's not this pressure point," he says. "You know, people's time should be spent on making awesome games and awesome gameplay and not necessarily on the minutiae of asset creation, the tedium because we have these old techniques from over a decade ago that were necessary to be deployed to be able to produce your environments."

So instead of using 8K textures, you're using 1K or 2K texture sizes.
Instead of 33 million polygon statues, maybe you're only using 300K polygon statues.

Let the engine do the rest.
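For a sense of how much that scaling buys, a quick sketch (uncompressed RGBA8 is assumed purely for illustration; real assets would be block-compressed):

```python
# Sketch of the savings from scaling assets down, using the numbers from the
# post above. Uncompressed RGBA8 (4 bytes/pixel) is an illustrative assumption.
def texture_bytes(side_px, bytes_per_px=4):
    return side_px * side_px * bytes_per_px

ratio_tex = texture_bytes(8192) / texture_bytes(1024)   # 8K vs 1K texture
ratio_tris = 33_000_000 / 300_000                       # statue poly counts
print(ratio_tex, ratio_tris)  # 64.0 110.0
```

So dropping from 8K to 1K textures is a 64x reduction in texture data, and 33M to 300K polygons is a 110x reduction in geometry.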
 
A few of us weren't fans of the water displacement. I wasn't watching the climbing animation; I was looking at all the insanely high-definition geometry.
The environment looks awesome, which is the point of the demo. Not the character animation. If games reach that fidelity in a few years that will truly be something.
 
I am trying to understand how this works. How is it so unconventionally efficient that it allows assets with 8K textures and billions of polygons, without the need for LODs, plus real GI?
How does it optimize such high-density information so it can be processed fast enough by something barely as powerful as a workstation?
It sounds like magic. Are bottlenecks and traditional time-consuming workflows a thing of the past?
My mind can't wrap around how it functions.
 
The environment looks awesome, which is the point of the demo. Not the character animation. If games reach that fidelity in a few years that will truly be something.
Yeah, I was almost exclusively focussed on the environment. The water stood out, then a couple of folks - eastman for one - commented that the protagonist looked out of place, and on a quick revisit I could see what he meant. I feel that Epic wanted us to look anywhere but at the protagonist!
 
I don't disagree with your healthy scepticism, but did you watch the post-video interview? This is a playable tech demo that plays differently each time. It's not 100% scripted and pre-rendered. Sure, the insane ZBrush models and 8K textures are likely not representative of final games, but otherwise this is the tech demo that comes closest to approximating actual games that we've seen in a long time.

I think even if they end up having to scale down poly counts and texture quality, you'll still eliminate pop-in from LOD swaps and have much higher poly counts up close to the camera, instead of normal maps that look ugly up close.
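To illustrate where that pop-in comes from, here's a toy sketch of conventional discrete LOD selection by projected screen-space error (all numbers and thresholds are invented): the whole mesh swaps at a distance boundary, and that swap is the visible pop. A pipeline that picks detail at a much finer granularity never makes such a large discrete jump.

```python
import math

# Toy model of discrete LOD selection. Error values, FOV, and the one-pixel
# threshold are all invented for illustration.
def screen_error_px(geom_error_m, distance_m, fov_deg=90.0, screen_h_px=1440):
    # Approximate on-screen height (in pixels) of a world-space error.
    view_height_m = 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)
    return geom_error_m / view_height_m * screen_h_px

def pick_lod(lod_errors_m, distance_m, threshold_px=1.0):
    # lod_errors_m[0] is the finest mesh; pick the coarsest (cheapest) LOD
    # whose projected error stays under the pixel threshold.
    choice = 0
    for i, err in enumerate(lod_errors_m):
        if screen_error_px(err, distance_m) <= threshold_px:
            choice = i
    return choice

errors = [0.001, 0.004, 0.016, 0.064]   # finest to coarsest, in metres
print(pick_lod(errors, 2.0))   # 0: finest mesh up close
print(pick_lod(errors, 50.0))  # 3: coarsest mesh far away
```

Crossing the distance where `pick_lod` changes its answer swaps the entire mesh in one frame, which is exactly the pop players notice.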
 
Yeah, I was almost exclusively focussed on the environment. The water stood out, then a couple of folks - eastman for one - commented that the protagonist looked out of place, and on a quick revisit I could see what he meant. I feel that Epic wanted us to look anywhere but at the protagonist!
Programmer art? :p It looked about as goofy and exaggerated as that one scene from Jedi: Fallen Order last year (I don't know if it was tweaked in the final game).
 
I would hate to be releasing a game in the next year or two that's too late in the dev cycle to abandon a more traditional geometry pipeline. People are going to freak out over pop-in if their expectations are set high, especially if they're releasing against games that are already built that way.
 
I'm just saying you can down-port by reducing the graphical quality.
So the same set of technologies that works on PS5 would work the same way on an Android device, just scaled down significantly. But the pipeline is the same.


So instead of using 8K textures, you're using 1K or 2K texture sizes.
Instead of 33 million polygon statues, maybe you're only using 300K polygon statues.

Let the engine do the rest.

What? So hardware features don't matter? Right... So AMD and Nvidia are stupid... They should just keep doing die shrinks of the same architecture and call it a day, pff... Why spend millions on research, if raw performance is the only thing that matters, since "the pipeline is the same"?
 
What? So hardware features don't matter? Right... So AMD and Nvidia are stupid... They should just keep doing die shrinks of the same architecture and call it a day, pff... Why spend millions on research, if raw performance is the only thing that matters, since "the pipeline is the same"?
That's why we have feature levels in DirectX.
If you have a DirectX 12 card, it supports the core features of DX12.
If you have a DirectX 12 Ultimate card, it supports the core features of DX12 Ultimate.

There are sub-features involved for all of these as well - feature levels like 12_1, 12_2, etc.

When compute shaders were introduced, that was the flexible glory everyone was looking for. You don't need fixed-function hardware for compute shaders to operate; the more compute shader operations you can process, the better.

If this technology only worked on DX12U, that wouldn't necessarily get people to move over to UE5, because the majority of compliant devices are closer to the base DX12 spec.

They made their own virtual texturing and virtual geometry pipeline. It may or may not leverage additional hardware features, but at least from what I can ascertain from reading, it does not require additional hardware features to run.
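A toy sketch of that idea: target the baseline feature set and treat newer hardware features as optional accelerators. All capability and path names below are invented for illustration; they are not real D3D12 caps or UE5 API:

```python
# Toy capability gating: require only the baseline feature set, and use
# newer features opportunistically. All names here are invented.
DX12_CORE = {"compute_shaders", "tiled_resources"}
DX12U_EXTRAS = {"mesh_shaders", "sampler_feedback", "dxr_1_1"}

def render_paths(device_caps):
    if not DX12_CORE <= device_caps:
        raise RuntimeError("device below baseline spec")
    paths = ["compute_geometry_pipeline"]          # runs on any core DX12 device
    if DX12U_EXTRAS <= device_caps:
        paths.append("hardware_accelerated_path")  # optional fast path
    return paths

print(render_paths(DX12_CORE))  # ['compute_geometry_pipeline']
```

The point is that the compute-based path is the requirement and everything above it is a bonus, so the install base isn't limited to DX12U hardware.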

I think you're debating with me about what's impressive about the PS5 - what the system is capable of achieving with this engine - while I'm debating what's impressive about the engine: that it can scale from Android devices up to render farms working on The Mandalorian.
 
I would hate to be releasing a game in the next year or two that's too late in the dev cycle to abandon a more traditional geometry pipeline. People are going to freak out over pop-in if their expectations are set high, especially if they're releasing against games that are already built that way.
Perhaps the SSDs will largely mitigate LOD and texture pop-in without any specific optimisations.
 
Not if features start getting left out of a game because the consoles can't run them well. The 360/PS3 held back the progression of graphics because they were on the market for so long, and the PS4/XBO are doing the same.

There's much more chance of features being left out if consoles target 4K and 60fps. If anything, the lower the frame rate and resolution on the console version, the more features.
 
Isn't it to create urgency? To adjust the pace of gameplay so it doesn't become pedestrian.
It's dumb, and I forgot to comment on something: footprints.
I noticed that they're missing from this impressive demo. Why? It was the first thing I noticed when watching the demo.
 
It's not going to be the same thing.
Right! When Rangers says, "why isn't this running on current gen," the reason is the hardware isn't fast enough. "Why isn't this tech in use today?" is a different question, to be followed with, "okay, if scaled down to current tech, what could be achieved with this streaming?" And that'll be something very cut down and quite different. I guess current gen could also render path-traced Minecraft if you go low-res enough; Doom was squeezed onto the Amiga at something like 120x80 resolution at 10fps.
 