Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

Is he wrong?
I won't speak for him or Epic (usual disclaimer, etc.), but compared to the previous generation of consoles, yes, the SSDs offered by the new consoles (and PCs) are required to get the level of streaming that we want. It would not have been reasonable on consoles where you are required to be able to run off HDDs and optical drives.

I admit that the statement is a bit misleading in the context of demoing on the PS5, but I'll also point out that it was made clear at the time that it did not require the PS5 (among other places).

Anyway, marketing stuff aside, I do encourage people to just go play with it. The awesome part of it being out now is there's no more need for people to speculate.
 
https://www.gamesradar.com/epics-un...-ps5-vision-that-sony-has-only-told-us-about/

Nick Penwarden, vice president of engineering at Epic, also touched on the capabilities of PS5 and what it means for in-game visuals. "There are tens of billions of triangles in that scene," he says, referring to a room of statues in the aforementioned Unreal Engine 5 demo, "and we simply couldn't have them all in memory at once. So what we ended up needing to do is streaming in triangles as the camera is moving throughout the environment. The IO capabilities of PlayStation 5 are one of the key hardware features that enable us to achieve that level of realism."

Is he wrong?

There is a flaw in the logic here: referring to X instances of the same statue stresses neither memory nor storage. So I would assume this is probably a failure in reproducing what was initially meant. I mean, such interviews get edited and shortened, and journalists make lots of mistakes too.
 
Really appreciate all of the clarifying posts, Andrew Lauritzen!

There is a flaw in the logic here: referring to X instances of the same statue stresses neither memory nor storage. So I would assume this is probably a failure in reproducing what was initially meant. I mean, such interviews get edited and shortened, and journalists make lots of mistakes too.

Even so, it's not like Nanite is loading in the whole mesh and instancing it -- it's loading in different bits and different levels of the tree for each. The marketing guy probably slightly misspoke about what was or wasn't possible while trying to do his job of working the PS5 talking points in.

It's really not productive to dig into every single word every Epic employee said while promoting the announcement, especially not when we have source code, an engine, and technical deep dives to refer to.
 
Even so, it's not like Nanite is loading in the whole mesh and instancing it
If you are inside the room of statues, it surely does. So even technically it's a somewhat bad example.
The statue reminded me of the Infinite Detail demos and all the talking down that followed for 'showing just repetition'.

Well, marketing is just never good. :)
 
You either accept the facts or you don’t.

They're still hung up on '2020 UE5 demo only possible on PS5, etc.' even though that actual demo ran even better on a 2080 Max-Q laptop.

Probably because the demo last year was all about how this was only possible due to the PS5's insanely fast IO.

Nah, there are even faster NVMe drives on PC, especially once DirectStorage arrives (with next-gen Windows very soon). The PS5's IO seems kinda meh already.
 
Funny thing is, we've said all that stuff several times in the past: crazy IO is not required to run the demo, stuffing data in VRAM/RAM could substitute for high IO, a high-end laptop (mid-range desktop) could run it just fine, closed demos are nothing like full-fledged games, and implementations and performance will differ greatly. Wait for final results in a shipped game.

Now that these points are settled, it's time to focus on what matters: will software rasterization substitute for hardware rasterization? Are mesh shaders a good alternative to Nanite?

Do we expect hardware RTGI to get mature enough to be a good alternative to Lumen within UE5? Metro Exodus showed how hardware RTGI can scale from Medium to High to Ultra with impressive performance results; can the same be achieved within UE5 over time?
 
If you are inside the room of statues, it surely does. So even technically it's a somewhat bad example.

This doesn't really matter for the point you're making, but I don't think it does. Pretty sure the result would be that it loads in, say, LOD 0 of the clusters of the statues that you're extremely close to, LOD 1 for clusters further away, etc. In zoomed-out shots where you're looking down on hundreds of statues, at most you'd still only be loading the low-res LODs, and only of the clusters that are facing you. The worst case would be if you were super close to several statues that were all rotated differently, and even then it's unlikely you'd load all of the LOD 0 clusters contained in the statue model.
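A minimal sketch of that idea (not Nanite's actual code; the structures, names, and error metric here are all made up for illustration): each cluster picks a LOD from a view-dependent error budget, so only clusters near the camera ever request their finest data from disk, even if every statue is the same instanced mesh.

```cpp
// Hypothetical structures and error metric -- not Epic's implementation.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Cluster {
    float center[3];   // bounding-sphere center of the cluster
    float radius;      // bounding-sphere radius
    float geomError;   // world-space simplification error of the coarsest LOD
    int   lodLevels;   // number of LOD levels stored on disk for this cluster
};

// Pick how fine a LOD this cluster needs for the current view. Clusters far from
// the camera tolerate a larger world-space error, so they stay coarse and only the
// nearby clusters request their finest pages.
int selectLod(const Cluster& c, const float camPos[3], float errorPerUnitDistance) {
    float dx = c.center[0] - camPos[0];
    float dy = c.center[1] - camPos[1];
    float dz = c.center[2] - camPos[2];
    float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    float allowedError = errorPerUnitDistance * std::max(dist - c.radius, 0.01f);

    // Assume each finer LOD level halves the geometric error.
    int lod = 0;                                              // 0 = finest
    float err = c.geomError / float(1 << (c.lodLevels - 1));  // error of the finest level
    while (lod < c.lodLevels - 1 && err * 2.0f <= allowedError) {
        err *= 2.0f;   // going one level coarser still fits the budget
        ++lod;
    }
    return lod;
}

int main() {
    std::vector<Cluster> statues = {
        {{0.0f, 0.0f,   2.0f}, 1.0f, 0.5f, 6},  // statue right in front of the camera
        {{0.0f, 0.0f, 100.0f}, 1.0f, 0.5f, 6},  // identical statue far in the background
    };
    const float cam[3] = {0.0f, 0.0f, 0.0f};
    for (size_t i = 0; i < statues.size(); ++i)
        std::printf("statue %zu -> stream LOD %d\n", i, selectLod(statues[i], cam, 0.002f));
    return 0;
}
```

Running it, the near statue requests LOD 0 while the identical far statue settles for a coarse LOD, which is the whole point: instancing alone doesn't decide what gets streamed, the view does.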


Now that these points are settled, it's time to focus on what matters: will software rasterization substitute for hardware rasterization? Are mesh shaders a good alternative to Nanite?
1- Yes, for small triangles (a minimal sketch of what that looks like is at the end of this post).
2- No, not for what Nanite is used for, but yes, culling with mesh shaders will produce games that look similarly good (but can't stream nearly as well).
Do we expect hardware RTGI to get mature enough to be a good alternative to Lumen within UE5? Metro Exodus showed how hardware RTGI can scale from Medium to High to Ultra with impressive performance results; can the same be achieved within UE5 over time?

RTGI is plenty mature -- the question is whether Nanite will mature in such a way that RTGI is complementary, or whether any competing technique will provide environments that look anywhere near as good as Nanite but are RT-friendly.

I'm guessing:
1- Yes, we'll see games using Nanite with good hardware RT, from artists being very careful about avoiding BVH worst cases for static geo if nothing else.
2- No, we won't see any environments using traditional tech that look as good as Nanite, due to streaming limitations. Maybe on PS5 only.
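As promised above, here is a minimal sketch of what "software rasterization for small triangles" means in practice. It's my own illustration, loosely modeled on the compute-rasterizer idea of packing depth and a payload into a 64-bit value and resolving visibility with an atomic max; none of it is Epic's code, and the names are made up.

```cpp
// My illustration, not Epic's code: a tiny screen-space triangle is scanned over its
// bounding box and visibility is resolved with a packed 64-bit depth|id "atomic max",
// roughly how a compute-shader rasterizer resolves overlapping fragments.
#include <algorithm>
#include <atomic>
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <vector>

constexpr int W = 16, H = 16;
std::vector<std::atomic<uint64_t>> framebuffer(W * H); // high 32 bits: depth, low 32: triangle id

float edgeFn(const float a[3], const float b[3], float px, float py) {
    return (px - a[0]) * (b[1] - a[1]) - (py - a[1]) * (b[0] - a[0]);
}

// Emulates an atomic max on a 64-bit slot via a compare-and-swap loop.
void atomicMax64(std::atomic<uint64_t>& slot, uint64_t value) {
    uint64_t prev = slot.load();
    while (prev < value && !slot.compare_exchange_weak(prev, value)) {}
}

// v: three screen-space vertices, x/y in pixels, z in [0,1] with larger meaning closer.
void rasterizeSmallTriangle(const float v[3][3], uint32_t triId) {
    int x0 = std::max(0, (int)std::floor(std::min({v[0][0], v[1][0], v[2][0]})));
    int x1 = std::min(W, (int)std::ceil (std::max({v[0][0], v[1][0], v[2][0]})));
    int y0 = std::max(0, (int)std::floor(std::min({v[0][1], v[1][1], v[2][1]})));
    int y1 = std::min(H, (int)std::ceil (std::max({v[0][1], v[1][1], v[2][1]})));
    float area = edgeFn(v[0], v[1], v[2][0], v[2][1]);
    if (area == 0.0f) return;
    for (int y = y0; y < y1; ++y)
        for (int x = x0; x < x1; ++x) {
            float px = x + 0.5f, py = y + 0.5f;
            float w0 = edgeFn(v[1], v[2], px, py);
            float w1 = edgeFn(v[2], v[0], px, py);
            float w2 = edgeFn(v[0], v[1], px, py);
            // Accept either winding: all edge weights share a sign inside the triangle.
            bool inside = (w0 >= 0 && w1 >= 0 && w2 >= 0) || (w0 <= 0 && w1 <= 0 && w2 <= 0);
            if (!inside) continue;
            float z = (w0 * v[0][2] + w1 * v[1][2] + w2 * v[2][2]) / area;
            uint64_t packed = ((uint64_t)(z * 4294967295.0f) << 32) | triId;
            atomicMax64(framebuffer[y * W + x], packed);    // closest fragment wins
        }
}

int main() {
    for (auto& px : framebuffer) px.store(0);
    float tri[3][3] = {{3, 3, 0.8f}, {6, 3, 0.8f}, {4, 7, 0.8f}}; // a few-pixel triangle
    rasterizeSmallTriangle(tri, 42);
    int covered = 0;
    for (auto& px : framebuffer) covered += (px.load() != 0);
    std::printf("pixels covered by triangle 42: %d\n", covered);
    return 0;
}
```

The reason this shape works for small triangles is that the per-triangle setup cost dominates a fixed-function pipeline when a triangle only covers a handful of pixels, while the inner loop here touches exactly those few pixels and nothing else.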
 

Not wrong, but it has since been clarified that the streaming they want requires an SSD; it does not require the PS5's SSD or its level of performance.

The comment was in relation to the much slower HDD speeds that existed on the PS4 (and by extension other consoles).

So yes, the PS5's IO did certainly enable them to achieve that level of realism (compared to PS4 and prior consoles). What was left unsaid was that any modern IO using SSD would enable them to achieve that level of realism.

R&C is likely pushing the PS5 IO subsystem significantly harder than the UE5 reveal that was demoed on the PS5.

Regards,
SB
 
Any chance of seeing this video you swear by?

Anything.........a gif maybe?!?

We’ve still not seen it! Until you can show us all the mighty demo running on the meagre laptop, you’re flapping your gums!

You also keep changing your story on what exactly should be compared.

Might have been nice if you'd done a Google search or perhaps just checked earlier in this thread before suggesting people are making things up.

https://forum.beyond3d.com/posts/2186917/
 
I honestly have no idea anymore why people think UE5 or last year's demo is only possible on the really high-IO NVMe found in the PS5. All the reasonable evidence about the purpose of virtual texturing or geometry says it works fine elsewhere, and Brian Karis very purposefully went on a debunking tour of it all on PC.

It is a multiplatform engine, after all - it was just a marketing stunt to show it off on PS5 first vs. any other highly capable hardware.
 
I honestly have no idea anymore why people think UE5 or last year's demo is only possible on the really high-IO NVMe found in the PS5. All the reasonable evidence about the purpose of virtual texturing or geometry says it works fine elsewhere, and Brian Karis very purposefully went on a debunking tour of it all on PC.

It is a multiplatform engine, after all - it was just a marketing stunt to show it off on PS5 first vs. any other highly capable hardware.

Because the Epic/Sony PR machine tried to give it that spin.
Hence why I have stated before that it was "politics" that got the video of the UE5 demo running on a laptop downplayed.
Cannot have a "next-gen" toybox...and have it beaten by last-gen PC hardware...bad PR.

And it seems to stick very efficiently to the most hardcore fanboys...even today now that there is no "special sauce" in the toyboxes...ruffled feathers, hurt feelings...the usual fallout ;)
 
Just because the current demo loads everything to VRAM doesn't mean all future games built on UE5 will work that way.

It's a lot like Ridge Racer containing the whole game in the PS1's RAM. This isn't a game, it's an engine demonstrating two brand new technologies.

Games certainly will use the SSD. Fully saturate it? Dunno, we'll have to wait and see.

I want to see Wipeout on this engine.

Outside of extreme cases like teleportation and portals, the PS5's fast I/O is not needed. Like I said in another thread, if we take the average compression ratio of the RAD tools and the throughput Sony says the SSD is capable of, you could in theory read the full data of R&C Rift Apart in 3 to 4 seconds.

They built the SSD to be able to cover these extreme cases, but I doubt you even approach 8 GB/s during streaming - probably far from it, maybe 1 GB/s or less.
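Roughly the arithmetic behind that claim. Every figure below is an assumption for illustration: the raw rate is Sony's quoted number, the compression ratio is a guess at a typical Kraken-style average, and the install size is a guess, not Insomniac's actual data.

```cpp
// Back-of-the-envelope only; every figure below is an assumption, not a measurement.
#include <cstdio>

int main() {
    const double rawSsdGBps       = 5.5;  // Sony's quoted raw SSD read rate
    const double compressionRatio = 1.6;  // guessed average Kraken-style ratio
    const double effectiveGBps    = rawSsdGBps * compressionRatio; // ~ the "8-9 GB/s typical" figure
    const double installSizeGB    = 33.0; // guessed compressed install size

    std::printf("whole install read in %.1f s at %.1f GB/s effective\n",
                installSizeGB / effectiveGBps, effectiveGBps);
    std::printf("even sustained streaming at 1 GB/s cycles it in %.0f s\n",
                installSizeGB / 1.0);
    return 0;
}
```

With those assumed numbers it lands in the 3-4 second ballpark quoted above, which is why sustained streaming in actual play is expected to sit far below the headline figures.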

EDIT: Read this Twitter thread from Fabian Giesen, who worked on Oodle Kraken and the PS5 decompressor.

EDIT: On the last tweet - people keep treating the PS5's theoretical 22 GB/s hardware decompressor figure as if it can actually be reached. :LOL:
 
The worst case would be if you were super close to several statues that were all rotated differently
You can also say that's a best case, because then statue instances are likely all you see, so there is no need to have all those other models in the background.
The real question is: how many different statues can we have? I guess it will take some time until we recognize those limitations. Maybe the PS6 hype is 1 TB cartridges, or data streaming becomes essential, or we get breakthroughs in compression ideas.
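To put toy numbers on "how many different statues": all three figures below are assumptions, purely for illustration; the real per-triangle disk cost depends entirely on the engine's compression.

```cpp
// Toy numbers only -- per-triangle disk cost, triangle count, and budget are all assumptions.
#include <cstdio>

int main() {
    const double bytesPerTriangleOnDisk  = 15.0;   // assumed compressed cost per triangle
    const double trianglesPerUniqueAsset = 1.0e6;  // assumed source-quality mesh size
    const double diskBudgetGB            = 100.0;  // assumed budget for unique geometry

    const double mbPerAsset   = bytesPerTriangleOnDisk * trianglesPerUniqueAsset / 1.0e6;
    const double uniqueAssets = diskBudgetGB * 1024.0 / mbPerAsset;
    std::printf("~%.0f MB per unique asset -> roughly %.0f unique assets fit the budget\n",
                mbPerAsset, uniqueAssets);
    return 0;
}
```

So with those made-up figures you get thousands of unique million-triangle assets per install, not millions - which is why heavy instancing (or better compression, or streaming from elsewhere) stays part of the picture.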
Now that these points are settled, it's time to focus on what matters: will software rasterization substitute for hardware rasterization? Are mesh shaders a good alternative to Nanite?
Likely we need to see still more examples like Dreams or UE5 before we can believe what is starting to make sense.
If we have consistent detail per pixel there is no more need for triangles, and drawing points needs no HW acceleration. Mesh shaders are obsolete as well. But I really want to extend the mesh shader execution model to compute. Currently we need to make too many round trips over VRAM for any kind of data flow which does not fit into a single shader (which often is just a small part of a program). This really feels restricted, kinda dumb, and has to be addressed. In this sense, amplification shaders stand out and raise hopes. Restricting this to 'just rasterization' feels like a missed opportunity. I'd say improvements there (and also having coarser options so GPUs can operate independently of CPU control) are more urgent than removing ROPs.
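A CPU-side caricature of that round-trip point (purely my illustration; the expansion function and sizes are made up): today's compute model often forces a producer pass to materialize its whole output in memory before a consumer pass can run, whereas an amplification/mesh-style producer-consumer arrangement lets the consumer run as the data is produced.

```cpp
// CPU-side caricature only -- the expansion function and sizes are made up.
#include <cstdio>
#include <vector>

// How many work items each input element expands into (think: cluster -> triangles).
int expandCount(int v) { return v % 4; }

int main() {
    std::vector<int> input(1000);
    for (int i = 0; i < 1000; ++i) input[i] = i;

    // Shape 1: pass A materializes ALL intermediate items in a big buffer ("VRAM"),
    // then pass B reads them back in a second dispatch. Traffic scales with the expansion.
    std::vector<int> intermediate;
    for (int v : input)
        for (int k = 0; k < expandCount(v); ++k) intermediate.push_back(v);
    long long sumTwoPass = 0;
    for (int v : intermediate) sumTwoPass += v;

    // Shape 2 (what an amplification/mesh-style producer-consumer allows): each item is
    // consumed right after it is produced, so the full intermediate set never has to exist.
    long long sumFused = 0;
    for (int v : input)
        for (int k = 0; k < expandCount(v); ++k) sumFused += v;

    std::printf("same result (%lld vs %lld); intermediate buffer avoided: %zu items\n",
                sumTwoPass, sumFused, intermediate.size());
    return 0;
}
```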
Do we expect hardware RTGI to get mature enough to be a good alternative to Lumen within UE5? Metro Exodus showed how hardware RTGI can scale from Medium to High to Ultra with impressive performance results; can the same be achieved within UE5 over time?
To me it seems both Metro's and UE5's GI are 'slow', while both look good.
Of course we want full RT support for UE, including Nanite detail, because RT's biggest strength is high-frequency detail. There are no alternatives which can give exact results here.
Currently, the issues can be fixed: BVH build is done in software on all architectures, and the problem only exists on PC due to the API.
However, I really hope the debate helps people understand why it is too early to implement HW BVH builders or ray reordering. I'm afraid NV may do this soon, and then the doors are probably closed. The resulting limitations are the reason there are 20 years between Messiah and UE5.
 
To me it seems both Metro's and UE5's GI are 'slow', while both look good.
Of course we want full RT support for UE, including Nanite detail, because RT's biggest strength is high-frequency detail. There are no alternatives which can give exact results here.
Currently, the issues can be fixed: BVH build is done in software on all architectures, and the problem only exists on PC due to the API.
However, I really hope the debate helps people understand why it is too early to implement HW BVH builders or ray reordering. I'm afraid NV may do this soon, and then the doors are probably closed. The resulting limitations are the reason there are 20 years between Messiah and UE5.

Sure there are alternatives! Throw a high enough order basis function at virtual point lights and you've got your alternative, and you've got a full material representation that follows any BRDF and covers the entire roughness range. Dreams could probably add high-quality reflections straight off: the texturing is already applied to the rendering representation, high detail for direct or indirect lighting already seems fully enabled, and there are already detail-traced shadows.

Hardware raytracing is really only useful if you have a standard triangle-only representation. If you're already going for different representations you've probably broken it in some way, which brings us once again to "fixed-function hardware bad". Plenty of engines still have traditional triangle meshes though, so it's not the hardest thing to add a BVH. Also, some people argue for bounding-box BVHs for their nice sparsity - there's no "teapot in a stadium" problem with a BVH - so the idea might be to go through the upper-level BVH with hardware RT, then ditch the polys and leaves for some other representation and trace it through software.
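A minimal sketch of that hybrid idea (my own illustration with made-up structures, not any shipping engine's traversal): the top-level instance bounds are tested first, the part hardware RT handles well, and anything that survives is handed to a software leaf intersector against whatever representation the engine actually stores; a sphere stands in for that here.

```cpp
// Illustration only: made-up structures, not any shipping engine's traversal.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Ray  { float o[3], d[3]; };                 // origin and (unit) direction
struct Aabb { float lo[3], hi[3]; };               // one top-level instance bound

// Stand-in for the hardware-accelerated part: slab test against an instance AABB.
bool hitAabb(const Ray& r, const Aabb& b) {
    float tmin = 0.0f, tmax = 1e30f;
    for (int i = 0; i < 3; ++i) {
        float inv = 1.0f / r.d[i];
        float t0 = (b.lo[i] - r.o[i]) * inv, t1 = (b.hi[i] - r.o[i]) * inv;
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
    }
    return tmin <= tmax;
}

// Software "leaf" intersector against the instance's own representation; a sphere
// stands in here for an SDF, point cloud, or whatever the engine really stores.
bool hitLeafSphere(const Ray& r, const float c[3], float rad, float& tHit) {
    float oc[3] = {r.o[0] - c[0], r.o[1] - c[1], r.o[2] - c[2]};
    float b  = oc[0] * r.d[0] + oc[1] * r.d[1] + oc[2] * r.d[2];
    float cc = oc[0] * oc[0] + oc[1] * oc[1] + oc[2] * oc[2] - rad * rad;
    float disc = b * b - cc;
    if (disc < 0.0f) return false;
    tHit = -b - std::sqrt(disc);
    return tHit > 0.0f;
}

int main() {
    std::vector<Aabb> topLevel = {{{-1, -1, 4}, {1, 1, 6}}, {{9, 9, 9}, {11, 11, 11}}};
    Ray ray = {{0, 0, 0}, {0, 0, 1}};
    for (size_t i = 0; i < topLevel.size(); ++i) {
        if (!hitAabb(ray, topLevel[i])) continue;           // "hardware" top-level cull
        float center[3] = {(topLevel[i].lo[0] + topLevel[i].hi[0]) * 0.5f,
                           (topLevel[i].lo[1] + topLevel[i].hi[1]) * 0.5f,
                           (topLevel[i].lo[2] + topLevel[i].hi[2]) * 0.5f};
        float t;
        if (hitLeafSphere(ray, center, 1.0f, t))            // software leaf trace
            std::printf("instance %zu hit at t = %.2f\n", i, t);
    }
    return 0;
}
```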

So hardware RT is the easiest thing to add and might have a place in hybrid pipelines. But it is hard to get around its severe performance limitations, hence the need for some sort of hybrid pipeline.
 
However, I really hope the debate helps people understand why it is too early to implement HW BVH builders or ray reordering. I'm afraid NV may do this soon, and then the doors are probably closed. The resulting limitations are the reason there are 20 years between Messiah and UE5.

It doesn't necessarily close the door if this happens, but it would certainly put some roadblocks in place on PC.

Because game developers don't have access to NV hardware on consoles, consoles will remain the primary design constraint WRT graphics for most game developers. If something is too difficult to make work on console, then it is unlikely to be used as a basis for a graphics rendering engine. Instead it'll be relegated to something that can be tacked on after the fact, like RT prior to the release of the new consoles. Metro Exodus: Enhanced Edition likely wouldn't have existed as an RT-only title if the consoles couldn't do some level of RT, for example.

Basically, before the new consoles came out with RT support, RT in games was at best tacked on for most AAA developers. Now that consoles have RT hardware, developers can make an engine based on RT rather than treating RT as a tacked-on feature or alternate rendering path.

In that regard, it's more important to see whether AMD puts limitations on how things are done as those limitations will absolutely limit what the vast majority of game developers use as the basis for their graphics rendering engines.

That said, if AMD only does what NV does, then yes, any limitations of how NV do things will be overall limitations on what can be done.

Regards,
SB
 
In that regard, it's more important to see whether AMD puts limitations on how things are done as those limitations will absolutely limit what the vast majority of game developers use as the basis for their graphics rendering engines

Maybe I’m not following but AMD already imposes no limits on how things are done. Game engines are free to implement pure compute based pipelines as evidenced by Nanite and Lumen.

Investments in hardware accelerated paths won’t close the door on improvements to general compute. We’ve had continuous improvements on both fronts since GPUs have existed.
 
Maybe I’m not following but AMD already imposes no limits on how things are done. Game engines are free to implement pure compute based pipelines as evidenced by Nanite and Lumen.

Investments in hardware accelerated paths won’t close the door on improvements to general compute. We’ve had continuous improvements on both fronts since GPUs have existed.
When was the last time the fixed-function rasterizer blocks saw any notable improvements?
 