Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

You’re not loading 20 million triangles per frame at 30fps from storage.
The majority of this is happening from memory. You’re only streaming new data in as it’s needed and evicting old data. There is still a buffer period. The PS5 can, in theory, get away with a buffer at most ~50% shorter than the XSX’s (very unlikely in practice). That doesn’t mean the buffer is gone.

The only way the SSD is the bottleneck on PS5 is if the drive has to run at a full 5.5 GB/s every single second without rest. Be reasonable. That’s not how virtual texture streaming works at all.

There is a large buffer with respect to the performance of the drive, or we would be hitching everywhere.
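To put rough numbers on that buffer argument, here's a back-of-envelope sketch; the 512 MB window is an illustrative assumption, and the 5.5/2.4 GB/s figures are the consoles' headline raw rates, not measured streaming throughput:

```python
# Back-of-envelope streaming-buffer model. The buffer size is an assumption
# for illustration, not a measured figure from either console.
def refill_time_s(buffer_mb, bandwidth_gbs):
    """Seconds for the drive to refill a fully drained streaming buffer."""
    return (buffer_mb / 1024) / bandwidth_gbs

# Hypothetical 512 MB streaming window:
ps5_refill = refill_time_s(512, 5.5)  # ~0.09 s at PS5's raw 5.5 GB/s
xsx_refill = refill_time_s(512, 2.4)  # ~0.21 s at XSX's raw 2.4 GB/s
# A ~2.3x faster drive refills the same window ~2.3x sooner, which lets
# the buffer be proportionally smaller. It doesn't make the buffer go away.
```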

It depends on the games.

I think the challenge is when the character keeps moving, teleporting, and changing direction/viewpoint. It's usually the edge cases that break, stall, or pop things in. It depends on how much freedom they need/want.

The other aspect is if it's user-generated assets, they may not be as optimized as developer-generated ones.

Plus whatever else the system is doing at the same time, causing resource and bandwidth contention. And how complex and open the world is.
 
The draw distance is fairly simple to figure out. It's Reyes-style: the smallest a polygon can get is one pixel wide, and that's already being drawn. The streaming buffer will likely extend well past that point.

And we are talking about a tech demo, not a video game. Once the character starts its flyby zoom sequence, that can be optimized to cull everything behind and just load everything forward; far forward.
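The "one pixel" point can be made concrete with a simple pinhole-camera sketch; the FOV, vertical resolution, and the 1 cm feature size below are all illustrative assumptions:

```python
import math

# Distance at which a feature of a given world-space size shrinks to one
# pixel on screen (simple pinhole model; parameters are illustrative).
def one_pixel_distance(edge_m, vfov_deg=60.0, height_px=1440):
    """Distance beyond which an edge_m-sized feature covers < 1 pixel."""
    return edge_m * height_px / (2.0 * math.tan(math.radians(vfov_deg) / 2.0))

d = one_pixel_distance(0.01)  # a 1 cm feature at 1440p, 60 deg vertical FOV
# ~12.5 m: past this distance a 1 cm triangle is sub-pixel, so a renderer
# targeting pixel-sized polygons never needs finer geometry than that.
```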
 

Yes, but your scenario is still too simplistic. It's a game world. Both the players _and_ the (destructible or creatable) world can change state, causing new assets to be loaded.


I don't think people pay money to buy tech demos. They are demanding next-gen games. Whatever that means.
 
Or, in the case of that demo, it might be more important to have double the SSD speed than 20% more GPU power.
Who knows? The UE5 devs know. We'll have answers soon.

Depends on how much of it pushes the SSD. If that demo uses the SSD at near-peak IO, then it's feasible that it could *only* be done on PS5.

I mean Epic have already said that the demo runs on XSX and fast PCs, and the demo is basically pointless if their "Nanite" approach can't work effectively on those platforms.

And frankly, if you're SSD bound at 5.5 GB/s (or even some high proportion of that) at 1440p (or lower) and 30 fps (or lower), then that's a pretty risky choice for next-gen, multiplatform engine technology. If it doesn't work effectively on XSX and on PC-based PCIe Gen 3 SSDs? I see that as a huge problem.

What are we looking at for 4K 60fps PC gamers? Four top-end PCIe Gen 4 SSDs in RAID?
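For scale, here's a deliberately crude model that assumes streaming demand grows linearly with pixels per second; real engines with caching would not scale this way, so treat it as an upper-bound joke made numerical:

```python
# Crude scaling of streaming demand with pixels-per-second (an assumption
# for illustration, not how any real engine behaves).
base_bw = 5.5  # GB/s, the PS5 raw figure from the discussion
scale = (3840 * 2160 * 60) / (2560 * 1440 * 30)  # 4K60 vs 1440p30 = 4.5x
print(base_bw * scale)  # 24.75 GB/s: several Gen 4 drives' worth
```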

I just don't consider it reasonable. There might be moments where this demo benefits from the PS5's drive (more so than the fast PC and XSX drives), but most of the assets being used each frame are coming from a RAM cache, and most of the time the limitations are processing based.

UE5 has to work really well across a wide range of systems, IMO.
 
You’re not loading 20 million triangles per frame at 30fps from storage.
Did you read my post in full? To quote myself:

"That won't be 20 million unique triangles per frame as the same triangles will be mostly reused."​

However, the end scene of the fly-through was deliberately to showcase the performance of UE5's streaming. We don't know how much is cached already and how much is streamed directly.

The only way the SSD is the bottleneck on PS5 is if the drive has to run at a full 5.5 GB/s every single second without rest. Be reasonable. That’s not how virtual texture streaming works at all.
Data streaming is more than just max throughput; there's the whole request-and-fetch aspect, and loading randomised data rather than large, continuous streams. It's possible to max out an SSD well below its peak theoretical transfer rate when you issue more IOPS than it can process.
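That cap can be written down directly; the 500k IOPS figure and request sizes below are invented for illustration, not any console's real specs:

```python
# Effective throughput is limited by both sequential bandwidth and request
# handling: min(bandwidth, iops * request_size). Numbers are illustrative.
def effective_gbs(seq_bw_gbs, max_iops, request_kb):
    per_request_gb = request_kb / (1024 * 1024)
    return min(seq_bw_gbs, max_iops * per_request_gb)

effective_gbs(5.5, 500_000, 64)  # 5.5  -> bandwidth-bound with 64 KB reads
effective_gbs(5.5, 500_000, 4)   # ~1.9 -> IOPS-bound with 4 KB random reads
```

So a drive rated at 5.5 GB/s sequential can deliver a fraction of that when the workload degenerates into small randomised reads.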
 
The platform exclusives will likely push whatever envelope each console has, even though UE5 is cross-platform.
 
@milk The other thing to remember about how well it works with static geometry is that the first UE5 game will be Fortnite. Fortnite has a large map, the ability to build and delete things, lots of players and animations, and water.
Data streaming is more than just max throughput; there's the whole request-and-fetch aspect, and loading randomised data rather than large, continuous streams. It's possible to max out an SSD well below its peak theoretical transfer rate when you issue more IOPS than it can process.


Yes, PS5 should have a large IOPS advantage on top of bandwidth. The flyover is definitely an interesting case. I'd love to see a breakdown of that particular part of the scene more than any other; it would be one of the worst-case scenarios for streaming in data.
 
Okay, sorry, I missed a couple of words there. I saw the 6.7Gb/s scanning quickly. They would be massively compressed, btw.

Yes. So I’m not disagreeing with the fact that you need good performance from a drive. There is likely a need for NVMe-level performance. I’m debating whether or not you need PS5 levels of drive performance for this demo. I just don’t think you do. I think when the demo is allowed to run and be showcased on PC and XSX, you will see it run the same thing, but with either a higher frame rate or resolution.

I’m confident, for this demo in particular and the topic at hand, that compute is the limiter for PS5 performance, not the drive.

Games won’t have movie-quality assets, so that should make their use cases easier, not harder.
 
We have a habit of looking at worst cases and ignoring the majority of gameplay where things are just fine; i.e., nothing better than finding a specific frame where taking a crop and zooming in reveals something bad, or where resolution dips temporarily. I assume the same will apply to streaming. People will go hunting for pop-in, lower-quality assets, or less variety in assets, and make a big deal of it. Exactly the same as is done for DLSS and game comparisons between consoles. We just have a bit more to look for now that there is a minor difference in flops (20%, possible throttling) and a major difference in streaming speed (2x).

For me, I'll grab popcorn and enjoy the madness that ensues.
 
I mean Epic have already said that the demo runs on XSX and fast PCs, and the demo is basically pointless if their "Nanite" approach can't work effectively on those platforms.

And frankly, if you're SSD bound at 5.5 GB/s (or even some high proportion of that) at 1440p (or lower) and 30 fps (or lower), then that's a pretty risky choice for next-gen, multiplatform engine technology. If it doesn't work effectively on XSX and on PC-based PCIe Gen 3 SSDs? I see that as a huge problem.
Almost certainly you can adjust the caching amount to offset streaming requirements. As I mentioned elsewhere, on something like current-gen you could perhaps dump the data in RAM and 'stream' from there.

Perhaps, though I'm not saying this is the case, moderate PCs will be stuck with only 1 triangle per 8 pixels. :runaway: Seriously, fidelity doesn't need to be 1:1. It's great for a reveal showcase, where Epic can show the world, the production world in particular, that this is a 'solved problem' and that they can use streamed assets in realtime on their four top-end PCIe Gen 4 SSDs in RAID. For games, data could be far saner and less demanding.
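The cache/stream tradeoff being described can be sketched in one line; all figures here are hypothetical:

```python
# Crude cache/stream tradeoff (hypothetical numbers): the slower the drive,
# the more of the per-second asset churn the RAM cache must absorb.
def drive_bw_needed(churn_gbs, cache_hit_rate):
    """Bandwidth the drive must supply after the cache absorbs its share."""
    return churn_gbs * (1.0 - cache_hit_rate)

drive_bw_needed(5.5, 0.0)  # 5.5 -> stream everything from the drive
drive_bw_needed(5.5, 0.6)  # 2.2 -> a bigger cache more than halves the demand
```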

I just don't consider it reasonable. There might be moments where this demo benefits from the PS5's drive (more so than the fast PC and XSX drives), but most of the assets being used each frame are coming from a RAM cache, and most of the time the limitations are processing based.
That's an assumption. It may be true, but it's not provable, nor IMO the only logical possibility. This is a bespoke demo to showcase UE5's new tech. It wouldn't behove Epic to make a less capable demo than they could, so if there's a drive advantage they could leverage, they no doubt would.

I also point you and @iroboto to Epic's own words on this:

Epic Games chief technical officer Kim Libreri said that (an RTX 2070) should be able to get "pretty good" performance. But aside from a fancy GPU, you'll need some fast storage if you want to see the level of detail shown in the demo video.

Sony was heckled a bit for its focus on the PlayStation 5's storage speed, and if all you're imagining is loading screens disappearing more quickly, it does seem like an odd focus. But it's about moving beyond loading screens entirely, to the point where "you can bring in [the demo's] geometry and display it despite it not all fitting in memory," says Epic CEO Tim Sweeney.
 

I'm not implying UE5 won't work on PCs or XSX; of course it will work well. Just that in some instances of THAT demo, there might be things that would not work as well as on the PS5, if the PS5's advantages are well used (or used at all).
And vice versa: when shown running on XSX, it might use its advantages over the PS5.

There is no need to downplay every positive guess that's made about any advantage the PS5 may have (not talking about you in particular).
We are at the birth of a very interesting and exciting generation of games.
 
We'll surely need other novel approaches to process and render highly dense animated meshes if we don't want their appearance to clash with the super-detailed environments they'll be in.

Funnily enough, I got the same feeling as back on PS1 with pre-rendered backgrounds, with how the character looked out of place in the UE5 demo.
 
What defines fast storage? Is 2.5 GB/s not fast?
It might well be. But streaming isn't just about transfer speeds; it's about whole-drive data-access speeds, and we don't know how well detail scales with drive performance (transfer rates and access times). In this case, it's quite possible Epic were happy for the opportunity to let their engine really stretch its legs regarding streaming, because the showcase platform enables that, so they ramped the streaming aspect up to 11.

I'll add that I get an industry-artist news email once a week, and this guy's first point was this demo, what it shows for the industry, and the joy of no LODs. Epic wanted to reach these people, and they have done so because they went all out on 'unlimited detail' for this first showcase. If they had toned it down to what's possible on slower drives, perhaps it wouldn't have had quite the same impact.

The take-home here really is that no one knows, and the assumptions people make about how the engine works and where the bottlenecks are aren't founded in anything but guesswork. ;) It's impossible to look at this demo and discern where the bottleneck lies, because no one outside of Epic knows what their engine is doing to achieve this.

We don't know, as all we have are non-technical statements that read more like marketing material.
I think it's more consumer-focussed language. Details will come out at whatever presentations Epic give to devs.
 
What defines fast storage? Is 2.5 GB/s not fast? We don't know, as all we have are non-technical statements that read more like marketing material.

I doubt the requirement is just "fast". It's probably "consistently and sustainedly fast in all weather". Otherwise they wouldn't have gone out and designed their 6-level priority controller. Hardware companies are super stingy; we often joke about them in standards meetings. They are the ones who keep their heads down, calculating and recalculating dollars and cents as specs are updated on the fly.
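For a sense of what a multi-level priority controller buys you, here's a toy dispatcher; the 6 levels mirror the discussion, but the scheduling policy and request names are invented for illustration:

```python
import heapq

# Toy model of priority-ordered I/O dispatch with 6 levels. The policy
# (strict priority, FIFO within a level) is an illustrative assumption.
class IoQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker keeps FIFO order within a level

    def submit(self, priority, request):
        assert 0 <= priority <= 5  # six priority levels, 0 = most urgent
        heapq.heappush(self._heap, (priority, self._seq, request))
        self._seq += 1

    def next_request(self):
        return heapq.heappop(self._heap)[2]

q = IoQueue()
q.submit(3, "audio bank")
q.submit(0, "geometry page")  # urgent: needed for the next frame
q.submit(3, "texture mip")
q.next_request()  # -> "geometry page", despite being submitted second
```

The point is that urgent streaming reads never queue behind bulk background loads, which matters more than raw peak bandwidth when the frame deadline is 33 ms away.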


They started by talking to developers back at PS4 creation time. SSD was on the request list. I assume ease of development would be one of the goals; some edge cases may be annoying to users and time-consuming to fix.

The other should be demand generation: coming up with new ideas to attract new and old customers to their platform. Personally, if they keep making the same old games, I won't buy anything even if it's full 8K 60fps. (My PS4 is boxed up somewhere in the garage; I only ever owned 3 games, bought together with the console.)
 
I also point you and @iroboto to Epic's own words on this:

Epic Games chief technical officer Kim Libreri said that (an RTX 2070) should be able to get "pretty good" performance. But aside from a fancy GPU, you'll need some fast storage if you want to see the level of detail shown in the demo video.

Sony was heckled a bit for its focus on the PlayStation 5's storage speed, and if all you're imagining is loading screens disappearing more quickly, it does seem like an odd focus. But it's about moving beyond loading screens entirely, to the point where "you can bring in [the demo's] geometry and display it despite it not all fitting in memory," says Epic CEO Tim Sweeney.
And I think that's awesome. But I don't think that's representative of what's happening here in terms of bottlenecks. Just because you can doesn't mean that's the enabler here.

Check these quotes out from Dictator on ResetEra: #17

We do not know that though, as we do not know the demo's memory footprint :/

I think that is something we need to know before we start positing about it.

If anything, we learned that it is utilising much less texture memory than normal games, presumably - no large normal maps and texture mips for that, and no geo LODs. And virtual texturing as well.

We do not know at all how the memory load is for Nanite, so I have no idea. We had a shorter conversation with Epic (Tim Sweeney), but they (he) only mentioned that Nanite scales with compute power, and said nothing about other axes it scales along.
 
Those quotes are saying, "we don't know." ;)

Does Nanite scale with compute power? Yes. Does it also scale with drive performance? Maybe. Nothing there, or anywhere else I've seen, suggests compute is the bottleneck and storage performance isn't, and the dialogue from Epic chooses to emphasise storage. The two reasons for that are either that it is important, or that they're basically marketing a falsehood for Sony to push PS5 interest where the tech doesn't actually benefit.

Given a range of possible ways of approaching this problem, either the GPU or the SSD could be the limiting factor. In fact, you'd expect the solution to vary based on available resources: streaming more on faster drives and caching more on larger-RAM devices. There's no way I'd discount the SSD's impact, though, and point my finger at compute power here. That's jumping to conclusions.
 
There's no way I'd discount the SSD's impact, though, and point my finger at compute power here. That's jumping to conclusions.
So what you're saying here is that you would leave open the possibility that, if the PS5 had 20 TF of compute power, it would still run at 1440p30 with the same SSD setup. Our arguments don't appear aligned.
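The arithmetic behind that question, purely illustrative, using the consoles' headline numbers and assuming a strictly compute-bound frame (which is exactly the assumption under dispute):

```python
# If the demo were purely compute-bound, frame rate at a fixed resolution
# would scale roughly with TFLOPs. A very rough model, not a prediction.
ps5_tf = 10.28
fps_at_1440p = 30
hypothetical_tf = 20.0
scaled_fps = fps_at_1440p * hypothetical_tf / ps5_tf
print(round(scaled_fps))  # ~58 fps: a 20 TF machine still stuck at 1440p30
                          # would instead point to a non-compute bottleneck
```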
 