Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

The UE5 demo on PS5 may well have had a higher resolution and/or framerate with a stronger GPU and the same IO subsystem, depending on where the VRAM bandwidth bottlenecks sit versus the CU bottlenecks.

And that’s what you’ll see across the board: way more variation, density, and better and more textures, regardless of GPU. A stronger or weaker GPU will show more or fewer pixels and a higher or lower framerate, but it has no further impact on the quality and detail of what is shown.

Wait a sec, are you saying that the complexity of a render only depends on resolution and target framerate? That's only partially true. It mainly depends on what you want to do inside that pixel. My point is that we are ignoring the complexity of a render and glossing over it as if you can have infinite complexity in a scene with no cost in render time, which is objectively false.
 
I'm aware of what the UE5 demo does. But no graphics programmer is going to tell you that the SSD renders any triangles. The GPU does. It may be designed to render pixel-sized triangles, but when you rasterize those triangles, you have all the assets in VRAM and now have to run shaders, which is completely done by the GPU. The UE5 demo could switch to an RT lighting paradigm (their code is already in 4.26) and how much would that cost the PS5 GPU? Answer: a lot. The lighting/shading of a pixel within a triangle will always be a bottleneck. That's what separates realtime visuals from offline.

Normal PS4 pipeline: RAM/HDD -> CPU for decompression -> VRAM -> GPU cache -> CU, etc.

New PS5 pipeline: SSD -> IO subsystem hardware decompressor -> GPU cache -> CU, etc. Even without decompression it's 100x faster than the PS4 version above, with similar improvements in latency.

So it is using no CPU, much less VRAM and bandwidth, etc.
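To put rough numbers on that, here's a minimal back-of-the-envelope sketch. The bandwidth figures are just ballpark public numbers (a ~100 MB/s HDD versus the PS5's ~5.5 GB/s raw / ~8 GB/s effective SSD), and the 1 GB request size is a made-up example, not real streaming data:

```python
# Back-of-the-envelope transfer times for the two streaming paths above.
# Bandwidths are ballpark public figures, not measurements; the request
# size is a made-up example.

GB = 1e9
MB = 1e6

request_bytes = 1 * GB        # hypothetical scene-chunk request

hdd_bw     = 100 * MB         # PS4-era HDD, before CPU decompression
ssd_raw_bw = 5.5 * GB         # PS5 SSD, raw
ssd_eff_bw = 8 * GB           # PS5 SSD with hardware decompression

def transfer_seconds(nbytes, bandwidth):
    """Transfer time ignoring seek latency and other overheads."""
    return nbytes / bandwidth

for name, bw in [("HDD + CPU decompress", hdd_bw),
                 ("SSD raw", ssd_raw_bw),
                 ("SSD + HW decompress", ssd_eff_bw)]:
    print(f"{name:22s} {transfer_seconds(request_bytes, bw):6.2f} s")
```

Even ignoring seek latency, the gap on those assumed numbers is somewhere in the 50-80x range, which is the scale of improvement being talked about here.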

Are you getting it yet? I’m not saying you can’t also get improvements with more CUs, but bang for buck, the IO system is more important.
 
Wait a sec, are you saying that the complexity of a render only depends on resolution and target framerate? That's only partially true. It mainly depends on what you want to do inside that pixel. My point is that we are ignoring the complexity of a render and glossing over it as if you can have infinite complexity in a scene with no cost in render time, which is objectively false.
No, he’s saying that in the game development world, the increasing complexity of assets is putting more of a strain on overall scene/environment/design complexity than rendering power is.

And so the move to support the SSD, to open up the sandbox, so to speak, to a variety of setups at the expense of rendering power, is more desirable than rendering power alone.
 
Wait a sec, are you saying that the complexity of a render only depends on resolution and target framerate? That's only partially true. It mainly depends on what you want to do inside that pixel. My point is that we are ignoring the complexity of a render and glossing over it as if you can have infinite complexity in a scene with no cost in render time, which is objectively false.

Of course, but as in the post above, the IO is the more important bottleneck to overcome. Of course, if you were to make a game where all assets are procedurally generated and you wanted it to have raytraced lighting and shadows, then you would need only GPU power with high-bandwidth VRAM and that’s it.

Conversely, the infinite complexity of a scene is what the UE5 demo was trying to achieve, by making sure the GPU gets all the detailed data it needs to render what the user sees. It showed that the bottleneck to overcome here is basically culling a really big amount of mesh and texture data, which holds the detailed information, down to only what is needed; if that is done efficiently, the current GPU in the PS5 was sufficient last year to render what remains after culling. But getting the highly detailed models from disk to GPU is the biggest bottleneck here, not GPU power. Does that make sense?
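To make the "cull first, then stream only what survives" idea concrete, here is a toy sketch. It is not Nanite's actual algorithm; the Chunk layout, the 90-degree FOV test, and the distance-based LOD steps are all invented for illustration:

```python
import math
from dataclasses import dataclass

@dataclass
class Chunk:
    """A hypothetical streamable slice of mesh/texture data."""
    position: tuple   # world-space centre (x, y, z)
    lod_bytes: list   # bytes needed at LOD 0 (near) ... LOD n (far)

def visible(chunk, cam_pos, cam_dir, fov_deg=90.0):
    """Crude visibility test: is the chunk within the camera's FOV cone?
    cam_dir is assumed to be normalized."""
    to_chunk = [c - p for c, p in zip(chunk.position, cam_pos)]
    dist = math.sqrt(sum(d * d for d in to_chunk)) or 1e-6
    cos_angle = sum(d * c for d, c in zip(to_chunk, cam_dir)) / dist
    return cos_angle >= math.cos(math.radians(fov_deg / 2))

def bytes_to_stream(chunks, cam_pos, cam_dir):
    """Request data only for chunks the camera can see, at a level of
    detail that falls off with distance; everything else stays on disk."""
    total = 0
    for chunk in chunks:
        if not visible(chunk, cam_pos, cam_dir):
            continue                          # culled, never streamed
        dist = math.dist(chunk.position, cam_pos)
        lod = min(int(dist // 50), len(chunk.lod_bytes) - 1)
        total += chunk.lod_bytes[lod]
    return total
```

The point of the sketch is only that the amount of data the GPU ever touches is a small, view-dependent slice of the full data set, and producing that slice quickly is an I/O problem.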
 
No, he’s saying that in the game development world, the increasing complexity of assets is putting more of a strain on overall scene/environment/design complexity than rendering power is.

How can you measure that for every single game with various different scopes?

And so the move to support the SSD, to open up the sandbox, so to speak, to a variety of setups at the expense of rendering power, is more desirable than rendering power alone.
Ok. Well, we should definitely be clear about that. I was under the impression from chris that rendering wasn't going to be an issue. He even quoted someone out of context to support his argument.
 
Of course, but as in the post above, the IO is the more important bottleneck to overcome. Of course, if you were to make a game where all assets are procedurally generated and you wanted it to have raytraced lighting and shadows, then you would need only GPU power with high-bandwidth VRAM and that’s it.

Conversely, the infinite complexity of a scene is what the UE5 demo was trying to achieve, by making sure the GPU gets all the detailed data it needs to render what the user sees. It showed that the bottleneck to overcome here is basically culling a really big amount of mesh and texture data, which holds the detailed information, down to only what is needed; if that is done efficiently, the current GPU in the PS5 was sufficient last year to render what remains after culling. But getting the highly detailed models from disk to GPU is the biggest bottleneck here, not GPU power. Does that make sense?

Gotcha. Now I understand. Thanks for clearing that up.

I think the main draw this generation will be rendering power though. RT is much more coveted than SSD->VRAM in my opinion. We've got several games using RT as opposed to two using the SSD.
 
How can you measure that for every single game with various different scopes?


Ok. Well, we should definitely be clear about that. I was under the impression from chris that rendering wasn't going to be an issue. He even quoted someone out of context to support his argument.
Unfortunately you can’t; consoles are supposed to be designed to support a variety of games, development teams, and talents.
It’s the best they could get away with at the cost they had available.

But since you are here, quick question, how much pre-baking of various things can be gotten away with in rendering with the move to faster IO? Does it open any new possibilities? Or is it the same as before but just higher quality?
 
But since you are here, quick question, how much pre-baking of various things can be gotten away with in rendering with the move to faster IO? Does it open any new possibilities? Or is it the same as before but just higher quality?

When you say "get away with", that would be subjective. I personally don't like pre-baked lighting, which has been the only solution for years and years until now. When I play Metro Exodus, I feel relieved to see light sources casting proper shadows and elements in the scene being lit by the same energy from the light sources in the scene, as opposed to rendering assets in layers with different lighting configurations so that they appear completely cut out of the scene. We will not get away with anything with pre-baked lighting, no matter how "round" a surface is or how many instances are in a scene.

Faster I/O offers one main thing for me: no more baking out normal maps. The sooner we get to pure pixel-sized triangles, the sooner we can offload baking normal maps from the pipeline. And as an added benefit, our shadow-casting lights will automatically cast shadows from very small objects instead of them being painted in by an artist or added on top as a separate pass. Now we need to focus on gutting all the complex light rigs in games and letting the math do the job correctly.
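As a rough illustration of why that trade leans on I/O rather than compute, compare the on-disk footprint of a traditional low-poly mesh plus baked normal map against shipping the full-detail source mesh directly. Every number below is invented for illustration, not real asset data:

```python
# Illustrative asset-size comparison; all figures are made up.

def mesh_bytes(triangles, bytes_per_vertex=32, unique_verts_per_tri=1.0):
    """Very rough mesh size, assuming heavy vertex reuse."""
    return triangles * unique_verts_per_tri * bytes_per_vertex

# Traditional pipeline: low-poly game mesh + baked 4K normal map
low_poly   = mesh_bytes(50_000)        # ~50k-triangle game mesh
normal_map = 4096 * 4096 * 4           # uncompressed 4K RGBA normal map

# Pixel-sized-triangle pipeline: stream the detailed source mesh, no bake
full_detail = mesh_bytes(30_000_000)   # ~30M-triangle source asset

print(f"low-poly mesh + normal map: {(low_poly + normal_map) / 2**20:7.1f} MiB")
print(f"full-detail mesh:           {full_detail / 2**20:7.1f} MiB")
```

Skipping the bake step means shipping and streaming an order of magnitude more geometry data, which is exactly where the fast I/O path earns its keep.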
 
When you say "get away with", that would be subjective. I personally don't like pre-baked lighting, which has been the only solution for years and years until now. When I play Metro Exodus, I feel relieved to see light sources casting proper shadows and elements in the scene being lit by the same energy from the light sources in the scene, as opposed to rendering assets in layers with different lighting configurations so that they appear completely cut out of the scene. We will not get away with anything with pre-baked lighting, no matter how "round" a surface is or how many instances are in a scene.

Faster I/O offers one main thing for me: no more baking out normal maps. The sooner we get to pure pixel-sized triangles, the sooner we can offload baking normal maps from the pipeline. And as an added benefit, our shadow-casting lights will automatically cast shadows from very small objects instead of them being painted in by an artist or added on top as a separate pass. Now we need to focus on gutting all the complex light rigs in games and letting the math do the job correctly.
Right, let me rephrase.
There is a finite limit to the computational power available on any hardware, so as complexity and quality per pixel increase, there comes a point beyond which you can go no further. In the past, baking has enabled developers to increase fidelity without greatly increasing the computational load. So I guess my question is: in this coming generation such limits will be hit again, and consoles don't have the luxury of upgrading like PCs do, so what will developers do to push the frontier of graphics when there is no computational power left?
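One way to see that ceiling is to work out the aggregate shading budget per pixel at a given resolution and framerate. This is only a sketch; real GPUs shade pixels massively in parallel, so read it as a throughput budget rather than a serial time limit:

```python
# Frame-budget arithmetic: aggregate GPU time available per pixel, in ns.
# Purely illustrative; whatever dynamic lighting, shadows, etc. cost per
# pixel has to fit inside this budget, which is why baking helped.

def ns_per_pixel(width, height, fps):
    frame_time_ns = 1e9 / fps
    return frame_time_ns / (width * height)

for width, height, fps in [(2560, 1440, 30), (2560, 1440, 60), (3840, 2160, 60)]:
    print(f"{width}x{height} @ {fps} fps: {ns_per_pixel(width, height, fps):5.2f} ns/pixel")
```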
 
That's some impressive goalposts moving there. I applaud you sir.

What goalposts? I'm confused?

Are you contending that you can in fact do a 180 degree turn in 1 frame (16.7 ms at 60 Hz) with a console controller? I'm sure it's possible, but outside of games that implement a 180 degree view change via a button press, I've yet to see a game allow you to do that with a console controller.

So unloading everything behind the player's camera (a less-than-90-degree FOV on consoles covers what is in the player's view) doesn't mean you have to load everything outside of your view the moment you start turning, because you don't need to. In R&C, their game is designed around not holding things in memory that aren't needed, correct? So in 16.7 ms, they are only loading a very small fraction of what's not in view when the player turns. Why load something if it's not going to be displayed?

Even on world changes they are only loading what's in view (less than 90 degrees of 360 degrees).

If world changes are not player controlled (you can't change at any time you want, at any view orientation), that would mean the developers can control when you change worlds, and thus what portion of the world (and thus scene complexity) is involved when changing worlds. Hopefully players can change at any time they want, facing any direction they want; the tech would be more impressive in that case.
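A quick sketch of the numbers behind that argument, using assumed figures (~5.5 GB/s SSD bandwidth and a ~180 degrees-per-second full-stick turn rate, both illustrative rather than measured):

```python
# How much can actually be streamed while the camera turns?
# Assumed, illustrative figures only.

ssd_bytes_per_s = 5.5e9
frame_s = 1 / 60              # 16.7 ms frame at 60 Hz
turn_rate_deg_per_s = 180.0   # fast full-stick turn on a controller

bytes_per_frame = ssd_bytes_per_s * frame_s
new_view_deg_per_frame = turn_rate_deg_per_s * frame_s

print(f"SSD budget per frame:      {bytes_per_frame / 1e6:.0f} MB")
print(f"New view swept per frame:  {new_view_deg_per_frame:.1f} degrees")
print(f"Frames to sweep 180 deg:   {180 / new_view_deg_per_frame:.0f}")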

Regards,
SB
 
Right, let me rephrase.
There is a finite limit to the computational power available on any hardware, so as complexity and quality per pixel increase, there comes a point beyond which you can go no further. In the past, baking has enabled developers to increase fidelity without greatly increasing the computational load. So I guess my question is: in this coming generation such limits will be hit again, and consoles don't have the luxury of upgrading like PCs do, so what will developers do to push the frontier of graphics when there is no computational power left?

What goal am I trying to achieve? Open world or closed world, static scenes or dynamic, what's the goal of the game, what resolution, what FPS target? Give me a specific scenario, like maybe the UE5 demo being a full game?
 
What goal am I trying to achieve? Open world or closed world, static scenes or dynamic, what's the goal of the game, what resolution, what FPS target? Give me a specific scenario, like maybe the UE5 demo being a full game?
Let’s try to keep fidelity from slipping to past-generation techniques, so perhaps:
Closed world, dynamic lighting and shadows, 1440p and 30fps.

I’m not sure how much lower developers will want to go than that.
 
Let’s try to keep fidelity from slipping to past-generation techniques, so perhaps:
Closed world, dynamic lighting and shadows, 1440p and 30fps.

I’m not sure how much lower developers will want to go than that.

I guess you can tell me your claim and we can discuss. Are you saying that under those constraints, the only viable thing to do is increase the geometric/texture detail? If so, that would definitely be the way to go. It would give you much better curved-surface approximations on assets (despite being static only). Texture detail would be a marked improvement too, especially for first-person games where the camera is often very close to objects.
 
I guess you can tell me your claim and we can discuss. Are you saying that under those constraints, the only viable thing to do is increase the geometric/texture detail? If so, that would definitely be the way to go. It would give you much better curved-surface approximations on assets (despite being static only). Texture detail would be a marked improvement too, especially for first-person games where the camera is often very close to objects.

If we believe some here, the next products from Intel, AMD and NV are going to be SSDs :p
 
I guess you can tell me your claim and we can discuss. Are you saying that under those constraints, the only viable thing to do is increase the geometric/texture detail? If so, that would definitely be the way to go. It would give you much better curved-surface approximations on assets (despite being static only). Texture detail would be a marked improvement too, especially for first-person games where the camera is often very close to objects.
No claim, just curious what else could be done, from your viewpoint, to still increase graphical fidelity while sidestepping compute pressure, aside from increasing geometric and texture detail, if there is anything else left to do.

But it does appear from your answer that the answer is: not really, aside from bypassing normal maps.
 
No claim, just curious what else could be done, from your viewpoint, to still increase graphical fidelity while sidestepping compute pressure, aside from increasing geometric and texture detail, if there is anything else left to do.

But it does appear from your answer that the answer is: not really, aside from bypassing normal maps.

That's ok though. UE5's Nanite has a very disappointing limit with its asset streaming: it can't dynamically tessellate deforming geometry, because the high-res mesh is itself baked out. Any other graphics engine (like the Nvidia Marble demo) can implement what Epic is doing with their own tools, and many will, since most Sony devs will not use UE5.
 
If we believe some here, the next products from Intel, AMD and NV are going to be SSDs :p

How about just an SSD connector to a graphics-optimized IO subsystem directly on the GPU? Do you think that could happen and have benefits?

Small hint: they already exist. AMD had them in 2016 on pro cards and if you don’t believe AMD, then how about this article from NVidia about the bottlenecks in modern game systems and how to solve them with better IO subsystems ...

https://www.nvidia.com/en-us/geforce/news/rtx-io-gpu-accelerated-storage-technology/

Again, really, no one said that SSDs are going to give you wonderful ray-traced lighting for free on a previous-gen console. But I think the point should by now be clear.
 
How about just an SSD connector to a graphics-optimized IO subsystem directly on the GPU? Do you think that could happen and have benefits?

Aha, so GPUs still have their importance; SSD/IO tech is just complementing them, as that tech has seriously lagged behind (or rather its adoption has). Seeing that we are already sitting at 36 TF GPUs with new features each year, I see consoles catching up in that area with their next iteration.

and if you don’t believe AMD, then how about this article from NVidia

I have nothing against AMD; both IHVs produce GPUs that fit into my gaming systems, and I have had ATI/AMD/NV for the last 20+ years. Since you have mod status, you especially don't have to behave like that.
 
I don't think you understand what I'm saying. The GPU isn't an empty void where you can throw any number of triangles at it and it just renders them easily, no matter the resolution, no matter what complex algorithm you introduce (RT lighting, for example), no matter what advanced algorithm you are using in your game. That's just objectively not possible. The PS5 (like any other GPU) has a rendering limit that is completely independent of the speed of the SSD->VRAM transfer. If it didn't, then Nvidia's and Sony's job would be done; we'd have achieved the perfectly limitless GPU everyone has been wanting.

If you think the faster loading is more of a limit, I ask: why not render that UE5 demo at native 4K/60 fps? My answer: the GPU was limited to that particular throughput (1440p/30 fps) with those particular rendering parameters and that particular scene complexity.
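For scale, a quick calculation of what that jump alone would demand of the GPU, independent of any I/O considerations:

```python
# Pixel throughput needed for native 4K/60 versus the demo's 1440p/30.
pixels_per_s_4k60    = 3840 * 2160 * 60
pixels_per_s_1440p30 = 2560 * 1440 * 30

print(pixels_per_s_4k60 / pixels_per_s_1440p30)   # -> 4.5x the pixel rate
```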

First, the target is the same quality at 1404p 60 fps, and the optimization to reach it needs to be done mostly on Lumen. Did I say that GPU power doesn't count? I just said that without the SSD it is impossible to have this level of detail; after that, it will be impossible to fully reach this level because of storage size. I don't care if the demo runs at 1404p 30 or 60 fps; the quality of some of the assets will need to decrease for a full game.

Small hint: the demo renders at 30 fps because of Lumen, not Nanite. And they are very confident they can reach 60 fps, but if the PS5 GPU were more powerful they would be able to hit 60 fps without optimization, or 60 fps at a better resolution.

This is ok, but the most impressive things in the Unreal Engine 5 demo are the complexity of the geometry, which would be impossible without an SSD, and the realtime GI system.

[Image: bastian-hoppe-artstation-bh-07.jpg]
 