Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

I think it's because Tim keeps mentioning the SSD and how the demo would not be possible without Sony's SSD advancements and I/O tech, which is why he keeps saying that's the reason they built this demo for PS5 instead of a high-end PC or Xbox like you'd expect them to.

He never said that, but some of his statements could confuse the broader audience, and only the broader audience. Developers obviously know that anything in UE5 would work on other platforms.

Edit: by other platforms I mean modern hardware = next-gen + PC.
 
Did Epic explicitly say they will support PS4/XB1/Switch with UE5?

Pretty much, on the official UE5 blog that was linked early on. They're making it incredibly easy to port from UE4 to UE5; it will be nothing like porting from UE3 to UE4. I'll link to it yet again: https://www.unrealengine.com/en-US/blog/a-first-look-at-unreal-engine-5

Unreal Engine 4 & 5 timeline
Unreal Engine 4.25 already supports next-generation console platforms from Sony and Microsoft, and Epic is working closely with console manufacturers and dozens of game developers and publishers using Unreal Engine 4 to build next-gen games.
Unreal Engine 5 will be available in preview in early 2021, and in full release late in 2021, supporting next-generation consoles, current-generation consoles, PC, Mac, iOS, and Android.
We’re designing for forward compatibility, so you can get started with next-gen development now in UE4 and move your projects to UE5 when ready.
We will release Fortnite, built with UE4, on next-gen consoles at launch and, in keeping with our commitment to prove out industry-leading features through internal production, migrate the game to UE5 in mid-2021.
They also made other statements that their engine scales from mobile to next-gen and that this will continue with the UE5 revision. I think having their own game, Fortnite, ported to UE5 is a testament to the scalability of UE5.
 
jsdAZv.png

Frame without full details

jsd2c2.png

A few frames after loading all the details

This is very interesting!!

I wonder what we're seeing here. It could be loading delay, but I think it could alternatively be a processing delay. By that I mean that the job of constructing the mesh (from data on the SSD) that you actually feed to the GPU is going to take some work, and that work may need to be spread over several frames (depending on how much has changed, what resolution you're at, and how much work you can do each frame).

Ideally in any computing work you want to avoid as much work as you possibly can, so you often try to re-use as much as you can (situation permitting, e.g. memory, cumulative errors, etc.).

Is what we're seeing here evidence of streaming limitations, or of the game's Nanite jobby having a cache of "draw ready" 3D data that gets added to / subtracted from each frame depending on the amount of processing available each frame?

I could imagine both being responsible depending on circumstance.

Edit: I explained that badly. What I mean is there has to be a limit on the number of verts you can add to or take away from the "one poly per pixel" representation of the base assets. If this number is too low to re-create everything every frame (and why would you want to if you didn't need to?) then you may end up seeing this process happen across successive frames.

This is separate (or could be) from the process of loading into memory the data you then use to create or alter the mesh you're actually going to draw.
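
To make that concrete, here's a rough C++ sketch of the kind of budgeted cache update I'm imagining (all the types and names below are mine, nothing from Epic): each frame you only refine as many clusters as the budget allows, and whatever is left over spills into the following frames, which would look exactly like detail settling in over a few frames after a big change.

```cpp
// Hypothetical sketch, not Epic's code: a "draw ready" cluster cache refined
// under a fixed per-frame budget.
#include <cstdint>
#include <queue>
#include <vector>

struct ClusterRequest {
    uint32_t clusterId;
    float    screenError;   // gap between the resident LOD and the ~1-poly-per-pixel target
    bool operator<(const ClusterRequest& other) const { return screenError < other.screenError; }
};

class DrawReadyCache {
public:
    // Called once per frame; clusterBudget is how many clusters we can (re)build this frame.
    void Update(std::priority_queue<ClusterRequest>& requests, int clusterBudget) {
        while (clusterBudget > 0 && !requests.empty()) {
            const ClusterRequest request = requests.top();
            requests.pop();
            RefineCluster(request.clusterId);   // stream / transcode / rebuild this cluster
            --clusterBudget;
        }
        // Anything still queued simply waits for the next frame, so after a big
        // change the full detail only appears over several successive frames.
    }

private:
    void RefineCluster(uint32_t /*clusterId*/) { /* placeholder for the real work */ }
};
```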
 
He never said that, but some of his statements could confuse the broader audience, and only the broader audience. Developers obviously know that anything in UE5 would work on other platforms.

He didn't? I could have sworn he's said that both in interviews and responding to people directly on Twitter.
 
More evidence, then, that UE5 = UE4 + Nanite & Lumen tacked on.

That's not a bad thing at all. It should mean an even broader deployment and wider acceptance in the game development realm.
 
Did you not hear? They want people to transition from UE4 quickly for a unified dev environment. So yes... not at anywhere near this level of fidelity, but the tech is designed to scale from phones to movie sets. And current-gen machines (yes, even base XB1 and PS4) are still much stronger than phones. The Switch may only get the barest of bare-bones conversions, but it's possible there too.
I don't think the engine scaling means all features will; e.g. Unity supports some form of real-time GI processing, but you don't run it on mobile.

I don't think people should expect Nanite and Lumen to work on HDD-based platforms or mobile until it's shown, just because the engine will run on those platforms. Epic are going to want Fortnite in a single project that builds and deploys for all platforms, so they are going to ensure the one engine scales okay by chucking out features the low-spec platforms can't use.

We presently have a demo run from SSD, comments from Epic saying it needs an SSD, and a PC example running from a very fast SSD. Let's just look at that part of it for the time being.
 
This is very interesting!!

I wonder what we're seeing here. It could be loading delay, but I think it could alternatively be a processing delay. By that I mean that the job of constructing the mesh (from data on the SSD) that you actually feed to the GPU is going to take some work, and that work may need to be spread over several frames (depending on how much has changed, what resolution you're at, and how much work you can do each frame).

Ideally in any computing work you want to avoid as much work as you possibly can, so you often try to re-use as much as you can (situation permitting, e.g. memory, cumulative errors, etc.).

Is what we're seeing here evidence of streaming limitations, or of the game's Nanite jobby having a cache of "draw ready" 3D data that gets added to / subtracted from each frame depending on the amount of processing available each frame?

I could imagine both being responsible depending on circumstance.

Edit: I explained that badly. What I mean is there has to be a limit on the number of verts you can add to or take away from the "one poly per pixel" representation of the base assets. If this number is too low to re-create everything every frame (and why would you want to if you didn't need to?) then you may end up seeing this process happen across successive frames.

This is separate (or could be) from the process of loading into memory the data you then use to create or alter the mesh you're actually going to draw.

Your reasoning isn't logical. Brian Karis himself has told how he got the idea: it begins with storing geometry as a texture, and the engine tries to keep a 1:1 ratio between pixels and geometry. That means the geometry is linked to the resolution; if there isn't enough power to draw the geometry, DRS will kick in and reduce the resolution, and without DRS we would get a small dip in framerate.

This is virtual geometry; the GPU probably couldn't have the geometry it needed in RAM.
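
As a rough illustration of that 1:1 pixel-to-geometry coupling (my own sketch, not the actual Nanite heuristic): the triangle budget follows the output resolution, and if the GPU can't hold frame time at that budget, DRS lowers the resolution, which lowers the budget with it.

```cpp
// Rough sketch only; the real Nanite/DRS heuristics are certainly different.
#include <algorithm>
#include <cstdint>

struct FrameStats { float gpuMs; };   // measured GPU time of the previous frame

// ~one triangle per pixel, so the geometry budget is tied directly to resolution.
uint64_t TriangleBudget(uint32_t width, uint32_t height) {
    return static_cast<uint64_t>(width) * height;
}

// Assumed simple proportional controller for dynamic resolution scaling.
float UpdateResolutionScale(float scale, const FrameStats& stats, float targetMs) {
    if (stats.gpuMs > targetMs) {
        scale *= 0.95f;                    // over budget: drop resolution (and the triangle budget with it)
    } else if (stats.gpuMs < targetMs * 0.9f) {
        scale *= 1.02f;                    // headroom: creep back up
    }
    return std::clamp(scale, 0.5f, 1.0f); // without DRS you'd take the framerate dip instead
}
```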

 
Your reasoning isn't logical. Brian Karis himself has told how he got the idea: it begins with storing geometry as a texture, and the engine tries to keep a 1:1 ratio between pixels and geometry. That means the geometry is linked to the resolution; if there isn't enough power to draw the geometry, DRS will kick in and reduce the resolution, and without DRS we would get a small dip in framerate.

This is virtual geometry; the GPU probably couldn't have the geometry it needed in RAM.


There has to be a cost to maintaining your virtualised geometry, and I'm not sure that if your resolution drops (because you can't maintain performance) you'd want to throw out everything you have and start again that frame. You're already losing performance; that'd cost you more.

There has to be a degree of independence between the virtualised geometry and the particular dynamic resolution from frame to frame. Or at least, that's the way I see it.

For example, Rage (a criminally underrated game) used virtual textures that were maintained at one resolution, but also had a dynamic resolution that was independent of that.

Edited last paragraph for clarity.
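
A tiny illustration of that independence (my own sketch, assuming the cache is driven by a fixed reference resolution): residency decisions are made against the reference resolution, so a DRS drop in the output resolution doesn't force the resident set to be thrown away and rebuilt.

```cpp
// My own sketch: residency is driven by a fixed reference resolution,
// independent of whatever resolution DRS picks for the final frame.
#include <cstdint>

struct VirtualCacheConfig {
    uint32_t referenceWidth  = 1920;   // residency / refinement decisions use this every frame
    uint32_t referenceHeight = 1080;
};

struct FrameOutput {
    uint32_t width;    // may shrink this frame under dynamic resolution scaling
    uint32_t height;
};

// Stays stable even when FrameOutput fluctuates, so a resolution drop
// doesn't force the resident data to be thrown away and rebuilt.
uint64_t ResidencyTargetTexels(const VirtualCacheConfig& cfg) {
    return static_cast<uint64_t>(cfg.referenceWidth) * cfg.referenceHeight;
}
```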
 
There has to be a cost to maintaining your virtualised geometry, and I'm not sure that if your resolution drops (because you can't maintain performance) you'd want to throw out everything you have and start again that frame. You're already losing performance; that'd cost you more.

There has to be a degree of independence between the virtualised geometry and the particular dynamic resolution from frame to frame. Or at least, that's the way I see it.

For example, Rage (a criminally underrated game) used virtual textures that were maintained at one resolution, but also had a dynamic resolution that was independent of that.

Edited last paragraph for clarity.

Like every virtualised system, the goal is to display what is already in memory: if you don't have the level of geometry you want, you display a lower one until the right data is resident in memory. It's difficult to say how many frames that takes here (one or two); with a video player it's hard to step frame by frame.

https://www.gamedev.net/forums/topic/700703-mipmap-in-procedural-virtual-texture/

Yes that's per pixel. Each pixel needs two mips to be resident.

All virtual texturing techniques need to deal with non resident data. If a pixel asks for a part of the VT that isn't resident (whether that's part of a particular mip, or an entire mip level!) your system needs to be able to satisfy that request with some alternate data (maybe a different mip level than the one they wanted) and then work to try and make that bit of missing data resident for the next frame.
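
A minimal sketch of that fallback path, assuming a paged virtual texture (the types and functions here are hypothetical, not a real UE API): if the requested page isn't resident, sample the finest coarser page that is, and record the miss as feedback so the streamer can make it resident for a later frame.

```cpp
// Hypothetical paged virtual texture; none of these names are a real UE API.
#include <cstdint>
#include <vector>

struct PageId { uint32_t x, y, mip; };

class VirtualTexturePages {
public:
    // Returns the page to actually sample this frame.
    PageId Resolve(PageId wanted, std::vector<PageId>& missingFeedback) const {
        PageId page = wanted;
        while (!IsResident(page) && page.mip < maxMip_) {
            ++page.mip;       // fall back to the next coarser mip
            page.x /= 2;
            page.y /= 2;
        }
        if (page.mip != wanted.mip) {
            missingFeedback.push_back(wanted);   // ask the streamer to make the missing page resident
        }
        return page;          // the coarsest mip is assumed always resident
    }

private:
    bool IsResident(const PageId&) const { return true; /* placeholder residency test */ }
    uint32_t maxMip_ = 10;
};
```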

EDIT:
Someone found this: Andrew Maximov tried to figure out what they are doing.
 
I don't think the engine scaling means all features will; e.g. Unity supports some form of real-time GI processing, but you don't run it on mobile.

I don't think people should expect Nanite and Lumen to work on HDD-based platforms or mobile until it's shown, just because the engine will run on those platforms. Epic are going to want Fortnite in a single project that builds and deploys for all platforms, so they are going to ensure the one engine scales okay by chucking out features the low-spec platforms can't use.

We presently have a demo run from SSD, comments from Epic saying it needs an SSD, and a PC example running from a very fast SSD. Let's just look at that part of it for the time being.

Oh, I am aware these two features are a product of the next-gen advances, irrespective of UE5's scalability in general. I'm just saying the tech was also built for a variety of hardware, even if it can't use those features due to not having enough power. So they aren't going to make something that XSX and PC, with their less advanced SSD tech, can't use at all.
 
Then again, it's impossible to see the loss of detail at normal speed, and the level of detail is so high that the streaming problem isn't visible at all. It probably also depends on SSD speed, but it shows that an SSD is never fast enough.
 
He didn't? I could have sworn he's said that both in interviews and responding to people directly on Twitter.

OK, I did some research, and it's true that some statements made by Epic are far more explicit:

Tim Sweeney : “[The PS5] puts a vast amount of flash memory very, very close to the processor,” says Sweeney. “So much that it really fundamentally changes the trade-offs that games can make and stream in. And that’s absolutely critical to this kind of demo [...] This is not just a whole lot of polygons and memory. It’s also a lot of polygons being loaded every frame as you walk around through the environment and this sort of detail you don’t see in the world would absolutely not be possible at any scale without these breakthroughs that Sony’s made."

https://www.ign.com/articles/ps5-ssd-breakthrough-beats-high-end-pc

Nick Penwarden : "There are tens of billions of triangles in that scene, and we simply couldn't have them all in memory at once," he says, referring to a bunch of statues in the demo. "So what we ended up needing to do is streaming in triangles as the camera is moving throughout the environment. The IO capabilities of PlayStation 5 are one of the key hardware features that enable us to achieve that level of realism."

https://www.gamesradar.com/epics-un...-ps5-vision-that-sony-has-only-told-us-about/

But it's not clear if they really needed 5 GB/s or more for this demo, at least for some scenes. People are claiming a laptop was able to run the demo, so an SSD, yes, but not necessarily the one in the PS5.
 
There has to be a degree of independence between the virtualised geometry and the particular dynamic resolution from frame to frame. Or at least, that's the way I see it.
We may need new nomenclature to describe what is happening. A couple of terms to describe first:

- Epic mentioned that they use special normal maps for models. Not your usual type though. So what can it be? For REYES, you need a micropolygon map of your geometry.

- Geometry maps - what can they be? Depending on granularity, they can be a map of fragmented meshes, or a map of the aforementioned micropolygons at a 1:1 ideal triangle to pixel quality.

If texture is to texel:
What is micropolygon to...? A microcel?

If texture LOD is to mipmaps:
What is mesh LOD...? A meshmap?

They could store meshmaps to represent geometry LOD.
They could store microcels (special normal maps) to represent geometry maps.

At runtime, they will have a target resolution and load the appropriate texture and geometry LODs and maps.

For every object to draw, they can test against filling their normal maps (appropriate microcel, micropolygon map).

Then keep a counter of its fill level for that frame. If falling behind, choose a contingency lower quality meshmap.

Shade texture with appropriate texel and mipmap.

So, if falling behind with your frames, you'll see geometry artifacts as shown earlier.

Kinda like that is what I'm thinking...
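
Putting that speculation into a short sketch (every name here, "meshmap" and "microcel" included, is hypothetical): per object, pick the finest geometry LOD the remaining per-frame fill budget can afford, and fall back to a coarser meshmap when you run out, which is where the geometry artifacts shown earlier would come from.

```cpp
// Entirely speculative; "meshmap" and "microcel" are the made-up terms from the post above.
#include <cstdint>
#include <vector>

struct Meshmap {
    uint32_t lod;          // 0 = finest
    uint64_t microPolys;   // cost to fill its micropolygon ("microcel") map
};

struct SceneObject {
    std::vector<Meshmap> meshmaps;   // ordered finest first; assumed non-empty
};

// Pick the finest meshmap the remaining per-frame fill budget can afford,
// otherwise fall back to the coarsest one as the contingency.
const Meshmap& ChooseMeshmap(const SceneObject& object, uint64_t& fillBudget) {
    for (const Meshmap& candidate : object.meshmaps) {
        if (candidate.microPolys <= fillBudget) {
            fillBudget -= candidate.microPolys;
            return candidate;                     // shade with the matching texel/mipmap
        }
    }
    return object.meshmaps.back();                // falling behind: geometry artifacts like those shown earlier
}
```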
 