Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

Exactly, it's likely at least 9 months before Early Preview builds and 17 months before Final Release; nearly an eternity in terms of being able to refine and optimize.

Meh, it's... semi-bluster? It's a tech demo, and games never end up looking like the tech demos. It's like: great, you did a thing in a tech demo. Now run game code and volumetric fog and clouds and ray-traced reflections and a hundred characters, animate that forest and the people, have the enemies tossing fireballs, and oh, there goes your 60fps again.

This is exactly why I expect only a few more series (and most indie games) to go 60fps in the medium term. Maybe when silicon is replaced as a semiconductor by something that can hit a hundred gigahertz at lower power, we won't have to care. Path-trace everything at a hundred fps, we've got 500 teraflops so why not! But until then there's still a tradeoff between visual quality and frames per second, and as some series reach the bar of "good enough" visual quality they'll go 60fps. But that bar isn't somewhere every series is going to reach.
 
btw, has Unreal released anything about the UE5 UX?

The one in UE4 looks pretty archaic. It feels like I'm back on Windows 98 or something every time I use UE4. They've acquired Twinmotion, so I kinda hoped they'd incorporate it at least for their level editor. It's way easier to do level editing/creation in, and more pleasing to the eye, than UE4 (and Epic has promised a Twinmotion-to-UE bridge to be released in the summer).
 
So if, when the PS5's IO gets pushed the most (when the character is flying through the ruins), some of the assets swap to higher levels of detail 1-2 frames later than they should as they approach the camera, then on XBSX that would happen 2-4 frames late.
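Just as a back-of-the-envelope sanity check (my own numbers, nothing measured: the commonly quoted raw SSD figures of ~5.5 GB/s for PS5 and ~2.4 GB/s for XSX, an arbitrary 64 MB burst of assets, 30 fps), the delay scales with the bandwidth ratio, roughly 2.3x:

Code:
# Back-of-the-envelope: how many 30 fps frames a burst of streamed asset data
# costs on each console, using the commonly quoted *raw* SSD bandwidth figures.
# The 64 MB burst size is an arbitrary illustration, not a measured value.

FRAME_TIME = 1 / 30          # seconds per frame at 30 fps
RAW_BANDWIDTH_GBPS = {
    "PS5": 5.5,
    "XSX": 2.4,
}

def frames_to_stream(megabytes: float, platform: str) -> float:
    seconds = (megabytes / 1024) / RAW_BANDWIDTH_GBPS[platform]
    return seconds / FRAME_TIME

for platform in RAW_BANDWIDTH_GBPS:
    print(platform, round(frames_to_stream(64, platform), 2), "frames")
# PS5 ~0.34 frames vs XSX ~0.78 frames for the same burst: the same ~2.3x
# ratio as the 1-2 vs 2-4 frame figures above.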

I wonder if the moments when the character is flying through the ruins really are the moments when the IO gets pushed the most.
During high-motion scenes, motion blur gets jacked up, so there's no need to load a whole lot of geometry or texture detail that will ultimately get blurred.
So if they're fetching geometry and texture data from storage at least once every frame, and if they can determine LODs before fetching (which is what's happening, AFAIK?), it doesn't look like it'd make much sense to bring in a lot of data that will be made useless by the post-processing pipeline.
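Just to make that concrete (purely my own sketch, nothing to do with how Nanite actually selects detail): if the streamer knows an object's screen-space velocity before fetching, it could bias the requested detail tier down when motion blur will hide the fine detail anyway:

Code:
import math

def requested_detail_tier(screen_size_px: float, screen_velocity_px: float,
                          blur_scale: float = 0.5) -> int:
    """Pick a detail tier (bigger = more detail) from projected size,
    biased down when motion blur will hide fine detail anyway.

    screen_size_px: projected size of the asset on screen, in pixels.
    screen_velocity_px: how far it moves per frame, in pixels.
    blur_scale: hypothetical tuning constant, not a real engine parameter.
    """
    base_tier = max(0, int(math.log2(max(screen_size_px, 1.0))))
    blur_penalty = int(math.log2(1.0 + blur_scale * screen_velocity_px))
    return max(0, base_tier - blur_penalty)

print(requested_detail_tier(512, 0))    # static object: tier 9 (full detail)
print(requested_detail_tier(512, 120))  # fast-moving object: tier 4, much coarser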

XBSX supports ZLib in its hardware block for general compression, and BCPack for textures.

I've been trying to figure out if Kraken can be used (or is effective) for general compression or only for textures like BCPack. The Oodle website doesn't seem to mention textures at all, only that it's very fast at decompressing, but all the other sources on the web that I find have been saying it's used for textures.
Maybe because texture compression ratio is the metric that has mattered the most so far.
 
I wonder if the moments when the character is flying through the ruins really are the moments when the IO gets pushed the most.
During high-motion scenes, motion blur gets jacked up, so there's no need to load a whole lot of geometry or texture detail that will ultimately get blurred.
So if they're fetching geometry and texture data from storage at least once every frame, and if they can determine LODs before fetching (which is what's happening, AFAIK?), it doesn't look like it'd make much sense to bring in a lot of data that will be made useless by the post-processing pipeline.

Yes, there's no need at all to load giant assets with amazingly fine details that can't be seen; those fast, blurred sections are a nonsensical showcase for Nanite.
And if I'm not wrong, Nanite works only with static assets.

In action scenes I'd prefer to see realistic deformable structures. It could be useful for a tech demo, but the real world isn't made only of rocks.
 
I've been trying to figure out if Kraken can be used (or is effective) for general compression or only for textures like BCPack. The Oodle website doesn't seem to mention textures at all, only that it's very fast at decompressing, but all the other sources on the web that I find have been saying it's used for textures.
Maybe because texture compression ratio is the metric that has mattered the most so far.

Richard from Digital Foundry refers to Kraken as follows:

https://www.eurogamer.net/articles/...s-and-tech-that-deliver-sonys-next-gen-vision

Digital Foundry said:
The controller supports hardware decompression for the industry-standard ZLIB, but also the new Kraken format from RAD Game Tools, which offers an additional 10 per cent of compression efficiency

This to me suggests it's a general lossless compression scheme just like zlib but a little more efficient. That means it will further compress already compressed textures (in GPU native formats) along with any other compressible data on the disk.
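To put that "additional 10 per cent" into numbers (illustrative only; the 5.5 GB/s raw figure is the commonly quoted PS5 spec, and the zlib ratio here is made up):

Code:
# Illustrative only: effective streaming rate = raw SSD speed x compression ratio.
RAW_GBPS = 5.5                       # commonly quoted PS5 raw figure
ZLIB_RATIO = 1.64                    # hypothetical average ratio for zlib on game data
KRAKEN_RATIO = ZLIB_RATIO * 1.10     # "an additional 10 per cent" over zlib

print(f"zlib:   {RAW_GBPS * ZLIB_RATIO:.2f} GB/s effective")
print(f"Kraken: {RAW_GBPS * KRAKEN_RATIO:.2f} GB/s effective")
# zlib:   9.02 GB/s effective
# Kraken: 9.92 GB/s effective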
 
This to me suggests it's a general lossless compression scheme just like zlib but a little more efficient. That means it will further compress already compressed textures (in GPU native formats) along with any other compressible data on the disk.

You can't know in advance how effective a general compression algorithm will be on already-compressed media such as images, textures, movies, etc.,
but I bet it's near zero for such media.
Kraken is a good on-the-fly algorithm, but it can't beat an offline general-purpose compressor. Try rar (based on Lempel-Ziv / LZSS) or similar at its maximum compression ratio on a texture: the gain is near zero.
You need a texture-specific algorithm for textures, which is why BCPack is so much better at this task.
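That's easy to sanity-check with any general-purpose compressor. Here's a quick Python test using zlib as a stand-in for Kraken (which isn't freely available) and random bytes as a proxy for already block-compressed BCn texture data, which is close to maximum entropy:

Code:
import os
import zlib

# Proxy experiment: BCn-compressed texture blocks are close to maximum entropy,
# so random bytes stand in for them here. A repeating pattern stands in for
# typical uncompressed, structured game data.
high_entropy = os.urandom(1 << 20)                       # ~BC-compressed texture proxy
structured   = (b"vertex_position_normal_uv_" * 40000)[: 1 << 20]

for name, blob in [("high-entropy", high_entropy), ("structured", structured)]:
    packed = zlib.compress(blob, 9)
    print(f"{name}: {len(blob)} -> {len(packed)} bytes "
          f"({len(packed) / len(blob):.2%})")
# The high-entropy blob barely shrinks (it often grows slightly), while the
# structured data compresses dramatically -- which is the point made above.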
 
I wonder if the moments when the character is flying through the ruins really are the moments when the IO gets pushed the most.

So if they're fetching geometry and texture data from storage at least once every frame, and if they can determine LODs before fetching (which is what's happening, AFAIK?), it doesn't look like it'd make much sense to bring in a lot of data that will be made useless by the post-processing pipeline.
Though true, the moment the player stops, the data needs to be there. If the platform can't handle fast travel, gameplay options might be severely compromised. Given the potentially relatively low demands of streaming, I think it's more likely the content can be streamed at high fidelity, maybe not LOD0 but not super-low-res, as required.

I've been trying to figure out if Kraken can be used (or is effective) for general compression or only for textures like BCPack.
It's a general purpose binary compressor. Did you find this?
http://cbloomrants.blogspot.com/2016/04/performance-of-oodle-kraken.html
 
I wonder if the moments when the character is flying through the ruins really are the moments when the IO gets pushed the most.
During high-motion scenes, motion blur gets jacked up, so there's no need to load a whole lot of geometry or texture detail that will ultimately get blurred.
So if they're fetching geometry and texture data from storage at least once every frame, and if they can determine LODs before fetching (which is what's happening, AFAIK?), it doesn't look like it'd make much sense to bring in a lot of data that will be made useless by the post-processing pipeline.

Streaming systems today do not take motion blur into account, and I don't expect Nanite to either. It would be more of a headache to implement, and it's not even clear that it's a desirable parameter. Streaming is about pre-fetching what you need now AND what you might need in the near future (SSDs might have changed that "near future" from the next 30 seconds to the next 5 frames, but they did not eradicate the consideration completely), so even if you are moving fast now, your character might hit a wall and stop in the very next instant.

It could be, though, that Nanite streaming is simply not keeping up at all in that part and we barely noticed.
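A toy version of that trade-off (my own sketch, nothing to do with how Nanite actually schedules its streaming): prioritise the tiles needed for the current view, but also prefetch around every position the camera could occupy over the next few frames, so a sudden stop still has data resident:

Code:
from dataclasses import dataclass

@dataclass(frozen=True)
class Tile:
    x: int
    y: int

def tiles_around(x: float, y: float, radius: int, tile_size: float = 8.0):
    """All world tiles within `radius` tiles of the given position."""
    cx, cy = int(x // tile_size), int(y // tile_size)
    return {Tile(cx + dx, cy + dy)
            for dx in range(-radius, radius + 1)
            for dy in range(-radius, radius + 1)}

def streaming_request(pos, vel, frames_ahead=5):
    """Current-view tiles get top priority; predicted-view tiles are prefetched.

    Prefetches around *every* position up to frames_ahead frames out, so the
    data is still resident if the player suddenly stops partway there.
    """
    needed_now = tiles_around(*pos, radius=2)
    prefetch = set()
    for f in range(1, frames_ahead + 1):
        future = (pos[0] + vel[0] * f, pos[1] + vel[1] * f)
        prefetch |= tiles_around(*future, radius=1)
    return needed_now, prefetch - needed_now

now, ahead = streaming_request(pos=(100.0, 40.0), vel=(12.0, 0.0))
print(len(now), "tiles needed now,", len(ahead), "tiles prefetched")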
 
Though true, the moment the player stops, the data needs to be there. If the platform can't handle fast travel, gameplay options might be severely compromised. Given the potentially relatively low demands of streaming, I think it's more likely the content can be streamed at high fidelity, maybe not LOD0 but not super-low-res, as required.

It's a general purpose binary compressor. Did you find this?
http://cbloomrants.blogspot.com/2016/04/performance-of-oodle-kraken.html

I found some better tests on PS4, run on files tied to games, like a lightmap with BC3 texture compression.

http://cbloomrants.blogspot.com/2016/05/ps4-battle-lz4-vs-lzsse-vs-oodle.html

Another one with files tied to game textures like BC1, BC3 or DDS on PS4

EDIT: .gr2 is animation data

https://filext.com/file-extension/GR2

http://www.radgametools.com/granny.html

EDIT": other PS4 test
http://cbloomrants.blogspot.com/2016/05/ps4-battle-miniz-vs-zlib-ng-vs-zstd-vs.html

http://cbloomrants.blogspot.com/2017/03/kraken-perf-with-simultaneous-threaded.html
 
Anyway, I have serious concerns about Lumen & Nanite:

Lumen is designed to run on platforms that lack dedicated HW for ray tracing and mesh shading, using only compute units, leaving in the dust Nvidia from RTX 20x0 to the new RTX 30x0, Xbox Series X and AMD Radeon 6x00,
in favour of platforms such as Android, Xbox One, PS4 and maybe PS5 (Cerny said RT runs on hardware, but never said a word about dedicated units, so it could run on compute units).

Nanite is designed to run on platforms with a fast I/O disk subsystem, such as PS5, XSX and the best PC configurations, leaving in the dust Android and a lot of the PC userbase.

So the only platform that fits both is PS5. L&N will run everywhere, but, in my opinion, they were designed to fully support only one platform.

It would be a terrible shame if this new UE5 tech doesn't fully use dedicated RT units, mesh shading HW support and maybe Sampler Feedback, falling back instead to the old algorithms/technology.

From what I understand, this is an engine that's useful for simplifying multiplatform development, but it doesn't seem suitable for first-party games (except Sony's).
13 of 15 Microsoft studios are using UE4; what advantage will they gain by upgrading the engine to version 5?
What advantage will RTX and next-gen AMD owners gain when it doesn't exploit a lot of the silicon in their GPUs?

EGC stream. 1:18:20

Anyway, I have serious concerns about Lumen & Nanite:

Lumen is designed to run on platforms that lack dedicated HW for ray tracing and mesh shading, using only compute units, leaving in the dust Nvidia from RTX 20x0 to the new RTX 30x0, Xbox Series X and AMD Radeon 6x00,
in favour of platforms such as Android, Xbox One, PS4 and maybe PS5 (Cerny said RT runs on hardware, but never said a word about dedicated units, so it could run on compute units).

Nanite is designed to run on platforms with a fast I/O disk subsystem, such as PS5, XSX and the best PC configurations, leaving in the dust Android and a lot of the PC userbase.

So the only platform that fits both is PS5. L&N will run everywhere, but, in my opinion, they were designed to fully support only one platform.

It would be a terrible shame if this new UE5 tech doesn't fully use dedicated RT units, mesh shading HW support and maybe Sampler Feedback, falling back instead to the old algorithms/technology.

From what I understand, this is an engine that's useful for simplifying multiplatform development, but it doesn't seem suitable for first-party games (except Sony's).
13 of 15 Microsoft studios are using UE4; what advantage will they gain by upgrading the engine to version 5?
What advantage will RTX and next-gen AMD owners gain

EGC stream. 1:18:20


Nanite is a mix of a primitive shader/mesh shader rasterizer for triangles bigger than one pixel and a software compute-shader rasterizer for triangles the size of a pixel. UE5 supports RTX but they don't use it for Lumen, probably for performance reasons.
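Roughly, the split described above amounts to something like this (a simplified sketch of the idea, not Epic's actual heuristic): estimate each triangle's screen-space area and route big triangles to the hardware rasterizer, micro-triangles to the compute path:

Code:
def screen_area(p0, p1, p2) -> float:
    """Area (in pixels^2) of a triangle already projected to screen space."""
    return abs((p1[0] - p0[0]) * (p2[1] - p0[1])
               - (p2[0] - p0[0]) * (p1[1] - p0[1])) * 0.5

def choose_rasterizer(p0, p1, p2, pixel_threshold: float = 1.0) -> str:
    """Hypothetical dispatch: hardware path for triangles larger than ~1 pixel,
    software (compute) path for pixel-sized micro-triangles."""
    if screen_area(p0, p1, p2) > pixel_threshold:
        return "hardware_raster"   # primitive/mesh-shader style path
    return "software_raster"       # compute-shader micro-triangle path

print(choose_rasterizer((0, 0), (10, 0), (0, 8)))      # hardware_raster
print(choose_rasterizer((0, 0), (1.2, 0), (0, 1.1)))   # software_raster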
 
Nanite is a mix of a primitive shader/mesh shader rasterizer for triangles bigger than one pixel and a software compute-shader rasterizer for triangles the size of a pixel. UE5 supports RTX but they don't use it for Lumen, probably for performance reasons.

Epic engineer Wang, in the EGC interview, said that it doesn't use mesh shading at all, but part of it is similar; it's based on the older primitive shading.
 
such as Android, Xbox One, PS4 and maybe PS5 (Cerny said RT runs on hardware, but never said a word about dedicated units, so it could run on compute units)

13 of 15 Microsoft studios are using UE4; what advantage will they gain by upgrading the engine to version 5?
What advantage will RTX and next-gen AMD owners gain when it doesn't exploit a lot of the silicon in their GPUs?
PS5 does indeed deliver hardware-accelerated ray tracing via its Intersection Engine, which Cerny says is "based on the same strategy as AMD's upcoming PC GPUs"

https://www.eurogamer.net/articles/...s-and-tech-that-deliver-sonys-next-gen-vision


PS5 has the same RT as AMD's PC GPUs.


An assumption like "PS5 doesn't have dedicated RT hardware so it will gain more from UE5" is not safe.
 
Anyway, I have serious concerns about Lumen & Nanite:

Lumen is designed to run on platforms that lack dedicated HW for ray tracing and mesh shading, using only compute units, leaving in the dust Nvidia from RTX 20x0 to the new RTX 30x0, Xbox Series X and AMD Radeon 6x00,
in favour of platforms such as Android, Xbox One, PS4 and maybe PS5 (Cerny said RT runs on hardware, but never said a word about dedicated units, so it could run on compute units).

Nanite is designed to run on platforms with a fast I/O disk subsystem, such as PS5, XSX and the best PC configurations, leaving in the dust Android and a lot of the PC userbase.

So the only platform that fits both is PS5. L&N will run everywhere, but, in my opinion, they were designed to fully support only one platform.

It would be a terrible shame if this new UE5 tech doesn't fully use dedicated RT units, mesh shading HW support and maybe Sampler Feedback, falling back instead to the old algorithms/technology.

From what I understand, this is an engine that's useful for simplifying multiplatform development, but it doesn't seem suitable for first-party games (except Sony's).
13 of 15 Microsoft studios are using UE4; what advantage will they gain by upgrading the engine to version 5?
What advantage will RTX and next-gen AMD owners gain when it doesn't exploit a lot of the silicon in their GPUs?

EGC stream. 1:18:20


I'm pretty sure Epic has confirmed that UE5 will work on everything from Android to the latest Nvidia hardware. Wasn't that stated in the original video...?

The engine will similarly scale to whatever strengths the platform has: SSD, RTRT, etc. Exactly as any engine would.

Epic would have to be rather daft to create an engine that works best on one platform. Especially since their cash cow (Fortnite) works on everything.
 
The short answer is there need be no concerns. Epic is one of, if not the, most significant third-party engine providers in the game development space. They will provide technologies that utilise each target platform's strengths and remain competitive. If Lumen doesn't use hardware RT, expect another, better solution for PC hardware that can run a better system.
 
"based on the same strategy as AMD's upcoming PC GPUs"

is differnt from:

"PS5 has the same RT as AMD PC GPU."

"strategy" is not "hardware units". words are important

I'm pretty sure Epic has confirmed that UE5 will work on everything from Android to the latest Nvidia hardware. Wasn't that stated in the original video...?

The engine will similarly scale to whatever strengths the platform has: SSD, RTRT, etc. Exactly as any engine would.

Epic would have to be rather daft to create an engine that works best on one platform. Especially since their cash cow (Fortnite) works on everything.

As I said, I believe it will run everywhere, but my concern is that it won't fully exploit some platforms.

 
I'd have zero concerns about UE5 on Xbox considering most of their first-party studios use UE. I expect Microsoft to contribute a lot to UE5, so UE5 will have good performance "out of the box" when using the XDK. Like, I would not be surprised to see Microsoft contribute a modified virtual texturing system for Series X that takes advantage of sampler feedback streaming in a fairly optimal way. That way it's basically a checkbox for 3rd parties and you get reasonable performance.
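To illustrate what that checkbox might boil down to (a purely conceptual model, not the actual D3D12 Sampler Feedback API or Microsoft's implementation): the GPU records which texture tiles/mips it actually sampled last frame, and the streamer loads only the requested tiles that aren't already resident:

Code:
# Conceptual model of sampler-feedback-driven streaming (not the D3D12 API):
# the "feedback map" is just the set of (mip, tile_index) pairs the GPU
# reported sampling last frame; the streamer loads what is requested and missing.

resident = set()   # (mip, tile_index) pairs already in memory

def stream_from_feedback(feedback):
    """Return the tiles to load this frame, given last frame's sampled tiles."""
    to_load = sorted(feedback - resident)   # only what was sampled and isn't resident
    resident.update(to_load)
    return to_load

frame1 = {(0, 12), (0, 13), (1, 6)}         # tiles the GPU actually touched
frame2 = {(0, 13), (1, 6), (1, 7)}          # camera moved slightly
print(stream_from_feedback(frame1))         # loads all three
print(stream_from_feedback(frame2))         # loads only the newly needed (1, 7)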
 