Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

Well, just remember that SSD performance depends on temperature, which is why I want to see it in action before saying what is fast and what isn't. We are talking at a theoretical level, not a real one, about a cheap SSD enclosed in a small, warm box like a console. Maybe it will be fast, maybe good, maybe just decent. I don't want to downplay expectations, only to be careful.

Oh I agree, which is why I am interested to know what Sony has planned for their cooler. It will have to be big enough to cover the APU, the SSD chips, and any other controller not amalgamated into the APU. It's going to be really interesting to see what they have come up with.

Cut out the other bits because I agree with you.

Indeed, UE5 will shine on all the platforms, that's for sure, and we'll see a lot of games using its tech. We should focus on talking about the tech instead of comparing hardware platforms, SSDs, and so on. Just my two cents.

Again, agree with you. This dick size comparison that's happening all over is folly. People should be getting more excited about the possible games that are coming in the future and not worry which console does what.
 
Just to add to that, the PS5 won't be transferring data at 22GB/s even 5% of the time; Cerny said best case scenario, so it might be 0.001% of the time. Cerny says Kraken is 10% more efficient, but that with the compression engine it delivers over 22GB/s in best case scenarios. So let's say the compression engine makes Kraken 10-30% more efficient (remember, we don't know how efficient the engine will make it; it could be 50%, it could be 12%).

But let's do some quick math: 9GB/s + 10% is 9.9GB/s, and 9GB/s + 30% is 11.7GB/s.

Also note that this would be textures only. Would Kraken be used for data other than textures?

To contrast, the XBSX uses BC for texture compression and it seems more efficient. I think I read somewhere that it's up to 30-40% more efficient than Kraken's 10% gain.

4.8GB/s + 40% is 6.72GB/s. Again, this seems to align with what MS said about the SSD and the BC compression ratio, i.e. over 6GB/s.
It should be a given that most/all data will be compressed on next-generation consoles.
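The back-of-envelope arithmetic above is just raw rate times (1 + gain). A quick sketch (the figures are the thread's guesses, not official specs; the function name is illustrative):

```python
def compressed_throughput(raw_gbps: float, gain: float) -> float:
    """Effective transfer rate if compression adds `gain` (0.10 = +10%)."""
    return raw_gbps * (1.0 + gain)

# PS5 guesses from the post (9GB/s baseline, 10-30% engine gain):
print(round(compressed_throughput(9.0, 0.10), 2))  # 9.9
print(round(compressed_throughput(9.0, 0.30), 2))  # 11.7

# XBSX guess from the post (4.8GB/s + 40% BC gain):
print(round(compressed_throughput(4.8, 0.40), 2))  # 6.72
```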
 

Agreed, but to what maximum degree? @Globalisateur says 4:1, or 22GB/s, which Cerny said was the best-case scenario.

I am not sure the PS5 will have that degree of compression (22GB/s) all the time. I think it will likely be around 2:1, like the XBSX, most of the time.
 
No-one should be trying to infer real-world performance of IO stacks, with unclear software and arbitrary data involvement, from white-paper specs. Let's drop the arguments over how fast the console drives are and instead talk about how geometry virtualisation may work, how Lumen may work, and that sort of thing.

Subsequent posts on compression rates, drive performance numbers, and anything else not clearly relating directly to Nanite and Lumen will be removed. The topic of IO performance appears to be happening in about three different threads at the moment.
 
Lumen looks awesome, especially since RT wasn't used. The specular isn't new but looked good too. I wonder what RT will look like on the engine.
 
Lumen seems very related to Dreams. I wonder if they've fixed some of Dreams' issues, or if they are shared. Environment lighting in Dreams is great, but direct lighting is weaker. Then again, I guess that's just a performance thing and the ideas will scale to more light sources on next-gen.
 

I find it impressive: Martin Nebelong put 2,496 sculpts on screen at the same time on a plain PS4. Instancing does magic.
 
What do you mean by instancing?

Rendering multiple copies of the same geometry mesh: the engine loads one mesh, only once, and renders that mesh n times.
Instancing uses little memory/bandwidth to show a lot on screen.
Nanite does instancing too in the UE5 demo, I suppose: just one asset, rendered pixel-to-pixel 500 times.
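A minimal sketch of the idea described above (class and function names are illustrative, not any real engine's API): one copy of the vertex data in memory, drawn once per transform.

```python
class Mesh:
    """One mesh, loaded once; its vertex data is shared by every instance."""
    def __init__(self, name: str, vertices: list):
        self.name = name
        self.vertices = vertices

def draw_instanced(mesh: Mesh, transforms: list) -> int:
    """Pretend-renders `mesh` once per transform; returns the instance count."""
    for t in transforms:
        pass  # a real renderer would apply `t` and rasterize the shared vertices
    return len(transforms)

# One statue mesh in memory, drawn 500 times at different positions:
statue = Mesh("statue", vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)])
count = draw_instanced(statue, [(float(x), 0.0, 0.0) for x in range(500)])
print(count)  # 500
```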
 
Instancing the same statues means only one model in memory...
Instancing means more than that. Since the beginning of graphics, you've only ever needed one copy of a mesh in memory, which you then draw. It'd be daft to have duplicates of the same mesh, texture, or any other asset for drawing. A newer GPU feature is geometry instancing, where the same mesh data can be reused, reducing draw overhead.

As it turns out, John Norum seems to be talking about GPU instancing, which of course doesn't apply to Dreams (or probably Nanite) because Dreams doesn't use meshes but SDFs. The SDF representation will exist in memory and be reused in the drawing calculations for each statue. I doubt GPU instancing will be of any value to virtualised meshes, because each instance of the mesh will be drawing different subsets of it.
 

You are right, instancing is a word bound to vertex shaders (from VS 3.0), but I've used it at a logical level. Dreams uses other words meaning "reusing internal data" or "cloning assets": it stores one object, with the capability to draw it up to 10K times. This is what I suppose Nanite is doing here: load one asset, once, and draw it 500 times, streaming in small visible portions of the megatexture-like data for models near the observer's viewpoint (so it can achieve the one-triangle-per-pixel claim).
 
Dreams doesn't use Meshes. ;)

Well, even voxel renderers will end up with vertex results, am I wrong?


From Alex Evans' talk at Siggraph 2015:

CSG of doom => per-object voxels => meshing => per-object poly model => scene graph render



How can you avoid meshes at all?
Anyhow, this is a very interesting approach.
 
You can have the data in a completely different format and trace or rasterize it directly, like Claybook did for the play area. (Or the good old 'voxel' games of old, which wave-surfed/traced a heightmap.)

Dreams is apparently nowadays some form of Frankenstein monster combining different methods (SDF geometry, polygon rasterization to fill holes, splats/point clouds for detail, etc.).
 
Okay. That's what games have done since forever. It's what computers have done since their inception, reusing data. Any 1990s forest was made of the same tree object drawn repeatedly. Every sprite ever drawn had one copy in memory and duplicates drawn, and every repeating texture had one copy of the texture in memory and just repeatedly used it. Every time a sound effect is played, one copy of the sample is used repeatedly. It's such a given thing, I'm a bit perplexed by it being mentioned. ;)

Though as for Nanite, it doesn't store one copy of the model in RAM. You talk about them loading the asset but they aren't; they are streaming it. The entire model exists in storage. Part of the model is fetched to draw what is visible. For drawing one statue, a fraction of it will be fetched. For the next copy of the same statue, if that same data is needed, no more will be fetched as it'll already be cached in RAM. If some other parts of the model not already present in RAM are needed, those parts will be fetched. You'll have one copy of the model data on storage, and one copy of the partial model data in RAM. We don't talk about it as instancing as instancing implies other particular techniques, and 'instancing' of data in the logical sense is the default (and only!) way to use assets.
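A toy sketch of that streaming/caching behaviour (purely illustrative, not Nanite's actual implementation): the full model lives on storage, only the chunks needed for what's visible are fetched, and a second statue reuses chunks already cached in RAM.

```python
# Full model "on storage": 100 chunks of hypothetical geometry data.
disk_model = {f"chunk{i}": f"<data {i}>" for i in range(100)}
ram_cache = {}   # partial model data resident in RAM
fetches = 0      # how many chunks we actually pulled from storage

def draw_statue(needed_chunks):
    """Fetch only the chunks not already cached, then 'draw' from the cache."""
    global fetches
    for c in needed_chunks:
        if c not in ram_cache:          # already resident? no new fetch
            ram_cache[c] = disk_model[c]
            fetches += 1

draw_statue(["chunk0", "chunk1", "chunk2"])  # first statue: 3 fetches
draw_statue(["chunk1", "chunk2", "chunk3"])  # second statue: only 1 new fetch
print(fetches, len(ram_cache))  # 4 4
```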
 
How can you avoid meshes at all?
Meshes represent the object as a bunch of numbers describing triangle vertices. Lots of maths turns this data into images, but that's not the only way to represent and draw object data. Another method is Constructive Solid Geometry (CSG) where objects are defined as formulas, and you can raytrace CSGs without any triangles being used at all. This gives you perfect spheres because the maths for the sphere isn't a bunch of triangles approximating it, but a mathematical formula describing it exactly*.

In the case of Dreams, the objects are represented as Signed Distance Fields. These are evaluated and turned into texture splats without any need for triangles at all. Although, as jlippo says, there's more going on under the hood than we know now, and there may also be triangle meshes. But computer graphics isn't at all dependent on triangle meshes; that's just one way of representing and drawing things, the one that has gained prominence since the early days. It can equally be replaced by other data models and visual resolving processes (such as purely raytraced SDFs).

* (x−h)²+(y−k)²+(z−l)²=r²?
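As an illustration of drawing a perfect sphere from a formula rather than triangles, here is a minimal sphere-tracing sketch against a sphere SDF (illustrative code, not Dreams' implementation):

```python
import math

def sdf_sphere(p, center=(0.0, 0.0, 5.0), r=1.0):
    """Signed distance from point p to a perfect sphere: |p - c| - r."""
    return math.dist(p, center) - r

def trace(origin, direction, max_steps=64, eps=1e-4):
    """March along the ray by the SDF distance until we reach the surface."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf_sphere(p)
        if d < eps:
            return t   # hit: distance along the ray to the surface
        t += d         # safe step: nothing is closer than d
    return None        # miss

hit = trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0))
print(hit)  # 4.0: the ray along +z hits the unit sphere centered at z=5
```

No triangles anywhere: the "geometry" is the distance formula itself, which is why the silhouette stays exact at any zoom level.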
 