Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

the seamless UVs would have allowed displacement mapping on any surface
To be more clear: the bunny above has terrible distortion, with the texture boundary collapsing to a singularity at the bottom, probably. That's useless, as it cannot handle complex topology at all.

Found this old picture on my HD:
[attached image: a mesh quadrangulated into patches with matching resolutions at their boundaries]
Here you could place a displacement texture on each of those quads, and because their resolution matches at quad boundaries, the displacement is seamless, has no cracks, and the texel-to-surface ratio is pretty uniform.
This would work.
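A minimal numpy sketch of that boundary rule (grid resolution and layout are my own assumptions, not taken from the picture): two adjacent quad patches are displaced from separate heightmaps, and as long as the texels along the shared edge are identical, the displaced boundary vertices coincide exactly and no crack opens.

```python
import numpy as np

RES = 9  # vertices per patch edge; matching this across patches is the whole point

def displace_patch(origin, heightmap, normal=np.array([0.0, 0.0, 1.0])):
    """Displace a flat RES x RES vertex grid along its normal by a heightmap."""
    u, v = np.meshgrid(np.linspace(0, 1, RES), np.linspace(0, 1, RES))
    verts = np.stack([origin[0] + u, origin[1] + v, np.zeros_like(u)], axis=-1)
    return verts + heightmap[..., None] * normal

# Two neighbouring quads; the right edge texels of A must equal the left edge of B.
height_a = np.random.rand(RES, RES)
height_b = np.random.rand(RES, RES)
height_b[:, 0] = height_a[:, -1]  # share the boundary texels

patch_a = displace_patch((0.0, 0.0), height_a)
patch_b = displace_patch((1.0, 0.0), height_b)

# Boundary vertices coincide exactly -> watertight, no cracks.
assert np.allclose(patch_a[:, -1], patch_b[:, 0])
print("shared edge is crack-free")
```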

But Epic's solution does not need to address the difficult combinatorial problem of quadrangulation, nor do they have to recompute new UVs, I guess.
 
What I take from there is that Kraken decompression is way faster than zlib. I don't know how many cores were used in those tests, but I suppose it's a very parallel-friendly task, and that's why Sony claims the equivalent of 9 Zen 2 cores' worth of performance for their Kraken decompressor.
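As a toy illustration of why this is such a parallel-friendly task (zlib standing in for the proprietary Kraken, chunk size arbitrary): compress the data as independent blocks, then decompress them on a thread pool. CPython's zlib releases the GIL inside decompress, so the threads really do spread across cores.

```python
import time
import zlib
from concurrent.futures import ThreadPoolExecutor

CHUNK = 1 << 20  # 1 MiB per independently-compressed block
data = bytes(range(256)) * (64 * 4096)  # ~64 MiB of compressible data
chunks = [zlib.compress(data[i:i + CHUNK]) for i in range(0, len(data), CHUNK)]

def bench(workers):
    t0 = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        out = list(pool.map(zlib.decompress, chunks))
    dt = time.perf_counter() - t0
    assert b"".join(out) == data
    print(f"{workers} thread(s): {len(data) / dt / 1e9:.2f} GB/s")

for n in (1, 2, 4, 8):
    bench(n)
```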
That's a question for another thread. This thread is not a comparison of the different console performances. ;)

A-ha! I see the text quoted isn't off-topic at all. Carry on...
 
To be more clear: the bunny above has terrible distortion, with the texture boundary collapsing to a singularity at the bottom, probably. That's useless, as it cannot handle complex topology at all.
The main point of that image and algorithm is that it shows how non-obvious data mappings can change the problem considerably. It shows Epic likely aren't just throwing vertex data down the IO pipe and could be doing something that may be visualisable as a 3D texture.
 
This needs more attention. This image shows what's happening very clearly...

[attached image: mesh geometry remapped into a 2D texture layout]

The moment the data is arranged this way, we can see how virtualised textures would also apply conceptually to the geometry in a 2D array, along with how the compression problem changes from having to crunch 3D data. You don't need to load the whole texture to show the model, only the pieces of it that are viewable, which is the same problem as picking which texture tiles to load with virtual texturing.
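A hedged sketch of that analogy (tile size and the feedback format are assumptions, not anything Epic has described): once the geometry lives in a 2D image, "which geometry to load" reduces to "which tiles of that image are visible", exactly as in virtual texturing.

```python
import math

TILE = 128  # texels per tile side (assumed)

def tiles_for_region(u0, v0, u1, v1, tex_size):
    """Return the (tx, ty) tile coordinates a visible UV rect touches."""
    x0, y0 = int(u0 * tex_size) // TILE, int(v0 * tex_size) // TILE
    x1, y1 = math.ceil(u1 * tex_size / TILE), math.ceil(v1 * tex_size / TILE)
    return {(tx, ty) for tx in range(x0, x1) for ty in range(y0, y1)}

# Pretend the renderer fed back two small on-screen UV regions of a
# 4096^2 geometry image: we only stream the handful of tiles they touch.
visible = tiles_for_region(0.10, 0.10, 0.15, 0.12, 4096) \
        | tiles_for_region(0.80, 0.55, 0.83, 0.60, 4096)
total = (4096 // TILE) ** 2
print(f"stream {len(visible)} of {total} tiles ({100 * len(visible) / total:.1f}%)")
```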

Very clever stuff.
Indeed, it is slurping up compressed, structured texture and geometry data and painting it onto an invisible 3D world. A patina-thin voxel layer of sorts.
 
Wow, leave this thread for 3 days and it's a friggin' rollercoaster of way-too-early conclusions everywhere.

- It's compute-dependent, it has little to do with I/O speed so PCs with Vegas will run this super fast!
- Tim Sweeney: "It's super dependent on I/O speed".
- It's only dependent on I/O overhead which Microsoft will solve and everyone will get this demo running on their PCs!
- Tim Sweeney: "It's I/O overhead and hardware data decompression which has no PC equivalent".
- Look, there's this guy running it on a laptop with a mobile RTX 2080 at faster than 30 FPS!
- Tim Sweeney: "That was... a video maybe? That doesn't look possible."
- No, it was running in editor mode on a laptop, because the Chinese Epic guy said so!
- No LOD or resolution mentioned, so we're not comparing apples-to-apples and for all we know the laptop could be showing something very similar to today's games.
- If it's dependent on PS5's storage then it's useless!

Well, the Series X also has a hardware decompressor, though its base SSD speed is about half, so that might have repercussions on how much geometry it can take in per frame.


Finally a semi-official source.
If you consider Epic's founder and CEO a semi-official source, I wonder what it takes to be considered an actually official one.
;)


It’s not just “Fast SSD”. It’s a robust platform for streaming assets in near realtime.

Meaning: it's not about fast storage, it's about reducing I/O overhead (solvable by software) and having a capable hardware decompression block (not solvable by software).
The r/pcmasterrace sub has been mocking Tim Sweeney for the past 5 days non-stop because he dared to say there's no PC equivalent to the PS5's I/O performance, which enabled the demo (not UE5, simply the demo we saw).
People building $5000 PCs with 16 cores, Titans and super expensive PCIe 4.0 NVMe drives are in for a surprise.


In that I agree: this is impossible without the change in rendering, and that part needed to be solved before the I/O, making it the 'primary bottleneck'.
Or maybe it's the fact that RAM density per slot didn't increase nearly as much after ~2012 as it had been up until that point?
In 2011, I would have guessed a typical 2020 PC would have 64GB of RAM, yet most are stuck with 16GB due to a number of things that happened between RAM makers (earthquakes, tsunamis, cartels, etc.).
I'm guessing if most gaming PCs had 64 to 128GB of RAM, then game designers would have designed games to load most of the game into the RAM, with access to IO being much less frequent.

Imagine if the new consoles had also increased their RAM amount by 16x, like they did between PS360 and current-gen. 128GB RAM PS5/SeriesX?




Well, I wouldn't expect a typical CEO to know anything about their products; they tend to have other concerns and hire people to do the engineering. A CEO still hands-on with the technical implementation of their stuff is fairly exceptional. So TBH, people here being better experts on something than the CEO of the multinational that makes it isn't that unrealistic. ;)
AFAIK Tim Sweeney still gives keynotes about the latest Unreal Engine features.

That is not to say PC won’t catch up of course. Eventually they always do and they certainly likely will have by the time the new consoles are out, perhaps not quite as efficient but then through brute force.
Broadly speaking. The point is PS5's SSD is fast and Sweeney made a note of it. It's also worth noting that PS5's solution includes optimised OS access and intrinsic compression, so its performance goes beyond the basic 5.5 GB/s rate and may or may not be faster than a high-end PC SSD. That's not a meaningful contribution to this discussion though; the fact that technically there may be faster solutions on workstation-class hardware (which is inevitable in any comparison; 'PC' will always have a superior solution in some $10,000+ professional workstation configuration) is neither here nor there.
Even PC exclusives like Star Citizen, which are designed with Nvidia Titans and SSDs in mind in their recommended specs, seem to ignore NVMe drives too... how strange. Or maybe it's not that they are ignoring them, but that they can't do anything about the bottlenecks.
There's just nothing that provides hardware decompression in any roadmap, at the moment.
Unless some company appears with some AIB developed in secret for the PC, the PC is years behind the new-gen consoles on I/O performance. I just don't see any way around this.
(sorry for the multi-quote, but I think it addresses the same question)
 
In the UE5 demo, around 1:45, Epic say that they are not using the game versions of the Quixel Megascans assets, but the cinematic versions typically used in films, with around a million triangles each. Further, they say that there are over a billion triangles of source geometry in each frame, which Nanite crunches down losslessly to around 20 million drawn triangles. The key here, I think, is that what we will see in games are game versions of the assets, so the fear of massively sized games might not be an issue. One of the benefits of this engine is to ease work for developers by letting them just swap in the original high-poly models. What if you can use the engine to create the game versions of assets, and those are what is ultimately stored on the game disc? The detail we see in the demo might not be too unrealistic to expect in games, because assets are basically crunched down by the engine to game versions.
I suddenly feel a bit more optimistic that this demo is not just a pipe dream for real games, but we have to see :D
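Some back-of-the-envelope numbers for a million-triangle Quixel-class asset (the formats and the 2:1 packing ratio are my assumptions, purely to show the orders of magnitude):

```python
TRIS = 1_000_000
VERTS = TRIS // 2  # rough rule of thumb for closed meshes

raw = VERTS * 3 * 4 + TRIS * 3 * 4        # float32 positions + 32-bit indices
quantized = VERTS * 3 * 2 + TRIS * 3 * 4  # 16-bit positions, same indices
print(f"raw float32 mesh : {raw / 1e6:5.1f} MB")
print(f"16-bit quantized : {quantized / 1e6:5.1f} MB")
print(f"+ 2:1 lossless pack -> ~{quantized / 2 / 1e6:.1f} MB on disc")
```

So even a "cinematic" source asset need not cost hundreds of megabytes once it's been crunched into a shipping format.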
Here you can see some Quixel Megascans models used in the UE5 demo.

https://quixel.com/megascans/collec...nt&category=natural&category=limestone-quarry
 
The main point of that image and algorithm is that it shows how non-obvious data mappings can change the problem considerably. It shows Epic likely aren't just throwing vertex data down the IO pipe and could be doing something that may be visualisable as a 3D texture.
I never spent time calculating proper storage/compression wins. If that would be enough to get such details, maybe I should stick at it :)
If not, a DAG could be used. If they use an SVO, that's almost the same, and the compression results are orders of magnitude better than a plain SVO if the model is huge enough. It's basically an SVO with instancing at any tree level, so 'micro instancing' to sound hip.
Though I did not pay much attention to the papers that handle geometry and texture, and I don't remember how the ratios were affected.
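A toy sketch of that SVO-to-DAG "micro instancing" (node representation is mine, not any shipping format): hash each subtree bottom-up and merge identical ones, so repeated structure at any level of the tree is stored exactly once.

```python
EMPTY, FULL = 0, 1   # reserved ids for empty space and solid leaves
nodes = {}           # canonical child tuple -> node id
children_of = {}     # node id -> tuple of 8 child ids

def intern(children):
    """Return a shared node id for this tuple of 8 child ids."""
    key = tuple(children)
    if key not in nodes:
        nodes[key] = len(nodes) + 2  # ids 0 and 1 are reserved above
        children_of[nodes[key]] = key
    return nodes[key]

# The same 'pillar' subtree placed in three octants collapses to one stored node.
pillar = intern([FULL, EMPTY, FULL, EMPTY, FULL, EMPTY, FULL, EMPTY])
root = intern([pillar, EMPTY, EMPTY, pillar, EMPTY, EMPTY, EMPTY, pillar])
print(f"interior nodes stored: {len(nodes)}")  # 2 in the DAG vs 4 in a plain SVO
```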
 
Well, for the discussion we can assume that compression won't greatly improve over the hardware-supported methods on offer. So this begs the question: how much better graphics will we get, and how much more varied graphics will we get? From the looks of things we've got the detail covered, but the assets are going to be a bottleneck. Everything that Uncharted taught (at least me) about clever combinations of assets, and which Dreams also shows very clearly, is going to hold true. So in a sense this demo's many repetitions of huge poly statues highlights this too: we can render a tonne now, but where we are no longer bottlenecked by rendering speed and RAM, we are more bottlenecked by space on disc. I expect the big games to be at least 200GB. The net benefit will be a bit higher thanks to some deduplication. I suspect we will see some innovations, perhaps much more of "here is asset x, and here are the coordinates where to render it, and with what deformations", much as I guess Dreams does.
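A small sketch of that "here is asset x, here are the coordinates and deformations" idea (all numbers are illustrative assumptions): store one crunched asset plus a list of compact placement records instead of N copies.

```python
import struct

STATUE_BYTES = 40_000_000  # one crunched high-poly statue, ~40 MB (assumed)
INSTANCES = 500            # how many times the level reuses it (assumed)

# position (3f) + rotation quaternion (4f) + uniform scale (1f) per placement
placement = struct.calcsize("3f4f1f")  # 32 bytes
instanced = STATUE_BYTES + INSTANCES * placement
duplicated = STATUE_BYTES * INSTANCES

print(f"per-instance record : {placement} bytes")
print(f"instanced on disc   : {instanced / 1e6:.1f} MB")
print(f"duplicated on disc  : {duplicated / 1e9:.1f} GB")
```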

I don’t know how the UE5 demo here plays into that or what the LOD0 stuff breaks down to. Does it work similarly?
I love watching other developers talk about their games; a chief example is Dreams, which still boggles my mind sometimes.
 
I agree, but why should we use real in-game speeds for the PS5, but max theoretical speeds (never reached) for all the other I/O solutions it's compared to?

Who's doing that? The figures we should all be comparing are those advertised by Microsoft and Sony as their typical compressed throughput (4.8 vs 8-9 GB/s), and on the PC side the base uncompressed speed of whatever drive we're using as a point of comparison - with the caveat that we don't know whether PCs will reach those speeds until we have a better understanding of DirectStorage.
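The comparison in plain arithmetic (raw speeds are the published ones; the console ratios are backed out from the advertised compressed figures, and the PC line is a placeholder until DirectStorage ships):

```python
def effective(raw_gb_s, ratio):
    """Effective streaming rate = raw drive speed x typical compression ratio."""
    return raw_gb_s * ratio

print(f"Series X : {effective(2.4, 2.0):.1f} GB/s (BCPack/zlib-class, ~2:1)")
print(f"PS5      : {effective(5.5, 1.55):.1f}-{effective(5.5, 1.64):.1f} GB/s (Kraken, typical)")
print(f"PC NVMe  : {effective(7.0, 1.0):.1f} GB/s (no hardware decompressor yet)")
```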
 
The r/pcmasterrace sub has been mocking Tim Sweeney for the past 5 days non-stop because he dared to say there's no PC equivalent to the PS5's I/O performance, which enabled the demo (not UE5, simply the demo we saw).

Perhaps he's not lying; he says 'current', and well, yeah, the PS5 isn't current either, you cannot buy it. By the end of the year drives will be over 7 GB/s, and with the Velocity Architecture even those bottlenecks are removed. There's no need for concern in the PC space, it's already beyond both PS5 and XSX today ;)
 
If they are really going with geometry images, encoding vertex xyz into a 2D texture and leveraging image compression algos that already exist, then they can't really claim losslessness.

Those rocky dilapidated surfaces aren't just great for showing off high geometric density, they are also great at hiding compression artifacts. It's hard to point out small imperfections in a dilapidated ruin. But how about a smooth spaceship? Well, at least smooth surfaces will already compress very well without needing to get lossy. Hummm.

In that case, the most pathological case would be structures with regularly repeating fine-grained patterns of sharp and smooth forms. Say, a regularly undulating pattern across a curved structure. That does not look like a smooth gradient when encoded in a 2D texture, and out-of-place verts would stick out visibly.
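A quick numpy sketch of that concern (purely illustrative; nobody outside Epic knows the actual encoding): push vertex positions through an 8-bit-per-channel image, the kind existing image codecs expect, and the quantisation floor alone already rules out "lossless", before any lossy block compression even runs.

```python
import numpy as np

rng = np.random.default_rng(0)
verts = rng.random((256 * 256, 3)).astype(np.float32)  # xyz in [0, 1) of the bounds

# encode xyz as 8-bit RGB texels, then decode back to floats
texels = np.round(verts * 255).astype(np.uint8)
decoded = texels.astype(np.float32) / 255

err = np.abs(decoded - verts).max()
print(f"max position error: {err:.5f} of the bounding box")  # ~1/510
# On a 2 m statue that's ~4 mm: invisible on rubble, visible on a smooth hull.
```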
 
To summarize:

- We don't know if the PS5's SSD was essential for the demo (and I mean for the same fidelity).

- Some people claim that a laptop ran the demo.

- Even if true, we still don't know how much PS5's SSD will be useful for games.

- We have to wait for actual games.
 
I also find it surprising they manage to map arbitrary topologies into a perfectly regular grid of quads. I can't wrap my head around how they make that work.

Maybe, as they divide the mesh up into multiple patches, each with their own geometry image, some of those, if not many, end up with multiple unused texels at the edges.
 
If they are really going with geometry images, encoding vertex xyz into a 2D texture and leveraging image compression algos that already exist, then they can't really claim losslessness.

Those rocky dilapidated surfaces aren't just great for showing off high geometric density, they are also great at hiding compression artifacts. It's hard to point out small imperfections in a dilapidated ruin. But how about a smooth spaceship? Well, at least smooth surfaces will already compress very well without needing to get lossy. Hummm.

In that case, the most pathological case would be structures with regularly repeating fine-grained patterns of sharp and smooth forms. Say, a regularly undulating pattern across a curved structure. That does not look like a smooth gradient when encoded in a 2D texture, and out-of-place verts would stick out visibly.

They're not. They have their own solution. We just have to wait and see what it is.


"Path has moved far from there since." He started researching with inspiration from GIM and SVO, but it's not the solution they ended up with. Expect it to be quite a bit different.
 