Non-UE5 discussion about Instant Resume Between Multiple Titles has been moved to its own thread: https://forum.beyond3d.com/threads/instant-resume-between-multiple-titles-2020-spawn.61760/
Yan Chernikov supposes that this short demo weighs hundreds of GB. Does this mean that a whole 8-12 hour game using those kinds of assets is hardly possible in this 'next' gen?

> Yan Chernikov supposes that this short demo weighs hundreds of GB...

Pre or post decompression?

> Pre or post decompression?

On the disk.

> On the disk.

Whose disk? A creator's / developer's disk (like the person in question) or the end user's disk?

> Yan Chernikov supposes that this short demo weighs hundreds of GB... (https://forum.beyond3d.com/posts/2125146/)

Will the same identical assets change automatically going from one disk to another with the same compression?

> Will the same identical assets change automatically going from one disk to another with the same compression?

Creators don't work on compressed data.

> Will the same identical assets change automatically going from one disk to another with the same compression?

Creators will use raw data. Indeed, that's a key selling point of UE5 for professional content rather than games. For games, you can compress the assets to other formats, including lossy compression. We're not talking system-level I/O compression (Kraken on PS5), but the difference between a 12-megapixel HDR RAW image used during creation and a 24-bit JPEG of that image in the game. A lot depends on the geometry representation in Nanite and how well it can support compression within its performance profile.
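To put rough numbers on that creation-versus-shipped gap, here is a back-of-the-envelope sketch. The rates are my own illustrative assumptions (16 bits per channel for the HDR RAW, roughly 2 bits per pixel for a high-quality JPEG), not anything measured from the demo:

```cpp
#include <cstdio>

// Back-of-the-envelope: source asset vs. shipped form. All numbers here are
// illustrative assumptions, not measurements from the demo.
int main() {
    const double pixels        = 12.0e6;   // 12-megapixel image
    const double rawBytesPerPx = 3 * 2;    // 3 channels x 16-bit half float (HDR RAW)
    const double jpegBitsPerPx = 2.0;      // assumed high-quality JPEG rate (~2 bpp)

    const double rawMB  = pixels * rawBytesPerPx / (1024.0 * 1024.0);
    const double jpegMB = pixels * (jpegBitsPerPx / 8.0) / (1024.0 * 1024.0);

    std::printf("HDR RAW source: %.1f MB\n", rawMB);                 // ~68.7 MB
    std::printf("Shipped JPEG:   %.1f MB\n", jpegMB);                // ~2.9 MB
    std::printf("Ratio:          %.0fx smaller\n", rawMB / jpegMB);  // ~24x
    return 0;
}
```

Even under these rough assumptions, a single texture shrinks by more than an order of magnitude between the creator's disk and the shipped game.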
> Creators will use raw data. Indeed, that's a key selling point of UE5 for professional content rather than games...

Since Nanite apparently is pretty clever about how it scales those original assets to screen, could it be utilized as the middleman to provide automatically scaled-down versions of the original models to ship with the game?
UE supports two types of virtual texturing: streaming virtual textures (SVT) and runtime virtual textures (RVT). They've rolled their own solution for some time with varying support levels (now up to 8K texture sizes). Unless something has changed with Nanite, I largely suspect they are using the same VT engine they have already built. It's possible that Nanite is some form of pseudo-VT that borrows elements from both; I'm not entirely sure, to be honest.
Given the assumption that Nanite only works with static geometry and meshes, I lean towards it being SVT, as this is what enables them to stream assets that largely surpass memory footprints. Key features behind SVT are (see the sketch after this list):
- Supports very large texture resolutions.
- Texel data cached in memory on demand.
- Texel data cooked and loaded from disk.
- Well suited for texture data that takes time to generate, such as lightmaps or large, detailed artist-created textures.
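For illustration, here is a minimal sketch of the bookkeeping those features imply: a page table keyed by (tile, mip) that caches tiles in memory on demand and streams cooked data from disk on a miss. This is a generic SVT shape, not Epic's actual code; all names are hypothetical:

```cpp
#include <cstdint>
#include <cstdio>
#include <unordered_map>

// Generic SVT bookkeeping sketch, not Epic's implementation. Tiles are cached
// in memory on demand; a miss triggers a load of cooked tile data from disk.
struct TileKey {
    uint16_t pageX, pageY;   // tile coordinates within the large virtual texture
    uint8_t  mipLevel;
    bool operator==(const TileKey& o) const {
        return pageX == o.pageX && pageY == o.pageY && mipLevel == o.mipLevel;
    }
};

struct TileKeyHash {
    size_t operator()(const TileKey& k) const {
        return (size_t(k.mipLevel) << 32) ^ (size_t(k.pageX) << 16) ^ size_t(k.pageY);
    }
};

class TileCache {
public:
    // Returns true if the tile is resident; otherwise queues a read of the
    // cooked tile and the caller falls back to a coarser mip for this frame.
    bool Request(const TileKey& key) {
        if (resident_.count(key)) return true;
        IssueDiskRead(key);
        return false;
    }
private:
    void IssueDiskRead(const TileKey& key) {
        // Placeholder: in a real engine this enqueues async I/O; on completion
        // the tile is decompressed into a physical cache slot and inserted.
        resident_.emplace(key, nextSlot_++);
        std::printf("streaming tile (%u,%u) mip %u\n",
                    (unsigned)key.pageX, (unsigned)key.pageY, (unsigned)key.mipLevel);
    }
    std::unordered_map<TileKey, uint32_t, TileKeyHash> resident_;
    uint32_t nextSlot_ = 0;
};

int main() {
    TileCache cache;
    TileKey key{12, 34, 2};
    if (!cache.Request(key))  // first touch: miss, streaming kicked off
        std::printf("sampling a coarser resident mip this frame\n");
    cache.Request(key);       // later frame: tile is now resident
    return 0;
}
```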
But of those streaming candidates, lightmaps aren't required to load here at all (Lumen handles the lighting), so there are additional savings there.
Tile sizes of the SVT system are, I believe, shown in this debug view [screenshot not reproduced]: the different colours show the different sources the textures come from, and the squares should be the tile sizes.
As for why Tiled Resources or SFS may not be used: Tiled Resources have a fixed tile size, whereas software-based solutions can choose smaller tile sizes if that suits their needs. I believe for PRT/Tiled Resources it is 64 KB for 2D tiles; volumetric Tiled Resources (Tier 3) can get significantly larger. From watching the demo, there is some possibility in my mind that volumetric textures are being leveraged for destruction (with regard to their future destruction engine), but in this case I believe we're just looking at 2D. There is no lightmap data being loaded either, because of Lumen. So you can determine the worst-case bandwidth cost from 64 KB tiles coming onto screen; for anything larger than that in software, you'd just use the hardware PRT version.
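A quick worked example of that worst case, assuming a 4K screen at 1:1 texel-to-pixel density and the 128x128-texel, 4-bytes-per-texel shape of a 64 KB D3D tile (my assumptions, not figures from the demo):

```cpp
#include <cstdio>

// Worst-case cost of refilling the entire screen with unique 64 KB tiles.
// Assumptions: 4K output, 1:1 texel density, 128x128 texels at 4 B/texel.
int main() {
    const int screenW   = 3840, screenH = 2160;
    const int tileDim   = 128;          // texels per tile side
    const int tileBytes = 64 * 1024;    // fixed 64 KB tile

    const int tilesX = (screenW + tileDim - 1) / tileDim;   // 30
    const int tilesY = (screenH + tileDim - 1) / tileDim;   // 17
    const int tiles  = tilesX * tilesY;                     // 510

    std::printf("Unique tiles covering a 4K screen: %d\n", tiles);
    std::printf("Full-screen refill: %.1f MB\n",
                tiles * (double)tileBytes / (1024.0 * 1024.0));   // ~31.9 MB
    return 0;
}
```

Even the absurd case of replacing all ~510 of those tiles every frame at 60 fps only comes to roughly 1.9 GB/s, and real camera motion replaces only a small fraction of them per frame.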
So you'd really just need to ask how much of the screen is changing with respect to tiles. As you can see in the video above, many tiles are falling out of view, but very few tiles are coming into view. The likely bandwidth requirements are very minimal in this type of linear scenario; strafing or spinning in circles would put more strain on your I/O, but generally speaking, moving in this fashion, bandwidth requirements are low.
As tiles load in from further away and you approach them, SVT will attempt to figure out, by sampling the texture, what the player should see based on the tile. If the tile hasn't arrived yet, it'll just try to guess the colour. This is also how SFS works, just as the hardware variant of it. So there is ample time between when a tile is far away (where you can't see its details anyway) and when you actually need the detail up close. This is generally how we get away with streaming assets today on slower HDDs.
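A minimal sketch of that fallback behaviour, assuming a simple walk up the mip chain on a miss; the data structures and names here are hypothetical illustration, not UE or DirectX API:

```cpp
#include <cstdio>
#include <set>
#include <tuple>

// Sketch of the "guess the colour until the tile arrives" fallback used by
// both software SVT and hardware SFS. On a miss, walk to coarser mips, since
// a coarser tile covers more surface and is far more likely to be resident.
using TileKey = std::tuple<int, int, int>;     // (tileX, tileY, mip)

std::set<TileKey> resident = { {0, 0, 3} };    // only one coarse tile is loaded
std::set<TileKey> pending;                     // misses queued for async streaming

int ResolveMip(int tileX, int tileY, int desiredMip, int coarsestMip) {
    for (int mip = desiredMip; mip <= coarsestMip; ++mip) {
        TileKey key{tileX >> (mip - desiredMip), tileY >> (mip - desiredMip), mip};
        if (resident.count(key)) return mip;   // best resident approximation
        pending.insert(key);                   // request finer tile for later frames
    }
    return -1;                                 // nothing resident at any mip
}

int main() {
    // We want mip-0 detail, but only a mip-3 tile is in memory right now.
    int mip = ResolveMip(5, 7, 0, 3);
    std::printf("Sampling mip %d while %zu finer tile(s) stream in\n",
                mip, pending.size());
    return 0;
}
```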
So you actually get to remove tiles faster than you need to write them into memory. There are limits if you continually speed up the process, however, and if you're swapping out lighting data for geometry data, then I suppose the requirements could be larger. I'm honestly not expecting much at all, though: probably under 1 GB/s, or at most around 1 GB/s in the worst-case scenarios. I believe the fly-by will be fairly heavy duty here (SSD/NVMe speeds required), but the standard walking around will be quite minimal.

You can only change so much in memory. If you're replacing 5.5 GB/s of data in your actual memory, you don't have any bandwidth remaining to do any actual rendering. Just consider that when your chips are monopolized by moving data in while they're trying to actually render things, you get heavy contention between I/O, GPU and CPU needs. I largely suspect this is still only about 1 GB/s for the fly-by section. That's significant, because you're constantly loading this amount in: quite massive, to be honest. It is very rare for an SVT to load in so much data when you consider that the worst-case tile size is 64 KB.
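To sanity-check that 1 GB/s figure, here is the same arithmetic seen from the tile side (again using my assumed 64 KB tiles and the 4K full-screen estimate from earlier):

```cpp
#include <cstdio>

// How aggressive is ~1 GB/s of pure tile streaming? Illustrative numbers only.
int main() {
    const double budget    = 1.0 * 1024 * 1024 * 1024;  // assumed 1 GB/s I/O budget
    const int    tileBytes = 64 * 1024;                 // worst-case 64 KB tile
    const int    fps       = 60;
    const int    screenTiles4K = 510;   // unique tiles on a 4K screen (see above)

    const double tilesPerSec   = budget / tileBytes;    // 16384 tiles/s
    const double tilesPerFrame = tilesPerSec / fps;     // ~273 tiles/frame

    std::printf("Tiles/s at 1 GB/s:  %.0f\n", tilesPerSec);
    std::printf("Tiles/frame @60fps: %.0f (vs %d unique tiles on screen)\n",
                tilesPerFrame, screenTiles4K);
    return 0;
}
```

In other words, 1 GB/s would let you replace over half of all tiles visible on a 4K screen every single frame, which is why even that figure already counts as massive for an SVT.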