PC system impacts from tech like UE5? [Storage, RAM] *spawn*

Just dropping texture resolution to half would drop the bandwidth needed to a quarter.
Saved texture in paint.net reduced the size by 50% and it's 1/3rd (ish)
TmyEQ3C.jpg
 

The raw size is 1/4 if you halve the image dimensions. Adding compression can produce something different.

Case in point: 256x256 = 65,536 texels, 128x128 = 16,384 texels. 65536/16384 = 4.

Or you can look at the equation: size = x*y, halved_size = (x/2)*(y/2) = (x*y)/4
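The raw-size arithmetic above can be sketched in a few lines of Python (texel counts only, before any compression):

```python
# Texel count of an uncompressed texture is width * height,
# so halving both dimensions quarters the data.

def raw_texels(width, height):
    """Texel count of an uncompressed texture."""
    return width * height

full = raw_texels(256, 256)
half = raw_texels(128, 128)
print(full, half, full // half)  # 65536 16384 4
```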
 
Did it with a BMP; it was the compression messing things up.
But then again, dropping the resolution by half may not result in 1/4 the bandwidth if it's sending compressed textures across the bus.
Maybe someone could repeat the test with DDS.
 
Yep.
And those 0.05 ms may be a bit too optimistic for a lot of consumer drives, but surely a lot of them can hit 0.1 ms, which already makes them 50-100 times faster than an HDD. So access latency is imperceptible, and the system has "instantaneous" access to the data.

For example, the 970 Evo (M.2), when performing 4K random reads: ~0.3 ms
PC would be at a big disadvantage here until DirectStorage is available. Sony's solution DMAs data from the controller to RAM; on PC the data might flow through multiple software layers and even IPC (kernel to user space). I know many datacenter solutions work around these kinds of issues, but regular apps probably don't.
 
DirectStorage is not going to convert anybody's PC into a device that can perform I/O as fast as the PS5. If anybody thinks it will, they are lacking a basic understanding of PC architecture, which is a bunch of components connected across a number of buses. A software API cannot have your GPU access your SSD directly - it will still have to go over the equivalent of the north and south bridges, with the CPU driving I/O transfers and possibly unpacking and decompressing data before it even gets to the GDDR.
 

I'm actually wondering if DirectStorage will require hardware support from the CPU, chipset and motherboard.
 
Oh, I'm 100% sure that Microsoft, along with Intel and AMD, have looked at options for changing the I/O arrangement in future hardware and future iterations of Windows. What will not appeal to anybody, and what really is necessary if you want to approach the I/O efficiencies of next-gen consoles, is allowing the GPU to tap the SSD directly - effectively cutting around/through all the layers (and protection) inherent to the Windows kernel.

You can do it, but at great risk to the robustness and security of the existing driver model. I don't believe Microsoft would willingly choose that path.

But in terms of the I/O load on the CPU, there is much DirectStorage can do to help, but it's not going to be as revolutionary as people think it will be.
 
If any company were extremely bold, they would release a PC add-on PCI Express card with a hardware-level Kraken decompression engine and NVMe slots. That is not something I expect either Sony or Microsoft to do. So in the meantime, both consoles' extreme storage systems may go underutilized for cross-platform games.
 
Have a larger batch of system RAM reserved. System RAM to VRAM is faster than SSD to unified RAM, and it has lower latency.
PC games requiring 16 GB of RAM would be a very expected next-generation difference, I think.
Let's think:

Assuming a next-gen open-world map has 100 GB of data: on PS5 you can warp to any location instantly off the SSD.
On PC you need to put all 100 GB into system memory.

We can say that if we want the same degree of freedom in designing PC games, we need 64 GB of system RAM REQUIRED and 128 GB RECOMMENDED.
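A back-of-envelope sketch of the two approaches being compared here (all figures below are illustrative assumptions, including the working-set size):

```python
# To "warp anywhere" you either hold the whole map in RAM,
# or stream the destination's working set fast enough to hide the transition.

map_size_gb = 100        # total unique asset data in the open world
working_set_gb = 8       # assumed: what a single location needs resident
ssd_speed_gbps = 5.5     # PS5-class raw read speed, GB/s

# Console approach: stream just the working set on warp.
warp_fill_s = working_set_gb / ssd_speed_gbps

# Brute-force PC approach: preload everything into system RAM up front.
ram_needed_gb = map_size_gb

print(f"stream-on-warp fill time: {warp_fill_s:.2f} s")
print(f"preload-everything RAM needed: {ram_needed_gb} GB")
```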
 

Nope.
 
Assuming a next-gen open world map has 100GB data. On PS5 you can warp to any location instantly with the SSD.
On PC you need to put all of the 100GB into system memory.
Can we please not use device definitions like "PC", as they are very ambiguous. Do you mean "HDD", "current Windows 10 SSD", or "future SSD PCs contemporaneous with the PS5"?
 
PCs will have faster NVMe drives and the DirectStorage API. PCs also have GPUs with a dedicated 8 GB of video RAM in addition to general system memory; PCs have the ability to have more resources loaded at any given time.

Can we please not use device definitions like "PC", as they are very ambiguous. Do you mean "HDD", "current Windows 10 SSD", or "future SSD PCs contemporaneous with the PS5"?

If we want to design a game for PC with a 100 GB world map where we can warp anywhere in the map, we need storage with at least 3-4 GB/s of real in-game read speed. Or you need 100 GB of system RAM.

Do we have any PC SSD which has an in-game read speed of 3-4 GB/s (NOT just in a speed test)? To the best of my knowledge, even using the fastest NVMe SSD, the loading time is still quite long for PC games.
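To put numbers on that: here is how long a working set takes to read at a few "real" sustained speeds (the speeds and working-set size are assumptions for illustration, not measurements):

```python
# Time to read a given amount of data at various sustained speeds.

def load_seconds(data_gb, speed_gb_per_s):
    return data_gb / speed_gb_per_s

working_set_gb = 10  # assumed per-location working set
for name, speed in [("HDD", 0.15), ("SATA SSD", 0.5), ("fast NVMe", 3.5)]:
    print(f"{name:9}: {load_seconds(working_set_gb, speed):6.1f} s")
```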
 
Since UE5 fidelity for geometry and pixels scales with resolution for Nanite, and lighting fidelity scales with compute for Lumen, do you think there'll be a return to SLI and CrossFire setups for high-framerate PC gaming?
 
If we want to design a game for PC with a 100 GB world map where we can warp anywhere in the map, we need storage with at least 3-4 GB/s of real in-game read speed. Or you need 100 GB of system RAM.

Do we have any PC SSD which has an in-game read speed of 3-4 GB/s (NOT just in a speed test)? To the best of my knowledge, even using the fastest NVMe SSD, the loading time is still quite long for PC games.

The 5.5 GB/s supplied by Sony is the theoretical maximum sequential read speed of the hardware. The 8-9 GB/s is the same figure with a typical Kraken compression ratio applied. It's quite right to contrast those speeds with the theoretical maximum sequential read speeds of PC SSDs (providing they are sufficiently cooled to maintain those speeds), with the caveat that we know DirectStorage needs to solve several software-based issues on the PC side before those speeds will be achievable.

Real-world data transfer speeds will vary greatly from those theoretical figures on both platforms, depending on what's being read (or written).
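The raw-vs-compressed relationship above is just a multiplication; a minimal sketch (the ratios are illustrative, with ~1.64:1 being roughly what the quoted 9 GB/s figure implies):

```python
# Effective read bandwidth = raw bandwidth * compression ratio, since each
# byte read off the drive expands into `ratio` bytes after decompression.

def effective_bandwidth(raw_gbps, compression_ratio):
    return raw_gbps * compression_ratio

raw = 5.5                               # quoted PS5 raw figure, GB/s
print(effective_bandwidth(raw, 1.64))   # ~9 GB/s, the often-quoted range
print(effective_bandwidth(raw, 2.0))    # 11 GB/s at an ideal 2:1 ratio
```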
 
Yes, but GPUs deal with the common compressed texture formats natively, so they will remain compressed in the PC space as they are now.

The difference here is the full data compression that the consoles are offering on top of that, in the form of Kraken/BCPACK, which is on average no more than 2:1.
Going by the graphs and comments, Kraken achieves 3:1 compression on geometry. Without it, these geometry-heavy next-gen games could potentially balloon in size by 3x. It would also mean, depending on the format, that PS5 potentially achieves 16.5 GB/s effective bandwidth when streaming the billions of Nanite triangles.
BCPACK and Kraken are for textures. Zlib is for everything: textures, audio, geometry, probably some scripts too, etc. It's a zip file, and every game has its assets inside zip files (regardless of file extension).
Where did you hear that Kraken is only for textures? Kraken has been compared to zlib
http://www.radgametools.com/images/oodle_typical_vbar.png
and has higher compression than zlib, and is faster to decompress.
It has also been compared to LZMA, which can compress geometry quite well.
 
Going by the graphs and comments, Kraken achieves 3:1 compression on geometry. Without it, these geometry-heavy next-gen games could potentially balloon in size by 3x. It would also mean, depending on the format, that PS5 potentially achieves 16.5 GB/s effective bandwidth when streaming the billions of Nanite triangles.

Untextured triangles?
 
I imagine it loads both triangles and textures, unless it is some form of geometry texture or some other unknown format. But given that the data, in whatever format, should have a similar underlying structure (since it is the same geometry data, just in another format or shape), I imagine the geometry component would load relatively fast.

Part of the data would decompress at a rate of 16.5 GB/s, and part would decompress at a lower rate.
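That mixed-rate idea can be sketched as a read-weighted average: the blended effective rate is the raw read speed times the average compression ratio of whatever is being read. The mix fractions and ratios below are assumptions for illustration.

```python
# Blended effective bandwidth when different asset types compress differently.

def blended_effective(raw_gbps, mix):
    """mix: list of (fraction_of_raw_bytes_read, compression_ratio)."""
    assert abs(sum(f for f, _ in mix) - 1.0) < 1e-9
    return raw_gbps * sum(f * r for f, r in mix)

raw = 5.5
# e.g. 40% of bytes read are geometry at 3:1, 60% textures at 2:1:
print(blended_effective(raw, [(0.4, 3.0), (0.6, 2.0)]))  # ~13.2 GB/s
# all geometry at 3:1 recovers the 16.5 GB/s figure:
print(blended_effective(raw, [(1.0, 3.0)]))
```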
 
If we want to design a game for PC with a 100 GB world map where we can warp anywhere in the map, we need storage with at least 3-4 GB/s of real in-game read speed. Or you need 100 GB of system RAM.
We don't know that until we know how the virtualisation works. Remember, 100 objects with 4K textures requiring loads of data (~5 GB of raw 24-bit colour) can be represented with 64 MB of textures when healthily virtualised. The actual amount of geometry needed to render a frame is, in the perfect case, as many triangles as there are pixels, so about 8 million triangles for 4K. The amount to transfer will be somewhere between those 8 million triangles and all the geometry for all the objects. Until we know where that mid-point is, we don't know what minimum speed is needed from storage.
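A rough check of the figures in that post (assuming raw 24-bit RGB at 4096x4096 per object, no mips):

```python
# Raw texture footprint vs. pixels actually on screen.

def raw_texture_bytes(width, height, bytes_per_texel=3):
    return width * height * bytes_per_texel

total = 100 * raw_texture_bytes(4096, 4096)  # 100 objects, one 4K texture each
print(f"{total / 1024**3:.1f} GB raw")       # ~4.7 GB, i.e. the "~5 GB"

# Perfectly virtualised, you only need about one texel/triangle per pixel:
pixels_4k = 3840 * 2160
print(f"{pixels_4k / 1e6:.1f} M pixels")     # ~8.3 M
```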
 