Next-Generation NVMe SSD and I/O Technology [PC, PS5, XBSX|S]

Same here: there is a SIGGRAPH or GDC 2022 postmortem of the Matrix Awakens demo, and they said they use 4K textures; if someone needed a texture with better resolution, they asked the artist to use multiple 4K textures.

And outside of a few cases like hair or fur, I don't think devs will need better than 1 polygon per pixel.
 
I'll make it clear then. I was just talking about the fact that on PC these features are solved by force, but this requires a more powerful GPU, more CPU cores or more RAM so that they work with the same efficiency as on a console with fixed-function hardware.

Did you actually read my post? Again, how is Sampler Feedback on PC being "solved by force" when it is an inbuilt feature of the GPU exactly like it is in the consoles? It's literally exactly the same; there is no difference here at all.

And GPU decompression does not appreciably require "a more powerful GPU or more CPU cores or more RAM". As I stated in my previous post, it'll run perfectly fine on GPUs much slower than those in the consoles, and the majority of its use should come when the GPU shader cores are idle or have spare capacity, as the shader cores are not used 100% for 100% of the frame time. I'll grant it uses a staging buffer in VRAM, so you will lose a very small amount of video memory to this solution, but nothing significant (around 1/64th of a typical 8GB GPU).
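For anyone curious what that looks like in practice, here's a rough sketch of a DirectStorage 1.1 request using GDEFLATE so the decompression runs on the GPU, with the staging buffer set explicitly. Treat it as illustrative only: the function, path and sizes are made up, and the structure fields should be checked against the actual dstorage.h header.

```cpp
// Sketch only: DirectStorage 1.1 request that decompresses GDEFLATE data on the GPU.
// LoadCompressedAsset and its parameters are hypothetical; verify fields in dstorage.h.
#include <cstdint>
#include <dstorage.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void LoadCompressedAsset(ID3D12Device* device, ID3D12Resource* destBuffer,
                         const wchar_t* path, uint32_t compressedSize,
                         uint32_t uncompressedSize)
{
    ComPtr<IDStorageFactory> factory;
    DStorageGetFactory(IID_PPV_ARGS(&factory));

    // The VRAM cost mentioned above: a fixed staging buffer used by the runtime
    // for the GPU decompression pass.
    factory->SetStagingBufferSize(DSTORAGE_STAGING_BUFFER_SIZE_32MB);

    DSTORAGE_QUEUE_DESC queueDesc = {};
    queueDesc.SourceType = DSTORAGE_REQUEST_SOURCE_FILE;
    queueDesc.Capacity   = DSTORAGE_MAX_QUEUE_CAPACITY;
    queueDesc.Priority   = DSTORAGE_PRIORITY_NORMAL;
    queueDesc.Device     = device;
    ComPtr<IDStorageQueue> queue;
    factory->CreateQueue(&queueDesc, IID_PPV_ARGS(&queue));

    ComPtr<IDStorageFile> file;
    factory->OpenFile(path, IID_PPV_ARGS(&file));

    DSTORAGE_REQUEST request = {};
    request.Options.SourceType        = DSTORAGE_REQUEST_SOURCE_FILE;
    request.Options.DestinationType   = DSTORAGE_REQUEST_DESTINATION_BUFFER;
    request.Options.CompressionFormat = DSTORAGE_COMPRESSION_FORMAT_GDEFLATE; // GPU decode
    request.Source.File.Source        = file.Get();
    request.Source.File.Offset        = 0;
    request.Source.File.Size          = compressedSize;
    request.Destination.Buffer.Resource = destBuffer;
    request.Destination.Buffer.Offset   = 0;
    request.Destination.Buffer.Size     = uncompressedSize;
    request.UncompressedSize            = uncompressedSize;

    queue->EnqueueRequest(&request);
    queue->Submit(); // in a real app you'd also EnqueueSignal a fence to know when it lands
}
```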

How would it be free for players on PC? After all, they have to buy a PC assembled from components sold at a much higher price.

It's free because if you're a PC gamer you already have a GPU to play PC games on. You then get the new GPU decompression tech at no extra cost - hence that specific function is free.

Of course you have to buy the GPU in the first place but that's not what you were arguing. It's a given that PC hardware is more expensive than consoles at a given performance point for the reasons I've already stated in earlier posts. But you are arguing there is an extra cost to the PC gamer to enable specific features like sampler feedback and GPU decompression. Which is flat out wrong.


And one more thing. In today's world, since console and PC games are being developed together, this significantly limits the actual usability of consoles in terms of graphics quality. It is known that a game can only be released if it runs acceptably on all hardware. If they were to develop only for consoles first, and only later bring the games to PC, then what I was talking about would be even more obvious. If all the features of the Xbox Series or PS5 had been used in 2021, what percentage of the PCs on the market at that time would have been able to run these features at a sufficiently good speed...

This is complete rubbish. Are you not familiar with how scaling works on PCs? Games simply scale down to slower hardware and scale up to more capable hardware. If a feature is truly a must-have with no fallback option then you simply set the recommended specs accordingly, e.g. like Returnal requiring the gamer to have either 32GB of RAM or an SSD, or how Metro Exodus Enhanced Edition requires an RT-capable GPU.

In terms of how many PCs could "run these features", how about you give us a specific example of a "feature that PCs can't run" and we can get into details? If you're talking about fast IO, then reducing settings to accommodate slower IO or to reduce memory requirements is absolutely trivial.

If you're talking about some specific DX12U feature like Sampler Feedback, then the Steam hardware survey tells us roughly 30-35% of the market supports it, which, with a total PC gamer base of 200m (according to Nvidia as of two years ago), would equate to 60-70m PCs. I believe Sony recently announced they'd sold 30m PS5s.


I will not applaud the attitude that, according to some, it is normal to buy new hardware every two years, paying more and more to the hardware manufacturers, if you want to see better graphics...

Really taking advantage of the capabilities and possibilities of fixed hardware is what would be truly appropriate.

How is upgrading every 2 years a requirement to see better graphics in the PC space? Are the same games that get better graphics over the course of a console generation not also released on PC? Spoiler: they are.

Upgrading is a choice and will allow you to exceed the graphical improvement curve seen on the consoles should you wish thanks to the scalability of PC games.
 
That SATA solution would've run Rift Apart completely fine if the data is to be believed, especially when teamed with a more generous amount of RAM.

It's for sure not the PC that's behind.
It should be noted that each engine and game is different, and what applies to a single title does not apply to another, even if one is better optimized than the other. We've seen, over the years, a great deal of scatter across performance benchmarks due to how engines render, and this will be no different on the IO side. Yes, Rift Apart may have required only a lesser drive, but another title could access IO completely differently and maybe that lesser drive is no longer suitable. About the only thing guaranteed is that if you have IO faster than PS5's you would be able to run all of their titles for sure; with a lesser drive that may not be guaranteed, and you could run into instability at some moments.

32 GB is not useful in a console.
I would disagree with this =P
With the SSD solution in place, a 32GB PS5 would have the full 32GB to render with. That's a much more complicated and different-looking scene compared to rendering with 16GB. Yes, typically a lot of that memory is used to store textures that may never be used (in the PC space today), but in the case of consoles they could use it all, as they don't need to buffer. I would largely suspect that developers would be very happy with that much memory.
 

I am sure you will never need 32 GB of RAM with virtualized textures, and if engines begin to switch to virtual geometry like UE5, memory requirements will fall for geometry too.😉 I don't know if the next generation of consoles will need more memory because of data structures like the BVH, but for assets I think 16 GB is enough. Maybe 20 to 24 GB will be useful because of the data structures needed for rendering.

If the next generation of consoles releases in 2027/2028, I would be surprised if UE5 geometry is not fully virtualized.
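To put some rough numbers behind the virtual texturing point above: with virtual texturing, the resident texture working set scales with screen resolution rather than with source texture size. Every constant below is purely an illustrative assumption on my part (page size, layer count, over-fetch, cache headroom), not a figure from any real engine.

```cpp
// Back-of-envelope only: why virtual texturing keeps the resident texture set
// bounded by screen resolution rather than by source texture size.
// Every constant below is an illustrative assumption, not measured data.
#include <cstdio>

int main()
{
    const double pixels         = 3840.0 * 2160.0;  // 4K output
    const double texelsPerPage  = 128.0 * 128.0;    // common virtual-texture page size
    const double bytesPerTexel  = 1.0;              // BC7-style block compression
    const double materialLayers = 3.0;              // e.g. albedo + normal + mask
    const double overFetch      = 4.0;              // page borders, mips, anisotropy slack
    const double cacheHeadroom  = 8.0;              // keep old pages around to avoid thrashing

    const double pagesTouched = pixels / texelsPerPage * overFetch * materialLayers;
    const double residentMB   = pagesTouched * texelsPerPage * bytesPerTexel
                                * cacheHeadroom / (1024.0 * 1024.0);

    std::printf("~%.0f pages touched per frame, ~%.0f MB resident page cache\n",
                pagesTouched, residentMB);
    return 0;
}
```

With these (deliberately generous) assumptions it lands at roughly three-quarters of a gigabyte, independent of whether the source textures are 4K or 8K, which is why asset resolution alone doesn't force a 32 GB console.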
 

But console games don't have access to 16GB of RAM though, do they?
 

They have access to less, but it's enough. I am sure the Xbox Series and PS5 are fast enough to brute-force texture loading without virtual texturing, but that would be dumb. The UE5 demo ran on PS5 with 8K textures and crazy assets with 12.5 GB available to devs; when I say 20 to 24 GB, I don't expect developers to have access to the full memory either. And BVHs are streamed from the SSD on consoles too.


There exists no developer that I am aware of who would not gladly take more memory.

I am only speaking of assets. For RAM, I think this side of the problem is solved if an engine can push 1-polygon-per-pixel assets and 8K textures.
 
The UE5 demo ran on PS5 with 8K textures and crazy assets with 12.5 GB available to devs; when I say 20 to 24 GB, I don't expect developers to have access to the full memory either. And BVHs are streamed from the SSD on consoles too.

So? Not every game will be using UE5, and not every game will have as efficient a system as UE5's.
 

Wait a little to see whether other developers do the same thing, at least on the geometry side. I've even heard indie devs talk about 4K textures on consoles. ;) Geometry is another problem, and I don't think it will be a problem even without virtual geometry, at least for LOD 0. Virtual geometry is only part of the advantage of UE5; the micro-polygon compute rasterizer and all the stuff around LOD are great too.
 
GPUs are saturated; if they were not, games would have infinite resolution and infinite framerate. DirectStorage uses async compute, but games are already using async compute heavily... Developers will need to find a moment where they can load data, and that means less GPU power available for graphics. It will probably be minimal for most games.

I think this is the big unknown for DirectStorage GPU decompression at the moment. Naturally GPUs will be pushed to their performance limit with any given game, but that limit doesn't mean 100% of the GPU is being utilised 100% of the time. As you obviously know, a single frame will hit many different parts of the GPU at different points, with the shader cores being just one part, and in any given frame it's unlikely the shader cores will be 100% utilised for the full duration of the frame. So the question is how much capacity an in-frame data transfer will take on those cores, and whether it can be slotted around the graphics workload without any impact. If it can't, will the graphics workload be given priority, and what will the result of that be? Presumably a delayed texture load.
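Purely as an illustration of that "slot it around the graphics workload" idea (this is generic D3D12 async compute, not DirectStorage's internal implementation, and the struct below is my own sketch):

```cpp
// Illustrative pattern only: running background compute (e.g. decompression-style
// work) on a separate D3D12 compute queue so it can overlap with graphics when
// the shader cores have spare capacity.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

struct AsyncCompute
{
    ComPtr<ID3D12CommandQueue> computeQueue;
    ComPtr<ID3D12Fence>        fence;
    UINT64                     fenceValue = 0;

    void Init(ID3D12Device* device)
    {
        D3D12_COMMAND_QUEUE_DESC desc = {};
        desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;   // separate queue from graphics
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));
        device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    }

    // Kick the streaming/decompression work; the driver and hardware schedule it
    // around the graphics workload rather than carving out a fixed slice of the frame.
    void Submit(ID3D12CommandList* const* lists, UINT count)
    {
        computeQueue->ExecuteCommandLists(count, lists);
        computeQueue->Signal(fence.Get(), ++fenceValue);
    }

    // The graphics queue only waits if it actually needs the freshly streamed data
    // this frame - otherwise the texture simply shows up a frame later.
    void MakeGraphicsWait(ID3D12CommandQueue* graphicsQueue)
    {
        graphicsQueue->Wait(fence.Get(), fenceValue);
    }
};
```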
 
I am sure you will never need 32 GB of RAM with virtualized textures, and if engines begin to switch to virtual geometry like UE5, memory requirements will fall for geometry too.
hur hur hur.
Never say never =P
But when paired with the GPU power of this generation, I agree 32GB of VRAM makes no sense.
But as we move further into dynamic lighting, which opens up the world of dynamic environments and destruction, all that geometry and texturing and decals etc. could require a significant lift in memory requirements.
 
A more efficient I/O stack would have still worked wonders on a 500MB/s SATA III SSD.

Going from a 70MB/s 5400rpm HDD to a 500MB/s SSD and a more efficient I/O stack would still have been a 'next gen' update in my eyes.
But with the current solution, they can both say they have a fully forward-facing design that eliminates any potential storage bottleneck for years to come, instead of going with the bare-minimum upgrade over an HDD. And it was probably a relatively cheap thing to include, so why not. Although Sony went a bit further 😂
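Just to put rough numbers on those jumps in raw transfer time alone (the working-set size and the NVMe figure below are my own assumptions, and real load times also depend on decompression, access patterns and the I/O stack):

```cpp
// Illustrative arithmetic only: raw sequential transfer time for an assumed
// working set at nominal drive speeds. Not a benchmark.
#include <cstdio>

int main()
{
    const double workingSetMB = 8192.0;  // assumed amount of data to (re)load
    const struct Drive { const char* name; double mbPerSec; } drives[] = {
        { "~70 MB/s HDD",      70.0 },
        { "SATA III SSD",     500.0 },
        { "PS5-class NVMe",  5500.0 },
    };
    for (const Drive& d : drives)
        std::printf("%-16s ~%6.1f s\n", d.name, workingSetMB / d.mbPerSec);
    return 0;
}
```

That works out to roughly 117 s vs 16 s vs 1.5 s with these numbers, so even the SATA step alone is more than an order of magnitude.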
 
Console gamers were offered the chance to "buy new hardware by paying more to the hardware manufacturers if you want to see better graphics" last generation.

It was called PS4 Pro and Xbox One X - and both sold extremely well, indicating that, like some PC gamers, some console gamers would also upgrade to a more powerful console every couple of years if given the chance.
Most people still bought PS4 slim in the end tbh. I still feel iterative consoles like these are a waste of good development effort that could be used elsewhere.
 
I think 4 is just the standard NVMe protocol. PS5 uses a more custom solution.

Technically NVMe supports five queue levels: admin, urgent, high, medium and low. DirectStorage only refers to 4, but that might be related to what the developer has access to, with the admin queue reserved for the OS.
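For reference, the four levels on the DirectStorage side are the priorities the API exposes at queue creation; how these map onto NVMe's arbitration underneath is up to the runtime and driver, not something the app controls directly. A minimal sketch (the function and variable names are mine):

```cpp
// Sketch only: the four queue priorities exposed by the DirectStorage API.
#include <dstorage.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void CreateStreamingQueues(IDStorageFactory* factory, ID3D12Device* device,
                           ComPtr<IDStorageQueue>& bulkQueue,
                           ComPtr<IDStorageQueue>& urgentQueue)
{
    DSTORAGE_QUEUE_DESC desc = {};
    desc.SourceType = DSTORAGE_REQUEST_SOURCE_FILE;
    desc.Capacity   = DSTORAGE_MAX_QUEUE_CAPACITY;
    desc.Device     = device;

    // Background world streaming: fine to be starved briefly.
    desc.Priority = DSTORAGE_PRIORITY_LOW;       // others: NORMAL, HIGH, REALTIME
    factory->CreateQueue(&desc, IID_PPV_ARGS(&bulkQueue));

    // Player-blocking loads (teleport, cutscene): jump the line.
    desc.Priority = DSTORAGE_PRIORITY_REALTIME;
    factory->CreateQueue(&desc, IID_PPV_ARGS(&urgentQueue));
}
```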
 
And most people don't upgrade their PC every 2 years.
I didn't actually say people did 😅 I'm not in the PC vs console squabble race. Just commenting on how iterative consoles are dumb af imo

If those last-gen half-step consoles hadn't come out, I don't think most people would have been complaining about not having an iterative machine, considering how they have aged even now. And the PS5 and Series consoles would have been seen as much more attractive tbh
 