Digital Foundry Article Technical Discussion [2021]

That was noted as somewhat of an issue on PS4 specifically. I wonder what can be done to lessen that problem?
The basic problem, that the bandwidth every client needs comes out of a single shared bus, can't be helped. What was disclosed was that running the PS4's CPU and GPU together costs more bandwidth than the sum of their individual demands, due to interference between the different access patterns.

In that direction, AMD has patents and enhancements to more deeply buffer and schedule accesses to reduce the impact of interference, and the Zen cores are significantly more robust and adaptable than Jaguar. The additional fraction of bandwidth loss is hopefully smaller, though I doubt it's zero.
 
Here's the graph that Sony used to get the issue across:

[Graph: "PS4 GPU Bandwidth: 140 not 176", showing GPU bandwidth falling from the 176 GB/s peak toward roughly 140 GB/s as CPU bandwidth use increases]


It seems to be indicating that for every 1 GB/s the CPU uses, you lose 2 to 2.5 GB/s on the GPU side of things. I think it's interesting to think about what this may have meant for the Xbox One.
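
A minimal sketch of what that relationship implies. The 176 GB/s peak and the 2 to 2.5 GB/s loss factor come from the graph above; the specific CPU bandwidth figures are assumptions picked just for the example:

```python
# Minimal illustration of what the graph implies. The 176 GB/s peak and the
# 2-2.5 GB/s loss factor come from the discussion above; the CPU bandwidth
# figures below are assumptions picked for the example, not measurements.

PEAK_BW = 176.0        # GB/s, PS4 GDDR5 peak
LOSS_PER_CPU_GB = 2.5  # GB/s of GPU bandwidth lost per 1 GB/s of CPU traffic

for cpu_bw in (0, 5, 10, 15):
    gpu_bw = PEAK_BW - LOSS_PER_CPU_GB * cpu_bw
    print(f"CPU using {cpu_bw:2d} GB/s -> roughly {gpu_bw:.0f} GB/s left for the GPU")

# With the CPU pulling around 15 GB/s this lands near the ~140 GB/s figure
# the graph title refers to.
```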

XB1 has a very fast 32 MB chunk of dual-ported eSRAM, so this should mitigate the issue ... in some circumstances. The problem is that 32 MB is tiny, and for games using deferred rendering with lots of buffers you're going to have to spill out into main RAM for some high-bandwidth jobs.

And with less than 60 GB/s of effective bandwidth from main RAM, losing 2.5x the CPU's bandwidth consumption could leave the GPU a little starved.

Maybe that's a reason for some X1 games taking frame rate hits compared to PS4, even with the X1 dropping down to 900p?
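
To put rough numbers on that (purely illustrative; the 32 MB eSRAM and sub-60 GB/s DDR3 figures come from the post above, while the G-buffer layout and CPU bandwidth figure are assumptions):

```python
# Back-of-the-envelope numbers for the Xbox One case above. The 32 MB eSRAM
# size and the sub-60 GB/s effective DDR3 figure are from the post; the
# G-buffer layout and the CPU bandwidth figure are illustrative assumptions.

WIDTH, HEIGHT = 1600, 900   # 900p
GBUFFER_TARGETS = 4         # assumed deferred layout: four RGBA8 targets
BYTES_PER_PIXEL = 4
DEPTH_BYTES = 4             # assumed 32-bit depth/stencil

gbuffer_mb = WIDTH * HEIGHT * (GBUFFER_TARGETS * BYTES_PER_PIXEL + DEPTH_BYTES) / 2**20
print(f"G-buffer + depth at 900p: ~{gbuffer_mb:.0f} MB of the 32 MB eSRAM,")
print("before shadow maps, particles or any other high-bandwidth targets.")

# If main RAM behaves anything like the PS4 graph (an assumption carried over
# from the post above), CPU traffic is expensive for whatever spills to DDR3:
DDR3_EFFECTIVE = 60.0   # GB/s, rough effective figure from the post
LOSS_PER_CPU_GB = 2.5
CPU_BW = 10             # GB/s, illustrative
print(f"GPU share of DDR3 with the CPU at {CPU_BW} GB/s: "
      f"~{DDR3_EFFECTIVE - LOSS_PER_CPU_GB * CPU_BW:.0f} GB/s")
```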
 
It seems to be indicating that for every 1 GB/s the CPU uses, you lose 2 to 2.5 GB/s on the GPU side of things. I think it's interesting to think about what this may have meant for the Xbox One.
This is the impact of bus arbitration. This is what happens when you have any two devices sharing a single RAM pool while running at clocks that can only be indirectly reconciled. The CPU is a slow arse: every access it makes to RAM deprives the GPU of a disproportionate amount of RAM access.

The server solution is dual-ported RAM and a multi-bus interface, but then your next problem is cache coherence.
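
A toy model of that arbitration cost, just to show why the GPU loses more than the raw bytes the CPU moves. This is explicitly not how a real memory controller behaves (real controllers reorder and batch requests, so the true penalty is much smaller); the numbers are made up:

```python
# Toy model of a single bus serving two clients. The GPU issues long, friendly
# bursts; the CPU issues short, scattered accesses, and every switch between
# the two streams burns cycles on overhead (row precharge/activate, bus
# turnaround), so the GPU loses more bandwidth than the CPU actually consumes.

BUS_BYTES_PER_CYCLE = 32
SWITCH_PENALTY_CYCLES = 20   # assumed cost of interrupting a GPU burst
CPU_ACCESS_BYTES = 64        # one cache line

def gpu_share(cpu_accesses_per_1000_cycles):
    cpu_cost = cpu_accesses_per_1000_cycles * (
        CPU_ACCESS_BYTES / BUS_BYTES_PER_CYCLE + 2 * SWITCH_PENALTY_CYCLES)
    return max(1000.0 - cpu_cost, 0.0) / 1000.0

for n in (0, 2, 4, 8):
    cpu_fraction = n * CPU_ACCESS_BYTES / (1000 * BUS_BYTES_PER_CYCLE)
    print(f"{n} CPU accesses per 1000 cycles "
          f"({cpu_fraction:.1%} of raw bytes) -> GPU keeps {gpu_share(n):.0%} of the bus")
```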
 
Are games guaranteed to be optimized for streaming textures on the next-gen consoles, or do we also have situations still where multi-platform releases are basically PC code ported to consoles with no streaming optimizations?
 
Are games guaranteed to be optimized for streaming textures on the next-gen consoles, or do we also have situations still where multi-platform releases are basically PC code ported to consoles with no streaming optimizations?
What do you mean? If a game uses a texture streamer, it is "optimised" around the amount of memory available by changing the cache size.
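
A minimal sketch of what that means in practice: the streamer is handed a memory budget and sizes its resident pool from it, evicting the least recently used entries when full. The class and figures below are illustrative, not any particular engine's implementation:

```python
from collections import OrderedDict

class TextureStreamingPool:
    """Illustrative LRU-style resident pool sized by a memory budget."""

    def __init__(self, budget_bytes):
        self.budget = budget_bytes
        self.used = 0
        self.resident = OrderedDict()   # texture_id -> size in bytes

    def request(self, texture_id, size_bytes):
        """Called when a texture (or mip level) is needed for rendering."""
        if texture_id in self.resident:
            self.resident.move_to_end(texture_id)   # mark as recently used
            return
        # Evict least recently used entries until the new texture fits.
        while self.used + size_bytes > self.budget and self.resident:
            _, evicted = self.resident.popitem(last=False)
            self.used -= evicted
        self.resident[texture_id] = size_bytes      # real engine: kick off an async load here
        self.used += size_bytes

# "Optimising" per platform is then largely a matter of the budget handed in:
console_pool = TextureStreamingPool(budget_bytes=3 * 1024**3)  # e.g. 3 GB on console
pc_pool = TextureStreamingPool(budget_bytes=6 * 1024**3)       # e.g. 6 GB on a big GPU
```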
 
Tools arrived for PlayStation ;)
That's interesting. I'd be curious to see if the PS5 now runs the infamous death corridor scene in Control slightly better with the new firmware. This was the most demanding scene in Control AFAIK. The framerate improvements seem to be larger in the low-framerate scenes.
 
Are games guaranteed to be optimized for streaming textures on the next-gen consoles, or do we also have situations still where multi-platform releases are basically PC code ported to consoles with no streaming optimizations?
DirectStorage is coming on PC, and I believe an SSD is required.
So that shouldn't be a concern, and I think it's more than reasonable for games that require that level of streaming to make an SSD the minimum spec now anyway.
 
I think Alex is highly optimistic hoping for 3080/6800 XT-ish performance at $400 next year. MSRP may be a possibility, but real-world pricing I just can't see happening.
Like I said, it is a wish. We had it for Xbox 360/PS3 and for XB1/PS4... might be nice to have it here as well.
 
DirectStorage is coming on PC, and I believe an SSD is required.
So that shouldn't be a concern, and I think it's more than reasonable for games that require that level of streaming to make an SSD the minimum spec now anyway.

But I mean right now. When we look at a port of a PC title to PS5 or Xbox, are we guaranteed they don’t just load textures into memory the old way? And are we guaranteed they are flushing textures when not needed because they can be loaded back so fast?

To be honest, it doesn’t seem all that likely that this is guaranteed to be optimized for all titles on next-gen consoles?
 
But I mean right now. When we look at a port of a PC title to PS5 or Xbox, are we guaranteed they don’t just load textures into memory the old way? And are we guaranteed they are flushing textures when not needed because they can be loaded back so fast?

To be honest, it doesn’t seem all that likely that this is guaranteed to be optimized for all titles on next-gen consoles?

Why do we need this? PC has the advantage of RAM, which is way faster than any SSD now or in the near term, if not the long-term future. So why wouldn't PC developers leverage all of it: direct IO, faster SSD speeds, and RAM quantity? We are going to see everyone transition to DDR5 on the PC side soon anyway, which should bring up capacity and bandwidth at the same time.
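
For a rough sense of the gap being described (theoretical peak figures, dual-channel for the DDR numbers; sustained real-world bandwidth is lower):

```python
# Rough peak-bandwidth comparison behind the point above. Theoretical peaks,
# dual-channel for the DDR figures; sustained real-world numbers are lower.

tiers = [
    ("DDR4-3200, dual channel", 2 * 3200 * 8 / 1000),  # ~51 GB/s
    ("DDR5-4800, dual channel", 2 * 4800 * 8 / 1000),  # ~77 GB/s
    ("PCIe 4.0 x4 NVMe SSD",    7.0),                  # ~7 GB/s sequential
    ("PS5 internal SSD (raw)",  5.5),
]
for name, gb_per_s in tiers:
    print(f"{name:26s} ~{gb_per_s:5.1f} GB/s")
```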
 
My comments were in the context of whether consoles are memory limited vs PC for textures and bandwidth. And I am asking if current titles are really indicative of that bottleneck or whether the bottleneck on consoles is potentially lower.

Once PC uses DirectStorage etc then of course it won’t matter and eventually the PC will be able to outperform the current consoles easily, but at the moment I still suspect current console releases aren’t always optimized for texture streaming with the best use of memory and bandwidth.
 
But I mean right now. When we look at a port of a PC title to PS5 or Xbox, are we guaranteed they don’t just load textures into memory the old way? And are we guaranteed they are flushing textures when not needed because they can be loaded back so fast?

To be honest, it doesn’t seem all that likely that this is guaranteed to be optimized for all titles on next-gen consoles?

Not all games want or need to use streaming textures. Something like Twelve Minutes, for example, doesn't need to stream in textures. Not the greatest example since the graphics aren't out of this world, but they could be if the developer had the budget for it. Heck, entire genres (like fighting games) don't really need streaming textures.

What will be interesting to see this generation is whether larger open-world games that rely on streaming (HZD/Spiderman type games) will be able to approach the graphics quality of games built around preloaded levels (with smallish levels, like GoW 2016).

Assuming that a studio has a large enough budget for the required asset creation, streaming games should have a large advantage with regards to variations in assets with any given scene.

But here we run into the biggest limitation most development studios face: time and budget. To fully realize the potential for streaming with the new I/O, studios are, IMO, going to need a relatively large bump in time and budget compared to the previous generation in order to create all the assets for much more detailed and diverse worlds that can really show what the new I/O can do. Alternatively, perhaps some type of AI-driven automatic or semi-automatic asset creation could help (which, again, is likely out of reach of all but the largest dev studios).

Although, counterintuitively, to fully realize the potential of streaming I/O, developers may have to adopt new techniques where less work is done per pixel, so that memory isn't dominated by per-pixel working data and more assets can be flushed and loaded per second. Something like variable rate shading, for example. I'm sure developers will come up with more techniques to reduce work per pixel over the generation.
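
To give a sense of the scale involved (all figures below are illustrative assumptions, not measurements from any engine):

```python
# Rough numbers for the trade-off described above.

SSD_BW_MBPS = 5500   # ~PS5-class raw throughput
FRAME_S = 1 / 30     # 30 fps frame time

print(f"Data the drive can deliver in one 30 fps frame: ~{SSD_BW_MBPS * FRAME_S:.0f} MB")

# Per-pixel working memory at 4K: every extra full-resolution buffer is memory
# that can no longer hold streamed assets.
PIXELS_4K = 3840 * 2160
for buffers in (4, 8, 12):   # assumed counts of full-res 32-bit render/history targets
    mb = PIXELS_4K * 4 * buffers / 2**20
    print(f"{buffers:2d} full-res 32-bit buffers at 4K: ~{mb:.0f} MB unavailable for assets")
```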

So, in short, some developers will be able to exploit the benefits of increased I/O but I'm not sure whether any but the largest development houses will be able to show significant improvements in graphics quality due to the faster I/O. For development studios with a limited budget, they may be just as well off with pre-load systems as they would be with streaming systems. Heck, some might be better off with pre-load systems as they are inherently less complex than a streaming system.

Regards,
SB
 
My comments were in the context of whether consoles are memory limited vs PC for textures and bandwidth. And I am asking if current titles are really indicative of that bottleneck or whether the bottleneck on consoles is potentially lower.

Once PC uses DirectStorage etc then of course it won’t matter and eventually the PC will be able to outperform the current consoles easily, but at the moment I still suspect current console releases aren’t always optimized for texture streaming with the best use of memory and bandwidth.
Just going to echo Silent Buddha here; we aren’t likely to be entering a stage where virtual textures are used for every title.

Undoubtedly, there will always be titles that won’t use them.

But you are correct; many titles are not yet optimized to take full advantage of IO streaming. But as @Dictator said earlier, optimization for IO really could in some ways boil down to reducing the size of the streaming pool in VRAM.
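
A sketch of what "reducing the size of the streaming pool" can mean in practice: the resident pool has to hold what is being sampled right now plus a margin covering everything that could come into view before the IO system can deliver it, and faster IO shrinks that margin. All numbers here are illustrative assumptions:

```python
def pool_size_mb(working_set_mb, demand_mb_per_s, io_response_s):
    """Working set plus the data that can be demanded while a fetch is in flight."""
    return working_set_mb + demand_mb_per_s * io_response_s

WORKING_SET_MB = 1500    # assumed: texture data actually sampled in the current view
DEMAND_MB_PER_S = 600    # assumed: worst-case rate of new data as the camera moves

for label, response_s in (("HDD-era pipeline", 2.0),
                          ("SATA SSD", 0.25),
                          ("NVMe + new IO stack", 0.05)):
    print(f"{label:20s}: resident pool of "
          f"~{pool_size_mb(WORKING_SET_MB, DEMAND_MB_PER_S, response_s):.0f} MB")
```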
 

Summary
Can a 3,200 MB/s SSD run Ratchet & Clank?
  • Yes
  • No difference by human standards; data points fall within a margin of error of 1 fps

It's too bad there aren't any 2.4 GB/s NVMe drives that will work in the PS5 that could be tested.

So, TL;DR is that basically any NVMe drive that the PS5 will accept will work just fine for gameplay, regardless of speed, for any game currently released. The only place where you'll notice a difference in the speed of the drive is when loading the game or loading levels.
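
A quick sense of how much that remaining difference amounts to at load time. The amount of data read per load is an assumed figure for illustration, and decompression and CPU setup usually dominate, so the visible gap is smaller than these raw numbers:

```python
LOAD_DATA_GB = 8.0   # assumed data read for a full game/level load

for name, gb_per_s in (("PS5 internal drive (5.5 GB/s raw)", 5.5),
                       ("Fast add-on NVMe (7.0 GB/s)", 7.0),
                       ("3.2 GB/s drive from the poll above", 3.2),
                       ("2.4 GB/s drive (none usable in a PS5)", 2.4)):
    print(f"{name}: ~{LOAD_DATA_GB / gb_per_s:.1f} s of raw read time")
```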

It'll be interesting to see when or if that will change in the future. Or whether it's so difficult to leverage higher than X amount of I/O bandwidth in actual gameplay that this ends up being representative of games going forward.

Good news for PS5 owners currently, however. Just get the cheapest available NVMe drive that will work in the PS5, regardless of speed, and you're likely just fine. Just be aware that things might (or might not) change in the future.

Regards,
SB
 
It's too bad there aren't any 2.4 GB/s NVMe drives that will work in the PS5 that could be tested.

So, TL;DR is that basically any NVMe drive that the PS5 will accept will work just fine for gameplay, regardless of speed, for any game currently released. The only place where you'll notice a difference in the speed of the drive is when loading the game or loading levels.

It'll be interesting to see when or if that will change in the future. Or whether it's so difficult to leverage higher than X amount of I/O bandwidth in actual gameplay that this ends up being representative of games going forward.

Good news for PS5 owners currently, however. Just get the cheapest available NVMe drive that will work in the PS5, regardless of speed, and you're likely just fine. Just be aware that things might (or might not) change in the future.

Regards,
SB
I don't see this changing much. The increase in IO speed helps reduce VRAM pressure when it comes to caching things not in view. There is a hard floor here: you cannot reduce a texture pool any further than what needs to be rendered exactly on screen. To budget for the large number of events that can happen over the course of the game, developers are likely to set a high budget that they know the game should not cross, with typical gameplay staying well under it. So if that particular budget is large enough, various slower drives should be able to keep up.

Pre-fetch and caching should always exist in games; both drives are too slow to be able to stream large amounts directly from disk on demand.
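
A sketch of that budgeting argument. If the pool is budgeted well above the on-screen floor, the drive only needs to keep the margin topped up, so fairly slow drives stay ahead of demand; the numbers are illustrative assumptions:

```python
POOL_BUDGET_MB = 3000      # assumed streaming pool budget
ON_SCREEN_FLOOR_MB = 1800  # assumed hard floor: what the current view needs
DEMAND_MB_PER_S = 800      # assumed worst-case rate at which new data is needed

margin_mb = POOL_BUDGET_MB - ON_SCREEN_FLOOR_MB

for name, drive_mb_per_s in (("2,400 MB/s drive", 2400),
                             ("3,200 MB/s drive", 3200),
                             ("5,500 MB/s drive", 5500),
                             ("~100 MB/s HDD, for contrast", 100)):
    if drive_mb_per_s >= DEMAND_MB_PER_S:
        print(f"{name}: sustains worst-case demand; the margin only absorbs latency spikes")
    else:
        drained_in_s = margin_mb / (DEMAND_MB_PER_S - drive_mb_per_s)
        print(f"{name}: margin of {margin_mb} MB drains in ~{drained_in_s:.1f} s of worst-case demand")
```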
 