PlayStation 5 [PS5] [Release: November 12, 2020]

Cache scrubbers, yes. Fast I/O is a bandwidth consumer.
Very fast I/O means you don't need to keep the less latency-sensitive data in system RAM; you can just stream it in as you go. That means less juggling of data inside RAM, so fewer access requests and therefore more available bandwidth.

I guess fast I/O could be a bandwidth consumer for RAM if there were no caches dedicated to it, but Cerny claimed there's "a lot" of SRAM in the I/O block. The diagrams don't show the I/O block as a client of the main system RAM; in fact, it's shown as a one-way street.

 
Very fast I/O means you don't need to keep the less latency-sensitive data in system RAM; you can just stream it in as you go. That means less juggling of data inside RAM, so fewer access requests and therefore more available bandwidth.

What do you mean by this? And "streaming as you go" inherently consumes bandwidth.
 
The last snippet, where he says we could end up with the same loading times as today because of the greater amount of data next-gen games could use, doesn't make any sense to me.

Surely the biggest load possible is 16 GB, and the PS5 will do that in probably under 3 seconds?

Unless I'm missing something.

It'll do that if the data is optimized for fast loading on the NVMe drive. Load times are notoriously poorly optimized in games. I think his point is that if the raw speed of the NVMe is much faster, then devs still won't have an incentive to optimize the layout; they'll just rely on the raw speed of the NVMe to take care of it. So as assets and content grow, we'll be back where we started. Just remember that those quoted numbers we keep seeing are sequential reads, not random reads.

Edit: Not to mention the way the reads from the drive are actually structured to load things into memory.
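The sequential-vs-random distinction can be put into rough numbers. A minimal sketch, where both throughput figures are illustrative assumptions (the 5.5 GB/s is the quoted raw sequential number; the scattered-read figure is made up for comparison):

```python
# Back-of-the-envelope load-time model: the same 16 GB of assets can load
# at very different speeds depending on how the reads are structured.
# Throughput figures are illustrative assumptions, not measurements.

def load_time_seconds(data_gb: float, throughput_gbps: float) -> float:
    """Time to read `data_gb` gigabytes at `throughput_gbps` GB/s."""
    return data_gb / throughput_gbps

DATA_GB = 16.0       # full RAM fill, worst case
SEQ_GBPS = 5.5       # quoted raw sequential-read figure
SCATTERED_GBPS = 1.5 # assumed throughput for a poorly laid-out, scattered access pattern

print(f"sequential: {load_time_seconds(DATA_GB, SEQ_GBPS):.1f} s")     # ~2.9 s
print(f"scattered:  {load_time_seconds(DATA_GB, SCATTERED_GBPS):.1f} s")  # ~10.7 s
```

The exact scattered-read penalty depends on the drive and the data layout, but the gap is why "optimizing for the NVMe" still matters even at these raw speeds.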
 
Indeed. If you're loading 50x faster, to have the same long load times you'd need 50x as much data. Games counted in the terabytes? I don't think so!

And not only that: if your game is really able to load at 8 GB/s, you need less than 2 seconds to fill the memory dedicated to the game. This will not be the bottleneck.
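The arithmetic above checks out directly. A quick sketch, noting that the game-visible RAM figure is an assumption (some of the 16 GB is reserved for the OS):

```python
# Checking the claim: at ~8 GB/s effective (compressed) throughput,
# filling the game-accessible portion of RAM takes under 2 seconds.
# The 13.5 GB game-visible figure is an assumption for illustration.

TOTAL_RAM_GB = 16.0
GAME_VISIBLE_GB = 13.5   # assumed; part of RAM is reserved for the OS
EFFECTIVE_GBPS = 8.0     # raw 5.5 GB/s plus typical decompression gains

fill_time = GAME_VISIBLE_GB / EFFECTIVE_GBPS
print(f"{fill_time:.2f} s")  # ~1.69 s, i.e. under 2 seconds
```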
 
What do you mean by this? And "streaming as you go" inherently consumes bandwidth.
Whatever operations you can handle with the 5.5 GB/s bandwidth (with relatively high latency), you will, without having to place an access request to the system RAM.
 
Whatever operations you can handle with the 5.5 GB/s bandwidth (with relatively high latency), you will, without having to place an access request to the system RAM.
SSD directly into GPU registers? Seems unlikely. There's decompression work as well. I think you'll see it still end up in memory before going to the GPU for work. As stated in the articles, there's enough time for assets to be called, loaded into memory, and processed within the same frame.

I don’t think there is enough consistency in the SSD to support direct to register. You’ll stall the GPU waiting for textures if latency on random access is bad.
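The "within the same frame" point can be sanity-checked with simple arithmetic. A sketch using the quoted raw sequential figure and assumed frame rates:

```python
# How much data can land in RAM within a single frame? At 5.5 GB/s,
# even one frame's budget moves a substantial chunk of assets, which
# can then be requested, loaded into RAM, and handed to the GPU --
# no need for (inconsistent-latency) SSD-to-register reads.

SSD_GBPS = 5.5  # quoted raw sequential throughput

for fps in (30, 60):
    frame_s = 1.0 / fps
    mb_per_frame = SSD_GBPS * 1000 * frame_s
    print(f"{fps} fps: ~{mb_per_frame:.0f} MB loadable per frame")
```

At 30 fps that's roughly 180 MB per frame; at 60 fps roughly half that. Plenty for per-frame texture streaming through RAM, while a single stalled random read would be catastrophic for a direct-to-register path.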
 
It'll do that if the data is optimized for fast loading on the NVMe drive. Load times are notoriously poorly optimized in games. I think his point is that if the raw speed of the NVMe is much faster, then devs still won't have an incentive to optimize the layout; they'll just rely on the raw speed of the NVMe to take care of it. So as assets and content grow, we'll be back where we started. Just remember that those quoted numbers we keep seeing are sequential reads, not random reads.

I really doubt there will be any minute-long loads next gen.
Surely, if they're moving more data as suggested because of the NVMe drives, that data will be optimised for said drives.

I still don't get how they will optimise loading for NVMe drives? What does that entail?
 
I really doubt there will be any minute-long loads next gen.
Surely, if they're moving more data as suggested because of the NVMe drives, that data will be optimised for said drives.

I doubt it as well, but load times are already notoriously unoptimized in games right now. I think he was just suggesting that devs who don't care to optimize loading now won't optimize it later, especially if the drive can give them a huge improvement without effort. I do think that once people experience games that load near-instantly, they'll complain about the loading times of slower ones, and devs will be incentivized to improve.
 
There's plenty of gray area.

Such as?

We don't know how low the clocks will drop during worst-case scenarios, or what exactly a worst-case scenario means. That's all I can see as grey areas, but that's the thing: it's hard to predict what those worst-case scenarios will be down the line.

The CPU and the GPU can hit max clocks simultaneously.
SmartShift can move power from the CPU to the GPU if its current workload isn't power-intensive, which doesn't mean it's not running at max clocks.
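That split can be sketched as a toy model. All the watt figures below are made-up assumptions; the point is only that power moves between CPU and GPU based on workload, independently of whether either is at its maximum clock:

```python
# Toy model of a SmartShift-style fixed combined power budget.
# Wattage values are invented for illustration, not real PS5 numbers.

TOTAL_BUDGET_W = 200.0  # assumed combined CPU+GPU budget

def split_budget(cpu_demand_w: float) -> tuple[float, float]:
    """Give the CPU what its workload demands; the GPU gets the rest."""
    cpu_w = min(cpu_demand_w, TOTAL_BUDGET_W)
    return cpu_w, TOTAL_BUDGET_W - cpu_w

# Light CPU workload (possibly still at max clock): headroom shifts to the GPU.
print(split_budget(40.0))  # (40.0, 160.0)
# Heavy CPU workload: the GPU's share shrinks.
print(split_budget(90.0))  # (90.0, 110.0)
```

Clock speed and power draw aren't the same thing: a CPU running light work at max clock draws less power than one running AVX-heavy work at the same clock, which is why both chips can sit at max clocks while power still shifts between them.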
 
Like you, I have trouble rectifying some of what has been said, to be completely honest - I think that puzzlement is reflected in the article and video put out yesterday. Gotta wait for games, or see what else devs can say over time, I guess.
There's plenty of gray area.
If any of you are confused, there are people here to discuss what you find confusing.

If you claim there are contradictions, you can quote the exact words from Cerny to discuss it, otherwise it's willful confusion and nobody will learn anything or change their mind about anything. Selective paraphrasing is not helpful, it always adds to the confusion.
 
If any of you are confused, there are people here to discuss what you find confusing.

If you claim there are contradictions, you can quote the exact words from Cerny to discuss it, otherwise it's willful confusion and nobody will learn anything or change their mind about anything. Selective paraphrasing is not helpful, it always adds to the confusion.
I don't think Dictator is referring to him being confused on how the technology works. He said: "I have trouble rectifying some of what has been said to be completely honest"

That's not the same as not understanding what someone is saying. That's a he-said, they-said problem. But for obvious reasons he's not going to say more on the subject, because it's entirely unprofessional to spread rumours about an incomplete, unreleased system. The expectation is that any issues will be fixed for release.

They will wait it out. If there are issues and they are resolved at launch, then this is a nothing burger. If there are issues and it's apparent, there will be an article.
 
Just simplify it: they couldn't maintain 2 GHz and 3 GHz; with SmartShift they now maintain 2.23 GHz / 3.5 GHz. Variable clocks shouldn't have had to be mentioned, since that will basically never happen, only in extreme cases, probably even less often than before SmartShift.
 
Such as?

We don't know how low the clocks will drop during worst-case scenarios, or what exactly a worst-case scenario means. That's all I can see as grey areas, but that's the thing: it's hard to predict what those worst-case scenarios will be down the line.

The CPU and the GPU can hit max clocks simultaneously.
SmartShift can move power from the CPU to the GPU if its current workload isn't power-intensive, which doesn't mean it's not running at max clocks.
We don't know what the real trade-off will be between the CPU and GPU clocks. There have only been vague statements explaining those scenarios. There's nothing confusing about what Sony is doing, only confusion about what trade-offs are actually being made. Like Dictator said, we'll have to wait and hear from devs.
 