PlayStation 5 [PS5] [Release: November 12, 2020]

We don't know what the real trade-off will be between the CPU and GPU clocks; there have only been vague statements about those scenarios. There's nothing confusing about what Sony is doing, only confusion about what trade-offs are actually being made. Like Dictator said, we'll have to wait and hear from devs.

It's vague because it's hard to predict with 100% certainty what devs will do 2-5 years from now.

Dictator should say what he's having trouble reconciling, unless he can't because of NDAs or an embargo.
 
Indeed. If you're loading 50x faster, to have the same long load times you'd need 50x as much data. Games counted in the terabytes? I don't think so!
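The arithmetic behind that point is easy to check. A back-of-envelope sketch, using Sony's quoted 5.5 GB/s raw SSD figure and an assumed ~100 MB/s for a base-console HDD (the HDD number is a rough illustrative figure, not an official spec):

```python
# Back-of-envelope load-time comparison. HDD throughput is an assumed,
# illustrative figure; the SSD figure is Sony's quoted 5.5 GB/s raw rate.
HDD_MBPS = 100           # rough effective throughput of a console HDD
SSD_MBPS = 5500          # PS5's quoted raw SSD throughput

speedup = SSD_MBPS / HDD_MBPS           # 55x faster
load_8gb_hdd_s = 8000 / HDD_MBPS        # 80 seconds to read 8 GB
load_8gb_ssd_s = 8000 / SSD_MBPS        # ~1.45 seconds for the same read

# To keep an 80-second load at SSD speeds, you'd need 55x the data:
data_for_same_wait_gb = 8 * speedup     # 440 GB in a single load
print(speedup, load_8gb_hdd_s, round(load_8gb_ssd_s, 2), data_for_same_wait_gb)
```

So an equally long load would mean reading hundreds of gigabytes at once, hence "games counted in the terabytes" being implausible.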

My guess would be that this is because the initial assumption was a standard SSD solution, not something as fast and streamlined as the end product?
 
It's vague because it's hard to predict with 100% certainty what devs will do 2-5 years from now.

Dictator should say what he's having trouble reconciling, unless he can't because of NDAs or an embargo.
Why is it that even Digital Foundry is not quite certain what the trade-offs are going to look like, yet you are? It's fine if you want to believe the best-case scenario, but don't be upset that some people are skeptical, and there are plenty of people who are...
 
Why is it that even Digital Foundry is not quite certain what the trade-offs are going to look like, yet you are? It's fine if you want to believe the best-case scenario, but don't be upset that some people are skeptical, and there are plenty of people who are...

Well, they should be less vague about their misgivings so I can be enlightened, then.

Who said I was upset? This whole thing where questioning people now all of a sudden means you're upset, what's that about?

I never said I was certain about the trade-offs, as in how much the clock will drop during certain workloads.

And this "even Digital Foundry", what does that mean? Good Lord, I've seen them use posters from NeoGAF as sources back in the day.

I'm not saying they aren't probably the best and most reliable tech-focused outlet covering games, but if he has misgivings, he should voice them so we can be educated, or maybe someone here smarter than me can educate them.
 
SSD directly into GPU registers? Seems unlikely.
I doubt this very highly.

The GPU isn't the only client for the GDDR6 pool.
If the CPU can pull SSD data directly into its L2/L3 caches and work from there (say, for O.S. tasks, or even for the GPU with e.g. screenshots and video recording), it'll issue fewer requests to the main memory pool.

And we know that on APUs, the CPU competing with the GPU for main-memory access causes a disproportionate hit to effective bandwidth:


[image: chart of effective APU memory bandwidth under combined CPU+GPU load]
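The "disproportionate" part can be illustrated with a toy shared-bus model: interleaving two request streams costs extra bus-turnaround/page-miss cycles, so the GPU loses more bandwidth than the CPU's share alone would suggest. The peak figure is PS5's quoted GDDR6 bandwidth; the penalty factor is an invented illustration, not a measured number:

```python
# Toy model of a shared memory bus. When CPU and GPU request streams
# interleave, extra turnaround/page-miss cycles are wasted, so total
# useful bandwidth drops below peak. Penalty factor is illustrative only.
PEAK_GBPS = 448.0          # PS5's quoted GDDR6 bandwidth
TURNAROUND_PENALTY = 0.15  # assumed fraction of cycles lost to interleaving

def effective_bandwidth(cpu_share):
    """Useful bandwidth left for the GPU, given the CPU's demand share."""
    # The penalty only applies when both clients are active on the bus.
    contention = TURNAROUND_PENALTY if 0 < cpu_share < 1 else 0.0
    usable = PEAK_GBPS * (1 - contention)
    return usable * (1 - cpu_share)

print(effective_bandwidth(0.0))   # GPU alone: full peak
print(effective_bandwidth(0.1))   # CPU takes 10% -> GPU loses ~23%, not 10%
```

Under this (assumed) penalty, a 10% CPU demand costs the GPU roughly a quarter of its bandwidth, which is the shape of effect the chart shows.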
 
I would imagine the only people "unhappy" with either machine would be junior devs, and even then I'm at a loss as to why anyone would have complaints about either machine. I'd have to imagine every mid/senior dev will go with the flow, same as any other generation. Of course, this is just me looking from the outside, so I could be off, but...
 
The GPU isn't the only client for the GDDR6 pool.
If the CPU can pull SSD data directly into its L2/L3 caches and work from there (say, for O.S. tasks, or even for the GPU with e.g. screenshots and video recording), it'll issue fewer requests to the main memory pool.

And we know that on APUs, the CPU competing with the GPU for main-memory access causes a disproportionate hit to effective bandwidth:



I wouldn't think the CPU would take well to massively increased latency, and I'd expect the SSD to already be quite busy using all of its available bandwidth to move data into RAM (the thing it's actually designed for). This seems like a bad idea, even if it were technically possible.
 
It's vague because it's hard to predict with 100% certainty what devs will do 2-5 years from now.

Dictator should say what he's having trouble reconciling, unless he can't because of NDAs or an embargo.
Definitely not wondering the things I'm wondering because of an NDA or embargo.
If any of you are confused, there are people here to discuss what you find confusing.

If you claim there are contradictions, you can quote the exact words from Cerny to discuss it, otherwise it's willful confusion and nobody will learn anything or change their mind about anything. Selective paraphrasing is not helpful, it always adds to the confusion.
My perplexity is probably best described as wondering about a lot of unknowns.

What were the load scenarios that made 2.0 GHz on the GPU and 3.0 GHz on the CPU with static power "hard to attain"? What are the load scenarios that make 2.23 GHz/3.5 GHz with dynamic power apparently more stable?
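For context on why a power-based scheme buys headroom at all: CMOS dynamic power scales roughly as f·V², and voltage usually has to rise with frequency, so a small clock drop frees a disproportionate amount of power. A rough sketch; the voltage points are invented for illustration, since real V/f curves aren't public:

```python
# Rough CMOS dynamic-power scaling: P ~ C * f * V^2 (capacitance constant).
# The voltage values below are assumptions for illustration only.
def rel_power(f_ghz, v):
    """Relative dynamic power at frequency f_ghz and voltage v."""
    return f_ghz * v ** 2

p_cap  = rel_power(2.23, 1.00)   # GPU at its frequency cap, nominal voltage
p_down = rel_power(2.00, 0.90)   # ~10% clock drop with an assumed voltage drop

saving = 1 - p_down / p_cap
print(round(saving, 2))          # ~0.27: roughly 27% less power for ~10% less clock
```

That asymmetry is presumably why Cerny argued a "couple of percent" frequency drop handles the worst-case power spikes.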

I wonder about certain types of games that we know exist, like those with a free-floating dynamic resolution that sits below the top-end bound nearly all the time, like Call of Duty: Modern Warfare or many other late-gen games. What does a GPU load like that do under this system? It maxes the GPU the entire time and, in my experience, generates a lot of heat, e.g. using "insane" settings targeting 4K on PC in Gears of War 5. I imagine GPU power draw/utilisation there would genuinely be near 98-100% the whole time, in spite of something like a 60 fps cap, and the variability of load would then come from what is happening in the game on the CPU (which will differ from frame to frame).

Or I wonder about games that genuinely max out both resources; Ubisoft games are notorious for this (they use dynamic-resolution reconstruction on the GPU and tend to be CPU monsters).

I wonder what happens with certain performance profiles we see in certain games, not just those with static resolutions, vsync, or cross-gen targets.
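A dynamic-resolution controller of the kind described (always floating below the top bound, keeping the GPU saturated) is conceptually simple. A minimal sketch of the feedback loop, not any shipping game's actual logic; the budget and clamp values are assumptions:

```python
# Minimal dynamic-resolution controller: scale render resolution so the
# GPU frame time tracks the frame budget. Purely illustrative.
BUDGET_MS = 16.6             # 60 fps frame budget
MIN_SCALE, MAX_SCALE = 0.6, 1.0

def next_scale(scale, gpu_ms):
    """Pick the next resolution scale from the last GPU frame time."""
    # GPU cost is roughly proportional to pixel count, i.e. scale^2,
    # so the proportional correction is the square root of the ratio.
    correction = (BUDGET_MS / gpu_ms) ** 0.5
    return max(MIN_SCALE, min(MAX_SCALE, scale * correction))

s = next_scale(1.0, 20.0)    # over budget -> drop resolution
print(round(s, 3))
s = next_scale(s, 15.0)      # back under budget -> creep upward
print(round(s, 3))
```

The relevant point for the boost discussion is that this loop never lets the GPU idle: it always converts spare frame time into more pixels, so utilisation (and power draw) stays pinned near the maximum.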
 
The GPU isn't the only client for the GDDR6 pool.
If the CPU can use the SSD data directly into its L2/L3 caches and work from there (say, O.S. tasks, or even the GPU with e.g. screenshots and video recording), then it'll cause a lower amount of requests to the main memory pool.

And we know that on APUs the CPU competing with the GPU for main memory access requests causes a disproportional blow to effective bandwidth:

I get the need to address the memory issues that have cropped up in the past, but that may have been an issue with the memory controller, not just the fact that both were requesting data at the same time. From a software-development perspective, I have huge reservations about using the SSD like a RAM drive. From a hardware perspective, I'd have the same reservations, because we don't know its random-access speeds or its latency, and the more you use it like a RAM drive, the more I'd be concerned about heat.

We don't know the thermal characteristics of this SSD, but I can only imagine it's going to run hotter than current PC counterparts, just looking at the raw throughput numbers. So that, combined with a highly clocked SoC running at maximum power, leaves me with some reservations about how that SSD can be used.
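The latency gap behind that reservation is worth putting in numbers. With ballpark assumed figures (DRAM around 100 ns, a fast NVMe read around 10 µs), a 3.5 GHz core stalls for wildly different cycle counts depending on where the data lives:

```python
# Cycles a 3.5 GHz core waits per access. Latency figures are ballpark
# assumptions (typical DRAM vs. an optimistic NVMe read), not PS5 specs.
CPU_GHZ = 3.5
DRAM_NS = 100        # typical main-memory access latency
NVME_NS = 10_000     # optimistic NVMe read latency (~10 microseconds)

dram_cycles = DRAM_NS * CPU_GHZ    # cycles stalled on a DRAM access
nvme_cycles = NVME_NS * CPU_GHZ    # cycles stalled on an SSD access

print(int(dram_cycles), int(nvme_cycles), int(nvme_cycles / dram_cycles))
```

Even with optimistic SSD numbers, that's a ~100x longer stall per access, which is why treating the SSD as a direct RAM substitute for the CPU looks so unattractive.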
 
I wonder about certain types of games that we know exist, like those with a free-floating dynamic resolution that sits below the top-end bound nearly all the time, like Call of Duty: Modern Warfare or many other late-gen games. What does a GPU load like that do under this system? It maxes the GPU the entire time and, in my experience, generates a lot of heat

Talking of COD, this could be one of those weird cases where, like with Horizon Zero Dawn, the map screen uses more power than actual gameplay, because my PS4's fan gets much louder while waiting in the matchmaking lobby than it does in game.
 
Talking of COD, this could be one of those weird cases where, like with Horizon Zero Dawn, the map screen uses more power than actual gameplay, because my PS4's fan gets much louder while waiting in the matchmaking lobby than it does in game.
Unlocked framerates. In HZD the GPU is loaded up heavily, but because gameplay is capped at 30 Hz, the CPU is likely sitting idle most of the time, so it doesn't draw as much power as it could. Once the map comes up without a frame limiter, the loop spins at a much higher rate, and now both GPU and CPU are pulling more power.
 
Unlocked framerates. In HZD the GPU is loaded up heavily, but because gameplay is capped at 30 Hz, the CPU is likely sitting idle most of the time, so it doesn't draw as much power as it could. Once the map comes up without a frame limiter, the loop spins at a much higher rate, and now both GPU and CPU are pulling more power.

That shouldn't be the case in Warzone, though, because I'm pretty sure it's pushing everything to the max in game with the 60 fps target and dynamic resolution. And does Warzone use an unlocked framerate in the lobby?
 
That shouldn't be the case in Warzone, though, because I'm pretty sure it's pushing everything to the max in game with the 60 fps target and dynamic resolution. And does Warzone use an unlocked framerate in the lobby?
Not sure; sometimes these are just "oops, forgot to lock the frame rate" issues. It happens. I hit near 300 fps on loading screens in Apex Legends; I doubt devs often think about controlling power during these moments, they just want to get through loading as fast as possible.

In theory, Warzone should use more power than HZD. Whether it causes your fans to spin up is a separate discussion.
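The "forgot to lock the frame rate" fix is typically just a few lines of pacing code. A minimal sleep-based limiter sketch, not any engine's actual implementation (the frame/fps values are arbitrary):

```python
import time

# Minimal frame limiter: pace a loop (e.g. a lobby or loading screen) at
# a target fps so the CPU/GPU aren't spinning flat out. Illustrative only.
def run_capped(frames, fps):
    """Run `frames` iterations at most `fps` per second; return elapsed seconds."""
    frame_time = 1.0 / fps
    start = time.perf_counter()
    for i in range(frames):
        # ... render lobby / poll matchmaking here ...
        elapsed = time.perf_counter() - start
        target = frame_time * (i + 1)   # cumulative schedule avoids drift
        if target > elapsed:
            time.sleep(target - elapsed)
    return time.perf_counter() - start

# 30 frames at 60 fps should take at least ~0.5 s instead of spinning freely.
print(run_capped(30, 60) >= 0.45)
```

Without the sleep, the loop runs as fast as the hardware allows, which is exactly the 300 fps loading-screen situation described above.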
 