General Next Generation Rumors and Discussions [Post GDC 2020]

That would mean Cerny was lying, or at least being disingenuous, when he said it maintains those clocks most of the time, and I don't think he's the type of person to do that.

He can't really be lying, as he never gave the specifics. "Most of the time" when? There could be thousands of situations. Read Alex's post on what the different situations could mean.
 

It would fall under being disingenuous.
He said that the clocks drop during worst-case scenarios.
 
He said that the clocks drop during the worst case scenarios.

And what's that? Games that really push the system, like open-world games with a lot going on and heavy particle effects? Kinda... logical that clocks only drop in worst-case scenarios ;)

Why didn't they give the lowest base clock possible?
 
I'd like to know 2 things:

If the CPU is running SMT at max boost clock doing AVX, what's the highest clock the GPU can sustain?

If the GPU is at max boost with RT, what's the sustained CPU speed with and without SMT enabled?
One of the two must have priority over the other, at least for the sake of the developers' sanity.
 
Most likely the GPU will be downclocked, as CPU clocks have much less of an impact on temps and voltages than GPU clocks do. The same goes for performance, though. Also, (over)clocks have a sweet spot somewhere, I have learned: at certain extremes, there's less to gain. Extreme clocks do well in benchmarks, less so in real-world scenarios. Any overclocker knows this.
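
To put rough numbers on that sweet-spot point: under the usual rule of thumb that dynamic power scales with frequency times voltage squared, with voltage assumed to track frequency near the top of the curve (a crude, illustrative approximation, not the consoles' measured curves), a small clock cut buys a disproportionate power saving:

```python
# Rough sketch of why downclocking helps so much: dynamic power scales
# roughly with frequency * voltage^2, and near the top of the frequency
# curve voltage tends to rise roughly in step with frequency.
# Illustrative approximation only -- not the consoles' real power curves.

def relative_dynamic_power(freq_scale, voltage_scale=None):
    """Dynamic power relative to baseline after scaling clocks by freq_scale."""
    if voltage_scale is None:
        voltage_scale = freq_scale  # crude assumption: V tracks f
    return freq_scale * voltage_scale ** 2

for cut in (0.02, 0.05, 0.10):
    saved = 1 - relative_dynamic_power(1 - cut)
    print(f"{cut:.0%} lower clock -> roughly {saved:.0%} less dynamic power")
```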
 

Most games are GPU-bound (especially at 4K); prioritizing the CPU is nonsense.

That's why AMD closes the gap with Intel CPUs at higher resolutions.
 
In consoles it is a bit different, because they will look to utilize as much of the CPU as possible. I think it is fairly obvious devs would go for 3.66 vs 3.5 GHz and a 9.9 TF GPU, rather than 3.66 vs 3.2 GHz and a 10.2 TF GPU, if such a scenario were possible.

It's much easier to scale back a few % of GPU performance than to have a ~10% slower CPU.
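
Those TF figures are just clock arithmetic on a 36 CU GPU (the PS5 CU count from the Road to PS5 talk); a quick sketch, where the 9.9 TF / 3.5 GHz pairing is this post's hypothetical rather than anything confirmed:

```python
# FLOPS = CUs * 64 shader ALUs * 2 FLOPs per clock (FMA) * clock.
# 36 CUs is the PS5 figure from the Road to PS5 talk; the clock/TF pairings
# being debated here are hypothetical power-sharing profiles.
CUS = 36
FLOPS_PER_CU_PER_CLOCK = 64 * 2

def teraflops(clock_mhz):
    return CUS * FLOPS_PER_CU_PER_CLOCK * clock_mhz * 1e6 / 1e12

def clock_mhz_for(tf):
    return tf * 1e12 / (CUS * FLOPS_PER_CU_PER_CLOCK * 1e6)

print(f"2230 MHz -> {teraflops(2230):.2f} TF")        # ~10.28 TF (max boost)
print(f"9.9 TF needs ~{clock_mhz_for(9.9):.0f} MHz")  # ~2148 MHz
```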
 
So we think that devs are gonna exploit less than 200 MHz per CPU core to give XSX a great advantage, but not use PS5's twice-as-fast SSD access to make open worlds with double the object density and variety?

BTW, when gaming on my laptop I always make the trade-off of reducing CPU clocks from 3.5 GHz down to 3.2 GHz to keep temps down (shared heat pipe with the GPU), and my 2080's GPU clocks are maintained. When gaming at 60 fps, or dare I say 30 fps, it is the ideal compromise for the highest graphics settings and the smoothest framerate.
 
So we think that devs are gonna exploit less than 200 MHz per CPU core to give XSX a great advantage, but not use PS5's twice-as-fast SSD access to make open worlds with double the object density and variety?
I think devs will exploit the CPU if that's what they want to exploit; the option is available to them at the cost of reducing the resolution by a touch.
Devs don't care about giving XSX an advantage; this is about how the two systems behave differently.
 
In consoles it is a bit different, because they will look to utilize as much of the CPU as possible. I think it is fairly obvious devs would go for 3.66 vs 3.5 GHz and a 9.9 TF GPU, rather than 3.66 vs 3.2 GHz and a 10.2 TF GPU, if such a scenario were possible.

It's much easier to scale back a few % of GPU performance than to have a ~10% slower CPU.

How come? If the PS5 allegedly has plenty of coprocessors to offload tasks from the CPU, then I assume it won't be CPU-bound relative to the XBSX, which uses the CPU for the same tasks.
 
In consoles it is a bit different, because they will look to utilize as much of the CPU as possible. I think it is fairly obvious devs would go for 3.66 vs 3.5 GHz and a 9.9 TF GPU, rather than 3.66 vs 3.2 GHz and a 10.2 TF GPU, if such a scenario were possible.

It's much easier to scale back a few % of GPU performance than to have a ~10% slower CPU.

There are ways to utilize the CPU which would lead to heavy downclocking. In the PS5's case this can be mitigated by using less of the GPU and potentially getting better overall performance. The link below refers to Intel's CPUs/implementation, but a similar thing applies to AMD CPUs too, and Cerny mentioned it briefly in his presentation. Not every game uses those instructions, or the instructions are only used very briefly. It would be interesting to know how the Xbox handles this: is there thermal room built into the cooling to account for AVX2/AVX-512, or would the Xbox lower clocks?

Modern Intel CPUs reduce their frequency when executing wide vector operations (AVX2 and AVX-512 instructions), as these instructions increase power consumption. The frequency is only increased again two milliseconds after the last code section containing such instructions has been executed, in order to prevent excessive numbers of frequency changes.
https://arxiv.org/pdf/1901.04982.pdf
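
A toy model of the behaviour that paper describes (not the actual firmware logic, and the clock values are made up; only the 2 ms restore delay comes from the quote): the core runs at a reduced clock while wide vector sections execute, and only returns to the normal clock 2 ms after the last one ends.

```python
# Toy model of the AVX downclock/restore behaviour quoted above.
# Clock values are invented; only the 2 ms restore delay is from the paper.
NORMAL_CLOCK_GHZ = 3.5
AVX_CLOCK_GHZ = 3.0
RESTORE_DELAY_MS = 2.0

def clock_at(t_ms, avx_sections):
    """avx_sections: list of (start_ms, end_ms) intervals running AVX2/AVX-512."""
    for start, end in avx_sections:
        if start <= t_ms <= end + RESTORE_DELAY_MS:
            return AVX_CLOCK_GHZ  # inside a section or its 2 ms cooldown window
    return NORMAL_CLOCK_GHZ

sections = [(1.0, 1.2), (5.0, 5.1)]  # two brief AVX bursts
for t in (0.5, 1.1, 2.5, 3.5, 5.05, 6.5, 7.5):
    print(f"t = {t:4.2f} ms -> {clock_at(t, sections):.1f} GHz")
```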
 
I suspect it won't be different from the current model: XSX clocks are fixed, and if the XSX overheats it shuts down, much like we see with today's generation.
 
So we think that devs are gonna exploit less than 200 MHz per CPU core to give XSX a great advantage, but not use PS5's twice-as-fast SSD access to make open worlds with double the object density and variety?
Quite possibly. CPU scaling is already a part of cross-platform titles, so it's natural for XBSX to get an immediate advantage in games that do scale. SSD scaling on PS5 is going to be a platform-specific optimisation. We may see an advantage in console-only games, if such a thing exists, where the same streaming tech uses the available streaming BW to full advantage.

That said, we've no idea how good 2-4 GB/s is going to be at saturating the graphics pipeline. If that can already populate the world in 'perfect' quality, streaming even faster won't bring any improvement. Assuming 4 GB/s still leaves room for improvement, it'll be a matter of diminishing returns that we simply cannot predict. And will higher texture fidelity matter if the overall rendering resolution is lower?

At 60 fps that's 66 MB/frame for XBSX and 133 MB/frame for PS5. Sebbbi's calculation for virtual texturing at 720p was 7 MB/s; 4K is 9x 720p, so ~63 MB/s should be enough for ideal texturing if the engine can work that way.
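
A quick sanity check of that per-frame arithmetic (using the ~4 GB/s and ~8 GB/s compressed figures being thrown around in this thread, not official numbers):

```python
# Per-frame streaming budget = bandwidth / framerate.
def mb_per_frame(gb_per_s, fps):
    return gb_per_s * 1000 / fps

print(f"XBSX @ ~4 GB/s, 60 fps: {mb_per_frame(4, 60):.0f} MB/frame")  # ~66
print(f"PS5  @ ~8 GB/s, 60 fps: {mb_per_frame(8, 60):.0f} MB/frame")  # ~133

# Sebbbi's virtual-texturing estimate was ~7 MB/s at 720p; 4K has 9x the
# pixels, so ideal texturing needs on the order of 7 * 9 = 63 MB/s.
print(f"Ideal VT streaming at 4K: ~{7 * 9} MB/s")
```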

In short, I doubt either will show much improvement either way. It'll be enough that DF stays in business for another generation, but likely not enough that Joe Gamer ever notices.
 
I believe the XSX will initially come out on top for multiplatform image quality/performance, but once assets are heavily shared between games and movie/media production, the PS5 is gonna pull far ahead. I think everyone is underestimating the value of being able to access so much data (texture variety/resolution, object density/variety, geometry, micro-details). Since Unreal is heavily used in media production (see any Star Wars release in the last few years), assets will be shared without many adjustments. XSX will hit bottlenecks that prevent devs from dropping these assets into their games. I'm sure the PS5 will lead to an easier development cycle and asset sharing.
 
Wha'? How big do you think game files are going to get? There's no way a console can use a movie's assets. Assuming games are capped at something like 100-200 GB just for the sake of sanity (with some like GTA/RDR being maybe 500+ GB), there's only so much data per level and scene.
 
In consoles it is a bit different, because they will look to utilize as much of the CPU as possible. I think it is fairly obvious devs would go for 3.66 vs 3.5 GHz and a 9.9 TF GPU, rather than 3.66 vs 3.2 GHz and a 10.2 TF GPU, if such a scenario were possible.

It's much easier to scale back a few % of GPU performance than to have a ~10% slower CPU.

We've had consoles with 8 cores for a while, and most engines are still notoriously bad at fully utilizing multiple cores and threads, with some notable exceptions like Anvil and Frostbite. And if you check any 3700X benchmark, you'll see that at 4K the 3700X doesn't have any trouble managing Battlefield V, with CPU usage that rarely goes over 50%. Even at 1080p, from the videos/benchmarks I've seen, the game is still GPU-bound.

If, as some users here say, Cerny is lying and the PS5 can't achieve max CPU and GPU clocks at the same time, it makes much more sense to lower the frequency on the CPU that you are not going to fully use anyway.

edit: or deactivate some threads (can it be done, or do you need to turn off SMT altogether?).
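
For what it's worth, on a PC you can approximate that per process without disabling SMT globally, by pinning the process to one logical CPU per physical core. A Linux-only sketch (console SDKs handle affinity through their own APIs, so this is purely illustrative):

```python
# Restrict the current process to one logical CPU per physical core,
# which effectively stops it from using SMT siblings (Linux only).
import os
from pathlib import Path

def one_logical_cpu_per_core():
    chosen, seen = [], set()
    for cpu_dir in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*"),
                          key=lambda p: int(p.name[3:])):
        topo = cpu_dir / "topology"
        if not (topo / "core_id").exists():
            continue  # offline CPUs expose no topology
        key = ((topo / "physical_package_id").read_text().strip(),
               (topo / "core_id").read_text().strip())
        if key not in seen:
            seen.add(key)
            chosen.append(int(cpu_dir.name[3:]))
    return chosen

os.sched_setaffinity(0, one_logical_cpu_per_core())
print("Now limited to logical CPUs:", sorted(os.sched_getaffinity(0)))
```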

If you want to achieve higher frame rates, the CPU is going to need additional resources.

Yeah, for CS:GO players with 240 Hz 1080p monitors I do agree that having a fast CPU is important; at 4K, not so much (a first-gen Ryzen is only 1% slower than Zen 2 at 4K).
 