PlayStation 5 [PS5] [Release: November 12, 2020]

Why would you need to lower power in the first place, is my question? Why not run it at consistent clock speeds? The answer is quite obvious: thermals. Ergo these are boost clocks and will not be constant. I'll change my name to Sally if the PS5 can run at those clocks even half the time. Also, I believe they haven't figured out their cooling solution yet either. Yes, wait and see, sure, but Sony is being dishonest with the spec numbers, that's quite obvious.

Cerny explains that it is not thermally limited, so PS5 performance will be the same whether the console is in the northern hemisphere or in Asia.

But if a game is very efficient and hammers the CPU and GPU hard, it will draw more power at the same frequency.

So they clock it down to keep it within the power budget.

The power budget can also be shifted toward the CPU or the GPU depending on the game's needs. I assume this is similar to Intel's power-sharing feature (I forget the name) that lets you allocate TDP to give more headroom to the CPU or the GPU. It's available in Intel Extreme Tuning Utility, a free download from Intel's download center.
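A minimal sketch of how a fixed power budget with a shiftable CPU/GPU split might behave (the cubic power-frequency relation and all wattages here are assumptions for illustration; only the 2.23 GHz cap is Sony's published figure):

```python
# Toy model of a fixed-power, variable-frequency console (all numbers
# illustrative). Dynamic power scales roughly as P ~ V^2 * f, and voltage
# rises with frequency, so P ~ f^3 is a common approximation.
TOTAL_BUDGET_W = 200.0  # assumed total SoC power budget, not a Sony figure

def clock_for_power(power_w, ref_power_w, ref_clock_ghz):
    # Invert P ~ f^3: f = f_ref * (P / P_ref) ** (1/3)
    return ref_clock_ghz * (power_w / ref_power_w) ** (1.0 / 3.0)

# Shift the budget between CPU and GPU, like Cerny described (and like
# Intel XTU lets you do on laptops): a GPU-heavy scene borrows watts
# from a lightly loaded CPU, and vice versa.
for gpu_share in (0.70, 0.80, 0.90):
    gpu_w = TOTAL_BUDGET_W * gpu_share
    ghz = clock_for_power(gpu_w, ref_power_w=TOTAL_BUDGET_W * 0.80,
                          ref_clock_ghz=2.23)  # 2.23 GHz is the published cap
    print(f"GPU gets {gpu_w:.0f} W -> ~{min(ghz, 2.23):.2f} GHz")

# The "a couple percent of frequency buys ~10% power" claim falls out of
# the cubic model: 0.9 ** (1/3) is about 0.966, i.e. a ~3.5% clock drop.
```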
 
If 10.3 teraflops was your original target, then 36 CUs is a weird choice.
Sony now has to run the GPU at extraordinary clocks, producing massive amounts of heat, power draw, and noise, to hit the 10.3-teraflop mark, and by Cerny's own admission the console won't be able to run at that speed consistently; lowered by 10%, that makes it 9.2 teraflops.
It seems more likely that the high clocks are a reaction to Xbox's 12 teraflops. If Sony was genuine about 10.3 teraflops, then 44 CUs would have been more appropriate.
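For context, the teraflop figures being argued over fall out of simple arithmetic (the CU counts and clocks below are the published specs; 64 ALUs times 2 FLOPs per CU per cycle is standard for this architecture):

```python
def fp32_tflops(cus, clock_ghz):
    # Each CU has 64 FP32 ALUs; an FMA counts as 2 FLOPs per cycle.
    return cus * 64 * 2 * clock_ghz / 1000.0

print(fp32_tflops(36, 2.23))         # PS5 at its max clock: ~10.28 TF
print(fp32_tflops(36, 2.23 * 0.9))   # the "10% lower" scenario: ~9.25 TF
print(fp32_tflops(52, 1.825))        # Xbox Series X: ~12.15 TF
# Clock a 44-CU part would need to hit 10.3 TF:
print(10.3 * 1000 / (44 * 64 * 2))   # ~1.83 GHz
```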

I mean, Cerny did make the comparison to 48 CUs and made points about parallelism and whatnot... also that he values higher clocks over raw compute power...
 
As the PS5 is clocked higher, a per-flop comparison should be a better measure of RT capabilities, which should be tied to CU count and clock speed, the same as the ALUs.
No. Because RT is mostly bandwidth-limited for non-trivial scenes.
I would really like to know how you reach that conclusion, as there's more than just bandwidth that affects how well you can utilize a CU.

Especially as, historically, AMD GPUs have had a hard time using the CUs' full potential on GPUs with a larger CU count, due to being held back by other parts of the GPU logic.
The more deterministic your system is, the easier it is to work around known limitations. The less deterministic it is, the harder.

Of course, when playing a game your scene changes, so that introduces some non-determinism to begin with. But as a game dev you know what to expect and have some control by tweaking levels, so you end up knowing your worst case. You can use that knowledge to work around (GPU) hardware limitations. But non-determinism introduced by game code (which in turn is introduced by player input or network input) is much harder to work with. If the CPU is using the memory just when the GPU needs to load data, the GPU (or parts of it) sits idle until that memory data arrives. So it's better for the GPU to have the memory bus to itself.
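A back-of-the-envelope way to see the shared-bus point (448 GB/s is PS5's published memory bandwidth; the CPU draw and the contention penalty are invented purely for illustration):

```python
# Back-of-the-envelope shared-bus contention. 448 GB/s is PS5's published
# GDDR6 bandwidth; the CPU draw and penalty fraction are made-up examples.
TOTAL_BW_GBS = 448.0

def effective_gpu_bw(cpu_bw_gbs, contention_penalty=0.10):
    # Interleaved CPU/GPU access costs more than the CPU's raw usage:
    # extra DRAM page opens/closes and arbitration overhead, modeled
    # here as a flat fractional penalty whenever the CPU is on the bus.
    penalty = contention_penalty if cpu_bw_gbs > 0 else 0.0
    return (TOTAL_BW_GBS - cpu_bw_gbs) * (1.0 - penalty)

print(effective_gpu_bw(0.0))    # GPU alone: 448 GB/s
print(effective_gpu_bw(48.0))   # CPU busy: ~360 GB/s left, not 400
```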
 
Some games may not use AVX because the instructions use too much power? That is one of the dumbest things I've heard. SIMD is essential to getting maximum performance from a CPU.

It's a new, wide extension that is very power hungry and not yet widely used in engines, i.e. today's games just don't use it yet. Tomorrow's games might really rely on it. Sony's approach optimizes for both cases within a fixed console power budget.
 
Boost is the wrong word. "Variable" is more accurate. Cerny said it is always in "boost" mode... i.e. it's always in variable-frequency mode.
 
Some games may not use AVX because the instructions use too much power? That is one of the dumbest things I've heard. SIMD is essential to getting maximum performance from a CPU.

He was talking about 256-bit instructions. AVX2 I guess, or maybe even AVX-512 executed in two passes.

On PC, it really does use a lot of power compared to AVX1, for example. It's not for nothing that most Intel CPUs clock down when running these instructions, just to contain the power draw and heat produced. I don't know whether AVX2 or AVX-512 have much utility in games...
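A rough sketch of the trade-off being described, using hypothetical clock offsets in the spirit of Intel's documented AVX downclocking (actual offsets vary by CPU model and license level):

```python
# Effective FP32 FMA throughput per core: lanes * 2 FLOPs * clock.
# The clock offsets are hypothetical examples, not specs for any real CPU.
BASE_GHZ = 3.5
modes = {
    "SSE (128-bit)":     (4, 0.0),   # (fp32 lanes, assumed offset in GHz)
    "AVX2 (256-bit)":    (8, 0.2),
    "AVX-512 (512-bit)": (16, 0.5),
}
for name, (lanes, offset_ghz) in modes.items():
    clock = BASE_GHZ - offset_ghz
    gflops = lanes * 2 * clock  # FMA = multiply + add = 2 FLOPs per lane
    print(f"{name}: {clock:.1f} GHz -> {gflops:.0f} GFLOP/s per FMA port")
# Wider SIMD still wins on throughput despite the downclock; the catch is
# the power drawn to get there, which is exactly the constraint on a
# fixed-power-budget console.
```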
 
Why would you need to lower power in the first place, is my question? Why not run it at consistent clock speeds? The answer is quite obvious: thermals. Ergo these are boost clocks and will not be constant. I'll change my name to Sally if the PS5 can run at those clocks even half the time. Also, I believe they haven't figured out their cooling solution yet either. Yes, wait and see, sure, but Sony is being dishonest with the spec numbers, that's quite obvious.

They are constant, and unless you have hard evidence to prove Cerny wrong, they will be referred to as constant. Especially as Cerny has even said why you might need to lower the power to begin with.
 
Radeon VII vs. Vega 64 shows that really well, I think: a very different RAM config, and the performance difference that came with it.

And Vega 64 vs. Vega 56 at the same clocks shows less than a 1% performance advantage for Vega 64, despite a 14% advantage in shader count.

There's more to it than just throwing a higher number of CUs at a problem.
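For reference, a quick check of that 14% figure (the shader counts are the cards' published specs):

```python
vega64_shaders, vega56_shaders = 4096, 3584   # published specs
print(f"shader advantage: {vega64_shaders / vega56_shaders - 1:.1%}")  # ~14.3%
# At matched clocks the measured gap is far smaller, which is the point:
# something other than ALU count limits the scaling.
```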
 
And Vega 64 vs. Vega 56 at the same clocks shows less than a 1% performance advantage for Vega 64, despite a 14% advantage in shader count.

There's more to it than just throwing a higher number of CUs at a problem.
Yeah... you won't see that in consoles. The Radeon VII and Vega 64 are in fact very different GPUs.

Rest assured, 2 TF and more bandwidth will come in handy, like they did with the PS4 Pro vs. the Xbox One X.

The difference will not be night and day by any means, but same arch, same features, and more compute will be felt (as will RT, for example).
 
If developers can pull this off with a mechanical HDD that tops out at ~100 MB/s, what are they gonna do with 8-9 GB/s?

https://i.kinja-img.com/gawker-medi...gressive,q_80,w_800/ucoln8kedwfglsrlxvm5.webm

PS5's GPU could very well end up with fewer resources being wasted on drawing stuff outside of the player frustum, because there's no need to factor in slow HDD loading, leaving more grunt for the actual pixels you can see.

The more I sit and think about what this amount of pure I/O performance can do and change, the more excited I get.
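One way to make the jump concrete: how much data each bandwidth can deliver per 33 ms frame (the 100 MB/s and 8-9 GB/s figures come from the post above; 5.5 GB/s is Sony's published raw figure):

```python
FRAME_S = 1.0 / 30.0  # one 30 fps frame

def mb_per_frame(bandwidth_mb_s):
    return bandwidth_mb_s * FRAME_S

print(f"HDD @  100 MB/s: {mb_per_frame(100):6.1f} MB per frame")   # ~3.3 MB
print(f"SSD @ 5500 MB/s: {mb_per_frame(5500):6.1f} MB per frame")  # ~183 MB
print(f"SSD @ 9000 MB/s: {mb_per_frame(9000):6.1f} MB per frame")  # ~300 MB
# At hundreds of MB per frame you can swap big chunks of the visible
# world's assets every frame instead of prefetching seconds ahead.
```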
 
If developers can pull this off with a mechanical HDD that tops out at ~100 MB/s, what are they gonna do with 8-9 GB/s?

https://i.kinja-img.com/gawker-medi...gressive,q_80,w_800/ucoln8kedwfglsrlxvm5.webm

PS5's GPU could very well end up with fewer resources being wasted on drawing stuff outside of the player frustum, because there's no need to factor in slow HDD loading, leaving more grunt for the actual pixels you can see.

The more I sit and think about what this amount of pure I/O performance can do and change, the more excited I get.

Hyperbole aside, one of my big bugbears over the last couple of years has been how low-detail PS4 graphics have been. And I keep remembering how Bloodborne absolutely shocked me when it came out years ago, with the amount of small details present everywhere.
I don't know if more recent games have had less detail, or I just got used to it, but to think that games will simply look that much more "full" of stuff on screen makes me very excited.
 
PS5's GPU could very well end up with fewer resources being wasted on drawing stuff outside of the player frustum, because there's no need to factor in slow HDD loading, leaving more grunt for the actual pixels you can see.

Modern games waste very few GPU resources DRAWING stuff outside the view frustum. Modern culling works fine. What we don't have is fast streaming, so things have to be read from disk in advance and waste RAM just in case. So an SSD is a nice cover for less RAM, but it does little to cover for less compute.
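A crude model of the "SSD covers for less RAM" point (all numbers assumed purely for illustration):

```python
# RAM needed for streaming ~ data covering everything the player can
# reach before new data arrives from disk. All numbers are assumed and
# only illustrate how the resident set shrinks with faster I/O.
def prefetch_ram_mb(mb_per_meter, player_speed_m_s, load_latency_s):
    return mb_per_meter * player_speed_m_s * load_latency_s

ASSET_DENSITY = 50.0   # MB of unique assets per meter of travel (assumed)
SPEED = 10.0           # player movement in m/s (assumed)

hdd_latency_s = 1024 / 100    # ~10.2 s to pull in 1 GB at 100 MB/s
ssd_latency_s = 1024 / 9000   # ~0.11 s at 9 GB/s

print(prefetch_ram_mb(ASSET_DENSITY, SPEED, hdd_latency_s))  # ~5120 MB held just in case
print(prefetch_ram_mb(ASSET_DENSITY, SPEED, ssd_latency_s))  # ~57 MB
# The saving is in RAM, not in shading work; culling already covers that.
```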
 