PlayStation 5 [PS5] [Release November 12, 2020]

I give up.
If we had really misunderstood the part about the Retro, surely Sony would have set the record straight by now, right?
But they did not, so the Retro is DOA.

Even if the XBSX costs more, it'll be worth more.
Sony exclusives are good and all, but they aren't reason enough to go with the PS5 given so many compromises, especially now that they may be coming to PC.
 
If 10.3 teraflops was your original target, then 36 CUs is a weird choice.
Sony now has to run the GPU at extraordinary clocks, producing massive amounts of heat, power draw, and noise, to hit the 10.3-teraflop mark, and by Cerny's own admission the console won't be able to run at that speed consistently; it would be lowered by 10%, making it 9.2 teraflops.
It seems more likely that the high clocks are a reaction to Xbox's 12 teraflops. If Sony were genuine about 10.3 teraflops, then 44 CUs would have been more appropriate.
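For reference, the headline teraflop figure is just CU count × 64 shader ALUs × 2 ops per clock (FMA) × clock speed, so the trade-off is easy to check yourself (a quick sketch; the XSX line assumes the publicly stated 52 CUs at 1.825GHz):

```python
# FP32 TFLOPS = CUs * 64 ALUs per CU * 2 ops per clock (FMA) * clock in GHz / 1000
def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

print(f"{tflops(36, 2.23):.2f}")   # PS5 at its 2.23 GHz cap: ~10.28 TF
print(f"{tflops(36, 2.00):.2f}")   # ~9.2 TF if clocks really dropped ~10%
print(f"{tflops(44, 1.825):.2f}")  # 44 CUs hit ~10.3 TF at a much tamer clock
print(f"{tflops(52, 1.825):.2f}")  # XSX: 52 CUs at 1.825 GHz, ~12.15 TF
```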
 
Obviously, both Sony and MS had whole teams building the next-gen consoles. Stating otherwise is just stupid, sorry to say.

The notion that Cerny alone decided what to do with this shows a complete lack of understanding of how the world works.

No, it's you who's showing a complete lack of understanding of how this world works. Cerny is Earth's God Emperor, and he controls all things.
 
It'll be interesting to see e.g. Death Stranding running at >60 FPS on 120Hz HDMI 2.1 TVs.

Going to test that soon :)

If 10.3 teraflops was your original target, then 36 CUs is a weird choice.
Sony now has to run the GPU at extraordinary clocks, producing massive amounts of heat, power draw, and noise, to hit the 10.3-teraflop mark, and by Cerny's own admission the console won't be able to run at that speed consistently; it would be lowered by 10%, making it 9.2 teraflops.
It seems more likely that the high clocks are a reaction to Xbox's 12 teraflops. If Sony were genuine about 10.3 teraflops, then 44 CUs would have been more appropriate.

Don't worry, it won't run at those speeds all the time.
 
10GB of RAM for the GPU vs. 10+GB of RAM for the GPU with stupidly fast I/O??

There's one clear winner there for me, and it's not the first option. I expect that after developers get used to the SSD on PS5, it'll run circles around the Series X in terms of asset quality, asset resolution, and asset density.
I don't think you understand very well what's going on here.
PS5 will have trouble reaching its theoretical maximum GPU power of 10.2TF, while the XBSX will reach 12.155TF much more easily, because PS5 CPU memory access will eat up memory bandwidth in a non-deterministic way (the fine-grained memory access patterns of CPU code don't mix well with the high latencies of GDDR memory) and thus starve the PS5 GPU frequently. That forces PS5 devs to either not use the CPU as much or not push the GPU to the max, unless you like stuttering and a framerate that's all over the place, especially at 4K or 120Hz.

PS5's fast I/O does not help the situation here. It's like saying you'd prefer an RTX 2060 with a 2x-speed SSD over an RTX 2080 with a 1x-speed SSD. I don't think many people would call option #1 the clear winner there.
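To put rough numbers on the contention claim (the CPU share below is purely my own illustrative guess, not anything Sony or MS has published):

```python
# Both consoles share GDDR6 between CPU and GPU. PS5 has 448 GB/s unified;
# XBSX has 560 GB/s on its 10 GB GPU-optimal pool. Whatever the CPU pulls
# comes straight out of the GPU's share -- and in practice the penalty is
# worse than raw byte counting, since fine-grained CPU accesses break up
# the GPU's efficient burst transfers.
PS5_TOTAL_GBPS = 448
XBSX_FAST_GBPS = 560
CPU_SHARE_GBPS = 50   # illustrative guess

print(f"PS5 GPU effective:  ~{PS5_TOTAL_GBPS - CPU_SHARE_GBPS} GB/s")
print(f"XBSX GPU effective: ~{XBSX_FAST_GBPS - CPU_SHARE_GBPS} GB/s")
```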
 
According to Cerny it should run at max speed most of the time, but some specific loads are just so power-hungry that there will be cases where it won't.

This is funny, because when I was listening last night, I interpreted what Cerny said to mean that the console does not always need to use all its power to play games, so the clocks/power etc. would vary with load, i.e. minimum effort to do what needs to be done. But I am not a native speaker, and since basically everybody else says otherwise, I assume I was wrong.


True, but he most likely gets orders from the top (Sony). I've said it before: Cerny is not to blame for this. MS probably had a whole team behind designing the thing, I'm sure.

The question is, what is Sony's strategy with PlayStation? Are they doing a Nintendo and thinking the power race is not worth it anymore? Or are they thinking that graphics are not what will define gaming going forward? Who is the target audience, hardcore gamers or the more casual ones?
They have invested heavily in VR, which was not about top-notch graphics but a different way of playing games; I found VR to be just annoying with the headset, etc.
Is 3D audio a game changer for most? Or no visible loading times?

Personally, 3D audio does not sound that exciting. But the SSD speed of the PS5 is much more interesting, IF it brings "disruption" in how games are played/built. Just having more textures and objects in the world is not that exciting a gain; like 3D audio, it will make the world richer. But will it underpin game changers?
Cerny talked about presence and location: are you immersed? I remember people on here talking about what breaks immersion; personally, I've never been in that camp. Maybe I'm the outlier and the mass market is there.

I think Sony diverged in what type of game experience they want to create. But is that a reaction to feedback from developers, for whom bigger and bolder just is not what they want to do?
MS is more like an American muscle car: raw power, the loud noise of the engine, and just a very intense experience, tried and trusted.
Sony, meanwhile, seems to have gone down the more elegant, comfortable, laid-back route, like maybe a high-end German car.

I don't know, I think it's fun to think about. I'm lucky enough to be in a position to buy both consoles if they cater to different experiences and both are enjoyable to me.

Come on, holiday 2020. If COVID-19 continues, I'll probably be unemployed and have lots of time to play.
 
Sony now has to run the GPU at extraordinary clocks, producing massive amounts of heat, power draw, and noise, to hit the 10.3-teraflop mark, and by Cerny's own admission the console won't be able to run at that speed consistently; it would be lowered by 10%, making it 9.2 teraflops.
We don’t know that. We have to see more of RDNA2. It could be designed around that clock speed for all we know.
 
If 10.3 teraflops was your original target, then 36 CUs is a weird choice.
Sony now has to run the GPU at extraordinary clocks, producing massive amounts of heat, power draw, and noise, to hit the 10.3-teraflop mark, and by Cerny's own admission the console won't be able to run at that speed consistently; it would be lowered by 10%, making it 9.2 teraflops.
It seems more likely that the high clocks are a reaction to Xbox's 12 teraflops. If Sony were genuine about 10.3 teraflops, then 44 CUs would have been more appropriate.

No......

1. 36 CUs is what the current 5700 PC GPU has, so it could be as simple as they had a part there and used it as a base.

2. Cerny did not say it would not run at those clocks consistently.

3. Cerny also did not say the clocks would be lowered by 10%. Cerny said POWER will be lowered by 10%, but CLOCKS will only drop a couple of percent as a result (see the quick math below).

4. There are loads of advantages to running higher clocks; as Cerny stated, it speeds up the whole chip. Also, AMD GPUs have historically responded better to clock speed increases than to CU count increases.

There seem to be a lot of people who either didn't watch the presentation, or who watched it and didn't understand it.
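On point 3, the math checks out: dynamic power scales roughly with f·V², and voltage scales roughly with frequency, so power goes roughly with the cube of the clock. Under that rough model (not AMD's actual curves), a 10% power cut needs only a ~3.5% clock drop:

```python
# Rough model: P ~ f * V^2 with V roughly proportional to f, so P ~ f^3.
# Solve for the clock ratio that yields a 10% power reduction.
power_ratio = 0.90
clock_ratio = power_ratio ** (1 / 3)
print(f"clock ratio: {clock_ratio:.4f}")               # ~0.9655
print(f"clock drop:  {(1 - clock_ratio) * 100:.1f}%")  # ~3.5%
print(f"2.23 GHz -> {2.23 * clock_ratio:.2f} GHz")     # ~2.15 GHz
```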
 
This will be an important point to know, since there are a lot of PC CPUs around with no AVX2 compatibility (my old 10-core Xeon E5 v2 Ivy Bridge may finally need to be replaced), and others with poor AVX2 performance (Zen 1/1.5 included).
Zen 2 has no AVX-512, BTW, so that one isn't coming to new engines on consoles for sure.

You are right, I wasn't quite sure which AVX versions are present in the consoles. Cerny, however, specifically mentioned AVX as a power hog, and he explained how some games might not use the power-hungry instructions and how, at some later date, some game could really hammer the CPU via AVX.
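If anyone wants to check what their own PC CPU supports, the third-party py-cpuinfo package exposes the feature flags (assuming it's installed via pip; flag names can vary slightly by platform):

```python
# pip install py-cpuinfo
import cpuinfo

flags = set(cpuinfo.get_cpu_info().get("flags", []))
for feature in ("avx", "avx2", "avx512f"):
    print(f"{feature}: {'yes' if feature in flags else 'no'}")
```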
 
I don't think you understand very well what's going on here.
PS5 will have trouble reaching its theoretical maximum GPU power of 10.2TF, while the XBSX will reach 12.155TF much more easily, because PS5 CPU memory access will eat up memory bandwidth in a non-deterministic way (the fine-grained memory access patterns of CPU code don't mix well with the high latencies of GDDR memory) and thus starve the PS5 GPU frequently. That forces PS5 devs to either not use the CPU as much or not push the GPU to the max, unless you like stuttering and a framerate that's all over the place, especially at 4K or 120Hz.

I would really like to know how you reach that conclusion, as there's more than just bandwidth that affects how well you can utilize a CU.

Especially as, historically, AMD GPUs have had a hard time extracting the CUs' full potential on parts with larger CU counts, being held back by other parts of the GPU logic.
 
We don’t know that. We have to see more of RDNA2. It could be designed around that clock speed for all we know.
One easy way to spot whether 2.23GHz is such a big departure from what to expect of RDNA2 PC dGPUs is to look at AMD's own statements about RDNA2.



50% perf-per-watt improvement over RDNA1.

This 50% perf-per-watt improvement comes from both pure architectural improvements and electrical ones (the efficiency curve at given clocks).
RDNA1 was already a large departure in ISA compared to GCN, so I don't think we're looking at much more than a 15% gain from architectural improvements (probably a lot less).
There has also been talk of Zen engineers being sent to RTG to improve Radeon's clock/efficiency curves, which is something we really haven't seen so far. Vega 20 / Navi 10 clock efficiency was only as much better than Vega 10 as the N7 process allowed, so there wasn't a lot of work done there.
Navi 20 is the family where we should see those CPU+GPU clock improvements.

So assuming an (unlikely high, IMO) 15% improvement from architecture alone over RDNA1, we'd still have 35% to go, and that's coming from higher clocks at equal power consumption.

There are reports of the 36CU Radeon 5700 consuming as little as 140W at ~1650MHz. A 35% clock increase at the same 140W would mean 2228MHz.
To me, it seems ~2.25GHz will be very close to the typical clocks we should expect for most desktop RDNA2 GPUs, maybe even more.
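The arithmetic behind that estimate (note I'm splitting the 50% additively into 15% + 35%; a multiplicative split would leave ~30% for clocks instead):

```python
# AMD claims +50% perf-per-watt for RDNA2 over RDNA1. If ~15% of that is
# architectural, the remainder has to come from clocks at equal power.
base_clock_mhz = 1650   # reported Radeon 5700 clock at ~140 W
print(f"additive split:       {base_clock_mhz * 1.35:.0f} MHz")        # ~2228 MHz
print(f"multiplicative split: {base_clock_mhz * 1.5 / 1.15:.0f} MHz")  # ~2152 MHz
```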


Before we saw AMD's claims of 50% perf-per-watt improvements for RDNA2 GPUs over RDNA1, Sony's GPU clocks would have seemed a lot more outrageous.
 
I would really like to know how you reach that conclusion, as there's more than just bandwidth that affects how well you can utilize a CU.

Especially as, historically, AMD GPUs have had a hard time extracting the CUs' full potential on parts with larger CU counts, being held back by other parts of the GPU logic.
Radeon VII vs Vega 64 shows that really well, I think: a very different RAM config, and the performance difference that came with it.
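The rough specs, for those who don't remember them (from memory, so treat as approximate):

```python
# Bandwidth per theoretical TFLOP, Vega 64 vs Radeon VII (approximate specs).
cards = {
    "Vega 64":    {"cus": 64, "bw_gbps": 484,  "tflops": 12.7},
    "Radeon VII": {"cus": 60, "bw_gbps": 1024, "tflops": 13.4},
}
for name, c in cards.items():
    print(f"{name}: {c['bw_gbps'] / c['tflops']:.0f} GB/s per TFLOP "
          f"({c['cus']} CUs, {c['bw_gbps']} GB/s)")
```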
 
Some games may not use AVX because the instructions use too much power? That is one of the dumbest things I've heard. SIMD is essential to extracting maximum performance from a CPU.
 
3. Cerny also did not say the clocks would be lowered by 10%. Cerny said POWER will be lowered by 10%, but CLOCKS will only drop a couple of percent as a result.

There seem to be a lot of people who either didn't watch the presentation, or who watched it and didn't understand it.

Why would you need to lower power in the first place, is my question? Why not run at consistent clock speeds? The answer is quite obvious: thermals. Ergo, these are boost clocks and will not be constant. I'll change my name to Sally if the PS5 can run at those clocks even half the time. Also, I believe they haven't figured out their cooling solution yet either. Yes, wait and see, sure, but Sony is being dishonest with the spec numbers; that's quite obvious.
 
Cerny stressed that the clocks won't vary with temperature or cooling performance; they'll vary with power consumption.
The bottleneck for not keeping max clocks 100% of the time will be the PSU and voltage controls, not the temperature.

The difference between throttling down due to temperature and due to power consumption is that with the latter you can guarantee the same performance across all consoles (they're all using the same PSU and VRMs), so devs can predict system performance.

Of course, I'm guessing that in a silly edge case, if you put the console in an oven, it'll throttle from the temperature (like all SoCs in consoles from the last 15+ years, AFAIK?). I bet almost all home consoles would eventually throttle in a 40ºC room with no airflow, and no home console manufacturer designs for that scenario.
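Conceptually, something like this (my own sketch of a deterministic power-budget governor, not Sony's actual logic; the budget number is made up):

```python
# Deterministic power-based clock control, sketched: clocks depend only on a
# power figure modeled from workload activity, never on the measured
# temperature of this particular console, so every unit behaves identically
# for the same workload.
POWER_BUDGET_W = 200.0  # illustrative, not the real PS5 budget
MAX_CLOCK_GHZ = 2.23

def next_clock_ghz(modeled_power_w: float, clock_ghz: float) -> float:
    if modeled_power_w > POWER_BUDGET_W:
        return clock_ghz * 0.98                   # shave a couple of percent
    return min(clock_ghz * 1.01, MAX_CLOCK_GHZ)   # creep back toward the cap
```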



I don't know if Sony themselves used the word "boost" for the clocks, or if DF was the one to loosely use that term.
In reality it's not a boost like we've seen on PC GPUs; it's throttling down from a typical value in edge cases, based on power consumption.
Sony did use the term "boost" for backwards compatibility, though: that's where the PS4 Pro runs PS4 games with the GPU at 911MHz instead of the non-boost 800MHz.
In PS5's Boost BC mode, the console will be running PS4 Pro games with the GPU clocked at 2.23GHz instead of the PS4 Pro's 911MHz, together with the much faster CPU and memory bandwidth.
It'll be interesting to see e.g. Death Stranding running at >60 FPS on 120Hz HDMI 2.1 TVs.
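The raw clock ratios alone show how much headroom Boost BC has (assuming the GPU really holds 2.23GHz in that mode):

```python
# GPU clock ratios across the BC modes discussed above.
ps4_pro_base, ps4_pro_boost, ps5_cap = 800, 911, 2230  # MHz
print(f"PS4 Pro boost vs base: {ps4_pro_boost / ps4_pro_base:.2f}x")  # ~1.14x
print(f"PS5 vs PS4 Pro boost:  {ps5_cap / ps4_pro_boost:.2f}x")       # ~2.45x
```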



This will be an important point to know, since there are a lot of PC CPUs around with no AVX2 compatibility (my old 10-core Xeon E5 v2 Ivy Bridge may finally need to be replaced), and others with poor AVX2 performance (Zen 1/1.5 included).
Zen 2 has no AVX-512, BTW, so that one isn't coming to new engines on consoles for sure.

Cerny himself said that the PS5 will run at its boost clocks all the time, except when a game hammers the chips too hard.

So I'm confused: are all of these "boost" on PS5?

1. PS5 default clock (boost, except when power budget limits are hit) running PS5 games.

2. PS5 default clock running PS4 games (even more boost than PS4 Pro running PS4 games). This has only been tested on the top 100 games.

3. PS4 games running at PS4 Pro boost clocks via backwards compatibility on PS5. Does this mean the PS5 will run at a lower clock than the normal boost? Or will they cut the CU count to maintain compatibility? Both?
 