Bloomberg on PS5 yields, orders, and price ranges [2020-09-14]

This is interesting, the IO die is much bigger than the GPU + CPU.

[Image: "Understanding the PS5's SSD — deep dive into next-gen storage tech" slide]

[Image: photo of the chip]
So the CPU and GPU on PS5 are discrete chips? I thought it was an APU
 
Surely that slide from Cerny doesn't have realistic proportions?

The IO on XSX is tiny.

Where does that photo of the chip come from?
 
They probably would have had better results (and an easier life) with 18 x 3 = 54 CUs at a fixed max frequency of 2 x 911 MHz ≈ 1.82 GHz... I really don't understand why they went so narrow and fast...
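For reference, the back-of-the-envelope math for that trade-off (the 54 CU / ~1.82 GHz configuration is just the hypothetical from above, nothing announced; the formula is the standard FP32 FLOPS calculation):

```python
# FP32 throughput = CUs x 64 ALUs per CU x 2 FLOPs per clock (FMA) x clock
def teraflops(cus, ghz):
    return cus * 64 * 2 * ghz / 1000.0

print(f"PS5 as shipped (36 CU @ 2.23 GHz):  {teraflops(36, 2.23):.2f} TF")   # ~10.28
print(f"Hypothetical wide (54 CU @ 1.82 GHz): {teraflops(54, 1.822):.2f} TF") # ~12.59
```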

The upclocking of the GPU is obviously a reaction to the XSX's 12 teraflops. Together with the variable frequency, Sony did everything to come close to that mark and not look weak in comparison. Cerny even slipped that if you drop the power consumption by 10% you only lose about 1-2% of performance. But you can flip that and say that if you add 10% of power you only gain 1-2% of performance, which is a horrible performance-to-watt ratio. And there were leaks that RDNA 2 did not scale really well with upclocking. In short, Sony made a GPU, they are stuck with it, and they are now upclocking it to hell to look comparable in power to the XSX. I personally think they should drop the act, downclock to normal levels at about 2.0 GHz, and call it a day.
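A toy model shows why that flip reads so badly (the power-law exponent below is an assumption picked to match Cerny's numbers, not a published figure): dynamic power scales roughly as f·V², and near the top of the voltage/frequency curve V itself climbs steeply with f, so effective power behaves like f^k with k well above the textbook 3:

```python
# Assumed model: power ~ frequency^k. Solve for the frequency a given
# power budget allows. k is invented to illustrate the 10%-power /
# ~1-2%-performance claim, not taken from any real RDNA 2 data.
def freq_for_power(power_fraction, k):
    return power_fraction ** (1.0 / k)

for k in (3, 5, 7):
    f = freq_for_power(0.90, k)  # cut power by 10%
    print(f"k={k}: frequency scales to {f:.3f} -> {100 * (1 - f):.1f}% perf loss")
# k=3: ~3.4% loss; k=5: ~2.1%; k=7: ~1.5%. The steeper the curve,
# the cheaper it is to shed power by dropping a little frequency,
# and the more expensive the last few percent of clock become.
```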
 
PC gamers welcome these techniques.

Yes, but on the PS5 it's the other way around: they state max clocks instead of min. A 2080Ti is listed as a 13.5TF part, but if conditions allow, it will boost higher, not lower.

With the PS5, they state 10.2TF but it can be lower. This wouldn't work out well with PC gamers.
 
One of the few things that we know about RDNA2 is that it scales above 2 GHz (Lisa Su: "the first multi-gigahertz GPU").
 
Yes, but on the PS5 it's the other way around: they state max clocks instead of min. A 2080Ti is listed as a 13.5TF part, but if conditions allow, it will boost higher, not lower.

With the PS5, they state 10.2TF but it can be lower. This wouldn't work out well with PC gamers.
I got the idea that the PS5 runs at boosted clocks by default and down clocks the CPU and GPU according to requirements.
I never understood how it works for the PS5 to be honest.
 
So we're suddenly spitting out FUD in technical discussions using powerpoint slides as sources for die size?

Keep in mind that 11 million is actually roughly what the PS4 managed in the same time frame.
As in: by the end of March they had sold 7.5 million units to retail in that fiscal year. Add in WIP and you'll probably get a maximum of 10 million chips or so done.
 
The upclocking of the GPU is obviously a reaction to the XSX's 12 teraflops. Together with the variable frequency, Sony did everything to come close to that mark and not look weak in comparison. Cerny even slipped that if you drop the power consumption by 10% you only lose about 1-2% of performance. But you can flip that and say that if you add 10% of power you only gain 1-2% of performance, which is a horrible performance-to-watt ratio. And there were leaks that RDNA 2 did not scale really well with upclocking. In short, Sony made a GPU, they are stuck with it, and they are now upclocking it to hell to look comparable in power to the XSX. I personally think they should drop the act, downclock to normal levels at about 2.0 GHz, and call it a day.
Same opinion.
 
Yes, but on the PS5 it's the other way around: they state max clocks instead of min. A 2080Ti is listed as a 13.5TF part, but if conditions allow, it will boost higher, not lower.

With the PS5, they state 10.2TF but it can be lower. This wouldn't work out well with PC gamers.

I'm not sure if this comparison is strictly useful. With a fixed console platform looking for uniformity you'd want a fixed upper bound. I'm sure that if there were no limiters in place, some PS5s would be able to exceed those specs thanks to a combination of favorable silicon and environmental conditions, but that wouldn't really be useful from an end-user perspective, as presumably the games are hard-coded to certain performance and graphics parameters.

Also, it's really only with Nvidia's GPUs that the TF calculation is based on a rather conservative advertised frequency that is frequently exceeded. AMD's GPUs are not like that: their advertised values track typical frequencies extremely tightly (and it's only recently that they stopped being hard upper bounds). The marketed upper-bound CPU frequencies from both Intel and AMD are likewise hard "up to" values (and they mean different things in terms of tolerances).
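To put rough numbers on that asymmetry (the "typical" clock below is a ballpark from reviews, not an official spec):

```python
# FP32 TF = shader cores x 2 FLOPs per clock (FMA) x clock in GHz / 1000
def tf(cores, ghz):
    return cores * 2 * ghz / 1000.0

# 2080 Ti: the advertised figure uses the reference 1545 MHz boost clock...
print(f"2080 Ti advertised: {tf(4352, 1.545):.2f} TF")   # ~13.45 TF
# ...but GPU Boost typically runs well above that in practice
# (~1.8 GHz is a rough ballpark; varies by card and cooling).
print(f"2080 Ti typical:    {tf(4352, 1.8):.2f} TF")     # ~15.67 TF

# PS5: 36 CUs x 64 ALUs = 2304 shaders, rated at the 2.23 GHz *ceiling*;
# real clocks sit at or below it.
print(f"PS5 advertised:     {tf(36 * 64, 2.23):.2f} TF") # ~10.28 TF
```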

I got the idea that the PS5 runs at boosted clocks by default and down clocks the CPU and GPU according to requirements.
I never understood how it works for the PS5 to be honest.

I feel ultimately this is an issue of optics/messaging (in terms of the marketing), as well as a "glass half full" or "glass half empty" view on things.

The practical effect on the end user experience is effectively negligible.
 
I got the idea that the PS5 runs at boosted clocks by default and down clocks the CPU and GPU according to requirements.
I never understood how it works for the PS5 to be honest.
There is an abstract model of the GPU & CPU where each point on a power-draw curve (measured through a lot of sensors that are actually instruction/activity counters) corresponds to a certain sustainable frequency. This abstract model is then mapped onto the real silicon through adjustments. Let me explain: the real silicon's curve is characterized, and the gaps are accounted for but not exposed to the software. In the end the silicon is somehow "abstracted", and we get predictable behavior across all chips... I mean, all the chips that have enough quality. So we have a variable frequency but steady performance across all the chips. This is how I understood it.
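A minimal sketch of how I read that, with made-up counter weights and a made-up power-to-frequency table (Sony hasn't published the real model):

```python
# Toy model of a deterministic, activity-based clock governor: power is
# *estimated* from activity counters using one fixed reference model, so
# every console picks the same frequency for the same workload, regardless
# of how good its individual piece of silicon is. All values are invented.

WATTS_PER_UNIT = {"alu_ops": 0.8, "mem_ops": 1.4, "tex_ops": 1.1}  # model weights
FREQ_CURVE = [(180, 2.23), (200, 2.10), (220, 1.95), (float("inf"), 1.80)]

def modeled_power(counters):
    # Same formula on every unit: no real power sensor in the loop.
    return sum(WATTS_PER_UNIT[k] * v for k, v in counters.items())

def pick_frequency(counters):
    power = modeled_power(counters)
    for budget, ghz in FREQ_CURVE:
        if power <= budget:
            return ghz

# A light scene stays at the 2.23 GHz cap; a heavy one drops, and it
# drops identically on every console.
print(pick_frequency({"alu_ops": 100, "mem_ops": 40, "tex_ops": 30}))  # 2.23
print(pick_frequency({"alu_ops": 150, "mem_ops": 70, "tex_ops": 60}))  # 1.80
```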
 
So we're suddenly spitting out FUD in technical discussions using powerpoint slides as sources for die size?

Keep in mind that 11 million is actually roughly what the PS4 managed in the same time frame.
As in: by the end of March they had sold 7.5 million units to retail in that fiscal year.
Yes, but don't you see the different situation?
The PS4 had only sold 7.5 million units (or whatever) because they only produced the PS4 in "low" numbers. Coming off the PS3, they couldn't know (before production started) that it would sell like hot cakes. Now they have all the "hype" on their side and a big userbase; they expect to sell much more, much faster, so they upped production from the beginning. You don't change those contracts in a few weeks; this gets done months (sometimes years) in advance. So if they communicated a few months ago that they had ramped up production, the planning for that happened months earlier. You must also get all the parts (like GDDR6 memory), and those must also be ordered months before you need them. Covid-19 didn't make this process easier. It's more that production was interrupted and work still had to be done when production went back to "normal".

They communicated 10m by the end of 2020 (not their fiscal year; December 2020). Now they say 11m by the end of March. Well, that is quite a difference. The only reason for that can be yield problems (or whatever) in chip production.
After all:
- chips have to work (hence the 4-compute-unit buffer)
- chips have to reach their target frequencies
- chips have to reach that within their given power target (else SmartShift wouldn't really work or be needed)

Those are many more variables than most other chips have to deal with. E.g., CPUs and APUs have boost frequencies that they might never reach, but here the top frequency must be reached. And if CPUs and APUs don't meet their targets, they can still be sold as other products (lower clocked, partially deactivated, ...).

MS should have the same problems, but they chose a lower frequency, so more chips can hit that target frequency. And they seem not to limit the power (as the PS5 chip does), so their power supply must deliver whatever is needed (well, within some restrictions) and their cooling solution must cool it. So the XSX chip has far fewer variables that influence the output.
That doesn't necessarily have to be a bad thing over time. If production output increases and the chips get better, production costs go down. This might just be something Sony must go through because of the choices they made.
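Those pass/fail conditions stack multiplicatively, which a toy Monte Carlo makes obvious (every distribution and threshold below is invented; the real numbers are unknown):

```python
import random

random.seed(0)

# PS5's die has 40 CUs with 36 enabled, i.e. up to 4 defective CUs tolerated.
# Defect rate, frequency spread, and power spread are all invented.
def chip_passes():
    defective_cus = sum(random.random() < 0.03 for _ in range(40))
    fmax_ghz = random.gauss(2.30, 0.08)    # best stable clock of this die
    watts_at_target = random.gauss(200, 15)  # power drawn at the 2.23 GHz target
    return (defective_cus <= 4             # enough working CUs
            and fmax_ghz >= 2.23           # reaches the top frequency
            and watts_at_target <= 220)    # inside the power budget

N = 100_000
passed = sum(chip_passes() for _ in range(N))
print(f"toy yield: {100 * passed / N:.1f}%")
# Each extra condition cuts the yield multiplicatively; a lower clock
# target (XSX-style) relaxes the fmax test and passes more dies.
```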
 
IF the Bloomberg news is true (they're not always right, far from it), we don't know yet if it's linked to the clock speed... or even to the GPU part of the SoC. I'll be curious about XBSX yields, but I guess we'll never get those numbers...

Sony is (if true) in this situation because of anticipated high demand. MS, on the other hand....
 
I'm not sure if this comparison is strictly useful. With a fixed console platform looking for uniformity you'd want a fixed upper bound....
The practical effect on the end user experience is effectively negligible.

We hope so. But we actually don't know how this system behaves after a while, under so many different user conditions. One thing is sure: the XSX silicon (and probably the cooling solution too) is the same design that goes into server blades... so something really reliable, built for thousands and thousands of hours (I mean even decades) of continuous service... We can't say that for sure about the PS5... I don't want to say the PS5 is bad. I'm saying it has much more complex power management, and that power management is tied to performance. This worries me. Maybe my worries are not well founded... but it's my money I'm about to spend. So I'll let the PS5 rest and go with the XSX.
 
Yes, but on the PS5 it's the other way around: they state max clocks instead of min. A 2080Ti is listed as a 13.5TF part, but if conditions allow, it will boost higher, not lower.

With the PS5, they state 10.2TF but it can be lower. This wouldn't work out well with PC gamers.
Nvidia and Intel don't use the kind of adaptive clocking system that AMD has used since around 2013, starting with Steamroller. It's a very common technique for PC gamers, especially those on Ryzen CPUs.

I got the idea that the PS5 runs at boosted clocks by default and down clocks the CPU and GPU according to requirements.
I never understood how it works for the PS5 to be honest.
IIRC, the voltage in an ideal processor wouldn't fluctuate when activity occurs. In real silicon, however, it fluctuates when current flows: voltage drops when current is drawn and rises when the draw stops. The drops and rises are bigger when these events happen faster and smaller when they happen slower (that's the inductive part of the power delivery network, V = L·di/dt).
Viewed over a long period of time the overall voltage is pretty much constant, but it's crucial for the processor that the voltage never drops below a minimum threshold, or it stops functioning properly.
To guarantee that the voltage never drops below that threshold, the overall voltage for the whole system is in most cases (other techniques and workarounds also exist) set higher than needed for the majority of the time, thus producing more heat than needed (for the majority of the time).
AMD instead uses adaptive clocking, which dynamically adjusts the cycle time (e.g., briefly decreasing the frequency) to tolerate the fluctuation, without raising the voltage or significantly lowering the threshold. Response latency is critical for an adaptive clocking system: the faster the system can respond, the more the voltage margin can be cut, and therefore the greater the power savings. The latency on current Ryzen processors is as low as 2 or 3 cycles, so it's really, really fast.
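A toy software version of that clock-stretching idea, just to show the mechanism (real implementations do this in hardware at cycle granularity; all numbers here are invented):

```python
# Toy droop detector + clock stretcher: when the supply voltage sags below
# a trip point, stretch the next cycles (run slower) instead of carrying
# a permanent voltage guard-band. All numbers invented for illustration.
V_MIN, V_TRIP = 0.90, 0.95   # failure threshold, stretch threshold (volts)
BASE_PERIOD_NS = 0.5         # 2 GHz nominal clock

def next_period(v_now):
    """Stretch the clock period proportionally to the droop depth."""
    if v_now >= V_TRIP:
        return BASE_PERIOD_NS
    # Deeper droop -> longer cycle, so timing still closes at V_MIN.
    droop = (V_TRIP - v_now) / (V_TRIP - V_MIN)
    return BASE_PERIOD_NS * (1.0 + 0.5 * min(droop, 1.0))

# A di/dt event: voltage sags for a few cycles, then recovers.
for v in (1.00, 0.99, 0.93, 0.91, 0.94, 0.98, 1.00):
    print(f"V={v:.2f} -> period {next_period(v):.3f} ns")
# Only the few droopy cycles run slow; the average frequency barely moves,
# which is why dropping the guard-band costs so little performance.
```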
 
I got the idea that the PS5 runs at boosted clocks by default and down clocks the CPU and GPU according to requirements.
I never understood how it works for the PS5 to be honest.

Yes, it's the same kind of tech as PC parts' boost then, just reversed.

Nvidia or Intel don't use the adaptive clocking system like AMD uses since around 2013 starting with steam roller. It's a very common technique for PC gamers if they especially use Ryzen CPUs.

Yes, but that's not what I mean. Aside from that, yes, even Intel and NV use adaptive clocks, as in boosting when conditions allow for it.
A 2080Ti is advertised at 13.45TF; you know what you get. This thing will clock and perform way above that if conditions allow.
If the Sony GPU's terms applied to a 2080Ti, that GPU could be advertised as a 15/16TF part that boosts downwards from there.

I think it's just different from what I'm used to in the PC space. Of course a 2080Ti can throttle when you block its airflow in a 45-degree-Celsius home, but aside from that, you get that 13.5TF GPU. If AMD starts selling RDNA2 dGPUs this year, say an RX6700 advertised as a 20TF raw-performance part with the notion that it can boost down from there (unknown figures), how would that work out PR-wise, let alone in the minds of PC gamers?

I.e., the PS5 solution is the other way around: it's always boosted, and when conditions get too heavy, it clocks down.

I find the notion that "we have had this in the PC space for ages" a bit strange, since Sony's customized SmartShift tech is doing things differently from what we have seen just about anywhere so far.
Also, the PS5's boosting is based on power/load, while for a 2080Ti at least, it's based on things like thermals. The PS5 should not be downclocking based on thermals.
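The difference between the two policies in a nutshell (illustrative numbers only; nothing here is from Sony or Nvidia):

```python
# A thermal governor reacts to measured die temperature, so the same game
# can clock differently in a hot vs cool room; a workload-power governor
# reacts to modeled activity, which is room-independent.

def thermal_clock(die_temp_c):
    return 1.9 if die_temp_c < 84 else 1.6      # throttle past a temp limit

def workload_clock(modeled_watts):
    return 2.23 if modeled_watts <= 200 else 2.0  # deterministic per workload

for ambient in (20, 35):                # cool room vs hot room
    die_temp = ambient + 55             # crude: die runs ~55 C over ambient
    print(f"ambient {ambient} C: thermal -> {thermal_clock(die_temp):.2f} GHz,"
          f" workload -> {workload_clock(190):.2f} GHz")
# The thermal policy flips between rooms; the workload policy gives the
# same clock for the same scene everywhere, which is what a console wants.
```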
 