Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

So I suppose that normally both CPU and GPU won't be fully utilized at the same time, which allows the clock reduction to be only around 2%, because the CPU at that moment is not at full occupancy.

At that rate, I wouldn't even have bothered mentioning it, to be honest. It's such a small difference that, in performance numbers, no one is going to notice it. It would have been better to just run a 2% lower clock from the beginning then, even if only for marketing purposes, and avoid the complexity and the occasional 10% higher power draw.
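For rough intuition on where that trade comes from: dynamic power scales roughly with frequency times voltage squared, and voltage has to climb with frequency near the top of the curve, so a small clock cut can buy a disproportionate power saving. A minimal Python sketch; the voltage values are made-up illustrative assumptions, not real PS5 figures:

# Rough dynamic-power model: P ~ C * V^2 * f, capacitance held constant.
# Voltages below are illustrative assumptions, not actual PS5 figures.
def relative_power(freq_ghz, volts, ref_freq=2.23, ref_volts=1.10):
    return (freq_ghz / ref_freq) * (volts / ref_volts) ** 2

print(relative_power(2.23, 1.10))          # 1.00 - baseline
print(relative_power(2.23 * 0.98, 1.055))  # ~0.90 - 2% lower clock plus ~4% lower voltage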

If both were fully active across all of their transistors, Cerny said the speeds would be 3 GHz and 2 GHz respectively.

Where did he mention those figures?
 
It can. The PS2 was milked DRY by this generation's standards.
Nothing will come even close to drawing that amount of power/performance out of the hardware.

Is there a reference to that high level of utilization being common? What time frame had that high level of utilization, given that there were indications as late as 2003 that average utilization was far below the quoted figures?
https://forum.beyond3d.com/threads/ps2-performance-analyzer-statistics-from-sony.7901/

The L2 is tied to each 64-bit channel. A 320-bit interface requires 5 x 64-bit channels. There is no requirement to have a power-of-2 number of shader arrays. Each array would contain 16 ROPs and there are 5 arrays, just with an unequal number of CUs in them: 3 have 10, 2 have 8. Look at the die shot. I don't know how that leaves no chance.
The RBEs and rasterizers subdivide the screen into tiles that they are individually responsible for. There's more straightforward tiling at 1, 2, and 4 groups. Is there a good tiling pattern that gives equal utilization to 5 clients?
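To make the "more straightforward tiling" point concrete, here is a toy Python sketch: with 4 groups the owning RBE falls straight out of the low bits of the tile coordinates (a clean 2x2 checkerboard), while 5 groups need something like a modulo interleave with no such tidy 2D repeat. The assignment scheme here is an assumption for illustration, not AMD's actual pattern.

# Toy screen-space tile ownership for N RBE/rasterizer groups.
def owner_4(tile_x, tile_y):
    # power-of-two case: ownership is just the low bits of the coordinates
    return ((tile_y & 1) << 1) | (tile_x & 1)

def owner_5(tile_x, tile_y):
    # 5 clients: a diagonal modulo interleave (one possible scheme)
    return (tile_x + tile_y) % 5

for y in range(4):
    print([owner_4(x, y) for x in range(8)], [owner_5(x, y) for x in range(8)])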

As far as the L2 goes, the RDNA whitepaper states 4 L2 slices per 64-bit controller. This may scale to 5, although a counterexample may be the way the Xbox One X distributed 12 32-bit channels across 4 L2 slices. The rule for the discrete implementations might be modified in a custom chip.


No they're not tied. AMD's GPU L2 connects to Infinity Fabric, not memory controllers.

(As a side note, I know everyone keeps referring to them as 64-bit memory controllers, but are they really? At least certain AMD slides suggest they're actually 16-bit controllers, which also fits the fact that GDDR6 uses 16-bit channels. The other option would be 64-bit controllers split into 4x16 "virtual" memory controllers, but why list them as 16 separate MCs (for a 256-bit bus) then?)

The RDNA whitepaper describes the controllers as being 64-bit even though the individual channels are 16-bit. Similarly, it seems like the HBM GPUs combine multiple channels into a controller instead of having 8 separate controllers per stack.
That may mean that the individual channels have controller hardware with a component shared between them.
From a logical standpoint, even if there is an IF fabric connecting them, the L2 slices have assigned address ranges that would make 4 x 16-bit channels send all their traffic to one target slice.
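As a concrete (and purely hypothetical) illustration of that address-range steering, here is a minimal Python sketch of a 320-bit bus interleaved across 20 x 16-bit channels, with each group of 4 channels (one 64-bit controller) fronted by its own L2 slices. The granule size, grouping, and slice hash are assumptions for illustration, not the real RDNA hash.

# Hypothetical address interleave for a 320-bit GDDR6 bus.
GRANULE = 256               # bytes per interleave step (assumed)
CHANNELS = 20               # 320-bit = 20 x 16-bit channels
CHANNELS_PER_CTRL = 4       # 4 x 16-bit = one 64-bit controller
SLICES_PER_CTRL = 4         # per the RDNA whitepaper figure quoted above

def channel_of(addr):
    return (addr // GRANULE) % CHANNELS

def controller_of(addr):
    return channel_of(addr) // CHANNELS_PER_CTRL

def l2_slice_of(addr):
    # spread a controller's traffic over its own slices via higher address bits
    return controller_of(addr) * SLICES_PER_CTRL + (addr // (GRANULE * CHANNELS)) % SLICES_PER_CTRL

for addr in range(0, 6 * GRANULE, GRANULE):
    print(hex(addr), channel_of(addr), controller_of(addr), l2_slice_of(addr))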
 
At that rate, I wouldn't even have bothered mentioning it, to be honest. It's such a small difference that, in performance numbers, no one is going to notice it. It would have been better to just run a 2% lower clock from the beginning then, even if only for marketing purposes, and avoid the complexity and the occasional 10% higher power draw.



Where did he mention those figures?
Well, if he was anything, he was ultra-sincere about the specs.
He mentioned it right in the part where he started talking about the clocks.
 
Then one can wonder why even bother. Just go with a 2% lower clock from the get-go.



Or just leave out mentioning it entirely, if it never happens anyway. The XSX never downclocks, so they don't mention it; it's sustained, then.
There's more to it, I'm sure.

Why should you decrease the clock by 2 to 3 percent forever because of something that will happen a couple of times in a console's lifetime?

And Xbox never downclocking doesn't mean it will not overheat. Heat is related to power usage. Fixed clocks lock the power usage caused by the clock, but not the power caused by workloads.
So Xbox can go over its power budget too... the difference is it won't downclock... it will overheat.
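That workload dependence is just the activity factor in the usual dynamic-power relation, P ~ activity * C * V^2 * f: fixed clocks pin V and f, but the activity (how many transistors actually switch each cycle) is set by the code being run. A minimal sketch with arbitrary constants:

# Dynamic power ~ activity * C * V^2 * f. With fixed V and f (XSX-style),
# power still swings with the workload's activity factor.
def dynamic_power(activity, cap=1.0, volts=1.0, freq_ghz=1.825):
    return activity * cap * volts ** 2 * freq_ghz  # arbitrary units

print(dynamic_power(0.5))   # typical game workload (assumed activity)
print(dynamic_power(0.9))   # power-virus style workload: same clocks, ~1.8x the power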
 
This is probably a fairly accurate assessment of what is to come.
At 33 ms frame times, I don't expect there to be much CPU usage, so for the most part I do expect the GPU to run at its capped rate.

I think this is where this setup will be fine; the setup for 60 fps or greater titles is the sort of scenario I'm looking at.
I hope that most titles on PS5 are 60 fps though (personal preference).
Well, in a CPU situation like the one in the Naughty Dog PDF on porting TLOU, it would be hard to hit max clocks within a 16 ms frame time, as the CPU is almost 100% busy. But Zen is so much more capable and, as said, its transistors that are most expensive in power terms (AVX 256) won't be used much, so I doubt it will often be near 100% usage.
 
Why should you decrease the clock by 2 to 3 percent forever because of something that will happen a couple of times in a console's lifetime?

Or why increase by 2%, which is unnoticeable in practical performance? I'd rather have 100% sustained combined CPU/GPU use permanently than that 2% upclock.
There's more to it, like DF mentioned here before.
 
He prefers to believe that the PS5 GPU base clock is well below the XSX target clock, despite the latter's bigger APU. What can you do?

There is a common opinion that the PS5 will need expensive cooling. What about the XSX's thermal output? It's the biggest AMD GPU so far and fairly highly clocked. I don't think MS can cut any corners there.
Weren't some people here concerned about that when the pube leak was posted? Bottom intake obstructed by the base, small slits on the bottom, mesh on the back, and now we know it's a 130mm fan as an exhaust.
Maybe RDNA2, or at least the XSX, is more efficient than people think.
 
It's such a small difference that, in performance numbers, no one is going to notice it.

I'm not sure, do you want him to lie? Or what?
Or, like, "lie by omission"?

It would have been better to just run a 2% lower clock from the beginning then

Why is that?

2.23ghz 10% of the time
2ghz 80% of the time
<2ghz 10% of the time

Let's put that into perspective.
RDNA1 chips happily run at 2 GHz sustained, drawing ~290W of power.
We do know from AMD that RDNA2 is "50% more power efficient".
So what do we have here, 200W?
And can we have 2.23 GHz at 225W then? Why not?
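Following that arithmetic through (taking AMD's 50% perf-per-watt marketing claim at face value, which is itself an assumption), a back-of-envelope sketch:

# All inputs are the figures quoted above or AMD marketing claims, not measurements.
rdna1_power_at_2ghz = 290.0              # W, RDNA1 figure quoted above
perf_per_watt_gain = 1.5                 # AMD's "50% more efficient" claim

rdna2_power_at_2ghz = rdna1_power_at_2ghz / perf_per_watt_gain
print(rdna2_power_at_2ghz)                        # ~193 W for the same work at 2 GHz

# Scaling 2.0 -> 2.23 GHz: the answer depends heavily on how voltage behaves.
print(rdna2_power_at_2ghz * (2.23 / 2.0))         # ~215 W if power scaled linearly with clock
print(rdna2_power_at_2ghz * (2.23 / 2.0) ** 3)    # ~268 W if voltage also climbs (P ~ f^3)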
 
Why should you decrease the clock by 2 to 3 percent forever because of something that will happen a couple of times in a console's lifetime?

And Xbox never downclocking doesn't mean it will not overheat. Heat is related to power usage. Fixed clocks lock the power usage caused by the clock, but not the power caused by workloads.
So Xbox can go over its power budget too... the difference is it won't downclock... it will overheat.

It won't overheat unless the cooling system fails or it's used in an extreme temperature environment. Same as PS5.
 