But he didn't say that now, did he? He also stated that it would maintain those clocks (the clocks referenced in the reveal) most of the time.
So he lied, is what you're saying?
But he didn't say that now, did he?
So I suppose that normally both CPU and GPU won't be fully utilized at the same time, allowing the clock reduction to be only around 2%, because the CPU at that moment is not at full occupancy.
If both had all of their transistors fully active at once, Cerny said the speeds would be 3 GHz and 2 GHz respectively.
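In other words, both chips draw from one shared power budget, and the achievable clock follows from the slice each side gets. A toy sketch of that idea (every number below is invented for illustration; this is not Sony's actual SmartShift logic):

```python
# Toy shared-power-budget model. The linear scaling and all the
# numbers are made up for illustration; real silicon follows a
# nonlinear voltage/frequency curve.
GPU_CAP, GPU_WORST = 2.23, 2.0   # GHz: advertised cap vs. worst-case claim

def clock(power_share, worst, cap):
    # A component granted its full share of the budget holds its cap;
    # contention from the other component pushes it toward worst case.
    return worst + (cap - worst) * min(power_share, 1.0)

# CPU mostly idle -> GPU keeps nearly all the budget and its cap:
print(f"{clock(0.95, GPU_WORST, GPU_CAP):.2f} GHz")  # ~2.22 GHz
# Both sides fully loaded -> GPU slides toward its worst-case clock:
print(f"{clock(0.50, GPU_WORST, GPU_CAP):.2f} GHz")  # ~2.11 GHz
```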
It can. The PS2 was milked DRY by this generation's standards.
There will be nothing that even comes close to that amount of power/perf draw from the hardware.
The RBEs and rasterizers subdivide the screen into tiles that they are individually responsible for. There's more straightforward tiling at 1, 2, and 4 groups. Is there a good tiling pattern that gives equal utilization to 5 clients?

L2 is tied to each 64-bit channel. A 320-bit interface requires 5 x 64-bit channels.

There is no requirement for having a power-of-2 number of shader arrays. Each array would contain 16 ROPs and there are 5 arrays, just an unequal number of CUs in them: 3 have 10, 2 have 8. Look at the die shot. I don't know how that leaves no chance.
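For what it's worth, an equal 5-way split is easy to construct on paper. A quick sketch (purely illustrative; the diagonal mod-5 hash below is my own stand-in, not AMD's actual tile distribution):

```python
# Assign screen tiles to 5 rasterizer/RBE clients with a diagonal
# interleave, so neighbouring tiles belong to different clients and
# a localized burst of geometry still spreads across all five.
from collections import Counter

TILES_X, TILES_Y, CLIENTS = 40, 30, 5

def owner(tx: int, ty: int) -> int:
    # The 2*ty stride shifts each row's pattern, avoiding vertical
    # stripes of a single owner. Any stride coprime with 5 works.
    return (tx + 2 * ty) % CLIENTS

counts = Counter(owner(x, y) for y in range(TILES_Y) for x in range(TILES_X))
print(counts)  # every client owns exactly 240 of the 1200 tiles
```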
No, they're not tied. AMD's GPU L2 connects to the Infinity Fabric, not to the memory controllers.
(As a side note, I know everyone keeps referring to them as 64-bit memory controllers, but are they really? At least certain AMD slides suggest they're actually 16-bit controllers, which also fits the fact that GDDR6 uses 16-bit channels. The other option would be 64-bit controllers split into 4x16-bit "virtual" memory controllers, but then why list them as 16 separate MCs for a 256-bit interface?)
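Either framing gives the same totals; it's just a question of granularity. A quick sanity check (the 16-bit reading is an inference from the GDDR6 spec, which puts two independent 16-bit channels on each device, not something confirmed for this particular die):

```python
# How a 320-bit GDDR6 interface decomposes under each framing.
BUS_BITS = 320
GDDR6_CHANNEL_BITS = 16   # native GDDR6 channel width (two per device)
LEGACY_MC_BITS = 64       # the "64-bit memory controller" convention

print(BUS_BITS // GDDR6_CHANNEL_BITS)       # 20 x 16-bit channels
print(BUS_BITS // LEGACY_MC_BITS)           # 5 x 64-bit controllers
print(LEGACY_MC_BITS // GDDR6_CHANNEL_BITS) # 4 x 16-bit channels per 64-bit MC
print(256 // GDDR6_CHANNEL_BITS)            # 16 MCs for a 256-bit part, as on the slides
```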
Well, if he was anything, he was ultra sincere with the specs.

At that rate, I wouldn't even have bothered mentioning it, to be honest. It's such a small difference that, in performance numbers, no one is going to notice it. It would have been better to have the 2% lower clock from the beginning then, even if it's just for marketing purposes, to say nothing of the complexity and the occasional 10% higher power draw.
Where did he mention those figures?
Not really, no.

He did.
He mentioned it just in the part where he started to talk about the clocks.
Then one can wonder why even bother. Just go with a 2% lower clock from the get-go.
Or just leave out even mentioning it, if it never happens anyway. The XSX never downclocks, so don't mention it; it's sustained then.
There's more to it, I'm sure.
He said exactly that it was hard to keep those mentioned speeds at fixed clocks. So I am imagining those would have been the clocks without the variable rates.

So, 3 GHz for the CPU, and 2 GHz for the GPU?
Well, in a CPU situation like the ND PDF showing the porting of TLOU, it would be hard to get max clocks in a 16 ms frame time, as the CPU is almost 100% busy. But Zen is so superior, and, as said, its transistors that are more expensive in power terms (AVX-256) won't see much use, so I doubt it will often be anywhere near 100% usage.

This is probably a fairly accurate assessment of what is to come.
At 33 ms frame times, I don't expect there to be much CPU usage, so for the most part I do expect the GPU to run at its capped rate.
I think this is where this setup will be fine; the setup for 60 fps or greater titles is the scenario that I'm looking at.
I hope that most titles on PS5 are 60fps though (personal preference)
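To put numbers on those frame times (plain arithmetic, nothing console-specific):

```python
# Frame budget is the reciprocal of the target frame rate.
for fps in (30, 60, 120):
    print(f"{fps:>3} fps -> {1000 / fps:5.1f} ms per frame")
# 30 fps -> 33.3 ms, 60 fps -> 16.7 ms, 120 fps -> 8.3 ms.
# A 30 fps title gives the CPU twice the wall-clock budget per frame
# that a 60 fps title does, which is why it idles more of the time.
```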
Why should you decrease it by 2 to 3 percent forever because of something that will happen a couple of times in a console's lifetime?
Weren't some people here concerned about that when the pube leak was posted? Bottom intake obstructed by the base, small slits on the bottom, mesh on the back, and now we know it's a 130mm fan as an exhaust.

He prefers to believe that the PS5 GPU base clock is well below the XSX target clock, despite the latter's bigger APU. What can you do?
There is a common opinion that the PS5 will need expensive cooling. What about the XSX's thermal output? The biggest AMD GPU so far, and fairly highly clocked. I don't think that MS can cut any corners there.
Was it, though, given that there were indications as late as 2003 that the average utilization was far below the given figures?
It's such a small difference that, in performance numbers, no one is going to notice it.
Would have been better to have the 2% lower clock from the beginning then.
2.23 GHz 10% of the time
2 GHz 80% of the time
<2 GHz 10% of the time
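Taking that hypothetical distribution at face value, the time-weighted average lands well below a flat 2% cut (the 1.9 GHz stand-in for the "<2 GHz" bucket is my own placeholder):

```python
# Hypothetical duty-cycle distribution from the post above.
# 1.9 GHz for the "<2 GHz" bucket is a placeholder assumption.
buckets = [(2.23, 0.10), (2.00, 0.80), (1.90, 0.10)]

avg = sum(ghz * share for ghz, share in buckets)
print(f"time-weighted average: {avg:.3f} GHz")          # 2.013 GHz
print(f"flat 2% reduction:     {2.23 * 0.98:.3f} GHz")  # 2.185 GHz
```

So if the clock really behaved like that, the effective drop would be around 10%, not the ~2% being discussed, which is exactly what that distribution is meant to dispute.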
Why should you decrease it by 2 to 3 percent forever because of something that will happen a couple of times in a console's lifetime?
And the Xbox never downclocking doesn't mean it will not overheat. Heat is related to power usage. Fixed clocks lock in the power usage driven by the clock, but not the part driven by the workload.
So the Xbox can go over its power budget too... the difference is it won't downclock... it will overheat.
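That's just the standard dynamic-power relationship, P ≈ α·C·V²·f: fixing f pins one factor, but the activity factor α is set entirely by the workload. A normalized sketch (toy numbers, not real silicon):

```python
# Dynamic power: P = alpha * C * V^2 * f, where alpha is the fraction
# of transistors switching each cycle (workload-dependent).
# All values normalized; real parts add leakage and V/f coupling.
def dyn_power(alpha: float, c: float = 1.0, v: float = 1.0, f: float = 1.0) -> float:
    return alpha * c * v**2 * f

light = dyn_power(alpha=0.3)  # fixed clock, typical game workload
heavy = dyn_power(alpha=0.9)  # same fixed clock, AVX-style power virus
print(f"{heavy / light:.1f}x")  # 3.0x the power at the exact same frequency
```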
It won't overheat unless the cooling system fails
It will be loud then, like PS4Pro was.
Margins and opportunistic clocking significantly increase the average.

Then one can wonder why even bother. Just go with a 2% lower clock from the get-go.