AMD RX 7900XTX and RX 7900XT Reviews

So, different clocks (as advertised) means it's broken?

Do you know how many clock domains there actually are?
The speculation is that they separated these clock domains in response to a bug that prevents the chip from reaching its clock-speed targets, and then marketed the separate clocks as a performance-enhancing feature (which is technically true). It is a baseless but reasonable theory.
 
If you watch the video from JayzTwoCents, I think you can see that the issue is the power budget. It takes power away from the RAM and puts it into the GPU core so that it can clock over 3.2 GHz. I wonder why AMD is not giving it a bigger power budget.
 
So is it balancing power between chiplets? Are the shader-core chiplets separate from the chiplet with the memory interfaces? Does the cache also have its own clock? I haven't looked at a diagram for RDNA3.
 
So is it balancing power between chiplets? Are the shader-core chiplets separate from the chiplet with the memory interfaces? Does the cache also have its own clock? I haven't looked at a diagram for RDNA3.
The shader cores aren't on chiplets. Everything sits in the one GCD, except for the Infinity Cache and memory controllers, which live on six chiplets (16 MB + a 64-bit memory controller (4x 16-bit) each).
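For the totals, the per-chiplet figures above multiply out to the headline numbers; a trivial tally using only the figures in this post (96 MB and 384-bit match the full Navi 31 configuration on the 7900 XTX):

```python
# Tally of the memory-side chiplets (MCDs) from the figures quoted above.
mcds = 6
cache_per_mcd_mb = 16       # Infinity Cache per chiplet
bus_per_mcd_bits = 64       # one 64-bit controller (4x 16-bit) per chiplet

print(f"Total Infinity Cache: {mcds * cache_per_mcd_mb} MB")    # 96 MB
print(f"Total memory bus:     {mcds * bus_per_mcd_bits}-bit")   # 384-bit
```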
 
The shader cores aren't on chiplets. Everything sits in the one GCD, except for the Infinity Cache and memory controllers, which live on six chiplets (16 MB + a 64-bit memory controller (4x 16-bit) each).

So potentially it's power balancing that favours the die with the shader engines and starves the chiplets with the cache and memory interfaces.

Edit: What is Jay actually doing in that video when he sets the clock? Is he setting a fixed clock? Is he setting a max clock? When you increase the max, does it increase the min, like shifting a range? I haven't had an AMD GPU since the 2000s, so I'm way out of the loop as to how their overclocking works.

Edit: Where I'm going with this is, if you're setting a fixed clock or shifting a range, maybe the shader core clock can't drop low enough to leave power for the chiplets, which causes the VRAM clock to drop. What we've seen in Furmark and some games is that the shader clock will drop really low to stay within the power budget. Maybe Jay has basically disabled its ability to manage power by reducing the shader clock, so it shifts to secondary power management, which is adjusting other clocks.
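To make that guess concrete, here's a purely hypothetical sketch of the two-stage behaviour being described; none of the numbers or thresholds come from AMD, it's just the logic of "throttle the shader clock first, and once that lever is pinned, start pulling the memory clock":

```python
# Hypothetical two-stage power balancer, only illustrating the speculation above.
def balance(power_w, budget_w, shader_mhz, vram_mhz,
            shader_floor_mhz=1500, step_mhz=50, watts_per_step=5):
    """Shed power by dropping the shader clock first; once the shader clock is
    pinned at its floor (e.g. a user-forced minimum), pull the VRAM clock instead."""
    while power_w > budget_w:
        if shader_mhz > shader_floor_mhz:
            shader_mhz -= step_mhz      # primary lever: shader clock
        else:
            vram_mhz -= step_mhz        # secondary lever: memory clock
        power_w -= watts_per_step       # pretend each step saves a fixed amount
    return shader_mhz, vram_mhz

# Forcing a high minimum shader clock (as in Jay's video) leaves only the
# memory clock as a lever:
print(balance(power_w=400, budget_w=355, shader_mhz=2600, vram_mhz=2500,
              shader_floor_mhz=2600))   # -> (2600, 2050)
```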
 
Last edited:
The fact that it can achieve very high clocks on irregular workloads (Blender?) where the CUs might be stalling a lot (i.e. not consuming as much power) could suggest that it is using more power than expected.
OTOH, on gaming workloads clocks are lower because the chip is much better utilized and more likely to be power limited.
 
Add 30 W to 50 W on top of this, as software doesn't read AMD's TBP in its entirety, unlike NVIDIA's.

See here:
 
they separated these clock domains in response to a bug
You know that has to be built into the hardware, right?
I mean, maybe they intended it to be a 1:1 ratio across the clock boundary, or maybe it was supposed to be closer to 2:1 (1:2?), but I don't imagine they built that capability into it just for shits & giggles.
 
I mean, maybe they intended it to be a 1:1 ratio across the clock boundary, or maybe it was supposed to be closer to 2:1 (1:2?), but I don't imagine they built that capability into it just for shits & giggles.
AMD said exactly this: decoupled clocks, with the front end running at 2.5 GHz and the shaders running at 2.3 GHz. That gets a 15% frequency improvement and 25% power savings.
So a mere 200 MHz difference can save 25% of the power? That's a HUGE difference. I think the key to understanding the power characteristics of RDNA3 is in understanding how this difference actually materialized; it wasn't necessary in RDNA1/RDNA2, or even in the power-hungry Vega, so why now?
The fact that it can achieve very high clocks on irregular workloads (Blender?) where the CUs might be stalling a lot (i.e. not consuming as much power) could suggest that it is using more power than expected.
In Blender, it could also be downclocking the memory to achieve sustained high core clocks.
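On the "200 MHz buys 25%?" question above: with the usual dynamic-power relation P ≈ C·f·V², the ~8% frequency drop alone only buys ~8%, so most of the 25% has to come from the lower voltage the slower shader clock permits. The voltages below are invented just to show the shape of the math; only the two clocks are AMD's figures:

```python
# Back-of-the-envelope dynamic power scaling, P ~ f * V^2.
# The relative voltages are assumptions; only the clocks come from AMD's slides.
f_front, f_shader = 2.5, 2.3      # GHz
v_front, v_shader = 1.00, 0.90    # assumed relative voltages (made up)

freq_only_saving = 1 - f_shader / f_front
with_voltage     = 1 - (f_shader * v_shader**2) / (f_front * v_front**2)

print(f"Frequency alone:        {freq_only_saving:.1%}")   # ~8%
print(f"With ~10% less voltage: {with_voltage:.1%}")       # ~25%
```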
 
AMD said exactly this: decoupled clocks, with the front end running at 2.5 GHz and the shaders running at 2.3 GHz. That gets a 15% frequency improvement and 25% power savings.
So a mere 200 MHz difference can save 25% of the power? That's a HUGE difference. I think the key to understanding the power characteristics of RDNA3 is in understanding how this difference actually materialized; it wasn't necessary in RDNA1/RDNA2, or even in the power-hungry Vega, so why now?

And my question is: why run the front end faster in the first place? They've never done it before; in fact, I can't think of any GPU that's done it before, and the only memory I have of split clocks is Nvidia running the shaders at a higher clock on their DX10 cards.

If the front end is good enough to keep the shaders fed in the first place, there should be no need to run it at a higher clock, so why are they?

Maybe that's where the problem lies: the front end just isn't big enough for the rest of the GPU.
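One way to picture that last point: if the front end can only issue so much work per clock, a bigger shader array needs either a wider front end or a faster one to stay fed. A toy feed-rate model, with every per-clock rate invented purely for illustration:

```python
# Toy feed-rate model: the shader array is only as busy as the front end lets it be.
# All per-clock rates are invented; this only shows why a faster front end can
# substitute for a wider one.
def shader_utilization(fe_ghz, fe_work_per_clk, sh_ghz, sh_work_per_clk):
    issued   = fe_ghz * fe_work_per_clk      # work the front end can hand out
    capacity = sh_ghz * sh_work_per_clk      # work the shader array could absorb
    return min(1.0, issued / capacity)

print(shader_utilization(2.3, 100, 2.3, 120))   # same clocks: ~0.83 utilized
print(shader_utilization(2.5, 100, 2.3, 120))   # front end ~9% faster: ~0.91
```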
 
Last edited:
AMD said exactly this: decoupled clocks, with the front end running at 2.5 GHz and the shaders running at 2.3 GHz. That gets a 15% frequency improvement and 25% power savings.
This statement is not logical. It only makes sense if the front end is not delivering enough work for the shaders; of course you then see big efficiency gains when the shaders are downclocked. But we do not know how well the shaders scale with power.

It is interesting that the old front end scaled really well if you look at the rasterizer power. Only the mesh shaders are not satisfying.
[attached charts: polygons6.jpg, 1671529276029.png]
 
Last edited:
460 W, lol

Poor AMD. With Zen 4 they lost the ability to make fun of Intel for making hot chips, and with RDNA3 they lost the efficiency bragging rights.

Intel remains the best for gaming CPUs, especially with Alder Lake and beyond.

The architecture is not efficient; it has been exposed.

Add 30 W to 50 W on top of this, as software doesn't read AMD's TBP in its entirety, unlike NVIDIA's.

See here:

That's why AMD kept the TDP low; it's too embarrassing.
Another AMD thread derailed and bombed by the usual suspects.
 