AMD RX 7900XTX and RX 7900XT Reviews

[Attached screenshots: oc-cyberpunk.png, overclocked-performance.png]

That's a nice OC on the Asus TUF.
 
Wouldn't call the performance improvement of the 7900XTX "a normal generational increase". ~35% faster is not on par with the typical generational leap.
35% faster would be a bit underwhelming but OK in some situations, like if there was no major process leap available. But two years later, with a full architecture overhaul and a jump from 7nm to 5nm, only 35%? :/

There's still absolutely no way this isn't well below what AMD was actually aiming for. Something is wrong with it.
 
Now that I had some time to look at some reviews, my thoughts have shifted a bit since the initial reveal.


The power draw, heat output and price are too high compared to the competition. I am a bit more disappointed now than I was when it was first revealed.

It's not a complete failure, since it's a product I could recommend given the right user and use case, but I was hoping for something a bit better. (This applies to the 7900XTX, not the 7900XT, which should have been unlaunched like the 4080 12GB.)
 
500+ XTXs at Overclockers UK sold in about half an hour. It turns out there were some custom cards, like 10...

EDIT: and pre-sold 300+ before the site could stop offering the card for pre-order...
 
Did you know that if you run Vega 56 and Vega 64 at the same clock speeds, they perform within 1% of each other (even though Vega 64 has more CUs), because the architectural scaling is shit?

I'm willing to bet that if you clocked the 7900XTX and 7900XT at the same clock speed, they would also be within 1% of each other.
 
More interesting is the big jump in fps after 3 GHz. This is strange. Cyberpunk also looks nice: a tie with the 4090 after OC.

It looks like it's just severely power limited. It's not so much the clock speed increase, but that it's a card with 3 x 8-pin power connectors and a power limit that was increased by 15%. Basically, you can get more and more performance if you keep throwing watts at it. I'd like to know how much that OC was actually drawing. At stock it was getting fairly close to a 4090 in power draw.
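For rough context (ballpark spec numbers, not a measurement): each 8-pin PCIe connector is rated for 150 W and the slot for 75 W, so a 3 x 8-pin board can pull up to about 3 × 150 + 75 ≈ 525 W while staying in spec, which is a lot of headroom over the reference card's 355 W board power.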

Like I expected, undervolting seems to be the way to go, but I wonder why increasing the max clock limit crashes the AMD reference card? Maybe the power limit of the card just can't handle it.

Once you have those two, start undervolting until your card is no longer stable—do not touch GPU clocks at this point. As you reduce the voltage, you'll see clockspeed go up automagically, because there is more power headroom to do so. But at some point you'll hit a wall with the clocks, they simply do not go higher, even though they still are 50 MHz below the "clock limit" slider. This is normal for RDNA2 / RDNA3, the "maximum" really seems to be "maximum minus 50 MHz."
At this point increase the max clock slider by 100 MHz, leave the minimum clock slider alone.
Now you can see GPU clocks increasing beyond the previous "wall." Keep reducing voltage until your card becomes unstable. If you hit the frequency wall again, increase max clocks by another 100 MHz.
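If it helps, here's a rough sketch of that tuning loop in Python. It's purely illustrative: the helper functions are stand-ins for the manual steps in Adrenalin (moving the voltage and max-clock sliders, running your stress test), not a real API, and the numbers are made-up placeholders.

```python
# Illustrative only: these helpers stand in for manual steps in Adrenalin
# (moving sliders, running a stress test). They are not a real API.

VOLTAGE_STEP_MV = 10     # drop the voltage slider this much per pass
CLOCK_BUMP_MHZ = 100     # raise the max-clock slider this much at the frequency wall
WALL_MARGIN_MHZ = 50     # RDNA2/RDNA3 seems to top out ~50 MHz below the slider value


def set_voltage(mv: int) -> None:
    print(f"[manual] voltage slider -> {mv} mV")       # placeholder for the Adrenalin slider


def set_max_clock(mhz: int) -> None:
    print(f"[manual] max clock slider -> {mhz} MHz")   # placeholder for the Adrenalin slider


def observed_clock(max_mhz: int) -> int:
    return max_mhz - WALL_MARGIN_MHZ                   # dummy: pretend we always hit the wall


def is_stable(mv: int) -> bool:
    return mv >= 950                                   # dummy: pretend it crashes below 950 mV


def undervolt(start_mv: int = 1100, start_max_mhz: int = 2950) -> int:
    """Drop voltage step by step; bump the max-clock slider whenever clocks hit the wall."""
    mv, max_mhz = start_mv, start_max_mhz
    set_max_clock(max_mhz)
    while is_stable(mv - VOLTAGE_STEP_MV):
        mv -= VOLTAGE_STEP_MV
        set_voltage(mv)
        # Clocks pinned ~50 MHz under the slider? Raise the slider and keep going.
        if observed_clock(max_mhz) >= max_mhz - WALL_MARGIN_MHZ:
            max_mhz += CLOCK_BUMP_MHZ
            set_max_clock(max_mhz)
    return mv  # last voltage that passed the stability check


if __name__ == "__main__":
    print("settled at", undervolt(), "mV")
```

The point is just the order of operations: drop voltage, verify stability, and only raise the max-clock slider once clocks pin ~50 MHz below it.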

There may be some interesting AIB cards with super-high power limits to get big performance gains, if this is how it works, but I'd expect those cards to be a lot more expensive to handle cooling and power. Still may be interesting. Those OC numbers look great.
 
So after all is said and done, I'm kind of intrigued. +30% cores, and in the end... +30% performance? What about all the other architectural changes? Did none of them functionally change anything? The XTX has like +40% memory bandwidth over last gen; is it not helping?
Something's clearly gone horrifically wrong, and it's sort of weird because (anomalous power draw notwithstanding) the XTX is still a good product in the current market and I still might splurge on one because its CPF is overall fantastic, one of the best on the market.

What I'm mostly wondering right now is how much will potentially be fixable via drivers and how much is out-and-out broken in the WGP.

(edit) And with the card already having one of the best CPFs on the market there IS also the very real possibility of drivers "fixing" the performance and restoring what was expected, which would rocket the 7900 XTX into stardom - even if they can't fix the power draw.
 
So after all is said and done, I'm kind of intrigued. +30% cores, and in the end... +30% performance? What about all the other architectural changes? Did none of them functionally change anything? The XTX has like +40% memory bandwidth over last gen; is it not helping?
Something's clearly gone horrifically wrong, and it's sort of weird because (anomalous power draw notwithstanding) the XTX is still a good product in the current market and I still might splurge on one because its CPF is overall fantastic, one of the best on the market.

What I'm mostly wondering right now is how much will potentially be fixable via drivers and how much is out-and-out broken in the WGP.

(edit) And with the card already having one of the best CPFs on the market there IS also the very real possibility of drivers "fixing" the performance and restoring what was expected, which would rocket the 7900 XTX into stardom - even if they can't fix the power draw.
It takes significant architectural effort to add 30% more cores and get 30% more performance.
It ain't easy :)
 
So after all is said and done, I'm kind of intrigued. +30% cores, and in the end... +30% performance? What about all the other architectural changes? Did none of them functionally change anything? The XTX has like +40% memory bandwidth over last gen; is it not helping?
Do you remember Vega 64?

It was years of teasing. It featured NCU aka Next gen CU, high IPC (wow 2x), high frequency, brand new DSBR rendering tech, Next Generation Geometry pipeline, HBM2 epic bandwidth, and a full node transition. In the end, it was 6% faster than Fiji at the same frequency.

It's funny that Vega 64 was also rushed to release. With RDNA 3 AMD promised 2022 - launched on Dec 12th. With Vega they promised Q2 2017 - launched on Jun 27th as 'Frontier Edition definitely not for gamers'.
 
It takes significant architectural effort to add 30% more cores and get 30% more performance.
It ain't easy :)
That's fair. I've been around long enough, you'd think I'd know better than to just think in such simplistic terms, huh?
 