The AMD 9070 / 9070XT Reviews and Discussion Thread

Steve from Hardware Unboxed says AMD paid $50 rebates to distributors (they were planning a $650 price), meaning the price will go up once the rebates end. Furthermore, AIB cards cost more: the XFX model he reviewed will cost $770 and, after tariffs, will reach $850 (timestamped).

Isn't HUB like the anti-Christ for you guys?

DF seem quite happy with FSR4:
 
It's kinda ok.


Raster-wise, it's basically a 7900 XT with better upscaling and better-ish RT performance.

View attachment 13259

I say better-ish because in RT-heavy games it really can't keep up, which tells me their RT pipeline is perhaps more about making better use of what they already had rather than adding new dedicated silicon like Nvidia.

View attachment 13260

Great to see FSR4 = DLSS CNN, which for me was the absolute baseline they had to hit. Great piece by @Dictator

What's happening with Indiana Jones and the Great Circle at 1440p upscaled?
It's not memory limited like the lower-end GeForces. Is the RT really this heavy in this title?


9070indyrt.jpg
 
No, you're more likely to get an out-of-order shader core.
We already get an out-of-order scoreboard for memory loads in RDNA4, which is a huge improvement on its own. Still not perfect, but it does mean that a wave that fully hits the cache won't be stalled behind one that at least partially misses.

That doesn't change the fact that this in no way replaces out-of-order traversal, so it's only increasing utilization, not efficiency.
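A toy way to see what that scoreboard buys (purely illustrative numbers, not real RDNA latencies): with in-order return, a later wave that fully hits the cache still has to wait behind an earlier wave that missed; with out-of-order return it can resume as soon as its own data is back.

```cpp
// Toy model of in-order vs out-of-order load return across two waves.
// Wave B issues first and (partially) misses the cache; wave A issues later
// and fully hits. Latencies are made up for illustration.
#include <algorithm>
#include <cstdio>

int main() {
    const int hit_latency  = 30;   // hypothetical cycles for a full cache hit
    const int miss_latency = 300;  // hypothetical cycles for a partial miss

    // In-order scoreboard: results retire in issue order, so wave A inherits
    // wave B's latency even though its own data came back long ago.
    int wave_a_in_order = std::max(hit_latency, miss_latency);

    // Out-of-order scoreboard (as described in the post above): wave A can
    // resume as soon as its own loads complete.
    int wave_a_ooo = hit_latency;

    std::printf("wave A resumes after %d cycles (in-order) vs %d cycles (OoO)\n",
                wave_a_in_order, wave_a_ooo);
    return 0;
}
```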
but in path tracing and heavy ray tracing the 5070Ti is 63% faster
That's kind of a moot comparison. Titles with PT usually have Nvidia's proprietary denoiser in place for RTX cards only (and its use on AMD cards is forbidden by license terms), while AMD and Intel cards have to run with whatever was available as open source at the time of creation.

Necessary ray counts for the same IQ vary heavily, and so does the performance impact. There's no standardized denoiser interface, so you can't expect improvements without updates to the titles themselves.


That's unlike FSR 3.1 -> 4, which already uses a common loader and permits the driver to swap in a newer revision.

Expect that to change this year though for denoising too.
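For context, the "common loader" bit is just late binding: the game links a thin stub and the actual upscaler (or, eventually, denoiser) module is resolved at run time, so the driver can substitute a newer revision without a game patch. A minimal sketch of that pattern, assuming a hypothetical libupscaler.so exporting upscaler_create - this is not AMD's actual FidelityFX loader API:

```cpp
// Minimal late-binding loader sketch (hypothetical module and entry point).
#include <dlfcn.h>
#include <cstdio>

// Hypothetical entry point an upscaler module would export.
using upscaler_create_fn = void* (*)(int width, int height);

int main() {
    // The driver/user can substitute a newer module by replacing this file or
    // redirecting the path; the game never links against a fixed version.
    void* module = dlopen("./libupscaler.so", RTLD_NOW);
    if (!module) {
        std::printf("no upscaler module found: %s\n", dlerror());
        return 1;
    }
    auto create = reinterpret_cast<upscaler_create_fn>(dlsym(module, "upscaler_create"));
    if (!create) {
        std::printf("module lacks the expected entry point\n");
        dlclose(module);
        return 1;
    }
    // A newer revision can change everything behind the same entry point.
    void* ctx = create(2560, 1440);
    std::printf("upscaler context: %p\n", ctx);
    dlclose(module);
    return 0;
}
```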
 
No, you're more likely to get an out-of-order shader core.
Really? Does AMD have something against implementing more robust dedicated hardware for RT, given that they prefer such a radical change to the shader core to improve RT performance instead? It will be interesting to see how it works out for them. In the meantime RDNA4 lags behind, but in my opinion the performance is acceptable for the price.
 
Titles with PT usually have Nvidia's proprietary denoiser in place for RTX cards only (and its use on AMD cards is forbidden by license terms)
You mean RR? There are only like 5 titles with it and you can turn it off in all of them - it doesn't change the RT performance results.
The standard NRD denoiser, which is used when RR isn't active, is about as "proprietary" as any code supplied by an IHV, and it works on all h/w in the same way.
Edit: Btw, many RT games are seemingly using Intel's Open Image Denoise, which isn't even affiliated with Nvidia, and the performance results are still the same.
 
That's kind of a moot comparison. Titles with PT usually have Nvidia's proprietary denoiser in place for RTX cards only (and its use on AMD cards is forbidden by license terms), while AMD and Intel cards have to run with whatever was available as open source at the time of creation.
No, ComputerBase and PCGH don't test with Ray Reconstruction (NVIDIA's main denoising advantage); if they tested with it, the advantage would be even higher.

PCGH also tested Quake 2 RTX and Half Life 1 path tracing, which use standard denoisers, and the 5070Ti is 40% and 55% faster respectively.
 
This is the first I’m hearing about games using a proprietary vendor locked denoiser outside of ray reconstruction. Is there a source for that claim?

Pretty obvious, isn't it?
Pretty much everyone bolted on FF blocks to do BVH walking and they just didn't.

RDNA 4 is looking pretty good doing traversal on the shaders but it did add FF hardware to help with ray transforms. And of course they’re doing FF intersection. So AMD doesn’t appear to be religiously against FF at all.

In more demanding scenarios RDNA 4 still struggles, so there's still room for them to go FF traversal in the future. SIMD can only do so much with highly divergent rays.
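To put a rough number on the divergence point: in a lockstep SIMD wave, every lane keeps iterating until the deepest ray finishes traversal, so a few hard rays tank utilization for the whole wave. A toy C++ sketch with made-up traversal depths (not a model of RDNA):

```cpp
// Toy model of lockstep SIMD traversal: a wave of N lanes executes the loop
// together, so the iteration count is set by the deepest ray in the wave.
#include <algorithm>
#include <array>
#include <cstdio>

int main() {
    constexpr int kLanes = 32;
    // Hypothetical per-ray traversal depths (BVH steps) within one wave.
    std::array<int, kLanes> steps{};
    for (int i = 0; i < kLanes; ++i)
        steps[i] = (i % 8 == 0) ? 60 : 10;  // a few "hard" rays, many easy ones

    int wave_iterations = *std::max_element(steps.begin(), steps.end());
    long active = 0;
    for (int s : steps) active += s;

    double utilization = double(active) / (double(wave_iterations) * kLanes);
    std::printf("wave runs %d iterations, lane utilization = %.0f%%\n",
                wave_iterations, utilization * 100.0);
    // With these numbers utilization lands around 27%: most lanes sit idle
    // waiting for the four deep rays -- the kind of loss that fixed-function
    // traversal or ray reordering tries to claw back.
    return 0;
}
```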
 
It's the same question again, like with tessellation: do you really need that much ray tracing power? AMD-favoured games look good with RT.
I often read this trope that we're not ready for full ray tracing yet so we shouldn't bother. So yes. MUCH more please.
 
Linux benchmarks:


geometric-mean-of-all-test-results-result-composite-arr9slgb-1.svgz
 
Well, it should do a lot better against the 5070/Ti thanks to FSR4 and the improved RT - it's obviously not on the same level as Nvidia's, but no one sane ever expected that.
The pricing for the XT seems solid at first glance. I'll reserve final judgement until I've looked through all the interesting benchmark results.
The non-XT is a weird one though. It's seemingly just a tad faster than the 5070 - and that in both non-RT and "lite" RT workloads. I'm not sure it's going to sell well against a similarly priced 5070 - or whether it just ends up as an upsell to the +$50 XT model (MSRPs of course, we'll see how they sit in retail). To me it seems almost like they couldn't make it any cheaper, and thus made it unattractive so as not to lose much on it.


It's doing fine unless you switch to heavy RT (PT, Full RT). CP2077 isn't an exception here.

View attachment 13261

Edit: I have to add that "doing fine" here means that the card is now less limited by its RT h/w throughput, which means the bottleneck has shifted back to shading. So in games where this is the case, the "RT comparisons" are likely showing us shading comparisons, not RT comparisons. The PT/Full RT/whatever results are the ones which showcase the relative capabilities of the RT h/w.
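That's essentially Amdahl's law applied to a frame: the smaller the RT slice of frame time, the less a big RT-hardware gap shows up in average FPS. A back-of-the-envelope sketch with invented numbers:

```cpp
// Amdahl-style back-of-the-envelope: how much a faster RT unit changes the
// frame rate depends on how much of the frame is actually spent in RT.
#include <cstdio>

double speedup(double rt_fraction, double rt_speedup) {
    // New frame time relative to old: shading part unchanged, RT part scaled.
    double new_time = (1.0 - rt_fraction) + rt_fraction / rt_speedup;
    return 1.0 / new_time;
}

int main() {
    // "Lite" RT: say 20% of the frame is RT work; doubling RT throughput
    // only buys ~11% more FPS -- the result mostly reflects shading speed.
    std::printf("lite RT:  %.2fx\n", speedup(0.20, 2.0));
    // Path tracing: say 70% of the frame is RT work; the same 2x RT hardware
    // advantage now shows up as ~54% more FPS.
    std::printf("heavy RT: %.2fx\n", speedup(0.70, 2.0));
    return 0;
}
```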

I saw the Gamers Nexus analysis, and one thing he pointed out was that despite the 5070 generating more FPS, the game had a lot of stuttering due to hitting the VRAM limit, while on the 9070 XT the performance was more consistent.
 
alan-wake-2-2560-1440.png


This one is interesting and I can't explain it with anything but the architectural improvements RDNA4 got.
Radeons always did well in AW2 w/o RT but here 9070 XT is beating even the 7900 XTX - which has more shading power and higher memory bandwidth.
It also kinda goes against the general expectation of lower gains w/o RT. The XT is +40% over the GRE here.
Edit: There's a bunch of these where the 9070 XT is above the 7900 XTX, so the architectural gains are definitely impressive.
I'm a bit puzzled by the results where the 9070 XT falls below the 7900 XT though. That variation is hard to explain. Some changes to VOPD maybe?
 
Isn't HUB like the anti-Christ for you guys?
Not just Steve.

Retailers cannot buy AMD Radeon RX 9070 series GPUs for AMD’s MSRP price. Based on what we have heard, this isn’t just a European or North American problem, but a global one.

The retail business is simple. Retailers need to sell products at a higher price than what they buy them for. If retailers can’t buy Radeon RX 9070 series GPUs from distributors at below MSRP prices, they can’t sell them to consumers at AMD’s official MSRP price.


Radeons always did well in AW2 w/o RT but here 9070 XT is beating even the 7900 XTX - which has more shading power and higher memory bandwidth.
Perhaps RDNA4 has a better Mesh Shaders implementation?
 