The AMD 9070 / 9070XT Reviews and Discussion Thread

You seem to be trying quite hard to downplay / exclude this parameter. If a 5070ti didn't need 40% more bandwidth, why would Nvidia equip it with GDDR7 when there is a significant premium on it over GDDR6? Do you think NV's a charity? Why then would they run the 5080 with an almost identical shader-to-bandwidth ratio as the 5070ti?
To jump-start production. Who would produce GDDR7 if nobody is using it? Nvidia saved space with GDDR7 and provided 15%+ more performance without more transistors.

The fact that N48 needs nearly 20% more transistors for ~20% less performance is staggering. A 5080 can be nearly twice as efficient because N48 still isn't modern enough to cut off parts of the chip when they are not in use.
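For context on the bandwidth numbers being argued about above: peak memory bandwidth is just bus width times per-pin data rate, so GDDR7 buys extra bandwidth without a wider (and more board- and die-hungry) bus. A minimal sketch, assuming illustrative data rates of 20 Gbps for GDDR6 and 28 Gbps for GDDR7 on the same 256-bit bus (actual rates vary by SKU, so check the spec sheets):

```python
# Rough bandwidth math behind the GDDR6 vs GDDR7 comparison above.
# The per-pin data rates here are illustrative assumptions, not official SKU specs.

def memory_bandwidth_gbps(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: (bus width in bits / 8 bits per byte) * per-pin Gbps."""
    return bus_width_bits / 8 * data_rate_gbps

gddr6 = memory_bandwidth_gbps(256, 20.0)   # 640 GB/s
gddr7 = memory_bandwidth_gbps(256, 28.0)   # 896 GB/s

print(f"GDDR6 @ 20 Gbps on 256-bit: {gddr6:.0f} GB/s")
print(f"GDDR7 @ 28 Gbps on 256-bit: {gddr7:.0f} GB/s")
print(f"Uplift on the same bus: {gddr7 / gddr6 - 1:.0%}")   # ~40%, with no extra bus width
```

That is roughly where a "40% more bandwidth" figure can come from without widening the bus, which is the space-saving point above.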
 
Personally I think it's not very meaningful to compare different architectures with various parameters, because it can be difficult to know how those parameters translate across architectures. The only really objective parameters are probably power consumption and maybe chip area (which has cost implications, though not necessarily directly proportional ones).

Also, different architectures have different requirements. For example, extra computation power, if it doesn't bring more performance, seems wasteful. However, it might not be wasteful when running a different workload. Different workloads have different requirements and can be bottlenecked by different parameters.

To use an extreme example, if you only benchmark non-RT games, a GPU with dedicated RT hardware will look "wasteful" because it performs relatively worse (i.e. it requires more transistors to perform at the same level). Another example is extra VRAM, which does not improve performance at all if a game does not use that much, but you'll still want some because future games might.

Extra computation power, AI units, and extra bandwidth could all be useful in a different workload, and current workloads might not show the benefits. People used to think tensor cores were useless in games, but DLSS and FSR4 have proved otherwise. All engineering is a balancing act.
 
It's also hard to compare architectural efficiency because you can only compare cards that got released as products, and you don't know (at least right away) where those cards sit on their respective efficiency curves. For example AD102 looks a lot better if limited to 300W. AMD or NVIDIA could make different choices on where to clock their cards that obfuscate any architectural efficiency advantages or disadvantages.
 
And that's why reviews should be done apples to apples and leave scaling, framegen and whatnot to articles specifically about them, like I've been saying since day 1.

I'm okay with splitting it up or keeping it together, as long as it's communicated what's being measured. One issue with side pieces for things like DLSS is that they generally don't get revisited or updated across an entire GPU family.

So the cost of the FSR4 upscale in ms on a 9070XT at 1440p is going to be lower than on a 9060XT (or whatever), but usually that's not explored in detail on lower-end cards. What I want to know is: if I buy a 9070XT, how many fps do I gain on average for enabling FSR4 at each quality setting? Maybe just a chart that shows 1080p, 1440p, 4K vs each quality setting, and then the average gain at each setting found across 5 games. But then that should be repeated each time a new card comes out, because I'm not sure you can assume that if you get, say, 30% at 1440p Quality on a 9070XT, you'll get that much on a 9060XT. And then if you do that across vendors, I can look and see whether the performance gains are bigger for one vendor over another, if I care more about upscaling than native rendering.
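For what it's worth, the per-setting averages described above are straightforward to compute once a reviewer has native and FSR4 FPS per game. A minimal sketch with made-up placeholder numbers, just to show the shape of the calculation (a geometric mean of the per-game uplift, so one outlier title doesn't dominate):

```python
# Hypothetical FPS numbers purely for illustration; real data would come from benchmarks.
from math import prod

results = {
    # game: {"native": fps, "Quality": fps, "Balanced": fps, "Performance": fps}
    "Game A": {"native": 60, "Quality": 78, "Balanced": 85, "Performance": 95},
    "Game B": {"native": 45, "Quality": 60, "Balanced": 66, "Performance": 74},
    "Game C": {"native": 90, "Quality": 112, "Balanced": 120, "Performance": 131},
}

def mean_uplift(results: dict, setting: str) -> float:
    """Geometric mean of (upscaled FPS / native FPS) across all games."""
    ratios = [r[setting] / r["native"] for r in results.values()]
    return prod(ratios) ** (1 / len(ratios))

for setting in ("Quality", "Balanced", "Performance"):
    print(f"{setting}: {mean_uplift(results, setting) - 1:+.0%} vs native")
```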
 
That’s not practical as scaling is gradually becoming the “normal” way to play some games. For a long time now we’ve essentially assumed that all hardware outputs the same image when that wasn’t always the case. One day we’ll assume the same for upscalers.
I think reviewers will just have to include both performance at native resolution and upscaled performance at equal image quality. Ultimately reviewers will need to make a somewhat subjective decision as to which quality levels of one upscaler are equivalent to which quality levels of another upscaler (e.g. DLSS 4 Performance = FSR4 Balanced) and perform their benchmarks at those levels on each card to do comparisons.
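One semi-objective input to that matching is the internal render resolution behind each preset. A minimal sketch using commonly cited per-axis scale factors (roughly 67% Quality, 58% Balanced, 50% Performance, 33% Ultra Performance; the exact values can differ between upscalers and versions, so treat them as assumptions):

```python
# Commonly cited per-axis scale factors for upscaler presets; assumptions, not vendor specs.
PRESET_SCALE = {
    "Quality": 0.667,
    "Balanced": 0.58,
    "Performance": 0.50,
    "Ultra Performance": 0.333,
}

def render_resolution(output_w: int, output_h: int, preset: str) -> tuple[int, int]:
    """Internal resolution the upscaler renders at for a given output resolution."""
    s = PRESET_SCALE[preset]
    return round(output_w * s), round(output_h * s)

for preset in PRESET_SCALE:
    w, h = render_resolution(3840, 2160, preset)   # 4K output
    print(f"4K {preset}: renders at {w}x{h}")
```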
 
I think I'm leaning more towards them just providing the performance numbers for all of the quality settings, and then I'll decide what I think looks good enough. The image quality comparisons are useful, and I think their subjective conclusions can be useful, but ultimately I'm the one who decides what's good enough for me. I don't really need to see Quality mode vs Balanced mode just because a reviewer has decided they're the closest match.
 
The really difficult thing about upscalers is that the really severe artifacting is often only visible in motion. Smearing, blurring, shadow persistence, ghosting: so much of it is hard to capture in a still. And of course, YouTube isn't much help with its 60fps cap and bitrate constraints / re-encoding shenanigans.

And the worst part is, some of it you'd never really see unless you knew exactly where and what to look for, while other times it's pretty glaring -- like the DF video I watched today where they were playing RTX Remixed HL2 and the moving shadows of the rotating blade thingies for murdering zombies created a horrible shadow persistence / smearing issue. A picture would've shown nothing, but the motion made it sorely obvious.
 
Capturing screenshots while the camera is moving is sometimes better than a YouTube video because you can get higher quality. It's a pain to do consistently, though; controller movement is probably better than mouse for that. But you probably need video anyway for some things.

Either way, I'd still like to see thorough breakdowns of the performance gains from upscaling at each major resolution and each quality setting in reviews. Not sure if there's something like that for the 9070XT yet, but Daniel Owen's video did a nice job of showing some of the upscaling gains and then showing the final performance comparisons for different use cases: everything included, ray tracing only, native only.
 
I'm strongly considering taking advantage of these newly imposed US tariffs on Chinese products, which are raising the prices of Nvidia GPUs yet again (another ~20% price increase on Asus 5090s), and selling my RTX 4090.. which I'm almost certain I will be able to get ~$3000 CAD for.. and buying a Radeon 9070XT for ~$1100 CAD and just pocketing the rest. This bullshit is becoming unsustainable. I'd rather support AMD regardless of whether or not their GPUs are inferior.. simply out of principle.

FSR4 is coming at the right time.. and a next generation enthusiast GPU from AMD will likely be very comparable to Nvidia.
 
I think I'm just pretty much resigned to the fact that my PC gaming days are over. I'm at the age where I have limited interest in games, and even spending $1k CAD on a GPU won't give me as big of a jump as I'm looking for. Kind of priced out while losing interest. If I were in your situation, I'd pocket the money. I could go to a lot of nice restaurants with that money, or buy a plane ticket, and enjoy it.
 
How does VAT work with respect to MSRP?
It's complicated(TM)
If they only give a USD MSRP, VAT is added on top, since US prices are quoted without tax
If they give an EU MSRP, it usually includes German VAT and you adjust it to your local rate (but sometimes the manufacturer just keeps the price the same everywhere despite different VAT rates)
If they give a country-specific MSRP, it includes local VAT
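To make those three cases concrete, here's a toy sketch (the 19% German VAT default is the standard rate, but the example MSRP and the 21% local VAT below are made-up placeholders):

```python
# Toy converter for the three MSRP cases above.

def local_price(msrp: float, msrp_kind: str, local_vat: float,
                fx_to_local: float = 1.0, quoted_vat: float = 0.19) -> float:
    """Estimate a local shelf price from an MSRP quote.

    msrp_kind: "usd"   -> pre-tax, so convert currency and add local VAT
               "eu"    -> usually includes German VAT (19%), swap it for local VAT
               "local" -> country-specific MSRP, already includes local VAT
    """
    if msrp_kind == "usd":
        return msrp * fx_to_local * (1 + local_vat)
    if msrp_kind == "eu":
        return msrp / (1 + quoted_vat) * (1 + local_vat)
    return msrp

# Hypothetical example: a 699 EUR "EU MSRP" bought in a country with 21% VAT.
print(f"{local_price(699, 'eu', local_vat=0.21):.0f} EUR")   # ~711 EUR
```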
 
To give added context: on release the 9070 was selling from basically £520 and the 9070xt from £570 on that very site (all prices on the site include VAT), so you can see how much it's gone up by. And I don't really expect the lower-priced ones to go back on sale at those prices, even.
 
What is the official MSRP of the 9070/XT in the UK? I dunno if they made a separate announcement for the EU or the UK.
 
I think AMD didn't give specific MSRPs for different regions, but I'm not 100% sure about the UK.
$599 should translate to a tad over £555 at today's exchange rate once you add the UK's 20% VAT, but apparently the "MSRP" cards in the UK were £569; close enough IMO considering currency fluctuations.
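The arithmetic behind that £555 figure, for anyone who wants to check it (the exchange rate is an assumption, roughly what the post implies by "today's rate"):

```python
usd_msrp = 599
fx_usd_to_gbp = 0.772   # assumed exchange rate, not an official figure
uk_vat = 0.20

gbp_price = usd_msrp * fx_usd_to_gbp * (1 + uk_vat)
print(f"£{gbp_price:.0f}")   # ~£555, vs the £569 "MSRP" cards actually listed
```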
 
Has anyone proven where the 80% number comes from? Is it the Nvidia app or driver telemetry? Unfortunately, I doubt Nvidia will be forthcoming with how that data was collected and analyzed. Either way, I think upscaling is going to be pretty popular, and I'd expect that with a 9070 XT people will be using FSR4 on 1440p and 4K displays. I would guess the majority, but I have no data to back that up.
 
“Of course, everyone likes a very high-end GPU, but not so many people can access it. The 9070 XT has been a fantastic success. Actually, it’s the number one seller for all of the AMD Radeon generations for first-week sales by far, by 10x more than previous generations, and we like to see people happy. People are happy with the 9070 XT. […] We are very excited about it and are increasing manufacturing so that more gamers can access it.”
 