The AMD 9070 / 9070XT Reviews and Discussion Thread

You seem to be trying quite hard to downplay / exclude this parameter. If a 5070Ti didn't need 40% more bandwidth, why would Nvidia equip it with GDDR7 when there is a significant premium on it over GDDR6? Do you think NV is a charity? Why then would they run the 5080 with an almost identical shader-to-bandwidth ratio to the 5070Ti?
To jump-start production. Who would produce GDDR7 if nobody was using it? Nvidia saved space with GDDR7 and got 15%+ more performance without more transistors.

The fact that N48 needs nearly 20% more transistors for ~20% less performance is staggering. A 5080 can be nearly twice as efficient because N48 still isn't modern enough to cut off parts of the chip when they are not in use.
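As a quick sanity check on the shader-to-bandwidth point above, here's a minimal back-of-the-envelope sketch. The core counts, bus widths and memory speeds are the commonly listed specs for these cards (my assumptions, not figures from the posts):

```python
# Back-of-the-envelope shader-to-bandwidth ratios. The core counts, bus widths
# and memory speeds below are the commonly listed specs (assumptions, not
# figures taken from the posts above).

cards = {
    # name: (shader count, memory bus width in bits, memory speed in Gbps)
    "RTX 5070 Ti": (8960, 256, 28),
    "RTX 5080": (10752, 256, 30),
}

for name, (shaders, bus_bits, gbps) in cards.items():
    bandwidth_gbs = bus_bits / 8 * gbps   # GB/s
    ratio = shaders / bandwidth_gbs       # shaders per GB/s of bandwidth
    print(f"{name}: {bandwidth_gbs:.0f} GB/s, {ratio:.1f} shaders per GB/s")
```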
 
Personally I think it's not very meaningful to compare different architectures on individual parameters, because it's hard to know how those parameters translate into performance across architectures. The only really objective parameters are probably power consumption and maybe chip area (which may have cost implications, but again not necessarily directly proportional ones).

Also, different architectures have different requirements. For example, extra compute power that doesn't bring more performance seems wasteful. However, it might not be wasteful when running a different workload. Different workloads have different requirements and can be bottlenecked by different parameters.

To use an extreme example, if you only benchmark non-RT games, a GPU with dedicated RT hardware will look "wasteful" because it performs relatively worse (i.e. it requires more transistors to perform at the same level). Another example is extra VRAM, which doesn't improve performance at all if a game doesn't use that much, but you'll still want some because future games might.

Extra compute power, AI units, and extra bandwidth could all be useful in a different workload, and current workloads might not show the benefits. People used to think tensor cores were useless in games, but DLSS and FSR4 have proved otherwise. All engineering is a balancing act.
 

Very impressive debut for FSR 4. Tim kills it again with a great analysis too.

At one point in the video he makes a distinction between detail and sharpness, and that’s something I’m not really clear on. In theory sharpness can’t substitute for missing detail, but in practice I find post-processing sharpening can make textures appear more detailed by increasing contrast across color transitions.
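To make the sharpness-vs-detail distinction concrete, here's a minimal sketch of a post-process sharpen (a simple unsharp mask, purely illustrative and not what FSR/DLSS actually do): it raises contrast around existing color transitions, but it can't add back detail the upscaler never produced.

```python
import numpy as np

# Toy unsharp mask: sharpening boosts contrast across existing color
# transitions; it does not recover missing detail.
def unsharp_mask(img: np.ndarray, amount: float = 0.5) -> np.ndarray:
    """img: H x W grayscale array with values in [0, 1]."""
    # 3x3 box blur as a cheap low-pass filter (edges wrap, fine for a demo)
    blur = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            blur += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    blur /= 9.0
    # Add back the high-frequency residual, scaled by `amount`
    return np.clip(img + amount * (img - blur), 0.0, 1.0)

# A hard step edge: sharpening adds over/undershoot around the transition
# (more perceived contrast), but no new information has been created.
edge = np.full((8, 8), 0.2)
edge[:, 4:] = 0.8
print(edge[4])
print(unsharp_mask(edge)[4])
```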
 
It's also hard to compare architectural efficiency because you can only compare cards that were actually released as products, and you don't know (at least right away) where those cards sit on their respective efficiency curves. For example, AD102 looks a lot better if limited to 300W. AMD or NVIDIA could make different choices about where to clock their cards that obfuscate any architectural efficiency advantages or disadvantages.
 
A meta-analysis of FSR4's cost compared to FSR3, using DLSS3 on the 5070Ti as the fixed reference.

Spider-Man Miles Morales:
4K: 5070Ti is 10% faster.
4K FSR3/DLSS3: 5070Ti is 12% faster.
4K FSR4/DLSS3: 5070Ti is 19% faster.

So the cost of FSR4 is 7% in this game vs FSR3.

Call of Duty Black Ops 6:
4K: 9070XT is 14% faster.
4K FSR3/DLSS3: 9070XT is 5% faster.
4K FSR4/DLSS3: 5070Ti is 12% faster.

So the cost of FSR4 in this game is 17% vs FSR3.

Horizon Forbidden West:
4K: 9070XT is 17% faster.
4K FSR3/DLSS3: 9070XT is 4% faster.
4K FSR4/DLSS3: 5070Ti is 1% faster.

So the cost of FSR4 in this game is 5% vs FSR3.

God of War Ragnarök:
4K: 5070Ti is 6% faster.
4K FSR3/DLSS3: 5070Ti is 5% faster.
4K FSR4/DLSS3: 5070Ti is 12% faster.

So the cost of FSR4 in this game is 6% vs FSR3.

Conclusion:
Conclusion: FSR4 typically costs around 6% more than FSR3, though it can go higher (up to 17% here). DLSS3 is also faster than FSR3, which helps close several of the performance gaps between the 5070Ti and 9070XT seen at native resolution.
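For anyone who wants to redo the arithmetic, here's a rough sketch of how the per-game overhead can be reconstructed from the relative leads above. Treating the 5070Ti's DLSS3 result as a fixed reference and taking ratios is my assumption about the method, so the rounding won't exactly match the figures in the post:

```python
# Rough reconstruction of the per-game FSR4 overhead from the relative leads
# quoted above. Leads are expressed as (5070 Ti fps) / (9070 XT fps); a
# "9070XT is X% faster" result is entered as 1 / (1 + X/100).

games = {
    # game: (lead with FSR3/DLSS3, lead with FSR4/DLSS3)
    "Spider-Man Miles Morales": (1.12, 1.19),
    "Call of Duty Black Ops 6": (1 / 1.05, 1.12),
    "Horizon Forbidden West": (1 / 1.04, 1.01),
    "God of War Ragnarok": (1.05, 1.12),
}

for game, (lead_fsr3, lead_fsr4) in games.items():
    # The 5070 Ti runs DLSS3 in both cases, so any growth in its lead when the
    # 9070 XT switches FSR3 -> FSR4 is attributed to FSR4's extra cost.
    fsr4_cost = 1 - lead_fsr3 / lead_fsr4
    print(f"{game}: implied FSR4 overhead ≈ {fsr4_cost:.1%}")
```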

 
Yeah, these heavier FSR4/DLSS4 models are going to weigh heavily on head-to-head comparisons. I suppose it’s technically part of GPU performance, so it shouldn’t be treated differently to any other workload in a game.

The bigger challenge is IQ comparisons. I can’t imagine these guys will continue to devote so much time and energy to zoomed-in video comparisons of upscaling methods, especially as they become more commonplace; just like nobody compares texture filtering now, even though it was a big deal 20 years ago. Hopefully they can come up with some way to automate IQ evaluation.
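On automating IQ evaluation: one crude starting point would be a full-reference metric against a native or supersampled capture, e.g. PSNR as sketched below (SSIM or FLIP would be better choices in practice). This is purely illustrative, not something any outlet is known to use, and the frame variables in the usage comment are hypothetical names:

```python
import numpy as np

# Full-reference image quality metric (PSNR) against a reference capture.
def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 1.0) -> float:
    """Both inputs: float arrays in [0, peak] with identical shape."""
    mse = float(np.mean((reference - test) ** 2))
    return float("inf") if mse == 0.0 else 10.0 * np.log10(peak ** 2 / mse)

# e.g. psnr(native_frame, fsr4_frame) vs psnr(native_frame, dlss4_frame)
# on matched captures; higher means closer to the reference.
```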
 