The AMD 9070 / 9070XT Reviews and Discussion Thread

You seem to be trying quite hard to downplay / exclude this parameter. If a 5070 Ti didn't need 40% more bandwidth, why would Nvidia equip it with GDDR7 when there is a significant premium on it over GDDR6? Do you think NV is a charity? Why then would they run the 5080 with an almost identical shader-to-bandwidth ratio as the 5070 Ti?
To jump-start production. Who would produce GDDR7 if nobody is using it? Nvidia saved space with GDDR7 and provided 15%+ more performance without more transistors.

The fact that N48 needs nearly 20% more transistors for ~20% less performance is staggering. A 5080 can be nearly twice as efficient because N48 is still not modern enough to shut off parts of the chip when they are not being used.
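As a rough sanity check of the shader-to-bandwidth point above, here is a small sketch using the commonly cited launch specs (treat the exact figures as approximate):

```python
# Back-of-the-envelope shader-to-bandwidth comparison.
# Spec figures are the commonly cited launch numbers; treat them as approximate.
specs = {
    "RTX 5070 Ti": {"shaders": 8960,  "bandwidth_gb_s": 896},  # 256-bit GDDR7 @ 28 Gbps
    "RTX 5080":    {"shaders": 10752, "bandwidth_gb_s": 960},  # 256-bit GDDR7 @ 30 Gbps
}

for name, s in specs.items():
    ratio = s["shaders"] / s["bandwidth_gb_s"]
    print(f"{name}: {ratio:.1f} shaders per GB/s of bandwidth")
# Prints roughly 10.0 vs 11.2, i.e. the two ratios are within ~12% of each other.
```

By that crude measure the two cards are provisioned quite similarly, which is the ratio argument being made above.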
 
Personally I think it's not very meaningful to compare different architectures with various parameters because it can be difficult to know how these parameters work across the architectures. The only real objective parameters are probably power consumption and maybe chip area (which may have cost implications, but again not necessarily directly proportional).

Also, different architectures have different requirements. For example, extra computation power, if it doesn't bring more performance, seems wasteful. However, it might not be wasteful when running a different workload. Different workloads have different requirements and can be bottlenecked by different parameters.

To use an extreme example, if you only benchmark non-RT games, a GPU with dedicated RT hardware will look "wasteful" because it performs relatively worse (i.e. it requires more transistors to perform at the same level). Another example is extra VRAM, which does not improve performance at all if a game does not use that much, but you'll still want some because future games might.

Extra computation power, AI units, and extra bandwidth could all be useful in a different workload, and current workloads might not show the benefits. People used to think tensor cores were useless in games, but now DLSS and FSR4 have proved otherwise. All engineering is a balancing act.
 

Very impressive debut for FSR 4. Tim kills it again with a great analysis too.

At one point in the video he makes a distinction between detail and sharpness, and that's something I'm not really clear on. In theory sharpness can't substitute for missing detail, but in practice I find post-processing sharpening can make textures appear more detailed by increasing contrast at color transitions.
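For what it's worth, that intuition matches how a typical post-process sharpener (e.g. an unsharp mask) works: it amplifies contrast at transitions that already exist rather than creating new information. A minimal sketch, assuming an 8-bit grayscale image and using scipy for the blur:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, sigma=1.5, amount=0.8):
    """Classic unsharp masking: add back a scaled difference between the
    image and a blurred copy. This boosts contrast at edges and texture
    transitions that are already present, but it cannot recover detail
    that the renderer or upscaler never produced."""
    img = image.astype(np.float32)
    blurred = gaussian_filter(img, sigma=sigma)
    sharpened = img + amount * (img - blurred)
    return np.clip(sharpened, 0, 255).astype(np.uint8)
```

That's also why sharpening can read as "more texture" at normal viewing distance, yet produces halos and ringing around edges when pushed too hard.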
 
It's also hard to compare architectural efficiency because you can only compare cards that got released as products, and you don't know (at least right away) where those cards sit on their respective efficiency curves. For example AD102 looks a lot better if limited to 300W. AMD or NVIDIA could make different choices on where to clock their cards that obfuscate any architectural efficiency advantages or disadvantages.
 