But didn't AMD beat NVidia this time in tile-deferred pixel shading, given that they're faster than NVidia pretty much everywhere in normal rasterisation games?
4K performance is not looking great. Being great at 1440p when your competitor is great at 4K doesn't make a halo product.
Whether it's because of a lack of FLOPS or because of a low cache hit rate, is that ultimately because the GPU is shading too many pixels that are never seen?
Also, tiled deferred pixel shading interacts with culling and with how the GPU handles small and large polygons, and whether they're submitted as strips or lists. So many things have an influence.
I don't know how we'll ever find out, when it comes to games.
The best example is tessellation. Everybody asked why AMD ran badly in Crysis. They looked at synthetic tessellation benchmarks and saw that AMD was only weak at tessellation. Then they checked Crysis again and, bingo, Crysis was totally over-tessellated. How would you ever find that out without any hints about where AMD's weakness lies?
Of course synthetics don't show you all the information, but they give you really strong hints about what could be going wrong in games.
10 years ago those tessellation comparisons were interesting. Unigine Heaven was interesting too.
AMD defaults to "AMD optimised tessellation" these days, as I understand it. It could be having an effect on performance and IQ in current games, but there's no tech journalism these days that goes that deep as far as I can tell.
Another point of comparison is geometry shading. It turned out to be a dumb idea. NVidia scorched ahead in very specific synthetics - because the GS export data didn't leave the chip.
If you happened to watch Scott's interview with HotHardware, he said the goal of IC (Infinity Cache) is not just performance; it was a tradeoff between die area, performance and power.
He specifically said they would have needed a wider bus to get the same bandwidth for more performance, and that the power needed by a wider bus and more memory chips means a higher TBP. He also added that the extra memory controllers + PHYs would occupy a significant footprint on the chip.
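To make that tradeoff concrete, here's a rough back-of-envelope sketch (all the numbers are mine and purely illustrative, not anything AMD has published): a big on-die cache lets a narrower bus reach a similar effective bandwidth, while a wider bus gets there by spending controller/PHY area and memory power instead.

```python
# Back-of-envelope sketch of the Infinity Cache tradeoff.
# All of the figures below are made-up illustrative values, not AMD's numbers.

def effective_bandwidth(dram_bw_gbs: float, hit_rate: float) -> float:
    """Effective bandwidth if a fraction `hit_rate` of traffic is served from
    the on-die cache and only misses go out to DRAM (the cache is assumed fast
    enough not to be the bottleneck)."""
    return dram_bw_gbs / (1.0 - hit_rate)

# Option A (hypothetical): 256-bit GDDR6 @ 16 Gbps plus a large on-die cache.
narrow_bus = 256 / 8 * 16                                # 512 GB/s raw
print(effective_bandwidth(narrow_bus, hit_rate=0.55))    # ~1138 GB/s effective

# Option B (hypothetical): 512-bit GDDR6 @ 16 Gbps, no big cache.
# Similar ballpark of effective bandwidth, but twice the memory controllers,
# PHY area and DRAM power, which is the TBP point Scott was making.
wide_bus = 512 / 8 * 16                                  # 1024 GB/s raw
print(effective_bandwidth(wide_bus, hit_rate=0.0))
```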
I did watch and wasn't edified.
In the end AMD beat NVidia by less than 5% in performance per watt (though there's more memory on the 6800XT), and for the time being 4K performance is not very good. The 6900XT isn't really going to change that either, since 20% more FLOPS (and other substantial advantages) in the 6800XT versus the 6800 brings only about 11% more performance on average (varying from 4 to 19% according to Techspot).
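Spelling out the scaling arithmetic behind that (a trivial sketch, using only the averages quoted above):

```python
# Rough scaling check for the 6800XT-versus-6800 comparison quoted above.
def scaling_efficiency(flops_gain: float, perf_gain: float) -> float:
    """Fraction of a relative FLOPS increase that shows up as performance."""
    return perf_gain / flops_gain

# ~20% more FLOPS (plus other advantages) -> ~11% more performance on average
print(scaling_efficiency(0.20, 0.11))   # ~0.55: barely half the extra FLOPS materialise
```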
A downclocked N22/N23 in mobile form would be very efficient, looking at the chart below.
View attachment 4965
That chart is almost as scummy as the NVidia equivalent; both make 2x performance-per-watt claims by cherry-picking points on the curves that don't relate to the best-performing cards being compared.
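A quick illustration of what I mean by cherry-picking (every number here is invented): measure the new chip down-clocked to an iso-performance point on the sweet part of the V/F curve and you get 2x; compare flagship to flagship at stock clocks and the gap is much smaller.

```python
# Why a "2x perf/W" claim depends on where you sit on the curve.
# Every number here is invented purely to illustrate the cherry-picking point.

def perf_per_watt(perf: float, watts: float) -> float:
    return perf / watts

old_flagship = perf_per_watt(perf=100, watts=300)

# Flagship vs flagship at stock clocks: a fairly modest gain.
new_flagship = perf_per_watt(perf=150, watts=300)
print(new_flagship / old_flagship)       # 1.5x

# Same new chip, down-clocked to the old card's performance level,
# where the V/F curve is far more favourable: the marketing number.
new_downclocked = perf_per_watt(perf=100, watts=150)
print(new_downclocked / old_flagship)    # 2.0x
```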
And according to a banned member, Navi 2x is getting a lot of interest from laptop OEMs for its efficiency, which is what Scott also mentioned.
I'm glad to see that laptops are a place where AMD can compete again, but they've plunked themselves back at Fury X versus 980 Ti in halo-performance terms...