> It's been a very important metric ever since Maxwell, AFAIR.

It's been very important while NVIDIA used less power than AMD equivalents.
> Ok. Just looking at the raw specs, we have:

Except the 5700 and 2070 are not equal. The 2070 only rasterizes 48 pixels per clock, so its ROPs can't write or blend more than 48 pixels per cycle; the 5700 has 33% more rasterizer and ROP throughput.
The 2070 Super and 5700 XT have much closer specs, and the 2070 Super is faster.
I was surprised people missed the 80 series going from a 220W card on 16nm to a 320W card on 8nm. That is effectively going from mid-to-high-range wattage to OC Titan wattage (on a smaller node and a new arch).
The 2080 Ti was a 250W card, so for the ~30-40% increase you get with the 3080, the card pulls ~25-30% more power. Got to hand it to Nvidia, their marketing was on point again, but perf per watt is not looking too hot (or is it?), so it surprises me that people think AMD, with RDNA2's claimed ~50% increase in perf/watt over RDNA, cannot match it.
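For what it's worth, that perf/watt arithmetic works out roughly as follows (a quick sketch using the 250 W, 320 W and +30-40% figures from the post above; actual TDPs and benchmark gains may differ):

```python
# Rough perf/watt comparison using the figures from the post above (assumed,
# not measured): RTX 2080 Ti ~250 W, RTX 3080 ~320 W, +30-40% performance.
power_2080ti = 250.0
power_3080 = 320.0

power_ratio = power_3080 / power_2080ti            # ~1.28, i.e. ~28% more power
for perf_gain in (1.30, 1.40):
    perf_per_watt_gain = perf_gain / power_ratio
    print(f"{perf_gain:.2f}x perf -> {perf_per_watt_gain:.2f}x perf/watt")
# Prints roughly 1.02x to 1.09x, i.e. only a modest perf/watt improvement.
```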
> It's been very important while NVIDIA used less power than AMD equivalents.

No, it has been important beginning from NV30 at the latest.
> It's been very important while NVIDIA used less power than AMD equivalents.

It's always been very important, because the one who has better perf/watt wins in perf.
Ok. Just looking at the raw specs, we have:
5700XT: 121.9 GPixel/s
2070S: 113.3 GPixel/s
5700: 110.4 GPixel/s
2070: 103.7 GPixel/s
I don't see how the former is closer than the latter. These are stock settings, but their clocks don't differ that much anyway.
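For reference, those GPixel/s figures are just ROP count times boost clock (a minimal sketch; the 64-ROP counts and the boost clocks below are my assumptions about the specs being used, not something stated in the post):

```python
# Peak pixel fillrate = ROPs * boost clock (GHz) -> GPixel/s.
# ROP counts and boost clocks are assumed reference-card specs.
cards = {
    "5700 XT":    (64, 1.905),
    "2070 Super": (64, 1.770),
    "5700":       (64, 1.725),
    "2070":       (64, 1.620),
}

for name, (rops, clock_ghz) in cards.items():
    print(f"{name}: {rops * clock_ghz:.1f} GPixel/s")
# ~121.9, 113.3, 110.4 and 103.7 GPixel/s, matching the list above.
```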
> I'm curious as to why they sent out those units to reviewers to take accurate power measurements. I'm wondering if they believe their power numbers will look more favourable with accurate measurement under load.

Since comparisons will be made against Turing, reviewers need accurate measurements of the power required to achieve performance levels similar to Turing's.

https://www.tomshardware.com/features/nvidia-ampere-architecture-deep-dive

Nvidia gets the 1.9X figure not from fps/W, but rather by looking at the amount of power required to achieve the same performance level as Turing. If you take a Turing GPU and limit performance to 60 fps in some unspecified game, and do the same with Ampere, Nvidia claims Ampere would use 47% less power.
That's not all that surprising. We've seen power-limited GPU designs for a long time in laptops. The RTX 2080 laptops, for example, can theoretically clock nearly as high as the desktop parts, but they're restricted to a much lower power level, which means actual clocks and performance are lower. A 10% reduction in performance can often deliver a 30% gain in efficiency when you near the limits of a design.
AMD's R9 Nano was another example of how badly efficiency decreases at the limit of power and voltage. The R9 Fury X was a 275W TDP part with 4096 shaders clocked at 1050 MHz. The R9 Nano took the same 4096 shaders but clocked them at a maximum of 1000 MHz and applied a 175W TDP limit. Clocks were usually closer to 925 MHz in practice, but still at one-third less power.
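To illustrate the arithmetic behind that 1.9X claim (a sketch only; the 60 fps cap and the 47% figure come from the article quoted above, while the absolute wattage is an invented placeholder):

```python
# Iso-performance comparison: both GPUs capped to the same 60 fps target.
# Only the 47% power reduction comes from the quoted article; 240 W is made up.
fps_target = 60.0
turing_power = 240.0                        # hypothetical watts at 60 fps
ampere_power = turing_power * (1 - 0.47)    # "47% less power" claim

turing_eff = fps_target / turing_power
ampere_eff = fps_target / ampere_power
print(f"perf/watt ratio at the 60 fps cap: {ampere_eff / turing_eff:.2f}x")  # ~1.89x

# Note: this is efficiency at a capped operating point; it says nothing about
# fps/W when each card runs flat out at its full power limit.
```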
> I'm curious as to why they sent out those units to reviewers to take accurate power measurements. I'm wondering if they believe their power numbers will look more favourable with accurate measurement under load.
I think the real reason is that AMD has been caught with their pants down by the 3080 and 3070 performance and prices, and that AMD will more than likely have to clock Big Navi well beyond the power sweet spot and pull a heck of a lot of watts to get near or slightly exceed 3070 performance. So with this gear, reviewers can accurately measure performance per watt for both vendors.
> Let's get real. If the XSX is any indication, even a very moderately clocked 80 CU Navi 2x won't have any problems beating the 3070, and at lower power. The real question is: can it beat the 3080?
No, the real question is: can it beat the 3070 in RT games, and does it have a DLSS algorithm to compete?
When RDNA2 hits reviews, most new AAA games will be benchmarked:
Call of Duty (RT + DLSS)
Minecraft (RT + DLSS)
Cyberpunk (RT + DLSS)
Fortnite (RT + DLSS)
Watch Dogs (RT + DLSS)
Vampire: The Masquerade (RT + DLSS)
Crysis Remastered (RT + DLSS)
That's a lot of Nvidia-optimized games to fight against...
> Ok. Even if this is the case, it would be assuming that the lower pixel output per second is actually holding back the rest of the GPU. There is no evidence that that is really the case. How could the 2070 be faster than the 5700 on any occasion if it was limited by the rasterizer?

As he already told you in the very post you quoted, the 2070 has 48 rasterized pixels per clock, not 64. So the actual relevant pixel numbers are:
5700XT: 121.9 GPixel/s
2070S: 113.3 GPixel/s - limited by ROPs; the rasterizers can actually do 141.6 GPixel/s (5 GPC x 16 rasterized pixels x 1770 MHz)
5700: 110.4 GPixel/s
2070: 77.76 GPixel/s - limited by the rasterizers: 3 GPC x 16 rasterized pixels x 1620 MHz
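A small sketch of how those two limits interact (assuming 16 rasterized pixels per GPC per clock, 64 ROPs on both cards, and the boost clocks used in the post above):

```python
# Effective peak fillrate is bounded by both the rasterizers (GPCs) and the ROPs.
def fillrates(gpcs, rops, clock_ghz):
    raster_limit = gpcs * 16 * clock_ghz   # GPixel/s the front end can rasterize
    rop_limit = rops * clock_ghz           # GPixel/s the back end can write/blend
    return raster_limit, rop_limit, min(raster_limit, rop_limit)

for name, gpcs, rops, clock in [("2070", 3, 64, 1.620), ("2070 Super", 5, 64, 1.770)]:
    raster, rop, effective = fillrates(gpcs, rops, clock)
    print(f"{name}: raster {raster:.2f}, ROP {rop:.2f}, effective {effective:.2f} GPixel/s")
# 2070:       raster 77.76 < ROP 103.68 -> rasterizer-bound at 77.76 GPixel/s
# 2070 Super: raster 141.60 > ROP 113.28 -> ROP-bound at 113.28 GPixel/s
```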
How could the 2070 be faster than the 5700 on any occasion if it was limited by the rasterizer?
DOOM manages to produce great visual quality at high performance because it cleverly re-uses old data computed in previous frames. In total, there were 1331 draw calls, 132 textures and 50 render targets used.
Let's do some ballpark napkin math. Everything was done at 1440p by ComputerBase. So, if you have 77.76 GPixel/s, at 1440p you could theoretically reach a maximum of about 21,000 frames per second. Obviously the rasterizer is not going to output 16 pixels every clock in real-world scenarios, but even if you reduce that to 1 pixel per clock, that's still over 1,300 fps.
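The same napkin math in code form (a sketch of the calculation above; the 3-rasterizer, 1620 MHz figures for the one-pixel-per-clock case are my reading of what the post intends):

```python
# Ballpark fillrate-limited frame rates at 1440p, following the napkin math above.
pixels_per_frame = 2560 * 1440                  # 3,686,400 pixels at 1440p

peak_fillrate = 77.76e9                         # 2070 rasterizer limit, pixels/s
print(peak_fillrate / pixels_per_frame)         # ~21,094 fps, the "21 thousand" figure

# Pessimistic case: each of the 2070's 3 rasterizers outputs only 1 pixel per clock.
worst_case_fillrate = 3 * 1 * 1.620e9           # pixels/s
print(worst_case_fillrate / pixels_per_frame)   # ~1,318 fps, still "over 1,300 fps"
```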
> Almost none of these games are new.

English is not my native tongue, but I'm pretty sure "almost" is the wrong word when you can't play (as of today) these:
> Even with no knowledge at all, you should have realized that something is really wrong with your math results, or rather with the conclusions you made; there must be something really huge missing in your reasoning. Or did you honestly think that they put in thousands of times more pixel fillrate than is truly necessary?

Well, yeah, I realized something was wrong, but I decided to post it anyway, to see what the rebuttal would be. Helps me learn too.
> Since comparisons will be made against Turing, reviewers need accurate measurements of the power required to achieve performance levels similar to Turing's.
> https://www.tomshardware.com/features/nvidia-ampere-architecture-deep-dive

This is some of the most cringe marketing.