Speculation: GPU Performance Comparisons of 2020 *Spawn*

Discussion in 'Architecture and Products' started by eastmen, Jul 20, 2020.

Thread Status:
Not open for further replies.
  1. DDH

    DDH
    Newcomer

    Joined:
    Jun 9, 2016
    Messages:
    36
    Likes Received:
    39
    It's been very important while NVIDIA used less power than AMD equivalents
     
    Randomoneh and NightAntilli like this.
  2. NightAntilli

    Newcomer

    Joined:
    Oct 8, 2015
    Messages:
    104
    Likes Received:
    131
    Ok. Just looking at the raw specs, we have;
    5700XT: 121.9 GPixel/s
    2070S: 113.3 GPixel/s

    5700: 110.4 GPixel/s
    2070: 103.7 GPixel/s

    I don't see how the former is closer than the latter. These are stock settings, but their clocks don't differ that much anyway.

    More importantly, considering the 5700 and the 5700XT both have 64 ROPs, the limit is clearly not the ROPs for the Radeon cards.
    And considering Turing can surpass the Radeon cards, their pixel output is not a limitation on their performance either, not to mention the 2080S has the same number of ROPs as the 2070S and performs better.
     
    PSman1700 likes this.
  3. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    15,134
    Likes Received:
    7,679
    I'm curious as to why they sent out those units to reviewers to take accurate power measurements. I'm wondering if they believe their power numbers will look more favourable with accurate measurement under load.
     
    Lightman likes this.
  4. Cuthalu

    Newcomer

    Joined:
    Oct 28, 2006
    Messages:
    118
    Likes Received:
    3
    No, it has been important beginning from NV30 at the latest.
     
    CarstenS likes this.
  5. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,242
    Likes Received:
    3,405
    It's always been very important because the one who has better perf/watt wins in perf.
     
  6. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    15,134
    Likes Received:
    7,679
    If AMD offers equivalent or better performance with lower power consumption it'll be a highlight of the reviews. I guarantee reviewers will mention that one requires a power supply upgrade and the other doesn't etc.
     
    Cuthalu and CarstenS like this.
  7. Benetanegia

    Regular

    Joined:
    Sep 4, 2015
    Messages:
    394
    Likes Received:
    425
    As he already told you in the very post you quoted, the 2070 rasterizes 48 pixels per clock, not 64. So the actual relevant pixel rates are:

    5700XT: 121.9 GPixel/s
    2070S: 113.3 GPixel/s - limited by ROPs; the rasterizers can actually do 141.6 GPixel/s: 5 GPC x 16 (rasterized pixels) x 1770 MHz

    5700: 110.4 GPixel/s
    2070: 77.76 GPixel/s - limited by rasterizers: 3 GPC x 16 (rasterized pixels) x 1620 MHz
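    To make the arithmetic explicit, here is a minimal sketch of the limit calculation above. The ROP counts, GPC counts, boost clocks, and the 16 pixels/clock/rasterizer figure are taken from this thread; the helper function name is my own:

    ```python
    def effective_fillrate_gpix(rops, gpcs, boost_mhz, pix_per_gpc_clock=16):
        """Pixel throughput is capped by the slower of the ROPs and the rasterizers."""
        rop_limit = rops * boost_mhz / 1000.0                         # GPixel/s
        raster_limit = gpcs * pix_per_gpc_clock * boost_mhz / 1000.0  # GPixel/s
        return min(rop_limit, raster_limit)

    # RTX 2070: 64 ROPs but only 3 GPCs -> rasterizer-bound at 77.76 GPixel/s
    print(effective_fillrate_gpix(rops=64, gpcs=3, boost_mhz=1620))

    # RTX 2070 Super: 64 ROPs and 5 GPCs -> ROP-bound at ~113.3 GPixel/s
    print(effective_fillrate_gpix(rops=64, gpcs=5, boost_mhz=1770))
    ```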
     
    Pete, DavidGraham, xpea and 3 others like this.
  8. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,058
    Likes Received:
    3,116
    Location:
    New York
    Exactly, people care about perf/watt when it means better performance. This is why Fermi still sold despite being hot under the collar. Noise is a huge factor for me personally. I don't care about power bills but they can keep their loud and whiny fans.
     
  9. pharma

    Veteran

    Joined:
    Mar 29, 2004
    Messages:
    4,891
    Likes Received:
    4,539
    Since comparisons will be made against Turing, reviewers need accurate measurements of power required to achieve similar performance levels as Turing.

    https://www.tomshardware.com/features/nvidia-ampere-architecture-deep-dive

     
    Lightman and PSman1700 like this.
  10. neckthrough

    Newcomer

    Joined:
    Mar 28, 2019
    Messages:
    138
    Likes Received:
    388
    Agreed. Users care more about the indirect *consequences* of power consumption, i.e., heat and noise from inadequate cooling solutions. A proper cooling solution mitigates these issues but adds to cost. If this cost is passed on to the customer, it makes for an unattractive card. But if the vendor eats the cost, it's their problem; at the end of the day most users care about perf/$ and a good quality of life in terms of heat/noise.
     
    Man from Atlantis and PSman1700 like this.
  11. psurge

    Regular

    Joined:
    Feb 6, 2002
    Messages:
    955
    Likes Received:
    52
    Location:
    LA, California
    Having the upper hand in perf/W means you can provide more performance (well, throughput anyway) than your competitor for a given level of input power (or output noise, for those into quiet PCs). I'd say it's a critical metric, for CPUs or GPUs, from mobile to desktop to server. In this case, NVidia's power efficiency doesn't seem to have advanced by a huge amount - Turing was already great in that regard though, so it may just be a case of getting to the diminishing returns stage. There also seems to be at least a bit of a process disadvantage vs TSMC 7nm, so I'm optimistic that AMD will make the PC GPU market exciting again.
     
  12. A1xLLcqAgt0qc2RyMz0y

    Veteran

    Joined:
    Feb 6, 2010
    Messages:
    1,589
    Likes Received:
    1,490
    I think the real reason is that AMD has been caught with their pants down by the 3080 and 3070 performance and prices, and that AMD will more than likely have to clock Big Navi well beyond the power sweet spot and pull a heck of a lot of watts to get near or slightly exceed 3070 performance. So with this gear reviewers can accurately measure performance per watt for both vendors.
     
    PSman1700 likes this.
  13. SimBy

    Regular

    Joined:
    Jun 21, 2008
    Messages:
    700
    Likes Received:
    391
    Let's get real. If XSX is any indication, even a very moderately clocked 80CU Navi2x won't have any problems beating the 3070. And at lower power. The real question is whether it can beat the 3080.
     
  14. xpea

    Regular

    Joined:
    Jun 4, 2013
    Messages:
    551
    Likes Received:
    783
    Location:
    EU-China
    No, the real question is: can it beat the 3070 in RT games, and does it have an answer to DLSS?
    When RDNA2 hits reviews, most new AAA games will be benchmarked:
    Call of Duty (RT + DLSS)
    Minecraft (RT + DLSS)
    Cyberpunk (RT + DLSS)
    Fortnite (RT + DLSS)
    Watchdogs (RT + DLSS)
    Vampire masquerade (RT + DLSS)
    Crysis remastered (RT + DLSS)

    It's a lot of Nvidia optimized games to fight...
     
    DegustatoR and Cuthalu like this.
  15. SimBy

    Regular

    Joined:
    Jun 21, 2008
    Messages:
    700
    Likes Received:
    391
    Almost none of these games are new.
     
  16. NightAntilli

    Newcomer

    Joined:
    Oct 8, 2015
    Messages:
    104
    Likes Received:
    131
    Ok. Even if this is the case, it would be assuming that the lower pixel output per second is actually holding back the rest of the GPU. There is no evidence that this is really the case. How could the 2070 ever be faster than the 5700 if it were limited by the rasterizer?

    Let's do some ballpark napkin math. Everything was done at 1440p by Computerbase. So... if you have 77.76 GPixel/s, at 1440p you could theoretically reach a max of 21 thousand frames per second. Obviously the rasterizer is not going to output 16 pixels every clock in real-world scenarios. Even if you reduce that to 1 pixel per clock, that's still over 1300 fps.
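    For what it's worth, the napkin math above checks out arithmetically. The 77.76 GPixel/s figure and the 1440p resolution come from the posts above; the one-write-per-screen-pixel assumption is the deliberately naive part:

    ```python
    PIXELS_1440P = 2560 * 1440        # 3,686,400 pixels per frame
    FILLRATE = 77.76e9                # 2070 rasterizer limit, in pixels per second

    # Naive ceiling: every screen pixel written exactly once per frame
    max_fps = FILLRATE / PIXELS_1440P
    print(round(max_fps))             # ~21094 theoretical frames per second

    # Pessimistic floor from the post: 1 pixel per clock per rasterizer
    floor_fps = 3 * 1620e6 / PIXELS_1440P
    print(round(floor_fps))           # ~1318 fps, still far above real results
    ```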
     
  17. Benetanegia

    Regular

    Joined:
    Sep 4, 2015
    Messages:
    394
    Likes Received:
    425
    Because things are far more complicated than you think. First of all, it's not as if everything goes down in one pass from triangles to the final pretty pixels. There are many passes writing pixels to dozens of render targets, multiple times per frame. To give you an idea, IMO this Doom 2016 rendering blogpost remains the best freely available example of what a modern game does: http://www.adriancourreges.com/blog/2016/09/09/doom-2016-graphics-study/
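    As a toy illustration of the point: the pass names and write counts below are made-up placeholders, loosely in the spirit of the Doom study linked above, not measured numbers.

    ```python
    PIXELS_1440P = 2560 * 1440

    # Each pass writes some multiple of screen resolution to its render targets
    passes = {
        "depth prepass":     1.0,
        "g-buffer (4 RTs)":  4.0,
        "shadow maps":       3.0,
        "lighting":          1.0,
        "transparency":      2.0,
        "post-processing":   5.0,
    }

    multiplier = sum(passes.values())
    writes_per_frame = multiplier * PIXELS_1440P
    print(f"~{multiplier:.0f}x screen resolution written per frame")
    # So a "21000 fps" fillrate ceiling shrinks by over an order of magnitude
    # before even counting overdraw, blending, and bandwidth limits.
    ```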

    From the conclusion:

    Even with no knowledge at all, you should have realized that something was really wrong with your math results, or rather with the conclusions you drew from them - that there must be something huge missing in your reasoning. Or did you honestly think that they put in thousands of times more pixel fillrate than is truly necessary?
     
    Pete likes this.
  18. xpea

    Regular

    Joined:
    Jun 4, 2013
    Messages:
    551
    Likes Received:
    783
    Location:
    EU-China
    English is not my native tongue, but I'm pretty sure that "almost" is the wrong word when (as of today) you can't play these:
    Call of Duty (RT + DLSS)
    Cyberpunk (RT + DLSS)
    Fortnite (RT + DLSS)
    Watchdogs (RT + DLSS)
    Vampire masquerade (RT + DLSS)
    Crysis remastered (RT + DLSS)

    6 out of 7...
     
    Cuthalu likes this.
  19. NightAntilli

    Newcomer

    Joined:
    Oct 8, 2015
    Messages:
    104
    Likes Received:
    131
    Well, yeah, I realized something was wrong, but I decided to post it anyway, to see what the rebuttal would be. Helps me learn too.

    Obviously you're not going to rasterize a final frame image in one go. Many passes are rasterized and then combined to form the final image, as far as I understand it. I doubt it would be equivalent to thousands of frames per second though. I would be surprised if it exceeded 16.
    But in any case, what I am really asking for is some evidence that the GPixel/s figures you provided actually are a bottleneck.
     
  20. techuse

    Veteran

    Joined:
    Feb 19, 2013
    Messages:
    1,426
    Likes Received:
    909