Speculation: GPU Performance Comparisons of 2020 *Spawn*

Discussion in 'Architecture and Products' started by eastmen, Jul 20, 2020.

  1. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    9,236
    Likes Received:
    4,259
    Location:
    Guess...
    I think it was in the deep dive that went through those Wolfenstein frame-time slides. It stuck with me because, like you, I would have thought this would be automatic. But apparently not.

    It seems that, if fully utilised, Ampere packs an enormous amount of potential. But many of its capabilities require specific game support, so it'll be interesting to see how much of that potential is able to shine.

    It'd certainly be a monster in a console.
     
    PSman1700 likes this.
  2. PSman1700

    Legend

    Joined:
    Mar 22, 2019
    Messages:
    7,118
    Likes Received:
    3,092
    I think UE5 and other modern engines will do very well on these new GPUs; it's a generational shift towards this kind of rendering.
     
  3. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,242
    Likes Received:
    3,405
    It's automatic in the sense that if you have two asynchronous, overlapping workloads, they will run simultaneously instead of serially. But to actually take advantage of that, your code must contain such workloads, and last time I checked most games didn't use RT or tensor cores, simultaneously or otherwise.
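    As a rough illustration of the principle (purely made-up per-frame numbers, not measurements from any real game, written as a Python back-of-envelope), overlap only helps to the extent that independent work actually exists in the frame:

```python
# Toy model, not profiling data: per-frame GPU time in ms for three hypothetical
# workloads that could in principle run concurrently on Ampere.
shading_ms = 8.0   # assumed FP32/INT shading work
rt_ms      = 3.0   # assumed RT-core traversal/intersection work
tensor_ms  = 1.5   # assumed tensor-core work (e.g. a DLSS-style upscaling pass)

# If everything is submitted back-to-back with dependencies, the costs add up.
serial_ms = shading_ms + rt_ms + tensor_ms

# If the engine actually exposes independent, overlappable work, the best case
# trends towards the longest single workload (ignoring contention for caches,
# bandwidth and schedulers, which shrinks the real-world gain).
ideal_overlap_ms = max(shading_ms, rt_ms, tensor_ms)

print(f"serial: {serial_ms} ms, ideal overlap: {ideal_overlap_ms} ms")
```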
     
    pharma, PSman1700 and pjbliverpool like this.
  4. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,714
    Likes Received:
    2,135
    Location:
    London
    Doubling the 5700XT bars at 4K in 3080 reviews is ...

    ... fun.
     
  5. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,976
    Likes Received:
    5,213
  6. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    15,134
    Likes Received:
    7,679
  7. It's exactly what I've been doing. 2x Navi 10 is what I'm expecting for rasterization performance.

    Though I'm guessing NVIDIA will release a 3080 20GB as soon as October 28th comes around, and in good NVIDIA fashion I bet they'll enable one more SM for the 20GB model, like they did with the GTX 1060 3GB/6GB.
     
    yuri and BRiT like this.
  8. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,714
    Likes Received:
    2,135
    Location:
    London
    I would expect the 20GB variant to be $200 more.

    EDIT: GDDR6X module availability is a question though, isn't it? 2GB modules aren't going to be available this year, are they? So the possibilities for a 3080 better than what's just been launched appear to be:
    1. 3080 12GB
    2. 3080 one more SM
    3. 3080 one more SM and 12GB
    4. 3080 20GB (this year?)
    5. 3080 one more SM and 20GB (this year?)
    AMD could price the 16GB model higher than the RTX 3080 10GB, e.g. $800.

    AMD isn't going to persuade die-hard NVidia/G-sync users anyway, so there's no point in being "aggressive" on price with a 16GB card. Most Pascal users aren't waiting to find out whether they'll be buying AMD; they're waiting to find out the "real" price-performance of Ampere.

    Obviously this presumes that the biggest Navi 21 is at least roughly the same performance as the 3080.
     
    #548 Jawed, Sep 17, 2020
    Last edited: Sep 17, 2020
    DavidGraham and BRiT like this.
  9. It doesn't really matter if 2GB chips only arrive in 2021. NVIDIA's purpose come October 28th will be to rain on AMD's parade, even if the RTX 3080 20GB (3080 Ti? Super?) isn't coming until March.
     
  10. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,242
    Likes Received:
    3,405
    Yeah, this isn't going to happen. NV won't announce anything that won't be available within a week or two of the announcement date.
     
  11. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,714
    Likes Received:
    2,135
    Location:
    London
    Hang on, aren't 2GB modules required for 3090, which is releasing this year?
     
  12. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,242
    Likes Received:
    3,405
    It'll have 24 chips split across both sides of the PCB. Probably one of the reasons it will be $1500.
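    (A quick sketch of the arithmetic behind that, assuming the 1GB GDDR6X modules that are the only density shipping this year:)

```python
# Rough sketch of why a 24 GB GDDR6X card ends up with chips on both sides of
# the PCB when only 1 GB (8 Gb) modules are available.
bus_width_bits     = 384    # GA102's full memory interface
bits_per_chip      = 32     # one GDDR6X device per 32-bit channel
module_capacity_gb = 1      # 8 Gb modules

channels        = bus_width_bits // bits_per_chip       # 12 channels
single_sided_gb = channels * module_capacity_gb         # 12 GB with one chip per channel
clamshell_chips = channels * 2                          # two chips share each channel
clamshell_gb    = clamshell_chips * module_capacity_gb  # 24 GB, 12 chips per PCB side

print(channels, single_sided_gb, clamshell_chips, clamshell_gb)
```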
     
    Jawed likes this.
  13. Rootax

    Veteran

    Joined:
    Jan 2, 2006
    Messages:
    2,401
    Likes Received:
    1,845
    Location:
    France
    I wonder how they will solve the bandwidth problem. I mean, AMD/NVIDIA can deliver big increases in raw performance, but memory bandwidth doesn't improve much (for a given price)... Have we seen any recent patents about bandwidth saving?
     
  14. Except for the RTX 3090, which was announced three and a half weeks ago and is only being made available tomorrow?

    Well, there are rumors of RDNA2 GPUs coming with relatively narrow GDDR6 buses (256-bit on Big Navi?), with AMD increasing effective bandwidth by using a large amount of on-chip cache.
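    As a very rough sketch of how a big on-chip cache stretches a narrow bus (the hit rate below is an arbitrary illustrative number, not a claim about any announced chip):

```python
# Very rough effective-bandwidth estimate for a large on-chip cache sitting in
# front of a narrow GDDR6 bus.
dram_bandwidth_gbs = 512.0   # e.g. 256-bit GDDR6 at 16 Gbps
cache_hit_rate     = 0.5     # assumed fraction of traffic served from on-chip SRAM

# Only misses reach DRAM, so the same bus can sustain 1 / (1 - hit_rate) times
# as much front-end traffic before it saturates.
effective_bandwidth_gbs = dram_bandwidth_gbs / (1.0 - cache_hit_rate)

print(effective_bandwidth_gbs)   # 1024.0 GB/s "seen" by the shader cores at a 50% hit rate
```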
     
  15. T2098

    Newcomer

    Joined:
    Jun 15, 2020
    Messages:
    55
    Likes Received:
    115
    My vote is actually for a sixth option: a 3080 with one more SM and a 352-bit bus with 11GB.
    That lets 1080 Ti and 2080 Ti owners heave a sigh of relief rather than buy a card with less VRAM than their existing one, still lets NVIDIA harvest chips with one memory controller disabled, and splits the difference on VRAM costs as well.
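    (For reference, a 352-bit configuration is GA102 with one of its twelve 32-bit controllers disabled, the same cut the 2080 Ti used on TU102; a quick sketch of the numbers:)

```python
# Sketch of the arithmetic behind the hypothetical 352-bit / 11 GB option.
full_bus_bits   = 384   # GA102's full memory interface
controller_bits = 32    # width of one memory controller / channel
module_gb       = 1     # 1 GB GDDR6X modules, single-sided

controllers_enabled = full_bus_bits // controller_bits - 1    # 11 of 12 enabled
cut_down_bus_bits   = controllers_enabled * controller_bits   # 352-bit
vram_gb             = controllers_enabled * module_gb         # 11 GB, matching a 1080 Ti / 2080 Ti

print(cut_down_bus_bits, vram_gb)
```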
     
  16. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,242
    Likes Received:
    3,405
    A different model of the same card as the 3080. If you were looking for a nitpick, the 3070 would be a much better option.

    I'm about 90% sure at the moment that these rumors are based on ROP counts, which aren't tied to memory controllers in RDNA2.
     
  17. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,714
    Likes Received:
    2,135
    Location:
    London
    Added RT and presumably fixed the shitty performance per watt (the 5700XT has worse PPW than the 2080 Ti).

    Do you think it's more than that?
     
    PSman1700 likes this.
  18. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    10,244
    Likes Received:
    4,465
    Location:
    Finland
    Well, I wouldn't call a little over 10% worse perf per watt "shitty", especially when other variants of the same chip offer PPW as good as NVIDIA's best Turings.
    As for Navi 21 itself, while the architecture apparently hasn't gone through changes as major as RDNA did over GCN, it's still a far cry from being just a "tweaked chip" when it has twice the compute resources and who knows what other changes. (We know at least that the chip supports both HBM and GDDR memory, and that 256-bit GDDR6 will never be anywhere near fast enough for it unless they've found the holy grail of memory bandwidth savings, so there has to be something major going on there.)
     
    Lightman likes this.
  19. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,058
    Likes Received:
    3,116
    Location:
    New York
    Are you expecting some secret sauce in discrete RDNA2 that’s not present in the consoles? I would bet the answer is simply that the 256-bit rumor is nonsense.
     
    PSman1700 likes this.
  20. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    10,244
    Likes Received:
    4,465
    Location:
    Finland
    I don't really know what to expect.
    I mean, it's highly unlikely that AMD would list false information in their drivers this close to release, which would mean it has both GDDR and HBM support, which is so far unheard of (to the point where they previously did two practically identical chips with just different memory controllers, Navi 10 and Navi 12).
    Then there are the 16 L2 cache slices, which suggest 256-bit GDDR memory controllers (I think only a couple of Xbox SoCs have taken a different route here, with a crossbar?), and which we know can't be enough for an 80 CU RDNA(2) chip (unless they found the holy grail, which I think we can agree is quite unlikely).
    So there needs to be another explanation: either it's right in front of us (using both GDDR and HBM), no matter how unconventional that sounds, or it's some as-yet-unknown solution (512-bit? 384-bit?) plus a crossbar to deal with the L2 slices, or some third explanation I can't come up with.
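    A back-of-envelope check of why 256-bit GDDR6 looks like a non-starter for 80 CUs without some big bandwidth-saving trick (assuming bandwidth demand scales roughly with CU count, which is a simplification rather than a measurement):

```python
# Naive scaling check: Navi 10's bandwidth-per-CU extrapolated to an 80 CU part.
navi10_cus           = 40
navi10_bandwidth_gbs = 448.0          # 5700 XT: 256-bit GDDR6 at 14 Gbps

big_navi_cus = 80
naive_demand = navi10_bandwidth_gbs * (big_navi_cus / navi10_cus)   # ~896 GB/s "wanted"

gddr6_256bit_16gbps = 256 / 8 * 16.0   # = 512 GB/s, a realistic ceiling for that bus

print(naive_demand, gddr6_256bit_16gbps)
# ~896 GB/s vs 512 GB/s -> hence the speculation about a wider bus, HBM, or a
# large on-chip cache making up the difference.
```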
     