Speculation: GPU Performance Comparisons of 2020 *Spawn*

Discussion in 'Architecture and Products' started by eastmen, Jul 20, 2020.

  1. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    7,841
    Likes Received:
    1,160
    Location:
    Guess...
    Fingers crossed for that, as it should mean a 3070 (which is likely what I'll be getting) is roughly as fast as a 2080 Ti, and hopefully even faster in RT and Tensor/DLSS performance.

    Given I'll be getting a WQHD monitor to go with it, which has only about 60% of the pixels of 4K, and given that DLSS in Quality mode at that resolution renders internally at roughly 1080p's pixel count, in DLSS-enabled games I should basically have 2080 Ti-level performance to throw at 1080p while maintaining image quality at or above native 3440x1440. Ultra-high frame rates, here I come!
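    A quick sanity check of that pixel math (a sketch; the 2/3-per-axis internal scale for DLSS Quality mode is the commonly cited figure, assumed here rather than stated in the thread):

```cpp
#include <cstdio>

int main() {
    // Pixel counts for the resolutions discussed above.
    const double uwqhd = 3440.0 * 1440;   // ultrawide WQHD
    const double uhd   = 3840.0 * 2160;   // 4K UHD
    const double fhd   = 1920.0 * 1080;   // 1080p

    // Assumption: DLSS "Quality" renders at ~2/3 of the output on each axis.
    const double internal = (3440.0 * 2 / 3) * (1440.0 * 2 / 3);

    std::printf("3440x1440 as a share of 4K: %.0f%%\n", 100 * uwqhd / uhd);        // ~60%
    std::printf("DLSS Quality internal vs 1080p: %.0f%%\n", 100 * internal / fhd); // ~106%
}
```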
     
  2. Bondrewd

    Regular Newcomer

    Joined:
    Sep 16, 2017
    Messages:
    771
    Likes Received:
    361
    It is.
     
    PSman1700 and pjbliverpool like this.
  3. techuse

    Regular Newcomer

    Joined:
    Feb 19, 2013
    Messages:
    286
    Likes Received:
    165
    A 50% tier-on-tier gain has always seemed like a reasonable upper bound on the general performance increase, IMO.
     
    PSman1700 likes this.
  4. PSman1700

    Veteran Newcomer

    Joined:
    Mar 22, 2019
    Messages:
    2,739
    Likes Received:
    923
    Yes, but it still seems like a lot. All things considered, that would mean a 3070 is as fast as a 2080 Ti in raw raster performance, which is awesome. Then we also get more advanced features, faster ray tracing, and upgraded tensor hardware. I think NV is really going all in; they know they've got a bit more competition from AMD.
     
  5. techuse

    Regular Newcomer

    Joined:
    Feb 19, 2013
    Messages:
    286
    Likes Received:
    165
    It could, though it may of course fall lower.
     
  6. Bondrewd

    Regular Newcomer

    Joined:
    Sep 16, 2017
    Messages:
    771
    Likes Received:
    361
    Nah.
    Prep your popcorn for this and the next year.
    "A bit" is, uh, a very harsh understatement, to put it rather bluntly.
    Nah.
    But you pay the power tax.
     
  7. PSman1700

    Veteran Newcomer

    Joined:
    Mar 22, 2019
    Messages:
    2,739
    Likes Received:
    923
    Well, we don't know anything about RDNA2 yet or how it stacks up against Ampere. It seems RDNA2 will be more of a match for Turing, or somewhere in between. That's also bad, as NV's prices still won't come down.
     
  8. Bondrewd

    Regular Newcomer

    Joined:
    Sep 16, 2017
    Messages:
    771
    Likes Received:
    361
    Yes we do.
    Well enough for beancounters at AMD to not be bored.
    Oh noes not even close.
    See, bad mentality.
    You will be reeducated.
     
  9. PSman1700

    Veteran Newcomer

    Joined:
    Mar 22, 2019
    Messages:
    2,739
    Likes Received:
    923
    Calm down lol, I want AMD to be competitive again in the gaming segment. As a PC gamer at heart, the way things are now, current GPU pricing is way too expensive. AMD has promised a lot by now, but still hasn't really made a comeback.
    If RDNA2 can compete with Ampere, prices will come down. Give me that 18+ TF full RDNA2 GPU, competitive with whatever NV is going to come out with, on all levels including RT.
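    For context on where a figure like "18+ TF" comes from: theoretical FP32 throughput is lanes x 2 (for FMA) x clock. The CU count and clock below are hypothetical, chosen only to illustrate the arithmetic:

```cpp
#include <cstdio>

int main() {
    // Hypothetical RDNA-style configuration -- illustrative, not a leak.
    const int    cus       = 80;    // assumed compute units
    const int    lanes     = 64;    // FP32 lanes (stream processors) per CU
    const int    flops_fma = 2;     // an FMA counts as two FLOPs
    const double clock_ghz = 1.8;   // assumed boost clock

    // ops/cycle * GHz gives GFLOPS; divide by 1000 for TFLOPS.
    const double tflops = cus * lanes * flops_fma * clock_ghz / 1000.0;
    std::printf("theoretical FP32 throughput: %.1f TFLOPS\n", tflops);  // 18.4
}
```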
     
  10. Bondrewd

    Regular Newcomer

    Joined:
    Sep 16, 2017
    Messages:
    771
    Likes Received:
    361
    Oh shite, Intel announced a 6-12 month delay on their 7nm.
    Can I finally bury them?

    Then expect the best of them; for they only desire to deliver the best.
    Wish granted, and then some.
     
    PSman1700 likes this.
  11. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,608
    Likes Received:
    664
    Location:
    New York
    I don't know about you guys but I'm officially HYPED!
     
    Lightman and DegustatoR like this.
  12. Bondrewd

    Regular Newcomer

    Joined:
    Sep 16, 2017
    Messages:
    771
    Likes Received:
    361
    Good, prep your popcorn for the next 2-3 years, where we'll see an intense GPU titanomachy between the two vendors.
    And Intel dying, but that's par for the course.
     
  13. P_EQUALS_NP

    Joined:
    Jun 17, 2020
    Messages:
    8
    Likes Received:
    1
    I wonder what this means for Ponte Vecchio and their HPC contracts. Will Aurora go CUDA instead?
     
  14. P_EQUALS_NP

    Joined:
    Jun 17, 2020
    Messages:
    8
    Likes Received:
    1
    I work as a systems programmer for a small HPC server system. When we upgraded our system to use accelerators early in the decade, we went with CUDA, because OpenCL at the time required a lot of proprietary extensions to vectorize anything, defeating the purpose of an open standard in the first place. So, to be fair, I haven't looked at OpenCL in years; has it gotten any better?
     
  15. Bondrewd

    Regular Newcomer

    Joined:
    Sep 16, 2017
    Messages:
    771
    Likes Received:
    361
    No lol, it's getting delayed.
    No, use HIP/SYCL.
    OCL is going places you'd rather not see, courtesy of Nvidia.
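    For anyone who last touched OpenCL years ago, a minimal SYCL 2020 vector add gives the flavor of what "use HIP/SYCL" means in practice. This is a generic sketch (it should build with DPC++ or hipSYCL), not code from this thread:

```cpp
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    constexpr size_t N = 1024;
    std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

    sycl::queue q;  // default selector: picks a GPU if one is available
    {
        // Buffers manage device copies and host synchronization.
        sycl::buffer<float> A(a.data(), sycl::range<1>{N});
        sycl::buffer<float> B(b.data(), sycl::range<1>{N});
        sycl::buffer<float> C(c.data(), sycl::range<1>{N});

        q.submit([&](sycl::handler& h) {
            sycl::accessor pa(A, h, sycl::read_only);
            sycl::accessor pb(B, h, sycl::read_only);
            sycl::accessor pc(C, h, sycl::write_only, sycl::no_init);
            // Single-source kernel: no separate kernel strings as in OpenCL C.
            h.parallel_for(sycl::range<1>{N}, [=](sycl::id<1> i) {
                pc[i] = pa[i] + pb[i];
            });
        });
    }  // buffer destruction waits for the kernel and copies results back

    std::cout << "c[0] = " << c[0] << "\n";  // 3
}
```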
     
  16. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,944
    Likes Received:
    2,288
    Location:
    Germany
    How so? Isn't it an open industry standard? Or are you referring to Nvidia not giving it the same amount of love they give CUDA?
     
  17. Bondrewd

    Regular Newcomer

    Joined:
    Sep 16, 2017
    Messages:
    771
    Likes Received:
    361
    That's more of a poison pill than a panacea.
    Yeah, being a third-class citizen even on *the* GPGPU vendor, along with Apple moving to Metal, doomed it.
    Also, OCL 2.x kinda went absolutely nowhere, especially with nV refusing to implement the important bits.
    But hey, SYCL lives!
    Even more than that, just look at Codeplay, or what Intel does with it, or hipSYCL and all.
     
  18. w0lfram

    Newcomer

    Joined:
    Aug 7, 2017
    Messages:
    219
    Likes Received:
    39
    Navi 2 has more "IPC" and also faster clocks than Navi 1. It also doesn't have a hybrid design, which frees up die space.

    Given the same number of transistors, how much faster is Navi 2 over Navi 1?
    I'd say... 30%+ faster.
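    As a sanity check on how a "30%+" figure can decompose: per-clock throughput and frequency gains multiply. The two factors below are hypothetical, picked only to illustrate the compounding:

```cpp
#include <cstdio>

int main() {
    // Hypothetical uplift factors -- illustrative only, not AMD numbers.
    const double ipc_gain   = 1.15;  // assumed per-clock ("IPC") gain
    const double clock_gain = 1.13;  // assumed clock-speed gain

    // Gains compound multiplicatively, not additively.
    const double combined = ipc_gain * clock_gain;
    std::printf("combined uplift: %.0f%%\n", (combined - 1.0) * 100);  // ~30%
}
```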
     
  19. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    11,107
    Likes Received:
    5,645
    What do you mean by hybrid design?
     
  20. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    10,863
    Likes Received:
    10,948
    Location:
    The North
    I can only suspect he is referring to this:
    https://www.sweclockers.com/nyhet/2...med-inslag-av-gcn-renodlad-rdna-forst-ar-2020

    quick stolen translated summary from wccftech:
    However, while Navi sounds like a completely new chip design from the ground up, Sweclockers reports that it still carries many things over from GCN. According to them, the GCN GPU architecture has been around for almost a decade, and AMD has made lots of gaming-side optimizations for it that building a new GPU from scratch would wash away. So some design aspects used in Navi GPUs make them a hybrid of RDNA and GCN; for those looking for a purely new GPU, the next iteration of Navi, rumored as Navi 20, will be the first completely RDNA design, with AMD shifting all of their gaming optimizations to the new enthusiast-grade chips.
     