Nvidia Turing Product Reviews and Previews: (Super, TI, 2080, 2070, 2060, 1660, etc)

Discussion in 'Architecture and Products' started by Ike Turner, Aug 21, 2018.

  1. CarstenS

    CarstenS Legend Subscriber

  2. pharma

    pharma Veteran

  3. pharma

    pharma Veteran

    Benchmarks: Premiere Pro with NVENC: Rendering videos in 20% of the time
    May 26, 2020

    https://www.hardwareluxx.de/index.php/news/software/anwendungprogramme/53246-premiere-pro-mit-nvenc-videos-rendern-in-20-der-zeit.html



     
  4. pharma

    pharma Veteran

  5. Malo

    Malo Yak Mechanicum Legend Subscriber

    That only took them a decade or so.
     
  6. pharma

    pharma Veteran

  7. pharma

    pharma Veteran

    Dual NVIDIA Quadro RTX 8000 Review with NVLink Performance
    July 6, 2020


    https://www.servethehome.com/dual-nvidia-quadro-rtx-8000-review-with-nvlink-performance/
     
  8. Frenetic Pony

    Frenetic Pony Regular

  9. trinibwoy

    trinibwoy Meh Legend

  10. CarstenS

    CarstenS Legend Subscriber

    When Turing appeared, its 12nm process was already in a very comfortable spot on the yield curve, partly due to its very close relationship to TSMC's 16nm process (some say it's virtually identical, effectively 16nm++). With a salvage part like the 2080 Ti, you could already afford up to five crippling defects across the SMs and memory controllers, so maybe there were not that many fully dysfunctional dies after all.
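    The salvage-part argument above can be sketched with a simple Poisson yield model: if point defects land randomly on the wafer, the chance that a large die like TU102 (roughly 754 mm²) escapes with few enough defects to ship as a cut-down SKU is much higher than the chance it is fully defect-free. The defect density below is a placeholder guess, not a known figure for TSMC 12nm:

    ```python
    import math

    def poisson_yield(area_cm2: float, d0_per_cm2: float, max_defects: int) -> float:
        """Probability that a die of the given area collects at most
        `max_defects` point defects, assuming a Poisson defect model."""
        lam = area_cm2 * d0_per_cm2  # expected number of defects per die
        return sum(math.exp(-lam) * lam ** k / math.factorial(k)
                   for k in range(max_defects + 1))

    die_area = 7.54   # TU102 is roughly 754 mm^2 = 7.54 cm^2
    d0 = 0.2          # defects/cm^2 -- hypothetical value for a mature node

    perfect = poisson_yield(die_area, d0, 0)       # fully working dies
    salvageable = poisson_yield(die_area, d0, 5)   # up to 5 repairable defects
    ```

    Even with this made-up defect density, the gap between `perfect` and `salvageable` illustrates why disabling a few SMs and a memory controller recovers so many otherwise-lost dies.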
     
  11. trinibwoy

    trinibwoy Meh Legend

    Maybe. That would be pretty amazing for such a large chip even on a mature process.
     
  12. CarstenS

    CarstenS Legend Subscriber

    Yeah. And maybe, knowing how large the chip would end up anyway, Nvidia built some more fine-grained redundancy into the less-replicated areas of the chip as well.
     
  13. pharma

    pharma Veteran

    18-Way NVIDIA GPU Performance With Blender 2.90 Using OptiX + CUDA
    September 6, 2020


    https://phoronix.com/scan.php?page=news_item&px=Blender-2.90-18-NVIDIA-GPUs
     
  14. CarstenS

    CarstenS Legend Subscriber

    Some of the performance deltas there just don't make sense. 2070 vs. 2070S? Almost no improvement for 2080Ti over 2080S?
     
  15. dorf

    dorf Newcomer

    I think this has something to do with TU104 having 8 SMs per GPC, compared to TU102 and TU106 having 12 SMs per GPC.

    There are some TU104-based RTX 2060s out there which outperformed the regular TU106 variants in workstation tasks, but showed no difference in game benchmarks.

     
  16. CarstenS

    CarstenS Legend Subscriber

    You're right: with CUDA, performance seems grouped by the number of GPCs, while in OptiX the presence or absence of RT cores seems to play the more dominant role. Yet the 3-GPC 1660 Super sits at the level of the 4-GPC 1070/1080.
    That's quite strange, given that Cycles is a path tracer.
     
  17. arandomguy

    arandomguy Regular Newcomer

    A factor here could be that some GPUs (e.g. the 2080 Ti) are actually power limited in this usage scenario. It would be interesting to see power usage, clock rates, and utilization measurements during these tests, and also in comparison to a gaming workload. The 2080 Ti, for instance, has much more hardware than the 2080 Super relative to the power budget available to each (almost the same).

    As an aside, I have a problem with how power consumption is tested (including for CPUs), which also influences how people look at power consumption. With how it's mostly done currently, what you're really testing and showing is just the behavior of the GPU's (or CPU's) power limiter. That in turn creates the illusion that all workloads draw the same amount of power and behave the same way.
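    One way to see per-workload behaviour rather than just the limiter ceiling is to log power and clocks over time, e.g. with `nvidia-smi --query-gpu=power.draw,clocks.gr --format=csv,noheader -l 1` during a render, and then summarize the trace. A minimal sketch of the summarizing side; the sample lines are made up, not real measurements:

    ```python
    def parse_sample(line: str) -> tuple[float, int]:
        """Parse one CSV row like '199.54 W, 1905 MHz' as emitted by
        `nvidia-smi --query-gpu=power.draw,clocks.gr --format=csv,noheader`."""
        power_s, clock_s = line.split(",")
        return float(power_s.strip().split()[0]), int(clock_s.strip().split()[0])

    def summarize(lines: list[str]) -> tuple[float, float]:
        """Return (average watts, average graphics clock in MHz) for a trace."""
        samples = [parse_sample(line) for line in lines]
        avg_w = sum(p for p, _ in samples) / len(samples)
        avg_mhz = sum(c for _, c in samples) / len(samples)
        return avg_w, avg_mhz

    # Hypothetical one-second samples captured during a render run:
    trace = ["198.2 W, 1920 MHz", "201.5 W, 1905 MHz", "196.9 W, 1935 MHz"]
    avg_w, avg_mhz = summarize(trace)
    ```

    Comparing such averages between, say, a Cycles render and a game would show directly whether a card is parked at its power limit or leaving headroom on the table.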
     
  18. CarstenS

    CarstenS Legend Subscriber

    I checked a 2080 Ti FE under Windows 10 in the classroom scene. It averages just shy of 200 watts in CUDA and around 175 watts via OptiX, consistently boosting to north of 1900 MHz. Power budget should not be the main culprit here.
     
  19. Digidi

    Digidi Regular

    The GPC and cache saw some major improvements between the 1660 Super and the 1070. If you look at this list, it looks heavily frontend-bound.

    That's also why the 2080 Ti sits in front of the Titan: its clock speed is higher.
     
  20. CarstenS

    CarstenS Legend Subscriber

    Not so sure about that. At least compared to the Founders Edition (1635 MHz), the Titan RTX clocks higher (1770 MHz) on paper. And it has not only more ALUs, but also more control logic for them (72 vs. 68 SMs). Maybe though those were not
     