Nvidia Turing Product Reviews and Previews: (Super, TI, 2080, 2070, 2060, 1660, etc)

Discussion in 'Architecture and Products' started by Ike Turner, Aug 21, 2018.

  1. JoeJ

    Regular Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    390
    Likes Received:
    473
    Probably a misunderstanding.
    AFAIK, tensor program flow is controlled by the regular compute cores, but the heavy math operations are executed on the tensor units (same as with RT cores). Tensor cores cannot do any control logic; they can only do math.
    Also, the diagram above does not make much sense to me; the term 'int32 shading' alone really is a bit pointless.

    The better question is: when a warp issues tensor or RT commands, does it become available for other work while waiting?
    If yes, then not only can DLSS or RT run alongside other async compute or rendering tasks, the warps working on RT / DLSS could also help with those tasks while waiting.
    Even if we could see an analysis of a BFV / Metro frame and DLSS ran alone as in the diagram, that would not prove overlap is impossible. (Doing such stuff async often hurts performance due to cache thrashing or other bottlenecks, so the devs decide against async.)

    Edit: To be clearer, DLSS cannot run 'for free'. Even if it could, it would at least share bandwidth with other tasks.
     
    w0lfram and jlippo like this.
  2. Xmas

    Xmas Porous
    Veteran Subscriber

    Joined:
    Feb 6, 2002
    Messages:
    3,294
    Likes Received:
    132
    Location:
    On the path to wisdom
    Not a proof of the actual hardware implementation, but the PTX ISA has fp16x2 instructions for fma/add/sub/mul/neg and comparisons (which is mostly subtraction with NaN detection and a tiny amount of bit logic), starting from Jetson TX1 and Pascal.
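    For illustration, that compare-as-subtraction-plus-NaN-detection behavior can be sketched in Python. This is a toy model of IEEE "ordered" compare semantics applied lane-wise to a packed pair, not the actual PTX fp16x2 instruction encoding; the function names are made up for this sketch.

    ```python
    import math

    def f16_lt(a: float, b: float) -> bool:
        """Toy ordered less-than: any NaN operand makes the
        comparison 'unordered', so the result is False."""
        if math.isnan(a) or math.isnan(b):   # the NaN-detection step
            return False
        return (a - b) < 0.0                 # compare via subtraction

    def f16x2_lt(av, bv):
        """Apply the scalar compare lane-wise, fp16x2 style."""
        return tuple(f16_lt(a, b) for a, b in zip(av, bv))
    ```

    For example, `f16x2_lt((1.0, float('nan')), (2.0, 0.0))` yields `(True, False)`: the first lane compares normally, while the NaN in the second lane forces an unordered (false) result.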
     
  3. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    2,825
    Likes Received:
    1,541
    NVIDIA Quadro RTX 4000 Review: Turing Powered Pro Graphics
    February 27, 2019
    https://hothardware.com/reviews/nvidia-quadro-rtx-4000-review?page=1


    If you are going to use SOLIDWORKS 2019, make sure your GPU is certified for the application you will be using.
    https://www.javelin-tech.com/blog/2018/11/solidworks-2019-hardware-recommendations/
     
  4. AlBran

    AlBran Ferro-Fibrous
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    20,582
    Likes Received:
    5,680
    Location:
    ಠ_ಠ
    nervous twitch.gif
    chainsaw.gif
    hug.gif
    shifty.gif


    Thread-bans will be duly considered if the pooping continues.

    kthnxbai


    ninja.gif
     
    #764 AlBran, Mar 1, 2019
    Last edited: Mar 1, 2019
    pharma likes this.
  5. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,693
    Likes Received:
    2,421
    Am I weird if that post cracked me up so much? :lol:
     
    A1xLLcqAgt0qc2RyMz0y and pharma like this.
  6. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    2,825
    Likes Received:
    1,541
    No Slowing Down: How TITAN RTX Brings High-Quality Images to Gameplay Design
    February 11, 2019
    https://blogs.nvidia.com/blog/2019/02/11/titan-rtx-brings-high-quality-images-to-gameplay-design/
     
  7. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,396
    Likes Received:
    10,764
    Location:
    Under my bridge
    Which IMO is what RTX is really about and for. 16 seconds is ridiculously long for realtime applications, but for professional imaging and content creation, it's an insanely beneficial improvement.
     
  8. A1xLLcqAgt0qc2RyMz0y

    Regular

    Joined:
    Feb 6, 2010
    Messages:
    973
    Likes Received:
    264
    Just checking to see whether this forum (and others) is still live, as the last post here was 8 days ago.

    The GTX 1660 was released today and lots of reviews went live, but not even a peep here.
     
  9. Malo

    Malo Yak Mechanicum
    Legend Veteran Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    6,893
    Likes Received:
    2,957
    Location:
    Pennsylvania
    Well it's not exactly an exciting product. Not that it's bad in any way really, just... yawn.

     
  10. Kyyla

    Veteran

    Joined:
    Jul 2, 2003
    Messages:
    990
    Likes Received:
    272
    Location:
    Finland
    Everybody is just waiting for the 7nm GPUs. I know I am.
     
    Kej, Heinrich4, Jozape and 2 others like this.
  11. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,719
    Likes Received:
    4,380
    I think the mobile versions might bring the largest step up.
    The GTX 1650 might bring GTX 1060 performance to the GTX 1050/1050Ti bracket (sub-$900 laptops), and that's a very nice improvement.

    Regardless, 12nm isn't working any wonders here.
     
  12. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    708
    Likes Received:
    279
    The first Nvidia 7nm GPUs, I estimate, will be announced in a couple of days at GTC.
    These Ampere (?) GPUs, however, will be targeted at HPC / NN training, like Volta was.
     
    Lightman and Ike Turner like this.
  13. BRiT

    BRiT (╯°□°)╯
    Moderator Legend Alpha Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    12,145
    Likes Received:
    8,296
    Location:
    Cleveland
  14. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,719
    Likes Received:
    4,380
    So yesterday nVidia launched the desktop GTX 1650, a TU117 with 14 SMs enabled, and the mobile GTX 1650, a TU117 with all 16 SMs enabled.
    It uses a 128-bit bus with 4GB of GDDR5.
    They're announcing it as a replacement for the GTX 950, and they're selling the desktop part for $150/150€.

    Apparently nVidia decided to block all pre-release reviews by not sending review units to anyone, and by not releasing any supporting driver until the cards were on the shelves.
    In the meantime, reviews started popping up here and there using retail units/drivers, and the general sentiment is that the desktop 1650 is hard to recommend. It costs more than the RX570 4GB, which goes for €130 and is substantially faster, and it costs the same as the RX570 8GB, which is not only faster but also more future-proof. Of course the new card consumes a lot less power, but an RX570 only needs a 400W PSU anyway, so it hardly makes any difference. Maybe that's why nVidia tried to block the reviews.

    OTOH, the mobile version might be a bit more interesting because the new chip is going into laptops that previously had the GTX1050/1050Ti, so those will be getting a "free" performance upgrade.

    What puzzles me the most is that this new TU117 chip is 200mm². That's the exact same size as the GP106 in the GTX 1060, which performs significantly better.
    So, similar to what we saw with TU116 vs GP104+GDDR5 (GTX 1660 Ti vs. GTX 1070 Ti), again we see a Turing card with worse performance/area than its Pascal predecessor. This seems to happen because they're trading the Turing SMs' higher transistor count / die area for a lower SM count and fewer PHYs, and the end result is worse performance.
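    As a back-of-envelope check of the perf/area point: with equal die sizes, the perf/mm² ratio collapses to just the performance ratio. The ~30% figure below is an illustrative assumption for the GP106-vs-TU117 gap, not a measured number.

    ```python
    # Both dies are roughly 200 mm^2, so perf/mm^2 differs only by
    # raw performance. The 1.30 factor is an assumed (illustrative)
    # GTX 1060 advantage over the GTX 1650, not benchmark data.
    die_area_mm2 = 200.0
    perf_1650 = 1.00              # normalise the 1650 to 1.0
    perf_1060 = 1.30              # assumed relative 1060 performance

    perf_per_mm2_1650 = perf_1650 / die_area_mm2
    perf_per_mm2_1060 = perf_1060 / die_area_mm2

    # Equal areas cancel out, leaving the bare performance ratio:
    ratio = perf_per_mm2_1060 / perf_per_mm2_1650
    ```

    Under that assumption, Pascal comes out ~30% ahead in perf/mm² at this die size, which is the shape of the regression described above.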
     
  15. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    Honestly I think the pricing is just too high; it’s ridiculous that RX570 has a slightly larger die and a 256-bit memory bus but is actually cheaper right now.

    I think TU117 is cheaper/better for NVIDIA than GP106 because 128-bit/4GB is going to be a lot cheaper than 192-bit/6GB but the pricing doesn’t really reflect that right now.
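    That BOM intuition can be put into a toy cost model. Every dollar figure below is an invented placeholder; only the structure of the model (die cost, plus per-GB memory cost, plus per-32-bit-channel PCB/PHY cost) reflects the argument that 128-bit/4GB should be cheaper to build than 192-bit/6GB on a same-sized die.

    ```python
    # Toy board cost model: die + DRAM + bus-width-dependent PCB cost.
    # All numbers are made-up placeholders for illustration only.
    def board_cost(die_cost, mem_gb, cost_per_gb, bus_bits, cost_per_channel):
        channels = bus_bits // 32          # 32-bit memory channels
        return die_cost + mem_gb * cost_per_gb + channels * cost_per_channel

    # Same (hypothetical) die cost for both ~200 mm^2 chips.
    tu117 = board_cost(die_cost=30, mem_gb=4, cost_per_gb=7,
                       bus_bits=128, cost_per_channel=2)   # 128-bit / 4GB
    gp106 = board_cost(die_cost=30, mem_gb=6, cost_per_gb=7,
                       bus_bits=192, cost_per_channel=2)   # 192-bit / 6GB
    ```

    With these placeholders the TU117-style board comes out cheaper (66 vs 84 units) purely from 2GB less GDDR5 and two fewer 32-bit channels, which is the margin the retail pricing doesn't currently pass on.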

    As for die size, I suspect we’re seeing a combination of multiple things:
    1) Higher focus on perf/W than perf/mm2 at the architectural level, which makes a lot of sense for the high-end given thermal constraints and power cost in data centres, but isn’t so important in the low-end.
    2) Forward-looking compute features, e.g. the new SIMT model which I genuinely think is brilliant (but useless for existing workloads).
    3) Forward-looking graphics features, including mesh shaders and FP16 ALUs. It’s pretty obvious that FP16 is a perf/mm2 loss on existing games but hopefully a gain in future content.
    4) Better memory compression resulting in higher area but lower total cost (cheaper memory) for a given level of performance. Unfortunately memory speeds/sizes/buses are quite coarse, so it’s impossible for every chip to hit the sweet spot or be comparable between generations.

    Anyway whatever the reasons, the reality remains that Turing’s perf/mm2 is disappointing. And their pricing is even more disappointing but at least that gives them some room for manoeuvre against Navi. Hopefully AMD’s perf/mm2 increases significantly in Navi which gives them a chance to finally catch up...
     
  16. Malo

    Malo Yak Mechanicum
    Legend Veteran Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    6,893
    Likes Received:
    2,957
    Location:
    Pennsylvania
    Nvidia have built a huge brand and don't need to worry about competing heavily on pricing even at the low end. Consumers will buy them anyway even if it's a worse product.
     
    homerdog, Ike Turner and BRiT like this.
  17. Bondrewd

    Regular Newcomer

    Joined:
    Sep 16, 2017
    Messages:
    491
    Likes Received:
    212
    AMD is trying to burn off their inventory.
    GP106 also retailed for more.
    Plus laptops and OEM boxes.
     
    pharma and A1xLLcqAgt0qc2RyMz0y like this.
  18. yuri

    Newcomer

    Joined:
    Jun 2, 2010
    Messages:
    175
    Likes Received:
    146
    Not this again... The 1650 is the same low end as the 1050 was. It targets OEM pre-builts, notebooks, and the "gaming" market in developing countries. Having no PCIe power connector and low power consumption is a big win in those specific segments. Look how popular the 1050 was...
     
    pharma and A1xLLcqAgt0qc2RyMz0y like this.
  19. Malo

    Malo Yak Mechanicum
    Legend Veteran Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    6,893
    Likes Received:
    2,957
    Location:
    Pennsylvania
    Yep, I agree. Its power consumption is considerably better than the competition's, making it ideal for OEMs, laptops, etc. Note that the gaming versions built by MSI, Gigabyte etc. being reviewed, showing performance just below the 570, are versions with power connectors rather than 75W-limited versions, so they're clocked higher.

    I have a 1050 Ti without a power connector in my kids' gaming PC. It was great for the price.

    I'm not sure why this changes my opinion though? There are still boxed products at Best Buy etc. for these from major brands; it won't be limited to OEMs and China. I was merely commenting that Nvidia are in a position where they don't really have to compete on pricing.
     
    BRiT likes this.
  20. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    8,103
    Likes Received:
    1,795
    Location:
    Finland
    Considering the price they sell GP106-based products for, I really, really doubt that. 64 bits' worth of traces on a PCB and 1 to 2 extra 8Gb memory chips (for the 3GB & 6GB models) can't be expensive enough to cover the difference. The chips themselves are the same size, and even the 3GB 1060 still retails for €20 more than the 1650 (in Finland anyway, comparing the cheapest 1060 3GB listing to the cheapest 1650 listing; the 1650s seem to carry a slight price premium over the official MSRP, too).
     

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.