Nvidia Volta Speculation Thread

Discussion in 'Architecture and Products' started by DSC, Mar 19, 2013.

Tags:
  1. xpea

    Regular Newcomer

    Joined:
    Jun 4, 2013
    Messages:
    371
    Likes Received:
    303
because memory is only one (small) part of the power equation. For example, on your beloved MI25 300W accelerator board, HBM2 consumes only about 30 watts, i.e. 10%. The silicon is still by far the biggest power consumer, and Vega is behind Pascal, whatever the single AI benchmark AMD showed us may suggest, with their now-legendary fake results and an old version of CUDA. So I will wait for independent results before admitting the MI25 is faster than the now-discontinued P100, which is irrelevant anyway since the V100 is already on the market...
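The power split claimed above can be sanity-checked with quick back-of-the-envelope arithmetic (the 300 W board figure and ~30 W HBM2 figure are the post's own numbers, not measurements):

```python
# Back-of-the-envelope power budget for a 300 W accelerator board,
# using the figures quoted in the post (assumptions, not measurements).
BOARD_POWER_W = 300.0   # MI25 board power as quoted
HBM2_POWER_W = 30.0     # claimed HBM2 consumption

memory_share = HBM2_POWER_W / BOARD_POWER_W
everything_else_w = BOARD_POWER_W - HBM2_POWER_W

print(f"Memory share of board power: {memory_share:.0%}")            # 10%
print(f"Everything else (GPU silicon, VRMs, etc.): {everything_else_w:.0f} W")
```

Whether 10% is "small" is exactly what the two posters disagree about; the later reply argues the relevant baseline is GDDR5, not the board total.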

Check your facts. TPU2 is rated at 250W for a single chip. A blade of four TPU2s uses redundant 1500W PSUs. If you had ever looked at the MA-SSI-VE heatsink on the TPU2, you would have kept quiet. I'll save you a Google search with the picture below:

[image: TPU2 blade with large heatsinks]

See my reply above; so far we have a single biased AI compute benchmark presented by AMD, run on old Nvidia hardware with old software, and you still call it a win? As for gaming Vega, let's wait for independent AI benchmarks before any definitive conclusion, especially against the Volta range, the real competition.
    Then I remind you that the market is not just the single top chip; customers buy far more of the smaller cards (from both vendors; look at the Instinct range). That is why Tesla and Instinct are also available with GDDR5/5X memory, hence my remark. QED.
    Finally, we agree on one point: I won't spend any more of my time responding to your nonsense and FUD either. I still wish you a nice day :smile:
     
    #641 xpea, Sep 28, 2017
    Last edited: Sep 28, 2017
  2. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,789
    Likes Received:
    2,049
    Location:
    Germany
Apart from IHV launch decks - is there any actual data available? I mean substantial data, not abusing multi-billion-transistor monsters for DirectDraw 2D. I would be glad to see some independent analysis, since most outfits (ours included) do not have access to P100-class cards. IOW, measuring the chip's performance, not how artificially crippled its drivers are or whether it has an SSD on board. For the latter, I know the numbers, but they could have been achieved with a similarly outfitted Polaris GPU already, since nothing is as slow as going off-board.
     
    pharma likes this.
  3. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,702
    Likes Received:
    2,430
The only one linking marketing BS here is actually you, the only one obsessing about fantastical gains without any shred of evidence beyond vague and obscure marketing materials.

Vega only beats GP100 in an old AMD marketing slide, one that has been heavily criticized for its accuracy and validity. And we all know how credible AMD's marketing has been lately, with stunts like hiding fps, blind tests, selectively cherry-picking certain results, or outright lying (480 CF beating the 1080, "Premium VR", Vega/Fury X beating the 1080 in minimum fps), and the list goes on. They didn't even bother to repeat the test at Radeon Instinct's launch. All they did was post some lame theoretical FLOP-count comparisons on their official page, while declining to provide any verifiable external tests.

    So unless you have any confirmation from a "trustworthy" source, NO, Vega isn't beating GP100.
     
    A1xLLcqAgt0qc2RyMz0y likes this.
  4. gamervivek

    Regular Newcomer

    Joined:
    Sep 13, 2008
    Messages:
    714
    Likes Received:
    220
    Location:
    india
The half-baked software has always been there. Hawaii wouldn't have needed the jet-turbine cooler if AMD had had better drivers at the start; the 390 series saw massive tessellation improvements at launch, and the 7970 saw a massive improvement in BF3 almost a year after release. This is worse than Fermi, since AMD didn't have a Fermi-die-sized card at the time to pummel Fermi the way Nvidia has the 1080 Ti now, nor were they merely tied with the GTX 480. The point of the 'if' scenario is that AMD has to push their cards hard, and they take a big hit to efficiency. AMD would have much better standing with enthusiasts this round if the 1080 Ti weren't still 10% faster at stock than what Vega can do with UV/OC and more power.

That might be so, but I don't see the relevance: both the AMD and Nvidia cards I own show the same roughly 10% more performance for 30% higher power draw, while being very similar in stock performance and power draw, so reducing power draw on a stock 1070 wouldn't have the same impact.
     
  5. Anarchist4000

    Veteran Regular

    Joined:
    May 8, 2004
    Messages:
    1,439
    Likes Received:
    359
There aren't good third-party benchmarks, as that data tends to stay internal while companies run their own workloads. Other technologies aside, there is a subset of HPC (fluids, sparse matrices, particle sims, image manipulation) that falls in line with pure FLOPs. So Linpack gives a fairly accurate indication of performance and tracks theoretical numbers, hence the lack of custom benchmarks. The same goes for tensor hardware, with everyone designing systolic arrays, since the access patterns are highly predictable and repeated: essentially ultra-wide cascaded SIMDs. That is the very reason supercomputer clusters can scale to that many nodes.

    I haven't touched cluster work since college, but the math won't have changed. Those tools hit one bottleneck and hit it hard; that's just the nature of the data, since complex branching and behavior won't exist in many cases. If not TFLOPs, then memory capacity, because the problems/sims/systems end up huge once you get past classroom demos, turning into SANs and unified/distributed-memory systems that often fall back to storage arrays.
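The "hit one bottleneck and hit it hard" point is essentially the roofline model: attainable throughput is capped by either peak compute or memory bandwidth, depending on a kernel's arithmetic intensity. A minimal sketch, with purely illustrative numbers (not vendor data):

```python
# Minimal roofline-style estimate: attainable FLOP/s is the lesser of the
# compute roof and the memory roof (bandwidth * arithmetic intensity).
# All figures below are illustrative placeholders, not real hardware specs.

def attainable_gflops(peak_gflops, bandwidth_gbs, intensity_flops_per_byte):
    """Roofline bound: min(compute roof, memory roof)."""
    return min(peak_gflops, bandwidth_gbs * intensity_flops_per_byte)

peak, bw = 7000.0, 700.0  # hypothetical 7 TFLOPS peak, 700 GB/s bandwidth

# High-intensity kernels (dense Linpack-style) track the FLOPs roof.
print(attainable_gflops(peak, bw, 50.0))   # 7000.0 -> compute-bound
# Low-intensity kernels track the bandwidth roof instead.
print(attainable_gflops(peak, bw, 0.25))   # 175.0  -> memory-bound
```

This is why, as the post argues, theoretical peak numbers predict real performance fairly well for that class of workload: one roof dominates, and the workload pins itself against it.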

Raja did say raw performance was the largest factor in sales. The same applies to Nvidia, but they limit performance to position themselves safely ahead of the competition. UV/OC would generally benefit AMD more, simply due to where the parts sit on the exponential voltage/frequency curves. The real issue is software, not the underlying hardware. I really wouldn't be surprised if Nvidia had TBDR-style optimizations infringing some patents. Power consumption isn't necessarily worse, but it is deliberately trashed, as evidenced by the power-saving modes: negligible performance hits for double-digit power drops.
     
  6. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,789
    Likes Received:
    2,049
    Location:
    Germany
Ok, what a shame. So we're still left with more or less wild guesswork, which in my book includes trying to extrapolate from algorithmic knowledge.
     
    pharma likes this.
  7. Mize

    Mize 3dfx Fan
    Moderator Legend Veteran

    Joined:
    Feb 6, 2002
    Messages:
    5,048
    Likes Received:
    1,097
    Location:
    Cincinnati, Ohio USA
xpea and Anarchist4000: please refrain from personal insults and accusations of fanboyism. Thanks.
     
  8. Anarchist4000

    Veteran Regular

    Joined:
    May 8, 2004
    Messages:
    1,439
    Likes Received:
    359
You mean ~100W with GDDR5 — a full third or more of the power budget, which necessitated HBM in the first place. It's almost as if there is a reason everyone uses it on their high-end chips. Feel free to misconstrue the comparisons, though. GDDR5 products will have a difficult time in power-efficiency comparisons, not to mention density.

    Again, please check your facts, as you put it: that picture with P100s, Xeons, and whatever else is in it is a poor reference. Someone might almost mistake a P100 for a TPU2. You're seriously comparing a GPU with far more functionality to a product that is almost entirely tensor cores and expecting it to be vastly superior? The systolic arrays in the TPU2 are about as efficient as you can get, and the memory is comparable.
     
  9. xpea

    Regular Newcomer

    Joined:
    Jun 4, 2013
    Messages:
    371
    Likes Received:
    303
From Google's own blog (it cannot be more official):
    https://www.blog.google/topics/google-cloud/google-cloud-offer-tpus-machine-learning/
    you can find the exact same picture of the TPU2 blade board with the huge heat sinks attached to the TPU2s:
    [image: TPU2 blade board from Google's blog post]
    As we say in France, "There are none so blind as those who will not see" :no:

    Please, once and for all, accept the truth, apologize for your error, and move on; you will greatly benefit from it.
     
    Picao84, DavidGraham and pharma like this.
  10. rcf

    rcf
    Regular Newcomer

    Joined:
    Nov 6, 2013
    Messages:
    366
    Likes Received:
    302
    https://www.nextplatform.com/2017/05/22/hood-googles-tpu2-machine-learning-clusters/

     
  11. Geeforcer

    Geeforcer Harmlessly Evil
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,297
    Likes Received:
    464
Please keep in mind that the next set of TPU2 drivers is practically guaranteed to reduce the heat sink size by 35%, before making any comparison with competing products.
     
    xpea likes this.
  12. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    2,832
    Likes Received:
    1,541
    HPC Innovation Lab. September 27 2017
    http://en.community.dell.com/techcenter/b/techcenter
     
    #652 pharma, Sep 28, 2017
    Last edited: Sep 29, 2017
  13. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,789
    Likes Received:
    2,049
    Location:
    Germany
    Interesting, now we only need corresponding benchmarks with MI25.
     
    A1xLLcqAgt0qc2RyMz0y likes this.
  14. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    2,832
    Likes Received:
    1,541
Are the DGX-1 and DGX Station currently the only systems using NVLink 2? Curious, since the Dell benchmark article mentioned that the only hardware changes to the server were the GPUs.
     
  15. xpea

    Regular Newcomer

    Joined:
    Jun 4, 2013
    Messages:
    371
    Likes Received:
    303
Found an interesting page at Thinkmate where you can configure many rack servers, with prices:
    http://www.thinkmate.com/systems/se...ter&utm_campaign=ced8f9355e-NVIDIA-v100-Volta

If you click on the "GPX XT4-24S1 4NVLINK" model, you reach this page:
    http://www.thinkmate.com/system/gpx-xt4-24s1-4nvlink
From there you can add up to four "NVIDIA® Tesla™ V100 GPU Computing Accelerator - 16GB HBM2 - SXM2 NVLink" cards at $7,999 each, much lower than the initial $16k people were expecting. All in all, the V100 is only slightly more expensive than the P100, which gives Volta better performance per dollar than Pascal. It also suggests that yields are good, which is very surprising for such a mammoth chip!
     
  16. Rufus

    Newcomer

    Joined:
    Oct 25, 2006
    Messages:
    246
    Likes Received:
    60
    Alexko, ImSpartacus and pharma like this.
  17. Rufus

    Newcomer

    Joined:
    Oct 25, 2006
    Messages:
    246
    Likes Received:
    60
    CSI PC, xpea, nnunn and 1 other person like this.
  18. xpea

    Regular Newcomer

    Joined:
    Jun 4, 2013
    Messages:
    371
    Likes Received:
    303
Oracle joins the long list of cloud providers offering GPU-accelerated services. They are adding P100s and V100s to their racks:
    https://blogs.oracle.com/oracle-and-nvidia-provide-accelerated-compute-offerings

Another big win for Nvidia.
     
    Grall and pharma like this.
  19. ImSpartacus

    Regular Newcomer

    Joined:
    Jun 30, 2015
    Messages:
    252
    Likes Received:
    199
    This is technically "post-Volta", but I think this thread might be the next best place to share.

    https://www.anandtech.com/show/1191...-pegasus-at-gtc-europe-2017-feat-nextgen-gpus

130 TOPS in roughly 220 W is a pretty sizeable increase, considering the V100 does 120 TOPS in 300 W.
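The size of that efficiency jump follows from simple arithmetic (the ~220 W figure is the article's approximate number, so the result is a rough estimate):

```python
# Rough TOPS-per-watt comparison using the figures quoted in the post.
v100_tops, v100_w = 120, 300        # Tesla V100: ~120 TOPS at 300 W
nextgen_tops, nextgen_w = 130, 220  # next-gen Pegasus GPU: ~130 TOPS at ~220 W (approximate)

v100_eff = v100_tops / v100_w        # 0.40 TOPS/W
nextgen_eff = nextgen_tops / nextgen_w  # ~0.59 TOPS/W

print(f"V100:     {v100_eff:.2f} TOPS/W")
print(f"Next-gen: {nextgen_eff:.2f} TOPS/W")
print(f"Improvement: {nextgen_eff / v100_eff - 1:.0%}")  # ~48%
```

So on the quoted numbers, the next-gen part would be nearly half again as efficient per watt, though TOPS figures between generations aren't always measured the same way.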

     
    pharma likes this.
  20. Bondrewd

    Regular Newcomer

    Joined:
    Sep 16, 2017
    Messages:
    492
    Likes Received:
    212
    TSMC's N7+ will do the trick.
     
    el etro likes this.