Nvidia Volta Speculation Thread

Discussion in 'Architecture and Products' started by DSC, Mar 19, 2013.

  1. CSI PC

    CSI PC Veteran

    Clock parity has meaning if one has the other performance-envelope data to give it context, specifically power demand, voltage, and performance.
    But it makes more sense when shown as an envelope over a broad performance/frequency/power-demand range, which is what Tom's Hardware does.
     
    ieldra likes this.
  2. ieldra

    ieldra Newcomer

    Certainly worth considering, memory bandwidth as well at that point. The point is they all increase proportionally unless there is clock gating on the die, except for bandwidth, naturally. It's interesting to look at clock parity if the shader arrays are of similar size/throughput; Vega vs. Fiji comes to mind. Past that, I'm uncertain why clock parity would be a convenient way to look at things. Take Pascal vs. Maxwell, for instance: would you push Maxwell well beyond its efficient clock/voltage range, or push Pascal well into diminishing-returns territory in terms of efficiency? Either way it's going to skew your results in terms of perf/W etc. I guess if you underclock a card far enough you can remove any semblance of memory bandwidth limitations, so that's a bonus.
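    To make the diminishing-returns point concrete: dynamic power scales roughly with frequency times voltage squared, and higher clocks usually need more voltage, so perf/W falls off quickly outside a part's efficient range. A rough back-of-the-envelope sketch (the clock/voltage pairs below are made-up illustrative numbers, not measured Maxwell or Pascal data):

    ```python
    # Rough dynamic-power model: P ~ f * V^2 (the capacitance term is
    # folded into an arbitrary constant). Numbers are illustrative only.

    def rel_power(freq_mhz: float, volts: float) -> float:
        """Dynamic power relative to an arbitrary unit."""
        return freq_mhz * volts ** 2

    # Hypothetical operating points: (core clock in MHz, core voltage)
    points = [(1000, 0.85), (1200, 0.95), (1400, 1.05), (1500, 1.15)]

    base_f, base_v = points[0]
    for f, v in points:
        perf = f / base_f                    # assume perf scales ~linearly with clock
        power = rel_power(f, v) / rel_power(base_f, base_v)
        print(f"{f} MHz @ {v:.2f} V: {perf:.2f}x perf, "
              f"{power:.2f}x power, {perf / power:.2f}x perf/W")
    ```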
     
  3. Frenetic Pony

    Frenetic Pony Regular

    Huh, I'd heard from plenty of people it was just "run Linpack". Besides, the formula makes absolutely no sense and would in no way take IPC into account. I mean, what makes a "processor" anyway? A software thread, a hardware thread? Gigaflops is used for CPUs as well. The Top500 supercomputer list is Linpack, so I'm not seeing it; or if it's true, it's absurdly useless instead of only somewhat useless. Besides, Linpack gives you a "(G)FLOPS" number after its run. It's a benchmark of floating-point operations per second; theoretical input is useless compared to the most basic empirical test, which is what Linpack is.
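    To be concrete about "empirical": a crude stand-in for what Linpack reports is timing a dense matrix multiply, which does about 2*n^3 floating-point operations, and dividing by elapsed time. A minimal sketch using numpy (not Linpack itself, just the same idea):

    ```python
    import time
    import numpy as np

    # Crude empirical GFLOPS measurement in the spirit of Linpack:
    # time a dense matmul, which performs ~2*n^3 floating-point ops.
    n = 2048
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)

    a @ b                        # warm-up run, excluded from timing
    start = time.perf_counter()
    c = a @ b
    elapsed = time.perf_counter() - start

    flops = 2 * n ** 3           # multiply-adds in an n x n x n GEMM
    print(f"~{flops / elapsed / 1e9:.1f} GFLOPS achieved")
    ```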
     
    Last edited: Nov 5, 2017
  4. Infinisearch

    Infinisearch Veteran

    Nope, theoretical FLOPS for GPUs is the equation I posted. If you don't believe me, multiply Vega 64's peak clock * 4096 * 2 and you'll get the quoted 13.7 TFLOPS.
    If you still don't believe me: https://www.google.com/search?clien...graphic+card&sourceid=opera&ie=UTF-8&oe=UTF-8
    edit - that's a Google search for "how to compute gflops of a graphic card"
    Actually it sort of does... that's what the times two is for (IIRC, multiply and add). If you want IPC like CPU IPC, I suppose the best metric would be perf/TFLOP in a compute-only workload.
    IIRC Nvidia calls them CUDA cores and AMD calls them stream processors...
    Yeah, it never made much sense to me at first, but I got used to it after a while, especially after I started paying attention to console specs.
    I guess that's why it's "theoretical peak FLOPS": you'll only achieve/approach it in a micro-benchmark created to do just that. That's why I like the perf/GFLOP metric for GPUs, especially for compute workloads. But it's still useful even for game benchmarks.
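    Spelling that arithmetic out as a tiny sketch (the 1677 MHz figure is Vega 64's quoted boost clock; the times-two counts a fused multiply-add as two FLOPs):

    ```python
    # Theoretical peak FLOPS for a GPU: shader count * clock * 2,
    # where the 2 is the fused multiply-add (two FLOPs per cycle).

    def peak_tflops(shaders: int, clock_ghz: float, ops_per_clock: int = 2) -> float:
        """Theoretical peak single-precision TFLOPS."""
        return shaders * clock_ghz * ops_per_clock / 1000.0

    # Vega 64: 4096 shaders at a 1677 MHz boost clock
    print(f"{peak_tflops(4096, 1.677):.1f} TFLOPS")  # -> 13.7
    ```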
     
    Last edited: Nov 5, 2017
  5. BoMbY

    BoMbY Newcomer

    Search for "FlopsCL" and "FlopsCUDA" by Kamil Rocki; they measure the raw performance of GPUs, in GFLOP/s, pretty accurately.
     
  6. pharma

    pharma Veteran

  7. pharma

    pharma Veteran

    nnunn likes this.
  8. Bondrewd

    Bondrewd Veteran

  9. Ryan Smith

    Ryan Smith Regular

    NVIDIA announced the DGX Station before the Xeon-SP was announced. I'm sure they had technical reasons as well, but timing played a part.
     
  10. pharma

    pharma Veteran

    CUTLASS: Fast Linear Algebra in CUDA C++
    https://devblogs.nvidia.com/parallelforall/cutlass-linear-algebra-cuda/#more-8708
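    For a sense of what the article covers: CUTLASS structures GEMM as a hierarchy of tiles (threadblock, warp, thread) so each level keeps its working set in fast on-chip storage. A loose conceptual sketch of that blocking idea in plain Python; this shows only the tiling concept, not CUTLASS's actual CUDA C++ API:

    ```python
    import numpy as np

    # Conceptual sketch of blocked/tiled GEMM, the decomposition CUTLASS
    # applies at the threadblock/warp/thread levels. Illustration only.
    def tiled_gemm(a: np.ndarray, b: np.ndarray, tile: int = 64) -> np.ndarray:
        m, k = a.shape
        k2, n = b.shape
        assert k == k2
        c = np.zeros((m, n), dtype=a.dtype)
        for i in range(0, m, tile):          # tiles of the output C
            for j in range(0, n, tile):
                acc = np.zeros((min(tile, m - i), min(tile, n - j)), dtype=a.dtype)
                for p in range(0, k, tile):  # march along the K dimension
                    # In CUTLASS these sub-tiles are staged through shared
                    # memory and registers before the math is issued.
                    acc += a[i:i + tile, p:p + tile] @ b[p:p + tile, j:j + tile]
                c[i:i + tile, j:j + tile] = acc
        return c

    a = np.random.rand(256, 192).astype(np.float32)
    b = np.random.rand(192, 128).astype(np.float32)
    assert np.allclose(tiled_gemm(a, b), a @ b, atol=1e-3)
    ```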
     
    nnunn likes this.
  11. DavidGraham

    DavidGraham Veteran

    Last edited: Dec 8, 2017
    xpea, nnunn, Grall and 2 others like this.
  12. Bondrewd

    Bondrewd Veteran

    el etro likes this.
  13. swaaye

    swaaye Entirely Suboptimal Legend

    Would be neat to see some site benchmark it.
     
  14. CarstenS

    CarstenS Legend Subscriber

    Better at least than the last Titan carrying the 3K price-tag. :)
     
    iMacmatician and Clukos like this.
  15. Geeforcer

    Geeforcer Harmlessly Evil Veteran

    The most impressive part is how leak-proof Nvidia has been as of late. Nothing-nothing-bam: brand new card, in stock.
     
  16. Bondrewd

    Bondrewd Veteran

    Actually it was leaked ages ago (there was a photo of a card with a very similar-looking golden shroud from an Nvidia intern).
    So about as leak-proof as AMD (looks at Threadripper).
     
  17. Geeforcer

    Geeforcer Harmlessly Evil Veteran

    A single picture with no specs or anything else is hardly a leak: no one knew what the heck that thing really was. Heck, the Titan Xp (which also came out of nowhere) was released since then. I, for one, had no idea what this new card was or when it was coming until AFTER it went on sale. That's completely unlike Threadripper, IMO, which was well known long before the official announcement.
     
    xpea and pharma like this.
  18. Bondrewd

    Bondrewd Veteran

    Everyone inferred a new Titan.
    It was the new Titan.
    The end result is basically the same.
    At least you can now buy somewhat reasonably priced V100 if you need one.
    TR leaks appeared in April, roughly a month before the FAD announcement.
    It didn't exist on the roadmaps, and what makes it even sillier is that AMD denied any future entry into the HEDT market back on Ryzen launch day.
     
    Last edited: Dec 8, 2017
  19. CSI PC

    CSI PC Veteran

    Ah well, not the Volta Titan I was looking to buy in December, lol.
    Kinda blows for general consumers, but it's not bad if you want to play around with the Tensor cores or DL, or want FP64: it's half the price of the previous-gen Quadro GP100, which has less functionality and performance, so I wonder how Nvidia will manage this to ensure Quadro sales are not cannibalised too heavily by this Titan.
    They improved the GeForce drivers to be better with professional visualization applications, although obviously this Titan would not be using the Tensor cores in that situation.
    That aside, it will be a strong card for universities and various other labs.
     
    Last edited: Dec 8, 2017
    HKS likes this.
  20. ImSpartacus

    ImSpartacus Regular

    So you're telling me Nvidia is charging $3000 for the new "rose gold" color?

    Ridiculous.


Seriously though, we all knew there would be a new Titan for Q1 2018, but a $3000 GV100 Titan is a surprise. I guess we know the cost of half-rate DP and full tensor now, eh?
     