Nvidia Volta Speculation Thread

$NVDA is one hell of a bubble fueled by meme learning and they will do anything to not let it pop.
“meme learning”?

I don't think there's a case in the past decades where machine learning abilities have made such a jump in such a short time across such a wide variety of fields.

We're only a few years in, but I think the consequences will be similar in impact to the explosion of the internet and mobile computing: very disruptive for a lot of industries and professions.

(Anybody with teenagers getting ready for college should make sure they choose a profession that won't collide head-on with AI. Do Not Start a Radiology Career Today.)

Nobody knows which company will end up eating the other for lunch in terms of providing infrastructure, but dismiss the core technology as some fad at your own peril.
 
$NVDA is one hell of a bubble fueled by meme learning and they will do anything to not let it pop.
I think you will find, at least in a few years' time, that image recognition/AI is a bit more than just "meme learning"...

Charlie would've liked it. :^)
Around these parts we prefer our posts to be serious and factual instead of deliberately and deceptively hyperbolic. Save the Charlie stuff for another forum please. There's enough fake news on the internet as it is, thank you. :p

The V100 single-GPU score for Hotel is 12770, which is "only" about 20% faster than a GP100 after correcting for cores and bandwidth.
Why would you 'correct' for anything? The performance you get out of a Thing is the performance of the Thing, not the performance of what it would have been with the hardware resources of the previous generation...
 
I think you will find, at least in a few years' time, that image recognition/AI is a bit more than just "meme learning"...
Yes, it's another hype train ready to crash and kill several companies along with it.
I find it quite amusing.
Besides, calling it "AI" is silly.
 
Yes, it's another hype train ready to crash and kill several companies along with it.
I find it quite amusing.
Besides, calling it "AI" is silly.

I don't believe this cat will go back into the bag any more than steam engines or early cars did. We are on the verge of a revolution where medical imagery like X-rays, 3D scans, rashes, etc. can largely be analyzed by computers. In the consumer space, image recognition, voice recognition, robotics, etc. are about to take a huge leap. Self-driving cars and drones could be a really big deal.

Many companies for sure will fail. AI is not a solved problem but many interesting problems are now solvable and there are a lot of people trying to push the boundary further.

My belief is that for hardware to do well it must be easy enough to program, easy enough to obtain, and flexible and powerful enough. If hardware is too hardcoded, it might not be able to run whatever the next big improvement in algorithms turns out to be, and then it misses out on sales. That said, it makes sense to have dedicated accelerators for mostly solved problems like image detection or voice recognition.
 
I don't believe this cat will go back into the bag any more than steam engines or early cars did. We are on the verge of a revolution where medical imagery like X-rays, 3D scans, rashes, etc. can largely be analyzed by computers. In the consumer space, image recognition, voice recognition, robotics, etc. are about to take a huge leap. Self-driving cars and drones could be a really big deal.
It IS a valid technology, BUT
Many companies for sure will fail. AI is not a solved problem but many interesting problems are now solvable and there are a lot of people trying to push the boundary further.
It's overhyped, very, very much so.
And self-crashing cars are still mostly self-crashing for now.
And calling it "AI" is stretching it pretty far.
 
Yep, entirely overhyped ....

Typical A.I. specialists, including both Ph.D.s fresh out of school and people with less education and just a few years of experience, can be paid from $300,000 to $500,000 a year or more in salary and company stock, according to nine people who work for major tech companies or have entertained job offers from them. All of them requested anonymity because they did not want to damage their professional prospects.
...
Well-known names in the A.I. field have received compensation in salary and shares in a company’s stock that total single- or double-digit millions over a four- or five-year period. And at some point they renew or negotiate a new contract, much like a professional athlete.
https://www.nytimes.com/2017/10/22/technology/artificial-intelligence-experts-salaries.html
 
Is that really interesting in any meaningful sort of way though?

At the end of the day, the Thing has the performance it has.
I guess that's up to the individual. A lot of people found it interesting that Vega 64 had less than or about the same performance per clock as the Fury X. A lot of people have been expecting AMD to improve perf/GFLOP so they can catch up to Nvidia in performance and perf/watt. I find it interesting anyway, and currently it's the major difference between AMD and Nvidia.

I suppose I should also mention that this is a technical board, subject to technical discussion, and perf/GFLOP is indicative of architectural efficiency.
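
As a rough illustration of the metric, here is a minimal Python sketch, assuming 2 FLOPs per shader per cycle (FMA); the shader counts and clocks are the public specs, while the fps values in the example calls are placeholders rather than real benchmark results:

```python
# perf/GFLOP as a crude architecture-efficiency indicator.
# Shader counts and clocks are public specs; the fps arguments are
# placeholders for whatever benchmark numbers you actually measured.

def theoretical_gflops(shaders: int, clock_ghz: float) -> float:
    return 2 * shaders * clock_ghz  # 2 FLOPs per shader per cycle (FMA)

def perf_per_gflop(fps: float, shaders: int, clock_ghz: float) -> float:
    return fps / theoretical_gflops(shaders, clock_ghz)

# Fury X:  4096 shaders @ ~1.05 GHz -> ~8.6 TFLOPS
# Vega 64: 4096 shaders @ ~1.5 GHz  -> ~12.3 TFLOPS
# Same shader count, so "perf per clock" reduces to fps divided by clock.
print(perf_per_gflop(fps=60.0, shaders=4096, clock_ghz=1.05))  # placeholder fps
print(perf_per_gflop(fps=85.0, shaders=4096, clock_ghz=1.50))  # placeholder fps
```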
 
Yep, entirely overhyped ....
Higher salaries don't prove anything, and can also be seen as indicators of a bubble.
People in the right places also had nice salaries before the dotcom crash or the 2008 financial crisis, but after the crash a lot of them lost their jobs and saw the value of their shares going to zero as their companies disappeared or got acquired for almost nothing.
 
I guess that's up to the individual. A lot of people found it interesting that Vega 64 had less than or about the same performance per clock as the Fury X.
But you don't buy a Thing for its performance per clock compared to some other Thing or a previous-generation Thing. You buy it for its absolute performance (if there were competitors you'd also have to consider price and other factors, but Volta is kind of unique in its league here, so if you want or need it, you'll have to pay for it).
 
Why would you 'correct' for anything? The performance you get out of a Thing is the performance of the Thing, not the performance of what it would have been with the hardware resources of the previous generation...
Why would you not correct for a thing if you want to understand the impact of the number of cores vs the impact of improved architecture?
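
For what it's worth, the kind of correction being argued over looks roughly like the sketch below; the core counts, boost clocks and bandwidth figures are the approximate public SXM2 specs, and the GP100 score in the example call is a placeholder, since no GP100 Hotel result is quoted in this thread:

```python
# Back-of-the-envelope normalisation of a V100 score against GP100.
# Specs are approximate public SXM2 figures; the scores are inputs you supply.

V100  = {"cores": 5120, "boost_ghz": 1.53, "bw_gbs": 900}
GP100 = {"cores": 3584, "boost_ghz": 1.48, "bw_gbs": 732}

def normalised_speedup(v100_score: float, gp100_score: float) -> dict:
    raw = v100_score / gp100_score
    flops_ratio = (V100["cores"] * V100["boost_ghz"]) / (GP100["cores"] * GP100["boost_ghz"])
    bw_ratio = V100["bw_gbs"] / GP100["bw_gbs"]
    return {
        "raw": raw,                     # what you actually get out of the Thing
        "per_flop": raw / flops_ratio,  # residual gain if the test scales with compute
        "per_gbs": raw / bw_ratio,      # residual gain if the test scales with bandwidth
    }

# 12770 is the V100 Hotel score quoted above; the GP100 argument is a
# placeholder to be replaced with an actual measured score.
print(normalised_speedup(12770, gp100_score=10000))
```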
 
Is that really interesting in any meaningful sort of way though?

At the end of the day, the Thing has the performance it has.
At the end of the other day, this is also the architecture forum. :) I find ratios way more fun and interesting to talk about than absolute performance.
 
I guess that's up to the individual. A lot of people found it interesting that Vega 64 had less than or about the same performance per clock as the Fury X. A lot of people have been expecting AMD to improve perf/GFLOP so they can catch up to Nvidia in performance and perf/watt. I find it interesting anyway, and currently it's the major difference between AMD and Nvidia.

I suppose I should also mention that this is a technical board, subject to technical discussion, and perf/GFLOP is indicative of architectural efficiency.

Improving perf/GFLOP would help, but the competitor has a significant advantage in other parts of the GPU. I was expecting AMD to try to catch up with Nvidia in terms of clock speed, because I think that for a given die size AMD's and Nvidia's chips perform pretty similarly at the same clocks despite the dissimilarity in architecture. This would also improve perf/W, since they wouldn't have to overvolt their cards to the limit to hit competitive clock speeds.

They've improved, but not as much as the 1600 MHz clock from last year's rumors about the professional cards made me assume.

For instance, check this latest video comparison using Wolfenstein II from duderandom on YouTube: Vega 64 runs at <1.5 GHz while the 1080 Ti is pegged at 1860 MHz; that's around a 25% clock-speed deficit right there.


The difference isn't as great in AC Origins, but AMD still has to run with a 300 MHz deficit on the core.

 
Higher salaries don't prove anything, and can also be seen as indicators of a bubble.
People in the right places also had nice salaries before the dotcom crash or the 2008 financial crisis, but after the crash a lot of them lost their jobs and saw the value of their shares going to zero as their companies disappeared or got acquired for almost nothing.
Indicators of a bubble for what? There is no bubble for companies like Facebook, Amazon, Google, Baidu, etc. paying higher salaries in a research/development segment for talent in short supply. It's basically an executive talent search, and we don't associate bubbles or crises with paying executives high salaries. Paying high salaries to protect your future ROI in a low-supply market (the article cites fewer than 10,000 people with the correct skill set) is a good investment. These aren't fly-by-night companies risking their entire portfolios on a buy/sell decision, and they are much better positioned to absorb the costs. Companies that have already reaped some rewards from deep learning know the value of investing in talent and aren't stopping anytime soon.
 
Tesla V100 vs. P100 (images trained per second)
October 23, 2017

Including links only as I can't seem to post the graphs.

System: Supermicro SYS-4028GR-TXRT
Motherboard: X10DG0-T
CPU: E5 2699V4 x 2
MEM: 32GB Micron x 12
BIOS: 5/25/17
GPU: Nvidia Tesla V100 SXM2 x 8 | P100 SXM2 x 8
OS: Ubuntu 16.04 x64
Driver: 384.81
Deep learning framework: NV-Caffe
CUDA: version 9
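
For anyone who wants to sanity-check numbers like these themselves, a minimal pycaffe timing loop looks roughly like the sketch below. It only times forward passes, whereas the figures above are training throughput (forward + backward + update), and the prototxt path and the "data" blob name are assumptions about your own network definition:

```python
# Rough images/second measurement with pycaffe (forward passes only).
# "deploy.prototxt" is a placeholder path; "data" is the assumed input blob name.
import time
import caffe

caffe.set_device(0)   # first GPU
caffe.set_mode_gpu()

net = caffe.Net("deploy.prototxt", caffe.TEST)
batch_size = net.blobs["data"].data.shape[0]

iters = 100
net.forward()         # warm-up pass
start = time.time()
for _ in range(iters):
    net.forward()
elapsed = time.time() - start

print(f"{batch_size * iters / elapsed:.1f} images/second (inference)")
```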
 
Tesla V100 vs. P100 (images trained per second)
October 23, 2017

Including links only as I can't seem to post the graphs.

Here are the images

[Image: 20171023054629_hd.png]
[Image: 20171020022349_hd.png]

Edit: the problem seems to be with Beyond3D, as the images show up when creating or editing this message but not in the actual post; just a big red X.
 
Improving perf/GFLOP would help, but the competitor has a significant advantage in other parts of the GPU. I was expecting AMD to try to catch up with Nvidia in terms of clock speed, because I think that for a given die size AMD's and Nvidia's chips perform pretty similarly at the same clocks despite the dissimilarity in architecture. This would also improve perf/W, since they wouldn't have to overvolt their cards to the limit to hit competitive clock speeds.
For instance, check this latest video comparison using Wolfenstein II from duderandom on YouTube: Vega 64 runs at <1.5 GHz while the 1080 Ti is pegged at 1860 MHz; that's around a 25% clock-speed deficit right there.
I just find it interesting and encouraging that Nvidia can get more perf/GFLOP with newer architectures. I am also impressed that they have upped their clocks with both Maxwell and Pascal. With the quoted example, what's the percentage difference in stream processors? Fewer stream processors means less die size; like you said, if only AMD could catch up on clocks... But what I'm really getting at is that GFLOPS is directly related to clock speed. So are the Vega 64 and the 1080 Ti comparable in GFLOPS? If so, are they comparable in performance?
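
To put rough numbers on that last question, here's a quick sketch using the public shader counts and the clocks quoted in the Wolfenstein comparison above (observed clocks from that video, not official boost figures):

```python
# Theoretical FP32 throughput at the clocks observed in the video above,
# assuming 2 FLOPs per shader per cycle (FMA). Shader counts are public specs.
cards = {
    "Vega 64":     {"shaders": 4096, "clock_ghz": 1.50},  # "<1.5 GHz" in the video
    "GTX 1080 Ti": {"shaders": 3584, "clock_ghz": 1.86},
}
for name, c in cards.items():
    tflops = 2 * c["shaders"] * c["clock_ghz"] / 1000
    print(f"{name}: ~{tflops:.1f} TFLOPS")
# -> Vega 64: ~12.3 TFLOPS, GTX 1080 Ti: ~13.3 TFLOPS
```

So at those clocks the two are roughly comparable in theoretical GFLOPS, which is why any remaining frame-rate gap comes down to how much performance each architecture extracts per FLOP.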
 