Volta is obviously faster, but it's not available yet and is going to cost even more (rumors say around $10,000).
Anandtech said V100 will cost $18k per GPU: http://www.anandtech.com/show/11367...v100-gpu-and-tesla-v100-accelerator-announced
AMD hasn't talked about tensor tasks a lot, but they have said they support them. I can't find the quote now, but I think it was in the Financial Analyst Day broadcast.
Bear in mind that's just the cost of a DGX-1 divided by the number of GPUs, as you can only get V100 via that route for a quarter or so.
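Rough back-of-the-envelope on where that per-GPU number comes from; the ~$149k system price for the Volta DGX-1 is my assumption here, not something from the article:

```python
# Sketch of the "DGX-1 price divided by GPU count" arithmetic.
# The ~$149,000 Volta DGX-1 list price is an assumed figure.
dgx1_price_usd = 149_000  # assumed system price for the 8-GPU Volta DGX-1
gpus_per_system = 8

per_gpu_usd = dgx1_price_usd / gpus_per_system
print(f"Implied cost per V100: ${per_gpu_usd:,.0f}")  # ~$18,625
```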
Like the DeepBench results that were 50% faster than P100, despite the theoretical 21 vs 25 TFLOPS advantage only being 20%?
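Just to spell out the arithmetic behind that, assuming the 25 TFLOPS figure is for the card on the slide and 21 TFLOPS for P100:

```python
# Theoretical FP16 advantage implied by the quoted TFLOPS numbers,
# compared with the ~50% DeepBench advantage shown on the slide.
quoted_tflops = 25.0   # card on the slide (assumed FP16 peak)
p100_tflops = 21.0     # P100 FP16 peak as quoted above

theoretical_gain = quoted_tflops / p100_tflops - 1.0
measured_gain = 0.50   # the ~50% DeepBench result mentioned above

print(f"Theoretical advantage: {theoretical_gain:.0%}")  # ~19%
print(f"Measured advantage:    {measured_gain:.0%}")     # 50%
```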
I thought we (as in the people in this thread) had mostly agreed long before that this was the usual cherry-picking.

Especially as it involves that test, along with the use of a consumer Titan Xp for SolidWorks vs Frontier on their website instead of the much higher-performing Quadro designed for professional visualisation with its specific drivers/libraries. And to emphasise: I agree each company does its own bit of cherry-picking (Nvidia did it when comparing to their own previous products).
What I think is really funny is the ridiculous amount of nitpicking over one slide showing a single comparison. And when the card comes out, I bet no one here will bother to pick up this data and compare it to anecdotal results.

Because if Nvidia did the same against AMD or Intel, most of us would be critical about using it. Although Nvidia were carefully selective at times even with V100 in how they showed relative performance to P100, some of which comes down to framework changes (such as Caffe2) or Nvidia libraries (cuDNN and TensorRT) and how they can be used now.

Cheers
And if anyone thinks AMD will do anything meaningful with Vega in DL markets, yeah, that's a fool's hope. They don't have the market cap, the money, the software penetration, or the software development to do anything in that market in the short to mid term with Vega, and in the long term it's still up in the air.

So Google will be dominating that market? Because it seems more likely the specialized TPUs from Google and others will be used for the DL workloads, while all the GPUs continue to handle more generalized workloads with DL on the side.
Sounds far-fetched.

I quoted you stating that if AMD had an advantage they would be touting it. A picture of their CEO on a stage in front of a bunch of analysts touting DL performance would seem to be just that, regardless of the content of the benchmark. The rest is just moving goalposts.
They're coming with Skylake-X. Shame they didn't go with a full AMD setup with Threadripper, though I guess when asking $5,000 minimum for the setup, it would be bad if it didn't bundle the CPUs with the highest single-threaded performance.
The TPU is specific to only part of the DL market, and that's what Volta's tensor units are focused on too. But Volta has more than just tensor units; it can do the rest of the DL market as well.
22 TFLOPS of half precision at boost clocks; that is slower (lower clocked or cut down) than the Instinct version of their cards.
It's a giant all-in-one device that mounts to a wall. Is it really a surprise a mobile part doesn't run as fast as a server part?
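To put rough numbers on the clock difference, here's a minimal sketch assuming a full 4096-shader Vega part with double-rate packed FP16; both of those are assumptions on my part:

```python
# Peak FP16 TFLOPS = shaders * 2 (FMA) * 2 (packed FP16) * clock.
# The 4096-shader count and double-rate FP16 are assumed, not quoted above.
SHADERS = 4096
FP16_OPS_PER_CLOCK = 2 * 2  # FMA counts as 2 FLOPs, packed FP16 doubles it

def peak_fp16_tflops(clock_ghz: float) -> float:
    """Peak half-precision TFLOPS for the assumed configuration."""
    return SHADERS * FP16_OPS_PER_CLOCK * clock_ghz / 1000.0

print(round(peak_fp16_tflops(1.35), 1))  # ~22.1 -> roughly the 22 TFLOPS figure
print(round(peak_fp16_tflops(1.50), 1))  # ~24.6 -> roughly where MI25 is rated
```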
It's nice that you're trying to think outside the box and give reasons for Vega to be Volta's competitor in everything (because they both start with V), but I don't think it will hold a candle to Volta, and it's going to have problems against Pascal.

That's not outside the box; it's how past devices worked, with slight modifications. What's strange is you assuming AMD would release two different new architectures at the same time, then dismissing the marketing materials calling out Volta, similar release dates, display support, memory models, etc.