Speaking of benchmarking the V100 - someone has done it on LuxMark.
What do you mean? The top 2 results look fishy - "gfx900"??? - and the array of 4x V100 was only beaten by a setup with 7 Pascal + 2 Maxwell cards.
> "gfx900"

That's Vega.
> At launch, it was powered by four NVIDIA Tesla P100 GPUs but it is now powered by four NVIDIA Tesla V100 GPUs. Here are the key specs of the updated machine.

https://www.servethehome.com/nvidia-dgx-station-upgraded-tesla-v100/

Sporting the new NVIDIA Tesla V100 with Tensor Core technology, for $69,000 you can own a dedicated system for about the same price as one year of cloud instance pricing. For example, an AWS p3.8xlarge instance with all-upfront pricing is $68,301 for one year.
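A rough break-even sketch of that comparison, using only the prices quoted above (the 3-year useful life is my assumption, and power, hosting and support costs are ignored):

```python
# Rough break-even sketch: buying a DGX Station vs. renting an AWS p3.8xlarge.
# Prices are the ones quoted above; power, hosting and support costs are ignored.

dgx_station_price = 69_000          # USD, one-time purchase
aws_p3_8xlarge_upfront = 68_301     # USD, 1-year all-upfront reservation

years = 3                           # assumed useful life of the hardware
cloud_total = aws_p3_8xlarge_upfront * years

print(f"DGX Station (one-time): ${dgx_station_price:,}")
print(f"AWS p3.8xlarge for {years} years: ${cloud_total:,}")
print(f"Break-even after roughly {dgx_station_price / aws_p3_8xlarge_upfront:.2f} years")
```

On those quoted numbers the purchase pays for itself in about a year of equivalent cloud usage, assuming you actually keep the box busy.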
> V100 being ready this early is such a great achievement.

Why?
> Why?

It's an 815mm² GPU, 33% larger than anything I've heard of being mass produced.
> It's an 815mm² GPU, 33% larger than anything I've heard of being mass produced.

They kindly asked TSMC to push the reticle limit forward.
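A quick sanity check on the "33% larger" claim; the ~610 mm² figure for GP100 (the previous largest NVIDIA die) is my reference point, not something stated in the thread:

```python
# Quick check of the "33% larger" claim.
# GV100 die area is the ~815 mm^2 mentioned above; ~610 mm^2 for GP100
# (the previous largest NVIDIA die) is my assumed reference point.

gv100_area = 815.0  # mm^2
gp100_area = 610.0  # mm^2

print(f"GV100 is {(gv100_area / gp100_area - 1) * 100:.0f}% larger than GP100")  # ~34%
```

That comes out to roughly a third larger, consistent with the claim above.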
> Is ahead of the two year cadence that Nvidia updates the architecture.

They need to do something before various ML/DL ASICs flood the market. V100 kinda works. Not the most cost effective solution, but it'll do.
That's nothing groundbreaking or anything. The node is very mature, so the yields must be in the >1.7% range.
> They need to do something before various ML/DL ASICs flood the market. V100 kinda works. Not the most cost effective solution, but it'll do.

If you include other critical factors, such as power requirements and power budget for large-scale HPC/AI, then it is a pretty important product for many out there. Or look at those who do DP scientific modelling, where no other product touches this.
$NVDA is one hell of a bubble fueled by meme learning and they will do anything to not let it pop.
> Any data to back up this yield number?

He's being cheeky. 1.7% would be 1 GV100 per wafer.
> 1.7% would be 1 GV100 per wafer.

Charlie would've liked it. :^)
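For anyone wanting to check the joke, a back-of-the-envelope dies-per-wafer estimate (a sketch; it assumes a 300 mm wafer, the ~815 mm² die size mentioned upthread, and the usual gross-die approximation, ignoring scribe lines and edge exclusion):

```python
import math

# Back-of-the-envelope gross dies per wafer for a GV100-sized die.
# Assumptions: 300 mm wafer, ~815 mm^2 die; standard approximation
# DPW ~= pi*(d/2)^2 / S - pi*d / sqrt(2*S).

wafer_diameter = 300.0   # mm
die_area = 815.0         # mm^2

gross_dies = (
    (math.pi * (wafer_diameter / 2) ** 2) / die_area
    - (math.pi * wafer_diameter) / math.sqrt(2 * die_area)
)

print(f"Gross dies per wafer: ~{gross_dies:.0f}")
print(f"One good die per wafer would be a yield of ~{100 / gross_dies:.1f}%")
```

That gives roughly 60 die candidates per wafer, so one good die per wafer works out to a couple of percent - the same ballpark as the 1.7% figure being joked about.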
> That's the question. The same user has LuxBall HDR (simple scene) results at place 9 and 10 with 4x Vega, less than half of Anaconda's 4x V100. On Hotel (complex scene) it's 1 and 2 for 4x Vega, and that's 1.5x more than Anaconda's results. So I guess something is wrong with the V100 results, the question is: what?

The V100 results show the expected perfect scaling, with the 4x number being exactly 4x the performance of the 1x.
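One way to express the check being described here (a sketch; the scores below are made-up placeholders, not actual LuxMark submissions):

```python
# Scaling-efficiency check for multi-GPU LuxMark submissions.
# score_1x and score_4x are placeholders for a single-GPU and a 4-GPU result
# from the same submitter; an efficiency of exactly 1.0 is what stands out.

def scaling_efficiency(score_1x: float, score_4x: float, n_gpus: int = 4) -> float:
    """Return the multi-GPU score divided by the ideal linear-scaling score."""
    return score_4x / (n_gpus * score_1x)

# Example with made-up numbers: a 4x score that is exactly 4x the 1x score.
print(scaling_efficiency(score_1x=10_000, score_4x=40_000))  # -> 1.0
```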
Ahh, the two very good results seem to be using ROCm 1.6.4; the others are under Windows.