That won't be a static power consumption though. It's just a matter of what power profile Nvidia configures. What I'm suggesting is that the IO power consumption is pushed onto a separate chip in some configurations. Increasing the distance between chips, for example, will increase the energy expended, leaving the switch chip to absorb most of the work of driving IO over longer distances and higher capacitance, effectively acting as an inline buffer/repeater. In the presence of a switch, the power usage of the GPU may drop, making the 300W figure somewhat arbitrary.

Just for clarity: the SXM2 32GB V100 is still 300W, shown as such in parts lists and on sites such as AnandTech.
Not sure I follow, or how you feel this affects SXM2 (the non-NVSwitch model), although I agree they subtly change the envelope, which is why TDP does not usually increase with memory capacity; I mentioned this in response to another poster earlier.
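The IO-power argument above can be made concrete with a back-of-the-envelope sketch. The energy-per-bit and bandwidth numbers below are illustrative assumptions, not NVIDIA specifications; the point is only that link power scales with energy-per-bit, which grows with trace length and capacitance.

```python
# Back-of-the-envelope link power: P = (energy per bit) x (bit rate).
# All energy-per-bit and bandwidth figures here are illustrative assumptions.

GBPS = 1e9  # bits per second in one Gbps

def link_power_watts(energy_pj_per_bit, bandwidth_gbps):
    """Power burned driving a link at the given bit rate."""
    return energy_pj_per_bit * 1e-12 * bandwidth_gbps * GBPS

# Hypothetical: a short on-package/on-board hop vs. a long inter-board route.
short_reach = link_power_watts(5.0, 300)   # assumed 5 pJ/bit over a short hop
long_reach = link_power_watts(15.0, 300)   # assumed 15 pJ/bit for the long route

# With an inline switch acting as buffer/repeater, the GPU drives only the
# short hop; the switch absorbs the cost of the long one.
moved_to_switch = long_reach - short_reach
print(f"short reach: {short_reach:.1f} W, long reach: {long_reach:.1f} W, "
      f"moved onto switch: {moved_to_switch:.1f} W")
```

Under these assumed numbers a few watts of IO drive moves off the GPU's budget, which is why a fixed 300W TDP can hide different internal power splits.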
Interesting paper (more so than the memory), and thanks for the link: "Dissecting the NVIDIA Volta GPU Architecture via Microbenchmarking" (I originally saw it posted on Reddit).
.......
Ping Arun, as he has a Titan V and is doing various tests when he has the time; he might be interested, though I'm not sure what his CPU is.

Did anyone test Volta on this interesting double-precision fractal zoomer?
How many images per second on a 4K initial screen?
How does it compare to an 18-core AVX-512 CPU?
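The zoomer itself isn't linked here, but the kind of measurement being asked about (double-precision frames per second) can be sketched with a minimal FP64 Mandelbrot escape-time kernel. The resolution and iteration count below are arbitrary choices; a real GPU-vs-AVX-512 comparison would run a tuned CUDA build and a vectorized CPU build at 3840x2160.

```python
import time
import numpy as np

def mandelbrot_frame(width, height, max_iter=100):
    """One double-precision Mandelbrot escape-time frame (pure NumPy, CPU)."""
    x = np.linspace(-2.0, 1.0, width, dtype=np.float64)
    y = np.linspace(-1.2, 1.2, height, dtype=np.float64)
    c = x[np.newaxis, :] + 1j * y[:, np.newaxis]   # complex128 grid
    z = np.zeros_like(c)
    counts = np.zeros(c.shape, dtype=np.int32)
    for _ in range(max_iter):
        mask = np.abs(z) <= 2.0          # points that have not escaped yet
        z[mask] = z[mask] * z[mask] + c[mask]
        counts[mask] += 1
    return counts

# Time a few small frames to get a frames/s baseline.
start = time.perf_counter()
frames = 3
for _ in range(frames):
    mandelbrot_frame(640, 360)
elapsed = time.perf_counter() - start
print(f"{frames / elapsed:.2f} frames/s at 640x360 (unoptimized NumPy baseline)")
```

This is only a baseline harness; the interesting comparison is the same kernel in CUDA FP64 on Volta versus an AVX-512 build on an 18-core CPU.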
https://www.extremetech.com/extreme...ance-in-machine-learning-at-much-lower-prices

New benchmarks from RiseML put both Nvidia and Google’s TPU head-to-head — and the cost curve strongly favors Google.
...
The comparison is between four Google TPUv2 chips (which form one Cloud TPU) and 4x Nvidia Volta GPUs. Both have 64GB of total RAM, and the data sets were trained in the same fashion. RiseML tested the ResNet-50 model (exact configuration details are available in the blog post), and the team investigated raw performance (throughput), accuracy, and convergence (an algorithm converges when its output comes closer and closer to a specific value).
The suggested batch size for TPUs is 1024, but other batch sizes were tested at reader request, and Nvidia does perform better at those lower batch sizes. In accuracy and convergence, the TPU solution is somewhat better (76.4 percent top-1 accuracy for the Cloud TPU, compared with 75.7 percent for Volta). Improvements to top-end accuracy are difficult to come by, and the RiseML team argues that the small difference between the two solutions is more important than you might think. But where Google’s Cloud TPU really wins, at least right now, is on pricing.
...
The current pricing of the Cloud TPU allows one to train a model to 75.7 percent on ImageNet from scratch for $55 in less than 9 hours! Training to convergence at 76.4 percent costs $73. While the V100s perform similarly fast, the higher price and slower convergence of the implementation result in a considerably higher cost-to-solution.
Google may be subsidizing its cloud processor pricing, and the exact performance characteristics of ML chips will vary depending on implementation and programmer skill. This is far from the final word on Volta’s performance, or even Volta as compared with Google’s Cloud TPU. But at least for now, in ResNet-50, Google’s cloud TPU appears to offer nearly identical performance at substantially lower prices.
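The cost-to-solution framing above is simple enough to sketch. The only figures taken from the quoted article are the $55 total and the sub-9-hour runtime; the derived TPU hourly rate and the V100 hourly rate below are placeholders, not published cloud prices.

```python
def cost_to_solution(price_per_hour, hours):
    """Dollar cost of running one training job to a target accuracy."""
    return price_per_hour * hours

# From the quoted figures: ~$55 to reach 75.7% top-1 in under 9 hours
# implies an effective Cloud TPU rate of roughly:
tpu_rate = 55 / 9   # ~$6.1/hour, derived from the article, not an official price

# Hypothetical 4x V100 cloud rate for comparison (placeholder value):
v100_rate = 12.0

# If the V100 setup needed the same 9 hours, cost-to-solution would compare as:
print(f"TPU:  ${cost_to_solution(tpu_rate, 9):.0f}")
print(f"V100: ${cost_to_solution(v100_rate, 9):.0f}")
```

The sketch makes the article's point explicit: even at similar throughput, a higher hourly rate or slower convergence multiplies directly into a higher total cost.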
AWS's price has also been at that level for quite a while now without dropping, which is why it would be interesting to try to get a comparable price comparison with GCP now that it is available for the V100 (or very soon will be more generally).

That's interesting, but the conclusion hardly seems final, since the V100 costs over $8,000,* and could conceivably be priced much lower than that while remaining comfortably profitable. Of course, NVIDIA probably wouldn't want to take it anywhere near Titan-level prices, but $3000 doesn't sound impossible.
*No way!
The TITAN V is $3000 and has less memory and memory bandwidth than the Tesla V100.
I think Alexko's point was that, with competition, BOTH Tesla and Titan prices have room to move. A lot!