AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

I think changing the underlying hardware won't be a big challenge for them, compared to, say, finding actual users for this kind of service.
 
Amazon already had Tesla GPUs in some of the configurations, just not a single instance with up to 16 as I recall. It may be as simple as a customer asking about the possibility of a certain GPU configuration. This kind of server setup would be perfect for smaller colleges and research institutions.
 
What I mean is, I have not seen a live one anywhere installed and reported upon.
Does this count?

I don't think there's any difference between P100 and, say, K80, in terms of the time of announcement and seeing them deployed in the field. In both cases, it's stuff that mostly happens behind closed doors.
 
Does this count?

In the context of quite a long product lifetime in the HPC market, I guess yeah, I would count it. Apparently there's no 6-month lifecycle, and with Hawaii having half-rate DP, thus north of 2 TFLOPS per GPU, I do not see it as a product in extra-dire need of replacement right now. Of course, when P100 starts shipping - and depending on its price, of course - that might change.

My point simply was that Hawaii was/is battling GK110/GK210 in that market, which is every bit as old a GPU and is only just being replaced, with no assured availability as of now (that I'm aware of).
 
True for Hawaii, but the Amazon paradigm doesn't prove much, if anything, IMHO. Especially not if something like the NVLink P100 could end up costing almost $9.5k per unit.
 
True for Hawaii, but the Amazon paradigm doesn't prove much, if anything, IMHO. Especially not if something like the NVLink P100 could end up costing almost $9.5k per unit.
Quote from Anandtech Power8 review:

The S822LC will cost less than $50000, and it offers a lot of FLOPS per dollar if you ask us. First consider that a single Tesla P100 SXM2 costs around $9500. The S822LC integrates four of them, two 10-core POWER8s and 256 GB of RAM.

Pascal P100 seems quite expensive indeed :)
 
In the context of quite a long product lifetime in the HPC market, I guess yeah, I would count it. Apparently there's no 6-month lifecycle, and with Hawaii having half-rate DP, thus north of 2 TFLOPS per GPU, I do not see it as a product in extra-dire need of replacement right now. Of course, when P100 starts shipping - and depending on its price, of course - that might change.

My point simply was that Hawaii was/is battling GK110/GK210 in that market, which is every bit as old a GPU and is only just being replaced, with no assured availability as of now (that I'm aware of).
I don't know how popular things like K80 are in general, but when you follow the deep learning forums, it's surprising to me how many suggest using Kepler-based AWS instances at spot pricing as a good alternative to owning one yourself.

I don't think Nvidia is positioning P100 as a K80 alternative: in the GTC keynote in April, Jen-Hsun Huang placed P100 simply a level above. As long as AMD isn't a viable competitor in the compute space (which requires more than just a fast chip), that's not going to change soon. Intel is probably going to be a bigger factor.
 
Quote from Anandtech Power8 review:

The S822LC will cost less than $50000, and it offers a lot of FLOPS per dollar if you ask us. First consider that a single Tesla P100 SXM2 costs around $9500. The S822LC integrates four of them, two 10-core POWER8s and 256 GB of RAM.

Pascal P100 seems quite expensive indeed :)


Surprised Elon Musk went with Nvidia, as he is known to reduce prices and Nvidia is known to just raise them.
They don't seem compatible to me.
 
P100 pricing:
The table below gives a quick breakdown of the Tesla P100 GPU price, performance and cost-effectiveness:

Tesla GPU model       | Price   | Double-Precision Performance (FP64) | Dollars per TFLOPS
Tesla P100 PCI-E 12GB | $5,899* | 4.7 TFLOPS                          | $1,255
Tesla P100 PCI-E 16GB | $7,374* | 4.7 TFLOPS                          | $1,569
Tesla P100 SXM2 16GB  | $9,428* | 5.3 TFLOPS                          | $1,779
* single-unit price before any applicable discounts

As one would expect, the price does increase for the higher-end models with more memory and NVLink connectivity. However, the cost-effectiveness of these new P100 GPUs is quite clear: the dollars per TFLOPS of the previous-generation Tesla K40 and K80 GPUs are $2,342 and $1,807 (respectively). That makes any of the Tesla P100 GPUs an excellent choice. Depending upon the comparison, HPC centers should expect the new “Pascal” Tesla GPUs to be as much as twice as cost-effective as the previous generation. Additionally, the Tesla P100 GPUs provide much faster memory and include a number of powerful new features.

https://www.microway.com/hpc-tech-tips/nvidia-tesla-p100-price-analysis/
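The dollars-per-TFLOPS column in the Microway table can be sanity-checked with a few lines of arithmetic. A quick sketch, using only the single-unit prices and FP64 figures quoted above:

```python
# Recompute the dollars-per-TFLOPS column of the Microway table from
# the quoted single-unit prices and FP64 throughput figures.
p100_models = {
    "Tesla P100 PCI-E 12GB": (5899, 4.7),  # (price in USD, FP64 TFLOPS)
    "Tesla P100 PCI-E 16GB": (7374, 4.7),
    "Tesla P100 SXM2 16GB":  (9428, 5.3),
}
for name, (price, tflops) in p100_models.items():
    print(f"{name}: ${price / tflops:,.0f} per TFLOPS")
```

This reproduces the $1,255 / $1,569 / $1,779 figures in the table above.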
 
I don't know how popular things like K80 are in general, but when you follow the deep learning forums, it's surprising to me how many suggest using Kepler-based AWS instances at spot pricing as a good alternative to owning one yourself.

I don't think Nvidia is positioning P100 as a K80 alternative: in the GTC keynote in April, Jen-Hsun Huang placed P100 simply a level above. As long as AMD isn't a viable competitor in the compute space (which requires more than just a fast chip), that's not going to change soon. Intel is probably going to be a bigger factor.


A lot of companies are doing their custom stuff as well, like google and their TPU custom chip.
 
A lot of companies are doing their custom stuff as well, like google and their TPU custom chip.


It's not just the hardware, which is what silent_guy is getting at. To break into the HPC market, companies need the complete package. Google is still using nV chips for specific portions of deep learning, like image and speech recognition, so the TPU is more of a complementary piece of hardware for the time being. Right now the TPU has its own API too, and I'm not sure how well it's suited to existing software.
 
Surprised Elon Musk went with Nvidia, as he is known to reduce prices and Nvidia is known to just raise them.
They don't seem compatible to me.

If you're referring to the Tegra used in their automobiles, it's because Nvidia are discounting them heavily in order to attempt to gain some adoption for the chips. Similar to when Microsoft chose them for the Surface RT.

It may be that they can soon stop discounting them in the automotive sector if they can get entrenched there.

BTW - when I say discount, that's relative to their high company wide margins.

Regards,
SB
 
Quote from Anandtech Power8 review:

The S822LC will cost less than $50000, and it offers a lot of FLOPS per dollar if you ask us. First consider that a single Tesla P100 SXM2 costs around $9500. The S822LC integrates four of them, two 10-core POWER8s and 256 GB of RAM.

Pascal P100 seems quite expensive indeed :)

In any case, I don't know what a K80 would cost right now, but I've seen recent rates between $3k and $4k depending on the case, which, if it's slightly below $3k, would be almost half what the cheapest 12GB PCI-E P100 costs. Albeit a layman, I'm not surprised one bit about those prices, considering how idiotically high FinFET process manufacturing costs (all tools included) are for very complex chips.

NV is waiting for early 2017 to start shipping GP102-based SKUs cheaper than the current Titan X, and I'm not in the least surprised AMD doesn't ship its first Vega-based SKUs earlier. OT, but like someone else said in another forum, we'll have another ball with 7nm manufacturing costs in the foreseeable future.

The former doesn't imply that AMD is sitting on a ready Vega and not shipping products based on it. It should read that AMD most likely projected the Vega release for a timeframe where manufacturing costs are more favorable for very complex chips on a FinFET process.
 
I don't know how popular things like K80 are in general, but when you follow the deep learning forums, it's surprising to me how many suggest using Kepler-based AWS instances at spot pricing as a good alternative to owning one yourself.

I don't think Nvidia is positioning P100 as a K80 alternative: in the GTC keynote in April, Jen-Hsun Huang placed P100 simply a level above. As long as AMD isn't a viable competitor in the compute space (which requires more than just a fast chip), that's not going to change soon. Intel is probably going to be a bigger factor.
Hm, I don't know how to put it any other way than I already have.
Of course P100 is not an alternative to K80, and of course it is massively faster on paper. My point was that GK110/GK210 have enjoyed a very long lifetime because Maxwell is simply not usable in HPC installations that require FP64. So it is not a catastrophe for AMD to have gone to similar lengths with Hawaii, which in terms of pure FP64 throughput is faster than GK210.
 
For HPC it makes more sense that AMD wants Zen to provide an alternative to the PCIe bus for some large datasets, possibly even graphics. An SSG could be a viable alternative, although I'm not sure they have any high-FP64 GPUs compatible with that.
 
From WCCFTech: "AMD’s Vega 10 Flagship GPU Coming End of 2016, Vega 11 Due Early 2017 – “Magnum” Board Debuting In November."

AMD’s most powerful GPU yet code-named Vega 10 is set to debut at the end of the year with Vega 11 following early next year. The company also has a new board code-named “Magnum” that will be showcased at SC 2016 this upcoming November.
[…]
Magnum is a unique chip, it features a matrix of logic blocks that can be configured and programmed individually for any desired application or program. In other words, it’s the company’s first ever FPGA and its greatest attempt yet to expand its penetration into the high performance embedded market.
[…]
Vega 11 (RX 580):
Process: 14nm
Peak compute: 7 TFLOPS
Memory: 8GB HBM2
Memory bus: 1024-bit
Bandwidth: 256 GB/s
TDP: 130W
7 TFLOPS at 130 W would imply about 5.4 TFLOPS at 100 W with linear scaling. So I speculate that a hypothetical laptop Vega 11, with slightly more perf/W than the desktop version, could be VR-capable within 100 W.
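The linear-scaling extrapolation above can be spelled out explicitly. A back-of-envelope sketch using the rumored desktop figures; real perf/W varies along the voltage/frequency curve, so this is a simplification:

```python
# Linear power scaling of the rumored desktop Vega 11 figures
# (7 TFLOPS at 130 W) down to a 100 W laptop power budget.
desktop_tflops = 7.0
desktop_watts = 130.0
laptop_watts = 100.0

laptop_tflops = desktop_tflops * (laptop_watts / desktop_watts)
print(f"~{laptop_tflops:.1f} TFLOPS at {laptop_watts:.0f} W")  # ~5.4 TFLOPS
```

In practice, perf/W tends to improve as clocks and voltage drop, so a binned mobile part could land somewhat above this straight-line estimate.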
 