AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

What the hell went wrong with Vega? It's good at mining, good in the professional market, and also good in CAD programs, which have similar demands to games (high polygon output, lots of shader work, lots of pixel output). Vega is on the level of the Nvidia P6000 (sometimes better in high-polygon-output tests like energy-01, even though Vega has only 4 geometry engines while Nvidia has 6), but in games it is really bad. I don't understand this.
 
Miners will wait for actual proof before gobbling up all the cards. They're not going to invest thousands on cards that may not be that good at mining.

This means that if the mining performance is real, gamers who want the cards should get a chance if they pre-order.
Of course, no one really wants to pre-order before actual proof of performance is out there, so...
 
"fetch/bandwidth-bound"?

Yeah I guess it's better.

And from what I understand, HBM2 is not much different from HBM1... so I don't see Vega being more than twice as powerful as Fiji at mining ETH with basically the same memory...
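
For reference, a rough back-of-envelope sketch of why Ethash tracks memory bandwidth rather than compute: each hash does 64 random reads of a 128-byte DAG page, so bandwidth alone puts a hard ceiling on MH/s. The bandwidth figures below are the public spec numbers for Fiji (512 GB/s HBM1) and Vega 10 (~484 GB/s HBM2); the efficiency factor is just an assumed illustration, not a measurement.

```python
# Rough ceiling on Ethash hashrate from memory bandwidth alone.
# Ethash does 64 mix accesses per hash, each fetching a 128-byte DAG page.
BYTES_PER_HASH = 64 * 128  # = 8192 bytes of random DAG traffic per hash

def ethash_ceiling_mhs(bandwidth_gbs, efficiency=0.85):
    """Theoretical MH/s from memory bandwidth in GB/s.

    `efficiency` is an assumed fraction of peak bandwidth usable for
    random reads -- purely illustrative, not a measured figure.
    """
    return bandwidth_gbs * 1e9 * efficiency / BYTES_PER_HASH / 1e6

for name, bw in [("Fiji (HBM1, 512 GB/s)", 512),
                 ("Vega 10 (HBM2, ~484 GB/s)", 484)]:
    print(f"{name}: ~{ethash_ceiling_mhs(bw):.0f} MH/s ceiling")
```

With essentially the same bandwidth as Fiji, the ceiling barely moves, so a doubling over Fiji at ETH would indeed be surprising.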
 
Vega is on the level of the Nvidia P6000 (sometimes better in high-polygon-output tests like energy-01, even though Vega has only 4 geometry engines while Nvidia has 6), but in games it is really bad. I don't understand this.
energy-01 is by no means a high-poly test; not all ViewPerf subtests are mindless polygon grinders like some CAD programs.
http://spec.org/gwpg/gpc.static/energy-01.html
"The energy-01 viewset is representative of a typical volume rendering application in the seismic and oil and gas fields. Similar to medical imaging such as MRI or CT, geophysical surveys generate image slices through the subsurface that are built into a 3D grid. Volume rendering provides a 2D projection of this 3D volumetric grid for further analysis and interpretation."
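
To get a feel for what that kind of workload actually does (as opposed to pushing polygons), here is a minimal, hypothetical sketch of projecting a 3D volumetric grid to a 2D image with a maximum intensity projection in NumPy. Real seismic volume renderers are far more elaborate (transfer functions, GPU ray casting), but the data flow is the same: 3D grid in, 2D projection out.

```python
import numpy as np

# Hypothetical 3D volume, e.g. stacked survey slices: shape (depth, height, width).
rng = np.random.default_rng(0)
volume = rng.random((128, 256, 256)).astype(np.float32)

# Maximum intensity projection along the depth axis:
# collapses the 3D grid into a 2D image.
mip = volume.max(axis=0)        # shape (256, 256)

# A softer alternative: average projection along the same axis.
avg = volume.mean(axis=0)

print(mip.shape, avg.shape)
```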
 
Even with mem-OC, I only got 37 MH/s out of Vega FE. But none of the mining programs were optimized for Vega at the time, only for GCN in general. Will be interesting to see.

BTW:
Claymore's Dual Ethereum + Decred/Siacoin/Lbry/Pascal AMD+NVIDIA GPU Miner.
=========================

Latest version is v9.8:

- added Vega cards support (ASM mode).

edit:
Looks like this guy still gets around 36-37 MH/s with the new Claymore 9.8:
https://bitcointalk.org/index.php?topic=1433925.msg20549957#msg20549957
BTW - that display of Vega GPUs running at 852 MHz when changing Wattman clocks is one of the odder things I also saw when trying to optimize Vega FE for mining. It's probably a display bug and the card actually runs at a higher frequency - as he explains a couple of posts later.
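
One way to cross-check what the card is really running at (rather than what the Wattman display claims) is to read the amdgpu DPM state straight from sysfs; `pp_dpm_sclk` lists the available core clock states and marks the active one with an asterisk. A rough sketch, assuming a single card exposed as `card0`:

```python
from pathlib import Path

# amdgpu exposes DPM clock states under /sys/class/drm/cardN/device/;
# the currently active state is marked with a trailing '*'.
SCLK_PATH = Path("/sys/class/drm/card0/device/pp_dpm_sclk")  # adjust cardN for your setup

def current_sclk():
    """Return the active core-clock line, e.g. '7: 1630Mhz *', or None."""
    for line in SCLK_PATH.read_text().splitlines():
        if line.strip().endswith("*"):
            return line.strip()
    return None

print(current_sclk())
```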
 
It's worth remembering that optimising mining code is a black art.

For a long time back in the original GPU mining era of bitcoin (2010/11, not the shitcoin era that came after) there was private code that was substantially more efficient than code that was publicly available. In other words, there has long been a market for better mining code that isn't generally available.

https://en.bitcoin.it/wiki/ArtForz
 
It's worth remembering that optimising mining code is a black art.

For a long time back in the original GPU mining era of bitcoin (2010/11, not the shitcoin era that came after) there was private code that was substantially more efficient than code that was publicly available. In other words, there has long been a market for better mining code that isn't generally available.

https://en.bitcoin.it/wiki/ArtForz

Yep, though nowadays a lot of the devs get a 1% fee from their software, which is very lucrative as well.

I'm sure all the major "farms" have custom software that's tweaked for their cards and gets a lot better hash/watt.
 
No. The max boost clock is not a disclosed number and varies from chip to chip. Also, it's nearly impossible to sustain outside of certain low-impact compute workloads.
Adding to that: the highest possible boost can be read out with NVSMI - for our 1070 FE sample it is 1911 MHz. It is usually tied to ridiculously low temperatures, like 45 or 48 °C IIRC, so basically any workload worth the name pushes the GPU beyond that threshold almost immediately.
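
For anyone who wants to reproduce that readout: the per-chip ceiling is exposed by nvidia-smi as the clocks.max.graphics query field, and comparing it against the current graphics clock and temperature shows how quickly load pushes the card off that peak. A small polling sketch (the query fields are standard nvidia-smi ones; the loop itself is just an illustration):

```python
import subprocess
import time

QUERY = "clocks.max.graphics,clocks.current.graphics,temperature.gpu"

def read_gpu(index=0):
    """Return (max_clock_mhz, current_clock_mhz, temp_c) for one GPU via nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi", "-i", str(index),
         f"--query-gpu={QUERY}", "--format=csv,noheader,nounits"],
        text=True)
    max_clk, cur_clk, temp = (int(v.strip()) for v in out.strip().split(","))
    return max_clk, cur_clk, temp

# Poll a few times to see how close the actual clock stays to the per-chip ceiling.
for _ in range(5):
    max_clk, cur_clk, temp = read_gpu()
    print(f"max {max_clk} MHz | current {cur_clk} MHz | {temp} C")
    time.sleep(2)
```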

Well that makes zero sense for Nvidia.
On the contrary, I think it is one of the better illustrations of the matter. You cannot make the mistake, though, of reading every graph as being on the same scale.
Under ideal circumstances, Maxwell mostly ran a little higher than its advertised boost. Pascal increased this gap, so the boost clock sits closer to the base clock relative to the actual GPU clock (but - and this is not shown in the diagram - still higher than Maxwell). With Polaris, AMD sometimes went below the advertised boost clock in normal workloads like games (inb4 the collective corrective: yes, Nvidia sometimes did that too, but far less regularly). With Vega, apparently, the peak engine clock can exceed the boost clock, maybe similar to XFR (when special circumstances are met).

That said, I wish no one had ever invented this marketing tool. It has made life a lot more complicated for reviewers who actually care about their tests having some meaning - especially with thermally limited (and lately glorified) reference designs.
 
Well that makes zero sense for Nvidia.
My 1070 is rated for a 1680 MHz boost, however its actual attainable clocks during gaming are between 1850 and 1911 MHz.
Adding to that: the highest possible boost can be read out with NVSMI - for our 1070 FE sample it is 1911 MHz. It is usually tied to ridiculously low temperatures, like 45 or 48 °C IIRC, so basically any workload worth the name pushes the GPU beyond that threshold almost immediately.
Yep, my ASUS Strix (non-OC'd) 1070 hits 1911 MHz most of the time, even at 65 °C. The only time it drops to 1860 MHz is when the load on the GPU is very heavy; typically when that happens, the GPU can't sustain 60 fps.
 