AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

We've seen the explanation below - that each megahash requires about 7.8 GB of data transfer, right? So you could, theoretically / in an ideal world, calculate hash rates from memory bandwidth alone.

I was merely sizing up what rough efficiency various cards are known to achieve (apart from BIOS modifications). Best I've seen is ~85%; in other words, instead of the theoretical 7.8 GB/s you need (for reasons unknown to me) around 9.2 GB/s to get 1 MH/s. And that's with best-case memory subsystems - the memory type seems to play an important role here.

Fiji, which also uses HBM, is quite far from that 85% / 9.2 GB/s per MH/s. It is unknown to me, at least, whether HBM gen2 in Vega will remedy this - at least in part. The chances are there: the L2 cache is larger in Vega than in Fiji, and the memory clock rates are higher, even though the bus is only half as wide.

Other memory types, like GDDR5X in the GTX 1080/1080 Ti and Titan XP, seem to fare worse than regular GDDR5 as well, hence the example of the 1070 outperforming the 1080.
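To make the arithmetic concrete, here's a minimal Python sketch of the estimate above. The 7.8 GB/MH and 9.2 GB/s-per-MH/s figures come from this post; the per-card peak bandwidth numbers are my own assumptions for illustration.

```python
# Back-of-the-envelope Ethash rates from memory bandwidth, using the
# figures above: ~7.8 GB of DAG traffic per MH in theory, and ~9.2 GB/s
# per MH/s (~85% efficiency) for the best real-world memory subsystems.

THEORETICAL_GB_PER_MH = 7.8   # ideal traffic per megahash
REALWORLD_GB_PER_MH = 9.2     # best observed (~85% efficient)

def ethash_mhs(bandwidth_gbps, gb_per_mh=REALWORLD_GB_PER_MH):
    """Estimate Ethash hash rate (MH/s) from peak memory bandwidth (GB/s)."""
    return bandwidth_gbps / gb_per_mh

# The peak-bandwidth numbers below are my own assumptions, not from the post.
for card, bw in [("GTX 1070 (GDDR5)", 256),
                 ("Fiji (HBM1)", 512),
                 ("RX Vega (HBM2)", 484)]:
    print(f"{card}: {ethash_mhs(bw):.1f} MH/s best case, "
          f"{bw / THEORETICAL_GB_PER_MH:.1f} MH/s theoretical ceiling")
```

The ~28 MH/s it predicts for a 1070 lines up with what miners actually report, while Fiji falls well short of its ~56 MH/s best case, which is the gap being discussed.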
 
Thanks for the explanation, but could you dumb it down for me just a bit? I'm not getting it. :oops:
If RX Vega does 65 MH/s, it'll be gobbled up by miners, because it mines about twice as fast as a GTX 1070 while costing only about 20% more money.
Since power consumption is bound to be less than twice a 1070's, miners will get their investment back much faster and start making money earlier.
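A rough sketch of that payback argument, with made-up numbers - the prices, hash rates, wattages, and revenue/electricity rates below are all illustrative placeholders, not real quotes:

```python
# Rough payback comparison per the argument above: ~2x the hash rate for
# ~1.2x the price and less than 2x the power. All dollar and wattage
# figures are illustrative placeholders, not real quotes.

def payback_days(price_usd, mhs, watts,
                 usd_per_mhs_day=0.10, usd_per_kwh=0.10):
    daily_revenue = mhs * usd_per_mhs_day          # mining income per day
    daily_power = watts / 1000 * 24 * usd_per_kwh  # electricity cost per day
    return price_usd / (daily_revenue - daily_power)

print(f"GTX 1070: ~{payback_days(400, 27, 150):.0f} days to break even")
print(f"RX Vega:  ~{payback_days(480, 65, 280):.0f} days to break even")
```

With those placeholder numbers the hypothetical Vega breaks even in roughly half the time, despite the higher price and power draw, which is the whole point.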
 
What other situations are there where automatic conversion by the driver is not possible?
None come to mind that wouldn't already break the culling mechanisms. There are cases where attributes significantly deform the geometry in subsequent passes, but those shouldn't be too difficult to work around. That would explain why Vega combined stages in the Linux drivers.

In theory primitive shader culling can run at higher throughput than fixed function culling, simply because fixed function stages are never sized for the worst-case. So even ignoring attribute-shading and buffering capacity, culling during the primitive shader will increase overall pipeline throughput.
If AMD converted static meshes to FP16 positions/normals for culling, they could increase throughput further: packed math and more indices. Back-face, coarse frustum, and possibly zero-coverage tests don't need the accuracy, and they account for much of the culling. Then rerun the math at full precision for the remaining geometry, as in the sketch below. Probably bin and sort based on the low-precision results as well.
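As a rough illustration of that two-precision idea (not AMD's actual implementation), here's a numpy sketch: a conservative FP16 back-face test with a safety margin culls only the clearly back-facing triangles, and the survivors get the exact FP32 test. The margin value is a guess.

```python
import numpy as np

# Two-precision culling sketch: a conservative FP16 back-face test with a
# safety margin rejects the clearly back-facing triangles; only survivors
# get the exact FP32 test. The margin keeps FP16 rounding from ever
# culling a visible triangle. The margin value itself is a guess.

def signed_area2(tri):
    """Twice the signed screen-space area of triangles shaped (N, 3, 2)."""
    a, b, c = tri[:, 0], tri[:, 1], tri[:, 2]
    return ((b[:, 0] - a[:, 0]) * (c[:, 1] - a[:, 1])
            - (b[:, 1] - a[:, 1]) * (c[:, 0] - a[:, 0]))

rng = np.random.default_rng(0)
tris32 = rng.uniform(-1, 1, size=(10000, 3, 2)).astype(np.float32)

# Pass 1: FP16 test; cull only when clearly back-facing (area < -margin).
margin = np.float16(1e-2)
keep = signed_area2(tris32.astype(np.float16)) >= -margin
survivors = tris32[keep]

# Pass 2: exact FP32 test on the survivors only.
visible = survivors[signed_area2(survivors) > 0]
print(f"{len(tris32)} in, {len(survivors)} past FP16 cull, {len(visible)} front-facing")
```

On random geometry roughly half the triangles are rejected by the cheap test alone, which is where the throughput win would come from.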
 
If RX Vega does 65 MH/s, it'll be gobbled up by miners, because it mines about twice as fast as a GTX 1070 while costing only about 20% more money.
Since power consumption is bound to be less than twice a 1070's, miners will get their investment back much faster and start making money earlier.

It won't be 65 MH/s... (based on all we know about existing cards' performance)... Don't forget this rumor came from a card seller... so... 37-38 MH/s would be good, I guess, but with that power draw, there are more efficient solutions.
 
Is it possible that AMD just gets out of the PC graphics business and focuses on consoles? I mean, it seems that, 14 months later, they still can't beat the 1080 (or barely can), while using much more power. And I guess NVIDIA's next GPU will arrive soon.
At some point, they can't keep spending money to compete with a last-gen product...
 
I was merely sizing up what rough efficiency various cards are known to achieve (apart from BIOS modifications). Best I've seen is ~85%; in other words, instead of the theoretical 7.8 GB/s you need (for reasons unknown to me) around 9.2 GB/s to get 1 MH/s. And that's with best-case memory subsystems - the memory type seems to play an important role here.

Fiji, which also uses HBM, is quite far from that 85% / 9.2 GB/s per MH/s. It is unknown to me, at least, whether HBM gen2 in Vega will remedy this - at least in part. The chances are there: the L2 cache is larger in Vega than in Fiji, and the memory clock rates are higher, even though the bus is only half as wide.
Well, getting to 85% of theoretical peak is already quite an accomplishment.
The second thing is that those 8 KB are read (by design) from the DAG (2 GB+) in a practically random manner, which means the L2 cache is close to useless. Additionally, each thread does 64 128-byte reads to calculate a hash. GPUs like it when thread x accesses location y and thread x+1 accesses location y+1 and so on (well, a warp is 32 threads, and you want those 32 threads to stay within a contiguous 128-byte block, in NVIDIA's case). Here, each thread in a warp is its own 128-byte transaction (because each thread in a warp fetches a "random" 128 bytes from the DAG). Meaning 16 4-byte loads in a row within each thread will start to run out of L1 pretty quickly.
So not only is it memory bound, but it also pushes the GPU's ability to hide memory access latencies to the extreme.
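A small numpy sketch of that transaction arithmetic - the 32-thread warp and 128-byte transaction granularity are from the post above; everything else is illustrative:

```python
import numpy as np

# Transactions-per-warp arithmetic from the post above: 32 threads,
# 128-byte memory transactions, a 2 GB+ DAG. With coalesced access a
# warp's 32 x 4-byte loads fit in one 128-byte line; with Ethash each
# thread starts at a "random" DAG offset, so every thread is its own
# transaction.

WARP, LINE, DAG_BYTES = 32, 128, 2 * 2**30
rng = np.random.default_rng(0)

# Coalesced: thread i loads bytes [4*i, 4*i + 4) -> one line touched.
coalesced = np.arange(WARP) * 4
print("coalesced lines per warp:", len(np.unique(coalesced // LINE)))

# Ethash-style: each thread picks a random 128-byte line in the DAG.
scattered = rng.integers(0, DAG_BYTES // LINE, size=WARP) * LINE
print("scattered lines per warp:", len(np.unique(scattered // LINE)))

# Per-hash traffic: 64 reads x 128 bytes = 8 KB, i.e. ~8 GB per MH,
# in the same ballpark as the ~7.8 GB/MH figure earlier in the thread.
print("KB per hash:", 64 * LINE / 1024)
```

One line touched per warp in the coalesced case versus ~32 in the scattered case is the 32x transaction blow-up being described.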

P.S.: Thinking a bit more about this... It might not even be a GDDR5X or HBM issue... but simply that these parts also have a larger number of SMs/CUs and are thus more capable of overloading their memory subsystems with loads from random locations... Has anyone done any mining on a GDDR5X 1060? :)
 
Is it possible that AMD just gets out of the PC graphics business and focuses on consoles? I mean, it seems that, 14 months later, they still can't beat the 1080 (or barely can), while using much more power. And I guess NVIDIA's next GPU will arrive soon.
At some point, they can't keep spending money to compete with a last-gen product...
Vega is far more than just one product or even one generation; it'll be the base for next-gen development too. Just like with the original GCN, AMD is again setting their aim (maybe too far) into the future, while NVIDIA seems to continue to focus on what's here right now.
 
AMD is again setting their aim (maybe too far) into the future, while NVIDIA seems to continue to focus on what's here right now
That's one of the most common misconceptions in the tech world right now, IMO.

The future according to whom? AMD is playing catch-up with NVIDIA on memory compression, tiled rasterization, complex geometry processing & tessellation, high clock speeds, conservative rasterization, and Raster Ordered Views, among other things. Meanwhile, AMD is focused on integrating console features into the PC space (FP16, RPM).

The truth is each company is designing its GPUs according to the future it expects to happen. Whether that perspective materializes or not is a different matter entirely.
 
It won't be 65 MH/s... (based on all we know about existing cards' performance)... Don't forget this rumor came from a card seller... so... 37-38 MH/s would be good, I guess, but with that power draw, there are more efficient solutions.

According to GamersNexus, it will indeed be around 70 MH/s: "We've received reports from contacts at AIB partners that RX Vega will be capable of mining at 70MH/s". I will point out that since ETH hash rate is a memory-bound workload, this would suggest a significant improvement in Vega's effective memory bandwidth compared to the ~20% regression vs. Fiji it is currently showing on Vega FE.
 
Is it possible that AMD just gets out of the PC graphics business and focuses on consoles? I mean, it seems that, 14 months later, they still can't beat the 1080 (or barely can), while using much more power. And I guess NVIDIA's next GPU will arrive soon.
At some point, they can't keep spending money to compete with a last-gen product...

Yeah, they're drowning in the losses caused by all those graphics cards sitting on the shelves that no one buys, right?

If only we lived in an alternate reality where their graphics cards are constantly sold out despite being priced at over twice their MSRP...
 
Is it possible that AMD just gets out of the PC graphics business and focuses on consoles? I mean, it seems that, 14 months later, they still can't beat the 1080 (or barely can), while using much more power. And I guess NVIDIA's next GPU will arrive soon.
At some point, they can't keep spending money to compete with a last-gen product...

It is to AMD's advantage to be able to turn their R&D spending into products for as many markets as they can be profitable in. I don't think they can kill their consumer PC business without killing their semi-custom, semi-professional, professional and HPC business right along with it.

As long as they can make any money at all actually selling the products they develop, there's no reason to leave any of those markets.
 
The future according to whom? AMD is playing catch-up with NVIDIA on memory compression, tiled rasterization, complex geometry processing & tessellation, high clock speeds, conservative rasterization, and Raster Ordered Views, among other things. Meanwhile, AMD is focused on integrating console features into the PC space (FP16, RPM).

Console-style APIs, async compute. Don't make it sound like AMD didn't bring anything to the PC space before Vega.
 
AMD doesn't make any more money from those cards, and it actually hurts their business a lot. Not only do they not gain gaming market share, but it also hurts their partners by reducing potential sales of peripherals, and it even hurts their image, because gamers will think AMD is trying to scam them by selling their cards at those crazy prices.
 
It is to AMD's advantage to be able to turn their R&D spending into products for as many markets as they can be profitable in. I don't think they can kill their consumer PC business without killing their semi-custom, semi-professional, professional and HPC business right along with it.

As long as they can make any money at all actually selling the products they develop, there's no reason to leave any of those markets.
Unless of course catering to a certain market puts them at a competitive disadvantage in another. Then they need to make choices.
 
AMD is part of a duopoly that is vital to the health of the industry, consumers and technology enthusiasts alike. We've all seen what happens when they falter in one of their markets: a decade ago, Intel, thoroughly embarrassed by a smaller competitor, rolled out the fantastic Core architecture and the tick-tock production schedule, tearfully promising to never again lose sight of the drive to innovate. It took all of a couple of missteps by AMD for them to abandon all that and plunge right into the exploitative complacency that Zen is now so rudely shaking them out of. Does anyone think that if Vega were not going through this tumultuous release, GV10x chips would be as invisible as they are? We are all worse off for it.
 
Vega is far more than just one product or even one generation; it'll be the base for next-gen development too. Just like with the original GCN, AMD is again setting their aim (maybe too far) into the future, while NVIDIA seems to continue to focus on what's here right now.
Considering the pervasive change AI could bring to our future shortly, you're right on one account: NV isn't aiming for the future -- they're just helping shape it.
 
I am seriously wondering about chicken and egg here. I mean, there are reports of mining farms buying whole truckloads directly from AIBs, with cards not even reaching e-tail anymore. Maybe the e-tailers gradually raised prices to this level strictly because cards were constantly out of stock (and not even coming back into stock in between).
 