AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

On the other hand, for the very same reason, the Power Saver profile cuts your performance by only 4% while the power limit goes down by 25% (in other words, losing 4% performance gives you 33% higher energy efficiency). No card running in a "comfortable range" would have such extreme differences between the profiles.
Maybe on average, but if you go into details, several games lose 8, 9 and 10% of performance. Heck, Witcher 3 lost 14% at 1080p. So it's very workload dependent; AMD wouldn't have pushed the clocks that high for a mere 4%.
the Power Saver profile drops it to 165 W.
The lowest, Power Saver, is 200 W in that review, which is still much higher than GP104 (166 W), and with 10% less performance.
NVIDIA on the other hand has had the luxury of being more moderate with their clocks and thus voltage; they had a lot of headroom on both the clocks and the voltage to go higher, but they didn't need to. Being in a more comfortable, dare I even say optimal, clock range for the chip, lowering the voltage and/or clocks a bit makes a smaller difference here.
You can push Pascal and Volta clocks to the limit comfortably at ~2.1 GHz without increasing voltage or going through the roof in power consumption, so once more it comes down to architecture, which is what the main argument is about. One architecture (Vega) has a ceiling on clocks and thus scales badly power-wise once you push past a certain point, and the other one doesn't, because its threshold point sits much higher, even with nearly double the transistor count and on an older node.
but I don't think I've heard of a single one that didn't benefit from lowering the voltage.
I've heard of several. Even unstable cards. Once more, it's a lottery.
 
WCCFT reached out to NVIDIA about their ResNet-50 performance using Tensor Cores, and NVIDIA got back to them with their latest results for Turing (T4) and Volta (V100).

AMD-MI60-TESLA-V100-Tensor-ResNet-Benchmarks.jpg

https://wccftech.com/amd-radeon-mi60-resnet-benchmarks-v100-tensor-not-used/

Official statement from NVIDIA:
“The 70W Tesla T4 with Turing Tensor Cores delivers more training performance than 300W Radeon Instinct MI60. And Tesla V100 can deliver 3.7x more training performance using Tensor Cores and mixed precision (FP16 compute / FP32 accumulate), allowing faster time to solution while converging neural networks to required levels of accuracy.” – NVIDIA


Official Statement from AMD:
“Regarding the comparison – our footnotes for that slide clearly noted the modes so no issues there. Rationale is that FP32 training is used in most cases for FaceID to have 99.99%+ accuracy, for example in banking and other instances that require high levels of accuracy.” – AMD
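
For anyone wondering what "FP16 compute / FP32 accumulate" in NVIDIA's statement actually buys, here is a toy sketch (plain NumPy, made-up values, unrelated to either vendor's benchmark) of why the width of the accumulator matters: half-precision inputs are cheap, but an FP16 running total stops moving once it gets large enough, while an FP32 accumulator keeps every contribution.

```python
# Toy sketch of "FP16 compute / FP32 accumulate" (NumPy, illustrative only).
# FP16 has so little precision that, once the accumulator reaches 2048, adding
# 1.0 no longer changes it; accumulating the same FP16 inputs in FP32 does not
# have that problem, which is why mixed precision keeps the accumulator wide.
import numpy as np

acc_fp16 = np.float16(0.0)
acc_fp32 = np.float32(0.0)
one_fp16 = np.float16(1.0)

for _ in range(4096):
    acc_fp16 = acc_fp16 + one_fp16              # FP16 inputs, FP16 accumulator
    acc_fp32 = acc_fp32 + np.float32(one_fp16)  # FP16 inputs, FP32 accumulator

print(acc_fp16)  # 2048.0 -- the FP16 accumulator stopped moving halfway
print(acc_fp32)  # 4096.0 -- the FP32 accumulator counted every addition
```

AMD's counterargument is simply to sidestep reduced precision altogether and train in FP32, trading throughput for guaranteed accuracy.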
 
Maybe on average, but if you go into details, several games lose 8, 9 and 10% of performance. Heck, Witcher 3 lost 14% at 1080p. So it's very workload dependent; AMD wouldn't have pushed the clocks that high for a mere 4%.
Of course on average
The lowest, Power Saver, is 200 W in that review, which is still much higher than GP104 (166 W), and with 10% less performance.
GPU power != card power
I never said Vega matches Pascal in perf/watt; those points were just to illustrate how Vega 10 was pushed close to its limits and GP104 was not, contrary to what you claimed earlier ("GP104 is clocked to the max").

You can push Pascal and Volta clocks to the limit comfortably at ~2.1 GHz without increasing voltage or going through the roof in power consumption, so once more it comes down to architecture, which is what the main argument is about. One architecture (Vega) has a ceiling on clocks and thus scales badly power-wise once you push past a certain point, and the other one doesn't, because its threshold point sits much higher, even with nearly double the transistor count and on an older node.
I'm not sure how we got here but we both seem to be claiming the same thing but still 'arguing' about it o_O

I've heard of several. Even unstable cards. Once more, it's a lottery.
Guess there always have to be some bad cards, and yes, there's always some lottery involved no matter what you buy.
 
I don't know.
But shouldn't dedicated hardware for a specific task always trump a jack-of-all-trades chip?
GPUs should be about graphics, no?
If you want a piece of the AI market, maybe it's better to develop specific hardware for that.
 
It really depends on the use case what makes the most sense. If you are a giant company with service Y, it probably makes all the sense in the world to optimize a solution for Y. If you are renting compute time to a diverse set of customers, you likely want a flexible solution instead of multiple niche ones (maintenance, shifting demand, and volume all play to your advantage). If you are a researcher at a university, you might want something extra flexible and quite likely cheap: your budget is limited, you might want to work on your laptop/desktop, and you are possibly pushing boundaries with new algorithms, so anything too hardcoded doesn't cut it.

This is an exciting time, as general AI is very much an unsolved problem. It's difficult to even imagine what solution, in both hardware and algorithms, would feasibly lead to general AI/the singularity in the semi-near future. Computing needs are diverse enough, and growing fast enough, that there is room for many players to innovate and play today.

One super interesting thing is the various types of neural networks, as they seem to be great at different kinds of graphics tasks. Maybe the nature of graphics rendering is about to get greatly enhanced. Maybe someone even dares to dream of a graphics engine outputting data structures that are well suited for neural networks to enhance into pleasing visuals. Think of this as a cel-shading-on-steroids kind of idea. "AI rendering" might sound crazy, and it probably is, but there are already demos of taking a neural network and teaching it the style of a specific artist. The network then takes ordinary pictures and changes their style to match that artist.
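
The "teach a network an artist's style" demos mentioned above are usually some variant of neural style transfer, where style is summarized as feature-map correlations. A minimal sketch of that core idea (NumPy, random stand-in activations and hypothetical layer shapes; a real demo would take the features from a pretrained CNN such as VGG):

```python
# Minimal sketch of the "learn an artist's style" idea (NumPy, random stand-in
# data). In the classic style-transfer demos, "style" is captured as the
# correlations between a layer's feature maps (a Gram matrix); an image matches
# the artist when its Gram matrices match those of the artist's paintings.
import numpy as np

def gram_matrix(features: np.ndarray) -> np.ndarray:
    """features: (channels, height, width) activations from one conv layer."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (c * h * w)      # channel-to-channel correlations

def style_loss(generated: np.ndarray, style: np.ndarray) -> float:
    """Mean squared difference between the two Gram matrices."""
    diff = gram_matrix(generated) - gram_matrix(style)
    return float(np.mean(diff ** 2))

# Random activations stand in for features from a pretrained CNN; in a real
# demo the generated image is optimized to drive this loss down.
rng = np.random.default_rng(0)
print(style_loss(rng.standard_normal((64, 32, 32)),
                 rng.standard_normal((64, 32, 32))))
```

The generated image is then iteratively optimized to pull this loss down while also staying close to the original content.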
 
Every chip has a limit on how high it can clock. Once you get closer to it, the voltage requirements start to ramp up rapidly and power consumption grows much faster than performance. The limit depends on both the architecture and the process the chip is built on (as well as individual variation between chips). (Also, GP104 and Vega 10 are not built on the same or equivalent processes.)
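
A rough way to see the shape of that trade-off is the usual dynamic-power approximation P ∝ f·V²: voltage has to rise faster and faster as the clock approaches the ceiling, so power grows much quicker than performance does. The sketch below uses made-up clock/voltage points purely for illustration; real cards add static leakage, memory and board power on top.

```python
# Back-of-the-envelope sketch: dynamic power ~ f * V^2 (illustrative model only;
# the clock and voltage points below are made up, leakage/board power ignored).
def rel_dynamic_power(freq_mhz: float, volts: float,
                      base_freq_mhz: float = 1400.0, base_volts: float = 1.00) -> float:
    """Dynamic power relative to a baseline operating point."""
    return (freq_mhz / base_freq_mhz) * (volts / base_volts) ** 2

# Chasing the last ~10% of clock with a big voltage bump costs ~45% more power,
# while backing off 5% on clock plus a modest undervolt saves ~23%.
print(f"+10% clock, +15% voltage: {rel_dynamic_power(1540, 1.15):.2f}x power")
print(f" -5% clock, -10% voltage: {rel_dynamic_power(1330, 0.90):.2f}x power")
```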

Vega models are clocked, and use voltages, far closer to the actual limits of the chip, chosen so that every card should be able to meet the advertised clock speeds in the pre-set configuration even if that means using a higher voltage on many of the cards. Being so close to its limits, lowering the voltage and/or clocks a bit makes a huge difference to power consumption here.
This behaviour is nicely demonstrated for example in TPU's Vega 64 review (https://www.techpowerup.com/reviews/AMD/Radeon_RX_Vega_64/)
Using the (primary) Balanced profile the GPU power limit is 220 W, the Turbo profile raises it to 253 W, and the Power Saver profile drops it to 165 W. Vega being already so close to its limits, Turbo buys you only about 1% higher performance for 15% higher consumption. On the other hand, for the very same reason, the Power Saver profile cuts your performance by only 4% while the power limit goes down by 25% (in other words, losing 4% performance gives you 33% higher energy efficiency). No card running in a "comfortable range" would have such extreme differences between the profiles.

NVIDIA on the other hand has had the luxury of being more moderate with their clocks and thus voltage; they had a lot of headroom on both the clocks and the voltage to go higher, but they didn't need to. Being in a more comfortable, dare I even say optimal, clock range for the chip, lowering the voltage and/or clocks a bit makes a smaller difference here.


Just like with GP104, performance varies even out of the box, but I don't think I've heard of a single one that didn't benefit from lowering the voltage.

That's huge, granted I know little about undervolting.
 
The MI50 & MI60 scale much better than any other alternative. It doesn't matter how much power is in one chip; it matters how much can be achieved with numerous chips.
 
Mobile Vega 20 tested in the latest 15" MacBook Pro:


6UnJYfD.png


He also claims the thermals are substantially improved. The i9 throttles substantially less now, while the whole system is quieter.

Comparing Polaris 11/21 with Vega M 20, we're seeing an 82% performance upgrade within the same power envelope and same fab process.

I have no idea how these compare to Windows scores, but it would be really interesting to compare the mobile Vega 16/20 with the GP106 and GP107 mobile cards.
 
Mobile Vega 20 tested in the latest 15" MacBook Pro:
[…]
Comparing Polaris 11/21 with Vega M 20, we're seeing an 82% performance upgrade within the same power envelope and same fab process.

If this is true, why are they pushing RX590 at all? Vega 20 must be exclusive to Apple even on desktop?
 
If this is true, why are they pushing RX590 at all? Vega 20 must be exclusive to Apple even on desktop?

They're pushing Polaris 10 again because it's a whole lot cheaper to make than a new mid-range Vega chip, and, contrary to the belief of many, power efficiency isn't all that important in the $200-300 price range for gaming video cards.
Vega 20 may not be exclusive to Apple and we could still see a design win for it in the laptop PC space, but the MacBook Pro gets away with enormous margins that are very rare elsewhere, and Vega 20 is probably a whole lot more expensive than e.g. a GTX 1060 Max-Q.
 
Well, technically the consumer doesn't even know of the existence of a 7nm Vega 20.
All they'll see on AMD's webpage is the Radeon Instinct MI60 and MI50.
 
Well, technically the consumer doesn't even know of the existence of a 7nm Vega 20.
All they'll see on AMD's webpage is the Radeon Instinct MI60 and MI50.

Yeah, I mean it can be confusing even for "us" sometimes, since some tech sites use "Vega 20" without the mobile mention.
 
He also claims the thermals are substantially improved. The i9 throttles substantially less now, while the whole system is quieter.
Even in CPU-only benchmarks... so that would point more towards an improved cooling solution (or luck of the draw), actually...

Comparing Polaris 11/21 with Vega M 20, we're seeing an 82% performance upgrade within the same power envelope and same fab process.
There weren't any power consumption numbers, right? If the cooling solution is indeed better, it could easily draw a bit more power and still be quieter (although I don't doubt efficiency increased substantially).
 
If this is true, why are they pushing RX590 at all? Vega 20 must be exclusive to Apple even on desktop?

Vega seems to be fantastic... for running at super low power and super low frequencies. However, that doesn't mean it's great in terms of silicon cost. It takes more silicon, and thus more money, to build an equivalently performing Vega chip than to build a Polaris one. So an RX 590 is what we get instead of a Vega 32.
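
To put a rough number on the "more silicon, more cost" point: a quick dies-per-wafer estimate with the standard approximation, using the commonly cited die sizes for Polaris 10 (~232 mm²) and Vega 10 (~486 mm²) and an assumed wafer price (yield losses, HBM and interposer costs all ignored), already shows the gap.

```python
# Rough sketch of the silicon-cost argument above. Uses the standard
# dies-per-wafer approximation and a made-up wafer price; die sizes are the
# commonly cited figures for Polaris 10 and Vega 10, and yield is ignored.
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Classic approximation: usable wafer area minus edge losses."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

WAFER_COST = 6000  # USD, assumed for illustration only

for name, area in [("Polaris 10", 232), ("Vega 10", 486)]:
    n = dies_per_wafer(area)
    print(f"{name}: ~{n} dies/wafer, ~${WAFER_COST / n:.0f} per die before yield")
```

Roughly half the dies per wafer means roughly double the silicon cost per chip, before Vega's HBM2 and interposer are even counted.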
 