AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

So Frontier seems to be the MI25 GPU scheduled for June
No, the MI25 is part of the Radeon Instinct line, without display outputs.
We'll still be seeing RX Vega for gamers and Pro Vega SSG.

The Frontier is probably a 'Super Premium' product because it bundles the software stacks for content creation, offline rendering, AI and gaming capabilities.
It's like a Tesla, a Quadro and a GeForce all at once.
 
But this is not a consumer product...looks like we won't be seeing "regular" Vega before this fall.
Lisa was asked about the launch date of Vega. She was very specific in saying that the first iteration of Vega is launching in the 2nd half of June, and that is the Frontier Edition. Guess regular Vega will launch after that.
 
No, the MI25 is part of the Radeon Instinct line, without display outputs.
We'll still be seeing RX Vega for gamers and Pro Vega SSG.

The Frontier is probably a 'Super Premium' product because it bundles the software stacks for content creation, AI and gaming capabilities.
Ah, OK. What skewed my thinking was that it was being compared to the P100, and his slide led with "Data Scientist".
And of course the MI25 is passively cooled for a chassis design and would not have the blower, doh.
So regarding launch priority, I guess their strategy is to get buy-in with the Frontier and then use that to drive adoption of the MI25 for larger-scale chassis/node solutions.
Cheers
 
Lisa was asked about the launch date of Vega. She was very specific in saying that the first iteration of Vega is launching in the 2nd half of June, and that is the Frontier Edition. Guess regular Vega will launch after that.

Higher margins and overall profits come from corporate products these days; it's the same reason Nvidia is making huge corporate-only chips. And as an obvious guess, the "consumer" Vega is probably the 1600MHz-clocked chip with some bad silicon, so it will have to wait for stockpiling. Besides, corporate products seem to be expected to sell out when new, whereas no availability for consumers is seen as bad press.

Still, doesn't mean a Computex reveal isn't forthcoming. The Frontier announcement could just be a way to make sure no one blows all their cash pre-ordering Nvidia's "available, at some point" Volta GPUs without having AMD to consider as well.
 
It's encouraging that AMD went from the 12 TFLOPS on the slide leaked by VC last year to 13 TFLOPS today, but the pixel fillrate is stated as if the clock is only 1.4GHz or so. Perhaps the latter is at the base clock?

And it looks more like the 1600MHz 16GB card was a pro card. If sheer clockspeed can deliver 40-50% over Fury X (1600MHz vs Fury X's 1050MHz is a ~52% clock increase), it'd be great if the architectural improvements cover the remaining 20% or so to the 1080 Ti. Now, if only they can do it with a substantial distance from the Volta launch.
 
Contrary to the render, I'm pretty sure the card in Raja's hands had 1× 8-pin and 1× 6-pin.

Impressive numbers shown there, for sure.
Yeah, double-checked by pausing the replay, and it is 1× 8-pin and 1× 6-pin for 300W.

But the big surprise going by the numbers: Frontier's TBP per TFLOP is nearly a match for the 1080 Ti.
1080 Ti: 10.6 TFLOPS, 1582MHz, 250W.
OC'd to 1950MHz at 290W (accurately measured by PCPer with an oscilloscope), it gives around 13 TFLOPS.

So from an efficiency standpoint, going by the numbers AMD presented, Frontier has perf/W pretty similar to the Pascal 1080 Ti with peak FP32 performance equalised; allowances for one being GDDR5X and the other HBM2, but still (see the sketch below).
However, I must say that the Quadro P6000 and Tesla P40, being the full-core GP102, are more efficient, with just over 12 TFLOPS FP32 at 250W.
So yeah, the picture shifts somewhat depending on whether you compare against enthusiast consumer or professional/HPC models, but the gap is closing relative to Pascal; the caveat being mixed-precision performance and FP16/INT8 for training and inferencing, specifically for deep-learning frameworks/apps.
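To make the napkin math explicit, here's a minimal Python sketch using only the figures quoted above (AMD's ~13 TFLOPS at 300W for Frontier, PCPer's 1080 Ti measurements, and the 250W board ratings for the GP102 pro cards); rough comparison arithmetic, not measured data:

    # Perf/W from the numbers quoted in this post.
    cards = {
        "Vega Frontier (AMD figures)": (13.0, 300),   # TFLOPS, watts
        "GTX 1080 Ti (stock)":         (10.6, 250),
        "GTX 1080 Ti (OC, per PCPer)": (13.0, 290),
        "Quadro P6000 / Tesla P40":    (12.0, 250),
    }
    for name, (tflops, watts) in cards.items():
        print(f"{name:30s} {tflops / watts * 1000:5.1f} GFLOPS/W")
    # -> ~43.3, ~42.4, ~44.8 and ~48.0 GFLOPS/W respectively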

I doubt reviewers will see this (what I assume is top Vega) for a while, so it's difficult to say if the AMD numbers measure up in terms of those three parameters, but with Fiji they were actually within TDP and reasonably conservative.

Cheers
 
This may not be a consumer card, but unless drivers are especially crappy, I see no reason why this shouldn't be benchmarked in games at launch, just so we can get some idea of what Vega can do in games.
 
It was mentioned before, but here's the reference:
http://pro.radeon.com/en-us/vega-frontier-edition/

16GB at ~480GB/s.

(Bonus points for what seems to be an AiO watercooled version in gold)

Unless they're using four HBM2 stacks clocked lower than HBM1, it looks pretty clear to me it's two 8-Hi stacks rated at ~1.9Gbps.
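As a quick sanity check on that reading, assuming two 1024-bit stacks at an effective ~1.9Gbps per pin:

    # Bandwidth = stacks x bus width x per-pin data rate, bits -> bytes.
    stacks       = 2
    bus_bits     = 1024    # per HBM2 stack
    gbps_per_pin = 1.875   # the "~1.9Gbps" rating above

    bandwidth_gb_s = stacks * bus_bits * gbps_per_pin / 8
    print(f"{bandwidth_gb_s:.0f} GB/s")  # -> 480 GB/s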
Fair point - but where was it mentioned before? The 480 GB/s, I mean? Must have missed that. OT: your constant bickering about how you were possibly right and others were wrong is mildly annoying. Remember, until yesterday, we were in a rumors/speculation thread.


edit:
Mild discrepancies between specs, rendered images and shown products notwithstanding...

- Card rendered has 2× 8-pin, card shown had 1× 6- and 1× 8-pin
- TFLOPS indicate ~1.58 GHz (or, if the 13 was rounded down, 1.6 GHz)
- Pixel fillrate indicates 1.4 GHz. Maybe calculated off the base clock. Or a VERY odd number of ROPs, for completeness' sake. (See the sketch below.)
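To show where those clock inferences come from, here's a minimal sketch assuming Vega 10's known 4096 shaders (2 FLOPs/clock via FMA) and 64 ROPs; the ~90 GP/s fillrate input is simply what 64 ROPs × 1.4 GHz works out to, so treat it as an assumption:

    # Back-solving the clocks implied by the slide numbers.
    shaders, flops_per_clock = 4096, 2
    rops = 64

    clock_from_tflops   = 13.0e12 / (shaders * flops_per_clock)  # Hz
    clock_from_fillrate = 90e9 / rops                            # Hz

    print(f"clock implied by 13 TFLOPS: {clock_from_tflops / 1e6:.0f} MHz")   # ~1587
    print(f"clock implied by ~90 GP/s:  {clock_from_fillrate / 1e6:.0f} MHz") # ~1406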

edit2:
SPECviewperf against Titan Xp is ... well chosen.
Here's another view of the SPECviewperf 12.1 scores, for Vega FE, Titan Xp, Quadro P6000 and P5000:

        Vega FE   Titan Xp   P6000    P5000
Catia   137.75    107.29     167.36   155.89
Creo     83.94     65.20     121.05   117.88
SW      114.88     67.75     151.73   128.94

Clearly, Titan Xp is being stalled by its software stack, which AMD rightfully pointed out as one of the major pillars of modern competitiveness among GPUs. Both the Quadro P6000 and even the lower-end P5000 outperform it considerably in SPECviewperf 12.1.
Hopefully, AMD chose Titan Xp here because they intend to compete on its price with Vega FE.

edit3:
As was to be expected, the others beat me to most of it. :)
 
1587MHz boost clock = exactly 13 TFLOPS. The pixel fillrate number is indeed odd.
 
This may not be a consumer card, but unless drivers are especially crappy, I see no reason why this shouldn't be benchmarked in games at launch, just so we can get some idea of what Vega can do in games.

Well, it is an FAD (Financial Analyst Day) presentation; in general they mostly talk margins, market growth, etc. They wanted to present this GPU, and this GPU is aimed more at enterprise than at consumers. They also had a lot of CPU products to present, without much demo time. I can imagine it is sometimes not a bad idea not to mix everything.

The primary audience of a financial analyst day is not really there to see how games run on a graphics card (especially when this card doesn't aim at that market). I don't think they wanted to blur the message too much.

But I agree that they could maybe have presented the gaming GPU at the same time, especially since they took the time to show some HBCC results (on RotTR) and one game benchmark (4K Sniper Elite, which just showed an average of 80fps vs 60-70 for the 1080 Ti).

1587MHz boost clock = exactly 13 TFLOPS. The pixel fillrate number is indeed odd.

Don't forget that they write "approximately" every time... ~13 TFLOPS FP32 and ~25 FP16... it could be anywhere from around 12.6-12.8 TFLOPS up to 13.2. The final clock was maybe not completely finalized when the slide was made, or they simply don't want to give overly precise numbers.
 
The way they chose to demonstrate RotTR is a bit odd. They could have done the same thing at 4K with Very High textures; the game demands way more than 8 gigs of VRAM at that res and those texture settings, and I've personally seen it go up to 10 gigs during gameplay. That could have been a good demonstration of HBCC at 8 gigs of VRAM (for consumer products). The way they chose to demonstrate it just shows they are not yet confident about the gaming performance of high-end Vega; RotTR just happens to be a very demanding game at 4K. I could be wrong though, maybe the drivers are not up to par yet and they are doing last-minute optimizations. Vega is a new architecture after all.
 
The way they chose to demonstrate RotTR is a bit odd. They could have done the same thing at 4K with Very High textures; the game demands way more than 8 gigs of VRAM at that res and those texture settings, and I've personally seen it go up to 10 gigs during gameplay. That could have been a good demonstration of HBCC at 8 gigs of VRAM (for consumer products). The way they chose to demonstrate it just shows they are not yet confident about the gaming performance of high-end Vega; RotTR just happens to be a very demanding game at 4K. I could be wrong though, maybe the drivers are not up to par yet and they are doing last-minute optimizations. Vega is a new architecture after all.

From what I understand, they enabled only 2GB on the GPU, then set HBCC on top of that (simulating a 2GB GPU)... and yet you see the minimum fps double and the average get a good 50% boost. Well, they could maybe have enabled only 4GB and gone to 4K, but I think the point was more to demonstrate a big impact with a small memory pool (of course, I can imagine the impact would be quite a bit lower on an 8+GB GPU). Let's be honest: if a game runs at 30fps average at 4K, showing it at 40fps will not look the same, especially with variation. This tech could be really nice on average- to low-end GPUs, but it doesn't change the compute power; it only reduces the memory bottleneck.
 
edit2:
SPECviewperf against Titan Xp is ... well chosen.
Here's another view of the SPECviewperf 12.1 scores, for Vega FE, Titan Xp, Quadro P6000 and P5000:

        Vega FE   Titan Xp   P6000    P5000
Catia   137.75    107.29     167.36   155.89
Creo     83.94     65.20     121.05   117.88
SW      114.88     67.75     151.73   128.94

Clearly, Titan Xp is being stalled by its software stack, which AMD rightfully pointed out as one of the major pillars of modern competitiveness among GPUs. Both the Quadro P6000 and even the lower-end P5000 outperform it considerably in SPECviewperf 12.1.
Hopefully, AMD chose Titan Xp here because they intend to compete on its price with Vega FE.
Bearing in mind that the P2000 (cut-down GP106, 1024 CUDA cores) is faster than the WX 7100 (full P10), these results are expected. And they are similar to the $850 Quadro P4000 (1792 CUDA cores). So I suppose no one will buy this card for 3D modeling, especially if it costs more than $1000.

What is surprising is that this time the Catia result is better than the SW one.
 
Don't forget that they write "approximately" every time... ~13 TFLOPS FP32 and ~25 FP16... it could be anywhere from around 12.6-12.8 TFLOPS up to 13.2. The final clock was maybe not completely finalized when the slide was made, or they simply don't want to give overly precise numbers.
Knowing marketing, FP32 must be at most 12.74 TFLOPS, since if it were 12.75 or higher they would round FP16 (2× FP32) up to 26 instead of 25 TFLOPS.
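That bound is easy to check numerically, assuming FP16 is exactly 2× FP32 (as the ~13/~25 pairing suggests) and conventional half-up rounding:

    # Only a narrow FP32 range rounds to 13 while 2x it still rounds to 25.
    def rounds_to(x, target):
        """True if x rounds half-up to the given integer."""
        return int(x + 0.5) == target

    for fp32 in (12.49, 12.50, 12.74, 12.75, 13.00, 13.49):
        ok = rounds_to(fp32, 13) and rounds_to(2 * fp32, 25)
        print(f"FP32 {fp32:5.2f} -> FP16 {2 * fp32:5.2f}  fits the slide: {ok}")
    # Only 12.50-12.74 prints True; 12.75 and up pushes FP16 to 26.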
 