AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

Can't get any clearer than this; conversely, you could argue that it makes no damned sense to market a card for pro graphics and have it ship with gaming drivers.
You would suggest software developers test packed math and HBCC support on AMD cards against the Quadro line with pro drivers then? Game developers will want gaming drivers and access to features as early as possible to start development.
 
Isn't it even worse if these consecutive "wait another 2 months" statements somehow did not get vetted by the marketing team? Then the blame is 100% on management, not the marketing team.
I think any statements would be vetted or polished by marketing, but I don't think they could commit AMD to any schedule or product data on their own. The specific message or idea that there would be something substantial to say within some time limit would involve management or engineering providing the timelines and official commitment.

The marketing group would have been directed to spin the product, and what marketing put out likely reflects what they were given to work with. Given how counterproductive it is to excitedly schedule a damp squib in advance, I think it's plausible they were led to expect more as well.
 
You would suggest software developers test packed math and HBCC support on AMD cards against the Quadro line with pro drivers then? Game developers will want gaming drivers and access to features as early as possible to start development.
I'm sure developers could get access to gaming drivers from AMD with an NDA or something, if they're very much against providing game drivers shipped with the FE.
 
I'm sure developers could get access to gaming drivers from AMD with an NDA or something, if they're very much against providing game drivers shipped with the FE.
Going forward with DX12/Vulkan and GPU driven rendering, drivers may not matter. Compilers will still make a bit of a difference. Linux for example is going towards a unified driver, including even the Windows foundation. So hiding performance likely won't be easy in the future. Anyone with a card will know performance and the past development models are likely invalid. I could see an argument for selling "engineering samples" with new features and limited performance profiles to accelerate development. That's historically been an issue for AMD where they had more advanced features, but the market took a while to catch up. Low level APIs facilitate that to some degree as cards need more specific code paths.
 
What makes you think the Vega FE card has all the pro features that SPECviewperf uses? Why would you compare it to a Quadro then? Does comparing a Titan to a Quadro make sense?
I don't think it has many of those pro features.

That's why it makes no sense for AMD to bring up SpecViewPerf in the first place. Only pro users care about that benchmark and those aren't using a Titan Xp anyway.

By doing the comparison, everybody is a Google away from discovering that the FE is a pretty bad choice for pro workloads.

So what's the upside of doing it to begin with?

I do agree that it's trying to be a Titan Xp competitor and it will probably be competitive in games and machine learning (if it works.) But they're doing their very best to muddle the message.
 
That "single parameter" is the only hardware bullet-point that defines the price bracket and market for the card, with the other being driver and sales/technical support.
If you want to compare "absolute performance or price/performance" then what's ridiculous is to even mention Quadro and Frontier / Radeon Pro models. On those factors, the aforementioned Quadro P4000 pales in comparison to a similarly priced GTX 1080 Ti.

You are so right. That's why, for example, a 16GB GP100 is so much cheaper than a 24GB P6000. Wait, what's that? It's actually $1,000-1,500 MORE? How strange, a card with 66% of the VRAM is significantly more expensive. Someone better contact the retailers, notify them of this egregious deviation from TotTenTraz's workstation card value classification 2017 and have them fix the pricing error!


That conclusion is anything but logical because there won't be any 12GB Vega. HBM2 is either 2-Hi, 4-Hi or 8-Hi, meaning a 2 stack solution like Vega 10 will only get 4, 8 or 16GB VRAM.
The Titan Xp is the closest Pascal card there is in comparison to Vega FE. The GP102 has a 384-bit bus, meaning the card can only have 6, 12 or 24GB of VRAM. NVIDIA would need to cut a third of the memory bus and ROPs to reach 16GB, but that would essentially end up as a GP104 with slightly higher compute throughput but lower fillrate.
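For what it's worth, the capacity constraints being argued here are just multiplication. Here's a minimal sketch, assuming the commonly cited 1 GB HBM2 dies and 1 GB (8 Gb) GDDR5X chips with one or two chips per 32-bit channel; those densities are my assumption, not something stated above:

```cpp
// Sketch of the VRAM capacity arithmetic for HBM2 stacks vs. a GDDR bus.
#include <cstdio>

int main() {
    // HBM2: capacity = stacks * stack height * 1 GB per die.
    // Vega 10 uses 2 stacks, so only 4, 8 or 16 GB are possible.
    for (int hi : {2, 4, 8})
        std::printf("2 stacks, %d-Hi HBM2: %2d GB\n", hi, 2 * hi);

    // GDDR5X on GP102: 384-bit bus = 12 x 32-bit channels.
    // One or two 1 GB chips per channel gives 12 or 24 GB (6 GB with 4 Gb chips).
    for (int chips_per_channel : {1, 2})
        std::printf("384-bit bus, %d chip(s)/channel: %2d GB\n",
                    chips_per_channel, 12 * chips_per_channel);

    // Cutting the bus by a third (256-bit = 8 channels) is what it would take
    // to land on 16 GB with two 1 GB chips per channel.
    std::printf("256-bit bus, 2 chips/channel: %2d GB\n", 8 * 2);
    return 0;
}
```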

How is this relevant? My point, which you are clearly side-stepping, is that while you are protesting two cards being compared because the only correct and proper way of doing so requires them to have the same vram size... that's EXACTLY what AMD has done.



No one's upset, don't worry ;)
But if at this point you can't differentiate the audiences for 8GB and 16GB workstation cards, it's only natural that you would struggle to see how it's an invalid comparison.

But at the same time, I have to wonder why you aren't preaching about how bad the Quadro P5000 is in perf/$ in comparison with the P4000. The difference in the SPECviewperf results is 15-50% but the difference in price is 300%.
Wow, what a terrible card that Quadro P5000 is...

Funny enough, despite both being Pascal-based, an 8GB P5000 is about $500 more expensive than an 8GB P4000. It's almost as if there were some other mysterious factor differentiating these two cards, besides the VRAM size. Maybe one day we will learn what it may be.
 
I would say, if you're a professional user and there's a price as well as a performance difference (with the more expensive product also being faster), it is only a matter of usage time as to when the price difference amortizes itself. Also, if you're a professional user, you are very likely to know the memory size that's sufficient for your needs. There are 8, 12, 16, 24 and even 32 GiByte options readily available.
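As a rough illustration of that amortization argument, with entirely made-up placeholder numbers (price delta, hourly cost, speedup and GPU-bound share are all hypothetical, not real card data):

```cpp
// Back-of-the-envelope break-even time for a pricier but faster workstation card.
#include <cstdio>

int main() {
    const double price_delta      = 500.0;  // extra cost of the faster card, $
    const double hourly_cost      = 60.0;   // loaded cost of the professional's time, $/h
    const double speedup_fraction = 0.25;   // fraction of GPU-bound time saved, e.g. 25%
    const double gpu_bound_share  = 0.5;    // share of the workday actually waiting on the GPU

    // Hours of work needed before the saved time pays for the price difference.
    const double saved_per_hour = hourly_cost * speedup_fraction * gpu_bound_share;
    std::printf("Break-even after about %.0f working hours\n", price_delta / saved_per_hour);
    return 0;
}
```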

Really? Some very knowledgeable people who probably know a lot more than you seem convinced the card will sell out for sure.
Ryan Smith @ Anandtech said:
To date, AMD has not yet said anything further about the launch since last month’s Computex unveil, however it appears that either AMD is opting to quietly release the **sure to sell out** cards, or some of their retailers have jumped the gun, as listings for both models have begun to show up.
There's the option of interpreting that bolded (by you) part as "low volume". Now, large demand is obviously a function of target audience. And here's what Ryan said about that:
Ryan Smith @ Anandtech said:
All messaging so far from AMD is that these are a low volume part meant for customers to evaluate Vega as early as possible, so it’ll be interesting to see where AMD goes from here.

So, maybe you're all talking about the same thing, but come from different expectations?


*edit: Ah, beat by several, as always. sorry for re-posting earlier points then.*
edit2: typos, urls and tags fixed.
 
The whole Vega release process is embarrassing, even more than it was in Polaris' case. The FE Vega itself seems to exist solely for the purpose of having a Vega release in 1H '17. They haven't been present in any market segment above midrange since Pascal/May 2016. So not actively pushing "the gaming Vega" seems awkward at best.

The whole thing reminds me of the debut of the HD2900XTX. Delays, expectations, no presence, scary power figures, etc.
 
The whole thing reminds me of the debut of the HD2900XTX. Delays, expectations, no presence, scary power figures, etc.
Huh? AMD had the fastest card out there (or at least fought well in the very high end too) before the 8800 GTX came, which is considered "same generation" as the HD 2900 even though the HD 2900 came later, just like the GeForce FX & R9700 are considered "same generation" even though the GeForce FX was late.
 
The whole Vega release process is embarrassing, even more than it was in Polaris' case. The FE Vega itself seems to exist solely for the purpose of having a Vega release in 1H '17. They haven't been present in any market segment above midrange since Pascal/May 2016. So not actively pushing "the gaming Vega" seems awkward at best.

The whole thing reminds me of the debut of the HD2900XTX. Delays, expectations, no presence, scary power figures, etc.
I agree, this launch reminds me of HD2900 but at the same time it didn't stop me from buying two HD2900Pro's at that time.
Vega might not turn out the best at everything, but I'm sure it will be a groundbreaking base for future innovative products.

Only a few more weeks to find out what's what.
 
Each time there's some delay, someone pulls out the 2900 story lol... I think in this case everyone can imagine the problem is coming from HBM2. The Frontier Edition is surely the way they have found to release it without having too many "stock" problems, at a $1200+ release price.

Of course, some could say they just have to put in a standard memory controller and cache, but in the case of Vega that would seemingly mean completely redoing the whole chip.


I had 2x 2900XTX in CFX. Even if they had some problems in games with AA (unbalanced ROPs), their raw power was unbeatable when overclocking on H2O, let alone a sub-zero system. They were so fun to tweak.

Those two replaced my previous X1950XTX + 1900XTX CFX system (H2O too), after an X1800XT PE 512MB, 2x 6600GT SLI, and a 9800/9700 Pro Maya edition.

The 2900XTX was a risky architecture: a 512-bit ring bus and a clear architectural bet on game technology that only arrived five years later. Their prediction was wrong, or just too early.
 
Going forward with DX12/Vulkan and GPU driven rendering, drivers may not matter. Compilers will still make a bit of a difference. Linux for example is going towards a unified driver, including even the Windows foundation. So hiding performance likely won't be easy in the future. Anyone with a card will know performance and the past development models are likely invalid. I could see an argument for selling "engineering samples" with new features and limited performance profiles to accelerate development. That's historically been an issue for AMD where they had more advanced features, but the market took a while to catch up. Low level APIs facilitate that to some degree as cards need more specific code paths.
Drivers will continue to matter for DX12 and Vulkan and compilers matter a lot too.
 
Drivers will continue to matter for DX12 and Vulkan and compilers matter a lot too.
If a shader is responsible for submitting draw calls, and the card can run a loop with no interaction from the CPU and driver, how do you propose the drivers will be making a huge difference? Even with async the scheduling is ideally hardware driven. Beyond setup and configuration the drivers shouldn't be doing much work in a GPU driven scenario. That's the whole point of GPU driven rendering, beyond the GPU scaling better on lots of objects. I did say compilers will make a difference.
 
how do you propose the drivers will be making a huge difference?
Well, developers state otherwise:

*[slide image from the presentation]*


The idea behind new-generation "close-to-the-metal" APIs such as DirectX 12 and Vulkan has been to make graphics drivers as irrelevant to the rendering pipeline as possible. The speakers contend that the drivers are still very relevant and that, with the advent of the new APIs, their complexity has actually gone up: in memory management, manual multi-GPU (custom, non-AFR implementations), the underlying tech required to enable async compute, and the various performance-enhancement tricks driver vendors implement to make their brand of hardware faster. That in turn can mean uncertainty for the developer in cases where the driver overrides certain techniques just to squeeze out a little bit of extra performance.

We've seen this happen in DX12 games already. AMD drivers helped fix their bad performance on Gears Of War Ultimate Edition, NVIDIA increased their DX12 performance in Hitman significantly through drivers, they did so to a moderate degree in Doom as well.
 
You mean where they state "on some drivers"? Just because Nvidia lacks the hardware schedulers to do it? All that means is the hardware won't be able to run the more efficient paths.

We've seen this happen in DX12 games already. AMD drivers helped fix their bad performance on Gears Of War Ultimate Edition, NVIDIA increased their DX12 performance in Hitman significantly through drivers, they did so to a moderate degree in Doom as well.
And these games were using the features that haven't been released how exactly? Only a few developers have been experimenting with the capability and XB1/Scorpio seem to be the current testbed. Indirect execution is still relatively new, but will likely be based on upcoming DX12/Vulkan releases.
 
Close to the metal usually makes it possible for crack developers to extract better performance. It also gives lesser developers more rope to hang themselves with (both in terms of crashes and bad performance).
Anytime that happens, there'll be opportunities for a driver to step in and fix things.
If we accept, for the sake of argument, the premise that DX12 is close to the metal, what makes it so special that it won't be the case there?
 
You mean where they state "on some drivers"?
Actually, here they are referring to drivers from both vendors; you can listen to the whole lecture yourself. AMD drivers also require big efforts to tune Async Compute, since they rely on it far more than NVIDIA.
Just because Nvidia lacks the hardware schedulers to do it? All that means is the hardware won't be able to run the more efficient paths.
Again with this fallacy? I thought we put it to rest a long time ago. This has nothing to do with a "hardware scheduler".
 
If a shader is responsible for submitting draw calls, and the card can run a loop with no interaction from the CPU and driver, how do you propose the drivers will be making a huge difference?
Except that in neither the DX12 nor the Vulkan case is the shader responsible for submitting draw calls. Shaders can prepare data for the draw calls, and that data doesn't have to be shipped back to the CPU side, but the actual draw is still dispatched by the CPU.
 
Except that in neither the DX12 nor the Vulkan case is the shader responsible for submitting draw calls. Shaders can prepare data for the draw calls, and that data doesn't have to be shipped back to the CPU side, but the actual draw is still dispatched by the CPU.
This exactly. In fact, in DX12 underneath every ExecuteCommandLists call is a GPU pipeline flush, which by its very nature requires CPU intervention. So the driver is more involved than you think.
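To make that division of labour concrete, here's a minimal, API-agnostic C++ sketch of the model described above: a (simulated) GPU-side culling pass fills an indirect-argument buffer, but the CPU/driver side still records and submits the indirect call. The struct layout and function names are illustrative stand-ins, not real D3D12 or Vulkan entry points.

```cpp
// Conceptual sketch: "GPU-driven" rendering still ends in a CPU-side submission.
#include <cstdio>
#include <vector>

// Roughly what a GPU-visible indirect argument record looks like
// (cf. D3D12's DrawIndexedInstanced arguments or Vulkan's VkDrawIndexedIndirectCommand).
struct DrawArgs {
    unsigned indexCount;
    unsigned instanceCount;
    unsigned firstIndex;
    int      vertexOffset;
    unsigned firstInstance;
};

// Stand-in for a culling compute shader: the "GPU" fills the argument buffer
// and a visible-draw count without any CPU round trip.
void gpu_cull_and_build_args(std::vector<DrawArgs>& args, unsigned& count) {
    count = 0;
    for (unsigned i = 0; i < 1000; ++i)
        if (i % 3 != 0)                       // pretend a third of the objects get culled
            args[count++] = {36, 1, 0, 0, i};
}

// Stand-in for the CPU/driver side: the draw is still *dispatched* from here
// (an ExecuteIndirect / vkCmdDrawIndexedIndirectCount analogue), and submitting
// the command list is where the driver gets involved (flushes, barriers, patching).
void cpu_submit_indirect(const std::vector<DrawArgs>& args, unsigned count) {
    (void)args;
    std::printf("CPU records one indirect call covering %u GPU-built draws\n", count);
}

int main() {
    std::vector<DrawArgs> argBuffer(1000);
    unsigned visible = 0;
    gpu_cull_and_build_args(argBuffer, visible);  // shader prepares the data...
    cpu_submit_indirect(argBuffer, visible);      // ...but the CPU still submits it.
    return 0;
}
```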
 