AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

For Pascal, I seriously have no doubt it will happen again, probably sometime after Volta gets out in the consumer space.
With the seemingly emulated Tier3 resource binding that was just released and the async issues on Pascal, that seems a fairly safe bet.

I'd say it's fair to call it late by a year when you have gone an entire generation of GPUs without any answer to your competitor's Enthusiast/High end lineup.
Why would Vega be late when they skipped releasing a high end Polaris? Might as well compare the performance of Nvidia's (nonexistent) CPU to Ryzen.
 
It's unordered at the level of the API, whereas there is a causal dependence that is empirically detected in this test.
Actually, the API defines that there isn't a causal dependence (some useless heuristic may falsely identify it as one) and the test only measures a correlation. ;)
But I don't want to start splitting hairs; maybe we should leave it at that. In the end it's a somewhat pointless discussion. We both agree that binning isn't active in that test, and we will need to wait for the RX Vega launch anyway (hopefully not longer) to see if there are some significant performance improvements from the driver side.
 
Why would Vega be late when they skipped releasing a high end Polaris? Might as well compare the performance of Nvidia's (nonexistent) CPU to Ryzen.

They skipped a high end Polaris... which Vega was meant to fill, no? AMD needed Vega last year. They are later than they want to be... and Nvidia is a ways ahead of them at this point.
 
They skipped a high end Polaris... which Vega was meant to fill, no? AMD needed Vega last year. They are later than they want to be... and Nvidia is a ways ahead of them at this point.
Your logic fails on the fact that Vega is leaps and bounds different from Polaris - if they planned to have Vega fill in quickly after Polaris, why would they have bothered creating Polaris in the first place instead of just filling the whole line-up a couple of months later with Vega-based chips? We know they can use different memory controllers on the same architecture, so HBM wouldn't be an issue for the cheaper models.
 
They skipped a high end Polaris... which Vega was meant to fill, no? AMD needed Vega last year. They are later than they want to be... and Nvidia is a ways ahead of them at this point.
Eventually fill, yes. They cut costs by removing cards from development while focusing on Zen. Remember, R&D costs for IP and product development are separate. They simply prioritized and chopped a tier to cut costs.

Companies are always later than they want to be. Except maybe Intel, which was confident in its CPU lineup. Nvidia isn't ahead either if you consider Vega has a superior feature set. We still need to wait for Volta to see how that pans out, as well as for drivers for a valid comparison even to Pascal. All these current benchmarks are seemingly pointless and unrepresentative of the architecture.
 
Your logic fails on the fact that Vega is leaps and bounds different from Polaris - if they planned to have Vega fill in quickly after Polaris, why would they have bothered creating Polaris in the first place instead of just filling the whole line-up a couple of months later with Vega-based chips? We know they can use different memory controllers on the same architecture, so HBM wouldn't be an issue for the cheaper models.
Ok, I get that... however, my original point was that I'm not necessarily looking at it as AMD being late relative to their own plans. I'm saying it's fair to say AMD is late relative to where Nvidia is and will be, by around a year. We've had this current level of performance for about a year now. Now, with that said, I'm not in any way saying that Vega FE gaming benchmarks should be taken as RX Vega performance. It will most certainly be more optimized and improved for the RX launch. Looking at the workstation benchmarks the card is looking pretty good, and price could make it great. Of course it's too early to tell, but if RX Vega slots in between the 1080 and 1080 Ti, it's still pretty late. The card will be bought up by miners anyway though, and prices will go sky high. :(
 
Didn't the quote from Reddit start right here with Rys's post, and then go through an internet version of broken telephone:
older->"legacy!"->"Fury drivers!!!"
It started before Rys mentioned the "older" driver. The first FE benchmark was the Firestrike one, which was reported as driver 17.1.1, even though the actual internal driver version definitely indicated a very recent driver. Combine that with the further benchmarks basically showing FE performing identically to a Fury at a higher clock, plus Rys' mention here, and it turned into "Vega FE is using an old Fury X driver" - which is nonsense.
 
At launch the HD 7970 was some 10% faster than the GTX 580. Fast forward a couple of years to the GTX 780 Ti launch and the HD 7970 already beats the GTX 580 by 20% (1920x1080/1200, TPU).
What kind of contrived comparison is that? You compare GPUs within the same generation, not a generation above! The GTX 580 should be compared to the 6970, not the 7970.

And this kind of comparison is moot anyway: GPUs will compare differently across generations (even within the same company) as games consume more VRAM and shift their bottlenecks to different aspects. I can force a similar comparison and come up with the inverse of your conclusion. It's still moot.

Besides, a GPU which is 30% faster in 2012 and only 15% faster in 2016 is still the better choice. Kepler is different in that it regressed in performance considerably against its counterparts.
 
It started before Rys mentioned the "older" driver. The first FE benchmark was the Firestrike one, which was reported as driver 17.1.1, even though the actual internal driver version definitely indicated a very recent driver. Combine that with the further benchmarks basically showing FE performing identically to a Fury at a higher clock, plus Rys' mention here, and it turned into "Vega FE is using an old Fury X driver" - which is nonsense.
We're clearly talking about two different things; the one I'm referring to didn't talk about old drivers, just "Fiji drivers".
 
Nvidia isn't ahead either if you consider Vega has a superior feature set.

Really? I haven't seen a benchmark yet demonstrating a "superior" feature set. Perhaps we should wait until these checkboxes are tested before claiming it superior. The sad reality is that these features will most likely go unused on both cards for the vast majority of games due to console deficiencies.
 
We know they can use different memory controllers on the same architecture, so HBM wouldn't be an issue for the cheaper models.
You obviously know more than me, for example, given that you mean HBM and GDDR5 rather than GDDR5 and DDR3. Do you have any link where I can enlighten myself?

With the seemingly emulated Tier3 resource binding that was just released and the async issues on Pascal, that seems a fairly safe bet.
I must have missed this as well; can you provide a link with an explanation?
 
The one I'm referring to didn't talk about old drivers, just "Fiji drivers".
So are we to believe AMD launched Vega FE for content creators, VR pioneers and game developers with Fiji-class drivers, so that they can do nothing with Vega FE at all, not able to use a single new feature it supports? @sebbbi might as well give the card away now that it's practically useless.
 
So are we to believe AMD launched Vega FE for content creators, VR pioneers and game developers with Fiji-class drivers, so that they can do nothing with Vega FE at all, not able to use a single new feature it supports? @sebbbi might as well give the card away now that it's practically useless.
Well, to be fair, if that were the case, it would be impressive, since the card is already a good/decent content creation/workstation card. I wouldn't say it's useless. But yeah, assuming the drivers thing is true, maybe they can't get their Vega code path working properly and were forced to release the card, and thus fell back to the Fiji code path?

AMD was likely caught in a situation where they had to release the card, and workstation performance was sufficient that they released it with the drivers in the state they are in, while at the same time downplaying any other talk (i.e. staying relatively silent) as they work on getting the drivers where they need to be for the RX launch.
 
Workstation performance is as bad as gaming performance. Getting beaten by a GTX 1080 with Quadro drivers doesn't look better...
And sometimes the card loses to a GTX 1060 (with 1024 cores) with Quadro drivers, too.
 
Workstation performance is as bad as gaming performance. Getting beaten by a GTX 1080 with Quadro drivers doesn't look better...
And sometimes the card loses to a GTX 1060 (with 1024 cores) with Quadro drivers, too.

Can you link me to this? I'm not doubting you, I just haven't seen them yet.
 
Really? I haven't seen a benchmark yet demonstrating a "superior" feature set. Perhaps we should wait until these checkboxes are tested before claiming it superior. The sad reality is that these features will most likely go unused on both cards for the vast majority of games due to console deficiencies.
Rys said it was top tier in everything. Pascal isn't listed as top tier in everything, not even for conservative raster, which they pushed upon release. The features may not all see immediate use, but that doesn't invalidate a superior feature set. Performance is irrelevant here, and yes, we still need benchmarks to gauge effectiveness.

I must have missed this as well; can you provide a link with an explanation?
Not a good one, hence "seemingly". Just informed discussion elsewhere about a new feature Nvidia quietly released; very little actual information, or testing for that matter, that I've seen. Even Fermi, to my understanding, got Tier3, and it shouldn't have the hardware to really use the feature efficiently. Someone was trying to get a response from Nvidia last I checked, but it's a holiday weekend in the States. I'll let you know if I hear more.
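
For anyone who wants to see what their own driver actually advertises, here's a minimal sketch (my own illustration, assuming the Windows 10 SDK, not something from that discussion). Keep in mind that CheckFeatureSupport only reports the tier the driver claims; it can't tell you whether that tier is implemented natively or emulated, which is exactly the open question here:

```cpp
// Query the D3D12 feature tiers being discussed (resource binding,
// conservative raster). Reported tiers reflect what the driver advertises,
// not how the hardware actually implements them.
#include <d3d12.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

int main() {
    ID3D12Device* device = nullptr;
    // Default adapter, minimum feature level 11_0.
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 __uuidof(ID3D12Device),
                                 reinterpret_cast<void**>(&device)))) {
        return 1;
    }

    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                              &opts, sizeof(opts)))) {
        // ResourceBindingTier is 1..3; Tier3 is what the new driver
        // reportedly started exposing on Pascal and older parts.
        std::printf("ResourceBindingTier:           %d\n",
                    static_cast<int>(opts.ResourceBindingTier));
        // 0 means conservative rasterization is not supported at all.
        std::printf("ConservativeRasterizationTier: %d\n",
                    static_cast<int>(opts.ConservativeRasterizationTier));
    }
    device->Release();
    return 0;
}
```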
 
Not a good one, hence "seemingly". Just informed discussion elsewhere about a new feature Nvidia quietly released; very little actual information, or testing for that matter, that I've seen. Even Fermi, to my understanding, got Tier3, and it shouldn't have the hardware to really use the feature efficiently. Someone was trying to get a response from Nvidia last I checked, but it's a holiday weekend in the States. I'll let you know if I hear more.
Thank you - even though I'd love to see and follow that informed discussion myself, I hope you will relay important updates to us normal people here. :)
 
Why would Vega be late when they skipped releasing a high end Polaris?
Rys said it was top tier in everything. Pascal isn't listed as top tier in everything, not even for conservative raster, which they pushed upon release. The features may not all see immediate use, but that doesn't invalidate a superior feature set. Performance is irrelevant here, and yes, we still need benchmarks to gauge effectiveness.
So it's all conjecture at this point.
 
Rys said it was top tier in everything. Pascal isn't listed as top tier in everything, not even for conservative raster, which they pushed upon release. The features may not all see immediate use, but that doesn't invalidate a superior feature set. Performance is irrelevant here, and yes, we still need benchmarks to gauge effectiveness.
You also need a product on the shelves to sell. It's like how AMD's DX12 lead was meaningless because they couldn't capitalize on it until Nvidia had better support for it. AMD's feature set can be great on paper, but if Nvidia brute-forces things at a quicker rate, then I don't see how Nvidia isn't still ahead at that point.
 