AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

You are not comparing like for like unless you take functionality into account. Perhaps all the extra transistors are needed for that additional functionality. How do you know the competitors wouldn't end up with even more transistors to hit the same advanced functionality?
It's fair to adjust GPU die size estimates to compensate for differences in functionality.

At launch, a 360mm2 Tahiti performed worse than a 300mm2 GK104, but at least the former had way better FP64 and a 50% larger memory system. And if that had given AMD any traction in the commercial compute space, the cost would have been justified.

But for Vega I just don't see it. Area is consumed by the units with the highest multiples: shaders primarily, then texture units, ROPs, MCs, and caches.

A control or management unit like the HBCC, which doesn't itself manipulate data, isn't the kind of thing that's going to explain a massive die size difference.

So you are betting on some unknown feature to explain the difference. I don't think that makes sense, and it surprises me that after more than a decade on this board, you're saying that it's unfair to compare die sizes of GPUs that are very similar, like an FP16-adjusted GP102 and Vega.

If Vega turns out to have a large amount of FP64 after all, you'd have a point.

If AMD decided to spend 40% more area on a speculative, currently unused feature, betting that it will start making them boatloads of money soon (enough to justify being a year later than the competition), then they made a huge mistake, IMO.

But I don't think any of that is true: Vega's pathetic results can best be explained by an obscure architectural mistake or corner case bug that could not be fixed without a full base spin. They had no choice but to release one of their worst performing chips ever.

I believe future Vegas will return to a performance ratio that's more or less in line with Pascal.
 
TPU preview, trading blows with the 1080:
AMD provided some internal performance numbers from their own testing, so as always take these with a grain of salt. These tests were done on an Intel Core i7-7700K at 4.2 GHz, 16 GB of DDR4-3000 MHz RAM and with the latest available drivers for the AMD and NVIDIA GPUs at the time. Refer to the complete slide deck at the end for full testing information, but there was nothing that caught our eye otherwise. The chosen games were all running on DX12 or Vulkan APIs, and the order of the four GPUs being presented is certainly on purpose but looking purely at the numbers and no more it appears that RX Vega 64 trades blows with the NVIDIA GeForce GTX 1080 at stock settings under WQHD (21:9, 1440p). With no mention on the exact GPU and memory frequencies, we definitely recommend holding off till independently tested reviews are out first.
https://www.techpowerup.com/reviews/AMD/Radeon_RX_Vega_Preview/2.html
 
Anand's write up:

Instead, what we’ve been told is to expect the Vega 64 to “trade blows” with NVIDIA’s GeForce GTX 1080.

http://www.anandtech.com/show/11680...rx-vega-64-399-rx-vega-56-launching-in-august

Also interesting:
the DSBR is not enabled in Vega FE’s current drivers. Whereas we have been told to expect it with the RX Vega launch. AMD is being careful not to make too many promises here – the performance and power impact of the DSBR vary wildly with the software used – but it means that the RX Vega will have a bit more going on than the Vega FE at launch.
http://www.anandtech.com/show/11680...-vega-64-399-rx-vega-56-launching-in-august/3
 
Wondering where most of Vega's transistors went? Just to increase clock speeds!
Talking to AMD’s engineers, what especially surprised me is where the bulk of those transistors went; the single largest consumer of the additional 3.9B transistors was spent on designing the chip to clock much higher than Fiji. Vega 10 can reach 1.7GHz, whereas Fiji couldn’t do much more than 1.05GHz. Additional transistors are needed to add pipeline stages at various points or build in latency hiding mechanisms, as electrons can only move so far on a single clock cycle; this is something we’ve seen in NVIDIA’s Pascal, not to mention countless CPU designs. Still, what it means is that those 3.9B transistors are serving a very important performance purpose: allowing AMD to clock the card high enough to see significant performance gains over Fiji.
http://www.anandtech.com/show/11680...-vega-64-399-rx-vega-56-launching-in-august/3

GCN is really getting old by now. Time for AMD to come up with something a whole lot different.
 
If AMD decided to spend 40% more area on a speculative, currently unused feature, betting that it will start making them boatloads of money soon (enough to justify being a year later than the competition), then they made a huge mistake, IMO.

But I don't think any of that is true: Vega's pathetic results can best be explained by an obscure architectural mistake or corner case bug that could not be fixed without a full base spin. They had no choice but to release one of their worst performing chips ever.

I believe future Vegas will return to a performance ratio that's more or less in line with Pascal.

From Anandtech's write-up, the largest consumer of Vega's additional 3.9B transistors over Fiji is extra drive logic and pipeline stages to reach higher clocks. Not sure what pixie dust fell on Polaris to allow it to get so close without them.

Density-wise, Vega at ~25M transistors/mm2 is a little denser than GP104's ~23M, barely above Polaris at ~24M, and well ahead of Fiji's ~15M, though rather short of doubling it.
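For a quick sanity check, here's the back-of-envelope math behind those densities, using the commonly quoted spec-sheet transistor counts and die sizes (approximate figures, not official measurements):

```python
# Back-of-envelope density check using commonly quoted spec-sheet figures
# (transistor counts and die areas are approximations, not official measurements).
chips = {
    "Vega 10":    (12.5e9, 486),  # transistors, die area in mm^2
    "GP104":      (7.2e9, 314),
    "Polaris 10": (5.7e9, 232),
    "Fiji":       (8.9e9, 596),
}
for name, (transistors, area_mm2) in chips.items():
    print(f"{name}: {transistors / area_mm2 / 1e6:.1f} Mtransistors/mm^2")
# Roughly: Vega 10 ~25.7, GP104 ~22.9, Polaris 10 ~24.6, Fiji ~14.9
```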

It doesn't seem like GP104 needed to bloat its transistor budget just to clock higher than its predecessor, and it doesn't seem like the fundamentals for a higher-performing architecture are there for Vega. It takes a lot of extra transistors just to reach higher clocks, and AMD is apparently just about done improving things once those somewhat higher clocks are in. Is a lack of killer instinct a hardware flaw?

I guess my hedging that the rasterizer could have been partially on was wrong; it's off.
 
But I don't think any of that is true: Vega's pathetic results can best be explained by an obscure architectural mistake or corner case bug that could not be fixed without a full base spin. They had no choice but to release one of their worst performing chips ever.

I believe future Vegas will return to a performance ratio that's more or less in line with Pascal.

I tend to agree. It just seems so odd that the performance/area or performance/watt is so horrid. Something went seriously wrong and they ran out of time to fix it. Now they're polishing a turd.
 
It's fair to adjust GPU die size estimates to compensate for differences in functionality.

At launch, a 360mm2 Tahiti performed worse than a 300mm2 GK104, but at least the former had way better FP64 and a 50% larger memory system. And if that had given AMD any traction in the commercial compute space, the cost would have been justified.

But for Vega I just don't see it. Area is consumed by the units with the highest multiples: shaders primarily, then texture units, ROPs, MCs, and caches.

A control or management unit like the HBCC, which doesn't itself manipulate data, isn't the kind of thing that's going to explain a massive die size difference.

So you are betting on some unknown feature to explain the difference. I don't think that makes sense, and it surprises me that after more than a decade on this board, you're saying that it's unfair to compare die sizes of GPUs that are very similar, like an FP16-adjusted GP102 and Vega.

I never said it was unfair. I'm saying you're not comparing apples to apples. I don't see how it's possible to accurately estimate how much area the others would have to add to implement all the advanced Tier 3 features.
 
Such a disappointing GPU. ~300W on a die larger than GP102 for 1080-level performance? Ouch.

The cards are priced competitively strictly on an FPS basis, but bearing in mind the cost of electricity (and the knock-on effects of high GPU power consumption on system reliability), Vega 64 doesn't make much sense.
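Just to put a rough number on the electricity point, a sketch with purely assumed usage hours and price per kWh (the power figures are the ones quoted in this thread):

```python
# A rough cost-of-electricity sketch. The extra draw is Vega 64's 295 W board
# power minus ~180 W for a GTX 1080 under load; the gaming hours and price per
# kWh are purely assumed for illustration.
extra_draw_w = 295 - 180          # extra power under load, in watts
gaming_hours_per_year = 20 * 52   # assume ~20 hours of gaming per week
price_per_kwh = 0.15              # assumed electricity price in USD
extra_kwh = extra_draw_w * gaming_hours_per_year / 1000
print(f"~{extra_kwh:.0f} extra kWh/year, ~${extra_kwh * price_per_kwh:.0f}/year")
```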

Maybe there's a key IP block on the chip that didn't fab properly? Or maybe they expected much higher clocks in retail silicon?

There has to have been a huge screwup at some point. There's no way these performance figures would have been thought remotely acceptable in the design stages.
 
Such a disappointing GPU. ~300W on a die larger than GP102 for 1080-level performance? Ouch.

The cards are priced competitively strictly on an FPS basis, but bearing in mind the cost of electricity (and the knock-on effects of high GPU power consumption on system reliability), Vega 64 doesn't make much sense.

Maybe there's a key IP block on the chip that didn't fab properly? Or maybe they expected much higher clocks in retail silicon?

There has to have been a huge screwup at some point. There's no way these performance figures would have been thought remotely acceptable in the design stages.
Doesn't seem like a huge power difference looking at reviews of the 1080. Dunno how much money you pay for power, though.
 
From Anandtech's write-up, the largest consumer of Vega's additional 3.9B transistors over Fiji is extra drive logic and pipeline stages to reach higher clocks.
Over 45MB of SRAM across the chip
That would do it. Significant jump over Polaris (~50% more adjusted for CUs) if I did the math right. P100 is around half of that counting registers and L2.
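For what it's worth, a rough check of the P100 comparison, using the publicly listed Tesla P100 figures for register files, shared memory, and L2 (other small on-die structures ignored):

```python
# Rough check of the "P100 is around half of that" claim, using the publicly
# listed Tesla P100 figures: 56 active SMs, 256 KB register file and 64 KB
# shared memory per SM, plus 4 MB of L2 (other small structures ignored).
sms = 56
register_kb = 256 * sms      # ~14,300 KB of register files
shared_kb = 64 * sms         # ~3,600 KB of shared memory
l2_kb = 4 * 1024             # 4,096 KB of L2
p100_mb = (register_kb + shared_kb + l2_kb) / 1024
print(f"P100 on-die SRAM (regs + shared + L2): ~{p100_mb:.0f} MB")   # ~22 MB
print(f"Vega 10's quoted 45 MB is ~{45 / p100_mb:.1f}x that")        # ~2.1x
```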

Maybe there's a key IP block on the chip that didn't fab properly? Or maybe they expected much higher clocks in retail silicon?
AMD did confirm the new DSBR wasn't enabled, so more like a giant IP block that isn't currently doing anything. Just need a better idea where all the SRAM went.
 
Doesn't seem like a huge power difference looking at reviews of the 1080. Dunno how much money you pay for power, though.

The 1080FE consumes ~170-180W under load. In extreme circumstances, it goes to ~200W.

That's about 50% greater power draw for similar performance, if we take the 295W TDP at face value for Vega 64.
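A quick bit of arithmetic behind that ~50% figure, taking the board-power numbers in this thread at face value (assumed figures, not measurements):

```python
# Quick arithmetic behind the "~50% greater power draw" comparison, taking
# the numbers in this thread at face value (assumed figures, not measurements).
vega64_board_power_w = 295   # AMD's stated typical board power for RX Vega 64
gtx1080_load_w = 200         # upper end of the ~170-200 W range quoted above
extra = vega64_board_power_w / gtx1080_load_w - 1
print(f"Vega 64 draws roughly {extra:.0%} more power for similar performance")  # ~48%
```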
 
A 99th-percentile performance comparison, in DX12/Vulkan only, against the 1080, and it is practically a draw... and that with cherry-picked manufacturer data? We all know that min-FPS suck in BF1 under DX12 on NV while DX11 works fine. That is bad, really bad.
 
They aren't releasing more info yet because it's clear the drivers are still WIP.

Power management is still WIP:

Turns out, RX Vega isn’t just a matter of restricting power target, they’re actually doing something for power optimization. We couldn’t get explicit examples at this time. One thing we do know is that the voltage targets change, so voltage checks are at different frequencies than FE, and voltage should be lower. We’d expect that this will align with our findings in the undervolting testing on Vega: Frontier Edition, where power consumption can equalize while improving performance. It’s still AVFS, but just a better tuning profile than FE.

...

A few additional items of note include the power saving features: We spoke with AMD team members at the event and learned definitively that specific power saving features were disabled on Vega: Frontier Edition to just get the thing out the door. RX Vega will run lower power.

Tile-based rasterization is currently disabled:

We also asked AMD’s architects, including Mike Mantor, about whether TSBR was actually disabled in Vega: Frontier Edition or whether it was just a rumor. The architects loosely confirmed that tile-based rasterization was in fact disabled for Frontier Edition’s launch, which we think mostly aligns with statements about pushing the card out in time, and noted that TSBR will be enabled on both Vega FE and RX Vega on launch of RX Vega. We asked about expected performance or power consumption improvements, but were not given any specifics at this time. Wait for launch on that, though – we’ll get that information.

http://www.gamersnexus.net/news-pc/3004-rx-vega-64-and-vega-56-power-specs-price

It's clear that memory bandwidth needs to be addressed, since Vega has 50% higher clocks but less memory bandwidth than Fiji, and Vega 56 will have even less bandwidth since its HBM2 runs at 850MHz or something like that. There is no way that won't completely bottleneck it and make it run even slower than a Fury X unless they get the new stuff all working in drivers. Once it's no longer held back by memory, performance should go up a lot. The current scaling vs Fiji is very poor for the huge core clock increase. In pro tasks, which aren't memory bandwidth heavy, it does way better.
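Rough peak-bandwidth math behind the Fiji comparison; the bus widths and the Fiji/Vega 64 memory clocks are the published specs, while the Vega 56 memory clock is an assumption based on early reports (~800 MHz), not a confirmed number:

```python
# Rough peak-bandwidth math for the HBM comparison. Bus widths and Fiji/Vega 64
# memory clocks are the published specs; the Vega 56 clock is an assumption
# based on early reports (~800 MHz), not a confirmed number.
def hbm_bandwidth_gb_s(bus_bits, mem_clock_mhz):
    # HBM is double data rate: 2 bits per pin per clock
    return bus_bits * 2 * mem_clock_mhz * 1e6 / 8 / 1e9

print(f"Fiji    (4096-bit HBM1 @ 500 MHz):  {hbm_bandwidth_gb_s(4096, 500):.0f} GB/s")   # 512
print(f"Vega 64 (2048-bit HBM2 @ 945 MHz):  {hbm_bandwidth_gb_s(2048, 945):.0f} GB/s")   # ~484
print(f"Vega 56 (2048-bit HBM2 @ ~800 MHz): {hbm_bandwidth_gb_s(2048, 800):.0f} GB/s")   # ~410
```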
 
The 1080FE consumes ~170-180W under load. In extreme circumstances, it goes to ~200W.

That's about 50% greater power draw for similar performance, if we take the 295W TDP at face value for Vega 64.

You're putting the 1080 FE a bit low there; from what I can see it's around 200W or more. But anyway, I'm interested in how the $400 card performs. It has a lower power draw, and if what Anand has said about drivers is true we could still see some decent gains.

The 1080 series has had a long time for driver updates, while this chip is new.
 
"Next-gen" CUs... "Poor Volta"... yea sure.

This card surely takes the prize for the worst marketing bullshit ever. It is just NetBurst-ed GCN with low bandwidth.

Given that AMD must have had a super low R&D budget for Vega, they should have marketed it differently. No ridiculous events, teasers-of-teasers, trailers, etc.
 