AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

And AMD has been saying since the beginning that FE in gaming mode is not the same as RX
In what way? What does that mean exactly? It's the same chip, clocked at 1650 MHz. I'm sure a lot of people are curious as to what the RX Vega will do differently from the FE in "gaming" mode. It certainly won't be clock/memory speeds.
 
And AMD has been saying since the beginning that FE in gaming mode is not the same as RX

And Nvidia have been saying the Titan is not meant for gaming, but it is the best gaming card out there. We won't know what's what until the RX Vega release. Drivers should be more mature by then at least, that's for sure.
 
In what way? What does that mean exactly? It's the same chip, clocked at 1650 MHz. I'm sure a lot of people are curious as to what the RX Vega will do differently from the FE in "gaming" mode. It certainly won't be clock/memory speeds.
RX Vega could be a liquid cooled only card (like Fury X), with stable clocks and a newer driver. This could close the gap to the 1080 Ti.
 
RX Vega could be a liquid cooled only card (like Fury X), with stable clocks and a newer driver. This could close the gap to the 1080 Ti.
Sounds expensive :cry: How much was Fury X when it was first released with the AIO?
 
In what way? What does that mean exactly? It's the same chip, clocked at 1650 MHz. I'm sure a lot of people are curious as to what the RX Vega will do differently from the FE in "gaming" mode. It certainly won't be clock/memory speeds.
We'll see in about a month.
What things does the RX Vega have over the Radeon Vega FE that would make it worth the extra wait?
Raja Koduri: RX will be fully optimized gaming drivers, as well as a few other goodies that I can’t tell you about just yet….But you will like FE too if you can’t wait:)
 
We'll see in about a month.
Yep, and looking forward to it. People's expectations on those "goodies" went up in a big way after this FE release :)

I'm just curious on what it could possibly be since it can't be related to the actual chip itself.
 
I remember AMD saying it works without application support. And what would that "hidden bottleneck" be?
I'd hazard a guess at register pressure if part of the design wasn't working yet. Consider this Nvidia paper that came up a while back. Using a register cache they lowered RF energy by 36% and accesses by 40-70%. That would be compiler/driver driven and could be leaving a lot of performance on the table. Lots of references in the drivers to scratch registers and a new memory hierarchy. All registers seem to be backed by memory as well in drivers.

https://www.cs.utexas.edu/users/skeckler/pubs/micro11.pdf
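For a rough feel of why a register cache reduces RF traffic, here's a toy simulation. This is not AMD's or NVIDIA's actual design: the cache size, LRU policy, and synthetic access trace are all assumptions, chosen only to show that a small cache in front of the register file absorbs most operand reads when a few registers are "hot":

```python
from collections import OrderedDict
import random

def simulate_register_cache(trace, cache_slots):
    """Count main register file (RF) accesses when a small LRU
    cache sits in front of it. Each hit avoids one RF access."""
    cache = OrderedDict()               # reg id -> None, ordered by recency
    rf_accesses = 0
    for reg in trace:
        if reg in cache:
            cache.move_to_end(reg)      # cache hit: no RF access
        else:
            rf_accesses += 1            # miss: fetch operand from the RF
            cache[reg] = None
            if len(cache) > cache_slots:
                cache.popitem(last=False)  # evict least recently used
    return rf_accesses

random.seed(0)
# Synthetic trace: shader code tends to reuse a few "hot" registers
# (80% of reads here), with occasional reads of colder ones.
trace = [random.choice(range(8)) if random.random() < 0.8
         else random.choice(range(8, 64)) for _ in range(10_000)]

baseline = len(trace)                   # without a cache, every read hits the RF
with_cache = simulate_register_cache(trace, cache_slots=6)
print(f"RF accesses avoided: {1 - with_cache / baseline:.0%}")
```

The access-reduction figure depends entirely on the locality of the trace, which is why the paper's 40-70% range is workload-dependent; the point is that the savings are driven by the compiler's register allocation, i.e. software, not a silicon change.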

In what way? What does that mean exactly? It's the same chip, clocked at 1650Mhz. I'm sure a lot of people are curious as to what the RX Vega will do differently than the FE in "gaming" mode. It certainly won't be clock/memory speeds.
FE had ECC with the 8-Hi stacks. Faster memory without it would seem possible, and it should run faster with only 4-Hi. That could be 5-10% more bandwidth.

Water cooling would seem an obvious gain if they all have loops. So far it just looks like software issues, not hardware.

Stickers, a Vega baseball cap, a 9" statue of Ruby, and a cloth map of the die layout?
¯\_(ツ)_/¯
Hot sauce.
 
  • AMD is voltage binning the chips to improve yields. That means power consumption suffers, and to get a proper median distribution and representative power data we need a chip that has been in line production for a couple of weeks already. Polaris is the proof: you've clearly seen the power difference between a day-1 card and a card produced a year later. It doesn't matter that 14nm is "known"; the yield curve for a new chip only goes up over time, irrespective of process maturity. You can safely bet that the FE cards have the earliest silicon out of the fab.
  • AMD is eating the yields, throwing away bad chips or keeping them for an even worse SKU. In that case the power we see would actually be representative and the architecture is just disappointing.
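The first bullet's binning argument can be sketched with a toy Monte Carlo model. Everything here is assumed for illustration (the target voltage, the sigmas, the V² power scaling): each chip needs a slightly different voltage to hit the target clock, and an immature line has a wider spread, so more chips ship at elevated voltage and the average power goes up:

```python
import random

def mean_power(voltage_sigma, v_target=1.10, n=100_000, seed=1):
    """Toy model: dynamic power scales roughly with V^2. A wider
    (earlier, less tuned) voltage distribution means more chips are
    shipped at a higher voltage, raising the average power draw."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        v_needed = rng.gauss(v_target, voltage_sigma)
        v_set = max(v_needed, v_target - 0.05)  # don't undervolt below a floor
        total += (v_set / v_target) ** 2        # power relative to nominal
    return total / n

early = mean_power(voltage_sigma=0.06)   # early, immature production line
mature = mean_power(voltage_sigma=0.02)  # after weeks of binning data
print(f"early silicon draws ~{(early / mature - 1):.1%} more power on average")
```

The absolute numbers are meaningless; the shape of the argument is the point, and it matches what was observed between day-1 and year-later Polaris cards.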

How long ago did AMD demo Vega? Six months or so? I suppose they could be using the earliest steppings, but I think they've had time for respins and for sampling the process over time, as well as a number of samples per wafer start.

The second option seems like it would be happening anyway. There are Vega chips without a full CU count, for example. A salvage SKU collecting chips that have defects or do not meet the parameters of a full chip would be out there.
AMD's products, going back generations now, have been getting progressively dodgy in their first-iteration characterization, fan control, and power management. Ryzen was decent, but Vega seems to be following the trend. (edit: although there were iffy things like the X chips misreporting temps)
 
RX Vega could be a liquid cooled only card (like Fury X), with stable clocks and a newer driver. This could close the gap to the 1080 Ti.
Perhaps. It's good at compute, and I can't imagine AMD would be so dumb as to market the hell out of this architecture knowing all along this would be the final gaming performance. They would have modelled its projected performance in the lab, with results probably 18 months to 2 years ago. Something must be missing; if not, Raja might be on his way out, as this would be the worst GPU launch in history, or very close to it.
 
Perhaps. It's good at compute, and I can't imagine AMD would be so dumb as to market the hell out of this architecture knowing all along this would be the final gaming performance. They would have modelled its projected performance in the lab, with results probably 18 months to 2 years ago. Something must be missing; if not, Raja might be on his way out, as this would be the worst GPU launch in history, or very close to it.
It's possible they projected a GP104 competitor, expecting GP102 to perform like it. Could be that NVIDIA's clockspeed boost from Maxwell to Pascal was beyond their expectations.
 
I have to wonder if Vega was meant to compete with the 1080 and win. Maybe the 1080 Ti and price cuts surprised AMD. The current 1080 price point is painful for a larger chip with more expensive memory. Though "poor Volta" makes no sense in this context.

It looks like AMD is forced to create a rather exotic product to be competitive with the 1080 Ti. AMD's profit on Vega might be slim even if it ends up being a commercial success.
 
FE had ECC with the 8-Hi stacks. Faster memory without it would seem possible, and it should run faster with only 4-Hi. That could be 5-10% more bandwidth.
I wonder how much voltage and therefore heat the 8-hi generates compared to 4-hi? The package will be identical but 4-hi instead. Was there ever any Fury tests to determine contribution to overall tdp from the HBM stacks?
 
I wonder how much voltage and therefore heat the 8-hi generates compared to 4-hi? The package will be identical but 4-hi instead. Was there ever any Fury tests to determine contribution to overall tdp from the HBM stacks?
Voltage is probably about the same. Heat/TDP increased, but it's fairly insignificant compared to the GPU. The real issue was the stacked dies insulating each other: heat from the bottom die has to travel through 7 (8?) layers of silicon before hitting a cooler. Say you lose 2C (random guess) per die; that means you could have a 16-degree delta between the two ends of the stack, with four-high being half that. If the GPU is running at 85C, those HBM stacks nearby are probably getting warm. Current driver errata mention random crashes, so that could be the culprit in addition to immature drivers etc.

The actual hardware looks to be performing reasonably, although there could still be a bug we haven't seen. I haven't seen many overclocking results, but it seems plenty capable of taking power, and peak clocks are as expected. The definite gain would simply be from not having ECC with 4-Hi, if that's an option; that's a direct 5-7% bandwidth hit. I personally wouldn't mind seeing ECC on consumer cards, even if it costs some bandwidth. HBM2 clocks seem surprisingly good so far.
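The per-layer arithmetic works out as follows. Note the 2 °C-per-layer figure is the poster's explicitly random guess, not a measured value, and real thermal gradients won't be uniform:

```python
def stack_delta(layers, loss_per_layer_c=2.0):
    """Temperature rise from the cooler-facing die down to the bottom
    die, assuming a fixed loss per silicon layer (a rough guess)."""
    return layers * loss_per_layer_c

gpu_temp = 85.0  # deg C, the nearby GPU temperature from the post
for layers in (8, 4):
    delta = stack_delta(layers)
    print(f"{layers}-Hi stack: ~{delta:.0f} C top-to-bottom delta, "
          f"so the hottest die runs ~{delta:.0f} C above the cooled face")
```

Under these assumptions an 8-Hi stack sees a 16 °C delta and a 4-Hi stack half that, which is the whole basis for expecting 4-Hi parts to tolerate higher memory clocks.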
 
The actual hardware looks to be performing reasonably, although there could still be a bug we haven't seen. I haven't seen many overclocking results, but it seems plenty capable of taking power, and peak clocks are as expected. The definite gain would simply be from not having ECC with 4-Hi, if that's an option; that's a direct 5-7% bandwidth hit. I personally wouldn't mind seeing ECC on consumer cards, even if it costs some bandwidth. HBM2 clocks seem surprisingly good so far.

A GPU using HBM2's ECC option wouldn't lose bandwidth or capacity like it would have with GDDR5, assuming the HBM device type was built for ECC.
Nvidia's ECC-enabled GPUs opted for ECC-native HBM2.

It's not stated what AMD has chosen as far as devices go, but there's no sign that ECC is enabled for Vega FE.
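The distinction matters for the bandwidth argument: inline ECC (the approach used with GDDR5) stores check bits in ordinary data memory, eating user-visible capacity and bandwidth, while an ECC-native HBM2 device carries them in dedicated extra storage. A back-of-the-envelope comparison; the 8-check-bits-per-64-data-bits ratio is the standard SECDED figure, and the rest is a simplification that ignores command overheads:

```python
def inline_ecc_overhead(data_bits=64, check_bits=8):
    """Fraction of user-visible capacity (and roughly bandwidth)
    consumed when SECDED check bits are stored inline in normal
    data memory, GDDR5 style: 8 check bits per 64 data bits."""
    return check_bits / data_bits

def sideband_ecc_overhead():
    """ECC-native HBM2 keeps check bits in dedicated extra cells,
    so user capacity and bandwidth are untouched."""
    return 0.0

print(f"inline ECC (GDDR5-style): {inline_ecc_overhead():.1%} of capacity")
print(f"sideband ECC (ECC-native HBM2): {sideband_ecc_overhead():.1%}")
```

So if Vega FE were using ECC-native stacks, turning ECC off for RX Vega would not by itself free up any bandwidth, which undercuts the earlier "5-7% from dropping ECC" estimate.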
 
It's possible they projected a GP104 competitor, expecting GP102 to perform like it. Could be that NVIDIA's clockspeed boost from Maxwell to Pascal was beyond their expectations.
IF this was 2016 then yes, I would agree, but not H2 2017! Also, the die size is massive compared to GP104, so no, I don't think it was intended as a competitor to the GTX 1080 (probably a late competitor to the original Pascal Titan).
I do agree, however, that they underestimated Pascal's clock speed increase and lost a lot of efficiency in Polaris, and maybe Vega, by trying to clock higher than originally forecast.

I also think this was supposed to be Q4 2016 or Q1 2017 according to the original roadmap I saw, and was perhaps pushed back because of HBM2, and then again for a respin and driver work in response to the GTX 1080 Ti.
That "poor Volta" trolling now looks ridiculous; Volta will mop the floor with this.
 