AMD Vega Hardware Reviews

I can understand if you think I'm biased based on what I've posted in this thread, but in reality I am super disappointed with AMD. Based on everything they've been saying over the last few months, plus common sense and extrapolating from Fiji performance, I thought there was no way they could fail this hard.
Not in disagreement here, but the question remains why performance is where it is.

BTW I only wanted you to cool it with the outrageous claims of dramatic future performance increases that may or may not appear.
I get that, but I don't view the claims as outrageous for the reasons I listed. I normally run Linux, and seeing large gains from drivers isn't exactly uncommon. I've been tracking AMD's Vega commits for that reason. The 2MB page change, which should only affect compute, was quoted at 10-15%, sometimes more, across most workloads. AMD has been working on those drivers really hard, and that change is only now hitting git.

Point being there are a few cases, FP16 being one, that I legitimately feel would put Vega close to Ti. The power issues are something weird, probably a result of driving all that SRAM too hard. Still no idea where that 45MB of SRAM went, and it may be the most significant piece of the puzzle we've been overlooking. In compute, Vega is beating the 1080 Ti in a number of tests, so the performance is there. For graphics I'm still wondering about TBDR, not DSBR. Fully enabled, it would address Vega's current shortcomings and would make sense of some unexplained design choices. That, or these chips were designed with giant APUs and low power in mind, where they could do rather well.

There is, but when you're late like they are, by the time your performance starts looking favorable against the competition the competition will have introduced new products. They really aren't in a good place, ATM.
Is that before or after the current gold rush ends? If miners bought all the cards at above MSRP, does current gaming performance or that in a few months matter more?

AMD probably knew this and couldn't produce drivers that are actually able to bypass all the bottlenecks.
Going off some Linux driver commits related to their compiler, which is likely used for all shaders, they have some features temporarily disabled due to an unforeseen bug. The comments say things like "would like to have" and "disabled until issue resolved", so there are some significant issues lingering. That's in regard to VGPR indexing, so it could be the bottleneck we're seeing.
 
Does anyone know how much integrated memory AMD's previous designs (Hawaii, Fiji and Ellesmere) have, to help put the 45MB number in some perspective?

BTW does Vega 10 have the Island-theme code name or is it truly the end of an era?
 
Then again, we do have historical evidence of AMD cards gaining more performance over time compared to GeForces from every generation since GCN was first introduced. Vega being the biggest change to the architecture, but still essentially GCN, is there a reason not to expect at least a similar performance uplift as previous generations, if not bigger due to the bigger-than-usual changes?

Fury X's performance got worse over time. Maybe AMD will drop support for Vega too when Navi arrives?
 
You could just as easily make the argument that all the low hanging fruit has been picked and future increases the likes of what we saw with Tahiti and Hawaii are unlikely.
But we've seen it with Fiji and Polaris too.

Fury X's performance got worse over time. Maybe the same will happen with Vega when Navi arrives?
It did?
Vs the 980 Ti it has gained a lot, at least based on TPU numbers; at launch it was behind the 980 Ti at 1440p & 4K, now it's ~4% and ~10% faster.
 
BTW does Vega 10 have the Island-theme code name or is it truly the end of an era?
Somewhere, some driver guys said they did away with it, replacing the islands with GFX numbers because they kept confusing the islands. In the Linux drivers they've been renaming from SI to GFX8, etc., with Vega being GFX9 or just Vega. That may only be a Linux driver thing, but it's a start.
 
Digital Foundry tested the liquid-cooled version at 4K; it's basically equal to the 1080.


http://www.eurogamer.net/articles/digitalfoundry-2017-amd-radeon-rx-vega-64-performance-preview

Note: COD Infinite Warfare's performance is bad on NV hardware because of a bug in the latest driver:

https://forums.geforce.com/default/...lay-driver-feedback-thread-released-7-24-17-/


Well, *cough*, those are static numbers with no heuristics and zero granularity.

I could post static charts that show the exact opposite, with a $399 Vega 56 equaling the 1080 FE. So you have to take a step back and look at what you need and where you are going with it. As a consumer/buyer you are more concerned with a brand's ecosystem (and driver suite), game support, etc. than with the hardware's raw numbers outright.


Being a gamer..
I am into technology that pushes gaming, not marketing gimmicks. I am a prosumer of GPUs because over the last 30 years I sure have bought a lot of them, and I even owned a Video Toaster. (I essentially started my GPU apprenticeship with Agnus, Denise & Paula.) I have seen my share of gimmicks, APIs, chip features, and technologies that flopped, or morphed, over the years.

To me, Radeon is far ahead of their competition in software development and advanced gaming. Microsoft, Sony and Samsung seem to think so, too.



RX Vega & simple logic:
  • FLOPS don't mean anything without software to use them.
  • RX Vega is currently leaving considerable amounts of FLOPS on the table.
  • RX Vega 56 at $399 plays BF1 in widescreen at 90 frames per second (on day-one release).

The Vega 56 has 10.5 TFLOPS of single-precision performance. Yet often overlooked is its 21.0 TFLOPS of half-precision performance. That shows me there is considerably more performance left in the Vega architecture than what we have now. AnandTech has already written about how AMD's Rapid Packed Math could bring considerable changes within the gaming ecosystem, and for AMD.
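As a back-of-the-envelope sketch of where those two numbers come from (assuming the reference Vega 56's 3584 stream processors and its roughly 1.47 GHz boost clock):

```python
# Rough TFLOPS estimate for RX Vega 56 (reference-card figures assumed).
stream_processors = 3584
boost_clock_hz = 1.471e9

# Each stream processor can do one fused multiply-add per clock = 2 FP32 ops.
fp32_tflops = stream_processors * 2 * boost_clock_hz / 1e12

# Rapid Packed Math packs two FP16 operations into each FP32 lane per clock,
# doubling the theoretical half-precision rate.
fp16_tflops = 2 * fp32_tflops

print(f"FP32: {fp32_tflops:.1f} TFLOPS")  # ~10.5
print(f"FP16: {fp16_tflops:.1f} TFLOPS")  # ~21.1
```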

RX Vega as it stands today, even as rough around the edges as it is, is still a better buy than its competition. I see RX Vega being a tremendous value over its competition, and I believe people building new rigs for the new era of gaming will too.

Ironically (compared to 3 years ago), I now see Radeon as a premium brand. AMD simply offers too many things no other GPU manufacturer does.



For my own personal use, I am looking to buy 2 GPUs for stand-alone LAN rigs I am building in the coming months. RX Vega is a win for gamers. (I already have other machines, doing other things, with other video cards.)
 
I'd say if AMD thinks "Poor Volta.." would just be seen as a funny Easter egg, they have problems. And if they think the few percent between the FE and RX Vega 64 in gaming is what people understand when they say "FE is not representative of RX Vega performance", they missed some very basic communication rules.
Let me remind you of the wallpapers a long time before the launch of the product. Imho they talked a lot about Vega.
The public's reaction to "Poor Volta" was probably unexpected for AMD; it was in no way anything to be hyped about when you consider it was up for half a second in a teaser video far before the product was even announced. If you were a marketing company charged with coming up with some background signs in a promotional video and came up with one that said "Poor Volta[XXX]" and thought it was a clever tidbit, you wouldn't expect backlash from people taking it literally. "Poor Volta" was never a tagline or mentioned anywhere else. A couple of frames in the background of a video aren't supposed to be looked at so closely.

FE not being representative of RX performance wouldn't be a problem if people didn't expect magic drivers that give a 40% performance boost. AMD probably didn't even know how much they could do in the 2-month window between releases, but what they said was true. It was the fanboys that really hyped themselves up for Vega and conjured their own unreasonable expectations. The FE, when it wasn't modified to run at max clocks, also performed a lot worse than the RX does at stock today. It's also not too hard to see why AMD couldn't just say that the RX and FE will both be around 1080 performance, below the 1080 Ti in gaming, while using 100W more power.

And lastly releasing a whitepaper isn't really marketing.
 
AMD's Vega marketing goes back 18 months:
[Image: slide from AMD's Capsaicin presentation]


This was published at GDC 2016. Sure, we know these "Perf/W" claims are made up, but AMD has done their work...
 
Point being there are a few cases, FP16 being one, that I legitimately feel would put Vega close to Ti.
Specifically addressing FP16, I think there are far fewer use cases for it in graphics than you seem to believe. TINSTAAFL. And even in the cases where it is useful, unless it is an operation that is already taking a significant chunk of your frame time it wouldn't have a huge impact even if the speedup was ∞ (reduced time spent on that operation to 0).
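To put a rough ceiling on that, here is a minimal sketch of the Amdahl's-law argument being made; the 30% share of FP16-friendly frame time is purely an illustrative assumption, not a measurement:

```python
# Overall frame speedup when only a fraction of the frame time benefits.
# fraction: share of frame time in FP16-friendly work (illustrative).
# speedup:  factor applied to that work (2.0 = packed math, inf = free).
def overall_speedup(fraction: float, speedup: float) -> float:
    remaining = (1.0 - fraction) + fraction / speedup
    return 1.0 / remaining

# Even an infinite speedup on 30% of the frame caps the gain at ~1.43x;
# the realistic 2x of packed FP16 on that 30% gives only ~1.18x.
print(overall_speedup(0.30, float("inf")))  # ~1.43
print(overall_speedup(0.30, 2.0))           # ~1.18
```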

We should expect a minor boost from the use of FP16 math in games that end up supporting it on Vega, and be (happily) surprised if it turns out to be of great significance. Expecting it to be a huge deal is setting yourself up for a letdown, and trying to convince other less knowledgeable folks who are trying to get the most for their $$ that FP16 will turn this turd into gold is almost certainly misleading at best.
 
I think it would be better and "cheaper" for AMD to put the 20MM they spend on their management into the development of Vega...

"Poor Volta" was perceived as a secret message, and tbh we all got hints of Vega's performance from how the marketing changed from the Polaris launch to Vega's, with the blind tests and other stuff that were really a damage-control strategy.

And while Vega may gain a lot of performance over time, you don't pay for future performance, you pay for out-of-the-box performance, and right now the prices are not really representative of that. But it's probably that Vega is just too expensive to produce and AMD is playing the "don't lose too much" strategy there.
 
Agreed on FP16. There's been far too much focus on FP16 and its usefulness in games when it was primarily added to address the AI market. If there's a toggle in Wolfenstein 2 for FP16 on/off and we see some decent gains from its use (>10%?), then a case might be made for it.
 
Specifically addressing FP16, I think there are far fewer use cases for it in graphics than you seem to believe. TINSTAAFL. And even in the cases where it is useful, unless it is an operation that is already taking a significant chunk of your frame time it wouldn't have a huge impact even if the speedup was ∞ (reduced time spent on that operation to 0).

We should expect a minor boost from the use of FP16 math in games that end up supporting it on Vega, and be (happily) surprised if it turns out to be of great significance. Expecting it to be a huge deal is setting yourself up for a letdown, and trying to convince other less knowledgeable folks who are trying to get the most for their $$ that FP16 will turn this turd into gold is almost certainly misleading at best.

Incredible: you have talent and know about FP16 and how it can hurt, or not work, within software... Yet you spend so much effort resisting it and battling Rapid Packed Math instead of allowing yourself to imagine/calculate/theorize the benefits of it.

Why?




Logically, if I benefit from FP16 in new games, so do you...!

So it is quite odd behavior to dismiss FP16 after you allegedly know what it is and does; it seems like you are being disingenuous. The Xbox will use FP16, so that means game developers will too. RPM and half precision are the new era of gaming and will be used when needed. This has been a long time coming, and even AMD's competition is moving to FP16 functionality.


It is my understanding that AMD's competitor cannot decode it natively and uses FP16 emulation? So it is possible for "certain people" to dismiss the importance of technologies like Rapid Packed Math, while others see instantly how it can help in many future game titles and engines.
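For anyone unsure what "packed" actually means here, this is a minimal sketch of the storage idea only (illustrative code, not AMD's or anyone else's actual hardware interface): two FP16 values share one 32-bit register, which is what lets a packed-math ALU process both halves with a single instruction.

```python
import numpy as np

# Two half-precision values packed into one 32-bit word (illustrative only).
a = np.float16(1.5)
b = np.float16(-0.25)

# Reinterpret each 16-bit pattern and pack both into a single 32-bit word.
packed = (int(a.view(np.uint16)) << 16) | int(b.view(np.uint16))
print(hex(packed))

# A packed-math ALU would operate on both halves at once; here we just
# unpack to show that both values round-trip intact.
hi = np.uint16(packed >> 16).view(np.float16)
lo = np.uint16(packed & 0xFFFF).view(np.float16)
print(hi, lo)  # 1.5 -0.25
```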
 
Nobody is dismissing FP16 altogether. It is definitely useful for some operations. Just don't expect it to make a very big difference in the grand scheme of things.

Generally speaking it is extremely rare these days for a single feature to yield massive gains across the board. Even Conservative Rasterization hasn't made much of an impact and it is a way bigger deal than FP16.
 
Well, then that is where we differ in logic.

We don't have to wait, because we already know that it is being introduced into mainstream gaming (see Xbox One X), and from simple math alone we know there is a "difference" in potential compute. As a matter of fact, in the RX Vega 56's case, that potential difference is 10.5 vs 21 TFLOPS. Even a percentage of that will add up to better game performance.

The only question is how FP16 will translate into overall performance within modern game engines. This is what is being discussed here.

Marketing slide:


It seems FP16 & INT16 are more than just a consideration in games.

RX Vega is in consumers' hands now, and this is day 1. So things will only get better, not worse. And AMD has already hinted at the potential here.
 
Marketing slide:
Keep in mind that the performance increase numbers are for the particular effect. If a bloom effect takes up a total of 2% of the frame time, 20% extra speed on it doesn't count for much overall. It all adds up and we should see some benefit, but I don't think gamers should see it as the huge benefit it's being touted as.
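To make that concrete, here is the arithmetic with an assumed 60 fps (about 16.7 ms) frame budget, purely for illustration:

```python
# Worked example: a 20% faster bloom pass that only occupies 2% of the frame.
frame_ms = 1000.0 / 60.0            # ~16.67 ms frame budget (assumed)
bloom_ms = 0.02 * frame_ms          # bloom at 2% of the frame: ~0.33 ms

faster_bloom_ms = bloom_ms / 1.2    # 20% extra speed on the bloom pass
saved_ms = bloom_ms - faster_bloom_ms

new_frame_ms = frame_ms - saved_ms
gain = frame_ms / new_frame_ms - 1.0
print(f"saved {saved_ms:.3f} ms -> whole frame only {gain:.2%} faster")
# ~0.056 ms saved, i.e. roughly a 0.3% overall gain
```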
 
Some devs have already said FP16 in games can bring 5-15% improvements when used, but it won't be used all the time or in every frame.

Damn, what is happening in this topic with FP16... It's not a miracle, people, you won't double the compute performance. A lot of stuff needs FP32.
 
Keep in mind that the performance increase numbers are for the particular effect. If a bloom effect takes up a total of 2% of the frame time, 20% extra speed on it doesn't count for much overall. It all adds up and we should see some benefit, but I don't think gamers should see it as the huge benefit it's being touted as.

This. And FP16 is already used in the PS4 Pro. It helps here and there, but it's no magic.
 
I think if AMD were to do 4 stacks again, they'd need a bigger chip and interposer and would face the same or even worse problems than Fiji, with HBM2 being a bit bigger; that's why last year's leak of 7nm Vega 20 had four stacks. It looks like AMD have scrapped those plans and are going for 14nm+, which would have the same problem.

There does seem to be something of a discrepancy between Greenland as it was described in some past leaks and the Greenland that supposedly is in Vega now.
Perhaps there was a Greenland A and Greenland B, or something got changed mid-stream.
At least for now, we do not have a Vega GPU with 1/2 rate DP and 4 GMI links as a product (caveat: die shots are too blurry).
One possible advantage to the 2-stack arrangement, aside from area savings to the GPU and interposer, is that the GPU section may have been able to grow/shrink more freely than if I/O, HBM, and all the other logic surrounded it.
GCN GPUs tend to have an internal patchwork of revisions even within a generation, and potentially some elements like the CUs and uncore were switching later in the process.

On that note, it does appear from the latest diagrams that the data-side Infinity Fabric interfaces on the outside of the L2, and does not run within the L2 and CU boundary.

While GF can be seen as a year behind TSMC, they're not spending their R&D on a 10nm process at all, unlike TSMC, and their 7nm (at least the first 7nm, anyway) is indeed performance-oriented.
That might have implications as well, and some may apply to Vega.
GF's high-performance process specifically points out performance ranges akin to the POWER or high-end CPU lines, which AMD's slides concerning its older APUs and Excavator showed to have very different structure and needs from ASICs.
A lot of AMD's motivation for its interposer-based integration method is to get the GPU and CPUs back out of compromising their process features and metal stack to cater to the other.
In Vega's case, it has features like its register file that took input from design expertise derived from Zen, but is that a win if that means parts of Vega are closer to CPU-optimized than the GPU they are in?

Then again, we do have historical evidence of AMD cards gaining more performance over time compared to GeForces from every generation since GCN was first introduced. Vega being the biggest change to the architecture, but still essentially GCN, is there a reason not to expect at least a similar performance uplift as previous generations, if not bigger due to the bigger-than-usual changes?
That will be interesting to watch. One caveat is that it may be less consistent and may favor the CLC and 3rd party boards with custom coolers. Potentially, it may also favor the lower-performance settings since Vega appears to be pushed even further than prior designs.
Performance optimizations that help utilization without a corresponding power decrease would just push Vega into a throttling zone, so some changes may not behave as expected.

Can somebody test David Kanter's trianglebin?
I hope someone remembers it, although it may not be possible to capture the full range of that function with this test, due to the UAV.

Draw Stream Rasterizer is offline again?
http://www.pcgameshardware.de/Radeo...3/Tests/Benchmark-Preis-Release-1235445/3/#a1

See the Beyond3D suite results.
Maybe or maybe not? I do not recall what the test does to create the culling cases.
It may help to determine if the primitive shaders are up to snuff. If the synthetic's culling settings work by making a varying percentage of triangles back-faced, primitive shaders theoretically would not let them get to the rasterizer.
In that case, the primitive shaders and the rasterizer render each other partly redundant; the question then is why the culling isn't better. Vega's polygon tests show it to be competitive or better in the non-culled and half-culled cases. It's the fully culled cases that do not show the outsized benefit that Nvidia's GPUs have.
 