AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

Sneaky marketing, using an aftermarket 980 Ti to make the 1080 FE look bad by comparison, most probably against the watercooled Vega :smile:

The ideal choice for 4k is neither the 1080 nor the Vega RX, it's the 1080 Ti and even then you'll struggle in some titles and have to play around with settings.

Especially as the G-Sync monitors go lower in minimum refresh rate than the FreeSync example, which doesn't work with the NV cards anyway.
 
Anandtech stated, IIRC, that AMD used the air-cooled version as the basis for their performance metrics.

Then they are making their GPU look bad; the reference design is not capable of getting the best performance out of Vega. I'm interested to see how aftermarket designs perform; the ones to look for are the ASUS Strix and MSI Gaming X lines.
 
Just my two cents:
In slide 12 we see that HBCC will enable 27GB of data assets in OpenGL apps.
It says "for content creators", so Radeon support also?
Or Vega FE and Vega Pro only?
Also, it would be nice to know if, and how fast, the AMDGPU-PRO driver for Linux will support it, and even the Mesa amdgpu driver.

In slide 28 we see the peak primitive discard throughput.
In fact, comparing with:
http://www.hardware.fr/articles/953-7/performances-theoriques-geometrie.html
that should be more than 2x faster than GP102, i.e. the 1080 Ti.
Sadly (see below), it might not be enabled in the launch drivers:
they tested with a prototype 17.320 driver, and assuming 17.320 = 17.32 > 17.30, that's newer than the current driver.
I hope hardware.fr receives a Vega for testing so we can see clearly whether the launch drivers have it enabled.

In slide 49 we see the dirty details.
In fact there is a comment for primitive shaders (slide 27) saying: in future Radeon drivers, GPUOpen SDKs, or future support from 3D APIs.
Slide 11 has a similar comment (possible future support in Radeon drivers, GPUOpen SDKs, or 3D APIs):
is that referring to "programmable inclusive/exclusive caching"? Exposing configurable page sizes (large, small)? Who knows.

On a related note, OpenGL 4.6 was released today; most interesting for me is support for some advanced OpenGL Shading Language extensions via the new SPIR-V ingestion path (rough sketch below). How fast will AMD support that? Will the Mesa amdgpu driver also be quick to support it?
Also, it seems float16 support is coming in HLSL 2018 (seen on the GitHub compiler site), so it should be SM 6.2 or SM 6.3 (too late for SM 6.1, coming in the Fall Creators Update).
PS: hope we see 3DMark Serra this year.
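To make that concrete, here's a rough sketch of what the GL 4.6 SPIR-V path looks like from the application side, i.e. what the drivers have to cover. It assumes a GL 4.6 context is already current, a PyOpenGL build new enough to expose the 4.6 entry points, and a SPIR-V binary produced offline (e.g. by glslangValidator); the helper name and file path are just placeholders.

```python
# Sketch only: feed a pre-compiled SPIR-V module to the driver instead of GLSL source.
from OpenGL.GL import (glCreateShader, glShaderBinary, glSpecializeShader,
                       GL_VERTEX_SHADER, GL_SHADER_BINARY_FORMAT_SPIR_V)

def load_spirv_vertex_shader(path="shader.vert.spv"):
    spirv = open(path, "rb").read()      # binary produced offline, e.g. by glslangValidator
    shader = glCreateShader(GL_VERTEX_SHADER)
    # Hand the SPIR-V blob to the driver...
    glShaderBinary(1, [shader], GL_SHADER_BINARY_FORMAT_SPIR_V, spirv, len(spirv))
    # ...then pick the entry point (no specialization constants in this sketch).
    glSpecializeShader(shader, "main", 0, None, None)
    return shader
```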
 
At the 8m00 mark:

(...) a few items to know including power saving features. We spoke with AMD at this event, and learned, definitively (not rumors this time), that specific power saving features of the card were disabled for Vega FE when it launched. And that was, to quote, "get it out the door", which is sort of what we said in our coverage, that needed to hit Q2 and they did it for about 2 days. So they did Q2, but some things were disabled, and that makes power consumption look a whole lot higher, from what we've been told, on those [Vega FE] versus Vega RX version.

So AMD is claiming Vega RX's power consumption is a whole lot lower than Vega FE's.
I wonder if this refers purely to the driver level (meaning current Vega FE owners will get lower power consumption), or if it's something that's actually disabled in the FE's PCB configuration. I'd hope for the former, but we already know the VRMs are different between the cards, for example.
 
Vega RX seems to be very dependent upon FreeSync to garner any kind of acceptance from the gaming community. I wonder if the targeted GPU strategy is to maintain market share until Navi?
 
So AMD is claiming Vega RX's power consumption is a whole lot lower than Vega FE's.
I wonder if this refers purely to the driver level (meaning current Vega FE owners will get lower power consumption), or if it's something that's actually disabled in the FE's PCB configuration. I'd hope for the former, but we already know the VRMs are different between the cards, for example.
Would be nice if it hit state 7 more often within the same power budget.
 
Then they are making their GPU look bad; the reference design is not capable of getting the best performance out of Vega. I'm interested to see how aftermarket designs perform; the ones to look for are the ASUS Strix and MSI Gaming X lines.

It's not that they are trying to make their card look bad; it's that the air-cooled version has the same price as the partner versions of the 1080 (and $50 less than the 1080 FE). The liquid-cooled version costs more, so it would not have been an apples-to-apples comparison, and of course the competition would have pointed that out immediately.
 
I might have missed it in the discussion, but has there been much talk of the move to 8 shader engines? It seems a fundamentally new aspect of Vega vs Fiji. I wonder if this would also explain the fairly consistent performance from 1080p through 4K?
 
Here's a slide I haven't seen elsewhere:

[attached slide]

It seems Scott's Fury X review really got to them: not only did they ask him to join them, but they're now totally focusing on that metric, to the point where people are wondering if the averages aren't good enough.

Speaking of which, if you dig deeper using our frame-time-focused performance metrics—or just flip over to the 99th-percentile scatter plot above—you'll find that the Fury X struggles to live up to its considerable potential. Unfortunate slowdowns in games like The Witcher 3 and Far Cry 4 drag the Fury X's overall score below that of the less expensive GeForce GTX 980. What's important to note in this context is that these scores aren't just numbers. They mean that you'll generally experience smoother gameplay in 4K with a $499 GeForce GTX 980 than with a $649 Fury X. Our seat-of-the-pants impressions while play-testing confirm it.

http://techreport.com/review/28513/amd-radeon-r9-fury-x-graphics-card-reviewed/14

Eurogamer also weighs in, with what is imo the silver lining of this 'launch':

Obviously, vendor-supplied metrics need to be taken with a massive pinch of salt, but AMD is promising big improvements to the variable gaming performance reported from the existing Radeon Vega Frontier Edition, a prosumer version of the card available now. The boosts to performance come from significantly improved drivers and a revision to power delivery, even though consumer card frequencies are mildly downclocked compared to the Frontier Edition.

Meanwhile, at the other end of the spectrum, the cheaper RX Vega 56 loses eight compute units (384 shaders) and clocks are dialled back by around 75MHz. On top of that, the HBM2 memory speed gets a downgrade in the region of 15 per cent, giving us an estimated bandwidth in the region of 410GB/s compared to the 64's 484GB/s. Overclocking may make up much of the difference, and AMD has mentioned that the cut-down model's core clock can be pushed significantly - though the possibility remains that it'll require more robust third-party thermal solutions to get the best out of the lower-priced offering.

http://www.eurogamer.net/articles/digitalfoundry-2017-radeon-rx-vega-revealed
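As a quick back-of-the-envelope check of where those two bandwidth figures come from (assuming the commonly reported HBM2 data rates of about 1.89Gbps per pin on Vega 64 and roughly 1.6Gbps on the cut-down card, both on the same 2048-bit interface; treat those rates as assumptions):

```python
# Peak HBM2 bandwidth = per-pin data rate x bus width / 8.
def hbm2_bandwidth_gbs(data_rate_gbps, bus_width_bits=2048):
    return data_rate_gbps * bus_width_bits / 8

vega64 = hbm2_bandwidth_gbs(1.89)   # ~484 GB/s
vega56 = hbm2_bandwidth_gbs(1.60)   # ~410 GB/s
print(f"Vega 64: {vega64:.0f} GB/s, Vega 56: {vega56:.0f} GB/s "
      f"({1 - vega56/vega64:.0%} lower)")
```

That lines up with the ~15 per cent downgrade Eurogamer mentions.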

Might get the cutdown one once it falls below $300 after Volta launches. :LOL:

edit: and yowza :oops:

[attached image]
 
That seems true for modern gaming on any platform: FreeSync or GSync.

Being dependent on FreeSync as a major selling point isn't really a great long-term strategy for AMD. I mean, Nvidia could switch to FreeSync or make their standard free tomorrow if they wanted to. AMD coming up with a legitimately competitive top-tier GPU, on the other hand, is much tougher...

What are the chances AMD's top-tier Navi will even match Volta's xx80 equivalent next time?...
 
Average FPS were not high enough?

part of DP 1.2a

Usually DP 1.2 is needed; it works even on 1.1 on my 32'' HP.
http://www.hdmi.org/manufacturer/hdmi_2_1/

Game Mode VRR features variable refresh rate, which enables a 3D graphics processor to display the image at the moment it is rendered for more fluid and better detailed gameplay, and for reducing or eliminating lag, stutter, and frame tearing.
 
Does anyone know the underlying reason for min frame rates? The typical bottleneck that gets hit when it happens?

Running out of VRAM/RAM can do that, and hitting 100% CPU utilization can also do that. On the green side, hitting the power limit also seems to affect the minimum framerate.
 
AMD estimates a Vega NGCU to be able to handle 4-5x the number of operations per clock cycle relative to the previous CUs in Polaris. They demonstrate a use case of Rapid Packed Math using 3DMark Serra- possibly a yet unannounced Futuremark benchmark- wherein 16-bit integer and floating point operations result in as much as 25% benefit in operation count.
https://www.techpowerup.com/reviews/AMD/Vega_Microarchitecture_Technical_Overview/3.html

Guessing that means co-issuing FP+INT similar to Volta.
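To make the "packed" part concrete (this is just an illustration of the data layout, not AMD's ISA and not a performance demo):

```python
# Two FP16 operands share one 32-bit lane; a packed-FP16 instruction operates on
# both halves at once, which is where the "double rate FP16" claims come from.
# numpy here only shows the packing and does the math elementwise.
import numpy as np

a = np.array([1.5, -2.25], dtype=np.float16)   # two FP16 operands...
b = np.array([0.5,  4.00], dtype=np.float16)
print(hex(a.view(np.uint32)[0]))               # ...packed into a single 32-bit word

print(a * b + b)                               # one packed FMA's worth of work
```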

Being dependent on FreeSync as a major selling point isn't really a great long-term strategy for AMD. I mean, Nvidia could switch to FreeSync or make their standard free tomorrow if they wanted to. AMD coming up with a legitimately competitive top-tier GPU, on the other hand, is much tougher...
I'm not saying AMD is dependent on FreeSync, but gaming in general benefits heavily from the feature regardless of which vendor implements it. Any gamer should have FreeSync/GSync at the top of their list when building a new system. I'd think it's a matter of time before Nvidia adopts FreeSync while still supporting GSync as a superior solution, albeit with fewer royalties.

Does anyone know the underlying reason for min frame rates? The typical bottleneck that gets hit when it happens?
Going off one of those spider graphs, it was Battlefield that was messing up 1080 with a really low minimum. Under DX12 they probably had periodic stalls due to resource management.
 
I’m not sure this bundle idea is the best way to move product. Don’t get me wrong; $300 in coupons for quality hardware is a worthwhile bonus, as are the two solid games — but only if you were already planning to build a new system in the first place. That Ryzen 7 CPU + motherboard bundle is still going to cost you over $200, and even the sale price on the CF791 is $749. If you’re planning to drop serious cash on a new rig, these offers are helpful. Otherwise, not so much. In fact, I think AMD knows it, and has deliberately made the liquid-cooled Vega a bundle-only part precisely because it knows it either can’t sell enough cards at that price to make any money or because they’re only planning a very limited run in the first place.
...
As always, we’ll hold on final judgment until we have shipping, tested silicon, but these are not the kind of figures people were hoping for. The GTX 1080 Ti has a TDP of 250W. Anyone who says “TDP doesn’t equal power consumption” is absolutely, 100 percent right, but TDP ratings tend to at least point in the general direction of power consumption, and a rating of 295W for the AC Vega and 345W for the WC version tells us a lot about how these chips handle clock rates.

Consider: The RX Vega 64 AC is clocked 8 percent higher (base) and 5 percent higher (boost) than the RX Vega 56, and has 15 percent more cores. Yet the TDP difference between the two chips is enormous, with Vega 64 AC drawing 1.4x more power than Vega 56. Now, as we’ve often discussed before, power consumption in GPUs isn’t linear — it grows at the square or cube of the voltage increase, and clock speed or memory clock increases will only make that worse.

Being able to compare with RX Vega 64 LC makes the problem a bit easier to see. The AC and LC variants of Vega only differ in clock speeds. RX Vega LC’s base clock is 1.13x higher than Vega AC, with a boost clock gain of 1.08x. But those gains come at the cost of an additional 1.17x TDP. In other words, at these frequencies, Vega’s power consumption curve is now rising faster than its clock speeds are.
https://www.extremetech.com/gaming/...-product-stack-features-released-ahead-launch
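Putting rough numbers on that last point, using the announced launch specs (treat the clocks and board powers below as the published figures, not measurements):

```python
# How fast is power rising relative to clocks between the two Vega 64 variants?
import math

cards = {
    "RX Vega 56":    {"base_mhz": 1156, "boost_mhz": 1471, "tdp_w": 210},
    "RX Vega 64 AC": {"base_mhz": 1247, "boost_mhz": 1546, "tdp_w": 295},
    "RX Vega 64 LC": {"base_mhz": 1406, "boost_mhz": 1677, "tdp_w": 345},
}

ac, lc = cards["RX Vega 64 AC"], cards["RX Vega 64 LC"]
clock_ratio = lc["base_mhz"] / ac["base_mhz"]   # ~1.13x, same silicon and CU count
tdp_ratio = lc["tdp_w"] / ac["tdp_w"]           # ~1.17x

# If board power followed frequency^n between the otherwise identical cards,
# the implied exponent is already above 1: power is outpacing clocks.
n = math.log(tdp_ratio) / math.log(clock_ratio)
print(f"LC vs AC: clocks x{clock_ratio:.2f}, TDP x{tdp_ratio:.2f}, "
      f"implied exponent ~{n:.2f}")
```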
 