AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

The AMD subreddit is now heavily moderated. Every new thread about "NGG Fast Path" and "primitive shaders" gets deleted.
Is it AMD company reps who moderate that reddit?

So which is worse: less info, or more but potentially misleading?
More is fine, if it is honest. If it's just going to be "toot your horn about a bunch of features which then quietly go MIA", then they can go fuck themselves.

What appreciable harm or loss is there in a product that is "faster than before, but a different kind of faster" or "that culled triangle was culled later and I almost saw it"?
Not a genuine fan of "you don't really know what you're missing, so how could you be missing anything" type reasoning. If they first give the impression that your purchased product is more advanced with a bunch of smart, evolved features which would improve performance in one way or another (be it absolute performance, lower power consumption, whatever), but then never actually deliver on that, then they've sold something under false pretenses. This isn't allowed in other markets, so why should it be OK in computing?

I don't recall if AMD ever changed its story that R600 had chosen not to have hardware MSAA resolve, although most chatter about it was "yeah, it was broken".
I don't recall that either, but I do remember Intel announcing there was a bug in transactional memory handling in Haswell/Broadwell chips. They disabled the feature in all sold processors in a firmware update, including those that would have been bought with that usage in mind. I don't remember reading about them giving any refunds, product exchanges, or getting sued over it, though. Unsure if they respun the chip with a fixed implementation, or if they simply waited for Skylake to reintroduce the feature.
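For what it's worth, part of why that disable was mostly invisible to buyers is that RTM code is normally written with a lock-based fallback, and deployed software checks the CPUID RTM bit before trying transactions at all; the end result is identical either way, only the synchronization cost differs. Rough sketch of the usual lock-elision pattern, assuming a toy spinlock and counter (only the _xbegin/_xend/_xabort intrinsics from <immintrin.h> are real; build with RTM enabled, e.g. -mrtm on GCC/Clang):

Code:
#include <immintrin.h>   /* _xbegin, _xend, _xabort, _XBEGIN_STARTED */
#include <stdatomic.h>
#include <stdbool.h>

/* Toy spinlock used as the fallback path. */
static atomic_bool fallback_lock;

static void lock(void)   { while (atomic_exchange(&fallback_lock, true)) { } }
static void unlock(void) { atomic_store(&fallback_lock, false); }

/* Increment a shared counter, trying a hardware transaction first. */
void increment_counter(long *counter)
{
    if (_xbegin() == _XBEGIN_STARTED) {
        /* Reading the lock puts it in the transaction's read set, so a
         * concurrent fallback-path holder forces an abort instead of a race. */
        if (atomic_load(&fallback_lock))
            _xabort(0xff);
        (*counter)++;
        _xend();             /* commits atomically if nothing conflicted */
        return;
    }
    /* Transaction aborted (conflict, capacity, lock held, ...): take the
     * ordinary lock and do the same work.  Production code also gates on the
     * CPUID RTM bit and comes straight here when the feature is absent. */
    lock();
    (*counter)++;
    unlock();
}

Same answer either way, which is why a buyer can't point at a wrong result, only at lost performance under contention.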
 
Not a genuine fan of "you don't really know what you're missing, so how could you be missing anything" type reasoning.
There's at least empirical evidence at this point, for consumer-facing products, that there are visible differences between native and reprojected methods of reaching high-resolution graphics.
It's externally tangible, so a person can in some way argue they are affected enough that the lack thereof is a loss to them. That would generally be needed to progress as far as the lawsuit against Sony over Killzone Shadow Fall, and that wasn't significant enough to carry the case very far.

If they first give the impression that your purchased product is more advanced with a bunch of smart, evolved features which would improve performance in one way or another (be it absolute performance, lower power consumption, whatever), but then never actually deliver on that, then they've sold something under false pretenses. This isn't allowed in other markets, so why should it be OK in computing?
Some kind of negative impact, no matter how small, is generally needed for some kind of court action.
The question of "can you know it's there if you aren't missing it" would figure into that.

The idea of a case is to provide some sort of remedy, the scope of which is in part determined by what kind of injury or loss was incurred.
If the answer is that what was lost is something a consumer cannot detect or interact with, the court's remedy would be equally undetectable--which would likely make a court decide it's a waste of time to look at it further.

I don't recall that either, but I do remember Intel announcing there was a bug in transactional memory handling in Haswell/Broadwell chips. They disabled the feature in all sold processors in a firmware update, including those that would have been bought with that usage in mind. I don't remember reading about them giving any refunds, product exchanges, or getting sued over it, though. Unsure if they respun the chip with a fixed implementation, or if they simply waited for Skylake to reintroduce the feature.
I believe some server variants were able to have a fixed spin.
This is a case where a consumer cannot use the instruction personally, and cannot discern if the instruction is running. Like a culling shader's, a transactional memory operation's end results are indistinguishable from doing the work the ordinary way. Rather, there's the possibility of better or worse performance, as determined by someone other than the consumer.

Perhaps a developer or vendor, say of server software that would explicitly be investing in using the instruction, would be in a position to argue for an actual loss. Intel was either lucky that validation took longer, or delayed the products where the buyers could know the difference. The back and forth with those parties is unlikely to be as public.
 
Has this been posted?

http://creators.radeon.com/Radeon-pro-vega/

Enhanced Geometry Engine
The “Vega” architecture is designed to tackle the ever-increasing demand for more complex and detailed 3D graphics with an efficient new geometry engine. It combines the brute force of dedicated parallel processing pipelines with intelligent load balancing and new fast paths to deliver a major step forward in polygon throughput.

The most challenging workloads for a GPU can present it with millions of geometry primitives per frame, all of which must be evaluated to determine their contribution to the final image. New primitive shader technology allows Radeon Pro Vega graphics to perform geometry culling at an accelerated rate, eliminating unnecessary work for the rest of the GPU. An advanced workload distribution mechanism then assigns processing tasks to the available pipelines in a way that maximizes their utilization and avoids idle time. The result is that Radeon Pro Vega graphics is capable of rendering extremely complex 3D models and scenes smoothly in real time.
 
Yeah, on the last page; salt in the wound. I don't expect AMD to enable it soon either: Hawaii's tessellation improvements came when they rebadged it as the 390 series, and Vega's enhancements sound more complex than that, at least.
 
Yeah, on the last page; salt in the wound. I don't expect AMD to enable it soon either: Hawaii's tessellation improvements came when they rebadged it as the 390 series, and Vega's enhancements sound more complex than that, at least.

If there is a hardware issue it would make sense they fixed it for the 12nm Vega refresh slated for next year.
 
So it must be broken. The lack of any answer in the reddit thread indicates that it is not working; if it were working, a statement would have come out immediately.
 
What hardware issue?
They are selling Apple broken hardware?

Yes "if". If is a conditional. We simply don't know if primitive shaders is working or not. If they aren't we don't know why. As PS was folded into driver functionality, neither Apple nor any end user wouldn't have any idea what PS additional performance "should be". Talk of lawsuits is asinine beyond belief.
 
neither Apple nor any end user would have any idea what the additional PS performance "should be". Talk of lawsuits is asinine beyond belief.
AMD is still advertising the feature, and Apple is advertising it as well. So if it is broken, then there's absolutely deception involved.
 
It's entirely possible that, since the apparent official confirmations that it was disabled, they have enabled it, but in 90% of cases it produces no measurable benefit, so they decided to cease talking about it while retaining it as a tech-jargon marketing bullet point.
 
It's entirely possible that, since the apparent official confirmations that it was disabled, they have enabled it, but in 90% of cases it produces no measurable benefit, so they decided to cease talking about it while retaining it as a tech-jargon marketing bullet point.
Regarding primitive shaders: not all, but at least some, models to be rendered will most likely have backfacing triangles in clusters. This is because if you preprocess a mesh's vertices for the post-transform cache (and a lot of the time even if you don't), adjacent triangles will be adjacent in the triangle stream. So if you have a bunch of backfacing triangles in a row, primitive shaders will go through them faster. Since about 50% of a model's triangles are backfacing, you could see up to half of a draw call be accelerated by primitive shaders.

edit - point being it should have an impact on all rendering.
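To put a rough sketch behind that: a backface test is just the sign of the projected triangle's signed area, roughly half the triangles of a closed mesh fail it, and with cache-friendly index ordering the failures come in runs. Illustrative C only; the struct and function names here are made up:

Code:
#include <stdbool.h>
#include <stddef.h>

typedef struct { float x, y; } Vec2;   /* screen-space position after projection */

/* Signed twice-area of the projected triangle; the sign encodes winding.
 * With counter-clockwise front faces, a non-positive value means the
 * triangle faces away (or is degenerate) and can be dropped early. */
static bool is_back_facing(Vec2 a, Vec2 b, Vec2 c)
{
    float signed_area = (b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y);
    return signed_area <= 0.0f;
}

/* Walk an index stream and count how many triangles an early cull would drop.
 * Index buffers ordered for vertex-cache reuse keep adjacent triangles adjacent
 * in the stream, so the culled ones tend to arrive in runs -- exactly the case
 * a primitive-shader-style fast path could burn through quickly. */
size_t count_culled(const Vec2 *verts, const unsigned *indices, size_t tri_count)
{
    size_t culled = 0;
    for (size_t t = 0; t < tri_count; ++t) {
        Vec2 a = verts[indices[3 * t + 0]];
        Vec2 b = verts[indices[3 * t + 1]];
        Vec2 c = verts[indices[3 * t + 2]];
        if (is_back_facing(a, b, c))
            ++culled;       /* ends up near tri_count / 2 for a closed mesh */
    }
    return culled;
}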
 
I have another, very speculative theory about the whole Vega features situation:

Given the statement from Raja Koduri in the summer that desktop Vega was running on modified FuryX drivers, which was later publicly denied by AMD (that denial is BS though, as how would HE not know what the driver base of Vega is), and

the Raven Ridge Vega drivers currently being in an abysmal/very early state (they are an extra branch; the latest official drivers from the AMD website do not work for this hardware), there might be a chance that these are proper Vega drivers built from the ground up. We might see a driver release enabling all features sometime later (months, if not more than a year). Desktop Vega might be switched over to these new drivers then.

Features that distinguish Vega from FuryX are still not working properly atm: power management stuff (AVFS), HBCC (still unstable as mentioned in the driver changelogs), DSBR (it is enabled but sometimes removes polygons that should stay visible). This indicates to me that these features were added to an existing codebase later on and still need proper integration. If NGG Fast Path and Primitive shaders are really as difficult to implement as it is rumored then I can understand why they wouldn't want to implement them twice.

Concluding from this I think that there is still a very small chance that these features will eventually come, but we probably shouldn't wait for them.
 
I have another, very speculative theory about the whole Vega features situation:

Given the statement from Raja Koduri in the summer that desktop Vega was running on modified FuryX drivers, which was later publicly denied by AMD (that denial is BS though, as how would HE not know what the driver base of Vega is), and...

I am pretty sure that part (Raja saying anything about “Fury Drivers” this summer) definitely didn’t happen.
 
Given the statement from Raja Koduri in the summer that desktop Vega was running on modified FuryX drivers, which was later publicly denied by AMD (that denial is BS though, as how would HE not know what the driver base of Vega is), and
Perhaps there is a specific interview or tweet, but I recall this claim being in circulation in early 2017.
There were a few different phases of the summer rumor mill, so I'm not sure if I missed that wave versus the claims of being gimped or RX Vega's supposed metamorphosis from the FE.

Features that distinguish Vega from FuryX are still not working properly atm: power management stuff (AVFS), HBCC (still unstable as mentioned in the driver changelogs), DSBR (it is enabled but sometimes removes polygons that should stay visible).
Certain elements would need to be implemented in a manner that matched Vega rather than Fiji or earlier. The specifics of Vega's hardware registers and command packets were different from Fiji's, and if you believe the marketing, the DVFS management would be relying on a different controller with logic and communications linked to the Infinity Fabric; it would be quite obvious if that part of the infrastructure was not implemented.
The microcode load and PSP-related changes were part of the initial Linux patches, and would be a significant departure from what came before.

This indicates to me that these features were added to an existing codebase later on and still need proper integration. If NGG Fast Path and Primitive shaders are really as difficult to implement as it is rumored then I can understand why they wouldn't want to implement them twice.
Looking at many of the Linux patches, sections for Vega were inserted into programs with GFX version or chip ID checks, and co-existed with similar checks going back to Southern Islands or perhaps earlier. It makes sense that there would be a modular approach when possible at a high level, and that doesn't necessarily make Fiji's drivers just modified SI drivers.

If the architecture were modular as well, then the elements specific to the new functions would be abstracted enough that their routines would be insertable into a "native" code base as well.
If appropriately organized, the project would have had the opportunity to lay down the necessary interfaces and functionality in advance of Vega silicon.
If not organized that way, or if implementation assumptions did not hold, I could see why developing these features was more problematic.
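Purely to illustrate the dispatch pattern described above (a made-up sketch, not actual amdgpu code): a chip-ID or GFX-version check picks a per-generation table of function pointers, so a new generation's routines slot into the existing code base behind the same interface while the older paths keep working untouched.

Code:
#include <stdio.h>

/* Hypothetical chip identifiers -- illustrative only, not the driver's enums. */
enum chip_id { CHIP_TAHITI, CHIP_FIJI, CHIP_VEGA10 };

/* Per-generation hooks behind a common interface. */
struct gfx_funcs {
    void (*load_microcode)(void);
    void (*init_rings)(void);
};

static void si_load_microcode(void)   { puts("pre-Vega microcode load"); }
static void si_init_rings(void)       { puts("pre-Vega ring setup"); }
static void vega_load_microcode(void) { puts("Vega PSP-assisted microcode load"); }
static void vega_init_rings(void)     { puts("Vega ring setup"); }

static const struct gfx_funcs si_funcs   = { si_load_microcode,   si_init_rings };
static const struct gfx_funcs vega_funcs = { vega_load_microcode, vega_init_rings };

/* The chip-ID / GFX-version check: one switch selects the table, and the
 * rest of the code calls through it without caring which generation it is. */
static const struct gfx_funcs *select_funcs(enum chip_id chip)
{
    switch (chip) {
    case CHIP_VEGA10:
        return &vega_funcs;
    default:
        return &si_funcs;   /* older generations share the existing path here */
    }
}

int main(void)
{
    const struct gfx_funcs *gfx = select_funcs(CHIP_VEGA10);
    gfx->load_microcode();
    gfx->init_rings();
    return 0;
}

Whether the project actually laid those interfaces down ahead of Vega silicon is the part we can only guess at.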
 