AMD Vega Hardware Reviews

Yes; however, DSBR is likely just a higher-level abstraction over primitive shaders and the pipeline.
No offense, but what are you talking about? It's two separate parts of the pipeline.
Guarantees have a habit of being flexible when drivers detect they can get away with it. As linked above, there are mechanisms to explicitly control it. As an optimization, all drivers likely go out of order where possible.
And how would a driver detect it when the API has no construct to relay intent?
Edit: also, if as an application programmer I go out of my way to sort my draws front to back and/or screen-space bin them for better cache utilization, are you gonna get more performance by doing things out of order? And again, if the driver does potentially do this, how does it know it's safe for correct rendering? How does it know it will be a perf gain and not a loss? A per-app profile might help somewhat, but that's a far cry from a driver automatically detecting and choosing to go out of order.
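For what it's worth, the "mechanisms to explicitly control it" do exist as an actual API construct on the Vulkan side: the VK_AMD_rasterization_order extension lets an application opt a pipeline into relaxed ordering. A minimal C++ sketch, assuming the extension was enabled at device creation (the rest of the pipeline setup is elided):

```cpp
// Sketch: opting a Vulkan graphics pipeline into relaxed (out-of-order)
// rasterization via VK_AMD_rasterization_order. Assumes the extension was
// enabled at device creation; the rest of pipeline setup is omitted.
#include <vulkan/vulkan.h>

VkPipelineRasterizationStateRasterizationOrderAMD orderInfo{};
orderInfo.sType =
    VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_STATE_RASTERIZATION_ORDER_AMD;
orderInfo.rasterizationOrder = VK_RASTERIZATION_ORDER_RELAXED_AMD; // default is STRICT

VkPipelineRasterizationStateCreateInfo rasterInfo{};
rasterInfo.sType = VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_STATE_CREATE_INFO;
rasterInfo.pNext = &orderInfo; // the chained struct is how the app relays intent
rasterInfo.polygonMode = VK_POLYGON_MODE_FILL;
rasterInfo.cullMode = VK_CULL_MODE_BACK_BIT;
rasterInfo.frontFace = VK_FRONT_FACE_COUNTER_CLOCKWISE;
rasterInfo.lineWidth = 1.0f;
// ... plug rasterInfo into VkGraphicsPipelineCreateInfo as usual.
```

So on that API the intent question is answered explicitly per pipeline; the debate here is really about what a driver may do when no such construct was used.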
There was a comment to that effect on a recent Linux commit.
Care to share a link?
 
It does say so quite specifically in the GPUOpen article I linked above.
What I recall was whitelisting specific apps for certain optimizations. So some unsafe or more aggressive optimizations are likely occurring. That GPUOpen blog would be the safe answer.

No offense, but what are you talking about? It's two separate parts of the pipeline.
The pipeline is being implemented as giant monolithic shaders. Primitive shaders are already replacing several parts of the pipeline to better share or omit certain calculations. This would be inlining calculations to reuse the data.

For certain types of culling, it may be advantageous to do a coarse rasterization pass, or to feed results back from rasterization to facilitate culling and binning. Z culling normally applies to fragments, but it could apply to triangles, or portions of them, at a coarse level. If binning in screen space, a primitive can cross tiles. The binning process should collect and reorder geometry prior to pushing it through the normal pipeline; it is not binning fragments.
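To make the binning point concrete, here is a toy CPU-side sketch (purely illustrative, not AMD's DSBR logic): each triangle's screen-space bounding box is tested against a tile grid, and a primitive that crosses tile boundaries is recorded in every bin it may touch.

```cpp
// Toy illustration of coarse screen-space binning: triangles whose bounding
// boxes overlap a tile are appended to that tile's bin. Real hardware binning
// (DSBR) is far more involved; this only shows the bookkeeping idea.
#include <algorithm>
#include <cstdint>
#include <vector>

struct Vec2 { float x, y; };
struct Triangle { Vec2 v[3]; };

constexpr int kTileSize = 32; // assumed bin size in pixels

void binTriangles(const std::vector<Triangle>& tris, int width, int height,
                  std::vector<std::vector<uint32_t>>& bins) {
    const int tilesX = (width  + kTileSize - 1) / kTileSize;
    const int tilesY = (height + kTileSize - 1) / kTileSize;
    bins.assign(static_cast<size_t>(tilesX) * tilesY, {});

    for (uint32_t i = 0; i < tris.size(); ++i) {
        const Triangle& t = tris[i];
        // Conservative screen-space bounding box of the triangle.
        const float minX = std::min({t.v[0].x, t.v[1].x, t.v[2].x});
        const float maxX = std::max({t.v[0].x, t.v[1].x, t.v[2].x});
        const float minY = std::min({t.v[0].y, t.v[1].y, t.v[2].y});
        const float maxY = std::max({t.v[0].y, t.v[1].y, t.v[2].y});
        const int tx0 = std::max(0, static_cast<int>(minX) / kTileSize);
        const int tx1 = std::min(tilesX - 1, static_cast<int>(maxX) / kTileSize);
        const int ty0 = std::max(0, static_cast<int>(minY) / kTileSize);
        const int ty1 = std::min(tilesY - 1, static_cast<int>(maxY) / kTileSize);
        // A primitive crossing tiles lands in several bins, as noted above.
        for (int ty = ty0; ty <= ty1; ++ty)
            for (int tx = tx0; tx <= tx1; ++tx)
                bins[static_cast<size_t>(ty) * tilesX + tx].push_back(i);
    }
}
```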

And how would a driver detect it when the API has no construct to relay intent?
By application name, code injection, driver bug, or analyzing the shaders. Drivers are great at breaking guarantees for performance. Much of this is likely already happening behind the scenes.

Care to share a link?
http://nhaehnle.blogspot.com/2017/09/radeonsi-out-of-order-rasterization-on.html

It was something tied to that. A Phoronix comment, I think.
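For anyone who wants to poke at that radeonsi behavior, it is opt-in through Mesa's driconf whitelisting, which is exactly the kind of per-app profile mentioned above. Assuming I'm remembering the option names correctly (verify against your Mesa build), something like this in ~/.drirc opts a specific executable in:

```xml
<!-- Assumed option names from memory; verify against your Mesa build. -->
<driconf>
  <device driver="radeonsi">
    <application name="Some Game" executable="somegame">
      <option name="radeonsi_assume_no_z_fights" value="true"/>
      <option name="radeonsi_commutative_blend_add" value="true"/>
    </application>
  </device>
</driconf>
```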
 
The pipeline is being implemented as giant monolithic shaders. Primitive shaders are already replacing several parts of the pipeline to better share or omit certain calculations. This would be inlining calculations to reuse the data.
Primitive shaders handle everything up to and including primitive assembly; DSBR handles where that leaves off, up to the pixel shaders. They are two distinct sections.
By application name, code injection, driver bug, or analyzing the shaders. Drivers are great at breaking guarantees for performance. Much of this is likely already happening behind the scenes.
You can't break a guarantee that exists for rendering correctness. And why would you think out-of-order rasterization would automatically be a win? Code injection, yet again, can't convey intent; how is a driver bug going to detect anything; and finally, how is analyzing shaders going to do anything? It has nothing to do with the intent behind submission order.
Thanks, I'll take a look to see what it's about.
 
Meh. Two months and counting now beyond the date stated by AMD for 3rd-party Vegas, with no word on what's going on.

#unhappycamper
 
#myVegaLiquidEditionComesThisWeekButIvePaidAFortuneToSillyBundles

But I'm pretty sure I'll be happy when it's here nonetheless. Got a fluid experience temporarily with an RX 580 anyway.
 
Primitive shaders handle everything up to and including primitive assembly; DSBR handles where that leaves off, up to the pixel shaders. They are two distinct sections.
Same issue as the hull and domain shaders, though: information from one is useful to the other. The entire pipeline really needs to be programmable. Much of the setup and culling information could be highly relevant for binning. Same idea as deferring attribute interpolation.

You can't break a guarantee that exists for rendering correctness. And why would you think out-of-order rasterization would automatically be a win? Code injection, yet again, can't convey intent; how is a driver bug going to detect anything; and finally, how is analyzing shaders going to do anything? It has nothing to do with the intent behind submission order.
Breaking a guarantee is easy: just don't enforce it. While it won't necessarily increase performance, it shouldn't hurt it. Driver bugs exist because some part of the specification likely wasn't enforced, or the driver tried to assume intent; that assumption is the problem. Shaders can be analyzed for dependencies: so long as they aren't dependent on values being overwritten, it can be inferred that order doesn't matter.
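As a toy illustration of that last point (my own example, not from the linked post): purely additive blending is commutative, so the accumulated value doesn't depend on fragment order, while classic alpha "over" blending is order-dependent. That is exactly the sort of property a driver could try to infer before relaxing ordering.

```cpp
// Toy demo: additive blending (dst += src) gives the same result in any
// fragment order (up to float rounding), while "over" alpha blending does
// not, so only the former is safe to rasterize out of order.
#include <cstdio>

float blendAdd(float dst, float src) { return dst + src; }
float blendOver(float dst, float src, float a) { return src * a + dst * (1.0f - a); }

int main() {
    const float a = 0.3f, b = 0.6f;

    // Additive: same answer either way -> order doesn't matter.
    std::printf("add  a,b: %.3f  b,a: %.3f\n",
                blendAdd(blendAdd(0.0f, a), b),
                blendAdd(blendAdd(0.0f, b), a));

    // Alpha "over": 0.375 vs 0.300 -> order must be preserved.
    std::printf("over a,b: %.3f  b,a: %.3f\n",
                blendOver(blendOver(0.0f, a, 0.5f), b, 0.5f),
                blendOver(blendOver(0.0f, b, 0.5f), a, 0.5f));
    return 0;
}
```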
 
Why are they focusing on this kind of stuff... Chill is useless and needs specific tweaks to work. OSD? Products already exist for that, and the people interested in it are already using them... I mean, I wouldn't mind if AMD had resources to spare, but given the state of Vega drivers, why waste time on this...
 
Why are they focusing on this kind of stuff... Chill is useless and needs specific tweaks to work. OSD? Products already exist for that, and the people interested in it are already using them... I mean, I wouldn't mind if AMD had resources to spare, but given the state of Vega drivers, why waste time on this...
Because the AMD community wanted this. Maybe the community was taken over by Nvidia trolls! :D
https://radeon.com/radeonsoftware/feedback/
 
On-screen on-screen display? RIP in peace.
Well, I thought that since it has 'Redux' in the name, there has to be repetition... :p

Why are they focusing on this kind of stuff...
I'm happy they're adding it. Radeon Chill is a cool feature; I wish it worked universally though, or at least wasn't mutually exclusive with framerate capping. The driver could do Chill in titles that support it, and then cap FPS in all the others; that would be even better.

Afterburner I'd just as soon not install at all since it's MSI branded and I have no MSI stuff in my PC. I have nothing against MSI; if anything their hardware might actually be better than my preferred brand ASUS; it is simply a peace of mind thing. I like consistency. It's why I stick to certain brands of hardware and those only, like all my fans are Noctua, and so on. :p
 
Because the AMD community wanted this. Maybe the community was taken over by Nvidia trolls! :D
https://radeon.com/radeonsoftware/feedback/
Oh man, it's not always a good idea to rely heavily on the community though.
Look at the amusing situation where the very expensive and serious scientific research vessel in the UK got the most community votes to be named Boaty McBoatface (which started as a witty tweet) :)
 
Well, I thought that since it has 'Redux' in the name, there has to be repetition... :p


I'm happy they're adding it. Radeon Chill is a cool feature; I wish it worked universally though, or at least wasn't mutually exclusive with framerate capping. The driver could do Chill in titles that support it, and then cap FPS in all the others; that would be even better.

Afterburner I'd just as soon not install at all since it's MSI branded and I have no MSI stuff in my PC. I have nothing against MSI; if anything their hardware might actually be better than my preferred brand ASUS; it is simply a peace of mind thing. I like consistency. It's why I stick to certain brands of hardware and those only, like all my fans are Noctua, and so on. :p


I don't use Afterburner. But the combo of RTSS (you're not obligated to install Afterburner with it) + HWiNFO is really flexible IMO.

Anyway... maybe I overreacted. I just hope they don't choose to implement that instead of working on performance and features (primitive shaders?).
 