AMD Vega Hardware Reviews

This is not a matter of "if".
Unless we hear directly from AMD that feature X is going to bring Y amount of performance, speculating on how much Vega should gain from these features is moot. Many of the new things in Vega appear immature or in early stages; whether that is due to hardware or software will be revealed in time. It's not wise to hype up something AMD itself is not hyping.
strongly suggest that primitive shaders are not currently enabled in the RX drivers
Yet AMD told Anandtech they are enabled!
 
On the Linux side ...
The Draw Stream Binning Rasterizer is not being used in the open drivers yet. Enabling it would be mostly in the amdgpu kernel driver, but optimizing performance with it would be mostly in radeonsi and game engines.

HBCC is not fully enabled yet, although we are using some of the foundation features, like 4-level page tables and variable page size support (mixing 2MB and 4KB pages). On Linux we are looking at HBCC more for compute than for graphics, so the SW implementation and exposed behaviour would be quite different from Windows, where the focus is more on graphics. Most of the work for Linux would be in the amdgpu kernel driver.

Primitive Shader support - IIRC this is part of a larger NGG (next-generation geometry) feature. Some initial work has been done for primitive shader support, but I don't know if anything has been enabled yet. I believe the work would mostly be in radeonsi, but I haven't looked closely.

For both DSBR and NGG/PS I expect we will follow the Windows team's efforts, while I expect HBCC on Linux will get worked on independently of Windows efforts.
https://www.phoronix.com/forums/for...opengl-proprietary-driver?p=970697#post970697
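
To make the page-table remark above a bit more concrete, here is a rough sketch of how a 4-level translation can terminate early for a 2 MB mapping while still resolving 4 KB pages at the last level. It assumes a generic x86-64-style 48-bit layout (9 index bits per level over a 4 KB granule); the actual amdgpu GPUVM page-table format differs in its details, so treat this purely as an illustration of mixing page sizes.

```python
# Rough sketch of 4-level translation with mixed page sizes.
# Layout here is x86-64-style: 9 index bits per level over a 4 KB
# granule, 48-bit virtual addresses. The real amdgpu GPUVM page-table
# format differs; this only illustrates the early-terminating 2 MB case.

PAGE_SHIFT = 12        # 4 KB base page
BITS_PER_LEVEL = 9     # 512 entries per table
LEVELS = 4

def split_va(va, huge_2mb=False):
    """Return the per-level table indices plus the in-page offset."""
    indices = []
    for level in range(LEVELS - 1, -1, -1):          # top level first
        shift = PAGE_SHIFT + level * BITS_PER_LEVEL  # 39, 30, 21, 12
        indices.append((va >> shift) & ((1 << BITS_PER_LEVEL) - 1))
    if huge_2mb:
        # A 2 MB mapping stops one level early: the last 9 index bits
        # fold into a 21-bit offset instead of selecting a 4 KB page.
        return indices[:-1], va & ((1 << (PAGE_SHIFT + BITS_PER_LEVEL)) - 1)
    return indices, va & ((1 << PAGE_SHIFT) - 1)

va = 0x7F3A12345678
print(split_va(va))                 # 4 indices, 4 KB offset
print(split_va(va, huge_2mb=True))  # 3 indices, 2 MB offset
```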
 
For both DSBR and NGG/PS I expect we will follow the Windows team's efforts, while I expect HBCC on Linux will get worked on independently of Windows efforts.
Reading that post again, this kinda stood out to me, possibly implying that it is indeed not enabled on the Windows side, and that when it is, they would follow those efforts for the Linux side.
 
Unless we hear directly from AMD that feature X is going to bring Y amount of performance, speculating on how much Vega should gain from these features is moot. Many of the new things in Vega appear immature or in early stages; whether that is due to hardware or software will be revealed in time. It's not wise to hype up something AMD itself is not hyping.

Yet AMD told Anandtech they are enabled!
Vega is 1337 GPU of the future. DX12 and DSBR and HBCC and FP16 and driver and primitive shader lol NGREEDIA Pisscal has no chance once true potential of Vega is revealed.
 
Reading that post again, this kinda stood out to me, possibly implying that it is indeed not enabled on the Windows side, and that when it is, they would follow those efforts for the Linux side.
I think Windows support is already there, but whether it's enabled or not, only AMD can state. I imagine that if this feature were not working they would have already spoken up (especially after RX Vega's reviews), if only for the mindshare audience's sake. And the deafening silence leads one to believe any performance gains are likely insignificant one way or the other.
 
Unless we hear directly from AMD that feature X is going to bring Y amount of performance, speculating on how much Vega should gain from these features is moot. Many of the new things in Vega appear immature or in early stages; whether that is due to hardware or software will be revealed in time. It's not wise to hype up something AMD itself is not hyping.
I don't recall speculating on what level of performance Vega would or would not gain from primitive shaders being implemented in drivers. I only pointed out that the available direct evidence appears to support the position that primitive shaders are not enabled, and in any case I prefer to rely on my own analysis of that evidence rather than waiting to be told something by a company representative.

Vega is 1337 GPU of the future. DX12 and DSBR and HBCC and FP16 and driver and primitive shader lol NGREEDIA Pisscal has no chance once true potential of Vega is revealed.
If you have anything constructive to add to the discussion please feel free to do so.
 
Rasterizer is correct that primitive shaders are not enabled, though the Beyond3D test would only prove they aren't enabled in DX11. I don't know why the press got mixed messages, as Mike Mantor stated during the Tech Day that primitive shaders aren't enabled.
 
AMD took almost a year before the 7970 got around a 30% boost in BF3 and overtook the 680.
Hawaii got a significant tessellation boost after almost a year and a half.

So I've no doubt that Vega will show something pretty good by the time Volta is released.
 
This is not a matter of "if". The B3D Suite results directly show that Vega currently has a culled-triangle throughput of 3.75 triangles per clock, which is not compatible with the claims made in the whitepaper about primitive shaders and their effect on culled-triangle throughput. Unless someone is going to assert that RTG's claims in the whitepaper are fabrications, one is forced to infer from the B3D Suite results that this feature is not activated in the drivers yet.

As for why RTG would throw RX Vega out the front door with the drivers in this state? Who knows. Personally, I would have delayed until the drivers were in better shape than this.
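
For context, a quick back-of-the-envelope on those figures: 3.75 culled triangles per clock sits right at the ceiling of the conventional four-geometry-engine pipeline, while the primitive-shader figure usually quoted from the whitepaper is over 17 primitives per clock. The sketch below just multiplies those rates by an assumed round 1.5 GHz clock; the clock and the 17/clk figure are illustrative, not measurements.

```python
# Back-of-the-envelope culled-triangle throughput. The 1.5 GHz clock is
# just an assumed round figure, and 17/clk is the peak usually quoted
# from the Vega whitepaper for the NGG fast path, not a measurement.
clock_hz = 1.5e9

rates_per_clk = {
    "measured (B3D Suite)":   3.75,   # figure quoted above
    "standard pipeline peak": 4.0,    # 4 geometry engines
    "claimed NGG fast path":  17.0,   # whitepaper peak claim
}

for label, rate in rates_per_clk.items():
    print(f"{label:>24}: {rate * clock_hz / 1e9:5.1f} Gtris/s")

# ~5.6 Gtris/s measured vs ~25.5 Gtris/s claimed: with primitive shaders
# active, culled throughput should sit well above the ~4/clk ceiling of
# the fixed-function path, which is why 3.75/clk reads as "not enabled".
```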

The number in the white paper could easily be a best case for code fully optimized to make maximum use of primitive shaders. The question is whether you can convert conventional vertex shaders to that level, and how much work it takes for each application.
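
To make the conversion question a bit more concrete: the usual framing is that a primitive shader sees a whole triangle before the fixed-function stages, so it can run a cheap position-only pass, discard back-facing and sub-pixel triangles early, and spend the full vertex-attribute work only on survivors. The Python sketch below is a CPU-side model of those two standard culling tests (signed area and a snapped bounding box), assuming pixel centres at half-integer coordinates; it is not AMD's shader code or API, just an illustration of the kind of work an automatic conversion would have to generate.

```python
# CPU-side model of the per-triangle culling a primitive shader could
# do before the expensive attribute shading. Inputs are screen-space
# (x, y) positions; this is an illustration, not AMD's implementation.
import math

def backfacing(p0, p1, p2):
    # Signed area <= 0 means back-facing (or degenerate) for a
    # counter-clockwise front-face convention.
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    return (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0) <= 0.0

def misses_all_pixel_centers(p0, p1, p2):
    # Small-primitive test: if the screen-space bounding box contains no
    # pixel centre (n + 0.5) on some axis, no fragment can be produced.
    def no_center(lo, hi):
        return math.ceil(lo - 0.5) > math.floor(hi - 0.5)
    xs, ys = zip(p0, p1, p2)
    return no_center(min(xs), max(xs)) or no_center(min(ys), max(ys))

def process_triangle(positions, shade_attributes):
    """Cheap position-only tests first; only surviving triangles pay for
    the full attribute work a conventional vertex shader always does."""
    p0, p1, p2 = positions
    if backfacing(p0, p1, p2) or misses_all_pixel_centers(p0, p1, p2):
        return None                      # discarded before rasterization
    return shade_attributes(p0, p1, p2)  # hypothetical callback

# Tiny usage example: a sub-pixel sliver gets discarded.
print(process_triangle([(10.1, 10.1), (10.3, 10.2), (10.2, 10.4)],
                       lambda *v: "shaded"))   # -> None
print(process_triangle([(0.0, 0.0), (8.0, 0.0), (0.0, 8.0)],
                       lambda *v: "shaded"))   # -> "shaded"
```

The split is where the per-application effort presumably comes in: the discard tests only need positions, so a driver-generated primitive shader would have to carve a position-only portion out of an existing vertex shader and defer the rest.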
 
Thanks for the Rys thing on Twitter. Well, if everything is working (primitive shaders, DSBR, etc.), then it's all the more alarming for performance compared to the two-year-old Fury X, and to the 1080. The absolute performance is OK, but given all the new tech and the raw power, it should be a lot higher... Yeah, I know, I say that every time. But I'm still baffled.
 
Quick note on primitive shaders from my end: I had a chat with AMD PR a bit ago to clear up the earlier confusion. Primitive shaders are definitely, absolutely, 100% not enabled in any current public drivers.

The manual developer API is not ready, and the automatic feature to have the driver invoke them on its own is not enabled.
 
Quick note on primitive shaders from my end: I had a chat with AMD PR a bit ago to clear up the earlier confusion. Primitive shaders are definitely, absolutely, 100% not enabled in any current public drivers.

The manual developer API is not ready, and the automatic feature to have the driver invoke them on its own is not enabled.
Thank you for posting on this matter. ;)
 
Quick note on primitive shaders from my end: I had a chat with AMD PR a bit ago to clear up the earlier confusion. Primitive shaders are definitely, absolutely, 100% not enabled in any current public drivers.

The manual developer API is not ready, and the automatic feature to have the driver invoke them on its own is not enabled.
Thank you for clearing that up.
 
So I understand that there is an API for developers to use them by hand, and a driver feature that automatically translates conventional vertex shaders into the new unified primitive shaders?

I lack the knowledge to guess how effective the automatic translation will be, but I guess Vega requires all vertex shaders to be translated/converted before the new geometry pipeline starts to work.
 
As far as I understand, there will be an "automatic mode" in the drivers that will specifically work to increase the primitive discard rate and thus speed up geometry processing. I can only guess that it is taking a lot of time to implement because of possible compatibility issues with existing software. Then there is the possibility of exposing the primitive shaders completely to developers, allowing other options (that means: new features/possibilities in game engines). But Rys confirmed in a tweet some time ago that that is not planned yet.
 
As far as I understand, there will be an "automatic mode" in the drivers that will specifically work to increase the primitive discard rate and thus speed up geometry processing. I can only guess that it is taking a lot of time to implement because of possible compatibility issues with existing software. Then there is the possibility of exposing the primitive shaders completely to developers, allowing other options (that means: new features/possibilities in game engines). But Rys confirmed in a tweet some time ago that that is not planned yet.
AMD is still trying to figure out how to expose the feature to developers in a sensible way. Even more so than DX12, I get the impression that it's very guru-y. One of AMD's engineers compared it to doing inline assembly. You have to be able to outsmart the driver (and the driver needs to be taking a less than highly efficient path) to gain anything from manual control.
 