AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

A UAV atomic is not a dependency in that sense, as only the execution is guaranteed, not the order (which disables any HSR that may have happened otherwise). As said before, it could only constitute a false dependency that is caught erroneously.
It's a real dependency. The judgement call is on whether it should be missed or caught conservatively.
One of the unknowns is how intelligently the batching method can make that distinction. The test meets the literal meaning of the patent claims, since we see the output of pixels changing based on what was processed before.
Special treatment was mentioned for transparencies and other cases that could interrupt a batch, but I did not see a hint either way on acceptable access types.
It seems plausible a solution could flag a read after write for a UAV as being exempt, but there's no statement either way.
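To make the distinction concrete, here's a minimal sketch using C++ threads and std::atomic as a stand-in for pixel shader invocations hitting a UAV; the names and setup are invented for illustration, but the ordering semantics are the same as a D3D UAV atomic (the operation is guaranteed, its order across invocations is not):

```cpp
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

int main()
{
    const int kPixels = 8;                 // threads stand in for pixel shader invocations
    std::atomic<unsigned> counterA{0};     // "UAV" touched by case A
    std::atomic<unsigned> counterB{0};     // "UAV" touched by case B
    std::vector<unsigned> out(kPixels);    // per-pixel "shaded" output

    std::vector<std::thread> pixels;
    for (int p = 0; p < kPixels; ++p) {
        pixels.emplace_back([&, p] {
            // Case A: the returned pre-add value is discarded. The final
            // total is deterministic, so ordering between pixels is
            // invisible and a binner reordering them changes nothing.
            counterA.fetch_add(1);

            // Case B: the returned pre-add value feeds the pixel's output.
            // Each pixel's result now depends on how many others ran before
            // it -- the causal, order-visible effect the test detects.
            out[p] = counterB.fetch_add(1);
        });
    }
    for (auto &t : pixels) t.join();

    std::printf("counterA = %u (always %d)\n", counterA.load(), kPixels);
    for (int p = 0; p < kPixels; ++p)
        std::printf("pixel %d drew ticket %u (varies run to run)\n", p, out[p]);
}
```

Case A carries no ordering requirement at all; case B is exactly the kind of order-visible behavior that shows up in the test output, which is why a conservative batcher might treat any UAV atomic as a dependency.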
 
I'm starting to think the quote from reddit is right, and they're indeed running "Fiji-drivers" without any of the new whizbang. GamerNexus's fresh Doom benchmarks show pretty much spot-on the same performance as Vega had late last year with actual Fiji drivers
That would make sebbbi very sad... ;-) Even if it were possible to run Vega with Fiji drivers.
 
But how is it possible that drivers aren't ready after such a long time?
If Vega FE still uses Fiji drivers after a year of delays then I have no confidence that drivers will be ready for RX Vega's launch. They would probably still suck for months after RX Vega is launched.

Something something FineWine :p
 
GamerNexus's fresh Doom benchmarks show pretty much spot-on the same performance as Vega had late last year with actual Fiji drivers
Their Sniper Elite 4 numbers are actually lower than what AMD showed for Vega FE at its Analyst Day. Granted, AMD only demoed the first area, which is very light on hardware resources, and the demo was suspiciously quick, but still. Looking at these numbers, the demo didn't run Ultra settings.

[Attached image: vega-fe-sniper-4k.png]

It turns to vinegar if it takes too long. Where did "FineWine" come from as a description of AMD's drivers?
The Kepler era: Kepler cards fell behind their AMD counterparts because of their VRAM deficiencies and the heavy use of compute in modern games.
 
The Kepler era: Kepler cards fell behind their AMD counterparts because of their VRAM deficiencies and the heavy use of compute in modern games.
Not just the Kepler era; it holds true for all of GCN at least, if not even earlier. Hardware-wise it seemed to start with the R580 and its whopping 48 pixel shaders, pretty much useless in the games of its era, but late in life it started to gain ground on the competition (and in what could best be described as its afterlife, with games way too modern for it, it absolutely demolished the competition of its time).
 
But how is it possible that drivers aren't ready after such a long time?
If Vega FE still uses Fiji drivers after a year of delays then I have no confidence that drivers will be ready for RX Vega's launch. They would probably still suck for months after RX Vega is launched.
The drivers may be ready but not released to the public. When gaming Vega launches, the updated driver, with its modifications, can be rolled into the shipping driver package.

I am hoping for better performance but I am not sure we will get any. However, it would be odd to end up with what amounts to a faster Fiji that is also much larger, even with a die shrink.
 
But how is it possible that drivers aren't ready after such a long time?
If Vega FE still uses Fiji drivers after a year of delays then I have no confidence that drivers will be ready for RX Vega's launch. They would probably still suck for months after RX Vega is launched.
How was it delayed a year? Polaris released last year and Vega drivers are maybe a week late now?
Then consider that top-tier everything will take some work, and they may have refocused the software/compiler guys on Zen. SM6, Vulkan, etc. likely take some resources, despite not being currently available. It's no secret they allocated extra resources towards the CPU, and there would seem to be a lot of work. That's not to say they aren't close to finished, just not quite ready. Getting FE out a month in advance, so games can actually be patched/tested against Vega, may not hurt either.
 
I'm not an expert in GPUs, but even I know you can't use a driver for one architecture to run another architecture. Unless every slide shown by AMD is false, Vega is a complete redesign, so it would need a completely new driver at the low level, which is the actually important part of the driver.

Whether the driver is crap, or AMD managed to achieve the very complex and hard task of completely redesigning a chip only to end up with the same performance at equal clocks, will be shown next month.

But the real problem with Vega is not its performance, it's its Fermi-like TDP.
 
It's a real dependency.
No, it's not. At least not between different pixels or triangles (it's called an unordered access view for a reason). Only within a pixel, which doesn't matter for the question of whether binning is possible or not.
One of the unknowns is how intelligently the batching method can make that distinction.
If some heuristic can't make the correct call in that relatively simple situation, it would be useless.
 
No, it's not. At least not between different pixels or triangles (it's called an unordered access view for a reason). Only within a pixel, which doesn't matter for the question of whether binning is possible or not.
It's unordered at the level of the API, whereas there is a causal dependence that is empirically detected in this test. Per an earlier claim, if the processing of a primitive has an effect on the final shading of a later one, the later primitive is not appropriate for the current batch.
Perhaps this can be flagged as unnecessary, or the embodiment is not reflective of what Vega does. It's undefined in terms of publicly available information.
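For illustration, here's a rough sketch of what an overly conservative batch-break rule matching that patent language might look like. Everything here is hypothetical (invented types and names, my reading of the claims), not anything AMD has published about Vega's actual binning hardware:

```cpp
#include <cstdint>
#include <unordered_set>
#include <vector>

// A primitive's resource accesses, as the batcher might see them.
struct Primitive {
    std::unordered_set<uint32_t> reads;   // resource IDs read (UAV loads/atomics included)
    std::unordered_set<uint32_t> writes;  // resource IDs written (UAV stores/atomics included)
};

struct Batcher {
    std::unordered_set<uint32_t> batchWrites;  // everything written by the open batch
    std::vector<Primitive> batch;

    void submit(const Primitive &prim) {
        // Read-after-write against the open batch: the new primitive could
        // observe what an earlier one produced, so conservatively close the
        // batch before accepting it.
        for (uint32_t r : prim.reads) {
            if (batchWrites.count(r)) { flush(); break; }
        }
        batch.push_back(prim);
        batchWrites.insert(prim.writes.begin(), prim.writes.end());
    }

    void flush() {
        // Rasterize/shade the accumulated batch, then start a new one.
        batch.clear();
        batchWrites.clear();
    }
};
```

Under a rule like this, a UAV atomic counts as both a read and a write, so every primitive after the first one touching that UAV breaks the batch; that's the "false dependency" outcome. The read-after-write exemption suggested earlier would amount to skipping the check for accesses known to be unordered UAV atomics.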

If some heuristic can't make the correct call in that relatively simple situation, it would be useless.
The current situation is ambiguous in that regard. It's seemingly not in use. I'll allow that it should be more flexible to avoid an overly conservative or twitchy algorithm.
What level of the system makes the determination, be it the batching hardware, microcode, or the driver, could affect how readily this could be done.
The analysis outside of this test would be increasingly complicated, which might give a reason for a highly conservative or disabled option in the early days.
 
I'm starting to think the quote from reddit is right, and they're indeed running "Fiji-drivers" without any of the new whizbang. GamerNexus's fresh Doom benchmarks show pretty much spot-on the same performance as Vega had late last year with actual Fiji drivers

Didn't the quote from Reddit start right here with Rys's post, and then go through an internet version of a broken telephone:
older -> "legacy!" -> "Fury drivers!!!"
 
Didn't the quote from Reddit start right here with Rys's post, and then go through an internet version of a broken telephone:
older -> "legacy!" -> "Fury drivers!!!"
Possibly; I'm not sure where it started, but it did actually claim to have gone through the driver paths with some tool. And it really is the only explanation I can come up with that could possibly explain the fact that Doom performance over half a year ago with Fiji drivers is pretty much dead on with current Doom performance.

edit: and I don't mean the rumor about January or some similarly old drivers
 
Well, it depends on whether you consider matching a 14-month-old Nvidia card in terms of performance to be "a year late".
That reasoning ended the second we knew it had a superior DX12 feature set. And without proper drivers, we still don't even know the performance needed to justify that claim.
 
Nope, Pascal/Maxwell didn't suffer this, Fermi didn't suffer this, and Tesla and G80 didn't either. Only Kepler.
At launch, the HD 7970 was some 10% faster than the GTX 580. Fast forward a couple of years to the GTX 780 Ti launch and the HD 7970 beats the GTX 580 by 20% already (1920x1080/1200, TPU).
At the GTX 980's launch, it beat the R9 290X by around 23%. Fast forward to the GTX 1080 launch and the GTX 980 beats the R9 290X by only around 14% (1920x1080, TPU).
So it seems to be happening to Fermi and Maxwell too compared to GCN. Obviously the game selections change over time as well, but that's part of the "FineWine". For Pascal, I seriously have no doubt it will happen again, probably sometime after Volta gets out in the consumer space.
 
How is Vega late by a year? AMD never said or even suggested it would come in 2016, let alone summer 2016.

I'd say it's fair to call it late by a year when you have gone an entire generation of GPUs without any answer to your competitor's Enthusiast/High end lineup.
 
I'd say it's fair to call it late by a year when you have gone an entire generation of GPUs without any answer to your competitor's Enthusiast/High end lineup.
Should have used "delayed", not "late", like the original claim did. You could say they're late performance-wise (how late depending on where RX Vega lands when it gets out), but the claim was that it's been delayed a year, which simply isn't true in any sense.
 