AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

But wasn't it the case that the new primitive shaders, binning rasterizer, etc. need application support? Maybe the driver is largely done, but what hasn't come out yet are the application patches needed to really take advantage of Vega?
 
As to performance: since I do not have a board at hand, I won't make general statements or assumptions (and that's also why I wrote "impact" and not "performance" in the R300 analogy). I am quite confident, though, that Vega will be able to perform at least on par with the 1080 Ti in certain scenarios (and no, I do not mean canned benchmarks where what is essentially being shown is the level of intentional driver crippling, as in ViewPerf).
Yeah, this is my feeling as well. And Rys's comment only strengthens that.

There are so many points during development where people could put a stop to a bad design. It's hard to believe that something could slip through that isn't a significant improvement over the previous generation.

From those AMD footnotes that surfaced yesterday ("this crashes, that crashes, ...") it seems to me that the drivers are still problematic.

The biggest question mark, IMO, could be having only 2 HBM2 stacks. If they didn't improve memory compression to match, or at least come close to, Nvidia's, that could be a serious issue.
 
Well... This looks beyond bad. Either it's just immature drivers, or Vega really is only going for the AI/server/pro market. Let's hope the RX cards deliver at least somewhere in between Nvidia's 1080s.
 
Well, it's a good bit faster than a Fury X, but given the clocks it's not impressive; it feels like there is no efficiency gain, so it's probably not showing its real potential?

That benchmark might not be representative... Still, this launch is really strange.
From my archived results, I have a stock Fury X scoring 16,550-ish in Firestrike Graphics, so that would be a VERY PRELIMINARY 37% improvement for Vega FE. It will get better, I'm certain.
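Just to make that back-of-the-envelope math explicit (a minimal sketch using only the numbers quoted above; the implied Vega FE score is arithmetic, not a measured result):

```python
# Rough Fire Strike Graphics comparison from the figures quoted in the post above.
fury_x_graphics = 16550           # archived stock Fury X Graphics score ("16,550-ish")
improvement = 0.37                # the "VERY PRELIMINARY 37% improvement"

implied_vega_fe = fury_x_graphics * (1 + improvement)
print(f"Implied Vega FE Graphics score: ~{implied_vega_fe:,.0f}")  # ~22,674
```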
 
But wasn't it the case that the new primitive shaders, binning rasterizer, etc. need application support? Maybe the driver is largely done, but what hasn't come out yet are the application patches needed to really take advantage of Vega?
Some features, like primitive shaders, can benefit from application support. The rest should be transparent, requiring compiler and driver work to implement properly, work that is apparently tied to the RX release. The current results are likely bad because most of these features aren't fully implemented or enabled yet.
 
I really need a Vega for my FreeSync 4K display, so I will probably buy one either way... but anyhow... I was expecting a bit more. One good thing, though: with these numbers, it should be easy to get hold of one ;)
 
I would like Beyond3D to be one place where speculation on performance isn't based on a single Firestrike screenshot from an uncontrolled and mostly unknown setup. Oh, and WCCFcrap clickbait links should kindly be kept away from here as well, if possible. Even posting the links here in disgust feeds them.
 
Yeah, this is my feeling as well. And Rys's comment only strengthens that.

There are so many points during development where people could put a stop to a bad design. It's hard to believe that something could slip through that isn't a significant improvement over the previous generation.

From those AMD footnotes that surfaced yesterday ("this crashes, that crashes, ...") it seems to me that the drivers are still problematic.

The biggest question mark, IMO, could be having only 2 HBM2 stacks. If they didn't improve memory compression to match, or at least come close to, Nvidia's, that could be a serious issue.

But that is typical AMD. The card is late, the drivers are not ready, and to make things worse they did not release it to competent media sites that could give support and put the numbers into perspective. They release it onto the market with immature drivers and questionable performance, and will let this go on for over a month until the official reviews hit.

And a significant improvement is relative. It is a huge improvement if you use FP16 and Int8; if you use FP32, the numbers more or less point to a Fury with higher clocks.
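To put rough numbers on that, here is a hedged sketch of peak theoretical ALU throughput. Assumptions: 64 CUs with 64 lanes each on both chips, the commonly cited 1050 MHz Fury X clock, the advertised 1600 MHz Vega FE boost clock, and doubled packed-FP16 rate on Vega (Fiji runs FP16 at the FP32 rate):

```python
# Peak theoretical ALU throughput sketch (clocks and CU counts assumed as stated above).
def tflops(cus, lanes_per_cu, flops_per_lane_per_clock, clock_ghz):
    """Peak TFLOPS = CUs * SIMD lanes * FLOPs per lane per clock * clock in GHz / 1000."""
    return cus * lanes_per_cu * flops_per_lane_per_clock * clock_ghz / 1000.0

fury_x_fp32  = tflops(64, 64, 2, 1.05)   # ~8.6 TFLOPS
vega_fe_fp32 = tflops(64, 64, 2, 1.60)   # ~13.1 TFLOPS
vega_fe_fp16 = vega_fe_fp32 * 2          # packed math doubles FP16 throughput, ~26.2 TFLOPS

print(fury_x_fp32, vega_fe_fp32, vega_fe_fp16)
```

FP32 peak scales roughly with clock alone, which is why the FP32 numbers look like "a Fury with higher clocks", while FP16/Int8 workloads can see the doubled rate.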
 
I would like Beyond3D to be one place where speculation on performance isn't based on a single Firestrike screenshot from an uncontrolled and mostly unknown setup. Oh, and WCCFcrap clickbait links should kindly be kept away from here as well, if possible. Even posting the links here in disgust feeds them.

Hear, hear.
 
The biggest question mark, IMO, could be having only 2 HBM2 stacks. If they didn't improve memory compression to match, or at least come close to, Nvidia's, that could be a serious issue.
HBM, with its lower prefetch, should be acting as a form of compression, giving 2n/8n/16n for HBM/G5/G5X respectively. Color compression already existed, and the L2-based ROPs should allow for that, but again it requires driver work as a form of programmable blending. That only benefits graphics, so lower-hanging fruit probably exists.
 
HBM, with its lower prefetch, should be acting as a form of compression, giving 2n/8n/16n for HBM/G5/G5X respectively. Color compression already existed, and the L2-based ROPs should allow for that, but again it requires driver work as a form of programmable blending. That only benefits graphics, so lower-hanging fruit probably exists.
With a halved bus width but almost double the clocks, access patterns should be favorable for some workloads as well. Wasn't the L2 capacity increased too?
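As a hedged illustration of that width-versus-clock trade (assuming Fiji's 4096-bit HBM bus at 1 Gbps per pin and Vega FE's 2048-bit HBM2 bus at roughly 1.89 Gbps per pin; the pin rates are assumptions, not measurements):

```python
# Rough peak bandwidth comparison; per-pin data rates are assumed, not measured.
def bandwidth_gbs(bus_width_bits, gbps_per_pin):
    """Peak bandwidth in GB/s = bus width in bits * per-pin rate in Gbps / 8."""
    return bus_width_bits * gbps_per_pin / 8.0

fiji_hbm  = bandwidth_gbs(4096, 1.00)   # ~512 GB/s over a wide, slow bus
vega_hbm2 = bandwidth_gbs(2048, 1.89)   # ~484 GB/s over a narrower, faster bus

print(fiji_hbm, vega_hbm2)
```

Similar headline bandwidth, but the narrower, faster bus spreads accesses across the channels differently, which is presumably what the comment above is getting at.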

Why should I care about Futuremark scores for a workstation GPU?
Isn't it great to live in a free country, where everyone can choose which poison to pick?
 
HBM, with its lower prefetch, should be acting as a form of compression, giving 2n/8n/16n for HBM/G5/G5X respectively.
I'm lost...

None of those has anything to do with BW reduction.
AFAIK, the prefetch size is 32 bytes for both HBM and GDDR5 (and 64 bytes for GDDR5X?). It's just that you trade clock rate for bus width.
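A quick sanity check of those access-granularity numbers, taking the prefetch lengths and channel widths at face value (a sketch, not a definitive spec reading):

```python
# Minimum burst per access = prefetch length (n) * channel width, in bytes.
def burst_bytes(prefetch_n, channel_width_bits):
    return prefetch_n * channel_width_bits // 8

print("HBM   :", burst_bytes(2, 128), "bytes")   # 2n prefetch * 128-bit channel = 32 B
print("GDDR5 :", burst_bytes(8, 32), "bytes")    # 8n prefetch * 32-bit channel  = 32 B
print("GDDR5X:", burst_bytes(16, 32), "bytes")   # 16n prefetch * 32-bit channel = 64 B
```

So the 2n/8n/16n figures describe prefetch length per channel, but the resulting access granularity ends up at 32/32/64 bytes, which is the point being made above.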

Color compression already existed, and the L2-based ROPs should allow for that, but again it requires driver work as a form of programmable blending. That only benefits graphics, so lower-hanging fruit probably exists.
Color compression is not a simple checkbox that you either have or don't have. Pascal's color compression is better than Maxwell's, which is better than Kepler's.
 
I would like Beyond3D to be one place where speculation on performance isn't based on a single Firestrike screenshot from an uncontrolled and mostly unknown setup. Oh, and WCCFcrap clickbait links should kindly be kept away from here as well, if possible. Even posting the links here in disgust feeds them.
This could have been easily avoided had AMD briefed the press and given them cards.
 
The guy who got the FE mentioned throttling (core clock all over the place). Firestrike (the non-Extreme/Ultra variant) isn't that demanding in terms of power draw, so it is either hitting a low power limit or thermal instability. I'm not sure why the GPU would downclock itself otherwise.
 
This could have been easily avoided had AMD briefed the press and given them cards.
Funny how there wasn't such chaos and grief when NVIDIA didn't send the Titan Xp to the press (yes, of course a new architecture is a different thing, but still, no one even shrugged at that).
 
The guy who got the FE mentioned throttling (core clock all over the place). Firestrike (the non-Extreme/Ultra variant) isn't that demanding in terms of power draw, so it is either hitting a low power limit or thermal instability. I'm not sure why the GPU would downclock itself otherwise.
Just stating the obvious again, and I'm sure you know this as well: bottlenecks can shift quite a bit between architectures. For example, removing or alleviating the stalls from high tessellation factors can lead to much better shader utilization and thus to increased power draw. That said, removing bottlenecks should give a decent performance boost as well. OTOH, +37% fits pretty well with a 1.38 GHz base clock. Hopefully it's just that.
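For context, a hedged clock-scaling sketch (assuming the commonly cited 1050 MHz Fury X engine clock against the 1382 MHz Vega FE base clock; real-world scaling is rarely this linear):

```python
# How much of the ~37% Fire Strike gain could pure clock scaling explain?
fury_x_clock_mhz = 1050      # assumed stock Fury X engine clock
vega_fe_base_mhz = 1382      # the "1.38 GHz base clock" mentioned above

clock_scaling = vega_fe_base_mhz / fury_x_clock_mhz - 1
print(f"Clock-only scaling: ~{clock_scaling:.0%}")   # ~32%, in the same ballpark as +37%
```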

Funny how there wasn't such chaos and grief when NVIDIA didn't send the Titan Xp to the press (yes, of course a new architecture is a different thing, but still, no one even shrugged at that).
I (for one) did more than shrug.
 