> As to performance: Since I do not have a board at hand, I won't make general statements/assumptions (and that's also why I wrote "impact" and not "performance" in the R300 analogy). I am quite confident though that Vega will be able to perform at least on par with a 1080 Ti in certain scenarios (and no, I do not mean canned benchmarks where essentially the level of intentional driver-crippling is shown, as in ViewPerf).

Yeah, this is my feeling as well. And the comment of Rys only strengthens that.
> Well, it's a good bit faster than Fury X, but given the clocks it's not impressive. Feels like no efficiency gain, so it's probably not the real potential of it?

From my archived results, I have a stock Fury X scoring 16,550-ish in Firestrike Graphics, so that would be a VERY PRELIMINARY 37% improvement for Vega FE. It will get better, I'm certain.
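For reference, a rough back-of-the-envelope on what that preliminary figure implies; the 16,550 baseline and the ~37% uplift are taken from the post above, the rest is just arithmetic:

```python
# Back-of-the-envelope: what a ~37% uplift over a ~16,550 Fury X
# Firestrike Graphics score would imply for Vega FE.
fury_x_graphics = 16_550   # stock Fury X graphics score quoted above
uplift = 0.37              # the "VERY PRELIMINARY" improvement quoted above

vega_fe_graphics = fury_x_graphics * (1 + uplift)
print(f"Implied Vega FE Firestrike Graphics score: ~{vega_fe_graphics:,.0f}")
# -> Implied Vega FE Firestrike Graphics score: ~22,674
```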
That benchmark might not be representative... still, this launch is really strange.
> But wasn't it so that the new primitive accelerator, binning rasterizer etc. need application support? Maybe the driver is largely done, but what has not yet come out is patches to applications to really take advantage of Vega?

Some features, primitive shaders for instance, can benefit from application support. The rest should be transparent and require compiler and driver work to properly implement, features that are apparently tied to the RX release. Current results are likely bad because most of these features aren't fully implemented or enabled yet.
There are so many points during development where people could put a stop to a bad design. It's hard to believe that something could slip through that's not a significant improvement compared to the previous generation.
From those AMD footnotes that surfaced yesterday ("this crashes, that crashes, ...") it seems to me that the drivers are still problematic.
The biggest question mark, IMO, could be only 2 HBM2 stacks. If they didn't improve on memory compression to match or at least come close to Nvidia, that could be a serious issue.
I would like Beyond3D to be one place where speculation on performance isn't based on a single Firestrike screenshot under uncontrolled and mostly unknown setup. Oh and any WCCFcrap clickbait links kindly kept away from here as well if possible. Even posting the links here in disgust feeds them.
> The biggest question mark, IMO, could be only 2 HBM2 stacks. If they didn't improve on memory compression to match or at least come close to Nvidia, that could be a serious issue.

HBM, with the lower prefetch, should be acting as a form of compression, giving 2n/8n/16n for HBM/G5/G5X respectively. Color compression already existed, and the L2-based ROPs should allow that, but again require driver work as a form of programmable blending. It only benefits graphics, so lower-hanging fruit probably exists.
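A rough sketch of the prefetch point; only the 2n/8n/16n prefetch lengths come from the post above, the channel widths (128-bit HBM channels, 32-bit GDDR5/GDDR5X channels) are my assumptions for illustration:

```python
# Minimum access granularity = channel width (bytes) * prefetch length.
# Channel widths are assumed for illustration: 128-bit HBM channels,
# 32-bit GDDR5/GDDR5X channels; prefetch lengths as quoted above.
memories = {
    "HBM/HBM2": (128, 2),   # (channel width in bits, prefetch)
    "GDDR5":    (32, 8),
    "GDDR5X":   (32, 16),
}

for name, (width_bits, prefetch) in memories.items():
    granularity = width_bits // 8 * prefetch
    print(f"{name:8}: {granularity} bytes per access")
```

Under those assumptions the granularity works out to 32/32/64 bytes, so the finer-access argument would mainly apply against GDDR5X, since the narrower GDDR5 channels largely offset their longer prefetch.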
> There are so many points during development where people could put a stop to a bad design. It's hard to believe that something could slip through that's not a significant improvement compared to the previous generation.

(Bulldozer)
> HBM, with the lower prefetch, should be acting as a form of compression, giving 2n/8n/16n for HBM/G5/G5X respectively. Color compression already existed, and the L2-based ROPs should allow that, but again require driver work as a form of programmable blending. It only benefits graphics, so lower-hanging fruit probably exists.

With halved bus width but almost double the clocks, access patterns should be favorable for some workloads as well. Wasn't the L2 capacity increased as well?
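For reference, a rough bandwidth comparison under commonly cited figures; treat the per-pin data rates (about 1.0 Gbps HBM1 on Fiji, about 1.89 Gbps HBM2 on Vega FE) as assumptions rather than anything stated in this thread:

```python
# Peak DRAM bandwidth = bus width (bits) / 8 * per-pin data rate (Gbps).
# Data rates are assumed/commonly cited figures, not from the thread:
# ~1.0 Gbps HBM1 on Fiji, ~1.89 Gbps HBM2 on Vega FE.
def peak_bandwidth_gb_s(bus_width_bits: int, gbps_per_pin: float) -> float:
    return bus_width_bits / 8 * gbps_per_pin

print(f"Fiji / Fury X: {peak_bandwidth_gb_s(4096, 1.0):.0f} GB/s")   # 512 GB/s
print(f"Vega FE:       {peak_bandwidth_gb_s(2048, 1.89):.0f} GB/s")  # ~484 GB/s
```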
> Why should I care about Futuremark scores for a workstation GPU?

Isn't it great to live in a free country, where everyone can choose which poison to pick?
> HBM, with the lower prefetch, should be acting as a form of compression, giving 2n/8n/16n for HBM/G5/G5X respectively.

I'm lost...
> Color compression already existed, and the L2-based ROPs should allow that, but again require driver work as a form of programmable blending. It only benefits graphics, so lower-hanging fruit probably exists.

Color compression is not a simple checkmark that you either have or don't have. Pascal's color compression is better than Maxwell's, which is better than Kepler's.
> I would like Beyond3D to be one place where speculation on performance isn't based on a single Firestrike screenshot under uncontrolled and mostly unknown setup. Oh and any WCCFcrap clickbait links kindly kept away from here as well if possible. Even posting the links here in disgust feeds them.

This could have been easily avoided had AMD briefed the press and given them cards.
> Why should I care about Futuremark scores for a workstation GPU?

Well, that's the problem: it's exactly the same card that is going to be sold to consumers, just with less memory.
> This could have been easily avoided had AMD briefed the press and given them cards.

Funny how there wasn't such chaos and grief when NVIDIA didn't send the Titan Xp to the press (yes, of course a new architecture is a different thing, but still, no one even shrugged at that).
> The guy that got the FE mentioned throttling (core clock all over the place). Firestrike isn't that demanding in terms of power draw (the non-Extreme/Ultra variant), so it is either hitting a low power limit or thermal instability. I'm not sure why the GPU would downclock itself otherwise.

Just stating the obvious again, and I'm sure you know this as well: bottlenecks can shift quite a bit between architectures. For example, removing or alleviating the stalls from high tessellation factors can lead to much better shader utilization and thus to increased power draw. That said, removal of bottlenecks should give a decent performance boost as well. OTOH, +37% fits pretty well with a 1.38 GHz base clock. Hopefully it's just that.
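A quick clock-ratio sanity check on that last point; the clock figures (1050 MHz Fury X, 1382 MHz base / 1600 MHz peak for Vega FE) are assumptions on my part, not from this thread:

```python
# Compare pure clock scaling from Fury X against the ~37% Firestrike delta
# discussed earlier. Clock figures are assumed for illustration.
fury_x_mhz = 1050
vega_fe_base_mhz = 1382
vega_fe_peak_mhz = 1600

base_scaling = vega_fe_base_mhz / fury_x_mhz - 1   # ~0.32
peak_scaling = vega_fe_peak_mhz / fury_x_mhz - 1   # ~0.52
print(f"Base-clock scaling: +{base_scaling:.0%}")  # +32%
print(f"Peak-clock scaling: +{peak_scaling:.0%}")  # +52%
```

Under those assumptions a +37% result would sit between pure base-clock and pure peak-clock scaling, which would fit the throttling observation quoted above.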
> Funny how there wasn't such chaos and grief when NVIDIA didn't send the Titan Xp to the press (yes, of course a new architecture is a different thing, but still, no one even shrugged at that).

I (for one) did more than shrug.