Nvidia Pascal Reviews [1080XP, 1080ti, 1080, 1070ti, 1070, 1060, 1050, and 1030]

Has AMD ever had a 40% performance uplift from software alone over the life of a product?
Polaris had a 10-15% gain shortly after launch related to async. So no, nothing near 40% on Windows. That said, they have laid out paths that do just that. The Battlefield gains alone would be over 40% if AMD's savings numbers held. Gains will obviously vary, but getting 20-30% from that one feature on average isn't unreasonable.
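To show where a "near 40%" number can come from, here's a back-of-the-envelope sketch (my own illustration, not AMD's math) that treats a frame as purely bandwidth-bound and plugs in the roughly 30% bytes-per-frame saving AMD has quoted:

Code:
# Back-of-the-envelope only: assumes the frame is entirely limited by memory
# traffic, which real games never are, so this is an upper bound.
bytes_saved = 0.30                    # ~30% bytes-per-frame saving quoted by AMD
speedup = 1.0 / (1.0 - bytes_saved)   # same bandwidth now feeds ~1.43x the frames
print(f"upper-bound gain: {(speedup - 1.0) * 100:.0f}%")  # -> 43%

Anything shader- or geometry-bound won't scale like that, which is why 20-30% on average is the more defensible figure.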

On Linux you can see moves of that size monthly across various titles, and per-app optimizations, until recently, weren't allowed there. There are obvious bottlenecks where all hardware from a vendor runs into the same wall, which is not uncommon in early driver development, and that's a fair characterization of the current situation.
 
Pro benchmarks place Vega FE slightly behind pro GP104 in all but very CPU-limited scenarios. So now you have both gaming and pro at the level of the 1080.

You say 1080 but post numbers from P6000 ($5500+) and P5000 ($2000+). Where are the 1080 results?

If a 1080 now performs the same as a P6000, isn't that a great thing that was only done because of competition from Vega? I don't see how you are framing this as a bad thing.

AMD's own projections and statements don't treat DSBR as a game changer for big Vega; at most it matters for much smaller GPUs with limited resources. Furthermore, they already had it enabled in their comparisons and marketing slides, and despite that Vega is still around the 1080.

Quoting yourself isn't proof. We don't know if it was enabled or not, and we don't know the actual performance or power usage of RX Vega. The difference in minimums could be from clock speed alone. Most reviewers had clocks under 1600 MHz.

vega-undervolt-doom-4k.png


That shows it right between the 1080 Ti and 1080 when using higher clocks like we'll see from RX Vega. And they aren't using AA, which AMD handles more efficiently in Doom thanks to async compute.


The higher clocks also helped productivity:

vega-hybrid-spec-results.png


And bumped it past custom 1080s:

vega-hybrid-firestrike-normal.png


So considering it already performs very well, having TBR enabled + newer drivers in general will help a lot.

I also don't get why you are saying AMD says DSBR won't help or change much; that's not what they've said at all. They said that even when it's not needed it will save on power usage, as well as allowing a huge saving in memory bandwidth in some titles, and they even showed getting over a 30% reduction in BF4. Considering all of the games tested in the slide were very old, it makes no sense for them to have had it enabled but not tested a single game from 2017 for the slide. As for performance, they've basically said "no comment", which isn't downplaying it at all but rather withholding the information until they are ready to disclose it. If it was already enabled, why wouldn't they have shown off more performance details? They gave themselves another two weeks to get the drivers ready before cards are sold.

The end notes for the "Bytes per frame saved" slide show it was done with the Frontier Edition on 17.20 drivers, not RX Vega like you are stating. AMD has stated that TBR is disabled for Vega FE right now. Considering the current 17.7.2 drivers are 17.30, those are very old drivers.

https://www.techpowerup.com/reviews...cture_Technical_Overview/images/slides-49.jpg

And DSBR also helps a lot with pro work:

https://www.techpowerup.com/reviews...cture_Technical_Overview/images/slides-42.jpg
 
Considering all of the games tested in the slide were very old, it makes no sense for them to have had it enabled but not tested a single game from 2017 for the slide.
Oh, they have: the Bytes Saved Per Frame slide shows them testing several modern titles; you just have to read.
The end notes for the "Bytes per frame saved" slide show it was done with the Frontier Edition on 17.20 drivers, not RX Vega like you are stating
Because it's the one that doesn't have DSBR right now, but it does have it enabled in a new, unreleased driver from AMD. RX Vega has it enabled at this point.

Considering the current 17.7.2 drivers are 17.30, those are very old drivers
17.7.2 has the full build name 17.30.1041-170720a and doesn't support Vega. AMD uses driver 17.30.1041-170711n for RX Vega; it's an unreleased driver that only AMD has at the moment.

Here is the breakdown of everything:
Current Vega FE drivers are 17.6 (Radeon Vega Frontier Edition 17.6).
AMD had DSBR enabled for Vega FE in a 17.20 driver that is not yet publicly released, and tested it for the Bytes Saved Per Frame slide on multiple modern games.
AMD released public driver 17.7.2, which is 17.30.1041-170720a; it doesn't support Vega at all.
The WX9100 is tested in SPECviewperf using DSBR and driver 17.30.1041-170711n, which is an unreleased driver for Vega.
RX Vega is also tested with that same driver, 17.30.1041-170711n.

So you see, DSBR was enabled for Vega FE back in an experimental branch of 17.20; then 17.30 came along with DSBR too, and it was also tested on the WX9100 and of course RX Vega.
I also don't get why you are saying AMD says DSBR won't help or change much,
Ryan Smith
"From the tone of the conversations I had, while DSBR will improve things, everyone was quick to point out that the gains would be higher on a more resource-constrained card. Those aren't the kind of comments I'd expect if they thought performance would make a huge jump with DSBR."
DvHardware
"From what we've gathered, "the Radeon RX Vega performance figures AMD revealed yesterday include the expected gains of DSBR"
TomsHardware
"Don’t expect any miracles from the feature’s activation. After all, AMD is assuredly projecting performance with DSBR enabled. But a slide of presumably best-case scenarios shows bandwidth savings as high as 30%:

You say 1080 but post numbers from P6000 ($5500+) and P5000 ($2000+). Where are the 1080 results?
P5000 is an underclocked GTX 1080.
 
DSBR not being enabled right now is, as a general statement, false. It IS enabled for select workloads at least. Just look at the slide with the DSBR difference in SPECviewperf 12.1 energy-01 and the corresponding footnotes.

Vega FE already IS at that performance level in energy-01 (18.xx-ish and higher instead of 9-ish).
 
Oh, they have: the Bytes Saved Per Frame slide shows them testing several modern titles; you just have to read.

AvP - 2010
BF4 - 2013
Bioshock Infinite - 2013
Crysis 3 - 2013
Dirt Rally - 2015
Fallout 4 - 2015
Overwatch - 2016
ROTTR - 2015/ Jan 2016 (PC)
UE4 Elemental Demo - 2014
Unigine - 2013?

Maybe I can't read... or maybe you can't, but none of those are 2017 games, barely even 2016 games. The newest is a year and a half old. Hardly what I'd call "modern titles" when they average 3-4 years old.

17.7.2 has the full build name 17.30.1041-170720a and doesn't support Vega. AMD uses driver 17.30.1041-170711n for RX Vega; it's an unreleased driver that only AMD has at the moment.

You are bolding the wrong part. I'm talking about 17.20.xyz, which was used for that slide, and you are looking at 17.30.xyz.

AMD released public driver 17.7.2, which is 17.30.1041-170720a; it doesn't support Vega at all


I never said those supported Vega. I said the current non-Vega branch was already higher (17.30 vs 17.20 from the Vega FE drivers: 17.20.1035-170624a-315622C-CrimsonReLive).

"From what we've gathered, "the Radeon RX Vega performance figures AMD revealed yesterday include the expected gains of DSBR"

"Don’t expect any miracles from the feature’s activation. After all, AMD is assuredly projecting performance with DSBR enabled. But a slide of presumably best-case scenarios shows bandwidth savings as high as 30%:

Sorry, but that's not a fact; that's them guessing. And AMD doesn't have any performance metrics up for anything other than minimum frametimes. If they had any testing on games from 2017, I'm sure they would have shown them on the slide.

AMD can't include faked performance in their slides; they can only include what they've actually achieved. And none of the games listed in their tests show up in the DSBR testing slide, which makes me think they haven't been able to use it for those games.

Hard to say if it is disabled for other apps, or just not showing much of an effect.

AMD's said it's currently not enabled in the FE drivers.

We also asked AMD’s architects, including Mike Mantor, about whether DSBR was actually disabled in Vega: Frontier Edition or whether it was just a rumor. The architects loosely confirmed that tile-based rasterization was in fact disabled for Frontier Edition’s launch, which we think mostly aligns with statements about pushing the card out in time, and noted that DSBR will be enabled on both Vega FE and RX Vega on launch of RX Vega. We asked about expected performance or power consumption improvements, but were not given any specifics at this time.
 
Sorry, but that's not a fact; that's them guessing.
No guessing, just educated conversations with AMD.
Maybe I can't read... or maybe you can't, but none of those are 2017 games, barely even 2016 games.

Because DSBR is not a magic bullet that works in every game; it has its own limitations, as is evident from AMD's conversations with AnandTech.
You are bolding the wrong part. I'm talking about 17.20.xyz, which was used for that slide, and you are looking at 17.30.xyz.
The 17.20.xxx for the slides is an unreleased Vega FE driver that supports DSBR, and so does the 17.30.xxx in the slides.
 
We already know packed math could do 25-30% of that. Add in a rather pessimistic 10% in game optimizations over the course of a year and they're roughly equal. It's not a stretch to say that currently known improvements will do just what I've suggested.
Gains from packed math seem to be not as high as your predictions, as well as not being consistent, and they involve lowering graphics quality. The FM Serra July 26 demo averaged 50 fps (fp16) vs 44 fps (fp32).
In our Vega technology overview we talked a bit about AMD having the ability (with Vega) to have several render targets in a scene to render at lower precision (fp16 versus fp32). AMD calls this feature fp16 Rapid Packed Math (floating point 16). According to AMD this will increase performance.

Rapid Packed Math basically halves the floating point calculations for a data request, resulting in a faster turnaround time for that request/data, however with less precision and thus lower quality in some form. Considering that lowering image quality in specific segments will pretty much always create more FPS, we'll just have to wait and see how the feature evolves. Basically, half precision would be applied in segments where full precision really isn't needed. AMD has not revealed anything specific as to what and where exactly the feature can be used.
http://www.guru3d.com/news-story/ra...d-in-fm-serrawolfenstein-2-and-far-cry-5.html
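For what it's worth, here's a quick check of the uplift those demo numbers imply (a sketch using only the figures quoted above; nothing here comes from AMD):

Code:
# Arithmetic on the FM Serra demo figures quoted above, taken at face value.
fps_fp16 = 50.0   # average fps with Rapid Packed Math (fp16)
fps_fp32 = 44.0   # average fps at full precision (fp32)
uplift = fps_fp16 / fps_fp32 - 1.0
print(f"packed-math uplift: {uplift * 100:.1f}%")  # ~13.6%

So even taking the demo at face value, the measured uplift is roughly half of the 25-30% assumed earlier in the thread.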
 
Would someone please tell me why the discussion of how much AMD has been sandbagging on RX Vega is so important in the Pascal Reviews thread?

The only off-topic tidbit I'll add to this is: DSBR and the power-saving features are the two things AMD sandbagged on RX Vega and revealed last Sunday. It could stop there, or there could be more still to come by August 14.
We don't even know the full effect of the power saving techniques being applied. I've seen reports saying advertised boost clocks are now sustained clocks in RX Vega because of the power-saving features being enabled, for example.
So all this conjecture is a bit useless until reviews are out. Especially in this thread.
 
DSBR and the power-saving features are the two things AMD sandbagged on RX Vega and revealed last Sunday. It could stop there, or there could be more still to come by August 14.
Sandbagging? People need to stop hyping everything AMD does or says; maybe then they could have a more realistic perspective of what AMD is capable of instead of myths like a Titan killer and unicorn drivers. The company is very careful regarding its DSBR promises, which suggests the feature is not going to amount to much. AMD has also specified its target already. When a company tells you its GPU is trading blows with the competitor, you don't hype it over the moon and claim it's going to be the second coming of Jesus.
 
Oh please... it was one post to clarify the MX150 vs. Intel IGP comparison.
The driver talk has taken over two pages.


Sandbagging? People need to stop hyping everything AMD does or says
Who hyped what? All I wrote was that we still don't know exactly what the final performance of RX Vega will be, so the fixation on exactly how much enabling DSBR contributes to performance makes little sense when there are other factors at play.

No one suggested the RX Vega was going to beat the 1080 Ti. Vega 64 is priced against the 1080. What remains to be seen is how much RX Vega with its release driver manages to increase performance relative to Vega FE in current drivers, but no one is suggesting it'll compete with a different product.
 
Acer Swift 3 SF314-52 review – budget 14-inch laptop with Nvidia MX150 graphics
Mobile version of the desktop GeForce GT 1030.
All in all, the MX150 is about twice the performance in games of the regular GT 940MX with GDDR3 memory and a big step-up from the DDR5 version as well. So if you care about gaming on a thin-and-light laptop or a budget ultraportable, this is absolutely the option to get starting with the second half of 2017
http://www.ultrabookreview.com/16888-acer-swift-3-sf314-52-review/
 
EVGA GeForce GTX 1080 Ti FTW3 ELITE – Now Available With 12GHz Memory
The EVGA GeForce GTX 1080Ti FTW3 ELITE cards are now available with 12GHz of GDDR5 memory, giving them 528 GB/s of memory bandwidth! This is a 9% increase in total memory bandwidth. These cards are available with either the ELITE Black or White shroud, and of course come with EVGA’s exclusive iCX technology, giving you 9 thermal sensors, onboard thermal LED indicators and incredible cooling with quiet operation.
http://www.babeltechreviews.com/evga-geforce-gtx-1080-ti-ftw3-elite-now-available-12ghz-memory/
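The quoted figures check out against the 1080 Ti's 352-bit bus (a quick sanity check of my own, assuming the reference 11 Gbps memory as the baseline):

Code:
# Memory bandwidth (GB/s) = bus width in bits / 8 * effective data rate in Gbps.
bus_bits = 352                     # GTX 1080 Ti memory bus width
stock = bus_bits / 8 * 11.0        # reference card: 484 GB/s
elite = bus_bits / 8 * 12.0        # FTW3 ELITE "12GHz" memory: 528 GB/s
print(f"{elite:.0f} GB/s, +{(elite / stock - 1.0) * 100:.1f}% over stock")  # 528 GB/s, +9.1%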
 