AMD Vega Hardware Reviews

11% faster in FS Extreme and Ultra against an aftermarket 1080. Timespy isn't quite as friendly.
It will be interesting to see if these are the highest Vega clocks attainable. I think we know the kind of scores overclocked aftermarket 1080s can achieve.
 
11% faster in FS Extreme and Ultra against an aftermarket 1080.
An aftermarket card that is essentially not overclocked: its base clock is only 70 MHz faster than the FE's and yields only about 30 MHz more boost speed, while also sporting 10 Gbps memory instead of the 11 Gbps found on the newer 1080 cards.
Timespy isn't quite as friendly
Nor is the standard FireStrike score. I don't see how ignoring these two to focus on the Ultra and Extreme variants is any better.
 
AMD Moves Vega 56 Embargo Forward, Asks Reviewers to Prioritize Over 64

RX Vega 56 has been abruptly pushed forward to an August 14 review embargo date. Initial plans had the card positioned for a September launch.

AMD noted that RX Vega 56 cards have been shipped to reviewers, along with a request that reviewers specifically “prioritize coverage” of RX Vega 56 over RX Vega 64 under time-constrained conditions. This clearly indicates AMD’s faith in RX Vega 64 and 56, demonstrating that 56 should more reasonably compete with nVidia at the ~$400 price-point, while 64 will undoubtedly be more fiercely embattled at $500-$600.

AMD has timed RX Vega strategically so that it launches following Threadripper, where most reviewers have had attention focused for the past week. Cards have been received over the past day or so, leaving little time for deep testing. RX Vega 56 cards are to arrive by the weekend.

Although we were told we wouldn’t be sampled, following our defiance of AMD’s decision to express favoritism by permitting only select reviewers to publish early Threadripper data, we will still have Vega content ready on embargo lift dates. We’ve sourced information and parts elsewhere. Turns out that talking about thermal compound efficacy on an IHS is disallowed, despite the fact that TR’s thermal performance was already published by outlets expressly permitted to break embargo.

http://www.gamersnexus.net/news-pc/3016-amd-moves-vega-56-embargo-forward-prioritizes-over-64
 
Something I have always found disturbing is when nVidia and AMD release drivers that are optimized for certain games and you get performance improvements. What has actually happened?
Is it shader replacements à la the old benchmark cheating? Is it driver-level lowering of certain parameters, ostensibly transparent to the end user?

Whatever it is, it only concerns those particular titles. Title-specific driver code is not necessarily transferable between architectures. How much effort does AMD have to spend not only on getting the drivers to work optimally for the general case, but also on tweaking specifically for the most common benchmarking applications in order to produce competitive scores in reviews?
 
More benchmarks from reddit, AotS this time, with some Firestrike scores and the 56 version showing up.

What 20% faster? Going by TimeSpy scores the liquid-cooled version is like 3% faster than the 1080 FE, and 6% by Firestrike numbers.

Don't quote out of place; follow the conversation.

An aftermarket card that is essentially not overclocked: its base clock is only 70 MHz faster than the FE's and yields only about 30 MHz more boost speed, while also sporting 10 Gbps memory instead of the 11 Gbps found on the newer 1080 cards.

Nor is the standard FireStrike score. I don't see how ignoring these two to focus on the Ultra and Extreme variants is any better.

Overclocked cards mean little with Pascal; it's the cooling and power that make the difference. I added the difference from the TPU review. As for ignoring the standard Firestrike score, I was making a point regarding what Vega's performance should have been. The 1080p score is useless for this card anyway. It's interesting that the difference was bigger at 1440p than at 4K, which might hint at bandwidth issues.
 
An aftermarket card that is essentially not overclocked: its base clock is only 70 MHz faster than the FE's and yields only about 30 MHz more boost speed, while also sporting 10 Gbps memory instead of the 11 Gbps found on the newer 1080 cards.

Nor is the standard FireStrike score. I don't see how ignoring these two to focus on the Ultra and Extreme variants is any better.
The standard Firestrike score is very similar to the rest. It's the water-cooled Vega that's performing like I said, and there's none of that in standard Firestrike.

Timespy is the only one where the water-cooled Vega is slower than the 1080.
 
Do we know if any games come with the 56, or is Wolfenstein just with the 64 cards?
I don't think they're giving away games with cards, just with the bundles. I could be wrong, but that's what I thought. I don't know anything about a Vega56 bundle, but I could easily see them doing one.
 
Something I have always found disturbing is when nVidia and AMD release drivers that are optimized for certain games and you get performance improvements. What has actually happened?
Is it shader replacements à la the old benchmark cheating? Is it driver-level lowering of certain parameters, ostensibly transparent to the end user?

Whatever it is, it only concerns those particular titles. Title-specific driver code is not necessarily transferable between architectures. How much effort does AMD have to spend not only on getting the drivers to work optimally for the general case, but also on tweaking specifically for the most common benchmarking applications in order to produce competitive scores in reviews?
Games often do things that aren't good for hardware and drivers fix these things. Sometimes they're shader optimizations, sometimes the hardware has a configuration that works well in some cases but not all the time, other times they may be an inefficient use of some aspect of the API, etc. In some cases the drivers can do things the game developer is unable to do via the API. There are so many possibilities.

I'm not a driver developer, but I'll give a realistic example. Say a dispatch is writing to a buffer and a draw is reading from the buffer. By default the driver will likely insert a flush between the dispatch and draw so the dispatch finishes before the draw starts. But what if the dispatch is writing different memory addresses from what the draw reads? In this case the driver can remove the flush, thus increasing performance.
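To make that concrete, here's a minimal, purely illustrative sketch of the decision being described. This is not real driver code and not any vendor's API; all names and ranges are made up. The idea is simply: flush conservatively by default, but skip the flush when the dispatch's writes and the draw's reads can be proven not to overlap.

```cpp
#include <cstdint>
#include <iostream>

// Hypothetical byte range within a GPU buffer (illustrative only).
struct BufferRange {
    uint64_t offset; // byte offset into the buffer
    uint64_t size;   // number of bytes touched
};

// True if the range written by the dispatch overlaps the range read by the draw.
bool rangesOverlap(const BufferRange& write, const BufferRange& read) {
    return write.offset < read.offset + read.size &&
           read.offset < write.offset + write.size;
}

// The conservative default is to flush between dispatch and draw; the
// "optimization" is proving the flush unnecessary because they touch
// disjoint memory.
bool needsFlushBetweenDispatchAndDraw(const BufferRange& dispatchWrites,
                                      const BufferRange& drawReads) {
    return rangesOverlap(dispatchWrites, drawReads);
}

int main() {
    BufferRange dispatchWrites{0, 4096};   // dispatch writes bytes [0, 4096)
    BufferRange drawReads{8192, 4096};     // draw reads bytes [8192, 12288)

    std::cout << (needsFlushBetweenDispatchAndDraw(dispatchWrites, drawReads)
                      ? "insert flush between dispatch and draw\n"
                      : "flush can be skipped: disjoint ranges\n");
}
```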

Also, driver optimizations don't always concern a particular title. If the optimization is very specific this may happen, but if the optimization seems like a good idea and doesn't hurt performance of other apps it may be enabled by default so future apps can take advantage of the optimization.

Don't think shader replacements or optimizations mean cheating. If the final result is the same optimizations are just good engineering. Sure, it's possible to cheat but this has never been the common case.
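As a made-up sketch of what a non-cheating shader replacement could look like mechanically (this assumes the driver keys replacements on a hash of the shader, and none of these names correspond to any real driver or API), the compiler path might recognise a shader it has already analysed and swap in a hand-tuned build that produces identical results:

```cpp
#include <cstdint>
#include <iostream>
#include <string>
#include <unordered_map>

using ShaderHash = uint64_t;

struct CompiledShader {
    std::string isa; // stand-in for the GPU machine code
};

// Hypothetical table of pre-optimised replacements shipped with the driver.
static const std::unordered_map<ShaderHash, CompiledShader> kTunedShaders = {
    {0x9e3779b97f4a7c15ull, {"hand-tuned ISA for a known game shader"}},
};

CompiledShader compileShader(ShaderHash hash, const std::string& source) {
    // If this exact shader is recognised, use the tuned build; the output
    // must be identical, only scheduling/register use differs.
    if (auto it = kTunedShaders.find(hash); it != kTunedShaders.end()) {
        return it->second;
    }
    // Otherwise fall back to the generic compiler path.
    return CompiledShader{"generically compiled ISA for: " + source};
}

int main() {
    std::cout << compileShader(0x9e3779b97f4a7c15ull, "known shader").isa << "\n";
    std::cout << compileShader(0x1234, "some new shader").isa << "\n";
}
```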

Driver optimizations also don't mean lowering certain parameters. This could happen, but once again it's not the common case and in some cases driver control panels give an option to change from the optimized behavior so users can skip this type of optimization if they want to see things exactly as intended by the application developer.
 
Games often do things that aren't good for hardware and drivers fix these things. Sometimes they're shader optimizations, sometimes the hardware has a configuration that works well in some cases but not all the time, other times they may be an inefficient use of some aspect of the API, etc. In some cases the drivers can do things the game developer is unable to do via the API. There are so many possibilities.

I'm not a driver developer, but I'll give a realistic example. Say a dispatch is writing to a buffer and a draw is reading from the buffer. By default the driver will likely insert a flush between the dispatch and draw so the dispatch finishes before the draw starts. But what if the dispatch is writing different memory addresses from what the draw reads? In this case the driver can remove the flush, thus increasing performance.

Also, driver optimizations don't always concern a particular title. If the optimization is very specific this may happen, but if the optimization seems like a good idea and doesn't hurt performance of other apps it may be enabled by default so future apps can take advantage of the optimization.

Don't think shader replacements or optimizations mean cheating. If the final result is the same optimizations are just good engineering. Sure, it's possible to cheat but this has never been the common case.

Driver optimizations also don't mean lowering certain parameters. This could happen, but once again it's not the common case and in some cases driver control panels give an option to change from the optimized behavior so users can skip this type of optimization if they want to see things exactly as intended by the application developer.
Thanks.
This was in line with my expectations.
What it does mean (apart from creating obvious issues with benchmarking) is that there is a fair amount of app-specific code in drivers, and that this code may not, on a case-by-case basis, be transferable to a new architecture.
So if you come up with something new, not only do you have to produce error-free, high-performance drivers for the common APIs, but for competitive reasons you also have to go through whatever tweaks and fixes you have implemented for, say, the hundred most common benchmark apps and games, check whether they are still valid, and ideally see if you can do better with the toolchest the new hardware offers.

Sounds like a fair bit of work.
Also, as someone who has been involved in benchmarking since before the inauguration of SPEC, it seems to me as if these practices make valid comparative benchmarking very difficult.
 
Also, as someone who has been involved in benchmarking since before the inauguration of SPEC, it seems to me as if these practices make valid comparative benchmarking very difficult.
I agree. That's why you need two things IMO:
a) a large variety of games and engines to test with.
b) avoid canned benchmarks and built-in functions as hard as you can.

Where a) is needed because single data points are pretty much invalid - they're very prone to cherry-picking in one direction or the other (does anyone really believe that benchmarks just "leak" and no vendor can do anything about it? If so, why is it mostly AotS and 3DMark, with their auto-upload features?) or to plain unintentional bias - b) addresses something worse. I've seen cases for both vendors where built-in benchmarks or purpose-built benchmark versions tell a sometimes very different story than playing the actual game, and not just in a single scene but in several. Hell, at some point even using a cracked .exe for a game invalidated some optimizations and perf was lower.
 