No DX12 Software is Suitable for Benchmarking *spawn*

have to suffer the abysmal DX11/OpenGL performance of their hardware

I do not know; I have had both an AMD and an Nvidia system up and running (Intel-based at the moment, though). When I sit down and play I am hard pressed to see the difference between them; they are close enough that I do not even give it a thought. Though I will say I lost interest in benchmarks in the early 2000s. I do not think you are lying; the benchmarks agree with you. Actually going from one to the other, though, it's not as big a deal as the benchmarks seem to say. I guess people like to pick their feudal lords to back.

Then again, I am thinking about picking up a second-hand Fury X... only a fool such as I.
 
Right now, and on many occasions, low-level APIs are just an excuse for AMD to shift the blame for optimizing onto developers rather than doing it themselves. Worse yet, this comes after game launch, suffers delays, gets incomplete "beta" support, and worst of all, in many cases (Ashes, RoTR, Warhammer) can't even beat the performance of their competitor's high-level API, which they are supposed to maintain and optimize for.
The new APIs aren't exactly low level, they're lower level, and they're not an excuse for anything or anyone. Many developers asked for APIs like DX12 and Vulkan because there are advantages to the application having control over memory, syncs, etc. Even though developers have experience with similar APIs on consoles there's still a learning curve. And if you think AMD has the market power to push Microsoft and the other Khronos companies around you're mistaken. We have these new APIs because multiple hardware and software companies agree they're a good idea.
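To make the "control over memory, syncs, etc." point concrete, here is a minimal C++ sketch of the explicit synchronization these APIs expose, using the Vulkan C API. The device, queue and command buffer handles are assumed to have been created elsewhere; this is an illustration, not any particular engine's code.

```cpp
#include <vulkan/vulkan.h>
#include <cstdint>

// Minimal sketch: under Vulkan the application, not the driver, decides
// when the CPU waits on the GPU. A DX11 driver hides all of this.
// Assumes device/queue/cmd were created during initialization;
// error handling is elided for brevity.
void submit_and_wait(VkDevice device, VkQueue queue, VkCommandBuffer cmd)
{
    VkFenceCreateInfo fenceInfo = {};
    fenceInfo.sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO;

    VkFence fence = VK_NULL_HANDLE;
    vkCreateFence(device, &fenceInfo, nullptr, &fence);

    VkSubmitInfo submit = {};
    submit.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    submit.commandBufferCount = 1;
    submit.pCommandBuffers = &cmd;

    // The application chooses the submission point and the wait point;
    // batching or skipping these is where the CPU savings come from.
    vkQueueSubmit(queue, 1, &submit, fence);
    vkWaitForFences(device, 1, &fence, VK_TRUE, UINT64_MAX);
    vkDestroyFence(device, fence, nullptr);
}
```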
 
The new APIs aren't exactly low level, they're lower level, and they're not an excuse for anything or anyone. Many developers asked for APIs like DX12 and Vulkan because there are advantages to the application having control over memory, syncs, etc. Even though developers have experience with similar APIs on consoles there's still a learning curve. And if you think AMD has the market power to push Microsoft and the other Khronos companies around you're mistaken. We have these new APIs because multiple hardware and software companies agree they're a good idea.

That is a good point, and one aspect of GPUOpen that AMD presents, because this also needs to consider custom shader extensions that could be argued to be low level. How much complexity and how many headaches this adds for developers if both AMD and Nvidia push for it in games will be interesting, especially from a QA and optimisation perspective when it is integral to the game-rendering engine.
Cheers
 
I agree shader intrinsics are low level, but they are not fundamental to the new APIs. What makes them more practical for AMD is that console developers already use these instructions. Gameworks was a brilliant response to the console effect favoring AMD, because it allowed Nvidia to implement low-level optimizations and push them into games.
 
I agree shader intrinsics are low level, but they are not fundamental to the new APIs. What makes them more practical for AMD is that console developers already use these instructions. Gameworks was a brilliant response to the console effect favoring AMD, because it allowed Nvidia to implement low-level optimizations and push them into games.
I guess it depends upon interpretation, as the APIs support this, and for some games it is integral to the engine, such as we see with AMD and Vulkan in Doom, or Nvidia with OpenGL in Doom.
Gameworks IMO is not necessarily core low-level optimisation in the same sense but rather bolt-on effects - in the way the majority of developers seem to implement it, anyway - just like AMD's TressFX/PureHair, although I appreciate there can be an overlap with core game-rendering engine design and some of these visual solutions.
Context for me is low-level optimisation, but yeah, the discussion is broader than that and also depends upon one's perspective.
Cheers
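For illustration, here is a hedged C++ sketch of how an engine can detect vendor shader-intrinsic support at runtime. VK_AMD_gcn_shader is a real Vulkan extension name; the variant-selection comment at the bottom is hypothetical, not Doom's actual code.

```cpp
#include <vulkan/vulkan.h>
#include <cstring>
#include <vector>

// Vendor shader intrinsics reach the application as device extensions,
// so an engine can probe for them and pick a shader variant accordingly.
bool has_extension(VkPhysicalDevice gpu, const char* wanted)
{
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, nullptr);
    std::vector<VkExtensionProperties> exts(count);
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, exts.data());
    for (const auto& e : exts)
        if (std::strcmp(e.extensionName, wanted) == 0)
            return true;
    return false;
}

// Hypothetical usage: select between a generic shader and one built
// against GCN intrinsics.
// if (has_extension(gpu, "VK_AMD_gcn_shader")) use_intrinsic_variant();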
 
We have these new APIs because multiple hardware and software companies agree they're a good idea.
Exactly, some developers, not all of them; in fact, many others have approached this subject cautiously, mentioning the cost of development and the burden of maintenance as barriers. So far, results have been mixed at best, and in many cases they have leaned to the very bad side. So if current trends continue, these "lower level APIs" will go down the drain for sure. Other developers would be hard pressed to follow suit in a redundant and failed endeavor. DX10 comes to mind.

And no, not all "relevant" hardware vendors wanted lower-level APIs, and the only relevant three when it comes to gaming are Intel, NVIDIA and AMD. The first two were concentrating more on pushing the visual front, with the likes of Order Independent Transparency and increasing scene complexity through more draw calls (NV's FF DX12 demo comes to mind), and so they got behind the APIs that would allow such things to become standard. Which is nothing unusual in itself. Every vendor will support every API; it's not rocket science. You can barely support something till it dies (OpenGL, OpenCL, DX10, etc.), or you can get behind it to make it truly universal (DX9, DX11). Big difference. And so their support for the new APIs lies in the promise of graphics advancements. But if you are going to use such APIs to increase the performance of graphically light games (such as half of DX12 titles) - through vendor gouging - to barely keep up with the performance levels of old APIs, then you are not going to get far with them. This will be the quickest way to bury such APIs. And nobody will care.

Suddenly all the shiny new promises for DX12 are gone and are replaced with Async, which is, I repeat, nothing but an excuse for AMD to shift the burden of optimization off their shoulders. Such is the current trend. If someone thinks big PC vendors will rally behind DX12/whatever because it offers Async, they are severely mistaken.
 
Exactly, some developers, not all of them; in fact, many others have approached this subject cautiously, mentioning the cost of development and the burden of maintenance as barriers. So far, results have been mixed at best, and in many cases they have leaned to the very bad side. So if current trends continue, these "lower level APIs" will go down the drain for sure. Other developers would be hard pressed to follow suit in a redundant and failed endeavor. DX10 comes to mind.
If it were truly that bad we wouldn't have all these developers getting on board in the first place. I doubt it's a fluke that the low-level APIs have seen the fastest uptake of any graphics API in recent years. There are some markets, such as mobile, where the benefits are significant. Writing off the upside because some IHVs currently lack support seems short-sighted.

And no, not all "relevant" hardware vendors wanted lower-level APIs, and the only relevant three when it comes to gaming are Intel, NVIDIA and AMD. The first two were concentrating more on pushing the visual front, with the likes of Order Independent Transparency and increasing scene complexity through more draw calls (NV's FF DX12 demo comes to mind), and so they got behind the APIs that would allow such things to become standard. Which is nothing unusual in itself. Every vendor will support every API; it's not rocket science. You can barely support something till it dies (OpenGL, OpenCL, DX10, etc.), or you can get behind it to make it truly universal (DX9, DX11). Big difference. And so their support for the new APIs lies in the promise of graphics advancements. But if you are going to use such APIs to increase the performance of graphically light games (such as half of DX12 titles) - through vendor gouging - to barely keep up with the performance levels of old APIs, then you are not going to get far with them. This will be the quickest way to bury such APIs. And nobody will care.
The problem with this argument is that the low level APIs are held back in some cases by reliance on high level APIs still being supported. Take away that requirement and the improvements will come. It would seem to be an argument between better looking shadows in the background and an order of magnitude more characters in the scene. I haven't seen many details on high priority compute with high level APIs and I'm sure there is a reason for that.
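The "order of magnitude more characters" claim rests largely on parallel command recording. Here is a hedged C++/D3D12 sketch of the pattern, assuming a device and queue already exist and eliding the actual draw recording; the function name is mine, not a real API. Each worker thread fills its own command list, which D3D11's single immediate context does not allow.

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
using Microsoft::WRL::ComPtr;

// Sketch of multithreaded draw submission under D3D12: N threads each
// record into their own allocator + command list, and the queue consumes
// them all in one ExecuteCommandLists call. Error handling elided.
void record_in_parallel(ID3D12Device* device, ID3D12CommandQueue* queue,
                        unsigned numThreads)
{
    std::vector<ComPtr<ID3D12CommandAllocator>> allocs(numThreads);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(numThreads);
    std::vector<std::thread> workers;

    for (unsigned i = 0; i < numThreads; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocs[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocs[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
        workers.emplace_back([&, i] {
            // ... record this thread's share of the scene's draws ...
            lists[i]->Close();
        });
    }
    for (auto& w : workers) w.join();

    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```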

Suddenly all the shiny new promises for DX12 are gone and are replaced with Async, which is, I repeat, nothing but an excuse for AMD to shift the burden of optimization off their shoulders. Such is the current trend. If someone thinks big PC vendors will rally behind DX12/whatever because it offers Async, they are severely mistaken.
I'd still argue async is somewhat fundamental to what the low level APIs are doing. Fully async behavior is just the result of multiple threads running with zero synchronization. Not doing work is inherently faster and simpler than doing work. The biggest advantages of the low level APIs in my mind are the high priority queues. Being able to accelerate game logic and other subsystems independently of the rendering thread. That's a feature we've yet to see demoed, but appears to be coming. Increasing the complexity and realism of an environment is a huge leap IMHO. No more limiting characters on screen because they would crush your framerate with all their independent animations and decision making.
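On the high-priority-queue point, D3D12 does expose this directly. A minimal sketch, assuming an existing device; note the priority value is a scheduling hint to the OS/driver, not a guarantee:

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Create a dedicated compute queue with elevated scheduling priority so
// compute work (game logic, physics) need not queue behind the render
// thread's submissions. Error handling elided.
ComPtr<ID3D12CommandQueue> make_high_priority_compute_queue(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;        // async compute queue
    desc.Priority = D3D12_COMMAND_QUEUE_PRIORITY_HIGH;  // scheduling hint

    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue;
}
```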
 
There are some markets, such as mobile, where the benefits are significant. Writing off the upside because some IHVs currently lack support seems short-sighted.
Of course. The problem is, gaming is not related to mobile; gaming depends on gameplay and graphics innovations, and developers will not invest time and resources in something that doesn't advance either of them. Even if they do so in the short term, they will not in the long term. Look how many studios experimented with DX10 and then abandoned it for DX9. The other problem is the focus on certain features that are truly redundant in the grand scheme of things. Almost all modern hardware supports DX12, but did we really get anything out of it other than the mildly selective fps boost of some ugly/average-looking game?

It would seem to be an argument between better looking shadows in the background and an order of magnitude more characters in the scene. I haven't seen many details on high priority compute with high level APIs and I'm sure there is a reason for that.
Except that in DX12 we have neither; so far, at least, games run barely faster in DX12 than in DX11, without looking or behaving any differently.


Increasing the complexity and realism of an environment is a huge leap IMHO. No more limiting characters on screen because they would crush your framerate with all their independent animations and decision making.
I'll drink to that.
 
Except that in DX12 we have neither; so far, at least, games run barely faster in DX12 than in DX11, without looking or behaving any differently.

Which shouldn't be a surprise considering pretty much every game currently in existence was designed for Dx11/Dx9/OGL. There isn't a single game currently out that was designed for Dx12 or Vulkan (Doom and AOTS come the closest, but still have a Dx11/OGL base). This is especially true for art and art assets, which are the primary influence on how a game will look. Lighting designed to work on Dx11/Dx9/OGL isn't going to look different when a Dx12/Vulkan rendering path is added. However, in the future, it'll be possible to implement features that take greater advantage of Dx12/Vulkan and are then ported to Dx11/Dx9/OGL.

The fact that so many developers are attempting to implement some parts of Dx12/Vulkan in engines designed for Dx11/Dx9/OGL shows that there is huge interest in the development community for the new APIs.

Think of it another way. Dx11 and Dx10 games didn't look or perform much differently from Dx9 games until the PS4/XBO came out. Why? Because up until that point virtually every game created had a Dx9-ish foundation due to the PS3/X360 and thus was primarily designed with Dx9 in mind.

That same influence means there is likely to be far greater support and far faster adoption for Dx12/Vulkan than there was for Dx11 or Dx10 as the consoles already basically support all/most features of Dx12/Vulkan. Unlike the situation with Dx10/11. However, that doesn't change the fact that Dx12/Vulkan weren't available on PC until current games were already well into development.

Regards,
SB
 
Exactly, some developers, not all of them; in fact, many others have approached this subject cautiously, mentioning the cost of development and the burden of maintenance as barriers.
While I disagree with much of your opinion about the new APIs I'm not going to keep debating the issue. We'll just have to wait and see how it pans out over time.

I quoted the comment above to note even AMD agrees with this comment and it doesn't reflect negatively on the new APIs. When AMD pitched Mantle there was no expectation that Mantle would replace a DX11 style API. The expectation was the big engine developers want more control and performance and their licensees would use Mantle via the engines. Many console developers would appreciate the API as well.
 
But if you are going to use such APIs to increase the performance of graphically light games (such as half of DX12 titles) - through vendor gouging - to barely keep up with the performance levels of old APIs, then you are not going to get far with them. This will be the quickest way to bury such APIs. And nobody will care.
I have yet to see a game where a low-level API would provide an inferior experience to a high-level one. Plus, not many people have CPUs as powerful as those most hardware review sites are using. Only Warhammer screwed up the CPU load somehow - but that might be why they call it beta.
 
Are two RX 480s faster than a single GTX 1080? No.
While my second RX 480 had to be sent back to the publication I borrowed it from before I could conduct more tests (thanks, guys!), the folks over at TechPowerUp also managed to pull together some Crossfire benchmarks. It found that while the RX 480 fared well in certain games, on average it was much slower than a GTX 1080 and just slightly slower than a GTX 1070. Given that the 1070 costs roughly the same as a pair of 4GB RX 480s, buying them outright isn't a particularly good idea.
....
Ultimately, the lesson is this: always take company claims with a pinch of salt, and if you are looking to promote a product, you can't go wrong with a bit of blood and guts.
http://arstechnica.co.uk/gadgets/2016/07/amd-rx-480-crossfire-vs-nvidia-gtx-1080-ashes/
 
Think of it another way. Dx11 and Dx10 games didn't look or perform much differently from Dx9 games until the PS4/XBO came out. Why? Because up until that point virtually every game created had a Dx9-ish foundation due to the PS3/X360 and thus was primarily designed with Dx9 in mind.
That is not true. Early DX11 games (such as Battleforge and Bad Company 2) offered a performance boost for all hardware, and other games that came after them offered tessellation, better shadows and lighting, better MSAA performance, better AO, reflections and post-processing. These elements helped set PC ports apart from consoles even during the X360/PS3 era (look at Crysis 2 and Far Cry 3 and the difference between DX9 and DX11 there, just to name a few).

With DX12 we have nothing, nil, nada. The majority of GPUs receive an fps degradation from it for no apparent reason. The rest benefit slightly, and in several cases the benefits are only there because of lackluster DX11 performance in the first place. You know you are facing a farce when games run so much better in DX11 that there is no need whatsoever to switch to DX12. Sometimes DX11 looks better too (VXAO in Tomb Raider only works in DX11)! We have reached an unprecedented divide in the market, where the majority of hardware is better left running DX11, which sometimes offers an even better experience than hardware running DX12. This never happened before (well, except with DX10).

An outside observer could theorize that it's because DX12 has been "whored" into operating within restricted realms of optimization, while ignoring others that provide better IQ, game simulation or deeper levels of optimization (draw calls, memory management, etc.). With current trends, that outside observer would be right.

the consoles already basically support all/most features of Dx12/Vulkan.
I think we have yet to rid ourselves of the confusion and misconceptions that run rampant on the internet, as well as contradictory statements. The first of these is that consoles support all DX12 features (not true, by the way), which would mean they run DX12 by default. Then it is said that multi-platform games are designed for DX11, which, by that logic, is false as well, because in reality they are designed for consoles.

The truth is, consoles support some features of the full DX12 set, and those that are supported are at low tiers as well. Current console games are designed around those features from the get-go. PCs, on the other hand, are vastly different (even with GCN); their hardware scales differently (various iterations of GCN, different CPUs, RAM, OSes, etc.), and they have different limitations as well (e.g., no unified memory pool). Workloads that suit consoles don't perfectly suit PCs or scale as well as they do on consoles. Some workloads do, but the majority don't, as evidenced by the time it takes to implement a DX12 path back into multi-platform games. Tomb Raider runs vastly better with DX11 even on NV, and so does Hitman. Ashes was designed with DX12 in mind, same result (though this is a PC exclusive).
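On the "tiers" point: D3D12 really does report binding and tiled-resource support in tiers that an application can query. A minimal C++ sketch, assuming an existing device; the function name is mine:

```cpp
#include <d3d12.h>

// Query which D3D12 feature tiers the GPU actually supports. Engines
// branch on these values rather than on a blanket "supports DX12".
void report_tiers(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    if (SUCCEEDED(device->CheckFeatureSupport(
            D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts)))) {
        // e.g. D3D12_RESOURCE_BINDING_TIER_1/2/3 and
        //      D3D12_TILED_RESOURCES_TIER_NOT_SUPPORTED .. TIER_3
        D3D12_RESOURCE_BINDING_TIER binding = opts.ResourceBindingTier;
        D3D12_TILED_RESOURCES_TIER  tiled   = opts.TiledResourcesTier;
        (void)binding; (void)tiled; // a real engine would branch on these
    }
}
```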

We already have DX12-only games: Forza, Gears and Quantum Break. They were designed mainly for consoles and DX12-capable GPUs, and with the exception of Forza (a light game by nature), they run like crap and don't look that much different compared to DX11. In fact, if we factor current trends into the mix, we could say they are a performance hog for a reason, and DX11 could have made them run faster. One of them actually ran like crap on AMD at first despite the shared architecture.

We have another huge example of this: Intel CPUs beat AMD CPUs even with lower-level APIs, despite the shared architecture with consoles and the advantages it supposedly brings.

None of which means that just because consoles are GCN, DX12 works better on GCN by default and with ease. That is simply not true. DX12 works better on certain hardware because that's the path of optimization the developer sought for their game, and it can very well be the reverse of that situation if the developer wants. E.g., Warhammer, where NV takes a massive nosedive with DX12 despite stellar DX11 performance; clearly the developer was content with NV performance in DX11 and thought there was no more reason to optimize their DX12 path any further. Same with Ashes, Hitman and even DOOM.

So much has been said about Async, yet for most titles the difference between enabling it and disabling it usually amounts to less than 15%, and most of the time it is way less than that. Which means we are seeing the effect of optimization bias.

Take that bias away and you get something like the Time Spy benchmark, where every chip from every vendor falls into its rightful place.

Only Warhammer screwed up the CPU load somehow - but that might be why they call it beta.
No, Tomb Raider as well, and also Hitman and Ashes for NV.

Plus, not many people have CPUs as powerful as those most hardware review sites are using.
While it may be true that Doom can run on a two-core CPU under Vulkan, the truth is that once you go quad-core, OpenGL will deliver better fps; only with very high-end CPUs does Vulkan surpass OpenGL (at least for AMD).
 
Take that bias away and you get something like the Time Spy benchmark, where every chip from every vendor falls into its rightful place.

At this point I'm going to follow 3dcgi's example as your bias is extremely strong and there's no getting through it. Futuremark benchmarks have never been representative of how graphics cards perform in actual games.

BTW - early Dx11 implementations (Dx11 bolted on, similar to the current state of Dx12) also quite often featured severe performance hits with little to no noticeable visual improvement, especially if you didn't have a top-of-the-line CPU. Considering DICE (creators of BF:BC2) are moving quickly to take advantage of Dx12, it'll certainly be interesting to see how their Dx12 implementation fares.

Regards,
SB
 
At this point I'm going to follow 3dcgi's example as your bias is extremely strong and there's no getting through it.
I am sorry you feel that way, but I would rather you refrain from calling people biased just because they engage in a plausible argument with opinions opposite to yours. Calling people biased and making unsupported, unreferenced claims is easy. Providing a counter-argument is not.

Futuremark benchmarks have never been representative of how graphics cards perform in actual games.
I never said that; however, it tells you to a fairly accurate degree how video cards will behave relative to each other.

BTW - early Dx11 implementations (Dx11 bolted on, similar to the current state of Dx12) also quite often featured severe performance hits with little to no noticeable visual improvement, especially if you didn't have a top-of-the-line CPU.

As far as I remember, that didn't happen. Please provide an example to back up your statement. Early DX11 games were Bad Company 2, Battleforge, Dirt 2 and STALKER: CoP, and they all had visual or fps improvements over the DX10/DX9 renderers.
 
I have yet to see a game where a low-level API would provide an inferior experience to a high-level one. Plus, not many people have CPUs as powerful as those most hardware review sites are using. Only Warhammer screwed up the CPU load somehow - but that might be why they call it beta.
You willing to count Doom on Nvidia hardware? It feels a bit laggier, stutters intermittently due to a DXGI sync issue, and thus doesn't really run faster than OpenGL.

edit:
My colleague's article is now live: http://www.pcgameshardware.de/Doom-2016-Spiel-56369/Specials/Vulkan-Benchmarks-Frametimes-1202711/


The truth is, consoles support some features of the full DX12 set, and those that are supported are at low tiers as well.
Isn't that basically true of all hardware? One or another feature is not supported, and of the supported ones, one or more are at tiers below the maximum.
 