No DX12 Software is Suitable for Benchmarking *spawn*

Yeah, async can be forced on in Ashes, but it cannot be forcibly enabled for Maxwell in Gears 4: if you have a Maxwell card, the game removes the option from the menu entirely, or greys it out. So if a site says async was on during testing, that means it was on for GCN and Pascal only; Maxwell and Kepler are automatically off, stay off, and cannot be changed.

Gears 4 appears to dictate other settings outside the user's control as well, not just async. In Techspot's testing, for example, 6GB cards delivered higher image quality than 3GB ones, and 4GB cards likewise beat 2GB cards, even though ALL of them were running the Ultra preset. Higher-memory cards get better shadows, AO and texture detail.

What's even more interesting is that the 1060 3GB offered higher IQ than the RX 460 4GB, again despite both using the Ultra preset, so some factor other than VRAM capacity is at play here.

Here we see that the 3GB 1060 provides much more detail over the 2GB RX 460, just as the 4GB RX 460 did. However, it is interesting to note that there is significantly more shadow detail on the Nvidia graphics card; this is also true when compared to the 4GB RX 460.
http://www.techspot.com/review/1263-gears-of-war-4-benchmarks/
 
It's probably detecting by device ID and applying presets behind the scenes for every card... which is dumb and takes control out of the user's hands. Some people are fine running at 20 FPS for maximum pretties.
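For illustration only, this is roughly what that kind of hidden tiering would look like with DXGI; the thresholds, the QualityTier enum and PickHiddenTier() are all made up, since nobody outside the developer knows what Gears 4 actually checks:

    // Hypothetical sketch of silent per-adapter preset selection, based on the
    // speculation above that the game keys off device ID / VRAM behind the scenes.
    // Names and thresholds are invented for illustration. Link against dxgi.lib.
    #include <dxgi1_4.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    enum class QualityTier { Low, Medium, High };

    QualityTier PickHiddenTier()
    {
        ComPtr<IDXGIFactory4> factory;
        if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
            return QualityTier::Low;

        ComPtr<IDXGIAdapter1> adapter;
        if (FAILED(factory->EnumAdapters1(0, &adapter)))     // primary adapter only
            return QualityTier::Low;

        DXGI_ADAPTER_DESC1 desc = {};
        adapter->GetDesc1(&desc);

        // desc.VendorId / desc.DeviceId identify the exact GPU model; the VRAM
        // pool is what would explain the 6GB-vs-3GB and 4GB-vs-2GB differences.
        const SIZE_T vramMiB = desc.DedicatedVideoMemory / (1024 * 1024);

        if (vramMiB >= 6144) return QualityTier::High;       // e.g. 6GB+ cards
        if (vramMiB >= 3072) return QualityTier::Medium;     // e.g. 3-4GB cards
        return QualityTier::Low;                             // 2GB and below
    }

A game doing something like this would explain Ultra "presets" that quietly differ per card, and a device-ID table layered on top of the VRAM check would explain the 1060 3GB vs RX 460 4GB gap.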
 
Just a reminder that it was still disabled in May 2016, even when slightly positive results were being seen for Pascal cards; Ryan confirmed this with Dan Baker, and also that there is still some work re-org that goes on with async turned on in Ashes.
Think about the headache for the developer of enabling a path for some devices and not others; currently their async path identifies by vendor rather than by the diverse range of cards. Yes, it can be done, but I doubt a developer wants to take responsibility for this without large engagement from Nvidia (and I cannot see that happening with Oxide).
Info provided by Ryan Smith; this comes from getting further clarification regarding the posts from Kollock I linked on another forum, going back to March, stating it was still disabled in the public game:
https://forum.beyond3d.com/threads/...nd-analysis-thread.57188/page-64#post-1915300

And also about the work re-org: https://forum.beyond3d.com/threads/...nd-analysis-thread.57188/page-64#post-1915326
Cheers
 

Yup, just like it's a headache for developers in DX11/DX9/OGL to offer different paths and detect whether a rendering feature will work on one card versus another. :p It's been happening for years and years; it's nothing new.

What is somewhat new is one vendor claiming support for a performance-enhancing feature but developers having problems enabling it, or enabling it only to see no gain, or even a performance loss. Not entirely new, I guess, just not as common.

Regards,
SB
 
So you are saying that the developer creates a unique path for each of Kepler/Maxwell/Pascal in DX11/DX9 using the many device IDs (the only way to identify each card model) rather than a function/API-level query?
Can they identify async compute support in a better way, to guarantee they only target Pascal GPUs, without having to use device IDs?
For Pascal, excluding GP100, that currently stands at 43 device IDs.
I would expect something could be done in the driver, but then Nvidia would have recommended it be enabled rather than the current situation of it being disabled by the developer (Oxide).
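For what it's worth, as far as I know D3D12 itself has no CheckFeatureSupport cap that tells you "async compute actually runs concurrently on this GPU", which is exactly why engines fall back to vendor/device IDs from DXGI. A minimal sketch of that kind of check (the whitelist logic here is my own illustration, not any shipping game's):

    // Sketch: gating a separate compute queue on the adapter's vendor ID, since
    // D3D12 exposes compute queues on all hardware and (to my knowledge) offers
    // no capability bit for whether async compute is a win. A per-model check
    // would use desc.DeviceId instead - the 43-entry headache mentioned above.
    #include <dxgi1_4.h>

    constexpr UINT kVendorAMD    = 0x1002;
    constexpr UINT kVendorNvidia = 0x10DE;

    bool ShouldEnableAsyncCompute(IDXGIAdapter1* adapter)
    {
        DXGI_ADAPTER_DESC1 desc = {};
        if (FAILED(adapter->GetDesc1(&desc)))
            return false;

        if (desc.VendorId == kVendorAMD)
            return true;    // GCN parts have shown gains in Ashes-style workloads
        if (desc.VendorId == kVendorNvidia)
            return false;   // would need a Pascal device-ID whitelist to go further
        return false;       // unknown vendor: play it safe
    }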

That is just Oxide; maybe there is a consideration for the other developers/games using async compute, and whether they will (or have) clarified what is being done with Nvidia and with specific drivers *shrug*.
Cheers
 
@SB, you have Gears of War 4. Do you have an Nvidia pre-Pascal card, and if so, is the option for enabling async compute greyed out, or does it still let you enable it?
This would answer whether they identify GPUs at the developer level and block non-Pascal GPUs, or just leave it to Nvidia and the driver to resolve.
Still relevant to this thread as it has implications for DX12 benchmarking beyond what we have been discussing - just saying before our posts are spirited away into another thread :)
Thanks
 
Battlefield 1 single player testing:

http://www.purepc.pl/karty_graficzn...tlefield_1_pc_wymagania_pod_kontrola?page=0,3
 
So you are saying that the developer creates a unique path for each Kepler/Maxwell/Pascal in Dx11/Dx9 using the many device ids (only way to identify each card model) rather than function/api level query?

Not a unique path, just a branch in the path similar to Oxide or other developers that have the ability to enable/disable just the async compute portion of the rendering path.
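Something like this, presumably - not Oxide's actual code, just a minimal sketch of what such a branch looks like in D3D12, where the compute work either goes to a dedicated compute queue (async on) or has been recorded into the graphics command list instead (async off):

    // Sketch of "a branch in the path" for async compute. computeList is assumed
    // to be a closed D3D12_COMMAND_LIST_TYPE_COMPUTE list; on the fallback path
    // the same dispatches are assumed to have been recorded into gfxList instead.
    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    struct Renderer
    {
        ComPtr<ID3D12Device>       device;
        ComPtr<ID3D12CommandQueue> gfxQueue;       // D3D12_COMMAND_LIST_TYPE_DIRECT
        ComPtr<ID3D12CommandQueue> computeQueue;   // D3D12_COMMAND_LIST_TYPE_COMPUTE
        ComPtr<ID3D12Fence>        fence;
        UINT64                     fenceValue   = 0;
        bool                       asyncCompute = false;   // the per-GPU/per-vendor switch

        void CreateQueues()
        {
            D3D12_COMMAND_QUEUE_DESC desc = {};
            desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
            device->CreateCommandQueue(&desc, IID_PPV_ARGS(&gfxQueue));
            desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
            device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));
            device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
        }

        void Submit(ID3D12CommandList* computeList, ID3D12CommandList* gfxList)
        {
            if (asyncCompute)
            {
                // Async path: compute overlaps graphics; the graphics queue waits
                // on a fence (GPU-side, no CPU stall) before consuming the results.
                computeQueue->ExecuteCommandLists(1, &computeList);
                computeQueue->Signal(fence.Get(), ++fenceValue);
                gfxQueue->Wait(fence.Get(), fenceValue);
            }
            // Off path: nothing extra to do, the dispatches already sit in gfxList
            // and run serialized on the graphics queue.
            gfxQueue->ExecuteCommandLists(1, &gfxList);
        }
    };

Whether that branch actually helps is then entirely down to the hardware and driver underneath, which is the whole argument of this thread.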

You have Gears of War 4.
Do you have an Nvidia pre-Pascal card and if so is the option greyed out for enabling Async Compute or does it still allow you to enable?
This would answer whether they identify at a developer level and block non-Pascal GPUs or just leave it to Nvidia and the driver to resolve this.
Still relevant to this thread as it has implications for DX12 benchmarking beyond what we have been discussing - just saying before our posts are spirited away into another thread :)
Thanks

Not anymore; I had been using a GTX 780 for a while, but that was basically a gift/loan from a friend. I returned it to him when I decided to get a new video card, and was using a 7790 in my main rig until I got the 1070. He has since sold it on Craigslist, so I don't have access to try it over at his place.

Regards,
SB
 
More testing:

The game's DirectX 12 mode, however, remains something of a mystery. We certainly see much higher frame rates in places, for example with the Radeon RX 480, but it all comes with a hefty cost in the form of coarse FPS dips and a generally uneven experience on both AMD- and Nvidia-based hardware.

http://www.sweclockers.com/test/22807-snabbtest-battlefield-1-i-directx-11-och-directx-12

http://www.overclock3d.net/reviews/software/battlefield_1_pc_performance_review/9

Tests from PCGamesHardware: all cards lose fps moving from DX11 to DX12, even when CPU limited! NV cards lose more. They also found DX12 frame delivery to be inconsistent for both AMD and NV for some reason:
http://www.pcgameshardware.de/Battl...attlefield-1-Technik-Test-Benchmarks-1210394/
DirectX 12, however, has disappointed so far. On Nvidia GPUs, behaviour with DirectX 12 enabled is very similar to Doom under the Vulkan API before the fix in GeForce driver 372.70: noticeable latency appears and, just as in Doom, frames are "dropped", which leads to coarse stutter and also affects the measured values. On AMD, full-screen mode under DirectX 12 works correctly, but the first measurements with Radeons do not look very encouraging: just like the GTX 1080, the RX 480 loses performance under the new Microsoft API, and the frametimes, very pleasant and balanced under DirectX 11, become quite erratic under DX12. Further measurements and evaluations will follow.

In summary: depending on the test area and resolution, AMD GPUs sometimes lose fps, stay flat, or gain slightly in DX12, while NV GPUs always lose fps in DX12, sometimes by a large margin. However, NV's DX11 performance is so good that it is still faster than AMD's DX12 performance; at worst they are equal. DX12 delivers inconsistent frame times for both AMD and NVIDIA.
 
BOY DX12 SURE IS THE MOST AMAZING THING TO HAPPEN TO THE PC GAMING INDUSTRY


I kid. I'm just frustrated with all these fucked up ports. I get why it's happening (lack of time to properly integrate it, engines totally not designed for it, drivers that are going to be immature on this for quite a while...), I truly do. But it is frustrating, you know?
 
I'm tired of DX12 being a marketing point
 
I advise you to look into the results yourself and not just gobble up whatever hand-picked results + (I imagine bolded?) advertisement someone wrote in a forum post.

In both DX12 games I played with my own hardware, I got sizeable gains. If you're not getting any gains with your hardware, you can just use the old API.
The best part of DX12 games so far is that developers are making sure there aren't any IQ effects exclusive to the new API (unlike the DX9-to-DX11 transition with e.g. tessellation), so its introduction could hardly be any more user-friendly than this.


Yes, one IHV came up better prepared for the new APIs than the other. Perhaps because said IHV was the first to answer the call by developers themselves to create lower-overhead APIs, but you could ask @repi about that.
Yes, this was bound to get the other IHV's fanboys to rage and foam at the mouth in response. Next year or the year after, probably most games will be using DX12 and/or Vulkan, simply because it's better.
Until then, just follow @ieldra 's advice and don't let it be a decisive point for you. If the new API works better, choose it. If it doesn't, choose the older one.
 
What's even more tiresome is the people who do nothing but go on like a broken record about DX12 being the next best thing right now, despite all the data across the internet saying otherwise, and who then have the hypocrisy to call one vendor better at DX12 even though the data points the other way and there are no true DX12 titles to this day, not IQ-wise or performance-wise!
 
I don't think you appreciate just what's going on.
A 1.6GHz-ish 8-core APU in a console vs. a 3GHz-ish 4-core CPU in a PC is a whole different ballpark, for starters.
There simply are no games out there that push into the realm where DX12 truly shines, and there are very good reasons for that! Developers have been living with the "batch, batch, batch" philosophy for literally decades. They are used to it, and it's not like anyone is going to throw away the tool chain behind it just because they can (for no benefit in DX12 and a huge cost in DX11). You can push seriously in this direction, but then no sane developer will even make a DX11 version, because it simply won't be usable. Say the DX12 version runs at 60 FPS and the DX11 version at 20 FPS. That's entirely doable, but what would be the point of spending resources on a DX11 port of such a game? To make a few forum enthusiasts happy? Developers already know this; there are tech demos for it. A good example of this direction is Ashes of the Singularity. How playable is it in DX11? And you can check its development from back when DX12 was not even on the drawing board. It's still not close to the limits of what DX12 allows.
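To make the draw-call point concrete: the main thing DX12 changes is that command recording is free-threaded, so an engine can spread its draws over worker threads and submit them in one go, instead of funnelling everything through DX11's single immediate context. A bare-bones sketch, where the command lists/allocators are assumed to exist already and DrawBucket() is a hypothetical stand-in for whatever per-object work the engine does:

    // Sketch of free-threaded command recording in D3D12. 'lists' are assumed to
    // be already-created direct command lists in the recording state, each with
    // its own allocator; DrawBucket() is a hypothetical per-bucket recording function.
    #include <d3d12.h>
    #include <thread>
    #include <vector>

    void SubmitFrame(ID3D12CommandQueue* queue,
                     std::vector<ID3D12GraphicsCommandList*>& lists,
                     void (*DrawBucket)(ID3D12GraphicsCommandList*, size_t))
    {
        std::vector<std::thread> workers;
        for (size_t i = 0; i < lists.size(); ++i)
        {
            // Each worker records its own command list: no driver-side lock and
            // no DX11-style deferred-context emulation in the runtime.
            workers.emplace_back([&, i] {
                DrawBucket(lists[i], i);
                lists[i]->Close();
            });
        }
        for (auto& w : workers)
            w.join();

        // One submission for the whole frame's worth of buckets.
        queue->ExecuteCommandLists(
            static_cast<UINT>(lists.size()),
            reinterpret_cast<ID3D12CommandList* const*>(lists.data()));
    }

That is what buys the extra draw calls; it does nothing for you if the game was never draw-call bound in the first place, which is the point being made above.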

When talking about this you also need to consider what we are supposed to use all those draw calls for. A 10,000-player first-person shooter? Graphically, I think we could handle that; everything else would crumble. Space Invaders with 500,000 independently moving, uniquely shaped, uniquely textured asteroids on screen at once? You could probably still do it with some effort in DX11.

If you're reading what developers are talking about around here, I'd say it's pretty obvious that's not the direction that's going to be taken. There's a lot more talk about graphics pipelines where the GPU basically feeds itself, or about advanced GPU-side culling techniques, for example. These approaches again fit DX12 much, much better, and again are something we are probably not going to see back-ported to DX11 just so we can compare the two APIs and be amazed by the awesome speedup.
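The "GPU feeds itself" direction maps to things like ExecuteIndirect: a compute pass (culling, LOD selection, whatever) writes the draw arguments and a count into GPU buffers, and the CPU never touches the per-object draws at all. A minimal draw-only command signature as a sketch; the buffers and the culling shader are assumed to exist elsewhere:

    // Sketch of GPU-driven submission with ExecuteIndirect. The argument and
    // count buffers are assumed to be filled by a culling compute pass.
    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    ComPtr<ID3D12CommandSignature> CreateDrawSignature(ID3D12Device* device)
    {
        D3D12_INDIRECT_ARGUMENT_DESC arg = {};
        arg.Type = D3D12_INDIRECT_ARGUMENT_TYPE_DRAW;    // plain non-indexed draws

        D3D12_COMMAND_SIGNATURE_DESC desc = {};
        desc.ByteStride       = sizeof(D3D12_DRAW_ARGUMENTS);
        desc.NumArgumentDescs = 1;
        desc.pArgumentDescs   = &arg;

        ComPtr<ID3D12CommandSignature> sig;
        // Root signature may be null when the arguments contain only draw calls.
        device->CreateCommandSignature(&desc, nullptr, IID_PPV_ARGS(&sig));
        return sig;
    }

    void DrawCulledScene(ID3D12GraphicsCommandList* cmdList,
                         ID3D12CommandSignature* sig,
                         ID3D12Resource* argsBuffer,    // D3D12_DRAW_ARGUMENTS records, GPU-written
                         ID3D12Resource* countBuffer,   // visible-draw count, GPU-written
                         UINT maxDraws)
    {
        cmdList->ExecuteIndirect(sig, maxDraws, argsBuffer, 0, countBuffer, 0);
    }

There is no sane way to back-port that to DX11, which is why the clean A/B comparisons people keep asking for are never going to exist.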

I'd also like to point out that calling out immature drivers is a seriously slippery slope here... There's sort of a pinky promise that DX12 drivers should be thin! They should not do a whole bunch of background analysis of what the application is trying to do and then rearrange things to better fit the hardware, like they do in DX11, which in turn causes headaches for game developers when the pipeline stalls out of the blue. It's not guaranteed to stay this way.
Basically, if DX11 runs better than DX12, the driver is doing a better job than the game developers, which, seven years after DX11 was publicly released, I'm freaking amazed anyone finds surprising. If it's the other way around, the game developers are doing a better job than the drivers. There are no magic "DX12 instructions". There is async compute, which can help if the hardware can make use of it, but it's in no way magic.
 
Good points. A seriously limiting factor here is that console CPUs are very limiting at the moment when it comes to draw calls, even with LLAPIs, at least compared to desktop CPUs. I think IST and others are more disappointed that even experienced developers are getting these kinds of pretty much atrocious benchmark results in DX12; it's not too much to ask for one of the main teams that has been pushing LLAPIs not to have this kind of showing...
 
Well said, I'm going to copy-paste this in future discussions ;p


Lovely results, eh? An early Christmas miracle!

980 Ti 5% faster than a 980, 390 5% faster than a 970, 390X on par with the 980 Ti.

I'm talking about 1080p and 1440p; the Guru3D results don't add up and they don't match virtually all the others.
 
Yep, the 980 Ti results at Guru3D are inconsistent with other sites. Guru3D already knows this and will retest, as they suspect a bug in the test setup:

To do:
  • Add more graphics cards (!)
  • Retest GTX 980 Ti (current result set is odd)
  • CPU scaling in-between Intel <> AMD versus DirectX 12
http://www.guru3d.com/articles_pages/battlefield_1_pc_graphics_benchmark_review,9.html
 
Gonna use some bullet points.

1. The drivers are barely a year old (younger in the case of Vulkan). Of course they're immature; any driver would be immature after just a year. Recall the issues GCN had at its beginning. After about a year and a half it had improved faster than the average driver maturity cycle for a new generation, because it was a completely new architecture and needed time. And there they had something to build off of (adhering to DX11/OpenGL), unlike DX12/Vulkan (well, AMD has Mantle to build off of; Nvidia and Intel are SOL there. They have their own little APIs that we all know about, of course, but that's far different from DX12/Vulkan), where you have to do things incredibly differently, so it's easier to make mistakes on the first go-around when writing support for your hardware, etc. No one's perfect, otherwise we wouldn't have software bugs. ;)

It's not that drivers are magic bullets; it's that software development takes time, and often the first release isn't optimal. I know DX12 is lighter, and thus there's only so much the driver can do. That's not what I was referring to; I'm referring to the fact that it's so new.

I also expect that if somehow more DX12/Vulkan games were out, the drivers would be more advanced. That said, there's only so much one can do in a limited time.

2. Tool chains, etc. That's what I meant by engines, though given I was using engines as shorthand for everything that goes into the programming side of development, I see where you got mixed up. My bad. I use shorthand a lot because typing is hard due to disabilities; you'll likely find a lot of other confusing terminology if you look through my posts. I literally knew all of what you were saying; I read this place a lot. I combined it all because typing as much as I am causes pain.

Anyway, I know that current games are in no way gonna make it work properly. I honestly don't think we're gonna see a mainstream game... not sure what phrasing I should use here... I guess built with DX12/Vulkan in mind, until 2018 or 2019, possibly later. After all, it took ages for proper DX11 games to come into being. Most games supporting it until about three and a half years ago (maybe just three) were engines that had it... again, not sure what phrasing to use here, given this is a technical subject... bolted on, maybe?

We'll go with bolted on for the moment, but I'm sure there's a better phrase.

Anyway, most games until about three to three and a half years ago just had it bolted on, for lack of a better term. Then 2013 hit and everything began to change very quickly, especially if the game had no X360/PS3/Wii U ports and was PC-only or just (then) next-gen/PC.

Mainstream games tend to take longer to adopt APIs in the sense of building a game to suit the API rather than integrating the API into an already existing codebase... so I expect we'll likely see more independent or semi-independent games pop up built with DX12/Vulkan in mind over the next five years than we will mainstream games. They likely won't reach the heights (so to speak) that the mainstream games do (budgets are always an issue!), but we already have one game built closer to what DX12/Vulkan can enable than the mainstream games: AotS. It's got DX11 support, sure, but it acts a lot more like a DX12 game from what I've read.

I'll likely amend this later; said disabilities are kicking in a little too much.
 
 