NVIDIA GameWorks, good or bad?

He's possibly referring to the opaque implementations of the algorithms offered by GameWorks. If that's the case, developers couldn't optimize them for GCN even if they wanted to. I would expect Nvidia to provide support if they underperform or pose other problems, regardless of whether the developer is talented or a "beginner".
If the price is zero, say just a "GameWorks logo" in return, then it's very attractive and gives a competitive edge, but it also might grow "stoopit" (sorry, no offense) customers in the long term.

I can see the conflict of interest, but like Andrew, I don't think it's novel or uncommon. It has two sides, though, and maybe the bias that evolves will tip the coin to the wrong side in the end.
 
If it were Mantle-only then I'd be boycotting those as well, since in that case it would be worse than GameWorks as it currently stands (able to run on only one IHV's hardware, and only on some of their hardware).
What about games that run "better" with Mantle? Clearly that's the only real reason to do a Mantle implementation to start with, but where do you as a user draw a line between whether you "feel" a developer did a "good/optimal" job on the DX path vs. the Mantle one? My point is that vendor-specific optimizations have always been a grey area and they've always been in play. Gameworks changes nothing there.

In the case of Gameworks, do they really? And even if they do, are they going to also offer a non-Gameworks DirectX path to ensure their game can potentially operate somewhat optimally on all graphics vendors' hardware?
Yes, and it's up to them whether they deem GW on other vendors as an "optimal enough" implementation to enable/ship. If they don't they have a few options... 1) disable GW stuff if not on NVIDIA (a la. MSAA in Batman; that went over well ;)), 2) implement a completely different path (lots of opportunity for whining about differences between paths) or 3) allow GW to work everywhere; optionally let the user disable the relevant effects.
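
For illustration of option 1: an engine can gate vendor-specific effects on the adapter's PCI vendor ID reported by DXGI. A rough sketch only; the helper name and the enable/disable policy are made up for the example, not taken from any real engine:
[code]
// Rough sketch: query the primary adapter's PCI vendor ID via DXGI and
// use it to decide whether to enable vendor-specific effects.
// (Helper name and policy are illustrative, not from any real engine.)
#include <dxgi.h>
#pragma comment(lib, "dxgi.lib")

enum class GpuVendor { Nvidia, Amd, Intel, Other };

GpuVendor DetectPrimaryGpuVendor()
{
    IDXGIFactory* factory = nullptr;
    if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory)))
        return GpuVendor::Other;

    GpuVendor vendor = GpuVendor::Other;
    IDXGIAdapter* adapter = nullptr;
    if (SUCCEEDED(factory->EnumAdapters(0, &adapter)))   // adapter 0 = primary
    {
        DXGI_ADAPTER_DESC desc = {};
        adapter->GetDesc(&desc);
        switch (desc.VendorId)                            // well-known PCI vendor IDs
        {
        case 0x10DE: vendor = GpuVendor::Nvidia; break;
        case 0x1002: vendor = GpuVendor::Amd;    break;
        case 0x8086: vendor = GpuVendor::Intel;  break;
        }
        adapter->Release();
    }
    factory->Release();
    return vendor;
}

// e.g. option 1 from above: only enable the library's effects on NVIDIA.
// bool enableGameWorksEffects = (DetectPrimaryGpuVendor() == GpuVendor::Nvidia);
[/code]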

At least in the case of Mantle, it doesn't even touch the "standard" rendering path available to all graphics hardware from any IHV that makes DirectX drivers.
Developer time is limited and any time spent on Mantle takes away from time spent optimizing on other platforms. Trust me, it's already the case that IHVs give lists of things to improve in games and developers absolutely do not get to all of it... or in some cases any of it.

In the case of Gameworks, it takes the standard DirectX path that all graphics cards (on Windows) use by default, obfuscates it, and optimizes it such that any code written against it works optimally on their hardware.
No it doesn't... Gameworks is just a library that happens to use DirectX. It's not some nefarious "engine layer" or something that you have to sign over your soul to use and henceforth all of your code gets sent to NVIDIA for inspection before being run on any piece of hardware ;)

The part that isn't is that, since it is still DirectX-based, all other graphics cards from any graphics hardware provider will have to go through it unless the developer makes a separate DirectX path in addition to the Gameworks DirectX path.
Would you prefer if the developer simply disables the functionality entirely if you're not on NVIDIA, a la. Batman AA support?

And what developer would do that as long as the Gameworks path performed "well enough" on non-Nvidia hardware?
But that's the crux of the matter - game developers *always* evaluate whether something runs "well enough" and decide to ship it or not. Don't think for a second that there's some way to just generally use the API that is optimal across all IHVs. Only some engines/games do significant optimization for >1 vendor. Few do it for >2.

So it's not like the competitors would be able to find out and point it out or even attempt to fix something Gameworks does which has some unnecessary performance impact on their hardware.
If you think IHVs require access to C++ source code to see how an application is using the GPU - in detail - I've got some news for you ;) How would driver optimizations even be possible if this was the case?

So, I guess I should amend that. If a game developer makes a game using Gameworks but no alternate default DirectX path for other graphics vendors' hardware to use, then I will not purchase the game and will be unlikely to purchase anything from that developer in the future.
That's fine, ultimately what you are supposed to judge is the final product. But if the GW game runs great on AMD or Intel too (or one of those two... omg what now :)) - regardless of whether it is using a custom path or the stock GW DX one - isn't that fine?

i.e. whether or not something uses GW is irrelevant to you in the end. See if a game performs acceptable for you on whatever hardware you want it to run on and if not, don't buy it. Easy. Whether it uses GW, "super sekrit custom code for Vendor A", intentionally sabotages performance on Vendor B, etc. is at best part of the explanation for bad performance, not something to be evaluated in isolation. This is how it always has been and always should be. Let the game developers deal with the IHV relationships and you just judge the final product. If game developers want to be pissed off about GW they have every right not to use it. End users really have no place in that discussion.
 
If you think IHVs require access to C++ source code to see how an application is using the GPU - in detail - I've got some news for you ;) How would driver optimizations even be possible if this was the case?
It's much easier to have access to HLSL code and suggest changes rather than analyzing IL and performing a shader replacement.

While some developers likely consider the decision to use Gameworks or Mantle a pure performance trade off I think the situation is more nuanced.

I think some developers just want the APIs to evolve and Mantle is a way to force change. Plus, no sane developer is going to implement Mantle without supporting another API. The same cannot be said for Gameworks.

As for Gameworks, I can see it being difficult to justify writing shaders from scratch vs. using those provided by Nvidia. It's got to be tempting to use the Gameworks shaders on all hardware and hope that Nvidia isn't crippling hardware from other vendors. Especially if you don't know if you can do better on non-Nvidia hardware.

I wonder if there's an opportunity for middleware like Gameworks that's vendor agnostic. The challenge would be competing against free.
 
I'm actually vilifying the developers who choose to use Gameworks, hence, avoiding purchase of anything made with it ... So, I guess I should amend that. If a game developer makes a game using Gameworks but no alternate default DirectX path for other graphics vendors' hardware to use, then I will not purchase the game and will be unlikely to purchase anything from that developer in the future.
That's a bit extreme. That's a LOT of them these days: you've got Unreal 4 and its games, ALL Ubi games now, including the next or current installments of franchises like Splinter Cell, Assassin's Creed, Watch_Dogs and even The Division, and then you have Batman, Borderlands, Metro, TitanFall, Call of Duty, GTA, Resident Evil, Witcher 3, Blizzard games, id Tech games, Sony Online games (EverQuest, PlanetSide 2), many free-to-play titles and any other game that might be coaxed into the initiative. You'd practically be throwing away half the market.

In the case of Gameworks, it takes the standard DirectX path that all graphics cards (on Windows) use by default, obfuscates it, and optimizes it such that any code written against it works optimally on their hardware.
AMD did the same with some Square Enix games, mainly Tomb Raider (with its TressFX), Deus Ex and Hitman Absolution; these games had AMD-friendly shaders that worked much better on AMD hardware, and it's only after NVIDIA optimizes through drivers (and possibly patches) that they compete. Same situation with Codemasters games too (DiRT, GRID), well before they switched to Intel anyway. In fact, IMO, AMD's code tinkering is much more severe than NVIDIA's; it's only because AMD has far fewer games that this hasn't become a pain in the neck for the market.
 
I don't doubt that rewriting HLSL shaders is easier than rewriting the compiled version. So, yes, AMD will need to go in and get their hands a bit dirty. Just like Nvidia needed some time after the initial release of Tomb Raider to optimize their driver for TressFX because they hadn't been given early access. Nothing new there.
 
That's fine, ultimately what you are supposed to judge is the final product. But if the GW game runs great on AMD or Intel too (or one of those two... omg what now :)) - regardless of whether it is using a custom path or the stock GW DX one - isn't that fine?

i.e. whether or not something uses GW is irrelevant to you in the end. See if a game performs acceptable for you on whatever hardware you want it to run on and if not, don't buy it. Easy. Whether it uses GW, "super sekrit custom code for Vendor A", intentionally sabotages performance on Vendor B, etc. is at best part of the explanation for bad performance, not something to be evaluated in isolation. This is how it always has been and always should be. Let the game developers deal with the IHV relationships and you just judge the final product. If game developers want to be pissed off about GW they have every right not to use it. End users really have no place in that discussion.

I, like anybody, have every right not to condone a company's actions, be it a developer or an IHV. Let me leave it at that.
 
Looks like this whole spat was just a Forbes guy not being able to do a benchmark. Here's HardOCP's conclusion: "In terms of performance we were surprised how close the R9 290X and GTX 780 Ti are. There has been a lot of FUD around the internet about AMD potentially lacking in performance compared to NVIDIA. We hope we have smashed the rumors and provided facts based on gameplay and not some quick-use benchmark tool that will many times tell you little. We actually found the Radeon R9 290X slightly faster in some scenarios compared to the GeForce GTX 780 Ti. We also found out that gameplay consistency was a lot better on Radeon R9 290X with "Ultra" textures enabled thanks to its 4GB of VRAM."

(http://www.hardocp.com/article/2014/05/27/watch_dogs_amd_nvidia_gpu_performance_preview/5)
 
I don't doubt that rewriting HLSL shaders is easier than rewriting the compiled version. So, yes, AMD will need to go in and get their hands a bit dirty. Just like Nvidia needed some time after the initial release of Tomb Raider to optimize their driver for TressFX because they hadn't been given early access. Nothing new there.

Not having early access is pretty much SOP on both sides, but TressFX source code is freely available and open to examination by anyone. That's a pretty significant difference to a black box solution from nVidia doing god knows what before it starts sending commands to the GPU.

Personally, I don't buy nVidia precisely because of their anticompetitive tactics in the GPU market. AMD's policies have long been far more ethical and that's why they get my money irrespective of any price/performance disparity.
 
Self-advertisement incoming...

http://www.pcgameshardware.de/Watch...atch-Dogs-Test-GPU-CPU-Benchmarks-1122327/#a4

If you scroll a tad down and click on CPU/Driver Overhead Benchmark (which it technically isn't really, just an indicator) in the box, you'll see that in our test scene at 720p with max details except AA, the 780 Ti pulls ahead of the 290X. Given the resolution, I highly doubt that this has much to do with black-box special effects, but rather with driver efficiency - since it's an open-world game with lots of stuff going on and has been [strike]developed[/strike] optimized [strike]on[/strike] for light-weight APIs on consoles, my guess would be something along the lines of DX11 command lists which help give NV a boost here.

Since AMD guys are probably reading this: I'd love to know the reason DX11 CLs are not supported, apart from making Mantle look (even!) better.
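
For anyone unfamiliar with the feature I'm speculating about: D3D11 command lists are recorded on a deferred context and replayed on the immediate context, which lets a driver move some submission work off the render thread. A bare-bones sketch, assuming a valid device and immediate context already exist elsewhere (the draw is just a placeholder):
[code]
// Minimal sketch of D3D11 deferred contexts / command lists, assuming a
// valid ID3D11Device* device and ID3D11DeviceContext* immediateContext.
#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

void RecordAndExecute(ID3D11Device* device, ID3D11DeviceContext* immediateContext)
{
    // 1. Create a deferred context (typically one per worker thread).
    ID3D11DeviceContext* deferred = nullptr;
    if (FAILED(device->CreateDeferredContext(0, &deferred)))
        return;

    // 2. Record state and draw calls on the deferred context.
    //    (State setup omitted; this is where a worker thread would build
    //    its portion of the frame.)
    deferred->Draw(3, 0);

    // 3. Close the recording into a command list.
    ID3D11CommandList* commandList = nullptr;
    deferred->FinishCommandList(FALSE, &commandList);

    // 4. Replay it on the immediate context (main/render thread).
    if (commandList)
    {
        immediateContext->ExecuteCommandList(commandList, FALSE);
        commandList->Release();
    }
    deferred->Release();
}
[/code]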
 
It's much easier to have access to HLSL code and suggest changes rather than analyzing IL and performing a shader replacement.
Agreed for making changes, but it's pretty easy to spot glass jaws or any intentional sabotage regardless. I'm pretty used to looking at code that hasn't been optimized for Intel GPUs (sadly a lot of it) and while I agree shader replacement isn't simple, I don't really need to see the HLSL to understand both what is going on algorithmically and how optimal it is. i.e. it's easy enough to call shenanigans if need be, even if "fixing" it purely in the driver is a pain.
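
To make the IL point concrete: even without the HLSL source, anyone holding the compiled blob an application hands to the API can turn it back into readable IL with stock d3dcompiler calls. A tiny sketch, with the shader string just a stand-in:
[code]
// Sketch: compile a trivial HLSL snippet and dump the DXBC disassembly,
// i.e. the IL a driver sees regardless of whether it ever gets the HLSL.
#include <d3dcompiler.h>
#include <cstdio>
#include <cstring>
#pragma comment(lib, "d3dcompiler.lib")

int main()
{
    const char* hlsl =
        "float4 main(float4 pos : SV_Position) : SV_Target"
        "{ return pos * 0.5f; }";

    ID3DBlob* bytecode = nullptr;
    ID3DBlob* errors = nullptr;
    if (FAILED(D3DCompile(hlsl, strlen(hlsl), nullptr, nullptr, nullptr,
                          "main", "ps_5_0", 0, 0, &bytecode, &errors)))
    {
        if (errors) printf("%s\n", (const char*)errors->GetBufferPointer());
        return 1;
    }

    // This is the representation shader analysis works from.
    ID3DBlob* disasm = nullptr;
    if (SUCCEEDED(D3DDisassemble(bytecode->GetBufferPointer(),
                                 bytecode->GetBufferSize(), 0, nullptr, &disasm)))
    {
        printf("%s\n", (const char*)disasm->GetBufferPointer());
        disasm->Release();
    }
    bytecode->Release();
    return 0;
}
[/code]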

I think some developers just want the APIs to evolve and Mantle is a way to force change.
Absolutely and it has obviously done a great job of that so far. But one could definitely argue that AMD continuing to encourage people to develop Mantle paths on Windows once D3D12 hits is really no different than GW. Even if any "sane" developer is going to have a D3D path in addition you are still taking resources away from that portable path.

As for Gameworks, I can see it being difficult to justify writing shaders from scratch vs. using those provided by Nvidia. It's got to be tempting to use the Gameworks shaders on all hardware and hope that Nvidia isn't crippling hardware from other vendors. Especially if you don't know if you can do better on non-Nvidia hardware.
True but sort of irrelevant. It's always easier to just accept/use someone else's code or libraries but you obviously have to vet whether it satisfies your requirements or not. If a dev considers it acceptable performance on AMD then that's that, end of story. If they don't they have a variety of options.

Not having early access is pretty much SOP on both sides, but TressFX source code is freely available and open to examination by anyone. That's a pretty significant difference to a black box solution from nVidia doing god knows what before it starts sending commands to the GPU.
AMD isn't quite the white knight you make them out to be here. For instance while they seem to unofficially "say" that game devs are allowed to modify the TressFX code (although they have no license to say as much), they will not allow other IHVs to post optimized versions. Mantle is similar... to the press they say that it's "portable" and they'll discuss standardizing it but so far they have refused to even share specs with other IHVs let alone have a discussion. Really not that different from GW in practice, they've just managed to avoid getting press attention about it so far.

Anyways enough said really. As I noted none of this is new at all... folks who think this hasn't been the situation since day one are just fooling themselves. Pretty much all IHVs are entirely self-serving and their actions are mostly equivalent... the only thing that varies is the PR spin.
 
Not having early access is pretty much SOP on both sides, but TressFX source code is freely available and open to examination by anyone. That's a pretty significant difference to a black box solution from nVidia doing god knows what before it starts sending commands to the GPU.
I'm not God, but from what we now know it doesn't seem to hurt AMD in any kind of suspicious way. All the rest is just FUD from AMD with a lot of ifs and thens about the potential for abuse. Show real evidence and we can talk.

I do fault Nvidia for failing to do one very important step: they didn't yell from the rooftops that "GameWorks is Open!". History has shown this to be an excellent way to silence a lot of people right there!

Personally, I don't buy nVidia precisely because of their anticompetitive tactics in the GPU market. AMD's policies have long been far more ethical and that's why they get my money irrespective of any price/performance disparity.
That's nothing to be ashamed of. We all sometimes have a need to feel superior at something. I confess to sometimes reading the semiaccurate forums for that same reason. :devilish:
 
So NVIDIA and Intel have approached AMD about creating their own Mantle drivers?
We have asked them for specs several times (which would be step one) and they have refused. I would not be surprised if NVIDIA has done the same since AMD themselves said they would entertain such a conversation. For all I know they gave the specs to NVIDIA but I sort of doubt it ;)

In any case with D3D12 on the horizon though it's not really as interesting as it may have been before that.
 
Guru's benches are unreliable; the game is VRAM-limited and their quality settings are so high that 2GB and 3GB cards are instantly crippled.

You can avoid all of that by carefully managing settings so as not to exceed your VRAM limit. 2GB cards can only use 1080p at Ultra + FXAA, 3GB cards can use 1080p with MSAA or TXAA, and 4GB cards can play higher than 1080p.

Hence why he tested them at high quality settings and provided a full set of tests with different AA methods and settings using the 780... (The tweaking guide from Nvidia is a good place to start too, anyway, but it hadn't been released at the time he wrote his article.)
 
What's WQHD?
Wide Quad HD: 2560x1440.

my guess would be something along the lines of DX11 command lists which help give NV a boost here.
Just out of curiosity, is that purely speculation or is it based on any kind of inside source? Given how little used command lists are (and how troublesome/unhelpful they are), I would be surprised by that. But stranger things have happened.
 
@ Andrew Lauritzen
Did they just ignore your requests, did they refuse, did they say "we're not ready yet", did they give any reason ?
 
Just out of curiosity, is that purely speculation or is it based on any kind of inside source? Given how little used command lists are (and how troublesome/unhelpful they are), I would be surprised by that. But stranger things have happened.
What we know for sure is that AMD DX11 drivers are much worse than NVIDIA at extracting CPU performance for an unknown reason.
http://www.pcper.com/reviews/Graphi...DIA-33750-Scaling-Demonstrated-Star-Swarm-AM1

http://www.tweakpc.de/hardware/test..._337_50/benchmarks.php?benchmark=starswarmfps
 
@ Andrew Lauritzen
Did they just ignore your requests, did they refuse, did they say "we're not ready yet", did they give any reason ?
Mantle is only just getting out of the "beta" phase at the moment and we are rolling out access on a controlled basis as we gauge the number of requests and support requirements (hence the release a few weeks ago on access for 40 devs).
 
What we know for sure is that AMD DX11 drivers are much worse than NVIDIA at extracting CPU performance for an unknown reason.
http://www.pcper.com/reviews/Graphi...DIA-33750-Scaling-Demonstrated-Star-Swarm-AM1

http://www.tweakpc.de/hardware/test..._337_50/benchmarks.php?benchmark=starswarmfps
Of course; I agree with that much, as the benchmarks are solid evidence. I am just pondering whether it's command lists or something else. Given all the places that can be CPU-bottlenecked versus how rarely command lists were used, it seems to me the CPU bottleneck is most likely elsewhere (unless CarstenS has anything indicating otherwise).
 