NVIDIA GameWorks, good or bad?

Mantle is only just getting out of the "beta" phase at the moment, and we are rolling out access on a controlled basis as we gauge the number of requests and support requirements (hence the release a few weeks ago of access for 40 devs).
There is a huge difference between giving access to a game developer and a driver developer. The latter needs much more time to fully cover all functionality, and can start early even though things are still in flux. But you already knew that.

I think it's the right thing to do from AMD's point of view. But let's not pretend that it's done for the benefit of others.
 
It's one example with one demo. Who knows what kind of optimizations nVidia might have made in their drivers when starswarm.exe is detected?
I'm sure they are doing some stuff specific to StarSwarm, but they have definitely made some significant overall performance improvements in the latest set of drivers. I have some microbenchmarks indicating they've cut the overhead of stuff like tight cbuffer discard/draw loops to half of what it was: you can now do ~120k of these per frame @ 60Hz (single-threaded) vs. around 60k before, and static cbuffer changes are up to ~200k/frame @ 60Hz. That's not at the level of D3D12 or Mantle, but it demonstrates that there was definitely some performance left on the floor in D3D11, at least for them.
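For reference, the pattern that kind of microbenchmark measures is roughly the following - a minimal sketch with placeholder names (gContext, gPerDrawCB, gIndexCount are assumptions, not the actual benchmark code):

Code:
// Per-draw dynamic constant buffer update with MAP_DISCARD, then a draw,
// in a tight single-threaded loop. The interesting number is how many of
// these iterations fit into a 16.6 ms frame before the CPU is the bottleneck.
#include <d3d11.h>
#include <cstring>

extern ID3D11DeviceContext* gContext;   // immediate context (placeholder)
extern ID3D11Buffer*        gPerDrawCB; // D3D11_USAGE_DYNAMIC, CPU_ACCESS_WRITE
extern UINT                 gIndexCount;

struct PerDrawConstants { float transform[16]; };

void DrawBatch(const PerDrawConstants& constants, UINT numDraws)
{
    for (UINT i = 0; i < numDraws; ++i)
    {
        // Discard the previous contents so the driver can rename the buffer
        // instead of stalling; this Map/Unmap is the per-draw overhead being measured.
        D3D11_MAPPED_SUBRESOURCE mapped;
        if (SUCCEEDED(gContext->Map(gPerDrawCB, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
        {
            memcpy(mapped.pData, &constants, sizeof(constants));
            gContext->Unmap(gPerDrawCB, 0);
        }

        gContext->VSSetConstantBuffers(0, 1, &gPerDrawCB);
        gContext->DrawIndexed(gIndexCount, 0, 0);
    }
}

Nothing clever going on there, which is exactly the point - it's pure API/driver overhead per draw.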
 
There is a huge difference between giving access to a game developer and a driver developer. The latter needs much more time to fully cover all functionality, and can start early even though things are still in flux. But you already knew that.

I think it's the right thing to do from AMD's point of view. But let's not pretend that it's done for the benefit of others.

The comment was fairly clear: the efforts are focused on game developers right now. The number of people that know Mantle in detail is pretty small, limited to the architecture team that wrote it and some of our ISV engineers. The architecture team is still developing it (i.e. getting the first phase out of beta, working on "what's next", ensuring it is operationally feature-comparable with DX/OGL [see "switchable graphics" support being just added], etc.), while the ISV teams' focus is on supporting the developers that are using it.
 
Mantle is only just getting out of the "beta" phase at the moment, and we are rolling out access on a controlled basis as we gauge the number of requests and support requirements (hence the release a few weeks ago of access for 40 devs).

That says having Mantle supported by other IHVs is not a priority; with DX12 coming, the window to get it supported seems to be closing fast.
As for Intel's/NV's request, it would have been another two clicks on the BCC button in your email client when you sent off the specs to the 40 devs. You people do like to overcomplicate things ;)
 
It's one example with one demo. Who knows what kind of optimizations nVidia might have made in their drivers when starswarm.exe is detected?
Carsten and other sites presented other examples as well (TechReport and BF4), so it is not limited to this one case only.

Fixed that for you.
NV did the same with BF4 and Thief too.
That doesn't make it wrong; it means there is more to be done that AMD hasn't done yet, and they should start doing it if they don't want Mantle to become irrelevant. It means nothing when Mantle is faster than your own DX11 path if you didn't spend the effort to extract every bit of performance out of it.
 
That says having Mantle supported by other IHVs is not a priority; with DX12 coming, the window to get it supported seems to be closing fast.
As for Intel's/NV's request, it would have been another two clicks on the BCC button in your email client when you sent off the specs to the 40 devs. You people do like to overcomplicate things ;)

With DX12 coming, Nvidia probably won't even bother. It's not like they ever would have anyway; I can't see them throwing anything AMD's way unless they get unfettered, free access to the API.
 
So AMD made a bunch of claims that can't readily be falsified because they're buried under a mountain of contracts and legalese, or that don't actually turn out to be true in practice (Watch Dogs performing no worse on AMD GPUs). And the one accusation that can unambiguously be checked, the disappearing source code, turns out to be flat out wrong. Is that a good summary of the situation? ;)
 
Thanks Andrew for that link! It's fairly straightforward now to see what the ongoing argument about code samples vs. proprietary GameWorks modules is all about, and specifically why the modules are not included with the code samples. And it certainly seems to go both ways ... it'd really be interesting to see what's in the developer contracts, specifically the "hidden clauses" pointed out in the article. ;)
 
Nvidia used to have the best developer web site around, but I was looking for a sample the other day and couldn't find it. It might be funny to take a shot at AMD for failing to find something, but if others have the same experience I had, it means their site design needs improvement.

I eventually found the sample on one of my computers after realizing it would be easier to search multiple computers than to use Nvidia's web site.
 
On that point I think most of us agree 3dcgi :) Not that I'd say any of the IHV dev web sites are "good", but NVIDIA's has definitely regressed.
 
Just out of curiosity, is that purely speculation or is it based on any kind of inside source? Given how little used command lists are (and how troublesome/unhelpful they are), I would be surprised by that. But stranger things have happened.

It is not based on a quotable inside source. But even without that, every indication points in that direction.

Apart from that: Traditionally, for a few years, Nvidia drivers have had the reputation of being able to extract more performance in apparently CPU-limited scenarios, but only if the CPU in question had at least four threads available. This has been attributed from time to time to a "low-res strength of Fermi/Kepler/whatevs" or a "hi-res strength of Radeon XYZ", where the latter looked comparatively better and better the higher you went, available vid-mem notwithstanding.

When I compared the R9 290X and the 780 Ti a couple of months ago for our CPU tests on older dual- and quad-cores from the Core 2 generation, it showed that they were basically on par again (and thus the tests were quite OK for comparing CPU performance). This is a step up for Nvidia in the case of the dual-cores, where they didn't perform that well earlier. Notably, though, in BF4 the Nvidia card managed to pull ahead of the Radeon (both using DX11, in single-player, no AA/AF/AO, 720p!). Further tests with Mantle showed that with AMD's API the Radeon could really pull ahead, as was to be expected. This further confirms that the performance difference is indeed due to the way the cards' drivers utilize processor resources in this case, since we were far away from the graphics cards' limits with these old CPUs.

So, given that there are other DX11 games in our test course, here's a little on that: Anno 2070 (I think it is or was called Dawn of Discovery in the rest of the world?) is multithreaded, but a very strong primary thread dominates performance here; the same goes for F1 2013 apparently, since we're seeing a great disparity according to IPC, not only core count, between CPU types. Crysis 3 seems to be an example where the dev really took care to distribute the load across many lighter threads on a lot of cores (and this is not only speculation), which is especially apparent in our test scene. Here, AMD's CPUs are able to keep up very well with Intel's, so maybe (and this is a speculative maybe) we're seeing a custom case of DX11 command lists at work already. This test's scores increased quite a bit with the recently released REL337 drivers from Nvidia - maybe (again, speculation) able to better/fully utilize CLs as well. See the sketch below for what that pattern looks like.
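For anyone who doesn't follow the "DX11 CL" shorthand: the idea is that worker threads record draws on deferred contexts and the immediate context replays the finished command lists, roughly like this sketch (placeholder names, purely illustrative - not anything from an actual engine or driver):

Code:
// Each worker thread records its slice of the frame on its own deferred
// context; the main thread then submits the finished command lists in order
// on the immediate context.
#include <d3d11.h>
#include <thread>
#include <vector>

extern ID3D11Device*        gDevice;            // placeholders
extern ID3D11DeviceContext* gImmediateContext;

void RecordChunk(ID3D11DeviceContext* deferred, ID3D11CommandList** outList)
{
    // ... set all needed state and issue draws for this thread's slice ...
    // FALSE = don't restore the deferred context's state afterwards.
    deferred->FinishCommandList(FALSE, outList);
}

void RenderFrameMultithreaded(unsigned numThreads)
{
    std::vector<ID3D11DeviceContext*> deferred(numThreads, nullptr);
    std::vector<ID3D11CommandList*>   lists(numThreads, nullptr);
    std::vector<std::thread>          workers;

    for (unsigned i = 0; i < numThreads; ++i)
        gDevice->CreateDeferredContext(0, &deferred[i]);

    // Record command lists in parallel.
    for (unsigned i = 0; i < numThreads; ++i)
        workers.emplace_back(RecordChunk, deferred[i], &lists[i]);
    for (auto& w : workers)
        w.join();

    // Submit in order; whether this actually scales depends on the driver
    // supporting native command lists rather than emulating them.
    for (unsigned i = 0; i < numThreads; ++i)
    {
        gImmediateContext->ExecuteCommandList(lists[i], FALSE);
        lists[i]->Release();
        deferred[i]->Release();
    }
}

Whether that helps in practice is exactly the part I'm speculating about, since it hinges on how well the driver handles native command lists.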


And the one accusation that can unambiguously be checked, the disappearing source code, turns out to be flat out wrong. Is that a good summary of the situation? ;)
Well, I don't know if it was ever freely available in the first place, but the GameWorks-specific modules are not freely downloadable on Nvidia's site (anymore?) - instead of a link, there's a "contact us for licensing".

The other DX11 and OpenGL samples are (still) readily available, though.
 
Some questions:

With non-IHV middleware, is there ever a problem with IHV devrel getting to see the source code when the developer has a source code license?

With Mantle games, can NVIDIA devrel still look at a game's source code (under NDA?) without the developer needing to excise all the parts which touch it?
 
GameWorks libraries as plug-ins ...

It turns out that Nvidia is working on integrating its own GameWorks libraries as plug-ins that developers can readily access and use for UE4 game development. In other words, if you pay $20 for a monthly UE4 license, the core engine that you receive will not have GameWorks libraries baked into it apart from support for PhysX and Apex. If you do choose to license the GameWorks libraries, Nvidia wants to make certain they’re available and fully up to snuff in terms of performance and compatibility.

One of the concerns I raised in my article on the topic when this news was announced was that we were seeing a new era of game development in which your choice of GPU vendor would have an even greater impact on which games ran best on your own hardware. Game developers and hardware vendors have cooperated for years to optimize titles, but there’s always been a fine line between optimizing for one GPU family and making development decisions that subtly harm the competition.

While it’s clear that Nvidia and Epic are planning a tight partnership with Unreal Engine 4, this is scarcely the first time the companies have worked extensively together. It’s now been made clear that Nvidia’s entire suite of GameWorks libraries won’t be the default option for game developers or hobbyists who want to use the engine, and that does somewhat reduce the threat to competitive balance between AMD, Nvidia, and yes, Intel.

http://www.extremetech.com/computin...nto-unreal-engine-4-core-says-nvidia-and-epic
 
Overall a good article with some nice info from actual developers. Some of their concluding remarks go a little too far into speculation, but that's OK.

This, x1000, is entirely the point I've been making:
Tim Sweeney said:
Game and engine developers who choose middleware for their projects go in with full awareness of the situation.

Would I personally use GameWorks if I was making a game? Nope. But consumers would have every right to blame *me*, not NVIDIA if I decided to and it ran poorly on their hardware (be it NVIDIA, Intel, AMD or otherwise) just like any other middleware. Ultimately I'm responsible to my users for their experience with my game on whatever hardware I claim to support.
 