NVIDIA GameWorks, good or bad?

Repeating the same argument does not make it more compelling. As I've said repeatedly as well, I have no disagreement about wanting to live in a world where we all give out source, agree on standards, etc (and I might note that only one of the three IHVs is role modeling this behavior...). But your argument that "Mantle code doesn't run on NVIDIA" is not a compelling answer to the reality that code development and optimization time is a fixed resource. Nothing guarantees that the general DirectX path of a game has to be fast/optimized or that it's even doing the same things as a Mantle path.
But then again, using GameWorks also means that there's absolutely no time spent optimizing said effects for AMD/Intel.
Mantle can take some optimization work away from the DX path, and thus from NVIDIA/Intel, but GameWorks makes sure those effects won't be optimized for AMD/Intel.
 
But then again, using GameWorks also means that there's absolutely no time spent optimizing said effects for AMD/Intel.
Using GW *on AMD/Intel* means that. No one is forcing you to do that, which is why I put this squarely on ISVs. I think NVIDIA would be perfectly happy for you to not run those effects at all on competitive hardware.

GameWorks is vendor-specific for all intents and purposes, and that's fine. If ISVs are getting fooled about this point then educate them, but the existence of vendor-specific code/middleware is neither new nor a reason to freak out.
 
Repeating the same argument does not make it more compelling.
Neither does finding convoluted justifications for GW, or proposing solutions that are not realistic to implement or that rest on inaccurate, unverified assumptions.

If you can't see that your core argument isn't purely technical and does rely on the "philosophical" assumption that "game developers will continue to write optimized DirectX paths and won't do anything particularly different in the Mantle path" then I guess we're not speaking the same language.
Maybe we're not. Using specific capabilities enabled by certain APIs is not the problem being discussed. CUDA, GPU PhysX, PixelSync, even the currently AMD-only Mantle-specific features such as async compute are all valid examples.

I think it's stupid to fault NVIDIA for "allowing" their code to run on other hardware, when you whined about them not allowing it before (Batman, etc). If GW didn't work at all on AMD it wouldn't really be any different from a vendor-specific path (Mantle) now would it?
We're definitely not speaking the same language :). The objection is about developers' inability to see GW code, preventing them from "owning" the code optimizations shipping in their game. The Batman case was standard DX9 MSAA code in the game that happened to be artificially vendor-locked (for a time). To answer your question: if GW modules implemented with standard DX11 API calls were artificially vendor-locked, then this case would be very different from Mantle (see the paragraph above about vendor-specific features).

You can question my motives all you want, but as you pointed out yourself, I'm the least likely of us to have an agenda/bias here :)
I don't know what you think I pointed out but it's certainly not the innocence of your repeated and biased postings involving AMD.

If you don't want ISVs running "NVIDIA code" on your hardware, tell them to turn off GW on AMD (or better, not to use it).
"Oh but then we don't get the pretty FX that NVIDIA implemented!"
So provide the ISVs an alternative for your hardware.
"Oh but we don't have the time or resources to do that!"
Cry me a river ;)
As was said before, developers hate maintaining different code paths, and they shouldn't have to be put in this position just because they cannot see the supplied code.
 
Neither does finding convoluted justifications and solutions to GW that are not realistic to implement, or don't rely on accurate or verified assumptions.
I guess you missed the "theoretically" in that original statement. I was not implying you should actually do it; I was explaining why something being a "DLL" is sort of irrelevant. UMDs are DLLs too and we don't open source those...

Maybe we're not. Using specific capabilities enabled by certain APIs is not the problem being discussed. CUDA, GPU PhysX, PixelSync, even the currently AMD-only Mantle-specific features such as async compute are all valid examples.
Not to get off on a tangent, but the hardware/software line isn't as clear as you're trying to paint it. You guys obviously could implement up to some version of CUDA if you wanted to, but you don't.

The objection is about developers' inability to see GW code, preventing them from "owning" the code optimizations shipping in their game.
They can't see the UMD code either, something about which they constantly complain. If GW was a "GL extension", would you have the same opinion or would that make it different somehow? See for instance the GL path rendering extension which is mostly software.

To answer your question if GW modules implemented with standard DX11 API calls were artificially vendor-locked
How they are implemented is ultimately irrelevant as per the GL extension example. If they put it in NVAPI or CUDA instead would you still complain? If I made a wrapper DLL that checked for NVIDIA's vendor ID before calling into GW and called it "GW_API.dll" would that make it ok? If you don't want ISVs to use it on your hardware, tell them as much, but you can't split hairs on how NVIDIA implements it on *their* hardware, whether it be through DX11, CUDA, PhysX or custom microcode.
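For what it's worth, such a gate would only be a handful of lines. A minimal sketch using the standard DXGI adapter query; GW_EnableEffects() and the surrounding names are made-up placeholders for this hypothetical "GW_API.dll", not real GameWorks entry points:

// Minimal sketch of the hypothetical "GW_API.dll" gate described above.
// CreateDXGIFactory1/EnumAdapters1 are standard DXGI calls; GW_EnableEffects()
// is a made-up placeholder for whatever the wrapper would forward to.
#include <dxgi.h>
#include <wrl/client.h>
#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

extern void GW_EnableEffects(); // hypothetical forwarded entry point

bool IsNvidiaAdapter()
{
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return false;

    ComPtr<IDXGIAdapter1> adapter;
    if (FAILED(factory->EnumAdapters1(0, &adapter))) // adapter 0 = primary
        return false;

    DXGI_ADAPTER_DESC1 desc = {};
    adapter->GetDesc1(&desc);
    return desc.VendorId == 0x10DE; // PCI vendor ID for NVIDIA
}

void MaybeEnableGameWorksEffects()
{
    if (IsNvidiaAdapter())   // only call into the "GW" path on NVIDIA hardware
        GW_EnableEffects();
}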

Let me ask a related question. Are you guys promising never to put out any Mantle-based FX/examples/source for game developers to use for which you don't also provide an efficient DX implementation?

I don't know what you think I pointed out but it's certainly not the innocence of your repeated and biased postings involving AMD.
Oh come on Nick, you don't even know me and the "repeated biased postings" are all on the same topic. I call out all nonsense equally and I think if you ask anyone who does know me they will tell you the same thing :) You correctly pointed out that you work for AMD and I work for neither of the companies involved in this cat-fight. You tell me who is likely to be somewhat more biased in this discussion...

I have nothing against you personally and I fully understand where you're coming from. If you think I don't deal with this sort of thing on a daily basis to an even larger extent (how many folks optimize for Intel right now do you think?) then you're greatly mistaken. But I think I'm allowed to disagree with your logic here without being "biased against AMD" and by extension Intel too, since we have the exact same issue...?

Anyways we may have to agree to disagree here. Folks seem to be taking this kind of personally which was never my intention.
 
It's a weird concept initially, but you get used to it :) The reality is that you can either spend a lot of time trying to "optimize" game API use on the fly, and thus ensure that *all* games see that overhead regardless of how "nice" their use of the API is, or trust the game to be efficient. Ultimately the only right answer is to make a thin driver and let games that make efficient use of the API go as quickly as possible, even if it means dropping the "safety net" for games that do stupid stuff.

The new APIs really just take that same concept to the next level... you absolutely can screw yourself over pretty badly if you don't know what you're doing in D3D12/Mantle, but the experts finally have a path with the training wheels off :)


I'm not sure. IIRC it was the 15.31 driver that shifted to the new D3D11 UMD so perhaps someone benchmarked that? Power-constrained chips like ultrabooks - especially the Macbook Air chip with HD5000 - will show the largest differences, but going forward it's really all chips. I don't think the "old" driver was ever shipped on Haswell though so you'd have to test an HD4000 ultrabook or something if you wanted to see the delta.

Beyond that I'm not really sure what there is publicly. A lot of the improvements were just mixed in with other game improvements as well so it's not necessarily possible to tease it out without a directed test. Anyways it was mentioned here (http://techreport.com/news/24600/intel-gets-serious-about-graphics-for-gaming) and I think here (http://techreport.com/news/24604/performance-boosting-intel-igp-drivers-are-out), but like I said I'm not sure how to tease apart those performance improvements quoted to get the parts that were due to CPU overhead reduction vs. something else.

Is it possible to log clock speeds on Intel's integrated graphics? If so, it should be easy to detect, since other kinds of optimization aren't likely to affect clock speeds too much. Hardware.Fr did it for Kaveri using hwinfo64, so presumably, that might work.

Absolutely it would be a poor investment to try and exactly reverse engineer the DLL. The point was that NVIDIA is providing game developers some middleware and anyone else could do the same if they wanted. This is entirely a problem that other IHVs need to work out with ISVs. If you don't want them using X, Y or Z because it doesn't work well on your hardware, tell them as much. If they say "well that's GW doing it" then tell them to stop using GW. If they tell you "we signed a contract" then swat them on the side of the head and tell them to use their brains next time.

I can easily picture that conversation:

AMD — Hey, your tessellation effect is really overdone and inefficient, it kills performance.
ISV — Yeah, that's a GameWorks effect, we can't modify it.
AMD — So don't use it.
ISV — But it works and it's easy. Plus it's in our contract.
AMD — Why would you sign such a contract?
ISV — It came with support and marketing funds.
AMD — Oh. But still, it hurts performance, even on NVIDIA cards.
ISV — Yeah, but, you know. Marketing funds. Next time if you can match NVIDIA's offer we won't use GameWorks.
AMD — Err… I can get you support and some free Radeons.
ISV — And marketing funds?
AMD — Err… how about some Snickers bars? No? Maybe even some Kit Kats if I move some money around. No promises, though.
 
Is it possible to log clock speeds on Intel's integrated graphics?
Absolutely, I think even GPU-Z or similar can do it. GPA definitely can.

AMD — Oh. But still, it hurts performance, even on NVIDIA cards.
On this point I think everyone can agree that NVIDIA is doing stupid nonsense that needs to be stopped. I fully agree that lowering performance on your own cards because it hurts the competition more is BS, and borderline anti-competitive.
 
Again: during the previous (Forbes instigated) GW-is-bad round, Nvidia said that the source is now available for game developers. So can we at least drop that strawman?
 
Again: during the previous (Forbes instigated) GW-is-bad round, Nvidia said that the source is now available for game developers. So can we at least drop that strawman?

Wasn't it "available for some game developers on certain licensing terms, not automatically to all devs using it"?
 
Again: during the previous (Forbes instigated) GW-is-bad round, Nvidia said that the source is now available for game developers. So can we at least drop that strawman?
All that was mentioned was something along the lines of a separate license for Gameworks source. This does not imply free access or access as standard. If this license comes at a cost to developers then the problem statement hasn't really changed.
 
Absolutely, I think even GPU-Z or similar can do it. GPA definitely can.

Cool. I hope someone tries it.

On this point I think everyone can agree that NVIDIA is doing stupid nonsense that needs to be stopped. I fully agree that lowering performance on your own cards because it hurts the competition more is BS, and borderline anti-competitive.

I'd say it's clearly over that border; it might even be illegal if NVIDIA were in a dominant position. In any case, until I see clear signs of a change in policy, they're not seeing any of my money.

Again: during the previous (Forbes instigated) GW-is-bad round, Nvidia said that the source is now available for game developers. So can we at least drop that strawman?

NVIDIA said so, but:

1. Is it true?
2. If it is, are developers allowed to share that code with other IHVs?
3. And are developers even allowed to modify it?
 
Again: during the previous (Forbes instigated) GW-is-bad round, Nvidia said that the source is now available for game developers. So can we at least drop that strawman?

I haven't completely followed what happened there, but what was said is that they can sell/show the code to developers under a paid license; that said, it remains the intellectual property of Nvidia. (We can imagine that the developers can neither use it to optimize the game for AMD, nor show it to anyone who doesn't have the license; in addition I suppose they can't even alter or modify it.)

I hope to be wrong there, but I don't really see it otherwise right now. (Well, if I'm not mistaken, the ones using GameWorks are Ubisoft, who already use it in AC, WD and the next Far Cry, and Crytek too, though the latest news about them is not really encouraging. Crytek plans to include Mantle as well, which could lead to some funny things when you think about it.)
 
By having access to the source code (and, presumably, the ability to change it), the ISVs can make fixes as they see fit.

It's completely unreasonable to expect that highly complex IP can be shared with a direct competitor.

Once the ISV has access to the source code, it's no different from what I assume is currently happening when Nvidia and AMD sponsor a game: one gets access to the game source code, the other doesn't.

E.g. I doubt that Nvidia was allowed to look at the Battlefield 4 source code, just like they didn't have early access to a Tomb Raider build with AMD's special effects.

Then again, Nvidia claims that they don't need source code to optimize for games. AMD claims they need it. Maybe that's the core problem they should fix first?
 
For BF4, I wouldn't bet on this, for the DX version of course (why would they get the Mantle one anyway?). I remember that DICE uses a lot of Nvidia tools when developing their games, and I'm 100% sure that Nvidia and DICE collaborated on the development of Battlefield 4's DirectX path...
 
Everyone likes to compare Gameworks to Mantle, but it seems more like Havok to me. Havok code runs on a CPU and it's created by a CPU vendor.

I've never seen an analysis but I can imagine more time is spent optimizing Havok for Intel rather than AMD CPUs. I bet Intel is less likely to do anything nefarious since they have a monopoly position and don't need to, but the risk is higher when competition is more intense as with graphics.

Ideally none of these physics or effects middleware packages would be owned by a hardware vendor so the real shame is the apparent lack of a business model to support an independent company supplying these services.
 
There's a difference between using Nvidia tools (you can download them from their developers website AFAIK), and giving access to your source code. But if you know for sure...
 
I've never seen an analysis but I can imagine more time is spent optimizing Havok for Intel rather than AMD CPUs.
I think there's no doubt that this is the case. One case where the analogy breaks down is that, for a pure CPU physics library, there is no Windows API required at all through which the library has to go. So there's no way to intercept and optimize. AFAIK there's no way to launch anything on a GPU without going through a Windows API call.
 
On this point I think everyone can agree that NVIDIA is doing stupid nonsense that needs to be stopped. I fully agree that lowering performance on your own cards because it hurts the competition more is BS, and borderline anti-competitive.

That's the crux of the matter, Andrew Lauritzen. Nvidia has abused their developer relationships before, and GameWorks reinforces that.

Only after we all complained did Nvidia open up their source code. For a fee. And even after you've paid up you can't talk with AMD about that code, if AMD's accusations are true. That may be a perfectly "usual" business decision, but it's very different from AMD's way (Mr. Huddy himself mentioned TressFX).
 
That may be a perfectly "usual" business decision, but it's very different from AMD's way (Mr. Huddy himself mentioned TressFX).
It is a much easier business decision to give everything for free when there's not much of it to begin with.

I'm sure TressFX is a neat effect, but in terms of scope it's in the "hey, cool research project" league. Not exactly the kind of stuff GW seems to have.

Go to the site that lists AMD's innovations (http://www.amd.com/en-us/innovations/software-technologies) and the only things relevant to GW are TressFX and... Mantle.
 
That may be a perfectly "usual" business decision, but it's very different from AMD's way (Mr. Huddy himself mentioned TressFX).
Which, as I've mentioned before, AMD still will not allow other IHVs to post optimized versions of, as Mr. Huddy knows first-hand :) AMD folks please correct me if you've changed your minds on this issue... as you've said yourselves, let's all collaborate on making the graphics industry better, right?

I'm not going to disagree that NVIDIA is the most manipulative of the IHVs... which is partially why this GW situation simply doesn't rate high on the scale ;) But part of that simply comes from the fact that they have the resources to do it. "Power corrupts" and all, while the rest of us just get to complain and be jealous of their reach.

Everyone likes to compare Gameworks to Mantle, but it seems more like Havok to me. Havok code runs on a CPU and it's created by a CPU vendor.
It is indeed unfortunate that Havok, PhysX, etc. are owned by IHVs. I'm not totally sure how PhysX operates (there were some accusations about crippling CPU perf a few years back, but it wasn't clear to me whether that was intentional sabotage rather than just neglect/ignorance), but from my experience Havok operates as much as a separate entity from Intel as any other company. In some ways they even work less directly with us than other ISVs do, to avoid perception issues from competing middleware vendors. I'm not sure why Intel owns Havok, to be honest, but for all intents and purposes it's really a separate entity that does its own thing (i.e. spends a lot of time writing code for consoles, etc...)

Obviously you may not take my word for it, but feel free to ask them or other folks familiar with the relationship there. I'd be happy if they were a separate company, but at least they don't operate as "Intel" per se.

Ideally none of these physics or effects middleware packages would be owned by a hardware vendor so the real shame is the apparent lack of a business model to support an independent company supplying these services.
Yeah it would be nice. I think engines really have an opportunity to change the situation here. As with most things, ISVs have the power; they can definitely reject IHV-owned middleware as soon as any indication of manipulation comes up (or even before, on principle).

I still maintain that the line of what's "API/driver software" and what's "middleware" is grey though, especially when IHVs provide their own APIs and extensions.

One case where the analogy breaks down is that, for a pure CPU physics library, there is no Windows API required at all through which the library has to go. So there's no way to intercept and optimize.
True, but there's also a lot less potential to do something architecture-specific/friendly on the CPU since the level of "abstraction" is far lower and CPU performance differences are much smaller (GPU implementations of the same feature can vary by an order of magnitude). And of course you can definitely see all the instructions going through a CPU including any abuse such as running different code on different machines, etc.
 