NVIDIA GameWorks, good or bad?

If they can't provide the same API they won't be used. Oracle v. Google established that APIs can't be copyrighted (in the US) for the moment ... but unlike PhysX, a lot of this code doesn't seem to have an openly available API.

If the GameWorks license is particularly obnoxious, it might even become impossible for them to meaningfully cooperate with developers using it at all.

Since GameWorks is hardly a de facto standard for now, it's not that important to be compatible at the API level.

The biggest problem I have with this discussion is that some seem to believe that because someone isn't doing something, then no one should be doing it. I mean, if GameWorks is such a good idea that game developers are willing to use it, why don't AMD and Intel do that as well? Shouldn't we encourage them to compete with NVIDIA, instead of trying to suppress NVIDIA for doing it? If it's not a good idea, then there's no need to worry, because eventually no one will use it, no?
 
How bad an idea it is just raises the cost.

Both AMD and NVIDIA are at the moment paying developers to use their technology. Outright, through simple donation of manpower or by providing middleware (mostly PhysX).
 
How bad an idea it is just raises the cost.

Both AMD and NVIDIA are at the moment paying developers to use their technology. Outright, through simple donation of manpower or by providing middleware (mostly PhysX).

If you put it this way, then a lot of things are just "paying developers to use their technology." Apple's App Store, Google Play, DirectX, all the same. They must all be very bad ideas.
 
If you put it this way, then a lot of things are just "paying developers to use their technology." Apple's App Store, Google Play, DirectX, all the same. They must all be very bad ideas.
Hey I was almost honestly replying before I noticed the fallacy ...

I did not say that payment in services or goods makes something a bad idea, just that it can offset the cost of one for the developer.
 
I did not say that payment in services or goods makes something a bad idea, just that it can offset the cost of one for the developer.

Well, yes, but I do think developer support is important for hardware vendors. A hardware vendor's job isn't just "making good hardware"; that alone is never enough. GameWorks can be seen as a package coming along with NVIDIA's GPU products. This is really not that different from Mantle.
 
Mantle, though problematic in its own right, will be an open API à la PhysX (AMD has gone on record on this). Not all parts of GameWorks are openly available ... if it requires an NDA, this could be rather disastrous for non-NVIDIA developer relations; sending on-site engineers would be essentially impossible.
 
Mantle, though problematic in its own right, will be an open API à la PhysX (AMD has gone on record on this).
Meh, they get credit for that only when they actually do it. For now, it's no different from CUDA being "open", which NVIDIA said at the start too. So far I have yet to see any specs, despite them saying "just come ask us"...
 
Mantle, though problematic in its own right, will be an open API à la PhysX (AMD has gone on record on this).
Is this really true? I know the guy from Dice said so, but has AMD confirmed that they want this to be open for other vendors as well?

IMO the only reason for AMD to declare it to be open is to be able to claim higher moral ground (without actually doing it.)
 
Is this really true? I know the guy from Dice said so, but has AMD confirmed that they want this to be open for other vendors as well?

IMO the only reason for AMD to declare it to be open is to be able to claim higher moral ground (without actually doing it.)

As far as I'm aware, AMD has only said they'd be open to it if other IHVs wanted to support Mantle, which makes sense for at least three reasons:

1) It would validate their approach;
2) It would make Mantle a de facto standard that they would still control, which would give them a lot of power;
3) Which is precisely why NVIDIA won't adopt it, which AMD knows, which is why they can extend this offer without worrying; they know full well it won't be accepted.

It's exactly the same as PhysX in all three reasons.
 
How exactly is PhysX "open"? You can't just tell NV "hey, we'll support it, np"; at least to my understanding, making your hardware support CUDA isn't free, and supporting CUDA is required to support PhysX. AMD at least claims they offered NVIDIA to support PhysX, if they were allowed to create an OpenCL port of it.
(This is assuming you obviously mean GPU PhysX, not PhysX in general)
 
There's also the question of who controls the spec. For example, if we ignore potential legal problems, in theory anyone can implement their own CUDA, including AMD. However, it's unlikely to be optimal, as CUDA might have some elements that are native to NVIDIA's GPUs but (not yet) implemented in AMD's GPUs, and AMD's GPUs may have some elements that aren't exposed in CUDA. Of course, open standards such as OpenCL have similar problems, but at least everyone can contribute to the standard, and the extension mechanism is quite reasonable.

Mantle is, at least for now, pretty much the same. I haven't seen its spec yet, but for a lower level API it's likely to have something that's close to how AMD's GPU works. Even if NVIDIA's able to implement their own Mantle, it's probably not going to be as effective as AMD's implementation.

I don't oppose Mantle; I like the idea. I posted here about the idea of some (close to) "native instruction sets" for GPUs before. Ideally it should be something that everyone can support or adhere to, but that'd need a consortium or something and can be very time consuming. Mantle could be that something, but it might not. In any case, at least it can be like CUDA, which eventually led to the creation of OpenCL.
 
Speaking for myself, there's a reason why I didn't identify GW as problematic based on "openness." PhysX isn't open. CUDA isn't open. Mantle isn't open yet. Some people still argue that TressFX isn't open either, and while I think that argument is weaker, it shows that there are a lot of ways in which the word "open" can be used.

The problem I have with GW rests on the question of whether or not developers can collaborate with both IHVs to optimize code. Mantle, while it gives a unique advantage to AMD, does not penalize NV's ability to work with developers to optimize code for DX11. But Mantle isn't "open" for any meaningful use of the term as far as I can tell.

Whether this is bad depends entirely on how you set your terms. Certainly it's not bad for Nvidia. It's not bad from a cost/benefit analysis of shipping code out, either. But what I don't think anyone wants to see is a system where IHV support becomes a buy-in system, where AMD and NV both provide code paths to guarantee good performance on their own hardware, but potential third parties are again locked out.

It seems to me that the point of common standards for both DX and OGL was to break down this kind of walled garden and make cross-porting easier for everyone. Is it in anyone's best interest to return to that kind of system?
 
How exactly is PhysX "open"?
I said open API ... I don't quite see how that leads to any confusion. I can see how all the selective quoting going on leads to confusion, though.

My concern is whether merely having access to the GameWorks binaries requires an NDA. If so, will they give competitors' on-site engineers the opportunity to sign that NDA without onerous restrictions?
 
The problem I have with GW rests on the question of whether or not developers can collaborate with both IHVs to optimize code. Mantle, while it gives a unique advantage to AMD, does not penalize NV's ability to work with developers to optimize code for DX11. But Mantle isn't "open" for any meaningful use of the term as far as I can tell.

I don't think this is entirely fair. If a developer decides to use Mantle, it takes away resources available for Direct3D optimisation, so the game's performance on other IHVs' GPUs may suffer.

You might say, hey, but NVIDIA is free to help this developer optimise their Direct3D code path! However, if it's this simple, then you can say that even if a developer decided to use GW, AMD can still help them create effects without using GW. Of course, it's much harder than helping someone tweak existing code, but that's the price to pay for not providing something NVIDIA has.

As I said before, the way for AMD or Intel to "solve" this problem is to provide similar libraries. All they have to do is outperform NVIDIA's offerings. They don't even have to write them themselves; sponsoring some third-party effect libraries is probably good enough. If they are not able to do that, it's their problem, definitely not NVIDIA's.

It seems to me that the point of common standards for both DX and OGL was to break down this kind of walled garden and make cross-porting easier for everyone. Is it in anyone's best interest to return to that kind of system?

Well, IMHO, common standards are meant for compatibility, not for best performance. You almost always want to have multiple code paths for different hardware to be optimal on everything. Even if the application targets the same instruction set (e.g. x86), this is sometimes required.
 
I don't think this is entirely fair. If a developer decides to use Mantle, it takes away resources available for Direct3D optimisation, so the game's performance on other IHVs' GPUs may suffer.
There is a huge difference between an IHV owning the code that runs on all IHVs' hardware and an IHV controlling the code that runs only on their hardware.

However, if it's this simple, then you can say that even if a developer decided to use GW, AMD can still help them create effects without using GW.
As I said before, the way for AMD or Intel to "solve" this problem is to provide similar libraries.
All they have to do is outperform NVIDIA's offerings. They don't even have to write them themselves; sponsoring some third-party effect libraries is probably good enough. If they are not able to do that, it's their problem, definitely not NVIDIA's.
Other IHVs cannot provide the same libraries since they're not exposed to the libraries' entry points, parameters, what they do internally, etc. And providing different libraries means different code paths, which leads to my next point.
You almost always want to have multiple code paths for different hardware to be optimal on everything.
Developers hate different code paths. They shouldn't have to put up with additional complexity, development time, QA risk and maintainability issues in their game because they're not able to optimize the same code path for all hardware due to its abstraction in a black-box library.
Working closely with all IHVs, and having the developer manage the optimizations they propose for a given effect, is the best way to ensure that a game runs well overall on all hardware without going the multiple-paths route.
 
Other IHVs cannot provide the same libraries since they're not exposed to the libraries' entry points, parameters, what they do internally, etc. And providing different libraries means different code paths, which leads to my next point.

Why do they need to be the same? I mean, if AMD or Intel has a better special-effect library, better than GameWorks, developers will use it instead of GameWorks. Of course, NVIDIA may try to "buy" some developers into using GW, but if GW is not competitive, only desperate developers are going to be bought.

My point is, when people complain that GW could be NVIDIA's way of ensuring that games run better on their HW, they shouldn't be complaining to NVIDIA; they should complain to AMD and Intel and ask why they aren't doing the same thing to make something better than GW.
 
There is a huge difference between an IHV owning the code that runs on all IHVs' hardware and an IHV controlling the code that runs only on their hardware.
How about the developers, and everybody who has a problem with GW being multi-vendor, simply pretend that GW only works on Nvidia GPUs and design their game with that in mind? And then suddenly pretend to discover that there's an additional bonus: hey! It works on other hardware too! What a pleasant surprise!

The other ones simply accept the fact that life is full of decisions where trade-offs need to be made... Yes, some effects may not be implemented as efficiently on other hardware. Tough, but think about the amount of work we saved by taking this shortcut.
 
Using one library can save work; using three with different APIs adds work ... so either the payoffs will have to increase, or the current stratified situation just gets worse.

I don't understand why you'd think that's a good idea. We have an object lesson in where this goes: Batman: AA ... compare that to the traditional situation, exemplified by the recent TressFX case. If they did it the way you want, the engineers NVIDIA sent could have stayed home.

It makes sense for the market incumbent, it makes sense for the developers, and it's still a fucking bad idea ... because it's bad for us.
 