Hairworks: apprehensions about closed-source libraries proven beyond reasonable doubt?

I tend to believe that AMD bears some responsibility too, to a certain extent. Maybe they don't push the work on those titles enough, or don't pressure the developers enough, who knows. Maybe they should strengthen their relations with the studios that use GameWorks or that are close to Nvidia.

As long as they don't use their own restrictively licensed source code, it's an asymmetrical battle ... any developer they get on their side can still cooperate at the source code level with NVIDIA, while any company that gets a GameWorks source code license will find it very hard to cooperate that way with AMD (or Intel, for that matter).

AMD is fighting the battle in a cheaper way ... PR.
 
And so does AMD.
The difference is that AMD chose to open TressFX, FreeSync and Mantle (through the Vulkan fork). nVidia chose to lock Hairworks, G-Sync and GPU-accelerated PhysX.
Customers can't force nVidia to do jack, but they can very well prefer another IHV based on the above.

Nvidia also chooses to spend a greater percentage of its revenue on R&D and has clearly stated that it considers itself a software company.
That was more than just idle talk. Combine this with the fact that Nvidia also has greater total revenue than AMD, and is arguably more focused, and it means the company has a lot more value locked up in its software.
Nvidia has built a rich software environment, which is just one of the benefits it provides to its customers. While the availability of source code from AMD is a great thing, it is a reflection of market realities as much as a difference in philosophy.

And so is TressFX.
Though TressFX is open for nVidia, Intel and ISVs to optimize as they see fit. Hairworks is deliberately closed in order to prevent such optimizations.

It is closed to protect Nvidia's investment and made available where that is in the company's interest, yes.

No one was talking about hardware development.
But if you do want to talk about software development, you should know that Intel had to pay a sizable sum of money to governments and to AMD when they were found guilty of purposely sabotaging performance on AMD CPUs with their x86 compilers.



Perhaps nVidia is lucky that official anti-monopoly entities don't take gaming very seriously. Yet.

I think that's a laughable sentiment. In which field could Nvidia possibly be perceived to have anywhere close to a monopoly?
 
...just a question I was asking myself: using an insane tessellation level which gives no serious visual advantage but cripples AMD cards is just a coincidence, right?

Or maybe if AMD provided something similar to the GeForce Experience utility, its users could have the benefit of automatically selecting lower quality tessellation to match the hardware's limitations.
 
Isn't AMD Gaming Evolved, aka Raptr, a little bit like GeForce Experience? Although I don't think it adjusts Catalyst profiles (i.e. the tessellation factor) unless the game's settings expose it.
 
Isn't AMD Gaming Evolved, aka Raptr, a little bit like GeForce Experience? Although I don't think it adjusts Catalyst profiles (i.e. the tessellation factor) unless the game's settings expose it.

Yes, that does look like the same kind of utility. I wasn't aware of that one; good point.
 
...just a question I was asking myself: using an insane tessellation level which gives no serious visual advantage but cripples AMD cards is just a coincidence, right?
You have to be careful about 'no serious visual advantage'. 16x MSAA exists alongside 8x MSAA. It doesn't provide a serious visual advantage either, but it's still there, and there are probably cases where the difference is visible.

I expect the same to be true for hair tessellation. With complex libraries like this, there must be an API with multiple dials and knobs to select a quality setting. They are probably set to maximum in Ultra mode, because why not? The fact that it helps Nvidia more than AMD is a nice bonus, but it's not Nvidia's fault that AMD sucks more at tessellation: it has been like that since the beginning.
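
To illustrate what such a knob looks like at its simplest: a driver-side override in the spirit of the tessellation cap AMD exposed in Catalyst would just clamp whatever factor the application (or a middleware library) requests. A minimal C++ sketch of the idea, with made-up names rather than any real driver interface:

```cpp
#include <algorithm>
#include <cstdio>

// Hypothetical driver-side tessellation override (names are illustrative,
// not a real driver API): the factor requested by the application is clamped
// to a user-selected cap before the tessellator ever sees it.
struct TessellationOverride {
    float maxFactor;  // user cap, e.g. 16.0f instead of the D3D11 maximum of 64.0f

    float apply(float requestedFactor) const {
        return std::min(requestedFactor, maxFactor);
    }
};

int main() {
    TessellationOverride cap{16.0f};
    std::printf("requested 64x -> applied %.0fx\n", cap.apply(64.0f));  // clamped to 16x
    std::printf("requested  8x -> applied %.0fx\n", cap.apply(8.0f));   // passes through
    return 0;
}
```

A cap like this trades geometric detail for performance without touching the game or the library, which is exactly why it makes a handy escape hatch when the library itself is closed.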
 
Or maybe if AMD provided something similar to the GeForce Experience utility, its users could have the benefit of automatically selecting lower quality tessellation to match the hardware's limitations.

So what you would like to see is AMD providing developers with closed source code so that the performance of Nvidia cards is deliberately sabotaged? After all, you applaud Nvidia for doing it, so it would be fantastic for gaming if AMD did it as well?

And not just sabotaging the performance of their competitor, but sabotaging the performance of any card that isn't the current generation, thus forcing its own users into a choice between poor performance and upgrading.

Even better when the closed source version is inferior to the open version. But it's OK, because it makes the competition look bad. Who cares if the game looks worse with the closed source version than it would with the version that is open for everyone to optimize?

Yup, sounds great.

There's a reason you see increasing unhappiness among developers with regard to GameWorks, and that is one of them. But as long as Nvidia continues to pay developers to use it, it'll get used.

Regards,
SB
 
You have to be careful about 'no serious visual advantage'. 16x MSAA exists alongside 8x MSAA. It doesn't provide a serious visual advantage either, but it's still there, and there are probably cases where the difference is visible.

I expect the same to be true for hair tessellation. With complex libraries like this, there must be an API with multiple dials and knobs to select a quality setting. They are probably set to maximum in Ultra mode, because why not? The fact that it helps Nvidia more than AMD is a nice bonus, but it's not Nvidia's fault that AMD sucks more at tessellation: it has been like that since the beginning.

By that rationale, if TressFX 4.0 or some other AMD feature used double precision "for extra accuracy and quality", thereby completely thrashing performance on Maxwell, it would be perfectly fine and the terrible performance on NVIDIA's cards would be their fault for sucking at DP.

(I have a feeling the end of the previous sentence will considerably increase the number of hits on this page from Google.)
 
Even better when the closed source version is inferior to the open version.

You've mentioned this a number of times. Aside from the performance hit I'm not sure there's a direct comparison. Do we have examples of TressFX doing animal/beast fur like there is in TW3?
 
By that rationale, if TressFX 4.0 or some other AMD feature used double precision "for extra accuracy and quality", thereby completely thrashing performance on Maxwell, it would be perfectly fine and the terrible performance on NVIDIA's cards would be their fault for sucking at DP.
Absolutely. It would be completely within AMD's rights to do that. And it would be up to the developer to decide whether or not it was acceptable to use that mode. Just like it was up to developers to use Mantle or not, something that wasn't available to Nvidia.

However, in your hypothetical case, within a week Nvidia would release a driver that patched the shader to use FP32 instead of FP64, instead of just whining about not getting access to the source code.

(Every time this kind of topic comes up, it fills me with incredible joy that AMD chose to 'sabotage' Nvidia with Tomb Raider and TressFX. And that Nvidia was able to fix it within a week. It's a gift that will keep on giving. ;) )
 
I think that's a laughable sentiment. In which field could Nvidia possibly be perceived to have anywhere close to a monopoly?

I've had enough experience with lawyers to know that every single one of them will find the opposition's sentiments laughable right up until the verdict comes in and they have to break the poker face.

How about making deals with ISVs to degrade performance on both their own cards and the competition's, with the aim of hurting the competition even more?



their fault for sucking at DP
sucking at DP
sucking at DP
sucking at DP.

:runaway: I'm feeling triggered!:runaway:
 
Absolutely. It would be completely within AMD's rights to do that. And it would be up to the developer to decide whether or not it was acceptable to use that mode. Just like it was up to developers to use Mantle or not, something that wasn't available to Nvidia.

However, in your hypothetical case, within a week Nvidia would release a driver that patched the shader to use FP32 instead of FP64, instead of just whining about not getting access to the source code.

(Every time this kind of topic comes up, it fills me with incredible joy that AMD chose to 'sabotage' Nvidia with Tomb Raider and TressFX. And that Nvidia was able to fix it within a week. It's a gift that will keep on giving. ;) )

Replacing "double" with "float" in a string is a fairly trivial affair, but some things could be harder.

NVIDIA managed to fix their TressFX problems within a week because they had the code. Optimization based on reverse-engineered DLLs is a very different matter.
 
NVIDIA managed to fix their TressFX problems within a week because they had the code. Optimization based on reverse-engineered DLLs is a very different matter.
I may be wrong about this, but I don't remember TressFX being available publicly when Tomb Raider was introduced.

But even if they had the source: it's not as if they can change it and recompile the game. In the end, it's still a low-level fix somewhere in the driver... Nvidia has stated multiple times that they don't need and almost never use source code to optimize their drivers. In a world where it's hard to get source code even when NDAs are in place, that doesn't seem like an unreasonable statement.

I don't think this has anything to do with reverse engineering DLLs; it's about getting a shader in some intermediate representation handed to you through the API and mapping it optimally to your hardware.

Every time AMD releases a new driver with improved performance, do you think they had source code? Do you think they reverse-engineered a DLL? I think not. They are using the lack of source code as a crutch. IMO they are simply starved for driver engineers and things are falling through the cracks.
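
For what it's worth, that kind of low-level fix is usually described as shader replacement: the driver recognizes a known-slow shader by hashing its bytecode and silently swaps in a hand-tuned variant, no source code required. A bare-bones C++ sketch of the concept, with entirely hypothetical names and types (not any vendor's actual mechanism):

```cpp
#include <cstddef>
#include <functional>
#include <string>
#include <unordered_map>

// Stand-in for an opaque compiled-shader blob.
using Bytecode = std::string;

struct ShaderReplacer {
    // Map from bytecode hash to the hand-tuned replacement.
    std::unordered_map<std::size_t, Bytecode> replacements;

    // Called at shader-creation time; returns the substitute if one is known,
    // otherwise passes the original through untouched.
    const Bytecode& resolve(const Bytecode& incoming) const {
        auto it = replacements.find(std::hash<Bytecode>{}(incoming));
        return it != replacements.end() ? it->second : incoming;
    }
};

int main() {
    ShaderReplacer replacer;
    const Bytecode slow = "...bytecode of the shipped shader...";
    const Bytecode fast = "...hand-tuned replacement...";
    replacer.replacements[std::hash<Bytecode>{}(slow)] = fast;

    return replacer.resolve(slow) == fast ? 0 : 1;  // 0 = substitution worked
}
```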
 
I may be wrong about this, but I don't remember TressFX being available publicly when Tomb Raider was introduced.

It's unknown how tightly NVIDIA and Nixxes/Jason Lacroix cooperated ... what is known is that one of the first things NVIDIA said was that they couldn't fix it purely in the drivers.

"The developer will need to make code changes on their end to fix the issues on GeForce GPUs as well."
 
Well, at least Nvidia apologized and got to work instead of whining about it.

And one thing I don't understand: if the performance on AMD can be fixed simply by reducing the tessellation settings, then the problem isn't one of no source at all. It's simply AMD GPUs running into a limit that Nvidia's don't have.
 
So what you would like to see is AMD providing developers with closed source code so that the performance of Nvidia cards is deliberately sabotaged? After all, you applaud Nvidia for doing it, so it would be fantastic for gaming if AMD did it as well?

AMD is welcome to work with software vendors to leverage benefits unique to their hardware, yes.
GCN based architectures dominate current console hardware and I've read quite a few suggestions that this may be a great opportunity for AMD.
That might not translate down to older architectures like the VLIW4 and VLIW5 generations, but those customers should probably be preparing for upgrades at some point.
Similarly, I believe HSA is intended as a prospective competitive advantage. That would of course be to the exclusion of non-HSA compatible vendors.

As a prospective videocard buyer I can then weigh whether such benefits are compelling to me.
AMD probably made some sales because customers felt a Mantle compatibility check mark was a plus for a particular card.
Conversely, I can also decide to avoid software where some part of the development effort went towards a Mantle port if it doesn't bring value to me.
 
Well, at least Nvidia apologized and got to work instead of whining about it.

I only said the level of cooperation was unknown, but it was still there ... they got to work in cooperation with the developers. CDPR has said that isn't possible here, presumably because they don't have the source code either.
 
Well, at least Nvidia apologized and got to work instead of whining about it.

And one thing I don't understand: if the performance on AMD can be fixed simply by reducing the tessellation settings, then the problem isn't one of no source at all. It's simply AMD GPUs running into a limit that Nvidia's don't have.

From what I gather, reducing tessellation levels mitigates the issue somewhat, but doesn't solve it entirely.

Besides, Kepler apparently suffers almost as much as GCN with Hairworks, and it's no slouch with tessellation, so it's not just a matter of GCN's sucky tessellation performance or AMD's supposedly lackluster drivers. Plus, Tonga actually has pretty decent tessellation performance, and from what I read a few posts above it's still slower than Tahiti with Hairworks.

There's more than tessellation going on.
 