HairWorks: apprehensions about closed-source libraries proven beyond reasonable doubt?

In this case it's even more relevant because the open source TressFX is superior to Hairworks in almost every way, especially when it comes to performance across a variety of hardware.

Regards,
SB

Provided you are not prohibited by NDA, is there anything else you can share about your experience implementing TressFX vs Hairworks?
Which particular aspects make you say TressFX is superior in almost every way? Do you have any comments on the level of developer support between these 2 solutions?
 
AMD Is Wrong About 'The Witcher 3' And Nvidia's HairWorks


Let’s assume Huddy’s claim of working with the developer “from the beginning” is true. The Witcher 3 was announced February 2013. Was 2+ years not long enough to approach CD Projekt Red with the possibility of implementing TressFX? Let’s assume AMD somehow wasn’t brought into the loop until as late as Gamescom 2014 in August. Is 9 months not enough time to properly optimize HairWorks for their hardware? (Apparently Reddit user “FriedBongWater” only needed 48 hours after the game’s release to publish a workaround enabling better performance of HairWorks on AMD hardware, so there’s that.)

Hell, let’s even assume that AMD really didn’t get that code until 2 months prior, even though they’ve been working with the developer since day 1. Do you find that hard to swallow?

That’s all irrelevant in my eyes, because the ask never came in time. Via Ars Technica, Huddy claims that when AMD noticed the terrible HairWorks performance on their hardware two months prior to release, that’s when they “specifically asked” CD Projekt Red if they wanted to incorporate TressFX. The developer said “it was too late.”

Well, of course it was too late. Nvidia and CD Projekt Red spent two years optimizing HairWorks for The Witcher 3. But here’s the bottom line: The developer had HairWorks code for nearly two years. The entire world knew this. If AMD had been working with the developer “since the beginning” how on earth could they have been blindsided by this code only 2 months prior to release? None of it adds up, and it points to a larger problem.

Look, I respect AMD and have built many systems for personal use and here at Forbes using their hardware. AMD's constant championing of open-source drivers and their desire to prevent a fragmented PC gaming industry is honorable, but is it really because they don't want to do the work?

A PC enthusiast on Reddit did more to solve the HairWorks performance problem than AMD apparently has. AMD's last Catalyst WHQL driver was released 161 days ago, and the company hasn't announced one on the horizon. Next to Nvidia's monthly update cycle and game-ready driver program, this looks lazy.

What you don’t do is expect your competitor to make it easier for you by opening up the technology they’ve invested millions of dollars into. You innovate using your own technologies. Or you increase your resources. Or you bolster your relationships and face time with developers.

In short, you just find a way to get it done.

If I sound frustrated, it’s because I am. I’ve been an enthusiastic fan of AMD for a good long while (just look at the numerous DIY builds and positive reviews I’ve given them), and last year at this same time I was admittedly on the other side of this argument. But what I’m seeing now is a company that keeps insisting its sole competitor make its job easier “for the good of PC gaming.” And I see said competitor continuing to innovate with graphics technologies that make games more beautiful. And I see promises like the concept of “OpenWorks” lying stagnant a full year after they’re hyped up. And I see AMD’s desktop GPU market share continue to slip and think to myself “maybe this is not a coincidence.”

http://www.forbes.com/sites/jasonev...ng-about-the-witcher-3-and-nvidias-hairworks/
 
In this case it's even more relevant because the open source TressFX is superior to Hairworks in almost every way, especially when it comes to performance across a variety of hardware.

Regards,
SB

Really? I've never seen either in person, but HairWorks seems to look better in pics/vids. Can you elaborate?
 
Provided you are not prohibited by NDA, is there anything else you can share about your experience implementing TressFX vs Hairworks?
Which particular aspects make you say TressFX is superior in almost every way? Do you have any comments on the level of developer support between these 2 solutions?
You can assess the quality and performance of TressFX yourself by running the existing SDK sample that's freely available on the AMD developer website. It runs on both AMD and NVIDIA GPUs. This is not the latest version but the main tech is the same.

In a nutshell (and based on public information) the technical comparisons are as follows:

NVIDIA Hairworks
Compute-based simulation.
Uses isoline tessellation with tessellation factors up to 64 (the maximum possible value) to generate curvature and additional strands.
Uses the geometry shader for extruding segments into front-facing polygons.
Renders hair strands onto an 8x MSAA render target to obtain smooth edges.
No OIT solution; non-edge hair strand pixels are fully opaque.

AMD TressFX
Compute-based simulation with master/slave support to reduce simulation cost.
Uses a fixed number of vertices per strand (user-configurable, from 8 to 64).
Uses the vertex shader for extruding segments into front-facing polygons (a sketch of this expansion follows the lists).
Per-pixel linked list OIT solution for hair transparency and smooth edges (no MSAA in any part of the pipeline). Configurable for the desired performance/quality trade-off.
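
To make the "extruding segments into front-facing polygons" point concrete, here is a minimal sketch of the expansion both libraries perform (HairWorks in the geometry shader, TressFX in the vertex shader), transcribed to a CUDA kernel for illustration; all names and the data layout are my own assumptions, not code from either SDK:

```cuda
// Hypothetical sketch of camera-facing strand expansion, not HairWorks or
// TressFX source. Each strand vertex is duplicated left and right of the
// strand axis, so every segment rasterizes as two triangles with no
// tessellation or geometry shader required.
#include <cuda_runtime.h>

__device__ float3 sub(float3 a, float3 b) { return make_float3(a.x - b.x, a.y - b.y, a.z - b.z); }
__device__ float3 add(float3 a, float3 b) { return make_float3(a.x + b.x, a.y + b.y, a.z + b.z); }
__device__ float3 mul(float3 a, float s)  { return make_float3(a.x * s, a.y * s, a.z * s); }
__device__ float3 cross3(float3 a, float3 b) {
    return make_float3(a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x);
}
__device__ float3 normalize3(float3 v) {
    float len = sqrtf(v.x * v.x + v.y * v.y + v.z * v.z);
    return mul(v, 1.0f / fmaxf(len, 1e-6f));
}

// One thread per strand vertex; outputs two ribbon vertices per input vertex.
__global__ void expandStrands(const float3* pos,      // simulated strand vertices
                              const float3* tangent,  // per-vertex strand tangents
                              float3 eye,             // camera position
                              float halfWidth,        // half the strand thickness
                              int numVerts,
                              float3* quadVerts)      // 2 outputs per input vertex
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numVerts) return;

    float3 toEye = normalize3(sub(eye, pos[i]));
    // Side vector perpendicular to both the strand and the view direction,
    // so the resulting ribbon always faces the camera.
    float3 side = normalize3(cross3(tangent[i], toEye));

    quadVerts[2 * i + 0] = add(pos[i], mul(side, -halfWidth));
    quadVerts[2 * i + 1] = add(pos[i], mul(side, +halfWidth));
}
```

Because the expansion is just this small amount of math per vertex, running it in the vertex shader keeps the pipeline short (VS and PS only), which is the pipeline argument made further down.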

TressFX also has a very efficient LOD system that reduces the number of strands and makes them thicker as distance increases. I think Hairworks relies on varying tessellation factors for density and curvature but I'm happy to be corrected if this is not the case.
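
As a rough illustration of that LOD idea (a hypothetical helper of mine, not TressFX source), one can linearly trade strand count for strand width over a distance range, so perceived coverage stays roughly constant:

```cuda
// Hypothetical distance-based hair LOD: render fewer strands far away, but
// thicken the survivors so apparent density is preserved. Illustrative only.
struct HairLod {
    float strandFraction;  // fraction of strands to render, in (0, 1]
    float widthScale;      // multiplier applied to strand thickness
};

HairLod computeHairLod(float distance, float lodStart, float lodEnd,
                       float minFraction, float maxWidthScale)
{
    // t is 0 before lodStart, 1 at lodEnd, linear in between.
    float t = (distance - lodStart) / (lodEnd - lodStart);
    t = t < 0.0f ? 0.0f : (t > 1.0f ? 1.0f : t);

    HairLod lod;
    lod.strandFraction = 1.0f + t * (minFraction - 1.0f);    // 1 -> minFraction
    lod.widthScale     = 1.0f + t * (maxWidthScale - 1.0f);  // 1 -> maxWidthScale
    return lod;
}
```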

The use of isoline tessellation in Hairworks presents major technical deficiencies affecting performance, such as long pipelines (VS, HS, DS, GS and PS all enabled) and poor quad occupancy caused by huge tessellation factors and MSAA rendering. A lot of users have already found out for themselves that clamping such tessellation factors to reasonable values didn't significantly degrade quality... TressFX's fixed number of vertices per strand works fine and avoids both tessellation and GS usage. One would need to be very close to the model to see curvature-related issues. In a nutshell, hair/fur rendering does not need tessellation.
TressFX probably uses more memory than Hairworks due to the use of per-pixel linked lists for the OIT solution. This can be optimized (e.g. a tiled mode), or a mutex/PixelSync-like approach can be used to approximate the OIT results and control memory usage.
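
For readers unfamiliar with per-pixel linked lists, the sketch below shows the insertion step of the general PPLL technique, transcribed to CUDA for illustration; the real TressFX version runs as pixel-shader UAV atomics, and every buffer name and the fixed node budget here are assumptions of mine, not SDK code:

```cuda
// Sketch of per-pixel linked list (PPLL) fragment insertion, the general
// technique behind TressFX's OIT, not the shipping TressFX shaders.
#include <cuda_runtime.h>

struct FragmentNode {
    float        depth;   // view-space depth, used by the later sorted resolve
    unsigned int color;   // packed RGBA of the shaded hair fragment
    int          next;    // index of the previous list head; -1 terminates
};

__global__ void insertFragments(const float* fragDepth,      // one fragment per thread
                                const unsigned int* fragColor,
                                const int* fragPixel,        // linear pixel index per fragment
                                int numFragments,
                                FragmentNode* nodePool,      // fixed-size node buffer (the memory cost)
                                int nodeCapacity,
                                int* nodeCounter,            // global allocation counter, starts at 0
                                int* headPointers)           // per-pixel list head, initialized to -1
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numFragments) return;

    // Allocate a node from the pool; past the budget, fragments are simply
    // dropped, which is where tiling or a mutex/PixelSync-style bound helps.
    int node = atomicAdd(nodeCounter, 1);
    if (node >= nodeCapacity) return;

    nodePool[node].depth = fragDepth[i];
    nodePool[node].color = fragColor[i];

    // Atomically splice the node at the front of this pixel's list.
    int prevHead = atomicExch(&headPointers[fragPixel[i]], node);
    nodePool[node].next = prevHead;
}
```

A separate resolve pass then walks each pixel's list, sorts the nearest fragments by depth and blends them back-to-front; the size of the node pool is the memory overhead discussed above, and bounding or tiling it is what keeps that cost under control.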

If the technical arguments are not convincing enough then I invite anyone to assess performance and quality of games using these technologies on both AMD and NVIDIA hardware.
 
It is his point of view and opinion, and I have nothing against that; the problem is I don't really understand his arguments...
I think he's pointing out that AMD's latest marketing tactic consists of deliberately not optimizing their driver for new games at all, and blaming Nvidia for actively sabotaging them, in order to garner sympathy from the masses.

It seems to be working.
 
Is it optimization, or implementing a "cheat"? Cheat as in a driver-level, game-specific optimization that probably only bloats the driver and isn't useful for other games?
 
Did CDPR have source-code-level integration for most of GameWorks (not HairWorks, apparently, but the rest of it)? If so, it would be rather awkward and labour-intensive to create a situation where AMD could offer much help... I don't think NVIDIA would let them sign an NDA to get access.
 
If the technical arguments are not convincing enough then I invite anyone to assess performance and quality of games using these technologies on both AMD and NVIDIA hardware.

Thanks for the insight. I've been looking around for implementation details without much luck.

So HairWorks seems to brute-force its way to glory while TressFX takes a more methodical approach. I wonder, though, if the former is a more generic solution. Are there examples of TressFX applied to multiple on-screen characters at once, or with very fine hair (like TW3's wolves)?
 
How come the guy from Forbes gives all the credit for improving AMD GPUs' HairWorks performance via AMD's CCC slider to some Reddit user? It sounds bizarre and fanboyish at best. It was AMD who put in the work ages ago to allow user-selectable tessellation levels in their driver. Could AMD have worked more closely with CDPR? Quite likely, but not a given. Is AMD slow with a game-ready driver on this occasion? Yes it is, but only in the context of HairWorks, as the game without it runs really well!
So all this is much noise for little reason, IMHO.

PS. You also need to realize that Poland as a market is very nVidia-skewed compared to the rest of Europe, and AMD has a poor distribution channel there. One of the four largest computer e-tailers there sells about 90% nVidia to 10% AMD discrete GPUs.
 
The Forbes article is a bit terrible.
He says that HairWorks causes a 30% performance hit on a GTX 980 because "of course it does, extra eye-candy". How about 2013's Tomb Raider, where an older implementation of TressFX caused a much lower impact?

The guy knows very little of what he's talking about.
 
The original TressFX also had a very big performance impact, especially as it was basically only the main character's hair! Lots of similar complaints were leveled back then, but it's been greatly optimized and opened up since. I wouldn't give up on GameWorks in that respect, but at the same time I hope that with DirectX 12 such things can be a little more 'integrated'.
 
The Forbes article is a bit terrible.
He says that HairWorks causes a 30% performance hit on a GTX 980 because "of course it does, extra eye-candy". How about 2013's Tomb Raider, where an older implementation of TressFX caused a much lower impact?

The guy knows very little of what he's talking about.

I will personally not discuss TressFX vs HairWorks much... actually HairWorks is really close to what the initial TressFX 1.0 was when it was presented in Tomb Raider (on the technical side, not the implementation). TressFX 3.0, on the other hand, is a complete rewrite of every step of the pipeline.

- If AMD used a lower tessellation level in their driver, similar to what we can do by forcing it in the CCC, they would not have to wait long before reviewers and others accused AMD of cheating the benchmarks. Hence why they have never forced the tessellation level in a driver profile but left the choice to the end user (it was even a concern when they introduced this setting in the CCC).
Reviewers would disable such a profile anyway when testing the game.

- Even if AMD has been working on The Witcher 3 for a while, Huddy claims they only got the copy with HairWorks two months before release. Well, this is not impossible at all; it is often the case with this type of library that it is not enabled in work-in-progress copies of the game. Look at Assassin's Creed and Far Cry 4: most such features were not even included in the game at launch and were patched in after release (including HairWorks for FC4). That even reminds me of the TressFX story on Tomb Raider (deliberately, as TressFX was under secret wraps at the time and was presented at the same time as the game's release).

- The developer says they can't optimize the GameWorks features for AMD; like him, I think many things could be the cause: money, time, or they just don't care. The problem is that, due to the nature of GameWorks, you can't easily optimize the driver for the specific implementation that is put in the game.

Is Nvidia pushing tessellation to an extreme level in HairWorks knowing this will bring other hardware to its knees? Maybe; in the end it is their right, it is their library, even if it's questionable. At this level, AMD can't do anything on the driver side.

The funniest thing is that TW3 uses an engine that is extremely close to what works best on AMD GPUs, and the performance without GameWorks shows it nicely.

- The GTA5 case is a counter-example: GTA5 is not a GameWorks game, even if it uses one feature you can find in GameWorks (HBAO+). Rockstar has a different approach to game development; they are surely the only ones who use Bullet as a physics engine.

The problem today, whatever the reason, is that it seems to continue a trend we have seen for some years with TWIMTBP games, and the list is starting to get really long. So is it AMD who is responsible, or Nvidia, or the situation? I don't know, but we can't deny the trend.

I tend to believe that AMD also bears some responsibility to a certain extent; maybe they don't push the work on those titles enough, or don't pressure the developers enough, who knows. Maybe they should improve their relations with the studios who use GameWorks or who are close to Nvidia.

Anyway, if this continues like that, reviewers will start struggling to find games to test hardware with; we will need to use only synthetic benchmarks.
 
The Forbes article is a bit terrible.
He says that HairWorks causes a 30% performance hit on a GTX 980 because "of course it does, extra eye-candy". How about 2013's Tomb Raider, where an older implementation of TressFX caused a much lower impact?

The guy knows very little of what he's talking about.

Actually, TressFX performed very poorly on Nvidia hardware with the Tomb Raider release build.
You could say that there are some similarities between both of these incidents. For one, Nvidia also pointed to the timing of the release as a factor in their poor performance. And secondly AMD reps were around to publicly speculate that the problem was likely caused by Nvidia.
What happened in the Tomb Raider case however was that Nvidia apologized for the inconvenience caused to their customers and went to work on a fix. Which eventually arrived in the shape of a driver update and - notably - a game patch.

The lesson here is that when there's poor performance on Nvidia hardware it is Nvidia's fault. And when there's poor performance on AMD hardware it is also Nvidia's fault.
 
Actually, TressFX performed very poorly on Nvidia hardware with the Tomb Raider release build.
You could say that there are some similarities between both of these incidents. For one, Nvidia also pointed to the timing of the release as a factor in their poor performance. And secondly AMD reps were around to publicly speculate that the problem was likely caused by Nvidia.
What happened in the Tomb Raider case however was that Nvidia apologized for the inconvenience caused to their customers and went to work on a fix. Which eventually arrived in the shape of a driver update and - notably - a game patch.

The lesson here is that when there's poor performance on Nvidia hardware it is Nvidia's fault. And when there's poor performance on AMD hardware it is also Nvidia's fault.

Isn't TressFX open?
 
Actually, TressFX performed very poorly on Nvidia hardware with the Tomb Raider release build.
You could say that there are some similarities between both of these incidents. For one, Nvidia also pointed to the timing of the release as a factor in their poor performance. And secondly AMD reps were around to publicly speculate that the problem was likely caused by Nvidia.
What happened in the Tomb Raider case however was that Nvidia apologized for the inconvenience caused to their customers and went to work on a fix. Which eventually arrived in the shape of a driver update and - notably - a game patch.

The lesson here is that when there's poor performance on Nvidia hardware it is Nvidia's fault. And when there's poor performance on AMD hardware it is also Nvidia's fault.

The key difference is that AMD shares the source code. Thus NVIDIA can easily optimize their drivers for it, or even suggest improvements to TressFX itself to game developers. NVIDIA does not share the source code, so AMD can't do this.
 
The key difference is that AMD shares the source code. Thus NVIDIA can easily optimize their drivers for it, or even suggest improvements to TressFX itself to game developers. NVIDIA does not share the source code, so AMD can't do this.

Nvidia chooses what code to share or make Open Source, not what their competitor wants them to. HairWorks development is far from complete and will only get better through Nvidia's continued investment. I'm sure there are also some IP techniques that Intel could share with AMD to make their CPUs more competitive, but I doubt you will see that happening. It basically goes back to the thought mentioned in Forbes, and also echoed in the Ars Technica article.

What you don’t do is expect your competitor to make it easier for you by opening up the technology they’ve invested millions of dollars into. You innovate using your own technologies. Or you increase your resources. Or you bolster your relationships and face time with developers.
 
Nvidia chooses what code to share or make Open Source, not what their competitor wants them to.
And so does AMD.
The difference is that AMD chose to open TressFX, FreeSync and Mantle (through the Vulkan fork). nVidia chose to lock Hairworks, G-Sync and GPU-accelerated PhysX.
Customers can't force nVidia to do jack, but they can very well prefer another IHV based on the above.


HairWorks development is far from complete and will only get better through Nvidia's continued investment.
And so is TressFX.
Though TressFX is open for nVidia, Intel and ISVs to optimize their code as they see fit. Hairworks is deliberately closed in order to prevent such optimizations.

I'm sure there are also some IP techniques that Intel could share with AMD to make their CPUs more competitive, but I doubt you will see that happening.
No one was talking about hardware development.
But if you do want to talk about software development, you should know that Intel had to pay a sizable sum of money to governments and AMD when they were found guilty of purposely sabotaging AMD CPUs with their x86 compilers.



Perhaps nVidia is lucky that official anti-monopoly entities don't take gaming very seriously. Yet.
 