Health of Hair in Games *split hairs*

http://advances.realtimerendering.com/s2019/hair_presentation_final.pdf


It is always a matter of optimization. Here is what it looks like on a base PS4:

In any case, here are some numbers showing what the performance is currently like on a regular PS4 at 900p resolution, without MSAA, with the hair covering a big chunk of the screen. For the long-haired asset, which contains about 10’000 strands and a total of 240’000 points, the physics currently take around 4.7 ms and the rendering around 4.5 ms. For the short-haired asset, which contains about 15’000 strands and a total of 75’000 points, the physics instead take around 0.4 ms and the rendering about 1.9 ms.

The main reason for the long render times is currently that our GPU utilization is very low, something we are investigating different ways to improve. Keep in mind again that this is with all strands rendered and simulated every frame. In comparison, some of the alternative hair simulation systems only simulate about 1% of all hair strands. Early experiments show that we can get a 5x performance boost by simulating only 1/10th of all strands and interpolating the results.
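To make that interpolation idea concrete, here is a minimal CPU-side sketch of blending one rendered strand from a few simulated guide strands. The three-guide weighting and all names are my own assumptions for illustration, not anything taken from the presentation:

```cpp
#include <array>
#include <vector>

struct Vec3 { float x, y, z; };

// A strand is a polyline of points along the hair, root to tip.
// All strands are assumed to have the same point count here.
struct Strand { std::vector<Vec3> points; };

// Blend one rendered strand from three simulated guide strands.
// Guide indices and weights would typically be precomputed from the
// root positions on the scalp; here they are simply passed in.
Strand InterpolateStrand(const std::vector<Strand>& guides,
                         const std::array<int, 3>& guideIndex,
                         const std::array<float, 3>& weight)
{
    const size_t numPoints = guides[guideIndex[0]].points.size();
    Strand out;
    out.points.resize(numPoints);
    for (size_t p = 0; p < numPoints; ++p) {
        Vec3 blended{0.0f, 0.0f, 0.0f};
        for (int g = 0; g < 3; ++g) {
            const Vec3& gp = guides[guideIndex[g]].points[p];
            blended.x += weight[g] * gp.x;
            blended.y += weight[g] * gp.y;
            blended.z += weight[g] * gp.z;
        }
        out.points[p] = blended;
    }
    return out;
}
```

Simulating only every tenth strand and running something like this for the other nine keeps the physics cost roughly proportional to the guide count, which is presumably where the reported 5x comes from.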
 
I hope one day this patent will find its way into an AMD GPU.

What a waste... devs being unable to avoid the case where triangle rasterization becomes pointless? -> Solve it with custom hardware. :(

I'm not saying Frostbite's approach is better, because it suffers from the same issue: LOD too cumbersome? -> brute-force zillions of subpixel triangles and claim innovative 'compute culling'. :(

However, I think Polaris already has this feature, or something similar, in hardware?
 
I didn't read through the info, but would Nvidia mesh shaders solve this kind of issue? They seem like they will be revolutionary in terms of being able to handle huge numbers of triangles.
 
When I made my comment I missed the context of hair rendering.
Mesh shaders are better at culling tiny triangles because there is no need to go through off-chip memory. I would still rant that tiny triangles should be avoided in the first place, and mesh shaders are welcome there too.

For hair rendering, no matter how efficient triangle culling or tessellation is, we still end up rendering lines.
Lines are always bad because they can never saturate the 2x2 pixel quads, which are the smallest unit of rasterization?
Maybe compute-based rasterization could help? Bresenham 2.0? :D
Hmmm... maybe strands are still the best option, just finer and more of them + some OIT approximation?
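For what the core of such a "Bresenham 2.0" might look like, here is the textbook integer Bresenham on the CPU; it touches exactly one pixel per step, with no 2x2 quad overhead. This is just the classic algorithm as a sketch, not anyone's shipping rasterizer:

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Classic all-octant integer Bresenham: one write per covered pixel.
void DrawLine(std::vector<uint32_t>& pixels, int width,
              int x0, int y0, int x1, int y1, uint32_t color)
{
    int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;
    for (;;) {
        pixels[y0 * width + x0] = color;
        if (x0 == x1 && y0 == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }
        if (e2 <= dx) { err += dx; y0 += sy; }
    }
}
```

A GPU compute version would presumably run one thread (or one wave) per segment and blend into the target with atomics, which is also where an OIT approximation could plug in.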
 
Triangles for lines is plain dumb. ;) There should be hardware drawing for lines, for things like cables, fences, and hair.

The raytracer RealSoft 3D could compute and trace or rasterise curves with width, colour, displacement, etc. It was just damned slow as it ran on the CPU. As it's a fundamentally different drawing requirement to objects and surfaces though, I think it should be dealt with differently. I guess we'd be looking at compute-based line drawing as you suggest, but I don't know that compute is ideally set up for that. Although it should be way better than triangles!
 
Hehe, in this sense: I'm not sure if modern game girls like the ones from Wolfenstein Youngblood would look much better with hair simulation at all :p
 
Based on it running at 30fps on a 2080, that type of hair is probably limited to Catwalk Model II: Stiletto Turbo Edition.
Yeah. When game devs can afford to throw this level of detail at hair (and cloth) on an Assassin's Creed volume of individuals, I'll be impressed.

It reminds me of the T-Rex demo that came on the PlayStation demo disc; then you see a T-Rex in a game (Tomb Raider) and the game T-Rex has about 1/100th of the polygons. Still terrifying, but yeah.. no. :nope:
 
Agree, but on the other hand it's embarrassing we still have no long hair in games.
I think this could work for hero characters (rough sketch of the grid step after the list):
Use a low-res volume around the head (16^3 or 32^3).
Simulate a small number of hairs (200 per head, 60 segments each).
Propagate the simulated velocities to the grid.
Use that to update a much larger number of hairs (so interpolating the simulation like Frostbite proposed - maybe they do it this way, I did not pay attention).
Extrude a quad skirt from each hair and use the usual card hair tech for rendering. The volume could also provide occlusion.
This would allow a dynamic hair count, segment count and card width to support LOD.
Likely hideable with async compute.
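Here is a rough sketch of the grid step: scatter the simulated guide velocities into the low-res volume, then normalize so each cell holds an average velocity the cheap hairs can sample. Nearest-cell splatting and all names are my own assumptions; a trilinear splat would be smoother:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Low-res velocity volume around the head, e.g. 16^3 or 32^3.
struct VelocityGrid {
    int n;                       // grid resolution per axis
    std::vector<Vec3>  velocity; // accumulated velocity per cell
    std::vector<float> weight;   // accumulated splat weight per cell
};

// Splat one simulated guide point into its containing cell. `uvw` is
// the point's position in normalized grid space, assumed in [0,1)^3.
void SplatGuidePoint(VelocityGrid& g, Vec3 uvw, Vec3 v)
{
    int i = (int)(uvw.x * g.n), j = (int)(uvw.y * g.n), k = (int)(uvw.z * g.n);
    int cell = (k * g.n + j) * g.n + i;
    g.velocity[cell].x += v.x;
    g.velocity[cell].y += v.y;
    g.velocity[cell].z += v.z;
    g.weight[cell] += 1.0f;
}

// After splatting all guide points, divide each cell by its weight to
// get an average velocity; the non-simulated hairs are then advected
// by sampling this field at each of their segment positions.
void NormalizeGrid(VelocityGrid& g)
{
    for (size_t c = 0; c < g.velocity.size(); ++c) {
        if (g.weight[c] > 0.0f) {
            g.velocity[c].x /= g.weight[c];
            g.velocity[c].y /= g.weight[c];
            g.velocity[c].z /= g.weight[c];
        }
    }
}
```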

I'd agree with this. But individual lines seem overkill.

Edit: using line direction instead of velocity for the volume, and tracing procedural hair through the vector field, is probably much faster - new hair can be added on the fly and no state is necessary.
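A sketch of what that stateless tracing could look like, growing a strand by stepping through the direction field. The sampler is a stand-in, and the step length and all names are assumptions:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Stand-in for a trilinear fetch from the direction volume; it just
// returns a constant downward direction here so the sketch compiles.
Vec3 SampleDirectionField(Vec3 /*pos*/) { return {0.0f, -1.0f, 0.0f}; }

// Grow one strand from its root by repeatedly stepping along the local
// field direction. No per-strand state survives the frame, so strands
// can be added or dropped on the fly for LOD.
std::vector<Vec3> TraceStrand(Vec3 root, int segments, float stepLen)
{
    std::vector<Vec3> points;
    points.reserve(segments + 1);
    points.push_back(root);
    Vec3 p = root;
    for (int s = 0; s < segments; ++s) {
        Vec3 d = SampleDirectionField(p);
        p = { p.x + d.x * stepLen, p.y + d.y * stepLen, p.z + d.z * stepLen };
        points.push_back(p);
    }
    return points;
}
```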
 
Or just tape tiny little wigs to your eyelids in order to give every character beautiful flowing locks.

Is physics simulation more of a CPU-bound or GPU-bound task these days? I know it used to be very CPU-heavy, and there was a lot of talk around the start of this generation about moving increasing amounts of simulation to the GPU. We've also not seen any improvements to physics this generation, which makes me think that may not have been a successful endeavour.

It may also just be a current industry trend that pretty graphics are more lucrative than accurate physics, so I'm aware I might be wrongly connecting the dots.
 
Agree, but on the other hand it's embarrassing we still have no long hair in games.
Tomb Raider 2 on PlayStation ushered in the first hair tech that impressed me: a ponytail that, kind of, reacted to physics and what Lara was doing.

Plenty of 'hero' characters have long hair though! :yep2: Lara Croft, Nariko (Heavenly Sword), Aloy (Horizon Zero Dawn), Elise (Assassin's Creed Unity), the female misthios (Assassin's Creed Odyssey)
 