Game development presentations - a useful reference

The engine that clueless people love to hate...:
FROSTBITE HAIR RENDERING AND SIMULATION
https://www.ea.com/frostbite/news/frostbite-hair-rendering-and-simulation

Nice, but I laughed so hard when I saw the sudden spinning air kick... I mean, you see that creepy faceless dummy suddenly going Bruce Lee with its hair wobbling in that weird way... it's just pure gold! :D:D

And again, nice tech demo. I hope one day we start getting games with decent hair instead of just tech demos (maybe Tomb Raider has done it best so far, but that approach isn't mainstream yet).
 
Cloth physics are more in need of a fix than hair physics :(

If you mean for clothes, I think cloth physics are less of a problem than cloth modeling. If you have modeled cloth, you can always fall back to letting an offline physics engine fit it to the body and then tying the result to a skeleton (see the sketch below). The modeling will require a major shift in tools and artist mindset.
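To illustrate the runtime half of that fallback, here's a minimal sketch, assuming the offline drape has already baked per-vertex bind-pose positions and skin weights; all types and names are hypothetical, not from any particular engine:

[code]
// Runtime side of the "drape offline, then skin" fallback. The cloth no
// longer simulates; it is linear-blend skinned like any other skinned mesh.
// Hypothetical illustration, not taken from any particular engine.
#include <vector>

struct Vec3 { float x, y, z; };
struct Mat4 { float m[16]; }; // column-major 4x4 bone transform

// Transform a point by a bone matrix (rotation + translation).
static Vec3 transformPoint(const Mat4& M, const Vec3& p) {
    return { M.m[0]*p.x + M.m[4]*p.y + M.m[8]*p.z  + M.m[12],
             M.m[1]*p.x + M.m[5]*p.y + M.m[9]*p.z  + M.m[13],
             M.m[2]*p.x + M.m[6]*p.y + M.m[10]*p.z + M.m[14] };
}

struct ClothVertex {
    Vec3  drapedPos;  // baked from the offline drape, in bind pose
    int   bone[4];    // indices of the nearest bones
    float weight[4];  // normalized skinning weights (sum to 1)
};

void skinDrapedCloth(const std::vector<ClothVertex>& verts,
                     const std::vector<Mat4>& bonePalette, // bind-relative
                     std::vector<Vec3>& outPos)
{
    outPos.resize(verts.size());
    for (size_t i = 0; i < verts.size(); ++i) {
        Vec3 p{0.0f, 0.0f, 0.0f};
        for (int j = 0; j < 4; ++j) {
            const Vec3 t = transformPoint(bonePalette[verts[i].bone[j]],
                                          verts[i].drapedPos);
            p.x += verts[i].weight[j] * t.x;
            p.y += verts[i].weight[j] * t.y;
            p.z += verts[i].weight[j] * t.z;
        }
        outPos[i] = p;
    }
}
[/code]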
 
Modeling cloth isn't an issue (and hasn't been for a while): most of the industry has been using https://www.marvelousdesigner.com for years now. It's mainly a performance budget issue in games, and the fact that this generation of consoles is powered by horrible CPUs really set things back in this domain.
https://80.lv/articles/making-clothes-for-video-games/
On-topic: good hair rendering is immensely more difficult than cloth BTW.
 
Modeling cloth isn't an issue (and hasn't been for a while): most of the industry has been using https://www.marvelousdesigner.com for years now.
I watched some tutorial videos and it's impressive.

It's mainly a performance budget issue in games
That's the crux of my point: coarse meshes and low simulation rates. Though layers, collisions, mass, elasticity, and friction all seem to be significant points of failure too.
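To make the simulation-rate point concrete, here's a toy Verlet cloth step (everything simplified and invented for illustration): with a coarse grid and only one or two constraint iterations per frame, springs visibly stretch and fast colliders tunnel through; substepping fixes both, but that's exactly the CPU time games can't spare.

[code]
// Toy position-based Verlet cloth, to show where low simulation rates bite.
// Simplified illustration only: gravity, distance constraints, no collision.
#include <cmath>
#include <vector>

struct Particle { float x, y, z, px, py, pz; }; // current + previous position
struct Spring   { int a, b; float rest; };      // structural/shear/bend link

void verletIntegrate(std::vector<Particle>& pts, float dt, float gravityY)
{
    for (auto& p : pts) {
        const float nx = 2.0f * p.x - p.px;
        const float ny = 2.0f * p.y - p.py + gravityY * dt * dt;
        const float nz = 2.0f * p.z - p.pz;
        p.px = p.x; p.py = p.y; p.pz = p.z;
        p.x = nx;   p.y = ny;   p.z = nz;
    }
}

// Each extra iteration stiffens the cloth; a tight CPU budget often allows
// only 1-2 per frame, which is where the rubbery stretching comes from.
void satisfyConstraints(std::vector<Particle>& pts,
                        const std::vector<Spring>& springs, int iterations)
{
    for (int k = 0; k < iterations; ++k)
        for (const auto& s : springs) {
            Particle& a = pts[s.a];
            Particle& b = pts[s.b];
            const float dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
            const float d  = std::sqrt(dx*dx + dy*dy + dz*dz);
            if (d < 1e-6f) continue;                 // avoid division by zero
            const float c = 0.5f * (d - s.rest) / d; // move both ends halfway
            a.x += c*dx; a.y += c*dy; a.z += c*dz;
            b.x -= c*dx; b.y -= c*dy; b.z -= c*dz;
        }
}
[/code]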

and the fact that this generation of consoles is powered by horrible CPUs really set things back in this domain.
Cloth physics not running on the GPU?

On-topic: good hair rendering is immensely more difficult than cloth BTW.
Agreed, but the failure in clothes is more distracting.

I found this old thread:

https://forum.beyond3d.com/threads/...finally-possible-in-a-realistic-manner.60048/
 
Wonder why they even showed Bistro; the temporal stability of the ray tracing is pathetic even in vanilla.
 

Very interesting. I don't have a 240 Hz monitor to test with, only 144 Hz. Notably, this work actually requires 240 Hz for good results and falls apart at 60 Hz, according to the paper.

So essentially you can reduce the per-frame cost of ray tracing very significantly, but only as long as you keep your fps quite high. Typically you can gain quality by lowering your frame rate, but here that no longer holds true if it's ray-tracing quality you're after: the lower your fps, the more of your frame budget you'd have to invest in ray tracing to get the same quality. It would be very odd for a user to try to balance those considerations in a game. If you're a gamer and your system can't maintain 240 Hz because you're GPU-limited, what's your fallback?
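Some back-of-the-envelope numbers (the rays-per-second budget is invented just for illustration): at a fixed ray budget, rays per pixel per frame scales inversely with framerate, so matching the per-frame quality of 240 fps at 60 fps costs four times as much of your budget.

[code]
// Fixed rays/second budget spread over frames of a 1440p image.
// The 4 Grays/s figure is an assumption for illustration only.
#include <cstdio>
#include <initializer_list>

int main() {
    const double raysPerSecond = 4.0e9;
    const double pixels = 2560.0 * 1440.0;
    for (double fps : {240.0, 144.0, 60.0}) {
        const double raysPerPixelPerFrame = raysPerSecond / (fps * pixels);
        std::printf("%3.0f fps -> %.2f rays/pixel/frame\n",
                    fps, raysPerPixelPerFrame);
    }
    return 0;
}
[/code]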
 
Interesting idea. Basically they reach framerates high enough that a lot of the temporal filtering actually happens in your eye (and your brain) instead of on the GPU.

While I think the idea is a little too bold for actual products, it does point to other paradigms that open up once we actually go full RT (secondary AND primary rays). Instead of thinking in rays per frame, devs can think in rays per second. The bottleneck is then only on the animation side; rendering becomes mostly framerate independent (as long as your temporal accumulation and filtering stay slim).

It's an extension of what is already being done with temporally reconstructed checkerboarded buffers (and other rasterization-friendly sparse sampling techniques), except that once we let go of rasterization, sparse sampling becomes much cheaper, easier to implement, and more flexible. And once we are there, we can also start talking about temporal jittering for high-quality motion blur, and DoF and bloom based on actual lens-optics simulation, which will be relatively trivial to adopt once sparse stochastic sampling plus smart reconstruction and denoising is already a fundamental, robust part of the rendering pipeline.
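For the "slim" accumulation part, a minimal sketch: a per-pixel exponential moving average whose blend factor is tied to wall-clock time instead of frame count, so the integration window stays constant in seconds and the result becomes roughly framerate independent. The window length is an assumed value.

[code]
// Time-based exponential moving average for temporal accumulation.
// Minimal sketch; ignores reprojection, disocclusion, clamping, etc.
#include <vector>

struct Color { float r, g, b; };

void accumulate(std::vector<Color>& history,           // persistent per-pixel EMA
                const std::vector<Color>& noisyFrame,  // this frame's sparse RT result
                float dt)                              // seconds since last frame
{
    const float windowSeconds = 0.05f;      // assumed ~50 ms integration window
    float alpha = dt / windowSeconds;       // higher fps -> smaller per-frame blend
    if (alpha > 1.0f) alpha = 1.0f;         // at very low fps, history is useless
    for (size_t i = 0; i < history.size(); ++i) {
        history[i].r += alpha * (noisyFrame[i].r - history[i].r);
        history[i].g += alpha * (noisyFrame[i].g - history[i].g);
        history[i].b += alpha * (noisyFrame[i].b - history[i].b);
    }
}
[/code]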
 
once we let go of rasterization, sparse sampling becomes much cheaper, easier to implement, and more flexible.
Adding to this:
Primary rays are not that expensive.
Deferred shading is still possible as usual (rough sketch at the end of this post).

Conclusion:
Replace ROPs with RT Cores.

So instead of just adding more and more fixed-function crap, remove the older crap as well; then there's enough die area for what really matters: Compute Powaaah!!! :D :p :D
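Rough sketch of that pipeline: trace one primary ray per pixel into a G-buffer, then run an ordinary deferred lighting pass over it. The trace and shade functions are stubbed stand-ins for whatever the RT API (DXR, Vulkan RT, ...) and the engine's lighting code actually provide.

[code]
// Primary visibility via rays instead of rasterization/ROPs, followed by a
// deferred shading pass that is unchanged from a rasterized pipeline.
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };
struct GBufferTexel { Vec3 albedo, normal; float depth; uint32_t materialId; };
struct Hit { bool valid; Vec3 albedo, normal; float t; uint32_t materialId; };

// Stubs so the sketch is self-contained; a real renderer would trace through
// DXR/Vulkan RT here and run the full lighting loop in shadeDeferred().
static Hit tracePrimary(int /*px*/, int /*py*/) {
    return {true, {0.5f, 0.5f, 0.5f}, {0.0f, 1.0f, 0.0f}, 10.0f, 0u};
}
static Vec3 shadeDeferred(const GBufferTexel& g) { return g.albedo; }

void renderFrame(int w, int h, std::vector<Vec3>& outColor)
{
    std::vector<GBufferTexel> gbuf(static_cast<size_t>(w) * h);

    // Pass 1: one primary ray per pixel fills the G-buffer.
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            const Hit hit = tracePrimary(x, y);
            GBufferTexel& g = gbuf[static_cast<size_t>(y) * w + x];
            if (hit.valid)
                g = {hit.albedo, hit.normal, hit.t, hit.materialId};
            else
                g = {{0, 0, 0}, {0, 0, 0}, 0.0f, 0u};
        }

    // Pass 2: deferred shading, exactly as in a rasterized deferred renderer.
    outColor.resize(gbuf.size());
    for (size_t i = 0; i < gbuf.size(); ++i)
        outColor[i] = shadeDeferred(gbuf[i]);
}
[/code]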
 