Game development presentations - a useful reference

Great courses at Siggraph 2015


http://s2015.siggraph.org/attendees/courses/events/advances-real-time-rendering-part-i

9 am
Welcome and Introduction
Tatarchuk

9:10 am
Towards Unified and Physically-Based Volumetric Lighting in Frostbite
Hillaire (DICE)

9:40 am
Stochastic Screen-Space Reflections
Stachowiak (DICE)

10:10 am
The Real-time Volumetric Cloudscapes of Horizon: Zero Dawn
Schneider (Guerrilla Games)

10:50 am
A Novel Sampling Algorithm for Fast and Stable Real-Time Volume Rendering
Bowles and Zimmermann (Studio Gobo)

Sparkly But Not Too Sparkly! A Stable and Robust Procedural Sparkle Effect
Bowles and Wang (Studio Gobo)

11:35 am
Multi-Scale Global Illumination in Quantum Break
Silvennoinen and Timonen (Remedy)

12:15 pm
Closing Q&A

http://s2015.siggraph.org/attendees/courses/events/advances-real-time-rendering-part-ii

Course Schedule

2 pm
Welcome (and Welcome Back!)
Tatarchuk

2:05 pm
Rendering The Alternate History of The Order: 1886
Pettineo (Ready at Dawn)

2:50 pm
Learning from Failure: a Survey of Promising, Unconventional and Mostly Abandoned Renderers for ‘Dreams PS4’, a Geometrically Dense, Painterly UGC Game
Evans (MediaMolecule)

3:35 pm
Dynamic Occlusion with Signed Distance Fields
Wright (Epic)

4:20 pm
GPU-Driven Rendering Pipelines
Haar (Ubisoft Entertainment) and Aaltonen (Ubisoft Entertainment)

5:15 pm
Closing Remarks
Tatarchuk
 
Abstract: The first half of the talk will present the GPU-driven rendering pipeline of Assassin's Creed Unity – co-developed by multiple teams at Ubisoft Montreal – that was designed to efficiently render the game's complex scenes, containing many highly modular buildings and characters.

After a brief introduction, we will describe the core of the pipeline, which supports per-material instance batching instead of the more traditional per-mesh batching. We will then show how this can be combined with mesh clustering to obtain more effective GPU culling, despite coarser draw call granularity. Additional techniques such as shadow occlusion culling and pre-calculated triangle back-face culling will also be discussed.
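The pre-calculated triangle back-face culling mentioned above is often implemented with per-cluster normal cones. Here is a minimal CPU-side sketch of that idea (the struct and function names are assumptions for illustration, not the talk's actual code): a cluster can be rejected when its entire normal cone faces away from the viewer.

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical cluster record: a small group of triangles stores a normal
// cone (unit axis = average face normal, plus the cosine of the cone's
// half-angle), pre-calculated offline.
struct Cluster {
    float nx, ny, nz;    // unit cone axis
    float cosHalfAngle;  // cos of the cone half-angle
};

// Every normal inside the cone faces away from the viewer when
// dot(axis, viewDir) < -sin(halfAngle), where viewDir is the unit direction
// from the cluster towards the camera. In that case all of the cluster's
// triangles can be skipped before the G-buffer pass.
bool clusterBackfaceCulled(const Cluster& c, float vx, float vy, float vz) {
    float d = c.nx * vx + c.ny * vy + c.nz * vz;
    float sinHalfAngle =
        std::sqrt(std::max(0.0f, 1.0f - c.cosHalfAngle * c.cosHalfAngle));
    return d < -sinHalfAngle;
}
```

In a real pipeline this test runs in a compute shader over all clusters; the same inequality carries over unchanged.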

In the second half of the talk, we will introduce the RedLynx GPU-driven rendering pipeline: a 'clean slate' design that builds on the latest hardware features, such as asynchronous compute, indirect dispatch and multi-draw.

Our aim from the outset was to support heavily populated scenes without CPU intervention, using just a handful of indirect draw calls. In practice this allows us to render hundreds of thousands of independent objects with unique meshes, textures and decals at 60 FPS on current console hardware. We will go into all the details of how we achieve this, including our novel culling system, as well as virtual texturing, which is an integral part of the pipeline.
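To make the "handful of indirect draw calls" concrete, here is a hedged CPU-side sketch of the compaction step such a GPU culling pass performs: surviving instances are appended to a single argument buffer that one multi-draw / ExecuteIndirect submission then consumes. The command layout matches the usual five-integer indexed indirect draw; the per-instance fields are assumptions for illustration.

```cpp
#include <cstdint>
#include <vector>

// Layout of one indexed indirect draw command, as consumed by
// MultiDrawElementsIndirect / ExecuteIndirect-style submissions.
struct DrawIndexedArgs {
    uint32_t indexCount;
    uint32_t instanceCount;
    uint32_t firstIndex;
    uint32_t baseVertex;
    uint32_t baseInstance;
};

// Hypothetical per-instance record carrying the GPU culling result.
struct Instance {
    uint32_t meshIndexCount;
    uint32_t meshFirstIndex;
    bool     visible;
};

// Sketch of the compaction a culling compute shader would do on the GPU:
// append one command per visible instance; baseInstance points back into
// the instance data so the vertex shader can fetch its transform/material.
std::vector<DrawIndexedArgs>
buildIndirectArgs(const std::vector<Instance>& scene) {
    std::vector<DrawIndexedArgs> args;
    for (uint32_t i = 0; i < scene.size(); ++i) {
        if (!scene[i].visible) continue;  // culled away
        args.push_back({scene[i].meshIndexCount, 1u,
                        scene[i].meshFirstIndex, 0u, i});
    }
    return args;
}
```

On the GPU the append is done with an atomic counter into a buffer, and the resulting command count feeds the indirect submission; the CPU never sees the list.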

Finally, to wrap up the talk we will look at how our pipelines could evolve in the future, especially with upcoming APIs such as DirectX 12.

Sebbbi's slot, maybe?
 
Abstract:

Rendering convincing participating media for real-time applications, e.g. games, has always been a difficult problem. Particles are often used as a fast approximation for local effects such as dust behind cars or explosions. Additionally, large-scale participating media such as depth fog are usually achieved with simple post-process techniques. It is difficult to make all these elements interact efficiently with each other and with the lights in the scene.


The authors propose a way to unify these different volumetric representations using physically based parameters: a cascaded volume representing extinction, a voxelization method to project particles into that extinction volume, a simple volumetric shadow map that can then be used to cast shadows from any light according to every volumetric element in the scene, and finally a solution to render the final participating media.
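As a hedged illustration of the final rendering step (not Frostbite's actual code), transmittance along a view ray through such an extinction volume follows the Beer-Lambert law, accumulating optical depth sample by sample:

```cpp
#include <cmath>
#include <vector>

// March through per-sample extinction coefficients (sigma_t, in 1/m) read
// from the extinction volume along one view ray, and return the resulting
// transmittance T = exp(-integral of sigma_t ds).
float rayTransmittance(const std::vector<float>& extinctionSamples,
                       float stepSize) {
    float opticalDepth = 0.0f;
    for (float sigmaT : extinctionSamples)
        opticalDepth += sigmaT * stepSize;  // simple numerical integration
    return std::exp(-opticalDepth);
}
```

The volumetric shadow map mentioned above stores exactly this kind of transmittance, evaluated along rays from the light instead of from the camera.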


The presented set of techniques and optimizations form the physically based volumetric rendering framework that will be used for all games powered by Frostbite in the future.

I saw a tweet about these courses on Twitter; they should be very interesting.
 
The very opposite of the direction DX12 is trumpeting, with its gazillion draw calls. It will be interesting to pit the two approaches against each other and see what the differences are.
 
So by 'draw call', Sebbbi's speaking specifically about CPU-based draw calls.
You can actually render a whole scene (or all your shadow maps) at once with a single DrawInstancedIndirect call. This is handy for DirectX 11 and OpenGL ES 3.1, since these APIs don't support any mechanism for GPU-generated draw calls.

With OpenGL 4.4 you can of course use arb_multi_draw_indirect + arb_indirect_parameters, and with DirectX 12 you can use ExecuteIndirect to generate cheap draws on the GPU side.
http://www.dsogaming.com/news/direc...proves-performance-greatly-reduces-cpu-usage/

As you can see from these results, ExecuteIndirect + indexing into texture/mesh bindings is faster than setting up draw calls and bindings on the CPU side, even in DirectX 12. The difference is much less than it used to be in DirectX 11, but 75 fps -> 89 fps is still a nice boost (19% higher fps). You can clearly see from the graphs that CPU usage dropped significantly with ExecuteIndirect, meaning that you can use the freed CPU cycles elsewhere (better AI, better physics, more destruction, etc).

Console games tend to use roughly half of their CPU cycles for rendering and culling (good example: http://www.gdcvault.com/play/1022186/Parallelizing-the-Naughty-Dog-Engine). Getting rid of half of your CPU cost is a huge improvement :)
 
Yes, it's me. My first time speaking at SIGGRAPH :)

I will be talking about our (unusual) pipeline that is able to render arbitrary scenes with just a single draw call (actually two draw calls + of course lots of compute shaders). I will also be talking about GPU occlusion culling, virtual texturing and deferred texturing, among other things.
Congrats Sebbbi, hope you enjoy it. Will there be free slides/pdf for download after the presentation? Oh and good luck.
 
Typically, how much GPU do you have to give up? Is it a more efficient process on the GPU, or simply offset by the raw power of the GPU?
GPU is very good at culling, animating and setting up hundreds of thousands of "draws". GPU driven culling is so fast that we can cull everything at sub-object granularity (occluded parts, back facing parts, etc), saving more GPU time in our G-buffering pass than we pay to perform all the culling and setup stages together. So in practice it costs "negative" GPU time and doubles the CPU time available for game logic. And it is fully real time. Destruction doesn't need any visibility structure rebuild :)
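A common building block of such GPU occlusion culling (shown here only as an assumed sketch, not the actual implementation) is a hierarchical-Z test: build a conservative max-depth mip of the depth buffer, then reject an object whose nearest depth is farther than the coarse texel covering its screen bounds.

```cpp
#include <algorithm>
#include <vector>

// Conservative max-reduction of a depth buffer: each coarse texel stores
// the farthest depth of the 2x2 fine texels it covers (depth increases
// with distance from the camera).
std::vector<float> buildMaxMip(const std::vector<float>& depth, int w, int h) {
    std::vector<float> mip((w / 2) * (h / 2));
    for (int y = 0; y < h / 2; ++y)
        for (int x = 0; x < w / 2; ++x)
            mip[y * (w / 2) + x] = std::max(
                std::max(depth[(2 * y) * w + 2 * x],
                         depth[(2 * y) * w + 2 * x + 1]),
                std::max(depth[(2 * y + 1) * w + 2 * x],
                         depth[(2 * y + 1) * w + 2 * x + 1]));
    return mip;
}

// An object is certainly occluded when even its nearest point is farther
// than the conservative max depth stored for its screen-space footprint.
bool occluded(float objectNearestDepth, float mipMaxDepth) {
    return objectNearestDepth > mipMaxDepth;
}
```

Because the reduction is conservative, the test can never wrongly cull a visible object; it only lets some occluded ones through, which is safe.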
 
That sounds superb, I hope your talk inspires rest of the developers! I'll reserve any questions until after I see the presentation as I don't want to take your time for stuff that you already cover there, thanks for all the insight.
 