Game development presentations - a useful reference

I honestly don't like the idea of training from a photo set of dubious quality cameras without separating and treating white point. They should have used raw data sets from homogeneous CMOS sources and settings. But yes, such approaches could be used offline by artist content creation pipelines to get less awful effects.
 
They did something like that as well, see here: https://intel-isl.github.io/PhotorealismEnhancement/

But to me this looks less realistic again. CG is always too perfect; the more imperfection we add, the more realistic it looks, somehow.
Of course that's subjective, so I wonder if ML will someday allow more customization to personal preferences as well...
 

I don't see this stuff as producing photorealistic graphics; it's just ReShade++, i.e. ReShade with smarts. The parameter space for "photorealism" is way too huge for a realtime neural net. But there are neat things you can do with it: there's a ton of "style transfer" networks trained on non-realistic art styles, and I can see that being really cool for certain art styles. Trying to figure out how to render abstract art or a van Gogh painting or whatever "properly" seems an absolute pain. Just rendering the basic stuff normally, then throwing a realtime neural net on top to make it look like SpongeBob or straight out of a Studio Ghibli film, seems right up this technique's alley.

Link: https://medium.com/tensorflow/neura...ing-tf-keras-and-eager-execution-7d541ac31398
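To make the style-transfer idea concrete, here is a toy pure-Python sketch of its core building block (illustrative only; real implementations like the tutorial above compare CNN feature maps from a pretrained network): style similarity is measured by comparing Gram matrices of feature activations.

```python
def gram_matrix(features):
    """features: list of C channels, each a flat list of N activations.
    Returns the C x C Gram matrix of channel correlations, normalized by N."""
    n = len(features[0])
    return [[sum(a * b for a, b in zip(fi, fj)) / n for fj in features]
            for fi in features]

def style_loss(feat_generated, feat_style):
    """Mean squared difference between the two Gram matrices."""
    g1, g2 = gram_matrix(feat_generated), gram_matrix(feat_style)
    c = len(g1)
    return sum((g1[i][j] - g2[i][j]) ** 2
               for i in range(c) for j in range(c)) / (c * c)
```

In a full pipeline this loss (plus a content loss) is minimized by gradient descent on the generated image's pixels.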

 
https://schedule.gdconf.com/session/marvels-spider-man-miles-morales-a-technical-postmortem/878267

A Spider-Man: Miles Morales technical postmortem at GDC 2021 in July

https://schedule.gdconf.com/session/a-deep-dive-into-frostbites-hair-rendering-tech/878214

A Frostbite hair-rendering deep dive

The new Frostbite strand-based hair system shipped in FIFA 21 on PS5 and Xbox Series X, to great reception. This talk goes into details of the custom compute-based software rasterizer underpinning the Frostbite hair rendering system, including its architecture, the techniques used to achieve great-looking hair, and how it was heavily optimized for next-gen consoles.

https://schedule.gdconf.com/session...-fog-geometry-lighting-in-demons-souls/878999

Geometry and lighting in Demon's Souls
Following Shadow of the Colossus, with its predominantly sparse outdoor environments, Demon's Souls' architecture and art direction posed many new rendering and production challenges for a relatively small team. This session focuses on the Bluepoint Engine updates that were most impactful to achieving the final look of this PS5 launch title. We'll detail a possibly controversial approach to compute-based tessellation, our global illumination implementation using light field probes for both runtime GI and sound propagation, and the benefits of screen-space directional occlusion.

Tons of good sessions

https://schedule.gdconf.com/sessions

https://schedule.gdconf.com/session...-procedural-grass-in-ghost-of-tsushima/879611

The art direction for Ghost of Tsushima calls for giant fields of lush grass blowing in the wind for Jin to ride his horse through. To this end, Sucker Punch Productions chose to render their fields by generating individual blades of grass on the GPU that could each have their own procedural appearance and animation. In this talk, graphics programmer Eric Wohllaib will discuss how they generate acres of grass within reasonable memory and performance limits, techniques for rendering and animating individual blades, and methods for making thousands of individual blades of grass look like a natural field.
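A minimal sketch of the index-driven idea behind that kind of system (the hash and parameters here are hypothetical, not Sucker Punch's actual code): each blade's position, height, and bend are derived entirely from its instance index, so acres of grass need no per-blade storage.

```python
import math

def hash01(i, salt):
    """Cheap deterministic per-blade hash in [0, 1) (illustrative only)."""
    x = math.sin(i * 12.9898 + salt * 78.233) * 43758.5453
    return x - math.floor(x)

def blade_point(i, t):
    """Point at parameter t in [0, 1] along blade i, modeled as a quadratic
    Bezier curve bending away from its root. Returns (x, y, z)."""
    # Per-blade parameters derived from the index alone -> no per-blade memory.
    root_x = hash01(i, 1) * 10.0
    root_z = hash01(i, 2) * 10.0
    height = 0.5 + hash01(i, 3) * 0.5
    bend = hash01(i, 4) * 0.3
    p0 = (root_x, 0.0, root_z)            # root on the ground
    p1 = (root_x, height * 0.7, root_z)   # mid control point
    p2 = (root_x + bend, height, root_z)  # tip, bent sideways
    u = 1.0 - t
    return tuple(u * u * a + 2 * u * t * b + t * t * c
                 for a, b, c in zip(p0, p1, p2))
```

On a GPU the same recipe runs per vertex in a compute or vertex shader, with wind animation added by perturbing the control points over time.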

https://schedule.gdconf.com/session...d-war-a-scalable-approach-to-shadowing/879688

Call of Duty Black Ops Cold War launched across several generations of hardware. To deliver the best experience for players on original Xbox/PS4 to Next-Gen consoles and PC, we developed a system to automatically scale shadow quality. The system allows for true ray-traced shadows at the high-end, falling back to a dynamic-res shadowmap system that supports plausible contact hardening. This dynamic-res system works within a fixed memory budget and perf-target, delivering shadow resolution where it is needed across an unrestricted number of lights.
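As a hedged sketch of how a fixed-budget dynamic-resolution shadow scheme can work in general (the heuristics and names below are assumptions for illustration, not Treyarch's implementation): give each light a power-of-two shadow map sized by its estimated screen coverage, then shrink the largest maps until the total texel count fits the budget.

```python
def pow2_floor(x, lo, hi):
    """Largest power-of-two resolution <= x, clamped to [lo, hi]."""
    r = lo
    while r * 2 <= min(x, hi):
        r *= 2
    return r

def assign_shadow_resolutions(coverages, budget_texels, min_res=128, max_res=2048):
    """coverages: per-light estimated screen-coverage weights.
    Returns a power-of-two shadow map resolution per light, keeping the
    total texel count within budget_texels where possible."""
    total = sum(coverages) or 1.0
    res = [pow2_floor(int((budget_texels * c / total) ** 0.5), min_res, max_res)
           for c in coverages]
    # If still over budget, halve the largest map until we fit.
    while sum(r * r for r in res) > budget_texels:
        i = max(range(len(res)), key=lambda k: res[k])
        if res[i] <= min_res:
            break  # can't shrink further; caller must raise the budget
        res[i] //= 2
    return res
```

The appeal of this shape of system is that memory stays constant while resolution flows to the lights that matter on screen, across an arbitrary light count.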

https://schedule.gdconf.com/session/character-technology-of-the-last-of-us-part-ii/878264

Attendees will walk away with an understanding of the artist-driven, character focused tools developed for The Last of Us Part II. This session will take a deep dive into character tech and pipeline tools to accommodate a challenging workflow. It includes character assets optimization to fit in the memory budget, data driven face rigs and deformation tech, animation tools, NPC assets authoring (human and infected, RATKING), gore tool development. How our current tools and challenges are shaping a pipeline for future titles as production continues to scale.

Many other good presentations; I've forgotten plenty and need to do another pass.

https://schedule.gdconf.com/session...musical-storytelling-in-cyberpunk-2077/878351

https://schedule.gdconf.com/session/emotional-systemic-facial-of-last-of-us-part-ii/878196

https://schedule.gdconf.com/session...e-techniques-in-the-last-of-us-part-ii/878333

https://schedule.gdconf.com/session...-reinforcement-learning-for-navigation/879035

https://schedule.gdconf.com/session...imation-generation-for-expressive-npcs/880503

https://schedule.gdconf.com/session...nd-game-balancing-with-project-chimera/879273

https://schedule.gdconf.com/session...to-3d-rotations-for-non-mathematicians/879197

EDIT:
https://schedule.gdconf.com/session/rope-simulation-in-uncharted-4-and-the-last-of-us-2/878186

For Uncharted 4 a new rope technology had to be developed to allow for Drake's grapple rope and jeep winch to bend and wrap around collision while setting a new high bar for grapple rope presentation in games. In The Last Of Us 2 this technology was taken a step further to support rope puzzles where the player has the freedom to throw 14m of rope into the environment and pick it up at any point. This presentation will go over the history of development of this technology and the major challenges encountered on the way. Then it will dive into some details of the soft body simulation as well as the technique used to solve the problem of taut rope wrapping around collision.

https://schedule.gdconf.com/session/the-art-of-ori-and-the-will-of-the-wisps/878256

Attendees can expect an in-depth discussion of the artistic goals and processes behind Moon Studios' critically acclaimed Ori and the Will of the Wisps. It will cover not only the core principles of the Ori art style, but the various technical methods that were implemented to push the game's visuals forward. Examples of scene construction showing the art department's iterative process will be included.

As a completely remote studio spanning numerous countries around the globe, Moon will elaborate on their studio structure and philosophy. This talk will also cover various challenges faced by the art department during production, and how they were overcome.

Loading times in Ghost of Tsushima are incredible on PS4
https://schedule.gdconf.com/session/zen-of-streaming-building-and-loading-ghost-of-tsushima/878191

Ghost of Tsushima is ~15x larger than previous Sucker Punch games and won praise from users for its fast load times and compact patch sizes. This talk examines the technology choices that made this transition possible, from its bumpy start to its even finish. It will delve into our world building strategy and how it evolved. It will also show the details of our various streaming technologies and the tradeoffs encountered in switching to a fine-grained streaming model. It will cover a variety of memory, disc and performance optimizations made to systems including terrain, pathing, physics, AI and rendering. Lastly, this talk will discuss the process of shipping Ghost with an emphasis on ensuring fast loading times and how our technology held up (or not) working from home during a pandemic.

https://schedule.gdconf.com/session...the-ground-the-terrain-of-call-of-duty/879628

Over the last couple projects Treyarch developed a new terrain system that's been seen in Black Ops 4, Warzone, and Black Ops Cold War. Part of it is a uniquely powerful set of tools for editing this terrain and developing biomes procedurally for it, where editing features are implemented on the GPU for real-time iteration. The talk will also go into details on Black Ops Cold War graphical terrain features that take advantage of virtual texturing to implement seamless blending of artistic elements, and the unique implementation details that allowed us to push as much visual quality as possible in this system while staying in the memory and performance budgets of a 60 fps game.
 
https://momentsingraphics.de/Siggraph2021.html

BRDF importance sampling for polygonal lights

Abstract
With the advent of real-time ray tracing, there is an increasing interest in GPU-friendly importance sampling techniques. We present such methods to sample convex polygonal lights approximately proportional to diffuse and specular BRDFs times the cosine term. For diffuse surfaces, we sample the polygons proportional to projected solid angle. Our algorithm partitions the polygon suitably and employs inverse function sampling for each part. Inversion of the distribution function is challenging. Using algebraic geometry, we develop a special iterative procedure and an initialization scheme. Together, they achieve high accuracy in all possible situations with only two iterations. Our implementation is numerically stable and fast. For specular BRDFs, this method enables us to sample the polygon proportional to a linearly transformed cosine. We combine these diffuse and specular sampling strategies through novel variants of optimal multiple importance sampling. Our techniques render direct lighting from Lambertian polygonal lights with almost no variance outside of penumbrae and support shadows and textured emission. Additionally, we propose an algorithm for solid angle sampling of polygons. It is faster and more stable than existing methods.

Keywords: projected solid angle sampling, solid angle sampling, light sampling, next event estimation, spherical polygons, spherical triangles, polygonal lights, real-time ray tracing, rendering, linearly transformed cosines, LTC, Monte Carlo integration, optimal MIS
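The "inverse function sampling" the abstract builds on is the standard inverse-CDF technique; the paper's contribution is making the inversion tractable for projected solid angle over polygons. A one-dimensional toy example for a pdf proportional to the cosine term (my illustration, not the paper's algorithm):

```python
import math
import random

def sample_cosine_angle(u):
    """Inverse-CDF sampling of theta in [0, pi/2] with pdf cos(theta):
    the CDF is F(theta) = sin(theta), so theta = F^{-1}(u) = asin(u)."""
    return math.asin(u)

# Monte Carlo sanity check: under this pdf, E[theta] = pi/2 - 1 ~= 0.5708.
random.seed(42)
n = 200_000
mean_theta = sum(sample_cosine_angle(random.random()) for _ in range(n)) / n
```

For polygonal lights the same recipe applies in 2D over the spherical polygon, but F^{-1} has no closed form there, hence the paper's iterative inversion with a careful initialization.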
 

Motion matching is definitely neat. In the back of my head I'm wondering if one could author a collection of hand-animated sequences and use a motion-matching graph (or an AI-generated graph) to pick and blend between them, so you get the same production benefit of not having to hand-craft a bunch of transitions, but also keep the control and art style of hand-authored cycles.
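At its core the matching step is just a weighted nearest-neighbor search over pose and trajectory features; a minimal brute-force sketch of that idea (real systems use acceleration structures and far richer features; everything here is illustrative):

```python
def match_cost(query, candidate, weights):
    """Weighted squared distance between feature vectors
    (e.g. foot positions, velocities, future trajectory samples)."""
    return sum(w * (q - c) ** 2 for q, c, w in zip(query, candidate, weights))

def best_frame(query, database, weights):
    """Brute-force motion-matching search: index of the animation frame
    whose features best match the current query."""
    return min(range(len(database)),
               key=lambda i: match_cost(query, database[i], weights))
```

The graph-of-hand-authored-cycles idea above would amount to restricting the database to frames reachable from the current clip, then blending into whichever frame wins the search.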
 
Overall, the results aren't all that convincing until 64spp is used where we see a noticeable improvement in half of the scenes ...
I think it makes some really nice inroads in those areas of the image at 1 spp where the path trace + ReSTIR merely returns dark, salt-and-pepper noise. Or even DDGI, for that matter.

(attached NRC comparison screenshots)
 