Game development presentations - a useful reference

145 slides and a 45-minute presentation!? He's optimistic...
It was a totally awesome presentation :)

MJP's presentation was nice, and I really liked the Remedy guys' GI stuff, DICE's unified volumetric lighting solution, and the distance field shadows + AO by Epic. Lots of very high quality talks; thanks to Natasha for making it all happen :)

I had only 3 slides about virtual texturing and still got a huge number of questions specifically about it. Lots of people came to talk about VT with me afterwards. It seems that people are starting to get interested in virtual texturing again, as the modern versions are fully dynamic and don't need 3 discs of baked data (like id Software did in Rage). Maybe I should talk specifically about virtual texturing sometime in the future.
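The core of any virtual texturing scheme is an indirection step: a huge virtual texture is split into pages, and a page table maps resident virtual pages to slots in a small physical atlas. A minimal sketch of that translation, with all names, sizes, and the dictionary-based page table being illustrative assumptions rather than any engine's actual implementation:

```python
# Hedged sketch of virtual texturing address translation.
# PAGE_SIZE and the page-table representation are illustrative assumptions.

PAGE_SIZE = 128  # texels per virtual page side

def virtual_to_physical(u, v, virtual_size, page_table):
    """Translate normalized virtual UVs into physical atlas texel coords.

    page_table maps (page_x, page_y) -> (atlas_x, atlas_y) for resident pages.
    A missing entry is a page fault: a real system would request streaming
    and fall back to a coarser mip level.
    """
    tx, ty = int(u * virtual_size), int(v * virtual_size)
    page = (tx // PAGE_SIZE, ty // PAGE_SIZE)
    if page not in page_table:
        return None  # page not resident
    ax, ay = page_table[page]
    # the offset within the page is the same in the physical atlas
    return (ax * PAGE_SIZE + tx % PAGE_SIZE,
            ay * PAGE_SIZE + ty % PAGE_SIZE)

# usage: one resident page mapped to atlas slot (3, 1)
table = {(0, 0): (3, 1)}
hit = virtual_to_physical(0.001, 0.001, 8192, table)
miss = virtual_to_physical(0.5, 0.5, 8192, table)
```

On a GPU this lookup is a texture fetch into an indirection texture rather than a dictionary, but the math is the same.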
 
I saw on Twitter that Alex Evans' slides are ready to be uploaded. Maybe they will be available today!
 

That would be great. There's a lot of potential for it. So many games use procedural texture compositing in their content creation pipelines (Substance Designer and the like); with VT, the compositing could happen at run-time, trading asset streaming latency for texture-creation compute time, allowing faster loading times or richer open worlds. Then there's the ability to have lots of decals with cool material-property modifications at very low cost... and all your texture-space G-buffer experiments.
 
Correct me if I am wrong, but it seems like this approach puts more strain on the GPU and less on the CPU. I don't know if you can answer this, but it's strange that the PS4 version of AC:U was underperforming in comparison to the X1 version, at the same resolution no less. Most people assumed it was the CPU, given that cutscenes ran better on PS4 and gameplay better on X1. In any case, I really liked the game, especially how it looks; I have an album dedicated to it on my Flickr, after all :p https://www.flickr.com/photos/128836441@N08/sets/72157649274981640
 
Can't comment on AC:U stuff. In our implementation the GPU cost for culling is less than the GPU savings for G-buffer rendering, so it is practically free on the GPU, and it also frees 3 CPU cores for gameplay/physics/destruction. For us this pipeline is a bigger benefit than moving from last gen to current gen: it allows us to render 10x more objects, and we can also simulate up to 20k active physics objects now at 60 fps. A huge improvement over our old tech.
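The post doesn't describe the culling pass itself, but the kind of per-object visibility test that such pipelines move from CPU threads to a GPU compute pass can be sketched with plain bounding-sphere vs. frustum-plane math. Everything here (plane convention, data layout) is an illustrative assumption, not the engine's actual code:

```python
# Hedged sketch: bounding-sphere frustum culling, the sort of per-object
# test a GPU culling pass runs in parallel. Plane normals point inward.

def sphere_outside_plane(center, radius, plane):
    """Signed-distance test: True if the sphere is fully behind the plane."""
    nx, ny, nz, d = plane
    dist = nx * center[0] + ny * center[1] + nz * center[2] + d
    return dist < -radius

def cull(objects, frustum_planes):
    """Return indices of objects whose bounding sphere touches the frustum.

    objects: list of (center, radius) pairs.
    On a GPU this loop becomes one compute thread per object, appending
    surviving instances to a draw-indirect buffer.
    """
    visible = []
    for i, (center, radius) in enumerate(objects):
        if not any(sphere_outside_plane(center, radius, p)
                   for p in frustum_planes):
            visible.append(i)
    return visible

# usage: a single near plane z >= 0; one object in front, one behind
planes = [(0.0, 0.0, 1.0, 0.0)]
survivors = cull([((0, 0, 5), 1.0), ((0, 0, -5), 1.0)], planes)
```

The "practically free" claim in the post then comes from the saved G-buffer work outweighing this test's cost, since culled objects never reach rasterization.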
 
Oh wait, I have! I just tried bilateral filtering and stratified sampling over 8x8 blocks, and it does help a lot.
I think the general principle of z-buffer for close range and simple bitmask voxelization for longer-range gather occlusion is so simple that it's worth a try in almost any engine. Our voxel cascades are IIRC 64^3, and the smallest cascade covers most of the scene, so they're sort of Minecraft-sized voxels, or just smaller at the finest scale (then blockier further out, for the coarser cascades). But the screen-space part captures occlusion nicely for smaller-than-voxel distances.
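A bitmask voxelization like the one described above stores one occupancy bit per voxel, so a 64^3 cascade fits in 32 KB, and longer-range occlusion becomes a march through the bitfield. A minimal sketch under those assumptions (the fixed-step traversal and packing order are illustrative, not Dreams' actual implementation):

```python
# Hedged sketch: 1-bit-per-voxel occupancy cascade plus a coarse occlusion
# march. N matches the 64^3 cascade size mentioned in the post; everything
# else is an illustrative assumption.

N = 64  # voxels per cascade side

def _index(x, y, z):
    return (z * N + y) * N + x

def set_voxel(grid, x, y, z):
    idx = _index(x, y, z)
    grid[idx // 8] |= 1 << (idx % 8)

def voxel_bit(grid, x, y, z):
    idx = _index(x, y, z)
    return (grid[idx // 8] >> (idx % 8)) & 1

def occluded(grid, start, end, steps=128):
    """Fixed-step march from start to end in voxel coords.

    Returns True if any solid voxel lies on the segment. A real engine
    would use a DDA traversal instead of fixed steps, and gather cone
    occlusion rather than a single binary ray.
    """
    for i in range(1, steps):
        t = i / steps
        x = int(start[0] + (end[0] - start[0]) * t)
        y = int(start[1] + (end[1] - start[1]) * t)
        z = int(start[2] + (end[2] - start[2]) * t)
        if 0 <= x < N and 0 <= y < N and 0 <= z < N and voxel_bit(grid, x, y, z):
            return True
    return False

# usage: one solid voxel in the middle of the cascade
grid = bytearray(N * N * N // 8)  # 32 KB occupancy bitmask
set_voxel(grid, 32, 32, 32)
```

The screen-space z-buffer pass then handles occluders smaller than one voxel, which this grid cannot represent.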

It seems Dreams uses voxels for occlusion (page 139).
 

1080p + MSAA + 60 fps on Xbox One. Great job!
 
Slides of the Dreams presentation are up.

Looking through those slides with my very limited technical ability, it kind of reminds me of those "unlimited detail" videos from a while back. Is it a similar technology?
 

I too was wondering about the comparison. Is your technique suited to AC:U as well? A 50% CPU saving is a game changer; enough to breathe a few more years into the consoles. I'm wondering if similar wins were already present in AC:U.
 