Game development presentations - a useful reference

It seems Dreams uses voxels for occlusion (page 139).
Dreams is still a WIP. That's Alex's latest idea quickly thrown into the mix. ;)

No. They explored various techniques, including volumetric lighting, but the current engine uses splats and basic lighting techniques. The clouds are rendered to a G-buffer and then lit conventionally -

Alex Evans said:
I should at this point pause to give you a rough outline of the rendering pipe - it’s totally traditional and simple at the lighting end at least. We start with 64 bit atomic min (== splat of single pixel point) for each point into 1080p buffer, using lots of subpixel jitter and stochastic alpha. There are a LOT of points to be atomic-min’d! (10s of millions per frame) Then convert that from z+id into traditional 1080 gbuffer, with normal, albedo, roughness, and z. then deferred light that as usual.
Then, hope that TAA can take all the noise away. ;)
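
To make the splatting step a bit more concrete, here is a minimal CUDA sketch of the 64-bit atomic-min idea described above. The buffer layout, packing scheme, projection and all the names (splatPoints, packDepthId, kWidth, etc.) are assumptions for illustration, not Media Molecule's actual code.

#include <cstdint>
#include <cuda_runtime.h>

// Hypothetical framebuffer size; the talk targets 1080p.
constexpr int kWidth  = 1920;
constexpr int kHeight = 1080;

// Pack depth into the high 32 bits and a point id into the low 32 bits,
// so a single 64-bit atomicMin keeps the nearest point per pixel and also
// remembers which point won.
__device__ uint64_t packDepthId(float depth, uint32_t pointId)
{
    uint32_t depthBits = __float_as_uint(depth); // ordering is monotonic for depth >= 0
    return (uint64_t(depthBits) << 32) | pointId;
}

// One thread per point: stochastic alpha test, project with subpixel jitter,
// then splat a single pixel via 64-bit atomic min.
__global__ void splatPoints(const float3* positions,       // view-space positions
                            const float*  alphas,          // per-point alpha
                            const float2* jitter,          // per-point random [0,1) pairs
                            int numPoints,
                            unsigned long long* zIdBuffer) // kWidth*kHeight, cleared to ~0ull
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numPoints) return;

    // Stochastic alpha: probabilistically discard instead of blending.
    if (jitter[i].x > alphas[i]) return;

    float3 p = positions[i];
    if (p.z <= 0.0f) return;

    // Toy pinhole projection plus a subpixel jitter; a real renderer would use
    // the camera matrices and a proper jitter sequence feeding TAA.
    float jx = jitter[i].y - 0.5f;
    float jy = jitter[i].x - 0.5f;
    int px = (int)((p.x / p.z * 0.5f + 0.5f) * kWidth  + jx);
    int py = (int)((p.y / p.z * 0.5f + 0.5f) * kHeight + jy);
    if (px < 0 || px >= kWidth || py < 0 || py >= kHeight) return;

    atomicMin(&zIdBuffer[py * kWidth + px],
              (unsigned long long)packDepthId(p.z, (uint32_t)i));
}

A second pass would then unpack the surviving z+id per pixel, fetch that point's attributes, and write normal, albedo, roughness and z into a conventional G-buffer for deferred lighting, with TAA left to average out the jitter and stochastic-alpha noise.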

Looking through those slides with my very limited technical ability, it kind of reminds me of those "unlimited detail" videos from a while back. Is it a similar technology?
I don't think so. The spatial representation is CSG based. These are evaluated into the distance fields which produce the point clouds which produce the splats. I'd have to read it a second time to get my head fully around the final representation (a cloud of clouds of point clouds). I'm not sure how UD worked.
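
For a rough idea of what "CSG evaluated into distance fields" can look like, here is a minimal sketch of signed-distance primitives combined with CSG operators. The specific primitives, operators and the sceneSDF "edit list" are made-up placeholders, not Dreams' actual representation.

#include <cuda_runtime.h>
#include <math.h>

// Signed distance primitives: negative inside, positive outside.
__host__ __device__ float sdSphere(float3 p, float3 c, float r)
{
    float dx = p.x - c.x, dy = p.y - c.y, dz = p.z - c.z;
    return sqrtf(dx * dx + dy * dy + dz * dz) - r;
}

__host__ __device__ float sdBox(float3 p, float3 c, float3 halfExtent)
{
    float qx = fabsf(p.x - c.x) - halfExtent.x;
    float qy = fabsf(p.y - c.y) - halfExtent.y;
    float qz = fabsf(p.z - c.z) - halfExtent.z;
    float ox = fmaxf(qx, 0.0f), oy = fmaxf(qy, 0.0f), oz = fmaxf(qz, 0.0f);
    return sqrtf(ox * ox + oy * oy + oz * oz)
         + fminf(fmaxf(qx, fmaxf(qy, qz)), 0.0f);
}

// CSG operators expressed on distance values.
__host__ __device__ float opUnion(float a, float b)    { return fminf(a, b); }
__host__ __device__ float opSubtract(float a, float b) { return fmaxf(a, -b); }

// A toy "edit list": a box with a sphere carved out of one corner.
// Evaluating this field over a grid (or adaptively) gives the surface
// from which a point cloud can be scattered and later splatted.
__host__ __device__ float sceneSDF(float3 p)
{
    float box    = sdBox(p, make_float3(0.0f, 0.0f, 0.0f), make_float3(1.0f, 1.0f, 1.0f));
    float sphere = sdSphere(p, make_float3(0.8f, 0.8f, 0.8f), 0.6f);
    return opSubtract(box, sphere);
}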
 

No, it is different.


Yes, I understand now after reading it a second time; it will probably be too expensive to use....

It will be a VR title, so it's 60 fps and not 30 fps...
 
I'd be interested to hear how much render times vary between simple and complex scenes in Dreams. I imagine they'd need to implement a complexity budget for those using the tools to create their own dreams/films/games.
 
We are live:
http://advances.realtimerendering.c...iggraph2015_combined_final_footer_220dpi.pptx

My slides start at page 40. If you have any questions, please ask. I recommend reading the slide notes; there's lots of info in them.

The DX12 slide has no explanations. If you need some, please ask :)

Just read it again with the notes. In this part specifically, about VT:
[screenshot of the VT part of the slides]


Does that mean more efficient memory allocation? You didn't cover that part in detail, which is why I'm asking. And can you provide any comparison with traditional techniques? :D
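
For reference on the memory-allocation question, here is a generic sketch of the virtual-texturing indirection idea (not the implementation from the slides): the full virtual texture space is only backed by a small physical page cache, and a page table redirects samples into whichever pages happen to be resident, so memory is only spent on pages that are actually needed. All the names and the fallback behaviour here are assumptions.

#include <cuda_runtime.h>

// One entry per virtual page. Only resident pages consume physical memory.
struct PageEntry {
    unsigned short physX, physY; // page coordinates in the physical cache
    unsigned char  valid;        // 0 -> not resident
    unsigned char  pad;
};

// Translate a virtual UV into the physical cache's UV space.
// Borders, mip fallback and filtering details are omitted for brevity.
__device__ float2 virtualToPhysicalUV(float2 virtUV,
                                      const PageEntry* pageTable,
                                      int tableW, int tableH, // page table size in pages
                                      int pageSize,           // texels per page side
                                      int cacheSizeTexels)    // physical cache side length
{
    // Which virtual page does this UV fall into?
    int px = min((int)(virtUV.x * tableW), tableW - 1);
    int py = min((int)(virtUV.y * tableH), tableH - 1);
    PageEntry e = pageTable[py * tableW + px];

    // If the page is not resident, a real system falls back to a coarser
    // mip; here we just clamp to page (0,0), assumed to hold a low-res fallback.
    if (!e.valid) { e.physX = 0; e.physY = 0; }

    // Position within the page, then into the physical cache's UV space.
    float fx = virtUV.x * tableW - px;
    float fy = virtUV.y * tableH - py;
    float2 physUV;
    physUV.x = (e.physX + fx) * pageSize / (float)cacheSizeTexels;
    physUV.y = (e.physY + fy) * pageSize / (float)cacheSizeTexels;
    return physUV;
}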
 
The very opposite of the direction DX12 is trumpeting, with its gazillion draw calls. It will be interesting to pit the two approaches against each other and see what the differences are.

From my understanding, one of the points of DirectX 12 (and other "low-level" APIs like Vulkan and Mantle) is to reduce the necessary draw calls to get a desired result, boosting performance. So it seems to be in the same basic direction.


Thanks for posting this! :) I'm no expert, but it seems like a lot of good work was put in there. I do have questions about DirectX 12. From my understanding, DirectX 12 is supposed to reduce the abstraction layer between a game and the hardware on PC, allowing for more flexible and efficient programming, albeit a bit trickier. From what you know about DirectX 12, how might it allow you to improve even further on what you've already done with DirectX 11? Does DirectX 12 enable rendering methods that weren't really possible -- or, at least, came with a higher performance cost -- on DirectX 11?

I'm also curious whether the features talked about in the slide notes make use of the Deferred Contexts feature in DirectX 11. Again, from my understanding, Deferred Contexts is a feature intended to help maximize CPU performance, but only Nvidia ever actually supported it, and AMD never enabled support for it in their graphics cards. If you are using Deferred Contexts, how do you think it compares to a general low-level API like DirectX 12? How much of a difference in performance can we expect to see between Deferred Contexts-enabled Nvidia graphics cards and AMD cards without the feature?
 
I'm reading the baked GI presentation for The Order; I had no idea one could bake speculars until now (if we don't count cubemapped reflections as a form of baked speculars). Also, seeing entire locations working with nothing but baked lighting, it's kind of sad to think that the kind of visuals we had in The Order wouldn't be achievable with dynamic level geometry.
 
Just skimmed through the unified volumetric rendering talk. I like how temporal techniques are being put to good use this generation (save a few examples; I'm looking at you, P.Cars temporal blending). All that information accumulating over the course of time is too valuable to waste.
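
As a generic illustration of that kind of temporal accumulation (not the specific scheme from any of the talks), here is a minimal sketch of blending the current frame's noisy volumetric result into a history buffer with an exponential moving average. Real implementations also reproject the history with motion vectors and reject stale samples; the names and the blend factor are assumptions.

#include <cuda_runtime.h>

// history = lerp(history, current, blendFactor), per texel.
// With blendFactor around 0.05-0.1, noise is averaged over many frames.
__global__ void temporalBlend(const float4* currentFrame,
                              float4*       history,
                              int           numTexels,
                              float         blendFactor)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numTexels) return;

    float4 cur  = currentFrame[i];
    float4 hist = history[i];

    hist.x += (cur.x - hist.x) * blendFactor;
    hist.y += (cur.y - hist.y) * blendFactor;
    hist.z += (cur.z - hist.z) * blendFactor;
    hist.w += (cur.w - hist.w) * blendFactor;

    history[i] = hist;
}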
 