Polygons, voxels, SDFs... what will our geometry be made of in the future?

I wonder why the two games Bunnell worked on were never released.
His idea is really outstanding: it replaces the visibility term with a cheap approximation, reducing the (brute-force) cost of a sample from N^2 to N. It's the only proposed GI method I know of that achieves a serious speedup.
It comes at a high cost, though. The shifting/leaking errors make it impractical for indoor scenes in my experience, and the proposed grouping is just a hack, not a fix. But for outdoor games this would still be interesting, IMO.
Somewhat sad. Feels like a lost opportunity. Maybe the patent on the algorithm is an issue.
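For anyone who hasn't seen the technique: here is a minimal sketch of the core idea, after Bunnell's disc-based GI chapter in GPU Gems 2 (ch. 14). The exact form-factor weighting in the chapter differs a bit, and all the names here are mine:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  scale(Vec3 a, float s) { return {a.x*s, a.y*s, a.z*s}; }

struct Disc { Vec3 pos, normal; float area, occlusion; };

// Disc-to-disc form factor standing in for the visibility term: no ray is
// traced, so gathering from all N emitter discs costs O(N) per receiver.
static float discFormFactor(const Disc& r, const Disc& e)
{
    Vec3  v  = sub(e.pos, r.pos);
    float d2 = dot(v, v);
    if (d2 < 1e-8f) return 0.0f;                      // skip self / coincident
    Vec3  dir  = scale(v, 1.0f / std::sqrt(d2));
    float cosE = std::max(0.0f, -dot(e.normal, dir)); // emitter faces receiver
    float cosR = std::max(0.0f,  dot(r.normal, dir)); // receiver faces emitter
    return e.area * cosE * cosR / (3.14159265f * d2 + e.area);
}

// One gather pass. Bunnell runs a few of these, weighting each emitter by
// (1 - its occlusion from the previous pass) to undo the double-counting
// that appears where discs shadow each other.
static float gatherOcclusion(const Disc& receiver, const std::vector<Disc>& discs)
{
    float occ = 0.0f;
    for (const Disc& e : discs)
        occ += discFormFactor(receiver, e) * (1.0f - e.occlusion);
    return std::min(occ, 1.0f);
}
```

The point is that the inner loop never traces a ray; the (1 - occlusion) weighting over a few passes is what stands in for visibility, and it's also where the shifting/leaking errors come from.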
 
Impressive CPU raytracing with support for subdiv and patches:
Nice.
Direct ray tracing of patches and solids; it's quite rare for software to do this. (Perhaps due to displacement mapping support, etc.)

Really love that they rendered the rasterized version with the same features as the ray-traced one. (Not the usual trick where only the RT version gets light bouncing and realistic materials.)
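No idea what this particular tracer does internally, but for reference, the classic way to ray trace a bicubic patch directly (no tessellation) is Newton iteration on the patch parameters. A rough, self-contained sketch; a production tracer would seed (u, v) from a bounding-hull or subdivision hit rather than the patch center:

```cpp
#include <cmath>

struct V3 { float x, y, z; };
static V3    operator+(V3 a, V3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static V3    operator-(V3 a, V3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static V3    operator*(V3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float dot(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static V3    cross(V3 a, V3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static V3    norm(V3 a) { return a * (1.0f / std::sqrt(dot(a, a))); }

// Cubic Bernstein basis and its derivative at t.
static void bernstein(float t, float b[4], float db[4])
{
    float s = 1.0f - t;
    b[0] = s*s*s;    b[1] = 3*t*s*s;        b[2] = 3*t*t*s;        b[3] = t*t*t;
    db[0] = -3*s*s;  db[1] = 3*s*(1 - 3*t); db[2] = 3*t*(2 - 3*t); db[3] = 3*t*t;
}

// Evaluate a bicubic Bezier patch and its partials from 16 control points.
static void evalPatch(const V3 P[4][4], float u, float v, V3& S, V3& Su, V3& Sv)
{
    float bu[4], dbu[4], bv[4], dbv[4];
    bernstein(u, bu, dbu);
    bernstein(v, bv, dbv);
    S = Su = Sv = V3{0, 0, 0};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            S  = S  + P[i][j] * (bu[i]  * bv[j]);
            Su = Su + P[i][j] * (dbu[i] * bv[j]);
            Sv = Sv + P[i][j] * (bu[i]  * dbv[j]);
        }
}

// Express the ray as the intersection of two planes through its origin,
// then Newton-iterate on (u, v) until the patch point lies on both planes.
bool intersectPatch(const V3 P[4][4], V3 org, V3 dir, float& u, float& v, float& t)
{
    V3 a  = std::fabs(dir.x) > std::fabs(dir.z) ? V3{0, 0, 1} : V3{1, 0, 0};
    V3 n1 = norm(cross(dir, a));
    V3 n2 = norm(cross(dir, n1));
    u = v = 0.5f;                               // naive seed; see note above
    for (int it = 0; it < 16; ++it) {
        V3 S, Su, Sv;
        evalPatch(P, u, v, S, Su, Sv);
        float f1 = dot(n1, S - org), f2 = dot(n2, S - org);
        if (std::fabs(f1) + std::fabs(f2) < 1e-6f) {
            t = dot(S - org, dir) / dot(dir, dir);
            return t > 0 && u >= 0 && u <= 1 && v >= 0 && v <= 1;
        }
        float j11 = dot(n1, Su), j12 = dot(n1, Sv);   // 2x2 Jacobian
        float j21 = dot(n2, Su), j22 = dot(n2, Sv);
        float det = j11*j22 - j12*j21;
        if (std::fabs(det) < 1e-12f) return false;    // singular: give up
        u -= ( j22*f1 - j12*f2) / det;                // Newton step
        v -= (-j21*f1 + j11*f2) / det;
    }
    return false;                                     // did not converge
}
```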
 
Atomontage has finally launched... or something. Underwhelming, to say the least.

https://www.atomontage.com/#prototypes


https://www.atomontage.com/news/we-are-live/
Atomontage said:
We are LIVE
December 1, 2021
To everyone who has been so patiently waiting to see the first manifestation of our tech teased for so long, there is a lot to take in today! There is a great writeup on VentureBeat that covers it well.

On our entirely new site, you are not only able to experience live prototypes of the tech in any modern web browser, but also to start creating and sharing your own 3D content using our unveiled cloud-native platform.

While the current minimum viable product state of the platform does support the visualization, editing, and sharing of huge 3D data, the coming year will be packed with major feature releases that move us as quickly as possible toward our big vision: supporting groundbreaking games and other deeply interactive experiences that Montage Makers (i.e. you) can directly monetize themselves.

We are looking forward to sharing more details about this roadmap filled with big features that are already well underway being developed, so sign up for the newsletter and be the first to know when they drop!
 
"We didn't take any shortcuts, using instead 3D representation, so we can bring true, volumetric, flat lit, chunky, pixelated approximate game worlds to you while claiming some kind of superiority for marketing purposes."

Their tech may be awesome at what it does, but their presentation is annoying and off-putting!
 
Nice, though I noticed that apart from the static animations of the tank, they didn't show any dynamic animations like a character moving, which will be important if they want game developers to adopt it.
 
I just don't see how anyone can maintain that hardware raytracing, as currently implemented, isn't a huge drag on innovation. NVIDIA needed an entire new hardware generation to add alpha testing during raytracing and a very limited displacement mapping implementation. Meanwhile, Nanite is advancing rapidly because it's mostly software.

What we needed in GPUs was, and still is, scalar processors for device-side scheduling through user code, plus AABB/slab/sphere/triangle/quad intersection testers callable from shaders. What we didn't need is black-box ray tracing.
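To be concrete about what I mean by "intersection testers callable from shaders": a small fixed-function primitive like the classic ray/AABB slab test, with everything around it left to user code. A plain C++ sketch rather than a shader intrinsic; the names are mine:

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

// Ray/AABB slab test. The ray direction is passed pre-inverted
// (1/dir per axis), as traversal loops usually do.
bool slabTest(Vec3 org, Vec3 invDir, float tMin, float tMax,
              Vec3 lo, Vec3 hi, float& tEnter)
{
    float tx0 = (lo.x - org.x) * invDir.x, tx1 = (hi.x - org.x) * invDir.x;
    float ty0 = (lo.y - org.y) * invDir.y, ty1 = (hi.y - org.y) * invDir.y;
    float tz0 = (lo.z - org.z) * invDir.z, tz1 = (hi.z - org.z) * invDir.z;
    tEnter      = std::max({std::min(tx0, tx1), std::min(ty0, ty1),
                            std::min(tz0, tz1), tMin});
    float tExit = std::min({std::max(tx0, tx1), std::max(ty0, ty1),
                            std::max(tz0, tz1), tMax});
    // Everything around this call (traversal order, stack, scheduling)
    // would stay in user code; only this test is fixed-function.
    return tEnter <= tExit;
}
```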
 
Isn't this pretty much what RDNA2 does? i.e. callable AABB and triangle intersection HW tests from shader.
 
Up to a point, BTW. You can't separate them out, and a highly variable-latency instruction for a shader ... feels wrong.

PS: Oops, I guess you could just detect leaf nodes in the shader and limit the triangle count during the build. So it doesn't really matter.
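For reference, this is the split we're talking about, as I understand RDNA2: the shader issues a node-intersection instruction (IMAGE_BVH_INTERSECT_RAY) and keeps the traversal stack itself, so leaf handling and scheduling stay in user code. A rough sketch; the hardware call and node encoding here are hypothetical stand-ins:

```cpp
#include <cstdint>

struct Ray   { float org[3], dir[3], tMax; };
struct HwHit { float t = 1e30f; uint32_t child[4] = {}; bool childHit[4] = {}; };

// Hypothetical stand-ins: on RDNA2 these would map onto the
// IMAGE_BVH_INTERSECT_RAY instruction and the driver's BVH node encoding.
HwHit hwIntersectNode(const Ray&, uint32_t) { return {}; }  // box or tri test
bool  isLeaf(uint32_t node) { return (node & 1u) != 0; }    // e.g. a type bit

float traverse(const Ray& ray, uint32_t root)
{
    uint32_t stack[64];   // the traversal stack lives in the shader,
    int sp = 0;           // so ordering and scheduling stay under user control
    stack[sp++] = root;
    float closest = ray.tMax;
    while (sp > 0) {
        uint32_t node = stack[--sp];
        HwHit h = hwIntersectNode(ray, node);  // the variable-latency HW test
        if (isLeaf(node)) {
            if (h.t < closest) closest = h.t;  // triangle hit: keep nearest
        } else {
            for (int i = 0; i < 4; ++i)        // box node: push hit children
                if (h.childHit[i]) stack[sp++] = h.child[i];
        }
    }
    return closest;
}
```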
 
If Media Molecule can continue iterating on the Dreams engine, I'd like to see how far they can take it. If they can solve some of the issues with deformation and animating objects, that would be spectacular. Some scenes in Dreams have a level of geometry that just feels right.
 
But is MM actually putting much effort into Dreams anymore? Outside of building the community, I mean.
 