Dreams : create, share & play [PS4, PS5]

What does Point-based Rendering have to do with the lighting? Nothing.

Nothing?
https://www.scss.tcd.ie/Michael.Manzke/CS7055/cs7055-2015-16-VolumeRendering-mm.pdf


Their SDF approach also gives them, for instance, a good ambient occlusion approximation, which helps the lighting system.
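(For reference, the standard way to get cheap AO out of an SDF, popularized by Inigo Quilez and not necessarily exactly what MM do, is to march a few samples along the surface normal and compare the field value to the distance travelled. A toy Python sketch with made-up sample counts and falloff:)

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 0.0), radius=1.0):
    """Signed distance to a sphere: negative inside, positive outside."""
    dx, dy, dz = (p[i] - center[i] for i in range(3))
    return math.sqrt(dx * dx + dy * dy + dz * dz) - radius

def sdf_ambient_occlusion(p, n, sdf, samples=5, step=0.1, strength=0.5):
    """March along the normal; wherever the SDF value is smaller than the
    distance travelled, something nearby is occluding the point."""
    occlusion, weight = 0.0, 1.0
    for i in range(1, samples + 1):
        d = i * step
        sample = tuple(p[k] + n[k] * d for k in range(3))
        occlusion += weight * (d - sdf(sample))
        weight *= 0.5  # samples close to the surface matter most
    return max(0.0, min(1.0, 1.0 - strength * occlusion))
```

A point on the sphere facing outward comes back fully unoccluded (1.0), while a point right next to the sphere and facing it gets darkened. The real thing runs per pixel on the GPU, of course.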

How is this your next post after saying "What does Point-based Rendering have to do with the lighting? Nothing."?
 
I think that was posted already. I love how they use the Move controllers, though, with all the different gestures.
 
Nothing?
https://www.scss.tcd.ie/Michael.Manzke/CS7055/cs7055-2015-16-VolumeRendering-mm.pdf




How is this your next post after saying "What does Point-based Rendering have to do with the lighting? Nothing."?
SDF also != point-based rendering. The slide talks about a volume representation of their scene, which they leverage for their lighting system. So the actual geometry generated still has nothing to do with the lighting. They could generate polygons or voxels from the SDF and still apply the same lighting.
 
Nothing?
How is this your next post after saying "What does Point-based Rendering have to do with the lighting? Nothing."?

onQ, I'm sorry, but I feel like you don't fully understand how Dreams is rendered. Definitely, a volumetric representation of geometry can allow for some rendering features not possible with pure poly-based models. But those are simply not leveraged by the specific implementation Dreams is using right now, which apparently is targeting 60fps. This might have changed, but as of their SIGGRAPH presentation, all volumes are rendered into a regular G-buffer, so all the exoticness of their rendering system stops there. The lighting is then done using data from that G-buffer, not unlike most modern games. Whatever special sauce their lighting system has (if there's any) could be perfectly usable with rasterized, triangle-based assets. But mostly, I think most screens look good because MM has some damn good artists. That scene might be geometrically simple, but that minimalism deceptively hides a lot of good taste on the artists' side: good color choices, interesting shapes and proportions, and an expressive light set-up.
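To illustrate what that decoupling means: once the scene is in a G-buffer, the lighting pass only ever sees per-pixel surface data and has no idea whether triangles, voxels or splatted points produced it. A minimal single-bounce Lambert sketch (toy code, not MM's actual shading):

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def deferred_lighting(gbuffer, lights):
    """Lighting pass of a deferred renderer: consumes per-pixel position,
    normal and albedo, regardless of how that data was rasterized."""
    out = []
    for px in gbuffer:
        color = [0.0, 0.0, 0.0]
        for light in lights:
            to_light = normalize(tuple(l - p for l, p in zip(light["pos"], px["pos"])))
            ndotl = max(0.0, sum(n * t for n, t in zip(px["normal"], to_light)))
            for c in range(3):
                color[c] += px["albedo"][c] * light["color"][c] * ndotl
        out.append(tuple(color))
    return out
```

Swap the G-buffer fill pass from triangle rasterization to point splatting and this function doesn't change at all, which is the whole point.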

I think things will get interesting if they manage to implement a volume-based lighting system. They had something like that for LBP2, and Evans mentioned the Frostbite team's work on Unified Volumetric Rendering, which is actually quite similar to LBP2's. They sure use a lot of scattering lights, and that's one of the key aspects of LBP's surreal look. It would do a lot of good for Dreams, but might be hard to get working at 60fps... let's see.
 
The original presentation covered this in detail, but this overview shows the lighting model was straightforward, although they're still experimenting.
http://www.dualshockers.com/2015/08...-wip-screenshots-showing-failure-and-success/

A volumetric representation does intuitively suggest a volumetric lighting solution, as per LBP2 and irradiance slices (which'll need something else in a larger space). I hope MM pull that (some GI-type solution) off, as it'd make a world of difference for using Dreams for animations.
 
Actually, I just re-read the slides from SIGGRAPH and watched the Umbra Ignite presentation. They actually do some less typical stuff on the lighting, which is indeed aided by the volumetric representation of geometry. Specifically, they use a large number of Imperfect Shadow Maps for occlusion, and also do a quick binary cascaded voxelization of the scene for AO. So yeah, those things are probably faster to do with volumes than with triangles. There's some room to do that stuff with hybrid representations like UE4's, but if we're talking indie, going volumetric all the way is more straightforward, ignoring of course the massive overhead of doing the R&D to create such an engine and the tools to make content for it.
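To make the binary-voxelization idea concrete (a toy single-level version; their real implementation is cascaded and GPU-side): splat geometry samples into a binary occupancy grid, then estimate AO per cell as the fraction of empty neighbours.

```python
def voxelize(points, cell=1.0):
    """Binary voxelization: mark every grid cell that contains a sample."""
    return {(int(x // cell), int(y // cell), int(z // cell)) for x, y, z in points}

def voxel_ao(voxel, occupied, radius=2):
    """Crude AO term: how much of the cell's neighbourhood is open space."""
    total = free = 0
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            for dz in range(-radius, radius + 1):
                if (dx, dy, dz) == (0, 0, 0):
                    continue
                total += 1
                if (voxel[0] + dx, voxel[1] + dy, voxel[2] + dz) not in occupied:
                    free += 1
    return free / total
```

An isolated cell comes back at 1.0 (fully open), and it darkens as neighbouring cells fill up. The "binary" part matters: one bit per cell is tiny, so the whole grid stays cache-friendly.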
 
Actually, I just re-read the slides from SIGGRAPH and watched the Umbra Ignite presentation. They actually do some less typical stuff on the lighting, which is indeed aided by the volumetric representation of geometry. Specifically, they use a large number of Imperfect Shadow Maps for occlusion, and also do a quick binary cascaded voxelization of the scene for AO. So yeah, those things are probably faster to do with volumes than with triangles. There's some room to do that stuff with hybrid representations like UE4's, but if we're talking indie, going volumetric all the way is more straightforward, ignoring of course the massive overhead of doing the R&D to create such an engine and the tools to make content for it.

Simon Brown of Media Molecule said he used many ideas from the SIGGRAPH presentation to improve Dreams' rendering engine...
 
The game is more than a year from release, and they decided to use point-based rendering not so long ago... The rendering technology will improve...
 
Watching the video above, I still find the menus a bit convoluted; there is a lot of traditional PC-style browsing there that I don't like ;)
LBP menus were not quick to navigate either, so it's part of MM's modus operandi.

Voice controls would definitely be ideal for this game, at least as shortcuts; a few key words like "shapes, physics, properties, colours, materials, etc." would be enough to make navigation a lot easier.
A companion app with touch control would be great especially for kids.

MM needs to work on that area.
 
onQ, I'm sorry, but I feel like you don't fully understand how Dreams is rendered. Definitely, a volumetric representation of geometry can allow for some rendering features not possible with pure poly-based models. But those are simply not leveraged by the specific implementation Dreams is using right now, which apparently is targeting 60fps. This might have changed, but as of their SIGGRAPH presentation, all volumes are rendered into a regular G-buffer, so all the exoticness of their rendering system stops there. The lighting is then done using data from that G-buffer, not unlike most modern games. Whatever special sauce their lighting system has (if there's any) could be perfectly usable with rasterized, triangle-based assets. But mostly, I think most screens look good because MM has some damn good artists. That scene might be geometrically simple, but that minimalism deceptively hides a lot of good taste on the artists' side: good color choices, interesting shapes and proportions, and an expressive light set-up.

I think things will get interesting if they manage to implement a volume-based lighting system. They had something like that for LBP2, and Evans mentioned the Frostbite team's work on Unified Volumetric Rendering, which is actually quite similar to LBP2's. They sure use a lot of scattering lights, and that's one of the key aspects of LBP's surreal look. It would do a lot of good for Dreams, but might be hard to get working at 60fps... let's see.

What part of my post makes you think that I don't understand?
 
What part of my post makes you think that I don't understand?

onQ, you just circled a topic about subsurface scattering in a presentation on volume rendering. There's nothing in what's been said about Dreams' rendering so far that implies any SSS that couldn't be done in other deferred renderers. The things you say commonly sound like those of many forum-goers who relate similar technical concepts without grasping implementation specifics, which make a big difference, especially in real-time applications.
 
onQ, you just circled a topic about subsurface scattering in a presentation on volume rendering. There's nothing in what's been said about Dreams' rendering so far that implies any SSS that couldn't be done in other deferred renderers. The things you say commonly sound like those of many forum-goers who relate similar technical concepts without grasping implementation specifics, which make a big difference, especially in real-time applications.

That wasn't about subsurface scattering; subsurface scattering was just an example of the illumination being more than just a surface function. If you watch the video from a few weeks ago, when Anton was talking, he mentions how the imps have an interior glow that looks like subsurface scattering, but it's not.

1 hour mark
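For anyone wondering how you fake that kind of glow without real SSS (to be clear, this is a generic, well-known trick, not necessarily what Anton described): wrap lighting lets diffuse light bleed past the terminator, which reads as soft translucency.

```python
def wrap_diffuse(ndotl, wrap=0.5):
    """Wrap lighting: shifts the diffuse falloff so surfaces just past the
    light terminator still receive some light, mimicking an SSS-like glow."""
    return max(0.0, (ndotl + wrap) / (1.0 + wrap))
```

With `wrap=0.5`, a surface facing 15 degrees away from the light (where plain Lambert would be black) still picks up some light, so thin shapes look like they glow from within.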
 
onQ, you just circled a topic in a presentation about volume rendering regarding subsurface scattering. There´s nothing about whats been talked about dreams rendering so far that implies any sss that couldn´t be done on other deferred renderers. The things you say comonly sound like that of many forum goers that relate similar technical concepts without grasping implementation specificities, which makes a big difference, especially in real time aplications.

That wasn't about subsurface scattering; subsurface scattering was just an example of the illumination being more than just a surface function. If you watch the video from a few weeks ago, when Anton was talking, he mentions how the imps have an interior glow that looks like subsurface scattering, but it's not.

You just repeat words.
 
I was watching the PlayStation VR panel at PSX last night and they've got a guy from MM on it. He says they're experimenting with PSVR but that they don't have anything that's ready to show yet.

Sounds like a given that this will have some compatibility with PSVR.
 