General Next Generation Rumors and Discussions [Post GDC 2020]

If it wasn't for PSVR, I would not have upgraded to a PS4 Pro. I still have a 1080p TV, so it's not much of an improvement in normal games for me.

This is a good point. Almost the entire focus of the mid-gen consoles this generation has been on increased resolution. There's nowhere to go from 4K, so the allure of a mid-cycle upgrade next gen may be significantly reduced.
 
Honestly, I don't see a point in a mid-gen refresh this time around. These consoles compare to their PC counterparts much better than in 2013, and as others said, there's no 4K jump this time around, so why?
 
*ahem*
Deliberate trolling and baiting will earn at least a temporary site-ban. Do so at your own peril.

Carry on.
 
This is a good point. Almost the entire focus of the mid-gen consoles this generation has been on increased resolution. There's nowhere to go from 4K, so the allure of a mid-cycle upgrade next gen may be significantly reduced.

They could bump ray tracing. These are AMD's first outings; we can only assume that in 4 years it will be much more powerful and efficient.
 
After taking a closer look at D3D12 sampler feedback in the context of texture-space shading, it does not seem very useful to integrate this feature with ray-traced pipelines. There are very few viable cases where developers would want to reuse shading results from previous frames for ray-traced effects.

Overall, I'm not sold on either variable rate shading or sampler feedback being an advantage for high-end graphics, since ray-traced pipelines will likely be the defining benchmarks of the next generation. I see obvious benefits with these features in older, non-ray-traced pipelines: post-processing, stereo rendering, static GI/light maps, rasterization, etc.
 
No, not really ...

If we take reflections as an obvious example and reuse shading results from previous frames via sampler feedback, you'd get inaccurate reflections. There are also failure cases with dynamic global illumination: when the lighting conditions in a scene change quickly, it won't be able to properly compute the change in global illumination.

What if we also want to simulate dynamic scattering/transmittance effects with ray tracing? How is sampler feedback going to be useful if we want detailed caustics from non-static bodies like water waves or moving objects like glass? Will sampler feedback behave nicely with ray-traced dynamic volumetric effects like explosions, or participating media with varying densities over time? There aren't nearly as many opportunities to apply texture-space shading with ray tracing.
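To make the view-dependence point concrete, here's a minimal Python sketch (all names hypothetical, not any real API) of why diffuse texture-space results can be reused across frames while ray-traced reflections can't: a cached value is only valid while everything it depends on is unchanged, and specular/reflection shading depends on the ray direction, which changes every frame.

```python
# Toy texture-space cache keyed by a texel id plus the inputs the
# shading result actually depends on. Diffuse shading depends only on
# surface + lighting state; reflections also depend on the ray direction.

def diffuse_shade(albedo, light):
    # Stand-in for any shading evaluation (also used for the
    # view-dependent case here, just to keep the sketch tiny).
    return tuple(a * light for a in albedo)

cache = {}

def shade_texel(texel_id, albedo, light, ray_dir=None, view_dependent=False):
    if view_dependent:
        # Reflections: the result changes with every camera/ray
        # direction, so a previous-frame entry is almost never reusable.
        key = (texel_id, light, ray_dir)
    else:
        key = (texel_id, light)
    if key not in cache:
        cache[key] = diffuse_shade(albedo, light)
    return cache[key]

# Two frames, camera moved (ray_dir changed), lighting static:
shade_texel(0, (0.5, 0.5, 0.5), 1.0)                    # miss -> shade
shade_texel(0, (0.5, 0.5, 0.5), 1.0)                    # hit: diffuse reused
shade_texel(1, (0.5, 0.5, 0.5), 1.0, (0, 0, 1), True)   # miss
shade_texel(1, (0.5, 0.5, 0.5), 1.0, (0, 1, 0), True)   # miss again: new ray dir
```

The diffuse texel amortizes across frames; the view-dependent one re-shades every frame, which is exactly the failure mode described above for reflections.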
 
No, not really ...

If we take reflections as an obvious example and reuse shading results from previous frames via sampler feedback, you'd get inaccurate reflections. There are also failure cases with dynamic global illumination: when the lighting conditions in a scene change quickly, it won't be able to properly compute the change in global illumination.

What if we also want to simulate dynamic scattering/transmittance effects with ray tracing? How is sampler feedback going to be useful if we want detailed caustics from non-static bodies like water waves or moving objects like glass? Will sampler feedback behave nicely with ray-traced dynamic volumetric effects like explosions, or participating media with varying densities over time? There aren't nearly as many opportunities to apply texture-space shading with ray tracing.
In reflections, wouldn't the obvious use be the shading of the reflected surface?

Combining materials, and perhaps lighting, into a single texture (or a couple) and reusing them.
Non-reflective surfaces further away shouldn't need new shading for each ray or frame.
 
I was surprised to learn that global illumination is the least intensive RT effect. It's by far the most striking one, IMO. If it's performant, I'd like all games to use it like Metro did.
 
I was surprised to learn that global illumination is the least intensive RT effect. It's by far the most striking one, IMO. If it's performant, I'd like all games to use it like Metro did.
Really depends on how it's made.
Without simplified shaders/surfaces, or the use of some kind of probes or sampler feedback, it is ridiculously expensive.
 
Really depends on how it's made.
Without simplified shaders/surfaces, or the use of some kind of probes or sampler feedback, it is ridiculously expensive.

It depends on the light transport algorithm too, and on the primitives used for the representation of the scene (surfels, voxels, SDFs...). Maybe they use some photon mapping, or they can use this too, with or without photon mapping:

https://graphics.pixar.com/library/PointBasedColorBleeding/

https://graphics.pixar.com/library/PointBasedColorBleeding/paper.pdf

This technical memo describes a fast point-based method for computing diffuse global illumination (color bleeding). The computation is 4-10 times faster than ray tracing, uses less memory, has no noise, and its run-time does not increase due to displacement-mapped surfaces, complex shaders, or many complex light sources. These properties make the method suitable for movie production.

The input to the method is a point cloud (surfel) representation of the directly illuminated geometry in the scene. The surfels in the point cloud are clustered together in an octree, and the power from each cluster is approximated using spherical harmonics. To compute the indirect illumination at a receiving point, we add the light from all surfels using three degrees of accuracy: ray tracing, single-disk approximation, and clustering. Huge point clouds are handled by reading the octree nodes and surfels on demand and caching them. Variations of the method efficiently compute area light illumination and soft shadows, final gathering for photon mapping, HDRI environment map illumination, multiple diffuse reflection bounces, ambient occlusion, and glossy reflection.

The method has been used in production of more than a dozen feature films, for example for rendering Davy Jones and his crew in two of the "Pirates of the Caribbean" movies.

https://graphics.pixar.com/library/PointBasedColorBleeding/SlidesFromAnnecy09.pdf

From this slide

Point-based
– little memory, no shader evaluations

Beyond this, lightcuts or a radiosity light transport algorithm can be interesting for specular, for example...

I am not sure path tracing will be the de facto light transport algorithm for realtime.
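The "single-disk approximation" the Pixar abstract mentions can be illustrated with a rough sketch. This is a generic disk-to-point form factor in the spirit of point-based GI, not Pixar's exact production formula; the function name and the area term in the denominator (which keeps the estimate bounded at close range) are my own choices:

```python
import math

def disk_form_factor(p_recv, n_recv, p_surfel, n_surfel, area):
    """Approximate form factor from an oriented surfel disk to a
    receiving point -- the mid-range accuracy level, between full ray
    tracing (near) and SH cluster approximation (far)."""
    d = [s - r for s, r in zip(p_surfel, p_recv)]
    dist2 = sum(c * c for c in d)
    dist = math.sqrt(dist2)
    dir_to_surfel = [c / dist for c in d]
    # Cosine at the receiver (toward the surfel) and at the surfel
    # (back toward the receiver); clamp back-facing configurations to 0.
    cos_r = max(0.0, sum(a * b for a, b in zip(n_recv, dir_to_surfel)))
    cos_s = max(0.0, -sum(a * b for a, b in zip(n_surfel, dir_to_surfel)))
    # Adding 'area' in the denominator bounds the estimate as dist -> 0
    # instead of blowing up like 1/d^2.
    return (area * cos_r * cos_s) / (math.pi * dist2 + area)

# A surfel facing a receiver head-on, at distance 1 and distance 2:
f_near = disk_form_factor((0, 0, 0), (0, 0, 1), (0, 0, 1), (0, 0, -1), 0.1)
f_far  = disk_form_factor((0, 0, 0), (0, 0, 1), (0, 0, 2), (0, 0, -1), 0.1)
```

Multiplying such a form factor by each surfel's stored radiance and summing is what makes the gather cheap: no shader evaluations at gather time, exactly the property the slide highlights.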

From a July 2019 Sony ATG job ad:

Senior Graphics Researcher (12 month contract) - London

PlayStation London, GB


Research and develop graphics techniques like real time ray tracing and point cloud rendering for next generation console platforms.
 
Isn’t texture-space shading (sampler feedback) a good pair with ray tracing?
Texture space shading as a concept is attractive for RT (and any other global lighting approach) because it can serve as a radiance cache to give infinite bounces for free.
Texture space would also give better spatial data for denoising.

But sampler feedback addresses a different problem: it is useful to determine the texels that are fetched for primarily visible geometry, so a visibility-based texture shading technique can focus on what is really needed.
(This is also the kind of texture space shading discussed in older posts of sebbbi's here, as is the related topic of streaming only visible megatexture data, etc.)

Ofc. for a global lighting solution we also want to shade surfaces that are not visible from the camera, so sampler feedback is not needed or useful for that purpose.
But it would make sense to have higher-resolution textures for primary visibility and low res for 'everything', so feedback can be a nice extra feature even when targeting global lighting effects.
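The "infinite bounces for free" point can be shown with a deliberately tiny toy example (numbers and setup hypothetical): if each frame's texture-space shading gathers from the *previous* frame's cached result, doing only one bounce of work per frame still converges to the full multi-bounce solution over time.

```python
# Two diffuse patches facing each other. Each receives direct light E
# plus a fraction F of the other's cached radiance from last frame.
# One gather per frame converges to the infinite-bounce geometric
# series E * (1 + F + F^2 + ...) = E / (1 - F).
E, F = 1.0, 0.5
radiance = [0.0, 0.0]               # last frame's texture-space cache
for frame in range(64):
    radiance = [E + F * radiance[1],   # one bounce of work per patch,
                E + F * radiance[0]]   # reading the previous frame
steady_state = E / (1 - F)          # closed form: 2.0 here
```

The per-frame cost stays at one bounce, yet `radiance` approaches `steady_state`; in a real engine the cache would live in texture space per texel, which is the attraction the post describes.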


The real problem, however, is that a global texture shading/caching solution means heavy changes to engine runtimes, related tools, and artist workflow.
That's the reason, i guess, we won't see it happening. (Probably i am the only one crazy enough to work on this idea...)
Maybe people will use TSS on selected models for selected effects (like subsurface scattering, for example), but i guess the focus will be just on primary visibility approaches like 'shade once, use for both VR views', and mainly streaming.


I realize it's a confusing topic.
To list what 'sampler feedback application', 'texture space shading' (or 'object space shading') could mean, i see these coarse cases, adding RT requirements to them:

* Streaming based on primary visibility. Use lower res mips for RT that are in memory independent of visibility, depending only on camera position and LOD from that.
* Streaming and shading of textures based on primary visibility. Use lower res mips for RT but shade them on the fly.
* Streaming and shading of textures based on primary visibility. Use lower res mips for RT, also shade them to get radiance cache.
* Full texture shading and streaming ignoring visibility, only based on camera distance. No need for sampler feedback. RT does not need to shade hit points, except specular eventually. Maybe impractical because of missing details.
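The first case above can be sketched as simple bookkeeping. Everything here is a hypothetical stand-in (real sampler feedback is a D3D12 GPU feature writing feedback maps; this just models the idea): primary visibility records the finest mip actually touched, the streamer honours that, and ray hits clamp to whatever is resident instead of forcing loads.

```python
# Hypothetical residency model. Mip 0 = finest; a coarse tail of mips
# stays resident regardless of visibility, so off-screen surfaces hit
# by rays always have *something* to sample.
NUM_MIPS = 10
ALWAYS_RESIDENT = 6    # mips 6..9 are always in memory

feedback = {}          # texture id -> finest mip requested this frame

def record_feedback(tex, mip):
    # Primary-visibility pass: remember the finest (smallest) mip touched.
    feedback[tex] = min(mip, feedback.get(tex, NUM_MIPS - 1))

def resident_mip(tex):
    # Streamer loads down to the feedback mip; unseen textures only
    # have the coarse always-resident tail.
    return feedback.get(tex, ALWAYS_RESIDENT)

def rt_sample_mip(tex, desired_mip):
    # A ray hit may want a fine mip but never triggers a load: it is
    # clamped to what primary visibility already made resident.
    return max(desired_mip, resident_mip(tex))

record_feedback("wall", 2)            # wall visible on screen at mip 2
wall_mip = rt_sample_mip("wall", 0)   # reflection ray reuses mip 2
floor_mip = rt_sample_mip("floor", 0) # off-screen: coarse mip 6 only
```

The second and third cases in the list differ only in whether those resident mips are also shaded on the fly or kept as a radiance cache; the clamping logic stays the same.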
 
Technically maybe not, but it saves games from needing to manage and keep their state themselves (for this particular scenario of the system going to sleep, not in general, of course).
 
Was just thinking: with how fast games will load next gen, is suspend and resume even necessary?
They'll probably keep the suspend and resume feature. But PS5 games will also have specific shortcuts to load just the level we want, or even the MP map and such. So it'll make loading even faster.

Personally, this is one of the most interesting things for me. For instance, starting a Monster Hunter World session is incredibly annoying and long, with plenty of useless steps, always the same useless steps, when I always do the same thing.

I want to go to the main city with my main character, simple enough. I don't care about the main screen; I'm playing offline, so I don't care about all the different steps to select an MP party and such. I don't want the game to check at each start whether there is DLC available, etc. (when it's actually never used by the devs themselves). Ugh.
 
Was just thinking: with how fast games will load next gen, is suspend and resume even necessary?

Other than speed, one aspect of making life easier is minimising clicks to get where you want to go. In the XSX suspend scenario I can pick up exactly where I left off in several games in a couple of clicks. It's an ideal that I really hope is in the PS5, because I am not the only user of my console.

Even with quick loading you'd still need to load the game, then the save, and then it won't be exactly where you left off (unless they revamp the save system in games).
 