PS4 PlayRoom Using Real-Time Ray Tracing?

I know I'm not going crazy. I was just watching a live stream of someone playing with the new Toy Maker DLC for the PlayRoom & the toys he was making were reflecting the light from his ceiling & casting perfect shadows.


What is this witchery?
 
Reflection mapping (take the video texture and project it onto the reflecting object).
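
Roughly, in illustrative Python/numpy (a minimal sketch of the idea; the function names and the crude planar mapping are my own, not anything from the actual game):

import numpy as np

def reflect(view_dir, normal):
    # Mirror the (normalised) view direction about the surface normal:
    # r = v - 2 (v . n) n
    return view_dir - 2.0 * np.dot(view_dir, normal) * normal

def shade_point(view_dir, normal, video_frame):
    # Use the camera feed as a stand-in environment map and index it by
    # the reflection vector. A real implementation would pick a proper
    # environment parameterisation; this planar mapping is just a sketch.
    r = reflect(view_dir, normal)
    h, w, _ = video_frame.shape
    u = int((0.5 + 0.5 * r[0]) * (w - 1))
    v = int((0.5 - 0.5 * r[1]) * (h - 1))
    return video_frame[v, u]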

Have you actually looked at the game when someone is playing with the Toy Maker in a bright room with the lights reflecting off of the toys & everything casting shadows?


The toys & robots even reflect & cast shadows on each other.
 
The toys & robots even reflect & cast shadows on each other.
That's different to what you initially described. So I went ahead and found some reference material (not sure why you didn't do this?) and the reflections are certainly sophisticated...


[embedded video, 1:19]

Raytracing seems the most obvious solution, and the simplicity of the scene would certainly help that.
 
That's different to what you initially described. So I went ahead and found some reference material (not sure why you didn't do this?) and the reflections are certainly sophisticated...


[embedded video, 1:19]

Raytracing seems the most obvious solution, and the simplicity of the scene would certainly help that.

I didn't say much different from my 1st post.

& there weren't any videos at the time that I posted the thread because most people were doing live streams of the new DLC.



This video doesn't show it as well as the stream I was watching last night, with the guy's ceiling lights reflecting on the toys, but at around 2:56 you can see the couch reflecting off the balloon.


It's not perfect, but this looks really cool.
 
I didn't say much different from my 1st post.
The first post only mentioned reflecting lights from the ceiling and perfect shadows. Those don't need raytracing.

& there weren't any videos at the time that I posted the thread because most people were doing live streams of the new DLC.
You didn't look. Your post is 5 days after the video I linked to was posted on YouTube. ;)

This video doesn't show it as well as the stream I was watching last night, with the guy's ceiling lights reflecting on the toys, but at around 2:56 you can see the couch reflecting off the balloon.
Room reflections on their own can be handled with a reflection map. The game doesn't have any understanding of the room geometry (or even perspective), so it isn't raytracing the space. It's just computing reflections from surface normals, and given the known projection of the camera, that can be mapped accurately. Lights and shadows can be simulated from the room environment easily enough given enough cues (big light in the background!), but your video shows this isn't happening, and the bot shadows point in a random direction unrelated to the scene.
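
For what it's worth, a minimal sketch of that "known projection" mapping in Python/numpy (the intrinsics values and names are made up for illustration; assumes the shaded point and reflected direction are already in camera space with z pointing into the scene):

import numpy as np

# Hypothetical pinhole intrinsics: focal lengths fx, fy and principal
# point (cx, cy). Nothing to do with the actual PS4 camera.
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])

def reflected_pixel(surface_point, reflected_dir, t=1.0):
    # Step a short distance along the reflected ray, then project that
    # point through the pinhole model to get the video-frame pixel whose
    # colour the surface should "reflect".
    p = surface_point + t * reflected_dir
    uvw = K @ p
    return uvw[:2] / uvw[2]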

The only bit of interest is the bots reflecting themselves and each other.
 
There was a nice article and post here from the guy who did the Pool/Billiards game for PS3 that had very nice results. They may be using a similar technique here.
 
This is so cool

Found some videos where someone else is doing something similar to this.





https://www.ims.tuwien.ac.at/projects/rayengine

High-Quality Real-Time Global Illumination in Augmented Reality
Research project in the area of Virtual and Augmented Reality.

Keywords: Augmented Reality, High-quality rendering, Photorealistic rendering, Visual coherence.

About this Project
In this research we aim to examine the use of high-quality rendering techniques in augmented reality scenarios. We use ray-tracing based rendering to increase visual coherence between real and virtual objects while still achieving interactive frame rates. The video of the real scene is captured by a camera and composited on the fly with virtual objects to produce the final result. We use modern massively parallel GPUs capable of covering the high computational cost of ray-tracing based rendering techniques. Our plan is to evaluate user experience with high-quality rendering in an augmented reality application.

Additional Information
Differential Irradiance Caching for Fast High-Quality Light Transport Between Virtual and Real Worlds

Fast and realistic synthesis of real videos with computer-generated content has been a challenging problem in computer graphics. It involves computationally expensive light transport calculations. We present an efficient algorithm for diffuse light transport calculation between virtual and real worlds called Differential Irradiance Caching. Our algorithm produces a high-quality result while preserving interactivity and allowing dynamic geometry, materials, lighting, and camera movement. The problem of expensive differential irradiance evaluation is solved by exploiting the spatial coherence in indirect illumination using irradiance caching. We enable multiple bounces of global illumination by using Monte Carlo integration in GPU ray-tracing to evaluate differential irradiance at irradiance cache records in one pass. The combination of ray-tracing and rasterization is used in an extended irradiance cache splatting algorithm to provide a fast GPU-based solution for indirect illumination. The limited information stored in the irradiance splat buffer causes errors for pixels on edges in the case of depth-of-field rendering. We propose a solution to this problem using a reprojection technique to access the irradiance splat buffer. A novel cache-miss detection technique is introduced which allows for a linear irradiance cache data structure. We demonstrate the integration of differential irradiance caching into a rendering framework for Mixed Reality applications capable of simulating complex global illumination effects.
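
For anyone skimming: the differential rendering these papers build on boils down to adding the difference between two synthetic solutions (with and without the virtual objects) to the camera frame. A toy composite step in illustrative Python/numpy, not the authors' code:

import numpy as np

def differential_composite(camera_frame, with_virtual, without_virtual, virtual_mask):
    # Where virtual objects cover the frame, show the full synthetic render;
    # elsewhere add the change they cause in the real scene's lighting
    # (shadows darken the frame, bounce light brightens it).
    delta = with_virtual - without_virtual
    out = np.where(virtual_mask[..., None], with_virtual, camera_frame + delta)
    return np.clip(out, 0.0, 1.0)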



Differential Progressive Path Tracing for High-Quality Previsualization and Relighting in Augmented Reality

A novel method for real-time, high-quality previsualization and cinematic relighting is proposed. The physically based Path Tracing algorithm is used within an Augmented Reality setup to preview high-quality light transport. A novel differential version of progressive path tracing is proposed, which calculates two global light transport solutions that are required for differential rendering. A real-time previsualization framework is presented, which renders the solution with a low number of samples during interaction and allows for progressive quality improvement. If a user requests the high-quality solution for a certain view, the tracking is stopped and the algorithm progressively converges to an accurate solution. The problem of rendering complex light paths is solved by using photon mapping. Specular global illumination effects like caustics can easily be rendered. Our framework utilizes the massively parallel power of modern GPUs to achieve fast rendering with complex global illumination, a depth-of-field effect, and antialiasing.
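
The "progressive" part is essentially a per-pixel running average of radiance samples that gets reset whenever the tracked camera moves. A tiny sketch (illustrative, not the paper's implementation):

import numpy as np

def progressive_accumulate(accum, new_sample, n):
    # Running mean after n accumulated samples: blend the (n+1)-th sample
    # in with weight 1/(n+1). Reset n to 0 on camera movement; let n grow
    # while the view is static so the image converges.
    return accum + (new_sample - accum) / (n + 1)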

High-Quality Reflections, Refractions, and Caustics in Augmented Reality

We present a novel high-quality rendering system for Augmented Reality (AR). We study ray-tracing based rendering techniques in AR with the goal of achieving real-time performance and improving visual quality, as well as visual coherence between real and virtual objects, in the final composited image. A number of realistic and physically correct rendering effects are demonstrated that have not been presented in real-time AR environments before. Examples are high-quality specular effects such as caustics, refraction, and reflection, together with a depth-of-field effect and antialiasing. We present a new GPU implementation of photon mapping and its application to the calculation of caustics in environments where real and virtual objects are combined. The composited image is produced on the fly without the need for any preprocessing step.
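
For reference, the radiance estimate at the heart of photon mapping is a simple density estimation over stored photon hits. A toy gather step in illustrative Python/numpy (assumes a photon tracing pass has already filled the arrays; none of this is the paper's GPU code):

import numpy as np

def gather_radiance(point, photon_positions, photon_powers, radius=0.1):
    # Sum the power of photons that landed within `radius` of the shading
    # point and divide by the area of that disc to estimate radiance.
    d2 = np.sum((photon_positions - point) ** 2, axis=1)
    near = d2 < radius * radius
    return photon_powers[near].sum(axis=0) / (np.pi * radius * radius)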

An evaluation was performed to study how users perceive visual quality and visual coherence with different realistic rendering effects. The results of our user study show that in 40.1% of cases users mistakenly judged virtual objects as real ones. Moreover, we show that high-quality rendering positively affects the perceived visual coherence.



Physically-Based Depth of Field in Augmented Reality

We present a novel method for rendering and compositing video in augmented reality. We focus on calculating the physically correct result of the depth of field caused by a lens with a finite-sized aperture. In order to correctly simulate light transport, ray-tracing is used and, in a single pass, combined with differential rendering to compose the final augmented video. The image is fully rendered on GPUs; therefore, an augmented video can be produced at interactive frame rates in high quality. Our method runs on the fly; no video postprocessing is needed.
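
The thin-lens model they refer to keeps the point on the focal plane fixed while jittering the ray origin across the aperture, so only geometry at the focus distance stays sharp. A sketch in illustrative Python/numpy, assuming a camera at the origin looking down +z with the lens in the xy plane:

import numpy as np

def thin_lens_ray(pinhole_dir, aperture_radius, focus_dist, rng):
    # Point the ideal pinhole ray would hit on the focal plane
    # (approximated as focus_dist along the normalised ray direction).
    focus_point = focus_dist * pinhole_dir
    # Uniformly sample a point on the circular lens aperture.
    r = aperture_radius * np.sqrt(rng.random())
    theta = 2.0 * np.pi * rng.random()
    origin = np.array([r * np.cos(theta), r * np.sin(theta), 0.0])
    direction = focus_point - origin
    return origin, direction / np.linalg.norm(direction)

# e.g. origin, direction = thin_lens_ray(np.array([0.0, 0.0, 1.0]), 0.05, 2.0,
#                                        np.random.default_rng(0))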
 