Probably the best example I've seen to date, thanks! I've played a half dozen hours or so and have to admit the difference has not been super obvious to me in a lot of cases. When I see some nice lighting and toggle back to "ultra RT" (whatever the non-PT mode is called), it often looks very much the same, which I think is a testament to the original mode. I'll have to keep an eye out for this location and similar ones to check.
These are the cases that most people show, and while it's definitely a nice improvement, it's also a bit frustrating: for the budget they are spending on overdrive mode, you could absolutely add RT shadows to most/all of those other light sources too. From a user perspective, though, it's probably the most consistently obvious visual improvement I see.
Shadows from all light sources in scenes also help with light leaking, thanks to RTXDI.
https://ibb.co/xfPbqsM
https://ibb.co/kH66pW6
Is a 3080 capable of running RT Overdrive at 1080p/30fps with DLSS 2? Got no budget to upgrade...
I'm more than happy to know 4K isn't impossible with DLSS Ultra Performance.
From the look of it, you'll be in a pretty good place with a 3080. 1080p 60 fps might even be in reach with the right settings:
Cyberpunk 2077 Ray Tracing Overdrive on RTX 3080 Benchmarks – SFF.Network
smallformfactor.net
Many of the graphics settings are accessible via Creation Kit's modding interface even on consoles. If you look at graphics mods for Skyrim and Fallout 4, you'll find a 60fps mod for the Xbox One build of Fallout 4 and a bunch of mods that tweak settings for things like draw distance, level of vegetation and so on.
One additional thing to think about WRT Starfield on PC is that users will be able to tweak settings that are unchangeable on console. That can greatly increase or decrease CPU or GPU load.
Not everyone needs or wants to run every setting at max.
Regarding the difference not being super obvious: a lot of gamers don't even know in what direction light will scatter. The first image proves that.
Thanks. The difference is not obvious at larger scales, where the sparse probes really do their best by capturing the diffuse lighting and general look (and where all the small shadows and correct lighting aren't that visible). But, as with high-quality assets, resolution does matter. Once you go from the macro level to the details, that's where per-pixel lighting starts to shine: probes can no longer capture the correct look for the huge variety of possible lighting conditions, and their resolution is no longer sufficient for diffuse shadows and the other details that make lighting realistic. The same principle applies to high-poly assets or textures. You don't need 200K-poly meshes for general large-scale views to look good. Similarly, you don't need 4K textures everywhere, as most of the screen will maintain a 1-to-1 texel-to-pixel density with lower-resolution textures unless you get very close to objects.
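To put the resolution point in toy form: a probe grid interpolates between sparsely placed samples, so anything smaller than the probe spacing just gets averaged away. A minimal sketch of that (my own illustration; the spacing and values are invented, not from the game):

def lerp(a, b, t):
    return a + (b - a) * t

# Irradiance stored at probes placed every 2 metres along a wall.
PROBE_SPACING = 2.0
probe_irradiance = [1.0, 1.0, 0.9, 1.0]   # looks fine at every probe location

def probe_gi(x):
    # Interpolate between the two nearest probes, like a 1D probe-grid lookup.
    i = min(int(x / PROBE_SPACING), len(probe_irradiance) - 2)
    t = x / PROBE_SPACING - i
    return lerp(probe_irradiance[i], probe_irradiance[i + 1], t)

# A thin pipe near x = 3.1 m casts a ~0.2 m wide contact shadow. Per-pixel
# rays see it; the probe grid cannot, because the feature is far smaller than
# the 2 m probe spacing and gets smoothed away by interpolation.
for x in (3.0, 3.1, 3.2):
    print(f"x = {x:.1f} m, probe GI = {probe_gi(x):.2f} (true value dips to ~0.3 under the pipe)")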
You can add RT shadows to most/all of those other light sources, but I guess the advantage of the overdrive mode is less noise and better quality. I'm not sure whether non-ReSTIR RT shadows can handle that many light sources well from an image quality point of view, as that was the main idea behind ReSTIR: improving the signal-to-noise ratio.
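For what it's worth, the core of ReSTIR is resampled importance sampling with a reservoir: out of many candidate lights, each pixel keeps one, chosen with probability proportional to its estimated contribution, so the single shadow ray it eventually pays for is spent on a light that actually matters. A rough sketch of just that resampling step (my own toy code; no visibility, no temporal or spatial reuse):

import random

def estimated_contribution(light, shading_point):
    # Unshadowed estimate: intensity over squared distance.
    d2 = sum((l - s) ** 2 for l, s in zip(light["pos"], shading_point)) + 1e-6
    return light["intensity"] / d2

def pick_light(lights, shading_point, num_candidates=8):
    # Weighted reservoir sampling: stream candidates, keep one with
    # probability proportional to its estimated contribution.
    chosen, w_sum = None, 0.0
    for _ in range(num_candidates):
        cand = random.choice(lights)                    # cheap uniform candidates
        w = estimated_contribution(cand, shading_point)
        w_sum += w
        if random.random() < w / w_sum:
            chosen = cand
    # The full algorithm also carries a resampling weight and reuses
    # reservoirs across neighbouring pixels and previous frames.
    return chosen

# 100 weak lights plus one strong one near the shading point: the strong one
# is picked far more often than 1 in 101.
lights = [{"pos": (i, 3.0, 0.0), "intensity": 0.1} for i in range(100)]
lights.append({"pos": (4.0, 1.0, 0.0), "intensity": 5.0})
print(pick_light(lights, shading_point=(4.0, 0.0, 0.0)))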
Can you elaborate on "a lot of gamers don't even know in what direction light will scatter", please?
In this picture:
You can't tell where the light is coming from in the alley. Many people will not notice that it is wrong. That's the limit of GI light probes, as they are computed for all directions (i.e. over the whole sphere).
In this picture:
It is clear that the sun is coming from the right-hand side of the alley, so the light scatters onto the left side of the alley walls (note the shadows that the pipes are casting on the wall), while the right side is completely blocked from the sun, since the sun is also on the right side.
But if you look at the shadows on the building in the background, the light is obviously coming from a light source (the sun) that is above and slightly to the left, as the shadows go down and to the right. The shadows on the building in the background show the sun in the same above-and-slightly-to-the-left position in both the RT and non-RT shots.
Direct lighting is always correct, even with GI light probes used to add light to the material. The situation where GI light probes break down is when the object is in shadow; that's when you don't know where the light is coming from. Using PT, you can tell, because the light bounce directions are accurate even while the object is in shadow.
So both shots are showing an inconsistency between the outdoor lighting direction and the shaded alley's lighting direction. For consistency, one explanation is that there might be a strong light bounce from somewhere off screen to the right, like a white wall or something.
For GI, you sample a random direction in a hemisphere centered around the surface normal. The original light source's location doesn't matter much (the GI rays bounce twice in random directions in this case). What is crucial is which part of the scene is brighter. As we can see, the part of the scene that's outside is brighter, so when rays travel from the darker part of the scene near the camera towards the brightly lit one, they pick up that brighter lighting at their hit points, which adds light to the darker part of the scene. If a ray hits a pipe or something else in the darker part of the scene on the way, there will be a shadow.
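A toy sketch of that idea (my own illustration, not anything from the game's renderer): the bounce direction is drawn from the hemisphere around the normal, so the estimate ends up depending on how bright the scene is in whatever directions the rays happen to hit, not on where the sun itself is.

import math, random

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cosine_sample_hemisphere(normal):
    # Random direction in the hemisphere around `normal`, biased toward it.
    u1, u2 = random.random(), random.random()
    r, phi = math.sqrt(u1), 2.0 * math.pi * u2
    local = (r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1))
    # Build an orthonormal basis around the normal and go to world space.
    helper = (1.0, 0.0, 0.0) if abs(normal[0]) < 0.9 else (0.0, 1.0, 0.0)
    tangent = normalize(cross(helper, normal))
    bitangent = cross(normal, tangent)
    return tuple(local[0] * tangent[i] + local[1] * bitangent[i] + local[2] * normal[i]
                 for i in range(3))

def scene_brightness(direction):
    # Stand-in for tracing the bounce ray: directions toward the alley mouth
    # (+x here) see the bright exterior, everything else hits dark walls.
    return 1.0 if direction[0] > 0.3 else 0.05

# Indirect light on an upward-facing surface inside the alley: average what
# the bounce rays find. The sun's own direction never enters this estimate;
# only the brightness of whatever the rays hit does.
normal = (0.0, 0.0, 1.0)
samples = [scene_brightness(cosine_sample_hemisphere(normal)) for _ in range(2048)]
print("estimated bounce light:", sum(samples) / len(samples))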
Talking about the fight between probes and per-pixel RT, it seems the future of real-time GI has split in two. One path is the super-high-quality per-pixel RT (compared to other GI methods), which also comes at a super-high cost even with the help of ReSTIR and DLSS. The other is the screen-space probe solutions led by Lumen in Unreal Engine 5. It is more practical, I'd say, but it still provides way better GI detail than conventional world-space probes. AMD seems to be following this path and has released its own solution, GI-1.0, which looks heavily inspired by Lumen, though I haven't seen any games use it yet. Ambient occlusion will probably be an outdated term in the future.
Lumen uses both world-space and screen-space caches. It even has multiple world-space caches (probe grids and surface cards).
That's right, but the world-space probes mostly exist as a fallback for far rays and volumetrics. I also believe that in the Matrix demo they kind of switch to an HLOD solution for distant-field RT. Still, the most important part is the screen-space probes.
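If it helps to picture that near/far split, here's a toy sketch of the general idea (my own illustration, not Lumen's actual code): detailed tracing handles the near field, and rays that would have to travel further than some threshold are handed to a coarse world-space cache instead. All names, thresholds and the fake scene are made up.

FAR_FIELD_START = 10.0  # metres; beyond this, stop tracing in detail

def trace_detailed(origin, direction, max_distance):
    # Stand-in for an accurate short-range trace: pretend there is a wall at
    # x = 5 m that only rays heading toward +x can reach.
    if direction[0] > 0.0:
        t = (5.0 - origin[0]) / direction[0]
        if 0.0 < t < max_distance:
            return {"radiance": 0.2}      # dim indoor wall
    return None                           # nothing hit within range

def sample_world_cache(position, direction):
    # Stand-in for a coarse world-space probe lookup at the far-field entry
    # point: bright sky above, dark ground below.
    return 1.0 if direction[2] > 0.0 else 0.1

def shade_gi_ray(origin, direction):
    hit = trace_detailed(origin, direction, FAR_FIELD_START)
    if hit is not None:
        return hit["radiance"]            # near field: full detail
    # Far field: hand the ray over to the coarse cache instead of tracing on.
    end = tuple(o + d * FAR_FIELD_START for o, d in zip(origin, direction))
    return sample_world_cache(end, direction)

print(shade_gi_ray((0.0, 0.0, 1.0), (1.0, 0.0, 0.0)))   # hits the wall: 0.2
print(shade_gi_ray((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))   # escapes: sky from the cache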