Yea... but it shouldn't deploy the Scarlett version. So there should be a way to compress the Scarlett one and not the Xbox One versions.
During gameplay the difference should increase a bit, if there isn't another bottleneck (e.g. memory bandwidth).
The corridor of doom is strange, though. If the CPU is somehow limiting here, it would make sense, because then the GPU would not have that much work and the PS5's CPU would "upclock" to its maximum. That could explain why they are both almost on par in that scene.
But I really don't know why this scene is so CPU limited. Doesn't really look like something special is going on there. Maybe it is just one thread limiting there, or an engine inefficiency or something like that.
The Xbox One version should also be built with the same GDK version, as that is shared across all Xbox One/Series consoles.
Might not be the case, because the shot is not taken from the exact same spot. In the PS5 image the character is at least one step nearer, or the camera angle is further away in the Xbox shot. Might just be that RT is simply not done that "far" away ^^. Might be just a step behind the RT border.
How is the "this frame takes 3x longer" figure calculated? Just curious (what scene, for example?).
Did PC receive an update with the release of the "next gen" versions? Maybe it did not get the optimisations found in the new version.
The PC version was the Ultimate Edition to begin with.
*fanboy moment*
He may be onto something though, as it seems the PC RDNA2 GPUs are simply running the original RT implementation in Control that was likely developed with Turing in mind. The consoles obviously have a more custom implementation which is perhaps better tailored to RDNA2's RT shortcomings.
What I can't get over is how crazy fast the 3090 is there. Extrapolating the 2070S performance to faster Turing cards, it would certainly be well in excess of 2x faster than a 2080S.
16% over 21 test samples!
In this case, I am not sure why photo mode would underclock the GPU. On PC, when you turn on photo mode, the game's CPU-related framerate skyrockets as the simulation threads stop entirely. Here I actually think we are looking at a thermal situation on PS5, by means of inference, where the CPU is doing very little and the GPU has all the power it wants for full clocks.
there's no VRS on Control IIRC.
I think that's a bug. Typically VRS doesn't destroy detail that is marked for high quality. It would scale detail really far away instead. Reflection rendering at distance is just fine if you look further back behind it. It's possible they just forgot to include the poster as part of the BVH tree.
The floor by her foot on Xbox is also reflecting red onto the steps, which is a continuation of the reflection by her right arm, but we don't see that on PS5. And on the right side of the stairs near the bottom you can see a red tinge reflected onto the steps on Xbox, and that red tinge is missing on PS5 (or at least extremely difficult to see). So once again, not sure if it's just the angle, or the camera being a bit more forward on PS5. But I don't think this is a byproduct of low vs high. It might just be rendering distance: PS5 is closer, so it rendered, and Xbox is further back. You can see on the right side of the screenshot the red box sticking out that is not present on Xbox. DF does their best to line up the positions, but it may not be perfect.
I think it's just a forgotten texture. Or it's entirely possible, as we'll soon find out, that there are actual differences between the two games in terms of textures etc. (see below)
hmm, an excellent question. If I were to take a stab at it, Ray Tracing is often done by rays per pixel of resolution. The more resolution the less rays you have per pixel.
That's interesting.
As an obvious kinda-informed layman, I have a question about RT I just thought about.
Just like we have dynamic res, VRS, and a whole host of performance optimisations...
What is stopping engines from dynamically changing the number of rays (or whatever quality variable) on a per-frame basis, in order to stay within the frame rate budget? (rough sketch of the idea after this post)
Surely we wouldn’t be able to see the difference while playing. Like I can’t see the difference between various states of dynamic resolution.
Yes it’s Friday in lockdown in London, I may have had a couple of vodka coke zeros. Sue me.
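Something like dynamic resolution's feedback loop, but driving the ray count instead, is presumably what that would look like. A toy sketch of the idea, where every name and number (target, rays-per-pixel bounds, the scaling rule) is made up for illustration and not from any real engine:

```python
# Toy frame-time feedback loop: nudge the ray count the same way
# dynamic resolution nudges the render resolution. All names and
# numbers here are illustrative, not from any real engine.

TARGET_FRAME_MS = 16.6          # 60 fps budget
MIN_RPP, MAX_RPP = 0.25, 2.0    # allowed rays-per-pixel range

def adjust_rays_per_pixel(rpp, last_frame_ms):
    """Scale the ray budget toward the frame-time target."""
    error = last_frame_ms / TARGET_FRAME_MS
    # Over budget -> fewer rays next frame; under budget -> more rays.
    return max(MIN_RPP, min(MAX_RPP, rpp / error))

# Example: a 20 ms frame at 1.0 rays/pixel drops the budget to ~0.83.
print(adjust_rays_per_pixel(1.0, 20.0))
```

The catch, as the replies below get at, is that the denoiser and the lighting effects have a floor: below a certain ray count the result stops being "slightly noisier" and simply doesn't resolve.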
Could the console be switching the main thread to a new core? Or do we lean toward it being an OS/API inefficiency?
It's not just dropping frames like we typically see (which it certainly does do some in places too!), it's the flow of the game being interrupted for several frames in a drop and then recovery, which is uncharacteristic of the way GPU load usually changes. As they say, it's like something's stalling the game for a while.
Could the console be switching the main thread to a new core? Or do we lean toward it being an OS/API inefficiency?
It would be neat if DF could sync their framerate graphs with power draw.
As it doesn't need gameplay logic, photo mode shouldn't tax the CPU at all (compared to gameplay). It should be similar in most cutscenes. McDonald is the one who did the Bloodborne 60fps hack on his PS4 devkit.
Those conditions are ideal to test the GPU + bandwidth of those consoles. But things get really interesting when the CPU has much more work to do: like during gameplay.
hmm, an excellent question. If I were to take a stab at it, Ray Tracing is often done by rays per pixel of resolution. The more resolution the less rays you have per pixel.
Reducing the number of rays too much may not leave enough for your denoiser to work with, and certain effects you're relying on for lighting may not be sufficient to finish. I.e. standard compute/raster-based lighting surpasses the quality of RT if RT drops too low in quality; you may as well choose the traditional path.
On the flip side, to improve the number of rays per pixel and still keep performance high, one could reduce resolution to increase the rays per pixel, rely on a really good denoiser to make up for the lack of rays shot, and then upscale the image to a very high resolution while keeping the performance benefit.
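To put rough numbers on that tradeoff, here is the scaling at a fixed total ray budget per frame; the 50M figure is an arbitrary assumption, just to show the shape:

```python
# At a fixed total ray budget per frame, rays per pixel scale inversely
# with pixel count. The 50M budget is an arbitrary illustrative number.

RAY_BUDGET = 50_000_000  # hypothetical rays per frame

for name, w, h in [("4K", 3840, 2160), ("1440p", 2560, 1440), ("1080p", 1920, 1080)]:
    print(f"{name}: {RAY_BUDGET / (w * h):.1f} rays per pixel")

# 4K: ~6.0, 1440p: ~13.6, 1080p: ~24.1 rays per pixel
```

Which is essentially the "render lower, spend the rays, denoise, then upscale" approach described above: the same budget buys several times more samples per pixel at the lower resolution.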
Well, the data show it is actually the case. Most (but not all) gameplay comparisons (so when the CPU has much more work to do) with similar settings show a performance advantage on PS5 (or near identical performance). And the scenes picked by DF that show XSX performing better were all taken from cutscenes in Hitman 3 and photo mode in Control.
This is true, but the results of a more heavily loaded CPU (and its impact on bandwidth and power available for the GPU) might not necessarily change things in favour of a different platform.
Well, the data show it is actually the case. Most (but not all) gameplay comparisons (so when the CPU has much more work to do) with similar settings show a performance advantage on PS5 (or near identical performance). And the scenes picked by DF that show XSX performing better were all taken from cutscenes in Hitman 3 and photo mode in Control.
Yea, agreed.
Well, we're theorising anyway, but the way I understand it, the big power sucker of RT is the BVH structure. So decreasing the number of rays wouldn't really help there.
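A back-of-the-envelope way to see that: if the per-frame BVH build/refit is a fixed cost that doesn't depend on how many rays you trace, cutting rays only shrinks the traversal term. The millisecond figures below are invented purely for illustration:

```python
# Toy cost model: the BVH build/refit happens once per frame regardless
# of how many rays get traced, so cutting rays only trims the second
# term. All millisecond figures are invented for illustration.

BVH_BUILD_MS = 3.0       # per-frame acceleration-structure update
COST_PER_MRAY_MS = 0.5   # assumed traversal + shading cost per million rays

def rt_frame_cost_ms(million_rays):
    return BVH_BUILD_MS + COST_PER_MRAY_MS * million_rays

print(rt_frame_cost_ms(8.0))  # 7.0 ms with the full ray count
print(rt_frame_cost_ms(4.0))  # 5.0 ms after halving the rays: nowhere near a 2x saving
```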
I just think that if Insomniac managed to offer a 60 FPS RT mode, which was clearly lower quality than the standard RT mode, then other games could have the same.
I just HATE that I have to choose between 60fps and RT in other games. All I do is turn RT on, look at myself, then turn it off, play the game, then go back and forth and it’s all VERY CONFUSING.
It's unintuitive, but, unless the CPU is using specific power-hungry instructions, the max power on PS5 should be reached during cutscenes and other scenes not taxing the CPU (particularly if uncapped).
So loading the CPU and taking power away from the GPU helps the PS5?
Not sure I'm with you here.