Current Generation Games Analysis Technical Discussion [2023] [XBSX|S, PS5, PC]

They may keep the simplified assets we see in the reflections resident in RAM, while the fully detailed assets behind the camera are evicted.
An interesting test on the PC version would be to mod the turning speed to make it much faster and see what happens.
 
You still need the triangles and textures to be present when the ray is in the final bounding box. The performance does not make sense otherwise.

But they don't need to be the full-resolution versions; they could load extremely basic assets and then, when a ray hits, load the higher-quality version.

Either way, the PC version will make for an interesting test point. It won't happen, but I would love to see a built-in benchmark tool.
 
But they don't need to be the full-resolution versions; they could load extremely basic assets and then, when a ray hits, load the higher-quality version.
What I mean is the latency just isn't there for the performance to make sense. The performance of ray tracing going out to system memory (or VRAM) is already cost-prohibitive versus staying in cache; having to then go to an SSD, with all the latency that implies, would be catastrophic for performance.
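Rough, order-of-magnitude latency figures make the point concrete. The numbers below are assumptions for illustration, not measurements, but they show why per-ray trips to an SSD don't add up:

```python
# Back-of-envelope comparison of memory tiers for per-ray asset fetches.
# Latency figures are rough order-of-magnitude assumptions, not measurements.
LATENCY_NS = {
    "L2 cache": 10,
    "VRAM": 400,
    "System RAM (over PCIe)": 2_000,
    "NVMe SSD": 100_000,
}

FRAME_BUDGET_NS = 16_666_667  # one 60 fps frame

for tier, ns in LATENCY_NS.items():
    # How many *serial* dependent fetches would fit in one frame at this latency.
    fetches_per_frame = FRAME_BUDGET_NS // ns
    print(f"{tier:>24}: {ns:>7} ns -> ~{fetches_per_frame:,} serial dependent fetches/frame")
```

Even with heavy parallelism hiding some of this, a tier that only allows a few hundred serial dependent fetches per frame cannot sit on the hot path of ray traversal.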
 
Footage is from version 1.09 and 1.09.1 on all three consoles.

Timestamps:
00:00 - Ray Tracing Mode
10:02 - 1.09 Prioritize Frame Rate Mode (Performance Mode)
18:06 - 1.09 Prioritize Quality Mode

The ray tracing mode on PS5 and Xbox Series X features ray traced shadows and ray traced ambient occlusion. Xbox Series S doesn't have a ray tracing mode.

PS5 and Xbox Series X in ray tracing mode use a dynamic resolution with the highest resolution found being 2880x1620 and the lowest resolution found being 1536x864. Pixel counts at or below 1920x1080 seem to be uncommon on both consoles. PS5 can have a resolution advantage over Xbox Series X, such as here https://bit.ly/3MJkfiJ where PS5 renders at 2880x1620 and Xbox Series X renders at approximately 2560x1440. There are areas where both consoles render at the same resolution, though, such as here https://bit.ly/3MIZBPT where both PS5 and Xbox Series X render at 2880x1620.
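For reference, those dynamic-resolution bounds work out as a fairly wide pixel-count swing (simple arithmetic on the figures quoted above):

```python
# Pixel counts for the quoted dynamic-resolution bounds.
def pixels(w, h):
    return w * h

hi = pixels(2880, 1620)        # upper bound found: 4,665,600 px
lo = pixels(1536, 864)         # lower bound found: 1,327,104 px
native_4k = pixels(3840, 2160) # 8,294,400 px

print(f"upper/lower ratio: {hi / lo:.2f}x")        # ~3.52x swing
print(f"upper bound vs native 4K: {hi / native_4k:.4f}")  # 0.5625, i.e. 56% of 4K
```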

The time of day can noticeably affect performance in the ray tracing mode. This can be seen when comparing 0:10 to 8:36. The dynamic resolution at 9:00 and 9:40 on Xbox Series X gets stuck at the lower bounds for some reason and this gives the Xbox Series X a frame rate advantage over the PS5 in these parts of the video.

The frame rate drop on Xbox Series X at 5:40 in ray tracing mode wasn't seen in other modes or on other consoles.

Performance on version 1.09 has improved compared to the version 1.02 test here: • Elden Ring PS5 vs... except for PS5 in prioritize frame rate mode (performance mode) which performs similarly with 1.09 compared to 1.02. The resolution bounds on all three consoles with 1.09 are unchanged compared to 1.02. The performance improvements seen in 1.09 may have been from an earlier patch.

In performance mode on 1.09 the average frame rate stat for PS5 and Xbox Series X is similar. However, the performance between these two consoles varies per scene, with PS5 having a higher frame rate in some scenes and Xbox Series X having a higher frame rate in other scenes.

Stats: https://bit.ly/424GzcG
Frames Pixel Counted: https://bit.ly/43cFNLH
 
PS3 was 371.2 GFLOPS.
Please tell me, where did you get that info?
It was thoroughly outclassed by a high end PC at its release.
To be fair, the 8800 GTX released November 8, 2006 and had 345.6 GFLOPS. Only a month later, on December 11, the 8800 GTS was released and had 416 GFLOPS. :) Don't know about ATI.
Everything the Cell was superior at relative to a CPU, a high end GPU thrashed it.
I will disagree with the word "everything".

On the other hand the PlayStation 4 held up pretty well.
Disagree. The PS4 GPU was 2-3 times behind high-end PC. That's not even close to the Xbox 360 and PS3 era. :)

If you class the 7950GX2 as a 'single' graphics card then it took 7 months for 360 to be outclassed on the PC side.
No way, that's "cheating". :)
If you don't class the 7950GX2 as a 'single' graphics card, then it was nearly a year until the PC outclassed the 360, with the release of the 8800 GTX.
Exactly.
 
Please tell me, where did you get that info?

To be fair, the 8800 GTX released November 8, 2006 and had 345.6 GFLOPS. Only a month later, on December 11, the 8800 GTS was released and had 416 GFLOPS. :) Don't know about ATI.

I will disagree with the word "everything".
Cell - 179.2 GFLOPS
RSX - 192 GFLOPS

Why do you disagree? The strength of Cell relative to desktop CPUs was FP processing. GPUs were much faster in that area.
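The figures being traded here all fall out of the same theoretical formula: execution units × clock × FLOPs per unit per cycle. A quick sketch, treating the commonly cited specs below as assumptions:

```python
# Theoretical single-precision GFLOPS = units * clock_GHz * FLOPs/unit/cycle.
# The unit counts and clocks below are the commonly cited specs; treat them
# as assumptions rather than authoritative figures.
def gflops(units, clock_ghz, flops_per_cycle):
    return units * clock_ghz * flops_per_cycle

cell_spes = gflops(7, 3.2, 8)     # 7 active SPEs, 4-wide SIMD FMA -> 179.2
gtx_8800  = gflops(128, 1.35, 2)  # G80: 128 SPs, MAD per clock -> 345.6
gts_g92   = gflops(128, 1.625, 2) # G92-based 8800 GTS -> 416.0

print(cell_spes, gtx_8800, gts_g92)
# Cell (SPEs) + RSX = the 371.2 GFLOPS PS3 figure quoted upthread:
print(cell_spes + 192)
```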

Edit - the G92 based 8800 GTS was released a year later in December 2007.
 
If you don't class the 7950GX2 as a 'single' graphics card, then it was nearly a year until the PC outclassed the 360, with the release of the 8800 GTX.

I dunno, the 7800 GTX 512 was a pretty potent card for the time and had more grunt in many respects than Xenos. Where it fell down was its older feature set and lack of unified shaders, which meant it could be bottlenecked in games that required more vertex shader throughput than it had to offer. That said, it should have been pretty comparable in performance for the games of the time.

Then 2 months after the X360 launched, the Radeon X1900XTX released, which was an absolute monster of a card that comfortably outclassed Xenos in pretty much every respect. It did still come with the DX9 feature set and non-unified shaders, but it should have had more than enough grunt to overcome those shortcomings in most cases. They called Xenos "the shader monster", but the X1900XTX was pushing around twice the FLOPS in total (although only around 25% more in vertex-shader-limited scenarios).

Obviously, with the release of the 8800 GTX, all such caveats were put to bed.
 
Pretty sure that in all of the Ratchet "we load X on demand" talk by non-technical team members, what they're referring to is higher-resolution mip levels being loaded on demand. Triangles aren't that big on disk anyway; textures are, and what you lose if you're a few ms too late with the load is just texture resolution rather than the whole game breaking.
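A minimal sketch of that idea, with hypothetical names and a stand-in for the async disk load: the renderer computes the mip it wants, samples the coarsest resident mip that covers it, and queues the missing one, so a late load only costs sharpness, never correctness.

```python
# Minimal on-demand mip streaming sketch (hypothetical names, stand-in load).
# Convention: mip 0 is full resolution; higher indices are smaller mips.
import math

class StreamedTexture:
    def __init__(self, mip_count):
        self.mip_count = mip_count
        self.resident = {mip_count - 1}  # only the tiny tail mip is always loaded

    def desired_mip(self, distance, texel_density=1.0):
        # Farther surfaces can get away with smaller (higher-index) mips.
        raw = int(math.log2(max(distance, 1.0) * texel_density))
        return min(self.mip_count - 1, max(0, raw))

    def sample_mip(self, distance):
        want = self.desired_mip(distance)
        # Fall back to the coarsest resident mip that still covers the request.
        have = min(m for m in self.resident if m >= want)
        if want not in self.resident:
            self.resident.add(want)  # stand-in for queueing an async SSD load
        return have

tex = StreamedTexture(mip_count=12)
first = tex.sample_mip(distance=4.0)   # late: falls back to the resident tail mip (11)
second = tex.sample_mip(distance=4.0)  # the requested mip (2) is now "loaded"
print(first, second)
```

The point the post is making lives in `sample_mip`: a miss degrades gracefully to a blurrier mip instead of stalling the frame.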

edit: and I should note I don't remember seeing any instances of texture pop-in when I played Ratchet, quite impressive!
 
To be fair, it's already in use in R&C when you switch worlds on the fly. But they have said that, since release, they could make these transitions even faster after working on it.
 
To be fair, it's already in use in R&C when you switch worlds on the fly. But they have said that, since release, they could make these transitions even faster after working on it.

In R&C the two worlds are the same each time you use a crystal, for example. Here I don't know how they will manage it, but if they let the two Spider-Men be in two arbitrary places, they can't preload a bit of the data like in R&C.
 
They said that during the prologue sequence, when there are multiple switches between levels, they load the entire levels and not just parts of them, even though it's an on-rails sequence.
 
They said that during the prologue sequence, when there are multiple switches between levels, they load the entire levels and not just parts of them, even though it's an on-rails sequence.

Yes, but they can preload a little, same as when you use a crystal. After that, for sure it will get faster. In R&C Rift Apart they were only able to push 5 GB/s of uncompressed data.
 
In R&C the two worlds are the same each time you use a crystal, for example. Here I don't know how they will manage it, but if they let the two Spider-Men be in two arbitrary places, they can't preload a bit of the data like in R&C.
I reckon they'll do a snappy (but slow enough) transition. Something spidery, like loads of web strands hiding/blurring the screen; it switches, then they pull the webs away. We've seen how quick Spider-Man fast travel loads; it really needs only a couple of seconds on PS5, and maybe they'll keep a chunk of memory for key assets for the other character to keep switching quick.
 
I reckon they'll do a snappy (but slow enough) transition. Something spidery, like loads of web strands hiding/blurring the screen; it switches, then they pull the webs away. We've seen how quick Spider-Man fast travel loads; it really needs only a couple of seconds on PS5, and maybe they'll keep a chunk of memory for key assets for the other character to keep switching quick.

It was a little more than 1 second, but they said they were CPU-bound because they needed to improve the CPU code, and they reached only 5 GB/s of uncompressed data, as they said in the Spider-Man post-mortem. They probably needed to improve the CPU code after R&C Rift Apart.

After that, I suppose the two Spider-Men will always be in RAM.
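As a rough sanity check, that 5 GB/s uncompressed figure lines up with load times of about a second. The working-set size below is an assumption purely for illustration:

```python
# Rough load-time arithmetic for the 5 GB/s uncompressed throughput quoted
# from the Rift Apart talk. The working-set size is a made-up illustration.
UNCOMPRESSED_GBPS = 5.0

def load_time_s(working_set_gb, throughput_gbps=UNCOMPRESSED_GBPS):
    return working_set_gb / throughput_gbps

# A hypothetical ~6 GB slice of RAM refilled for the destination level:
print(f"{load_time_s(6.0):.2f} s")  # 1.20 s, in the "little more than 1 second" range
```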
 