Current Generation Games Analysis Technical Discussion [2020-2021] [XBSX|S, PS5, PC]

I propose a test: load the same save game at different times of day and look for loading-time differences on Xbox.

Geography may matter too, since proximity to data centres could affect it.

I am in no rush to test this; still wrapping up HZD. I may get around to buying RE8 much later down the line.
 
Again: first, loading time involves more than just loading data. Second, SSD speed isn't everything. The PS5 has 1 MB of SRAM cache in the I/O Complex and, it seems, some 2 GB of DDR4 on the SSD controller, while the XSX SSD is cache-less.

https://www.resetera.com/threads/pl...-technical-discussion-ot.231757/post-48562453

https://www.resetera.com/threads/pl...-technical-discussion-ot.231757/post-48565207
Matt, an insider from ResetEra, said that I/O on PS5 is just on another level compared to XSX or PC, though I'm not sure how credible he is.
 
If it's a cloud-check thing, maybe just try loading a save while offline vs. online?

Good thing to try; make sure you are truly in offline mode, not just with the network cable unplugged or Wi-Fi disabled.

https://support.xbox.com/en-US/help/hardware-network/connect-network/using-xbox-one-offline

Set your Xbox to offline
  1. Press the Xbox button on your controller to open the guide.
  2. Select Profile & system > Settings.
  3. Choose General > Network settings.
  4. Select Go offline.
To go back online, repeat the first three steps, and in step 4 choose Go online.
 
After all, even if the XSX ends up loading games 5 s slower than the PS5 from the menu or a save, I don't think it will be the end of the world; we are still talking about loads under 10 s, something consoles haven't seen since games went from cartridges to discs.
The real and interesting test will be when next-gen games have to load big amounts of data on the fly during gameplay.
 
Do you think the PS5 SSD loads at full speed here? Normally, 10 GB takes less than 1 second with compression.

http://cbloomrants.blogspot.com/2020/09/how-oodle-kraken-and-oodle-texture.html

With compression, the PS5 SSD can average 11 GB/s using Kraken and Oodle Texture.
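
To spell out the arithmetic behind that figure (purely back-of-the-envelope; the 2:1 compression ratio is an assumed average for game data, not a measured number):

```cpp
#include <iostream>

int main() {
    // Thread's own figures plus one assumption: ~5.5 GB/s raw PS5 SSD throughput,
    // and an assumed ~2:1 average ratio for Kraken + Oodle Texture on game data.
    const double raw_gb_per_s       = 5.5;
    const double compression_ratio  = 2.0;   // assumption, varies per asset
    const double effective_gb_per_s = raw_gb_per_s * compression_ratio;

    const double payload_gb = 10.0;          // the "10 GB" example above
    std::cout << "Effective throughput: " << effective_gb_per_s << " GB/s\n";
    std::cout << "Time for 10 GB:       " << payload_gb / effective_gb_per_s << " s\n";
    // ~0.9 s, i.e. "10 GB in under a second", assuming the drive is the only bottleneck.
    return 0;
}
```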

You seem to be talking about things you don't understand. Do you know that loading time is not only about loading data? The CPU needs to set up all the level objects, entities, and other data structures. We talk about loading times as if they were only about loading data, but much more is going on.
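
To make that concrete, here is a minimal, entirely hypothetical sketch of what a single "loading screen" usually covers (none of these function names come from a real engine); only the first stage gets faster with a faster SSD:

```cpp
#include <chrono>
#include <cstdio>
#include <vector>

// Hypothetical stand-ins for engine work; the names are made up for illustration.
struct Entity { /* transforms, components, ... */ };

std::vector<char>   read_from_disk()                         { return std::vector<char>(64 << 20); }
std::vector<char>   decompress(const std::vector<char>& raw) { return raw; }
std::vector<Entity> build_level(const std::vector<char>&)    { return std::vector<Entity>(100000); }

int main() {
    using clock = std::chrono::steady_clock;

    auto t0 = clock::now();
    auto raw      = read_from_disk();   // raw I/O: the only part that scales with SSD speed
    auto t1 = clock::now();
    auto data     = decompress(raw);    // CPU-bound unless offloaded to dedicated hardware
    auto t2 = clock::now();
    auto entities = build_level(data);  // object/entity/scene setup: pure CPU work
    auto t3 = clock::now();

    auto ms = [](auto a, auto b) {
        return std::chrono::duration_cast<std::chrono::milliseconds>(b - a).count();
    };
    std::printf("%zu entities | I/O %lld ms, decompress %lld ms, setup %lld ms\n",
                entities.size(),
                (long long)ms(t0, t1), (long long)ms(t1, t2), (long long)ms(t2, t3));
    return 0;
}
```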

And the PC SSD doesn't use DirectStorage. This is one of the reasons the PC loads as fast as the Xbox Series X.
Do you have a benchmark? We don't know this. What is certain is that the XSX SSD is slower than the PC one and yet loads just as fast. That proves something is being done on the XSX side that is not done with the PC drive, and that decompression is not a problem on PC when loading data from a save. This is not streaming; if it uses multiple CPU cores or even all of the CPU's power, that is not a problem.

There comes a point where you need to stop being delusional. The PS5 SSD is faster than the Xbox Series X one. From a streaming perspective there is a coherency engine with cache scrubbers present in the PS5 but not in the Xbox Series X, and some of the work done on the Xbox Series X CPU is done entirely inside the I/O Complex on the PS5.

This is like insisting the Xbox Series X won't be faster at ray tracing than the PS5: in DMC 5 and RE8, ray tracing is a bit faster on XSX than on PS5.

And the PS5 is able to do virtual texturing and partially resident textures in hardware too; this has been available on AMD GPUs since GCN 1, back in 2011. SFS means that if the needed mip level is not in RAM, the XSX can use a lower mip level that is present in memory; on PS5 you would get a big texture pop-in instead, but with the SSD and streaming hardware being faster on the PS5 side, there is less chance this problem arises.
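
As a rough illustration of that fallback behaviour, expressed on the CPU for clarity (this is not the SFS or PRT API, just the idea of sampling the best mip that happens to be resident):

```cpp
#include <array>
#include <cstdio>

// Conceptual only: a streamed texture that tracks which mips are resident and
// falls back to the nearest coarser mip when the requested one isn't loaded yet.
struct StreamedTexture {
    std::array<bool, 12> mipResident{};          // index 0 = highest-resolution mip

    int bestAvailableMip(int requested) const {
        for (int m = requested; m < (int)mipResident.size(); ++m)
            if (mipResident[m]) return m;        // requested mip, or the nearest coarser one
        return (int)mipResident.size() - 1;      // tail mip assumed always resident
    }
};

int main() {
    StreamedTexture tex;
    tex.mipResident[3] = tex.mipResident[11] = true;   // only mip 3 and the tiny tail are in RAM

    std::printf("want mip 0 -> sample mip %d\n", tex.bestAvailableMip(0));  // falls back to 3
    std::printf("want mip 3 -> sample mip %d\n", tex.bestAvailableMip(3));  // exact hit
    return 0;
}
```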

https://www.anandtech.com/show/5261/amd-radeon-hd-7970-review/6

And in DirectX it is called hardware tiled resources:
https://docs.microsoft.com/en-us/windows/win32/direct3d11/tiled-resources

https://docs.microsoft.com/en-us/windows/win32/direct3d11/why-are-tiled-resources-needed-

Then again, some devs don't use it because it needs a round trip back to the CPU.

Loading a game on a non-NVMe SSD:
 
dynamic res would be perfect solution for rt mode
Yeah, that would help. I guess I'm just surprised. Unfortunately, clamping to 60 fps doesn't actually let me see how much both are dropping here, or whether they drop equally. That would really shed some light on the fill-rate discussion between the two platforms: clock vs. bandwidth.
 
dynamic res would be perfect solution for rt mode
VRR should handle those drops much better. The problem with dynamic res is that you lose "native" rendering. Even with CBR, a full 4K CBR image gives you a noticeable sharpness edge over anything upscaled by the GPU using cheap solutions.

This is where I think Spiderman has the edge with their own custom reconstruction + custom upscaling.

With VRR this is where XSX has the biggest advantage IMO.
 
Faster than a PC with a faster SSD and CPU?

The Windows PC I/O stack means that when loading levels in games, an SSD doesn't get anywhere close to its theoretical transfer rates. Take a look at benchmarks of a SATA SSD versus an NVMe SSD for games in general on Windows: NVMe SSDs (3+ GB/s) offer only a very slight advantage in load times compared to SATA SSDs (500-ish MB/s). If the XBS-X is basically just barely faster than a SATA SSD on PC when loading games, then something is very VERY wrong with the system and everyone at MS that worked on the storage subsystem should be fired. :p
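
A toy model of that, with assumed numbers (the per-request overhead is invented purely for illustration), shows why a 7x faster drive barely moves load times when the I/O stack adds a fixed cost per read:

```cpp
#include <cstdio>

int main() {
    // Assumed numbers for illustration only: many small reads, each paying a
    // fixed software-stack cost on top of the actual transfer time.
    const double request_kb  = 64.0;     // typical small read a game might issue
    const double overhead_ms = 0.20;     // assumed per-request I/O-stack overhead
    const double sata_mb_s   = 500.0;
    const double nvme_mb_s   = 3500.0;

    auto effective = [&](double drive_mb_s) {
        const double request_mb  = request_kb / 1024.0;
        const double transfer_ms = request_mb / drive_mb_s * 1000.0;
        return request_mb / ((transfer_ms + overhead_ms) / 1000.0);  // MB/s actually achieved
    };

    std::printf("SATA effective: %.0f MB/s\n", effective(sata_mb_s));   // ~190 MB/s
    std::printf("NVMe effective: %.0f MB/s\n", effective(nvme_mb_s));   // ~290 MB/s
    // The raw 7x gap collapses to well under 2x once the fixed overhead dominates,
    // which matches the small load-time differences seen in PC benchmarks.
    return 0;
}
```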

That said, it's far more likely that RE8 just plain isn't doing a single thing to take advantage of the XBS systems' storage subsystems other than their raw transfer rates. In other words, it's loading data on the XBS systems much the same way it would load data on PC. To put it more bluntly, it isn't taking advantage of anything on the XBS systems that would likely have to be specifically coded for.

Hell, it might even be using a less-than-optimal compression scheme on the XBS systems, choosing something that is more easily shared between the XBS consoles and PC instead.

Regards,
SB
 
VRR should handle those drops much better. The problem with dynamic res is that you lose "native" rendering. Even with CBR, a full 4K CBR image gives you a noticeable sharpness edge over anything upscaled by the GPU using cheap solutions.

This is where I think Spiderman has the edge with their own custom reconstruction + custom upscaling.

With VRR this is where XSX has the biggest advantage IMO.
VRR won't help with drops under 50 fps, and dropping resolution in the rare scenes that need it for a few seconds wouldn't be that noticeable, as even the XSS's interpolated 1440p apparently doesn't look that bad.
 
VRR won't help with drops under 50 fps, and dropping resolution in the rare scenes that need it for a few seconds wouldn't be that noticeable, as even the XSS's interpolated 1440p apparently doesn't look that bad.

That will depend on the user. I can't stand drops with VRR below ~58 fps (with a target of 60 fps). But plenty of NV GPU owners were quite satisfied with drops into the 30s or 40s with a 60 fps target, and when AMD first came out with their VRR implementation, they quite loudly complained about how sad it was that AMD users didn't get the same benefit from VRR, since the VRR range of FreeSync displays (often 45-48 Hz at the low end) was much narrower than that of G-Sync displays.

Regards,
SB
 
In this video you can see that the game adds only 1 GB to RAM when loading from the menu. One gigabyte in five seconds.
Something tells me that SSD speed is not the bottleneck here :-?
 
In this video you can see that the game adds only 1 GB to RAM when loading from the menu. One gigabyte in five seconds.
Something tells me that SSD speed is not the bottleneck here :-?

It does not mean they load only 1 GB; it means 8 GB of memory is in use after loading. Before loading anything, some memory was already reserved. I doubt all 7 GB of it gets replaced, but probably some part does.
 
It does not mean they load only 1 GB; it means 8 GB of memory is in use after loading. Before loading anything, some memory was already reserved. I doubt all 7 GB of it gets replaced, but probably some part does.
I've posted a video above where a SATA SSD (500 MB/s) with an i7-4790K loads from the menu in 4 seconds (at 1080p with ultra settings).
I think in the worst case, for 1440p-4K, it could be 2 gigabytes.
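
For what it's worth, the rates implied by those figures (remembering the RAM delta is only a lower bound on what actually moved through the drive):

```cpp
#include <cstdio>

int main() {
    // Implied sustained read rates from the numbers quoted above.
    std::printf("1 GB / 5 s = %.0f MB/s\n", 1024.0 / 5.0);   // ~205 MB/s
    std::printf("2 GB / 4 s = %.0f MB/s\n", 2048.0 / 4.0);   // 512 MB/s
    // Both are at or below SATA-SSD territory, so the drive itself doesn't look
    // like the limiting factor in these menu loads.
    return 0;
}
```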
 
If that were the case, the XSX load time would be slower than the PC's, because the SSD they used is much better than the Xbox SSD: 3.5 GB/s with 512 MB of DRAM cache, versus 2.4 GB/s with no DRAM cache, like I said.

The PS5 SSD is 5.5 GB/s, with some SRAM used for address translation that can also be used as a cache; cf. the PS5 SSD patent.

edit: typo 512 MB of DRAM not 512 GB.

What's 512 MB of cache going to offer when the hardware is streaming GBs of textures off the SSD during use?
 