Next-Generation NVMe SSD and I/O Technology [PC, PS5, XBSX|S]

There are a lot of weird behaviors in this demo that are still unaccounted for. The game reads through 65GB in a few minutes while standing still, for example.


I thought it might be loading/evicting data in that scene, so I set up a RAM cache to see how large it was. But I had to stop at 24GB because it seems to read through the entire 40GB of data files over and over.

I have also seen this behaviour in the final game: just sitting still, I see a constant 500 MB/s on the NVMe for no apparent reason. This happens for minutes on end. This screenshot was taken after I had sat still here for about 4 minutes, and it was doing 500-530 MB/s the entire time.

Which makes this seem all the stranger:

[Attachment: FORSPOKEN-DIRECTSTORAGE-1.jpg]


Why would a SATA drive have a higher FPS (implying the NVMe is causing a higher decompression load due to its greater throughput) if the streaming load is within a SATA drive's transfer limit of 550MB/s? Or does the streaming load increase when moving through the world?
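For anyone who wants to reproduce the constant-read observation above independently of the game's own numbers, here is a minimal sketch (not the tool used in these posts) that logs sustained disk read throughput once per second via the Windows PDH performance counters. The counter path and sampling interval are just reasonable defaults, not anything Forspoken-specific.

```cpp
// Minimal sketch: log sustained disk read throughput once per second using the
// Windows PDH performance counters. Build with: cl /EHsc read_monitor.cpp pdh.lib
#include <windows.h>
#include <pdh.h>
#include <pdhmsg.h>
#include <cstdio>

#pragma comment(lib, "pdh.lib")

int main()
{
    PDH_HQUERY query = nullptr;
    PDH_HCOUNTER counter = nullptr;

    // "\PhysicalDisk(_Total)\Disk Read Bytes/sec" aggregates reads across all
    // physical disks; swap _Total for a specific instance (e.g. "1 D:") to
    // watch only the drive the game is installed on.
    if (PdhOpenQueryW(nullptr, 0, &query) != ERROR_SUCCESS ||
        PdhAddEnglishCounterW(query,
            L"\\PhysicalDisk(_Total)\\Disk Read Bytes/sec", 0, &counter) != ERROR_SUCCESS)
    {
        std::fprintf(stderr, "failed to open PDH counter\n");
        return 1;
    }

    PdhCollectQueryData(query);          // prime the counter with a first sample
    for (;;)
    {
        Sleep(1000);                     // sample once per second
        PdhCollectQueryData(query);

        PDH_FMT_COUNTERVALUE value{};
        if (PdhGetFormattedCounterValue(counter, PDH_FMT_DOUBLE, nullptr, &value) == ERROR_SUCCESS)
            std::printf("read throughput: %.1f MB/s\n", value.doubleValue / (1024.0 * 1024.0));
    }
}
```

Running something like this while standing still in-game would show whether the ~500 MB/s is really sustained reads from the game's data files or occasional bursts being averaged out by an overlay.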
 
Finally tried the demo and, as I suspected, the rendering distance for the grass and foliage is insane.

I've loaded up Days Gone, The Witcher 3, Horizon: Zero Dawn and Elden Ring, and none of them draw grass out into the distance anywhere near as far as Forspoken does.

Don't get me wrong, it's actually nice to play a game with loads of dense foliage on screen and not see an obvious line 25 metres in front of me where the foliage render distance stops, but it comes at a cost, as all that alpha is surely killing off memory bandwidth.
This reminds me of FF XV, which at first had a major performance issue caused by the game drawing some animals that were miles away off-screen because of a HairWorks bug.
 
Only if the developers programmed the game to cache as much data in RAM as possible, which is something I wish games did on PC, as so much RAM just sits there doing nothing.

Message to developers: if my RAM is sitting there doing nothing, then please use it.

We'd need to ask more developers about this, but I suspect that an implementation that scalable is not trivial in terms of complexity. This applies to VRAM as well.

This is essentially why, in practice, if games need to fit within a certain memory budget, having more than that yields limited benefits.
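To make the "just use my idle RAM" idea and its hidden complexity a bit more concrete, here is a minimal hedged sketch of a spare-RAM streaming cache: an LRU map of file chunks with a byte budget. All names and sizes are hypothetical. The hard parts a real engine would need on top of this (async prefetch, eviction driven by what the renderer is about to use, shrinking the budget when the OS or other applications want the memory back) are exactly what makes a scalable implementation non-trivial.

```cpp
// Minimal sketch of a spare-RAM streaming cache: an LRU cache of file chunks
// with a fixed byte budget. Chunk size, key layout and budget are hypothetical.
#include <cstdint>
#include <list>
#include <unordered_map>
#include <vector>

struct ChunkKey { uint32_t fileId; uint64_t offset; bool operator==(const ChunkKey&) const = default; };
struct ChunkKeyHash {
    size_t operator()(const ChunkKey& k) const {
        return std::hash<uint64_t>{}((uint64_t(k.fileId) << 40) ^ k.offset);
    }
};

class ChunkCache {
public:
    explicit ChunkCache(size_t budgetBytes) : budget_(budgetBytes) {}

    // Returns the cached chunk, or nullptr if the caller has to read it from disk.
    const std::vector<uint8_t>* find(const ChunkKey& key) {
        auto it = map_.find(key);
        if (it == map_.end()) return nullptr;
        lru_.splice(lru_.begin(), lru_, it->second);   // mark as most recently used
        return &it->second->data;
    }

    // Inserts a freshly read chunk, evicting least-recently-used chunks to stay in budget.
    void insert(const ChunkKey& key, std::vector<uint8_t> bytes) {
        if (map_.count(key)) return;                   // already cached
        used_ += bytes.size();
        lru_.push_front(Entry{key, std::move(bytes)});
        map_[key] = lru_.begin();
        while (used_ > budget_ && !lru_.empty()) {
            used_ -= lru_.back().data.size();
            map_.erase(lru_.back().key);
            lru_.pop_back();
        }
    }

private:
    struct Entry { ChunkKey key; std::vector<uint8_t> data; };
    std::list<Entry> lru_;                             // front = most recently used
    std::unordered_map<ChunkKey, std::list<Entry>::iterator, ChunkKeyHash> map_;
    size_t budget_ = 0, used_ = 0;
};
```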
 
This confirms it is using DS 1.1, as the CPU usage barely increases at all when loading.

If it were using DS 1.0, it would shoot up during loading.
Well, it does increase by about 10%, but we have no other game to compare it to, so it's not like we have a baseline to work with.
 
I'd even trade 25% of my GPU performance if it meant my 8 GB budget could load proper textures suited for a 12 GB budget. If only that were the case.
 
I'd even trade 25% of my GPU performance if it meant my 8 GB budget could load proper textures suited for a 12 GB budget. If only that were the case.

What GPU do you have? The reason I bring it up is that the 25% number is interesting, because it's roughly the performance drop from the 3060 Ti to the 3060, and the release of those products brought up an interesting dilemma in terms of card specifications. For me personally both had issues: the 3060 wasn't fast enough and the 3060 Ti didn't have enough VRAM.
 
Well, it does increase by about 10%, but we have no other game to compare it to, so it's not like we have a baseline to work with.
The CPU use would increase, as the higher decompression rate would also increase the I/O work the CPU needs to do on its side.

In the original presentation on the game, the developers claimed CPU decompression used 40% of a 5950X, so that's 6-7 cores fully loaded just for decompression itself.

We're not seeing anywhere close to that in the game, so I'm taking that to mean the game is using DS 1.1.
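For context on what the 1.0 vs 1.1 argument actually hinges on: the practical difference is where decompression runs. DirectStorage 1.1 added a built-in GDeflate format that the runtime decompresses on the GPU, whereas under 1.0 a title has to decompress on its own CPU threads after the read completes. Below is a minimal hedged sketch of what a 1.1-style request looks like; the function name and all the resources passed in (device, opened file, destination buffer, fence, sizes) are assumptions for illustration, not Forspoken's actual code.

```cpp
// Minimal sketch of a DirectStorage 1.1 read that asks the runtime to
// GDeflate-decompress on the GPU. Under DS 1.0 there is no built-in GPU
// format, so a title would instead read the compressed bytes and decompress
// them on its own CPU threads.
#include <dstorage.h>
#include <wrl/client.h>
#include <cstdint>
using Microsoft::WRL::ComPtr;

void EnqueueGpuDecompressedRead(
    ID3D12Device* device,
    IDStorageFactory* factory,
    IDStorageFile* file,              // from factory->OpenFile(L"...", IID_PPV_ARGS(&file))
    ID3D12Resource* destBuffer,       // buffer large enough for uncompressedSize
    uint32_t compressedSize,
    uint32_t uncompressedSize,
    ID3D12Fence* fence,
    uint64_t fenceValue)
{
    // One queue per priority/source type (a real title creates this once and reuses it).
    DSTORAGE_QUEUE_DESC queueDesc{};
    queueDesc.Capacity   = DSTORAGE_MAX_QUEUE_CAPACITY;
    queueDesc.Priority   = DSTORAGE_PRIORITY_NORMAL;
    queueDesc.SourceType = DSTORAGE_REQUEST_SOURCE_FILE;
    queueDesc.Device     = device;

    ComPtr<IDStorageQueue> queue;
    factory->CreateQueue(&queueDesc, IID_PPV_ARGS(&queue));

    DSTORAGE_REQUEST request{};
    request.Options.SourceType        = DSTORAGE_REQUEST_SOURCE_FILE;
    request.Options.DestinationType   = DSTORAGE_REQUEST_DESTINATION_BUFFER;
    request.Options.CompressionFormat = DSTORAGE_COMPRESSION_FORMAT_GDEFLATE; // GPU decompression path
    request.Source.File.Source        = file;
    request.Source.File.Offset        = 0;
    request.Source.File.Size          = compressedSize;     // bytes on disk
    request.UncompressedSize          = uncompressedSize;   // bytes after GPU decompression
    request.Destination.Buffer.Resource = destBuffer;
    request.Destination.Buffer.Offset   = 0;
    request.Destination.Buffer.Size     = uncompressedSize;

    queue->EnqueueRequest(&request);
    queue->EnqueueSignal(fence, fenceValue);  // signals when the decompressed data is resident
    queue->Submit();
}
```

If a title only used the 1.0 path, the equivalent of the CompressionFormat line wouldn't exist and the decompression cost would show up as exactly the kind of CPU spike being looked for in the posts above.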
 
What GPU do you have? The reason I bring it up is that the 25% number is interesting, because it's roughly the performance drop from the 3060 Ti to the 3060, and the release of those products brought up an interesting dilemma in terms of card specifications. For me personally both had issues: the 3060 wasn't fast enough and the 3060 Ti didn't have enough VRAM.
I have a 3070.
Honestly, yes. I might even end up selling it and getting a 3060 instead. I really have no qualms about playing games at 30-40 fps. I could even get some cash on top of it. I just want high quality textures.
But I hoped that with new-gen tech, VRAM amounts would only be a minor detail. Sampler Feedback Streaming, DirectStorage and all that jazz made me hopeful that intelligent use of an 8 GB buffer could enable next-gen graphics with increased texture fidelity.
If Hogwarts Legacy and Starfield end up the same way, I'm done with this card. I could even downgrade to a 3060 if need be. Or maybe sidegrade to a 6700 XT/6750 XT.
 
The CPU use would increase, as the higher decompression rate would also increase the I/O work the CPU needs to do on its side.

In the original presentation on the game, the developers claimed CPU decompression used 40% of a 5950X, so that's 6-7 cores fully loaded just for decompression itself.

We're not seeing anywhere close to that in the game, so I'm taking that to mean the game is using DS 1.1.
CPU usage does increase, and the dev is on record saying it uses 1.0, and Alex also confirmed that it uses 1.0. Unless it changed right before release, which I guess is possible.
 
It is using 1.0, not 1.1. I went and read the Era topic, and it seems the game is using the CPU for loading. It seems the game is buggy too, with the protagonist not moving at all while hundreds of MB/s are constantly being loaded, like in Alex's example.

 
Can someone try with a SATA SSD? Someone complained about long loading times with a SATA 870 Evo. Then again, maybe it's only a few seconds and, after getting used to near-instantaneous loads, it's the placebo effect. Someone tried with an HDD and it's a nightmare; for example, opening the map takes 30 seconds, because the game is optimized to load data just in time.

Hello, long-time lurker. I joined because I could provide data on this.
I have an 870 (8TB) and the game installed on it.
Here is a video of the initial login:


FYI: I have a 5600X, 16GB DDR4-3200 and a 3060 Ti.
You can also notice the textures not loading... sometimes they load (only if my resolution is set to 1080p or lower), but they unload/reload as soon as I move.
It's difficult to capture because most of the time the better textures don't load at all.
Also, it was captured with HDR on (so the capture is washed out).
 
Which makes this seem all the stranger:

[Attachment: FORSPOKEN-DIRECTSTORAGE-1.jpg]


Why would a SATA drive have a higher FPS (implying the NVMe is causing a higher decompression load due to its greater throughput) if the streaming load is within a SATA drive's transfer limit of 550MB/s? Or does the streaming load increase when moving through the world?
Maybe it keeps lower-res textures for a while longer, and therefore there's less for the GPU to do.
 
I have a 3070.
Honestly, yes. I might even end up selling it and getting a 3060 instead. I really have no qualms about playing games at 30-40 fps. I could even get some cash on top of it. I just want high quality textures.
But I hoped that with new-gen tech, VRAM amounts would only be a minor detail. Sampler Feedback Streaming, DirectStorage and all that jazz made me hopeful that intelligent use of an 8 GB buffer could enable next-gen graphics with increased texture fidelity.
If Hogwarts Legacy and Starfield end up the same way, I'm done with this card. I could even downgrade to a 3060 if need be. Or maybe sidegrade to a 6700 XT/6750 XT.

I've kind of had this debate in the currently locked-down subforum, but my feeling has always been that if you wanted to maintain console+ parity in every aspect this generation, you'd likely need 12GB of VRAM, which is akin to 6GB last cycle. Albeit there might be some edge cases otherwise as well.

The GA104 GPUs basically always had this VRAM issue in my opinion, but otherwise they'd be perfectly suited for this entire generation. I wonder if the possibility of cannibalizing future sales (and their GPUs sold to content creators) factored into Nvidia's decision not to put out 16GB SKUs. The RTX 3060, on the other hand, is too slow, or at least doesn't have enough performance headroom over the console GPUs in all areas.

The 6700 XT is really the closest and cheapest console+ GPU in my opinion. The caveat is that it will likely be worse than the Nvidia options in the likely PC-centric graphics add-ons, specifically RT. On the Nvidia side, the catch-all would be the rumored RTX 4070 12GB config, but even on the more optimistic side it will likely carry a $600 MSRP.

As for Sampler Feedback and DirectStorage, my opinion is that people shouldn't assume they will be leveraged in a way that actually lowers memory usage, even if they theoretically could be. Historically it's rare that on-paper performance/efficiency software advantages get leveraged in that way. Just look at DX12 itself: did it really lower CPU requirements in practice?
 
I've kind of had this debate in the currently locked-down subforum, but my feeling has always been that if you wanted to maintain console+ parity in every aspect this generation, you'd likely need 12GB of VRAM, which is akin to 6GB last cycle. Albeit there might be some edge cases otherwise as well.

The GA104 GPUs basically always had this VRAM issue in my opinion, but otherwise they'd be perfectly suited for this entire generation. I wonder if the possibility of cannibalizing future sales (and their GPUs sold to content creators) factored into Nvidia's decision not to put out 16GB SKUs. The RTX 3060, on the other hand, is too slow, or at least doesn't have enough performance headroom over the console GPUs in all areas.

The 6700 XT is really the closest and cheapest console+ GPU in my opinion. The caveat is that it will likely be worse than the Nvidia options in the likely PC-centric graphics add-ons, specifically RT. On the Nvidia side, the catch-all would be the rumored RTX 4070 12GB config, but even on the more optimistic side it will likely carry a $600 MSRP.

As for Sampler Feedback and DirectStorage, my opinion is that people shouldn't assume they will be leveraged in a way that actually lowers memory usage, even if they theoretically could be. Historically it's rare that on-paper performance/efficiency software advantages get leveraged in that way. Just look at DX12 itself: did it really lower CPU requirements in practice?
I mean, for last gen, historically for most games, even the most recent PS4 ports, 4 GB of VRAM was enough to hold parity with a PS4. So I wonder what is going on now.
 
According to CapFrameX, the performance is the same, but the frame rate increases hugely during the black screen, which lasts longer on the SATA SSD, resulting in a higher average fps.
DSO tried the game and their findings relate to this: frametimes improved a LOT when the game was installed on a SATA SSD, while on NVMe drives frametimes get considerably worse. They recommend playing the game only on SATA SSDs.

So what PCGH has measured is the reduction of fps in combination with an NVMe drive; it's not related to DirectStorage or GPU decompression.

Now, why that game would drop fps on NVMe drives is beyond me, but this game gets so many things so wrong that I stopped wondering about anything and gave up.
For some reason, the game suffered from a lot of stutters when we installed it on both our Samsung 970 Pro Plus and Samsung 980 Pro NVMe SSDs. By moving the game folder to our SATA SSD, we were able to get better frametimes. Below you can find some screenshots that showcase this behaviour. The screenshots on the left are with the game installed on the NVMe SSD. The screenshots on the right are with the game on the SATA SSD. We are not sure what is going on here, however, we currently recommend installing the game on a SATA SSD. Yes, the loading times may be a bit longer but you’ll get a smoother gaming experience.



 
DSO tried the game and their findings relate to this: frametimes improved a LOT when the game was installed on a SATA SSD, while on NVMe drives frametimes get considerably worse. They recommend playing the game only on SATA SSDs.

So what PCGH has measured is the reduction of fps in combination with an NVMe drive; it's not related to DirectStorage or GPU decompression.

Now, why that game would drop fps on NVMe drives is beyond me, but this game gets so many things so wrong that I stopped wondering about anything and gave up.




I have a 13900K and never encountered frametime problems, but I'm using the demo. Them using a 9900K with a 4090 is also certainly questionable.
 
DSO tried the game and their findings relate to this: frametimes improved a LOT when the game was installed on a SATA SSD, while on NVMe drives frametimes get considerably worse. They recommend playing the game only on SATA SSDs.

So what PCGH has measured is the reduction of fps in combination with an NVMe drive; it's not related to DirectStorage or GPU decompression.

Now, why that game would drop fps on NVMe drives is beyond me, but this game gets so many things so wrong that I stopped wondering about anything and gave up.





If CapFrameX is right, this is a measurement problem: the performance stays the same.
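A quick sketch of the measurement issue being described: if a capture window includes a black loading screen that flips frames very fast, the run that spends longer on that screen (the SATA one in these reports) gets a higher average fps even when the in-game frametimes are identical. The frame times below are made up purely to illustrate the arithmetic.

```cpp
// Illustration of how a fast-flipping loading screen inflates average fps.
// All frame times are made-up example numbers, not measured data.
#include <cstdio>
#include <vector>

// Average fps = number of frames / total capture time.
double averageFps(const std::vector<double>& frameTimesMs)
{
    double totalMs = 0.0;
    for (double ft : frameTimesMs) totalMs += ft;
    return frameTimesMs.empty() ? 0.0 : 1000.0 * frameTimesMs.size() / totalMs;
}

int main()
{
    // 5 seconds of identical gameplay at ~16.7 ms per frame (~60 fps)...
    std::vector<double> gameplay(300, 16.7);

    // ...but the SATA run also captured 2 extra seconds of black loading screen
    // rendered at 2 ms per frame (~500 fps), because loading took longer there.
    std::vector<double> sataRun = gameplay;
    sataRun.insert(sataRun.end(), 1000, 2.0);

    std::printf("NVMe run (gameplay only):         %.1f avg fps\n", averageFps(gameplay));
    std::printf("SATA run (gameplay + loading):    %.1f avg fps\n", averageFps(sataRun));
    std::printf("SATA run, loading frames removed: %.1f avg fps\n", averageFps(gameplay));
}
```

Trimming the loading-screen frames out of the capture makes the two runs report the same average, which is what "the performance stays the same" amounts to here.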

 