Not really, at least not from the R&C demo -- we don't know when it starts, when it stops, or how much it loads.
We have no data; I can counter with the UE5 demo running on an NVMe/RTX 3080Q laptop (a 2070-class dGPU). It was supposedly an SSD-showcasing tech demo, and it ran better on the laptop (higher res).
We had typical data. We had a blog post with the compression ratio, everything seems more detailed, and we have a real example in a game, Ratchet and Clank...
The compression ratio of RTX IO is 2:1. That's perfectly in line with the XSX's BCPACK. I strongly suspect that's not coincidental.
Although at the disk I/O level, ones and zeroes are still being moved at up to 7 GB/s, the decompressed data stream at the CPU level can be as high as 14 GB/s (best-case compression). Add to this that each I/O request comes with its own overhead: a set of instructions for the CPU to fetch resource x from file y and deliver it to buffer z, along with instructions to decompress or decrypt the resource.
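The arithmetic behind those figures is just raw disk bandwidth multiplied by the compression ratio. A minimal sketch, using the 7 GB/s / 2:1 numbers above, plus the PS5's often-quoted ~5.5 GB/s raw and ~4:1 best-case ratio as an assumed comparison point:

```python
# Effective decompressed throughput = raw disk bandwidth x compression ratio.
# Numbers below are the best-case figures quoted in the thread, not measurements.
def effective_throughput(disk_gbps: float, ratio: float) -> float:
    """Decompressed data rate seen downstream of the decoder, in GB/s."""
    return disk_gbps * ratio

print(effective_throughput(7.0, 2.0))  # RTX IO best case: 14.0 GB/s
print(effective_throughput(5.5, 4.0))  # PS5 best-case claim: 22.0 GB/s
```

Both "best case" figures collapse quickly with real data, since the ratio depends entirely on how compressible each asset set is.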
Not even on paper, as PS5's best case is 22 GB/s. Besides, there are no bottlenecks on PS5, while there are still plenty on PC + RTX IO. In practice, PS5 I/O will still be quite a bit faster. Currently the loading times are so quick (from 0.8 s to 1.6 s), almost too quick, that many people are in denial: "it can't be, the loading must be happening before and after", that kind of thing.

RTX IO is the fastest SSD tech of the bunch now. Yes, it's marketing, but so was Sony's. Ratchet doesn't provide any data, and UE5 doesn't either, but a PCIe NVMe laptop did the same thing at a higher fps.
I would imagine the difference between them would be inconsequential past a point. PCs throughout the generation will have far more RAM and VRAM and could keep more in memory, reducing the need to swap full sets of assets in and out at any given time. And why would they even bother using compute resources for decompression on PS5 if they have a dedicated hardware block? It's already plenty fast. They're going to want to keep all those precious resources for the GPU. Also, with RTX I/O, since the decompression is done on the shader cores, there are plenty of resources to decompress assets, which Nvidia has already stated has a negligible performance hit. It should also scale as more powerful GPUs and faster storage drives are released. There's also a chance that at some point Nvidia and AMD could include a dedicated decompression block right on the GPU as well.
We'll have a better assessment with games like The Witcher 3 with its quite long loading times. It's going to be hard to deny anything then with such a fair comparison.
Also finally don't forget that 22GB/s is using the custom hardware on PS5. They could have even better compression if they used the GPU shaders like RTX IO.
Nvidia said this is best-case compression; it will often be less than this. The compression ratio will vary from level to level depending on the set of textures. There is no such thing as a single compression ratio, even within the same game.
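That point is easy to demonstrate: a compression ratio is a property of the data, not just the codec. A small sketch using zlib as a stand-in (RTX IO and BCPACK use their own formats), with two synthetic asset blobs that compress very differently:

```python
# Same codec, very different ratios: redundant data compresses extremely
# well, noise-like data barely compresses at all. zlib is only a stand-in
# here for whatever codec a real pipeline uses.
import random
import zlib

repetitive = b"grass_tile" * 10_000  # highly redundant "level" data
random.seed(0)
noisy = bytes(random.getrandbits(8) for _ in range(100_000))  # noise-like data

for name, blob in [("repetitive", repetitive), ("noisy", noisy)]:
    ratio = len(blob) / len(zlib.compress(blob))
    print(f"{name}: {ratio:.2f}:1")
```

The repetitive blob comes out at well over 10:1, while the noisy one sits at roughly 1:1, which is why any single quoted ratio is only a best case.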
I like Nvidia's approach, but you can only do so much when most games store all their data together and the CPU and GPU have their own RAM pools. Once games start to support it, it should be an overall win, but it's still a distant step from the simplified, unified architecture of next-gen consoles, which don't have this problem to solve. This is an architectural design trade-off where one model's advantages are the other model's disadvantages, and vice versa.
...
If this takes off, will it result in a proliferation of game patches that re-organize game data to support Nvidia's brand of DirectStorage? What if AMD comes up with a different implementation?
....
I could have done with Nvidia's slide on PC architecture bottlenecks about six months back, for the folks who struggled to understand it (or even believe it).
Unless I'm missing something, RTX I/O only solves half the problem, and it requires a new approach to game data structuring to achieve even that.
But for compressed data read off storage that is for the sole use of the CPU, or is needed by both CPU and GPU (like geometry data, where AI, collision detection and any other interactions are handled by the CPU), you're still waiting on the CPU, which will at least have had some of its load lifted.
I think the conundrum comes down to how games, or rather installers, package data. Even if you have a supported GeForce RTX card (Nvidia's site only shows the GeForce 30xx series, but surely this must include first-generation RTX cards), a PCIe 4.x board and a fast NVMe drive, all of your games ever released still have all their data shoved together in one pack, stored for the CPU to pick apart. You now need GPU-only and CPU-only data stored separately so they can be routed to the appropriate RAM pool right up front.
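The repackaging described above amounts to tagging each asset by destination so GPU-bound data can be streamed straight toward VRAM. A hypothetical sketch; the asset kinds and routing rule here are illustrative assumptions, not any real packer's format:

```python
# Hypothetical pack-splitting: route GPU-only assets separately from
# data the CPU also needs. Asset kinds and the GPU_ONLY set are made up
# for illustration.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    kind: str  # "texture", "geometry", "audio", ...

# Textures are typically GPU-only; geometry may also be needed by the
# CPU for collision/AI, so it stays on the CPU path in this sketch.
GPU_ONLY = {"texture"}

def split_pack(assets):
    gpu, cpu = [], []
    for a in assets:
        (gpu if a.kind in GPU_ONLY else cpu).append(a.name)
    return gpu, cpu

pack = [Asset("rock_diffuse", "texture"),
        Asset("level_mesh", "geometry"),
        Asset("ambient_loop", "audio")]
print(split_pack(pack))  # (['rock_diffuse'], ['level_mesh', 'ambient_loop'])
```

The point of the split is that each list can then be stored contiguously and routed to the appropriate RAM pool without the CPU picking the pack apart first.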
Then why are Nvidia branding this as Nvidia RTX I/O and not just DirectStorage? Perhaps it's just marketing. But some standard needs adhering to for interoperability with the GPU. Maybe this is the equivalent of Nvidia's API extensions for DirectStorage.

My understanding was that DirectStorage is sort of an API / "norm", like Direct3D? So if Nvidia's solution uses DirectStorage, and AMD's too, it should not be a problem for the devs, since they only need to make it work with DirectStorage?
Touché ;-). I know this is in jest, but it's solving half of the problem, though the lifting of the load will help in equal amounts, I'd have thought.

I seem to remember some members struggling to believe something like RTX IO was even possible on a PC.
This may take a while to gain much traction, or support will transition in over time. But you have to start somewhere. Direct3D took a while to gain traction too. I'd expect DirectStorage 1.0 to be the beginning of a more comprehensive solution, which will require further tweaks to the PC's architectural arrangement.

Yes, they've specifically stated that games need to be "Direct Storage enabled" to take advantage of this. The good news is that PCs don't need to be Direct Storage capable in order to run Direct Storage enabled games, so developers can use it without worrying about whether a PC can run it or not. That should help adoption significantly.
It'll be interesting to see how RTX IO handles this, i.e. does everything go through the GPU for decompression first and then get doled out to the CPU or GPU as required? Or does it work as you say, with the CPU handling the decompression of its own data? Nvidia's claims of overhead reduction suggest the former, but even if it's the latter, you're still talking about an 80%+ reduction in the load on the CPU (the typical percentage of streamed game content made up of textures, according to Microsoft).
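The two routings above can be put into toy numbers. A sketch under the assumption quoted in the post (roughly 80% of streamed content is texture data that only the GPU needs); this is back-of-envelope arithmetic, not measured data:

```python
# Toy model: how much decompression work stays on the CPU under the two
# routings discussed above. texture_fraction = share of streamed content
# that is GPU-only texture data (the ~80% Microsoft figure is assumed).
def cpu_share_of_decompression(texture_fraction: float,
                               gpu_decompresses_everything: bool) -> float:
    """Fraction of decompression work left on the CPU (0.0 to 1.0)."""
    if gpu_decompresses_everything:
        return 0.0  # GPU handles all streams, including CPU-bound data
    return 1.0 - texture_fraction  # CPU still decompresses its own data

print(cpu_share_of_decompression(0.8, gpu_decompresses_everything=True))   # 0.0
print(round(cpu_share_of_decompression(0.8, gpu_decompresses_everything=False), 2))  # 0.2
```

Either way the CPU sheds at least the texture share of the work, which is where the "80%+ reduction" figure comes from.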
PCIe 4.x isn't a technical requirement, but without it you're losing a lot of potential bandwidth.

PCIe 4.x isn't a requirement, and yes, Turing is also supported. Time for me to upgrade!
Many of the DirectX APIs have a core set plus a method of extension. You want an API to set the standard but not limit future hardware. Remember, Nvidia- and AMD-specific graphics extensions have been common on graphics cards for many years.

Can you have your own brand of Direct Storage? The whole point of an API like this is that any game that supports it will run on any hardware that supports it, regardless of how it's implemented. I'm sure AMD will have their own implementation of Direct Storage, but I don't expect games to have to cater to one or the other.
I would be really quite surprised if more data was required by the CPU/main RAM than the GPU/VRAM. We know how massive geometry and texture data is. There may be some edge cases, but this surely has to be a win as well.
But there is a question of how much data in existing games is packed optimally for the GPU decompressors.
Yup, and normally this would be a big ask for developers: asking them to change the way data is organised, structured and compressed. If Nvidia had rolled this out in isolation I'd be skeptical of its adoption (speaking as a 2080 Ti owner), but this benefits next-gen consoles as well. Hopefully devs will make the effort to embrace the paradigm shift.

My guess would be none, as Nvidia have already stated a game needs to be Direct Storage compatible to work with RTX IO. Looks like we're looking at a whole new paradigm for developers.