Digital Foundry Article Technical Discussion [2020]

Status
Not open for further replies.
You may need to update your CPU/motherboard/NVMe drive in a couple of years as well, when proper next-gen games start to hit. Fast storage is the real next-gen innovation.

According to Nvidia's Q&A on RTX IO, the only requirements are an NVMe drive, Windows 10 and, of course, an RTX GPU. Even the slowest NVMe drives should be able to keep up with the XSX with RTX IO (and likely whatever AMD's equivalent will be). So I'd say if you already have an NVMe drive of any speed, you're sitting pretty for next gen. You'll be wanting a good 8-core CPU within a few years though, I'd expect.
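A back-of-envelope check of the "even slow NVMe drives keep up" claim. All figures here are ballpark public numbers (2.4 GB/s raw for Series X, roughly 2:1 compression for both BCPack and Nvidia's stated RTX IO ratio, ~1.7 GB/s raw for an entry-level PCIe 3.0 NVMe drive), so treat this as illustrative rather than a benchmark:

```python
# Effective decompressed throughput = raw read speed x compression ratio.
# Figures below are ballpark public numbers, not measurements.

def effective_throughput(raw_gbps: float, compression_ratio: float) -> float:
    """Decompressed data rate for a given raw disk read speed."""
    return raw_gbps * compression_ratio

# Series X: 2.4 GB/s raw, up to ~4.8 GB/s with BCPack's stated 2:1 ratio
xsx = effective_throughput(2.4, 2.0)

# Entry-level PCIe 3.0 NVMe: ~1.7 GB/s raw, assuming a similar ~2:1 from RTX IO
slow_nvme = effective_throughput(1.7, 2.0)

print(f"XSX effective:       {xsx:.1f} GB/s")
print(f"Slow NVMe effective: {slow_nvme:.1f} GB/s")
```

On those assumptions a budget NVMe drive lands in the same ballpark as the console's raw-plus-compression pipeline, which is the point being made above.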
 
Wow, so I can get a card better than a 2080 Ti for ~£450! Even the 3060, due early next year, is near 2080 Ti performance for ~£300!

Without turning this into a console war kind of thing, it does raise the question of where the XSX will sit - my card is worth ~£150-200, so this is a very tempting upgrade!
By updating your PC you basically get a Series X for free.
 
The 3080 shouldn't have much of an issue doing 1440p/144 in a lot of games. My 2080 Ti already does 3440x1440/120 pretty well, and this will be faster. CPU and memory help to drive high fps as well.
Good to know. I am currently playing Battlefield 1 at 165fps/1440p, which is awesome - on Low settings though.

I could run BF1 at High-Ultra (165fps at 1080p), but on my monitor that non-native resolution looks blurry (1080p and 1440p aren't cleanly related resolutions, unlike 4K/1080p with its perfect 4x ratio), so I prefer to sacrifice graphics settings, and even on Low BF1 looks very nice.

At 1440p and 165fps the smooth framerate also acts as a kind of natural antialiasing. Currently saving money for an Ampere card that runs almost every game at 1440p/165fps.
 

I actually think the XSX is pretty well placed still.

It can decompress zlib with no overhead, so that's still a bonus for what would go into CPU memory on PC. BCPack is 2:1 lossless (curiously similar to Nvidia's stated figures) and that's also overhead free. And while a 3080 probably isn't going to worry too much about using a few percent of its resources in decompressing textures and the like (assuming it's compute based), it's good that Series X and especially Series S have this covered.
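As a toy illustration of the kind of lossless ratios being discussed, Python's built-in zlib shows how strongly the achievable ratio depends on the data: structured, repetitive data easily beats 2:1, while high-entropy data barely shrinks at all. Real game assets sit somewhere in between, which is why 2:1 is quoted as a typical rather than guaranteed figure:

```python
import os
import zlib

# Structured, compressible data vs. high-entropy (incompressible) data.
structured = bytes(range(256)) * 4096   # 1 MiB repeating pattern
noisy = os.urandom(len(structured))     # 1 MiB of random bytes

for name, buf in [("structured", structured), ("random", noisy)]:
    packed = zlib.compress(buf, 6)      # default-ish compression level
    print(f"{name}: {len(buf)} -> {len(packed)} bytes "
          f"(ratio {len(buf) / len(packed):.2f}:1)")
```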

And XSX can still do extra decompression on the GPU if it wants, even after a "free" pass from zlib.

But the main thing in its favour could end up being SFS, so long as it actually gets used. If you're only accessing a few tiles from the current mip level every few frames, that not only reduces the data you're transferring and what's in RAM, but crucially it spreads reads from a mip level out over a much longer time period. That's much less demanding on the IO than loading a full mip level at once.
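A rough sketch of why tile-granular streaming is so much cheaper than whole-mip loads. The sizes assume a 4096x4096 BC7 texture (BC7 averages 1 byte per texel) and the standard 64 KiB D3D12 tiled-resource tile size; the "12 tiles visible" figure is a hypothetical example, not a measured number:

```python
# Bytes moved when streaming only sampled tiles vs. the whole mip level.
TILE_BYTES = 64 * 1024                  # D3D12 tiled-resource tile size

texels = 4096 * 4096                    # 4K texture, mip 0
bytes_per_texel = 1                     # BC7 averages 1 byte per texel
full_mip = texels * bytes_per_texel     # whole mip 0 in bytes

tiles_in_mip = full_mip // TILE_BYTES
tiles_needed = 12                       # hypothetical: tiles sampled this frame

print(f"Full mip 0:         {full_mip / 2**20:.1f} MiB ({tiles_in_mip} tiles)")
print(f"Visible tiles only: {tiles_needed * TILE_BYTES / 2**20:.2f} MiB")
```

Under those assumptions the difference is 16 MiB versus under 1 MiB, and the reads for the rest of the level can trickle in over many frames instead of hitting the IO all at once.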

I'm all aboard PC for next gen, but I'm confident MS have built their "Velocity Architecture" to survive comfortably in the post-DirectStorage world that they themselves have been building*. DS and DX12U have intentionally been developed in parallel, IMO.

[Edit: along with their partners, of course!]


Anyway, I'm really hoping that the PC sees standardised compression formats, because screw having to download AMD or Nvidia specific game packages, or having to decompress then recompress to a specific format at install time!
 
Can't you run at 720p and force integer scaling in the display driver?
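On the integer-scaling suggestion: a quick check of which render resolutions scale cleanly to a 2560x1440 panel. 720p scales by an exact 2x factor, while 1080p does not - which is also why 1080p looks soft on a 1440p display. The resolutions here are just the common examples; the check itself is generic:

```python
# Return the integer scale factor from render to native resolution,
# or None when the scaling isn't exact (i.e. it would need filtering).
def integer_scale(native, render):
    nw, nh = native
    rw, rh = render
    if nw % rw == 0 and nh % rh == 0 and nw // rw == nh // rh:
        return nw // rw
    return None

native = (2560, 1440)
for render in [(1920, 1080), (1280, 720), (2560, 1440)]:
    print(render, "->", integer_scale(native, render))
```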
 
Yes, I’m glad I spent the extra on my SSD now!

Waiting on AMD now...
 

I have a sneaky suspicion that DirectStorage, and thus RTX IO, might be very closely linked with BCPack.

Regarding SFS, I'm curious to understand how it compares with SF (which all DX12U GPUs are capable of) from a practical standpoint. I know the XSX has dedicated hardware to blend between mip levels if required, but I can't imagine it's a show stopper without that; otherwise, what would be the point of SF in the first place?
 

Well, I might be wrong (that's where the thrill is!), but SFS appears to be an XSX-specific extension to SF in DX12U. The goal appears to be to create an easy-to-implement, low-overhead way to bring in the next tile in the mip chain as soon as the system realises it's needed, and also to flush mip levels that are no longer needed. On top of that, it seems to automatically apply some optimised filtering to hide transitions between mip levels, and probably also the seams between tiles of different mip levels when there's a visible disparity.
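On the value of flushing no-longer-needed mip levels: because each mip is a quarter the size of the one above it, the whole chain only costs about a third more than mip 0 alone, so dropping the top level (when the camera moves away) frees roughly three quarters of the texture's footprint. A sketch, assuming a 16 MiB top level (ballpark for a 4K BC7 texture):

```python
# Total bytes in a mip chain: each level is 1/4 the size of the previous one.
def chain_bytes(mip0_bytes: int, levels: int) -> int:
    return sum(mip0_bytes >> (2 * i) for i in range(levels))

mip0 = 16 * 2**20             # 16 MiB top level (4K BC7, as a ballpark)
full = chain_bytes(mip0, 13)  # full chain for a 4096x4096 texture
without_top = full - mip0     # footprint after flushing mip 0

print(f"full chain: {full / 2**20:.2f} MiB")
print(f"drop mip 0: {without_top / 2**20:.2f} MiB")
```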

I would guess that it's a combination of an API selection, some code running on the GPU that's perhaps sandboxed away from the developer, and some custom hardware.

If I'm wrong about any of that, it would be great to hear from someone who really knows!

Anyway, I don't think this is something the PC will particularly miss. XSX has a particular set of circumstances - relatively constrained memory, a fast but not infinitely fast SSD, and features MS want developers to use now rather than in 2 or 3 years (like we're seeing with Turing!). Offering a solution that's easy to use, fast, well tested, and that waits until literally the last frame (+1) to initiate a transfer (so it's never done unnecessarily) seems like a good move.

I mean, something like this will really help with memory management. On XSS it could / should automatically scale to the reduced resolution games will be running at. And that's got to be a help, if devs implement it.

On PC with DX12U I'm guessing you could do something pretty similar, just with more work and probably a bit more overhead. But on PC you might not want to do that, because not everything supports DX12U, and even if it did, given the range of hardware and resources, you might want to do things a little differently anyway.

I just think this is something that makes absolute sense for XSX / S, but that the PC is a different environment. More variety, more considerations, and down the line probably more memory and more power too.

I totally agree that lacking SFS isn't a show stopper. I think it's more that for XSX, it's sort of a show enhancer. But PC is a different ... show.
 
I hope that, with the consoles moving the baseline of features up to DX12U, the PC market and console market both adopt that as the baseline and we move to using these features sooner rather than later.

It did take us quite a long time for GPU dispatch engines to finally arrive. I think we're finally there and it's exciting times. Took a whole gen.
 
Some extra info on the IO performance. Also, 165fps at 1440p might be achievable in almost all games after all.
https://www.techpowerup.com/review/...chitecture-board-design-gaming-tech-software/
 

Although at a disk IO-level, ones and zeroes are still being moved at up to 7 GB/s, the de-compressed data stream at the CPU-level can be as high as 14 GB/s (best case compression). Add to this that each IO request comes with its own overhead—a set of instructions for the CPU to fetch x resource from y file and deliver it to z buffer, along with instructions to de-compress or decrypt the resource

I was pretty sure they'd used the best-case ratio as the lossless compression figure. This is marketing.
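Unpacking the quoted arithmetic: the 14 GB/s figure is just 7 GB/s of raw disk reads multiplied by the best-case 2:1 ratio. The 1.5:1 figure below is a hypothetical average, included only to show how much the headline number moves with the actual ratio:

```python
# Decompressed stream rate = raw disk rate x achieved compression ratio.
def decompressed_stream(disk_gbps: float, ratio: float) -> float:
    return disk_gbps * ratio

print(decompressed_stream(7.0, 2.0))  # best case, as quoted: 14.0 GB/s
print(decompressed_stream(7.0, 1.5))  # hypothetical average: 10.5 GB/s
```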
 
It's marketing. The Ratchet & Clank demo doesn't provide any data at all.

We have some data points showing how long a new level takes to load, which gives some indication, and we'll have a GDC presentation about this. And we have tons of data about Oodle Kraken and Oodle Texture and how they work with different types of textures, from BC1 to BC7.
 