Digital Foundry Article Technical Discussion [2020]


Edit:
1. Choose a platform with an upgrade path.
2. PCIe 4.0
3. 6-core/12-thread CPU, although consider increasing the thread count to 16 (see point 1)
4. 16GB DDR4 3GHz
5. DX12U GPU with 8GB VRAM
 
If the need for very wide access and high levels of frame-by-frame decompression moves over to the PC, then 8 cores might not end up being enough for high-framerate PC junkies.
 
If the need for very wide access and high levels of frame-by-frame decompression moves over to the PC, then 8 cores might not end up being enough for high-framerate PC junkies.

With luck, developers will give PC gamers the option of installing games uncompressed, thus entirely bypassing the need for decompression in either hardware or software. I think most PC gamers are in a position to give up an extra 60% of SSD space to store uncompressed game data vs. giving up 5-6 CPU cores on decompression.
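To put those numbers side by side, here's a minimal sketch of the install-size arithmetic (the 100 GB compressed install is just an assumed example figure, not taken from any specific game):

```python
# Rough install-size arithmetic for shipping assets uncompressed.
# The 100 GB compressed install is an assumed example; the ~60% extra
# space figure comes from the post above.
compressed_install_gb = 100
extra_space_fraction = 0.60   # ~60% more space when stored uncompressed

uncompressed_install_gb = compressed_install_gb * (1 + extra_space_fraction)
print(f"Compressed install:   {compressed_install_gb} GB")
print(f"Uncompressed install: {uncompressed_install_gb:.0f} GB")
# Trade-off: ~60 GB of extra SSD space vs. 5-6 CPU cores spent on runtime decompression.
```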
 
With luck, developers will give PC gamers the option of installing games uncompressed, thus entirely bypassing the need for decompression in either hardware or software. I think most PC gamers are in a position to give up an extra 60% of SSD space to store uncompressed game data vs. giving up 5-6 CPU cores on decompression.

Requiring more storage space makes more sense than requiring 12+ core CPUs and wacky decoder chips. It seems by far the easiest and probably cheapest solution. With raw uncompressed data, you don't even have to rely on compression and everything that comes with it. Maybe even the best quality.

The reason consoles need this compression is because you can't just ship them with 2+ TB worth of SSD, or require users to buy massive external drives to play more than one game.
On PC, it's a bit of a different story. To me, it seems a better solution; you circumvent the whole compression/decompression side of things. SSDs will get cheaper down the line.
 
With luck, developers will give PC gamers the option of installing games uncompressed, thus entirely bypassing the need for decompression in either hardware or software. I think most PC gamers are in a position to give up an extra 60% of SSD space to store uncompressed game data vs. giving up 5-6 CPU cores on decompression.

As primarily a PC gamer who hasn't owned a "current gen" console but who's had 32 GB of memory for the last four years, I'd also support the option for a large decompressed (meaning down to DXTC etc.) pool in main RAM.

Take your time, no need to burst over a few frames; retain data so multiple spins of the mouse don't require multiple reloads and decompressions, etc.

The PC has lots of options at its disposal. If you have lots of cores and lots of RAM and you can't keep the GPU fed with rendering data then, IMO, your engine isn't scalable enough in all the ways necessary.
 
With luck, developers will give PC gamers the option of installing games uncompressed, thus entirely bypassing the need for decompression in either hardware or software. I think most PC gamers are in a position to give up an extra 60% of SSD space to store uncompressed game data vs. giving up 5-6 CPU cores on decompression.
You would be giving up SSD space and reducing the effective amount of data that can be streamed. 4MB of data that compresses to 1.9MB now takes roughly twice the bandwidth to stream from the SSD to the CPU/RAM/GPU. Loading data faster is the goal. It's a more complex problem than just giving up SSD space.
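A quick sketch of that arithmetic (the 4MB/1.9MB figures are from the example above; the 5 GB/s drive speed is an assumed illustrative figure, not a specific product):

```python
# Effective streaming rate when assets are stored compressed vs. raw on the SSD.
# The 4 MB -> 1.9 MB figures are from the example above; the 5 GB/s drive
# speed is an assumed illustrative figure.
raw_size_mb = 4.0
compressed_size_mb = 1.9
ratio = raw_size_mb / compressed_size_mb              # ~2.1:1

drive_read_gbps = 5.0                                 # raw SSD read speed (assumed)
effective_gbps_compressed = drive_read_gbps * ratio   # usable data after decompression

print(f"Compression ratio: {ratio:.2f}:1")
print(f"Stored raw:        {drive_read_gbps:.1f} GB/s of usable data")
print(f"Stored compressed: {effective_gbps_compressed:.1f} GB/s of usable data")
# Storing the same assets uncompressed roughly doubles the read bandwidth
# needed to deliver them in the same time.
```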

Pick your poison. :yep2:
 
You would be giving up SSD space and reducing the effective amount of data that can be streamed. 4MB of data that compresses to 1.9MB now takes roughly twice the bandwidth to stream from the SSD to the CPU/RAM/GPU. Loading data faster is the goal. It's a more complex problem than just giving up SSD space.

Pick your poison. :yep2:

Yes, that's a given. But the discussion, even in the media, has always been framed as "how can a PC keep up with a 4.8GB/s or 8-9GB/s (i.e. compressed speeds) console when NVMe drives currently top out at xGB/s (raw)?". Compressed throughput is rarely factored in when considering effective bandwidth on the PC side (despite decompression regularly being raised as an issue), but you raise a good point that perhaps it should be.

The suggestion I'm making is that with uncompressed data (down to native GPU texture compression formats), there are commercial drives today that can match or exceed the XSX SSD's compressed performance, and by the end of this year there will be drives which come within about 75-80% of the PS5's compressed throughput, which allows you to completely sidestep the decompression bottleneck. For now, until PCIe 5.0 comes along (possibly as soon as next year) and doubles those speeds, I'd argue that's sufficient at the very high end.

But as you say, if compression is factored in then that bandwidth could potentially far exceed either console. The trade-off then is in CPU time for decompression. That trade-off is too high IMO, but to put it in context, by Microsoft's own calculations, using a combination of zlib and BCPACK on a current top-end NVMe drive at 5GB/s would net you around 10GB/s effective throughput but cost around 6 CPU cores in decompression time.
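To make that concrete, here's a minimal sketch of the same back-of-the-envelope maths. The per-core decompression rate is a derived assumption chosen so the quoted 5GB/s -> ~10GB/s / ~6 cores figures line up; it's not a published benchmark.

```python
# Back-of-the-envelope version of the figures quoted above: a ~5 GB/s NVMe
# drive, a roughly 2:1 combined zlib+BCPACK ratio (~10 GB/s effective), and
# ~6 CPU cores of software decompression. The per-core rate is a derived
# assumption, not a measured number.
drive_raw_gbps = 5.0
combined_ratio = 2.0                                  # zlib + BCPACK, ~2:1
effective_gbps = drive_raw_gbps * combined_ratio      # ~10 GB/s delivered to RAM

assumed_core_output_gbps = 10.0 / 6                   # ~1.7 GB/s of output per core (assumed)
cores_needed = effective_gbps / assumed_core_output_gbps

print(f"Effective throughput: ~{effective_gbps:.0f} GB/s")
print(f"CPU cores busy decompressing: ~{cores_needed:.0f}")
```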
 
Requiring more storage space makes more sense than requiring 12+ core CPUs and wacky decoder chips. It seems by far the easiest and probably cheapest solution. With raw uncompressed data, you don't even have to rely on compression and everything that comes with it. Maybe even the best quality.
No. The real-time compression is lossless.

The reason consoles need this compression is because you can't just ship them with 2+ TB worth of SSD, or require users to buy massive external drives to play more than one game.
On PC, it's a bit of a different story. To me, it seems a better solution; you circumvent the whole compression/decompression side of things. SSDs will get cheaper down the line.
It's an efficiency thing. Compressed data 1) loads faster, 2) gives you more storage, and 3) has no downsides. A PC with hardware compression will have more storage, access that data faster, and cost less (it won't need 2x the storage capacity) - in every way better.

PCs don't have it because they are stuck in legacy thinking, perhaps in no small part because the I/O deals with lots of writes as well as reads, and compression can't be used so well for writing data. Consoles have it because they can, and it's a better solution.

I don't think brute-forcing is ever the better solution.
 
All of a sudden more CUs are better than higher clocks now. I wonder why?

According to inside sources at Microsoft, the focus with Xbox One was to extract as much performance as possible from the graphics chip's ALUs. It may well be the case that 12 compute units was chosen as the most balanced set-up to match the Jaguar CPU architecture. Our source says that the make-up of the Xbox One's bespoke audio and "data move engine" tech is derived from profiling the most advanced Xbox 360 games, with their designs implemented in order to address the most common bottlenecks. In contrast, despite its undoubted advantages - especially in terms of raw power, PlayStation 4 looks a little unbalanced by comparison.


PS4 unbalanced... sure...
When has it not been better? Are modern chips going back to higher clocks or more cores?
DF is talking about the pairing between CPU and GPU, not GPU vs. GPU. They never made the claim that more CUs were worse than a higher clock speed, especially if you're going to end up with more TF from increased CUs.
 
All of a sudden more CUs are better than higher clocks now. I wonder why?
No. The XB1 opinion was about system balance, which is a thing. What you posted was DF's opinion at the time. Maybe they've changed their opinion since then, with years of hindsight to consider the impacts of the HW choices?

What exactly is the purpose of your recent (and in part removed) contributions to this thread? It's for discussing DF articles. If you just want to post anti-DF opinions, that doesn't belong on this forum. You're free to disagree with them, to post disagreements with observations/opinions, and even to suggest they aren't reliable if other people cite them elsewhere, but talk in this thread should be about the ideas they float.

I also feel the need to point you to the DF article on XB1's cloud computing promise. DF quite happily produce content challenging consoles. They may not always be right, but it's far from one-sided.
 
Richard mentioned you need a 2070-2080 to future-proof, but I think he's lowballing it by heaps, at least if you want to turn on ray tracing. We saw quite a few PS5 games running RT at native 4K/30fps already, some with graphics vastly superior to the likes of BF5, Metro, Control or Shadow of the Tomb Raider. Yet those cards struggled heaps in those games at native 4K with RT on; only the 2080 Ti can run them at a playable fps. I think he underestimated console optimization and efficiency a lot here; you'd definitely need a 2080 Ti to future-proof, or just wait for Big Navi and the RTX 3000 series.
 
Very hard to say; BF5 and the other titles you mentioned were basically designed without RT in mind ("bolted-on RT", as many called it). Next-generation games are most likely built around and optimized with ray tracing in mind, especially 1st-party exclusives.
Ray tracing really starts with the next-generation consoles; the PC was kind of a proving ground, with only NV supporting it in expensive high-end GPUs. And even then, with later patches, BF5 ran very well with RT, even in 64-player multiplayer matches.

So no, I don't think they have it all wrong; a 2070 and 2080+ for PS5 and XSX respectively doesn't seem like lowballing to me. The UE5 tech demo was supposedly doing rather well (1440p/40fps??) on a laptop 2080, which is close to a 2070.

Aside from that, even the consoles are going to be limited in HW ray tracing. What we saw in the PS5 demos was rather limited RT, with GT being the most obvious example, but only for the car reflections, and it made the game look rather close to GTS. HW RT as seen in today's PC games at high resolutions is going to be taxing.

A 2070 or higher (and the other parts they mentioned) and you're more than good to go. Hell, even a 2060 would be, and that's without even factoring in DLSS 2.0 and future revisions of it. Aside from all of that, scaling these days is extremely efficient; I have no doubt 2070 users will be able to play next-gen games at close to PS5-level detail, resolution and framerate. That is, of course, if the rest of the system is fitted with matching components and devs do some work on the port, etc.

As a side note, thanks DF for the informative and nice article ;)
 
Richard mentioned you need a 2070-2080 to future-proof, but I think he's lowballing it by heaps, at least if you want to turn on ray tracing. We saw quite a few PS5 games running RT at native 4K/30fps already, some with graphics vastly superior to the likes of BF5, Metro, Control or Shadow of the Tomb Raider. Yet those cards struggled heaps in those games at native 4K with RT on; only the 2080 Ti can run them at a playable fps. I think he underestimated console optimization and efficiency a lot here; you'd definitely need a 2080 Ti to future-proof, or just wait for Big Navi and the RTX 3000 series.

This doesn't seem like a particularly good comparison. Without knowing the extent to which RT was used in each of those games relative to each other, to say nothing of how other settings would impact performance, there's no reasonable basis here for comparison.

The closest RT performance comparison we have right now (and this isn't really a reasonable basis for comparison either, but much better than the above) is Richard's statement about an RTX 2060 "comfortably outperforming" the XSX in Minecraft RTX at 1080p with DLSS, with the XSX at native 1080p.

The following video gives some indication of how much performance an RTX 2060 can gain from DLSS in Minecraft RTX at 1080p (spoiler: it's huge), but it still only puts an RTX 2060 into 2080 Super territory. So that leaves you with a 2080 Super "comfortably outperforming" the XSX in Minecraft RTX. And of course we'd expect the XSX to comfortably outperform the PS5 in the same test. That certainly seems to make Richard's suggestion of a 2070-2080 look fairly reasonable, at least in the 2-3 year timescale he's discussing. Later on in the generation you'll no doubt want more.
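For context on why the DLSS gain at 1080p is so large: DLSS 2.0 renders internally at a reduced resolution and reconstructs the output, so the shading and ray workload drops roughly with the internal pixel count. A small sketch of those ratios (the per-mode scale factors are the standard DLSS 2.0 ones; actual frame-rate gains vary by game and aren't a fixed function of pixel count):

```python
# Internal render resolutions DLSS 2.0 uses for a 1080p output and the
# resulting reduction in shaded/ray-traced pixels. Real frame-rate gains
# depend on the game.
output_w, output_h = 1920, 1080
modes = {"Quality": 1.5, "Performance": 2.0}   # per-axis upscale factors

for mode, scale in modes.items():
    internal_w, internal_h = round(output_w / scale), round(output_h / scale)
    pixel_ratio = (output_w * output_h) / (internal_w * internal_h)
    print(f"{mode:12s} renders {internal_w}x{internal_h} "
          f"(~{pixel_ratio:.2f}x fewer pixels than native 1080p)")
```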

 
This doesn't seem like a particularly good comparison. Without knowing the extent to which RT was used in each of those games relative to each other, to say nothing of how other settings would impact performance, there's no reasonable basis here for comparison.

The closest RT performance comparison we have right now (and this isn't really a reasonable basis for comparison either, but much better than the above) is Richard's statement about an RTX 2060 "comfortably outperforming" the XSX in Minecraft RTX at 1080p with DLSS, with the XSX at native 1080p.

The following video gives some indication of how much performance an RTX 2060 can gain from DLSS in Minecraft RTX at 1080p (spoiler: it's huge), but it still only puts an RTX 2060 into 2080 Super territory. So that leaves you with a 2080 Super "comfortably outperforming" the XSX in Minecraft RTX. And of course we'd expect the XSX to comfortably outperform the PS5 in the same test. That certainly seems to make Richard's suggestion of a 2070-2080 look fairly reasonable, at least in the 2-3 year timescale he's discussing. Later on in the generation you'll no doubt want more.

It's an interesting discussion point. As a group, I think we've asked why the investment into RT Cores and Tensor Cores. But if it were plain compute, you'd have to contend with so many other factors, like increasing the bandwidth several times over to ensure all that compute is fed.

The whole DLSS/RT Core approach really reduces the bandwidth footprint required to obtain the results you want, at the cost of a larger die (and likely better performance in specialized cases), but without the added cost of significantly more bandwidth.
 
Later on in the generation you'll no doubt want more.

No doubt about that, in contrast to needing more. In the case of less-than-ideal scaling and optimization, one could drop a setting or two, reduce the resolution somewhat, or fall back on DLSS. A 2070 will last you through a generation. But PC gamers usually want more; that's the thing with the platform. A 3060S or 3070S down the line could give a nice performance-per-price/watt ratio.
Imagine a BF6 designed around next-generation hardware (I strongly doubt BF6 is targeting the 2013 consoles), with RT in mind. DF will be busy comparing and analysing all these platforms, GPUs, etc. ;)
 
This doesn't seem like a particularly good comparison. Without knowing the extent to which RT was used in each of those games relative to each other, to say nothing of how other settings would impact performance, there's no reasonable basis here for comparison.

The closest RT performance comparison we have right now (and this isn't really a reasonable basis for comparison either, but much better than the above) is Richard's statement about an RTX 2060 "comfortably outperforming" the XSX in Minecraft RTX at 1080p with DLSS, with the XSX at native 1080p.

The following video gives some indication of how much performance an RTX 2060 can gain from DLSS in Minecraft RTX at 1080p (spoiler: it's huge), but it still only puts an RTX 2060 into 2080 Super territory. So that leaves you with a 2080 Super "comfortably outperforming" the XSX in Minecraft RTX. And of course we'd expect the XSX to comfortably outperform the PS5 in the same test. That certainly seems to make Richard's suggestion of a 2070-2080 look fairly reasonable, at least in the 2-3 year timescale he's discussing. Later on in the generation you'll no doubt want more.

It will outperform it in RT, that's for sure, but comfortably? That remains to be seen. Also, Richard pinned the PS5 to 2070S level in the best-case scenario, and that's <5% slower than a 2080 at 4K.
 
Richard mentioned you need a 2070-2080 to future-proof, but I think he's lowballing it by heaps, at least if you want to turn on ray tracing. We saw quite a few PS5 games running RT at native 4K/30fps already, some with graphics vastly superior to the likes of BF5, Metro, Control or Shadow of the Tomb Raider. Yet those cards struggled heaps in those games at native 4K with RT on; only the 2080 Ti can run them at a playable fps. I think he underestimated console optimization and efficiency a lot here; you'd definitely need a 2080 Ti to future-proof, or just wait for Big Navi and the RTX 3000 series.
There are quite a few concessions in the PS5's RT implementations.
Where I think they did a great job is that perhaps many people didn't notice where or what the limits are.

Where I think you're wrong is in believing their RT solutions do more for less. I'm pretty sure they're just doing less, much less. I think you'll find that the BF5, Metro, Control and SOTR RT implementations are very thorough, at varying levels of quality, with very few compromises.

I don't want to put @Dictator on the spot, but I believe he will have something coming that goes over the individual games and speaks about their RT implementations; you'll get a better idea then.
 
It will outperform it in RT, that's for sure, but comfortably? That remains to be seen. Also, Richard pinned the PS5 to 2070S level in the best-case scenario, and that's <5% slower than a 2080 at 4K.

As mentioned above, Minecraft RTX was a point of comparison: 30-60fps at 1080p versus a 2080 Ti with a locked 60fps. Ergo, we might infer that the XSX simply has up to half the raw RT performance in really heavy scenes, which would then put it closer to a 2060 level of RT power.
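A quick frame-time version of that inference (bearing in mind the 2080 Ti's 60fps is a cap, so its real frame time could be lower still and the gap larger):

```python
# Frame-time view of the Minecraft RTX comparison above: 30-60 fps on the
# XSX vs. a locked 60 fps on the 2080 Ti. Because 60 fps is a cap, 16.7 ms
# is only an upper bound on the 2080 Ti's frame time, so the ~2x gap in the
# heaviest scenes is a lower bound.
xsx_worst_fps = 30
capped_fps = 60

xsx_frame_ms = 1000 / xsx_worst_fps    # ~33.3 ms
cap_frame_ms = 1000 / capped_fps       # <= 16.7 ms in reality

print(f"Heavy-scene gap: at least {xsx_frame_ms / cap_frame_ms:.1f}x")
```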

Obviously, developers are still grappling with implementations for future (hybrid) use, but those are performance optimizations that should carry through to every platform.
 