Digital Foundry Article Technical Discussion [2020]

Yeah, agreed. Although the consoles are proportionally more powerful this time, they don't have the significant RAM advantage.

The only wild card is the SSDs, but I think the PC will overtake them quickly
In terms of CPU and GPU, consoles will end up closer than ever to high-end PCs at the end of the gen. I suspect by 2027 or so we will have ~1.7x the CPU performance and ~2.5x the GPU performance of what's available today.
 
I don't see the SSD changing that, as by the end of this generation SSDs and Direct Storage, with some form of non-CPU-based decompression like RTX IO, are likely to be mainstream in all serious gaming PCs, so the playing field will be equalised in that regard by then.
I guess I would debate this here. PC has been slow to change. DX12-based engines are really only now starting to arrive, and engines built around GPU dispatch, whose features have been around forever, are only now finally here. It took a long time for developers to want to leave DX11 behind, and I feel this I/O push might be exactly the same issue. AMD needs to pick it up; AMD and Nvidia have to have similar or comparable performance. How many GPUs can support it, etc.? All of that will determine how quickly developers will support those feature sets, imo.
 
In terms of CPU and GPU, consoles will end up closer than ever to high-end PCs at the end of the gen. I suspect by 2027 or so we will have ~1.7x the CPU performance and ~2.5x the GPU performance of what's available today.

Even at only a 50% increase every 2 years you'd still be well over 3.5x faster on the GPU front after 7 years. And that's from a starting position of, let's say, 1.5x, meaning over 5x the performance after 6 years.
 
Even at only a 50% increase every 2 years you'd still be well over 3.5x faster on the GPU front after 7 years. And that's from a starting position of, let's say, 1.5x, meaning over 5x the performance after 6 years.

I don't think we will see 50% every 2 years. I think it will decrease to somewhere around 30%. I was using the 3090 as the starting point.
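For what it's worth, here's how those compounding assumptions play out when written down. It's just arithmetic on the figures being thrown around above (50% vs 30% every 2 years, a rough ~1.5x starting lead), not a measurement of anything:

```python
# Compound the "per 2 years" GPU growth rates discussed above.
# The rates and the ~1.5x starting lead are assumptions from this thread.

def compounded(biennial_growth, years):
    """Total growth factor after `years` at a fixed rate per 2 years."""
    return (1.0 + biennial_growth) ** (years / 2.0)

start_gap = 1.5  # rough high-end-PC-over-console lead assumed above
for rate in (0.50, 0.30):
    growth = compounded(rate, 6)
    print(f"+{rate:.0%} per 2 years over 6 years: x{growth:.2f} growth, "
          f"~{start_gap * growth:.1f}x total vs today's consoles")
```

At 50% that lands a bit over 5x (matching the estimate above); at 30% it's closer to 3.3x.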
 
I guess I would debate this here. PC has been slow to change. DX12-based engines are really only now starting to arrive, and engines built around GPU dispatch, whose features have been around forever, are only now finally here. It took a long time for developers to want to leave DX11 behind, and I feel this I/O push might be exactly the same issue. AMD needs to pick it up; AMD and Nvidia have to have similar or comparable performance. How many GPUs can support it, etc.? All of that will determine how quickly developers will support those feature sets, imo.

But game engines dependent on high-speed I/O will be driven by the consoles this generation. And thanks to it being the baseline for the XSX, which will have a version of almost every major game released for the PC, Direct Storage should become pretty prevalent on the PC within a couple of years, I'd expect. I take your point around non-CPU-based decompression though; this really comes down to whether that's a fundamental component of Direct Storage, of which RTX IO is simply Nvidia's implementation, or whether it's a custom Nvidia solution like DLSS that requires specific game support. If it's the latter then I agree the take-up will be minimal or, at best, very slow.
 
In terms of CPU and GPU, consoles will end up closer than ever to high-end PCs at the end of the gen. I suspect by 2027 or so we will have ~1.7x the CPU performance and ~2.5x the GPU performance of what's available today.

We have absolutely no idea about that; we can't look that many years into the future and assume hardware stagnates that much. They are already rather far from a high-end PC today, and in ray tracing even further.

Seven years ago we didn't see RT coming this fast, nor DLSS, or close-to-40TF GPUs for that matter. Zen 3 has a rather large IPC improvement over Zen 2, GDDR6X made its entry with 800GB/s and higher happening, and RTX IO is said to deliver 14GB/s and up.
Things are still moving very fast, even now. Hell, in many aspects the 2020 consoles are pretty mid-range, just as the 2013 ones were. Except for the SSD, then (still the slowest part).
 
In terms of CPU and GPU, consoles will end up closer than ever to high-end PCs at the end of the gen. I suspect by 2027 or so we will have ~1.7x the CPU performance and ~2.5x the GPU performance of what's available today.
I think you are overestimating the starting point here a bit. An RTX 3090, for example, is already just about 2x the raster/compute performance of the XSX GPU in a real-world scenario (Gears 5, based upon the one metric that exists). RT, I assume, will be an even larger difference.

I think this gen will see less growth in GPU performance over time, as RDNA's performance is not based wholly around harder-to-utilise async compute; it is meant to have much more easily extractable performance. At the same time, due to last gen's GCN, engines are already geared to extract a lot of performance at the outset, as GPU threading optimisations are really commonplace now. Hence how we see amazingly linear scaling between the consoles in Frostbite, Call of Duty, etc. games.
I think the big challenge this gen will be asset creation and some novel uses of hybrid rendering that will surprise us.
 
I think you are overestimating the starting point here a bit. An RTX 3090, for example, is already just about 2x the raster/compute performance of the XSX GPU in a real-world scenario (Gears 5, based upon the one metric that exists). RT, I assume, will be an even larger difference.

I think this gen will see less growth in GPU performance over time, as RDNA's performance is not based wholly around harder-to-utilise async compute; it is meant to have much more easily extractable performance. At the same time, due to last gen's GCN, engines are already geared to extract a lot of performance at the outset, as GPU threading optimisations are really commonplace now. Hence how we see amazingly linear scaling between the consoles in Frostbite, Call of Duty, etc. games.
I think the big challenge this gen will be asset creation and some novel uses of hybrid rendering that will surprise us.
I worded my post poorly. I should have specified I'm using the RTX 3090/10900K as a starting point. Using the Series X GPU as a starting point, I think we will have around 4x the performance by 2027 or so.
 
...
I think this gen will see less growth in GPU performance over time, as RDNA's performance is not based wholly around harder-to-utilise async compute; it is meant to have much more easily extractable performance. At the same time, due to last gen's GCN, engines are already geared to extract a lot of performance at the outset, as GPU threading optimisations are really commonplace now. Hence how we see amazingly linear scaling between the consoles in Frostbite, Call of Duty, etc. games.
I think the big challenge this gen will be asset creation and some novel uses of hybrid rendering that will surprise us.
That is something I also mentioned a while ago: GCN needs optimized code to really get everything out of it.
The question now is, is RDNA really that much faster (at the same TF) if the code is optimized?
 
I worded my post poorly. I should have specified I'm using the RTX 3090/10900K as a starting point. Using the Series X GPU as a starting point, I think we will have around 4x the performance by 2027 or so.

You may be right, but if I were a betting man I'd say you're underestimating by quite a bit. 30% every 2 years is extremely slow performance growth and unprecedented in GPU history.

Last generation, for example, according to TPU, in the 6.5 years between Kepler's and Turing's launches performance grew just over 4x (GTX 680 -> RTX 2080 Ti) at 1080p alone. That's well over 30%, or even 50%, every 2 years, and our most recent data point, 2080 Ti -> 3090, sits at 54% in 2 years at the more appropriate 4K. If the 680 -> 2080 Ti measurement were taken at 4K, or even the middle ground of 1440p, the growth would be far higher than 4x.

It's also likely all those data points were taken at each GPU's launch, which means you're also significantly underselling its performance increase, since you're measuring at the point when games haven't yet started to take advantage of the new architectural features, versus the old GPU that has had time to mature and for games to take advantage of its full potential. For example, Ampere's performance advantage over Turing should grow with time as games make more use of RT and its async RT/Tensor/CUDA capabilities, similar to how Maxwell extended its lead over Kepler when compute became more heavily used.
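If it helps, converting those figures into an equivalent "per 2 years" rate makes the comparison with the 30%/50% assumptions explicit. A tiny sketch, using just the TPU-derived numbers quoted above (~4x over 6.5 years, and 54% over 2 years):

```python
# Convert a total performance multiple over N years into an equivalent
# growth rate per 2 years. Inputs are the TPU figures quoted above.

def biennial_rate(total_multiple, years):
    return total_multiple ** (2.0 / years) - 1.0

print(f"GTX 680 -> RTX 2080 Ti, ~4x over 6.5 years: "
      f"~{biennial_rate(4.0, 6.5):.0%} per 2 years")
print(f"RTX 2080 Ti -> RTX 3090, 1.54x over 2 years: "
      f"~{biennial_rate(1.54, 2.0):.0%} per 2 years")
```

Both work out to the low-to-mid 50s percent per 2 years, well above the 30% figure.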
 
I worded my post poorly. I should have specified I'm using the RTX 3090/10900K as a starting point. Using the Series X GPU as a starting point, I think we will have around 4x the performance by 2027 or so.
Oh whoops! Then yeah, I think you are maybe right! :mrgreen:
Unless we get some novel chiplet-like GPUs at some point in the next 5 years!
 
You may be right, but if I were a betting man I'd say you're underestimating by quite a bit. 30% every 2 years is extremely slow performance growth and unprecedented in GPU history.

Last generation, for example, according to TPU, in the 6.5 years between Kepler's and Turing's launches performance grew just over 4x (GTX 680 -> RTX 2080 Ti) at 1080p alone. That's well over 30%, or even 50%, every 2 years, and our most recent data point, 2080 Ti -> 3090, sits at 54% in 2 years at the more appropriate 4K. If the 680 -> 2080 Ti measurement were taken at 4K, or even the middle ground of 1440p, the growth would be far higher than 4x.

The 780 Ti or the first two Titans would be a better comparison, as they launched before the consoles as well and are a closer match to the RTX 2080 Ti in terms of die size. You will not get a similar die-size increase from the 3090 going forwards as you did from the 680, and even though Samsung 8nm does not appear to be as good as TSMC 7nm, it is unlikely that future processes will offer the same speed/power advancements as we got coming down from 28nm. This puts a lot of the burden on architectural improvements.
 
You may be right, but if I were a betting man I'd say you're underestimating by quite a bit. 30% every 2 years is extremely slow performance growth and unprecedented in GPU history.

Last generation, for example, according to TPU, in the 6.5 years between Kepler's and Turing's launches performance grew just over 4x (GTX 680 -> RTX 2080 Ti) at 1080p alone. That's well over 30%, or even 50%, every 2 years, and our most recent data point, 2080 Ti -> 3090, sits at 54% in 2 years at the more appropriate 4K. If the 680 -> 2080 Ti measurement were taken at 4K, or even the middle ground of 1440p, the growth would be far higher than 4x.

It's also likely all those data points were taken at each GPU's launch, which means you're also significantly underselling its performance increase, since you're measuring at the point when games haven't yet started to take advantage of the new architectural features, versus the old GPU that has had time to mature and for games to take advantage of its full potential. For example, Ampere's performance advantage over Turing should grow with time as games make more use of RT and its async RT/Tensor/CUDA capabilities, similar to how Maxwell extended its lead over Kepler when compute became more heavily used.

Shrinks are harder to come by and provide less and less improvement. See the meager ~15% power-efficiency improvement Ampere offers over Turing. With the 3090 already using almost 400 watts, where is the path to any worthwhile improvement in the next few years? 5nm monolithic GPUs aren't coming anytime soon. Looking at the Kepler-to-Turing timeframe in isolation isn't useful; we need to look at the timeframes leading up to that to see the trend of consistently decreasing performance increases over a given time span.
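To put rough numbers on that power-ceiling argument: if performance scales roughly as board power times perf/W, and perf/W only improves ~15% per generation while power stays capped near 400W, the compounding looks like this. A sketch only; the ~15% figure is from the post above, while the flat power cap and the +25% alternative are illustrative assumptions:

```python
# Rough illustration of the power-ceiling argument: perf ~ power x (perf/W).
# The ~15% perf/W gain per generation is the figure from the post above;
# the flat ~400W cap and the +25% power alternative are illustrative only.

def perf_multiple(generations, eff_gain=0.15, power_growth=0.0):
    return ((1.0 + eff_gain) * (1.0 + power_growth)) ** generations

print(f"3 gens, power held flat:  ~{perf_multiple(3):.2f}x")
print(f"3 gens, power +25%/gen:   ~{perf_multiple(3, power_growth=0.25):.2f}x")
```

With power held flat, three generations of 15% efficiency gains compound to only about 1.5x; anything beyond that has to come from more power, bigger dies, or architecture.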
 
Could DLSS produce good results for VR? Or would artifacts be too visible with the screen so close to the eyes? Or would there be too much latency?
 
I think the big challenge this gen will be asset creation and some novel uses of hybrid rendering that will surprise us.
GPU and CPU cooperating very tightly on hybrid rendering (shared memory) in a way that's simply not possible at the same latencies/bandwidths on PC appears to be the big opportunity.

The 780 Ti or the first two Titans would be a better comparison, as they launched before the consoles as well and are a closer match to the RTX 2080 Ti in terms of die size. You will not get a similar die-size increase from the 3090 going forwards as you did from the 680, and even though Samsung 8nm does not appear to be as good as TSMC 7nm, it is unlikely that future processes will offer the same speed/power advancements as we got coming down from 28nm. This puts a lot of the burden on architectural improvements.
FinFET unlocked nodes smaller than 28nm. Extreme ultraviolet lithography unlocks nodes smaller than 7nm.

Progress with 5 and 3nm is going to surprise :)

But it's true, until Volta (815mm²), everyone (well, seemingly) thought that GPUs were limited to about 600mm².

Within 2 years chiplets should change the face of GPUs as they have done with CPUs. Fingers crossed.
 
The 780 Ti or the first two Titans would be a better comparison, as they launched before the consoles as well and are a closer match to the RTX 2080 Ti in terms of die size. You will not get a similar die-size increase from the 3090 going forwards as you did from the 680, and even though Samsung 8nm does not appear to be as good as TSMC 7nm, it is unlikely that future processes will offer the same speed/power advancements as we got coming down from 28nm. This puts a lot of the burden on architectural improvements.

I used the 680 as it represented the start of the Kepler generation, 6.5 years before Turing's launch. The Titan launched a year later and so would be more comparable to the Titan RTX, but yes, that would change the dynamic quite a bit.

However, as I mentioned above, the performance deltas from TPU are likely much lower than the real deltas, given the tested resolution and the inability to show how performance improves over time as games take advantage of new GPU architectures and features. I'd expect that to more than make up for the delta of using the Titan instead of the 680.

Take this for example:

https://wccftech.com/amds-fine-wine-in-action-gcn-radeon-gpus-decimate-nvidia-kepler-maxwell-cards/

Even a 980 (one generation newer than Kepler in the same performance category) is over 3x faster than the 680 in a modern game at 1080p. So a 2080Ti would clearly be well over 6x faster here, probably more at 4k.

Even compared with the 780 Ti, the 980 is 2.5x faster, meaning the 2080 Ti would be over 5x faster, again at a mere 1080p.

Although the above is just one example, and likely a fairly extreme one, similar results can be seen in many modern games over at DSOG, which regularly benchmarks the 680 against the 2080 Ti at 1080p, with performance increases of well over 5x being common. Here's their latest review showing a 5.77x increase in Star Wars: Squadrons at 1080p.

https://www.dsogaming.com/pc-performance-analyses/star-wars-squadrons-pc-performance-analysis/
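Running those "fine wine" multiples through the same per-2-year conversion as earlier gives a sense of scale. A sketch only: the ~6x figure is the multiple inferred above from the wccftech numbers, the 5.77x is the DSOG result, and 6.5 years is the 680-to-2080 Ti launch gap used earlier in the thread.

```python
# Equivalent per-2-year rates for the 680 -> 2080 Ti multiples cited above,
# over the same ~6.5-year launch gap used earlier in the thread.

def biennial_rate(total_multiple, years):
    return total_multiple ** (2.0 / years) - 1.0

for label, multiple in (("~6x (inferred from the wccftech example)", 6.0),
                        ("5.77x (Star Wars: Squadrons, DSOG)", 5.77)):
    print(f"{label}: ~{biennial_rate(multiple, 6.5):.0%} per 2 years")
```

Both land in the low 70s percent per 2 years, which is the point about launch-day benchmarks underselling the long-run gains.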
 
Shrinks are harder to come by and provide less and less improvement. See the meager ~15% power-efficiency improvement Ampere offers over Turing. With the 3090 already using almost 400 watts, where is the path to any worthwhile improvement in the next few years? 5nm monolithic GPUs aren't coming anytime soon. Looking at the Kepler-to-Turing timeframe in isolation isn't useful; we need to look at the timeframes leading up to that to see the trend of consistently decreasing performance increases over a given time span.

Certainly a good point on the power usage. Even consoles are falling victim to that. Comparing performance further back than the start of this generation becomes almost impossible though given the changes in supported features and standard rendering resolutions. Even going back as far as Kepler is probably stretching the realms of making sense.

What game or resolution would you use to compare a GF3 to an RTX 3090, for example?
 