Digital Foundry Article Technical Discussion [2020]

Not in real-world game performance; as with GCN/CDNA, the Tflops are not as efficient as Turing's or RDNA1's in games.

We need to wait to learn more about RDNA2.

Depends on what kind of game engine is being used. In the real world, true, but that's why I said technically 3 to 4 times more powerful.
If the Navi 2 rumours of around 25 TF are true, the RDNA2 dGPU would be about 2.5 times more powerful.
 
This is currently the case for all game engines, even ray-traced ones, if we compare to Turing. Afterwards, RT will probably be less efficient on RDNA2 GPUs.

EDIT: And it is rumoured to be 21 Tflops to 22.94 Tflops.
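For anyone wanting to check the multipliers being thrown around, they're simple TF ratios. A quick sketch; the 21-25 TF values are the Navi 2 rumours quoted above, while the console figures are the published PS5/Series X numbers, added here only for reference:

```python
# Plain TF ratios behind the "about 2.5 times more powerful" claim.
# The 21-25 TF values are the rumoured Navi 2 figures from the posts
# above; the console figures are the published PS5/Series X specs.
rumoured_navi2_tf = [21.0, 22.94, 25.0]
console_tf = {"PS5": 10.28, "Series X": 12.15}

for console, base in console_tf.items():
    for tf in rumoured_navi2_tf:
        print(f"{tf:5.2f} TF vs {console} ({base} TF): {tf / base:.2f}x")
```

At the top rumoured figure, that works out to roughly 2.4x the PS5 and 2.1x the Series X on paper, which is where the "about 2.5 times" claim comes from.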
 
Indeed, we will have to see. One thing is for sure: AMD's GPUs will be at least twice as powerful as what's in the consoles, and possibly almost three times, considering their 6900 XT monster.
TF for TF, RDNA2 should be alike.
 
I used the 680 as it represented the start of the Kepler generation, 6.5 years before Turing's launch. The Titan launched a year later and so would be more comparable to the Titan RTX, but yes, that would change the dynamic quite a bit.

However, as I mentioned above, the performance deltas from TPU are likely much lower than the real deltas, given the tested resolution and the inability to show how performance improves over time as games take advantage of new GPU architectures and features. I'd expect that to more than make up for the delta of using the Titan instead of the 680.
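For context on what a "TPU delta" even is: relative-performance summaries like TechPowerUp's are, roughly speaking, an average of per-game FPS ratios at a given resolution. A minimal sketch of that idea with made-up numbers; the geometric mean is a common choice for this, though I'm not claiming it's TPU's exact method:

```python
from statistics import geometric_mean

# Made-up per-game average FPS for two cards at one resolution.
# Illustrates how a single "relative performance" delta is usually
# derived: average the per-game ratios, not the raw FPS totals.
fps_card_a = {"game 1": 142.0, "game 2": 88.0, "game 3": 61.0}
fps_card_b = {"game 1": 97.0, "game 2": 70.0, "game 3": 39.0}

ratios = [fps_card_a[g] / fps_card_b[g] for g in fps_card_a]
print(f"card A is ~{geometric_mean(ratios):.2f}x card B at this resolution")
```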

The part about "real deltas" is a little problematic. Kepler long ago stopped being any sort of priority for driver optimisations, and its performance is surely handicapped by that today. The best time to compare, imo, would be when the old and new tech are both relevant.

The point I was making anyway was mainly about the future, and how it looks more challenging to extract additional performance than it was in the past.
 
Unmaintained drivers are also a major flaw in pieces that compare PS4 Pro versions of games with equivalent PC hardware: by the end of the console's life cycle, the drivers on the equivalent PC will not have been optimised for years, let alone individually tweaked for card-and-game combos the way they are for current-generation products. There's a 2-3 year window around a console launch when the PC isn't handicapped by unmaintained drivers, but on the flip side the console isn't getting the most value out of its single-task-focused design either, as most console-specific tricks tend to show up at the tail end of a generation, with the start of a gen tending towards "existing techniques, but more".

It's why you'll never see solid numbers for how much benefit a console derives from being focused on a single task versus the multi-tasking PC; there are just too many other variables to filter out the noise. Still fun to read/watch, so long as you're not expecting definitive answers either way.
 
Even if this is true, and I'm not commenting on that in any way, it's sort of irrelevant. There are lots of factors that give advantages or disadvantages to different pieces of hardware, but at the end of the day, only achievement matters. If a game can look good and run well on hardware X, but struggles on hardware Y, well that's pretty much the data we are interested in. And if there isn't a user facing solution to fix hardware Y's shortcomings, well that's really the only thing that's important. Hardware Y runs the game slower.
 
Yes, totally agree that the best time to compare is when the old and new tech are both relevant. I'd imagine the sweet spot would be to compare, say, Turing performance to Pascal performance now (at Ampere's launch), given that they're likely both still in support but have also had a chance to stretch closer to their full potential. Although even that has significant challenges, given the many performance-enhancing features available on Turing that aren't yet fully used or can't reasonably be replicated. I imagine that by this time in two years, Turing will be a lot faster compared to Pascal than it is today, even if Pascal were still under full driver support by then.
 
Very thorough job by Richard here.
Mhh...
They just missed one point: testing multiple slower SSDs to see how much bandwidth is really needed for BC games.
But SSD space is still too expensive. There's still no replacement for a 4 or 8 TB drive in sight. SSDs are getting cheaper, yes, but only slowly. When you want more space that doesn't have to be fast, you're still paying several hundred dollars, more than a new console costs. :)
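That bandwidth question can at least be bounded from load-time measurements: effective bandwidth is just data read divided by load time. A rough sketch; every number below is a hypothetical placeholder, to be replaced with measured load times and the amount a given game actually reads:

```python
# Rough effective-bandwidth estimate from load-time measurements.
# All numbers here are hypothetical placeholders; plug in measured
# load times and the amount of data the game reads during loading.
def effective_bandwidth_mb_s(data_read_gb: float, load_time_s: float) -> float:
    return data_read_gb * 1024 / load_time_s

# e.g. a BC title that reads ~4 GB of assets during a load screen:
for drive, seconds in {"internal NVMe": 9.0, "SATA SSD": 12.0, "HDD": 48.0}.items():
    print(f"{drive}: ~{effective_bandwidth_mb_s(4.0, seconds):.0f} MB/s effective")
```

If the SATA SSD and internal NVMe produce near-identical load times, the implied bandwidth requirement is well below what either drive can deliver, which is exactly what testing slower drives would reveal.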
 
It's great to see the really inexpensive Sabrent adapter work so well. [DF product link @ https://geni.us/DvqBHv]

Full article is up @ https://www.eurogamer.net/articles/...x-series-x-back-compat-ssd-load-time-analysis

Xbox Series X back-compat loading times: SSDs and HDDs compared
It turns out that off-the-shelf solid state storage works great.

Xbox Series X backwards compatibility is shaping up to be an absolute home run based on our testing so far. Yes, every title we've looked at so far seems to hit its performance target when 30fps or 60fps caps are in place, games using dynamic resolution scaling can show clear improvements, and we're looking at anything up to a 2x multiplier in GPU performance in games with unlocked frame-rates. On top of that, there's an image quality bonus too: texture filtering is improved via enforced 16x anisotropic filtering. Loading times are also significantly improved - and that's the focus for this follow-up coverage.

You see, there's one disadvantage to the next-gen dream. Storage space on the internal solid state drive is at a premium - Xbox Series X ships with 802 useable gigs on the 1TB drive. On the one hand, that's actually an improvement over the 781GB of Xbox One X's 1TB HDD (my theory: Microsoft uses its hardware decompression engines to reduce the OS footprint, the console decompressing system files on demand). On the other, with the 1TB plug-in expansion drive priced at £220/$220, fast storage for next-gen titles comes at a price premium.

...
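The storage figures in that excerpt are easy to sanity-check with a couple of divisions; all inputs below come straight from the quoted text:

```python
# Figures from the quoted article: usable vs raw capacity on both
# consoles, plus the price of the 1TB plug-in expansion drive.
series_x_usable, one_x_usable, raw = 802, 781, 1000  # GB
expansion_price_usd, expansion_capacity_tb = 220, 1

print(f"Series X OS/reserved overhead: {100 * (1 - series_x_usable / raw):.1f}%")
print(f"One X overhead: {100 * (1 - one_x_usable / raw):.1f}%")
print(f"Expansion card: ${expansion_price_usd / (expansion_capacity_tb * 1000):.2f}/GB")
```

So the Series X reserves about 20% of the raw drive versus roughly 22% on One X, and the expansion card lands at $0.22/GB, which is the "price premium" the article is pointing at.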
 

Only managed to watch the full video today, but it's another really good presentation. It answered all the questions that were on my mind, even though the answers themselves were quite disappointing. I was pretty stoked for Ultra Performance mode, but it looks like it needs a bit more time to bake.

Oh well, guess I'll have to slum it by playing 4K at a system-hogging 1080p for a little longer!
 
Quick Resume is always read and written from the internal NVMe, regardless of where the game was loaded from.

I just thought there still might be some difference.

But yeah, when you think about it, this makes external SSDs even more appealing in a way... as you can fully leverage Quick Resume even for external drives.

It's almost like a built-in Optane-like feature... for the initial load, anyway.
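To make the Optane analogy concrete, here's a purely illustrative toy model of the behaviour described above; none of this reflects Microsoft's actual implementation, just the observable rule that suspend state lives on the internal NVMe regardless of where the game is installed:

```python
# Toy model of the Quick Resume rule described above. Entirely
# illustrative; the real system internals are not public.
INTERNAL_NVME = "internal_nvme"

def suspend(game: str, installed_on: str) -> dict:
    # Suspend snapshots are always written to the internal NVMe,
    # even if the game itself lives on an external drive.
    return {"game": game, "installed_on": installed_on,
            "snapshot_location": INTERNAL_NVME}

def resume(snapshot: dict) -> str:
    # Resuming reads the snapshot from the internal NVMe; only a
    # fresh (non-resume) boot touches the slower install drive.
    return (f"resume {snapshot['game']} from {snapshot['snapshot_location']}; "
            f"fresh loads still hit {snapshot['installed_on']}")

print(resume(suspend("some BC title", "external_usb_ssd")))
```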
 
Just to draw attention to one detail: the article mentions the "Sabrent USB to SATA adapter", which comes in USB 3.0 (5Gbps) and USB 3.1 (10Gbps) versions, and in the video Richard mentions he's using the USB 3.1 version. It is mentioned later in the article, but not everybody reads whole articles.

So I'm now jumping through the mental hoops of what capacity SSD to buy for the Series X. The Samsung 870 QVO SATA SSD looks a good choice for b/c games, so 4TB or 8TB? I don't really want to buy a smaller capacity only to have to buy a larger one later, because I'm not convinced I'll save money overall. Hmmm.. :neutral:
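That decision mostly comes down to price per terabyte versus paying twice. A quick comparison sketch; the prices below are made-up placeholders, substitute whatever the drives actually cost:

```python
# Price-per-TB comparison for the 870 QVO decision above.
# Prices are hypothetical placeholders, not real quotes.
options = {"870 QVO 4TB": (4, 450.0), "870 QVO 8TB": (8, 900.0)}

for name, (tb, usd) in options.items():
    print(f"{name}: ${usd / tb:.0f}/TB")

# Buying small now and large later means paying for both drives,
# which is the "not convinced I'll save money" concern above.
```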
 
USB 3.1 is really a bit of marketing faff. They renamed USB 3.0 to be USB 3.1 Gen 1. :rolleyes:

USB 3.1 Gen 1 = USB 3 = 5 Gbps
USB 3.1 Gen 2 = 10 Gbps

So your USB 3.0 devices are now instantly USB 3.1 devices.
 
Indeed, which is why it's important to look at the bandwidth of the device. In the case of the Sabrent devices referenced, the USB 3.0 version is clearly marked as 5Gbps and the only USB 3.1 version is 10Gbps. :yep2: And that's literally a 100% increase in bandwidth. :runaway:
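Putting that 100% increase in concrete terms: the 5 and 10 Gbps figures are the signalling rates discussed above, while the game size below is a made-up example, and real-world throughput will come in below these best-case numbers:

```python
# USB 3.x naming vs raw signalling rate, per the renaming above.
usb_names = {
    "USB 3.0 / USB 3.1 Gen 1": 5,   # Gbps
    "USB 3.1 Gen 2": 10,            # Gbps
}

game_size_gb = 100  # made-up example transfer
for name, gbps in usb_names.items():
    # Divide by 8 for bits -> bytes; actual throughput is lower than
    # the signalling rate, so treat these as best-case estimates.
    seconds = game_size_gb * 8 / gbps
    print(f"{name}: {gbps} Gbps -> ~{seconds / 60:.1f} min for {game_size_gb} GB")
```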
 