Digital Foundry Article Technical Discussion [2020]

Certainly a good point on the power usage. Even consoles are falling victim to that. Comparing performance further back than the start of this generation becomes almost impossible though, given the changes in supported features and standard rendering resolutions. Even going back as far as Kepler is probably stretching the bounds of a sensible comparison.

What game or resolution would you use to compare a GF3 to an RTX3090 for example?
I wouldn't try to compare a GF3 to a 3090. I'd look at the progress from, say, a GF256 to a GF6800, or a GF6800 to a GF580. Those were both 5-6 year timeframes and progress was much greater than Kepler to Turing. I also don't think it's useful to retroactively look at Nvidia performance in newer games for this particular discussion. Nvidia GPUs lose an unreasonable amount of performance when not being hand-optimized on a per-game basis in drivers.
 
It looks like I was correct that, for Xbox One and earlier games, getting a 2TB 2.5-inch SATA drive for $160-$180 and an enclosure is the best bang for your buck.
 
No need for an enclosure. Just get that Sabrent USB 3.1 to SATA cable. However, I'd rather find a way to mount the drive to the back of my Series X where I can't see it. Thinking of having something 3D printed that would allow hot swapping. Maybe even put some USB-powered RGB LEDs in it so it looks like there is an actual green light emitting from the top. LOL

Tommy McClain
 
Indeed, which is why it's important to look at the bandwidth of the device. In the case of the Sabrent devices referenced, the USB 3.0 version is clearly marked as 5Gbps and the only USB 3.1 version is 10Gbps. :yep2: And that's literally a 100% increase in bandwidth. :runaway:
The Xbox USB ports are only gen 1 though, so unless there's some other deficiency in the "USB 3.0" adapter (cheaper, crappier controller, etc), it shouldn't make any difference.
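For a sense of what that Gen 1 vs Gen 2 gap means in raw numbers, here's a quick back-of-the-envelope sketch (my own, assuming the usual 8b/10b and 128b/132b line encodings; real-world throughput will be lower still):

Code:
# Back-of-the-envelope ceilings for USB 3.x Gen 1 vs Gen 2 links, assuming the
# standard line encodings (8b/10b for Gen 1, 128b/132b for Gen 2). Real-world
# throughput is lower once protocol overhead and the drive itself are factored in.
def usable_mb_per_s(line_rate_gbps, payload_bits, total_bits):
    usable_bits_per_s = line_rate_gbps * 1e9 * payload_bits / total_bits
    return usable_bits_per_s / 8 / 1e6  # bits/s -> MB/s

gen1 = usable_mb_per_s(5, 8, 10)      # ~500 MB/s
gen2 = usable_mb_per_s(10, 128, 132)  # ~1212 MB/s
print(f"Gen 1 ceiling ~{gen1:.0f} MB/s, Gen 2 ceiling ~{gen2:.0f} MB/s")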
 
I wasn't aware of that.
 
From the DF tests for BC it seems it makes little difference over the internal drive. Gen 2 would help with speedily archiving Series X games, negating the need for their external storage card; otherwise it seems the bandwidth bottleneck gives way to other bottlenecks in real-world gaming.
 
Yup, and it's a shame that the USB ports on Series X are only 5Gbps, because one of the functions I want that external drive for is to act as cheap(ish) mass storage: games I know I'll want to play again I'll just copy to the external drive, then copy back to the internal NVMe when I want to play them, so faster transfer speeds are welcome. I can understand Sony's reasons for providing three 10Gbps USB ports, certainly more than for including WiFi 6.
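As a rough illustration of why that matters for archiving, a small sketch with assumed figures (the 100GB game size and the sustained throughput numbers below are placeholders, not measurements):

Code:
# Rough copy-time estimate for moving a game to/from an external SATA SSD.
# Both throughput values are assumed round numbers, not measurements:
# ~400 MB/s sustained over a 5Gbps port vs ~500 MB/s when the SATA SSD
# itself becomes the limit on a 10Gbps port.
game_size_gb = 100  # hypothetical game size
for label, mb_per_s in [("5Gbps (Gen 1) port", 400), ("10Gbps (Gen 2) port", 500)]:
    minutes = game_size_gb * 1000 / mb_per_s / 60
    print(f"{label}: ~{minutes:.1f} min for a {game_size_gb}GB game")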
 
I think Sony is also future-proofing because of VR - which I assume will simply plug into the USB-C port - and whatever else they may or may not come up with in the next 7 years.
 
I wouldn't try to compare a GF3 to a 3090. I'd look at the progress from, say, a GF256 to a GF6800, or a GF6800 to a GF580. Those were both 5-6 year timeframes and progress was much greater than Kepler to Turing. I also don't think it's useful to retroactively look at Nvidia performance in newer games for this particular discussion. Nvidia GPUs lose an unreasonable amount of performance when not being hand-optimized on a per-game basis in drivers.

Unfortunately TPU doesn't give average performance metrics further back than Tesla 2.0, which launched in Dec 08 in its fastest GTX 285 iteration, but that still allows us to compare all the way back to the good old 8800GTX, launched in Nov 2006, 14 years ago and right at the start of the PS3 console generation. That covers 9 architecture iterations, so it should be a reasonable sample for predicting future performance scaling.

Looking at the fastest iteration of each architecture (using Ti models rather than Titans where available), which seem to have launched roughly every 2 years, gives us the following performance uplifts from architecture to architecture, starting with the GTX 285's uplift over the 8800GTX and ending with the 3090's uplift over the 2080Ti. All comparisons are taken at the time of the newer GPU's launch, thus removing any driver optimisation questions for the older GPU and, if anything, favouring the older GPU (especially in the Turing/Pascal comparison) given the new architecture's inability at that point to stretch its legs. I used resolutions appropriate to both GPUs at the time of comparison.

Code:
New GPU    Old GPU    Resolution    Relative Performance    New Architecture    Launch Date
3090       2080Ti     4K            145%                    Ampere              Sep-20
2080Ti     1080Ti     4K            139%                    Turing              Sep-18
1080Ti     980Ti      1440p         175%                    Pascal              Mar-17
980Ti      780Ti      1440p         141%                    Maxwell             Jun-15
780Ti      580        1080p         169%                    Kepler              Nov-13
580        285        1080p         164%                    Fermi 2.0           Nov-10
285        8800GTX    1680x1050     149%                    Tesla 2.0           Dec-08

Since the table's not so easy to read, the summarized performance uplifts roughly every 2 years, oldest to newest are 49%, 64%, 69%, 41%, 75%, 39%, 45%. While there may be a very slight downwards trend there, we have to remember that Turing (the 39%) introduced Ray Tracing which was a major change in approach to real time graphics. And where RT is used, that 39% increase would obviously be far, far larger.

EDIT: I forgot to mention, the cumulative increase from the 8800GTX to the RTX 3090 over 14 years is about 2000%. So the 3090 is roughly 20x faster than the 8800GTX.
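For anyone who wants to check the arithmetic, here's a minimal Python sketch (not part of the original data gathering) that turns the table's relative-performance factors into per-step uplifts and the cumulative total:

Code:
# Relative performance of each new flagship vs the previous one, per the table.
relative = [
    ("Tesla 2.0 (GTX 285)",  1.49),
    ("Fermi 2.0 (GTX 580)",  1.64),
    ("Kepler (GTX 780 Ti)",  1.69),
    ("Maxwell (GTX 980 Ti)", 1.41),
    ("Pascal (GTX 1080 Ti)", 1.75),
    ("Turing (RTX 2080 Ti)", 1.39),
    ("Ampere (RTX 3090)",    1.45),
]
cumulative = 1.0
for arch, factor in relative:
    cumulative *= factor
    print(f"{arch:<22} uplift {factor - 1:>4.0%}  cumulative {cumulative:>5.1f}x")
# The final line comes out at roughly 20.5x, i.e. the ~2000% figure above.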
 
Full DF Article @ https://www.eurogamer.net/articles/digitalfoundry-2020-assassins-creed-unity-series-x-60fps

Xbox Series X can finally run Assassin's Creed Unity at 60fps
A revolutionary improvement.

Some might say it was a game that was simply too ambitious for its intended platform. In 2014, Ubisoft's spectacular Assassin's Creed Unity pushed back technological boundaries in a range of directions. Its depiction of Revolutionary Paris was dense in detail, packed with hundreds of residents on-screen at any given point, in a city that didn't just increase detail outdoors - but introduced highly detailed interiors too. Combined with big advances in character rendering and an astonishing global illumination system that still looks incredible today, Unity had all the makings of a masterpiece. The problem was, it didn't run very well at all.

Ubisoft itself admitted that the focus on technology was just too strong, to the detriment of the final product, which ran poorly on both PS4 and Xbox One. Even on PC, it took years for CPUs and GPUs to run this game well. Curiously, with the console builds, Unity actually ran better in some cases on the less powerful Microsoft machine, despite a locked 900p resolution on both - indicative of the extraordinary CPU load placed on the consoles and the small clock speed advantage enjoyed by Xbox One. Patches followed, the game improved, but it took the arrival of PS4 Pro and Xbox One X with their higher frequency CPUs to get the game running at anything close to a locked 30 frames per second.

Now, with the arrival of Xbox Series X, it's finally possible to play the game on console at 60 frames per second. And with one extremely minor exception, that's a locked 60fps. It's one of the most transformative experiences I've yet experienced via the new console's backwards compatibility feature - a game renowned for sub-optimal play is now basically flawless in performance terms. Actually breaking the 30fps limit of the game isn't easy though. It requires users to have access to the original disc release and to block any attempts to download any patches. This OG code is different from all the patches that followed by actually running with an unlocked frame-rate - a poor state of affairs back in the day for console users, but essential six years later in allowing us to leverage the huge CPU power offered by the Zen 2 processors within the next-gen machines.

...
 
I wonder what resolution all the next-gen machines could run it at while holding 60fps.

It'd be a pleasant surprise if it got patched just for that.
 
Since the table's not so easy to read, the summarized performance uplifts roughly every 2 years, oldest to newest are 49%, 64%, 69%, 41%, 75%, 39%, 45%. While there may be a very slight downwards trend there, we have to remember that Turing (the 39%) introduced Ray Tracing which was a major change in approach to real time graphics. And where RT is used, that 39% increase would obviously be far, far larger.

Nice work there!

I want to mention a few things though :) There is a difference between what the chips did/are doing vs what they are capable of. The 980 Ti was one of the most conservatively clocked cards there has ever been, and the 41% it has in your table does not fully represent what that chip could do. There were custom cards that out of the box offered 20% more performance and could still OC 10% on top of that. The chips after that were clocked closer to their maximum in the reference model. If you take that into account the downward trend is slightly steeper, especially when you combine it with the upward trends in die size and power consumption.
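To put that in numbers, a small sketch using the table's figures plus the ~20% factory-OC headroom estimate above (illustrative only, not measured averages):

Code:
# How a conservatively clocked reference 980 Ti skews the per-generation numbers.
# The 1.20 headroom factor is the ~20% out-of-the-box gain claimed above for
# factory-OC cards, not a measured average.
maxwell_vs_780ti = 1.41  # reference 980 Ti vs 780 Ti (from the table)
pascal_vs_980ti  = 1.75  # 1080 Ti vs reference 980 Ti (from the table)
headroom = 1.20
print(f"Factory-OC 980 Ti vs 780 Ti:  ~{maxwell_vs_780ti * headroom:.2f}x")  # ~1.69x
print(f"1080 Ti vs factory-OC 980 Ti: ~{pascal_vs_980ti / headroom:.2f}x")   # ~1.46x

With those inputs Pascal's effective uplift over the best Maxwell cards drops from 75% to roughly 46%, which is the steeper downward trend being described.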
 
Using TechPowerUp's 1080p numbers you have incorrect 780 Ti figures: the 780 Ti is 82% faster than the 580. Some good data you gathered, though. I think it's more than a slight trend, and the further back you go it just keeps increasing. I went and checked various individual reviews going back to the pre-G80 days and there was a ton more progress at a rapid pace. Power consumption also never came close to approaching 400 watts to achieve it. Nvidia substantially raised the power ceiling, giving the new cards an advantage over every previous GPU they have built, and the gain is still one of the smaller ones they have offered. It's the performance uplift you would expect of a successor without a die shrink. Die size is another factor to consider: previous GPUs were quite small and weren't coming anywhere near the limits of manufacturing, yet still offered larger performance increases.
 
I've not used TPU's performance tables but rather went into their individual reviews. For the 780 Ti vs the 580 there is no direct comparison, so I had to put the 680 in the middle and multiply the 580-to-680 performance increase by the 680-to-780 Ti increase.
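A quick sketch of that chaining with placeholder ratios (not TPU's actual review figures); the point is that multiplying two launch-day comparisons can land some way off a single later retest like the 82% figure above:

Code:
# Chaining two relative-performance results through a common GPU (the GTX 680).
# Both ratios below are hypothetical stand-ins, not TPU's actual review numbers.
perf_680_vs_580   = 1.35  # hypothetical: GTX 680 relative to GTX 580
perf_780ti_vs_680 = 1.25  # hypothetical: GTX 780 Ti relative to GTX 680
implied_780ti_vs_580 = perf_680_vs_580 * perf_780ti_vs_680
print(f"Implied 780 Ti vs 580: ~{implied_780ti_vs_580:.2f}x")  # ~1.69x with these inputs

Any gap between a chained result like this and a later direct retest would come from the two links being measured against different game suites and drivers at their respective launch dates.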
 
Is the XSX not running it at 4K?

Just watched the video. It runs at 900p lol. I understand the reason behind the video though.

I genuinely loved this game. For one, it was gorgeous in its day, IMO easily the best-looking game of its time. The gameplay, too, was brilliant IMO, although I know plenty of people didn't appreciate it.
 