Digital Foundry Article Technical Discussion [2020]

Status
Not open for further replies.
My GTX 670 still outperforms my base PS4 in the games I've tested it against. It's a ~2.6 TF GPU, but still, I guess there's not much optimization for that GPU :p

It was originally supposed to be a ~50% faster GPU. What games have you tested, and what differentials are you seeing? In the poorly optimized games (standard UE4, Ubisoft, etc.) it still does OK.
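The flops figures being thrown around here can be sanity-checked from public specs. A minimal sketch: the core counts and clocks below are the hardware's published specs, not quoted from this thread, and the 670 number uses its rated boost clock (real cards often boost higher, which is where a ~50% gap comes from):

```python
# Peak FP32 throughput: each shader core retires one FMA (2 ops) per cycle.
def gflops(cores: int, clock_ghz: float) -> float:
    return cores * 2 * clock_ghz

gtx670 = gflops(1344, 0.980)     # GTX 670: 1344 Kepler cores @ 980 MHz rated boost
ps4    = gflops(18 * 64, 0.800)  # base PS4: 18 GCN CUs x 64 lanes @ 800 MHz

print(f"GTX 670 ~{gtx670 / 1000:.2f} TF, PS4 ~{ps4 / 1000:.2f} TF, "
      f"+{100 * (gtx670 / ps4 - 1):.0f}% on paper")
```

At rated clocks this prints roughly 2.63 TF vs 1.84 TF, a +43% paper gap; paper flops say nothing about how efficiently each architecture uses them, which is the whole argument of this thread.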

You are viewing it in a slanted way. GCN was not implicitly better; it had a different performance profile, with a different level of effort, or bandwidth, needing to be expended to extract performance.

Judging by games: leave devs to their own devices and they produce games that run better on GCN, and not only console devs either. Typically, the more impressive the performance-to-visuals result, the larger the gap.
 
I would guess the game is not poorly optimized for a GTX 600- or 700-series card, but that it is actually leveraging modern architectures as much as possible, so modern cards are pulling away in terms of performance.
 
It was originally supposed to be a ~50% faster GPU. What games have you tested, and what differentials are you seeing? In the poorly optimized games (standard UE4, Ubisoft, etc.) it still does OK.

Wolfenstein (both), Doom 2016 (have not played Eternal yet), BF4, BFV, PlanetSide 2, and some other games too, but that was a while ago. Yes, the 670 is a whole lot faster, and in most games it shows. In Doom, the 670 runs some settings higher and also doesn't have to do any dynamic scaling.
In BF4 I can run everything Ultra @ 1080p, averaging 90 FPS in 64-player games, both TDM and Conquest. PlanetSide 2 on the PS4 runs like crap, but that game is heavily CPU-bound, and my i7-920 @ 3.6 GHz is in a whole other world compared to the Jaguar @ 1.6 GHz.

Going to test HZD on the old PC just for fun, I like to do comparisons :p Have not tested any PS4 exclusives yet; Detroit would be a good test case. Don't think it's much of a problem if you aim for that 30 FPS.
Oh, and it's a 2 GB 670. I really like how long that old system has held up: the CPU is almost 12 years old, the GPU is from 2012, and the last RAM upgrade was in 2013 (8 GB total now).

Ofc I have a newer system too (2080 Ti, 3900X, 64 GB RAM, 1 TB NVMe) ;)

Yeah, I've also seen that Kepler has still gotten Vulkan 1.2 and DX12 updates, though I wouldn't expect the Game Ready drivers to be more than bug fixes at this point. EDIT: I actually tried Vulkan in Ghost Recon Breakpoint now, because it's a free weekend, and the latest driver actually corrupted the graphics. Performance-wise, the earlier and latest drivers otherwise seemed on par. But it seems Kepler users should stick to DX11 anyway; my own observation was that DX11 ran up to 10 FPS faster.

Hopefully, Doom Eternal is just the big exception. Even though AMD's older GCN parts have aged better, nothing else has been this bad for Nvidia's older GPUs AFAIK.
Red Dead Redemption 2 for example still has the GTX 770 competing with the R9 280, instead of falling beneath the HD 7790 like in Doom.
https://www.sweclockers.com/test/28500-red-dead-redemption-2-sweclockers-utmanar-systemkraven

Have not tried Eternal yet; strange, because Doom 2016 has no such problems. Hopefully we will see a patch :)
Yes, I read that SweClockers article before; they shared it on their FB page. In general, the 600/700-series Keplers have aged rather well. I can't complain; my 670 is 1.5 years older than the base consoles. Never did GPUs last this long.
 
Wolfenstein (both), Doom 2016 (have not played Eternal yet), BF4, BFV, PlanetSide 2 [...] Yes, the 670 is a whole lot faster, and in most games it shows.

All the benchmarks I've seen of both id Tech games as well as BFV make me doubt your claims. BF4 and PlanetSide are ancient in terms of rendering tech, so those will be OK on the 670.
 
Hardware Unboxed published its benchmarks today and tested at Low with a lot of old cards, and it's still worse than what you'd expect. He claims that their testing sequence is one of the most demanding in the game. The game apparently has a very minor performance impact going between the different graphical presets.

The graph is shown after 4:35.





My 8 GB RX 570, which I picked up about a year ago for 129 new just to have something, remains the best purchase ever!

It still plays almost everything capably; it would even work fine for HL Alyx, even though good gaming capability wasn't my intent. It would work well for Doom Eternal too.
 
For you, or in the history of all purchases? Because I'll have you know that yesterday I managed to get a loaf of bread, pears, blueberries, bananas, chicken, milk and 500g of fresh mushrooms from Waitrose! I'll admit your framerate in Doom is probably higher than my mushrooms.
 
Depends on the mushrooms; you can get quite high framerates in Doom with them.
 
It's not a VRAM issue. It's an architecture deficiency one.
No, it's not; Doom Eternal is behaving erratically and in illogical ways.

See here:

[attached benchmark chart: Doom Eternal, Nvidia GPUs]


The 780 Ti is not much faster than the 680 4GB, which is impossible. The 950 running as fast as the 780 Ti is also impossible, as is the 1050 Ti being twice as fast as the 780 Ti, or the 1050 being faster.

[attached benchmark chart: Doom Eternal, AMD GPUs]



Even with the AMD cards there is much inconsistency: the R9 Nano (which is almost at the level of the Fury X, minus ~10%) is way behind the RX 580/RX 5500 XT, which is impossible, and it's also hardly any faster than the 290X. The RX 570 is also slower than the R9 290, which should be impossible as well.

Typically, the more impressive the performance-to-visuals result, the larger the gap.
It's not about the performance-to-visuals ratio, it's about optimizations.
 
Even with the AMD cards there is much inconsistency: the R9 Nano (which is almost at the level of the Fury X, minus ~10%) is way behind the RX 580/RX 5500 XT, which is impossible, and it's also hardly any faster than the 290X. The RX 570 is also slower than the R9 290, which should be impossible as well.
So everything is impossible, or anything is possible. :runaway:
 
No, it's not; Doom Eternal is behaving erratically and in illogical ways. [...]
The thing is, Doom Eternal breaks pretty often if you use non-in-game overlays; we have to use its internal fcat from the console and do video recording.

If you use RTSS or anything like it, it will mess things up.
 
Some very fine engineering; it doesn't sound cheap. We saw with the One X what MS was capable of, and they seem to have improved upon that design.
 
DF Article updated to go along with the video, introduction included below, @ https://www.eurogamer.net/articles/...-xbox-series-x-a-revolution-in-console-design

Undoubtedly the biggest surprise of The Game Awards back in December 2019 was Microsoft's decision to reveal Xbox Series X: the name, the branding - and most crucially, the form factor. It was a console quite unlike anything we'd seen before, possibly the most original home console design since Nintendo's GameCube way back in 2001. During our recent visit to the Microsoft campus in Redmond WA, we had a chance to meet key members of the hardware team that created this remarkable-looking device - and in the process, we gained a much better understanding of why Xbox Series X required a top to bottom revamp of the traditional console form factor.

"When we started thinking about how we would design this, everything was theoretical," says Chris Kujawski, principal designer at Microsoft. "We didn't have stuff we could test, we didn't have measurements we could take, we knew it was going to be powerful and we knew it was going to require a totally different way of thinking about how to design a console."

The key issue facing the designers came down to power and target performance. The Xbox system architects decided from the get-go that the next generation console had to deliver an absolute minimum of twice the overall graphics performance of the Xbox One X, meaning 12 teraflops of GPU compute, sitting alongside the Zen 2 cores that would deliver a 4x improvement in CPU power. At the same time, the mandate was set that the machine also had to equal the acoustic performance of the Xbox One X - a tall order when system power would be increasing significantly.
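The arithmetic behind those targets checks out. A quick sketch, assuming the consoles' public CU counts and clocks (they are not stated in this excerpt):

```python
# Peak FP32 teraflops: shader lanes * 2 ops per cycle (one FMA) * clock.
def tflops(shader_lanes: int, clock_ghz: float) -> float:
    return shader_lanes * 2 * clock_ghz / 1000.0

series_x = tflops(52 * 64, 1.825)  # Series X: 52 active CUs @ 1825 MHz
one_x    = tflops(40 * 64, 1.172)  # One X:    40 CUs @ 1172 MHz

print(f"Series X ~{series_x:.2f} TF vs One X ~{one_x:.2f} TF "
      f"({series_x / one_x:.2f}x the GPU compute)")
```

That lands at roughly 12.15 TF vs 6.0 TF, just over the stated 2x GPU target.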

The challenge came into focus once the outsize power requirements of the new hardware became clear. Based on the prototype hardware we saw, Xbox Series X ships with a 315W power supply and, in keeping with all of Microsoft's console designs since the Xbox One S, it is delivered internally. With the sheer amount of electrical power pumping through the processor, the regulators handle up to 100W per square inch, delivering up to 190A. What made this all coalesce into the form factor we have today is the key decision to move to a split motherboard design: one board houses the high-power components like the Series X processor, the GDDR6 memory and the power regulators, while the other is the Southbridge board, principally handling I/O. The boards sit on either side of a substantial chassis block - a sheer aluminium casting.
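Those regulator numbers can be put in perspective with a back-of-the-envelope check. A hedged sketch: the ~1.0 V core voltage below is an assumption (typical for modern SoCs, not stated in the article), but it shows why 190 A is consistent with a 315 W supply:

```python
# Rough power budget: regulator current at an assumed core voltage,
# compared against the 315 W supply quoted in the article.
PSU_WATTS   = 315
SOC_VOLTAGE = 1.0   # assumed core voltage in volts (not from the article)
SOC_AMPS    = 190   # peak regulator current, from the article

soc_watts = SOC_VOLTAGE * SOC_AMPS   # ~190 W into the processor
headroom  = PSU_WATTS - soc_watts    # left for GDDR6, I/O, fan, conversion losses

print(f"SoC ~{soc_watts:.0f} W, ~{headroom:.0f} W of headroom in the 315 W supply")
```

In other words, the processor alone can draw the bulk of the supply's output, which is why the regulator density and cooling get so much attention in the design.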

 
No, it's not; Doom Eternal is behaving erratically and in illogical ways. [...] Even with the AMD cards there is much inconsistency [...]
It's not about the performance-to-visuals ratio, it's about optimizations.

There's actually very little inconsistency in any of the results here. WRT AMD, the Nano being ~10% behind the 5500/580 is completely normal, as is it being 10% faster than a 290, and the 570 matching a 290 is also completely normal. The Nvidia results are likewise in line with recent trends, just to a worse extent: the more forward-looking a renderer is, the worse Nvidia does with pre-Turing GPUs. There are clearly all kinds of architectural bottlenecks in Kepler preventing performance from scaling. Not surprising; at this stage it's pretty well known that Kepler is a poor architecture, and Maxwell and Pascal performing uncharacteristically higher than Kepler has become the norm. Again, this is not limited to this specific title: you can see similar performance drop-offs for pre-Turing Nvidia GPUs in Forza Horizon 4, Battlefield V, RDR 2, Call of Duty: Warzone, World War Z, The Division 2, Wolfenstein: Youngblood, etc.

I agree it is about optimization: GCN GPUs have much more performance potential than their Nvidia competitors. Better architecture. When Nvidia releases a new architecture and stops hand-optimizing all the popular games, the old one's performance drops off a cliff.
 
I wouldn't call it a bad architecture, nor did GCN1 have more performance than Nvidia's competing products. Up until now I have had no problems with a 670, even in Doom 2016 and Wolfenstein. It must be an optimization problem.

How many games, on completely different engines, by completely different developers, have to perform awfully before it's no longer an optimization problem? Seems like an odd way to look at it. You could be having a better experience on a GCN GPU you could have bought during the same time period for less than half the price.
 