Digital Foundry Article Technical Discussion [2020]

That vid is not low settings, it's the original preset. But yeah, it's a poor showing no matter how you look at it. Consoles age far better than PC hardware, especially on the Nvidia side, where long-term value absolutely tanks.
I was also going to point out this correction: you need to run the original preset on Horizon to match PS4 settings. This is low, and it's still unable to hold that 30fps number.
The lack of VRAM makes it nearly impossible for the 7870/50 series to keep up with the PS4 once titles push VRAM usage beyond 2 GB.
But in titles that manage to stay within a 2 GB VRAM allocation, they do seem to more or less match PS4 performance.

That being said, with the generous RAM available on the PS4, developers will often choose the graphical settings that maximize VRAM usage. Why ship a worse-looking product when it could look better?

The $399 PS4 at launch offered amazing price/performance. And while I do believe both consoles, the X1 and PS4, were anemic in some areas, which has been the subject of many great debates, the size of that VRAM pool kept both consoles pushing forward while their PC counterparts started to struggle for lack of it.

The 7850/70 series were a hell of an amazing series when they came out. GCN 1.0 exceeded so many expectations. I remember wanting one so badly but just felt priced out at the time. If they had shipped with 8GB of memory, no one would have been able to afford them ;)
 
The lack of VRAM makes it nearly impossible for the 7870/50 series to keep up with the PS4 once titles push VRAM usage beyond 2 GB.
But in titles that manage to stay within a 2 GB VRAM allocation, they do seem to more or less match PS4 performance.

Yes, it's the GPU performance I'm talking about. It's what Alex wrote earlier in this thread too. Those GPUs are mid-rangers from mid-2012, when 2GB was seen as enough. A 7950 comes with 3GB standard; the 7970 has either 3GB or 6GB of VRAM.
And yes, the 7950, a GPU from January 2012, is a much better buy for anyone in 2013. That thing wasn't expensive by late 2013.

It's like comparing an RTX 2060 non-Super with 3GB of VRAM to the PS5 and XSX. That GPU would be hugely RAM-limited to begin with.
 
Yes, it's the GPU performance I'm talking about. It's what Alex wrote earlier in this thread too. Those GPUs are mid-rangers from mid-2012, when 2GB was seen as enough. A 7950 comes with 3GB standard; the 7970 has either 3GB or 6GB of VRAM.
And yes, the 7950, a GPU from January 2012, is a much better buy for anyone in 2013. That thing wasn't expensive by late 2013.

It's like comparing an RTX 2060 non-Super with 3GB of VRAM to the PS5 and XSX. That GPU would be hugely RAM-limited to begin with.
Yeah, the silicon is the same, sure. But if we're going to call a PS4 a PS4, you can't swap components, so you're really paying for the whole thing.

It's sort of the challenge we have with current GPUs, even with the RDNA 2.0 cards that are coming. Yeah, you'll find similarly performing silicon, but you're just comparing a single component against a single component. The console is a whole system put together, and when you look at it holistically, as a sum of parts working together, that's a core reason why its shelf life is so much longer.

It's a good topic to reflect on, honestly, as we head into the next generation. The more I think about it, the more it seems like an important topic to discuss further.

Looking at the generation in review: how did the consoles manage to keep going as a system of parts working together, versus a pre-built of somewhat equivalent hardware that the consoles are based on? What changed, and what was ultimately the difference between the two?

It stands out to me now, looking back, that over a long enough period of time the SSDs may allow the consoles to stick around longer, even with, say, a 30TF monster 3080 out there before they even get out the door. Will that 10GB be enough to stick around versus 16GB? How far do developers push SSD/IO streaming to reduce the VRAM footprint and free up memory for other things? It's clear that neither console has the compute or ML power to compete with anything Nvidia has, but they have VRAM and they have SSD tech. They may not have the muscle to push heavy loads, but they may have the endurance to last.
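The SSD-as-VRAM-relief idea can be put in rough numbers: what bounds how aggressively you can stream is how quickly a texture working set can be refilled from storage. A back-of-envelope sketch, using PS5's published 5.5 GB/s raw throughput and an assumed ~100 MB/s for a last-gen HDD (the 8 GB working set is an illustrative figure, not a measured one):

```python
def swap_time_s(working_set_gb: float, bandwidth_gb_s: float) -> float:
    """Seconds to (re)load a texture working set at a given raw read bandwidth."""
    return working_set_gb / bandwidth_gb_s

hdd_s = swap_time_s(8.0, 0.1)   # ~100 MB/s: assumed last-gen HDD figure
ssd_s = swap_time_s(8.0, 5.5)   # 5.5 GB/s: PS5's quoted raw throughput
print(f"HDD: {hdd_s:.0f} s, PS5 SSD: {ssd_s:.2f} s")
```

Eighty seconds versus under two: that two-orders-of-magnitude gap is why streaming can plausibly substitute for some resident VRAM this time around.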

It's worth reviewing. I don't know the future for either console, but it's a piece I'd be interested in reading more about: consoles... built to last.
 
I'll keep to benchmarks to measure performance. They are there for a reason.
Nice 'edit' btw ;)

Guess GTA IV was a stellar port on PC back in the day then, as the benchmark indicated!

Benchmarks are only useful when comparing like for like; that's their benefit in GPU comparisons. In this case we're comparing GPUs to consoles, and there is no benchmark in the PS4 version, so we have to compare actual gameplay. It's known that Horizon's benchmark is not indicative of actual gameplay, so it makes no sense to compare it against a PS4.

Using actual gameplay, a 7850 absolutely gets clobbered.
 
Warzone

Sub-720p, with everything low or off, to achieve PS4-like performance.
It's interesting how, faced with a compute deficit combined with a large amount of VRAM, rendering this generation moved toward baking as much as possible: effectively offline computation at the cost of the abundant space.
I'm wondering if we're just going to see more of this going into next generation. As compute becomes fixed, we can continue to push the envelope using offline computation. It does seem like the strategy, but dynamic scenes and environments will take a hit for it.
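The bake trade-off described here, spending memory so that runtime work drops to a table lookup, can be sketched in miniature. The `lighting` function below is a placeholder stand-in, not any engine's real model; the point is only the shape of the trade (transcendental math per sample versus one indexed read):

```python
import math

def lighting(angle: float) -> float:
    """Stand-in for an expensive per-sample computation (placeholder math)."""
    return max(0.0, math.cos(angle)) ** 2

# Offline ("bake"): evaluate the function into a table. Memory cost scales
# with STEPS; runtime cost no longer depends on how expensive lighting() is.
STEPS = 256
BAKED = [lighting(i / STEPS * math.pi) for i in range(STEPS + 1)]

def lighting_baked(angle: float) -> float:
    """Runtime: one index computation plus a lookup, at some quantization error."""
    i = round(angle / math.pi * STEPS)
    return BAKED[max(0, min(STEPS, i))]
```

The error is bounded by the table resolution, which is exactly the knob this generation turned: with VRAM to spare, you bake at higher resolution instead of computing live.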
 
It's interesting how, faced with a compute deficit combined with a large amount of VRAM, rendering this generation moved toward baking as much as possible: effectively offline computation at the cost of the abundant space.
I'm wondering if we're just going to see more of this going into next generation. As compute becomes fixed, we can continue to push the envelope using offline computation. It does seem like the strategy, but dynamic scenes and environments will take a hit for it.
I think it will remain the strategy: streaming higher-quality baked assets.
 
It's interesting how, faced with a compute deficit combined with a large amount of VRAM, rendering this generation moved toward baking as much as possible: effectively offline computation at the cost of the abundant space.
I'm wondering if we're just going to see more of this going into next generation. As compute becomes fixed, we can continue to push the envelope using offline computation. It does seem like the strategy, but dynamic scenes and environments will take a hit for it.

Don't UE5's ambitions suggest that a good selection of titles will be very dynamic with lighting/geometry?
 
I predict that the PS5 GPU comparison to the RX 5700 (not XT) will age even worse and faster than the GTX 750 Ti to PS4 comparison ;)
 
I think the 2013 console generation was the first where you didn't have to upgrade if you had a matching GPU at launch, and could keep the same PC setup for an entire generation. Even the best-looking PC game still runs as well on the 7850, at the same settings as the base PS4.

The 7850 has almost the same TF and architecture as the PS4's GPU; the RX 5700 is ~25% slower than the PS5's GPU in terms of teraflops, not to mention it's an older architecture.
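The teraflop figures being compared here come straight from shader count and clock: peak FP32 TFLOPS = ALUs × 2 (one fused multiply-add per clock) × clock in GHz. A quick sketch with the published shader counts and clocks; the exact deficit depends on which clock you use, and with the RX 5700's boost clock it lands around 23%, in the ballpark of the ~25% cited:

```python
def tflops(shaders: int, clock_ghz: float) -> float:
    """Peak FP32 TFLOPS: each ALU retires one FMA (2 ops) per clock."""
    return shaders * 2 * clock_ghz / 1000

ps4    = tflops(1152, 0.800)   # 18 GCN CUs at 800 MHz
hd7850 = tflops(1024, 0.860)   # 16 GCN CUs at 860 MHz
ps5    = tflops(2304, 2.230)   # 36 RDNA 2 CUs at the max 2.23 GHz clock
rx5700 = tflops(2304, 1.725)   # 36 RDNA CUs at the 1725 MHz boost clock
print(f"PS4 {ps4:.2f} | 7850 {hd7850:.2f} | PS5 {ps5:.2f} | RX 5700 {rx5700:.2f}")
print(f"RX 5700 deficit vs PS5: {(1 - rx5700 / ps5) * 100:.0f}%")
```

The same arithmetic also shows how close the 7850 and PS4 were (~1.76 vs ~1.84 TF), which is why that pairing was a natural one all generation.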
 
Don't UE5's ambitions suggest that a good selection of titles will be very dynamic with lighting/geometry?
Well, the geometry is still done offline, but the lighting is dynamic, yes.
UE5 certainly does suggest we are moving toward dynamic. However, as video cards get more powerful, vendors can keep adding more to their dynamic lighting engines, and that same ability to keep pushing forward isn't available to consoles.

The consoles will eventually hit a compute limit, at which point the only way to increase fidelity, once you've maxed out your compute, is to gather compute power from elsewhere.

After enough resolution reduction, and once all forms of checkerboarding and upscaling have been exhausted, the choices for pushing the envelope further start to get slim. So this points back to the eventuality of offloading some of those dynamic lighting calculations: either baking them into the textures again on powerful developer PCs, or leveraging the power of the cloud to deliver them over an internet connection. While FS2020 seems an appropriate game to leverage the latter, and its use cases are very specific, perhaps it is the start of something bigger; I do not know.
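The point about resolution knobs running out can be made concrete with pixel arithmetic: checkerboarding shades roughly half the samples per frame, and each resolution step down buys a fixed factor, after which further wins have to come from somewhere else, like baking. A minimal sketch:

```python
def mpix(w: int, h: int) -> float:
    """Millions of pixels shaded per frame at the given resolution."""
    return w * h / 1e6

native_4k   = mpix(3840, 2160)   # full native 4K shading load
checker_4k  = native_4k / 2      # checkerboarding shades ~half the samples
native_1080 = mpix(1920, 1080)   # the common last-gen fallback
print(f"4K: {native_4k:.1f} Mpix | checkerboard 4K: {checker_4k:.1f} | 1080p: {native_1080:.1f}")
```

From 8.3 Mpix down to about 2.1 Mpix is a 4x swing, and that's the whole budget these techniques can recover; past that, the compute limit is the compute limit.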
 
Well the geometry is still done offline

Individual elements are created offline, but they can have hundreds of thousands of those in a scene. Scenes are not monolithic pieces of geometry. They can do building destruction and Fortnite/Satisfactory "big lego" style user-generated content. That's plenty of opportunity for dynamism just from games using UE. Not quite sure what more you're after? Dinosaurs stampeding through a hurricane-swept forest? (I want this)

Agreed that FS2020 feels like it's on the verge of something, or might not be. It blows right past the internal storage and content-generation issues of other games. We need a version that can go down to meter and cm scales! FS only does the business at 100m+, where you can't see all the photogrammetry wibbles. :)
 
Individual elements are created offline, but they can have hundreds of thousands of those in a scene. Scenes are not monolithic pieces of geometry. They can do building destruction and Fortnite/Satisfactory "big lego" style user-generated content. That's plenty of opportunity for dynamism just from games using UE. Not quite sure what more you're after? Dinosaurs stampeding through a hurricane-swept forest? (I want this)

Agreed that FS2020 feels like it's on the verge of something, or might not be. It blows right past the internal storage and content-generation issues of other games. We need a version that can go down to meter and cm scales! FS only does the business at 100m+, where you can't see all the photogrammetry wibbles. :)
Right, there's that portion, and wrt UE, static geometry is done offline. But yeah, there's dynamic geometry that still needs to be handled as well. I'm not sure what I'm after; I'm just looking at how this generation in particular managed to keep upping graphical fidelity without getting dealt a more powerful GPU. I really believe the secret to that success was baking, and I think we may see similar strategies play out by the end of this generation as well.
 
The 7850 has almost the same TF and architecture as the PS4's GPU; the RX 5700 is ~25% slower than the PS5's GPU in terms of teraflops, not to mention it's an older architecture.

The 7850 is an older iteration of GCN than the PS4's GPU, so if games are taking advantage of those newer features, the PS4's advantage could scale beyond what its raw specs suggest.

The comparison with a 4GB R7 265 that @Dictator considered would be a much better one. You'd probably have to equalize settings but then drop texture resolution a step or two on the R7 265 for the most direct comparison. It'd certainly make for interesting reading, and although I'd still expect the PS4 to come out in front, the margin by which it does would be fascinating.

As for the RX 5700, it lacks RT, so it's a moot point. Whatever AMD launches at the end of this month will be the far more interesting comparison, and the R7 265 comparison above would be the direct analogue to it.
 
The GTX Titan from 2013 can run Alien: Isolation at UHD, 60 fps, and higher settings than the PlayStation 4 version, which was just 1080p, 30fps, and much lower graphics settings.

Do not forget that games favor different hardware. Where tessellation was used, games ran much faster on Nvidia GPUs. On the other hand, there are methods like compute shaders which performed better on AMD GPUs.
 
The GTX Titan from 2013 can run Alien: Isolation at UHD, 60 fps, and higher settings than the PlayStation 4 version, which was just 1080p, 30fps, and much lower graphics settings.

Yes, it's more powerful than even the PS4 Pro mid-gen console. I think it's close to 5TF and sports 6GB of very fast VRAM. If I'm not mistaken, it performs even better than a GTX 970.
Anyway, you didn't (and don't) really need that 2013 Titan to outmatch the 2013 base consoles; an early-2012 7950 3GB will do. I think it's fairer to compare against late-2013 products, where AMD offered us the R9 290X. That thing was basically as fast as the GTX Titan, and probably aged much better too.

Hell, an R9 270X 4GB was basically a late-2013 mid-ranger, and it still matches or outperforms the base consoles even today. For the 2020 generation, the differences seem even bigger.
 