Digital Foundry Article Technical Discussion [2024]

720p internal, with drops to 540p, without any RT on PS5/XSX sounds like a joke that isn't funny. What's with the new generation of game devs and their performance optimization skills?
Obviously, all these big studios filled with professional developers who do this stuff for a living have all become incompetent at the same time, like some crazy coincidence. That must be it.
 
Obviously, all these big studios filled with professional developers who do this stuff for a living have all become incompetent at the same time, like some crazy coincidence. That must be it.
I'm not really sure how else to put it though lol, this level of image quality just isn't acceptable. Developers are obviously packing in too many graphical effects if they need to go down to SD-level resolutions on supposedly '4K' consoles.
 
There's a very obvious reason why games are being rendered at such low internal resolutions: 60FPS. I bet that for many of these games, development started with only a 30FPS target, and the budget for geometry, materials, and shaders was set accordingly. Then a mandate for a 60FPS mode came late in the development cycle, so developers just used FSR2 and cranked the resolution down to hit that target.
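To put some rough numbers on that (purely illustrative, assuming the GPU frame splits into a resolution-independent part and a part that scales with pixel count):

```python
# Rough back-of-the-envelope model (illustrative numbers, not measurements).
# Assume a frame budgeted for 30fps at 1440p internal, where ~30% of GPU time
# is resolution-independent (simulation, shadow maps, fixed-cost passes) and
# the rest scales with pixel count. How low must the internal resolution go
# to fit the same content into a 60fps budget?

BUDGET_30_MS = 1000 / 30            # ~33.3 ms per frame at 30fps
BUDGET_60_MS = 1000 / 60            # ~16.7 ms per frame at 60fps
FIXED_MS = 0.30 * BUDGET_30_MS      # assumed resolution-independent cost
PER_PIXEL_MS = (BUDGET_30_MS - FIXED_MS) / (2560 * 1440)

pixels_at_60 = (BUDGET_60_MS - FIXED_MS) / PER_PIXEL_MS
height_at_60 = (pixels_at_60 * 9 / 16) ** 0.5   # keep a 16:9 aspect ratio

print(f"{pixels_at_60 / 1e6:.2f} Mpixels -> roughly {round(height_at_60)}p internal")
# ~1.05 Mpixels -> roughly 770p. Any extra per-frame cost (the upscaler itself,
# heavier scenes, RT) pushes that further down toward 720p/540p territory.
```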
 
Obviously, all these big studios filled with professional developers who do this stuff for a living have all become incompetent at the same time, like some crazy coincidence. That must be it.
Oh yeah, sure, all big studios are only filled with amazing professionals with years of experience and all projects are perfectly managed ;)
 
The physics need to be toned down just a bit, though: the hair looks like black, cooked, slippery, heavy ramen; there's no friction, it looks way heavier than normal hair, etc. Still, it looks great with the animations, as it gives them a realistic look (even though the purposely exaggerated head movements/turns to show off the hair are way too comical, IMHO).

Oh, and the female dwarf is really into the male dwarf at 5:18.
Yeah, it's a bit too Fabio-esque, lol. But it would be hard to notice if it weren't; fantasy hair, lol.
 
What would it actually take to get photorealistic graphics at 1080p and 2160p? Tim Sweeney said 40 TFs, but that seems far off the mark save in some specific cases like body-cam urban scenes. Is it a case of the computational power being wrongly directed, or has the workload of reality been grossly underestimated? Given that we are nowhere near solving things like accurate foliage, truly natural human behaviours, solid, correct illumination, and realistic fire and smoke, and given the many, many flops of the ML approaches we're hoping will solve some of these, the actual workload to create something like watching a film in realtime seems a long, long way off, if it's even possible. We inch ever closer, but the closer we get, the more the shortcomings stand out.
Assuming we aren't just referring to rendering a single static scene, I would say we would need a GPU with at least 100x the performance of a 4090. Maybe future advances in software will bring that down dramatically though.

WRT Dragon Age, I'm probably in the minority who finds it underwhelming visually. The hair is nice but that extreme GPU demand is not translating efficiently to the final output.

Obviously, all these big studios filled with professional developers who do this stuff for a living have all become incompetent at the same time, like some crazy coincidence. That must be it.
There is still a small pool of developers who achieve much more, though.
 
Oh yeah, sure, all big studios are only filled with amazing professionals with years of experience and all projects are perfectly managed ;)
Or you realize that a changing of the guard has happened at a lot of studios, with lots of developers leaving or retiring. There's lots of evidence to support this sentiment: the Saints Row devs, DICE, Rockstar, etc. Game development is also easier than it's ever been, with a serious drop in the level of technical competence required to put a game out. I didn't get the opportunity to program games during the PS1/PS2 days, but when I compare, for example, working with XNA back in the 360 days to Unity or UE5, it's laughably simpler.
 
Hair rendering looks great. Though it might have exaggerated movement, the hair has nice volume and, most importantly, it looks consistent. Can't think of a game that does it better.
 
I definitely think the low resolutions this generation come from games built around 30fps and then retroactively given 60fps modes later in development. If you targeted 60 as the only way to play the game on a console, I am pretty sure the base assets and rendering features would be different.
 
Or you realize that a changing of the guard has happened at a lot of studios, with lots of developers leaving or retiring. There's lots of evidence to support this sentiment: the Saints Row devs, DICE, Rockstar, etc. Game development is also easier than it's ever been, with a serious drop in the level of technical competence required to put a game out. I didn't get the opportunity to program games during the PS1/PS2 days, but when I compare, for example, working with XNA back in the 360 days to Unity or UE5, it's laughably simpler.
Yes, I have been saying this a lot. I've followed a lot of game devs on Twitter over the last 10+ years, and most of them left game dev because of a combination of crunch, pay, and a toxic gaming community. Their talents are simply appreciated a lot more at other companies. And then you have engines like UE that make it possible for artists to dabble in programming without really knowing the inner workings of the engine.

I definitely think the low resolutions this generation come from games built around 30fps and then retroactively given 60fps modes later in development. If you targeted 60 as the only way to play the game on a console, I am pretty sure the base assets and rendering features would be different.
Yeah, the new rendering techniques do not scale down very well. Think of Nanite, ray tracing, or any new global illumination solution like Lumen. Those techniques have a big base cost that you cannot get rid of easily. For 60fps at a high resolution you would want to go back to static lighting, but that would be really hard, because now neither your assets nor your level design take that into account. Doing so would require a different workflow, which would add a lot of cost to the project.
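As a toy illustration of that base-cost problem (the numbers are made up for illustration, not profiled from any real engine):

```python
# Toy model of why a big fixed base cost hurts 60fps modes:
#     frame_time = base_cost + per_megapixel_cost * megapixels

def frame_time_ms(base_ms, per_mpixel_ms, width, height):
    return base_ms + per_mpixel_ms * (width * height) / 1e6

# "small base cost": mostly baked lighting and traditional LODs.
# "large base cost": Nanite/Lumen/RT-style features with a big resolution-independent cost.
for name, base_ms in [("small base cost", 3.0), ("large base cost", 12.0)]:
    t_1440p = frame_time_ms(base_ms, 8.0, 2560, 1440)
    t_720p = frame_time_ms(base_ms, 8.0, 1280, 720)
    print(f"{name}: 1440p = {t_1440p:.1f} ms, 720p = {t_720p:.1f} ms")

# small base cost: 1440p = 32.5 ms, 720p = 10.4 ms -> dropping to 720p easily hits 16.7 ms
# large base cost: 1440p = 41.5 ms, 720p = 19.4 ms -> still over budget, so resolution has
# to fall below 720p (or features have to be cut) to reach 60fps.
```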
 
On X360/PS3, the higher resolution was 720p, and for more complex graphics it was about 600p. That's roughly a 50% pixel difference between the two extremes, looking at the grand average. And we were playing on 32-inch TVs, on average.
I don't think this is correct. Top PS3 exclusives (Uncharted 3, The Last of Us, God of War 3, God of War: Ascension, Killzone 2, Killzone 3) were 720p; GT6 was even 1440x1080, and that was a racing game with top graphics on PS3, running at 60fps. Top Xbox 360 exclusives (Gears of War 3, Gears of War: Judgment, Halo 4, Forza Motorsport 4, Forza Horizon) were 720p. Almost all top multiplatform games for those consoles were 720p or very close to it.

About 540p on XSX/PS5: I said some five years ago (and many disagreed with me) that all this simplification in hardware and software would not lead to anything positive. I'm not trying to say I know better than anyone else, but I had and still have this opinion. When consoles were harder to program, and there weren't many (or any) automatic processes in development, developers could achieve better results. They worked hard, but they improved their skills; when programmers weren't able to achieve something, designers or artists were, and that led to more interesting and even experimental ideas in games (of course the results weren't always great :) but still). It's like in my work: there are moments when I have to do something the hard way, but the results afterwards are better, and when everything goes through a standard system it's just routine, and sometimes the results are basic or even worse. :)
 
I don't think this is correct. Top PS3 exclusives (Uncharted 3, The Last of Us, God of War 3, God of War: Ascension, Killzone 2, Killzone 3) were 720p; GT6 was even 1440x1080, and that was a racing game with top graphics on PS3, running at 60fps. Top Xbox 360 exclusives (Gears of War 3, Gears of War: Judgment, Halo 4, Forza Motorsport 4, Forza Horizon) were 720p. Almost all top multiplatform games for those consoles were 720p or very close to it.

About 540p on XSX/PS5: I said some five years ago (and many disagreed with me) that all this simplification in hardware and software would not lead to anything positive. I'm not trying to say I know better than anyone else, but I had and still have this opinion. When consoles were harder to program, and there weren't many (or any) automatic processes in development, developers could achieve better results. They worked hard, but they improved their skills; when programmers weren't able to achieve something, designers or artists were, and that led to more interesting and even experimental ideas in games (of course the results weren't always great :) but still). It's like in my work: there are moments when I have to do something the hard way, but the results afterwards are better, and when everything goes through a standard system it's just routine, and sometimes the results are basic or even worse. :)
Yes, that is correct; the 720p-600p analysis refers to the two extreme resolution ranges of the time, but the most beautiful games were indeed native 720p.

However, this further supports my theory regarding the excessive volatility of current generation resolutions compared to previous trends.
 
None of those PS360 games had to support both 60 and 30fps modes. That's a huge performance range to cover on a fixed hardware platform. Whatever you do, you're going to have a big drop in resolution for a 60fps mode.

Additionally, with more dynamic elements - particularly lighting and RT reflections - you're exposing yourself to even more variability between areas. One frame rate target and baked lighting makes hitting a standard resolution (like 720p or 1080p) far simpler.
 
Yeah, the new rendering techniques do not scale down very well. Think of Nanite, ray tracing, or any new global illumination solution like Lumen. Those techniques have a big base cost that you cannot get rid of easily. For 60fps at a high resolution you would want to go back to static lighting, but that would be really hard, because now neither your assets nor your level design take that into account. Doing so would require a different workflow, which would add a lot of cost to the project.
An important element of this is that they do not use an engine designed for consoles. I acknowledge the great capabilities of UE5, but it is clear that with the high image resolution required for today's 4K TVs, this set of features can only be served with an expensive PC.

And it really doesn't scale well. If it's so easy to develop with it and it's almost automated, then why isn't there, for example, a solution to make the Nanite system more scalable for consoles and thus display relatively less geometry to achieve a higher image resolution with 60 FPS? I only say this because there are a few UE5 games that do not use Nanite or Lumen and as a result run in 4K(ish)/60FPS on current consoles.

That is why game engines written specifically for consoles would have an important role, more than ever.
 
An important element of this is that they do not use an engine designed for consoles. I acknowledge the great capabilities of UE5, but it is clear that with the high image resolution required for today's 4K TVs, this set of features can only be served with an expensive PC.

And it really doesn't scale well. If it's so easy to develop with it and it's almost automated, then why isn't there, for example, a solution to make the Nanite system more scalable for consoles and thus display relatively less geometry to achieve a higher image resolution with 60 FPS? I only say this because there are a few UE5 games that do not use Nanite or Lumen and as a result run in 4K(ish)/60FPS on current consoles.

That is why game engines written specifically for consoles would have an important role, more than ever.
Like I said, Nanite is a technique with an up-front cost that you can't scale down, but in return you get dense geometry and invisible LOD transitions. You also get relatively low VRAM usage, because all the LODs are streamed at cluster level instead of mesh level (something underappreciated about Nanite that lets games like Hellblade 2 work well with 8GB of VRAM). If you don't need dense geometry or invisible LOD transitions, you can choose not to use Nanite.

The reason teams use Nanite is that they can get the same result faster, so you can have smaller or less experienced teams making games that look like they were made by AAA teams (though in practice they sometimes miss on art direction and because of that still look worse). It just takes time and skill to make low-poly geometry and its LODs look good.
The same can be said for going all-in on dynamic lighting. Not needing to think about correct lightmap UVs and density speeds up asset creation a lot, not to mention it makes you far more flexible in what can be done with lighting.
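To illustrate the cluster-level streaming point above, here is a hypothetical sketch of cluster-granular LOD residency; it is not Unreal's actual Nanite data structure or API, just the general idea of why keeping only the needed cluster LODs resident keeps VRAM low:

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    cluster_id: int
    lod_sizes_kb: list        # size of each LOD level in KB, finest first
    resident_lod: int = -1    # -1 means nothing streamed in yet

def required_lod(distance_m, num_lods):
    # crude distance-based pick; a real system would use projected screen-space error
    return min(num_lods - 1, int(distance_m // 20))

def stream(clusters, distances_m):
    # keep exactly one LOD resident per cluster, chosen from the view distance
    resident_kb = 0
    for cluster, distance in zip(clusters, distances_m):
        cluster.resident_lod = required_lod(distance, len(cluster.lod_sizes_kb))
        resident_kb += cluster.lod_sizes_kb[cluster.resident_lod]
    return resident_kb

# A mesh split into 1000 clusters, each with 4 LOD levels. Most of the scene is
# far away, so mostly coarse cluster LODs end up resident instead of whole
# high-detail meshes having to be loaded.
clusters = [Cluster(i, [256, 64, 16, 4]) for i in range(1000)]
distances = [5.0] * 50 + [30.0] * 250 + [80.0] * 700
print(f"resident geometry: {stream(clusters, distances) / 1024:.1f} MB")
```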
 
The performance target that a developer sets, their general technical talent, their willingness to spend time optimizing for a specific platform, their familiarity with the engine, and their ability to make under-the-hood tweaks are more important for the final result than whether or not the engine is multi-platform or designed for consoles. When Gears of War: E-Day releases, I doubt there will be anyone complaining that The Coalition should have used an engine designed specifically for Xbox instead of Unreal.

I remember EA taking a lot of flak for making its studios adopt Frostbite, and how ME: Andromeda's issues were blamed on that decision. But now that BioWare has become experienced with the engine, adapted it to their own needs, and presumably created tooling and workflows to support RPG development with Frostbite, they aren't having issues anymore. If you set reasonable goals for performance and fidelity, and have enough time and talent, then nearly any modern engine can achieve good results. Some engines will require less time investment than others of course.
 
Like I said, Nanite is a technique with an up-front cost that you can't scale down, but in return you get dense geometry and invisible LOD transitions. You also get relatively low VRAM usage, because all the LODs are streamed at cluster level instead of mesh level (something underappreciated about Nanite that lets games like Hellblade 2 work well with 8GB of VRAM). If you don't need dense geometry or invisible LOD transitions, you can choose not to use Nanite.

The reason teams use Nanite is that they can get the same result faster, so you can have smaller or less experienced teams making games that look like they were made by AAA teams (though in practice they sometimes miss on art direction and because of that still look worse). It just takes time and skill to make low-poly geometry and its LODs look good.
The same can be said for going all-in on dynamic lighting. Not needing to think about correct lightmap UVs and density speeds up asset creation a lot, not to mention it makes you far more flexible in what can be done with lighting.
That's why I mentioned that there are undoubtedly advantages to UE5 as an engine, mostly on the development side. As I mentioned earlier, now that games built on such a modern GPU-driven pipeline are appearing in large numbers, the difference is obvious. It sounds good that there is almost infinite geometry in the picture, but it is not so good that graphics built with huge geometry end up displayed on the console at too low a resolution, considering the general size of today's TVs.

You mentioned that the Nanite technology cannot currently be scaled down, that is, it only works as an on/off switch and therefore requires a lot of power when turned on. Is there a way to change this in the future? Are there any improvements in this direction that could benefit cheaper console hardware?
 
You mentioned that the Nanite technology cannot currently be scaled down, that is, it only works as an on/off switch and therefore requires a lot of power when turned on. Is there a way to change this in the future? Are there any improvements in this direction that could benefit cheaper console hardware?
It's not that Nanite can't be scaled down, but rather that one of the main points of Nanite is to scale according to the resolution. If you want more performance with Nanite then you turn the resolution down. If you want higher resolution and are willing to sacrifice geometric detail to achieve it then you're better off using traditional LOD meshes instead.
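A rough sketch of that resolution-driven scaling, assuming a screen-space-error refinement rule (an illustrative model, not Epic's actual cluster selection code):

```python
import math

# Cluster refinement driven by projected screen-space error: fewer output
# pixels means the cluster hierarchy does not need to be refined as deeply.

def projected_error_px(error_m, distance_m, fov_deg, screen_height_px):
    # project a world-space error at a given distance onto the screen, in pixels
    return (error_m / distance_m) * screen_height_px / (2 * math.tan(math.radians(fov_deg) / 2))

def refinement_levels(base_error_m, distance_m, fov_deg, screen_height_px, threshold_px=1.0):
    # keep refining (error roughly halves per level) until the error is about sub-pixel
    error, levels = base_error_m, 0
    while projected_error_px(error, distance_m, fov_deg, screen_height_px) > threshold_px:
        error /= 2
        levels += 1
    return levels

for height_px in (2160, 1440, 1080, 720, 540):
    print(f"{height_px}p -> {refinement_levels(0.5, 25.0, 70.0, height_px)} refinement levels")
# Fewer output pixels -> fewer refinement levels -> fewer clusters to rasterize,
# which is exactly the "turn the resolution down for performance" trade-off.
```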
 