Digital Foundry Article Technical Discussion [2024]

Different people have different values. Some are happy to sacrifice image fidelity in order to have other graphical features at a higher level, such as lighting, shadowing, animation, and/or framerate stability.
This would be a reasonable argument... but if we look at the comparison of high resolution vs. more complex graphics in games on previous-generation consoles, we find there was not nearly as much of a difference in image quality then as there is now.

On X360/PS3, the higher resolution was 720p and the more complex graphics were around 600p. That's roughly a 50% pixel difference between the two extremes, looking at the grand average. We played these on 32-inch TVs on average.

On XBO/PS4, the higher resolution was 900p/1080p and the more complex graphics were around 720p. That's roughly a 100% pixel difference between the two extremes, but I remember many people complained even then about games being only 720p... We played these on 40-inch TVs on average.

Now, for XSX/PS5, 1440p or native 4K is typically the higher resolution and 720p (or less!) is in principle where the more complex graphics end up. I can't even describe the percentage difference; it's colossal, even with FSR! We play these on 65-inch TVs on average.
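For concreteness, here is a rough back-of-the-envelope comparison of those pixel counts. The exact sub-HD resolutions varied per game, so the 600p entry is a representative assumption rather than any specific title:

```python
# Rough pixel counts for the "higher resolution" vs. "more complex graphics"
# extremes of each generation. The 600p figure is a representative assumption;
# actual sub-HD games varied.
resolutions = {
    "600p":  (1024, 600),
    "720p":  (1280, 720),
    "900p":  (1600, 900),
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "2160p": (3840, 2160),   # native 4K
}
px = {name: w * h for name, (w, h) in resolutions.items()}

print(px["720p"]  / px["600p"])    # ~1.5x  : X360/PS3 extremes
print(px["1080p"] / px["720p"])    # 2.25x  : XBO/PS4 extremes (900p gives ~1.56x)
print(px["2160p"] / px["720p"])    # 9.0x   : XSX/PS5 extremes, worse still with sub-720p drops
```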

Perceptible?
 
Different people have different values. Some are happy to sacrifice image fidelity in order to have other graphical features at a higher level, such as lighting, shadowing, animation, and/or framerate stability.

I refer you to Post 1 on this thread and point out that framing your posts the way you have here will lead you to lose posting rights in this thread. Discuss technically or not at all.
Fair, I'll stick to discussing the technical aspects of this game. The game's image quality is poor. You can see it on YouTube, and FSR is not doing a good job at all; it's very visible. Personally speaking, 1080p is the lowest acceptable base resolution for DLSS in my eyes, and DLSS is the best image upscaler. Even at a 1080p input resolution, DLSS is still mostly average and its flaws can clearly be seen. So how is a modified FSR with an input resolution of 720p, with drops to 504p, going to be good?

I don't think this is a case of sacrificing fidelity for other graphical features. That argument might be valid in games that look significantly better and are then stripped of features to become more performant. I don't think the base optimization for this game is very good, based on the raster visuals. Many UE5 games received a lot of flak for their performance and low base input resolutions, but we could visibly see massive improvements. Many of them look a clear generation ahead of Dragon Age yet deliver performance in the same ballpark as Dragon Age. Dragon Age delivers a game with questionable image quality and performance that is not locked during gameplay. The geometry and assets have their roots in last gen, and there are lots of PS4 games with better animation.
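For reference, a quick sketch of the reconstruction load those input resolutions imply, assuming a 3840x2160 output target (the 4K output is my assumption; the 504p/720p/1080p inputs are the figures above):

```python
# Upscale factors an upscaler has to cover for a 3840x2160 output target
# (the 4K output is an assumption; the input resolutions are those discussed above).
output_w, output_h = 3840, 2160
inputs = {"1080p": (1920, 1080), "720p": (1280, 720), "504p": (896, 504)}

for name, (w, h) in inputs.items():
    per_axis = output_w / w                       # linear scale factor per axis
    total = (output_w * output_h) / (w * h)       # output pixels per rendered pixel
    print(f"{name}: {per_axis:.1f}x per axis, {total:.1f}x pixels to reconstruct")
# 1080p -> 2.0x / 4.0x (DLSS Performance territory), 720p -> 3.0x / 9.0x, 504p -> ~4.3x / ~18.4x
```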
 
What would it actually take to get photorealistic graphics at 1080p and 2160p? Tim Sweeney said 40 TFs, but that seems far off the mark save in some specific cases like body-cam urban scenes. Is it a case of the computational power being wrongly directed, or has the workload of reality been grossly underestimated? Given the complete failure at things like accurate foliage, which we are nowhere near solving, and truly natural human behaviours, and solid, correct illumination, and realistic fire and smoke, and the many, many flops of the ML that we're relying on to solve some of these, the actual workload to create something like watching a film in real time seems a long, long way off, if even possible. We inch ever closer, but the closer we get, the more the shortcomings stand out.
I actually think that with the advent of LLMs and machine learning we have a shot at reaching photorealism quickly; AI will be the shortcut here.


As we've seen with the videos showing game scenes converted by AI into photorealistic scenes full of lifelike characters, hair physics, cloth simulation, realistic lighting, shadowing, and reflections, we have a glimpse into the future. There are many shortcomings, of course, but they will be fixed once the AI is closely integrated into the game engine.

The AI model will have access to 3D data, full world-space coordinates, lighting information, and various other details instead of just 2D video data; this will be enough to boost its accuracy and minimize the amount of inference it has to do. We will also have faster and smarter models requiring less time to do their thing.
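Purely as an illustration of what "the model has access to 3D data" could look like in practice, here is a minimal sketch of a network that consumes G-buffer-style channels (world-space position, normal, albedo, direct lighting) rather than a finished 2D frame. The channel layout, layer sizes, and names are all made up for the example; this is not how any shipping system works.

```python
import torch
import torch.nn as nn

class GBufferEnhancer(nn.Module):
    """Illustrative only: a tiny CNN conditioned on engine-side G-buffer data
    (world-space position, normal, albedo, direct lighting = 12 channels)
    instead of a finished 2D frame. All sizes here are arbitrary assumptions."""
    def __init__(self, in_channels: int = 12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),   # enhanced RGB output
        )

    def forward(self, gbuffer: torch.Tensor) -> torch.Tensor:
        # gbuffer: (N, 12, H, W) -> enhanced frame (N, 3, H, W)
        return self.net(gbuffer)

# Usage sketch: one 720p frame's worth of (random) G-buffer data.
model = GBufferEnhancer()
enhanced = model(torch.randn(1, 12, 720, 1280))   # -> torch.Size([1, 3, 720, 1280])
```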

I can see future GPUs having much larger matrix cores, to the point of outnumbering the regular FP32 cores, and CPUs will also have bigger NPUs to assist. This would be enough to do 720p @ 60 fps rendering, maybe even 1080p30 or 1080p60 if progress allows it.

Next, this will be upscaled, denoised, and frame-generated up to the desired fidelity.

All in all, this path is -at least in theory- a much quicker one than waiting for traditional rendering to mature and become fast enough, which is getting ever harder and taking longer. We simply lack the transistor budget to scale up the horsepower required for traditional rendering to reach photorealism, and to do so at previously feasible economic levels.

Even now, traditional rendering faces huge challenges, chief among them code being limited by the CPU and the slow progress of CPUs themselves. Something has to give to escape these seemingly inescapable hurdles that have existed for far too long.

So, playing with the ratios of these transistor budgets to allow a bigger machine learning portion relative to the traditional portion would be the smart thing to do, especially when it grants access to entirely new visual capabilities.
 
I actually think that with the advent of LLMs and machine learning we have a shot at reaching photorealism quickly; AI will be the shortcut here.
I'm not so convinced. It's always the case with prototyping games that you get something fabulous in a weekend, but all the effort needed to make the polished final product takes forever. I think these quick results show promise, but the end result is actually a long way off and the imagined potential isn't within reach. At best, subdividing the game into aspects the ML can solve, like cloth dynamics, might work. I've too much life experience to look at these current results and extrapolate a near-term future of the best we can imagine! The magic bullets never are, and what we always end up with is an awkward compromise of glitchy fudges, no matter how much power we throw at it.
 
720p internal with resolution drops to 540p, without any RT, on PS5/XSX sounds like a joke that isn't funny. What's with the new generation of game devs and their performance optimization skills?
Obviously, all these big studios filled with professional developers who do this stuff for a living have all become incompetent at the same time, like some crazy coincidence. That must be it.
 
Obviously, all these big studios filled with professional developers who do this stuff for a living have all become incompetent at the same time, like some crazy coincidence. That must be it.
I’m not really sure how else to put it though lol, this level of image quality just isn’t acceptable. Developers are obviously packing in too many graphical effects if they need to go down to SD-level resolutions on supposedly ‘4K’ consoles.
 
There's a very obvious reason why games are being rendered at such low internal resolutions: 60FPS. I bet that for many of these games development started with a 30FPS target only and the budget for geometry, materials, and shaders was set accordingly. Then a mandate came for a 60FPS mode late in the development cycle, so developers just used FSR2 and cranked down the resolution to hit that target.
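As a rough illustration of why resolution is the lever that gives when a 60 FPS mode is added late: the frame budget halves, but only the resolution-dependent part of the frame shrinks with pixel count. All the millisecond figures below are assumptions for the example, not measurements from any game:

```python
# Illustrative frame-time arithmetic: 30 fps allows 33.3 ms per frame, 60 fps
# allows 16.7 ms. Only the resolution-bound portion of the frame scales with
# pixel count; the fixed and per-resolution costs below are assumed values.
budget_60 = 1000 / 60                 # 16.7 ms
fixed_cost = 6.0                      # ms of resolution-independent work (assumed)
pixel_cost_at_1440p = 26.0            # ms of resolution-bound work at 1440p (assumed)

pixels_1440p = 2560 * 1440
for name, (w, h) in [("1440p", (2560, 1440)), ("1080p", (1920, 1080)), ("720p", (1280, 720))]:
    frame_ms = fixed_cost + pixel_cost_at_1440p * (w * h) / pixels_1440p
    fits = "fits" if frame_ms <= budget_60 else "misses"
    print(f"{name}: {frame_ms:.1f} ms  ({fits} the {budget_60:.1f} ms 60 fps budget)")
# 1440p: 32.0 ms, 1080p: ~20.6 ms, 720p: 12.5 ms -> only ~720p fits without cutting content
```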
 
Obviously, all these big studios filled with professional developers who do this stuff for a living have all become incompetent at the same time, like some crazy coincidence. That must be it.
Oh yeah, sure, all big studios are filled only with amazing professionals with years of experience, and all projects are perfectly managed ;)
 
The hair physics needs to be toned down just a bit, though: it looks like black, cooked, slippery, heavy ramen; there's no friction, it looks way heavier than normal hair, etc. Still, it looks great in animations, as it gives them a realistic look (even though the purposely exaggerated head movements/turns to show off the hair movements are way too comical, IMHO).

Oh, and the female dwarf is really into the male dwarf at 5:18.
Yeah, it's a bit too Fabio-esque, lol. But it would be hard to notice if it weren't; fantasy hair, lol.
 
What would it actually take to get photorealistic graphics at 1080p and 2160p? Tim Sweeney said 40 TFs, but that seems far off the mark save in some specific cases like body-cam urban scenes. Is it a case of the computational power being wrongly directed, or has the workload of reality been grossly underestimated? Given the complete failure at things like accurate foliage, which we are nowhere near solving, and truly natural human behaviours, and solid, correct illumination, and realistic fire and smoke, and the many, many flops of the ML that we're relying on to solve some of these, the actual workload to create something like watching a film in real time seems a long, long way off, if even possible. We inch ever closer, but the closer we get, the more the shortcomings stand out.
Assuming we aren't just referring to rendering a single static scene, I would say we would need a GPU with at least 100x the performance of a 4090. Maybe future advances in software will bring that down dramatically though.

WRT Dragon Age, I'm probably in the minority who finds it underwhelming visually. The hair is nice but that extreme GPU demand is not translating efficiently to the final output.

Obviously, all these big studios filled with professional developers who do this stuff for a living have all become incompetent at the same time, like some crazy coincidence. That must be it.
There is still a small pool of developers who achieve much more, though.
 
Oh yeah, sure, all big studios are filled only with amazing professionals with years of experience, and all projects are perfectly managed ;)
Or you realize that a changing of the guard has happened at a lot of studios, with lots of developers leaving or retiring. There's lots of evidence to support this sentiment: Saints Row devs, DICE, Rockstar, etc. Game development is also easier than it's ever been, with a serious drop in the level of technical competence required to put a game out. I didn't get the opportunity to program games during the PS1/PS2 days, but when I compare, for example, working with XNA back in the 360 days to Unity or UE5, it's laughably simpler.
 
Hair rendering looks great. Though it might have exaggerated movement, the hair has nice volume and, most importantly, it looks consistent. Can't think of a game that does it better.
 
I definitely think the low resolutions this generation come from games being built around 30 fps and then retroactively given 60 fps modes later in development. If you targeted 60 as the only way to play the game on a console, I am pretty sure the base assets and rendering features would be different.
 
Or you realize that a changing of the guard has happened at a lot of studios, with lots of developers leaving or retiring. There's lots of evidence to support this sentiment: Saints Row devs, DICE, Rockstar, etc. Game development is also easier than it's ever been, with a serious drop in the level of technical competence required to put a game out. I didn't get the opportunity to program games during the PS1/PS2 days, but when I compare, for example, working with XNA back in the 360 days to Unity or UE5, it's laughably simpler.
Yes, I have been saying this a lot. I've followed a lot of game devs on Twitter over the last 10+ years, and most of them left game dev because of a combination of crunch, pay, and a toxic gaming community. Their talents are simply appreciated a lot more at other companies. And then you have engines like UE that make it possible for artists to dabble in programming without really knowing the inner workings of the engine.

I definitely think the low resolutions this generation come from games being built around 30 fps and then retroactively given 60 fps modes later in development. If you targeted 60 as the only way to play the game on a console, I am pretty sure the base assets and rendering features would be different.
Yeah, the new rendering techniques do not scale down very well. Think of Nanite, ray tracing, or any new global illumination solution like Lumen. Those techniques have a big base cost that you cannot get rid of easily. For 60 fps at a high resolution you would want to go back to static lighting, but that would be really hard because now neither your assets nor your level design take that into account. Doing so would require a different workflow, which would add a lot of cost to the project.
 
On X360/PS3, the higher resolution was 720p and the more complex graphics were around 600p. That's roughly a 50% pixel difference between the two extremes, looking at the grand average. We played these on 32-inch TVs on average.
I don't think this is correct. Top PS3 exclusives (Uncharted 3, The Last of Us, God of War 3, God of War Ascension, Killzone 2, Killzone 3) were 720p; GT6 was even 1440x1080, and that was a racing game with top graphics on PS3, running at 60 fps. Top Xbox 360 exclusives (Gears of War 3, Gears of War Judgment, Halo 4, Forza Motorsport 4, Forza Horizon) were 720p. Almost all top multiplatform games for those consoles were 720p or very close to that.

About 540p on XSX/PS5: I said some 5 years ago (and many disagreed with me) that all this simplification in hardware and software would not lead to anything positive. I'm not trying to say I know better than anyone else, but I had and still have this opinion. When consoles were harder to program, and there weren't many (or any) automatic processes in development, developers could achieve better results. They worked hard, but they improved their skills; when programmers weren't able to achieve something, designers or artists were, and that led to more interesting and even experimental ideas in games (of course the results weren't always cool :) but still). It's like in my own work: there are moments when I have to do something the hard way, but the results afterwards are better, while when everything goes by the standard system it's just routine, and sometimes the results are basic or even worse. :)
 
I actually think that with the advent of LLMs and machine learning we have a shot at reaching photorealism quickly; AI will be the shortcut here.

As we've seen with the videos showing game scenes converted by AI into photorealistic scenes full of lifelike characters, hair physics, cloth simulation, realistic lighting, shadowing, and reflections, we have a glimpse into the future. There are many shortcomings, of course, but they will be fixed once the AI is closely integrated into the game engine.

The AI model will have access to 3D data, full world-space coordinates, lighting information, and various other details instead of just 2D video data; this will be enough to boost its accuracy and minimize the amount of inference it has to do. We will also have faster and smarter models requiring less time to do their thing.

I can see future GPUs having much larger matrix cores, to the point of outnumbering the regular FP32 cores, and CPUs will also have bigger NPUs to assist. This would be enough to do 720p @ 60 fps rendering, maybe even 1080p30 or 1080p60 if progress allows it.

Next, this will be upscaled, denoised, and frame-generated up to the desired fidelity.

All in all, this path is -at least in theory- a much quicker one than waiting for traditional rendering to mature and become fast enough, which is getting ever harder and taking longer. We simply lack the transistor budget to scale up the horsepower required for traditional rendering to reach photorealism, and to do so at previously feasible economic levels.

Even now, traditional rendering faces huge challenges, chief among them code being limited by the CPU and the slow progress of CPUs themselves. Something has to give to escape these seemingly inescapable hurdles that have existed for far too long.

So, playing with the ratios of these transistor budgets to allow a bigger machine learning portion relative to the traditional portion would be the smart thing to do, especially when it grants access to entirely new visual capabilities.
Yeah tbh I really don’t think that video looks good at all. Most of these ‘AI re-imagined’ games look like stylistic messes.

I feel like people forget games are art and therefore you need more than an artificial intelligence coming up with all the artwork. Chasing photorealism at the expense of the art form produces bad results.
 