Digital Foundry Article Technical Discussion [2024]

I don't think this is correct. Top PS3 exclusives (Uncharted 3, The Last of Us, God of War 3, God of War: Ascension, Killzone 2, Killzone 3) were 720p; GT6 was even 1440x1080, and that was a racing game with top graphics on PS3 running at 60 fps. Top Xbox 360 exclusives (Gears of War 3, Gears of War: Judgment, Halo 4, Forza Motorsport 4, Forza Horizon) were 720p. Almost all top multiplatform games for those consoles were 720p or very close to it.

About 540p on XSX/PS5: I said some five years ago (and many disagreed with me) that all this simplification in hardware and software would not lead to anything positive. I'm not trying to say I know better than anyone else, but I had and still have this opinion. When consoles were harder to program, and there weren't many (or any) automatic processes in development, developers could achieve better results. They worked hard but improved their skills; when programmers couldn't achieve something, designers or artists could, and that led to more interesting and even experimental ideas in games (of course, the results weren't always great :) but still). It's like my own work: there are moments when I have to do something the hard way, but the result afterwards is better, and when everything goes through the standard system, it's just routine and sometimes the results are basic or even worse. :)
Yes, that is correct; the 720p-600p analysis refers to the two extremes of the resolution range at the time, but the most beautiful games were indeed native 720p.

However, this further supports my theory about the excessive volatility of current-generation resolutions compared to previous trends.
 
None of those PS360 games had to support both 60 fps and 30 fps modes. That's a huge performance range to cover on fixed hardware. Whatever you do, you're going to need a big drop in resolution for a 60 fps mode.

Additionally, with more dynamic elements - particularly lighting and RT reflections - you're exposing yourself to even more variability between areas. One frame-rate target and baked lighting make hitting a standard resolution (like 720p or 1080p) far simpler.
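The feedback loop consoles actually use for this (drop the internal resolution until the frame budget is met) can be sketched as a tiny controller. This is a toy illustration: the 16.6 ms budget is the real 60 fps target, but the gain, clamps, and frame timings are invented numbers, not any engine's actual values.

```python
# Toy dynamic-resolution controller: nudge the render scale so the
# measured GPU frame time converges on the frame budget.

def update_render_scale(scale, gpu_ms, budget_ms=16.6,
                        min_scale=0.5, max_scale=1.0, gain=0.1):
    """Return a new resolution scale from the last frame's GPU time."""
    # GPU cost is roughly proportional to pixel count, i.e. scale**2,
    # so correct the scale by the square root of the time ratio.
    target = scale * (budget_ms / gpu_ms) ** 0.5
    # Only move a fraction of the way there to avoid oscillation.
    scale += gain * (target - scale)
    return max(min_scale, min(max_scale, scale))

scale = 1.0
for gpu_ms in [22.0, 21.0, 20.0, 19.0, 18.5]:  # a heavy stretch of frames
    scale = update_render_scale(scale, gpu_ms)
print(round(scale, 3))  # scale has drifted below 1.0 toward the budget
```

A real engine would drive this from GPU timestamp queries and quantize the scale to whatever steps its upscaler supports.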
 
Yeah, the new rendering techniques don't scale down very well. Think of Nanite, ray tracing, or any new global illumination solution like Lumen. Those techniques have a big base cost that you can't get rid of easily. For 60 fps at a high resolution you would like to go back to static lighting, but that would be really hard, because now neither your assets nor your level design take that into account. Doing so would require a different workflow, which would add a lot of cost to the project.
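The "big base cost" point can be made concrete with a toy frame-time model: a fixed, resolution-independent slice plus a per-pixel slice. The 8 ms base and 6 ms-per-megapixel figures below are made up for illustration, not measured costs of Nanite or Lumen.

```python
# Toy frame-time model: total = fixed base cost (GI, RT, geometry setup)
# plus a cost proportional to pixels shaded. Numbers are invented.

BASE_MS = 8.0        # resolution-independent cost per frame
PER_MPIX_MS = 6.0    # cost per megapixel shaded

def frame_ms(width, height):
    mpix = width * height / 1e6
    return BASE_MS + PER_MPIX_MS * mpix

native = frame_ms(3840, 2160)   # ~4K
low    = frame_ms(1920, 1080)   # a quarter of the pixels
print(round(native, 1), round(low, 1))  # 57.8 20.4
# Pixel count dropped 4x, but frame time fell less than 3x,
# because the base cost doesn't shrink with resolution.
```

This is why dropping resolution alone can never buy back the full cost of these techniques: the base term stays.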
An important element of this is that they do not use an engine designed for consoles. I acknowledge the great capabilities of UE5, but it is clear that with the high image resolution required by today's 4K TVs, this feature set can only be served by an expensive PC.

And it really doesn't scale well. If it's so easy to develop with and so heavily automated, then why isn't there, for example, a way to make the Nanite system more scalable on consoles, displaying relatively less geometry in order to achieve a higher image resolution at 60 FPS? I only say this because there are a few UE5 games that don't use Nanite or Lumen and, as a result, run at 4K(ish)/60 FPS on current consoles.

That is why game engines written specifically for consoles would play an important role, now more than ever.
 
An important element of this is that they do not use an engine designed for consoles. I acknowledge the great capabilities of UE5, but it is clear that with the high image resolution required by today's 4K TVs, this feature set can only be served by an expensive PC.

And it really doesn't scale well. If it's so easy to develop with and so heavily automated, then why isn't there, for example, a way to make the Nanite system more scalable on consoles, displaying relatively less geometry in order to achieve a higher image resolution at 60 FPS? I only say this because there are a few UE5 games that don't use Nanite or Lumen and, as a result, run at 4K(ish)/60 FPS on current consoles.

That is why game engines written specifically for consoles would play an important role, now more than ever.
Like I said, Nanite is a technique with an up-front cost that you can't scale down, but in return you get dense geometry and invisible LOD transitions. It also has relatively low VRAM usage, because all the LODs are streamed at cluster level instead of mesh level (an underappreciated part of Nanite that lets games like Hellblade 2 work well with 8 GB of VRAM). If you don't need dense geometry or invisible LOD transitions, you can choose not to use Nanite.

The reason teams use Nanite is that they can get the same result faster, so you can have smaller or less experienced teams making games that look like they were made by AAA teams (though in practice they sometimes miss on art direction and because of that still look worse). It just takes time and skill to make low-poly geometry and its LODs look good.
The same can be said for going all-in on dynamic lighting. Not needing to think about correct lightmap UVs and density speeds up making assets a lot, not to mention making you far more flexible in what can be done with lighting.
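As a concrete example of the bookkeeping that baked lighting imposes and dynamic lighting removes, here is a sketch of picking a lightmap size per mesh from its surface area and a target texel density. The density, cap, and function are invented for illustration, not any engine's actual rules.

```python
# Sketch: choose a power-of-two lightmap size per mesh so that the
# baked texel density roughly matches a target. Numbers are invented.

def lightmap_size(surface_area_m2, texels_per_meter=16, max_size=1024):
    """Smallest power-of-two lightmap covering the mesh at the target density."""
    # Texels needed along one edge of a square lightmap for this mesh.
    texels = (surface_area_m2 ** 0.5) * texels_per_meter
    size = 4
    while size < texels and size < max_size:
        size *= 2
    return size

print(lightmap_size(2.0))    # small prop -> 32
print(lightmap_size(400.0))  # large floor mesh -> 512
```

Every static mesh in a baked-lighting game needs a decision like this (plus overlap-free UVs), which is exactly the per-asset work a fully dynamic pipeline skips.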
 
Yeah tbh I really don’t think that video looks good at all. Most of these ‘AI re-imagined’ games look like stylistic messes.
The funniest parts are where the AI fancies itself a Surrealist. You have motorbikes turning into benches, fragments of cars on the sidewalk as features, and, most unexpectedly, the realtime gender transformation at the end! It can't even maintain colour consistency across time. As the model is fed more data it'll become more robust, but it'll also presumably grow exponentially in complexity. Will it get to a point where it's just unwieldy, taking too much time to train while still producing inconsistent results? I guess when we get perfect, artefact-free upscaling from AI, we'll have a first step towards AI-generated realtime visuals.
 
It can't even maintain colour consistency across time.
This is an artificial limitation: the service only accepts 10-second clips, which is why things change every 10 seconds. This wouldn't be a problem at all if the 10-second rule didn't exist.

Yeah tbh I really don’t think that video looks good at all. Most of these ‘AI re-imagined’ games look like stylistic messes.
It's a very basic, generic AI video generator not intended for game use at all, and it's not even state of the art, yet we are getting these perfectly serviceable results; many even call it amazing, especially when you see the results for GTA IV, Metro Exodus, and others. Imagine a more gaming-specific model.
 
The performance target that a developer sets, their general technical talent, their willingness to spend time optimizing for a specific platform, their familiarity with the engine, and their ability to make under-the-hood tweaks are more important for the final result than whether or not the engine is multi-platform or designed for consoles. When Gears of War: E-Day releases, I doubt there will be anyone complaining that The Coalition should have used an engine designed specifically for Xbox instead of Unreal.

I remember EA taking a lot of flak for making its studios adopt Frostbite, and how ME: Andromeda's issues were blamed on that decision. But now that BioWare has become experienced with the engine, adapted it to their own needs, and presumably created tooling and workflows to support RPG development with Frostbite, they aren't having issues anymore. If you set reasonable goals for performance and fidelity, and have enough time and talent, then nearly any modern engine can achieve good results. Some engines will require less time investment than others of course.
 
This is an artificial limitation: the service only accepts 10-second clips, which is why things change every 10 seconds. This wouldn't be a problem at all if the 10-second rule didn't exist.
No, it breaks across fractions of a second. At 25 seconds, the protagonist's left leg changes from navy to white. The driver is pulled out as a white blob, maybe a jacket, but after rolling on the floor they have turned into a navy top. At 48-49 seconds, a white car turns into a black car.
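For what it's worth, this kind of temporal inconsistency is easy to detect mechanically: track each frame's mean colour and flag sudden jumps, which is how you would catch a navy leg turning white mid-clip. The frames below are tiny synthetic stand-ins for decoded video, and the threshold is an arbitrary choice.

```python
# Sketch of a temporal colour-consistency check over video frames.
# A frame is a list of rows, each row a list of (r, g, b) pixels.

def mean_color(frame):
    """Average (r, g, b) over the whole frame."""
    pixels = [px for row in frame for px in row]
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

def color_jumps(frames, threshold=60.0):
    """Frame indices where the mean colour shifts more than `threshold` (0-255 scale)."""
    means = [mean_color(f) for f in frames]
    jumps = []
    for i in range(1, len(means)):
        dist = sum((a - b) ** 2 for a, b in zip(means[i], means[i - 1])) ** 0.5
        if dist > threshold:
            jumps.append(i)
    return jumps

navy  = [[(20, 25, 80)] * 4] * 4      # stable navy frames
white = [[(240, 240, 240)] * 4] * 4   # sudden switch to white
print(color_jumps([navy, navy, white, white]))  # -> [2]
```

A per-object version of the same idea (track colours inside a segmentation mask) would flag exactly the leg and car changes described above.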
It's a very basic, generic AI video generator not intended for game use at all, and it's not even state of the art, yet we are getting these perfectly serviceable results,
They aren't serviceable at all! When you first watch it, it looks incredible, but subsequent viewings reveal just how incredibly broken it is.
many even call it amazing, especially when you see the results for GTA IV, Metro Exodus, and others. Imagine a more gaming-specific model.
It is amazing! But it's also very, very far from being something people can use in games, in realtime, to make games that look like movies. This is the first 10% of the journey, and it's incredible. The next 10% requires vastly more effort, and the next 10% after that even more so. No one knows what it'll take to train and operate an ML model that produces solid results, and expectations for the results in the near future are based solely on hope. As I say, ML can't yet produce missing details with complete accuracy for upscaling, which is a far simpler job. When we can feed an upscaler with game data and image data and produce faultless images, we'll have evidence of what rich-data models can achieve.
 
Like I said, Nanite is a technique with an up-front cost that you can't scale down, but in return you get dense geometry and invisible LOD transitions. It also has relatively low VRAM usage, because all the LODs are streamed at cluster level instead of mesh level (an underappreciated part of Nanite that lets games like Hellblade 2 work well with 8 GB of VRAM). If you don't need dense geometry or invisible LOD transitions, you can choose not to use Nanite.

The reason teams use Nanite is that they can get the same result faster, so you can have smaller or less experienced teams making games that look like they were made by AAA teams (though in practice they sometimes miss on art direction and because of that still look worse). It just takes time and skill to make low-poly geometry and its LODs look good.
The same can be said for going all-in on dynamic lighting. Not needing to think about correct lightmap UVs and density speeds up making assets a lot, not to mention making you far more flexible in what can be done with lighting.
That's why I mentioned that there are undoubtedly advantages to UE5, mostly on the development side. As I mentioned earlier, now that games built on such a modern GPU-driven pipeline are appearing in large numbers, the difference is obvious. It sounds good that there is almost infinite geometry on screen, but it is not so good that graphics built with huge amounts of geometry are displayed on consoles at too low a resolution, considering the general size of today's TVs.

You mentioned that the Nanite technology cannot currently be scaled down, that is, it only works as an on/off switch and therefore requires a lot of power when turned on. Is there a way to change this in the future? Are there any improvements in this direction that could benefit cheaper console hardware?
 
You mentioned that the Nanite technology cannot currently be scaled down, that is, it only works as an on/off switch and therefore requires a lot of power when turned on. Is there a way to change this in the future? Are there any improvements in this direction that could benefit cheaper console hardware?
It's not that Nanite can't be scaled down, but rather that one of the main points of Nanite is to scale according to the resolution. If you want more performance with Nanite then you turn the resolution down. If you want higher resolution and are willing to sacrifice geometric detail to achieve it then you're better off using traditional LOD meshes instead.
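"Turn the resolution down for more performance" can be illustrated by inverting a linear pixel-cost model and solving for the largest internal resolution that fits a frame budget. The base and per-megapixel costs below are invented numbers, not measured Nanite costs.

```python
# Sketch: invert a toy frame-time model (base + per-pixel cost) to find
# the largest 16:9 internal resolution that fits a frame budget.

def max_resolution(budget_ms, base_ms=8.0, per_mpix_ms=6.0, aspect=16 / 9):
    """Largest (width, height) whose modeled frame time fits the budget."""
    mpix = (budget_ms - base_ms) / per_mpix_ms  # megapixels affordable
    if mpix <= 0:
        return None  # the base cost alone blows the budget
    height = int((mpix * 1e6 / aspect) ** 0.5)
    return int(height * aspect), height

print(max_resolution(33.3))  # 30 fps budget
print(max_resolution(16.6))  # 60 fps budget: a far lower resolution
```

Because the base cost is fixed, the 60 fps budget leaves much less than half the pixels of the 30 fps budget, which is exactly the resolution cliff between quality and performance modes discussed above.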
 
That's why I mentioned that there are undoubtedly advantages to UE5, mostly on the development side. As I mentioned earlier, now that games built on such a modern GPU-driven pipeline are appearing in large numbers, the difference is obvious. It sounds good that there is almost infinite geometry on screen, but it is not so good that graphics built with huge amounts of geometry are displayed on consoles at too low a resolution, considering the general size of today's TVs.

You mentioned that the Nanite technology cannot currently be scaled down, that is, it only works as an on/off switch and therefore requires a lot of power when turned on. Is there a way to change this in the future? Are there any improvements in this direction that could benefit cheaper console hardware?
There has been a lot of optimization to Nanite (and Lumen) from UE 5.1 to 5.5, and they continue to optimize further. But of course we are in diminishing-returns territory unless they have a eureka moment. Skeletal meshes becoming Nanite might help, since "Nanite culls Nanite": skeletal meshes will be able to cull the environment more aggressively, and in turn the skeletal meshes themselves will be culled more aggressively by the environment as well. But only time will tell if this is a net win.

But honestly, my take on UE5 is that the engine has a lot of overhead because it's a general-purpose engine. Epic has a lot of toggles in place to disable some of these features, but the more toggles you add, the harder the code is to maintain, so it's a balancing act for Epic. I bet there is some low-hanging fruit in there (for instance, the G-buffer of UE was pretty fat the last time I looked at it, though that might have changed; I haven't looked recently). The small teams that use UE just don't have the manpower, experience, or time to dive deep into the engine code and identify the features that add overhead they don't need. And because the engine has so many features and use cases, it's harder to modify, as more systems may depend on each other.

But the problem is not only UE; Unity has the same problems. Remember Cities: Skylines II? The devs used middleware for the characters. That middleware produced very high-poly characters, including teeth (who needs teeth in a top-down city builder?). It's a simple thing, but the team missed it (and they missed other stuff too) and shipped. The point is, smaller teams without custom engines have fewer in-house tools and use middleware (and of course the engine itself can also be seen as middleware) that they don't fully understand. All middleware is general purpose and will add overhead; you need to fully understand it and have time to optimize it. But most smaller teams without a custom engine will have fewer engineers, while teams with a custom engine also have in-house engineers who understand the tech and can educate their artists. Another thing that does not help is all the layoffs: teams form and fall apart, and each time they need to start almost from scratch. There is an upside to all this: thanks to these easy-to-use engines there are more devs making games than ever, so you now have a choice. If a game runs badly, just don't buy it; there are more games coming out than you have time to play anyway.
 
This is an artificial limitation: the service only accepts 10-second clips, which is why things change every 10 seconds. This wouldn't be a problem at all if the 10-second rule didn't exist.


It's a very basic, generic AI video generator not intended for game use at all, and it's not even state of the art, yet we are getting these perfectly serviceable results; many even call it amazing, especially when you see the results for GTA IV, Metro Exodus, and others. Imagine a more gaming-specific model.
I don’t want to get testy but if you think this is at all serviceable I seriously question your taste here. This looks horrible and soulless. Why are we trying to inject AI slop into an art form we are all supposedly into?

Notice how painters and photographers dislike the AI garbage? How people reject AI written scripts as cold and inhuman (because they are)? Why are we trying to bring this into games? Art is a fundamentally human endeavor, giving up genuine creativity just to make games look better (these don’t but let’s pretend AI could make it look better) feels horrible.
 
I don’t want to get testy but if you think this is at all serviceable I seriously question your taste here. This looks horrible and soulless. Why are we trying to inject AI slop into an art form we are all supposedly into?

Notice how painters and photographers dislike the AI garbage? How people reject AI written scripts as cold and inhuman (because they are)? Why are we trying to bring this into games? Art is a fundamentally human endeavor, giving up genuine creativity just to make games look better (these don’t but let’s pretend AI could make it look better) feels horrible.
Not to mention, it'll erase a lot of jobs. AI should be used to tackle problems that humans struggle to solve, while aiming to minimize job losses. The tech industry as a whole is brutal right now: huge layoffs, global competition due to offshoring, and an oversupply of workers are some of the many problems we're facing. The idiocy of many corporations chasing short-term profits through layoffs is the failure to realize that people actually need income to keep buying their products.

Anyway, all of that is to say that AI for the most part needs to stay out of gaming at least until the industry reaches a new equilibrium.
 
Why are we trying to inject AI slop into an art form we are all supposedly into?
Because a simple, non-specialized, generic AI successfully injected human-like characters into game scenes, with realistic clothing, hair, and physics, with practically zero programming effort. Imagine the potential when it's a more specialized, directed model in the hands of professionals.

Why are we trying to bring this into games? Art is a fundamentally human endeavor, giving up genuine creativity just to make games look better feels horrible

This is just an overly sentimental way of thinking about it, no different from conventional artists objecting to using computers for 2D or 3D drawing instead of doing it manually as humans did for hundreds of years, or sculptors objecting to people using 3D printing instead of traditional sculpting. It is a very superficial way to think about this. AI will be just another tool in the hands of game programmers and game artists, letting them do their thing more productively, more cheaply, and in a much more visually stunning way.

That's certainly better than the clusterfuck the gaming industry is right now, with its bloated budgets, dirty money-milking schemes, repetitive boring designs, and slow visual progress. I mean, if you like the status quo then be my guest, but I would much prefer a small team of creative people successfully making unique and polished experiences with the assistance of AI, rather than enduring another decade of this bullshit we have as a gaming industry.

and expectations for the results in the near future are based solely on hope
Not on hope at all: you've seen the Adobe video showcasing turning hand-drawn 2D figures into full 3D models.


We are seeing developers gearing up game engines for AI integration.

This LWM is allegedly capable of generating all the components of a video game, from environments, 3D models, and gameplay to NPC (non-player character) behavior - along with detailed metadata


We've seen textures generated on the fly with a simple prompt.


We are seeing IHVs experimenting with augmenting rendering with AI and forming a preliminary vision of how things will go.

The purpose of the image generator is to synthesize an image from a novel, unobserved view of the scene. The generator receives: parameters of the camera, a G-buffer rendered using a traditional renderer from the novel view, and a view-independent scene representation extracted by the encoder.

We envision future renderers that support graphics and neural primitives. Some objects will still be handled using classical models (e.g. triangles, microfacet BRDFs), but whenever these struggle with realism (e.g. parts of human face), fail to appropriately filter details (mesoscale structures), or become inefficient (fuzzy appearance), they will be replaced by neural counterparts that demonstrated great potential. To enable such hybrid workflows, compositional and controllable neural representations need to be developed first.


We are seeing bits and pieces, but they are enough to form a picture of where things are heading; they are solid pieces based on actual progress made, certainly not on hope. Only time will tell, of course.

ML can't yet produce missing details with complete accuracy for upscaling
Complete accuracy is never a prerequisite for building an interactive gaming experience; the whole rendering process is just an approximation, and many, many inaccuracies are tolerated for the sake of other, greater gains.
 
Because a simple, non-specialized, generic AI successfully injected human-like characters into game scenes, with realistic clothing, hair, and physics, with practically zero programming effort. Imagine the potential when it's a more specialized, directed model in the hands of professionals.
Did what now? Those game clips are literally reimagined videos; you could feed in a real movie, a cartoon, whatever, and it would do the same.
 
Because a simple, non-specialized, generic AI successfully injected human-like characters into game scenes, with realistic clothing, hair, and physics, with practically zero programming effort. Imagine the potential when it's a more specialized, directed model in the hands of professionals.

This is just an overly sentimental way of thinking about it, no different from conventional artists objecting to using computers for 2D or 3D drawing instead of doing it manually as humans did for hundreds of years, or sculptors objecting to people using 3D printing instead of traditional sculpting. It is a very superficial way to think about this. AI will be just another tool in the hands of game programmers and game artists, letting them do their thing more productively, more cheaply, and in a much more visually stunning way.

That's certainly better than the clusterfuck the gaming industry is right now, with its bloated budgets, dirty money-milking schemes, repetitive boring designs, and slow visual progress. I mean, if you like the status quo then be my guest, but I would much prefer a small team of creative people successfully making unique and polished experiences with the assistance of AI, rather than enduring another decade of this bullshit we have as a gaming industry.

Not on hope at all: you've seen the Adobe video showcasing turning hand-drawn 2D figures into full 3D models.

We are seeing developers gearing up game engines for AI integration.

We've seen textures generated on the fly with a simple prompt.

We are seeing IHVs experimenting with augmenting rendering with AI and forming a preliminary vision of how things will go.

We are seeing bits and pieces, but they are enough to form a picture of where things are heading; they are solid pieces based on actual progress made, certainly not on hope. Only time will tell, of course.

Complete accuracy is never a prerequisite for building an interactive gaming experience; the whole rendering process is just an approximation, and many, many inaccuracies are tolerated for the sake of other, greater gains.
This is all incredibly bleak, and nothing like using computers to accelerate 2D and 3D art. If one cannot tell the difference between using a computer as a tool and having an AI literally do all of the work, then you probably never really appreciated video games as art. Which isn't uncommon; I've seen that type of sentiment pretty often online, where the end goal isn't making good art and experiencing it, but instead striving for lifelike graphics with design taking a backseat.
 