Digital Foundry Article Technical Discussion Archive [2015]

This is getting silly now. You're citing MGSV as evidence that agents in a game don't need to be constantly evaluating the combat zone. You then relate AI agents in an action computer game to normal people going about their everyday business. Neither is the slightest bit representative of what an AI agent has to do in a close-quarters, urban, one-shot-kill combat game, where clearly the enemy AI has to be as aware and responsive as a human player.

Well, that's an interesting take that bears no relation to what I said. I used MGS V as an example because it's a) a game where decent AI clearly has multiple states of awareness/behaviour, during which the AI's reactions to stimuli are different, and b) an example of a game where decent AI can run on dozens of AI characters at 60fps without impacting the frame rate.

Even if Joe Gamer drives his car not thinking about anything in particular and has accidents, and even if Joe Gamer were to spend most of a war patrolling, cleaning his weapon and gazing at a well-worn photo of his sweetheart, when that human being is in a combat situation like R6 (assuming they aren't in a blind panic and are suitably trained), they'll be 100% focussed on staying alive and killing the enemy, constantly processing cues and orders and looking for opportunities.

Except that "focus on staying alive" really doesn't mean anything in terms of purpose. It's your goal, not a specific action. This is why there is no game where the X button is 'stay alive'. Staying alive in a combat zone is itself just another set of choices of what to do (or not to do) albeit at heightened situation awareness. MGS V certainly mimics this behaviour. So you want to stay alive, what's the best thing to do? Run away immediately? How about hide in cover then then run away? How about stay in cover and lay down suppressing fire? How about a tactical retreat? How about moving positions stealthily and trying to snipe the enemy? How about trying to flank the enemy? How about try and get to a radio and call reinforcements? These are all valid decisions for staying alive but you sure as hell can't focus on them all simultaneously.

And this is how things go wrong in combat even with very experienced soldiers. If you hide, you're not preventing the enemy from moving about and completing their mission, but you can hear them and perhaps evade them. If you're firing at the enemy, you may not hear or see the enemy sneaking up on or flanking you. Sticking your head out to see what's going on may result in it getting shot off.

Incidentally, these are all behaviours I've seen in MGS V. I've also seen the AI do dumb arse things. However, people also do dumb arse things.

That's something smart AI will need to replicate.

Have you actually played MGS V? I'm assuming you haven't, but...

If you want smarter AI, you need to spend processing cycles on it, and that's how a 60 fps game can be bogged down to 30 fps once AI is included.

And again, as I said above, this is a basic programming problem. You have X cycles available and need to do Y. If you can't do Y in X cycles or less you need to do something differently. Could that be targeting a lower framerate? Sure. But it could be a bunch of other things including rewriting your AI.
 
I suppose what I'm getting at is that there has to be some sort of bottleneck specific to PC setups that hides the performance cost for various titles.

Oh well.

Poor drivers and broken game updates will always be the PC's Achilles heel. :yep2:
 
And again, as I said above, this is a basic programming problem. You have X cycles available and need to do Y. If you can't do Y in X cycles or less you need to do something differently. Could that be targeting a lower framerate? Sure. But it could be a bunch of other things including rewriting your AI.
Perhaps Ubi could rewrite R6 to run the same AI at 60 fps - quite possibly new compute algorithms will enable more, especially sharing compute workloads across aspects of the game (spatial representations recycled across physics and AI, perhaps). Maybe also they can't, because the AI is just that complex and it's literally a case that the hardware can't run it faster (where on the PC it does), or at least that can't be achieved with current programming know-how and budgets. Certainly we shouldn't be saying the devs just weren't trying hard enough and TerroHunt should still be 60 fps because the AI shouldn't be that taxing. Ultimately we do need to accept that there's only so much straw we can stack on these consoles' backs before they break, and low framerates are the result of that.
 
I can only repeat what I've said before - I don't think Ubisoft would need to rewrite the AI for a higher framerate. AI is not something that needs tying to framerate.

Have a read of the Game Engine Architecture extract that I linked to. It covers asynchronous processing by the game engine and how to approach the task of achieving synchronicity. It's not desirable or necessary to have everything in the game world update at a constant speed, be it the target framerate or anything else.
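A minimal sketch of the kind of decoupling that extract describes, assuming invented updateAI()/renderFrame() entry points: the AI advances on its own fixed timestep while rendering runs at whatever rate the frame allows, so the two rates are independent.

```cpp
#include <atomic>
#include <chrono>

// Placeholder subsystem entry points; the names are invented for this sketch.
void updateAI(std::chrono::milliseconds /*step*/) {}
void renderFrame() {}

void gameLoop(const std::atomic<bool>& running)
{
    using clock = std::chrono::steady_clock;
    constexpr auto aiStep = std::chrono::milliseconds(100);   // AI ticks at 10 Hz

    auto previous = clock::now();
    clock::duration aiDebt = clock::duration::zero();

    while (running)
    {
        const auto now = clock::now();
        aiDebt += now - previous;
        previous = now;

        // Run however many fixed-size AI ticks the elapsed time has "paid for".
        while (aiDebt >= aiStep)
        {
            updateAI(aiStep);     // decision-making advances at a constant rate
            aiDebt -= aiStep;
        }

        renderFrame();            // rendering runs as fast as the hardware allows
    }
}
```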
 
No, but I think Shifty's point is that if the AI runs on the CPU, and the CPU is overburdened, then it doesn't matter if you run the AI synchronously or not - there just aren't any CPU cycles available for everything combined. A good solution could be to move at least part of the AI logic to GPU, which is more feasible on consoles than it is on PC, but that potentially comes with a sizeable development effort.
 
A good solution could be to move at least part of the AI logic to GPU, which is more feasible on consoles than it is on PC, but that potentially comes with a sizeable development effort.
There are two problems with this. The first is that compute shader GPU programming is very fragile with regard to performance. You need to know exactly how the GPU operates in order to write fast code for it. Slow GPU code can easily be 10x+ slower than optimized code (stealing most of the GPU resources from the rendering). Gameplay programmers in general do not know low-level GPU details well enough, and do not have experience with the GPU programming languages used (HLSL). Gameplay programming teams tend to have more junior programmers than rendering teams, and gameplay programmers in general are less interested in high-performance multithreaded programming than rendering programmers.

The second problem is of course latency, and this makes things even harder. The GPU runs asynchronously from the CPU, and you should be prepared for at least half a frame of latency (on consoles; 2+ frames on PC) to get the compute results back. Writing (bug-free) asynchronous code is significantly harder than writing synchronous code. Gameplay programmers prefer fast iteration and prototyping. Writing asynchronous GPGPU code and debugging+optimizing it before it works properly completely ruins your iteration time for gameplay prototyping.
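To make the latency point concrete, here's a hedged sketch of the usual pattern (all names invented, no real graphics API involved): compute work for the AI is kicked this frame and its results are consumed a frame later, with a fallback so the CPU never stalls waiting on the GPU.

```cpp
#include <vector>

struct GpuJob
{
    bool ready = false;
    std::vector<float> results;   // e.g. per-agent visibility scores copied back
};

// Placeholder for "dispatch a compute shader and start an async copy-back".
GpuJob kickVisibilityCompute() { return {}; }

void tickAI(const std::vector<float>& gpuResults) { (void)gpuResults; }
void tickAIWithStaleData() {}

void frame(GpuJob& inFlight)
{
    // Consume the job submitted last frame, if the copy-back has finished.
    if (inFlight.ready)
        tickAI(inFlight.results);
    else
        tickAIWithStaleData();    // degrade gracefully instead of stalling

    // Submit this frame's compute work; at best it is readable next frame
    // (consoles), often two or more frames later on PC.
    inFlight = kickVisibilityCompute();
}
```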
 
Have they always designed the game to run at split framerates, or is this a recent change because it just wasn't working within the 60fps budget?

We were only just discussing changing the framerate of a game, and this almost looks like they casually flicked the 30fps switch, as they had only ever talked about a 60fps target before. This is weak evidence, admittedly.

Now that the Xbox has lost its media vision it's irrelevant, I suppose, but I always wondered if they could include a 50fps mode in these games to up the graphics slightly / have fewer frame drops, and play far nicer with PAL users snapping TV and 50Hz media.
 
There are two problems with this. The first is that compute shader GPU programming is very fragile with regard to performance. You need to know exactly how the GPU operates in order to write fast code for it. Slow GPU code can easily be 10x+ slower than optimized code (stealing most of the GPU resources from the rendering). Gameplay programmers in general do not know low-level GPU details well enough, and do not have experience with the GPU programming languages used (HLSL). Gameplay programming teams tend to have more junior programmers than rendering teams, and gameplay programmers in general are less interested in high-performance multithreaded programming than rendering programmers.

The second problem is of course latency, and this makes things even harder. The GPU runs asynchronously from the CPU, and you should be prepared for at least half a frame of latency (on consoles; 2+ frames on PC) to get the compute results back. Writing (bug-free) asynchronous code is significantly harder than writing synchronous code. Gameplay programmers prefer fast iteration and prototyping. Writing asynchronous GPGPU code and debugging+optimizing it before it works properly completely ruins your iteration time for gameplay prototyping.

Thanks for this. I noticed in the AI talk about Ellie in The Last of Us that artists actually write scripts for AI behavior these days, and the presenter stressed that the interaction is important - whenever you need something complex or something used often, check with the systems programmers to see if they can solve the problem more efficiently.

Perhaps it is better to take a more holistic approach to the whole game system rather than separating graphics from the rest too much? And identify complex tasks on both sides and assign them to high level coders?

Also, while I appreciate the complexity of async programming, shouldn't it be possible to have the top coders set up a framework that is accessible enough for the juniors? Certainly raycasting results coming from the GPU should be doable? Otherwise it would be quite a step backwards from Cell programming ... And I was under the impression that CU was easier rather than harder?

I think the latency on PC is more problematic. But here too, in a good job system it would be possible to have the job executed by different code. The main problem is perhaps the programming language differences? At least on SPEs you could still run the same C++ logic as you did on PPE.
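From the gameplay side, the raycast framework suggested above might look something like this - a fire-and-poll query object that hides the asynchronous plumbing owned by the senior/rendering programmers. This is a hypothetical sketch, not any engine's actual API.

```cpp
#include <optional>

struct Vec3 { float x = 0, y = 0, z = 0; };
struct RaycastHandle { int id = -1; };

// The service is implemented by the engine team; the stub bodies here just make
// the sketch self-contained. Internally it could batch queries for GPU compute
// or a job system without gameplay code ever knowing.
class RaycastService
{
public:
    // Gameplay code calls this; queries are batched and resolved asynchronously.
    RaycastHandle request(const Vec3& /*from*/, const Vec3& /*to*/) { return {}; }

    // Hit distance once the batch containing this query has resolved,
    // std::nullopt while it is still in flight.
    std::optional<float> result(RaycastHandle /*h*/) const { return std::nullopt; }
};

// Typical gameplay usage: ask this frame, act on the answer next frame.
void aiTick(RaycastService& rays, RaycastHandle& pending,
            const Vec3& eye, const Vec3& target)
{
    if (auto distance = rays.result(pending))
    {
        (void)*distance;   // react to last frame's line-of-sight answer here
    }
    pending = rays.request(eye, target);   // queue the next query
}
```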
 
I think the latency on PC is more problematic. But here too, in a good job system it would be possible to have the job executed by different code. The main problem is perhaps the programming language differences? At least on SPEs you could still run the same C++ logic as you did on PPE.

If developers wanted to go in the direction of offloading AI onto the GPU on consoles, then perhaps they could take advantage of multiadapter in DX12 and do the same on PCs with the CPU's local GPU. It would limit CPU support for that feature to only CPUs with modern IGPs, but many games only claim support for Haswell-based CPUs these days anyway, so I don't see why that would be a huge problem. Especially if you had a "CPU only" fallback path.

Then again you could argue that if you have a Haswell in the first place with those massive AVX2 units, why would you need to offload AI to the GPU anyway?
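The adapter-selection part of that idea could look roughly like the sketch below: enumerate adapters via DXGI, pick one that looks integrated, and otherwise take the CPU-only path. The low-VRAM heuristic is an assumption rather than a guaranteed way to identify an IGP, and the full DX12 multiadapter setup (second device, queues, cross-adapter resources) is a much bigger job than this.

```cpp
#include <dxgi1_4.h>
#include <wrl/client.h>
#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

// Returns an adapter that looks like an integrated GPU, or null if none is found,
// in which case the caller should fall back to the CPU-only AI path.
ComPtr<IDXGIAdapter1> findIntegratedAdapter()
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return nullptr;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc{};
        adapter->GetDesc1(&desc);

        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue;                          // skip WARP / software adapters

        // Crude heuristic: integrated parts report little or no dedicated VRAM.
        if (desc.DedicatedVideoMemory < 256ull * 1024 * 1024)
            return adapter;
    }
    return nullptr;
}
```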
 
For your AI code to work well on GPUs you need very coherent program flow between individual actors/entities. Running AI for hordes of entities might be a good fit, but advanced AI for a low number of individual characters, not so much.

Cheers
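As a toy illustration of that coherence point: a horde update where every agent runs identical, branch-light arithmetic is the shape of work that suits a GPU (or CPU SIMD), unlike a handful of complex individuals each taking different code paths. The data layout and behaviour here are entirely invented.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Structure-of-arrays horde: the same work is done for every element.
struct Horde
{
    std::vector<float> x, y;      // positions
    std::vector<float> speed;
};

void seekTarget(Horde& h, float targetX, float targetY, float dt)
{
    for (std::size_t i = 0; i < h.x.size(); ++i)
    {
        const float dx  = targetX - h.x[i];
        const float dy  = targetY - h.y[i];
        const float len = std::sqrt(dx * dx + dy * dy) + 1e-6f;

        // Identical instructions for every agent; "decisions" are blended
        // arithmetically rather than taken as divergent control flow.
        h.x[i] += (dx / len) * h.speed[i] * dt;
        h.y[i] += (dy / len) * h.speed[i] * dt;
    }
}
```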
 
No, but I think Shifty's point is that if the AI runs on the CPU, and the CPU is overburdened, then it doesn't matter if you run the AI synchronously or not - there just aren't any CPU cycles available for everything combined.

The same could be said for any of the CPU-bound subsystems in the game, such as UI, animation, sound, streaming and the core renderer. There is nothing particular about AI that would lend itself to blame for insufficient cycles to run the game at the desired framerate, and unlike some other subsystems, which would see their cycle time reduced in proportion to the reduction in frame rate, the AI shouldn't.
 
The same could be said for any of the CPU-bound subsystems in the game, such as UI, animation, sound, streaming and the core renderer. There is nothing particular about AI that would lend itself to blame for insufficient cycles to run the game at the desired framerate, and unlike some other subsystems, which would see their cycle time reduced in proportion to the reduction in frame rate, the AI shouldn't.
AI tends to be branch-based code, however. I imagine really complex AI hitting a lot of different parts of memory, leading to a lot of data cache misses and instruction cache misses. Apply that to a lot of characters on screen and you've got a really slow system, and not many ways to get around that.
 
The same could be said for any of the CPU-bound subsystems in the game, such as UI, animation, sound, streaming and the core renderer. There is nothing particular about AI that would lend itself to blame for insufficient cycles to run the game at the desired framerate, and unlike some other subsystems, which would see their cycle time reduced in proportion to the reduction in frame rate, the AI shouldn't.
Any (well, some!) of those subsystems could be cranked up to demand more of the CPU and leave less available for the rendering. Audio could use an extremely advanced audio tracing system producing extremely accurate 3D audio. UI could use a CPU-based vector drawing engine. The subsystem itself, be it AI, physics or audio, doesn't represent a workload of a given proportion of the CPU time. The CPU time is there for devs to allocate how they choose. If a dev decides to spend 50% of CPU time processing AI, why shouldn't they if it makes the game they want? If another dev wants to spend 50% of CPU time on physics, why shouldn't they?
 
Because using more processing cycles reduces cycles left for rendering, necessitating a reduction in framerate to fit all the required workloads into the intended timeframe. AI doesn't have to be tied to framerate, but it does have to be tied to how much actual processing power you have available, which in turn means framerate can be tied to AI.
 
AI tends to be branch-based code, however. I imagine really complex AI hitting a lot of different parts of memory, leading to a lot of data cache misses and instruction cache misses. Apply that to a lot of characters on screen and you've got a really slow system, and not many ways to get around that.

This has always been my understanding of AI, that it's 'branchy' as all hell, quite 'chatty' to RAM, is largely ALU based and thus a very bad fit for GPUs with their emphasis on SIMD and doing the same thing over and over on a large fixed data set. Have the changes to contemporary GPUs made them a better match for AI, or are there new AI paradigms that can fit the GPU better?
 
You can represent some of the decision making with matrices of values, which is a better fit for GPU. Basically calculate a mass of options/parameters, like distances between objects, and then just pick the best, instead of the usual list of branches testing individual cases. There's certainly potential for clever stuff there, but I don't know how developed it is, and question whether anyone's investing in it anyway as it's such an overhaul where game AI is presently satisfied with the CPU. Finding new ways to do the same work isn't high on many people's priorities ;)
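A toy version of that idea, with invented actions, inputs and weights: score every candidate action from a few precomputed values supplied by the engine, then take the best, instead of walking a tree of special cases.

```cpp
#include <array>
#include <cstddef>

enum Action { Attack, TakeCover, Flank, Retreat, ActionCount };

struct AgentInputs        // provided by the engine each AI tick
{
    float distanceToEnemy;   // metres
    float distanceToCover;   // metres
    float ammoFraction;      // 0..1
    float health;            // 0..1
};

Action chooseAction(const AgentInputs& in)
{
    // One score per action, each a simple weighted sum of the same inputs.
    const std::array<float, ActionCount> score = {
        /* Attack    */ in.ammoFraction * 2.0f + in.health - in.distanceToEnemy * 0.05f,
        /* TakeCover */ (1.0f - in.health) * 2.0f - in.distanceToCover * 0.1f,
        /* Flank     */ in.health + in.ammoFraction - in.distanceToEnemy * 0.02f,
        /* Retreat   */ (1.0f - in.health) + (1.0f - in.ammoFraction),
    };

    // Pick the best-scoring action: no per-case branching, just an argmax.
    std::size_t best = 0;
    for (std::size_t a = 1; a < score.size(); ++a)
        if (score[a] > score[best]) best = a;
    return static_cast<Action>(best);
}
```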
 
AI doesn't have to be tied to framerate, but it does have to be tied to how much actual processing power you have available, which in turn means framerate can be tied to AI.
Naturally, but we've wandered into computer science 101 valley.

You can represent some of the decision making with matrices of values, which is a better fit for GPU. Basically calculate a mass of options/parameters, like distances between objects, and then just pick the best, instead of the usual list of branches testing individual cases.
This is definitely the way to go rather than a whole heap of conditionals and branches :yes: And you can scale up the decision making without a disproportionate increase in CPU time, as long as the engine itself is relatively efficient at providing the data needed for the algorithm (distances, lines of sight, distance to cover, whether the AI is armed and how much ammo it has, etc.). You can get very dynamic by increasing the size of the matrices.

AI tends to be branch-based code, however. I imagine really complex AI hitting a lot of different parts of memory, leading to a lot of data cache misses and instruction cache misses. Apply that to a lot of characters on screen and you've got a really slow system, and not many ways to get around that.

Not necessarily. Most AI starts out as branching code, but you can convert many decision trees into matrices of actions navigated by a simple algorithm. You know that $70 book? Buy it. :yes:
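And a toy example of the decision-tree-to-matrix conversion being described, with invented states and actions: discretise the state into a few buckets and index straight into a table of actions, so the per-agent cost is a couple of lookups rather than nested branches.

```cpp
enum Action { Attack, TakeCover, Retreat };

enum Threat { ThreatLow, ThreatHigh };          // 2 buckets
enum Health { HealthLow, HealthHigh };          // 2 buckets

// [threat][health] -> action, authored as data instead of code.
constexpr Action kPolicy[2][2] = {
    /* ThreatLow  */ { /*HealthLow*/ TakeCover, /*HealthHigh*/ Attack },
    /* ThreatHigh */ { /*HealthLow*/ Retreat,   /*HealthHigh*/ TakeCover },
};

Action chooseAction(float threatLevel, float health)
{
    const Threat t = threatLevel > 0.5f ? ThreatHigh : ThreatLow;
    const Health h = health      > 0.5f ? HealthHigh : HealthLow;
    return kPolicy[t][h];   // two lookups, no branching over individual cases
}
```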
 
Naturally, but we've wandered into computer science 101 valley.

You know that $70 book? Buy it. :yes:
Hahaha. Damn. I was wondering when you'd respond to that PM. Well that's quite interesting. I'll look into buying it. Quite interested in how it's done.
 
Hahaha. Damn. I was wondering when you'd respond to that PM. Well that's quite interesting. I'll look into buying it. Quite interested in how it's done.
Books are absolutely fantastic if you want to quickly learn from those who know what they're doing. Equally, I think it's as valuable (perhaps more so) to learn by trying and iterating on code and techniques yourself. It'll certainly give you experience of what won't work (and why), which is just as valuable as knowing what does work. I don't think nearly enough credit is given to the experience of failure.

Screw the book, just do it your own way. It's more satisfying!
 