Digital Foundry Article Technical Discussion Archive [2015]

I don't think you can decouple AI from the update.
I can't see why you can't have everything in the simulated game world decoupled from the rendering engine with the rendering engine only showing its rendition of the world state as quickly as it can.

Humans typically aren't making 60 individual decisions a second. Have you ever lined up in Starbucks? Some people can't decide between a latte or a mocha in 15 seconds!!!

What you don't want is physics tied to frame rate or animation tied to AI. Decouple these and everything should be fine.
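As a rough illustration of what I mean, here's a minimal sketch of the classic fixed-timestep loop (not any particular engine; updateWorld/renderWorld are placeholder stubs): the simulation advances in constant 30Hz steps while rendering just draws the latest world state as often as it can.

```cpp
// Minimal sketch (not any particular engine): simulation runs at a fixed
// 30Hz step, rendering just draws the latest state as fast as it can.
#include <chrono>

static void updateWorld(double dt) { /* physics, AI, game logic (stub) */ }
static void renderWorld()          { /* draw latest world state (stub) */ }

int main()
{
    using clock = std::chrono::steady_clock;
    const double step = 1.0 / 30.0;              // fixed 33.3ms simulation step
    double accumulator = 0.0;
    auto previous = clock::now();

    for (int frame = 0; frame < 300; ++frame)    // stand-in for "while running"
    {
        auto now = clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        while (accumulator >= step)              // catch the simulation up in fixed steps
        {
            updateWorld(step);
            accumulator -= step;
        }
        renderWorld();                           // render rate is decoupled from sim rate
    }
}
```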
 
I can't see why you can't have everything in the simulated game world decoupled from the rendering engine with the rendering engine only showing its rendition of the world state as quickly as it can.

Humans typically aren't making 60 individual decisions a second. Have you ever lined up in Starbucks? Some people can't decide between a latte or a mocha in 15 seconds!!!

What you don't want is physics tied to frame rate or animation tied to AI. Decouple these and everything should be fine.

Usually when I hear decouple I mean decouple from the fixed update. Meaning, could I save CPU time if I only run the AI in half the frames I run physics, positions, effects and animation? If the AI were so advanced, could I give it more time, say 66ms to complete, while the game still operates at 33ms? Or allow 33ms for the AI to complete while the rest of the game runs at 16ms? At least this was my understanding of the question.

So if I look at pathfinding, say the path is 8 steps up and 6 steps left, and say we somehow linked the fixed update so that one step takes one frame. Then it would take 8 frames to walk up and 6 frames to walk left. With the AI running every other frame, it would start at frame 1 and then have to wait until frame 3 to be updated with its new position. So it says keep going up. Then it waits until frame 5 and sees it needs to keep going. Then it hits frame 7 and the AI says keep going up. Now you're sitting there through frame 8, waiting until frame 9 before it says to go left.

Anyway, that's how I see it. I know games don't move that fast, but for a game to feel alive I think the AI shouldn't look like it's on happy drugs. People will notice, I think, but then again I could be wrong. Certain things could be separated, but I think certain things should be right up there with the game code. Missile tracking logic is different from MGSV guard AI logic, for example.
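To make the half-rate idea concrete, here's a toy sketch (nothing from a real engine; all names are made up): physics ticks every fixed update, the AI only every second tick.

```cpp
// Toy sketch of running AI at half the fixed-update rate: physics/animation
// tick every fixed update (33ms), AI only every second tick (66ms).
#include <cstdio>

struct Agent { float x = 0, y = 0; int aiDecision = 0; };

static void updatePhysics(Agent& a)   { /* move along current decision (stub) */ }
static void updateAI(Agent& a, int t) { std::printf("AI thinks on tick %d\n", t); }

int main()
{
    Agent guard;
    for (int tick = 0; tick < 8; ++tick)   // 8 fixed updates = ~266ms of game time
    {
        updatePhysics(guard);              // every 33ms
        if (tick % 2 == 0)                 // AI gets half the updates (every 66ms)
            updateAI(guard, tick);
    }
}
```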
 
Almost all AAA multiplat titles that have low AF on PS4 have been patched without any performance penalty.

In D3D11 and the corresponding hardware, AF is a per-sampler setting. It does have a significant cost that increases as you go up if you just do it naively, but since it's per-sampler it can be targeted where it's needed most. Most textures do not need AF at all. For the ones that do, maybe only the normal map or diffuse map needs it. For most textures it's really hard to notice values above 4x, even on long glancing views.

If you just naively set all your textures to 16x AF, you're going to have terrible performance. If you set a large amount of the scene to 4x or even 2x, you're still often bumping into unacceptable performance deltas. When we talk about games that have "patched in AF", chances are you're talking about a handful of materials changed to have a slightly higher setting. And when you're talking about "no performance penalty", you're talking about something you don't have the tools to measure appropriately in the consumer world.

This is really simple. Why don't games have AF? Because it's expensive to do it naively and only a handful of textures benefit from it, so the smartest thing to do is default it to off and raise it by hand as needed. See a blurry texture? An artist hasn't checked the "use AF" box on it. The end.
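For reference, this is roughly what the per-sampler setting looks like through the D3D11 API: you opt into anisotropic filtering on a sampler state, which is then bound only for the materials that actually need it. A sketch with error handling omitted; 'device' is assumed to be an existing ID3D11Device*.

```cpp
// D3D11: anisotropic filtering is opted into per sampler state, not globally.
#include <d3d11.h>

ID3D11SamplerState* createAnisoSampler(ID3D11Device* device, UINT maxAniso)
{
    D3D11_SAMPLER_DESC desc = {};
    desc.Filter         = D3D11_FILTER_ANISOTROPIC;  // only this sampler pays for AF
    desc.AddressU       = D3D11_TEXTURE_ADDRESS_WRAP;
    desc.AddressV       = D3D11_TEXTURE_ADDRESS_WRAP;
    desc.AddressW       = D3D11_TEXTURE_ADDRESS_WRAP;
    desc.MaxAnisotropy  = maxAniso;                  // e.g. 4x for a floor diffuse, off elsewhere
    desc.ComparisonFunc = D3D11_COMPARISON_NEVER;
    desc.MinLOD         = 0.0f;
    desc.MaxLOD         = D3D11_FLOAT32_MAX;

    ID3D11SamplerState* sampler = nullptr;
    device->CreateSamplerState(&desc, &sampler);
    return sampler;
}
```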
 
Usually when I hear decouple I mean decouple from the fixed update. Meaning, could I save CPU time if I only run the AI in half the frames I run physics, positions, effects and animation? If the AI were so advanced, could I give it more time, say 66ms to complete, while the game still operates at 33ms? Or allow 33ms for the AI to complete while the rest of the game runs at 16ms? At least this was my understanding of the question.

So if I look at pathfinding, say the path is 8 steps up and 6 steps left, and say we somehow linked the fixed update so that one step takes one frame. Then it would take 8 frames to walk up and 6 frames to walk left. With the AI running every other frame, it would start at frame 1 and then have to wait until frame 3 to be updated with its new position. So it says keep going up. Then it waits until frame 5 and sees it needs to keep going. Then it hits frame 7 and the AI says keep going up. Now you're sitting there through frame 8, waiting until frame 9 before it says to go left.

Anyway, that's how I see it. I know games don't move that fast, but for a game to feel alive I think the AI shouldn't look like it's on happy drugs. People will notice, I think, but then again I could be wrong. Certain things could be separated, but I think certain things should be right up there with the game code. Missile tracking logic is different from MGSV guard AI logic, for example.

When I said decouple, I meant running it in its own thread, asynchronously from the renderer. It doesn't need to be tied to the framerate at all (every frame or every nth frame). When it makes a decision, an event is fired (I'm just a JavaScript guy, don't shoot me!) and the decision is passed to the game logic, which updates every frame and displays those animations. When a path is found, the only thing that needs to be checked every nth frame (and not even every frame) is whether the path is still a valid one due to blocked paths, closed doors, suppressing fire from the opposing team, etc., but that's so much easier than deciding the initial path. Any major chess-like decision that involves teamwork, or just what to do and whether it would benefit the AI-controlled player and its team, should be able to work asynchronously from the renderer.

The actions resulting from the decision are "dumb" to execute (walk, shoot, stay in cover, lay down suppressing fire). So the complex AI that makes the agent decide like a human doesn't need to be fast at all. Since Ubisoft says it's the advanced AI making things run at 30fps instead of 60, I'm a bit confused as to why this had to be the case, and whether there wasn't any other workaround that didn't make the AI "dumb".
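Roughly the shape I have in mind, sketched in C++ rather than JavaScript (types, timings and the findPath cost are all made up): the expensive decision runs on another thread via std::async, and the frame loop just polls for the finished result.

```cpp
// Rough sketch of "decision runs async, game logic just consumes it".
#include <future>
#include <vector>
#include <chrono>
#include <thread>
#include <cstdio>

struct Node { int x, y; };

static std::vector<Node> findPath(Node from, Node to)
{
    std::this_thread::sleep_for(std::chrono::milliseconds(50)); // pretend this is expensive
    return { from, {from.x, to.y}, to };                        // dummy L-shaped path
}

int main()
{
    // Kick the expensive decision off asynchronously; the renderer never waits on it.
    auto pending = std::async(std::launch::async, findPath, Node{0, 0}, Node{6, 8});

    std::vector<Node> path;
    for (int frame = 0; frame < 10; ++frame)                    // the normal frame loop
    {
        if (pending.valid() &&
            pending.wait_for(std::chrono::seconds(0)) == std::future_status::ready)
        {
            path = pending.get();                               // "event fired": decision arrived
            std::printf("path ready on frame %d (%zu nodes)\n", frame, path.size());
        }
        // ...render / animate whatever the agent is currently doing...
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
    }
}
```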
 
Here's another theory. The developers developed the AI on PC, on the CPU, and aren't using any GPU code whatsoever, because on PC the GPU and CPU can't talk efficiently. And they have the whole engine tied to the frame rate. They're fine with 30fps on consoles, so they have little motivation to rewrite anything to take advantage of the close CPU/GPU integration possible on consoles (and a bit more, presumably, with DX12).
 
When I said decouple, I meant running it in its own thread, asynchronously from the renderer. It doesn't need to be tied to the framerate at all (every frame or every nth frame). When it makes a decision, an event is fired (I'm just a JavaScript guy, don't shoot me!) and the decision is passed to the game logic, which updates every frame and displays those animations.
Absolutely. And now what if the time it takes to process one step of AI is 13 ms of CPU time on top of the 15 ms already used for the graphics? The graphics will have to wait until the AI processing is finished, or work at half rate in parallel with the AI code.

There's no limit to how much AI processing you can do, and increasing complexity is potentially exponential in impact. There's a finite amount of CPU time and you can't just slot in AI here and there between the important jobs if you want to do a decent job.
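One common mitigation, sketched roughly here (the 2ms budget and all names are invented, not how any particular engine does it), is to give the AI a fixed time budget per frame and let whatever doesn't fit spill over to the next frame.

```cpp
// Sketch of time-slicing AI: each frame the AI system gets a fixed budget
// (say 2ms); whatever doesn't fit carries over to the next frame.
#include <chrono>
#include <deque>

struct AIJob { int agentId; };
static void runJob(const AIJob&) { /* expensive evaluation for one agent (stub) */ }

static void updateAIWithBudget(std::deque<AIJob>& queue, double budgetMs)
{
    using clock = std::chrono::steady_clock;
    auto start = clock::now();
    while (!queue.empty())
    {
        runJob(queue.front());
        queue.pop_front();
        double spent = std::chrono::duration<double, std::milli>(clock::now() - start).count();
        if (spent >= budgetMs)
            break;                       // out of budget; remaining agents wait for next frame
    }
}

int main()
{
    std::deque<AIJob> jobs = { {1}, {2}, {3}, {4} };
    while (!jobs.empty())
        updateAIWithBudget(jobs, 2.0);   // called once per frame in a real loop
}
```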
 
Usually when I hear decouple I mean decouple from the fixed update. Meaning, could I save CPU time if I only run the AI in half the frames I run physics, positions, effects and animation?

Ideally you would want the AI decoupled from the game world so that the actions/problems of the AIs do not impact the world, much in the same way that if you stop moving because you're thinking about a complicated problem, the whole world doesn't stop around you. Equally, you don't want any single AI process (say one NPC) to consume so much time that other AI processes cease functioning.

So if I look at pathfinding, say the path is 8 steps up and 6 steps left, and say we somehow linked the fixed update so that one step takes one frame. Then it would take 8 frames to walk up and 6 frames to walk left. With the AI running every other frame, it would start at frame 1 and then have to wait until frame 3 to be updated with its new position. So it says keep going up. Then it waits until frame 5 and sees it needs to keep going. Then it hits frame 7 and the AI says keep going up. Now you're sitting there through frame 8, waiting until frame 9 before it says to go left.

I don't know any modern games that approach AI like this; it seems more of a throwback to when games just ran a sequential loop. Now games are multithreaded and there is often a conscious AI decision (take cover, shoot target X, move to position Y, use ability Z, run away, etc.), then many frames (which could be several seconds) where the AI executes that intention, then the AI starts over, unless the AI system has been written in such a way that the AI can make a new decision based on new stimuli.
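In code terms, something in this spirit (a toy sketch with made-up names): the costly decision is made occasionally, and each frame just carries out the current intention until it completes or new stimuli force a rethink.

```cpp
// Toy sketch of "decide rarely, execute every frame".
enum class Intention { Idle, TakeCover, ShootTarget, MoveTo, Flee };

struct Agent
{
    Intention current = Intention::Idle;
    bool intentionComplete = true;
    bool newStimulus = false;          // e.g. took damage, heard gunfire
};

static Intention decide(const Agent&) { return Intention::TakeCover; } // expensive, rare
static void execute(Agent& a)         { a.intentionComplete = false; } // cheap, every frame

static void updateAgent(Agent& a)
{
    // Only re-run the costly decision when the last one finished or the world changed.
    if (a.intentionComplete || a.newStimulus)
    {
        a.current = decide(a);
        a.newStimulus = false;
    }
    execute(a);                        // many frames (possibly seconds) of this per decision
}

int main()
{
    Agent npc;
    for (int frame = 0; frame < 120; ++frame)
        updateAgent(npc);
}
```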
 
You'll want an AI loop of some form evaluating the world. This could be per agent or the whole scene, or a combination (let's say a spatial representation of threats/cover updated every half second). Even with several seconds of execution time on an instruction to an agent, the world needs to be evaluated multiple times in case they need to change their mind (depending on game complexity). And it's the evaluation that's costly. It could be looking at (many) hundreds of ray casts, vector magnitude tests and condition branches.

So if I look at pathfinding, say the path is 8 steps up and 6 steps left, and say we somehow linked the fixed update so that one step takes one frame. Then it would take 8 frames to walk up and 6 frames to walk left. With the AI running every other frame, it would start at frame 1 and then have to wait until frame 3 to be updated with its new position. So it says keep going up. Then it waits until frame 5 and sees it needs to keep going. Then it hits frame 7 and the AI says keep going up. Now you're sitting there through frame 8, waiting until frame 9 before it says to go left.
Path finding is something you can shoot off on a thread so it doesn't bog down the execution. An agent requests a path, the pathfinding thread starts up, returns a list of nodes after a while and the agent can then act. Then each physics update it'll move along this path, while another agent may be requesting a path from the pathfinding engine.

The problem is the need to keep testing, using up CPU time. Let's say you have your agent moving through an urban battle zone. You need ray casts for who you can see, who you can hear, intercept calculations for who you might encounter and who you're trying to encounter, proximity tests for bullets (rays) to determine whether one should be keeping low or not, tests against scenery to see if it's been destroyed, tests against scenery for vantage points ahead of ambushes while maintaining cover, etc. You can't have 2 seconds between these evaluations or you'll have idiot AI that runs to cover that's already been blown away and crouches in the middle of an open street.

The AI evaluations can also spike like physics can, so if you don't want massive framerate fluctuations, capping to 30 fps makes sense even when the engine (graphics + AI) can happily run at 50 fps (20 ms CPU time required between frames for drawing + AI calcs).
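As a rough sketch of what that evaluation loop might look like (the half-second interval and all names are illustrative only): the expensive pass runs on a timer, and the per-frame agent code just reads its result.

```cpp
// Sketch of a periodic evaluation pass: the expensive world evaluation
// (raycasts, threat/cover scoring) runs on a timer, not every frame.
struct ThreatMap { /* spatial representation of threats/cover */ };

static void evaluateWorld(ThreatMap&) { /* hundreds of raycasts, distance tests... (stub) */ }

struct AISystem
{
    ThreatMap map;
    double sinceLastEval = 0.0;
    double evalInterval  = 0.5;        // seconds between full evaluations

    void update(double dt)
    {
        sinceLastEval += dt;
        if (sinceLastEval >= evalInterval)
        {
            evaluateWorld(map);        // costly: agents read the result until the next pass
            sinceLastEval = 0.0;
        }
        // per-frame agent code reads 'map' cheaply here
    }
};

int main()
{
    AISystem ai;
    for (int frame = 0; frame < 300; ++frame)
        ai.update(1.0 / 60.0);         // 60Hz game update, evaluation every ~30 frames
}
```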
 
Ideally you would want the AI decoupled from the game world so that the actions/problems of the AIs do not impact the world, much in the same way that if you stop moving because you're thinking about a complicated problem, the whole world doesn't stop around you. Equally, you don't want any single AI process (say one NPC) to consume so much time that other AI processes cease functioning.



I don't know any modern games that approach AI like this; it seems more of a throwback to when games just ran a sequential loop. Now games are multithreaded and there is often a conscious AI decision (take cover, shoot target X, move to position Y, use ability Z, run away, etc.), then many frames (which could be several seconds) where the AI executes that intention, then the AI starts over, unless the AI system has been written in such a way that the AI can make a new decision based on new stimuli.
Hmm. Might not be explaining this right.

But if the AI is acting much slower than the fixed update, it's exploitable. For AI to respond to stimuli, something must enter its collision radius; working off the collision trigger, you'd run the AI function. If the game is moving faster than the AI can receive stimuli, it is exploitable, or at least inconsistent: the behaviour would seem buggy. I could dance into the collision radius that should trigger the AI, fire at the AI, and then step out of it freely before the AI would even be given the variables to know to respond.

State machines have been in game AI for some time now (probably nearly since the first RTS games); the modern complexity is having AI not just for the enemies but for bringing the world to life, and having the AI act on more inputs. AI can receive a lot more stimuli, which means it's checking a lot more things before it embarks on its behaviour.

The AI should always respond to changes, IMO, because in my example I did not bring up the fact that the path could change at any given time. Think about how quickly StarCraft players hammer their keys to get their units to dance; how many readjustments are being made to pathfinding every frame!

And we could also look at a feature like smart casting: just highlighting a group of units and clicking a spell on the screen. The AI will choose the unit with the energy available that's closest to that target, pathfind and fire. But if you mash it several times it will know to send multiple units.
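The way I'd picture avoiding that dance-in-and-out exploit (just a sketch; all names are made up) is to latch stimuli into a queue that the AI drains whenever it does get to run, instead of having it sample the trigger volume directly on its own schedule.

```cpp
// Sketch: stimuli are queued the moment they happen, and the AI drains the
// queue whenever it runs, so running the AI at a lower rate can't make it
// miss a shot fired between its updates.
#include <queue>
#include <cstdio>

enum class Stimulus { PlayerEnteredRadius, PlayerLeftRadius, TookFire };

struct Guard
{
    std::queue<Stimulus> pending;                  // written by collision/damage code
    void notify(Stimulus s) { pending.push(s); }   // fired the moment the event happens

    void think()                                   // may run every 2nd, 4th... fixed update
    {
        while (!pending.empty())
        {
            if (pending.front() == Stimulus::TookFire)
                std::printf("guard reacts to being shot at\n");
            pending.pop();
        }
    }
};

int main()
{
    Guard g;
    // Player dances in, shoots, and leaves before the guard's next think() ...
    g.notify(Stimulus::PlayerEnteredRadius);
    g.notify(Stimulus::TookFire);
    g.notify(Stimulus::PlayerLeftRadius);
    g.think();                                     // ... but the shot is still seen here
}
```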
 
You'll want an AI loop of some form evaluating the world. This could be per agent or the whole scene, or a combination (let's say a spatial representation of threats/cover updated every half second). Even with several seconds of execution time on an instruction to an agent, the world needs to be evaluated multiple times in case they need to change their mind (depending on game complexity). And it's the evaluation that's costly. It could be looking at (many) hundreds of ray casts, vector magnitude tests and condition branches.

Path finding is something you can shoot off on a thread so it doesn't bog down the execution. An agent requests a path, the pathfinding thread starts up, returns a list of nodes after a while and the agent can then act. Then each physics update it'll move along this path, while another agent may be requesting a path from the pathfinding engine.

The problem is the need to keep testing, using up CPU time. Let's say you have your agent moving through an urban battle zone. You need ray casts for who you can see, who you can hear, intercept calculations for who you might encounter and who you're trying to encounter, proximity tests for bullets (rays) to determine whether one should be keeping low or not, tests against scenery to see if it's been destroyed, tests against scenery for vantage points ahead of ambushes while maintaining cover, etc. You can't have 2 seconds between these evaluations or you'll have idiot AI that runs to cover that's already been blown away and crouches in the middle of an open street.

The AI evaluations can also spike like physics can, so if you don't want massive framerate fluctuations, capping to 30 fps makes sense even when the engine (graphics + AI) can happily run at 50 fps (20 ms CPU time required between frames for drawing + AI calcs).
Yeah, you explained this better than I did. I tend to consider pathfinding part of AI and so on, but you're right, we could still keep pathfinding separate from the AI.
The reason I would lump pathfinding in with AI is that in the game I've been working on, I've been trying to get agents to dodge asteroids while going through behaviour loops based on your position and actions: should the AI fire at you or hide behind an asteroid for cover, etc.

I guess there could be a translation issue with Ubisoft: what constitutes their definition of AI? I think, as you write, all the checking and testing eats up a ton of CPU cycles, especially when you try to have the CPU do group thinking like it does with the citizens of the Assassin's Creed games. It's not just the checks; you then have to load up conversation audio, animation, where to walk, what they can see. The more seemingly random things the city AI can do, the more it's likely to start hitting random memory access patterns, which is bound to hurt any system no matter how powerful it is.
 
You'll want an AI loop of some form evaluating the world. This could be per agent or the whole scene, or a combination (let's say a spatial representation of threats/cover updated every half second). Even with several seconds of execution time on an instruction to an agent, the world needs to be evaluated multiple times in case they need to change their mind (depending on game complexity). And it's the evaluation that's costly. It could be looking at (many) hundreds of ray casts, vector magnitude tests and condition branches.

An AI loop like this would be costly in terms of CPU cycles (and unrealistic, because people don't evaluate the world constantly). What you really want is a halfway house where an AI is idling, i.e. almost operating on the subconscious (follow this path to patrol this area, eat this food, etc.) but not really 'thinking', and where the really computationally intensive stuff only kicks in if triggered. Ray casts for line of sight aren't really that expensive except in very complicated scenes; Paradroid on the C64 was doing this in 2D at four-degree radials at 30fps.

Think of a program running in Windows: it's mostly idle until there is a stimulus, like the user pressing a key or selecting a menu option. Likewise, the higher-level 'thinking' only occurs when something that shouldn't be there comes into the AI's vision, or it hears something.

MGSV is a good example of this type of AI. The guards go about their business and are easy to evade, distract and fool if they are not alerted. It's not unlike simulating the adrenal reaction and heightening of the senses caused by something unusual, but without having to simulate the biology.

But if the AI is acting much slower than the fixed update, it's exploitable. For AI to respond to stimuli, something must enter its collision radius; working off the collision trigger, you'd run the AI function. If the game is moving faster than the AI can receive stimuli, it is exploitable, or at least inconsistent: the behaviour would seem buggy. I could dance into the collision radius that should trigger the AI, fire at the AI, and then step out of it freely before the AI would even be given the variables to know to respond.

By fixed update do you mean the rate at which the game world updates? It's quite normal, and far more realistic, for AI to operate slower than other events simulated in the game. You don't want AI reacting to bullets immediately after they've been fired, for example, because by the time you've heard a gun discharge close by, the bullet has already missed (or hit) you. People mostly react to things after they've happened, sometimes many seconds after they've happened.
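A crude sketch of that idle-until-stimulated idea, with a deliberate reaction delay bolted on (all names and timings are invented): cheap patrol logic until something happens, then a short human-like pause before the expensive alerted behaviour kicks in.

```cpp
// Crude sketch of "subconscious" idling plus a delayed reaction.
enum class GuardState { Patrolling, Reacting, Alerted };

struct Guard
{
    GuardState state = GuardState::Patrolling;
    double reactionTimer = 0.0;

    void onHeardSomething()
    {
        if (state == GuardState::Patrolling)
        {
            state = GuardState::Reacting;
            reactionTimer = 0.6;               // people react after the fact, not instantly
        }
    }

    void update(double dt)
    {
        switch (state)
        {
        case GuardState::Patrolling: /* follow patrol path, cheap */         break;
        case GuardState::Reacting:
            reactionTimer -= dt;
            if (reactionTimer <= 0.0) state = GuardState::Alerted;
            break;
        case GuardState::Alerted:    /* raycasts, search, call for backup */ break;
        }
    }
};

int main()
{
    Guard g;
    g.update(1.0 / 30.0);
    g.onHeardSomething();                                 // e.g. player knocks on a wall
    for (int i = 0; i < 30; ++i) g.update(1.0 / 30.0);    // ~1s later: Alerted
}
```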
 
By fixed update do you mean the rate at which the game world updates? It's quite normal, and far more realistic, for AI to operate slower than other events simulated in the game. You don't want AI reacting to bullets immediately after they've been fired, for example, because by the time you've heard a gun discharge close by, the bullet has already missed (or hit) you. People mostly react to things after they've happened, sometimes many seconds after they've happened.
I want them reacting ;) Not all games are MGSV ;)
Yes, the fixed update (the game world update) is separate from the update (which handles things like networking, input controls, graphics, etc.).

That being said, you don't want your AI thread to miss the fact that a bullet was fired at it either. It's okay that it operates slower or lags behind, but you don't want it to miss things it shouldn't be missing as a result of being updated late. And depending on the type of game you have, you do want the AI to react to changing stimuli.

I'm just going to make up an example with Assassin's Creed: hiding in the haystack when you have guards alerted. They could be right on you, but say you manage to hit the hide-in-haystack button right when the AI doesn't get an update, even though they're well within the range where you shouldn't be able to reach the haystack and hide. When the AI gets its turn back, it no longer knows where you are (or at least the game code has you hidden). That would be a weird bug, where sometimes it would get you and sometimes not.
 
I'm just going to make up an example with Assassin's Creed: hiding in the haystack when you have guards alerted. They could be right on you, but say you manage to hit the hide-in-haystack button right when the AI doesn't get an update, even though they're well within the range where you shouldn't be able to reach the haystack and hide. When the AI gets its turn back, it no longer knows where you are (or at least the game code has you hidden). That would be a weird bug, where sometimes it would get you and sometimes not.

Assassin's Creed utilised the 'last known position' mechanic so even when AIs lose line of sight, they'll head to where you last were and if you're still not visible, they will begin to investigate the area. So jumping into haystack unseen when pursued is no longer a guaranteed way to evade guards because they will generally check haystacks with their swords. If there is just one or two guards you may be able to slip out of the opposite side of the haystack to them and sneak away though.

Last known position mechanics might be new to Assassin's Creed, but they're pretty old in general. I remember clearing out enemy camps in the original Far Cry by attacking from afar; then, as the occupants of the camp headed towards your position, always preferring a dry-land route, you could head into the water and skirt around them.
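The mechanic is simple enough to sketch (illustrative names only, not how any shipping game implements it): while the target is visible you keep refreshing the remembered position; once line of sight is lost, you investigate that remembered spot rather than the target's true location.

```cpp
// Sketch of a last-known-position mechanic.
struct Vec2 { float x = 0, y = 0; };

struct Pursuer
{
    bool hasLastKnown = false;
    Vec2 lastKnown;

    void update(const Vec2& targetPos, bool canSeeTarget)
    {
        if (canSeeTarget)
        {
            lastKnown = targetPos;        // memory refreshed every time we see them
            hasLastKnown = true;
            // chase targetPos directly
        }
        else if (hasLastKnown)
        {
            // head to lastKnown and search the area (check haystacks, etc.)
        }
        else
        {
            // back to patrol
        }
    }
};

int main()
{
    Pursuer guard;
    guard.update({10, 5}, true);          // player spotted
    guard.update({12, 5}, false);         // player dived into a haystack: guard searches {10, 5}
}
```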
 
Assassin's Creed utilised the 'last known position' mechanic so even when AIs lose line of sight, they'll head to where you last were and if you're still not visible, they will begin to investigate the area. So jumping into haystack unseen when pursued is no longer a guaranteed way to evade guards because they will generally check haystacks with their swords. If there is just one or two guards you may be able to slip out of the opposite side of the haystack to them and sneak away though.

Last known position mechanics might be new to Assassin's Creed, but they're pretty old in general. I remember clearing out enemy camps in the original Far Cry by attacking from afar; then, as the occupants of the camp headed towards your position, always preferring a dry-land route, you could head into the water and skirt around them.

A worthy point. I don't doubt that any solution can be engineered. I've only done indie/amateur stuff in my time, and I updated everything every fixed update; I've yet to build a solid state-based behaviour system, and I'm still working on my first one. But at the end of the day, what constitutes 'complex' AI could be defined as a great many things. Eventually your AI thread needs to sync up with your game code, and if you can't budget for that, the game needs to slow down.

I really don't know what else to say, since AAA AI programming is beyond me. I imagine it's not always the CPU cycles you need to burn through that slow the thread down; it might also be that the code is hitting main memory a lot more, because it's trying to access so many different things to check against that it's getting cache misses left, right and centre.
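Something like this is the kind of sync point I'm picturing (a rough sketch of the general idea only, with invented names, not any specific engine): the game thread publishes a copy of the state it's happy for the AI to read, the AI thread works on that copy at its own pace, and its decisions are applied back at a defined point in the game update.

```cpp
// Sketch of syncing an AI thread with game code via snapshots.
#include <mutex>
#include <vector>

struct WorldSnapshot { std::vector<float> agentPositions; int tick = 0; };
struct AIDecision    { int agentId; int action; };

struct AISync
{
    std::mutex m;
    WorldSnapshot latest;                 // written by game thread, read by AI thread
    std::vector<AIDecision> outbox;       // written by AI thread, drained by game thread

    void publish(const WorldSnapshot& s)  { std::lock_guard<std::mutex> l(m); latest = s; }
    WorldSnapshot grab()                  { std::lock_guard<std::mutex> l(m); return latest; }
    void post(const AIDecision& d)        { std::lock_guard<std::mutex> l(m); outbox.push_back(d); }
    std::vector<AIDecision> drain()
    {
        std::lock_guard<std::mutex> l(m);
        std::vector<AIDecision> out;
        out.swap(outbox);
        return out;
    }
};

int main()
{
    AISync sync;
    sync.publish({{1.0f, 2.0f}, 42});     // game thread, once per fixed update
    auto view = sync.grab();              // AI thread works on this copy at its own pace
    sync.post({0, 7});                    // AI thread result
    auto decisions = sync.drain();        // game thread applies these at its sync point
    (void)view; (void)decisions;
}
```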
 
There's an old presentation from Naughty Dog out there about AI pathfinding using ray-casting. I also remember something about it being almost free on PS3's SPEs. No doubt on GPU it should be even cheaper, but latency between CPU and GPU could be interesting, and again I am curious about how they approached this on PC vs console.

This is old (GDC08); I'm fairly certain there is a more detailed one out there about the pathfinding, but even here they show what they did on SPU vs PPU:

http://www.naughtydog.com/docs/Naughty-Dog-GDC08-UNCHARTED-Tech.pdf

EDIT: here's something more recent:
http://www.gamasutra.com/view/news/217215/Video_Ellies_buddy_AI_in_The_Last_Of_Us.php
 
I've only done indie/amateur stuff in my time, and I updated everything every fixed update; I've yet to build a solid state-based behaviour system, and I'm still working on my first one. But at the end of the day, what constitutes 'complex' AI could be defined as a great many things.

I've been dabbling with AI since the Commodore 64, programming AIs to work around a 2D 20x20 grid with different squares representing different physical properties (flat land, water, trees, etc.), and I wrote a Boulderdash-style game (basic gravity, grid-based motion) on the Amiga where I programmed AIs that worked together to defeat you. I've also done some AI behavioural programming at work, although little that would be useful in a game.

Eventually your AI thread needs to sync up with your game code, and if you can't budget for that, the game needs to slow down.

I recommend you read Game Engine Architecture. It will make you think very differently about how you're designing your game engine. There are also a huge amount of books dedicated to AI in video games. Avoid the pure AI stuff, they really won't be of any use.
 
An AI loop like this would be costly in terms of CPU cycles (and unrealistic, because people don't evaluate the world constantly). What you really want is a halfway house where an AI is idling, i.e. almost operating on the subconscious (follow this path to patrol this area, eat this food, etc.) but not really 'thinking', and where the really computationally intensive stuff only kicks in if triggered. Ray casts for line of sight aren't really that expensive except in very complicated scenes; Paradroid on the C64 was doing this in 2D at four-degree radials at 30fps.
That's kinda dumb AI. If they get a path and follow it without constantly evaluating their surroundings and changes in the game state, they aren't really being intelligent. They'd start on a walk and merrily proceed through a stream of bullets crossing their path, getting mowed down because they weren't performing constant evaluations. A smart agent would start on a path but be constantly testing the proximity of surroundings, players and threats, ready to change its mind at any point. You can either do that per agent, or as an overall game state. Either way you need evaluations more frequently than every time a job has concluded.

MGSV is a good example of this type of AI. The guards go about their business and are easy to evade, distract and fool if they are not alerted. It's not unlike simulating the adrenal reaction and heightening of the senses caused by something unusual, but without having to simulate the biology.
MGSV suits this AI. However, the question Hesido presented was how it's at all possible that AI can slow down the framerate. The answer is by being computationally expensive. It is possible to have simple AI that's undemanding and wouldn't impact the framerate, but it's also possible to have AI that brings the world's most powerful processors to their knees, leaving framerates in the single digits regardless of GPU.
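For what it's worth, the "constantly testing, ready to change its mind" part doesn't itself have to be the expensive bit: a cheap per-frame proximity test can just flag that a full re-evaluation is needed. A sketch with invented names and an invented 5m threshold:

```cpp
// Sketch: the per-frame check is a cheap squared-distance test against nearby
// threats; only when it trips do we pay for a full re-evaluation/re-path.
#include <vector>

struct Vec2 { float x, y; };

static bool threatNearby(const Vec2& pos, const std::vector<Vec2>& threats, float radius)
{
    for (const auto& t : threats)
    {
        float dx = t.x - pos.x, dy = t.y - pos.y;
        if (dx * dx + dy * dy < radius * radius)   // cheap: no sqrt, no raycast
            return true;
    }
    return false;
}

static void replan(Vec2 /*pos*/) { /* expensive: raycasts, new path request (stub) */ }

int main()
{
    Vec2 agent{0, 0};
    std::vector<Vec2> incomingFire = { {3, 1} };

    for (int frame = 0; frame < 60; ++frame)
    {
        // ...follow current path...
        if (threatNearby(agent, incomingFire, 5.0f))
        {
            replan(agent);                         // only now do the costly thinking
            break;
        }
    }
}
```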
 