Jokes aside, most games that he mentioned as "high level AF" have it between 4x and sometimes 8x... I've rarely seen one use 16x.
Only Bloodborne does on PS4 as far as I am aware, among AAA games at least.
I can't see why you can't have everything in the simulated game world decoupled from the rendering engine, with the rendering engine only showing its rendition of the world state as quickly as it can.

I don't think you can decouple AI from the update.
Humans typically aren't making 60 individual decisions a second. Have you ever lined up in a Starbucks? Some people can't decide between a latte or a mocha in 15 seconds!!!
What you don't want is physics tied to frame rate or animation tied to AI. Decouple these and everything should be fine.
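Something like the classic fixed-timestep loop, sketched in TypeScript below, is one way to get that separation (the function names and the 33 ms step are invented for illustration, not from any particular engine):

```typescript
// Sketch of a fixed-timestep loop: the simulation (physics, game logic, AI)
// advances in fixed 33 ms steps, while rendering just draws the latest state
// as often as it can. Names and numbers are invented for illustration.

const SIM_STEP_MS = 33; // fixed update rate (~30 Hz), independent of render rate

interface WorldState {
  // positions, velocities, AI state, etc.
}

function simulate(world: WorldState, dtMs: number): void {
  // physics, game logic and AI all advance by the same fixed dt here
}

function render(world: WorldState): void {
  // draw the latest simulated state; may happen at 60, 100, 144 fps...
}

function gameLoop(world: WorldState): void {
  let previous = Date.now();
  let accumulator = 0;

  const frame = () => {
    const now = Date.now();
    accumulator += now - previous;
    previous = now;

    // Run as many fixed steps as real time demands (catches up after a slow frame).
    while (accumulator >= SIM_STEP_MS) {
      simulate(world, SIM_STEP_MS);
      accumulator -= SIM_STEP_MS;
    }

    render(world); // render rate is simply whatever the machine manages
    setTimeout(frame, 0); // a real game would use requestAnimationFrame or similar
  };
  frame();
}
```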
Almost all AAA multiplatform titles that have low AF on PS4 have been patched without any performance penalty.
Usually when I hear decouple, I mean decouple from the fixed update. Meaning: could I save CPU time if I only run the AI in half the frames in which I run physics, positions, effects and animation? If the AI was that advanced, could I give it more time, like 66 ms to complete, while the game still operates at 33 ms? Or 33 ms for the AI to complete while the rest of the game runs at 16 ms? At least this was my understanding of the question.
So if I look at path finding, say the path is 8 steps up and 6 steps left, and say we somehow linked the fixed update so that 1 step was 1 frame or something. Then it would take 8 frames to walk up and 6 frames to walk left. With AI running at half the frame rate, it would start at frame 1 and would have to wait until frame 3 to be updated with the new value of its current position, so it says keep going up. Then it waits until frame 5 and sees it needs to keep going. Then it hits frame 7 and the AI says keep going up. Now you're sitting there through frame 8, waiting until frame 9 before it says to go left.
Anyway, that's how I see it. I know games don't move that fast, but for a game to feel alive I think the AI shouldn't look like it's on happy drugs. People will notice, I think. But then again I could be wrong. Certain things could be separated, but I think certain things should be right up there with the game code. Missile-tracking logic is different from MGSV guard AI logic, for example.
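To make the "AI in half the frames" idea concrete, here's a rough sketch of one way it could look (the Agent shape and step logic are invented, not from any real engine):

```typescript
// Rough sketch: movement/physics tick every fixed update, AI only ticks on
// every other update, so its view of the world can be one step stale
// (hence waiting until frame 9 before the "turn left" decision lands).

interface Agent {
  x: number;
  y: number;
  path: { dx: number; dy: number }[]; // e.g. 8 steps up, then 6 steps left
  nextStep: number;
}

function aiThink(agent: Agent): void {
  // AI looks at the agent's current position and decides whether to keep
  // going up or turn left - only called on odd-numbered frames below
}

function physicsStep(agent: Agent): void {
  const step = agent.path[agent.nextStep];
  if (step) {
    agent.x += step.dx;
    agent.y += step.dy;
    agent.nextStep++;
  }
}

let frame = 0;
function fixedUpdate(agent: Agent): void {
  frame++;
  if (frame % 2 === 1) {
    aiThink(agent); // frames 1, 3, 5, 7, 9, ... - half the rate of physics
  }
  physicsStep(agent); // movement still advances every frame
}
```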
Absolutely. And now what if the time it takes to process one step of AI is 13 ms of CPU time on top of the 15 ms already used for the graphics? The graphics will have to wait until the AI processing is finished, or work at half rate in parallel with the AI code.

When I said decouple, I meant running in its own thread, asynchronous from the renderer. It doesn't need to be tied to the framerate at all (every frame or every nth frame); when it makes a decision, an event is fired (I'm just a JavaScript guy, don't shoot me!) and the decision is passed to the game logic, which updates every frame and displays those animations.
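A sketch of that event-ish approach (purely illustrative: setTimeout stands in for a real worker/thread, and the Decision shape is made up):

```typescript
// Sketch: the AI "thinks" asynchronously and posts decisions onto a queue;
// the game loop drains the queue each frame, so rendering never waits on AI.

type Decision = {
  agentId: number;
  action: "takeCover" | "shoot" | "moveTo";
  target?: [number, number];
};

const decisionQueue: Decision[] = [];

// Stand-in for the AI thread: takes however long it takes, then "fires an
// event" by pushing its result onto the queue.
function aiThinkAsync(agentId: number): void {
  setTimeout(() => {
    decisionQueue.push({ agentId, action: "takeCover" }); // placeholder result
  }, 50); // pretend the evaluation took 50 ms of thinking
}

function gameUpdate(): void {
  // Apply whatever decisions have arrived since the last frame.
  while (decisionQueue.length > 0) {
    const decision = decisionQueue.shift()!;
    // update game state / kick off animations for decision.agentId here
  }
  // ...physics, animation and rendering carry on at full frame rate
}
```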
Path finding is something you can shoot off on a thread so it doesn't bog down the execution. An agent requests a path, the pathfinding thread starts up and returns a list of nodes after a while, and the agent can then act. Then each physics update it'll move along this path, while another agent may be requesting a path from the pathfinding engine.
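As a rough sketch of that request/act pattern (findPath, Node and the hard-coded nodes are placeholders, and a Promise plus setTimeout stands in for a real pathfinding thread or job system):

```typescript
// Sketch: an agent asks for a path, the (conceptually threaded) pathfinder
// answers later with a list of nodes, and the agent walks that list on each
// physics update.

type Node = { x: number; y: number };

function findPath(from: Node, to: Node): Promise<Node[]> {
  return new Promise(resolve =>
    setTimeout(() => resolve([{ x: 0, y: 1 }, { x: 0, y: 2 }, { x: 1, y: 2 }]), 20)
  );
}

class Agent {
  position: Node = { x: 0, y: 0 };
  private path: Node[] = [];

  requestPath(goal: Node): void {
    // Fire and forget; the agent keeps doing whatever it was doing meanwhile.
    findPath(this.position, goal).then(nodes => { this.path = nodes; });
  }

  physicsUpdate(): void {
    // Follow whatever path we currently have; stand still while a request is in flight.
    const next = this.path.shift();
    if (next) this.position = next;
  }
}
```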
Hmm. Might not be explaining this right. Ideally you would want the AI decoupled from the game world so that the actions/problems of the AIs do not impact the world, much in the same way that if you stop moving because you're thinking about a complicated problem, the whole world doesn't stop around you. Equally, you don't want any single AI process (say, one NPC) to consume so much time that other AI processes cease functioning.
I don't know any modern games that approach AI like this; it seems more like a throwback to when games just ran a sequential loop. Now games are multithreaded and there is often a conscious AI decision (take cover, shoot target X, move to position Y, use ability Z, run away, etc.), then many frames (which could be several seconds) where the AI executes that intention, then the AI starts over, unless the AI system has been written in such a way that the AI can make a new decision based on new stimuli.
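The decide-then-execute pattern described there might look roughly like this (intention names and the placeholder methods are invented):

```typescript
// Sketch of "decide, then execute the intention over many frames": the
// expensive decision happens occasionally, each frame just advances it,
// and new stimuli can force a re-decide.

type Intention = "takeCover" | "shootTarget" | "moveToPosition" | "runAway" | "idle";

class Soldier {
  private intention: Intention = "idle";
  private interrupted = false;

  private decide(): void {
    // the costly evaluation: threats, cover, health, orders...
    this.intention = "takeCover"; // placeholder outcome
    this.interrupted = false;
  }

  onStimulus(): void {
    // e.g. grenade lands nearby, cover gets destroyed, target dies
    this.interrupted = true;
  }

  frameUpdate(): void {
    if (this.intention === "idle" || this.interrupted || this.intentionFinished()) {
      this.decide(); // re-think only when needed, not 60 times a second
    }
    this.executeStep(); // cheap per-frame work: steering, animation, firing
  }

  private intentionFinished(): boolean { return false; } // placeholder
  private executeStep(): void { /* move a little further along the current plan */ }
}
```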
Yea, you explained this better than I did. I consider path finding and such part of AI. But you're right, we could still have path finding separate from the AI.
The problem is the need to keep testing, using up CPU time. Let's say you have your agent moving through an urban battle zone. You need ray casts for who you can see and who you can hear, intercept calculations for who you might encounter and who you're trying to encounter, proximity tests for bullets (rays) to determine if one should be keeping low or not, tests against scenery to see if it's been destroyed, tests against scenery for vantage points ahead of ambushes while maintaining cover, etc. You can't have 2 seconds between these evaluations or you'll have idiot AI that runs to cover that's already been blown away and crouches in the middle of an open street.
The AI evaluations can also spike like physics can, so if you don't want massive framerate fluctuations, capping to 30 fps makes sense even when the engine (graphics + AI) can happily run at 50 fps (20 ms CPU time required between frames for drawing + AI calcs).
It's because PC folks can turn it on for "free", but bottlenecks are kinda different when you've got a PCI-E bus adding latency.
idk @Graham correct me pls.
Yeah, path finding is definitely part of AI. Just putting AI onto another thread doesn't stop it requiring CPU time, though, and taking that away from rendering.
You'll want an AI loop of some form evaluating the world. This could be per agent or the whole scene, or a combination (let's say a spatial representation of threats/cover updated every half second). Even with several second execution time on an instruction to an agent, the world needs to be evaluated multiple times in case they need to change their mind (depending on game complexity). And it's the evaluation that's costly. Could be looking at (many) hundreds of ray casts and vector magnitude tests and condition branches.
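A sketch of that kind of shared, half-second evaluation (the ThreatMap shape and everything inside it are invented for illustration):

```typescript
// Sketch: one scene-wide evaluation refreshed every ~500 ms that all agents
// read from, instead of every agent doing hundreds of ray casts per frame.

interface ThreatMap {
  threats: { x: number; y: number; strength: number }[];
  coverPoints: { x: number; y: number; stillIntact: boolean }[];
}

let sharedThreatMap: ThreatMap = { threats: [], coverPoints: [] };
let msSinceEvaluation = 0;

function evaluateScene(): ThreatMap {
  // The costly part: ray casts for line of sight, vector magnitude tests,
  // checks for destroyed scenery, etc. - done once here, not per agent per frame.
  return { threats: [], coverPoints: [] };
}

function aiSystemUpdate(dtMs: number): void {
  msSinceEvaluation += dtMs;
  if (msSinceEvaluation >= 500) {
    sharedThreatMap = evaluateScene();
    msSinceEvaluation = 0;
  }
  // Individual agents then make cheap decisions against sharedThreatMap.
}
```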
But if AI is acting much slower than the fixed update, it's exploitable. Say that for AI to respond to stimuli, something must enter its collision radius; working off the collision trigger, you'd run the AI function. If the game is moving faster than the AI can receive stimuli, it is exploitable, or at least inconsistent; the behaviour would seem buggy. I could dance into the collision radius that should trigger the AI, fire at the AI, and then step out of it freely before the AI would even be given the variables to know to respond.
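That dance-in-and-out exploit, and one common way around it (latching the trigger event for the AI's next tick rather than having the slow AI poll the radius itself), might look roughly like this - names invented:

```typescript
// Sketch: if the slow AI tick polled "is the player in my radius?" itself, a
// player could enter and leave entirely between two ticks and never be
// noticed. Instead, the fast trigger/physics code latches the event, and the
// AI consumes it on its next tick, however late that is.

interface Guard {
  pendingStimuli: string[]; // e.g. "playerEnteredRadius", "tookFire"
}

// Runs at the full fixed-update rate, driven by collision triggers:
function onTriggerEnter(guard: Guard, what: string): void {
  guard.pendingStimuli.push(what); // latch it rather than hoping the AI catches it live
}

// Runs on the slower AI tick (every Nth update):
function guardThink(guard: Guard): void {
  while (guard.pendingStimuli.length > 0) {
    const stimulus = guard.pendingStimuli.shift()!;
    // React to `stimulus`: investigate, raise the alert, return fire - even if
    // the player has already stepped back out of the radius by now.
  }
}
```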
By fixed update do you mean the rate at which the game world updates? It's quite normal, and far more realistic, for AI to operate slower than other events simulated in the game. You don't want AI reacting to bullets immediately after they've been fired, for example, because by the time you've heard a gun discharge close by, the bullet has already missed (or hit) you. People mostly react to things after they've happened. Sometimes many seconds after they've happened.

I want them reacting. Not all games are MGSV.
I'm just going to make up an example with Assassin's Creed: hiding in the haystack when you have guards alerted. They could be right on you, but say for some reason you managed to hit the hide-in-haystack button right when the AI doesn't get an update, even though they are within the range where you shouldn't be able to hide in the haystack. When the AI gets its turn back, it no longer knows where you are (or at least the game code has you hidden). That would be a weird bug, where sometimes it would get you and sometimes not.
Assassin's Creed utilised the 'last known position' mechanic so even when AIs lose line of sight, they'll head to where you last were and if you're still not visible, they will begin to investigate the area. So jumping into haystack unseen when pursued is no longer a guaranteed way to evade guards because they will generally check haystacks with their swords. If there is just one or two guards you may be able to slip out of the opposite side of the haystack to them and sneak away though.
Last known position mechanics are new for Assassin's Creed but also pretty old in general. I remember clearing out enemy camps in the original Far Cry by attacking from afar and then, as the occupants of the camp headed towards your position (always preferring a dry land route), you could head into the water and skirt around them.
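Sketched very roughly (states and names invented), a last-known-position pursuit is basically just not clearing the target when sight is lost:

```typescript
// Sketch: losing line of sight doesn't clear the target, it just freezes the
// position the guard will head to and then search around.

type GuardState = "patrolling" | "pursuing" | "investigating";

class PursuingGuard {
  state: GuardState = "patrolling";
  lastKnownPosition: { x: number; y: number } | null = null;

  update(canSeePlayer: boolean, playerPos: { x: number; y: number }): void {
    if (canSeePlayer) {
      this.state = "pursuing";
      this.lastKnownPosition = { ...playerPos }; // keeps refreshing while visible
    } else if (this.state === "pursuing" && this.lastKnownPosition) {
      // Lost sight: head to where the player last was...
      if (this.reached(this.lastKnownPosition)) {
        this.state = "investigating"; // ...then search nearby (poke the haystacks)
      }
    }
  }

  private reached(p: { x: number; y: number }): boolean {
    return false; // placeholder for an actual distance check
  }
}
```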
I've only done indie/amateur stuff in my time, and I updated AI every fixed update; I've yet to do a solid state behaviour, still working on building my first one. But at the end of the day, what constitutes 'complex' AI could be defined as a great many things.
Eventually your AI thread needs to sync up with your game code, and if you can't budget for that the game needs to slow down.
That's kinda dumb AI. If they get a path and follow it without constantly evaluating their surroundings and changes in the game state, they aren't really being intelligent. They'd start on a walk and merrily proceed through a stream of bullets crossing their path, getting mowed down because they weren't performing constant evaluations. A smart agent would start on a path but be constantly testing the proximity of surroundings, players and threats, ready to change its mind at any point. You can either do that per agent, or as an overall game state. Either way you need evaluations more frequently than every time a job has concluded.

An AI loop like this would be costly in terms of CPU cycles (and unrealistic, because people don't evaluate the world constantly). What you really want is a halfway house where an AI is idling, i.e. almost operating on the subconscious - follow this path to patrol this area, eat this food, etc. - not really 'thinking', and where the really computationally intensive stuff only kicks in if triggered. Ray casts for line of sight aren't really that expensive except in very complicated scenes; Paradroid on the C64 was doing this in 2D at four-degree radials at 30 fps.
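That halfway house could be sketched like this (a toy example, not how MGSV or any other game actually does it):

```typescript
// Sketch: agents idle on a cheap, almost subconscious routine and only run
// the expensive evaluation once something unusual flips them into an alerted state.

class Villager {
  private alerted = false;

  // Cheap, every frame: just keep doing the routine.
  private idleTick(): void {
    // follow the patrol path / play the eating animation / wander
  }

  // Called by trigger volumes, audio events, taking damage, etc.
  onUnusualEvent(): void {
    this.alerted = true; // the "adrenal" switch
  }

  // Expensive, only while alerted: line-of-sight ray casts, threat evaluation...
  private alertedTick(): void {
    // search, flee, call the guards, etc.
  }

  update(): void {
    if (this.alerted) {
      this.alertedTick();
    } else {
      this.idleTick();
    }
  }
}
```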
MGSV suits this AI. However, the question Hesido presented was how it is at all possible for AI to slow down the framerate. The answer is: by being computationally expensive. It is possible to have simple AI that's undemanding and wouldn't impact framerate, but it's also possible to have AI that brings the world's most powerful processors to their knees, leaving framerates in single digits regardless of the GPU.

MGSV is a good example of this type of AI. The guards go about their business and are easy to evade, distract and fool if they are not alerted. It's not unlike simulating the adrenal reaction and heightening of the senses caused by something unusual, but without having to simulate biology.