CPU budgeting for games

Wasn't sure where to place this topic, so I landed here just because we have AAA console devs on the forum.

So I was just watching a presentation on AI, and they mentioned that the budget for AI is now approximately 1 ms, with the rest going to rendering and "other". I assume "other" means physics, sound, networking, etc.

I'm just curious what the trends have been for budgeting CPU time: are we moving towards budgeting more for rendering, or are we seeing a move towards AI and other systems?
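
For scale: at 60 fps a whole frame is about 16.7 ms, so a 1 ms AI budget is roughly 6% of the frame. A minimal sketch of what a per-system budget check might look like (all numbers and names here are illustrative, not from the presentation):

```cpp
#include <chrono>
#include <cstdio>

// Hypothetical per-system budgets at 60 fps. Illustrative numbers only.
constexpr double kFrameMs  = 1000.0 / 60.0; // ~16.67 ms total
constexpr double kAiMs     = 1.0;           // ~6% of the frame
constexpr double kRenderMs = 10.0;
constexpr double kOtherMs  = kFrameMs - kAiMs - kRenderMs; // physics, audio, net...

// Times a system and warns when it blows its budget.
template <typename Fn>
void runWithBudget(const char* name, double budgetMs, Fn&& fn) {
    auto t0 = std::chrono::steady_clock::now();
    fn();
    auto t1 = std::chrono::steady_clock::now();
    double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
    if (ms > budgetMs)
        std::printf("%s over budget: %.2f ms (budget %.2f ms)\n", name, ms, budgetMs);
}

// Usage: runWithBudget("AI", kAiMs, [&] { /* update AI agents */ });
```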


 
So far rendering has been king, and with VR that might not change, but with the new APIs we might have some spare time for other things.
Since you're selling a visual experience, your primary method of conveying information and marketing your game will be what's on screen, which naturally leads to most of the time going to rendering.

Now if consoles had CPUs worth the name, it might be better ;p
 
Yeah, in general the state of computer AI is abysmal, IMO. In very small-scale games with limited AI active at any given moment it's passable-ish. In larger-scale games (like open-world games) with many active agents, you can really see how absolutely brain-dead the AI is.

Note that this is highly genre dependent as well. An RTS like StarCraft 2 spends significantly more CPU cycles on AI, so much so that CPU power rather than GPU power is the prime limiter once unit counts start to ramp up. It also helps that it's not something the developers ever have to worry about trying to port to rather CPU-limited consoles.

IMO, I'd love to see developers dedicate more time to AI. Especially in the cases where they plan to have large numbers of active NPCs. Someday, maybe we won't have flocks of sheep wandering around cities masquerading as humans. :p Although the AI in this case isn't even up to the level of a flock of sheep.

But as Roderic mentioned, new APIs might allow more time to be dedicated to AI, at least on PC. AOTS is a good example (although again an RTS) of a game that graphically taxes the GPU while still being able to spend significant CPU cycles on an excellent AI.

Regards,
SB
 
Note that this is highly genre dependent as well. An RTS like StarCraft 2 spends significantly more CPU cycles on AI, so much so that CPU power rather than GPU power is the prime limiter once unit counts start to ramp up. It also helps that it's not something the developers ever have to worry about trying to port to rather CPU-limited consoles.
Is it AI or just DX9/draw calls?
 
Is it AI or just DX9/draw calls?

It's mostly the AI. Pathfinding especially starts to ramp up with higher unit counts, particularly when you consider that the pathfinding isn't limited to a certain radius around the unit/camera (like Diablo 3, for example) and has to be able to consider the entire map. It also has to deal with player-placed buildings that can not only alter a path but potentially block some paths completely. And not just buildings: units in SC2 can also alter and block pathing, so the pathfinding has to deal with that as well. So in the late game, even in just a 2-player match, you can have hundreds of units and buildings each potentially affecting the pathing of hundreds of units.
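
To make the cost concrete, here's a minimal textbook sketch of grid A* with dynamic blockers. This is not SC2's actual pathfinder, just an illustration of why the work grows with map size and unit count:

```cpp
#include <climits>
#include <cstdint>
#include <cstdlib>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

struct Grid {
    int w, h;
    std::vector<uint8_t> blocked; // 1 = a building or unit occupies the cell

    bool passable(int x, int y) const {
        return x >= 0 && y >= 0 && x < w && y < h && !blocked[y * w + x];
    }
};

// 4-connected A* with unit step cost and Manhattan heuristic.
// Returns path length in steps, or -1 if the goal is unreachable.
int aStar(const Grid& g, int sx, int sy, int tx, int ty) {
    auto idx  = [&](int x, int y) { return y * g.w + x; };
    auto heur = [&](int x, int y) { return std::abs(x - tx) + std::abs(y - ty); };

    using Node = std::pair<int, std::pair<int, int>>; // (f = g + h, (x, y))
    std::priority_queue<Node, std::vector<Node>, std::greater<Node>> open;
    std::vector<int> best(g.w * g.h, INT_MAX); // cheapest cost found so far

    best[idx(sx, sy)] = 0;
    open.push({heur(sx, sy), {sx, sy}});

    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    while (!open.empty()) {
        auto [f, xy] = open.top(); open.pop();
        auto [x, y]  = xy;
        int gCost = best[idx(x, y)];
        if (x == tx && y == ty) return gCost;
        if (f > gCost + heur(x, y)) continue; // stale queue entry, skip
        for (int d = 0; d < 4; ++d) {
            int nx = x + dx[d], ny = y + dy[d];
            if (!g.passable(nx, ny)) continue;
            if (gCost + 1 < best[idx(nx, ny)]) {
                best[idx(nx, ny)] = gCost + 1;
                open.push({gCost + 1 + heur(nx, ny), {nx, ny}});
            }
        }
    }
    return -1; // every route is blocked
}
```

Every building placed or unit parked can flip `blocked` cells and invalidate cached paths, forcing searches like this to re-run for many units in the same frame.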

Of course, all of that is compounded by DX9 draw calls.

Regards,
SB
 
At initial release, draw calls were a significant part of the CPU cost, due to a complete lack of efficiency.
Out of curiosity, who is responsible for writing the draw calls, in terms of roles? I understand that graphics people are most definitely working on shader code, but are they responsible for everything in render() as well?


 
So will moving past 8 cores on consoles give more CPU time to AI, or will it be used for rendering? How about on the PC side, with Zen being 8 cores + hyperthreading, and rumors of Intel moving the i7 to 6/8 cores with hyperthreading in the fall at sane prices?
 
So will moving past 8 cores on consoles give more CPU time to AI, or will it be used for rendering? How about on the PC side, with Zen being 8 cores + hyperthreading, and rumors of Intel moving the i7 to 6/8 cores with hyperthreading in the fall at sane prices?
I guess it depends on what you need it for. I don't know if many people here would agree, but it must be somewhat easier to make a game multiplayer, where your opponents are variously smarter and worse than you, than it is to design AI.

If the experience requires heavy curation and is tied to some sort of story, or some sort of open-world experience, then I guess AI is fairly important. I still see everyone pushing graphics, though. AI will always take a back seat.
 
Both AI and good networking are complete buggers IMO! Believing one is the 'easier solution' is just lulling yourself into a false sense of security. "A few people running around a football pitch" took me two months to solve. Moving on tiles in a regular grid continues to throw up network synchronisation problems, although TBH if I wasn't grid based it'd actually be easier, just passing vectors. But no matter what you do, there's always something (incredibly frustratingly) difficult about it. And debugging networking is far from easy. I suppose that's a little different when you're an expert network developer who knows networks inside and out, but then you can argue the same for AI too. ;)
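
To make the grid-sync point concrete, here's a minimal sketch of one common approach: fixed-size, sequence-numbered move commands with an authoritative receiver. The message layout is invented for illustration, not taken from any particular engine:

```cpp
#include <cstdint>

// Hypothetical wire format for a grid-move command, invented for
// illustration. Fixed-size and sequence-numbered so the receiver can
// detect loss and reordering instead of chasing silent desyncs.
#pragma pack(push, 1)
struct MoveCmd {
    uint32_t seq;      // per-client sequence number
    uint32_t entityId; // which unit is moving
    int16_t  tileX;    // destination tile, not a raw position vector,
    int16_t  tileY;    // so both sides land on exactly the same cell
};
#pragma pack(pop)

// Receiver side: apply only commands that advance the sequence; anything
// older is a duplicate or a stale reorder and can be dropped safely.
bool shouldApply(uint32_t lastSeq, const MoveCmd& cmd) {
    return cmd.seq > lastSeq; // wrap-around handling omitted for brevity
}
```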
 
AI is hard, but games don't have that; there's no intelligence in the behaviour, it's just routine + reaction state machines. (No ability to learn.)
This. Game AI is all about faking behaviour. The reason networking problems still exist is that publishers do not want to spend money on robust solutions. Reliable synchronicity is a solved problem; the issues stem from cost and implementation, which vary depending on the requirements for the volume and frequency of data updates.
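
To make that concrete, here's a minimal sketch of the kind of "routine + reaction" state machine most game AI boils down to (states and thresholds invented for illustration):

```cpp
// Minimal reaction state machine: the "faked" behaviour described above.
// No learning, just transitions on observed inputs.
enum class NpcState { Patrol, Chase, Attack, Flee };

struct Npc {
    NpcState state  = NpcState::Patrol;
    float    health = 100.0f;
};

void updateNpc(Npc& npc, float distToPlayer, bool canSeePlayer) {
    switch (npc.state) {
    case NpcState::Patrol:
        if (canSeePlayer)           npc.state = NpcState::Chase;
        break;
    case NpcState::Chase:
        if (!canSeePlayer)          npc.state = NpcState::Patrol;
        else if (distToPlayer < 2)  npc.state = NpcState::Attack;
        break;
    case NpcState::Attack:
        if (npc.health < 25)        npc.state = NpcState::Flee;
        else if (distToPlayer > 3)  npc.state = NpcState::Chase;
        break;
    case NpcState::Flee:
        if (!canSeePlayer)          npc.state = NpcState::Patrol;
        break;
    }
}
```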

@Shifty Geezer - I strongly recommend looking through the networking documentation of the Source engine. Even the high level overview will give you things to think about. Networking is a solved problem so don't try to re-invent the wheel. On AI there are a lot of very good books aimed at game developers - you can ignore the pure AI research. I'd recommend looking at books by Ian Millington and/or John Funge but there are no doubt plenty of others and I'm sure most cover the fundamentals.
 
Out of curiosity, who is responsible for writing the draw calls, in terms of roles? I understand that graphics people are most definitely working on shader code, but are they responsible for everything in render() as well?
My cynical view is that artists and level designers are responsible for most of the draw calls. We rendering programmers write code that contains draw calls, but we don't have control over the content. It's mostly the content that dictates the draw call count.

Unless of course you do GPU-driven rendering. In that case you have a pretty much fixed number of draw calls (and fixed CPU cost) regardless of content. But this doesn't solve the GPU bottleneck: artists and level designers can still bog down the GPU. When you don't have a CPU bottleneck, it becomes easier to spam huge amounts of objects. However, it is impossible to perfectly LOD objects at distance (= keep triangle size in pixels constant, no matter the viewing distance). Unless you combine far-away objects together as big impostors (low poly and/or alpha clip), you will still hit a GPU bottleneck when there are lots of objects in the background.
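
To illustrate the LOD/impostor point, a rough sketch of the usual approach: pick a LOD from the object's projected size on screen and switch to an impostor below some threshold (thresholds invented for illustration):

```cpp
#include <cmath>

// Approximate screen-space size of a bounding sphere, in pixels.
// Standard small-angle derivation: radius / (distance * tan(fov/2)),
// scaled by the viewport height.
float projectedSizePx(float radius, float distance,
                      float vFovRadians, float viewportHeightPx) {
    return (radius / (distance * std::tan(vFovRadians * 0.5f))) * viewportHeightPx;
}

// Thresholds invented for illustration. Below a few pixels, an individual
// mesh is wasted work, which is where merged impostors (low poly / alpha
// clip) come in.
int selectLod(float sizePx) {
    if (sizePx > 256.0f) return 0; // full detail
    if (sizePx > 64.0f)  return 1;
    if (sizePx > 8.0f)   return 2;
    return 3;                      // impostor / merged cluster
}
```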

In many engines there are lots of dependencies between the data structures of the game and the renderer: the renderer directly reads data (such as transform matrices) from the game objects. Personally I prefer complete data separation. Each kind of data (such as transform data) is separated into its own data structure (preferably a linear array per data type = SoA layout). There is a culling procedure (likely multithreaded and vectorized) that reads the data relevant to rendering and prepares linear data (with zero indirections) for the renderer. The renderer goes through this data and pushes the draw calls to the graphics API. This kind of design is best suited for DirectX 11 and OpenGL. DirectX 12 and Vulkan have efficient support for multiple command lists, which allows the culling threads/jobs to directly output data to graphics API command buffers.
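
A stripped-down sketch of that data separation (types simplified and single-threaded; a real version would be multithreaded and vectorized as described):

```cpp
#include <cstdint>
#include <vector>

// SoA: one linear array per data type, instead of fat game objects.
struct TransformSoA {
    std::vector<float> x, y, z; // positions
    std::vector<float> radius;  // bounding sphere radii
};

struct DrawCommand {            // zero-indirection data for the renderer
    uint32_t meshId;
    uint32_t transformIndex;
};

// Stand-in visibility test; a real one checks the sphere against the six
// frustum planes. Here: trivially accept anything in front of the camera,
// pretending the camera looks down +z from the origin.
bool sphereVisible(float /*x*/, float /*y*/, float z, float r) {
    return z + r > 0.0f;
}

// Culling pass: reads game-side data, writes a flat list the renderer can
// walk with no pointer chasing before issuing the actual API draw calls.
std::vector<DrawCommand> cullToDrawList(const TransformSoA& t,
                                        const std::vector<uint32_t>& meshIds) {
    std::vector<DrawCommand> out;
    out.reserve(meshIds.size());
    for (uint32_t i = 0; i < meshIds.size(); ++i)
        if (sphereVisible(t.x[i], t.y[i], t.z[i], t.radius[i]))
            out.push_back({meshIds[i], i});
    return out;
}
```

The point is that the renderer only ever touches the flat `DrawCommand` list, so draw submission stays a tight loop regardless of how messy the game-side data is.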
 
@Shifty Geezer - I strongly recommend looking through the networking documentation of the Source engine.
I did. ;) If I were creating a typical FPS or whatever, it wouldn't be an issue. But there's always stuff one may want to do that no-one else has done (or at least written a solution for), and it doesn't matter whether it's AI or networking or some new physical something-or-other: it all ends up difficult. Where Iroboto says multiplayer is easier to do than AI, I know people who say, "do single player games because networking is a nightmare," and they're all right. When you are struggling to get AI behaving itself, multiplayer looks stupidly easy, and when your networking code is bugging out just in getting the frickin' server started (a current issue I have with the library I'm using, where after a couple of days it appears there's a bug), you think to yourself this would be a lot easier as a single-player game. I know a guy whose game was refused release on PSN because of multiplayer bugs, and they had almost no means to debug it without multiple SDKs.

In short, there's no such thing as an easy game. ;)
 
In short, there's no such thing as an easy game. ;)
I think that's coding in general! When you don't know how to approach a problem and have nothing to draw on, it can be very frustrating. That's why sometimes it's really helpful to have a varied bunch of dead-tree knowledge. I remember when I was first trying to do pathfinding: even on a basic grid it was taking way more cycles than I had. But a book with techniques for different types of pathfinding, or how to fake it ;), really helped. It's definitely more satisfying to solve problems like this yourself, but when you have a lot of code to write, solving a hundred problems isn't the way to feel like you're progressing. :nope:
 