Garbage collectors suck for games

Frank

Because you have very little, if any, control over memory usage and, even more importantly, over perceived lag!

Your game will freeze every time the garbage collector kicks in.
 
I know nothing of the topic. But could this have something to do with the hitching and stuttering I've encountered on a number of games released since about 2007?
 
Because you have very little, if any, control over memory usage and, even more importantly, over perceived lag!

Your game will freeze every time the garbage collector kicks in.

Then either the game is not properly multi-threaded or the garbage collector is very poorly designed.
 
Can't you trigger the garbage collector yourself at regular intervals, say once per frame? What language/framework is this?
Try calling System.gc() in Java.

Deferring garbage collection of a lot of stuff has a lower amortized cost than collecting very frequently and only sorting out a little each time. For interactive applications that lower amortized cost is not as important as smooth frame timing, as you've noticed.
 
Some garbage collectors have incremental and cooperatively-scheduled cleanup routines. Those are compatible with games: you just give them "spare time" at non-critical moments.
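As a rough C# illustration of the "spare time" idea (.NET's collector isn't incremental in the same sense, but you can hand it a non-critical moment explicitly; the method name below is made up):

[code]
using System;

static class GcSpareTime
{
    // Call this at a non-critical moment (loading screen, level transition,
    // pause menu) so the collection doesn't land in the middle of gameplay.
    public static void CollectDuringDowntime()
    {
        GC.Collect();                  // full, blocking collection now
        GC.WaitForPendingFinalizers(); // run finalizers while we still have spare time
        GC.Collect();                  // reclaim anything those finalizers released
    }
}
[/code]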
 
Java's System.gc() call does not force the garbage collector to run; it merely suggests that it may run. In most JVMs this call is no different from doing a Thread.sleep() or Thread.yield(), and at no time will it force the GC to run.
 
Frank, my tongue is planted firmly in my cheek when I [strike]say[/strike] type this: I'm waiting for the punchline. :p
 
As long as your GC is not too ancient and you can trigger it, there's no reason it would have any worse penalty than 'standard' (plain new/delete) manual memory management.

Unfortunately I've not found such a feature yet; I'm waiting for the D2 GC to improve so I can use it in games.
(Still porting my code to D2 because it's soooo much better than C++ [ok not hard :p])
 
I could be wrong but I doubt modern GCs are designed around the concept of low (or at least bounded) latency.
 
I could be wrong but I doubt modern GCs are designed around the concept of low (or at least bounded) latency.
You are wrong as far as .NET is concerned. For .NET 4 they have added a feature that reduces the latency of GC for old objects. That said, there are a couple of techniques one can employ to minimise the impact of GC: allocate large and long-lived objects at app start or when latency isn't an issue, keep frequently allocated objects small, and allocate objects in local scope instead of global scope where possible. There may be more that I don't remember.
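A rough C# sketch of those allocation patterns (Particle and ParticlePool are made-up names, not any real engine API): keep the frequently used objects as small value types, and allocate the big long-lived buffer once at startup so the GC never has to touch it mid-game.

[code]
struct Particle               // small, frequently used object: a value type adds no GC work
{
    public float X, Y, Vx, Vy;
}

sealed class ParticlePool
{
    // Large, long-lived array allocated once at app start (when latency isn't an issue)
    // and then reused for the lifetime of the game instead of allocating per frame.
    private readonly Particle[] _particles;
    private int _next;

    public ParticlePool(int capacity) { _particles = new Particle[capacity]; }

    public int Spawn(float x, float y)
    {
        int i = _next++ % _particles.Length;  // overwrite the oldest slot
        _particles[i].X = x;
        _particles[i].Y = y;
        _particles[i].Vx = 0.0f;
        _particles[i].Vy = 0.0f;
        return i;                             // hand out an index, not a heap reference
    }
}
[/code]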
 
You are wrong as far as .NET is concerned. For .NET 4 they have added a feature that reduces the latency of GC for old objects. That said, there are a couple of techniques one can employ to minimise the impact of GC: allocate large and long-lived objects at app start or when latency isn't an issue, keep frequently allocated objects small, and allocate objects in local scope instead of global scope where possible. There may be more that I don't remember.

Now that .NET has a decent GC, what about managed DX11? An auto-vectorizing JIT compiler or extensions?
 
I've argued for years that for non-time-critical code (read: most of the gameplay code), GC is a win. If you can remove the time spent on stupid pointer bugs and spend it on better gameplay logic and on optimizing the more performance-sensitive code, I believe you have a net win.

My estimate is that most programmers can produce code 4x faster in C# than in C++. The ratio drops off as the code matures, but it leaves scope for so much more experimentation.

Having said that, more recently I've been using GC'd languages in conjunction with native code in a non-game setting, and there are still compromises you need to be willing to make. I still see enormous variance in the execution time of managed code. This is especially true if you cap memory usage at reasonable levels. In the .NET runtime the overhead of the managed-to-unmanaged transition is so high that you have to make smart decisions about where and how you do it.
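To illustrate the transition-cost point with a hedged C# sketch (the native DLL and entry point below are hypothetical, purely for illustration): the usual trick is to batch work so you cross the managed/unmanaged boundary once per frame instead of once per object.

[code]
using System.Runtime.InteropServices;

static class NativeBatching
{
    // Hypothetical native call; "game_native.dll" and its entry point are made up.
    // Every P/Invoke here pays the managed-to-unmanaged transition cost.
    [DllImport("game_native.dll")]
    private static extern void UpdateTransforms(float[] matrices, int count);

    // Expensive: one boundary crossing per object.
    public static void UpdateOneByOne(float[][] perObjectMatrix)
    {
        foreach (float[] m in perObjectMatrix)
            UpdateTransforms(m, 1);
    }

    // Cheaper: pack everything into one flat buffer and cross the boundary once.
    public static void UpdateBatched(float[] packedMatrices, int objectCount)
    {
        UpdateTransforms(packedMatrices, objectCount);
    }
}
[/code]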

I still think it's a win, especially on larger projects where not everyone is a star.
 
I could be wrong but I doubt modern GCs are designed around the concept of low (or at least bounded) latency.
Lua IIRC also has an incremental garbage collector. This is important for its embedded use in games.

I agree that it makes sense to use GC for "non-time-critical code", and I'd extend that to note that for time-critical stuff you can't really allocate at all per frame, so you basically end up statically allocating everything or at worst letting a vector grow to the right size, etc. You are *not* allocating/deallocating these data structures frequently though, so the point here is that either system would technically work. There are a few cases in which a nice scalable multithreaded allocator is useful in kernels, but normally you just want to avoid all memory allocation/deallocation in these places.
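For instance, a minimal C# sketch of the "let it grow to the right size" approach (RenderCommand and FrameCommandBuffer are placeholder names): the list grows to a high-water mark during the first few frames, and Clear() keeps its capacity, so steady-state frames allocate nothing.

[code]
using System.Collections.Generic;

struct RenderCommand { public int MeshId; public int MaterialId; }

sealed class FrameCommandBuffer
{
    // Created once; List<T>.Clear() keeps the backing array, so after the buffer
    // has grown to its working size there are no further allocations per frame.
    private readonly List<RenderCommand> _commands = new List<RenderCommand>(1024);

    public void Begin()                 { _commands.Clear(); }
    public void Push(RenderCommand cmd) { _commands.Add(cmd); }
    public List<RenderCommand> Commands { get { return _commands; } }
}
[/code]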
 
I can hardly imagine that a general-purpose garbage collector could be efficient for things like complex games.

Problems like memory fragmentation are already solved by data-oriented code, and I don't buy that the quality of the code increases just because some of the logic of memory management is automated.
 
I can hardly imagine that a general-purpose garbage collector could be efficient for things like complex games.
"Complex games" generally aren't allocating and deallocating memory all of the place, so there's no reason why it couldn't be reasonable efficient.

Problems like memory fragmentation are already solved by data-oriented code, and I don't buy that the quality of the code increases just because some of the logic of memory management is automated.
I won't argue with you on your latter point, but I'd hesitate to call memory fragmentation ever "solved", regardless of your code and data structure. If you're doing anything non-trivial, you always have to worry about fragmentation to some extent although obviously you should design your algorithms to reduce the severity of it.
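One common way to reduce the severity, sketched here in C# under the assumption of fixed-size allocations (FixedBlockArena is a made-up name): reserve one big buffer up front and hand out same-sized slices of it, so freeing and reusing blocks can never fragment the buffer.

[code]
using System;
using System.Collections.Generic;

sealed class FixedBlockArena
{
    private readonly byte[] _buffer;     // single large allocation, made once up front
    private readonly int _blockSize;
    private readonly Stack<int> _freeOffsets = new Stack<int>();

    public FixedBlockArena(int blockSize, int blockCount)
    {
        _blockSize = blockSize;
        _buffer = new byte[blockSize * blockCount];
        for (int i = blockCount - 1; i >= 0; i--)
            _freeOffsets.Push(i * blockSize);
    }

    // All blocks are the same size, so any free block fits any request:
    // there is no fragmentation within the arena by construction.
    public int Rent()              { return _freeOffsets.Pop(); }
    public void Return(int offset) { _freeOffsets.Push(offset); }

    public ArraySegment<byte> Slice(int offset)
    {
        return new ArraySegment<byte>(_buffer, offset, _blockSize);
    }
}
[/code]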
 
"Complex games" generally aren't allocating and deallocating memory all of the place, so there's no reason why it couldn't be reasonable efficient.
My experience in the field has been broadly different.


I won't argue with you on your latter point, but I'd hesitate to call memory fragmentation ever "solved", regardless of your code and data structure. If you're doing anything non-trivial, you always have to worry about fragmentation to some extent although obviously you should design your algorithms to reduce the severity of it.
More work for the programmers... spare us, automate it please! ;-p
 
My experience in the field has been broadly different.
Depends on the game I guess. You certainly can't get away with frequent and non-custom memory allocation on consoles :)

More work for the programmers... spare us, automate it please! ;-p
Hehe sure, well obviously you concentrate most on the performance-critical kernels, but it's something to always keep in mind when designing data structures and algorithms.
 