Because you have very little, if any, control over memory usage, and even more importantly: perceived lag!
Your game will freeze every time the garbage collector kicks in.
Then either the game is not properly multi-threaded or the garbage collector is very poorly designed.
> I could be wrong but I doubt modern GC's are designed around the concept of low (or at least bounded) latency.

You're wrong about .NET: .NET 4 added a background collection mode that reduces GC latency for old-generation objects. That said, there are a couple of techniques one can employ to minimise the impact of the GC: allocate large, long-lived objects at app start (or whenever latency isn't an issue), keep frequently allocated objects small, and allocate objects in local scope instead of global scope where possible. There may be more that I don't remember.
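The "allocate long-lived objects up front" advice can be sketched as an object pool. This is a generic C++ illustration (the idea is language-agnostic, even though the comment is about .NET); the `Particle`/`ParticlePool` names are made up for the example, not from any real engine:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// A minimal fixed-capacity object pool: every Particle is allocated once,
// up front. acquire()/release() only shuffle indices, so no allocation
// (and, in a managed language, no GC pressure) happens mid-frame.
struct Particle {
    float x = 0, y = 0, life = 0;
};

class ParticlePool {
public:
    explicit ParticlePool(std::size_t capacity) : storage_(capacity) {
        free_.reserve(capacity);
        for (std::size_t i = 0; i < capacity; ++i)
            free_.push_back(capacity - 1 - i);   // stack of free slot indices
    }

    // Returns nullptr when exhausted instead of growing the pool.
    Particle* acquire() {
        if (free_.empty()) return nullptr;
        std::size_t i = free_.back();
        free_.pop_back();
        return &storage_[i];
    }

    void release(Particle* p) {
        *p = Particle{};                         // reset for reuse
        free_.push_back(static_cast<std::size_t>(p - storage_.data()));
    }

    std::size_t available() const { return free_.size(); }

private:
    std::vector<Particle> storage_;
    std::vector<std::size_t> free_;
};
```

The point is that the only heap allocations happen in the constructor, at a time of your choosing; gameplay code can then churn through particles every frame without touching the allocator or the collector.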
> I could be wrong but I doubt modern GC's are designed around the concept of low (or at least bounded) latency.

Lua, IIRC, also has an incremental garbage collector. This is important for its embedded use in games.
> I can hardly imagine that a generalistic garbage collector could be efficient for things like complex games.

"Complex games" generally aren't allocating and deallocating memory all over the place, so there's no reason why it couldn't be reasonably efficient.
> Problems like memory fragmentation are already solved by data oriented code and I don't buy it that the quality of the code increases just because some logic of memory management is automated.

I won't argue with you on your latter point, but I'd hesitate to ever call memory fragmentation "solved", regardless of your code and data structures. If you're doing anything non-trivial, you always have to worry about fragmentation to some extent, although obviously you should design your algorithms to reduce its severity.
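One common data-oriented way to sidestep fragmentation, as a concrete illustration of what both commenters are gesturing at, is a per-frame linear (bump) arena: allocations are pointer bumps into one contiguous block, nothing is freed individually, and the whole arena rewinds at end of frame, so holes can never accumulate. A minimal sketch (the `FrameArena` name is illustrative; it assumes power-of-two alignment values):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// A per-frame bump allocator: alloc() advances an offset into one
// contiguous buffer; reset() rewinds it once per frame. Because blocks
// are never freed individually, these allocations cannot fragment.
class FrameArena {
public:
    explicit FrameArena(std::size_t bytes) : buffer_(bytes), offset_(0) {}

    // align must be a power of two.
    void* alloc(std::size_t size, std::size_t align = alignof(std::max_align_t)) {
        std::size_t aligned = (offset_ + align - 1) & ~(align - 1);
        if (aligned + size > buffer_.size()) return nullptr;  // frame budget blown
        offset_ = aligned + size;
        return buffer_.data() + aligned;
    }

    void reset() { offset_ = 0; }                 // call at end of frame
    std::size_t used() const { return offset_; }

private:
    std::vector<std::uint8_t> buffer_;
    std::size_t offset_;
};
```

The trade-off is that everything allocated this way must have frame-or-shorter lifetime; anything living longer goes through a different path, which is exactly the kind of lifetime discipline data-oriented code tends to impose.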
> "Complex games" generally aren't allocating and deallocating memory all over the place, so there's no reason why it couldn't be reasonably efficient.

I have broadly different field experience.
> I won't argue with you on your latter point, but I'd hesitate to call memory fragmentation ever "solved" [...]

More work for the programmers... spare us, automatize please! ;-p
> I have broadly different field experience.

Depends on the game, I guess. You certainly can't get away with frequent, non-custom memory allocation on consoles.
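For what "custom memory allocation on consoles" typically means in practice, a classic example is a fixed-size-block allocator: one slab carved into equal blocks, with the free blocks threaded into an intrusive singly linked list. A hedged sketch (the `BlockAllocator` name is made up; it assumes `block_size >= sizeof(void*)` and a block size that is a multiple of the required alignment):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Fixed-size-block allocator: alloc/free are O(1) pointer swaps on an
// intrusive free list, with zero per-block bookkeeping overhead while a
// block is allocated, and deterministic behaviour (no hidden heap calls).
class BlockAllocator {
public:
    BlockAllocator(std::size_t block_size, std::size_t count)
        : slab_(block_size * count), block_size_(block_size), head_(nullptr) {
        // Thread every block onto the free list, front to back.
        for (std::size_t i = count; i-- > 0;) {
            void* block = slab_.data() + i * block_size_;
            *static_cast<void**>(block) = head_;  // store next-pointer in the block itself
            head_ = block;
        }
    }

    void* alloc() {
        if (!head_) return nullptr;               // out of blocks: fail fast
        void* block = head_;
        head_ = *static_cast<void**>(block);
        return block;
    }

    void free(void* block) {
        *static_cast<void**>(block) = head_;
        head_ = block;
    }

private:
    std::vector<std::uint8_t> slab_;
    std::size_t block_size_;
    void* head_;
};
```

Because every block is the same size, freeing and reallocating in any order can never fragment the slab, and running out of blocks is an immediate, debuggable failure rather than a slow death by heap exhaustion.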
> More work for the programmers... spare us, automatize please! ;-p

Hehe, sure. Obviously you concentrate most on the performance-critical kernels, but it's something to always keep in mind when designing data structures and algorithms.