Is the difficulty of debugging complex games non-linear?

I use peek definition a tonne, though most often as a quick way to get to a class to add a property. I am also a huge fan of refactoring. That said, I prefer my code to be readable and will append info to the name when it improves readability, prefer to start parameters with lower case, etc. Semantic meaning is the primary driver for me.
We start type names with big letters and almost everything else with small letters. This improves readability, and makes stuff like "Renderer &renderer" possible (= the name of the variable is the same as the name of the type). This is the most natural case when you have just one variable of a certain type. This way you don't need to invent some random alternative name for your variable (which often makes it harder for people to understand your code).
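For example (a throwaway sketch, not anyone's real code):

class Renderer
{
public:
    void Clear();
};

void DrawFrame(Renderer &renderer)   // the variable simply reuses the type's name, lower-cased
{
    renderer.Clear();
}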
Wow, I have learned programming a LONG time ago. Didn't even know there was a thing called Hungarian Notation :p
Hungarian notation is ancient (it was popular in the 70s and 80s). But I am sure that some big software companies still use it...
 
You can even use Renderer as a variable name now, even when it is also a type. Before that, I would just use NewRenderer or MyRenderer, but that's no longer necessary. I still use NewRenderer at times, when it makes sense.

I am getting the impression VS wants me to use lower case for everything but properties now though.
 
Most people use Hungarian notation, whether they know it or not. The problem people seem to have is with so-called Systems Hungarian, which tells you about the type of the variable. But the original Hungarian notation "invented" in the Office team was what is now known as Apps Hungarian, where you prefix your variable with intent or unit. A prime example would be prefixing some variables in Excel (where it makes sense, obviously) with row and col so you know that rowFoo+colBar is probably a bug. This lets you distinguish numeric types from each other by providing context for what these integers and whatnot actually stand for. Whether you're in love with camelCase, PascalCase or kernel_normal_form, you probably use it in some capacity.
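A tiny made-up sketch of that Excel-style idea; the intent prefixes make the mistake visible at a glance:

int rowFirst = 3;                  // row index
int colTotals = 7;                 // column index
int rowLast = rowFirst + 5;        // fine: a row plus a row offset
int oops = rowFirst + colTotals;   // compiles fine, but the prefixes say "probably a bug"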

But it's not like systems Hungarian has no merit. When you're actually writing systems or embedded code (which means: C that talks to the hardware) you bloody damn care about the width of the value you're writing to the registers. It's just that most people don't write code at this level (and those who do follow KNF anyway and pray for maintainers to spot problems). So, yeah, if you're writing awesome fancy C++ code, then sure, Hungarian (especially of the systems type) is not for you. But then again, every time you type auto foo = bar(); in your code, perhaps it'd be better to provide some hint about the type you expect to see there. It's easier to validate such assumptions with a consistent style than it is with comments.
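For example (made-up names; GetVisibleObjects() is hypothetical):

#include <cstdint>

uint32_t GetVisibleObjects();                    // hypothetical query

void Example()
{
    auto count = GetVisibleObjects();            // the type is invisible at the call site
    uint32_t countVisible = GetVisibleObjects(); // explicit width and intent...
    auto uVisibleCount = GetVisibleObjects();    // ...or keep auto and hint the expected type in the name
}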
 

I think this is a discussion that illustrates a somewhat deeper concern in software development / programming. We are still in the "crafter" age, as opposed to a formal, industrialised one. In my humble opinion, this is most unfortunate, and a sign of the immaturity of the field. We have yet to move beyond religious debates (not here, in general) about the one true indenting style, spaces vs tabs, where to put const or *. Other fields have sorted out these incipient dilemmas, chosen a path and stuck with it. This is part of what contributes to the difficulties associated with large software projects, such as the ones the OP had in mind.
 
That's because there are so many ways to solve a computing problem; it has much in common with artistry rather than engineering. Just as there's no one true way to structure a novel, there's no one true way to structure code. Different people will have different preferences. Plenty of us have been through formal training at college/university, seen the technical arguments for certain ways of doing things, and then seen plenty of reason to do them differently. I also know of big projects where an academic has come in with grand plans to do everything 'properly' and the end result is unworkable when it comes to getting a product out the door. One of the issues faced is an early design choice that's suboptimal and then hacks to get it to work because a complete re-engineering is economically infeasible (probably one of the major issues affecting repeatedly delayed games that end up being pretty crap when they finally get released).

The practicalities of real life can make implementing and maintaining well-engineered code extremely hard. I'm not convinced there'll ever be a non-"crafter" age for code due to its nature.
 
It's time to stop pre-ordering. I think I'm going to try skipping any multiplayer-focused game unless it's actually had a beta. Otherwise I'll wait a month before purchasing.

The Master Chief Collection is busted up. http://uk.ign.com/articles/2014/11/25/343-apologizes-for-ongoing-master-chief-collection-issues

DriveClub is busted up and you get horribly unacceptable responses from executives like this: http://uk.ign.com/articles/2014/11/...-driveclub-its-no-fun-being-safe-all-the-time

There are lots of well documented buggy games right now. I'm actually a person that believes bug free games are impossible. Bug free software is nearly impossible, and games are insanely complex. That said, a lot of the stuff that's hitting the market right now, I just can't understand how it doesn't get caught. EA Dice shit the bed with BF4 and they lost a huge amount of respect. Since that happened, they've been working overtime fixing BF4, turning it into the product it should have been at launch. They actually fixed the netcode, which makes the game incredibly better. I think they'll be a good company to watch to see if real lessons can be learned and product quality can be improved. Ubisoft should be next on the list.
 
~~~***PREORDER NOW!!!***~~~

Preorder now and get this special day one DLC:

- Stability patch
- Lobby kick armour
- "I was there" tear-stained avatar hat
 
I blame management.

No, really, I do. AAA games (and many other software products) are immensely complex beasts and it's super important that people managing development are brilliant. Most of the time - they aren't. This results in the following, non-exhaustive list of problems:
1. Features are overpromised and underdelivered.
2. Estimates are in no way conservative and so don't account for delays and problems.
3. Crunch is considered a cultural thing and not an abysmal anomaly.
4. A headcount quota is set, and people who shouldn't be writing code do.
5. The most competent developers are promoted to managers even if they lack the soft skills for the job; added sorrow comes from the fact that what made them awesome - tech skills - is now underutilized.
6. Non-technical types are brought in to manage the project with complete disregard for the domain they're operating in.
7. The business side of development dictates requirements for the project. Problem is: it's not like developers don't think about business, they do (some of them; others shouldn't be writing commercial code), and if management thinks otherwise, they're either incompetent or they hired the wrong developers.
8. Outside consultants with the wrong financial incentives are hired to help with the process.

There are many more, and some could arguably be attributed more to hiring practices than to management, but that's the state of software development we have.
 
A good way to maintain documentation for an API like this is to tie the documentation and the unit tests together. In the document, you refer to the unit tests (all code examples in the documentation MUST have unit test cases to prove they work).

:O this is DAMN smart, really.
Indeed, the naming conventions you suggest work very nicely for huge projects - that's my experience too.
Still, I think that important variables should be prefixed for readability (m_, g_, kXXX help a LOT in that regard), it helps a lot during code reviews.
I partially disagree on comments, however. Often functions have block sequences that do something, and a simple comment line that says "this does this" before each logical block is cool, especially in long functions (which are not bad, that way). But in general, if classes are well defined and function names are clear, there is no need to put a description on top of each function for autodocs (except, maybe, for SDK purposes).

@Dominik D : I could give a funny example for each (just to add that imho 5 is way worse than 6, since good architects can easily fill the gap, but terrible managers are terrible managers...), but I don't follow (7) - what do you mean?

@Davros: "Hashtabl_Str_Placenames" ?? Are you a Java developer?! :D (kidding, Java devs, don't take it badly...)
 
When you're actually writing systems or embedded code (which means: C that talks to the hardware) you bloody damn care about the width of the value you're writing to the registers. It's just that most people don't write code at this level (and those who do follow KNF anyway and pray for maintainers to spot problems). So, yeah, if you're writing awesome fancy C++ code, then sure, Hungarian (especially of the systems type) is not for you.
This is a graphics technology forum, so we can expect people to care about the width of their variables, as they are likely writing the variables to GPU buffers or crunching the values CPU-side using VMX/SSE/AVX/NEON vectors (which need special intrinsics for data loading/storing). I would argue that graphics programmers bit pack even more than embedded programmers (and consoles can also be considered embedded systems). My code has lots of types that specifically state how many bits will be used. uint32_t and uint16_t are common (and also types like Half, UDec3, Dec3N, etc. that have specific bit packed patterns that must match the GPU representations). However I still don't see any reason to, for example, name my uint16_t variable with some specific Hungarian prefix. That doesn't make my code any more readable. If my variable has some specific purpose, like being a bit mask, I name it accordingly (uint32_t maskRed).

C++ is good for low level code. Templates make it possible to combine fast code (compile time instead of run time) with better maintainability. For example templated vector swizzles produce better code (more readable and definitely faster) than any C equivalents. Also compile time polymorphism is much better for performance (or memory) critical code than run time polymorphism (= virtual functions).
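A rough toy sketch of what a templated swizzle can look like (my own made-up example, not sebbbi's code); the component order is fixed at compile time, so the compiler can resolve it to plain loads/stores with no runtime indexing:

#include <cstddef>

struct Float3
{
    float v[3];

    // Compile-time swizzle: indices are template parameters.
    template <size_t X, size_t Y, size_t Z>
    Float3 Swizzle() const
    {
        static_assert(X < 3 && Y < 3 && Z < 3, "component index out of range");
        return Float3{ { v[X], v[Y], v[Z] } };
    }
};

// usage: Float3 zyx = p.Swizzle<2, 1, 0>();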
I partially disagree on comments, however. Often functions have block sequences that do something, and a simple comment line that says "this does this" before each logical block is cool, especially in long functions (which are not bad, that way).
If your function has a separated block of code, you can just paint it with your cursor and select "Extract Method" (if you have ReSharper C++, for example). Then name the method to include the same information as your comment had. End result: one comment removed, one future lie removed :)
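Roughly like this (a made-up before/after; the types and helpers are hypothetical):

#include <vector>

struct Vec3   { float x, y, z; };
struct Sphere { Vec3 center; float radius; };
struct Mesh   { std::vector<Vec3> positions; Sphere bounds; };

Vec3  Average(const std::vector<Vec3> &points);               // hypothetical helpers
float MaxDistance(const std::vector<Vec3> &points, Vec3 c);

// Before: a comment labels a block inside a longer function.
void UpdateMeshBefore(Mesh &mesh)
{
    // compute bounding sphere
    Vec3 center = Average(mesh.positions);
    float radius = MaxDistance(mesh.positions, center);
    mesh.bounds = Sphere{ center, radius };
}

// After "Extract Method": the function name now carries what the comment said.
Sphere ComputeBoundingSphere(const Mesh &mesh)
{
    Vec3 center = Average(mesh.positions);
    float radius = MaxDistance(mesh.positions, center);
    return Sphere{ center, radius };
}

void UpdateMeshAfter(Mesh &mesh)
{
    mesh.bounds = ComputeBoundingSphere(mesh);
}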

Sometimes comments are good (also in the case above), I don't deny that. However I think people tend to overuse comments instead of refactoring bloated code to smaller functions (and/or splitting classes). Adding more comments is not an excuse to write long functions (or long class interfaces) that are hard to reason about. Comments do not fix mismatch of abstraction level. It's preferable that a single function contains code of a single abstraction level. Mixing high level code with low level code in a single function tends to make readability harder. This is also why I don't like std algorithms that much. As C++11/14 doesn't yet have ranges, using algorithms results in too much low level code (messing with multiple iterators, and thus being error prone and hard to read).
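What I mean by the iterator noise, as a made-up comparison:

#include <algorithm>
#include <vector>

void HalveWeights(std::vector<float> &weights)
{
    // Pre-ranges algorithm: three iterator arguments before you get to the logic.
    std::transform(weights.begin(), weights.end(), weights.begin(),
                   [](float w) { return w * 0.5f; });

    // The plain range-for loop says the same thing with less ceremony.
    for (float &w : weights)
        w *= 0.5f;
}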
 
@Davros: "Hashtabl_Str_Placenames" ?? Are you a Java developer?!
Good job I'm not a COBOL developer (or pidgin English as I like to call it)

I partially disagree on comments,

So do I. Why? Code does not convey intent.

Let me give you the following contrived example

Your boss says "Dav's added the final lines of code to our program, will you check it and make sure it's OK? If so we can send it to the replicator."
You load the code, see the comments and think "I hate comments, I'll delete them, it makes the code more readable, plus I'm clever enough to work out what he's trying to do", so you delete the comments.
Then you come across the following line:

result = 3.14 * (radius * radius)

You say to your boss "It's not the greatest code but it's fine, it's OK to have it replicated."

The next day you feel a little uneasy and think maybe I should have read the comments. I have a backup, I'll double check.
So you load the code and come across the following line, this time with comments:

result = 3.14 * (radius * radius) // calculates the circumference of a circle and stores the answer in result

See the problem?
 
:O this is DAMN smart, really.
I've seen a pretty cool documentation system for (I think) Ruby where you could annotate tests with special comments, and pieces of code like this would be pulled into the documentation when you generate it from some markdownish format. In the C world you'd surround a piece of code with /* +doc-foo */ ... /* -doc-foo */ or whatever, and then putting a magical [code:foo] in your documentation would fill the spot with the code. It's usually obvious when code (whether it comes from a sample, a unit test or the product itself) doesn't match the narrative in your documentation, even if you just skim it. I plan on rolling out something like this in our driver at some point in the future.
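A minimal sketch of that C-flavoured variant, assuming googletest-style test macros; the +doc/-doc markers and the [code:...] placeholder are the hypothetical scheme described above, not an existing tool:

#include <gtest/gtest.h>

// Hypothetical function under test.
static int Clamp(int v, int lo, int hi) { return v < lo ? lo : (v > hi ? hi : v); }

/* +doc-clamp */   // start of the snippet the doc generator will pull in
TEST(MathUtil, ClampKeepsValueInRange)
{
    EXPECT_EQ(5,  Clamp(5,  0, 10));
    EXPECT_EQ(10, Clamp(42, 0, 10));
}
/* -doc-clamp */   // end of the snippet

// In the documentation source, [code:clamp] would then be replaced by the test
// above, so the example can never silently drift away from working code.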

I partially disagree on comments, however. Often functions have block sequences that do something, and a simple comment line that says "this does this" before each logical block is cool, especially in long functions (which are not bad, that way).
This only works if you have a tight code review process. Otherwise comments will rot.

but I don't follow (7) - what do you mean?
Conventional wisdom goes like this: unless some suit tells the smelly devs what to do, they'll destroy the business in a second. But the reality is that most developers these days have survived several successful and/or bombing release cycles and they understand what makes the product (and hence the business). We all have mouths to feed (our own at the very least) so we do care about the end product (which is your business) and not just about some religious BS that's so often attributed to developers. Sure, we're 100% rational only 95% of the time ;) but if you've been writing and playing games for some time, if you're in the data storage business or the web development business for some time, you know perfectly well what works and what doesn't. Most of us, given a chance, won't spend 10 years perfecting something that was good enough after a year or two. By the same token, if something is lacking, we'll ack that and spend more time on getting things right. I've got this feeling that not every suit understands that, or the business itself. This is probably the main advantage of Imagination over other places I've worked at (or decided not to work at).

This is a graphics technology forum, so we can expect people to care about the width of their variables, as they are likely writing the variables to GPU buffers or crunching the values CPU-side using VMX/SSE/AVX/NEON vectors (which need special intrinsics for data loading/storing). I would argue that graphics programmers bit pack even more than embedded programmers (and consoles can also be considered embedded systems). My code has lots of types that specifically state how many bits will be used. uint32_t and uint16_t are common (and also types like Half, UDec3, Dec3N, etc. that have specific bit packed patterns that must match the GPU representations). However I still don't see any reason to, for example, name my uint16_t variable with some specific Hungarian prefix. That doesn't make my code any more readable. If my variable has some specific purpose, like being a bit mask, I name it accordingly (uint32_t maskRed).
It doesn't make my code any less readable either. [1] But I can easily spot problems during code review when someone takes a u32 and uses RegWrite64() with it (the opposite would be flagged by the compiler, sure). Either we're writing more than we need or the data we have is wrong. In a perfect world it would be impossible to introduce these kinds of problems into well maintained code. But I'm not writing from scratch - I'm maintaining and extending legacy code of varying quality. Some of these problems come from the fact that, say, the logic changes around how we use a certain existing structure, and during code review we find that the variable width was already wrong, it just never bit us in the ass.

[1] I'm not going to get into religious wars about what's "clearly" easier to read or write; we're paid to write and - most often - read code, so if someone doesn't like either of those activities because the notation offends them, he or she should change their line of work :p
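For what it's worth, a compile-time guard can catch exactly that u32-into-RegWrite64() case. This is just a hypothetical sketch; RegWrite64/RegWrite32 are the names from the example above, declared here only for illustration:

#include <cstdint>
#include <type_traits>

void RegWrite32(uint64_t addr, uint32_t value);   // hypothetical raw register writers
void RegWrite64(uint64_t addr, uint64_t value);

// Thin wrapper that refuses a value whose width doesn't match the register,
// so the mismatch fails at compile time instead of slipping past review.
template <typename Reg, typename Value>
void WriteReg(uint64_t addr, Value value)
{
    static_assert(std::is_same<Reg, Value>::value, "value width does not match register width");
    if (sizeof(Reg) == 8)
        RegWrite64(addr, static_cast<uint64_t>(value));
    else
        RegWrite32(addr, static_cast<uint32_t>(value));
}

// WriteReg<uint64_t>(0x1000, someU32);            // compile error: widths differ
// WriteReg<uint64_t>(0x1000, uint64_t(someU32));  // explicit widening, visible to the reviewer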

C++ is good for low level code. Templates make it possible to combine fast code (compile time instead of run time) with better maintainability. For example templated vector swizzles produce better code (more readable and definitely faster) than any C equivalents. Also compile time polymorphism is much better for performance (or memory) critical code than run time polymorphism (= virtual functions).
It's hard to find competent C developers but it's even harder to find C++ ones. I'd rather use the most straightforward C code possible than use C++ and risk that someone who'll be maintaining this down the line will have to understand templated vector swizzles I've added. In general I'm not a huge fan of languages that let you build dialects of themselves. This raises the already high bar of entry for the field I'm in. On top of that I'd have to change expectations for C++ developers WRT runtime behavior. Exceptions in the code are a big no-no, both in drivers and most of the game code. You may take that as an obvious element of game development, but it's not an obvious element of C++ development. Mike Acton proved that at CppCon recently. ;)

In the perfect world where only the most talented developers join your team I'd be all for smart code. But that's simply not the environment we (well, I) work in, so IMO obvious is better than smart. I'd love to work with people smarter than me who'd prove me wrong (hey, some of the people around here in fact rock and I learn a ton) but that's simply not something I think the business can afford right now.
 
It's hard to find competent C developers but it's even harder to find C++ ones. I'd rather use the most straightforward C code possible than use C++ and risk that someone who'll be maintaining this down the line will have to understand templated vector swizzles I've added.
@sebbbi : C++ for low level stuff is a BAD idea. Had to deal with it once, and the results were... let's forget. On the other hand, I've seen some decent results too. Yet, in my experience, you want to avoid things where you can't map source->assembly easily, and with C++ you can't. Also, you are exposed to implicit constructor issues, to strange optimizations done by the C++ compiler, and such.
Imho, it's not a good idea.

Oh, as a quick proof, try to read and understand this nice 'templated' use:
https://code.google.com/p/gperftools/source/browse/src/windows/patch_functions.cc

It took me a lot of time to understand the perversion behind this code - consider that you could write it in lean, clean, short C plus a minimal assembly routine.
 
An example of bad-ish code is not proof that a sane C++ developer couldn't have written it better. It's also not true that you can't map C++ code to machine language easily. But all that comes at a higher cost than quality C code does. So, I feel, in systems that already deal with a lot of complexity (drivers for example), smarter languages that require people to know and remember more tend to result in worse code (all other things being equal, obviously, as you can't take an incompetent C dev, put him next to Alexandrescu, and with that prove that C++ is clearly better for driver development).

But let's not miss the forest for the trees; we're getting into philosophical discussions here and losing sight of the original topic: what the heck causes software products[1] to underperform at launch. One thing is how software development is managed (there's no silver bullet, but management types are usually hung up on "proven" buzzwords, like agile) and another is that complexity in code comes from many places, yet many people seem to disregard the fact that a lot of complexity also comes from the languages we're using. We're fooling ourselves so often saying that we're sticking to C and C++ for some noble reasons when in reality we're using them because the tools available exceed those for other languages and we've got more control over compilation and execution than in other languages. I doubt we'd be writing game logic in C++ if, say, memory management in C# were better[2].

[1] today hardware is also in many ways software, but that's another issue altogether
[2] i.e. we could control it for the types of workloads we know we have; and yes, Unity kind of proves me wrong, but hey!
 
Sebbbi: you're a lucky one, it's rare to have such a working environment, with code reviews and unit tests and everything ^^
I do use some kind of Hungarian notation, but not the stupid type-prefix one, the other one with intent prefixes, like an "i" prefix for an index, "ct" for a counter... but only when it makes sense; usually I don't need to.
I also name classes with a capital and variables in lower case, precisely for the reason Sebbbi mentioned (Colour& colour).

As for comments I find lots of
// Destructor
CGeometryManager::~CGeometryManager(void)
or
//--- Release data access
ReleaseMutex(m_hDataAccess);
which are useless.
I want complicated functions to be documented, but above all I want to know what options were evaluated and why this one was implemented, because I'll most likely revisit the code some day and wonder why it's not x or y, and having to re-think the whole thing to reach the same conclusion is a waste of time.
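For example, the kind of comment I actually want to find (made-up code, made-up decision):

#include <cstdint>
#include <vector>

struct Entity { uint32_t id; };

// Linear search instead of a hash map: profiling showed the list rarely exceeds
// ~8 entries and the hash map's allocations dominated the frame time. We also
// evaluated a sorted vector + binary search; not worth it at this size.
// Revisit if entity counts grow.
Entity *FindEntity(std::vector<Entity *> &entities, uint32_t id)
{
    for (Entity *e : entities)
        if (e->id == id)
            return e;
    return nullptr;
}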

As far as I'm concerned, C++ is a PoS that manages to do the job; it's not a language anyone should ever use, but we are stuck with it for legacy and tooling reasons, stuck in the past when we badly need to move forward.
(Apple Swift looks nice though, there may be a chance to move on !)
 
There are a lot of things that go into this when talking about "why are games buggier now than on PS1/PS2?"

- Addition of HDD leads to decrease in hardware uniformity (more test setups required).
- Addition of network leads to decrease in user setup uniformity (more test setups required).
- Increase in hardware complexity leads to decrease in hardware reliability (more components that can fail, and yes this is relevant).
- Increase in hardware complexity leads to increase in code complexity (multithreading).

In 2003 you could reasonably assume that if your single-threaded game ran one way on a retail PS2, it ran the same way on every other retail PS2. That's out the window today.

Game companies are getting better about testing, but with today's big AAA launches your users playtest more in the first half day of release than you could have put in with a full QA team for the entire course of the project. Especially with multithreading in the mix it's highly unlikely that everything will be caught pre-release.
 
Game companies are getting better about testing, but with today's big AAA launches your users playtest more in the first half day of release than you could have put in with a full QA team for the entire course of the project.
That makes it look like the bugs slipped through the net, and I don't believe that. In most cases I believe the bugs are known about, but somehow publishers have managed to cultivate an environment where bug-ridden games are acceptable. There are people (and I've seen it even on B3D) that will actively defend buggy games and will even criticize someone for complaining.
 