*Spin-off* Coding Education & Practice

I personally think .NET is a big step backwards. Things I think are simple and required, or just basic, turn out to be removed to "protect naive developers against themselves". I like pointers. Pointers are great!

The very goal of making everything generic and bullet-proof makes me try many different approaches that should all work, just to find the one that actually does. I tend to spend four times as long developing in .NET as I would writing the same thing in Delphi, and twice as long as in C++.

My junior co-workers mock me and tell me I think wrong, and don't understand the language (C#, mostly). They consider me a dinosaur. Except when they don't understand why things aren't working as they should, of course.
 

.NET/C# is great for tools and asset pipeline development. These days it seems to have become something of a standard for that. Now, is it something you can get away with in a game engine? No, but of course people still try.
 
I'm with ban25 on this. C# isn't good for production desktop code (it's great for web apps, though), but it's excellent for RAD (much better than Delphi, IMO). Carving tools in C/C++ is a real pain. But you're right, Frank: if you approach C# as if it were a bastardized child of C++, everything is going to consume more of your time. :)
 
The very goal of making everything generic and bullet-proof makes me try many different approaches that should all work, just to find the one that actually does. I tend to spend four times as long developing in .NET as I would writing the same thing in Delphi, and twice as long as in C++.

This sounds familiar... back in 1994/95, when I was changing jobs, I went from being a Texas Instruments TMS320xxx DSP assembly/C/C++ coder to an Internet applications job that was looking into the early Java (beta at the time) technologies.

I can somewhat fondly remember simultaneously thinking "this is cool" and "why can't I understand what's going on with this package I'm supposed to use and extend?" It was bad enough in C++, but at the time most C++ libraries were very thinly repackaged C libs (OSI, X, and SGI graphics stuff was mostly what I worked with).

Some things were simple, but some things took forever, because I just couldn't get over all the functionality that was provided in java.lang or java.util -- I can't remember exactly, but I think this was before the javax stuff even existed. I just hated that there were all these high-level data structures, which I used to hand-code myself, that I was now supposed to use without knowing how they were implemented.

I would read all the tech specs on the JVM and byte code compiler, just to feel more comfortable that I knew what a java.lang.String class did. Don't even ask me about the garbage collection aspect of the JVM and why there were no destructors/frees.
 
I personally think .NET is a big step backwards. Things I think are simple and required, or just basic, turn out to be removed to "protect naive developers against themselves". I like pointers. Pointers are great!

As someone who codes .NET 95% of the time, I think you're right, basically. The most terribly hard bugs to deal with turn out to be there because you don't properly understand what happens when you pass an object ByRef or ByVal.
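
For anyone following along, the by-value/by-reference distinction is easiest to see in a language where the copy is explicit in the signature. A minimal C++ sketch (only a rough analogy: .NET reference types add another level of indirection on top of this, which is exactly where the confusion creeps in):

```cpp
#include <iostream>
#include <vector>

// Pass by value: the function works on a private copy,
// so the caller's vector is left untouched.
void appendByValue(std::vector<int> v) { v.push_back(42); }

// Pass by reference: the function mutates the caller's vector directly.
void appendByReference(std::vector<int>& v) { v.push_back(42); }

int main() {
    std::vector<int> data{1, 2, 3};
    appendByValue(data);
    std::cout << data.size() << '\n';  // still 3
    appendByReference(data);
    std::cout << data.size() << '\n';  // now 4
}
```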

Also, I think C# is actually worse than VB in .NET, especially in RAD environments, which is odd (I used to be 100% pro C#).

The very goal of making everything generic and bullet-proof makes me try many different approaches that should all work, just to find the one that actually does. I tend to spend four times as long developing in .NET as I would writing the same thing in Delphi, and twice as long as in C++.

For me the big strength of .NET is having the framework. It means I don't have to worry about which version of Windows I'm working with, I don't have to worry so much about installing DLLs, and so on. Things that used to be a huge pain back when I was forced to work with VB 6.0. This is the biggest plus for me. But with the latest VB/C# .NET variants I can also code fairly neatly (classes, objects, controls, inheritance, etc.).

Except when they don't understand why things aren't working as they should, of course.

Indeed.
 
As another totally off-topic post...

Pretty much any GC language is an improvement over C/C++ from a pure engineering standpoint: garbage collection removes the most common "difficult" bugs that crop up in C/C++ codebases.
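
To make that concrete, here is a minimal C++ sketch of the classic use-after-free, one of the bug classes a garbage collector eliminates by construction:

```cpp
#include <iostream>
#include <string>

int main() {
    std::string* name  = new std::string("player one");
    std::string* alias = name;    // a second pointer to the same object
    delete name;                  // the object is freed here...
    std::cout << *alias << '\n';  // ...use-after-free: undefined behavior
    // Under garbage collection, 'alias' would keep the object alive,
    // so this entire class of bug cannot occur.
    return 0;
}
```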

Five years ago I couldn't have imagined using STL containers in a game codebase. I know of games today that have the bulk of their gameplay code written in GC'd languages, and I'd bet that within 10 years we'll see a triple-A title developed exclusively in a GC'd language.
 
While I agree that garbage collectors do sound nice, I've had a few C# programs and an MS SQL Reporting Services report (which is written in C# as well) that made Windows kill the .NET runtime due to excessive memory usage.

With .NET programs that do a single task and terminate, I often see the memory usage explode until some time after the app has terminated. It looks like the .NET GC has a really hard time dealing with lots of successive creates and destroys in a short time. And since you don't have much control over it yourself, there isn't much you can do about it.
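
In native code the usual answer to heavy create/destroy churn is to take allocation into your own hands, which is precisely the control a GC'd runtime withholds. A minimal C++ object-pool sketch of the idea (simplified: a real pool would also reset recycled objects):

```cpp
#include <vector>

// A trivial fixed-type pool: instead of handing objects back to the
// general-purpose heap on every destroy, they are parked for reuse.
template <typename T>
class ObjectPool {
    std::vector<T*> free_;
public:
    T* acquire() {
        if (free_.empty()) return new T();
        T* obj = free_.back();
        free_.pop_back();
        return obj;
    }
    void release(T* obj) { free_.push_back(obj); }
    ~ObjectPool() { for (T* obj : free_) delete obj; }
};
```

With a pool like this, a burst of acquire/release cycles touches the heap only for the first wave of objects; the GC equivalent is at the mercy of the collector's schedule.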
 
Realtime performance and low-level data structures are also a problem in a managed language that uses garbage collection.


For example, National Instruments, which builds data-acquisition hardware, switched to .NET for their libraries a few years ago and deprecated all other interfaces, because that was the new, required development platform. And they really did a great job of building a new library the correct .NET way.

Unfortunately, there is no way in .NET to do accurate timing or to handle events within a certain (small) time window, which made it really hard to use their hardware (plug-in boards) to do what it was supposed to do: measure accurately. You don't know how much CPU time your app is going to get, or when, and all bets are off when the GC starts its sweep.
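
For contrast, native code can at least busy-poll a high-resolution clock and hit a small time window fairly reliably, OS scheduling permitting. A minimal C++ sketch of such a sampling loop:

```cpp
#include <chrono>
#include <cstdio>

int main() {
    using clock = std::chrono::steady_clock;
    const auto period = std::chrono::microseconds(500);  // target sample interval
    auto next = clock::now() + period;

    for (int i = 0; i < 1000; ++i) {
        // Busy-wait until the next deadline: no sleeps, no allocations,
        // so nothing in this loop can trigger a collection pause.
        while (clock::now() < next) { }
        // ...read a sample from the acquisition board here...
        next += period;
    }
    std::puts("done sampling");
}
```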

They solved that problem by switching to a different line of products: external rack-mounted modules, with their own processor and OS in the rack. That way they can still offer the .NET library.


And without pointers, with strings being exclusively Unicode, and with no direct control over how, when, and where your data structures are created, what their memory layout is, or how to traverse them with the lowest possible latency, communication with many devices becomes erratic at best, and there is no way to meaningfully profile your functions to run within the allotted time.
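
The memory-layout point is very concrete when talking to hardware: the wire or register format dictates the exact byte layout, which native code can express directly. A hypothetical C++ sketch (the packet fields are invented for illustration):

```cpp
#include <cstdint>

// Hypothetical device packet: the wire format dictates the exact byte
// layout, so we use fixed-width types and disable struct padding.
#pragma pack(push, 1)
struct SamplePacket {
    std::uint8_t  channel;     // 1 byte
    std::uint16_t flags;       // 2 bytes, device byte order
    std::uint32_t timestamp;   // 4 bytes
    std::int16_t  reading[8];  // 16 bytes of raw ADC values
};
#pragma pack(pop)

static_assert(sizeof(SamplePacket) == 23, "layout must match the wire format");
```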


And while much of the abstraction went into making it painless to write multithreaded apps, that still only works when you know what you're doing. Which has become a lot harder, because you cannot know or predict how the runtime is going to execute it all.


In short: what the abstraction has mostly done is make it harder to figure out what is really happening.


Then again, with the focus on webapps and such a mountain of stuff underneath your application, it stands to reason to try and simplify it as much as possible. But that only works if you never have to travel outside the bounds. So each and every interface and piece of attached hardware should be integrated and abstracted through a common interface. Which only works if you never use the extra functionality it has to offer.

That's the main reason it works so well for webapps: a browser is a pretty closed virtual machine. Like a console. There are some differences you need to take into account, but they're both pretty fixed.


So the main programming market has mostly split into light, high-level webapps (the main bulk); electronics, embedded devices, tools, games, and server services (for the experienced die-hards); and architecture/integration/communication/management (for the problem solvers and managers).


What we need is a common language that runs in all of those environments. Something that allows you to describe the project, keep track of it, write the code, add, use, and animate the art and other assets, test it, and compile and link it into the files needed for the application to run, be it native code, JavaScript, Flash, Java, or whatever.

But .NET isn't it.
 
What we need is a common language that runs in all of those environments. Something that allows you to describe the project, keep track of it, write the code, add, use, and animate the art and other assets, test it, and compile and link it into the files needed for the application to run, be it native code, JavaScript, Flash, Java, or whatever.

I don't think a common, one-size-fits-all language is really something we need or want. It's better to use the right tool for the job. For performance-sensitive applications like games, that's C/C++. For web apps, .NET or Java are good solutions. For embedded scripting you might choose Lua, and for tool scripting maybe PowerShell or Perl.
 
Five years ago I couldn't have imagined using STL containers in a game codebase. I know of games today that have the bulk of their gameplay code written in GC'd languages, and I'd bet that within 10 years we'll see a triple-A title developed exclusively in a GC'd language.

LOL, my last game didn't use STL at all! But it is true that these days we have more memory to play around with, game studios now have access to optimized STL implementations, and you'll need to write your own custom allocators anyway, so you can tune as needed.
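
For the curious, a minimal sketch of what such a custom allocator looks like in C++: it satisfies the STL allocator requirements while routing memory through the engine's own hooks (engine_alloc/engine_free are hypothetical placeholders, and error handling is omitted):

```cpp
#include <cstddef>
#include <cstdlib>
#include <vector>

// Hypothetical engine hooks: a real engine would route these into its
// own tracked, tunable arenas instead of plain malloc/free.
void* engine_alloc(std::size_t bytes) { return std::malloc(bytes); }
void  engine_free(void* p)            { std::free(p); }

// Minimal C++11 allocator: just enough to plug into STL containers.
template <typename T>
struct EngineAllocator {
    using value_type = T;
    EngineAllocator() = default;
    template <typename U> EngineAllocator(const EngineAllocator<U>&) {}
    T* allocate(std::size_t n) {
        return static_cast<T*>(engine_alloc(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t) { engine_free(p); }
};
template <typename T, typename U>
bool operator==(const EngineAllocator<T>&, const EngineAllocator<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const EngineAllocator<T>&, const EngineAllocator<U>&) { return false; }

// Usage: a vector whose storage comes from the engine, not the global heap.
using EntityIds = std::vector<int, EngineAllocator<int>>;
```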

AAA titles in GC'd languages? If you just mean sales, then there might already be a few. Otherwise, I wouldn't be surprised, especially if next-gen consoles have a lot more memory (or if future games move to a client-server architecture).
 

Am I the only one dreaming of a multi-core-optimized .NET CF runtime with an aggressive garbage collector and a few targeted instructions added to the chosen CPU ISA being standard in a next-generation console? ;)
 
But the software industry has changed radically over the past 20 years. For 95% of software developer graduates today, software correctness (i.e. no bugs) is way more important than how the code performs.

One has a direct effect on the other.
There are people who can write code and there are those who cannot.
It doesn't matter how many excuses the latter have.

C (and C++) is the new COBOL. Not because it is useless, but because so few people will be using these languages.

95% of people are dumb.

How come nobody learns Fortran anymore? It's way better suited to a lot of game-related tasks than C, yet nobody, to my knowledge, uses it.

Good schools teach Fortran, good schools teach Lisp, good schools teach Prolog.
Dumb students just do not attend.

Game development is a software industry niche where knowledge about memory usage, cache locality, various hardware-defined latencies (instruction, memory, etc.) and similar stuff is important, because it influences the decisions you make. But for the vast majority of the software industry it doesn't matter.

It always matters whether you're dumb or smart. But for 95% of people, being dumb is not an issue. Furthermore, being dumb is indeed normal.
 
Good schools teach Fortran, good schools teach Lisp, good schools teach Prolog.
Dumb students just do not attend.
No, they don't.

Good schools give students the skill sets they will need. They teach computer science fundamentals (of which architecture is one course). And then they teach a variety of programming languages that:
1.) Exemplify a specific paradigm, e.g. imperative OO (C++, Java, Eiffel, C#), functional (Lisp), multiparadigm (Beta, Dylan), or parallel (SQL)
2.) Might actually be useful.

C++ scores low on (1) and lower and lower on (2). Java scores a lot lower than Eiffel on (1), but a lot higher on (2). Combined, Java makes a lot more sense to teach.

If a university made a C++ course mandatory for a CS degree today, I'd demand my tuition money back.

C and C++ make sense for degrees in computer engineering, but not in CS.

Cheers
 
No, they don't.

Good schools give students the skill sets they will need. They teach computer science fundamentals (of which architecture is one course). And then they teach a variety of programming languages that:
1.) Exemplify a specific paradigm, e.g. imperative OO (C++, Java, Eiffel, C#), functional (Lisp), multiparadigm (Beta, Dylan), or parallel (SQL)

I agree.

If a university made a C++ course mandatory for a CS degree today, I'd demand my tuition money back.

The problem is not in the language itself but in a limited field of view.
You cannot write code if you don't understand how the hardware works.
You cannot write code if you do not understand math.
You cannot write code if you do not know and understand all the aforementioned paradigms.
You cannot write code if you do not know what compilers do and how they do it.
 

What type of code are you talking about? In general, I'd say only the paradigms matter for the majority of cases. It really depends on what you're working on.
 
Any code.



No, it doesn't. This binary approach has proved itself numerous times.

Well, I know a lot of grads who don't know much about how compilers work, and they've successfully written many different applications... Maybe we're disagreeing on the semantics of "cannot." If we're talking game code, then I don't have the experience to say anything about that, but there are a lot of programs that can be written without having to worry about what the compiler is doing.
 
The problem is not in the language itself but in a limited field of view.
True. You need to pick the right tool for the job.

You cannot write code if you don't understand how the hardware works.

Emphatically wrong. I would much rather spend time getting an algorithm's time complexity down and writing decent, readable code than worrying about cache hierarchies, memory latency, MMU behaviour, etc.

If I get a pathologically bad case, *then* I might look into which low-level effects cause it.
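
A small illustration of that priority, as a hedged C++ sketch: dropping a duplicate check from O(n²) to expected O(n) usually buys far more than any cache-level tuning of the quadratic loop ever could.

```cpp
#include <cstddef>
#include <unordered_set>
#include <vector>

// O(n^2): compare every pair. Cache-tuning this loop still leaves it quadratic.
bool hasDuplicateSlow(const std::vector<int>& v) {
    for (std::size_t i = 0; i < v.size(); ++i)
        for (std::size_t j = i + 1; j < v.size(); ++j)
            if (v[i] == v[j]) return true;
    return false;
}

// Expected O(n): one pass with a hash set. The algorithmic change wins.
bool hasDuplicateFast(const std::vector<int>& v) {
    std::unordered_set<int> seen;
    for (int x : v)
        if (!seen.insert(x).second) return true;
    return false;
}
```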

You cannot write code if you do not understand math.

Math in the widest possible sense? Today's software systems are so big that almost all the challenges lie in managing complexity.

You cannot write code if you do not know and understand all the aforementioned paradigms.

Sure you can. People can go through life working as programmers without ever knowing about SQL or Lambda calculus. In fact, most do.

You cannot write code if you do not know what compilers do and how they do it.

Might be true for some cases, but certainly wrong for the general case. Normally you don't give a toss about how a compiler mangles your code into something that produces a result. You only care about this if there is some pathological issue (bug). The whole point of compilers is to isolate you, the programmer, from those details.

Cheers
 
Emphatically wrong. I would much rather spend time getting an algorithm's time complexity down and writing decent, readable code than worrying about cache hierarchies, memory latency, MMU behaviour, etc.

Don't worry; know. If you know, you shouldn't worry.

If I get a pathologically bad case, *then* I might look into which low-level effects cause it.

I think you do not understand: if you constantly keep all the low-level stuff in mind, you write better high-level code.
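
One concrete instance, as a hedged C++ sketch: traversal order versus memory layout. Both loops below compute the same sum over an N x N matrix, but the row-major one walks memory sequentially and is typically several times faster once the matrix no longer fits in cache:

```cpp
#include <vector>

constexpr int N = 1024;

// Row-major traversal: consecutive elements, cache-friendly.
long sumRowMajor(const std::vector<std::vector<int>>& m) {
    long s = 0;
    for (int r = 0; r < N; ++r)
        for (int c = 0; c < N; ++c)
            s += m[r][c];
    return s;
}

// Column-major traversal: large strides, roughly one cache miss per
// element once the matrix outgrows the cache.
long sumColMajor(const std::vector<std::vector<int>>& m) {
    long s = 0;
    for (int c = 0; c < N; ++c)
        for (int r = 0; r < N; ++r)
            s += m[r][c];
    return s;
}
```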

Math in the widest possible sense? Today's software systems are so big that almost all the challenges lie in managing complexity.

Math in the sense that you know how it works and can prove theorems and invent new ones, even if somebody else invented them before. In short: you should be able to think in math.

Sure you can. People can go through life working as programmers without ever knowing about SQL or Lambda calculus. In fact, most do.

Dumb people are not good to work with, even if it's normal to be dumb.
It's better to work with smart people, and if you need a dumb job done, just let the computer do it; it's even dumber than most people.

Might be true for some cases, but certainly wrong for the general case. Normally you don't give a toss about how a compiler mangles your code into something that produces a result. You only care about this if there is some pathological issue (bug). The whole point of compilers is to isolate you, the programmer, from those details.

This is an illusion.
Nothing abstracts anything if you do not know how to dissect it.
Abstractions are based on knowledge and not vice versa.
 
The problem is compounded by the hire-develop-fire cycle most studios seem to employ on a per-project (game) basis. This makes it impossible to retain skilled developers in the long run. Although it looks like things are maturing.

I haven't seen skilled programmers laid off at the close of a project unless the company had nothing else on the schedule. What you do see is a big roll-on and roll-off of content creators (artists, animators, designers, etc.) as teams scale up to finish the game and then slim down once it's released. Many game companies, large and small, have a core of experienced senior engineers who have been working together for many years.

It's true that some other sectors of the software industry are not performance-critical, but that doesn't mean they can ignore performance entirely. Many enterprise server applications have to be optimized as heavily as possible, because while they can scale horizontally, that costs money. Likewise, many large software companies like Microsoft, Oracle, SAP, and others are absolutely focused on performance (who do you think is making the .NET CLR fast?).
 