Code reusability and a 'constant' hardware platform *spawn

TheChefO

I have 2 points here. One is to note that for non-vector workloads, any notion that the 360's CPU or the PPU in a PS3 was competitive with existing x86 CPUs at the time is simply not true.
The second is that although you could get good performance out of them, it took a lot of work, work that in general doesn't get done on every line of a 3M+ line codebase, and at some level means work not getting done elsewhere.


[Off topic, but I think it's important and useful tech information]

ERP, I get what you're saying here and it makes sense, but how much of the old codebase is still in modern games, engines, and libraries? I'd think by now most of the bigger developers/games would be optimized for Xenon and Cell.

Is that not the case?
 

It's not a question of legacy code.

The traditional viewpoint on optimization is: don't optimize prematurely; measure, find the hot spots, and address them. In practice, on large applications this approach doesn't actually work.

Hypothetically, I have a team of say 20 people. Three or four of those (the team "stars") may work on what's considered performance-sensitive code; in general they will optimize as they write the code. The rest will work on general systems or gameplay code. None of the individual pieces are expensive, and the vast majority of the programmers are more concerned with readability and maintainability (and they should be) than overall performance. They also tend to take the fast route to a solution rather than the best route, because so few pieces of gameplay code survive any sort of gameplay review, and there is pressure for the designers to see it in game as quickly as possible.
The last point is why performance travesties like Havok's character control stuff make it into final builds: it's quick to get in and working, and by the time you measure the cost, the cost of change is too high to justify.

There is too much churn in the codebase for anyone to look at or be familiar with all of the code, and there are a lot of reasons to believe that performance is currently below par and will improve. You get to 4 weeks before E3 and have to show a demo, and at that point you start looking at performance, which is usually horrible; some of that is assets that are way over budget, some of it is poorly optimized code.

You sit down with the performance analyser, run it over the code, and you cry. Because it's not a single hotspot: it's literally thousands of pieces of code wasting 0.1ms here or 0.01ms there, and it's impossible to "fix" them all for the demo, or even by the time you ship.
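To give a sense of what that measurement step looks like, here's a minimal sketch (hypothetical, not from any particular codebase) of the kind of per-scope timer you'd scatter around to make those thousands of tiny costs visible in aggregate:

[code]
// Minimal sketch: accumulate per-scope costs so that hundreds of small
// 0.01-0.1 ms sinks become visible as totals rather than vanishing into noise.
#include <chrono>
#include <cstdio>
#include <map>
#include <string>

struct FrameProfiler {
    std::map<std::string, double> totalsMs;  // cost per label, summed over the frame

    struct Scope {
        FrameProfiler& prof;
        const char* label;
        std::chrono::steady_clock::time_point start;
        Scope(FrameProfiler& p, const char* l)
            : prof(p), label(l), start(std::chrono::steady_clock::now()) {}
        ~Scope() {
            auto end = std::chrono::steady_clock::now();
            prof.totalsMs[label] +=
                std::chrono::duration<double, std::milli>(end - start).count();
        }
    };

    void report() const {
        for (const auto& kv : totalsMs)
            std::printf("%-32s %8.3f ms\n", kv.first.c_str(), kv.second);
    }
};

// Usage (hypothetical labels):
//   FrameProfiler prof;
//   { FrameProfiler::Scope s(prof, "AI::UpdatePaths"); /* ... */ }
//   { FrameProfiler::Scope s(prof, "UI::Layout");      /* ... */ }
//   prof.report();  // the "death by a thousand cuts" shows up as many small totals
[/code]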

20 years ago, on SNES or Genesis, everything was assembler, and every line of code was likely vetted by a single individual.
15 years ago, circa PS1, when games were a few hundred thousand lines of code, every line of code was looked at and evaluated; if I didn't write it, I saw it go into the codebase and had it fixed when it did.
10 years ago, circa PS2, it started to get difficult to do that, plus you started to see legacy code and large blocks of external code that were poorly optimized for the platform.

FWIW I didn't do my first dynamic allocation in a piece of game code until PS2, and only then because we inherited the codebase.

The concept that game teams somehow lovingly craft every line of code with an eye to performance, and are somehow much better than their software engineering counterparts in other disciplines, is utter crap; it hasn't been that way for over a decade.

Building a game today is all about software engineering and project management and much less about programming.

There are still a very few teams that are very "old school" in their development approach, but even there they use 3rd party libraries, and not every engineer is a star.

IMO Console hardware should be designed around what lets teams produce the best games.
 

This may be a really stupid question, but I believe your response would be valuable:

What if a group of top-notch programmers were given a directive to optimize a set of flexible code modules targeted at a specific hardware platform that was guaranteed to be the base of development for the next 15-20 years?

After these flexible, modular code blocks were written, they could then be sold/licensed to other dev houses for use in their games, which would "guarantee" the most efficient use of the hardware.

2 big assumptions:

1) That MS or Sony could guarantee such a fixed hardware target (which could be scaled and/or added to in the future)

2) That the effort would be worth it.

I know game devs have historically been small shops which control every bit of their internal code, and as they get bigger this control has lessened, but there has still been a general shunning of external code, which I can understand, as most programmers outside the gaming world aren't as concerned with optimized code.

But if the external house were a team of skilled console programmers with the sole intent of optimizing a set of common, flexible modules, I'd think the acceptance would be there, and with enough scale this operation could produce magnificent results.

The only problem would be narrowing down the architecture to focus on.

What do you think?
 
After these flexible modular code blocks were written,

HAHAHAHAHAHA

Sorry that was a little too easy.

For modular code, look up the fallacy of code reuse.
The problem with "flexible" code is that it's suboptimal by definition.

I called out Havok's character control code above; if I had to write code to meet all the requirements it has, I'm not sure I'd do very much better than the Havok implementation. It's a generic solution assuming nothing about your game.
However, if I had to write character control for a specific title, and I could choose the collision DB format given the game's constraints, require designers to do explicit markup of certain features, and even limit the designers in some subtle ways, I might be able to beat it by a factor of 20 or more.
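To make the shape of that trade-off concrete, here's a deliberately toy sketch (made-up types; not Havok's API and not code from any shipped game): the generic path has to sweep a capsule through an arbitrary collision world, while the specialized path leans on designer markup baked into something like a walkable-cell grid:

[code]
// Toy illustration only. The generic controller assumes nothing about the game;
// the specialized one exploits constraints the designers agreed to up front.

struct Vec3 { float x, y, z; };

// Generic path: sweep a capsule through arbitrary geometry (expensive, general).
struct CollisionWorld {
    bool sweepCapsule(const Vec3& from, const Vec3& to,
                      float radius, float height, Vec3* hitPos) const;
};

// Specialized path: a hypothetical grid of walkable cells baked from designer markup.
struct WalkGrid {
    static const int W = 256, H = 256;
    float cellSize;
    Vec3 origin;
    unsigned char walkable[W][H];  // 1 = walkable

    bool canStand(const Vec3& p) const {
        int cx = int((p.x - origin.x) / cellSize);
        int cz = int((p.z - origin.z) / cellSize);
        if (cx < 0 || cz < 0 || cx >= W || cz >= H) return false;
        return walkable[cx][cz] != 0;
    }
};

// Moving a character becomes a couple of table lookups instead of a swept
// collision query -- which is where order-of-magnitude wins can come from.
Vec3 moveCharacter(const WalkGrid& grid, const Vec3& pos, const Vec3& desired) {
    return grid.canStand(desired) ? desired : pos;
}
[/code]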

What you're suggesting is what various 3rd party engines are trying to produce. The problem is you end up with 5x the code you actually need to solve the problem you have, that code is suboptimal for your problem, and you still end up having to maintain it (without expertise).
Rarely will you find any core engineer who will tell you that the engine saved engineering time over building it in house.
However, what engines do is allow you to get things running faster and iterate on gameplay more, which generally is as big an improvement on the final game as any technical change.

Games are no longer technology centric in the way they used to be.
 

I think the trick would be in producing enough specific modules for popular use cases.

But this would rely heavily on the hardware remaining constant for long enough to warrant the investment, which, I take from your response, would be a futile attempt.

Thanks for the response though.

/offtopic
 
I think the trick would be in producing enough specific modules for popular use cases.
That's basically what Windows and DirectX do. You have 'constant' hardware (x86 and a DX-class GPU) and can write an engine on it that'll work on hardware also running x86 and DX-class GPUs 15 years later. Only it doesn't work as you hope, as the hardware and software technologies upgrade in ways that aren't predictable and couldn't be designed into a modular system. The only way it could be kept hardware compatible is if the underlying hardware technology doesn't change. That is, we'd be stuck on shader model 2.0 for 15 years after the original XBox, because new ideas about how to render graphics require new approaches. e.g. the modern post-FX AA techniques can prompt a change in hardware from fixed MSAA to a more versatile approach, which the APIs will have to adapt to. Now of course you can update the APIs, but then you have legacy issues to worry about and end up with obese, inefficient libraries, which is where a clean-slate rival can come in and achieve much more on the same hardware.

The concept of a 'pure' atomic hardware that can scale and take its code with it is one I've endorsed before, and I even hoped before its release that Cell could achieve this. The idea of something like Cell that can be implemented as one or two cores in a mobile, four in a TV, eight in a console and 64 in a workstation (and then scaling up over the years), all running the exact same code, is very appealing and would simplify things considerably. However, it's not practical. Graphics are changing far too fast, and CPU ideas too, and a fully programmable graphics engine like Larrabee isn't able to compete. We'll keep having new GPUs that do things differently (we may have voxels or raytracing in the not too distant future) that a 15 year old library or hardware design couldn't cope with, not even on a general structural level to plug in new modules. Computer tech just moves too fast for any great design of today to still be applicable in a decade. Compare 15 years of tech up to now. Windows 98: there's no way back in '98 MS could have designed an OS capable of efficiently scaling to modern requirements that weren't even concepts back then, which is why various rewrites were needed. We didn't even have hardware T&L on GPUs 15 years ago to design software modules for, let alone the ability to design an API that'd scale to modern shaders, so there was no way a set of libraries could be designed to interconnect with future ideas.
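As a rough illustration of what 'the exact same code on any core count' would mean in practice (a hypothetical sketch, not actual Cell/SPU code), the work is written once against a job-style helper and only the worker count changes with the hardware:

[code]
// Hypothetical sketch: the same loop body runs unchanged whether the machine
// exposes 2 hardware threads or 64; only the worker count differs.
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

template <typename Fn>
void parallel_for(std::size_t count, Fn fn) {
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([=] {
            // Each worker strides through the range; no shared mutable state here.
            for (std::size_t i = w; i < count; i += workers)
                fn(i);
        });
    }
    for (auto& t : pool) t.join();
}

// e.g. parallel_for(particleCount, [&](std::size_t i) { updateParticle(i); });
[/code]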

Or putting it another way, there isn't, AFAIK, a single high-tech industry where the learning of today isn't outdated in 15 years. Architects who think they know everything about designing buildings and bridges come across new problems and solutions. Computer engineers who think they've designed a flawless system find all sorts of things they missed. Heck, even within a single project over a few years, a software company will not be able to design the engine up front to work ideally. Someone like Naughty Dog will have a first go, learn a truckload of things they'd never have thought of without the practical experience, and then have a different, more mature thought process the next time around. As such, planning something that'll be valid for 15 years isn't possible on changing hardware, while constant hardware that scales doesn't exist and won't for a while yet at least.
 

Thanks for splitting this out of the old thread Shifty.
_____________________________

Two things bother me with the current industry for developing games.

1) The inherent waste of having multiple developers writing relatively the same code in different offices for different games which intend to solve the same problems.

2) The above inevitably means some developers come up with clearly more efficient modules/libraries/code-blocks, but those are limited in use to the specific game/developer where they happen to be working.


The idea with the constant hardware was a way to address the time it takes to write efficient code, and in some cases to go back and rewrite the same code again as more efficient methods are found.

There are only so many console programmer "ninjas" around, and each person can only work so fast. And with code being as huge as it is these days for a single game, it makes sense that this would take a while.

Addressing your GPU points, I think the wild-west days of yesteryear are coming to a close, with GPGPU coming into its own at both AMD and Nvidia. The GPU architecture growth rate is stabilizing, but mostly where I was coming from was on the CPU side.

High-level calls on the GPU still make sense to allow breathing room for future GPU advancement, as GPUs are still growing rapidly... unlike the CPUs of today.

I don't know if there is honestly a way to address those 2 points above, but it would be great for the games industry to find a way to do so, as game types have also settled substantially from the wild-west days.

Not that new games and genres will never come out, but the overall structure of the most popular games is pretty well established:

FPS
GTA style
Racer
Skyrim/Fallout/Oblivion style open-world RPG


Identifying and writing the most efficient modules possible to handle the needs of these main game types would, I think, be worthwhile... if difficult to do and even more difficult to institute.
 
1) The inherent waste of having multiple developers writing relatively the same code in different offices for different games which intend to solve the same problems.

This assumes that code re-use is free, or at least relatively low cost.
That assumption drives middleware and central tech groups.
It's caused companies to waste tens of millions of dollars by dictating technology decisions to teams because it's a better "business decision".

I currently work for a central tech group. Shared tech can certainly be a positive, but the indiscriminate assumption that it's always the right choice leads to delays and buggy software.

If you adopt a piece of software, it has to provide value above its cost of adoption. That's difficult to quantify, but adopting software you have no expertise in and are highly dependent upon is often not the right solution. Often building it is the cheapest way forwards.

This is especially true of more complex, more game specific technologies.

You're also making the assumption that there is a single best solution to game problems, whereas the truth is far from that: equally technically competent teams will solve the same problem very differently.
 
I think, outside of the field of battle, the view of a perfect, efficient middleware is a fair and lofty goal, but it's sadly not realisable. That seems mostly due to trying to do more on limited hardware than a very high-level engine would allow. If we had infinite processing resources then a perfect middleware would be possible and the game could just be made without having to worry about the code driving it, but the bloat of a 'does everything' middleware would never leave enough for a game to actually run!

I don't think anyone who's not had a go at programming something slightly complex can really appreciate this. There are a truckload of middleware engines for you to try, and all have their issues and limitations and need head-exploding workarounds. And every time someone looks at the middleware market, decides all the options are rubbish because none does what they want, and goes off to write their own, they end up with the same problems. I put this down to the fundamental issue of engines being designed by humans, and humans being very varied beings.
 
Efficient code reuse and using generic middleware efficiently are not simple problems to solve.

Reusable/middleware components are efficient for tasks that do not have too many dependencies on your other systems and/or where the components include lots of specialized knowledge. They behave like black boxes, and do certain highly specified tasks correctly and quickly. Sound mixing / DSP effects are a good example of this kind of reusable component.

Licensing a full-blown game engine as middleware can be problematic, unless you are doing games for the most common genres and do not want to experiment with radically new ideas. Our games, for example, depend highly on dynamic user-created content, and all our own levels are created with the same in-game tools as user levels. Most middleware engines, however, are designed heavily around their PC content production tool set, and their production pipelines depend on offline baking and optimization (GI lightmaps, baked visibility calculation, offline mesh / draw call optimization, artist-made low-poly backdrops instead of real geometry). Games that require fully dynamic lighting/shadowing, dynamic structures, occlusion culling, physics (applicable to every object), runtime content optimization (instead of offline generation) and, in general, the capability to run well with unoptimized user-generated content are pretty much out of luck when using standard middleware designed from a completely different point of view.
 
Two things bother me with the current industry for developing games.

1) The inherent waste of having multiple developers writing relatively the same code in different offices for different games which intend to solve the same problems.

This stems from a single, simple failure on your side. The single most fundamental rule of software development, from which most of its oddities compared to other engineering disciplines stem, is: "Reading and understanding old code takes more time than it does to write new code."

That's why a single smart guy can get so much more done than a team of average guys. That's why the idea of over-the-counter modules doesn't work. That's why, when you take two smart guys and put them in a team, they can do only about 1.3 times as much work as either of them could do individually.

You are assuming that if I have a well-designed and well-written module that's relevant to the problem I'm solving, I can integrate it for a fraction of the cost of writing new code. This is generally not true. Pre-made modules only ever save development time if the problem they solve is genuinely hard (as in, it would take me years to do), or if they manage to abstract away their internals so that I don't ever have to look in their code. (Which, so far, nothing even remotely complex has succeeded in.)

The idea of saving coding expenses by splitting code into reusable modules has existed at least since the original "software crisis" in the late '60s. A lot of really smart people have bet their houses (both literally and figuratively) on it and tried to make it work.

It. Never. Works.

If you think you can make it work, have a go at it. If you succeed, you could revolutionize the entire field of software engineering and make yourself richer than God. Just don't expect anyone else to bet anything on it -- the common wisdom very much is that it's a fool's errand.
 
I think that's a little extreme. We have plenty of examples of code modules in the wild: LAME and libavcodec, OS IO libraries, physics libs, etc., etc. The whole premise of modern program design is modular, with, potentially, if it's well written, closed-box functions that take input and deliver output and can be swapped with other functions. Even if scaling this up to its greatest possible conclusion is too big a task to be achieved, the idea itself isn't wrong. Yes, using other people's code sucks, but that's as much because it's never properly documented or designed for other people's benefit.
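As a toy illustration of that closed-box premise (hypothetical names, not any real library's API): the calling code sees a narrow interface, and the implementation behind it, whether in-house or wrapping something like LAME or libavcodec, can be swapped without the callers changing:

[code]
// Sketch only: a narrow, swappable interface. Real codec wrappers would sit
// behind the same signature.
#include <cstddef>
#include <cstdint>
#include <vector>

struct AudioEncoder {
    virtual ~AudioEncoder() {}
    // PCM samples in, compressed bytes out -- the entire visible surface.
    virtual std::vector<std::uint8_t> encode(const std::vector<std::int16_t>& pcm) = 0;
};

// Placeholder implementation (just re-packs PCM); a LAME- or libavcodec-backed
// encoder with the same signature could be dropped in without touching callers.
struct PassthroughEncoder : AudioEncoder {
    std::vector<std::uint8_t> encode(const std::vector<std::int16_t>& pcm) override {
        std::vector<std::uint8_t> out(pcm.size() * 2);
        for (std::size_t i = 0; i < pcm.size(); ++i) {
            const std::uint16_t v = static_cast<std::uint16_t>(pcm[i]);
            out[2 * i]     = static_cast<std::uint8_t>(v & 0xFF);
            out[2 * i + 1] = static_cast<std::uint8_t>(v >> 8);
        }
        return out;
    }
};
[/code]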
 

To me it's always a value proposition:
What would it cost me to build it?
What would it cost me to maintain/fix it?
How good is it?
Does it get my artists and designers working faster?

Unfortunately a lot of those are guesswork, but IME you tend to massively underestimate the second one.

There is a certain class of problem that lends itself to libraries or middleware solutions: problems that require specific knowledge and are complex (codecs, for example), and that have extremely simple external interfaces.

The other class of commonly used libraries are the simple, widely used ones like the C runtime library; virtually no one bothers rewriting it, although efficient implementations of things like memcpy are often written.

Once you get into the more complex libraries (STL, Boost, etc.), there is a lot of resistance in game teams, because unlike the C libraries, the semantics are not well understood by many of the people using them, leading to poor choices and often big headaches.

Look at code re-use this way: a lot of programmers can't agree that it's cheaper to use STL than to write their own containers.
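For what it's worth, the kind of container teams roll themselves usually looks something like this minimal sketch (illustrative only, not anyone's shipping code): fixed capacity, no dynamic allocation, nothing hidden:

[code]
// Fixed-capacity array in place of std::vector: no growth, no allocation,
// trivially inspectable. Requires a default-constructible T for this sketch.
#include <cassert>
#include <cstddef>

template <typename T, std::size_t Capacity>
class FixedVector {
public:
    FixedVector() : size_(0) {}

    void push_back(const T& value) {
        assert(size_ < Capacity);  // overflow is a bug, not a reallocation
        data_[size_++] = value;
    }

    T&       operator[](std::size_t i)       { return data_[i]; }
    const T& operator[](std::size_t i) const { return data_[i]; }
    std::size_t size() const { return size_; }
    T* begin() { return data_; }
    T* end()   { return data_ + size_; }

private:
    T data_[Capacity];
    std::size_t size_;
};
[/code]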
 
Look at code re-use this way, a lot of programmers can't agree that it's cheaper to use STL than write their own containers.
And the STL is relatively widely adopted, trivial code. Things become exponentially more complex when you try to move to more complex 3rd party modules.

I'm entirely with ERP here: code reusability has little to do with some utopian run-time performance metric and everything to do with dev-time metrics. And this is even before considering that factors like a bad HAL component revision can undo months of optimisation effort. Which of course brings us to another can of worms: the subject of validation/verification of 3rd party modules. Are you sure that latest-and-greatest off-the-shelf codec produces output compliant enough to keep the rest of your pipeline happy? What if it's the latest-and-greatest revision of your off-the-shelf engine? How many things can break if you ever decide to upgrade? (Largely a rhetorical question, as projects usually stick with the one version of the middleware engine they started life with, any potential further evolution occurring in-house, if possible.)
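The kind of check that question implies might look like this rough sketch (hypothetical file names; a crude bit-exact diff, whereas real validation might allow tolerances): run the third-party component over fixed inputs and compare against output captured from the revision the pipeline was validated on:

[code]
// Sketch of a "did the middleware upgrade break us?" regression check.
#include <cstdio>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

static std::vector<char> readAll(const std::string& path) {
    std::ifstream f(path, std::ios::binary);
    return std::vector<char>(std::istreambuf_iterator<char>(f),
                             std::istreambuf_iterator<char>());
}

int main() {
    // The "golden" file was captured with the component revision the pipeline
    // was validated against; the other is what the current revision produces.
    const std::vector<char> expected = readAll("testdata/clip01.golden.bin");
    const std::vector<char> actual   = readAll("build/clip01.encoded.bin");

    if (expected.empty() || actual != expected) {
        std::fprintf(stderr, "codec output drifted from the validated baseline\n");
        return 1;
    }
    std::puts("codec output matches baseline");
    return 0;
}
[/code]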
 
ERP said:
Look at code re-use this way, a lot of programmers can't agree that it's cheaper to use STL than write their own containers.
People write their own containers for the same reason every library on the planet contains its own version of math classes and memory allocators. It's human nature to have 'religious' preferences (that, and every programmer has a 'secret' plan to take over the world with HIS particular syntax and style); we'll only ever get around that if we can migrate to syntax-agnostic coding tools.

But I do feel that the bigger problem (in some corporate environments at least) is the insistence on making large monolithic packages for people to use, rather than breaking things down small.
 
Oh, you'd be surprised... :/

Though the other part is true: high-end CPUs don't need to be efficient at burning through such codebases, as those codebases are targeted at a lower CPU budget. This kinda adds to what I said earlier about there being little point in serious optimization on PC.

So two schools of thought going on:

1) make code as efficient/lean as possible

2) make code as quickly as possible


#1 requires more time (money) and better coders, but demands less of the CPU.

#2 requires more CPU die space (money), but demands less of the coders.

Perhaps Sony/MS will give pub/dev houses the option: "We guarantee the new boxes will be easier to code for, but it will cost us (MS/Sony) money to deliver the kind of CPU necessary to make up for loose codebases, so we want a bigger cut of the game royalties to offset the cost of the bigger CPU and RAM necessary to make development cheaper/quicker."

Would that be fair?
 