Sweeney presentation on programming

DiGuru said:
Yes, but exactly that (map, filter, fold) is what database engines excel in. And that would force you into a data-centric model, which is half the battle. The other half is starting the analysis at the output side. Trace back to the logic from there, by writing procedures that see how that data has to be transformed according to the program logic, instead of executing the program logic and seeing what data you need to manipulate. Store that "business logic" where it belongs. After all, that's exactly what large business apps have been doing for some time now, to counter the same problems. Although for quite different reasons.

I'm sorry, but this seems like a non-sequitur. I don't get what this has to do with the discussion. All programs transform input to output. If a programmer starts writing code without first thinking about what output he wants, then he is doing something wrong.

We are discussing new language paradigms because new language paradigms force the programmer to recast the problem into a form which can be optimized differently and more efficiently. You seem to have argued against functional and high-level approaches, yet have given no explanation as to how this is going to be achieved in raw C++ code with libraries using procedural code, while reducing bugs and maintaining high performance.

And as soon as you start designing C++ libraries which build up AST-like structures, or parse mini-languages, I guarantee you, you'll have reinvented a functional Scheme/Lisp approach hidden as another syntax.

Yes, I agree. But that's still wishing for things to "patch" C++, without having to change the way you think: leave that to the language constructs. It isn't going to happen. If you want a better language, just pick one; don't try to graft bits and pieces onto C++. It won't work. It will only add yet another layer and syntax to the mess.


Well, I don't think he literally intends to patch C++. I think he is thinking about UnrealScript NextGen and what he wants to do. And what he has noticed is that mainstream programmers don't like prefix or suffix notation, and prefer C syntax. Syntax is irrelevant. A new parallel functional language modeled on C syntax would not be a "patch". It might look like C++, but it would not have the same semantics.

HLSL looks like C syntax, but its execution model is much closer to functional. JavaScript looks C-ish, but ironically, JavaScript has a lot in common with Scheme.



And as for having some kind of language tree, where you can pick a specific level and syntax for each piece of your code: wouldn't that completely defeat the main goals of making it harder to fuck up and improving development efficiency?

No, and you gave a good example yourself: SQL. Do you think a procedural C++ implemented query is going to be less prone to fuck-up than C++ code interspersed with SQL fragments?

Today's C++ programmers use many DSLs, often without even realizing it, and reap the benefits: regexps, SQL, XPath, HLSL, etc. We aren't necessarily talking about *literally* sticking 10 different languages into a single .c file, but about linking multiple languages in your project as appropriate.

Today, we have C++ code for main program, HLSL for GPU, SQL if you use an embedded database (or OQL if you use an object-db) and usually some scripting engine for ingame rules, logic, events, etc. But large pieces of that C++ code can be split off into data-parallel pieces, because big pieces of that code are operating on huge geometry and game databases.
 
Demo, I actually agree with you. I even explained all of it. You might want to re-read it, as I have been composing a good answer, but I keep on repeating the same stuff I already posted in this thread.

Btw, I don't like C++. I used it plenty, but I prefer Delphi, as you know.
 
Was your SQL example meant to, by analogy, state that games have enormous databases (geometry, connectivity, objects, etc.) and that a game engine should act like a database engine, with its own language for executing logic and queries against this game database?

So instead of saying Database.execute(PL/SQL), you'd say GameEngine.execute(GL) (game language). And presumably, GL would specify what data to fetch, how to group it, and then which function to apply to the grouped data, and where to put the output?
 
DemoCoder said:
Was your SQL example meant to, by analogy, state that games have enormous databases (geometry, connectivity, objects, etc.) and that a game engine should act like a database engine, with its own language for executing logic and queries against this game database?

So instead of saying Database.execute(PL/SQL), you'd say GameEngine.execute(GL) (game language). And presumably, GL would specify what data to fetch, how to group it, and then which function to apply to the grouped data, and where to put the output?
Exactly.

And you should use a transaction model, and be able to push objects that are waiting for data to the end of the list. And like the transactions, use pending property changes instead of directly changing them. And have some meta-class process the objects, instead of letting them do it themselves: just load a block of the next objects and properties on the pending list, and process (stream) them.
 
If you use a fixed set of answers instead of having the game logic determine what to change, it becomes "embarrassingly parallel" instead of serial. And if you use lists, you can stream it.
 
The only problem is, maintaining ACID properties + high concurrency with an RDBMS is a solved problem. OODBMSes have shown the problem is much more difficult for operating on object graphs.
 
Yes, that's why you would want to store your object properties in that database and use a meta-class to stream them, instead of having the individual objects do it themselves. Because that breaks data coherency. Make a list, indexed by property type. So you can group all the objects that share that property, if that's the one you're interested in, and add them to the next block of the (distributed) stream. And any changes to (other) properties can be handled by adding the receiving object to the end of the list, with a set of parameters for the meta-class that describe those pending property changes.

Which is trivial to do when you have that game language database engine, and prevents all the locks and race conditions.

Sure, you have some serious overhead, but it will work. And it will actually put all those other cores to good use. It transforms the current model neatly into a massively parallel one, where you can "just add more hardware".
 
darkblu said:
same with vc as its optimiser still drops the ball each time it encounters an asm statement. nothing has changed in this regard in redmond for the past 10 years.

There's been a change.... the VC8 AMD64 compiler doesn't support inline ASM at all. You must use intrinsics.
 
Colourless said:
There's been a change.... the VC8 AMD64 compiler doesn't support inline ASM at all. You must use intrinsics.

holy incompetence, batman! these guys have some attitude.. whenever they can't reinvent the wheel they abolish it.
 
Just as a note, I don't see what's wrong with gated abstraction. Errors are not an issue: just tell your coder he can only write code in such and such a level of abstraction, and be strict about it. It's more of a management issue. Understanding is not an issue, because you can frame the "gate" with contextual info (e.g. metadata) so that the compiler can understand it. Of course, this implies a trust that the coder provided the correct context, but when you transition to a lower level of code you know this and act responsibly. It's the same with unsafe type casts in C++. In other words, I don't believe that affording an opening where trust can be broken is unacceptable. I don't want DRMed code, lol. I believe providing easy-to-use, but very clear, gates between the levels of abstraction is absolutely necessary.

Now maybe this means having a suite of distinct languages, one for each layer, with powerful language interop functionality to pull it all together. However, I think this is a rather crude method. I don't see why a procedural language that had sufficient metadata capability to self-describe down to the bare metal couldn't fulfill all these desires and more. For example, you could implement a dynamic language layer by creating a metaclass library that provides a smart "code" pointer, which replaces itself with the best runtime code block it can find, given an analysis of the context in which it is used. This would be metalanguage of the most extreme form, capable of expressing any concept by creating the abstraction (e.g. the DSL) and backend (e.g. the DSL runtime) entirely within itself. Although, I don't think it would be "productively" universal, since syntax could get in the way sometimes, making an external DSL desirable.

Of course, the question then becomes: what is the minimum of contextual functionality you need to do this? I'm not sure, but I have a strong feeling it can be done.
 
DudeMiester said:
Just as a note, I don't see what's wrong with gated abstraction. Errors are not an issue: just tell your coder he can only write code in such and such a level of abstraction, and be strict about it. It's more of a management issue. Understanding is not an issue, because you can frame the "gate" with contextual info (e.g. metadata) so that the compiler can understand it.
But understanding is an issue: for the programmers! They would have to learn all those other languages to be able to do their work, as they might need to write or debug a part in some strange dialect tomorrow.

How many syntax errors do you make when switching from one language to another?
Of course, this implies a trust that the coder provided the correct context, but when you transition to a lower level of code you know this and act responsibly. It's the same with unsafe type casts in C++. In other words, I don't believe that affording an opening where trust can be broken is unacceptable. I don't want DRMed code, lol. I believe providing easy to use, but very clear, gates between the levels of abstraction is absolutely necessary.
If all programmers wrote perfect code, this would be a non-issue.

How often do your programs run as intended, directly after you made some substantial changes? It did happen a few times?

Now maybe this means having a suite of distinct languages, one for each layer, with powerful language interop functionality to pull it all together. However, I think this is a rather crude method. I don't see why a procedural language that had sufficient metadata capability to self-describe down to the bare metal couldn't fulfill all these desires and more. For example, you could implement a dynamic language layer by creating a metaclass library that provides a smart "code" pointer, which replaces itself with the best runtime code block it can find, given an analysis of the context in which it is used. This would be metalanguage of the most extreme form, capable of expressing any concept by creating the abstraction (e.g. the DSL) and backend (e.g. the DSL runtime) entirely within itself. Although, I don't think it would be "productively" universal, since syntax could get in the way sometimes, making an external DSL desirable.

Of course, the question then becomes: what is the minimum of contextual functionality you need to do this? I'm not sure, but I have a strong feeling it can be done.
I think the real question might be: what is the maximum contextual understanding you're requiring your programmers to have? And please have someone handy who really understands all of it down to the machine level, including all the inconsistencies, to debug things...
 
The issue of learning a new syntax vs learning a new API library is a canard. There is not much difference. And moreover, syntax errors are preferable to runtime errors, so if I have to choose between a system in which the IDE or compiler catches syntax errors, vs one where I have to do a build and execute to catch those same errors, I'd rather catch them earlier. Runtime errors are more expensive to debug, and the more of them that can be caught earlier, the better.

As I said before, learning SQL, regexps, or HLSL is, I think, far more beneficial than a procedural library approach. We had that with some early OpenGL extensions, and the result was terrible. Just compare the ease of use of a shader language or register combiner language syntax vs the procedural combiner or shader extensions.

The procedural approaches wipe out the possibility of a whole cornucopia of verification and validation, deferring checks until runtime.
 
Agreed, as long as you limit your language tree to the same kind. Because while switching from one procedural language to another mostly generates syntax errors, switching from a procedural language to a functional one, or vice versa, takes time to get into the right frame of mind.

Although I agree it can be done, it requires some interesting organizational changes to pull it off.
 