Questions for Tim Sweeney?

MfA said:
JF_Aidan_Pryde said:
I intend to ask him something regarding Cell. But I guess the big question is whether IBM will have the tools to generate parallel code.

That kind of tool doesn't get you very far; you need large-granularity parallelism ... if the developer doesn't take care not to introduce dependencies in his code, that kind of parallelism isn't present to begin with, and even if it were, it is hell on a compiler trying to analyze whether there is any dependence at that scale. You'd have to put a restrict modifier on every pointer :)
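A minimal C99 sketch of the aliasing point above; the function names are made up for illustration. With restrict, the programmer promises the arrays don't overlap, so the loop iterations are provably independent; without it, the compiler has to assume a write through dst could feed a later read through src and stay conservative.

```c
#include <stdio.h>

/* With 'restrict', dst and src are promised not to overlap, so every
   iteration is independent and the loop can be vectorised/parallelised. */
void scale_noalias(float *restrict dst, const float *restrict src, float k, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] = src[i] * k;
}

/* Same loop without the promise: the compiler must assume the pointers
   may alias and generate conservative, mostly serial code. */
void scale_maybe_aliased(float *dst, const float *src, float k, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] = src[i] * k;
}

int main(void)
{
    float src[4] = { 1, 2, 3, 4 }, dst[4];
    scale_noalias(dst, src, 2.0f, 4);
    printf("%g %g %g %g\n", dst[0], dst[1], dst[2], dst[3]);
    return 0;
}
```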

I have to protest a bit against your initial assertion. Well-optimised libraries can help a lot. I'd imagine that most vanilla tasks will be readily available in that form.
Generally speaking though, programming for machines with higher levels of parallelism does require a bit of a shift of mental gears, even with auto-parallelising/vectorising tools. Most programmers think algorithmically in an Algol/Fortran/Pascal/C++ tradition, which isn't 100% appropriate. However, unless programmers have grown thickheaded over the years, they should be able to adapt, and by appearances this first iteration of the Cell concept won't be massively parallel anyway, so Amdahl's law won't be as punishing.
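For reference, Amdahl's law gives the bound being alluded to: with N processing elements and a fraction p of the work that parallelises, the overall speedup is limited by the serial remainder. The numbers in the example are purely hypothetical, not Cell specifications.

```latex
% Amdahl's law: speedup from N processing elements when a fraction p of the
% work can be parallelised (the remaining 1 - p stays serial).
\[
  S(N) = \frac{1}{(1 - p) + \frac{p}{N}}
\]
% Hypothetical example: p = 0.8 and N = 4 give S = 1 / (0.2 + 0.2) = 2.5,
% and even as N grows without bound the speedup is capped at 1 / (1 - p) = 5.
```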
 
JF_Aidan_Pryde said:
MfA said:
Mr. Sweeney, in the past you have suggested rendering would move back to general-purpose processors, but on the other hand you have admitted reservations about massively parallel processors like Cell ... how do you reconcile these views?

Hi MfA,
I intend to ask him something regarding Cell. But I guess the big question is whether IBM will have the tools to generate parallel code. If yes, then no one needs to worry; if no, then with 'arbitrary cells', it's hard to see how one can program for them.

Word for word, MfA is asking a great question. An answer to that would be very interesting to read.
 
Entropy said:
Well-optimised libraries can help a lot. I'd imagine that most vanilla tasks will be readily available in that form.

IMO you aren't going to get the kind of parallelism you need by providing parallel implementations of matrix math or stuff like that (that might work on supercomputers, but they tend to use slightly larger matrices). The kind of "vanilla task" with enough appropriate parallelism is something like collision detection ... so we are then talking middleware game engines rather than vanilla tasks.

Generally speaking though, programming for machines with higher levels of parallelism does require a bit of a shift of mental gears, even with auto-parallelising/vectorising tools.

I think the general consensus is that for non-regular problems, explicitly parallel programming is still unavoidable.
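A minimal sketch of the kind of coarse-grained, explicit parallelism being described, assuming POSIX threads and a made-up brute-force sphere-overlap test (the names and the workload are illustrative, not from any real engine): the work is split by hand across a small, fixed number of worker threads rather than left to a compiler to discover.

```c
#include <pthread.h>
#include <stdio.h>

#define NUM_WORKERS 4
#define NUM_OBJECTS 1024

typedef struct { float x, y, z, radius; } Object;

typedef struct {
    Object *objects;
    int     first, last;   /* half-open range of "first" indices this worker owns */
    int     hits;          /* per-worker result, merged after the join */
} WorkerArgs;

static int spheres_overlap(const Object *a, const Object *b)
{
    float dx = a->x - b->x, dy = a->y - b->y, dz = a->z - b->z;
    float r  = a->radius + b->radius;
    return dx * dx + dy * dy + dz * dz < r * r;
}

static void *collide_range(void *p)
{
    WorkerArgs *args = p;
    for (int i = args->first; i < args->last; i++)
        for (int j = i + 1; j < NUM_OBJECTS; j++)
            if (spheres_overlap(&args->objects[i], &args->objects[j]))
                args->hits++;              /* no shared writes: results stay per-worker */
    return NULL;
}

int main(void)
{
    static Object objects[NUM_OBJECTS];    /* zero-initialised; fill with real data in practice */
    pthread_t  threads[NUM_WORKERS];
    WorkerArgs args[NUM_WORKERS];
    int chunk = NUM_OBJECTS / NUM_WORKERS;

    /* Naive static split; a real engine would balance the triangular workload. */
    for (int w = 0; w < NUM_WORKERS; w++) {
        args[w] = (WorkerArgs){ objects, w * chunk,
                                (w == NUM_WORKERS - 1) ? NUM_OBJECTS : (w + 1) * chunk, 0 };
        pthread_create(&threads[w], NULL, collide_range, &args[w]);
    }

    int total = 0;
    for (int w = 0; w < NUM_WORKERS; w++) {
        pthread_join(threads[w], NULL);
        total += args[w].hits;             /* merge per-worker counts serially */
    }
    printf("overlapping pairs: %d\n", total);
    return 0;
}
```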
 
I'm slow; I just thought of asking about the slide at Nvidia's Editor's Day regarding Unreal Engine 3 optimizations in NV50, and what they really meant by it.
 
MfA said:
Generally speaking though, programming for machines with higher levels of parallelism does require a bit of a shift of mental gears, even with auto-parallelising/vectorising tools.

I think the general consensus is that for non-regular problems, explicitly parallel programming is still unavoidable.
In my experience, yes.
Depending on the problem, it's not necessarily that big a deal to explicitly code for parallelism, at least in a case like this with a small and fixed number of processors. It should be quite easy to adapt to.
I don't know what background people here have, but for those who've taken some computer science, the adaptation has some similarities to learning to program in LISP. The first attempts you make will be really clumsy as you try to express your habitual thinking in a new language - tail recursion everywhere - until you gradually switch into another way of thinking about algorithms. Intellectually, shifting to LISP is a lot trickier than programming simple multiprocessors, so I can't really imagine that programmers will find the transition all that much of a challenge. It will limit the usefulness of legacy code snippets/libraries/constructs, is all.

Your comments about the differences between game programming and supercomputing are well taken, though. In supercomputing there's usually a fairly small nucleus of code that does 99.9% of the work, and spending some effort optimising that is typically a limited and very worthwhile investment of graduate student time. ;)
 
It seems as though Doom 3 has a heavy focus on stencil shadows and on using a single fragment program on all surfaces for its unified lighting. Half-Life 2, on the other hand, is known to have many different pixel shaders for different surfaces. Doom 3's solution seems more elegant, but Half-Life 2's has a chance to look better in some areas. I presume everyone in the future, including Doom 3 engine games, will move to multiple pixel shaders, but it is a balancing act between complexity and satisfying content creators. I'd like this rambling formed into a question and posed to Sweeney.
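A rough sketch of the contrast described above, assuming a hypothetical immediate-mode renderer; the type and function names are purely illustrative and not taken from either engine.

```c
#include <stdio.h>

typedef struct { const char *shader; } Material;
typedef struct { const char *name; Material *material; } Surface;

/* Stand-ins for real graphics API calls; they just log what would happen. */
static void bind_fragment_program(const char *shader) { printf("bind %s\n", shader); }
static void draw_surface(const Surface *s)            { printf("  draw %s\n", s->name); }

/* Doom 3-style: one lighting/interaction program bound once for everything. */
static void render_unified(const Surface *surfs, int n)
{
    bind_fragment_program("unified_interaction");
    for (int i = 0; i < n; i++)
        draw_surface(&surfs[i]);            /* same shading math for every material */
}

/* Half-Life 2-style: a specialised program chosen per material. */
static void render_per_material(const Surface *surfs, int n)
{
    for (int i = 0; i < n; i++) {
        bind_fragment_program(surfs[i].material->shader);  /* water, metal, skin, ... */
        draw_surface(&surfs[i]);
    }
}

int main(void)
{
    Material metal = { "metal_ps" }, water = { "water_ps" };
    Surface scene[] = { { "crate", &metal }, { "pool", &water } };
    render_unified(scene, 2);
    render_per_material(scene, 2);
    return 0;
}
```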

Epic seems to have taken the stance that driver optimizations that reduce image quality but improve performance (I'm thinking of NVIDIA's trilinear and aniso optimizations) are OK. This seems to create a very fuzzy line, and I'm curious why they don't take the stance many other developers have, that driver optimizations that reduce image quality are wrong.
 
Scalability.

Are there any plans to scale the engine with regard to the number of concurrent players? It appears that MMPOGs are catching on - yet developers are going in-house for the engine or using one of the two already out there.

It would be rather impressive to see an MMPOG using Unreal's engine...
 
When will the time come when developers will no longer need to create a new engine? By this I mean, do you predict a time in the future when a 3D engine will become so powerful that all effects and scalability can be handled by it? Or are we stuck with updates of engines whenever new processing power increases features and performance?

I can't think of anything else.
 
Tahir said:
When will the time come when developers will no longer need to create a new engine? By this I mean, do you predict a time in the future when a 3D engine will become so powerful that all effects and scalability can be handled by it? Or are we stuck with updates of engines whenever new processing power increases features and performance?

I can't think of anything else.

Carmack has already talked about this, although that of course doesn't mean that I wouldn't be interested in what Tim has to say about it.

http://www.gamespy.com/quakecon2003/carmack/

GameSpy: Are you going to retire after DOOM 3?

John Carmack: No. I've got at least one more rendering engine to write

And I think he has mentioned (somewhere else) that it's likely there'll be an "in-between" engine between the one after Doom 3 and the supposedly final one. Perhaps he was talking about a souped-up Doom 3 engine.
 
Bjorn said:
GameSpy: Are you going to retire after DOOM 3?

John Carmack: No. I've got at least one more rendering engine to write

And I think he has mentioned (somewhere else) that it's likely there'll be an "in-between" engine between the one after Doom 3 and the supposedly final one. Perhaps he was talking about a souped-up Doom 3 engine.
I'm not certain that's what he meant.

Heck, if I had said something like that, all it would have meant is that I'm working on another engine now, and I'll hold off on deciding whether to write another engine after that until the time comes.

I'd say that in the future, instead of hardware flexibility improving in ways that then require significant leaps in software technology to take advantage of, we'll see hardware performance improvements that occasionally allow for significant leaps in software technology.
 
Chalnoth said:
I'm not certain that's what he meant.

Heck, if I had said something like that, all it would have meant is that I'm working on another engine now, and I'll hold off on deciding whether to write another engine after that until the time comes.

There have been lots of rumours (usually rooted in recent interviews with Carmack) that he may give up programming games to concentrate on his space rocket business, which seems to be becoming his passion.

I interpret this as meaning that he intends to keep programming game engines for at least another iteration or two before going off to do something else, not that he will hit some kind of limit that stops him from being able to develop new engines.
 
Chalnoth said:
I'm not certain that's what he meant.

Heck, if I had said something like that, all it would have meant is that I'm working on another engine now, and I'll hold off on deciding whether to write another engine after that until the time comes.

I think he means that he might not be interested in writing new engines when they're more of an incremental step up from the previous one. And he seems to believe that we're going to be there pretty soon.
 