Is Doom3's renderer revolutionary?

Reverend said:
[edit] Link re post on OpenGL.org regarding offset mapping.

THANKS Rev!

I remember when you guys talked about that demonstration back when I still had no DX9 card, and then I forgot all about it. Now I downloaded and ran it, and DAMN, it looks GREAT! Jesus, what a difference, it's friggin enormous! :D

Now, wasn't this added to Farcry in the second patch I remember hearing long ago? How do I enable it?
 
Luminescent said:
Good point. Doom 3's renderer was ready many moons ago, but game content development, etc. slowed it. Are we going to penalize a renderer because it had to put up with delays from voice acting, polish, content creation, and other things aside from renderer development?

I didn't, anyway.

Finally, there is a major difference between the time a technique was discovered for offline rendering and when it was officially implemented in a complete (robust) solution, in-game. I figured that in this case we were referring to the first time a complete game renderer (not a tech demo) implemented whatever technique in a fashion that made its technology readily accessible (playable with reasonable performance) and noticeable. I mean, we could render spectacular things with ray-tracing/offline techniques on current hardware, but whether it comes at an interactive framerate is another story.

I see no reason to make a difference between a game renderer or any other type of renderer, as long as they are doing the same thing.
A simple example: raytracers are offline renderers now... Eventually PCs will be fast enough for them to run at interactive framerates. Does this change anything about the renderers?

Maybe we could agree on the prerequisite attributes that must be met for the consideration of a revolutionary game renderer (in this context).

I propose it must be:

1. A solution that is the core rendering technology of a newly released game.

Argument: We can mod old games with rendering features, but a revolution will not commence if an old game has lost its impact. We can create tech demos, but they will suffer the same fate, and perhaps lack a robust interactive/playable solution. The whole point is to make the solution interactive for the purpose of a game.

2. A solution that creates a perceivable visual impact and influences other engines.

3. A solution that is accessible (playable) and scalable (a general expectation for PC game titles).

Again, this is aimed at games, while this thread is not.
Even so, as I stated before, the Doom3 engine is most probably not very scalable, especially in things like world size and polycount. I doubt that we'll ever see a game based on the Doom3 engine with a significantly higher polycount or significantly larger rooms than Doom3. I also doubt that we'll ever see a game that is not in the style of Doom3 (indoor, lots of metal, very dark).
 
Reverend said:
DeanoC said:
HL2's radiosity light maps are a cartesian formulation of the 1st-order diffuse irradiance maps for which spherical harmonics were originally used
Um, could you repeat that? :)
I can try ;-)
Note: I know nothing more about HL2 radiosity mapping than what I read in their paper and discussions with various people (all of whom are much better at this stuff than I am). Peter Pike Sloan has some interesting observations on GDAlgorithms. Also, I don't feel too comfortable discussing this, as I don't know that much about the subject, BUT anyway...

Diffuse irradiance maps are based on the idea that a cubemap surrounding an object can be seen as an approximation of the incoming light, treated as infinitely far away. So you can put a box around an object, render in the incoming light (via drawing circles etc.) and then just use the normal to look up into the cube map.
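
That normal-based lookup can be sketched in a few lines. A minimal nearest-face version (hypothetical face colours, no within-face filtering, which a real cube map lookup would have):

```python
def cube_lookup(faces, n):
    """Pick a cube map face colour by the dominant axis of the normal.

    `faces` maps '+x','-x','+y','-y','+z','-z' to an RGB colour.
    A real lookup also interpolates within the chosen face; picking
    one colour per face is enough to show the idea.
    """
    x, y, z = n
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return faces['+x' if x >= 0 else '-x']
    if ay >= az:
        return faces['+y' if y >= 0 else '-y']
    return faces['+z' if z >= 0 else '-z']
```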

Problem is that cubemaps are big and relatively expensive to update (6 renders currently), so Ramamoorthi et al. showed that a function approximation can produce very close results (3% error in the diffuse case). But there are lots of function approximations and lots of basis systems to base your function approximation on. The order is how high-frequency the data you're capturing is: 0th order is ambient, 1st order is enough for low-frequency diffuse, 2nd and higher is needed for specular and higher-frequency changes.

HL2 uses a 1st-order linear cartesian basis (good ol' fashioned X, Y, Z). Geometrically you can think of it as storing 3 light maps per area; these maps are treated as infinitely far away along particular vectors.
These vectors are 'special' and are constructed in such a way that it all works(tm). The coefficients are in the paper they presented.
Runtime is just dotting the normal against each light map vector, multiplying that by the light map colour, and then accumulating the results.
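
That runtime step is tiny. A sketch in Python rather than shader code; the three tangent-space basis directions below are the published Valve coefficients as I remember them, so treat the exact numbers as illustrative:

```python
import math

# The three HL2 tangent-space basis directions (mutually orthogonal
# unit vectors, all tilted the same amount out of the surface plane).
S6, S2, S3 = math.sqrt(6.0), math.sqrt(2.0), math.sqrt(3.0)
BASIS = [
    (-1.0 / S6,  1.0 / S2, 1.0 / S3),
    (-1.0 / S6, -1.0 / S2, 1.0 / S3),
    (math.sqrt(2.0 / 3.0), 0.0, 1.0 / S3),
]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(normal, lightmaps):
    """Combine the three directional light map colours for one texel:
    weight each map by the (clamped) dot of the per-pixel normal with
    that map's basis direction, then accumulate."""
    out = [0.0, 0.0, 0.0]
    for direction, colour in zip(BASIS, lightmaps):
        w = max(0.0, dot(normal, direction))
        for i in range(3):
            out[i] += w * colour[i]
    return tuple(out)
```

Because the basis directions are orthonormal, a normal pointing exactly along one of them picks up only that map's colour.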

The other commonly used basis is spherical harmonics; this is a basis that is mapped onto a sphere. It's harder to think about, as the geometric 'map' itself is curved, but it's basically the same thing.

At the level used in HL2 and most games, they are just alternative representations of the same thing (similar to storing things in cartesian or polar coordinates), but SH (and some other bases) have certain mathematical properties that are required as you move to higher frequencies (precomputed radiance transfer etc.)

Thinking about the HL2 system a bit...
Effectively you're doing PCA (Principal Component Analysis) compression on the cube map and then decompressing per pixel. PCA is a method of compressing correlated values, and the 3 light maps are obviously correlated, as they are all generated from the same lighting data (a light will in most cases affect at least 2 faces). Now, PCA wasn't actually used; the basis was picked as a good average, but I wonder if you PCA'ed the data for every area whether you would get better lighting...

The more I look, the more PCA is looking like another 'magic' operator in CG. Similar to how everything can be solved with a matrix inversion, it seems PCA can reduce any problem's complexity...
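
As a toy illustration of the PCA idea (nothing to do with Valve's actual tooling), the first principal component of some correlated 2-D samples can be found by power iteration on their covariance matrix:

```python
def principal_component(samples, iters=50):
    """First principal component of a list of (x, y) samples, via
    power iteration on the 2x2 covariance matrix. Assumes the start
    vector (1, 0) is not orthogonal to the dominant eigenvector."""
    n = len(samples)
    mx = sum(x for x, _ in samples) / n
    my = sum(y for _, y in samples) / n
    centred = [(x - mx, y - my) for x, y in samples]
    # Covariance matrix entries.
    cxx = sum(x * x for x, _ in centred) / n
    cxy = sum(x * y for x, y in centred) / n
    cyy = sum(y * y for _, y in centred) / n
    v = (1.0, 0.0)
    for _ in range(iters):
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = (w[0] / norm, w[1] / norm)
    return v
```

For samples lying along one line, the returned direction is that line (up to sign), which is exactly the "best average direction" intuition above.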
 
DeanoC said:
HL2 uses a 1st-order linear cartesian basis (good ol' fashioned X, Y, Z). Geometrically you can think of it as storing 3 light maps per area; these maps are treated as infinitely far away along particular vectors.
These vectors are 'special' and are constructed in such a way that it all works(tm). The coefficients are in the paper they presented.
Runtime is just dotting the normal against each light map vector, multiplying that by the light map colour, and then accumulating the results.

The other commonly used basis is spherical harmonics; this is a basis that is mapped onto a sphere. It's harder to think about, as the geometric 'map' itself is curved, but it's basically the same thing.
Well, actually, what you just described HL2 as using is spherical harmonics, albeit only the first four terms (L=0, L=1).
 
Chalnoth said:
Well, actually, what you just described HL2 as using is spherical harmonics, albeit only the first four terms (L=0, L=1).

(I'm not sure my following answer is correct, but I did a little checking based on Simon Brown's SH coefficients (http://www.sjbrown.co.uk/sharmonics.html) and fiddling with the HL2 ones a bit, and I think I'm right...)

The basis is different. SH is orthogonal; HL2's linear basis isn't. You can't exchange bases in the HL2 version.

SH is a particular set of basis functions; they should be orthonormal and orthogonal (orthonormal might be a weak constraint; certainly some 'SH' solutions seem to drop it).
 
Edit: Nevermind. I think what I just wrote was nonsense. This, however, is not:

The L=1 spherical harmonic terms can be written as (not normalized) x/r, y/r, and z/r, and are appropriately labeled Px, Py, and Pz. The first order terms, then, are essentially linear.

And a side comment on your terminology: orthonormal means both orthogonal and normalized. Normalizing the spherical harmonic functions isn't really necessary, as long as you "renormalize" things later.
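
For reference, the first four real-SH basis functions can be written out directly. The constants below are the usual real-SH normalisations; sign and ordering conventions vary between papers, so take this as one common choice:

```python
import math

def sh_basis_l01(n):
    """First four real spherical-harmonic basis values for a unit
    direction n = (x, y, z). Note the l=1 terms are just x, y, z
    scaled by a constant, i.e. linear, as stated above."""
    x, y, z = n
    c0 = 0.5 * math.sqrt(1.0 / math.pi)      # l=0 (constant/ambient)
    c1 = math.sqrt(3.0 / (4.0 * math.pi))    # l=1 (linear terms)
    return (c0, c1 * y, c1 * z, c1 * x)
```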
 
I see no reason to make a difference between a game renderer or any other type of renderer, as long as they are doing the same thing.
A simple example: raytracers are offline renderers now... Eventually PCs will be fast enough for them to run at interactive framerates. Does this change anything about the renderers?
Good, now everyone should be clear on how they should answer the poll; you created it. You've forced me to vote no, at least according to your use of "revolutionary" in the context of 3D.

We must differentiate between revolutionary offline and online/real-time renderers, though. Relative to offline renderers, real-time renderers are a step backwards, at least in terms of innovation of techniques, etc., because offline rendering has fewer processing constraints. There is a major consideration everyone must make in the context of this poll. That is, it must be determined whether a renderer is revolutionary in general (mainly a property of offline techniques) or whether it is a creative way of taking real-time rendering a step closer to offline/ideal quality, whether through a robust solution or new rendering techniques.

Again, this is aimed at games, while this thread is not.
Even so, as I stated before, the Doom3 engine is most probably not very scalable, especially in things like world size and polycount. I doubt that we'll ever see a game based on the Doom3 engine with a significantly higher polycount or significantly larger rooms than Doom3. I also doubt that we'll ever see a game that is not in the style of Doom3 (indoor, lots of metal, very dark).
If I remember correctly, a couple of the Quake 4 magazine pics posted in the game forum were outdoor (albeit at night).
 
Chalnoth said:
Edit: Nevermind. I think what I just wrote was nonsense. This, however, is not:

The L=1 spherical harmonic terms can be written as (not normalized) x/r, y/r, and z/r, and are appropriately labeled Px, Py, and Pz. The first order terms, then, are essentially linear.

And a side comment on your terminology: orthonormal means both orthogonal and normalized. Normalizing the spherical harmonic functions isn't really necessary, as long as you "renormalize" things later.

Yep, that sounds right; as has been noted, the choice between HL2's linear basis and 1st-order SH is largely arbitrary.

The GDAlgorithms list has an incredibly detailed description of the difference between Gary's basis and SH, where PPS and Gary McTaggart basically dissected the whole thing.

The orthonormal thing is 'interesting'; it's not strictly necessary, but as we are working in real-time we don't do the re-normalize. So having an orthogonal but non-orthonormal SH basis would be bad.
Why is this important? Well, some of the papers out there (including the classic Ramamoorthi et al.) don't use an orthonormal SH basis, which means people who use that basis without reading the fine print get a slightly wrong result.
 
Luminescent said:
We must differentiate between revolutionary offline and online/real-time renderers, though. Relative to offline renderers, real-time renderers are a step backwards, at least in terms of innovation of techniques, etc., because offline rendering has fewer processing constraints. There is a major consideration everyone must make in the context of this poll. That is, it must be determined whether a renderer is revolutionary in general (mainly a property of offline techniques) or whether it is a creative way of taking real-time rendering a step closer to offline/ideal quality, whether through a robust solution or new rendering techniques.

I'm not too sure about that one. Offline renderers generally use vastly different rendering methods from realtime solutions. There is a bit of overlap, but generally they are completely different areas of research (one of the most obvious examples: 3d hardware has various limitations, and a lot of research is about implementing something within these limitations. This is not an issue at all for offline rendering).
This overlap is also a problem... For example, if the shadowmapping idea is borrowed from offline rendering, is it suddenly revolutionary again in a realtime context, even though it's the same as what offline renderers have been doing for years?
If anything about that is revolutionary, then I would say it is the hardware that makes it possible, not the software.

If anything, I would say we are heading towards less and less revolutionary engines, since the hardware will allow them to get closer to offline renderers.
Perhaps the revolutionary thing about Doom3 is that it is actually NOT revolutionary, but it can simply implement many age-old rendering methods in realtime for the first time, without the need for a lot of hacks.

If I remember correctly, a couple of the Quake 4 magazine pics posted in the game forum were outdoor (albeit at night).

Yes, MOHAA also appears to be outdoor, but it is not. It looks outdoor, but everything is still constructed as if they are small rooms and hallways, like any game based on a Quake engine.
The same goes for the 'outdoor' parts in Doom3.
So I wonder how 'outdoor' these Quake4 scenes really are.
 
Scali,

I gotta ask ya this, but WTF's your beef with Doom, Quake engines and Carmack, huh? You look either like a jealous friggin loonie in general when you keep on ragging on id like this, or a jealous friggin loonie with some kind of obsessive/compulsive disorder.

Cut it out already; you've made whatever point you wanted to make long ago, and it gets tiresome to hear you nagging about the same things in post after post in multiple threads. Christ, you even started a new one when the old one got locked; that's one of the hallmarks of a loonie.

You want to claim D3 can't do big rooms, FINE, but seems to me you haven't even tried to noclip out through the panorama windows in the Marine HQ on the very first level of the game... ;)
 
I was pretty happy with the last level's larger areas. It doesn't have a terrain engine, but that wasn't a surprise or a disappointment.
 
DeanoC said:
The orthonormal thing is 'interesting', its not strictly nessecary but as we are working real-time we don't do the re-normalize. So having orthogonal but non orthonormal SH basis would be bad.
Actually, for low-order spherical harmonics it may be useful not to normalize the eigenfunctions, as you can bake the normalization into the coefficients and cut out a multiplication. But once you start having to use series that require two or more terms, that extra multiply becomes necessary, and it is pointless not to normalize the eigenfunctions.
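
The "bake it into the coefficients" trick is just constant folding, done once ahead of time instead of per evaluation. A sketch with hypothetical helper names, l=1 only:

```python
import math

C1 = math.sqrt(3.0 / (4.0 * math.pi))  # l=1 real-SH normalisation

def eval_l1(coeffs, n):
    """Evaluate an l=1 SH expansion the straightforward way, paying
    for the normalisation multiply on every call."""
    x, y, z = n
    return C1 * (coeffs[0] * y + coeffs[1] * z + coeffs[2] * x)

def bake(coeffs):
    """Fold the normalisation constant into the stored coefficients
    once, ahead of time."""
    return tuple(C1 * c for c in coeffs)

def eval_l1_baked(baked, n):
    """Same result as eval_l1, with no per-call constant multiply."""
    x, y, z = n
    return baked[0] * y + baked[1] * z + baked[2] * x
```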
 
Scali said:
Perhaps the revolutionary thing about Doom3 is that it is actually NOT revolutionary, but it can simply implement many age-old rendering methods in realtime for the first time, without the need for a lot of hacks.
Why single out Doom 3 then, as this can be true for many other engines? What is your point in all this, anyway? Are these engine developers to be stripped of any credit because they aren't reinventing the wheel every single time? The basis on which you declare Doom 3 non-revolutionary can be applied to many other areas as well. As Reverend noted, is the Voodoo chipset revolutionary? I just don't see your main point, other than wanting to discredit Carmack for whatever he accomplished in Doom 3.
 
dksuiko said:
Why single out Doom 3 then, as this can be true for many other engines? What is your point in all this, anyway? Are these engine developers to be stripped of any credit because they aren't reinventing the wheel every single time? The basis on which you declare Doom 3 non-revolutionary can be applied to many other areas as well. As Reverend noted, is the Voodoo chipset revolutionary? I just don't see your main point, other than wanting to discredit Carmack for whatever he accomplished in Doom 3.

This is a response to people who try to single out Doom3 in a positive way, and in effect pretty much discredit all other engines because well, they're not Doom3, and all developers because they're not Carmack.
I try to educate people by explaining what Doom3 is and what it is not, what it can do, and what it cannot do, and why other engines do different things.
Not to discredit Carmack, but to be realistic, and put Carmack back in his place, and also the other developers.
I never actually discredited Carmack for the Doom3 accomplishments. I merely pointed out that the technology he used was invented in the 70s.
People who claim Carmack did it all, would discredit the real inventors. Is that fair?
But looking at all the emotional responses regarding Carmack, apparently most people already see him as a god, and Doom3 is his gift to this world, and all games must be like it from now on.
 
Scali, you'll be happy to know that the D3 engine can indeed cope with "FarCry"-big outdoor levels. Here's the proof:

1024 x 768 (118kb)
http://pwp.netcabo.pt/Tobril/D3_outdoor_test.jpg

This is just something I did in under 10 minutes. I inserted a model of an outdoor part of Mars, cloned it 8 times, and then enclosed them with six "sky" brushes. Each individual model is pretty big by itself, but 8 copies should remove all doubt. Funny statistics:

- For scale, those railings on top of the silos are about human size, meaning an enemy on the very last one would be virtually indistinguishable.
- There are 8 point lights in this map (full bump, specular, ambient light enabled) with some overlap between them.
- It takes about 9 mins to walk all the way down to the last two silos (noclipping -- actual walking would probably take twice as long).
- FPS in that spot was actually around 42 but dropped because of the screenshot, and I'm noclipping right near the "ceiling" to get as many polys on screen as possible.
 
Mordenkainen said:
Scali, you'll be happy to know that the D3 engine can indeed cope with "FarCry"-big outdoor levels.

That level doesn't bear a whole lot of resemblance to the average FarCry level. There's no daylight, there's no foliage, there aren't all that many objects or buildings, etc.
There's more to FarCry than just the physical area of the level, obviously.
 
Yeah, we already saw it can do big areas, but I have to admit I find it doesn't look so groovy with the totally black shadows; it would look nicer if you put another light in a specific spot to get rid of them. :p Of course, you did it in 10 minutes I guess.

Yeah, if that was FarCry there would be hundreds of repetitive sprites. I demand the sprites.
 
He's someone who isn't out to disparage, but simply "educate" people of how Doom3 really isn't all that, and neither is Carmack by the way, and besides, Scali's done "stuff" too. Except he is too modest to brag about exactly what it is here on the board. It's not that his stuff isn't AS GOOD as teh Carmack's or anything, he's just modest. Isn't that right, Scali? ;):LOL:;)

I don't think this guy's John Romero. Y'know, Romero always struck me as much too cool a guy to ever be this petty and bitter. Seems to me, Scali's just some 2-bit, wise-ass programmer a la Derek Smart who never really amounted to anything in the game biz and now just wants to try to topple those who are more successful than him. :devilish:
 
Scali said:
Mordenkainen said:
Scali, you'll be happy to know that the D3 engine can indeed cope with "FarCry"-big outdoor levels.

That level doesn't bear a whole lot of resemblance to the average FarCry level. There's no daylight, there's no foliage, there aren't all that many objects or buildings, etc.

It was never meant to resemble a FarCry level. You said the D3 engine couldn't handle big outdoor areas. Foliage, objects and buildings are just more polygons (or sprites, for that matter). You've praised UT2004's big Onslaught levels, and yet they're also pretty much barren wastelands with a few scattered bases. And like I said, that screenshot was taken noclipping from the air, looking down and into the distance. Framerates while walking were a locked 60, and I only have a R9800 Pro. More polys are not going to hurt.

But I'm wondering why you don't prove your argument with tangible facts. You say D3 can't handle FarCry levels; why should we have to take your opinion at face value, which you continue to maintain even after I've shown you some proof? Why don't you try to prove your side of the argument with more than just opinion?

There's more to FarCry than just the physical area of the level, obviously.

And there's more to games than FarCry. While FarCry did outdoors extremely well, it did other things less well, like indoors, where D3 excels. No engine is perfect at everything, and yet I only see you banging on D3/JC. I don't see you ranting about how CryTek should be put in its place because the outdoor/indoor transitions in FC look like something out of 1994, how Tim Sweeney should be relegated to assistant programmer status for releasing a game in late 2003 that only uses up to DX8.1 shaders, or even insulting Gabe because HL2's levels seem to have two suns: one for static geometry and another for models.

Of course FC's engine is going to be better at recreating breathtaking paradise islands with jungle, because that's what the debut game was about. Of course UE2 is going to be better at large multiplayer games, because the debut game was all about multiplayer. And of course Source is going to be better at physics, because the debut game even has a physics gun to throw stuff around. And D3's engine was made with tight, detailed interiors and dramatic lighting in mind, because the debut game was all about dark corridors, tension, and being stuck in a martian outpost with no chance to escape.

You want to make a Predator game where a team of special ops hunts down a Predator? Use FC's engine. Want to make a big, full-on multiplayer game pitting marines vs. aliens in large open landscapes? Use UE2's engine. You want a game where environment interactivity is the key? Use Source. You want a solo romp through detailed interiors where shadows play a big role? Use D3's engine.
 