Doom III vs HL2

DekerBSB

Newcomer
I have already seen this discussed somewhere, but I could not find it.
Which one uses the most advanced technology, Doom 3 or HL2?
 
Uh... define advanced. HL2 makes the most use of DX9 features, if that's any measure to go by. (I think it is, but not the only one.)

Let's start like this:
With "technology", do you mean hardware technology or software technology?
 
Both games use completely different graphics technologies (e.g. Doom 3 uses dynamic lighting everywhere, whereas HL2 uses lightmaps in many places). And neither is released yet... how do you want to compare them?
 
Well, IMO:

HL 2 uses the most advanced shader implementation.

Doom 3 uses the most advanced shadowcasting model.

I say "shadowcasting" rather than "lighting" because there are lighting effects the two have in common (normal maps), as well as lighting effects that HL2 uses extensively and that Doom 3 and its engine do not attempt (as of Carmack's last discussion of it, AFAIU). For competitive reasons, I think addressing some of this at the base-engine level is a likely reason for Doom 3's delay, and this might change (though probably not much for the game itself).

As for their artistic success in achieving their different game-environment goals, I don't think "advanced" is a useful word for that comparison, so I don't intend these observations to serve in that regard.
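Doom 3's shadowcasting model is built on stencil shadow volumes. A minimal CPU sketch of the counting rule (the depth-pass variant, purely for illustration; this is not id's code) looks like this:

```c
#include <assert.h>

/* Depth-pass stencil shadow test for one pixel (CPU sketch).
 * For each shadow-volume face the eye ray crosses in front of the
 * visible surface: +1 for a front face, -1 for a back face.
 * A nonzero final count means the surface point is in shadow. */
int in_shadow(const int *face_dirs, int n_faces)
{
    int stencil = 0;
    for (int i = 0; i < n_faces; i++)
        stencil += face_dirs[i];   /* +1 = front face, -1 = back face */
    return stencil != 0;
}
```

The point is that this only needs stencil increment/decrement and lots of fill-rate, not any DX8/DX9 shader features: a ray that enters a volume (+1) without leaving it (-1) ends with a nonzero count and is shadowed.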
 
Doom 3 was built on a DX8 level featureset. Its use of the newer shader functionality in OpenGL appears only to be meant to reduce the number of rendering passes that were needed for older technologies.
 
from the thread "nv faster in d3":
Joe DeFuria said:
Doomtrooper said:
I look at it this way: Carmack is endorsing an inferior chipset as the 'Doom 3 card to have'.

Yes and no.

It's certainly inferior in floating point performance. That doesn't make it inferior for his game.

Where consumers will get "confused" about which cards are "the best at bleeding edge games", will be the likely assumption that Doom3 requires the latest "tech", because well, it's "an Id engine." This is very untrue.

Doom3 requires old, DX7 level tech....but requires that it be very, very fast.

By design, the best card for Doom3, is not necessarily the best for DX9 games.

The best cards for doom3:

1) Most effective multitexturing fill-rate
2) Most bandwidth
3) Support for fast stencil
4) Support for fast "fixed function dot-3 shading"

Those have little to do with DX9 capability.

The current 5900 has more multitextured fill-rate, more bandwidth, equal (maybe better?) stencil support, and fast dot-3 shading when in integer / fixed function precision. So it's not surprising that the FX (with current specs) would beat a 9800 Pro by about the amount that the fill-rate / bandwidth specs would suggest.

What's really interesting in this regard is the Volari Duo V8. It has TONS of raw pixel fill-rate, which could potentially make it a very interesting Doom 3 card if Carmack makes a special path for it (and if XGI exposes special extensions for him to do so).

If Carmack doesn't make a special path for it, it will be forced to use the ARB2 path (floating point) to be fully featured, which might cut its performance in half.

...Now we have the market flooded with inferior graphic processors all based off...money. Games optimized for FX and FP 16 formats.

I wouldn't worry about it. Half-Life is going to be a staple in benchmarks much like Doom3, and Half-Life is basically here now. These two benchmarks represent a nice contrast: performance in highly stressful "old architecture" apps (Doom3), and performance in new tech apps (Half-Life).
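The four-point list above can be turned into a rough back-of-envelope estimate of why raw fill-rate dominates a multipass DX7-style renderer. All numbers below are hypothetical, purely for illustration:

```c
#include <assert.h>

/* Back-of-envelope frame cost for a multipass, stencil-shadowed
 * DX7-style renderer (all inputs hypothetical, for illustration). */
long frame_pixels(long width, long height,
                  double depth_complexity,   /* avg opaque overdraw        */
                  int lights,                /* lighting passes per pixel  */
                  double shadow_overdraw)    /* volume overdraw per light  */
{
    /* one depth/ambient fill, then per light: one lighting pass
     * plus the stencil-volume fill for that light's shadows */
    double per_pixel = depth_complexity
                     + lights * (1.0 + shadow_overdraw);
    return (long)(width * height * per_pixel);
}
```

At 1024x768 with depth complexity 3, four lights, and shadow-volume overdraw of 5, that's 1024 * 768 * (3 + 4 * 6) = about 21 M pixels per frame, so a card's raw (multitextured) fill-rate bounds the frame rate long before any DX9 shader capability comes into play.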
 
epicstruggle said:
[snip]

Thanks epicstruggle!
 
epicstruggle said:
will be the likely assumption that Doom3 requires the latest "tech", because well, it's "an Id engine." This is very untrue.

Doom3 requires old, DX7 level tech....but requires that it be very, very fast.

By design, the best card for Doom3, is not necessarily the best for DX9 games.
.....
I wouldn't worry about it. Half-Life is going to be a staple in benchmarks much like Doom3, and Half-Life is basically here now. These two benchmarks represent a nice contrast: performance in highly stressful "old architecture" apps (Doom3), and performance in new tech apps (Half-Life).

Epic, I'm not sure I can wholly agree. Your first part, about why the 5900 can be good in Doom 3 and bad at DirectX 9 games, I agree with. The thing I have an issue with is this: Half-Life 2 often uses DirectX 9 simply to pull off the same visual effect in fewer passes, or to make it look better, and that is what Doom 3 does as well. The difference is that, yes, HL2 will have more of that kind of thing, plus some effects that are exclusively DirectX 9, whereas D3 has nothing that you will miss (at least I think) if you don't have DirectX 9 level hardware. In any case, I just feel people are making it seem like there is some huge quantum leap in technology between HL2 and D3, and I don't really see it.
 
It's a pity that GLslang isn't used in Doom 3, because unless id makes another game in the next two years, I'd be scared to think of when we'll really start to see GLslang used in games; as you know, id sets the standard for OpenGL in games.
 
Well, personally I was impressed that Carmack used the NV1x level register combiners to produce the same visual quality as on an NV2x card. So you could argue that he's done a lot 'more' with 'less' than other developers.
 
I think the thing is, a GeForce 1 is all that's needed for everything D3 does. I think D3 is DX7 pushed to its limits, while HL2 is baseline DX9.
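For reference, the GeForce1-class "fixed function dot-3 shading" people keep mentioning amounts to the DOT3 texture combine: per-pixel normals stored as 8-bit colors, expanded back to signed vectors, dotted with a light vector, and clamped. A CPU sketch (illustrative only, not any driver's actual math):

```c
#include <assert.h>

/* DX7-class fixed-function DOT3 combine (CPU sketch).
 * Normal and light vectors arrive as 8-bit colors with each
 * component biased into [0,255]; the combiner expands them back
 * to [-1,1], takes the dot product, and clamps to [0,1]. */
double dot3_combine(const unsigned char n[3], const unsigned char l[3])
{
    double d = 0.0;
    for (int i = 0; i < 3; i++) {
        double a = n[i] / 255.0 * 2.0 - 1.0;   /* expand to [-1,1] */
        double b = l[i] / 255.0 * 2.0 - 1.0;
        d += a * b;
    }
    if (d < 0.0) d = 0.0;                      /* clamp: no negative light */
    if (d > 1.0) d = 1.0;
    return d;
}
```

A flat normal-map texel (128,128,255) lit head-on gives full brightness; lit from behind it clamps to black. That single fixed-function operation is the core of D3-style per-pixel lighting, which is why DX7-class hardware suffices.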
 
I think the thing is, a GeForce 1 is all that's needed for everything D3 does. I think D3 is DX7 pushed to its limits, while HL2 is baseline DX9.

Except for the fact that the latter is completely false, what you say is true. ;)
 
When were register combiners introduced? And is there really a DX7 equivalent to that feature? I had always been under the impression that they were not and were more of a stepping stone that would eventually lead to PS1.x. . .
 
Ostsol said:
When were register combiners introduced? And is there really a DX7 equivalent to that feature? I had always been under the impression that they were not and were more of a stepping stone that would eventually lead to PS1.x. . .

That's what I always thought, except that AFAIK no one has ever used them, except for Carmack on Doom 3 :p

edit: except for maybe TechDemo-Games like Evola :LOL:
 
Well, this whole "DoomIII vs HL2" reminds me of Dilbert: "Which one's better - Paris or Rome?" :) The point is that they are different. You can't easily compare them and say "this one roxorz, this one suxorz"...
 
zurich said:
Ostsol said:
When were register combiners introduced? And is there really a DX7 equivalent to that feature? I had always been under the impression that they were not and were more of a stepping stone that would eventually lead to PS1.x. . .

That's what I always thought, except that AFAIK no one has ever used them, except for Carmack on Doom 3 :p

edit: except for maybe TechDemo-Games like Evola :LOL:
You mean Evolva? I kinda liked that game. . . :oops:
 
Doom 3 was built on a DX8 level featureset. Its use of the newer shader functionality in OpenGL appears only to be meant to reduce the number of rendering passes that were needed for older technologies.

What is a DX8 level feature set (say, vs. DX9)? I mean, basically the 'featuresets' are that you have a new data type (floats), explicit support for displacement mapping, line AA, scissor tests (in the pixel stage), and sphere map generation. (I won't go into the minor stuff like vertex streams, COM object binding, etc.) I hesitate to call the extended shader models "features", since I consider those retrofits of the existing concept to produce a more flexible, orthogonal programming model. The fundamental aspect from a hardware viewpoint is that the new programming models allow hardware to be designed that collapses more multi-pass functions into a single pass...

Well, personally I was impressed that Carmack used the NV1x level register combiners to produce the same visual quality as on a NV2x card.

Well, register combiners were/are a pretty powerful feature considering how long they've been around. The difference between them and, say, something like PS 1.1 (although that's strictly regarding color ops) is the move from a parameter-based model to a programmable model. I find it funny that everybody started creating a hullabaloo over "per-pixel this and that" when nVidia's initial per-pixel demos were on the GeForce256.

When were register combiners introduced? And is there really a DX7 equivalent to that feature? I had always been under the impression that they were not and were more of a stepping stone that would eventually lead to PS1.x. . .

Register combiners originally showed up on the GeForce256, IIRC (although they were an extensive evolution of the TNT's texture combiners), and were extended further in the NV2x architecture. As far as DX7 equivalence goes, I don't believe there was any (IIRC they're only available via NV's register combiner extensions), although I guess if there were a caps set for a bunch of texture combiner stages with an extended combiner-op featureset (I don't know specifically what in DX), you might be able to come up with a similar featureset...
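For the curious, a general combiner stage can be modeled, very roughly and only per scalar channel (the real hardware also has dot-product and mux modes, and this is a sketch rather than the extension's actual semantics), as an A*B + C*D computation with a per-input mapping such as "expand normal":

```c
#include <assert.h>

/* One general register-combiner stage, scalar CPU sketch. */

double map_expand_normal(double x)     { return 2.0 * x - 1.0; } /* [0,1] -> [-1,1] */
double map_unsigned_identity(double x) { return x; }             /* pass through    */

/* Each input goes through its mapping, then the stage computes
 * A*B + C*D; the result is clamped to the combiner's signed range. */
double combiner_stage(double a, double b, double c, double d,
                      double (*map_a)(double), double (*map_b)(double),
                      double (*map_c)(double), double (*map_d)(double))
{
    double out = map_a(a) * map_b(b) + map_c(c) * map_d(d);
    if (out < -1.0) out = -1.0;
    if (out >  1.0) out =  1.0;
    return out;
}
```

This is the "parameter-based" flavor mentioned above: you don't write a program, you wire fixed slots (inputs, mappings, the AB+CD structure) per stage. With the expand mapping applied to two texture inputs, one stage already gives you one channel of a bump-map dot product.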

That's what I always thought, except that AFAIK no one has ever used them, except for Carmack on Doom 3

There are tons of demos floating around on the web using them, though...
 