If Doom 3 had been written in D3D....

Scali said:
I happen to have an XP1800+, and an R3x0 card. The thing is, the framerate is always relatively low, regardless of whether I use 640x480 with low detail or 1024x768 with high detail.

Well obviously framerate will be "relatively low", because the game is essentially capped at 60fps.

The game seems to be aimed completely at PCs with high-end CPUs and low-end sound and graphics hardware.

This comment doesn't make any sense, considering what [H]OCP said about the game being relatively forgiving to those who do not have the latest and greatest CPUs. But then again, considering the completely negative slant you have demonstrated against OGL in this thread, it's hardly surprising coming from you.
 
Hyp-X said:
What elements do you define as key parts of the gaming experience? I'm curious, as everyone refers to Doom3 as if Carmack did it alone... I wonder how you would rate your gaming experience if you subtracted everything that is not done by him.

Obviously he did not "do it alone", and no one in their right mind would suggest as much. But at the same time, he was almost surely one of the driving forces behind the game. There is certainly no reason to call him a "moron", unless one has an axe to grind.

So saying that "OpenGL is superior to DirectX because Doom3 is an enjoyable game" is somewhat stupid.

This is a nonsensical comment. Who said anything about OGL being superior to D3D because Doom3 is an enjoyable game?

Of course, you don't seem biased at all, do you...

Why would I be biased in a discussion about OGL vs D3D? I don't favor one over the other, and am not involved in the industry. But referring to John Carmack as a "moron" for coding in OpenGL is juvenile and childish at best.
 
Scali, as I explained on page 2, the gl crossbar allows your app to run faster because of fewer rendering passes, as you can fit more math into the multitexturing scheme. You don't end up burning the second stage unit by loading it with the same texture as in the 1st unit.

Okay... so you can re-use the texture register from an earlier stage... How often is that useful, and how often does it actually reduce stages?
And how many non-shader cards actually support this? (For shader cards it's a non-issue of course, because they can do this in the shader itself.)
That is the point. In theory it may gain something in some cases... But how practical is it? I don't think it's such a big deal, as I said before.
But of course people just assume you don't understand it and start insulting. I understand how it works, I just don't think it's a big deal to have it. Most stuff will manage fine without it.

Opengl almost always exposes more functionality, either thru core or extensions or both. You have more choices under gl than you do under d3d, and you can't use d3d in tools very effectively because it's a gaming-engine api, while gl can handle engines and tools as well.

Well, let's just say that I don't consider OpenGL supporting something unless it's an extension that is implemented by at least two major vendors. Else you get the Glide-effect. I won't support something that only runs on one vendor's hardware.
And why exactly can't D3D be used in tools effectively? What kind of tools are we talking about anyway, and what makes OpenGL better there?
 
Scali, if you were experienced in gl as many of us are you wouldn't have written what you did about gl and carmack. I don't know why you like to argue so much but it's evident by the lack of response from competent members that they're done with you. I think about the time you started calling carmack a moron things turned for the worse. This is my last transmission...
 
Hyp-X said:
It goes into the 10-20 fps range at quite a few places, and stalls for as long as half a second when I'm hit. That's on a XP2000+ and Radeon 9800Pro playing in 800x600. (I have to admit the game suggested playing in 640x480 on that config - so it seems I put too much stress onto the hardware.)
Long stalls in a game, in my experience, are usually a result of running out of texture memory. Do you have the 128MB or the 256MB Radeon 9800 Pro? If you have the 256MB one, then it can't be that. I would instead suspect it is due to not enough RAM. How much RAM do you have?
 
Scali, please ... I hope you still remember this exchange:

I said: "People writing code that must support cards below the NV2x class care. Most of these cards are much more capable than you might know, if all you ever used to program them was DX7."

You said: "Is this where you are going to lecture about vendor-specific register-combiner extensions?"

No. This is when you get a lecture on this and its grandpa.

You said: "I know about the register combiner thing, but there is quite little that it can do that DX7+ FF can't do (which actually works on more than one card at a time anyway), if you ask me (okay crossbar, but how useful is that anyway?)."

This is not register combiners functionality (RCs are NVIDIA proprietary). This is a cross-vendor fixed function multitexture combiner mechanism, supported by everyone and their dog, including even Matrox.

Oops. Now I've told you :?

Scali said:
That's rich, coming from someone who STILL hasn't explained what's so great about the crossbar extension.
Crossbar was briefly explained already. It allows you to reference any texture sampler from any texture environment (which is the equivalent of a "texture combine stage" in Direct3D). E.g. you can bind three textures and do something like this pseudo-code:
Tex env0:
out.rgb=TEXTURE0.rgb*TEXTURE1.rgb
out.alpha=TEXTURE2.alpha*TEXTURE2.alpha

Tex env1:
out.rgb=previous.rgb+previous.alpha
out.alpha=TEXTURE0.alpha

etc.
Just a simple example. This removes the need to manage texture binding points throughout a renderer. E.g. you can always bind a diffuse texture to "unit" 0, always bind a lightmap to "unit" 1, always bind the gloss map to "unit" 2, and always bind a detail map to "unit" 3. You can make this a constant convention, independent of the order your individual combiner setups reference the textures (if at all). And it allows you to reference any given texture sample multiple times.
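
In ARB-speak, a sketch of the two envs above (assuming ARB_texture_env_combine plus ARB_texture_env_crossbar, with the multitexture entry points already obtained; error checking omitted):

/* Env 0: out.rgb   = TEXTURE0.rgb * TEXTURE1.rgb
          out.alpha = TEXTURE2.alpha * TEXTURE2.alpha */
glActiveTextureARB(GL_TEXTURE0_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_MODULATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_TEXTURE0_ARB);   /* crossbar source */
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB_ARB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_TEXTURE1_ARB);   /* crossbar source */
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB_ARB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA_ARB, GL_MODULATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_ALPHA_ARB, GL_TEXTURE2_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA_ARB, GL_SRC_ALPHA);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_ALPHA_ARB, GL_TEXTURE2_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_ALPHA_ARB, GL_SRC_ALPHA);

/* Env 1: out.rgb   = previous.rgb + previous.alpha
          out.alpha = TEXTURE0.alpha -- only possible because of crossbar */
glActiveTextureARB(GL_TEXTURE1_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_ADD);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_PREVIOUS_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB_ARB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_PREVIOUS_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB_ARB, GL_SRC_ALPHA);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA_ARB, GL_REPLACE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_ALPHA_ARB, GL_TEXTURE0_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA_ARB, GL_SRC_ALPHA);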

Scali said:
It's more like you're not interested to explain why OpenGL is so great. Now, I could take a guess at your motives, like you've taken a lot of guesses at my expense... I say you either don't know why, or you actually know it's not, but cover up with personal insults instead.
I didn't make any claims of OpenGL greatness in this thread. And I don't think I have insulted you yet. I didn't call anyone a zealot or some such. As for motives, why would you care? I don't care about your motives either. All I'm saying is that you are in no position to judge OpenGL development. You don't even know the basics. I don't need to question your motives for this assertion.

Scali said:
Seems like you're the one making a fool of yourself at the moment.
Funny how people get so worked up about a silly thing like an API.
If you're indifferent about API choices, why are you calling OpenGL users morons? :rolleyes:
Anyway, if you hadn't made your blatantly wise remarks about things you don't understand, there would have been no need for rebuttal.

I personally can't stand it when people argue about and judge things they really have no clue about. Information propagates, but at the same time, misinformation propagates just as well. There will always be someone who actually believes it. So much for my motivation. Oops again.
 
Well obviously framerate will be "relatively low", because the game is essentially capped at 60fps.

If it got 60 fps, I wouldn't be calling it "relatively low".

This comment doesn't make any sense, considering what [H]OCP said about the game being relatively forgiving to those who do not have the latest and greatest CPUs.

Well, personally I still consider an XP1800+ 'high-end' for gaming. I did not expect a game like Doom3 to be CPU-limited while other games with simpler graphics aren't. Take Max Payne 2, for example: it does the 3D sound thing, it does the full physics thing (much more interactivity than Doom3, I might add), yet an XP1800+ blasts through it with ease. Same with Halo. I would have expected Doom3 to be fillrate-limited because of the stencil shadows, but apparently not.

To me it just doesn't seem right that an XP1800+ with a 9700 would be less playable than a P4 2.4 with an 8500, yet this is what is happening.
 
Scali said:
And how many non-shader cards actually support this? (For shader cards it's a non-issue of course, because they can do this in the shader itself.)
These plus all NVIDIA chips since the Riva TNT (requires no changes in code that uses the functionality, just a change in the detection scheme; has already been discussed here).
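
The detection change amounts to an extension-string check; a sketch (extension names per the specs — ARB_texture_env_crossbar is the cross-vendor one, while NV_texture_env_combine4 exposes the equivalent "source any unit" behaviour on NVIDIA chips back to the Riva TNT):

#include <cstring>
#include <GL/gl.h>

static bool hasExtension(const char *name)
{
    const char *all = reinterpret_cast<const char *>(glGetString(GL_EXTENSIONS));
    /* substring match is good enough for a sketch; real code should
       match complete, space-delimited extension names */
    return all && std::strstr(all, name) != 0;
}

bool crossbarPathAvailable()
{
    return hasExtension("GL_ARB_texture_env_crossbar")
        || hasExtension("GL_NV_texture_env_combine4");
}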
 
This is not register combiners functionality (RCs are NVIDIA proprietary). This is a cross-vendor fixed function multitexture combiner mechanism, supported by everyone and their dog, including even Matrox.

Excuse me, but the things you store texture samples, current results, blend factors etc. in are called registers in my book. So you're still combining registers.
But let's discuss nomenclature instead of useful stuff. You seem to want to move the focus away from the technical side anyway.

Crossbar was briefly explained already. It allows you to reference any texture sampler from any texture environment (which is the equivalent of a "texture combine stage" in Direct3D).

I never said I didn't know what it does, you simply assumed it for some reason. I wanted to know why this was such a big deal... A bigger deal than not supporting ps1.x with a multivendor extension for example. Because that's what started the whole discussion...

All I'm saying is that you are in no position to judge OpenGL development. You don't even know the basics.

That is an insult right there. You know nothing about me, and certainly not whether or not I know the basics (which don't include crossbar anyway, last time I looked). You just keep attacking me personally because you lack the arguments to get your point across. Very bad form.

If you're indifferent about API choices, why are you calling OpenGL users morons?

Why do you care anyway?

Anyway, if you hadn't made your blatantly wise remarks about things you don't understand, there would have been no need for rebuttal.

There we go again. Keep implying that I don't understand, and it may come true!

I personally can't stand it when people argue about and judge things they really have no clue about.

And again!
Why would I even listen to someone with such bad manners? Obviously he isn't well-educated :)

I personally can't stand it when people argue about and judge things they really have no clue about. Information propagates, but at the same time, misinformation propagates just as well. There will always be someone who actually believes it. So much for my motivation.

I can sympathise with your motivation, but you really jumped the gun here. You just make horribly arrogant false assumptions about people you don't even know, and then you start insulting them from there.
I can't stand it when people argue and judge like that.
 
These plus all NVIDIA chips since the Riva TNT (requires no changes in code that uses the functionality, just a change in the detection scheme; has already been discussed here).

Okay, so that's the Radeon 7x00 series then, and the Intel Extreme Graphics. I didn't see the Matrox there.
 
Scali said:
Well, personally I still consider an XP1800+ 'high-end' for gaming.

I doubt that even a single hardware review website would agree with this statement. But of course, you play to your own tune ;)

To me it just doesn't seem right that an XP1800+ with a 9700 would be less playable than a P4 2.4 with an 8500, yet this is what is happening.

This is not an even comparison (AMD vs Intel, different processor speeds). That said, as graphics cards become faster, they tend to require faster CPUs to really push them. Take a look at FS's CPU scaling review on the X800 cards for more information.
 
Kindly ignore that previous post of mine, because it seems to me like you want to discuss things now and are looking for answers.

How often is crossbar used? Whenever your traditional non-crossbar math done thru the multitexturing units doesn't fit entirely into a single pass. Like when you only have 2 texture units, as most cards do. In doom3, crossbar is essential in the arb multitexture path, where you don't have as much math functionality as you do thru register combiners. You can stuff more math into reg.cmbs than into arb multitexture, ie. d3d multitexture. I agree with you that shaders are where the future is. I also write with glsl, so it's not like I'm worshiping the ff pipe. It's just that ff hw is so common that I don't see the point of penalizing those folks by making them buy new hw when I can do the same on old hw thru gl, while I can't as well thru the d3d api.
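
To make the "more math" point concrete, a sketch per the NV_register_combiners spec (my own example, not Doom3's actual setup): two multiplies and their sum in a single general combiner stage, where a single arb combine env only gives you one op per stage.

/* spare0 = tex0*tex1 + primaryColor*constant0, in ONE combiner stage.
   An ARB_texture_env_combine stage does at most one op (modulate, add,
   ...), so the same math would take two stages there. */
glEnable(GL_REGISTER_COMBINERS_NV);
glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 1);
/* RGB portion: A = tex0, B = tex1, C = primary color, D = constant 0 */
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV, GL_TEXTURE0_ARB, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV, GL_TEXTURE1_ARB, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_C_NV, GL_PRIMARY_COLOR_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_D_NV, GL_CONSTANT_COLOR0_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
/* discard the AB and CD products individually, keep AB + CD in spare0 */
glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB, GL_DISCARD_NV, GL_DISCARD_NV, GL_SPARE0_NV,
                   GL_NONE, GL_NONE, GL_FALSE, GL_FALSE, GL_FALSE);
/* final combiner just passes spare0 through: out = A*B + (1-A)*C + D */
glFinalCombinerInputNV(GL_VARIABLE_A_NV, GL_SPARE0_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_B_NV, GL_ZERO, GL_UNSIGNED_INVERT_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_C_NV, GL_ZERO, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_D_NV, GL_ZERO, GL_UNSIGNED_IDENTITY_NV, GL_RGB);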

I'm also leaning towards common hw functionality so I prefer arb over extensions. But sometimes those ihv features are handy, like in Doom3's case. The difference is that gl allows you access to ihv features while d3d doesn't (at least not until v2 shaders came about). The tools I had in mind were CAD and various game editors like valve's hammer, unrealed, jet3d, etc. Gl is more flexible: it allows a wide range of primitives and various ways to draw with them. D3d is more streamlined for game engines, where your data already fits vertex buffers, unlike editors, where you have data structures that you then render from. Also, CAD hw like from 3dlabs can expose lots of functionality thru gl extensions. Since I use a gaming card for my work I'm not too experienced with those CAD cards, but I do know they have features that the d3d api doesn't expose. Accumulation buffer and god knows what else is stuffed into those CAD cards. I assume those features are used, otherwise why would 3dlabs put them into their CAD hw?

For me it was the lack of crossbar, the dependence on vertex buffers and the lack of other primitives that made me switch to gl. I then found gl easier to use, I didn't have to mess with non-gfx related stuff as I did in d3d. I also found it much easier and faster to write test apps using gl than to deal with FVFs and some other nonsense.
 
This is not an even comparison (AMD vs Intel, different processor speeds).

Why not? I think we can both agree that the P4 is faster, while the 8500 is lots slower. Apparently the CPU matters more than the GPU. With the 1800+, the card cannot reach high framerates at all.

That said, as graphics cards become faster, they tend to require faster CPUs to really push them.

Yes, but 'pushing them' also is different. E.g. you push an 8500 with a framerate of 100 fps in game X, while you would push a 9700 with 200 fps in game X. Obviously the game logic would have to be processed twice as fast to get 200 fps.
This is different, however. Apparently an 1800+ doesn't have the power to push the 9700 beyond the 8500's framerates.
I agree that a slow CPU will limit the performance of a graphics card... It's just that I do not consider an 1800+ to be such a CPU, because of the code I've written myself, and the other games and demos I've run on this PC. Never before was the CPU the bottleneck.
Which is why I am of the opinion that Doom3 is unusually CPU-demanding, and that it could most probably have been solved by offloading more processing to the vertex shaders (and possibly the sound chip).
That is what I have been saying all along, is that so hard to understand? Does anyone agree? Or do people just consider Carmack to set the standards, so all other software should be measured against Doom3, not the other way around?
 
How often is crossbar used? Whenever your traditional non-crossbar math done thru the multitexturing units doesn't fit entirely into a single pass. Like when you only have 2 texture units, as most cards do.

Do they? I think most cards have 3 or 4 (Matrox G400+, Radeon 7x00, Intel Extreme), and only NV1x has 2?
Besides, how often is that? Numbers?
I cannot imagine that it is required all that much... When I still had my GF2, I wrote a Doom3-ish lighting system in D3D, and I can't say I really missed crossbar functionality. It's not like the entire lighting system fits in 1 pass anyway on such hardware. Heck, even on ps1.1 it's a tight fit if you want to have specular as well.
I think the usefulness of this particular extension is blown way out of proportion, while the lack of ps1.x support is simply ignored. I find both ridiculous.
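
For comparison, the D3D fixed-function side of that lighting system looked roughly like this (a D3D9-style sketch, not my actual code; dev is an IDirect3DDevice9*). The relevant restriction is that D3DTA_TEXTURE in stage n can only mean stage n's own texture, which is exactly what crossbar lifts:

// Stage 0 modulates its own texture with the diffuse color; stage 1 adds
// its own texture to the running result. There is no way for stage 1 to
// re-reference stage 0's texture -- the D3D equivalent of "no crossbar".
dev->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_MODULATE);
dev->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);   // stage 0's texture
dev->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);
dev->SetTextureStageState(1, D3DTSS_COLOROP,   D3DTOP_ADD);
dev->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_CURRENT);   // result of stage 0
dev->SetTextureStageState(1, D3DTSS_COLORARG2, D3DTA_TEXTURE);   // stage 1's texture only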

I'm also leaning towards common hw functionality so I prefer arb over extensions. But sometimes those ihv features are handy, like in Doom3's case.

But it is extremely slanted. Why is NV1x supported while e.g. the Radeon 7x00 or Extreme Graphics are ignored? I would much prefer a slightly less feature-rich path which runs on all hardware of the same class.
So I don't see why you support this. First you say it's nice if you can do the same on old hw... then you support something that ignores most old hardware... Basically it means that you need to get a shader card if your old hardware is the 'wrong' brand. And I don't believe in the 'wrong brand' thing.

Also, CAD hw like from 3dlabs can expose lots of functionality thru gl extensions.

Which is nice if your customers all use 3dlabs... The CAD software I developed for had to run on G400s too, for example. Not all CAD users have high-end workstations... they are not required anyway. That is probably a large misconception... I think most CAD users actually don't have a (recent) 3dlabs card at all. Perhaps only the big guys that design cars and airplanes and such. But the stuff I wrote was for hydraulic manifolds... not exactly the same demands as cars or airplanes.

I then found gl easier to use, I didn't have to mess with non-gfx related stuff as I did in d3d. I also found it much easier and faster to write test apps using gl than to deal with FVFs and some other nonsense.

Well, when I write tools, I already have a usable engine, so I just build the tools around that engine, which already takes care of all the tough stuff. After all, that is what engines do. Stuff like FVFs is just set once in a constructor which is called during an import function or such, and you're done... Besides, even a glVertex() style wrapper would only take about 5 minutes to write in D3D... I've always found that such a non-argument.
Also I don't accept dog-slow code, not in my tools either, so even in OGL I wouldn't use glVertex() everywhere if I could help it.
But really, all you need is just a decently designed set of constructors, and you can abstract all that stuff away, either OGL or D3D.
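
To illustrate the point about the wrapper, a sketch against D3D9 (class and member names are made up; error checking omitted):

// glVertex()-style wrapper over D3D9: collect vertices, then draw them
// in one DrawPrimitiveUP call as a triangle list.
#include <d3d9.h>
#include <vector>

struct Vtx { float x, y, z; D3DCOLOR color; };

class ImmediateTris {
    IDirect3DDevice9 *dev;
    std::vector<Vtx> verts;
    D3DCOLOR current;
public:
    explicit ImmediateTris(IDirect3DDevice9 *d) : dev(d), current(0xffffffff) {}
    void color(D3DCOLOR c) { current = c; }
    void vertex(float x, float y, float z)
    {
        Vtx v = { x, y, z, current };
        verts.push_back(v);
    }
    void end()  // flush everything accumulated so far
    {
        if (verts.size() < 3) return;
        dev->SetFVF(D3DFVF_XYZ | D3DFVF_DIFFUSE);
        dev->DrawPrimitiveUP(D3DPT_TRIANGLELIST,
                             static_cast<UINT>(verts.size() / 3),
                             &verts[0], sizeof(Vtx));
        verts.clear();
    }
};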

Anyway, what lack of primitives are you referring to? D3D can draw lines and triangles... convex polygons are just triangle fans of course. What else is there? Quads? Well, it's so trivial to convert quads to triangles that it never bothered me.
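
E.g., a sketch:

// Turning a quad index list into an indexed triangle list:
// each quad (a, b, c, d) becomes triangles (a, b, c) and (a, c, d).
#include <vector>

std::vector<unsigned> quadsToTris(const std::vector<unsigned> &quads)
{
    std::vector<unsigned> tris;
    tris.reserve(quads.size() / 4 * 6);
    for (size_t q = 0; q + 3 < quads.size(); q += 4) {
        unsigned a = quads[q], b = quads[q + 1], c = quads[q + 2], d = quads[q + 3];
        tris.push_back(a); tris.push_back(b); tris.push_back(c);
        tris.push_back(a); tris.push_back(c); tris.push_back(d);
    }
    return tris;
}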
 
Scali wrote:

" I think we can both agree that the P4 is faster, while the 8500 is lots slower."

I don't agree with this. The 8500 has dedicated hw for doing gfx (algebra) math that most definitely outperforms the P4. Blending, filtering, etc., for example, and who knows what else. You also have like 2 vertex shaders in the 8500 that are probably running in parallel. Plus, the P4 suffers nasty stalls in its pipelines if you get a cache miss. They're very deep, much deeper than in AMD cpus. That's why Intel needs to crank up the gigahertz to match the slower speed AMDs.

Doom3 extrudes the shadow volumes on the cpu, and maybe that's why the cpu takes a hit. Also, they have a home-brewed physics engine, and maybe it's doing things other engines aren't capable of. Same thing for AI. I do know that some games like far cry cheat in some places, which results in faster gameplay. Btw, is doom3 the only game using bumps on actors? Many folks said that doom3 is visually stunning, and I'm not surprised to see hw brought to its knees. Many games use static lightmaps, impostors and environments that allow for these tricks, while doom3, being an indoor game, might not be suitable for them. Don't deusex2 and thief3 tax the hw as much as doom3?
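
Roughly, the per-vertex extrusion work looks like this (a generic stencil-volume sketch, not id's actual code):

// Each silhouette vertex is pushed away from the light by a large
// distance. Doing this per light, per frame, for every animated mesh
// is real CPU work.
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 extrudeFromLight(const Vec3 &v, const Vec3 &light, float dist)
{
    Vec3 dir = { v.x - light.x, v.y - light.y, v.z - light.z };
    float len = std::sqrt(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
    float s = dist / len;  // scale the light-to-vertex direction to dist
    Vec3 out = { v.x + dir.x * s, v.y + dir.y * s, v.z + dir.z * s };
    return out;
}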
 
Scali said:
This is not register combiners functionality (RCs are NVIDIA proprietary). This is a cross-vendor fixed function multitexture combiner mechanism, supported by everyone and their dog, including even Matrox.

Excuse me, but the things you store texture samples, current results, blend factors etc. in are called registers in my book. So you're still combining registers.
But let's discuss nomenclature instead of useful stuff. You seem to want to move the focus away from the technical side anyway.
Whatever. Google agrees with me that the term "register combiners" is commonly used to refer to NVIDIA proprietary hardware. People who develop(ed) OpenGL stuff on the PC are especially likely to interpret the term in this way. Funny how you don't. But then, maybe you do. After all it was you who wrote "vendor-specific register-combiner extensions".
Scali said:
I never said I didn't know what it does, you simply assumed it for some reason. I wanted to know why this was such a big deal... A bigger deal than not supporting ps1.x with a multivendor extension for example. Because that's what started the whole discussion...
Now just give me a break. Reread how our own little sub-discussion started out. Full quotes are on this page. It certainly had nothing to do with crossbar.
Scali said:
All I'm saying is that you are in no position to judge OpenGL development. You don't even know the basics.
That is an insult right there. You know nothing about me, and certainly not whether or not I know the basics (which don't include crossbar anyway, last time I looked). You just keep attacking me personally because you lack the arguments to get your point across. Very bad form.
What should I argue about? That you'd need one more path for your three-path DXG renderer if you wanted to implement it on OpenGL? I already did that! The fun started when you didn't follow what I was trying to say about fixed function stuff. Maybe I should have withheld that comment. And maybe you shouldn't blame me for misunderstanding what I said.
Scali said:
If you're indifferent about API choices, why are you calling OpenGL users morons?
Why do you care anyway?
Because I don't want you to call anyone, including myself, morons collectively, without making a proper argument. Yes, I use OpenGL. Duh. Now that was to be expected.
Scali said:
Anyway, if you hadn't made your blatantly wise remarks about things you don't understand, there would have been no need for rebuttal.
There we go again. Keep implying that I don't understand, and it may come true!
You didn't. See register combiners above.
Scali said:
I personally can't stand it when people argue about and judge things they really have no clue about.
And again!
Why would I even listen to someone with such bad manners? Obviously he isn't well-educated :)
My manners are quite adaptive to circumstances.
Scali said:
I can sympathise with your motivation, but you really jumped the gun here. You just make horribly arrogant false assumptions about people you don't even know, and then you start insulting them from there.
Like all the morons who use OpenGL :LOL:
Scali said:
I can't stand it when people argue and judge like that.
It's a great relief that we finally agree :D
 
Scali said:
Apparently the CPU matters more than the GPU.

This sounds like gross oversimplification to me. See the [H]OCP CPU scaling review on Doom 3 for reference (I know you are not very fond of [H] judging from your very first post here, but it is always good to have extra data). For instance, a GeForce 6800 Non-Ultra with an Athlon 1800+ will probably provide a better gaming experience than a Radeon 8500 using an Athlon 64. Some combinations of GPU and CPU will work better than others, and it is irresponsible to attribute a greater weight or importance to one or the other with the limited sample size that you are basing your "conclusions" on.
 
I don't agree with this. The 8500 has dedicated hw for doing gfx (algebra) math that most definitely outperforms the P4.

You misread it. I meant the P4 is faster than the 1800+ while the 8500 is slower than the 9700.

They're very deep, much deeper than in AMD cpus. That's why Intel needs to crank up the gigahertz to match the slower speed AMDs.

I think it's the other way around: because Intel designed the P4's pipelines deeper, they can crank up the gigahertz. Intel can also play the 'slower speed' game though, the Pentium-M clearly proves that... And even the regular Pentium 3 was a good match for the Athlon at the same clockspeed. Intel just chose another road... which paid off nicely with SSE/SSE2 performance, for example.
 
That is an extremely bad comparison; there's almost 3 years' difference in technology. The 8500 coupled with a modern CPU would still offer a good gaming experience in Doom 3, much better than a GeForce MX 440.

Like I said before, if you already have a modern DX9 graphics card and an older platform based on DDR 266 or even SDRAM, you would be better off spending your money on a better platform.
Doom 3 is CPU-limited at the most popular resolution played today, 1024x768... this is based on many polls.
 
Whatever. Google agrees with me that the term "register combiners" is commonly used to refer to NVIDIA proprietary hardware. People who develop(ed) OpenGL stuff on the PC are especially likely to interpret the term in this way. Funny how you don't. But then, maybe you do. After all it was you who wrote "vendor-specific register-combiner extensions".

Yes, go on... Keep arguing about something completely useless!
I'm sure you can get at least 5 more posts out of this nonsense!

Now just give me a break. Reread how our own little sub-discussion started out. Full quotes are on this page. It certainly had nothing to do with crossbar.

I recall someone mentioning the crossbar... I asked what it was, and who cared when there are shaders.

The fun started when you didn't follow what I was trying to say about fixed function stuff.

I followed you perfectly; it was more the things that you DIDN'T say, like what made the crossbar so desirable, even more desirable than ps1.x shaders (since you bothered arguing about crossbar forever, while constantly ignoring the shaders). But you didn't understand that, you just assumed you were so much smarter than me, so it must have been me who could not follow you! In fact, it was the other way around. How arrogant.

Because I don't want you to call anyone, including myself, morons collectively, without making a proper argument. Yes, I use OpenGL. Duh. Now that was to be expected.

Then still I say: why do you care? Does calling you a moron make you a moron? If you think that, you already were a moron to begin with.

You didn't. See register combiners above.

I understand perfectly what register combiners are. I also understand perfectly what the non-NVIDIA combiner extensions are. Just because we do not use the exact same terminology for everything doesn't mean that either of us doesn't understand the technology behind it. I am not sure whether your assuming (and persevering in) this is arrogant or even... moronic.

Like all the morons who use OpenGL

Actually, I never said that either. I just said that Carmack is the only moron still coding OpenGL... That does not imply that all morons code OpenGL. It doesn't imply that all people not coding OpenGL are not morons, or anything. Technically it simply states that Carmack is a moron, of a set of morons who happen to code in OpenGL... That set is at least one person large.
That is all you can conclude from my words. No more, no less.
And you do not know why I called Carmack a moron either. It was not solely for his choice of OpenGL.
 