Is Doom 3's release going to change the OpenGL community?

One reason why 3dfx didn't have OpenGL drivers is that it's a lot of hard work to develop an efficient, compliant set of OpenGL drivers.
There is the problem that the OpenGL core demands support for features that most consumer-level cards don't support (especially in the early 3dfx age), so workarounds need to be implemented.

Then there is the problem that the efficiency of many parts of the API (e.g. glVertex() or glDrawElements()) relies for the most part on the quality of the driver. And in many cases, only heuristics can determine when to send a batch, or when to wait and buffer more... Wrong heuristics mean suboptimal performance. T&L hardware especially is plagued by this (in Direct3D it was even worse, since the abstraction sat below the level where T&L would come into play. It simply didn't work at all until DX7 added support with vertex buffers in video memory).
We finally have an ARB extension for vertex buffers now, but well, DX7 was years ago.
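For concreteness, here is a minimal sketch (not from the original post) of geometry being uploaded once through the ARB vertex buffer extension (ARB_vertex_buffer_object) instead of being streamed every frame. The Vertex type, the vertices/indices arrays and their counts are placeholder names, and the ARB entry points are assumed to have already been imported via wglGetProcAddress after an extension check.
Code:
	/* create the buffer object and upload the vertex data once */
	GLuint vbo = 0;
	glGenBuffersARB(1, &vbo);
	glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
	glBufferDataARB(GL_ARRAY_BUFFER_ARB,
		vertex_count*sizeof(Vertex),	/* size in bytes (placeholder names) */
		vertices,			/* CPU-side vertex data              */
		GL_STATIC_DRAW_ARB);		/* hint: write once, draw many times */

	/* per frame: source the vertex array from the buffer and draw */
	glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
	glEnableClientState(GL_VERTEX_ARRAY);
	glVertexPointer(3, GL_FLOAT, sizeof(Vertex), (const GLvoid*)0);
	glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_SHORT, indices);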

As it stands, NV invested a lot of resources into OpenGL, and their drivers are quite good. ATi is still trailing behind, after all these years, although they're doing much better than in the early days. But most other IHVs still have very bad driver support. Direct3D drivers are simpler to develop, and performance is pretty much guaranteed because Direct3D works at a level where the developer is responsible for the batching and uploading of geometry (with the ARB extension this is now finally possible in OpenGL as well, without having to code for every card out there, or arbitrarily leaving out support for certain cards due to lack of resources).

OpenGL performance can often be summed up like: "If you don't have NV, you're screwed". Although ATi is quite close these days.
 
zeckensack said:
It's an exact mapping of R200's fragment processing capabilities, so for all intents and purposes, it's an ATI-extension to DX8.

"For all intents and purposes" doesn't count...;) Extensions by definition fall outside of the OpenGL API. *Anything* supported in D3d becomes a formal part of the structure of the API. It's a very different environment for developers, is my point.

Now, DX's versioning constraints mandate that any PS2.0-capable driver must also accept PS1.4 code, i.e. emulation of non-native features for the sake of backwards compatibility.
For PS1.4, that really only happened with DX9, though, which is important to note. When DX8.1 was the latest and greatest, this was a simple question of R200 vs the rest of the world. Smells like a vendor specific extension to me.

It cannot, however, be an "extension" in the same way that an OpenGL extension exists, because unlike in OpenGL, everything permissible in D3d becomes a part of the formal core API. That seems like a key differential for me, at least for game developers.

Nitpick: PS2.0b is not a shader version. There's a "PS2_0_b" HLSL compiler target profile, which you were probably referring to. "2.0" is a shader version, "2.x" is another one. There are no other shader versions starting with "2". PS2.0 is extended via a number of optional caps and, in the process, becomes PS2.x (which is not a true version but rather a category of messes; it can be anything between PS2.0 and PS3.0). If it weren't for the fact that all other IHVs except for ATI and NVIDIA have slowly withered and died, disaster would be upon us.

I think that where we might differ is that I believe that in the absence of 3d API structure, we most likely would not have seen the amount of competition over the last 8-9 years that we saw, and that today instead of two key players we'd most likely have only one, who'd be doing everything from an utterly proprietary code base...;) The purpose of the non-proprietary 3d API is to make things "easy" for developers and to create as level as possible a playing field for IHVs, so that developers and IHVs alike write their respective games and drivers to the API (instead of some proprietary scheme.)


The difference between caps and extensions is that extensions come when they are ready, and caps bits come when Microsoft releases a new runtime. And that's the whole story. Your narrow definition of "extension" is amusing, but not particularly relevant for development purposes.

Your reaction is kind of amusing, especially when I spelled out a very highly public case of developer problems with extensions, as we all saw with NWN...;) If that wasn't a case "relevant for developer purposes," what would be?

Wrong. Unless you want to poke random PCI registers and cross your fingers.

I thought I read somewhere that Splinter Cell supported "Ultra Shadow," and that Splinter Cell was a D3d game. So, I certainly could be wrong, here...;)

But anyway, thanks for again underscoring a key difference in the APIs (if it turns out you are right....;))

This is getting silly. I'll stop now.

Yes, it's so silly to imagine that there are differences between D3d and OpenGL in terms of developer concerns for 3d games. How silly of me to propose that they are anything less than absolutely identical in every respect...;)

And, of course, the fact that 3d-game developers overwhelmingly and voluntarily choose to either make use of pre-packaged D3d game engines (as from Epic), or choose to write their own D3d game engines, and choose *not* to go OpenGL--how silly of me to suggest that the game developer community as a whole might have rational reasons for this behavior...:D
 
Deathlike2 said:
The TNT had a lot of features.. but most likely unusable for that generation (like the use of 4096x4096 texture sizes)... for the Voodoo 3 at the time.. it was no big deal...
But it also had some very usable features that the competition did not, such as 32-bit color and 512x512 or 1024x1024 texture sizes.

Well, their "OpenGL drivers" in the form of MiniGL wasn't all that bad. They did end up making some (not sure if they were truly compliant.. as running them through OpenGL Extension Viewer shows nay).... however, they wrapped to Glide calls (to their Glide driver of course)... their Direct3D wasn't bad at all though.
Well, as far as I know, MiniGL was only supported through a couple of games. From what I understand, a MiniGL driver is made for a specific game, not vice versa.

3dfx could claim to some extent their drivers were as good as NVidia (personally I had no problem).. but.. there goes their legacy...
Every developer I've heard talk of 3dfx's drivers has stated that they were very poor.
 
WaltC said:
zeckensack said:
It's an exact mapping of R200's fragment processing capabilities, so for all intents and purposes, it's an ATI-extension to DX8.

"For all intents and purposes" doesn't count...;) Extensions by definition fall outside of the OpenGL API. *Anything* supported in D3d becomes a formal part of the structure of the API. It's a very different environment for developers, is my point.
La-di-da. Extension exposure and import is a formal part of the OpenGL API. glGetString(GL_EXTENSIONS) returns a list of extension identifiers. wglGetProcAddress returns entry points to functions. Both of these are clearly defined and actually work in practice.
*sigh*
I'll show you actual code. Macro definitions withheld, be imaginative.
Code:
	if (is_supported("GL_ATIX_texture_env_combine3")||
		is_supported("GL_ATI_texture_env_combine3"))
	{
		gl_caps.atix_tec3=true;
		LOG("+\n");
	}
	if (is_supported("GL_ARB_texture_env_combine")||
		(gl_caps.gl_version>=1.3f))
	{
		gl_caps.arb_tec=true;
		LOG("+\n");
	}
	if (is_supported("GL_ATI_fragment_shader"))
	{
		if (GL_FP_GRAB(glGenFragmentShadersATI)&&
			GL_FP_GRAB(glBindFragmentShaderATI)&&
			GL_FP_GRAB(glDeleteFragmentShaderATI)&&
			GL_FP_GRAB(glBeginFragmentShaderATI)&&
			GL_FP_GRAB(glEndFragmentShaderATI)&&
			GL_FP_GRAB(glPassTexCoordATI)&&
			GL_FP_GRAB(glSampleMapATI)&&
			GL_FP_GRAB(glColorFragmentOp1ATI)&&
			GL_FP_GRAB(glColorFragmentOp2ATI)&&
			GL_FP_GRAB(glColorFragmentOp3ATI)&&
			GL_FP_GRAB(glAlphaFragmentOp1ATI)&&
			GL_FP_GRAB(glAlphaFragmentOp2ATI)&&
			GL_FP_GRAB(glAlphaFragmentOp3ATI)&&
			GL_FP_GRAB(glSetFragmentShaderConstantATI))
		{
			gl_caps.ati_fragment_shader=true;
			LOG("+\n");
		}
	}
You'll note that I fill a structure named "gl_caps". Yes, I have caps bits, too, internally, but I get to decide which set of caps I care about in OpenGL. And I can decide to care about proprietary caps. I don't have to, though.
I can't decide on caring about features that aren't sanctioned by Microsoft in DXG, because they just aren't available at all.

But in the end, for both APIs, I have a bit for each feature my rendering code might use (and integers for others, such as texture unit count).
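(For the integer-valued caps, a one-liner like the following is all it takes -- a hedged sketch only: gl_caps.texture_units is a made-up GLint field, and GL_MAX_TEXTURE_UNITS_ARB comes from ARB_multitexture.)
Code:
	glGetIntegerv(GL_MAX_TEXTURE_UNITS_ARB, &gl_caps.texture_units);	/* hypothetical field */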
WaltC said:
It cannot, however, be an "extension" in the same way that an OpenGL extension exists, because unlike in OpenGL, everything permissible in D3d becomes a part of the formal core API. That seems like a key differential for me, at least for game developers.
Really?
Code:
if (gl_caps.ati_fragment_shader) do_this();
else                             do_that();

<=>

if (dx_caps.ps_version>=1.4) do_this();
else                         do_that();
I already "admitted" that there are no unified "DX8 level" shader interfaces in OpenGL. But that doesn't change a damn thing if a renderer, or one of its code paths, specifically targets R200 class hardware.
WaltC said:
I think that where we might differ is that I believe that in the absence of 3d API structure, we most likely would not have seen the amount of competition over the last 8-9 years that we saw, and that today instead of two key players we'd most likely have only one, who'd be doing everything from an utterly proprietary code base...;) The purpose of the non-proprietary 3d API is to make things "easy" for developers and to create as level as possible a playing field for IHVs, so that developers and IHVs alike write their respective games and drivers to the API (instead of some proprietary scheme.)
Define proprietary, please.
WaltC said:
Your reaction is kind of amusing, especially when I spelled out a very highly public case of developer problems with extensions, as we all saw with NWN...;) If that wasn't a case "relevant for developer purposes," what would be?
Maybe Bioware just don't have a clue?
Maybe Bioware were just too lazy to properly support R200 class chips?
The statements I've read during the early days of the NWN incident rather point to the first possibility. IIRC they blamed ATI, demanding that they should implement the NV_register_combiners and NV_texture_shader extensions. Problem is that, in contrast to DXG where some of NV2x's features cannot be used at all, NV2x can actually do things under OpenGL that R200 cannot do (and vice versa, obviously). YMMV, but in my book that's a case of "no clue".

What about the initial rthdribl release that wouldn't run on Geforce FX cards? It used DirectX Graphics all along, so how could this have happened?
Why does 3DMark2k1's "Nature" test not run on my Geforce 3?
The simple truth is that if you rely on features to be present, but they are not present, your code will not run properly. This applies to both DXG and OpenGL in the same way. You either write a fallback, or you decide to not care about that hardware and err out ... or you might opt to be a jerk and just let your renderer crash or produce garbage :p
WaltC said:
I thought I read somewhere that Splinter Cell supported "Ultra Shadow," and that Splinter Cell was a D3d game. So, I certainly could be wrong, here...;)
UltraShadow is a marketing umbrella which encompasses not only NV_depth_bounds and NV_depth_clamp, but also the NV3x's (and higher) "zixel" fill optimizations. These are, of course, automatic. You do not need to "turn them on" under either API. Maybe this is the truth about Splinter Cell's UltraShadow support? I don't know either.
WaltC said:
But anyway, thanks for again underscoring a key difference in the APIs (if it turns out you are right....;))

This is getting silly. I'll stop now.

Yes, it's so silly to imagine that there are differences between D3d and OpenGL in terms of developer concerns for 3d games. How silly of me to propose that they are anything less than absolutely identical in every respect...;)
I never said they were identical in every respect. Now that would have been silly.
WaltC said:
And, of course, the fact that 3d-game developers overwhelmingly and voluntarily choose to either make use of pre-packaged D3d game engines (as from Epic), or choose to write their own D3d game engines, and choose *not* to go OpenGL--how silly of me to suggest that the game developer community as a whole might have rational reasons for this behavior...:D
Of course there are rational reasons to prefer DXG over OpenGL. Microsoft provides a comprehensive framework and many tools around the core hardware abstraction layer. Less work for developers, because so much stuff is already provided.

It was rather your cited reasons against using OpenGL which I found silly. GL's extension and versioning mechanisms just do not work as you portrayed them. Your whole argument was based on misconceptions from the start.
 
But it also had some very usable features that the competition did not, such as 32-bit color and 512x512 or 1024x1024 texture sizes.

I'm not arguing that point... but they were much more usable in the GF era... not the TNT (perhaps the TNT2, but definitely not the TNT)

Every developer I've heard talk of 3dfx's drivers has stated that they were very poor.

Anything that's referencable?
 
Xmas said:
Outside of the DX API? How do you imagine this to work?

I related to Z that I thought I had read somewhere that Splinter Cell was a D3d game which supported Ultra-Shadow through an nVx engine-specific custom code path--but as I told him, I could surely be mistaken in that recollection; that's the only reason I added that bit in parentheses...;)

But yours and Z's comments do underscore the basic difference between the APIs from a structural standpoint, that whereas OpenGL currently permits IHV extensions unsupported by the API, D3d does not.

How is that different from features in the DX spec that are only supported by one IHV? Loud PR noises? Remember TruForm, RT patches, displacement mapping?

Thanks for bringing up TruForm....;) Was it an ATi OpenGL-only feature? (I ask because I've never really used it.) Your statement implies that although truform was unsupported by D3d, it could be nonetheless supported by ATi's D3d drivers through D3d game-engine specific support--similar to original FSAA support relative to the APIs as implemented by 3dfx. This would make me think, then, that a D3d game engine could support something like nVx Ultra Shadow without its direct support in the D3d API.

Anyway, it's a bit hard to follow the general "how is OpenGL different from D3d?" questions I've been getting, since I would assume that the differences are obvious to the point of being self-explanatory. And judging by the disparity in the numbers of game developers who elect to use/develop D3d engines as opposed to OpenGL engines, I should think it would be obvious that developers plainly see a difference between the APIs, from their point of view.

By "loud PR noises" I am referring to cases similar to the launch of nV30 wherein nVidia's PR talked incessantly about fp32 being superior to fp24 under DX9, but nV30's initial drivers (when the product "shipped" to reviewers months later) included support only for fp16 sans fp32--and fp16 support was only added later (post nV30's paper launch in '02), I believe, to the DX9 spec as pp.

Practically speaking, as we all know, fp32 was unworkable for D3d game development targeting nV3x, but it was only towards the end of '03 that nVidia finally came "clean" about that, at least to some extent, and started pushing fp16 like there was no tomorrow...;) Developers always knew better, however, and didn't try fp32 for their nV3x-targeted game engines. (They didn't try for fp24 precision levels, either, but that doesn't change the fact that for R3x0 fp24 was workable *had developers chosen to support it* but that nV3x fp32 was not, as it was entirely too slow.)

OpenGL developers can do just the same. They can choose to do more.

Exactly the point. Why should they "choose to do more" work for an OpenGL engine, in terms of supporting multiple vendor-specific extensions in their engines to support the *same engine functionality* (going back to NWN's shiny water, again), when the extra work isn't necessary under D3d to obtain the same results?--Only because of the differences between the ways the APIs are structured, imo.

Why is that a problem? John Carmack apparently saw no problem in that at all.

Isn't this obvious?...;) Since he voluntarily constrains himself to the confines of OpenGL as his exclusive 3d API (unlike companies like Epic, which use both APIs--although Epic is "officially" only D3d), he is prepared to "choose to do more work" in supporting multiple OpenGL code paths and supporting multiple OpenGL vendor-specific extensions, because it is his personal election to do so. If you read between the lines in his various commentaries, however, you can see that he pines for the day when his ARB2 path will be the only coding he has to do...;) Can't really blame him for that, right?

Since most new core functionality is basically unchanged former extension functionality, driver developers have to write almost no extra code at all. All they have to do is to provide the same API entry points with two different names (and many extensions only define a few new constants, which means no additional work at all).

Heh...;) Why are you confusing functionality with the code supporting that functionality?

Hopefully, what the ogl 1.5-2.0 specs do is to make functionality addressable through the API in a single, unified way. This then shifts the burden from the game developer to the IHV to make sure that the appropriate functionality in his hardware can be called through the unified API interface that developers can use to support that functionality in their engines.

OTOH, If you take all of the vendor-specific extensions which now exist and throw them, as is, into a large messy stew, and simply call that approach "core functionality"--it doesn't seem to me you've accomplished anything...;)

I would think that moving hardware feature support into the API core is *doing away* with the current requirement that such functionality may only be supported through vendor-specific extensions. But then you've got the demon of "backwards compatibility" to consider. Old game engines are going to expect very specific things from vendor-specific extensions (not the least of which is that they be present in the IHV drivers), and chances are these expectations won't jibe with the new specs for OpenGL core API function implementation. So, for a time at least, IHV OpenGL drivers will have to support both--is the way it looks to me.

Let's go back to the NWN shiny water thing again to illustrate. Had the functionality support for "shiny water" been a part of the core OpenGL API spec at the time Bioware wrote its engine, then Bioware could have written its engine support for shiny water one time, in one way, and the hardware support would have been there for all sets of hardware truthfully compliant with that OpenGL API spec, right? But as it was, Bioware had to implement multiple vendor-specific extension support in its engine, which resulted in the fact that shiny water only worked on nV3x--obviously because by the nature of what "vendor-specific extensions" are you would not expect ATi's and nVidia's extensions to be the same.

And indeed they were not, and so shiny water worked on nVidia but not on ATi. The deficiency of support, then, resided exclusively in the Bioware engine, and not, of course, in the ATi hardware. It took Bioware a few months to add *NWN engine support* for the ATi extensions required to support shiny water, and the result was that it took a lot more work and a lot more time for Bioware to support shiny water through the mechanism of OpenGl extensions than would be true if support for the required functionality would have been inside the core API from the beginning.


The DX specs define some minimum capabilities for "compliancy" with a certain version, however these minimum caps are quite low.
You can't write a modern game engine based on these minimum caps.

I think you're being a tad rationalist here...;) By "minimum compliancy" I can only think that you mean "backwards compatibility" with earlier DX versions. For instance, I've got several Dx5/6/7/8 games that run just fine under my DX9.0b installation, under my DX9-compliant 3d hardware and drivers. If by "modern game engine" you mean a game engine requiring functionality support exclusive to DX9, then of course it's true that a DX9 game won't run if it only supports cap bits relative to DX6...;) But then, it would not be a "modern game engine" in the first place, would it, but it would be a DX6 game engine, right...? There's nothing, however, within DX9 which would prevent a developer who wanted to from writing a DX6-restricted cap bit game, is there?

Again, the difference between succeeding DX versions and OpenGL extensions is that everything stays uniform from version to version as a standard function of the core API under DX, for all IHVs, which simplifies things greatly for game developers. Under the OpenGL extensions mechanism, support for IHV hardware is exactly the opposite of uniform and predictable. And as I see it, that is the problem with an extensions mechanism in the first place, from a developer's point of view.

Extensions, otoh, lack any formal cross-IHV, trans-API-version structure.
:?:

Extensions differ by nature from one IHV to the next, even when they are written to support the same basic hardware functionality. This is not true of DX, where it's up to the IHV to support the API function (and not up to the developer to directly support the IHV's custom extension.) That's what I mean by "cross-IHV." Extensions have little or nothing to do with the API version; whereas each succeeding version of the DX API contains as a subset formal support for all previous versions of the API--which is what I mean by "trans-API version."

There's just no way to get around the fact that vendor-specific extensions for hardware function support are very different from formal API function support in the core API. I mean, it seems like night & day to me...;)


"100% compatible" does not mean it supports all features. It hardly means anything. There is no card that ever supported all features of any given DX version.

I disagree--I think it means everything...;) Of course, I'm assuming that the cap bits checked actually relate to function support in the drivers--otherwise, as you say, checking out cap bits that are non-functional would be no different than checking out extensions that don't do anything, either...;)

I do agree that thinking about it by way of "cap bits present" is a very superficial and incomplete analysis. But the only thing we can be sure of is that when the cap bits are absent--so is the functionality. DX has an expected list of cap bits, though, pertaining to each version of the API--there's no such condition relevant for OpenGl extensions, is my point, since the core API presently would be content with none.
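For illustration, a hedged sketch of what such a DX-side caps check looks like, using the DX9 interfaces; the d3d pointer (an IDirect3D9*) and the supports_ps14/max_tex_size variables are made-up names:
Code:
	D3DCAPS9 caps;
	if (SUCCEEDED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps)))
	{
		/* same idea as an extension check: query once at startup, remember the answer */
		supports_ps14 = (caps.PixelShaderVersion >= D3DPS_VERSION(1, 4));
		max_tex_size  = caps.MaxTextureWidth;	/* integer caps exist too */
	}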

Relying on anything else than just the minimum caps for a certain DX version without caps checking is a bug. It's no different than OpenGL extension checking.

The difference, of course, is that for DX it's the *API* which expects the cap bits first, then the game engine, based on how it is written. For OpenGL extensions, it is only the game engine that expects the extensions; the API doesn't enter the picture...;) We keep coming back to this core difference, don't we? Last time I checked there's a big difference between writing a game engine to an API, and writing it to support custom vendor-specific extensions.

Isn't DX10 supposed to have a formal extension mechanism?! If it had one, probably DX9 developers could be using NV's depth bounds by now...WGF is intended to stay for a long, long time. Extensions are a necessity here.

Extensions are only a "necessity" when the core API function support hardly changes over time--as has been true of OpenGL to date. However, this is somewhat circular, because it can be convincingly argued that the *reason* the OpenGL API core function support has been so slow to evolve is because of its tolerance for vendor-specific extensions. This might also be why game developers overwhelmingly choose to use the non-extensions approach of DX over the extensions approach of OpenGL.

What I meant is that *extensions* might be rational if DX10 was going to be the last DX version M$ would establish until, say, 2015, or something like that...;)



Yes?.....;) Which of these was written by nVidia and accompanied nV40 when nVidia sent it out on the review circuit earlier this year at nV40 launch, when nVidia was telling us all about the splendors of SM3.0....? More importantly, which of these demonstrate functionality (or distinct and clear advantages) impossible with SM2.0 but only possible with SM3.0? Heh...;) All I heard at that time was how it was necessary to wait on DX9.0c, and until that time, we'd have to take nVidia's word on the subject of nV4x's SM3.0 implementation.

In closing, I'd like to state that nothing I've written here represents an "attack" on the OpenGL 3d API. Not even close...;) I'm simply trying to frame an overview on just why it might be that the great majority of developers eschew OpenGL for their 3d-game engine development and choose DX instead, despite the assumed "cross-platform" advantages that OpenGL ostensibly offers them. I'm not manufacturing information when I say that developers clearly prefer DX, and that DX9 to many of them represents a kind of "watershed" in the evolution of the D3d API. I'm simply trying to understand why they choose DX over OpenGL so consistently...;)
 
Chalnoth said:
Well, I'd be more inclined to say that the 3dfx Voodoo kickstarted the 3D industry. If 3dfx had produced OpenGL drivers from the start, there may never have been a need for Glide (at the time, Direct3D was pretty bad...). But, drivers were not 3dfx's strong suit....

Heh...;) I made a bet with myself as to someone saying this at some point, and I won...;) To say:

"If 3dfx had produced OpenGL drivers from the start, there may never have been a need for Glide (at the time, Direct3D was pretty bad...)."... to me is the same as to say,

"Gee, if the Wright brothers had gone straight to jet engines and swept-back, mono-wing designs, just look at how far ahead we'd be today!"....;) It's such a senseless statement that you might as well say: "Gee, what if the Wright brothers had gone straight to anti-gravity--we'd be populating the universe by now!"

Contrary to this sentiment: "But, drivers were not 3dfx's strong suit...." the fact is that for years 3dfx fielded the only 3d hardware and drivers that 3d-game developers could support, since they were alone in the field in terms of manufacturing 3d hardware with drivers that did more than run as a literal slide show at 3-5 fps. For years 3dfx was so far ahead of nVidia, Matrox and ATi, and several other long-defunct players, in terms of 3d gaming support, that these other companies were not even considered competitive in 3d.

nVidia's first 3d hardware & driver experiment failed, remember? Indeed, D3d development began a good while after Glide, and OpenGL as a 3d-gaming API didn't begin to mature under Carmack's auspices until long after the first version of Glide had shipped. I can literally recall when you could not buy a 3d game that wasn't GLIDE...;) 3dfx was so "poor" in its driver support, in fact, that the whole of 3d-gaming tech today is based upon the principles 3dfx established when it shipped the V1 and GLIDE.

Look, it's just plain idiotic to state that 3dfx should have been looking after the interests of the entire industry and shipped OpenGL drivers instead of GLIDE--because there *was no 3d gaming industry* when 3dfx developed GLIDE. It simply did not exist, and no one, especially 3dfx, could project either its success or its eventual popularity.

So, the idea that it was strange, odd, or bizarre that 3dfx was mainly concerned with 3dfx when it shipped GLIDE and the V1, is no more "bizarre" than recognizing that nVidia obviously puts nVidia first, 3d-gaming second, and no more "odd or strange" than the fact that ATi is more interested in itself than it is in "sharing" with nVidia these days...;)

What is truly "bizarre, odd and strange," though, is sentiment to the effect the company that created "3d-gaming" as a viable commercial industry, and shaped the direction the 3d-gaming industry would follow, somehow did anything "wrong" simply by attempting to look after itself, especially since it was *all by itself* at the time ...;)

The argument as to what 3dfx "should have done" in the opinions of some is just irrelevant. 3dfx didn't start D3d, of course, and D3d today is far more popular with developers than OpenGL (for 3d gaming engines), and that surely cannot be blamed on 3dfx by any stretch of the imagination...;)
 
Glide really didn't take off, IMHO, until the Quake + miniGL driver made having a Voodoo 1 a necessity. It is inaccurate to say there was no 3D industry before 3dfx. VQuake ran at far more than 3-5 fps.

In the first year of Glide, there were few games, and 3dfx had to lobby developers big time to support it, that is, until GLQuake. Mech Warrior was about the only Glide game I had that didn't suck or look like a demo. Of course, the story changed later when they picked up momentum.

No, OpenGL would have worked fine and been better because of prior experience and pre-existing development tools and learning resources. 3dfx did Glide simply because it was easier to code it up and ship it out faster. A real OGL ICD would have meant they'd have to delay shipping the V1 for a few quarters.
 
WaltC: It seams that you are still basing your arguments on wrong ideas concerning OpenGL. But first, stop giving the NWN example. It is the only one you have, and it seams more like a case of bad coding than anything else...

It seams the only problem you have is the fact that extensions, as far as GL is concerned, don't interconnect with the API in a magical way that each vendor can respond to. That is wrong.

When you have to check for an extension in GL, and make your game dependent on it, it becomes "core" inside your game, as far as the API is concerned. For example, if you must have something, check it at start up, and if it doesn't exist, say so and get out. Apart from that, you can use it like it was in OpenGL's core all along.
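A minimal sketch of that start-up check (has_token and report_error are hypothetical helpers; has_token is assumed to scan the space-separated extension string properly rather than doing a naive substring match):
Code:
	const char *ext = (const char *)glGetString(GL_EXTENSIONS);
	if (!has_token(ext, "GL_ATI_fragment_shader"))	/* hypothetical helper */
	{
		report_error("required extension missing");	/* say so ...      */
		return false;					/* ... and get out */
	}
	/* from here on, use the extension as if it were core */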

OpenGL doesn't move slow. It just takes time to think. Things are only included in core WHEN their functionality is proven. When that happens, most of the time OpenGL ends up with a better mechanism than D3D's (example: VBOs and VBs, GLSL and HLSL). For example, why does D3D have TrueForm? Nobody uses it, I think, and the ones that do use it are with GL, I think (RTCW, CoD...). Simply because this is a feature that is better out of the core, because it really never took off. The need for DX to have everything from everyone, so that it can proclaim a unified mechanism, just makes it complicated and turns it into caps-bit hell...

TrueForm, until proven otherwise (only possible if developers start to use it), should just be an extension and never core. Just like NV's depth_bounds...

Another example is the NV30 FP textures, which do exist, just only in GL, because D3D didn't figure out a way to support the NV30 architecture. But really, it shouldn't, because it was way too limited. But at least it is there, in the form of an extension (and never in the core), in GL. But, with the 6800, NVIDIA supports everything, and so it uses ATI's extensions that ATI created for their hardware when it supported FP textures. Now there is a need to have a unified mechanism in GL to deal with FP textures and so the ARB creates it, using ATI's extension as a base.

Another good example of extensions in GL is GLSL. The first version was OK but it lacked some things. Developers argued, and they were included before GLSL was included in core. Now, the GLSL of GL2 is a better GLSL. Try and do that with HLSL...

If a developer treats extensions properly (or sometimes there is no way to escape them) they do not complicate things: it is just a couple of "if"s...
 
zeckensack said:
La-di-da. Extension exposure and import is a formal part of the OpenGL API. glGetString(GL_EXTENSIONS) returns a list of extension identifiers. wglGetProcAddress returns entry points to functions. Both of these are clearly defined and actually work in practice.
*sigh*

Yes, you get *lists* of extensions and you get *entry points*--what you do not get, however, is any extension functionality--which is accomplished by the extensions themselves, which by definition are not included in the API.

Look, I think you are shooting your own arguments in the head--because if extensions were the panacea you proclaim then there'd be no need--ever--for OpenGL 1.5/2.0, etc. Obviously, somebody, somewhere, sees a need to integrate more *function support* into the API, formally, apart from the mere function of extensions list generation, that is...;)

I'll show you actual code.

Great, but how does that code allow me to gauge the precise operation of each extension within *my* engine? More particularly, how does it enable me to understand how the different extensions IHVs create to provide similar functionality will separately and simultaneously impact the behavior of my engine? I've got to go far beyond that level in order to understand how to integrate the varying IHV-specific extensions into my engine, don't I? What you're showing me here is high-level stuff, which at best provides me with a rough theoretical approximation of what *might* happen, based upon my understanding and experience, but doesn't tell me anything at all as to the practical impact of each extension in my particular engine environment. Because the nature of extensions is as customized as it is, looking at high-level language integration of extension identifiers doesn't provide me with the kind of info I need (which comes through a lot of hands-on practical experimentation on my part, as to integrating extensions support effectively and seamlessly into an engine.)

Define proprietary, please.

Extensions are, by nature, proprietary to the IHV which creates them.

Maybe Bioware just don't have a clue?
Maybe Bioware were just too lazy to properly support R200 class chips? The statements I've read during the early days of the NWN incident rather point to the first possibility. IIRC they blamed ATI, demanding that they should implement the NV_register_combiners and NV_texture_shader extensions. Problem is that, in contrast to DXG where some of NV2x's features cannot be used at all, NV2x can actually do things under OpenGL that R200 cannot do (and vice versa, obviously). YMMV, but in my book that's a case of "no clue".

I really cannot argue, but I would add that NWN-engine development was nVidia-centric from the start (understandably so, imo, since at the time development on the NWN engine began, far earlier than R300 shipped, nVidia had far better OpenGL driver support than anybody else.)

What I recall is that Bioware said first that shiny water simply wasn't possible on R300 because it lacked the hardware; this was later amended to a complaint that it was ATi's fault since ATi wasn't using nVidia's extensions (as if that was odd)...;)...which, finally, was in turn amended to "We're working with ATi to implement shiny water support," which at last is what actually happened (and I got shiny water on my R300.)

I don't necessarily see these tactics as resulting from ignorance, but rather from the opposite--the knowledge of the work involved to correct the situation, and an unwillingness to do it. It seemed like all the initial stonewalling was just to avoid the considerable extra work--which in a sense I can't blame them for resisting. At the same time, I also feel that had core API support for the functionality required to support shiny water existed, then almost all of the extra work, if any had been required at all, would have been the responsibility of the IHV--and would not have fallen to Bioware. Whether we say they were ignorant or lazy, though, the problem was directly an OpenGL extensions problem, and it did cause them a lot of extra work.

What about the initial rthdribl release that wouldn't run on Geforce FX cards? It used DirectX Graphics all along, so how could this have happened?

nVidia's D3d drivers were buggy? Perhaps the compiler got a tad too "optimized"?...;) Bugs are as equally possible in a structured API environment as they are in an extensions-based environment. A discussion of driver bugs has nothing to do with the discussion here, imo.

Why does 3DMark2k1's "Nature" test not run on my Geforce 3?

Is this a rhetorical question, or is it because you really don't know how much your GF3 drivers suck?...;) Perhaps there's a bug in 3dMk01 somewhere? It's your GF3, what do you think?...;)

The simple truth is that if you rely on features to be present, but they are not present, your code will not run properly. This applies to both DXG and OpenGL in the same way. You either write a fallback, or you decide to not care about that hardware and err out ... or you might opt to be a jerk and just let your renderer crash or produce garbage :p

Of course you have to test everything...;) That goes to my response about the code example you provided.

But, suppose you are testing and discover the functionality in the driver isn't there, and you contact the IHV about it, and he tells you to go ahead, that by the time you are finished the functionality will be there. So you do your thing and the IHV doesn't ever do his. I think that's probably common to some degree. Especially when it concerns functionality technically required by an API version, but presently absent with so-called API-compliant drivers.

UltraShadow is a marketing umbrella which encompasses not only NV_depth_bounds and NV_depth_clamp, but also the NV3x's (and higher) "zixel" fill optimizations. These are, of course, automatic. You do not need to "turn them on" under either API. Maybe this is the truth about Splinter Cell's UltraShadow support? I don't know either.

Don't get you....You're saying "ultra shadow" is always "on" in the drivers and doesn't require engine support? All the "shadows" then for nV3x+ are "ultra"...? If that's the case I didn't know that...:D If that's the case--yea--that's some "marketing umbrella" for sure...;) Maybe next from ATi we'll see the introduction of "Penultimate Shadowing," or something, used to describe ordinary shadows. Perhaps it's things like this that prompted the phrase: "The only good marketer is a dead marketer."

I never said they were identical in every respect. Now that would have been silly.

I quite agree....;) But the majority of critical responses I've been getting in this thread seem determined to educate me to the fact that there just aren't any *real* differences, aside from syntax, in the structuring of DX and OpenGL. People keep saying that OpenGL doesn't need version improvements, since extensions cover whatever functionality improvements are needed, and are equivalent to DX version evolutions, without bothering to explain why it is that OpenGL needs to go to 1.5 & 2.0, but use extensions at the same time...;)

It seems to me, though, that as OpenGL adds more and more function support to the core, the need for extensions support will progressively fall off accordingly.

Of course there are rational reasons to prefer DXG over OpenGL. Microsoft provides a comprehensive framework and many tools around the core hardware abstraction layer. Less work for developers, because so much stuff is already provided.

It was rather your cited reasons against using OpenGL which I found silly. GL's extension and versioning mechanisms just do not work as you portrayed them. Your whole argument was based on misconceptions from the start.

Pretty much my entire commentary has been that developers prefer DX to ogl, precisely because, as you put it, there's "Less work for developers, because so much stuff is already provided."....;) Sorry you didn't get that out of my comments....;)
 
OpenGL performance can often be summed up like: "If you don't have NV, you're screwed". Although ATi is quite close these days.
It's nowhere near to the point where you can say "you're screwed" if you don't have Nvidia hardware for OpenGL. In Serious Sam: SE on my 9500 Pro I can keep a nearly constant 75fps (v-sync) at all times: 1024x768 4xAA 8xAF details maxed.
 
WaltC said:
"If 3dfx had produced OpenGL drivers from the start, there may never have been a need for Glide (at the time, Direct3D was pretty bad...)."... to me is the same as to say,

"Gee, if the Wright brothers had gone straight to jet engines and swept-back, mono-wing designs, just look at how far ahead we'd be today!"....;) It's such a senseless statement that you might as well say: "Gee, what if the Wright brothers had gone straight to anti-gravity--we'd be populating the universe by now!"
Um, no. I didn't claim that Glide was less advanced than OpenGL, or vice versa. I just claimed that Glide was not necessary at that time, as some currently claim. 3dfx had the option of supporting an API that was already in existence, or making their own API. Now, they probably couldn't have supported OpenGL fully with the original Voodoo, but they could have just asked developers to work around that (as they did, with the miniGL drivers).

What I see 3dfx doing is deciding that it was less work for them to develop their own API with which they could offload much of the optimization work onto game developers than to develop for a known API and have to do much of that optimization themselves. Though I'm sure it was a great decision for them at the time, it was doomed to fail eventually, as once an "underdog" decided to produce a low-cost video card to compete with 3dfx's, they would naturally push for support of an "open" API.

And by the way, my comment on 3dfx not having drivers as their strong suit comes from comments from developers. Every developer comment I've yet heard with regards to 3dfx and drivers has been to the tone of, "not good." The consensus seems to be that the pecking order on driver quality (over a long time: note I'm not mentioning ATI as their driver quality has changed drastically) is Matrox first and nVidia second, with every other manufacturer having generally poor support.
 
DeathKnight said:
OpenGL performance can often be summed up like: "If you don't have NV, you're screwed". Although ATi is quite close these days.
It's nowhere near to the point where you can say "you're screwed" if you don't have Nvidia hardware for OpenGL. In Serious Sam: SE on my 9500 Pro I can keep a nearly constant 75fps (v-sync) at all times: 1024x768 4xAA 8xAF details maxed.
While I'm sure it depends upon the drivers you're using, I've noticed that my Radeon 9700 Pro had typically poor performance in OpenGL, and was sometimes utterly unplayable (as it was with some recent drivers in UT2004, with the same settings with which it ran fine in Direct3D).
 
WaltC said:
Look, I think you are shooting your own arguments in the head--because if extensions were the panacea you proclaim then there'd be no need--ever--for OpenGL 1.5/2.0, etc. Obviously, somebody, somewhere, sees a need to integrate more *function support* into the API, formally, apart from the mere function of extensions list generation, that is...;)

See?! This is why I think you don't know exactly what you are saying.
The thing is nobody needs GL versions. And that is why current boards still expose EXT_texture in the extension list. Do you know the main reason for a GL2.0? Marketing...

A GL version is just a bunch of extensions. Nothing more. The only thing you know of GL1.5 is what extensions are promoted into the core... Either check GL for version X or check the list of extensions of version X: either way it is the exact same thing...
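(A hedged sketch of that equivalence in code, reusing the is_supported helper and the gl_caps fields from the earlier snippet; the GL_VERSION parsing is deliberately simplistic:)
Code:
	/* either look at the advertised version ... */
	sscanf((const char *)glGetString(GL_VERSION), "%f", &gl_caps.gl_version);
	if (gl_caps.gl_version>=1.3f) gl_caps.arb_tec=true;
	/* ... or look for the extension that was promoted into 1.3: same information */
	if (is_supported("GL_ARB_texture_env_combine")) gl_caps.arb_tec=true;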
 
Chalnoth said:
What I see 3dfx doing is deciding that it was less work for them to develop their own API with which they could offload much of the optimization work onto game developers than to develop for a known API and have to do much of that optimization themselves.

In other words, what 3dfx did was write a shell of an API that made porting already existing software rendering code to hardware as painless as possible for devs.

This is exactly why the API took off.

No one really wanted or needed a robust API to write a brand new hardware rendering path for a game that was coming out in 2 years. They needed something to get their very-soon-to-be-released game some hardware acceleration.

Though I'm sure it was a great decision for them at the time, it was doomed to fail eventually, as once an "underdog" decided to produce a low-cost video card to compete with 3dfx's, they would naturally push for support of an "open" API.

It's not the advent of an "underdog". It was the game development cycle in general. Developers started writing games with 3D acceleration in mind from the get-go, and targeting multiple platforms just makes sense.
 
Sigma said:
WaltC: It seams that you are still basing your arguments on wrong ideas concerning OpenGL. But first, stop giving the NWN example. It is the only one you have, and it seams more like a case of bad coding than anything else...

(I'm only pointing out that it's "seems", not "seams," since you've spelled it that way before, and I'm guessing that English is not primary for you, so, no biggie. Heaven forbid I'd have to write or speak anything but English--I'd be lost!)

The NWN example I like because it was a highly visible, very public example of the problems with extensions that a developer faces, and it's something most people remember. It's classic, in fact, and proves the point so well that I can understand why those who don't wish to see or understand the points I've made would find my use of it objectionable...;)

Bad coding or not, it still revolves around the fact that the IHV hardware support relative to the NWN-engine shiny water feature support was not accessible to Bioware through a formalized API structure, but only through custom IHV extensions. Remember that because the engine supported the required nVidia extensions, the shiny-water coding was fine for nVidia gpus, and worked as expected. This wasn't true for ATi, simply because the NWN engine did not also support the *different* ATi extensions required for the engine to support the same NWN engine functionality for shiny water rendering on ATi hardware.

So the coding was only "bad" for ATi--it was fine for nVidia--and OpenGL extensions support (or rather the lack of it) within the Bioware NWN engine was the cause. Overall it was bad for Bioware as they spent a lot of time and energy post shipping adding the required ATi "shiny water" extensions support to their engine. Normally and ideally this sort of thing gets done before shipping a game, and the public never learns of it. That's what makes it such a good example, as the timing of things relative to origination of the NWN engine, and the shipping of R300, served to expose it and illuminate the issue publicly.

It seams the only problem you have is the fact that extensions, as far as GL is concerned, don't interconnect with the API in a magical way that each vendor can respond to. That is wrong.

What's wrong is your understanding of what I've been trying to say clearly...;) Far from it being a case of "magic," I've simply pointed out that in such cases a game developer cannot write his engine feature support to the API, but must instead write to *multiple* extensions from *multiple* IHVs. In this case the required extensions of one IHV were ignored, where the required extensions by another IHV were supported, and that was the basic root of the problem. Again, had the game engine been written to support shiny water rendering through formal API function support (instead of through custom IHV extensions, which by definition differ from IHV to IHV) then the burden of support would have fallen to the IHVs, primarily if not exclusively, instead of primarily on Bioware as it did.

When you have to check for an extension in GL, and make your game dependent on it, it becomes "core" inside your game, as far as the API is concerned. For example, if you must have something, check it at start up, and if it doesn't exist, say so and get out. Apart from that, you can use it like it was in OpenGL's core all along.

Please, let's not confuse APIs with game engines--it's bad enough confusing API versions with extensions...;) Also, it wasn't that NWN wouldn't run on ATi cards--only that "shiny water" at first wouldn't render on them--which is very far from making your game "dependent" on an OpenGL extension. Other than the absence of shiny water support, NWN ran fine for me on my R300.

I daresay that if the whole NWN engine had been "dependent on extensions" as opposed to being dependent on the core OpenGL API function support (which was supported the same way by ATI and nVidia, and did not involve extensions), then probably NWN would not have run at all on the ATi hardware...;) But of course that was not the case, so no point in talking about it that way.

OpenGL doesn't move slow. It just takes time to think.

I wasn't aware that OpenGL was a sentient, thinking entity. Learn something new every day about OpenGL around here...;)

Things are only included in core WHEN their functionality is proven.

As opposed, I suppose, to D3d, in which only unproven, undemonstrated functionality is supported? Heh...;) Oh, boy...;)

It might interest you to know that D3d has spanned several years of development between Dx3.x and DX9.x, or is that not slow enough to suit your tastes?...:)

When that happens, most of the time OpenGL ends up with a better mechanism than D3D's (example: VBOs and VBs, GLSL and HLSL). For example, why does D3D have TrueForm?

TruForm is extra-D3d--kind of like an ATi-custom extension--sort of...;) It's not in D3d, which is why nVidia's D3d drivers have never supported it. Which of course would be a problem if it *was* in D3d, which kind of speaks to the problem here. If it was a part of D3d, then nVidia's D3d drivers would support it. But it's not, so they don't. TruForm is an ATi marketing bullet these days, basically.

I have to say, too, that all of the very premature announcements we were all hearing as to OpenGL 2.0 (later amended to 1.5, with a promise of 2.0 in the future), seemed to me nothing if not knee-jerks in response to DX9.

As far as "better" goes, I can only say, again, that the great majority of developers obviously think D3d is the better 3d-gaming API, for a variety of reasons, certainly, which must be why most of them use D3d and not OpenGL as the basis for the games they write. Again, that is not *me* saying that--that is just an easily demonstrated fact.

Another example is the NV30 FP textures, which do exist, just only in GL, because D3D didn't figure out a way to support the NV30 architecture. But really, it shouldn't, because it was way too limited. But at least it is there, in the form of an extension (and never in the core), in GL.

Sorry--that went right by me...;)


But, with the 6800, NVIDIA supports everything, and so it uses ATI's extensions that ATI created for their hardware when it supported FP textures. Now there is a need to have a unified mechanism in GL to deal with FP textures and so the ARB creates it, using ATI's extension as a base.

Stop right there--what's good for the goose is good for the gander, right? Sure, and you've just summarized the entire development paradigm behind D3d. Great standards should be adopted in the API regardless of who invents them, so that everyone can use them, free of the fear of patent-infringement lawsuits. By formalizing the API structure and getting all the IHVs to voluntarily contribute to it, we get to advance the state of the industry without a bunch of stupid lawsuits hurled back and forth, by one IHV trying to derail the other in court (much as I liked 3dfx, I loathed them for the patent lawsuit they brought against nVidia, even though I think nVidia's full of **** most of the time--because I wanted to see them compete in the marketplace and not in a courtroom.) There are many advantages to having an independently arbitrated API structure which aren't immediately obvious, and go beyond the mere technical aspects.

Another good example of extensions in GL is GLSL. The first version was OK but it lacked some things. Developers argued, and they were included before GLSL was included in core. Now, the GLSL of GL2 is a better GLSL. Try and do that with HLSL...

Already been done--it's just that the developer debates with respect to HLSL aren't nearly as public...;) Don't assume that because you don't hear about something that it never happens...;)

If a developer treats extensions properly (or sometimes there is no way to escape them) they do not complicate things: it is just a couple of "if"s...

"If" is the largest word in the dictionary, etc....;) I think the incontrovertible point with respect to this entire matter is that OpenGL is actually moving away from an extensions orientation towards a more formal structure which will integrate more and more functionality within the core API. As this happens, I would not be surprised to see overall developer interest in the API begin to pick up and accelerate, directly in proportion to the degree to which extensions reliance is reduced by the addition of functionality directly supported by the core API. As this begins to happen, I think we will start to see much better tools and other things come out of it, which I think will be better for the future of OpenGL as a 3d-gaming API. (I actually wanted to say this much earlier in this thread by somehow kept getting side tracked...;))
 
WaltC said:
Yes, you get *lists* of extensions and you get *entry points*--what you do not get, however, is any extension functionality--which is accomplished by the extensions themselves, which by definition are not included in the API.
That's why they are called extensions :p
Seriously though, PS1.4 was the example that brought this up. If you had a DX8 renderer using PS1.1 back then, and wanted to use PS1.4, you essentially had to learn an all new shader structure. So while it is true that the infrastructure to support PS1.4 was (and still is) integral to DX Graphics and looked familiar, even uses the same entry points as PS1.1 AFAIK, PS1.4 behaves a bit differently. You have to learn about the "phase" concept and a few other subtleties.

Btw, I'm a bit puzzled about what you meant by "what you do not get, however, is any extension functionality". As soon as I have detected the extension and imported the relevant entry points, I can access the functionality as I see fit. But if you meant documentation ...
ATI_fragment_shader has its own documentation. PS1.4 documentation is wrapped into the DX doc package, but you still need to read it, if you want to use the functionality. There is significant overlap in these two pieces' contents ;)

WaltC said:
Look, I think you are shooting your own arguments in the head--because if extensions were the panacea you proclaim then there'd be no need--ever--for OpenGL 1.5/2.0, etc. Obviously, somebody, somewhere, sees a need to integrate more *function support* into the API, formally, apart from the mere function of extensions list generation, that is...;)
That's not where I stand. In fact I think GL versions are necessary only for marketing reasons, if at all. I've just recently shot myself in the foot, which became possible only because a particular IHV insists on providing GL1.5 drivers for the entire product line. May I dare quote myself:
some changelog said:
Changes: version 0.80c posted Jun-25-2004 vs version 0.80b posted Jun-24-2004
  • Use EXT_blend_func_separate functionality only if the driver exposes the extension. It's a core feature in GL1.4, but using
    it on Geforce <=4 cards forces software rendering ...
    This was a regression from 0.78
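(The lesson, in sketch form: trust the extension string rather than the advertised core version before touching the feature; is_supported and the gl_caps field are the same kind of home-grown helpers as in the earlier snippet.)
Code:
	/* GL1.4 promises blend_func_separate in core, but only use it if the driver
	   also exposes the extension -- otherwise it may fall back to software */
	if (is_supported("GL_EXT_blend_func_separate"))
		gl_caps.ext_blend_func_separate=true;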
Most people will naturally use DX versions to categorize games and graphics cards.
There are millions of registered users on the Futuremark forums who cannot even type properly but will happily tell you that "ur gfx is only DX8 that why haha".

Has anyone ever tried to tell you that Doom 3 was "just a DX7 game", or something similar? Certainly happened to me, a lot. Why would anyone ever try to categorize an OpenGL renderer by means of a DirectX version number, let alone be confident enough about it to actually utter the results? Boggles the mind.

Or lo and behold:
a retail box of an FX5700 said:
Ensures the Best Performance and Application Compatibility for All DirectX9.0 and OpenGL 1.4 Applications
[sic]
Well, who gives a f***? Certainly not the people who know how OpenGL works. You do not need higher GL version numbers for anything, certainly not on Windows platforms. Instead, they increase the burden on driver developers to provide software emulation of all those non-hardware-accelerated features that went into the core, when no one wants to use software emulation in the first place. But the "market" is so used to this "bigger must be better" version number madness that the ARB is forced to follow.


WaltC said:
I'll show you actual code.
Great, but how does that code allow me to gauge the precise operation of each extension within *my* engine? More particularly, how does it enable me to understand how the different extensions IHVs create to provide similar functionality will separately and simultaneously impact the behavior of my engine?
If, after executing this block of code, which is part of a one-time post-window-creation initialization function, gl_caps.ati_fragment_shader is set to true, it means, to your code, that you may call the entry points and they will do whatever is described in the extension documentation. Same as above: if you want to use PS1.4, you first need to learn how it's supposed to work ;)

It does not mean that you are forced to use ATI_fragment_shader semantics from there on. You can toggle it on and off at any time, and it is initially off.

Interaction and priority between extensions with functionality overlap (such as ATI_fragment_shader vs the basic ARB_texture_env_combine) are clearly stated in the extension documents (yes, there are rules for writing extension documentation, including a document template).
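A crude sketch of what that looks like in practice -- gl_caps and the function names here are invented for illustration, not taken from any real engine:
Code:
/* Sketch: GL_ATI_fragment_shader is detected once, then enabled only
   around the draw calls that want its semantics. It is off by default,
   and while it is off the usual texture_env_combine setup applies. */
#include <string.h>
#include <windows.h>
#include <GL/gl.h>
#include <GL/glext.h>   /* GL_FRAGMENT_SHADER_ATI */

static struct { int ati_fragment_shader; } gl_caps;

static void detect_caps(void)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    gl_caps.ati_fragment_shader =
        ext && strstr(ext, "GL_ATI_fragment_shader") != 0;
}

static void draw_shiny_stuff(void)
{
    if (gl_caps.ati_fragment_shader)
    {
        glEnable(GL_FRAGMENT_SHADER_ATI);
        /* ...bind the fragment shader and draw... */
        glDisable(GL_FRAGMENT_SHADER_ATI);  /* back to plain texenv */
    }
    else
    {
        /* ...ARB_texture_env_combine fallback path... */
    }
}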

WaltC said:
I've got to go far beyond that level in order to understand how to integrate the varying IHV-specific extensions into my engine, don't I? What you're showing me here is high-level stuff, which at best provides me with a rough theoretical approximation of what *might* happen, based upon my understanding and experience, but doesn't tell me anything at all as to the practical impact of each extension in my particular engine environment. Because the nature of extensions is as customized as it is, looking at high-level language integration of extension identifiers doesn't provide me with the kind of info I need (which comes through a lot of hands-on practical experimentation on my part, as to integrating extensions support effectively and seamlessly into an engine.)
Yes, obviously. You can't use a feature effectively if you don't understand what it is. That's why you need the documentation, and you do get the documentation. What is the problem?

You don't run around screaming when you end up with a black screen after enabling alpha blending without taking care of your alpha values at all. You first read up on what alpha blending actually does before you blindly try it out. At least I would (but I already know what it does, and I'm sure so do you). The same goes for GL extensions. You don't go "hmm, maybe this makes my demo purty" and blindly enable something you don't know anything about.
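For the record, the "read up first" version of that alpha blending example is only a handful of lines of plain GL1.1, nothing engine-specific:
Code:
/* Sketch: enable blending only after making sure the alpha values mean
   something; otherwise the result can indeed be a blank screen. */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glColor4f(1.0f, 1.0f, 1.0f, 0.5f);   /* 50% opaque, not accidentally 0% */
/* ...draw the translucent geometry, preferably back to front... */
glDisable(GL_BLEND);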

WaltC said:
Define proprietary, please.
Extensions are, by nature, proprietary to the IHV which creates them.
And then, anything written by Microsoft may well be considered proprietary software, too. OTOH it may also be "designed by committee", because they supposedly work with all major IHVs :D

Okay, we agree on a lot of GL extensions being proprietary. But note that you can do a lot of very fancy stuff without them. Would you consider ARB extensions that didn't make it into a core version to be proprietary?

WaltC said:
<...>
It seemed like all the initial stonewalling was just to avoid the considerable extra work--which in a sense I can't blame them for resisting. At the same time, I also feel that had core API support for the functionality required to support shiny water existed, then almost all of the extra work, if any had been required at all, would have been the responsibility of the IHV--and would not have fallen to Bioware. Whether we say they were ignorant or lazy, though, the problem was directly an OpenGL extensions problem, and it did cause them a lot of extra work.
It is probably the ARB's fundamental mistake, and it will forever haunt them, that they did not agree on a cross-vendor fragment-level shading extension for the R200/NV2x class of hardware (Parhelia and Xabre, anyone?).

One of the problems, probably, was that there is no clean mapping between R200 and NV2x capabilities. As I stated above, NV2x can do things R200 cannot do and vice versa. Getting down to the lowest common denominator of the two would have meant that both IHVs significantly underexposed their hardware. It may still have been enough for Bioware's requirements. We'll never find out.

Re the "considerable extra work", I have a feeling that this is, at least in part, FUD, and not based on actual hands-on experience. My most popular private project (~120 downloads per day :D) now has direct support for six distinct classes of graphics hardware, while the amount of "redundant" per-target code is only about 15% of the whole project (115k out of 750k raw source code). Writing and validating the complete ARB_fragment_program target took about 8 hours. YMMV, but I can say with some confidence that choosing OpenGL did not hamper the progress of this particular project in any way.

WaltC said:
What about the initial rthdribl release that wouldn't run on Geforce FX cards? It used DirectX Graphics all along, so how could this have happened?
nVidia's D3d drivers were buggy? Perhaps the compiler got a tad too "optimized"?...;) Bugs are as equally possible in a structured API environment as they are in an extensions-based environment. A discussion of driver bugs has nothing to do with the discussion here, imo.
It wasn't a bug. Rthdribl asked for a floating point render target, and that wasn't supported. Newer versions of the demo can fall back to (high precision) integer render targets. This was just an example of "feature not present => go boom".
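The kind of check the newer versions presumably do boils down to something like this -- a sketch against the stock D3D9 interfaces; the particular fallback format is my guess, not necessarily what rthdribl actually picks:
Code:
// Sketch: ask D3D9 up front whether an FP16 render target exists,
// instead of assuming it and going boom on hardware that lacks it.
#include <d3d9.h>

D3DFORMAT pick_render_target_format(IDirect3D9 *d3d, UINT adapter,
                                    D3DFORMAT display_format)
{
    if (SUCCEEDED(d3d->CheckDeviceFormat(adapter, D3DDEVTYPE_HAL,
                                         display_format,
                                         D3DUSAGE_RENDERTARGET,
                                         D3DRTYPE_TEXTURE,
                                         D3DFMT_A16B16G16R16F)))
        return D3DFMT_A16B16G16R16F;   // float render target available

    // fall back to a high-precision integer format
    // (which of course deserves the same check before use)
    return D3DFMT_A16B16G16R16;
}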
WaltC said:
Why does 3DMark2k1's "Nature" test not run on my Geforce 3?
Is this a rhetorical question, or is it because you really don't know how much your GF3 drivers suck?...;) Perhaps there's a bug in 3dMk01 somewhere? It's your GF3, what do you think?...;)
Exactly my point :D

WaltC said:
Of course you have to test everything...;) That goes to my response about the code example you provided.

But, suppose you are testing and discover the functionality in the driver isn't there, and you contact the IHV about it, and he tells you to go ahead, that by the time you are finished the functionality will be there. So you do your thing and the IHV doesn't ever do his. I think that's probably common to some degree. Especially when it concerns functionality technically required by an API version, but presently absent from so-called API-compliant drivers.
Who needs GL1.5 drivers anyway ;)
Bugs happen in both models. You don't want to see what Gothic II looks like currently on a DeltaChrome S8 ;)
Report bugs as you find them. Don't rely on anything until you've actually seen it work. Should be common sense, no? :)

WaltC said:
Don't get you....You're saying "ultra shadow" is always "on" in the drivers and doesn't require engine support?
No, I was trying to say that NV3x cards produce "zixels" swiftly, all the time. This is just one part of the "UltraShadow" package; the other parts would indeed require API/application support to work.
WaltC said:
All the "shadows" then for nV3x+ are "ultra"...? If that's the case I didn't know that...:D If that's the case--yea--that's some "marketing umbrella" for sure...;) Maybe next from ATi we'll see the introduction of "Penultimate Shadowing," or something, used to describe ordinary shadows. Perhaps it's things like this that prompted the phrase: "The only good marketer is a dead marketer."
:LOL:
WaltC said:
I quite agree....;) But the majority of critical responses I've been getting in this thread seem determined to educate me to the fact that there just aren't any *real* differences, aside from syntax, in the structuring of DX and OpenGL. People keep saying that OpenGL doesn't need version improvements, since extensions cover whatever functionality improvements are needed, and are equivalent to DX version evolutions, without bothering to explain why it is that OpenGL needs to go to 1.5 & 2.0, but use extensions at the same time...;)

It seems to me, though, that as OpenGL adds more and more function support to the core, the need for extensions support will progressively fall off accordingly.
I've already hinted at this above, but there is little point in revving OpenGL for the Windows market. There's a glue layer on Win32 to bind ICDs to windows (lower case w!). This is, of course, the famous opengl32.dll, and it is provided by Microsoft. In an attempt to kill off OpenGL, MS froze development of this glue layer ages ago.
So, if you want to use GL1.5 core features on Windows, you must use wglGetProcAddress to extract the entry points, because opengl32.dll itself only exports the ancient GL1.1 set. This is the only way past the roadblock. You might as well just use the "normal" extension import song-and-dance, which involves wglGetProcAddress too, but additionally provides useful hints about feature performance (my changelog snippet be my witness).
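To illustrate: even a bone-stock GL1.5 core feature like vertex buffer objects gets imported exactly like an extension would. A sketch, with the function name mine and error handling kept minimal:
Code:
/* Sketch: opengl32.dll only exports the GL1.1 entry points, so GL1.5
   core functions such as the buffer object calls must be fetched via
   wglGetProcAddress -- the same dance as for any extension. */
#include <windows.h>
#include <GL/gl.h>
#include <GL/glext.h>

static PFNGLGENBUFFERSPROC p_glGenBuffers = 0;
static PFNGLBINDBUFFERPROC p_glBindBuffer = 0;
static PFNGLBUFFERDATAPROC p_glBufferData = 0;

static int import_gl15_buffer_objects(void)
{
    p_glGenBuffers = (PFNGLGENBUFFERSPROC)wglGetProcAddress("glGenBuffers");
    p_glBindBuffer = (PFNGLBINDBUFFERPROC)wglGetProcAddress("glBindBuffer");
    p_glBufferData = (PFNGLBUFFERDATAPROC)wglGetProcAddress("glBufferData");

    /* all three must resolve, or the GL1.5 "support" is useless */
    return p_glGenBuffers && p_glBindBuffer && p_glBufferData;
}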
Why have GL1.5 drivers then? Marketing.

WaltC said:
Pretty much my entire commentary has been that developers prefer DX to ogl, precisely because, as you put it, there's "Less work for developers, because so much stuff is already provided."....;) Sorry you didn't get that out of my comments....;)
:oops:
But isn't it fun, wasting away the first true summer evening in weeks in front of the 'puter? 8)
 
WaltC said:
I related to Z that I thought I had read somewhere that Splinter Cell was a D3d game which supported Ultra-Shadow through an nVx engine-specific custom code path
Your statement implies that although truform was unsupported by D3d[...]
TruForm is supported in D3D, just like RT patches and displacement mapping. Which was my point. Big PR, no results. D3D isn't immune to that.
This would make me think, then, that a D3d game engine could support something like nVx Ultra Shadow without its direct support in the D3d API.
No way.
--and fp16 support was only added later (post nV30's paper launch in '02), I believe, to the DX9 spec as pp.
Nope. FP16 pp was already in there before R300 was launched.
I think you're being a tad rationalist here...;) By "minimum compliancy" I can only think that you mean "backwards compatibility" with earlier DX versions.
Nope. By minimum compliancy I mean the minimum features a card has to support to be accessible via DX9 interfaces. And those requirements are extremely low. Shaders? Forget it.
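To illustrate just how low: a renderer talking to the DX9 interfaces still has to ask the caps whether even PS1.1 exists before using it. A sketch, with the function name mine:
Code:
// Sketch: DX9 interfaces happily run on shaderless hardware, so the
// caps bits decide what can actually be used.
#include <d3d9.h>

bool has_at_least_ps_1_1(IDirect3DDevice9 *device)
{
    D3DCAPS9 caps;
    if (FAILED(device->GetDeviceCaps(&caps)))
        return false;

    // PixelShaderVersion encodes the supported major.minor version
    return caps.PixelShaderVersion >= D3DPS_VERSION(1, 1);
}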
Again, the difference between succeeding DX versions and OpenGL extensions is that everything stays uniform from version to version
:LOL: "staying uniform from version to version" :LOL:
Great joke, Walt.
Extensions are only a "necessity" when the core API function support hardly changes over time
Extensions are what kept OpenGL alive in the first place, despite significant core changes. And I bet it will take less than three years before GPUs have grown out of WGF, supporting features MS didn't think of today.
Which of these was written by nVidia and accompanied nV40 when nVidia sent it out on the review circuit earlier this year at nV40 launch, when nVidia was telling us all about the splendors of SM3.0....? More importantly, which of these demonstrate functionality (or distinct and clear advantages) impossible with SM2.0 but only possible with SM3.0?
Why, do you think, did I mention those extensions? Right, because they actually represent more than SM3.0 does.


There are so many misconceptions in those quotes, I just don't know where to start... sorry, Walt, but you have no clue whatsoever what programming with DX or OpenGL is really like. When you want to discuss a topic, you really should try to understand it rather than repeat unfounded claims you might have heard somewhere.
 
Joe DeFuria said:
No one really wanted or needed a robust API to write a brand new hardware rendering path for a game that was coming out in 2 years. They needed something to get their very soon to be released game some hardware acceleration.
That is true, I guess. I had forgotten about that aspect of it. There were a number of games that were originally designed for software rendering that later only ran well in Glide (the original Unreal was perhaps the most notorious of these....).

It's not the advent of an "underdog". It was the game development cycle in general. Developers started writing games with 3D acceleration in mind from the get-go, and targeting multiple platforms just makes sense.
Well, it may be true that you can't exactly separate the two possible reasons from one another, as they happened at about the same time.

But still, I'll argue my case. Remember that when 3dfx released the Voodoo, there really was no hardware that could seriously compete with it. The first piece of hardware that made a serious attempt was nVidia's RIVA 128, but it had serious visual quality problems and thus wasn't really a contender. And yet, at the release of the RIVA 128, nVidia was touting its support of Direct3D. Even a year later, when nVidia released the TNT, it was still harping on its support of an industry standard, Direct3D (nVidia had pretty poor OpenGL support until around the release of the TNT2).

From a developer's perspective, when there were no serious competitors to 3dfx, it made lots of sense to make a Glide-only game. It was relatively painless compared to using a general API, and they still had software rendering for owners of other hardware. But once some serious competitors emerged, it started to make more sense to only write one hardware-accelerated path in a general API, regardless of how much more challenging it was to make it run well.
 