Is Doom 3's release going to change the OpenGL community?

karlotta said:
Will ATI's new OGL driver change the industry? Will it be out before Doom3, or right after, with the August driver release?
I don't think it'll be out this year at all. I've heard it was going to take a very long time. Probably next year...
 
Humus said:
This myth continues to live on, I see. There was never a clearer point at which OpenGL was simply way ahead of DirectX than around the DX9 release: I released five demos using GL_ARB_fragment_program before DX9 shipped.

GL_ARB_fragment_program was approved by ARB on September 18, 2002.
DirectX 9 was released on December 19, 2002.

The only reason people think DirectX is ahead is because it's hyped up at every new version.

While I would agree with you that DX has most often run behind the raw capability of OpenGL (for a long time DX was way behind Glide, even), I'd have to say that the DX9 release was the point where M$ pretty much caught up. In my book I prefer DX9 because it doesn't support custom extensions, which I think are what has tremendously slowed the progress of the mythical "ARB committee"...Heh...;)

Basically, if all you have to do to support new features is write extensions under the existing API version, the impetus to tie them directly into the API is largely lost, and I think that's the real trouble with Win OpenGL--it's still more of a "seat of the pants" approach (which isn't bad, and is kind of fun at times) than the more rigid, somewhat more formal approach of DX. But DX I think delivers the more consistent approach of the two which I think is a better platform for proceeding with "the future of 3d" than is OpenGL with its wild, "old-west" extensions approach. Of course, being somewhat "cross platform" (although that phrase has nowhere near the meaning it had a few years ago, I think), OpenGL as a 3d API has just about got to be different from DX in these respects.

The remarkable thing to me here is not the typical nature of OpenGL, but that M$ has done as well as it has with DX9--that's what surprised me. ARB is just slow as old man Christmas, but considering the different applications for OpenGL as opposed to DX, the glacial progress of a formal OpenGL API structure has never seemed that big a deal to me because it could always be offset somewhat or completely through extensions. I'm very ambivalent about extensions. When Carmack first championed them I supported the idea, but in succeeding years have grown increasingly less fond of the concept in a 3d API.
 
Mordenkainen said:
keegdsb said:
No, there never was a Parhelia path. Or, if he started one, it was never completed. I have emailed Carmack about Parhelia and his response did not inspire confidence.

That which cannot be named has a Parhelia path. Whether it was even functional I don't know; however, JC's June 25, 2002 .plan update says the card will run DOOM. Like I said, a lot could have changed since then. Can you tell us when you talked to JC?
April 28, 2004.
 
Humus said:
GL_ARB_fragment_program was approved by ARB on September 18, 2002.
DirectX 9 was released on December 19, 2002.

OK, it wasn't DX9 - but it's still two years after shader hardware was available.
 
DaveBaumann said:
Humus said:
GL_ARB_fragment_program was approved by ARB on September 18, 2002.
DirectX 9 was released on December 19, 2002.

OK, it wasn't DX9 - but it's still two years after shader hardware was available.
That's not the same shader hardware that's covered by ARB_fragment_program, though. ARB_fp isn't (and technically can't be) supported on "DX8" class chips, just like PS2.0 isn't supported on 'em.

It is true that there's no common API for getting at R200/NV2x fragment shading capabilities under OpenGL. You need to write separate code paths for the two families if you want anything more than basic daisy-chained combiner stuff.
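The path selection described above typically starts with a scan of the extension string. Here is a minimal sketch in C; the path names and the `pick_path` helper are made up for illustration, and `ext` stands in for what `glGetString(GL_EXTENSIONS)` would return on a live context. The token-by-token matching matters, because a bare `strstr` would also match longer extension names that share a prefix:

```c
#include <string.h>

/* Hypothetical render paths an engine of this era might pick between. */
enum path { PATH_FIXED, PATH_NV2X, PATH_R200, PATH_ARB_FP };

/* Return nonzero if `name` appears as a whole token in the
 * space-separated extension string; substring hits (e.g. finding
 * GL_ATI_fragment_shader inside a longer name) must not count. */
static int has_extension(const char *ext, const char *name)
{
    size_t len = strlen(name);
    const char *p = ext;
    while ((p = strstr(p, name)) != NULL) {
        int starts = (p == ext || p[-1] == ' ');
        int ends   = (p[len] == ' ' || p[len] == '\0');
        if (starts && ends)
            return 1;
        p += len;
    }
    return 0;
}

/* Prefer the most capable fragment path the driver exposes. */
static enum path pick_path(const char *ext)
{
    if (has_extension(ext, "GL_ARB_fragment_program"))  return PATH_ARB_FP;
    if (has_extension(ext, "GL_ATI_fragment_shader"))   return PATH_R200;
    if (has_extension(ext, "GL_NV_register_combiners")) return PATH_NV2X;
    return PATH_FIXED;
}
```

This is exactly why the engine ends up with one path per hardware family: each branch above implies a separate body of shading code.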
 
Humus said:
GL_ARB_fragment_program was approved by ARB on September 18, 2002.
DirectX 9 was released on December 19, 2002.

The only reason people think DirectX is ahead is because it's hyped up at every new version.

You seem to be ignoring the development model that DirectX follows: all developers effectively get access to the tech a year before release. I was developing HLSL PS2.0 programs in early 2002; Beta 1 was released in May 2002.

So game developers do remember OpenGL being late to the party, especially as it offered only ASM fragment shaders, whereas DX9 had new vertex shader models, pixel shader models and HLSL.
 
WaltC said:
In my book I prefer DX9 because it doesn't support custom extensions, which I think are what has tremendously slowed the progress of the mythical "ARB committee"...Heh...;)

Isn't PS1.4 a custom extension for ATI? What about DX9.0a/b/c/d/e/f/g.....? Why so many of them? It almost seems that they are trying to map each piece of hardware that comes along from NVIDIA or ATI. :rolleyes:

Also don't forget that GL is not games-only. There must be an ARB to decide what's best for everyone, and that includes CAD software as well as games.

WaltC said:
But DX I think delivers the more consistent approach of the two which I think is a better platform for proceeding with "the future of 3d" than is OpenGL with its wild, "old-west" extensions approach. Of course, being somewhat "cross platform" (although that phrase has nowhere near the meaning it had a few years ago, I think), OpenGL as a 3d API has just about got to be different from DX in these respects.

I also find the subject of extensions funny. If Doom3 demanded a version of GL, say 1.5, it would not require any extensions to run. Exactly like any other game demanding DX9.0.

What is the difference between checking caps bits and checking for extensions? There is only one real problem with extensions: sometimes each IHV would create its own extension to do the same thing. But that doesn't happen that often (nice to see NVIDIA using ATI's extensions....)
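The two discovery mechanisms really are close cousins, as a side-by-side sketch shows. The `fake_caps` struct and `PS_VERSION` macro below are stand-ins mirroring the encoding of the real `D3DCAPS9.PixelShaderVersion` field and `D3DPS_VERSION` macro; the `gl_supports` helper is illustrative, with `ext` standing in for `glGetString(GL_EXTENSIONS)`:

```c
#include <string.h>

/* Caps-bit style (Direct3D): the real D3DCAPS9 struct carries a
 * PixelShaderVersion DWORD; this struct and macro are stand-ins that
 * mirror D3DPS_VERSION's 0xFFFF:major:minor encoding. */
struct fake_caps { unsigned long PixelShaderVersion; };
#define PS_VERSION(maj, min) (0xFFFF0000ul | ((maj) << 8) | (min))

static int d3d_supports_ps20(const struct fake_caps *c)
{
    return c->PixelShaderVersion >= PS_VERSION(2, 0);
}

/* Extension-string style (OpenGL): whole-token matching, so that a
 * name is never confused with a longer extension sharing its prefix. */
static int gl_supports(const char *ext, const char *name)
{
    size_t n = strlen(name);
    const char *p = ext;
    while ((p = strstr(p, name)) != NULL) {
        if ((p == ext || p[-1] == ' ') && (p[n] == ' ' || p[n] == '\0'))
            return 1;
        p += n;
    }
    return 0;
}
```

Either way the engine asks "is this feature there?" at startup and branches; the real difference is only who defines the namespace (Microsoft for caps bits, vendors and the ARB for extension names).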

WaltC said:
ARB is just slow as old man Christmas, but considering the different applications for OpenGL as opposed to DX, the glacial progress of a formal OpenGL API structure has never seemed that big a deal to me because it could always be offset somewhat or completely through extensions. I'm very ambivalent about extensions. When Carmack first championed them I supported the idea, but in succeeding years have grown increasingly less fond of the concept in a 3d API.

Isn't DX10 supposed to have a formal extension mechanism?! If it had one, DX9 developers could probably be using NV's depth bounds by now...
If the super buffers are ratified at the end of the year, GL2.0 + super buffers will surpass DX9, I think (without the annoying SM version thingy).

And GL2.0 + super buffers + topology processor might even equal DX10, right? That doesn't seem "slow as old man Christmas" to me...
 
DaveBaumann said:
Humus said:
GL_ARB_fragment_program was approved by ARB on September 18, 2002.
DirectX 9 was released on December 19, 2002.

OK, it wasn't DX9 - but it's still two years after shader hardware was available.

I don't see any point at which shaders were behind in OpenGL.

DirectX 8.1 was released November 8, 2001.
GL_ATI_fragment_shader was released August 21, 2001.
 
Humus, you may want to look up when DX8 was around... that's when Shader 1.1 came into play.

I don't think there were many games taking advantage of shaders at the time (or the year after). At that point, both ATI and NVIDIA were touting faster hardware and new features (faster T&L, faster memory architectures, programmable T&L--which was a flop--and TruForm)...
 
Well, the same thing again, though a bit fuzzier, since the register combiners date back to the GF1/2 era and NV's extension specs don't seem to include original release dates. But DirectX 8 was released November 9, 2000. The GL_NV_register_combiners spec doesn't list a release date, but its first revision is dated April 4, 2000, so it was out at some point before that.
 
Hmmm, didn't the register combiners date back to the TNT era? Not that they were very useful till the GF era (when they became incredibly useful).

Register combiners weren't exactly what I would call programmable, though :p It was more like setting switches for which operations you want to occur. Though I guess that's all programming really is, it was just really limited with register combiners (you could do a bunch of useful stuff with them, of course; too bad no games made use of them that I can think of off the top of my head).
 
Sigma said:
Isn't PS1.4 a custom extension for ATI? What about DX9.0a/b/c/d/e/f/g.....? Why so many of them?
PS1.4 isn't exclusive to ATI; it's just that they had the first product to support it, and NVIDIA's DX9 parts support it too. That's like asking, isn't PS3.0 NVIDIA's?
 
DaveBaumann said:
Look at the number of different code paths there are for the engine: at least two "OpenGL" paths, three vendor-specific paths and one test OpenGL 2.0 path--six different engine paths have been coded. Had this been DX, probably three of those could have been removed. Had the ARB got its ass in gear, probably two or three of those paths would be needless for modern hardware under OpenGL 2.0.
I don't think that has much to do with OpenGL, but rather with the hardware features.

The game clearly targets hardware from the NV1x/R1xx to the NV4x/R3xx. I don't think a DirectX game targeting all of this hardware would be much different. Granted, in Direct3D you'd also have the option of using PS 1.1-1.3 on the R2xx instead of coding another path for 1.4, but that's about the only difference.
 
Extensions (vendor or ARB) aren't inherently "evil". They are a good thing. However, they _can_ be evil when two cover the same functionality, like ATI's and NV's vertex array extensions. That was silly. Some vendor extensions are a good thing too, like NV's depth bounds (UltraShadow) extension, which doesn't really put a huge burden on your engine with regard to API feature support.

It looks like the ARB is "rushing" OpenGL 2.0. I don't think some of the proposed changes (3dlabs) will make it, which is really pretty crap.

http://www.opengl.org/about/arb/notes/meeting_note_2004-03-02.html
 
Cryect said:
Hmmm didn't the register combiners date back to the TNT era? Not that they were very useful till the GF era (when they became incredibly useful).

Register combiners weren't exactly what I would call programmable though :p It was more like setting switches for whitch operations you want to occur though I guess thats all programming really is, just really limited with register combiners (could do though bunch of useful stuff of course with them to bad no games made use of them that I can think off the top my head).

The TNT only supported env_combine, which is similar to register combiners but not as flexible, and couldn't do DOT3, etc. The GeForce 1 introduced register combiners, the GeForce 2 doubled their speed, and the GeForce 3 increased the number of general combiners (and added the texture shader stuff).

You're right though, they weren't programmable... just configurable. AFAIK the DX texture stage state stuff does a similar thing but is a bit less flexible, so it IS used a lot. Also, register combiners make up part of the PS 1.1 functionality. ATI's part (the R200) works a bit differently.
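The configurable-not-programmable distinction is easy to see from the combiner equation itself. A general RGB combiner stage essentially evaluates a fixed A*B + C*D per channel (the real hardware also offers a mux and dot-product modes); "programming" it just selects which registers feed A, B, C and D. A toy model of that, with no GL calls, all names illustrative:

```c
/* Toy model of one NV_register_combiners "general combiner" stage:
 * the per-channel equation is fixed at A*B + C*D; configuring the
 * stage only selects which register values feed the four inputs. */
struct combiner_stage {
    float a, b, c, d;   /* stand-ins for the selected input registers */
};

static float run_stage(const struct combiner_stage *s)
{
    /* The only operation available: no branching, no arbitrary math. */
    return s->a * s->b + s->c * s->d;
}
```

Modulate is just the configuration {texture, diffuse, 0, 0}; add is {texture, 1, diffuse, 1}. Every "program" is a choice of inputs into the same fixed equation, which is why "setting switches" describes it better than programmability.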
 
AndrewM said:
It looks like the ARB is "rushing" OpenGL 2.0.
This will sound lame, I know, but 'I hope they are'..
AndrewM said:
I don't think some of the proposed changes (3dlabs) will make it, which is really pretty crap.
Would the changes being proposed by 3dlabs have an effect on pc game developers?
 
micron said:
This will sound lame, I know, but 'I hope they are'..
Would the changes being proposed by 3dlabs have an effect on pc game developers?

I mean they are rushing it at the expense of proper functionality, and are increasing the version number for no real reason except "marketing".
 
We've been waiting for OpenGL 2.0 for so long...
I wouldn't mind the ARB moving faster, but certainly not at the expense of API quality.
Given its history, there's little doubt in my mind that even if it speeds things up, the ARB won't "rush" out an OpenGL 2.0 that isn't a real step up from the current API.
I like most of 3Dlabs' proposal; let's see what's coming.

(I hope ARB_super_buffer won't be much more complex than the proposed EXT_render_target, even with a greater feature set. Simplicity is the key.)
 