Professional, and work-arounds.
Let me address a couple of items.
As far as work-arounds go -- my application already has work-arounds, but usually the work-around impairs performance. For example, my work-around is simply to stop using compressed textures on Radeon series cards. That makes the end result look right, but greatly increases (I believe by a factor of 4) the amount of texture memory required by my application. So, for texture usage anyway, a 256MB Radeon performs like a 64MB NVidia card. If _that_ doesn't inspire ATI to realize they're hurting themselves, I don't know what will.

As far as the VSYNC spinlock, we have a predictive VSYNC loop that tries to estimate how long it will be until the VSYNC event, and avoids doing the wglSwapBuffers() call until we estimate there's only about 15% of the frame timeslot remaining. That way, instead of wasting _all_ of the unallocated frame time in busy-looping, we keep our other work threads doing useful tasks longer, and only waste 15% of the time. Why 15%? Because with CPU loads and program uncertainty you don't want to overshoot your estimate (causing a lurch in the redraw), so instead you are forced to undershoot to be safe. So, our application can do 15% less work on an ATI system, resulting in 15% worse performance than on a comparable system with a properly working VSYNC -- like an NVidia or 3DLabs card.
As far as professional goes, I think you set the bar too high. A system doesn't have to be a 6-headed flight simulator installation to run professional software. A huge number of our users (mine and our listed partners) use (and are only allowed to use) completely stock-standard business PCs. If they're US Government (as many are), they often are _required_ by purchasing to buy from a pre-established vendor, often someone like Dell. So, they're out there in the Real World, trying to do Real Work, using something that's probably less powerful than your average enthusiast gaming machine.
And that's how it works. We deploy top-of-the-line systems whenever it's called for, and the customer has the choice. But we also have to make do with what the customer brings to the table.
digitalwanderer>but what you're basically saying is that your non-ATi drivers for an enthusiast level card aren't capable of doing what their professional line-up can do and you're miffed about it....right?
Actually, what I'm saying is that my ATI card, regardless of what it was sold as, doesn't live up to its specs. If you claim to implement an extension, then it had better work. If not, don't offer it as implemented. It so happens that software with large datasets (like ours) is more likely to utilize SGIS_generate_mipmap and texture compression, but nowhere in the GL specs does it say "pro-only, don't let consumers have power like this!". It's just a common extension, and everyone and their brother has offered it forever.
http://oss.sgi.com/projects/ogl-sample/registry/SGIS/generate_mipmap.txt
Likewise, nowhere does the wglSwapBuffers() documentation say "only professional cards should do this efficiently, make sure consumer-level cards just waste the CPU time".
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/opengl/ntopnglr_1ss3.asp
But it's sort of implied that a modern graphics system doesn't spin the CPU idly when doing _anything_. Heck, we had sleep-till-interrupt double-buffer-swap-sync capability on the Amiga platform in, what, 1986?
There's nothing pro-only about these features. The only reason we qualified our announcement with "pro" is because we wanted to avoid the obvious (and inapplicable) arguments like "well, Quake runs fine on it, so it must be your fault". Yes, I do feel that Google Earth (formerly known as Keyhole) is the exact epitome of the pro-level application becoming the ultimate consumer application. I know it has work-arounds of the same type as my own software. I know those poor guys over at Google Earth must tear their hair out daily trying to ensure that that program works on all the crazy computers and graphics systems around this planet. GE works in both DirectX and OpenGL, which simply doubles the number of bugs they have to cope with. I don't envy them, but they have a few billion dollars so they can throw more engineers at the problem. Wouldn't it be great if we could just get the vendors to take responsibility for their bugs? It's a dream world, I know, but how much more time would ISVs have to make great software even better if we _each_ didn't have to fight with working around the same bugs over and over?
Finally, yes, as I said, there are various work-arounds available, with differing degrees of success. The easiest work-around is not to buy the ATI hardware in the first place.
But, in the end, which do you think is a better end-result:
One company, ATI, never fixes the bugs and hundreds if not thousands of OpenGL programmers are forced to find and implement semi-effective workarounds to deal with them. Or,
One company, ATI, fixes their bug, and thousands of programs on millions of computers magically start working better.
As far as we can tell, yes, these features have always been broken. Why is it an issue now? Because as we continue to push the envelope, we need features like this to work. Back in 1999 or even 2003, maybe spin-looping during vsync could be overlooked. But today, with multiple CPUs and cores, and lots of threads running trying to get the most out of every system, it's inexcusable. During my time working with OpenGL I've had to pressure every single OGL IHV into fixing bugs: S3, NVidia, 3DLabs, Matrox, Dynamic Pictures (later bought by 3DLabs) -- and don't even get me started on that Intel Extreme thing. IHVs only fix bugs when there's motivation to -- bug fixes don't sell new cards, whiffy new features do. Developers _have_ to do this to keep them honest. As someone else noted, they fix game bugs a lot faster, because those are a mass-market, lots-of-ticked-off-users problem. Those of us with higher-end niche products must find our own ways of raising awareness.
ATI doesn't owe me anything, and far from what was implied by Demirug, we shouldn't have to somehow bribe ATI into fixing their bugs. We have gone out of our way to make ATI aware of the problem, provided screenshots, source, executables and data to replicate the problem. It's not a problem of us misusing their driver and we need their help to set ourselves straight. It's their problem and we've done everything in our power to help them be able to fix it. I've been the point of contact for 6 months of back & forth with them about getting this fixed. Our business is to write our software, not make and give away demos in order to somehow persuade IHVs into fixing bugs. As we've said before, and the whole point of this release, if they're not gonna fix them, we're simply going to stop using their hardware, and tell people why we're doing so.
We mentioned the Radeon series in the PR because that's what people are familiar with -- we didn't want this written off as "oh, it's only some exotic card that's broken". I suppose for completeness we should have said that FireGL is busted too. I know the mipmap feature is broken on it, and I believe (though it's not listed on the OSG page) that vsync is too. I don't have a FireGL to try it myself, but I've heard plenty of griping from people who did buy them.
Vysez>Why should Ati provide support for something that they never advertised in the first place?
I guess ATI never specifically listed SGIS_generate_mipmap as a feature in their magazine ads -- once you put in all the explosions and Dawn & Dusk's curvy... faces, there's not enough room. But one could argue that when they list SGIS_generate_mipmap in the extensions string returned by glGetString(GL_EXTENSIONS), they're advertising that it's there and can be expected to work. As I said before, if you can't make it work, don't have your driver offer up a busted implementation that software will be convinced to try and use.
Humus, thanks for pointing out that the sample model for the mipmap case is a broken link. It doesn't actually matter what model you use -- converting any textured object using compression will cause the problem -- that was just the model used for the screenshot. I'll revise the web page to note this.
And as far as us doing this for our own benefit -- what benefit, exactly, do you think I'm getting here? I have spent uncounted days of my time tracking down the issue, replicating it, documenting it and communicating it. When all that failed, we wrote and reviewed a factual but firm announcement and got it out there. And, for our efforts, we enjoy getting scragged on forums across the Internet. All for the sole result of... helping get ATI's drivers working better for everyone? Man, I love my job. If you think my company, or any of our companies, is reaping great rewards somehow from this, I have some dot-coms you might want to invest in. I guess the only folks who are benefiting from this are probably NVidia. If I were an NVidia plant, I guess I'd have done my job well, but I'm not that devious of a person.
To close, I will pass along a quote of a quote of a quote of an e-mail that someone sent me last night:
From: Essa Qaqish
To: Glenn Patterson; Toshiyuki Okumura; Larry McIntosh
Sent: Fri May 05 16:28:45 2006
Subject: RE: 3D Nature and Partners Declare ATI drivers Unsuitable for Professional Visualization
All,
Unfortunately, this is true and we are currently aware of it. There is an EPR and a committed fix scheduled for 8.261 (June posting).
While we can't tell from this when the fix was made, and thus if it was motivated by our wheel squeaking or not, it does confirm that it is a real issue and just maybe, after 7 months, we might see a fix for it. And we can all be happy. And all of this can fade into history. And that's all we've asked for.