* Next Generation Cards - will they need 4.5+ GHz CPUs?

DiGuru said:
With OpenGL Slang, the code generated can be much better optimized than with Direct3D HLSL.
Well, this will only really be possible if OpenGL has an extension added to output the compiled shaders for later use, as optimized compiling can take significant time. If a game uses many different shaders, you wouldn't want the driver to have to recompile them that frequently.
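For concreteness, here's roughly what that runtime compile path looks like. This is a minimal sketch only, assuming a current GL context and the GL 2.0 entry points made available through an extension loader such as GLEW; the key point is that glCompileShader runs the driver's optimizer on every call, and there is no standard way to read the compiled result back out.

[code]
// Minimal sketch: every GLSL shader goes through the driver's compiler
// at run time. Assumes a current GL context and GL 2.0 entry points.
#include <GL/glew.h>
#include <cstdio>

GLuint CompileShader(GLenum type, const char* source)
{
    GLuint shader = glCreateShader(type);    // e.g. GL_FRAGMENT_SHADER
    glShaderSource(shader, 1, &source, NULL);
    glCompileShader(shader);                 // the driver optimizes here

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof(log), NULL, log);
        std::fprintf(stderr, "compile failed: %s\n", log);
        glDeleteShader(shader);
        return 0;
    }
    return shader; // no standard way to save this compiled form to disk
}
[/code]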
 
Scali said:
You could blame ATi, but ATi is not the only one who had problems with OpenGL drivers; it's just the only one that still has a big market share.
You could say ATI is the only one who had problems with OpenGL drivers but not with D3D drivers. The smaller companies have problems with both.

I say it is because OpenGL itself makes it too hard to design proper drivers for modern hardware.
I disagree.

NVIDIA succeeded anyway, good for them, but their hardware works fine with Direct3D as well, so I see no reason for using OpenGL in Windows.
I see reasons to favor OpenGL over D3D as well as vice versa. Depends on the requirements.

I would much prefer it if Direct3D kept developing for realtime/game use, and OpenGL went back to its original focus of professional workstation use instead of trying (and failing) to keep up with Direct3D. Then certain game developers wouldn't be tempted to use OpenGL for their games and bother me with slow and buggy software (alt-tab doesn't even work in Doom3; that should not be possible in 2004).
:LOL:
So OpenGL is responsible for slow and buggy software? Do you know how many D3D games don't allow you to alt-tab out of the game, or simply crash when you try to switch back to it? I've seen it too often...

Scali said:
Compared to the current generation, perhaps. But that generation is nearly 2 years old already. It's bad enough that OpenGL is still behind in terms of features and performance.
I disagree.

If you can say that in theory GLSL is faster, I can say that in theory, Direct3D runs on other platforms. In fact, it does so in practice as well: http://www.macdx.com
Besides, 'only' running on Windows can hardly be called a disadvantage with 95+% market share. Basically the entire target audience for games/realtime graphics runs Windows anyway.
Besides, my point was that I don't want to be bothered by rubbish OpenGL code on Windows. I don't care what other platforms use, since I can choose not to use these platforms.
95% is still less than 99%.
Besides, I neither want to be bothered by rubbish OpenGL code nor by rubbish D3D code. That's API-independent.
 
Chalnoth said:
DiGuru said:
With OpenGL Slang, the code generated can be much better optimized than with Direct3D HLSL.
Well, this will only really be possible if OpenGL has an extension added to output the compiled shaders for later use, as optimized compiling can take significant time. If a game uses many different shaders, you wouldn't want the driver to have to recompile them that frequently.

The ability to store optimized shaders for future use has a negative side as well: it would allow for hand-tuning and shipping pre-compiled shaders. And that would be counter-productive for all but the initial configurations.

A clever optimizing compiler will store the intermediate states between compiles, as the mechanics of a specific application will tend towards the same patterns. The mechanisms used will be consistent for the whole run for the majority of shaders. So the parser will tend to break each shader down into states that generally favor a small set of minimum-cost solutions, depending on the specific hardware used.

Those intermediate trees generally only have to be built, analyzed and optimized once; all later compiles for the small set of target solutions will be close to optimal without another lengthy optimization cycle. Even if one specific case is optimized badly, that will only have a bad impact in worst-case scenarios. And that could be countered by submitting that case at the start as well, so it is taken into account during the initial cycle.
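To make that concrete, here is a toy sketch of such a memoizing front-end. All names are invented for illustration; no actual driver is claimed to work this way.

[code]
// Toy sketch of the idea: keep analyzed/optimized intermediate trees keyed
// by shader source, so later compiles that hit the same patterns skip the
// lengthy optimization cycle.
#include <map>
#include <string>

struct IntermediateTree { /* parsed, analyzed, optimized form */ };

// Stands in for the expensive build/analyze/optimize pass.
static IntermediateTree BuildAndOptimize(const std::string& /*source*/)
{
    return IntermediateTree();
}

class CachingCompiler {
    std::map<std::string, IntermediateTree> cache_; // keyed by source text
public:
    const IntermediateTree& Compile(const std::string& source)
    {
        std::map<std::string, IntermediateTree>::iterator it = cache_.find(source);
        if (it == cache_.end()) // built, analyzed and optimized only once
            it = cache_.insert(std::make_pair(source, BuildAndOptimize(source))).first;
        return it->second;
    }
};
[/code]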
 
Chalnoth said:
DiGuru said:
With OpenGL Slang, the code generated can be much better optimized than with Direct3D HLSL.
Well, this will only really be possible if OpenGL has an extension added to output the compiled shaders for later use, as optimized compiling can take significant time. If a game uses many different shaders, you wouldn't want the driver to have to recompile them that frequently.
That wouldn't be The Right Thing to do, as you could only reload such a precompiled shader on the exact same hardware and the exact same driver (otherwise drivers would need to stick to one format for eternity, or contain an ever growing amount of backwards compatibility support).

I don't see runtime compilation of HLSL shaders as a big problem. The time required for compiling your average shader will IMO stay comparable to or shorter than the time required to load your average texture, and involves much the same design choices:
a) if you do it lots of times, you may want to perform the work in a "level loading" screen, so it's all resident when the action begins
b) if you need dynamic JIT stuff, you can afford to compile shaders "on the fly" much in the same way as you can afford dynamic textures: sparingly, to avoid hitches. Works best if you spread bigger chunks of work out over multiple frames (see the sketch below).
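A rough sketch of both strategies, assuming some CompileNow() wrapper around the real driver compile (the wrapper and all names here are illustrative):

[code]
// a) compile everything up front behind the loading screen;
// b) queue dynamic compiles and spend a small per-frame budget on them.
#include <queue>
#include <string>
#include <vector>

static void CompileNow(const std::string& /*source*/) { /* driver compile */ }

// a) do it all behind the "level loading" screen
void PreloadShaders(const std::vector<std::string>& sources)
{
    for (size_t i = 0; i < sources.size(); ++i)
        CompileNow(sources[i]);
}

// b) amortize dynamic compiles over multiple frames to avoid hitches
class DeferredCompiler {
    std::queue<std::string> pending_;
public:
    void Enqueue(const std::string& source) { pending_.push(source); }
    void PerFrame(int budget) // call once per frame with a small budget
    {
        while (budget-- > 0 && !pending_.empty()) {
            CompileNow(pending_.front());
            pending_.pop();
        }
    }
};
[/code]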

Btw, I agree wholeheartedly with your initial response. It's a common misinterpretation that strong GPUs are "slowed down" by weak CPUs. It's more true to say they are being "held back". You still get all the raw muscle and can invest it into high res and image beautification.

A system with a weak CPU does not get slower if you swap in a faster graphics card, if you follow my drift.

edited bits: tried harder to make sense :)
 
zeckensack said:
Chalnoth said:
DiGuru said:
With OpenGL Slang, the code generated can be much better optimized than with Direct3D HLSL.
Well, this will only really be possible if OpenGL has an extension added to output the compiled shaders for later use, as optimized compiling can take significant time. If a game uses many different shaders, you wouldn't want the driver to have to recompile them that frequently.
That wouldn't be The Right Thing to do, as you could only reload such a precompiled shader on the exact same hardware and the exact same driver
I don't see what's wrong with that.
 
You could say ATI is the only one who had problems with OpenGL drivers but not with D3D drivers. The smaller companies have problems with both.

Even ATi and NVIDIA have problems with D3D drivers (just look at the changelog of every new driver release); there are just far fewer of them than with OpenGL.
So the point is not that there are problems, the point is the number of problems.

I disagree.

Then what explanation do you have for the fact that all manufacturers have worse performance in OpenGL than in Direct3D, and more compatibility problems?

I see reasons to favor OpenGL over D3D as well as vice versa. Depends on the requirements.

Indeed, as I said: don't write Windows games in OpenGL.

So OpenGL is responsible for slow and buggy software?

Yes, as I say, in OpenGL my Radeon doesn't get anywhere near the performance level it gets in Direct3D. As for bugs like alt-tab... those happen because OpenGL is not integrated with the OS at all. The OS doesn't know that you have an exclusive fullscreen application... You just hack the desktop resolution to whatever you see fit, then draw all over it. Direct3D/DirectDraw solve this in a much nicer way. You also get problems with gamma correction and such. If OpenGL crashes, the desktop is not reset, so you're stuck with the resolution and gamma settings for the game on your desktop.
Stuff like that really annoys me.

Besides, I neither want to be bothered by rubbish OpenGL code nor by rubbish D3D code. That's API-independent.

But that was not the point I was making. Even if you program 'good' OpenGL code, you still have problems with lousy drivers or bad cooperation with the desktop. I will be very happy when OpenGL is finally abandoned completely for Windows games and realtime animations.
 
DiGuru said:
The ability to store optimized shaders for future use has a negative side as well: it would allow for hand-tuning and shipping pre-compiled shaders. And that would be counter-productive for all but the initial configurations.
1. I don't think it would be a good idea to ship such pre-compiled shaders with the game, since you'd need different shaders for every piece of hardware out there, and later drivers may improve performance.

2. IHVs could do shader replacement regardless of whether the shaders were precompiled or not.

I would expect that shader caching would be done either at install time, at first game load, at game load when a new video card is detected, or possibly at game load when a new driver is detected. There are many ways to do it without shipping precompiled shaders with the game.
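One plausible way to detect "new card or new driver", sketched below: the GL_RENDERER and GL_VERSION strings change with either, so the cache can be stamped with them. The stamp file name and layout are made up for illustration.

[code]
// Sketch: treat an on-disk shader cache as stale whenever the card or
// driver changes. Assumes a current GL context.
#include <windows.h>
#include <GL/gl.h>
#include <fstream>
#include <string>

bool CacheIsStale(const char* stampFile)
{
    std::string current =
        std::string(reinterpret_cast<const char*>(glGetString(GL_RENDERER))) +
        "|" + reinterpret_cast<const char*>(glGetString(GL_VERSION));

    std::ifstream in(stampFile);
    std::string stored;
    std::getline(in, stored);
    if (in && stored == current)
        return false;                 // same card, same driver: reuse cache

    std::ofstream out(stampFile);     // new card or driver: restamp, rebuild
    out << current << '\n';
    return true;
}
[/code]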
 
Scali said:
Even ATi and NVIDIA have problems with D3D drivers (just look at the changelog of every new driver release); there are just far fewer of them than with OpenGL.
So the point is not that there are problems, the point is the number of problems.
Just dl'd the 61.76 release notes; they show very few OpenGL issues...

Then what explanation do you have for the fact that all manufacturers have worse performance in OpenGL than in Direct3D, and more compatibility problems?
That's a fact? Then show me conclusive proof.

Indeed, as I said: don't write Windows games in OpenGL.
Rest assured your opinion doesn't influence my choice of API at all ;)

So OpenGL is responsible for slow and buggy software?

Yes, as I say, in OpenGL my Radeon doesn't get anywhere near the performance level it gets in Direct3D.
And that certainly comes from the OpenGL architecture not being performance-friendly. Uh, no, I don't think so.

As for bugs like alt-tab... those happen because OpenGL is not integrated with the OS at all.
So if they happen in D3D applications, it's also because OpenGL is not integrated with the OS?
Or is it just that some developers are too lazy to handle task switching correctly?

The OS doesn't know that you have an exclusive fullscreen application... You just hack the desktop resolution to whatever you see fit, then draw all over it. Direct3D/DirectDraw solve this in a much nicer way. You also get problems with gamma correction and such. If OpenGL crashes, the desktop is not reset, so you're stuck with the resolution and gamma settings for the game on your desktop.
Stuff like that really annoys me.
That's not pretty, agreed. Though a crash is already annoying by itself. What problems with gamma correction?
There's an easy solution to those kinds of problems: MS could properly support OpenGL.

Besides, I neither want to be bothered by rubbish OpenGL code nor by rubbish D3D code. That's API-independent.

But that was not the point I was making. Even if you program 'good' OpenGL code, you still have problems with lousy drivers or bad cooperation with the desktop. I will be very happy when OpenGL is finally abandoned completely for Windows games and realtime animations.
Wow, and D3D won't give you problems with lousy drivers?
I would be very happy when MS finally decides to properly integrate OpenGL into Windows...
 
Xmas said:
zeckensack said:
Chalnoth said:
DiGuru said:
With OpenGL Slang, the code generated can be much better optimized than with Direct3D HLSL.
Well, this will only really be possible if OpenGL has an extension added to output the compiled shaders for later use, as optimized compiling can take significant time. If a game uses many different shaders, you wouldn't want the driver to have to recompile them that frequently.
That wouldn't be The Right Thing to do, as you could only reload such a precompiled shader on the exact same hardware and the exact same driver
I don't see what's wrong with that.
It would marginalize any perceived benefits of the approach.

You want something like an on-disk semi-persistent shader cache that gets "flushed" at driver updates/card swaps. Correct?

Will the execution time savings of this idea justify the implementation effort? I'm not so sure. Starting a new game in Doom 3 takes me roughly half a minute, I guess (1 gig of memory, XP2400+; yes, reloading an already resident level is much faster). This just can't be shader compilation overhead, as there are only about a dozen shaders in the whole game ...

I'd conclude that so much time is spent downloading textures and shuffling resources around that IMO one or two orders of magnitude more shaders could hardly be felt in terms of level load times.

Btw it would be very interesting in this context if someone could offer some "typical" shader compilation times on various drivers. Please :)
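Something like this would do for a rough measurement, assuming a current context and GL 2.0 entry points via an extension loader; note that some drivers may defer the real work to glLinkProgram, so a thorough test should time that as well.

[code]
// Hedged sketch: time one driver-side GLSL compile, in milliseconds.
#include <GL/glew.h>
#include <windows.h>

double TimeCompileMs(const char* source)
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);

    QueryPerformanceCounter(&t0);
    GLuint s = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(s, 1, &source, NULL);
    glCompileShader(s);
    GLint ok;
    glGetShaderiv(s, GL_COMPILE_STATUS, &ok); // sync point: query the result
    QueryPerformanceCounter(&t1);

    glDeleteShader(s);
    return 1000.0 * double(t1.QuadPart - t0.QuadPart) / double(freq.QuadPart);
}
[/code]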
 
Which I'm sure they won't. Microsoft has no intention of supporting OpenGL as well as it could be supported. Hell, they still only support version 1.1, meaning that developers have to use extensions for what is now basic functionality.

Microsoft has a vested interest in making Direct3D ubiquitous: it reduces the viability of multi-platform games. As long as games require Windows to run, other operating systems don't have much chance of making their way in the desktop world.

For that reason alone I want more developers to make use of OpenGL.
 
Scali said:
Yes, as I say, in OpenGL my Radeon doesn't get anywhere near the performance level it gets in Direct3D.
You mean it's slower in Doom 3 than in Battle of Proxycon or what?
You don't write OpenGL code. Is that correct? So how do you judge?
Scali said:
As for bugs like alt-tab... those happen because OpenGL is not integrated with the OS at all. The OS doesn't know that you have an exclusive fullscreen application... You just hack the desktop resolution to whatever you see fit, then draw all over it. Direct3D/DirectDraw solve this in a much nicer way. You also get problems with gamma correction and such. If OpenGL crashes, the desktop is not reset, so you're stuck with the resolution and gamma settings for the game on your desktop.
Now that's the most stupid point you could have made. OpenGL contexts are bound to a window. You can't have a (visible) OpenGL context without a window (even if it's the "desktop" window, but that's extremely bad practice, of course). There's your Windows integration!
Fullscreen OpenGL is done by switching resolution (ChangeDisplaySettings - GDI!), opening a WS_POPUP window of sorts that covers the whole screen (GDI!) and attaching a context to it.

Reacting to ALT+TAB, window resizing, minimization etc. is a matter of catching and handling the relevant window messages. Windows integration!
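Condensed into code, the whole recipe is ordinary Win32. A sketch with error handling omitted and 1024x768x32 hardcoded for brevity:

[code]
#include <windows.h>

void EnterFullscreen(HWND hwnd)
{
    DEVMODE dm = {0};
    dm.dmSize       = sizeof(dm);
    dm.dmPelsWidth  = 1024;
    dm.dmPelsHeight = 768;
    dm.dmBitsPerPel = 32;
    dm.dmFields     = DM_PELSWIDTH | DM_PELSHEIGHT | DM_BITSPERPEL;
    ChangeDisplaySettings(&dm, CDS_FULLSCREEN);             // GDI!

    SetWindowLong(hwnd, GWL_STYLE, WS_POPUP | WS_VISIBLE);  // covers the screen
    SetWindowPos(hwnd, HWND_TOP, 0, 0, 1024, 768, SWP_FRAMECHANGED);
}

// ALT+TAB arrives as an ordinary window message:
LRESULT CALLBACK WndProc(HWND h, UINT msg, WPARAM wp, LPARAM lp)
{
    switch (msg) {
    case WM_ACTIVATE:
        if (LOWORD(wp) == WA_INACTIVE)
            ChangeDisplaySettings(NULL, 0); // switched away: restore desktop
        // else: switch the display mode back and restore our resources
        return 0;
    }
    return DefWindowProc(h, msg, wp, lp);
}
[/code]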

Btw, an OpenGL context is never "lost" *snickers*.

The only OpenGL thing that matters which you can have without a window is an offscreen PBuffer context for R2T stuff.

Gamma: SetDeviceGammaRamp(hdc) works perfectly fine for me, and has done so for ages. Hey, wait a minute, that's another GDI function. Yet more Windows integration!

If an app sets the ramp, it is its obvious responsibility to set it back. That's why you have issues with crashing applications because they are terminated before getting a chance to restore desktop gamma. Btw, restoring desktop gamma yourself is no biggie on ATI drivers (nor S3 drivers nor Kyro drivers, for that matter). Only NVIDIA drivers suck in this respect because they insist that whatever gamma ramp was left hanging around by a crashed app is exactly "gamma=1.0".
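The save/restore itself is two GDI calls, sketched:

[code]
// Save the desktop ramp before touching it, restore it on the way out.
#include <windows.h>

static WORD g_desktopRamp[3][256]; // red, green, blue

void SaveDesktopGamma(HDC hdc)    { GetDeviceGammaRamp(hdc, g_desktopRamp); }
void RestoreDesktopGamma(HDC hdc) { SetDeviceGammaRamp(hdc, g_desktopRamp); }
[/code]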

And now, if you will, please tell me how a DirectX Graphics fullscreen device integrates with Windows, in your well-informed opinion :rolleyes:
Scali said:
Stuff like that really annoys me.
Your own imagination annoys you? Ouch.
 
Just dl'd the 61.76 release notes; they show very few OpenGL issues...

Just because they didn't fix issues doesn't mean they aren't there. Besides, NVIDIA of course has the fewest OpenGL issues, partly because for some reason most OpenGL developers are NVIDIA-minded, and partly because their drivers reached maturity long ago.

That's a fact? Then show me conclusive proof.

Visit tomshardware or any other review site, and look at benchmarks of Matrox, S3, XGI or ATi cards, and compare them to NVIDIA (we'll take them as the reference, for lack of a better measurement). You will see that for all of them the performance margin in Direct3D is significantly smaller than the performance margin in OpenGL. There is your conclusive proof.

And that certainly comes from the OpenGL architecture not being performance-friendly. Uh, no, I don't think so.

It comes from the OpenGL implementation being less efficient than the Direct3D implementation. Since you have not given your view on why this is, I will stick to my view: architectural problems (forced software emulation for one).

So if they happen in D3D applications, it's also because OpenGL is not integrated with the OS?
Or is it just that some developers are too lazy to handle task switching correctly?

If you don't handle it in D3D, you are lazy, because D3D allows you to handle it correctly without major hacks. In OpenGL it's a mess.

What problems with gamma correction?
There's an easy solution to those kinds of problems: MS could properly support OpenGL.

I disagree. OpenGL should have had its own window management built in, like Direct3D. That way such states can be linked to the context of that window, and when the application dies, the context is restored properly. This is the integration I'm talking about.

Wow, and D3D won't give you problems with lousy drivers?
I would be very happy when MS finally decides to properly integrate OpenGL into Windows...

I would be very happy if MS removed OpenGL from Windows completely.
 
You mean it's slower in Doom 3 than in Battle of Proxycon or what?
You don't write OpenGL code. Is that correct? So how do you judge?

I do write OpenGL code (ironically enough, I have only written OpenGL code professionally so far; the Direct3D code has been a hobby project until now). I judge both from my own OpenGL code performance and from other applications.

Now that's the most stupid point you could have made. OpenGL contexts are bound to a window. You can't have a (visible) OpenGL context without a window (even if it's the "desktop" window, but that's extremely bad practice, of course). There's your Windows integration!

Wrong. The OS apparently has no way of knowing that a window has exclusive fullscreen access, or that it has its own resolution, as in Direct3D.
So apparently the OpenGL context is attached to the window, but not the other way around. OpenGL knows about the window, but the window does not know about OpenGL. Stupid point? I think not. Alt-tab in Doom3 and you'll see what I mean.

Fullscreen OpenGL is done by switching resolution (ChangeDisplaySettings - GDI!), opening a WS_POPUP window of sorts that covers the whole screen (GDI!) and attaching a context to it.

Obviously ChangeDisplaySettings is a GLOBAL function, not related to the window itself, let alone to any OpenGL contexts attached to it. Which is where the problems stem from.
But you knew this, I hope?

Btw, an OpenGL context is never "lost" *snickers*.

Yes, how exactly is that possible?
Obviously when you switch away from a fullscreen application, the video memory is trashed. Apparently OpenGL leaves this to be handled by the driver in some way. Perhaps this again has something to do with that architecture, which is harder to implement than the Direct3D one?

Gamma: SetDeviceGammaRamp(hdc) works perfectly fine for me, and has done so for ages. Hey, wait a minute, that's another GDI function. Yet more Windows integration!

Again, calling a GLOBAL Windows API function is the opposite of Windows integration. If you want to look at integration: in Direct3D you set the gamma on the D3DDevice itself. Which is also why a crashing D3D app has no problem resetting the global settings: its settings were only local.
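For comparison, the per-device route in Direct3D 9, sketched; this assumes an existing IDirect3DDevice9* created in fullscreen mode, since gamma ramps only apply to fullscreen swap chains.

[code]
// The ramp is set on the device, not globally.
#include <d3d9.h>

void SetIdentityGamma(IDirect3DDevice9* device)
{
    D3DGAMMARAMP ramp;
    for (int i = 0; i < 256; ++i)
        ramp.red[i] = ramp.green[i] = ramp.blue[i] = (WORD)(i * 257); // 0..65535
    device->SetGammaRamp(0, D3DSGR_NO_CALIBRATION, &ramp); // swap chain 0
}
[/code]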

I hope this was educational for you.
 
Chalnoth said:
Which I'm sure they won't. Microsoft has no intention of supporting OpenGL as well as it could be supported. Hell, they still only support version 1.1, meaning that developers have to use extensions for what is now basic functionality.

Microsoft has a vested interest in making Direct3D ubiquitous: it reduces the viability of multi-platform games. As long as games require Windows to run, other operating systems don't have much chance of making their way in the desktop world.

For that reason alone I want more developers to make use of OpenGL.

I totally agree.
 
Scali said:
Just because they didn't fix issues doesn't mean they aren't there. Besides, NVIDIA of course has the fewest OpenGL issues, partly because for some reason most OpenGL developers are NVIDIA-minded, and partly because their drivers reached maturity long ago.
Then I guess you think NVidia likes to keep those issues a secret. But only if they affect OpenGL.

That's a fact? Then show me conclusive proof.

Visit tomshardware or any other review site, and look at benchmarks of Matrox, S3, XGI or ATi cards, and compare them to NVIDIA (we'll take them as the reference, for lack of a better measurement). You will see that for all of them the performance margin in Direct3D is significantly smaller than the performance margin in OpenGL. There is your conclusive proof.
That's likely the poorest "proof" I've ever seen. It doesn't say a thing about one API being faster than the other.

And that certainly comes from the OpenGL architecture not being performance-friendly. Uh, no, I don't think so.

It comes from the OpenGL implementation being less efficient than the Direct3D implementation.
And I don't believe that.

If you don't handle it in D3D, you are lazy, because D3D allows you to handle it correctly without major hacks. In OpenGL it's a mess.
Yep, window message handling is indeed an utter mess, and a major hack. It should be done away with.
 
Then I guess you think NVidia likes to keep those issues a secret. But only if they affect OpenGL.

No, I think that if problems aren't fixed, they are not mentioned in the changelog. Especially with ATi, we all know they have performance problems and bugs in a lot of OpenGL applications, but many of them don't get fixed, so they are not in the changelog. So the changelog is not a good measure of bugs.

That's likely the poorest "proof" I've ever seen. It doesn't say a thing about one API being faster than the other.

I never claimed that the OpenGL API was slower. I claimed that all manufacturers (possibly NVIDIA excluded) have relatively poor performance in OpenGL compared to Direct3D, and the benchmarks show that. This is common knowledge anyway. Don't pretend you don't know that ATi/Matrox/S3/XGI perform poorly in OpenGL.

And I don't believe that.

If it's not the implementation, it must be the API itself. But that is not what I was saying. I was saying that it is hard to create an efficient implementation of an OpenGL driver, since apparently only NVIDIA has managed it so far. Whether that is due to the architecture of OpenGL, or due to the complexity of implementing that architecture, is not relevant, really.

Yep, window message handling is indeed an utter mess, and a major hack. It should be done away with.

Indeed, like with Direct3D, where alt-tab simply works for fullscreen applications. That's because the resolution and gamma settings etc. are local to the window. When the window loses focus, the desktop settings are restored. The programmer only needs to handle the unmanaged resources that are lost, and all is well. This is very simple and clean to do: e.g. a simple callback function that releases all unmanaged resources, and another one that recreates them (sketched below).
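Sketched against the D3D9 API; OnLostDevice/OnResetDevice stand for the application's own two callbacks:

[code]
// Poll the cooperative level each frame; on D3DERR_DEVICENOTRESET release
// the unmanaged (D3DPOOL_DEFAULT) resources, Reset the device, recreate them.
#include <d3d9.h>

extern void OnLostDevice();   // releases D3DPOOL_DEFAULT resources
extern void OnResetDevice();  // recreates them

bool HandleDeviceLoss(IDirect3DDevice9* device, D3DPRESENT_PARAMETERS* pp)
{
    HRESULT hr = device->TestCooperativeLevel();
    if (hr == D3DERR_DEVICELOST)
        return false;                  // still switched away: skip rendering
    if (hr == D3DERR_DEVICENOTRESET) {
        OnLostDevice();
        if (FAILED(device->Reset(pp))) // restores resolution, gamma, etc.
            return false;
        OnResetDevice();
    }
    return true;                       // safe to render this frame
}
[/code]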
With OpenGL, apparently even Carmack himself is not capable of implementing proper alt-tab handling for a fullscreen application.
 
*watches his lovely little topic sail away south, replaced by a bigger, meaner topic...*

Chalnoth - agreed, it's the game engine algorithms that determine the CPU/GPU loading. My shorthand in the title was loose, but I corrected this by my third post. Still the situation remains: on the present trend, it's all too easy to see that if many titles don't adapt, then with next-generation video hardware you'd need a lot more than a next-generation CPU to feed it.
 
Well, put simply, game developers can manage how much CPU and GPU power their game requires independently, to a large degree. So how much CPU power next-generation games require will really depend upon what CPUs people have in their machines in the next couple of years.
 
Scali said:
I never claimed that the OpenGL API was slower. I claimed that all manufacturers (possibly NVIDIA excluded) have relatively poor performance in OpenGL compared to Direct3D, and the benchmarks show that. This is common knowledge anyway. Don't pretend you don't know that ATi/Matrox/S3/XGI perform poorly in OpenGL.
Which would be a result of all IHVs except nVidia focusing on Direct3D at the expense of OpenGL. If more game developers wrote for OpenGL, this situation would no longer occur.
 
Which would be a result of all IHVs except nVidia focusing on Direct3D at the expense of OpenGL. If more game developers wrote for OpenGL, this situation would no longer occur.

Is that so? Some of the most popular games run on OpenGL. I doubt the amount of time spent on OpenGL is that much less than on Direct3D.
I think it's more a case of OpenGL drivers requiring a lot more effort. After all these years of driver writing, one would expect at least ATi to have decent OpenGL drivers if they were no harder to write than Direct3D drivers, which they did manage quite well.
We'll see... Doom3 apparently has given ATi's OpenGL team a new impetus, but I doubt they will improve all that much, really.
 