DX 10 New | OpenGL?

micro

Newcomer
Everyone knows about DirectX 10 and is excited about it; well, some aren't, but others are. A question I have is: what's going on with OpenGL? Has any new technology been announced, a new OpenGL version perhaps?

I just wanted to know, because I have heard nothing about OpenGL for years now...
 
Again:
Version numbers don't matter much for OpenGL, least of all on Windows systems, because OpenGL has an extension mechanism to expose new functionality.

Direct3D needs to be revved for new features because Microsoft wants to define a maximum feature set for each version, and in fact does not grant WHQL certification to drivers that circumvent this control out of the box (geometry instancing on ATI cards is the example here). OpenGL, on the other hand, does not need to be revved to expose any and all new features the next generation of cards will bring to developers and users. In fact, OpenGL will be better at exposing new features of the next-generation cards, because there is no artificial tie to an OS as with DirectX 10, and you can use all the features on Windows XP (or even 98, if ATI/NVIDIA bother to release an updated driver). There is also no need for the IHVs to throw away and rewrite their drivers to match a new internal interface to the system.
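To make the extension mechanism concrete, here is a minimal sketch in C (my own illustration, assuming an active GL context on a Windows system; GL_ARB_multitexture is just an example of an extension you might probe):

```c
#include <string.h>
#include <stdio.h>
#include <windows.h>
#include <GL/gl.h>

/* On Windows, entry points beyond GL 1.1 are fetched at runtime. */
typedef void (APIENTRY *PFNGLACTIVETEXTUREARBPROC)(GLenum texture);

static int has_extension(const char *name)
{
    /* The extension string is a space-separated list; a plain strstr
     * is good enough here as long as 'name' is a full token. */
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext && strstr(ext, name) != NULL;
}

void init_extensions(void)
{
    if (has_extension("GL_ARB_multitexture")) {
        PFNGLACTIVETEXTUREARBPROC glActiveTextureARB =
            (PFNGLACTIVETEXTUREARBPROC)wglGetProcAddress("glActiveTextureARB");
        if (glActiveTextureARB)
            printf("multitexture exposed without any API revision\n");
    }
}
```

The same two steps work for any vendor or ARB extension, which is the whole point: no new OS, no new runtime, just a driver update.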

All is good for OpenGL on the technical side. Politics is the name of the single problem OpenGL may be facing (again).

The ARB did not compete aggressively with Direct3D through revisions in the past because there has not been, and there still isn't, any technical necessity to do so. Versioning is obviously great for marketing, because not-so-techies perceive things in peculiar ways, and version numbers have proven to be a very powerful instrument even when utterly hollow.

Khronos appears to be more dedicated to making things interesting for the old marketing department, which is nice, I guess.
 
OpenGL 3.0 should be equal to, if not better than, DX10; I don't know what else you need...
I hope so, but I'll believe it when I see it... talk is cheap. They already got too scared to make the real changes that were needed in GL 2.0; I hope they don't repeat the same mistake.

In fact, OpenGL will be better at exposing new features of the next-generation cards, because there is no artificial tie to an OS as with DirectX 10
Again, that's true in theory, but even NVIDIA has slowed down on actually bothering to author and implement GL extensions. ATI just doesn't bother.
 
As I already stated in the OpenGL forums, I find it rather funny that DirectX is seen as the ultimate standard feature-wise. With OpenGL, you will get the same functionality (maybe even more) without the need for Vista! OpenGL shaders are even now ahead of DirectX 9 in both flexibility and functionality. That is the nice thing about OpenGL: vendors can expose functionality that is not included in DirectX! The only problem with OpenGL is the API, which needs a major revision, but I hope this will happen soon. Even now, OpenGL and DirectX 9 are almost even, with OpenGL still being the more flexible of the two...
 
As I already stated in the OpenGL forums, I find it rather funny that DirectX is seen as the ultimate standard feature-wise.
I don't think that's necessarily the case... they just have a spec and supporting hardware out for DX10, whereas I've seen only "this is what we want to do"-type articles for OpenGL. I can only assume that they're going to do it, but as it stands, OpenGL is clearly behind D3D10, even if that's arguable for D3D9.

With OpenGL, you will get the same functionality (maybe even more) without the need for Vista!
Are you honestly trying to say that GL currently has all of the functionality of D3D10? The need for Vista has a lot to do with virtualized GPU memory and driver changes, etc.

OpenGL shaders are even now ahead of DirectX 9 in both flexibility and functionality.
What? The feature sets are pretty equivalent last time I checked (maybe I'm missing something?). IMHO GLSL is certainly the more poorly designed language of the two syntax-wise as well.

Anyway, I'm rooting for OpenGL too, but seriously, use the best tool for the job. I don't see a need for brand loyalty in this case; pretty much any graphics programmer could use both comfortably.
 
Are you honestly trying to say that GL currently has all of the functionality of D3D10? The need for Vista has a lot to do with virtualized GPU memory and driver changes, etc.

Not currently (I said *will* :) ... But why do you need virtualized memory anyhow? I think this should be covered by the driver... With buffer objects in OpenGL we get good virtualization (to be honest, I didn't see a great difference from the DX10 memory model, but I also admit that I didn't study it very closely). A new, simpler API is needed, without question.
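For illustration, here is a minimal sketch of what I mean by buffer objects (GL 1.5 / ARB_vertex_buffer_object; I'm assuming a loader such as GLEW has already fetched the entry points):

```c
#include <stddef.h>
#include <GL/glew.h>

GLuint upload_vertices(const float *data, size_t bytes)
{
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    /* The application never says *where* the data lives. The usage
     * hint is the only placement information the driver gets; it is
     * free to keep the buffer in video, AGP, or system memory and to
     * migrate it as needed, which is the virtualization I mean. */
    glBufferData(GL_ARRAY_BUFFER, bytes, data, GL_STATIC_DRAW);
    return vbo;
}
```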


What? The feature sets are pretty equivalent last time I checked (maybe I'm missing something?). IMHO GLSL is certainly the more poorly designed language of the two syntax-wise as well.

I just looked once more and you are right. I must apologize; I thought that DirectX 9 had no derivatives or back-facing register. The advantage of GLSL that I see is that it is more abstract, but this is really a matter of taste, agreed.
 
But why do you need virtualized memory anyhow? I think this should be covered by the driver... With buffer objects in OpenGL we get good virtualization (to be honest, I didn't see a great difference from the DX10 memory model, but I also admit that I didn't study it very closely).
Yeah, that's a different discussion entirely... I'm not convinced by the "let the driver figure everything out" strategy; it seems to me that that's exactly why OpenGL gets into performance trouble, and a lot of the reason for creating an "LM" version: too-high-level concepts can't always be optimized very well. The driver ends up trying to "guess" and "infer" what you are trying to do, when you could just tell it directly.

Anyway, I can see advantages and disadvantages to both approaches here, but I agree that GL's memory model should be flexible enough to slap virtual memory on at the driver level. I certainly think it's needed moving forward, but I don't think either D3D or GL will expose it to the application (just as it isn't exposed in the OS), and that's a good choice IMHO.
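To give a concrete example of the "guessing" problem: the classic buffer "orphaning" idiom is about the only way a GL application today can state its intent instead of leaving the driver to infer it. A sketch (again assuming GLEW or a similar loader):

```c
#include <stddef.h>
#include <GL/glew.h>

void update_dynamic_buffer(GLuint vbo, const void *data, size_t bytes)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    /* Re-specifying the store with NULL tells the driver the old
     * contents are dead, so it can hand back fresh memory instead of
     * stalling until the GPU has finished reading last frame's data. */
    glBufferData(GL_ARRAY_BUFFER, bytes, NULL, GL_STREAM_DRAW);
    glBufferSubData(GL_ARRAY_BUFFER, 0, bytes, data);
}
```

Without the NULL re-specification, the driver has to guess whether the overwrite is safe, which is exactly the kind of inference that costs performance.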
 
Could you sketch a scenario of how a game would utilize the virtualized GPU memory (or the driver changes, etc.)?
For example: lots of textures in a scene. They can all be loaded into "memory" and paged in as the user moves around the scene and so forth. That's a simple example, but I think it motivates the need.
 
AndyTX, a question: won't the performance hit of that (loading pages when a texture access misses in board memory) be pretty catastrophic compared to an application that figures out what will be needed and sends it to the board while the GPU is busy with other things?
 
How does the application know how much memory is free?
How does the application know how much memory something is going to take up?

What happens if there simply isn't enough memory? Does the program just terminate?
 
A1: It queries the driver.
A2: It knows when you upload.
A3: It makes room or goes to the next best thing, my Seagate Barracuda.
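To be concrete about A1: core GL has no portable "bytes free" query, so the closest standard mechanism is asking whether a given texture would fit at all, via proxy textures. A minimal sketch:

```c
#include <GL/gl.h>

int texture_would_fit(GLsizei w, GLsizei h)
{
    GLint width = 0;
    /* A proxy target allocates nothing; the driver just records
     * whether it could support the texture as described. */
    glTexImage2D(GL_PROXY_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    /* On failure, the proxy's state is zeroed out. */
    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0,
                             GL_TEXTURE_WIDTH, &width);
    return width != 0;
}
```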

How does the application know how much memory is free?
How does the application know how much memory something is going to take up?

What happens if there simply isn't enough memory? Does the program just terminate?
 
A1: It queries the driver.
A2: It knows when you upload.
A3: It makes room or goes to the next best thing, my Seagate Barracuda.
Q2: What do you mean? After you upload, do you query the driver again, or is there going to be a special driver function to tell you how much space something is going to need? You'd have to do it for more than just texture uploads too, as the driver may also cache geometry etc. in video card memory.
Q3: Running it off your Seagate Barracuda would require virtualization, and wouldn't it be better to have it in system memory anyway?
 
For example: lots of textures in a scene. They can all be loaded into "memory" and paged in as the user moves around the scene and so forth. That's a simple example, but I think it motivates the need.
If I'm understanding your scenario correctly, this can be done already, with the hardware and APIs that exist today. Both graphics APIs are abstracted enough to allow this now (no need for the driver model changes).

I can write an infinite loop that uploads thousands of textures until OpenGL stops accepting them and reports an "Out of memory" error. This won't happen until the swap file gets full. Hence, I can already use texture data whose total memory footprint exceeds the system's physical memory. These swapped-out textures are perfectly usable for rendering; they will be swapped in by the driver when the renderer binds them. Of course, I'm assuming it's the renderer's responsibility to detect which textures are needed for rendering a frame.
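For what it's worth, here is a sketch of that loop (the texture size and the plain GL 1.1 calls are arbitrary choices; an active context is assumed):

```c
#include <stdio.h>
#include <stdlib.h>
#include <GL/gl.h>

#define TEX_DIM 1024  /* 1024x1024 RGBA8 = 4 MB per texture */

int upload_until_oom(void)
{
    void *pixels = calloc((size_t)TEX_DIM * TEX_DIM, 4);
    int count = 0;
    for (;;) {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, TEX_DIM, TEX_DIM, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        if (glGetError() == GL_OUT_OF_MEMORY)
            break;  /* textures uploaded so far remain usable; the
                       driver pages them in when they are bound */
        count++;
    }
    printf("accepted %d textures before GL_OUT_OF_MEMORY\n", count);
    free(pixels);
    return count;
}
```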

Perhaps you have actually meant something different, so I'd be glad if you cleared this up.
 
If I'm understanding your scenario correctly, this can be done already, with the hardware and APIs that exist today. Both graphics APIs are abstracted enough to allow this now (no need for the driver model changes).
He means he doesn't like the driver doing this; he would rather it report out of memory and have the program handle it, because the driver doesn't understand players and levels and such, so it can't be as intelligent about it.
 