DX 10 New | OpenGL?

Sorry, I don't understand what you mean. I simply still don't see where the so-called "virtualized GPU memory" kicks in. Drivers today are capable of transfers between GPU memory <--> system memory <--> swap file. They do it transparently from the client code's POV. The only noticeable thing may be a slowdown, which is simply characteristic of the memory hierarchy, bus bandwidths, etc. I don't see any relevance to the GPU, DirectX generation, driver models, etc.

What's new and "virtualized" in the virtualized GPU memory then?
 
I suspect the so-called "megatexture" kind of functionality is where developers would really like to see virtualized GPU memory kick in.

We have yet to see this function efficiently in practice though, and the max texture sizes (8k) are still ~8x too small to support ginormous virtual textures.
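
Just to make the idea concrete, here's a rough sketch (my own illustration, with invented names and sizes) of the software bookkeeping a "megatexture" scheme implies: a hypothetical 64k x 64k virtual texture split into 128x128 texel pages, mapped on demand into slots of a smaller resident cache texture.

```cpp
#include <cstdint>
#include <unordered_map>

// Hypothetical sizes -- nothing here is mandated by any API.
const int kVirtualSize  = 65536;                      // 64k x 64k virtual texture
const int kPageSize     = 128;                        // 128x128 texel pages
const int kPagesPerSide = kVirtualSize / kPageSize;   // 512 pages per side

struct PageId    { int x, y; };   // coordinates in the virtual page grid
struct CacheSlot { int x, y; };   // coordinates in the resident cache texture

// Software "page table": virtual page -> slot in the physical cache texture.
// A real renderer would mirror this in an indirection texture the shader samples.
static std::unordered_map<std::uint32_t, CacheSlot> pageTable;

static std::uint32_t key(PageId p)
{
    return std::uint32_t(p.y) * kPagesPerSide + std::uint32_t(p.x);
}

// Which virtual page does a normalized texture coordinate land in?
PageId pageForUV(float u, float v)
{
    return PageId{ int(u * kPagesPerSide), int(v * kPagesPerSide) };
}

// On a miss: upload the page's texels into a free cache slot (upload and slot
// allocation omitted here) and record the virtual -> physical mapping.
void makeResident(PageId p, CacheSlot freeSlot)
{
    pageTable[key(p)] = freeSlot;
}
```

Seen this way, the "~8x too small" remark lines up: 8 x 8k is 64k, roughly the order of virtual texture size people would like to address directly.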
 
I think it's a good idea for the same reason virtual memory in an OS is a ridiculously good idea. All of the same principles apply, and adding the GPU onto the cache hierarchy is smart!

The question of whether or not to expose explicit control to the application is debatable, and I have no strong opinion. However, GL has shot itself in the foot numerous times by trying to "infer" too much about what the user is doing, to the point that there is no clear "fast path" any more... subtle changes can cause chaotic performance. OpenGL 3.0 LM is *supposed* to fix that (by getting rid of and/or layering all of the cruft), but I have no idea if/when it is coming.
 

I disagree, the OS virtual memory is a totally different beast.

On the CPU, programs operate with raw addresses, so dedicated hardware is needed in the CPU to get around this and implement memory paging in a way that is invisible to a running program.

Graphics applications, however, use a (relatively) high-level API. They don't see any raw addresses, just objects like textures and framebuffers (we can ignore locking, as it's irrelevant to our discussion). This allows a 3D API to completely hide memory management from the user in a purely software way (unlike OS virtual memory).

The only situation in which dedicated paging hardware might make a difference relates to the granularity of the paging. In the software method (which has been available for ages), the granularity of a swapped memory block is an entire texture. Dedicated paging hardware on board the GPU could reduce the granularity to n*n pixel blocks. This would allow you to upload, say, a 1 GB texture, let it be swapped out, then render a scene that accesses only a tiny portion of this giant texture.
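
For what it's worth, that whole-texture-granularity software method is more or less what the classic OpenGL 1.1 residency hints already expose; a minimal sketch (the fixed array size is just for the example):

```cpp
#include <GL/gl.h>

// Whole-texture granularity: the driver swaps complete texture objects between
// video and system memory. The application can only query residency and hint
// priorities -- the effective "page size" is an entire texture.
void hintWorkingSet(const GLuint* textures, GLsizei count, const GLclampf* priorities)
{
    // Ask which of these textures currently live in video memory.
    GLboolean resident[64];                       // assume count <= 64 for the sketch
    glAreTexturesResident(count, textures, resident);

    // Raise priorities on textures we expect to touch this frame so the driver
    // prefers to keep them resident; low-priority ones are eviction candidates.
    glPrioritizeTextures(count, textures, priorities);
}
```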

However, this was already done a long time ago (so I still don't see what's so special about Vista's solution):

(quote from http://www.thecomputershow.com/computershow/hardware/permedia3.htm)
Additionally, PERMEDIA 3 is expected to be the first graphics processor to offer Virtual Texturing - a capability that automatically manages optimal placement of textures in system and local graphics memory. The PERMEDIA 3 architecture incorporates a demand-page texture sub-system that causes a dedicated DMA unit to download 256x256 pages of textures to local memory when they are first accessed. This will allow software developers to straightforwardly load all textures into system memory, while the hardware autonomously maintains an optimal working set of texture pages cached in available local graphics memory for maximum performance. Virtual Texturing will allow execution of textures from system memory in PCI, as well as AGP systems, provides optimized use of backplane bandwidth and avoids local texture memory fragmentation through virtual to physical texture address mapping.
 
I disagree, the OS virtual memory is a totally different beast.
It is different, I agree, but the need is still there IMHO.

In particular, you hinted at the cases that require/would benefit from it: when you need more dynamically addressable space. "Mega-texture" is one example, as cass mentions, but there are certainly a lot of other use cases as well.

It is also no longer difficult to map a huge address range into a useful space with texture arrays. Sure, an 8k by 8k texture is somewhat limited in size, but with texture arrays you can easily make more memory available to a single shader (dynamically) than your GPU has. That can certainly be useful for a lot of things.
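
For example (a hedged sketch using the EXT_texture_array extension; the layer count and format are arbitrary, and whether the driver actually lets such an allocation succeed is another matter):

```cpp
#include <GL/gl.h>
#include <GL/glext.h>   // GL_TEXTURE_2D_ARRAY_EXT

// 512 layers of 4k x 4k RGBA8 is 512 * 4096 * 4096 * 4 bytes = 32 GB of texel
// data -- far more than a 2006-era GPU's local memory, yet a single shader can
// index all of it through one array texture.
void allocateHugeArray(GLuint tex)
{
    glBindTexture(GL_TEXTURE_2D_ARRAY_EXT, tex);

    // On Windows, glTexImage3D is a GL 1.2+ entry point and must be fetched via
    // wglGetProcAddress; error handling and per-layer uploads are omitted.
    glTexImage3D(GL_TEXTURE_2D_ARRAY_EXT, 0, GL_RGBA8,
                 4096, 4096, 512,                 // width, height, layers
                 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
}
```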

Also remember the low-level access that ATI and now NVIDIA allow. These things are getting closer to general purpose processors every day, and the abstractions of "textures" and "framebuffers" are no longer as useful, particularly for non-graphics work.

Anyways I don't see anything "special" about Vista's solution; I was just mentioning it to note some of the reasons why D3D10 is not supported on XP - i.e. they've moved to a much better driver model.
 
I don't think that's necessarily the case... they just have a spec and supporting hardware out for DX10, whereas I've seen only "this is what we want to do"-type articles for OpenGL. I can only assume that they're going to do it, but as it stands, OpenGL is trivially behind D3D10, if arguable for D3D9.
NVIDIA has their extension specs published, status is "shipping with Geforce 8 series". Where can I download that D3D10 runtime that you speak of, and who exactly makes the drivers for it?
AndyTX said:
Are you honestly trying to say that GL currently has all of the functionality of D3D10? The need for Vista has a lot to do with virtualized GPU memory and driver changes, etc.
Ex. lots of textures in a scene. They can all be loaded into "memory" and paged in as the user moves around the scene and so forth. That's a simple example, but I think it motivates the need.
OpenGL 1.0 can do that for you.
AndyTX said:
Anyways I'm rooting for OpenGL too, but seriously use the best tool for the job.
Yeah right.

edited bits: Watch closely now, kids, because this is the kind of posting Acert93 will neg-rep for.
 
NVIDIA has their extension specs published, status is "shipping with Geforce 8 series".
I know, I was very surprised and happy! I intend to be testing some of those beasts out in the next few days (once NVIDIA ships x64 drivers that work... sigh).

OpenGL 1.0 can do that for you.
I think you missed my whole argument there. Theoretically either API can do it, but the complexity of doing it "automagically" means that neither does it at the moment. Furthermore the problem is exacerbated with the new large amounts of addressable memory that texture arrays, etc. introduce. My guess is that the drivers will not handle these cases well right now, or for quite a long time. It seems like an obvious case for the application to simply tell the runtime what to do.

Yeah right.
I'm not sure what you're implying... perhaps I'm reading too much into that comment.
 
I think you missed my whole argument there. Theoretically either API can do it, but the complexity of doing it "automagically" means that neither does it at the moment. Furthermore the problem is exacerbated with the new large amounts of addressable memory that texture arrays, etc. introduce. My guess is that the drivers will not handle these cases well right now, or for quite a long time. It seems like an obvious case for the application to simply tell the runtime what to do.
That's all fine and well, but besides the point. D3D10 is not required for virtualized resources. OpenGL has always hidden low-level memory management from applications, which means that devices and drivers always had the opportunity to go all-out optimal if desired. Virtualized memory doesn't need a single new API entry point to be fully exploited, and in fact 3DLabs have done exactly that, starting with the P10/Wildcat VP series back in, uh, 2002 IIRC.

Of course the drivers need to take advantage of it to become useful, but that's something entirely different to needing D3D10.
AndyTX said:
I'm not sure what you're implying... perhaps I'm reading too much into that comment.
What I'm implying is that you have displayed a rather FUDdy way of rooting for OpenGL which does more harm than good.
 
I think the point here is that the way DX9 and previous versions are designed makes implementing virtual memory difficult, whereas with OpenGL this was always a possibility thanks to its client/server design. From the start, OpenGL was intended for a high-end audience (ex. SGI workstations), unlike Direct3D, hence this design discrepancy. This is evident in the fact that with DX9 the app must take total control of the GPU, whereas with OGL this was never a requirement. Moreover, once OGL3 comes around (which should be soon), it will again be the most functional and highest-end.

On the other hand, being on the high end has its costs. As we all know, the OGL extension mechanism isn't always fun, although libraries like GLEW reduce the problems tremendously. Still, even worse is how it ensures that different hardware will, de facto, have different capabilities. However, when you want the best performance and the maximum feature set, these are the facts of life. AFAIK, AAA games are in this category.
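
For reference, the GLEW side of that is only a few lines; a minimal sketch, with the extensions queried here chosen purely as examples:

```cpp
#include <GL/glew.h>
#include <cstdio>

// Probe per-vendor capabilities once at startup (after a GL context exists)
// and choose code paths accordingly.
bool checkCaps()
{
    if (glewInit() != GLEW_OK)
        return false;

    // GLEW exposes a boolean per extension / core version it knows about.
    std::printf("EXT_texture_array:      %d\n", (int)GLEW_EXT_texture_array);
    std::printf("EXT_framebuffer_object: %d\n", (int)GLEW_EXT_framebuffer_object);
    std::printf("OpenGL 2.0 core:        %d\n", (int)GLEW_VERSION_2_0);
    return true;
}
```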
 
A consequence of the resource types exposed by Direct3D 10 (e.g., large 1D buffers and texture arrays - e.g., 512 x 8K x 8K) is more pressure on the memory system and a greater need for virtualization of a single resource. It turns out that the technology (software, hardware) is generally not quite there yet, so the maximum resource size that needs to be supported was reduced to 128 MB rather than, say, 1 GB or larger. Virtual memory eliminates a lot of the bickering around what resource sizes should be broadly supported.
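
To put rough numbers on that pressure (assuming 4-byte RGBA8 texels, which the example above doesn't pin down):

```cpp
#include <cstdint>

// Worked arithmetic for a maximal 512 x 8K x 8K texture array, assuming RGBA8:
//   512 * 8192 * 8192 * 4 bytes = 137,438,953,472 bytes = 128 GiB per resource,
// roughly a thousand times the 128 MB size that ended up being guaranteed.
constexpr std::uint64_t kLayers        = 512;
constexpr std::uint64_t kSide          = 8192;
constexpr std::uint64_t kTexelSize     = 4;    // RGBA8 (assumed)
constexpr std::uint64_t kResourceBytes = kLayers * kSide * kSide * kTexelSize;
static_assert(kResourceBytes == (1ULL << 37), "128 GiB");   // 2^37 bytes
```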
 
NVIDIA has their extension specs published, status is "shipping with Geforce 8 series". Where can I download that D3D10 runtime that you speak of, and who exactly makes the drivers for it?

Not too many days ago, anyone could download it from http://www.windowsvista.com in the form of Vista RC1; however, the CTP is now sadly closed. It was available for months, though.
I don't know whether the current drivers have support for it, but both NVIDIA & ATI/AMD have released drivers for Vista RC1 and RC2 (and even RTM, at least in the case of ATI), which can be downloaded from their driver pages.
 
D3D10 is not required for virtualized resources. [...] Virtualized memory doesn't need a single new API entry point to be fully exploited
Did I say it was? If so, sorry. Note that this is true of both APIs, and indeed virtual memory by necessity doesn't need application or API support. This is true in OSes as well, so I certainly didn't mean to imply anything to the contrary.

I think this discussion has gotten a bit derailed... my initial response was:
AndyTX said:
The need for Vista has a lot to do with virtualized GPU memory and driver changes, etc.
And I certainly stand by that - Vista has a very different driver model from earlier NT derivatives, and D3D10 takes some advantage of that. Where did this turn into being about the API?

What I'm implying is that you have displayed a rather FUDdy way of rooting for OpenGL which does more harm than good.
That's certainly not my intention. As I may have mentioned, I use OpenGL almost exclusively, with some dabbling in Direct3D. They're both comparable APIs, and any programmer worth his/her salt can use either happily. However, it has been my experience that Direct3D is a lot better supported than OpenGL recently - mostly by ATI, but increasingly by NVIDIA. I'm excited to see that NVIDIA has released lots of new extensions for the G80, and this gives me more hope that they'll proceed with plans for OpenGL 3.0 LM, which includes badly-needed changes.

I certainly didn't want to get into any sort of API discussion here. I merely meant to comment that D3D10 needing Vista was quite justified.
 
...
However, it has been my experience that Direct3D is a lot better supported than OpenGL recently - mostly by ATI, but increasingly by NVIDIA. I'm excited to see that NVIDIA has released lots of new extensions for the G80, and this gives me more hope that they'll proceed with plans for OpenGL 3.0 LM, which includes badly-needed changes.
...

I am under the impression that they (NV/ATI) are working hard on OpenGL 3.0, hence the relatively low pace of OpenGL extensions.
I have to say I prefer well-thought-out, multi-vendor extensions to one-per-vendor ones, and I really wanted OpenGL 2.0 "Pure", so I can't wait to get OpenGL 3.0 Lean & Mean...
 
You think you're looking forward to it?
I wet my pants just thinking about it, literally.
I just want to play with the darn thing rather than do anything serious with it; as far as I'm concerned, new toys are good for me, but I may just end up upgrading my video card to play with most of the new features.
 