DX10 Cards & Apple

So how are the new DX10 cards going to benefit Apple? Will they upgrade their GUI to make use of the new features? I imagine they'll benefit from the new HD video acceleration in some way.

I know DX10 is a Windows implementation, but what about OpenGL?

So far the focus has been entirely on Vista & DX10; I'm curious what else these new cards can and will do.
 
For what? Where are the benefits needed in OSX and the general programs used in the space? Maybe some of the GPGPU performance would be nifty, and I guess some programs *might* make use of the new abilities, but otherwise... I don't really see the benefit to a Mac.
 
The biggest benefit the new DX10 cards can offer to Apple is the video decoding acceleration of H.264 / VC-1.
 
So how are the new DX10 cards going to benefit Apple? Will they upgrade their GUI to make use of the new features? I imagine they'll benefit from the new HD video acceleration in some way.
All features of the new cards can be used under Mac OS, just like they can be used under Linux or any older version of Windows for that matter. The tie of DX10 to Windows Vista is artificial and irrelevant if you have other APIs to access the same features.

The new generation didn't bring any new features that are particularly useful for UI rendering. That would have been the "DX7" generation, if anyone still remembers that.
 
All features of the new cards can be used under Mac OS, just like they can be used under Linux or any older version of Windows for that matter. The tie of DX10 to Windows Vista is artificial and irrelevant if you have other APIs to access the same features.

The new generation didn't bring any new features that are particularly useful for UI rendering. That would have been the "DX7" generation, if anyone still remembers that.

The tie of DX10 and Vista isn't artificial, but there aren't any ties between Vista and the new features brought by DX10 cards. In other words, if you have OpenGL extensions for them, you can use them on any OS.
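To make that concrete, here's a minimal sketch (I'm using GL_EXT_gpu_shader4, one of the extensions NVIDIA exposed for the 8800, as a stand-in; the helper name is my own) of how any program can probe for the new features regardless of OS:

    /* sketch: probe the driver for one of the new G80-class extensions */
    #include <string.h>
    #include <OpenGL/gl.h>   /* <GL/gl.h> outside the Mac */

    int has_gpu_shader4(void)
    {
        const char *exts = (const char *) glGetString(GL_EXTENSIONS);
        return exts != NULL && strstr(exts, "GL_EXT_gpu_shader4") != NULL;
    }

If the string is there, the features are there, whatever the OS happens to be.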
 
The biggest benefit the new DX10 cards can offer to Apple is the video decoding acceleration of H.264 / VC-1.

I'd say it's more about professional apps like Final Cut Studio 2 and Aperture.
I have been thinking about something lately: doesn't Mt Evans also come out in October, pretty much like Leopard is now supposed to do?

For video decoding I think they'll more likely use UVD :LOL:;)
 
[maven] said:
Write (hello LLVM) another back-end for CoreGraphics and suddenly you've taken advantage of the new features in all of the applications that use CG. Done!
Of what use could the new features be, though?
 
Don't forget these new GPUs (or VPUs) support OpenGL 2.1, which Leopard (the next iteration of Mac OS X) supports and uses for several things.

I am also certain you will see a nice improvement in Motion 3 (now that it supports 3D effects) from Final Cut Studio 2. It is always nice to be able to render effects in real time while editing video.
 
Apple has a version of OpenGL that is multithreaded (it gives twice the performance of the single-threaded one) and it will be included in the final version of Leopard (it already exists in the beta versions of the next OS).

I don't know how DX10 cards will run in these circumstances, and I am interested to know how the multithreaded OpenGL can run on a unified shader architecture like the DX10 ones.
 
[maven] said:
Write (hello LLVM) another back-end for CoreGraphics and suddenly you've taken advantage of the new features in all of the applications that use CG. Done!

I guess you are talking about Core Image. Apple will need a new version of it to be able to support the new GPU features, as the current language, called CIKernel, is a subset of GLSL and only supports the feature set of its PS 2.0 counterpart.

Learning the CIKernel Language

The best way to learn how to create CIKernel based filters is to study the book OpenGL Shading Language, by Randi J. Rost. There are a few things to note, however, since the CIKernel language is a subset of the OpenGL Shading Language. In particular, Core Image does not support the OpenGL Shading Language source code pre-processor. As well, the following are not supported:
  • The mat2, mat3, mat4, struct, and arrays data types.
  • The continue, break, and discard statements.
  • The %, <<, >>, &, ^, ||, &&, ^^, and ~ expression operators.
  • The ftransform, matrixCompMult, dFdx, dFdy, fwidth, noise1, noise2, noise3, noise4, and refract built-in functions.
As well, the if, for, while, and do while flow control statements are supported only when the loop condition can be inferred at the time the code is compiled.



So not even dynamic branching. Hopefully we will get a new Core Image version with Leopard.
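To make the limits concrete, here is a sketch of my own (not from the docs; the kernel name and parameter are made up) of what a legal kernel looks like inside that subset:

    kernel vec4 brighten(sampler src, float amount)
    {
        // fetch the source pixel at the current coordinate
        vec4 pixel = sample(src, samplerCoord(src));
        // straight-line arithmetic only: no data-dependent branching allowed
        pixel.rgb *= amount;
        return pixel;
    }

Anything with a data-dependent loop has to be unrollable at compile time, per the rules quoted above.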
Things are different for Core Audio, which could pretty well be accelerated in Leopard. This could be why R600 looks like a nice candidate for the Mac Pros. Think about an accelerated Logic Pro.


Apple has a version of OpenGL that is multithreaded (it gives twice the performance of the single-threaded one) and it will be included in the final version of Leopard (it already exists in the beta versions of the next OS).

I don't know how DX10 cards will run in these circumstances, and I am interested to know how the multithreaded OpenGL can run on a unified shader architecture like the DX10 ones.

It's already present in Tiger if you have a Mac Pro. I think all it does (and it is still a lot) is process all the OpenGL calls in another thread, so the application is less CPU bound. So even when Longs Peak and Mt Evans come it should still be a nice advantage.
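If I remember the technote right, an application has to opt in per context; the call is something like this (a sketch from memory, so double-check it):

    #include <OpenGL/OpenGL.h>

    /* opt the current context in to the multithreaded GL engine */
    CGLContextObj ctx = CGLGetCurrentContext();
    CGLError err = CGLEnable(ctx, kCGLCEMPEngine);
    if (err != kCGLNoError) {
        /* not available here: GL just keeps running single-threaded */
    }

The nice part is that the application's own GL calls don't change at all; the command processing just migrates to a second thread, which is exactly why it helps CPU-bound apps.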
 
I know DX10 is a Windows implementation, but what about OpenGL?

OpenGL supported all new features of the 8x00 series long before DX10 or Vista reached end consumers :-/ Nvidia presented all the required extensions (and drivers that support them) on the card's launch day. I am really shocked that apparently no one knows it. All that "DX version" and "SM version" nonsense makes me puke :(
 
OpenGL supported all new features of the 8x00 series long before DX10 or Vista reached end consumers :-/ Nvidia presented all the required extensions (and drivers that support them) on the card's launch day. I am really shocked that apparently no one knows it. All that "DX version" and "SM version" nonsense makes me puke :(

Well that's true, but I have my doubts that Apple will implement anything using Nvidia extensions. So no D3D10-equivalent features will be used in Apple products until the new versions of OpenGL come out.

"That DX version" and "SM Version" are the easiest ways to let people understand what you mean in a conversation. Sure you can say GLSL 1.10 or GLSL 1.20, Longs Peak or Mt Evans but It's hard to think people will be much more familiar with them.
I don't see saying "GLSL 1.0 without DN, dfdx, dfdy, fwidth and yadda dadda" it's really better than saying "PS 2.0 features equivalent".
And for the sake of discussion, saying "OpenGL 2.1 plus NV_fragment_program4, NV_geometry_program4, NV_geometry_shader4, NV_vertex_program4 andwhatnot" it's still not better than saying "D3D 10 features equivalent".
 
<offtopic>
Yes, you are absolutely right, but unfortunately such comparisons are not being used as they should be. The internet is full of things like "SM4.0 brings better IQ and performance than SM3.0", which is plain stupid. IMHO, it is much more important to see what exactly the new features are and how they can be used than just putting it all under the same "SM xx" hood. I think a statement like "card X supports fast fragment-shader branching" says much more than "card X is SM3.0". It just drives me crazy that everyone talks about "new DX" and "Vista 3D graphics features" but absolutely neglects the fact that the features of the new cards are in no way bound to DX or Windows or whatever.
</offtopic>

Regarding OpenGL, there are also some "EXT" (as opposed to "NV") extensions that introduce new capabilities for GLSL (like integer support). I guess they will also be supported by ATI in the same form, so they can be safely used.

Still, I see nothing in the new cards that could benefit UI creation. All a UI compositor needs is a card that can render to an offscreen target, with some basic shader support.
 
<offtopic>
Yes, you are absolutely right, but unfortunately such comparisons are not being used as they should be. The internet is full of things like "SM4.0 brings better IQ and performance than SM3.0", which is plain stupid. IMHO, it is much more important to see what exactly the new features are and how they can be used than just putting it all under the same "SM xx" hood. I think a statement like "card X supports fast fragment-shader branching" says much more than "card X is SM3.0". It just drives me crazy that everyone talks about "new DX" and "Vista 3D graphics features" but absolutely neglects the fact that the features of the new cards are in no way bound to DX or Windows or whatever.
</offtopic>
Fair enough! :smile:

Regarding OpenGL, there are also some "EXT" (as opposed to "NV") extensions that introduce new capabilities for GLSL (like integer support). I guess they will also be supported by ATI in the same form, so they can be safely used.

I was under the impression that GLSL wouldn't be updated anymore in OpenGL 2.1 and that all the new features would be brought in Mt Evans... I cannot seem to find the piece in question, though.
But you are right, I didn't see it: many of those extensions are named both NV and EXT.
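For example, with the EXT flavour of gpu_shader4 enabled, a fragment shader can finally do real integer work. A quick sketch of my own (the uniform name is made up):

    #version 120
    #extension GL_EXT_gpu_shader4 : require

    uniform isampler2D lookup;   // integer texture, new with these extensions

    void main()
    {
        // fetch a texel directly by integer coordinate, no filtering
        int v = texelFetch2D(lookup, ivec2(gl_FragCoord.xy), 0).r;
        // shifts and masks were simply not expressible before
        v = (v >> 4) & 0xF;
        gl_FragColor = vec4(vec3(float(v) / 15.0), 1.0);
    }

None of that needs Vista or D3D10; it only needs a driver that exposes the extension.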

Still, I see nothing in the new cards that could benefit UI creation. All a UI compositor needs is a card that can render to an offscreen target, with some basic shader support.

Well, performance-wise a unified architecture is a nice touch for them, as before they were most likely only using the pixel shaders and not the vertex shaders.
Feature-wise I don't think there is really much they could use...
 
No... OpenGL no... please... not in 2007, save your soul, PLEASE :) /end of minirant
 
OT:

I wonder what is keeping OpenGL from standardizing all these features. I am no expert, but are all the "recent" DX feature sets, at least for shading like SM2 and SM3, even implemented in OpenGL without the need for extensions yet?

I can imagine that for a game developer it's twice the work, having to do something like:

    /* branch on the driver's vendor string and pick a code path */
    const char *vendor = (const char *) glGetString(GL_VENDOR);

    if (strstr(vendor, "NVIDIA")) {
        /* this: take the NV_* extension path */
    } else if (strstr(vendor, "ATI")) {
        /* that: take the ATI_* extension path */
    } else {
        /* something else: a generic fallback */
    }

It would be more work than just targeting one standard like D3D, and maybe that is why MS is in control of PC gaming right now?
 
If you're following the OpenGL Pipeline Newsletters, you'll know that Jeremy Sandmel from Apple is chairing the ARB next-gen TSG, which is responsible for OpenGL Mt Evans.
 