OpenGL 4.3 (with compute shaders!)

New addition: Compute Shaders! Wha?

CL-GL interop is not exactly a substitute for something that exposes similar functionality yet lives in the same API and employs the same scheduling mechanism. This was a reasonably big advantage for DX CS, IMHO.
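For reference, the new single-API path in GL 4.3 looks roughly like this. A minimal sketch only: it assumes a 4.3 context is current, that tex is an existing width x height RGBA32F texture (dimensions multiples of 16 for simplicity), and all error/compile-log checking is omitted. Note there is no CL context creation or interop handshake anywhere.

```cpp
// Minimal sketch, not a complete program.
const char* src =
    "#version 430\n"
    "layout(local_size_x = 16, local_size_y = 16) in;\n"
    "layout(rgba32f, binding = 0) uniform image2D img;\n"
    "void main() {\n"
    "    ivec2 p = ivec2(gl_GlobalInvocationID.xy);\n"
    "    imageStore(img, p, vec4(1.0) - imageLoad(img, p));\n"  // e.g. invert the image
    "}\n";

GLuint cs = glCreateShader(GL_COMPUTE_SHADER);   // same shader-object machinery as GLSL
glShaderSource(cs, 1, &src, nullptr);
glCompileShader(cs);
GLuint prog = glCreateProgram();
glAttachShader(prog, cs);
glLinkProgram(prog);

glUseProgram(prog);
glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA32F);
glDispatchCompute(width / 16, height / 16, 1);        // queued like any draw call
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);  // before sampling 'tex' again
```

The dispatch goes into the same command stream as the draw calls, which is exactly the scheduling point above.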
 
True, but does that mean that OCL is a dead end and we'll see Khronos adding more features to compute shaders going forward? So now we have two ways of doing the same thing.

WRT CL, what the bloody hell? 6 years after G80 and all they can offer is G80+CLU. The hw is miles ahead of what CL is offering. Pathetic.
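To make the "two ways of doing the same thing" concrete, here is the same trivial scale-a-buffer kernel written both ways. Purely illustrative; the kernel, buffer, and uniform names are made up for the example.

```cpp
// OpenCL C kernel version.
const char* clKernel =
    "__kernel void scale(__global float* data, float k) {\n"
    "    data[get_global_id(0)] *= k;\n"
    "}\n";

// GLSL 4.30 compute shader version of the same thing.
const char* glslCompute =
    "#version 430\n"
    "layout(local_size_x = 64) in;\n"
    "layout(std430, binding = 0) buffer Data { float data[]; };\n"
    "uniform float k;\n"
    "void main() {\n"
    "    data[gl_GlobalInvocationID.x] *= k;\n"
    "}\n";
```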
 
But but but...extensions!!! and it's open!!!
 
About OCL, not really, but OCL is far more generalist and doesn't directly concern gaming and graphics in general; it is more flexible and aimed at "computing" in a broad sense ... OpenGL, for graphics and games, needs to bring a more suitable solution along with it for certain specific purposes.
 
Nonsense argument. Besides, I could be wrong, but a compute shader seems like a crippled version of a CL kernel.
 
but OCL is far more generalist and doesn't directly concern gaming and graphics in general

Part of OCL's problem is that it doesn't concern anything directly; it's more like a hodge-podge of who pulled in which direction when. Another big part is that it doesn't have a proper arbiter to rap people over the knuckles and actually give it some direction.
 
Obviously GPU compute is useful for a lot of things outside of graphics, but it's also important to note that it's really good for graphics too! In that regard compute shaders have been a major advantage (IMO) for D3D compared to OpenGL. In D3D, if I wanted to render to a texture and then use a compute shader to do some fancy maths on it, it was easy, since compute shaders are a first-class citizen and no interop is required. Or if I wanted to have a whole slew of shader code shared between a compute shader used for deferred rendering and a more standard forward-rendering pixel shader, it was also easy, since both shaders use the same language, compiler, and API.
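A rough sketch of that first pattern in D3D11, assuming ctx is an ID3D11DeviceContext*, cs a compiled ID3D11ComputeShader*, srv a view of the texture that was just rendered, and uav a view of the output resource; just to show that no interop step exists anywhere.

```cpp
// Same device, same command stream as the draw calls.
ctx->CSSetShader(cs, nullptr, 0);
ctx->CSSetShaderResources(0, 1, &srv);
ctx->CSSetUnorderedAccessViews(0, 1, &uav, nullptr);
ctx->Dispatch((width + 15) / 16, (height + 15) / 16, 1);

// Unbind so the resources can go back to the graphics pipeline afterwards.
ID3D11ShaderResourceView*  nullSrv = nullptr;
ID3D11UnorderedAccessView* nullUav = nullptr;
ctx->CSSetShaderResources(0, 1, &nullSrv);
ctx->CSSetUnorderedAccessViews(0, 1, &nullUav, nullptr);
```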
 
Where's my OpenGL 2 Lean & Mean?
Where's my OpenGL Longs Peak?

Always promises, never delivering...
 
It's way too little and it's too late.

What we need/want is a minimal standard command buffer API, standard texture layouts in memory, and good memory control too; give me an ISA and it will be even better (not required for now, a GPU-specific compiler is OK).
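Entirely hypothetical, but the kind of interface being asked for might look something like this. None of these types or functions exist in any real API; every name is invented purely to illustrate the level of control meant.

```cpp
#include <stddef.h>

struct gpu_memory;     // raw allocation; the app controls layout and lifetime
struct gpu_texture;    // texture with a documented, standard memory layout
struct gpu_pipeline;   // shaders + state baked up front by a GPU-specific compiler
struct gpu_cmdbuf;     // commands recorded explicitly, then submitted as a batch

gpu_cmdbuf* gpu_cmdbuf_begin(void);
void gpu_cmd_bind_pipeline(gpu_cmdbuf*, gpu_pipeline*);
void gpu_cmd_bind_memory(gpu_cmdbuf*, unsigned slot, gpu_memory*, size_t offset);
void gpu_cmd_draw(gpu_cmdbuf*, unsigned first_vertex, unsigned vertex_count);
void gpu_cmd_dispatch(gpu_cmdbuf*, unsigned x, unsigned y, unsigned z);
void gpu_cmdbuf_submit(gpu_cmdbuf*);   // explicit scheduling, no hidden driver work
```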

Alternatively, go higher level and make Chapel (http://chapel.cray.com/) work well with GPU/heterogenous compute.
 
Cool. So can we please have all the CL functionality baked into GL compute shaders, pretty please?;)
 
Laughing at someone's opinion without explaining why it's funny isn't much of a contribution.

Because you hear the same thing over and over. "Extensions are great! Now developers and end users don't have to wait for all the red tape! Everyone wins!"

But in reality, they just end up making a mess. Developers have to maintain a more complex code base (what if IHVs X and Y support extension W but IHV Z doesn't? What if IHVs X, Y, and Z all implement extension W differently? What if it changes again when it's officially added to the core API?) for usually minor gains. Things are even worse in the embedded market. Extensions could work if we could somehow force all the relevant IHVs to agree to implement the same feature in the same manner in the same timeframe.

I'd much prefer D3D10+'s all-or-nothing approach. I can perform essentially one check and immediately know what the card is or is not capable of. I realize this is more of a personal preference (and my initial post was probably a bit too strong), but extensions in their current form will never make sense to me.
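To illustrate the contrast: the extension name below is just an example, and device is assumed to be an existing ID3D11Device*.

```cpp
#include <cstring>

// GL: probe for each extension individually, then keep per-vendor code paths around.
bool hasComputeExt = false;
GLint n = 0;
glGetIntegerv(GL_NUM_EXTENSIONS, &n);
for (GLint i = 0; i < n; ++i) {
    const char* ext = reinterpret_cast<const char*>(glGetStringi(GL_EXTENSIONS, i));
    if (std::strcmp(ext, "GL_ARB_compute_shader") == 0) hasComputeExt = true;
}

// D3D10+: essentially one check; the feature level implies the whole capability set.
D3D_FEATURE_LEVEL level = device->GetFeatureLevel();
bool csGuaranteed = (level >= D3D_FEATURE_LEVEL_11_0);   // CS 5.0 is mandatory here
```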
 