OpenGL ES 3.0?

I was curious whether the PowerVR 5xx-series unified-shader cores are capable of operating as geometry shaders, just as they can for pixel and vertex shader operations?

From an inexpert outsider's view, that would appear to be the biggest architectural hitch, though not quite the same as requiring a hardware implementation of a tessellation engine were DX11 the target.
Geometry shaders are certainly a big feature. Another fundamental feature of DX10 was "unlimited"-length shader programs, which pretty much required an instruction cache in the shaders. Does anyone know if OpenGL ES 3 removes the restriction on shader length?
 
I'm pretty sure occlusion queries aren't in the ES2 spec at all.
Whoops, you're right, I just had a look. Surprising, since they've been available on desktop for a decade, though it took NVIDIA more than a year to get theirs working correctly.
I wonder why not? (Conspiracy cap on.) Does PowerVR have a large sway over what's in the spec?
 
Thank god for performance counter extensions ;)
Which ones?
I've tried this on PowerVR, and from reading the spec it will only give a true/false answer, not the number of samples passed.

i.e. it's like the original occlusion test spec:
http://www.opengl.org/registry/specs/HP/occlusion_test.txt
I'm wanting something like
http://www.opengl.org/registry/specs/NV/occlusion_query.txt
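For reference, a minimal sketch of the difference (untested on SGX, and assuming the extension in question is GL_EXT_occlusion_query_boolean; drawLightProxy() is a placeholder for the app's own draw call, and the extension entry points would normally be fetched via eglGetProcAddress):

```cpp
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>   // GL_EXT_occlusion_query_boolean entry points

extern void drawLightProxy();   // hypothetical: renders the light's test geometry

void testLightVisibility()
{
    GLuint query = 0;
    glGenQueriesEXT(1, &query);

    // ES-style boolean query: answers only "did ANY sample pass the depth test?"
    glBeginQueryEXT(GL_ANY_SAMPLES_PASSED_EXT, query);
    drawLightProxy();
    glEndQueryEXT(GL_ANY_SAMPLES_PASSED_EXT);

    GLuint anyVisible = 0;
    glGetQueryObjectuivEXT(query, GL_QUERY_RESULT_EXT, &anyVisible);   // GL_TRUE / GL_FALSE

    // Desktop NV/ARB-style counted query -- NOT available in ES 2.0:
    //   glBeginQuery(GL_SAMPLES_PASSED, query); ...
    //   glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samplesPassed);
    // which is what a "% of light visible" estimate would need.

    glDeleteQueriesEXT(1, &query);
}
```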
 
Halti doesn't actually support geometry shaders. I'm not sure I can comment on whether SGX supports Halti, but since SGX544/554 is a superset of DX9_3, there's not that much missing if it doesn't (in which case how much of that gets exposed depends on extensions).

I did not know that, thank you. Why, though? Geometry shaders would seem to be the core functionality of what the DX10/OpenGL 3+ era had to offer, alongside more complex shader operations.

As an aside, Phoronix is saying E3/SIGGRAPH:
http://www.phoronix.com/scan.php?page=news_item&px=MTEwNzk
 
Which ones?
I've tried this on PowerVR, and from reading the spec it will only give a true/false answer, not the number of samples passed.

i.e. it's like the original occlusion test spec:
http://www.opengl.org/registry/specs/HP/occlusion_test.txt
I'm wanting something like
http://www.opengl.org/registry/specs/NV/occlusion_query.txt
I've successfully used GL_AMD_performance_monitor. It's a very generic performance-counter API that some ES parts and software stacks use to expose very detailed counters, fragment-counted depth checks included. If you're interested, check any of the unit-test apps from the test-es project in my signature for an example of using the extension.
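In case it helps, here's a minimal sketch of the extension's enumerate/select/begin/end/read flow. The entry points normally come from eglGetProcAddress on ES, and which group/counter maps to "fragments that passed the depth test" is vendor-specific, so nothing here is PowerVR-specific; a real app would match group/counter names via glGetPerfMonitor*StringAMD:

```cpp
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include <vector>

void samplePerfCounters()
{
    // 1. Enumerate counter groups exposed by the driver.
    GLint numGroups = 0;
    glGetPerfMonitorGroupsAMD(&numGroups, 0, nullptr);
    if (numGroups == 0) return;
    std::vector<GLuint> groups(numGroups);
    glGetPerfMonitorGroupsAMD(&numGroups, numGroups, groups.data());

    // 2. Enumerate counters in the first group (pick the right one by name in real code).
    GLint numCounters = 0, maxActive = 0;
    glGetPerfMonitorCountersAMD(groups[0], &numCounters, &maxActive, 0, nullptr);
    std::vector<GLuint> counters(numCounters);
    glGetPerfMonitorCountersAMD(groups[0], &numCounters, &maxActive,
                                numCounters, counters.data());

    // 3. Create a monitor and enable one counter on it.
    GLuint monitor = 0;
    glGenPerfMonitorsAMD(1, &monitor);
    glSelectPerfMonitorCountersAMD(monitor, GL_TRUE, groups[0], 1, &counters[0]);

    // 4. Bracket the draw calls you want to measure.
    glBeginPerfMonitorAMD(monitor);
    // ... issue draws here ...
    glEndPerfMonitorAMD(monitor);

    // 5. Poll for availability, then read the result blob.
    GLuint available = 0;
    GLint bytesWritten = 0;
    glGetPerfMonitorCounterDataAMD(monitor, GL_PERFMON_RESULT_AVAILABLE_AMD,
                                   sizeof(available), &available, &bytesWritten);
    if (available) {
        GLuint resultSize = 0;
        glGetPerfMonitorCounterDataAMD(monitor, GL_PERFMON_RESULT_SIZE_AMD,
                                       sizeof(resultSize), &resultSize, &bytesWritten);
        std::vector<GLuint> result(resultSize / sizeof(GLuint));
        glGetPerfMonitorCounterDataAMD(monitor, GL_PERFMON_RESULT_AMD,
                                       resultSize, result.data(), &bytesWritten);
        // The result blob is a stream of (group, counter, value) records.
    }
    glDeletePerfMonitorsAMD(1, &monitor);
}
```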
 
Thanks darkblu, I wasn't aware of this extension,
though http://code.google.com/p/test-es/downloads/list looks to be empty?
But never mind; from glancing at the spec, it seems pretty straightforward to use.

Any idea how soon the results are typically ready? i.e. a rough idea.
E.g. I'm wanting to use it like so:

draw scene
draw light with occlusion test
..
draw transparent stuff
check to see what % of the light is visible, so I can draw a lens flare

I suppose I could always delay a frame; as I'll be lerping the result anyway, this ain't a biggie.
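For what it's worth, a rough sketch of that delay-a-frame pattern using the boolean query interface. The draw* functions and query setup are placeholders, and with GL_ANY_SAMPLES_PASSED you only get visible/not-visible, so the flare intensity lerps toward 0 or 1 rather than a true coverage %; a counted query or a perf counter would be needed for the latter:

```cpp
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

extern void drawScene();
extern void drawLightProxy();
extern void drawTransparent();
extern void drawLensFlare(float intensity);

static GLuint lightQuery[2];        // created at init with glGenQueriesEXT(2, lightQuery)
static int    frameIndex = 0;
static float  flareIntensity = 0.0f;

void renderFrame()
{
    drawScene();

    // Issue this frame's query around the light's proxy geometry.
    glBeginQueryEXT(GL_ANY_SAMPLES_PASSED_EXT, lightQuery[frameIndex]);
    drawLightProxy();
    glEndQueryEXT(GL_ANY_SAMPLES_PASSED_EXT);

    drawTransparent();

    // Read *last* frame's result; by now it should be ready, so no stall.
    static bool firstFrame = true;   // nothing to read on the very first frame
    if (!firstFrame) {
        const int prev = frameIndex ^ 1;
        GLuint available = 0;
        glGetQueryObjectuivEXT(lightQuery[prev], GL_QUERY_RESULT_AVAILABLE_EXT, &available);
        if (available) {
            GLuint anyVisible = 0;
            glGetQueryObjectuivEXT(lightQuery[prev], GL_QUERY_RESULT_EXT, &anyVisible);
            const float target = anyVisible ? 1.0f : 0.0f;
            flareIntensity += (target - flareIntensity) * 0.2f;   // lerp, as described above
        }
    }
    firstFrame = false;

    drawLensFlare(flareIntensity);
    frameIndex ^= 1;
}
```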
 
Qualcomm's Adreno 320 slides. Developer hardware is now out, and consumer hardware should be out later this year.

http://www.anandtech.com/Gallery/Album/2186#5

OpenGL ES 3.0 "Halti" support:
- Buffer-centric design (pixel/uniform/frame buffer objects)
- Significantly reduces cross-platform feature variability for programmers
- Occlusion queries
- Instancing
- MRT support
- Texture compression built into the core specification (ETC2/EAC)

Finally we get a more unified API and cross-platform texture compression formats. MRT support, instancing and uniform buffer objects are excellent additions as well. OpenGL ES is finally on par with the DirectX 9_3 feature level. This is excellent news! :)
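To show what two of those core additions look like in practice, here's a minimal ES 3.0 sketch of uniform buffer objects plus instancing. Shader compilation and vertex setup are elided; "program", "vao", "ubo" and the block name "PerDrawConstants" are assumptions, not anything from the slides:

```cpp
#include <GLES3/gl3.h>

void drawInstancedWithUBO(GLuint program, GLuint vao, GLuint ubo, int instanceCount)
{
    // Uniform buffer object: route the shader's uniform block to binding point 0.
    GLuint blockIndex = glGetUniformBlockIndex(program, "PerDrawConstants");
    glUniformBlockBinding(program, blockIndex, 0);
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);

    glUseProgram(program);
    glBindVertexArray(vao);

    // Per-instance attribute: advance attribute 3 once per instance.
    glVertexAttribDivisor(3, 1);

    // Instancing: one draw call, many instances.
    glDrawArraysInstanced(GL_TRIANGLES, 0, 36, instanceCount);
}
```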
 
Someone will have to explain to me one of these days why we still need OpenGL ES. Seems like OpenGL 3.x (core, not compatibility) should be easy enough to support.
 
And what is good?
DirectX 10 and beyond. All legacy crap was removed, and we got a nice new set of features that have close to 1:1 mapping to hardware functionalities: command buffers, constant buffers, state blocks, separate resource buffers and views, etc. DirectX 11.1 also removed constant buffer size limits and allowed partial updates/binding (we can finally allocate a big buffer of data for constants and manage it ourselves, just like on consoles).

The best thing in DirectX (beyond 10.0) is that it guarantees you that all GPUs supporting the API implement all the features, and implement the features in exactly the same way. Your code works perfectly on all supported GPUs, and you do not need to code multiple paths for different hardware. That's a big deal, since there's so many different PC configurations (3 GPU manufacturers, each having 3-4 generations of hardware currently in widespread use).
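A rough sketch of the big-constant-buffer pattern described above, on the API side. The function and variable names are my own; ctx1 is assumed to be an ID3D11DeviceContext1 obtained via QueryInterface from the immediate context, and bigCB a large dynamic constant buffer the app fills incrementally (D3D11.1 also permits Map with D3D11_MAP_WRITE_NO_OVERWRITE on constant buffers, which 11.0 did not):

```cpp
#include <d3d11_1.h>

// Bind a per-draw window of one big dynamic constant buffer (D3D11.1 only).
void bindConstantWindow(ID3D11DeviceContext1* ctx1, ID3D11Buffer* bigCB,
                        UINT byteOffset, UINT byteSize)
{
    // Offsets and sizes are given in shader constants (16 bytes each) and
    // must be multiples of 16 constants, i.e. 256-byte aligned.
    UINT firstConstant = byteOffset / 16;
    UINT numConstants  = byteSize / 16;
    ctx1->VSSetConstantBuffers1(0, 1, &bigCB, &firstConstant, &numConstants);
}
```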
 
DirectX 10 and beyond. All legacy crap was removed, and we got a nice new set of features that have close to 1:1 mapping to hardware functionalities: command buffers, constant buffers, state blocks, separate resource buffers and views, etc. DirectX 11.1 also removed constant buffer size limits and allowed partial updates/binding (we can finally allocate a big buffer of data for constants and manage it ourselves, just like on consoles).
I'm certainly not a specialist, but my understanding is that OpenGL has feature parity with the latest D3D, and that one can use the core profile, which makes the API much simpler.

The best thing in DirectX (beyond 10.0) is that it guarantees you that all GPUs supporting the API implement all the features, and implement the features in exactly the same way. Your code works perfectly on all supported GPUs, and you do not need to code multiple paths for different hardware. That's a big deal, since there's so many different PC configurations (3 GPU manufacturers, each having 3-4 generations of hardware currently in widespread use).
According to a friend working in a game company this is simply not true: due to performance issues and bugs in drivers they need multiple shaders and paths for different GPUs. Is he wrong?
 
I'm certainly not a specialist, but my understanding is that OpenGL has feature parity with the latest D3D, and that one can use the core profile, which makes the API much simpler.
It's feature parity in the sense that all the shader types, all manner of texture lookups, etc. are there in GL.

The bind-to-use model of GL is horrible, though.
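A tiny illustration of what "bind to use" means in practice (myTexture is a placeholder): even changing a texture parameter requires binding the object first, which disturbs whatever was bound, so robust code ends up saving and restoring bindings everywhere.

```cpp
#include <GLES2/gl2.h>

void setLinearFiltering(GLuint myTexture)
{
    GLint previous = 0;
    glGetIntegerv(GL_TEXTURE_BINDING_2D, &previous);   // remember current binding

    glBindTexture(GL_TEXTURE_2D, myTexture);           // bind just to edit object state
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    glBindTexture(GL_TEXTURE_2D, (GLuint)previous);    // restore to avoid side effects
}
```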
 