Will OpenGL ES 3.0 prove to be a short-lived standard?

jedibeeftrix

The mobile industry took a long time to shake off the legacy of ES 1.0's fixed-pipeline hardware, but in moving to ES 2.0, with its streamlined shader-based model, I wonder if it wasn't a little too successful:

Perhaps it lasted too long, pushing back the arrival of a successor API until the hardware evolution coming from the desktop had moved on too far...

With ES 3.0 arriving in summer 2012, no platform support until Android 4.3 in summer 2013, and various DX11-compliant SoCs arriving in summer 2014, I wonder if we won't all turn around and ask: what was the point?

We are but a few quarters away from Tegra 4, Adreno 420 and Rogue 6XT sporting full DX11 compliance, with features such as tessellation that were deemed too die-size expensive to be worth including in ES 3.0.

This at a time when the vast majority of android hardware still has an ES 2.0 SoC, a pre-Android 4.3 OS, or both!

Thus ES 3.0 is going to be concertinaed between the longevity of its predecessor and the arrival of a runaway success from the desktop world:

After the failure of DX10 with Vista, pent-up demand for Windows 7 saw the rapid adoption of DX11, which has continued with Windows 8 and is now set to be a console standard for the next five years with the Xbone (and the PS4 too, given that it shares the same hardware feature set).

So to bring this ramble back to a more focused question:

1. Do you expect to see OpenGL ES 3.0 rapidly superseded by a new Khronos mobile API based on OpenGL 4.4, providing parity with the DX11 hardware feature set, before it sees mass adoption?
2. If yes, do you believe that OpenGL ES 4.0 will be a superset of 3.0, with the predecessor living on in non-tablet/phone hardware as a low-cost option (fridge/car/hi-fi displays, etc.)?
 
We had this discussion about ES 3.0 a while back. I said it won't get mainstream support for a long time, simply because it's not economically sound to target a cutting-edge group of 30-40 million users over the billion users on existing hardware. I feel the same applies to ES 3.0's successors.

DX11, I feel, is somewhat irrelevant to the mobile market unless Microsoft makes a dramatic turnaround. Maybe you will see some nice tech demos or a few exclusive games, but overall the majority of mobile app developers are still focusing on iOS, with Android coming in second.

Keep in mind that the majority of games are simple; only a handful focus on graphics, and even there we come back to my original point: would a developer prefer using new tech like tessellation for the 30 million users whose hardware supports it, or make their game work on the lowest common denominator? For me it's an easy answer.
 
Cheers chaps, I must have missed that bit of CES coverage, and I am surprised to see an announcement so quickly.

To rephrase the original questions:

1. Do you expect to see OpenGL ES 3.0 rapidly superseded by a new Khronos mobile API, and thus never see mass adoption in games etc.?
2. Now that we know ES 4.0 will be a superset of ES 3.0, do we see the predecessor living on in non-tablet/phone hardware as a low-cost option (fridge/car/hi-fi displays, etc.)?
 
It will take time, and a lot of it, for even ES 3.0 to become more widely adopted. Performance of current high-end ES 3.0 ULP GPUs should still be in the single-digit framerate ballpark, and it will take quite some time until a healthy portion of ULP GPUs reach playable framerates with ES 3.0.

And no, I obviously don't mean the meaningless case where an ISV creates an ES 2.0 game with one or a couple of singled-out ES 3.0 effects and calls it an "ES 3.0 game" for marketing purposes.
 
ES 3.0 is mostly about making things more efficient. It's not about "new effects". ES 3.0 games should run faster and be more battery efficient compared to ES 2.0 games.

Rendering efficiency / memory usage improvements in ES 3.0:
- occlusion queries
- instanced rendering (see the sketch below)
- MRT support
- transform feedback
- R/RG textures
- immutable textures
- 2D array textures
- NPOT textures
- Lots of smaller texturing improvements (swizzles, mip level clamp, seamless cube map, sampler objects)
- Wider selection of guaranteed vertex formats (saves memory BW, also a compatibility improvement)

Compatibility improvements:
- Standardized ETC2 / EAC texture compression (biggest help for Android)
- Much wider selection of guaranteed ARB texture formats (biggest help for Android)

There are no new graphics "features" supported (such as tessellation or geometry shaders) that would be cost inefficient for mobile platforms. ES 3.0 is purely an improvement for efficiency and compatibility.
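
To make the instancing item above concrete, here is a minimal hypothetical sketch of ES 3.0 instanced rendering; the attribute index and the names (meshVAO, indexCount, instanceCount) are illustrative assumptions, not from this thread:

```c
/* Hypothetical sketch: one ES 3.0 draw call submits many copies of a
 * mesh, where core ES 2.0 needed one call (and one round of driver
 * validation) per copy. */
#include <GLES3/gl3.h>

void draw_instanced(GLuint meshVAO, GLsizei indexCount, GLsizei instanceCount)
{
    glBindVertexArray(meshVAO);
    /* Per-instance data (e.g. a position offset) is assumed to live in
     * vertex attribute 3; a divisor of 1 advances it once per instance. */
    glVertexAttribDivisor(3, 1);
    glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT,
                            (const void *)0, instanceCount);
    glBindVertexArray(0);
}
```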
 
The ES Next feature list suggests it is also all about increasing rendering efficiency.

Indirect draw calls can be used to feed the GPU with its own data (bypassing inefficient CPU processing, and often reducing the number of costly draw calls). Compute shaders can be used to do lighting and post-processing more efficiently than pure pixel shaders (saving GPU and CPU cycles and reducing data transfer to RAM -> battery savings). Texture gather also increases the efficiency of some algorithms (it saves TMU bandwidth and ALU instructions by using fixed-function hardware instead of programmable hardware to do the same thing).
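
As a hedged sketch of the indirect-draw idea, using the ES 3.1 API where this functionality eventually landed (cmdBuffer and vao are illustrative names; the compute shader that fills the buffer is not shown):

```c
/* Hypothetical ES 3.1 indirect draw: the GPU (e.g. a compute shader
 * doing culling) has already written a DrawArraysIndirectCommand into
 * cmdBuffer, so the CPU issues the draw without reading anything back. */
#include <GLES3/gl31.h>

typedef struct {
    GLuint count;               /* vertices to draw, written on the GPU */
    GLuint instanceCount;       /* e.g. number of survivors after culling */
    GLuint first;
    GLuint reservedMustBeZero;  /* must be zero per the ES 3.1 spec */
} DrawArraysIndirectCommand;

void draw_gpu_fed(GLuint vao, GLuint cmdBuffer)
{
    glBindVertexArray(vao);
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, cmdBuffer);
    /* Read the command straight from GPU memory at offset 0. */
    glDrawArraysIndirect(GL_TRIANGLES, (const void *)0);
}
```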
 
ES 3.0 is mostly about making things more efficient. It's not about "new effects". ES 3.0 games should run faster and be more battery efficient compared to ES 2.0 games.

"Should" run faster but in the end will they?
 
Thank you again for the responses, all.

When we are talking about high-end smartphone/tablet SoCs, is it [still] correct to describe geometry shaders and tessellation (i.e. the delta in hardware features between ES 3.0 and DX11) as "cost inefficient"?

I ask this in light of the fact that we are but quarters away from receiving Tegra K1, Adreno 420 and Rogue 6XT, all sporting full DX11 compliance.

Yes, I fully accept that SoCs designed for the £50-£200 bracket are a different matter, just as they are today, but this comes back to my question of ES 3.0 coexisting as a streamlined subset of high-cost mobile graphics...
 
I think you meant to say Tegra K1. As for Rogue, I did hear there would be versions that had DX11 support, but I haven't really heard anything about those in a while. All current Rogue models are DX10 capable at most.
 
Can't tessellation improve efficiency too? In the sense that it reduces vertex input data for higher-level structures and can help prevent the final geometry load from being higher than it needs to be. But I don't know how this balances against the cost of the tessellation itself.

I can understand not making it a core requirement for ES Next, but you'd think it'd be prudent to offer extensions for it.
 

Yes, but in more or less the same manner as some of the additional ES 3.0 functionalities vs. ES 2.0. The likeliest scenario is that something that was nearly impossible on hardware X without advanced capability N becomes possible, but at quite low performance. No doubt you get higher efficiency with N; however, it will still take eons before hardware can use it at playable performance. It's the exact same story as on the desktop for decades now; there's no reason why ULP mobile should be any sort of exception.
 
Tessellation improves performance when it's used to "compress" triangle meshes or to emit triangles only where needed (based on viewing angle and distance). However this kind of tessellation is quite hard to achieve, since it affects the whole content pipeline (including 3d modeling software, which game developers cannot modify themselves). Artists also need to learn new modeling techniques to use tessellation efficiently (with LODs, etc).

Brute-force tessellation by programmers has thus been the more popular method. It can be used to smooth surfaces, but it is very inefficient (even for modern PC and console hardware).

Most OpenGL ES 3.0 features, on the other hand, are very easy for programmers to integrate on their own, often requiring only a few lines of code to get performance improvements.

Examples: tighter texture and vertex formats (save bandwidth = save battery), instancing (saves lots of CPU cycles needed to validate draw calls), array textures (reduce CPU-side state changes and validation cost), NPOT textures (save memory), mip level clamp (reduces texture resource creation CPU cost when streaming data), occlusion queries (reduce GPU cost of draw calls).
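
To make the occlusion query example concrete, a minimal hypothetical ES 3.0 sketch (draw_bounding_box and draw_full_object are illustrative placeholders, and a real engine would read the result a frame later rather than stalling):

```c
/* Hypothetical ES 3.0 occlusion test: draw a cheap bounding box under a
 * conservative query with writes disabled, then skip the expensive
 * object if no samples passed the depth test. */
#include <GLES3/gl3.h>

extern void draw_bounding_box(void); /* illustrative placeholder */
extern void draw_full_object(void);  /* illustrative placeholder */

void draw_with_occlusion_test(GLuint query)
{
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); /* test only */
    glDepthMask(GL_FALSE);
    glBeginQuery(GL_ANY_SAMPLES_PASSED_CONSERVATIVE, query);
    draw_bounding_box();
    glEndQuery(GL_ANY_SAMPLES_PASSED_CONSERVATIVE);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);

    GLuint anySamplesPassed = GL_TRUE;
    /* Synchronous read shown for brevity; this stalls the pipeline. */
    glGetQueryObjectuiv(query, GL_QUERY_RESULT, &anySamplesPassed);
    if (anySamplesPassed)
        draw_full_object();
}
```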

MRT support is the only debatable change. However, there are already games that use deferred rendering techniques on mobile platforms and run at 60 fps. These games no longer need to render their geometry twice (saving lots of GPU cycles and thus battery life). However, MRT support might also mean that more games use deferred rendering, and that's not always a positive thing for battery consumption (as it can increase bandwidth requirements). Deferred rendering can be quite hard to implement efficiently (especially without compute shaders). Still, it's often the most efficient technique if you want lots of local light sources (and that is the case for mobile games as well). Deferred shading is the most common lighting choice in last-generation (PS3 and Xbox 360) games, and mobile devices are reaching GPU performance parity with those consoles. Thus using similar (efficient) rendering techniques is the key to the best GPU performance (and thus optimal battery usage).
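
A minimal hypothetical sketch of the MRT point above, assuming a two-attachment G-buffer (gbufferFBO and the attachment layout are illustrative, not from the thread):

```c
/* Hypothetical ES 3.0 MRT setup: route fragment shader outputs 0 and 1
 * to two color attachments, so a deferred renderer fills its albedo and
 * normal targets in a single geometry pass instead of two. */
#include <GLES3/gl3.h>

void bind_gbuffer(GLuint gbufferFBO)
{
    static const GLenum targets[2] = {
        GL_COLOR_ATTACHMENT0, /* e.g. albedo */
        GL_COLOR_ATTACHMENT1  /* e.g. packed normals */
    };
    glBindFramebuffer(GL_FRAMEBUFFER, gbufferFBO);
    glDrawBuffers(2, targets);
}
```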
 
By the way, the first GFXBench 3.0 results are slowly appearing in Kishonti's database. Results for the A7/S800 are as predicted by NV's K1 presentation slides. Since I can't download the application yet, could anyone explain what each of the tests stands for? For example, what does the "driver overhead" test measure?
 
I kind of expected a difference in frame rate between the iPad Air and the iPhone 5S. Not sure why, as I can't recall ever having read about a difference in GPU clock speed.
 

The frequency should be the same between the two (≥450 MHz).

All: is it just me, or are the results more or less in line with each GPU's GFLOPs?
 