Qualcomm Roadmap (2011-2012)

Nothing except atomics. The changes are cosmetic compared to the hardware changes we have seen since it first came out. Every API is years behind the hardware, except CUDA, which is not surprising.

Atomics are only required in the "Full" profile of OpenCL; they're still optional in the "Embedded" profile, even in 1.2.

As for 1.2, the most substantial feature addition seems to be printf as a built-in kernel function; the rest seems... yes, "cosmetic" is a good word for it.
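
To make that concrete, here's a minimal host-side sketch (plain C, error handling omitted; checking the 1.0-era atomics extension strings is my assumption about what an embedded driver would report) of what "optional" means in practice, next to a 1.2 kernel using the new printf:

```c
/* Minimal sketch, error handling omitted. On an Embedded-profile device
 * you probe for atomics before a kernel can rely on them; the 1.2 printf,
 * by contrast, is just there in kernel source. */
#include <stdio.h>
#include <string.h>
#include <CL/cl.h>

/* An OpenCL 1.2 kernel using the built-in printf. */
static const char *src =
    "__kernel void hello(void) {\n"
    "    printf(\"hello from work-item %u\\n\", (uint)get_global_id(0));\n"
    "}\n";

int main(void) {
    cl_platform_id plat;
    cl_device_id dev;
    char profile[64], exts[8192];

    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    clGetDeviceInfo(dev, CL_DEVICE_PROFILE, sizeof profile, profile, NULL);
    clGetDeviceInfo(dev, CL_DEVICE_EXTENSIONS, sizeof exts, exts, NULL);

    if (strcmp(profile, "EMBEDDED_PROFILE") == 0 &&
        strstr(exts, "cl_khr_global_int32_base_atomics") == NULL) {
        printf("no global 32-bit atomics; pick another algorithm\n");
    }
    (void)src; /* program build/enqueue boilerplate omitted for brevity */
    return 0;
}
```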
 
And in the mobile space they are well behind... however, they have the potential to go straight to number one as soon as the first CUDA-capable part comes to Tegra.

What makes you think they are going to lead with a GK110 derivative in Tegra? They are more likely to start off with a G92 derivative. Not exactly leadership material, IMO.
 
What makes you think they are going to lead with a GK110 derivative in Tegra? They are more likely to start off with a G92 derivative. Not exactly leadership material, IMO.

Well, I'm expecting some kind of Kepler setup... heavily cut down and modified, obviously. Maybe wishful thinking... however, I was pointing more towards software... NVIDIA would more than likely pay devs to make apps/games utilise CUDA, whereas OpenCL may take a long time to trickle down on Android.

I still think Apple might strike first for GPGPU compute... nearly all of their products contain hardware with that capability baked in, am I right? I mean, even the SGX 535 is the same basic uarch as the SGX 540, and that has already been demoed with impressive GPGPU compute.
 
Well, I'm expecting some kind of Kepler setup... heavily cut down and modified, obviously. Maybe wishful thinking... however, I was pointing more towards software... NVIDIA would more than likely pay devs to make apps/games utilise CUDA, whereas OpenCL may take a long time to trickle down on Android.

rpg.314 is right. Games utilize CUDA? How?

I still think Apple might strike first for GPGPU compute... nearly all of their products contain hardware with that capability baked in, am I right? I mean, even the SGX 535 is the same basic uarch as the SGX 540, and that has already been demoed with impressive GPGPU compute.

I would expect someone like Apple to actively support their own invention (OpenCL); while it would make sense, so far I don't see anything moving yet.
 
Nothing except atomics. The changes are cosmetic compared to the hardware changes we have seen since it first came out. Every API is years behind the hardware, except CUDA, which is not surprising.
OpenCL has been on a 1.5-year release cycle, so the next one should be due soon. Hopefully OpenCL 2.0 takes a big stride in exposing newer hardware features.

In terms of GPU compute for mobile graphics though, I thought Khronos finally adding compute shaders to OpenGL 4.3 was an admission that OpenGL-OpenCL interop wasn't an efficient match for graphics use cases. If that's the case, even when OpenCL 1.x Embedded drivers become widely available on mobile, it might not see much uptake in games. I wonder, then, if Khronos should add some form of relaxed compute shader support to OpenGL ES, akin to Compute Shader 4.x for DX10 GPUs, perhaps based on OpenCL 1.1 Embedded, which seems to be the common minimum for announced OpenGL ES 3.0 GPUs.
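
For contrast, this is roughly what the GL 4.3 path looks like on desktop: the compute work stays inside the GL driver, with no interop acquire/release round trip. A sketch in C (assumes a 4.3 context and an extension loader are already set up):

```c
/* Sketch: a trivial GL 4.3 compute shader dispatched straight from GL,
 * no CL-GL interop handoff. Assumes a 4.3 context + loader (e.g. GLEW). */
#include <GL/glew.h>

static const char *cs_src =
    "#version 430\n"
    "layout(local_size_x = 64) in;\n"
    "layout(std430, binding = 0) buffer Data { float v[]; };\n"
    "void main() {\n"
    "    uint i = gl_GlobalInvocationID.x;\n"
    "    v[i] *= 2.0;\n"            /* stand-in for real per-element work */
    "}\n";

GLuint make_compute_program(void) {
    GLuint sh = glCreateShader(GL_COMPUTE_SHADER);
    glShaderSource(sh, 1, &cs_src, NULL);
    glCompileShader(sh);
    GLuint prog = glCreateProgram();
    glAttachShader(prog, sh);
    glLinkProgram(prog);
    glDeleteShader(sh);
    return prog;
}

/* Usage: bind an SSBO to binding point 0, then
 *   glUseProgram(prog);
 *   glDispatchCompute(n / 64, 1, 1);
 *   glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
 * The buffer never leaves GL, which is exactly what the interop path lacks. */
```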

rpg.314 is right. Games utilize CUDA? How?
I seem to remember an nVidia statement a while ago that, given the more diverse nature of mobile, they plan on sticking with cross-platform solutions there, namely OpenCL over CUDA.

I would expect someone like Apple to actively support their own invention (OpenCL); while it would make sense, so far I don't see anything moving yet.
You would definitely think Apple would introduce OpenCL 1.1 Embedded in iOS 7, probably to help show off a Rogue-based A7 SoC while also supporting Series5XT USSE2 GPUs. I wouldn't be surprised if they decided not to bother with an SGX535/USSE driver, given those will be going off the market by the end of the year.

And in terms of my above comments on the possibility of an OpenGL ES compute shader: Apple tends to develop their own new OES extensions with every iOS version, so perhaps they would be a candidate to create an OES compute shader extension, especially with gaming being such an important aspect of iOS.
 
I still think Apple might strike first for GPGPU compute... nearly all of their products contain hardware with that capability baked in, am I right? I mean, even the SGX 535 is the same basic uarch as the SGX 540, and that has already been demoed with impressive GPGPU compute.

You don't want to do compute on Series5. Series6, however, is a different ball game.
 
OpenCL has been on a 1.5-year release cycle, so the next one should be due soon. Hopefully OpenCL 2.0 takes a big stride in exposing newer hardware features.
a) I wouldn't count OCL releases other than 1.0 as releases to begin with.
b) I am not expecting anything useful to come out of Khronos OCL anytime soon.

In terms of GPU compute for mobile graphics though, I thought Khronos finally adding compute shaders to OpenGL 4.3 was an admission that OpenGL-OpenCL interop wasn't an efficient match for graphics use cases. If that's the case, even when OpenCL 1.x Embedded drivers become widely available on mobile, it might not see much uptake in games. I wonder, then, if Khronos should add some form of relaxed compute shader support to OpenGL ES, akin to Compute Shader 4.x for DX10 GPUs, perhaps based on OpenCL 1.1 Embedded, which seems to be the common minimum for announced OpenGL ES 3.0 GPUs.
That would be a natural progression.
 
You don't want to do compute on Series5. Series6, however, is a different ball game.

Well, the demos shown a page back looked pretty fantastic to me... and that was an SGX 540.

No one doubts the latest and greatest is going to be more effective... but that doesn't mean the current gen is useless.
Besides, part of the attraction of targeting that uarch is the massive market it brings instantly... much more incentive for developers if money is to be made, versus a new niche market.
 
Well, the demos shown a page back looked pretty fantastic to me... and that was an SGX 540.

No one doubts the latest and greatest is going to be more effective... but that doesn't mean the current gen is useless.
Besides, part of the attraction of targeting that uarch is the massive market it brings instantly... much more incentive for developers if money is to be made, versus a new niche market.

It's not niche if Apple is doing it. It's pervasive.
 
I thought Apple was one of the first companies to push OpenCL; not necessarily producing products using it, but touting it as the way forward for GPGPU.
 
I think legacy is one of the reasons they may not be going to bigger screens and higher resolutions.

The big app advantage they have could be lost if developers don't support yet another resolution change.
 
I think legacy is one of the reasons they may not be going to bigger screens and higher resolutions.

The big app advantage they have could be lost if developers don't support yet another resolution change.
These are different types of legacy. Apple making slow, orderly changes to resolutions is because developer support for any screen changes is basically non-optional. If your app isn't updated to support Retina or widescreen in a timely manner you're going to get hammered in reviews and it inherently looks ugly. So Apple has to be careful with their stewardship there. But for OpenCL support, it really will be an optional feature for developers, so Apple doesn't have to be as worried about the state of the existing ecosystem.

Still, I think Apple will support Series5XT if they do introduce OpenCL support, since that one additional driver brings with it a huge user base of existing devices, plus the upcoming iPad Mini 2, iPod Touch 6, and supposed low-cost iPhone. When Apple originally introduced OpenCL 1.0 in OS X, they supported all possible GPUs back to the nVidia 8000 and ATI HD4000 series, shipping official drivers before nVidia and ATI themselves had official drivers in Windows, and also supported Intel CPUs down to SSE3, while Intel to this day only supports CPUs with SSE4.1 and up. So Apple is certainly willing to reach back and maximize user base to encourage adoption.
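
As a concrete illustration of that reach: enumerating what an OpenCL runtime exposes is only a few lines of C, and on a machine of that era you'd expect the list to come back with the CPU alongside a GeForce 8000-class GPU. A sketch, error handling omitted:

```c
/* Sketch, error handling omitted: list every OpenCL device the platform
 * exposes, which shows how far back a vendor's driver support reaches.
 * On OS X the header lives in the OpenCL framework; elsewhere <CL/cl.h>. */
#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

int main(void) {
    cl_platform_id plat;
    cl_device_id devs[16];
    cl_uint n, i;
    char name[256], ver[256];

    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_ALL, 16, devs, &n);
    for (i = 0; i < n; i++) {
        clGetDeviceInfo(devs[i], CL_DEVICE_NAME, sizeof name, name, NULL);
        clGetDeviceInfo(devs[i], CL_DEVICE_VERSION, sizeof ver, ver, NULL);
        printf("%s (%s)\n", name, ver); /* e.g. a CPU device plus GPU(s) */
    }
    return 0;
}
```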
 
You don't want to do compute on Series5. Series6, however, is a different ball game.

Depends what exactly you want to do with it; there are going to be quite a few cases where GPGPU might make sense, and even if, in a worst-case scenario, you wouldn't get a performance increase compared to the CPU, you'd still likely save in terms of power consumption. Series6 is a different ball game, but then again, not only compared to Series5.
 
rpg.314 is right. Games utilize CUDA? How?

FWIW, there's at least one PC desktop game using CUDA (non-PhysX): Just Cause 2. It uses CUDA GPGPU to do water/wave simulation.
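
I don't know what Avalanche's actual implementation looks like, but the usual shape of that kind of effect is a height-field wave update, one thread per grid cell. A hypothetical CUDA sketch:

```cuda
// Hypothetical sketch only, not Just Cause 2's actual code: a generic
// height-field wave update, the common shape of a GPGPU water simulation.
// Each cell's new height comes from its neighbours (discretized wave eq.).
__global__ void wave_step(const float *prev, const float *curr, float *next,
                          int w, int h, float damping)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x <= 0 || y <= 0 || x >= w - 1 || y >= h - 1) return;

    int i = y * w + x;
    // Neighbour average drives the wave; subtracting prev gives it momentum.
    float n = (curr[i - 1] + curr[i + 1] + curr[i - w] + curr[i + w]) * 0.5f
              - prev[i];
    next[i] = n * damping;
}

// Host side would ping-pong three buffers per frame, e.g.:
//   dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
//   wave_step<<<grid, block>>>(d_prev, d_curr, d_next, w, h, 0.99f);
```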


With OpenCL evolving at a faster pace and the GPGPU community starting to avoid CUDA at all costs, partly because of GCN's success on the AMD side, nVidia doesn't seem to be championing CUDA anymore. I doubt they would pay mobile game developers to use CUDA.

Furthermore, the situation is completely different from desktop GPUs 6 years ago. nVidia had the initiative with efficient GPGPU back then, so they took the liberty of shaping the ecosystem at will.
nVidia won't have the initiative on GPGPU for mobiles. In fact, they'll be quite late to the game, so taking the liberty of imposing their vendor-specific API is a bit unlikely IMO.
 
FWIW, there's at least one PC desktop game using CUDA (non-PhysX): Just Cause 2. It uses CUDA GPGPU to do water/wave simulation.


With OpenCL evolving at a faster pace and the GPGPU community starting to avoid CUDA at all costs, partly because of GCN's success on the AMD side, nVidia doesn't seem to be championing CUDA anymore. I doubt they would pay mobile game developers to use CUDA.

Furthermore, the situation is completely different from desktop GPUs 6 years ago. nVidia had the initiative with efficient GPGPU back then, so they took the liberty of shaping the ecosystem at will.
nVidia won't have the initiative on GPGPU for mobiles. In fact, they'll be quite late to the game, so taking the liberty of imposing their vendor-specific API is a bit unlikely IMO.

ltcommander.data caught immediately what I was pointing at and answered correctly. The "catch," in relative terms, is also in your last sentence.
 
ltcommander.data caught immediately what I was pointing at and answered correctly. The "catch," in relative terms, is also in your last sentence.
Finally found the nVidia quote on the subject that I remembered:

http://streamcomputing.eu/blog/2012-04-21/neil-trevett-on-opencl/

At 44:05 he states: “In the mobile space, I think, CUDA is unlikely to be widely adopted”, and explains: “A [third-]party API in the mobile industry doesn’t really meet market needs”. Then he continues with his vision for OpenCL: “I think OpenCL in the mobile is going to be fundamental to bring parallel computation to mobile devices” and then “and into the web through WebCL”.

Also interesting at 44:55: “In the end NVidia doesn’t really mind which API is used, CUDA or OpenCL. As long as you get to use great GPUs”. He ends with a smile, as “great GPUs” refers to NVidia’s, of course.

At 45:10 he lays out NVidia’s plans for HPC, before getting back to: “NVidia is going to support both [CUDA and OpenCL] in HPC. In mobile it’s going to be all OpenCL”.

At 45:23 he repeats his statement: “In the mobile space I expect OpenCL to be the primary tool”.
Of course, with future Project Denver family chips including both mobile and HPC parts, the line between HPC and mobile might not be as clear as he is saying. Still, it doesn't look like nVidia is going to force CUDA into mobile through their The Way It's Meant to be Played program.
 
With OpenCL evolving at a faster pace and the GPGPU community starting to avoid CUDA at all costs, partly because of GCN's success on the AMD side, nVidia doesn't seem to be championing CUDA anymore. I doubt they would pay mobile game developers to use CUDA.
In what universe is OpenCL evolving faster than CUDA?
 
Finally found the nVidia quote on the subject that I remembered:

http://streamcomputing.eu/blog/2012-04-21/neil-trevett-on-opencl/


Of course, with future Project Denver family chips including both mobile and HPC parts, the line between HPC and mobile might not be as clear as he is saying. Still, it doesn't look like nVidia is going to force CUDA into mobile through their The Way It's Meant to be Played program.

The chairman of the OpenCL working group should advocate for OpenCL.
http://en.wikipedia.org/wiki/Neil_Trevett

However, for the work I do, OpenCL's dual-source model is fundamentally unworkable, and so I hope CUDA comes to fruition on mobile devices.
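
For anyone who hasn't run into it: "dual source" means the kernel lives in a second language, shipped as a string and compiled at runtime, with arguments bound untyped by index. A sketch of what that looks like in C (error handling omitted); the single-source CUDA equivalent is noted in the comment:

```c
/* Sketch of the OpenCL dual-source model: the kernel is a string compiled
 * at runtime, and every argument is bound untyped, by index. In single-source
 * CUDA the same thing is roughly
 *   scale<<<n/64, 64>>>(d_buf, 2.0f, n);
 * in the same .cu file, type-checked by the same compiler. */
#include <CL/cl.h>

static const char *src =
    "__kernel void scale(__global float *buf, float k, int n) {\n"
    "    int i = get_global_id(0);\n"
    "    if (i < n) buf[i] *= k;\n"
    "}\n";

void run_scale(cl_context ctx, cl_command_queue q, cl_device_id dev,
               cl_mem buf, float k, cl_int n, size_t global)
{
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);   /* runtime compile */
    cl_kernel kern = clCreateKernel(prog, "scale", NULL);
    clSetKernelArg(kern, 0, sizeof buf, &buf);         /* untyped, by index */
    clSetKernelArg(kern, 1, sizeof k, &k);
    clSetKernelArg(kern, 2, sizeof n, &n);
    clEnqueueNDRangeKernel(q, kern, 1, NULL, &global, NULL, 0, NULL, NULL);
    clReleaseKernel(kern);
    clReleaseProgram(prog);
}
```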
 