OMAP4 & SGX540

Yeah, they massively undershot on graphics; the OMAP4 series should have shipped with an SGX543 @ 200-300MHz.

What makes you think that a SGX543@200-300MHz would have made a massive difference compared to:

OMAP4430 = SGX540@305MHz
OMAP4460 = SGX540@384MHz
OMAP4470 = SGX544@384MHz

While it's true that they could have been more aggressive with GPU investments for the OMAP4 family (something like an MP2 at 200MHz or more), it wouldn't have come for free either. Not everyone is Apple, where the volumes they deal in make super-large SoCs a non-issue.

Going by TI's own early-released GLBenchmark2.5 scores, the OMAP4470 scores 19 fps for the time being, while (I assume) the iPad3 scores 43 fps and their own OMAP5230 46 fps.
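Those quoted scores can be turned into rough relative-performance ratios; a minimal sketch using only the fps figures from the post above (device labels as quoted, not independently verified):

```python
# GLBenchmark2.5 fps figures as quoted above; treat them as early,
# unverified numbers rather than final results.
scores = {"OMAP4470": 19.0, "iPad3 (assumed)": 43.0, "OMAP5230": 46.0}

baseline = scores["OMAP4470"]
for name, fps in scores.items():
    # Ratio versus the OMAP4470 as the 1.00x baseline.
    print(f"{name}: {fps / baseline:.2f}x the OMAP4470")
```

Which works out to roughly 2.3x for the iPad3 figure and 2.4x for the OMAP5230 figure, i.e. both sit in a different performance class.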

I really thought OMAP4 was going to be the generation leader with its dual-channel memory, which was unique compared to its direct competition, at least till Exynos blew the bloody doors off.

It seems they have undershot the graphics again, if the 5 series isn't going to be out till 2013...
Why hamper it again with last-generation graphics? Does it support LPDDR3? Samsung has the right idea, although they could have really done with a couple of A5s or M3s to make the 4210 perfect; likewise the 4412 could have done with a 28nm Atheros LTE baseband and 2GB of RAM to make that perfect.
For the time being, the only SoC manufacturer staying at the forefront of execution on multiple levels is Qualcomm. Krait, LTE, Adreno 3x0; do I need to say more? And after that, just about everyone involved in the market pretends to be surprised that Qualcomm cashes in a gazillion smartphone design wins.

That said, I don't see any Mali400MP4 or Adreno320 beating a SGX543MP4 yet in terms of performance. If IMG hasn't created a slouch with Rogue (which I consider unlikely), every competitor should be rubbing their hands over the time it'll take for any Rogue to show up in final devices.

Here's another tidbit as food for thought: http://www.fudzilla.com/home/item/27538-driver-certification-issue-for-arm-customers

Otherwise there are predictable "buts" for about everything in that market, and no one can have it all; at least not yet. They all have their own advantages and disadvantages. It's not that TI doesn't care about technology evolution at all, rather the vast opposite: http://www.xbitlabs.com/news/other/...rogeneous_System_Architecture_Foundation.html
 
Well, forget the SGX544 because that didn't exist when the BlackBerry PlayBook was released, but yes, a modern SGX543 @ 300MHz would have been way more capable than the SGX540, at least compared to its competitors.

Just as we discussed some time ago, NVIDIA is ahead with Windows drivers; this is the only short-term thing that can hamper Qualcomm (well, if we exclude the 28nm problems), who would have completely dominated the market had it not been for TSMC.

I can see the Adreno 320 beating the SGX543MP4 with the correct clock speed and drivers; it also has advantages in a newer ISA, power consumption, and likely OpenCL performance.
 
Qualcomm does have one ace in the hole: that awesome on-die 28nm LTE/DC-HSDPA baseband (dual core only).

They are the only company that can offer a solution like that in the next 12 months or so, which gives them a massive advantage.

They also own most of the CDMA IP.
 
Well, forget the SGX544 because that didn't exist when the BlackBerry PlayBook was released, but yes, a modern SGX543 @ 300MHz would have been way more capable than the SGX540, at least compared to its competitors.

No, it wouldn't have been way more capable; give or take, any possible difference would have played out with a SGX540@384MHz in the 4460.

Just as we discussed some time ago, NVIDIA is ahead with Windows drivers; this is the only short-term thing that can hamper Qualcomm (well, if we exclude the 28nm problems), who would have completely dominated the market had it not been for TSMC.
It's not a problem for Qualcomm alone; pretty much none of the other IHVs involved in the small-form-factor market have as big and experienced a driver team as NVIDIA does. Qualcomm however is dominating the market either way; in any contrary case, show me another SoC manufacturer that yields as many design wins on a yearly basis.

I can see the Adreno 320 beating the SGX543MP4 with the correct clock speed and drivers; it also has advantages in a newer ISA, power consumption, and likely OpenCL performance.
And what exactly makes you think that IMG, ARM or anyone else cannot increase frequencies and/or the number of cores and improve their drivers in the future? For some strange reason the ARM Mali400MP4 saw a more than decent driver performance increase, and moving from 45 to 32nm took the frequency from 266 to 440MHz. They all cook with water.
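The Exynos numbers above imply a sizeable gain from the shrink alone; a quick back-of-the-envelope sketch (my arithmetic, assuming GPU throughput scales linearly with clock, which ignores bandwidth and driver effects):

```python
# Mali400MP4 in Exynos: 266MHz on 45nm -> 440MHz on 32nm (figures from
# the post above). Linear-with-clock is an assumption, not a measurement.
old_mhz, new_mhz = 266.0, 440.0
uplift = new_mhz / old_mhz
print(f"clock uplift from the shrink alone: {uplift:.2f}x")
```

That is roughly a 1.65x uplift before counting any of the driver-side gains, which is why a same-generation GPU can land in a very different place one process node later.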

As for Qualcomm and their 3xx generation, I've already tipped my hat to Qualcomm's outstanding execution. For the time being, however, a next-generation GPU still cannot beat a current-generation GPU, and for the apples-to-oranges comparison, color me unimpressed. I'll reserve judgement until I see Adreno 3xx compared to Mali T6xx, IMG Rogue, NV Wayne and the like.
 
Well, I was talking about the first implementation of the TI OMAP4xxx.
Namely the 4430... an SGX543 @ 300MHz would have been much faster than the 540 at the same clock speed; again, clocking that @ 400MHz would be much better than a 540 at the same speed, would it not?

I see your point about next generation, but all I'm trying to say is that the Adreno 320 results were very early tests on early drivers, and it was a smartphone SoC.

In all likelihood, with decent shipping drivers it would match or exceed the new iPad setup, which is a power-hungry quad-channel tablet chip, with the advantage of the new Halti ISA and likely much, much higher FLOP output for OpenCL. I would also assume bandwidth would be on par while consuming much less power.

The SGX 5 series has been out for ages and is on very mature drivers, so they are nearly performing at their maximum. Will the iPhone 5 be carrying an A5X at the same speeds? Even on 32nm I doubt it.

Qualcomm needed to get the graphics out quicker, as it doesn't look like the 3xx is going to compete with Rogue or a highly clocked Mali T604.
 
Software on Series 5 and 5XT cores could still see great performance enhancement from improvements to the compiler.

If any SoC designer has left room for upping the clock speed of the GPU in their next iteration, it's certainly Apple. If they wanted to match or exceed the new iPad's performance with a 32nm A5X for the new iPhone, they'd have no trouble. Apple has been pushing the envelope with Series5 GPUs but definitely not maxing it out; their silicon should also benefit from design improvements in power management techniques each generation.

As a result of design focus, I could see Adreno 3xx boasting improved OpenCL performance over Series5XT, yet I wouldn't expect the difference to be so large if that turned out to be true. I definitely wouldn't be so sure that Series5XT would be at a disadvantage in power consumption efficiency on a comparable fabrication process.
 
Well, I was talking about the first implementation of the TI OMAP4xxx.
Namely the 4430... an SGX543 @ 300MHz would have been much faster than the 540 at the same clock speed; again, clocking that @ 400MHz would be much better than a 540 at the same speed, would it not?

Of course a single 543@300MHz would have been faster than a 540@300MHz, but not by any significant proportion, so it wouldn't have made any groundbreaking difference. As I said, if TI had wanted back then to really make a much bigger statement with graphics under 45nm for OMAP4, it would have needed something like a SGX543MP2, which of course would have cost them far more die area.

I see your point about next generation, but all I'm trying to say is that the Adreno 320 results were very early tests on early drivers, and it was a smartphone SoC.

In all likelihood, with decent shipping drivers it would match or exceed the new iPad setup, which is a power-hungry quad-channel tablet chip, with the advantage of the new Halti ISA and likely much, much higher FLOP output for OpenCL. I would also assume bandwidth would be on par while consuming much less power.

Apples to apples and apples to oranges; assuming the 320 is as advanced a next-generation SFF GPU as I expect it to be, it doesn't come as a particular surprise if it can even beat the iPad3's MP4 later on.

The SGX 5 series has been out for ages and is on very mature drivers, so they are nearly performing at their maximum. Will the iPhone 5 be carrying an A5X at the same speeds? Even on 32nm I doubt it.

It's not even close to its full hw potential; I'd rather suggest that apart from the ULP GeForces in Tegras, there's hardly any SFF GPU out there running as close to its theoretical maximum as those. The A5X is still on 45nm; no idea what exactly Apple's plans are for the next iPhone, but how sure are you that it's impossible under 32nm, considering that Exynos went from 45 to 32nm, from dual- to quad-core A9s, and the GPU went from 266MHz to 440MHz?

Qualcomm needed to get the graphics out quicker, as it doesn't look like the 3xx is going to compete with Rogue or a highly clocked Mali T604.

Or they simply delivered a next-generation custom-designed CPU along with their next-generation GPU.
 
Yes, perhaps they should have released it @ a higher clock speed then; either way, they really undershot the graphics.

I get the feeling that Qualcomm could have really put the hammer down this gen if it wasn't for manufacturing problems that had nothing to do with them; they should have timed the next-gen graphics alongside Krait, but I suppose you can't have everything. In any case, their lead in basebands means they are the replacement SoC for every OEM when LTE enters the equation; in some cases OEMs are using Snapdragon S3 instead of S4 where supply is cut short at 28nm.
 
I get the feeling that Qualcomm could have really put the hammer down this gen if it wasn't for manufacturing problems that had nothing to do with them.

You could say exactly the same about Apple and the A5X. If 32nm had been mature enough and available in enough volume, then their power/performance options would have been far better. Even at 45nm, the A5X graphics performance of the new iPad is still ahead of the competition, who are all at 40nm or better.
 
Valid points, but the A5X is a tablet chip; good luck fitting that into a smartphone on 45nm. At 32nm of course you are correct, but still, Apple would need Qualcomm's discrete LTE anyway; Qualcomm Atheros SoCs for the moment.
 
A SoC producer chooses whether to target a new manufacturing process coming to market when designing their chip.

It's a risky bet in order to get a jump on the competition. Qualcomm's current production issues are entirely of their own making, though I'm not sure the gamble isn't paying off.
 
Valid points, but the A5X is a tablet chip; good luck fitting that into a smartphone on 45nm. At 32nm of course you are correct, but still, Apple would need Qualcomm's discrete LTE anyway; Qualcomm Atheros SoCs for the moment.

Mentioning tablet and smartphone SoCs in the same sentence while noting a full-node manufacturing process difference actually makes a nice oxymoron. As for the rest, Qualcomm is already selling chips to Apple; I doubt Qualcomm would say "no" if someone like Apple, with its volumes, requested further chips from them. The only other real question is what Apple's real plans are for LTE or whatever else, since it's damn hard, with Apple's typical obsessive secrecy, to find anything out before release.
 
The SkyCastle graphics demos are nice showpieces for the Unity engine and also a pretty way to test the performance of Android devices.

http://www.youtube.com/watch?v=c-OXPy3iADM
http://www.youtube.com/watch?v=9A4CKvB2ev4

The guy who made them, Kim Stockton, developed versions for a few different companies to use to show off their hardware, and I've seen videos of TI powered devices showing it off quite well.

Here's the same thing, but with a frame counter enabled on a prototype TI device:

http://www.youtube.com/watch?v=J0Q1axo3HsI
 
Archos 101G10 (OMAP4470, SGX544@384MHz) GLBenchmark2.5 results are out, and it seems to scale quite well compared to the iPad2 (SGX543MP2@250MHz) and iPhone4S (SGX543MP2@200MHz):

http://www.glbenchmark.com/compare.jsp?benchmark=glpro25&showhide=true&certified_only=1&D1=Archos%20101G10&D2=Apple%20iPad%202&D3=Apple%20iPhone%204S
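As a crude sanity check on that scaling, one can compare a naive cores × clock throughput proxy for the three devices. This is my own assumption, not something established in the thread: it ignores the 543/544 architectural differences, memory bandwidth and driver maturity, so treat it as a rough ceiling only.

```python
# Core counts and clocks as quoted in the post above.
devices = {
    "Archos 101G10 (SGX544@384MHz)":      (1, 384),
    "Apple iPad 2 (SGX543MP2@250MHz)":    (2, 250),
    "Apple iPhone 4S (SGX543MP2@200MHz)": (2, 200),
}

# Naive throughput proxy: cores * clock, in "core-MHz".
for name, (cores, mhz) in devices.items():
    print(f"{name}: {cores * mhz} core-MHz")
```

By this proxy the single 544 at 384MHz sits between the two MP2 configurations (384 vs 500 and 400 core-MHz), which matches the "scales quite well" impression from the benchmark link.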

I wonder what its die area would look like compared to a 32nm Mali400MP4 if the SGX544 were clocked at 440MHz:

http://www.glbenchmark.com/result.j...&screen=4&screen=3&screen=2&screen=1&screen=0
 
The balance between optimizing for area, heat, or attainable frequency seems to vary quite a lot between the different SoC designers' implementations, yet a single 544 would probably end up smaller than a Mali400MP4 in almost any implementation, by my estimate.

They're obviously not direct competitors from a graphics functionality/feature-set standpoint, but a few of their SoC implementations will be facing off in the marketplace nonetheless.

Some of these more recent/modern benchmarks like Basemark GUI and GLBench 2.5 seem to be at least a little closer in workload to what a lot of the SoC designers had in mind when selecting and internally testing their graphics parts.
 
The balance between optimizing for area, heat, or attainable frequency seems to vary quite a lot between the different SoC designers' implementations, yet a single 544 would probably end up smaller than a Mali400MP4 in almost any implementation, by my estimate.

We don't even have a clue how much die area a 544 occupies, but it's obviously bigger than a 543 due to the added DX9L3 functionality. If we knew how big a 544@384MHz is in the OMAP4470 at 45nm, it would be a starting point for speculative estimates.
 