OMAP4 & SGX540

I assume that the 544 is clocked at 384MHz in that one?

Unless it's an early test.

Allowing for the 95% linearity of the dual-core SGX543 @ 200MHz in the iPhone 4S, shouldn't the expectation be that a single-core 544 running @ 384MHz pretty much matches the iPhone 4S?

The iPhone 4S gets 73 and 123 in the equivalent tests. That's almost 60% more than this 4470 in Egypt, but only 23% more in the ES 1.1 test. And TI have made big noises about the multimedia capabilities of the 4470.

I guess it could be early drivers.

Is there any suggestion that the SGX544 takes a hit in OpenGL ES performance over the 543 as part of the DX9 circuitry inclusion? I certainly don't see any such suggestion in IMG's PR.
 
Unless it's an early test.

Allowing for the 95% linearity of the dual-core SGX543 @ 200MHz in the iPhone 4S, shouldn't the expectation be that a single-core 544 running @ 384MHz pretty much matches the iPhone 4S?

Not necessarily, and for more than one reason. Similar performance would presuppose:

1. 400MHz
2. Comparable bandwidth
3. Comparable system-level cache for the GPU (the A5 afaik has 128KB/core for the SGX543MP2)
4. Secret sauce core :p
5. Lord knows what else.

The iPhone 4S gets 73 and 123 in the equivalent tests. That's almost 60% more than this 4470 in Egypt, but only 23% more in the ES 1.1 test. And TI have made big noises about the multimedia capabilities of the 4470.

I guess it could be early drivers.

Well, going by dumb speculative math, the analogy should be ~100fps in PRO yielding around 60fps in Egypt offscreen. So yes, it could be early drivers, but it could also be some of the points above. Either way, 46fps might look poor at this point; on the other hand, here's a Renesas SoC with a 543MP2: http://www.glbenchmark.com/phonedetails.jsp?D=Renesas+Mobile+MP5225+(ODIN)&benchmark=glpro21
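To make the speculative math concrete, here's a quick sketch of the naive scaling model this thread keeps coming back to. The 95% per-core scaling figure and the 73fps / 46fps Egypt scores are the numbers quoted in the thread; everything else, including the decision to ignore bandwidth, cache and driver differences (points 1-5 above), is an assumption:

```python
# Naive GPU scaling model: fps is linear in clock, and each extra core adds
# 95% of a core's throughput (the linearity figure quoted in the thread).
# Deliberately ignores bandwidth, cache and driver differences.

CORE_SCALING = 0.95

def single_core_equiv_mhz(clock_mhz, cores):
    """Effective single-core clock for a multi-core config."""
    return clock_mhz * (1 + CORE_SCALING * (cores - 1))

def expected_fps(ref_fps, ref_clock_mhz, ref_cores, clock_mhz, cores=1):
    """Scale a reference benchmark score to a hypothetical config."""
    return ref_fps * (single_core_equiv_mhz(clock_mhz, cores)
                      / single_core_equiv_mhz(ref_clock_mhz, ref_cores))

# iPhone 4S (SGX543MP2 @ ~200MHz) scored 73fps in Egypt offscreen:
print(expected_fps(73, 200, 2, 384))  # ~72fps expected from one 544 @ 384MHz
print(73 / 46)                        # ~1.59: the ~60% gap actually observed
```

Under these assumptions a single core @ 384MHz "should" land near 72fps, which is why the observed 46fps looks like drivers, clocks, or one of the points above.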

Is there any suggestion that the SGX544 takes a hit in OpenGL ES performance over the 543 as part of the DX9 circuitry inclusion? I certainly don't see any such suggestion in IMG's PR.

I don't think so; I'd rather suggest it's more subject to partner implementation (SW platform, SoC bandwidth, frequency, system-level cache etc.) than anything else. Which of course doesn't mean that future drivers might not increase performance, but I frankly don't expect any miracles either.
 
The unnamed Intel SGX544 device that briefly appeared on GLBenchmark scored very differently, and that Archos line traditionally uses OMAP. So that 101 G10 score is almost certainly from an OMAP4470.

The clocks of that 4470 appear to be running at less than maximum spec in that test result, if the reported CPU speed is anything to go by.
 
Unless it's an early test.

Allowing for the 95% linearity of the dual-core SGX543 @ 200MHz in the iPhone 4S, shouldn't the expectation be that a single-core 544 running @ 384MHz pretty much matches the iPhone 4S?
No processor, CPU or GPU, ever reaches 100% linearity with clock speed unless memory speeds increase by the same percentage. This isn't just bandwidth but also (often more importantly) latency. GPUs are obviously better than CPUs at hiding latency, but they're not necessarily perfect... :)
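The sub-linearity point can be illustrated with an Amdahl-style toy model: split frame time into a compute part that scales with GPU clock and a memory part that doesn't. The 20% memory fraction below is purely illustrative, not a measured figure:

```python
# Toy model: only the compute fraction of frame time scales with GPU clock;
# the memory-bound fraction (bandwidth/latency stalls) stays fixed.

def speedup(clock_ratio, memory_fraction):
    """Amdahl-style speedup when only the compute portion scales."""
    compute_fraction = 1.0 - memory_fraction
    return 1.0 / (compute_fraction / clock_ratio + memory_fraction)

# Doubling the GPU clock while 20% of frame time waits on memory:
print(speedup(2.0, 0.2))  # ~1.67x, not the 2x that pure linearity predicts
```

The better a GPU hides latency, the smaller that fixed memory fraction, and the closer scaling gets to linear; it never quite reaches it.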
 
No processor, CPU or GPU, ever reaches 100% linearity with clock speed unless memory speeds increase by the same percentage. This isn't just bandwidth but also (often more importantly) latency. GPUs are obviously better than CPUs at hiding latency, but they're not necessarily perfect... :)

Clearly there is no public info regarding latency. However, regarding bus size, the OMAP4470 PDF that I linked to earlier in this thread shows there are 2x 128-bit buses connected to the SGX544. I also note that TI cites a 16% increase in max memory bandwidth over the 4460, and that the 4470 has a dual-channel memory controller, unlike the other members of the OMAP4 family.

I am also mindful that the Apple A5's original spec was likely for a 1GHz max clock, given that's its max clock rate today (in the iPad 2).

One assumes that TI, in specing the OMAP4470 for a max processor clock of 1.8GHz, would balance the rest of the system to a similar level, and that at least its core metrics in terms of bandwidth etc. would exceed those of an SoC spec'd at 1GHz.
 
Just as another reference point, the OMAP4460-equipped Huawei U9200 got 34.0 in Egypt 720p, and the 4460-based Archos 80G9 shows pretty much identical results.

So those initial Archos 101 G10 4470 results show a 33% improvement in Egypt for the 544 over the 540. I assume the two 4460s are both running the SGX540 at stock 384MHz. One assumes the 544 in the OMAP4470 should show a bigger improvement than 33% in OpenGL ES 2.0 testing, so I'm going with the notion that it is underclocked, or that these are early drivers in the benchmark thus far. Either that, or its inclusion is solely for DX compliance and not for a significant Android performance boost.

http://www.glbenchmark.com/compare....rtified_only=1&D1=Archos 80G9&D2=Huawei U9200
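For what it's worth, the improvement figure works out like this, using the scores quoted in this thread (34.0fps for the 4460 devices, and the ~46fps 4470 result discussed earlier):

```python
# Quick check of the 544-over-540 improvement, scores as quoted in the thread.
sgx540_fps = 34.0  # OMAP4460 (SGX540), Egypt 720p: Huawei U9200 / Archos 80G9
sgx544_fps = 46.0  # OMAP4470 (SGX544) result discussed earlier in the thread

improvement_pct = (sgx544_fps / sgx540_fps - 1) * 100
print(improvement_pct)  # ~35%: in line with the roughly one-third gain cited
```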
 
Video interview today with TI's GM of mobile devices unit.
http://armdevices.net/2012/06/05/se...s-unit-at-texas-instruments-at-computex-2012/

States that the max clock for the 4470 is now 1.5GHz, citing that no more is needed for a better user experience. Perhaps they just couldn't get 1.8GHz reliably at 45nm.

Also reconfirms graphics at "at least" 2x the performance of the 4460.

The 4460 set a pretty low bar considering what it was supposed to be before the Galaxy Nexus launched. The 4460 is basically a 4430 and the 4470 is a little bit better than what the 4460 was supposed to be.
 
Even though the media capabilities of the OMAP4 chips are top-notch (thanks to those dual Cortex-M3s?), I think they kind of dropped the ball regarding 3D performance.

Maybe they weren't expecting the competition to be so aggressive there, but the truth is that having a 4-year-old GPU (by launch time) left them in last place for 3D performance among 2011 SoCs, trading blows with the earlier, NEON-less Tegra 2. At least for the 4430 and the "smartphone version" of the 4460.

And the 4470 doesn't really change this. It seems to be quite a bit behind last year's Exynos 4210 and Tegra 3 and this year's Snapdragon S4 (and even the RK3066 that's already on the market).



But you know what? It doesn't really make a difference.
All Android games must play on a Tegra 2 @ 1280x800, and the 304MHz SGX540 is still a bit faster than the 333MHz ULP GeForce, so almost all new games will play on it just fine, at full speed.

I'm still impressed at how fluidly the SGX530 @ ~196(?) MHz in my OMAP3630 can play Shadowgun and Dead Space @ 480p.
 
But you know what? It doesn't really make a difference.
All Android games must play on a Tegra 2 @ 1280x800, and the 304MHz SGX540 is still a bit faster than the 333MHz ULP GeForce, so almost all new games will play on it just fine, at full speed.

I'm still impressed at how fluidly the SGX530 @ ~196(?) MHz in my OMAP3630 can play Shadowgun and Dead Space @ 480p.
But do the games look exactly the same on low-end vs. high-end devices? It's just like PC games supporting a spectrum of GPUs, theoretically all fluidly, through a variety of graphical settings. It's a matter of how many people want their games to present all the latest whiz-bang effects (probably a lot, given how motivated vendors are in pushing smartphone GPU performance) vs. how many people just want their games to run (no doubt significant, especially for first-time smartphone buyers), and how much more that first group is willing to pay to support the expense of envelope-pushing GPUs. Ideally TI would have SoCs to target both audiences.
 
I wonder why more games (PC and otherwise) don't auto-profile the machine they're on in order to choose settings for effect intensity. nVidia's recent driver suite included something like this, but it seems like it should be done on a per-program scale.
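A game-side auto-profile could be as simple as timing a short synthetic workload at first launch and mapping the result to a preset. A minimal sketch of the idea, with made-up thresholds and a CPU-bound stand-in for a real GPU timing loop:

```python
import time

# Thresholds (seconds) -> preset name; purely illustrative numbers.
PRESETS = [(0.05, "high"), (0.20, "medium"), (float("inf"), "low")]

def benchmark_workload(iterations=200_000):
    """Tiny stand-in workload; a real game would time representative frames."""
    start = time.perf_counter()
    acc = 0.0
    for i in range(iterations):
        acc += (i * 0.5) ** 0.5
    return time.perf_counter() - start

def pick_preset(elapsed_s):
    """Map the measured time to the first preset whose threshold it beats."""
    for threshold, name in PRESETS:
        if elapsed_s <= threshold:
            return name

print(pick_preset(benchmark_workload()))
```

A per-program version of what the driver suite does, essentially; the game would then still expose the chosen preset for manual override.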
 
Even though the media capabilities of the OMAP4 chips are top-notch (thanks to those dual Cortex-M3s?), I think they kind of dropped the ball regarding 3D performance.

More accurately, the entire OMAP4 execution has been very poor. It was initially announced at MWC in Feb '09, to sample later that year, with volume production in the 2nd half of 2010. On its initial release schedule, the 540 was by far the strongest GPU available for licensing.
http://www.linuxfordevices.com/c/a/News/TI-unveils-OMAP4/

However, come MWC 2010, they announced it all over again, again with production in 2010.
It was April 2011 before it saw its first product, the PlayBook. Given that's not a handset, the delay from volume production to first product cannot be put down to conformance testing, and as I recall the 4460 has also had a lot of press regarding its late appearance.

And the 4470 doesn't really change this. It seems to be quite a bit behind last year's Exynos 4210 and Tegra 3 and this year's Snapdragon S4 (and even the RK3066 that's already on the market).

The 4470 was a late addition to the family and is there to give them Windows on ARM options, which would otherwise not have been available to TI until OMAP5.
 
But do the games look exactly the same on low-end vs. high-end devices? It's just like PC games supporting a spectrum of GPUs, theoretically all fluidly, through a variety of graphical settings. It's a matter of how many people want their games to present all the latest whiz-bang effects (probably a lot, given how motivated vendors are in pushing smartphone GPU performance) vs. how many people just want their games to run (no doubt significant, especially for first-time smartphone buyers), and how much more that first group is willing to pay to support the expense of envelope-pushing GPUs. Ideally TI would have SoCs to target both audiences.

Actually, except for a few "Tegra-exclusive" effects in some games (not because it's faster, but because the driver says nVidia, so TWIMTBP infection again), they all look the same and they all play fluidly, if Google Play lets you purchase/install them.

Very few relevant games have customizable visual settings. GTA3 has them, and sliding everything all the way up still keeps performance up @ 720p on my Tegra 2 tablet.

That's why it will hardly make any difference whether you're playing on an OMAP4430 or an Exynos 4412, in 99% of today's Android games.
I don't know if the new Mass Effect and NOVA 3 titles show differences, though.


I wonder why more games (PC and otherwise) don't auto-profile the machine they're on in order to choose settings for effect intensity. nVidia's recent driver suite included something like this, but it seems like it should be done on a per-program scale.

You mean like Rage?
It'd be a nice feature for PC gaming, as long as they don't forcibly take the customization away from us.

More accurately, the entire OMAP4 execution has been very poor. It was initially announced at MWC in Feb '09, to sample later that year, with volume production in the 2nd half of 2010. On its initial release schedule, the 540 was by far the strongest GPU available for licensing.
http://www.linuxfordevices.com/c/a/News/TI-unveils-OMAP4/

However, come MWC 2010, they announced it all over again, again with production in 2010.

I didn't know it had been announced so early :oops:
But I guess that was the pace of execution for ARM SoCs before the Android boom of 2010.
 
Hummingbird with its later drivers was the SGX540 SoC that traded blows with Tegra 2 in graphics performance; the OMAP4430 definitely went beyond that.

At its launch, OMAP4 was reasonably competitive, yet TI will need to step up their graphics implementations to keep up with the accelerating competition.
 
Seeing the lightness and fluidity of Windows Phone UI carry over to mainline Windows 8 is encouraging. Microsoft apparently understands that responsiveness is a high priority, and it looks well programmed.

TI and partners seem to have it well tuned on the OMAP4470, judging by the performance shown in the videos. Flipping between windows and scrolling was impressive.
 
Yeah, they massively undershot on graphics; the OMAP4 series should have shipped with an SGX543 @ 200-300MHz.

I really thought OMAP4 was going to be the generation leader with its dual-channel memory, which was unique compared to its direct competition, at least till Exynos blew the bloody doors off.

It seems they have undershot on graphics again, if the 5 series isn't going to be out till 2013...
Why hamper it again with last-generation graphics? Does it support LPDDR3? Samsung has the right idea, although they could really have done with a couple of A5s or M3s to make the 4210 perfect; likewise the 4412 could have done with a 28nm Atheros LTE baseband and 2GB of RAM to make that perfect.
 
TI plans to have variants of OMAP5 with Rogue.

I'm so far not seeing any sub-standard real-world performance in OMAP4 products. Windows RT looked great in the videos.
 