SGX comments (pruned from AMD Z430 thread)

The Z430 has positioned the LG GT-540 phone in 2nd place in the GLBenchmark list, just above the 3GS's position.

The SGX540-powered Samsung Galaxy S has just gone to the top of the list, even though it has more than twice as many screen pixels as the LG phone.

It will be interesting to see where the iPhone 4 slots in, once someone gets round to jailbreaking it.

http://www.glbenchmark.com/result.jsp
 
No jailbreak needed, Kishonti can run GLBenchmark on the device as they see fit.
 
No jailbreak needed, Kishonti can run GLBenchmark on the device as they see fit.

In which case they mustn't have got their hands on an iPhone 4 yet, as they were quite quick with the iPad testing previously.

After I posted yesterday, the Nokia N8 got tested and now lies in 2nd place.

At one time IMG-ed devices occupied pretty much the entire top 20 in there; now 9 of the top 20 have AMD graphics, and one (the N8) has graphics via an ST chip (is it Broadcom graphics?).

Can we conclude that the competition has caught up in terms of usable 3D graphics (for ES 1.1)?
 
I'm not speaking for ImgTec here whatsoever, but just the fact that the results don't take screen resolution into account makes them hard to interpret, and thus makes it hard to position devices and come to that kind of conclusion.
 
I'm not speaking for ImgTec here whatsoever, but just the fact that the results don't take screen resolution into account makes them hard to interpret, and thus makes it hard to position devices and come to that kind of conclusion.

Well, the 3GS is getting beaten by a device with an AMD Z430 at the same resolution, and by one with Broadcom graphics (the Nokia N8) at a higher resolution.
 
The 3GS is vsync locked. Yet more variables to throw into the mix ;)
 
The iPhone 4 most likely uses the PowerVR SGX 535, like the iPad and 3GS. I don't think Apple would put in a higher-clocked GPU, since the A4 needs to be extra power efficient: the battery isn't a brick like the iPad's, and that power-hungry Retina screen is a factor as well.
 
I had noticed Z430 leadership in some of the triangle tests at GLBenchmark a couple of days ago, demonstrating some of the core's promise, but today's additions to the list are a real shake-up. Broadcom's VideoCore III actually makes a very strong debut, and a phone I just bought myself a couple of weeks ago, the myTouch Slide, places quite highly.

I'm not too surprised to see the Slide make the list; it impressed me with its smooth web browser despite being based on a last-generation Qualcomm apps processor, a testament to the influence of drivers/software over hardware. The lack of FP support, however, ironically prevents this Android device from getting a port of Google Earth, to my annoyance, and it's still not as smooth overall as my first-generation iPhone. As soon as T-Mobile US gets the iPhone 4, either officially or through a jailbreak unlock, I'll be switching back to iOS and away from the mishmash of user interfaces and features that is Android.
 
The last word from the test author indicated that the vsync limit was not a significant contributing factor in the iPhone's scores.

http://forum.beyond3d.com/showthread.php?p=1421240#post1421240
That's not quite true. To quote my reply from that thread:
Vsync is always enabled on iPad/iPhone. Even though the devices are not reaching 60 fps (which would be equal to 9.81 Mtri/s) at which point the tests would be entirely vsync limited, vsync already has an effect at lower framerates. Many of the geometry tests are running at either 30 fps (4.9 Mtri/s) or 40 fps (6.54 Mtri/s) on iPad/iPhone.
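For what it's worth, the arithmetic behind those figures works out if the test submits a fixed ~163.5k triangles per frame (implied by the 60 fps figure):

```latex
\[
\frac{9.81\ \text{Mtri/s}}{60\ \text{fps}} \approx 163{,}500\ \text{tri/frame}
\]
\[
163{,}500\ \text{tri/frame} \times 30\ \text{fps} \approx 4.9\ \text{Mtri/s},
\qquad
163{,}500\ \text{tri/frame} \times 40\ \text{fps} \approx 6.54\ \text{Mtri/s}
\]
```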
 
That's not quite true. To quote my reply from that thread:

your "thats not quite true" might suggest what I said was wrong, the author DEFINITELY said it was not significant.

"Low level tests are not vsynced on iPad/iPhone. Only the high-level "HD" test (pretty old) is vsynced"

Now whether or not THAT'S true is another matter.

I have a vague recollection of seeing some video on the net of Quake running on the iPhone at over 60 FPS, but it could have been another IMG-ed device.
 
Now whether or not THAT'S true is another matter.
Sorry, I thought it was clear I was referring to what Laszlo said, not what you said.

I have a vague recollection of seeing some video on the net of Quake running on the iPhone at over 60 FPS, but it could have been another IMG-ed device.
Maybe Aava/Moorestown? There is no documented method to disable vsync on the iPhone (though there might be a private API).
 
As coincidence would have it, the iPhone 4 has just hit the GLBenchmark results list, in 2nd place behind the Galaxy S (which runs the SGX540 and has a lower-resolution screen).

Some anomalies in the results.

The iPhone 4 int score, at 8032, looks at odds with just about everything else; the iPad gets 3968, and the 1 GHz Galaxy S gets 4264.

Float is 21810, against the iPad's 26455 and the 3GS's 9751. If we use this to guess the clock, one might guess the iPhone 4 is running about 20% slower than the iPad, say 800 MHz?
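Rough arithmetic behind that guess, assuming the float score scales roughly with clock for the same SGX535 core and that the iPad's A4 runs at 1 GHz:

```latex
\[
\frac{21810}{26455} \approx 0.82 \quad\Rightarrow\quad \text{roughly } 18\text{--}20\%\ \text{slower than the iPad}
\]
\[
0.8 \times 1\ \text{GHz} = 800\ \text{MHz}
\]
```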

Should this part of the discussion be moved back to the iPhone 4 thread?
 
Kudos to Kishonti for getting good and timely reads on all of the iOS devices.

The iPhone 4 ranked predictably in GLBenchmark behind the brute force of the four-pipeline SGX540, yet ahead of the rest of the pack, due to what I believe is its combination of a relatively well performance-tuned driver/software environment and an aggressive clock speed for the core.

I've seen a lot of claims in articles on the web that the 535 has double the triangle rate of the 530, but I suspect the only performance doubling between the two is in the texel fill. Can anyone confirm? GMA500 docs I've researched seem to support my belief.
 
Seconded. I'd love to know what GLBenchmark uses to unlock vsync.

There isn't a way to disable vsync on iOS. If you want to get unencumbered performance results, we usually suggest replacing calls to -presentRenderbuffer with glFlush. For example, for every 4 frames, swap once and flush three times. This gets past vsync and still lets you see what the app is doing.
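A minimal sketch of that cadence, assuming an EAGL-based render loop with the color renderbuffer already bound; the function name and frame counter are illustrative, not from the post:

```swift
import OpenGLES

/// Present the renderbuffer only every 4th frame and glFlush() otherwise,
/// so measured throughput isn't paced by the 60 Hz vsync.
func presentOrFlush(_ context: EAGLContext, frameIndex: Int) {
    if frameIndex % 4 == 0 {
        // Normal swap: displays the frame and is paced by vsync.
        context.presentRenderbuffer(Int(GL_RENDERBUFFER))
    } else {
        // Push the queued GL commands to the GPU without swapping,
        // so rendering continues without waiting on the display.
        glFlush()
    }
}
```

The on-screen update rate drops to a quarter of the render rate, but the work submitted per second is no longer capped at 60 frames' worth, so logged timings better reflect what the hardware can do.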
 
I've seen a lot of claims in articles on the web that the 535 has double the triangle rate of the 530, but I suspect the only performance doubling between the two is in the texel fill. Can anyone confirm? GMA500 docs I've researched seem to support my belief.

You won't find published numbers for either, or at least I haven't. The numbers in circulation (i.e., on Wikipedia... no citation given) are probably interpolated from some ranges IMG gave, which went from the SGX520 all the way up to multi-core SGX543. All we really got from this is that the SGX520 starts at 7M (at 200 MHz), the SGX545 is 40M, and the SGX543MP4 is 140M. Ailuros believes that the SGX543 is being pre-scaled at MP1 to make the progression look linear, such that it's 40M like the 545 and not 35M.

This is the progression he suggests (and currently I agree with):

SGX520: 7M (1 USSE with some other performance reduction in system)
SGX530, SGX535: 20M (2 USSE)
SGX540, SGX545: 40M (4 USSE)
SGX543MP1: 40M (4 USSE2)

The rate might be limited by triangle setup regulated per USSE, and not by the ALUs (or else USSE2 would increase it).
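Spelled out, that progression gives a constant per-USSE rate, which is what the setup-per-USSE theory predicts:

```latex
\[
\frac{20\text{M}}{2\ \text{USSE}} = \frac{40\text{M}}{4\ \text{USSE}} = \frac{40\text{M}}{4\ \text{USSE2}} = 10\ \text{Mtri/s per USSE}
\]
```

The SGX520's 7M over a single USSE falls below that line, consistent with the "other performance reduction" noted in the list above.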

On the other hand, IMG claims "enhanced triangle setup delivering up to 50% higher throughput" for the SGX543. Where this actually fits in is anyone's guess. IMG also claims that these numbers are "real world" and not "synthetic", leading us to speculate that Samsung's crazy 90M numbers for the SGX540 are synthetic.

It'd probably be good if we got some real raw triangle throughput tests, but driver quality would probably distort the story...
 
It doesn't necessarily have to be those exact numbers (albeit IMG stated 40M tris in the 545 announcement and 31M for the 540 in an older newsletter; I guess it comes down to what the marketing department decides to rate each core at at any given time, heh...), since they usually have a footnote for conditionals (<50% shader load, for example).

It's the relative performance between cores that probably interests most people, and I have no reason to doubt that if I have rate X for a 53x, I'll have 2*X for a 54x, at least for USSE1 cores at the same frequency.

In general, triangle ratings are a bit of a mess, especially if you look them up on Wikipedia, and yes, it's probably some folks adding data while ignoring frequency differences between different implementations of core A or B.

***edit: albeit completely unrelated, Exophase: http://www.highperformancegraphics.org/media/Hot3D/HPG2010_Hot3D_NVIDIA.pdf ...food for thought when you have something with multiple raster units.

******edit Nr.2:

Kudos to Kishonti for getting good and timely reads on all of the iOS devices.

The iPhone 4 ranked predictably in GLBenchmark behind the brute force of the four-pipeline SGX540, yet ahead of the rest of the pack, due to what I believe is its combination of a relatively well performance-tuned driver/software environment and an aggressive clock speed for the core.

I've seen a lot of claims in articles on the web that the 535 has double the triangle rate of the 530, but I suspect the only performance doubling between the two is in the texel fill. Can anyone confirm? GMA500 docs I've researched seem to support my belief.

By the way, given the same resolution (320*480), I've compared the iPhone 3GS vs. the LG GT540 Optimus. In another comparison set I compared the Galaxy S (800*480) with the iPhone 4 (640*960) and the iPad (768*1024), and the very first gut feeling I get from the results is that the latter two must have quite a bit higher frequency on the GPU side than the 540. You might have twice the ALUs in the 540 but still the very same amount of TMUs compared to the 535, so what really meows on a hot tin roof? ;)

Assuming I'm right on track here: I've been saying for a long time that Apple concentrated mostly on fill-rate. A 540 might guarantee you twice the pipelines but still the very same amount of TMUs, and the frequency is inevitably going to be lower (think higher die area <-> power consumption). I wouldn't be in the least surprised if the 540 frequency in the Galaxy is still at best around iPhone 3GS margins.
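For reference, the pixel counts behind that comparison:

```latex
\[
\begin{aligned}
\text{iPhone 3GS / GT540 Optimus:} &\quad 320 \times 480 = 153{,}600\ \text{px} \\
\text{Galaxy S:} &\quad 800 \times 480 = 384{,}000\ \text{px} \\
\text{iPhone 4:} &\quad 640 \times 960 = 614{,}400\ \text{px} \\
\text{iPad:} &\quad 768 \times 1024 = 786{,}432\ \text{px}
\end{aligned}
\]
```

So the iPhone 4 and iPad push roughly 1.6x and 2x the Galaxy S's pixel count respectively, which is why a higher clock (or at least higher fill throughput) on the Apple side seems plausible.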
 