Qualcomm Krait & MSM8960 @ AnandTech

Looks like a nice phone and it's good to see the option of a very large battery.

MIUI is a pretty well-made interface and the MIUI apps are very good (I use MIUI on my Atrix). Of course, you can always use a different launcher if you don't like the MIUI launcher, and 2GB of memory sounds useful.

Screen quality will be important, however.

The big problem is that it will undoubtedly prove extremely difficult to get hold of one outside of China!
 
A GPU architecture that's perhaps less tolerant of latency, or of how data is scheduled and fed (a more "fragile" design), doesn't necessarily have more to gain from well-written drivers, apps and compilers, just more to lose when they're not written well.
 
Advantage: A MAC unit is smaller than standalone ADD and MUL units, so you can have more of them. You also save on instruction issue rate, so if your code fits MAC, it's a win.
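As a rough sketch of that point (generic C, not tied to any particular GPU ISA; fmaf() is the standard C99 fused multiply-add): a dot product maps each loop iteration onto a single MAC, where a MUL/ADD-only machine would need two issued instructions per element.

    #include <math.h>
    #include <stddef.h>

    /* Dot product expressed as one multiply-accumulate per element.
       On hardware with a MAC/FMA unit, fmaf() is a single issued op
       instead of a MUL followed by a dependent ADD. */
    static float dot(const float *a, const float *b, size_t n)
    {
        float acc = 0.0f;
        for (size_t i = 0; i < n; ++i)
            acc = fmaf(a[i], b[i], acc);  /* acc = a[i] * b[i] + acc */
        return acc;
    }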

Depends on the MAC implementation. Chained implementations don't have much of a size advantage beyond avoiding the bloat that comes with tracking a second instruction.

How prevalent is a MAC-type op in GPGPU applications compared to chains of MUL or ADD? Any matrix-based operations would obviously benefit greatly. But then again, with embarrassingly parallel algorithms, instruction issue bandwidth doesn't have to be a limitation.
 
It won't help much if the algorithm is memory bound, which is the majority. Its primary advantage comes in ALU-soup algorithms, like dense linear algebra.

MAC operations are prevalent enough. But, as said above, their performance advantages aren't there in all algorithms.
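A quick sketch of the memory-bound vs. "ALU soup" distinction (plain C, purely illustrative, no claims about any of the GPUs discussed here): SAXPY does one MAC per element but moves about 12 bytes per element, so bandwidth is the ceiling; a dense matrix multiply can reuse each loaded value many times, so the MAC units actually become the limit.

    #include <stddef.h>

    /* Memory-bound: one MAC per element but ~12 bytes of traffic per element,
       so bandwidth, not the ALUs, sets the speed limit. */
    static void saxpy(float *y, const float *x, float a, size_t n)
    {
        for (size_t i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }

    /* ALU-bound for large n: the MAC count grows as n^3 while the data only
       grows as n^2, so each loaded value can be reused roughly n times. */
    static void matmul(float *c, const float *a, const float *b, size_t n)
    {
        for (size_t i = 0; i < n; ++i)
            for (size_t j = 0; j < n; ++j) {
                float acc = 0.0f;
                for (size_t k = 0; k < n; ++k)
                    acc += a[i * n + k] * b[k * n + j];
                c[i * n + j] = acc;
            }
    }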
 
A new LG device with an Adreno320 appeared in the GLBenchmark2.5 database:

http://www.glbenchmark.com/compare....fied_only=1&D1=LG E970&D2=LG E971&D3=LG F180K

What's weird is that the LG E971 and LG F180K results are quite similar, while the LG E970 has only a single outlying test result, which is what gives each device its respective ranking. The E970 probably has some differences, e.g. in SoC bandwidth or whatever else, but it still seems odd so far.
 
Does the new S4 Pro quad core carry 2MB of L2?... Single-threaded Linpack is off the charts in AnandTech's Optimus G performance preview... but quad-core scaling tops out at 2.5x... is that the L2 cache at play there? Bandwidth? Or something else?

One thing is for sure though: LG's software optimisation is average at best, with no Jelly Bean and a poor SunSpider score on the stock LG browser despite the obvious power on tap.
http://www.anandtech.com/show/6305/lg-optimus-g-performance-preview
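One back-of-the-envelope way to read that 2.5x figure, assuming it reflects serial work rather than a cache or bandwidth ceiling: Amdahl's law gives speedup = 1 / ((1 - p) + p/4) on four cores, and solving 2.5 = 1 / ((1 - p) + p/4) gives p = 0.8, i.e. the benchmark behaving as if only about 80% of its work scales across cores. A shared-L2 or memory-bandwidth limit would produce a very similar curve from the outside, though, so the scaling number alone doesn't separate the two.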
 
Well 29fps is a LOT higher than the 17/18 fps the two LG devices get in the database in Egypt 2.5 1080p offscreen. The highest result from the LG E970 is at 24.5fps. I wonder when Anand's results will get published in the Kishonti database.
 
Well, if that is the GPU performance we can expect from a phone with a 720p screen, then that is going to take some beating. I doubt even the A6, with its various improvements beyond being 2x the iPhone 4S GPU, will match it.
 
I don't expect the iPhone 5 to match those 29 fps; however, it will still remain a highly competitive product on its own merits.
 
Here we have results from the Xiaomi Mi 2 running the APQ8064. It gets almost 29 fps, just like the Optimus G in AnandTech's benchmark results, and those scores are almost 20% higher than the iPad 3's. That is quite impressive if you ask me.
 
The Xiaomi Mi 2 is now at 30.5 fps, meaning 22% ahead of the iPad 3's 25 fps. Impressive for Qualcomm itself, definitely. However, if you consider that both GPUs most likely have the same number of ALU lanes, and one is clocked at 400MHz while the other at 250MHz, normalizing performance to frequency tells an entirely different story. Especially when comparing a six-month-old 45nm SoC to a 28nm SoC not yet shipping in devices.
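Spelling that normalization out with the clocks assumed above (400MHz for the Adreno 320, 250MHz for the iPad 3's GPU): 30.5 fps / 400 MHz ≈ 0.076 fps per MHz versus 25 fps / 250 MHz = 0.100 fps per MHz, which would put the older 45nm part roughly 30% ahead per clock, with all the usual caveats about assuming performance scales linearly with frequency.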
 
This is a major win for Qualcomm....this can't be underestimated.
If you think back to the iPhone 4S... or even every iPhone launch besides the 4... Apple has had the best mobile phone GPU by quite some margin... they have carried on being ultra-aggressive on that front and DOUBLED the performance of last year's powerhouse... that's ridiculous in itself... but to think Qualcomm has beaten even the new A6 before it's launched is pretty amazing considering the resources and technical expertise Apple has thrown into its SoC division.

It actually beats the new iPad... with its quad-core GPU and quad-channel memory... in a phone... awesome.

Yes, Ailuros, if you compare just ALU lanes perhaps you are right... I'm not as familiar as you with the other execution resources available in both GPUs... but I would guess the SGX MP4 has more of everything else... and also that, process node being equal, it would probably take up less die area.

Let's not forget that part of gaining an advantage comes from process nodes... and Qualcomm manages it with a relatively highly clocked quad-core Krait and 2GB of RAM IN A PHONE... Apple's new phone chip can't touch it outside of some memory benchmarks and a blazing SunSpider score... good job, Qualcomm!
 
A company puts a chip that's been designed mainly for tablets in a phone, so I guess you shouldn't be surprised it beats the competition :)
 
This is a major win for Qualcomm....this can't be underestimated.

Who's underestimating anything? It's only natural that the newer a product is, the better it usually ends up, or at least should be. Qualcomm and Apple have different design cycles, and just for the record, Qualcomm has its own customers it sells its SoCs to, while Apple has none. They're only indirectly competing.

Hypothetically shrink the A5X to 28nm, clock its GPU at the same 400MHz as the Adreno 320, and do the math yourself on how the picture would look then.
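Doing that math under the simplest assumption, i.e. perfectly linear scaling with clock (which real GPUs rarely achieve): 25 fps x (400 / 250) = 40 fps for such a hypothetical A5X, against the Adreno 320's 30.5 fps.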

If you think back to the iPhone 4S... or even every iPhone launch besides the 4... Apple has had the best mobile phone GPU by quite some margin... they have carried on being ultra-aggressive on that front and DOUBLED the performance of last year's powerhouse... that's ridiculous in itself... but to think Qualcomm has beaten even the new A6 before it's launched is pretty amazing considering the resources and technical expertise Apple has thrown into its SoC division.
I don't recall "every" iPhone having had the best GPU at the time, apart from the iPhone 4S, and even then it didn't take too long before its GPU performance had been matched or exceeded by competing solutions.

Yes, the Adreno 320 doubles GPU performance over the Adreno 225, but so does the A6 vs. the iPhone 4S's A5.

It actually beats the new iPad... with its quad-core GPU and quad-channel memory... in a phone... awesome.
Qualcomm undeniably has excellent execution, but considering the raw specs of the 320 it's delivering the performance you'd expect compared to a 225 at the same frequency with probably half the unit count. One would expect that a newer architecture would have come with some homework done to increase efficiency by quite a bit, or that the driver/compiler would have matured in the meantime.

Yes, Ailuros, if you compare just ALU lanes perhaps you are right... I'm not as familiar as you with the other execution resources available in both GPUs... but I would guess the SGX MP4 has more of everything else... and also that, process node being equal, it would probably take up less die area.
Let's flip that coin in another direction: assume I'm right and the 320 has "64 SPs" (in desktop marketing parlance) at 400MHz. I'll put an upcoming "64 SP" @ 400MHz Wayne ULP GeForce against it, and you tell me which of the two you'd bet on to win with flying colours. And no, the point here is neither to compare it to future products in theory nor to undermine the 320 from any perspective.

The real point here is efficiency and it shouldn't be too hard to understand and digest.

Let's not forget that part of gaining an advantage comes from process nodes... and Qualcomm manages it with a relatively highly clocked quad-core Krait and 2GB of RAM IN A PHONE... Apple's new phone chip can't touch it outside of some memory benchmarks and a blazing SunSpider score... good job, Qualcomm!
Why are you so hung up on Apple's i-gear anyway? I don't have a single i-device, nor any other high-end smartphone at the moment, even though I fool around with several of them from time to time. As I said, Apple effectively competes with itself, and they have an insane profit margin for what they're doing.

But if it has to be a direct comparison from GLBenchmark 2.5 results so far, there are devices with Adreno 320s that range from 2000+ up to 3400+ frames. My own guess would place the iPhone 5 in the 2300-2400 frame ballpark, which would still make the product highly competitive and still quite a bit above the 2.5 score of Samsung's SIII hot seller.

Will any of the above affect the majority of the public's buying decisions? I severely doubt it, especially for iPhones, since "i-fanatics" (if I may call them that) will buy the 5 on its own merits, or if you prefer, for the total user experience.
 
Yes, you could flip the coin and clock the MP4 @ 400MHz @ 28nm... yes, that would obviously win... but die area would be bigger, power consumption would likely be higher, and so on... the point is it's about balance... Qualcomm has cut down redundancy instead of just bolting cores together... thus saving die space... and maybe power? Let's flip it another way... Qualcomm could just add more units to fill up the die space the A5X would take and clock lower?? Thus prevailing anyway.

None of that hypothetical mumbo jumbo makes any difference though; it's about actual products shipping... and I use Apple's SoCs because benchmark results are what we were talking about, were they not?... and they happen to have had the highest-performing GPU parts, have they not?... so to put a chip into a smartphone that bests both Apple solutions... the only-six-month-old tablet one... and the new-fangled A6... is quite an achievement compared to the GPU gap that existed between Adreno and the competition a couple of years ago. That's all I'm saying.

Also, it's now unlikely that a smartphone GPU will top the Adreno 320 until at least MWC (Galaxy S3)... which is how many months away?... and what was the time scale I predicted for Adreno to rule smartphones? 4-6 months.

Tegra 3+ has not even been announced yet... so you can forget about Tegra 4 showing up in a smartphone anytime soon.
 
A company puts a chip that's been designed mainly for tablets in a phone, so I guess you shouldn't be surprised it beats the competition :)

If it doesn't suffer from battery life issues then what's the problem? If it goes into a smartphone before a shipping tablet, I guess that makes it a smartphone SoC, does it not?
 
My understanding is that Qualcomm has presented the APQ8064 as a tablet chip. OTOH I agree, and forgot to say in my previous post, that we should wait for battery life results. But performance always has a price (especially on the same process).
 
Yes, you could flip the coin and clock the MP4 @ 400MHz @ 28nm... yes, that would obviously win... but die area would be bigger, power consumption would likely be higher, and so on... the point is it's about balance...

More die area and power consumption compared to what exactly? Compared to the A5X@45nm, a purely hypothetical shrink two nodes down (not one) to 28nm would result in much less die area and at least comparable power consumption despite the higher frequency.

Samsung clocked the Mali-400MP4 in Exynos from 266MHz@45nm up to 440MHz@32nm without affecting power consumption in any particular way, from what I recall, and that with only a one-node shrink.

Qualcomm has cut down redundancy instead of just bolting cores together... thus saving die space... and maybe power? Let's flip it another way... Qualcomm could just add more units to fill up the die space the A5X would take and clock lower?? Thus prevailing anyway.

I'm aware of neither the die area of Qualcomm's 28nm SoC nor its respective power consumption. If you are, go ahead, I'm all eyes.

None of that hypothetical mumbo jumbo makes any difference though; it's about actual products shipping... and I use Apple's SoCs because benchmark results are what we were talking about, were they not?... and they happen to have had the highest-performing GPU parts, have they not?... so to put a chip into a smartphone that bests both Apple solutions... the only-six-month-old tablet one... and the new-fangled A6... is quite an achievement compared to the GPU gap that existed between Adreno and the competition a couple of years ago. That's all I'm saying.

Comparing an older product with a not-yet-available product isn't hypothetical mumbo jumbo. And since one product is "just" 6 months old, let's see exactly how long that reign lasts for the Adreno 320, from actual products shipping with it until other solutions appear in the not-so-distant future. If, after 6 months of 320 device availability, something better comes along, I won't change my perspective in that regard. It's the natural course of technology that by the time A appears, B is around the corner within months, and in the majority of cases B is more attractive from many perspectives, whether it comes from the same technology source/OEM or from someone else.

Also, it's now unlikely that a smartphone GPU will top the Adreno 320 until at least MWC (Galaxy S3)... which is how many months away?... and what was the time scale I predicted for Adreno to rule smartphones? 4-6 months.

Tegra 3+ has not even been announced yet... so you can forget about Tegra 4 showing up in a smartphone anytime soon.

I asked a legitimate question about a next-generation GPU and its respective efficiency, and I'm counting the Adreno 320 among next-generation GPUs whether that sounds convenient or not. If the Wayne example doesn't help, try an ARM Mali-T604@500MHz.

If it doesn't suffer from battery life issues then what's the problem? If it goes into a smartphone before a shipping tablet, I guess that makes it a smartphone SoC, does it not?

I don't think LG or anyone else would integrate a SoC into a smartphone if it would empty the battery in no time for anything 3D. That sounds completely absurd to me. We'll have to wait and see detailed measurements, which shouldn't take too long.

In the GLBenchmark 2.5 thread I noted that battery life tests are starting to get filled in for 2.5. I didn't search each and every device, but so far I've hit on the following:

http://www.glbenchmark.com/compare....sung GT-I9300 Galaxy S III&D3=Apple iPhone 4S

The SIII battery time here is outstanding, but as I said in the other thread I can't figure out why battery life goes down with brightness reduced to 50%, when it should be exactly the other way around (unless I'm missing something). The iPhone 4S battery life is too low for my taste, but that's just me.
 
The Xiaomi Mi 2 is now at 30.5 fps, meaning 22% ahead of the iPad 3's 25 fps. Impressive for Qualcomm itself, definitely. However, if you consider that both GPUs most likely have the same number of ALU lanes, and one is clocked at 400MHz while the other at 250MHz, normalizing performance to frequency tells an entirely different story. Especially when comparing a six-month-old 45nm SoC to a 28nm SoC not yet shipping in devices.

http://www.gsmarena.com/
http://www.gsmarena.com/samsung_i9300_galaxy_s_iii-review-761p5.php
http://www.gsmarena.com/lg_optimus_g-review-814p5.php

Another public result for the LG Optimus G. In the GLBenchmark 2.1 Egypt offscreen 720p test, the Optimus G (Adreno 320) only scores 113 fps, while the iPad 3 is at 140 fps, the Galaxy S III at 103, and the Ones (with Adreno 225) at about 56 fps.

Since most mobile games only render at 720p HD (or sub-HD) resolution, which result is more representative of real conditions: the "2.5 Egypt HD 1080p offscreen" test or the "2.1 Egypt 720p offscreen" test?

In the 720p offscreen test the Galaxy S3 leads the Ones by a large margin, while in the 1080p offscreen test the Galaxy S3 has only a minor advantage of about 15%.
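For scale, the quoted 2.1 numbers put that 720p gap at roughly 84% (103 fps vs. 56 fps), against the ~15% gap in the 1080p Egypt HD run; keep in mind the two tests also differ in shader workload, not just resolution, so the narrowing isn't purely a resolution effect.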
 