I meant their own bar. Chinese SoCs are getting a lot less terrible with their integrated GPUs. Remember that by the end of 2012 Mediatek's highest-end SoC had a six-year-old PowerVR SGX531. By the end of 2013, Qualcomm and Samsung were launching new SoCs with OpenGL ES 3.0 GPUs, compute-capable unified shaders and >100 GFLOP/s, whereas Mediatek's and HiSilicon's flagships launched with a Mali-450MP4 (separate vertex/fragment shaders, OpenGL ES 2.0, zero compute capability).
Apple and Co. went through that trend too, just earlier. The original iPad had about 2 GFLOP/s from its SGX535 GPU; it wasn't any better with smartphones at first either.
Late 2014 and 2015 finally brought true parity in GPU feature set, with those IHVs shipping PowerVR Series6 and Mali Midgard solutions. The 2016 chips will put the Chinese vendors' GPU performance really close to Samsung's and Qualcomm's 2015 flagships.
But now even their midranges are getting decent GPUs. The MT675x have better GPUs than the Snapdragon 615/617, and the MT673x have much better GPUs than the S410/412 (which have an embarrassingly ancient GPU in them).
They're raising their own bar at a really fast pace, IMO.
GPU performance continues to scale everywhere. Chinese smartphones still have a sizeable gap in GPU performance compared to high-end Android/iOS solutions.
Don't you mean the G6200@700MHz has close to 30% of the performance of a Mali T880MP4@900MHz (more like 40%)? The Helio X10 is actually 60% slower than the Kirin 950.
The MT6795 has a G6200@700MHz; I don't know why that specific HTC shows such crappy performance, but a G6200@700MHz normally looks more like this:
https://gfxbench.com/device.jsp?benchmark=gfxgen&os=Android&api=gl&cpu-arch=ARM&hwtype=GPU&hwname=Imagination Technologies PowerVR Rogue G6200&did=25741695&D=Gionee GN9008
Maybe crappy drivers and a 6795M variant with quite a bit lower frequencies?
For the record, the upcoming Helio X20 (MT6797, deca-core yadda yadda...) will have a Mali T880MP4 clocked at 700MHz. If the Mate with its MP4@900MHz gets 17+ fps in Manhattan 3.0, the upcoming Mediatek X20 GPU might end up in the 13-14 fps region, which is exactly where the A7 GPU and the likes of that generation stand in performance.
That's all assuming the Kirin 950 doesn't have immature GPU drivers. Otherwise it might end up rather in the A8 GPU/Adreno 430 region and be one instead of two generations behind, in which case I'll of course stand corrected. For the moment I'm just using the existing data.
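To make the back-of-the-envelope math above explicit, here's a minimal sketch assuming Manhattan 3.0 throughput scales roughly linearly with GPU clock for the same T880MP4 configuration (it ignores bandwidth, driver maturity and throttling, so treat it as a ballpark only):

```python
# Rough estimate only: same GPU configuration (Mali T880MP4), fps assumed to
# scale ~linearly with core clock. Ignores memory bandwidth, driver maturity
# and thermal throttling.

def scale_fps(ref_fps: float, ref_mhz: float, target_mhz: float) -> float:
    """Linearly rescale a reference benchmark result to another clock."""
    return ref_fps * (target_mhz / ref_mhz)

# Kirin 950 (Mate): T880MP4 @ 900MHz, ~17 fps in GFXBench Manhattan 3.0
# Helio X20 (MT6797): T880MP4 @ 700MHz
estimate = scale_fps(ref_fps=17.0, ref_mhz=900, target_mhz=700)
print(f"Estimated Helio X20 Manhattan 3.0: ~{estimate:.1f} fps")  # ~13.2 fps
```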
Today we have at the highest smartphone end an Adreno 530 for Android vs. a six-cluster GT7600 for iOS. The highest-end Rogue you'll see from IMG's IP portfolio in Chinese stuff sounds more like a dual-cluster GT7200, which will be quite a bit ahead of today's G6200s, but, also considering frequency differences, at least 2x lower than the peak smartphone GPUs. I'd estimate a GT7200@700MHz at 20-21 fps in Manhattan 3.0; the GT7600 and Adreno 530 are in the 41-49 fps range.
Except if we look at "long-term performance" results, the Adreno 420 is barely any better than the Adreno 330. Which definitely leaves you thinking...
That's QCOM's own problem and their DX11 craze for Microsoft's wet dreams, which didn't lead anywhere.
Hmm... and GPGPU will start raising performance in smartphone applications such as?
JavaScript seems hopelessly dependent on single-threaded performance. Photo/video processing seems to consistently need fixed-function DSPs.
Are we going to perform physics simulations on smartphones? Run advanced video/image editing software?
Unless you're thinking of the transition to the smartphone-as-a-PC when docked to a screen+keyboard+mouse on Android, I don't see any actual need for this ever-increasing GPU performance at the moment.
No, let's leave SoCs as they are, because we can scale CPU core counts and/or frequencies endlessly instead. Not only do SoCs need a fine balance between CPU and GPU processing power (like everywhere else), but scaling GPU performance within reasonable boundaries is a lot cheaper than with CPUs, because GPU performance actually scales almost linearly with increasing cluster count (say "core" count for marketing's sake). Both ARM and IMG have endless blog entries about GPGPU, and no, not every vendor can use a crapload of dedicated processing units, nor can everyone develop them. In a case where you actually need a fair amount of parallel processing, the GPU is ideal and will burn way less power than any garden-variety CPU out there:
http://blog.imgtec.com/powervr/measuring-gpu-compute-performance (bottom of the page has links to even more articles).
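A minimal sketch of why GPU clusters scale so much better than CPU cores for this kind of work, using Amdahl's law with illustrative parallel fractions (both numbers are assumptions picked for the example, not vendor measurements):

```python
# Toy Amdahl's-law model, not vendor data. The parallel fractions below are
# assumptions: pixel/compute work is almost entirely parallel, while a typical
# app workload running on CPU cores is not.

def amdahl_speedup(parallel_fraction: float, units: int) -> float:
    """Speedup over one unit when only part of the work scales with units."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / units)

for units in (2, 4, 8):
    gpu = amdahl_speedup(0.99, units)  # GPU clusters on graphics/compute work
    cpu = amdahl_speedup(0.60, units)  # CPU cores on an assumed typical app mix
    print(f"{units} units: GPU clusters ~{gpu:.1f}x, CPU cores ~{cpu:.1f}x")
```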
Case example: the Mobileye EyeQ4 for ADAS:
http://www.prnewswire.com/news-rele...-its-first-design-win-for-2018-300045242.html
The EyeQ4® will feature four CPU cores with four hardware threads each, coupled with six cores of Mobileye's innovative and well-proven Vector Microcode Processors (VMP) that has been running in the EyeQ2 and EyeQ3 generations. The EyeQ4® will also introduce novel accelerator types – two Multithreaded Processing Cluster (MPC) cores and two Programmable Macro Array (PMA) cores. MPC is more versatile than a GPU or any other OpenCL accelerator, and with higher efficiency than any CPU. PMA sports compute density nearing that of fixed-function hardware accelerators, and unachievable in the classic DSP architecture, without sacrificing programmability. All cores are fully programmable and support different types of algorithms. Using the right core for the right task saves both computational time and energy. This is critical as the EyeQ4® is required to provide "super-computer" capabilities of more than 2.5 teraflops within a low-power (approximately 3W) automotive grade system-on-chip.
Not everyone can muster the resources to develop hardware and algorithms like that.
But increasing peak (and not sustainable) performance is completely bonkers. As I said, it's really only useful for reviewers who don't know any better than to run a certain benchmark a couple of times.
That's a topic very specific to Samsung and/or QCOM solutions. If you watch carefully which frequency GPUs within each generation scale down to while throttling, you'll see that it's usually not too far apart between most of them. Obviously a solution clocked at "just" 533MHz throttling down to 400MHz retains a bigger share of its sustainable performance than a solution clocked at 772MHz throttling down to the same 400MHz. They're just using very high frequencies to stay competitive; that this isn't free in terms of power consumption isn't anything new.
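To put rough numbers on that example, using only the clocks quoted above and the simplifying assumption that performance tracks GPU clock linearly:

```python
# Sustained-vs-peak estimate from throttle clocks alone, assuming performance
# tracks GPU clock roughly linearly. Real results also depend on workload,
# chassis and ambient temperature; this only illustrates the ratio.

def sustained_fraction(peak_mhz: float, throttled_mhz: float) -> float:
    return throttled_mhz / peak_mhz

print(f"533MHz -> 400MHz keeps ~{sustained_fraction(533, 400):.0%} of peak")  # ~75%
print(f"772MHz -> 400MHz keeps ~{sustained_fraction(772, 400):.0%} of peak")  # ~52%
```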
However, what I said was meant for all solutions and wasn't targeted at the above. If an ISV today wanted to create a triple-A mobile game that runs decently only on something like the iPad Air 2, they wouldn't be able to squeeze all of its 270+ GFLOP/s out of it but a sizeable portion less, because it would just get too hot. If FLOPs don't help as a measure, use anything else instead to set a mark for GPU performance. It's a general problem of ULP mobile devices.
Just imagine the clusterfuck it would be if this were happening in the desktop space: nVidia launching a graphics card that could run The Witcher 3 maxed out at 60 FPS at 4K, and then, when people took said card home and played The Witcher 3, after 10 minutes they would only get 40 FPS because of power and heat throttling.
That would be an instant lawsuit right there. Together with all the cheating happening with overclocking for specific apps, how come Samsung and Qualcomm are getting away with this?
Nice example; now imagine a mobile solution that contained the graphics card mentioned above. Chances are damn high it would actually end up like that under the right circumstances. Just for the record, and because some of us haven't gone nuts: when many said that the GM20B GPU in the X1 can't make it into an ultra-thin tablet in its full form, it wasn't a joke. It's clocked at 1000MHz in the Shield Android TV and at 850MHz in the Pixel C. If possible, someone should downclock the former's GPU to 850MHz and run the very same stressful 3D benchmark over a fair number of loops. Which one, and why, would you think is likely to throttle first and harder?
As for the last question, ask the press why they're so mild in either case. You should also consider that the huge smartphone/tablet crowd buying Samsung and QCOM solutions isn't a bunch of gamers with the knowledge or interest to find out what's going on. If there were a big enough angry crowd opposing it, the press would also chime in, in order to gain even more hits.
But you know all the above already; why are you even asking?