ARM-Mali T678 vs Intel HD 4500/5000

There is no Mali T678 notebook.

If you want to compare x86 to ARM solutions, the latest GFXBench and 3DMark offer apples-to-apples comparisons between them.

AFAIK, the only ARMv8 chip out now is Apple's A7. Anandtech compared the Cyclone cores in the A7 to Intel's current Ivy Bridge and Haswell, but they run at very low clock speeds (1.3GHz?) because of the small form factor. I guess you should imagine a dual-core, non-hyperthreaded mobile i3 underclocked to 1.3GHz.
 
I was comparing a high-end mobile GPU to Intel's embedded graphics controllers.

Not the Intel x86 CPUs.
 
I was comparing a high-end mobile GPU to Intel's embedded graphics controllers.

Not the Intel x86 CPUs.

You had two questions in your initial post, one concerning GPUs and one concerning CPUs, and Tottentranz answered them both.
 
I guess you should imagine a dual-core, non-hyperthreaded mobile i3 underclocked to 1.3GHz.

No, it's not a fair comparison. CPUs are designed differently for the clock speeds they target. An i3 designed to run at 1.3GHz would probably be built with fewer pipeline stages, for example.
 
No, it's not a fair comparison. CPUs are designed differently for the clock speeds they target. An i3 designed to run at 1.3GHz would probably be built with fewer pipeline stages, for example.

The OP asked for a comparison between cores built for ~2W SoCs and cores built for 20-45W APUs (I don't see where fairness fits into this huge gap in power consumption).
He didn't ask for a fair comparison between 2W SoCs and Intel's would-be-Core-CPU-had-they-developed-one-to-work-at-1.3GHz.

The first comparison is measurable, at least. The second is pretty much stupid and would amount to nothing intelligible.
 
With all due respect, both comparisons are absurd in their own way, exactly because ULP SoC units play in a completely different power portfolio than units in other markets. That's why I personally usually object to those kinds of comparisons.

For the record, the T678 hasn't been integrated into any shipping device yet. However, the T760 is actually a DX11 GPU, which makes for a more reasonable comparison; based on the first quad-cluster T760 results from the RK3288, I'd say an eight-cluster T760 at a reasonably high clock wouldn't have to hide from an Intel HD 4400.
 
I don't see the harm in drawing comparisons, even if they aren't fair ones given the power envelopes.

Tegra K1 alludes to doing just that by showing an optimised version of Project Ira, the demo originally running on a Titan. It raises the question: how will future mobile chips stack up against desktops? Is the delta changing over time? It appears to be shrinking, in both feature sets and performance.
 
Eventually process problems will hit the ULP SoC side of things too and bottleneck the scaling somewhere down the line. In that regard I don't see the delta between ULP mobile and desktop changing dramatically in the long run.

As for the feature set, there's a gigantic difference between having a DX11-compliant GPU and playing a DX11 game at full tilt at playable framerates (you'd even have a damn hard time finding an OGL_ES3.x mobile game for the coming year). Guess what: on the desktop you can; on an ultra-thin tablet and below you cannot, and will not for quite some time, exactly because there are worlds between them in terms of bandwidth and power envelopes.
 
Thank you for responding to my thread.

So a Mali SoC GPU, if scaled to more cores and a higher energy profile, would contend with an Intel IGP.

I know Intel's CPU performance is the best right now for desktops. I am not comparing a 4GHz, high-wattage i5 processor to a small and much cooler mobile ARM core.

I am just comparing Intel's graphics controllers to ARM's.
 
It might be fairly close now, but Intel is also close to releasing Broadwell, which will increase perf/watt dramatically (beyond the process shrink).

The biggest reason given for the availability of 4.5W TDP SKUs on Broadwell is tremendously improved perf/watt on the iGPU side.

With Cherry Trail they'll basically use an HD 4400 for their Atom chips.

Some rumors put the perf/watt increase at 4x, although maybe that's only in limited SKUs (like the low-power Atom chips).

I think it's more of a necessity than anything, with competitors also said to be increasing perf/watt dramatically in the near future. It's driven by the efficiency improvements needed for a reasonably sized petaflop supercomputer.
 
Eventually process problems will hit the ULP SoC side of things too and bottleneck the scaling somewhere down the line. In that regard I don't see the delta between ULP mobile and desktop changing dramatically in the long run.

Agreed. In time, the mobile SoC performance increases we have seen in recent years will slow, as desktop increases did two to four years ago, and the performance delta will plateau.

As time goes by, it's the smaller devices that benefit most from process improvements: today it's tablets and phones, tomorrow it's wearables. (They sit in the sweet spot that allows for explosive growth; remember the era of GPU growth between 1999 and 2005, which was remarkable.)

IMO, in terms of software and hardware feature sets, it's not hard to see eventual parity in the coming years.
 
remember the era of GPU growth between 1999 and 2005, which was remarkable.

It wasn't so much scaling as it was TDP growth from 10W to 300W. Nowadays even the memory chips on a midrange card consume more than an entire card did back in the day.
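To put that in perspective, here's a quick back-of-the-envelope calculation of the compound annual growth rate implied by those figures. The 10W and 300W endpoints and the 1999-2005 window are taken from this thread, not from measured data, so treat the result as illustrative only:

```python
def annual_growth(start_w: float, end_w: float, years: int) -> float:
    """Compound annual growth factor implied by a start/end TDP."""
    return (end_w / start_w) ** (1.0 / years)

# 10W (1999) -> 300W (2005), i.e. 30x over six years
rate = annual_growth(10.0, 300.0, 6)
print(f"Implied TDP growth: ~{(rate - 1) * 100:.0f}% per year")
```

Roughly 76% more power budget per year, every year, which makes it clear why that kind of "free lunch" couldn't continue indefinitely.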
 
It wasn't so much scaling as it was TDP growth from 10W to 300W. Nowadays even the memory chips on a midrange card consume more than an entire card did back in the day.

Fair point. It was a free lunch that couldn't last forever.
 