ARM Beats Intel With Revised AnTuTu Benchmark

http://www.eetimes.com/author.asp?section_id=36&doc_id=1318894

The current and upcoming revisions to the AnTuTu benchmark will also drastically alter the scores of Intel's upcoming Bay Trail Atom processor. As with the Atom Z2580, many in the press prematurely proclaimed it the victor over the next-generation ARM processors. Now that also appears to be highly questionable.

[Image: revised AnTuTu benchmark results]
 
Isn't there a significant upgrade to Atom due this year? I would figure that 22nm Silvermont would likely wear the crown for both performance and performance/W.
 
Isn't there a significant upgrade to Atom due this year? I would figure that 22nm Silvermont would likely wear the crown for both performance and performance/W.
Intel came out with some pretty impressive slides a while ago; everybody, I guess, was waiting for a review, and then a "guy" leaked some AnTuTu benchmarks that were really good.

Then Exophase found out that the benchmark was heavily tweaked to make Intel's processor look good. He posted the code and got coverage from the media.
I followed the discussion on RWT and pretty much everybody agrees that it goes further than using a better compiler, so for everybody Intel got caught... with its hand in the bag... again :LOL:

The results above show what AnTuTu scores should be on Intel's existing CPUs; without those optimizations, they lose to ARM processors.

Now the upcoming Atoms have yet to be reviewed, but I think that, rightfully, people will be extremely wary about the results of some benches.

Anyway, everybody seems to complain about the state of benchmarking in the Android/mobile space.
Luckily those new Atoms will be reviewed running Windows with a more serious set of benches.

I still expect those processors to be great.

------------
Edit
WillardJuice, are you Wilco?
 
I followed the discussion on RWT and pretty much everybody agrees that it goes further than using a better compiler, so for everybody Intel got caught... with its hand in the bag... again

That was not quite my interpretation. It's still unclear whether it's a legit compiler optimization or not (timing seems suspect, but it appears generic enough). Regardless, Exophase confirmed what seemed obvious before: comparing Intel and ARM results for AnTuTu is essentially worthless. But then again, that's true for a lot of Android benchmarks... :p

WillardJuice, are you Wilco?

lol no. I hold no allegiance to any company. ;)
 
That was not quite my interpretation. It's still unclear whether it's a legit compiler optimization or not (timing seems suspect, but it appears generic enough).
I'd be really interested in seeing code to which this optimization can be applied and that brings measurable gains. I certainly won't claim such code doesn't exist, but I have seen so many good ideas applicable only to a piece of benchmark code that I am expecting more than "it appears generic" ;)
 
I'd be really interested in seeing code to which this optimization can be applied and that brings measurable gains. I certainly won't claim such code doesn't exist, but I have seen so many good ideas applicable only to a piece of benchmark code that I am expecting more than "it appears generic" ;)

I don't claim that it's a revolutionary breakthrough in compilers. I don't even know Intel's "true motivation" behind the optimization. :D But optimizing bit operations imo is still technically legit. However impractical this optimization may be in the real world (or whatever the true motivation was for Intel), it still counts in my book (i.e. not cheating).

Obviously I'm not thrilled with Intel's behavior. It would be nice to see them dedicate their time to more "constructive" use cases, but I'm not ready to paint Intel as a bunch of cheaters. In fact, I honestly blame AnTuTu way more than Intel for this mess.
 
We used to, though: it's called SPEC CPU. :) Unfortunately, cheating with SPEC CPU is (was) even crazier.
 
I don't claim that it's a revolutionary breakthrough in compilers. I don't even know Intel's "true motivation" behind the optimization. :D But optimizing bit operations imo is still technically legit. However impractical this optimization may be in the real world (or whatever the true motivation was for Intel), it still counts in my book (i.e. not cheating).

Obviously I'm not thrilled with Intel's behavior. It would be nice to see them dedicate their time to more "constructive" use cases, but I'm not ready to paint Intel as a bunch of cheaters. In fact, I honestly blame AnTuTu way more than Intel for this mess.

Optimizing bit operations is legit and if we were talking about merging something like this:

Code:
u32 a = 0;
for(u32 i = 0; i < 32; i++)
{
  a |= (1u << i);
}
Into this:

Code:
u32 a = 0xFFFFFFFF;
Then I wouldn't see a problem. The loop can be unrolled and once that happens a few sets of normal looking transformations can get you to the end result.
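
Roughly, the intermediate steps could look like this (just a sketch of plausible transformations, not a claim about ICC's actual passes):

Code:
/* Step 1: unroll the constant-bound loop. */
u32 a = 0;
a |= (1u << 0);
a |= (1u << 1);
/* ... and so on for every bit up to ... */
a |= (1u << 31);

/* Step 2: the chain of ORs over constants folds into a single
   constant, which is exactly the u32 a = 0xFFFFFFFF; above. */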

The problem I have with what we saw ICC doing with AnTuTu is that the bit-op loop had a variable counter. The variables came from a (deterministic) random number generator, so I really don't think Intel statically analyzed anything about these numbers. It would have had to break the loop into blocks of 32 so they look like the code above... but the thing is, unless your loop count is actually >= 32 a significant amount of the time, this will make your code slower. And that doesn't seem like a reasonable assumption in this case. I definitely can't see this being something you'd do with variable loop counts in general, and even some basic heuristic like "block it by 32 if you see bit operations" seems like a very bad idea. I suspect that they're matching a very specific case here.
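
To make that concrete, here's roughly the shape I mean (my own reconstruction of the pattern, not the actual AnTuTu source or ICC's output; the shift count is masked just to keep the example well-defined):

Code:
/* The trip count n comes from the (deterministic) RNG, so the
   compiler knows nothing about it statically. */
u32 set_bits(u32 n)
{
  u32 a = 0;
  for(u32 i = 0; i < n; i++)
  {
    a |= (1u << (i & 31));
  }
  return a;
}

/* "Blocked by 32": each full block of 32 iterations looks like the
   constant-bound loop above and folds to setting every bit; only the
   leftover n % 32 iterations still do real work. */
u32 set_bits_blocked(u32 n)
{
  u32 a = 0;
  u32 i = 0;
  for(; i + 32 <= n; i += 32)
  {
    a |= 0xFFFFFFFFu;  /* a whole block collapses to this */
  }
  for(; i < n; i++)
  {
    a |= (1u << (i & 31));  /* remainder, fewer than 32 iterations */
  }
  return a;
}

The block test and the remainder loop are pure overhead whenever n is small, which is exactly why blocking only pays off if the counts are usually >= 32.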

One way to check would be to take the code, try modifying it slightly (but keep the overall result the same), and see how much change it takes before the compiler no longer optimizes it. That's always a good way to fish out whether the compiler is special-casing a benchmark. I personally don't have ICC so I can't do it, at least not yet, but maybe someone else can...
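
For example, something along these lines (hypothetical variants I'm making up on the spot; they produce the same set of bits, just in a slightly different shape):

Code:
/* If ICC still collapses the original loop but gives up on these,
   that's a strong hint it's matching the benchmark's exact code
   rather than doing a general optimization. */
u32 variant_count_down(u32 n)  /* iterate in reverse */
{
  u32 a = 0;
  for(u32 i = n; i-- > 0; )
  {
    a |= (1u << (i & 31));
  }
  return a;
}

u32 variant_two_passes(u32 n)  /* even indices, then odd indices */
{
  u32 a = 0;
  for(u32 i = 0; i < n; i += 2)
  {
    a |= (1u << (i & 31));
  }
  for(u32 i = 1; i < n; i += 2)
  {
    a |= (1u << (i & 31));
  }
  return a;
}

If the optimization really is generic, small perturbations like these shouldn't break it; if it only fires on the exact AnTuTu loop, that tells its own story.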
 