Problems do crop up, though, when customers begin to focus on what they think is a single important feature of a product, like FP ops per second. Do most customers really know that there might be a difference between 16-bit, 32-bit, and 64-bit floats? Many do not.
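For what it's worth, the difference is easy to demonstrate. Here's a quick Python sketch (nothing GPU-specific, just the standard struct module's 'e'/'f' formats for IEEE half and single precision) that round-trips the same value through all three widths:

```python
import struct

x = 3.14159265358979  # Python floats are 64-bit doubles

# Round-trip through the narrower IEEE formats to see what survives
as_fp32 = struct.unpack('f', struct.pack('f', x))[0]
as_fp16 = struct.unpack('e', struct.pack('e', x))[0]  # 'e' = half precision

print(f"fp64: {x!r}")        # 3.14159265358979
print(f"fp32: {as_fp32!r}")  # 3.1415927410125732 (~7 significant digits)
print(f"fp16: {as_fp16!r}")  # 3.140625           (~3 significant digits)
```

Half precision keeps roughly 3 decimal digits, single about 7, double about 15, which is exactly why a raw "FP ops per second" figure means little without the precision attached.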
Setting aside the handful of lay tech geeks like some of us, the majority doesn't (and shouldn't need to) know how many FLOPs each solution delivers. In the given case IMG isn't marketing end products anyway, since that's rightfully its licensees' job, but even then I can't recall a single case where Apple or anyone else quoted N GFLOPs of anything for the GPUs.
It's that sort of ignorance I was referring to. If customers do fixate on maximum FP ops per second, though, then limited precision is going to be the norm, since it maximizes the headline FLOPs figure, and cards are going to be designed with that in mind. NVIDIA seems to understand that well: the double-precision throughput of its most recent commodity cards is dismal. AMD does have the R9 280X with very good DP performance (probably the best DP FLOPs/$ on the market), but how many customers has it won them?
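To put rough numbers on that (a back-of-the-envelope sketch: peak throughput is just ALU lanes x 2 FLOPs per cycle for a fused multiply-add x clock, scaled by the chip's DP rate, with the specs below being the published ones as I remember them):

```python
def peak_gflops(lanes, mhz, dp_rate=1.0):
    # lanes x 2 FLOPs/cycle (fused multiply-add) x clock, scaled by DP rate
    return lanes * 2 * mhz / 1000 * dp_rate

# R9 280X (Tahiti): 2048 lanes @ ~1 GHz, 1/4-rate double precision
print(peak_gflops(2048, 1000))          # ~4096 SP GFLOPs
print(peak_gflops(2048, 1000, 1 / 4))   # ~1024 DP GFLOPs

# GTX 750 Ti (Maxwell): 640 lanes @ ~1085 MHz, but only 1/32-rate DP
print(peak_gflops(640, 1085))           # ~1389 SP GFLOPs
print(peak_gflops(640, 1085, 1 / 32))   # ~43 DP GFLOPs -- dismal indeed
```

Both are perfectly respectable in single precision, yet they sit more than an order of magnitude apart in double.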
In the past the fixation was CPU clock rate, and it led to the disaster that was the Pentium 4. Intel had actually planned a 10 GHz P4, but physics got in the way.
It took a long time for people to realize that clock rate isn't everything.
The current fad is CPU cores/chip.
Compare the single-thread performance of a Core 2 Duo with a similarly clocked Core i7: there isn't a huge difference, except perhaps in benchmarks that are sensitive to memory latency.
DP FLOPs and CPU cores aside, it's actually NVIDIA that started marketing its GPUs more aggressively than anyone else in the ULP SoC market, where all of a sudden an ALU lane became a "core" and the recent GK20A in Tegra K1 went from an initial "projected" 364 GFLOPs down to 326 GFLOPs on developer boards.
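The drop, for what it's worth, is pure clock speed: assuming the usual lanes x 2 (FMA) x clock formula, GK20A's 192 lanes back those two figures out to roughly 950 MHz and 850 MHz:

```python
# GK20A: 192 ALU lanes (the marketed "cores"), 2 FLOPs/lane/cycle via FMA
print(192 * 2 * 950 / 1000)  # 364.8 GFLOPs at the projected ~950 MHz
print(192 * 2 * 850 / 1000)  # 326.4 GFLOPs at the ~850 MHz of dev boards
```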
If those raw numbers really knocked manufacturers' socks off, and if they actually appealed to the average consumer, device makers would be standing in line by now to get K1 SoCs into their products. In reality the K1 seems to be doing well, but so far I haven't seen the foundations of the ULP market moving either.
As a matter of fact I don't even object to any of the above, to be honest; marketing based on GFLOPs (I'm just borrowing it as an example here) seems far healthier than device N scoring 50k points in something as worthless as AnTuTu, or a gazillion vertices/sec quoted for some other GPU.