> I don't think I'd take the Intel model comments too seriously. They are likely far more influenced by 1) what their competition is doing and 2) engineering constraints than by any philosophy on when to release products.

I'm not saying they explicitly decided to follow such a cycle - just that that's what it turned out to be, and so Jen-Hsun is explaining it that way. He made similar specific "rhythm" claims in the pre-G92/GT200 era which turned out to be basically true (but arguably only for that specific generation). However, I agree his comments are vague enough that they don't even need to mean that, so yes, the evidence points in that direction but it is rather weak.
And when he made these comments and NVIDIA released their 'performance roadmap', they were still nearly one year from tape-out. Not strictly too late to change from A9 to A15 (hi QSC7230/MSM7230!) - then again, this is the company that released a 750MHz ARM11 to compete with OMAP3, then later downgraded it to 600MHz.
> Sadly, that's the case for many of A15's new instructions, including fused MAC. A15's implementation of FMA was an afterthought (grafted onto the chained implementation), so ARM's compiler doesn't like to -- rather, just doesn't -- spit out FMA instructions.

Can you elaborate? Grafted onto the chained implementation HW-wise? I think we discussed this in another thread and concluded that would be more expensive with no benefit whatsoever - or do you mean something else entirely? Also, are you saying that the compiler not emitting FMA instructions hints at bad HW performance, or is it just immature SW, or an attempt to maintain compatibility?
> But I suspect most applications that perform workloads like x264 have dedicated (either fixed-function or DSP) processors to do that and won't be using the CPU.

x264 is the world's best encoder, period. Most handheld encoders are significantly below average quality. PowerVR VXE is pretty damn good from what I've seen (and can scale to higher performance through not only clocks but also encode core count), but it's still not competitive with the maximum-quality (i.e. slowest) x264 profile afaict. Many handheld encoders get humiliated IQ-wise by very fast x264 profiles, and with a dual-core 2.5GHz A15, x264 might actually exceed the HW encoder's 30fps in some cases (at massively higher power).
I don't think there is a use case for x264 on smartphones, or even much of one on tablets, but there is a very strong one on Windows 8 ARM clamshells. So it's definitely not just a theoretical gimmick long-term, IMO. And even if it were, I'd expect it to get benchmarked because of its importance on x86 machines. Compression benchmarks (e.g. 7-Zip/WinRAR) would also benefit from multi-core, but not as much from NEON.
Once again, I agree 2xA15 is clearly superior, but I'm not sure the press/enthusiast reaction to 4xA9 would be as negative as you think. Hard to say, though.
> I'm not sure how that would really speed anything up. Most teams are divided into dedicated physical/synthesis groups and logic/architecture groups. Shifting physical design to India would only cause complications.

Agreed, the complexity of that kind of outsourcing is a big problem. But now that I reread the article, I think they might be handling much more of the logic/architecture as well: http://www.livemint.com/2011/06/01224527/Nvidia8217s-India-unit-to-s.html
Also, I realise there is plenty of parallelism in the chip development process already, but what I meant is that with completely separate teams, it would theoretically be possible for Logan to lag Wayne by only, say, six months. I don't think that's very likely, but it would certainly make sticking with A9 on Wayne more reasonable. Then again, with Grey added to the roadmap, I don't see how they could pull it off; it just doesn't make sense.
> Looking at the market, 13-15" notebooks are where it's at. Even Intel has shifted their strategy to mainly target processors for that market. And A15 would fit perfectly in that area, more so than Haswell I'd say.

Hmm, that's true. The real question of whether it's worth the trouble is obviously screen power consumption, though.
> Hell, going from 45LP to 45LPG was a pretty big shift; die area ballooned somewhere on the order of 20%.

Whoa, 20%? Just for the parts using the G transistors, or for the entire die? I don't really understand how it could be that much either way.