No, they obviously wouldn't have been better off with 40nm, but NVIDIA could theoretically have gone for 28nm with Tegra 3 too.
True, and maybe they should have, but the situation going from Tegra 2 to Tegra 3 was a lot different than it is going from Tegra 4 to Logan. Tegra 2 was a tiny chip with fair room to grow while still keeping a competitive price. With the NDK taking off like it did, nVidia had to correct the boneheaded decision to leave NEON off the chip, and they had to make something that could power gate individual cores and therefore be usable in phones. The tablet market was still burgeoning, and Tegra 2 was paired with immature software, so it hardly had the market locked in. They had to do something about memory bandwidth; while they kept it single channel, they grew to support much faster DDR3, which was vital for tablets. And because of the growth in tablets there was still a fair amount of untapped power budget that nVidia could exploit.
And going quad core was an easy marketing and review-site win. nVidia knew that neither Qualcomm nor any of its other serious competitors was going to offer quad cores any time soon (Samsung eventually did, but quite a bit later). And they knew that stupid benchmarks and PR stunts work. I have no doubt that going quad first was a big part of what success Tegra 3 had. The power-saver core was also cast in a much better light than it is today, because no one had anything comparable, and nVidia ramped up the marketing behind it.
Fast forward to early 2014 and almost none of this will apply to Tegra 4's successor. The only big problems on the table that it can fix without changing process node are the CPU organization, which I really doubt nVidia will change, and the GPU. The GPU is a big one, but IMO not big enough to drive things by itself. There are no cheap and easy wins left on the CPU side. They could make an octa-core Cortex-A15 or A57, but I think we both know that would be outrageous.
Can I claim, then, that the "power problem" for T4 is not, for the most part, tied to the manufacturing process?
Sure. But it doesn't matter. nVidia has basically one option here: big.LITTLE. If you think Tegra 5 will contain it then fine - but I doubt nVidia will swallow its pride on this one. Not when Denver is pending. They may have no faith in the technology to begin with.
So let's just say that if they can't move away from a 4+1 Cortex-A15 or A57 arrangement, then they need 20nm to make it worthwhile.
According to rumors, TSMC has not granted priority 20nm access to any of the bigger competitors.
Do they tend to grant priorities? What bearing does this have on the schedule?
That earlier link of yours, which paints the typical pretty marketing picture TSMC draws before each of its new process releases, almost makes me suspect otherwise when it comes to Apple. It's not even a given that the majority of the claims in that writeup are accurate.
A lot of sites reported the same thing. It's pretty straightforward. I don't think TSMC is claiming anything different.
Yes, their schedule claims are not to be trusted, but if they really are two months ahead on something that was already near term, that's positive news and should only improve the prospects for 20nm in 2014.
And no, I'm not going to ignite any funky conspiracy theories, but if Apple truly has a major chunk of its production volume scheduled for 20nm at TSMC, it's going to affect quite a few other IHVs at least to some degree.
I'm not saying that 20nm doesn't make more sense for Logan/NV; I just can't yet believe the combination of 20nm and early 2014, that's all.
Apple could disrupt things, although rumor has it Apple wasn't able to buy priority allotments any more than anyone else.
I agree that 20nm seems hard for early 2014. But I also don't consider that a hard date at all. Why dismiss TSMC's schedules but take nVidia's at face value? Neither has ever kept close to them.