While it's interesting to see any number of things confirmed, the most surprising part is the apparent scaling of the whole thing. 600mm² already (cough, no release date, and the only product mentioned is a Tesla, aka super high end and very high price). But... that's all they got out of 600mm²? They can't scale the die any larger on this process, at all. And just a 66% jump on the highest end?
Is HBM2 really that big? I mean, geez. Transistors-to-performance didn't even scale linearly; hell, it dropped. We have a roughly 66% performance improvement from an 87.5% increase in transistors. With HBM2 and FinFET, Pascal manages to be worse, transistor for transistor, than Maxwell. A ~12% drop in per-transistor efficiency is not what you want out of a node jump and an "architecture jump" at all.
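To spell that math out (rough numbers, and assuming the ~66% and ~87.5% figures above hold up):

```python
# Quick sanity check on the perf-per-transistor claim.
# Assumed figures, not from an official spec sheet:
perf_gain = 1.66        # Pascal ~= 1.66x Maxwell performance (approx.)
transistor_gain = 1.875 # Pascal ~= 1.875x Maxwell transistor count (approx.)

perf_per_transistor = perf_gain / transistor_gain
print(f"Perf per transistor vs. Maxwell: {perf_per_transistor:.3f}x "
      f"({(1 - perf_per_transistor) * 100:.1f}% drop)")
# -> roughly 0.885x, i.e. an ~11-12% drop in per-transistor efficiency
```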
In its way it's impressive in the same way Nvidia's recent efforts have been, aka just make a huge chip. But the limited RAM (16GB) for an exclusively high end card (AMD already puts out 32GB cards at their highest end), plus rather disappointing performance for a new node that's already maxed out on die size, is... well, hopefully Volta works out well for Nvidia next year. Which is not to say 66% wouldn't be impressive in its own right, but the Tesla line costs a hell of a lot, so price-for-performance wise this doesn't seem like it's going to be all that attractive for a while.