Starting to think it must have Bulldozer cores in it. They've been talking about Denver for over 10 years.
Speaking from my ignorance: would it be possible/feasible to use that ARM CPU for post-process antialiasing (FXAA/MLAA/etc.), or for DirectCompute/OpenCL, on PC graphics cards?
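To make the question concrete: filters like FXAA/MLAA are essentially one or two full-screen per-pixel passes (luma-based edge detection followed by a directional blend), which is exactly the kind of embarrassingly parallel work the existing shader cores are built for, so offloading it to a small on-board ARM CPU probably wouldn't pay off. Below is a minimal, hypothetical CUDA sketch of just the edge-detection step; the kernel name, the RGBA8 buffer layout and the threshold scale are assumptions for illustration, not NVIDIA's actual FXAA code.

#include <cuda_runtime.h>

// Rec. 601 luma approximation, as used by many FXAA-style filters
__device__ float luma(uchar4 c) {
    return 0.299f * c.x + 0.587f * c.y + 0.114f * c.z;
}

// Flags pixels whose local luma contrast exceeds a threshold.
// A full FXAA/MLAA pass would then apply a directional blend to these pixels.
__global__ void edge_detect(const uchar4* src, float* edge_mask,
                            int width, int height, float threshold) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x <= 0 || y <= 0 || x >= width - 1 || y >= height - 1) return;

    // Compare the centre pixel's luma against its 4-neighbourhood
    float lC = luma(src[y * width + x]);
    float lN = luma(src[(y - 1) * width + x]);
    float lS = luma(src[(y + 1) * width + x]);
    float lW = luma(src[y * width + (x - 1)]);
    float lE = luma(src[y * width + (x + 1)]);

    float lMax = fmaxf(lC, fmaxf(fmaxf(lN, lS), fmaxf(lW, lE)));
    float lMin = fminf(lC, fminf(fminf(lN, lS), fminf(lW, lE)));

    edge_mask[y * width + x] = (lMax - lMin > threshold) ? 1.0f : 0.0f;
}

Every pixel is independent here, which is why the work maps so naturally onto thousands of shader ALUs rather than a handful of CPU cores.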
Starting to think it must have Bulldozer cores in it. They've been talking about Denver for over 10 years.
January 2011 was not 10 years ago.
It's exactly that slide that leads me to ask my earlier question. Since Maxwell is supposed to launch for desktop in 2014, why would Logan skip Denver CPUs, with the latter appearing only in Tegra SoCs in 2015? Isn't TSMC going for FinFET only at 16nm?
So far my understanding was that Project Denver CPUs would appear on TSMC 20nm. Given the slide above, it increasingly sounds like there might not be any Denver cores in Maxwell at all, and that the entire Denver introduction has been postponed by one more generation.
I think that people tend to have unrealistic expectations about Denver (Boulder).
So, what is the Tesla then?
The possibilities are:
- a Maxwell "20nm" with no included CPU cores
- a Maxwell "FinFET" with CPU cores
- a Maxwell "FinFET" with no CPU cores
And perhaps only the first one gets launched, or the first one followed by one of the two FinFET ones.
TSMC's 16nm FinFET process will enter mass production in less than one year after ramping up production of 20nm chips, company chairman and CEO Morris Chang said at an investors meeting today (April 18).
Chang indicated that TSMC already moved its 20nm process to risk production in the first quarter of 2013. As for 16nm FinFET, the node will be ready for risk production by the year-end, Chang said.
Maxwell can't wait for FinFET, since that's 2 nodes away (TSMC's 14nm process).
Rather than pushing to the real 14nm node—which likely will require multi-patterning because of a long series of delays in getting EUV lithography out the door, as well as a new manufacturing flow and new equipment—the foundries have combined 14nm fins with 20nm BEOL processes. (There is even talk now of moving finFETs to 28nm, according to sources, where double patterning is not required.)
I think that people tend to have unrealistic expectations about Denver (Boulder).
Needless to say, I think we all expect a Maxwell generation of products on 20nm, at least the regular GeForces.
It has now occurred to me that they could sell a Maxwell Tesla variant based on a consumer chip, as they did with the Tesla K10 (for folks who only care about FP32 performance and don't need strong data integrity). So I'm curious whether the next "big" Tesla, the successor to the GK110, will be on 20nm or on 14/16nm (that process is hard to name exactly; it's an ambiguous, hybrid process. Perhaps calling it the "FinFET" process is less ambiguous).
If we're interested in gaming cards: I don't think we need to care much at this point in time; we may look forward to 128-bit, 192-bit and 256-bit consumer GPUs on 20nm to replace the current ones, bringing more power efficiency and I don't know which other goodies.
A January article I've found about the new process: it hints at a 20nm/14nm hybrid. Dunno if "16nm" is an artificial middle ground just to give it a recognizable name...
And maybe there will be a 28nm/14nm hybrid?
http://lp-hp.com/blog/2013/01/17/first-silicon-at-14nm/
The way they are talking, they make it sound like they've solved 20nm... when no one but Intel has produced any real chip yet. Don't expect 20nm this year, and don't expect anything better for 3 ... from anyone except for Intel, of course.