> If so, I wonder what segment of the market that product would fit into; maybe a shrink of GM206 as it's quite big for a 128-bit GPU?
That is their strongest GPU competition-wise; it matches 256-bit AMD chips. They should be more interested in a better low end.
> That is their strongest GPU competition-wise; it matches 256-bit AMD chips. They should be more interested in a better low end.
I suppose they could reduce the number of SMs slightly and clock it higher to match GM206's performance, effectively replacing both GM107 and GM206 in one go, while using Pascal to replace the GM206 price point (or cost point, depending on AMD's competitiveness).
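Back-of-envelope, the "fewer SMs, higher clock" idea checks out. A sketch with illustrative (not official) figures, assuming Maxwell's 128 CUDA cores per SM and a GM206 boost clock around 1178 MHz:

```python
# Back-of-envelope check of the "fewer SMs, higher clock" idea.
# All numbers here are illustrative assumptions, not official specs.

def gflops_fp32(sm_count, cores_per_sm, clock_mhz):
    """Theoretical single-precision throughput: cores x 2 ops/cycle (FMA) x clock."""
    return sm_count * cores_per_sm * 2 * clock_mhz / 1000.0

# Maxwell packs 128 CUDA cores per SM.
gm206 = gflops_fp32(sm_count=8, cores_per_sm=128, clock_mhz=1178)   # roughly GTX 960 boost
shrunk = gflops_fp32(sm_count=6, cores_per_sm=128, clock_mhz=1570)  # hypothetical shrink

print(f"GM206:  {gm206:.0f} GFLOPS")  # ~2413 GFLOPS
print(f"shrink: {shrunk:.0f} GFLOPS") # ~2412 GFLOPS, same ballpark
```

So a hypothetical six-SM part would need roughly a 33% clock bump to match GM206's theoretical throughput, which a node shrink could plausibly provide.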
Regarding the original topic - I am curious whether we will see a 14/16nm Maxwell before Pascal. NVIDIA has historically done this as a way to test the process (e.g. GF117). If so, I wonder what segment of the market that product would fit into; maybe a shrink of GM206 as it's quite big for a 128-bit GPU?
> I am curious whether we will see a 14/16nm Maxwell before Pascal. NVIDIA has historically done this as a way to test the process (e.g. GF117).
Tegra X1 at 20nm is arguably such a node-risk evaluation. The BEOL interconnect of TSMC's 20nm is the same as their 16nm, so NVIDIA has already vetted half of the 16nm node changes with a working design. Interconnect-modeling failures on the (at the time) new 40nm node were the cause of the failed Fermi spins, delaying it considerably, so vetting the new interconnect is significant.
I hope so. If Pascal doesn't release until 2016, there's nothing new in the non-premium tiers this year.
By Nvidia's own words Pascal doesn't seem much different from Maxwell in terms of architecture.
Not by the last words of Jen-Hsun Huang:
“I cannot wait to tell you about the products that we have in the pipeline,” said Jen-Hsun Huang, chief executive officer of Nvidia, at the company’s quarterly conference call with investors and financial analysts. “There are more engineers at Nvidia building the future of GPUs than just about anywhere else in the world. We are singularly focused on visual computing, as you guys know.”
http://www.kitguru.net/components/g...about-future-pascal-products-in-the-pipeline/
The last time he talked so much about the number of engineers and man-hours in the making to hype a product, NVIDIA launched the G80...
He doesn't specify that he means just Pascal. Based on GTC, the biggest differences between Maxwell and Pascal are HBM memory, mixed-precision support, and NVLink support.
> There's no fast DP variant of Maxwell, NVIDIA's skipping a generation in that area, ...
Seems like Pascal evolved from what Maxwell was meant to be, had there not been those long delays. But those delays mean "Paxwell" gets next-gen bandwidth, so I'm not complaining.
> He doesn't specify that he means just Pascal. Based on GTC, the biggest differences between Maxwell and Pascal are HBM memory, mixed-precision support, and NVLink support.
Won't putting mixed precision in lead to changes in the shader ALU configuration?
> Won't putting mixed precision in lead to changes in the shader ALU configuration?
Tegra X1, which is considered Maxwell, already supports FP16, so I would assume the required changes are quite small.
> Tegra X1, which is considered Maxwell, already supports FP16, so I would assume the required changes are quite small.
OK. Although I suppose Tegra X1 is very mobile-oriented (where the FP16 format is very beneficial), and Pascal could end up being quite different with respect to shader-core configuration.
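For anyone wondering why FP16 matters so much in mobile: half the bytes per value means half the memory traffic, at the cost of a 10-bit mantissa (~3 decimal digits). A quick CPU-side sketch with NumPy's float16, nothing GPU-specific:

```python
# FP16 tradeoff in a nutshell: half the storage, much less precision.
import numpy as np

a32 = np.ones(1024, dtype=np.float32)
a16 = a32.astype(np.float16)

# Half the bytes per element -> half the memory bandwidth per fetch.
print(a32.nbytes, a16.nbytes)  # 4096 2048

# The price: a 10-bit mantissa. Adding a value smaller than half the
# spacing between adjacent float16 values near 1.0 is simply lost.
x = np.float16(1.0) + np.float16(0.0001)
print(x)  # 1.0 -- the small addend rounds away
```

That bandwidth saving is exactly why FP16 is attractive in power-constrained mobile parts, while the precision loss is why desktop shading has mostly stayed FP32.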
> What?
Changed my wording.
PS: thanks to AMD for doing some process pipe-cleaning for HBM :-}