NVIDIA Maxwell Speculation Thread

If so, I wonder what segment of the market that product would fit into; maybe a shrink of GM206 as it's quite big for a 128-bit GPU?
That is their strongest GPU competition-wise; it matches AMD's 256-bit chips. They should be more interested in a better low end.
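For a rough sense of the raw numbers behind that comparison, memory bandwidth is just bus width times per-pin data rate; a back-of-the-envelope sketch (the clocks below are typical retail figures, not anything quoted in this thread):

Code:
// Raw memory bandwidth = (bus width in bits / 8) * per-pin data rate in GT/s.
// Illustrative figures only: a 128-bit GM206 board vs. a 256-bit GDDR5 AMD part.
#include <cstdio>

static double bandwidth_gbs(int bus_width_bits, double data_rate_gtps) {
    return (bus_width_bits / 8.0) * data_rate_gtps;   // bytes per transfer * GT/s = GB/s
}

int main() {
    std::printf("128-bit @ 7.0 Gbps: %.0f GB/s\n", bandwidth_gbs(128, 7.0));  // ~112 GB/s (GM206-class)
    std::printf("256-bit @ 5.5 Gbps: %.0f GB/s\n", bandwidth_gbs(256, 5.5));  // ~176 GB/s (Tonga-class)
    // The raw deficit is what Maxwell's delta color compression and larger L2 have to make up.
    return 0;
}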
 
That is their strongest GPU competition-wise; it matches AMD's 256-bit chips. They should be more interested in a better low end.
I suppose they could reduce the number of SMs slightly and clock it higher to match GM206's performance, effectively replacing both GM107 and GM206 in one go, while using Pascal to replace the GM206 price-point (or cost-point, depending on AMD's competitiveness).
 
Regarding the original topic - I am curious whether we will see a 14/16nm Maxwell before Pascal. NVIDIA has historically done this as a way to test the process (e.g. GF117). If so, I wonder what segment of the market that product would fit into; maybe a shrink of GM206 as it's quite big for a 128-bit GPU?

Hmm, I wouldn't bet on it. I think we could see an early test of a small Pascal, that's quite possible, something like what the 750 Ti was for Maxwell. Then they will follow the same tick-tock method they've used for the last three generations (mid-range first, then the full high-end chips). It increases their revenue by a large amount, and I don't think they will change this method anytime soon.
 
I am curious whether we will see a 14/16nm Maxwell before Pascal. NVIDIA has historically done this as a way to test the process (e.g. GF117).
Tegra X1 at 20nm is arguably such a node risk evaluation. The BEOL interconnect of TSMC's 20nm is the same as their 16nm so NVidia has already vetted half of the 16nm node changes with a working design. Interconnect modeling failures of the (at the time) new 40nm node were the cause of the failed Fermi spins, delaying it considerably, so vetting the new interconnect is significant.
 
Tegra X1 at 20nm is arguably such a node risk evaluation. The BEOL interconnect of TSMC's 20nm is the same as their 16nm so NVidia has already vetted half of the 16nm node changes with a working design. Interconnect modeling failures of the (at the time) new 40nm node were the cause of the failed Fermi spins, delaying it considerably, so vetting the new interconnect is significant.

Yes, assuming that NVIDIA uses TSMC's 16/14nm process and not Samsung's.
 
Regarding the original topic - I am curious whether we will see a 14/16nm Maxwell before Pascal. NVIDIA has historically done this as a way to test the process (e.g. GF117). If so, I wonder what segment of the market that product would fit into; maybe a shrink of GM206 as it's quite big for a 128-bit GPU?

I hope so. If Pascal doesn't release until 2016, there's nothing new in the non-premium tier this year.
 
I hope so. If Pascal doesn't release until 2016, there's nothing new in the non-premium tier this year.

By Nvidia's own words Pascal doesn't seem much different from Maxwell in terms of architecture. There are some new compute features and the HBM bus, but otherwise it appears to be the same. Which isn't necessarily bad; 14/16nm or whatever you want to call it is a big leap in transistor density and efficiency from 28nm.

Besides, I'd be amazed at any new-node GPU coming out this year at all. Apple once again seems to have nabbed pretty much the entire early run of the new node from both foundries that now have it in production, having had to split between Samsung and TSMC just to get all of their own orders filled, let alone anyone else's. I'd only expect maybe a handful of other mobile SoCs to be built on the process by the end of the year at most, with anything else like a CPU or GPU coming next year at the earliest.

Which is fine, as 14/16nm is going to be a very, very long node. Sure, both Samsung and TSMC "want" to accelerate 10nm, whatever that is (just a higher-density back end? I know the FinFET design isn't going anywhere), but announcing they're going to come out with a new node sooner and actually accomplishing that haven't been the same thing for years now.
 
I wonder if Nvidia will do the Pascal release as they did the Maxwell release; as in Pascal-1 first for mobile, then sometime later Pascal-2.

And with that thought: Pascal-1 on Samsung 14nm and Pascal-2 on TSMC's 16FF.
 
There's no fast DP variant of Maxwell; NVIDIA's skipping a generation in that area. So I'm wondering if "big" Pascal may be part of the first round of Pascal chips (which could also include an xx7 part).

I don't expect any first round (non-Tegra) Pascal chips to be in the same segment as any 14/16 nm Maxwell chips. Given that chips nowadays tend to last a few years before being succeeded, I don't think it makes much sense for, say, a 16 nm GM206 shrink to show up at the beginning of next year only to be replaced later that year.
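For anyone who wants to see that DP gap on actual hardware, the CUDA runtime exposes the FP32:FP64 throughput ratio as a device attribute; a minimal sketch, assuming a toolkit recent enough to have cudaDevAttrSingleToDoublePrecisionPerfRatio:

Code:
// Minimal sketch: query how much slower FP64 is than FP32 on the installed GPU.
// "No fast DP variant of Maxwell" shows up here as a 32:1 ratio, versus 3:1 on
// a full-rate Kepler GK110 Tesla.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int dev = 0, ratio = 0;
    cudaGetDevice(&dev);
    cudaDeviceGetAttribute(&ratio, cudaDevAttrSingleToDoublePrecisionPerfRatio, dev);
    std::printf("FP32:FP64 throughput ratio on device %d is %d:1\n", dev, ratio);
    return 0;
}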
 
By Nvidia's own words Pascal doesn't seem much different from Maxwell in terms of architecture.

Not according to Jen-Hsun Huang's latest words:

“I cannot wait to tell you about the products that we have in the pipeline,” said Jen-Hsun Huang, chief executive officer of Nvidia, at the company’s quarterly conference call with investors and financial analysts. “There are more engineers at Nvidia building the future of GPUs than just about anywhere else in the world. We are singularly focused on visual computing, as you guys know.”


http://www.kitguru.net/components/g...about-future-pascal-products-in-the-pipeline/

The last time he talked this much about the number of engineers and man-hours going into a product to hype it up, NVIDIA launched the G80...
 
Not according to Jen-Hsun Huang's latest words:

“I cannot wait to tell you about the products that we have in the pipeline,” said Jen-Hsun Huang, chief executive officer of Nvidia, at the company’s quarterly conference call with investors and financial analysts. “There are more engineers at Nvidia building the future of GPUs than just about anywhere else in the world. We are singularly focused on visual computing, as you guys know.”


http://www.kitguru.net/components/g...about-future-pascal-products-in-the-pipeline/

The last time he talked this much about the number of engineers and man-hours going into a product to hype it up, NVIDIA launched the G80...
He doesn't specify that he means just Pascal. Based on GTC, the biggest differences between Maxwell and Pascal are HBM memory, mixed-precision support, and NVLink support.
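For reference, "mixed precision" here is essentially the packed-FP16 path that Tegra X1 already exposes: two half-precision values per 32-bit register handled by one instruction. A minimal device-code sketch (assumes cuda_fp16.h and native FP16 math, i.e. compute capability 5.3 or later; the kernel name is just illustrative):

Code:
// Packed-FP16 AXPY: each __half2 holds two FP16 values, and one __hfma2
// performs two half-precision multiply-adds at once.
// Needs sm_53 or later (Tegra X1 and up) for native FP16 arithmetic.
#include <cuda_fp16.h>

__global__ void axpy_fp16(int n2, __half2 a, const __half2 *x, __half2 *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // n2 = number of __half2 pairs
    if (i < n2) {
        y[i] = __hfma2(a, x[i], y[i]);               // y = a * x + y, two lanes at a time
    }
}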
 
There's no fast DP variant of Maxwell; NVIDIA's skipping a generation in that area, ...
Seems like Pascal evolved from what Maxwell was meant to be, had there not been those long delays. But those delays mean "Paxwell" gets next-gen bandwidth, so I'm not complaining.

PS: thanks to AMD for doing some process pipe-cleaning for HBM :-}
 
Tegra X1, which is considered Maxwell, already supports FP16, so I would assume the required changes are quite small
OK. Although I suppose Tegra X1 is very mobile-oriented (where the FP16 format is very beneficial), and Pascal could end up being quite different with respect to shader core configuration.
 
Ah, yes, that's possible. But I don't think FP16 support would be especially likely to motivate changes, since it didn't in Tegra X1 or in Tonga.
 
PS: thanks to AMD for doing some process pipe-cleaning for HBM :-}

And also for demonstrating that it's economical enough to put on a consumer GPU, albeit a high-end one.
I was skeptical about the cost, but that was an unfounded feeling; I guess it's simply something that becomes viable with production volume.

I guess we'll see an xx7 chip (whatever it's called), and indeed, as iMacmatician says, they can just go on selling the GM200 line.

Not sure what the difference would be between a shrunk Maxwell with FP16 added and a Pascal without the NVLink and HBM. It just seems to be the same thing to me.
Note that I'm imagining 16nm Maxwell to be about the same as Pascal, sort of like the difference between GK107/GK104 and GK110. I'm figuring they can get away with selling GM20x for a while.

/edit: or the difference between GK208 and GK210, if that's a slightly more subtle comparison.
 