NVIDIA Maxwell Speculation Thread

http://pastebin.com/jm93g3YG

A post by someone claiming to know the specs of upcoming Maxwell chips. Seems plausible enough to be true. What do you all think?

Thanks Ailuros. You Germans seem to be having all the fun.

Completely plausible, when Maxwell hasn't even taped out yet (seriously, nobody has seen this architecture taped out). It looks more like what he expects to see.
TSMC doesn't use FinFETs at 20nm; FinFETs are only coming from GF and Samsung, on 14nm and 16nm.
 
Completely plausible, when Maxwell hasn't even taped out yet (seriously, nobody has seen this architecture taped out). It looks more like what he expects to see.
TSMC doesn't use FinFETs at 20nm; FinFETs are only coming from GF and Samsung, on 14nm and 16nm.
Of course TSMC has plans for FinFETs. It's planned as an addition to 20nm and is called 16FF. It is really their 20nm process (so no shrink compared to their 20SoC process) upgraded with FinFETs. It's supposed to come about a year after 20nm, which would fall in line with a refresh a year later.
 
Source? Quadro is mainly for workstations, 3D modelling, and visualization. Nvidia wants people to use the Maximus platform, with Quadro for these tasks and Tesla for the (DP?) computing.

The feature set of the Quadro line is a superset of the Tesla line's.

Typically, in a Maximus setup you'd get a relatively cheap, powerful Tesla card with a less powerful Quadro for visualization. I'm not sure anyone is actually doing that, but it's the idea.
 
Of course TSMC has plans for FinFETs. It's planned as an addition to 20nm and is called 16FF. It is really their 20nm process (so no shrink compared to their 20SoC process) upgraded with FinFETs. It's supposed to come about a year after 20nm, which would fall in line with a refresh a year later.

I know that, but one year after 20nm means 2015 at best (2016 more likely), and it's 16nm FinFET. I don't see why he's speaking about 20nm FinFET from TSMC... FinFETs will only be used at 14-16nm (even if that's in reality a mix of different processes).

It's unusable for Nvidia's Maxwell. It could be for Volta, but Volta is not coming anytime soon, so I don't know what will be ready by then.
 
I know that, but one year after 20nm means 2015 at best (2016 more likely), and it's 16nm FinFET. I don't see why he's speaking about 20nm FinFET from TSMC...
I don't read anything in there specifically naming a process size. The first, smaller Maxwell chips could very well still arrive on 28nm (GM100 obviously won't fit).
And of course it means a 2015 time frame for a Maxwell refresh. From all we know, larger chips on 20nm will only arrive in the second half of 2014, probably closer to the end for larger volumes in the consumer space. Is it so inconceivable that nV will refresh the Maxwell line about a year later to realize the gains from FinFETs before Volta arrives some time in 2016? The next smaller process (which the top dog of Volta will need) won't be ready before 2016 anyway. So I don't see the massive problem with the timeline, especially as nVidia (Huang himself) is on record to deliver Maxwell plus Project Denver ARM cores together on a FinFET process in the Tegra chip Parker in 2015.
It's unusable for Nvidia's Maxwell. It could be for Volta, but Volta is not coming anytime soon, so I don't know what will be ready by then.
As I said, Volta needs the next smaller process after 16FF (10FF); nV could maybe introduce smaller variants (GV106 and GV104???) on 16FF, but GV100 simply won't fit. It's unrealistic to expect it before the (very) end of 2016.
 
I don't read anything in there specifically naming a process size. The first, smaller Maxwell chips could very well still arrive on 28nm (GM100 obviously won't fit).
I was assuming that all GXyzt chips with the same z value would be on the same process, but you have a good point (after all, GF117 was 28nm while all the other GF11x chips were on 40nm).
 
The big thing is that they [ARM cores] will be used by the graphics driver to offload some heavy lifting from the system CPU. Basically, most of the driver will be running on the GPU itself! Nvidia expects this will give them at least the same speedup as AMD will get from Mantle, but without using a new API, with straight DX11 or OpenGL code!

It should hardly be necessary to call out that marketing bullshit in here (in short: it isn't a driver-side CPU processing issue), but you can't help wondering where that puts the rest of the information in this "leak".
 
It should hardly be necessary to call out that marketing bullshit in here (in short: it isn't a driver-side CPU processing issue), but you can't help wondering where that puts the rest of the information in this "leak".
It makes me wonder how often we see "marketing bullshit" and a lot of spin to bend the truth in official presentations. :LOL:
 
Huh? It's fully expected there, of course. And I'm not surprised to see that kind of "nvidia expects" leak to counter the Mantle announcement either...
The question is just whether we should put the rest of the info in the message in the same category then ;)
 
Huh? It's fully expected there, of course. And I'm not surprised to see that kind of "nvidia expects" leak either...
The question is just whether we should put the rest of the info in the message in the same category then ;)
How do some numbers fit in the same category as marketing spin?
Or do you mean it appears legit because it contains some nVidian-style marketing phrases? That would also be an angle to look at it. :D
 
Yeah, some of them could be wrong (or even both...).
Maybe the driver is not detecting the SMX size correctly? GF108 was reported as 64 SPs by the control panel in early drivers (it used the 32-SP base of GF100 instead of the 48-SP base of GF10x for the calculation). So it could be 768 SPs (with one Maxwell SMX turned off).

On the other hand, the Maxwell SMX could grow to 288 SPs (3*3*32 SPs) with 16 decoupled TMUs, to get a bigger ALU:Tex ratio. In the bigger chips they could integrate the dedicated DP ALUs of Kepler.
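To illustrate the suspected miscount, here's a minimal sketch, assuming the control panel simply multiplies the detected SM count by a hard-coded per-SM SP base (the Maxwell figures below are pure speculation on my part):

```python
# Minimal sketch of the suspected driver miscount: the control panel is
# assumed to multiply the detected SM count by a hard-coded SPs-per-SM base.

def reported_sps(sm_count: int, sps_per_sm_base: int) -> int:
    """SP count as a driver would report it for a given per-SM base."""
    return sm_count * sps_per_sm_base

# GF108: 2 SMs of 48 SPs each (96 total), but early drivers used the
# 32-SP base of GF100 and reported 2 * 32 = 64 SPs.
print(reported_sps(2, 48))   # 96  -- actual
print(reported_sps(2, 32))   # 64  -- early-driver misreport

# Hypothetical Maxwell analogue: with a wrong per-SMX base, a 768-SP
# part (4 x 192) could show up in a leak as 576 (4 x 144).
print(reported_sps(4, 192))  # 768 -- hypothetical actual
print(reported_sps(4, 144))  # 576 -- hypothetical misreport
```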
 
64-bit GDDR3 for 576 cores?

Doesn't make much sense... that would give 15GB/s of memory bandwidth at best.
Besides, how long ago was the last card launched with GDDR3? Three or four years? DDR3 nowadays can be a lot faster and cheaper than GDDR3, so all low-end cards use DDR3.

64-bit GDDR5 at 6GHz would make much more sense, though. 48GB/s sounds like a more appropriate combination for a low-end card.
Even though that probably means 8 ROPs at most, right?
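As a sanity check on those figures, a quick sketch of the arithmetic (the ~1.9 GT/s effective GDDR3 rate is an assumption on my part):

```python
# Peak memory bandwidth = bus width in bytes x effective data rate.

def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gtps: float) -> float:
    """Peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gtps

print(peak_bandwidth_gbs(64, 1.9))  # ~15 GB/s -- 64-bit GDDR3, the "15GB/s at best"
print(peak_bandwidth_gbs(64, 6.0))  # 48 GB/s  -- 64-bit GDDR5 at 6 GT/s
```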
 
GK180 makes no sense to me at all. If they did another high-end 28nm chip it should be a gk2xx, imho. gk208 also has only half the TMUs per SMX, which might be quite a worthy change for such a card (pro cards don't need lots of TMUs, and even for gaming it doesn't seem to hurt much) and could make room for another SMX or so without increasing die size.
I have to say, though, that "speculation" that K6000 isn't using gk110 is utterly ridiculous, so there are really zero hints (that I know of...) that another Kepler chip is in the works.

Most likely it's just a fancy name for "GK110B", or a similar optimization as GF100 → GF110.
 