NVIDIA Maxwell Speculation Thread

Why is it that low leakage chips cannot be run at low voltage, provided they are stable?
High leakage chips need to be run at low voltage to limit power consumption. That doesn't mean low leakage chips can't run at low voltage... if they are stable.

It's just that transistor speed and leakage are related. A higher-leakage transistor will usually be faster than a low-leakage one, all other things being equal. (But things are often not equal...)
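As a back-of-envelope illustration of that trade-off (every constant below is invented, not measured): dynamic power grows roughly as V², while a high-leakage part burns extra static power that rises even faster with voltage, which is why it gets handed a lower default voltage despite its faster transistors.

```python
# Toy model of power vs. supply voltage; all constants are made up
# purely for illustration.

def dynamic_power(v, f_ghz, c_eff=1.0, activity=0.2):
    """Switching power: roughly activity * C * V^2 * f."""
    return activity * c_eff * v**2 * f_ghz

def leakage_power(v, i_leak_base=0.5, sensitivity=3.0, v_nominal=1.0):
    """Static power: V * I_leak, with I_leak rising steeply with voltage."""
    i_leak = i_leak_base * (v / v_nominal) ** sensitivity
    return v * i_leak

for v in (0.9, 1.0, 1.1, 1.2):
    print(f"V={v:.1f}: dynamic={dynamic_power(v, f_ghz=1.2):.2f}, "
          f"leakage={leakage_power(v):.2f} (arbitrary units)")
```

Double the i_leak_base (a leakier chip) and the static term quickly dominates at high voltage, while subzero cooling suppresses it again, which is what the LN2 post below is getting at.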
 
When we overclock under subzero, what we want is the lowest ASIC possible, for a reason: we control the leakage through the cooling, and those chips can take any voltage you want to throw at them. They are less likely to give problems with cold bug... and they can take any voltage (for a golden chip).

I'll confirm this. I've had a few EVGA 780 Classifieds where you can vary three separate voltages that affect OC'ing. In my case, the card with an ASIC of 92 required less current to reach a high OC, but was not able to OC as high as a card with an ASIC of 67. The lower-ASIC card was able to take much more current while remaining stable, and beyond a certain point it achieved higher OCs. It seemed some current leakage actually helped with extreme OCs...
 
That's what I'm not sure about. I think they can, but perhaps the manufacturers don't bother. I assume they don't test individual chips; they sample a few from a batch and characterize the leakage based on that: given X leakage, we can run these chips at Y voltage. I don't think they bother testing whether they'll actually run at a lower voltage.
Every single chip is tested for many, many parameters. The data collected over the course of a production run is a statistician's dream.
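Purely as a hypothetical sketch of the "given X leakage, run these chips at Y voltage" mapping being debated here (the thresholds and voltages below are invented, and real characterization, as just noted, covers many more parameters per chip):

```python
# Hypothetical leakage-to-voltage binning; thresholds and voltages are
# invented for illustration only.

VOLTAGE_BINS = [
    # (max leakage in mA, default operating voltage in V)
    (20.0, 1.20),  # low leakage: slower transistors, needs more voltage
    (50.0, 1.10),
    (90.0, 1.00),  # high leakage: faster, but voltage capped to limit power
]

def assign_voltage(leakage_ma):
    """Return the default operating voltage for a chip's measured leakage."""
    for max_leak, voltage in VOLTAGE_BINS:
        if leakage_ma <= max_leak:
            return voltage
    raise ValueError("leakage out of spec: chip rejected")

print(assign_voltage(15.0))  # -> 1.2
print(assign_voltage(75.0))  # -> 1.0
```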
 
Ha, well there you go, from the horse's mouth! So can you elaborate on what determines the default operating voltage for a given chip?
 
Pretty good clocks ... :)

A week or so ago GALAX announced their Hall of Fame edition GeForce GTX 980, which boosts over 1400 MHz by default on air. We just received our sample and I wanted to share some photos with you guys already.

  • Voltage tool (developed by enthusiasts) allows voltage beyond stock limits
  • Hardware dual BIOS simplifies the tweaking process by enabling risk-free firmware updates and customization
  • Preliminary internal overclocking results: >1.5 GHz on air, >2.1 GHz on LN2

http://www.guru3d.com/index.php?ct=news&action=file&id=8839
 
The amount of BS in that article makes my head hurt.

The only reasonable claim is TSMC buying $46M in new equipment, which should be just enough to buy a fab-certified microwave.

Seriously... reading that made my head hurt too.

I did read some interesting news a while back, though. The CEO of Ultratech claimed that "a major logic manufacturer has delayed their FinFET ramp", and rumours say this is Samsung. He also says yields are bad all around at the moment (10 to 20%). Link to full transcript - Here
Nvidia won't release anything below 28 nm in 2015. There you have it. Or so they announced to some select people...

Assuming you mean GPUs only, as Erista is on 20nm and will ship in 2015. I would be surprised if they didn't ship any 16nm GPUs in 2015 though, even in limited quantities.
 
I meant GPU only, yeah.

We'll see how it pans out, but they did claim 28 nm only throughout 2015 in a recent meeting with developers.
 
"GeForce GTX 965M (13D9) is listed under the same Device ID tree as GTX 980M (13D7) and GTX 970 (13D8)."

Would that imply that the 965M uses a GM204 GPU?
 
"GeForce GTX 965M (13D9) is listed under the same Device ID tree as GTX 980M (13D7) and GTX 970 (13D8)."

Would that imply that the 965M uses a GM204 GPU?

There's probably another Maxwell chip coming. nVidia only has a 5-SMM part (GM107) and a 16-SMM one (GM204).
The previous Kepler generation had a middle chip between GK107 and GK104, called GK106. There's a big hole between GM107 and GM204, so I guess there's still a GM206.

My guess is it's a 10-SMM part. The GTX 970M shipping a large GM204 chip with 6 laser-cut SMMs (almost 40% of the chip) is probably costly for nVidia.
This GTX 965M should be close to a 970M in performance but with lower power draw.
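For scale, each Maxwell SMM carries 128 CUDA cores, so the configurations in question work out as follows (the GM206 line is the guess above, not a confirmed spec):

```python
# Maxwell core counts: 128 CUDA cores per SMM. The GM206 entry is
# speculation from the post above, not a confirmed specification.

CORES_PER_SMM = 128

parts = {
    "GM107 (full)":          5,
    "GM206 (guessed)":      10,
    "GTX 970M (cut GM204)": 10,  # 16 SMMs with 6 fused off
    "GM204 (full)":         16,
}

for name, smm in parts.items():
    print(f"{name}: {smm} SMM -> {smm * CORES_PER_SMM} CUDA cores")
```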
 
Both options are credible. Using GM204 would be okay because it's already there and the strategy is simple enough: build a ton of GM204 wafers. GM206 would show up later, when it's more beneficial not to waste wafer area.
 

I wouldn't be surprised if this 965M is based on GM204 and the 960M on GM206.
That could explain the "xx5" moniker, and a lower GM206-based version could come at the same time as the desktop GTX 960. Then again, I wouldn't be surprised if the first GTX 960 were a "Ti" version based on GM204.

It's not a first for Nvidia; they already did it with Kepler- and Fermi-based cards (the desktop GTX 660 Ti, 560 Ti, etc., and, if I'm right, with some laptop GPU models in the past).
Mobile GPUs need to be ready really early and sent to laptop builders, who design the motherboard layout, the cooling, and the case of their laptop around the hardware.
The GPU takes an important place in the testing and development of the final product, and it also affects the production cost and the final price. (It will surprise nobody that the cooling often seems made to just keep up with the GPU under load, and not 1 W more.)

Marketing-wise it even benefits Nvidia: they get a strong laptop GPU performance-wise, and they can control the average and max TDP really well with boost and base clock speeds.
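As a toy sketch of that last point (none of these numbers or formulas are NVIDIA's; the power model is a crude linear stand-in): a TDP cap plus fixed base/boost limits lets the vendor bound both average and peak power for the laptop builder.

```python
# Toy boost governor: pick the highest clock whose estimated power fits
# the TDP budget, clamped between base and max boost. All numbers and
# the linear power model are invented for illustration.

BASE_MHZ, MAX_BOOST_MHZ = 924, 1126
TDP_W = 75.0
WATTS_PER_MHZ = 0.075  # invented proportionality at full load

def boost_clock(load_fraction):
    """Highest clock (MHz) whose estimated draw stays within the TDP."""
    budget_mhz = TDP_W / (WATTS_PER_MHZ * load_fraction)
    return max(BASE_MHZ, min(MAX_BOOST_MHZ, budget_mhz))

for load in (0.6, 0.9, 1.0):
    print(f"load={load:.0%}: ~{boost_clock(load):.0f} MHz")
```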
 
But how much further will they cut the GM204 down to fit a solution with lower performance than the 970M?
The 970M already cuts the GM204 by almost 40%.
 

Just cut the base and boost clock speeds? But you've got a point; I had forgotten that the 970M was already cut so much.
 
I don't know; just lower clocks wouldn't usually warrant a new name, since clocks typically aren't that fixed on mobile parts anyway. But yes, going down even lower than 10 (out of 16) SMMs and 3 (out of 4) memory partitions/ROP blocks gets a bit into the silly range...
 

I am thinking of AMD APUs. If you want a cheap one, it will have half the CPU disabled, half the GPU disabled, and 3/4 of the L2 missing. The GPU can be disabled even further.
In the old days we used to have CPU dies that were identical except that one had twice the L2 of the other.
 
I suppose this fits here, too:
Original source: http://www.chiphell.com/thread-1196441-1-1.html (but it just keeps loading, loading and loading)
Graphs available here too: http://www.forum-3dcenter.org/vbulle...7#post10455327

These are claimed tests from ChipHell, a site known for both legit and fake leaks. Without being able to see whether the tests are actually ChipHell's own or just a random forum post, it's hard to say one way or the other whether they're more likely fake or real.

According to the first graph, averaging performance over 20 games, the Fiji XT engineering sample is somewhere around 10-15% faster than the GTX 980, while GM200's cut-down version is some 2-5% faster than the Fiji XT ES.
They claim that in BF4 multiplayer the Fiji XT ES uses some 15-20% more power than the GTX 980, while the cut-down GM200 uses roughly 5% (give or take a few percent) more than the Fiji XT ES.

The second graph has numbers for BF4 MP, CoD: AW, DA: Inquisition, Ryse and Watch Dogs; it also includes the "full fat" GM200 and a Bermuda XT ES in addition to the Fiji XT ES and the cut-down GM200.
In it, the Bermuda XT ES is faster than the full-fat GM200 in all games, with the difference ranging from just a few percent to well over 10%. Fiji XT, meanwhile, is slower than the cut-down GM200 in most games, but slightly faster in Dragon Age and Ryse.
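Taking the claimed ranges at face value and chaining their midpoints multiplicatively gives a rough common baseline (purely illustrative; the underlying numbers are unverified leaks):

```python
# Chain the claimed gaps from the first graph onto a GTX 980 = 1.00
# baseline, using range midpoints. Unverified leak numbers.

gtx_980 = 1.00
fiji_xt_es = gtx_980 * 1.125    # "10-15% faster than GTX 980"
gm200_cut = fiji_xt_es * 1.035  # "2-5% faster than Fiji XT ES"

print(f"Fiji XT ES  ~= {fiji_xt_es:.2f}x GTX 980")  # ~1.13x
print(f"GM200 (cut) ~= {gm200_cut:.2f}x GTX 980")   # ~1.16x
```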
 
So along with that, they also say:

1. These are made on GlobalFoundries' 20nm process
2. The flagship card is not Fiji XT, but Bermuda
3. Bermuda is 65% faster than the 290X, and the reference design uses hybrid cooling (water and air)
4. The GM200 engineering sample beats the 980 by 34%

From here.

Graph number 1 - http://i.imgur.com/xfvsB1C.png

Number 2 - http://i.imgur.com/qRd8w0k.png

Rough translation:

1. The new generation of graphics cards is reportedly made on GlobalFoundries' 20nm process
2. The flagship is not Fiji XT but Bermuda XT, which finished the test 65% faster than the 290X
3. The reference-design Bermuda XT uses hybrid water + air cooling, with "power remaining in control, within the acceptable range" (reasonable, if compared with Hawaii?). The GM200 (cut-down) engineering sample is a 21 SMM / 2688 SP part; nobody knows what the final specification will be, so one can only guess it would be named 980 Ti. The GM200 (full-fat) is 34% faster than the 980
 