NVIDIA Maxwell Speculation Thread

My problem is that the article mentions a variable TDP of 100-185W. At 100W it would probably be barely faster than a 980M.
Question is, why wasn't there a full GM204 mobile from the beginning? It may very well be, for example, that obtaining a high quantity of fully enabled low leakage parts was at first impossible, but has now become possible. In which case even at a 100W TDP there may be a large increase in performance.
Or it may be that fully enabled GM204 mobile was simply held back for purposes of having a refresh later, in which case, as you said, there will be no or low performance increases at 100W.
 
Why would Nvidia need to put out a full-die GM204 before this point? The GTX 980M (and the GTX 970M, for that matter) already sits above the competition's highest-performing mobile AIC, the R9 M295X.
Seems counter-intuitive to lead with your best performer when you aren't obliged to do so - especially from amortization and refresh strategy points of view.
 
Yup. I should have been more clear. There is no doubt in my mind that not releasing the full GM204 mobile was the right move for Nvidia. The question is whether they actually had any other option, due to yields of low-leakage parts.
 
http://www.anandtech.com/show/9516/...-m4000-video-cards-designworks-software-suite
Quadro M5000 is a full GM204 solution with 6.6GT/s memory and a 1050MHz boost clock (base clock not mentioned), at a 150W TDP.

There's your GTX 990M.
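For reference, a quick check of what that memory spec implies for bandwidth (a Python sketch; the 256-bit bus is my assumption, based on GM204's known memory interface):

```python
# Memory bandwidth = bus width (bytes) * per-pin data rate (GT/s)
bus_width_bits = 256   # assumed: GM204's full memory interface
data_rate_gtps = 6.6   # from the Anandtech article above

bandwidth_gbs = bus_width_bits / 8 * data_rate_gtps
print(f"{bandwidth_gbs:.1f} GB/s")  # 211.2 GB/s, vs 224 GB/s on the desktop GTX 980
```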


10-30% higher actual clocks, not maximum boost clocks.
Doesn't make any more sense than the boost clocks (which AFAIK are actual clocks and make a big difference to the actual performance).
The base clocks for the 980M are already 1038MHz. 30% above that is 1350MHz for base (or what you call actual) clocks. It's still ridiculous.
 
My point was that the GTX 980M theoretically has 70% shading, 75% texturing and 70% memory performance of the desktop GTX 980.
The GTX 980M actually has 58-75% of the performance of the desktop GTX 980.
There is a possibility that the 12% performance gap against the reference card arises from the GTX 980M throttling more. The performance gap against binned cards is even larger.
No matter what you call the clocks that matter, increasing the power limit is a way of increasing actual, not just theoretical, performance.
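One way to arrive at figures close to those quoted above (a sketch; the reference specs are commonly published numbers and are my assumption, not taken from this thread):

```python
# Assumed reference specs, for illustration:
m = {"cores": 1536, "tmus": 96, "mhz": 1038, "bw": 160}   # GTX 980M
d = {"cores": 2048, "tmus": 128, "mhz": 1126, "bw": 224}  # desktop GTX 980

clock_ratio = m["mhz"] / d["mhz"]
shading   = m["cores"] / d["cores"] * clock_ratio  # unit count * clocks
texturing = m["tmus"] / d["tmus"]                  # unit count alone
memory    = m["bw"] / d["bw"]

print(f"shading {shading:.0%}, texturing {texturing:.0%}, memory {memory:.0%}")
# -> shading 69%, texturing 75%, memory 71%
```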
 
All of that is correct, but it doesn't change the fact that your post's claim of up to 30% higher clocks, with more enabled units on top, is a completely unrealistic goal for a GM204 at 150-180W.
33% more enabled units, with 1.5-2 times the previous TDP, plus more time to bin low-leakage GPUs.
There is a reason for the "up to". If the game is already power-limited, it will be the one that gets 10-15% higher clocks. If it isn't power-limited, it might get more, if Nvidia unlocks additional voltage bins.
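A toy estimate of how the power budget stacks up (a sketch; it assumes dynamic power scales with active units × frequency × voltage², and the voltage figure is illustrative, not measured GM204 data):

```python
def rel_power(unit_ratio, freq_ratio, volt_ratio):
    # Dynamic power ~ active units * f * V^2 (leakage ignored)
    return unit_ratio * freq_ratio * volt_ratio ** 2

# 33% more units (2048 vs 1536) at unchanged clocks and voltage:
print(f"{rel_power(2048/1536, 1.00, 1.00):.2f}x")  # ~1.33x
# ...plus 15% higher clocks, assuming ~8% more voltage to reach them:
print(f"{rel_power(2048/1536, 1.15, 1.08):.2f}x")  # ~1.79x the old budget
```

Starting from a ~100W part, that lands right around the 150-180W range being discussed.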
 
NVIDIA GRID 2.0 Launches With Broad Industry Support

NVIDIA GRID 2.0 delivers unprecedented performance, efficiency and flexibility improvements for virtualized graphics in enterprise workflows. Employees can work from almost anywhere without delays in downloading files, increasing their productivity. IT departments can equip workers with instant access to powerful applications, improving resource allocation. And data can be stored more securely by residing in a central server rather than individual systems.

The ability to virtualize enterprise workflows from the data center has not been possible until now due to low performance, poor user experience and limited server and application support. NVIDIA GRID 2.0 integrates the GPU into the data center and clears away these barriers by offering:
  • Doubled user density: NVIDIA GRID 2.0 doubles user density over the previous version, introduced last year, allowing up to 128 users per server. This enables enterprises to scale more cost effectively, expanding service to more employees at a lower cost per user.
  • Doubled application performance: Using the latest version of NVIDIA's award-winning Maxwell GPU architecture, NVIDIA GRID 2.0 delivers twice the application performance of the previous version -- exceeding the performance of many native clients.
  • Blade server support: Enterprises can now run GRID-enabled virtual desktops on blade servers -- not simply rack servers -- from leading blade server providers.
  • Linux support: No longer limited to the Windows operating system, NVIDIA GRID 2.0 now enables enterprises in industries that depend on Linux applications and workflows to take advantage of graphics-accelerated virtualization.
More than a dozen enterprises in a wide range of industries have been piloting NVIDIA GRID 2.0 and are reporting direct business benefits in terms of user productivity, IT efficiency and security improvements.

http://www.guru3d.com/news-story/vidia-grid-2-launches-with-broad-industry-support.html
 
*can't edit yet*
Seems there's been some discussion about Async on some previous pages; still, there are some weird things going on when devs are asked by Nvidia to disable DX12 features for performance.
 
Asus’ GX700 is an insane, water-cooled laptop — with an unreleased Nvidia GPU

Currently, the top-end GTX 980M packs 1,536 GPU cores, 160GB/s of memory bandwidth, 96 TMUs, 64 ROPs, and a base clock of 1,038MHz.
The GX700, in contrast, reports 2,048 cores, a 1,190MHz clock speed, 128 TMUs, 64 ROPs, and 160GB/s of memory bandwidth. Those specs put this new GPU in between the GTX 980M and the standard desktop GTX 980. It wouldn't surprise us if Nvidia is planning to unveil the first mobile Titan.

One key concern of many gamers is going to be whether or not the system has to use water cooling. Asus told Computerbase.de that it doesn’t, but that hooking up the water cooler and cranking up clock speeds on both the CPU and graphics card can increase performance by as much as 80%. That’s a very interesting claim — but the sheer size of that number gives us a bit of pause.
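For scale, the peak FP32 throughput those specs imply (a sketch; 2 FLOPs per core per clock for FMA, and the desktop GTX 980's 1126MHz base clock, are my assumptions):

```python
def peak_gflops(cores, clock_mhz, flops_per_clock=2):  # 2 = one FMA per clock
    return cores * flops_per_clock * clock_mhz / 1000

print(f"GTX 980M:  {peak_gflops(1536, 1038):.0f} GFLOPS")  # ~3189
print(f"GX700 GPU: {peak_gflops(2048, 1190):.0f} GFLOPS")  # ~4874
print(f"GTX 980:   {peak_gflops(2048, 1126):.0f} GFLOPS")  # ~4612 at base clock
```

On paper the GX700 part actually edges past the desktop 980's base-clock figure; the "in between" placement presumably accounts for the desktop card's boost clocks.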


http://www.extremetech.com/extreme/...r-cooled-laptop-with-an-unreleased-nvidia-gpu

http://www.computerbase.de/2015-09/...hltes-laptop-mit-2.048-shader-gpu-von-nvidia/
 
It's the GTX 990M, a GM204.
It claims a 1190MHz core clock, but that value is probably only attainable with the water-cooling block attached.
 
Nvidia's next laptop graphics chip is a full, desktop-class GTX 980

This week, the company has announced that the GTX 980 has migrated across to the gaming laptop space, its hardware spec complete and unaltered from its desktop iteration. According to the firm, a GTX 980 in the laptop form factor is 35 per cent faster than the current performance king, the GTX 980M. Based on the hands-on benchmarking we carried out, the claims have merit.
...
Meanwhile, the arrival of the GTX 980 Ti has redefined the high-end to the 980's detriment: if you're spending $500 on a graphics card, why not save up $150 more and get the top-tier product? Squeezed from both directions, the desktop GTX 980 has lost some of its sheen. Propelled into the laptop space, it should be the absolute state of the art - something we look forward to checking out soon.
http://www.eurogamer.net/articles/digitalfoundry-2015-nvidias-next-laptop-graphics-chip-is-gtx-980

Six models announced - one with a water cooler.

http://www.geforce.com/whats-new/ar...Graphic&utm_campaign=Gaming-Email-Sep-2015-US
 
So, this is essentially a "980 Ti" for laptops, since mobile SKU nomenclature is usually bumped up compared to the specs.
 
Isn't the normal 980M exactly the same chip, but with shaders deactivated and at lower clocks?

I don't understand how these laptop chips can be so small while the desktop cards are so unbelievably massive. Especially considering some 980M versions have 8GB of RAM.
 
8 chips at 32-bit each is 256-bit, so yes, full desktop width. The GTX 980/970 has 8 chips too.
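The arithmetic behind that, plus the bandwidth it yields at the published data rates (a sketch; 7GT/s and 5GT/s are the commonly listed GTX 980 and GTX 980M memory speeds):

```python
chips, bits_per_chip = 8, 32
bus_width = chips * bits_per_chip   # 256-bit, same as the desktop cards
print(bus_width)

def bandwidth_gbs(bus_bits, data_rate_gtps):
    return bus_bits / 8 * data_rate_gtps

print(bandwidth_gbs(256, 7.0))  # 224.0 GB/s -> desktop GTX 980
print(bandwidth_gbs(256, 5.0))  # 160.0 GB/s -> GTX 980M, as quoted above
```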

They've probably been binning these chips for a long time. The margins must be crazy...
 