NVIDIA Maxwell Speculation Thread

If everything is scaled up by 50%, and the clocks are a little lower, you don't even need (potentially fake) benchmarks to have high confidence in it being 40-45% faster than a GTX 980.

In the suite of 22 games (if that's what "22-game" means) it's 32% faster. But I suppose scaling may not be ideal at this level of performance.
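As a back-of-the-envelope check (my own guesses, not from the leak): 50% more units, slightly lower clocks and a small scaling loss already land close to the reported figure.

# Rough speculation only: 1.5x the units, lower clocks, imperfect scaling.
unit_ratio = 1.5            # GM200 vs GM204 unit count (assumed)
clock_ratio = 1000 / 1126   # hypothetical ~1 GHz vs the GTX 980's 1126 MHz base clock
scaling_eff = 0.98          # assumed loss from the chip being wider (guess)
speedup = unit_ratio * clock_ratio * scaling_eff
print(f"Estimated speedup over GTX 980: {speedup:.2f}x")   # ~1.31x with these guesses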
 
GTX 960, 970 and 980 Game Bundle includes The Witcher 3: Wild Hunt.

To experience The Witcher 3: Wild Hunt at its very best, with every effect enabled at a high resolution, you may well need an upgrade, so kill two wyverns with one stone by buying an “Undeniably Epic” GeForce GTX 980, 970 or 960, which’ll give you the performance you need as well as a free copy of The Witcher 3: Wild Hunt.

http://www.guru3d.com/news-story/nvidia-bundles-the-witcher-3-wild-hunt-with-geforce-gtx.html
 
Well, Alexko has a point: nothing can be scaled up perfectly forever. There are always going to be some inefficiencies somewhere along the way.

And how relevant is this now? It is hypothetical IMO since it seems evident by previous GPUs that Titan X will have lower clocks than GTX 980.
 
And how relevant is this now? It is hypothetical IMO since it seems evident by previous GPUs that Titan X will have lower clocks than GTX 980.

I don't think it is. GM200 is pretty much 1.5×GM204, which has a TDP of 180W. And 180W×1.5 = 270W, which is fine. I see no requirement to lower clock speeds.
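A minimal sketch of that linear-scaling argument (the 1.5× factor and the 180W figure are from the post above; treating power as scaling linearly with unit count is exactly the assumption being made):

# Naive linear scaling of GM204 -> GM200 at equal clocks and voltage.
gm204_tdp_w = 180                  # GTX 980 TDP as quoted above
scale = 1.5                        # GM200 is roughly 1.5x GM204 in units
gm200_tdp_w = gm204_tdp_w * scale
print(gm200_tdp_w)                 # 270.0 W, if power really did scale linearly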
 
And how relevant is this now? It is hypothetical IMO since it seems evident by previous GPUs that Titan X will have lower clocks than GTX 980.
We were discussing relative performance between a GTX980 and Titan X. That depends on the clocks and the loss of efficiency due to it being wider. Those 2 effects happen to be cumulative.

Other than that, it's not relevant at all.
 
I don't think it is. GM200 is pretty much 1.5×GM204, which has a TDP of 180W. And 180W×1.5 = 270W, which is fine. I see no requirement to lower clock speeds.

I do. I doubt the TDP will be 270W; more likely 250W. Second, you might need a higher voltage to drive clocks through such a large chip compared to GM204. How high did GK110 clock vs GK104, and what were the TDPs? 875 MHz for the 780 Ti vs 1046 MHz for the GTX 770 or 1006 MHz for the GTX 680 (base clocks).
 
I do. I doubt the TDP will be 270W; more likely 250W. Second, you might need a higher voltage to drive clocks through such a large chip compared to GM204. How high did GK110 clock vs GK104, and what were the TDPs? 875 MHz for the 780 Ti vs 1046 MHz for the GTX 770 or 1006 MHz for the GTX 680 (base clocks).

Yes, but GK110 had 87.5% more SPs than GK104, not 50%, although its memory resources were only 50% higher.
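For reference, the same kind of units × clock arithmetic applied to that Kepler case (published base-clock specs; theoretical shader throughput only, ignoring memory and other bottlenecks):

# GK110 (780 Ti) vs GK104 (GTX 770), base clocks, theoretical shader throughput.
gk104_sps, gk104_mhz = 1536, 1046    # GTX 770
gk110_sps, gk110_mhz = 2880, 875     # GTX 780 Ti
ratio = (gk110_sps * gk110_mhz) / (gk104_sps * gk104_mhz)
print(f"{ratio:.2f}x")               # ~1.57x: 87.5% more SPs, but ~16% lower base clock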
 
In the suite of 22 games (if that's what 22 - game means) it's 32% faster. But I suppose scaling may not be ideal at this level of performance.
Well I can think of a few considerations.
1. Chiphell........hopefully it isn't the same guy that supposedly had four unreleased GPUs two months ago, but hasn't released a single verified benchmark during that time?
2. 22 Games.....all working optimally with current drivers for the unreleased card ?
 
Well I can think of a few considerations.
1. Chiphell........hopefully it isn't the same guy that supposedly had four unreleased GPUs two months ago, but hasn't released a single verified benchmark during that time?
2. 22 Games.....all working optimally with current drivers for the unreleased card ?

I completely agree with you, just from the fact that it is "from" Chiphell. But at one week from the release of the GPU, the driver side shouldn't be much of a problem, especially considering there have already been three Maxwell GPUs on the market for six months now, the 960, 970 and 980 (I don't count the 750 Ti). (I won't read too much into the "22 games" either.)

As for the clock speed, it could certainly be a bit lower than the 980's (the rumor suggests 1006 MHz). The point is not really the base clock but the boost clock: most 980s have an extremely high stock boost clock out of the box, and the margin is much higher on those "under 200W" chips (which have a boost power limit somewhere between 180 and 230W).

Nvidia can still play with the boost clock / TDP / thermal limit (5-10W more or less and you change the performance of the GPU by 10%).
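To illustrate why the sustained boost clock matters more than the base clock (all figures below are hypothetical, just to show the sensitivity):

# Hypothetical: same base clock, different sustained boost under the power limit.
base_mhz = 1002          # assumed base clock
sustained_a = 1076       # assumed sustained boost with a tight power limit
sustained_b = 1216       # assumed sustained boost with more power headroom
gain = sustained_b / sustained_a - 1
print(f"Extra performance just from boost headroom: {gain:.1%}")   # ~13% with these guesses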

As for the 3DMark benchmarks shown previously (VideoCardz etc.), you need to look at the configuration (an overclocked 8-core CPU can easily add 1,000-2,000 points on the Performance graphs, even if the difference on the Extreme graphics score is only 500-600), the image quality settings (mipmap, LOD), and what the GPU clock speed really was (I don't think 3DMark reports the maximum boost clock that was applied).


http://www.guru3d.com/news-story/new-geforce-gtx-titan-x-photos.html

So currently the EU tech press has landed in Paris where Titan X is demonstrated. The card should launch next week already during the Nvidia GPU developer conference. For more information I suggest you read up on our previous posts, however I have taken some photos.
 
From PC Tuning: "GeForce GTX Titan X - 1 GHz, 35 percent above the GTX 980" (original).

[Image: GM200_small.png]


More interesting, I think, is the following claim:

(Bing Translate) said:
But it is certain that the boost clock of the third single-chip Titan will be above 1 GHz, so we can easily estimate the performance at somewhere between 35-40 percent above the GeForce GTX 980. All of this is only theory, though; the core is apparently not fully enabled. The first gaming card (with GM200-400-A1) is said to not have all SMM blocks active, which will probably lower performance. The situation with big Kepler is therefore likely to repeat, when the first Titan did not have the whole core active and Nvidia saved full performance for a later product.
The translation is slightly confusing, I read it as the Titan X not being fully enabled.
 
Gotta love the "Source: internet" notation. No higher validity required :p

Well... they do it like us, but they run an internet site instead...

Only slightly? I thought we got word from Nvidia that TitanX is fully enabled?

I think they are being prudent, or perhaps it's a more standard "gaming" part that will end up not fully enabled.

Everything is a bit funny at this point, as most reviewers have their invitations for the Nvidia conference in Paris about the launch of this card, and so already have most of the information in their hands.
I will just quote HH from Guru3D:

So currently the EU tech press has landed in Paris where Titan X is demonstrated. The card should launch next week already during the Nvidia GPU developer conference. For more information I suggest you read up on our previous posts, however I have taken some photos.
 
I don't think it is. GM200 is pretty much 1.5×GM204, which has a TDP of 180W. And 180W×1.5 = 270W, which is fine. I see no requirement to lower clock speeds.

Because TDP doesn't scale linearly with clockspeeds. It never has.

Regardless, with 12GB of RAM this seems guaranteed to be aimed at compute tasks, just like the last Titan launch was. No game needs 12GB, nor will for years. That's more than double what's currently available to ISVs on modern consoles, so what would be the point?
 
Even granting the current-gen consoles the benefit of the doubt and starting from 1080p rendering (which they oftentimes don't do), going only to Ultra HD (there are already displays on the market with higher resolutions, and there's also the multi-display option on PC) quadruples the buffer space requirements, for example. Now, of course there are also textures and other things that can remain fairly constant regardless of resolution, but I don't think you're set with 6 or even 8 gigabytes for "years" - surely for this year, maybe for next. But if you really want to, you can already exceed 8 gigabytes of video memory even in today's games.
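As a rough illustration of the resolution point (a hypothetical buffer set; the bytes-per-pixel figure is my own assumption, real engines vary a lot):

# Hypothetical render-target footprint: assume ~48 bytes per pixel in total
# (several G-buffer targets plus depth and frame buffers).
bytes_per_pixel = 48
for name, w, h in [("1080p", 1920, 1080), ("Ultra HD", 3840, 2160)]:
    mib = w * h * bytes_per_pixel / 2**20
    print(f"{name}: {mib:.0f} MiB")    # the Ultra HD figure is exactly 4x the 1080p one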

 
Because TDP doesn't scale linearly with clockspeeds. It never has.

Regardless, with 12GB of RAM this seems guaranteed to be aimed at compute tasks, just like the last Titan launch was. No game needs 12GB, nor will for years. That's more than double what's currently available to ISVs on modern consoles, so what would be the point?

We can already see the GTX 970 struggling in some games with its 3.5GB pool of fast memory, and the 8GB R9 290X is sometimes significantly faster than the 4GB variant. So it's not unlikely that games will need more than 6GB within a couple of years, especially in unusual circumstances (4K, supersampling, VR, stereoscopy, multi-monitor setups, or any combination of the above).

GM200 has a 384-bit bus, so barring a hybrid memory configuration, it was either 6GB or 12GB (3GB is clearly not enough).
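The 6GB-or-12GB choice follows directly from the bus width (standard GDDR5 chip density assumed below):

# A 384-bit bus is 12 x 32-bit channels; each channel takes one GDDR5 chip,
# or two in clamshell mode.
channels = 384 // 32
chip_gbit = 4                                   # 4 Gb (512 MB) GDDR5 chips assumed
for chips_per_channel in (1, 2):                # normal vs clamshell
    total_gb = channels * chips_per_channel * chip_gbit / 8
    print(f"{channels * chips_per_channel} chips -> {total_gb:.0f} GB")   # 6 GB or 12 GB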
 


Off-topic, but the reference average results listed in this benchmark are a complete joke, beyond the fact that the benchmark still uses OpenCL 1.1. Explain to me how, in an OpenCL benchmark, you can get the same performance from a 295X2 as from a 290X... People who submit their scores don't know that you need to disable CrossFire / SLI for GPU-accelerated compute software (whether it's CUDA, OpenCL, C++, etc.).

You have a GT 610M faster than a Titan Black in the 64K particle test.

I don't even compare AMD and Nvidia in this benchmark, but even between AMD GPUs the results are completely inconsistent. Fortunately we have LuxMark and other OpenCL benchmarks to get a proper idea of the performance.
 