NVIDIA Maxwell Speculation Thread

The interesting thing is that those who go on and on about the power efficiency then overclock their card or cards and end up in the same neighborhood as the 290/290X.

ETA: the boost makes the GTX 980 nowhere close to a 165W TDP card.

What else is new? Pushing things beyond their limits is the very essence of being an enthusiast.

How much does Hawaii's TDP increase for the enthusiasts who push their 290/290X beyond its limits? Now you have an equal metric to the above.
 
What else is new? Pushing things beyond their limits is the very essence of being an enthusiast.

How much does Hawaii's TDP increase for the enthusiasts who push their 290/290X beyond its limits? Now you have an equal metric to the above.

Not the same metric...

A 165W TDP card that matches the power consumption of a 230W TDP card from the same company...
 
The interesting thing is that those who go on and on about the power efficiency then overclock their card or cards and end up in the same neighborhood as the 290/290X.

ETA: the boost makes the GTX 980 nowhere close to a 165W TDP card.
Yeah, keep dreaming...
FACT: a factory-OC ASUS Strix 970 at 161W matching 258W R9 290X performance:
[chart: power_average.gif]
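Just to put numbers on it (a back-of-the-envelope sketch using the wattages quoted above, and assuming roughly equal frame rates, which is the premise of the chart; the fps value is a hypothetical stand-in):

```python
# Rough perf-per-watt comparison from the figures quoted above.
fps = 60.0                   # hypothetical common frame rate
watts_strix_970 = 161.0      # factory-OC ASUS Strix 970 (quoted above)
watts_r9_290x = 258.0        # R9 290X (quoted above)

eff_970 = fps / watts_strix_970    # frames per joule
eff_290x = fps / watts_r9_290x

print(f"970:  {eff_970:.3f} fps/W")
print(f"290X: {eff_290x:.3f} fps/W")
print(f"Efficiency ratio: {eff_970 / eff_290x:.2f}x")   # ~1.60x
```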
 
The most impressive thing about GM204 is that it beats Hawaii and Tonga in every single metric, even the ones where AMD is traditionally strong, like DX level (11.3 and 12 for Maxwell) and video (HDMI 2.0 + HEVC encode + hybrid HEVC decode + triple DP).
Finally, the fact that a custom $340 AIB board like the ones from MSI, ASUS, and Gigabyte gives us nearly R9 290X performance at roughly half the power is the last nail in the coffin :oops:
For sure, an uber scary moment for AMD...
And BTW, what will AMD's answer be? I'm not talking about price reductions; those are a given in the short term. I mean in terms of a new product, and when?

It especially beats Nvidia's own architecture, Kepler... in all metrics: power consumption, performance, etc. It's a new architecture; what did you expect? For it to be garbage compared to the old one? That's why Nvidia has been milking us for money with their Kepler architecture on 28nm for so long... lol.

Sorry, I'm a bit drunk, coming back from a party, and that post made me laugh a bit.

I know Nvidia hasn't released a new series ahead of AMD in a long, long time, so we're not used to seeing Nvidia introduce new technology and features first: DX11.3, HDMI 2.0 + HEVC encode, etc. But you're comparing a new architecture and new GPUs to old ones, as if you were comparing Maxwell to Kepler GPUs.
 
Very impressive cards! Loving the price of the 970... I really am tempted by it. I'm just very turned off by being locked into a proprietary standard like G-Sync for something that lasts as long as a monitor does. If Nvidia supported Adaptive-Sync, I'd be all over it. But I can't see them announcing that until at least when the monitors hit the market. Alternatively, they could stubbornly refuse to support it until G-Sync (hopefully) dies.

It's the perfect card otherwise... it really is a shame that I can't accept Nvidia's proprietary implementations that force vendor lock-in.
 
The interesting thing is that those who go on and on about the power efficiency then overclock their card or cards and end up in the same neighborhood as the 290/290X.

ETA: the boost makes the GTX 980 nowhere close to a 165W TDP card.


Power efficiency is about more than absolute power consumption. It's about the performance you get in return. Given the impressive overclocking results so far nVidia could have aimed even higher for the 980 but apparently they preferred to rub it in with the low TDP. It appears the stock cooler isn't up to the task of consistently hitting advertised boost speeds though.
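That tracks with the first-order physics of overclocking (a toy model, assuming dynamic power scales roughly with clock times voltage squared; the OC clock/voltage point below is hypothetical):

```python
# Toy model of why overclocking erodes perf/W: performance scales
# roughly with clock, while dynamic power scales roughly with
# clock * voltage^2. Figures are illustrative, not measured.
base_clock_mhz = 1126.0     # GTX 980 base clock
base_volt = 1.00            # normalized stock voltage
base_power_w = 165.0        # advertised TDP

oc_clock_mhz = 1500.0       # hypothetical overclock
oc_volt = 1.21              # hypothetical voltage bump

power_scale = (oc_clock_mhz / base_clock_mhz) * (oc_volt / base_volt) ** 2
perf_scale = oc_clock_mhz / base_clock_mhz

print(f"Estimated OC power: {base_power_w * power_scale:.0f} W")  # ~322 W
print(f"Performance gained: {perf_scale:.2f}x")                   # ~1.33x
print(f"Perf/W retained:    {perf_scale / power_scale:.2f}x")     # ~0.68x
```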

I may have spoken too soon in thinking that Hawaii would remain competitive. The 970 at $330 is way too close to the 290X at 4K, and it beats it on average at lower resolutions.

The VXGI thing seems interesting. The fact that there's a software fallback could make life uncomfortable for AMD in the future if adopted by major engines (support mentioned for UE4). Better lighting is hard to dismiss as just a gimmick.
 
AFAIK, Maxwell is not dual issue.


Maxwell Tuning Guide said:
The power-of-two number of CUDA Cores per partition simplifies scheduling, as each of SMM's warp schedulers issue to a dedicated set of CUDA Cores equal to the warp width. Each warp scheduler still has the flexibility to dual-issue (such as issuing a math operation to a CUDA Core in the same cycle as a memory operation to a load/store unit), but single-issue is now sufficient to fully utilize all CUDA Cores.

http://docs.nvidia.com/cuda/maxwell-tuning-guide/index.html
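The arithmetic behind that paragraph, for anyone counting (a sketch using the SMM layout described in the whitepaper: 128 CUDA cores split into four partitions, one warp scheduler each, 32-thread warps):

```python
# Why single-issue is sufficient on an SMM: each of the four warp
# schedulers owns a 32-core partition, and a warp is 32 threads wide,
# so one instruction per scheduler per cycle feeds every CUDA core.
cores_per_smm = 128
partitions = 4
warp_width = 32

cores_per_partition = cores_per_smm // partitions        # 32
issues_to_saturate = cores_per_partition // warp_width   # 1

print(f"Cores per partition: {cores_per_partition}")
print(f"Issues per scheduler per cycle to saturate: {issues_to_saturate}")
```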
 
Since you're avoiding answering the above: if you overclock a 290/290X, will its TDP stay the same or will it increase?

http://www.anandtech.com/show/8526/nvidia-geforce-gtx-980-review/22

Yeah, and looking at power consumption for a heavily OC'd (and/or overvolted) GPU is largely academic anyway, because people who overclock like that are usually interested in maximizing performance rather than performance per watt. That said, I believe Ryan's measurement is peak system power consumption under load, which is very different from average power consumption over the course of the benchmark. In fact, Tom's Hardware stated in the 750 Ti review that the average power consumed by that GPU was much less than its peak, whereas most other GPUs show a relatively small difference between average and peak power.

FWIW, in the Eurogamer review, even when OC'd to provide much higher performance in Metro: Last Light than the 780 Ti and 290X, the GTX 980's peak system power consumption was still 46W and 64W lower, respectively. So the choice of benchmark could have something to do with it too.
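To illustrate the peak-vs-average distinction (a toy example with made-up samples, not measured data):

```python
# A GPU whose power swings during a benchmark can post a high peak
# while its average over the run is considerably lower. Samples are
# hypothetical one-per-second watt readings.
samples_w = [120, 135, 180, 210, 150, 140, 205, 130]

peak_w = max(samples_w)
avg_w = sum(samples_w) / len(samples_w)

print(f"Peak power:    {peak_w} W")        # 210 W
print(f"Average power: {avg_w:.0f} W")     # ~159 W, well below peak
```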
 
The VXGI thing seems interesting. The fact that there's a software fallback could make life uncomfortable for AMD in the future if adopted by major engines (support mentioned for UE4). Better lighting is hard to dismiss as just a gimmick.

True. If you want to make my jaw drop, give me anything related to advanced lighting.

***edit: by the way, the German ComputerBase review has some nice videos/screenshots of DSR: http://www.computerbase.de/2014-09/geforce-gtx-980-970-test-sli-nvidia/13/

...there is some blur noticeable, but from the looks of it not enough to annoy me (I can't be sure until I see it in real time). However, the performance drop isn't what I'd call moderate: http://www.computerbase.de/2014-09/geforce-gtx-980-970-test-sli-nvidia/14/

Dumb question: if DSR is going to reduce my performance to roughly 1/4 of what I'd get at the original resolution, then why not just use garden-variety 4x supersampling instead?
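The ~1/4 figure follows directly from the pixel count (a quick sketch; the real-world cost depends on how fill-rate- and shader-bound a given game is):

```python
# DSR's 4x factor renders at twice the width and twice the height,
# i.e. four times the pixels -- the same fragment workload as
# ordered-grid 4x supersampling.
native_w, native_h = 1920, 1080
dsr_factor = 4.0                       # "4x DSR" = 2x per axis

scale = dsr_factor ** 0.5
render_w, render_h = int(native_w * scale), int(native_h * scale)

pixel_ratio = (render_w * render_h) / (native_w * native_h)
print(f"Render resolution: {render_w}x{render_h}")      # 3840x2160
print(f"Pixel workload:    {pixel_ratio:.1f}x native")  # 4.0x -> ~1/4 fps
```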
 
Sorry to disappoint, but that chart proves my point...
Edit: I wasn't talking about factory overclocks.

It is a fact that it consumes much less power than Hawaii and therefore has more OC potential and performance to extract. There is nothing to discuss about it; it is a fact.
 
Sorry to disappoint, but that chart proves my point...
Edit: I wasn't talking about factory overclocks.
OK, I see; even when faced with the facts, you deny them :rolleyes:
If you show me a heavily boosted 970 with a 250W power draw, I will change my mind. But hey, don't waste your time; such a thing doesn't exist...

PS: BTW, comparing a heavily boosted GM204 to a stock Hawaii, how fair is that? Apples vs. oranges. And what about a boosted GM204 vs. a boosted Hawaii that reaches a 350~400W TDP???
 
The most impressive thing about GM204 is that it beats Hawaii and Tonga in every single metric, even the ones where AMD is traditionally strong, like DX level (11.3 and 12 for Maxwell) and video (HDMI 2.0 + HEVC encode + hybrid HEVC decode + triple DP).
To be fair, we don't know yet whether Hawaii and Tonga can do the 11.3-related stuff too or not. The same goes for the level of DX12 support: the API support is there, but the rest is unknown for now.
Triple DP was done by AMD ages ago; they did six.
 
Well, you could say that, but the implementation is a bit more sophisticated than you might think. From Tech Report:
[..]
"DSR goes beyond traditional supersampling, though. Rather than just sample multiple times from within a pixel, it uses a 13-tap gaussian downsizing filter to produce a nice, soft result. The images it produces are likely to be a little softer and more cinematic-feeling. This filter has the advantage of being able to resize from intermediate resolutions. For instance, the user could select 2560x1440, and DSR would downsize to 1080p even though it's not a perfect 2:1 or 4:1 fit."

Whoa, what a quote:
a) "sample multiple times from within a pixel" - That hasn't been true since...since when have wide tent & co. been around? 2007?
b) It boggles the mind how a gaussian filter can be unconditionally seen as an improvement. But it seems "blur = good" is all the rage these days.
c) Not even a mention that DSR by principle only has an ordered sample grid...

No thanks, I'll stay with SGSSAA. (Theoretically. Not that my 560Ti could shoulder it in most games nowadays...:LOL:).
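For reference, the kind of gaussian-weighted downsample being debated looks roughly like this (a minimal NumPy sketch of a separable gaussian resize; NVIDIA hasn't published the 13-tap kernel's weights, so the tap count and sigma here are illustrative):

```python
import numpy as np

def gaussian_kernel(taps: int, sigma: float) -> np.ndarray:
    """1D gaussian weights, normalized to sum to 1."""
    x = np.arange(taps) - (taps - 1) / 2.0
    w = np.exp(-0.5 * (x / sigma) ** 2)
    return w / w.sum()

def downsample_2x(img: np.ndarray, taps: int = 5, sigma: float = 1.0) -> np.ndarray:
    """Separable gaussian blur, then keep every 2nd pixel.
    The blur is what produces the characteristic softness."""
    k = gaussian_kernel(taps, sigma)
    # filter rows, then columns (separable gaussian)
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    return blurred[::2, ::2]

hi_res = np.random.rand(216, 384)   # small stand-in for a 4x-DSR render
print(downsample_2x(hi_res).shape)  # (108, 192)
```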
 
Slightly disappointed Maxwell is not feature level 12 (perhaps that was asking too much :p). Better than I expected, though! Makes me wonder what feature level 12 will include (does MS even know yet? :D).
 
Slightly disappointed Maxwell is not feature level 12 (perhaps that was asking too much :p). Better than I expected, though! Makes me wonder what feature level 12 will include (does MS even know yet? :D).
From the official NVIDIA Maxwell whitepaper (pages 25 and 26):
DirectX 12
Spanning devices ranging from PCs to consoles, tablets, and smartphones, Microsoft's upcoming DirectX 12 API has been designed to have CPU efficiency significantly greater than earlier DirectX versions. One of the keys to accomplishing this is providing more explicit control over hardware, giving game developers more control of GPU and CPU functions. While the NVIDIA driver very efficiently manages resource allocation and synchronization under DX11, under DX12 it is the game developer's responsibility to manage the CPU and GPU. Because the developer has an intimate understanding of their application's behavior and needs, DX12 has the potential to be much more efficient than DX11, at the cost of some effort on the part of the developer. DX12 contains a number of improvements that can be used to improve the API's CPU efficiency; we've announced that all Fermi, Kepler, and Maxwell GPUs will fully support the DX12 API.

In addition, the DX12 release of DirectX will introduce a number of new features for graphics rendering. Microsoft has disclosed some of these features at GDC and during NVIDIA's Editor's Day. Conservative Raster, discussed earlier in the GI section of this paper, is one such DX graphics feature. Another is Raster Ordered Views (ROVs), which gives developers control over the ordering of pixel shader operations. GM2xx supports both Conservative Raster and ROVs. The new graphics features included in DX12 will be accessible from either DX11 or DX12, so developers will be free to use these new features with either API on GPUs that implement the features in hardware.
So full support isn't confirmed yet, but some feature level 12 features are supported by second-generation Maxwell.
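Since the whitepaper name-drops Conservative Raster without explaining it, here's the gist (a toy sketch of my own for illustration, not NVIDIA's implementation; real hardware evaluates this per pixel across the whole triangle):

```python
# Standard rasterization shades a pixel only if the triangle covers
# the pixel *center*; conservative rasterization shades it if the
# triangle overlaps *any part* of the pixel square.

def edge_fn(a, b, p):
    """Signed area: > 0 if p lies to the left of edge a->b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def point_in_tri(p, tri):
    s = [edge_fn(tri[i], tri[(i + 1) % 3], p) for i in range(3)]
    return all(v >= 0 for v in s) or all(v <= 0 for v in s)

def tri_overlaps_box(tri, lo, hi):
    """Separating-axis test between a triangle and an axis-aligned box."""
    axes = [(1.0, 0.0), (0.0, 1.0)]
    for i in range(3):
        ex = tri[(i + 1) % 3][0] - tri[i][0]
        ey = tri[(i + 1) % 3][1] - tri[i][1]
        axes.append((-ey, ex))  # normal of each triangle edge
    corners = [(lo[0], lo[1]), (hi[0], lo[1]), (hi[0], hi[1]), (lo[0], hi[1])]
    for ax in axes:
        tproj = [v[0] * ax[0] + v[1] * ax[1] for v in tri]
        bproj = [c[0] * ax[0] + c[1] * ax[1] for c in corners]
        if max(tproj) < min(bproj) or max(bproj) < min(tproj):
            return False  # found a separating axis -> no overlap
    return True

# A sliver that clips the pixel's corner but misses its center (0.5, 0.5):
tri = [(0.0, 0.0), (0.3, 0.0), (0.0, 0.3)]
print(point_in_tri((0.5, 0.5), tri))                  # False: standard raster skips it
print(tri_overlaps_box(tri, (0.0, 0.0), (1.0, 1.0)))  # True: conservative raster shades it
```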
 
Slightly disappointed Maxwell is not feature level 12 (perhaps that was asking too much :p). Better than I expected, though! Makes me wonder what feature level 12 will include (does MS even know yet? :D).
I suspect that anything referred to as FL 12 is actually FL 11_3, seeing as how FL 11_3 hasn't officially been defined and named yet.
 
Unlucky that your 970 was unstable, Ryan, considering the 970 looks like a "no-brainer" for a lot of folks at that price point.

Can you do a lot of testing with the aftermarket coolers on these cards? As the chip seems to have a lot, and I mean a lot, of headroom, the quality of the cooling is important with the 80°C default target in place, to avoid throttling. It would be a good thing for AnandTech to do. I know you like overclocking...

1500MHz boost on air, it seems, with a bit of luck. That is pretty impressive.
 
That is a serious blow to AMD; a severe blow for the red team would be the release of a proper mid-range GPU.
The GTX 970 should trigger a price adjustment from AMD. That is going to hurt margins, but AMD has products that can be repositioned. The pretty aggressive pricing of the GTX 970 should also indirectly impact the pricing of the GTX 750/750 Ti as a result of AMD adapting its pricing.

It would make sense to me if Nvidia released another Maxwell GPU before this fall.
 