NVIDIA Maxwell Speculation Thread

You are all overestimating the importance of low power consumption for the high end. I'm waiting for the real deal, GM210.
I didn't even mention power consumption: just pointing out that all reviews are extremely positive... for a variety of reasons.
 
You're still wrong. The power target of the card caps consumption at 165 W, which is the TDP. The limiter may not be the fastest implementation, hence some of the spike measurements, for instance at TPU, but under sustained gaming load please prove that the TDP is exceeded before making such claims.

THG claims its compute load had a sustained average draw above TDP, but I'm uncertain this isn't a case of playing with a new toy in the wrong way.

Even if there were "one-microsecond spikes" as suggested by THG, I don't think it would do much to the TDP (or the average heat dissipated from the GPU + some margins perhaps), because the temperature can't just rapidly change within that time period.
You can't do much to TDP, because it's just a technical specification that serves as a guide to the cooling solution designer, and it is a measure of power output that is averaged over a short period of time (not usec short).
In terms of temperature averaged over the chip, changes usually aren't that rapid, but local hot spots like highly utilized ALUs can spike in temperature in the time frame of tens of microseconds.
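To put some toy numbers on that (all assumed for illustration, not measured data): a one-microsecond spike that looks dramatic on a scope trace barely moves an average taken over the kind of window a TDP-style limit actually works with.

[code]
# Toy illustration: 1 us samples, a ~160 W baseline (assumed), and a 280 W
# spike lasting 1 us every 10 ms (assumed). Compare the instantaneous peak
# with the worst 100 ms rolling average.
import numpy as np

N = 1_000_000                       # one second of 1 us samples
power = np.full(N, 160.0)           # baseline draw in watts
power[::10_000] = 280.0             # a 1 us spike every 10 ms

window = 100_000                    # 100 ms averaging window
csum = np.cumsum(power)
rolling_avg = (csum[window:] - csum[:-window]) / window

print(f"instantaneous peak:   {power.max():.1f} W")
print(f"worst 100 ms average: {rolling_avg.max():.3f} W")
# The peak is 280 W, but the worst 100 ms average is only ~160.01 W --
# microsecond twitches vanish at the timescale TDP is defined over.
[/code]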

Designs have thermal sensors situated at various parts of the chip, and they do have failsafes. However, they also have known latency periods for how quickly they can react, depending on the method used, so what usually happens is that a safety margin of X watts of TDP and of Y degrees C is set below what the chip can draw and what it can heat up to.
This is one reason why everyone freaked out when AMD's 290 purposefully sat at 95C, but it's something of a feather in their cap that their thermal/voltage/clock solution can react that quickly at those power levels--and with that cooler.
GPUs without that level of responsiveness leave that power and thermal budget on the table.
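A rough back-of-envelope of how those margins relate to loop latency, with every number below assumed purely for illustration (none of them are vendor figures):

[code]
# Sketch of the margin argument: if the control loop needs `loop_latency`
# to notice and react, the set point has to sit below the hard limit by at
# least as much as the quantity can move in that time.

def required_margin(worst_case_ramp, loop_latency):
    # margin = how far power/temperature can run away during one reaction delay
    return worst_case_ramp * loop_latency

# Assumed: board power can ramp ~2 W/us on a worst-case instruction mix,
# and the power limiter reacts within ~10 us.
power_margin_w = required_margin(worst_case_ramp=2.0, loop_latency=10.0)

# Assumed: a heavily used ALU cluster can climb ~0.1 C/us locally, with
# ~50 us of sensor-to-throttle latency.
temp_margin_c = required_margin(worst_case_ramp=0.1, loop_latency=50.0)

print(f"power set point   ~ hard limit - {power_margin_w:.0f} W")
print(f"thermal set point ~ hard limit - {temp_margin_c:.0f} C")
# A faster loop shrinks both margins, which is exactly why a design that
# can safely sit right at 95 C leaves less budget on the table.
[/code]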


I would be very interested in the type of compute load mentioned. Even if you only take plain averages, you should be able to see significantly higher power values at the wall socket when integrated over time - especially with a difference as large as almost a hundred watts. Maybe Igor could elaborate?
He could try comparing with more standard power measurement methods, like those that try to isolate the board, or a Kill-A-Watt meter. The claimed sustained violation of the TDP is bigger than typical margins of error from those schemes, and would show up as a higher value at the wall.
Choosing to take a noisy input and then flipping some settings to average things leaves open the possibility of a sampling or interpretation error.
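For scale, a quick sanity check with assumed figures (the 165 W TDP, the roughly-hundred-watt gap mentioned above, and a ~90% efficient PSU) shows how big the offset at the wall would be:

[code]
# Assumed figures only: how visible would a sustained ~100 W TDP violation
# be at the wall socket?
tdp_w          = 165     # rated TDP
claimed_draw_w = 260     # ~100 W over TDP, per the figure discussed above
psu_efficiency = 0.90    # assumed PSU efficiency

excess_at_wall_w = (claimed_draw_w - tdp_w) / psu_efficiency
run_minutes = 10
excess_wh = excess_at_wall_w * run_minutes / 60

print(f"extra draw at the wall: ~{excess_at_wall_w:.0f} W")
print(f"extra energy over {run_minutes} min: ~{excess_wh:.1f} Wh")
# Over 100 W extra on a system pulling a few hundred watts is far outside
# the few-percent error of a Kill-A-Watt, so a real sustained violation
# could not hide in meter noise.
[/code]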

The jittery graphs of exactly one architecture don't have any comparative value, and the analysis, such as it is, credits the spiking to things like turbo/voltage steps they didn't bother recording, and then extends a comparison to AMD's GPU that they didn't even measure.
All we get from that is people breathlessly pointing at spikes they have no context to understand.

It might be interesting if any spikes managed to breach the maximum safe limits for the card/power supply, but I don't think those are iron-clad enough to allow people to freak out over 1 usec twitches every once in a while, and I see no sign that the reviewer read through the electrical and thermal specifications of the PCIe spec and Nvidia's card/thermal solution guide.
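For reference, the continuous budget those specifications define is easy to tally. A tiny sketch, assuming the reference board's 2x 6-pin layout (partner cards may differ):

[code]
# Spec-level power budget for the card (connector configuration assumed to
# be the reference 2x 6-pin layout).
pcie_slot_w = 75            # PCIe CEM allowance for the x16 slot
six_pin_w   = 75            # per 6-pin auxiliary connector
connectors  = 2             # assumed reference-board configuration

spec_budget_w = pcie_slot_w + connectors * six_pin_w
print(f"spec-level continuous budget: {spec_budget_w} W")   # 225 W
# A 165 W TDP leaves ~60 W of headroom below that, so brief spikes would
# have to be large as well as sustained before they threaten the limits
# the spec and the PSU are actually designed around.
[/code]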

I didn't even mention power consumption: just pointing out that all reviews are extremely positive... for a variety of reasons.

Playing devil's advocate: Bayer's Heroin also launched to rave reviews... ;)
 
I didn't even mention power consumption: just pointing out that all reviews are extremely positive... for a variety of reasons.

Yes, mostly because power consumption is kind of low for the performance. It's like reviewers don't understand the market the GTX 980 is inserted into. High-end users already have their 1000-1500 W power supplies and, in general, want the best performance. They don't care about power consumption if it means higher performance.

If AMD releases a 300 W part with only 20% higher performance than the GTX 980, that's where the high end will go.

Nvidia's new product is cool for the tech experts. For consumers, it's disappointing to see such a small increase in performance after all this time since the GTX 780's release.

As I said, wait for the real deal, the new Titan or a new high end from AMD.
 
The market for the GTX 980 is much bigger than just those people who already own a 780. I'm sure owners of 580's and 680's are far from disappointed.
 
Where does Tom's say what applications they are measuring? If it's a custom workload, have they ever detailed it? Is the performance measured on the same workload so that Perf/W can be estimated on it? There's probably a very interesting story in there, but it's frustrating to see someone with access to such low-level tools not give all the data. It basically forces others to replicate the same analysis with more rigour if we want to conclude anything really interesting...
For what it's worth, none of my compute benchmarks break TDP containment according to NVIDIA's drivers. You can definitely light up enough CUDA cores and push the card into throttling itself, but it's not violating TDP as far as I can tell. Power at the wall is consistent with FurMark.
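If anyone wants to repeat that check themselves, something along these lines should do it: a minimal sketch that polls the driver's own board-power telemetry through the pynvml bindings while a compute workload runs in another process. It only shows what NVIDIA's counters report, not an independent measurement.

[code]
# Minimal power-logging sketch using pynvml (pip install pynvml). Run your
# compute workload separately while this samples the reported board power.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)                    # first GPU
limit_w = pynvml.nvmlDeviceGetPowerManagementLimit(handle) / 1000.0

samples = []
for _ in range(600):                                             # ~60 s
    samples.append(pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0)  # mW -> W
    time.sleep(0.1)

pynvml.nvmlShutdown()
print(f"power limit: {limit_w:.0f} W, "
      f"average: {sum(samples) / len(samples):.1f} W, "
      f"peak sample: {max(samples):.1f} W")
# If the average sits at or below the limit while clocks throttle, that
# matches the behaviour described above.
[/code]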
 
Yes, mostly because power consumption is kind of low for the performance. It's like reviewers don't understand the market the GTX 980 is inserted into. High-end users already have their 1000-1500 W power supplies and, in general, want the best performance. They don't care about power consumption if it means higher performance.

If AMD releases a 300 W part with only 20% higher performance than the GTX 980, that's where the high end will go.

Nvidia's new product is cool for the tech experts. For consumers, it's disappointing to see such a small increase in performance after all this time since the GTX 780's release.

As I said, wait for the real deal, the new Titan or a new high end from AMD.

Perhaps I'm an alien, but I tend to avoid graphics cards above 200W. I like to be able to play games in the summer without drowning in my own sweat.
 
For what it's worth, none of my compute benchmarks break TDP containment according to NVIDIA's drivers. You can definitely light up enough CUDA cores and push the card into throttling itself, but it's not violating TDP as far as I can tell. Power at the wall is consistent with FurMark.
AFAIK, Kepler has a GPR bandwidth deficiency, so it probably can't get all the ALU lanes loaded at the same time (mostly). Dunno about DP mode.
 
Nvidia's new product is cool for the tech experts. For consumers, it's disappointing to see such a small increase in performance ...
If reactions on forums are anything to go by, it's going to do very well with the mid-upper range upgrade crowd. There's a reason this was positioned by Nvidia against a GTX 680.
 
AFAIK, Kepler has a GPR bandwidth deficiency, so it probably can't get all the ALU lanes loaded at the same time (mostly). Dunno about DP mode.

But this is Maxwell, which cut back its ALUs per SM for a reason: the Kepler flaw you mentioned.
 
If reactions on forums are anything to go by, it's going to do very well with the mid-upper range upgrade crowd. There's a reason this was positioned by Nvidia against a GTX 680.

I don't see anything compelling for those with a GTX 680. The performance is almost the same as the 780 Ti, which for the last few months you've been able to buy for the same price as the GTX 980.

Nvidia is positioning the new card against the GTX 680 just for marketing purposes, because comparisons against the card it's effectively replacing would look ridiculous.

The new card mostly just brings benefits for Nvidia: lower cost to produce at the same selling price = more profit.
 
Maybe power management on GM2xx is done by the GPU with support from driver profiles? And THG found a non-profiled app, while FurMark is restricted.
I thought the time of driver hacks restricting the power draw of specific apps was over, ever since the chips can actually measure this and react accordingly on their own.
I really have no idea what's going on with these measurements, though. In any case, even if they are true, there would almost certainly still be quite a good perf/W increase over Kepler chips in this workload.
 
Perhaps I'm an alien, but I tend to avoid graphics cards above 200W. I like to be able to play games in the summer without drowning in my own sweat.
I own 2 Titan Blacks and don't care about power consumption. I know many people with high-end gear, and none of them cares about it. You are not an alien, but your priorities are different from those of people who buy high-end video cards.

Noise really can be an issue, especially for those who don't watercool, but low power consumption is not a necessity. We can always buy a higher wattage power supply, or even buy a second one.
 
I don't see anything compelling for those with a GTX 680. The performance is almost the same as the 780 Ti, which for the last few months you've been able to buy for the same price as the GTX 980.

Yep it's a better, faster, cheaper 780 Ti so what's not to like for a 680 owner?

I want to see what Asus does with its 980. I had a great experience with my DirectCU 680 and it seems EVGA's ACX cooler isn't as good from a noise/temp standpoint. I suspect air-cooled 980s will put the 780 Ti to shame.
 
I own 2 Titan Blacks and don't care about power consumption. I know many people with high-end gear, and none of them cares about it. You are not an alien, but your priorities are different from those of people who buy high-end video cards.

Noise really can be an issue, especially for those who don't watercool, but low power consumption is not a necessity. We can always buy a higher wattage power supply, or even buy a second one.

Yet I've owned a GF 4 Ti 4600, an FX 5900XT, a Radeon X1950 Pro, an HD 4850 and a 6950. Those may not all be super high end, but still pretty high.

Being demanding in graphics performance doesn't imply that one is OK with graphics cards drawing 600W.
 
I don't see anything compelling for those with a GTX 680. The performance is almost the same as the 780 Ti, which for the last few months you've been able to buy for the same price as the GTX 980.
The GTX980 is not very compelling for its price. I think they priced it so high as an anchor to justify higher prices for higher performance Maxwell GPUs. But what about the GTX970?


Nvidia is positioning the new card against the GTX 680 just for marketing purposes, because comparisons against the card it's effectively replacing would look ridiculous.
They are. And I think it's positioned incredibly well. Most people don't buy new GPUs each year, so those who bought a GTX680 were never in the market for a GTX780(Ti) in the first place.

The new card mostly just brings benefits for Nvidia: lower cost to produce at the same selling price = more profit.
I think this is a case where it's benefiting both...

I own 2 Titan Blacks and don't care about power consumption. I know many people with high-end gear, and none of them cares about it. You are not an alien, but your priorities are different from those of people who buy high-end video cards.
Wouldn't you agree that $2000 in GPU performance and $330 in GPU performance are targeting very different markets? I have no problem with people buying the former, but I could only ever justify the latter to myself (and my wife!). I think you are simply not in the target group of the GTX970 and GTX980. I'm confident that in the next 6 months AMD or Nvidia will have something that will suit you. And then others will nope out of it. ;)

Mildly amusing :LOL:

Hey, that's a pretty cool demo!
 
Not sure if this has been posted yet:

In addition to Nvidia’s new Maxwell GPU having top-of-the-line performance and power efficiency, it has another feature that will probably make a lot more difference in the real world: It’s the first GPU to offer full support for Microsoft’s upcoming DirectX 12 and Direct3D 12 graphics APIs. According to Microsoft, it has worked with Nvidia engineers in a “zero-latency environment” for several months to get DX12 support baked into Maxwell and graphics drivers.
Even more importantly, Microsoft then worked with Epic to get DirectX 12 support baked into Unreal Engine 4, and to build a tech demo of Fable Legends that uses DX12.

All of this comes at a time when the future of AMD’s next-gen GPU and accompanying low-level API efforts are somewhat uncertain. UE4 support for DX12 is a huge deal; hundreds of games for the PC, Xbox One, and PS4 will be developed with UE4 over the next few years. Will this force AMD to fully support DX12 with its next-gen GPU, or will it stick to the Mantle party line? And conversely, if Mantle wins out over DX12, where does that leave Nvidia, Microsoft, Epic, and anyone else who jumped on the DX12 train?
The video below is what happened to Fable Legends when Lionhead applied some DX12 tweaks and ran it on some Maxwell hardware.
http://video.ch9.ms/ch9/a74e/127de55e-4e42-4b76-ae4e-1edb1ff7a74e/DirectXTechdemo_mid.mp4

http://www.extremetech.com/computin...is-the-first-gpu-with-full-directx-12-support
 