NVIDIA Maxwell Speculation Thread

Are you sure about that? From what I understand, the M6000 doesn't have a boost clock. (I could be wrong, but that wasn't the case on the test unit, which was fixed at 988 MHz.)



It is amazing that in normal operation (D3D, OpenGL, CUDA, OpenCL), the core clock rate remained consistently high through almost every workload and GPU Boost wasn't dialed back, even though the card was actually running into its thermal limit. Only when we launched a power virus like FurMark could we force GM200 down to its base frequency, though no lower.
[Chart: GPU Boost clock rate across workloads]



Across all three levels of complexity, the gap between AMD's once-unbeatable FirePro W9100 and the new M6000 is like an abyss. The Hawaii-based card used to be the yardstick, but Nvidia's Maxwell architecture now lands at the top of the OpenCL food chain.

http://www.tomsitpro.com/articles/nvidia-quadro-m6000,2-898-4.html
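
If you want to watch the same boost/throttle behaviour on your own card, a quick way is polling NVML from Python. This is just a monitoring sketch (pynvml bindings, first GPU), not the setup Tom's used:

```python
# Minimal clock/temperature/power logger via NVML (pip install pynvml).
# Quick sketch for watching GPU Boost behaviour, not Tom's test methodology.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

try:
    while True:
        sm_clock = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_SM)       # MHz
        mem_clock = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_MEM)     # MHz
        temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)  # deg C
        power = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0                      # watts
        print(f"SM {sm_clock} MHz | MEM {mem_clock} MHz | {temp} C | {power:.1f} W")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```

Run a workload alongside it and you can see whether the card is sitting at its boost clock or being pulled back toward base.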
 
Blimey, Titan X without backplate would appear to be constrained by thermals, judging by the experiment on M6000 with the backplate removed.
 
Blimey, Titan X without backplate would appear to be constrained by thermals, judging by the experiment on M6000 with the backplate removed.
I thought it was optional with the Titan X (or maybe just for reference models). An EVGA product manager once mentioned that backplates (at least on their high-end products) were more window dressing than functionality. But it seems to have the opposite effect here, lowering temps with it on, at least in this review.

Edit: Yeah, reference models lack a backplate; Guru3D's Titan X doesn't have one.
As you can see, no backplate. The opinions on backplates differ per person. Of course they protect the backside of the PCB and its components, but backplates can also easily trap heat. And then they are often added for aesthetic reasons of course.

http://www.guru3d.com/articles_pages/nvidia_geforce_gtx_titan_x_review,2.html
 
http://www.tomsitpro.com/articles/nvidia-quadro-m6000,2-898-4.html

A backplate like the one found on the Quadro M6000 is fine for cooling the memory packages mounted on the back of the PCB, but it does distort our readings slightly. Since heat isn't distributed evenly across the board, we removed a portion of the plate and measured the board temperature in the area we suspected would be hottest. A reading of 90 °C was the result. That's not cool, but it's not hot enough to make us worry.

[...]

Of course, we were also keen to find out what happened if we took the back plate off, similar to GeForce GTX Titan X. This experiment ended as soon as we saw temperatures on the memory packages march up to 100 °C, which is well beyond a healthy temperature.

I'd say that they demonstrated that the backplate helps in cooling the memory on this card.

It's kind of interesting that memory running at >100 °C is considered acceptable.
 
Because people keep saying that HBM is going to suffer from being near a GPU due to the high temperatures of the GPU.
 
Because people keep saying that HBM is going to suffer from being near a GPU due to the high temperatures of the GPU.
There may be some difficulty with cooling so many chips concentrated in such a small space. But there's nothing particularly special about 100 °C.
 
DRAM has to refresh more often (shorter refresh intervals) as temperature increases, and the curve starts to ramp faster above 85 °C. Other DRAM types with temperature-compensated refresh switch to a high-temperature refresh rate above 85 °C. GDDR5 has temperature-controlled refresh too, but the datasheet I saw didn't mention its thresholds. HBM has multiple temperature modes for refresh. The DRAM would still work above 85 °C, it just might not be as effective as it would be if it were cooled better.

Micron had a datasheet for GDDR5 that listed a maximum of 95 °C for its operating range. Others didn't state it clearly. Hynix's GDDR5 has a thermal sensor whose top reading simply indicates the die is above 95 °C. Somewhere above 115 °C or so is where physical damage would set in, I think.
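
To put a rough number on that refresh penalty, here is a minimal sketch using generic DDR-style placeholder values (tREFI of 7.8 µs at or below 85 °C, halved above it, and an assumed tRFC of 350 ns); actual GDDR5/HBM figures differ and aren't all public:

```python
# Rough estimate of how much time refresh eats as temperature rises.
# Constants are illustrative DDR-style values, NOT from a GDDR5/HBM datasheet.
TREFI_NORMAL_NS = 7800.0   # average refresh interval at <= 85 C
TREFI_HOT_NS = 3900.0      # halved interval above 85 C (2x refresh mode)
TRFC_NS = 350.0            # assumed time the DRAM is busy per refresh command

def refresh_overhead(trefi_ns, trfc_ns=TRFC_NS):
    """Fraction of time the DRAM spends refreshing instead of serving requests."""
    return trfc_ns / trefi_ns

for label, trefi in (("<= 85 C", TREFI_NORMAL_NS), ("> 85 C", TREFI_HOT_NS)):
    print(f"{label}: ~{refresh_overhead(trefi) * 100:.1f}% of time lost to refresh")
```

With those placeholder numbers the overhead roughly doubles, from about 4.5% to about 9% of available time, which lines up with "still works, just less effective."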
 
PCPer Deep Learning demo at GTC 2015
Deep Learning Demo using Digits DevBox

 
Life of a triangle - NVIDIA's logical pipeline
Almost five years have gone by since the release of the groundbreaking Fermi architecture, so it might be time to refresh the principal graphics architecture beneath it. Fermi was the first NVIDIA GPU implementing a fully scalable graphics engine, and its core architecture can be found in Kepler as well as Maxwell. The following article, and especially the "compressed pipeline knowledge" image below, should serve as a primer based on various public materials, such as whitepapers or GTC tutorials about the GPU architecture. This article focuses on the graphics viewpoint of how the GPU works, although some principles, such as how shader program code gets executed, are the same for compute.
https://developer.nvidia.com/content/life-triangle-nvidias-logical-pipeline

There is no new info here as far as I know, but it is interesting to see how little things have changed at a high level since Fermi.
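
For anyone who just wants the bird's-eye view before clicking through, here is a rough outline of the logical-to-hardware mapping the article (and the public whitepapers) describe; the stage list and unit names below are my own summary, not code or text from the article:

```python
# Simplified Fermi/Kepler/Maxwell logical graphics pipeline, per public whitepapers.
# Each entry: (logical stage, hardware unit that mostly handles it). Illustrative only.
LOGICAL_PIPELINE = [
    ("Vertex/index fetch",                    "PolyMorph Engine (per SM)"),
    ("Vertex/tessellation/geometry shaders",  "SMs, executing warps of 32 threads"),
    ("Viewport transform & clipping",         "PolyMorph Engine"),
    ("Triangle setup & rasterization",        "Raster Engine (per GPC)"),
    ("Pixel/fragment shading",                "SMs within the owning GPC"),
    ("Depth test & blending",                 "ROP partitions tied to L2/memory controllers"),
]

for stage, unit in LOGICAL_PIPELINE:
    print(f"{stage:40s} -> {unit}")
```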
 
EVGA launches Hybrid GTX 980

Features:

  • Built on Maxwell - the EVGA GeForce GTX 980 HYBRID features the most advanced GPU architecture ever made, designed to be the engine of next-generation gaming.
  • Hydro Performance without the Hassle - All in one cooling solution that is completely self-contained. No filling, no custom tubing, no maintenance. Just plug and play.
  • Sleek Looks - Intelligent wiring system and sleeved tubing make this one sleek cooler without the messy wires.
  • Copper Base - Provides maximum heat transfer.
  • Virtually Silent Operation - Variable controlled fans allow dynamic fan speed based on GPU temperature, and water cooling efficiency means very low noise fans.
  • Built in Radiator and Fan - Built in 120mm radiator and fan helps dissipate the heat, keeping the GPU as cool as possible. Fan can also be swapped or customized.
  • Cooling for VRM and Memory - VRM and Memory cooling solution separated from GPU, allowing for lowest GPU temperatures, and efficient VRM and Memory cooling.

http://www.evga.com/articles/00920/EVGA-GeForce-GTX-980-HYBRID/

Very interesting indeed .... always wanted to try a water solution.
 
Everyone, including Sweclockers, is wrong about the GTX 980 Ti. Nvidia never has and never will use the "Ti" suffix for such a large jump. Any possible GTX 980 Ti would be GM204 based. Since the GTX 980 is already a fully enabled GM204 with 2048 cores, the only room left for a Ti model would be clock rate. Overclocking the GTX 980 shows there's a safe 10% improvement available. They could also jump to 8 GB of memory.

Which leaves the question of what the eventual consumer GM200 GPU will be called. GTX 1080 is the obvious name.
 
From Sweclockers: "GeForce GTX 980 Ti arrives after the summer" (original).

The 980 Ti is claimed to use a full GM200 with base/boost clocks of roughly 1100 MHz and 1200 MHz, as well as 6 GB of GDDR5.
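
If the full-GM200 part ships with the same memory configuration as Titan X (384-bit bus at 7 Gbps effective, which is my assumption; the rumour only mentions 6 GB), the bandwidth works out like this:

```python
# Back-of-envelope memory bandwidth for the rumoured 980 Ti.
# Assumes Titan X's memory setup (384-bit bus, 7 Gbps GDDR5); not stated in the rumour.
bus_width_bits = 384
data_rate_gbps = 7.0                      # effective per-pin data rate
bandwidth_gb_s = bus_width_bits / 8 * data_rate_gbps
print(f"~{bandwidth_gb_s:.0f} GB/s")      # ~336 GB/s, same as Titan X
```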

Aka "Card that actually competes with the 390x" They got the suckers to buy the Titan X already, and no doubt made hundreds in profit margins off each one in turn. Now they can just cut the ram to a size reasonable for game applications and release something competitive in price. Nvidia's engineering efforts over the past few years have been a mixed bag of very good (maxwell) and eye rollingly bad (G-sync, their pathetic console attempts) but in terms of business acumen they've been on top at every last turn.
 
Nvidia's engineering efforts over the past few years have been a mixed bag of very good (Maxwell) and eye-rollingly bad (G-Sync, their pathetic console attempts), but in terms of business acumen they've been on top at every turn.
I can see how you don't like the business aspect of G-Sync, but what exactly do you think is so bad about the engineering aspect? It seems quite the contrary to me. They nailed the quality aspects on their first try (ghosting, low-FPS behavior), while FreeSync seems to have problems with exactly those aspects.
 
I can see how you don't like the business aspect of G-Sync, but what exactly do you think is so bad about the engineering aspect? It seems quite the contrary to me. They nailed the quality aspects on their first try (ghosting, low-FPS behavior), while FreeSync seems to have problems with exactly those aspects.

Ghosting appears to be a monitor-side implementation problem, since it shows up differently on different monitors. G-Sync, meanwhile, is a closed standard with a performance penalty, a licensing fee, and a higher price all around. Oh, and it maxes out at the panel's maximum refresh rate, meaning you can't disable V-sync if your frame rate goes too high. G-Sync is FreeSync's dimwitted, more expensive brother. Assuming, as it looks now, that ghosting and the rest depend only on how the monitor vendor sets up support and a few driver variables, and that done correctly they would be eliminated, FreeSync is the better solution in every other way.

That's an assumption of course, pending more information. But as of now it doesn't appear to be an incorrect one.
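
On the low-FPS point in particular, both camps ultimately have to repeat frames once the game drops below the panel's minimum refresh. Here's a toy sketch of that idea; it's my own illustration of low-framerate compensation, not either vendor's actual algorithm, and the 40-144 Hz window is just an example:

```python
# Toy model of low-framerate compensation on a variable-refresh panel.
# Illustration only; neither G-Sync's nor FreeSync's real implementation.
PANEL_MIN_HZ = 40.0
PANEL_MAX_HZ = 144.0

def refresh_for_frame(game_fps):
    """Return (panel refresh rate, how many times each frame is shown)."""
    if game_fps >= PANEL_MAX_HZ:
        # Above the panel maximum the display can't go faster: frames get
        # capped (V-sync-like) or torn, depending on the driver setting.
        return PANEL_MAX_HZ, 1
    if game_fps >= PANEL_MIN_HZ:
        # Inside the variable-refresh window: one refresh per frame.
        return game_fps, 1
    # Below the window: repeat each frame enough times to stay above the minimum.
    repeats = 2
    while game_fps * repeats < PANEL_MIN_HZ:
        repeats += 1
    return game_fps * repeats, repeats

for fps in (200, 90, 45, 25, 12):
    hz, n = refresh_for_frame(fps)
    print(f"{fps:3d} fps -> panel at {hz:5.1f} Hz, each frame shown {n}x")
```

How gracefully that repetition is handled (and how ghosting is tuned around it) is exactly the part that varies between monitors and drivers.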
 