NVIDIA Maxwell Speculation Thread

There is roughly a 10,000x brightness difference between a white indoor room wall (no window, basic home lighting) and a white building wall directly hit by the sun on a bright summer day. Your eyes adapt to bright environments. A bright summer day is not tiring for the eyes in the long run, and being outside on a sunny day doesn't cause permanent damage to your eyes. Current displays are not even close to this brightness.

If the display is too small and dark surroundings cover most of the view, then there is a problem. But this problem is easily solved by bigger displays that cover most of your view. Also head mounted displays (such as Oculus Rift) solve this problem.

Currently, one of the biggest problems (in addition to resolution) with believability in the Oculus (and other HMDs) for me is the lack of HDR. The sky and highlights just look really flat compared to the real world.
Couldn't we tackle the problem from the other end and go for really low black levels? There would no doubt be lawsuits if a monitor came out that could be bright enough to damage your eyesight. If you looked up at the sun in a game you would actually have to put on sunglasses! :cool:
 
A bright summer day is not tiring for the eyes in the long run, and being outside on a sunny day doesn't cause permanent damage to your eyes.
No, but outdoors, on a sunny day, all of your field of view is bright. It's not a glum, dimly lit periphery with a very bright window-sized rectangle in the middle blasting sunlight-level brightness at your face. :) It's the contrast between your lamp-lit surroundings and your computer monitor's screen that does it, and the difference doesn't have to be that great to be tiring. My Apple Thunderbolt Display goes up to 400 nits or something like that according to the specs. At max brightness it is nowhere near a summer day, but it is very, very uncomfortable to look at for any length of time. With auto-brightness enabled, the backlight slider never goes above 50% for me, and often sits at about 1/3 brightness.

If the display is too small and dark surroundings cover most of the view, then there is a problem. But this problem is easily solved by bigger displays that cover most of your view.
"Easily." :LOL: Sorry, but most regular folks can't afford super huge displays. Or even have room to fit them. Besides, a display that is larger than what can fit in the center of my vision would be even more tiring than one that is small and extraordinarily bright I would think, not to mention a lot of software is not designed with displays so large they stretch into the peripheral vision. UI elements (like a HUD in a game) would slide out into the periphery and become useless.

Also head mounted displays (such as Oculus Rift) solve this problem.
HMDs are not for everyone. It's also another gadget that costs money and hooks up to your PC with cables, which some feel is annoying and clutter-y, and there are also people who argue they're just not ready yet (screen-door effect, color aberration and whatnot).

Also, the blueness of computer light (and the alleged problems that brings) would not be fixed by just putting a set of goggles over your head. In fact, you might exacerbate the issue by covering more of your vision with the display... :p
 
No, but outdoors, on a sunny day, all of your field of view is bright. It's not a glum, dimly lit periphery with a very bright window-sized rectangle in the middle blasting sunlight-level brightness at your face. :) It's the contrast between your lamp-lit surroundings and your computer monitor's screen that does it, and the difference doesn't have to be that great to be tiring. My Apple Thunderbolt Display goes up to 400 nits or something like that according to the specs. At max brightness it is nowhere near a summer day, but it is very, very uncomfortable to look at for any length of time. With auto-brightness enabled, the backlight slider never goes above 50% for me, and often sits at about 1/3 brightness.
Yeah, for non-gaming and non-video use I run one of the low blue light modes and drop the brightness way down. LCD brightness has always caused me discomfort in dimly lit rooms, and I really like this monitor's options.

I don't like auto brightness though because it likes to fluctuate and that's unwanted.
 
No, but outdoors, on a sunny day, all of your field of view is bright. It's not a glum, dimly lit periphery with a very bright window-sized rectangle in the middle blasting sunlight-level brightness at your face. :) It's the contrast between your lamp-lit surroundings and your computer monitor's screen that does it, and the difference doesn't have to be that great to be tiring.

You can create any strength of additional illumination on the back side of the monitor, for as long as you can pay for the electricity:
https://en.wikipedia.org/wiki/Ambilight
 
I don't like auto brightness though because it likes to fluctuate and that's unwanted.
My apartment is perpetually gloomy (due to poorly placed, small windows), so there's no noticeable fluctuation. Having auto brightness enabled also lets me watch movies, or even browse the web at night with all lights turned off without straining my eyes... :)

You can create any strength of additional illumination on the back side of the monitor, for as long as you can pay for the electricity:
Yes, I have a couple of low-wattage LED bulbs hanging from a bookshelf down behind my monitors for this reason, but as they don't adjust according to what's shown on the screen, they can't compensate for a very bright image. Even if they could, the paint on my wall just might catch fire from the level of flux hitting it! :LOL:

Also, an overly bright ambilight would mess with dark monitor imagery, so the opposite situation is also a bother. :p
 
At the International Conference on Machine Learning this week in France, Nvidia rolled out improvements to its CUDA development environment for accelerating applications with GPUs that will substantially improve the performance of deep learning algorithms on a couple of fronts.
....
The first big change is that CUDA 7.5 supports mixed precision, which means it supports FP16 16-bit data as well as the normal 32-bit single precision and 64-bit double-precision floating point data types. “The change to 16-bits allows researchers to store twice as large of a network,” explains Ian Buck, vice president of accelerated computing at Nvidia, to The Platform. “And though you lose some precision with the 16-bit numbers, the larger model more than makes up for that and you actually end up getting better results.”
....
We also assume that the performance improvements that Nvidia has announced this week for deep learning are additive, meaning that the combination of larger, lower-resolution models, single-GPU training speed improvements, and multi-GPU scale-out can result in larger neural networks to be trained in a much shorter time. By our math, it looks like the speedup using a mix of these technologies could be on the order of 4X to 5X, depending on the number of Titan X GPUs deployed in the system and the nature of the deep learning algorithms.
....
What is interesting to contemplate is how much performance improvement the future “Pascal” GPUs from Nvidia, coupled with the NVLink interconnect, will deliver on deep learning training algorithms. Back in March, Nvidia co-founder and CEO Jen-Hsun Huang went through some “CEO math” as he called it to show that the combination of mixed precision (FP16) floating point plus 3D high bandwidth memory on the Pascal GPU card (with up to 1 TB/sec of memory bandwidth) plus NVLink interconnects between the GPUs would allow for Pascal GPUs to offer at least a 10X improvement in performance over the Maxwell GPUs used in the Titan X. (There is no Maxwell GPU available in the Tesla compute coprocessors, and as far as we know there will not be one, although Nvidia has never confirmed this.)

The Pascal GPUs are expected to come to market in 2016 along with CUDA 8 and the NVLink interconnect, which will tightly couple multiple GPUs directly together so they can share data seamlessly. The NVLink ports will run at 20 GB/sec, compared to a top speed of 16 GB/sec for a PCI-Express 3.0 x16 slot, and it will be possible to have four NVLink ports hook together two Pascal GPUs for a maximum of 80 GB/sec between them. There will be enough NVLink ports to have 40 GB/sec between three GPUs in a single node (two ports per GPU) and with four GPUs customers will be able to use a mix of NVLink ports to cross-couple the devices together with a mix of 40 GB/sec and 20 GB/sec links. Depending on how tightly the memories are coupled, such a cluster of GPUs could, in effect, look like one single GPU to any simulation or deep learning training framework, further simplifying the programming model and, we presume, providing a significant performance boost, too.

http://www.theplatform.net/2015/07/07/nvidia-ramps-up-gpu-deep-learning-performance/
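
For anyone curious what the FP16 storage support described above looks like in code, here is a minimal sketch using the half types from cuda_fp16.h that CUDA 7.5 introduces. This is just my own illustration, not Nvidia's code (the kernel names and sizes are made up): data is kept in FP16 to halve the memory footprint and converted back to FP32 for the actual arithmetic.

```cuda
// Minimal sketch (illustration only): store data as FP16 to halve memory,
// convert back to FP32 for arithmetic. Requires CUDA 7.5+ for cuda_fp16.h.
#include <cuda_fp16.h>
#include <cstdio>

// Pack FP32 values into FP16 storage.
__global__ void packToHalf(const float* in, __half* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = __float2half(in[i]);
}

// Unpack FP16 back to FP32 and do the arithmetic in single precision.
__global__ void axpyHalfStorage(const __half* x, float* y, float a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] += a * __half2float(x[i]);
}

int main()
{
    const int n = 1 << 20;
    float *x, *y;
    __half *xh;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    cudaMallocManaged(&xh, n * sizeof(__half));   // half the footprint of x

    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    packToHalf<<<(n + 255) / 256, 256>>>(x, xh, n);
    axpyHalfStorage<<<(n + 255) / 256, 256>>>(xh, y, 3.0f, n);
    cudaDeviceSynchronize();

    printf("y[0] = %f (expected 5.0)\n", y[0]);

    cudaFree(x); cudaFree(y); cudaFree(xh);
    return 0;
}
```

This only shows the storage half of the story; the deep learning speedups described in the article presumably come from the CUDA libraries handling the FP16 data paths internally rather than from hand-written kernels like this.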
 
From WCCFTech: "NVIDIA GeForce GTX 950 Launches on 17th August – Features GM206 GPU With 768 Cores, 2 GB VRAM and 128-bit Bus."

[Image: NVIDIA GeForce GTX 950 specifications]


The launch date and price are expected to be August 17th and $129–$149.
 
Couldn't we tackle the problem from the other end and go for really low black levels? There would no doubt be lawsuits if a monitor came out that could be bright enough to damage your eyesight. If you looked up at the sun in a game you would actually have to put on sunglasses! :cool:

Not really; your eyes can tell the difference in brightness between, say, 500 nits and 1,000 nits regardless of the contrast level. It's honestly the software support that I'd be worried about. We have standards for "HDR images" and a new color space, and all hardware manufacturers need to do is hit that target (or at least get close enough, in the case of cheaper hardware).

But on the software side, I'm just afraid a new image compression format to replace JPEG will take forever. BPG seemed the obvious choice, until HEVC turned overly greedy and evil. WebP just doesn't have any future-proof support at all, with 8-bit encoding max and nothing beyond the sRGB colorspace. Not to mention how consoles would cope, as you'd need increased bandwidth for increased color precision across the board to avoid banding. Oh well, probably just worrying for no reason. The new HDR and Rec. 2020 standards are probably only going to be widespread enough to care about in like 5 years anyway; LCDs can't even reliably hit 1,000 nits or cover the Rec. 2020 colorspace at all yet.
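
On the banding point: the jump between adjacent code values is ~1/255 of the signal range at 8 bits versus ~1/1023 at 10 bits, and stretching the same 256 steps across an HDR/Rec. 2020 range is exactly what makes them visible. Here's a throwaway sketch (my own, not tied to any format or standard) that quantizes a smooth ramp at both bit depths and counts the resulting bands:

```cuda
// Quick self-contained illustration of why more bits per channel mean finer
// steps and therefore less visible banding on a smooth gradient.
#include <cstdio>

// Quantize a smooth [0,1] ramp to an n-bit code and back.
__global__ void quantizeGradient(float* out, int n, int bits)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float v = (float)i / (float)(n - 1);        // smooth ramp from 0 to 1
        float levels = (float)((1 << bits) - 1);    // 255 at 8-bit, 1023 at 10-bit
        out[i] = roundf(v * levels) / levels;       // quantized ("banded") ramp
    }
}

int main()
{
    const int n = 4096;
    float *q8, *q10;
    cudaMallocManaged(&q8, n * sizeof(float));
    cudaMallocManaged(&q10, n * sizeof(float));

    quantizeGradient<<<(n + 255) / 256, 256>>>(q8, n, 8);
    quantizeGradient<<<(n + 255) / 256, 256>>>(q10, n, 10);
    cudaDeviceSynchronize();

    // Count distinct output values over the same ramp: 256 vs 1024 bands,
    // i.e. each 8-bit band is four times as wide on screen.
    int d8 = 1, d10 = 1;
    for (int i = 1; i < n; ++i) {
        d8  += (q8[i]  != q8[i - 1]);
        d10 += (q10[i] != q10[i - 1]);
    }
    printf("8-bit bands: %d, 10-bit bands: %d\n", d8, d10);

    cudaFree(q8); cudaFree(q10);
    return 0;
}
```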
 
Will be upgrading to a 980 Ti... but found out Nvidia doesn't enable 10-bit color on their consumer cards, fml.

Where can we bring this up with Nvidia? 10-bit consumer monitors are a dime a dozen now, and AMD has it in their CCC. (Thanks, Dave!)

Good news for you!
Nvidia's latest GeForce drivers for W10 have selectable 8/10-bit color output!
...so Nvidia does listen too... :D
 
Nvidia: GeForce GTX 980 (990M) for notebooks based on the desktop GTX 980?

It is claimed that the new super chip should reach the performance level of a GTX 980M SLI solution, which would mean roughly a doubling of the performance headroom of the current GTX 980M. Shortly thereafter, the first indications also circulated that the new chip would run under the code NVIDIA E-GXX.
....
The rumored chip TDP is between 100 and 200 watts, so notebook manufacturers will have a wide performance range to work with, depending on the cooling capacity of their devices.

Another interesting detail is that the chip apparently will not be offered as an MXM solution but must be soldered directly to the board.

Article is in German, so could be errors in the above translation.

http://www.notebookcheck.com/Nvidia...s-auf-Basis-der-Desktop-GTX-980.147678.0.html
 
A bit coarse, but the facts are true to the German source.

One thing should be added:
The rumored chip TDP is [said to be variable] between 100 and 200 watts, so notebook manufacturers will have a wide performance range to work with, depending on the cooling capacity of their devices.
 
The current 980M already uses a GM204.

It would be newsworthy if the 990M were not using GM204.
 
30% higher clocks is a bit unbelievable. The 980M already has a 1038-1127 MHz range of clocks. 30% above that would bring the 990M towards 1.45 GHz, which not even the most expensive factory-overclocked 980 models reach at default values, with 2x the TDP.


I know some people believe Maxwell to be the second coming of Christ, but let's be real.
 
990M Speculation ...


[Image: GTX 990M]


The GTX 990M is reported to be, at best, comparable in performance to GeForce GTX 980M SLI configurations. Its TDP is said to be configurable between 100 and 200 watts, depending on the notebook's cooling capacity and the manufacturer's needs. The GPU options are in practice GM204 and GM200, but because the desktop GTX 980 (full GM204) does not match the performance of a GTX 980M SLI setup, GM200 is probably the more likely of the two. The GTX 980M uses the GM204 GPU with 1536 of its 2048 CUDA cores enabled. The GTX 990M might be the N16E-Gxx that recently appeared in NVIDIA's drivers. The graphics card's price is reported to be almost double that of the GTX 980M.

At the press conference, CEO Wu Haijun said that NVIDIA will be releasing the GeForce GTX 990M graphics card in the last quarter, probably closer to the first half of the quarter, i.e. October, though the end of Q3, around September, has also been suggested as the actual release date. He also hinted that graphics cards based on the Pascal architecture would be released in June 2016.

http://muropaketti.com/nvidia-valmistelee-uutta-lippulaivaa-kannettaviin-geforce-gtx-990m
 
30% higher clocks is a bit unbelievable. The 980M already has a 1038-1127 MHz range of clocks. 30% above that would bring the 990M towards 1.45 GHz, which not even the most expensive factory-overclocked 980 models reach at default values, with 2x the TDP.
10-30% higher actual clocks, not maximum boost clocks.
There is a reason I specified a range. GTX 980M is probably, at least in some laptops, power and/or temperature limited when boosting.

I know some people believe Maxwell to be the second coming of Christ, but let's be real.
A site that did a direct comparison had GTX 980M at 58-75% of GTX 980 performance.
In other words, GTX 980 was 33-72% faster than a GTX 980M.
The rumors mentioned claimed a 200W TDP. The current 980M has a 100W TDP and a quarter of its shaders disabled.
I do not see how unlocking the full GPU and giving it twice the TDP will not result in a huge performance increase, in most applications.
 
The rumors mentioned claimed a 200W TDP. The current 980M has a 100W TDP and a quarter of its shaders disabled.
I do not see how unlocking the full GPU and giving it twice the TDP will not result in a huge performance increase, in most applications.
My problem is that the article mentions a variable TDP of 100-185W. At 100W it would probably be barely faster than a 980M. But at 150W (or more), that's going to look quite different.
 