NVIDIA Maxwell Speculation Thread

It's 11.1 / 11.2 compliant. We only know of four 11.3 features it supports, but those four aren't everything in 11.3 (and 12), as Max McMullen has already posted
Can they expose those 11.3 features in DirectX if they do not support *all* 11.3 features? If not, then it's probably reasonable to expect full 11.3 support.
 
If I've understood it right, they've exposed 11_1 features they support via NVAPI, even though they can't expose them in DirectX - I don't see why they wouldn't do the same with 11_3 if that's the case.
 
Can Kepler cards expose 11.2 features even though they don't support all 11.1 features? Or does that have to go through NVAPI as well?

Microsoft demoed 11.2 on a GTX770 so I'd guess the former, but what do I know.
 
11.2 didn't bring a new feature level; using its features doesn't require feature level 11_1 support.

(We should, again, be careful about 11.1 vs 11_1 and so on, to make clear whether we mean a feature level or a DirectX runtime version.)
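To make that distinction concrete, here's a minimal sketch (my own, not from anyone in the thread) of how feature levels are requested at device creation. Note the list has no D3D_FEATURE_LEVEL_11_2 entry at all; the DirectX 11.2 runtime added optional caps, not a new feature level.

```cpp
// Sketch: requesting feature levels at device creation. The function and
// fallback order are illustrative, not anyone's shipping code.
#include <windows.h>
#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

HRESULT CreateDeviceReportLevel(ID3D11Device** outDevice,
                                ID3D11DeviceContext** outContext,
                                D3D_FEATURE_LEVEL* outLevel)
{
    // Ask for 11_1 first and fall back to 11_0 (the level Kepler reports).
    const D3D_FEATURE_LEVEL wanted[] = {
        D3D_FEATURE_LEVEL_11_1,
        D3D_FEATURE_LEVEL_11_0,
    };
    return D3D11CreateDevice(
        nullptr,                     // default adapter
        D3D_DRIVER_TYPE_HARDWARE,
        nullptr,                     // no software rasterizer module
        0,                           // no creation flags
        wanted, ARRAYSIZE(wanted),
        D3D11_SDK_VERSION,
        outDevice, outLevel, outContext);
    // Kepler comes back as D3D_FEATURE_LEVEL_11_0 even on the 11.2 runtime;
    // 11.2-era features like tiled resources are queried per-cap instead.
}
```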
 
The earlier suggestion about NVAPI isn't true. Kepler supported all of the 11_0 optional features that were made mandatory in 11_1; that's how NVIDIA claimed to support "most of 11_1". You don't need NVAPI for optional features, though (that was more for DX10 with its "all or nothing" approach). I don't recall them ever claiming to support the 11_1-only features, but I'm not certain.

Kepler also supported tier 1 tiled resources, which DirectX 11.2 retroactively made an optional feature for 11_0 and 11_1 (note there is no such thing as feature level 11_2).

I suspect it'll support all of 11_3/12_0/[whatever Max decides to call it], or those features will retroactively be made optional on lower feature levels (hopefully not, as things are already getting really confusing in DX land :p).
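For anyone wondering what "optional feature, not a feature level" looks like in practice: below is a small sketch (mine, hedged) of the per-cap query for the tiled resources tier that 11.2 retroactively exposed on 11_0/11_1 hardware. The helper name is made up; the API calls and enum values are the real D3D11.2 ones.

```cpp
// Querying the optional tiled resources cap on any feature level.
// D3D11_FEATURE_D3D11_OPTIONS1 needs the 11.2-era d3d11_2.h header.
#include <d3d11_2.h>

D3D11_TILED_RESOURCES_TIER QueryTiledTier(ID3D11Device* device)
{
    D3D11_FEATURE_DATA_D3D11_OPTIONS1 opts = {};
    if (FAILED(device->CheckFeatureSupport(
            D3D11_FEATURE_D3D11_OPTIONS1, &opts, sizeof(opts))))
        return D3D11_TILED_RESOURCES_NOT_SUPPORTED;  // pre-11.2 runtime
    // Kepler reports TIER_1 here while its feature level is still 11_0.
    return opts.TiledResourcesTier;
}
```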
 
Is it therefore reasonable to think there's a possibility that GM200 (or rather, GM210) will have a further level of hardware feature support that GM204 does not?
 
With Kepler you had a midrange generation first (GK107, GK104) and then a half generation with GK110, which added more features, though mostly compute-related ones. Here the first half-generation was GM107 (and the rarer GM108), followed by GM204 and, we presume, GM200. So it seems reasonable to believe GM200 brings no new features at all, even compute ones. It may simply add non-fundamental extras like more FP64 units, ECC, and a larger L2.
 
I see.

What about the competition in 2015, AMD's Pirate Islands?
 
Things are pretty bad when we refer to the 18-month gap between the 680 and the 780 Ti as a half generation. I hope GM200 doesn't take quite as long to make an appearance.
 
I think we can mostly blame TSMC for that, and reasonably hope that it won't take quite as long to move to 16nm.
 
Strictly speaking, Titan was the half-generation move and the 780 Ti was just a Titan refresh.

I'm assuming we'll see a GM200-based Titan much sooner.
 
I don't give full credit for partially disabled chips :)

I'm really late to the GK110 party but just jumped on a good deal for an MSI GTX 780 Ti @ 1GHz for ~$400. Will wait out GM204.
 
The regular 780 is like three hundred bucks right now. And it comes with the fantastic new Borderlands.

I'd still pick a 290 for around the same price or a little less with the amazing Gold bundle that now includes Alien Isolation and Star Citizen. Not to mention the 290 is faster than the 780 and has an extra gig of VRAM.
 
Tigerdirect has finally gotten their 3rd party to ship a Zotac GTX 970 to me, over a week after it was supposedly in stock. Gotta watch those white-lie 48-hour-ship inventory promises; usually I get things from TigerDirect in a day or two.


I'm pumped to try the super-res downscaling on my big TV. My 560 Ti wasn't fast enough anymore for the 2720x1536 thing I do on my 50" 1360x768 plasma, and I never could get it working on a Radeon.
 
I was playing with some DSR last night and was very impressed. I haven't played with overclocking at all yet, just using the gaming app tool that came with the MSI for switching between OC/gaming/silent modes. DSR Diablo 3 with no fans spinning is funny.
 
Just tried DSR myself using the modded inf for Kepler cards.

Borderlands 2 immediately saw the new resolutions. The HUD scaled really well too. I set smoothness to 10% and didn't see an increase in blurriness compared to native. The whole process was very easy and painless. Why did it take so long to do something like this?!

My 680 couldn't quite handle 3840x2400, with fps dropping to 30-45 in "easy" scenes. I have a 780 Ti incoming that should fare a bit better.

Aliasing is cleaned up a lot, but BL2 doesn't have sufficient texture resolution to take full advantage of DSR. Other games with higher-resolution assets will probably see much greater benefit.
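For the curious, DSR is conceptually just supersampling: render at a multiple of the display resolution (the 2720x1536-on-1360x768 setup mentioned above is exactly 2x per axis) and filter down, with the smoothness slider widening the filter. Here's a rough CPU sketch of that idea; the sigma mapping and kernel shape are my own guesses, not NVIDIA's actual Gaussian filter.

```cpp
// Conceptual sketch of DSR-style downsampling: average each factor x factor
// block of high-res texels with Gaussian weights. Illustrative only.
#include <cmath>
#include <vector>

// Downsample a grayscale image by an integer factor. smoothness in [0,1]:
// near 0 behaves like a box filter, near 1 is much softer (blurrier).
std::vector<float> DsrDownsample(const std::vector<float>& src,
                                 int srcW, int srcH, int factor,
                                 float smoothness)
{
    const int dstW = srcW / factor, dstH = srcH / factor;
    const float sigma = 0.35f + smoothness * factor;  // assumed mapping
    std::vector<float> dst(dstW * dstH);

    for (int y = 0; y < dstH; ++y) {
        for (int x = 0; x < dstW; ++x) {
            float sum = 0.0f, wsum = 0.0f;
            // Gather the source texels in this destination pixel's footprint.
            for (int sy = y * factor; sy < (y + 1) * factor; ++sy) {
                for (int sx = x * factor; sx < (x + 1) * factor; ++sx) {
                    // Distance from the texel center to the footprint center.
                    float dx = sx + 0.5f - (x + 0.5f) * factor;
                    float dy = sy + 0.5f - (y + 0.5f) * factor;
                    float w = std::exp(-(dx * dx + dy * dy) /
                                       (2.0f * sigma * sigma));
                    sum  += w * src[sy * srcW + sx];
                    wsum += w;
                }
            }
            dst[y * dstW + x] = sum / wsum;
        }
    }
    return dst;
}
```

The smoothness-vs-blur trade-off described above (10% smoothness, no added blurriness) is exactly the filter width knob in a sketch like this.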
 
Clearly, NVIDIA is so incompetently lazy as to leave performance on the table chasing a ridiculous goal of low power consumption. The GTX 980 can reach an insane 1500MHz core speed quite easily, and people with custom boards can push even further. Upping the default frequencies would have allowed much greater performance while still retaining an excellent perf/W ratio; playing their cards conservatively will do them no good.

Not sure NVIDIA left performance on the table when securing their low power consumption goal. Apparently high OCs (1516MHz core and 8GHz VRAM) are reachable using default cooling on reference models, as shown by HardOCP. The techniques used are no different from what an experienced OC'er would use, plus a dash more patience .... ;)

We have learned many things today from this evaluation. We have learned that an overclocked GeForce GTX 780 Ti does outperform a stock-clocked GeForce GTX 980. While the GTX 980 outperforms a stock-clocked 780 Ti, overclocking that 780 Ti pushes its performance back over a stock GTX 980.

We then found that overclocking the GeForce GTX 980 flips the tables again and leapfrogs over the overclocked GeForce GTX 780 Ti. An overclocked GTX 980 will outperform an overclocked GTX 780 Ti by around 10%, depending on the game, sometimes more, sometimes less.

We also found that it takes overclocking a Radeon R9 290X just to get close to default GeForce GTX 980 performance. Even then, in only one game was it slightly faster than a default GeForce GTX 980; in the rest it was still slower.

When we overclocked the GeForce GTX 980 it ran away with the performance comparison against the overclocked R9 290X. There just isn't any contest, with differences as high as 30%+ in performance for an overclocked GTX 980 versus an overclocked R9 290X.

You know what the GeForce GTX 980 has done? The GeForce GTX 980 has made the AMD Radeon R9 290X look like last-generation technology, even though the R9 290X is AMD's current flagship video card for high-end gaming. The GTX 980 makes the AMD Radeon R9 290X look old. That is impressive.
http://www.hardocp.com/article/2014...0_overclocking_video_card_review#.VEU702d0xEZ
 