NVIDIA Maxwell Speculation Thread

There are two 3D features in D3D feature level 11.1:
1. UAVs in all shader stages
2. A larger number of UAVs (64 vs. 8)

Fermi, Kepler and Maxwell support the first; in fact, every DX11 card can support it. The problem is the 64 UAVs. Only GCN (that architecture supports an effectively unlimited number of UAVs) and Gen7.5 support this feature; the maximum for every other architecture is 8. This is why NV can't expose this feature level (that, and the 2D features, of course).
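For reference, the distinction shows up right at device creation: you only get the 64 UAV slots if the runtime grants feature level 11_1; otherwise you're capped at 8, bindable only in pixel/compute shaders. A minimal, untested sketch of the check (standard D3D11 API, nothing vendor-specific):

Code:
#include <d3d11_1.h>
#include <cstdio>
#pragma comment(lib, "d3d11.lib")

int main() {
    // Ask for 11_1 first; the runtime grants the highest level the driver exposes.
    // NB: on a pre-11.1 runtime this call fails with E_INVALIDARG, and real code
    // would retry with an array starting at 11_0.
    const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_11_1,  // 64 UAV slots, UAVs at every shader stage
        D3D_FEATURE_LEVEL_11_0,  // 8 UAV slots, pixel/compute shaders only
    };
    ID3D11Device* device = nullptr;
    D3D_FEATURE_LEVEL granted = D3D_FEATURE_LEVEL_11_0;

    HRESULT hr = D3D11CreateDevice(
        nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
        requested, 2, D3D11_SDK_VERSION,
        &device, &granted, nullptr);

    if (SUCCEEDED(hr)) {
        printf("Granted feature level 0x%x, UAV slots: %d\n",
               (unsigned)granted,
               granted >= D3D_FEATURE_LEVEL_11_1
                   ? D3D11_1_UAV_SLOT_COUNT            // 64
                   : D3D11_PS_CS_UAV_REGISTER_COUNT);  // 8
        device->Release();
    }
    return 0;
}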
I'm not sure what would prevent this feature from working on Gen7. Gen7 still has a surface binding table with 256 (255, actually) entries per shader stage (everything goes in there: RTs, textures, etc.), so I don't see why you couldn't fit 64 UAVs in there too?
 
Taken from latest roadmap:

Some kind of Photoshop fantasy (like the Tegra die shots), or a Maxwell GPU?

5 GPCs, each with 16 SIMD32 units? 2560 SPs?
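(If so, the count at least checks out: 5 GPCs × 16 SIMDs × 32 lanes = 2560 SPs.)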
 

[Attachment: New NV GPU.jpg]
Not a single word about Maxwell at the keynote. Volta seems to have been renamed to Pascal.
Everything shown was either current-gen products (possibly multiplied) or two years out in the future.
 
No mention of any Maxwell suggests to me that the "2nd gen" Maxwell, which I guess would contain a "big" Maxwell, is still some ways off, at least H1 2015 if not later. Perhaps they will use 16 FF?

And I notice that the y-axis of that chart is SGEMM/W, so perhaps NVIDIA could have met the "Maxwell" milestone already with the 750 Ti? [EDIT: Although there may be an implicit assumption that the chips involved are Tesla parts.]
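
For anyone wanting to sanity-check such a chart: SGEMM/W is just achieved FLOPS divided by board power, and an M×N×K SGEMM costs 2·M·N·K floating-point operations. A back-of-the-envelope sketch (the timing and power numbers below are made-up placeholders, not measurements):

Code:
#include <cstdio>

int main() {
    // An M x N x K SGEMM performs 2*M*N*K floating-point ops
    // (one multiply and one add per inner-loop step).
    const double M = 8192, N = 8192, K = 8192;
    const double seconds = 1.0;   // hypothetical measured kernel time
    const double watts   = 60.0;  // hypothetical board power draw

    const double gflops = 2.0 * M * N * K / seconds / 1e9;
    printf("%.0f GFLOPS -> %.1f GFLOPS/W\n", gflops, gflops / watts);
    return 0;
}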
 
I'm ignoring clueless mods from now on since they can't seem to understand the simple concept that the hardware supports the feature. Also no one is stopping you from using another card if you're so unhappy.

I'm happy with my Fermi card, and I am fully aware that under other APIs UAVs are supported in all stages. But that's not what you claimed:

Fermi, Kepler and Maxwell support everything in DirectX 11.1 except the Direct2D features, which are not related to 3D at all. It's not Nvidia's fault that MS was boneheaded in its DX11.1 design in the first place. Damien Triolet still seems to be clueless about this.

They do not support everything in DX11.1 relating to 3D. Now that this has been established, please try to remain civil, or forced vacations will be incoming.
 
No mention of any Maxwell suggests to me that the "2nd gen" Maxwell, which I guess would contain a "big" Maxwell, is still some ways off, at least H1 2015 if not later. Perhaps they will use 16 FF?

And I notice that the y-axis of that chart is SGEMM/W, so perhaps NVIDIA could have met the "Maxwell" milestone already with the 750 Ti?

Suddenly I was reminded of Charlie's article about "Nvidia to exit another major market segment". Could it be that they are indeed giving up the BIG CHIP strategy, and with it slowly the high-end gaming GPU market, to concentrate more on computing niches like cars and HPC? I found this keynote very eerie and lacking the usual inspiration. The giveaway of a Shield to every attendee also smells a bit, IMO.
 
Suddenly I was reminded of Charlie's article about "Nvidia to exit another major market segment". Could it be that they are indeed giving up the BIG CHIP strategy, and with it slowly the high-end gaming GPU market, to concentrate more on computing niches like cars and HPC?

What indications of this are there?
Big chips have never served them better; high-end gaming has never been healthier.
Plus they invented a whole new prosumer market: all the benefits of DP, with no driver or ECC support, for a midway price.
 
Yeah, giving out free Shields, that's like Google giving away free Nexus phones. (I understand the desire to put everything in a negative light, but can we please allow companies to give away free stuff? ;) )
 
What indications of this are there?
Big chips have never served them better; high-end gaming has never been healthier.
Plus they invented a whole new prosumer market: all the benefits of DP, with no driver or ECC support, for a midway price.

First, it's just speculation based on the fact that they were completely "hush" about Maxwell at the keynote. Even in Fermi times, when things were not going well, they showed something (the wood-screws event says hello).

Second, changes in company strategy are decided well in advance of their realisation and take into account the future, not only the current moment, so how well things are going right now is not necessarily the major driving force.

Third, even if high-end gaming is healthy, if nVIDIA sees more attractive prospects elsewhere for their technology, ones which would demand a refocusing of resources (even a company as big as nVIDIA has to make choices), they could make the change. They are a company, after all, looking for profit.

Fourth, I am not talking about doom and gloom here (like Charlie probably did). I am talking about conscious business choices.

Lastly, you can "attack" me however you want, since I am just speculating with no access to info whatsoever :LOL:

End note: I am far from being an nVIDIA hater as many of our members probably know :D

EDIT - Another point: nVIDIA is pushing GRID and cloud computing hard. I would say they (crazily, I know) envision a future where gaming is cloud-based rather than desktop-based.
 
I'm not sure what would prevent this feature from working on Gen7. Gen7 still has a surface binding table with 256 (255, actually) entries per shader stage (everything goes in there: RTs, textures, etc.), so I don't see why you couldn't fit 64 UAVs in there too?

Well, technically nothing, but it requires driver support. If Intel doesn't enable the feature, then you can't use it. Business as usual. :cry:
 
EDIT - Another point: nVIDIA is pushing GRID and cloud computing hard. I would say they (crazily, I know) envision a future where gaming is cloud-based rather than desktop-based.
I spoke to Ben Berraondo on exactly this point. NVIDIA is pushing GRID mostly at the professional segment (where they have received positive feedback); they are well aware of the bandwidth and latency limitations that plague cloud gaming platforms, which can't even be deployed in most areas of a single country like the UK.
 
No announcement of a Maxwell GRID M1 with 4x GM107 either; it would be more power efficient than the Kepler GRID K1 with 4x GK107.
 
Suddenly I was reminded of Charlie's article about "Nvidia to exit another major market segment". Could it be that they are indeed giving up the BIG CHIP strategy, and with it slowly the high-end gaming GPU market, to concentrate more on computing niches like cars and HPC? I found this keynote very eerie and lacking the usual inspiration. The giveaway of a Shield to every attendee also smells a bit, IMO.
NVIDIA's bread and butter is its gaming hardware and software. That is just Charlie's wet dream, as usual. I thought we were done with the guy (he has been under a rock for 2 years now).

If anything, the company is mad enough to push the boundaries of big chips even further with the likes of Titan Z. Maxwell is left out of this presentation probably because it has already been introduced and detailed, and because it doesn't bring anything new the way Pascal does.
 
Consumers do have limited GeForce GRID gaming in the form of streaming from Kepler (and up) to Tegra and SteamOS. A bit like how Windows XP was a fully multi-user OS, but only one physical user could use it at a time.

The technology is really best used on a LAN; even wireless hops ought to be minimized. (For an enterprise GRID, maybe go through one really good VPN link. Or there may be usage scenarios where some latency is tolerable, like showing off your umpty-gigabyte power-plant 3D model at a remote location.)
 
If anything, the company is mad enough to push the boundaries of big chips even further with the likes of Titan Z. Maxwell is left out of this presentation probably because it has already been introduced and detailed, and because it doesn't bring anything new the way Pascal does.

And Osborne Effect prevention is not bad for them.
 
No mention of any Maxwell suggests to me that the "2nd gen" Maxwell, which I guess would contain a "big" Maxwell, is still some ways off, at least H1 2015 if not later. Perhaps they will use 16 FF?

And I notice that the y-axis of that chart is SGEMM/W, so perhaps NVIDIA could have met the "Maxwell" milestone already with the 750 Ti? [EDIT: Although there may be an implicit assumption that the chips involved are Tesla parts.]
2nd Gen Maxwell will be out later this year. GM204 and GM206 are on track for a release towards late Q3/early Q4. The info I've heard is that GM204 is currently being taped out at TSMC and first silicon is expected back from the fab sometime next month. GM206 is slightly behind GM204 and there is a "big Maxwell" GM200 in the pipeline as well.

I noticed the change in the graph as well. I think it's just marketing: the graph on the old roadmap looked more linear, whereas the new one looks more exponential.
Suddenly I was reminded of Charlie's article about "Nvidia to exit another major market segment". Could it be that they are indeed giving up the BIG CHIP strategy, and with it slowly the high-end gaming GPU market, to concentrate more on computing niches like cars and HPC? I found this keynote very eerie and lacking the usual inspiration. The giveaway of a Shield to every attendee also smells a bit, IMO.
Nope, NV is not going to give up the "big chip" strategy just yet. As stated above, GM200 does exist and will be released as the last member of the family, just as in the Kepler generation.
 