NVIDIA Maxwell Speculation Thread

It looks like just someone's "what if?", the kind you'd see on B3D or anywhere else. There are small typos (GTX 520, GTX 510), an arrow leading to GTX 880 that makes no sense (inconsistent), and a made-up respin. No GK208, and hell, they still sell GF108.

There's no real information (the drawing says "is it a GM100 or a GM104? we don't know, maybe a GM104"). The author bets on GDDR5, which seems safe, but I remember speculation about GDDR6.
 
http://www-03.ibm.com/press/us/en/pressrelease/41684.wss

SAN FRANCISCO - 06 Aug 2013: Google, IBM (NYSE: IBM), Mellanox, NVIDIA and Tyan today announced plans to form the OpenPOWER Consortium – an open development alliance based on IBM's POWER microprocessor architecture. The Consortium intends to build advanced server, networking, storage and GPU-acceleration technology aimed at delivering more choice, control and flexibility to developers of next-generation, hyperscale and cloud data centers.

The move makes POWER hardware and software available to open development for the first time as well as making POWER IP licensable to others, greatly expanding the ecosystem of innovators on the platform. The consortium will offer open-source POWER firmware, the software that controls basic chip functions. By doing this, IBM and the consortium can offer unprecedented customization in creating new styles of server hardware for a variety of computing workloads.

As part of their initial collaboration within the consortium, NVIDIA and IBM will work together to integrate the CUDA GPU and POWER ecosystems.

“The OpenPOWER Consortium brings together an ecosystem of hardware, system software, and enterprise applications that will provide powerful computing systems based on NVIDIA GPUs and POWER CPUs,” said Sumit Gupta, general manager of the Tesla Accelerated Computing Business at NVIDIA.

Wonder if there will be Nvidia POWER chips with Nvidia GPUs in them.
 
This is interesting when thinking about IBM, which, outside of licensing the ISA to some designers, hasn't always transferred its core IP to outside partners; the rather unimpressive cores in the current-gen consoles versus their contemporary IBM cores are an example.

Perhaps it was a business model consideration, and IBM felt that IP was too valuable an asset to transfer. I suppose the question now is what changed: the returns from sharing, or the value of the IP and of IBM's microelectronics division.
IBM might also be buying into the heterogeneous HPC wave, or hedging its bets on cloud server tech.

Nvidia could use a partner that isn't in the business of providing GPU silicon or throughput chips. That leaves PowerPC as far as notable architectures go in this space, I think. MIPS has been acquired, x86 is arrayed heavily against Nvidia now, and ARM makes its own graphics IP and is being muscled in on by multiple GPU providers.
What slice of the original space does this leave for Nvidia's custom ARM core, however?

The inclusion of an interconnect partner shows how important that part is. Intel's bought interconnect tech and is about as aggressive on that scale as it is with the silicon it wants to push. AMD bought into something along those lines, for the dense microserver environment at least.
 
Nvidia in the meantime canceled the minor update that was due out last quarter and, surprisingly, canceled the first big Maxwell set for next spring. This leaves them with nothing to compete against AMD for the fall “back to school” market and the winter holiday season. The next minor blip will be in the spring as described in the link above, but that is not competition for AMD; it is Nvidia salvaging the scraps they have left. Dire is barely adequate to describe their competitive situation for 2013 and 1H/2014.
http://semiaccurate.com/2013/08/07/amd-to-launch-hawaii-in-hawaii/

Now we know what was cancelled (if SA is right). What's the big deal about canceling the big Maxwell (presumably @28nm since Nvidia would likely not start with the big GPUs on a new node again)? If 20nm runs well, they could have pulled in the 20nm Maxwell GM104 for instance. Why should GM104 be a "minor blip"?
 
On what comparison? Pitcairn vs GK106 or Verde vs GK107 would beg to differ.

Tahiti vs GK104 & Tahiti vs GK110. Nvidia made big progress. They now dominate AMD in performance/watt and overall performance. Can Hawaii catch up? Nvidia just copied you guys and split up their big cores into little ones, something AMD did like 4 generations ago. But they are faster & more efficient now, and you guys have to deal with it. :devilish: They stole a page out of your book. Any big surprises for us this time? Moar ROPs, big die? Moar pixel pushin powa?
 
NV's role could be as simple as porting GPU drivers to the POWER ISA.

The real question is what is Google doing there. They seem to have no obvious role as a supplier.
 
boxleitnerb said:
Now we know what was cancelled (if SA is right). What's the big deal about canceling the big Maxwell (presumably @28nm since Nvidia would likely not start with the big GPUs on a new node again)?
Why would anyone believe that such a chip existed in the first place?

Difficulty level: "because Charlie said" not a valid answer.
 
:?:

AMD's architecture is not split, it's GCN top to bottom. Unless you were talking about CPUs, but that doesn't really apply.

And GK208 suggests NVIDIA might not intend for the current split to remain.
 
I don't think it makes much sense unless the new architecture is so much more efficient that they could squeeze out tens of percent more performance at the same transistor count...
 
More than 600 mm² isn't possible; the current reticle limit is about 600 mm².

You cannot go bigger than GK110 on 28nm, but you could go faster if you increase perf/W and perf/mm² by architectural means.
 
Staying on 28nm and improving efficiency will not give enough gain to justify debuting Maxwell at that node.
If anyone could tell me how much theoretically would Kepler be faster than Fermi at the same 40nm process, I'd appreciate it.
 
If you really want to do it, you can go larger than that. You can check the technical specs here. Often only the maximum square size one can possibly fit in (~ 25x25 mm²) is quoted, but the absolute limit with a rectangular die is actually higher.
 
26×33 gives 858 mm². There's still a little bit of headroom!

(Refreshing to see somebody linking to hard data. Thanks!)
I would assume leaving a little bit of space on the edges can't hurt. But even 25×32 = 800 mm² is still significantly larger than GT200 or GK110.
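For what it's worth, the area arithmetic above is easy to sanity-check with a few lines (a toy sketch; the 26×33 mm field size is taken from the scanner specs linked above, and the GK110 dimensions are rough estimates, not official numbers):

```python
# Toy sketch: compare candidate die sizes against a rectangular
# reticle exposure field (dimensions in mm, areas in mm^2).
RETICLE_W, RETICLE_H = 26.0, 33.0  # max exposure field per the linked specs

def fits_reticle(w, h):
    """True if a w x h die fits in the exposure field (either orientation)."""
    return (w <= RETICLE_W and h <= RETICLE_H) or \
           (w <= RETICLE_H and h <= RETICLE_W)

dies = {
    "max square (~25x25)": (25.0, 25.0),
    "max rectangle (26x33)": (26.0, 33.0),
    "25x32 example": (25.0, 32.0),
    "GK110-ish (~24x23, rough)": (24.0, 23.0),
}

for name, (w, h) in dies.items():
    print(f"{name}: {w * h:.0f} mm^2, fits reticle: {fits_reticle(w, h)}")
```

The point is just that the often-quoted ~600 mm² figure comes from the largest *square* die, while the rectangular field allows quite a bit more area.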
 
800!
What kind of power consumption and thermal dissipation would it require?
Run it at a lower voltage and a not very aggressive clock speed and you can get away with less than an HD 7970 GE while still being a lot faster. That's how GK110 ends up with relatively low power consumption, significantly less than what you would expect from its die size difference to GK104. Assuming linear scaling of power consumption with die size is evidently not a very good method. ;)

Or the short version: It depends.
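The "wide and slow" argument is just the usual dynamic-power relation, P ≈ C·V²·f: roughly doubling the logic (capacitance C) while dropping voltage and clock can still come out well ahead on perf/W. A toy illustration with made-up numbers (none of these figures are real chip data):

```python
# Toy model of dynamic power: P ~ C * V^2 * f (leakage ignored).
# All numbers below are illustrative, not real GPU figures.
def dyn_power(cap, volt, freq):
    return cap * volt**2 * freq

# "Small and fast" chip: baseline capacitance, high voltage/clock.
small = dyn_power(cap=1.0, volt=1.2, freq=1.1)
# "Big and slow" chip: ~1.8x the logic, lower voltage and clock.
big = dyn_power(cap=1.8, volt=1.0, freq=0.9)

# Assume throughput scales with (units * clock).
perf_small = 1.0 * 1.1
perf_big = 1.8 * 0.9

print(f"power ratio big/small: {big / small:.2f}")  # only ~2% more power
print(f"perf  ratio big/small: {perf_big / perf_small:.2f}")  # ~47% faster
```

In this made-up case the bigger die delivers about 47% more throughput for about 2% more power, which is the whole reason power does not scale linearly with die size.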
 