NVIDIA Kepler speculation thread

Really, I'm not sure why this is so difficult for you to understand. When designing the chip there were performance goals as well as thermal envelope (TDP) goals.

Yes, there were performance goals within a specific thermal envelope. Neither matched initial expectations, so what exactly is so hard to understand about that?

So when they got it back, they had a choice: meet the performance goals, which would mean a TDP over 300 watts, or meet their TDP goals, which meant lower performance than they were shooting for.
For one, there is no 16-SM GF100 GPU available, and they obviously didn't keep their intended frequencies either. In November 2009 they issued a whitepaper for the Teslas estimating that the Tesla 2050/3GB (448 SPs, 384-bit) would have a 190+W TDP at frequencies between 1.2 and 1.4GHz, while the 2070/6GB was estimated at <225W.

By that time they already had A2 silicon in their hands, and there's nothing worth mentioning that the final A3 could have changed.

Today the Tesla 2050 with 3GB is rated on NV's website at a TDP of 238W, but at a frequency of 1.15GHz (memory runs at 750MHz). If that rough 40W difference, on top of frequencies lower than ever projected, doesn't ring a bell, then I don't think there's much left to say.
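Laying the projection next to the shipping spec (a quick sketch using just the numbers above):
Code:
# Tesla 2050/3GB: November 2009 whitepaper estimate vs. shipping spec.
projected_tdp_w   = 190    # "190+W", so a floor rather than an exact figure
projected_ghz_min = 1.2    # low end of the projected 1.2-1.4GHz range
actual_tdp_w      = 238    # TDP on NV's website today
actual_ghz        = 1.15   # shipping frequency

# 48W over the bare 190W floor, or the "rough 40W" if the estimate
# was really closer to 200W; and that's at *lower* clocks.
print(f"TDP overshoot: >= {actual_tdp_w - projected_tdp_w}W")
print(f"Clock shortfall: {projected_ghz_min - actual_ghz:.2f}GHz below the projected range")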

Yes, GF100 was designed from the get-go to be a 6-pin+8-pin board, where typical TDP can range between 225 and 300W. That much is true; the rest comes down to a chip that was simply problematic for whatever reason, and if they could have foreseen that early enough they would have fixed it. Apparently it was much too late by the time they did.

It really is that simple.
 
... the rest comes down to a chip that was simply problematic for whatever reason ...
Can't we speculate on what has gone wrong with the chip? Maybe the solution Nvidia used for the interconnect layers consumed more power (and hence produced more heat) than previously predicted?
 
When they got Fermi back, which of those two goals would have been the most realistic to stick to?

You're contradicting yourself. How can you claim that they hit their targets yet had to compromise? If Nvidia had hit their intended targets, we would have a fully enabled GF100 @ ~750MHz pulling ~250W.

I'm assuming you're fervently defending this comment Dave made?

The size of the ASIC tells me that it absolutely was designed for the power envelope that it's operating in.

GF100 was designed for the power envelope that the GTX 480 (a lower-performance variant) is operating in. That's a very different statement, and it wholly agrees with what I am saying: that they missed their design targets. I'm not sure what's so hard to understand about that.
 
An envelope is not about 10 watts here or there, which can be remedied with clock-rate changes. It's broader in scale, i.e. what kind of power connectors are necessary, and how many.

I take Dave's comment as meaning that Nvidia was ready to see GF100 in its highest-end variant being powered by 75+75+150 watts.
 
I take Dave's comment as meaning that Nvidia was ready to see GF100 in its highest-end variant being powered by 75+75+150 watts.

As has been mentioned several times before, the maximum rated output for the power connectors isn't an indication of target power consumption. The GTX 280 was also 75+75+150, yet its TDP was 236W. Having 300W available on the line doesn't mean Nvidia was aiming for a power draw of 300W. I see no evidence to support that claim.

But that statement on power connectors by itself is pretty obvious. I don't think anyone expects either Nvidia or AMD to limit themselves to 225W these days.
 
As has been mentioned several times before, the maximum rated output for the power connectors isn't an indication of target power consumption. The GTX 280 was also 75+75+150, yet its TDP was 236W.

That's what my post was all about: being 6+8-pin, the GTX 280 was designed to operate in the 225-300W envelope, which it did. Being 6+6-pin, the HD 5870 is designed to operate in the 150-225W envelope, which it does.

To make it clearer: it's not a single number for target power consumption, but a range. At least that's how I, as a non-native speaker, would interpret the word "envelope" here: something which encloses another thing from both sides.
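To put numbers on it, here's a quick sketch; the wattages are just the standard PCIe limits (75W from the slot, 75W per 6-pin connector, 150W per 8-pin connector):
Code:
# Standard PCIe power limits.
SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

def envelope(six_pins, eight_pins):
    """Return the (lower, upper) power envelope in watts.

    The upper bound is everything the board can draw; the lower bound
    is what the next-smaller connector configuration could already
    supply, since below that you'd simply use fewer/smaller connectors.
    """
    upper = SLOT_W + six_pins * SIX_PIN_W + eight_pins * EIGHT_PIN_W
    return upper - 75, upper  # each step down frees 75W

print(envelope(1, 1))  # 6+8-pin GTX 280 / GF100 board: (225, 300)
print(envelope(2, 0))  # 6+6-pin HD 5870 board:         (150, 225)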
 
Yep, that's what it means. GF100 was also designed to operate in the 225-300W envelope. That obviously didn't turn out as planned, considering they barely made it into that envelope even after cutting back on clocks and units and strapping a big cooler to that bad boy. I think everybody is saying the same thing, except some folks are focusing on the GTX 480, while I and others are referring to Nvidia's goals for GF100. After all, it's perfectly reasonable to assume that when they set out to build this thing they expected to release parts based on fully enabled chips, as their Fermi whitepaper would suggest.
 
Can't we speculate on what has gone wrong with the chip? Maybe the solution Nvidia used for the interconnect layers consumed more power (and hence produced more heat) than previously predicted?

I'm the last person who should mention off-topic strays, but don't you think a dedicated thread would be better for that?

Yes, there obviously is something wrong with GF100, but I don't see what it could have to do with NV's next (real) generation of GPUs.

Oh, and I'm glad we finally settled that "power envelope" thingy. The more you spit on that envelope, the better it'll stick :devilish:
 
Though the nVidia move is said to be some sort of "Best Buy exclusive deal", and some claim it's just down to nVidia seeing a lack of AIB demand for some midrange/low-end cards, so they'll now sell those themselves.

Even if XFX isn't an nV partner anymore, which would obviously mean no Kepler from them, it should also mean no "midlife GF100 kicker" from them; but that belongs in the other thread :p
 
I really hope that Kepler is not used as the basis of the PS4's GPU, nor any "kicker"/refresh of it. I hope the PS4's GPU is based on Maxwell. It's likely that Kepler (2011) will be to Fermi (2009/2010) what GT200 (2008) was to G80 (2006). Maxwell (2013) is probably going to be Nvidia's first totally new architecture after Fermi.

Maybe Maxwell will power Playstation 4.

Exactly. I certainly hope so.
 
I have no idea how similar Kepler will be to Fermi. However, what I'm missing a bit is the line of development from G80 -> G92 -> GT200 -> GF100.
That is, I clearly see the development from G80 to GT200, but to me it looks more like this (warning: ASCII art alert):
Code:
     G80 (and friends)
      |
     G92 (and friends)
     /  \
GT200   GF100
           |
        Kepler
Maybe that's exaggerated, but I don't really see that GF100 has more in common with GT200 than it has with G92. As such, GT200 is more like a side-step, an architectural dead end. I could be wrong though :). But that development line is much more obvious on the AMD side, from R6xx through R7xx to Evergreen.

Also, I'm not sure why people are expecting Maxwell to be a totally new arch. Fermi certainly wasn't totally new either, though definitely a lot changed. I don't think there's that much wrong with the Fermi architecture either (just forget GF100), so if Kepler is just a mildly updated, shrunk Fermi, it could be quite decent.
Though if Maxwell has big changes, of course it would be interesting to know in what direction it goes. What is sometimes mentioned is ditching more fixed-function stuff (like ROPs or TMUs); the problem I see with this is that I don't think it would be power-efficient, and that's a metric whose importance is only bound to grow.
 
I have no idea how similar Kepler will be to Fermi. However, what I'm missing a bit is the line of development from G80 -> G92 -> GT200 -> GF100.
That is, I clearly see the development from G80 to GT200, but to me it looks more like this (warning: ASCII art alert):
Code:
     G80 (and friends)
      |
     G92 (and friends)
     /  \
GT200   GF100
           |
        Kepler
I would almost go for the following; development on the next gen pretty much starts when the last one is out the door.
Code:
     G80  ------------------------ GF100 (and friends) --------- Maxwell? (hopefully..)
      |                             |
     G92 and friends               Kepler (and friends)
      |
     GT200 (and friends)
I loved the good old NV code names: NVx0 for new, NVx5 for refresh. ;)
 
Where's the line between GT21x and GF100? They share some things, like the memory controller, right?
 
How about:
Code:
Tesla ------ Fermi ----- Kepler ----- Maxwell
 |             |           |             |
G80          GF100      (GK100)        .....
 |             |
G92          GF104
 |             |
GT200       (GF200)
 
Hmm, yes, that makes sense. Clearly, in terms of development cycles, GF100 was developed in parallel with GT200. In contrast to AMD, it just looks like GT200 deviates a bit more, so development appears more linear on the AMD side.
 
Well, the GF110, aka the GTX 580, was released today. Reviews are showing a 15% speed improvement with 10% lower power consumption.

Hopefully this will put to rest the constant posts that nVidia can't improve performance per watt in new designs.
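For what it's worth, those two figures compound; a quick back-of-the-envelope calculation with the review numbers above:
Code:
# Perf/watt gain implied by the reviews: 15% faster at 10% less power.
speedup = 1.15        # performance relative to the GTX 480
power_ratio = 0.90    # power draw relative to the GTX 480

perf_per_watt_gain = speedup / power_ratio - 1
print(f"{perf_per_watt_gain:.0%}")  # ~28% better performance per watt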
 
Well, the GF110, aka the GTX 580, was released today. Reviews are showing a 15% speed improvement with 10% lower power consumption.

Hopefully this will put to rest the constant posts that nVidia can't improve performance per watt in new designs.

If you produce something with sufficiently horrendous performance per watt, improving upon it is not nearly as difficult :).
 