Actually, he claimed they couldn't make one on 55nm either; then, when it became apparent it was being done, he claimed it would only be made in extremely low "halo" quantities, and when he realized he was wrong about that too, he shut up about it.
trinibwoy said: You're right...if GTX 380 is 225W then it's not likely there will be a dual-GPU part based on that configuration.
But if what Rys speculates is indeed true, NVIDIA doesn't really need a dual-chip card with two "complete" GF100 chips in it (a rough power-budget check follows below). If a single GeForce 380 is able to keep up with the HD 5970, then two GeForce 360 chips on a single PCB should beat the HD 5970 without problems, assuming SLI scales well, of course.
I think it was Fudzilla that said NVIDIA would be launching two single-chip cards at GF100 release, with the dual-GPU card coming a bit later. Assuming the second single-chip card is the GeForce 360, the GeForce 395 (dual-GPU) could use two of those.
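For what it's worth, a quick power-budget check makes the same point as the quote above. The 225 W figure is this thread's speculation, but the PCIe limits (75 W from the slot, 75 W from a 6-pin, 150 W from an 8-pin, 300 W total board power) are standard, so two full chips at that TDP simply don't fit without heavy binning or cut-down parts. A minimal sketch in Python:

PCIE_SLOT_W = 75      # power available from the PCIe slot itself
SIX_PIN_W = 75        # one 6-pin PEG connector
EIGHT_PIN_W = 150     # one 8-pin PEG connector
BOARD_LIMIT_W = PCIE_SLOT_W + SIX_PIN_W + EIGHT_PIN_W  # 300 W spec ceiling

single_gpu_tdp_w = 225          # speculated GTX 380 TDP (assumption, not a confirmed spec)
naive_dual_w = 2 * single_gpu_tdp_w
print(f"Two full chips: ~{naive_dual_w} W vs {BOARD_LIMIT_W} W board limit")

# Leaving ~30 W for memory, VRM losses and the fan (a rough guess), each GPU on a
# dual board would have to come in well under 150 W -- hence binned or cut-down chips.
per_gpu_budget_w = (BOARD_LIMIT_W - 30) / 2
print(f"Rough per-GPU budget: ~{per_gpu_budget_w:.0f} W")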
I'm not sure what Fuad's article has to do with your spoiler, but maybe I'm missing those valuable connecting dots. Obviously, you don't have that problem, eh?
He's referring to this. I had to look it up when I got home today, but it wasn't too hard to figure out what he was implying.
http://twitter.com/NVIDIAGeForce
I was sceptical at first too, when Nvidia announced the FLOPS range and thus the likely shader clocks of Tesla-Fermi, which, to put it kindly, aren't spectacular.
But after thinking it over a bit, I suspect that Tesla will not be clocked nearly as high as it could be. Why? Several reasons.
First, Tesla is targeted more at supercomputers and clusters than even its first generation was (desk-side supercomputers). In that space, you usually scale with the number of cores or devices, not with clock speed.
Second, the above makes for a very good yield-recovery scheme. At first, you can sell chips that underperform clock-wise (relative to the desktop processors) into that market; later, you can ramp up the clock speed but disable an SM or two in return, keeping your GFLOPS the same (a rough sketch of that trade-off follows below). I don't think the SC market is as spec-avid as enthusiast gamers, who "want a 512 bit bus", for example, rather than a given amount of bandwidth.
Third, it's more critical in this environment to ensure long-term stability and, even more importantly, to build a reputation. Nvidia, as a newcomer in this market, needs to convince people that its processors are as good an alternative as the others; you do that not only by boasting TFLOPS numbers, but also by ensuring stable operation. So you cannot push your cards to their utmost limits. At least I wouldn't.
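To make that second point concrete, here's a minimal sketch of the clock-versus-SM trade-off. The 32 cores per SM and 16 SMs for the full chip come from Nvidia's Fermi disclosures; the clock values are placeholders, not known Tesla specs:

CORES_PER_SM = 32   # per the Fermi architecture disclosures

def sp_gflops(active_sms, shader_clock_mhz):
    # Peak single precision: cores * 2 FLOPs per shader clock (FMA)
    return active_sms * CORES_PER_SM * 2 * shader_clock_mhz / 1000.0

# Different SM/clock mixes landing in the same GFLOPS ballpark:
print(sp_gflops(16, 1200))   # full chip at a conservative clock -> ~1229 GFLOPS
print(sp_gflops(14, 1375))   # two SMs disabled, higher clock    -> ~1232 GFLOPS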
Actually, the card I was talking about was canned; the one that was released was a very different model. They also can't make a 2x 280 or 2x 285; they have a 2x 275 instead. Looking at the TDPs, they are cherry-picking the hell out of those chips to make the cards.
Also, if you look at production numbers, it was really limited; there aren't many of them out there. $600 graphics cards don't sell much, but they do grab a disproportionate share of reviews, mindshare, and fanboi froth.
-Charlie
See?
Dots! more dots! dots dots dots! now stop
Also, in all this, I don't think I ever saw anyone guesstimating what the GeForce 360 could be.
I'm guessing it will be what GT212 was supposed to be: a 384 SP part, and (given what we know now) one with a 256- or 320-bit memory interface, 2/3 of the ROPs, and 3/4 of the TMUs of the full GF100 chip.
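Running those fractions against the full-chip numbers this thread has been assuming (512 SPs, 128 TMUs, 48 ROPs, 384-bit bus; rumoured figures, not confirmed specs) gives:

full = {"SPs": 512, "TMUs": 128, "ROPs": 48, "bus_bits": 384}   # rumoured full GF100
gtx360_guess = {
    "SPs": full["SPs"] * 3 // 4,            # 384
    "TMUs": full["TMUs"] * 3 // 4,          # 96
    "ROPs": full["ROPs"] * 2 // 3,          # 32
    "bus_bits": full["bus_bits"] * 2 // 3,  # 256 (a 320-bit option would be 5/6 of the bus)
}
print(gtx360_guess)   # {'SPs': 384, 'TMUs': 96, 'ROPs': 32, 'bus_bits': 256}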
Right? That's why they have only sold more GTX 295s than ATI 4870X2s, and they are STILL in stock and STILL being bought, compared to the 4870X2s. Charlie, for once, WOULD YOU PLEASE JUST ADMIT YOU WERE WRONG ABOUT SOMETHING, or is it beneath you to do such a thing?
Would M$ have a problem with a product named "360"?
384 SP, 96 TMUs and a 256-bit bus might not be enough to keep up with an HD 5870 (which should be the main target of a GTX 360), I fear.
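A rough side-by-side shows where that worry comes from. The HD 5870 numbers are its launch specs; the GTX 360 column is just the speculated configuration above with a placeholder 1.4 GHz hot clock and 4.0 Gbps GDDR5, so treat it as a sketch, not a prediction:

def bandwidth_gb_s(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

hd5870_bw = bandwidth_gb_s(256, 4.8)   # 153.6 GB/s (launch spec)
gtx360_bw = bandwidth_gb_s(256, 4.0)   # 128.0 GB/s (assumed memory speed)

hd5870_gflops = 1600 * 2 * 850 / 1000  # ~2720 GFLOPS peak (VLIW5)
gtx360_gflops = 384 * 2 * 1400 / 1000  # ~1075 GFLOPS at the assumed hot clock

print(f"Bandwidth: {gtx360_bw:.1f} vs {hd5870_bw:.1f} GB/s")
print(f"Peak SP:   {gtx360_gflops:.0f} vs {hd5870_gflops:.0f} GFLOPS")
# Peak FLOPS aren't directly comparable between VLIW5 and scalar designs, so the
# bandwidth figure is the more architecture-neutral comparison here.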
All those points apply to Hemlock too, Charlie.
Seconded.
links pls (GT295 vs 4870X2 sales)
Seconded.
Wth does Nvidia think we should care how long the board is!!!
Being shorter than the 5870/5970 allows it to fit in more cases? It may also suggest something about power/heat & inferred performance. Of course, Nvidia may prefer to tolerate higher ASIC/board temps & eschew greater surface area, but perhaps they have more efficient all-copper/vapour-chamber cooling, allowing a more compact build at a higher BOM.
All those points apply to Hemlock too, Charlie.
Difference is, you could actually buy a GTX 295.