NVIDIA Fermi: Architecture discussion

I'm not sure what Fuad's article has to do with your spoiler but maybe I'm missing those valuable connecting dots. Obviously, you don't have that problem, eh? :D
 
Actually, he claimed they couldn't make one on 55nm either; then, when it was apparent it was being done, he claimed it would be done in extremely low "halo" quantities; and then, when he realized he was wrong, he shut up about it :)

Heh. The latest fun part was when Fud said Fermi would be delayed until January. Until then, Charlie (and others) would say that Fud knew nothing of what he was talking about, but as soon as he said something similar to them, Fud became much more credible :)

trinibwoy said:
You're right...if GTX 380 is 225W then it's not likely there will be a dual-GPU part based on that configuration.

But if what Rys speculates is indeed true, NVIDIA doesn't really need a dual-chip card with two "complete" GF100 chips on it. If a single GeForce 380 is able to keep up with the HD 5970, then two GeForce 360 chips on a single PCB should beat the HD 5970 without problems, assuming SLI scales well, of course.

I think it was Fudzilla that said NVIDIA would be launching two single-chip cards at GF100's release, with the dual-GPU card releasing a bit later. Assuming that the second single-chip card is the GeForce 360, the GeForce 395 (dual GPU) could be using two of those.
 
Also, in all this, I don't think I ever saw anyone guesstimating what the GeForce 360 could be.

I'm guessing it will be what GT212 was supposed to be: a 384 SP part with (given what we know now) a 256-bit or 320-bit memory interface, 2/3 of the ROPs and 3/4 of the TMUs of the full GF100 chip.
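That guessed configuration can be sanity-checked with some back-of-the-envelope arithmetic. A minimal sketch, using the full-GF100 figures speculated about in this thread (512 SPs, 128 TMUs, 48 ROPs, 384-bit bus); the 1.4 GHz shader clock and 4 Gbps GDDR5 rate are purely placeholder assumptions, not known specs:

```python
# Hypothetical GTX 360 as a cut-down GF100, per the guess above.
# Full-chip figures follow this thread's speculation, not confirmed specs.
FULL_GF100 = {"sp": 512, "tmu": 128, "rop": 48, "bus_bits": 384}

def cut_down(full, sp_frac, tmu_frac, rop_frac, bus_bits):
    """Scale unit counts by the given fractions; the bus width is set directly."""
    return {
        "sp": round(full["sp"] * sp_frac),
        "tmu": round(full["tmu"] * tmu_frac),
        "rop": round(full["rop"] * rop_frac),
        "bus_bits": bus_bits,
    }

def peak_gflops(sp, shader_mhz):
    # Each Fermi SP can issue one single-precision FMA per clock = 2 FLOPs.
    return sp * 2 * shader_mhz / 1000.0

def bandwidth_gbs(bus_bits, data_rate_gbps):
    # bus width in bytes times effective per-pin data rate
    return bus_bits / 8 * data_rate_gbps

gtx360 = cut_down(FULL_GF100, 3 / 4, 3 / 4, 2 / 3, 256)
print(gtx360)                                  # 384 SPs, 96 TMUs, 32 ROPs, 256-bit
print(peak_gflops(gtx360["sp"], 1400))         # 1075.2 GFLOPS at an assumed 1.4 GHz
print(bandwidth_gbs(gtx360["bus_bits"], 4.0))  # 128.0 GB/s at an assumed 4 Gbps
```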
 
But if what Rys speculates is indeed true, NVIDIA doesn't really need a dual-chip card with two "complete" GF100 chips on it. If a single GeForce 380 is able to keep up with the HD 5970, then two GeForce 360 chips on a single PCB should beat the HD 5970 without problems, assuming SLI scales well, of course.

I think it was Fudzilla that said NVIDIA would be launching two single-chip cards at GF100's release, with the dual-GPU card releasing a bit later. Assuming that the second single-chip card is the GeForce 360, the GeForce 395 (dual GPU) could be using two of those.

I don't think so.
It's the same issue that I was talking about before...
NVIDIA wasn't able to pull a dual card out of GT200, full stop, because it was too big and too hot to put two of them on the same card. They could have used GTX 260-class GT200s, but that wasn't the case, so they had to wait for a new process to get it done.
GF100 will be a little bit smaller than GT200, but probably bigger than GT200b, so it seems quite unlikely to me that they would be able to put that kind of beast on a dual card.
 
Actually, he claimed they couldn't make one on 55nm either; then, when it was apparent it was being done, he claimed it would be done in extremely low "halo" quantities; and then, when he realized he was wrong, he shut up about it :)

Actually, the card I was talking about was canned, the one that was released was a very different model. They also can't make a 2x 280 or 2x 285, they have a 2x 275 instead. Looking at the TDPs, they are cherry picking the hell out of those chips to make the cards.

Also, if you look at production numbers it was really limited, there aren't many of them out there. $600 graphics cards don't sell much, but do grab a disproportionate share of reviews, mindshare, and fanboi froth.

-Charlie
 
I was sceptical at first too, when Nvidia announced the FLOPS range, and thus the likely shader clocks, of Tesla-Fermi, which aren't spectacular, to put it kindly. :)

But after thinking it over a bit, I suspect that Tesla will not be clocked nearly as high as it could be. Why? Many reasons.

First, Tesla is targeted more at supercomputers and clusters than even its first generation was (desk-side SCs). In that space, you usually do not scale with clock speed but with the number of cores or devices.

Second, the above makes for a very good yield-recovery scheme. At first, you could sell chips that underperform clock-wise (with respect to the desktop processors) into that market; later, you can ramp up clock speed but disable an SM or two in return, as long as you keep your GFLOPS the same. I don't think the SC market is as spec-avid as the enthusiast gamers, who "want a 512 bit bus", for example, rather than a given amount of bandwidth.

Third, it's more critical in this environment to ensure long-term stability and, even more importantly, to build a reputation. Nvidia, as a newcomer in this market, needs to convince people that their processors are as good an alternative as others: you do that not only by boasting TFLOPS numbers, but also by ensuring stable operation. So you cannot push your cards to the utmost limits. At least I wouldn't do it.
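The trade-off in the second point (the same advertised GFLOPS from fewer SMs at a higher clock) can be put in numbers. The 32 SPs per SM matches Fermi's organization; the 1.25 GHz baseline shader clock is just an illustrative assumption:

```python
# Yield-recovery sketch: hold advertised GFLOPS constant while trading
# enabled SMs against shader clock. Clock values are illustrative only.
SPS_PER_SM = 32  # Fermi groups 32 SPs per streaming multiprocessor

def gflops(sms, shader_mhz):
    # one single-precision FMA (2 FLOPs) per SP per shader clock
    return sms * SPS_PER_SM * 2 * shader_mhz / 1000.0

def clock_for_target(sms, target_gflops):
    # shader clock (MHz) needed to hit the same GFLOPS with fewer SMs
    return target_gflops * 1000.0 / (sms * SPS_PER_SM * 2)

target = gflops(16, 1250)            # full 16-SM chip at an assumed 1.25 GHz
print(target)                        # 1280.0 GFLOPS
print(clock_for_target(15, target))  # ~1333 MHz with one SM disabled
print(clock_for_target(14, target))  # ~1429 MHz with two SMs disabled
```

So a chip that fails timing at the full SM count can still hit the same headline number, which is why the spec sheet can stay constant while the bins shift underneath it.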

You are missing the most important point for the HPC market: power. It is probably the number one concern for that market. If NV disables a cluster or two for Fermi, that is one method of recovering yield drop-outs that they could use.

If it doesn't clock high enough, that is because of either timing problems or power use; one results in a crash, the other in out-of-spec power draw. The first problem would work out fine for the HPC cards, but the second one won't. It is in NV's best interest to bin the lower-power chips for those cards.

Also, I am REALLY skeptical that there will be enough demand for $3K+ compute cards to soak up the imperfect GF100s from the desktop market. The market of "well funded HPC customers with money to piss away" is notably smaller than the market for gamers saving up their allowance.

For the Fermi to be a salvage part, there would have to be a salvage bin worth basing the '360' on for it to be of any real value, and I doubt that there will be enough parts that fit a narrow enough bin to base a product on.

If I had to bet, I would say that the Fermi parts are the cherry-picked GF100s; after all, at 6-7 times the cost, I would suspect that the '380s' are the second-tier parts.

-Charlie
 
Actually, the card I was talking about was canned, the one that was released was a very different model. They also can't make a 2x 280 or 2x 285, they have a 2x 275 instead. Looking at the TDPs, they are cherry picking the hell out of those chips to make the cards.

Also, if you look at production numbers it was really limited, there aren't many of them out there. $600 graphics cards don't sell much, but do grab a disproportionate share of reviews, mindshare, and fanboi froth.

-Charlie

Right? That's why they have sold more GTX 295s than ATI 4870X2s, and they are STILL in stock and STILL being bought, compared to the 4870X2s. Charlie, for once, WOULD YOU PLEASE JUST ADMIT YOU WERE WRONG ABOUT SOMETHING, or is it beneath you to do such a thing?
 
Actually, the card I was talking about was canned, the one that was released was a very different model. They also can't make a 2x 280 or 2x 285, they have a 2x 275 instead. Looking at the TDPs, they are cherry picking the hell out of those chips to make the cards.

Also, if you look at production numbers it was really limited, there aren't many of them out there. $600 graphics cards don't sell much, but do grab a disproportionate share of reviews, mindshare, and fanboi froth.

-Charlie

All those points apply to Hemlock too Charlie.

See?
Dots! more dots! dots dots dots! now stop

I'm confused. I thought you were trying to scoop Chris on some breaking news. Wth does Nvidia think we should care how long the board is!!!
 
Also, in all this, I don't think I ever saw anyone guesstimating what the GeForce 360 could be.

I'm guessing it will be what GT212 was supposed to be: a 384 SP part with (given what we know now) a 256-bit or 320-bit memory interface, 2/3 of the ROPs and 3/4 of the TMUs of the full GF100 chip.

Would M$ have a problem with a product named "360"?
 
Right? That's why they have sold more GTX 295s than ATI 4870X2s, and they are STILL in stock and STILL being bought, compared to the 4870X2s. Charlie, for once, WOULD YOU PLEASE JUST ADMIT YOU WERE WRONG ABOUT SOMETHING, or is it beneath you to do such a thing?

links pls (GT295 vs 4870X2 sales)
 
Also, in all this, I don't think I ever saw anyone guesstimating what the GeForce 360 could be.

I'm guessing it will be what GT212 was supposed to be: a 384 SP part with (given what we know now) a 256-bit or 320-bit memory interface, 2/3 of the ROPs and 3/4 of the TMUs of the full GF100 chip.

384 SPs, 96 TMUs and a 256-bit bus might not be enough to keep up with an HD 5870 (which should be the main target of a GTX 360), I fear.

Would M$ have a problem with a product named "360"?

Don't think so, as long as they keep the GTX in front of it. ;)

Right? That's why they have sold more GTX 295s than ATI 4870X2s, and they are STILL in stock and STILL being bought, compared to the 4870X2s.

HD 4870X2s are EOL, while GTX 295s aren't, because ATI has something that NVidia has not: a new lineup of cards. ;)
 
384 SPs, 96 TMUs and a 256-bit bus might not be enough to keep up with an HD 5870 (which should be the main target of a GTX 360), I fear.

Even if the GTX 360 were to be that, I don't see why it couldn't be equal to, if not even faster than, the 5870.
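A quick bandwidth comparison puts some numbers on this exchange. The HD 5870 figures (256-bit bus, 4.8 Gbps GDDR5) are its shipping specs; the GTX 360's memory speed is a pure assumption, since only the guessed 256-bit width is on the table:

```python
# Rough memory-bandwidth comparison: shipping HD 5870 vs. the guessed GTX 360.
# The 4.0 Gbps GDDR5 rate for the GTX 360 is an assumption, not a known spec.
def bandwidth_gbs(bus_bits, data_rate_gbps):
    # bus width in bytes times effective per-pin data rate
    return bus_bits / 8 * data_rate_gbps

hd5870 = bandwidth_gbs(256, 4.8)  # 153.6 GB/s (real shipping spec)
gtx360 = bandwidth_gbs(256, 4.0)  # 128.0 GB/s at the assumed memory speed
print(hd5870, gtx360)
print(f"GTX 360 at ~{gtx360 / hd5870:.0%} of the HD 5870's bandwidth")
```

At equal bus widths the whole comparison collapses to memory speed, so whether the cut-down part keeps up would hinge on shader throughput and the GDDR5 bin NVIDIA ships.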
 
Wth does Nvidia think we should care how long the board is!!!
Being shorter than the 5870/5970 allows it to fit in more cases? It may also suggest something about power/heat and, by inference, performance. Of course, Nvidia may prefer to tolerate higher ASIC/board temps and eschew greater surface area, but perhaps they have more efficient all-copper/vapour-chamber cooling allowing a more compact build at a higher BOM.
 
384 SPs, 96 TMUs and a 256-bit bus might not be enough to keep up with an HD 5870 (which should be the main target of a GTX 360), I fear.

Maybe too close to the Performance chip of the GF100 series. Especially as the launch date of GF100 slips, the time difference between the top-of-the-line GPU and the Performance GPU should decrease.
 