NVIDIA Fermi: Architecture discussion

There's a difference between arguing a point from sound physics and logical reasoning and arguing by waving a magic wand that negates the laws of physics and logic. There is flatly no logical, physics-based cost/performance advantage Fermi is known to have over Cypress. There are a number of logical, physics-based cost/performance advantages Cypress is known to have, or can be logically inferred to have, over Fermi.

It's primarily a matter of a reality-based argument vs. a magic-wand-based argument, not a hatred of Nvidia. My personal attitude toward Nvidia is subordinate to the logic and physics of my arguments ... therefore irrelevant.

:LOL:

Is that so? You don't even know Fermi's performance vs. Cypress's performance... I would say that your "reality" is the magic-wand-based argument at this point.
 
There's a difference between arguing a point from sound physics and logical reasoning and arguing by waving a magic wand that negates the laws of physics and logic. There is flatly no logical, physics-based cost/performance advantage Fermi is known to have over Cypress. There are a number of logical, physics-based cost/performance advantages Cypress is known to have, or can be logically inferred to have, over Fermi.

It's primarily a matter of a reality-based argument vs. a magic-wand-based argument, not a hatred of Nvidia. My personal attitude toward Nvidia is subordinate to the logic and physics of my arguments ... therefore irrelevant.


There are a lot of things missing from your logic :oops:

Power consumption isn't much more than a GTX 280's.
 
:LOL:

Gotta agree with XMAN in one respect though, there are some who seem to be consistently hoping for / predicting bad news. To the benefit of whom is the baffling question.

I certainly hope that wasn't aimed in my direction.. I'm not by any means "consistently hoping for / predicting bad news", just merely pointing out that the post "There's no way that a single Fermi chip will suck that much power" is wrong.. That is by no stretch of the imagination me hoping Fermi will fail, etc. Power consumption rises non-linearly when clocks (and the voltage needed to sustain them) are raised, and anyone suggesting that a single Fermi under any circumstance will never "suck that much power" is just patently wrong.. For a very easy comparison (though a bit more extreme), look at 5970 consumption rates stock vs. OC, or an i7 (or really any CPU) stock vs. OC.. when clock rates and voltages are raised, power draw rises very, very sharply.

On the whole Fermi bashing thing.. puulease, I'm a consumer advocate first, and though there is a small part of me that agrees nVidia as a whole being brought down a bit would be better for the market (and consumer), as it would allow ATI and any other players to flourish without the questionable measures that market leaders tend to put in place (see Intel v AMD/Via).. the failure of Fermi would only hurt that philosophy. I've said it again and again and again.. I honestly 100% believe that IF Fermi became to nVidia what the R600 was to ATI, in the long run it would only benefit nearly ALL of us; it would (hopefully) foster changes at nV that would only increase competition, just as the R600 did for ATI. I personally think the R600 was probably among the greatest GPUs ever, not for what it did (performance etc.) but because of how it changed the philosophy at ATI. Competition within a market is not defined by success but by failure, the ability to overcome and learn from that failure.

I may wear red glasses from time to time, and I most certainly have donned the green ones as well. Some people's bias against the other side here is so blinding that they support "their" side with (imo) total ignorance at times (yes, looking at you XMAN and sometimes Razor, and most definitely Silus and Sontin for the green team; Spig is only the latest among the fanATIcs that included Hellbinder and Doomsomething or other and W ..forgot his name..). There are many who for one reason or another belong to a certain camp, be it their job, experience, etc., and though they exhibit bias they at least show the ability to understand the other side, and we often share a heated debate, which in itself is not bad, as it spurs fruitful discussion. /ENDRANT
 
Over 50% more? 300W is 33.33% more than 225W.

It can easily be explained by voltage and clocks alone, assuming a 10% bump on each:

225 * 1.1² (voltage) * 1.1 (frequency) = 299.475, and that doesn't even take the disabled SMs into account, but then again it doesn't take the number of memory chips into account either.
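For anyone who wants to poke at that arithmetic, here is a minimal sketch of the same back-of-the-envelope estimate. It assumes the whole 225W baseline is dynamic power scaling as C·V²·f, which ignores leakage and board losses; the numbers are illustrative, not measured Fermi figures.

```python
# Back-of-the-envelope sketch of the 225 W -> ~300 W argument above.
# Assumes the entire baseline board power is dynamic power scaling as
# P ~ C * V^2 * f; static leakage and memory/VRM losses are ignored.

def scaled_power(base_watts: float, voltage_bump: float, clock_bump: float) -> float:
    """Scale dynamic power by (1 + voltage_bump)^2 * (1 + clock_bump)."""
    return base_watts * (1.0 + voltage_bump) ** 2 * (1.0 + clock_bump)

if __name__ == "__main__":
    base = 225.0  # W, assumed Tesla-class baseline (illustrative)
    estimate = scaled_power(base, voltage_bump=0.10, clock_bump=0.10)
    print(f"{estimate:.1f} W")  # 299.5 W -- right at the 300 W ceiling
```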

Tesla does not consume 225W, but due to the PCIe specs it needs to be connected to either two 6-pin connectors or one 8-pin connector, which means it uses more than 150W and less than 225W. I remember them saying it would be only a slight increase over the C10x0 generation, which runs at around 160W. So Tesla won't reach more than 200W on average.
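As a side note on those connector limits, the standard PCIe budgets work out as below. This is just a sketch of the spec ceilings being argued about (75W slot, 75W per 6-pin, 150W per 8-pin); the board examples in the comments are illustrative, not official ratings.

```python
# Spec-level power ceilings for a PCIe board, given its aux connectors.
SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

def board_power_ceiling(six_pins: int = 0, eight_pins: int = 0) -> int:
    """Maximum spec-compliant board power for a given set of aux connectors."""
    return SLOT_W + six_pins * SIX_PIN_W + eight_pins * EIGHT_PIN_W

print(board_power_ceiling())                          # 75 W  (slot only)
print(board_power_ceiling(six_pins=1))                # 150 W (one 6-pin)
print(board_power_ceiling(six_pins=2))                # 225 W (two 6-pin, Tesla-style)
print(board_power_ceiling(eight_pins=1))              # 225 W (one 8-pin)
print(board_power_ceiling(six_pins=1, eight_pins=1))  # 300 W (the ceiling discussed above)
```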
 
And YES, I think Fermi will be faster than the RV870 (5870/5890); it damn well better be, it's nearly 50% bigger and arrives half a year after the HD 5000 series launched. There are many people who will buy it JUST because it's NV (and not ATI), and most, I dare say, will welcome the price war that will ensue. How much faster the GF100 is, and in which workloads, will eventually dictate its price. If it is "fast enough", the consumer will buy it.. if not, nV will have no option but to lower its price to meet what the market demands.
 
I'm in the camp that expects pretty high power consumption just based on the sheer size of the chip and TSMC's issues with 40nm. But I also expect it to be much faster than Cypress, for one simple reason: if it's not, then it's less efficient than GT200, and that's hard to assume given what we know of the architecture so far.

I expect it to be high, just not much more than a GTX 280. Maybe a 250W TDP? Typical power consumption would be lower, of course.

As for your efficiency remark, that's basically the point I made before. It's very funny to see how some just assume NVIDIA is incompetent and can't even make a new product that's better in performance, efficiency, etc. than even their last generation of products.

As for performance, I'm sticking to 35% faster than RV870 on average, with obviously some discrepancies, depending on the load.
 
Tesla does not consume 225W, but due to the PCIe specs it needs to be connected to either two 6-pin connectors or one 8-pin connector, which means it uses more than 150W and less than 225W. I remember them saying it would be only a slight increase over the C10x0 generation, which runs at around 160W. So Tesla won't reach more than 200W on average.

IIRC, Tesla ate about 170-190W depending on config.
 
Hmmm, can you tell us again how sound physics and logical reasoning lead you to believe that Fermi will only be 30-40% faster than GT200? That's the comparison people seem to be shying away from. See I would be disappointed if Fermi at 600/1200 couldn't approach 2xGTX 285. Guess my standards are too high?
What does your reply have to do with the quote it referenced: "There is flatly no logical, physics based cost/performance advantage Fermi is known to have over Cypress." ... ?
 
I expect it to be high, just not much more than a GTX 280. Maybe a 250W TDP? Typical power consumption would be lower, of course.

As for your efficiency remark, that's basically the point I made before. It's very funny to see how some just assume NVIDIA is incompetent and can't even make a new product that's better in performance, efficiency, etc. than even their last generation of products.

As for performance, I'm sticking to 35% faster than RV870 on average, with obviously some discrepancies, depending on the load.

I think either I'm misunderstanding what you are posting (someone feel free to correct me) or you don't understand the difference between power consumption and TDP (thermal design power); they are not one and the same.

And if we are going to make flat-out guesses, I'm going with 22%-26% on average, up to 10% higher in "more favorable" (nV-optimized/TWIMTBP) programs, and possibly only 5% faster in poorly optimized (ATI-favorable) games.. though I have a feeling that in equally optimized/neutral GPGPU apps that focus on SP-FP, the RV870 could be significantly faster.

How's that for "all over the board"?? lol
 
I think either I'm misunderstanding what you are posting (someone feel free to correct me) or you don't understand the difference between power consumption and TDP (thermal design power); they are not one and the same.

I never said they were the same, but TDP is often used as a reference for the maximum power a chip can consume/draw when under load, i.e. executing actual applications, not some power virus.
 
Tesla does not consume 225W, but due to the PCIe specs it needs to be connected to either two 6-pin connectors or one 8-pin connector, which means it uses more than 150W and less than 225W. I remember them saying it would be only a slight increase over the C10x0 generation, which runs at around 160W. So Tesla won't reach more than 200W on average.

Uh? Whatever spin they put on it, Tesla's max power is 225W, and that is what we're discussing here. At least, that's what I am talking about. If you're trying to say that in typical workloads Fermi won't draw 300W, then yes, obviously you're right.
 
I think either I'm misunderstanding what you are posting (someone feel free to correct me) or you don't understand the difference between power consumption and TDP (thermal design power); they are not one and the same.

And if we are going to make flat-out guesses, I'm going with 22%-26% on average, up to 10% higher in "more favorable" (nV-optimized/TWIMTBP) programs, and possibly only 5% faster in poorly optimized (ATI-favorable) games.. though I have a feeling that in equally optimized/neutral GPGPU apps that focus on SP-FP, the RV870 could be significantly faster.

How's that for "all over the board"?? lol

I don't think it matters how it compares to the 5870, because I simply don't think the 5870 will be anywhere near the price of the Fermi high end.

I would assume that the 5850 would drop to $220-250 with the 5870 in the low $300 range and a new part being introduced higher up the price spectrum.

Whatever it is, all consumers should benefit.
 
Spig is only the latest among the fanATIcs that included Hellbinder and Doomsomething or other and W ..forgot his name..). There are many who for one reason or another belong to a certain camp, be it their job, experience, etc., and though they exhibit bias they at least show the ability to understand the other side, and we often share a heated debate, which in itself is not bad, as it spurs fruitful discussion. /ENDRANT

BS.

My posts have been lengthy, physics-based, and well reasoned. Those arguing against my posts have not been.

If ONLY Nvidia was competitive with AMD/ATI this generation. My pension is small.

But I argue the FACTS as best I can ascertain them. Unlike what seems to be the case with the vast majority of my fellow humans, it's just not in me to deny factual physical reality for the comfort of a fuzzy-logic, head-in-the-sand cocoon. Physical reality is what it is.

Presently the FACTS are arrayed against Nvidia in a massive way, so that is what I argue.

I'm not always right but I am always honest.
 
I certainly hope that wasn't aimed in my direction..

No, not you specifically. Just saying that the vibe is there in general.

I honestly 100% believe that IF Fermi became to nVidia what the R600 was to ATI, in the long run it would only benefit nearly ALL of us; it would (hopefully) foster changes at nV that would only increase competition, just as the R600 did for ATI.

That's an interesting position but I disagree wholeheartedly. Nvidia is MUCH friendlier all around when things are going well. Their shenanigans seem to emerge only when backed into a corner. I also don't see how R600 increased competition; its primary effect apparently was to send Nvidia to sleep. :)

In any case, Nvidia being nice isn't of particular benefit to me so I'm looking forward to good things from Fermi. I want increased competition because both companies continue to execute effectively, not because one of them fails miserably now and then.

What does your reply have to do with the quote it referenced: "There is flatly no logical, physics based cost/performance advantage Fermi is known to have over Cypress." ... ?

Essentially if you think Fermi is lackluster vs Cypress it means you also think it's lackluster vs GT200. So I'm asking how logic and physics support that particular view. It actually should be much easier to make the latter comparison as you don't need to make as many unfounded assumptions about things that are very different between the architectures.
 
Why a need to isolate factors? The madshrimp's link averages out the gaming performance of the various cards from across an array of tech site reviews.
I would want to isolate factors because some options like higher clocks can raise peak power much faster than others.
Certain choices, like raising clocks and voltages to get higher ALU throughput, could raise peak power faster than performance is gained.
Mixing and matching options might yield a good average lead, but if someone wants more performance in GRID, they'd probably be out of luck.

If Nvidia's chip is 50% larger than AMD's and has 50% more transistors and Nvidia finds it necessary to clock it as high as they can, which seems a near certainty at this point, how can it be that the power usage isn't roughly 50% higher? On precisely the same process node from the same fabrication company?
It would depend on the types of transistors and their level of activity. The hot-clock domains will draw a lot of power, but if a lot of the additional transistors are in a lower clock domain, they won't contribute the same amount.

Without seeing the numbers for a physical implementation, I don't think we can state a number like 50% with such confidence. It could be worse. Maybe it could be better.

It all comes down to gaming performance. If Nvidia finds it necessary to push Fermi's clocks to compete with Cypress on cost/performance, then 50% more transistors on the same fabrication process = ~ 50% more power draw.
If the voltage is raised to push the clocks, it could easily be worse than 50%. On the other hand, this could be reduced somewhat by the lower non-hot clock domain.

Unless Nvidia has a design that provides vastly better performance/transistor than AMD. I have read of no such magic wand, have you?
I've not found the magic fountain of certainty to say flat-out that it would be 50%.
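To make the clock-domain point concrete, here is a rough sketch that treats total dynamic power as the sum of per-domain C·V²·f contributions. Every number in it is made up purely for illustration and is not taken from any real GT200/Fermi/Cypress data.

```python
# Hedged sketch of the clock-domain argument: adding transistors to the slower
# "core" domain costs less power than adding them to the hot (shader) clock domain.
from dataclasses import dataclass

@dataclass
class ClockDomain:
    rel_capacitance: float  # stand-in for active transistor count / switching capacitance
    clock_ghz: float
    voltage: float

    def dynamic_power(self) -> float:
        # Arbitrary units: C * V^2 * f
        return self.rel_capacitance * self.voltage ** 2 * self.clock_ghz

def chip_power(domains: list[ClockDomain]) -> float:
    return sum(d.dynamic_power() for d in domains)

# Baseline chip: equal "transistor budget" in the core and hot-clock domains.
baseline = [ClockDomain(1.0, 0.7, 1.0), ClockDomain(1.0, 1.4, 1.0)]
# 50% more transistors overall, but skewed toward the lower-clocked domain.
bigger = [ClockDomain(1.8, 0.7, 1.0), ClockDomain(1.2, 1.4, 1.0)]

print(chip_power(bigger) / chip_power(baseline))  # ~1.4x power for 1.5x transistors
```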
 
BS.

My posts have been lengthy, physics-based, and well reasoned. Those arguing against my posts have not been.

If ONLY Nvidia was competitive with AMD/ATI this generation. My pension is small.

But I argue the FACTS as best I can ascertain them. Unlike what seems to be the case with the vast majority of my fellow humans, I have never been able to deny factual physical reality for the comfort of a fuzzy-logic, head-in-the-sand cocoon.

Presently the FACTS are arrayed against Nvidia in a massive way, so that is what I argue.

I'm not always right but I am always honest.

Which is why you continue to post doom and despair for Nvidia with no hard facts about the chips themselves, their power draw, or their TDP to support what you're saying. It is all assumptions, good ones I'll give you that, but nothing more than pure assumptions just the same.
 
Essentially if you think Fermi is lackluster vs Cypress it means you also think it's lackluster vs GT200. So I'm asking how logic and physics support that particular view. It actually should be much easier to make the latter comparison as you don't need to make as many unfounded assumptions about things that are very different between the architectures.

Lord.

Let's try this ... can you give me one logical, physics-based cost/performance advantage Fermi is known to have over Cypress?
 
I just try to ignore those posts. I was only taking exception to your "everyone demanding benchmarks hates nvidia" comment. I'm dying for benchies and couldn't care less which vendor's board I'm using (I currently have two of each in my four home machines).
Which, btw, is exactly what I'm trying to explain...

I'm NOT waiting for Fermi-based boards, but I understand quite well that some people waiting for it WANT numbers.

Oh, wait... my brain just told me I would perhaps have been waiting for Fermi if there had been some promising numbers 2 months back, as much of an NV hater as I am.
 
Lord.

Let's try this ... can you give me one logical, physics-based cost/performance advantage Fermi is known to have over Cypress?

So now you want to argue cost/performance instead of power consumption/TDP? Come on, stick to one thing.

But for fun and giggles, how about this: should Fermi in fact be 5-10% faster on average than the 5970, that right there would be cost/performance leadership in my book, as it would take a single chip to best a dual-chip design that is probably more costly to produce than a single-Fermi-chip card.
 