NVIDIA GF100 & Friends speculation

Most gamers pay their own electricity bills, and going by volume most of them buy cards or computers with graphics cards that don't need a PCI-E power connector. I'm sure even enthusiasts are starting to notice an increase in their energy bills. Performance per watt is becoming an even more important metric than performance per mm^2.

You don't expect there to be good idle performance, i.e. GT200-like? Sure, power is going up; it has in all designs. But if gaming isn't 100% of your computer's usage, then you're actually likely using less power at idle now than you were with G92 designs.
 
What would Charlie Demerjian's ideal GPU architecture look like?

One that is fast enough to drive my screen with max settings, passively cooled, and cheap. Other than that, I don't care; to me, its use should be a black box.

That's an interesting statement. So having the same geometry performance in entry level and flagship parts is now supposed to be a good thing? I think it's going to be hard even for you to downplay what Nvidia seems to have done here.

Ya think I might know a bit more about it, and may in fact be working on an article right now, just taking a break as I eat? :) You might want to stay tuned to S|A for the next few hours. That said, no, I don't agree with it. I understand why, but in this case, I think it was their only option, and a bad one at that. The price paid for it is too high, and the demos they showed off to the press were very specific for a reason.

-Charlie
 
Most gamers pay their own electricity bills, and going by volume most of them buy cards or computers with graphics cards that don't need a PCI-E power connector. I'm sure even enthusiasts are starting to notice an increase in their energy bills. Performance per watt is becoming an even more important metric than performance per mm^2.
Was performance per mm^2 ever that important to end users?
225W TDP is still somewhat reasonable if the performance jump is big.
 
Maybe you could explain how their proposed change could affect anyone for the worse, then?

I was under the impression that being able to alleviate perceived bottlenecks is what it's all about - and somehow Nvidia seems to have allegedly identified trisetup/tessellation as their main bottleneck, whereas AMD went with doubling raster, FLOPS and texturing rates.

Working on just that now.

-Charlie
 
But to say that if you OC one card you can score X - what point does that make? If you OC one, you OC both, and right now reported scores for Fermi are based on stock clocks, not OCs.

It was a post or an article that said something like "22k Vantage performance equal to +20% over a HD 5870 @ 1GHz". That was my point; I don't know if I'm being clear. ;)

Not sure how that's relevant. Was just pointing out to PSU-failure that his assumption that the Extreme gap would be smaller than the Performance gap didn't hold true for the numbers given. Use your judgment as to the reliability of those numbers.

I was just saying that trying to get a ratio between RV870 and GF100 from those numbers alone is pointless. Even if it seems quite clear that GF100 > RV870, it's quite difficult to say by how much from what we know as of now... We just have to wait a few more hours. ;)
 
You don't expect there to be good idle performance, i.e. GT200-like? Sure, power is going up; it has in all designs. But if gaming isn't 100% of your computer's usage, then you're actually likely using less power at idle now than you were with G92 designs.

Funnily enough, I'm using a G92 in this computer, haha. What I'm saying is that the majority of the market does care that a card delivers great performance within a certain, smaller TDP envelope. Going by the pie charts released by AMD, those buyers are OEMs and your typical lower-level enthusiasts, and they make up the bulk of the market by volume.

I was responding to your comment about the 'majority' of the market. If you read that at face value, the 'majority' are OEMs and low-level consumer designs. Sure, there's a segment of people who buy high-end graphics cards and be damned about the power usage, but increasingly even these people are becoming aware of overall power draw, as seen by the rising popularity of Bronze, Silver, and Gold 80+ power supplies.
 
Most gamers pay their own electricity bills, and going by volume most of them buy cards or computers with graphics cards that don't need a PCI-E power connector. I'm sure even enthusiasts are starting to notice an increase in their energy bills. Performance per watt is becoming an even more important metric than performance per mm^2.

I'm very concerned about my power bill - but only after I've secured "enough" power to run my games the way I want them to run. (Dragon Age: Origins @ 2560x1600 with 4x SGSSAA and 16:1 AF, for example, right now.)

I save power by using a netbook for surfing the net and leaving my gaming rig completely switched off when I'm not playing.


Working on just that now.

-Charlie

Looking forward to the read!
 
One that is fast enough to drive my screen with max settings, passively cooled, and cheap. Other than that, I don't care; to me, its use should be a black box.

Oh I assumed you would have suggestions on how Nvidia's engineers could have spent their transistor budget more efficiently.

Ya think I might know a bit more about it, and may in fact be working on an article right now, just taking a break as I eat? :) You might want to stay tuned to S|A for the next few hours. That said, no, I don't agree with it. I understand why, but in this case, I think it was their only option, and a bad one at that. The price paid for it is too high, and the demos they showed off to the press were very specific for a reason.

Hmmm, that's not surprising, is it? AMD's demos also focused on tessellation. It's the new shit, right? :) I'm sure this article you're working on will be a classic; looking forward to an in-depth analysis of where Fermi went wrong...
 
I'm very concerned about my power bill - but only after I've secured "enough" power to run my games the way I want them to run. (Dragon Age: Origins @ 2560x1600 with 4x SGSSAA and 16:1 AF, for example, right now.)

I save power by using a netbook for surfing the net and leaving my gaming rig completely switched off when I'm not playing.

That's a lot of letters! Try saying the actual words behind the letters and it'd be quite a mouthful.

Anyway, you ARE in Germany, so I wouldn't be surprised to find you pay 2x more for your electricity than we enlightened New Zealanders with our lovely renewable electricity. So what does power efficiency mean in terms of computing over there? What's the attitude towards power-hungry graphics cards and TVs?
 
GF100 NDA breaking at ~ 9pm tonight. Chuck getting ready to post an anti-GF100 article at SA tonight. Coincidence? I think not. ;)

I've said it before, and I'll say it again: there is a big difference between people being positive and hopeful about their most favored/preferred vendor, as opposed to being negative and hateful about their least favored/preferred vendor. The latter type of attitude is unfortunate, as it creates conflict between fans of each respective vendor, and it takes focus away from what is truly important (i.e. details on and celebration of a brand new, radically improved product).
 
That's a lot of letters! Try saying the actual words behind the letters and it'd be quite a mouthful.

Anyway, you ARE in Germany, so I wouldn't be surprised to find you pay 2x more for your electricity than we enlightened New Zealanders with our lovely renewable electricity. So what does power efficiency mean in terms of computing over there? What's the attitude towards power-hungry graphics cards and TVs?
I'm afraid I don't quite understand?

Yes, electricity is quite expensive here in Germany, I think. It's about 20 Euro-cents per kWh (roughly 30 US-cents); our household pays roughly 400-500 Euros (about 650 US dollars, I think) per year. And yes, I like power-efficient PC equipment - but I have other priorities when selecting parts for my gaming rig, because it is only turned on for gaming, which is vastly dwarfed time-wise by surfing the internet or watching movies (at my girlfriend's PC). So priority number one is enough PC power (fill, tex, bandwidth, FLOPS etc. -> fps for my games). Since the gaming rig is only on a few (single-digit) hours a week, even a 100-watt difference would come to only about 10 Euros a year, which is only around 2-2.5 percent of my electricity bill.
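As a rough sanity check on that estimate, here is a minimal back-of-the-envelope sketch using the figures quoted above (the ~0.20 €/kWh price, a hypothetical 100-watt difference between cards, and an assumed nine gaming hours per week standing in for "single-digit hours"):

```python
# Back-of-the-envelope yearly cost of a 100 W difference in GPU power draw,
# using the assumed figures from the post above (not measured values).
price_per_kwh_eur = 0.20       # ~20 Euro-cents per kWh
extra_power_kw = 0.100         # hypothetical 100 W difference between cards
gaming_hours_per_week = 9      # assumed "single-digit hours a week"

extra_kwh_per_year = extra_power_kw * gaming_hours_per_week * 52
extra_cost_eur = extra_kwh_per_year * price_per_kwh_eur
print(f"{extra_kwh_per_year:.1f} kWh/year -> {extra_cost_eur:.2f} EUR/year")
# ~46.8 kWh/year -> ~9.36 EUR/year, i.e. roughly 2% of a 400-500 EUR annual bill
```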

I think I already said that (without the numbers) in my previous post.
 
And you wonder why I don't sign them..... :)

-Charlie

Judging by the past, it's so you can continue making false statements and outlandish predictions. I just love how you gloated about being correct on the HD5xxx shader count yet your own articles during the previous year had guesses ranging from 900 to 2000+.

To keep you honest in the future, I propose any and all 'grand claims' should have a penalty when they turn out completely false. Perhaps something along the lines of how Intel executives made you wear a bunny suit for losing a bet. ;)

[Attached image: Untitled.jpg]
 
Heh I love it. A couple of months ago, tessellation was the big thing for some people (here and in other forums). Now that leaks seem to suggest that Fermi's very good at it, the same people call it a gimmick. It never fails :)

I'm really curious to see the alleged decrease in TMUs vs GT200, because the leaked performance numbers suggest much higher performance vs GT200. The in-depth architecture analysis will be very interesting.
 
If it is 200% as fast as the GTX 285, well, that's what it should be. It remains baffling that ATI could only get 40% more performance out of what is basically a 2x 4890. They couldn't even manage 50-60%, let alone, say, 90%. It's not all that surprising this may come back to bite ATI. And don't blame the memory bandwidth; tests show the 5870 gains more from overclocking the core than the memory.

As for all this super-duper Fermi tessellation stuff, what good is it? We mostly only get console ports on PC today. Who will design a game that uses this?
 
Judging by the past, it's so you can continue making false statements and outlandish predictions. I just love how you gloated about being correct on the HD5xxx shader count yet your own articles during the previous year had guesses ranging from 900 to 2000+.

To keep you honest in the future, I propose any and all 'grand claims' should have a penalty when they turn out completely false. Perhaps something along the lines of how Intel executives made you wear a bunny suit for losing a bet. ;)

[Attached image: Untitled.jpg]
LOVE the bunny suit! You did forget to mention how he's been right about Fermi so far in spite of so many people trying to bash him along the way. ;)

Are you the same guy who started this site? If so, please update it...I'm dying to see what happens next. :yep2:
 
Heh I love it. A couple of months ago, tessellation was the big thing for some people (here and in other forums). Now that leaks seem to suggest that Fermi's very good at it, the same people call it a gimmick. It never fails :)

Yes, and remember what good old Mr. nvidia-hater (Charlie) wrote about the GT300:

Contrast that with the GT300 approach. There is no dedicated tesselator, and if you use that DX11 feature, it will take large amounts of shader time, used inefficiently as is the case with general purpose hardware.
http://www.theinquirer.net/inquirer/news/1137331/a-look-nvidia-gt300-architecture

:LOL:
 