GF100 evaluation thread

Whatddya think?

  • Yay! for both

    Votes: 13 6.5%
  • 480 roxxx, 470 is ok-ok

    Votes: 10 5.0%
  • Meh for both

    Votes: 98 49.2%
  • 480's ok, 470 suxx

    Votes: 20 10.1%
  • WTF for both

    Votes: 58 29.1%

  • Total voters
    199
  • Poll closed.
Is it just me, or did no one test the new AA modes? From what I checked quickly, everyone seemed to just quote nV's numbers without actually testing them.
Indeed.
Some sites showed one or two images of the CSAA modes, but none did an evaluation of CSAA with alpha-to-coverage transparencies.
Anandtech did test transparency SSAA with DX10, but that's not quite the same. ;)
 
Within a retail envelope it certainly is. At 825 MHz, even if the real TDP for the base card were only 250 W (and nvidia has admitted it is higher), that card would be pulling north of 300 W. If you believe the real TDP for the 480 is more likely in the 275-300 W range, then it's easily north of 350 W.

OTOH, ATI can increase their power by at LEAST 75 W and still be within the PCIe limit. In other words, ATI could, if they wanted to, likely release a ~950 MHz card right now with their current 5870 parts at roughly the same power envelope as the 480.

A 50 MHz overclock at HardOCP (850 to 900 MHz) only increased power consumption by 8 W, and that includes 50 MHz on the RAM.

If that holds steady, 1 GHz would only be 24 W more than the stock 5870: 383 W vs. 407 W, going by HardOCP's 383 W for the standard 5870 in their system.

That would still be a far cry from the 431 W of the GTX 285 in that review.

In their GTX 480 review the stock 5870 used only 367 W; adding 24 W to that gives 391 W. I don't know whether they are using a newer 5870 there versus the original launch unit. The GTX 480 sits at 480 W in their testing on that system.

I also don't know how linearly power usage scales; I'm trying to find more examples from someone who overclocked higher and got similar power figures.
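For what it's worth, here's a rough sketch (Python; the voltage figures and the f·V² model are assumptions on my part, not measurements) comparing the straight linear extrapolation above with what happens once a voltage bump gets factored in:

```python
# Rough sketch: linear extrapolation of system power vs. a simple
# dynamic-power model (P roughly proportional to f * V^2).
# The voltage numbers below are illustrative assumptions, not measurements.

def linear_extrapolation(base_system_w, delta_w_per_50mhz, overclock_mhz):
    """Extrapolate total system power assuming a fixed wattage per 50 MHz step."""
    return base_system_w + delta_w_per_50mhz * (overclock_mhz / 50)

def dynamic_power_scale(f_old, f_new, v_old, v_new):
    """Dynamic power scales roughly with frequency times voltage squared."""
    return (f_new / f_old) * (v_new / v_old) ** 2

# Numbers quoted in the thread: 383 W system power at stock 850 MHz,
# +8 W for a 50 MHz core/RAM bump.
print(linear_extrapolation(383, 8, 150))            # 850 -> 1000 MHz: 407 W

# If reaching 1 GHz needs a voltage bump (say 1.16 V -> 1.20 V, an assumption),
# the GPU's own dynamic power grows faster than the clock alone suggests:
print(dynamic_power_scale(850, 1000, 1.16, 1.20))   # roughly 1.26x
```

That only says the 8 W per 50 MHz figure probably holds while the voltage stays put; past that point the scaling stops being linear.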


I said it before, but 2 GB plus 1 to 1.1 GHz clock speeds and a jump in memory speed should put Cypress in the lead in most cases. It may not be a complete outright win, and in the future tessellation-heavy games may prove faster on the GTX 480 no matter what ATI does now. But the type of person who buys $500 video cards will most likely not care in a year or so, when those titles start coming out.
 
Cost of the 480 in the UK is approx £450. The 5870 is £330, so that's roughly 35% extra for 15% more performance. True, minimum framerates are better, but on the downside it uses more power and is hotter and noisier.

It's not a dog, but it's not a must-have uber card either. Will it have the lifespan of the G80? I guess that depends on how tessellation takes off, but I have my doubts.
 
A 50 MHz overclock at HardOCP (850 to 900 MHz) only increased power consumption by 8 W, and that includes 50 MHz on the RAM.

If that holds steady, 1 GHz would only be 24 W more than the stock 5870: 383 W vs. 407 W, going by HardOCP's 383 W for the standard 5870 in their system.
Isn't that a little overly optimistic? Power doesn't scale linearly, and you certainly need to pump up the voltage to get that thing stable. I wouldn't be surprised if a 1 GHz Cypress draws closer to 300 W in FurMark.
 
Isn't that a little overly optimistic? Power doesn't scale linearly, and you certainly need to pump up the voltage to get that thing stable. I wouldn't be surprised if a 1 GHz Cypress draws closer to 300 W in FurMark.

Depends on the card. Obviously binning would be important; however, there are a large number of cards doing 1 GHz on 1.2 V.

http://www.xtremesystems.org/forums/showthread.php?t=235693

HD5870 average OC: 1027 / 1286 @ 1.27v
 
One thing I liked:

This is something I have mentioned several times in architecture threads, and I was going to make my own thread about it last year.

It is a common misconception that registers per SM is the metric that matters for hiding latency. What you want to look at is registers per texture unit, because texture fetch latency is what you want to hide. If you double the ALUs but keep the TUs the same (or, in this case, reduce them), then you do not need to double the total register count to have the same latency-hiding ability. I wrote a program to simulate the way SIMD engines process wavefronts, and it confirms my conviction on the matter.

Latency hiding = # threads / tex throughput

(More specifically, the last term is average texture clause throughput. I know NVidia doesn't use clauses, but you can still group texture accesses together by dependency to create quasi-clauses and get a slightly underestimated value of latency hiding.)
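Not the actual program mentioned above, obviously, but a minimal sketch of the same idea: in a toy model where the texture unit accepts one request every few cycles and each request has a long latency, ALU utilization climbs with the number of in-flight wavefronts until it covers latency divided by the texture issue interval, then flattens out. All parameters (clause length, latency, issue interval) are made-up illustrative values.

```python
from collections import deque

# Toy model: each wavefront alternates a short ALU clause with a texture fetch.
# The texture unit accepts one new request every `tex_issue_interval` cycles,
# and each request takes `tex_latency` cycles to come back.
def alu_utilization(num_wavefronts, alu_cycles=4, tex_latency=200,
                    tex_issue_interval=8, sim_cycles=20000):
    ready = deque(range(num_wavefronts))  # wavefronts ready for ALU work
    pending = []                          # (completion_cycle, wavefront) fetches in flight
    next_tex_issue = 0
    busy = 0
    cycle = 0
    while cycle < sim_cycles:
        # Retire texture fetches that have completed by now.
        still_pending = []
        for done, wf in pending:
            if done <= cycle:
                ready.append(wf)
            else:
                still_pending.append((done, wf))
        pending = still_pending
        if ready:
            wf = ready.popleft()
            busy += alu_cycles            # execute the ALU clause
            cycle += alu_cycles
            # Issue the wavefront's next texture fetch, respecting TU throughput.
            issue = max(cycle, next_tex_issue)
            next_tex_issue = issue + tex_issue_interval
            pending.append((issue + tex_latency, wf))
        else:
            cycle += 1                    # ALUs idle, waiting on texture returns
    return busy / cycle

# Utilization saturates once the wavefront count covers roughly
# tex_latency / tex_issue_interval (~25 here); more in-flight wavefronts
# (i.e. more registers) past that point buy nothing.
for n in (2, 8, 16, 32, 64):
    print(n, round(alu_utilization(n), 2))
```

The point being: the number of wavefronts you need is set by texture latency and texture throughput, not by how many ALUs are attached.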

My own measurements support that. Using MDolenc's Fillrate Tester, all GeForces up to and including GT200 take a tiny perf hit in the longer 4-register test, whereas modern Radeons (at least since the HD 2000 series) and GF100 deliver the same perf on that particular shader.

But we had to ask anyway :)
 
When the number comes direct from the lead architect, you can forgive me for having confidence in it ;)
How is that even possible? Strikes me as very, very odd if so. Their PR-machine has been out of control for a long time, imho, but the lead architect giving faulty info? Hmmm.

Or, do you mean the figure you got is still the correct one?
 
That's clearly shopped! JHH would never wear a shirt that doesn't show his Pectoral Fortitude...
Had to look Pectoral up, and then :LOL: Spot on! Thanks for the laughs.

Edit: As far as GF100 goes, I think it's very promising, BUT the heat/power issue completely kills it for me. I have a 4890 right now (8800 GTX before that), and that card has been my new favourite since the 9700 Pro. I think the GTX 480 is priced a little bit on the high side, but I bought an X1800 XT for around $800 on launch day (Sweden, insane VAT et al.), so I'm almost immune. ;)
That turned out to be my worst card ever in terms of longevity; maybe the GF3 was on par in that regard.

As it stands I would have bought the 480 if the implementation had been better; now I'll wait and see. I (sadly) detest nVidia as a company, mostly due to JHH and their disgusting PR, but love their technology.

I would also like to thank all the great minds at B3D for the always valuable technology lessons one gets here, found nowhere else. :)
 
God, I didn't imagine 7 years ago that this tool would still be used on hardware that many generations further down the line. :)
 
The performance is certainly not bad. Everything else is.

I want my PC to perform as highly as possible, along with decent noise and temperatures.

I have two 5850s inside and everything works like a charm. I can barely hear the cards and my case is generally very quiet. I don't want to give that up.

So I voted meh for both.
 
Maybe someone can explain this to me from Charlie's article:
"This means Nvidia can't take the time to respin it. ATI will have another generation out before Nvidia can make the required full silicon (B1) respin, so there is no point in trying a respin. The die size can't be reduced much, if at all, without losing performance, so it will cost at least 2.5 times what ATI's parts cost for equivalent performance. Nvidia has to launch with what it has."
It, i.e. GF100, is faster (even Charlie admits that) by some varying percentage and at most 80 percent larger (600 vs. 334 mm²), yet it's supposed to cost 2.5 times as much? I fail to grasp the math here…

But then, it must be really hard for Charlie to admit that even an unmanufacturable salvage part with badly missed clock targets and disabled units is beating the HD 5870 perf-wise. So he calls it slow. I wonder what that makes the performance of an HD 5870 in his view? More than slow? Me, I'm quite happy with the perf of my 5870 and haven't even seen a need yet to try and overclock it.

I also wonder what he was expecting the voltages to be in Nvidia's planning. According to HT4U.net, they're quite low, especially under load: http://ht4u.net/reviews/2010/nvidia_geforce_gtx_480/index10.php
NVIDIA GeForce GTX 480
GPU: 50/100 MHz @ 1.004 V (idle), 700/1401 MHz @ 1.011 V (load)
RAM (GDDR5): 67.5 MHz @ 1.584 V (idle), 924 MHz @ 1.583 V (load)
 
Because 28nm is far away, wouldn't it be better to be working on a B1 revision instead?
Could they actually fix the power issues with a B1 revision?

The biggest concern for nVIDIA atm is power, not die size, so if they could somehow fix this then they might be able to hold out till Fermi2 @ 28nm.
Power? I think yields are more important to them.
 
It, i.e. GF100, is faster (even Charlie admits that) by some varying percentage and at most 80 percent larger (600 vs. 334 mm²), yet it's supposed to cost 2.5 times as much? I fail to grasp the math here…
I think the theory goes that with so few chips per wafer that work at all...
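Presumably the arithmetic is about cost per good die rather than raw area. A back-of-the-envelope sketch (Python; the wafer cost and defect density are pure assumptions, and it ignores salvage parts entirely), just to show how the per-good-die cost ratio can land well above the ~1.8x area ratio:

```python
import math

# Back-of-the-envelope: fewer candidate dies per wafer AND lower yield per
# candidate compound each other. Wafer cost and defect density are assumptions.

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Classic gross-die estimate with a correction for edge loss."""
    r = wafer_diameter_mm / 2
    return (math.pi * r**2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(die_area_mm2, wafer_cost_usd=5000, defects_per_cm2=0.3):
    """Poisson yield model: wafer cost divided by yielded (defect-free) dies."""
    yield_fraction = math.exp(-defects_per_cm2 * die_area_mm2 / 100)
    good_dies = dies_per_wafer(die_area_mm2) * yield_fraction
    return wafer_cost_usd / good_dies

small = cost_per_good_die(334)   # Cypress-sized die
big = cost_per_good_die(600)     # GF100-sized die (the 600 mm² figure from above)
print(round(small), round(big), round(big / small, 1))
# In this simple model the cost ratio always exceeds the 600/334 = 1.8x area
# ratio; harvested/salvage dies would pull it back down somewhat in reality.
```

Whether the real number is 2.5x or something else is anyone's guess, but that's the shape of the argument.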

I'm curious to know if GF100 is in continuous production at TSMC or if NVidia has simply decided to sell the chips it can scrounge off the risk production wafers it ordered last summer. In other words, once the halo cards are sold, will there be any more?

Jawed
 
Why work on a B-revision if you can work on a full-fledged refresh?
Well, given that they apparently weren't able to get high yields with all 512 shaders enabled, their first order of business (for the high-end) will likely be to get those yields up to par so that they can release a slightly-improved version.

I'm sure they'll have a refresh coming either by the end of this year or early next year.
 
Maybe someone can explain this to me from Charlie's article
There is no way to know the price of two different chips from their die sizes. You may guess, but those guesses will be as good as anyone's. It may cost 2.5 times more or it may cost the same. Die size isn't the only factor in chip pricing.

Well, given that they apparently weren't able to get high yields with all 512 shaders enabled, their first order of business (for the high-end) will likely be to get those yields up to par so that they can release a slightly-improved version.
No need to do a B1 for this. They'll just wait for TSMC to improve 40G further.

I'm sure they'll have a refresh coming either by the end of this year or early next year.
According to what I've heard it may be sooner than that.
 