NVIDIA GF100 & Friends speculation

In sum, I'd be willing to bet that this time around the benchmarks are real, and produced by nVidia. It is possible they may be skewed in one manner or another (using older drivers for ATI parts, or selectively omitting certain games that performed poorly on nVidia hardware).

One small mention on using the older drivers, though: at this point in time, nVidia likely stands more to gain from driver improvements than ATI, and so using older drivers might not be horribly misleading.
 
One thing is clear to me, though: make no mistake, ATI/AMD's next major GPU refresh will have a significantly larger die than RV770 and RV870. So, ironically, I believe ATI/AMD is moving in the direction of increasing die size on their high-end GPU. In turn, I believe NVIDIA's performance/area gap relative to ATI/AMD will get smaller.
Hmmm, I'd be rather surprised if they moved to a significantly larger die size on a refresh part. Usually refresh parts are smaller, not larger, due to process shrinks and little change in overall configuration.
 
As I said: We're considering it right now. But one - no, two, since we added the 4870 X2 as well - bad decisions from half a year back don't necessarily mean we have to stick with them until the end of time. After all, we also quit benchmarking Quake 3 without being called names. ;)

Oh, come on, you can't seriously think you'll get away with just saying it was a bad decision... plus a Q3 comparison on top of that. That would be one of the lamest excuses I've heard lately. You don't have to include the 4870 X2, which is EOL'd anyway, but people can still buy the 5970...
 
Hmmm, I'd be rather surprised if they moved to a significantly larger die size on a refresh part. Usually refresh parts are smaller, not larger, due to process shrinks and little change in overall configuration.

Sorry, I meant to say major architectural revision, not refresh.
 
Sorry, I meant to say major architectural revision, not refresh.

But unlike nVidia, ATI probably isn't compromising execution while the die size goes up - and by R900, RV770's strategy will have borne some fruit (RV870 didn't benefit in its design; that's why the team faced resistance to doing another small die).

Barring any major problems, they could always release the smaller-die counterparts first and take that market instead of waiting for the halo part. So why is the die size going up again?
 
I don't know what "compromising execution" means, but GF100 certainly seems to be a step in the right direction for NVIDIA, offering very high performance and image quality with past, present, and future workloads, offering a very strong graphics and compute feature set, and offering a better means of scaling to lower-end designs compared to GT200.

Anyone who has been involved in new product development knows that a radically new and innovative design typically takes much longer than originally expected. Some hiccups are even out of the designer's control. One has to do the best one can. It is not an easy juggling act. In this industry, it is quite common to see one company on top of the world one minute, and then knocked way off the throne the next. That's just how it goes when one has to predict trends many years in advance.

Why do I believe that the next major architecture for ATI/AMD will have an even larger die size than RV870? Because they can :D Will ATI/AMD encounter some serious obstacles when transitioning to a radically new architecture? I'm sure they will.
 
But unlike nVidia, ATI probably isn't compromising execution while the die size goes up-
Er, you can bet that nVidia had no intention whatsoever of compromising execution. And they had no way of knowing beforehand that execution would be compromised. It just turned out that the GF100 architecture was more difficult to design than they originally thought.
 
What NVIDIA should do to combat ATI/AMD's high end dual GPU card is to position the GTX 470 SLI against the HD 5970

That isn't exactly a new notion; it will happen for sure. Sites like THG already include CF/SLI scores in most GPU reviews. And almost as certain is that people will be bickering endlessly over whether it is fair or not. The dividing line is likely to roughly coincide with those who favour ATI or Nvidia ;)
 
I don't know what "compromising execution" means,
Fermi being 6 months slower than a retaped-out Cypress, and the midrange/mainstream part being even more delayed than Juniper, sounds like a very obvious indicator.

but GF100 certainly seems to be a step in the right direction for NVIDIA, offering very high performance and image quality with current and future workloads, offering a very strong graphics and compute feature set, and offering a better means of scaling to lower-end designs compared to GT200.
And you're arguing that RV8XX's major enhancements wrt RV770 don't at all meet the requirements for GPU compute? That's rather daft.

Save the ease of "scaling to the lower end" until you see any midrange cards. In fact, Juniper was the first one demonstrated, in mid-2009, first with Wolfenstein and later with Heaven. That's hard pre-launch proof.

And again, how is it much easier to scale down when each GPU design still needs its own unique logic parts, such as the memory controller (which, incidentally, GF100 might also have a problem with)?


Anyone who has been involved in new product development knows that a radically new and innovative design typicially takes much longer than originally expected.
Hmm, are GT21X new and innovative, or just delayed?

Some hiccups are even out of the designer's control. One has to do the best they can. It is not an easy juggling act. In this industry, it is quite common to see one company on top of the world one minute, and then knocked way off the throne the next minute. That's just how it goes when one has to predict trends many years in advance.

That's really just fluff. Is nVidia really doing the best they can?
 
One small mention on using the older drivers, though: at this point in time, nVidia likely stands more to gain from driver improvements than ATI, and so using older drivers might not be horribly misleading.

I don't really buy this line of thinking; after all, Nvidia's driver team has had months and months to optimise their drivers already. That there weren't any cards in the hands of the public during that time shouldn't really matter much.
 
Er, you can bet that nVidia had no intention whatsoever of compromising execution. And they had no way of knowing beforehand that execution would be compromised. It just turned out that the GF100 architecture was more difficult to design than they originally thought.

Only GF100? No derivatives of GT200, and no mid- and high-end GT21x, would indicate they've had a lot more execution problems. If there was any design issue, it's been bothering nV ever since GT200.
 
Er, you can bet that nVidia had no intention whatsoever of compromising execution. And they had no way of knowing beforehand that execution would be compromised. It just turned out that the GF100 architecture was more difficult to design than they originally thought.

They've been doing things akin to the G70 days: top end, then midrange 1-2 quarters later, then low end another 0.5-1 quarter after that. But in the current generation, where ATI has basically redefined the timeframe of an architecture generation's releases, they're not faring well enough IMO.

RV7X0 covered all markets in 3-4 months (depending on whether you count the 4830).
RV8X0 covered all markets in 6 months (if you just count Juniper? 3 weeks).
 
That's understandable.

I think it is only fair to add the 5970 to the benchmarks; after all, you guys did it at the 5870's release by adding the Nvidia GTX 295:

http://www.pcgameshardware.de/screenshots/original/2009/09/HD5870-CoD-WaW-1280.png
http://www.pcgameshardware.de/screenshots/original/2009/09/HD5870-FC2-1280.png

Not trying to be rude, I just would like to see it.

And there's no reason for it to not be included in the review. The GTX 480 is going to be NVIDIA's best card for a while. It only makes sense to see how it fares against ATI's best.
 
And there's no reason for it to not be included in the review. The GTX 480 is going to be NVIDIA's best card for a while. It only makes sense to see how it fares against ATI's best.

I couldn't care less about MGPUs... they don't scale alike in all games, they often need months before a "compatible" driver comes out, and even then that doesn't tell the whole story (e.g. microstutter)...
 
I couldn't care less about MGPUs... they don't scale alike in all games, they often need months before a "compatible" driver comes out, and even then that doesn't tell the whole story (e.g. microstutter)...

That's beside the point. The "it's one GPU vs two GPUs" objection is another issue, and it shouldn't compromise reviews. Reviews should include the best from both worlds, pitted against each other, so that the consumer can see whether the price difference between the cards being compared is justified.
 
And there's no reason for it to not be included in the review. The GTX 480 is going to be NVIDIA's best card for a while. It only makes sense to see how it fares against ATI's best.

Which happens to be the 5970... :)

I couldn't care less either way. When the 5870 launched, many Nvidia fans crowed about how the GTX 295 was faster than the 5870 and thus the 5870 was a failure. Now that the GTX 480 is launching, Nvidia fans don't want the 5970 benched. :p

You can probably flip flop that with ATI fans.

At the end of the day, everyone is going to end up comparing it to whatever card they want. Leaving off dual-GPU cards only serves to limit a review's usefulness to a consumer.

If the consumer doesn't care about dual GPU (like me), then they'll just ignore all dual GPU numbers. If they care about dual GPU, then it would be useful to them.

Regards,
SB
 
Which happens to be the 5970... :)

I couldn't care less either way. When the 5870 launched, many Nvidia fans crowed about how the GTX 295 was faster than the 5870 and thus the 5870 was a failure. Now that the GTX 480 is launching, Nvidia fans don't want the 5970 benched. :p

You can probably flip flop that with ATI fans.

At the end of the day, everyone is going to end up comparing it to whatever card they want. Leaving off dual-GPU cards only serves to limit a review's usefulness to a consumer.

If the consumer doesn't care about dual GPU (like me), then they'll just ignore all dual GPU numbers. If they care about dual GPU, then it would be useful to them.

Regards,
SB

So you wrote all that to agree with me? Mmkay :)
 
Bigger than Cypress.
Posted by Edison, 2010-3-25 17:05

GF104's die is slightly bigger than G92's, and also bigger than Cypress's.

Edison is the man who leaked last month that the GTX 480 would have 480 SPs.
 
At the end of the day, everyone is going to end up comparing it to whatever card they want. Leaving off dual-GPU cards only serves to limit a review's usefulness to a consumer.

Similarly, CF/SLI would make it more useful. And quad of each, if you really want everyone's best.

Me, I don't really care. I never had a VSA-100, Rage Fury MAXX, or GTX 295, and I haven't found a reason to consider them.
 
Why are people concerned about whether only one hardware site is including 5970 or not? I can assure you others will, so stop cluttering up the thread.
 