NVIDIA GF100 & Friends speculation

GF104's die is slightly bigger than G92's, and also bigger than Cypress's.

Edison is the man who leaked last month that the GTX 480 would have 480 SPs.

G92 was smaller than Cypress: 324 mm² vs. 334 mm². So if GF104 is only slightly bigger than G92, the difference between it and Cypress must really be small.
 
I don't really buy this line of thinking, after all Nvidia's driver team has had months and months to optimise their drivers already. That there weren't any cards in the hands of the public during that time shouldn't really matter much.

That's what I would think too.

I couldn't care less about MGPUs... they don't scale alike in all games, it often takes months before a "compatible" driver comes out, and even then the FPS doesn't tell the whole story (e.g. microstutter)...

It's all fine for you then, but there are people who do care (I'm not tempted by those cards either, fyi). I think that an honest and comprehensive review should include all relevant cards on the market. If you're inherently not interested in multigpu cards, you can just disregard that part of the chart.

I guess nV's puppet show begins (don't mean to offend anyone) :).

Why are people concerned about whether only one hardware site is including 5970 or not? I can assure you others will, so stop cluttering up the thread.

Yes, we probably should.
 

I don't care about either the 5970 or the GTX 295 (or ANY MGPU card); they are crude and not very efficient "solutions" for getting the performance crown while introducing issues, and like Carsten said, FPS on MGPUs doesn't tell the whole story.
 

Actually, if a certain die area gives you problems, then you could put down two smaller dies that have better yields and end up with higher performance. I think multi-GPUs are a good idea given the parallel nature of graphics.
I could imagine multi-die GPUs in the future, I mean 2 or more dies on a single GPU substrate. You would only need to create one bottom-line GPU and then just increase the number of dies. :p
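For what it's worth, here is a minimal sketch of why smaller dies yield better, assuming a simple Poisson defect model; the defect density and die areas are illustrative numbers, not real TSMC 40 nm figures:

```python
import math

# Toy Poisson defect model: the fraction of defect-free dies falls off
# exponentially with die area. D is an assumed defect density.
D = 0.003  # defects per mm^2 (illustrative assumption)

def die_yield(area_mm2):
    """Fraction of dies that come out defect-free under the Poisson model."""
    return math.exp(-D * area_mm2)

big, small = 530.0, 265.0  # one big die vs. two dies of half the area

# One monolithic die: a single defect scraps all 530 mm^2.
print(f"530 mm^2 monolithic die: {die_yield(big):.1%} yield")

# Two 265 mm^2 dies: each passes or fails on its own, and good dies can
# be paired with good dies from anywhere on the wafer, so the usable
# fraction of silicon is the per-die yield, not its square.
print(f"265 mm^2 die:            {die_yield(small):.1%} yield")
```

With those made-up numbers the small die yields roughly twice as well, which is the whole appeal of pairing two of them instead of building one monster chip.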
 

The problem is that multi-GPU cards of today do not take advantage of the "parallel nature of graphics" any more than single-GPU cards...
 

That GF104 die size is a bit puzzling... Even assuming 256 SPs and a 256-bit bus instead of the 192-bit one some of us were expecting, why is it so big? It doesn't seem very consistent with the 530 mm² of GF100. Or could it have something to do with NVIDIA trading area for yields, for instance doubling vias?
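To put rough numbers on that puzzlement, here is the back-of-envelope arithmetic, using only the figures quoted in this thread (the "slightly bigger than G92" report and GF100's 530 mm²):

```python
# Naively halving GF100's units should land well under G92's area, yet
# GF104 is reported to be slightly bigger than G92. All figures in mm^2.
gf100 = 530   # GF100 die area, as quoted above
g92 = 324     # G92 die area (65 nm), as quoted above

naive_half = gf100 / 2
print(f"naive half-GF100: {naive_half:.0f} mm^2")   # 265 mm^2
# If GF104 really is above 324 mm^2, roughly 60+ mm^2 needs explaining:
# non-scaling logic, a wider bus than expected, or area-for-yield padding.
print(f"unexplained area: {g92 - naive_half:.0f}+ mm^2")
```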
 

And why not? You can sometimes reach near 95% scaling. Maybe if the API were tile-based from the ground up and could distribute tiles to each GPU, you could even run an AMD + NVIDIA combo.
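As a purely hypothetical sketch of that tile-distribution idea (no API of the era actually worked this way, which is rather the point of the reply below), the scheme amounts to handing screen-space tiles out round-robin to whatever GPUs are present:

```python
from itertools import cycle

def assign_tiles(width, height, tile, gpus):
    """Hand out each (x, y) screen-space tile round-robin to a GPU."""
    schedule, ring = {}, cycle(gpus)
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            schedule[(x, y)] = next(ring)
    return schedule

# Even a mixed-vendor pair could in principle share the frame this way:
tiles = assign_tiles(1920, 1200, tile=256, gpus=["HD 5870", "GTX 480"])
per_gpu = {g: sum(1 for v in tiles.values() if v == g)
           for g in sorted(set(tiles.values()))}
print(per_gpu)  # 40 tiles total, an even 20/20 split between the two GPUs
```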
 

Probably because the increased latency and micro-stuttering make it inherently inferior, as does wasting half of the total installed memory and requiring driver profiles to get the full benefit in most applications. mGPU smells :D
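The micro-stuttering complaint comes down to simple arithmetic: AFR can report a healthy average FPS while frames actually arrive in uneven pairs. A quick sketch with made-up (but typical-looking) frame times:

```python
# 60 frames arriving in alternating short/long gaps, as AFR often delivers.
frame_times_ms = [10, 23] * 30

avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)
worst_gap_fps = 1000 / max(frame_times_ms)

print(f"reported average:   {avg_fps:.0f} fps")       # ~61 fps on paper
print(f"perceived fluidity: {worst_gap_fps:.0f} fps")  # closer to ~43 fps
```

The frame counter says 60+, but the eye sees the longer gaps, which is why raw FPS charts flatter MGPU setups.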
 
Which happens to be the 5970... :)

I couldn't care less either way. When the 5870 launched, many Nvidia fans crowed about how the GTX 295 was faster than the 5870 and thus the 5870 was a failure. Now that the GTX 480 is launching, Nvidia fans don't want the 5970 benched. :p

You can probably flip flop that with ATI fans.

At the end of the day, everyone is going to end up comparing it to whatever card they want. Leaving off dual-GPU cards only serves to limit a review's usefulness to a consumer.

If the consumer doesn't care about dual GPU (like me), then they'll just ignore all dual GPU numbers. If they care about dual GPU, then it would be useful to them.

Regards,
SB

Yeah, I'm having a hard time comprehending the fanboi train of thought... if it favors NV, it's OK to include X2 variants (see GTX 295 vs. HD 5800 series), but if it doesn't favor NV, it's not OK to include them (see 5970 vs. GTX 480)... ahh, OK.
 

Nah, it's been common since the 3870 X2 (which might have been bad, but not horrible per se, I suppose). Heaps upon heaps of lame-ass justification ;)

Oh, and microstutter never existed during the 7950/9800 GX2 eras. Truth! :D
 
Seems fishy

I don't know about that chart Annihilator posted. Looking through it, the GTX 480 beats the HD 5870 at every single resolution and AA/AF level except on two tests, and just barely at that:

Crysis Warhead, 1920x1200, 4xAA/16xAF - 5870 beats 480 by 0.4 fps
Left 4 Dead, 2560x1600, 8xAA/16xAF - 5870 beats 480 by 0.5 fps

Some game engines favor one architecture over another, so typically you'll see ATi cards faster on Game X and Nvidia cards faster on Game Y. Even if one card is overall 15-20% faster than the other, the slower card still ends up winning a handful of benchmarks on specific games. But not this time according to that spreadsheet. And that casts doubt on its validity to me.
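As a rough sanity check of that intuition, here is a small simulation; the +17% mean advantage and the per-game spread are assumed numbers, picked just to illustrate how unusual a clean sweep is:

```python
import random

random.seed(1)
games, trials = 13, 10_000  # ~13 tests, roughly like the leaked sheet
sweeps = 0

for _ in range(trials):
    # Per-game advantage of the faster card: mean +17%, with a 20-point
    # spread from engines favouring one architecture or the other.
    if all(random.gauss(17, 20) > 0 for _ in range(games)):
        sweeps += 1

print(f"faster card wins every single test in {sweeps / trials:.1%} of runs")
```

With that spread the faster card sweeps only a few percent of the time, so a leaked sheet where it never loses does look cherry-picked.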
 
Might not be perfectly cut down the middle. It could have the same 4 GPCs, a 256-bit bus, 32 ROPs, plus other unscaled bits.

GF104 won't compete with Cypress; its projected specs put it somewhere north of GT200 in the computational department and a bit down on bandwidth. Definitely something to beat up Juniper with.
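The "north of GT200, down on bandwidth" claim is easy to put rough numbers on. The GTX 280 figures below are known; the GF104 ones are pure speculation (256 cores at an assumed ~1.4 GHz hot clock, 256-bit GDDR5 at an assumed 3.6 Gbps):

```python
def gflops(cores, hot_clock_mhz, flops_per_clock):
    return cores * hot_clock_mhz * flops_per_clock / 1000

def bandwidth_gbs(bus_bits, data_rate_gbps):
    return bus_bits / 8 * data_rate_gbps

# GTX 280 (GT200): 240 SPs at 1296 MHz. MAD+MUL = 3 flops/clock on
# paper, but the extra MUL was rarely usable in practice.
gt200 = gflops(240, 1296, 3)
gt200_bw = bandwidth_gbs(512, 2.214)  # 512-bit GDDR3, 1107 MHz DDR

# Hypothetical GF104: Fermi cores issue one FMA = 2 flops/clock.
gf104 = gflops(256, 1400, 2)          # speculative clocks and core count
gf104_bw = bandwidth_gbs(256, 3.6)    # assumed 256-bit GDDR5

print(f"GTX 280: {gt200:.0f} GFLOPS (paper), {gt200_bw:.0f} GB/s")
print(f"GF104?:  {gf104:.0f} GFLOPS, {gf104_bw:.0f} GB/s")
```

Discounting GT200's rarely-used MUL (roughly 622 GFLOPS effective), the hypothetical part lands north of it computationally (~717 GFLOPS) while giving up some bandwidth (115 vs. 142 GB/s), which still leaves it plenty to beat up Juniper with.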

Well, I sure hope all the reviewers preach about the evils of paper launches, like Anandtech did for the 5830, which they weren't even sure was a paper launch (availability was off by maybe three days); they just suspected it might be.

I'm pretty sure AMD made reviewers quite well aware of all the ifs and buts for the GT300 reviews; let's see who picks up on that, because I smell a new soap opera coming (kind of like AMD's "Paper Dragon" presentation).
 

Somehow I see GF104 as the new 8800 GT instead of a replacement for the 9600 GT. When info about GF104 was first leaked, it was said it would be higher-end. Maybe it's a GF100 with some bits cut down, like only 320 SPs, no DP, no ECC, and a 256-bit memory bus? With higher clocks and not much less performance than GF100, but cheaper to make and sell?
 
That chart appears to be an nVidia marketing document (notice the "NVIDIA confidential"), which would seem to indicate that they would have dropped games that made their hardware look bad. If these benchmarks are correct, then we can expect that there are a few other games where the GF100 performs much more poorly. However, even knowing that, I'd say the benchmarks look quite good.

Edit: Here is a quick listing of the games used in the Anandtech, Tomshardware, and HardOCP reviews for the HD 5970, just for comparison's sake (in alphabetical order, asterisks by those that don't appear in the supposed earlier leak):
3DMark Vantage
Batman: Arkham Asylum
Battleforge
*Crysis
Crysis Warhead
*Dawn of War II
*DIRT 2
Far Cry 2
*Grand Theft Auto IV
HAWX
Left 4 Dead
*Need for Speed Shift
Resident Evil 5
STALKER Clear Sky
World in Conflict

Now, we don't know what the performance is like in these other games, but there is always the possibility that one or more of them were omitted due to poor performance. That said, high performance in the remaining titles still would mean good overall performance, which is precisely what nVidia really needs right now.
 
Somehow I see GF104 as the new 8800 GT instead of a replacement for the 9600 GT. When info about GF104 was leaked, it was said at first that it would be higher-end. Maybe it's a GF100 with some bits cut down, like DP, ECC, and a 256-bit bus? With more or less the same performance as GF100 but cheaper to make and sell?

With only 256 CUDA cores?
 