NVIDIA GF100 & Friends speculation

The 280 was released 4 months or so (?) before the 4870X2.

If you mean the 285, it was launched a week after the 295, so Nvidia already had the fastest-card crown back in some way.

The GTX 280 was released in mid-June; the 4870X2 came in mid-August. The GTX 295 was the CES product last year.
 
280 was June, 4870X2 was September/October, if I remember correctly.

EDIT: Yup, it was August. However, the point stands.
 
Yes, Nvidia will be releasing their flagship card and it won't be going straight to the top of most benchmarks. I cannot remember the last time that happened.

I'm just wondering what the impact of that will be. Will they try to avoid it by paper launching an X2 at the same time? Can they even get a presentable X2 card ready by then?
 
Are you sure Hemlock isn't faster in other games/benchmarks? Anyway, this isn't about transistors, it's about manufacturing cost vs. performance (two 300 mm² GPUs aren't more expensive than a single 450 mm² one).
Costs also include double the amount of video RAM, a PCI Express bridge chip, PCB cost and the contracts negotiated with TSMC.
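To make the cost argument concrete, here's a rough back-of-envelope sketch. All the inputs (wafer cost, defect density, the simple Poisson yield model) are made-up illustrative assumptions, not actual TSMC figures:

```python
import math

# Illustrative assumptions only: a 300 mm wafer, a flat per-wafer cost,
# and a simple Poisson yield model with a guessed defect density.
WAFER_DIAMETER_MM = 300
WAFER_COST = 5000        # assumed cost per processed wafer (made-up number)
DEFECT_DENSITY = 0.3     # assumed defects per cm^2 (made-up number)

def dies_per_wafer(die_area_mm2):
    """Crude estimate: wafer area over die area, minus a simple edge-loss term."""
    radius = WAFER_DIAMETER_MM / 2
    wafer_area = math.pi * radius ** 2
    edge_loss = math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

def yield_fraction(die_area_mm2):
    """Poisson yield model: exp(-defect_density * die_area)."""
    return math.exp(-DEFECT_DENSITY * die_area_mm2 / 100)  # /100: mm^2 -> cm^2

def cost_per_good_die(die_area_mm2):
    good_dies = dies_per_wafer(die_area_mm2) * yield_fraction(die_area_mm2)
    return WAFER_COST / good_dies

print(f"two 300 mm^2 dies: ~${2 * cost_per_good_die(300):.0f}")
print(f"one 450 mm^2 die:  ~${cost_per_good_die(450):.0f}")
```

With those made-up inputs the pair of smaller dies actually comes out cheaper, because yield falls off roughly exponentially with die area; the extra RAM, bridge chip and PCB are what eat into that advantage, as noted above.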
 
Nothing will happen. Really, you can't compare AFR multi-GPU systems with single GPUs. You need profiles, you get microstuttering, and you only have half of the memory effectively. I hope that even the US magazines will criticize single-card AFR systems.
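For anyone who hasn't seen the microstuttering argument spelled out, here's a tiny illustration. The frame times below are made up, purely to show why an average-FPS counter hides the problem:

```python
# Made-up frame intervals (ms): an AFR pair delivering frames in an
# alternating short/long rhythm versus a single GPU delivering the same
# number of frames evenly spaced over the same total time.
afr_gaps    = [8.0, 25.0] * 30   # 60 frames
single_gaps = [16.5] * 60        # 60 frames

def report(name, gaps):
    avg_fps = 1000 * len(gaps) / sum(gaps)
    print(f"{name}: {avg_fps:.0f} fps average, worst frame gap {max(gaps):.1f} ms")

report("AFR pair  ", afr_gaps)
report("single GPU", single_gaps)
# Both average out the same, but the AFR pair keeps hitting 25 ms gaps,
# which is what you actually see on screen.
```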
 
And you think Nvidia wasted a huge amount of area on something that would give them only 2% more performance?!

I don't get Dave's comment. So AMD designs architectures based on how they will perform on current apps? That's ironic considering they have had an unsupported tessellator in their chips for a while now. It's doubly ironic considering that measured gains in current apps on Cypress are far below the theoretical improvement over RV770.

Sure enough, improved geometry performance isn't going to shine in current apps because they weren't designed when that level of performance was available. I'm sure Dave's bluffing and I fully expect AMD's next architecture to step up the game in geometry processing as well. If not then I have to wonder if all these years of evangelizing tessellation were just for show.
 
But if GF100 is 280 W (a guess?), how can they make a dual card and still keep it within a 300 W TDP?

EDIT:

Unless they know Evergreen won't last long enough to be seriously limited in that regard (geometry performance).
 
But if GF100 is 280 W (a guess?), how can they make a dual card and still keep it within a 300 W TDP?

Question: how do you squeeze an elephant into a regular refrigerator?

Answer: you open the fridge, push the elephant inside and close the fridge.

Given that it's likely they will go for a dual-GPU card, I wouldn't be surprised if it's just a notch above Hemlock in terms of performance, just to win the power consum.....errrrr performance crown.
 
But that would mean virtually cutting the chips' performance in half to still stay under 300 W and get a slight advantage over Hemlock. I don't see the point in that, at least from a financial standpoint.
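Rough numbers on that, under the usual P ∝ f·V² rule of thumb and taking the 280 W figure guessed above at face value (so back-of-envelope only, not measured data):

```python
# Back-of-envelope only: assumes the guessed 280 W single-GPU figure and the
# usual dynamic-power rule of thumb P ~ f * V^2.
single_gpu_power = 280.0              # W, the guess from above
per_gpu_budget = 300.0 / 2 - 10.0     # W, leaving ~20 W for RAM/VRM/bridge
power_ratio = per_gpu_budget / single_gpu_power

# Bounding cases for how much clock survives at that power:
best  = power_ratio ** (1 / 3)   # voltage scales down with frequency: P ~ f^3
worst = power_ratio              # no voltage headroom at all:        P ~ f

print(f"per-GPU budget: {per_gpu_budget:.0f} W ({power_ratio:.0%} of 280 W)")
print(f"clock retained: {worst:.0%} to {best:.0%} depending on voltage headroom")
```

So it lands somewhere between a ~20% and a ~50% clock cut per GPU, plus whatever units get fused off, depending on how good the bins are.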
 
Why not? You're selling "shitty" bins for even better margins!

The real enthusiast (I shudder at using that word nowadays; it seems to imply more idiocy than anything else) would get two full/less-castrated GF100s instead, at a slightly higher margin, I'd guess.
 
We still don't have exact power consumption numbers, so let it be Nvidia's problem to get a dual Fermi done. They think they can do it, so they have their reasons to believe. :)
 
I still say screw PCI-SIG and their anal-retentive policies (AFAICS the bylaws don't allow them to put these kinds of restrictions on members, at least not as a qualification for the cross-licensing).
 
Why do people insist on comparing a dual-GPU card to a single-GPU card? I'm sorry, it may work on a dollar-for-dollar basis, but beyond that it holds no water. Most people with a decent IQ will compare it to Cypress, not Hemlock.
Christ.

________________________

I haven't seen anyone comment on hardware.fr's specs (based on a 725/700 MHz GPU and 1200 MHz memory).

                   5870    GF100
Mtriangles/s        850     2800
Gpixels/s          27.2     22.4
Gflops             2720     1433
Gtexels/s            68     44.8
Bandwidth (GB/s)    143      214

[note to self: lrn2 format]


We have summarized the GF100's specifications in order to compare them with other GPUs. We also calculated the maximum throughputs assuming frequencies of 725 MHz for the GPU, 1400 MHz for the compute units (and therefore 700 MHz for the setup engines and the texturing units), and 1200 MHz (2400 MHz in terms of data transfer) for the GDDR.
http://www.hardware.fr/articles/782-6/nvidia-geforce-gf100-revolution-geometrique.html

Do these numbers seem believable? While the triangle rate (and bandwidth) is awesome, doesn't this look a little unbalanced for most current games?
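For what it's worth, the GF100 column is just unit counts multiplied by hardware.fr's assumed clocks; a quick sketch of that arithmetic (the unit counts are the disclosed GF100 figures, the clocks are the article's guesses):

```python
# Sanity check of the GF100 column: each entry is a unit count times one of
# hardware.fr's assumed clocks (1400 MHz shader, 700 MHz setup/texturing).
setup_engines = 4      # raster/setup engines, 1 triangle per clock each
cuda_cores    = 512
tmus          = 64
hot_clock  = 1.4       # GHz, assumed shader clock
half_clock = 0.7       # GHz, assumed setup/texturing clock

print(f"{setup_engines * half_clock * 1000:.0f} Mtriangles/s")  # 2800
print(f"{cuda_cores * 2 * hot_clock:.1f} Gflops")               # 1433.6 (FMA = 2 flops)
print(f"{tmus * half_clock:.1f} Gtexels/s")                     # 44.8
```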
 
Raw numbers don't show you the efficiency of the architectures. And the Gpixels figure for GF100 is wrong; it should be 33.6 Gpixels.
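Presumably that's just the ROP count times the assumed clock:

```python
# GF100 has 48 ROPs; at the assumed ~700 MHz that gives:
print(f"{48 * 0.7:.1f} Gpixels/s")   # 33.6, versus the 22.4 in the table above
```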
 
The higher the tessellation factor, the more load on the setup units.
I think that needs verifying.

An increase in triangles definitely increases the DS workload. Can the DS keep up with the increase in triangles? Is the DS texture-fetch limited?

In other words, which takes longer: generating extra triangles in the TS, or interpolating their attributes in the DS?

Jawed
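One rough way to frame the TS-vs-DS question: for a uniformly tessellated triangle patch, the generated-triangle count grows roughly with the square of the tess factor, and the DS runs once per unique domain point (assuming good vertex reuse), so its workload grows nearly as fast. A purely illustrative sketch; the exact D3D11 patterns differ slightly between odd and even factors:

```python
# Purely illustrative: uniform integer tessellation of a triangle patch with
# all factors equal to N emits roughly N^2 triangles and about (N+1)(N+2)/2
# unique domain points, i.e. roughly one DS invocation per two triangles.
for n in (2, 4, 8, 16, 32, 64):
    triangles = n * n
    ds_invocations = (n + 1) * (n + 2) // 2
    print(f"tess factor {n:2d}: ~{triangles:4d} triangles, ~{ds_invocations:4d} DS invocations")
```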
 
Do these numbers seem believable? While the triangle rate (and bandwidth) is awesome, doesn't this look a little unbalanced for most current games?
I don't think so. Nvidia clearly tweaked the architecture up to favor setup rate and down on texture fill rate; they clearly saw that texel fill rate is not a limiting factor anymore.

Take a look at the HD 4890 vs the GTX 285: they have 34 vs 48 Gtexels/s respectively, and yet they are fairly close in performance despite the huge gap in fill rate.

I think the same applies here.
 