2008 with 4870x2 and GTX280.
The 280 was released four months or so before the 4870x2.
If you mean the 285, it was launched a week after the 295 so Nvidia already had the fastest crown back in some way.
Costs include double the amount of video RAM, a PCI-Express bridge chip, PCB cost and the contracts negotiated with TSMC.

Are you sure Hemlock isn't faster in other games/benchmarks? Anyway, this isn't about transistors, it's about manufacturing cost vs. performance (two 300mm² GPUs aren't more expensive than a single 450mm² one).
Yes, Nvidia will be releasing their flagship card and it won't be going straight to the top of most benchmarks. I cannot remember the last time that happened.
I'm just wondering what the impact of that will be. Will they try to avoid it by paper launching an X2 at the same time? Can they even get a presentable X2 card ready by then?
If they are launching it 1-2 months later, as they said, probably yes.
And you think Nvidia wasted huge area on something that would give them only 2% of performance?!
I don't get Dave's comment. So AMD designs architectures based on how they will perform on current apps? That's ironic considering they have had an unsupported tessellator in their chips for a while now. It's doubly ironic considering that measured gains in current apps on Cypress are far below the theoretical improvement over RV770.
Sure enough, improved geometry performance isn't going to shine in current apps because they weren't designed when that level of performance was available. I'm sure Dave's bluffing and I fully expect AMD's next architecture to step up the game in geometry processing as well. If not then I have to wonder if all these years of evangelizing tessellation were just for show.
But if GF100 is 280W (a guess?), how can they make a dual card and still keep it within a 300W TDP?
Question: how do you squeeze an elephant into a regular refrigerator?
Answer: you open the fridge, push the elephant inside and close the fridge.
Given that it's likely that they will go for a dual chip GPU I wouldn't be surprised if it's just a notch above Hemlock in terms of performance, just to win the power consum.....errrrr performance crown.
But that would mean virtually cutting the chips' performance in half to still make it under 300 and get a slight advantage over Hemlock. I don't see a point in that. At least from a financial stand point.
Christ. Why do people insist on comparing a dual-GPU card to a single-GPU card? I'm sorry, it may work on a $-to-$ basis, but beyond that it holds no water. Most people with a decent IQ will compare it to Cypress, not Hemlock.
http://www.hardware.fr/articles/782-6/nvidia-geforce-gf100-revolution-geometrique.html

"We have summarized the GF100's specifications to compare them with the other GPUs. We have also calculated the maximum throughputs assuming clocks of 725 MHz for the GPU, 1400 MHz for the compute units (and thus 700 MHz for the setup engines and the texturing units) and 1200 MHz (2400 MHz at the data-transfer level) for the GDDR."
Christ.
________________________
I haven't seen anyone comment on hardware.fr's specs. (based on a 725/700 gpu, 1200mhz mem)
                  5870    GF100
Mtriangles/s       850     2800
Gpixels/s         27.2     22.4
Gflops            2720     1433
Gtexels/s           68     44.8
Bandwidth (GB/s)   143      214
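For what it's worth, most of those peak figures fall straight out of the published unit counts and clocks. A quick sketch below reproduces them; the unit counts (1600 ALUs / 80 TMUs / 32 ROPs for the 5870, 512 ALUs / 64 TMUs / 4 setup engines for GF100) are the commonly reported ones and should be treated as assumptions, not confirmations:

```python
# Sketch: reproduce the hardware.fr-style peak numbers from assumed
# unit counts and clocks. All unit counts are rumored/reported values.

def gflops(alus, clock_ghz):
    return alus * 2 * clock_ghz  # MAD counts as 2 flops per clock

# Radeon HD 5870 @ 850 MHz: 1600 ALUs, 80 TMUs, 32 ROPs, 1 tri/clock
hd5870 = {
    "Mtriangles": 1 * 850,            # 1 triangle/clock * 850 MHz
    "Gpixels":    32 * 0.850,         # 32 ROPs * core clock
    "Gflops":     gflops(1600, 0.850),
    "Gtexels":    80 * 0.850,
}

# GF100 at the assumed 700 MHz half hot clock: 512 ALUs @ 1400 MHz,
# 64 TMUs, 4 raster/setup engines at 1 triangle/clock each
gf100 = {
    "Mtriangles": 4 * 700,            # 4 setup engines * 700 MHz
    "Gpixels":    32 * 0.700,         # 32 pixels/clock rasterized
    "Gflops":     gflops(512, 1.400),
    "Gtexels":    64 * 0.700,
}

for name, d in (("5870", hd5870), ("GF100", gf100)):
    print(name, d)
```

The bandwidth figures don't factor as cleanly (they depend on how you read the GDDR5 data-rate convention), so they're left out of the sketch.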
http://www.hardware.fr/articles/782-6/nvidia-geforce-gf100-revolution-geometrique.html
do these numbers seem believable? While the triangle rate is awesome (and bandwidth), doesn't this look a little unbalanced for most current games?
Raw numbers don't show you the efficiency of the architectures. And the Gpixels figure for GF100 is wrong: it should be 33.6 Gpixels.
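The 22.4 vs. 33.6 disagreement comes down to which limit you count. Assuming GF100 really has 48 ROPs but only four rasterizers at 8 pixels/clock each (both are reported, not confirmed, figures), the two candidate numbers work out as:

```python
clock_ghz = 0.700  # assumed GF100 graphics clock

rop_limit    = 48 * clock_ghz      # 48 ROPs * clock -> 33.6 Gpixels/s
raster_limit = 4 * 8 * clock_ghz   # 4 rasterizers * 8 px/clk -> 22.4 Gpixels/s

# Effective fill rate is bounded by whichever limit is lower.
print(rop_limit, raster_limit, min(rop_limit, raster_limit))
```

So both numbers are "right" for different definitions; the lower one is what bounds actual fill rate.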
I think that needs verifying. The higher the tessellation factor, the more load lands on the setup units.
I don't think so... Nvidia clearly tweaked the architecture to favor setup rate at the expense of texture fill rate; they clearly saw that texel fill rate is no longer a limiting factor.

                  5870    GF100
Mtriangles/s       850     2800
Gpixels/s         27.2     22.4
Gflops            2720     1433
Gtexels/s           68     44.8
Bandwidth (GB/s)   143      214

do these numbers seem believable? While the triangle rate is awesome (and bandwidth), doesn't this look a little unbalanced for most current games?
Nope; based on 700 MHz, Damien's number is right on the money.