D9P/G94: 9600 GT on 19th of February

About a couple of weeks ago there was a promotion going with an instant rebate on the VisionTek HD3870 512MB GDDR4; I believe it was $194.00. But I bought mine back in Nov 2007 for $215 :( :)

Edit: You still got a good price; you should be able to crank the GPU clock frequency up to 725MHz+ :)

Yeah I spent a bit extra to get this:
http://www.newegg.com/Product/Product.asp?Item=N82E16814241068

I like that copper cooler on there and, of course, plan to push the thing to the ragged edge of its clock capabilities. :)


Looks like Nvidia and ATI have given up on producing a single powerful GPU.

Edit: Have to wait for ATI R900

Yeah lol. I'm not too keen on the dual GPU boards myself. Not sure I can wait till 2015 though haha.
 
From what I can see, both IHVs need smaller manufacturing processes than those currently used by either of them in order to make a significant difference.
 
That either means

1) Each board is at least 30% faster than an Ultra, cuz sometimes SLI is only as fast as one of the cards (EQ2, for ex)
2) SLI is being perfected in a big way (doubtful)

If (1) is true, that's neat, cuz a single-board release could potentially be pushed even faster, like with the 7950GX2, thanks to the cooling options. The 7950GX2 was basically two 7900GTs stuck together.
 
Not sure I can wait till 2015 though haha.

I don't think it would be that late; my guess is 2010 :)

R700 year 2008 multi-core 55nm/45nm
R800 year 2009 multi-core 45nm
R900 year 2010 single core 32nm
Edit: I read somewhere that AMD is already working on the R900 project.
 
Is there a card that is two times faster than its predecessor 100% of the time ?
I'd love to hear about it...

No, but then I don't recall ever mentioning anything like that, so I'm not sure what your point is or was.
 
Hopefully this 30% comes out at playable settings and not at 1600x1200 with 8xAA/16xAF, 12fps vs. 15.6fps :smile:

Sure it will show up at playable settings; at extreme HD settings the 9800 GX2 will suffer from having only 512MB of usable VRAM, and maybe from limited bandwidth and ROP power.

Also interesting to note is the power usage THG.tw claims: 40A @ +12V. The 8800 Ultra demands 30A, so the 9800GX2 would need 120W more - TDP >250W? :oops:
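The arithmetic behind that "120W more" figure can be sketched out; a minimal check, assuming both the rumored 40A (THG.tw claim, not an official spec) and the 8800 Ultra's 30A refer to draw on the +12V rail:

```python
# Power delta implied by the rumored +12V current draws.
# The 40A and 30A figures are the thread's claims, not official specs.
RAIL_VOLTAGE = 12.0   # volts on the +12V rail

gx2_amps = 40.0       # claimed 9800 GX2 draw
ultra_amps = 30.0     # 8800 Ultra draw

# P = V * I, so the extra power is the current delta times the rail voltage.
delta_watts = (gx2_amps - ultra_amps) * RAIL_VOLTAGE
print(delta_watts)    # 120.0 -> matches the "120W more" in the post
```

Whether that delta lands the TDP above 250W depends on what baseline TDP you assume for the Ultra, which the post doesn't state.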
 
I don't think it would be that late; my guess is 2010 :)

R700 year 2008 multi-core 55nm/45nm
R800 year 2009 multi-core 45nm
R900 year 2010 single core 32nm
Edit: I read somewhere that AMD is already working on the R900 project.


Nope. No chance of a huge single-core R900 from what I can see.

Once they've gone to multi-core, that's where they'll stay. Increased parallelism is where we're heading, and once most of the bugs have been ironed out of Crossfire/SLI, that's what we'll see from both IHVs.
 
Is there a card that is two times faster than its predecessor 100% of the time ?
I'd love to hear about it...

I vaguely recall the R300 being faster in every aspect than the R200 and GeForce4 Ti 4600, every time FSAA was enabled.

The performance and quality increases offered by the R300 are still considered to be among the greatest in the history of 3D graphics.

Not sure it was 100% but close.
 
I vaguely recall the R300 being faster in every aspect than the R200 and GeForce4 Ti 4600, every time FSAA was enabled.

The performance and quality increases offered by the R300 are still considered to be among the greatest in the history of 3D graphics.

Not sure it was 100% but close.

If you're comparing the R200 with the R300 with FSAA enabled, then you're comparing apples with oranges (supersampling vs. multisampling). By the way, you get analogous performance increases if you compare a 7900GTX with an 8800GTX too; if you use AF with the relevant optimisations disabled, the picture gets even worse for the former.

Example:

http://www.computerbase.de/artikel/...ti_radeon_hd_3870_rv670/16/#abschnitt_f_e_a_r

At 2560x1600 with 4xAA the G80 is almost three times as fast as the G71.
 
If you're comparing the R200 with the R300 with FSAA enabled, then you're comparing apples with oranges (supersampling vs. multisampling). By the way, you get analogous performance increases if you compare a 7900GTX with an 8800GTX too; if you use AF with the relevant optimisations disabled, the picture gets even worse for the former.

Example:

http://www.computerbase.de/artikel/...ti_radeon_hd_3870_rv670/16/#abschnitt_f_e_a_r

At 2560x1600 with 4xAA the G80 is almost three times as fast as the G71.

Aye, but if I recall correctly the G71 also didn't have a large enough Z buffer (forget the actual name) to fully handle Z-culling/rejection at resolutions higher than 1920x1200. It suffered a far greater drop going from 1920x1200 to 2560x1600 than competing ATI products did, even in SLI, which is why Crossfire was so popular for 30" monitors at the time.

That would again exacerbate the gap between G71 and G80 at such a high resolution.

Regards,
SB
 
I'm fairly sure you'll find that for the majority of those titles NVIDIA has also been releasing beta drivers, just as we've been releasing "Hotfix" drivers; the timing may be a little different, but that's likely down to Q/A efforts, I suspect.

Timing may be a little different? Oh my, that is an interesting way of putting it. I'm sure the second place finisher of any horse race likes to point out that timing may be a little different. I'm not trying to pick on you in particular, Dave, but that comment had more spin than a yo-yo.
 
I'm specifically talking about a number of cases where we've posted a Hotfix the day or so after NVIDIA posted a Beta.
 
Aye, but if I recall correctly the G71 also didn't have a large enough Z buffer (forget the actual name) to fully handle Z-culling/rejection at resolutions higher than 1920x1200. It suffered a far greater drop going from 1920x1200 to 2560x1600 than competing ATI products did, even in SLI, which is why Crossfire was so popular for 30" monitors at the time.

That would again exacerbate the gap between G71 and G80 at such a high resolution.

Regards,
SB

No doubt about that; but then again, if you follow the logic of the original post I responded to and consider the architectural efficiency changes in R300 vs. NV2x, you'll come to similar conclusions as to why the former was faster by such a large factor with MSAA than the latter.
 
Strange how they got practically everything in the specifications department, but then forgot to tell us how many scalar processors this G94 core has... ;)
 