"We believe that shaders and clocks of the 55nm GT200 are definitely going to be higher than the 65nm one's, and the chip itself should be a bit cooler."

I'm still not sure whether increasing the clocks is enough to beat the R700 (or get close).
"It's doable. An ~30% shader clock bump would get the job done in shader-bound titles, and the same bump in core clock would do for pixel/texture/setup-bound apps."

There aren't many titles like this. Try to find a game where the 8800GT is 61% faster than the 9600GT.

Not an easy task, particularly in terms of power consumption. We can already see how the rather meager speed bump of G92b increased power consumption over G92, even though it went from 65nm to 55nm. We'd be lucky to see 20%, IMO.
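A quick first-order sketch of why even a modest clock bump is expensive power-wise. This uses the textbook CMOS dynamic-power model (P ~ C * V^2 * f), not anything NVIDIA has published; the 20% figure matches the estimate above, while the 5% voltage bump is purely an assumed illustration.

```python
# First-order CMOS dynamic power model: P ~ C * V^2 * f.
# Illustrative numbers only -- the 20% clock bump matches the "lucky
# to see 20%" estimate above; the 5% voltage bump is an assumption.
# Leakage (significant at 55nm) is ignored entirely.

def dynamic_power_scale(freq_scale: float, volt_scale: float) -> float:
    """Relative dynamic power after scaling frequency and voltage."""
    return freq_scale * volt_scale ** 2

same_volt = dynamic_power_scale(1.20, 1.00)  # clock bump alone: +20% power
with_bump = dynamic_power_scale(1.20, 1.05)  # plus 5% voltage: ~+32% power

print(f"+20% clock, same voltage: {same_volt:.2f}x dynamic power")
print(f"+20% clock, +5% voltage:  {with_bump:.2f}x dynamic power")
```

Since dynamic power scales linearly with frequency but quadratically with voltage, any clock bump that also needs extra voltage to stay stable eats into the headroom a 65nm-to-55nm shrink provides.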
Then you o/c the R700 and the GT200b dies.
"...but especially with 8xMSAA, they're gonna lose badly still."

8x MSAA is mostly irrelevant on all of NV's G80+ GPUs.
16x CSAA and 16xQ CSAA are the preferred speed and quality modes. 8x MSAA is like 16x CSAA in quality and like 16xQ CSAA in speed, so there's no reason to use it.
"Except 8xMSAA has noticeably better edge anti-aliasing, so no..."

A little better, not noticeably. And it sort of depends on the game, the scene, and your monitor.
And 16xQ has slightly better edge anti-aliasing than 8x at the same speed.
So it's the same either way -- 8x MSAA is pointless on G80+ GPUs anyway.
"I expect GT200b to increase speeds a little, but probably not more than 10-15% max."

I expect clocks around 750/1500 at minimum.

""Not noticeable" is a purely subjective phrase, and I most certainly do see the difference on my G92."

It's exactly the same with your "noticeable".
I often play the same games with 16xCS and 8xMS to try to see any difference.
It's there, yes, but it's not that big, and at times 16xCS tends to be even better than 8xMS.
So it's not as simple as "8xMS is better than 16xCS", because it's not.
They turn out to be more or less on par most of the time, especially if you're not actively hunting for a difference.

They are very close from the quality POV, but 8xMS is quite a bit slower.
Apparently, GT200b will remove the FP64 shaders in order to decrease die size and allow for higher clocks.
http://www.diskusjon.no/index.php?showtopic=873626&st=2140&p=11526561&#entry11526561
http://www.diskusjon.no/index.php?showtopic=873626&st=2160&p=11527080&#entry11527080
"I think it far more likely this is someone parroting speculation seen in this very thread as fact elsewhere."

Oh, he does not need to speculate.

The FP64 ALUs aren't taking up much space: GT200 has 30 SMs taking up ~25% of 576 mm2, while G92 has 16 SMs taking up ~23% of 328 mm2.
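A back-of-the-envelope check on those area figures. The die sizes and SM fractions come from the post above; attributing the whole per-SM delta to the FP64 ALU is an assumption, since the GT200 SM also differs from G92's in other ways (e.g. register file size).

```python
# Per-SM area from the figures quoted above (mm^2 die sizes, SM area
# as a fraction of the die). Attributing the per-SM difference to the
# FP64 ALU alone is an assumption, not a measurement.

gt200_die = 576.0  # mm^2, GT200 die size
g92_die = 328.0    # mm^2, G92 die size

gt200_sm_area = 0.25 * gt200_die / 30  # 30 SMs ~ 25% of the die
g92_sm_area = 0.23 * g92_die / 16      # 16 SMs ~ 23% of the die

per_sm_delta = gt200_sm_area - g92_sm_area  # extra area per GT200 SM
total_delta = per_sm_delta * 30             # across all 30 SMs
share = total_delta / gt200_die * 100       # as % of the whole die

print(f"GT200 SM: {gt200_sm_area:.2f} mm^2, G92 SM: {g92_sm_area:.2f} mm^2")
print(f"Extra SM area: {total_delta:.1f} mm^2 (~{share:.1f}% of the die)")
```

Under these assumptions the per-SM difference works out to well under 1% of the GT200 die, which supports the point that dropping the FP64 ALUs alone would not make GT200b "significantly smaller".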
The 55nm GT200b will not have the FP64 ALUs, as it will be a significantly less complex and smaller die.