Nvidia GT200b rumours and speculation thread

"We believe that Shaders and clock of 55nm GT200 are definitely going to be higher than the 65nm and the chip itself should be a bit cooler."
I'm still not sure whether increasing the clocks is enough to beat the R700 (or get close).
 
I'm still not sure whether increasing the clocks is enough to beat the R700 (or get close).

It's doable. An ~30% shader clock bump would get the job done in shader-bound titles, and the same bump in core clock would do for pixel/texture/setup-bound apps. Bandwidth is already where it needs to be. So we'd be talking about 780/1700 clocks to match or exceed R700 in most circumstances.
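Quick sanity check on that arithmetic (assuming the GTX 280's reference 602 MHz core / 1296 MHz shader clocks):

```python
# GTX 280 reference clocks (MHz); a ~30% bump lands right around 780/1700.
core, shader = 602, 1296
print(round(core * 1.30), round(shader * 1.30))  # -> 783 1685
```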
 
I'm still not sure whether increasing the clocks is enough to beat the R700 (or get close).

In my opinion it really depends on the game, as in some applications even a single HD 4870 already beats the GTX 280. With a, say, 30% clock bump on the shaders they might win back some of the benchmark bars from R700, but especially with 8xMSAA they're still gonna lose badly.
 
It's doable. An ~30% shader clock bump would get the job done in shader-bound titles,
There aren't many titles like this. Try to find a game where the 8800GT is 61% faster than the 9600GT.
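(For anyone wondering where the 61% comes from, it's the peak shader throughput ratio at reference clocks; the SP counts and shader clocks below are the stock specs as I remember them, so treat them as assumptions.)

```python
# 8800 GT: 112 SPs @ 1500 MHz shader clock; 9600 GT: 64 SPs @ 1625 MHz.
print((112 * 1500) / (64 * 1625))  # -> ~1.615, i.e. ~61% more peak shader throughput
```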

and the same bump in core clock would do for pixel/texture/setup-bound apps.
Not an easy task, particularly in terms of power consumption. We can already see how the rather meager speed bump of G92b increased power consumption over G92 even though it went from 65nm to 55nm. We'd be lucky to see 20%, IMO.
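A back-of-the-envelope dynamic power model (P ~ C*V^2*f) shows the problem. The voltage bump below is a made-up assumption, and treating the whole board TDP as dynamic core power overstates things (the 55nm shrink also claws some back), but it illustrates why 30% is a stretch:

```python
def scaled_power(p_base, f_ratio, v_ratio):
    # Dynamic power scales linearly with clock and quadratically with voltage.
    return p_base * f_ratio * v_ratio ** 2

tdp = 236.0  # W -- GTX 280's rated board power
# A 30% clock bump that also needs a hypothetical 10% voltage bump:
print(scaled_power(tdp, 1.30, 1.10))  # -> ~371 W, clearly not viable
```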
 
There aren't many titles like this. Try to find a game where the 8800GT is 61% faster than the 9600GT.

Well I did mention more than one bottleneck for this very reason, and you most certainly can be entirely shader-bound in a given frame, so your minimum FPS in this case would be higher.
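A toy model of what I mean (made-up frame times, and real GPU stages overlap rather than serialize, so this is only a sketch): frame time is set by the slowest stage, so a shader clock bump lifts exactly those worst-case frames where the shader stage is the limiter:

```python
def frame_time_ms(shader_ms, texture_ms, rop_ms):
    # Whichever stage is the bottleneck dominates the frame.
    return max(shader_ms, texture_ms, rop_ms)

before = frame_time_ms(40.0, 20.0, 15.0)        # a shader-bound worst-case frame
after = frame_time_ms(40.0 / 1.30, 20.0, 15.0)  # same frame with a +30% shader clock
print(1000 / before, 1000 / after)              # min FPS goes from 25 to ~32.5
```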

Not an easy task, particularly in terms of power consumption. We can already see how the rather meager speed bump of G92b increased power consumption over G92 even though it went from 65nm to 55nm. We'd be lucky to see 20%, IMO.

I never said it would be easy, but there are several AIB-O/C'd GTX 280 models with core clocks of 650MHz, so perhaps only AIB-O/C'd GT200b cards will be capable of surpassing R700 in the average case.
 
Then you o/c the R700 and the GT200b dies.

US

It would be perfectly fair to compare an AIB-O/C'd R700 to a GT200b, no argument here. We don't know yet how well R700's clocks will scale, though, so your conclusion cannot be proven until R700's release (perhaps a month or two later, when the AIB-O/C'd models come out).
 
but especially with 8xMSAA they're still gonna lose badly.
8x MSAA is mostly irrelevant on all of NV's G80+ GPUs.
16x CSAA and 16xQ CSAA are the preferred speed and quality modes. 8x MSAA is like 16x CSAA in quality and like 16xQ CSAA in speed, so there's no reason to use it.
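For reference, this is roughly how those modes break down per NVIDIA's CSAA docs; the color/Z samples are what cost fillrate and memory, while the extra coverage samples are nearly free:

```python
# NVIDIA G80+ AA modes as (color/Z samples, coverage samples).
aa_modes = {
    "4x MSAA":   (4, 4),
    "16x CSAA":  (4, 16),  # roughly 4x MSAA speed, 16 coverage samples
    "8x MSAA":   (8, 8),   # "8xQ" in the control panel
    "16xQ CSAA": (8, 16),  # roughly 8x MSAA speed, 16 coverage samples
}
```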
 
8x MSAA is mostly irrelevant on all of NV's G80+ GPUs.
16x CSAA and 16xQ CSAA are the preferred speed and quality modes. 8x MSAA is like 16x CSAA in quality and like 16xQ CSAA in speed, so there's no reason to use it.

Except 8xMSAA has noticeably better edge anti-aliasing, so no...

Now don't get me wrong, I use 16xCSAA whenever possible and really like it! I just wish it had better edge anti-aliasing, is all.
 
Except 8xMSAA has noticeably better edge anti-aliasing, so no...
A little better, not noticeably. And it sorta depends on the game, the scene, and your monitor.
And 16xQ has slightly better edge anti-aliasing than 8x at the same speed.
So it's the same either way -- 8x MSAA is pointless on G80+ GPUs anyway.
 
A little better, not noticeably. And it sorta depends on the game, the scene, and your monitor.
And 16xQ has slightly better edge anti-aliasing than 8x at the same speed.
So it's the same either way -- 8x MSAA is pointless on G80+ GPUs anyway.

So I guess what we should really be saying is that 8x MSAA is faster on R7xx GPUs, but you get better quality on NV GPUs (because you would use 16xQ with no performance penalty).

I expect GT200b to increase speeds a little, but probably not more than 10-15% max.

It will mostly be to reduce the cost, IMO. That's really where things need to improve for GT200; performance is good enough already. It's just that relative to R7xx it's too expensive (because it's too big).
 
A little better, not noticeably. And it sorta depends on the game, the scene, and your monitor.
And 16xQ has slightly better edge anti-aliasing than 8x at the same speed.
So it's the same either way -- 8x MSAA is pointless on G80+ GPUs anyway.

"not noticeable" is a purely subjective phrase, and I most certainly do see the difference on my G92.
 
I expect GT200b to increase speeds a little, but probably not more than 10-15% max.
I expect clocks around 750/1500 at minimum.

"not noticeable" is a purely subjective phrase, and I most certainly do see the difference on my G92.
It's exactly the same with your "noticeable".
I often play the same games with 16xCS and 8xMS to try to spot any difference.
It's there, yes, but it's not that big, and at times 16xCS tends to be even better than 8xMS.
So it's not as simple as "8xMS is better than 16xCS", 'cause it's not.
They turn out to be more or less on par most of the time, especially if you're not actively hunting for a difference.
They are very close from the quality POV, but 8xMS is quite a bit slower.
 
I expect clocks around 750/1500 at minimum.


It's exactly the same with your "noticeable".

Fair enough.

I often play the same games with 16xCS and 8xMS to try to spot any difference.
It's there, yes, but it's not that big, and at times 16xCS tends to be even better than 8xMS.
So it's not as simple as "8xMS is better than 16xCS", 'cause it's not.
They turn out to be more or less on par most of the time, especially if you're not actively hunting for a difference.
They are very close from the quality POV, but 8xMS is quite a bit slower.

Try it on a 42" plasma @ 1080p from no more than 8' and then tell me you don't see any aliasing ;)
 
Oh he does not need to speculate. ;)

The 55nm GT200b will not have the FP64 ALUs, as it will be a significantly less complex and smaller die.
The FP64 ALUs aren't taking up much space. GT200 has 30 SMs taking up ~25% of its 576 mm²; G92 has 16 SMs taking up ~23% of 328 mm².

I suppose that NVidia could have trimmed G92's ALUs, so a comparison like this isn't entirely accurate, but the FP64 unit can't be more than maybe 10% of the SM, as the SM also has 8 FP32 SPs, 8 quarter-speed SF/interpolator units, a 64 KB register file, 16 KB of shared memory, control logic, etc.

We're talking about 2-3% saved by removing the FP64 unit. What's more important is that it should let GT200's shader core reach G92b's clocks, assuming power isn't a problem.
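The arithmetic, reusing the figures above:

```python
sm_share = 0.25    # 30 SMs are ~25% of GT200's 576 mm^2
fp64_share = 0.10  # upper-bound guess for the FP64 unit's share of an SM
print(sm_share * fp64_share * 576)  # -> ~14.4 mm^2, i.e. ~2.5% of the die
```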
 