Nvidia GT200b rumours and speculation thread

If Nvidia are lucky, they might fit a very sparse, carefully selected Ub0r-edition ('return of the Ultra' anyone?) into the thermal budget allowed by the current cooling solution. But I really hope they don't resort to that kind of thing.
 
According to Fud it's going to end up faster than the R700:
Nvidia is currently finalizing the clocks on the chip and it is waiting to see the final performance of R700. Nvidia's chip is likely to be slightly faster than ATI's dual chip offering, but we suspect that the performance will be quite close.

Nvidia is certainly not giving up without a fight, as Nvidia's entire marketing was based on the fact that Nvidia was a performance leader. Nvidia simply cannot let the crown slip out of its hands and that is why Nvidia works so hard to maintain its performance dominance.
Source
 
According to Fud it's going to end up faster than the R700:

Source

/yawn, roll eyes, go back to bed

Seriously not gonna happen. I did some calcs on this in another thread and found any "Ultra" GT200b-based SKU would need to be clocked AT LEAST 30% higher (than GT200-based GTX 280) all-around to have a shot at matching R700, let alone surpassing it.
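For the curious, here's the back-of-envelope version of that calc as a minimal sketch (the ~30% gap is the assumption stated above, and real performance won't scale perfectly linearly with clocks):

```python
# Back-of-envelope: how much higher a GT200b "Ultra" would need to be
# clocked across the board to match R700, assuming performance scales
# roughly linearly with clock speed. The 1.30x gap is an assumed figure
# for illustration, not a benchmark result.
gtx280_relative_perf = 1.00   # baseline: GT200-based GTX 280
r700_relative_perf = 1.30     # assumption: R700 ~30% faster on average

required_bump = r700_relative_perf / gtx280_relative_perf
print(f"Needed all-around clock increase: {(required_bump - 1) * 100:.0f}%")
# -> Needed all-around clock increase: 30%
```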
 
Yeah, it's quite ridiculous to say the least. The only way this would make sense is if GT200b was more based on GT212 than GT200 (4SMs/cluster, GDDR5), and that's madness - in fact, even if it was true, assuming it'd be faster would still be quite crazy by itself! What next, DX10.1 support? ...
 
Optical shrinks tend to be more about die savings anyway, so if anything it was intended all along to save them money and increase margins.
 
NV will probably aim for a significant shader clock domain bump with the shrink -- an area where the current G200 mostly disappoints. And maybe this would spin up some TFLOPs PR quacking, albeit too late. ;)
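To put the TFLOPs quacking in numbers, here's a quick sketch of GT200's theoretical single-precision throughput and what a shader clock bump would buy (the bumped clock below is a hypothetical, not a leaked spec):

```python
# Theoretical single-precision throughput for GT200-class parts:
# each SP can dual-issue a MAD + MUL, i.e. 3 FLOPs per clock.
def gflops(sps, shader_clock_mhz):
    return sps * shader_clock_mhz * 3 / 1000.0

print(gflops(240, 1296))  # GTX 280 as shipped: ~933 GFLOPs
print(gflops(240, 1500))  # hypothetical 55nm bump (assumed clock): 1080 GFLOPs
```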
 
I agree with fellix. The shader clocks were a huge disappointment and probably one of the performance culprits of the G200 chips. So I think this is where nVIDIA could have focused a lot more than on any other aspect of the chip.

Unless it's a very, very large increase in clocks, I doubt it will perform on par with the R700, though. Then again, who knows :p
 
Maybe they're measuring in the ePeen benchmark of Futuremark's Vantage using PhysX drivers...
 
R700 scales 100% everywhere since when?
It will be AFR, so many people won't buy it simply because it's SLI-on-a-card.

Well these "many people" (you're talking about a small sub-set of an already tiny group, aka hardware enthusiasts) will be missing out if they choose to believe R700 has the same limitations as previous multi-GPU boards. It's already been demonstrated that 48x0 hardware in CFX does not suffer from micro-stuttering, so as long as you choose the proper settings to achieve a desirable frame rate, there's no reason not to consider R700. AFR input lag is meaningless if your minimum frame rate is high enough to negate it.

As for the 100% scaling, it's more like 70-80%, with a few cases of super-linear scaling (compared to 512MB 4870 cards) thanks to the additional memory on R700 boards.
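For clarity, "scaling" here is just the dual-GPU result divided by an ideal doubling of the single-GPU result; a minimal sketch with placeholder frame rates (not benchmark data):

```python
# Multi-GPU scaling efficiency: measured dual-GPU fps vs. an ideal 2x of
# single-GPU fps. The frame rates below are illustrative placeholders.
# Super-linear results (>100%) can appear when the dual board's larger
# framebuffer avoids the VRAM thrashing a single 512MB card suffers.
def scaling_efficiency(single_fps, dual_fps, gpus=2):
    return dual_fps / (gpus * single_fps)

print(f"{scaling_efficiency(40.0, 62.0):.0%}")  # typical case (assumed): 78%
print(f"{scaling_efficiency(18.0, 40.0):.0%}")  # super-linear (assumed): 111%
```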
 
It's already been demonstrated that 48x0 hardware in CFX does not suffer from micro-stuttering, ...
Well, to be exact, it's been indicated by choosing a setting that isn't prone to exhibiting micro-stuttering.

...so as long as you choose the proper settings to achieve a desirable frame rate, there's no reason not to consider R700.
That's true - but it's also true for all other SLI-on-a-stick solutions. You've just got a wider range of corresponding options on the HD4870 X2.
 
Well these "many people" (you're talking about a small sub-set of an already tiny group, aka hardware enthusiasts) will be missing out if they choose to believe R700 has the same limitations as previous multi-GPU boards. It's already been demonstrated that 48x0 hardware in CFX does not suffer from micro-stuttering,
When and where? From whom?
I don't see ANY review yet :p, just previews.
I've seen enough promises from ATi/AMD over the last few years; one successful chip is not enough to blindly believe anything they claim.
 
When and where? From whom?
I don't see ANY review yet :p, just previews.
I've seen enough promises from ATi/AMD over the last few years; one successful chip is not enough to blindly believe anything they claim.

Right, and those previews don't mean anything, eh? It's not like the sites that performed said previews have the cards on-hand and have benchmarked them or anything...

Oh, wait :rolleyes:

Usually performance INCREASES after previews (due to more mature drivers). If you want to be a skeptic, fine, but don't act like the info we have is meaningless because the card hasn't hit retail yet.
 
WTF? Perfectly smooth frame times? Sorry, I don't buy it for a second. Never since the introduction of AFR have I seen anything even remotely resembling that graph you linked. There is more going on here than meets the eye.
If you want to be a believer, fine, but don't just ignore any facts contradicting the faith... like evidence that the Earth is round ;)

R7xx is just way more powerful than R6xx; hence, in order to observe stutter one needs more demanding situations/game settings.
Did AMD claim ANY fundamental changes to how frames are rendered? No, AFAIK.
Chip-to-chip communication was increased 2-3x, yet even here people have calculated that the bandwidth is nowhere near what's needed to make a multi-GPU setup act as a monolithic one.
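Since this argument hinges on what "micro-stuttering" even is, here's a minimal sketch of how it's usually quantified from frame timestamps (the sample timings are invented for illustration):

```python
# AFR micro-stuttering shows up as alternating short/long frame times even
# when the average fps looks healthy. A crude detector: look at the swing
# between successive frame times. Timestamps below are invented sample data.
timestamps_ms = [0, 5, 38, 43, 76, 81, 114, 119]  # alternating ~5/33 ms gaps

frame_times = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
avg = sum(frame_times) / len(frame_times)
print("frame times (ms):", frame_times)     # [5, 33, 5, 33, 5, 33, 5]
print(f"average fps: {1000 / avg:.0f}")     # ~59 fps, looks fine on paper
print(f"adjacent swing: {max(frame_times) - min(frame_times)} ms")  # 28 ms
# A large swing despite a decent average is the micro-stuttering signature.
```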
 
Wait for the architecture details and more reviews before jumping to conclusions.

PCGH was the same website that started the entire micro-stuttering crap, but they also seem to have confused stuttering with micro-stuttering recently.
 
If you want to be a believer, fine, but don't just ignore any facts contradicting the faith... like evidence that the Earth is round ;)

Have you ever seen an FPS graph like that before? I sure haven't. Forgive me for being skeptical (funny you accuse me of blind faith here).

R7xx is just way more powerful than R6xx; hence, in order to observe stutter one needs more demanding situations/game settings.
Did AMD claim ANY fundamental changes to how frames are rendered? No, AFAIK.
Chip-to-chip communication was increased 2-3x, yet even here people have calculated that the bandwidth is nowhere near what's needed to make a multi-GPU setup act as a monolithic one.

CFX sideport. Re-architected CFX driver. Your statements completely ignore these realities. Also, micro-stuttering has been demonstrated to have been eliminated on non-X2 48x0 hardware (meaning you can take two separate 48x0 boards and still not have micro-stutter, blowing the doors off your argument).
 