Nvidia GT200b rumours and speculation thread

WTF? Perfect micro-stuttering? Sorry, I don't buy it for a second. Never since the introduction of AFR have I seen anything even remotely resembling that graph you linked. There is more going on here than meets the eye.

Well, I didn't do the test myself, but I've got no reason not to believe my colleague. But bear in mind that this was with a preview board and drivers.

I am not convinced, however, by Sampsa's finding at 60 fps, where micro-stuttering is less likely to occur. I am not convinced either that this issue will still be present when the X2 and its drivers are finally ready - but I am equally not ready to blindly believe anything's fine now just because someone said so. :)

CFX sideport. Re-architected CFX driver. Your statements completely ignore these realities. Also, micro-stuttering has been demonstrated to have been eliminated on non-X2 48x0 hardware (meaning you can take two separate 48x0 boards and still not have micro-stutter, blowing the doors off your argument).

Oh really? I didn't know you'd fall for promises given in a launch whitepaper. So far I have yet to see "fixed micro-stuttering" - regardless of which card(s).


edit:
To make myself clearer: I don't think that anything's set in stone just yet.
 
Well, I didn't do the test myself, but I've got no reason not to believe my colleague. But bear in mind that this was with a preview board and drivers.

I am not convinced, however, by Sampsa's finding at 60 fps, where micro-stuttering is less likely to occur. I am not convinced either that this issue will still be present when the X2 and its drivers are finally ready - but I am equally not ready to blindly believe anything's fine now just because someone said so. :)

This has nothing to do with blindly believing or disbelieving anyone, and absolutely everything to do with the fact that you and yours continue to demonstrate that you do not know the difference between stuttering (low fps) and micro-stutter (uneven frame distribution).
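For what it's worth, the distinction is easy to make concrete from a frametime log. A minimal sketch (Python, invented numbers, and the function name is just illustrative) that reports both the average fps and the frame-to-frame swing:

```python
# Hypothetical sketch: telling "low fps" (stutter) apart from "uneven
# frame pacing" (micro-stutter), given per-frame render times in ms
# (e.g. dumped from FRAPS). Numbers below are invented.

def analyse_frametimes(frametimes_ms):
    avg = sum(frametimes_ms) / len(frametimes_ms)
    avg_fps = 1000.0 / avg                       # plain stutter shows up here

    # micro-stutter shows up as big swings between successive frames,
    # even when the average frame rate looks fine
    deltas = [abs(b - a) for a, b in zip(frametimes_ms, frametimes_ms[1:])]
    avg_delta = sum(deltas) / len(deltas)
    return avg_fps, avg_delta

# Single GPU at a steady ~33 ms vs. AFR alternating ~15/50 ms:
# both average around 30 fps, but the second one feels far worse.
single = [33, 33, 34, 33, 33, 34]
afr = [15, 50, 16, 49, 15, 51]
print(analyse_frametimes(single))   # ~30 fps, delta under 1 ms
print(analyse_frametimes(afr))      # ~31 fps, delta around 34 ms
```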
 
Also, micro-stuttering has been demonstrated to have been eliminated on non-X2 48x0 hardware (meaning you can take two separate 48x0 boards and still not have micro-stutter, blowing the doors off your argument).
Can you prove that? Links please.
Because I've seen private reports (from people who post here from time to time) that the 4850 X2 does have micro-stutter, and hints that atm R700 is smoother than the X2, yet if one really tries to increase the load he'll see it again.
I'm not saying the issue will be present at every resolution... yet I'm still waiting for an explanation of how 10-20 GB/s is enough for smooth multi-GPU execution. Hell, just post in the thread about the future of MGPU that the problem with communication between cores is solved!
20 GB/s for 2 cores, 40 GB/s for 4 cores... what scaling!
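To put a rough number next to that worry, here is a back-of-envelope sketch of what merely shipping finished AFR frames between GPUs costs at a given resolution and frame rate; 2560x1600, 32-bit colour and 60 fps are assumptions for illustration, not measurements, and it says nothing about shared render targets:

```python
# Back-of-envelope only: bandwidth needed just to move completed AFR
# frames between the two GPUs, versus a rumoured ~20 GB/s link.
# Resolution, colour depth and fps are assumptions for illustration.

def frame_traffic_gbs(width, height, bytes_per_pixel, fps):
    return width * height * bytes_per_pixel * fps / 1e9

traffic = frame_traffic_gbs(2560, 1600, 4, 60)
print(f"Final-frame traffic at 2560x1600 @ 60 fps: {traffic:.2f} GB/s")  # ~0.98

# Finished frames alone are cheap; whether shared render targets and
# inter-frame dependencies also fit is the part this can't answer.
```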
 
Can you prove that? Links please.
Because I've seen private reports (from people who post here from time to time) that the 4850 X2 does have micro-stutter, and hints that atm R700 is smoother than the X2, yet if one really tries to increase the load he'll see it again.
I'm not saying the issue will be present at every resolution... yet I'm still waiting for an explanation of how 10-20 GB/s is enough for smooth multi-GPU execution. Hell, just post in the thread about the future of MGPU that the problem with communication between cores is solved!
20 GB/s for 2 cores, 40 GB/s for 4 cores... what scaling!

[Attached image: grid_graph2.png - frame-time graph]

There's one at least http://www.xtremesystems.org/forums/showthread.php?t=194808
 
This has nothing to do with blindly believing or disbelieving anyone, and absolutely everything to do with the fact that you and yours continue to demonstrate that you do not know the difference between stuttering (low fps) and micro-stutter (uneven frame distribution).
Oh, I do think we do - we even mention the difference in the text, which you choose to ignore.

"The video shows our benchmark scene in Crysis at 1,680 x 1,050 pixels (DX10, very high) without FSAA or AF. Crossfire is not working properly and therefore the framerate is low - let's call this macro stuttering. In addition to this macro stuttering there is an irregular frame distribution, known as - surprise - micro stuttering. Just compare the video to the insufficient performance of your single GPU graphics card, which will look smoother for sure. The reason for Crossfire to work improperly is a driver bug of course. The visible irregular distribution of displayed frames within the example might make it clear how annoying micro stuttering can be at low fps rates. "



So we obviously have to agree to disagree on this matter.

edit:
Please note the difference in the graphs - PCGH used full rendering times per frame, whereas Sampsa chose to display differences in rendering time between successive frames - which can make quite a difference to the careless eye.
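To make the difference between the two presentations concrete, here is the same invented frametime series charted both ways - absolute per-frame times (PCGH-style) and successive differences (Sampsa-style):

```python
# Same invented data, two presentations: absolute per-frame render times
# (what PCGH plotted) vs. differences between successive frames (what
# Sampsa plotted).

frametimes_ms = [15, 50, 16, 49, 15, 51]

absolute = frametimes_ms
differences = [b - a for a, b in zip(frametimes_ms, frametimes_ms[1:])]

print("absolute   :", absolute)      # [15, 50, 16, 49, 15, 51]
print("differences:", differences)   # [35, -34, 33, -34, 36]

# The absolute plot shows each frame's full cost; the difference plot
# oscillates around zero and only shows the swing, so a slow-but-smooth
# run and a fast-but-smooth run would both be a flat line on it.
```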
 
It's not input lag, or even micro-stuttering, that bothers me about SLI/Xfire, it's the hit-or-miss nature of it.

Sometimes it works, sometimes it doesn't. In some (very rare) situations it's even slower!

No way I want to be messing around with profiles or whatever else is needed to get some games working, especially when they are new. I would prefer reliable performance even if that performance is a little lower on average.

If the GT200b can match the price of R700 and be no more than 20% slower when Xfire is working, it would definitely be a winner for me.
 
If the GT200b can match the price of R700 and be no more than 20% slower when Xfire is working, it would definitely be a winner for me.
Unfortunately for Nvidia (and ATI in the last few generations), the niche of enthusiasts goes for the top dog. Niggles like Crossfire, SLI, power consumption, etc. just don't matter.
 
PCGH was the same website that started the entire microstutter crap, but they also seem to have confused stuttering with microstuttering recently.

No, actually, the "microstutter crap" was started on the 3dcenter.de forums by a guy who tried to find out how far he could push image quality settings in games and benchmark tests and still have ~30 FPS. He noticed that ~30 FPS on SLI was noticeably worse than 30 FPS with a single GPU, and he even figured out why. PCGH just picked up the story; they're AFAIK still the only significant publication that did so, while everyone else is still busy kissing red and green ass pretending that there is no problem.
 
This just proves that at this resolution/settings/game R7x0 is fast enough.
I'd expect that when the single chip has nearly 2.5 times the raw power and more memory per chip.
But what about increased settings in the games coming in fall 2008 or in 2009?
Who will guarantee that there will be no issues?
Of course I bet "enthusiasts" will go for FPS - remember "MegahUrtz sells"? Now we have "FPS sells"... for how long?

PS:
@ L233:
also at ixbt.com in Russian; not sure if these murmurs were published on their English site.
 
This just proves that at this resolution/settings/game R7x0 is fast enough.

Moreover, this might even indicate a possible CPU bottleneck, with the processor more and more becoming the limiting factor in distributing frames. Reason: in the linked thread, Sampsa also gave absolute fps numbers, and even though he stated (IIRC) that Crossfire was working well, there was no huge performance gain (~13%) to be observed north of high-60s/low-70s fps.
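One hedged way to quantify that hunch: plugging the ~13% gain into an Amdahl-style split suggests most of the frame time at those settings isn't GPU-scalable at all. This is only a rough estimate under the usual Amdahl assumptions:

```python
# Rough Amdahl-style estimate: what non-GPU-scalable share of frame time
# is consistent with only a ~13% gain from a second GPU? Pure
# illustration; real games don't split this cleanly.

def serial_fraction(speedup, n_units):
    # Amdahl's law: speedup = 1 / (s + (1 - s) / n), solved for s
    return (n_units / speedup - 1) / (n_units - 1)

s = serial_fraction(speedup=1.13, n_units=2)
print(f"Implied non-GPU-scalable share: {s:.0%}")   # roughly 77%

# If ~3/4 of the frame is CPU/driver-limited at those settings, even
# perfect AFR could never show a big gain there.
```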
 
Unfortunately for Nvidia (and ATI in the last few generations), the niche of enthusiasts goes for the top dog. Niggles like Crossfire, SLI, power consumption, etc. just don't matter.
Yeah. And for those, GT200b cards in Triple SLI should be quite a bit faster than R700 CFX.

And I can confirm that R700 is much better than R670 wrt micro-stuttering, but that's most probably because RV770 is so much faster than RV670 and because it has 2 gigs of RAM now, not because AMD has done something smart to the underlying AFR architecture. And that means that it'll stutter in future games -- the question is when (and if) these future games will appear. I think that for its lifespan R700 will be a very good solution -- not without its usual AFR troubles (in several games it's already slower than the GTX 280) but good overall.
 
Sorry to bring this up again, but being commented on in such an unfriendly way makes it easy.

--
Microstuttering NOT by PCGH...
http://www.computerbase.de/artikel/...hd_4870_x2/24/#abschnitt_problem_mikroruckler

Google-Translation:
http://translate.google.de/translat...roblem_mikroruckler&hl=de&ie=UTF8&sl=de&tl=en

Quote:
"it in the Radeon HD 4870 X2, in the frame Times regularly jump back and forth, making the game massively affected. All three test applications Mikroruckler visible, with Call of Juarez extreme case."
 
Quote:
"it in the Radeon HD 4870 X2, in the frame Times regularly jump back and forth, making the game massively affected. All three test applications Mikroruckler visible, with Call of Juarez extreme case."
Another reason for Nvidia to release the GT200b as soon as possible :). If they hurry they might spoil the X2 launch due to the awareness of microstuttering (at least reviewers now mention it).
 
FWIW, Jen-Hsun effectively confirmed a GT200b GX2 SKU last night:
http://seekingalpha.com/article/90644-nvidia-f2q09-qtr-end-7-27-08-earnings-call-transcript?page=-1 said:
So you just have to take it case by case and -- but we think our approach is the right approach. The best approach is to do both. If we could offer a single chip solution at 399, it certainly doesn’t preclude us from building a two-chip solution at something higher. So I think that having the right price, right product at each price point and the best-performing product at each price point is the most important thing.
Given that you can get a GTX 280 for $389 post-rebate already, it'd be surprising if the single-chip SKU it's based on wasn't nearly as fast as the GTX 280 imo...

I think the biggest question at this point is the GT200b's memory configuration. Is it still 512-bit GDDR3? Presumably it is, but that's not a given - remember GT200 was originally aimed at the timeframe where GDDR5 wasn't expected to be widely available, while GT200b was never going to have that same problem. At the same time, I doubt switching to GDDR5 would save them a lot of money or allow for noticeable performance improvements. In many ways, 256-bit GDDR5 would be quite disappointing... I'd love to see 384-bit with a 320-bit SKU, though.
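For context, the raw numbers behind that comparison are just bus width times effective data rate; the clocks below are period-typical illustrations, not leaked GT200b specs:

```python
# Peak bandwidth is just bus width (bits) x effective data rate (GT/s) / 8.
# The data rates are period-typical illustrations, not GT200b leaks.

def bandwidth_gbs(bus_width_bits, data_rate_gtps):
    return bus_width_bits * data_rate_gtps / 8

configs = [
    ("512-bit GDDR3 @ 2.2 GT/s (GTX 280-like)", 512, 2.2),
    ("256-bit GDDR5 @ 3.6 GT/s (HD 4870-like)", 256, 3.6),
    ("384-bit GDDR5 @ 3.6 GT/s (hypothetical)", 384, 3.6),
]
for name, width, rate in configs:
    print(f"{name}: {bandwidth_gbs(width, rate):.0f} GB/s")
# -> ~141, ~115 and ~173 GB/s respectively
```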
 
Well, he said the same thing before, so he's not really confirming anything; he's just saying that it's not single-chip vs multi-chip, since they're doing both and are going to do a GX2 card again eventually -- but not necessarily on GT200b.

If they do make a GX2 on GT200b then it'll be a very expensive and hot card with very high power requirements. Right now I'm thinking that the next GX2 will come with the transition of GT200 to 40nm, along with a 256-bit GDDR5 memory interface (which places this GX2 in next spring at the earliest). 55nm GT200 will probably be too hot and expensive for a GX2 solution. Plus it'll probably still use 512-bit GDDR3.

I'm placing my bets on a 384-bit GDDR5 DX11 chip on 40nm, though =)
 
Another reason for Nvidia to release the GT200b as soon as possible :). If they hurry they might spoil the X2 launch due to the awareness of microstuttering (at least reviewers now mention it).
I don't see any reason why a possible GT200b should not suffer from the same problem, as it is generally AFR-related.
 
FWIW, Jen-Hsun effectively confirmed a GT200b GX2 SKU last night:
Given that you can get a GTX 280 for $389 post-rebate already, it'd be surprising if the single-chip SKU it's based on wasn't nearly as fast as the GTX 280 imo...

I think the biggest question at this point is the GT200b's memory configuration. Is it still 512-bit GDDR3? Presumably it is, but that's not a given - remember GT200 was originally aimed at the timeframe where GDDR5 wasn't expected to be widely available, while GT200b was never going to have that same problem. At the same time, I doubt switching to GDDR5 would save them a lot of money or allow for noticeable performance improvements. In many ways, 256-bit GDDR5 would be quite disappointing... I'd love to see 384-bit with a 320-bit SKU, though.
I don't see how that's confirming anything. Given it's achievable, Nvidia will definitely do it. I don't think anyone ever doubted them on that.

It was very, very funny to see Jen-Hsun pimping CUDA at every opportunity he could. Ya know, we just can't keep focusing on offering the best performance, we have to deliver other things as well.
 
Seems an Nvidia spokesman said that the 4870 X2 vs GTX 280 comparison is not fair because of the two GPUs on the X2, and that Nvidia will launch a G200(b) GX2 too if the market demands it - one which will offer higher image quality and better multi-GPU technology.
;)

edit:
Ouch, Arun already posted it, and it's from Jensen himself...

I think the biggest question at this point is the GT200b's memory configuration. Is it still 512-bit GDDR3? Presumably it is, but that's not a given - remember GT200 was originally aimed at the timeframe where GDDR5 wasn't expected to be widely available, while GT200b was never going to have that same problem. At the same time, I doubt switching to GDDR5 would save them a lot of money or allow for noticeable performance improvements. In many ways, 256-bit GDDR5 would be quite disappointing... I'd love to see 384-bit with a 320-bit SKU, though.
But the problem with reducing the memory controller width is that the ROP count gets reduced along with it, and in some cases (8xAA) the ROPs are the weak point of GT200.
In future GT200 derivatives I think they will go to 8 ROPs per 64-bit partition.
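To make the coupling explicit: the ROPs hang off the 64-bit memory partitions, so narrowing the bus prunes ROPs with it. A toy sketch (4 ROPs per partition is GT200's current ratio; 8 per partition is the speculation above):

```python
# ROP count if ROPs stay tied to 64-bit memory partitions, as on GT200.
# The 8-per-partition case is the speculation above, not a known spec.

def rop_count(bus_width_bits, rops_per_partition):
    return (bus_width_bits // 64) * rops_per_partition

print(rop_count(512, 4))   # 32 - GT200 today
print(rop_count(384, 4))   # 24 - a plain 384-bit cut-down loses ROPs
print(rop_count(384, 8))   # 48 - the "8 ROPs per 64-bit" idea
```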
 
Seems an Nvidia spokesman said that the 4870 X2 vs GTX 280 comparison is not fair because of the two GPUs on the X2, and that Nvidia will launch a G200(b) GX2 too if the market demands it - one which will offer higher image quality and better multi-GPU technology.
;)

Valid point, but it seems they also have power consumption and heat under control despite slapping two chips on the same card.
 