NVIDIA GT200 Rumours & Speculation Thread

thanks for answering and not deleting my question (i have a bad habit of going off topic). i'm really interested in what the GT200 will offer price/performance-wise

world domination for nVidia

.. seriously .. in the Discrete Graphics add-in market; they want it all!

[^my honest opinion^]

=>apoppin: I still don't think there will be a dual-chip GT200, even if the chip were shrunk. The architecture was modified to better cope with current and future games, but it still supports only DX10. So far, nVidia is doing a good job downplaying DX10.1; nevertheless, for BFUs 10.1 > 10.0. Now, consider that 45/40nm manufacturing won't be ready until Q1'09. By that time, Microsoft will probably have launched DX11, and it's not like nVidia to lag behind when a major DX version is available. Second, to allow for dual-chip cards, the chip would have to be shrunk to a size that would not allow 512-bit memory. Not a problem in 2009, since there will be plenty of GDDR5 to go around, but oops, GT200 doesn't support GDDR5 or even GDDR4. That would mean having to design a new memory controller, but I think nVidia will launch a completely new generation of GPUs instead. And make a lot of noise about DX11.

=>CJ: Since you already outed it, I can confirm that as well. (I also left a hint in the above text that somebody here could be able to see through, since I can't talk).

i do think so; but only if it is necessary. We know nVidia's preference is to have a SINGLE GPU - ideally. My own speculation is that it is held in "reserve" - IF AMD somehow comes up with an X2 or X4 that is a good performer compared to GT-200

i did not say it is a "done deal" .. remember r700 will also go thru a shrink...

Do we know for sure that GT200 is not DX10.1?

and i think DX11 is coming after Vista7 .. in 2010; you may well be right about the next architecture after Tesla 2.0 GPU being DX11.
[my guess]

and Yes, i AM "listening" - what i am doing here is called "active questioning"
- reporters do this - after all, "we" are not the experts, you guys are. And you can also educate me
[believe-it-or-not]
 
I think most believe the GT200 won't support DX10.1, since they downplayed it.
If the GT200 does, want to take bets on how fast a "fixed" DX10.1 patch for AC is released? :devilish:
 
I think most believe the GT200 won't support DX10.1, since they downplayed it.
If the GT200 does, want to take bets on how fast a "fixed" DX10.1 patch for AC is released? :devilish:

Don't forget the FM boys and Vantage .. i think they really have a 10.1 version about to be released .. in a month or so
[a guess, why not?]

if so - if GT200 supports it, DX10.1 will become important .. suddenly

. . . and are you *sure* DX10.1 is not included in the Tesla architecture? nVidia is known to misdirect.
.. i must be wrong NP ... and i can live without the .1 - but for 2 whole years? AMD has it now =P
[if so, i would secretly hate them for not including it]. Everything else seems to be right on target. What is the reason for leaving DX10.1 out? It gives their competition something to talk about - worthy or not.
 
i do think so; but only if it is necessary. We know nVidia's preference is to have a SINGLE GPU - ideally. My own speculation is that it is held in "reserve" - IF AMD somehow comes up with an X2 or X4 that is a good performer compared to GT-200
GT200 is reported to have a TDP of over 200 watts. Suppose we could disable a few blocks and lower the clocks and get it down to ~180 W. That's still too much for a dual-GPU card. The same goes for R700 and a quad setup, not to mention the drivers for QuadCF are in a horrible state. Neither ATi nor nVidia can shrink the chips, since the 45nm manufacturing process won't be ready until Q4'08, and before it is mature enough to be used for chips as complex as the GT200, it will take at least another half a year.
i did not say it is a "done deal" .. remember r700 will also go thru a shrink...
It will? How do you know that?
Do we know for sure that GT200 is not DX10.1?
Well there are some people here who are under NDA and know the specs, so they would certainly know. I'm under NDA myself, so I can't tell you. Take it as my educated guess.
and i think DX11 is coming after Vista7 .. in 2010; you may well be right about the next architecture after Tesla 2.0 GPU being DX11.
What's Tesla 2.0?
 
Neither ATi nor nVidia can shrink the chips, since the 45nm manufacturing process won't be ready until Q4'08, and before it is mature enough to be used for chips as complex as the GT200, it will take at least another half a year.

What exactly is wrong with 55nm for GT200? I'm not saying at all that NV is going to go for a dual GPU thingy based on GT200 any time soon, yet isn't the statement that NVIDIA can't shrink the chips rather wrong after all?

Just for theory's sake, if they were to consider such a scenario for 2009, it would for one thing mean that it wouldn't carry a shitload of ROPs (not a problem, since GDDR5 availability will be more than fine) and that they would glue two performance SKUs together for such a project.

Personally I still think that, up to now, anything GX2/X2 is utter nonsense; AMD seems to be working on interesting ideas regarding the memory problem, but the remaining amount of redundancy and other problems still don't sit well with me.
 
What exactly is wrong with 55nm for GT200? I'm not saying at all that NV is going to go for a dual GPU thingy based on GT200 any time soon, yet isn't the statement that NVIDIA can't shrink the chips rather wrong after all?
What's wrong with 55nm? That it doesn't allow a dual-chip design with large chips like the GT200.
Now, consider that 45/40nm manufacturing won't be ready until Q1'09. By that time, Microsoft will probably have launched DX11, and it's not like nVidia to lag behind when a major DX version is available. Second, to allow for dual-chip cards, the chip would have to be shrunk to a size that would not allow 512-bit memory. Not a problem in 2009, since there will be plenty of GDDR5 to go around, but oops, GT200 doesn't support GDDR5 or even GDDR4. That would mean having to design a new memory controller, but I think nVidia will launch a completely new generation of GPUs instead. And make a lot of noise about DX11.
Ailuros said:
Just for theory's sake, if they were to consider such a scenario for 2009, it would for one thing mean that it wouldn't carry a shitload of ROPs (not a problem, since GDDR5 availability will be more than fine) and that they would glue two performance SKUs together for such a project.
It wouldn't carry a shitload of ROPs? Then it would be a different chip from GT200. So we agree that there won't be a GT200 GX2. I say that there won't be another "enthusiast-class" GPU based on GT200. The architecture, though modified, will be with us for quite some time and I think nVidia will swap it for a new generation instead of beating an almost-dead horse.
 
Lol bad AA? Please explain this mystical advance in AA that DX10.1 brings.
Access to subsamples is limited in DX10, which causes problems with either AA quality (hack 1: deferred shading) or performance (hack 2: supersampling).

A nice read (I know it's the Inquirer, but they sum it up pretty well): Why DX10.1 matters to you.

What MSBRW does is quite simple, it gives shaders access to depth and info for all samples without having to resolve the whole pixel. Get it now? No? OK, we'll go into a bit more detail. DX10 forced you to compute a pixel for AA (or MSAA) to be functional, and this basically destroyed the underlying samples. The data was gone, and to be honest, there was no need for it to be kept around.
...
DX10.1 brings the ability to read those sub-samples to the party via MSBRW. To the end user, this means that once DX10.1 hits, you can click the AA button on your shiny new game and have it actually do something. This is hugely important.
...
In the end, DX10.1 is mostly fluff with an 800-pound gorilla hiding among the short cropped grass. MSBRW will enable AA and deferred shading, so you can have speed and beauty at the same time, not a bad trade-off.
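To make the MSBRW point a bit more concrete, here is a rough C++ sketch of the kind of setup being described, written against the D3D10.x API as I understand it. This is purely my own illustration, not anything from the article: the helper name is invented, and the comments note where the HLSL side (Texture2DMS.Load reading individual samples) comes in. The point is simply that the MSAA target stays bound as a shader resource so a later pass can read sub-samples instead of an already-resolved pixel.

Code:
#include <d3d10_1.h>

// Hypothetical helper (name and layout are mine, not from the article):
// create a 4x MSAA colour target that can also be bound as a shader resource,
// so a later pass can read the individual sub-samples (Texture2DMS.Load in
// HLSL) instead of being forced to work from an already-resolved pixel.
HRESULT CreateMsaaGBufferTarget(ID3D10Device1* device, UINT width, UINT height,
                                ID3D10Texture2D** tex,
                                ID3D10RenderTargetView** rtv,
                                ID3D10ShaderResourceView** srv)
{
    D3D10_TEXTURE2D_DESC desc = {};
    desc.Width            = width;
    desc.Height           = height;
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 4;                               // 4x MSAA
    desc.Usage            = D3D10_USAGE_DEFAULT;
    desc.BindFlags        = D3D10_BIND_RENDER_TARGET | D3D10_BIND_SHADER_RESOURCE;

    HRESULT hr = device->CreateTexture2D(&desc, NULL, tex);
    if (FAILED(hr)) return hr;
    hr = device->CreateRenderTargetView(*tex, NULL, rtv);
    if (FAILED(hr)) return hr;
    // A NULL view description views the whole resource; for a multisampled
    // texture this yields a TEXTURE2DMS view that the lighting pass can
    // Load() from per sub-sample.
    return device->CreateShaderResourceView(*tex, NULL, srv);
}

As far as I know, this much already works for colour buffers in plain DX10.0; the 10.1 additions being talked about (per-sample pixel shader execution, readable multisampled depth) are what make the MSAA-plus-deferred-shading combination practical rather than a workaround.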

It's funny: most of the current cards (G92 GT, GTS and so on) deliver great performance at 1280x1024, even with AA/AF. At 1600x1200, and especially 1920x1200, their FPS are too low, especially if AA/AF is used.

Now, the GT200 will deliver great performance at high resolutions, even if quality settings like AA/AF are used. But it doesn't address the general problem of AA quality in DX10. I can't believe Nvidia won't fix that for another 1.5 years.
 
What's Tesla 2.0?
[attached slide: 79.jpg]
 
Which reminds me that I find it strange they'd implicitly promise, in a presentation aimed at top analysts, that their DX11 chip would be ready in 2009. Or am I misreading that and they're merely hinting at a 45nm refresh? Guess it's probably the former, with the latter as what they'd pretend they meant in case their schedule slips... heh.
 
Well, there's a big difference between January 2009 and December 2009. In case their schedule slips, I think GT200 can hold the line against non-existent competition.
 
Uhhh, no, GT200 doesn't have a chance in hell to compete against 40nm chips. Now, what I really want to know is whether the GT21x family is 55nm or 40nm... I'm starting to suspect 40, but I really don't know. I'd also like to know whether the two other designs in the GT20x family are 65 or 55nm, I'm suspecting the latter but once again I don't know.

Heck, a quick Google search tells me those codenames are public now, so here goes: http://itnotice.ru/s-forumov/ixbt/b...i-novosti---4-ya-vetka-prodolzhenie-5157.html
 

This slide seems to confirm my theory:
- GT200 just brings in awesome power for existing technology
- Q1/09 will be the release date of their new architecture

I currently own a GF6800GT (yes, I'm one of the idiots who have been waiting for the awesome G90/9800 GTX which never existed...) - maybe it's smarter to buy a "value card" (G92, RV770) for ~200USD and switch to the real deal next year.
 
- Q1/09 will be the release date of their new architecture

I would doubt that: GT200 is supposed to be the 2nd US (unified shader) architecture, so there should be a refresh on a smaller process first, and D3D11, which the next generation will support, won't be available before late 2009.

Heck, a quick Google search tells me those codenames are public now, so here goes: http://itnotice.ru/s-forumov/ixbt/b...i-novosti---4-ya-vetka-prodolzhenie-5157.html

Interesting; I heard about this a while ago.

So there is a full line-up with integrated parts planned?

But the question is: when?
I suspect these parts will be made on 45nm in early 2009, while 55nm G9x parts will be sold below GT200 until then.
 
Interesting; I heard about this a while ago.
So there is a full line-up with integrated parts planned?
Yes, and it's fairly easy to figure out what they are too. If GT200 is 10 clusters with 24 SPs each, then:
GT206 = 10/(2*2*2) = 1.25 ~= 1 cluster -> 8 TMUs, 24 SPs, 4 ROPs
GT209 = Minimal possible configuration for IGPs, less than half GT206 -> 4 TMUs, 8 SPs, 2 ROPs(?)
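Just to spell the arithmetic out, here's a trivial sketch of the scaling guess above (the 8 TMUs per cluster and 32 ROPs total for GT200 are my own assumptions, inferred from the derived figures rather than from any confirmed spec):

Code:
#include <cstdio>
#include <cmath>

// Back-of-the-envelope scaling guess. Assumed (not confirmed) GT200 baseline:
// 10 clusters x 24 SPs, 8 TMUs per cluster, 32 ROPs in total.
int main()
{
    const double gt200_clusters = 10.0;
    const int sps_per_cluster   = 24;
    const int tmus_per_cluster  = 8;
    const int gt200_rops        = 32;

    const double divisor = 2.0 * 2.0 * 2.0;  // "GT206 = GT200 / (2*2*2)"
    const int gt206_clusters =
        static_cast<int>(std::floor(gt200_clusters / divisor + 0.5));  // 1.25 -> 1

    std::printf("GT206 guess: %d SPs, %d TMUs, %d ROPs\n",
                gt206_clusters * sps_per_cluster,            // 24 SPs
                gt206_clusters * tmus_per_cluster,           // 8 TMUs
                static_cast<int>(gt200_rops / divisor));     // 4 ROPs
    return 0;
}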

My suspicion is that GT206 will have taped out on 55nm before GT200 was even back from the fab, and they'll ship it in A12. In other words, I am expecting only minimal design changes to the clusters and ROPs, just enough to improve their scalability downwards for SKUs with redundancy and for iGT209.

But the question is: when?
I suspect these parts will be made on 45nm in early 2009, while 55nm G9x parts will be sold below GT200 until then.
My expectation right now is something along these lines:
Back-to-School Cycle: 65nm GT200, 55nm G92b, 65nm G94, 65nm G96, 65nm G98, MCP78/MCP7A/MCP73/MCP6x-based IGPs
Winter Cycle: 65nm GT200, 55nm G92b, 65nm(?) G94, 55nm(?) G96b, 55nm GT206, MCP78/MCP7A/MCP7C/MCP6x-based IGPs
Spring Cycle: 40nm GT212, 40nm GT214, 40nm GT216, 40nm GT218, 55nm GT206, iGT206/iGT209/MCP7C-based IGPs

EDIT: BTW, I don't think traditional migration cycles from both NV and ATI will be able to predict anything for 40G. It's such a huge improvement in gate density and perf/mm²... While you could justify staying one half-node behind generally due to yields and wafer costs, this won't be justifiable in this generation IMO and so both companies will rush to try not being left behind. So I would expect the 40nm chips from NV and ATI in late Q4 or Q1.
 
Access to subsamples is limited in DX10, which causes problems with either AA quality (hack 1: deferred shading) or performance (hack 2: supersampling).

No, I know what you're referring to. It's just that direct access to the subsamples does not enable some massive improvement in AA quality over the "bad DX10 AA" that employs the workarounds. It's a welcome improvement, but it in no way makes existing AA "bad" all of a sudden. I think the more important question is why we still aren't getting AA in DX10 games using deferred shading (which isn't a hack, btw).
 
What's wrong with 55nm? That it doesn't allow a dual-chip design with large chips like the GT200.

Why exactly should a cut-back performance chip with a smaller MC be impossible at 55nm later on?

It wouldn't carry a shitload of ROPs? Then it would be a different chip from GT200.
What's a G92, and why was it possible for that chip, despite its 323mm² die size, to end up as a GX2?

So we agree that there won't be a GT200 GX2.
Personally I don't think any of those kinds of configs make sense yet, but for other reasons. If, however, they should end up in a future timeframe X without anything to compete against whatever AMD's future products are, a GX2 is the quick and dirty way to close the gap, just as the 9800GX2 was (which had a pretty short lifetime anyway).


I say that there won't be another "enthusiast-class" GPU based on GT200. The architecture, though modified, will be with us for quite some time and I think nVidia will swap it for a new generation instead of beating an almost-dead horse.
The question is how long it'll take until the D3D11 generation arrives and how many refreshes/gap-fillers NVIDIA will need until such a product even makes sense.
 