NVIDIA GT200 Rumours & Speculation Thread

Which reminds me that I find it strange they'd implicitly promise, in a presentation aimed at top analysts, that their DX11 chip would be ready in 2009. Or am I misreading that and they're merely hinting at a 45nm refresh? Guess it's probably the former, with the latter as what they'd pretend they meant in case their schedule slips... heh.

IMO each swinging line stands for an "era", with the last one starting from CUDA and ending with DX11 - but "Next-Gen" in Q1/09 is nowhere near as close to the end of the curved line as G80 is for DX10.
 
Why exactly should a cut-back performance chip with a smaller memory controller be impossible at 55nm later on?
A cut-down GT200? Well, it can't have half the units, since a dual setup of such chips would be slower than a single GT200. It could have 3/4 of the units plus a 384-bit bus, maybe? But then it would perhaps make more sense to use faulty GT200s and disable 1/4 of the units and two memory channels. Still, I think the power requirements of such a hypothetical card would make its existence impossible.
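A quick back-of-the-envelope sketch of that harvesting idea - note the 512-bit bus built from eight 64-bit channels is my own assumption about GT200, not a confirmed spec:

```python
# Hypothetical cut-down GT200 (all numbers are illustrative assumptions, not known specs)
full_bus_bits = 512                        # assume a 512-bit bus made of 64-bit channels
channel_bits = 64
channels = full_bus_bits // channel_bits   # 8 channels

disabled_channels = 2
salvage_bus_bits = (channels - disabled_channels) * channel_bits
print(salvage_bus_bits)                    # 384 -> the "3/4 units + 384-bit bus" configuration

unit_fraction = 3 / 4                      # disable 1/4 of the shader units on a faulty die
print(f"Roughly {unit_fraction:.0%} of the shading power, {salvage_bus_bits}-bit bus")
```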
What's a G92, and why was it possible for that chip, despite its 323mm^2 die size, to end up as a GX2?
It's much easier to make a dual-chip card if the memory bus width is only 256 bits, which isn't the case for anything that could potentially be faster than GT200 while still being based on the same, GDDR5-unfriendly architecture. Still, with G92, nVidia ended up with an awkward sandwich that is expensive to make and whose cooling system is noisy and difficult to replace.
The question is how long it'll take until the D3D11 generation arrives and how many refreshes/gap-fillers NVIDIA will need until such a product even makes sense.
I think the next generation high-end won't be here at least until a year from now. Basically the same as with G80.
 
GT200 is reported to have a TDP of over 200 watts. Suppose we could disable a few blocks and lower the clocks to get it down to ~180 W. That's still too much for a dual-GPU card. The same goes for R700 and a quad setup, not to mention that the drivers for QuadCF are in a horrible state. Neither ATi nor nVidia can shrink the chips, since the 45nm manufacturing process won't be ready until Q4'08, and it will take at least another half a year before it is mature enough to be used for chips as complex as GT200.
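For what it's worth, here is the rough arithmetic behind that ~180 W figure and why doubling it for a GX2-style card looks hopeless - the scaling factors are my own assumptions, not measured data:

```python
# Back-of-the-envelope power estimate (all inputs are assumptions, not measurements)
base_tdp_w = 200.0       # reported GT200 TDP is "over 200 W"; use 200 as a floor
unit_fraction = 0.90     # disable ~10% of the blocks
clock_fraction = 0.95    # drop clocks ~5%; dynamic power scales roughly linearly with frequency

cut_down_tdp = base_tdp_w * unit_fraction * clock_fraction
dual_gpu_tdp = 2 * cut_down_tdp

print(f"cut-down chip: ~{cut_down_tdp:.0f} W")   # ~171 W, in line with the ~180 W guess
print(f"dual-GPU card: ~{dual_gpu_tdp:.0f} W")   # ~342 W - far beyond what one card can handle
```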

It will? How do you know that?

Well there are some people here who are under NDA and know the specs, so they would certainly know. I'm under NDA myself, so I can't tell you. Take it as my educated guess.

What's Tesla 2.0?

oops, typo, i guess .. 'Tesla 2.0' sounds like a good architectural codename [anyway] ... GT200 =)
EDIT: looking at that slide it appears to be a very fortuitous typo .. Tesla 2 indeed!

Q4 '08 !!... Good God, that is less than 6 months! .. thanks for the confirmation!
- i was unsure until you mentioned that 55nm will be fully ready for them by then; and Spring for GT200's shrink to 45nm! Faster than i even thought!

Of course, r700 will have a shrink. AMD is possibly looking at SMIC on the Chinese mainland for their asset-lite divestment - SMIC already has 45nm for IBM and it would be an easy thing for AMD's ATi engineers to make this adjustment. If not, then it will be TSMC and asset-light for their shrink. Unless r700 is a flop, but that is doubtful as they are going for "pricing", it appears.


as to the TDP, look at that nasty sandwich GX2 .. i will say no more but that GT200 may have a "reserve" no one *can* talk about - for an X2 at 55nm this Winter [very doubtful indeed at 65nm!!]. Ugly and inelegant but a ready answer for an X4 [and we do know AMD wants one and they ARE working on CF-x drivers as Priority Two; how long will that take after r700 is released? a few months .. maybe Q2 also - but that is raw speculation on my part; as befits this thread]


. . . and it is my "pure guess" about DX10.1 .. thanks also; all nVidia would have to do is enable it in their drivers - at some point - and very few would know about it - right now.

But you can tell us what the reasoning is for leaving DX10.1 out of NVIDIA's architecture for the next two years .. please
[it makes zero sense to me]

http://www.theinquirer.net/gb/inquirer/news/2007/11/16/why-dx10-matters
[ignore the fact that it IS theInq, please, and answer them!]

and don't forget:
http://www.teamati.com/showthread.php?t=4887

. . . . and Arun also has his roadmap:

My expectation right now is something along these lines:
Back-to-School Cycle: 65nm GT200, 55nm G92b, 65nm G94, 65nm G96, 65nm G98, MCP78/MCP7A/MCP73/MCP6x-based IGPs
Winter Cycle: 65nm GT200, 55nm G92b, 65nm(?) G94, 55nm(?) G96b, 55nm GT206, MCP78/MCP7A/MCP7C/MCP6x-based IGPs
Spring Cycle: 40nm GT212, 40nm GT214, 40nm GT216, 40nm GT218, 55nm GT206, iGT206/iGT209/MCP7C-based IGPs
 
Q4 '08 !!... Good God, that is less than 6 months! .. thanks for the confirmation!
- i was unsure until you mentioned that 55nm will be fully ready for them by then; and Spring for GT200's shrink to 45nm! Faster than i even thought!
The sentence continues. IIRC, 45nm production will start sometime in Q4'08, BUT most likely only for less complex, low-power chips, not GPUs. My prediction is that TSMC will be able to manufacture GPUs on the 45nm (or perhaps nV and ATi will jump right to 40nm) in Q2'09, not sooner and very probably later.
Of course, r700 will have a shrink. AMD is possibly looking at SMIC on the Chinese mainland for their asset-lite divestment - SMIC already has 45nm for IBM and it would be an easy thing for AMD's ATi engineers to make this adjustment. If not, then it will be TSMC and asset-light for their shrink. Unless r700 is a flop, but that is doubtful as they are going for "pricing", it appears.
I heard some rumours about TSMC going SOI, so they can manufacture CPUs for AMD. I think it was Fudzilla. They also claimed that Fusion is gonna be a monolithic die, so the GPU would have to be designed for SOI as well. Can't say I trust it, though.
as to the TDP, look at that nasty sandwich GX2 .. i will say no more but that GT200 may have a "reserve" no one *can* talk about - for an X2 at 55nm this Winter [very doubtful indeed at 65nm!!]. Ugly and inelegant but a ready answer for an X4 [and we do know AMD wants one
We do?
... and it is my "pure guess" about DX10.1 .. thanks also; all nVidia would have to do is enable it in their drivers - at some point - and very few would know about it - right now.
They can't enable in drivers what the chip doesn't support.
But you can tell us what the reasoning is for leaving DX10.1 out of NVIDIA's architecture for the next two years .. please
[it makes zero sense to me]
I had a lengthy discussion with DegustatoR on this topic a while back in this thread. It starts on page 28, post #638.
 
The sentence continues. IIRC, 45nm production will start sometime in Q4'08, BUT most likely only for less complex, low-power chips, not GPUs. My prediction is that TSMC will be able to manufacture GPUs on the 45nm (or perhaps nV and ATi will jump right to 40nm) in Q2'09, not sooner and very probably later.

I heard some rumours about TSMC going SOI, so they can manufacture CPUs for AMD. I think it was Fudzilla. They also claimed that Fusion is gonna be a monolithic die, so the GPU would have to be designed for SOI as well. Can't say I trust it, though.

We do?

They can't enable in drivers what the chip doesn't support.

I had a lengthy discussion with DegustatoR on this topic a while back in this thread. It starts on page 28, post #638.

And you are certain the Tesla architecture does not support it at all?! - 100% sure; OK, i will let it rest for good.

OK, and thank you for the heads-up on the why not. i am very late to the party at B3D, although i kept up at ARS and ATF. It is a lot to consider, and i just find it difficult to believe that nVidia will just ignore DX10.1 - for two more whole years! This is just my feeling on it, with nothing concrete to back it up whatsoever - just so you know i do not know - but i am just incredulous.
i don't trust Fudzilla either; they are a mouthpiece for deception also, and one has to look deep for any truths. However, there are those who say Fusion is a Pipe-Dream and the real reason that AMD acquired ATi was for ATi's engineers to teach AMD how to transition from their own FABs to commodity FABs. AMD can sell its OWN Dresden fab to anyone - and its foundry equipment to SMIC if they choose - if they really want to divest themselves of it and become truly asset-lite! Of course that is my own wild industry speculation, which Arun says is just plain wrong. Who really knows and directs what AMD will do? They don't post here, or else they keep it 100% to themselves.

And, Yes, we DO know that AMD wants X4! That is the purpose of CrossfireX! Evidently their partners are also screwing around with 3870x4s; 4870x2 and x4 is logical; maybe x3 like Phenom also; they are broke. That is evidently the only way they will compete with Tesla until the next round in a couple of years - and AMD has DX10.1 now, also, to tout as their second coming and savior of graphics.
 
This slide seems to confirm my theory:
- GT200 just brings in awesome power for existing technology
- Q1/09 will be the release date of their new architecture


Q1/2010 for Nvidia's next-gen, brand new, clean sheet DX11 architecture is MUCH more likely than Q1 2009.

Nvidia's next-gen is going to be Q4 2009 or Q1 2010, around the same time that Intel is supposed to bring Larrabee to market.
 
And you are certain the Tesla architecture does not support it at all?! - 100% sure; OK, i will let it rest for good.
Before G80 was launched, I heard some rumours about game devs emulating DX10 on R580 shaders. I suppose it's possible since the ALUs are highly versatile, but the performance would suck, so it can't be used for real. Current nVidia chips can also do almost anything through CUDA, but not everything might be really usable. Besides, DX10.1 isn't about new technologies, but about speeding up the current ones, so emulation wouldn't make any sense. nVidia says they don't care about DX10.1 because the game devs have enough problems already, but in my opinion it's just a smoke & mirrors maneuver covering for G80 being built primarily for DX9 and not for all those new DX10.1 features. The R600 was clearly designed with these in mind, though the architectural flexibility cost ATi more transistors, forcing them to use the 80nm process... and you know the rest.
and i just find it difficult to believe that nVidia will just ignore DX10.1 - for two more whole years!
Two years? No, I don't think so. GT200 is a slightly or heavily modified G80, but its principles won't last another two years. Three years on the market is long enough for even the best architectures to grow obsolete. R300 was launched in August 2002, R520 came in September 2005 (with a three-month delay or so), and it was just about time the old architecture was replaced. So, just as Megadrive1988 says, Q4'09 could be the right time to release a new, DX11-based product.
However, there are those who say Fusion is a Pipe-Dream and the real reason that AMD acquired ATi was for ATi's engineers to teach AMD how to transition from their own FABs to commodity FABs.
That is, forgive me, total bullshit. Every manufacturing process, even from the same company, has its specifics and chips must be designed from scratch in order to be able to use it. In the past, AMD transitioned from traditional bulk process to SOI with no trouble. In the past, nVidia fabbed some of its chips (notably the famous NV30) in IBM fabs before settling at TSMC for good.
And, Yes, we DO know that AMD wants X4! That is the purpose of CrossfireX!
Something tells me you don't mean knowing as in Arun's (was it Arun?) definition. Four GPUs on a card, if we're talking about RV670 or RV770, is - sorry - also total bullshit. The purpose of CrossFireX is to allow for X2 cards to work together, or with a single card. Just as you can't put two chips of the G80/R600/GT200 calibre on one card, you can't put four RV670/RV770s on a card.
By the way, Quad CrossFire scaling sucks so badly ATi would be only shooting itself in the foot by marketing it as a usable graphics solution.
 
The sentence continues. IIRC, 45nm production will start sometime in Q4'08, BUT most likely only for less complex, low-power chips, not GPUs. My prediction is that TSMC will be able to manufacture GPUs on the 45nm (or perhaps nV and ATi will jump right to 40nm) in Q2'09, not sooner and very probably later.

I would love to know how and why this somehow became "confirmed."
When I first posted about shrinking to 45/40nm, I brought up the theory that an 800SP RV770 might be on 45/40nm. I was then corrected that shrinking would be unlikely until late Q3/Q4, so I went back to check the dates of other announced nodes, such as 80/65/55nm, and found that Q3/Q4 lined up more realistically.

So I am just wondering how this got changed to Q1/Q2 '09, i.e. when this happened and what the sources were/are.
 
Ailuros said:
The question is how long it'll take until the D3D11 generation arrives and how many refreshes/gap-fillers NVIDIA will need until such a product even makes sense.

Well, Nvidia doesn't really like refreshes these days, so I would not really expect a refresh of the GT200 chip unless yields are unsatisfactory... I'd guess somewhere between a year and 18 months for a D3D11-class chip (if they do refresh the GT200 chip, it will probably be 18+ months to max ROI).
 
Well, Nvidia doesn't really like refreshes these days, so I would not really expect a refresh of the GT200 chip unless yields are unsatisfactory... I'd guess somewhere between a year and 18 months for a D3D11-class chip (if they do refresh the GT200 chip, it will probably be 18+ months to max ROI).
They are going to shrink the GT200 ASAP, once they believe the process is mature.
They want the largest margin possible as soon as possible, and a 500mm2+ die isn't going to have large margins.
 