NVIDIA GT200 Rumours & Speculation Thread

Status
Not open for further replies.
First of all, GT200 is the replacement for G80 and nVidia expects to use it for two years. That is normal for a new architecture.
You're forgetting that GT200 is not a new architecture.
No one seriously expects DX11 before 2010 after Vista 7 launches. Link me otherwise, please.
I don't seriously know. Can you provide links to articles claiming that DX11 will come more than 3 years after DX10? By the way, it's not Vista 7, but Windows Seven.
and we know for sure there are 3 GPUs - a x3 on a single PCB
But that's "unofficial", just as some earlier Gemini designs.
of course it is ES, but you know AMD is thinking about it.
I don't, and I think I'd be the one to know.
As to SMIC ...
Sorry, I don't have the time to read it right now.
 
Ohh, the first patent, right - I thought it was another one, sorry. In fact, that is clearly NOT used in G8x - it is a generic patent, possibly aimed at NV's DX11 arch, or possibly not. As the summary clearly says, it is a way to use the shader pipeline to do texture addressing & texture filtering work to reduce bottlenecks; it doesn't replace the TA/TF pipeline, but complements it; i.e. you've got the benefits of both fixed-function units and being able to allocate all of your resources to it (like Larrabee would, just without as much fixed-function stuff).
Unfortunately we're in interpretation-soup - you could just interpret this as execution of fp32 texture filtering in the ALUs using texels fetched by the TMUs - hmm, G80 can do fp32 texture filtering can't it?

I haven't read it at all closely, just perused the diagrams.

Because it's a generic patent, they likely didn't bother making the drawings or anything else really look like the real-world implementation would work. Their patent isn't any less valuable because of it and so forth - so why would they?
I agree. Which is why I'm willing to accept it as fully implementable in software at some point. And also why not every unit described may be implemented in current hardware - instead units might be amalgamated.

There's not any document that claims that, but it's an incredibly simple and efficient way to explain the bilinear rates for INT8 and FP32 respectively on G84/G86/G9x; the latter is actually higher than the former when used in scalar form, while peak bilinear rates in INT8/INT16/FP10/FP16/etc. are impossible to achieve.
An old theory relating to int8 rate was simply register pressure and I still think this hasn't been fully explored.

Separately, did you look at the floating point filtering patent document? This is clearly relevant to the fp10 and fp16 cases at minimum, I'd say. EDIT: in fact it's quite explicit about doing fixed point, "s2.14" filtering, i.e. 16-bit - so it would seem it can't do fp32 filtering and so fp32 filtering would have to be performed on the ALUs.
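For illustration, here's a rough sketch of bilinear blending with "s2.14"-style weights, i.e. 14 fractional bits. This is my own construction, not from the patent - the rounding and truncation choices are assumptions:

```python
# Bilinear blend of four texel values using fixed-point weights with
# 14 fractional bits ("s2.14" style). Rounding/truncation choices are
# assumptions, not taken from the patent.
FRAC_BITS = 14
ONE = 1 << FRAC_BITS          # 1.0 in s2.14

def to_fixed(x):
    """Convert a float weight in [0, 1] to s2.14."""
    return int(round(x * ONE))

def bilerp_fixed(t00, t10, t01, t11, fu, fv):
    """fu, fv are s2.14 fractional texel coordinates."""
    top = t00 * (ONE - fu) + t10 * fu      # lerp along u, top row
    bot = t01 * (ONE - fu) + t11 * fu      # lerp along u, bottom row
    out = top * (ONE - fv) + bot * fv      # lerp along v
    return out >> (2 * FRAC_BITS)          # strip both weight scalings

# Four int8 texels, sample point at (u, v) = (0.25, 0.5):
print(bilerp_fixed(0, 255, 0, 255, to_fixed(0.25), to_fixed(0.5)))  # 63 (exact: 63.75)
```

The point being that 14 fractional bits of weight are plenty for int8/fp16 texels, but nowhere near enough to filter fp32 data losslessly - hence the ALU fallback.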

Jawed
 
Was it not 1GB of memory for the 280? Here it is saying 896MB: http://www.techfuzz.com/roadmaps/2008.aspx#GeForce9900GTX They are also stating it will be a dual-chip solution as well, which I highly doubt given how massive the chip already is.


Dear sir


Production of the GT200 series has recently begun in China.

9900GTX is just a refinement of G80 with additional MIMDs.

Do not be fooled by rumors.

GT200 will be revealed within just a few weeks.
 
You're forgetting that GT200 is not a new architecture.

I don't seriously know. Can you provide links to articles claiming that DX11 will come more than 3 years after DX10? By the way, it's not Vista 7, but Windows Seven.

But that's "unofficial", just as some earlier Gemini designs.

I don't, and I think I'd be the one to know.

Sorry, I don't have the time to read it right now.

Nothing is really brand new and it builds on what went before, even though it may take 4 or 5 years to design it using different design teams. As i understand it, GT200 is new to G80 as R700 is new to r600/r670. These types of architectural platforms are generally used for about 18 months at a minimum - unless it is a real turkey. Neither GT200 nor r700 is a simple speed bump and expansion of an older architecture. i believe they had different design teams; please correct me if you know for sure. i think we will also know much more about GT200 and r700/770 really soon. So let it be please, until then.

Sure, Windows Seven is catchy and will be the final name for it, no doubt. However, notice that DX jumps are about 3 years apart. And DX11 will not come before the next MS OS; they are saying [late] '09, at the earliest [afaik]

Of course it is unofficial. X3 is an engineering sample that, to me, shows their thinking and their direction. How is it that you'd be the one to know better?
We know for a fact that CrossFireX is to be used with 2x 3870, 3x 3870 [or X3], and 2x 3870X2 [or X4], and certainly AMD will support 2x 4870X2; why not on a single PCB? X3 works, and it scales fairly well; X4 is a loser right now, but the improving drivers should change that. i think AMD will do whatever it takes to compete with GT200 and nVidia will do whatever it takes to keep GT200 ahead - including their own GTx2 [my prediction] - no matter how inelegant the solution appears.

And comment on AMD and SMIC when you have time; perhaps in the appropriately linked thread.
 
1000/750 × 330mm^2 = 440mm^2

I bet GT200 is similar to the robust improvement between R520 and R580.:D
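If the 1000/750 there is meant as a transistor-count ratio against G92's ~330mm^2 on the same process (my reading of it, not confirmed), the estimate is just linear area scaling:

```python
# Naive die-area estimate: area scales linearly with transistor count
# on the same process node. Reference figures (~750M transistors,
# ~330 mm^2, roughly G92) are the rumoured values from the post above.
def scaled_area(new_mtrans, ref_mtrans=750, ref_area_mm2=330):
    return new_mtrans * ref_area_mm2 / ref_mtrans

print(scaled_area(1000))  # 440.0 mm^2
```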

Are you saying that you think R520->R580, in regards to die size, will be like G92->GT200?
For the ATi GPUs that was "only" an increase in ALUs and some other minor tweaks.
As for Nvidia, this will be not only a ~2x increase in ALUs but also in ROPs, the memory controller and the texture units, though that depends on the final configuration (G92 or G80).
 
Are you saying that you think R520->R580, in regards to die size, will be like G92->GT200?
For the ATi GPUs that was "only" an increase in ALUs and some other minor tweaks.
As for Nvidia, this will be not only a ~2x increase in ALUs but also in ROPs, the memory controller and the texture units, though that depends on the final configuration (G92 or G80).

No, the key difference is that R580 just upped the ALUs, but it was texture-bottlenecked to hell, so it didn't do a vast amount of good.

G80 already has vast texturing power (and GT200 maybe a bit more), so just doubling the shaders could yield double the performance in this case.

And double performance is of course roughly what major chip revamps aim for.
 
1000/750 × 330mm^2 = 440mm^2

I bet GT200 is similar to the robust improvement between R520 and R580.:D

As was already said, R580's improvements were relatively minor compared to what GT200 brings to the table. R580 had only more ALUs (not shader units, just the ALUs; control logic was practically the same) and Fetch4 support. GT200, as far as the latest rumors are credible, has at least two more clusters (+25%) with 50% more shader units per cluster (+87.5% total), and 100% more ROPs/bus width. Now, the crossbar (if it's a crossbar, but there is no evidence that Nvidia will change it) between shader clusters and ROPs is much more expensive than on G92. But, of course, they could have removed the VP2 and so save some transistors. Now, if this chip is "only" 30% bigger, it would be a great success.
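Running the rumoured numbers, assuming G92's 8 clusters of 16 shader units as the baseline (itself rumour territory):

```python
# Cluster arithmetic for the rumoured GT200 configuration, starting
# from G92's assumed 8 clusters x 16 shader units.
g92_clusters, g92_sps = 8, 16
gt200_clusters = g92_clusters + 2                # +25% clusters
gt200_sps = g92_sps * 3 // 2                     # +50% units per cluster
total_g92 = g92_clusters * g92_sps               # 128
total_gt200 = gt200_clusters * gt200_sps         # 240
print(total_gt200, total_gt200 / total_g92 - 1)  # 240 0.875
```

So the two rumoured increases compound to +87.5% shader units overall, not a simple sum.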

No, the key difference is that R580 just upped the ALUs, but it was texture-bottlenecked to hell, so it didn't do a vast amount of good.

G80 already has vast texturing power (and GT200 maybe a bit more), so just doubling the shaders could yield double the performance in this case.

And double performance is of course roughly what major chip revamps aim for.

He's not talking about performance, but die size.
 
Nothing is really brand new and it builds on what went before, even though it may take 4 or 5 years to design it using different design teams. As i understand it, GT200 is new to G80 as R700 is new to r600/r670. These types of architectural platforms are generally used for about 18 months at a minimum - unless it is a real turkey.
But you just said that GT200 is to G80 what R700 is to R600 (from the architectural point of view). Yet R600 was launched in May last year, and even if it had launched on schedule (in Q1'07), that's still a few months short of your 18-month figure. By the way, R580 was also a mild refresh of the R520 architecture, and the gap there was less than 5 months.
Neither GT200 nor r700 are simple speed bumps and expansion of older architecture. i believe they had different design teams; please correct me if you know for sure.
Well, since GT200 is from nVidia and RV770 is from ATi, I'd pretty much say they were designed by separate teams :D Oh, you mean different from those who designed G80 and R600, respectively. I don't really know, but I'd say the teams are not "fixed" - there's probably a number of people working on one chip only, but also a number of people that work on all of them.
Sure, Windows Seven is catchy and will be the final name for it, no doubt. However, notice that DX jumps are about 3 years apart. And DX11 will not come before the next MS OS; they are saying [late] '09, at the earliest [afaik]
Hmm... well you're probably right here; considering how few games now support DX10, current GPUs won't become obsolete so soon. In that case, we may see another GT200-derived product sometime in 2009. Perhaps something with a new memory controller (after all, not every ATi R5xx family chip has a ring-bus, so this should be modular to some extent), with GDDR5 support and the option of making a dual card.
i think AMD will do whatever it takes to compete with GT200 and nVidia will do whatever it takes to keep GT200 ahead - including their own GTx2 [my prediction] no matter how inelegant the solution appears.
I replied to you about the RV770 X4 in the appropriate thread. For the same reasons I mentioned there, a GT200 GX2 is impossible.
 
But you just said that GT200 is to G80 what R700 is to R600 (from the architectural point of view). Yet R600 was launched in May last year, and even if it had launched on schedule (in Q1'07), that's still a few months short of your 18-month figure. By the way, R580 was also a mild refresh of the R520 architecture, and the gap there was less than 5 months.

Well, since GT200 is from nVidia and RV770 is from ATi, I'd pretty much say they were designed by separate teams :D Oh, you mean different from those who designed G80 and R600, respectively. I don't really know, but I'd say the teams are not "fixed" - there's probably a number of people working on one chip only, but also a number of people that work on all of them.

Hmm... well you're probably right here; considering how few games now support DX10, current GPUs won't become obsolete so soon. In that case, we may see another GT200-derived product sometime in 2009. Perhaps something with a new memory controller (after all, not every ATi R5xx family chip has a ring-bus, so this should be modular to some extent), with GDDR5 support and the option of making a dual card.

I replied to you about the RV770 X4 in the appropriate thread. For the same reasons I mentioned there, a GT200 GX2 is impossible.

Don't forget, i left myself an out. Architectures that are turkeys [most recently r600 and X1800] are interim and short-lived. Remember that r600 was also 6 months late, and certainly my 2900XT is very little different from the 3850/3870 series; so r600 certainly lives on in r670, just as G80 is pretty similar to the G92 variants. Look at G80's longevity, and at the X1900 series. The manufacturers try to get 2-3 years out of a core. They do not always succeed. My opinion is that GT200 is mostly new - and nVidia is planning to use it for the next 2 years - as long as DX10 is really viable. Logically, imo.

Yes, i AM looking forward to a GT200 derived product next year to hold us till 2010 and DX11. Yes, we agree about the teams. [To take it further as a joke, AMD and nVidia should play musical chairs with their engineers and swap them back-and-forth freely between their companies; we would see real interchangeability with HW then]. =P

As to GTx2 being impossible .. impossible[!] ... i will disagree with you and then shut up about it. GTx2 is nothing i can prove and i will not attempt it. If you google "GT200x2" there is a lot of speculation about it and i guess i started that speculation by myself at ATF months ago. Maybe i will live it down; it is also not impossible i could also be right - especially next year, when there is a real option to make GT200 into a sandwich - then with the shrink and speed-bump; not now.
 
My opinion is that GT200 is mostly new
Your opinion is wrong. However...
and nVidia is planning to use it for the next 2 years - as long as DX10 is really viable. Logically, imo.
...here you're correct. Even if GT200 is not mostly new, nVidia will try to make the most of the architecture and I'm sure they'll squeeze at least 12-18 months from GT200...
Yes, i AM looking forward to a GT200 derived product next year to hold us till 2010 and DX11.
...or its future derivatives.
As to GTx2 being impossible .. impossible[!] ... i will disagree with you and then shut up about it. GTx2 is nothing i can prove and i will not attempt it. If you google "GT200x2" there is a lot of speculation about it and i guess i started that speculation by myself at ATF months ago. Maybe i will live it down; it is also not impossible i could also be right - especially next year, when there is a real option to make GT200 into a sandwich - then with the shrink and speed-bump; not now.
You know... I've seen people debating fictional specs that were... well, highly improbable. If I remember correctly, they were GF9800 specs that someone copied from some Asian forum (in the native language) and didn't bother to translate the line that said it was just the poster's personal theory. Even though someone in the thread pointed out (more than once) that those specs were not real, those posts went completely unnoticed by the hordes of drooling users.
 
As to GTx2 being impossible .. impossible[!] ... i will disagree with you and then shut up about it. GTx2 is nothing i can prove and i will not attempt it. If you google "GT200x2" there is a lot of speculation about it and i guess i started that speculation by myself at ATF months ago. Maybe i will live it down; it is also not impossible i could also be right - especially next year, when there is a real option to make GT200 into a sandwich - then with the shrink and speed-bump; not now.

i bet they will make one, just because of their history (7950 GX2, 9800 GX2). i think they will introduce it late in its life, just like the others, which were introduced at the end of G70 and G80/G92. i remember reading somewhere that they don't see much of a future for sandwiched cards, and that a powerful single card is better than a sandwiched card. So i see sandwiched cards as a means for them to squeeze all the money they can out of an architecture (after all, it does cost millions to develop). i never buy the sandwiched cards, because i have the patience to wait for the new architecture, which usually destroys the power-hungry sandwich cards anyway :D
 