NVIDIA GT200 Rumours & Speculation Thread


Arun

Note: If this isn't the right codename, obviously the thread title will be changed later.

In recent months, there have been a variety of rumours about a new NVIDIA monster-chip. Fudzilla has been at the forefront of claiming that chip is called 'GT200':
Next gen Nvidia is GT200
GT200 won't do DirectX 10.1
Nvidia's GT200 is a 65nm chip
GT200 thermal design power (TDP) is 250W

However, there are a variety of other rumours that might be referring to the same chip, such as:
nVidias kommender G100 im Detail (Update) [Translation]

Discussion points in this thread may include:
- GT200 specs/power/costs.
- GT200 Architecture: How different is it from G9x?
- Advantages & disadvantages of 'monster chips' (financially and otherwise).
 
Well, I really hope G100 (or GT200, whatever :) ) will be the "next G80" and will bring a similar performance leap to G71-->G80 :)
 
512-bit memory interface with GDDR3. ;)

No way. A 512-bit interface is a very expensive thing. Look at R600 - it doesn't give any advantage over the "only" 384-bit interface in G80 :) G100 should get more SPs rather than a 512-bit memory interface. :)
 
Why not go 512-bit? With that transistor and power/heat budget you're well over the top anyway.

See R600? They went to 512-bit and then with RV670 went back to 256-bit without any performance hit. That shows 512-bit doesn't give any advantage. IMO the most important things are architectural improvements/changes and shader processors. I think G100 should have at least 256 improved SPs, because many future games will supposedly need a lot of shader power.
I believe GT200 (or G100) will be about 2 times faster in real-world games, because about 1.5 years without any performance boost is very annoying. We simply deserve it ;)
 
See R600? They went to 512-bit and then with RV670 went back to 256-bit without any performance hit. That shows 512-bit doesn't give any advantage.
That's a ridiculous extrapolation and claim. R600 wasn't that fast, so it wasn't bandwidth-starved and 512-bit was massive overkill. Now, if what you want to see is 2x the performance of G80, good luck getting that with 256-bit or 384-bit GDDR3... Unless what you're thinking of is 256-bit GDDR5, in which case that's another (and much more complex) debate entirely!
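For what it's worth, a rough back-of-the-envelope check of that bandwidth argument, using the public GeForce 8800 figures (peak numbers only; the 2x target, and the assumption that bandwidth has to scale along with performance, are just the premise under discussion):

Code:
def bandwidth_gbs(bus_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s: bus width in bytes times per-pin data rate."""
    return bus_bits / 8 * data_rate_gbps

gtx = bandwidth_gbs(384, 1.8)     # 8800 GTX: 384-bit, 900 MHz GDDR3 -> 86.4 GB/s
ultra = bandwidth_gbs(384, 2.16)  # 8800 Ultra: 384-bit, 1080 MHz GDDR3 -> ~103.7 GB/s
target = 2 * ultra                # ~207 GB/s if bandwidth must scale with performance

for bus_bits in (256, 384, 512):
    rate_needed = target / (bus_bits / 8)
    print(f"{bus_bits}-bit bus needs ~{rate_needed:.1f} Gbps per pin")
# 256-bit -> ~6.5, 384-bit -> ~4.3, 512-bit -> ~3.2 Gbps per pin;
# GDDR3 of this era tops out around 2.4 Gbps/pin, hence the scepticism above.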
 
That's a ridiculous extrapolation and claim. R600 wasn't that fast, so it wasn't bandwidth-starved and 512-bit was massive overkill. Now, if what you want to see is 2x the performance of G80, good luck getting that with 256-bit or 384-bit GDDR3... Unless what you're thinking of is 256-bit GDDR5, in which case that's another (and much more complex) debate entirely!

Even if 256-bit would surely be a bottleneck, 384-bit should be enough for a GPU much faster than G80 :) Well, I could be wrong, but the GF8800GTX is doing great at every resolution with its 384-bit memory interface (including with AA).
 
Even if 256-bit would surely be a bottleneck, 384-bit should be enough for a GPU much faster than G80 :) Well, I could be wrong, but the GF8800GTX is doing great at every resolution with its 384-bit memory interface (including with AA).

And what if you want to double the performance of the 8800GTX Ultra? Do you really think keeping the current memory throughput as is won't become a bottleneck?
 
512-bit memory interface with GDDR3. ;)

In my opinion that's not enough (~140 GB/s with 0.83ns) to feed this monster. :mrgreen:

The availability of GDDR5 (which starts at <0.5ns) could be relevant for GT200's release date, I suspect...
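A quick sanity check of that ~140 GB/s figure, assuming a 512-bit bus with GDDR3 at the quoted 0.83ns rating and slightly conservative shipping clocks (both assumptions on my part):

Code:
def bandwidth_gbs(bus_bits, data_rate_gbps):
    # Peak bandwidth in GB/s = bus width in bytes * per-pin data rate
    return bus_bits / 8 * data_rate_gbps

rated_clock_ghz = 1 / 0.83        # 0.83 ns period -> ~1.20 GHz memory clock
rated_rate = 2 * rated_clock_ghz  # GDDR3 is double data rate -> ~2.4 Gbps/pin
shipping_rate = 2 * 1.1           # cards usually ship a bit below the rated clock

print(bandwidth_gbs(512, rated_rate))     # ~154 GB/s at the rated limit
print(bandwidth_gbs(512, shipping_rate))  # ~141 GB/s, roughly the quoted figure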
 
In my opinion that's not enough (~140 GB/s with 0.83ns) to feed this monster. :mrgreen:

The availability of GDDR5 (which starts at <0.5ns) could be relevant for GT200's release date, I suspect...

What monster? :) How do you know how fast G100/GT200 will be?
 
And what if you want to double the performance of the 8800GTX Ultra? Do you really think keeping the current memory throughput as is won't become a bottleneck?

It certainly will, as e.g. the 8800GTS-512 with its 256-bit bus is more often BW-limited than not. The interesting question is how nVidia will resolve the issue. I see 4 solutions (rough bandwidth numbers sketched below):
- 256-bit with insanely fast memory (and what about latency?)
- 384-bit with GDDR4 (this probably sounds feasible)
- 512-bit crossbar (good luck with that :eek: )
- 512-bit with a new memory bus (which had better have been a long while in the making)
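Rough peak numbers for those four options, with assumed per-pin data rates (my guesses for what each memory type could plausibly hit, not anything confirmed):

Code:
def bandwidth_gbs(bus_bits, data_rate_gbps):
    # Peak bandwidth in GB/s = bus width in bytes * per-pin data rate
    return bus_bits / 8 * data_rate_gbps

options = {
    "256-bit + insanely fast memory (~3.2 Gbps/pin)": (256, 3.2),
    "384-bit GDDR4 (~2.8 Gbps/pin)":                  (384, 2.8),
    "512-bit GDDR3 crossbar (~2.2 Gbps/pin)":         (512, 2.2),
    "512-bit new bus + GDDR4 (~2.8 Gbps/pin)":        (512, 2.8),
}
for name, (bus, rate) in options.items():
    print(f"{name}: {bandwidth_gbs(bus, rate):.0f} GB/s")
# ~102, ~134, ~141 and ~179 GB/s respectively, versus ~104 GB/s on the 8800 Ultra.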

On a different note, I think nVidia would be really stupid to release a 2bn-transistor chip. No way they can get decent yields with that...
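For anyone curious why die size matters so much here, a minimal sketch of the standard Poisson yield model (the defect density and die areas below are illustrative assumptions, not figures for any real chip or process):

Code:
import math

def poisson_yield(die_area_mm2, defects_per_cm2):
    """Fraction of dice expected to have zero random defects."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100)

d0 = 0.2  # defects per cm^2, assumed for a reasonably mature process
for area_mm2 in (330, 480, 580):  # roughly G92-, G80- and "monster"-class die sizes
    print(f"{area_mm2} mm^2 -> ~{poisson_yield(area_mm2, d0):.0%} good dice")
# ~52%, ~38% and ~31%: yield falls off exponentially with area, and fewer
# candidate dice fit on each wafer to begin with.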
 
Man I wish they would go back to the old NV nomenclature... It was so easy back in those days. I pretty much assume whatever NV puts out next will be "NV55".
 
Man I wish they would go back to the old NV nomenclature... It was so easy back in those days. I pretty much assume whatever NV puts out next will be "NV55".

My thoughts exactly.


GT200, D9E, G9X, or whatever, is NV55, a major refresh / overhaul of G80. Like what NV47 / G70 / GF 7800 GTX was to NV40 / GF 6800. Not a totally new architecture (as G80 is from NV4x/G7x) but still a new GPU. A highend GeForce 9 series.

Nvidia's next-gen NV60 / GeForce 10 series, I don't expect to see that until late 2009, or around the time Larrabee becomes a product.
 
My thoughts exactly.


GT200, D9E, G9X, or whatever, is NV55, a major refresh / overhaul of G80. Like what NV47 / G70 / GF 7800 GTX was to NV40 / GF 6800. Not a totally new architecture (as G80 is from NV4x/G7x) but still a new GPU. A highend GeForce 9 series.

Nvidia's next-gen NV60 / GeForce 10 series, I don't expect to see that until late 2009, or around the time Larrabee becomes a product.

Isn't NV60/GF10 expected in Q3 this year? That would certainly fit into the "normal" NV timescales. We get NV55/GF9 in Q1, and then 6 months later we get the "real" next gen GPU.

Personally, I'm holding out for GF10.
 