NVIDIA GT200 Rumours & Speculation Thread

The real question is whether there are any games under development that use deferred shading/shadowing with a DX10.1 path for MSAA.
Killzone 2 (a PS3 title) is based on a classic deferred renderer + MSAA and runs on NVIDIA hardware. This should tell us that NVIDIA got at least DX10.1-level features working well before AMD/ATI, and well before DX10 was even finalized; it's not like they don't know how to do it.
 
The 1.2B transistor count is what Nvidia talked about yesterday.

http://anandtech.com/weblog/showpost.aspx?i=453
See the text beside the die picture of the G100.

See the slide CJ posted in contrast to that.

Were it not for spoiling the RV770 launch, the 65nm version would not be released.

I'm "convinced" that IHVs know not better but to develop a highly complex chip (as in 576 square millimeters) as some funky technological experiment in order to shelve it.
 
BTW, here's a thought: I think the only way Charlie could possibly be right about the GT200b die size being 'a little more than 400mm²' while GT200 is presumably 576mm²... is if GT200b uses GDDR5. 576*0.81 is 466mm², but you could assume the I/O for GDDR5 per-bit isn't massively higher than GDDR3 so you could save maybe 20mm² there. And if you save a bit on the MCs too, there you go, low 400s.
[strike]Is that 0.81 factor based on experienced scaling from other chips' transitions from 65 to 55 nm, or is your calculation based on the theoretical, ideal shrink, where everything scales down with finer process technology?[/strike] Never mind, should've read the thread a little bit further. :(
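For reference, the back-of-the-envelope arithmetic above works out like this; the 0.81 area-scaling factor and the ~20mm² of I/O/memory-controller savings are the assumptions from that post, not confirmed figures:

[code]
# Rough die-area estimate for a 65nm -> 55nm shrink (GT200 -> GT200b).
# All inputs are assumptions taken from the post above, not confirmed numbers.

GT200_AREA_MM2 = 576.0   # presumed 65nm die size
SHRINK_FACTOR  = 0.81    # assumed effective area scaling for 65nm -> 55nm
IO_MC_SAVINGS  = 20.0    # assumed mm^2 saved if GDDR5 allows narrower I/O + MCs

shrunk_area = GT200_AREA_MM2 * SHRINK_FACTOR   # ~466 mm^2
with_gddr5  = shrunk_area - IO_MC_SAVINGS      # low-to-mid 400s mm^2

print(f"Plain shrink:        {shrunk_area:.0f} mm^2")
print(f"Shrink + GDDR5 MCs:  {with_gddr5:.0f} mm^2")
[/code]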
 
CJ might be more reliable than financial and marketing FUD.

Nice photo!

R&D always is a bit of an experiment, that makes it interesting.
And technology serving for a die shrink certainly is no waste.


See the slide CJ posted in contrast to that.



I'm "convinced" that IHVs know not better but to develop a highly complex chip (as in 576 square millimeters) as some funky technological experiment in order to shelve it.
 
Skipping the rest of the bias, GT200b will not hit the market for another couple of months, most likely for the annual NVIDIA November refresh, so anyone buying a GT200 card will not end up with a dead duck. He or she will get the fastest single-core graphics card on the planet, even if it is still a G8x derivative.
Nordichardware
That's good news for all the people who want to buy a new card in July/August.

Any news on whether GT200b is just a die shrink, or a die shrink combined with a switch to a smaller memory interface and GDDR5?
 
CJ might be more reliable than financial and marketing FUD.

If you read the text at the link you posted, you'll notice how careful the wording is regarding the transistor count.

R&D always is a bit of an experiment, that makes it interesting.
And technology serving for a die shrink certainly is no waste.

That would have been the most expensive experiment in the history of graphics chips. And no, it wasn't an experiment; yes, it has been slated to launch as-is for quite some time now, since development didn't start yesterday either.

And since this is getting fairly ridiculous: what were you trying to say again initially?
 
I'll just change subject.

NV told me the next ultra-high-end (G100 based) card would have 4GB of memory. I wonder how that would be possible with the current PCB layout.


And since this is getting fairly ridiculous: what were you trying to say again initially?
 
If true, then that might be a sign that ATI's strategy worked this round: they realized a monolithic GPU might just not be worth it if yields are truly 40%.

Major design decisions for upcoming GPUs get made many years in advance, so NVIDIA would be hard-pressed to do an about-face so quickly. More likely, they would work on single-die and multi-die solutions at the same time, and position as the highest-end card whichever approach performs best at any given point in time.
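Just to put rough numbers on that yield argument, here's a minimal sketch using the standard dies-per-wafer approximation. The die sizes, the 40% figure from the post above, and the 70% yield assumed for a smaller die are illustrative assumptions, not actual data:

[code]
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0):
    """First-order approximation of gross die candidates on a round wafer."""
    r = wafer_diameter_mm / 2.0
    return (math.pi * r * r / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

# Illustrative assumptions only: a ~576 mm^2 monolithic die vs. a ~256 mm^2 die.
for name, area, yield_rate in [("big monolithic die", 576.0, 0.40),
                               ("smaller die",        256.0, 0.70)]:
    gross = dies_per_wafer(area)
    good = gross * yield_rate
    print(f"{name}: ~{gross:.0f} candidates/wafer, ~{good:.0f} good at {yield_rate:.0%} yield")
[/code]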
 
Usually the highest-end Quadro GPUs have twice the framebuffer of the desktop mainstream GPUs. I wouldn't be surprised to see a G200-based Quadro with a 2GB framebuffer.

A 4GB framebuffer sounds more like a Quadro version of the real "G100", in the less foreseeable future.
 
I find it hard to believe that NVIDIA would tell Voxilla that the next ultra high end card will have 4GB RAM, unless they are breaking their word on not being able to talk about unannounced products. But even if true, Ailuros is probably correct in that it would be a Quadro ultra high end variant.
 
I don't even think there's going to be a G200-based Quadro with a 4GB framebuffer.
32 memory chips would be a bit insane, and there is no 2Gbit memory available...
I would think the Quadro FX 5700 will come with 2GiB of 1.0ns memory, which is a good step up from the 5600 with 1.5GiB.
 
Oh... there is something I forgot... ;)

Look at this picture:
http://www.pconline.com.cn/images/h.../1294786_13.jpg&namecode=diy&subnamecode=home

There you can clearly see 4 memory pads, but on the GTX 280 there are only 2 chips:
http://img89.imageshack.us/img89/1249/2008052276e7838a450d9ecmz7.jpg

So maybe this cooler is prepared for a 32-chip version, 2GiB/0.83ns or 4GB/1.0ns?

So, how did the RV670-based Firestream reach 2GB? ;)
16 chips, which is also what the 1GiB 88GT from Palit/Gainward uses for ~130€. But 32 would be a definitely bigger number, combined with 512-bit...
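Since the chip-count arithmetic keeps coming up, here's a small sketch of how bus width and chip density constrain the framebuffer. The 32-bit-wide chips, the 512Mbit/1Gbit densities, and clamshell mode doubling the chip count are assumptions based on what was common for GDDR3 at the time:

[code]
# How many DRAM chips a given bus/density combination implies, and the resulting
# framebuffer size. Assumes 32-bit-wide GDDR3 chips; clamshell doubles the chips.

def config(bus_width_bits, chip_density_mbit, clamshell=False):
    placements = bus_width_bits // 32             # one 32-bit-wide chip per placement
    chips = placements * (2 if clamshell else 1)  # clamshell doubles chips, not bus width
    total_mib = chips * chip_density_mbit // 8    # Mbit per chip -> MiB total
    return chips, total_mib

print(config(512, 512))                    # GTX 280 style: (16 chips, 1024 MiB)
print(config(512, 1024))                   # 2 GiB with 16 x 1Gbit: (16, 2048)
print(config(512, 512,  clamshell=True))   # 2 GiB with 32 x 512Mbit: (32, 2048)
print(config(512, 1024, clamshell=True))   # 4 GiB needs 32 x 1Gbit: (32, 4096)
print(config(256, 1024, clamshell=True))   # RV670 FireStream 2GB: (16, 2048)
print(config(256, 512,  clamshell=True))   # 1GiB 8800 GT: (16, 1024)
[/code]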
 
Major design decisions for upcoming GPUs get made many years in advance, so NVIDIA would be hard-pressed to do an about-face so quickly. More likely, they would work on single-die and multi-die solutions at the same time, and position as the highest-end card whichever approach performs best at any given point in time.

In practice, redundancy techniques have always brought NVIDIA relatively good returns across the premium grades of its wide product lines.


G92: G92-150 / G92-270 / G92-400

RV670: RV670 only

This time, NVIDIA will launch something like an 8800GT 320MB to compete with the RV770XT and depreciate the RV770Pro's potential market value, shortly after the 9900GTX becomes available to the majority of the public.


My argument is that, over the last three quarters in aggregate, ATI has not contributed any of the excellent results that AMD truly counted on.
 
Oh... there is something I forgot... ;)

Look at this picture:
http://www.pconline.com.cn/images/h.../1294786_13.jpg&namecode=diy&subnamecode=home

There you can clearly see 4 memory pads, but on the GTX 280 there are only 2 chips:
http://img89.imageshack.us/img89/1249/2008052276e7838a450d9ecmz7.jpg

So maybe this cooler is prepared for a 32-chip version, 2GiB/0.83ns or 4GB/1.0ns?
That first picture is a view of the underside of the cooler - there's no memory along that side of the GPU because that's where the PCI Express traces run.

A view of the top of the cooler should have a space for PEG 6+8-pin sockets.

Jawed
 
In practice, redundancy techniques have always brought NVIDIA relatively good returns across the premium grades of its wide product lines.


G92: G92-150 / G92-270 / G92-400

RV670: RV670 only
ATI GPUs have fine-grained redundancy which means turning off 1 ALU in 17, for example. So every GPU is built slightly bigger than needed and then the vast majority of defects are "captured" by the fine redundancy, resulting in a full-spec GPU. Clock speeds are obviously a separate matter. And apparently there were some HD3690s based on recovered RV670s. Seemingly these have half the memory bus active.

Jawed
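
To illustrate why fine-grained redundancy captures most defects, here's a toy yield model. The per-ALU defect probability, the cluster count, and the 17-ALUs-built-for-16-needed split are illustrative assumptions based on the "1 in 17" example above:

[code]
from math import comb

# Toy model: each ALU is independently defective with probability p_alu.
# With one spare ALU per cluster (17 built, 16 needed), a cluster survives
# as long as at most one of its ALUs is defective.

p_alu     = 0.01   # assumed per-ALU defect probability (illustrative)
clusters  = 4      # assumed number of SIMD clusters (illustrative)
per_clust = 17     # 16 needed + 1 spare, as described above

def cluster_ok_with_spare(p, n):
    # P(0 or 1 defective ALUs out of n)
    return (1 - p) ** n + comb(n, 1) * p * (1 - p) ** (n - 1)

yield_with_redundancy = cluster_ok_with_spare(p_alu, per_clust) ** clusters
yield_without         = (1 - p_alu) ** (16 * clusters)  # every needed ALU perfect

print(f"ALU-defect yield with 1 spare per cluster: {yield_with_redundancy:.1%}")
print(f"ALU-defect yield with no redundancy:       {yield_without:.1%}")
[/code]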
 