NVIDIA GT200 Rumours & Speculation Thread

NVIDIA's GX2s tend to be dual-PCB... However, without water cooling I don't really see how you can cool such a beast in only two slots.
 
http://www.donanimhaber.com/resimler/image.aspx/geforce_gtx200series_bfs_fx57.jpg

http://www.donanimhaber.com/resimler/image.aspx/GeForce_200_1d3.jpg

http://www.donanimhaber.com/resimler/image.aspx/GeForce_200_1d2.jpg
Hope this ain't a repost.

I wonder if they are referring to the 9800GX2 when they mention "50% over the 1st gen". :devilish:

The new cards are branded as the GeForce 200 series. They could have GeForce GTX 2x0 (high end/enthusiast), GeForce GTS 2x0 (high end/performance), GeForce GT 2x0 (mid range/mainstream), and GeForce GS 2x0 (low end/budget). Unless there is no GTS class within the GeForce 200 series, i.e. it just has high (GTX), mid (GT), and low end (GS).

Does anybody like the new naming scheme at all?

Mod Edit: IMG -> URL. Leeching isn't nice.
 
Also, DT's write-up on GT200:

http://www.dailytech.com/article.aspx?newsid=11842

NVIDIA's upcoming Summer 2008 lineup gets some additional details

Later this week NVIDIA will enact an embargo on its upcoming next-generation graphics core, codenamed D10U. The processor will make its debut as two separate graphics cards, currently named GeForce GTX 280 (D10U-30) and GeForce GTX 260 (D10U-20).

The GTX 280 sports all of the features of the D10U processor, while the GTX 260 will consist of a significantly cut-down version of the same GPU. The GTX 280 version of the processor will enable all 240 unified stream processors. NVIDIA documentation, verified by DailyTech, claims these second-generation unified shaders perform 50 percent better than the shaders found on the D9 cards released earlier this year.

The main difference between the two new GeForce GTX variants revolves around the number of shaders and memory bus width. NVIDIA disables 48 stream processors on the GTX 260 version. GTX 280 ships with a 512-bit memory bus capable of supporting 1GB GDDR3 memory; the GTX 260 alternative has a 448-bit bus with support for 896MB.

GTX 280 and 260 offer virtually all of the same features as the GeForce 9800 GTX: PCIe 2.0, OpenGL 2.1, SLI and PureVideo HD. The company also claims both cards will support two SLI risers for 3-way SLI support.

Unlike the upcoming AMD Radeon 4000 series, currently scheduled to launch in early June, the D10U chipset does not support DirectX extensions above 10.0. Next-generation Radeon will also ship with GDDR4 while the June GeForce refresh is confined to just GDDR3.

The GTX series is NVIDIA's first attempt at incorporating the PhysX stream engine into the D10U shader engine. The press decks currently do not shed a lot of information on this support, and the company will likely not elaborate on this before the June 18 launch date.

After NVIDIA purchased PhysX developer AGEIA in February 2008, the company announced all CUDA-enabled processors would support PhysX. NVIDIA has not delivered on this promise yet, though D10U will support CUDA, and therefore PhysX, right out of the gate.

NVIDIA's documentation does not list an estimated street price for the new cards.
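
As a side note, the odd 896MB figure falls straight out of the bus width if you assume the bus is built from 64-bit channels, each carrying the same amount of memory (the channel layout is my assumption, not something DailyTech states):

# Sanity check on the GTX 280/260 memory sizes. Assumption (mine, not
# DailyTech's): the bus is built from 64-bit channels, each backed by
# the same amount of GDDR3 - 1024 MB over 512 bits = 128 MB per channel.
CHANNEL_BITS = 64
MB_PER_CHANNEL = 1024 // (512 // CHANNEL_BITS)  # = 128

for name, bus_bits in (("GTX 280", 512), ("GTX 260", 448)):
    channels = bus_bits // CHANNEL_BITS
    print(f"{name}: {channels} channels x {MB_PER_CHANNEL} MB = {channels * MB_PER_CHANNEL} MB")
# GTX 280: 8 channels x 128 MB = 1024 MB
# GTX 260: 7 channels x 128 MB = 896 MB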

No DX10.1 support.
Unlike the other sites, they specifically mention that the GT200 scalar ALUs are 50% faster than the ones in G80/G9x. Any ideas on the kind of improvements made to the shader core to produce this kind of performance improvement?
 
Cooling it is easy: a full-length, full-height heatsink would be much larger than present GX2 heatsinks. The only problem is cooling with back exhaust... I personally don't see a problem with that for niche products, though. If requiring ridiculous power supplies isn't a problem, why is requiring cases with ridiculous airflow any more of a problem?
 
Unlike the other sites, they specifically mention that the GT200 scalar ALUs are 50% faster than the ones in G80/G9x. Any ideas on the kind of improvements made to the shader core to produce this kind of performance improvement?
Just a guess, but perhaps they're comparing it to G92, which evidently has a bottleneck elsewhere - it has 100% more shaders than G94, yet performs ~30% better at best.
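
Putting a rough number on that: an Amdahl-style estimate (entirely back-of-the-envelope, assuming the non-shader work doesn't scale at all) says only about half the frame time would be shader-limited:

# Amdahl-style estimate of how shader-bound G92 workloads are, given
# the numbers above: ~2x the shaders of G94, ~1.3x the performance.
# A sketch, not a measurement.
observed_speedup = 1.3
shader_scaling = 2.0

# speedup = 1 / ((1 - f) + f / shader_scaling); solve for f
f = (1 - 1 / observed_speedup) / (1 - 1 / shader_scaling)
print(f"shader-limited fraction of frame time: {f:.0%}")  # ~46%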
 
Well, I guess they are guessing the die size is getting too big for a single GPU, but come on, they can always go down in microns.

Well, take the 'if' out of that statement if you don't like it, serenity ;) :D

Sounds like a made up rumor for website hits to me.
 
Unlike the other sites, they specifically mention that the GT200 scalar ALUs are 50% faster than the ones in G80/G9x. Any ideas on the kind of improvements made to the shader core to produce this kind of performance improvement?
Clock rates? :p
 
I was thinking clock rates, but GT200 being a monstrous die probably limits how high they can clock its core/shader frequency, so could clocks alone produce a result 50% faster than G92 on shader performance (comparing one GT200 SP to one G92 SP)? Just like how G80 was clocked low compared to its cooler-running refreshes.
 
I was thinking clock rates, but GT200 being a monstrous die probably limits how high they can clock its core/shader frequency, so could clocks alone produce a result 50% faster than G92 on shader performance (comparing one GT200 SP to one G92 SP)? Just like how G80 was clocked low compared to its cooler-running refreshes.

If they are speaking about the "cluster shading power" going up 50% and not "SP shader power", it's obvious what they are referring to :p
 
Every chip has an array of contact pads, similar to pins on a CPU, including the pads that connect the GPU to the memory chips. The wider the bus, the more pads you need. Because of that, there's a lower limit on die area for each bus width, e.g. for 256-bit it's around 190 mm². GT200 is large enough to accommodate a 512-bit bus; its die-shrink might not be. Therefore, to keep memory bandwidth in the same range, they would need faster memory = GDDR5.
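
To sketch what that pad limit implies (taking the ~190 mm² figure for 256-bit at face value, and assuming pad count scales with the die perimeter, i.e. with the square root of area - a big simplification of real packaging):

import math

# Pad-limit sketch: if memory I/O pads sit on the die perimeter, the
# minimum die area scales with the square of the bus width. The 190 mm^2
# anchor for 256-bit is from the post above; the rest is extrapolation.
AREA_256 = 190.0  # mm^2

for bus_bits in (256, 384, 448, 512):
    area = AREA_256 * (bus_bits / 256) ** 2
    print(f"{bus_bits}-bit: >= {area:.0f} mm^2 ({math.sqrt(area):.1f} mm per side)")
# 256-bit: >= 190 mm^2 ... 384-bit: >= 428 mm^2 ... 512-bit: >= 760 mm^2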

Oh yes it does. G71, G73 and R580 were all limited by the GDDR3 of their time. G92 is limited by GDDR3 today. I imagine nVidia calculated the optimal memory throughput for GT200 and will easily reach that number even using cheaper GDDR3. But imagine they shrink the chip, lowering the TDP and making a GX2 card possible. A 512-bit bus would no longer fit, so they'd have to narrow it down to 256 bits. Now if they used GDDR3, the chips would only get about half their calculated optimal bandwidth, and that sure would limit them.
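
To illustrate with round numbers (the data rates are my assumptions, roughly typical for the era): peak bandwidth is just bus width in bytes times the effective data rate, so halving the bus on the same GDDR3 halves the figure, while GDDR5 on 256 bits would claw most of it back:

def bandwidth_gbps(bus_bits: int, data_rate_gtps: float) -> float:
    """Peak memory bandwidth in GB/s: bytes per transfer x transfers/s."""
    return bus_bits / 8 * data_rate_gtps

# Assumed effective data rates (mine): GDDR3 ~2.2 GT/s, GDDR5 ~4.0 GT/s.
print(bandwidth_gbps(512, 2.2))  # 140.8 GB/s - 512-bit GDDR3
print(bandwidth_gbps(256, 2.2))  #  70.4 GB/s - shrunk to 256-bit GDDR3
print(bandwidth_gbps(256, 4.0))  # 128.0 GB/s - 256-bit GDDR5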

Of course, but why would nVidia purposely screw up a product?


Unless of course they go for 320/384-bit buses again, à la G80, die size permitting.
 
Clock rates? :p

Hmmm, I don't know. Usually when you see such claims they are in reference to per-clock performance. So maybe the missing MUL finally shows up. If it does, it'll be far easier for them to claim a teraflop of performance based on 3 flops per SP at reasonable shader clocks of ~1400MHz.
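
The arithmetic checks out if the MUL shows up (the 240 SPs are from the DailyTech piece; the 1400MHz shader clock is just my guess above):

# TFLOPS estimate for 240 SPs: MADD alone is 2 flops/SP/clock,
# MADD plus the "missing" MUL is 3. Shader clock is a guess.
sps, shader_mhz = 240, 1400

print(f"MADD only:  {sps * 2 * shader_mhz / 1e6:.3f} TFLOPS")  # 0.672
print(f"MADD + MUL: {sps * 3 * shader_mhz / 1e6:.3f} TFLOPS")  # 1.008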
 
Yeah, it was only a guess, and there should be a reason to create such a card, and I don't see it in the immediate future.

Yup, not immediate for certain, maybe when they can go to 55nm just to lower the thermal envelope. I think even going to 384-bit can't lower the TDP under 200W per PCB, and 400W is way too much for any stock air cooler.

Your picture seems somewhat feasible. The future will tell :>
 
DailyTech said:
GTX 280 ships with a 512-bit memory bus capable of supporting 1GB GDDR3 memory; the GTX 260 alternative has a 448-bit bus with support for 896MB.
Life for devs used to be simple: they only had to think about graphics memory in powers of two. Nowadays, detecting the user's card and suggesting appropriate graphics options in a game is becoming a full-time job.
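
In practice that job ends up looking something like the sketch below: bucket by reported VRAM with thresholds rather than exact power-of-two matches, so odd sizes like 896MB still land somewhere sensible (the preset names and cutoffs are made up for illustration):

# Hypothetical auto-detect for texture quality. Threshold buckets
# instead of exact power-of-two matches handle odd VRAM sizes like
# 896 MB or 320 MB. Names and cutoffs are invented for this example.
PRESETS = [(1024, "ultra"), (768, "high"), (512, "medium"), (256, "low")]

def suggest_texture_quality(vram_mb: int) -> str:
    for cutoff, preset in PRESETS:
        if vram_mb >= cutoff:
            return preset
    return "minimum"

for vram in (1024, 896, 640, 320):
    print(f"{vram:>4} MB -> {suggest_texture_quality(vram)}")
# 1024 MB -> ultra, 896 MB -> high, 640 MB -> medium, 320 MB -> low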

The D10U-30 will enable all 240 unified stream processors designed into the processor. [...] NVIDIA disables 48 stream processors on the GTX 260.
10x24 and 8x24 then? They say the GTX 280 uses all SPs, but 16x16 with a cluster disabled (and in that case 12x16 for the GTX 260) could be a yield saver if the chip is as big as it is rumored to be.
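
For what it's worth, here are the layouts that hit those totals (the 240 and 240-48=192 SP counts are from DailyTech; the candidate cluster widths are pure speculation):

# Which clusters x SPs-per-cluster layouts give 240 enabled SPs on the
# GTX 280, and what the GTX 260 (48 SPs disabled) looks like in each.
# Totals are from DailyTech; the cluster widths are guesses.
TOTAL, DISABLED = 240, 48

for per_cluster in (16, 24):
    print(f"{TOTAL // per_cluster}x{per_cluster} enabled on GTX 280 -> "
          f"{(TOTAL - DISABLED) // per_cluster}x{per_cluster} on GTX 260")
# 15x16 -> 12x16 (a 16x16 die with one cluster off), 10x24 -> 8x24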

Next-generation Radeon will also ship with GDDR4 while the June GeForce refresh is confined to just GDDR3.
Wouldn't be DailyTech news without at least a typo.
 