I know, but even the current GX2 has quite a complex PCB...
NVIDIA's upcoming Summer 2008 lineup gets some additional details
Later this week NVIDIA will enact an embargo on its upcoming next-generation graphics core, codenamed D10U. The processor will make its debut as two separate graphics cards, currently named GeForce GTX 280 (D10U-30) and GeForce GTX 260 (D10U-20).
The GTX 280 sports all of the features of the D10U processor, while the GTX 260 will consist of a significantly cut-down version of the same GPU. The GTX 280 version of the processor will enable all 240 unified stream processors. NVIDIA documentation, verified by DailyTech, claims these second-generation unified shaders perform 50 percent better than the shaders found on the D9 cards released earlier this year.
The main difference between the two new GeForce GTX variants revolves around the number of shaders and memory bus width. NVIDIA disables 48 stream processors on the GTX 260 version. GTX 280 ships with a 512-bit memory bus capable of supporting 1GB GDDR3 memory; the GTX 260 alternative has a 448-bit bus with support for 896MB.
GTX 280 and 260 offer virtually all of the same features as the GeForce 9800 GTX: PCIe 2.0, OpenGL 2.1, SLI and PureVideo HD. The company also claims both cards will support two SLI risers for 3-way SLI.
Unlike the upcoming AMD Radeon 4000 series, currently scheduled to launch in early June, the D10U chipset does not support DirectX extensions above 10.0. Next-generation Radeon will also ship with GDDR4 while the June GeForce refresh is confined to just GDDR3.
The GTX series is NVIDIA's first attempt at incorporating the PhysX stream engine into the D10U shader engine. The press decks currently do not shed much light on this support, and the company will likely not elaborate before the June 18 launch date.
After NVIDIA purchased PhysX developer AGEIA in February 2008, the company announced all CUDA-enabled processors would support PhysX. NVIDIA has not delivered on this promise yet, though D10U will support CUDA, and therefore PhysX, right out of the gate.
NVIDIA's documentation does not list an estimated street price for the new cards.
"Unlike the other sites, they mention specifically that the GT200 scalar ALUs are 50% faster than the ones in G80/9x. Any ideas on the kind of improvements made to the shader core to produce this kind of performance improvement?"

Just a guess, but perhaps they're comparing it to G92, which evidently has a bottleneck elsewhere - it has 100% more shaders than G94, yet performs ~30% better at best.
Well, I guess they're guessing the die size is getting too big for a single GPU, but come on, they can always go down in microns.
Well, take the "if" out of that statement if you don't like it, serenity.
"Unlike the other sites, they mention specifically that the GT200 scalar ALUs are 50% faster than the ones in G80/9x. Any ideas on the kind of improvements made to the shader core to produce this kind of performance improvement?"

Clock rates?
I was thinking clock rates too, but wouldn't GT200 being a monstrous die probably limit the ability to clock its core/shader frequency high enough to produce a result 50% faster than G92 on shader performance alone (comparing one GT200 SP to one G92 SP)? Just like how G80 was clocked low compared to its cooler-running refreshes.
Maybe a 512-bit bus could not fit, but a 384-bit bus could.
And a 2x384-bit bus shouldn't be so bad, I guess.
They'd have to reduce the number of ROPs too, going back to 384 bits. I don't think that would be an issue, though...
Every chip has an array of contact pads, similar to pins on a CPU. This includes the pins that connect the GPU to the memory chips. The wider the bus, the more pins it needs. Because of that, there's a lower limit on die area for each bus width, e.g. for 256-bit it's around 190 mm2. GT200 is large enough to accommodate a 512-bit bus; its die shrink might not be. Therefore, to keep memory bandwidth at the same level, they would need faster memory, i.e. GDDR5.
Oh yes it does. G71, G73 and R580 were all limited by the GDDR3 of their time. G92 is limited by GDDR3 today. I imagine nVidia calculated the optimal memory throughput for GT200 and will easily reach that number even using cheaper GDDR3. But imagine they shrink the chip, lowering the TDP and making a GX2 card possible. A 512-bit bus would no longer fit, so they'd have to narrow it down to 256 bits. Now if they used GDDR3, the chips would only get about half their calculated optimal bandwidth, and that sure would limit them.
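A quick back-of-the-envelope sketch of that bandwidth argument (the per-pin data rates below are illustrative assumptions, not announced specs): peak bandwidth is just bus width times the memory's effective data rate, so halving the bus from 512 to 256 bits needs roughly double the data rate to break even, which is GDDR5 territory.

```python
# Peak theoretical bandwidth in GB/s = (bus width in bits / 8) * effective data rate in Gbps per pin.
# The data rates below are assumed, round figures for GDDR3 and GDDR5 of that era, not confirmed specs.

def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gbs(512, 2.2))  # ~140.8 GB/s: 512-bit bus with ~2.2 Gbps GDDR3
print(bandwidth_gbs(256, 2.2))  # ~70.4 GB/s: same GDDR3 on a 256-bit die shrink -- roughly half
print(bandwidth_gbs(256, 4.4))  # ~140.8 GB/s: GDDR5-class rates would restore the original figure
```

The same arithmetic also explains why the 448-bit GTX 260 lands proportionally lower than the GTX 280 on the same GDDR3.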
Of course, but why would nVidia purposely screw up a product?
Clock rates?
Yeah, it was only a guess, and there should be a reason to create such a card; I don't see it in the immediate future.
DailyTech said:
GTX 280 ships with a 512-bit memory bus capable of supporting 1GB GDDR3 memory; the GTX 260 alternative has a 448-bit bus with support for 896MB.

Life for devs used to be simple: they only had to think about graphics memory in powers of two. Nowadays, detecting the user's card and suggesting appropriate graphics options in a game is becoming a full-time job.
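For what it's worth, the odd-looking 896MB falls straight out of the bus width once you assume one memory chip per 32-bit channel; a minimal sketch, where the 64MB (512Mbit) chip density is my assumption based on common GDDR3 parts of the time, not anything from NVIDIA's documents:

```python
# Frame buffer size = number of 32-bit memory channels * capacity per chip.
# The 64MB (512Mbit) GDDR3 chip size is an assumption, not a confirmed spec.

CHANNEL_WIDTH_BITS = 32
ASSUMED_CHIP_MB = 64

def framebuffer_mb(bus_width_bits: int) -> int:
    channels = bus_width_bits // CHANNEL_WIDTH_BITS
    return channels * ASSUMED_CHIP_MB

print(framebuffer_mb(512))  # 1024 MB -> the GTX 280's 1GB
print(framebuffer_mb(448))  # 896 MB  -> the GTX 260's non-power-of-two size
```

So any game that buckets video memory into power-of-two tiers has to either round 896MB down or treat it as a special case, which is exactly the headache being described.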
DailyTech said:
The D10U-30 will enable all 240 unified stream processors designed into the processor. [...] NVIDIA disables 48 stream processors on the GTX 260.

10x24 and 8x24 then? They say the GTX 280 uses all SPs, but 16x16 with a cluster disabled (and in that case 12x16 for the GTX 260) could be a yield saver if the chip is as big as it is rumored to be.
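A quick sanity check on those numbers; the cluster widths are the thread's speculation (16 SPs per cluster as on G80/G92, or 24), not a confirmed GT200 layout:

```python
# Check which cluster widths divide the reported SP counts evenly.
# 16-SP and 24-SP clusters are speculation from the thread, not a confirmed GT200 organization.

GTX280_SPS = 240        # "all 240 unified stream processors"
GTX260_SPS = 240 - 48   # 48 disabled -> 192

for sp_per_cluster in (24, 16):
    c280, r280 = divmod(GTX280_SPS, sp_per_cluster)
    c260, r260 = divmod(GTX260_SPS, sp_per_cluster)
    print(f"{sp_per_cluster}-SP clusters: GTX 280 = {c280} (rem {r280}), GTX 260 = {c260} (rem {r260})")

# 24-SP clusters -> 10 and 8 enabled; 16-SP clusters -> 15 and 12 enabled.
# A 16x16 die (256 SPs) with one cluster fused off would also yield 240 -- the "yield saver" case above.
```

Both layouts fit the published counts, so the article alone can't distinguish them.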
DailyTech said:
Next-generation Radeon will also ship with GDDR4 while the June GeForce refresh is confined to just GDDR3.

Wouldn't be a DailyTech article without at least a typo.
The new demo will feature a new character: Gordon Medusa.
Somebody who is currently testing the GT200 has said that the new demo for GT200 will be amazing when shown to the public.
http://we.pcinlife.com/thread-935774-2-1.html
"Somebody who is currently testing the GT200 has said that the new demo for GT200 will be amazing when shown to the public."

It is probably related to game physics (Apex) and comes in the form of new levels for UT3 with lots of explosions and destructible environments.