Nvidia GT200b rumours and speculation thread

Since when does G9x = GT200?

Since this, maybe:

"If recent ForceWare driver releases are correct, NVIDIA is poised to rename existing GeForce 8,9 series GPUs once again. VR-Zone has uncovered new GeForce GTS, GeForce GT, and GeForce G GPUs, as well as new mobile parts.

According to the report, G92-based cards will be designated as GeForce GTS 150, while G94 cards will be named GeForce GT 130 and G96 GeForce GT 120. Finally, G98 GPUs will be sold as GeForce G100.

As you can see, under the new designation NVIDIA will brand their high-end enthusiast parts as "GeForce GTX", while the mainstream performance segment will be branded as "GeForce GTS". Mainstream cards will be sold as GeForce GT, and finally entry-level cards will use the GeForce G designation.

No word on when the renamed cards will hit retail; presumably NVIDIA's board partners will move as quickly as possible to sell existing GeForce 9800, 9600, 9500, and 9400 cards first, though."
 
Since when does G9x = GT200?

They're close enough to make it basically pointless to release a "cut down" GT200 for the mainstream when G9x already exists and (I assume) is pretty cheap to produce.

Take GT200, reduce the functional units to, say, 128 shaders, reduce the bus to, say, 256-bit, reduce the memory to, say, 512MB, and strip out the DP capability, which isn't needed in the mainstream.

How different is that from a 9800GTX?

How differently would it perform?

How different would the cost to produce be?

Same argument holds for all the other price points that G92 occupies.
 
Since when does G9x = GT200?
Think about it for a second. RV740 is an RV770 derivative with the same number of texture units, fewer shader arrays, no double precision, and a few other changes.

The 9800GTX is the GTX 260 with the same number of texture processors, fewer shader arrays, no double precision, and a few other differences.

There's no need for GT200 derivatives, just like RV670 didn't have any: the rest of the HD3xxx series simply renamed the existing R600-generation derivatives. It probably would have been possible to do the same with the HD4xxx series if the architecture hadn't been so much better than in previous generations.
 
RV740 is an RV770 derivative with the same number of texture units, fewer shader arrays, no double precision, and a few other changes.

I don't get the point you're making here. RV740 is clearly a derivative of RV770 (it has the same CAL capabilities minus double precision). But I don't see how that fits in with G9x and GT200. A compute 1.2 capable card (which is the same as 1.3 but lacks double precision) would be a derivative of GT200. Of course a compute 1.2 card doesn't exist yet, which is essentially what we are discussing.
 
I don't get the point you're making here. RV740 is clearly a derivative of RV770 (it has the same CAL capabilities minus double precision). But I don't see how that fits in with G9x and GT200. A compute 1.2 capable card (which is the same as 1.3 but lacks double precision) would be a derivative of GT200. Of course a compute 1.2 card doesn't exist yet, which is essentially what we are discussing.

But does the fact that it's compute 1.1 or 1.3 in CUDA matter to the vast majority of end users who only want to play games?
 
I don't get the point you're making here.
Why don't you look at the post you responded to? pjbliverpool said the comments about no low-end GT200 are rubbish, and he's absolutely right. I'm simply strengthening his argument.

G92 is almost identical to what a GT200 derivative would be, and in all likelihood G92 addresses its segment better than such a derivative would. If G92b is almost as fast as the 8-cluster GT200, how on earth could NVidia derive a part from the monstrous GT200 to be smaller and faster?

A GT200 derivative makes no sense, and thus its absence proves nothing.
 
Think about it for a second. RV740 is an RV770 derivative with the same number of texture units, fewer shader arrays, no double precision, and a few other changes.

In the above are you talking about the RV730 aka the 4670?

I think the RV740 is "Little Dragon", ATI's first go at the 40nm process, roughly aimed at replacing the 4830 and maybe the 4850 if things go really well. Apparently it's taped out and gone back for a respin to fix minor problems.

Getting back on topic, Charlie's article stated a release date of Dec 12. Is this for the GTX295 or the 55nm GTX260s? It's more likely to be the first, isn't it? The second, unless delivered in volume very quickly, would likely freeze buyers who were just about to purchase the 65nm parts for Christmas.
 
But does the fact that it's compute 1.1 or 1.3 in CUDA matter to the vast majority of end users who only want to play games?

willardjuice said:
Yeah I was talking about architectures, not stupid naming schemes.

I don't care what average gamers think.

pjbliverpool said the comments about no low-end GT200 are rubbish, and he's absolutely right

I think it could be argued though that there is no low-end GT200. Currently Nvidia is using G9x to fill that void, and whether or not that's an effective strategy is irrelevant to me. There are architectural advancements (outside of added double precision support) that the GT200 has over G9x. Compute 1.2 cards include those enhancements minus the double precision support (just like RV730 included the enhancements that RV770 had over RV670, minus double precision support).

There is a reason why Nvidia went directly from Compute 1.1 cards to Compute 1.3 cards. There is a reason why Nvidia created the Compute 1.2 standard to be the same as Compute 1.3 minus double precision. That reason (IMO) is that Nvidia is planning a lower-end GT200. We can agree to disagree, but I don't think my opinion is too far out there.
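For anyone who wants to check where a given card sits, here's a minimal sketch (my own illustration, not anything NVIDIA ships) of a CUDA host program that queries each device's compute capability and whether it clears the 1.3 bar needed for double precision:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);

        // Double precision needs compute capability 1.3 or higher, so a
        // hypothetical compute 1.2 part would report "no" here while still
        // exposing the other GT200-era improvements.
        bool hasDP = (prop.major > 1) || (prop.major == 1 && prop.minor >= 3);

        printf("Device %d: %s, compute %d.%d, double precision: %s\n",
               dev, prop.name, prop.major, prop.minor, hasDP ? "yes" : "no");
    }
    return 0;
}
```

On current hardware a G9x part reports 1.1 and GT200 reports 1.3; a compute 1.2 device would be the missing middle entry.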
 
I don't care what average gamers think.

But they're the ones who will be buying the GPUs, so if the average gamer doesn't care, then neither should NV. Hence it's not a failure on their part but simply part of a financially sound plan.

There is a reason why Nvidia went directly from Compute 1.1 cards to Compute 1.3 cards. There is a reason why Nvidia created the Compute 1.2 standard to be the same as Compute 1.3 minus double precision. That reason (IMO) is that Nvidia is planning a lower-end GT200.

That's a sound argument, but it's also possible that that standard exists to allow for high-end revisions of GT200 which would favour a smaller, faster die over the DP capability. That's what was expected of "GT200b" for a while. Hell, I guess it's still a possibility for the new refreshes.
 
There's another reason for CUDA 1.2.
I remember DP was to be reserved for Quadro and Tesla models of the GT200; then NVIDIA changed its mind and allowed DP on the GeForce GTX as well to give CUDA adoption and development more momentum.

That doesn't rule out a GT200 derivative on the 40nm process (I agree there was no point doing it on 65/55nm, as there's G92).
The GT206? I was very confused by rumours of GT206 == GT200b. I decided to wait rather than scratch my head too hard.
 
This is what GTX 260 should've been from the start.

Nice-looking PCB with some pretty impressive specs though. Definitely making use of all that real estate!
 
I like the fact that they've managed to keep all the RAM ICs on a single side, and yet the PCB is some 1.5cm shorter.
Let's hope the MSRP follows suit and shrinks accordingly... ;)
 
GTX260 default specs: 576MHz / 1242MHz
55nm GTX260: 576MHz / 2000MHz

Same core frequency, but about a 60% increase in shader clock. What would that equate to in extra fps?
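Taking those figures at face value (and assuming the second number really is the shader clock rather than the effective memory clock), the jump works out to roughly:

```latex
\frac{2000\,\text{MHz}}{1242\,\text{MHz}} \approx 1.61
```

i.e. about 61% more shader throughput on paper. It would only translate into a comparable fps gain in a purely shader-limited scenario; anything bound by bandwidth, ROPs or the CPU would see much less.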
 
I like the fact that they've managed to keep all the RAM ICs on a single side, and yet the PCB is some 1.5cm shorter.
Let's hope the MSRP follows suit and shrinks accordingly... ;)

Don't forget 4 fewer PCB layers as well. Should allow the AIB partners to increase their margins nicely, while keeping prices the same or possibly even reducing them further.

Everyone wins!
 