Nvidia GT200b rumours and speculation thread

Really? Then why bother with clock gating on a chip at all?
Everybody does it now, from Intel to AMD to Nvidia. Your card won't be speeding away at full throttle while you surf the web on Aero Glass. And you can't deny that some games are simply less taxing than others, sometimes on different parts of the same chip (texturing or shading, fill rate or memory bandwidth, etc.); sometimes it even varies from scene to scene within the same game (different maps and levels of detail).
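To put numbers on that, here's a minimal sketch of the usual dynamic-power relation, P ~ alpha * C * V^2 * f, where clock gating drives the activity factor alpha toward zero on idle blocks. The capacitance/voltage/clock figures below are made-up illustrative values, not real GT200 numbers:

Code:
def dynamic_power(activity, capacitance_nf, voltage_v, freq_mhz):
    # P = alpha * C * V^2 * f, with C in farads and f in hertz
    return activity * (capacitance_nf * 1e-9) * voltage_v ** 2 * (freq_mhz * 1e6)

busy  = dynamic_power(0.50, 200, 1.15, 600)   # block switching under load
gated = dynamic_power(0.02, 200, 1.15, 600)   # same block, mostly clock-gated
print("busy: %.1f W, gated: %.1f W" % (busy, gated))  # busy: 79.4 W, gated: 3.2 W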

------------------------------------------------------------------------------

PCPer did a review of the Quadro CX (1.5GB GDDR3, 192 scalar processors, 384-bit bus), and besides confirming the presence of the 55nm GT200b, they also showed what I believe is the first board carrying any GT2xx that only requires a single (!) 6-pin power connector, which implies a TDP well under 150W.
Could this, coupled with the 384-bit bus and 192 scalar processors onboard, mean the possible debut of a direct consumer replacement for the 9800 GTX/GTX+, and perhaps even the 9800 GT?
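For reference, the "well under 150W" inference comes straight from the PCI Express power budget: 75W from the x16 slot plus 75W per 6-pin connector (150W for an 8-pin). A trivial sketch of that arithmetic:

Code:
SLOT_W = 75                               # PCIe x16 slot budget
CONNECTOR_W = {"6pin": 75, "8pin": 150}   # per-connector budgets

def max_board_power(connectors):
    # Upper bound on board power for a given connector loadout
    return SLOT_W + sum(CONNECTOR_W[c] for c in connectors)

print(max_board_power(["6pin"]))          # Quadro CX style: 150 W ceiling
print(max_board_power(["6pin", "8pin"]))  # GTX 280 style: 300 W ceiling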

You misunderstand.

You don't need performance when there's no workload.

Get it now?
 
Is that the right way to look at it? You would want as little power consumption as possible when idling. I thought that's the quality he was alluding to.
 
Is that the right way to look at it? You would want as little power consumption as possible when idling. I thought that's the quality he was alluding to.
I think ShaidarHaran's point was referring to this:

Perf/mm^2/Watt

If you're doing nothing in 2D mode, why bother including the performance variable?
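A toy illustration of the point, with placeholder die-size and wattage numbers (not real specs): at idle there's no measured performance, so the metric reads zero no matter what the card draws:

Code:
def perf_per_mm2_per_watt(perf, die_mm2, watts):
    return perf / (die_mm2 * watts)

# Under load the metric at least means something...
print(perf_per_mm2_per_watt(60.0, 470, 180))

# ...but at idle there is no work being done, so "perf" is ~0 and the
# metric reads zero whether the card idles at 25 W or at 60 W:
print(perf_per_mm2_per_watt(0.0, 470, 25))
print(perf_per_mm2_per_watt(0.0, 470, 60))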
 
Yes! Thank you.

Too bad that in a modern OS, there's no "2D mode" to speak of.
The desktop is lit, textured and rendered in 3D, although sometimes it doesn't "look" 3D. So is the content in any open windows, like web browsers, HD and SD video, etc.

Sure, you could point to the legacy "2D" mode of certain OSes, but that is going away fast.
 
Too bad that in a modern OS, there's no "2D mode" to speak of.
The desktop is lit, textured and rendered in 3D, although sometimes it doesn't "look" 3D. So is the content in any open windows, like web browsers, HD and SD video, etc.

Sure, you could point to the legacy "2D" mode of certain OSes, but that is going away fast.

Load up a GPU utilization app and tell us all how much load is placed on the GPU during the scenario(s) you describe.

You're really grasping at straws here...
 
Too bad that in a modern OS, there's no "2D mode" to speak of.
The desktop is lit, textured and rendered in 3D, although sometimes it doesn't "look" 3D. So is the content in any open windows, like web browsers, HD and SD video, etc.

Sure, you could point to the legacy "2D" mode of certain OSes, but that is going away fast.

For example, Vista's desktop still uses the "2D mode" clocks, etc.
 
I think ShaidarHaran's point was referring to this:

Perf/mm^2/Watt

If you're doing nothing in 2D mode, why bother including the performance variable?

Yeah, I got that, but I'm saying the way to look at it is: do you have the performance when you need it, and the low power consumption when you don't? How else will you evaluate the all-round power efficiency of the card? Looking only at load consumption doesn't give the whole picture.
 
How else will you evaluate the all-round power efficiency of the card? Looking only at load consumption doesn't give the whole picture.
Of course not, but INKster's "Perf/mm^2/Watt" metric is even more useless. Also, lower TDP doesn't mean lower idle consumption.
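As a back-of-the-envelope sketch of why idle matters, here's average power over a daily usage profile; the wattages and hours are invented for illustration:

Code:
def average_power(p_idle_w, p_load_w, hours_idle, hours_load):
    total = hours_idle + hours_load
    return (p_idle_w * hours_idle + p_load_w * hours_load) / total

# Card A: lower load draw but worse idle; Card B: the opposite.
# With a typical idle-heavy day, Card B wins despite drawing more under load.
print(average_power(60, 160, 6, 2))   # Card A: 85.0 W average
print(average_power(30, 180, 6, 2))   # Card B: 67.5 W average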
 
Digitimes claims that Nvidia will not position the GTX 295 Dual-GPU and the GTX 285 single-GPU cards as the top-of-the-line.
Instead, it will apparently release a brand new "GT300" core as soon as next quarter, also built on 55nm (probably signaling a delay of the company's 40nm products).
 
Digitimes claims that Nvidia will not position the GTX 295 Dual-GPU and the GTX 285 single-GPU cards as the top-of-the-line.
Instead, it will apparently release a brand new "GT300" core as soon as next quarter, also built on 55nm (probably signaling a delay of the company's 40nm products).

I hope that's true! It would be very cool. Also notice the line:

"The core frequencies of the two cards will be 10-15% faster than Nvidia's previous generation cards,"

I wonder if that's just speculation or if they know something about the GTX 295...? If it's running that much faster than the 280, or even the 260, then it should have a healthy lead over the 4870X2.
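For what it's worth, here's the speculative arithmetic on that rumoured 10-15% figure, using the known 65nm GT200 core clocks (602 MHz on the GTX 280, 576 MHz on the GTX 260) as baselines:

Code:
baselines_mhz = {"GTX 280": 602, "GTX 260": 576}

for card, base in baselines_mhz.items():
    # Apply the rumoured 10-15% core clock uplift to each baseline
    print("%s: %d MHz -> %.0f-%.0f MHz" % (card, base, base * 1.10, base * 1.15))
# GTX 280: 602 MHz -> 662-692 MHz
# GTX 260: 576 MHz -> 634-662 MHz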
 
I hope that's true! It would be very cool. Also notice the line:

"The core frequencies of the two cards will be 10-15% faster than Nvidia's previous generation cards,"

I wonder if that's just speculation or if they know something about the GTX 295...? If it's running that much faster than the 280, or even the 260, then it should have a healthy lead over the 4870X2.

Post #453... ;)
 
Post #453... ;)

Indeed, but that's talking about the 285. I'm more interested in the 295, as it's been assumed up till now that it would need to be clocked slower than the 280 and maybe even the 260.

If this is telling us that it's running at 285 speeds, or even just 10% faster than 260 speeds, then that's pretty good!
 
Indeed, but that's talking about the 285. I'm more interested in the 295, as it's been assumed up till now that it would need to be clocked slower than the 280 and maybe even the 260.

If this is telling us that it's running at 285 speeds, or even just 10% faster than 260 speeds, then that's pretty good!

Do note that the GTX 295 has a 448-bit bus (x2), as opposed to the full 512-bit bus of the GTX 280 and GTX 285 (TMU/ROP count should follow accordingly).
However, it keeps the same 240 scalar processors per core (480 total) as their top single-GPU cards, not 216 or 192 like the two current GTX 260s.
At least that's what expreview.com stated a few days ago.
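For a sense of what the narrower bus costs in bandwidth, a quick sketch; the 1107 MHz GTX 280 memory clock is the shipping figure, while the 999 MHz GTX 295 clock is only the rumoured number, so treat it as an assumption:

Code:
def bandwidth_gb_s(bus_bits, mem_clock_mhz):
    # GDDR3 is double data rate: effective rate = 2x the memory clock
    return (bus_bits / 8) * (mem_clock_mhz * 2) * 1e6 / 1e9

print(bandwidth_gb_s(512, 1107))  # GTX 280: ~141.7 GB/s
print(bandwidth_gb_s(448,  999))  # GTX 295, per GPU (rumoured clock): ~111.9 GB/s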
 
TMUs follow the shaders... so it will have a full complement of them to go with its 240 shaders, if that is indeed the configuration.

The GT300 story makes no sense. The 55nm GT200 is still a big chip at 470mm^2. Their next monster has to be on 40nm out of necessity. And where are all the 40nm GT2xx derivatives to replace the G9x series cards?
 