Nvidia GT200b rumours and speculation thread

I think the Sideport has been held back as something in reserve to compete with this new card, though they may also bin more aggressively and release a faster 4870 X2.
 
I think the Sideport has been held back as something in reserve to compete with this new card, though they may also bin more aggressively and release a faster 4870 X2.

I was thinking Sideport was disabled because, as things stand right now, it makes no sense to have it.


Anandtech review said:
The CrossFire Sideport is simply another high bandwidth link between the GPUs. Data can be sent between them via a PCIe switch on the board, or via the Sideport. The two aren't mutually exclusive, using the Sideport doubles the amount of GPU-to-GPU bandwidth on a single Radeon HD 4870 X2. So why disable it?

According to AMD the performance impact is negligible: while average frame rates don't see a gain, every now and then you'll see a boost in minimum frame rates. There's also an issue where power consumption could go up enough that you'd run out of power on the two PCIe power connectors on the board. Board manufacturers also have to lay out the additional lanes on the graphics card connecting the two GPUs, which does increase board costs (although ever so slightly).

AMD decided that, since there's essentially no performance increase yet there is an increase in power consumption and board costs, it would make more sense to leave the feature disabled.

The reference 4870 X2 design includes hardware support for the CrossFire Sideport, assuming AMD would ever want to enable it via a software update. However, there's no hardware requirement that the GPU-to-GPU connection is included on partner designs. My concern is that in an effort to reduce costs we'll see some X2s ship without the Sideport traces laid out on the PCB, and then if AMD happens to enable the feature in its drivers later on some X2 users will be left in the dark.

I pushed AMD for a firm commitment on how it was going to handle future support for Sideport and honestly, right now, it's looking like the feature will never get enabled. AMD should have never mentioned that it ever existed, especially if there was a good chance that it wouldn't be enabled. AMD (or more specifically ATI) does have a history of making a big deal of GPU features that never get used (Truform anyone?), so it's not too unexpected but still annoying.
 
Well, it's there in the hardware, but not enabled. They had to put some spin on disabling it. I'll bet their new drivers/card will have increased performance using sideport should they want to enable it.
 
The other thing in this report is that the die surface is not even: a 0.6mm delta between the lowest and highest points, with the "depression" around the center. :rolleyes:
I don't see anything about 0.6mm in it =)
He does say that the heatspreader is convex on the inside, but that doesn't mean that the die itself isn't flat.
 
Well, it's there in the hardware, but not enabled. They had to put some spin on disabling it. I'll bet their new drivers/card will have increased performance using sideport should they want to enable it.

So AMD is intentionally limiting the performance of the 4870X2 until Nvidia tries to play catch up? I usually see that sort of wild-eyed optimism from the Nvidia camp :)
 
A recent burst of nVIDIA rumours about its upcoming products!

Some more pics

[Image: GeForceGTX295_2.jpg]


Here are some more shots of the GeForce GTX 295 card. Enjoy! Among other things we have confirmed today: 480 SPs, 289W TDP, and Quad SLI support. Clock speeds, performance and price still remain a mystery, but we will soon find out. Can an RV775 X2 card stand against this NV offering?

GTX285 exposed even more

We have looked into the 55nm GeForce GTX260 VGA card; now let's turn to the 55nm GTX280. It features a brand-new PCB design and higher default frequencies. Besides, it has been given a new brand: GeForce GTX285.

Adopting P891 as the reference design, the GeForce GTX285 keeps the same 10.5" length as the GTX280, and its cooler is a dual-slot design with dual DVI and S-Video outputs. The graphics memory remains 1GB of GDDR3, and the stream processor count stays at 240. The clock frequencies of the GeForce GTX285 remain to be confirmed, but its performance can reach up to 110% of the GTX280.

We can regard the GeForce GTX285 as an overclocked edition of the GTX280 with lower power consumption. The GeForce GTX280's maximum graphics card power is 236W, and it requires one 6-pin and one 8-pin PEG power connector. The GeForce GTX285's power consumption, however, has been reduced to 183W, with only two 6-pin PEG power connectors needed.

With the highest clocks and the lowest power consumption, the GeForce GTX285 is undoubtedly the most powerful single-GPU card, while the new GeForce GTX260 draws the most attention in the mid-range and high-end price war. The dual-GPU GeForce GTX295 is expected to regain the performance crown. All of this will happen in January 2009.

After unveiling details of NVIDIA's new product line, we will follow up with news about AMD's plan to fight back, such as the RV775XT. (Special thanks to one of our readers for providing this information.)

A mere 55nm die shrink has somehow reduced the power consumption by 53W, or in other words roughly 23%! Maybe their third spin (or second?) has paid off. But I'd say it's quite lackluster, seeing as performance has been increased by a mere 10%. If performance scales linearly with frequency, this card is going to be clocked somewhere in the vicinity of ~625/1400/2200?
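Quick back-of-the-envelope on those numbers. A minimal sketch, assuming GTX280 stock clocks of 602/1296/2214 (core/shader/effective memory) and perfectly linear scaling of performance with clocks; the clock assumptions are mine, not from the article:

```python
# Back-of-the-envelope check on the rumoured GTX285 figures.
# Assumptions: GTX280 stock clocks of 602 MHz core / 1296 MHz shader /
# 2214 MHz effective memory, and performance scaling linearly with clocks.

GTX280_TDP_W = 236   # NVIDIA's stated maximum board power for the GTX280
GTX285_TDP_W = 183   # rumoured figure quoted above

cut_w = GTX280_TDP_W - GTX285_TDP_W
print(f"TDP reduction: {cut_w} W ({cut_w / GTX280_TDP_W:.1%})")  # 53 W (22.5%)

# Implied clocks if the whole 10% performance gain comes from frequency:
scale = 1.10
core, shader, mem_eff = 602, 1296, 2214
print(f"Implied clocks: ~{core * scale:.0f}/{shader * scale:.0f}/{mem_eff * scale:.0f}")
# ~662/1426/2435 -- a bit above the ~625/1400/2200 guess, since in practice
# a 10% average gain doesn't require a full 10% on every clock domain.
```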
 
A mere 55nm die shrink has somehow reduced the power consumption by 53W, or in other words roughly 23%! Maybe their third spin (or second?) has paid off.

This is starting to look a lot like the G70->G71 transition, even though back then they had switched from a 110nm half-node to a new 90nm full node, while this time it's reversed, switching from a 65nm full node to a 55nm half node.
Meaning, they could have cut some unused "fat" off of the original transistor count, while still shrinking the same basic chip features of the GT2xx.
 
I usually see that sort of wild-eyed optimism from the Nvidia camp

No, I am not in any camp. I should have phrased it better: "I'll bet their new drivers/card will mysteriously find increased performance using Sideport in case they were intentionally holding it back."
 
nVIDIA might have focused more on optimizing the GT200 core in terms of layout (i.e. size) and transistor count (getting rid of the fat). Since when it comes to performance/mm^2, ATi wins hands down. This was probably in the works for a while, seeing as the biggest concern for the 65nm version of GT200 was it being the godzilla of single GPUs.

Maybe this wasn't a simple die shrink?
 
Since when it comes to performance/mm^2, ATi wins hands down.

Yeah, but what about Perf/mm^2/Watt?
The 65nm GT200 already consumes significantly less power at idle or under light workloads than the 55nm RV770 (and a lot of the time, that's how a graphics card ends up being used), despite the bigger core and higher transistor count.
If the 55nm version really has a much smaller top TDP than its 65nm cousin, then we can expect nVIDIA to widen that lead over ATI's flagship significantly (unless RV775XT is 40nm).
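To make that metric concrete, here's a minimal sketch. The die sizes (~576 mm^2 for GT200, ~256 mm^2 for RV770) and TDPs are the commonly published figures, but the relative performance scores are placeholders for illustration only, not benchmark results:

```python
# Rough sketch of the composite metric being argued about.
# Die sizes and TDPs are published figures; the "perf" scores are
# placeholders (normalised so GTX280 = 1.0), not real benchmark data.

cards = {
    #           perf   die mm^2   TDP W
    "GTX280":  (1.00,  576,       236),  # 65nm GT200
    "HD4870":  (0.85,  256,       160),  # 55nm RV770 (perf is a placeholder)
}

for name, (perf, area, tdp) in cards.items():
    print(f"{name}: perf/mm^2 = {perf / area * 1000:.2f}, "
          f"perf/mm^2/W = {perf / area / tdp * 1e6:.2f}")

# RV770 wins perf/mm^2 easily on die size alone; whether it also wins
# perf/mm^2/W depends on which power figure you plug in (TDP vs. idle
# draw), which is exactly the disagreement in this thread.
```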
 
Yeah, but what about Perf/mm^2/Watt?
The 65nm GT200 already consumes significantly less power at idle or under light workloads than the 55nm RV770 (and a lot of the time, that's how a graphics card ends up being used), despite the bigger core and higher transistor count.
If the 55nm version really has a much smaller top TDP than its 65nm cousin, then we can expect nVIDIA to widen that lead over ATI's flagship significantly (unless RV775XT is 40nm).

Yes, that's because nVIDIA employs a more aggressive version of power play on GT200-based cards (a stock GTX280 goes all the way down to 2D clocks of 300/601/100). But that said, a 23% reduction at full load would put it lower than the GTX260... something doesn't sound right. :???:

http://images.anandtech.com/graphs/1gb4870_092408101853/17400.png
 
But I'd say it's quite lackluster, seeing as performance has been increased by a mere 10%.

But why does the GTX 280 need to be more than 10% faster? It's already comfortably the fastest single GPU around and, thanks to the GTX295, it no longer has to worry about competing with dual-GPU solutions.

The only competition to the GTX285 will be the 4870 1GB. So 10% faster than the 280 is more than enough, especially when it comes with significantly lower power consumption compared to the 280.

To me, this looks like they went all out to reduce power consumption/heat in order to produce a dual-GPU card. The side effect was that they could produce a new variant of the 280 which draws significantly less power, and hence they could afford to launch a slightly faster version under a new name, which would promote new consumer interest while still most likely having much better yields.
 
Yeah, but what about Perf/mm^2/Watt?
The 65nm GT200 already consumes significantly less power at idle or under light workloads than the 55nm RV770 (and a lot of the time, that's how a graphics card ends up being used), despite the bigger core and higher transistor count.
If the 55nm version really has a much smaller top TDP than its 65nm cousin, then we can expect nVIDIA to widen that lead over ATI's flagship significantly (unless RV775XT is 40nm).

Performance is irrelevant in the scenarios you listed. Who cares how fast a card is at doing nothing?
 
...

To me, this looks like they went all out to reduce power consumption/heat in order to produce a dual-GPU card. The side effect was that they could produce a new variant of the 280 which draws significantly less power, and hence they could afford to launch a slightly faster version under a new name, which would promote new consumer interest while still most likely having much better yields.

Hmmm sounds like ATi.
 
To play Crysis :devilish:

However, the new RV775XT is apparently the new competition for the GTX285, so we will have to see whether the 10% extra performance is enough.

I used to think like that, but wouldn't that go against everything that ATI is doing?

I thought ATI had foregone the high end in order to cater to the mainstream/performance sector. If they decide to respin RV770 into an RV775/RV790, aren't they specifically aiming at the high end? Sure, they're using a small die, but they're still pouring R&D into competing at the top if they do so.

Don't get me wrong; I'm not complaining if we do get an RV775/RV790 (in fact, I may buy one :p ). But if RV775/RV790 ends up being equal to or better than the GTX 280/285, why bother with following the dual-GPU strategy? They're back to using single GPUs to fight single GPUs.
But of course, there is the problem of GTX 295...

I'm just wondering how these supposed chips fit into ATI's 'mainstream performance' strategy.
 
Performance is irrelevant in the scenarios you listed. Who cares how fast a card is at doing nothing?

Really? Then why bother with clock gating on a chip at all?
Everybody does it now, from Intel to AMD to Nvidia. Your card won't be speeding away at full throttle while you surf the web on Aero Glass. And you can't deny that some games are simply less taxing than others, sometimes on different parts of the same chip (texturing or shading, fill rate or memory bandwidth, etc.); sometimes it even varies from scene to scene within the same game (different maps and levels of detail).

------------------------------------------------------------------------------

PCPer did a review of the Quadro CX 1.5GB (1.5GB GDDR3, 192 scalar processors, 384-bit bus), and besides confirming the presence of the 55nm GT200b, they also showed what I believe is the first board carrying any GT2xx chip that requires only a single (!) 6-pin power connector, which implies a TDP well under 150W.
Could this, coupled with the 384-bit bus and 192 scalar processors on board, mean the possible debut of a direct consumer replacement for the 9800 GTX/GTX+, and perhaps even the 9800 GT?
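For anyone wondering where the 150W ceiling comes from, it falls straight out of the standard PCIe power budget:

```python
# Why a single 6-pin connector caps board power at 150 W:
SLOT_W = 75      # a PCIe x16 slot supplies up to 75 W
SIX_PIN_W = 75   # each 6-pin PEG connector supplies up to 75 W

print(f"Max board power with one 6-pin: {SLOT_W + SIX_PIN_W} W")  # 150 W
# For comparison, the GTX280's 6-pin + 8-pin layout allows
# 75 + 75 + 150 = 300 W of headroom for its 236 W TDP.
```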
 
So AMD is intentionally limiting the performance of the 4870X2 until Nvidia tries to play catch up? I usually see that sort of wild-eyed optimism from the Nvidia camp :)

You never know, maybe ;)

Based on a true story:
A long time ago there was a weightlifting competition for the world record: whoever lifted the most weight held it. The man who was always #1 never lifted his true maximum on purpose; whenever somebody beat his record, he would lift just slightly more, and he did this every time. People complained that always adding just ~10-15 lbs shouldn't count, but for the record books the judges said it counts :) So every year he lifted slightly more and stayed on top of the world record.
 
Yes, that's because nVIDIA employs a more aggressive version of power play on GT200-based cards (a stock GTX280 goes all the way down to 2D clocks of 300/601/100). But that said, a 23% reduction at full load would put it lower than the GTX260... something doesn't sound right. :???:

http://images.anandtech.com/graphs/1gb4870_092408101853/17400.png
A 23% decrease in consumption for the GTX280 would actually drop it to just above the GTX260:
GTX280: 178W → 137W
GTX260 is currently sitting at 136W
(numbers from XBit Labs, which measures only what the cards pull through the slot & extra power plugs)
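For what it's worth, the arithmetic checks out if you apply the 23% figure to XBit Labs' measured draw rather than the TDP:

```python
# Applying the rumoured 23% cut to XBit Labs' measured GTX280 draw.
gtx280_w = 178
gtx260_w = 136

gtx285_est = gtx280_w * (1 - 0.23)
print(f"Estimated 55nm draw: {gtx285_est:.0f} W vs GTX260 at {gtx260_w} W")
# ~137 W -- a single watt above the GTX260, as stated above.
```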
 