I think the Sideport has been held back as an answer to this new card. They may also bin more aggressively and release a faster 4870 X2.
Anandtech review said:
The CrossFire Sideport is simply another high-bandwidth link between the GPUs. Data can be sent between them via a PCIe switch on the board, or via the Sideport. The two aren't mutually exclusive; using the Sideport doubles the amount of GPU-to-GPU bandwidth on a single Radeon HD 4870 X2. So why disable it?
According to AMD the performance impact is negligible: while average frame rates don't see a gain, every now and then you'll see a boost in minimum frame rates. There's also an issue where power consumption could go up enough that you'd run out of power on the two PCIe power connectors on the board. Board manufacturers also have to lay out the additional lanes connecting the two GPUs on the graphics card, which does increase board costs (although ever so slightly).
AMD decided that, since there's effectively no performance increase yet an increase in both power consumption and board costs, it would make more sense to leave the feature disabled.
The reference 4870 X2 design includes hardware support for the CrossFire Sideport, in case AMD ever wants to enable it via a software update. However, there's no hardware requirement that the GPU-to-GPU connection be included on partner designs. My concern is that in an effort to reduce costs we'll see some X2s ship without the Sideport traces laid out on the PCB, and then if AMD happens to enable the feature in its drivers later on, some X2 users will be left in the dark.
I pushed AMD for a firm commitment on how it was going to handle future support for Sideport and honestly, right now, it's looking like the feature will never get enabled. AMD should have never mentioned that it ever existed, especially if there was a good chance that it wouldn't be enabled. AMD (or more specifically ATI) does have a history of making a big deal of GPU features that never get used (Truform anyone?), so it's not too unexpected but still annoying.
I don't see anything with 0.6mm in it =)
The other thing in this report is that the die surface is not even: a 0.6mm delta between the lowest and highest points, with the "depression" around the center.
Well, it's there in the hardware, but not enabled. They had to put some spin on disabling it. I'll bet their new drivers/cards would show increased performance using the Sideport, should they ever want to enable it.
Here are some more shots of the GeForce GTX 295 card. Enjoy! Among the things we have confirmed today are 480 SPs, a 289W TDP, and Quad SLI support. Clock speeds, performance, and price still remain a mystery, but we will soon find out. Can the RV775 X2 card stand against this NV offering?
We have looked at the 55nm GeForce GTX 260 card; now let's turn to the 55nm GTX 280. It features a brand-new PCB design and higher default clocks. It has also been given a new name: GeForce GTX 285.
Based on the P891 reference design, the GeForce GTX 285 keeps the same 10.5" length as the GTX 280, with a dual-slot cooler and dual DVI plus S-Video outputs. It carries over 1GB of GDDR3 memory and 240 stream processors. The clock speeds of the GeForce GTX 285 remain to be confirmed, but its performance reaches up to 110% of the GTX 280.
We can regard the GeForce GTX 285 as an overclocked edition of the GTX 280 with lower power consumption. The GTX 280's maximum graphics card power is 236W, and it requires one 6-pin and one 8-pin PEG power connector. The GTX 285's power consumption, however, has been reduced to 183W, needing only two 6-pin PEG power connectors.
With the highest clocks and the lowest power consumption, the GeForce GTX 285 is undoubtedly the most powerful single-GPU card, while the new GeForce GTX 260 draws the most attention in the mid- and high-end price war. The dual-GPU GeForce GTX 295 is expected to regain the performance crown. All of this will happen in January 2009.
After unveiling details of NVIDIA's new product line, we will follow up with news about AMD's plan to fight back, such as the RV775XT. (Special thanks to one of our readers for providing the information.)
A mere 55nm die shrink has somehow reduced the power consumption by 53W, or in other words 23%! Maybe their third spin (or second?) has paid off.
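For what it's worth, the arithmetic checks out against the figures in the news post, and the PEG connector budgets explain why two 6-pin plugs are now enough. A quick Python sketch (the two TDPs are from the post above; 75W/75W/150W are the standard PCIe slot, 6-pin, and 8-pin limits):

# Sanity-check the quoted TDP drop and the connector budgets.
gtx280_tdp = 236  # W, from the news post
gtx285_tdp = 183  # W, from the news post

saving = gtx280_tdp - gtx285_tdp
print(saving, f"{saving / gtx280_tdp:.1%}")  # 53 W, 22.5% (the "23%" above, rounded up)

# Power each board can legally draw (PCIe slot + PEG connectors):
gtx280_budget = 75 + 75 + 150  # slot + 6-pin + 8-pin  = 300 W
gtx285_budget = 75 + 75 + 75   # slot + two 6-pin      = 225 W
print(gtx280_budget, gtx285_budget)  # both comfortably above the respective TDPs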
I usually see that sort of wild-eyed optimism from the Nvidia camp, since when it comes to performance/mm^2, ATi wins hands down.
Yeah, but what about Perf/mm^2/Watt?
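Just to make the metric concrete, here's a rough Python sketch. The die areas are the commonly cited figures for the 65nm GT200 and the 55nm RV770; the performance index and the HD 4870's board power are illustrative placeholders, not measurements:

# Illustrative perf/mm^2 and perf/mm^2/W comparison; a sketch, not a verdict.
cards = {
    #            perf  die(mm^2)  power(W)
    "GTX 280": (1.00,  576,       236),  # 236 W is from the news post above
    "HD 4870": (0.90,  256,       160),  # perf index and 160 W are rough guesses
}

for name, (perf, area, power) in cards.items():
    per_area = perf / area
    print(f"{name}: perf/mm^2 = {per_area:.5f}, perf/mm^2/W = {per_area / power:.2e}")

With placeholders anywhere in that neighborhood, RV770 leads both ratios; the reply below shifts the ground to idle power instead.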
The 65nm GT200 already consumes significantly less power at idle or small workloads than the 55nm RV770 (and a lot of the time, that's how a graphics card is used), despite the bigger core and higher transistor count.
If the 55nm version really has that much lower a top TDP than its 65nm cousin, then we can expect them to significantly widen the lead over ATI's flagship (unless the RV775XT is 40nm).
But I'd say it's quite lackluster, seeing as its performance has been increased by a mere 10%.
But why does the GTX 280 need to be more than 10% faster?
...
To me, this looks like they went all out to reduce power consumption/heat in order to produce a dual-GPU card. The side effect was that they could produce a new variant of the 280 which draws significantly less power, and hence they could afford to launch a slightly faster version under a new name, which would generate new consumer interest while still most likely having much better yields.
To play Crysis.
However, the new RV775XT is apparently the competition for the GTX 285, so we will have to see whether the 10% extra performance is enough.
Performance is irrelevant in the scenarios you listed. Who cares how fast a card is at doing nothing?
So AMD is intentionally limiting the performance of the 4870 X2 until Nvidia tries to play catch-up? I usually see that sort of wild-eyed optimism from the Nvidia camp.
A 23% decrease in consumption for the GTX 280 would actually drop it to just above the GTX 260.
Yes, that's because nVIDIA employs a more aggressive version of PowerPlay on GT200-based cards (a stock GTX 280 goes all the way down to a 2D clock of 300/601/100). But that said, a 23% reduction at full load would put it lower than the GTX 260... something doesn't sound right.
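The idle-power point follows from first-order CMOS scaling: dynamic power goes roughly as f*V^2. A quick Python sketch, taking the 2D core clock quoted above against the GTX 280's stock 602MHz 3D core clock; the voltages are illustrative guesses, not measured values:

# First-order dynamic power scaling: P ~ C * V^2 * f.
f_3d, v_3d = 602e6, 1.18  # stock GTX 280 3D core clock / assumed 3D voltage
f_2d, v_2d = 300e6, 1.05  # 2D core clock quoted above / assumed idle voltage

ratio = (f_2d / f_3d) * (v_2d / v_3d) ** 2
print(f"idle core dynamic power ~ {ratio:.0%} of 3D load")  # ~39%
# ...and that's before the drop in switching activity at idle, which cuts it further.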
http://images.anandtech.com/graphs/1gb4870_092408101853/17400.png