But what if Nvidia's card comes too late, or isn't good enough to make ATI drop prices? Or maybe it doesn't arrive until ATI is ready with a refresh.
By a rough specification-based estimate, the Radeon HD 5870 could end up twice as fast as the Radeon HD 4870, but realistically you should expect the new card to be faster by about 60 percent across the board.
Twice the FLOPS is all well and good, but will it get twice the frames?
The fan is silent at idle even on the 4870X2. If power draw is around 25 W for idle, surfing, and office work, the fan will stay silent.
How that translates to load and heat is another question. Haven't the numbers been around 190 W? If so, noise levels might be an issue with these.
And what if Nvidia's card can battle the X2, but there's nothing below it? Just GT300 for two or three quarters? Welcome back, healthy margins!
Crysis Benchmark from CHIPHELL
CPU:AMD Phenom II X4 955BE
Win 7 RTM
VGA:HD5870 1GB
Crysis 1900x1200 4AA+16AF DX10 Very High
min:30.xx
avg:43.xx
max:54.xx
If they don't radically change the architecture, it's almost impossible. ATI's current architecture is far more efficient than Nvidia's in die-size terms. To beat an HD5870X2, Nvidia has to produce a chip able to deliver over 3 TFLOPS (let's assume 50% scaling from HD5870 to HD5870X2). That's almost 3 times the spec of GT200 on the current architecture.
And still, as you said, even if they manage to beat the HD5870X2 with an enormous 600+ mm^2 chip, they won't have anything to beat the 5850, and that will cause a lot of pain... I mean, with those specs, a 5850 should perform well even at Full HD resolutions, roughly 50% faster than an HD4890, for $299 / 250 euros.
Nvidia must change strategy with GT300 or they will keep losing market share.
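The arithmetic behind that post can be sketched quickly. The figures here are the rumored/approximate single-precision peaks floating around at the time (HD 5870 at the 2.64 TFLOPS number used later in this thread, GTX 280 at roughly 0.93 TFLOPS), not confirmed specs:

```python
# Rough FLOPS arithmetic from the post above (rumored/approximate figures).
HD5870_TFLOPS = 2.64   # rumored peak (1600 SPs x 2 ops x ~825 MHz)
GT200_TFLOPS = 0.93    # approximate GTX 280 single-precision peak
X2_SCALING = 1.5       # assumed 50% scaling from HD5870 to HD5870X2

hd5870x2 = HD5870_TFLOPS * X2_SCALING
print(f"HD5870X2 target: {hd5870x2:.2f} TFLOPS")          # ~3.96 TFLOPS
print(f"3 TFLOPS vs GT200: {3.0 / GT200_TFLOPS:.1f}x")    # ~3.2x, i.e. "almost 3 times"
```

Naturally, whether peak FLOPS translates to game performance across two different architectures is exactly what gets disputed further down the thread.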
Yeah, nice
How does a 4890 perform on a similar system?
20fps?
Because FLOPS by themselves define any GPU's performance? Without knowing what NV's DX11 architecture actually looks like as a whole, that kind of speculative math is so useless it's not even funny anymore.
D12U is now over 600 square millimeters? Do you know what the hell you're talking about?
AMD is releasing a fine pair of GPUs in the 5850 and the 5870. As for what comes after, let's worry about that when it arrives. Whoever feels they need an upgrade today will have two very good new GPUs to pick from.
Minor nitpick: the 5870 delivers 2.64 TFLOPS, so the 5870X2 will give:
scaling | TFLOPS
50% | 3.96
70% | 4.49
80% | 4.75
They need a ~4.5 TFLOPS GT300 to stay alive.
Come on, don't get mad!
What I meant was that Nvidia has to change strategy: they can no longer use the approach they took with G80 and GT200 (high-end parts months before the mass-market parts). But that's my opinion; you're free to disagree.
OK, now go back and compare the FLOP rates of a GTX 295 vs a 4870X2 and repeat the speculative math until you finally realize that a FLOP on one of those architectures is not equal to a FLOP on the other.
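That counterpoint in numbers, using the commonly cited single-precision peaks of the two cards (approximate figures, as a sketch):

```python
GTX295_TFLOPS = 1.79    # ~2 GPUs x 240 SPs x 3 ops x 1.242 GHz
HD4870X2_TFLOPS = 2.40  # 2 GPUs x 800 SPs x 2 ops x 0.750 GHz

# The GTX 295 trails on paper, yet traded blows with (or beat) the 4870X2
# in games -- which is the point: peak FLOPS don't compare across architectures.
deficit = 1 - GTX295_TFLOPS / HD4870X2_TFLOPS
print(f"GTX 295 peak-FLOPS deficit vs 4870X2: {deficit:.0%}")  # ~25%
```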