OK, I'm sorry, it looks like my memory was wrong and you weren't negative about G80 and GT200 before their release.
Apology accepted.
Price and performance don't map onto each other directly. The 5670 costs $100 and the 5970 costs $700 -- is it 7x faster? No. Does that mean everyone should go and buy a 5670? Nope. Price is what you're ready to pay for a product, and with graphics cards, performance in today's games is not the only factor in pricing. So if the 5870 ends up with 75-80% of the GF380's performance while costing 60% of its price, that's because the GF380 offers some other benefits to a buyer beyond performance alone. I've already described some of these benefits. Of course, if you don't think they're important, then you're better off buying the 5870 -- IF you're OK with its performance, because deltas aren't absolute numbers. And if enough people think the same, NV will be forced to drop prices. It'll get sorted out one way or another, so I don't see much reason to dwell on it.
It's all relative to individual needs. Personally, I'm looking for a fast single-GPU solution with DX11 support, so my main consideration is the price-performance ratio between the two product lines. Everything after that is very secondary for me.
As I've said, it's better to sell at a loss than not to sell at all. The pricing will be competitive or the products won't be on the market at all.
I wasn't trying to suggest that NV won't compete; they'll certainly do their best. But I don't think it's unfair to also suggest that AMD is in the driver's seat at this point (6-month market lead, much smaller chip, more time to get yields up) and has more room to maneuver on pricing. That's not a bad position to be in. The danger for them, though, is NV leveraging that devrel advantage they clearly have to get some big-name games out in the near future that push geometry loads heavier than we've seen before. Cypress strikes me as a very evolutionary part, enabling AMD to hit that Win 7 release window with DX11 support, but they can't sit on this design for too long if geometry usage scales up the way, say, fill rates did for GPUs in the early 2000s.
The 5870 isn't fast enough for me on my 24" 1920x1200 display, so I don't really understand how it's fast enough for you on a 30".
My GTX 285 is fast enough for everything I've played over the last year. Dragon Age at 4x MSAA and 2x SSAA with 16xAF at 25x16 runs just fine. Dirt 2 demo, Batman, Call of Duty, etc. But I've always been fine with 30-35 fps for most games.
Fermi's key selling points are not only performance but features as well. So it's really a question of whether you care about those features (PhysX, CUDA, 3D Vision, etc.). If you do, then you don't really have a choice. If you don't, well, then you need to judge from a performance point of view. For me, PhysX is more of a killer feature than DX11 at the moment, so I don't really have much choice (well, I could wait for a mid-range Fermi GPU and use it as a dedicated PhysX accelerator, but why would I do that instead of simply buying a GF100 card?).
Absolutely. I just replayed Batman: AA and turned off PhysX. Having played the game once already, I didn't consider the extra visuals from PhysX worth the frame rate loss. This is where Fermi could be compelling: it should let users 'enable and forget', at least for these initial PhysX games.
Like I said upstream, as a consumer I'm waiting to learn more on board configs, pricing, clock speeds, relative performance, etc., before deciding on my next GPU upgrade. As a hardware geek, however, there's no doubt that Fermi is far more interesting as an architecture. NV has done a lot of heavy lifting early in the DX11 life cycle that will most likely serve them well, especially as they transition the design to smaller fab processes over the next few years.