> I agree with you, but your rhetorical question seemed to be pointed in another direction, AMD and nVidia. And you obviously know that AMD had its reasons for this.

Yeah, and I even know what that reason is, and it has little to do with the high end being unimportant and much more to do with AMD being in the red for several years. The question was aimed more at the public, who somehow think that having just one GPU is better than having that same GPU plus another, more complex GPU. It all boils down to what you want to do with these GPUs. If you don't want to compete in the ultra high end and you don't want to compete in the workstation and GPGPU space, then you're OK with only middle-class hardware. If not, then you're probably heading for trouble with such a strategy.
> Contradict much?

GT200 was never meant to be sold at current prices. The problem is that NV DOESN'T HAVE A CHIP which can compete with RV770 while having the same complexity. It's not a problem with GT200; it's a problem of NV not having the right middle-class GPU.
Nvidia is using G92, a 1.5-year-old chip, because they simply cannot earn a profit selling a GT200 chip at those prices. GT200 is too big and costly to sell for less than ~$150, i.e. G92 prices.
> DegustatoR: No, it's 3-way CrossFire against 2-way SLI (even the dual-GPU HD4870X2 is pretty close to the dual-GPU GTX295).

I'm not talking about the current situation. GT200 is bad compared to RV770, and that's exactly what makes the 4870X2/CF look as good as it does -- the greatness of ONE RV770, not some miraculous AFR scaling or anything.
> Anyway, the multi-card hype is over, almost nobody buys it (despite its low price), nobody tests it, so why care? Maybe 99% of users have a single-card setup...

Well, that means that whoever has the fastest single-GPU card wins. But I don't think it's that simple, otherwise NV would have won the battle against RV770.
> Well, that means that whoever has the fastest single-GPU card wins. But I don't think it's that simple, otherwise NV would have won the battle against RV770.

I was talking about a single-card setup, not a single-GPU graphics card.
> I was talking about a single-card setup, not a single-GPU graphics card.

Ah, OK. Then we're back to what I said earlier =)
> Anyway, sales of the GTX285 are extremely low. If you look at Steam stats, the card is as common as the old, lackluster HD2900XT, which hasn't been available for more than a year.

I don't think you can compare 2900XT and GTX285 sales using Steam statistics. Many 2900XTs have been upgraded to newer cards since then, and the GTX285 has been on sale for less time than the 2900XT was.
> Are you sure that this kind of strategy is really good?

What kind of strategy? Having a high-margin high-end chip (GT200) in addition to your low-margin high-volume chip (G92)? Yes, I'm quite sure that it's better than having just one low-margin high-volume chip, because it gives you and your customers more choice.
The GTX280 had 150% of the die area of the HD2900XT (R600), required a more complex PCB (NVIO chip + additional routing), and its sale price was, for the majority of its market lifespan, significantly lower than the sale price of the HD2900XT. Despite its low price, the card wasn't very successful on the market.
Almost every website was calling the HD2900XT the biggest fiasco since NV30, but all indicators show that the GTX280 and 285 are in a similar or even worse position, and despite that, a lot of people still consider this the right step.
> Yes, you can't compare them directly, but their position is similar, which definitely means something. Even the GTX280, which is EOL now, isn't much more common than the HD2900.

The GTX285 is an update to the GTX280; you should add their sales together.
> The GTX280 had 150% of the die area of the HD2900XT (R600), required a more complex PCB (NVIO chip + additional routing), and its sale price was, for the majority of its market lifespan, significantly lower than the sale price of the HD2900XT. Despite its low price, the card wasn't very successful on the market. That doesn't sound like the description of a great product to me.

It wasn't fast enough for the majority of G80 users to go and upgrade -- that was the main problem with the GTX280. Plus the gaming landscape changed -- there are only a couple of games which aren't playable on G80/G92/RV770, and none of them are playable on GT200 either -- so why buy it? GT200 was a bad GPU from the performance-update point of view.
> Almost every website was calling the HD2900XT the biggest fiasco since NV30, but all indicators show that the GTX280 and 285 are in a similar or even worse position, and despite that, a lot of people still consider this the right step.

Not really. GT200/b is still the fastest GPU on the market, and the GTX295 is the fastest video card on the market. Neither NV30 nor R600 ever was, and that's why they were failures. GT200 is a failure for NV, but it's a good GPU for consumers.
> I like big monolithic GPUs - if they bring a performance, technology and IQ advantage over the competition - e.g. X1950XTX. But that is not the case with GT200/b.

GT200/b is the first GPU on the market with IEEE DP capability. It's the fastest GPU on the market even now, 9 months after launch. It supports PhysX and now has the ability to "force" SSAO in some titles. That's features, performance and quality.
> Many people (even in this forum) were criticising R580 for poor price-effectiveness compared to G71.

If I remember correctly, it was the (at the time) strange choice of ALU/TU balance that was criticised most, not R580's poor price-effectiveness in general. I myself preferred R580 to G71 in those days.
> Now the situation is the opposite (and RV790 is in an even better position: IQ comparable to the competition, a technological advantage: DX10.1 over DX10, and even some marketing advantages: GDDR5, 3rd gen of 55nm etc.) - but only a few of the R580 critics are able to say the same about GT200 (why?)

What are you talking about? Everyone SCREAMS about GT200 being bad because of low perf/mm^2!
> I'm not sure if it's reasonable to presume that GT300 will be similar to GT200. It seems to me that the whole super-clocked scalar concept was successful only due to the failure of R600.

Yeah, that's why Intel has chosen the same concept for LRB, probably =)
> I'd expect some drastic changes - at least a significant increase in the ALU:TEX ratio.

I think it's pretty safe to assume that GT300 will be a G80/GT200 evolution, i.e. it won't be as drastic an architectural change as NV20->NV30 or G70->G80 was. Serial scalars won't go anywhere; the ALU/TU ratio will be increased again, of course, but that's far from "drastic changes". Somehow I think that most of the improvements will go into the thread dispatcher, which will allow them to significantly increase processing efficiency. GDDR5 support will give them the same bandwidth per line as AMD has/will have, so any current AMD AA performance/PCB advantage will probably disappear. The most interesting questions about GT300 are the DP performance and design decisions, and its tessellation performance compared to RV870. But I wouldn't expect any drastic changes in the underlying architecture.
> GT200/b is the first GPU on the market with IEEE DP capability.

The only chips in the consumer market with IEEE FP anything are conventional x86 CPUs. Larrabee's vector ops are an unknown, but at least its x87 pipeline should match some level of the specification. GPUs currently have varying degrees of "sort of IEEE, getting there maybe".

> The only chips in the consumer market with IEEE FP anything are conventional x86 CPUs.

The GT200 specs say "IEEE 754 single & double precision floating point units". I'm not an expert on these things - are they lying?

> The GT200 specs say "IEEE 754 single & double precision floating point units". I'm not an expert on these things - are they lying?

They don't implement exceptions, but otherwise they comply with the spec.
> I don't think you should compare them. The GTX280 is a high-end part; the 2900XT was not.

The HD2900XT was more expensive than the GTX280 for the majority of its lifespan.

> Aside from that, we weren't in a financial crisis back when the 2900XT was launched, so prices can't be compared.

Are you sure? The price of GT200 fell before we heard the first mention of the crisis, and more than half a year before the crisis affected anything. Prices went down shortly after the RV770 launch - it was ATI's aggressive pricing policy, not the crisis.
> The 8800GTS probably also sold a LOT more than the 2900XT, which proves what a fiasco the 2900XT was.

The HD4800 also has better sales than the GTX280, but that isn't related to the situation I was talking about. ATI was able to sell the 400mm2 HD2900XT at a higher price, in similar volumes, as nVidia sells the 600mm2 GTX280 at a lower price. And 3/4 of GTX280 sales were realized before the crisis impacted these markets.
> But the 2900XT *was* a fiasco. It was supposed to compete with the 8800GTX, but fell well short of that. So what you had was a card that was too expensive, too noisy, and too power-hungry for the performance it delivered. As such it had to be pitched against the 8800GTS instead, which wasn't as power-hungry or noisy (and was probably cheaper for nVidia to build).

Any single reason why the GTS should be cheaper to produce? G80 was bigger than R600, an additional chip (NVIO) was required, and 25% more VRAM was required...
> The GTX280 and GTX285 don't have that problem. They have very good power efficiency (better than ATi), run quietly (better than ATi), and the price is in line with performance.

That's very nice, but why isn't the board more widespread? Why don't people buy it more? Is it really profitable for nVidia to sell 500-600mm2 GPUs at the launch price of the 128-bit 150mm2 GF6600GT 3 years ago (or the GF8600GTS 2 years ago)? At that time, none of us could have imagined a more desperate step than selling a 512-bit 600mm2 GPU at a mainstream price.
> The GTX285 is an update to the GTX280; you should add their sales together.

RV670 was an update to R600. Should I add it to their sales, too?
> Not really. GT200/b is still the fastest GPU on the market, and the GTX295 is the fastest video card on the market. Neither NV30 nor R600 ever was, and that's why they were failures. GT200 is a failure for NV, but it's a good GPU for consumers.

Very few people care about the fastest solutions on the market these days, especially if they can buy a solution which is 5-15% slower, 20-30% cheaper, and supports up-to-date standards (DX10.1).
> GT200/b is the first GPU on the market with IEEE DP capability. It's the fastest GPU on the market even now, 9 months after launch. It supports PhysX and now has the ability to "force" SSAO in some titles. That's features, performance and quality.

Yes, it would be a nice GPGPU accelerator, but there are no meaningful GPGPU applications for end users, so it's logical that the qualities of this card are measured by its 3D performance. Because of the poor performance/size ratio, the chip is sold for a fraction of the sale price it was designed for. How can I consider GT200 a very solid GPU if it is competitive (performance-wise) only in a price segment two grades lower than the one it was designed for?
It sucks only in the perf/mm^2 area. In every other area, GT200 is a very solid GPU.
> Yeah, that's why Intel has chosen the same concept for LRB, probably =)

The concept of GT200 was probably good for GPGPU applications, but what sells graphics cards these days is still their 3D performance.
> GDDR5 support will give them the same bandwidth per line as AMD has/will have, so any current AMD AA performance/PCB advantage will probably disappear.

I don't think ATI's AA advantage comes from bandwidth. The GTX285 has >25% more bandwidth, but its 8x MSAA performance is identical to the HD4890's on average.
Power efficiency, reference coolers and temperatures are not the key features of a graphics board. The key aspects for evaluating a 3D accelerator are 3D performance, price (or the price/performance ratio), image quality, and features.
> The GTS is a 'salvage part' while the 2900XT isn't (in fact, ATi probably had to push hard with the clock speeds to remain competitive, so yields probably wouldn't be that great).

The HD2900XT was clocked at 740MHz. The majority of boards were able to run at 840MHz with the stock voltage and cooler. The problem was extreme power requirements, not yields. Even the GT and PRO versions were very OC-friendly; 800MHz was quite common at the XT's voltage level.
> ...and they also had a 320 MB version, so less VRAM was required.

The GTS320 was a cheaper and slower part for a different price segment (it also suffered from some bugs which negatively affected performance). I'm comparing the GTS640 to the HD2900XT ~ two products with comparable price and performance.
> ...and simpler/cheaper boards (remember, also a 320-bit memory bus rather than 512-bit).

Sorry, but the GeForce 8800 GTS used the same (384-bit) PCB as the GTX. So the HD2900XT had only a bit more PCB routing because of its slightly wider bus. Do you really think this minor difference outweighs an additional chip (NVIO), a more complex PCB due to the additional routing for NVIO, 25% more RAM, and a 70mm2 bigger die covered by a heatspreader? I don't think so.
So I think it's quite certain that the GTS was cheaper to produce than the 2900XT, for more than one reason.