http://www.tomshardware.com/2008/01/05/exclusive_nvidia_geforce_9800gx2/
Tom has more on the 9800GX2
IMHO it's all a matter of perspective. Somehow I doubt (though I could be wrong here) that Nvidia couldn't have provided us with a "proper" refresh some time ago (e.g. by the time of the G92 mainstream release). I think the keyword here is competition, and in this mess I cannot solely blame Nvidia. It's a profitable company, after all.
That didn't stop NVIDIA from releasing the 8800 Ultra, despite it being an unnecessary and quite redundant product. It's not that IHVs don't pay attention to what the competition has or will have available, but that most certainly doesn't define their roadmaps in absolutes either.
> Morgoth made a couple of good points; many things would be theoretically possible if each IHV would ignore what each funky idea would cost in precious resources, what it would mean in terms of resources for future products on the roadmap, and what such a funky idea would cost the end user.

Agreed and completely understood. Nvidia could have given us the whole G92 lineup at once (or so I suspect), but instead chose to postpone the high end to this year. And now we are left with an uncertain next-gen release date, and we can only speculate whether investing in D8E would be a "rational" move in terms of timing.
That GX2 thingy shouldn't cost as much in R&D as a single-chip solution with roughly equivalent performance on the same manufacturing process. The G71-based GX2 was a solution to bridge them over (in a relative sense) from G70 to G80. In hindsight, wasn't G80 the absolute deal-breaker compared to its predecessors?
And why should we talk about a bunch of soft launches and not a hard launch?
I'm not sure we would have had lower supply and thus higher prices, if I understand correctly what you're trying to say.
Yes, the 8800GT was/is a helluva price/perf GPU, but I don't think the highest-end cards would have caused that many more stock problems; they would have been a relatively small proportion compared to 8800GT sales.
Then again, I'm pretty sure NVIDIA wanted to sell as much as it could in the mainstream market, without having the higher-end market "affect" those sales.
It's one thing to see a card like the 8800GT performing like it did in a soft launch, and another if we had had a hard one.
Seems like a mix of different sources and a bunch of extrapolation to me. And if NV has any clue what they're doing, they'll go for 512 SPs instead. Oh wait, they probably don't have any clue whatsoever what they're doing wrt the ALU-TEX ratio and its implications, so just ignore this.
> The article does mention a "3rd quarter launch", so it's definitively not impossible.

All I said was that it seems to me that it mixes different sources and includes likely incorrect extrapolation. BTW, using GDDR3 in Q3 is not impossible, but it *is* retarded.
Because a 512-bit bus is retarded when you can achieve 25%+ higher bandwidth with GDDR5 on a 256-bit bus, at ridiculously lower PCB costs. On the chip level, it'll also take less die space to support 4x64-bit GDDR5 than 8x64-bit GDDR3. And it'll take significantly less power.
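To put rough numbers on that (a back-of-the-envelope sketch in Python; the 2.0GHz GDDR3 and 5GHz GDDR5 effective rates are the figures being thrown around in this thread, not confirmed specs):

    # Peak bandwidth (GB/s) = bus width in bytes * effective per-pin rate (Gbps)
    def peak_bandwidth(bus_bits, effective_gbps):
        return bus_bits / 8 * effective_gbps

    gddr3_512bit = peak_bandwidth(512, 2.0)  # 128.0 GB/s
    gddr5_256bit = peak_bandwidth(256, 5.0)  # 160.0 GB/s
    print(gddr5_256bit / gddr3_512bit - 1)   # 0.25 -> the "25%+ higher" figure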
> That is often much less important.

I will always be at a loss as to why people keep thinking that. It's wrong, wrong, wrong. I've gone into this enough times that I have very little desire to do so again, but let's just summarize it by saying your margin is equal to: 1 - chip cost / (solution price - board price).
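A quick illustration with made-up numbers (all figures hypothetical, purely to show how board costs squeeze chip margin under that formula):

    # Hypothetical numbers, only to illustrate the margin formula above.
    solution_price = 300.0  # what the AIB pays for the chip + board package
    board_price = 100.0     # PCB, memory, cooler... passed through at cost
    chip_cost = 80.0        # what the GPU itself costs to make

    chip_revenue = solution_price - board_price  # 200.0 attributable to the chip
    margin = 1 - chip_cost / chip_revenue        # 0.6, i.e. 60% gross margin
    print(f"{margin:.0%}")

Cut the board cost at the same solution price and the margin climbs accordingly; that's the sense in which PCB costs matter more than they first appear.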
> (look at how simply using lower density chips drastically reduced the original G80-based 8800 GTS' price once the 320MB version came out...)

Price and costs are two different things; and I thought we were talking PCB, not memory?
> And I doubt that 2.5~2.8GHz GDDR4 or even GDDR5 is that much more power-friendly than mature 2.0GHz GDDR3, frankly.

GDDR5 is, and honestly it doesn't have much to do with maturity.
> Finally, there's the added latency of GDDR4/GDDR5, which larger on-chip buffers can't totally compensate for.

This is a GPU, not a CPU - latency doesn't matter a single bit as long as there are enough ALUs and processing units in general. It does have a transistor cost (in terms of registers), but that should be significantly lower than the difference between a 256-bit and a 512-bit bus.
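One way to see why (a sketch using Little's law, with illustrative numbers rather than anything from a real GPU): the chip only needs enough threads in flight to cover latency times request rate, so extra DRAM latency costs registers, not throughput.

    # Little's law: threads in flight >= latency * issue rate. Illustrative only.
    latency_cycles = 400     # assumed DRAM round-trip time, in core cycles
    requests_per_cycle = 16  # assumed memory requests issued per cycle

    threads_needed = latency_cycles * requests_per_cycle  # 6400 threads
    print(threads_needed)
    # If GDDR5 added 25% latency, you'd need 8000 threads instead - a register
    # cost, but throughput is untouched as long as the GPU can hold them all.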
Vr-zone is relaying something which, honestly, I find hard to believe (but not "impossible"):
9800 GTX specs (not the known 9800 GX2):
- "G100" core
- 55nm
- 1800 M transistors (!)
- 512bit bus (!)
- 1GB of 2.0 GHz -effective- memory (still GDDR3)
- 384 unified scalar processors (!)
- DX10.1
- 650MHz core, 2000MHz shader core
http://forums.vr-zone.com/showthread.php?t=222565
> Don't high-end Nvidia cards get shipped as a complete package to manufacturers, with PCB and memory IC costs already included?

That depends on the manufacturer. NVIDIA doesn't count boards as revenue (i.e. they take zero profit on those); as for memory, they do sell it with a margin to some AIBs, while others have enough leverage and volume to buy it themselves at good prices. But NV doesn't *want* to resell more GDDR, as it's obviously a very low margin business.
> And by "latency" I wasn't pondering somehow stalling the GPU ALUs, but just pointing out that 2.0GHz GDDR4 may be slower than 2.0GHz GDDR3, for instance.

Oh sure, you might have small inefficiencies; but if you can get 5GHz GDDR5, that's going to be significantly faster than 2x2GHz GDDR3 in all situations anyway.
> Adding too much bandwidth might be overkill (and eat into profits unnecessarily); just look at the HD2900 XT GDDR4...

We're not talking about how to maximize bandwidth here, we're talking about how to minimize costs and maximize margins for a given level of bandwidth. And I very much doubt that 512-bit GDDR3 is the answer to that in the Q3 2008 timeframe. Plus, *if* we are talking about a part that is potentially 3 times faster than the 8800 GTX, surely asking for twice the bandwidth isn't overkill?
> Do you believe in 5GHz GDDR5 in a commercial product in 2008? I don't, but I respect your opinion of course.

Sure, as long as we're talking about the high-end where the volumes are lower. There is plenty of momentum for GDDR5 from *all* sides, so I'd be incredibly surprised if we didn't see at least 2.4GHz GDDR5 this year, and possibly even more. And that's from both NV and AMD.
> Well, the 8800 GTS 512 does hold up very well against an 8800 Ultra, despite having little more than 60% of its bandwidth...

The Ultra has too much bandwidth though; they used that as an artificial differentiator, and the performance boost is indeed rather small. Compared to the GTX, the GTS512 has ~75% of its bandwidth and is nearly as fast, but that's because it's a fair bit faster in other respects.
Even where it doesn't, that's more likely down to the Ultra's extra 256MB than to the lack of bandwidth per se.
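For what it's worth, those ratios roughly check out against the stock memory specs (reference clocks quoted from memory, so treat them as approximate):

    # Reference memory configs: (bus width in bits, effective rate in Gbps).
    # Clocks from memory - approximate, not authoritative.
    def bw(bus_bits, gbps):
        return bus_bits / 8 * gbps  # GB/s

    gtx = bw(384, 1.8)      # 8800 GTX:     ~86.4 GB/s
    ultra = bw(384, 2.16)   # 8800 Ultra:  ~103.7 GB/s
    gts512 = bw(256, 1.94)  # 8800 GTS 512: ~62.1 GB/s

    print(gts512 / gtx)     # ~0.72, the "~75% of the GTX" above
    print(gts512 / ultra)   # ~0.60, the "little more than 60%" above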