nvidia "D8E" High End solution, what can we expect in 2008?

I'm glad about this news of the incoming 9800GX2. :)

That means my G92 8800GTS is going to get all the new features supported on the GeForce 9 series. :D
 
IMHO it's all a matter of perspective. Somehow I doubt (though I could be wrong here) that Nvidia couldn't have provided us with a "proper" refresh some time ago (e.g. by the time of the G92 mainstream release). I think the keyword here is competition, and in this mess I can't solely blame Nvidia; it's a for-profit company, after all.

It didn't stop NVIDIA from releasing the 8800 Ultra, despite it not being necessary and being a rather redundant product. It's not that IHVs don't pay attention to what the competition has or will have available, but it most certainly doesn't define their roadmaps in absolutes either.

Morgoth made a couple of good points; many things would be theoretically possible if an IHV could ignore what each funky idea would cost in precious resources, what it would mean in terms of resources for future products on the roadmap, and what such a funky idea would cost the end user.

That GX2 thingy shouldn't cost as much in R&D as a single-chip solution with roughly equivalent performance on the same manufacturing process. The G71 GX2 was a solution to bridge them over (in a relative sense) from G70 to G80. In hindsight, wasn't G80 the absolute deal-breaker compared to its predecessors?
 
It didn't stop NVIDIA from releasing the 8800 Ultra, despite it not being necessary and being a rather redundant product. It's not that IHVs don't pay attention to what the competition has or will have available, but it most certainly doesn't define their roadmaps in absolutes either.

Of course it doesn't define their roadmaps in absolutes, and one would have to be extremely naive not to realize that. They could have given us a better refresh than the Ultra though, which blatantly reminded me of the 9700 PRO/9800 PRO/XT situation from the ATI camp some years back.

Morgoth made a couple of good points; many things would be theoretically possible if an IHV could ignore what each funky idea would cost in precious resources, what it would mean in terms of resources for future products on the roadmap, and what such a funky idea would cost the end user.

That GX2 thingy shouldn't cost as much in R&D as a single-chip solution with roughly equivalent performance on the same manufacturing process. The G71 GX2 was a solution to bridge them over (in a relative sense) from G70 to G80. In hindsight, wasn't G80 the absolute deal-breaker compared to its predecessors?
Agreed and completely understood. Nvidia could have given us the whole G92 lineup at once though (or at least I suspect so), but instead chose to postpone the high end until this year. Now we're left with an uncertain next-gen release date, and we can only speculate whether investing in D8E will be a "rational" move in terms of timing.

I know these things happen in GPU life cycles, and history repeats itself, but somehow I feel that Nvidia, despite having the upper hand this generation, didn't make the "proper" moves in the high end. I wasn't looking for monstrously performing GPUs, but perhaps for timetables more suitable for the end user.

Just my 2 cents
 
If they'd launched it all together, I'm sure we would've just had a bunch of soft launches. The 8800GT couldn't be kept in stock all by itself, and all of these boards use the same GPU. Stock would've been tighter and prices perhaps even more inflated.

And I'm not sure that would've been good at all, because I can say with near certainty that a lot more people want an 8800GT than one of the $400+ options.
 
And why should we be talking about a bunch of soft launches and not a hard launch?
I'm not sure we would have had lower supply, and thus higher prices, if I understand what you're trying to say.
Yes, the 8800GT was/is a helluva price/perf GPU, but I don't think the highest-end cards would have caused that much more trouble with stock; they'd account for a relatively small proportion of sales next to the 8800GT.
Then again, I'm pretty sure NVIDIA wanted to sell as much as it could in the mainstream market, without the higher-end market "affecting" those sales.
It's one thing to see a card like the 8800GT perform like it did in a soft launch, and another if we'd had a hard one...
 
And why should we be talking about a bunch of soft launches and not a hard launch?
I'm not sure we would have had lower supply, and thus higher prices, if I understand what you're trying to say.
Yes, the 8800GT was/is a helluva price/perf GPU, but I don't think the highest-end cards would have caused that much more trouble with stock; they'd account for a relatively small proportion of sales next to the 8800GT.
Then again, I'm pretty sure NVIDIA wanted to sell as much as it could in the mainstream market, without the higher-end market "affecting" those sales.
It's one thing to see a card like the 8800GT perform like it did in a soft launch, and another if we'd had a hard one...

I guess the mainstream segment might be exactly NV's topmost priority at this point. RV670 arrived earlier than expected, and NV needs something as fast as possible to battle the 3850s without cutting too deeply into its margins. From what I recall, the 8800GS was originally planned for the OEM market, yet now it seems it will also appear, probably in limited quantities, in retail.
 
VR-Zone is relaying something which, honestly, I find hard to believe (but not "impossible"):

9800 GTX specs (not the known 9800 GX2):

- "G100" core
- 55nm
- 1800 M transistors (!)
- 512bit bus (!)
- 1GB of 2.0 GHz -effective- memory (still GDDR3)
- 384 unified scalar processors :)shock:)
- DX10.1
- 650MHz core, 2000MHz shader core

http://forums.vr-zone.com/showthread.php?t=222565
 
Seems like a mix of different sources and a bunch of extrapolation to me. And if NV has any clue what they're doing, they'll go for 512 SPs instead. But then again, they probably don't have any clue whatsoever what they're doing wrt the ALU-TEX ratio, so ignore this.
 
Seems like a mix of different sources and a bunch of extrapolation to me. And if NV has any clue what they're doing, they'll go for 512 SPs instead. Oh wait, they probably don't have any clue whatsoever what they're doing wrt the ALU-TEX ratio and its implications, so just ignore this.

The article does mention a "3rd quarter launch", so it's definitely not impossible.
 
The article does mention a "3rd quarter launch", so it's definitely not impossible.
All I said was that it seems to me that it mixes different sources and includes likely-incorrect extrapolation. BTW, using GDDR3 in Q3 is not impossible, but it *is* retarded.
 
All I said was that it seems to me that it mixes different sources and includes likely-incorrect extrapolation. BTW, using GDDR3 in Q3 is not impossible, but it *is* retarded.

With a 512-bit bus, how is that?
GDDR3 chips will likely be much cheaper to buy than either GDDR4 or GDDR5, no?
And there's always an "Ultra" moniker to be used, just in case... ;)
 
Because a 512-bit bus is retarded when you can achieve 25%+ higher bandwidth with GDDR5 and a 256-bit bus, at ridiculously lower PCB cost. On the chip level, it'll also take less die space to support 4x64-bit GDDR5 than 8x64-bit GDDR3. And it'll take significantly less power.
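Back-of-the-envelope, using the effective data rates being thrown around in this thread (a sketch, not vendor figures):

```python
# Rough bandwidth math with the effective data rates used in this
# thread ("GHz" = effective transfers per pin per second).

def bandwidth_gb_s(bus_width_bits, effective_ghz):
    """Peak bandwidth in GB/s: bytes per transfer times transfers per second."""
    return (bus_width_bits / 8) * effective_ghz

gddr3_512bit = bandwidth_gb_s(512, 2.0)  # -> 128.0 GB/s
gddr5_256bit = bandwidth_gb_s(256, 5.0)  # -> 160.0 GB/s

print(gddr5_256bit / gddr3_512bit - 1)   # -> 0.25, i.e. the 25%+ above
```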
 
Because a 512-bit bus is retarded when you can achieve 25%+ higher bandwidth with GDDR5 and a 256-bit bus, at ridiculously lower PCB cost. On the chip level, it'll also take less die space to support 4x64-bit GDDR5 than 8x64-bit GDDR3. And it'll take significantly less power.

I was talking about the cost of the ICs themselves, not the bus/PCB/power consumption.
That is often much less important (look at how simply using lower-density chips drastically reduced the original G80-based 8800 GTS's price once the 320MB version came out...).

And I doubt that 2.5~2.8GHz GDDR4, or even GDDR5, is that much more power-friendly than mature 2.0GHz GDDR3, frankly.
Finally, there's the added latency of GDDR4/GDDR5, which larger on-chip buffers can't totally compensate for.
 
That is often much less important
I will always be at a loss as to why people keep thinking that. It's wrong, wrong, wrong. I've gone into this enough times that I have very little desire to do so again, but let's just summarize it by saying your margins are equal to: '1 - chip cost / (solution price - board price)'.
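To make that concrete with purely made-up numbers (none of these are real NVIDIA figures), here's how a pricier board eats directly into chip margin:

```python
# Purely hypothetical figures -- none of these are real NVIDIA numbers.
# If the market fixes the solution price, every extra dollar of board
# cost comes straight out of the price the chip can be sold for.

def chip_margin(solution_price, board_price, chip_cost):
    """Margin = 1 - chip cost / (solution price - board price)."""
    return 1 - chip_cost / (solution_price - board_price)

print(chip_margin(449, 100, 110))  # ~0.68 with a cheap 256-bit-class board
print(chip_margin(449, 150, 110))  # ~0.63 with a pricier 512-bit board
```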
(look at how simply using lower-density chips drastically reduced the original G80-based 8800 GTS's price once the 320MB version came out...)
Price and cost are two different things; and I thought we were talking about the PCB, not the memory?
And I doubt that 2.5~2.8GHz GDDR4, or even GDDR5, is that much more power-friendly than mature 2.0GHz GDDR3, frankly.
GDDR5 is, and honestly it doesn't have much to do with maturity.
Finally, there's the added latency of GDDR4/GDDR5, which larger on-chip buffers can't totally compensate for.
This is a GPU, not a CPU: latency doesn't matter a single bit as long as there are enough ALUs and processing units in general. Hiding it does have a transistor cost (in terms of registers), but one that should be significantly lower than the difference between a 256-bit and a 512-bit bus.
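A quick Little's-law sketch, with assumed round numbers, of why extra latency mostly costs threads and registers rather than performance:

```python
# Little's-law sketch with assumed round numbers: the threads a GPU
# must keep in flight to hide memory latency is roughly
# latency (cycles) * memory requests issued per cycle.

mem_latency_cycles = 400   # assumed DRAM round-trip, in core cycles
requests_per_cycle = 8     # assumed requests the memory system issues

threads_needed = mem_latency_cycles * requests_per_cycle
print(threads_needed)      # 3200 -- trivial next to a GPU's thread pool

# 50% more latency just means ~50% more threads (i.e. registers) in
# flight, far cheaper in die area than doubling the bus width.
```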
 
Don't high-end Nvidia cards get shipped as a complete package to manufacturers, with PCB and memory IC costs already included?

So, Nvidia does stand to gain a profit merely by reselling GDDR3, right?
In that case, shipping with a low-cost solution like GDDR3 does begin to make sense if they wish to maintain their profit margins above 40%, as usual.

And by "latency" I wasn't pondering somehow stalling the GPU's ALUs, but just pointing out that 2.0GHz GDDR4 may be slower than 2.0GHz GDDR3, for instance.
Adding too much bandwidth might be overkill (and eat into profits unnecessarily); just look at the GDDR4 HD2900 XT...
 
Don't high-end Nvidia cards get shipped as a complete package to manufacturers, with PCB and memory IC costs already included?
That depends on the manufacturer. NVIDIA doesn't count boards as revenue (i.e. they take zero profit on those); as for memory, they do sell it with a margin to some AIBs, while others have enough leverage and volume to buy it themselves at good prices. But NV doesn't *want* to resell more GDDR, as it's obviously a very low margin business.

And by "latency" i wasn't pondering on somehow stalling the GPU ALU's, but just pointing out that 2.0GHz GDDR4 may be slower than 2.0GHz GDDR3, for instance.
Oh sure, you might have small inefficiencies; but if you can get 5GHz GDDR5, that's going to be significantly faster than 2x2GHz GDDR3 in all situations anyway.

Adding too much bandwidth might be overkill (and eat into profits unnecessarily); just look at the GDDR4 HD2900 XT...
We're not talking about how to maximize bandwidth here; we're talking about how to minimize costs and maximize margins for a given level of bandwidth. And I very much doubt that 512-bit GDDR3 is the answer to that in the Q3 2008 timeframe. Plus, *if* we are talking about a part that is potentially three times faster than the 8800 GTX, surely asking for twice the bandwidth isn't overkill?
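For reference, here's the arithmetic behind that, based on the 8800 GTX's known 384-bit, 1.8GHz-effective memory config:

```python
# 8800 GTX: 384-bit bus, 1.8GHz-effective GDDR3.
gtx_bw = (384 / 8) * 1.8    # 86.4 GB/s
target = 2 * gtx_bw         # 172.8 GB/s, i.e. "twice the bandwidth"

# 512-bit GDDR3 at 2.0GHz effective falls well short of that target:
print((512 / 8) * 2.0)      # 128.0 GB/s, only ~1.48x the GTX

# while 256-bit GDDR5 hits it at 172.8 / 32 = 5.4GHz effective:
print(target / (256 / 8))   # 5.4
```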
 
Oh sure, you might have small inefficiencies; but if you can get 5GHz GDDR5, that's going to be significantly faster than 2x2GHz GDDR3 in all situations anyway.

Do you believe in 5GHz GDDR5 in a commercial product in 2008? I don't, but I respect your opinion, of course.

We're not talking about how to maximize bandwidth here; we're talking about how to minimize costs and maximize margins for a given level of bandwidth. And I very much doubt that 512-bit GDDR3 is the answer to that in the Q3 2008 timeframe. Plus, *if* we are talking about a part that is potentially three times faster than the 8800 GTX, surely asking for twice the bandwidth isn't overkill?

Well, the 8800 GTS 512 does hold up very well against an 8800 Ultra, despite having little more than 60% of its bandwidth...
Even where it doesn't, that's more likely due to the extra 256MB on the Ultra than to a lack of bandwidth per se.
 
Do you believe in 5GHz GDDR5 in a commercial product in 2008? I don't, but I respect your opinion, of course.
Sure, as long as we're talking about the high end, where volumes are lower. There is plenty of momentum for GDDR5 from *all* sides, so I'd be incredibly surprised if we didn't see at least 2.4GHz GDDR5 this year, and possibly even more. And that's from both NV and AMD.
Well, the 8800 GTS 512 does hold up very well against an 8800 Ultra, despite having little more than 60% of its bandwidth...
Even where it doesn't, that's more likely due to the extra 256MB on the Ultra than to a lack of bandwidth per se.
The Ultra has too much bandwidth, though; they used that as an artificial differentiator, and the performance boost is indeed rather small. Compared to the GTX, the GTS512 has ~75% of its bandwidth and is nearly as fast, but that's because it's a fair bit faster in other respects.
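For what it's worth, the raw numbers behind those percentages, from the cards' shipping memory configs:

```python
# Peak bandwidths of the shipping configs mentioned above, in GB/s:
ultra  = (384 / 8) * 2.16   # 8800 Ultra:   384-bit, 2.16GHz eff. -> ~103.7
gtx    = (384 / 8) * 1.80   # 8800 GTX:     384-bit, 1.80GHz eff. ->  86.4
gts512 = (256 / 8) * 1.94   # 8800 GTS 512: 256-bit, 1.94GHz eff. -> ~62.1

print(gts512 / ultra)       # ~0.60 -- the "little more than 60%" above
print(gts512 / gtx)         # ~0.72 -- roughly the "~75%" figure
```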
 