nvidia "D8E" High End solution, what can we expect in 2008?

Still, if they decide to follow the pricing strategy of the R600 and keep R680 below 400 dollars, it may still have a fairly good chance of having that market slot to itself, since D9E will likely be far more expensive (8800 GTX/Ultra territory) and the performance-midrange D9P will not be arriving until next summer.

If D9E follows the 8800 GT's approach it will offer more SPs at a lower cost, so I think you can expect $399 with at least 160 SPs. That's assuming the dual-GPU ATI part beats it, of course, and one would hope so; otherwise it's $$$ Ultra time all over again.

Of course, I am assuming it is a process shrink plus extra oomph here, and not something radical like NV40 or G80.
 

I remember there was a rumour stating that the G92 core would support DX10.1, but in the end it turned out to be a DX10.0-only part. Is it possible that the G92 core has some bug that prevents it from being a DX10.1 part and still needs a fix? (Otherwise the chip would function as a DX10.1 part, with VP2.)

So, in order to stay on schedule and counter the lower-cost, lower-power-consumption RV670, the unfinished G92 core had to step out onto the battlefield? And the next respin, with DX10.1 fixed, will instead arrive as the performance part at the beginning of next year?

Just my guess ;)
 

Brian Burke had something to say about that recently:

HL: The GeForce 8800 GT does not support DirectX 10.1? How concerned should consumers be if their video card supports DX10 but not 10.1?

BB: Not very. With graphics, there is always “something new coming soon”. So you either have to plant a stake in the ground and make a purchase or always sit in a constant state of limbo waiting for the next this to come along. DirectX 10.1 is a minor incremental update to DX 10, and it won’t really affect games for quite a while. It makes a few features that were optional in DirectX 10 required. We support nearly all of the features anyways with our existing GeForce 8 hardware. NVIDIA works with every leading PC game developer in the world. None of them are currently using DX 10.1 features in the games they are working on for 2008. Developers have been developing for months on DirectX 10 with NVIDIA hardware. We are already starting to see the fruits of that with Crysis, Hellgate: London and other games.

http://hardwarelogic.com/news/138/ARTICLE/1857/2007-11-29.html
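To put the "optional in DirectX 10, required in 10.1" point in developer terms, here is a minimal sketch (mine, not from the interview) of how an application would ask for a D3D10.1 device and fall back to the 10.0 feature level on hardware like G80/G92. The API calls and enums are the real D3D10.1 ones; the structure and error handling are simplified.

```cpp
// Sketch: request a Direct3D 10.1 device, falling back to the 10.0 feature
// level if the driver/hardware (e.g. G80/G92) doesn't expose 10.1.
#include <d3d10_1.h>
#pragma comment(lib, "d3d10_1.lib")

ID3D10Device1* CreateBestDevice()
{
    const D3D10_FEATURE_LEVEL1 levels[] = {
        D3D10_FEATURE_LEVEL_10_1,   // RV670 / Radeon HD 3800 class
        D3D10_FEATURE_LEVEL_10_0,   // G80 / G92 class
    };

    ID3D10Device1* device = NULL;
    for (UINT i = 0; i < sizeof(levels) / sizeof(levels[0]); ++i) {
        HRESULT hr = D3D10CreateDevice1(
            NULL,                          // default adapter
            D3D10_DRIVER_TYPE_HARDWARE,
            NULL,                          // no software rasterizer
            0,                             // no creation flags
            levels[i],
            D3D10_1_SDK_VERSION,
            &device);
        if (SUCCEEDED(hr))
            return device;                 // first level the driver accepts
    }
    return NULL;                           // no D3D10-class device at all
}
```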
 
Gosh. I'm very surprised to hear NVIDIA's Chief PR dude downplay the benefits of DX10.1. :rolleyes:

:p

I don't doubt that DX10.1 can be very useful but as NVIDIA are dominating the market, developers are obviously going to focus their efforts on bog standard DX10.0 first and foremost. Hopefully, most developers will also put the extra effort in so that DX10.1 is supported fully where beneficial (and I hope NVIDIA don't lean on any of them not to do so!).
 
I'd expect developers going that route to target 10.1 features that are optional in 10.0, unless those features perform poorly on Nvidia hardware. In those cases, are they going to bother with another rendering path that might be more performant on ATI cards (and if they don't, it will be due to ATI's relatively small market penetration, not some evil conspiracy)?

Or will devs start to develop 10.1 rendering paths in expectation of the next generation of Nvidia products? In this case, ATI might have a bit of a ray of hope for the future if devs use RV670 to develop for 10.1... assuming, of course, that they don't already have pre-production hardware from Nvidia with 10.1 capabilities.
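As a rough picture of what maintaining the extra path means in code, here is a sketch (the path names and the split are made up for illustration; it assumes a device created with the feature-level fallback shown earlier in the thread). With D3D10.1 there are no caps bits, so the feature level is effectively the whole check:

```cpp
// Sketch: pick a render path from the feature level the device came up with.
#include <d3d10_1.h>

enum RenderPath { PATH_DX10_0, PATH_DX10_1 };   // hypothetical engine paths

RenderPath ChooseRenderPath(ID3D10Device1* device)
{
    // On a 10.1 device the engine can rely on features that are optional or
    // absent at 10.0, e.g. per-sample MSAA reads and cube map arrays.
    if (device->GetFeatureLevel() == D3D10_FEATURE_LEVEL_10_1)
        return PATH_DX10_1;
    return PATH_DX10_0;                          // the "bog standard" DX10.0 path
}
```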

Regards,
SB
 
Is it possible G92 supports DX10.1 but NV isn't touting it until their next GPU? IIRC, they've done similar things in the past (touting a "new" feature on a new part that just wasn't publicized for the previous one). I'm sure they wouldn't mind keeping the focus on their very, very competitive DX10.0 lineup.

What, too cynical? :D
 
I don't believe there is such a thing as too cynical when dealing with marketing...

However, I don't really see it in this case; if G92 did fully support 10.1, I don't see any reason they wouldn't want to mention it.
 
Is it possible G92 supports DX10.1 but NV isn't touting it until their next GPU? IIRC, they've done similar things in the past (touting a "new" feature on a new part that just wasn't publicized for the previous one). I'm sure they wouldn't mind keeping the focus on their very, very competitive DX10.0 lineup.

What, too cynical? :D

I don't quite agree with that on the NV side, since by the time G80 got to market there was no Vista driver, yet NV still called it out as the first part with DX10.0 support. NV's marketing behaviour is to shout about whatever they can do better than the competitor (remember the downplaying of the HD 2900's video decoder to proclaim VP2 superior?).

That is why I believe G92 was supposed to support DX10.1 but couldn't quite get there (like PureVideo in the early 6800 parts). And while they wait for it to get fixed, the RV670 can run around the market without a competitor in that price segment.

Also, to INKster: I don't think we can count on the word from either NV's or AMD/ATI's mouth about their products, since neither of them will ever admit a failure at hand... There should be a reason why G92 has a higher transistor count than the RV670 at the same bus width.

Anyway, it is just my guess so...
 
There should be a reason why G92 has a higher transistor count than the RV670 at the same bus width.

It's quite obvious why, no?
What single-GPU solution does ATI have at the moment against the GeForce 8800 GTS 512MB in the 299~349 dollar segment? None.

Yet it's the same G92 core, with the full 128 scalar processor complement, instead of the 112 SP version found in the 8800 GT.
And since the 8800 GT is already faster than the HD3870 at stock speeds, you can imagine what an even faster, "complete" G92 core would do.

It's all about the higher profit margin that Nvidia rakes in by selling cards at 300+ dollar prices for not a lot more expense in testing and production, since even the memory chips will be the same 2.0 GHz GDDR3 Qimonda parts (albeit not factory underclocked like on the GT).
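For the memory point, the back-of-the-envelope bandwidth numbers look like this (my sketch; 900 MHz is the commonly quoted 8800 GT memory clock, and the "rated" case assumes the 2.0 GHz parts running at their full 1.0 GHz DDR):

```cpp
// Sketch: GDDR3 bandwidth on a 256-bit bus at the underclocked vs. rated speed.
#include <cstdio>

int main()
{
    const double bus_bits      = 256.0;   // 8800 GT and 8800 GTS 512MB
    const double gt_mem_mhz    = 900.0;   // 8800 GT ships with memory at 900 MHz
    const double rated_mem_mhz = 1000.0;  // "2.0 GHz" GDDR3 parts, i.e. 1.0 GHz DDR

    // GDDR3 is double data rate: GB/s = (bus width in bytes) * clock * 2
    std::printf("8800 GT:        %.1f GB/s\n", bus_bits / 8 * gt_mem_mhz    * 2 / 1000.0);
    std::printf("at rated clock: %.1f GB/s\n", bus_bits / 8 * rated_mem_mhz * 2 / 1000.0);
    return 0;                              // ~57.6 GB/s vs. ~64.0 GB/s
}
```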
 
I think it's reasonable to assume that D9E was due to have launched by now. I also assume that the D9P 55nm part will be launching as per its original schedule (as a comparison point, it seems to me that 55nm at TSMC hasn't been pushed back for ATI merely because 65nm was difficult, so NVidia may have the same luck).

So, some time after that, D9E 55nm will take its turn, say 6-9 months after the originally scheduled launch of the 65nm D9E. Six months would be May/June, coinciding with D9P, so, erm, maybe more like 9 months?

Jawed

If you recall, Nvidia had stated in a conference call that a double-precision GPU would be released before the end of the year. Nvidia has stumbled; I'm not sure why.
 
And why would nvidia not want to refresh their line up with higher margin chips?

Because if you have the chance to stretch a product life cycle at the top like that then you can use the spare time, money and engineering resources to design an even better high margin product further down the line, one that isn't a (relatively) simple die shrink of your current tech.
 
And why would nvidia not want to refresh their line up with higher margin chips?
They did. G92 is what you're talking about.
But why would they want to make _another_ low-margin chip if their previous low-margin chip is still at the top and selling great?
No competition, no reason to do anything :(
 
Because if you have the chance to stretch a product life cycle at the top like that then you can use the spare time, money and engineering resources to design an even better high margin product further down the line, one that isn't a (relatively) simple die shrink of your current tech.

Just how exactly do you expect them to use this "spare" time to "improve" the profit of the chip? I assume the chip is taped out or very near to it, and I don't know what you can do at this point to improve margins other than disabling portions. I also assume we are talking about a chip that is going to be over a billion transistors on the 65nm node, so can we take a guess at its size? Maybe I'm wrong after all and the next chip is going to have lower margins... I still don't know why they would not bring it out if AMD does not have an alternative solution in the market, which means they can charge whatever they want for it.
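For a rough size guess, one can scale from G92's published figures (a sketch; it assumes roughly G92-like transistor density on the same 65nm process, which a real design won't match exactly):

```cpp
// Sketch: estimate die area for a ~1B-transistor 65nm chip from G92's density.
#include <cstdio>

int main()
{
    const double g92_transistors_m = 754.0;   // G92: ~754M transistors
    const double g92_die_area_mm2  = 324.0;   // G92: ~324 mm^2 on 65nm
    const double density = g92_transistors_m / g92_die_area_mm2;   // ~2.3 M/mm^2

    const double next_chip_transistors_m = 1000.0;  // "over a billion" per the post
    std::printf("Estimated die size: ~%.0f mm^2\n",
                next_chip_transistors_m / density); // comes out around 430 mm^2
    return 0;
}
```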
 
Just how exactly do you expect them to use this "spare" time to "improve" the profit of the chip? I assume the chip is taped out or very near to it, and I don't know what you can do at this point to improve margins other than disabling portions. I also assume we are talking about a chip that is going to be over a billion transistors on the 65nm node, so can we take a guess at its size? Maybe I'm wrong after all and the next chip is going to have lower margins... I still don't know why they would not bring it out if AMD does not have an alternative solution in the market, which means they can charge whatever they want for it.

Doing a more substantial redesign of a GPU architecture costs money. A lot of it.
So if you can skip that part once in a while then you're already improving the company's bottom line.
ATI did largely that with the R420/R480 collecting the benefits of R300, R360, etc.

In the case of the GeForce 8, the mid-life upgrade to the 8800 GTX (the 8800 Ultra) consisted of little more than a slightly new cooler and higher clock speeds on the same core; not even the PCB was changed in any significant way.
That didn't happen with the top GeForce 6's (there were 130nm and 110nm versions) or with the GeForce 7's (there were 110nm and 90nm cores, as well as two GX2 models and the 7x50 speed-bump variants).

Lack of true high-end competition tends to do that.


(Note that I don't consider the G92-based 8800 GT and 8800 GTS 512MB to be true high-end refreshes, as they don't depart significantly, if at all, from the 8800 Ultra in terms of performance.)
 
Doing a more substantial redesign of a GPU architecture costs money. A lot of it.
So if you can skip that part once in a while then you're already improving the company's bottom line.
ATI did largely that with the R420/R480 collecting the benefits of R300, R360, etc.

They lost market share with R4xx.
 
They lost market share with R4xx.

Not significantly at first. And its performance was perfectly comparable to NV40, minus SM 3.0 support.
It was more a question of dragging its SM2.0 feature set along for too long (ever since the Radeon 9700 Pro), much like what happened at Nvidia, where stretching the same architecture led to the late release of the 7900 GX2/7950 GX2 in 2006, after the GeForce 6800 Ultra had launched in April 2004.

Remember that R520 was late (although I believe R580 was launched on target).
 