nvidia "D8E" High End solution, what can we expect in 2008?

This quote doesn't make much sense to me. I don't find it surprising at all that G94 would be A1. So far I haven't seen anything which would indicate it's not an exact copy of G92 with just the shader clusters cut in half. Surely it's not surprising to get this working on the first try. If there were (serious) bugs in G92 that Nvidia plans to fix in later revisions, then G94 should have the fixes too.

I found this interesting. Certain capabilities/features are apparently not enabled in current G92s. Could that be why the G94 seems to do much better than expected against a 112-SP G92?
It's certainly possible there are bugs in the chip hindering performance, or bugs/design errors preventing parts of the chip from working at the clock frequencies it was designed for. I remain sceptical, though.
Also, VR-Zone seems to mention that the memory clock hasn't been decided. What is the highest GDDR3 clock out today? 2400MHz (effective)?
Yes, I think so. That's at least what Hynix and Qimonda list on their websites (Samsung only has 1.0ns parts on theirs). I don't think we really know, though, that G92 couldn't use GDDR4.
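For anyone wondering how the ns ratings on the vendor sites map to the "effective MHz" numbers people quote, here's the quick conversion (plain arithmetic, nothing vendor-specific):

```python
# GDDR3 speed grades: vendors rate chips by access time in ns;
# reviews quote the doubled ("effective") DDR rate.
def effective_rate_mhz(ns_rating):
    clock_mhz = 1000.0 / ns_rating  # e.g. 0.83 ns -> ~1205 MHz actual clock
    return 2 * clock_mhz            # DDR: two transfers per clock cycle

print(effective_rate_mhz(0.83))  # ~2410 MHz effective, the "2400MHz" parts
print(effective_rate_mhz(1.0))   # 2000 MHz effective (Samsung's 1.0 ns parts)
```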
 
FWIW, it's being reported that the 9800GX2 is being delayed another week, to March 18th.

Apparently the design was done late last year and cards have been around since early this year. The release is being held up by drivers.

linky

I don't doubt this is holding up the rest of the 9800 series as well. One can assume Tri- and Quad-SLI will be something Nvidia pushes with the new SKUs, if they are indeed similar to the G92 products currently available apart from the second SLI connector for 3/4-GPU setups. I imagine they want to get performance scaling up to snuff against the XfireX results that *should* be available to test against come their release. I doubt they want to see Tri- or Quad-SLI solutions beaten by cheaper or similarly priced solutions from AMD, e.g. 2x R680 beating 2x 9800GTX, especially when ATi has room to cut the 3870X2's price to match the 9800GTX's MSRP ($399).

I also wonder if they are running into the same multi-GPU scaling issues under DX10 that ATi has reported, owing to DX10's more complicated nature. ATi has said the 3870X2 was held up by getting scaling up to snuff.
 
[Image: 9800gtxpower.png, power consumption chart for the 9800GTX and 8800GTS]
 
Hmm, quite a big difference between the "new" 9800GTX and the "old" 8800GTS under load. It's strange, because the 9800GTX is clocked only 25MHz higher on the core and about 50-60MHz higher on the SPs. That alone causes at least 40W more draw on the 9800GTX?
 

If the memory is clocked higher, that's going to eat away at your power envelope as well.

Likewise, the more power required, the more power is lost just getting it to the core/memory.

Regards,
SB
 
If they decide to raise the voltage to improve clock speeds, that will also, of course, cause a further increase in power consumption.
 
AFAIK the G92-420 also runs at 1.2V like the G92-400; the cheap 0.83ns Qimonda memory, which needs >2V, is probably to blame for the higher consumption...
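To put rough numbers on the clocks-vs-power puzzle, here's a quick back-of-the-envelope script using the usual dynamic-power rule (P ∝ f·V²). Every baseline wattage, clock and voltage in it is a made-up placeholder for illustration, not a measured value:

```python
# Back-of-the-envelope dynamic power scaling: P_dyn ~ C * f * V^2.
# All baseline wattages/clocks/voltages below are illustrative
# assumptions, not measurements.

def scaled_power(p_base, f_base, f_new, v_base, v_new):
    """Scale a baseline dynamic-power figure by frequency and voltage."""
    return p_base * (f_new / f_base) * (v_new / v_base) ** 2

# Core: a 25 MHz bump at the same 1.2 V barely moves the needle.
core_base_w = 80.0  # assumed G92 core dynamic power (placeholder)
print(scaled_power(core_base_w, 650, 675, 1.2, 1.2))  # ~83 W, only +3 W

# Memory: a higher clock *and* >2 V instead of ~1.8 V adds up much faster.
mem_base_w = 25.0   # assumed memory power at 970 MHz / 1.8 V (placeholder)
print(scaled_power(mem_base_w, 970, 1100, 1.8, 2.0))  # ~35 W, +10 W

# And the VRMs lose a cut of everything they deliver:
board_w = 140.0        # assumed total board draw (placeholder)
vrm_efficiency = 0.85  # typical-ish conversion efficiency assumption
print(board_w / vrm_efficiency - board_w)  # ~25 W lost in conversion
```

The takeaway: the core clock bump alone can't explain 40W, but hotter memory plus conversion losses plausibly can.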
 
Hmm, quite a big difference between the "new" 9800GTX and the "old" 8800GTS under load. It's strange, because the 9800GTX is clocked only 25MHz higher on the core and about 50-60MHz higher on the SPs. That alone causes at least 40W more draw on the 9800GTX?

Yeah, I'd like to see them try to sell a card that draws 20% more power for only 5% more performance. It should do well at higher resolutions with AA enabled, but for the average case this doesn't look like a winner.
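The efficiency hit from that trade-off is easy to quantify; a one-liner on the 20%/5% figures quoted above:

```python
# Perf-per-watt change for +20% power and +5% performance.
perf_ratio = 1.05
power_ratio = 1.20
print(perf_ratio / power_ratio)  # 0.875 -> roughly 12-13% worse perf/W
```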
 
The first test was done at a 600MHz core, 2000MHz memory and 1500MHz shader clock, and at 3DMark05 default settings we scored 17600.

In 3DMark06 the card at the same clocks scored 14400, and this is the normal default score.

If you push the card to 730MHz core and 2080MHz memory and leave the shaders at 1500MHz, you end up with 19400 in 3DMark05 and about 16100 in 3DMark06.

Read More: http://www.fudzilla.com/index.php?option=com_content&task=view&id=6012&Itemid=1
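Out of curiosity, here's the scaling implied by those quoted runs (pure arithmetic on the figures above):

```python
# How the Fudzilla overclock translated into 3DMark score gains
# (shaders held at 1500 MHz in both runs).
core_oc = (730 - 600) / 600         # ~21.7% core overclock
mem_oc  = (2080 - 2000) / 2000      # ~4.0% memory overclock
gain_05 = (19400 - 17600) / 17600   # ~10.2% 3DMark05 gain
gain_06 = (16100 - 14400) / 14400   # ~11.8% 3DMark06 gain
print(f"core +{core_oc:.1%}, mem +{mem_oc:.1%}, "
      f"3DMark05 +{gain_05:.1%}, 3DMark06 +{gain_06:.1%}")
```

So only about half the core overclock shows up as score, which hints the default runs are at least partly CPU- or bandwidth-limited.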
 
Confirmation of GX2 clock speeds from The Inq:

http://www.theinquirer.net/gb/inquirer/news/2008/03/03/nvidia-9800gx2-clocks-revealed

WE JUST GOT hard measured numbers on the Nvidia 9800GX2. There has been a lot of speculation on clocks and speeds, so here is what the one we have access to did.

The stock speeds will be 600/1000/1500MHz for core, memory and pixel shaders, but they OC a bit beyond that. With the broken OS, 64-bit edition, and an Intel quad, they score 14,4xx in 3DMark06 at default settings.

You can crank up the core frequency, but at least on the ones we have access to, memory goes all of a pittance higher, and the shaders don't budge at all. If you push it as hard as you can go, you can just break 16K on 3DMark06.

With RS770/R780 on the near horizon, I would feel mighty twitchy right now if I was NV.
 
More Pics:

[Images: three photos of the 9800GTX, including a side-by-side with the 8800GTX]


The 9800GTX is so long... I couldn't install the 8800GTX because it was too long, and now the 9800GTX is even longer.

Definitely not getting it.

US
 
I don't think it is any longer; it's just the perspective of the picture that makes it look that way, as well as the 9800 GTX having the I/O backplate still attached (whereas the 8800 GTX it's compared to does not). Match up the PCB sizes - they look identical to me.
 
Fake. Nvidia does not and will not use GDDR4. Also, we've already seen pics of the 9800 GTX which prove it uses G92. G92 doesn't have 256 SPs or a 512-bit memory bus.
 