nvidia "D8E" High End solution, what can we expect in 2008?

There's no G90; you obviously mean G92 and that would be two GPUs clustered together for that 9800GX2 thingy or whatever it's going to be called. What Vincent probably means is two cores on a single-die package. It's an interesting theory nonetheless, but at this point I have a hard time figuring out why they would need to go that route that early.

If there is no G90, what will the GF9800GTX/GT be based on? Another G92 SKU? In that case the GF9800GTX would be slower than the GF8800GTX at higher resolutions, because of the 256-bit memory bus and only 16 ROPs.
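(For reference, a quick back-of-the-envelope sketch of that bandwidth gap, using the commonly quoted reference specs — treat the numbers as approximate. Peak bandwidth is bus width times effective memory clock, divided by 8 bits per byte.)

```python
# Back-of-the-envelope memory bandwidth: bus width (bits) x effective
# memory clock (GHz) / 8 bits-per-byte. Reference-spec numbers, approximate.

def bandwidth_gbps(bus_bits, mem_clock_ghz):
    """Peak theoretical memory bandwidth in GB/s."""
    return bus_bits * mem_clock_ghz / 8

cards = {
    "8800GTX (G80, 384-bit @ 1.80 GHz)":    (384, 1.80),  # ~86.4 GB/s
    "8800GTS 512 (G92, 256-bit @ 1.94 GHz)": (256, 1.94),  # ~62.1 GB/s
}

for name, (bus, clk) in cards.items():
    print(f"{name}: {bandwidth_gbps(bus, clk):.1f} GB/s")
```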
 

Go back a few pages... ;)

At high resolutions I very much doubt that any single G80-based card can even remotely hold a candle to a dual-G92 card like this 9800 GX2 (unless the internal SLI mode is badly "broken" for some particular game).
 

Right, but 2x G92-450 = GF9800GX2, and I'm talking about the GF9800GTX and GT, which are single-chip solutions (according to HardOCP). They should be faster than the current G92-based GF8800GTS/GT, but the question is how much? :)
 

If I had to guess, I'd say that anything immediately below the 9800 GX2 ("9800 GTX") would be nothing more than an 8800 GTS 512MB/1GB, but with the GPU main clock hovering around 750MHz (a comfortable 100MHz above the current standard on the full, 128 scalar processor G92 core), possibly "re-using" those fast GDDR3 ICs (2.1~2.4GHz) from the 8800 Ultra.
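(A minimal sketch of what that speculated memory would buy, assuming the card keeps G92's 256-bit bus — the clocks are the post's speculation, not confirmed specs:)

```python
# Speculative "9800 GTX" bandwidth if it kept G92's 256-bit bus but got the
# 8800 Ultra's GDDR3 range; all clocks here are rumoured, not confirmed.

def bandwidth_gbps(bus_bits, mem_clock_ghz):
    return bus_bits * mem_clock_ghz / 8  # peak GB/s

for clk in (2.1, 2.4):  # speculated GDDR3 effective clocks from the post
    print(f"256-bit @ {clk:.1f} GHz: {bandwidth_gbps(256, clk):.1f} GB/s")

# The 8800 Ultra's actual spec, for comparison (384-bit @ 2.16 GHz effective):
print(f"8800 Ultra: {bandwidth_gbps(384, 2.16):.1f} GB/s")
```

Even at the top of that range a 256-bit card stays well short of the Ultra's ~104 GB/s, which is why the bandwidth objection keeps resurfacing in this thread.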
 

So what do you think the GF9800GTX will be? There are rumours about D9E-20, a single high-end chip (probably the GF9800GTX), and if, as you're saying, it will only have a 100-150MHz faster core than the GF8800GTS, is there any sense in releasing this GPU? I don't think so. Moreover, there is a GF9800GT in the plans too. So the question is: if the GF9800GTX is supposedly nothing more than a 750-800MHz G92 core, what is the GF9800GT? This situation seems strange, because if the GF9800 series doesn't bring at least a 20-25% performance boost over the G92-based GF8800GTS/GT, there is no reason to release these GPUs.
 

More like there's no sense NOT releasing it: easy money from re-branding basically your current products under a new name. Add a bit of marketing and -boom- countless people buy these "new" cards.
 
Especially with the benefit of potentially higher yields, i.e. higher profit, thanks to 65nm, plus lower power draw and cooler running.
Shrinking the high end always has advantages.
I also wouldn't be too surprised if their G80 inventory is running a bit low; I wish I knew when they EOL'ed it.
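(As a rough illustration of the yield upside — die sizes are the commonly cited approximations, and this naive model ignores defect density, scribe lines, and partial dies at the wafer edge:)

```python
import math

# Naive dies-per-wafer estimate: wafer area / die area. Die sizes are the
# commonly cited approximations for G80 (90nm) and G92 (65nm); the model
# ignores defect density, scribe lines, and partial edge dies.

WAFER_DIAMETER_MM = 300
wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2  # ~70,686 mm^2

for name, die_area_mm2 in (("G80 @ ~484 mm^2", 484), ("G92 @ ~324 mm^2", 324)):
    print(f"{name}: ~{int(wafer_area // die_area_mm2)} candidate dies/wafer")
```

Roughly half again as many candidate dies per 300mm wafer, before counting any per-die yield gain, is a fair chunk of the "easy money" described above.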
 

Yes, and then start losing customers with those tactics ;)

Rule number one: don't mess with customers, otherwise people will start going over to the other side. From Q3 -> Q4 Nvidia already started losing market share.
If they keep chasing easy money, that trend will continue for many quarters.
 

Nvidia did no different than AMD, which didn't follow up on R600 with another single-GPU high-end card either.
Remember that they share many common problems, because they depend on TSMC's and UMC's ability to deliver a steady flow of product on schedule.
They don't have any control over the manufacturing processes themselves.

Besides, with the increasing complexity of high-performance GPUs, their development time frames tend to widen significantly. Long gone are the days of regular 6-month "refresh products".
 

OK, but you know the GF9800GTX/GT series will probably be released in March, and G100 is expected somewhere in Q3. That's only about 4-5 months before NVIDIA releases a much faster GPU (I hope 2 times faster means 2 times faster in real-world games, not just theoretically), so who will buy the GF9800 then? :)
 

About the 95%+ of the market who won't hear a single word about any "G100" or similar before it's out.
 
I love made up statistics :)

True, it's a made-up statistic, but really, I don't know many non-enthusiasts who knew they had an R300 or R350 or G70 or G80 or NV30...

The point remains: the vast majority of consumers wouldn't have heard of G100 or have any clue what it refers to. Although I suspect quite a few might think it's an Infiniti or Lexus G100. Wooo, new luxury car coming out. ;)

Regards,
SB
 
If there is no G90, what will the GF9800GTX/GT be based on? Another G92 SKU? In that case the GF9800GTX would be slower than the GF8800GTX at higher resolutions, because of the 256-bit memory bus and only 16 ROPs.

The 8800GTS 512 looks like it loses at very high resolutions/extreme settings more due to its framebuffer size than its bandwidth.

I'm not even sure there's going to be a "9800GTX", but if it's clocked significantly higher than the GTS/512 and has 1GB of fast RAM, are you sure you aren't overthinking the above?

Or a better question: by all odds the 9800GX2 will contain 2x G92 chips, so why wouldn't it contain 2x G90 chips instead, if each G92 were truly strangled by bandwidth and/or Z/pixel fillrate?
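(To put a rough number on the framebuffer argument — a simplistic model that counts only the color and Z/stencil render targets, ignoring MSAA compression, textures, and driver overhead:)

```python
# Rough render-target footprint at 2560x1600 with 4xMSAA. Simplistic model:
# 4-byte color and 4-byte Z/stencil per sample, plus resolved front/back
# buffers; ignores compression, textures, and driver overhead.

W, H, SAMPLES, BYTES = 2560, 1600, 4, 4
MB = 1024 ** 2

msaa_color = W * H * SAMPLES * BYTES   # multisampled color buffer
msaa_depth = W * H * SAMPLES * BYTES   # multisampled Z/stencil buffer
resolved   = 2 * W * H * BYTES         # resolved front + back buffers

total_mb = (msaa_color + msaa_depth + resolved) / MB
print(f"Render targets alone: ~{total_mb:.0f} MB of a 512 MB card")
```

That's ~156 MB gone before a single texture is loaded, which is consistent with the 8800GTS 512 choking on memory size before bandwidth at those settings.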
 
OK, but you know the GF9800GTX/GT series will probably be released in March, and G100 is expected somewhere in Q3. That's only about 4-5 months before NVIDIA releases a much faster GPU (I hope 2 times faster means 2 times faster in real-world games, not just theoretically), so who will buy the GF9800 then? :)

How many people bought 7900GTXs and/or 7950GX2s back then? G71 was released in Q1 2006, while G80 came in Q4 2006.

FEAR 2560*1600 4xAA/16xAF:
8800Ultra = 42 fps
8800GTX = 38 fps
7950GX2 = 14 fps
7900GTX = 13 fps

http://www.computerbase.de/artikel/...ti_radeon_hd_3870_rv670/16/#abschnitt_f_e_a_r

42 / 13 = 3.23x

Since performance is, in a relative sense, tied to transistor count: G80 contained roughly 2.5x the transistors of G71. If NV truly wants to yield such a difference in performance, I'd expect a chip with at least an equal jump in transistor count compared to G80.
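(Spelling out the arithmetic behind that comparison — fps from the ComputerBase link above; the transistor counts are the published figures, 681M for G80 and 278M for G71:)

```python
# The generation-gap argument in numbers: FEAR fps from the ComputerBase
# review linked above; transistor counts as published (G80 681M, G71 278M).

fps = {"8800Ultra": 42, "8800GTX": 38, "7950GX2": 14, "7900GTX": 13}
transistors_m = {"G80": 681, "G71": 278}

speedup = fps["8800Ultra"] / fps["7900GTX"]            # ~3.23x
t_ratio = transistors_m["G80"] / transistors_m["G71"]  # ~2.45x

print(f"Measured FEAR speedup: {speedup:.2f}x")
print(f"Transistor-count jump: {t_ratio:.2f}x")
```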
 
Ail, you need to factor in the clocks as well. Not that it would be more meaningful, but at least a more correct comparison.
 

Well, what about the rumoured 1.8 billion transistors? :) Yeah, I know it's rather impossible at 65nm, but hmm... nearly 700M transistors at 90nm seemed doubtful too ;) So if the number of transistors goes hand in hand with performance, should NVIDIA's new "next-gen" be about 50-60% faster than G92? :)
 
Ail, you need to factor in the clocks as well. Not that it would be more meaningful, but at least a more correct comparison.

If I wanted to get into deeper detail, yes; but still, "handcrafted" ALUs for higher frequencies add to the transistor budget, and what exactly tells you that G80 wouldn't be up to 2x as fast as G71 if all its units had been clocked at just 575MHz?
 

G92 is at roughly 750M, and I'm afraid we still haven't seen its peak frequencies, if there's going to be a high-end single-chip variant of it.

If you take, as a purely hypothetical case, 1800M vs. 750M, that's an increase of 2.4x in terms of transistors. If the 1.8B figure turns out to be close to the truth and the result is barely 50-60% faster than a single G92, then there's something horribly wrong with that chip.
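(The same rule of thumb applied to the rumoured figure — both counts are rumours/approximations from the posts above:)

```python
# Sanity check on the rumour: if performance scaled roughly with transistor
# count, 1800M vs. ~750M should mean far more than +50-60%. Both figures
# are rumours/approximations from the discussion above.

g92_m, rumoured_m = 750, 1800
ratio = rumoured_m / g92_m
print(f"Transistor ratio: {ratio:.1f}x  (~{(ratio - 1) * 100:.0f}% more)")
```

That 2.4x ratio would naively suggest something closer to +140%, which is why a mere 50-60% gain would point to a badly broken design.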
 