nvidia "D8E" High End solution, what can we expect in 2008?

Discussion in 'Architecture and Products' started by 2900guy, Nov 2, 2007.

  1. 2900guy

    Newcomer

    Joined:
    May 16, 2007
    Messages:
    97
    Likes Received:
    0
    I wish I knew enough to give a technical opening, but because I'm still a noobie in these parts I'll just stick to the big numbers noobies usually look for and leave the rest to the other members here.

    "9800 GTX"
    dual G92 (dual PCB, or two GPUs on one PCB?)
    384 bit bus
    1.5gb memory
    full 128 SP per gpu
    64 "TMU" per gpu
    24 ROP per gpu
    750mhz core
    2000mhz shader
    2.4Ghz GDDR4
     
  2. INKster

    Veteran

    Joined:
    Apr 30, 2006
    Messages:
    2,110
    Likes Received:
    30
    Location:
    Io, lava pit number 12
    I'm not buying the theory that "D8E" is a G92 with a wider bus just yet.

    A 128 scalar processor GPU, 750~800 MHz/1700~1900 MHz (core/shader core, respectively) G92 with a 256 bit path to 1GiB of 2.4~2.8+ GHz GDDR3/GDDR4 seems more likely for 2007, while any further architectural development is reserved for next Spring/Summer.
    A dual-G92 may appear as a backup plan to fill in for the old GTX/Ultra spot until the said "architectural development" makes a showing.

    And I also bet we'll see "proper" (meaning 128-bit bus) updated replacements for G84 by that time (G98 is already close to launch, but not much appears to have changed from G86 apart from the new -rumored- VP3).
     
  3. Tim Murray

    Tim Murray the Windom Earle of mobile SOCs
    Veteran

    Joined:
    May 25, 2003
    Messages:
    3,278
    Likes Received:
    66
    Location:
    Mountain View, CA
    Considering the size of G80 versus G92, I'd be pretty surprised if it supported a 384-bit bus at all.
     
  4. dizietsma

    Banned

    Joined:
    Mar 1, 2004
    Messages:
    1,172
    Likes Received:
    13
    I'm still hoping (against hope, it seems) that Nvidia will produce a 160/192 SP GPU. It would still be smaller than the 128 SP chip was on 90 nm, even keeping the ratios correct for the other parts of the chip. That would be a proper G90.

    However, it seems multiple chips are the order of the day, or even multiple cards, hence the 3x SLI 780/790 chipsets from them. So I doubt they will be making a bigger chip, and G92 will be the high end :(

    I've really gone off SLI in recent years. I used to think it was a good idea, if only because if one card blew you could still use your machine while awaiting the RMA.
     
  5. turtle

    Regular

    Joined:
    Aug 20, 2005
    Messages:
    279
    Likes Received:
    8
    Yeah, my opinion has changed. I thought a dual G92 would be the high-end offering, but realized that:

    A. The part coming that looks to be fully unlocked (128 SP/256-bit) seems destined to be called 'GTS', and available right in time for Christmas. To me this means G92 is meant to be a 'performance'-market oriented chip, meant to last through the next top-end solution instead of reusing last-gen's parts, which cost a fortune for Nvidia to produce. It's something we haven't really seen before, as performance parts are usually cut-down enthusiast parts; now it's coming before the part that will make us realize why it's $200-250, with perhaps a fully unlocked part only slightly more ($300-400?). That, or there will be a GTX with 160 shaders. I just can't see G92 being much more than G80, and I think we all expect a bigger jump than that chip could bring come 2008 and competition with R700.

    B. A new high-end part is coming in Q1 2008, and it is more and more hinted to be a new chip (G100) rather than a high-end solution à la the GX2. The rumor has been for quite some time that both Nvidia and ATi were working on 55 nm, with fruit expected from it in Q1 2008. RV670 is surely early, as we all heard they are using first-production silicon for the final part.

    I vote for a 192 SP/512-bit/GDDR4/55 nm part in Q1 2008, with TMUs using the TA:TF ratio of G92.

    I just look at the size of G92 with its four ROP partitions, probably 128 shaders, and a 256-bit bus and think... hmm... What if there were 6 ROP partitions? What if there were 8? How many shaders would there be? If G80 was bandwidth limited, and this would be more so, would 512-bit work? Could this fit on a single die at 55/65 nm? Yes, I believe it can, all the while being a more efficient use of die space/power/etc. than 2xG92.
     
    #5 turtle, Nov 2, 2007
    Last edited by a moderator: Nov 2, 2007
  6. Putas

    Regular Newcomer

    Joined:
    Nov 7, 2004
    Messages:
    392
    Likes Received:
    59
    However, a 320-bit bus would fit.
     
  7. AnarchX

    Veteran

    Joined:
    Apr 19, 2007
    Messages:
    1,559
    Likes Received:
    34
    My expectations:

    1x G92, 768MB

    775-825MHz ROP-Domain
    2.2-2.5GHz Shader-Domain
    1.2-1.6GHz GDDR4
    8 Clusters (128SPs/64TMUs)
    384Bit

    19-20GPix/s
    49-53GTex/s
    845-960GFLOPs
    115-150GB/s

    - a performance step equal to 7800GTX 256 -> 7900GTX, and should be enough against 2xRV670
    - would fit perfectly into the Tri-SLI plans
    - based on several rumors from the rumor mill :wink:
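    For anyone wanting to check the arithmetic, the quoted throughput ranges follow directly from the spec list. A quick sketch (the specs are the post's own rumors; the GFLOPs figure assumes NVIDIA's usual marketing count of 3 FLOPs per SP per clock, MADD + MUL):

```python
# Derive the quoted throughput ranges from the rumored specs above.
def throughput(rops, tmus, sps, bus_bits, core_ghz, shader_ghz, mem_ghz):
    """Return (GPix/s, GTex/s, GFLOPs, GB/s) for one spec point."""
    return (
        round(rops * core_ghz, 1),             # pixel fillrate
        round(tmus * core_ghz, 1),             # bilinear texel rate
        round(sps * 3 * shader_ghz, 1),        # MADD + MUL = 3 FLOPs/SP/clock
        round(bus_bits / 8 * mem_ghz * 2, 1),  # GDDR is double data rate
    )

# 8 clusters -> 128 SPs / 64 TMUs; 384-bit -> 6 ROP partitions -> 24 ROPs
print(throughput(24, 64, 128, 384, 0.775, 2.2, 1.2))  # (18.6, 49.6, 844.8, 115.2)
print(throughput(24, 64, 128, 384, 0.825, 2.5, 1.6))  # (19.8, 52.8, 960.0, 153.6)
```

    The two spec points reproduce the post's 19-20 GPix/s, 49-53 GTex/s, 845-960 GFLOPs and 115-150 GB/s ranges.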
     
  8. NonNative

    Newcomer

    Joined:
    Sep 22, 2007
    Messages:
    152
    Likes Received:
    0
    Honestly, I think the G92-based "D8E" is coming as a dual-slot cooled card with higher core and memory clocks. No, I don't think it's the 9800GTX.
    I think the original plan was G92 as the 9800GTS and G90 as the 9800GTX (according to the early rumor of a G92 with an 800+ MHz core).

    I believe G92 can outperform the 8800 Ultra if it's running at 800 MHz.
    However, because R600 performed really badly compared to G80, Nvidia decided to make the GeForce 8 series last longer.

    I also believe there is no G100.
     
    #8 NonNative, Nov 2, 2007
    Last edited by a moderator: Nov 4, 2007
  9. Per B

    Newcomer

    Joined:
    May 17, 2005
    Messages:
    60
    Likes Received:
    1
    Shouldn't we know by now whether G92 supports a 384-bit memory bus? I mean, you should be able to tell from chip/naked board photos!?

    Per
     
  10. AnarchX

    Veteran

    Joined:
    Apr 19, 2007
    Messages:
    1,559
    Likes Received:
    34
    No, there is no way to tell.

    But in my eyes the die size of 289 mm² (some people measured 300-330 mm²) speaks for itself. :wink:
     
  11. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    711
    Likes Received:
    282
    My guess is we will see another monster GPU for the 9800GTX:

    55 nm
    1.2 B transistors

    256 SP
    64 trilinear texture units (64 TA 128 TF)
    2 GHz shader
    750 MHz core
    512 bit bus, 150 GB/s
    1 GB

    So basically twice a G80, 3x shader speed at 1.5 TFlop
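    The "twice a G80, 3x shader speed" claim is easy to sanity-check against the shipping 8800 GTX (G80: 128 SPs at 1.35 GHz shader clock), again using NVIDIA's 3-FLOPs-per-SP-per-clock counting. A quick sketch:

```python
# Compare the guessed part's shader throughput with the 8800 GTX (G80).
def sp_gflops(sps, shader_ghz):
    # NVIDIA counts MADD + MUL = 3 FLOPs per SP per clock
    return sps * 3 * shader_ghz

g80   = sp_gflops(128, 1.35)  # 8800 GTX baseline, ~518 GFLOPs
guess = sp_gflops(256, 2.0)   # the post's guess: 1536 GFLOPs, i.e. ~1.5 TFLOP
print(round(guess / g80, 2))  # ~3x the G80's shader throughput
```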
     
  12. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,420
    Likes Received:
    179
    Location:
    Chania
    Remains to be seen why they've named the first G92 iteration the 8800GT, then.

    I'd say it's too early for that; much too early.

    I too figure that G92 might have 8 clusters after all, but I wouldn't bet that it has more than 4 ROP partitions, i.e. a 256-bit bus. Assuming G92 has roughly 750M transistors, those specs above sound VERY optimistic.

    I'm recalling a statement that a triple-SLI system will reach nearly 3 TFLOPs of theoretical performance.
     
  13. ninelven

    Veteran

    Joined:
    Dec 27, 2002
    Messages:
    1,702
    Likes Received:
    117
    My guess is $$$. A combination of helping AIBs get rid of old G80 cores and being able to launch when they did. If they had named it the 8900GT or something like that, they probably feared it would hurt 8800GTS and GTX sales more than the 8800GT name.

    I think the naming decision was a mistake though and will end up hurting them more than it helped them in the long run.
     
  14. 2900guy

    Newcomer

    Joined:
    May 16, 2007
    Messages:
    97
    Likes Received:
    0
    The sad part is that even if this guess were true, it still would not give playable frame rates (by my definition, never going sub-30 fps) in Crysis at 16x10 with 4xAA/16xAF and everything on "Very High", DX10 of course. Sad but true.
     
  15. Blazkowicz

    Legend Veteran

    Joined:
    Dec 24, 2004
    Messages:
    5,607
    Likes Received:
    256
    will GDDR5 be seen on G92?
     
  16. INKster

    Veteran

    Joined:
    Apr 30, 2006
    Messages:
    2,110
    Likes Received:
    30
    Location:
    Io, lava pit number 12
    Considering it's not even in mass production yet, I'd say... most unlikely. ;)
     
  17. turtle

    Regular

    Joined:
    Aug 20, 2005
    Messages:
    279
    Likes Received:
    8
    First of all, I agree that might be a little optimistic... too much to fit on a die, even at 55 nm. On to the points you discussed:

    1. Because there will be a GTS with 128 shaders, and perhaps a GTX with more/higher-clocked shaders. That doesn't mean something else isn't coming on 55 nm relatively shortly.

    2. It has been reported that Nvidia and ATi started work on 55 nm at relatively the same time, with returns expected at the beginning of 2008. RV670 is ATi's part; what is Nvidia's? The 9800GTX is as good a guess as any, especially since we expect Nvidia to launch the flagship of a new generation first, like G80.

    3. I agree with you. I believe the possibility is there for something more, but not much more.

    4. That was made by a forum member, although granted it sounded like a distinct possibility.

    While all that may be true, there is a G100. The question is when. The answer is H1 2008, presumably the first quarter (as that's what's been mentioned so far by the media), to trump R700 coming later in H1 2008.

    Are we talking about what can be made of G92, or what to expect in Q1 2008? I believe they are two entirely different things. G92 could make a decent solution if jacked up on volts/clocks/cooling, and might possibly be, but I think it's a stop-gap to something bigger.
     
  18. ninelven

    Veteran

    Joined:
    Dec 27, 2002
    Messages:
    1,702
    Likes Received:
    117
    As long as we are speculating...
    Assuming a "G90" exists you can get a pretty good idea of what it might look like just by ruling out some possibilities.

    1) Nvidia doesn't like GDDR4 for whatever reason... so let's restrict it to GDDR3.
    2) If they are using GDDR3, it is going to need a 384-, 448-, or 512-bit memory interface to supply the needed bandwidth.
    3) Current and upcoming games are often shader bound at ultra-high resolutions.
    4) A 512-bit interface would need 32 ROPs, because with only 16 it would have less fillrate than its predecessors. However, those would take away die space from more SPs, and given 3), this probably wouldn't work out well. A part with 24 ROPs @ 800 MHz gives double the fillrate of the 8800GT, which should be more than sufficient and leaves more space for SPs.
    5) If it is going to compete with the supposed 2xRV670 and break 1 TFLOP (by Nvidia's measure), it is going to need at least 160 SPs, but more likely 192 or even up to 256 if they want to keep the clocks lower for yield and heat reasons.

    In conclusion, if such a part were to exist I think it would look something like:
    Core @ 800MHz
    SPs @ 2GHz
    384bit/768MB
    ROPS / SPs / TA / TF
    24 / 192 / 96 / 96
    Coming in at ~1B transistors and ~440 mm^2

    I'm not convinced a 160/192/256 SP part exists though. Neither am I convinced 2xG92 makes any sense. Thus, I am baffled.

    Edit: I don't really expect G100 until about this time next year, probably coinciding with the appearance of GDDR5.
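    Points 4 and 5 above can be put into numbers. A quick sketch (the 8800 GT baseline of 16 ROPs at 600 MHz and NVIDIA's 3-FLOPs-per-SP-per-clock count are the only inputs beyond the post's own assumptions):

```python
import math

def gpix(rops, core_ghz):
    return rops * core_ghz  # pixel fillrate, GPix/s

def gflops(sps, shader_ghz):
    return sps * 3 * shader_ghz  # MADD + MUL = 3 FLOPs/SP/clock

# Point 4: 24 ROPs @ 800 MHz vs the 8800 GT's 16 ROPs @ 600 MHz
print(round(gpix(24, 0.8), 1), round(gpix(16, 0.6), 1))  # 19.2 9.6 -> exactly double

# Point 5: minimum SP count needed to break 1 TFLOP at a given shader clock
def min_sps(target_gflops, shader_ghz):
    return math.ceil(target_gflops / (3 * shader_ghz))

print(min_sps(1000, 2.0))  # 167 -> 160 SPs @ 2 GHz falls just short (960 GFLOPs)
print(gflops(192, 2.0))    # 1152.0 GFLOPs for the 192 SP guess
```

    So 160 SPs only clears 1 TFLOP with a shader clock above ~2.08 GHz, which is why 192 is the more comfortable guess.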
     
  19. Arty

    Arty KEPLER
    Veteran

    Joined:
    Jun 16, 2005
    Messages:
    1,906
    Likes Received:
    55
    Why?

    G92 is already out, performing well, and has a good TDP. It's very easy to slap two of them together.
     
  20. ninelven

    Veteran

    Joined:
    Dec 27, 2002
    Messages:
    1,702
    Likes Received:
    117
    Let's see: wasted logic, die size, inefficient use of memory, cost, heat, card size, compatibility/rendering issues, power draw, availability, etc...
     

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.