The G92 Architecture Rumours & Speculation Thread

Discussion in 'Architecture and Products' started by Arun, Aug 8, 2007.

Thread Status:
Not open for further replies.
  1. INKster

    Veteran

    Joined:
    Apr 30, 2006
    Messages:
    2,110
    Likes Received:
    30
    Location:
    Io, lava pit number 12
    Nvidia has never been a big supporter of GDDR4, and with a 512-bit bus the motivation to start would be even smaller.
    That report sounds too good to be true, and is somewhat similar to an earlier rumor anyway.
    Let's wait for something a bit more consistent before we jump to conclusions.
     
  2. ShaidarHaran

    ShaidarHaran hardware monkey
    Veteran

    Joined:
    Mar 31, 2007
    Messages:
    4,027
    Likes Received:
    90
  3. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,382
    FWIW, as a general observation: if you're talking about the same basic architecture, a 50% clock increase just from shrinking to a smaller process is very unrealistic. I'm not saying it won't run 50% faster, but in this day and age, these kinds of increases require significant architecture work.

    Wiring accounts for a major part of the delay and that doesn't get any faster with smaller processes.
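    [Editor's note: the wire-delay point can be sketched with a toy model. All delay numbers below are made-up assumptions for illustration, not real 90nm/65nm silicon figures; the function name `max_clock_ghz` is hypothetical. The idea: if only the gate portion of a critical path speeds up with the shrink while wire delay stays put, the overall clock gain is much smaller than the gate speedup.]

    ```python
    # Toy critical-path model. All delay numbers below are invented for
    # illustration; they are not real 90nm/65nm silicon figures.
    # A clock period is bounded by the slowest path: gate delay + wire delay.

    def max_clock_ghz(gate_delay_ps: float, wire_delay_ps: float) -> float:
        """Maximum clock in GHz for a path with the given delays (in picoseconds)."""
        period_ps = gate_delay_ps + wire_delay_ps
        return 1000.0 / period_ps  # 1000 ps per ns, and 1/ns = GHz

    # Hypothetical 90nm path: 400 ps of gate delay + 200 ps of wire delay.
    old = max_clock_ghz(400.0, 200.0)
    # After a shrink: assume gate delay drops 40%, wire delay doesn't improve.
    new = max_clock_ghz(400.0 * 0.6, 200.0)

    print(f"before: {old:.2f} GHz, after: {new:.2f} GHz, "
          f"gain: {100 * (new / old - 1):.0f}%")
    # Even a 40% cut in gate delay yields only ~36% more clock here,
    # because the fixed wire delay eats a larger share of the shorter period.
    ```

    With these assumed numbers, a 40% gate-delay reduction buys only about a 36% clock gain; getting to +50% would mean attacking the wires too (e.g. deeper pipelining), which is exactly the kind of significant architecture work mentioned above.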
     
  4. AnarchX

    Veteran

    Joined:
    Apr 19, 2007
    Messages:
    1,559
    Likes Received:
    34
    90nm to 65nm is two process steps (90 → 80 → 65nm), and G92 is supposed to be less complex than G80 (64 SPs), so a shader clock well above 2GHz (which a G84 can reach on air at 1.65V) should be a piece of cake.
    And even with 128 SPs it should be no problem; 1.8GHz was reached on an 8800 Ultra on air without a Vmod.
     
  5. INKster

    Veteran

    Joined:
    Apr 30, 2006
    Messages:
    2,110
    Likes Received:
    30
    Location:
    Io, lava pit number 12
    80nm is only a 90nm-derived half-node.
     
  6. AnarchX

    Veteran

    Joined:
    Apr 19, 2007
    Messages:
    1,559
    Likes Received:
    34
    Sure, but I have the feeling some people are comparing those half-node steps with this jump to a whole new process, which should give a higher clock gain.
     
  7. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,382
    The number of shader processors has no real impact on the clock speed that can be reached. At the layout level, adding or removing shader processors is a matter of cut-and-paste. Overall chip complexity (if you define that as the number of transistors) has no impact on local critical paths.

    I haven't seen any G84 @ 2GHz or G80 @ 1.8GHz in the stores (overclocking doesn't count: we're talking about reliable volume production). I don't really follow the latest and greatest XXX Superclocked mega editions, so I could be wrong here, but a quick look at evga.com shows the 8800 Ultra Black Pearl at 1.66GHz.

    Anyway, I'd love to see 2.4GHz too. I'm just not holding my breath.
     
  8. seahawk

    Regular

    Joined:
    May 18, 2004
    Messages:
    511
    Likes Received:
    141
    50% more shouldn't be too hard, considering it's a step from 90nm to 65nm: a full node, not just one half-node.
     
  9. osirisxs

    Newcomer

    Joined:
    Sep 19, 2007
    Messages:
    1
    Likes Received:
    0
    Not that I think it's at all likely, but a 50% clock increase has happened in the past.
    A few examples with a similar basic architecture:
    GeForce1 125MHz → GeForce2 200MHz (250MHz Ultra), 0.25µm? → 0.18µm: 60%-100% increase
    Voodoo2 ~90MHz → Voodoo3 166MHz, 0.35µm? → 0.25µm: ~80% increase
    G70 → G71 was even 40-45%, from 0.11µm to 0.09µm (not counting their nonexistent make-believe 512 chip)

    Feel free to correct any errors.
     
  10. Buntar

    Newcomer

    Joined:
    Dec 10, 2004
    Messages:
    26
    Likes Received:
    0
    Location:
    AZ, US
    osirisxs

    GeForce 256 was 220nm @ 120MHz (a 67%-108% increase for the 180nm GF2 GTS - GF2 Ultra).
    Voodoo 2 was 350nm @ 90MHz (a 103% increase for the 250nm Voodoo 3 3500 [183MHz]).
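    [Editor's note: the corrected percentages in this post are easy to sanity-check. A quick sketch, with clocks in MHz taken from the two posts above; the helper name `pct_increase` is just for illustration:]

    ```python
    # Verify the clock-increase percentages quoted in the posts above.
    # Clocks are in MHz, as listed by the posters.

    def pct_increase(old_mhz: float, new_mhz: float) -> float:
        """Percentage clock increase going from old_mhz to new_mhz."""
        return 100.0 * (new_mhz / old_mhz - 1.0)

    # GeForce 256 (120 MHz) -> GeForce2 GTS (200 MHz) / GF2 Ultra (250 MHz)
    print(f"GF256 -> GF2 GTS:   {pct_increase(120, 200):.0f}%")  # 67%
    print(f"GF256 -> GF2 Ultra: {pct_increase(120, 250):.0f}%")  # 108%
    # Voodoo 2 (90 MHz) -> Voodoo 3 3500 (183 MHz)
    print(f"V2 -> V3 3500:      {pct_increase(90, 183):.0f}%")   # 103%
    ```

    The results match the 67%-108% and 103% figures quoted in the correction.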
     
  11. TheAlSpark

    TheAlSpark Moderator
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    22,146
    Likes Received:
    8,533
    Location:
    ಠ_ಠ
    Just because it's happened in the past doesn't mean it'll be as easy to do in the future... it's like folding a piece of paper in half multiple times...
     
  12. seahawk

    Regular

    Joined:
    May 18, 2004
    Messages:
    511
    Likes Received:
    141
    Yet nobody is saying they will increase the clock speed of the whole chip by 50%.
     
  13. Oushi

    Newcomer

    Joined:
    Nov 14, 2005
    Messages:
    34
    Likes Received:
    1
    Location:
    EG
    Late post, but what about INTEL's graphics project next year?
     
  14. TheAlSpark

    TheAlSpark Moderator
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    22,146
    Likes Received:
    8,533
    Location:
    ಠ_ಠ
    Nor did I mention anything about that. :???:

     
  15. _xxx_

    Banned

    Joined:
    Aug 3, 2004
    Messages:
    5,008
    Likes Received:
    86
    Location:
    Stuttgart, Germany
    What about it, and what about next year? We will neither see it next year, nor is it even remotely interesting for gamers.
     
  16. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,511
    Likes Received:
    224
    Location:
    Chania
    It depends on whether INTEL uses exclusively in-house developed architectures for anything 3D in the future or also 3rd-party IP, how high exactly they intend to scale those, and for which markets. If the planned projects are limited to IGP and/or UMPC shiznit, then it's hardly worth debating. Should it go a couple of steps higher than those, though, I wouldn't laugh one bit at what INTEL might release at this stage.

    INTEL already has the largest portion of the market in terms of (graphics) units sold worldwide (though not necessarily in terms of revenue). If INTEL truly intends to widen the markets it's targeting (be it low-end professional markets or anything else), that makes it a larger player in the graphics market than it is today, possibly with much higher revenue out of it too. Of course, such an option has both advantages and disadvantages, since so far INTEL hasn't shown any sign that it's truly pushing technological progress in graphics at all, neither at the hardware nor at the software/driver level.

    A significant change in policy on the latter might also mean a totally new philosophy from INTEL for graphics in the future (if they're wise, that is... LOL).
     
  17. AnarchX

    Veteran

    Joined:
    Apr 19, 2007
    Messages:
    1,559
    Likes Received:
    34
    #337 AnarchX, Sep 20, 2007
    Last edited by a moderator: Sep 20, 2007
  18. Dave Baumann

    Dave Baumann Gamerscore Wh...
    Moderator Legend

    Joined:
    Jan 29, 2002
    Messages:
    14,090
    Likes Received:
    694
    Location:
    O Canada!
    You can support DP via external transmitters. The ports seem to be connecting to two things that are partially covered by the HSF.
     
  19. TheAlSpark

    TheAlSpark Moderator
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    22,146
    Likes Received:
    8,533
    Location:
    ಠ_ಠ
    :lol: Cardboard heatsink cover ftw!
     
  20. INKster

    Veteran

    Joined:
    Apr 30, 2006
    Messages:
    2,110
    Likes Received:
    30
    Location:
    Io, lava pit number 12
    At a quick glance the two don't appear to be the same size, though...
    Also: where's the SLI connector? ;)

    That must be a low end card, probably the replacement for 8400 GS/8500 GT (G86).
     
    #340 INKster, Sep 20, 2007
    Last edited by a moderator: Sep 20, 2007
  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.