G80 rumours

Discussion in 'Pre-release GPU Speculation' started by IbaneZ, Feb 21, 2006.

Thread Status:
Not open for further replies.
  1. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY
    Hmm, I think your timing is off on the whole Xbox chip cost renegotiation; if I remember correctly, it was during the FX 5900 era that MS wanted to renegotiate. MS also wanted nV to drop the price a bit, and forced the issue by not making payments; nV responded by not sending chips, and both parties went to arbitration, which nV won. Unfortunate for MS; they should have taken nV to court, not arbitration, because once arbitration makes a decision it's done, finito, there is no appeal.

    nV then guessed on the DX9 specs, as you said, but was still overconfident that their FX line would prevail. I don't know if MS locked them out or if it was nV's choice not to be part of the DX9 spec committee.
     
  2. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    That wasn't the problem at all. The problem was twofold. The first issue was that they went with a VLIW instruction set for the pixel shaders. The second is that they went for a mixture between floating-point and fixed-point shaders.

    The first issue made it very, very hard for nVidia to write a good shader compiler. They didn't even write the shader compiler until the architecture was finished! Thus they didn't find out until the very end of development just how nasty the architecture was to compile to. Because of this, none of the NV3x line made good use of the pixel shader power available.

    The second issue cut the DX9 performance by more than a factor of two out of the gate. I've been really puzzled by this design decision for a long, long time now. From what I remember hearing, implementing fixed-point shaders in DX9 was never even a consideration.
     
  3. JoshMST

    Regular

    Joined:
    Sep 2, 2002
    Messages:
    467
    Likes Received:
    25
    Also, NV30 was fabricated by TSMC. It wasn't until the FX 5700 that IBM started making parts for NV.
     
  4. INKster

    Veteran

    Joined:
    Apr 30, 2006
    Messages:
    2,110
    Likes Received:
    30
    Location:
    Io, lava pit number 12
    But I assume they started design and research for it long before the GeForce FX 5800U even taped out at TSMC, right?
    Different process, different libraries, etc.?

    And yes, I also agree that the FX 5800U was a weird mixture of old, decadent designs and cutting-edge new ones.
    When all was added up, it sucked, plain and simple.

    The important thing, for competition's sake, was that they got back up on their legs with NV40.
     
  5. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY
    Also, the NV30 boards were very hard to make; if I remember, nV started making the boards themselves. I think it was like 7 layers or something?
     
  6. Tim Murray

    Tim Murray the Windom Earle of mobile SOCs
    Veteran

    Joined:
    May 25, 2003
    Messages:
    3,278
    Likes Received:
    66
    Location:
    Mountain View, CA
    Sigh. Every thread comes back to NV30.

    My personal belief was that NV30 was explicitly designed to be an incremental upgrade from NV25 in terms of the feature set available for actual use (e.g., with decent performance). DX8 would be the focus of the chip, and DX9 would be exposed primarily for developers, whom NVIDIA assumed would want to use NVIDIA hardware, as was par for the course at the time. By the time DX9 apps became available, they thought, the next refresh would arrive, bringing real DX9 performance with it. The integer and floating point units, the register penalties, and the particular features of ps_2_a support this, I think. In a few very specific ways that only developers would care about, the NV30 is a superior chip for pixel shaders compared to the R300.

    But, of course, the R300 screwed it all up for NV30 among consumers (and even developers). Then again, it's not like R300 was quite expected to be the Uberchip that it was, especially after the R200 and the lack of refresh.

    Then, NV had to compete, couldn't wait for low-k issues to be resolved, couldn't release the card clocked so low, so we got Dustbuster and terrible yields.
     
  7. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    Well, the NV43, I think, shoots some of these statements out of the water. The NV43 has about the same transistor budget as the NV30 and a 128-bit memory bus, yet it has far more capabilities and is tremendously faster.

    After all, if it was merely hubris, nVidia wouldn't have bothered to make the chip so damned big. They would have gone for a smaller chip that would have been cheaper for them, and attempted to sell it for lots of cash.

    No, they really thought that they could do much better with the design than they actually did.
     
  8. PeterAce

    Regular

    Joined:
    Sep 15, 2003
    Messages:
    490
    Likes Received:
    10
    Location:
    UK, Bedfordshire
    I think Nvidia (for the NV3x generation) vastly underestimated the need for 'advanced sequencers' (in hardware) that distribute work to the ALUs and dynamically manage registers. This was a basic requirement for shader processing.

    In NV40 it was mentioned that the 'Shader Instruction Dispatcher' is in hardware. Add to that doubling the register file (allowing 8 FP16 or 4 FP32 temps), splitting the primary ALU into two ALUs (distributing the instructions and duplicating one MUL, I think), more flexible co-issue, the addition of dual-issue, the fast NRM (_pp), the special function units, etc.

    All this (and more that I've probably missed) added up to make it a much faster shader processor.

    NV40 was the biggest 'clean sweep' of their previous architectures.
     
    #668 PeterAce, Aug 30, 2006
    Last edited by a moderator: Aug 30, 2006
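    [Editor's note] The "8 FP16 or 4 FP32 temps" figure above can be sketched with a little arithmetic, assuming (as was commonly described) that an FP16 temp packs two to a 32-bit register slot. The slot counts and function names here are illustrative only, not from any NVIDIA documentation:

    ```python
    # Toy model of a shader register file made of 32-bit slots.
    # Assumption: an FP32 temp occupies one slot, an FP16 temp half a slot,
    # so a file sized for 4 FP32 temps can instead hold 8 FP16 temps.

    def fp16_capacity(fp32_slots: int) -> int:
        """FP16 temps that fit when every 32-bit slot is split in two."""
        return fp32_slots * 2

    def mixed_fit(fp32_slots: int, fp32_temps: int, fp16_temps: int) -> bool:
        """Check whether a mix of FP32 and FP16 temps fits in the file."""
        used = fp32_temps + fp16_temps / 2  # each FP16 temp costs half a slot
        return used <= fp32_slots

    # The doubled NV40-style file described above: 4 FP32 slots.
    assert fp16_capacity(4) == 8                          # the quoted 8 FP16 temps
    assert mixed_fit(4, fp32_temps=3, fp16_temps=2)       # 3 + 1 = 4 slots: fits
    assert not mixed_fit(4, fp32_temps=4, fp16_temps=1)   # 4.5 slots: over budget
    ```

    This packing is also why shaders that leaned on _pp (partial precision) hints suffered less register pressure on that generation.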
  9. IbaneZ

    Regular

    Joined:
    Apr 15, 2003
    Messages:
    743
    Likes Received:
    17
    Where the fukk is Walt??? He's the master of NV30 crappin'!!!111

    Walt!!!!!11111 Wake the fukkkk up!!!!
     
  10. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    215
    Location:
    Uffda-land
    Walt Took the Oath.

    Tempt him not! :lol:
     
  11. wishiknew

    Regular

    Joined:
    May 19, 2004
    Messages:
    341
    Likes Received:
    9
    Age must be catching up to me, but I could have sworn that back then Chalnoth defended the separate hardware units designed to calculate at specific precisions, as well as the high-speed 128-bit DDR2 bus with 'fewer' PCB layers.
     
  12. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    215
    Location:
    Uffda-land
    On 31 posts from May 2004 you'd swear that? Hmm. :wink:
     
  13. wishiknew

    Regular

    Joined:
    May 19, 2004
    Messages:
    341
    Likes Received:
    9
    I'm actually surprised I even have 31 posts.
     
  14. stevem

    Regular

    Joined:
    Feb 11, 2002
    Messages:
    632
    Likes Received:
    3
    Obviously a lurker well before reg, then.
     
  15. wishiknew

    Regular

    Joined:
    May 19, 2004
    Messages:
    341
    Likes Received:
    9
    I think I registered just to vote on which picture from the R420's variable filtering optimization looked better, or which one was fully filtered (something like that).
     
  16. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    215
    Location:
    Uffda-land
    It's cool, I'm just playing. I'm not really going to go back and look for who we banned in April of 2004. :razz:
     
  17. dizietsma

    Banned

    Joined:
    Mar 1, 2004
    Messages:
    1,172
    Likes Received:
    13
    Looking back, I think NV30 was far better than the trick pulled with the MX range of the GF4... there are still countless people suffering with cheap machines using it in glorious DX7 mode. At least the FX 5800 could do DX8+ and has since been replaced with better cards. There are still millions unknowingly suffering with the MX440 :D
     
    #677 dizietsma, Aug 30, 2006
    Last edited by a moderator: Aug 30, 2006
  18. Skinner

    Regular

    Joined:
    Sep 13, 2003
    Messages:
    878
    Likes Received:
    12
    Location:
    Zwijndrecht/Rotterdam, Netherlands and Phobos
    I'm hungry for new powah. Even CFX1900 won't cut the cake in every scenario. (Yes, I'm spoiled ;))
     
  19. trumphsiao

    Regular

    Joined:
    Jan 31, 2006
    Messages:
    285
    Likes Received:
    11
    Rumor: the G80 board has 12 memory chips soldered on.
     
  20. trumphsiao

    Regular

    Joined:
    Jan 31, 2006
    Messages:
    285
    Likes Received:
    11

    TSMC-made G80 samples clock well beyond 600 MHz.
    I think either G80 or R600 will be a 4:1 concept architecture.
     

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.