G80 rumours

Discussion in 'Pre-release GPU Speculation' started by IbaneZ, Feb 21, 2006.

Thread Status:
Not open for further replies.
  1. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,242
    Likes Received:
    3,405
    Why not? :D

    It's possible, but I'm afraid that this would make the chip even more complex instead of simpler than a fully unified architecture.
     
  2. _xxx_

    Banned

    Joined:
    Aug 3, 2004
    Messages:
    5,008
    Likes Received:
    86
    Location:
    Stuttgart, Germany
    Weihnachtsglocken = jingle bells :)

    My (hopefully somewhat better) translation:

     
    Tim Murray and Geo like this.
  3. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,058
    Likes Received:
    3,116
    Location:
    New York
    Very nice, xxx. Thanks.
     
  4. elementOfpower

    Newcomer

    Joined:
    Jun 29, 2006
    Messages:
    102
    Likes Received:
    2
    Location:
    Greensboro, NC
    Wha--what?
     
  5. nAo

    nAo Nutella Nutellae
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,400
    Likes Received:
    440
    Location:
    San Francisco
    IF the leaked specs/manufacturing process are for real, it makes a lot of sense to have a G80 running below 500 MHz.. I mean.. it's freaking HUGE and consumes more power than your favourite nuclear power plant can generate :lol:
     
  6. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,058
    Likes Received:
    3,116
    Location:
    New York
    It would be interesting if Nvidia really goes big enough to warrant such low clock speeds. Wouldn't it have been better all around to target higher clocks with fewer processing units?
     
  7. INKster

    Veteran

    Joined:
    Apr 30, 2006
    Messages:
    2,110
    Likes Received:
    30
    Location:
    Io, lava pit number 12
    They did it in the past (think NV30 at 500MHz vs NV40 at 400MHz, both on 130nm).
    Clock speed is somewhat of a moot point, as long as the basic design is more efficient.
     
  8. Sunrise

    Regular

    Joined:
    Aug 18, 2002
    Messages:
    306
    Likes Received:
    21
    No, and there are about a dozen reasons for it, with the majority of them being either manufacturing / margin / yield related and some of them tied directly to the design itself (power consumption due to much-increased logic). Also, clocks alone don't tell you anything about its efficiency.

    NV40 -> G70 -> G71 - it's not like we haven't seen it before. NV knows what they are doing.
     
  9. nAo

    nAo Nutella Nutellae
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,400
    Likes Received:
    440
    Location:
    San Francisco
    Most of the time..yep, but NV30 has clearly shown that they can fail too.
     
  10. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,058
    Likes Received:
    3,116
    Location:
    New York
    Of course, but there's obviously a sweet-spot balance between complexity and clock speed. Increasing the number of processing units doesn't necessarily translate into higher overall efficiency either (I'm not just thinking performance/clock but performance/mm^2 as well).

    We've seen Nvidia maximize high-end yields using both the lower-clock approach (G71) and the disabled-units approach (G70), so it's not cut and dried that one is always better than the other. I was just curious as to why they would go big and slow this time (which might not even be true - we might see a 600MHz 500M beast - who knows).

    In my previous post my assumption was that a reduction in complexity would allow for a compensating increase in clock at similar overall power draw. That assumption is obviously very flawed :smile:
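    A toy back-of-the-envelope model makes the trade-off concrete. All numbers here are illustrative assumptions, nothing to do with real G80 specs: throughput is taken to scale with units × clock, die area with unit count, and dynamic power with units × clock × V², with voltage rising as clocks are pushed.

```python
def design_metrics(units, clock_mhz, voltage, area_mm2):
    """Return (throughput, power, perf_per_mm2) for a hypothetical GPU design."""
    throughput = units * clock_mhz          # abstract "ops/s" units
    power = units * clock_mhz * voltage**2  # dynamic power, arbitrary scale
    perf_per_mm2 = throughput / area_mm2
    return throughput, power, perf_per_mm2

# Wide/slow: many units at a modest clock and voltage (hypothetical figures).
wide = design_metrics(units=128, clock_mhz=500, voltage=1.1, area_mm2=480)
# Narrow/fast: fewer units, clocks (and voltage) pushed up to compensate.
narrow = design_metrics(units=96, clock_mhz=667, voltage=1.25, area_mm2=380)

for name, (tput, power, ppa) in (("wide/slow", wide), ("narrow/fast", narrow)):
    print(f"{name:12s} throughput={tput:>6} power={power:10.0f} perf/mm^2={ppa:6.1f}")
```

    At matched throughput the narrow/fast design burns more power (the voltage penalty is quadratic) while winning on perf/mm^2 - which is roughly the tension between the two approaches being discussed.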
     
    #650 trinibwoy, Aug 29, 2006
    Last edited by a moderator: Aug 29, 2006
  11. Sunrise

    Regular

    Joined:
    Aug 18, 2002
    Messages:
    306
    Likes Received:
    21
    Well, NV30 was like R520 - a rare case in history that speaks for itself.
     
  12. nAo

    nAo Nutella Nutellae
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,400
    Likes Received:
    440
    Location:
    San Francisco
    R520 was nowhere near the failure NV30 was.. IMHO
     
  13. INKster

    Veteran

    Joined:
    Apr 30, 2006
    Messages:
    2,110
    Likes Received:
    30
    Location:
    Io, lava pit number 12
    Well, NV30 itself was a total failure performance-wise.
    But in terms of sales of its derivatives, NV34 (FX 5200/FX 5500), NV36 (FX 5700, FX 5700 Ultra) and NV35 (especially the FX 5900 XT variant) all did fairly well.

    They may not be much for games, but they can still handle Windows Vista's AERO Glass interface with ease, for example.
     
  14. Sunrise

    Regular

    Joined:
    Aug 18, 2002
    Messages:
    306
    Likes Received:
    21
    Not implying that it's directly comparable, at least not architecture-wise (the dustbuster included), so that's certainly correct, but both IHVs' timelines and their goals with both chips were totally messed up.

    Also, as INKster already mentioned, NV's other SKUs - their mid-range and especially their low-end stuff - were still going as planned and also doing pretty well, whereas ATi's RV530 was delayed too, so they only had RV515 working.
     
    #654 Sunrise, Aug 29, 2006
    Last edited by a moderator: Aug 29, 2006
  15. Tim Murray

    Tim Murray the Windom Earle of mobile SOCs
    Veteran

    Joined:
    May 25, 2003
    Messages:
    3,278
    Likes Received:
    66
    Location:
    Mountain View, CA
    For mostly the same reasons - problems with new fabrication technology. The difference is, when all was said and done, NV30 wasn't even *comparable* to R300. It was beaten in both performance and IQ, hands down. R520 was at least comparable in performance to G70, and it did win on IQ.
     
  16. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    215
    Location:
    Uffda-land
    Well, that's a bit generic. There were a goodly number of spots where NV30 was "comparable" perf-wise *at release* (a poster child for "didn't age well", however), and areas where its IQ was better (AF being one). But then a goodly bit of their IQ argument got flushed with overly aggressive _pp, and once you turned on AA, they got crushed on both perf and IQ.

    R520 was only about 1/2 as "late" compared to its competitor, as well.
     
  17. Sunrise

    Regular

    Joined:
    Aug 18, 2002
    Messages:
    306
    Likes Received:
    21
    Yeah, well, "mostly", meaning NV's low-k issues vs. ATI's IP problems - no big deal, but still not directly comparable to me.

    It did, but that's only half of the whole story. Performance-wise, ATi may have done a hell of a lot better (not in all cases), but NV still earned money with their other designs, while ATi didn't, because they had to delay everything.
     
  18. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY

    But that had a lot to do with nV's overconfidence and inability to gauge ATi - who expected the 9700? It is very unlikely we will ever see such a resounding victory. If it is to happen again, performance won't be the issue, just delays because of cutting-edge manufacturing.
     
  19. IbaneZ

    Regular

    Joined:
    Apr 15, 2003
    Messages:
    743
    Likes Received:
    17
    Come on pathetic f@nbois, please don't fukk up this thread too. :lol:
     
  20. INKster

    Veteran

    Joined:
    Apr 30, 2006
    Messages:
    2,110
    Likes Received:
    30
    Location:
    Io, lava pit number 12
    No, I honestly don't think so.
    Nvidia was having a great time, with powerful GPUs for DX7 and DX8 (Geforce 256 through Geforce 4 Ti).

    However, one company decided to enter the game console business (Microsoft) and needed a quick way into the "action".
    Enter Nvidia, who designed the chipset for the console (itself the future basis for nForce 1/2 and the Soundstorm audio chip).
    But since MS was new to the business, they agreed on a fixed amount of money for Nvidia (I think it was in the neighbourhood of 200M USD) instead of the generally accepted "royalty per GPU sold" scheme.
    Later, when consumer demand for the Xbox 1 slowed down, Nvidia was still enjoying the fat profits of this agreement, while MS saw its profit margin on the hardware collapse.
    They tried to renegotiate the deal, but Nvidia refused (I don't think this was *arrogance*, but more of a pure busine$$ decision, as is routine at any modern company).

    So, MS decided to cut the life of the Xbox 1 short, joined up with ATI (now on a more traditional payment method) and, at the same time, cut Nvidia out of the loop with regard to future DX9 directions.

    Nvidia, because of the lack of detail, made a critical mistake amidst the race to guess what would be in DX9 Shader Model 2.0, opting for FP32 and FP16 precision instead of sticking to FP24, like ATI.
    So they ended up with huge transistor counts, and either great image quality but slow-as-hell performance, or very poor image quality and only acceptable speed (due to the lack of proper hardware design in the face of the aforementioned "politics").
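    A rough sketch of why that precision choice mattered so much for transistor budget and quality. The mantissa widths below are the standard ones for these formats (FP16 as s10e5, ATI's FP24 as s16e7, FP32 as s23e8); the relative-error figure is just 2 to the minus mantissa bits, a common approximation.

```python
# Relative precision of the three pixel-shader formats in play circa 2002.
formats = {"FP16": 10, "FP24": 16, "FP32": 23}  # explicit mantissa bits

for name, mantissa_bits in formats.items():
    eps = 2.0 ** -mantissa_bits  # ~smallest relative step representable
    print(f"{name}: {mantissa_bits} mantissa bits, relative error ~{eps:.1e}")
```

    FP16 is roughly 64x coarser than FP24, which is why the "_pp everywhere" route showed visible banding, while full FP32 cost NV30 the extra datapath width that FP24 let R300 avoid.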


    Some mention the diversion of human and R&D resources from the PC GPU division in order to develop for a console as a reason for the ultimate failure of NV30, but I disagree with that too.

    The Xbox 1 GPU was essentially a slightly modified PC GPU, as were the chipset and CPU, so the amount of work would require, at most, a mild delay of the discrete GPU timetable. And NV30 was delayed not because of design issues, but mostly due to power consumption, heat and noise output, the use of the then cutting-edge IBM 130nm process, and expensive GDDR2 memory (remember, this was late 2002 - early GDDR2 samples were hot, as in, really hot).


    So, this was the consequence of politics / quid pro quo between NV and MS happening 2 years before the GeforceFX 5800 Ultra or the Radeon 9700 Pro even got to market.
    As it was, I also agree that the R520 was a very different situation, and that Nvidia learned their lesson and won't be making the same (huge) mistake twice. There is even a slight parallel with the infamous Rage Fury Maxx under Windows 98, and the subsequent failure of driver support for it under the new Windows 2000 OS.
    It costs a lot of dough to step on Microsoft's toes. :D
     
    #660 INKster, Aug 29, 2006
    Last edited by a moderator: Aug 29, 2006
