NV30 not taped out yet....

Discussion in 'Architecture and Products' started by Guest, Jul 31, 2002.

  1. Nagorak

    Regular

    Joined:
    Jun 20, 2002
    Messages:
    854
    Likes Received:
    0
    Well, I also think it's rather premature to assume the end of the world for the R300 when the NV30 is released. I don't know how the R9700 will fare selling in the ultra-high end (read: overpriced and out of reach for 95% of the target audience). We'll just have to wait and see. Either way, the R9700 is a great card, and hopefully it will get cheap enough to buy before too long.
     
  2. Evildeus

    Veteran

    Joined:
    May 24, 2002
    Messages:
    2,657
    Likes Received:
    2
    Did I say anything else?
     
  3. RussSchultz

    RussSchultz Professional Malcontent
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,855
    Likes Received:
    55
    Location:
    HTTP 404
    Well, NVIDIA may have used the shuttle services at TSMC to do their development, testing out different blocks of the device separately rather than putting it all together at once and doing verification that way.

    The shuttle is essentially buying a small portion of a mask set so that the cost is shared amongst several ASIC vendors. The numbers I've heard for an all-layer mask on the shuttle are about 40k-70k, vs. 800k for the full reticle mask set. One of the problems is that this only happens once a month or so, and if you miss the deadline, you have to wait. Of course, TSMC requires you to buy the full reticle to go to production, but it's quite a bit cheaper for development if you can afford the schedule hits.
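    The cost sharing above is easy to put in numbers. A minimal sketch, using only the ballpark figures quoted in this post (hearsay estimates, not actual TSMC pricing):

    ```python
    # Rough comparison of a multi-project-wafer ("shuttle") run vs. a dedicated
    # full-reticle mask set, using the thread's ballpark figures.
    SHUTTLE_COST_RANGE = (40_000, 70_000)   # all-layer shuttle masks (cost shared)
    FULL_RETICLE_COST = 800_000             # dedicated full mask set

    def shuttle_savings(shuttle_cost: int, full_cost: int = FULL_RETICLE_COST) -> float:
        """Fraction of mask cost saved by riding the shuttle for a test spin."""
        return 1 - shuttle_cost / full_cost

    for cost in SHUTTLE_COST_RANGE:
        print(f"${cost:,} shuttle run saves {shuttle_savings(cost):.0%} vs. a full reticle")
    # -> roughly 91-95% of the mask cost saved per development spin
    ```

    Which is why eating a schedule hit to wait for the next monthly shuttle can still be the cheaper choice during development.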

    This may be the source of "taped out at .15u", "T&L is a separate chip", and all sorts of other weird conflicting rumors going around. Maybe the logic was developed at .15u for functional verification purposes, and maybe the vertex block was developed as its own chip, with the intent that all these pieces get integrated into a single-die .13u device in the end.
     
  4. LeStoffer

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    1,262
    Likes Received:
    22
    Location:
    Land of the 25% VAT
    and:

    I just got this crazy idea that they are indeed going full deferred rendering this time. That's the only reason I can see why they would almost need to get silicon back as part of the die verification. I don't know if they would even need to include the full pixel pipeline to check it out, but they might have.

    The plot thickens.... :eek:
     
  5. RussSchultz

    RussSchultz Professional Malcontent
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,855
    Likes Received:
    55
    Location:
    HTTP 404
    I don't see how going the Gigapixel route would have any different functional verification requirements than any other design, or would lend itself more to being developed in pieces. (In other words, I'm not grasping how these facts point toward your conclusion.)
     
  6. LeStoffer

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    1,262
    Likes Received:
    22
    Location:
    Land of the 25% VAT
    No? Well, I was thinking along the lines that it did take PowerVR some time to get their architecture just right, and if you want to test a load of different game engines with all their "hacks" and workarounds, you'll need real silicon, because simulating that many cases would be way too slow (albeit they of course did some computer simulation beforehand).

    Does it make any sense now?
     
  7. RussSchultz

    RussSchultz Professional Malcontent
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,855
    Likes Received:
    55
    Location:
    HTTP 404
    I guess, though I think you might be underestimating the amount of functional testing that goes on with any graphics core, whether it's a tiler or not.
     
  8. BoardBonobo

    BoardBonobo My hat is white(ish)!
    Veteran

    Joined:
    May 30, 2002
    Messages:
    3,605
    Likes Received:
    541
    Location:
    SurfMonkey's Cluster...
    If nVidia have got a deferred rendering system in action, does this mean that they will be able to do ray casting in real time, as some of the demo pics needed, and would this substantiate the rumour that they are getting MSAA, essentially, for free?
     
  9. sumdumyunguy

    Newcomer

    Joined:
    Feb 6, 2002
    Messages:
    94
    Likes Received:
    0
    Location:
    HotLanta!!! :)
    Thank God you said that. Also, because of the upcoming CEO certification "law" on Aug 14th, I would wager that Huang (& other CEOs here in the States) will be dropping other little truth bombs.

    I saw a couple of pages back (too lazy to get it) that someone had the audacity to suggest that the CEO of nVidia is not the person that we should ask questions of. Whom, then, would I ask? Kenneth Lay?
     
  10. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    No, nVidia is not going deferred rendering. There have been many comments in the past that pretty much show this without a shadow of a doubt. Unfortunately, I can't seem to quickly find some of the quotes I'm thinking of, but I am certain of this. You'll find out at the release of the NV30 whether or not I'm right.
     
  11. martrox

    martrox Old Fart
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,065
    Likes Received:
    16
    Location:
    Jacksonville, Florida USA
    Hasn't nVidia also been quoted many times in the recent past saying that there was no need for a 256-bit memory bus?
     
  12. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    I only saw that once (it's been requoted by others not from nVidia), about 3-6 months ago.

    Additionally, it's looking like the NV30 has moved to DDR2 and a 256-bit bus.
     
  13. LeStoffer

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    1,262
    Likes Received:
    22
    Location:
    Land of the 25% VAT
    You don't need to find the quotes, as I remember the statements about not going deferred rendering quite clearly myself. They even hinted that the Gigapixel architecture probably wasn't anything they would use.

    But that is not the same as saying they would never come to the conclusion that their LMA architecture just wouldn't be efficient enough as it gets more and more computationally heavy to render each and every pixel (visible or not).

    May I direct you to this line that Dave wrote in his NV30 specs:

    New focus on computational efficiency rather than memory efficiency

    This basically states that memory efficiency may indeed not be the bottleneck with this architecture. Please also note Dave's mention of reactorcritical missing "a very important word out when talking about that bandwidth number" (which was 48 GB/s bandwidth).

    Yes. I cannot compete with the possible NDA you have signed, but otherwise let us at least have some fun thinking about it, okay? :wink:
     
  14. martrox

    martrox Old Fart
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,065
    Likes Received:
    16
    Location:
    Jacksonville, Florida USA
    My point, Chalnoth, is that if YOU like it, it's written in stone. However, IF you don't........
     
  15. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    Let me just clarify a little something. I have signed no NDA (I wouldn't be talking about this if I had). Nor am I acting on any sort of insider info here. It's just that previous quotes from nVidia personnel have made me very certain that they are not going for any sort of deferred rendering in the foreseeable future.

    Additionally, if you look at the "computational efficiency" quote, it seems to me that it's more about packing more processing into every pixel, so that not so much memory bandwidth is needed, but lots of pixel shader power is.
     
  16. Pete

    Pete Moderate Nuisance
    Moderator Legend

    Joined:
    Feb 7, 2002
    Messages:
    5,777
    Likes Received:
    1,814
    There may have been no need pre-R300, but since nVidia seems to have so much time on their hands, I can't imagine why they wouldn't be spending some of it adding a faster bus. With all the delays they're having, it shouldn't come at much of a price penalty either, as we now have three cards using it (Parhelia, P10, R300).

    The computational efficiency issue makes sense, as we're seeing fillrate, rather than memory bandwidth, become the limiting factor as we shift to AA+AF+tri. You'll notice many overclocking benchmarks show as much gain from OCing the core as they do from OCing the memory.

    I agree that 48GB/s is probably missing an "effective." That number just sounds ridiculous. Sure, 4x FSAA would be "for free" with that kind of bandwidth, but I just don't see it being feasible. Would excessive power draw become an issue at that speed?
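    The arithmetic behind that skepticism is simple to sketch. A minimal calculation (the clock rates below are illustrative, not leaked NV30 specs):

    ```python
    # Raw bandwidth of a DDR bus: (bus_width / 8) bytes per transfer at the
    # effective (double-pumped) data rate.
    def bandwidth_gb_s(bus_bits: int, effective_mt_s: float) -> float:
        """Raw bandwidth in GB/s for a bus_bits-wide bus at effective_mt_s megatransfers/s."""
        return bus_bits / 8 * effective_mt_s * 1e6 / 1e9

    # A 256-bit bus at 500 MHz DDR (1000 MT/s effective):
    print(bandwidth_gb_s(256, 1000))   # 32.0 GB/s raw

    # Hitting 48 GB/s raw on 256 bits would need 1500 MT/s:
    print(bandwidth_gb_s(256, 1500))   # 48.0 GB/s
    ```

    Memory at those effective rates wasn't plausible for a 2002 product, which is why 48 GB/s reads more naturally as an "effective" number than a raw one.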

    <tangent>With GPUs becoming as powerful as they are at certain operations, I can see motherboards beginning to give equal thought to both CPU and GPU. I'd love 512MB of 256-bit DDR2 for a P4A 3GHz or Hammer 3000+, that's for sure. Dammit, someone develop a MB with a shared memory architecture and two sockets! :) I don't see Intel doing so, as it would take away from their position as the most important part of the PC, but nVidia should certainly consider it in conjunction with AMD.</tangent>
     
  17. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    Suggesting any memory bandwidth much beyond about 20-30 GB/sec is pretty much infeasible.
     
  18. demalion

    Veteran

    Joined:
    Feb 7, 2002
    Messages:
    2,024
    Likes Received:
    1
    Location:
    CT
    My guess at the word was "peak".

    What type of design would offer a peak bandwidth that high?

    For lack of a ready answer, I reconsidered and started guessing "uncompressed" where the data is transferred in a compressed state. While not the most elegant phrasing in the context of that quote, it does seem to make sense (i.e., lossless Z Buffer compression).

    EDIT: realized that "compressed" fits better and conveys the point accurately. :oops:
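    The "compressed" reading above reduces to one multiplication: effective bandwidth is raw bandwidth scaled by the compression ratio of the data on the bus. A minimal sketch, where the 4:1 ratio is purely illustrative, not a claimed NV30 figure:

    ```python
    # "Effective" bandwidth when data crosses the bus in compressed form
    # (e.g. lossless Z-buffer compression) but is counted uncompressed.
    def effective_bandwidth(raw_gb_s: float, compression_ratio: float) -> float:
        """Apparent bandwidth given raw bus bandwidth and a lossless compression ratio."""
        return raw_gb_s * compression_ratio

    # e.g. a modest 12 GB/s raw bus with hypothetical 4:1 compression:
    print(effective_bandwidth(12.0, 4.0))   # 48.0 GB/s "effective"
    ```

    Which shows how a headline 48 GB/s could coexist with a much tamer physical bus.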
     
  19. multigl2

    Newcomer

    Joined:
    May 23, 2002
    Messages:
    64
    Likes Received:
    0
    Pete, unfortunately, you don't just "add" a 256-bit bus. The external bus is such a crucial part of the chip that I'd wager it's set in stone very early in the design process (as opposed to other stuff).
     
  20. Pete

    Pete Moderate Nuisance
    Moderator Legend

    Joined:
    Feb 7, 2002
    Messages:
    5,777
    Likes Received:
    1,814
    Why haven't I heard about this before? 15% yields sound pretty awful for what should be a large chip, though I'm not sure if that's "working at top frequency" yields or just "working" yields (which they can shunt to slower, cheaper variants via their vertical chip-family offerings). They're even having trouble with the GF4MX, which I thought was their bread-and-butter card!

     