A few questions on NV30, NV35

Discussion in 'Architecture and Products' started by Bigus Dickus, Aug 14, 2002.

  1. antlers

    Regular

    Joined:
    Aug 14, 2002
    Messages:
    457
    Likes Received:
    0
    In the past, nVidia has gone to some lengths to avoid putting an extra power connector on their cards, in part perhaps to make their technology more palatable to OEMs. The original GeForce 256s were at the very edge of compliance with the AGP spec (they really should have had an extra power connector), and the GeForce4 4x00s suck every bit of current they can get out of the bus.

    It's not unreasonable to suppose that when they planned the NV30, nVidia thought they would get market-crushing performance by staying within the AGP spec at .13 microns, and only later learned that the performance they envisioned when the chip was in its early planning stages wasn't so far out of the reach of their competitors. You would not say that their design was limited by target power consumption so much as by their original target clock speed, which was chosen on the basis of a lot of factors, including power consumption.
     
  2. RussSchultz

    RussSchultz Professional Malcontent
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,855
    Likes Received:
    55
    Location:
    HTTP 404
    Parhelia is more feature rich, hence more transistors.


    Anyways, transistor count is generally no indication of speed.
     
  3. alexsok

    Regular

    Joined:
    Jul 12, 2002
    Messages:
    807
    Likes Received:
    2
    Location:
    Toronto, Canada
    That puzzles me too...

    Interesting conclusion from Digit-Life about this:
    The whole review here:
    http://www.digit-life.com/articles/parhelia/index.html
     
  4. RussSchultz

    RussSchultz Professional Malcontent
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,855
    Likes Received:
    55
    Location:
    HTTP 404
    No, but drawing conclusions from it is. Speculation is taking past performance and assuming it equals future performance.

    If we want to do that, then let's stroll down history lane for everybody:

    TSMC has good yields
    ATI has crappy drivers
    NVIDIA's drivers r0x0rs

    See how pointless that is?

    (Of course, it's probably not as pointless as continuing this discussion)
     
  5. ERP

    ERP
    Veteran

    Joined:
    Feb 11, 2002
    Messages:
    3,669
    Likes Received:
    49
    Location:
    Redmond, WA
    I think the Digit-Life conclusion is bogus. VS2.0 doesn't add anything that would magically allow you to use more pipes.
    It's more likely that the issue is with instruction latency in the vertex shaders.
    Nvidia basically runs multiple vertices in parallel through each of the shaders to hide instruction latency; you can turn this off on the Xbox devkits and the performance drops like a stone.
    This is obviously just speculation on my part, and if this is the case, Matrox might be able to better utilise the pipelines in later drivers by rearranging instructions in the CompileShader call.

    Although I don't doubt that there are issues in Matrox's drivers that prevent Parhelia from performing as well as it can (especially in CPU-limited instances), my guess is that the issues with Parhelia are more likely related to a combination of cache performance and probably an over-reliance on the raw bandwidth advantage they have.
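    ERP's latency-hiding explanation can be sketched as a toy model. Everything below (a single pipe, a 4-cycle instruction latency, round-robin scheduling) is an assumption made up for illustration, not a documented description of any real GPU's scheduler:

```python
# Toy model of hiding vertex-shader instruction latency by interleaving
# several vertices through one pipe. All parameters are invented for
# illustration purposes only.

def batch_cycles(instructions: int, latency: int, interleave: int) -> int:
    """Cycles for `interleave` vertices to each run a chain of
    `instructions` dependent instructions on one pipe, round-robin."""
    # Consecutive instructions of the same vertex are issued at least
    # `interleave` cycles apart (round-robin), but must also be at
    # least `latency` cycles apart (operand dependency).
    spacing = max(interleave, latency)
    # The batch retires when the last vertex finishes its last instruction.
    return (instructions - 1) * spacing + interleave

LATENCY = 4
# One vertex at a time: the pipe stalls on every dependent instruction.
solo = batch_cycles(100, LATENCY, 1)    # 397 cycles for 1 vertex
# Four vertices interleaved: latency fully hidden, ~1 instruction/cycle.
packed = batch_cycles(100, LATENCY, 4)  # 400 cycles for 4 vertices
print(solo, packed)
```

    With enough vertices in flight to cover the latency, the same pipe retires roughly one instruction per cycle instead of one every `latency` cycles, which matches the "performance drops like a stone" observation when the interleaving is turned off.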
     
  6. pascal

    Veteran

    Joined:
    Feb 7, 2002
    Messages:
    1,968
    Likes Received:
    221
    Location:
    Brasil
    Bjorn

    Don't try to put words in my mouth.

    I have been as technical as I can. Don't turn this into some kind of fan dispute, because I am nobody's fan.

    Again, again and again:
    1- The R300 is using extra power beyond the AGP spec. From the performance standpoint that is an advantage, and you cannot deny it. How much of an advantage I don't know, but maybe a within-spec R300 would have to run at a 20% lower frequency.
    2- The R300 is probably a very good design from the microarchitecture and VLSI design standpoint. (Again, I am not saying that the NV30 has a bad design.)
    3- About the OEMs: maybe they will want cards that work within the AGP spec (and maybe is maybe; to be sure, contact them).

    About the metamorphosis you (edited: I mean Russ) described from nvidia engineers to monkeys, it is something that never crossed my mind :roll:

    We don't know enough about the NV30 to speculate about its performance; that is all I am saying. I am not saying it will be faster, much faster, or slower (from the sustained-performance viewpoint) because we don't know many things: is it using some kind of deferred rendering, 128 bits, any special trick? Is the additional logic slowing it down? How good are the architecture, the microarchitecture, and the VLSI design?

    You may just use your fan's faith and believe it is faster. You can believe the rumours about 48GB/s. You can believe many things, but from what we officially know (120M transistors on a .13 micron process) we can't conclude it will be much faster.

    edited: english again :oops:
     
  7. Joe DeFuria

    Legend

    Joined:
    Feb 6, 2002
    Messages:
    5,994
    Likes Received:
    71
    Sigh...

    I am NOT assuming past=future. I am saying that based on the past, it is REASONABLE TO SUGGEST that the future will be similar.

    Russ...YOU are the one who is closer to "drawing a conclusion" than I am. That's the point. You don't see anything that would even SUGGEST that NV30 clock speed might be lower. That MEANS you have already CONCLUDED certain things, for example, about those bases I suggested.

    FOR EXAMPLE:

    1) YOU are saying that "based on the past, it is NOT REASONABLE TO SUGGEST the future will be similar."

    2) YOU are saying that "based on the known 0.13 TSMC yield issues, it is NOT REASONABLE TO SUGGEST that they can impact shipping clock speeds relative to the design target."

    Do you REALLY think those SUGGESTIONS are UNREASONABLE?
     
  8. 8ender

    Newcomer

    Joined:
    Aug 2, 2002
    Messages:
    43
    Likes Received:
    0
    Location:
    London, Ontario
    I think we'll have to wait and see.

    As soon as the available data is exhausted, any good conversation will degenerate into conjecture.

    Hell, look at Anand's article; even he is guilty of posting rumors as fact.

    We have no idea what clock speed the NV30 is going to debut at, and yet his article concludes by saying the NV30 will have a superior clock speed.

    Either he knows something we don't or he's been fed a line.

    I think it's time to step back and say "let's wait and see" instead of trying to bury each other's opinions on a piece of silicon that isn't likely to see the light of day until December.
     
  9. pascal

    Veteran

    Joined:
    Feb 7, 2002
    Messages:
    1,968
    Likes Received:
    221
    Location:
    Brasil
    But it is fun :)
     
  10. martrox

    martrox Old Fart
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,065
    Likes Received:
    16
    Location:
    Jacksonville, Florida USA
    It is? Oh, wait, you're right, IT is fun..... :roll:
     
  11. Nagorak

    Regular

    Joined:
    Jun 20, 2002
    Messages:
    854
    Likes Received:
    0
    And there's nothing to suggest it will be besides speculation, so where does that leave us? :-?

    Bottom line:

    1) Everyone expected the R9700 to run at 250 MHz or less. People suggested it was impossible for a 0.15 micron chip of that complexity to run at 325 MHz.

    2) No one ever suggested the NV30 would be almost 200% as fast as a 250 MHz Radeon and run at 400 MHz. Everyone's expectation was that the NV30 would run at around 300 MHz, with the process shrink basically negating the increased complexity.

    3) ATi pulled a rabbit out of their hat and got their part running at 325 MHz. That was an unexpected surprise.

    Now everyone expects the NV30 to run at 400 MHz just because ATi got their chip running at 325 MHz? All the while they're getting yields of 15%.
     
  12. RussSchultz

    RussSchultz Professional Malcontent
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,855
    Likes Received:
    55
    Location:
    HTTP 404
    Yeah, that's exactly what I said. I am so full of shit, aren't I?

    Did you even read the whole thread? I hate to be combative here, but constantly, CONSTANTLY, people don't read half of what's written and then invent the other half. I said nothing of the sort.
     
  13. Bjorn

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    1,775
    Likes Received:
    1
    Location:
    Luleå, Sweden
    And again, it's just that we're seeing things a little differently.
    I see it as something they had to do because of the 0.15 micron process. You seem to compare it with adding a turbo to an engine.

    Ok, but if you're not saying that the NV30 has a bad design (maybe not bad, but at least not as fine-tuned as ATi's), how can you
    say that this is an advantage for the R300?
    What we do know, though, is that the NV30 uses a 0.13 micron process vs 0.15 for the R300.

    Maybe they will. But what we're talking about here is whether Nvidia would risk their performance crown (when the NV30 is released :))
    just to keep within the AGP power spec.

    I wouldn't bring any "fan's faith" and stuff like that into this discussion, especially since I haven't said that I believe in any of those things.
    And yet again (yes Russ, I share your tears :)), NOBODY has drawn any conclusions here. Would you mind quoting any of these supposed conclusions?
     
  14. mboeller

    Regular

    Joined:
    Feb 7, 2002
    Messages:
    923
    Likes Received:
    3
    Location:
    Germany
    Can you please edit out that sentence? Why disqualify your arguments with such an idiotic statement?

    thanks
     
  15. pascal

    Veteran

    Joined:
    Feb 7, 2002
    Messages:
    1,968
    Likes Received:
    221
    Location:
    Brasil
    But without this extra power they would probably have to lower the core frequency to something like 250MHz, so it really helps. This thing probably generates ~70% more heat than the GF4 Ti4600 :eek:

    What I was trying to say is that we cannot assume the NV30 has microarchitectural advantages; maybe it has some graphics-architecture advantages we don't know about.

    We never know.

    Then what are we discussing? :-?
     
  16. 8ender

    Newcomer

    Joined:
    Aug 2, 2002
    Messages:
    43
    Likes Received:
    0
    Location:
    London, Ontario
    Maybe I'm wrong, but isn't more heat produced by higher voltages rather than higher clock speeds?

    I seem to remember something to this effect from overclocking CPUs.
     
  17. pascal

    Veteran

    Joined:
    Feb 7, 2002
    Messages:
    1,968
    Likes Received:
    221
    Location:
    Brasil
    More heat can be produced by one or more of the factors below:
    -vcore voltage (1v, 1.2v, 1.5v, etc...)
    -core frequency (200MHz, 300MHz, etc...)
    -fabrication process (.15 micron, .13 micron, etc...)
    -number of transistors (63M, 107M, 120M)
    -pattern of transistor activity
    -I/O bus activity (RDR, SDR, DDR, 256 bits, 128 bits, 64 bits, etc...)

    IIRC the R300 has more or less the same core frequency as the GF4 Ti4600, the same fabrication process (TSMC .15 micron), probably the same vcore (a guess), but it has 70% more transistors (107/63 = 1.7) and twice the I/O activity (256-bit memory bus). I suppose the activity pattern is not much different (a lot of parallel activity), so a guesstimate of 50~70% more power consumption.

    But remember, it is a guess.
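    pascal's guesstimate can be written out as a back-of-envelope calculation. This is a sketch of the usual dynamic-power scaling relation, using the thread's guessed figures rather than any measured data:

```python
# Back-of-envelope dynamic power scaling: P ~ alpha * vcore^2 * f * N,
# where N is the transistor count and alpha the activity pattern.
# A ratio of 1.0 means "assumed equal", per the post's guesses.

def power_ratio(v: float = 1.0, f: float = 1.0,
                n: float = 1.0, alpha: float = 1.0) -> float:
    """Relative power of chip B vs chip A under a simple CMOS model."""
    return alpha * v ** 2 * f * n

# R300 vs GF4 Ti4600: same process, ~same clock, assumed same vcore,
# 107M vs 63M transistors (figures from the post above):
print(round(power_ratio(n=107 / 63), 2))  # ~1.7, i.e. ~70% more
```

    The 256-bit memory bus would add I/O power on top of this core-only estimate, which is why the post hedges the final number at 50~70%.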
     
  18. Dio

    Dio
    Veteran

    Joined:
    Jul 1, 2002
    Messages:
    1,758
    Likes Received:
    8
    Location:
    UK
    pascal covers this and more, but I thought I'd add a touch more detail:

    IIRC, heat is proportional to the square of the voltage and to the total number of transistor state changes per unit time (so, all else being equal, it is proportional to frequency).

    This is why clock gating (not clocking any unused part of the chip) is a big thing in mobile parts.
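    As an illustrative sketch of why clock gating helps (the activity factor and all numbers below are invented for the example): dynamic power is roughly P = alpha * C * V^2 * f, where alpha is the fraction of the chip actually toggling each cycle, and gating idle units drives alpha down.

```python
# Dynamic CMOS power: P = alpha * C * V^2 * f.
# alpha = fraction of transistors switching per cycle; clock gating
# pushes alpha toward zero for idle blocks. Numbers are invented.

def dynamic_power(alpha: float, c: float, v: float, f: float) -> float:
    return alpha * c * v ** 2 * f

ungated = dynamic_power(alpha=1.0, c=1.0, v=1.5, f=300e6)
gated = dynamic_power(alpha=0.4, c=1.0, v=1.5, f=300e6)  # 60% of chip idle
print(gated / ungated)  # power falls in proportion to activity
```

    Voltage and frequency being multiplied together is also why mobile parts lower both at once: dropping them in tandem cuts power roughly with the cube of the scaling factor.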
     
  19. 8ender

    Newcomer

    Joined:
    Aug 2, 2002
    Messages:
    43
    Likes Received:
    0
    Location:
    London, Ontario
    Aah, thank you; perhaps what I knew was an urban myth of sorts.

    But it is related to power consumption, correct?

    Perhaps I was thinking of CPUs, where voltage is the main factor affecting heat output.
     
  20. RussSchultz

    RussSchultz Professional Malcontent
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,855
    Likes Received:
    55
    Location:
    HTTP 404
    Power (heat) scales with the square of voltage.

    Increasing the voltage by 10% increases the power by 21%.

    So yes, voltage is a big knob when it comes to heat generation, in CPUs and all electrical things.
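    The 10% → 21% figure follows directly from the square law; as a trivial arithmetic check:

```python
# Power scales with voltage squared, so a 10% voltage increase gives:
increase = 1.10 ** 2 - 1  # (1.1)^2 - 1 = 0.21, i.e. 21% more power
print(f"{increase:.0%}")
```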
     