NV30 Update

Discussion in 'Architecture and Products' started by CMKRNL, Sep 6, 2002.

  1. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
The GF256 DDR a refresh? Now that's a laugh. The GeForce DDR was the exact same chip as the SDR. The NV11 was the GeForce2 MX. There was never an NV16. In fact, the DDR was officially released at the same time, even though availability wasn't there just yet (actually, it was so hard to even get a GeForce SDR for so long without preordering that I was able to buy a GeForce DDR before an SDR...).

    Generally, every part with a different number is architecturally different. Quick example:

    NV10: 4 single-pixel trilinear pipes (.22um)
    NV11: 2 dual-pixel bilinear pipes (.18um)
    NV15: 4 dual-pixel bilinear pipes (.18um)
    NV17: 2 dual-pixel bilinear pipes, aniso + MSAA (.18um)

    If the parts just have different clocks, they have the same part number.

    Now that's kind of funny, as the NV20 made up the GeForce3/Ti cards, while the NV25 was the GeForce4 Ti line.

No matter which way you slice it, the GeForce4 Ti cards not only were to have a 6-month life span (which no new architecture from nVidia has had...), but they also follow essentially the same programming path as the GeForce3 line of cards. Most of the advancements were in performance, with a couple in terms of image quality (namely the Quincunx/4x9 FSAA modes).
     
  2. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    Well, now that I've actually looked at the original PDF, I'll point out something nobody else here has:

    Notice the "dual edge clocking" marker on the Gen2->Gen4? That means memory interface to me, though those clocks are still rather...off... In particular, the Gen1 clock seems low for a memory interface (memory clocks of the TNT/TNT2 cards: 120MHz, 150MHz, 175MHz, 183MHz).

    In other words, particularly after seeing the actual PDF, I see even less reason to think that this means anything in terms of the core clock of the NV30.
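The arithmetic behind reading "dual edge clocking" as a memory interface can be sketched quickly. This is an illustrative calculation only: DDR signalling transfers data on both the rising and falling clock edge, so the effective data rate is twice the base clock. The clocks below are the TNT/TNT2-era memory clocks quoted above; the 128-bit bus width is an assumption for the example, not something stated in the PDF.

```python
def ddr_bandwidth_mb_s(clock_mhz: float, bus_bits: int = 128) -> float:
    """Peak bandwidth in MB/s for a double-data-rate bus."""
    transfers_per_sec = clock_mhz * 1e6 * 2      # two transfers per clock edge pair
    return transfers_per_sec * (bus_bits / 8) / 1e6

# TNT/TNT2-era memory clocks from the post above, treated as if DDR
for clk in (120, 150, 175, 183):
    print(f"{clk} MHz, dual-edge, 128-bit bus -> {ddr_bandwidth_mb_s(clk):.0f} MB/s")
```

The point being: even at these low-looking base clocks, dual-edge transfers give memory-interface-class bandwidth figures, which is why the number reads as a memory clock rather than a core clock.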
     
  3. mboeller

    Regular

    Joined:
    Feb 7, 2002
    Messages:
    923
    Likes Received:
    3
    Location:
    Germany
I read it that way too at first, but the speeds are way off. So to me it only indicates that these chips support DDR SDRAM, nothing more.
     
  4. noko

    Regular

    Joined:
    Feb 10, 2002
    Messages:
    502
    Likes Received:
    0
    Location:
    Eustis Florida
I don't understand the number change from 120 million to 100 million transistors. When did that happen? That sounds like a major design change to me.

Now I hope that NV30 supports N-patches; I see that as a very smart and efficient means of increasing model complexity and, subsequently, IQ. Yes, if you turn on TRUFORM on models that are not designed for N-patches you can get that balloon-looking effect. I don't think RTCW has any problems when TRUFORM is used; it does increase model smoothness and has better lighting. I see TRUFORM as another piece of the puzzle in creating more realistic environments for gameplay.

    I got a kick out of the last statement made by Nvidia... ahem, from their slide show.
    :) Sorry, I just had to quote them on their word. You see, Nvidia is truthful :wink:.
     
  5. CMKRNL

    Newcomer

    Joined:
    Jul 12, 2002
    Messages:
    91
    Likes Received:
    0
    I'm not sure we should read too much into the 100+ M transistor count statement. My understanding is that the decision to drop the primitive processor was made early in the design. I'm fairly positive the 120M transistor count declaration came well after that decision. My guess is that it will most likely ship with around 120M transistors.
     
  6. Doomtrooper

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    3,328
    Likes Received:
    0
    Location:
    Ontario, Canada
    What a load of bullshit. You took my screenshots out of context; as you can see from the shots, I was trying to show the improvement to models, and I always stated this. It was an after-the-fact hack by Croteam AND required exclusions to textures to stop some of the effects.

    I even linked to the article by Croteam themselves, which praised TRUFORM while saying the same thing about it being an after-the-fact hack to show off the technology... but you took my screenshots and ran with them all over the net... typical :roll:

If a game is designed around it, like RTCW, the image quality is greatly improved... what a concept.

The improvement to IQ is no different from FSAA, yet you downplayed it because Nvidia doesn't have it... it is as simple as that.
     
  7. noko

    Regular

    Joined:
    Feb 10, 2002
    Messages:
    502
    Likes Received:
    0
    Location:
    Eustis Florida
TRUFORM, or should I say N-patches, is a big improvement when done right, just like any other feature. It is an API feature which ATI supports and Nvidia has yet to support. I am looking forward to my Radeon 9700 and using TRUFORM (or just plain N-patches). N-patches can make a scene or model look utterly realistic in cases that would be impossible or unworkable with high-poly models, which put a large overhead on the AGP bus and on slower cards. This is a feature that we should be promoting.
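TRUFORM is ATI's implementation of curved PN triangles: the GPU tessellates each flat triangle into many smaller ones whose vertices are bent toward the vertex normals, so detail is added on-chip without sending high-poly meshes over the AGP bus. A minimal sketch of the published PN-triangle edge control-point construction might look like this; it is an illustration of the technique, not ATI's actual hardware path, and the example values are made up.

```python
def dot(a, b):
    """Dot product of two 3-vectors given as lists."""
    return sum(x * y for x, y in zip(a, b))

def edge_control_point(p1, n1, p2):
    """Cubic Bezier control point near p1 on the edge p1->p2.

    Following the PN-triangle scheme: b210 = (2*p1 + p2 - w12*n1) / 3,
    with w12 = (p2 - p1) . n1, i.e. the linear edge point is projected
    back into the tangent plane defined by p1's normal.
    """
    w12 = dot([b - a for a, b in zip(p1, p2)], n1)
    return [(2 * a + b - w12 * n) / 3 for a, b, n in zip(p1, p2, n1)]

# A flat edge along x, but the normal at p1 tilts toward +x: the control
# point is pushed off the straight edge, so the tessellated vertices lie
# on a curved surface instead of the original flat face.
p1, n1, p2 = [0.0, 0.0, 0.0], [0.6, 0.0, 0.8], [3.0, 0.0, 0.0]
print(edge_control_point(p1, n1, p2))
```

When the normals agree with the flat face (w12 = 0), the control points stay on the straight edge and tessellation reproduces the original triangle, which is why well-authored normals matter and badly-authored ones give the "balloon" effect mentioned above.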
     
  8. Bigus Dickus

    Regular

    Joined:
    Feb 26, 2002
    Messages:
    943
    Likes Received:
    16
BING! And the light of comprehension shines in Chalnoth's head. That's precisely why I call it a refresh... NVIDIA's refresh parts have always been essentially the same core as the previous one. This started with the TNT2 to TNT2 Ultra: same core, but the refresh part was clocked higher. This is all NVIDIA's "refreshes" have ever been. I don't understand how some people have convinced themselves that NVIDIA actually releases a new chip every 6 months.

    To reiterate the point: the GF4 isn't a "refresh" part; that was the GF3 Ti and whatever the NV28 will be.

    I was making a guess there, as indicated by the "?" immediately following it. In any case, the DDR hit the shelves later... but it is really the only blip in NVIDIA's otherwise consistent schedule.

    I thought I had read that the GF2 Ultra core was the NV16. Oh well, I was talking about the GF2 Ultra, whatever the name of the core might have been.

    Note: I'm leaving out all of the "MX" series of cards in this sequence, since they muddy the water considerably.

    Let me try to make this clearer:

    TNT2 - refresh: TNT2 Ultra
    GF256 - refresh: GF256 DDR (odd one in the series)
    GF2 GTS - refresh: GF2 Ultra
    GF3 - refresh: GF3 Ti500
    GF4 - refresh: NV28

As you can see, ever since the TNT2 the refresh part has been the same core. The NV28 will most likely deviate by adding AGP 8X support (a very minor change).

So when you call the GF4 the "6-month refresh," I just have to ask... a refresh of what? The GF3 was a year before, not 6 months. If you mean a refresh of the GF3 Ti, then what does that make the Ti? The GF4 isn't a refresh, because (and perhaps that light is still on) its core is actually an improvement over the previous one.
     
  9. alexsok

    Regular

    Joined:
    Jul 12, 2002
    Messages:
    807
    Likes Received:
    2
    Location:
    Toronto, Canada
    Bigus Dickus: I assume that your definition of "refresh" is when only the clock speeds are higher (GF3 --> GF3 Ti, for example), while the GF4 Ti is an improved core with some other improvements besides higher clock speeds, which is why you don't define it as a "refresh".

    If that's the case, then I do understand your line of thinking, and it's logical...
     
  10. noko

    Regular

    Joined:
    Feb 10, 2002
    Messages:
    502
    Likes Received:
    0
    Location:
    Eustis Florida
GF4 isn't a refresh; it is a new core with mediocre improvements at best. A refresh of the GF4 should be coming, which I don't think will enhance anything over the GF4 except a faster core/memory and AGP 8x. Nvidia wasted time and money on it as far as I am concerned. The NV30 is the next generation from Nvidia after the GF3/GF4 era, looking at around 2 years to do. Nvidia is falling behind ATI at this stage.
     
  11. alexsok

    Regular

    Joined:
    Jul 12, 2002
    Messages:
    807
    Likes Received:
    2
    Location:
    Toronto, Canada
NV28 will be the NV25 with AGP 8x support and probably higher clock speeds.

    nVidia didn't waste any money or time on the NV28: the NV28 will replace the NV25, and the costs to produce it will be much lower, which is the main reason nVidia is getting it out.

    nVidia is behind ATI at this stage, but there are reasons for that, the main one being that they focused on nForce and the NV2A in the past.

    Everything will return to its usual pace and routine with the NV30.
     
  12. Bigus Dickus

    Regular

    Joined:
    Feb 26, 2002
    Messages:
    943
    Likes Received:
    16
    Yes, that's my logic.

If NVIDIA hadn't produced a string of clock-bumped cards in between their roughly year-apart new cores, I wouldn't have that opinion.
     
  13. noko

    Regular

    Joined:
    Feb 10, 2002
    Messages:
    502
    Likes Received:
    0
    Location:
    Eustis Florida
I hope so; still, it will compete against a DX9-capable card, the Radeon 9500. In short, the NV28 may not look to be a good option at its price unless it is a sub-$250 card from the get-go. Something else Nvidia has to overcome.
     
  14. alexsok

    Regular

    Joined:
    Jul 12, 2002
    Messages:
    807
    Likes Received:
    2
    Location:
    Toronto, Canada
    Think of it like this:

NV28: NV25 with AGP 8x and higher clock speeds. Since the costs to produce it will be lower, the price will also be lower, especially when the R9500 is released. Also, I expect the NV28 to be faster in certain situations than the R9500, since it's 4x2 where the R9500 is 4x1, although it's hard to deny the fact that the R9500 is a DX9-capable card...

    In my opinion, NV28 is nVidia's mainstream product until a mainstream NV30 solution is brought to the market.
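The 4x2 vs 4x1 argument above is simple fillrate arithmetic: a 4x2 design applies two textures per pipeline per clock, so multitextured fillrate doubles at equal clocks. A quick sketch, using a placeholder 300 MHz clock rather than actual NV28 or R9500 specs:

```python
def texel_fillrate_mtexels(pipes: int, tex_units_per_pipe: int, clock_mhz: int) -> int:
    """Peak multitextured fillrate in megatexels/s: pipes x TMUs x clock."""
    return pipes * tex_units_per_pipe * clock_mhz

# Placeholder clock; only the 2:1 ratio between the designs is the point.
print(texel_fillrate_mtexels(4, 2, 300))  # 4x2 design (NV25/NV28-style)
print(texel_fillrate_mtexels(4, 1, 300))  # 4x1 design (R9500-style)
```

The advantage only shows up when games actually layer two or more textures per pixel; single-textured fillrate (pipes x clock) is identical for both layouts.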
     
  15. Dave Baumann

    Dave Baumann Gamerscore Wh...
    Moderator Legend

    Joined:
    Jan 29, 2002
    Messages:
    14,090
    Likes Received:
    694
    Location:
    O Canada!
    Supposition.
     
  16. Geeforcer

    Geeforcer Harmlessly Evil
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,320
    Likes Received:
    525
Let me help YOU with history. The Radeon was released at the end of summer 2000, around the same time of year that the R8500 and R9700 were released in the following years. The GTS was released in spring 2000, to compete with the V5.
     
  17. Doomtrooper

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    3,328
    Likes Received:
    0
    Location:
    Ontario, Canada
    Well, I could not get a GeForce2 GTS in Canada when the Radeon 1 was released. In fact, I ended up getting a Radeon 1 after a PowerColor GeForce2 MX crapped out; I RMA'd it and got a GTS a month later, which also crapped out, both with memory failures... so I RMA'd that and got a 64MB Radeon for the same price as my 32MB GTS...

    So where I live they were not available; maybe in the US, but not in Canada... and I had A LOT of suppliers...

    A good example is this Radeon 9700 launch: even though ATI is a Canadian-based company, the US market was served first, and most large online shops here are only shipping now, Sept 10th, while a lot of US consumers have had theirs for weeks.

    Kind of sad, really, that ATI ignores us even though they do business here, but they are out to make money, and the US market is much larger than little ol' Canada.
     
  18. Geeforcer

    Geeforcer Harmlessly Evil
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,320
    Likes Received:
    525
    I understand the regional variations in availability, but the point stands: GF2 was finished several months prior to Radeon, giving ATI extra time to improve on their design.
     
  19. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    The TNT2 Ultra was no more a refresh of the TNT2 than the GeForce4 Ti 4600 was a refresh of the GeForce4 Ti 4400.

    Your definition of refresh is obviously quite different from what is normally used, and is thus wrong.

    Here's a little bit of history:

    1. TNT -> TNT2: die shrink, increased 32-bit performance
    2. GeForce -> GeForce2: die shrink, performance optimizations, two bilinear textures per pixel.
    3. GeForce3 -> GeForce4: second vertex shader unit, performance/visual quality optimizations, a few more pixel shader instructions.

    These are what are normally termed as the refreshes from nVidia. The GeForce3 Ti and GeForce2 Pro/Ultra were only termed as "refreshes" because they came out at about the time people thought refreshes should come out. Still, the GF2 Ultra sort of counts as a refresh because it was done on a slightly different process at TSMC.

    What you are describing as "refreshes" would be more accurately described as "re-releases."

    And regardless of which way you slice it, the GeForce4 was based on the GeForce3 core, and was not an entirely-new architecture.
     
  20. Doomtrooper

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    3,328
    Likes Received:
    0
    Location:
    Ontario, Canada
Oh I see...

Well, these... ummm... refresh parts were marketed as NEW cards, including the GeForce2 PRO and of course the ULTRA (another $700 Canadian card)... and then there is the GeForce3 Ti, which was a die shrink and... oh yes, enabled features the first GeForce3 was marketed as having...

    Silly us.
     