GeForce 4 Next Gen?

Discussion in 'Architecture and Products' started by Clayberry, Feb 7, 2002.

  1. DemoCoder

    Veteran

    Joined:
    Feb 9, 2002
    Messages:
    4,733
    Likes Received:
    81
    Location:
    California
    It's definitely not an overclocked GF3, which is what the original poster was claiming. The fact is, it has two CRT outputs (dual monitor), two vertex shaders (another functional unit on the silicon), and a whole host of improvements to the memory and AA subsystems, such that it is able to *double* the performance of the GF3 Ti500 at high-res AA with only a marginal clock rate and memory bandwidth improvement.

    Even the GF2 wasn't an overclocked GF1. Featurewise, the cards are the same, but the GF2 added multitexturing pipelines.

    The GF3 Ti was an overclocked GF3. The "ultra" versions of NVidia's chips were overclocks. But the GF4 is clearly a revision to the silicon itself that required testing, debugging, a respin, etc.

    I can't see why someone who already owns a Radeon 8500 would upgrade to a 128MB version. What's the point? The 8500 does very well against the GF3, but I doubt another 64MB will make any big difference. The GF4 4200/4400 actually puts its extra 64MB of RAM to good use for high-resolution (1280x1024 or 1600x1200) FSAA, which in many game engines runs at usable framerates (at 1600x1200, the back buffer and Z buffer will eat 30+MB in 2x and 60+MB in 4x; the sketch below works through that arithmetic). The 8500, even with 128MB of memory, won't be able to do 1280x1024 or 1600x1200 FSAA with any reasonable performance due to its lack of multisampling.

    Oh well, we'll wait to see the 128MB 8500 overclock benchmarks, but I doubt it will make that big a difference.
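
    A minimal sketch of where those buffer figures come from, assuming a 32-bit colour back buffer and a 32-bit depth/stencil buffer that each store one value per multisample; the non-multisampled front buffer and any resolve surface are left out, which is why the totals land just under the quoted 30+/60+ MB:

    ```c
    #include <stdio.h>

    /* Approximate multisampled back-buffer + Z-buffer footprint.
     * Assumes RGBA8 colour (4 bytes) and 24-bit Z / 8-bit stencil (4 bytes),
     * with each buffer storing one value per subsample. */
    static double msaa_mb(int width, int height, int samples)
    {
        const double bytes_per_sample = 4.0 + 4.0;  /* colour + depth/stencil */
        double bytes = (double)width * height * samples * bytes_per_sample;
        return bytes / (1024.0 * 1024.0);
    }

    int main(void)
    {
        printf("1600x1200 2x: ~%.0f MB\n", msaa_mb(1600, 1200, 2)); /* ~29 MB */
        printf("1600x1200 4x: ~%.0f MB\n", msaa_mb(1600, 1200, 4)); /* ~59 MB */
        printf("1280x1024 4x: ~%.0f MB\n", msaa_mb(1280, 1024, 4)); /* ~40 MB */
        return 0;
    }
    ```

    At 1600x1200 4x there isn't much left of a 64MB card for textures, which is the point being made about the 128MB parts.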
     
  2. Dave Baumann

    Dave Baumann Gamerscore Wh...
    Moderator Legend

    Joined:
    Jan 29, 2002
    Messages:
    14,079
    Likes Received:
    648
    Location:
    O Canada!
    AFAIK, all they did was split off the TCUs. The GF256's TCUs were trilinear capable; the GF2's were dual bilinear.
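
    For a rough sense of what that split means for throughput, here is a back-of-the-envelope comparison; the pipe counts and clock speeds (4 pipes at roughly 120 MHz for the GF256 SDR and 200 MHz for the GF2 GTS) are assumptions for illustration, not figures from the post:

    ```c
    #include <stdio.h>

    /* Peak bilinear texel rate: one bilinear fetch per texture unit per clock. */
    static double mtexels_per_sec(int pipes, int units_per_pipe, int core_mhz)
    {
        return (double)pipes * units_per_pipe * core_mhz;
    }

    int main(void)
    {
        /* GF256: one trilinear-capable TCU per pipe (assumed 4 pipes @ 120 MHz). */
        printf("GF256: %.0f Mtexels/s bilinear\n", mtexels_per_sec(4, 1, 120));

        /* GF2: the TCU split into two bilinear units per pipe (assumed 4 pipes
         * @ 200 MHz), so dual-texturing fits in one pass, but single-pass
         * trilinear now ties up both units. */
        printf("GF2  : %.0f Mtexels/s bilinear\n", mtexels_per_sec(4, 2, 200));
        return 0;
    }
    ```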
     
  3. DemoCoder

    Veteran

    Joined:
    Feb 9, 2002
    Messages:
    4,733
    Likes Received:
    81
    Location:
    California
    Regardless, anytime you change the silicon, you've got a much different situation than speed-binning or rebranding. Sure, making a few minor changes might seem easy (although the move to a smaller manufacturing process was a biggie), but it costs way more than simply binning.

    If we're going to use the original poster's criteria, then NVidia hasn't done anything truly new since the GF1 (which added the GPU, and the rasterizer hasn't changed much since the TNT), and every 3dfx product ever released was nothing more than a Voodoo1. You can see the gradual, conservative evolution of both of those lines of hardware. Just look at the registers and block diagrams of the chips!
     
  4. rhink

    Newcomer

    Joined:
    Feb 9, 2002
    Messages:
    28
    Likes Received:
    0
    >>I wonder how much Intel paid for that revision?<<

    Seeing that Moore cofounded Intel, I doubt they had to pay him much...
     
  5. SlmDnk

    Regular

    Joined:
    Feb 9, 2002
    Messages:
    539
    Likes Received:
    86
    Talking about next gen...

    Have you guys seen the Codecult - Codecreatures Engine video?

    www.codecult.com

    Sort of takes the 3DMark 2001 Nature demo to the next level.
    You have to click your way to the Java applet first, then to the download/videos section.

    It's a 33.3MB MPG file.
     