NVIDIA GT200 Rumours & Speculation Thread

Discussion in 'Architecture and Products' started by Arun, Feb 10, 2008.

Thread Status:
Not open for further replies.
  1. jimmyjames123

    Regular

    Joined:
    Apr 14, 2004
    Messages:
    810
    Likes Received:
    3
    I listened to all ~7 hours of the Financial Analyst presentation given by NVIDIA, and there was some very interesting and fascinating stuff being presented.

    Did you notice that JHH spoke about how NV would move from hundreds to thousands of "cores" during the presentation?

    He also said they were working on things that were miles ahead of the competition. Surely he must have meant being able to use CUDA to put all those parallel processors to good use in computational finance, medicine, weather, etc.?

    No wonder NV is so confident moving forward. Their architecture is solid enough to scale to thousands of cores over time, they have a really solid programming tool in CUDA to take full advantage of advanced GPU parallel processing, they have PhysX processing which can be incorporated into the GPU and make use of CUDA, and they have incredibly clever low-power, high-performance devices designed that can be used immediately in next-gen iPhones and such, in addition to everything else that we don't know about.

    Will be fascinating to see how things work out, but I can tell from the presentation that NVIDIA is amped about the future.
     
  2. LordEC911

    Regular

    Joined:
    Nov 25, 2007
    Messages:
    877
    Likes Received:
    208
    Location:
    'Zona
    For G92b most likely...
    GT200 is looking like a late Q3/Q4 launch.
    It always amazes me how these news articles always make so many assumptions based off a single quote.

    How is CUDA "miles ahead" of ATi's GPGPU efforts?
    ATi has had its solution out for quite a while, offers much better price/performance and performance/watt, and even has double precision, which Nvidia still hasn't managed to push out the door yet.
     
  3. Berek

    Regular

    Joined:
    Oct 17, 2004
    Messages:
    274
    Likes Received:
    4
    Location:
    Austin, TX
    Indeed... which is why I'm not holding my breath over any of it. Computex should reveal more, certainly.
     
  4. Pete

    Pete Moderate Nuisance
    Moderator Legend

    Joined:
    Feb 7, 2002
    Messages:
    5,777
    Likes Received:
    1,814
    Was that said in the context of single cor--er, GPUs, or is it safe to assume that falls under SLI (where 256 * 4 cards would qualify as "thousands")?

    Assuming "GT200" is packing "only" 256 "thingamabobs," that "is."
     
  5. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY
    why do you say that?

    Cuda as an SDK is miles ahead
     
  6. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,248
    Likes Received:
    3,417
    Actually it looks more and more like Q2/Q3 launch.
    And more and more like 240 SPs (in 10 clusters) with 512-bit bus.
    55nm refresh should be out in Q4 and will use 256-bit bus with GDDR5.
    Something like that...
     
  7. Arnold Beckenbauer

    Veteran Subscriber

    Joined:
    Oct 11, 2006
    Messages:
    1,756
    Likes Received:
    722
    Location:
    Germany
    IMO JHH didn't mean ATI/AMD, but INTEL. :smile:
     
  8. jimmyjames123

    Regular

    Joined:
    Apr 14, 2004
    Messages:
    810
    Likes Received:
    3
    I got the sense that JHH was almost definitely not referring to SLI, which makes things rather interesting given that they were talking about thousands of cores and several teraflops of computing power in the near future. Also, I seem to recall reading something a little while ago where NVIDIA talked about still developing "monolithic" GPUs for their high end moving forward.
     
  9. jimmyjames123

    Regular

    Joined:
    Apr 14, 2004
    Messages:
    810
    Likes Received:
    3
    Yes, I think JHH in almost every circumstance was referring to Intel when he referred to the "competition". Only in a few brief comments did he make any mention of AMD/ATI. In fact, he even stated that Dave Orton was "by far" the best competitor that NVIDIA has ever faced in their 15-year history, and he complimented the skills of the GPU architects at ATI.
     
  10. IbaneZ

    Regular

    Joined:
    Apr 15, 2003
    Messages:
    743
    Likes Received:
    17
    NH is guessing (?) too.
    http://www.nordichardware.com/news,7644.html

    All I want is a very fast card for the new Stalker game and Far Cry 2. :smile:
     
  11. Mart

    Newcomer

    Joined:
    Sep 20, 2007
    Messages:
    27
    Likes Received:
    0
    Location:
    Netherlands
    I know G80 has 8 clusters of 16 ALUs, but is each cluster one SIMD (a total of 8 processors, each 16 ALUs wide) or does a cluster consist of two SIMDs (16 processors, 8 wide)?

    With regards to the bandwidth of "GT200": GDDR3 at even 2100 MHz on a 512-bit bus doesn't seem like a major increase over the 8800GTX. It would even be slower than the 8800 Ultra. Would that be enough to feed 240 or even 256 ALUs? How did G80 fare with regards to bandwidth? I know that G92 seems to be quite restricted by its 256-bit bus. Won't similar problems arise if GT200 uses GDDR3?
     
  12. Megadrive1988

    Veteran

    Joined:
    May 30, 2002
    Messages:
    4,723
    Likes Received:
    242
    I don't understand why GT200 would be using GDDR3 when GDDR4 is available now.
     
  13. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    9,237
    Likes Received:
    4,260
    Location:
    Guess...
    1100MHz GDDR3 (2200MHz effective) gives 140.8GB/s on a 512-bit bus.

    That's quite a bit higher than the Ultra with only 103.7GB/s.

    If these specs are true, this thing sounds like a beast! 1.08 TFLOPS of shader power at 1500MHz!!
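    The arithmetic behind those figures is straightforward; here is a minimal Python sketch (the function names are my own, and the 3 FLOPs per SP per clock is the commonly assumed G80-style dual-issued MADD + MUL, so treat it as an assumption rather than a confirmed GT200 spec):

```python
def bandwidth_gbps(bus_width_bits, effective_mhz):
    # Peak memory bandwidth: bytes per transfer (bus width / 8)
    # times the effective transfer rate.
    return bus_width_bits / 8 * effective_mhz * 1e6 / 1e9

def shader_gflops(num_sps, shader_mhz, flops_per_clock=3):
    # Peak shader throughput, assuming each SP issues a MADD + MUL
    # (3 FLOPs) per clock, as commonly quoted for G80-class parts.
    return num_sps * shader_mhz * 1e6 * flops_per_clock / 1e9

print(bandwidth_gbps(512, 2200))   # rumoured GT200 -> 140.8 GB/s
print(bandwidth_gbps(384, 2160))   # 8800 Ultra    -> ~103.7 GB/s
print(shader_gflops(240, 1500))    # 240 SPs @ 1.5GHz -> 1080 GFLOPS = 1.08 TFLOPS
```

    The same two helpers reproduce both the 140.8GB/s and the 1.08 TFLOPS numbers quoted above, which suggests the rumoured specs are at least internally consistent.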
     
  14. aaronspink

    Veteran

    Joined:
    Jun 20, 2003
    Messages:
    2,641
    Likes Received:
    64
    JHH wouldn't know thousands of cores if it bit him on the ass. He certainly doesn't know anything about hundreds, or even a hundred cores.

    Really he's still in the low teens.

    Aaron spink
    speaking for myself inc.
     
  15. Megadrive1988

    Veteran

    Joined:
    May 30, 2002
    Messages:
    4,723
    Likes Received:
    242
    The current 128 "cores" (SPs) in G80 and G92 GPUs
    (256 in 9800 GX2 cards) aren't directly comparable to Intel's 16 to 24 cores in Larrabee.

    Nvidia will soon jump to 200 or more SPs in the GT200 / NV55 GPU, and with GX2 cards based on GT200 / NV55 we're looking at 400-500 SPs. By the time Larrabee ships in 2010, Nvidia will have products with 1000 or more SPs.

    Nvidia has 1000 SPs vs Intel's 16-24 (or maybe even 32) cores.

    Would seem like Nvidia has an overwhelming advantage, yet that's not really comparable.

    It's like saying Xbox 360's Xenos GPU has 48 shader pipelines when it really only has 8 (ROPs). Those pipelines are really just ALUs.

    Obviously what it'll come down to is not counting SPs, ALUs and cores, but overall final graphics performance. Larrabee might be better than current NV50/G80/G92-based products and even upcoming NV55/GT200-based products. It'll be interesting to see how Larrabee compares to Nvidia's true next-generation architecture (NV60, for lack of a better name, since Nvidia changes the way it names GPUs every month now), both in ray-tracing (if that happens) and, more importantly, in rasterization or even hybrid rendering, which will be more practical than ray-tracing alone.
     
  16. Arnold Beckenbauer

    Veteran Subscriber

    Joined:
    Oct 11, 2006
    Messages:
    1,756
    Likes Received:
    722
    Location:
    Germany
    http://www.overclockers.ru/hardnews/28879.shtml
    They say: GT200 is a monster...
    10 clusters with 24 "SPs" and 8 TMUs each -> 240 SPs and 80 TMUs in total
    512-bit memory interface with GDDR3
    32 ROPs
    Some kind of CFAA, possibly D3D10.1
     
  17. Rangers

    Legend

    Joined:
    Aug 4, 2006
    Messages:
    12,791
    Likes Received:
    1,596
    :lol:
     
  18. jimmyjames123

    Regular

    Joined:
    Apr 14, 2004
    Messages:
    810
    Likes Received:
    3
    har har :D

    Would it be accurate to describe them as thousands of parallel processing units?
     
  19. jimmyjames123

    Regular

    Joined:
    Apr 14, 2004
    Messages:
    810
    Likes Received:
    3
    This is very true. But regardless of the nomenclature NVIDIA uses for their GPUs ("cores", stream processors, parallel processing units, etc.), it's noteworthy that performance in many of the applications discussed during the 7-hour marathon session was said to scale linearly with the number of "parallel processing units" on the NVIDIA GPU. So starting with the current 128, going to 256 would be a 2x improvement, 512 a 4x improvement, and 1024 an 8x improvement. So if a current NVIDIA high-end GPU is already 200x faster than an Intel Core 2 Duo in some applications, then an NVIDIA GPU in the next two or three years would be 1600x faster than a Core 2 Duo. That's a pretty incredible gap that Intel needs to bridge over the next few years with Larrabee.
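    Under that linear-scaling assumption, the projection is simple proportionality; a quick sketch (the helper name is mine, and perfectly linear scaling is the presentation's claim, not a guarantee):

```python
def projected_speedup(base_speedup, base_units, future_units):
    # Assumes performance scales linearly with the number of
    # parallel processing units, as claimed for those workloads.
    return base_speedup * future_units / base_units

# 200x over a Core 2 Duo at 128 SPs, projected to 1024 SPs:
print(projected_speedup(200, 128, 1024))  # -> 1600.0
```

    Of course, real workloads rarely scale perfectly, so the 1600x figure is an upper bound under the stated assumption.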
     
    #519 jimmyjames123, Apr 16, 2008
    Last edited by a moderator: Apr 16, 2008
  20. Megadrive1988

    Veteran

    Joined:
    May 30, 2002
    Messages:
    4,723
    Likes Received:
    242
    True, but Nvidia isn't offering anything really new. Their basic TNT2 graphics accelerator was much, much faster at rendering graphics than a Pentium III processor. Larrabee will have to be optimized for rendering graphics, be it rasterization, ray-tracing or hybrid raster-raytracing, and other methods, so that its 16-24 or 32 cores can keep up with Nvidia's many hundreds or a thousand or so parallel processing units. We've seen that even 3 or 4 CELL processors in parallel are not up to the task of matching in software what a 4-year-old 6800 can do in hardware, as far as traditional rasterization goes (I am thinking back to one particular demo seen last year or in 2006).
     