Official GT200 Reviews/Benchmarks Thread

Discussion in '3D Hardware, Software & Output Devices' started by apoppin, Jun 16, 2008.

  1. apoppin

    Regular

    Joined:
    Feb 12, 2006
    Messages:
    255
    Likes Received:
    0
    Location:
    Hi Desert SoCal
    Hi Jim .. it has been a long time; and no more ATF for me! Best of all, i joined "emoticon anonymous" and have my usage mostly under control here
    [ i have my own forum now: http://www.mcowen.com/forums/index.php
    --We just launched it last month; main site to follow in August and we will hit the search engines then - anything goes! - and i can post anything i want there .. so i can be extra serious here]

    Yes, i really believe the Tesla architecture is the basis for Nvidia's new GPUs and is a very solid foundation. i expect they are betting the farm on this one, but they put the price of the 280 too high imo.

    Nvidia may have the fastest single monster GPU, but their older tech in SLi competes with it at a lower price. I expect that the drivers will continue to improve in the Tesla GPUs and there will be compelling features for gamers, sooner or later .. probably we will see PhysX implemented in games and i am looking forward to it.

    But they did such a good job with G80's GTX that many of us are not in any hurry to upgrade, and i am hoping that the price drops quickly - and of course i am dying to see the 4870 in Crossfire, to see how the X2 will go against the 280.

    i just have a feeling that Nvidia will have to unleash their "sandwich" to keep the performance crown with the GPU shrink [my personal prediction for many weeks]. And thus we should see continued downward price pressure on Nvidia's GTX; probably what Nvidia dreads and what i am really hoping for.

    So i need to look for a larger LCD pretty soon. My personal problem is that i am nearsighted and a 30" monitor is physically too large for me. i tried 27" and it is still too big. i would love 25x16 in a smaller screen; else i will settle for a 24" 19x12.
     
  2. mczak

    Veteran

    Joined:
    Oct 24, 2002
    Messages:
    3,022
    Likes Received:
    122
    Had it been launched a week earlier, no one would have called the GTX 260 "good value", that's for sure...
    Interesting that there's no mention of memory clock, however. At the price point it's going to hit, I guess it can't really have faster memory than the 9800GTX (1100MHz), but if it's slower, the 9800GTX+ probably won't be faster than the 9800GTX and will be even less of a threat to a 4850...
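    [As a rough sanity check on the memory-clock guesswork above: peak bandwidth is just bus width times effective data rate. A quick editor's sketch - the 9800GTX figures below are the commonly quoted specs, not measurements:]

    ```python
    def bandwidth_gb_s(bus_bits, effective_gt_s):
        # Peak memory bandwidth = bus width in bytes x effective per-pin
        # transfer rate (GT/s), giving GB/s
        return (bus_bits / 8) * effective_gt_s

    # 9800GTX: 256-bit bus, 1100MHz GDDR3 (double data rate -> 2.2 GT/s effective)
    print(round(bandwidth_gb_s(256, 2.2), 1))  # -> 70.4 (GB/s)
    ```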
     
  3. Pete

    Pete Moderate Nuisance
    Moderator Legend

    Joined:
    Feb 7, 2002
    Messages:
    5,777
    Likes Received:
    1,814
    :oops:, and thanks.

    I finally looked at Xbit's About page, and I see that, while they're Russian and partly based in Russia, they seem to be an English-only site, so no Russian home page for early #s.
     
  4. apoppin

    Regular

    Joined:
    Feb 12, 2006
    Messages:
    255
    Likes Received:
    0
    Location:
    Hi Desert SoCal
    Well, i expect a price war now ... the 4850 does very well as a value card against the current GTX and the GX2; i am dying to see the 4870 [X2!!] .. now!
    Here are some comparisons including the GTX 280 and GX2 [i am looking now]

    http://www.computerbase.de/artikel/hardware/grafikkarten/2008/kurztest_ati_radeon_hd_4850_rv770/

    http://www.hardware-infos.com/news.php?news=2161

    http://www.techpowerup.com/reviews/Powercolor/HD_4850

    even with AA [although admittedly at middling resolutions] !!

    yep, i know this is the GT2X0 thread - and the GTX 280 is included in the benches .. but it now looks to have some [really surprising to me] competition from a sub-$200 card!
    It looks like 800 shaders and 40 TMUs is official:
    http://www.hardware-infos.com/news.php?news=2161

    Now i AM waiting .. with no "left out" feelings, whatsoever!
    - i want to see the coming price war; and Nvidia appears to be stuck competing with a very expensive GPU.
    interesting and exciting to me .. after the debacle that was R600
     
    #84 apoppin, Jun 20, 2008
    Last edited by a moderator: Jun 20, 2008
  5. Arnold Beckenbauer

    Veteran Subscriber

    Joined:
    Oct 11, 2006
    Messages:
    1,756
    Likes Received:
    722
    Location:
    Germany
  6. XMAN26

    Banned

    Joined:
    Feb 17, 2003
    Messages:
    702
    Likes Received:
    1

    Looking at the numbers from Xbit, it really isn't all that much better than the 8800U. Drivers, or should they have simply waited for the die-shrunk version? I'm thinking the latter, as it would have allowed them to make the drivers that much better.
     
  7. Twinkie

    Regular

    Joined:
    Oct 22, 2006
    Messages:
    386
    Likes Received:
    5
  8. AnarchX

    Veteran

    Joined:
    Apr 19, 2007
    Messages:
    1,559
    Likes Received:
    34
  9. Arty

    Arty KEPLER
    Veteran

    Joined:
    Jun 16, 2005
    Messages:
    1,906
    Likes Received:
    55
    Apart from clock increases, the GT200b should also bring lower power consumption and an overall shorter card. If it's priced right, I will pick it up. That is, if I can manage to hold off from the 4870. :)
     
  10. apoppin

    Regular

    Joined:
    Feb 12, 2006
    Messages:
    255
    Likes Received:
    0
    Location:
    Hi Desert SoCal
    of course -- if you feel lucky; i hate O/C'ing GPUs although i routinely OC my CPUs. You can undoubtedly do the same with most cards, but you get a lot of extra heat and possibly a short-lived card. But then the "b" will also OC further than the current overpriced one :razz:
    [imo]
     
  11. chavvdarrr

    Veteran

    Joined:
    Feb 25, 2003
    Messages:
    1,165
    Likes Received:
    34
    Location:
    Sofia, BG
    IMHO, G200b will draw as much power as they can get away with out of the box (so ~G200) - but it will ship with clocks as high as possible.
     
  12. Oushi

    Newcomer

    Joined:
    Nov 14, 2005
    Messages:
    34
    Likes Received:
    1
    Location:
    EG
    What about GT200 on 45nm? Can it be done?
     
  13. neliz

    neliz GIGABYTE Man
    Veteran

    Joined:
    Mar 30, 2005
    Messages:
    4,904
    Likes Received:
    23
    Location:
    In the know
    Yes, it can be done.. the question is.. will we see it before 1H 2009?
     
  14. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,716
    Likes Received:
    2,137
    Location:
    London
    What will be interesting is what happens if there's only space for a 256-bit bus, necessitating the use of GDDR5.

    With half the bus width this chip will also have half the ROPs.

    Will half the ROPs, presumably at considerably greater clocks than GTX280, be enough?

    Alternatively, of course, NVidia could change the ratio of ROPs to units of bus width, i.e. put twice as many ROPs into each partition.

    Jawed
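    [The bus-width question above lends itself to quick arithmetic. An editor's sketch, assuming the hypothetical 256-bit part would pair with "2.2GHz" GDDR5, i.e. 4.4 GT/s effective; the GTX 280 numbers are its quoted specs:]

    ```python
    def bandwidth_gb_s(bus_bits, effective_gt_s):
        # Peak bandwidth = bus width in bytes x effective per-pin transfer rate
        return (bus_bits / 8) * effective_gt_s

    # GTX 280 as shipped: 512-bit bus, 1107MHz GDDR3 -> 2.214 GT/s effective
    gtx280 = bandwidth_gb_s(512, 2.214)
    # Hypothetical shrink: 256-bit bus, GDDR5 at 4.4 GT/s effective
    shrink = bandwidth_gb_s(256, 4.4)
    print(round(gtx280, 1), round(shrink, 1))  # -> 141.7 140.8
    # i.e. half the bus with fast GDDR5 roughly matches GTX 280's bandwidth,
    # so the open question is the ROPs, not the GB/s
    ```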
     
  15. ShaidarHaran

    ShaidarHaran hardware monkey
    Veteran

    Joined:
    Mar 31, 2007
    Messages:
    4,027
    Likes Received:
    90
    Precisely. I shake my head every time I read a so-called "pundit" or "enthusiast" speak of a future shrunken GT200 using a 256-bit bus and GDDR5. Sure, that addresses the bandwidth aspect of the equation, but as you mention, NV's ROP partitions are tied to memory controller channels (1:1), so unless NV's got some "super ROPs" in the works that can achieve twice the current workload, it just isn't going to happen.
     
  16. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,716
    Likes Received:
    2,137
    Location:
    London
    But are they?

    If NVidia re-architects for GDDR5 then perhaps they can double them?

    Alternatively, maybe the ROPs are just waiting for more bandwidth. I've long argued that GDDR3 is too slow per ROP for NVidia's architecture (even more so for RV770) - there's a load of performance just going unused. So it may turn out that switching to GDDR5 unleashes the ROPs sufficiently that halving their count, with a 20-30% clock boost, still results in at least GTX280 performance, despite having only ~80% of the bandwidth (i.e. less bandwidth in total).

    Jawed
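    [The per-ROP bandwidth argument above can be put into numbers. An editor's sketch, using the GTX 280's quoted 32 ROPs and ~141.7 GB/s, and the ~80% total-bandwidth figure from the post:]

    ```python
    # GTX 280 as shipped: 32 ROP units sharing ~141.7 GB/s of GDDR3 bandwidth
    per_rop_gtx280 = 141.7 / 32

    # Hypothetical GDDR5 part: half the ROPs, ~80% of the total bandwidth
    per_rop_shrink = (141.7 * 0.8) / 16

    print(round(per_rop_gtx280, 1))  # -> 4.4 (GB/s per ROP)
    print(round(per_rop_shrink, 1))  # -> 7.1 (GB/s per ROP)
    # Each surviving ROP would see ~60% more bandwidth - the sense in which
    # GDDR5 could "unleash" the ROPs even with less bandwidth in total
    ```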
     
  17. ShaidarHaran

    ShaidarHaran hardware monkey
    Veteran

    Joined:
    Mar 31, 2007
    Messages:
    4,027
    Likes Received:
    90
    Yes ;)

    I certainly won't rule it out, but I think that's a lot to ask for. Perhaps each memory channel could service two ROP partitions in a future 256-bit GT200-derived GDDR5 SKU? Although I suspect this would increase latency without a re-architecting of the memory controller. Perhaps we'll see a 40/45nm GT300 (or whatever you care to call the successor to GT200b) featuring a ring bus :lol:

    I can't find a reason to disagree with this. NV's got such huge Z fill rate and AA sample rate there's just no way they can feed those ROPs with GDDR3.

    I see what you're getting at here, as per-ROP bandwidth would increase with the proposed transition to 256-bit/GDDR5 so efficiency should increase as well, since NV's ROPs are rarely running @ 100% utilization (ever?).

    I don't know if a 20-30% clock boost is reasonable with such a massive chip undergoing such a minor shrink... I can see a potential GTX 290 with 650/1500 clocks on 55nm but beyond that would have to be up to the individual AIB partners...

    Also, a GDDR5 GT200b SKU would not have to provide less bandwidth than GTX 280 does already, NV would just have to use 2.2-2.5GHz chips instead of down-clocking 2GHz chips as ATi does.

    So yes, per-ROP bandwidth would increase (assuming NV doesn't double ROP count per partition or create "super ROPs" that achieve twice the throughput of their current ROPs).
     
  18. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,716
    Likes Received:
    2,137
    Location:
    London
    Getting rid of 256-bits of memory interface should make for a better than minor shrink - and getting rid of half the ROPs would make it even better. Also, we're talking 65-40nm here, which is a big shrink. That's Oushi's question (45nm is supposedly really 40nm).

    Jawed
     
  19. ShaidarHaran

    ShaidarHaran hardware monkey
    Veteran

    Joined:
    Mar 31, 2007
    Messages:
    4,027
    Likes Received:
    90
    Good point. I did not account for these facts previously. I'm not sure you can toss out half the ROPs and still maintain Z fillrate or AA sample perf. though. Even with the efficiency improvements GDDR5 and a new memory controller would bring, it's going to take a hefty dose of increased core clock to account for a halving of ROPs.

    Someone needs to run some bandwidth efficiency tests on GT200 (i.e. overclocked and underclocked).

    My mistake. I thought your clock speed increase was in reference to GT200b for some reason.
     
  20. Oushi

    Newcomer

    Joined:
    Nov 14, 2005
    Messages:
    34
    Likes Received:
    1
    Location:
    EG
    i am very sad for Nvidia: they made this huge, brilliant chip and ATI can still catch up with it so easily. Can someone explain to me what is holding the GTX 280 back, or has ATI simply made a superior chip?
     