ATI RV740 review/preview

Discussion in 'Architecture and Products' started by LunchBox, Feb 25, 2009.

  1. CJ

    CJ
    Regular

    Joined:
    Apr 28, 2004
    Messages:
    816
    Likes Received:
    40
    Location:
    MSI Europe HQ
    They should have named it GTS250 right from the start. The GF9800GTX+ wasn't launched until after the GTX280 and GTX260, so that would have been the perfect time to reposition the G92(b). I'm guessing a lot more people would have 'accepted' the naming scheme if it had happened then, since nV was moving to a new process (55nm), using a new cooler, and running the GPU at a higher speed than the 9800GTX.

    Now it's just a name change for the sake of changing names, in the hope that the line-up will become clearer to consumers. Instead they're making it even more confusing, since the 9800GTX+ and GTS250 will both sit on the same store shelves for some time.

    Yes, dude... didn't you know they were exactly the same? Shame on you! And did you also know that "whatever review you take GTX+/GTS250 will be faster than 4850. It's just the way it is"?
     
  2. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    7,583
    Likes Received:
    704
    Location:
    Guess...
    Yep, that I can agree with. Naming it the 250 from the start would have been a much better option for the consumer. Hell, I think it would even have been a better option for NV. I wonder if that was a simple mistake, or if they thought (at the time) that they would be coming out with a true GT2xx-based mid-range GPU to take up the GTS 250 mantle?
     
  3. mczak

    Veteran

    Joined:
    Oct 24, 2002
    Messages:
    3,016
    Likes Received:
    112
    Isn't that just so board partners can get rid of old boards (selling them under the new name)? I'd expect the newer version to be cheaper to produce, hence I'd expect everybody to switch to the new board.
     
  4. ChrisRay

    ChrisRay R.I.P. 1983-
    Veteran

    Joined:
    Nov 25, 2002
    Messages:
    2,234
    Likes Received:
    26


    Since the new extension scheme Nvidia seems to be going with has only existed for one generation and has just started, I think it's a bit premature to say that. The "MX" lasted from the Geforce 2 to the Geforce 4 generation. Honestly, MX/TI wouldn't mean anything to me if I didn't already know what they meant back then.

    This is for the retail chain, so retailers can clearly understand how to sell the differences between the extensions.
     
  5. ninelven

    Veteran

    Joined:
    Dec 27, 2002
    Messages:
    1,709
    Likes Received:
    122
    So the previous naming scheme was bad enough that even the people selling the items had difficulty understanding where they fit... comforting.

    It isn't premature. If the naming scheme is intended to help those with no prior knowledge, MX vs Ti is a much better option. At least the lettering is significantly different to indicate to the uneducated consumer that it might be worth their time to inquire about the differences in the products.
     
  6. CJ

    CJ
    Regular

    Joined:
    Apr 28, 2004
    Messages:
    816
    Likes Received:
    40
    Location:
    MSI Europe HQ
    Before you know it we're back at XT, Pro, LE for AMD.

    Radeon XT 490 instead of HD4890 doesn't sound too bad... XT 470 for the HD4870 doesn't sound bad either... how about Radeon Pro 450 for the RV740XT and Radeon LE 430 for the HD4350? Could even throw in a Radeon GTO or Radeon XL in there somewhere if needed.. :razz:

    /runs and hides
     
  7. ChrisRay

    ChrisRay R.I.P. 1983-
    Veteran

    Joined:
    Nov 25, 2002
    Messages:
    2,234
    Likes Received:
    26

    MX/TI would not work because there are now three classifications of performance. The problem isn't in the lettering; it's simply in the consistency. If Nvidia can maintain GTX, GTS, and GT for the next three generations or so, it will become as "accepted" as the TI/MX situation was back then.

    Back in the TI/MX days there was a much larger distinction between the high end and the low end, and most mid-range hardware didn't exist; the Geforce Ti 4200 was the closest thing we had to that. When they moved away from this system, we already saw people confusing the new numbering system with the old.

    I.e. FX 5200 versus Ti 4200/4400/4600. Like I said before, the problem lies with Nvidia's consistency. If they can stick with a similar extension moniker for a good amount of time, I think they'll be set.

    Chris
     
  8. compres

    Regular

    Joined:
    Jun 16, 2003
    Messages:
    553
    Likes Received:
    3
    Location:
    Germany
    This makes the most sense. Not to bash nVidia, but I think they are avoiding this on purpose to get the extra sale out of confused customers.
     
  9. Blazkowicz

    Legend Veteran

    Joined:
    Dec 24, 2004
    Messages:
    5,607
    Likes Received:
    256
    How so? Aside from MSAA performance I don't know of a functional difference between R6xx and R7xx, unless I'm missing some tessellator stuff that will never be used.

    There's the switch away from the ring bus and all, towards more classical stuff (I'm thinking now of the X1300, which didn't have the ring bus, though I don't know about its TMU arrangement vs the bigger R5xx's).
     
  10. AnarchX

    Veteran

    Joined:
    Apr 19, 2007
    Messages:
    1,559
    Likes Received:
    34
    Look at a good architecture article; there are significant differences in ALU capabilities (also concerning GPGPU).
     
  11. no-X

    Veteran

    Joined:
    May 28, 2005
    Messages:
    2,325
    Likes Received:
    280
    Different ALUs (the small SPs are beefier), different TMUs (half-speed FP16, removed samplers), different ROPs (removed fog ALU, doubled Z performance, fully functional resolve), a different MC (ring bus -> a combination of hub, ROPs hardwired to the MC, and an internal crossbar for the texture cache), and different internal ordering (separated quads; renders in tiles, not like R6xx but in R5xx style; texel data isn't shared through the whole engine - only for tile borders, if I'm not mistaken) - is that too little? :wink:
     
  12. A.L.M.

    Newcomer

    Joined:
    Jun 2, 2008
    Messages:
    144
    Likes Received:
    0
    Location:
    Looking for a place to call home
    The real mid-range GT2xx should be the GT215, which hopefully hasn't been scrapped from the plans, as probably happened with the GT212. :wink:
     
  13. Dave Baumann

    Dave Baumann Gamerscore Wh...
    Moderator Legend

    Joined:
    Jan 29, 2002
    Messages:
    14,079
    Likes Received:
    648
    Location:
    O Canada!
    no-X's brief overview gives some insight into the architectural differences, and like AnarchX mentions, it's worthwhile reading some of the early reviews / architectural articles to see some of the differences and changes. Abstracting a level, though, it does translate into differences from a (games) developer perspective and also results in a different level of GPGPU support. Additionally, R7xx has functionality not found in R6xx, such as 7.1 HDMI audio support, UVD2 with hardware PiP decode, and DisplayPort audio on all RV7xx other than RV770.
     
  14. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    16,620
    Likes Received:
    5,635
    And don't forget double precision at reasonable speeds for a fairly low cost chip and card solution.

    Regards,
    SB
     
  15. hoom

    Veteran

    Joined:
    Sep 23, 2003
    Messages:
    2,994
    Likes Received:
    527
    Also the shared store & increased caches.
     
  16. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,831
    Likes Received:
    2,121
    Location:
    Germany
    Should be getting myself a Radeon 9800 then - cheap as hell, too. The only confusing thing: which is better, "XT" or "Pro"? ;)

    Sorry mate, but someone not into the details of GPU or VGA card development, just walking into a store and deciding from naming schemes, will be lost anyway.
     
    #216 CarstenS, Mar 8, 2009
    Last edited by a moderator: Mar 8, 2009
  17. Panajev2001a

    Veteran

    Joined:
    Mar 31, 2002
    Messages:
    3,187
    Likes Received:
    8
    No, it is not.... I might not be speaking for many people here, but GT2xx is Compute 1.3 (CUDA-wise), while the 9800GTX+ is Compute 1.1 (besides not supporting doubles, you have fewer registers available and more restrictions when dealing with transfers to and from global/device memory).

    This might not matter much for people who only play games, but considering the inroads CUDA is making in various applications (Ahead Nero being one of the latest to add CUDA support), I think that paying attention to the CUDA/OpenCL capabilities of your GPU is not a bad idea...
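
    Not from the post, just a hedged sketch to make the Compute 1.1 vs 1.3 distinction concrete: the compute capability can be queried through the CUDA runtime API, and checking for >= 1.3 is how a program would gate its double-precision path (device index 0 is an assumption here).

    ```cpp
    // Hedged sketch: query the compute capability of CUDA device 0.
    // Native double precision needs compute capability >= 1.3 (GT200);
    // G92-class parts like the 9800GTX+/GTS250 report 1.1.
    #include <cuda_runtime.h>
    #include <cstdio>

    int main()
    {
        cudaDeviceProp prop;
        if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
            std::printf("no CUDA device found\n");
            return 1;
        }
        std::printf("%s: Compute %d.%d\n", prop.name, prop.major, prop.minor);
        if (prop.major > 1 || (prop.major == 1 && prop.minor >= 3))
            std::printf("native double precision supported\n"); // e.g. GTX 260/280
        else
            std::printf("no native doubles\n");                 // e.g. 9800GTX+/GTS250
        return 0;
    }
    ```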
     
  18. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,831
    Likes Received:
    2,121
    Location:
    Germany
    If I am not mistaken, that's not what it means.

    In DX11 there are going to be tech levels - AMD is clearly able to support 10.1, but not 11, since their feature set would not allow that. Nvidia is only going to be able to support 10.0 - just the way it is right now.

    For API consistency reasons there are -AFAIK- no driver hacks allowed that enable only one feature of a higher tech level. It's like DX10: all or nothing.

    You can, of course, do driver hacks with application detection and such, but no API-level exposure.
    See also: http://www.pcgameshardware.de/aid,6...en-in-Windows-7-und-Vista/Technologie/Wissen/

    Google-Translate:
    http://translate.google.de/translat...chnologie/Wissen/&sl=de&tl=en&history_state0=
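
    For context on the "all or nothing" point (my own sketch, not from the linked article; DX11 hadn't shipped when this was posted): the released D3D11 API exposes tech levels exactly this way - you hand D3D11CreateDevice a list of feature levels and the runtime returns the highest one the hardware supports in full, with no way to opt into individual features of a higher level.

    ```cpp
    // Hedged sketch: D3D11 feature-level negotiation. The runtime walks the
    // list and creates a device at the first level the GPU supports in full;
    // features of a higher level cannot be enabled piecemeal.
    #include <d3d11.h>
    #include <cstdio>
    #pragma comment(lib, "d3d11.lib")

    int main()
    {
        const D3D_FEATURE_LEVEL requested[] = {
            D3D_FEATURE_LEVEL_11_0,  // full DX11 hardware
            D3D_FEATURE_LEVEL_10_1,  // e.g. RV7xx-class parts
            D3D_FEATURE_LEVEL_10_0,  // e.g. G92/GT200-class parts
        };
        D3D_FEATURE_LEVEL granted;
        ID3D11Device* device = nullptr;

        HRESULT hr = D3D11CreateDevice(
            nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
            requested, ARRAYSIZE(requested), D3D11_SDK_VERSION,
            &device, &granted, nullptr);

        if (SUCCEEDED(hr)) {
            std::printf("granted feature level: 0x%04x\n", granted);
            device->Release();
        }
        return 0;
    }
    ```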
     
  19. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    7,583
    Likes Received:
    704
    Location:
    Guess...
    I already mentioned the CUDA differences in previous posts. IMO they simply have no bearing whatsoever for the average consumer this generation. There are currently no practical benefits that I'm aware of for a consumer to have a Compute 1.3-compliant GPU. Even DX10.1, with its currently questionable worth, is hugely more valuable to the consumer, because at least it demonstrates some minor practical benefits.

    Also, anyone with the knowledge to understand the differences between CUDA versions sure as hell isn't going to be fooled by this name change.
     
  20. swaaye

    swaaye Entirely Suboptimal
    Legend

    Joined:
    Mar 15, 2003
    Messages:
    8,492
    Likes Received:
    598
    Location:
    WI, USA
    I really hope CUDA dies soon. Time to get something more open and widely supported out there.
     