The ATI R600 Rumours & Speculation Centrum

Discussion in 'Pre-release GPU Speculation' started by Arun, Oct 16, 2006.

Thread Status:
Not open for further replies.
  1. AlexV

    AlexV Heteroscedasticitate
    Moderator Veteran

    Joined:
    Mar 15, 2005
    Messages:
    2,535
    Likes Received:
    144
    Cool... I was wrong... kudos to ATi. Though I suspect that nV will be able to get within spitting distance with a refresh that brings the fastest GDDR4 to the G80 and increases clocks a tad, I'm not certain they'll outdo the R600 if it pans out to be the currently predicted beast :)
     
  2. icecold1983

    Banned

    Joined:
    Aug 4, 2006
    Messages:
    649
    Likes Received:
    4
    With all the talk in this thread, it seems like R600 should be much faster than G80. Such a big difference in performance usually only happens when an IHV releases a bad product, which isn't the case here. It would be very interesting to see what happens if ATi's card is >30% faster than G80 across the board.
     
  3. Rangers

    Legend

    Joined:
    Aug 4, 2006
    Messages:
    12,791
    Likes Received:
    1,596
    It's just that: talk. None of it is confirmed, nor is the R600's supposed performance advantage (although it does seem reasonable to expect it to be slightly faster, given the 8800 will be nearing its refresh stage by then).
     
  4. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,059
    Likes Received:
    3,119
    Location:
    New York
    Sure did :)
     
  5. Sound_Card

    Regular

    Joined:
    Nov 24, 2006
    Messages:
    936
    Likes Received:
    4
    Location:
    San Antonio, TX

    R600?...... the Conroe of graphics cards?:smile:
     
  6. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY

    Unfortunately with rumoured uncontrolled power usage and heat dissipation? :razz:
     
  7. Twinkie

    Regular

    Joined:
    Oct 22, 2006
    Messages:
    386
    Likes Received:
    5
    Although in a marketing sense going back from GDDR4 to GDDR3 sounds a bit bizarre (maybe shooting themselves in the foot, but really, when has ATi actually promoted their products? :lol: ), I'm agreeing with someone in the thread saying ATi could use GDDR3 and a 512-bit bus. Wasn't there a rumour on this too?

    GDDR3 makes more sense compared to GDDR4 because it doesn't give you a ridiculous number such as 153 GB/s (if the memory clock is at 1200 MHz, 2400 MHz effective). GDDR3 at 900 MHz gives you ~115 GB/s, which is already much more than the G80, while availability won't be hindered. Not to mention the price would be much lower than GDDR4, unless I'm wrong here.
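The bandwidth figures in this thread all come from the same simple arithmetic; a quick sketch (the bus widths and clocks below are the rumoured numbers being discussed, not confirmed specs):

```python
def bandwidth_gbps(bus_width_bits, effective_clock_mhz):
    # Bytes transferred per clock (bus width / 8) times effective transfers
    # per second, expressed in GB/s (1 GB = 10^9 bytes).
    return bus_width_bits / 8 * effective_clock_mhz * 1e6 / 1e9

print(bandwidth_gbps(512, 2400))  # 153.6 -> rumoured 512-bit R600 with 1200 MHz GDDR4
print(bandwidth_gbps(512, 1800))  # 115.2 -> 512-bit with 900 MHz GDDR3
print(bandwidth_gbps(384, 1800))  # 86.4  -> 8800 GTX (384-bit, 900 MHz GDDR3) for comparison
```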

    How fast is the current GDDR4? What's the word on availability? Isn't this one of the reasons why nVIDIA held GDDR4 back for later use? The X1950XTX only uses 8 memory chips, but the R600 will probably use something along the lines of 16 memory chips. Looking at the prices of the 8800 series ($400 for the GTS is amazing), I don't think the R600 stands a chance in the price war.

    So, trimphsiao seems to suggest that either ATi or nVIDIA is going for the 4:1 (ALU:TEX) ratio? It seems nVIDIA is 4:1 (128:32). Does this mean ATi is still sticking with the 3:1 ratio, just like the R580?

    Based on the current rumours, we are looking at:

    80nm
    ~720 million transistors (INQ)
    800 MHz core clock (Hexus)
    96 shaders (straight from the horse's mouth); scalar or ?
    over ~1000 MHz shader clock (Hexus); clock domain?
    32 ROPs (assumption based on the confirmation of the 512-bit bus)
    32 TMUs (assumption based on the confirmation of the 512-bit bus)
    1024MB GDDR4
    Ring Bus technology
    Unified Shader architecture

    However, I'm looking at this more realistically (also based on current rumours about clock domains).
    My predicted specs:

    80nm
    600~700 million transistors
    700 MHz core clock (this probably differs by +/-50 MHz between models, and could end up even lower due to yields)
    1400 MHz shader clock (this was probably ATi's hidden secret, one of the tricks up their sleeve: using clock domains for the first time)
    96 shaders (I'm predicting scalar shaders, based on razor1's rumour that ATi isn't liking the G80 at all. My little theory: the G80 not only uses clock domains but also went for scalar shaders clocked over 1 GHz, which could have been exactly what ATi was targeting as their trump card, except that the G80 went for the same concept, and in their testing labs the G80 ended up faster than the original R600s in 3DMark)
    24 ROPs
    24 TMUs (moving up from a 3:1 to a 4:1 ratio)
    GDDR3 (sounds more realistic, and could mean avoiding the availability problems suffered by some cards, e.g. the FX5800 Ultra and 7800GTX 512MB)
    900 MHz (1800 MHz effective) memory clock
    512-bit memory interface
    1024MB
    Peak memory bandwidth of ~115 GB/s
    Ring Bus technology
    Unified Shader architecture
     
  8. Arty

    Arty KEPLER
    Veteran

    Joined:
    Jun 16, 2005
    Messages:
    1,906
    Likes Received:
    55
    Twinkie, two points re GDDR4:

    1. Availability is quite good.
    2. It consumes less power.
     
  9. Skrying

    Skrying S K R Y I N G
    Veteran

    Joined:
    Jul 8, 2005
    Messages:
    4,815
    Likes Received:
    61
    R600 will use GDDR4.
     
  10. oeLangOetan

    Newcomer

    Joined:
    Nov 13, 2003
    Messages:
    76
    Likes Received:
    0
    GDDR4 is only used on high-end ATi parts; availability doesn't have to be good to be sufficient.
     
  11. Twinkie

    Regular

    Joined:
    Oct 22, 2006
    Messages:
    386
    Likes Received:
    5
    Then why did the G80, and therefore the 8800GTX/8800GTS, not use GDDR4? It doesn't make any sense. The GTS would have been better off using the slowest of the GDDR4s (e.g. the ones the X1950XTX uses) to provide high bandwidth while saving on power consumption. The same goes for the GTX.
    If availability isn't the problem, then why did the G80 use GDDR3 and not GDDR4?

    Using some of the fastest GDDR4 sounds really good on the R600 along with the 512-bit memory interface, but to me it sounds all too good to be true. But then again, after the G80 and all, it could be a big surprise.
     
  12. Skrying

    Skrying S K R Y I N G
    Veteran

    Joined:
    Jul 8, 2005
    Messages:
    4,815
    Likes Received:
    61
    R600 is the ATi high-end part, last I heard, unless there's something really crazy going on. When referring to R600, I would imagine we're all talking about the X2800XTX (or, in my hopes, the X2700 Pro), just like a reference to G80 means the 8800GTX.
     
  13. Sound_Card

    Regular

    Joined:
    Nov 24, 2006
    Messages:
    936
    Likes Received:
    4
    Location:
    San Antonio, TX
    I highly doubt R600 has clock domains, or scalar shaders for that matter.
     
  14. Skrying

    Skrying S K R Y I N G
    Veteran

    Joined:
    Jul 8, 2005
    Messages:
    4,815
    Likes Received:
    61
    ATi has been buying up lots of GDDR4. Add to that the fact that it'd look pretty bad if your previous high-end part used "faster" memory than your new one.
     
  15. Arty

    Arty KEPLER
    Veteran

    Joined:
    Jun 16, 2005
    Messages:
    1,906
    Likes Received:
    55
    If availability were such a big concern, then how could ATI use it in a card that retails for almost $200 less (the X1950XTX)?

    EDIT

    Sky was faster. :grin:
     
  16. Twinkie

    Regular

    Joined:
    Oct 22, 2006
    Messages:
    386
    Likes Received:
    5
    Proof? Source?
    Not sure where you got that from. You're not going to give me "from my secret source", are you? :grin:

    I don't think it will look that bad. Look at the R300, for instance. It used slow DDR but had a wider bus. On the other hand, the NV30 had the newest, baddest, shiniest GDDR2. I don't see many people having been swayed to buy the NV30 instead of the R300 because the R300 looked bad in terms of the kind of memory it used (it did look a bit outdated). Looking outdated didn't mean it performed worse than its competition.

    Same here. By using GDDR3 it still provides MUCH higher bandwidth than any current graphics card out there while avoiding any sort of availability problem.

    How about the cost of GDDR3/GDDR4?

    edit - To my knowledge, the ones the X1950XTX used were the slowest of all GDDR4 memory. The bottom of the food chain. I'm talking about the faster GDDR4s.
     
  17. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,511
    Likes Received:
    224
    Location:
    Chania
    Must be the reason for G71's success vs. the 7800GTX 512MB ;)
     
  18. Natoma

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    1,913
    Likes Received:
    84
    Given ATI's long rumored issues with power consumption on the R600, it wouldn't make much sense for them to have gone the GDDR3 route at all considering GDDR4 uses significantly less power and doesn't have availability issues.
     
  19. Skrying

    Skrying S K R Y I N G
    Veteran

    Joined:
    Jul 8, 2005
    Messages:
    4,815
    Likes Received:
    61
    First, I really do not feel like Googling or searching all around this forum for a source. Second, it simply makes sense that they'd have lots of GDDR4, being the only company buying it right now for use in a current product.

    We know AMD will be pushing power very high with R600; they need all the savings they can possibly get. Why use more power-hungry but slower GDDR3 when you can use faster, less power-hungry GDDR4 which you already have in a current product? There's no reason.

    The slowest GDDR4 is faster and consumes less power than the fastest GDDR3. It'd only make sense to go with GDDR4. Why Nvidia did not use it could be to leave headroom for another product down the line, or simply that they do not have much of it, nor does power consumption appear to be that serious a problem for them at this point.
     
  20. Natoma

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    1,913
    Likes Received:
    84
    24 ROPs/TMUs make absolutely no sense with a 512-bit bus. The 8800 isn't bandwidth-starved with a 384-bit bus at high resolutions with AA, and it has 24 ROPs and 32 TMUs.

    That would be one seriously unbalanced part.
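The balance argument can be made concrete with rough arithmetic (a sketch only: the 8 bytes per pixel for a colour+Z write is my assumption, and the 700 MHz R600 clock is just the rumour from earlier in the thread):

```python
def rop_demand_gbps(rops, core_clock_mhz, bytes_per_pixel=8):
    # Peak pixels written per second (ROPs x clock) times bytes per pixel
    # (colour + Z, assumed 4 + 4), expressed in GB/s (1 GB = 10^9 bytes).
    return rops * core_clock_mhz * 1e6 * bytes_per_pixel / 1e9

# 24 ROPs at the rumoured 700 MHz:
print(rop_demand_gbps(24, 700))  # 134.4 GB/s peak write demand
# 8800 GTX for comparison: 24 ROPs at 575 MHz against its ~86 GB/s bus
print(rop_demand_gbps(24, 575))  # 110.4 GB/s
```

Real workloads rarely sustain peak ROP writes (blending, Z-only passes and compression all change the ratio), so this is only a rough balance check, not proof either way.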
     