AMD: R7xx Speculation

Discussion in 'Architecture and Products' started by Unknown Soldier, May 18, 2007.

Thread Status:
Not open for further replies.
  1. nicolasb

    Regular

    Joined:
    Oct 21, 2006
    Messages:
    421
    Likes Received:
    4
  2. Arun

    Arun Unknown.
    Legend

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    302
    Location:
    UK
    Fudo doesn't know what he's talking about.
    And, uhhh, 1000MHz memory? I don't think so, but we'll see...
    Either way, yes, let's not waste our time in a thread about a very exciting chip to talk about a quite unexciting shrink ;)
     
  3. Rangers

    Legend

    Joined:
    Aug 4, 2006
    Messages:
    12,791
    Likes Received:
    1,596
It's about 40 TMUs..

And how the heck did they manage to smash our expectations in BOTH TMU and SP count..

Basically, they suddenly seem to have a considerably more efficient architecture than Nvidia, packing a ton of functional units into very little space somehow.

To me, it's not that a "monolithic" chip is bad per se; it's that GT200's performance isn't good enough for its die size. If it were, say, 2X as fast as the 4870 is going to be, I don't think people would be complaining so much about the $649 tag. GT200's problem isn't that it's monolithic, it's that it doesn't appear to be fast enough.
     
  4. Arun

    Arun Unknown.
    Legend

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    302
    Location:
    UK
    Bingo, agreed completely. If AMD decided to release a 600mm² chip based on R7xx and 512-bit GDDR5, the performance would be so mind-blowing it's not even funny.

    All around though, the 65nm transition has been pretty damn shit for NVIDIA. There are rumours that G92b (and presumably GT200b too then?) will scale by more than the theoretical 19% though, so maybe that one will go a bit better. There's no way to get around the fact that NVIDIA lost their perf/mm² advantage going from 90 to 65nm though, while AMD created one out of nowhere with RV770.

    What really matters now is whether G92b is 100% competitive with RV770 at good margins. If it is, then the financial impact on NV won't be anywhere near as big as some fanboys might like to think. At this rate, I'm getting skeptical about G92b though given RV770's stunning scaling in antialiasing modes...
     
  5. Love_In_Rio

    Veteran

    Joined:
    Apr 21, 2004
    Messages:
    1,627
    Likes Received:
    226
That sums it all up! I completely agree with you. It's not about today, but about the opportunities RV770 brings to the graphics market for the future. I can't help imagining a 4x RV770 card at 1GHz on 40nm at 125W, with more than 5 teraflops...
We got there at last; the road started by Voodoo is nearing a fantastic (and similar; hello Voodoo 2, 3, 4...) end...
     
    #4005 Love_In_Rio, Jun 19, 2008
    Last edited by a moderator: Jun 19, 2008
  6. Rangers

    Legend

    Joined:
    Aug 4, 2006
    Messages:
    12,791
    Likes Received:
    1,596
From the French 4850 review, game by game, 9800GTX versus 4850:

ET Quake Wars: Essentially a tie; GTX faster without AA, even with it
HL2 Ep 2: Essentially a tie
Stalker: Tie
R6 Vegas: Decent ~20% win for the 4850
Oblivion: Fairly close; 9800GTX a bit faster without AA, 4850 significantly faster with AA
GRiD: Solid 10-20% win for the 4850
Bioshock: 20% win for the 4850
CoH: Essentially a tie; GTX slightly faster without AA, a tie with it
World in Conflict: Decent win for the 4850, 26 vs 19 FPS with AA
Crysis: Both unplayable; a virtual tie without AA, the 4850 leads 19 to 12 FPS with AA

Now, the 9800GTX+ amounts to a ~9% overclock on shaders and core.

I would say this move should stop Nvidia from massively bleeding market share, at least to the 4850. The 4850 may still have some other edges, such as a more forward-looking, shader-heavy architecture (theoretically) and DX 10.1. But it's not blowing away the new 9800GTX lineup, though it may be slightly superior, especially with a price edge over the GTX+. And the die sizes are similar? The 4870 may still present a problem, though, to which the unpleasant solution would seem to be dropping the 260 to prices much lower than Nvidia would like.
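As a quick sanity check on the with-AA numbers above, the FPS pairs can be turned into percentage leads (a minimal sketch; the FPS values are as quoted from the review, and the helper name is mine):

```python
# Convert an FPS pair from the review into the faster card's percentage lead.
def pct_lead(winner_fps, loser_fps):
    """Relative advantage of the faster card over the slower one, in percent."""
    return (winner_fps - loser_fps) / loser_fps * 100

# World in Conflict with AA: HD 4850 at 26 FPS vs 9800 GTX at 19 FPS (~37% lead)
wic = pct_lead(26, 19)
# Crysis with AA: HD 4850 at 19 FPS vs 9800 GTX at 12 FPS (~58% lead)
crysis = pct_lead(19, 12)
print(f"World in Conflict: {wic:.1f}%  Crysis: {crysis:.1f}%")
```

So the "decent win" in World in Conflict is a mid-30s percentage lead, while the Crysis-with-AA gap is larger in relative terms even though both cards are below playable frame rates.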
     
    #4006 Rangers, Jun 19, 2008
    Last edited by a moderator: Jun 19, 2008
  7. Miksu

    Regular

    Joined:
    Mar 9, 2003
    Messages:
    997
    Likes Received:
    10
    Location:
    Finland
I think it will be interesting to see whether the new GTX+ cards show up at retail or only in reviews.

Is NVidia already sending hardware sites drivers that auto-overclock the GTX into a GTX+, so that when the 4850 reviews hit the net it will be compared against the GTX+ variant?
     
  8. crystall

    Newcomer

    Joined:
    Jul 15, 2004
    Messages:
    149
    Likes Received:
    1
    Location:
    Amsterdam
As for what happened to nVidia, I was wondering about their current design flow. Apparently they put quite a bit of full-custom designed logic inside G80 and later silicon; while this has given them an edge, it has the drawback of requiring a redesign every time you do something that is not a dumb optical shrink (and even an optical shrink is not exactly automatic). Automated flows, on the other hand, have improved significantly in the last few years; Fast-14 is just one example of that*. So I was wondering if AMD is reaping the benefits of improved tools while nVidia's approach has backfired.

This is obviously 100% speculation on my part, but it could also explain the delays around the GT200 launch.

(*) Another proof of the significant improvement of automated flows on sub-90 nm processes is Cell's 45nm shrink. Even if it was far from optimal, it was done by an extraordinarily small team, in a very short time, and with significant power and area savings.
     
  9. Love_In_Rio

    Veteran

    Joined:
    Apr 21, 2004
    Messages:
    1,627
    Likes Received:
    226
Yes, but what about power consumption?
     
  10. simbus82

    Newcomer

    Joined:
    Nov 10, 2005
    Messages:
    34
    Likes Received:
    0
  11. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    9,237
    Likes Received:
    4,260
    Location:
    Guess...
    Or perhaps not :wink:
     
  12. IbaneZ

    Regular

    Joined:
    Apr 15, 2003
    Messages:
    743
    Likes Received:
    17
  13. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    9,237
    Likes Received:
    4,260
    Location:
    Guess...
I disagree with this assessment, as I think you have the scaling off. A GTX 280 in SLI is likely going to be quite a bit faster than 4x 4850s, and far more consistent, stable and reliable. It's the same with any of the comparisons using more than 2 GPUs. As for the 1 GPU vs 2 GPU comparisons, I think it's fair to pay a premium to avoid the headaches of dual GPU: lack of consistency, micro-stuttering, input lag, etc...

I'm not trying to big up NV or beat up on AMD; I'm made up that AMD is seemingly in a very strong competitive position again. I just think people are getting a little over-enthusiastic and beating up on NV a little unnecessarily. After all, the GTX 200 series are still great GPUs; they are just a little overpriced, and that can easily change, especially in the case of the 260, which really isn't that bad anyway. $399 for a single GPU that slaughters an 8800 Ultra?

I've numbered your comparisons above, and taking into account the lack of >2 GPU scaling and the general problems with dual GPUs, I would call them as follows:

1. 2x 4850 - this one even makes me consider a dual GPU solution!
2. GTX 280 SLI - due to scaling issues it will likely be as fast or faster than the other setups in most situations, while being far more consistent and stable.
3. GTX 280 x3 - it will probably be faster and again more reliable (although both solutions are poor choices IMO).
4. Until we see final performance and prices, I'm going to call this one a tie.
5. As above, between the dual 260s or 4870s; 3x 4850 is a loser IMO against those.
6. Probably the 280 SLI, since it will be the most stable, consistent, etc...
     
  14. CJ

    CJ
    Regular

    Joined:
    Apr 28, 2004
    Messages:
    816
    Likes Received:
    40
    Location:
    MSI Europe HQ
    I'm not sure about them sending auto-OC drivers around, but they are already sending out PR shit:

     
  15. dizietsma

    Banned

    Joined:
    Mar 1, 2004
    Messages:
    1,172
    Likes Received:
    13
  16. neliz

    neliz GIGABYTE Man
    Veteran

    Joined:
    Mar 30, 2005
    Messages:
    4,904
    Likes Received:
    23
    Location:
    In the know
    wow.. last week having those 128 cores was like drinking wine alongside jesus!

     
  17. Scrat

    Newcomer

    Joined:
    Nov 2, 2007
    Messages:
    53
    Likes Received:
    0
    Location:
    Pistoia, Italy
Lol, new efficiency index: the performance/shader unit ratio
     
  18. neliz

    neliz GIGABYTE Man
    Veteran

    Joined:
    Mar 30, 2005
    Messages:
    4,904
    Likes Received:
    23
    Location:
    In the know
    nice.. Watt/shader/performance index
     
  19. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,716
    Likes Received:
    2,137
    Location:
    London
    Pity the 64 TMUs and the 128 Z test units are working so inefficiently. Or the extra bandwidth.

    Jawed
     