R600 has 512-bit external memory bus, Orton promises it takes no prisoners

Discussion in 'Beyond3D News' started by Geo, Dec 15, 2006.

  1. Shtal

    Veteran

    Joined:
    Jun 3, 2005
    Messages:
    1,344
    Likes Received:
    3
    What I wonder most is how it would affect the 3DMark06 score when moving from the default settings to a higher resolution, plus 4x or 8x AA added.
     
  2. Shtal

    Veteran

    Joined:
    Jun 3, 2005
    Messages:
    1,344
    Likes Received:
    3
    You are correct! But you'd expect the doubled bandwidth going from 16x to 32x PCI-Express to give a much bigger performance increase than it does. (It's like SATA150 with a 10K WD Raptor drive: the drive only sustains around 86 MB/s, even though SATA offers 150 MB/s.)

     
  3. Bouncing Zabaglione Bros.

    Legend

    Joined:
    Jun 24, 2003
    Messages:
    6,363
    Likes Received:
    82
    There's no doubt there will be marketing around it, but there's no way it's been put in just for marketing. It costs way, way too much in transistors, design effort, and manufacturing difficulty to put in a 512-bit bus. It would be very foolish to detract so much from the rest of the chip with a marketing-only feature if it didn't have a real-world advantage, and it would go against the design philosophy we've been seeing from ATI for the last few years.
     
  4. Power_man

    Newcomer

    Joined:
    Dec 17, 2006
    Messages:
    14
    Likes Received:
    0
    Location:
    United Kingdom
    An ordinary PCI-E x16 bus has a bandwidth of 40.0 Gbit/s (about 4 GB/s of usable data), while a PCI-E x16 ver. 2 bus doubles that to 80.0 Gbit/s (about 8 GB/s). :) The R600 will support PCI-E ver. 2 because the chip will have to move much larger amounts of data than previous 3D accelerators. The bigger data transfer will allow greater realism in today's games. That's why I think the R600 will process data much better than the G80, not only because of its big 512-bit memory bus, but also because of the great transfer rate it allows towards the external bus.
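
    For reference, a minimal sketch of where those per-direction figures come from, assuming the standard 8b/10b link encoding used by PCIe 1.x and 2.0:

    ```python
    # Sketch: per-direction PCIe bandwidth from lane count and transfer rate.
    # PCIe 1.x and 2.0 both use 8b/10b encoding (10 raw bits per data byte).
    def pcie_gb_per_s(lanes, gt_per_s):
        raw_gbit = lanes * gt_per_s      # raw line rate in Gbit/s
        return raw_gbit * 8 / 10 / 8     # strip 8b/10b overhead, bits -> bytes

    print(pcie_gb_per_s(16, 2.5))   # PCIe 1.x x16: 40 Gbit/s raw -> 4.0 GB/s
    print(pcie_gb_per_s(16, 5.0))   # PCIe 2.0 x16: 80 Gbit/s raw -> 8.0 GB/s
    ```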
     
  5. neliz

    neliz GIGABYTE Man
    Veteran

    Joined:
    Mar 30, 2005
    Messages:
    4,904
    Likes Received:
    23
    Location:
    In the know
    R600 PEG2 only? Can you hold your laughter when you write that? Or did you actually mean that R600-based boards will benefit more from PEG2? You know as well as anyone that all cards are made in a backwards-compatible way, or there is some sort of translation device to make a chip backwards compatible...
     
  6. Shtal

    Veteran

    Joined:
    Jun 3, 2005
    Messages:
    1,344
    Likes Received:
    3
    I truly agree with you; my idea is that ATI needed at least a 384-bit memory bus, or very, very high-speed GDDR4 at 256-bit, but the latter would already push ATI to the maximum with no headroom for higher bandwidth in the future.

    My idea is that 384-bit would be enough, and the 512-bit is there just to spoil Nvidia.
     
  7. Skrying

    Skrying S K R Y I N G
    Veteran

    Joined:
    Jul 8, 2005
    Messages:
    4,815
    Likes Received:
    61
    Seems like a shockingly expensive decision if it's only for marketing. In fact, I see little to no reason why they'd do such a thing. Last I heard, ATI didn't have particularly great margins, and now under AMD I seriously doubt they'd want that trend to continue.
     
  8. Bouncing Zabaglione Bros.

    Legend

    Joined:
    Jun 24, 2003
    Messages:
    6,363
    Likes Received:
    82
    No one had any inkling of the G80's 384-bit bus until a few months back. That's far too recent for ATI to have made any changes to the R600 design.
     
  9. Shtal

    Veteran

    Joined:
    Jun 3, 2005
    Messages:
    1,344
    Likes Received:
    3
    It's the same thing as with the ATI R420: it was designed to have 16 pipelines, but the plan was for 12 pipelines until Nvidia made the NV40 with 16 pipelines, which forced ATI to go directly to 16 as well.

    It's possible ATI had a 512-bit design in mind but didn't plan to use it right away. Nvidia's 384-bit bus could have changed ATI's mind and pushed them to use 512-bit immediately.
     
  10. Shtal

    Veteran

    Joined:
    Jun 3, 2005
    Messages:
    1,344
    Likes Received:
    3
    Early in the design, the full 16-pipeline R420 had very low yields, which may be why ATI planned for 12 pipelines originally.

    Likewise, the 512-bit idea may have been planned for the R600 from the beginning of the design, but ATI may have had very low yields/resources to make it happen soon.

    Most of you think the delay of the R600 is because of 80nm, but a logical guess would be that the 512-bit external memory controller takes extra time and effort. If Nvidia had gone for 256-bit memory, ATI might well have released a 256-bit version of the R600 first and followed later with a 512-bit R650.
     
  11. shadow

    Newcomer

    Joined:
    Jun 26, 2005
    Messages:
    12
    Likes Received:
    0
    What about the use of very high levels of AA + high resolutions and the use of FP16 HDR, which are known to be very bandwidth-intensive situations...

    Also, up until now there was no point in moving to GPUs with more than 16 ROPs, since with a 256-bit bus, even with the fastest memory out there, there wouldn't be enough memory bandwidth to make use of more of them anyhow. But the G80, with its 384-bit bus and its 680 million transistors, does have 24 ROPs, so who's to say the R600 doesn't have 32 ROPs? A 512-bit memory bus would be enough to make full use of those, and the transistor budget to pull it off may in fact be there, since they're building it at 80nm with supposedly in excess of 700 million transistors in the die (see the rough bandwidth sketch at the end of this post)...

    The other advantage is that it relieves pressure on memory makers for future GPU releases by not requiring the absolute fastest (and most expensive) memory on the market at any given point in time to make the card work at its best, and it potentially avoids another source of product delays beyond the company developing the card itself...

    This basically lays the foundation for future card releases for the next 3~4 years in terms of memory buses, the same thing ATI did with the 9700 Pro back in the day, though it was Matrox that released the first card with a 256-bit memory bus a couple of months earlier (the Matrox Parhelia)...
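
    A minimal sketch of the bandwidth arithmetic behind this (the 2.0 GT/s GDDR4 rate and the 512-bit case are assumptions for illustration, not confirmed specs):

    ```python
    # Sketch: peak memory bandwidth = (bus width in bytes) * (effective data rate).
    def mem_gb_per_s(bus_bits, data_rate_gt_s):
        return bus_bits / 8 * data_rate_gt_s

    print(mem_gb_per_s(256, 2.0))   # 256-bit @ 2.0 GT/s GDDR4       -> 64.0 GB/s
    print(mem_gb_per_s(384, 1.8))   # G80's 384-bit @ 1.8 GT/s GDDR3 -> 86.4 GB/s
    print(mem_gb_per_s(512, 1.8))   # 512-bit at the same data rate  -> 115.2 GB/s
    ```

    Even with memory no faster than the G80's, a 512-bit bus buys roughly a third more bandwidth, and faster GDDR4 widens that gap further.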
     
    #91 shadow, Dec 17, 2006
    Last edited by a moderator: Dec 17, 2006
  12. Skrying

    Skrying S K R Y I N G
    Veteran

    Joined:
    Jul 8, 2005
    Messages:
    4,815
    Likes Received:
    61
    - There's no point in having 512-bit if you're not going to use that bandwidth. It's simply a large waste of money if its only purpose is a marketing gimmick.

    - Assuming there's a purpose for this much bandwidth, then the design would have been shaped around it from the start. An unbalanced design is typically not optimal and results in wasted money.

    There's little reason to think that if AMD found out Nvidia was going to have a 384-bit bus, they'd go "oh noes, let's get that 512-bit going". It does not work like that; there would be no purpose for it unless the design was able to take advantage of the extreme bandwidth from the start.
     
  13. Shtal

    Veteran

    Joined:
    Jun 3, 2005
    Messages:
    1,344
    Likes Received:
    3
    You did make a point, but the R300 also used 128-bit memory for the Radeon 9500 Pro. It is hard to believe the R600 would need more bandwidth than the G80's 86 GB/s. There is no way it could run at 100% efficiency and put all that bandwidth to maximum use.

    Matrox made a mistake with the Parhelia and used a single 256-bit memory controller, whereas ATI split the R300's into 4 x 64-bit channels for a total of 256-bit.
    Perhaps ATI will split it even further to utilize it more effectively on the R600.
     
    #93 Shtal, Dec 17, 2006
    Last edited by a moderator: Dec 17, 2006
  14. shadow

    Newcomer

    Joined:
    Jun 26, 2005
    Messages:
    12
    Likes Received:
    0


    I think so too... And it would be dumb if the highly competent engineers working at ATI hadn't worked that huge advantage into the R600 design a long time ago, rather than adding it just because a competitor now has a 384-bit bus... Design of chips like the G80 and the R600 probably started over two years ago at the very least...

    In fact, if I remember correctly, it was Nvidia who did the emergency memory bus update on the 5900 series after they found out that the 9700 Pro had a 256-bit bus. The 5800 had no way to compete with that, and even though the 5900 used faster memory than the 9700 Pro along with that 256-bit memory bus, it was still beaten at high resolutions with AA/AF applied, since the GPU die was never built with a 256-bit memory bus in mind; that was about the only thing they had time to add to the NV35.
     
  15. Bouncing Zabaglione Bros.

    Legend

    Joined:
    Jun 24, 2003
    Messages:
    6,363
    Likes Received:
    82

    ATI obviously think otherwise, as the presence of a 512-bit external bus would prove. I think they are in a better position to make a valid decision on what their architecture needs than you are, especially if you consider this will probably be the template for the memory technology they will be using for the next few years.

    Whereas it was easy to bin previous chips that had a faulty quad, it's not so easy or desirable to disable a part of the memory bus that is sitting there anyway.
     
  16. shadow

    Newcomer

    Joined:
    Jun 26, 2005
    Messages:
    12
    Likes Received:
    0

    True, the 9500 Pro did use only a 128-bit memory bus, and in situations where memory bandwidth wasn't an issue it was a pretty fast card in its own right, especially after the other 4 pipes were unlocked ;) , but it didn't last long since it was an expensive die to put in a midrange card that was selling for way less than the 9700 series...

    In any case, the R300 core was designed with a 256-bit bus from the get-go...

    32 ROPs will eat that bandwidth up very nicely, and so will HDR using FP16 precision... Blending operations at 32-bit precision have extreme bandwidth needs too...
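
    To put a rough number on that (a back-of-envelope sketch only; the 32-ROP count and the core clock are assumed for illustration, not known R600 specifications):

    ```python
    # Sketch: worst-case ROP blend traffic for an FP16 render target.
    rops = 32                           # assumed, not a confirmed R600 figure
    core_clock_ghz = 0.75               # hypothetical core clock for illustration
    bytes_per_pixel = 8                 # FP16 RGBA (4 x 16-bit) colour target
    blend_bytes = 2 * bytes_per_pixel   # blending reads and writes each pixel

    fill_gpix_s = rops * core_clock_ghz       # 24 Gpixels/s peak fill
    blend_gb_s = fill_gpix_s * blend_bytes    # 384 GB/s worst-case blend traffic
    print(fill_gpix_s, blend_gb_s)
    ```

    Even at a fraction of that worst case, 32 ROPs blending FP16 pixels would swamp a 256-bit bus, which is the point being made above.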
     
    #96 shadow, Dec 17, 2006
    Last edited by a moderator: Dec 17, 2006
  17. Shtal

    Veteran

    Joined:
    Jun 3, 2005
    Messages:
    1,344
    Likes Received:
    3
    What about the NV30 with 128-bit memory? I think that was human error...
     
  18. Shtal

    Veteran

    Joined:
    Jun 3, 2005
    Messages:
    1,344
    Likes Received:
    3
    True! But when?
     
  19. Bouncing Zabaglione Bros.

    Legend

    Joined:
    Jun 24, 2003
    Messages:
    6,363
    Likes Received:
    82
    That's a completely different example, as Nvidia went low with the memory bandwidth at a time when ATI went high. Nvidia also thought they would gain bandwidth advantages from PP and driver optimisations.

    Here, we're talking about ATI going higher than Nvidia again - how can that be a mistake when it can only benefit memory bandwidth?
     
  20. shadow

    Newcomer

    Joined:
    Jun 26, 2005
    Messages:
    12
    Likes Received:
    0

    As in games needing the huge fillrate that many ROPs provide? Or whether the R600 may actually have it?...
     