AMD: Volcanic Islands R1100/1200 (8***/9*** series) Speculation/ Rumour Thread

Discussion in 'Architecture and Products' started by Nemo, May 7, 2013.

  1. aaronspink

    Veteran

    Joined:
    Jun 20, 2003
    Messages:
    2,641
    Likes Received:
    64
    Gotta be careful about what you're comparing. You're switching between link bandwidths and payload bandwidths, which aren't at all comparable between gen 2/3. Gen 2 used 8b/10b encoding, which meant that 5 GT/s per lane was really 4 Gb/s of payload. Gen 3 uses 128b/130b link encoding, which means that for all practical purposes GT/s = Gb/s.

    Some of the differential between PCIe 2.0 and PCIe 3.0 in sustained bandwidth is due to improvements in the actual controllers at both ends being able to send and receive at higher rates. This is mostly buffering, etc. Bandwidth delivered over PCIe is also largely dependent on transfer sizes.
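    The encoding arithmetic above can be put in numbers; a minimal sketch, where the link rates and lane counts are standard PCIe figures rather than anything stated in the post:

```python
# Effective per-lane PCIe payload bandwidth after link-layer encoding.
# Gen 2 signals at 5 GT/s but 8b/10b spends 2 of every 10 bits on the
# line code; Gen 3 signals at 8 GT/s and 128b/130b loses only 2 in 130.

def payload_gbps(gt_per_s, data_bits, total_bits):
    """Raw transfer rate scaled by encoding efficiency, in Gb/s per lane."""
    return gt_per_s * data_bits / total_bits

gen2 = payload_gbps(5.0, 8, 10)     # 4.0 Gb/s per lane
gen3 = payload_gbps(8.0, 128, 130)  # ~7.88 Gb/s per lane

# An x16 link, per direction, in GB/s:
print(f"Gen 2 x16: {gen2 * 16 / 8:.1f} GB/s")   # 8.0 GB/s
print(f"Gen 3 x16: {gen3 * 16 / 8:.1f} GB/s")   # ~15.8 GB/s
```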
     
  2. aaronspink

    Veteran

    Joined:
    Jun 20, 2003
    Messages:
    2,641
    Likes Received:
    64
    Depends on what they've allowed for buffer depths etc. If the buffering is sufficient and the QoS capabilities are sufficient, then it would be practical to simply have the display controller do DMA accesses on a JIT basis to the other card(s) to pull in the frame data required, without storing it in DRAM.

    It is also important to point out that the main limiter is likely going to be the upstream PCIe controller's peer-to-peer bandwidth and latency characteristics. Though I'm sure AMD has tested this heavily on AMD, Intel, and PLX controllers, and all the PCIe root designers have had years now to work on peer-to-peer bandwidth and latency in their controllers, since it has been used in the high-end server market for years. Any controller that can handle Nvidia GPUDirect transfers at high data rates will work fine, for instance, which means at least any of the extreme CPUs can handle it fine.
     
  3. rapso

    Newcomer

    Joined:
    May 6, 2008
    Messages:
    215
    Likes Received:
    28
    Wasn't the CrossFire bridge about 5 GB/s? (Or was that SLI?)
    And that's probably peak; I'd think 4 GB/s of real performance might not have been possible. PCIe should be able to handle those 4 GB/s.

    edit: http://images.anandtech.com/reviews/video/ATI/4870X2/sideport.jpg
    5 GB/s bidirectional, in addition to 5 GB/s bidirectional via PCIe 2.0

    edit2: btw. how did you calculate the bandwidth requirement?
    for 1440p@60Hz I get
    2560x1440 * 4byte * 5Screens * 30Transfers/s -> ~2.06GB/s
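    (Reproducing that arithmetic as a sketch; reading the 30 transfers/s as 2-way AFR at 60 Hz, where only the second GPU's frames cross the link, which is my assumption rather than something the post states:)

```python
# rapso's five-screen Eyefinity estimate, reproduced. Assumes 32-bit
# pixels and that only half of the 60 Hz worth of frames cross the
# link (2-way AFR), i.e. 30 transfers/s.
width, height = 2560, 1440
bytes_per_pixel = 4
screens = 5
transfers_per_s = 30

bytes_per_s = width * height * bytes_per_pixel * screens * transfers_per_s
print(f"{bytes_per_s / 2**30:.2f} GiB/s")  # ~2.06, matching the post
```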
     
    #923 rapso, Sep 30, 2013
    Last edited by a moderator: Sep 30, 2013
  4. Grall

    Grall Invisible Member
    Legend

    Joined:
    Apr 14, 2002
    Messages:
    10,801
    Likes Received:
    2,176
    Location:
    La-la land
    P2P or not, there's a possibility of flooding when you consider that all the data being transferred has to pass through the interface of the destination video card (the one with the monitor cable attached to it). That's the weak link in the chain, to say nothing of any possible limitations in the internal bandwidth of the central hub.

    What measurements? With what number of video cards running CrossFire, at what screen resolution, and at what refresh rate? Go to 120 Hz (which many hardcore gamers are very eager to do) and you again double the bandwidth requirements over the current de facto standard. The bandwidth ceiling's gonna be flying towards your head at that rate.

    Also consider that of Intel's current CPUs, only Sandy and Ivy Bridge-E offer full PCIe x16 interfaces when running more than one board. Few people buy such systems to game on, due to the rather massive costs. That halves the available bandwidth for CrossFire, i.e. a problem for 4K/60 Hz at least.

    Really? That'd be extremely surprising. There are a lot of pins in those connectors, particularly on AMD cards, enough for at least 4 differential signalling links, I should think.
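    The scaling argument above (double the refresh rate, double the transfer requirement) is easy to check; a rough sketch assuming 2-way AFR and 32-bit pixels, which are my assumptions rather than figures from the post:

```python
# Cross-card frame-transfer bandwidth in 2-way AFR: half of the frames
# are rendered on the second card and must cross the link.
def afr_transfer_gbs(width, height, hz, bytes_per_pixel=4):
    return width * height * bytes_per_pixel * (hz / 2) / 1e9

for w, h, hz in [(2560, 1440, 60), (2560, 1440, 120), (3840, 2160, 60)]:
    print(f"{w}x{h} @ {hz} Hz: {afr_transfer_gbs(w, h, hz):.2f} GB/s")
```

    Going from 60 Hz to 120 Hz doubles the figure, exactly as argued; resolution scales it the same linear way.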
     
  5. Anton Markov

    Newcomer

    Joined:
    Sep 30, 2013
    Messages:
    1
    Likes Received:
    0
    640K ought to be enough for anybody

    Wikipedia says:
    The PCIe 4.0 standard is ready now (in practice it will show up in H- and Z-series motherboards with Intel chipsets, together with the Skylake processor architecture), but even PCIe 3.0 at x8 electrical has all the bandwidth needed for CrossFire without a discrete bridge.
     
  6. Albuquerque

    Albuquerque Red-headed step child
    Veteran

    Joined:
    Jun 17, 2004
    Messages:
    4,309
    Likes Received:
    1,107
    Location:
    35.1415,-90.056
    I buy this to some extent, as I had an X38 chipset in the distant past where I could demonstrably put the PCIe bus under enough duress that it would hard-lock the box. The challenge was an IOMeter test run against my PCIe 2.0 x8 RAID card running four (or more) SSDs in RAID 0, along with a simultaneous PCIe traffic throughput test of my Radeon 5850 in PCIe 2.0 x16 mode. The X38 chipset was supposedly capable of dual x16 simultaneously, which is why I bought it. Aaaaannnnddd... Nope!

    However, with the PCIe controller hub now being part of the SB and later processors directly (and some of the Nehalem line too, if I recall... the Socket 1156 stuff?), I would be hard-pressed to imagine a case where the PCIe bus could be similarly overwhelmed.
     
  7. Grall

    Grall Invisible Member
    Legend

    Joined:
    Apr 14, 2002
    Messages:
    10,801
    Likes Received:
    2,176
    Location:
    La-la land
    I'm of course not claiming the system would hard-lock (that would IMO be faulty hardware at work), but everything has a limit. Especially in cost-sensitive consumer electronics, where there just isn't a need for 100% maximum I/O concurrency.

    Now, it might be practically possible to simultaneously stream to and from every slot using every bidirectional PCIe link in point-to-point fashion, I'm not saying that is unthinkably un-possible, but I would not be at all surprised if you hit some kind of internal limit pretty quickly. There's gotta be a crossbar/router of some type in there, and it too will have a max capacity of some form.
     
  8. Dave Baumann

    Dave Baumann Gamerscore Wh...
    Moderator Legend

    Joined:
    Jan 29, 2002
    Messages:
    14,090
    Likes Received:
    694
    Location:
    O Canada!
    PCI Express bandwidth is ~3x the maximum display output bandwidth supported by any current GPU.
     
  9. CaptainGinger

    Newcomer

    Joined:
    Feb 28, 2004
    Messages:
    92
    Likes Received:
    47
    If PCIe is so good for this why were SLI and Crossfire connectors needed at all? Or has the situation only been viable since PCIe 3.0?
     
  10. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    10,245
    Likes Received:
    4,465
    Location:
    Finland
    If it's now ~3x that needed by any output, PCI Express 2.0 (x16/x16) would have been sufficient, but 1.0 wouldn't have been (nor PCIe 2.0 x8/x8)
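    A rough cross-check of that sufficiency claim; the six-screen 2560x1600 worst case and the pixel format are my assumptions, not figures stated in the thread:

```python
# Assumed worst-case display load: six 2560x1600 screens at 60 Hz with
# 32-bit pixels (illustrative, not from the thread).
display_gbs = 2560 * 1600 * 4 * 6 * 60 / 1e9   # ~5.9 GB/s of pixels

# Per-direction payload bandwidth of each link, after encoding overhead.
pcie_gbs = {
    "1.0 x16": 2.5 * 8 / 10 * 16 / 8,     # 4.0 GB/s
    "2.0 x8":  5.0 * 8 / 10 * 8 / 8,      # 4.0 GB/s
    "2.0 x16": 5.0 * 8 / 10 * 16 / 8,     # 8.0 GB/s
    "3.0 x16": 8.0 * 128 / 130 * 16 / 8,  # ~15.8 GB/s
}
for link, bw in pcie_gbs.items():
    verdict = "enough" if bw > display_gbs else "not enough"
    print(f"PCIe {link}: {bw:5.1f} GB/s -> {verdict}")
```

    Under those assumptions, PCIe 2.0 x16 clears the display load while 1.0 x16 and 2.0 x8 don't, matching the post.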
     
  11. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,541
    Likes Received:
    964
    At the time, GPUs didn't support the crazy Eyefinity resolutions that they do now.
     
  12. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,382
    Given the number of motherboards with crippled PCIe out there (32-lane connectors but only 8 lanes active, etc.), doing it over PCIe only sounds like a recipe for consumer support overload.

    And AFAIK PCIe was never designed with QoS contracts in mind, so everything is best effort. (But I may be totally wrong about that?)

    Simple point-to-point is so much easier...

    I assume these things have changed over the years and it has now come to a point where they don't expect too many problems.
     
  13. Shtal

    Veteran

    Joined:
    Jun 3, 2005
    Messages:
    1,344
    Likes Received:
    4
    http://www.techpowerup.com/191768/radeon-r9-290x-clock-speeds-surface-benchmarked.html


     
    #933 Shtal, Oct 1, 2013
    Last edited by a moderator: Oct 1, 2013
  14. itsmydamnation

    Veteran

    Joined:
    Apr 29, 2007
    Messages:
    1,349
    Likes Received:
    470
    Location:
    Australia
    4500 GDDR5...... really?!? That seems completely nuts. Also, clocks are lower than expected. Not sure if I believe that link.
     
  15. Shtal

    Veteran

    Joined:
    Jun 3, 2005
    Messages:
    1,344
    Likes Received:
    4
    Could be an under-clocked sample from AMD.
    The final version of the card should be clocked higher, based on AMD's slide(s)

     
    #935 Shtal, Oct 1, 2013
    Last edited by a moderator: Oct 1, 2013
  16. xDxD

    Regular

    Joined:
    Jun 7, 2010
    Messages:
    412
    Likes Received:
    1
    Hmmm... what are the chances that those tests are fake?
     
  17. kalelovil

    Regular

    Joined:
    Sep 8, 2011
    Messages:
    568
    Likes Received:
    104
    Perhaps turbo clocks now apply to memory as well as the core clock (is that feasible?), and these samples, rather than indicating a lack of turbo in the product, simply don't have it enabled yet in their firmware.
     
  18. Psycho

    Regular

    Joined:
    Jun 7, 2008
    Messages:
    746
    Likes Received:
    41
    Location:
    Copenhagen
    No, it takes time to retrain GDDR5, so you can't change its clock in the middle of a frame.
    The clocks aren't surprising for some random engineering sample, but they're clearly not what we'll see in the 290X per the officially released specs (bandwidth, triangle rate).
    But even though AMD usually does better at the very high resolutions, performance seems too high for those clocks; could be GPU-Z not detecting the boost, or just a fake..
     
  19. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    9,237
    Likes Received:
    4,260
    Location:
    Guess...
    Yeah, those clocks definitely don't jibe with either AMD's quoted triangle rates or bandwidth, so hopefully there's some mistake. Great to hear it's faster than Titan even at those speeds, though.
     
  20. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    9,237
    Likes Received:
    4,260
    Location:
    Guess...
    If HSA on future dGPUs allows the GPU to address system memory over the PCI-E interface, won't that make that bandwidth a lot more important in future?
     