AMD: Speculation, Rumors, and Discussion (Archive)

Discussion in 'Architecture and Products' started by iMacmatician, Mar 30, 2015.

Thread Status:
Not open for further replies.
  1. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,379
    Maybe AMD is seeding LinkedIn with fake profiles to throw off the competition. :wink:
     
  2. onQ

    onQ
    Veteran

    Joined:
    Mar 4, 2010
    Messages:
    1,540
    Likes Received:
    55
    DCE is the Display Controller Engine. I first saw it in the leaked PS4 specs; they never said what it was, but I found out later.
     
  3. bridgman

    Newcomer Subscriber

    Joined:
    Dec 1, 2007
    Messages:
    58
    Likes Received:
    102
    Location:
    Toronto-ish
    The GFX block is big - command processors, graphics & compute pipelines, shader core/ISA, CBs/DBs (ROPs)... I think texture cache/filtering is in there too but not sure ATM.

    3dilettante, the compute block you're thinking of might be MEC - MicroEngine Compute? Each MEC block manages 4 "pipes", each supporting up to 8 "queues" (rings).
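    A minimal sketch of that hierarchy, assuming only the figures quoted above (the two-MEC total at the end is illustrative, not taken from AMD documentation):

    Code:
    # Sketch (not AMD code) of the MEC hierarchy described above:
    # each MEC block manages 4 pipes, each pipe up to 8 queues (rings).
    MEC_PIPES = 4
    QUEUES_PER_PIPE = 8

    def mec_queues(mec_id):
        """Yield (mec, pipe, queue) identifiers for one MEC block."""
        for pipe in range(MEC_PIPES):
            for queue in range(QUEUES_PER_PIPE):
                yield (mec_id, pipe, queue)

    # A hypothetical part with two MEC blocks would expose 2 * 4 * 8 = 64 compute queues.
    total = sum(1 for _ in mec_queues(0)) + sum(1 for _ in mec_queues(1))
    assert total == 64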
     
  4. Frenetic Pony

    Regular Newcomer

    Joined:
    Nov 12, 2011
    Messages:
    347
    Likes Received:
    95
    ANOTHER LinkedIn leak? http://wccftech.com/amd-vega-10-4096-stream-processors/

    Either AMD has been totally negligent in its NDA agreements and checking for leaks, or these are somehow fake. Either way, it's odd for so many "leaks" from the same source to happen in a row, let alone for so many people in a row to make the same mistake of putting presumably NDA'd specifics of unreleased chips on their resumes. Oh yes, "let's show that we happily break NDA stuff on our resume, via our resume! This is surely a good way to get hired."

    But hell, who knows. Maybe it's true, all of it. *Han Solo face*
     
    iMacmatician, Razor1 and Alexko like this.
  5. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    10,873
    Likes Received:
    767
    Location:
    London
    So, ahem, AMD seems to be making mainstream and performance GPUs based upon Polaris that cut down ALUs, just in time for async compute games to come along and demand more compute :sleeping:

    Perhaps 4096 ALUs is the little Vega.

    If it's a "new, efficient" architecture, perhaps we're talking extreme ALUs. Sorry, old old joke.
     
    Lightman and TheAlSpark like this.
  6. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,379
    Ok, thanks for clarifying.

    That strengthens my view that Polaris will see perf/W improvements due to process (known) and low-level improvements (things like sequential clock gating, maybe?), and that Vega will see additional perf/W improvements due to HBM and architecture changes.

    I don't think it's a bad way of doing things: it may have been the only way to get a 16nm chip out the door in time. But I don't expect groundbreaking competitive perf/W either.

    Or the v8 IP level for Polaris is just smoke and mirrors and Raja was actually telling the truth about it being a full new architecture... But then what is Vega?
     
  7. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,379
    It better be. Though a new architecture will hopefully also fix the current clock deficit of GCN.
     
  8. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY
    Well, they did mention their IPC is going to be better for Polaris; I would think that would be the same for Vega, so maybe 4096 is what they went for?
     
  9. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,500
    Likes Received:
    919
    Unless it's better by a factor of at least 1.5, that wouldn't be enough for the high end.
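    A rough back-of-the-envelope for that "factor of at least 1.5", assuming throughput scales with ALUs x clock x IPC (the Fury X-class baseline numbers here are illustrative):

    Code:
    # Throughput scales roughly with ALUs * clock * IPC; figures are illustrative.
    def relative_throughput(alus, clock_ghz, ipc_factor=1.0):
        return alus * clock_ghz * ipc_factor

    fiji      = relative_throughput(4096, 1.05)       # existing 4096-ALU baseline
    vega_same = relative_throughput(4096, 1.05)       # same ALU count, same IPC
    vega_1p5  = relative_throughput(4096, 1.05, 1.5)  # 1.5x IPC at the same clock

    print(vega_same / fiji)  # 1.0 -> no headroom over the existing 4096-ALU part
    print(vega_1p5 / fiji)   # 1.5 -> the kind of uplift a new high end would need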
     
  10. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,469
    Likes Received:
    315
    Location:
    Switzerland

    18 billion transistors for 4096 ALUs? Something doesn't match here...
     
  11. Nemo

    Newcomer

    Joined:
    Sep 15, 2012
    Messages:
    125
    Likes Received:
    23
    Maxwell v1
    Maxwell v2 -- HDMI 2.0, FL12_1

    GCN Gen3
    GCN Gen4 -- HDMI 2.0a, DP 1.2 and FL12_1?
     
  12. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,469
    Likes Received:
    315
    Location:
    Switzerland
    DP 1.3... DP 1.2 is so last year. And a bit more in terms of connectivity is required for full HDR support.
     
  13. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    8,258
    Likes Received:
    1,948
    Location:
    Finland
    DP 1.3 you mean? They're already 1.2a
     
  14. Nemo

    Newcomer

    Joined:
    Sep 15, 2012
    Messages:
    125
    Likes Received:
    23
    Oops, sorry for the typo. I meant DP 1.3.
     
  15. gamervivek

    Regular Newcomer

    Joined:
    Sep 13, 2008
    Messages:
    715
    Likes Received:
    220
    Location:
    india
    It's the same LinkedIn leak from PCGH, except videocardz feel that the Greenland SoC might be a Vega chip, which isn't a far-fetched assumption to make. 4096 is a nice round number and very likely to show up on a Vega chip. Likelier to be Vega 11 (assuming the Vega stack has the same numbering as Polaris) than Vega 10, unless AMD are cutting down shaders to improve other parts: 2304 for Polaris 10, 3000-odd for Vega 11 and 4096 for Vega 10.
    It also has gfx ip 9.0, and I'm guessing that means feature level 12_1 support.

    All this while AMD's open source driver removes mentions of the shader counts, leaving them in the BIOS only so as to keep them secret.

    https://semiaccurate.com/forums/showpost.php?p=258461&postcount=514

    Though some folks think they can divine the stock clocks from the driver.

    http://forums.overclockers.co.uk/showpost.php?p=29320384&postcount=2146
     
  16. msia2k75

    Regular Newcomer

    Joined:
    Jul 26, 2005
    Messages:
    326
    Likes Received:
    29
    Aren't the specs SiSoft Sandra displays often dodgy? The 36 CUs attributed to Polaris 10 could be false.
     
  17. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    10,873
    Likes Received:
    767
    Location:
    London
    I think somewhere along the line AMD talked about instruction caching having an effect on IPC.

    I don't buy it. There would need to be multiple massive shaders/kernels trying to run on a set of CUs simultaneously to get even close to exhausting instruction cache. I believe 4 CUs share an instruction cache of 32KB.

    The other hits on IPC are taken branches (with whole hardware thread coherence - not much point talking about divergence) and waits.

    Branches are already very low cost in GCN.

    Waits are a whole other ballgame. The only time waits are really a problem is when register allocation is very high. Peculiarly, there isn't much register allocation margin between 10 wavefronts per SIMD (the maximum) and, say, 6, where intense branching and waiting will cause ALU stalls.

    The register file is simply too small for complex shaders, given the current way GCN works and the stupidity of the compiler, which always maximises register allocation in favour of issuing fewer instructions.
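    A minimal sketch of that register-file pressure, assuming the usual public GCN figures (256 VGPRs per SIMD lane, a cap of 10 wavefronts per SIMD, VGPRs allocated in blocks of 4):

    Code:
    # How per-wave VGPR allocation limits wavefronts in flight on one GCN SIMD.
    VGPRS_PER_SIMD = 256   # assumed: 256 VGPRs per lane per SIMD
    MAX_WAVES = 10         # assumed: architectural cap of 10 wavefronts per SIMD
    GRANULARITY = 4        # assumed: VGPRs allocated in blocks of 4

    def waves_per_simd(vgprs_per_wave):
        """Wavefronts that fit on one SIMD for a given per-wave VGPR count."""
        allocated = -(-vgprs_per_wave // GRANULARITY) * GRANULARITY  # round up
        return min(MAX_WAVES, VGPRS_PER_SIMD // allocated)

    for vgprs in (24, 28, 36, 40, 48, 64, 128):
        print(vgprs, "VGPRs/wave ->", waves_per_simd(vgprs), "wavefronts/SIMD")
    # 24 VGPRs/wave is already the limit for the 10-wave maximum; by 40 you are
    # down to 6 waves and by 48 to 5, which is the narrow margin described above.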
     
    Razor1 likes this.
  18. Infinisearch

    Veteran Regular

    Joined:
    Jul 22, 2004
    Messages:
    739
    Likes Received:
    139
    Location:
    USA
    Where did you hear this? I'd like to read about it. Also why doesn't AMD just fix the compiler?
     
  19. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    10,873
    Likes Received:
    767
    Location:
    London
    Many years of writing shaders. Modern compilers are CPU-centric. They have no concept of the threads-in-flight versus register allocation trade-off. AMD's compiler will allocate registers until only 1 hardware thread can be in flight.
     
  20. Infinisearch

    Veteran Regular

    Joined:
    Jul 22, 2004
    Messages:
    739
    Likes Received:
    139
    Location:
    USA
    Why hasn't AMD tried to fix this behaviour?
    edit - basically, why are they choosing to be dependent on new hardware?
    Also, thanks.
     
    #1020 Infinisearch, Mar 27, 2016
    Last edited: Mar 27, 2016