Xbox One (Durango) Technical hardware investigation

Discussion in 'Console Technology' started by Love_In_Rio, Jan 21, 2013.

Thread Status:
Not open for further replies.
  1. blakjedi

    Veteran

    Joined:
    Nov 20, 2004
    Messages:
    2,985
    Likes Received:
    88
    Location:
    20001
    Starting off your thought process with that statement immediately red-flags it. You assume that a) they are lying or b) what they are saying isn't true. You have an assumption that numerically larger is better, but looking at a system holistically that may not be true. They made a tradeoff; that tradeoff limited them in some areas but benefits them in others, and may be better at producing a systemic benefit.
     
  2. oldschoolnerd

    Newcomer

    Joined:
    Sep 13, 2013
    Messages:
    65
    Likes Received:
    8
    I reckon they knew what was going to happen when they tested the upclock vs the extra 2 CUs. For those guys who are so deep in the tech it must have been a no-brainer. The real benefit to MS was the PR opportunity: giving the impression that their 12 faster CUs were more effective than Sony's 14 slower ones...
     
  3. dobwal

    Legend

    Joined:
    Oct 26, 2005
    Messages:
    5,955
    Likes Received:
    2,326
    Wouldn't the bandwidth of the DMEs have increased along with the GPU clock, giving 27.3 GB/s versus 25.6?
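    A quick sanity check of that arithmetic. This is a sketch assuming the Data Move Engines' peak bandwidth scales linearly with the reported 800 MHz to 853 MHz GPU upclock:

```python
# Sketch: peak bandwidth scaling linearly with a clock bump.
# Assumes the reported DME figure of 25.6 GB/s at 800 MHz and the 853 MHz upclock.

def scaled_bandwidth(base_gbps: float, base_mhz: float, new_mhz: float) -> float:
    """Scale a peak-bandwidth figure by the ratio of the new clock to the old."""
    return base_gbps * new_mhz / base_mhz

dme = scaled_bandwidth(25.6, 800, 853)
print(f"DME peak after upclock: {dme:.1f} GB/s")  # ~27.3 GB/s
```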
     
  4. Airon

    Banned

    Joined:
    Dec 12, 2012
    Messages:
    172
    Likes Received:
    0
    In truth I have always been under the impression that 12-14 CUs is somehow the balance point of the system.
    Sony originally stated 14+4 (as reported via VGleaks, based on their tech documentation) before realizing that it could be a much more powerful marketing tool to have 18 CUs.
    But I do not believe that the next-generation console competition will be remembered as the era of the CU. No, there is much more to it than that.

    Now, I am curious to know what ekim already seems to know.
     
  5. No, he's just assuming that Microsoft would never admit to the public that their system is weaker than Sony's. They'll do whatever they can (true or not) to avoid having a portion of their customers jump ship if convinced that Sony will provide better visuals, physics, etc.

    Which is a pretty good assumption, IMO.
     
  6. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,579
    Likes Received:
    4,799
    Location:
    Well within 3d
    That would fit with the rest of the GPU. I was going off of the old diagram.
     
  7. DrJay24

    Veteran

    Joined:
    May 16, 2008
    Messages:
    3,894
    Likes Received:
    634
    Location:
    Internet
    Well, you are simply taking the available data and making it match MS's message, IMO. Sony has always had 18 CUs; that number has nothing to do with marketing. We know you can make a 30-CU video card, given enough video memory bandwidth. Sony is pushing GPGPU with their added ACEs and have given talks saying you can shift those CU resources back and forth between rendering and compute; that is not an admission that some number of CUs is somehow wasted. There is a leaked talk that mentions this: the speaker says you can shift those resources per frame, as little or as much as you like. Don't read flexibility as a weakness. Even if there is some small drop due to scaling as the CU count goes up, more is always better assuming you can feed them, and 176GB/s is more than you need for 14 CUs (the 7870 has 20 CUs with 154GB/s).
     
  8. Cjail

    Cjail Fool
    Veteran

    Joined:
    Feb 1, 2013
    Messages:
    2,027
    Likes Received:
    211
    Any discussion about the 14 + 4 CU split should be taboo.
    At this point it's nothing more than noise.
     
  9. oldschoolnerd

    Newcomer

    Joined:
    Sep 13, 2013
    Messages:
    65
    Likes Received:
    8
    That 176GB/s figure is a theoretical peak. The real world will be less. I asked over on the Orbis thread what people thought this may be, but couldn't get an answer. I said I would assume the same % difference between theoretical and real world as on the X1, about 75%, and nobody had any better ideas... or if they did they kept them to themselves!

    So for the sake of argument that would leave the PS4 with about 130GB/s, less anything the CPU needs (a max of 20GB/s)... so if the X1 is apparently balanced with its 200GB/s (less a max of 20GB/s for the CPU), the 14 CUs figure may already be a bit much.

    Of course I could be up the pole with that 75% figure, but I can't see it being miles off...
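    The back-of-envelope above can be written out explicitly. Both the 75% efficiency figure and the 20 GB/s CPU reservation are the post's stated assumptions, not measured numbers:

```python
# Sketch of the post's estimate: apply an assumed real-world efficiency
# to a theoretical peak, then subtract an assumed CPU reservation.

def effective_gpu_bw(peak_gbps: float, efficiency: float = 0.75,
                     cpu_gbps: float = 20.0) -> float:
    """Rough GPU-visible bandwidth under the post's assumptions."""
    return peak_gbps * efficiency - cpu_gbps

# PS4: 176 GB/s peak -> ~132 GB/s "real world", minus up to 20 GB/s for the CPU
print(f"PS4 GPU share: ~{effective_gpu_bw(176.0):.0f} GB/s")  # ~112 GB/s
```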
     
  10. MrFox

    MrFox Deludedly Fantastic
    Legend

    Joined:
    Jan 7, 2012
    Messages:
    6,488
    Likes Received:
    5,996
    At the very least it shouldn't be in the xbox thread.
     
  11. Rangers

    Legend

    Joined:
    Aug 4, 2006
    Messages:
    12,791
    Likes Received:
    1,596
    Aren't people referencing it in relation to the general idea that MS is selling (right or wrong) of diminishing returns with more CUs?

    I don't think anybody is saying there's a physical 14/4 split anymore, or has for a long time.
     
  12. DrJay24

    Veteran

    Joined:
    May 16, 2008
    Messages:
    3,894
    Likes Received:
    634
    Location:
    Internet
    All bandwidth numbers are peak, including the ones AMD publishes with its card specs, so that point is moot. Thinking that MS has 200GB/s available for 12 CUs is kind of silly; they may see 200GB/s aggregate at times, but in general their available bandwidth is much lower. Remember, the eSRAM is only about 0.4% of the RAM (32 MB out of 8 GB).
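    For what it's worth, the fraction works out to roughly 0.4%, taking the commonly cited 32 MB of eSRAM against 8 GB of DDR3:

```python
# eSRAM capacity as a fraction of main RAM (32 MB vs 8 GB).
esram_mb = 32
ram_mb = 8 * 1024
pct = 100 * esram_mb / ram_mb
print(f"eSRAM is {pct:.2f}% of RAM")  # 0.39%
```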
     
  13. XpiderMX

    Veteran

    Joined:
    Mar 14, 2012
    Messages:
    1,768
    Likes Received:
    0
    They said 150GB/s can be a common number.
     
  14. warb

    Veteran

    Joined:
    Sep 18, 2006
    Messages:
    1,057
    Likes Received:
    1
    Location:
    UK
    Why not? MS are talking about balance and their upclock being more effective than an additional 2 CUs in whatever they were testing (for X1). "Hardware balanced at 14 CUs" does seem somewhat relevant.
     
  15. taisui

    Regular

    Joined:
    Aug 29, 2013
    Messages:
    674
    Likes Received:
    0
    I like how people will apply an arbitrary 100% utilization to one design to boast about its elegance, but then imagine an edge-case scenario in which the other design would run like crap.

    I also love the fact that the words from one horse's mouth are always deceptive lies, while the other, supposedly lying horse's words are generously interpreted as "what is actually meant is blah blah blah..."

    But hey, what's new.
     
  16. blakjedi

    Veteran

    Joined:
    Nov 20, 2004
    Messages:
    2,985
    Likes Received:
    88
    Location:
    20001
    I think it's bollocks unless you also assume that Sony would "admit that their system is weaker". Whatever "weaker" means.

    My assumption, which I think is the most reasonable one, is that MS believes they designed a great system: full stop. Here's why: full stop. We made tradeoffs, here's what they are, and this is why you will enjoy our system regardless of what other people (i.e., the competitor, the internet fora, or the digiterati) say.

    Those descriptions have nothing to do with the other guy, except to countervail a perception that was wholly manufactured.
     
  17. oldschoolnerd

    Newcomer

    Joined:
    Sep 13, 2013
    Messages:
    65
    Likes Received:
    8
    No, the 200GB/s is real world... according to the DF doc, anyway... The combined peak of DDR3 and eSRAM is something like 270GB/s.

    That ~0.4%, if used correctly, will be used really heavily, as stages in the pipeline use it to store intermediate results. It punches well above its weight for such a little guy...
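    Assuming the commonly cited component figures (68 GB/s for the DDR3 pool plus ~204 GB/s for the eSRAM after the upclock), the combined peak does land in that neighborhood:

```python
# Combined theoretical peak from the two X1 memory pools.
# 68 GB/s DDR3 and ~204 GB/s eSRAM are the widely reported figures.
ddr3_peak = 68.0    # GB/s, 256-bit DDR3-2133
esram_peak = 204.0  # GB/s, revised read+write figure
combined = ddr3_peak + esram_peak
print(f"Combined peak: {combined:.0f} GB/s")  # 272 GB/s
```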
     
  18. DrJay24

    Veteran

    Joined:
    May 16, 2008
    Messages:
    3,894
    Likes Received:
    634
    Location:
    Internet
    That is not very meaningful IMO. Is that an average or a particular operation? We won't know these numbers until developers start leaking real world experiences with real engines. Now we just have some MS PR.
     
  19. taisui

    Regular

    Joined:
    Aug 29, 2013
    Messages:
    674
    Likes Received:
    0
    This is just a numbers game. By the same argument, having caches on processors would be pointless because they are merely a fraction of the system RAM, so why bother?
     
  20. zupallinere

    Regular Subscriber

    Joined:
    Sep 8, 2006
    Messages:
    768
    Likes Received:
    109
    So from your perspective the NOT-XB1 only has 110 GB/s, which is basically near the 102 GB/s of the original eSRAM bandwidth spec. :lol: Nicely done. The tables have turned and the XB1 is the bandwidth MONSTER!!! :wink:

    We also have X1 Balance (tm) at 200 GB/s and 12 CUs, meaning NOT-XB1s will limp along starving for bandwidth and falling by the wayside. :twisted:

    More seriously, I would step back and think about how many developers, over how many years, have been able to access the bandwidth of GDDR5 memory. Has every game on every GDDR5-based GPU been throwing away 25% of its bandwidth all these years?
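    As a rough illustration of what different real-world efficiencies would mean for a 176 GB/s GDDR5 setup. The 75% figure is the earlier poster's assumption; actual achieved bandwidth depends on access patterns:

```python
# What various assumed efficiencies would leave of a 176 GB/s GDDR5 peak.

def realworld(peak_gbps: float, efficiency: float) -> float:
    """Hypothetical achieved bandwidth at a given utilization fraction."""
    return peak_gbps * efficiency

for eff in (0.75, 0.85, 0.95):
    print(f"{eff:.0%} efficiency -> {realworld(176.0, eff):.0f} GB/s")
```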
     

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.