Xbox One (Durango) Technical hardware investigation

Discussion in 'Console Technology' started by Love_In_Rio, Jan 21, 2013.

Thread Status:
Not open for further replies.
  1. keenism

    Newcomer

    Joined:
    Feb 26, 2013
    Messages:
    27
    Likes Received:
    0
    Location:
    Behind You

    Assuming this "achievable clock" exists, you still have to remember... this isn't a discrete GPU. What's more, it's inside a console that has to function reliably and consistently under a range of conditions and placements (you wouldn't believe the amount of crap that accumulates over time inside these things in some homes)... (I think that's the goal this time? :lol: )

    Realistically, more GPU speed = more heat/voltage/stress on other components = higher failure rates = money..
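    As a rough illustration of that heat/voltage/stress chain: dynamic CMOS power scales roughly as C * V^2 * f, so a clock bump that also requires a voltage bump costs more than linearly. A back-of-envelope sketch with purely illustrative, normalized numbers (not actual Durango figures):

```python
def dynamic_power(capacitance, voltage, freq_hz):
    """Classic CMOS dynamic power approximation: P = C * V^2 * f."""
    return capacitance * voltage ** 2 * freq_hz

base = dynamic_power(1.0, 1.00, 800e6)   # normalized 800 MHz baseline
oc   = dynamic_power(1.0, 1.05, 1000e6)  # 1 GHz with a hypothetical 5% voltage bump

# A 1.25x clock increase plus a small voltage bump costs ~1.38x the dynamic power.
print(round(oc / base, 3))
```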
     
  2. Reiko

    Regular

    Joined:
    Aug 27, 2011
    Messages:
    272
    Likes Received:
    0
    MS likes to burn money if it reaches their end goal.
     
  3. SenjutsuSage

    Newcomer

    Joined:
    Feb 25, 2013
    Messages:
    235
    Likes Received:
    2
    Why does it need to be half the clock of the CPU cores? Does it have something to do with APUs or HSA?

    It isn't like we are referring to an overclock. These are apparently safe clock levels for this GPU. It shouldn't lead to unreliability and failures. Also, do we know that Durango is using an APU?
     
    #2123 SenjutsuSage, Mar 18, 2013
    Last edited by a moderator: Mar 18, 2013
  4. McHuj

    Veteran Subscriber

    Joined:
    Jul 1, 2005
    Messages:
    1,613
    Likes Received:
    869
    Location:
    Texas
    It doesn't have to be, and current AMD APUs don't follow that model, but I think power savings can be achieved if all the clocks can be derived from a common clock.

    IMO, everything about Durango seems to be about finding the optimum point of performance/watt and performance/$$$, with both power consumption and cost being lower than the original 360's.

    I also think that the system is probably carefully balanced in terms of memory bandwidth. I'm not sure overclocking the CPU or GPU would do much good if the memory bandwidth isn't scaled up as well. According to the spec, the memory looks to be DDR3-2133; could they (or would they need to) go to DDR3-2400 easily and cost-effectively?
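    For reference, the peak-bandwidth arithmetic for those DDR3 speed grades works out as below. The 256-bit bus width is an assumption pulled from the leaked specs, not something stated in this post:

```python
def ddr_peak_bw_gbs(mt_per_s, bus_bits=256):
    """Peak bandwidth in GB/s: megatransfers/sec * bytes moved per transfer."""
    return mt_per_s * 1e6 * (bus_bits / 8) / 1e9

print(ddr_peak_bw_gbs(2133))  # ~68.3 GB/s for DDR3-2133
print(ddr_peak_bw_gbs(2400))  # 76.8 GB/s for DDR3-2400
```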
     
  5. SenjutsuSage

    Newcomer

    Joined:
    Feb 25, 2013
    Messages:
    235
    Likes Received:
    2
    Yea, they are clearly putting a high premium on finding the right balance between cost, performance and power consumption, without a doubt. Honestly, unless there's something I'm missing, they've found a pretty damn good balance. I still find it hard to believe they wouldn't go for the higher GPU clock, though.

    Perhaps, though, due to the choices they've made elsewhere, there's even less reason for them to rely on the power savings of running all clocks from a common source, which would let them untether the GPU core clock and take advantage of what a GPU in that range should be capable of.
     
  6. astrograd

    Regular

    Joined:
    Feb 10, 2013
    Messages:
    418
    Likes Received:
    0
    I would imagine that they would plan ahead on raising the clock and then go back on that if they felt they didn't need that fast a chip. It seems smarter to assume you'll need to go all out and pare back as necessary than to put yourself in a position where you have to react.
     
  7. GrimThorne

    Newcomer

    Joined:
    Mar 7, 2013
    Messages:
    221
    Likes Received:
    31
    Sorry, meant to say the 7700 series. And we're looking at the VGleaks specs, right? That's either a 7770 or 7790, both mid-range performers. I hear what you're saying, but it's still hard to imagine that Durango was the best this team of engineers could come up with. It's just such obviously low-hanging fruit.
     
  8. McHuj

    Veteran Subscriber

    Joined:
    Jul 1, 2005
    Messages:
    1,613
    Likes Received:
    869
    Location:
    Texas

    I would say that you can't claim that unless you know what criteria they were designing for. I think cost, manufacturability, power consumption, etc. may have been more important than raw performance.
     
  9. SenjutsuSage

    Newcomer

    Joined:
    Feb 25, 2013
    Messages:
    235
    Likes Received:
    2
    Exactly. It's pretty inconceivable that they would have boxed themselves in from the very start with some predetermined specification. More than likely, they've had a number of plans that they've either altered somewhat or moved away from entirely over the course of planning the next Xbox.

    Obviously they're probably well past the point where they have a favorite design, and that favorite design is probably more or less what we know now. But I fully expect there are enough moving parts among their existing plans that they have alternatives: designs that, while surely not wildly different from the current one, have some meaningful differences and are fairly advanced, well-thought-out designs in their own right. If they did indeed conclude that a change was absolutely, no way around it, necessary, they would be prepared to make it.

    After all, we are talking about a highly qualified, no doubt hard-working team of professionals who are paid to get things right and account for any and all possibilities. It isn't so difficult to imagine that these folks, through their work with AMD, have devised multiple paths forward.

    It isn't their job to design the most expensive system they can just because they're able to, or allowed to. They have to, at all costs, find a way around doing such a thing. Their primary goal is giving developers what they need while designing a system that also meets Microsoft's own goals. If they can do that without going for the most powerful or expensive parts, then that's what they will do, and it's certainly what they should do.

    After that it's up to Microsoft to use every resource available to them to make the console as desirable as possible from a software, features and services standpoint.
     
  10. Tap In

    Legend

    Joined:
    Jun 5, 2005
    Messages:
    6,382
    Likes Received:
    65
    Location:
    Gravity Always Wins
    exactly... it amazes me that the sentiment is that MS is "stupid" because they didn't just spend everything in the war chest to buy the biggest, hottest, baddest parts they could find to put in the new box,

    instead of brilliant for engineering it the smart way.
     
  11. Betanumerical

    Veteran

    Joined:
    Aug 20, 2007
    Messages:
    1,763
    Likes Received:
    280
    Location:
    In the land of the drop bears
    Most documents we have seen (all?) say they are targeting 800 MHz. I see no reason to believe otherwise; why would they tell developers they are targeting an 800 MHz GPU clock if they are aiming higher?
     
  12. AlphaWolf

    AlphaWolf Specious Misanthrope
    Legend

    Joined:
    May 28, 2003
    Messages:
    9,470
    Likes Received:
    1,686
    Location:
    Treading Water
    A lower boundary makes sense if you don't have final silicon and yield data. Moving it up won't set back projects the way moving it down would.
     
  13. Betanumerical

    Veteran

    Joined:
    Aug 20, 2007
    Messages:
    1,763
    Likes Received:
    280
    Location:
    In the land of the drop bears
    Yes, but you would at least inform developers that you are aiming higher; there is really no reason to leave them in the dark about targeting higher clocks.
     
  14. SenjutsuSage

    Newcomer

    Joined:
    Feb 25, 2013
    Messages:
    235
    Likes Received:
    2
    Don't get me wrong, it could very well be what they are targeting. But even if they didn't tell developers they intended to aim higher, I don't see it as too problematic for the development process.

    Microsoft may have simply provided devs with 800 MHz parts because, due to yield issues, it was the quickest way to get a close-enough approximation of the final hardware into developer hands on time. I imagine it couldn't have been easy producing GPUs with the ESRAM, so maybe it made sense to keep clocks lower until yields improved.

    If Microsoft ends up delivering a higher-clocked GPU in the final hardware, it more or less eases its way into the pre-existing development process without much chaos. It would be much worse, for example, to promise something much greater, convince developers they're going to get it, and then pull the rug out from under them by delivering something notably weaker, which I don't suspect happened here.

    If you think about it, who are we to say they didn't notify developers of this? The rumored specs would still be more or less accurate, which is essentially what various sites' sources have been saying. Sony didn't exactly notify many developers about 8GB of GDDR5, did they? Even some first parties apparently didn't know about it.
     
  15. AlphaWolf

    AlphaWolf Specious Misanthrope
    Legend

    Joined:
    May 28, 2003
    Messages:
    9,470
    Likes Received:
    1,686
    Location:
    Treading Water
    I don't see why. Development would still need to proceed under the assumption of the lower boundary, so giving them maybes isn't really helpful.
     
  16. Squilliam

    Squilliam Beyond3d isn't defined yet
    Veteran

    Joined:
    Jan 11, 2008
    Messages:
    3,495
    Likes Received:
    114
    Location:
    New Zealand
    It would be nice if Durango were clocked at 900/1800 MHz: the 1800 MHz for some extra single-threaded CPU performance, and the 900 MHz for extracting some extra juice out of the GPU/eSRAM. I think it is certainly doable with the chip in question, though it could come down to how close the 20nm shrink is to being achievable. If they can get the 20nm shrink within 12 months, they may decide to take a little hit initially.
     
  17. SenjutsuSage

    Newcomer

    Joined:
    Feb 25, 2013
    Messages:
    235
    Likes Received:
    2
    It just makes way too much sense for them not to do it, but I guess we'll just have to see what they do.
     
  18. Nisaaru

    Veteran

    Joined:
    Jan 19, 2013
    Messages:
    1,133
    Likes Received:
    403
    If I were choosing ESRAM instead of GDDR5, I would at least make the effort to get the most out of it bandwidth-wise. Sure, that would mean more ROPs too, but if you can't match a high-end GPU's CUs and memory for TDP and price reasons, you can at least try to reach a higher peak bandwidth.
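    To put the peak-bandwidth point in numbers: a small sketch, assuming the rumored 102.4 GB/s ESRAM figure at 800 MHz (i.e. 128 bytes moved per clock), which makes peak bandwidth scale linearly with clock:

```python
def esram_bw_gbs(clock_mhz, bytes_per_clock=128):
    """Peak ESRAM bandwidth in GB/s, assuming a fixed width moved per clock."""
    return clock_mhz * 1e6 * bytes_per_clock / 1e9

print(esram_bw_gbs(800))  # 102.4 GB/s at the documented 800 MHz
print(esram_bw_gbs(900))  # 115.2 GB/s if the clock rose to 900 MHz
```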
     
  19. AlphaWolf

    AlphaWolf Specious Misanthrope
    Legend

    Joined:
    May 28, 2003
    Messages:
    9,470
    Likes Received:
    1,686
    Location:
    Treading Water
    It only makes sense if yields and thermals allow it.
     
  20. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    44,106
    Likes Received:
    16,898
    Location:
    Under my bridge
    You're comparing Durango to (desired) performance, assuming the intention of the engineers is Moare Powarrr. What if they've got the performance of a 7790 at 80% of the cost and 75% of the power consumption, coupled with a 50% saving in RAM costs by going with DDR3? Then their goals, which weren't low-hanging, were achieved by good engineering.

    The only way to determine the effectiveness of the engineers is to know what targets they were designing for, not what we want them to design. Same with Wii U - the engineers managed to get comparable performance to PS360 in a much smaller and cheaper package. We can grumble all we like about performance, but that's not due to the ineptitude of the engineers; that's a fault of the decision makers giving them low-power targets.
     