Xbox One (Durango) Technical hardware investigation

Discussion in 'Console Technology' started by Love_In_Rio, Jan 21, 2013.

Thread Status:
Not open for further replies.
  1. BRiT

    BRiT (>• •)>⌐■-■ (⌐■-■)
    Moderator Legend Alpha

    Joined:
    Feb 7, 2002
    Messages:
    20,516
    Likes Received:
    24,424
    He also went on about a Prince of Persia game for the consoles. It turns out the only thing coming out for it is the already-known iOS mobile game, targeted for the iPad.

    Those who claim he is never wrong are blindly ignoring what doesn't fit in with their own views.
     
  2. temesgen

    Veteran

    Joined:
    Jan 1, 2007
    Messages:
    1,680
    Likes Received:
    486
    Who says the PS4 is superior to the XB1? They are different: one is a truck and the other a car. Depending on what I want to do, one or the other offers advantages, but in general both will work just fine.

    Regarding the memory, I'd like to see someone outline how 9 GB of memory offers a tangible benefit over 5 GB, as I am not seeing enough of a need to justify the cost and trouble of increasing memory this late in the game.
     
  3. BeyondTed

    Newcomer

    Joined:
    May 20, 2013
    Messages:
    233
    Likes Received:
    0
    Where does the conclusion "it's all about the money" come from? How do you know it is not about the energy/power consumption of moving data off chip?

    MS has published this: the energy in joules spent going off chip versus staying on chip versus the actual processing. [Pretty sure it was MS that was credited in the post I read.] The energy consumed going off chip is orders of magnitude higher than on chip (10,000x), and higher still compared with the actual processing operations (10 x 10,000x).

    Second, how do you know that the power consumption of the ESRAM is bigger than (or smaller than, or equal to) the power cost of *driving* GDDR5 modules versus driving DDR3 modules?

    If moving data off chip consumes so much more power (as per the MS publication), then maybe less power is dissipated in equivalent memory transactions with ESRAM + DDR3 than with GDDR5?

    I can't find the MS link right now but here is the same statement essentially:

    http://people.csail.mit.edu/jrk/research.pdf



    And don't forget latency. ESRAM <<< Off Chip. A race car doesn't go very fast when it hits red light after red light.
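    The off-chip-versus-on-chip energy argument above can be put into rough numbers. This is a back-of-envelope sketch; the pJ/bit figures below are illustrative assumptions for silicon of that era, not the MS figures referenced in the post or measured values for either console.

```python
# Back-of-envelope energy comparison for moving data over different paths.
# All pJ/bit figures are illustrative assumptions, not published numbers.
PJ_PER_BIT = {
    "alu_op": 0.1,         # simple integer op, amortized per bit
    "on_chip_sram": 0.5,   # read from a large on-die SRAM array
    "off_chip_dram": 20.0, # full off-package DRAM transaction
}

def transfer_energy_nj(bytes_moved: int, path: str) -> float:
    """Energy in nanojoules to move `bytes_moved` over the given path."""
    return bytes_moved * 8 * PJ_PER_BIT[path] / 1000.0

# Moving one 64-byte cache line over each path:
for path in PJ_PER_BIT:
    print(f"{path:>13}: {transfer_energy_nj(64, path):6.2f} nJ")
```

    Even with these made-up constants the shape of the argument holds: the off-chip path dominates, so a design that keeps more traffic on-die can spend its power budget elsewhere.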
     
    #4483 BeyondTed, Jul 3, 2013
    Last edited by a moderator: Jul 3, 2013
  4. Arwin

    Arwin Now Officially a Top 10 Poster
    Moderator Legend

    Joined:
    May 17, 2006
    Messages:
    18,762
    Likes Received:
    2,639
    Location:
    Maastricht, The Netherlands
    Isn't it better to say that the race car has slower brakes, the impact of which depends on the number of bends in the track?
     
  5. ERP

    ERP
    Veteran

    Joined:
    Feb 11, 2002
    Messages:
    3,669
    Likes Received:
    49
    Location:
    Redmond, WA
    I really wish we could ban people for car analogies...
     
  6. BeyondTed

    Newcomer

    Joined:
    May 20, 2013
    Messages:
    233
    Likes Received:
    0
    I don't know! Maybe the red lights are more appropriate for describing in-order pipeline stalls.
     
  7. BeyondTed

    Newcomer

    Joined:
    May 20, 2013
    Messages:
    233
    Likes Received:
    0
    Oh dear... ...I am sorry! I will try to keep that in mind.
     
  8. Grall

    Grall Invisible Member
    Legend

    Joined:
    Apr 14, 2002
    Messages:
    10,801
    Likes Received:
    2,176
    Location:
    La-la land
    This is essentially a red herring.

    As explained by knowledgeable forum members, DDR3 and GDDR5 latency is essentially equal when measured in nanoseconds (which is the only measure that matters). Also, GDDR5's minimum block transfer size is larger than DDR3's, but seeing as the transfer speed is >2x faster it should not matter.
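    The "measured in nanoseconds" point is easy to check from datasheet timings. The CAS latencies and command clocks below are typical published values for these speed grades, chosen as assumptions for illustration, not the consoles' actual memory timings.

```python
# DRAM latency compared in nanoseconds rather than clock cycles.
# A higher clock means each cycle is shorter, so a larger CL in cycles
# can still be a similar (or smaller) wall-clock latency.
def cas_latency_ns(cl_cycles: int, command_clock_mhz: float) -> float:
    """CAS latency in ns: cycles divided by the command clock rate."""
    return cl_cycles / command_clock_mhz * 1000.0

ddr3 = cas_latency_ns(14, 1066.0)    # DDR3-2133, CL14 (assumed timings)
gddr5 = cas_latency_ns(15, 1375.0)   # 5.5 Gbps GDDR5, CL15 (assumed)

print(f"DDR3-2133 CL14:     {ddr3:.1f} ns")
print(f"GDDR5 5.5Gbps CL15: {gddr5:.1f} ns")
```

    Both land in the same low-teens-of-nanoseconds ballpark, which is why quoting latency in cycles alone makes GDDR5 look misleadingly slow.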
     
  9. XpiderMX

    Veteran

    Joined:
    Mar 14, 2012
    Messages:
    1,768
    Likes Received:
    0
    Are you implying a new RROD for the Xbox One? Is the eSRAM hotter than GDDR5/DDR3?

    EDIT: Sorry, I misunderstood you.
     
  10. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,579
    Likes Received:
    4,799
    Location:
    Well within 3d
    It's a bad approximation, but if we just go with the relative bandwidths on the external memory pools, the PS4 at peak would be moving about 2.5 times the data off-die.
    The exact power consumption would depend on the parameters of the chips and voltages, but the overall activity level should bear this out.


    The present situation, now that people have compared data sheets, appears more like:
    ESRAM <<? DDR3 ~=(maybe) GDDR5.

    The DRAM devices themselves don't differ much, so it would come down to the respective designs' memory controllers and design emphasis.
    The eSRAM should be faster, to a degree not disclosed.
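    The "about 2.5 times" figure above follows directly from the publicly stated peak bandwidths of the two external memory pools:

```python
# Peak external-memory bandwidth ratio between the two consoles,
# using the publicly stated figures.
ps4_gddr5_gbps = 176.0  # 256-bit GDDR5 at 5.5 Gbps/pin
xb1_ddr3_gbps = 68.0    # 256-bit DDR3-2133

ratio = ps4_gddr5_gbps / xb1_ddr3_gbps
print(f"At peak, the PS4 moves ~{ratio:.1f}x the data off-die")
```

    That ratio only bounds the *activity* difference at peak; as the post says, turning it into watts would need the actual chip parameters and voltages.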
     
  11. BeyondTed

    Newcomer

    Joined:
    May 20, 2013
    Messages:
    233
    Likes Received:
    0
    If so, then at least it can be said that ESRAM latency is <<< off-chip latency.
     
  12. BeyondTed

    Newcomer

    Joined:
    May 20, 2013
    Messages:
    233
    Likes Received:
    0
    Not implying that. [I should have written costs not savings. I went and edited that. Sorry for lack of clarity.]

    I am saying that there are fewer off-chip transactions (and they consume less power at much lower clock rates) in one scenario (the DDR3 + ESRAM scenario). So I am implying that the power consumed by the ESRAM might not be much compared with those power savings (the savings referring to the reduced off-chip memory transactions).

    The drivers for the GDDR5 should be dissipating quite a bit more, since there are both more transactions and a much higher clock. [I am assuming the dominant source of the 10,000x factor is the drivers from the SoC to the external memory die.] But there are more qualified people who could toss around numbers, given that the Micron chips are shown in the picture, the bandwidth/clock is known (published for the GDDR5 case), etc.

    If there is an AMD person around here, maybe they can walk over and talk to someone who worked on the GDDR4 or GDDR5 specs and get their opinion in a water-cooler conversation. But reading 10,000x, it suggests it might be a pretty real factor.



    Plus, if the SoC gets a lot bigger because of a big ESRAM block, then the contact area between the SoC and the heatsink just went up a lot. Thermal resistance is inversely proportional to contact area (and yes, there is much more to it).
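    The inverse-proportionality claim is just the 1-D conduction formula R = t / (k * A) for the interface layer. The thicknesses, conductivity, and die areas below are illustrative assumptions, not measurements of either SoC.

```python
# Simplified 1-D conduction model of the die-to-heatsink interface:
# thermal resistance falls as contact area grows. Values are illustrative
# assumptions, not measurements of either console's SoC.
def interface_resistance_k_per_w(thickness_m: float,
                                 conductivity_w_mk: float,
                                 area_m2: float) -> float:
    """R = t / (k * A) for a uniform slab of interface material."""
    return thickness_m / (conductivity_w_mk * area_m2)

TIM_THICKNESS = 50e-6   # 50 um of thermal interface material (assumed)
TIM_K = 5.0             # W/(m*K), a decent TIM (assumed)

small_die = interface_resistance_k_per_w(TIM_THICKNESS, TIM_K, 350e-6)  # 350 mm^2
big_die = interface_resistance_k_per_w(TIM_THICKNESS, TIM_K, 400e-6)    # 400 mm^2
print(f"350 mm^2 die: {small_die:.4f} K/W")
print(f"400 mm^2 die: {big_die:.4f} K/W")
```

    As the post concedes there is much more to it (spreading resistance, hotspot density), but the larger die does buy a lower interface resistance for the same cooler.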
     
  13. DSoup

    DSoup Series Soup
    Legend Subscriber

    Joined:
    Nov 23, 2007
    Messages:
    16,785
    Likes Received:
    12,697
    Location:
    London, UK
    If the Xbox One was closer to the Wii U in performance than the PlayStation 4 that may be true, but this isn't the case. The PlayStation 4 and Xbox One are in the same performance ballpark, just as the PlayStation 3 and Xbox 360 were in the same performance ballpark.

    What we've observed in the current generation will likely continue. The PlayStation 3's bonkers architecture and less usable memory resulted in lower resolutions, lower frame rates, lower-quality textures and other graphical budgets (alpha blending in particular) being cut. The games were fundamentally the same; it's just that the PlayStation 3 often ended up with compromised visuals compared to the Xbox 360 version.

    And you know what? It's not a big deal. :cool:
     
  14. Vertrucio

    Newcomer

    Joined:
    Jun 20, 2013
    Messages:
    33
    Likes Received:
    0
    Just wanted to ask a small technical question.

    Does the Kinect 2 actually have any processing power of its own in hardware, or more than the Kinect 1 implementation did?

    If I recall, a huge limiting factor for Kinect 1 was that it offloaded all its processing to the 360, which took away from the game's processing.

    There have been new tech demos of the Kinect 2 which look impressive, but the Kinect 1 also looked impressive in tech demos.

    If the Kinect 2 does have its own dedicated processor, anyone know what that is?
     
  15. DSoup

    DSoup Series Soup
    Legend Subscriber

    Joined:
    Nov 23, 2007
    Messages:
    16,785
    Likes Received:
    12,697
    Location:
    London, UK
    This is mind-bending in its audacity. So on one hand, it's entirely reasonable to think Microsoft can pull a last-minute upclock on their APU, even though the one they showed, presumed to be running at 1600/800MHz, already has a comically large cooling solution; yet we're also thinking that Sony, who appear to be running a much simpler design variation of the same Jaguar package, are having yield problems? :eek:

    The 1.6Ghz Jaguar would have been picked by Microsoft and Sony because of cost (yield), energy consumption and heat output. It's probably a magic sweet spot for the package which is why both companies picked the exact same clocks. If TSMC were having yield problems producing for Sony, they'd know by now, Sony would know by now, and pre-orders wouldn't be virtually unlimited.
     
  16. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    15,134
    Likes Received:
    7,680
    There is an audio block that does echo cancellation for Kinect, and I imagine whatever other audio processing Kinect needs to do. The rest would be on the GPU and CPU. The question is how much of it is part of the OS reservation and how much is run in the game VM.
     
  17. Lalaland

    Regular

    Joined:
    Feb 24, 2013
    Messages:
    864
    Likes Received:
    693
    I'd put money on 12GB being pure nonsense: if you mix RAM module sizes you break the ability to dual-channel, which is a far worse loss than the gain of 4GB. If the memory controller is triple-channel and MS just decided 'sod it, let's not populate that', then there are pink slips a-flying down Seattle way. There is no time for a respin to add more channels or to change the APU itself, and if they tried a dual-design, dual-fab strategy they would burn cash so fast it would be cheaper to just put $100 in every XB1.

    The better ESRAM performance intrigues me though. I had presumed it was just that they worked out a more efficient way of r/w in parallel, but smarter folks than me are naysaying that interpretation. I guess I saw it as analogous to a microcode patch for a CPU that improves performance for certain macro-ops, but that is a very different scenario from a memory chip. I do hope that developer briefing leaks in a more substantive way so I can read what the smart people here think!
     
    #4497 Lalaland, Jul 3, 2013
    Last edited by a moderator: Jul 3, 2013
  18. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,579
    Likes Received:
    4,799
    Location:
    Well within 3d
    The actual pictures of the XBox board would indicate how many channels there are.
    That being said, putting a non-power-of-two amount of memory on a power-of-two bus width is possible. The inverse has also been done by certain Nvidia GPU SKUs.

    The memory controllers and whatever address partitioning they use can route accesses appropriately, at the cost of non-uniform bandwidth if accesses to the additional space start hammering the controllers linked to the higher-density channels while idling the others.
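    One way such routing can work is a split address map: the symmetric region interleaves across all channels, while addresses above it land only on the higher-density channels. The layout below (four channels, two of them double density, 256-byte stripes) is hypothetical, not the actual Xbox One memory map.

```python
# Sketch of address-to-channel routing with non-uniform channel densities.
# Hypothetical layout: four channels, two of double density. Low addresses
# interleave across all four channels at full bandwidth; addresses above
# the symmetric region can only live on the two larger channels, so peak
# bandwidth there is halved.
CHANNELS_GB = [2, 2, 4, 4]   # GB per channel (hypothetical)
INTERLEAVE = 256             # bytes per interleave stripe

# Region reachable by interleaving all channels at the smallest density:
SYMMETRIC_GB = min(CHANNELS_GB) * len(CHANNELS_GB)  # 8 GB, 4-wide

def channel_for(addr_bytes: int) -> int:
    """Map a physical address to the channel that serves it."""
    stripe = addr_bytes // INTERLEAVE
    if addr_bytes < SYMMETRIC_GB * 2**30:
        return stripe % 4        # full-speed region: all four channels
    return 2 + stripe % 2        # extra space: high-density channels only

print(channel_for(0))            # low region interleaves from channel 0
print(channel_for(9 * 2**30))    # high region: only channels 2 or 3
```

    This is exactly the non-uniform-bandwidth cost the post describes: workloads that hammer the extra space keep two controllers busy while the other two idle.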
     
  19. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    44,106
    Likes Received:
    16,898
    Location:
    Under my bridge
    Your reasoning strikes me as very unsound. The fact that MS has a huge case with a huge fan implies they can deal with more heat than the PS4, no? Ergo the possibility that MS can upclock but Sony can't.
     
  20. XpiderMX

    Veteran

    Joined:
    Mar 14, 2012
    Messages:
    1,768
    Likes Received:
    0
    Is there any possibility that these changes have been made since March/April?
     

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.