Wii U hardware discussion and investigation *rename

Discussion in 'Console Technology' started by TheAlSpark, Jul 29, 2011.

Thread Status:
Not open for further replies.
  1. artstyledev

    Newcomer

    Joined:
    Dec 18, 2012
    Messages:
    45
    Likes Received:
    0

    Confirmed/speculation, I guess. I mean, you can't believe everything, but that was the word around town. Even IGN (yes, I know, IGN) built their test Wii U kit with a 4850; I believe that's what Nintendo was telling devs, that the final GPU would be about as powerful as the 4850. Now remember this: the E6760 scores around 5870 in 3DMark Vantage, which is higher than the HD 4850, despite the card drawing only 35 watts! So take it for what it's worth, I guess.
     
  2. Inuhanyou

    Veteran

    Joined:
    Dec 23, 2012
    Messages:
    1,305
    Likes Received:
    480
    Location:
    New Jersey, USA
    That kind of invalidates the whole thing, doesn't it? Lol... I mean, you can't take an off-the-shelf 4850 in a PC and expect results comparable to a closed box with plenty of other variables, even if they had the same GPU. It just shows a clear lack of technical knowledge on their part. Not that I know much about tech myself, though.
     
  3. artstyledev

    Newcomer

    Joined:
    Dec 18, 2012
    Messages:
    45
    Likes Received:
    0

    YES, exactly, and they stated that before their test. Obviously whatever is inside the Wii U is custom and not the same as an off-the-shelf part.
     
  4. Inuhanyou

    Veteran

    Joined:
    Dec 23, 2012
    Messages:
    1,305
    Likes Received:
    480
    Location:
    New Jersey, USA
    Then why do it, knowing that it has no actual value? :?:

    It's just not something to take any sort of stock in.
     
  5. artstyledev

    Newcomer

    Joined:
    Dec 18, 2012
    Messages:
    45
    Likes Received:
    0
    Hits on their website, obviously. The Wii U was the new thing, and gamers wanted to know about its power. It was an easy way to get hits on the site.
     
  6. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    44,106
    Likes Received:
    16,898
    Location:
    Under my bridge
    Okay, in saying 'multithreaded engine' I wasn't thinking of a PC targeting a range of machines - should have said 'console multithreaded engine'. In a console where you know you've got three cores, I don't see why devs wouldn't keep them all busy as long as they have work to do. It'd all come down to dependencies as you say. I guess a dev can set us right there.
     
  7. Exophase

    Veteran

    Joined:
    Mar 25, 2010
    Messages:
    2,406
    Likes Received:
    430
    Location:
    Cleveland, OH
    Thank you for the very informative post. It's much more detailed than the information I could quickly find on these technologies, which offered no explanation of why the latency would be worse.

    I do, however, have a couple of questions:

    1) The GDDR3 latency on the Xbox 360 is awful (the PS3's XDR latency is also very bad, but Rambus has always had higher-latency RAM), yet you seem to be saying that the technology has no latency disadvantage vs. DDR3 (aside from a larger minimum burst size, which I agree barely affects latency). We can see that the absolute latency of main RAM on the Xbox 360 (> 150ns) is much worse than a high-end x86 desktop with DDR3. Would you say this is purely down to having to go through the GPU, along with a potentially inferior memory controller, and has nothing to do with the memory itself?
    2) You don't mention GDDR5; do you have any input on what the latency (absolute, not in clock cycles) is like there vs. DDR3? I've seen various references suggesting it's higher latency than DDR3, for instance on Xeon Phi, where it should be paired with a high-quality memory controller.
     
  8. HolySmoke

    Newcomer

    Joined:
    May 20, 2004
    Messages:
    84
    Likes Received:
    61
    It's worth noting that the lack of command lists in PC gaming is a confound in this comparison. Even the best-threaded code can still be held back by the number of draw calls the CPU can process, leaving the secondary cores without work.

    Crysis was a notable example of this. The 'object detail' setting could push enough draw calls that overall CPU utilization on the Core 2 architecture would fall rather than rise.
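A toy cost model makes the single-threaded submission bottleneck above concrete (all costs here are made-up illustrative numbers, not measurements of Crysis or any real engine):

```python
# Hypothetical model: per-draw-call CPU cost plus "other" frame work.
# Without D3D11 command lists, all draw submission is serialized on
# one core, so adding cores stops helping once submission dominates.

def frame_cpu_ms(draw_calls, cores, submit_us=5.0, other_ms=12.0,
                 command_lists=False):
    submit_ms = draw_calls * submit_us / 1000.0
    if command_lists:
        # Recording is spread across all cores (final submit ignored).
        return (submit_ms + other_ms) / cores
    # Submission pinned to one core; only the other work parallelizes.
    return max(submit_ms, other_ms / cores)

print(frame_cpu_ms(2000, 4))                      # 10.0 ms, submission-bound
print(frame_cpu_ms(2000, 4, command_lists=True))  # 5.5 ms
```

With 2000 draws at 5µs each, submission alone takes 10ms on one core no matter how many cores exist; spreading recording across four cores with command lists drops the frame's CPU time to 5.5ms in this model.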
     
  9. Svensk Viking

    Regular

    Joined:
    Oct 11, 2009
    Messages:
    627
    Likes Received:
    208
    Wasn't that what DX11 multithreaded rendering was supposed to help with? AFAIK, Civilization V is the only game that supports it, but it was shown to get better performance once Nvidia enabled it in their drivers.
     
  10. HolySmoke

    Newcomer

    Joined:
    May 20, 2004
    Messages:
    84
    Likes Received:
    61
    Yes. AMD had yet to enable it, last I knew.

    Edit: It's 'sænskur víkingur' for the correct nominative :wink:
     
    #4070 HolySmoke, Dec 25, 2012
    Last edited by a moderator: Dec 25, 2012
  11. Svensk Viking

    Regular

    Joined:
    Oct 11, 2009
    Messages:
    627
    Likes Received:
    208
    Fixed now :smile: Need to work on my Icelandic (that was Google, though ^^), and I sure regret that we Swedes have simplified our grammar so much :mad:
     
  12. HolySmoke

    Newcomer

    Joined:
    May 20, 2004
    Messages:
    84
    Likes Received:
    61
    Heh, it's not all roses for us either. I had problems finding the correct possessive for my niece's name on a Christmas card earlier today.

    I felt stupid.
     
  13. (((interference)))

    Veteran

    Joined:
    Sep 10, 2009
    Messages:
    2,499
    Likes Received:
    70
    Isn't it possible to tear down the GPU to find out exactly what it is, like was done with the A6 processor in the iPhone 5?

    I guess it's just that tech sites like AnandTech, iFixit, Chipworks, etc. are not as interested in the internals of the Wii U as they are in the latest iDevice, and don't think the effort is worth it.
     
  14. Blazkowicz

    Legend

    Joined:
    Dec 24, 2004
    Messages:
    5,607
    Likes Received:
    256
    What can they do exactly, stare at a square that says "this is where the GPU is"? Or is it possible to carefully trim off the top and get an actual die shot?
     
  15. TheAlSpark

    TheAlSpark Moderator
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    22,146
    Likes Received:
    8,533
    Location:
    ಠ_ಠ
  16. SoreSpoon

    Newcomer

    Joined:
    Dec 16, 2012
    Messages:
    45
    Likes Received:
    4
    Okay, I know the thing is weak, but now you're being ridiculous.
     
  17. function

    function None functional
    Legend

    Joined:
    Mar 27, 2003
    Messages:
    5,854
    Likes Received:
    4,411
    Location:
    Wrong thread
    Mario, even at 550 MHz, would be considerably faster than Xenos. Even with a quarter of the GPU removed, it would still be better.

    The Wii U has shown nothing so far to indicate it is better than the RV730 (or even as fast as that).
     
  18. Teasy

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,563
    Likes Received:
    14
    Location:
    Newcastle
    55nm, "a bit unlikely"? It's a 40nm chip, 137mm², which should be somewhere around 1 billion transistors. The fact that developers haven't gotten the most out of it yet doesn't change that.
     
    #4078 Teasy, Dec 25, 2012
    Last edited by a moderator: Dec 25, 2012
  19. BobbleHead

    Newcomer

    Joined:
    Sep 24, 2002
    Messages:
    58
    Likes Received:
    2
    1) The latency of the DRAM itself is only a portion of the total latency. In the example of 800 MHz DDR3, the first piece of read data comes out of the DRAM 11+11 cycles after you send the first part of the read command. Add in 4 cycles for the rest of that burst and you get 26 cycles × 1.25ns = 32.5ns for a read. That's just for the command to the DRAM and the read return.

    The rest of the time is how long it takes a read request to get from the CPU to the memory controller to the DRAM, and for the read data to make the return trip. In a high-end x86 desktop chip, the memory controller is part of the CPU; the path to/from the controller is made as short as possible and can run at high clock speeds. In the 360, the CPU is a separate chip from the GPU, which has the memory attached to it. It takes additional time for the CPU to send the request from one chip to the other, for the request to go from that interface to the memory controller on the GPU, and the reverse. This can add quite a bit of latency. Any system where the memory controller is on the same chip as the CPU can have much lower latency.

    2) GDDR5 uses similar signaling to GDDR3: pseudo-open-drain with pull-up termination, but at a lower voltage (1.2-1.5V rather than 1.8V). In order to push the interface faster, there is additional overhead on the sending and receiving sides and logical changes to the interface, and that overhead adds to the base latency. The DRAM core has roughly the same latency as DDR3, but the GDDR5 I/O layer imposes an extra latency penalty. For that extra cost you gain the ability to send data a lot faster. As a result, GDDR5 latency is a bit higher than DDR3 latency in absolute terms, but it's not a huge difference.
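The 800 MHz DDR3 arithmetic in point 1 can be checked directly; this sketch just recomputes the numbers from the post (the cycle counts are the post's example, not datasheet values for any particular module):

```python
# Recompute the DDR3 read-latency example: command-to-data cycles plus
# burst cycles, at an 800 MHz I/O clock (1.25 ns per cycle).

def dram_read_ns(trcd, cl, burst, clock_mhz):
    """Latency from the first read command to the last beat of the
    burst, ignoring the trip between CPU, controller, and DRAM pins."""
    period_ns = 1000.0 / clock_mhz
    return (trcd + cl + burst) * period_ns

ddr3_ns = dram_read_ns(trcd=11, cl=11, burst=4, clock_mhz=800)
print(ddr3_ns)  # 32.5, matching the 32.5ns figure in the post
```

As the post notes, this 32.5ns is only the DRAM-side portion; the 360's >150ns total comes largely from the extra chip-to-chip hop through the GPU's memory controller.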
     
  20. SoreSpoon

    Newcomer

    Joined:
    Dec 16, 2012
    Messages:
    45
    Likes Received:
    4
    But it would actually cost more to do something like that than it would to shrink an RV730 to 40nm. There's absolutely no reason for it to be 55nm. I know how you plan to respond to this, so instead please give a reason.
     