Wii U hardware discussion and investigation *rename

Discussion in 'Console Technology' started by TheAlSpark, Jul 29, 2011.

Thread Status:
Not open for further replies.
  1. Inuhanyou

    Veteran

    Joined:
    Dec 23, 2012
    Messages:
    1,305
    Likes Received:
    480
    Location:
    New Jersey, USA
    Ah, I see, thank you for the information.

    At this point, having heard so many conflicting things about the Wii U's hardware potential from so many sources, it's a bit overwhelming. One thing is clear though: Nintendo obviously prioritized being a part of the current console market in terms of hardware, as opposed to aiming for some sort of spot in between the current HD twins and the next set.

    I'm assuming that their DX10-equivalent feature set will get them some of the way in terms of effects and shader rendering; maybe they figured that would be enough to split the difference?

    I'm not sure how big a difference using DX10-equivalent effects versus the 360's and PS3's DX9-equivalent will make, honestly. And that's before we get to the little issue of still having no idea about the actual bandwidth of the eDRAM or the general processing capabilities of the GPU.

    I've heard people saying RV770, but isn't that a bit too high for what we've been seeing in games so far, even taking into account what some people would call the "quick and dirty port" nature of those games?

    I mean, we are all pretty much focusing on the fact that the GPU is most likely the beefiest part of the Wii U at this point, so to see only a marginal improvement in Wii U games, even with the substantially larger amount of eDRAM compared to the 360, says to me that it can't be that much more powerful.

    Wouldn't an RV730 or a die-shrunk RV740 be more appropriate, given that they are also apparently trying to fit the eDRAM onto the GPU die as well?

    Sorry if that sounds a bit ignorant; I'm not sure how clock speeds or modern efficiencies could tie into that kind of performance.
     
  2. function

    function None functional
    Legend

    Joined:
    Mar 27, 2003
    Messages:
    5,854
    Likes Received:
    4,411
    Location:
    Wrong thread
    I'm happy to call the Wii U CPU both Weaksauce and Cheapsauce.

    That's because I'm comparing it to PC and console CPUs from 2005 onwards, because those are the systems running the kind of AAA core-gamer multiplatform titles that Nintendo were banging on about getting, and having superior versions of. And also because the Wii U costs more than a PS360 and about as much as a PC with a CPU several times faster.

    The fact that I am being forced to compare the Wii U CPU to the 2005 Xenon is bad enough in itself, the fact that it seems to lose is worse. The fact that there are people who think Xenon is some kind of benchmark for a $350 2012 console is simply depressing.

    When the Wii U gets compared to the PS4, Xbox 3 and contemporary PC CPUs (even budget ones) it'll probably be a bloodbath.
     
  3. artstyledev

    Newcomer

    Joined:
    Dec 18, 2012
    Messages:
    45
    Likes Received:
    0

    That's fine, Shifty, we can do that. I haven't been a member of this site long, but I've been following this thread for a long time. I think the problem some people like myself have is that the architecture talk seems totally biased most of the time. In 20+ years of gaming I've never seen a console doomed, with the masses believing we've mostly seen the best of what it can achieve, one month into its lifecycle. It's like, let's bury it before it has a chance to breathe.

    The architecture talk is fine; I think we've all been beaten over the head with the details. But it's the unknowns in those details that people are waving off, like "I don't need to know anything else, the Wii U can't run a game like GTA4" or "the OoOE is gonna be weaksauce." To me (correct me if I'm wrong) it seems more like bashing the console before everything about the tech is known and better games are launched, more than anything else. I'm not trying to get banned from the site, I'm not trying to argue, and I'm not trying to seem as if I'm bashing someone over the head. I'm just looking for a happy medium, I guess. :???:
     
  4. function

    function None functional
    Legend

    Joined:
    Mar 27, 2003
    Messages:
    5,854
    Likes Received:
    4,411
    Location:
    Wrong thread
    The 360 seems to be some way beyond DX9, and in some ways even beyond DX10.1 - as a console it can benefit from not being constrained by PC DX versions.

     
  5. Exophase

    Veteran

    Joined:
    Mar 25, 2010
    Messages:
    2,406
    Likes Received:
    430
    Location:
    Cleveland, OH
    RV740 is perhaps the most heavily speculated right now. It would probably not need a die shrink because it's already 40nm, and the Wii U's GPU is probably 40nm too. At first glance its die size of around 137mm^2 looks a little large when you consider that the Wii U's is ~156mm^2 and contains eDRAM. But it would save space having a 64-bit DDR3 controller vs. RV740's 128-bit GDDR5 controller (presumably the internal eDRAM bus doesn't take a lot of space), and it may save space with whatever CPU interface it uses instead of PCI-E. They could also have trimmed some things like FP64 support. The alleged clock speed (550MHz) ended up right in the range offered for RV740, so that fits too.
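As a rough illustration of how much narrower the rumored Wii U memory interface is than RV740's stock one, here is a back-of-envelope peak-bandwidth sketch. The DDR3-1600 figure for Wii U and the 3200 MT/s GDDR5 figure are assumptions for illustration, not confirmed specs:

```python
# Peak bandwidth = bus width (bits) x transfer rate / 8 bits-per-byte.
def peak_bandwidth_gbs(bus_width_bits, transfer_rate_mts):
    """Peak bandwidth in GB/s for a DRAM interface."""
    return bus_width_bits * transfer_rate_mts * 1e6 / 8 / 1e9

ddr3 = peak_bandwidth_gbs(64, 1600)    # assumed Wii U setup -> 12.8 GB/s
gddr5 = peak_bandwidth_gbs(128, 3200)  # RV740-style setup   -> 51.2 GB/s
print(f"64-bit DDR3-1600:   {ddr3:.1f} GB/s")
print(f"128-bit GDDR5-3200: {gddr5:.1f} GB/s")
```

The gap is what makes the on-die eDRAM so central to any guess about real-world performance.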

    Ultimately we don't really know how custom this core is.

    Plus there's the constraint that it's manufactured on the same processes MS and Sony are currently using for their consoles.
     
  6. Exophase

    Veteran

    Joined:
    Mar 25, 2010
    Messages:
    2,406
    Likes Received:
    430
    Location:
    Cleveland, OH
    And we're talking about a CPU, hell, a CPU with a more or less well-known instruction set. Its ability to vary the execution of those instructions is pretty limited, given the constraints of a known frequency, power budget, transistor count/manufacturing technology, and the reasonable assumption that there isn't an overwhelming difference in engineering talent and resources between its designers and the competition's (which are more or less dictated by the same company to begin with).
     
  7. Inuhanyou

    Veteran

    Joined:
    Dec 23, 2012
    Messages:
    1,305
    Likes Received:
    480
    Location:
    New Jersey, USA
    All that? With 360? Wow i'm learning things i didn't know already.

    Well yeah, I guess you can't really compare the 360's feature set to its API brother on PCs, but does that apply to all consoles, or is it just the 360 that was specially designed?

    I know that the PS3 and Wii U both use OpenGL, so it would not directly apply to them regardless.
     
  8. artstyledev

    Newcomer

    Joined:
    Dec 18, 2012
    Messages:
    45
    Likes Received:
    0
    Isn't that just the base of their GPU though? I've heard others say the same, as if they took the GPU base, didn't optimize it any further, and called it a day. We know the final dev kits didn't get to developers until June of this year; if that was the case, WTH was Nintendo doing all that time? Just asking a fair question: do we not think the GPU will have some features beyond DX10.1? Is that even a possibility at this point? I mean, look at the 3DS: we know from quotes that developers had Resident Evil 5 running (in some form) on the 3DS. The 3DS is nowhere close in power to the PS360, but it was the architecture of the handheld and its features that, for the most part, made it possible. Is that a fair assessment?
     
  9. artstyledev

    Newcomer

    Joined:
    Dec 18, 2012
    Messages:
    45
    Likes Received:
    0
    I'm going to keep posting, and I ask that you read my posts and tell me when one is a bad post. I'm trying not to get banned, so look out for me... let me know when I'm F'N up.
     
  10. Exophase

    Veteran

    Joined:
    Mar 25, 2010
    Messages:
    2,406
    Likes Received:
    430
    Location:
    Cleveland, OH
    AFAIK most PS3 games don't use OpenGL but something lower-level/more custom. But even if a game is using something like OpenGL, that doesn't say anything about what extensions or hard modifications it's using.

    And yes, this does apply to all consoles, or at least all I'm aware of. The big differentiator is that they get APIs/libraries tailored to their particular hardware, while standards like OpenGL and DirectX need to follow a lowest common denominator that all hardware can meet. This isn't that bad, since standard development has worked closely with hardware development, but there can still be a lot of abstraction cost.
     
  11. function

    function None functional
    Legend

    Joined:
    Mar 27, 2003
    Messages:
    5,854
    Likes Received:
    4,411
    Location:
    Wrong thread
    It's a bit unlikely, but I guess it could also be based on RV730 (aka Mario), maybe with 80 shaders / 8 TMUs ripped out and some eDRAM chucked in there instead, and made on NEC's 55nm process.... :eek:
     
  12. artstyledev

    Newcomer

    Joined:
    Dec 18, 2012
    Messages:
    45
    Likes Received:
    0

    The clock speed supposedly is 550MHz. Now, I've said this before (just my opinion): the E6760 would be like the missing puzzle piece in the Wii U. It fits PERFECTLY with Nintendo's philosophy. I'm not going to get into the whole email-confirmation thing, but I will say that GPU has everything Nintendo would want in a GPU: low power draw and a modern architecture capable of OpenGL 4.1, Shader Model 5.0 and DX11 (which doesn't matter, I know, it's a Microsoft thing). All in all it has all the bells and whistles. Take that 600MHz, clock it down to 550MHz, customize it for the Wii U... could that work? I mean, the final dev kits weren't in devs' hands until June-July 2012, and according to RUMOR, yes rumors, the final devkit was "more powerful" or "surprisingly powerful," I believe they said.
     
  13. artstyledev

    Newcomer

    Joined:
    Dec 18, 2012
    Messages:
    45
    Likes Received:
    0
    If you take the E6760 and downclock it from 600MHz to 550MHz (while customizing other parts), how would that affect the 35W power draw? What could it cut it down to?
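Nobody outside AMD can answer that exactly, but as a first-order sketch: dynamic power scales roughly linearly with frequency and with the square of voltage, so a frequency-only downclock buys only a few watts. The numbers below are purely illustrative, not a real TDP calculation:

```python
# Rough dynamic-power scaling: P_new ~ P_old * (f_new / f_old) * (V_new / V_old)^2.
def scaled_power(base_watts, base_mhz, new_mhz, v_ratio=1.0):
    """Estimate power after a clock (and optional voltage) change."""
    return base_watts * (new_mhz / base_mhz) * v_ratio ** 2

# Frequency-only downclock of the 35 W E6760 figure from 600 to 550 MHz:
print(f"{scaled_power(35, 600, 550):.1f} W")                # ~32.1 W
# With an assumed 5% voltage drop on top of the downclock:
print(f"{scaled_power(35, 600, 550, v_ratio=0.95):.1f} W")  # ~29.0 W
```

Real savings would also depend on static leakage and whatever units were trimmed in customization, which this simple model ignores.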
     
  14. Inuhanyou

    Veteran

    Joined:
    Dec 23, 2012
    Messages:
    1,305
    Likes Received:
    480
    Location:
    New Jersey, USA
    But doesn't what we've seen out of the retail units pretty much knock down the E6760 rumor? I mean, regardless of what AMD customer support says, don't we already have confirmation from Nintendo that it's a chip based off the R700 line? Why would they change midway to a completely different part?

    Again, the retail unit performance pretty much knocks down this kind of thing. That GPU is like 500-600 GFLOPS last I heard; it should not take much for that kind of power to show up in the Wii U's launch titles, but we don't see that.
     
  15. artstyledev

    Newcomer

    Joined:
    Dec 18, 2012
    Messages:
    45
    Likes Received:
    0
    Yeah, the R700 I know was supposedly used as the base GPU... there were also rumors around E3 this year, with people saying the Wii U GPU is called "GPU7," I believe. Now, I'm not one of these tech gurus, but many people believed that, maxed out, the GPU will be in the 500ish GFLOPS range. I'm not going to pretend I know if that will happen. But just based on specs alone, if they took the E6760 and modified it even more, could they not achieve less power draw than the 35W it already has, while keeping its modern features? Wouldn't that be EXACTLY what Nintendo would want the GPU in the Wii U to be?
     
  16. artstyledev

    Newcomer

    Joined:
    Dec 18, 2012
    Messages:
    45
    Likes Received:
    0
    In March, Nintendo signed a deal with Green Hills Software to use its software in the Wii U, to maximize what it can achieve. In May, it was announced that the first AMD embedded GPU to use Green Hills' software would be the E6760. Not saying that confirms anything, but it's a HELL of a coincidence.
     
  17. lwill

    Newcomer

    Joined:
    Apr 11, 2007
    Messages:
    110
    Likes Received:
    0
    That info was not a rumor. The documentation for the Wii U's devkit called it "GPU7" and said it was based on the R700 series. Now, it is possible that some features were modified so that it shares some characteristics with an E6760. However, we know that the GPU base started with the R700 and was modified from there.
     
  18. artstyledev

    Newcomer

    Joined:
    Dec 18, 2012
    Messages:
    45
    Likes Received:
    0
    The older dev kits for the Wii U were using GPUs that were basically the AMD Radeon HD 4850, which Nintendo was telling developers would be roughly equal in performance to the Wii U's final GPU. To put this into perspective though, the HD 4850 was never going to be what the final Wii U GPU used anyway, since it pulled between 130-240 watts and was built on the 55nm process. Also, some of the features of the 4850 are already 4 years old, with better and more efficient standards in use today (DirectX 11, for example).
     
  19. BobbleHead

    Newcomer

    Joined:
    Sep 24, 2002
    Messages:
    58
    Likes Received:
    2
    From the outside, GDDR3 and DDR3 behave very similarly, almost identically, from a logical perspective. At 800 MHz, both GDDR3 and DDR3 have a CL (CAS latency) of 11 cycles, while RCD (RAS-to-CAS delay) is 12 cycles for GDDR3 vs. 11 cycles for DDR3. So the effective latency of a read is almost identical: 11+12 vs. 11+11.
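Turning those cycle counts into wall-clock time at the 800 MHz IO clock makes the point concrete (a simplified model that only counts RCD + CL and ignores the other timing parameters):

```python
# Effective read latency in nanoseconds: (CL + RCD) cycles at the IO clock.
def read_latency_ns(cl_cycles, rcd_cycles, clock_mhz=800):
    """Wall-clock latency of a row-activate + read, in ns."""
    return (cl_cycles + rcd_cycles) / clock_mhz * 1000

print(read_latency_ns(11, 12))  # GDDR3: 23 cycles -> 28.75 ns
print(read_latency_ns(11, 11))  # DDR3:  22 cycles -> 27.5 ns
```

Barely more than a nanosecond apart, which is why the latency argument between the two is mostly a wash.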

    The main logical difference is that GDDR3 uses a 4-bit prefetch and DDR3 an 8-bit prefetch. This determines the minimum burst size you'll get for a read or write. It matters for pushing up to higher IO bandwidths but has almost nothing to do with latency. At their core, DRAMs have not been speeding up anywhere near as much as the IO speeds; to get more and more bandwidth, the interface between IO and core has been made wider and wider. GDDR3 at 800 MHz runs the DRAM core at 400 MHz and fetches 4 bits in parallel, giving you 1600 Mbit/sec per pin. DDR3 at 800 MHz runs the DRAM core at 200 MHz and fetches 8 bits in parallel, giving the same 1600 Mbit/sec. In both cases bits are fetched in parallel at a lower speed and then serialized at a higher speed, and the latency of making that fetch is roughly the same. This is also why CL in cycles goes up very quickly as IO speed goes up - the core is running much slower than the IO.
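The core-clock/prefetch trade described above can be sketched in a couple of lines; both interfaces land on the same per-pin data rate:

```python
# Per-pin data rate = core clock x bits prefetched in parallel per access.
def per_pin_rate_mbits(core_mhz, prefetch_bits):
    """Serialized per-pin data rate in Mbit/s."""
    return core_mhz * prefetch_bits

print(per_pin_rate_mbits(400, 4))  # GDDR3 @ 800 MHz IO: 400 MHz core x 4-bit -> 1600
print(per_pin_rate_mbits(200, 8))  # DDR3  @ 800 MHz IO: 200 MHz core x 8-bit -> 1600
```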

    The core structures of the DRAMs in all of these (DDR2, DDR3, DDR4, GDDR3, GDDR5) are the same. The differences are in the IO area. The wider the core interface, the higher you can push the IO speed. So DDR2 and GDDR3 are 4-bit, while DDR3 and GDDR5 are 8-bit. Then when you get to the electrical interface between two chips you expose a larger number of differences. GDDR3 uses a 1.8-2.0V IO with pull-up termination at the end point and a pseudo open drain style of signaling. It also uses single ended uni-directional strobes. It is a good interface for a controller chip and a couple DRAMs. DDR3 uses 1.35-1.5V for IO with mid-rail termination at the end points with termination turned on/off by a control pin. It has bi-directional differential strobes. It is better suited for interfaces with more DRAMs (like a DIMM).

    GDDR3 and GDDR5 use signaling designed to go a lot faster. They wind up limited by both the DRAM core speed and the IO speed. DDR2 and DDR3 use signaling designed to handle more loads. They wind up limited by IO speed but not by DRAM core speed.

    At this point if you are making something at the upper end of the GDDR3 speed range there is almost no reason to use GDDR3 over DDR3. They will have very similar performance and latency. Since GDDR3 is being phased out it is relatively expensive. DDR3 is available in huge quantities because it is the PC main memory, and this drives down prices. The one advantage GDDR3 has is that it comes in x32 packages. If you wanted to keep the PCB small, you might opt for GDDR3. This also works against you because the core of the DRAM remains the same size. 2 x16 DDR3 modules can give you twice the memory of 1 x32 GDDR3 module. If there were no new Xbox coming out in the next couple years, you would definitely see an updated 360 using DDR3 rather than GDDR3 just because of the relative price of the DRAMs.
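The packaging trade-off in that last paragraph can be sketched with simple arithmetic. The 1 Gbit-per-die density is an assumed figure purely to show the ratio; the point is that the die is the same size either way, so narrower packages on the same bus width mean more chips and more total memory:

```python
# Building a 64-bit bus from x32 vs. x16 DRAM packages, same die density.
BUS_WIDTH = 64   # bits
DIE_MBIT = 1024  # assumed 1 Gbit per die, identical for both package types

def config(pins_per_chip):
    """Return (package count, total capacity in MB) for a full-width bus."""
    chips = BUS_WIDTH // pins_per_chip
    total_mb = chips * DIE_MBIT // 8
    return chips, total_mb

print("GDDR3 x32:", config(32))  # (2, 256): fewer packages, smaller PCB
print("DDR3  x16:", config(16))  # (4, 512): twice the chips, twice the memory
```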
     
    Shoujoboy likes this.
  20. Inuhanyou

    Veteran

    Joined:
    Dec 23, 2012
    Messages:
    1,305
    Likes Received:
    480
    Location:
    New Jersey, USA

    Is that actually confirmed or is that still speculation?
     
