Wii U hardware discussion and investigation *rename

You got this wrong. It's 16 bits on 16 single-ended data pins per chip. That they use DDR to transfer data doesn't factor in.
And you need quite a few more connections to a DRAM chip than just the 16 data lines. You need a clock (some memory types like GDDR5 even use two different ones, or differentially signalled ones [needing two pins]), some pins to transfer commands, and supply voltage and ground connections, too.
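To put rough numbers on that, here is a minimal sketch of the bus-width/bandwidth arithmetic, assuming the commonly reported Wii U memory configuration of four x16 DDR3 chips at DDR3-1600 speeds (those figures are assumptions for illustration, not something established in this thread):

```python
# Rough bus-width and peak-bandwidth arithmetic for a DDR3 setup.
# Assumed configuration (for illustration): 4 chips, 16 data pins each, DDR3-1600.
chips = 4
data_pins_per_chip = 16                    # x16 part: 16 single-ended data pins
io_clock_hz = 800e6                        # DDR3-1600 I/O clock
transfers_per_clock = 2                    # "double data rate": 2 transfers per clock

bus_width_bits = chips * data_pins_per_chip
peak_bandwidth_bytes = bus_width_bits / 8 * io_clock_hz * transfers_per_clock

print(f"bus width: {bus_width_bits} bits")                       # 64 bits
print(f"peak bandwidth: {peak_bandwidth_bytes / 1e9:.1f} GB/s")  # ~12.8 GB/s
```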

Hi Gipsel! I was reading some previous posts of yours in this thread (very informative) and have a question for you, since you seem to be pretty familiar w/ the R700 ISA and whatnot. I was doing some measurements of the SRAM banks on Latte. The register banks in the SIMD blocks appear to be almost exactly 2x as long as the banks which must be 2kB according to Marcan's analysis here: http://marcansoft.com/transf/latte_annotated.jpg

Indeed, if you compare the SIMD registers to a slice of the SRAM in the 1 MB texture cache, things look a bit different, which leads me to believe a higher density was used for those larger pools. If those banks in the SIMD blocks are only 4kB, is there any way that this chip contains more than 160 shaders? Would a wavefront size of 32 affect this at all? As you and others have noted, 55nm would make sense of a lot of this, but I also did some measurements of the eDRAM and it seems to line up w/ Renesas' specification for their 40nm cells. I'm really scratching my head here.

I did too; I actually solved that one a while ago though.

The part designator for the Wii's RAM chip was 'U3'. It happens to be the exact same RAM chip as the 360 uses.

So... in many parts databases that can be googled, 'wii u ram' brings up the Wii part as 'wii u3 ram', with the same part number used in the 360's RAM.

[Image: Samsung K4J52324QC-TP RAM chip, the Wii's 'U3' part]

Wow! That's pretty hilarious. I can see now how that would mislead those folks over on thewiiu message boards.
 
While your criticisms are harsh sounding, I have to agree with you. And you didn't even mention the obvious on the CPU front. IBM's own 476fp would have been a fitting choice.
I had a conversation on the topic with Exophase pages ago; I used to think as you do, and he proved me wrong.
Actually, as far as CPU cores are concerned there was not an obvious good choice for Nintendo.
None of the CPUs IBM offers are really good targets (or easy targets for devs), AMD only had Bobcat (not that bad though), and in the ARM realm the A9 was the only choice; nothing significantly better than what they chose.
Though BC kind of sealed the deal, and not only with regard to the CPU cores, I'm still curious about how the Wii's different memory pools are emulated/included within the Wii U hardware.
 
I had a conversation on the topic with Exophase pages ago; I used to think as you do, and I've been proven wrong.
Actually, as far as CPU cores are concerned there was not an obvious good choice for Nintendo.
None of the CPUs IBM offers are really good targets (or easy targets for devs), AMD only had Bobcat (not that bad though), and in the ARM realm the A9 was the only choice; nothing significantly better than what they chose.
Though BC kind of sealed the deal, and not only with regard to the CPU cores, I'm still curious about how the Wii's different memory pools are emulated/included within the Wii U hardware.

Really? I feel like that may be the most obvious thing about Wii U's hardware. The 32 MB emulates the 24 MB on the Wii. The 1 MB texture cache is on there (only it's real SRAM this time) and the 2 MB eFB is implemented as eDRAM. Actually, something stumped me in the neogaf thread a few days back that maybe somebody here has insight on. How many bits can each column of eDRAM transfer? If you just count across one of the modules that comprises the 2 MB pool, there are 64 across (actually 66, but ECC may explain that). That would seem to point to a 64-bit bus or at least an even multiple of it. However, Flipper/Hollywood's eFB is reportedly on a 384-bit bus (or at least it works out that way given the data rate). I'm wondering how that works out.

I'll have to read that conversation re: the CPU. I know darkblu has done some tests showing Broadway to actually be pretty capable for what it is. Still, they could have easily gotten away w/ 4 or 6 cores. I think Nintendo just want to keep things simple for the sake of their own programmers. The rest of the world seems to have gotten over the hurdles of multithreaded programming years ago.
 
To tell the truth, while I'm curious I did not really follow the discussion either, lazy me :oops:
Thanks for the details :)

As for Nintendo's strategy, it makes sense, or rather it made sense, more precisely. They can't really make money with the hardware and be competitive. At the very least, if they planned to stay in the grey they would need, imho, proper designs, no bias (as pointed out by Exophase) in how they select their hardware partners, etc. In the specific Wii U case, among other things, the timing is off; I really believe that fall 2011 should have been their target.

Anyway it seems that nothing is moving at the top of Nintendo; I don't expect the situation to get significantly better anytime soon wrt Wii U sales, but I expect nothing to happen at the management level either.
Imo it is a badly managed company.
 
Even looking outside of the embedded realm, the good old Bobcat would not have been that terrible: a dual core + 1 SIMD is 75 mm^2 and 18 Watts @ 1.6 GHz.
I've yet to see a coherent argument for why the above would have been a better choice than the present Espresso setup. People just keep suggesting alternatives to what Nintendo did for no other reason than believing they could do better than Nintendo.
 
I've yet to see a coherent argument for why the above would have been a better choice than the present Espresso setup. People just keep suggesting alternatives to what Nintendo did for no other reason than believing they could do better than Nintendo.

I'd imagine the SIMD would be a fair ways better than paired singles. Anyone have a benchmark of any kind on the Nintendofied 750's paired-singles SIMD?

I'm pretty sure Espresso would be rather capable at general-purpose code, looking at cache sizes; it has two FXs and a GX. Those are quite a ways better than the CX/CL that Broadway came from, and Broadway actually surprised me several times: Excite Truck's damage system (I'm guessing tessellation and dynamically created displacement maps), Wii Sports Resort's/Skyward Sword's ability to create new assets on the fly dependent on user input. Crystal Bearers' NPC AI was most amusing.

I think Espresso will be all right for games. You know, provided the Wii U ever gets any.
 
With vsync each frame is displayed for 1/30 or 1/20 of a second. There's no alternative.

....
Don't worry, I get how double buffering + v-sync works when the system renders directly to one of those buffers. The GPU could be idling for nearly the complete TV refresh period. I just assumed that both the XBOX and the Wii U render in eDRAM and blit to a double-buffered frontend so holdups are avoided (you can always blit to the buffer that isn't displayed and instruct the RAMDAC to swap after the next v-sync, right?).

I watched the video again and I have to admit the XBOX sticks to 20 or 30 fps very well. So I give in, you may be right. But there are situations where it runs in between. This can only occur when the game switches, or perhaps even oscillates, between 20 and 30 fps (50 ms and 33 ms) during the measurement interval DF uses. At 2:45 it even stays steady at 22 fps for a few secs, which seems odd since the camera doesn't really move and it is an animated scene on top of that. Hence I didn't consider double buffering in the first place.

Hey, it's at least as scientific as your frame rate / GPU power analysis.
At least I gave arguments for my analysis. Which made you point out the vsync thing.

First of all I wasn't talking about the car, I was talking about the road reflections and made sure to point out I was talking about the road in all my posts.
Does the car matter? The point is that it needs maps for the sides and perhaps back too. You don't have those already.

Second, I didn't assume what the techniques were, I said what I thought could be then done then asked you for the specific information you've seen on how the developers achieved their effects.
Ok, my bad.

Third, I didn't say you'd trace the normal to calculate visibility (where did you get this from?) - I said you could use the "surface normals that you'd already be calculating" in generating the reflections.
Because you expected reflections not to be that expensive and you pointed out 'use the normal' so explicitly. To me it sounded like it's a huge advantage to already have the normals and that nothing else has to be done.

The point being that you'd already have calculated the surface normals (and could possibly have stored them), and would already have the colour and z buffers. I should hope that a GPU would be pretty damn quick at testing a ray against a Z value. It's a fraction of the work a proper raytracer needs to do.
Testing against a single Z is damn quick. The point is that, to my understanding, the ray is traced in steps. The test must be performed for every step, which incorporates calculating the texture coord, sampling depth, sampling color, compare and set. I consider that to be quite a hit, depending on the maximum number of steps taken. With traditional reflection mapping it is sufficient to project the reflected vector onto the reflection map assuming a fixed distance and just sample the color. Do you know of any more sophisticated ways to trace a ray, using less computing power?
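For context, here is a minimal sketch of the kind of per-pixel screen-space ray march being described, written as plain Python pseudocode. The step count, thickness threshold and the project/sample_depth/sample_color helpers are all assumptions for illustration, not anything taken from the game:

```python
import numpy as np

def reflect(d, n):
    """Mirror direction d about normal n: r = d - 2*(d.n)*n."""
    return d - 2.0 * np.dot(d, n) * n

def trace_reflection(pos, normal, view_dir, project, sample_depth, sample_color,
                     max_steps=32, step_size=0.1, thickness=0.05):
    """March the reflected ray through view space, testing it against the depth
    buffer at every step. Each iteration pays for a position update, a projection
    to texture coordinates, a depth sample, a compare and (on a hit) a colour
    sample -- the per-step cost discussed above."""
    ray_dir = reflect(np.asarray(view_dir, float), np.asarray(normal, float))
    p = np.asarray(pos, dtype=float)
    for _ in range(max_steps):
        p = p + ray_dir * step_size                      # advance the ray one step
        uv, ray_depth = project(p)                       # project to screen space
        if not (0.0 <= uv[0] <= 1.0 and 0.0 <= uv[1] <= 1.0):
            return None                                  # ray left the screen: no hit
        scene_depth = sample_depth(uv)
        if scene_depth <= ray_depth < scene_depth + thickness:
            return sample_color(uv)                      # hit: reuse the rendered colour
    return None                                          # no hit within the step budget
```

A fixed-distance reflection map, by contrast, does a single projection and a single colour sample per pixel, which is why the step loop above is the expensive part.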
 
Nope, it's 4. With 8, maybe there would have been a chance of Nintendo saying, "Hey, maybe we should put this on a 128-bit bus!"
Ah. Yes, we can't have that, can we? :LOL: Thanks for the infos, man!

How many bits can each column of eDRAM transfer? If you just count across one of the modules that comprises the 2 MB pool, there are 64 across (actually 66, but ECC may explain that). That would seem to point to a 64 bit bus or at least an even multiple of it.
There could be multiple banks of 64 bits each that are accessed in parallel. If wuu edram is just 64 bits wide it would be absolutely unique in that regard, as all other consoles have featured much wider interfaces (in some cases much, much wider; PS2's aggregate 2.5 kbit wide read/write/texture ports spring to mind). It's also probably unlikely to only be 64 bits if wuu compatibility mode uses edram to emulate wii main memory. You'd have the CPU and GPU butting heads with each other when accessing memory, you'd need sufficient bandwidth to feed both and then some, and that's not likely to happen with just 64 bits @ 550MHz.
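For a sense of scale, here is a minimal back-of-the-envelope sketch of what a 64-bit interface at the quoted GPU clock would give; the assumption of one transfer per clock is mine for illustration (eDRAM macros are not necessarily double-pumped):

```python
# Hypothetical bandwidth of a 64-bit eDRAM interface at the quoted GPU clock.
# Assumption for illustration: one transfer per clock (no DDR signalling).
width_bits = 64
clock_hz = 550e6
transfers_per_clock = 1

bandwidth_bytes = width_bits / 8 * clock_hz * transfers_per_clock
print(f"{bandwidth_bytes / 1e9:.1f} GB/s")   # ~4.4 GB/s -- not much to split
                                             # between a CPU and a GPU in Wii mode
```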

I know darkblu has done some tests showing Broadway to actually be pretty capable for what it is.
Maybe so, but it's still pretty damn startling that Nintendo would go with a CPU that doesn't have any decent SIMD unit integrated. That's just unforgivable in this day and age really.
 
No way, I'm kind of defending the Wii U for once!

(Though not in terms of absolute performance. Hell no. It's awful.)

In the Wii U's power consumption defence, we don't know what the AC adapter takes, a 6X BR drive could maybe take a handful of Watts, it's not using low-power DDR3, and the Wuublet wifi is likely to almost always be powered up and transmitting hard. Plus aren't most high performance mobile devices on 32/28nm now? Tegra 3 is 40nm iirc, and doesn't seem like hot stuff tbh.

So basically what you're saying is "it's pretty good if you consider all these ways in which it's pretty bad." Using older process technology, not using DDR3L, potentially using a low-efficiency AC adapter (we don't really know though, could be > 90%): these are all areas where Nintendo could have done better, and therefore it didn't do a very impressive job of bringing power consumption down.

I get what you're trying to say, that the efficiency of the GPU design itself seems like it's pretty decent given the constraints of the manufacturing and all the other design decisions. It's hard to really get a good handle on this, but I could see other contemporary designs still beating it by a fair margin under similar constraints.

Can't say about the CPU, except that at the system power consumption levels we're looking at, matching what Wii U does with other designs would put the power contribution in the noise. Whether or not Wii U's CPU contribution is also in the noise is impossible to know.

None of the CPUs IBM offers are really good targets (or easy targets for devs), AMD only had Bobcat (not that bad though), and in the ARM realm the A9 was the only choice; nothing significantly better than what they chose.

The first A15 device came out about a month before the Wii U; if they were especially aggressive it would probably have been possible to use it.
 
I've yet to see a coherent argument for why the above would have been a better choice than the present Espresso setup. People just keep suggesting alternatives to what Nintendo did for no other reason than believing they could do better than Nintendo.
That is a quite unfair statement; Nintendo did what it did mostly for BC.
As for doing better than "Nintendo", I would think the actual work on silicon was, for the most part, done by AMD, IBM and Renesas.
You seem to think that I'm a Nintendo hater for some reason, but it is not true, nor am I a lover. Especially now that it seems clear that both Sony and MSFT have pretty expensive designs, with a lesser path toward price reduction than during the previous generation, I really think that Nintendo wasted an opportunity, if not to have a Wii type of success, at least to do quite well. I still think the WiiUmote is a great idea.
It is not about me or other people doing "better than Nintendo", it is about other companies doing better with regard to the hardware they put together at a given price. I gave that Onda tablet as a reference because there we have a company that probably operates on pretty low margins and still manages to ship quite competitive products.

Now if you think that with 200 mm^2 of silicon and a 30/35 Watt power budget companies could not have done better, significantly better, I won't agree.
If the last Criterion game is any clue the system can keep up with the PS360, doing better in some places, but we are still speaking of differences that most people would not notice without paying extreme attention or relying on video editing tools. Pretty much like the difference between the PS3 and the 360, there is not that much to "see" at this point; comparison articles are still popular, though I wonder why, as I would bet that most people could not tell the difference. The Wii U is there alongside the 360 and the PS3, but it shipped 7 years after the 360 and 6 years after the PS3. The latter both have a price advantage (set to grow) and a bigger library of (HD) games; imo that is not good and it shows.

Anyway, behind the hardware choices there are foremost business decisions; discussing the prowess of Nintendo's engineers, or of the engineering teams at IBM, AMD and Renesas, is not relevant. The issue is on the business side/model and thus should not be discussed here.
 
Ah. Yes, we can't have that, can we? :LOL: Thanks for the infos, man!


There could be multiple banks of 64 bits each that are accessed in parallel. If wuu edram is just 64 bits wide it would be absolutely unique in that regard, as all other consoles have featured much wider interfaces (in some cases much, much wider; PS2's aggregate 2.5 kbit wide read/write/texture ports spring to mind). It's also probably unlikely to only be 64 bits if wuu compatibility mode uses edram to emulate wii main memory. You'd have the CPU and GPU butting heads with each other when accessing memory, you'd need sufficient bandwidth to feed both and then some, and that's not likely to happen with just 64 bits @ 550MHz.


Maybe so, but it's still pretty damn startling that Nintendo would go with a CPU that doesn't have any decent SIMD unit integrated. That's just unforgivable in this day and age really.

Np on the DDR3 infos.

Anyway, I actually meant to imply what you said about the multiple parallel banks. There are 4 modules that make up the 2 MB eFB, each 64 across. So having a 256-bit bus for the whole frame buffer makes sense to me. A 384-bit bus with that config just seems strange. That means each column is transferring a bit and a half of data?

Yeah, I'm surprised and disappointed they stuck w/ paired singles but darkblu's tests are even more surprising. If anything, the biggest drawback seems to be that they couldn't get the clocks higher. And I still think an extra core (or 3) wouldn't have been completely unreasonable.
 
That is a quite unfair statement; Nintendo did what it did, I would think, mostly for BC.
I sense a misunderstanding in the making here. So let's backtrack a bit.

You: Nintendo could have used a Bobcat.
Me: Why? Outside of 'Just because I think so.'
You: Asking why is unfair. Nintendo choice was BC driven.

Well, guess what - a dual-core 1.6GHz Bobcat could have given Nintendo little (if anything) in performance over their actual Espresso setup.

At the absolutely rudimentary task of matrix multiplication used in the test I linked a few posts above, a 1243MHz Broadway core should (as in: if we scaled clocks accordingly) perform virtually identically to a 1.6GHz Bobcat core. In the context of this task, two Bobcats would have hardly been a better choice than three Broadways. Actually, chances are the current Espresso setup would plain outperform your dual-core Bobcat setup 3:2. This is before we even consider BC.
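To spell out that clock-scaling argument, here is a tiny sketch; the per-clock throughput value is a placeholder, not darkblu's measured result:

```python
# Hypothetical illustration of the clock-scaling argument above.
# 'per_mhz' is a placeholder for the measured matrix-mul throughput per MHz of a
# Broadway core; the actual measured number is not reproduced here.
per_mhz = 1.0

broadway_core = per_mhz * 1243         # one Espresso core at ~1.24 GHz
# The claim: a 1.6 GHz Bobcat core lands at roughly the same absolute score,
# i.e. its per-clock throughput at this task is about 1243/1600 of Broadway's.
bobcat_core = broadway_core

espresso_total = 3 * broadway_core     # three Espresso cores
bobcat_total = 2 * bobcat_core         # two Bobcat cores
print(espresso_total / bobcat_total)   # 1.5 -> the rough 3:2 advantage mentioned above
```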

As to whether I think you're a WiiU hater - I have no idea what left you with that impression. Regardless, accept my sincere apologies if something I said sounded like I thought you were a hater.

Yeah, I'm surprised and disappointed they stuck w/ paired singles but darkblu's tests are even more surprising. If anything, the biggest drawback seems to be that they couldn't get the clocks higher. And I still think an extra core (or 3) wouldn't have been completely unreasonable.
Fun fact: an IBM presentation dating back to the Gekko days plainly states that: '..in terms of fp/simd, with Gekko Nintendo got more than what they bargained for.'

Indeed, a couple more cores in there would have produced quite an interesting machine.

That is quite a thorough comparison. It's better than I would have thought. It really is.
Thanks. It's just a professional deformation of mine where I don't take assumptions for facts. If I can verify something I go out and verify it.
 
The FX and GX actually had some performance enhancements beyond bigger caches and higher clock frequencies.

It could be wise to consider that some of them may have come to Espresso along with the FX/GX cache sizes and clock increases, rather than maintaining the mindset of an 'overclocked tri-core Broadway'.
 
The first A15 device came out about a month before the Wii U; if they were especially aggressive it would probably have been possible to use it.
Definitely a tempting choice, maybe along with an attempt to implement, why not, the latest rendition of the ARM Mali GPU (T678), which might push a sane amount of FLOPS within Nintendo's power budget.
An all-ARM solution should not be trouble for the engineering teams to put together, with advanced GPGPU capabilities and efficient use of bandwidth (TBDR).

Though that would indeed be a pretty risky bet wrt the timeline, too prone to delays, etc.
Ultimately I think that Nintendo should have aimed for 2011; 40nm processes were here but not the architectures we are discussing. To their credit, even a few years ago (at design time) there were not much better choices, ceteris paribus (process, power budget, silicon budget/BOM).
-----------------------------

Darkblu, it is OK, I read too much into your post. Kudos for answering the way you did.
More to the point, I thought it was clear in my post that Nintendo did not have an obviously better choice than what they chose for the CPU.
I just discarded a big part of what I was writing because it was uselessly long and unfocused.
I will try to sum up the issues I have with the system in a better way:
1) BC should have been addressed in another manner. Looking at the rumor about MSFT's new 360 and at Nintendo selling a whole system (the Wii Mini) @ $99 (for a profit), I think that they could have offered an add-on card for those really interested in BC (priced competitively and quite possibly making a profit). On top of that, when I see that "Ocarina of Time" will be released in HD, it gets me wondering about what I think is a pretty paradoxical stance. /Back to business/business-model considerations.

2) I think that within their budget (encompassing all aspects), they made the wrong trade-offs.
They should have gone with the process that offers the highest density (TSMC) and passed on embedded DRAM. I don't think I'm too far off if I state that they could have put 6 Espresso cores, a smaller shared L2 cache and a 6-SIMD GPU (VLIW4, which would end up around the same size as Redwood) on (~) the same die size as their GPU (~150 mm^2).

3) They could not pass on a 128-bit bus; otherwise they may have had to deal with situations where the system is more bandwidth constrained than the system as it is. For Nintendo's own games I wonder to what extent it would have been an issue; looking at AMD APUs it is not that bad either.

Side note) The main issue is a pathological case of penny pinching. They used to make money on hardware; now it is tougher, the Wii U doesn't (supposedly) sell for a profit, and they will have to drop the price. The same happened with the 3DS. Their penny pinching costs them money. Some bias in how they selected their hardware partners has had a pretty terrible impact on their designs.
Now they think they are OK with the 3DS, that they make some money overall, but they are wrong. I would have bought the 3DS if the hardware was not that sucky at the price they sell it (at release and now), and looking at low-end tablets should be a wake-up call for investors (though Pachter gives pretty good explanations about what is going on (or not going on) on that front).

4) Back to the Wii U, I think that they needed 2 GB of RAM for the games alone; if they wanted as much as 1 GB for the OS/services they needed to put an extra GB of RAM in there. The cost could have been covered (completely or partially) by not needing an extra chip (the CPU) and the interposer (production cost, testing, etc.).
Wrt the RAM speed, I'm not sure which speed grade they selected, but I suspect that in the long run it is another pathological case of penny pinching / it will ultimately cost them more...

5) Then there is the power budget. I'm aware that the extra CPU cores (though I suspect that those cores are extremely power efficient / a minimal issue) and the extra SIMDs / bigger GPU (for the ref, Redwood @ 650 MHz with 1 GB of DDR3 burns 43 Watts alone) would have burnt more power.
They had room to lower the GPU clock and possibly get where they wanted to be, but that is not the issue; it is just another case of "penny pinching" (/a bad, short-sighted decision). Upping the power consumption a tad would have cost them next to nothing (a few dB of noise, but those are "free" + penny pinching).

6) I think that the lesser SKU needs more flash; I would think that Sony made the right choice with 12/16 GB, which allows a few partial installs + updates, the OS, etc. 8 GB seems short. I do get that Nintendo may not want to do that for its titles, but others need to.

Final point) BC (for the sake of surfing the HD re-release wave :rolleyes: ) and penny pinching are going to cost Nintendo more money than a more properly designed system would. BC makes them no money, nor does it get them extra customers, but it hindered their design choices. Penny pinching is actually going to cost them a lot of money: they sell in the "grey", they will have to significantly lower the system price, and competition is coming (from both ends, next gen and lower-cost current gen). That will force them to remain in the grey, and the effect on the user base and associated revenues (their own software sales and royalties from third parties) is self-explanatory at this point.

Final point 2.0) Another thing is that all of that was doable and should have been done in time for fall 2011; for a 2012 release I think that not going for 28nm was a mistake (/I changed my pov on the matter vs my previous pov).
 
I dug a tad (out of boredom / a personal issue) into what was doable @ 28nm.

I found the HD 8000M, a pretty interesting line; the one matching what could have been Nintendo's requirements is the HD 8730M (650 MHz, 6 CUs, 8 ROPs, 3 GB of DDR3).
I found different data for this chip, named Mars: 76.5 mm^2, <1 billion transistors.

Quite a beast. You could consider tying it to 2 GB of DDR3 as in the HD 8730M, but I think that with such a tiny chip designing a mini Durango is far from crazy.

Some people's estimates put the 32 MB of scratchpad in Durango at ~60 mm^2 (using figures measured in GPUs + a 50% overhead).
That puts 8 MB (enough for a 720p render target) @ ~15 mm^2.
I would favor an amount that allows a light G-buffer to fit @ 720p; that may require north of 10 MB (just above what the 360 has), which would be around 20 mm^2.

For the CPU it is ~28 mm^2 @ 45nm; I would think that 6 cores + the matching amount of cache would not get bigger @ 28nm (actually I think it would be tinier, and if not I'm sure a shared L2, though tinier, would serve the system better; anyway I stick to 28 mm^2 as a conservative measure).

Overall, Nintendo being Nintendo, and with some rumors hinting at 720p titles on the next-gen systems, I think that 8 MB would serve its purposes; as the system would have been playing catch-up with its bigger siblings, it is likely that devs would have ended up toying with the resolution.

That would put the system at: 77 + 15 + 28 => 120 mm^2, in the ballpark of Cape Verde.
For the ref, Cape Verde parts with 2 GB of DDR3 are available at Newegg for $99 (and everybody gets their share, from AMD to, for example, Sapphire to Newegg...).
The chip described above would likely burn less power than a Cape Verde (650 MHz vs 800 MHz, 6 CUs vs 8, 8 ROPs vs 16); the HD 7750 with GDDR5 burns only 45 Watts. That chip might have met Nintendo's requirements, and so it would have needed a lesser cooling solution than what is used on Cape Verde cards; thanks to the scratchpad memory they might have used cheaper/slower DDR3 (than what those cards use).
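For what it's worth, a quick sketch of the back-of-the-envelope area arithmetic above; all the inputs are this post's own rough estimates, not measured figures:

```python
# Rough die-area estimate for the hypothetical 28nm design discussed above.
# All numbers are the post's rough estimates, not measured data.
gpu_mm2 = 76.5                                        # "Mars"-like 6-CU GPU
scratchpad_32mb_mm2 = 60.0                            # estimate quoted for Durango's 32 MB
scratchpad_8mb_mm2 = scratchpad_32mb_mm2 * 8 / 32     # ~15 mm^2 for 8 MB
cpu_mm2 = 28.0                                        # 6 Espresso-class cores + cache (conservative)

total_mm2 = gpu_mm2 + scratchpad_8mb_mm2 + cpu_mm2
print(f"{total_mm2:.0f} mm^2")                        # ~120 mm^2, Cape Verde ballpark
```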

Now I think even less of Nintendo's design choices... :(
 
I wanted to add something but it was too late yesterday. I am working at getting the Nintendo lovers to hate me, lol; just kidding.

More to the point, and if people with actual experience in the field can elaborate they are welcome :)

I have read quite a few times that the Wii U may also have had a lesser R&D budget than its Sony and MSFT counterparts; I have come to really question that.

When it is all said and done, both Orbis and Durango will have been put together using existing blocks of logic on the lithography they use, be it Jaguar cores or GCN building blocks (shader cores, ROPs, etc.). It is more an advanced work of integration.

On the other hand, one might very well qualify the Wii U as a fully custom design.
Be it for the CPU or the GPU, there were no existing building blocks.
IBM went and implemented and tweaked Broadway so it could be an SMP CPU (I guess tweaking the L2 interface) from scratch on the 45nm process they use. I would guess that is costlier than using already-existing blocks (be it POWER7, PowerPC A2 or PPC 476, putting aside the merits of those designs).
The same applies to the GPU: they got AMD to implement the RV7xx/Evergreen architecture on Renesas' process, and AMD did not have building blocks available on that process.

So to me it actually looks like a pretty significant R&D effort. Actually, whereas, as Nvidia pointed out in a recent presentation, the cost of implementing a piece of logic on newer processes keeps climbing, I now question whether Nintendo made any savings by sticking to 40/45nm processes, as they did not leverage any existing "blocks" on the lithography they used. It is pretty much a completely custom design indeed, another downfall of BC.

If they had gone with something like I describe, and whereas implementing the CPU on a 28nm process may have cost more than on a 40/45nm one, they would at least have benefited from existing building blocks from AMD (the whole GPU). Looking at the performance they would have gained within their power budget, I think that it was a really bad decision.
-------------------------------------------------------

Anyway let's move forward; I think that the Wii U is going to be EOL'd pretty early. I'm not sure what is going on on the lithography front; it seems that a lot of foundries are set to move ASAP to 14nm + FinFET and that 22/20nm processes could be either short-lived or even cancelled (I guess it depends on how well the tests are going on those 14nm processes).

Nintendo has a lot of money and they won't lose much on the Wii U. IF something were to happen within the management, I think that either 22nm or 14nm processes would offer them a lot of options to come up with a system both cheaper than Sony's and MSFT's and possibly better.
To a lesser extent it is a bit like comparing what the 360 does @ 45nm, wrt both perf per Watt and perf per mm^2, to what can be done from scratch on this generation of lithography (the Wii U, whatever I think about it, still does what it does for 35 Watts and has a smaller silicon budget than Valhalla).

So yes, if something happens at Nintendo headquarters (not a given) and they take the "right decisions" and select the proper partners, I could see Nintendo EOL the Wii U as soon as those new processes are available (be it hybrid 22/14nm from GF or 16nm from TSMC).
Within an area ~ Cape Verde's, and with technology like interposers available, they should be able to compete for extremely cheap, that is if they decide to do so...

I think there is no gloom and doom; the company has a lot of money, they just need to adapt their strategy. Making money (a significant amount of money) on the hardware is no longer an option; they have to move, imho, to cheap, affordable devices, pretty state of the art for their BOM, set to be sold in the grey. Either way they could definitely make more money as a software provider.
 
Does nobody else have something to add on the raycasting thing (yeah, I'm still on it)? IMO it's the only way to reflect non-distant objects correctly and I couldn't find anything faster on the Internet. In fact, raycasting seems to be the least demanding solution, as you can balance the number of samples taken against the quality required.

some paper on the subject

If you look at the reflection of the trees on the right side of the pics, it's quite clear the Wii U has sharper quality. Or is there a better explanation for this difference?

[two comparison screenshots]


BTW: The holes in the trees' leaves aren't there when using the bumper cam instead of the chase cam, which IMO is because of the raycasting process' coarseness.

Anyway let's move forward; I think that the Wii U is going to be EOL'd pretty early.

People said the same about the Wii but in the end it sold so well that it never happened. Apparently they don't target hardware freaks but the remainder instead. They figure that not everybody needs a 500 HP car... Why do you think it will happen this time? Would you buy a Wii U if it was whatever times better (and likely more expensive) than a PS4?
 