Can the Wii achieve the same level as the Xbox's Doom 3?

We developed several new Wii hardware specific tricks and techniques like Dynametric Light Tightening, Reframbriance, and ¡Approxiflexion! that contribute to The Conduit's performance and unique graphics style.
I do wonder how much of this is a joke. Looking at their back catalog I don't see any reason to believe that HVS is coming up with revolutionary new graphics techniques for Wii based on their knowledge of the physics of light.

If it was Valve saying it and the names didn't sound like intentional faux tech jargon I'd be more inclined to believe that these were important techniques... but as it stands they sound more like engine-specific optimizations and they're just trying to ride the "we're awesome with the Wii hardware" wave as long as they can for marketing purposes.
 
Hi, just new here. I would like to point out that Nintendo has some patents online that describe some of this graphics stuff. Not that they give me full knowledge... Just go to freepatentsonline and search for GX and Nintendo. Google has them too. For example, this page actually mentions dot3 calculations done by the shader unit: http://www.google.com/patents?id=S-l-AAAAEBAJ&pg=PA19&source=gbs_selected_pages&cad=1_1#PPA35,M1.

Besides that, I guess you could use multiple techniques to perform a normal mapping effect. For example, the Wii can calculate per-vertex bump coordinates for emboss mapping (see the emboss mapping patent). I guess the hardware interpolates the vertex-based values per pixel when drawing a poly. You need two texcoords, where the first represents the actual texture coordinate and the second is used to calculate the bump coordinates from. The calculation is simply TC1 = TC0 + calculatedBumpOffset. We are interested in calculatedBumpOffset, which basically represents the position of your light relative to the polygon's surface in 2D coordinates. More specifically, they are linear in the sine of the angle between the light and the surface normal along the tangent-space t and s axes.
Now, the x and y coordinates in a tangent-space normal map basically represent the same thing as the light position, except in this case it is the sine of the angle between the texel's facing and the surface normal. So adding them (perhaps negated first) to the bump offset gives a texture coordinate that is (0,0) when the texel directly faces the light. It results in, for example, (-1, 0) when the texel faces completely away in the t direction. This would cost a single texture coordinate generation stage, I guess. So this coordinate can be used to look up a light map that contains intensity values, which can be multiplied with the material texture. And voila, we get the same "overlit" lighting. If the same can be done for a second light located at the position of the camera, it would be possible to add per-pixel reflection too.

Besides that, you could do it in the shader as well (as other people mentioned before). I'm just curious whether the hardware provides per-pixel normalized light direction vectors. I would guess it doesn't, but the same bump offset could also be used to look up directly from a normalized light direction vector texture.
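A rough sketch of that lookup chain in plain Python (purely a hypothetical software model of the idea, not actual GX/TEV code; the negation and the radial light-map shape are my assumptions):

```python
import math

def bump_offset(light_dir_tangent):
    """Per-vertex bump offset: the (s, t) components of the normalized
    tangent-space light direction, i.e. the sine of the angle between
    the light and the surface normal along each axis."""
    x, y, z = light_dir_tangent
    mag = math.sqrt(x * x + y * y + z * z)
    return (x / mag, y / mag)

def lit_lookup_coord(offset, normal_map_xy):
    """Combine the interpolated bump offset with the normal map's
    tangent-space x/y (negated), so the result is (0, 0) when the
    texel faces the light head-on."""
    return (offset[0] - normal_map_xy[0], offset[1] - normal_map_xy[1])

def light_map_intensity(coord):
    """A radial-falloff light map: full intensity at (0, 0),
    fading linearly to zero at distance 1."""
    d = math.hypot(coord[0], coord[1])
    return max(0.0, 1.0 - d)

# Light straight above, texel normal aligned with the surface normal:
print(light_map_intensity(lit_lookup_coord(bump_offset((0, 0, 1)), (0, 0))))  # 1.0
# Same light, texel tilted fully along t: lands at the light map's edge:
print(light_map_intensity(lit_lookup_coord(bump_offset((0, 0, 1)), (1, 0))))  # 0.0
```

The real hardware would do the subtraction via texture coordinate generation and the intensity lookup via an actual light-map texture; this just demonstrates that the coordinate arithmetic behaves as described.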

Oh... and perhaps none of this works... :rolleyes: Feedback from actual developers welcome:)
 
Looking at their back catalog I don't see any reason to believe that HVS is coming up with revolutionary new graphics techniques for Wii based on their knowledge of the physics of light.

If you work on licensed software, it's because your programmers are incompetent and your engineers failed all their physics classes. :rolleyes:
 
If you work on licensed software, it's because your programmers are incompetent and your engineers failed all their physics classes. :rolleyes:

Of course not, the actual HVS engineers seem highly competent from my minimal interaction with them. But this is 8 or 9 year old hardware. I highly doubt they're breaking ground. A couple of guys at Valve are breaking ground, Carmack has broken ground, most other people aren't and won't ever... not on the scale they're implying (assuming it's not a joke).

Given how rare it is, talent like that tends to bubble up. There's something to be said for remaining profitable as an independent, which HVS has for a long time, but they're not the top of the heap. Perhaps that's harsh, but I feel that they invite this kind of criticism when some of their marketing hype has been based on slighting other Wii developers. (At least from my interpretation, perhaps they mean it all in a more innocent way.)
 
Perhaps that's harsh, but I feel that they invite this kind of criticism when some of their marketing hype has been based on slighting other Wii developers.
Have you seen what other Wii developers have produced? The slights are entirely justified. Also, go watch some videos of the Conduit. They're not lying about getting normal mapping to work.

It doesn't sound to me like they claim to be inventing whole new graphical paradigms akin to the Doom 3 engine or something. It sounds like they're simply doing more with the Wii hardware than most other developers. Given that most Wii developers slap together something that would look bad on the Dreamcast and call it a day, and given how few 3rd parties did anything with the GC besides port PS2 code, it's not really that hard to believe.
 
I agree, those techniques were just a little joke. And HVS has every right to make all the comments they have; it's completely true. Talking about normal mapping, from the latest vid I saw they're using it even more liberally and it does look great. The game is truly coming together, from the in-game materials to the lighting and post-processing. I think a lot of gamers will be surprised and stunned once they see the extent of the technologies implemented.
 
Wii ports of Xbox 1 games never look as good as the Xbox versions; look at House of the Dead 3, Far Cry, and Ghost Squad (based on Chihiro, the Xbox arcade board). Far Cry and Ghost Squad look horrible on the Wii compared to the Xbox, probably because they use a lot of shaders. I don't think we will ever see games like Doom 3, or especially Riddick, that use normal maps for everything on the Wii.
Ghost Squad and HOTD3 were both low-budget, offloaded port jobs. Had they been done in-house and given decent resources, the results would've improved accordingly. Similar ground-up genre entries on Wii (RE: Umbrella/Darkside Chronicles, HOTD Overkill, Dead Space Extraction) are all far more technically accomplished anyway.
 
No, it can't. The Xbox is about 2 to 3 times stronger than the Wii in terms of GPU and has a better CPU, so it is impossible for the Wii to run a game such as Doom 3.
The Wii can do bump mapping, maybe normal mapping, but only in a really limited way if you want a solid framerate, and Doom 3 used bump maps and normal maps everywhere.

Regards.
 
freezamite, no offence, but you don't know what you're talking about here. You should look up some more info on both the XBox hardware and the Wii hardware before making these kinds of comments.
 
freezamite, no offence, but you don't know what you're talking about here. You should look up some more info on both the XBox hardware and the Wii hardware before making these kinds of comments.
Of course I know.
About the Wii: it's a GC with 64MB more RAM (but many of those MB are lost to things like security and the OS, so they shouldn't be counted when we talk about the Wii's graphics capabilities), and its CPU and GPU are exactly the same as the NGC's, without the slightest change except for the overclock.

But people who understand chips know that clock speed is not the most important thing when talking about processors.
Although the Xbox CPU and the Wii CPU have similar clocks, it is also true that the Xbox CPU had full SIMD capability while the Wii CPU doesn't.

Talking about GPUs, the Wii has a low-end DX7 GPU while the Xbox had a good DX8 GPU; the difference between the GPUs, clocks aside, is really big.

Regards.
 
Freezamite, the Wii has 91MB of main memory vs 27MB for the GC, and 8GB/s of main memory bandwidth (the GC had 2.6GB/s). Both are quite significant increases which improve the system a lot. Obviously all the other buses within the system have gained 50% more bandwidth too. The CPU also has improvements, albeit pretty small ones (L2 cache fetch modes, for instance). The Wii is easily twice as powerful as the GC.

Broadway has SIMD support, by the way, as well as twice the bandwidth of the XCPU. It's a 64-bit (32-bit SIMD) 729MHz RISC PowerPC 750CXe-based CPU with the same amount of GFLOP power as the XCPU. I'd be surprised if it wasn't actually a bit faster overall.
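For what it's worth, the "same GFLOP power" claim works out on the back of an envelope, assuming Broadway's paired singles can issue a fused multiply-add per cycle and the XCPU's SSE issues one op per lane per cycle (both are assumptions about theoretical peaks, not measured numbers):

```python
def peak_gflops(clock_mhz, simd_lanes, flops_per_lane_per_cycle):
    """Theoretical peak GFLOPS = clock * SIMD width * flops per lane per cycle."""
    return clock_mhz * 1e6 * simd_lanes * flops_per_lane_per_cycle / 1e9

# Broadway: 729 MHz, 2-wide paired singles, fused multiply-add = 2 flops/lane
print(peak_gflops(729, 2, 2))  # 2.916
# XCPU: 733 MHz, 4-wide SSE, one op (mul OR add) per lane per cycle
print(peak_gflops(733, 4, 1))  # 2.932
```

Under those assumptions the two CPUs land within about half a percent of each other, which is why raw SIMD width alone doesn't settle the comparison.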

Wii's OS is not going to take up the entire 64MB of GDDR3 either; that's a crazy idea. The OS will take up a very small percentage of the Wii's memory, just like the XBox's OS did. You don't put 64MB of GDDR3 memory with over 4GB/s of bandwidth in a system just for an OS.

On the GPUs: yes, the XBox's GPU is based on DX8, while Hollywood has a rendering pipeline closer to (though not exactly the same as) the DX7 generation. But that doesn't mean one is low-end while the other is good or whatever. Look at the specs: they both have a similar fillrate. One has the advantage of being more flexible (XGPU), while the other has the advantage of more raw power due to the much higher bandwidth/higher effective fillrate (Hollywood). Again, pretty similar power, just designed in different ways with differing strengths. If you think the XGPU is better then I'm not going to bicker about it, but 2 to 3 times stronger is just nonsense.
 
No, sorry, you don't know at all if you think the Wii is just a GC clocked 50% faster with 64MB of RAM. The Wii has 250% more main memory than the GC. 250% more main memory bandwidth as well.
Yes, that's all. That's what I wanted to say: the 64MB of extra RAM along with its independent 32-bit bus.

50% more CPU bandwidth. 50% more internal framebuffer and texture buffer bandwidth.
But that's the logical increase needed to avoid bottlenecking the system. I mean, if you overclock your CPU to make it 1.5 times faster, and the bandwidth you had before the overclock was just what the CPU needed and nothing more, then if you don't increase the bandwidth you have a problem.
This is why it can't be counted as a real improvement apart from the 1.5x overclock.

The same goes for the internal framebuffer and texture buffer bandwidth; I'm also counting those as part of the overclock.

The only increase in bandwidth that is a real improvement apart from the speed boost is the extra bandwidth gained from the extra 64MB of RAM.
But that memory is used by the Starlet, and we also know that at least 12MB of it can't be used by games because it is dedicated to security.
Apart from those 12MB, you also have to discount the amount of memory needed by the OS, which has to be at least about 12 or even 32 MB more (since the Xbox 360 OS needs 64MB of RAM and the PS3 OS needs even more).

So the fact is that we can't know how much of these 64MB of RAM can be used by games (if you know, then tell me ;) ).
Apart from that, that extra memory bandwidth is used for purposes other than gaming, so we have to factor that in too.

In the end, the extra bandwidth that the Wii theoretically has over the Xbox is lost to things that don't matter in terms of graphics.

and some small CPU improvements we know of (L2 cache fetch modes for instance).
But to be honest, this is such a small improvement that it's hardly worth considering when talking about the Wii.
How much can this increase the CPU's performance? 1% maybe?
But fine, it is true that there is a (really small) boost here.

But that's all; nothing else has been changed one bit.

Broadway had SIMD.
Really limited compared to true SIMD.
The Wii has a 64-bit FPU, which can work with two 32-bit floating point values.
Real SIMD works with four 32-bit floating point values, so there's no comparison.
The PowerPC G3 was made to compete against the P2, but against a P3 it wasn't even a match.

You think Wii's OS takes up the entire 64MB of GDDR3?.. That's a crazy idea, the OS will take up a very small percentage of Wii's memory, just like XBox's OS did. You don't put 64MB of GDDR3 memory with a bandwidth of over 4GB/s for an OS.
OS, security, channels, internet...
Nintendo itself said that while you're playing, the Wii runs a full OS in the background, not only updating the news or weather channel but also letting games look into those channels, so programmers could, for example, make the in-game weather match the real-life weather at that moment.

So not only memory and bandwidth, but also a lot of CPU resources are lost to that kind of thing.

I really don't think the real power the Wii can dedicate to games is superior to what the GC had.
But even if the Wii could put all its raw performance into games, they would still be inferior to the Xbox's games.
The difference between GPUs is that big.

How are you comparing the GPU's exactly? Yes XBox's GPU is based on DX8 and Hollywood has a rendering pipeline closer to the DX7 generation then DX8. However to say one is low end and the other "good" is just nonesense. Look at the specs, they both have the same fillrate. One has the advantage of being more flexible (XGPU) while the other has the advantage of more bandwidth (Hollywood).
Fillrate is not everything, and even in fillrate terms, the XGPU had much more texel fillrate than the Wii.
And it's also easy to understand why the Xbox had a "good" DX8 GPU: we all know which model of GPU the Xbox had, so we can easily compare.
As for the Wii, its TEV pipeline, which is said to be the most advanced part of that GPU, is like a DX7 rendering pipeline, so compared to a DX8 pipeline there is no point on which it could be better.

As for the T&L unit of the Wii's and GC's GPU, it is also known that it wasn't the best T&L unit even when it was released back in 2002, and that it was a really big disappointment compared to what programmers expected it to be.

The Wii isn't a bad console, and it has its games, but it was made to be just the same as a GC, only with the Wii Remote instead of a normal pad.
In the end new features were added, and of course the specifications were changed a bit to maintain the GC's graphical level alongside those extra things.

This is what the Wii is; sorry if this isn't what you expected the Wii to be.

Regards.
 
On a spec sheet, the XGPU has a very high texture fillrate with its 4x2 design. But since the memory performance was rather awful due to the unified memory setup etc., it wasn't nearly that fast in practice.

And I do think Wii has more pixel fillrate by every measure.

Xbox's only real advantage is with its pixel & vertex shader hardware. The TEV in Wii has proven to be a decent alternative in most cases though. But yes this is certainly a disadvantage that Wii has not shown to be able to put aside. You just need to play Doom3, Half Life2 or FarCry Instincts for a little bit to realize this.

I don't remember Cube's geometry processing ability being looked down on at all back in '02. Other than its fixed function limitations maybe. What I remember is being blown away by Rogue Leader's huge polygon counts and that Factor 5 got this working pretty quickly. I also remember that all of Cube's hardware was clocked a lot lower than Xbox's and yet it competed quite well.
 
Apart from those 12MB, you also have to discount the amount of memory needed by the OS, which has to be at least about 12 or even 32 MB more (since the Xbox 360 OS needs 64MB of RAM and the PS3 OS needs even more).

The 360 OS needs 32MB.

Fillrate is not everything, and even in fillrate terms, the XGPU had much more pixel fillrate than the Wii.

NV2A's trilinear fillrate is worse.
 
On a spec sheet, the XGPU has a very high texture fillrate with its 4x2 design. But since the memory performance was rather awful due to the unified memory setup etc., it wasn't nearly that fast in practice.

And I do think Wii has more pixel fillrate by every measure.

Xbox's only real advantage is with its pixel & vertex shader hardware. The TEV in Wii has proven to be a decent alternative in most cases though. But yes this is certainly a disadvantage that Wii has not shown to be able to put aside. You just need to play Doom3, Half Life2 or FarCry Instincts for a little bit to realize this.

I don't remember Cube's geometry processing ability being looked down on at all back in '02. Other than its fixed function limitations maybe. What I remember is being blown away by Rogue Leader's huge polygon counts and that Factor 5 got this working pretty quickly. I also remember that all of Cube's hardware was clocked a lot lower than Xbox's and yet it competed quite well.
I meant the texel fillrate was higher on the NV2A; pixel fillrate is nearly the same on both GPUs.

But although in some specific aspect the Wii GPU may have equal or better performance, what counts is the overall picture, and that is where the NV2A is better by far.

As for the Xbox's memory, it was the worst part of the system design, but in the end you only have to look at the screen to appreciate the differences between the GC and the Xbox.
While the Xbox had lots of games with SD 360-level graphics (RSC2, SC: Chaos Theory, DOAU), the GC didn't have a single game that could fit in this category, and the same goes for the Wii.

Regards.
 
Freezamite

You seem to be trying to discount 4GB/s of memory bandwidth because an ARM9 processor has access to it for security/OS data :unsure: How much bandwidth do you think is used for a small OS (basically a fancy menu system) like the one on the Wii, and for some security tasks? It's insignificant. Also, how can the Wii's extra bandwidth compared to the XBox be lost to these tasks when the XBox had its own OS?...

The XBox has 6.4GB/s of memory bandwidth for everything. That includes OS, sound, game code, textures, geometry, everything. The Wii has a total of 20GB/s of memory bandwidth (not including the on-chip 1MB texture buffer), and you're telling me that the Wii OS and security are using up that extra 13.6GB/s?..
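The XBox figure itself falls out of its bus: a 128-bit DDR interface at 200 MHz. A quick sanity check in Python (the 20GB/s Wii total is this thread's own figure, not something I'm asserting independently):

```python
def bus_bandwidth_gb_s(clock_mhz, bus_bits, transfers_per_clock=2):
    """Peak bandwidth = effective transfer rate * bus width in bytes."""
    return clock_mhz * 1e6 * transfers_per_clock * (bus_bits // 8) / 1e9

xbox_unified = bus_bandwidth_gb_s(200, 128)  # 128-bit DDR @ 200 MHz
wii_total = 20.0                             # total quoted above, all buses combined
print(xbox_unified)                          # 6.4
print(round(wii_total - xbox_unified, 1))    # 13.6
```

The point being argued: whatever the OS and security tasks consume, it is a sliver of that 13.6GB/s gap, not the whole thing.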

Where do you get these figures for how much memory is taken by security and the OS, by the way? I think it's much more likely that 12MB covers everything outside of gaming, security AND OS.

Also, how do you know that it isn't Starlet that runs this menu system/OS? It runs security and various other tasks in the system, and AFAIK it's a 242MHz ARM9, so it's not beyond its capabilities.

Really limited compared to true SIMD.
The Wii has a 64-bit FPU, which can work with two 32-bit floating point values.
Real SIMD works with four 32-bit floating point values, so there's no comparison.
The PowerPC G3 was made to compete against the P2, but against a P3 it wasn't even a match.

"Real SIMD", as you call it, does four 32-bit floating point ops at peak; in reality it never achieves that. Also, while the original G3 architecture was designed to compete against the Pentium 2, Broadway is not an original G3; it has many improvements over that CPU, much like the Pentium 3 has many improvements over the Pentium 2.

Fillrate is not everything, and even in fillrate terms, the XGPU had much more pixel fillrate than the Wii.
And it's also easy to understand why the Xbox had a "good" DX8 GPU: we all know which model of GPU the Xbox had, so we can easily compare.
As for the Wii, its TEV pipeline, which is said to be the most advanced part of that GPU, is like a DX7 rendering pipeline, so compared to a DX8 pipeline there is no point on which it could be better.

As for the T&L unit of the Wii's and GC's GPU, it is also known that it wasn't the best T&L unit even when it was released back in 2002, and that it was a really big disappointment compared to what programmers expected it to be.

The Wii isn't a bad console, and it has its games, but it was made to be just the same as a GC, only with the Wii Remote instead of a normal pad.
In the end new features were added, and of course the specifications were changed a bit to maintain the GC's graphical level alongside those extra things.

This is what the Wii is; sorry if this isn't what you expected the Wii to be.

I know what the Wii is; I've been there and done that when it comes to discussing GC hardware to death, and the Wii is very similar, of course. I think you either underestimate the Wii or you have the XBox up on a pedestal. Having seen your comment in another post about the XBox having many games with SD XBox 360 graphics, obviously the latter is true. The XBox had no games that looked as good as XBox 360 games at SD resolutions. By the way, the XGPU has a pixel fillrate of 932 Mpixels/s; Hollywood has a pixel fillrate of 972 Mpixels/s.

You haven't said anything new here, so it's hard to really discuss the GPU with you any further: just the same generalisations of "DX8 good, DX7 bad". You need to accept that the XGPU and Hollywood are very similar in power overall, with their own advantages and disadvantages over each other. Neither one is 2 to 3 times better than the other, nowhere near.
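Those two fillrate figures are just core clock times pixel pipelines, assuming four pixel pipes on each chip and 233/243 MHz core clocks (my assumptions, shown only to make clear where the numbers come from):

```python
def pixel_fillrate_mpixels(core_clock_mhz, pixel_pipelines):
    """Theoretical pixel fillrate: one pixel per pipeline per clock."""
    return core_clock_mhz * pixel_pipelines

print(pixel_fillrate_mpixels(233, 4))  # NV2A (XGPU): 932 Mpixels/s
print(pixel_fillrate_mpixels(243, 4))  # Hollywood: 972 Mpixels/s
```

On this theoretical measure the two chips are within about 4% of each other, which is the point being made against the "2 to 3 times stronger" claim.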
 
XBox had no games that looked as good as XBox 360 games at SD resolutions.

That reminds me: let's say Nintendo decided to make the Wii do XBox 360 graphics at 480p. What kind of GPU would they need while still maintaining a consistent frame rate? Is a GPU upgrade all they'd need, or would they also need to upgrade the RAM, CPU, etc.?
 