Normal Mapping Wii demo

BTW, does the Wii support shaders? If yes, to which shader model do the hardware capabilities correspond?

Technically it doesn't correspond to a shader model. The Xbox was basically shader model 1.3 IIRC; the Gamecube and Wii are different hardware where things are implemented in a much different fashion than DirectX 8+ aimed hardware.
 
BTW, does the Wii support shaders? If yes, to which shader model do the hardware capabilities correspond?

The Wii does not support shaders at all. The texture combiner hardware is very similar to the last DX7 parts such as the GeForce 2 GTS and Radeon 7000 series. With DX7 texture combiners you can also do DOT3 bumpmapping and EMBM bumpmapping. However, using these features in full-scale games was not that common, as you had to multipass (alpha blend) just to support specular + diffuse. Multiple light sources required very heavy multipassing, killing the performance completely (even without any real-time shadows present). Current generation games tend to have hundreds of shadow casting light sources in each game level, and it's not uncommon to have more than ten lights affecting a single object. This demo just shows a single light source (and no shadows).
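For reference, the DOT3 operation those combiners perform is just a clamped per-pixel dot product between a biased tangent-space normal fetched from a texture and a light vector. A minimal CPU-side sketch of the math in C (function names are illustrative, not any real API):

```c
#include <math.h>

/* Decode a DOT3-style normal-map texel: channels store
   (n * 0.5 + 0.5) * 255, so a value of ~128 decodes to ~0. */
static float decode(unsigned char c) { return c / 127.5f - 1.0f; }

/* Per-pixel diffuse term as DX7 DOT3 combiners compute it:
   clamp(N . L, 0, 1). lx/ly/lz must be a unit tangent-space
   light vector. */
float dot3_diffuse(unsigned char nx, unsigned char ny, unsigned char nz,
                   float lx, float ly, float lz)
{
    float d = decode(nx) * lx + decode(ny) * ly + decode(nz) * lz;
    return d < 0.0f ? 0.0f : (d > 1.0f ? 1.0f : d);
}
```

The hardware evaluates this per pixel in the combiner, with the light vector interpolated across the polygon; the clamp is what makes back-facing normals go black.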

Around ten years ago I coded a single light source normal mapping demo very much like this one using DX7: a 2-layer DOT3 combiner for the GeForce 2 GTS and a 3-layer EMBM combiner for the Matrox G400 and Radeon DDR. I am glad to see people getting excited for all this oldschool stuff... but comparing the DX7-style Wii texture combiner hardware to new DX9/DX10 shader hardware is just not getting us anywhere. By sacrificing some performance and multipassing heavily you might get close to the original Xbox (DX8) visuals. Basically the same tricks were used to get the first DX8 shader based games to run on old DX7 cards. This is really nothing new.
 
@Sebbbi: If one defines a shader as an operation that influences a pixel's final colour, the Wii/GC certainly has one. What the Wii doesn't have is a set of GPU operations that are compliant with modern shader models. Besides that, Nintendo themselves call their blender a shader unit (see the recirculating shader patent).

You are right in that the Wii HW has limited resources, but the number of separate passes (polygon draws) you require really depends on your requirements. It's not as bad as you make it sound=) For example, you could set up 6 diffuse lights (you're texture coordinate limited: you need one coordinate for the normal and a zero coordinate as well) if you already know they all light the polygon drawn. Only if you want more lights do you need a second pass (or have to start using EMBM). For some effects multiple passes would be a requirement: you could set up a pass to calculate screen-space parallax mapping displacement coordinates and then use a second pass to combine those with normal mapping. As for shadow mapping, if you use light attenuation (only 8 bits, but still appreciable) and have the CPU select the proper map, you can project at least 4 shadow maps in a single pass. Another thing: color and alpha data are processed independently from each other, so you can set up a stage to perform operation 1 via the color channel and, in parallel, operation 2 via the alpha channel. So you could, to some extent, perform normal mapping and shadow mapping operations in parallel.

You are also right that this is nothing new. My intention was/is to find out what the Wii can do and how feasible some of that stuff is. My first conclusion is that normal mapping is simple to implement and probably performs about equal to an Xbox (i.e. to do normal mapping on an Xbox I assume you require 2 texture reads, a dot3 op, a clamp op and a blending op).

@Deadly: well, it all depends on your fillrate requirements. If a shader setup requires all TEV stages and the game overwrites each pixel ten times, it gets hard to maintain framerate. It also depends on the achievable texture quality and the possibility to implement all the shaders required to create photorealistic images. But in general, I think most *general* effects seen on the X360 can be done on the Wii as well (though some might be scaled back, such as the number of lights). But that is nothing new either; some German company already proved that=). But I'm currently looking into Quake's source to see if I can implement some stuff in there. If that works out (and wiibrew actually runs in Wii mode), we can use Quake to explore the Wii's performance boundaries and perhaps create some mods such as normal mapped characters.
 
If that works out (and wiibrew actually runs in Wii mode), we can use Quake to explore the Wii's performance boundaries and perhaps create some mods such as normal mapped characters.

Are you saying you plan to make Quake playable on the Wii with extra added effects? That would be pretty damn cool.

You are also right that this is nothing new. My intention was/is to find out what the Wii can do and how feasible some of that stuff is.

Yup, The Conduit and Dewy's Adventure on the Wii did normal mapping, but it's still cool to find out for yourself just how far you can push it.

BTW, since you're programming for the Wii, do you know how many polygons the system can push using Nintendo's standards, meaning with pretty much every single effect turned on? Come to think of it, I still don't know how many polygons the old Gamecube is capable of. Nintendo said 12-16 million, while Factor 5 says 20+ million. Which is it?
 
But I'm currently looking into Quake's source to see if I can implement some stuff in there. If that works out (and wiibrew actually runs in Wii mode), we can use Quake to explore the Wii's performance boundaries and perhaps create some mods such as normal mapped characters.
Wiibrew does run in Wii mode, and someone already ported Quake to Wii. Might be the best place to start.
 
Wiibrew does run in Wii mode, and someone already ported Quake to Wii. Might be the best place to start.

Already have it compiling, removed the wiikeyboard code (since I'm too lazy to download the lib), found the dynamic lighting and scenery drawing code and started fooling around=) Just wrote some code that hopefully picks out the 4 nearest lights instead of generating a dynamic lightmap. In the next step I can assign those 4 to the hardware when drawing the surface.
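Picking the 4 nearest lights can be a simple selection pass over squared distances; a sketch of the idea (hypothetical names, not Quake's actual structures):

```c
/* Pick the indices of the (up to) 4 lights nearest to a surface point.
   Keeps a small unsorted "best" set and replaces the current worst
   entry when a nearer light is found. Sketch only. */
int pick_nearest_lights(float origins[][3], int num_lights,
                        const float point[3], int out[4])
{
    float best_d[4];
    int count = 0;
    for (int i = 0; i < num_lights; i++) {
        float dx = origins[i][0] - point[0];
        float dy = origins[i][1] - point[1];
        float dz = origins[i][2] - point[2];
        float d = dx * dx + dy * dy + dz * dz;   /* squared distance */
        if (count < 4) {
            best_d[count] = d;
            out[count] = i;
            count++;
        } else {
            int worst = 0;                        /* find farthest kept */
            for (int k = 1; k < 4; k++)
                if (best_d[k] > best_d[worst]) worst = k;
            if (d < best_d[worst]) {
                best_d[worst] = d;
                out[worst] = i;
            }
        }
    }
    return count;                                 /* lights selected */
}
```

With only a few hundred static lights per map this brute-force pass per surface is cheap enough to do on the CPU.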

Are you sure about the Wii mode thing? I thought it ran in GC mode in terms of speed, but with the Wii hardware enabled (or did they surpass that without me noticing?)

@Deadly: I'm just a starter, but if I don't bail out prematurely, I can hopefully answer those questions later. Anyway, I think you can really knock it down=)
 
Are you sure about the Wii mode thing? I thought it ran in GC mode in terms of speed, but with the Wii hardware enabled (or did they surpass that without me noticing?)
Pretty much, yes. I've actually never heard of such a limitation. And it doesn't even make much sense, since homebrew runs in pure Wii mode and never enters compatibility mode.
 
Ok, great thing to know=)

Some advances... Killing to get there though! Also found out that the GPU can calculate only 3 bump coordinates per polygon. Also found out that you can't assign one light to a colour channel's color and a distinct light to the same channel's alpha. So I have to backtrack on the 6 lights per polygon pass... it's only 3. So to get my 4 lights per surface I'll need 2 passes. That's OK, because my last goal will be implementing dynamic shadow mapping, and I'm gonna need 2 passes to include that for 4 lights anyway.

Wii holds perfectly with 2 lights per surface:

20090715.jpg
 
Is that an off-screen pic? Looks really messy and hard to make out. I can definitely see your normal maps, but can't really see the lighting.
 
The Tenebrae project added normal mapping and dynamic shadows to Quake. The project seems dead now, but you might still be able to find some helpful resources there.

Here's a set of normal mapped models for Tenebrae.
 
Yeah, offscreen. I haven't looked into getting it running in Dolphin so I can't make screenshots. It looks messy for a lot of reasons:

* I don't have dedicated normal maps, so the normal map size doesn't fit some polies. Did find some from Tenebrae, but those are height maps and I need to convert them first. I also have to extend the texturing system so Quake can load them

* Quake removes most static lights when spawning them. I already hacked around that for now. Looks a lot better, but I actually have to change it in progs.dat (or see if someone else already did such a thing) and remove the hack

* I don't perform backface culling on static lights yet; what you see on that pic is a light that's actually located in the room behind the wall=)

* Only two lights per polygon. Besides that, picking static lights requires some improvements... It now simply selects the 2 lights that are closest to the surface's plane, but it should actually pick the 2 lights that have the most attenuation on the surface and are located in front of the surface

* I didn't return the first two TEV stages to direct mode, so that's why the hud looks deformed

So, still a lot left to do but I'm pretty sure it'll work out!
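The "located in front of the surface" test from the light-picking item above is just a signed distance against the surface plane; a hedged sketch (hypothetical helper, Quake-style plane convention assumed, i.e. n . p = dist on the plane):

```c
/* Returns nonzero if a light can affect a surface: it must lie in
   front of the surface plane and within its radius of the plane.
   plane_normal is assumed to be unit length. Sketch only, not the
   engine's actual culling code. */
int light_faces_surface(const float plane_normal[3], float plane_dist,
                        const float light_origin[3], float light_radius)
{
    float side = plane_normal[0] * light_origin[0]
               + plane_normal[1] * light_origin[1]
               + plane_normal[2] * light_origin[2] - plane_dist;
    /* behind the plane, or too far in front to reach it -> cull */
    return side > 0.0f && side < light_radius;
}
```

This would have culled the light showing through the wall in the screenshot, since that light sits behind the wall's plane.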
 
@IronTiger, thanks for the link. I have Tenebrae's sources as well, and they changed and added a lot of stuff. It sure would be nice if I could use those models!
 
Unfortunately those links are dead:/

Anyway, for the ones interested, I implemented a bumpmap generator based on greyscales. So each existing texture is converted to B/W and then processed into a normal map. The quality isn't that great, but it already looks better than original Quake. Still using only two, badly selected lights per surface.

01.jpg


02.jpg


03.jpg


04.jpg
 
Tenebrae probably used a similar technique for its normals. Are you doing it in realtime with a shader, or have you pre-generated your bump maps? I can dig up my copy of Tenebrae when I go home this weekend. If there's anything you want from it, let me know.

How's the framerate at this stage?
 
Hard to appreciate the work without direct feeds. Try out the latest dolphin and see if you can make some screen grabs, or perhaps release a demo to someone here with better video capturing equipment to take some screens.

Your last screen definitely shows off some nice bumps on the wall and what looks like specular too.
 
Unfortunately those links are dead:/

Anyway, for the ones interested, I implemented a bumpmap generator based on greyscales. So each existing texture is converted to B/W and then processed into a normal map. The quality isn't that great, but it already looks better than original Quake. Still using only two, badly selected lights per surface.

Have you seen DarkPlaces? It has solutions for some of these problems you are encountering. It allows the loading of external rtlight files that define the lights to be used in a level for realtime lighting purposes.

http://icculus.org/twilight/darkplaces/download.html

The DarkPlaces download comes with rtlight files for all of the original Quake maps, where people have gone through and optimised the number of lights hitting each surface. Not only that, but there are some good higher resolution texture packs available for it, decent normal maps etc.
 
I have a question about the Wii's "extra hardware". I thought the Wii was an overclocked GC with more RAM.
Is this what you mean when you talk about "extra hardware"?
Is there any other extra hardware inside the Wii, like an improved CPU or GPU?

In that case, would this hardware put the Wii's capabilities at the same level as the first Xbox? (I assume that a simple 1.5x overclock can't put the Wii at the same level as the first Xbox.)

Regards.
 
Sorry for the late reply. Had to do some other stuff. At that point, timedemo ran at about 50 frames/sec with 4 lights per surface. I draw each poly 4 times. Can squeeze out a bit more I guess, because the CPU waits for vsync and such.

However, I had to change the scenery shader a bit. It turns out that some surfaces have been modelled like brick walls, so for those few surfaces the vertices don't coincide. Since the bump offsets are interpolated and not really per pixel, I got annoying errors, i.e. slightly shifted light spots. If the vertices were coincident this problem wouldn't occur, but that's a problem one solves during scenery design; it can easily be avoided.

So I had to go a slightly different route. Now I set up a texture transform matrix that transforms the light to tangent space and use the coordinates to project a light direction texture on the affected polygons. The texture values replace the interpolated bump coordinates, and it works since it is position based, which may be interpolated freely. Besides that, it's possible to do more than two lights per polygon draw.
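The transform that matrix performs boils down to multiplying the light direction by the polygon's tangent/binormal/normal basis; a minimal sketch of the per-polygon math (hypothetical names, the actual demo concatenates this into the GPU's texture matrix):

```c
/* Transform a world-space light direction into a polygon's tangent
   space. The tangent, binormal and normal form the rows of the 3x3
   basis matrix. Sketch only. */
void light_to_tangent_space(const float tangent[3], const float binormal[3],
                            const float normal[3], const float light_dir[3],
                            float out[3])
{
    out[0] = tangent[0] * light_dir[0] + tangent[1] * light_dir[1]
           + tangent[2] * light_dir[2];
    out[1] = binormal[0] * light_dir[0] + binormal[1] * light_dir[1]
           + binormal[2] * light_dir[2];
    out[2] = normal[0] * light_dir[0] + normal[1] * light_dir[1]
           + normal[2] * light_dir[2];
}
```

Since the result is then fetched from a projected texture per pixel instead of being interpolated across the polygon, non-coincident vertices no longer shift the light spots.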

I do need to straighten out the code a little (it's VERY unoptimized now), but since it adds two additional stages it will be slower. In terms of CPU work it requires per-polygon setup, but it's nothing more than a matrix concat. The GPU does the rest. For models I can keep using interpolated bump offsets.

@deadly: As for screenshots, Quake has some code to shoot TGAs, but that code hasn't been ported yet. I'll tackle that shortly and post some fresh shots.

@tron: Yes, I also saw stuff about DarkPlaces, and yes, those lights are a pain. If you remember the introduction, where you select your skill and episode, that map alone already has 268 lights. That gets a bit tricky and sometimes even messy with only 4 lights per poly; specular reflection may be cut off and sometimes light levels abruptly change between polygons.

@freezamite: There is no extra jaw-dropping hardware in the Wii that I use. This can also be done on the GC, though it will probably be slower. I haven't programmed an Xbox, so I don't know. On what do you base your assumption that a 1.5x overclock can't put the Wii at the same level as the first Xbox?
 
Cool, good to know you're still working on it. That's a very impressive frame rate too.

I have more questions that stem from my curiosity about what normal maps can do.

I looked at the Tenebrae project models and even with normal mapping, they still look pretty low poly. The details just don't come out right. Forget adding bulging veins and stuff -- if you can make them look round and smooth, that would be great. Of course, that's ONLY if normal maps can make such low poly models look smooth. Since Quake has such a low polygon count, is it possible to make them look smooth? The Tenebrae project looks like they're more concerned with adding details than with smoothing out the low polygon models.

This isn't a question but more of a request. Can you do something about the guns? The guns are the closest objects to the player in an FPS game, and they would really show off how detailed you can make things with normal maps. They're also static objects with the camera always facing them at the exact same angle, so the normal map illusion won't break down.

Oh, and why Quake? I know this game is popular, but it seems like a lot of hackers also love to mess around with this game. Is it because the game is very easy to port and work with? With the Doom 3 discussion going on in the other thread, any chance you'd ever consider taking a crack at it? Not the whole game, just try getting one level to run and play on the Wii as a tech demo.
 
Oh, and why Quake? I know this game is popular, but it seems like a lot of hackers also love to mess around with this game. Is it because the game is very easy to port and work with? With the Doom 3 discussion going on in the other thread, any chance you'd ever consider taking a crack at it? Not the whole game, just try getting one level to run and play on the Wii as a tech demo. Personally, I'd rather see Jurassic Park Operation Genesis running on the Wii just for a laugh.
Quake is open source now, and Doom 3 is not. id usually releases the source code to their engines when all their licensees are done with it, but Doom 3 still has Wolfenstein coming out, and that will probably be supported for a while after release.
 