WiiGeePeeYou (Hollywood) what IS it ?

Status
Not open for further replies.
pc999 said:
But assuming the GPU is like flipper
The question is how much like Flipper. Replacing the T&L with vertex shaders or significantly upgrading the TEV would allow for many more effects. BTW, check out the new Pangya Golf video. The animation and self-shadowing look very nice. Look close at this "screenshot":

http://revolutionmedia.ign.com/revo...super-swing-golf-pangya-20060511093931317.jpg

I'm assuming that's in-engine. The lighting on the girl's arm looks similar to what we saw in certain places in RE4. If you look close at the guy, you see the kind of behavior around vertices normally associated with vertex lighting, not per-pixel. The self-shadows also look really good, with soft edges. If you've played a few Cube games, you'll notice that they seem to do shadows by creating some kind of projective texture, which is often filtered to give it a soft look. This is most apparent in Metal Arms, Prince of Persia, and Metroid Prime 2.
 
fearsomepirate said:
The question is how much like Flipper. Replacing the T&L with vertex shaders or significantly upgrading the TEV would allow for many more effects. BTW, check out the new Pangya Golf video. The animation and self-shadowing look very nice. Look close at this "screenshot":

http://revolutionmedia.ign.com/revo...super-swing-golf-pangya-20060511093931317.jpg

I'm assuming that's in-engine. The lighting on the girl's arm looks similar to what we saw in certain places in RE4. If you look close at the guy, you see the kind of behavior around vertices normally associated with vertex lighting, not per-pixel. The self-shadows also look really good, with soft edges. If you've played a few Cube games, you'll notice that they seem to do shadows by creating some kind of projective texture, which is often filtered to give it a soft look. This is most apparent in Metal Arms, Prince of Persia, and Metroid Prime 2.

Not sure Flipper had T&L... I think I remember an article about it saying that those were done on the IBM CPU...
About the shadows, the NGC uses Shadow Maps AFAIK.
(Can it even do otherwise ? I don't remember it having a stencil buffer...)
 
Squeak said:
But EMBM can do the exact same effects as normal mapping, plus non-monochrome and oddly shaped lights.
Depending on how they implemented it on Hollywood, developers should be able to use it anywhere from a lot to nearly all the time.

though EMBM is possibly among the most versatile uv-transformation techniques available to date, and is certainly often used to produce perturbations in surface shading, it's not nearly as straightforward/intuitive as normal maps for pixel shading. the latter allow you to work with actual light vectors and plug them directly into the phong model, whereas EMBM relies on lightmaps to achieve similar looks - an unnecessarily high level of implementation complexity for an effect which can also be satisfactorily approximated by simpler techniques (specularity masks, etc). or, put in other words, EMBM is a technique that is a tad too complex for the use ascribed to it in this thread, i.e. full-fledged per-pixel local lighting on everything. i'd personally prefer pvr2dc's polar normal maps to EMBM for massive pixel lighting. but then again i'm not a console dev ; )

ps: apropos, if this generation's computational power turns out sufficient for the common adoption of dynamic reflection maps, so that the latter finally manage to take their rightful place in the specular spot of the reflection models which has been occupied so far by blinn's hack et al, then EMBM may all of a sudden turn out to be the right tool for the right job once again. at least for those situations where straightforward cubemapping cannot be afforded.. like in some edram-based GPU architectures devoid of cubemapping ; )
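To make the contrast concrete, the EMBM texture-coordinate path can be sketched roughly like this (a minimal illustration in Python; the function names and sampling details are assumptions, not any real API). A (du, dv) offset fetched from the bump map, scaled by a 2x2 matrix, perturbs the coordinates used to look up the environment or light map; note that no per-pixel light vector ever appears.

```python
# Minimal EMBM sketch. All names are illustrative, not any real API.

def sample(tex, u, v):
    """Nearest-neighbour fetch with wrap-around addressing."""
    h, w = len(tex), len(tex[0])
    return tex[int(v * h) % h][int(u * w) % w]

def embm_texel(bump_map, env_map, u, v, m):
    """EMBM: a (du, dv) pair from the bump map, transformed by the
    2x2 matrix m, perturbs the env-map lookup coordinates."""
    du, dv = sample(bump_map, u, v)
    pu = u + m[0] * du + m[1] * dv   # perturbed coordinates
    pv = v + m[2] * du + m[3] * dv
    return sample(env_map, pu, pv)
```

This is why the bumpiness only shows up through whatever map is being perturbed: the lighting itself has to live in the env/light map.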
 
Ingenu said:
Not sure Flipper had T&L... I think I remember an article about it saying that those were done on the IBM CPU...
Flipper's fixed-function T&L is old news. Planet Gamecube reported on it back when it was still called Dolphin.
http://www.planetgamecube.com/newsArt.cfm?artid=4699
Back in the day, ERP posted that Flipper's T&L unit was actually simpler than GF2's as an illustration of how ArtX cut corners in transistor count without severely affecting image quality.
http://imtoolazytofindit
Anand reported on it here:
http://www.anandtech.com/showdoc.aspx?i=1566&p=3
About the shadows, the NGC uses Shadow Maps AFAIK. (Can it even do otherwise ? I don't remember it having a stencil buffer...)
Luigi's Mansion and the Splinter Cells looked like they were using shadow volumes. I'm not so good at parsing that old Gamasutra article, but it looks like the way shadow projection is done on Flipper (whether onto the model or something else) is that the TEV is used to generate texture coordinates. You'll notice it also mentions an implementation of real bump mapping, as opposed to emboss mapping, but it requires throwing the data from global lights into a texture. It looks totally weird to me.
http://www.gamasutra.com/features/20021002/sauer_03.htm
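The general projective-texturing idea behind that kind of shadowing can be sketched like this (a generic illustration, not Flipper's actual TEV setup; matrix layout and names are assumptions). A world-space point is transformed by the light's view-projection matrix, and the projected result becomes the texture coordinates that index the shadow texture:

```python
# Generic projective shadow-texture coordinate generation (illustrative).

def mat_vec(m, v):
    """Multiply a row-major 4x4 matrix by a 4-component vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def shadow_uv(light_viewproj, world_pos):
    """Project a world-space point into the light's clip space,
    perspective-divide, then remap [-1, 1] to the [0, 1] texture range."""
    x, y, z, w = mat_vec(light_viewproj, [*world_pos, 1.0])
    return (x / w * 0.5 + 0.5, y / w * 0.5 + 0.5)
```

On hardware without programmable vertex shaders, the interesting part is where this per-vertex matrix transform gets done, which is presumably why the article leans on texture-coordinate generation.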
 
pc999 said:
Ok, now I am officially confused about what GC can or can't do.:LOL:

If someone can tell I would be thankful.

GPU - "Flipper" (system LSI)
Manufacturing Process: 0.18 microns NEC Embedded DRAM Process
Clock Frequency: 162 MHz
Embedded Frame Buffer/Z Buffer: Approx. 2 MB, Sustainable Latency: 6.2 ns (1T-SRAM)
Embedded Texture Cache: Approx. 1 MB, Sustainable Latency: 6.2 ns (1T-SRAM)
Texture Read Bandwidth: 10.4 GB/second (Peak)
Main Memory Bandwidth: 2.6 GB/second (Peak)
Color depth, Z Buffer depth: 24-bits
Image Processing Function: Fog, Subpixel Anti-aliasing, 8 Hardware Lights, Alpha Blending, Virtual Texture Design, Multi-texturing, Bump Mapping, Environment Mapping, MIP Mapping, Bilinear Filtering, Trilinear Filtering, Anisotropic Filtering, and Real-time Hardware Texture Decompression (S3TC).
Other: Real-time Decompression of Display List, Hardware Motion Compensation Capability, and HW 3-line Deflickering filter.

The bump mapping refers to EMBM.

http://www.segatech.com/gamecube/overview/index.html
 
On the surface surrounding this boss here, there seems to be some combination of bump and specular.
http://media.wii.ign.com/media/748/748588/dl_1500290.html

You can also see it on the rolling chomp in the press conference vid.

Hupfinsgack, a list of hardware features isn't exhaustive if there are any programmable elements, which there are on Flipper. For example, "bloom lighting" isn't listed in any last-gen console specs, yet we saw it in PS2, Xbox, and Cube games. Self-shadowing isn't mentioned in the Flipper specs and was seen in at least two games. The Gamasutra article details an admittedly complex way to process normal maps instead of emboss maps using texture operations. It looks like it requires 3 texture lookups in addition to creating a texture with global light data on the fly, as opposed to taking the dot product of an RGB-encoded normal with the incident light vector, as done on DX8+ hardware.
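For comparison, the DX8-class DOT3 operation being referenced can be sketched as follows (illustrative Python, assuming a pre-normalised tangent-space light vector; this is the general technique, not any particular API):

```python
# DX8-style DOT3 bump mapping: decode an RGB-encoded tangent-space
# normal from [0, 255] back to [-1, 1], dot with the light vector,
# and clamp negative (backfacing) results to zero.

def dot3_diffuse(rgb_normal, light_dir):
    """N.L diffuse term from an RGB-encoded normal-map texel."""
    n = [c / 127.5 - 1.0 for c in rgb_normal]          # decode to [-1, 1]
    d = sum(nc * lc for nc, lc in zip(n, light_dir))   # dot product
    return max(d, 0.0)                                  # clamp
```

On DX8+ hardware this is a single combiner operation per pixel, which is the contrast with the multi-lookup TEV approach the article describes.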
 
Ingenu said:
Not sure Flipper had T&L... I think I remember an article about it saying that those were done on the IBM CPU...


Flipper absolutely did have a geometry engine aka T&L unit -- otherwise Flipper would not have been a GPU, or to use an old-skool term, Flipper would not have been a "polygon processor".

without a T&L unit / geometry-processor, the Flipper would've just been a 3D accelerator / rasterizer like the PowerVR2DC in Dreamcast or the Graphics Synthesizer in PS2.


the Flipper's T&L unit / geometry-processor is the part called 'XF' in the Flipper block diagram, in the lower right corner.

[Image: flipper_die.jpg (Flipper die shot showing the block layout)]



Flipper's geometry engine / T&L unit is fixed function, and not as flexible as the Vector Units in PS2 which are part of the Emotion Engine CPU. perhaps also, Flipper's T&L unit / geometry engine is not as flexible as the Vertex Shaders in Xbox's NV2A GPU (not sure about that).

but absolutely, the Flipper has T&L / a geometry-engine / a geometry processor, thus making Flipper a full GPU.

Though because of the inflexibility and/or the fact that there is only 1 single T&L unit in Flipper, that is probably why the Gekko CPU is called upon to assist Flipper with certain lighting, animations, etc.
 
darkblu said:
though EMBM is possibly among the most versatile uv-transformation techniques available to date, and is certainly often used to produce perturbations in surface shading, it's not nearly as straightforward/intuitive as normal maps for pixel shading. the latter allow you to work with actual light vectors and plug them directly into the phong model, whereas EMBM relies on lightmaps to achieve similar looks
I can't see where the big difference is. One uses normal maps, the other height maps. Both need a second texture for the lightmap and both need the eye and light vector, plus halfway if you're doing specular.
EMBM can also be used for "wrinkling" a colour texture, something which can be very useful for refraction/reflection and more importantly making the texture underneath the light look bumpy too.
- an unnecessarily high level of implementation complexity for an effect which can also be satisfactorily approximated by simpler techniques (specularity masks, etc). or, put in other words, EMBM is a technique that is a tad too complex for the use ascribed to it in this thread, i.e. full-fledged per-pixel local lighting on everything.
Sure it's probably quite a bit more complex to implement in hardware and takes more cycles to execute (I have no idea how much with either) but EMBM is so much more versatile and flexible, that I think it's probably worth it in most cases.
i'd personally prefer pvr2dc's polar normal maps to EMBM for massive pixel lighting.
Do you have some sort of paper that describes how that works? Sounds interesting.
 
Squeak said:
I can't see where the big difference is. One uses normal maps, the other height maps. Both need a second texture for the lightmap

i'm afraid we lost each other somewhere. why would normal mapping need a lightmap, given it has the local surface normal and light vectors? or did you mean storing the specularity power per pixel in a specular-power map?

and both need the eye and light vector, plus halfway if you're doing specular.

you mean eye/halfway for normal mapping only? how would you use those with EMBM per pixel?

EMBM can also be used for "wrinkling" a colour texture, something which can be very useful for refraction/reflection and more importantly making the texture underneath the light look bumpy too.

actually this is exactly what i had in mind when i originally said EMBM can do per-pixel lighting - basically this technique. in essence it 'wrinkles' a lightmap/envmap. but whereas providing envmaps for specularities may seem justified nowadays (especially if dynamic), providing diffuse lightmaps is a bit circa '96.


Sure it's probably quite a bit more complex to implement in hardware and takes more cycles to execute (I have no idea how much with either) but EMBM is so much more versatile and flexible, that I think it's probably worth it in most cases.
Do you have some sort of paper that describes how that works? Sounds interesting.

i don't have anything at hand, but i do remember SimonF used to have a nifty page with bumpmapping techniques comparison. i'll try to look up the link to it in the forums' archives, but no promises.
 
23 pages and 552 posts and counting...

What have we learned?

NOTHING! :LOL:

Can someone please PM me when we get some real GameCube 2... errr... Wii specs.

* No, specs don't matter if the games look great. But this is B3D and I enjoy the technical discussion... and until we even KNOW what is inside the Wii it is hard to have any meaningful discussion in regards to the hardware. So far the discussion is based on rumors, rumors of rumors, and debating whether "such and such" footage, if realtime, could/could not be done on X hardware. Here's to hoping that Hollywood is very robust, that what we saw at E3 was stuff developed with "GCN+" hardware (i.e. a backwards-compatible pipeline), and that the final hardware is DX9 class with some good oomph. After all, my old Radeon 9700 ran HL2 at 1024x768 w/ 2xMSAA pretty well, so I am hoping we get something that can push widescreen 480p well and look sweet while doing it, especially the clean self-shadowing.

Seriously, 23 pages of a lot of fluff... but some good coverage / review of the GCN architecture.
 
fearsomepirate said:
Hupfinsgack, a list of hardware features isn't exhaustive if there are any programmable elements, which there are on Flipper. For example, "bloom lighting" isn't listed in any last-gen console specs, yet we saw it in PS2, Xbox, and Cube games. Self-shadowing isn't mentioned in the Flipper specs and was seen in at least two games.

Well, I thought he was asking for the hardwired features; after all, even a C64 can do bump mapping given enough time. Moreover, don't forget the TEV stages are texture combiners and not vertex shaders, which reduces the number of custom effects possible.
As for self-shadowing and light scattering, I seem to recall that Gekko has to be used for those features. I think that was mentioned in the F5 interview, if I am not mistaken. But I am not entirely sure.
As for bloom lighting, I have no clue how that is done on GCN.
 
later today i will consult the almighty GOOGLE to try to find something interesting, in hopes of putting some life back into this stalling thread, before it dies....

for now i will just leave you with part of an old press-release from 2005

Revolution's technological heart, a processing chip developed with IBM and code-named "Broadway," and a graphics chip set from ATI code-named "Hollywood," are being designed to deliver game experiences not possible to date.

"We're excited to be developing the graphics chip set for Revolution, which continues our longstanding relationship with Nintendo," says Dave Orton, ATI Technologies Inc.'s president and chief executive officer. "As the leading graphics provider, ATI is committed to delivering exceptional visual performance that enables consumers to interact with new and visually compelling digital worlds.


Wii await Wii's greatness ^__^
 
hupfinsgack said:
The GCN doesn't support Dot3 bump mapping in HW. So the only options left would be to do it via Gekko which is not viable.


I found this:
If EMBM, (DOT3 in cascading stages, displacement maps) per-pixel lighting, soft-shadowing, etc.

Wouldn't this mean it can also do normal mapping too?
 
hupfinsgack said:
As for self-shadowing and light scattering I seem to recall that Gekko has to be used for those features.

The Gamasutra article doesn't say anything about the latter, but about the former, it's all this voodoo about representing the object as a Z-texture and measuring distance to light sources with some kind of "ramp texture." In other words, it sure sounds like they're using the TEV. Why don't you actually read the article I linked before claiming that nearly every effect that isn't hardwired has to be done on the CPU. OTOH, if that many effects are CPU-driven, then that 485 MHz PPC must be ungodly more powerful than anyone imagined.

As for bloom lighting, I have no clue how that is done on GCN.

Probably as a programmable framebuffer effect, like 99% of everything else we see on Cube.
 
pc999 said:
Wouldn't this mean it can also do normal mapping too?

AFAIK, normal mapping is just jargon for a DOT3 bump-mapping where the bump map was derived from a high-polygon source. And it seems like "DOT3 in cascading stages" is a fancy way of saying you have to do multiple scalar operations instead of having a vector unit that can do it all in one cycle. After all, anything that can add and multiply can compute dot products. I would hope that the Wii's GPU has had its basic math capabilities upgraded to something more comparable to DX shaders.
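A toy sketch of what "DOT3 in cascading stages" might mean in practice (purely illustrative; this is not Flipper's actual TEV programming model): without a vector dot instruction, each combiner-like stage contributes one scalar multiply-accumulate, so the dot product takes one stage per component instead of a single vector operation.

```python
# "Cascading stages" dot product: one multiply-accumulate per stage.

def dot3_cascaded(normal, light):
    acc = 0.0
    for n_c, l_c in zip(normal, light):  # one "stage" per component
        acc += n_c * l_c                 # stage op: multiply-accumulate
    return acc
```

The result is the same number either way; the cost is in stages consumed rather than correctness.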
 
pc999 said:
BTW I finally got the chance to see this again (still very bad quality). Anyone think this can be real time (the driving parts)? I think it can/will possibly be.

http://video.google.com/videoplay?docid=-4835745581631650859&q=Disaster:+day+of+crisis
If the Disaster: Day of Crisis footage is actually in-game, then I'm definitely impressed. Admittedly, all we have are grainy movies shot from a projector screen with a handheld camcorder... The lighting in the movie, though, does look particularly realistic.
 
fearsomepirate said:
AFAIK, normal mapping is just jargon for a DOT3 bump-mapping where the bump map was derived from a high-polygon source. And it seems like "DOT3 in cascading stages" is a fancy way of saying you have to do multiple scalar operations instead of having a vector unit that can do it all in one cycle. After all, anything that can add and multiply can compute dot products. I would hope that the Wii's GPU has had its basic math capabilities upgraded to something more comparable to DX shaders.

Could the improved lighting in Wii games be just because of a speed improvement (i.e. not touching the architecture)?
 