Was the Dreamcast better at bump mapping than the PS2?

I think I remember seeing bump mapping on the DC launch Godzilla game. I think I also remember reading something about the DC having special hardware for bump mapping. Sorry if I'm vague, but it's been a REALLY long time since the DC launch.

I know the PS2 is capable of bump mapping, but besides Hitman, what other games used it? Which system is better at it?

Oh yeah, is it true that the DC also had hardware support for normal mapping? I know the PS2 could normal map, but it didn't have special hardware.
 
Well, IIRC, bump mapping was an actual rendering feature of the PowerVR2DC / CLX2 graphics chip, whereas it is not a feature of the Graphics Synthesizer.

Someone (such as Simon F) can correct me if I'm mistaken.
 
Yes, but I don't remember seeing any in any DC game. However, I do remember seeing normal mapping (a very, very poor one) in Hitman: Blood Money on PS2. But I think normal mapping doesn't really require hardware support, is that true?
 
Not particularly practical on either: DC primarily because of vertex limitations, PS2 because of pixel ones (which need multipass emulation).

On PS2 you do have an alternative - ie. using deferred shading, where you get full floating point and a lot more math instructions than just DOT3 to work with. But that comes with its own issues that would limit your scene complexity to that of 1st gen PS2 games, if that.
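For reference, the DOT3 term in question is just a per-texel dot product between the stored normal and the light direction. A rough CPU-side sketch in C, purely for illustration (names and scaling are mine, nothing PS2-specific):

/* DOT3 diffuse term evaluated on the CPU, purely for illustration.
   Normals are assumed stored biased into 0..255 per channel, as in a
   typical 8-bit-per-component normal map. */
static float unbias(unsigned char c)            /* map 0..255 -> -1..1 */
{
    return (c / 127.5f) - 1.0f;
}

static float dot3_diffuse(const unsigned char n[3], const float light[3])
{
    float d = unbias(n[0]) * light[0]
            + unbias(n[1]) * light[1]
            + unbias(n[2]) * light[2];
    return d > 0.0f ? d : 0.0f;                 /* clamp negative lighting */
}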
 
I think I also remember reading something about the DC having special hardware for bump mapping.
Dreamcast had a normal map texture format and could do dot products with an incoming light direction to modify the shading of surfaces (you could even change the opacity if you wanted to).

The normal map vectors, however, were not stored in Cartesian coordinates but in a polar-ish form. At the time I was worried that, if we used Cartesian coordinates, the cost of renormalisation of the vectors in the texture (e.g. due to bilinear filtering) and of the per-vertex light vectors would be too high. I shouldn't have worried since 1) re-normalisation can be done with relatively little hardware and 2) when other hardware came along that did normal mapping with Cartesian vectors I don't think it bothered to renormalise either.
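Roughly, the decode-and-dot looks something like the C below. I'm not reproducing the exact CLX2 bit layout here, so treat the 8-bit angle ranges as illustrative only:

/* Polar-ish normal decode and dot with a light vector. The angle encodings
   used here (8-bit azimuth over 0..2*pi, 8-bit elevation over 0..pi/2) are
   an assumption for illustration, not the actual CLX2 format. */
#include <math.h>

#define PI_F 3.14159265f

static void polar_to_cartesian(unsigned char az8, unsigned char el8, float n[3])
{
    float az = az8 * (2.0f * PI_F / 256.0f);
    float el = el8 * (0.5f * PI_F / 256.0f);
    n[0] = cosf(el) * cosf(az);
    n[1] = cosf(el) * sinf(az);
    n[2] = sinf(el);            /* unit length by construction, no renormalise */
}

static float bump_intensity(unsigned char az8, unsigned char el8, const float light[3])
{
    float n[3];
    polar_to_cartesian(az8, el8, n);
    float d = n[0]*light[0] + n[1]*light[1] + n[2]*light[2];
    return d > 0.0f ? d : 0.0f;
}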
 
On PS2 you do have an alternative - ie. using deferred shading, where you get full floating point and a lot more math instructions than just DOT3 to work with. But that comes with its own issues that would limit your scene complexity to that of 1st gen PS2 games, if that.
Like Tekken Tag Tournament? I'm still curious how the stone floors were done. The ground in TTT had far more detail and depth than even Tekken 6's floors.
 
Bump mapping, I think, never existed on PS2 or DC... normal mapping, on the other hand, was only in two games on PS2: the last Hitman (thinking it over, I noticed what looked like bump mapping on some walls too) and The Matrix: Path of Neo, done in software obviously. Very impressive results for a simple PS2 though, IMHO.
 
Iron Tiger said:
Like Tekken Tag Tournament?
TTT was using dense geometry with specular highlights for the floor. As for deferred shading, I don't think there was any commercially released project using it.
 
This reminded me of something:
http://www.gamespot.com/xbox/action/splintercell3/news.html?sid=6116458&msg_sort=1&page=4
More importantly, a newly created technique, dubbed "geo texturing," has allowed Chaos Theory to mimic the normal mapping on the Xbox using a real-time 3D mesh that's textured on the fly. The end result is the same look as the Xbox's normal maps, done in real 3D, which doesn't tax the PS2 as much as traditional techniques would
http://www.eurogamer.net/article.php?article_id=57628
The PS2 version, for example, bereft of the Normal Mapping that makes the texturing on the PC and Xbox versions look incredibly convincing, is to be buffed up with a new technique Ubi calls Geotexturing, giving the same extra layer of depth to the textures on the PS2 but without crippling the system.
 
Not particularly practical on either: DC primarily because of vertex limitations, PS2 because of pixel ones (which need multipass emulation).

On PS2 you do have an alternative - ie. using deferred shading, where you get full floating point and a lot more math instructions than just DOT3 to work with. But that comes with its own issues that would limit your scene complexity to that of 1st gen PS2 games, if that.
I really don't see where the big obstacle is.
I mean, in the simplest case it's just lighting the normal-mapped model with a three-coloured vertex light, and then doing a palette swap to get it back to black and white?
What's the big holdback? It's only three passes, one of them sans geometry or UV setup.
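To spell out the arithmetic I mean: the light vector sits in the vertex colour, the normal in the texel, the modulate stage multiplies them per channel, and the palette/CLUT pass is what collapses the three channels into one intensity. Purely illustrative C below, unsigned 0..255 throughout; a real setup would need a bias/sign convention for the normals, and this is not an actual GS register configuration:

/* Per-channel modulate of normal by light, then collapse to one intensity. */
static unsigned char modulate(unsigned char n, unsigned char l)
{
    return (unsigned char)((n * l) / 255);      /* one channel of N_i * L_i */
}

static unsigned char collapse(unsigned char r, unsigned char g, unsigned char b)
{
    int sum = r + g + b;                        /* the DOT3 sum the palette swap fakes */
    return (unsigned char)(sum > 255 ? 255 : sum);
}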

It may seem nugatory to discuss this now, but it still has some relevance with Wii, which is also capable of palette textures (although you have to do a swap through main memory there to use the screen as a texture).
 
Squeak said:
What's the big holdback, it's only three passes, One of them sans geometry or UV setup?
The full-screen passes aren't free either, and it imposes certain restrictions on render-order because you have to time-share VRam buffers. None of it is unsolvable, it's just a hassle that has questionable benefits (especially on SDTV).
99% of people online only 'notice' "bump/normal/anything" mapping if it has high-resolution specular highlights up the wazoo, and that will run you more passes.
Personally if I was thinking of getting pixel-shenanigans on PS2, I'd go deferred and scale back game environments to work with it. At least in that case, all the hassle would yield some palpable results.

Which is the main reason it's pointless on Wii - you can get pixel-specular highlights with hw-dependent lookup. If you can get the most important component of the end-result for a tiny fraction of cost&implementation hassle, why bother with anything else?
That said, it's been ages since I thought about it but IIRC on GC/Wii you can do it without fullscreen passes, just using more texture stages(6 or thereabouts).
 
The full-screen passes aren't free either, and it imposes certain restrictions on render-order because you have to time-share VRam buffers. None of it is unsolvable, it's just a hassle that has questionable benefits (especially on SDTV).
If any console was ever made for full screen passes it was the PS2. Sure there would be a hit, but I have a feeling it had more to do with not having time to experiment and the "good enough" mindset of many developers.
99% of people online only 'notice' "bump/normal/anything" mapping if it has high-resolution specular highlights up the wazoo, and that will run you more passes.
You could do specular, even with vertex lighting. I think it was Galaxy or some other Wii game that did specular by subdividing the polygon where the specular highlight was, something the PS2 would be just as well suited for, if not better, given its programmability.
Personally if I was thinking of getting pixel-shenanigans on PS2, I'd go deferred and scale back game environments to work with it. At least in that case, all the hassle would yield some palpable results.
Yeah, I read about that, years ago. It seems like quite an insane "because we can" show-off approach, when you have the GS with its enormous per-pixel fillrate (still quite big even today) sitting right there.

Which is the main reason it's pointless on Wii - you can get pixel-specular highlights with hw-dependent lookup. If you can get the most important component of the end-result for a tiny fraction of cost&implementation hassle, why bother with anything else?
You mean the EMBM of the TEV? Then yeah, sure. But some developers seem to imply that it can't be used for more general bump-mapping purposes, which seems weird to me, since the general idea is the same for normal mapping and EMBM.
 
You could do specular, even with vertex lighting. I think it was Galaxy or some other Wii game that did specular by subdividing the polygon where the specular highlight was, something the PS2 would be just as well suited for, if not better, given its programmability.

You don't even need to subdivide the polygons, if your polygon detail is high enough. We had vertex specular lighting in all Warhammer 40K Squad Command units and shiny objects (metal barrels, etc), and it looked perfectly fine. It's true that PSP isn't exactly PS2, but the graphics chip is very similar (fixed function pixel processing with no dot3 or embm support).
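For reference, a basic per-vertex specular term (Blinn-Phong half-vector form) looks like the C below; a minimal sketch, all names mine, nothing PSP- or PS2-specific:

/* Per-vertex Blinn-Phong specular, interpolated across the triangle by the
   rasteriser afterwards. Input vectors are assumed normalised. */
#include <math.h>

static float vertex_specular(const float n[3], const float l[3],
                             const float v[3], float shininess)
{
    /* half vector between light and view directions */
    float h[3] = { l[0] + v[0], l[1] + v[1], l[2] + v[2] };
    float len = sqrtf(h[0]*h[0] + h[1]*h[1] + h[2]*h[2]);
    if (len <= 0.0f)
        return 0.0f;
    h[0] /= len; h[1] /= len; h[2] /= len;

    float ndoth = n[0]*h[0] + n[1]*h[1] + n[2]*h[2];
    return ndoth > 0.0f ? powf(ndoth, shininess) : 0.0f;
}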

In the old days it was common to subdivide the large polygons of shiny objects to get better vertex specular lighting quality. Most developers had artists do this by hand, but I have also heard that some (DX6 and DX7 era) games subdivided the polygons on the fly based on the approximate specular highlight position. I read a developer "blog" about it years ago, but cannot remember which developer it was.
 
But if your polygon density is, say, double, then you might as well just have rendered the same model again with a specular texture.
With subdivision you are saving memory, bandwidth, render time and a texturing stage. If your hardware is already highly programmable, the hit needn't be big at all.
Of course there are limitations. If the polygons become too small you get very bad utilisation of the render footprint quad, and if the highlight is very small you need small polys. But these are special cases.
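At its simplest, one subdivision step of the kind being discussed is just midpoint-splitting a triangle into four; a rough C sketch, purely illustrative and not from any shipped game:

/* One level of midpoint subdivision: split triangle (a, b, c) into four
   smaller triangles written to out[4][3]. Deciding *where* to apply this
   (near the projected highlight) is the interesting part and is left out. */
typedef struct { float x, y, z; } vec3;

static vec3 midpoint(vec3 a, vec3 b)
{
    vec3 m = { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f, (a.z + b.z) * 0.5f };
    return m;
}

static void subdivide_triangle(vec3 a, vec3 b, vec3 c, vec3 out[4][3])
{
    vec3 ab = midpoint(a, b), bc = midpoint(b, c), ca = midpoint(c, a);
    vec3 tris[4][3] = {
        { a,  ab, ca },
        { ab, b,  bc },
        { ca, bc, c  },
        { ab, bc, ca },
    };
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 3; ++j)
            out[i][j] = tris[i][j];
}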
 
Squeak said:
If any console was ever made for full screen passes it was the PS2. Sure there would be a hit, but I have a feeling it had more to do with not having time to experiment and the "good enough" mindset of many developers.
There were PS2 games that go as high as 40 operations per pixel (some of it is overdraw, but still); it's hardly an issue of people not wanting to do things with pixel math on PS2. It's probably the main thing where PS2 exclusives shined over other, more capable platforms, really.
Diffuse normal mapping is just not that high on the list of visually interesting effects, so if it comes to an 'either-or' situation, most people will choose something else.

You could do specular, even with vertex lighting. I think it was Galaxy or some other Wii game that did specular by subdividing the polygon where the specular highlight was, something the PS2 would be just as well suited for, if not better, given its programmability.
I meant pixel-granular specular - what the masses generally call "bump-mapping" online, because the diffuse term lighting is too subtle for them.
Smooth specular highlights are easier to do with reflection maps, and have been standard fare in PS2 games really. I played with the subdivision idea briefly way back in the early days, but I didn't like the results I was getting.

Yeah, I read about that, years ago. It seems like quite an insane "because we can" show-off approach, when you have the GS with its enormous per-pixel fillrate (still quite big even today) sitting right there.
The fillrate and polygon setup still get put to good use laying down attribute buffers. The 'insane' part about it is only the fact that getting attribute layers out of VRam requires reversing the GIF bus, which was a clunky operation. If the PS2 memory subsystem were architected like the PSP's, the machine would be a very capable deferred shader.

which seems weird to me, since the general idea is the same with normal mapping and EMBM.
Two things: half of the EMBM equation is constant in Flipper, which removes, well, most of the useful functionality. And even if you could use variable inputs, there's no vertex hardware to accelerate their setup, forcing you to revert to clunky CPU-assisted methods.
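Written out, the EMBM step is just a 2x2 matrix applied to the bump texel before it offsets the environment-map coordinates; the "constant half" is that matrix. Illustrative C only, not a TEV setup:

/* EMBM coordinate perturbation: the bump texel (du, dv) is pushed through a
   2x2 matrix and added to the base environment-map coordinates. */
static void embm_perturb(float u, float v,        /* base env-map coords   */
                         float du, float dv,      /* bump texel offsets    */
                         const float m[4],        /* 2x2 matrix, row-major */
                         float *out_u, float *out_v)
{
    *out_u = u + du * m[0] + dv * m[1];
    *out_v = v + du * m[2] + dv * m[3];
}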
 