The PS2 can be a competent multitexturer given its raw speed, when resources are allocated for it; DOT3-style bump mapping could even serve as a showcase effect in some game's level. Multitexturing, on the other hand, was one of the founding principles of PowerVR2DC's design.
The Dreamcast was designed to sustain the majority of its hardwired blending, filtering, and anti-aliasing effects simultaneously. If it truly could only sustain a minority of them, as many of the system's games seemed to indicate, the PowerVR2DC design would be exceedingly non-optimal, wasting die area, logic, and transistor budget that could otherwise have gone toward more pipelines and fillrate. ImgTech's engineers weren't loading the chip with impractical features to spice up the spec sheet for a hype-driven consumer marketplace; they were delivering to SEGA for evaluation, whose engineers would gauge its true performance through testing.
I don’t think the DC comparisons being made in this thread were in direct relation to the PS2. They were assertions about what the DC was capable of, relative to its own performance.
cthellis42:
I'm just skeptical of anyone who things there was magic coming... Slow advancements and some new tricks--same as always.
It's understandable to expect only gradual improvement from the Dreamcast, as is usual with established systems. However, much of its feature set was too far ahead of its time, and developers would only have been able to deliver the graphical look the hardware was designed for once they became familiar with utilizing those hardware-accelerated features. The conventions of the typical DC look, which too often didn't push much past bilinear filtering and mip-mapping as mentioned, would be defied: more natural filtering would help hide the characteristic mip lines, and potentially even hardware anti-aliasing support would help smooth raw images. Through bump mapping, or any sort of possible texture combining, object detail could have gained a new dimension of shading with bumps, highlights, or deterioration effects, showing off a dynamic personality independent of the base texture. These new properties would define a generation of software distinct from the previous one.
I meanly figure if the DC had more "free" capabilities, even if they were really HARD to program for, ONE of the developers would have been pushing insanely for it, just to show off to the others. And that, of course, would leave a title of note sitting around for us to look at and take into effect while continuing to ponder lamentables...
Surprisingly, the system did provide some examples from its notoriously neglected feature set. Le Mans used anisotropic filtering; note how the grass textures alongside the track retain consistent integrity at all distances from the camera, with none of the characteristically jarring blur levels in view. Self-shadowing and better shading became more prevalent all the time as general modifier volumes were put to use, in titles like World Series Baseball 2K1 and 2K2, Crazy Taxi 1 and 2, and Sonic Adventure 2. Multitexturing was shown at both 30 and 60 fps, with racers like the aforementioned Le Mans and Tokyo Xtreme Racer 2 demonstrating it as expected.
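To illustrate why Le Mans' trackside grass holds up at a distance, here is a rough sketch of how anisotropic filtering differs from plain trilinear mip selection. This is a generic textbook-style model, not PowerVR2's actual (undocumented) internals; the `max_ratio` clamp stands in for whatever sample limit the hardware imposes:

```python
import math

def mip_lod(du, dv):
    """Isotropic (trilinear-style) LOD: driven by the LONGEST screen-space
    texture gradient, so steeply angled ground planes blur early."""
    return max(0.0, math.log2(max(du, dv)))

def aniso_lod(du, dv, max_ratio=8.0):
    """Anisotropic LOD sketch: pick the mip from the shorter gradient and
    take extra samples along the long axis, keeping oblique textures sharp."""
    longer, shorter = max(du, dv), min(du, dv)
    ratio = min(longer / shorter, max_ratio)  # clamp to the hardware limit
    samples = math.ceil(ratio)
    return max(0.0, math.log2(longer / ratio)), samples

# A ground plane at a grazing angle: 1 texel/pixel across the track,
# 16 texels/pixel along the view direction.
print(mip_lod(16.0, 1.0))    # trilinear picks a blurry mip level 4
print(aniso_lod(16.0, 1.0))  # aniso stays near mip 1, using 8 samples
```

The takeaway is that the isotropic filter must blur everything to the worst-case axis, which is exactly the "jarring blur levels" the anisotropic hardware avoids.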
Dot product bump mapping was even used in some Dreamcast games like the Tomb Raider titles (I think the effect saw use mainly in Windows CE games... perhaps the Kamui libraries didn't facilitate it as well with regards to memory management?) As lazy as those versions were otherwise, Eidos was proud to brag in press releases that got picked up by media sites:
"Features not possible before on console versions will be available through your TV, including bump mapping, environment mapping and volumetric fogging."
http://www.computerandvideogames.com/r/?page=http://www.computerandvideogames.com/news/news_story.php(que)id=10973
"State of the art graphics: single-skin technology, bump-mapping, environment-mapping."
http://www.sega.com/pc/catalog/SegaProduct.jhtml?PRODID=209
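For anyone unfamiliar with what "bump mapping" in those press releases actually computes, here is a minimal sketch of the dot-product idea: a texture stores per-texel normals, and lighting intensity comes from the dot product of that normal with the light direction. This is the generic DOT3 formulation; the Dreamcast hardware's own bump-map encoding differed in its details:

```python
def dot3_bump(normal_texel, light_dir):
    """Per-texel dot-product lighting: the bump map stores a surface
    normal encoded as an RGB texel in [0,255]; intensity = max(N . L, 0)."""
    # Decode RGB [0,255] -> normal components in [-1,1]
    n = [c / 127.5 - 1.0 for c in normal_texel]
    # Lambertian term, clamped so texels facing away from the light go dark
    d = sum(nc * lc for nc, lc in zip(n, light_dir))
    return max(d, 0.0)

# A flat texel (normal straight out, encoded as 128,128,255) lit head-on
# is fully bright; a texel whose normal tilts away dims accordingly.
flat = dot3_bump((128, 128, 255), (0.0, 0.0, 1.0))
```

Because the lighting reacts per texel rather than per vertex, the bumps shift with the light source independently of the base texture, which is exactly the "new dimension in shading" argued for above.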
But as most point out, many times providing new "stuff" will come at a cost elsewhere, and so the net effect would be...? Slow, gradual steps, I imagine. Same as always.
The trend in later games actually showed the opposite. Engines like that of Le Mans were pushing ever higher polygon counts, more complex lighting, and even richer audio once developers finally became familiar with and put to use the advanced multitexturing, anisotropic filtering, and similar features. Potential is locked into hardwired features, and it's clear from the trend that developer unfamiliarity, not design compromise, was the limiting factor for why that potential went largely untapped in the DC's young library.
zidane1strife:
Well the one thing that really did it for me, with regards to DC gphx... was the draw distance, most games I played featured draw distance similar to that of previous gen systems...
It was as if we had beefed up the gphx of old gen. games...
Unfortunately, a lot of DC games actually were little more than accelerated PlayStation/N64 titles. For an enlightening look at the kind of resource allocation such titles got, read the Postmortem on the popular Dreamcast version of Tony Hawk's Pro Skater at Gamasutra:
"We figured that since for this project we had three programmers (Sean hadn't come on board yet), a Playstation dev system, and a lot of DC familiarity, we have it running in just one month."
"Early on we decided we weren't going to use the Dreamcast's wonderful floating-point math because we didn't want to introduce errors into the code before we even had it up and running."
"We estimated how much we'd have to do to get past limitations in the code that prevented the Dreamcast from hitting its full stride and it ran out into several weeks. Since we weren't doing a fighting game, all that 60FPS would have gotten us would have been a nice marketing bullet point."
"With new power comes new responsibilities, such as fully taking advantage of all those polygons by implementing extra joints in the characters (at the shoulders and necks for example), weighted vertices, and cloth and hair systems. Our other Dreamcast games have these features but we were locked into the animations from the Playstation version of Tony Hawk so we couldn't add shoulders nor could we move pivots for more natural bends. And the skaters in Tony Hawk go into extreme poses while performing stunts. For instance, when they crouch really low on their skateboards their knees get pointy, and when they put their arms over their heads their shoulders dip and they look like balloon animals. We did the best we could by hacking in weighted vertices on the knees but the hack made the shoulders look even worse so we left them the way they were."
"We had similar problems with tools -- we never did fully understand Neversoft's Max plug-in and were limited in what we could do to improve the levels because of it -- as well as with some of the source code."
http://www.gamasutra.com/features/20000628a/fristrom_01.htm (might need to register to view)
wazoo:
Yes, I was refering to this thread. Anyway, if you put so much faith into that, then you have to trust their claim about 20M pol/sec for GPC on the ps2, which put the ps2 well above the DC
In polygon rate, sure. Anisotropic filtering wasn’t used in their PS2 follow-up, and progressive scan was lost, I believe. There were also some other complaints regarding its image quality and the way it displayed.
YPO:
Strangely their spec for the PS2 version stated 2 million pps.
That wouldn’t be unprecedented. PS2 ports of DC games commonly result in downgrades. The DC has local access (no streaming needed) to strong texturing capabilities and more room to spare for the framebuffer in its display RAM, so it will naturally be more optimal for certain workloads. The PS2 versions typically sacrifice vertical resolution, dropping to 448 lines, and lose the progressive scan capability.
I think you've left out the keyword "engine" here Deadmeat. Just like the GT3 engine was capable of 10 million pps when the actual game didn't push that number.
Melbourne House’s games used multitexturing that resulted in geometry being resent, so it makes sense that the engine would push more polygons than could be counted in on-screen geometry.
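The arithmetic behind that engine-versus-game gap is simple: when multitexturing works by resubmitting the mesh per texture layer, every extra pass multiplies the polygon throughput the engine must sustain. The numbers below are purely hypothetical, just to show the shape of the calculation:

```python
def engine_poly_rate(visible_polys_per_frame, texture_passes, fps):
    """Polygons/sec an engine must push when each texture layer
    resends the geometry: visible count x passes x framerate."""
    return visible_polys_per_frame * texture_passes * fps

# Hypothetical figures: 55,000 visible polys/frame, 2 texture passes, 60 fps.
# The engine is really pushing 6.6M polys/sec even though only
# 3.3M/sec worth of distinct geometry ever appears on screen.
rate = engine_poly_rate(55_000, 2, 60)
```

So an engine spec quoting a high pps figure can be honest even when the visible scene complexity looks far lower, exactly as with the GT3 engine comparison above.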
marconelly!:
I'd say you haven't took a look at DC games in a while, because right now, there are games in every genre on PS2 that outrun their respective competitors on DC in every single area imaginable. Maybe DC had better IQ and textures on average, but if you compare top tier games (which only makes sense IMO) I think there's no argument, really.
Even among the best-programmed PS2 games, full-frame rendering and proscan are compromised at times. They can't be standard inclusions with engines like that of Baldur's Gate: Dark Alliance, surely a top example of PS2 development. Developer Lost Boys frankly stated that RAM limitations forced them to choose between their textures and those IQ standards in the much-hyped Killzone. There were no such contentions over display memory on DC: great textures never came at the expense of framebuffer IQ concerns, and progressive scan was standard for the whole library (and actually supported in the games, along with native VGA). And even the best PS2 games like GT3 and SH3 are soundly behind the DC at getting rid of annoying aliasing through texture filtering and mip-mapping usage.
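The texture-versus-IQ trade-off Lost Boys described falls straight out of the VRAM budget. Here is rough back-of-the-envelope arithmetic for the PS2's 4 MB of GS embedded memory, ignoring real-world tricks like 16-bit buffers, field rendering, or partial Z precision, so treat it as a sketch of the pressure rather than any actual game's layout:

```python
def framebuffer_bytes(width, height, bytes_per_pixel, buffers):
    """VRAM consumed by the display buffers alone (color/Z, all copies)."""
    return width * height * bytes_per_pixel * buffers

VRAM = 4 * 1024 * 1024  # PS2 GS embedded memory: 4 MB

# Double-buffered 32-bit color plus a 32-bit Z buffer (3 buffers total)
full  = framebuffer_bytes(640, 480, 4, 3)  # full 480-line frame
short = framebuffer_bytes(640, 448, 4, 3)  # 640x448, the common PS2 choice

# What's left over in each case is the on-chip texture budget.
print(VRAM - full, VRAM - short)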
Fafalada:
It's a reasonable argument, but begs the question why DC which happens to be even more memory limited didn't resort to it more often.
I wonder if DC developers simply found it easier to hit their desired texturing budget than PS2 devs, who have to worry about efficiently maintaining as large a stream of imported textures as possible and making sure the right ones are resident for rendering.