And I'll say one last thing as well, with regards to graphics. Normal maps don't fix the lack of geometric complexity on heads, fingers and limbs. So in Doom 3 (also Chaos Theory) and Riddick (though Riddick himself looks fairly rounded) you end up with much blockier-looking models, even though they have extra surface detail. Halo: Combat Evolved looks downright primitive compared to a large number of games on all three of these consoles because of its low poly counts and stiff animation. I thought about it, and when I think of the most impressive Xbox games, none of them use normal mapping, and they run at high frame rates, or at least a stable 30fps: Panzer Dragoon Orta, Conker, Voodoo Vince, OutRun 2, etc.
HTupolev has already explained some of Halo's background, and it's important to note that Doom 3 was designed to run on hardware with much lower T&L ability than the dual-vertex-shader GF4s, so its poly counts would have suffered. GF3s had to be able to run the game with only one vertex shader.
I acknowledge the Xbox is capable of this type of lighting and "bump mapping" while the others are not; it's just that I don't think the console has any business using these techniques. The lighting may be more advanced, but it isn't exactly more pleasing to the eye, and it results in significant sacrifices elsewhere in the game.
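To be clear about what the technique actually does (and doesn't do), here's a minimal, purely illustrative Python sketch of Lambert diffuse lighting with a perturbed per-pixel normal. All the vectors are made-up example values, not anything from a real game:

```python
# Illustrative sketch: normal mapping changes the *shading* normal,
# not the geometry. Example vectors are invented for demonstration.

def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def lambert(normal, light_dir):
    # Diffuse intensity: max(0, N . L)
    n, l = normalize(normal), normalize(light_dir)
    return max(0.0, sum(a * b for a, b in zip(n, l)))

light = (0.3, 0.5, 1.0)

flat = lambert((0.0, 0.0, 1.0), light)     # the geometric surface normal
bumped = lambert((0.4, -0.2, 0.9), light)  # a normal fetched from a normal map

print(f"flat: {flat:.3f}, bumped: {bumped:.3f}")
```

The mesh itself never changes, only the shading normal does, which is why silhouettes, fingers and joints stay blocky no matter how much surface detail the map adds.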
The 'sacrifices' are much lower in Xbox-specific games. The outstanding Halo 2 used normal mapping very effectively; if I recall correctly, Panzer Dragoon Orta does, and the earth-shatteringly brilliant RalliSport Challenge 2 does too (it uses relatively complex pixel shaders for the time, so it wouldn't surprise me if the ice track detail was done with it).
I need to find my Xbox and play RalliSport Challenge 2 again. I wish I hadn't chucked all my CRTs.
By "GC didn't need normal mapping" I mean GC didn't need normal maps to compete with Xbox on a graphical level.
Well that's subjective, but my personal thought would be it depends on the game. You don't have to plaster normal maps over everything for them to be a worthwhile addition.
Nothing but more theoretical data and no in-game results. If I don't see results, I don't care, which is why I brought up how outsiders would view this conversation; they likely wouldn't care either.
Well, those hypothetical outsiders probably wouldn't be equipped for this conversation so we shouldn't care what they think.
The results aren't theoretical, they're very real and independently verifiable. They are for a synthetic test (that's what 3DMark is), but you do get to see the results on screen. They're also using DX8 on PC, over an ancient AGP bus by the looks of things. And as ERP stated, you can go faster on Xbox than on PC.
Interestingly - and this is the kind of thing the popular socialites here like to try and spot - you'll see that:
- GF4 Ti 4400 (275 MHz) hits 11.5 mpps w/ 8 lights vs the GF3 Ti 500's (240 MHz) 5.6 mpps: more than 2x faster
- GF4 Ti 4400 (275 MHz) hits 46.5 mpps w/ 1 light vs the GF3 Ti 500's (240 MHz) 26.1 mpps: rather less than 2x faster
Evidence of the kind of bottleneck ERP said was avoidable on Xbox, even on a mighty AMD Athlon XP 2000+? Just an interesting observation ...
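For anyone who wants to check the scaling claim above, the arithmetic is trivial (figures copied straight from the quoted 3DMark results):

```python
# Speedup of the GF4 Ti 4400 over the GF3 Ti 500 at 8 lights vs 1 light,
# using the mpps figures quoted above.
gf4 = {"8_lights": 11.5, "1_light": 46.5}  # mpps
gf3 = {"8_lights": 5.6,  "1_light": 26.1}  # mpps

for case in ("8_lights", "1_light"):
    speedup = gf4[case] / gf3[case]
    print(f"{case}: {speedup:.2f}x")

# 8 lights: ~2.05x (vertex-shader bound, so the second shader pays off)
# 1 light:  ~1.78x (something other than T&L starts to limit throughput)
```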
Anyway, here's the thing: triangle setup and T&L wouldn't have been a bottleneck in massively exceeding RS's poly counts, and they were absolutely fast enough to allow a game to hit a peak of 30 mpps. The hardware can do it. For real.
If you're struggling to transform 15 million polys per second, it's because you're using a lot of lights on a lot of the verts, or because you're bottlenecked on the CPU, or in the pixel shader, TMU, ROP or something. But that could happen on any platform.
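As a rough sanity check on what a peak like that buys you per frame (simple arithmetic, nothing beyond the quoted peak figure):

```python
# What a 30 mpps peak implies per frame at common target frame rates.
peak_pps = 30_000_000  # the quoted peak, polygons per second

for fps in (30, 60):
    per_frame = peak_pps // fps
    print(f"{fps} fps -> {per_frame:,} polygons per frame")

# 30 fps -> 1,000,000 polygons per frame; 60 fps -> 500,000.
```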
If someone says the Xbox was more capable simply because it could use rendering techniques the GC could not, that's true. But in terms of all-around detail and actual results (games), both consoles are pretty similar, though I still say the GameCube edges it out (except with games like Burnout). It certainly has fewer bottlenecks and was the more thought-out piece of hardware; in fact, it's probably the most well-rounded console, period. Though admittedly that's me looking at the 5th gen onward.
Well you're perfectly entitled to think GC games look better. But the Xbox could throw more maths behind every pixel and vertex.
GC is impressive, but you're looking in the wrong place to see what's so impressive about it, IMO.
It's good to learn that the GameCube can't do some of the things I thought it could, and the insight into how the PS2 renders games is all good stuff. But without evidence and results, my outlook's not going to change.
You're entitled to your outlook, but the rest of us prefer things that are a little more concrete.
Emotional attachments are hard to shed. I've been there, every bit as much as you I suspect!
After your response (which I will read and consider) I'd like to talk about the CPUs next. I admit my idea of the Gekko being superior was *sorta* based on that Factor 5 clock-frequency comment, so it's probably not as good as I thought it was. But I also know it has more registers and cache, for starters. Anyway, I'd like to learn anything I can about these CPUs.
I don't know a lot, but a good place to start might be to compare some die sizes:
https://forum.beyond3d.com/threads/console-die-sizes.53343/
... and then search for some benchmarks for the PowerPC 750CXe (the closest non-console chip, IIRC), scaling the results for clock frequency. It's been a while since I looked at any benchmarks, but what I remember being impressive was its performance relative to its die size.
The GC and Wii have both been hacked, so you may be able to find some benches direct from those systems. There used to be some, so I expect they're still out in the wild somewhere ...
"The actual data you've been provided is 1) XB could do more polygons than GC."
That is not a fact; there is no evidence. I'm getting theoretical performance numbers and "developer sentiments". Besides, don't you think there is a developer out there who would say the GC could do more polygons than the Xbox? And even so, it still wouldn't end the conversation in my favour. This is why we have to look at the actual games and what they achieved.
There's evidence that the hardware could, in reality, transform and present vastly more geometry to the screen than RS used. There's also proof that the GF4 could "do" more polygons than the ATI Radeon 8500, ATI's flagship. It's not unreasonable to think that newer, significantly larger and much more highly clocked GPUs would have greater capability than older, smaller, and much lower clocked chips.
And there's also a statement from a highly experienced multiplatform dev, speaking in hindsight after the generation, about the hardware. And while you call that just a "sentiment", it would have been based on profiling actual, optimised software. It's not a guess, it's the result of real code that's really been tested, for realz.
And no, I don't think there's a developer who would say the GC could push more polygons for real. Not a programmer who optimised the rendering pipeline, anyway ... maybe a PR mouthpiece, or some gobshite with a blog who knows a guy that was a tester, or something spurious like that.
But as always, I invite you to try and find them! Try! Find a developer! Or find something showing GC can hit 50 mpps! Or find some developer docs on the T&L unit!
I've actually spent a while here looking for statements from developers, looking for old hardware benches, and looking at contemporary T&L units (nothing matches the GF4, even when clocked higher). Meanwhile, you dismiss everything and simply state that your entirely unsupported opinion is a truth that must be disproved through an increasingly narrow and moving set of goalposts.
How about you do some work now?