Sheesh! Talk about causing a firestorm!
What is there to back up?
Faf, I have to disagree with you. How are you judging this? If you want to rule out the benchmarks, how are you capable of making a comparison?
Experience...
Regardless, benchmarks only show what you're willing to see. Even SPECcpu isn't necessarily a very good processor benchmark; it's really more of a compiler benchmark. That can become very obvious when compiler engineers slip optimizations into their compilers that recognize certain data patterns in a couple of algorithms, causing their processor to fly on a subset of the suite and artificially inflating their scores. SPEC can also be rather brutal on architectures without extensive OOE resources if they don't have clairvoyant C and Fortran compilers.
Anyhow, I think Faf explained my point more eloquently...
Well, a 5.1 Dolby Digital stream is only 384 kbit/s (or so says Gordian Knot).
Dolby Digital, Pro-Logic II, DTS Interactive, et al. are basically data transport mechanisms off of the system.
How can you honestly say that the PS2 could perform just as well in DOT3 bump mapping (using the developers' convoluted methods) as the Xbox? I'm sure you'll say, "well, it can't -- but it could use another method." Sorry, you have already admitted the Xbox has a performance feature that enables it to win out over the PS2.
He hasn't said any such thing. Since you bring up bump maps, you won't get any argument from me about the usefulness of the XBOX's and the GCN's DOT3 capabilities. It sure as hell makes global illumination models a lot more straightforward. But since you do bring up bump maps, why not explore displacement maps? I know my little meshify algorithms, which I had delved into to compress arbitrarily large object meshes, had the interesting side effect of being readily adaptable to displacement mapping...
Likewise, tangent-space normal mapping not only yields an easy, fast per-pixel lighting model; it also works well for morphing geometry, it's lean on texture space, and it leaves the tangent available for various lighting models.
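For the curious, the per-pixel part of that can be sketched out pretty simply. This is just an illustration in plain C (the vector types and the clamp are my own, not any particular console's API): a light vector gets rotated into tangent space by the per-vertex TBN basis, then DOT3'd against a normal fetched from the normal map.

```c
typedef struct { float x, y, z; } Vec3;

static float dot3(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Rotate a world-space light vector into tangent space using the
 * per-vertex tangent (T), bitangent (B), and normal (N) basis. */
static Vec3 to_tangent_space(Vec3 t, Vec3 b, Vec3 n, Vec3 light)
{
    Vec3 out = { dot3(t, light), dot3(b, light), dot3(n, light) };
    return out;
}

/* Per-pixel diffuse term: DOT3 of the tangent-space light vector
 * against the normal sampled from the normal map. */
static float dot3_diffuse(Vec3 map_normal, Vec3 ts_light)
{
    float d = dot3(map_normal, ts_light);
    return d > 0.0f ? d : 0.0f; /* clamp negatives, as a combiner would */
}
```

With an identity TBN basis and the light pointing straight down the normal, the diffuse term comes out to 1.0, which is the degenerate case you'd expect.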
In the PRESENT, the PS2 is the lead development system for the majority of cross-platform games (which make up roughly 75% of the titles on MS's and Sony's systems), thus any extra power in the Xbox simply goes to a giant memory card, possibly a better frame rate, and slightly sharper textures.
Even in the case of it being the lead platform, the premise that a project is going to be cross-platform doesn't bode well for a developer getting really creative on the lead platform if the work can't be reasonably ported to the other platforms. I mean, you're not going to go out and roll your own custom EE Lisp compiler for a project if it's also bound for the XBOX and GCN; and if you do, it's likely you wouldn't be using it for that project.
That's why I get rather irritated when people try to use EA's tools as a hardware benchmark, when (as Faf pointed out) it's really more a benchmark of how effective their asset compiler is at multiple (vastly different) targets (which, I might point out, is incredibly impressive).
Nope. Still curious to hear from Archie what he meant about the hundreds of megs/sec of audio!
Regarding audio, part of the response was in regard to sub-processors and sub-buses (e.g. S-bus and HT) not only being audio paths but also I/O paths. The other part was a bit of irritation at the relative disregard for the importance of audio and the effects of a complex audio model. (I guess I shouldn't expect too much; after all, this *is* B3D /* graphical, that is */.)
Yes, high-resolution, multi-channel audio can be a bandwidth hog, depending on what sort of processing you're performing on it (hence the heavy usage of fast SCSI arrays in production work). Of course, games (especially console games) tend to rely on lower-resolution samples, which at the very least tend to be moved around in an ADPCM format.
Of course, there's more to audio bandwidth than just the raw data. You still have to move it around, perform calculations on it, and store and buffer it; all of that consumes hardware bandwidth.
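As a rough illustration of that point, here's a back-of-the-envelope sketch in C. The voice count, pass count, and sample format below are all assumed figures for illustration, not anything measured on real hardware:

```c
/* Rough bus traffic for mixing many voices: every processing pass
 * (decode, filter, mix, buffer) re-reads and re-writes the sample
 * data, so real traffic is a multiple of the raw sample rate.
 * All figures fed into this are assumptions. */
static double bus_bytes_per_sec(double rate_hz, int bytes_per_sample,
                                int voices, int passes)
{
    double raw = rate_hz * bytes_per_sample * voices; /* raw bytes/s */
    return raw * passes * 2.0;  /* one read + one write per pass */
}
/* e.g. 64 voices of 16-bit 48 kHz audio over four passes:
 * raw data is about 6.1 MB/s, but bus traffic is about 49 MB/s */
```

The point of the sketch is just that the multiplier on the raw data rate grows with the complexity of the processing chain, which is exactly where a complex audio model starts to bite.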
How about considering a more complex audio model where you have dozens, or hundreds, of emitters (a la the State of Emergency crowds, although not necessarily running around in mad fashion), where you have a batch of complex sounds being processed that stochastically generate even more sounds (crowds, rain, etc.)? IIR vs. FIR filtering and the various compression/decompression routines, while not especially processing-intensive, still generate bus activity. How about a complex scene where you've got a lot of emitters in a 'busy' room (lots of obstructions) and you're throwing up a bunch of AABBs to calculate sound obstruction? In some cases you're even going to be invoking the CPU, regardless of how powerful the MCPX is. Of course, I don't specialize in audio programming either, though, so...
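The obstruction test I'm describing is basically a segment-vs-AABB slab test per emitter: cast a segment from emitter to listener, count the boxes it passes through, and attenuate. Here's a sketch in C; the half-gain-per-occluder falloff is an arbitrary assumption for illustration, not any engine's actual model.

```c
#include <math.h>

typedef struct { float x, y, z; } V3;
typedef struct { V3 min, max; } AABB;

/* Slab test: does the segment from a to b pass through the box? */
static int segment_hits_aabb(V3 a, V3 b, AABB box)
{
    float t0 = 0.0f, t1 = 1.0f;
    float as[3]   = { a.x, a.y, a.z },       bs[3]   = { b.x, b.y, b.z };
    float mins[3] = { box.min.x, box.min.y, box.min.z };
    float maxs[3] = { box.max.x, box.max.y, box.max.z };

    for (int i = 0; i < 3; ++i) {
        float d = bs[i] - as[i];
        if (fabsf(d) < 1e-8f) {            /* segment parallel to this slab */
            if (as[i] < mins[i] || as[i] > maxs[i]) return 0;
        } else {
            float lo = (mins[i] - as[i]) / d;
            float hi = (maxs[i] - as[i]) / d;
            if (lo > hi) { float tmp = lo; lo = hi; hi = tmp; }
            if (lo > t0) t0 = lo;
            if (hi < t1) t1 = hi;
            if (t0 > t1) return 0;         /* slabs don't overlap: miss */
        }
    }
    return 1;
}

/* Attenuate an emitter's gain per occluding box (factor is assumed). */
static float occluded_gain(float gain, int hits)
{
    while (hits-- > 0) gain *= 0.5f;
    return gain;
}
```

Even this cheap version is a handful of divides and compares per emitter-box pair, so with hundreds of emitters in a cluttered room it's easy to see the CPU getting dragged into it.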