Arwin said:
I think there are some more issues either way.
The question is, can we do better? It could be rather cool to try, as a collective B3D effort, and then maybe offer it as an article to the B3D website.
The problem is that any discussion needs to be couched in the context of software, which would be games. Who cares if Xenos can do vertex texturing, coherent memory reads, hardware tessellation, and so forth if no one uses it (for whatever reason)? Ultimately hardware is going to be judged by the software, which has a ton of factors influencing it (notably tools and developer skill and resources) -- the hardware is just a portion of the pie in this regard.
Even if we could eliminate those factors (but why would we? They are equally important to the question of reality: what can we realistically get out of the machine?), we still have the problem that game design is a moving target, and thus any armchair analysis is difficult because we would be filtering it through preconceived ideas of how a machine should be used -- on top of our perception of how the hardware functions. e.g. What is the best approach: the MGS4 engine, the UE3 engine, or the Halo 3 engine?
These engines will each hit different bottlenecks that limit the game design, yet a similar game may avoid those bottlenecks and be able to push twice as far by resolving them.
This sounds like relativism, and that is because it is! UE3 is a perfect example. Two games sum it up well: Frame City Killer. Gears of War. Same engine, same target platform, totally different results. If we were able to clone 10-20 dev teams and have them start the same game at the same time (thus with similar degrees of toolset maturity and knowledge of the platform) we could get somewhere significant. But even then it would not be totally fair, because certain game designs lend themselves better to different platforms. And in the real world, trying to compare best-to-best, or ports, or the average quality further complicates things.
Which brings us to another important factor of a design: porting and dev familiarity. It seems to me this is where MS's and Sony's visions differ. MS is obviously trying to align their platform with the PC, and to a degree with their new API. Sony may have ticked off some PC devs, but the PS2 devs seem pretty happy with many of the improvements over the PS2.
So who is right? Both.
Even if we could distill all the technical merits of each platform into some unbiased X + Y + Z = Overall Better formula, we must then turn to the tools and skill of the developers and how relevant such technical benchmarks are with regard to art. e.g. Machine A does 3x as many polygons, but Machine B can do soft shadows and self-shadowing on dynamic objects.
What is more important: the higher poly count or the better shadowing? We could argue all day long about which matters more, but honestly neither matters outside of how a developer uses them (see the quick sketch below).
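To make that concrete, here is a minimal sketch in Python -- with entirely made-up scores and weights, none of them taken from real hardware -- showing why any "X + Y + Z" formula just encodes the weighting you chose, i.e. the kind of game you already had in mind:

```python
# Hypothetical category scores for two machines (illustrative only).
scores_a = {"polygons": 3.0, "shadowing": 1.0}  # "Machine A": 3x the polys
scores_b = {"polygons": 1.0, "shadowing": 3.0}  # "Machine B": better shadows

def overall(scores, weights):
    """Naive 'X + Y + Z = Overall Better' aggregate score."""
    return sum(scores[k] * weights[k] for k in scores)

poly_heavy = {"polygons": 0.8, "shadowing": 0.2}
shadow_heavy = {"polygons": 0.2, "shadowing": 0.8}

# The "winner" flips depending on the weights, i.e. on the game design:
print(overall(scores_a, poly_heavy), overall(scores_b, poly_heavy))      # 2.6 vs 1.4 -- A wins
print(overall(scores_a, shadow_heavy), overall(scores_b, shadow_heavy))  # 1.4 vs 2.6 -- B wins
```

Same hardware, same numbers, opposite conclusion -- the only thing that changed was the weighting, which is exactly the part a developer's design decides.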
If there were a bigger gap between the platforms we could say more. They took very different approaches, but in the end they are on the same manufacturing process. MS rushed to get the X360 out -- with some shortages -- and Sony delayed their launch. So the gap in their release dates seems bigger than the gap between them in many other ways.
I think this is probably the best we can do: ERP and some others a while back did a "X does Y better than Z" rundown, e.g.:
Xenos has an edge over RSX in pixel fillrate
RSX has an edge over Xenos in texel fillrate
Xenos has an edge over RSX in dynamic branching in pixel shaders
RSX has an edge over Xenos in theoretical peak flops
Xenos has an edge over RSX in shader utilization
etc...
This lays out the differences clearly, but it also leaves it up to the individual developer how important those differences are to their design. And as Fran, Faf, and others recently noted, exclusive devs minimize the weak spots and focus on where the machine excels. So the answer that "X is faster than Y at A, B, C" is pretty arbitrary, because on machine Y you would probably do tasks D, E, F instead.
A checklist would be nice (though it can be misleading; e.g. see RSX pixel fillrate or Xenos aggregate system bandwidth), but even that only really works for GPUs. CPUs are more flexible and do not solve a limited set of problems the way GPUs do. The problems and solutions are just different and really depend on a case-by-case basis, in real software, and even then it is not always clear how much they help in real-world scenarios. e.g. Is it even the limiting factor? (A lot of the internal PDFs I have seen say to first determine whether something is a bottleneck before even bothering to make it faster.) Another example: if you are 2x as fast at cloth physics, it may not be as big of a win as it first appears (e.g. that could take you from a 100x100 mesh to a 141x141).
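To spell out that last bit of arithmetic, here is a minimal sketch assuming the simple case where per-step cloth cost scales linearly with vertex count (real solvers can scale worse, which would shrink the win further):

```python
import math

def max_square_mesh_side(base_side: int, speedup: float) -> int:
    """Largest square cloth mesh a 'speedup'-times-faster machine can
    step in the same time budget, assuming cost ~ number of vertices."""
    base_vertices = base_side * base_side
    return int(math.sqrt(base_vertices * speedup))

# 2x the cloth-physics throughput only buys ~1.41x per dimension:
print(max_square_mesh_side(100, 2.0))  # -> 141
```

In other words, a headline 2x speedup shows up on screen as only about 41% more cloth resolution in each direction.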
Specs are important, and they are fun, but I am not sure we can ever arrive at meaningful conclusions. Typically time itself, 3 or 4 years down the road, is what vindicates hardware decisions. Not the hardware itself -- as it could be woefully underutilized -- but the combination of the right hardware, the right tools, and the right time for the technology and the direction of the industry. Some things are obvious up front, others less so.
Anyhow, yes, we could do better. We already have, if you read some of the posts from Spring 2005. But I think most of us also realize that any such article would need to be presented in a way that is not only fair, but also restrained enough to note that ultimately the ONLY important hardware features/performance are the ones used by developers. And to that end we are all in the dark to a significant degree, seeing as 2/3rds of the machines are unreleased and very little next-gen software has been crafted.