The definitive console specs thread.

Geeforcer

Since so many threads in this forum start with "What are the specs for X...?" followed by a reply along the lines of "noob" or "Didn't you read the .... (fifteen hundred post thread)", I have a humble proposal: please post the specs of any console (last, current, next-gen) in this thread. I will edit them into this post. Hopefully someone will then sticky this thread for easy reference. Also welcome are links to articles about any aspect of a particular component (like the B3D Xenos tech overview).

 
Corrections for that console spec list:

It lists the chips' process sizes in place of their die sizes but doesn't actually list any die areas. Some die areas on 250 nm processes: SH-4 (42.25 mm^2), Emotion Engine (240 mm^2), Graphics Synthesizer (279 mm^2).

CLX2's manufacturer/designer should be clarified as NEC/PowerVR.

CLX2's max polygons per second should be 7 million.

CLX2's pixel and texel fill rates should both reflect that the 100 Mpixels/sec figure scales with the depth complexity (overdraw) of the scene.

CLX2's maximum pixel depth should be 32 bits, and the listing could clarify that the figure is unconditional and not just the maximum.

CLX2's maximum Z-buffer depth should be ~32 bits and clarified to be in floating-point precision, and that listing too could reflect that the performance is unconditional and not just the maximum.
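The fill-rate correction above can be sketched as simple arithmetic: because CLX2 is a tile-based deferred renderer that rejects hidden pixels before shading, its effective fill rate is the raw 100 Mpixels/sec multiplied by the scene's depth complexity. The function name and the sample overdraw values below are mine, for illustration only.

```python
def effective_fill_rate_mpix(raw_mpix: float, depth_complexity: float) -> float:
    """Effective fill rate in Mpixels/s for a deferred renderer:
    the raw rate scales with scene depth complexity (overdraw)."""
    return raw_mpix * depth_complexity

# CLX2's quoted 100 Mpixels/s at a few hypothetical overdraw factors:
for overdraw in (1, 2, 4):
    print(overdraw, effective_fill_rate_mpix(100, overdraw))
```

An immediate-mode renderer, by contrast, spends fill rate on every covered pixel, so its quoted figure does not scale this way.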
 
Maximum z-buffer depth (bits) RSX = 24, Graphics Synthesizer = 32

Integer register size (bits) Cell = 64, Emotion Engine = 128


Are those good or bad?
 
Lunchbox, there's just no way that the original Playstation and Saturn processors were built on 250nm back in 1994. And there are several places where you blindly did auto spell correction of MB into MiB ("eMiBedded" and "coMiBined" for example).

Also, I think now you can fill in some basic Wii information like clockspeeds.
 
Maximum z-buffer depth (bits) RSX = 24, Graphics Synthesizer = 32
All PC-GFX Cards since R300 have a max of 24bit Z. You can work around issues, but I still wonder why 32bit was dropped.
Integer register size (bits) Cell = 64, Emotion Engine = 128
The way register sizes are measured in that table, I'd believe Cell to be 128-bit too.
 
While the Graphics Synthesizer did support 32-bit for Z, functionality restrictions and shortage of display RAM supposedly made using it impractical.
 
Are those good or bad?
There are too many mistakes in that chart to even begin counting them.

While the Graphics Synthesizer did support 32-bit for Z
Standard for titles was 24bit, just like on PC for that matter, primarily because 32bit Z Values have no tangible advantage over 24bit (thus 8bits normally get allocated to stencil buffer - manufacturers will still claim they have 32bit Z anyhow).
 
Lunchbox, there's just no way that the original Playstation and Saturn processors were built on 250nm back in 1994. And there are several places where you blindly did auto spell correction of MB into MiB ("eMiBedded" and "coMiBined" for example).

Also, I think now you can fill in some basic Wii information like clockspeeds.


Me?
or did you mean Lazy8s???
 
neeyik.info is Nick Evanson's (Neeyik) site. His chart.

A ha. Well, whoever compiled it, it's got tons of errors. If you find a couple of errors, then you gotta ask whether the rest of the info that you're not sure about is reliable as well... It does the whole point of the thing a big disservice.
 
Well you see, this is how it is. I spend countless hours balancing up rumours vs. facts to make such charts - I've even posted up around here asking for further information and clarification; sometimes it works, sometimes it doesn't. And yet most of the time when somebody links to the chart, what do I get? "Full of errors". Great, so how about somebody offer some corrections, with factual evidence, so that I can correct the chart and credit the appropriate people?

The usual response I get to such a request is that "I can't because of NDAs". Well, one is never going to get a 100% correct chart with such restrictions, hence why I slap a big disclaimer on all of the charts I do.
 
Well you see, this is how it is. I spend countless hours balancing up rumours vs. facts to make such charts - I've even posted up around here asking for further information and clarification; sometimes it works, sometimes it doesn't. And yet most of the time when somebody links to the chart, what do I get? "Full of errors". Great, so how about somebody offer some corrections, with factual evidence, so that I can correct the chart and credit the appropriate people?

Hey, don't worry. Everybody appreciates your effort.

Small update:
According to GameWatch (see this thread) the RSX clock has been lowered to 500 MHz. DeanA also hinted at that.

EDIT:
See also here
 
Thanks hupfinsgack - the core speed question was why I entered "550?" in the chart, rather than a straight figure. Even as it is, I think I'll still add the ? after 500, simply as a matter of caution.

Edit: Oh and thanks to Lazy8s for his figures; I'll add and credit them accordingly later on tonight.
 
Others things I noticed:

Wii memory is split into 24 MB 1TSRAM & 64 MB GDDR3. (directly from the HW, so 100% confirmed). Moreover, since maxconsole's specs got the GDDR3 right before everybody else, I'd say that their other specs are also spot on. (BW for the 1TSRAM & GDDR3, etc.). Maybe you should add them with the source and a question mark.

Also the embedded RAM on Hollywood is more likely to be 3MB as well instead of the previously IGN rumoured 8MB.
 
Max polygons per second (millions)

If you count the peak transform rate it should be :
-66 millions on the PS2
-32.4 millions on the GameCube (5 cycles@162MHz)
-116.5 millions on the Xbox (2 cycles@233MHz)

On the PS3 you're counting the peak transform rate (1 Gpolys/s) but on the Xbox 360 you're counting the setup rate (0.5 Gpolys/s).

This is what I noticed at first sight
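The peak-transform-rate figures above follow directly from clock speed divided by cycles per vertex. A minimal sketch of that arithmetic, using the clock and cycle figures quoted in the post (the helper name is mine):

```python
def peak_transform_rate(clock_mhz: float, cycles_per_vertex: float) -> float:
    """Peak vertices per second, in millions: MHz clock / cycles per vertex."""
    return clock_mhz / cycles_per_vertex

# GameCube: 5 cycles @ 162 MHz
print(peak_transform_rate(162, 5))  # 32.4 (millions/s)

# Xbox: 2 cycles @ 233 MHz
print(peak_transform_rate(233, 2))  # 116.5 (millions/s)
```

Note this is the pre-setup throughput; as discussed below the quote, a chart has to pick either transform rate or setup rate consistently for the comparison to mean anything.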
 
Good point about the polygon rates - I've not been consistent on what the figure should represent, or rather, when I've sourced values I've not thought about whether they were pre- or post-setup throughputs. The problem is, if I choose to use, say, peak transform rate for the consoles you've stated, then there is no guarantee that the others are displaying the same attribute - for example, most console specs tend to state things like "untextured polygons" or "flat shaded polygons" and so on.

I think I'll mark those values that are known to be peak transform rate accordingly. Thanks for your help Zeross.
 