cybamerc said:
Vince:
> ...Ohh, and PS2's Software-Hardware ratio is over 12:1.
You live in a fantasy world.
Production tie ratio as of March 31, 2003: 6.84
US tie ratio as of August 2003: 7.07
Please tell me you're joking. I can't believe you're this damn stupid.
I made the comment about a 12:1 ratio for good reason; apparently you can't comprehend this, which scares me. How about I start from the beginning.
When entering arguments such as this one, debating the level of hardware malfunction, there are very few solid statistics that can definitively show the malfunction and/or replacement rate. This unknown leads to the introduction of conjectural and otherwise obtuse data being used as a "standard". For example, cthellis made a good comment on this when he said:
cthellis42 said:
Problem with personal experience is it's too damn unreliable.
So, in order to arrive at a more precise answer to this question of PS2 reliability, we must look at statistics that reflect the current PS2 userbase in terms of singular households, rather than absolute shipment and sales numbers, N, which many such as cybamerc, PCEngine, and Jvd directly or indirectly claim are artificially increased by the sales of additional consoles per singular household, bought to replace said "malfunctioning" units.
So, we need a statistic that's proportional and representative of the singular household/user, instead of the hardware vendor, like shipment and sales numbers are. Put the thinking-caps on and we shortly arrive at the conclusion that the [Software:Hardware] ratio fits the job well. And it does so for the following reasons:
- It's a metric that's representative of a singular household/user, since for statistical purposes we can assume that after a malfunction each user will only buy new hardware, not new software.
- It's a metric that relates the singular user (via soft sales) to the hardware manufacturer (via hard sales) and will let us probe the feasibility of a high defect rate.
- It's proportional and thus scalable across the industry, and can be compared to both past and future ratios, giving us a base for normalization.
In short, it's as good as we're going to get without a vendor specifically telling us the fault rate.
Now, let's talk about what this ratio actually is. The [Software:Hardware] ratio is a proportion that tells us, statistically, how many units of software have been sold per each sold piece of hardware.
This is cool for a few reasons. By assuming that each singular user buys only one copy of a given title (a reasonable assumption when talking of the large numbers we are), we can state that if a significant variable f were introduced, based on an above-average malfunction rate, it would manifest itself by hyperinflating the absolute amount of sold hardware. This, in turn, would cause the ratio between soft sales and hard sales to rapidly shrink below the industry standards of this generation, and those experienced in the past.
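To make that mechanism concrete, here's a minimal sketch (Python; all numbers are hypothetical illustrations of mine, not vendor data):

```python
# Sketch: a malfunction-driven replacement rate f inflates hardware
# sales (N) while software sales stay put, so the observed
# [Software:Hardware] ratio shrinks.

def observed_tie_ratio(households, games_per_household, f):
    """f = average replacement consoles bought per household."""
    software_sales = households * games_per_household
    hardware_sales = households * (1 + f)  # N inflated by replacements
    return software_sales / hardware_sales

for f in (0.0, 0.1, 0.3, 0.7):
    print(f"f = {f:.1f} -> tie ratio = {observed_tie_ratio(1_000_000, 7, f):.2f}")
# f = 0.0 -> tie ratio = 7.00
# f = 0.1 -> tie ratio = 6.36
# f = 0.3 -> tie ratio = 5.38
# f = 0.7 -> tie ratio = 4.12
```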
Before we can form this into a valid hypothesis, we need to close some possible open variables. Thankfully, cybamerc (in all his genius) did this for me. We can assume that the current industry ratio is a valid baseline:
cybamerc said:
GameCube and Xbox have lower tie ratios because they came out a year later. All in all the systems are fairly equal
As well as him clearing up whether the PS2 has any reason to have a higher tie-in rate (e.g. due to increased per capita soft sales):
cybamerc said:
> Explain how the "Mainstream" PS2 user buys 2X the number of games
> as the "hardercore" contingents found on the XBox/Cube.
That is easily explained: they don't.
Which means that we can state with a high probability that the following is true:
As f increases, it causes iterative buying of hardware, which inflates N; the tie ratio responds inversely. This has a net effect of changing the ratio of [software sales] to [hardware sales]: in the case of an increasing f, we would see a smaller [soft sales]:N ratio.
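In closed form (my notation, same hypothetical assumptions as the sketch above):

```python
# With S = H * t_true games sold across H real households, and
# N = H * (1 + f) consoles sold, the observed tie ratio is
#   t_obs = S / N = t_true / (1 + f)
# i.e. inversely related to f: the bigger the fault rate, the
# smaller the observed [soft sales]:N ratio.

def t_obs(t_true, f):
    return t_true / (1 + f)

assert abs(t_obs(7.0, 0.7) - 4.12) < 0.01  # matches the table above
```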
So, let's sum up where we are:
- We've established that the [Software:Hardware] ratio is adequate for this job as it compares singular software sales to absolute hardware sales.
- We've eliminated all the major erratic variables.
- We've devised a way to statistically see if f is, in fact, a large influence on the N of a given console.
Not too bad. So, let's do it now:
Actually, I don't have to, since cybamerc already proved the case against a large f (fault) rate in the hardware. Didn't I say he was a genius?!?
PS2 Tie-in ratio* said:
XBox Tie-in ratio* said:
*measured in hardware (N) to software.
So, the numbers basically reinforce the fact that the PS2's attach rate is not only equivalent to the industry standard, it's higher than the industry standard. Thus, as per the conditions cybamerc agreed to, we can state with a high level of accuracy that there is no perceptible influence of the variable f, which means that the level of fault is statistically insignificant.
So, if we were to agree with their fallacious statements that f is large, then that would manifest itself by artificially inflating the true per-household ratio between soft sales and hardware, well above the observed tie ratio.
E.g., that's where the 12:1 comment came from: back the supposed replacement units out of N and the observed ~7:1 ratio balloons per actual household; push f as high as they claim and you land at numbers like 12:1.
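To put a number on it (back-of-the-envelope; the 7.07 figure is the US tie ratio quoted above, the 12:1 is the reductio target, and the algebra is mine):

```python
# If the observed US tie ratio is ~7.07 but the *true* per-household
# ratio were 12:1, how inflated would N have to be?
observed = 7.07       # software units per console sold (US, Aug 2003)
claimed_true = 12.0   # the reductio: games per actual household

# observed = claimed_true / (1 + f)  =>  f = claimed_true / observed - 1
f = claimed_true / observed - 1
print(f"implied replacement rate f ~= {f:.0%}")  # ~70% extra consoles
```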
So, the effect of arguing for a hyperinflated f is that, with the PS2's attach rate already ~30% higher than the industry baseline (e.g. Cube and 'Box), you need to explain why their attach rates are so low. Which is difficult without invoking a higher f for them as well, or stating that their "hardcore" gamers just buy significantly fewer games than PS2 gamers. This differential would be: [per capita XBox games] + 2 (the preexisting differential) + Δf. Which, I don't think many are willing to concede.
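Rough arithmetic for that bind (the only input is the ~30% gap stated above; the baseline figure is derived, not reported):

```python
# Any claimed f inflates the *true* PS2 games-per-household figure,
# widening the per-capita gap that a "hardcore buys more" story
# would have to explain away.
ps2_observed = 7.07
baseline = ps2_observed / 1.3  # ~5.4, implied by the ~30% gap

for f in (0.0, 0.3, 0.7):
    ps2_true = ps2_observed * (1 + f)
    print(f"f = {f:.1f}: PS2 games/household = {ps2_true:.1f} vs baseline {baseline:.1f}")
# f = 0.0: 7.1 vs 5.4;  f = 0.3: 9.2 vs 5.4;  f = 0.7: 12.0 vs 5.4
```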
So, cybamerc, PCEngine, Jvd: if f was as big as you claim, where the hell is the proof? Explain where the numbers went.