It's true that I've used my own gut, lol, but that's mostly because I trust that the analyst company has done a half-decent job of defining what counts as "active", and that there's no reason to believe they use a different set of rules for each platform.
See, that's your first problem right there: you treat something as a given in the equation, and then you base your entire argument on it.
But that's circular logic, because the only reason I can see to trust a research metric is if you know how the data was collected and that there are no errors in it. And we don't have that here.
Yet you believe a research firm's statement because you trust them, and you trust them because they seem like a decent research firm. But what makes you think that in the first place? What do you base your trust on?
"the minimum qualification to be an active installed unit is it’s in working condition and it is used at least once a year."
Additionally, they have a "proprietary equation that calculates scrappage rates or retirement rates each year".
So it's not all that complex. If I were to guess, I would say that what we have here is something in the realm of this formula:
[percentage of people who claimed they used their console in the last year]
*
( Sum over each year of:
  [number of consoles sold to customers in that year]
  *
  ( 1 - [number of years passed since that year] * [their "proprietary secret formula" rate for how many consoles are scrapped each year] )
)

Or something like that, in shorthand:

p * Σ( S_y * (1 - r * t_y) )

where p is the active-use percentage, S_y the consoles sold in year y, t_y the years since year y, and r the yearly scrappage rate.
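To make the guess concrete, here's a toy version of that formula in Python. Every number below is completely made up, as is the linear write-off; it's just to show the shape of the calculation, not what the firm actually does.

```python
# Toy sketch of the guessed active-install-base formula -- all figures invented.
# sold[y] = consoles sold to customers in year y, in millions (assumed)
sold = {2006: 5.0, 2007: 7.5, 2008: 9.0, 2009: 8.0}

p = 0.80            # fraction who claim they used the console in the last year
r = 0.06            # assumed yearly scrappage rate (the "proprietary" part)
current_year = 2013

# Each year's sales decay linearly with age; clamp at zero so very old
# cohorts can't contribute negative units.
active = p * sum(
    units * max(0.0, 1 - r * (current_year - year))
    for year, units in sold.items()
)
print(round(active, 2))  # estimated active installed base, in millions
```

Notice how sensitive the result is: nudge `r` or `p` a little and the headline number moves by millions, which is exactly why the hidden inputs matter.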
So you see the second problem here? There are so many things in this estimation that can go wrong:
1) They must have interviewed a sample of people. I don't see how else they could work out how many people used their console at least once in the past year (unless they can tap into people's brains). So they're effectively relying on people being honest in the interview and actually remembering when they last used their console.
2) For the statistic to be relevant, they also need to make sure that the sample they interviewed properly reflects the general population of console owners (which is unknown), i.e. they need the right representation by age, gender, race, location, etc.
This holds true for most statistical estimations and polls, but it's even harder to get right here, because the global population of console owners (probably) differs from the distribution of the general population.
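To illustrate why the sample composition matters, here's a minimal post-stratification weighting sketch. All groups and shares are invented; the catch, as noted above, is that this only works if you actually know the population shares, which is exactly the unknown here.

```python
# Toy post-stratification weighting -- all figures invented for illustration.
# If the survey over-samples one group, reweight each respondent so the
# sample matches the (assumed known) population shares.

population_share = {"18-24": 0.30, "25-34": 0.45, "35+": 0.25}
sample_share     = {"18-24": 0.50, "25-34": 0.35, "35+": 0.15}

# weight = population share / sample share, per group
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# respondents: (age group, answered "yes, used console in the last year")
respondents = [("18-24", True), ("18-24", False), ("25-34", True), ("35+", True)]

weighted_yes = sum(weights[g] for g, yes in respondents if yes)
weighted_all = sum(weights[g] for g, _ in respondents)
rate = weighted_yes / weighted_all
print(round(rate, 3))  # weighted "active" rate
```

With the raw sample, 3 out of 4 said yes (75%); after reweighting, the estimate shifts, showing how much the assumed population shares drive the result.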
3) For any kind of calculation, they need to feed the formula the number of consoles actually sold through to customers each year. But this is again something they don't know, because we never get those numbers - only the shipped numbers that the manufacturers send to retail. So they either track statistics from the likes of NPD and Chart-Track (but those don't cover the whole world), or use another estimation, like an assumed percentage of the shipped numbers that actually sold through. Either way, we're throwing another kind of uncertainty into the equation.
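That last option is trivial to write down, which is part of the problem - the whole estimate hinges on one guessed rate. A toy version (numbers invented):

```python
# Toy sell-through guess -- all figures invented. Manufacturers report
# "shipped to retail", so an estimator has to assume what fraction of
# shipped units actually reached customers.
shipped = {2011: 10.0, 2012: 12.0}   # millions, as reported by the manufacturer
sell_through_rate = 0.90             # assumed fraction sold through to customers

sold = {year: units * sell_through_rate for year, units in shipped.items()}
print(sold)
```

Change that one assumed rate by a few points and every downstream number in the install-base formula changes with it.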
4) They use some "proprietary equation", which is jargon for "we won't tell you how we did it". They need to base their yearly depreciation rate on something, right? But how do they collect that? That's something even harder to sample from user surveys, and they claim they don't do that anyway.
They use some kind of formula to work out how many consoles were scrapped, and they probably employ something more complex than just linear depreciation (but who knows). That still adds a very problematic uncertainty to the formula: did they use different metrics for the PS3 and the Xbox 360? And did they use different formulas for each model of the Xbox 360 (after all, the first ones were much more prone to breaking than the later ones)? Do they know whether replacement units from repairs are something that MS and Sony fold into their "units shipped" numbers (I'm asking because I seriously have no idea)?
It makes sense that they should use different percentages, but if indeed they did - then where did they gather this information from? More statistical samples and estimations?
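Just to show what "different percentages per model" would even mean, here's a sketch with per-model failure rates and an exponential survival curve (one common alternative to the linear write-off guessed at earlier). The model names and rates are entirely made up.

```python
# Sketch of per-model scrappage -- all rates invented. The assumption here
# (consistent with the RRoD era) is that early Xbox 360 revisions fail
# faster than later ones.

failure_rate = {"xbox360_2005": 0.15, "xbox360_2010": 0.04, "ps3": 0.05}

def surviving(units, model, years):
    """Units still working after `years`, assuming exponential decay."""
    return units * (1 - failure_rate[model]) ** years

# 10 million units of each revision, 5 years later:
print(round(surviving(10.0, "xbox360_2005", 5), 2))
print(round(surviving(10.0, "xbox360_2010", 5), 2))
```

The gap between those two outputs is the point: unless the firm has real per-model failure data, picking these rates is just another layer of guesswork.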
We have so many variables and estimations feeding into such a formula that it's almost completely unreliable, even if it were of any significance to developers. And it isn't - because even if we take these numbers as fact, they don't show us a trendline (they never gave us the numbers for previous years), and they don't help us understand the user base's playing and purchasing patterns (as Shifty Geezer mentioned a few times here). We just have a tally of the people who turned on their console at least once in the past year - it doesn't matter what they did with it when they turned it on, how much time they actively spend playing games, or what their monthly spending on games is. Now THAT would be something beneficial for developers.