But for everyday human experience / climate use, Fahrenheit actually makes more sense, because 0F is the temperature at which salt water freezes and 100F is the temperature of the human body (well, as close as the technology of the day could measure it). It's more granular, so there isn't such a need to deal with decimal values as there is in Celsius.
This is fantastically off topic, but...
Why do we need granular temperatures to understand weather? How much difference is there between 16 and 17 degrees C? I can't say I notice it, so is there really a need for a finer scale that works in divisions half the size of degrees C? And what's wrong with decimals? If you stick to half degrees (you could use fractions instead of decimals), that's hardly confusing. And besides, temperatures aren't accurate except in scientific situations. If the weatherman says it's 22 degrees C outside (or Fahrenheit), that doesn't mean it is 22 degrees. It could be 22.3, or 23.1, or 21.9. The figures used are approximations to give an idea of the sort of temperatures you might experience. How much granularity do you need in approximations? The more you have, the more chance for error. If the weatherman pegs the temperature at 30 degrees, in Celsius the range of error might be +/-1 degree, whereas expressed in Fahrenheit that same range would be roughly +/-2 degrees. Throw in other factors like clothing, wind chill, sunshine, environment, individual preference for heat or cold, and the whole temperature thing is very rough. I can't see why granularity is an issue.
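To make that arithmetic concrete, here's a minimal sketch (Python, purely illustrative; the 30-degree forecast is just the figure from the paragraph above) showing how a +/-1 degree Celsius error stretches out on the Fahrenheit scale:

```python
def c_to_f(c):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return c * 9 / 5 + 32

forecast_c = 30   # the forecast figure used above
error_c = 1       # +/- 1 degree Celsius of forecast error

low_f = c_to_f(forecast_c - error_c)   # 84.2 F
high_f = c_to_f(forecast_c + error_c)  # 87.8 F

print(f"{forecast_c} C is {c_to_f(forecast_c)} F")     # 86.0 F
print(f"+/-{error_c} C spans {high_f - low_f:.1f} F")  # 3.6 F, i.e. roughly +/-2 F
```

The same physical uncertainty simply covers about twice as many Fahrenheit degrees as Celsius ones, which is all the extra granularity buys you.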
Fahrenheit was developed as a scientific measure using a couple of fairly arbitrary highs and lows and an arbitrary set of divisions. Key points along the scale don't mean anything (0 degrees doesn't correlate to any physical phenomenon, and 100 degrees isn't/wasn't human body temperature), so the values are less meaningful than Celsius, which is based on water's properties. Low Celsius figures tell a gardener to be wary of frost, as 0 is freezing.
And this brings me to the whole point of this argument - what sort of granularity do game review scores need?
When a Wii title is measured, is it enough to have 5 ranks from good to bad? Or ten? Is it worth having a percentage? Can you tell the difference between a 74% game and a 75% game? We also have problems with non-standard scoring. Some reviewers place 0 as the worst possible game, 5 as mediocre, and 10 as the best possible game. Others tend to use a higher, non-calibrated scale, ranging from maybe 5 (bad) to 10 (great). This mix of measures is very medieval, like non-standard weights and competing temperature scales. Hundreds of years ago those measures were standardised, but review scores, an important metric in entertainmentology, are still a muddle. The IEEE still seems unconcerned with sorting this mess out, and I hope Nintendo take steps to standardise scoring for their game reviews.
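Just to illustrate what "calibrating" scores might mean, here's a hedged sketch (Python; the reviewer ranges are invented for the example, not real editorial policies) that maps each outlet's effective range onto a common 0-100 scale:

```python
def normalise(score, effective_min, effective_max):
    """Map a score from a reviewer's effective range onto a common 0-100 scale."""
    return 100 * (score - effective_min) / (effective_max - effective_min)

# Reviewer A genuinely uses the whole 0-10 scale, so a 7.5 really means "good".
print(normalise(7.5, 0, 10))   # 75.0

# Reviewer B never drops below 5, so their 7.5 is actually a middling score.
print(normalise(7.5, 5, 10))   # 50.0
```

Identical headline numbers, very different meanings once you know the scale behind them.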
Another interesting point this throws up is true values versus perceived values. How much colour fidelity does a game need? Years ago, 2^24 was settled on as an absolute measure of the number of 'colours' the eye can distinguish, yet even that still shows banding. However, it's enough that in most cases it isn't noticed. How much lower can we go in colour-space granularity before it becomes a concern? Nintendo feel granularity in this situation isn't a prime concern, and are willing to settle for 2^16 colours. Will this end up like Celsius? Will the framebuffer's inability to express 173/256ths of absolute blue intensity be as insufferable as Celsius's inability to communicate 60 degrees Fahrenheit using integers?
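To put a number on the blue-channel complaint, here's a minimal sketch that assumes the 2^16-colour mode gives blue 5 bits (an RGB565-style layout is my assumption, not anything Nintendo have stated) and snaps the 173/256 intensity to the nearest representable level:

```python
def nearest_5bit(intensity):
    """Snap an intensity in [0, 1] to the nearest of the 32 levels a 5-bit channel offers."""
    level = round(intensity * 31)
    return level, level / 31

target = 173 / 256                     # the blue intensity from the text
level, actual = nearest_5bit(target)

print(f"wanted {target:.4f}, got level {level}/31 = {actual:.4f}")  # 0.6758 vs 0.6774
print(f"quantisation error: {abs(target - actual):.4f}")            # about 0.0016

# For comparison, 60 F suffers the same fate in integer Celsius:
print(f"60 F = {(60 - 32) * 5 / 9:.2f} C")                          # 15.56, not a whole number
```

In both cases the scale can get close but never exact; the real question is whether anyone playing the game, or reading the forecast, ever notices.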