So you guys feel that they could have done better with the design, or, Shifty, are you saying that Sony would have done a better job of designing the hardware? In what way, exactly? Are you talking about size, or what? I want to understand what you are trying to say before I respond.
At three PCBs and some 12ish ASICs, the whole design doesn't appear (at a cursory glance) to fit with engineering K.I.S.S.
I wouldn't disparage someone else's engineering before you understand how the product actually works.
Yes, I suppose it's easy to be an arm-chair engineer, and one without qualifications to boot!
But based on comparisons with other devices, and a little know-how, I'm not seeing why this thing needs so much stuff relative to them. It's not about trimming fat, but consolidating. Again though, I wasn't presenting a carefully considered viewpoint, just reacting to how far removed it is from the elegance of Sony PCBs.
Back when the price came out, people were estimating outrageously cheap BOMs, saying it was overpriced. Now they're looking at the teardowns, seeing how complex it really is, and questioning whether it's over-designed. All the while, none of us really understands how it works, so making judgement calls at the component level seems a little ridiculous.
But guessing and discussing is part of the fun of tech. Otherwise why the hell are any of us on this board?! Why do people get so upset when others wonder about things?
Sony engineers do amazing board design, but in this case, I doubt they could have reduced the chip count by much.
I understand different chips from different manufacturers, but I'm sure a custom ASIC could roll most of those components into a couple of chips. Although TBH I don't know if Sony still have the means to produce their own custom components. But given the expectation to sell millions of these, is it really more economical to source 12 different components and fit them into a relatively large, hot device, than to design something for the job that would achieve it in a simpler package? Was that option explored and the economics just didn't justify it?
Also, the chip responsible for the 3D point cloud is the PrimeSense PS1080-A2, on the smaller of the two mainboards. There's another entire board with a Marvell AP102. I've pointed out before that people made assumptions when we said we'd moved skeleton processing to the console. But skeleton processing isn't all the system is doing.
I don't suppose anyone will ever tell us the details of this.
We know there's the 2D image, and clearly Kinect has to be doing on-board processing to control the motors. Oh, I suppose it could be getting instructions from the 360. All the info we have to go on is that the 360 receives the image data, audio stream and point cloud. I'll take your word for it that there's more to it than that! I suppose there's also room data, with the camera informing the 360 whereabouts it's looking. You can't really complain, though, when people try to join the dots and get it all wrong when they aren't aware they're missing some of them!
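Just to put a rough scale on those streams, here's a back-of-envelope sketch. The figures (640×480 depth and colour at 30 Hz, 11-bit depth) are the commonly quoted specs rather than anything from the teardown, so treat it as illustrative only:

```python
# Back-of-envelope bandwidth for the streams Kinect reportedly sends over USB.
# Assumed figures: 640x480 depth at 11 bits/pixel, 640x480 RGB at 24 bits/pixel,
# both at 30 frames per second. Illustrative only, not confirmed specs.
DEPTH_W, DEPTH_H, DEPTH_BITS = 640, 480, 11
RGB_W, RGB_H, RGB_BITS = 640, 480, 24
FPS = 30

depth_mb_s = DEPTH_W * DEPTH_H * DEPTH_BITS * FPS / 8e6
rgb_mb_s = RGB_W * RGB_H * RGB_BITS * FPS / 8e6

print(f"depth stream ~{depth_mb_s:.1f} MB/s, colour stream ~{rgb_mb_s:.1f} MB/s")
# ~12.7 MB/s depth + ~27.6 MB/s colour, before audio from the mic array.
```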
Trust me, if there was any good way for us to have been able to avoid putting the motor in, we would have done it, but we wanted to support multiple mounting heights and people from 4' to over 6', and it couldn't be done with current tech without a tilt motor.
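To put some rough numbers on why a fixed camera struggles with that (using assumed figures around the commonly quoted ~43° vertical FOV, not anything official):

```python
import math

# Illustrative only: assume ~43 degrees vertical field of view and a player
# standing 2.0 m from the sensor. How much vertical height does the camera see?
V_FOV_DEG = 43.0
DISTANCE_M = 2.0

coverage_m = 2 * DISTANCE_M * math.tan(math.radians(V_FOV_DEG / 2))
print(f"vertical coverage at {DISTANCE_M} m: ~{coverage_m:.2f} m")
# ~1.57 m of height in view. That window has to be aimed: a sensor perched on
# top of a large TV vs. sitting below a small one points at very different
# heights, and a 4' child vs. a 6'+ adult needs the window centred differently,
# which is what the tilt motor buys.
```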
You won't be able to answer this, but what is wrong with a higher-res camera and a wider lens FOV? Is it harder to accommodate the 3D spatial interpretation with wide-angle distortions? Or the cost of the cameras? Or the speed of the depth processing?
Also, let's be clear about this most of all: I certainly wasn't knocking the engineering, not on any serious level. It was, as I said, a tongue-in-cheek comment. If all that stuff is necessary, there's no possible alternative, and my idea of something more like a minimalist work of art, with a lone black IC dead centre of a PCB and Art Nouveau tracks swirling elegantly around it, is completely out of line with what is possible, then I happily accept that. Out of curiosity, though, I am interested in the design decisions and thought processes, and in understanding the differences between Kinect and similar devices. Are there no cost considerations at all and this is the best anyone could do? Or maybe we are seeing engineering sadly held back as a rough compromise thanks to the prosaic limits of real-world economics?
Edit: Flicking through the teardown again, I've just noticed the two camera CCDs are actually different parts. This blows my theory of using the same component for both tasks to save costs out of the water, and again leads to head-scratching! If PrimeSense are to be believed, their system works with off-the-shelf components, making for very cheap systems, which is one of their major selling points, so what's specific about the Microsoft part that another CCD couldn't do the job? I notice the centre one (the optical camera, I believe) has a much larger aperture, as suggested by the cone between the camera and the case.