The "depth sensor" in the Kinect is just a camera as well. An IR camera, but there's nothing magical about it. The depth information you can deduce by using 2 offset visible-light cameras is no more "approximate" than the depth information you can deduce from a single IR camera with an offset IR pattern projector. In fact, one could argue that the Sony solution can derive more precise depth info (in a best case scenario), since it appears to use higher resolution imagery as the input to its depth calculations.
Now, I do think the Kinect is a more robust depth-sensing solution, because it does not require good room lighting, nor does it rely on the target having high-contrast features to triangulate. (Kinect's pattern projector paints those details onto everything.) Still, I don't see why both schemes cannot be classified as "3D".
If the Kinect 2 uses a time-of-flight camera, then perhaps you could start making such a distinction.
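A time-of-flight camera really would be a different beast: it measures depth directly from the round-trip time of emitted light, with no baseline or triangulation involved. A back-of-the-envelope sketch (illustrative numbers only):

```python
C = 299_792_458.0  # speed of light, m/s

def tof_depth(round_trip_s: float) -> float:
    """Time-of-flight: depth = c * t / 2 (half the round-trip distance)."""
    return C * round_trip_s / 2.0

# A 10 ns round trip corresponds to roughly 1.5 m
print(tof_depth(10e-9))  # -> ~1.499 (meters)
```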