It does use the irradiance slices; what it does not seem to use is the signed distance field hack for AO-like soft shadow effects.
I think it is using different shadowing techniques for different objects. If you look at objects that cast shadows from light sources located above or to the right, you see shadow-map-style artifacts (chunkiness, aliasing, etc.). For example, watch the skateboard scene in HD and look at the shadows cast by the flags, the skateboard, or under the ramp.
However, for certain objects in the game, especially those that appear to be lit ambiently or from a direction pointing towards the viewer, another algorithm seems to be in use: the shadows are far blurrier, and diffuse surfaces appear to show interior transmission. For example, look at the floating French curve thingies, or the dwarf with the mushroom at the end.
The dwarf's face appears to have some red cast on it from the mushroom, and the hands and other areas with creased geometry appear to have some internal reflection.
To me it seems that only irradiance slices are used, to enable lots of lights without a drastic and variable (thus unpredictable) computational overhead. That's how so many little lights can be in there at the same time.
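To make "predictable overhead" concrete, here is a rough toy sketch of the idea as I understand it; the slice resolution, light list, and falloff are all invented for illustration, not anything from LBP. The point is that the lights get accumulated into a fixed-resolution slice up front, so shading is a single lookup per pixel no matter how many lights contributed:

```cpp
// Hypothetical sketch of why an irradiance-slice approach keeps per-pixel
// cost flat as the light count grows: lights are accumulated once into a
// fixed-resolution slice, and shading is then one lookup regardless of
// how many lights contributed. Grid size, falloff, and light data are
// made up for illustration.
#include <cmath>
#include <cstdio>
#include <vector>

struct Light { float x, y, intensity; };

int main() {
    const int W = 64, H = 64;                 // fixed slice resolution
    std::vector<float> slice(W * H, 0.0f);    // one irradiance slice (2D)

    // Many small lights: accumulation cost is lights * texels, known up front.
    std::vector<Light> lights;
    for (int i = 0; i < 200; ++i)
        lights.push_back({float(i % 16) * 4.0f, float(i / 16) * 4.0f, 0.05f});

    for (const Light& l : lights)
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x) {
                float dx = x - l.x, dy = y - l.y;
                float falloff = 1.0f / (1.0f + dx * dx + dy * dy);
                slice[y * W + x] += l.intensity * falloff;
            }

    // Shading: one texel fetch per pixel, independent of the light count.
    float sample = slice[32 * W + 32];
    std::printf("irradiance at slice center: %.3f (from %zu lights)\n",
                sample, lights.size());
    return 0;
}
```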
The performance of the SDF algorithm presented is not drastic or unpredictable, especially when confined to 2.5D, so I don't see how your reasoning rules out SDF usage. The whole point of the presentation was that SDFs can be used for a physically incorrect fake solution that nonetheless looks good and runs in real time. The demo rendered a hugely complex model on a Radeon X1600XT in real time, and nothing in that geometry looked less complex than what LBP's 2.5D composited layer scenes contain.
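For a sense of why the cost is bounded, here is a rough CPU sketch of the distance-field soft-shadow trick, in the spirit of the presentation rather than its actual code. The toy scene, step cap, and penumbra constant are all invented for illustration; the relevant property is that the march has a fixed upper bound on steps per sample, so the cost is predictable:

```cpp
// Hypothetical sketch of soft shadows from a signed distance field via
// sphere tracing toward the light, tracking how closely the ray skims
// past occluders. Scene and constants are made up; not LBP's code.
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Toy scene: a sphere of radius 1 at (0, 1, 0) resting on a ground plane.
static float sceneSDF(Vec3 p) {
    float sphere = std::sqrt(p.x * p.x + (p.y - 1.0f) * (p.y - 1.0f) + p.z * p.z) - 1.0f;
    float plane  = p.y;               // ground plane at y = 0
    return std::min(sphere, plane);
}

// Approximate soft shadow: march toward the light with a capped number of
// steps, darkening by how near the ray passes to geometry relative to the
// distance already travelled (wider, softer penumbra farther from contact).
static float softShadow(Vec3 origin, Vec3 lightDir, float k) {
    float shadow = 1.0f;
    float t = 0.02f;                  // small offset to avoid self-intersection
    for (int i = 0; i < 64; ++i) {    // fixed upper bound on work per sample
        float d = sceneSDF(add(origin, scale(lightDir, t)));
        if (d < 0.001f) return 0.0f;  // hit an occluder: fully shadowed
        shadow = std::min(shadow, k * d / t); // penumbra estimate
        t += d;
        if (t > 20.0f) break;         // well past the scene bounds
    }
    return std::max(shadow, 0.0f);
}

int main() {
    Vec3 lightDir = {0.0f, 0.8f, 0.6f};   // assumed directional light
    float len = std::sqrt(lightDir.x * lightDir.x + lightDir.y * lightDir.y
                          + lightDir.z * lightDir.z);
    lightDir = scale(lightDir, 1.0f / len);

    // Sample points on the ground near the sphere: points under it come out
    // fully shadowed, and the shadow fades smoothly with distance.
    for (float x = -2.0f; x <= 2.0f; x += 0.5f) {
        Vec3 p = {x, 0.0f, -1.0f};
        std::printf("x=%+.1f  shadow=%.2f\n", x, softShadow(p, lightDir, 8.0f));
    }
    return 0;
}
```

Nothing in that loop depends on how many triangles the scene has, only on the cost of the distance query and the fixed step cap, which is why I don't buy the "unpredictable overhead" objection.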