Someone correct me if I'm wrong, but I don't think realtime AA works on a per-object basis. I'd imagine AA performance is independent of scene complexity and more dependent on resolution, since it's done in screen space.
It's complicated. Generally, temporal reprojection, the resolve, and spatial passes (like SMAA 1x) are themselves done in screen space, but the information those passes have to work with, and the quality of the results, depends heavily on the scene makeup and, in some cases, on things that are done per-object.
For instance, if you want reprojection to be accurate in motion and to work for moving objects, you need some way of creating and storing good motion data, which means per-object velocity calculations written into a screen-space buffer for the TAA (and motion blur and whatnot) to use. By contrast, the TAA in Halo Reach just rejects pixels where it thinks motion is occurring, so it doesn't need precise motion vectors; it only needs to know whether motion is happening at all. But the results are also not as high quality, since the game then has no explicit AA in motion (the classic tactic of hoping the jaggies won't be as noticeable while things are moving, especially with motion blur; the obvious failure case being ultra-slow panning, where there's neither enough motion to hide the jaggies nor any AA to cover for the raw render).
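As a rough illustration, the per-object velocity write boils down to projecting the same position with this frame's and last frame's transforms and storing the screen-space delta. This is just a minimal CPU-side sketch, not any particular engine's shader code; every type and function name below is made up:

```cpp
// Minimal sketch of a per-object velocity write. Vec4, Mat4, mul and
// writeVelocity are placeholder names; in a real engine this lives in a
// vertex/pixel shader pass that outputs to a velocity buffer.
struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };

static Vec4 mul(const Mat4& M, const Vec4& v) {
    return Vec4{
        M.m[0][0]*v.x + M.m[0][1]*v.y + M.m[0][2]*v.z + M.m[0][3]*v.w,
        M.m[1][0]*v.x + M.m[1][1]*v.y + M.m[1][2]*v.z + M.m[1][3]*v.w,
        M.m[2][0]*v.x + M.m[2][1]*v.y + M.m[2][2]*v.z + M.m[2][3]*v.w,
        M.m[3][0]*v.x + M.m[3][1]*v.y + M.m[3][2]*v.z + M.m[3][3]*v.w
    };
}

// Project the same object-space position with this frame's and last frame's
// transforms (per-object!), take the NDC delta. The TAA resolve later samples
// its history buffer at (uv - velocity).
void writeVelocity(const Vec4& objPos,
                   const Mat4& model,     const Mat4& viewProj,
                   const Mat4& prevModel, const Mat4& prevViewProj,
                   float outVelocity[2]) {
    Vec4 clipNow  = mul(viewProj,     mul(model,     objPos));
    Vec4 clipPrev = mul(prevViewProj, mul(prevModel, objPos));
    // Perspective divide to NDC, then scale the delta into UV units.
    float ndcNowX  = clipNow.x  / clipNow.w,  ndcNowY  = clipNow.y  / clipNow.w;
    float ndcPrevX = clipPrev.x / clipPrev.w, ndcPrevY = clipPrev.y / clipPrev.w;
    outVelocity[0] = 0.5f * (ndcNowX - ndcPrevX);
    outVelocity[1] = 0.5f * (ndcNowY - ndcPrevY);
}
```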
Scene makeup also strongly affects results, for a few reasons. One, high-contrast regions of the screen with lots of high-frequency detail are where AA is needed most, but they're also the hardest case for deciding whether a reprojected sample is valid or not. Two, depth discontinuities also make that validity decision harder (hence cases like TO1886, where the TAA is mostly intended for shader aliasing and MSAA is used to clean up high-frequency geometry), and they create disocclusion areas where the TAA simply has nothing to work with.
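To make that concrete, here's a toy sketch of the kind of history-validation step where those decisions get made. It's plain C++ with made-up names and untuned thresholds, standing in for what would really be a shader, and it's only one of many possible rejection schemes:

```cpp
#include <algorithm>
#include <cmath>

struct ColorRGB { float r, g, b; };

// Sketch of a TAA resolve for one pixel. The 3x3 neighborhood array stands in
// for texture fetches around the current pixel; thresholds are illustrative.
ColorRGB resolveTAA(const ColorRGB current,
                    const ColorRGB history,
                    const ColorRGB neighborhood[9],
                    float depthNow, float depthReprojected,
                    float blendFactor = 0.1f) {
    // 1) Depth check: a big mismatch between the current depth and the depth
    //    at the reprojected location suggests disocclusion -> drop the history.
    if (std::fabs(depthNow - depthReprojected) > 0.01f * depthNow) {
        return current;
    }
    // 2) Neighborhood clamp: constrain history to the min/max color box of the
    //    current 3x3 neighborhood. In flat regions the box is tight and history
    //    survives; around high-contrast, high-frequency detail the clamp either
    //    lets ghosting through or throws away exactly the samples the AA needed.
    ColorRGB lo = neighborhood[0], hi = neighborhood[0];
    for (int i = 1; i < 9; ++i) {
        lo.r = std::min(lo.r, neighborhood[i].r); hi.r = std::max(hi.r, neighborhood[i].r);
        lo.g = std::min(lo.g, neighborhood[i].g); hi.g = std::max(hi.g, neighborhood[i].g);
        lo.b = std::min(lo.b, neighborhood[i].b); hi.b = std::max(hi.b, neighborhood[i].b);
    }
    ColorRGB clamped{
        std::clamp(history.r, lo.r, hi.r),
        std::clamp(history.g, lo.g, hi.g),
        std::clamp(history.b, lo.b, hi.b)
    };
    // 3) Exponential blend of the current frame into the (validated) history.
    return ColorRGB{
        blendFactor * current.r + (1.0f - blendFactor) * clamped.r,
        blendFactor * current.g + (1.0f - blendFactor) * clamped.g,
        blendFactor * current.b + (1.0f - blendFactor) * clamped.b
    };
}
```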
Both of the above paragraphs also obviously introduce motion itself as an element of scene complexity. Even camera movement (or perhaps most predominantly camera movement) is extremely important in terms of the kinds of crap you run into. And this matters for graphics in general, more or less. For instance, it's REALLY DIFFICULT to hold the camera stable in FFXIII when you're pointing it sideways at a character walking in a straight line; this makes perfect sense given that fast parallax would make the huge amount of nearby detail implemented as skybox stand out more. The terrain in UC4's Madagascar is just rough enough that I was rarely able to move the jeep fast without moderate angular camera movement, which is great, because it means the noisy MB is breaking up middle-frequency shenanigans like LOD transitions in the grass (which, at the contrasts and densities being used, is a challenging thing to hide).
Some of those sparks do reflect, but not all. Perhaps to save resources.
They might be getting rendered in different ways. One of the challenges with SSR is that it relies on the depth buffer, and transparencies don't like depth buffers.
The typical quirk would be that if a transparency is in front of a wall in screen-space, naive SSR would reflect it as if it were a texture painted on the wall.
edit: Kind of like this:
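Roughly, the failure mode falls out of a naive march against the depth buffer, as in the sketch below. Made-up C++ types and lookups stand in for texture reads, and it assumes the color buffer being sampled already has the transparency composited in:

```cpp
#include <functional>

struct Vec3 { float x, y, z; };
struct Vec2 { float x, y; };

// Hypothetical lookups into the depth buffer and the already-composited
// color buffer (placeholders for texture fetches).
using DepthAt = std::function<float(Vec2)>;
using ColorAt = std::function<Vec3(Vec2)>;

// Naive screen-space reflection march in linear depth.
Vec3 naiveSSR(Vec2 startUV, Vec2 stepUV, float startDepth, float stepDepth,
              const DepthAt& depthAt, const ColorAt& colorAt, int maxSteps = 64) {
    Vec2 uv = startUV;
    float rayDepth = startDepth;
    for (int i = 0; i < maxSteps; ++i) {
        uv.x += stepUV.x;  uv.y += stepUV.y;  rayDepth += stepDepth;
        // A transparent particle in front of the wall never wrote depth, so
        // depthAt(uv) is the wall's depth and the ray "hits" the wall...
        if (depthAt(uv) <= rayDepth) {
            // ...then samples the composited color there, which already
            // contains the particle: the reflection shows it as if it were
            // painted onto the wall.
            return colorAt(uv);
        }
    }
    return Vec3{0.f, 0.f, 0.f};   // no hit: fall back to a cubemap or nothing
}
```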
I remember Second Son, where some GPU particles had collision enabled, but just a few (most of them fell through the floor).
Oh sixth-gen games, why are you still so awesome...