This sort of makes sense, because AI often has a crapload of code that rarely gets executed (on a per-line basis), whereas SPU optimization makes most sense for small amounts of code run repeatedly with different parameters and inputs. Occasionally AI will fit into the latter, but often not for games of this scope.
Unless you're modelling behaviour for 10,000 stupid fish ;) or identically behaving soldiers, it's going to be ridiculously hard to parallelize in terms of code and data. It's easy to split the tasks among the different processors, but using them efficiently is really hard.
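
To make that concrete, this is the shape of workload an SPU is happy with: a flat array of identical agents and one tiny branch-free kernel run across all of them. Completely made-up fish example in plain C (the struct, the function, and the magic steering constant are all invented for illustration):

```c
#include <stddef.h>

/* Hypothetical "stupid fish": position and velocity, laid out flat so
 * whole batches can be streamed through the same code. */
typedef struct {
    float px, py, pz;
    float vx, vy, vz;
} Fish;

/* One small kernel, no per-fish branching, run over thousands of identical
 * entities: exactly the "small code, lots of data" case that suits an SPU. */
void update_fish(Fish *school, size_t count, float dt,
                 float tx, float ty, float tz) /* shared target to swim toward */
{
    for (size_t i = 0; i < count; ++i) {
        Fish *f = &school[i];
        /* steer a little toward the shared target */
        f->vx += (tx - f->px) * 0.01f;
        f->vy += (ty - f->py) * 0.01f;
        f->vz += (tz - f->pz) * 0.01f;
        /* integrate position */
        f->px += f->vx * dt;
        f->py += f->vy * dt;
        f->pz += f->vz * dt;
    }
}
```

Typical game AI is the opposite: different behaviour trees per NPC type, pointer-chasing through world state, branches everywhere, and most of that code only touching a handful of entities per frame.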
This seems to be a point that lots of posters in this thread are missing. Multithreading alone isn't the problem with Cell, and neither is in-order execution. It's the memory model (each SPE working out of its own 256KB local store, with everything moved in and out by explicit DMA) that's the real pain in the ass, and that's what Barbarian, Gabe Newell, and others are pissed off about, since it needs investment that won't pay off anywhere else.
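
For anyone who hasn't actually written SPU code, the memory-model pain looks roughly like this: nothing in main memory is directly addressable from an SPE, so every job has to DMA its inputs into local store, crunch them, and DMA the results back out. A minimal single-batch sketch from memory, using the mfc_* intrinsics from the SDK's spu_mfcio.h (treat the exact signatures and the PPU-side setup as approximate):

```c
#include <spu_mfcio.h>

#define TAG  1
#define N    2048   /* floats per batch; the whole buffer must fit in the 256KB local store */

/* Local-store buffer; DMA requires 16-byte alignment, 128 is the sweet spot. */
static float ls_buf[N] __attribute__((aligned(128)));

/* argp is the effective (main-memory) address of the input array,
 * handed over by the PPU when it kicks off the SPU program. */
int main(unsigned long long speid, unsigned long long argp,
         unsigned long long envp)
{
    /* Main memory isn't directly addressable from here,
     * so first DMA the batch into local store... */
    mfc_get(ls_buf, argp, sizeof(ls_buf), TAG, 0, 0);
    mfc_write_tag_mask(1 << TAG);
    mfc_read_tag_status_all();   /* stall until the transfer lands */

    /* ...do the actual (trivial) work... */
    for (int i = 0; i < N; ++i)
        ls_buf[i] *= 2.0f;

    /* ...and DMA the results back out before anyone else can see them. */
    mfc_put(ls_buf, argp, sizeof(ls_buf), TAG, 0, 0);
    mfc_write_tag_mask(1 << TAG);
    mfc_read_tag_status_all();
    return 0;
}
```

And that's the trivial single-buffered version; real SPU jobs double-buffer so the next batch is in flight while the current one is being crunched, and anything involving pointers, variable-sized structures, or shared mutable state gets ugly fast. That's the investment that doesn't transfer to any other platform.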