Granted, I'm sure processing such as AI, physics & animation could probably benefit from a speed boost over traditional CPUs. However, even the CPUs of today are hardly "traditional" (CELL in particular), and if, going forward, we're going to see the next generation of CPUs advancing in terms of parallel processing capacity (8+ cores, tens of threads, wider SIMD, greater floating point performance, etc.), then I STILL don't see why any console developer would have any reason to move all these non-graphics-centric tasks onto (what is, in the console space, essentially) an already overloaded GPU..
The reason is simple: because it can be faster, simpler to implement, or just because it's not possible on GPUs with the default API.
Some algorithms don't fit the pure graphics pipeline, so either they're still done on the GPU, but less efficiently, or they're done on the CPU, which means less load on the GPU, but probably not the quality you'd get on the GPU.
E.g. you want to add fog as a postprocess. With a pixel shader you read the depth of each pixel and calculate some alpha-blend value. With GPGPU you can run one 'kernel' per 4x4 pixel area, calculating the fog for the first pixel and checking for each following pixel whether its depth value is within a range of e.g. +-5%; if so, you blend the fog with the same intensity, otherwise you recalculate the fog intensity.
In the worst case you'll get the same performance as with the pixel shader, recalculating fog for every pixel, but more likely nearby areas will get similar intensity, which makes the fog as cheap as 1/16 the cost.
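A minimal CUDA sketch of that fog idea (all names and constants are made up, and I'm assuming a linear, positive depth buffer):

```cuda
// Hypothetical sketch: one thread handles a 4x4 pixel tile, computing the
// fog once for the tile's first pixel and reusing that intensity for every
// neighbour whose depth is within +-5% of it. Launch with one thread per
// tile, e.g. block(8,8) and grid((w/4+7)/8, (h/4+7)/8).
#include <cuda_runtime.h>

__device__ float fogIntensity(float depth)
{
    return 1.0f - __expf(-depth * 0.01f); // placeholder exponential fog
}

__global__ void fogTileKernel(const float* depth, float* fogAlpha,
                              int width, int height)
{
    int tx = (blockIdx.x * blockDim.x + threadIdx.x) * 4; // tile origin x
    int ty = (blockIdx.y * blockDim.y + threadIdx.y) * 4; // tile origin y
    if (tx >= width || ty >= height) return;

    float refDepth = depth[ty * width + tx];
    float refFog   = fogIntensity(refDepth);  // calculated once per tile

    for (int y = ty; y < min(ty + 4, height); ++y)
        for (int x = tx; x < min(tx + 4, width); ++x)
        {
            float d = depth[y * width + x];
            // close enough to the reference depth: blend with the same
            // intensity, otherwise recalculate it for this pixel
            fogAlpha[y * width + x] =
                (fabsf(d - refDepth) <= 0.05f * refDepth)
                    ? refFog : fogIntensity(d);
        }
}
```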
You can do a lot more optimizations, because a lot of the input (e.g. textures) is the same for groups of pixels. Take sampling through volume textures for clouds: the first few iterations can be shared by 16 pixels, and only if you're above some threshold do you add samples for every single pixel.
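For instance, a made-up CUDA sketch of that shared cloud sampling (launch with 4x4 thread blocks, one block per pixel group; the straight ray in z, the texture setup and all constants are placeholder assumptions):

```cuda
// Hypothetical sketch: the 4x4 group shares the first coarse march through
// the cloud volume; only above a density threshold does each pixel refine
// with its own samples.
__global__ void cloudTileKernel(cudaTextureObject_t cloudVolume,
                                float* outDensity, int width, int height)
{
    __shared__ float coarse;              // coarse result, shared by the group

    int x = blockIdx.x * 4 + threadIdx.x;
    int y = blockIdx.y * 4 + threadIdx.y;

    if (threadIdx.x == 0 && threadIdx.y == 0)
    {
        // one thread marches a few samples along the tile-centre ray
        float cx = (blockIdx.x * 4 + 2) / (float)width;
        float cy = (blockIdx.y * 4 + 2) / (float)height;
        float acc = 0.0f;
        for (int i = 0; i < 8; ++i)
            acc += tex3D<float>(cloudVolume, cx, cy, i / 8.0f);
        coarse = acc / 8.0f;
    }
    __syncthreads();                      // broadcast the shared result

    float result = coarse;
    if (coarse > 0.1f)                    // hypothetical refinement threshold
    {
        // above the threshold: every pixel adds its own samples
        for (int i = 0; i < 8; ++i)
            result += tex3D<float>(cloudVolume, x / (float)width,
                                   y / (float)height, (i + 0.5f) / 8.0f) / 8.0f;
    }
    if (x < width && y < height)
        outDensity[y * width + x] = result;
}
```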
On pixel shaders you often do the same work on nearby pixels, simply because you can barely exchange data between them.
Additionally you get the freedom to implement ideas that weren't possible on the GPU without big headaches, like compressing DXT textures, e.g. if you render some impostor billboards for the meteoroids of a space game during runtime. Or you can skin several objects at the same time: some group of soldiers may have a very similar pose, just offset by position. Or you can skin them directly for several viewports, e.g. for the player's camera and some shadow-buffer views. (Yes, you could do that already with geometry shaders, but not on current consoles, and also not with all the freedom of GPGPU.)
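A hypothetical CUDA sketch of that "skin a whole squad at once" case, reduced to one bone per vertex to keep it short; all types and names here are made up:

```cuda
// One pose (bone palette) is shared by every instance in the group, which
// only differ by a position offset. Launch with one thread per vertex.
#include <cuda_runtime.h>

struct Bone       { float4 row0, row1, row2; }; // 3x4 bone matrix
struct SkinVertex { float3 pos; int bone; };

__device__ float3 boneTransform(const Bone& b, float3 p)
{
    return make_float3(
        b.row0.x * p.x + b.row0.y * p.y + b.row0.z * p.z + b.row0.w,
        b.row1.x * p.x + b.row1.y * p.y + b.row1.z * p.z + b.row1.w,
        b.row2.x * p.x + b.row2.y * p.y + b.row2.z * p.z + b.row2.w);
}

__global__ void skinGroupKernel(const SkinVertex* verts, int numVerts,
                                const Bone* pose,          // shared by the group
                                const float3* offsets, int numInstances,
                                float3* outPos)            // numInstances * numVerts
{
    int v = blockIdx.x * blockDim.x + threadIdx.x;
    if (v >= numVerts) return;

    // skin the vertex once...
    float3 s = boneTransform(pose[verts[v].bone], verts[v].pos);

    // ...and emit it for every soldier, just offset by position
    for (int i = 0; i < numInstances; ++i)
        outPos[i * numVerts + v] = make_float3(s.x + offsets[i].x,
                                               s.y + offsets[i].y,
                                               s.z + offsets[i].z);
}
```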
Besides that, do modern GPGPU solutions provide the means to execute both core graphics & GP code at the same time..?
You can run CUDA and normal OpenGL/D3D stuff at the same time, but the driver will probably interleave your work.
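E.g. with the CUDA<->OpenGL interop it looks roughly like this; the cudaGraphics* calls are the real CUDA runtime interop API, but the kernel and the frame function are made up for illustration:

```cuda
// Hypothetical sketch of mixing CUDA and OpenGL on the same GPU: register a
// GL vertex buffer with CUDA once, then each frame map it, run a GPGPU
// kernel on it, unmap it and let the normal draw calls use it.
#include <cuda_gl_interop.h>   // assumes GL headers are available

__global__ void animateVerts(float3* v, int n, float t)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i].y += 0.01f * __sinf(t + i);   // placeholder GPGPU pass
}

void runFrame(GLuint vbo, int numVerts, float time)
{
    static cudaGraphicsResource* res = 0;
    if (!res)   // register the GL buffer with CUDA once
        cudaGraphicsGLRegisterBuffer(&res, vbo, cudaGraphicsRegisterFlagsNone);

    cudaGraphicsMapResources(1, &res, 0);         // CUDA owns the buffer now
    float3* devPtr = 0;
    size_t  size   = 0;
    cudaGraphicsResourceGetMappedPointer((void**)&devPtr, &size, res);

    animateVerts<<<(numVerts + 255) / 256, 256>>>(devPtr, numVerts, time);

    cudaGraphicsUnmapResources(1, &res, 0);       // hand it back to GL
    // ...normal GL draw calls using vbo follow here; the driver interleaves
    // the CUDA and the GL work on the one GPU...
}
```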
Another big benefit of GPGPU for consoles would be (if they merged CPU+GPU) better possibilities for load balancing. Some games are CPU bound, some are GPU bound; if you had the freedom to decide yourself how much of which work you put on the ..hmm.. unified-PU core, you'd make better use of it. Will you do skinning on the SPU or the GPU? Will you do postprocessing on the SPU or the GPU? Maybe decide dynamically by the amount of load each has? How will you schedule all the work between SPU and GPU, and how will you manage all the buffers?
With just one unit doing all the work, you'd have a lot less headache about all that.
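Just to illustrate the kind of decision logic you'd otherwise have to carry around, a made-up host-side sketch (real engines would use proper profiling, not a single timing pair):

```cuda
// Hypothetical dynamic load balancing between two units: look at how long
// each side took last frame and move the movable work (skinning,
// postprocessing, ...) to whichever has headroom.
enum Unit { UNIT_CPU, UNIT_GPU };   // e.g. SPU vs. GPU on a Cell-style console

Unit chooseUnitForSkinning(float cpuMsLastFrame, float gpuMsLastFrame)
{
    // placeholder policy: schedule on the less loaded side
    return (cpuMsLastFrame > gpuMsLastFrame) ? UNIT_GPU : UNIT_CPU;
}
```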