A raytracing chip wouldn't just be for graphics. However, there's a whole discussion here that's bigger than the scope of this thread. I'll point you
here for further research and discourse.
I could see value in a "ray-tracing chip", but I have to wonder how exactly it would work; you'd need some sort of scene graph that can be walked to cast rays.
I have to wonder if a hardware solution to that wouldn't end up being inflexible, leading to a lot of duplicate data: you'd have the game's scene graph, the physics database, and the one you use for raytracing. Then you have to wonder if the hardware data structure for efficient raytracing is amenable to dynamic update...
It would be interesting though, with potential uses in graphics, audio and gameplay.
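To make the dynamic-update concern concrete, here's a minimal sketch (my own illustration, not any actual hardware's scheme) of the cheap path for animated scenes: keeping the BVH topology fixed and just "refitting" the bounding boxes bottom-up each frame. A software structure can do this trivially; the open question is whether an opaque hardware-baked format could.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class Node:
    # Axis-aligned bounding box; leaves carry object bounds directly.
    lo: Vec3
    hi: Vec3
    left: Optional['Node'] = None
    right: Optional['Node'] = None

def refit(node: Node) -> Tuple[Vec3, Vec3]:
    """Recompute interior AABBs from children without changing topology."""
    if node.left is None:  # leaf: bounds were updated by the animation step
        return node.lo, node.hi
    llo, lhi = refit(node.left)
    rlo, rhi = refit(node.right)
    node.lo = tuple(min(a, b) for a, b in zip(llo, rlo))
    node.hi = tuple(max(a, b) for a, b in zip(lhi, rhi))
    return node.lo, node.hi

# Two leaves under one root; one object then moves +7 on the x axis.
a = Node(lo=(0, 0, 0), hi=(1, 1, 1))
b = Node(lo=(2, 2, 2), hi=(3, 3, 3))
root = Node(lo=(0, 0, 0), hi=(3, 3, 3), left=a, right=b)
b.lo, b.hi = (7, 2, 2), (8, 3, 3)   # animation update
refit(root)                          # root bounds now cover the new position
```

A refit is O(n) and keeps ray traversal working, at the cost of gradually degrading tree quality; a full rebuild restores quality but is much more expensive, which is exactly the kind of trade-off a fixed hardware format would have to expose somehow.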
I'm also not a fan of the blitter idea. If the point is to move things into fast memory for the GPU, then I'd do it with the GPU, because then you don't have to synchronize anything explicitly.
If the idea is to be able to move memory around for the CPU, how big do the copies have to be to offset the latency?
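The break-even point is easy to ballpark. With entirely hypothetical numbers (the setup/sync cost and bandwidths below are illustrative, not from any real console), a dedicated copy engine only wins once the copy is tens of kilobytes:

```python
def breakeven_bytes(setup_s: float, bw_dma: float, bw_cpu: float) -> float:
    """Copy size at which a DMA engine with fixed setup cost matches a CPU copy.

    time_dma(N) = setup_s + N / bw_dma
    time_cpu(N) = N / bw_cpu
    Equal when N = setup_s / (1/bw_cpu - 1/bw_dma); requires bw_dma > bw_cpu.
    """
    return setup_s / (1.0 / bw_cpu - 1.0 / bw_dma)

# Hypothetical: 2 µs of kick-off + synchronization overhead,
# 20 GB/s sustained DMA vs 10 GB/s for a plain CPU memcpy.
n = breakeven_bytes(2e-6, 20e9, 10e9)  # ~40 KB before the blitter pays off
```

Anything smaller than that and the CPU has finished before the blitter's setup and completion signalling are even done, which is why small scattered copies are a poor fit for a separate unit.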
If the idea is format conversion for GPU formats, I still don't get it: you pre-swizzle textures in your asset pipeline, and GPUs can already read and write swizzled/unswizzled and tiled/untiled formats (with varying performance penalties), so they already have all of the functionality of the proposed "blitter" without the synchronization issue you'd have with a separate unit.
For something to become fixed function, it really needs to have established functionality, i.e. it needs to help the way everyone does something, which is why I think fixed-function acceleration for things like SVOs (sparse voxel octrees) is unlikely.
I could see having a separate set of CUs tied to the CPU for GPGPU work, IF significant changes to them could dramatically increase their performance on less suitable workloads. I'm not sure I know what those changes would look like.
I'd guess most of the units are incredibly mundane:
Audio DSP
Compression/Decompression Engine
etc