No, he's not. Modern GPUs are already software-based by your definition: the rendering pipeline is fixed, but the steps within it are highly programmable. You can already take a triangle and run lots of software-based processing on it. So what's the difference between what we have now and moving to compute-based software rasterisers? There's no clear step forwards, or even sideways, which is why people are talking about voxels as at least a different kind of renderer. Anything triangle-based is already highly programmable.
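To make the distinction concrete: about the only genuinely fixed-function step left in the triangle path is the rasteriser itself, and even that is just a coverage test a compute shader could run per pixel. Below is a minimal sketch of that edge-function test in plain Python (vertex coordinates, buffer size, and function names are all illustrative, not any particular GPU's implementation):

```python
# Minimal software-rasteriser sketch: the edge-function coverage test
# that a GPU's fixed-function rasteriser does in hardware, written as
# the kind of per-pixel code a compute shader would run instead.

def edge(ax, ay, bx, by, px, py):
    # Signed area term for edge (a -> b) against point p;
    # >= 0 means p is on the inside for a clockwise-in-screen-space triangle.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterise_triangle(v0, v1, v2, width, height):
    # Return the list of pixels whose centres are covered by the triangle.
    covered = []
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5  # sample at the pixel centre
            inside = (edge(*v0, *v1, px, py) >= 0 and
                      edge(*v1, *v2, px, py) >= 0 and
                      edge(*v2, *v0, px, py) >= 0)
            if inside:
                covered.append((x, y))
    return covered

# Illustrative triangle in an 8x8 buffer: covers the half below x + y = 8.
pixels = rasterise_triangle((0, 0), (8, 0), (0, 8), 8, 8)
print(len(pixels))  # 36 pixel centres covered
```

The point being: once this loop runs on the same ALUs as the pixel and vertex shaders, the "software renderer" label stops marking any clear architectural boundary.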
This was covered in the earlier half of this thread, IIRC, and your new slide isn't really adding to it. Yes, compute is there for devs to use however they want. But how do you define software-based rasterising such that programmable pixel and vertex shaders count as fixed function, when those same processors will be doing the compute in the new software renderer? PS4 is a step along the path already laid out by a decade of increasingly programmable graphics hardware. To claim a significant advance, you need to establish what that next step actually requires. In terms of your thread question: what are the things in rendering that PS4 can breathe new life into?