I'm sorry to ask such a basic question, but what is the primary difference between programmable logic and fixed logic? Furthermore, how does it (positively) affect power drain?
I'm sure the b3d audience with its rich hw design background could pile facts on you here, but in the broadest, perhaps philosophical perspective, the difference is the same as between potential and accomplishment. You can think of it all as a spectrum - fully programmable logic sitting at the 'potential' extreme, and fully fixed logic sitting at the 'accomplished' extreme. Fully-programmable logic hardly knows how to do anything resembling a complete task from the task domain of interest, but it can do all the necessary building ops from the domain (and perhaps other domains too), and it can also acquire knowledge of how to arrange those in a meaningful fashion for the domain. At the opposite end is the fully-fixed logic, which knows a priori all the tasks of the domain and has them readily implemented as functions, or IOW, is already accomplished. Now, clearly, in that second case somebody has to draw a line somewhere re what constitutes the 'complete list of tasks from the domain' in terms of practice, ergo the fixedness of the approach.
Of course, in reality there's hardly such a thing as fully-programmable or fully-fixed logic - neither are contemporary CPUs entirely free of fixed functionality, nor were GPUs ever devoid of any programmability (here we use them as the two extremes of the spectrum when it comes to graphics pipelines), as both extremes - absolute potential and absolute accomplishment - would've been useless to us.
Now, potential implies an associated cost of the effort needed to transform it into accomplishment for any given task (which is what we are ultimately after when we employ the potential). That is, if I could do just two ops - addition and multiplication - it'd take some extra effort on my part, outside of those original ops, to carry out something like (a + b) * c, which happens to be my task for today. That extra something is the ability to arbitrarily arrange my two ops, and to arbitrarily route the intermediate results, so that I could accomplish today's task. Remember that I'm an entity full of potential and not much a-priori knowledge of the task at hand.
On the other hand, if I were somebody who could only do f(a, b, c) = (a + b) * c, and that was all I could do, then there would be no extra cost associated with carrying out my task today - I don't need any extra effort to arrange or route anything arbitrary - the required function is already what I do, and I do it well! Of course, tomorrow the task might be a * (b + c) + d, in which case my second form would be totally screwed. Luckily, my first form would be able to do that, but it would cost it a bit more effort to 'reconfigure' itself for each new day's task.
Now, replace 'me' in the above with 'functional block', 'effort' with 'electricity', and 'days' with any meaningful IC time quantum (say, clocks), and you'd get the picture.
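To make the 'extra effort' bit a little more tangible, here's a rough C sketch (everything in it - the opcode/instruction structs, the step counter standing in for energy - is made up purely for illustration, not a model of any real block): the fixed form does its one task in a single step, while the programmable form pays per-instruction fetch/decode/routing overhead on every task, but can take on tomorrow's task just by loading a different program.

```c
#include <stdio.h>

/* Fixed logic: the whole task is wired in as one function - one step, no
 * fetch/decode/routing overhead, but it can only ever do this one thing. */
static int fixed_block(int a, int b, int c)
{
    return (a + b) * c;
}

/* Programmable logic: only ADD and MUL are wired in; the arrangement of ops
 * and the routing of intermediates come from a per-task "program". */
typedef enum { OP_ADD, OP_MUL } opcode;

typedef struct {
    opcode op;
    int    src0, src1, dst;   /* register indices - the "routing" */
} instruction;

static int programmable_block(const instruction *prog, int n_instr,
                              int *regs, int result_reg, int *steps)
{
    for (int i = 0; i < n_instr; i++) {
        const instruction *ins = &prog[i];
        int x = regs[ins->src0], y = regs[ins->src1];
        regs[ins->dst] = (ins->op == OP_ADD) ? x + y : x * y;
        *steps += 1;   /* every instruction fetched, decoded and routed costs */
    }
    return regs[result_reg];
}

int main(void)
{
    /* Today's task: (a + b) * c, with a=2, b=3, c=4 */
    int a = 2, b = 3, c = 4;

    printf("fixed:        %d (1 step, but only this one task)\n",
           fixed_block(a, b, c));

    /* The programmable block needs a program and registers arranged for it. */
    int regs[4] = { a, b, c, 0 };
    instruction today[] = {
        { OP_ADD, 0, 1, 3 },   /* r3 = a + b */
        { OP_MUL, 3, 2, 3 },   /* r3 = r3 * c */
    };
    int steps = 0;
    int r = programmable_block(today, 2, regs, 3, &steps);
    printf("programmable: %d (%d steps of fetch/route overhead)\n", r, steps);

    /* Tomorrow's task: a * (b + c) + d - the fixed block is stuck, the
     * programmable one just loads a different program. */
    int d = 5;
    int regs2[5] = { a, b, c, d, 0 };
    instruction tomorrow[] = {
        { OP_ADD, 1, 2, 4 },   /* r4 = b + c */
        { OP_MUL, 0, 4, 4 },   /* r4 = a * r4 */
        { OP_ADD, 4, 3, 4 },   /* r4 = r4 + d */
    };
    steps = 0;
    r = programmable_block(tomorrow, 3, regs2, 4, &steps);
    printf("programmable, new task: %d (%d steps)\n", r, steps);

    return 0;
}
```

The step counter is obviously a crude proxy - in real silicon that per-op overhead shows up as instruction fetch, decode, register-file traffic and so on, all of which burn power that the fixed block simply never spends.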
Is that like having a particular part of the GPU do nothing but apply AA? So more features are "fixed" and "guaranteed" (so to speak), leaving the developers with little in the way of wiggle room?
More or less. Apropos, it was only quite recently that the parts of the modern GPU responsible for AA acquired any programmability worth speaking of.