The term 'general purpose' is just plain meaningless, like a 'general purpose' book. All code has a specific purpose, and all code is written with differing degrees of optimization for different hardware, whether in a high-level or low-level language. The idea of a CPU that runs all code well, without that code having to be targeted at it, gives us the notion of a 'general purpose' CPU, but I think it's a concept born of oversimplification and marketing.
The persistence of the term for a good number of years indicates some kernel of truth, though, even if the language is inaccurate.
Maybe it is possible to define some categories of code that usually pop up when people discuss "general purpose" code.
The first broad category I can think of is "commodity code". Even if there is no such thing as a general purpose book, we do differentiate between a newspaper and Proust.
There is just code out there that won't be optimized beyond whatever the default flags happen to be on the compiler on the developer's machine.
We can probably divide that grouping again, into "commodity due to cost" and "commodity due to low utility". The second one is mostly non-performance critical, so it's not useful as an argument for x86 when a high-end chip would be overkill anyway.
The first group is the large pool of applications where performance would be nice, but other constraints limit the amount of optimization that can be done. Just like we don't pay the premium to have a modern-day Tolstoy write our ad copy, we don't pay someone to massage the code if the money just isn't in it.
More speculative:
"special purpose, dynamic optimum", code that has a specific purpose but, for whatever reason, has execution requirements that shift drastically, perhaps due to behaviors triggered unpredictably by the data.
Sub-categories:
"special purpose, dynamic optimum (instruction)", code with branch and instruction mixes that are difficult to profile at compile time, or are a solution to a problem that simply has no ideal single instruction combination for a given design.
x86 cores are likely very good at this, or at least better than a single SPE or PPE.
"special purpose, dynamic optimum (data)", code with difficult-to-profile data access behavior.
This does assume that there is an optimum. If there isn't, then this falls under the category of "SOL" code.
An SPE may still do well here, if the data behavior can be usefully contained within the bounds of the LS, or is not so scattered that it can't be handled by creative DMA paging. The downside is that for commodity code, this has to be handled creatively without a code genius.
This is very much a moving target. If compilers gain the ability to magically produce highly robust, well-optimized code from many high-level sources, then the range of commodity code that does well, even if it is considered "general purpose", would increase.
A category with some overlap with commodity code:
"SOL" code. Just plain bad luck.
Two categories:
"SOL because of the programmer", this is just junk code.
"SOL for a reason", perhaps the problem being worked on is poorly defined, or the spec keeps changing. Perhaps it's just a really hard problem. I can't think of a good one, but I'm sure these problems exist.