Kentsfield as an alternative to Cell in PS3

Which is unoptimized code! Those algorithms will perform with varying success on different architectures. An architecture that runs them well is considered to be good at 'general purpose code'. The actual tasks that 'general purpose code' performs are not 'general purpose tasks': 'general purpose tasks' don't exist. Thus a chip that performs poorly on 'general purpose code' may not be bad at the task that code is doing; it may just need a different, non-generalised implementation. That's not to say that any processor can perform well at any task, though. Some processors just aren't well designed for certain tasks, no matter what the implementation.

You're confusing high-level languages with low-level languages. Just because you're using a high-level language doesn't mean your code is unoptimised. You can have optimised AND unoptimised high-level code.
 
Actually I think you're the one confusing the two with 'general purpose code'.

High-level language code you compile for multiplatform use (Cal's definition of 'general purpose') is dependent on the compiler for optimizations, and thus is unoptimized for widely different architectures from the original target architecture. Such high-level code can't be compiled into both an optimized x86 implementation to perform a task and also into a 4-way SIMDed algorithm from the same high-level source. If you write a high-level AI routine that's optimized for x86, the compilation to Cell won't be optimized, and vice versa.

Taking Cal's example :
e.g. a C++ FFT routine which only uses standard floating point operations.

A C++ FFT routine written and optimized for x86 will run very poorly on Cell, versus a Cell optimized C++ FFT routine.
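To make that concrete, here is a minimal sketch in plain C++ (not a real FFT, and not taken from any actual codebase; the function names and layouts are invented for illustration) of why the same high-level task ends up with architecture-specific source. The first version is the natural scalar, array-of-structures form that suits a classic out-of-order x86 core; the second restructures the data into a structure-of-arrays form so four elements are handled per iteration, which is the shape a 4-way SIMD unit like an SPE wants. A compiler can't make that data-layout change for you from the scalar source.

#include <cstddef>

struct Complex { float re, im; };

// Scalar, array-of-structures form: reads naturally, and an out-of-order
// x86 core copes well with it as written.
void twiddle_scalar(Complex* data, const Complex* w, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i) {
        float re = data[i].re * w[i].re - data[i].im * w[i].im;
        float im = data[i].re * w[i].im + data[i].im * w[i].re;
        data[i].re = re;
        data[i].im = im;
    }
}

// Structure-of-arrays form, four lanes per step: the layout a 4-way SIMD
// unit (SPE, SSE, AltiVec) wants, but getting here means changing the data
// layout in the source itself. n is assumed to be a multiple of 4.
void twiddle_soa(float* re, float* im, const float* wre, const float* wim, std::size_t n)
{
    for (std::size_t i = 0; i < n; i += 4) {
        for (std::size_t lane = 0; lane < 4; ++lane) { // one 4-wide vector op
            float r = re[i + lane] * wre[i + lane] - im[i + lane] * wim[i + lane];
            float m = re[i + lane] * wim[i + lane] + im[i + lane] * wre[i + lane];
            re[i + lane] = r;
            im[i + lane] = m;
        }
    }
}

On top of that, a Cell-optimized routine would also have to block its data to fit each SPE's 256 KB local store and schedule the DMA transfers by hand, none of which the x86-targeted source expresses.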

The term 'general purpose' is just plain meaningless, like a 'general purpose' book. All code has a specific purpose, and all code is written in differing degrees of optimization for different hardware, whether written in a high-level or low-level language. The idea of a CPU that runs all code well without having to worry about being targeted by that code gives us the notion of a 'general purpose' CPU, but it's a concept that's born of oversimplifications and marketing, I think.
 
The "general purpose" code is still optimized for x86 therefore it is optimized. Whether or not it is optimized for a non-target architecture is irrelevent. If it's low level it's not "general purpose" since it will take a lot more work to run it on other architectures sometimes requiring a total rewrite.
 
The "general purpose" code is still optimized for x86 therefore it is optimized.
That knocks over Cal's idea then...
To me the term "general purpose code" does not sound like unoptimized code [it is optimized]. It's more like the high level language code you can directly compile for multi-platform use.
...as the code is unoptimized in multiplatform use. How can code be optimized and yet also be multiplatform and remain optimized? If it could, we wouldn't have developers complaining about the difficulty of reinventing algorithms for different architectures!

In which case it's not 'general purpose'. It's 'x86 purpose'. Or putting it another way, 'general purpose' code is defined by you as 'high-level code that runs well on an x86 or similar classical architecture without regard for how well it performs on different CPU architectures'. To which one asks, why is it called 'general purpose'? What's 'general' about it?

In reality the term seems to have come about with the evolution of processors. For a while we had CPUs that did memory and integer jobs. Floating point and vector maths etc. were added as extras, and those old jobs became the 'general purpose' of today, despite the sloppiness of the term (as all code performs a specific task). The term was also raised with the consoles only to claim an inability to perform some tasks effectively, whereas the term's terrible definition only covers the implementation of tasks for a specific architecture.

It's a useless concept that should be dropped from all intelligent discussion on programming and processing architectures. Discussion should centre on specific performance in floating point maths, integer calculations, memory movement, handling of branches, optimally solving a given task, etc.
 
Ok so what is your opinion on "general purpose CPUs"? Is there such a thing?
I'd have to say there isn't, on the principle that they all are, and so the identification is redundant. A CPU performs all tasks. It can sort databases, calculate prime numbers, draw 3D graphics, or play a game of chess. There are certainly specializations within the CPU genus of processors, and perhaps some will argue that a CPU that isn't specialized to any one task more than others is generalized, or a 'general purpose' CPU. But ultimately the performance in a task isn't a categorizing factor: architecture is.

The CPU, the Central Processing Unit, has to be general purpose, as it is central to all other functions. Hanging off this will be specialist processors like GPUs or DSPs. I don't know of any machine that separates integer maths, memory management, vector maths, and all the other aspects of processing into separate processing units. If such a thing exists, and one of its processors manages all of that, it could be considered a non-general-purpose CPU, at which point the classification of other CPUs as general purpose would make sense, if there were ever to be confusion as to what type of CPU a CPU was. Which there won't be, as there's only one type that's common (or even exists?), and it's the one that does a bit of everything!
 
The term "general purpose code" doesn't make any sense since by definition it does everything, there's no software like that I know of.

The term "general purpose processor" makes more sense but it's so wide you could drive a double decker bus through it. It means a processor which can do everything you throw at it. That covers pretty much every microprocessor ever made, even things like the latest generation GPUs can be considered general purpose these days. The Cell's SPEs will certainly fit that definition.

I think what people are really on about is "serial control code" or "branchy integer code", both similar areas in which x86 will have an advantage over Cell.

All modern processors are effectively general purpose, but their strengths can be wildly different. x86 will be best at the above, Cell is best at intensive FP, and Niagara is best on heavily threaded integer code.

SPEs can't do multi-threading in hardware but can support 8 threads in software so they may be good on heavily threaded stuff as well.
 
I think whatever course the desktop takes, it will remain largely the realm of x86 for quite some time to come; with every year that passes, legacy support only becomes more crucial.

I'm a fan of Cell though, and I think there are definitely places for it and architectures like it going forward; not everywhere is the desktop, and there are lots of industries willing to put the time in to optimize if it means a tangible improvement in execution.

I disagree.
Efforts such as Java and .Net seek to separate the code from a specific platform, and the farther these efforts go, the more likely it is that one day the general PC platform could switch away from x86.
Additionally, I see x86 designs going more Cell-like, but replacing the PPE with a more powerful out-of-order (OoOE) processor. Possibly two.

BTW, whatever CPUs do in hardware is just math, right? Why can't an OoOE algorithm be programmed for the Cell that uses the SPEs to aid the PPE in single-threaded code?
 
The term "general purpose code" doesn't make any sense since by definition it does everything, there's no software like that I know of.

The term "general purpose processor" makes more sense but it's so wide you could drive a double decker bus through it. It means a processor which can do everything you throw at it. That covers pretty much every microprocessor ever made, even things like the latest generation GPUs can be considered general purpose these days.
The very concepts of CPU and GPU are outdated. A GPU that's sorting a database isn't doing graphics work, so why's it called a Graphics Processing Unit? As technology has evolved, so should definitions. It'd be better to categorize PUs by architecture. What we call a GPU now would perhaps be a Graphics (or data?) Enhanced SIMD-Focussed Processing Unit. x86 would be a Balanced Processing Unit. Cell might be a Strong-SIMD Processing Unit. Alternatively, name them according to the role they play in the end system. As such Cell might be a CPU in PS3, and yet a GPU in a handheld with an extra ARM CPU. Whatever, the idea of graphics processors and physics processors and central processors doesn't hold any water these days. As long as you can program anything on them (which I think is mostly dependent on the ability to branch. Once you've got branching and the basics of compares and maths/bit ops, you can do anything), they're just processing units.

I think what people are really on about is "serial control code" or "branchy integer code", both similar areas in which x86 will have an advantage over Cell.
Yet that's of no consequence if the tasks can be re-engineered away from being 'branchy integer code' or 'serial control code'. Thus the ability of a processor to quickly perform 'general purpose code' (which is branchy integer) is irrelevant, if another processor that's rubbish at 'general purpose (branchy integer) code' can implement a different algorithm to achieve the same results on SIMD vector units. It's the results we're after, as quickly as possible. Who cares what algorithm you use to get that?! That's why labelling Kentsfield as a 'general purpose processor' or Cell as not is ludicrous, if you're basing that definition on the types of algorithms you run as opposed to the ability to perform a task.
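As an illustration of that kind of re-engineering, here is a small hypothetical sketch (plain C++, names invented for the example, not drawn from any real codebase): the same clamping task written first in the 'branchy' style, then in the branch-free compare-and-select style that maps onto SIMD vector units such as the SPEs.

#include <cstddef>

// 'Branchy' version: leans on the branch predictor of a classic x86-style core.
void clamp_branchy(int* v, std::size_t n, int lo, int hi)
{
    for (std::size_t i = 0; i < n; ++i) {
        if (v[i] < lo)      v[i] = lo;
        else if (v[i] > hi) v[i] = hi;
    }
}

// Branch-free version: each comparison becomes a select, which typically
// compiles to conditional-move or vector compare/select instructions and
// vectorizes cleanly, with no branch prediction involved.
void clamp_selects(int* v, std::size_t n, int lo, int hi)
{
    for (std::size_t i = 0; i < n; ++i) {
        int x = v[i];
        x = (x < lo) ? lo : x;
        x = (x > hi) ? hi : x;
        v[i] = x;
    }
}

Both produce identical results; which one is 'better' depends entirely on the hardware it runs on, which is the point.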

SPEs can't do multi-threading in hardware but can support 8 threads in software so they may be good on heavily threaded stuff as well.
Running 8 threads on an SPE is a bizarre use of the SPE. You're better off designing software not to stall and need threads to keep the logic busy, and running threads to completion, 8 or 9 at a time, before switching, unless the switches are very occasional.
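To show what 'running to completion' might look like as a pattern, here is a tiny hypothetical sketch (plain C++ for readability; real SPE code would work out of the local store with DMA rather than the STL, and the JobQueue name is invented): work is expressed as small jobs that each run to the end before the next begins, so nothing pays for a mid-job context switch.

#include <functional>
#include <queue>
#include <utility>

struct JobQueue {
    std::queue<std::function<void()>> jobs;

    void push(std::function<void()> job) { jobs.push(std::move(job)); }

    // Drain the queue: each job runs to completion before the next starts,
    // so there is no context-switching overhead at all.
    void run_all() {
        while (!jobs.empty()) {
            jobs.front()();
            jobs.pop();
        }
    }
};

An SPE (or a pool of eight of them) then just pulls jobs off such a queue and burns through them, keeping the SIMD pipelines fed without any software threading.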
 
I disagree.
Efforts such as Java and .Net seek to separate the code from a specific platform, and the farther these efforts go, the more likely it is that one day the general PC platform could switch away from x86.

The Java and .Net examples are a very good point, but they don't contradict the legacy codebase example I mentioned, because indeed, whatever the alternatives, the legacy codebase continues to grow. It will be interesting to see what happens with Java going forward - I'm a fan, I want to say that - but in terms of the traditional desktop environment I'm still seeing x86; new paradigms or no, legacy code is important for the foreseeable future. I'll say though that if x86 is sufficiently well emulated, either natively in software on the OS side or through a hardware effort a la Itanium, perhaps then indeed x86 could be killed off. But it would require an architecture-agnostic framework like Java actually taking over in the present before people would tolerate degraded performance on x86-native code via emulation.

Anyway though back to 'general purpose' and the massive confusion surrounding that term. The point's got to get hammered home. ;)
 
Thus the ability of a processor to quickly perform 'general purpose code' (which is branchy integer) is irrelevant, if another processor that's rubbish at 'general purpose (branchy integer) code' can implement a different algorithm to achieve the same results on SIMD vector units. It's the results we're after, as quickly as possible. Who cares what algorithm you use to get that?! That's why labelling Kentsfield as a 'general purpose processor' or Cell as not is ludicrous, if you're basing that definition on the types of algorithms you run as opposed to the ability to perform a task.

Just wanted to say thanks - you've made some pretty salient points in this thread and they've certainly affected the way I think about "general purpose" computing. The distinctions you've made between general purpose code, general purpose tasks and the dependency on algorithm choices were really useful! :smile:
 
The Java and .Net examples are a very good point, but they don't contradict the legacy codebase example I mentioned, because indeed, whatever the alternatives, the legacy codebase continues to grow. It will be interesting to see what happens with Java going forward - I'm a fan, I want to say that - but in terms of the traditional desktop environment I'm still seeing x86; new paradigms or no, legacy code is important for the foreseeable future. I'll say though that if x86 is sufficiently well emulated, either natively in software on the OS side or through a hardware effort a la Itanium, perhaps then indeed x86 could be killed off. But it would require an architecture-agnostic framework like Java actually taking over in the present before people would tolerate degraded performance on x86-native code via emulation.

Anyway though back to 'general purpose' and the massive confusion surrounding that term. The point's got to get hammered home. ;)

Well, Microsoft has already cut out (or emulated) a lot of backwards compatibility with Vista, especially the 64 bit version, and has become more strict on what it will allow to run on the OS. I'd say Microsoft is making a very real attempt to cut out the hardware reliance. I don't know enough about it to say if they could port Vista to another platform and run the same exes (without a recompilation like on the 360), but it's certainly a good start.
And non-x86 Linux, along with Mac, has some decent Windows emulation attempts. I believe they get performance within an order of magnitude of the original software, which is good enough for many things.

For Microsoft's future, it may seem like a very good idea to them to eventually get Windows running on both the Power and ARM architectures.
 
For Microsoft's future, it may seem like a very good idea to them to eventually get Windows running on both the Power and ARM architectures.

Microsoft has already tried this with Windows - Windows NT on Alpha, MIPS, and PPC, and more recently Windows Datacenter on Itanium - and has failed miserably, because Windows is a binary-distribution OS and distribution of Windows applications is through binaries. To successfully jump to multiple architectures, you either need virtual machine code like Java bytecode (slow), or a source code distribution like Linux or Unix, on which the end user can recompile the OS and existing applications to run on a new architecture. It is not for nothing that Linux, Unix, and FreeBSD run successfully on a very wide range of architectures, while Microsoft's attempts to do the same with Windows have failed.
 
Microsoft has already tried this with Windows - Windows NT on Alpha, MIPS, and PPC, and more recently Windows Datacenter on Itanium - and has failed miserably, because Windows is a binary-distribution OS and distribution of Windows applications is through binaries. To successfully jump to multiple architectures, you either need virtual machine code like Java bytecode (slow), or a source code distribution like Linux or Unix, on which the end user can recompile the OS and existing applications to run on a new architecture. It is not for nothing that Linux, Unix, and FreeBSD run successfully on a very wide range of architectures, while Microsoft's attempts to do the same with Windows have failed.

Mac has done it.

All files have to be installed anyway, so what's the problem with standardizing everything on .NET, then making an installer that runs on all supported platforms and compiles the code for the current platform upon installation? Multiple installer binaries + source code to be compiled?
 
For Microsoft's future, it may seem like a very good idea to them to eventually get Windows running on both the Power and ARM architectures.

Getting Windows to run on Power and ARM is not the problem (in fact Windows has run on all sorts of architectures in the past, proving it is a relatively portable OS).

The problem is getting all the legacy x86 apps to run better than an x86 processor can run them, and no one's really been able to do this by a large enough margin to make a switch of CPU architectures worthwhile.

After all, people run applications, not operating systems.
 
It wouldn't be about migrating the PC platform away from x86, but rather welcoming new devices into Microsoft's Windows environment.
In particular, a PowerPC console and more powerful ARM phones (or embedded computing devices) could one day see and benefit from some application compatibility with general PCs. Running legacy code might not be such a big deal, but if Microsoft could manage to have one main code base across three different platforms, it would simplify things a great deal for them and make software roll-out much quicker. It would also put Microsoft in a great position to... well, continue their current position, even if some sort of cheap embedded client or console takes over the PC space.
 
Mac has done it.

All files have to be installed anyway, so what's the problem with standardizing everything on .NET, then making an installer that runs on all supported platforms and compiles the code for the current platform upon installation? Multiple installer binaries + source code to be compiled?

Mac is a UNIX - a version of FreeBSD UNIX (which is an open source Unix) with an open source Mach microkernel and a proprietary display system. It is easy for the end user or distributor to port source applications between different versions of Unix and between Unix and Linux, because all the source code for Linux and Unixes (including proprietary Unixes) is available to the end user. Applications can be compiled by the end user or a small distributor against source libraries on any CPU architecture, without having to wait for them to be ported by someone else with rights and access to the source code who may have no interest in doing so, as is the case with binary libraries. A lot of Linux GPL code, including libraries and applications, is also used on Unixes like FreeBSD, Solaris, AIX, HP-UX, the Apple Mac, etc.

Java and .NET are slow. .NET is also proprietary. Developers are reluctant to lock themselves into major application development in an environment that would give someone else control over their future, particularly something tied to such a predatory company as Microsoft. For example, writing a word processor, browser, or database application etc. in .NET would allow Microsoft to use licensing charges, patent licensing restrictions, faster undocumented APIs, proprietary extensions, and OS bundling (as .NET is fully supported only on Windows) to destroy them when Microsoft decides it wants the competition out. Generic development environments like C or C++ (and now Java, which has been reverse engineered by IBM, JBoss and others and is going GPL under Sun) are a much safer and more productive environment to develop in if you are not Microsoft.

As for multi-platform Windows, as I said, it has been tried at great expense to Microsoft and has failed miserably. For example, NT on the Alpha was fully developed and supported by Microsoft for many years, but was dropped in favour of Linux by HP due to lack of demand. Similarly, Microsoft developed, supported, and hyped Windows 2000 Datacenter Server for the Itanium platform as an enterprise-class server OS. Unfortunately, due to lack of demand, that too was dropped, though Itanium is doing well as a high-end server platform running Linux.
 
SPM said:
Java and .NET are slow. .NET is also proprietary. Developers are reluctant to lock themselves into major application development in an environment that would give someone else control over their future, particularly something tied to such a predatory company as Microsoft. For example, writing a word processor, browser, or database application etc. in .NET would allow Microsoft to use licensing charges, patent licensing restrictions, faster undocumented APIs, proprietary extensions, and OS bundling (as .NET is fully supported only on Windows) to destroy them when Microsoft decides it wants the competition out. Generic development environments like C or C++ (and now Java, which has been reverse engineered by IBM, JBoss and others and is going GPL under Sun) are a much safer and more productive environment to develop in if you are not Microsoft.

I'm a Java programmer, but in fairness to Microsoft, I have to say that they have standardized their primary .NET language (C#) and their runtime environment (the CLI/CLR) through both ECMA and ISO.

To be sure, a lot of the .NET gui classes are Microsoft-proprietary, but the language and the runtime are very well done, and are not proprietary.
 
I'm a Java programmer, but in fairness to Microsoft, I have to say that they have standardized their primary .NET language (C#) and their runtime environment (the CLI/CLR) through both ECMA and ISO.

To be sure, a lot of the .NET gui classes are Microsoft-proprietary, but the language and the runtime are very well done, and are not proprietary.

But can you use it independently of single-vendor control? Can you run it on any platform? The answer is no. There is a subset open source implementation for Linux called Mono, and sure, C# is open, but in the same way that what makes the Java standard Java is its standard class libraries, .NET is also its class libraries, which are proprietary for the most part, and Mono is an incomplete, reverse-engineered open source subset of those libraries. Some developers are put off Mono by the fact that software patents may allow Microsoft to sue users of Mono.

The benefits of .NET adoption for Microsoft are obvious, because it ties developers into Microsoft platforms. The benefits of .NET to developers are less obvious.
 