Cell

I was envisioning something on the front end of the chip that could whip out instruction fragments to all of the different cores.

I don't think that's worth the extra logic. It seems easier, and possibly faster due to the freed transistors going to other things, to simply have a thread assigned to each MP element and have that execute until it dies, blocks for I/O, or whatever.

Do you think having an array of instruction buffers and logic to create instruction fragments, which spits out said fragments over the MPU's internal network, is better than the following?

I would think simply driving a large chunk of instructions to each processor, letting it chew on that for a while, and then feeding it again in a similar manner would be better, especially for hiding latency - I could be wrong about this.
 
Serge,

psurge said:
I have to disagree. Writing multi-threaded programs is hard, even using a language like Java, which has some language support for threads.

writing multi-threaded code is harder than writing single-threaded. no doubt about that. but so is solving differential equations compared to 1st grade algebra. does that mean we should stick to 1st grade algebra? hardly. now, how much more difficult developing multithreaded code is compared to singlethreaded is a tricky question.

see, as with most things in human life, one needs to get sufficient 'exposure' to something to start feeling comfortable when dealing with it. respectively, if one sticks to single-threaded code while waiting for a good-enough language to come along which would do all the concurrency splitting and synchronisation for him, well, he may not have that chance in his lifetime. no?

as for what it takes for a procedural language to be multi-threading-capable -- threads (or parallelism per se) need very little support from the language itself to be viable at all. what it takes is essentially a hint for the compiler that a given object (in OOP terms) is subject to 'external' changes, i.e. its 'value' should not be assumed or cached. the rest in multithreading is good os support.
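
As a minimal illustration of that kind of hint (a sketch using Java, which comes up elsewhere in this thread, not something from the post itself): the volatile keyword marks a field whose value may be changed by another thread and so must be re-read on every access rather than cached.

// Minimal sketch: a stop flag shared between two threads. Declaring it
// volatile tells the compiler/JIT that the value may change "externally"
// (from another thread) and must be re-read on every check, never cached.
public class Worker implements Runnable {
    private volatile boolean stopRequested = false;

    public void requestStop() {
        stopRequested = true;          // written by some other thread
    }

    public void run() {
        while (!stopRequested) {
            // ... do one unit of work ...
        }
    }
}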

I would say it is inherently harder than writing single-threaded programs, and requires a mental paradigm shift for coders, who are mostly used to writing single-threaded applications. (IMO people mostly think in a "single-threaded" fashion as well.)

actually, we don't quite know yet how people think. the human brain is an enormous neural network, so it hardly does anything 'single-threaded'-ly, except for the very top layer of consciousness, but that's delving into other spheres. what we can safely say is that 'people are mostly used to single-threaded laying out of algorithms,' which states almost nothing about what people could be taught.

C, C++, and even Java do nothing to clearly express parallelism, multi-threaded program behaviour, or the effects of having multiple threads running the same code in the source text. They do not provide ways of managing/scheduling parallel subtasks. They do not address issues like exception handling in a multi-threaded program where the program state at any given time is non-deterministic, or how threads communicate/pass data between each other.
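
To make that concrete, here is a minimal Java sketch (not from the thread; the class and names are made up) of the kind of hand-rolled coordination being described - the scheduling, signalling, and shutdown handling all have to be written out by hand:

// One producer hands integers to one consumer through a shared slot.
// Every bit of synchronisation and signalling is explicit.
public class HandOff {
    private final Object lock = new Object();
    private Integer slot = null;      // null means "empty"
    private boolean done = false;

    void put(int value) throws InterruptedException {
        synchronized (lock) {
            while (slot != null) lock.wait();   // wait for the slot to empty
            slot = value;
            lock.notifyAll();                   // wake the consumer
        }
    }

    Integer take() throws InterruptedException {
        synchronized (lock) {
            while (slot == null && !done) lock.wait();
            Integer value = slot;               // null once the producer is done
            slot = null;
            lock.notifyAll();                   // wake the producer
            return value;
        }
    }

    void finish() {
        synchronized (lock) { done = true; lock.notifyAll(); }
    }

    public static void main(String[] args) throws Exception {
        HandOff h = new HandOff();
        Thread consumer = new Thread(() -> {
            try {
                Integer v;
                while ((v = h.take()) != null) System.out.println("got " + v);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        consumer.start();
        for (int i = 0; i < 5; i++) h.put(i);
        h.finish();
        consumer.join();
    }
}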

check out ADA - you may find it does many of the things you deem necessary.

That, IMO, is what makes multi-threaded debugging very hard, regardless of what tool you are using.

the debugging tools i use under the pervasive multi-threading os i run at home do everything i need for the purpose. apparently we have different ideas of what each of us deems necessary.

I think someone needs to produce a language which enables a coder to specify parallelism using the language itself:
i.e. be able to translate "I want calculation X to be performed for objects Y, each calculation is independent of anything else going on" into clean code.

a programming language does not need to fit the alpha and omega of your abstraction needs (as there's hardly a limit to them), it just has to have the reasonable minimum(tm) - the rest is up to the people who use it.
 
The main reason to have what are effectively separate CPUs in a single die is so that the software is able to tell the CPU what pieces of the program can be separated among execution units (this is also the reason for hyperthreading, btw...).

That's not the main reason why they are going for separate processors on a single die. According to them, there will be a point where adding more transistors to a processor won't get you any more performance increase, and they said we are already at the point of diminishing returns.

So it's not that each of the processors in there will be slow or anything, but rather optimal for its clock speed.

There is no doubt that there are things that must be executed in a serial manner and things that can be executed in parallel.

And normally these things that can be executed in parallel are the things that are slowing us down currently.
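
For a rough sense of how the serial and parallel portions trade off (this isn't from the post above - it's just Amdahl's law): if a fraction P of a program's work can run in parallel across N processors, the overall speedup is at most 1 / ((1 - P) + P/N). With P = 0.9 and N = 8, for example, that works out to only about 4.7x, which is why the serial portion matters so much.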
 
darkblu,

I feel like you completely missed my point. I wasn't saying that coders should stick to 1st grade algebra/1 thread only, or that any of the things I mentioned are necessary to write/debug multi-threaded code. I am not arguing over what abstraction needs a language must meet (I'm sure there are coders who can whip up multi-threaded code in assembly language). I'm also aware that the human brain consists of a huge number of interconnected neurons, and that I'm not an authority on human consciousness or how it is produced. I'm sorry if my earlier post seemed confrontational... so please, no need to be condescending just because threading is easy for you.
I thought what I was saying was pretty clear - I think it would be nice if a language allowed "cleaner" mappings of a parallel algorithm to source text, and that such a language would simplify both writing and debugging multi-threaded code. IMO such a language would also facilitate thinking about solving problems in a multi-threaded way.

I will take a look at Ada... here's another one: E, a very interesting approach to distributed programming.
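
As a rough sketch of the kind of "clean mapping" being asked for here - Java itself didn't offer this at the time, but later versions added parallel streams that come close; the names below are made up for illustration:

import java.util.List;
import java.util.stream.Collectors;

// "Perform calculation X for every object Y; each calculation is
// independent of anything else going on." The splitting, scheduling,
// and joining are left entirely to the runtime.
public class ParallelMap {
    static double calculationX(double y) {      // stand-in for the real work
        return Math.sqrt(y) * Math.log(y + 1.0);
    }

    public static void main(String[] args) {
        List<Double> ys = List.of(1.0, 2.0, 3.0, 4.0, 5.0);

        List<Double> results = ys.parallelStream()
                                 .map(ParallelMap::calculationX)
                                 .collect(Collectors.toList());

        System.out.println(results);
    }
}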
 
Saem said:
I don't think that's worth the extra logic. It seems easier, and possibly faster due to the freed transistors going to other things, to simply have a thread assigned to each MP element and have that execute until it dies, blocks for I/O, or whatever.

It wouldn't be any worse a proposition than slapping on a giant processing front-end that hardware-morphs CISC code input into microcode for execution on a RISC CPU core, would it? :D
 
It wouldn't be any worse a proposition than slapping on a giant processing front-end that hardware-morphs CISC code input into microcode for execution on a RISC CPU core, would it?

Actually, it'd be much worse, in my mind. Regardless of what others may claim, CISC is better for front-ends, while fast and simple instructions are great for the execution/back end.

I should go into further detail as to where I thought the CISC instruction decoder logic should go. I mean it could be in each processing element, or processing elements could share a dedicated decoder, or a hybrid of the two: MPs are grouped, and these groups talk to a logic block which delivers the decoded instructions. I imagine the logic to be a series of look-up tables. I'm suggesting this as a compression scheme.

Ultimately, memory bandwidth will likely be a large concern, and so will the cache sizes. With the number of data streams that could be active, one would need a lot of instruction cache. This will cost a fair bit of money and transistors. A more complex/powerful instruction set, where instructions carry more meaning, would help alleviate these problems. Of course, it might not do it that well, since one can achieve a fair bit greater density with SRAM macros than one can with logic. But bandwidth will still be an issue.
 
I should have said "outrageous a proposition" instead of "worse a proposition". The premise being teaching a RISC chip to "speak CISC", as proposed at some engineering meeting sometime in the early '90s. It may seem like a completely blasphemous proposal in the beginning, but it could end up being the greatest thing since sliced bread. ...But I defer to the notion that you know more about this stuff than I do!
 
I should have said "outrageous a proposition" instead of "worse a proposition". The premise being teaching a RISC chip to "speak CISC", as proposed at some engineering meeting sometime in the early '90s. It may seem like a completely blasphemous proposal in the beginning, but it could end up being the greatest thing since sliced bread. ...But I defer to the notion that you know more about this stuff than I do!

That's the thing, I might not. Don't simply give up the debate because you feel you don't know as much. You'd be surprised how far logic can take you.

BTW, many of the advancements in computer science and computer architecture - though I believe to a lesser extent in the latter - came from the study of the brain. That being said, the CISC -> RISC transition and all the shades of grey in between make sense to me, since that's basically what happens in the mind, IIRC.

BTW, I apologize if I came across as rude or condescending in any way, in this, my previous, or any other posts.
 
I don't believe so. I believe the advancement of computer technology, particularly early-on, was based upon the solving of mathematical algorithms. People studied the specific steps needed to calculate certain algorithms, and figured out how to make hardware that could execute those algorithms.

More recently, I believe it's been sort of an extension of that. There is just so much unknown about how the human mind actually works, but if you've studied programming, and paid attention to how your own mind works, it becomes painfully obvious that the two are vastly different, and it takes a significant amount of effort to learn how to "think like a computer" to program effectively (I believe learning analytical mathematics methods, such as geometry and calculus, helps significantly).
 
Saem,

I appreciate your neutrality on the subject.

I just can't imagine what IBM, Sony, et al. have brewing in their secret Cell development project. It's got to be something that just blows your mind. It just wouldn't make sense if they were to spend so much on development only to introduce just another multiprocessor system. Yes, it is essentially a heavy-scaling MP scheme, but methinks there is a tWiSt to it that they haven't disclosed to the public yet. My guess is it's some kind of hardware-enabled thinga-ma-jig or other that changes the entire approach. Of course, the MP-directed middleware and compiler tools will be crucial as well. Quite possibly the hardware thinga-ma-jig will work hand-in-hand with the middleware and compiler stuff.
 
I hope so. It's far from trivial to allow many processors to work together well with only one memory bus between them.
 
IBM, SONY, and Toshiba will each gain something different from their cooperation on CELL.

My theory is that IBM will use the technology to build Blue Gene or a similar system, considering their ASCI White supercomputer was dethroned by NEC's Earth Simulator by a factor of 5. They need a powerful chip to build a powerful supercomputer, and their Power4 isn't up to the task.

SONY obviously wants it for PS3, PS4, etc. Toshiba gets to manufacture the chips for SONY and make some money, like how NEC makes money from fabbing chips for SEGA's DC and Nintendo's GC.
 
Serge,

didn't mean to sound condescending, i guess it's because i have had too many arguments on the topic, including fruitless disputes with my fellow coworkers about the use (resp. lack thereof) of multi-threading in our daily work.

of course your point is valid - it'd be only good if coders had more and better means for multi-threading at their disposal. and creating and adopting inherently-concurrent languages would be undoubtedly beneficial for the wider adoption of multithreading, and eventually for delivering more performance to the end user (i.e. us) - once hw vendors get on the smp & likes wave, that is.

and thanks for the E link - it's new to me and i'll surely take a close look at it once i'm done with my current work project, whose deadline is uncomfortably near, btw.
 
I don't believe so. I believe the advancement of computer technology, particularly early-on, was based upon the solving of mathematical algorithms. People studied the specific steps needed to calculate certain algorithms, and figured out how to make hardware that could execute those algorithms.

I should have made myself clear. Early on, it was very much math, and math definitely has its place now.

More recently, I believe it's been sort of an extension of that. There is just so much unknown about how the human mind actually works, but if you've studied programming, and paid attention to how your own mind works, it becomes painfully obvious that the two are vastly different, and it takes a significant amount of effort to learn how to "think like a computer" to program effectively (I believe learning analytical mathematics methods, such as geometry and calculus, helps significantly).

Programming is nice and methodical. As long as I'm calm, I tend to perform in a similar fashion. It's all about the level of abstraction you're perceiving this at. Currently, I'd say AI is one of the hottest fields in computer science/engineering, and a lot of the work done there trickles into all sorts of other aspects of computer science/engineering - this is what I was referring to. You'd be surprised how much comes from these fields; I certainly was when my attention was first drawn to it.
 