I'm just about to start working on my doctorate in computer engineering, focusing on reconfigurable computing, this coming fall...
Before this recent trend of multi-core processing that seems to be prevalent in the computer segment with AMD and Intel, and in the videogame segment with Microsoft and Sony, I often wondered about the feasibility of a multicore environment...
There's a design I've been toying around with that I'm wondering would be feasible... so I have to ask this question of all those with more expertise than I have in the field:
I had always thought the way to "better" computing was through massively parallelized systems. In fact, one notion I'm still toying around with is the incorporation of many simple cores into one chip: one master and many, many slaves. For example, take a simple 8- or 16-bit core, use an advanced memory paging technique to let it access memory outside its addressable "theoretical limit," and place many of these onto one chip. With each core being so basic in nature, it could be possible to put hundreds onto one die and to clock them very fast. While I know that current chips far exceed the capabilities of designs of the past, why can't multiple copies of simpler cores exceed the newer ones?
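To make the paging part concrete, here's a minimal sketch in C of roughly what I have in mind. It's just classic bank switching, and the window size, bank count, and function names are my own assumptions for illustration, not any particular chip's scheme: the core issues plain 16-bit addresses, and a separate bank register decides which slice of a much larger physical memory that 64 KB window maps onto.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical sketch: a 16-bit core only ever sees a 64 KB window,
 * but a bank register maps that window onto a larger physical memory
 * (classic bank switching). All sizes here are made up. */

#define WINDOW_SIZE 0x10000u                 /* 64 KB visible to the core  */
#define NUM_BANKS   16u                      /* 16 banks -> 1 MB physical  */
#define PHYS_SIZE   (WINDOW_SIZE * NUM_BANKS)

static uint8_t phys_mem[PHYS_SIZE];          /* "off-core" physical memory */
static uint8_t bank_reg;                     /* currently selected bank    */

/* The core issues 16-bit addresses; the paging logic widens them. */
static uint32_t translate(uint16_t core_addr) {
    return (uint32_t)bank_reg * WINDOW_SIZE + core_addr;
}

static uint8_t mem_read(uint16_t core_addr) {
    return phys_mem[translate(core_addr)];
}

static void mem_write(uint16_t core_addr, uint8_t value) {
    phys_mem[translate(core_addr)] = value;
}

int main(void) {
    /* The same 16-bit address lands in different physical bytes
     * depending on which bank is selected. */
    bank_reg = 0;
    mem_write(0x1234, 0xAA);
    bank_reg = 5;
    mem_write(0x1234, 0xBB);

    bank_reg = 0;
    printf("bank 0: %02X\n", mem_read(0x1234));  /* prints AA */
    bank_reg = 5;
    printf("bank 5: %02X\n", mem_read(0x1234));  /* prints BB */
    return 0;
}
```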
I know, I know, many of you may be laughing at what I have to say, but couldn't it be possible to take these hundreds of 8- or 16-bit cores and distribute the workload amongst them?
In my mind it's akin to the principle of classes in object-oriented design, where you take a very complicated problem and split it up into many simpler and more manageable parts.
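Here's a rough sketch in C of the kind of split I mean. It's purely a sequential simulation, and the core count and the "sum a big array" task are arbitrary choices of mine: a master carves one big job into equal slices, each very simple "slave core" works only on its own slice, and the master combines the partial results.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy simulation: one "master" splits a big job across many very
 * simple "slave" cores and then combines their partial results.
 * The core count and the task are arbitrary illustration values. */

#define NUM_CORES 256u
#define JOB_SIZE  65536u   /* divides evenly by NUM_CORES */

static uint32_t data[JOB_SIZE];

/* What a single simple core would do: sum only its own slice. */
static uint64_t slave_sum(const uint32_t *slice, uint32_t count) {
    uint64_t total = 0;
    for (uint32_t i = 0; i < count; i++)
        total += slice[i];
    return total;
}

int main(void) {
    for (uint32_t i = 0; i < JOB_SIZE; i++)
        data[i] = i;                        /* fill with test data */

    uint32_t chunk = JOB_SIZE / NUM_CORES;  /* equal slice per core */
    uint64_t grand_total = 0;

    /* The master hands each core its slice; here the "cores" run one
     * after another, but on real hardware they would run in parallel. */
    for (uint32_t core = 0; core < NUM_CORES; core++)
        grand_total += slave_sum(&data[core * chunk], chunk);

    printf("total = %llu\n", (unsigned long long)grand_total);
    return 0;
}
```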
Just a thought....