Microsoft Singularity and GPUs

Lux_

Newcomer
I came across an interesting paper about the Singularity project. I'd heard about it before, but never in such detail.
From the beginning, Singularity has been driven by the following question: what would a software platform look like if it was designed from scratch, with the primary goal of improved dependability and trustworthiness?
[...]
In the Singularity project, we have built a new operating system, a new programming language (an extension of C#), and new software verification tools.

I'm posting this here because on page 10 they describe the role of the GPU (among other things) in future OSes:
We are exploring the hypothesis that programmable I/O processors should become first-class entities to OS scheduling and compute abstractions.
[...]
Singularity packages programs in the abstract MSIL format, which can be converted to any I/O processor’s instruction set. The same TCP/IP binary can be installed for both a system’s x86 CPU and its ARM-based programmable network adapter.
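
To make the "same binary, any processor" idea concrete, here's a minimal sketch of my own (not code from the paper), using the ordinary C#/.NET toolchain that Singularity's MSIL packaging builds on: the compiler emits only processor-neutral MSIL, and a JIT, or an install-time compiler such as Singularity's Bartok, lowers it to whatever processor is actually present.

```csharp
// add.cs - compile once with: csc /target:library add.cs
// The resulting assembly contains no x86 or ARM code, only MSIL.
public static class Math32
{
    public static int Add(int a, int b)
    {
        return a + b;
    }
    // ildasm shows the method body as four processor-neutral IL opcodes:
    //   ldarg.0
    //   ldarg.1
    //   add
    //   ret
    // The same opcodes can be translated for an x86 CPU or, in principle,
    // for an ARM-based programmable network adapter, as the paper describes.
}
```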


And some interesting info on CPUs in general:
We expect that the instruction-set neutrality of Singularity MBPs encoded in MSIL may ultimately be relevant even for many-core CPUs. As many-core systems proliferate, many in the engineering community anticipate hardware specialization of cores. For example, the pairing of large out-of-order cores with smaller in-order cores will provide systems with greater control over power consumption. Many-core systems enable processor specialization as each individual processor need not pay the full price of compatibility required for single core chips; a many-core chip may be considered backwards compatible as long as at least one of its cores is backwards compatible.

I realize that Singularity is a research project, but I'm sure the overall ideas and direction will be incorporated into Windows and software development in general. For example, the additional infrastructure (type information, etc.) will hopefully make software verification more mainstream.

Also consider this:
1) The GPGPU field has taken a similar approach: write the program in some kind of C derivative, compile it into an intermediate format, and at runtime some kind of middleware/execution environment/OS compiles it further for the particular card, whether AMD or NVIDIA, low-end or high-end, DX9, 10 or 11 (a CPU-side sketch of this pattern follows the list).
2) CPUs are headed toward specialized cores (Cell, Fusion, Larrabee, etc.), which may or may not be present, or may be disabled.
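
As promised above, here's that CPU-side sketch. It's my own analogy using the stock .NET Reflection.Emit API rather than any GPU toolchain: portable IL is handed to the runtime, which compiles it for the actual processor only when it's about to execute, much as a GPU driver compiles intermediate shader bytecode for the particular card in the machine.

```csharp
using System;
using System.Reflection.Emit;

class RuntimeCompileDemo
{
    static void Main()
    {
        // Assemble a tiny method out of portable IL at runtime. The CLR's JIT
        // then lowers it to the host's native instruction set, much as a GPU
        // driver lowers intermediate shader bytecode to a particular card's ISA.
        var add = new DynamicMethod("Add", typeof(int),
                                    new[] { typeof(int), typeof(int) });
        ILGenerator il = add.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0); // push first argument
        il.Emit(OpCodes.Ldarg_1); // push second argument
        il.Emit(OpCodes.Add);     // integer add
        il.Emit(OpCodes.Ret);     // return the sum

        var addFn = (Func<int, int, int>)add.CreateDelegate(
            typeof(Func<int, int, int>));
        Console.WriteLine(addFn(2, 3)); // prints 5, JIT-compiled for this CPU
    }
}
```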

It seems to me that the future PC will transform into some sort of continuously evolving hardware farm that can scale from low-end (everything integrated and/or emulated) to high-end (everything discrete and in multiples of 2). And in that farm, some kind of OS/middleware has to run itself and everything that is thrown at it.

And it seems to me that MSIL is about to become the next x86: a widespread representation of executable code that is compiled for the particular processor at install time or runtime (current x86 is also not executed directly anymore, but decoded into micro-ops at runtime).

I was sad when CPU clock speeds hit a brick wall (how many years has AMD been stuck between 2 and 3 GHz, including Barcelona?). I'm glad the possibilities for innovation in PCs in general haven't. :cool:
 