Partly OT: Apple to use Cell? Cell PCs?

crystalcube said:
MfA said:
More developers and companies feel comfortable with the GPL, and that isn't going to change.

Not true either. Have you considered why Apple chose FreeBSD as its basis rather than Linux?

I thought it was NetBSD (or some derivative of BSDi)??
 
Jov said:
I thought it was NetBSD (or some derivative of BSDi)??

It is FreeBSD, and the current version of OS X is on par with FreeBSD 5. BSDi is no longer in active development. NetBSD, FreeBSD, OpenBSD, and the other *BSDs are not derived from BSDi; rather, they are based on 4.4BSD, which was originally developed at the University of California, Berkeley.
 
MfA said:
Because it isn't running on them.

Hmmm, what are you talking about? The big thing they are working on is XGrid (basically your usual grid software). And the various kinds of parallelism in Mac OS X are actually making waves in biomedical research. While it's worthless for your everyday application, it sure is useful for complex Monte Carlo simulations, diffusion of a substance, and a number of other things.
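For what it's worth, here is a minimal sketch of why Monte Carlo work farms out so well. This is plain Python multiprocessing on one machine, purely illustrative, and not XGrid's actual job submission interface; the point is that every sample is independent, so workers exchange nothing but their final counts.

```python
# Minimal sketch: Monte Carlo estimation of pi, embarrassingly parallel.
# Each task draws its samples independently; the only communication is
# the final per-task hit count, so it scales across as many CPUs as exist.
import random
from multiprocessing import Pool

def count_hits(args):
    """Count random points landing inside the unit quarter-circle."""
    seed, n_samples = args
    rng = random.Random(seed)          # per-task seed so workers differ
    hits = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

if __name__ == "__main__":
    n_tasks, per_task = 8, 200000
    tasks = [(seed, per_task) for seed in range(n_tasks)]
    with Pool() as pool:               # one worker per CPU by default
        hits = sum(pool.map(count_hits, tasks))
    print("pi ~=", 4.0 * hits / (n_tasks * per_task))
```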

aaaaa00 said:
Microsoft's software can't take x86 beyond some minor hyperthreading on two cores without major reworking ...

That's such bullcrap.

The NT kernel scales to 32 processors on 32-bit architectures and 64 processors on 64-bit architectures. As of Win2k3 it supports NUMA, SMT, and memory architectures with relaxed ordering.

The kernel is reentrant; it can be running on any or all of the CPUs in the machine at the same time. Interrupts can be fielded on any processor (depending on the hardware).

It has run on Alpha and MIPS, and continues to run on x86, x86-64, Itanium, and PowerPC (which means it could run quite easily on Cell PUs).

Hehe, the amusing thing is that before IBM the same was also true of Linux. IBM came along, though, and got Linux scaling up nicely to 16 CPUs or so before the performance gains level off (and IBM is of course still working on improving this).
 
An amusing point to note is that Rick Rashid, one of the key members of the team that created Mach at CMU, works for.... Microsoft.
 
A grid is just a cluster with relatively low-bandwidth, high-latency interconnects ...

Apple chose BSD because they want to keep most of the system closed ... so in the end what did they contribute? Another divergent BSD kernel code base.
 
MfA said:
Apple chose BSD because they want to keep most of the system closed ... so in the end what did they contribute? Another divergent BSD kernel code base.

I don't think we were discussing Apple's contributions. But again, a point of ignorance: Apple has contributed significantly to KHTML (the browser component of KDE). Apple has also made significant contributions to GCC in the areas of the Objective-C/C++ compiler and PPC code generation. Apple has also open sourced Rendezvous (ZeroConf), even for Windows. Anyway, whatever other contributions Apple has made, I am sure you will be totally unaware of them.

Also, the BSD license doesn't force them to open source anything, so if they are keeping their system closed, they have the right to do that.
 
It is their right, just as anyone who uses their BSD-licensed code has the right to close-source it. Maybe that is why all your extra examples failed to provide a good case in favour of BSD. The improvements they contributed to the GPL projects you mentioned aren't really relevant, and even if they had been made to BSD projects ... it is still more a side effect of their use of open source projects in their proprietary system than a real impetus to contribute. This is because they are a systems vendor, and not a hardware one. To a systems vendor, open source is a threat; they embrace it when they haven't a choice ... but it is a threat nonetheless. Not so for companies more interested in selling hardware and services, such as Intel or IBM (they, on the other hand, don't feel like helping systems vendors out, which is a strike against BSD).

They kept the kernel BSD to make keeping it in sync with the other BSDs easier ... but they put most of the value into the closed parts of OS-X.

Rendezvous does represent a major contribution, but hell ... that is APSL, so completely irrelevant.
 
Well, licensing issues aside, a dual G5 with a 9800 running OS X is the ultimate geek machine.

Mac OS
Cocoa w/ Interface Builder
PowerPC
Altivec
BSD
Open sourced kernel
Bash
OpenGL
QuickTime

all in one system. Sigh.
 
You seem to believe what you want to believe. You claimed that Apple made no contribution; when given some examples, you wrote them off as insignificant. I am not providing a case for BSD or GPL.

You said Apple made no contribution: that's proven wrong.
You said OS X can't use "massively parallel" apps: you have yet to provide any technical basis for that. (I am sure you have none.)
You think GPL is better: that's your belief.
You think Apple chose BSD to keep the system closed: that's correct.
You say APSL is irrelevant: my question is, how? APSL 2.0 is acknowledged as an open source license. And the GPL is just an important part of a bigger picture, not the complete picture.

If, as you say, open source is no threat to Intel and IBM, why don't they open source more components? Why is DB2 not open sourced yet? Why is the Intel compiler not open sourced yet? Why has IBM not yet open sourced OS/2?

They have made significant contributions to the open source codebase, and so have many others, e.g. SGI. They are no different from Apple: they are doing what serves their interests.

Still waiting for you to provide any technical basis for why OS X can't use "massively parallel" apps ...
 
MfA said:
A grid is just a cluster with relatively low-bandwidth, high-latency interconnects ...

Not quite. A grid is more like a dynamic cluster, changing in size based on whether those computers are being used or not (like Folding@home or SETI@home). A lot of things don't need high-bandwidth or low-latency interconnects either, so that's not a big issue; the point is that the aggregate performance of several computers is easy to attain. (And if an operation is truly massively parallel, then you don't need high bandwidth and low latency; those are more attempts to make more problems actually massively parallelizable.)
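To make that concrete, here's a toy version of the pull model such projects use. The multiprocessing.Queue standing in for the project server is my own simplification, not how Folding@home actually talks to its clients; the point is that work units are coarse, so communication is a rounding error next to compute time.

```python
# Sketch of the pull model of Folding@home/SETI@home-style grids:
# clients fetch coarse work units, crunch locally for a long time,
# and return a tiny result, so interconnect speed barely matters.
from multiprocessing import Process, Queue

def client(work_q, result_q):
    while True:
        unit = work_q.get()
        if unit is None:                 # stop marker: no more work
            break
        uid, data = unit
        # Hours of local crunching would happen here; a sum keeps
        # the sketch runnable.
        result_q.put((uid, sum(data)))

if __name__ == "__main__":
    work_q, result_q = Queue(), Queue()
    n_units = 8
    for uid in range(n_units):           # coarse, self-contained units
        work_q.put((uid, list(range(uid, uid + 1000))))
    clients = [Process(target=client, args=(work_q, result_q))
               for _ in range(3)]        # clients can come and go freely
    for p in clients:
        p.start()
    for _ in clients:
        work_q.put(None)                 # one stop marker per client
    results = [result_q.get() for _ in range(n_units)]
    for p in clients:
        p.join()
    print(sorted(results))
```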
 
The post you replied to concerning BSD was not addressed to you; it was concerning the BSD license ... IMO they didn't contribute a whole lot of valuable BSD-licensed code; as I said, yet another divergent BSD codebase, no more than GPL. Rendezvous remains the best example of their contributions, and it is entirely irrelevant because it uses neither BSD nor GPL (BTW, APSL is a copyleft license, viral just like the GPL).

You are right ... I should have said it is less of a threat to Intel & IBM. They are partly software vendors, as well as hardware ones.

OS-X can't use massively parallel apps if it doesn't run on massively parallel machines. You are right again, though: I have no technical basis. If they are running OS-X on NUMA machines at Apple at the moment, I'm wrong.
 
MfA said:
OS-X can't use massively parallel apps if it doesn't run on massively parallel machines.

Curious, what are you considering massively parallel machines? And what are you calling massively parallel apps?

Since, to me, the less intertwined the data is between operations, the more massively parallel the potential app is; and the more intertwined it is, the less massively parallel the app is.

An example with no real data entwinement would be what a couple of lab partners in my lab are doing: a sensitivity study on an equation with about 30 different parameters. Each evaluation of the equation takes between one and ten seconds, but it's the 30 different parameters that make the parameter space a pain to search, so most of their time is spent figuring out how to search it better for what they are trying to find (see the sketch after this post).

Now, a render farm would be an example of a system I would say is slightly less parallel, due to more data needing to be shared across the systems (the various physics data). And you can keep going, adding more and more dependencies, until you reach a system that isn't reasonably solvable in parallel at all.
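To illustrate how trivially that kind of sensitivity study parallelizes, here is a sketch with a made-up two-parameter model() standing in for their real 30-parameter equation; every evaluation is independent, so the parameter combinations map straight onto a worker pool.

```python
# Toy sensitivity sweep: evaluations are fully independent, so the grid
# of parameter combinations maps onto as many CPUs as you can find.
# model() is a made-up stand-in; with ~30 parameters the grid explodes
# combinatorially, hence the search problem described above.
import math
from itertools import product
from multiprocessing import Pool

def model(params):
    a, b = params
    return math.sin(a) * math.exp(-b)    # placeholder computation

if __name__ == "__main__":
    axis = [0.1 * i for i in range(20)]
    grid = list(product(axis, axis))     # 400 combinations for 2 params
    with Pool() as pool:
        scores = pool.map(model, grid)
    best_score, best_params = max(zip(scores, grid))
    print("best %.4f at %s" % (best_score, best_params))
```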
 
When I say massively parallel machine, I mean something tied together with a single system image.

As far as the apps are concerned, I was just paraphrasing him. I'm only really interested in games, though. All the computationally relevant tasks there are about simulation in one way or another, IMO, and the thing we are simulating has enough parallelism to go around ... can't get more massively parallel than reality.
 
MfA said:
When I say massively parallel machine, I mean something tied together with a single system image.

As far as the apps are concerned, I was just paraphrasing him. I'm only really interested in games, though. All the computationally relevant tasks there are about simulation in one way or another, IMO, and the thing we are simulating has enough parallelism to go around ... can't get more massively parallel than reality.

Heh, except for exotic maths with absolutely no known purpose, just about everything anyone really wants to calculate revolves around reality, so practically everything must be massively parallel :p

Unfortunately, reality itself isn't easily parallelizable without really fast interconnects, unless you simplify things or do extra computation to hide the latency. And of course you are always working with simplifications. The main issue is that reality has far higher dimensionality than processors and memory are built to work in.

Games, heh, definitely aren't going to run on anything besides a single machine for the client, so it doesn't really seem to matter (besides strange tech demos like that cluster doing raytraced Quake 3). Though of course you can run the AI, physics engine, graphics, etc. on separate processors; a sketch of that split follows.
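Something like this, as a bare-bones sketch: Python threads here only show the structure (a real engine would pin native threads to separate CPUs), and the three subsystem stubs are placeholders of my own, not real engine code.

```python
# Coarse task parallelism in a game loop: AI, physics and rendering
# each get a worker, with a barrier (join) once per frame. Each stub
# writes to its own key, so the subsystems share no mutable state.
import threading

def ai_step(state):      state["ai"] = "paths planned"
def physics_step(state): state["physics"] = "bodies integrated"
def render_step(state):  state["render"] = "frame drawn"

state = {}
for frame in range(3):
    workers = [threading.Thread(target=fn, args=(state,))
               for fn in (ai_step, physics_step, render_step)]
    for t in workers:
        t.start()
    for t in workers:
        t.join()                         # per-frame sync point
    print("frame", frame, state)
```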
 
On-chip interconnects tend to be really fast. As for dimensions ... not really relevant to rendering (apart from raytracing, but I think it will be a while before that is relevant). With AI & physics in timestep simulation it is all about local interaction ... fortunately, not everything moves as fast as light.
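A sketch of that locality argument, using a toy 1D diffusion of my own choosing: each cell's update reads only its two neighbours, so the domain splits into chunks that exchange just one boundary value per side each timestep. The chunks are updated serially here purely to show the data flow; each could live on its own CPU.

```python
# Toy 1D diffusion: each cell's next value depends only on its two
# neighbours, so split chunks need to exchange just one "ghost" value
# per boundary each timestep.
def step_chunk(left_ghost, chunk, right_ghost):
    padded = [left_ghost] + chunk + [right_ghost]
    return [0.25 * padded[i - 1] + 0.5 * padded[i] + 0.25 * padded[i + 1]
            for i in range(1, len(padded) - 1)]

field = [0.0] * 8 + [100.0] + [0.0] * 7   # a hot spot in the middle
for _ in range(50):
    mid = len(field) // 2
    lb, rb = field[mid - 1], field[mid]   # the only values exchanged
    left = step_chunk(0.0, field[:mid], rb)
    right = step_chunk(lb, field[mid:], 0.0)
    field = left + right
print(" ".join("%.1f" % v for v in field))
```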
 
Oh great... just as D---meat leaves, the Mac vs. PC holy war begins.

[returning to original topic]

As much as I would like to see this scenario come true, this is a little too optimistic.

The weakest link is that his entire scenario is based on ONE fact: IBM hasn't pumped out a 3 GHz PowerPC. The rest is just conjecture. He also seems to have great confidence in the future viability of Sun, and in its ability to sustain the R&D necessary to keep up with the likes of IBM and Intel. IMHO, Sun seems likely to seek cooperation with Fujitsu to keep the SPARC architecture competitive. Also, he ignores the not-small matter of making Jobs and McNealy work together.

Still, it's an interesting scenario. Having (briefly) read through his other columns regarding Cell and Linux, I have to admit that he might be on to something with the Linux developer community. While most Linux developers are fixated on "beating Microsoft" on the desktop and would likely stick with x86, the server developer community would probably be immensely attracted to Cell; can you imagine Google running Cell servers? A lot of my Linux developer friends like the PowerPC/Altivec architecture; I suspect they would like Cell even more.

Anyways, the sooner x86 dies, the better.
 
MfA said:
On-chip interconnects tend to be really fast. As for dimensions ... not really relevant to rendering (apart from raytracing, but I think it will be a while before that is relevant). With AI & physics in timestep simulation it is all about local interaction ... fortunately, not everything moves as fast as light.

Heh, sorry, I confused you about what I meant. I was referring to simulations of reality unrelated to games (for scientific purposes). Games are huge simplifications of reality, so everything can be done fairly easily. I agree that for games it's pretty easy to parallelize certain items, but it's kinda pointless to waste time making your game massively parallel when no consumer hardware for playing games will be.
 
"Why move when you can use Quantum Teleportation!"

Because it might screw up reality? :p

I don't trust Quantum Computers. They're gonna be the end of us all, I tell you!

As for Apple using Cells, it'd be a dream come true for me, but I doubt it.

XGrid, which is being built into the next OS X update, would be great for the Cell clusters people have been rambling on about: having several Cell devices at home and then having them share resources via XGrid.
 