Apple's GUI is an OpenGL app. (Link inside)

Entropy

Since there has been some discussion about similar items, I thought some might find this to be of interest.

I'll quote from the most relevant of the slides:
From August the 24th, OSX will have a GUI (more like the entire display engine actually) that
* Uses OpenGL like any other application
* The desktop is actually a 3D scene
* Everything is a textured polygon
* Compositing via blending and multitexturing


The slides from the presentation made at SIGGRAPH are here:
http://www.opengl.org/developers/code/features/siggraph2002_bof/sg2002bof_apple.pdf

 
It makes sense--Jaguar (X.2) allows users with HW TnL cards (DX7+) to accelerate their desktops.
 
Yes, in essence MS will be following Apple again with Longhorn (although AFAIK MS will use each window as a different 3D scene, and I'm not sure this is the case with Jaguar).
 
BeOS had a prototype app_server that used OpenGL; since some of those guys went back to the Mac, it's no surprise to me, nor is it a surprise that M$ plans to do the same thing.
 
The question is whether this will speed things up or slow them down. Sounds kind of like more bloatware to me.
 
The intention with Jaguar is actually to speed things up. G4s are supposed to get a speed boost because some of the graphics processing can be handled by the Velocity Engine (or so I'm told), and any Mac with an OpenGL-compliant 3D graphics adapter with 32+ MB of RAM should also see a boost, because some of the GUI operations are offloaded from the CPU to the 3D card.

However, when MS implements it, it will most likely have the opposite effect!
 
Nagorak,
At the first company I worked for (just north of Sydney), the MD was keen to actually have the GUI ray traced. It didn't happen, of course, but it was an interesting idea.
 
What's the real-world difference between treating the entire desktop as a 3D scene or each individual window as a scene?

Benefits and drawbacks, please.

I'm thinking: if one window changes and is overlaid by another, will one approach require redrawing both, for example?

How will the P10's multithreading capability improve the chip's ability to handle multiple 3D scenes, compared to the non-multithreaded chips that make up everything else out there?

*G*
 
For whole-screen vs. per-window scenes: in the first case textures are shared across windows; in the second, each window loads its own texture, which could mean many copies of the same texture in RAM.

That's not even counting the fact that you would modify the modelview matrix, primary buffer/render target, etc. many times over.
 
It makes sense--Jaguar (X.2) allows users with HW TnL cards (DX7+) to accelerate their desktops.

Actually you don't need TnL...

The intention with Jaguar is to actually speed things up. G4's are supposed to get a speed boost because some of the graphics processing can be handled by the Velocity processor (or so I'm told)

Well, actually this is already the case with Quartz, although I imagine it'll be tuned in 10.2 (it seems so from the beta builds). QE relieves a lot of the bandwidth strain on the CPU (even worse when you consider how much of a bandwidth hog AltiVec is and how little bandwidth Apple gives it) by moving the Quartz compositor task off of the CPU.

OpenGL compliant 3D graphics adaptor with 32+MB of RAM

It's 16MB nowadays...

The interesting question comes when chips like the R300 become more pervasive (even full DX8 chips could probably manage it): will the Quartz drawing layer be migrated to the GPU as well?
 
archie, I thought the minimum for h/w X.2 OGL acceleration was a Radeon or GF2MX? I assumed this was b/c of h/w TnL--is it just a loose minimum rendering speed requirement?
 
I'm not really sure about why there is a speed benefit from going 3D here, but an advantage could be that it gives developers more freedom to experiment with new interfaces. Windows can be rotated etc. I've yet to see a good general interface that looks 3d, but that doesn't mean someone won't develop one.
 
archie, I thought the minimum for h/w X.2 OGL acceleration was a Radeon or GF2MX? I assumed this was b/c of h/w TnL--is it just a loose minimum rendering speed requirement?

That includes Radeon Mobility chips... The older TiBooks weren't going to be supported initially (16MB), but that changed in later revisions (I should also point out that the Radeon Mobility doesn't have a "Charisma Engine", whereas the Radeon Mobility 7500 does).

I'm not really sure about why there is a speed benefit from going 3D here

Perhaps, perhaps not... In the case of OS X, the Quartz compositor is something that can easily be migrated from the CPU to an alternative processing device (in this case a consumer GPU), which frees up a lot of processor bus traffic for other computing tasks (keep in mind the CPU still handles the Quartz drawing layer). The overall benefit is a tangible UI performance improvement, and perhaps one noticeable in your own code, as you may find more CPU cycles available without resorting to forcing the Mach scheduler to give you real-time priority...
 
DaveBaumann said:
Yes, in essence MS will be following Apple again with Longhorn (although AFAIK MS will use each window as a different 3D scene, and I'm not sure this is the case with Jaguar).

I think M$ planned a 3D Windows for Longhorn long before Apple decided on this--if indeed this is what they do. I also think what M$ will wind up doing will be far more fundamental and powerful than what Apple's doing--which is essentially just turning the GUI into a 3D application launched from the OS. I could be wrong, but Apple's approach doesn't sound very sophisticated--more like a superficial gimmick.

Of course, Apple's desperately trying to drive OSX into its installed base--and so far efforts have not been all that profitable. This seems like a way to make it "more attractive" in an attempt to drive its acceptance in the Mac installed base. By Apple's own figures, fully 80% of the current Mac installed base does not yet use OSX in any form, and accordingly developers are continuing to develop for older Mac OS's to get the most coverage. It'll be interesting to see what this actually turns into. More RDF, or something fundamental?
 
WaltC said:
I also think what M$ will wind up doing will be far more fundamental and powerful than what Apple's doing
In what way?
--which is essentially just turning the GUI into a 3D application launched from the OS. I could be wrong, but Apple's approach doesn't sound very sophisticated--more like a superficial gimmick.
It sounds like a move in the right direction to me: per-pixel GUI operations moved to hardware, everything double buffered, and a lot of extras for free (transforms, blending, etc., all in hardware). I don't see why you would call that a gimmick.
 