Ok, here’s my vision so far …
Each user interface element is going to be an independent object. At least, in principle … in practice, we are probably going to compile them in at launch time, but that also depends on what platform we are going to use. I actually encourage people to write different language versions; with a clear specification, that should be easy enough to do. The general-purpose code should be relatively small and efficient, and most of the work goes into thinking it out. Once that is done, translating it to any language C or up (bar, say, Prolog) should be a relative breeze.
A User Interface object should in fact be seen as a complete program in itself. Its user interface rendering is also going to be its Icon, basically (should that turn out not to be practical, then it will simply have to present itself as something that looks nice in ‘icon’ form, and then transfer/animate nicely from there upon activation, and back again).
To make this technically possible, the Interface will run only the focused object (or at least be able to, and certainly at first), and that object will have its fullest attention. This also means that when not active, an object will really just be a small, static mesh. Too much animation at this point would also really create a headache, I think.
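To make the "only the focused object runs" rule concrete, here is a minimal C++ sketch; all names (UIObject, tick, cachedIconMesh) are my own illustrative assumptions, not a finalized design.

```cpp
#include <vector>

struct Mesh { std::vector<float> vertices; };

struct UIObject {
    bool focused = false;
    Mesh cachedIconMesh;  // small static mesh shown while the object is inactive
    int ticks = 0;        // counts how often this object got to run

    // Only the focused object gets per-frame updates.
    void tick() { ++ticks; }

    // Inactive objects just hand back their cached static mesh.
    const Mesh& meshForFrame() const { return cachedIconMesh; }
};

// The interface loop gives its full attention to the focused object only;
// everything else stays a frozen icon.
void frame(std::vector<UIObject>& objects) {
    for (auto& o : objects)
        if (o.focused) o.tick();
}
```

The point of the sketch is only that unfocused objects cost essentially nothing per frame, which is what makes treating every element as a full program affordable.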
But I do foresee we will make exceptions later, giving an option to specifically allow certain applications to remain active (I hate it anyway when Word starts going ape using loads of processing power because it is trying to grammar- and spell-check my documents in the background …).
A few examples of these:
- a mini-game, like a Rubik’s Cube, will look like a 3D Rubik’s Cube. Once you activate it, you can zoom in on it, and additional controls to interact with it might appear
- Google Earth, say, would look like a 3D Earth, and you would just zoom in a lot, basically.
You get the picture.
- images, sounds, movies, etc. will get some nice system-defined look, as you would expect, but rather than as individual files, they will be presented in the form of the player they are associated with. Upon activation you will simply be able to zoom in on them. If multiple players are supported, each player will also be present as a child of this item when it is selected, so you can play the movie/sound/image with an alternative player object, or play several selected objects at once with it.
- Links can attempt to render the object they refer to if possible, with options to download manually or automatically, etc.
User Interface Object
So, a user interface object being an independent little animal, it has a fair number of functions. Here are the main ones I foresee:
- output generic 3D Mesh Data
The UIO will generate its own 3D Mesh Data reflecting its status, reactions to user input, and so on, which will be forwarded to the Renderer by the main application object (‘MAO’)
- subscribe to events/messages
The UIO will tell the MAO what messages it is interested in – basically info coming either from the OS (e.g. user input) or from data exposed by other Objects that this object has indicated an interest in
Examples of messages it may get from the MAO are ‘Focus’, ‘Start’, ‘End’, etc., as well as user-interface messages and such (key input, controller movement, etc.). Additionally, it will receive information on what effects are supported and what its resolution is. This way it can make optimal use of the available visual effects while also providing good fall-back support: the object will then be able to display itself optimally depending on whether it has 100 pixels³ available vs 1,000 pixels³ available.
- generate events/messages
Examples are ‘I’ve changed, so redraw me’ (Refresh/Redraw), ‘I’m done, so close me’ (Close), but also ‘I want to spawn child objects’ (SpawnChildren) – for instance containing configuration options – or ‘draw a new object of type x with parameters y’. Or, ‘I have obtained a certain value, state, or collection of values’, made available to objects that have subscribed to some of this data.
- accept and return configuration/state settings
Basically, the object will be able to have its state loaded and saved. Take the example of the 3D Rubik’s Cube. The OS could ask it to give up its current state, to save and restore the same object or a copy of it later. The state information could include the position of the individual squares on the cube, and user options like choosing to ‘light up’ the yellow squares, graying out all other squares, making all other squares transparent, and so on. A movie player object would return the position of the current movie being played, so that a user can choose to continue from there later.
- publish (upon request) what kind of parents it supports (optional) and what kind of data it exposes.
This will allow other objects to see what kind of information they may be able to receive from this object, and it will allow the MAO to link this object to any supported object type (think again about the relation between a viewer and a file type).
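The five functions above can be summed up as one interface. Here is a hedged C++ sketch of that contract, plus a toy Rubik’s-Cube object showing the state round-trip; every name (UIObject, Message, CubeObject, the string-based state) is an illustrative assumption, not a finished API.

```cpp
#include <string>
#include <vector>

struct Mesh { std::vector<float> vertices; };   // generic 3D mesh data
struct Message { std::string type, payload; };  // event delivered by the MAO

class UIObject {
public:
    virtual ~UIObject() = default;
    virtual Mesh outputMesh() const = 0;                           // 1. mesh data out
    virtual std::vector<std::string> subscriptions() const = 0;    // 2. wanted messages
    virtual void onMessage(const Message& m) = 0;                  // 3. receive events
    virtual std::string saveState() const = 0;                     // 4. state out …
    virtual void loadState(const std::string& s) = 0;              //    … and back in
    virtual std::vector<std::string> supportedParents() const = 0; // 5. capabilities
    virtual std::vector<std::string> exposedData() const = 0;
};

// A toy cube object: its "state" is just a string of face colours.
class CubeObject : public UIObject {
    std::string faces_ = "WWWWYYYY";
public:
    Mesh outputMesh() const override { return Mesh{{0, 1, 2}}; }
    std::vector<std::string> subscriptions() const override {
        return {"Focus", "KeyInput"};
    }
    void onMessage(const Message&) override {}  // would twist faces, etc.
    std::string saveState() const override { return faces_; }
    void loadState(const std::string& s) override { faces_ = s; }
    std::vector<std::string> supportedParents() const override { return {"Desktop"}; }
    std::vector<std::string> exposedData() const override { return {"CubeState"}; }
};
```

With this shape, the OS can save `saveState()` to disk and later feed it to `loadState()` on a fresh object, which is exactly the save/restore behaviour described for the cube and the movie player.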
Renderer
So, the Renderer will receive from the MAO/OS a pack of 3D Mesh information, which compiles the 3D Mesh Data the objects have exposed together with information on the view settings, and the Renderer will go to work. The Mesh data is still going to be somewhat high-level; the Mesh information will simply say ‘this poly should reflect and be transparent, bump-mapped, or whatever’, and the Renderer will have to try and achieve as much of the effects as possible without taking too long.
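One simple way to express those high-level hints is a bitmask of requested effects that the Renderer intersects with what the hardware actually supports; the flag names below are my assumptions, just to show the degradation idea.

```cpp
#include <cstdint>

// Per-polygon effect hints, as the objects would request them.
enum EffectFlags : uint32_t {
    Reflective  = 1u << 0,
    Transparent = 1u << 1,
    BumpMapped  = 1u << 2,
};

// The Renderer keeps only the effects it can actually deliver in time,
// silently dropping the rest instead of failing, so every object
// degrades gracefully on weaker hardware.
uint32_t resolveEffects(uint32_t requested, uint32_t supported) {
    return requested & supported;
}
```

So a poly asking for `Reflective | BumpMapped` on a renderer that only supports reflection would simply come out reflective, with the bump map dropped.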
The Main Application Object
The MAO itself will expose system messages that UIOs can subscribe to, and have a few objects of its own that you can use, among other things, to indicate what kind of effects you want to turn on or off, the level of refreshing activity, and so on. We will also try to develop some kind of automatic load-balancing and feature detection, but at the beginning it is always better to have something fully configurable – for testing, and for later, for people who prefer that kind of flexibility.
The MAO will control the overall camera/view settings for the renderer. I was thinking that by keeping the OS/CTRL or whatever key pressed (if the Windows key can be kept from opening the Start menu, which I’m not sure about at this moment), you could use the mouse to rotate the whole user interface if you so desire, while using the keyboard for going through the objects: a normal key-down activating the next child, while shift+key-down would only focus the next object without activating it, so you could run up and down the tree quickly. Once an object is activated, it will be able to take care of most input and won’t be interrupted by the MAO until it considers itself finished.
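The focus-versus-activate distinction in that traversal can be sketched in a few lines; the tree structure and function names here are hypothetical, just to pin down the two behaviours.

```cpp
#include <vector>

struct Node {
    std::vector<Node> children;
    bool focused = false;  // highlighted, but still just a static icon
    bool active  = false;  // running and owning most of the input
};

// Shift held: move focus to a child without activating it,
// so the user can run up and down the tree quickly.
Node* focusChild(Node& parent, int index) {
    if (index < 0 || index >= static_cast<int>(parent.children.size()))
        return nullptr;
    Node& n = parent.children[index];
    n.focused = true;
    return &n;
}

// No shift: focus and activate, handing the object control of input
// until it considers itself finished.
Node* activateChild(Node& parent, int index) {
    Node* n = focusChild(parent, index);
    if (n) n->active = true;
    return n;
}
```

The MAO would then route most subsequent input straight to whichever node has `active` set, only stepping back in when that object signals it is done.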
The rotation part is actually a bit complicated, as I’m thinking it will be rare for the whole scene to be rotated; instead I would only want to rotate a certain object tree. Think here about a pole with three circular clothing racks or postcard racks, where you only want to rotate the middle rack. Also an interesting question is whether you’d want the objects to keep facing the camera by default even if you rotate a structure. I’m thinking yes, but I’m also thinking: let’s keep this simple for now and stick to only zooming objects in and out.
Next Up
Next up is how the 3D Forum Browser would work using the roughly outlined model described above. No doubt this will both make things a bit clearer and force us to fill in some of the details or make some adjustments. And then we should just start making it work, going for a quick win first: getting something visible done as fast as possible. I may even use my simple text renderer for it, if no one else with more graphics experience chimes in.