Defining 2 worlds (VERY simple question)

K.I.L.E.R

Since it's so simple, there's no need to move it into the professional forums and make me look like a freaking idiot. I do a good job of that on my own.

I know how to define 1 world.
So let me expand my knowledge.

Is this correct?

Code:
    glMatrixMode(GL_PROJECTION);
    (whatever....)

    // camera
    glMatrixMode(GL_MODELVIEW);
    (transform...)

    // world 1
    glMatrixMode(GL_MODELVIEW);
    (transform...)
    drawWorld1crap();

    // world 2
    glMatrixMode(GL_MODELVIEW);
    (transform...)
    drawWorld2crap();
 
Once the modelview matrix is selected with glMatrixMode, it stays current globally until you switch to another matrix. Assuming all of this is in the same program, you only need to call glMatrixMode(GL_MODELVIEW) once; to draw the second world, just glClear everything and call the new display lists instead of the old ones.
 
Thanks.
I'd like to avoid the traditional way of doing things, though.

I could just draw two planets, each with its own land masses, people, etc., but handling both worlds dynamically in real time would probably be beyond the capabilities of current CPUs.

Sorry about this, but at the time of my post I really had no idea what I was really asking.
It's clearer to me now.

CPUs barely have the power to handle more than a hundred bones.
So when will they have the power to calculate many complex entities in real time?
Why are CPUs so weak?
Video cards see tremendous performance increases every generation.
CPUs are now going multi-core just to keep up with Moore's law, and still they cannot keep pace.

Is it because CPUs are more expensive to create than video cards?
I would assume so, because a CPU's architecture is much more complex than a GPU's, given manufacturers' constant efforts to cut cost per wafer while maintaining an exponential performance increase over previous-generation designs. This isn't as much of a problem for video cards: because GPUs are multi-pipelined by nature, you can just slap on a few more pipes for the extra performance (although this may no longer be the case as much as it used to be).

Would my assumptions be correct?
 
I don't think it's CPUs in general that are the problem. I believe it's just the designs of the CPUs.

If a CPU were designed for a specific task, it would be extremely fast with modern tech; however, most CPUs are general purpose. The other problem is that companies want to make money. What is the main benefit for AMD and Intel of releasing new CPUs when interest in new CPUs is down? Sad to say, but the usual speed increase isn't a big deal anymore. Back when we had 800MHz CPUs, buying the new 900MHz CPU could net you some big gains, but now an Athlon 64 3800+ isn't much different from a 4000+. It only squeezes the companies' bottom line.


Multicore is a great thing going forward: as MHz ramping has slowed down, each new process node lets us increase both the number of cores and the clock speed. For example, if a 4GHz dual-core Athlon 64 were possible at 90nm, a 4.5-5GHz quad-core Athlon 64 could be possible at 65nm. We increase the speed of each core and add another two, giving us double the performance or more (even accounting for the fact that you won't get a 100% speedup from the extra cores).

Of course, the burden would fall back on programmers and compilers to get the most out of these CPUs, and cooling might be a problem. But can you imagine a quad-core Athlon 64 at 4GHz per core, with two CPUs in your motherboard? That would be quite a beast.

Personally, I believe that over the next year the war between AMD and Intel will heat up again and things will move rather quickly for a year or two. I believe this because many games will start to use x86-64 code and will be threaded for multicore CPUs. That is when we will see them start increasing both the number of cores and the clock speed per core to have the best chip.
 
The benefits of multithreaded programming are going to be great. Using explicit threads and pipes/sockets, you can offload at least AI, UI, and sound.

Sadly, I'm too broke to get an X2 3800+ to try out my own pthread programs with real parallelism :cry:.
 