CT5 Evans Sutherland Simulator - How did it work?

Greetings,

I've always wondered about the workings of the CT5 simulation system:

Does anyone have any knowledge of how these things actually worked in terms of the real time rendering mechanics?

My (admittedly) limited understanding is that the display system was vector based.
I vaguely recall reading (possibly wrong/mistaken) that the screens were projection based (which makes sense considering the technological limitations of the time).

It's always been a curiosity of mine how it all worked, and my web searching unfortunately hasn't revealed much.

I'm theorising primitives would've been rendered in real time (i.e. no buffering), and a "scene" would've been composed of several independent minicomputers rendering specific objects, with the final composition effectively being the objects and background displayed in appropriate Z order by the projection system.
This is purely speculation, so if anyone knows how it really worked, I'd be very interested in their explanations.

Thanks.
 

how do you cull objects rendered independently without buffering?
 
Good question.

Is it not possible to render a relatively simple object (fewer than a hundred polygons) by drawing it directly to the screen, omitting any hidden lines or surfaces using an appropriate HSR/HLR algorithm?
Isn't that how things were done in those days?
Remember, if the projection is simply overlaying each object in the correct order, then culling isn't such a major issue.
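To illustrate what I mean (a rough Python sketch of my own, not anything period-accurate): the simplest buffer-free hidden-surface technique for a convex object is back-face culling, where a polygon is simply skipped when its surface normal points away from the viewer.

```python
# Back-face culling sketch: draw a polygon only if it faces the viewer.
# Needs no frame buffer for a convex object. The conventions here (CCW
# winding when visible, camera looking down -Z) are my own assumptions.

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def is_front_facing(poly, view_dir=(0, 0, -1)):
    """poly: list of 3D vertices wound counter-clockwise when visible."""
    v0, v1, v2 = poly[0], poly[1], poly[2]
    edge1 = tuple(q - p for p, q in zip(v0, v1))
    edge2 = tuple(q - p for p, q in zip(v0, v2))
    normal = cross(edge1, edge2)
    return dot(normal, view_dir) < 0   # normal points toward the camera

# A quad facing the camera passes; the same quad wound backwards is culled.
front = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
assert is_front_facing(front)
assert not is_front_facing(list(reversed(front)))
```

For a single convex object this alone removes every hidden face; between separate objects, overlaying them in back-to-front order (as suggested above) would handle the rest.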

I realize independent buffers are a luxury we take for granted in the 21st century. But in the early 1980s? Bear in mind, I understand the CT5 used vector (not raster) display technology anyway (and how exactly does one buffer a vector image if it's not rasterized?).

I can't recall many early games with HLR, although apparently Elite did it.
 
According to Wikipedia Evans and Sutherland introduced a framebuffer in 1974:
...
In 1974 Evans & Sutherland released the first commercial framebuffer, costing about $15,000. It was capable of producing resolutions of up to 512 by 512 pixels in 8-bit grayscale, and became a boon for graphics researchers who did not have the resources to build their own framebuffer. The New York Institute of Technology would later create the first 24-bit color system using three of the Evans & Sutherland framebuffers. Each framebuffer was connected to an RGB color output (one for red, one for green and one for blue), with a Digital Equipment Corporation PDP 11/04 minicomputer controlling the three devices as one.
...
 
Right. It would appear that, in the 1970s to early 1980s, "real time" 3D graphics was often equated with real-time playback of pre-rendered scenes and/or objects.
In fact, it seems the definition of "real time" graphics varied depending on the scenario.

So it appears the CT5 system (which I assume would've employed the E&S Picture System for asset creation) was effectively a very expensive simulator on rails with respect to the animation sequences. Of course, there may have been several such pre-rendered sequences.
 
This is not correct. The E&S CT5 was a truly real-time 3D graphics system. It did not, in any way, simply play back pre-rendered scenes, and it was not dependent at all on the E&S Picture System -- other than that we used Picture Systems to do schematic capture of the boards, using an in-house schematic capture program called CADOL (computer aided design of logic).

I worked on the CT5 and its follow-on systems, the CT5A and CT6, for many years while an engineer at E&S. The CT5 had a simulation database -- a tree-like structure of the terrain geometry and moving models (airplanes, tanks, etc.) stored on disks on a PDP-11 computer. These were not rendered objects, but polygon models of the terrain and objects.

The front end of the CT5 would scan through the database tree, determine which objects were potentially in view of the current eyepoint, and request additional portions of the tree be paged into CT5 memory from the PDP-11 when the eyepoint moved into new areas. These objects would then have their polygon descriptions sent to a geometry processor, which performed the usual polygon operations that 3D graphics still uses today -- rotation, clipping, perspective division, etc.

The surviving polygons would then feed into what we called the Display Processor, which rendered the polygons into a frame buffer in a manner like today's modern graphics chips, except that we didn't have the processing power to do Phong shading, so we had to use Gouraud shading instead. And we didn't have the memory bandwidth to do a true Z-buffer, so the CT5 relied on sending polygons to the DP in sorted depth order, and it didn't do well with interpenetrating polygons.
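To make the pipeline described above concrete, here's a minimal Python sketch. It is entirely my own reconstruction for illustration -- every name is invented, and the real hardware was microcoded pipeline logic, not software -- showing an eye-space rotation, perspective division, and the depth-sorted (painter's algorithm) ordering that stood in for a Z-buffer:

```python
import math

# Illustrative only: rotate polygons into eye space, perspective-divide,
# then sort by depth so polygons can be painted back-to-front.

def rotate_y(v, angle):
    """Rotate a 3D point about the Y axis (a stand-in for the rotator)."""
    c, s = math.cos(angle), math.sin(angle)
    x, y, z = v
    return (c * x + s * z, y, -s * x + c * z)

def project(v, focal=1.0):
    """Perspective division: eye-space point -> 2D screen coordinates."""
    x, y, z = v
    return (focal * x / z, focal * y / z)

def render_order(polygons, angle):
    """polygons: list of vertex lists. Returns projected polygons sorted
    far-to-near, so a painter's-algorithm pass draws them correctly."""
    eye_space = [[rotate_y(v, angle) for v in poly] for poly in polygons]
    # Sort key: average depth of each polygon, farthest first.
    eye_space.sort(key=lambda poly: -sum(v[2] for v in poly) / len(poly))
    return [[project(v) for v in poly] for poly in eye_space]

near = [(0, 0, 2), (1, 0, 2), (0, 1, 2)]
far  = [(0, 0, 5), (1, 0, 5), (0, 1, 5)]
ordered = render_order([near, far], angle=0.0)
# The far triangle comes out first, so the near one gets painted over it.
```

As the post notes, an average-depth sort like this breaks down for interpenetrating polygons, since no single draw order is correct for them.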

Technology for the CT5: Originally 16Kbit DRAM, 2901 4-bit slice processors usually cascaded to create 16-bit or 32-bit ALUs, and TRW bipolar 16x16-bit multipliers. Almost everything else was TI 74Sxx TTL logic. Each individual processor section of the graphics pipeline, such as the polygon rotator, was essentially a micro-coded VLIW processor.

A full 8-channel (eight-display) CT5 system usually occupied thirteen 19-inch racks, each six feet tall, plus the PDP-11 computer on the front end. It was a wonder to behold!
 
And, for the record, the CT5 was anything but a "simulation on rails". You could "fly" the eyepoint to any spot in the simulation database. For those of us engineers working on the system, the software guys had written a fly application that we used to move the eyepoint around, controlled by the keypad on the VT100 terminal attached to the PDP-11 computer. It didn't simulate real vehicle dynamics; instead, it just proceeded in a straight line in the direction it was currently moving. You could change the direction from the keypad, and you could increase or decrease the velocity from the keypad.
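As a toy illustration of that fly behaviour (purely hypothetical -- the real PDP-11 application isn't public, and the key bindings below are made up), straight-line motion with keypad heading and speed changes might look like:

```python
import math

# Toy sketch: the eyepoint advances in a straight line every frame;
# keypad input only changes heading or speed, as described above.

class Eyepoint:
    def __init__(self):
        self.pos = [0.0, 0.0, 0.0]
        self.heading = 0.0      # radians, in the horizontal plane
        self.speed = 1.0        # distance units per frame

    def step(self):
        """Advance one frame along the current heading."""
        self.pos[0] += self.speed * math.cos(self.heading)
        self.pos[2] += self.speed * math.sin(self.heading)

    def key(self, k):
        """Hypothetical keypad bindings."""
        if k == '4': self.heading -= math.radians(5)   # turn left
        if k == '6': self.heading += math.radians(5)   # turn right
        if k == '8': self.speed *= 2                   # faster
        if k == '2': self.speed /= 2                   # slower

eye = Eyepoint()
for _ in range(10):
    eye.step()
# After ten frames at speed 1 and heading 0, x has advanced by 10 units.
```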

The second project I did at E&S was to create a "flybox" for the CT5. It was a portable terminal with a display (vacuum-fluorescent, text only -- LCD screens weren't readily available then), a keyboard, two 3-axis joysticks, and three knobs. It connected to the PDP-11 through an RS-422 serial cable and used a Motorola 6809 CPU inside the flybox. The database designers used it a lot. They'd fly through the database using the joysticks, fly up to a particular object and, if they felt the color needed tuning, dynamically adjust the color of an object's polygons using the three knobs to tweak the RGB components of the color.
 
A few more interesting tidbits:

The CT5 was primarily intended for military simulation applications -- although one was sold to Daimler-Benz as a car simulator. The Novoview simulators from E&S (SP1, SP2, SP3, SP3T, ESIG2000, etc.), which were done by a separate engineering group (sitting right next to the CT engineering group), were also full 3D simulators, but focused on commercial aviation simulation.

The FAA had some very stringent requirements for different levels of simulator certification, and one of the hardest was the brightness and representation of the "light points" -- lights on the ground and on other aircraft, primarily seen at night. If lights were implemented as very small polygons and drawn in the normal raster manner, as was the case in CT5, they weren't all that bright, and they twinkled as they moved between the raster lines. That was not acceptable for FAA requirements.

So the Novoview simulators drew all of the polygons in the normal raster manner, but wrote the light points to a separate buffer. The back end of the graphics machine, where pixels were read out of the frame buffer and sent to the display, finished that task in less than the frame time. The remaining time in the frame was used to calligraphically render the light points: each light point would be read from the buffer, the CRT beam (actually beams, because this was a full-color display) would be turned off and positioned to the place where the light was to be drawn on the screen, then turned on briefly to draw the point. The longer the beam stayed on, the brighter the light point. I was always amazed by the clarity of the light points drawn by a Novoview image generator.

Obviously, this required special CRT monitors that could handle both raster scan and calligraphic modes every frame.
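A rough sketch of that dual-mode frame, in Python with entirely made-up timing numbers, might look like this: the raster scan-out finishes early, and the leftover frame time is budgeted to calligraphic light points, where a longer beam on-time means a brighter point.

```python
# Illustrative frame-budget sketch; all timing constants are invented.

FRAME_TIME_MS = 16.7     # assumed ~60 Hz frame
RASTER_TIME_MS = 12.0    # raster scan-out assumed to finish early
BEAM_MOVE_MS = 0.05      # assumed cost to reposition the (off) beam

def calligraphic_pass(light_points, budget_ms):
    """light_points: list of (x, y, brightness).
    Draws points until the leftover frame time is exhausted; the
    brightness value sets how long the beam stays on at each point."""
    drawn, used = [], 0.0
    for (x, y, brightness) in light_points:
        on_time = 0.01 * brightness          # brighter = longer on-time
        cost = BEAM_MOVE_MS + on_time
        if used + cost > budget_ms:
            break                            # out of frame time
        used += cost
        drawn.append((x, y))
    return drawn

budget = FRAME_TIME_MS - RASTER_TIME_MS
points = [(i, i, 1.0) for i in range(100)]
drawn = calligraphic_pass(points, budget)
# Cost per point is 0.06 ms, so the ~4.7 ms budget fits 78 of the 100.
```

The real hardware presumably sized the light-point buffer to what the leftover time could guarantee; this sketch just shows why the budget exists at all.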

Novoview systems were much smaller and cheaper than CT5, with far lower polygon capacity. They did much of their math bit-serial (calculating one bit of the result each clock cycle), especially the earlier machines, whereas the CT5 did parallel math. The SP in the SP1, SP2, SP3, etc. names stood for "serial processor" because the math was done serially -- or so I was told. The CT in the name CT5 stood for "continuous tone". The Novoview systems didn't use a PDP-11 computer on the front end like the CT5 did; they originally used a TI microcomputer. When TI decided to stop making that computer, E&S was in a bind. With TI's blessing, E&S had three engineers reverse-engineer the TI computer and create their own software-compatible implementation.
 
@jsnow
Wonderful posts!
... a true Z-buffer, so the CT5 relied on sending polygons to the DP in sorted depth order and it didn't do well with interpenetrating polygons.

Since it was a flight simulator, presumably interpenetrating polygons would be indicative of a ground or mid-air collision so less of a rendering concern :)

FWIW A colleague of mine worked for a different flight sim company and once told me of the alternate CRT display mode used to handle bright, point lights.
 
I contacted someone who was part of the 3D industry back in the '70s and '80s. This was over a week before json's post. He basically set things straight with regard to how things worked.
For some reason I don't get emails about new posts in this thread (even though I started it? -- nothing in the spam folder), so I would periodically check for updates manually.

There wasn't much activity on the thread, and once the general workings had been explained to me, I decided I would post an update to the effect that the CT5 was indeed a real-time rendering system. However, at the time I was very busy (still am), so I've been a little late in returning.
That said, I'm pleasantly surprised by the renewed activity, and most significantly by the contribution made by @json.

@json, many thanks for your clarification of the technology. You probably know the person I contacted. I don't think he actually worked on the CT5, but he certainly knew about E&S technology.

Apologies for the delayed response (not sure why I don't get email notifications for new posts, but it's annoying).
 

Sorry, I meant to address @jsnow (no, I'm not a web developer)
 