Weirdest actual 3D card for PCs

Simon F said:
darkblu said:
now, the classic apple2 had really funny color graphics mode
That was the 'complementary colour pair' addressing wasn't it? That came back into the limelight recently because of Microsoft's patent on LCD AA/"ClearType". There were arguments that it had all been done before on the Apple ][.

i'm not familiar with the ms 'invention' but from the way it sounds that should be it.

re giving details, ahh well, how about that:

the idea at apple had been to represent 6 colors with ~1 bit per pixel. yes, you got that right. for the purpose you had a byte for each 7 pixels (i.e. 8/7 bits per pixel), where:
  • the last bit of the byte would 'alternate palettes', so to say
  • a lit even horizontal position would be either 'violet' or 'blue', depending on the 'palette', and a lit odd horizontal position would be either 'green' or 'orange', by the same palette bit
  • two neighbouring lit pixels would produce white
so in the end you get 2 whites and 2 blacks, plus 4 other colors: 6 distinct colors in total. cool, no?
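the scheme above can be sketched roughly like this (a simplified python model; real hardware artifact colours also depend on neighbouring bytes and the half-pixel shift of the alternate palette, and the colour names here are approximate):

```python
# simplified model of apple ii hi-res byte decoding, as described above.
# assumptions: ignores cross-byte interactions and the half-pixel shift
# of the alternate palette; colour names are approximate.

def decode_hires_byte(byte, x_origin=0):
    """return 7 colour names for one hi-res byte starting at pixel x_origin."""
    palette = (byte >> 7) & 1                    # bit 7 flips the colour pair
    bits = [(byte >> i) & 1 for i in range(7)]   # bits 0..6, left to right
    colours = []
    for i, lit in enumerate(bits):
        if not lit:
            colours.append("black")
        elif (i > 0 and bits[i - 1]) or (i < 6 and bits[i + 1]):
            colours.append("white")              # two adjacent lit pixels merge
        elif (x_origin + i) % 2 == 0:
            colours.append("blue" if palette else "violet")
        else:
            colours.append("orange" if palette else "green")
    return colours

# two isolated lit pixels at even positions, palette bit clear -> both violet
print(decode_hires_byte(0b00000101))
```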

ed: doh, the typo/semantic error count in the above suggests i should get some sleep.
ed`: nice, there was a factual error in there, too.
 
darkblu said:
come on, guys, bitplanes and mid-refresh palette/dac programming were everyday stuff on EGAs and VGAs alike.

now, the classic apple2 had really funny color graphics mode, and the monochrome 'hires' mode was no less cunning either ..having a well-optimized 'division by 7' routine for the 6502 was really important in those days, yeees..

I think you're missing what was weird about the Amiga copper 'framebuffer'. The framebuffer was a program: each 'pixel' was an instruction, and you 'drew' the program via the CPU (for a normal pixel the CPU would write an instruction to change the DAC at each 'pixel').

A copper framebuffer was one where each 'pixel' could do one of 3 things:
Move a value to a register (update a colour, start a blit, etc).
Wait for the video scan to reach a particular point.
Skip the next instruction if a condition was met.

A copper framebuffer was not that useful except for getting low-res, high-colour images, but as a piece of weirdness it was quite hard to beat. You could draw lines that caused the 'pixels' to the left to be extended 2 pixels to the right. You could draw a filled triangle with just 2 lines (the left line caused the 'pixel' to change the bitmap register, which caused the horizontal span to be fetched; the right line switched the bitmap register back). You could even write a 'pixel' that caused a loop (set a pixel halfway along the screen, causing the left-hand side to be repeated).
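The idea can be sketched as a toy interpreter. The MOVE/WAIT/SKIP opcodes match the list above, but the encoding, register names and beam model here are invented for illustration, not the real Amiga hardware:

```python
# Toy sketch of an Amiga-style copper list: a display 'program' of
# MOVE / WAIT / SKIP instructions executed in step with the video beam.
# Instruction encoding and register names are illustrative only.

def run_copper(program, lines=4, cols=4):
    """Run a copper-style list; return the background colour at each (line, col)."""
    regs = {"COLOR0": 0}
    frame = []
    pc = 0
    for line in range(lines):
        row = []
        for col in range(cols):
            # execute instructions until one waits for a beam position not yet reached
            while pc < len(program):
                op, a, b = program[pc]
                if op == "WAIT":
                    if (line, col) < (a, b):
                        break                    # beam hasn't arrived yet
                    pc += 1
                elif op == "MOVE":
                    regs[a] = b                  # update a register mid-frame
                    pc += 1
                elif op == "SKIP":               # skip next instr once beam passed (a, b)
                    pc += 2 if (line, col) >= (a, b) else 1
            row.append(regs["COLOR0"])
        frame.append(row)
    return frame

# classic 'copper bars': change the background colour on each scanline
bars = [("MOVE", "COLOR0", 1),
        ("WAIT", 1, 0), ("MOVE", "COLOR0", 2),
        ("WAIT", 2, 0), ("MOVE", "COLOR0", 3)]
print(run_copper(bars))
```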

A displaylist isn't a weird idea, but a displaylist as a framebuffer is pretty weird :)
 
The Atari ST was so simple (hardware wise). The ST's graphics chip was such a simple machine compared to the Amiga graphics (the Video Shifter compared to Agnus's Copper and Blitter).

The STE had a Blitter as well didn't it? (Along with the Falcon and Jaguar iirc)

Anybody program on the Jag?
 
yep, DeanoC, i do admit i may have missed more than a couple of things in this thread - i'll re-read it tomorrow afresh :)
 
Simon F said:
darkblu said:
now, the classic apple2 had really funny color graphics mode
That was the 'complementary colour pair' addressing wasn't it? That came back into the limelight recently because of Microsoft's patent on LCD AA/"ClearType". There were arguments that it had all been done before on the Apple ][.

The Atari 8-bit also had a similar mode. It made use of moiré patterns and artifacting to get 6 colors from on/off pixels. This was showcased wonderfully in the Brøderbund Software game Drol. A few demos also made nifty use of this mode.
 
Speaking about the Amiga, I have fond feelings about its weird hicolor display mode called HAM (short for Hold And Modify); it was, at its time, a great leap from the PC's 256 color. The A500 series (A1000, A2000 etc.) used 6 bits per "pixel" and got 4096 colors, and the AGA series (A1200 & A4000) used 8 bits per pixel and got ~256k colors. These modes were mostly used in art & paint programs, but a couple of games used them too. After a while the demoscene started to use HAM mode as well; it was a weird concept, but I enjoyed the "challenge" of using it the fastest way possible :)

Another thing: the Amiga used bitplanes for its gfx, so displaying a pixel was not just "write 1 byte somewhere"; you had to split the pixel data into bits and insert it bit by bit :) The routine tuned/developed in the demoscene for this was called "chunky to planar", and boy, it was a constant battle over who had the fastest C2P routine :)
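The naive bit-by-bit conversion described above looks something like this (a Python sketch for clarity; real C2P routines did this with clever word-level merge tricks in 68k assembler):

```python
# Sketch of chunky-to-planar conversion: turning one-byte-per-pixel
# 'chunky' data into Amiga-style bitplanes, one bit of each pixel per plane.
# This is the naive version; demo-scene C2P routines were far faster.

def chunky_to_planar(pixels, depth):
    """pixels: list of palette indices; returns `depth` bitplanes as byte lists."""
    assert len(pixels) % 8 == 0, "pad to a multiple of 8 pixels"
    planes = [[] for _ in range(depth)]
    for byte_start in range(0, len(pixels), 8):
        for plane in range(depth):
            byte = 0
            for i in range(8):
                bit = (pixels[byte_start + i] >> plane) & 1
                byte = (byte << 1) | bit     # leftmost pixel = most significant bit
            planes[plane].append(byte)
    return planes

# eight pixels with palette indices 0..7, split into 3 bitplanes
print(chunky_to_planar(list(range(8)), 3))
```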

HAM mode pixel structure was something like this:
bits 5 & 6: select whether to use the 16-color palette or modify one color component (R, G or B)
bits 1-4: palette index, or the new 4-bit value for the chosen component

So it could take up to 3 pixels to change color to the "final" color.

Ahhh, that was great time :)
 
stevem said:
DeanoC said:
The design basically stuck a x86 compatible (Cyrix IIRC) chip on the graphics card and run the TnL portion of D3D on the card, it had direct access to the triangle engine etc.

AFAIR it was a V2200 board with a Fujitsu Pinolite geometry engine handling fixed function T&L. The end results weren't all that impressive given the new wave of TNT & V2 chips coupled with faster PII CPUs. They abandoned the project, although Rendition/Hercules had a number of prototypes.

the fujitsu pinolite board was a result of rendition's very lame drivers at the time. apparently, some new driver guy at rendition rewrote the GL driver to outperform the pinolite board. that helped kill the project.

IIRC, DX didn't have TNL at the time, and DX programs wouldn't have benefitted from pinolite until DX7. two years later geforce came out...

- sm
 
Jogi said:
Speaking about the Amiga, I have fond feelings about its weird hicolor display mode called HAM (short for Hold And Modify); it was, at its time, a great leap from the PC's 256 color.

You mean "PC's 16 colour." The Amiga came out in 85. VGA came out in 87. At the time the Amiga appeared, the PC had CGA and EGA. IIRC there was also PGA at the time, although I don't remember exactly what it was.
 
Ahhh....CGA. 2 wonderful palettes of 3 colors (plus a background color) to choose from, IIRC. My favorite was Palette 2....I believe it was something like puke green, rusty red, and mustard yellow. :)

I do recall playing "The Hobbit" text/graphics adventure game, and it was quite impressive what they could do with that palette.
 
Haha. I remember playing a lot of CGA games on my PCjr, but what I liked most was the PCjr's special 16-color modes. King's Quest ruled! :)

Tommy McClain
 
Gah! The Amiga was a wonderful machine. Overcome by a fit of nostalgia, I assembled my A4000 and buried myself in the HiSoft Assembler.

I remember the arguments I used to get into with a friend about which particular architecture was the best. He was a staunch PC fan and I thought the Amiga was the dog's danglies. Even the OS kicked ass for its time.

I also remember how he struggled to get the old copper bar effect on his Paradise EGA card. Much port attacking and lots and lots of assembler with a lovely Pascal framework.

Even the games on the Amiga never seemed to be surpassed. I remember seeing Hybris for the very first time, shortly followed by my first full-on demo. Made my x186/CGA look pathetic, which I guess it was really :)

Somehow it just doesn't seem as much fun anymore. I guess I just got old and used to it all. :(
 
shaderman said:
stevem said:
DeanoC said:
The design basically stuck a x86 compatible (Cyrix IIRC) chip on the graphics card and run the TnL portion of D3D on the card, it had direct access to the triangle engine etc.

AFAIR it was a V2200 board with a Fujitsu Pinolite geometry engine handling fixed function T&L. The end results weren't all that impressive given the new wave of TNT & V2 chips coupled with faster PII CPUs. They abandoned the project, although Rendition/Hercules had a number of prototypes.

the fujitsu pinolite board was a result of rendition's very lame drivers at the time. apparently, some new driver guy at rendition rewrote the GL driver to outperform the pinolite board. that helped kill the project.

IIRC, DX didn't have TNL at the time, and DX programs wouldn't have benefitted from pinolite until DX7. two years later geforce came out...

- sm

That's why I distinctly remember the Rendition project (I'm sure it was an x86 processor though): it was rumoured to get DX TnL by running the software TnL on the video card. DX had 'hardware' TnL long before it officially did; the PSGP predates Dx7, and Intel had an SSE PSGP back when SSE was still called KNI. Rendition added a PSGP that sent the data to the video card's CPU. Dx7 then added a 'proper' method, via the DDI, of expressing vertex processing capabilities.

That's why I was involved: we were one of the few games that actually had a Dx6 TnL path through our engine (we worked on early SSE, and before we wrote a custom path we sent it via the Intel PSGP).

Dx has always had an extension system (I've used Intel- and Matrox-extended DX in the past).
 
darnit. Been trying to get my friend to get me one of those Rendition PCI/AGP reference cards, but Micron's labs are sealed tighter than a drum right now. Ah well...
 
That Verite board with the coprocessor really intrigued me back in the day. Hercules' card was called the Thriller Conspiracy. I dug up some info on it for you all to nostalgiatize over :)

Nifty Fujitsu Pinolite information
http://pr.fujitsu.com/jp/news/1997/Jul/2e.html

http://alag3.mfa.kfki.hu/dcsabas/hardware/3d_cards.htm
VBE 2.0, Rendition V2200 RISC + Fujitsu FGX-1 geometry and lighting processor (750k polygons/s), 230 MHz RAMDAC, line frequency: 31-110 kHz, vertical refresh: 60-160 Hz, 8+1 MB 100 MHz SGRAM, 3D res: 1024x768/16bit (double- and Z-buffered), MPEG-1, MPEG-2 (DVD), PCI 2.1 bus, 3D APIs: D3D, OpenGL, Rendition Redline, Speedy3D. Produces two times higher fps on processors with weak FPU! (like 486, 5x86, K5, 6x86, K6, P54C), 55M pixels/s.

http://www.billsworkshop.com/e3.html
Hercules Thriller Conspiracy
due out September 1998; $149

This was another surprise at the show. Hercules was running a very early reference board and did not get it operating until Friday, May 29. This is an interesting, perhaps odd, card. It is based on the Rendition Verite V2200 chipset used in the current Thriller, augmented by a Fujitsu Pinolite (FGX-1) geometry and lighting processor. This takes over some of the functions now performed by the CPU, and is designed to boost performance significantly for socket 7 systems running Cyrix and other less powerful CPUs. It assumes the floating-point calculations that these CPUs run much more slowly. Quake II demo 1 at 640x480 resolution was running at 24 FPS on a Cyrix PR200. That is slower than I achieved with the original Thriller 3D on my 200MMX system. Let's just say they have some tweaking to do. The card will be PCI only, consistent with its intent of socket 7 support. I don't know how many socket 7s will be left in September.


Turn Your Pentium Class System into a True Arcade Gaming Machine

Hercules Thriller Conspiracy™

Rendition V2200™, Fujitsu FGX-1™ Geometry Processor, 230MHz DAC, 8MB plus
1MB Cache produces up to two times the 3D frame rate on low cost PCs

FREMONT, CA, -- May 28th, 1998 -- Hercules Computer Technology, Inc., a
leader in high performance 3D graphics accelerators, today announced the
Hercules Thriller CONSPIRACY™, the first mainstream 2D/3D graphics board to
accelerate the complete 3D pipeline in hardware, including geometry and
lighting.

The Hercules Thriller CONSPIRACY™ is a complete 3D graphics pipeline
processor system capable of performing significantly more of the work
involved in rendering 3D graphic images than conventional graphic
accelerators. Based on the high performance Rendition Verite V2200 2D/3D
graphics and multimedia processor and the powerful Fujitsu FGX-1 geometry
and lighting processor, the Hercules Thriller CONSPIRACY™ is expected to
set new price/performance standards for 3D rendering. This is especially
important for low-cost "socket 7 PCs" with less powerful CPUs. The Hercules
Thriller CONSPIRACY™ is targeted for mainstream consumer applications in
the 3D gaming and 3D business markets.

The Hercules Thriller CONSPIRACY™ is a 2D/3D add-on graphics card for the
PCI bus, with a 230 MHz DAC, 8MB high-performance SGRAM display memory, and
an additional 1MB of on-board cache (9MB in total). The Hercules Thriller
CONSPIRACY™ supports 2D graphics resolutions up to 1600x1200 at 90 Hz
refresh rate and 3D graphics resolutions up to 1024x768/16bit double
buffered and Z-buffered. The Hercules Thriller CONSPIRACY™ will start
shipping at end of June '98 at an estimated retail price of $149.

Accelerating the Complete 3D Pipeline

The performance bottleneck of today's 3D applications can be attributed to
the limited 3D support in the main system processor. In a typical
application, the creation of a 3D image consists of 4 steps: (1) geometry
set-up (calculation of objects/triangles in the view port); (2) lighting;
(3) triangle set-up (break down of triangles into pixels) and; (4)
rasterization (effects such as filtering, fog, transparency, specular
highlights and perspective correction). First generation 3D accelerators
implemented the rasterization directly in hardware. Second generation 3D
technology, such as Rendition's V2200, added triangle set-up, still leaving
the geometry and lighting to be executed by the computer's main processor.
Moving these functions to a second, specialized geometry and lighting
processor, such as Fujitsu's FGX-1™, will result in significant 3D
performance improvements without requiring the use of faster main
processors.

The Fujitsu FGX-1 is a 32-bit floating-point/integer arithmetic front-end
processor, which integrates a PCI interface and a PCI bridge to communicate
with Rendition's V2200. It features a highly optimized "parallel processing"
and "8 stage (4 operation stages) pipeline" architecture to execute geometry
and lighting at an impressive internal data transfer rate of up to 800
MB/sec (400 MOPS/750k polygons/sec.).

Both the input data stream and the output data stream are double buffered
while the Fujitsu FGX-1's overall data transfer time is virtually zero. To
further enhance the overall 3D performance, Hercules uses an additional 1MB
of SGRAM as a cache between Fujitsu's FGX-1 and Rendition's V2200. In 2D
operation, the Fujitsu FGX-1 works in "transparent mode" without any
performance degradation.

Hercules believes that the Hercules Thriller CONSPIRACY™ can produce up to
two times the 3D frame rate of conventional second generation 3D
accelerators. This is true in virtually all 3D games and 3D Windows
applications supporting either OpenGL (e.g. games taking advantage of the
Quake engine) or Rendition Redline, running on socket 7 computers.

The Hercules Thriller CONSPIRACY™ allows owners of AMD, Cyrix, IBM and
Intel based Socket 7 computers to take full advantage of its performance
improvements without the expense of purchasing a complete new system. The
Hercules Rendition V2200/Fujitsu FGX-1 solution is a mechanism that offloads
the bulk of the work from the host CPU, which results in a dramatic increase
in overall performance. "This new technology will represent a major shift
for consumers of PC graphics," said Jay Eisenlohr, Vice President of
Business Development for Rendition. "In the 3D market, performance continues
to increase up to 3X with each generation of technology, requiring users to
purchase expensive new computers to keep up with the industry's
developments. This new technology will provide an inexpensive way to obtain
the best performance, by allowing gamers to buy a simple add-in card that
brings them up to speed instantaneously."
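The four pipeline stages the press release describes can be sketched in toy form. Everything below is illustrative: a board like this would run stages 1-2 on the geometry chip and stages 3-4 on the rasteriser, and projection, clipping and per-vertex lighting are omitted for brevity:

```python
# Toy software sketch of the four-stage pipeline described above:
# (1) geometry transform, (2) lighting, (3) triangle set-up, (4) rasterisation.

def transform(v, m):
    """Stage 1: apply a 3x3 matrix to a vertex (no projection, for brevity)."""
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

def flat_light(normal, light_dir):
    """Stage 2: Lambert diffuse intensity, flat-shaded for the whole triangle."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, dot)

def edge(a, b, p):
    """Stage 3 helper: signed-area edge function for point-in-triangle tests."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterise(tri2d, width, height):
    """Stage 4: collect pixels whose centres fall inside a CCW triangle."""
    a, b, c = tri2d
    pixels = []
    for y in range(height):
        for x in range(width):
            p = (x + 0.5, y + 0.5)
            w0, w1, w2 = edge(b, c, p), edge(c, a, p), edge(a, b, p)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:
                pixels.append((x, y))
    return pixels

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
tri = [transform(v, identity) for v in [(0, 0, 0), (4, 0, 0), (0, 4, 0)]]
shade = flat_light((0.0, 0.0, 1.0), (0.0, 0.0, 1.0))   # face pointing at the light
covered = rasterise([(x, y) for x, y, _ in tri], 4, 4)
print(shade, len(covered))   # prints: 1.0 10
```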
 
On the note of the Verite chips: if one were to compare these old architectures to today's, what did these old chips do? Were they 1x1 architectures? What else was in there? Nobody did in-depth analysis back then like we do today.
 
IIRC the V1000 had a fillrate of 25 MPixels/sec @ 25 MHz and the V2200 had 50 MPixels/sec @ 50 MHz. So unless my memory is completely shot, I'd guess it was a 1x1.

Pretty sure it also had a RISC processor in there. Byte Magazine once did quite a detailed report on the Verite, Mpact and Voodoo.
 
The Verite did however have two Z-pipes per color pipe for faster HSR. I don't remember if it was just the 2x00 or if the 1000 also had it.

swaayej said:
Nobody did in-depth analysis back then like we do today.
So you never visited Dimension3D back then?
 
Basic said:
The Verite did however have two Z-pipes per color pipe for faster HSR. I don't remember if it was just the 2x00 or if the 1000 also had it.

swaayej said:
Nobody did in-depth analysis back then like we do today.
So you never visited Dimension3D back then?

I certainly didn't. Someone should have saved those reviews for posterity!
 
Reviews?
I don't remember that I read any. It's the forum that counts. And for a long time Dim3D was nothing but the forums.
 
A bit off topic, but back in autumn 1997 I was about to build my first PC (I was a Mac user prior) and figured I had better get to the bottom of things, so I turned to tomshardware for the prime advice on a new video card.

Tom back in the good ol' days said:
There's a lot of hype thrown at us from all the different card and chip manufacturers on the graphic market too and you can easily face a huge disappointment if you should make the wrong choice.

http://www6.tomshardware.com/graphic/19971109/index.html

Well, I went for a Riva128 and it turned out to be a disappointment. Neither Unreal (D3D) nor Quake II (OpenGL) was any good and I ended up playing both games in software mode. That was kind of a weird experience - even back then...
 