3dfx Rampage ;)

Ante P

Veteran
I'm just quoting another site I found; perhaps it's all old stuff (well it is, but some of it was new to me, so I figured it might be new to you).
I've marked some of the stuff I didn't know about before:

A short time before 3dfx's demise, the following info appeared on 3dfx's web site. It gives a fair insight into what 3dfx was planning with "Rampage"...
3dfx Glossary

Alpha Blending : In computer graphics, each pixel has three channels of color information--red, green, and blue--and sometimes a fourth called the alpha channel. This channel controls the way in which other graphics information is displayed, such as levels of transparency or opacity. Alpha blending is the name for this type of control, and it's used to simulate effects such as placing a piece of glass in front of an object so that the object is completely visible behind the glass, unviewable, or something in between.
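
For illustration, a minimal C sketch of the classic "source over destination" blend for a single 8-bit colour channel, with alpha in the 0-255 range (the function name is just for this example):

```c
/* Blend one 8-bit channel of a source pixel over a destination pixel.
   alpha = 255 means fully opaque source, 0 means fully transparent. */
unsigned char alpha_blend(unsigned char src, unsigned char dst, unsigned char alpha)
{
    return (unsigned char)((src * alpha + dst * (255 - alpha)) / 255);
}
```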

Alpha Channel : The extra layer of 8-bit greyscale carried by a 32-bit graphic. This extra information is used to determine the transparency or edge characteristics of the image.

Alpha Transparency : It is easy enough to make an object transparent when creating 3D scenes, but in real life even glass isn't completely transparent, particularly when viewed at an angle. Graphics cards supporting alpha transparency are capable of taking into account the transparent and translucent properties of particular objects.

Anisotropic Texture Filtering : This filtering samples more than 8 texels per pixel and adapts the sampling to how steeply the surface is tilted relative to the viewer, and can therefore produce better quality graphics. Anisotropic filtering requires a higher fill rate than trilinear filtering, which means it is slower. There are different levels of quality available depending on the number of samples (or "taps"). 64-tap anisotropic filtering offers much higher quality than 8-tap anisotropic filtering, but again, is slower. Anisotropic filtering is the latest filtering type to be implemented in 3D accelerators.

Anti-Aliasing : AA hides the jagged effect of image diagonals by modulating the intensity on either side of the diagonal boundaries. This creates a local blurring along these edges and reduces the appearance of stepping. The result is a smoother, far more realistic image. Also see: Real-Time Full-Scene HW Anti-Aliasing.

API : A 3D application programming interface which controls all aspects of the 3D rendering process. Conflicting APIs exist, including Microsoft's DirectX and Open GL, Glide, Intel's 3DR, Reality Lab and Brender. Most are custom designed for either entertainment or serious 3D animation.

Backface Culling : Discards invisible back-facing polygons that can be eliminated from the list of polygons that need further 3D rendering processing.
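
As a rough sketch of how that test can be done, assuming screen-space triangles with a counter-clockwise front-face convention (the winding and coordinate conventions are assumptions here):

```c
/* Returns non-zero if the projected triangle (v0, v1, v2) winds clockwise,
   i.e. faces away from the viewer under a CCW-front convention, and can
   therefore be dropped before rendering. */
int is_backfacing(float x0, float y0, float x1, float y1,
                  float x2, float y2)
{
    float signed_area = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0);
    return signed_area <= 0.0f;
}
```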

Bezier : A way of mathematically describing a curve, used by graphics programs such as MacroMedia FreeHand and Adobe Illustrator.

Bi-cubic filtering : An advanced form of filtering to simulate textures on a "cube" to cover all sides of an image.

Bilinear Texture Filtering : 3D accelerators use an interpolation method to produce smooth transitions between different pixels in the source texture. This is done by sampling the four closest pixels of the source texture (or most suitable mip-map) and interpolating these values before rendering each single texel on screen.
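
A minimal sketch of that four-texel interpolation, assuming the coordinates are already scaled into texel space and a hypothetical texel() accessor that returns one channel of the source texture:

```c
typedef struct Texture Texture;                      /* hypothetical texture handle */
extern float texel(const Texture *t, int x, int y);  /* hypothetical texel fetch    */

/* Sample a texture at (u, v) in texel coordinates using bilinear filtering. */
float sample_bilinear(const Texture *tex, float u, float v)
{
    int   x0 = (int)u,  y0 = (int)v;             /* nearest texel above-left */
    int   x1 = x0 + 1,  y1 = y0 + 1;
    float fx = u - x0,  fy = v - y0;             /* fractional position      */

    float top    = texel(tex, x0, y0) * (1.0f - fx) + texel(tex, x1, y0) * fx;
    float bottom = texel(tex, x0, y1) * (1.0f - fx) + texel(tex, x1, y1) * fx;
    return top * (1.0f - fy) + bottom * fy;      /* blend the two rows       */
}
```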

Bitmap File : A file in which every pixel on the screen is represented by a piece of data in memory. These are usually graphics although some audio formats are described as bitmapped as well.

BLT : Bit-Aligned Block Transfer. The process of copying pixels or other data from one place in memory to another.

BRDF-Based Lighting : Bi-directional Reflectance Distribution Function (BRDF)-based lighting. When light makes contact with a material, three types of interactions may occur: light reflection, light absorption, and light transmittance. That is, some of the incident light is reflected, some of the light is transmitted, and another portion of the light is absorbed by the medium itself.

Bresenham line draw : Bresenham's line-drawing algorithm uses an iterative scheme. A pixel is plotted at the starting coordinate of the line, and each iteration of the algorithm increments the pixel one unit along the major, or x-axis. The pixel is incremented along the minor, or y-axis, only when a decision variable (based on the slope of the line) changes sign. A key feature of the algorithm is that it requires only integer data and simple arithmetic. This makes the algorithm very efficient and fast.
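
A minimal sketch of the algorithm for the first octant (x0 < x1, slope between 0 and 1); plot_pixel() is a hypothetical stand-in for whatever framebuffer write the renderer uses:

```c
extern void plot_pixel(int x, int y);   /* hypothetical framebuffer write */

/* Bresenham line draw, restricted to the first octant for clarity. */
void bresenham_line(int x0, int y0, int x1, int y1)
{
    int dx = x1 - x0;
    int dy = y1 - y0;
    int d  = 2 * dy - dx;               /* integer decision variable */
    int y  = y0;

    for (int x = x0; x <= x1; x++) {
        plot_pixel(x, y);
        if (d > 0) {                    /* sign change: step the minor (y) axis */
            y++;
            d -= 2 * dx;
        }
        d += 2 * dy;                    /* always step the major (x) axis */
    }
}
```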

B-Spline : B-splines are a formulation (Barsky and Beatty, 1983) of B-spline curve segments. Barsky introduced two new degrees of freedom, bias and tension, which can be applied either uniformly to the whole curve or non-uniformly by varying their values along the curve.

Bump Mapping : Bump mapping adds lighting detail to an otherwise flat surface, giving the surface a "bumpy" look and feel. There are several methods for creating bump mapping, one involves using paletted textures and the other involves multi-pass rendering. Voodoo3 3D supports both these methods at full rendering performance and with all filtering modes. In fact, Voodoo3 supports bump mapping at full speeds, in a single pass and single cycle. This full speed approach to bump mapping makes Voodoo3 unique among graphics architectures, offering full speed performance even while bump mapping.

Catmull-Rom spline patches : Unlike a natural cubic spline, a Catmull-Rom spline has local control. This means that modifying one control point only affects the part of the curve near that control point.
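
For one coordinate, a Catmull-Rom segment between control points p1 and p2 (with p0 and p3 as the neighbouring points) can be sketched like this; moving p0 or p3 only changes this local segment, which is the "local control" property:

```c
/* Evaluate one component of a Catmull-Rom spline segment at t in [0,1]. */
float catmull_rom(float p0, float p1, float p2, float p3, float t)
{
    float t2 = t * t;
    float t3 = t2 * t;
    return 0.5f * ( 2.0f * p1 +
                   (-p0 + p2) * t +
                   ( 2.0f * p0 - 5.0f * p1 + 4.0f * p2 - p3) * t2 +
                   (-p0 + 3.0f * p1 - 3.0f * p2 + p3) * t3);
}
```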

Clipping : The triangle setup engine is a floating-point math processor that receives vertex data and calculates all of the parameters that the rendering engine will require. This unit ‘sets up’ and ‘clips’ the triangle for the rendering engine. Clipping uses a guard band efficiently and also includes a clip against Z<0.0 and Z>1.0. If you supply Z values outside that range then they will not be rendered. Because polygons which are wholly clipped out still need to be transformed it will be to your advantage to remove many polygons before submitting them to the API.

Color (Chroma) Keying : A certain color (or a range of colors, in chroma keying) is treated as totally transparent when two images are blended. For example, a weather forecaster standing in front of a blue background is blended with a weather-map image to form the scene you can see daily on TV.

Cubic environment mapping : Cube environment mapping in hardware is a breakthrough image quality feature that is fully supported by DirectX 7 and OpenGL that allows developers to create accurate, real-time reflections. Accelerated in hardware, cube environment mapping will free up the creativity of developers to use reflections and specular lighting effects to create interesting, immersive environments. By changing the map shape to a six-sided cube, cube environment mapping offers a simple development path to the creation of stunning, reflective images. With a cube map shape, reflections are captured by the six projected faces of the cube map that surround an object.

Culling : Removing from the processing pipeline, to spare unneeded work, complete objects and surfaces which are completely hidden by other objects or are facing away from the viewer (i.e. backface culling).

Curved patches : A curved surface created from two or more curves.

Depth Cueing : Reducing an object's color and intensity as a function of its distance from the observer. In layman's terms, the color and intensity of the surface of an object changes depending on its distance from you.

Diffuse Lighting : Diffuse lighting assumes the light hitting an object scatters in all directions equally, so the brightness of the reflected light does not depend at all on the position of the viewer. Sunlight on a playground is an example in the real world of diffuse lighting.
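
The usual Lambertian form of this, as a minimal sketch (n and l are assumed to be unit vectors):

```c
/* Lambertian diffuse term: depends only on the angle between the surface
   normal n and the light direction l, never on the viewer position. */
float diffuse_term(const float n[3], const float l[3])
{
    float ndotl = n[0] * l[0] + n[1] * l[1] + n[2] * l[2];
    return ndotl > 0.0f ? ndotl : 0.0f;   /* no light from behind the surface */
}
```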

Direct3D : Microsoft's API for 3D graphics. It is also one of the components of DirectX and supported by all gaming-oriented 3D accelerators so far.

DirectX : A Microsoft Windows API designed to provide software developers with direct access to low-level functions on PC peripherals. Before DirectX, programmers usually opted for the DOS environment, which was free of the limited multimedia feature set that characterized Windows for many years.

Displacement Mapping : An extension of Bump Mapping in which textures are used to move the surface, not just change the appearance of the texture surface.

Dot product bump mapping : Dot product bump mapping is a technique of encoding a bump map in a texture, and then texture blending with a light vector. This produces a grayscale value that can then be used to modulate with a surface color or environment map.
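
A sketch of that per-texel dot product, assuming the bump map stores a normal with each component biased from [-1,1] into 0..255 and a light vector already expressed in the same space:

```c
/* DOT3-style bump term for one texel: un-bias the stored normal, dot it
   with the light vector, and clamp negative results to zero. */
float dot3_bump(const unsigned char n_texel[3], const float light[3])
{
    float d = 0.0f;
    for (int i = 0; i < 3; i++) {
        float n = n_texel[i] / 127.5f - 1.0f;   /* 0..255 -> -1..1 */
        d += n * light[i];
    }
    return d > 0.0f ? d : 0.0f;   /* grayscale value used to modulate the color */
}
```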

Double Buffering : All calculations and rendering steps must occur on hundreds to thousands of polygons for each frame of an interactive program that needs to update the display at a rate of between 15 and 30 times each second. Double buffering gives the system a little breathing room by providing an opportunity to render the next frame of a sequence into off-screen memory that is then switched to the display while the memory containing the formerly displayed frame can be cleared and re-painted with the next frame to be displayed and so on.

Environment Mapped Bump-mapping : Environment mapping is the process of mapping a texture onto an object to create a mirror-like reflection effect. This is normally done using a pre-computed spherical texture map representing the 3D space around the object. Another method called cubic environment mapping creates the 3D space for the static environment map using 6 textures (top, bottom, 4 sides) representing the surrounding area from the point of view of the object. Creating the 3D space environment map is computationally expensive which is why most are computed only once (static.) Dynamic environment mapping recreates the texture used for the reflection every frame. This creates a more realistic image but at a huge performance cost.

Fill Rate: The number of pixels that can be rendered on screen per second. (Pixels, or "picture elements," are literally the dots that appear on your computer display.) A 3D accelerator card with extremely high pixel fill rates is a must for serious gaming action.

Flat Shading : Each polygon is drawn in a single color representing the interaction of light with that part of the object. Flat shading results in a faceted appearance where the underlying geometry is visible.

Fogging : Not particularly relevant to CAD applications, fogging is a technique by which objects can be reduced in colour intensity so that they appear to be buried in a distant mist or fog. This technique can aid the impression of distance, and can be used as a performance-saving feature in some games, with fogging eliminating the need for distant objects to be drawn. It's something of a double-edged sword, since fogging can require lots of processing power. It is generally used to create atmospheric effects.
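
As a sketch, a simple linear fog model computes a blend factor from each pixel's depth; the final colour is then an interpolation between the surface colour and the fog colour using this factor:

```c
/* Linear fog factor: 1.0 at fog_start (no fog), falling to 0.0 at fog_end
   (fully fogged). The caller blends surface and fog colors with this value. */
float linear_fog_factor(float depth, float fog_start, float fog_end)
{
    float f = (fog_end - depth) / (fog_end - fog_start);
    if (f < 0.0f) f = 0.0f;
    if (f > 1.0f) f = 1.0f;
    return f;
}
```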

Frame Buffer : An area of RAM used to store the pixel data for a single screen image, or frame.

Frame Rate : The number of complete screens or frames drawn per second (FPS). Higher frame rates provide smoother motion.

Full Speed Filtering : The Voodoo 3D architecture performs filtering operations at full speeds under all single texture per pixel conditions. In fact, Voodoo3 can perform trilinear filtering at full hardware speeds, suffering no performance degradation. The Voodoo platform has been designed for maximum game performance, which means sustained, high frame rates. Functions or filtering that have substantial performance penalties are essentially valueless to the game developer, as their frame rates will suffer an unacceptable penalty. Many other graphics architectures claim advanced filtering techniques but perform them either so slowly in hardware, or perform them in software, that there is a 4x, 8x, or possibly even a 16x performance penalty. These operations may improve the visual quality of a single frame, but when enabled the game's frame rate may drop from 30 frames per second to 2 frames per second, rendering the feature completely unusable. Voodoo performs advanced filtering operations at full speed, allowing both for advanced image quality as well as high frame rates.

Full 32-bit RGBA with 1 pass : One subtle benefit of being able to render multiple textures with one pass is that multiple passes can be avoided and color computations can be performed in full 32-bit RGBA precision. For example, to render a base texture combined with a lighting map in a graphics system that can only render one texture per pass, the results of the first rendering pass are typically truncated from 24-bit RGB to 16-bit RGB when stored in the framebuffer. When this truncation is followed by a second pass, visual anomalies often result, which can typically be seen as bands of discoloration.

Guard Band Clipping : Guardband clipping is a way to deal with the problem of clipping triangles to the screen in hardware. When triangles are clipped to the screen in software, the triangles that fall off the edge of the screen must be split into several smaller triangles that do fit on the screen. This is expensive to do. The guardband allows our hardware to do the same job efficiently.

Gouraud Shading : Also called "smooth shading." Rendering a polygon with smoothly changing color across its face. Each vertex can have a unique color that is blended evenly across the polygon. Reduces "banding," or abrupt color changes, and enhances realism.

Graphics Accelerator : Basically, a graphics card which can (1) draw points, lines and polygons, using only the polygons' 3-dimensional vertices, and (2) map textures on polygons and/or shade polygons.

Higher Order Surfaces (HOS) : Curved surface maps representing textures. Includes Bezier, B-Spline, Catmull-Rom spline patches.

Lerping : Short for linear interpolation: moving a value (or a geometric object) smoothly from point A to point B… it is the mathematical model of movement.
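
The whole operation is one line; as t runs from 0 to 1 the result moves from a to b:

```c
/* Linear interpolation ("lerp") between a and b, with t in [0,1]. */
float lerp(float a, float b, float t)
{
    return a + (b - a) * t;
}
```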

Level of Detail (LOD) : Reduces depth of volumetric effects, especially when player is near. Reduces particle counts, especially when player is inside the particle system.

Lighting : Lighting is the second step in the 3D pipeline and provides high visual impact. Lighting effects are essential for enhancing the realism of a scene and bringing rendered images one step closer to our perception of the real world.

M-Buffer™ : "M" stands for multi-sample. While the VSA-100 FSAA algorithm (aka T-Buffer) relies on super-sampling FSAA method (where a unique texture lookup is performed per subsample), in the multi-sampling method, used by Spectre, only one texture lookup per *pixel* (so that all subsamples use the same texture lookup value) is required. This is an optimization that the workstation folks have done for years, and is visually unnoticeable. However, it reduces the amount of texture lookup by a factor of 4 compared to the VSA-100 method, so the performance for the Spectre M-buffer method is therefore much better.

Mesh model : A graphical model with a mesh surface constructed from polygons. The polygons in a mesh are described by the graphics system as solid faces, rather than as hollow polygons, as is the case with wireframe models. Separate portions of mesh that make up the model are called polygon mesh and quadrilateral mesh.

Mip Mapping : This involves storing multiple copies of texture maps (generally two or three), digitized at different resolutions. When a texture mapped polygon is smaller than the texture image itself, undesirable effects result. Mip mapping can provide a large version of a texture map for use when the object is close to the viewer, and a small version of the texture map for use when the object shrinks from view.
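
A rough sketch of how a mip level could be selected, assuming the renderer already knows roughly how many texels of the base map fall under one screen pixel:

```c
#include <math.h>

/* Pick the mipmap level whose texel size best matches the screen footprint
   of one pixel; level 0 is the full-resolution map. */
int select_mip_level(float texels_per_pixel, int num_levels)
{
    int lod = (int)floorf(log2f(texels_per_pixel) + 0.5f);
    if (lod < 0) lod = 0;                       /* closer than the base map */
    if (lod > num_levels - 1) lod = num_levels - 1;
    return lod;
}
```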

MIP Mapping LOD Control : This control changes the Level of Detail (LOD) bias used for MIP mapping. Moving the slider control will either add or subtract a bias from the LOD computed during MIP mapping, making textures appear sharper or blurrier. This can result in improved visual detail, but may reduce performance accordingly. It can also result in texture aliasing if the sharpness level is set too high.

Mip-Mapped Textures : This is an important feature of any 3D graphics accelerator. It refers to the ability of such an adapter to apply a texture to a 3D object based on that object's distance from the viewpoint. This requires modification of the texture and is applied either on a per-pixel or per-triangle basis, with the latter being faster but less accurate.

Motion Blur : With traditional computer generated images, a given frame showing an object in motion will render that object with crisp, clean edges in each frame. When viewing a full motion version of the scene based on these images, the result is an unrealistic strobing effect (much like watching someone move underneath a strobe light). This strobing effect is quite different from the continuous, fluid motion of moving objects in real life.

MPEG : MPEG (Moving Pictures Experts Group) is a group of people that meet under ISO (the International Standards Organization) to generate standards for digital video (sequences of images in time) and audio compression.

Multi-Texturing : Voodoo 3D, by means of its patent-pending architecture, has the unique capability of rendering multiple textures onto a polygon in a single pass and single cycle. Employing multiple Texture Mapping Units (TMUs), each rendering a completely independent texture onto a polygon, makes multi-texturing a standard feature of a consumer game platform.

Natural cubic spline : Between each pair of control points there is a cubic curve. To make sure that curves join together smoothly, the first and second derivatives at the end of one curve must equal the first and second derivatives at the start of the next one. Computing the natural cubic spline essentially involves solving a system of simultaneous equations to make sure this happens.

NCC Textures : Voodoo3 offers a patented proprietary Narrow Channel Compression (NCC) format for textures. NCC textures occupy 8 bits per texel just like palettized textures, but the decompression table is 20 times smaller. This makes switching textures much more efficient and allows applications to use a different table per texture. Several arcade games use NCC textures to offer the highest resolution textures possible without noticeable image quality loss.

Non-Power-of-Two Texture Support : Support for numbers other than 2, 4, 8, 16, 32, 64, 128, 256, 512, etc. Usually this is used to describe the texture sizes that an engine requires in order to make good use of video memory. Textures that are not in powers of 2, like 33x24, would probably cause the engine to run slower or maybe crash.

Open GL : A set of specifications for a cross-platform 3D graphics API, developed initially by Silicon Graphics Inc. There are several implementations of Open GL, provided by different vendors. A Win32 version is provided by Microsoft. Open GL includes routines for shading, texture mapping, texture filtering, anti-aliasing, lighting, geometry transformations, etc. Most of these functions can be hardware-accelerated.

Overbright (52-bit color) : Increased internal precision of 52-bits (signed 13-bits per RGBA color component), generating rendered images of substantially higher quality. The higher dynamic range reduces darkening resulting in more vibrant and realistic images.

Palettized Textures : Voodoo3 fully supports 8-bit paletted textures, offering both 24-bit RGB and RGBA formats. These formats are commonly used by game developers and provide for high-quality artwork while greatly reducing the texture memory requirements. Some competing designs either cannot filter paletted textures or convert them to 16-bit or 24-bit textures in their drivers. Voodoo2 3D can perform advanced filtering on paletted textures, providing both the texture memory savings and high-quality artwork needed by today's games.

PCI : Peripheral Component Interconnect. This is a self-configuring PC local bus. Designed by Intel, PCI has gained wide acceptance (even by Apple, in its PowerPC series).

Perlin Noise : If you look at many things in nature, you will notice that they are fractal. They have various levels of detail. A common example is the outline of a mountain range. It contains large variations in height (the mountains), medium variations (hills), small variations (boulders), tiny variations (stones) . . . you could go on. Look at almost anything: the distribution of patchy grass on a field, waves in the sea, the movements of an ant, the movement of branches of a tree, patterns in marble, winds. All these phenomena exhibit the same pattern of large and small variations. The Perlin Noise function recreates this by simply adding up noisy functions at a range of different scales.
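
A sketch of that summing-over-scales ("fractal") idea, assuming a hypothetical noise2d() primitive that returns smooth noise in [-1,1]:

```c
extern float noise2d(float x, float y);   /* hypothetical smooth-noise primitive */

/* Sum several octaves of noise: each octave has twice the frequency (finer
   detail) and half the amplitude (smaller variation) of the previous one. */
float fractal_noise(float x, float y, int octaves)
{
    float sum = 0.0f, amplitude = 1.0f, frequency = 1.0f, norm = 0.0f;
    for (int i = 0; i < octaves; i++) {
        sum  += noise2d(x * frequency, y * frequency) * amplitude;
        norm += amplitude;
        amplitude *= 0.5f;
        frequency *= 2.0f;
    }
    return sum / norm;   /* keep the result roughly in [-1,1] */
}
```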

Per Pixel Interpolated LOD : The most accurate approximation to per-pixel mipmapping is per-pixel interpolated LOD. The host CPU typically computes an LOD for each vertex of the polygon, and then the graphics subsystem interpolates this LOD across the polygon. This imposes a severe computational load on the host CPU as well as additional parameter data transfer and setup requirements. Interpolation of LOD across a polygon is also inaccurate resulting in both excessive blurriness as well as sharpness (aliasing).

Per-Pixel Lighting : Achieved using DOTPRODUCT3 texture blending, this results in true per-pixel lighting effects, where the lighting equation is solved for every pixel at render time.

Per Pixel Mipmapping : We believe the Voodoo architecture is in a class by itself when it comes to mipmapping. As far as we know, Voodoo is the only low-cost PC solution that performs accurate per-pixel mipmapping. While every chip claims to support mipmapping, none implement an accurate per-pixel mipmapping selection. Instead, they use a variety of short-cuts. Voodoo computes an extremely accurate Level-Of-Detail (LOD) value for every pixel rendered. This LOD value is used to select a mipmap for every single pixel rendered, and can freely change from one pixel to the next. It is very important to select the proper mipmap - an inappropriate selection results in either excessive blurring of the texture or excessive sharpness and therefore aliasing. This per-pixel computation requires absolutely no host CPU intervention or assistance.

Per Polygon LOD : The least accurate approximation to per-pixel mipmapping is per polygon LOD. In this scheme, the host CPU or graphics subsystem computes an LOD for each rendered polygon, and this one mipmap level is used for rendering the entire polygon. This results in substantial errors for larger polygons - the result being sections of the polygon that are either excessively blurry or sharp. Even worse is an artifact known as LOD "popping". This occurs when the graphics code decides to change the LOD for a polygon for the new frame. When this occurs, the entire polygon changes LOD becoming either twice as blurry or twice as sharp. This is very noticeable and distracting, especially for large polygons. If the LOD computation is slightly unstable, or if the frame to frame changes are such that the LOD changes from one value to another and then back again, the polygon will repeatedly "pop" between mipmap levels and result in an extremely annoying visual artifact.

Perspective Correct Texture Mapping : This texture-mapping process keeps scenery looking realistic, particularly when looking down a long hallway or corridor that's been rendered with large polygons. Without perspective correction, the hallway might appear to bend into the vanishing point.

Phong Shading : Involves considerably more calculation than Flat and Gouraud Shading, but results in the most realistic shading effects.

Pixel Shader : The Phong shading rendering algorithm calculates a color for every pixel on an object's surface. The path that the light follows is not calculated in Phong shading (as it is in ray tracing).

Polygon : The basic 2D element from which 3D objects are constructed. Most polygons in video games are triangles.

Primitives : Standard geometric 3D objects - sphere, cube, cone, cylinder, square and plane. Generic primitives are often used as building blocks for making more complex models.

Projected Textures : Spot lights and head lamps can be rendered using projected textures. In this case, the light's texture is projected onto polygons in the scene, and a new set of texture coordinates for the projected texture is computed. The projected texture is rendered at the same time as the base texture for the polygon.

Quantization : Color quantization is usually defined as a lossy image compression operation that maps an original full color image to an image with a small color palette. The goal of any quantization algorithm is minimization of a perceived difference between the original and quantized images.

Ray Tracing : A relatively simple algorithm which renders a scene by tracing a ray of light pixel by pixel. Lighting values are calculated as rays travel around the three-dimensional scene, affecting the surfaces of objects.

Real-Time Full-Scene HW Anti-Aliasing (FSAA): An instant upgrade to all PC games for Windows 95 or better, Real-Time Full-Scene HW Anti-Aliasing has long been the "Holy Grail" in 3D computer graphics. The VSA-100 architecture brings useable, fully-compatible, and absolutely amazing AA to the PC for the first time. FSAA hides the jagged effect of image diagonals by modulating the intensity on either side of the diagonal boundaries. This creates a local blurring along these edges and reduces the appearance of stepping. The result is a smoother, far more realistic image.

Recursive textures : Texturing flexibility provided by supporting N (8 for Spectre) independent, unique textures applied per pixel in a single rendering pass. From per-pixel bump mapping to turbulence and glass-distortion types of effects, 3D developers now have the flexibility and power to generate Hollywood-quality visual effects in real-time, as well as effects such as reflections on water.

Reflectance Blur : (also called soft reflectance) is a natural visual phenomenon. In the real world, there are some semi-glossy surfaces, like polished wood or brushed stainless steel, that will reflect objects with different degrees of focus, depending on how close the object is to the surface. Hold a pencil perpendicular to the surface, for example, and the part of the pencil that's closest to the surface will reflect in sharp focus, but the reflection will become increasingly blurry for parts of the pencil that are further and further away from the surface.

Reflection Maps : A simple reflection can be implemented using a reflection map. One example of a reflection map is the effect of clouds reflected in a car's rear window. When the rear window polygon is rendered, rays can be cast from the viewer towards the vertices and upwards into the sky, indexing into a sky texture. The sky texture and the texture for the rear window (streaky glass for example) are then rendered simultaneously. This same technique can also be applied to other shiny surfaces on the car, e.g. the car's roof. Mirrors can also be rendered using this technique assuming that a reflection map of the surrounding environment has been created beforehand.

Rendering : The process of creating an image on the screen from polygons, textures, lights, and other graphical information, as opposed to displaying pre-computed graphics and animation.

Resolution : The number of pixels in height and width on a screen.

Skinning (vertex) : Deforming a textured mesh ("skin") wrapped around moving bone elements; the skin over a joint such as an elbow will stretch as the bones move.

SLI : Scanline Interleave is a mode in which two Pixelfx are connected and render in alternate turns, one handling odd, the other handling even scanlines of the actual output. Each Pixelfx stores only half of the image and half of the depth buffer data in its own local framebuffer, effectively doubling the number of pixels.

Specular Lighting : Specular lighting is different than diffuse lighting because it does depend on the position of the viewer as well as the direction of light and orientation of the triangle being rendered. Shining a spotlight into a dark corner of a room onto a TV set and looking at the hot spots on the picture tube will show you specular lighting. Specular lighting captures the mirror-like properties of an object so effects such as reflection and glare are achievable.
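
A Phong-style sketch of the specular term, which (unlike the diffuse term) uses the viewer direction; r is the reflected light direction and v points to the viewer, both assumed to be unit vectors:

```c
#include <math.h>

/* Phong specular term: a higher shininess exponent gives a tighter hot spot. */
float specular_term(const float r[3], const float v[3], float shininess)
{
    float rdotv = r[0] * v[0] + r[1] * v[1] + r[2] * v[2];
    return rdotv > 0.0f ? powf(rdotv, shininess) : 0.0f;
}
```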

Stencil Buffer : The stencil buffer is used to eliminate certain pixels from being drawn. The stencil buffer acts the same way as a cardboard stencil used with a can of spray paint. You can 'draw' values into the stencil buffer using the normal rendering primitives. Then a stencil test can be defined and stenciling enabled.

Stippled Lines : Lines that are jagged, or stair-stepped, due to undersampling.

Sub-Pixel Correction : Sub-pixel correction simply means drawing things correctly. In other words, if a pixel is at position 0.1356, it is drawn at exactly 0.1356 rather than at 0. The pixel is interpolated from the actual coordinate, rather than just integer values.

T-Buffer™ : The T-Buffer allows several key digital effects for improving photorealism in real-time 3D graphics rendering. The primary purpose of 3dfx's T-Buffer technology is to improve image quality. The challenge is exactly how to narrow this gap between computer-generated 3D graphics and what users typically see in real life, photography, and motion pictures. The T-Buffer attempts to narrow the gap considerably by offering real-time hardware acceleration of spatial anti-aliasing, motion blur, depth of field, and some other closely-related effects.

Texture Compression : Voodoo3 supports texture compression in the form of palettized textures and a patent-pending proprietary Narrow Channel Compression format. Texture compression allows applications to have greater effective texture memory, making more efficient use of the available texture storage, as well as maximizing texturing performance as each texture downloaded can be smaller in size, minimizing the bandwidth impact.

Texture Filtering : Bilinear or trilinear filtering, also known as sub-texel positioning. If a pixel is in between texels, the program colors the pixel with an average of the texels' colors instead of assigning it the exact color of a single texel.

Texture Mapping : The process of placing a bitmap image, or texture, on a surface during rendering. For example, a photograph of bricks is placed on a polygon to create the illusion of a brick wall. Texture mapping is essential to creating realistic 3D worlds.

TMDS : Transition minimized differential signal. This signaling is required to connect graphics engines to the DVI (digital video interface) monitors.

Transform : The transform engine converts 3D data from one frame of reference to a new frame of reference. The system must transform the data to the current view before performing the following steps (lighting, triangle setup and rendering). Every object that is displayed and some objects that are not displayed must be transformed every time the scene is redrawn.

Transform, Lighting, & Clipping : Geometry setup of vertices of 3D model before entering 3D pipeline

Trilinear Mapping : Employed to smooth out edges of mip mapped polygons and prevent moving objects from displaying a distracting 'sparkle' resulting from mismatched texture intersections.

Trilinear Mipmapping : Trilinear mipmapping is one of the highest quality texture filtering methods available, requiring 8 texture samples and three linear interpolations (thus the name trilinear). Trilinear mipmapping looks better than bilinear mipmapping because it eliminates mipmap bands which appear within a polygon when the rendering engine switches from one mipmap level to another mipmap level. Trilinear mipmapping blends between mipmap levels, producing a smooth transition between mipmap levels with no banding. In many textures, mipmap bands are not noticeable, but in other textures they are very distracting.

Trilinear Texture Filtering : Samples eight pixels and interpolates these before rendering. This is twice as much as bi-linear does. Tri-linear filtering always uses mip-mapping.

Vertex : A point in 3D space that defines a corner of one or more polygons.

Vertex cache : Can save on AGP B/W & transform cost. Vertex cache typically lies after the transformation and lighting engines and before the setup engine. This means that data which has already been transformed and lit can be accessed more rapidly than data which needs to be fetched anew. The cache allows higher peak polygon throughput rates and is of most importance when the main load is on the lighting engine. The vertex cache only applies if you use indexed data.

Vertex Shader : DX8 shading operations.

W-Buffer : The W-Buffer is basically the same as the Z-Buffer, but it inverts the Z values. For hidden surfaces, if you use a Z-buffer, it would grab straight Z values to determine locations. Using a W-buffer, the Z values are stored inverted, therefore more precise (because of extra decimal precision required for inverted values). This in turn would likely have less artifacts than the simple Z value calculations.

Z-Buffer : An area in graphics memory reserved for storing the Z-axis value of each pixel. This Z-axis value is the measurement of how close the object is to the viewer. The lower the value, the closer the object. This accounts for certain objects overlapping others in a 3D environment.
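
The per-pixel test itself is simple; a minimal sketch using the "lower value = closer" convention the entry describes:

```c
typedef struct {
    float *depth;   /* one depth value per pixel */
    int    width;
} ZBuffer;

/* Returns 1 and updates the buffer if the incoming fragment at (x, y) is
   closer than what is already stored there; returns 0 if it is hidden. */
int depth_test_and_write(ZBuffer *zb, int x, int y, float z)
{
    float *stored = &zb->depth[y * zb->width + x];
    if (z < *stored) {
        *stored = z;
        return 1;
    }
    return 0;
}
```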


Copyright 2000, 3dfx Interactive, Inc.
All rights reserved.
 
Tagrineth said:
Yeah, most of us (especially a few under NDA) have known about that for quite some time now :)

hehe ok, I was mostly surprised about seeing Displacement Mapping there :)
 
Very impressive considering the time period. Maybe 3dfx tried to do too much in one design, causing it to just take too long. Something that Nvidia seems to be doing (going way beyond DX9 specs). Rip out a lot of that stuff (displacement mapping, 52-bit color, etc.) and it still would have been the most advanced chip at the time. Bottom line, you must not take too long in getting a product out once your previous product becomes obsolete.
 
You're not supposed to think that Rampage had all those features listed, right? It's just a freaking glossary...

Ray Tracing : A relatively simple algorithm which renders a scene by tracing a ray of light pixel by pixel. Lighting values are calculated as rays travel around the three-dimensional scene, affecting the surfaces of objects.

Sure, you can do some kind of raytracing with PS/VS, ok... but still, Rampage wouldn't do it natively...

Year 2102 Beyond 3D

"Yes, but just if 3dfx had rampage out..."

my god :rolleyes:
 
Aye, no reason to be depressed. We have hardware beyond Rampage; we have at least two high-end competitors outdoing each other every 6 months with performance and features. We have Matrox doing their stuff, we have several budget competitors helping to force the pace at the low end, and we have 1 or 2 other high-end cards in the wings from other IHVs.
 
What if Rampage is actually a thousand years old alien artifact from Egypt?

I mean, real time raytracing with displacement mapping? And wasn't it supposed to have free antialiasing? Obviously extraterrestrial technology.

And yes, we will still be wishing Rampage was out in 2012. After all, it is a product of a civilization thousands of years more advanced than ours.
 
I got to this point....
Bi-cubic filtering : An advanced form of filtering to simulate textures on a "cube" to cover all sides of an image.
... and then realised it must have been written by a monkey :eek: . I'm almost surprised the author didn't think bi-cubic meant having 2 cubes. :rolleyes:
 
I can say one thing about Rampage...

It would've been out beginning of 2001 (April the absolute latest) and... well, I can also say that one of the beta dual Rampage cards that works actually breezes past Radeon 8500 and approaches GF4 Ti4600. In April 2001.

...that's all I'm permitted to say :)
 
Tagrineth said:
I can say one thing about Rampage...

It would've been out beginning of 2001 (April the absolute latest) and... well, I can also say that one of the beta dual Rampage cards that works actually breezes past Radeon 8500 and approaches GF4 Ti4600. In April 2001.

...that's all I'm permitted to say :)

And it would cost...? 3 chips, huge PCB, 128MB of 200MHz DDR RAM, in April 2001.
 
mmmkay...
is this some kind of "cry for non-released HW" thread? :)

well, that I can do as well...
Pyramid3D (I love to dig this monster up every time ppl are wondering about Rampage's features. ;) ) was able to do Displacement Mapping in 1997, thanks to its microcode-programmable core. Sure, it wouldn't have been lightning fast, but it was capable of doing it. The same goes for the often-mentioned programmable T&L unit and pixel pipeline.


And now comes the news bomb I have been hiding for quite some time. ;) Pyramid3D is actually still available as a chip you can buy! Yes, you heard right: when Bitboys was doing the Pyramid3D core for Tritech, they had a partner. I don't know exactly what this partnership was about, but when Tritech went bankrupt, all the Pyramid3D tech went to VLSI Ltd. and they continued developing it all the way to a finished product. And it's still available as a product on their web pages. The name isn't the same but the specs are. It's called the VSVP Video Processor.

Now we only need to talk some OEM around to building a card based on this chip and placing an order for those chips big enough to get VLSI to make them. ;) After that we'd know for sure how fast this mysterious chip truly would have been had it been released on time. :)

(somehow I doubt that enough people here would want Bitboys tech from 1997, but you never know... ;) :p ;) )

VLSI's product page about VSVP (aka. Pyramid3D): http://www.vlsi.fi/products/vsvp.htm

(They used to have drivers, tech papers, SDKs, basically everything including card reference designs needed to start full production. Maybe the only thing they didn't have was enough marketing power to bring it to market. So, ironically, the real reason why Pyramid3D never hit the streets was the missing hype, not that there would have been too much of it. ;) )
 
Geeforcer said:
Tagrineth said:
I can say one thing about Rampage...

It would've been out beginning of 2001 (April the absolute latest) and... well, I can also say that one of the beta dual Rampage cards that works actually breezes past Radeon 8500 and approaches GF4 Ti4600. In April 2001.

...that's all I'm permitted to say :)

And it would cost...? 3 chips, huge PCB, 128MB of 200MHz DDR RAM, in April 2001.

Actually four chips - two Rampage, two SAGE (Yes, they ARE scalable, hence the name: Scalable-Architecture Geometry Engine)... and the PCB was no bigger than Ti4600's (actually less than an inch shorter). Keep in mind the SAGE chips didn't have their own RAM, they (ab)used the AGP interface. Sustained performance per SAGE was pretty good, considering they had 2GB/sec or so dedicated... and extra bandwidth could be extended to the Rampage(s).

Oh, and full DX8.0 compliancy is in the Rampage core... SAGE is a geometry accelerator - in the sense that a Rampage on its own does have not-too-great-but-still-there hardware T&L/VS.
 
What was said about the Pyramid3D being microcode programmable... let's not forget that was the direction 3dfx was heading in too. :) Sounds like Tag could give some more info as to the depth of the implementation?
 