So what do you think about S3's DeltaChrome DX9 chip?

darkblu:
Yes.
With multisampling, you only run the PS once for the group of subpixels that make up a pixel. If you can read the frame buffer color value in the PS, there are N subpixels to get the value from. You can't do the calculation once for each subpixel, since that would require supersampling. So you have to downsample, averaging the subsamples.

The question is what to do if you don't write to all the subpixels, either because you're at the edge of a triangle, or because you're at an intersection between two triangles. If you average all the subpixels, then it's possible for subpixels that belong to a completely different surface to "leak" into the read pixel. That could result in big errors.

Note the difference: it's OK for final pixel values to mix. That's exactly what you want to get AA. But if the read pixel is an intermediate value in a multipass shader, any leaks could be very bad.
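To make the leak concrete, here's a minimal sketch in C (4x multisampling assumed, with made-up types and function names; this isn't any particular chip's resolve logic): a naive read averages all subsamples and so mixes in whatever another surface wrote at an edge or intersection, while a coverage-weighted read only averages the subsamples the current triangle actually covers.

```c
#define NUM_SAMPLES 4  /* assume 4x multisampling for illustration */

typedef struct { float r, g, b; } Color;

/* Naive resolve: average all subsamples. At a triangle edge or an
   intersection, some subsamples belong to a different surface, so
   that surface's color "leaks" into the value the shader reads. */
Color read_pixel_naive(const Color samples[NUM_SAMPLES])
{
    Color out = {0.0f, 0.0f, 0.0f};
    for (int i = 0; i < NUM_SAMPLES; ++i) {
        out.r += samples[i].r;
        out.g += samples[i].g;
        out.b += samples[i].b;
    }
    out.r /= NUM_SAMPLES;
    out.g /= NUM_SAMPLES;
    out.b /= NUM_SAMPLES;
    return out;
}

/* Coverage-weighted resolve: only average the subsamples the current
   triangle covers, which avoids pulling in values from an unrelated
   surface. This only matters for an intermediate read in a multipass
   shader; the final downsample for AA still mixes everything, as intended. */
Color read_pixel_covered(const Color samples[NUM_SAMPLES], unsigned coverage_mask)
{
    Color out = {0.0f, 0.0f, 0.0f};
    int covered = 0;
    for (int i = 0; i < NUM_SAMPLES; ++i) {
        if (coverage_mask & (1u << i)) {
            out.r += samples[i].r;
            out.g += samples[i].g;
            out.b += samples[i].b;
            ++covered;
        }
    }
    if (covered > 0) {
        out.r /= covered;
        out.g /= covered;
        out.b /= covered;
    }
    return out;
}
```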
 
DaveBaumann said:
Padman said:
It wasn't until the disaster that was the ViRGE that S3 got a bad name. :oops:
I'm tempted to ban anyone from my forums who reminds me I bought one of them! :rolleyes:

;)

<taunt> Dave bought a Viiirrrge! Dave bought a Viiirrrge! </taunt>
[...ducks sharpish... ] :LOL:
 
On its own, it wasn't a bad product, but compared to the Verite and the Voodoo, it was outclassed.

S3 just couldn't come back with a product soon enough to avoid being marginalized in the market.
 
I have to wonder how many people are still at VIA/S3 from the days before the graphics division was sold to VIA. After all, OpenGL Guy is an ex-S3 employee, and I know some people from the FireGL team who are now working for ATI (Rolf, Quick).

Dave, my first modern computer (a Pentium 120) had an S3 ViRGE. I don't know why I bought it either.
 
Dio,

Yeah. I saw UT running then with the compressed textures and was envious... But was UT also faster in MeTaL than in D3D? (When you got the card stable, that is.) I recall some reports that it was, but I never got to test it myself (with a G400, then GF2U).
 
Hey....I bought a Voodoo Rush....and the original daughtercard version to boot!

But to my credit, I returned it before the 30 days were up, and picked up a Voodoo Graphics instead. ;) Put that ATI Mach 32 PCI back in the box for 2D...
 
Gunhead said:
Dio,

Yeah. I saw UT running then with the compressed textures and was envious... But was UT also faster in MeTaL than in D3D? (When you got the card stable, that is.) I recall some reports that it was, but I never got to test it myself (with a G400, then GF2U).

Much faster. Trust me ;)

Oh, and it looked better too (and not just because of the compressed textures)
 
Ben6,

You are more familiar with this... but I'd be surprised if FireGL team members ever really "joined the ranks" of Diamond/S3 (S3 bought Diamond, right? And then VIA bought S3's graphics division?), as they were so much an independent development team, away in Germany.

Didn't they use IBM's chips -- not S3's -- from the beginning, and until the ATI era?
 
andypski said:
Gunhead said:
Dio,

Yeah. I saw UT running then with the compressed textures and was envious... But was UT also faster in MeTaL than in D3D? (When you got the card stable, that is.) I recall some reports that it was, but I never got to test it myself (with a G400, then GF2U).

Much faster. Trust me ;)

Oh, and it looked better too (and not just because of the compressed textures)

I'll second that. In fact I still have a Savage4 :oops:
 
misae said:
andypski said:
Gunhead said:
Dio,

Yeah. I saw UT running then with the compressed textures and was envious... But was UT also faster in MeTaL than in D3D? (When you got the card stable, that is.) I recall some reports that it was, but I never got to test it myself (with a G400, then GF2U).

Much faster. Trust me ;)

Oh, and it looked better too (and not just because of the compressed textures)

I'll second that. In fact I still have a Savage4 :oops:

Yep, at 800x600 my S4 with 32bpp, MeTaL, compressed textures and trilinear filtering was as fast as my V3 with Glide, 16bpp, 256x256 textures and bilinear filtering; at 1024x768 it fell away a bit.
 
andypski said:
Gunhead said:
Dio,

Yeah. I saw UT running then with the compressed textures and was envious... But was UT also faster in MeTaL than in D3D? (When you got the card stable, that is.) I recall some reports that it was, but I never got to test it myself (with a G400, then GF2U).

Much faster. Trust me ;)

Oh, and it looked better too (and not just because of the compressed textures)

Most likely true.
Even my S3 ViRGE (in autumn 1996) was clearly faster in games accelerated through S3d Toolbox (a very low-level API for ViRGE chipsets) than in D3D. In S3d-accelerated games it truly was a 3D accelerator. But the arrival of 3dfx and Direct3D gave it its well-known "decelerator" reputation.

Terminal Velocity at 512x384 with S3d acceleration dropped some jaws about 6 months before anyone here had heard of 3dfx.
 
Psikotiko said:
I bought a ViRGE :cry:
and a voodoo :)

Well, I didn't. I saw an S3 ViRGE in action over at my cousin's... so I got a G100 instead. ;) Hardly any better though, so I went Voodoo2 later on and couldn't have been happier. Ah, those were the days; playing Unreal on the V2 has got to have been the most exciting gaming experience of my life. In those days the games actually used the features on the card. It's kinda saddening to see how far graphics technology has come while games haven't progressed significantly; Unreal still holds its own against most new games. It'll be exciting playing Unreal2 though, which btw has gone gold now :)
 