So what do you think about S3's DeltaChrome DX9 chip?

I had a Virge in a machine at the office and I used to call it the 3D Decelerator, since it was slower than software rendering, and lower quality. :)
 
Diverging slightly - what I can't understand is why UT2003 looks so bad on a V3 - I mean Unreal looks good, and Severance: Blade of Darkness and Max Payne both look OK, yet UT2003 looks much worse than all of them - like an Amiga game or something.
 
Ben6,

You know this better. Care to tell more? It's interesting :) Why did they? ;-) (Meaning, how could S3 lure them?)

Humus,

I'm still playing thru the original Unreal. I can (and will) wait until Unreal 2, but I'll have to be in a hurry if it's gone gold already...

Nappe1,

You are young. Very probably you were there from the beginning. But drop the attitude, can you? No offense, I sincerely hope.
 
Gunhead, I don't know nuthing. Just know a few people who work in the industry. They went with the rest of FireGL when S3 bought Diamond (errr, I'm not the guy to ask on timeframes lol), then went with the team when ATI bought FireGL. I may be mistaken (would have to ask Rolf or Ian when I get a chance), but that's my recollection.
 
Gunhead said:
Humus,

I'm still playing thru the original Unreal. I can (and will) wait until Unreal 2, but I'll have to be in a hurry if it's gone gold already...

It's slated for an early February release. I really hope it's as good as or even better than the original. :)
I'm still playing Unreal from time to time too, and not to forget Return to NaPali. I'm about to take a round when I'm done reading this forum :)
 
DemoCoder said:
I had a Virge in a machine at the office and I used to call it the 3D Decelerator, since it was slower than software rendering, and lower quality. :)

My Virge certainly was slower at 3D rendering, but was much higher-quality (it at least offered bilinear filtering).
 
Randell said:
Diverging slightly - what I can't understand is why UT2003 looks so bad on a V3 - I mean Unreal looks good, and Severance: Blade of Darkness and Max Payne both look OK, yet UT2003 looks much worse than all of them - like an Amiga game or something.

First guess? They use a number of passes for various effects in the game. At 16-bit, those passes will accumulate errors very quickly.
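That error buildup is easy to demonstrate with a toy model. Below is a minimal Python sketch (the pass values and the additive blend are invented for illustration, not taken from UT2003) that quantizes one color channel to 5 bits (as in a 16-bit R5G6B5 framebuffer) versus 8 bits (32-bit mode) after each pass:

```python
def quantize(value, bits):
    """Round a [0, 1] channel value to the nearest representable step."""
    levels = (1 << bits) - 1
    return round(value * levels) / levels

def multipass_blend(channel_values, bits):
    """Additively blend several passes, quantizing after each write,
    the way a framebuffer with limited precision would."""
    result = 0.0
    for v in channel_values:
        result = quantize(min(result + v, 1.0), bits)
    return result

# Four modulated passes contributing to the same channel:
passes = [0.13, 0.13, 0.13, 0.13]
exact = sum(passes)                  # the ideal result, ~0.52
r16 = multipass_blend(passes, 5)     # 5-bit channel (16-bit framebuffer)
r32 = multipass_blend(passes, 8)     # 8-bit channel (32-bit framebuffer)
print(abs(r16 - exact), abs(r32 - exact))
```

With these numbers the 5-bit result drifts further from the exact sum than the 8-bit one, and with more passes or multiplicative blends the gap typically widens.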
 
Gunhead said:
Nappe1,

You are young. Very probably you were there from the beginning. But drop the attitude, can you? No offense, I sincerely hope.

well, what's young for you? how do you define young?

anyways, as a note, that was my 3rd reply on this thread during the last 7 days. in 2 of these 3 replies I have clearly been shown that there's something wrong with my attitude. it has to be, since that's the only reason why shit hits the fan each time I reply. Or maybe it's because I really don't care whose side you are on, though I have a very clear image of which company each of the most active users here favors. (Of course I can't post that here, because that would just cause another flame war.) some hide it better, others less.

Anyhow, I am about to drop the whole boards, at least for a while. There's no point in telling your opinion if it's wrong no matter how well you base it on facts or anything. Besides, I "dropped the ball" ages ago and most of my contacts are gone. I know pretty much nothing and care even less. blah blah blah and other self-miserable stuff. (no one reads this anyway.)

yes, I admit that I made a mistake. I said something positive about the ViRGE instead of saying the quality word "crap" 7 times in my post. Let me fix that right away: crap, crap, crap, crap, crap, crap, crap, and a bonus crap for the original crappy poster about a crappy card running a crappy game crappier than competitors' craps, with crappy drivers and using a crAPI on a crappy OS built on a crappy system design from a crappy age.



(as a note, there's a post with 7 craps on this thread.)
 
Randell said:
Diverging slightly - what I can't understand is why UT2003 looks so bad on a V3 - I mean Unreal looks good, and Severance: Blade of Darkness and Max Payne both look OK, yet UT2003 looks much worse than all of them - like an Amiga game or something.
If you don't support pixel shaders, then some of the rendering could require multiple passes. vogel had some comments on this a while back if you care to search for them.

Edit: Small edit for correctness.
 
Chalnoth said:
DemoCoder said:
I had a Virge in a machine at the office and I used to call it the 3D Decelerator, since it was slower than software rendering, and lower quality. :)

My Virge certainly was slower at 3D rendering, but was much higher-quality (it at least offered bilinear filtering).

It offered trilinear filtering, too, with much nicer texel sample blending precision than many of its competitors of the day. True color rendering was also supported.

The Virge could look quite nice, as long as you avoided certain problematic texture sizes and formats, and stuck to very simple blending modes. You had plenty of time to admire the imagery, too.
 
Dan G said:
The Virge could look quite nice, as long as you avoided certain problematic texture sizes and formats, and stuck to very simple blending modes. You had plenty of time to admire the imagery, too.

Yes, lots of time between frames :)

Anyway, I only remember a couple of games with my Virge. I could never get any OpenGL games to work, and to this day I'm not sure why. Descent 2 had a special "S3 Virge" rendering mode, but since it was forced to 640x480, it was unplayable. There was another demo I had (some Star Wars game) where I had the option of changing resolutions, and I could actually use the Virge to great effect at 320x200. It still managed to look a bit better than software rendering, and played great. The third game that I remember from that era was Mechwarrior 2: Mercenaries. The Direct3D version of that game did not work very well at all. The ground textures would only repeat to a certain distance, 640x480 was the only resolution supported (meaning it ran slowly, again), and I seem to remember that it was also unstable.

Once I got my TNT a little while later, all of these problems vanished (although Descent 2 had no Direct3D or OpenGL rendering mode, just a few vendor-specific ones, so I could never play it with 3D rendering properly), but the #1 game that I wanted to play with 3D rendering at the time was Unreal. That took a heck of a long time...

Anyway, all of this was long before I became an "enthusiast" in the market. In fact, the variety of problems that I had with my TNT early-on are probably the reason I bother to attend these boards to this day.
 
Chalnoth said:
The third game that I remember from that era was Mechwarrior 2: Mercenaries. The Direct3D version of that game did not work very well at all. The ground textures would only repeat to a certain distance, 640x480 was the only resolution supported (meaning it ran slowly, again), and I seem to remember that it was also unstable.

I played Mech 2 mercs back in the day myself, but I have to ask, why the HELL would you want to play anything in lower than 640*480 res? The D3D version of Mercs was actually quite amazing, but if the virge had trouble running at that res, you probably had to turn everything off anyway, so you'd be better off running in software.

I'm not arguing with you by any means and I never owned a virge, but I guess the point is: if it can't run ok in 640*480 then what's the point? Or maybe that was your point? Actually I could be wrong, but I believe the D3D version of Mercs ran in 512*384 as well?
 
Chalnoth/OpenGL,

Thanks, that makes sense - so the V5 looks a whole lot better than the V3 - is this due to larger texture memory / Z-buffer, even running in 22-bit on the V5?
 
Nagorak said:
I played Mech 2 mercs back in the day myself, but I have to ask, why the HELL would you want to play anything in lower than 640*480 res? The D3D version of Mercs was actually quite amazing, but if the virge had trouble running at that res, you probably had to turn everything off anyway, so you'd be better off running in software.

Well, in software I had to run at 320x240 anyway, so while it wasn't a big boost, at least I could have bilinear filtering with the Virge at that res and still maintain good performance. And I suppose the D3D version of Mercs might have run at 512x384, but I never knew about it. Remember, this was quite a bit before I became an "enthusiast."

But yes, it did become a moot point anyway once I got the TNT. And this is the exact reason why the Virge was termed a "3D decelerator."
 
Randell said:
Chalnoth/OpenGL,

Thanks, that makes sense - so the V5 looks a whole lot better than the V3 - is this due to larger texture memory / Z-buffer, even running in 22-bit on the V5?

It wouldn't be due to larger texture memory/z-buffer. One reason I can think that the V5 might look significantly better than the V3 is that perhaps 3dfx made some significant improvements to the dithering pattern for "22-bit color" in the V5 that help with multi-pass rendering.

Another possibility is that perhaps the V5 actually supports 32-bit textures, and if those were used then it may have improved the 16-bit rendering quality.
 
Humus said:
Ah, those were the days. Playing Unreal on the V2 has got to have been the most exciting gaming experience of my life. Back in those days, games actually used the features of the card.

Yes, I remember the first time I saw the Nali Castle flyby in Glide. I swear my jaw hit the floor. Then a few maps later, when I walked into an old castle and saw my reflection in the marble floor - another amazing moment... too bad that monster was ripping me a new one :)
 
Chalnoth said:
One reason I can think that the V5 might look significantly better than the V3 is that perhaps 3dfx made some significant improvements to the dithering pattern for "22-bit color" in the V5 that help with multi-pass rendering.

Another possibility is that perhaps the V5 actually supports 32-bit textures, and if those were used then it may have improved the 16-bit rendering quality.

I believe it must be the latter - it was a huge difference - the V5 looks like UT2003 with missing water/effects etc. - the V3 looks... well, I'll see if I can get a screenshot this week when I replace my friend's V3 with a Radeon 9000.
 
Basic said:
Yes, multisampling is a problem.
Then on the other hand, you could do a mix of multisampling and supersampling. :D

Until now, all shader-capable gfx cards have kept the final calculations as a fixed-function step. Fog and alpha blend are done after the PS, and aren't programmable in the same way. Another thing is that this fixed-function part has essentially been working in "supersampling mode". Or in other words, the calculations are done for each subpixel. That's needed because even though the PS output is just one value, the framebuffer contents could differ between the subpixels.

Move one step towards unification and programmability:
Remove the fixed-function fog and blends, and move them into the PS. But what about differing subpixel values? Well, do as current MSAA solutions do: use "supersampling" at the end of the pipe. Except now that means at the end of the PS program.

The PS would run along just as with ordinary MSAA. But when it encounters a framebuffer read, it "forks" into one "thread" per subpixel. (UN*X speak.) And maybe one could add an instruction to force such a split, for a PS where part of it would benefit from SSAA.

If you want to merge the values back to MSAA somewhere in the pipe, you would have the same problems (and the same possible solutions) as I said before.
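The fork-on-framebuffer-read idea can be sketched as a toy model in a few lines of Python (all names and values invented; plain functions stand in for shader stages): the PS body runs once per pixel, and only the blend, which needs the per-subpixel destination values, runs once per subpixel:

```python
def shade_pixel(ps_color, subpixel_dst, blend):
    """Run the PS once per pixel, then 'fork' per subpixel at the blend."""
    # Shared part: one PS evaluation for the whole pixel (MSAA-style).
    color = ps_color
    # Fork point: the framebuffer read happens here, so from this point
    # on the work is per subpixel ("supersampling" at the end of the pipe).
    return [blend(color, dst) for dst in subpixel_dst]

# 4x multisampling: four stored subpixel values (e.g. a polygon edge
# crossing the pixel), but only one shader evaluation.
dst = [0.25, 0.25, 0.75, 0.75]
out = shade_pixel(0.5, dst, lambda src, d: 0.5 * src + 0.5 * d)
resolved = sum(out) / len(out)   # downfilter back to one pixel value
```

The merge back down to one value per pixel (the resolve) could then happen anywhere downstream, with the problems and solutions already mentioned.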
 
Basic, sounds like a great idea to me. It gives you the capability to do totally flexible fp/int buffer blends without a huge transistor count increase...

not sure about the "fork" instruction... It seems like a large amount of state would have to be saved per "thread", especially as PS hardware becomes more general... input registers would have to be updated, not sure what else.

If you have dynamic branching in the PS, why not handle all this explicitly? Just stick the part of the pixel shader which requires SS into a loop... the following would need to be provided:

- (access to)/(ability to generate) a per pixel sample pattern.
- ability to read/write color/z/stencil values of each sample.
- ability to specify sample size (in bytes) and number of samples operated on by one shader "thread".

That would also give you the ability to perform MSAA with fp-buffers.
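Under the same assumptions (invented names, plain Python standing in for shader code), the explicit version is just a loop over the samples the shader was given access to, and nothing stops those stored samples from being floating-point:

```python
def shade_with_sample_loop(ps_color, samples, blend):
    """Explicitly blend one shader result into every stored sample."""
    for i in range(len(samples)):                 # the SS part, as a plain loop
        samples[i] = blend(ps_color, samples[i])  # per-sample read/write
    return samples

# A floating-point "framebuffer": values above 1.0 that a fixed-function
# blend unit couldn't represent, but an in-shader loop handles fine.
fp_samples = [0.25, 0.25, 1.5, 1.5]
shade_with_sample_loop(2.0, fp_samples, lambda src, d: src + d)
```

This is only a functional sketch; real hardware would also need the sample-pattern and sample-size plumbing listed above.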

Regards,
Serge
 