So what do you think about S3's DeltaChrome DX9 chip

Brent

It was officially announced today:

http://www.s3graphics.com/index.html

press release: http://www.s3graphics.com/pressrel/2003_01_07.html

Feature Descriptions: http://www.s3graphics.com/DeltaChromeFeatureDescription.html

Beyond DX9 PS and VS support just like GFFX: http://www.s3graphics.com/DeltaChromeDX9.html

Even screenshots from the control panel: http://www.s3graphics.com/DeltaChromeScreenGoodies.html#S3Display


According to the specs, this card matches the R300 in hardware, and goes beyond it by matching the GFFX's DX9 PS 2.0+ and VS 2.0+ support (beyond DX9).

IMO it sounds incredibly impressive on paper: R300/NV30 level stuff here.

But in the real world? Will it see the light of day anytime soon? And will the drivers suck?

I haven't read through the whole thing yet but did anyone see the memory bandwidth or memory bus width?
 
Hrm....

Lots of marketspeak, I guess we'll have to wait and see. Any comments on the "advanced deferred rendering" they are touting? I'm curious how much the two-pass rendering scheme for front-to-back culling actually saves you. How many programs are rendering front-to-back these days? There were rumors that this is what Morrowind is doing, and that it's why performance is so horrible even on top-end systems.

Certainly some interesting tidbits in there, but I guess we'll have to get the opinions of some of the big boys around here. :)

Nite_Hawk
 
One interesting feature is "bi-directional" Z and Color buffers. I'm gathering that this means you can read from the Z and frame buffers during rendering without binding them as textures.

This two pass rendering thing sounds like they have an on-chip very coarse hierarchical Z buffer area (e.g. the very top of the pyramid, say divide the screen into 32x32 blocks or something). It says the first pass doesn't write to the color buffer OR the z-buffer. So perhaps the first pass does rasterization to compute hierarchical Z values for very coarse screen tiles, and writes those values to on-chip memory, using no memory bus bandwidth.


However, this could still be a net loss: even if the first pass writes nothing over the memory bus, the bandwidth it leaves idle is effectively "wasted" because you aren't using it, and the triangles don't fill any faster since the fillrate is fixed. The only way to come out ahead is if the second pass saves more bandwidth and fillrate than the first pass costs.

It could do this if the on-chip hierarchical Z buffer can reject oodles of pixels without even having to do z-reads.
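
To make the idea concrete, here's a rough C++ sketch of what such a coarse on-chip structure might look like. The tile size, the names, and the build-then-test split are my guesses for illustration, not anything from S3's materials:

Code:
#include <algorithm>
#include <vector>

// Rough guess at a coarse hierarchical Z: one max-depth value per
// 32x32-pixel tile, small enough to keep on-chip. Tile size and the
// build-then-test split are illustrative, not S3's documented design.
struct CoarseHiZ {
    static const int kTileSize = 32;
    int tilesX, tilesY;
    std::vector<float> tileMaxZ; // farthest depth seen in each tile

    CoarseHiZ(int width, int height)
        : tilesX((width + kTileSize - 1) / kTileSize),
          tilesY((height + kTileSize - 1) / kTileSize),
          tileMaxZ(tilesX * tilesY, 1.0f) {} // 1.0 = far plane

    // After the first (depth-only) pass, reduce the full-res depth
    // buffer down to one max value per tile.
    void build(const std::vector<float>& depth, int width, int height) {
        std::fill(tileMaxZ.begin(), tileMaxZ.end(), 0.0f);
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x) {
                float& m = tileMaxZ[(y / kTileSize) * tilesX + (x / kTileSize)];
                m = std::max(m, depth[y * width + x]);
            }
    }

    // Second pass: if a primitive's nearest point in a tile is already
    // behind everything stored there, reject it with one compare and
    // zero per-pixel Z reads.
    bool tileRejects(int tx, int ty, float primMinZ) const {
        return primMinZ > tileMaxZ[ty * tilesX + tx];
    }
};

In real hardware the tile maxima would presumably be updated during rasterization rather than in a separate reduction, but the reject test is the interesting part: one compare per tile instead of hundreds of per-pixel Z reads.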
 
BTW, it also seems like they hardware-accelerate GDI+ features. 2D acceleration has been an area of little improvement recently, and speeding up anti-aliased 2D rendering is nice.
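
For reference, this is the sort of call path that would benefit. A plain GDI+ usage sketch, nothing DeltaChrome-specific (assumes GdiplusStartup has already run and hdc is a valid device context):

Code:
#include <windows.h>
#include <gdiplus.h>

// Plain GDI+ anti-aliased 2D drawing; assumes GdiplusStartup has
// already been called and hdc is a valid device context.
void DrawAntiAliased2D(HDC hdc)
{
    Gdiplus::Graphics g(hdc);
    g.SetSmoothingMode(Gdiplus::SmoothingModeAntiAlias);

    Gdiplus::Pen pen(Gdiplus::Color(255, 0, 0, 255), 2.0f); // opaque blue, 2px wide
    g.DrawLine(&pen, 10, 10, 200, 120);                     // edges get coverage-based AA

    Gdiplus::SolidBrush brush(Gdiplus::Color(255, 255, 0, 0));
    g.FillEllipse(&brush, 50, 50, 120, 80);                 // smooth curved edge
}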
 
Hmm, it seems like they confuse the concept of a shader effect with a hardware or DX9 feature. Programmable Depth-Cued Fog Color, Programmable Selective Depth-of-Field, and Programmable Render Target Blending and Depth Shading are all shader effects, not hardware features; they are just examples of what any DX9 hardware can do given the right shader program.

I thought conditional write masks were part of VS 2.0. Not sure about call and return; that really depends on whether it's driven by constants or true temporary variables. And not sure why they say 16x16 max loops. A maximum of 16 loops with up to 16 instructions each?

Not sure why they make a fuss about using color and Z as input to the shader; it doesn't sound like something new.

Internal accuracy seems to be the same as ATI's, which means it's stuck between GFFX high and low precision.

The two-pass system has been discussed before and is most likely a driver-forced initial depth-only render pass. This removes the dependency of early-Z systems on render order, but at a cost (fillrate, vertex throughput, bandwidth) depending on the implementation, and it requires scene storage, so any worries you might have about tile-based deferred rendering apply here as well. If needed, ATI and NVIDIA should be able to implement this feature too.
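
For anyone wondering what that looks like from the application side (rather than the driver forcing it behind your back), here's a minimal Direct3D 9 sketch. device is an IDirect3DDevice9*, and DrawOpaqueScene() is a hypothetical helper that issues all the opaque draw calls:

Code:
#include <d3d9.h>

void DrawOpaqueScene(IDirect3DDevice9* device); // hypothetical: issues all opaque draw calls

void RenderWithDepthPrePass(IDirect3DDevice9* device)
{
    // Pass 1: depth only. No color writes, Z writes on.
    device->SetRenderState(D3DRS_COLORWRITEENABLE, 0);
    device->SetRenderState(D3DRS_ZWRITEENABLE, TRUE);
    device->SetRenderState(D3DRS_ZFUNC, D3DCMP_LESSEQUAL);
    DrawOpaqueScene(device);

    // Pass 2: full shading. Z is already final, so compare with EQUAL
    // and keep Z writes off; only visible pixels run the expensive
    // pixel shaders.
    device->SetRenderState(D3DRS_COLORWRITEENABLE,
        D3DCOLORWRITEENABLE_RED | D3DCOLORWRITEENABLE_GREEN |
        D3DCOLORWRITEENABLE_BLUE | D3DCOLORWRITEENABLE_ALPHA);
    device->SetRenderState(D3DRS_ZWRITEENABLE, FALSE);
    device->SetRenderState(D3DRS_ZFUNC, D3DCMP_EQUAL);
    DrawOpaqueScene(device);
}

The cost mentioned above is visible right there: every opaque draw call is issued twice, so vertex throughput and setup are paid for twice in order to save pixel shading and color/Z bandwidth on occluded pixels.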

"DeltaChrome can employ not only front-to-back but also back-to-front Z occlusion culling."

Not sure what that means, but I think it simply means they support changing the Z compare mode without things going pear-shaped :LOL:
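
If that's right, a coarse Z scheme that wants to cull under both compare directions plausibly has to keep two bounds per tile instead of one. Pure speculation on my part, nothing from S3's materials:

Code:
// Tracking both bounds per tile so that a flipped compare mode
// (D3DCMP_GREATER instead of D3DCMP_LESS) can still cull conservatively.
struct TileZBounds {
    float minZ; // nearest depth seen in the tile
    float maxZ; // farthest depth seen in the tile
};

bool tileRejects(const TileZBounds& t, float primMinZ, float primMaxZ,
                 bool lessCompare)
{
    // Less-compare: cull if the primitive lies entirely behind the tile.
    // Greater-compare: cull if it lies entirely in front of the tile.
    return lessCompare ? (primMinZ > t.maxZ) : (primMaxZ < t.minZ);
}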

No clue about "Triangle Mask Optimization"; some kind of fast polygon rejection?
 
DemoCoder said:
This two pass rendering thing sounds like they have an on-chip very coarse hierarchical Z buffer area (e.g. the very top of the pyramid, say divide the screen into 32x32 blocks or something). It says the first pass doesn't write to the color buffer OR the z-buffer. So perhaps the first pass does rasterization to compute hierarchical Z values for very coarse screen tiles, and writes those values to on-chip memory, using no memory bus bandwidth.

Hmm, that would imply that it's all on-chip, which means it's a very low-res version. Do you really want to process the whole frame and only get such rough data back from it?
 
Kristof said:
No clue about "Triangle Mask Optimization", some kind of fast polygon rejection ?

Maybe an on-chip hier-Z buffer is used to early-Z reject triangles during triangle setup?
 
I wonder if the pixel and vertex shaders will be disabled when the card ships, and enabled with drivers some months down the road, only to find they don't work properly at all...
 
Kristof said:
Not sure why they make a fuss about using color and Z as input to the shader; it doesn't sound like something new.

It could be new if you could read results from the frame buffer in the same pass you are writing to it. DX9 doesn't allow you to bind a render target as a texture in the same pass, though.

Of course, given that 8 pixels are being evaluated concurrently, and that the order of evaluation during rasterization is not guaranteed, it's probably of no use anyway: if you try to read from a framebuffer pixel, it could have not been evaluated yet, be mid-evaluation in a neighboring pipeline, have been evaluated but not yet overwritten by an occluder, have already been overwritten by an occluder, and so on.
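
Which is why the only clean route in DX9 is a round trip through a second surface, e.g. (all names here are hypothetical):

Code:
#include <d3d9.h>

void DrawFirstPass(IDirect3DDevice9* device);  // hypothetical draw helpers
void DrawSecondPass(IDirect3DDevice9* device);

// texA and texB are assumed created with D3DUSAGE_RENDERTARGET.
void PingPong(IDirect3DDevice9* device,
              IDirect3DTexture9* texA, IDirect3DTexture9* texB)
{
    IDirect3DSurface9* surfA = 0;
    IDirect3DSurface9* surfB = 0;
    texA->GetSurfaceLevel(0, &surfA);
    texB->GetSurfaceLevel(0, &surfB);

    device->SetRenderTarget(0, surfA);  // first pass renders into A
    DrawFirstPass(device);

    device->SetRenderTarget(0, surfB);  // second pass renders into B...
    device->SetTexture(0, texA);        // ...while sampling the first pass's output
    DrawSecondPass(device);

    surfA->Release();
    surfB->Release();
}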
 
Crusher said:
I wonder if the pixel and vertex shaders will be disabled when the card ships, and enabled with drivers some months down the road, only to find they don't work properly at all...

now I know where you got your name... ;)

and don't bother bashing it. 8) chalnoth will do it for you. :LOL:
"it's S3 :arrow: it's piece of crap."
 
Well, here's hoping that they actually pull off something worthwhile for the mainstream consumer. The more support for DX9, the better! 8)

I don't think they'll hit either nVidia or ATI just yet, but keep trying folks, keep trying for the sake of competition.
 
Did I already say that this is the best news since autumn 2001 for me? :D
Damn, I am excited... hmmh... where did I put that ICQ active list as long as a gorilla's arm??


1...2...3... calm down... 3... 2... 1...
:devilish: :D :) :( 8) :rolleyes: :LOL: :eek: :? :oops: :devilish: :) :( :eek: :LOL: ;) :idea: :arrow: :!: :?:
 
Aye, and surely they've learnt from the past. But who's going to write the drivers now that OpenGL guy and Co. are at ATI? ;)

I mean, this forum was very excited about the S3/S4/S2000 in their day IIRC (before hardware was available, that is).

Do we only believe ATI and nVidia can pull it off now?
 
Randell said:
Do we only believe ATI and nVidia can pull it off now?

umh... this might sound fancy, but I got this crazy thought...
If we don't believe in newcomers, who will? What happens to the business if no one believes in change?

Think about that. I doubt I need to spell out my opinion on this particular case...
 
Randell said:
Aye, and surely they've learnt from the past. But who's going to write the drivers now that OpenGL guy and Co. are at ATI? ;)

I mean, this forum was very excited about the S3/S4/S2000 in their day IIRC (before hardware was available, that is).

Do we only believe ATI and nVidia can pull it off now?

Hmmm...maybe I should give up gaming and become a driver writer :p

Nahhh too much pressure :LOL:
 
[Image: DeltaChromeS3Turbo096.jpg]


Hmmm... are we to presume that these are default clock speeds? (Looks like something the Inquirer would pick up and report as fact...)

Anyone know the memory interface? (Does it support DDR-II? I assume 128-bit... but perhaps 256-bit?)

Anyway, best of luck to S3. Here's to hoping for another good competitor...
 
Nappe1 said:
Randell said:
Do we only believe ATI and nVidia can pull it off now?

umh... this might sound fancy, but I got this crazy thought...
If we don't believe in newcomers, who will? What happens to the business if no one believes in change?

Think about that. I doubt I need to spell out my opinion on this particular case...

Don't get me wrong, I half like/half dislike S3. The S4, for example, had some great points, especially in respect of S3TC, IQ, and trilinear speed; if only they hadn't crippled it with a 64-bit memory bus, it could have taken on the TNT2/V3 performance-wise. It also had some real problems (driver or silicon, I don't know), e.g. if you used MeTal to play UT online it would lock up solid.

However, they did have driver writers and customer support guys out in the forums helping people officially, something ATI, 3dfx and nVidia weren't doing at the time.

And competition helps of course, Nappe1.
 