S3 Chrome S2X

obobski

Newcomer
http://www.s3graphics.com/en/products/chrome_s27/ According to that, it's using 4x SSAA. Now, just to get this straight (to refresh my memory on this):


Does SSAA work by rendering the image 4 times and taking X number of sample points from each render? So its 5x mode would take 5 sample points from each render, 20 total samples (which is almost 3 passes...)? Or would 4x SSAA be 4 renders, each producing 1 sample point, meaning its 5x mode would be what, 5 renders? Or what is going on there? (It says 2x, 5x and 9x.)

I'm just wondering generally where the correlation is: the card performs amazingly well with no AA (for what it is), but when you turn on 2x AA its performance just nosedives... I know SSAA is the culprit here, but I'm trying to fully refresh my memory on how SSAA works.
 
SSAA works by rendering X samples per pixel. So 4x SSAA means it averages 4 samples per pixel; 5x would be 5 samples per pixel, not 20 (and a rather odd setup).
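
A minimal sketch of that resolve step in C (purely illustrative, not S3's actual code): every output pixel is just the average of its N sub-samples, whichever grid pattern they were taken on, which is also why counts like 2, 5 or 9 are perfectly legal.

```c
/* Illustrative SSAA resolve: average N sub-samples per output pixel.
 * samples[] holds N RGBA colours for one pixel.  The sample count and
 * grid pattern only affect where the samples were taken, not how they
 * are combined, so 2x, 5x and 9x modes all resolve the same way. */
static void resolve_pixel(const float *samples, int n, float out[4])
{
    for (int c = 0; c < 4; ++c) {
        float sum = 0.0f;
        for (int s = 0; s < n; ++s)
            sum += samples[s * 4 + c];
        out[c] = sum / (float)n;
    }
}
```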

For the older cards, they actually rendered at a higher resolution and scaled down. S3 says they are using a rotated grid, like the old Voodoo5. The reason the performance takes a nosedive is that they're taking multiple samples for every pixel, whereas ATI's and Nvidia's MSAA tries to only anti-alias edges.

http://www.beyond3d.com/articles/ssaa/index.php gives a good explanation of supersampling, though when I went to read it, it was broken around page 7.
 
SSAA just renders at a higher resolution and downsamples, effectively.

4x SSAA should be 4x the pixels (800x600 becomes 1600x1200). That would certainly explain why turning on AA makes stuff unplayable.

MSAA is far less of a performance eater, but it doesn't work as cleanly as SSAA.
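
A rough sketch of that ordered-grid case, assuming a plain 2x2 box filter (real hardware may weight the samples differently):

```c
/* Sketch of a 4x ordered-grid SSAA downsample: the scene is rendered at
 * twice the width and height (1600x1200 for an 800x600 target), and each
 * output pixel is the box-filtered average of its 2x2 source block.
 * Buffers are tightly packed RGBA floats. */
static void downsample_2x2(const float *src, float *dst, int dst_w, int dst_h)
{
    int src_w = dst_w * 2;
    for (int y = 0; y < dst_h; ++y) {
        for (int x = 0; x < dst_w; ++x) {
            for (int c = 0; c < 4; ++c) {
                float sum =
                    src[((2 * y    ) * src_w + 2 * x    ) * 4 + c] +
                    src[((2 * y    ) * src_w + 2 * x + 1) * 4 + c] +
                    src[((2 * y + 1) * src_w + 2 * x    ) * 4 + c] +
                    src[((2 * y + 1) * src_w + 2 * x + 1) * 4 + c];
                dst[(y * dst_w + x) * 4 + c] = sum * 0.25f;
            }
        }
    }
}
```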
 
That article sounds really weird; it seems to say that S3 achieves a "sparse" or "rotated" grid by first rendering the frame with ordered-grid supersampling but with the entire frame ROTATED (!), then rotating it back when downsampling the frame.
 
arjan de lumens said:
That article sounds really weird; it seems to say that S3 achieves a "sparse" or "rotated" grid by first rendering the frame with ordered-grid supersampling but with the entire frame ROTATED (!), then rotating it back when downsampling the frame.
That's what it says. Not exactly a memory-saving solution.
 
arjan de lumens said:
That article sounds really weird; it seems to say that S3 achieves a "sparse" or "rotated" grid by first rendering the frame with ordered-grid supersampling but with the entire frame ROTATED (!), then rotating it back when downsampling the frame.

That's the oddest solution I think I've ever seen. Wouldn't it make more sense to jitter the geometry rather than rotating the image?
 
It'd be super to be on the same page as you guys, but I don't read German...

But basically the S2X uses a really inefficient system to render its rotated SSAA grid, and uses a weird number of samples?

Why can't (seemingly) all the companies except ATI and nVidia (and 3DLabs, but we aren't talking about the professional sector) get it right? XGI could've done fine if they had just matured their drivers a bit before launch, gotten OEM contracts, and dealt with BitFluent sucking...

S3 could be doing fine if the card had 8 texturing processors and used more conventional AA modes...

It makes no sense.
Matrox gets it right in those respects, but doesn't launch stuff that often, and doesn't launch anything with tons of power when it does, while S3 and XGI get the theoretical power mostly right but end up seriously screwing something up...


cutting corners for cost?
 
the maddman said:
That's the oddest solution I think I've ever seen. Wouldn't it make more sense to jitter the geometry rather than rotating the image?
Jittering the geometry requires one geometry pass per sample. Rotating the whole scene probably only requires a larger framebuffer and more sophisticated downsampling. But the latter requires a load of bandwidth anyway, so the complexity of selecting the correct samples might be completely hidden.
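
For illustration only, here is roughly what "selecting the correct samples" at downsample time could look like under that reading of the article. The transform, the helper names and the nearest-neighbour fetch are all assumptions, not anything S3 has documented:

```c
/* Hypothetical resolve for "render the whole frame rotated" SSAA: the
 * scene is ordered-grid supersampled into an oversized buffer that was
 * rendered rotated by `angle` about the screen centre, and the resolve
 * rotates each output pixel's sub-sample positions into that buffer
 * before averaging, so the effective pattern inside the pixel ends up
 * being a rotated grid. */
#include <math.h>

static void sample_nearest(const float *buf, int w, int h,
                           float x, float y, float out[4])
{
    int ix = (int)(x + 0.5f);
    int iy = (int)(y + 0.5f);
    if (ix < 0) ix = 0;
    if (ix >= w) ix = w - 1;
    if (iy < 0) iy = 0;
    if (iy >= h) iy = h - 1;
    for (int c = 0; c < 4; ++c)
        out[c] = buf[(iy * w + ix) * 4 + c];
}

static void resolve_rotated_pixel(const float *rot, int rot_w, int rot_h,
                                  float angle, float scale,     /* supersampling factor per axis */
                                  float cx, float cy,           /* rotation centre, screen space  */
                                  float px, float py,           /* output pixel centre            */
                                  int n, const float sub[][2],  /* sub-sample offsets in pixels   */
                                  float out[4])
{
    float s = sinf(angle), c = cosf(angle);
    for (int k = 0; k < 4; ++k)
        out[k] = 0.0f;
    for (int i = 0; i < n; ++i) {
        /* sub-sample position in normal screen space, relative to the rotation centre */
        float x = px + sub[i][0] - cx;
        float y = py + sub[i][1] - cy;
        /* rotate and scale into the oversized, rotated buffer */
        float rx = (c * x - s * y) * scale + rot_w * 0.5f;
        float ry = (s * x + c * y) * scale + rot_h * 0.5f;
        float texel[4];
        sample_nearest(rot, rot_w, rot_h, rx, ry, texel);
        for (int k = 0; k < 4; ++k)
            out[k] += texel[k];
    }
    for (int k = 0; k < 4; ++k)
        out[k] /= (float)n;
}
```

The appeal would be that the renderer itself still only does plain ordered-grid supersampling; all the rotated-grid behaviour lives in the resolve, at the cost of a larger framebuffer and that extra addressing work.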
 
obobski said:
cutting corners for cost?
Perhaps. Although I do get the feeling that the problem is more that they don't properly plan their designs for the correct performance/feature-set targets and/or don't do the necessary underlying research. The S3 anti-aliasing method strikes me strongly as a feature that isn't actually present or at all planned in the hardware design, but hacked on top later with a bunch of driver hacks, while their AF method (also lambasted in the German article) looks like they just used some shitty algorithm they found somewhere without taking time to study its properties, much less tweak it.

Methinks they are mainly cutting corners on R&D cost to meet perceived budget and time-to-market requirements, only to find later that all the corner-cutting has been severe enough to prevent their design from ever being remotely competitive - invariably with the result that whatever money they saved on R&D, they lose tenfold through failing sales.
 
But the S2X series does fine even with in-game IQ turned up; it's only when you try to enable AF and AA that it really takes it in the chops...

Also, its design seems a little backwards compared to the normal ratio of pixel shaders to texturing units.

Most modern ones are either 1:1 (like NV40) or have a ratio with more texturing than pixel shading (like G70), while theirs is 8 pixel to 4 texture.

So it has the pixel fill rate of a 6800GT, the texturing abilities of something like a 6600nU (or less), and the worst AA and AF algorithms they could seemingly find (I wish I read German... anyone got a good translator in mind?)...
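
Back-of-the-envelope numbers behind that comparison (the clock speeds here are assumptions from memory, not figures from the linked page):

```c
/* Peak fill-rate arithmetic, assuming roughly 700 MHz for the Chrome S27
 * (8 pixel pipes / 4 TMUs, per the ratio above), 350 MHz for the 16-pipe
 * GeForce 6800 GT and 300 MHz for the 8-TMU GeForce 6600 non-Ultra. */
#include <stdio.h>

int main(void)
{
    printf("S27 pixel fill:      %d Mpixels/s\n", 8 * 700);   /* ~5600 */
    printf("6800 GT pixel fill:  %d Mpixels/s\n", 16 * 350);  /* ~5600 */
    printf("S27 texel fill:      %d Mtexels/s\n", 4 * 700);   /* ~2800 */
    printf("6600 texel fill:     %d Mtexels/s\n", 8 * 300);   /* ~2400 */
    return 0;
}
```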

It doesn't make sense. It seems like they started out with all the right ideas: 90nm technology, high core clock, low heat dissipation, 8 pixel shaders, a decent memory interface, PCIe, multi-GPU capable (MultiChrome)... and it's like they just stopped, threw whatever elements of some crappier previous-gen chip onto it to make it function, and said "here"...

What is the huge problem with just pushing through? I don't see a financial issue, given that they're owned by VIA... which doesn't seem to have any issues with money (and the K8T900 was somewhat designed around the S2X's MultiChrome feature, at least to allow support for it), so I don't think VIA's funding/support is the issue. It's like they just said "f this, let's do something else", and they have yet to launch something actually good...

XGI does similar things: they seem to start with some good ideas and technology and then just say "F it" after a point, and neither S3 nor XGI produces anything that's really worth what it costs, even though it doesn't cost much.
 
I think the poor AF/AA comes from the transistor budget S3 works in. They're always talking up how small their designs are; they have to keep things as simple as possible. If they make the AF/AA smarter, it's going to make the chip bigger, hotter, and slower.
 
the maddman said:
I think the poor AF/AA comes from the transistor budget S3 works in. They're always talking up how small their designs are; they have to keep things as simple as possible. If they make the AF/AA smarter, it's going to make the chip bigger, hotter, and slower.
In that case, I have to put a serious question mark over what market they intend this GPU for. My general impression is that people who make a conscious decision to buy a discrete graphics card are highly conscious of both performance and image quality, and as such very unlikely to accept a card that skimps on AF/AA these days, while people who don't make such conscious decisions are usually quite satisfied with an el-cheapo IGP in the first place. Skimping on AF/AA may as such be a reasonable choice for an IGP; I just don't see how it can make a whole lot of sense in a discrete card, even if you save ~15-25% of the transistor count by doing so.
 
This S27 review seems pretty promising, at least compared to previous parts. It's interesting to note the very variable AA+AF hit. I'd like to see separate AA and AF benches and, depending on how the S27 performs SSAA, maybe a "fairer" AA+AF comparison setting (SSAA offers some incidental AF, no?).

Considering the mipmap screens, it'd also be interesting to hear IQ comparisons. Though it appears to be applying AF more evenly across the screen, Xbit says they sample more from the first mipmap and so may present more shimmer. OTOH, nV's boundary blends look positively stingy by comparison, and who knows what ATI's doing in-game (assuming trylinear still presents its best face with colored mip apps).

And so ends me talking out of my ass. I'd still like to know why S3 keeps up with NV in Pacific Fighters while ATI struggles.
 
I'd love to see S3 implement some MSAA instead of forcing SSAA. That's the performance killer, and it shows very obviously in those Xbit benchies. Otherwise, methinks S3 did an excellent job! The OpenGL performance is killer against ATI, and it's a small and probably quiet card.

Hope they can just iron out the little wrinkles...
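
To make the "performance killer" point concrete, here's a toy sketch of the generic difference between the two (not specific to S3 or anyone else): SSAA runs the pixel shader once per sample, while MSAA shades once per pixel and only replicates the result to whichever samples the triangle actually covers, so shading and texturing cost stays close to the no-AA case.

```c
/* Toy illustration of the SSAA vs. MSAA shading cost, with a stand-in
 * pixel shader.  4x SSAA invokes the shader four times per pixel; 4x MSAA
 * invokes it once and copies the result to the covered samples. */
typedef struct { float rgba[4]; } Color;

static Color shade(float x, float y)           /* stand-in for a full pixel shader */
{
    Color c = { { x * 0.001f, y * 0.001f, 0.5f, 1.0f } };
    return c;
}

void ssaa_pixel(float x, float y, const float sub[4][2], Color samples[4])
{
    for (int i = 0; i < 4; ++i)                /* 4 shader invocations per pixel */
        samples[i] = shade(x + sub[i][0], y + sub[i][1]);
}

void msaa_pixel(float x, float y, unsigned coverage_mask, Color samples[4])
{
    Color c = shade(x, y);                     /* 1 shader invocation per pixel */
    for (int i = 0; i < 4; ++i)
        if (coverage_mask & (1u << i))         /* write only the covered samples */
            samples[i] = c;
}
```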
 
It's a bit worrying that the card doesn't seem to support AA at all under OpenGL; if it is indeed true that they have hacked on rotated-frame SSAA as a software-only afterthought, it would seem to me that it can be difficult to make it work cleanly with OpenGL's scissor test feature (an axis-aligned scissor rectangle in window coordinates would correspond to a rotated region in the oversized render target, which the hardware scissor can't express).

There appears to be no support for non-Windows operating systems; this is probably a non-issue for most people, but a showstopper for some.

Other than AA/AF, the performance is reasonable, roughly in line with what you'd expect from an 8-pipeline GPU (although the clock speeds are much higher than other 8-pipeline GPUs of similar performance). The power consumption levels look good, but I suspect that much of it is due to the GPU not supporting MSAA and being limited to PS2.0 (PS2.0 requires only FP24; PS3.0 requires FP32, which sucks up perhaps 50-70% more power).

S3 has narrowed the gap up to NV/ATI with this one, but IMO not quite closed it yet. Hopefully, the fate of this GPU, whether good or bad, won't dissuade S3 from further development of the Chrome series cores; at the moment, they are easily the most credible candidate to enter into direct competition with NV/ATI.
 
Very well put, arjan; makes sense...

Hopefully their next design won't skimp on features. Possibly the S2X's weak point is that nothing they've done in the past did so great, so they had to prove to some investor group that they're still there... I don't know. Hopefully they'll get moving faster (as they seem to have been doing in the last year or so) to put some pressure on nV and ATI in the middle and, hopefully, upper ranges.

That would push things along faster and pop the price bubble... S3 seems to have low-cost cards mastered; if only they could apply that to a card that competes with G70 or R520 and force those cards down out of the near-$1000 range (Newegg has 512MB 7800GTXs at $750... and they sell; I don't know what's more surprising, them asking that much for that card or people paying it).

And it would be nice to see a third option for graphics cards that's actually viable.
If only Matrox were exercising this same forward jump... *roll eyes*
 
Having serious competitors to NV/ATI other than S3 is probably not much more than a pipe dream at this point. AFAICT, the situation with the non-NV/ATI/S3 GPU developers seems to be:
  • Matrox used to have one of the best engineering teams of them all, but was fried so badly with the Parhelia fiasco and the mismanagement leading up to it that I don't expect them to recover - ever. They have their fringe markets, but it looks like it would be trivial for NV/ATI to squeeze them out or suck them dry at the slightest sense of threat.
  • XGI may have good funding, but has a LONG way to go if the Volari was any indication - assuming, of course, that they don't have the plug pulled on them. Something similar seems to apply to 3dlabs (except in a narrow professional segment).
  • Intel seems to be content with the IGP market and hasn't shown any real interest in discrete GPUs since the ill-fated i740.
  • ImgTech, BitBoys, Falanx, Takumi: While all of these seem to enjoy various degrees of success in the mobile segment and claim to have pixel shader capable IP cores, none of these appear to have the resources to break into the desktop market without massive backing from much larger companies. Right now, it would probably be trivial for NV or ATI to borg them all if they wanted to.
 