Hardware forced AA?

2real4tv

Veteran
This is most likely a stupid question, but to me the Achilles heel for consoles this gen has been the low amount of AA in some games, which brings up my question: is it theoretically possible to come up with some tech/hardware to handle the output signal of the consoles and somehow force AA?
 
Forcing AA would bring some games to a crawl, I bet. Developers wouldn't be too happy, considering it's really up to them to make use of the hardware they are using.
 
Do you really mean force AA, or do you mean add AA on the output stream before it hits the TV? The latter is impossible. You can't add AA, not real AA anyhow, post rendering. You could only do a fake jaggy-reduction step which amounts to a blur. And that would add lag as well as look pretty rough. I recommend you get yourself a 14" CRT. Jaggies are non-existent! And PS3 deinterlaces beautifully.
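For illustration, such a fake jaggy-reduction pass might look something like the sketch below (NumPy; the luminance weights are conventional, but the edge threshold and 3x3 box blur are arbitrary choices for the sake of the example, not anyone's actual algorithm):

Code:
import numpy as np

def fake_aa(frame, threshold=0.1):
    """Blur-based 'jaggy reduction' on a finished frame (H, W, 3) in [0, 1].

    Not real AA: no extra scene data is available, so all it can do is
    smear the pixels where it detects a luminance edge."""
    luma = frame @ np.array([0.299, 0.587, 0.114])    # per-pixel luminance
    edge = np.zeros_like(luma)                        # mark likely edge pixels
    edge[:-1, :] = np.maximum(edge[:-1, :], np.abs(np.diff(luma, axis=0)))
    edge[:, :-1] = np.maximum(edge[:, :-1], np.abs(np.diff(luma, axis=1)))

    padded = np.pad(frame, ((1, 1), (1, 1), (0, 0)), mode='edge')
    blurred = np.zeros_like(frame)                    # 3x3 box blur
    for dy in range(3):
        for dx in range(3):
            blurred += padded[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
    blurred /= 9.0

    mask = (edge > threshold)[..., None]              # blur only on edges
    return np.where(mask, blurred, frame)

All it does is trade jaggies for smeared edges, which is exactly the "blur plus lag" trade-off described above.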
 
No, you can't really add AA, because that requires adding more unknown data to the picture (though I suppose you could extrapolate it, but ultimately it's still guesswork).
It's like a blurry photo of a street sign: there's really no way of filtering it to make the words come into focus, because that's unknown data (unlike whatever you may see on TV with the CIA doing it).
You can go the opposite direction, though, and remove data with a blur, like Shifty says.
 
This is most likely a stupid question, but to me the Achilles heel for consoles this gen has been the low amount of AA in some games, which brings up my question: is it theoretically possible to come up with some tech/hardware to handle the output signal of the consoles and somehow force AA?

A little off topic here, but I disagree. To me most games look great this gen, but sometimes the content isn't there. Since budgets are relative to dev time and possible income from sales, could too much emphasis on the eye candy hurt the all-important gameplay?

A little more on topic: wasn't the "extremely" high resolution (HD) supposed to compensate for the lack of AA, or at least diminish the need for AA a bit?
 
1280x720 really isn't extremely high resolution. :???: On top of that, when my TV scales the output to 1080p, it does a very poor job and quite frankly it looks like ass. Extra AA would help somewhat. Native 1080 *and* AA would be the ticket.

IMO we shouldn't be putting up with jaggies anymore at this stage, but it doesn't look like good anti-aliasing is in the cards for this generation of consoles. Really, to me it's one of those fundamental things that should be there, and the lack of high-quality AA detracts from some otherwise lovely looking games, and it kind of ticks me off that it's almost considered an afterthought.
 
Do you really mean force AA, or do you mean add AA on the output stream before it hits the TV? The latter is impossible. You can't add AA, not real AA anyhow, post rendering. You could only do a fake jaggy-reduction step which amounts to a blur. And that would add lag as well as look pretty rough. I recommend you get yourself a 14" CRT. Jaggies are non-existent! And PS3 deinterlaces beautifully.

Yes, the latter is what I meant, thanks for the answer. I knew it would add some type of lag because of the processing time to handle one frame. I don't know how AA works (software- or hardware-intensive) in image processing; it's just my electronics mind trying to think of new ideas.
 
Yes, the latter is what I meant, thanks for the answer. I knew it would add some type of lag because of the processing time to handle one frame. I don't know how AA works (software- or hardware-intensive) in image processing; it's just my electronics mind trying to think of new ideas.
I think you misunderstand.
It shouldn't really add lag to any great degree, as I assume it's a cheap operation (as in, it will always stay within the same frame, i.e. it won't skip frames).
AA works by taking a larger-resolution image of the scene and then averaging some number of samples within that image to come up with each final pixel's value, e.g. 4xAA == combine 4 samples in the same region to make one pixel (usually, the way it's done now, only the edge pixels get extra samples, ignoring the textures and taking only depth values).
Thus what you ask for is impossible.
Take a 1280x720 image and somehow turn it into a 1280x720 4xAA image: even with infinite time, the only way to get from 1280x720 to 1280x720 4xAA is to have access to better data than what you receive.
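For what it's worth, the resolve/downscale step really is just averaging. A minimal NumPy sketch of the "render at 2x in each direction, average each 2x2 block" case (the 2x rendering itself is exactly the part an external box can't do; `render_scene` is hypothetical):

Code:
import numpy as np

def downsample_2x(hires):
    """Average each 2x2 block of a 2560x1440 render into one 1280x720 pixel.

    This is the resolve step of 4x ordered-grid supersampling; it only works
    because four times as many pixels were actually rendered to begin with."""
    h, w, c = hires.shape
    assert h % 2 == 0 and w % 2 == 0
    return hires.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

# hires = render_scene(2560, 1440)   # hypothetical renderer, not a real API
# final = downsample_2x(hires)       # 1280x720, each pixel averages 4 samples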
 
I think you misunderstand.
It shouldn't really add lag to any great degree, as I assume it's a cheap operation (as in, it will always stay within the same frame, i.e. it won't skip frames).
AA works by taking a larger-resolution image of the scene and then averaging some number of samples within that image to come up with each final pixel's value, e.g. 4xAA == combine 4 samples in the same region to make one pixel (usually, the way it's done now, only the edge pixels get extra samples, ignoring the textures and taking only depth values).
Thus what you ask for is impossible.
Take a 1280x720 image and somehow turn it into a 1280x720 4xAA image: even with infinite time, the only way to get from 1280x720 to 1280x720 4xAA is to have access to better data than what you receive.

Can it work similar to how photos are supersampled, but on a frame-by-frame basis? Again, I am a noob at this and currently a sophomore comp sci student; I was just curious.
 
No. Supersampling is generally not a good idea (it takes way too many resources), so multi-sample AA (MSAA) is used instead. Only the edges are given multiple samples to reduce aliasing.
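Rough numbers make the resource difference concrete (a back-of-envelope sketch; overdraw and MSAA sample compression are ignored):

Code:
# Rough fragment counts at 1280x720, ignoring overdraw. 4x supersampling
# shades every sample; 4x MSAA shades each pixel once per covered triangle
# and replicates that colour to the covered samples, so the extra work is
# concentrated on polygon edges (storage is still 4 samples per pixel).
WIDTH, HEIGHT = 1280, 720
pixels = WIDTH * HEIGHT

print(f"no AA   : {pixels:>9,} fragments shaded")
print(f"4x MSAA : {pixels:>9,} fragments shaded (+ 4 stored samples per pixel)")
print(f"4x SSAA : {4 * pixels:>9,} fragments shaded")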
 
Can it work similar to how photos are supersampled, but on a frame-by-frame basis? Again, I am a noob at this and currently a sophomore comp sci student; I was just curious.

What do you mean by "how photos are supersampled"?


Anyway, you can always downscale to add more AA. :)
 
I think that's what 2real4tv means. A photo doesn't suffer from aliasing because it's basically a huge supersample. And as you say, you can just render big and downscale, but at an unwieldy cost.
 
I think that's what 2real4tv means. A photo doesn't suffer from aliasing because it's basically a huge supersample. And as you say, you can just render big and downscale, but at an unwieldy cost.
I strongly suspect that a traditional photograph probably doesn't suffer from "aliasing" because the grains of silver (BW) / "clouds of dye" (colour) that form on the negative are essentially random. This would transform aliasing into high frequency noise.
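The grain argument can be illustrated with a toy 1D experiment (a sketch; the chirp signal and jitter range are arbitrary choices): sampling a too-high frequency on a regular grid produces a coherent false pattern, while randomly jittered sample positions turn the same error into noise.

Code:
import numpy as np

# Sample a chirp whose frequency sweeps well past the sample rate.
# A regular grid aliases it into a clean (and wrong) low-frequency pattern;
# randomly jittered sample positions turn the same error into broadband
# noise, roughly what irregular film grain does to fine spatial detail.
rng = np.random.default_rng(0)
n = 200
signal = lambda x: np.sin(2 * np.pi * 300 * x**2)

regular = signal(np.arange(n) / n)
jittered = signal((np.arange(n) + rng.uniform(-0.5, 0.5, n)) / n)
# Plot `regular` and you see smooth false waves; `jittered` just looks noisy.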
 
Isn't any hardware that can do some sort of AA in a single cycle considered able to force AA? The problems are of course the framebuffer size (Xenos) and memory bandwidth (PC cards and uber resolutions). I mean, lower the resolution to non-ridunkulous for a G80, and most games don't take much of a hit when doing 4xMSAA.
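The Xenos framebuffer point is easy to put numbers on (a rough sketch, assuming 32-bit colour and 32-bit depth per sample):

Code:
import math

# Does a 1280x720 target fit in Xenos's 10 MB of eDRAM, or does it need tiling?
EDRAM_MB = 10
WIDTH, HEIGHT, BYTES_PER_SAMPLE = 1280, 720, 4 + 4

for samples in (1, 2, 4):
    size_mb = WIDTH * HEIGHT * samples * BYTES_PER_SAMPLE / 2**20
    tiles = math.ceil(size_mb / EDRAM_MB)
    print(f"{samples}x: {size_mb:5.1f} MB -> {tiles} tile(s)")

That gives roughly 7 MB at 1x (one tile), 14 MB at 2x (two tiles) and 28 MB at 4x (three tiles), which is why 720p MSAA on Xenos means predicated tiling rather than coming for free.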
 
I strongly suspect that a traditional photograph probably doesn't suffer from "aliasing" because the grains of silver (BW) / "clouds of dye" (colour) that form on the negative are essentially random. This would transform aliasing into high frequency noise.

I'm not really familiar with photography, but I don't see why aliasing would occur; it's a sampling phenomenon after all.

I think that's what 2real4tv means. A photo doesn't suffer from aliasing because it's basically a huge supersample.
I thought he was talking about digital images.
I guess you can approximate real life as an infinite-resolution image, which won't have any aliasing issues.
And as you say, you can just render big and downscale, but at an unwieldy cost.
I was kidding, referring to the 360 or some PS3 games downscaling for SDTV, since that's indeed forced AA.

Whether hardware can increase the rendering resolution in a developer-transparent way is an interesting question, though. I mean, even if a PS3.5 had reasonably sufficient resources, I would be surprised if it could be done easily.

Say a cel-shaded game is doing 1-pixel-wide silhouette outlining; I don't think this is scalable (and even if it is, the outline would probably be lost under 8xSSAA anyway).
I suspect the same would be true for many post-processing effects.

I wonder how successful software emulation is in that regard.
 
I'm not really familiar with photography, but I don't see why aliasing would occur; it's a sampling phenomenon after all.
So why does that mean that chemical-based photography is not sampling as well? You know that your eyes do sampling, don't you?

Wikipedia has some info on B&W film grain. The grains are of finite size and so can be considered the same as turning pixels on/off except that these pixels are not arranged in a regular grid.
 
So why does that mean that chemical-based photography is not sampling as well? You know that your eyes do sampling, don't you?

Wikipedia has some info on B&W film grain. The grains are of finite size and so can be considered the same as turning pixels on/off except that these pixels are not arranged in a regular grid.

In the quantum world everything is discrete, yet for some reason I don't see much practical value in going there.
Even if you consider that sampling, the photons exposing a negative will already have been anti-aliased by the lens, the air, or even energy propagation on the surface, before any random noise on the film is needed.
Do you really think a perfect film would show any aliasing?
 
Wikipedia has some info on B&W film grain. The grains are of finite size and so can be considered the same as turning pixels on/off except that these pixels are not arranged in a regular grid.
Ordered grid or not, they're also very, very small! If we had displays of 300 dpi, we wouldn't be seeing aliasing. There's certainly no noticeable aliasing, even up close, in 300 dpi photo prints of graphics (as opposed to photos, which have AA). The problem is that displays present viewers with enormous pixels that the viewer can individually discern, and AA is just about approximating what something would look like if produced at a higher resolution - you put up a 50% grey pixel to give the same appearance as lots of tiny checkered black and white dots that are too small to render on the display. If you had a high enough resolution you could render those dots individually and the viewer would then see them as a grey spot. Unless they were sitting really, really close!
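The grey-pixel point in miniature (a trivial sketch):

Code:
import numpy as np

# A 2x2 black/white checker too small to resolve just reads as mid grey;
# averaging the samples is exactly what an AA resolve (or a 300 dpi print
# viewed at arm's length) effectively does.
checker = np.array([[0.0, 1.0],
                    [1.0, 0.0]])
print(checker.mean())   # 0.5 -> the single grey pixel that stands in for it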
 
I was thinking in terms of a standalone DSP (or some derivative) that would be able to add AA to the output (HDMI in --> HDMI out) data stream of the PS3, and whether that's theoretically possible.
 
I was thinking in terms of a standalone DSP (or some derivative) that would be able to add AA to the output (HDMI in --> HDMI out) data stream of the PS3, and whether that's theoretically possible.

Theoretically it's not possible. The problem is similar to recovering the input of a modulo operator (which naturally doesn't have an inverse).

Still, it's not wise to assume there won't be any practically acceptable solution based on rasterization or general rendering patterns, coupled with temporal analysis and extrapolation or something like that.
I wouldn't hold my breath though. :)
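The modulo analogy can be made concrete: just as many inputs map to the same `x % n`, many different high-resolution frames resolve to the same output frame, so nothing downstream of the HDMI port can tell them apart (a toy sketch):

Code:
import numpy as np

# Two different 2x2 "high-res" tiles that resolve to exactly the same pixel.
# Once only the average is left, they cannot be told apart, which is why
# real AA cannot be reconstructed from the output stream alone.
tile_a = np.array([[0.0, 1.0],
                   [1.0, 0.0]])
tile_b = np.array([[0.5, 0.5],
                   [0.5, 0.5]])
print(tile_a.mean() == tile_b.mean())   # True: identical after downsampling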
 