Line Rasterization and Antialiasing (Not Necessarily 3D)

I'm trying to write myself a line-drawing function onto a 2D surface, but I'm running into a small dilemma. I'm not sure if I should be using a DDA algorithm or Bresenham's algorithm. I currently have the DDA algorithm implemented because I don't plan on drawing many lines, but I'm stuck on how to implement anti-aliasing with that type of algorithm. I'm just not sure how I'd get the pixel weights for doing a linear interpolation between pixels.

Also, a completely separate question that has nothing to do with 3D and more to do with DirectDraw: does anyone have any links that would give me a good idea of how to implement alpha channels in DirectDraw 7?
 
Definitely use Bresenham's algorithm, with sub-pixel precision. It avoids the rounding errors of floating-point DDA interpolation and is equally fast once you avoid conditional jumps (use a masking trick).
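
Roughly what that masking trick looks like in the inner loop (a sketch only: x-major case with dx >= dy >= 0, plot() is an assumed pixel-write routine, and the sign-shift trick assumes 32-bit ints with arithmetic right shift):

extern void plot(int x, int y);         /* assumed pixel write */

/* Branchless Bresenham inner loop (x-major, dx >= dy >= 0). */
void line_bresenham(int x0, int y0, int x1, int y1)
{
    int dx = x1 - x0, dy = y1 - y0;
    int err = 2 * dy - dx;              /* decision variable */
    int x, y = y0;
    for (x = x0; x <= x1; x++) {
        plot(x, y);
        int mask = ~(err >> 31);        /* all ones when err >= 0, else zero */
        y   += mask & 1;                /* take the minor step only when err >= 0 */
        err -= mask & (2 * dx);
        err += 2 * dy;
    }
}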

Chris Hecker's Perspective Texture Mapping articles are a must-read.
 
I use DDA with fixed-point. 16.16 gives you plenty of precision for any screen resolution.
DDA makes the antialiasing very easy because you can just use the fractional part of the interpolator as the pixel coverage factor.
With Bresenham I suppose you'll need to scale the 'counter' back, which takes time (at best a reciprocal multiply, I suppose). Bresenham also makes the subpixel-correction step iterative, which is something I'm not particularly fond of. And I think Bresenham's speed advantage is long gone: conditional jumps in inner loops are deadly on modern CPUs, and a masking trick requires more processing than the simple shifts that fixed-point DDA requires.
This is why I chose DDA.
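
For reference, the per-pixel work in a 16.16 fixed-point DDA is roughly this (a sketch only: x-major case with x1 > x0, positive coordinates, 32-bit ints, and plot() again an assumed pixel write):

extern void plot(int x, int y);         /* assumed pixel write */

/* 16.16 fixed-point DDA line (x-major): one divide in the setup,
   then only an add and a shift per pixel. */
void line_dda(int x0, int y0, int x1, int y1)
{
    int dx = x1 - x0, dy = y1 - y0;
    int step = (dy << 16) / dx;         /* slope in 16.16 */
    int yfix = y0 << 16;
    int x;
    for (x = x0; x <= x1; x++) {
        plot(x, yfix >> 16);            /* integer part selects the row */
        yfix += step;                   /* fractional part comes along for free */
    }
}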
 
I believe the antialiased DDA algorithm is commonly known as the Wu algorithm (after Xiaolin Wu). Searching on that should give you plenty of hits.
It's simple though: for each step along the dominant direction, you take the two pixels nearest to the current (x,y) position along the minor axis. Then you shade one with coverage, and the other with 1 - coverage.
If you want to know about subpixel correction or fixed-point, I'm not sure you can find that much on the internet (it's pretty much 'pre-internet' technology, I suppose)... It should be covered in any old computer graphics book, though.
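
A sketch of that shading step, using the fractional part from the fixed-point DDA sketch above (blend(x, y, a) is an assumed routine that blends the pixel at (x, y) with intensity a in 0..255):

/* Inside the x-major DDA loop: shade the two pixels that straddle
   the true line position, weighted by the fractional part of y. */
int yi  = yfix >> 16;                   /* integer row */
int cov = (yfix >> 8) & 0xFF;           /* fraction rescaled to 0..255 */
blend(x, yi,     255 - cov);            /* nearer pixel: 1 - coverage */
blend(x, yi + 1, cov);                  /* farther pixel: coverage */
yfix += step;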
 
Hi Scali! :)
Scali said:
Bresenham also makes the subpixel-correction step iterative, which is something I'm not particularly fond of.
It isn't iterative, it's just one if-statement: it either adds an extra step value or zero. So it can be done very easily and very fast with a masking operation. Please read Hecker's articles for the theory.

By the way, anti-aliasing is easy as well, since the coverage factor is just the 'error' term used in the Bresenham algorithm.
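
One way to spell that out (a sketch only: x-major case with dx >= 1, blend() is the same assumed routine as above; note the counter F still has to be rescaled with a reciprocal of dx before it is a 0..255 coverage value, which is the renormalization debated below):

/* Bresenham-style integer counter with coverage derived from it. */
int F = 0;                              /* y fraction, pre-scaled by dx: 0..dx-1 */
int recip = (255 << 16) / dx;           /* one divide per line */
int x, y = y0;
for (x = x0; x <= x1; x++) {
    int cov = (F * recip) >> 16;        /* F/dx rescaled to 0..255 */
    blend(x, y,     255 - cov);
    blend(x, y + 1, cov);
    F += dy;
    if (F >= dx) { F -= dx; y++; }      /* the usual Bresenham step */
}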
 
I think we need to define what we are talking about here...
What I mean by Bresenham is the division-less iterative solution...
It steps one pixel at a time in one direction, and has an internal counter for the other direction... If that counter hits a threshold, one step is taken in the other direction.
DDA instead takes dx/dy (or dy/dx), adds dx/dy to x every pixel, and adds 1 to y every pixel (or again the other way around).
Bresenham basically scales dx/dy up by not doing the division by dy. Instead it scales the fractional part of x up by dy... Now instead of taking an extra step when (fraction > 1), a check is required to see if the scaled-up counter is > dy.
Something like that anyway.
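
Spelled out for the y-major case described above (a comparison sketch in comments):

/* DDA, y-major:     xfrac += dx/dy each y step;
 *                   when xfrac >= 1:  xfrac -= 1, x++.
 * Multiply everything through by dy and the division disappears (Bresenham):
 *                   count += dx each y step;
 *                   when count >= dy: count -= dy, x++.
 * The fractional x position is count/dy, so it only becomes available
 * after dividing (or multiplying by a reciprocal of dy). */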
Now this means 2 things:
1) There are no fractions, therefore no direct subpixel information, no direct derivatives
2) The counter is not in a 'normalized' form, and needs to be renormalized in order to get any kind of fractional information that is useful for antialiasing.

Now, this leads to 2 problems:
1) How do you do subpixel correction, other than using scaled-up coordinates with subpixel info, iterating until you hit a hotspot, and then scaling down and stepping on a per-pixel basis?
2) How do you scale the counter to a useful format (e.g. a 0-255 range) that can be used for antialiasing directly, without dropping back to fractions and defeating the whole purpose of Bresenham?

So 1) is the iterative part that I meant... With DDA, you just use dx/dy to step to the hotspot directly. Bresenham doesn't calculate dx/dy in the first place, so stepping to the hotspot is not possible this way. Changing Bresenham to calculate dx/dy is like moving back to DDA; it defeats the purpose.
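
A sketch of that direct step (assuming 16.16 fixed-point start coordinates xfix0/yfix0, the 16.16 slope step from the earlier DDA sketch, and hotspots on integer coordinates; the exact rounding convention varies):

/* Sub-pixel prestep for an x-major line: jump straight from the
   fractional start point to the first pixel hotspot. */
int xfirst  = (xfix0 + 0xFFFF) >> 16;   /* first hotspot at or right of the start */
int prestep = (xfirst << 16) - xfix0;   /* 16.16 distance to that hotspot */
int yfix    = yfix0 + (int)(((long long)prestep * step) >> 16); /* 64-bit intermediate */
/* from here on, the per-pixel loop is the plain DDA shown earlier */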
I know that you can build Bresenham itself without an if-statement, using masking operations instead, but as I said before, fixed-point DDA only requires some shifts per pixel, and on modern CPUs it's about as fast as, if not faster than, the Bresenham mask hack, let alone the version with a conditional jump.
And as I said, the 'normalization' you'd need to do on the Bresenham counter would make it much slower than DDA.
Unless of course there is something that I had not thought of.
But these are the main reasons why I moved to DDA for subpixel-correct antialiased lines in my software engine.
 
Scali said:
1) How do you do subpixel correction, other than using scaled-up coordinates with subpixel info, iterating until you hit a hotspot, and then scaling down and stepping on a per-pixel basis?
Once more, read Hecker's articles. ;) He talks mainly about polygon rasterization and the last part is completely about texture mapping, but he also explains how to apply Bresenham's method to sub-pixel rasterization without much extra work.
2) How do you scale the counter to a useful format (e.g. a 0-255 range) that can be used for antialiasing directly, without dropping back to fractions and defeating the whole purpose of Bresenham?
No, if performance matters that much, you can just interpolate that value as another 'error' term.

Either way, Bresenham may take a little more setup, but it isn't necessarily slower, especially for long lines. And a much more important reason to use it than performance is its correctness.
 
Once more, read Hecker's articles. He talks mainly about polygon rasterization and the last part is completely about texture mapping, but he also explains how to apply Bresenham's method to sub-pixel rasterization without much extra work.

Be more specific? Which article? What page? What paragraph?
And what exactly does he say then?
Surely you don't expect me to read through everything without even knowing what to look for.

No, if performance matters that much, you can just interpolate that value as another 'error' term.

So basically you are suggesting to implement DDA inside Bresenham, so you can antialias. What a wonderful idea! That is EXACTLY what I meant in my earlier posts.

Either way, Bresenham may take a little more setup, but it isn't necessarily slower, especially for long lines. And a much more important reason to use it than performance is its correctness.

If you are suggesting both DDA and Bresenham in order to get the antialiasing, then I fail to see how it can possibly be faster than simply doing DDA.
As for correctness, what do you mean by that exactly? Precision? That's not an issue in practice.
Also, where do you get that Bresenham takes more setup? If anything, it takes less setup, since it avoids the division, and divisions are by far the most expensive operation, obviously.
As I already said earlier, it's the main loop that's the problem. An if per pixel is slow, and avoiding it with 'clever' masking tricks is more work than the straightforward shift/and operations for DDA.
And then there's the antialiasing.
So I would say that Bresenham is definitely slower on long lines; short lines would be its strength on a modern CPU.
 