I want to warp a 2D image to give it a perspective effect, so an overhead view would be transformed to look as if it were viewed at an angle. There are no rotations in the transform, and the image is shrunk everywhere, so interpolation (magnification) is not required. The transformation therefore maps an axis-aligned rectangle to a trapezium whose parallel sides are parallel to the X axis (horizontal). The image layout in memory is the common row-major layout: consecutive memory addresses run across the image horizontally.
I wrote a simple routine that downsamples a line from M pixels to N pixels (M >= N).
By varying N along Y (the vertical axis) I can produce the perspective warp, but only along the X axis. This works fine except for a sub-pixel/texel precision issue that I still have to fix.
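For what it's worth, the sub-pixel precision issue can be handled by weighting the partial source pixels at each destination pixel's boundaries by their coverage. Here is a minimal sketch of that idea in Python (exact boundary positions via Fraction; a real embedded version would use integer fixed-point instead, and the function name is just illustrative):

```python
from fractions import Fraction

def downsample_line(src, n):
    """Box-filter a line of M samples down to n samples (M >= n).
    Each destination pixel averages the source span it covers,
    weighting partially covered source pixels by their coverage."""
    m = len(src)
    scale = Fraction(m, n)          # source pixels per destination pixel (exact)
    out = []
    for i in range(n):
        start = i * scale           # exact span boundaries in source coordinates
        end = (i + 1) * scale
        acc = 0.0
        x = start
        while x < end:
            # advance to the next source-pixel boundary or the span end
            x_next = min(int(x) + 1, end)
            acc += src[int(x)] * float(x_next - x)
            x = x_next
        out.append(acc / float(scale))
    return out
```

A constant line stays constant after downsampling, which is a quick sanity check that the coverage weights sum correctly.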
Now I am not sure what would be a good way to implement the perspective warp in the Y (vertical) direction too. My concern is that it would thrash the cache really badly, since vertically adjacent pixels are a full row apart in memory.
So, should I just downsample lines from the source image one line at a time, and then accumulate them into a single line in the destination image?
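That first option can be sketched roughly as follows (Python pseudocode, not your routine; `width_at`, `box_downsample`, and the centring of narrowed rows are my assumptions): stream the source rows in memory order, downsample each one horizontally, and average the rows that land on the same destination row. Since the source is read strictly top to bottom, this is the cache-friendly access order for a row-major layout.

```python
def box_downsample(src, n):
    """Crude integer-bucket box filter from len(src) to n samples (len(src) >= n)."""
    m = len(src)
    out = [0.0] * n
    cnt = [0] * n
    for x in range(m):
        i = x * n // m              # destination bucket for this source pixel
        out[i] += src[x]
        cnt[i] += 1
    return [out[i] / cnt[i] for i in range(n)]

def perspective_shrink(src, dst_w, dst_h, width_at):
    """Pure shrink in both axes. width_at(j) gives the trapezium width
    at destination row j; narrowed rows are centred horizontally."""
    src_h = len(src)
    dst = [[0.0] * dst_w for _ in range(dst_h)]
    counts = [0] * dst_h
    for y in range(src_h):          # single sequential pass over source rows
        j = y * dst_h // src_h      # destination row this source row feeds
        w = width_at(j)
        line = box_downsample(src[y], w)
        pad = (dst_w - w) // 2
        for x in range(w):
            dst[j][pad + x] += line[x]
        counts[j] += 1
    for j in range(dst_h):          # average the accumulated rows
        if counts[j]:
            dst[j] = [v / counts[j] for v in dst[j]]
    return dst
```

The accumulation buffer only needs to be one destination row wide if you process rows in order, since all source rows feeding a given destination row are contiguous.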
Or should I swizzle the image into small blocks and, for each destination pixel, simply fetch the corresponding pixels from the swizzled source image?
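For the second option, a common tiling scheme stores the image as small square tiles laid out consecutively, so a vertical walk of up to `tile` pixels stays inside one tile and touches far fewer cache lines. A sketch of the address mapping (assuming the width is divisible by the tile size; names are illustrative):

```python
def tile_index(x, y, w, tile):
    """Flat offset of pixel (x, y) in a tile-major layout of a
    w-pixel-wide image with tile*tile blocks."""
    tx, ty = x // tile, y // tile   # which tile
    lx, ly = x % tile, y % tile     # position inside the tile
    tiles_per_row = w // tile
    return ((ty * tiles_per_row + tx) * tile + ly) * tile + lx

def swizzle(img, tile):
    """Reorder a row-major image (list of rows) into tile-major order."""
    h, w = len(img), len(img[0])
    out = [0] * (w * h)
    for y in range(h):
        for x in range(w):
            out[tile_index(x, y, w, tile)] = img[y][x]
    return out
```

The per-pixel index arithmetic is cheap (shifts and masks if `tile` and `tiles_per_row` are powers of two), but the swizzle itself costs one extra pass over the source image, so it only pays off if the vertical traffic would otherwise dominate.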
Or any other quick hacks?
Since I am playing with this on an embedded system with really bad tools, I cannot easily test the various methods, so I wanted some suggestions/hints.
Thanks!