Hello guys...
I have an image processing algorithm where each output pixel is the sum of the products of every pixel in its row with a kernel.
Would a GPU be a good fit for this kind of problem?
Because if the row has 1024 elements, each output pixel would need to read all 1024 elements, multiply them by the kernel, and sum the products.
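As I read it, each output pixel is a weighted sum over its entire row. For the outputs within a row to differ, the weights must depend on the output position; assuming a kernel of shape (W, W) where `kernel[x, j]` (my naming, not from the post) weights input pixel `j` for output pixel `x`, the whole operation is one matrix product per image, which is exactly what GPUs are fastest at. A minimal NumPy sketch of that reading:

```python
import numpy as np

def row_transform_naive(image, kernel):
    """Literal version: each output pixel reads all W pixels of its row.
    On a GPU, the (y, x) loop body would become one thread per output pixel."""
    h, w = image.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):          # one GPU thread per output pixel
            acc = 0.0
            for j in range(w):      # read all W pixels of the row
                acc += image[y, j] * kernel[x, j]
            out[y, x] = acc
    return out

def row_transform(image, kernel):
    """Same computation expressed as a matrix product:
    out[y, x] = sum_j image[y, j] * kernel[x, j]  =>  out = image @ kernel.T"""
    return image @ kernel.T
```

If the kernel is instead a short filter slid along the row (an ordinary 1D convolution), the answer is the same: every output pixel is independent of the others, so the work parallelizes across output pixels, and the many redundant row reads are handled on a GPU by caching the row in fast shared/local memory.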
Thank you.