Senior Project Help

CMAN

This isn't 3D related, so I'll put it in here.

For my senior project I'm trying to adapt JPEG-LS for use in a digital camera. The biggest drawback to using JPEG-LS in a camera is the buffering the process requires: since there is a predictor step, I'd need to be able to buffer up to two lines. My biggest headache is how to avoid buffering two entire lines when I only need 4 pixels for each individual pixel. For example:

c b d
a x

x is the current pixel being predicted and the other letters are the other pixels I need in relation to the current pixel. This allows me to detect horizontal and vertical lines and find smoothness.
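For reference, the prediction step itself is the JPEG-LS median edge detector (MED). Here is a minimal sketch in C, using the neighbour names from the diagram above (d only enters the context modelling, not the prediction itself):

[code]
/* JPEG-LS median edge detector (MED).
 * a = left, b = above, c = above-left of the current pixel x.
 * d (above-right) is only used later for the context gradients,
 * not for the prediction itself. */
static int med_predict(int a, int b, int c)
{
    int mx = a > b ? a : b;
    int mn = a < b ? a : b;

    if (c >= mx)
        return mn;          /* likely vertical/horizontal edge */
    if (c <= mn)
        return mx;
    return a + b - c;       /* smooth area: planar prediction */
}
[/code]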

Anyone have any ideas?
 
I think you might have done better to put this in 3d arch and coding, with an apology for it being somewhat off-topic --at least there the "right people" would be sure to see it. You could put "Somewhat off-topic" in the subject line and hope Neeyik is feeling expansive that day. Holiday season and all. . . :LOL:
 
Just wondering, why would you not be able to buffer two full lines? Almost all still digital cameras have multiple megs of SDRAM available.

Granted, any optimization is good, but I think based on the way a CCD is read, you're going to be stuck having to get the full lines in succession.
 
can someone move this to 3D arch and coding?

I think I'm going to try a streaming-type system where, instead of sending entire rows, it would only request a matrix of size 2xN. This would reduce the internal memory necessary by a good deal and wouldn't require another cache just for the DSP.

The downside of this approach is the increased bandwidth needed (2x, since the same data will need to be sent to the DSP twice) for the same performance, and the existing cache will need to hold each row until the next one is processed.

I'm going to help offset the increased bandwidth by pre-caching the next matrix while the current matrix is being processed. I can also optimize for individual systems by increasing the matrix size until the time needed to transfer the next matrix equals the time needed to complete the current calculations, up to the point where the memory is full with the two matrices.
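A rough sketch of the double-buffering idea, assuming hypothetical dma_request()/dma_wait()/process_strip() routines standing in for whatever transfer and encode calls the target DSP actually provides:

[code]
/* Double-buffered 2xN strip streaming (sketch).
 * dma_request(), dma_wait() and process_strip() are hypothetical
 * stand-ins for whatever transfer/encode routines the DSP provides. */
#define N 64                     /* strip width, tunable per system */

typedef unsigned char pixel;

extern void dma_request(pixel *dst, int strip_index); /* start async copy */
extern void dma_wait(void);                           /* block until done */
extern void process_strip(pixel strip[2][N]);         /* predict + encode */

void encode_row_pair(int strips_per_row)
{
    static pixel buf[2][2][N];   /* one strip in use, one being filled */
    int cur = 0;

    dma_request(&buf[cur][0][0], 0);          /* prefetch first strip */
    for (int s = 0; s < strips_per_row; s++) {
        dma_wait();                           /* current strip ready */
        if (s + 1 < strips_per_row)
            dma_request(&buf[cur ^ 1][0][0], s + 1);  /* fetch next */
        process_strip(buf[cur]);              /* compute while DMA runs */
        cur ^= 1;                             /* swap buffers */
    }
}
[/code]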

Does this sound like a good plan?

And can someone move this to 3D arch and coding?
 
RussSchultz said:
Just wondering, why would you not be able to buffer two full lines? Almost all still digital cameras have multiple megs of SDRAM available.

Granted, any optimization is good, but I think based on the way a CCD is read, you're going to be stuck having to get the full lines in succession.

I'm trying not to use the existing buffer so that it doesn't affect other operations on the camera. Basically, it wouldn't change too many of the existing structures.
 
Hmm, I'll confess up front that I don't know anything about JPEG-LS. But looking at your example, and making some big assumptions, you should only need to buffer one line.


I am assuming that once you have worked out a prediction for x you will subtract it from the actual value X to get 'X, and that it is 'X that you store.

C B D
A x

You have C, B and D buffered. Once x has been predicted you can discard C, read the next X into the location that just became free, and do your subtraction.
So you would only need a buffer of width + 1.

Or am I missing something?

CC
 
As Captain Chickenpants said, I think one row plus some additional storage is enough.

Consider this:

1 2 3 4 5 6 7 8 9 ...

is the buffered row. A temp variable a stores the first pixel of the next row, and the second pixel is loaded into a "current pixel" variable called x.

Now, with 1, 2, 3, and a, the prediction of x can be handled. Then replace 1 with a, put the original x into a, and proceed to the next pixel. This time, after the prediction, replace 2 with a, put x into a, and so on. After the row is completed, the buffer will contain the second row.
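A small C sketch of that rolling buffer, with the JPEG-LS MED predictor inlined and a hard-coded 8-pixel row purely for illustration (first-column handling is glossed over):

[code]
/* Rolling single-line buffer (sketch).  line[] starts as the previous
 * row and is overwritten in place, so it ends up holding the current
 * row for the next pass.  Residuals are just printed here instead of
 * being Golomb-coded. */
#include <stdio.h>

#define WIDTH 8

static int med_predict(int a, int b, int c)   /* same MED as above */
{
    int mx = a > b ? a : b, mn = a < b ? a : b;
    if (c >= mx) return mn;
    if (c <= mn) return mx;
    return a + b - c;
}

int main(void)
{
    int line[WIDTH] = {10, 10, 12, 13, 13, 14, 15, 15}; /* previous row */
    int row[WIDTH]  = {11, 11, 12, 14, 14, 15, 16, 16}; /* row being coded */
    int a = row[0];               /* left neighbour, seeded with column 0 */

    for (int i = 1; i < WIDTH; i++) {
        int x    = row[i];                        /* current pixel */
        int pred = med_predict(a, line[i], line[i - 1]);
        printf("residual %d\n", x - pred);        /* feed to entropy coder */
        line[i - 1] = a;    /* slot no longer needed as above-left */
        a = x;              /* current pixel becomes next left neighbour */
    }
    line[WIDTH - 1] = a;    /* line[] now equals row[], ready for next row */
    return 0;
}
[/code]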
 
Sorry for not adding to the topic, but which digital camera(s) are open to being reprogrammed by the user, and how is that accomplished, really? Has someone reverse-engineered the hardware or something? :)
 
Hmm, just been thinking about this a little more.

Are you reading directly from a CCD?
If so, then I think you will need to do demosaicing, white balance, gamma, saturation, sharpening etc. as well as JPEG compression.

Demosaicing will probably require at least 3 lines of data, as will sharpening.
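For example, a minimal sketch of the 3-line requirement for bilinear demosaicing, assuming a 640-pixel-wide Bayer sensor and ignoring borders:

[code]
/* Bilinear interpolation of the missing green value at a red/blue
 * site in a Bayer mosaic: the average of the four green neighbours
 * above, below, left and right, which is exactly why the line above
 * and the line below both have to be buffered.
 * raw[3][640] holds three consecutive sensor rows (borders ignored). */
static int green_at_rb(const unsigned char raw[3][640], int x)
{
    return (raw[0][x] + raw[2][x]               /* green above and below */
          + raw[1][x - 1] + raw[1][x + 1]) / 4; /* green left and right  */
}
[/code]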

CC
 
Right now we are just going to simulate the process as a proof of concept. So we are going to read from a file on a computer into a DSP and back into a computer (possibly modulated by the first DSP, with the signal then sent to another DSP, which we might use to demodulate and decode before saving).

We are not reverse engineering anything, just simulating a digital camera at the moment to see if it could work. We are also going to try ignoring CBD to see how that works (it may not), and try a Peano scan rather than a raster scan to see if that could work.
 
Captain Chickenpants said:
Hmm, just been thinking about this a little more.

Are you reading directly from a CCD?
If so, then I think you will need to do demosaicing, white balance, gamma, saturation, sharpening etc. as well as JPEG compression.

Demosaicing will probably require at least 3 lines of data, as will sharpening.

CC

Thanks! Do you have a link handy that shows a block diagram and/or white papers for the above processes?
 
Some general links.

Demosaicing stuff.
http://research.microsoft.com/users/lhe/papers/icassp04.demosaicing.pdf
http://www.cs.huji.ac.il/~werman/Papers/demosaicing_hints_icip04.pdf


Color temperature/ white balance stuff
http://www.nationmaster.com/encyclopedia/Color-temperature

Hue, saturation, brightness and contrast can all (I believe) be done with 4x4 matrix transforms, which you should be able to concatenate into a single matrix (rough sketch at the end of this post). The site below has the transforms for hue/saturation and brightness; I think contrast should be a simple scaling transform.
http://www.sgi.com/misc/grafica/matrix/


General color stuff
http://www.efg2.com/Lab/Library/Color/Science.htm
http://akvis.com/en/articles/index.php

Sharpening.
http://www.engr.panam.edu/~caharlow/EE4366_05/notes/ch19_edgeEnhance.pdf


Not sure on the best order for doing things, apart from sharpening last.
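
Rough sketch of the concatenation idea, following the SGI matrix page above (brightness as a uniform gain, saturation built from the luminance weights given there; the 1.1 and 1.3 gains are just made-up example values):

[code]
/* Concatenating colour adjustments into one 4x4 matrix (sketch).
 * Brightness is a uniform gain, saturation a lerp towards the
 * luminance axis; the two are multiplied together once, then every
 * pixel only needs a single matrix multiply. */
#include <stdio.h>

typedef float mat4[4][4];

static void mat_mul(mat4 out, mat4 a, mat4 b)
{
    for (int r = 0; r < 4; r++)
        for (int c = 0; c < 4; c++) {
            out[r][c] = 0.0f;
            for (int k = 0; k < 4; k++)
                out[r][c] += a[r][k] * b[k][c];
        }
}

/* Apply a 4x4 matrix to a pixel treated as (r, g, b, 1). */
static void apply(mat4 m, const float in[3], float out[3])
{
    for (int r = 0; r < 3; r++)
        out[r] = m[r][0]*in[0] + m[r][1]*in[1] + m[r][2]*in[2] + m[r][3];
}

int main(void)
{
    const float s = 1.3f;                                 /* saturation gain */
    const float lr = 0.3086f, lg = 0.6094f, lb = 0.0820f; /* luma weights    */
    mat4 sat = {
        {(1-s)*lr + s, (1-s)*lg,     (1-s)*lb,     0},
        {(1-s)*lr,     (1-s)*lg + s, (1-s)*lb,     0},
        {(1-s)*lr,     (1-s)*lg,     (1-s)*lb + s, 0},
        {0,            0,            0,            1},
    };
    mat4 bright = { {1.1f,0,0,0}, {0,1.1f,0,0}, {0,0,1.1f,0}, {0,0,0,1} };

    mat4 combined;
    mat_mul(combined, bright, sat);     /* one matrix does both adjustments */

    float in[3] = {0.5f, 0.4f, 0.3f}, out[3];
    apply(combined, in, out);
    printf("%.3f %.3f %.3f\n", out[0], out[1], out[2]);
    return 0;
}
[/code]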
 
Guden Oden said:
Sorry for not adding to the topic, but which digital camera(s) are open to be reprogrammed by the user, and how is that accomplished really? Has someone reverse-engineered the hardware or something? :)
Sorry as well for being slightly off-topic, but take a look here for example:
http://digita.mame.net/

(Is there any system out there mame hasn't been ported to? ;) )
 