Image signal processor: what is it?

This thing seems to be on almost all the SoCs aimed at the embedded market.

What is it?

What does it do?

Prima facie, the name suggests that its functions could be performed by a programmable GPU. So why is it there even on SoCs which have a programmable GPU?
 
Camera-related stuff, oversimplified: it mostly runs photo-enhancement algorithms. Your question of why they don't use GPUs for image signal processing falls into the same category as why SoCs (or even standalone GPUs, for that matter) use, for example, dedicated fixed-function hardware video decoders.

Would you, on the other hand, run audio-related tasks on the CPU? And if not, why would that be?
 
So an ISP is more power efficient than a GPU. But what sort of algorithms are these? If an ISP is better at them, then I don't think these could be per-pixel sorts of things.
 
So an ISP is more power efficient than a GPU.

A lot of dedicated-purpose fixed-function hardware would be, I guess.

But what sort of algorithms are these? If an ISP is better at them, then I don't think these could be per-pixel sorts of things.
From NV's Tegra 2 whitepaper:

Image Signal Processor (ISP): Many mobile devices include high resolution sensors for taking pictures and lower resolution cameras for Web video. The ISP in the NVIDIA Tegra processor is capable of taking raw camera sensor input at up to 12 megapixel resolution and 30 frames per second. The image processor can then apply real-time image enhancement algorithms like automatic white balance, edge enhancement, and noise reduction, and can even adjust for poor lighting conditions. The output of the ISP is then ideal to save as a picture or as a stream that can be compressed for live video conferencing or sent with HD quality across a broadband network.
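To make "automatic white balance" a bit more concrete, here is a minimal sketch of the classic gray-world method in Python. The function name and the normalized float RGB input are my own illustration, not anything from the Tegra whitepaper or a real ISP, which would do this in fixed-function hardware on the raw sensor data:

```python
# A minimal sketch of gray-world automatic white balance (my own
# illustration; not code from the Tegra whitepaper or any real ISP).
import numpy as np

def gray_world_awb(rgb: np.ndarray) -> np.ndarray:
    """Scale each channel so the image's average color becomes neutral gray.

    rgb: float array of shape (H, W, 3) with values in [0, 1].
    """
    # Per-channel means; the gray-world assumption says a typical scene
    # should average out to gray, so these three means should be equal.
    means = rgb.reshape(-1, 3).mean(axis=0)
    # Gain each channel toward the overall mean, clipping to stay in range.
    gains = means.mean() / np.maximum(means, 1e-6)
    return np.clip(rgb * gains, 0.0, 1.0)
```

Note how trivially parallel and regular this is per pixel, which is exactly the kind of workload that maps well onto cheap fixed-function hardware.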
 
But what sort of algorithms are these?
I believe part of it is because camera sensors don't take color pictures. They have a grid of color filters covering the sensor elements: typically a 2x2 quad with one red-filtered, one blue-filtered, and two green-filtered elements (the Bayer pattern; green is doubled because the eye is most sensitive to it).

An algorithm (demosaicing) then weighs the data from these neighboring elements to reconstruct a full-color value at every pixel, forming... well, an interpolated color picture, really. :LOL:
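Since demosaicing keeps coming up, here is a crude sketch of it in Python: it simply averages all same-channel samples in each pixel's 3x3 neighborhood, assuming the RGGB Bayer layout described above. The names and layout are my own assumptions for illustration; real ISPs use far more sophisticated, edge-aware interpolation in hardware:

```python
# A crude neighborhood-averaging demosaic for an RGGB Bayer mosaic
# (illustrative sketch only; real ISP demosaicing is much smarter).
import numpy as np

def demosaic_bilinear(raw: np.ndarray) -> np.ndarray:
    """Demosaic an RGGB Bayer image into RGB.

    raw: float array of shape (H, W), H and W even, values in [0, 1].
    Returns an (H, W, 3) RGB image.
    """
    h, w = raw.shape
    # One boolean mask per channel for the RGGB layout:
    #   R G
    #   G B
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True   # red
    masks[0::2, 1::2, 1] = True   # green (on red rows)
    masks[1::2, 0::2, 1] = True   # green (on blue rows)
    masks[1::2, 1::2, 2] = True   # blue

    rgb = np.zeros((h, w, 3))
    pad = np.pad(raw, 1, mode="reflect")
    pad_masks = np.pad(masks, ((1, 1), (1, 1), (0, 0)), mode="reflect")
    for c in range(3):
        # Sum the samples of this channel over each pixel's 3x3
        # neighborhood, count them, and take the average.
        vals = np.zeros((h, w))
        counts = np.zeros((h, w))
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                m = pad_masks[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx, c]
                v = pad[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx]
                vals += v * m
                counts += m
        rgb[:, :, c] = vals / np.maximum(counts, 1)
    return rgb
```

Again, this is a fixed, regular per-pixel computation on a streaming sensor input, which is why it suits a dedicated hardware block.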
 
I still don't see anything here that would give an order-of-magnitude advantage in power efficiency. Could it be that embedded GPUs with a reasonably generalized programming model would remove that order-of-magnitude power penalty?

PS: I am still reading the paper posted by Arun.
 
The ancient Greeks reasoned that the world was composed of four basic elements in various proportions. The model explained things on a purely superficial level, but of course it doesn't stand up to modern scrutiny.

Likewise, I don't think you can just 'see' whether dedicated camera silicon is more power efficient or not, although you could simply trust that if there IS dedicated silicon for it, there's probably a good reason for it, and not that the chip designers were stupid... ;)
 