Wednesday, December 30, 2009

Tracking using Adaptive Histogram

Do you ever wonder how your camera is able to detect faces? This feature of your camera is known as face detection, and together with its cousin, face recognition, it has been an interesting research topic for the past years. If you have watched a movie or a series where a character enters a room and is granted access by a facial scan, yes, that is one of the cool applications of face recognition.

Tracking is another research area of interest to many, especially in the field of biometrics. Some of the interesting studies include the tracking of basketball players over the course of a game.

For this activity, we will track a face using color as a cue. To do this, we apply histogram backprojection, which, if you can remember, we already discussed in Activity 4.

The first thing we did for this activity was to take images. For this part, we looked for a victim whose face we would track. Fortunately, we were able to find one. We asked him to walk from our lab, the Instrumentation Physics Laboratory inside NIP, to the front of NIP, holding a camera that recorded a video of his face while he walked. We then parsed the obtained video into individual frames. Here are some of the sample images obtained.

sample images

From the obtained images, we derived the skin locus. The skin locus is the region occupied by skin color under all possible illuminations, plotted in normalized chromaticity coordinates (NCC). In our case, we were only able to compute the skin locus of our victim for the different illuminations he experienced while walking. To do this, we chose images under different illuminations from our parsed video and calculated the histogram of each in NCC. We then summed all these histograms to obtain the skin locus.

Cropped skin images used in obtaining the skin locus

Skin locus obtained
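To give a feel for how this summing works, here is a minimal Python sketch (not the actual code we ran; the bin count and function names are only illustrative). It converts cropped skin patches to NCC and accumulates their 2D histograms:

```python
import numpy as np

def ncc_histogram(rgb, bins=32):
    """2D histogram of an RGB patch in normalized chromaticity coordinates,
    r = R/(R+G+B) and g = G/(R+G+B); b = 1 - r - g is redundant."""
    pix = rgb.reshape(-1, 3).astype(np.float64)
    s = pix.sum(axis=1)
    valid = s > 0                       # skip pure-black pixels (division by zero)
    r = pix[valid, 0] / s[valid]
    g = pix[valid, 1] / s[valid]
    hist, _, _ = np.histogram2d(r, g, bins=bins, range=[[0, 1], [0, 1]])
    return hist

def skin_locus(patches, bins=32):
    """Sum the NCC histograms of skin patches cropped under different
    illuminations; the accumulated histogram approximates the skin locus."""
    locus = np.zeros((bins, bins))
    for patch in patches:
        locus += ncc_histogram(patch, bins)
    return locus / locus.max()          # normalize so values lie in [0, 1]
```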

Then we performed the algorithm based on the paper Adaptive skin color modeling using the skin locus for selecting training pixels.

Following the algorithm described in the paper, we were able to track the face of our victim.

Tracked image of our victim

I had a hard time implementing the algorithm for face tracking because of the skin color of our victim. As can be seen in the images, his face is yellowish and sometimes blends with the color of the wall, making things hard for the algorithm. To address this problem, the distance between the centroids of the clusters obtained for consecutive images is checked. When the distance is higher than a defined threshold value, the cluster is rejected.
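For illustration, here is a simplified Python sketch of the tracking loop: each frame is backprojected through the skin-locus histogram, and the centroid-distance check described above rejects implausible detections. This is a single-cluster sketch under assumed parameter values (threshold and max_jump are hypothetical), not our actual implementation:

```python
import numpy as np

def backproject(frame, locus, bins=32):
    """Per-pixel skin likelihood: look up each pixel's (r, g) chromaticity
    in the skin-locus histogram."""
    rgb = frame.reshape(-1, 3).astype(np.float64)
    s = np.clip(rgb.sum(axis=1), 1e-6, None)
    r_idx = np.clip((rgb[:, 0] / s * bins).astype(int), 0, bins - 1)
    g_idx = np.clip((rgb[:, 1] / s * bins).astype(int), 0, bins - 1)
    return locus[r_idx, g_idx].reshape(frame.shape[:2])

def track(frames, locus, threshold=0.3, max_jump=40.0):
    """Follow the centroid of skin-likely pixels from frame to frame,
    rejecting candidates that jump farther than max_jump pixels."""
    prev, centroids = None, []
    for frame in frames:
        prob = backproject(frame, locus)
        ys, xs = np.nonzero(prob > threshold)
        if len(xs) == 0:
            centroids.append(prev)      # no skin pixels: keep last position
            continue
        c = np.array([xs.mean(), ys.mean()])
        if prev is not None and np.linalg.norm(c - prev) > max_jump:
            centroids.append(prev)      # implausible jump: reject the cluster
            continue
        prev = c
        centroids.append(c)
    return centroids
```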

For this activity, the 305 class thanks Kirby Cheng for willingly volunteering himself to be our victim. I personally thank Irene Crisologo and Thirdy Buno for the discussions and Ma'am Jing for suggesting ways to improve the algorithm.

Friday, October 2, 2009

Activity 19: Restoration of Blurred Images

Have you ever taken a picture where your subject is blurred because of its apparent motion while the image was being taken? This kind of blurring is termed motion blur. Although it may be unwanted in some instances, motion blur is a favorite style in photography. Here are some motion-blurred images I've found around the web.



images are taken from: http://www.smashingmagazine.com/2008/08/24/45-beautiful-motion-blur-photos/

Motion blur is often caused by a long exposure time of the camera. Exposure time is the duration for which the camera's shutter is open. In this case, the object moved within the exposure time, causing the blur in the captured image.

For this activity, we restored motion-blurred images. In doing this, we used degradation modeling to blur a given image. Here, we consider the case wherein the image, represented by f(x,y), has been blurred by linear movement between the object and the image acquisition device. Assuming that the opening and closing of the shutter happen instantaneously, the blurred image is given by

g(x,y) = \int_0^T f\big(x - x_0(t),\, y - y_0(t)\big)\, dt    (Equation 1)

where x_0(t) and y_0(t) represent the time-varying components of motion in the x and y directions, and T is the exposure time.

Consequently, the Fourier transform of the blurred image is given by

G(u,v) = H(u,v)\, F(u,v)    (Equation 2)

where G(u,v) and F(u,v) are the blurred and original images in frequency space, while H(u,v) is the degradation function in Fourier space, given by the following equation.

H(u,v) = \frac{T}{\pi(ua + vb)} \sin\!\big[\pi(ua + vb)\big]\, e^{-j\pi(ua + vb)}    (Equation 3)

where a and b are the total distances by which the image has been displaced in the x and y directions, respectively.
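As an illustration, Equation 3 can be evaluated on a centered frequency grid as in the Python sketch below (a sketch only; the grid convention and the function name are my own assumptions):

```python
import numpy as np

def motion_blur_tf(shape, a=0.1, b=0.1, T=1.0):
    """Degradation function H(u, v) of Equation 3 on a centered frequency grid."""
    M, N = shape
    u = np.arange(M) - M / 2.0          # centered frequency coordinates
    v = np.arange(N) - N / 2.0
    U, V = np.meshgrid(u, v, indexing="ij")
    s = np.pi * (U * a + V * b)
    s = np.where(s == 0, 1e-12, s)      # sin(s)/s -> 1 at the origin, so H -> T
    return (T / s) * np.sin(s) * np.exp(-1j * s)
```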

After blurring the image, Gaussian noise was also added to it. Thus, G(u,v) can now be expressed as
G(u,v) = H(u,v)\, F(u,v) + N(u,v)    (Equation 4)

where N(u,v) is the added noise in frequency space.

Now, to recover the original image from its blurred counterpart, we used minimum mean square error or Wiener filtering. In this method, the image and the added noise are treated as random processes. The estimate of the original image in frequency space is then given by


\hat{F}(u,v) = \left[\frac{1}{H(u,v)}\, \frac{|H(u,v)|^2}{|H(u,v)|^2 + S_n(u,v)/S_f(u,v)}\right] G(u,v)    (Equation 5)

where S_n(u,v) and S_f(u,v) are the power spectra of the noise and of the original image, both in frequency space. However, in reality, the power spectrum of the undegraded (original) image is unknown. Thus, the equation for the restored image can be written as

\hat{F}(u,v) = \left[\frac{1}{H(u,v)}\, \frac{|H(u,v)|^2}{|H(u,v)|^2 + K}\right] G(u,v)    (Equation 6)
where K is a constant.
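To make Equations 2, 4, and 6 concrete, here is a Python sketch of the degradation and restoration steps (again an illustration, not the exact code used; the noise level is an assumed parameter). Note that the bracketed factor in Equation 6 simplifies, since (1/H)|H|^2 = conj(H), to conj(H)/(|H|^2 + K):

```python
import numpy as np

def degrade(f, H, noise_sigma=0.01):
    """Blur image f with transfer function H (Eq. 2), add Gaussian noise (Eq. 4)."""
    F = np.fft.fftshift(np.fft.fft2(f))
    g = np.real(np.fft.ifft2(np.fft.ifftshift(H * F)))
    return g + np.random.normal(0.0, noise_sigma, f.shape)

def wiener_restore(g, H, K=0.01):
    """Parametric Wiener filter of Equation 6,
    using the simplification (1/H)|H|^2 = conj(H)."""
    G = np.fft.fftshift(np.fft.fft2(g))
    F_hat = np.conj(H) / (np.abs(H) ** 2 + K) * G
    return np.real(np.fft.ifft2(np.fft.ifftshift(F_hat)))
```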

Using Equations 2 and 3, a blurred image of the text below was simulated with a and b equal to 0.1 and T = 1. This blurred image was then used to recover the original image using Equations 5 and 6. These images are shown below.

Original image



Using Equation 6, the original image was recovered for varying values of K. The values I used here are 5, 3, 0.01 and 0.00001.
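Assuming the sketched functions above and a grayscale text image f already loaded as a float array, the experiment can be reproduced along these lines:

```python
# f: grayscale text image as a float array (assumed already loaded)
H = motion_blur_tf(f.shape, a=0.1, b=0.1, T=1.0)
g = degrade(f, H)
# smaller K weighs the inverse filter more heavily: sharper, but noisier
restored = {K: wiener_restore(g, H, K=K) for K in [5, 3, 0.01, 0.00001]}
```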


As can be seen from the figure above, as K decreases, the recovered image looks more like the original. The values of a and b (the x and y displacements) were also varied. The recovered images are shown below. The first column of the succeeding sets of images contains the blurred images, the second column the recovered images using Equation 6, and the third column the recovered images using Equation 5.

a & b = 0.9
a & b = 0.01
a & b = 0.001

As can be observed in the images above, as the values of a and b decrease, the recovered images look more like the original image. This can be seen especially in the last set of images, where a and b are equal to 0.001. This behaviour is expected since a and b are the displacements of the object in the x and y directions. Small values of a and b mean that the object moved only a small distance from its original position while its image was being captured. Thus, as the values of a and b increase, the generated blurred image becomes more blurred.

The value of the exposure time, T, was also varied. The recovered images are shown below. The first column of the succeeding four sets of images contains the blurred images, the second column the recovered images using Equation 6, and the third column the recovered images using Equation 5.

T = 0.001
T = 0.01
T = 3
T = 5

As can be observed from the four sets of images above, as the value of T increases, the recovered images look more like the original one. Also, observe that the generated blurred images for small values of T are dark. If we recall the definition of T, it is the duration of the exposure, i.e., the time that the shutter of the imaging device stays open. For small values of T, only a small amount of light enters the imaging system, hence the dark blurred images.

A possible extension of this activity is the recovery of the unblurred version of an actual blurred image. However, to be able to do this, one must have knowledge of the degradation function that produced the blur.

For this activity I thank Irene Crisologo, Thirdy Buno, Jaya Combinido and Miguel Sison for the helpful discussions.

I give myself a grade of 10 for I was able to do all the required tasks for this activity.


References:
[1] Activity 19: Restoration of Blurred Images Manual