I took an interesting class in Computational Photography during my last year of college. Here is a sample of some of my project work. While none of these projects were my own ideas, some of the results were pretty cool. Projects were completed using MATLAB. Click here for project details and explanations of each algorithm. 




Hybrid images are images that appear different depending on viewing distance. Up close, high frequencies are more prominent; from farther away, low frequencies dominate. Therefore, if we combine the high-frequency content of one photo with the low-frequency content of another, the resulting image takes on a different meaning when viewed from different distances. Above, I've created a hybrid image from two similarly aligned photos, one of me and one of my cousins' pet dogs, Daisy. Up close, Daisy is very prominent. However, if you move away from the screen or shrink the photo, the dominant image changes.
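The idea above can be sketched in a few lines. The project itself was done in MATLAB, but here is a minimal Python version, assuming grayscale float images of equal size; `sigma` (the Gaussian cutoff) is a hypothetical parameter that would be tuned per image pair.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid_image(near_img, far_img, sigma=5.0):
    """Combine the high frequencies of `near_img` (what you see up close)
    with the low frequencies of `far_img` (what dominates from afar)."""
    low = gaussian_filter(far_img, sigma)                 # low-pass: coarse structure
    high = near_img - gaussian_filter(near_img, sigma)    # high-pass: fine detail
    return low + high
```

A sanity check on the decomposition: hybridizing an image with itself returns the original image, since the low-pass and high-pass parts sum back to the input.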




Gradient domain fusion allows us to blend a specified part of one image into another while preserving the overall image gradient. The blended photograph looks much more realistic when the image gradient is consistent, as in the photo on the right above. If we instead try to preserve the original colors or intensities, as on the left above (where a set of pixels has been copied directly onto the background image), the result is not very realistic.
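Gradient domain fusion is usually posed as a Poisson problem: inside the pasted region, solve for pixels whose Laplacian matches the source's, with boundary values taken from the target. The project used MATLAB; below is a Python sketch using simple Jacobi iteration, assuming grayscale float images of equal shape and a boolean `mask` that does not touch the image border (the `np.roll` neighbors wrap around at the edges).

```python
import numpy as np

def poisson_blend(source, target, mask, iters=500):
    """Blend `source` into `target` inside `mask` by matching
    the source's Laplacian there (Jacobi iteration)."""
    # Laplacian of the source: sum of 4 neighbors minus 4x center.
    lap = (np.roll(source, 1, 0) + np.roll(source, -1, 0) +
           np.roll(source, 1, 1) + np.roll(source, -1, 1) - 4.0 * source)
    result = target.copy()
    for _ in range(iters):
        neighbors = (np.roll(result, 1, 0) + np.roll(result, -1, 0) +
                     np.roll(result, 1, 1) + np.roll(result, -1, 1))
        # Solve 4*r - neighbors = -lap for each masked pixel.
        update = (neighbors - lap) / 4.0
        result = np.where(mask, update, target)
    return result
```

In practice a sparse direct solver converges far faster than Jacobi iteration, but the fixed point is the same: if source and target agree, the blend leaves the target unchanged.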




Experimenting with image projection in video, interest-point matching, and homographies, I processed every frame of a video to generate a few interesting results. This project was quite mathematically dense and computationally expensive. Above is an example of some of the processing work. On the left is my original video of the Illini Union area. The center video shows only the foreground pixels, i.e. only the objects that moved during the video. The right video consists of only the background, with most moving objects effectively removed.
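The write-up above doesn't spell out the separation method, but one common approach, sketched here in Python rather than the MATLAB the project used, is a per-pixel temporal median: across many frames, each pixel's median value estimates the static background, and pixels far from it are flagged as foreground. (The actual project also involved aligning frames via homographies before this step; `thresh` is an assumed intensity threshold.)

```python
import numpy as np

def separate_background(frames, thresh=30.0):
    """Split a video into background and per-frame foreground masks.
    `frames` is a (T, H, W) float array of grayscale frames."""
    background = np.median(frames, axis=0)        # static scene estimate
    fg_masks = np.abs(frames - background) > thresh  # pixels that moved
    return background, fg_masks
```

The median is robust here: as long as any given pixel is unoccluded in more than half the frames, transient objects do not pull the background estimate toward themselves.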