Contents
- Introduction
- Basic Algorithm
- STEP 1. Radial Distortion Correction
- STEP 2. Distinctive Feature Extraction
- STEP 3. Pairwise Alignment
- STEP 4. Global Alignment and Stitching
- STEP 5. Spherical Coordinates Warping and Final Blending
- Experimental Results
Introduction
A panorama is a wide-angle view of a physical space captured in a photograph.
This project is a solution for Assignment 2, Automatic Panoramic Mosaic Stitching, of the course Image Processing and Computer Vision (ENGG 5104, CUHK, 2015 Spring). In this project, I implement an algorithm that aligns several photos and stitches them into a seamless panorama.
Basic Algorithm
- Correct radial distortion in each image;
- Extract distinctive features for each image;
- Align input images pairwise;
- Automatically stitch images, i.e. compute the transformation matrix that maps each image into the final panorama;
- Warp the images and stitch them into the final panorama.
STEP 1. Radial Distortion Correction
(Related file: WarpSpherical.cpp; Related routine: WarpRDField.)
Photos captured by cameras may exhibit radial distortion. The simplified model of radial distortion is

    x'_n = x_n (1 + k1 r_n^2 + k2 r_n^4)
    y'_n = y_n (1 + k1 r_n^2 + k2 r_n^4)

where x_n and y_n are normalized image coordinates, i.e.

    x_n = (x - c_x) / f,  y_n = (y - c_y) / f,  r_n^2 = x_n^2 + y_n^2

with (c_x, c_y) the image center and f the focal length in pixels. The distorted image coordinates are

    x_d = f x'_n + c_x,  y_d = f y'_n + c_y

Apply the inverse mapping of this model to obtain undistorted images, as sketched below.
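The following is a minimal sketch of that inverse mapping, not the project's WarpRDField routine; the calibration parameters f, c_x, c_y, k1, k2 are assumed to be known. For every pixel of the undistorted output, the forward distortion model gives the location to sample from in the distorted input.

```cpp
#include <cstddef>
#include <vector>

struct MapEntry { float srcX, srcY; };   // where to sample in the distorted input

// Builds a per-pixel inverse map: for every pixel of the undistorted output,
// apply the forward distortion model to find the source location to sample.
std::vector<MapEntry> buildRDInverseMap(int width, int height,
                                        float f, float cx, float cy,
                                        float k1, float k2)
{
    std::vector<MapEntry> map(static_cast<std::size_t>(width) * height);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            // Normalized coordinates of the (ideal) undistorted pixel.
            float xn = (x - cx) / f;
            float yn = (y - cy) / f;
            float r2 = xn * xn + yn * yn;
            float s  = 1.0f + k1 * r2 + k2 * r2 * r2;
            // Distorted coordinates, converted back to pixel units.
            map[static_cast<std::size_t>(y) * width + x] =
                { f * xn * s + cx, f * yn * s + cy };
        }
    }
    return map;   // bilinearly sample the source image at (srcX, srcY)
}
```

The undistorted image is then produced by sampling the input image at each mapped location (e.g. with bilinear interpolation).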
STEP 2. Distinctive Feature Extraction
SIFT is adopted here; refer to SIFT for details.
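For illustration only, the sketch below shows what this step produces (keypoint locations plus 128-dimensional descriptors per image), using OpenCV's SIFT (OpenCV >= 4.4) as a stand-in; it is not the project's code, and the file name input0.jpg is hypothetical.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <vector>

int main()
{
    // Hypothetical input image name.
    cv::Mat img = cv::imread("input0.jpg", cv::IMREAD_GRAYSCALE);

    cv::Ptr<cv::SIFT> sift = cv::SIFT::create();
    std::vector<cv::KeyPoint> keypoints;
    cv::Mat descriptors;   // one 128-float row per keypoint
    sift->detectAndCompute(img, cv::noArray(), keypoints, descriptors);

    return 0;
}
```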
STEP 3. Pairwise Alignment
(Related files: FeatureMatch.cpp, FeatureAlign.cpp; Related routines: FeatureMatchByDistRatio, alignPair, countInliers, and leastSquaresFit.)
This part computes the transform from image i to image j for each pair (i, j).
First, find matching feature points between image i and image j. The initial matching is decided using Euclidean distance: for each feature descriptor in image i, compute its Euclidean distance to every feature descriptor in image j. If the ratio between the distance to the nearest neighbor and the distance to the second nearest neighbor is below some threshold r, the feature point is regarded as distinctive and the match is kept; otherwise the feature point is not used. Note that distinctiveness is critical for images with repeated patterns such as textures.
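A minimal sketch of this distance-ratio test follows; it is not the project's FeatureMatchByDistRatio, and the descriptor layout (a plain std::vector<float> per feature) and the default ratio 0.7 are assumptions.

```cpp
#include <cmath>
#include <cstddef>
#include <limits>
#include <utility>
#include <vector>

using Desc = std::vector<float>;   // one descriptor per feature point

static float sqDist(const Desc &a, const Desc &b)
{
    float d = 0.0f;
    for (std::size_t k = 0; k < a.size(); ++k) {
        float diff = a[k] - b[k];
        d += diff * diff;
    }
    return d;
}

// Returns index pairs (i, j): feature i of image I matched to feature j of image J.
std::vector<std::pair<int, int>> matchByDistRatio(const std::vector<Desc> &descI,
                                                  const std::vector<Desc> &descJ,
                                                  float ratio = 0.7f)
{
    std::vector<std::pair<int, int>> matches;
    for (int i = 0; i < static_cast<int>(descI.size()); ++i) {
        float best = std::numeric_limits<float>::max();
        float second = best;
        int bestJ = -1;
        for (int j = 0; j < static_cast<int>(descJ.size()); ++j) {
            float d = sqDist(descI[i], descJ[j]);
            if (d < best)        { second = best; best = d; bestJ = j; }
            else if (d < second) { second = d; }
        }
        // Keep the match only if the nearest neighbor is clearly closer
        // than the second nearest (the distinctiveness test).
        if (bestJ >= 0 && second < std::numeric_limits<float>::max() &&
            std::sqrt(best) < ratio * std::sqrt(second))
            matches.emplace_back(i, bestJ);
    }
    return matches;
}
```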
Then the RANSAC algorithm is used to find the transform. RANSAC is iterative: in each iteration it randomly selects two pairs of matching points and computes a transformation matrix from them. With this transformation matrix, it counts how many of the matching points fit the transform model within a given tolerance, and records the inliers and the inlier count for that iteration. After a given number of iterations, the iteration with the largest number of inliers is picked, and its transform model and corresponding inliers are accepted. Finally, the full homography is estimated from these inliers.
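Below is a bare-bones sketch of such a RANSAC loop, in the spirit of alignPair and countInliers but not the project's code. For brevity it hypothesizes a pure translation from a single sampled match; the actual implementation samples match pairs, fits the full transform, and refines it on the accepted inliers (cf. leastSquaresFit).

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

struct Pt { float x, y; };
struct Match { Pt p, q; };   // p: point in image i, q: matching point in image j

std::vector<int> ransacTranslation(const std::vector<Match> &m,
                                   int nIters, float tol,
                                   float &bestTx, float &bestTy)
{
    std::vector<int> bestInliers;
    if (m.empty()) return bestInliers;

    for (int it = 0; it < nIters; ++it) {
        // 1. Draw a random minimal sample and hypothesize a model.
        const Match &s = m[std::rand() % m.size()];
        float tx = s.q.x - s.p.x, ty = s.q.y - s.p.y;

        // 2. Count how many matches agree with the model within the tolerance.
        std::vector<int> inliers;
        for (int k = 0; k < static_cast<int>(m.size()); ++k) {
            float dx = m[k].p.x + tx - m[k].q.x;
            float dy = m[k].p.y + ty - m[k].q.y;
            if (std::sqrt(dx * dx + dy * dy) < tol) inliers.push_back(k);
        }

        // 3. Keep the hypothesis with the most inliers seen so far.
        if (inliers.size() > bestInliers.size()) {
            bestInliers = inliers;
            bestTx = tx;
            bestTy = ty;
        }
    }
    return bestInliers;   // re-fit the final model on these inliers
}
```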
Here is a YouTube video that illustrates RANSAC vividly:
STEP 4. Global Alignment and Stitching
(Related file: GlobalAlign.cpp; Related routine: initGlobalAlign.)
This part deals with transforming all the images onto a common plane.
First, only the good matches between images are kept. The matches are then sorted by their strength (i.e. the number of inliers). After that, images are added to the current stitch one at a time according to the next strongest match (together with other relevant matches), and global alignment is performed after each addition; see the sketch below.
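The following is a rough sketch of that greedy ordering, not initGlobalAlign itself; the PairMatch structure is illustrative, and the actual transform composition and re-alignment after each addition are omitted.

```cpp
#include <algorithm>
#include <set>
#include <vector>

struct PairMatch { int i, j; int numInliers; };

std::vector<int> greedyAddOrder(int numImages, std::vector<PairMatch> matches)
{
    // Strongest matches (most inliers) first.
    std::sort(matches.begin(), matches.end(),
              [](const PairMatch &a, const PairMatch &b) {
                  return a.numInliers > b.numInliers;
              });

    std::set<int> placed;
    std::vector<int> order;
    if (matches.empty()) return order;

    // Seed the stitch with the strongest pair.
    placed.insert(matches[0].i);
    placed.insert(matches[0].j);
    order.push_back(matches[0].i);
    order.push_back(matches[0].j);

    // Repeatedly attach the image linked by the strongest remaining match;
    // after each addition the real pipeline re-runs global alignment.
    while (static_cast<int>(placed.size()) < numImages) {
        bool added = false;
        for (const PairMatch &m : matches) {
            bool hasI = placed.count(m.i) > 0;
            bool hasJ = placed.count(m.j) > 0;
            if (hasI != hasJ) {                  // exactly one end already placed
                int fresh = hasI ? m.j : m.i;
                placed.insert(fresh);
                order.push_back(fresh);
                added = true;
                break;
            }
        }
        if (!added) break;                       // remaining images are disconnected
    }
    return order;
}
```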
STEP 5. Spherical Coordinates Warping and Final Blending
(Related files: WarpSpherical.cpp, BlendImages.cpp; Related routines: WarpSphericalField, BlendImages, AccumulateBlend, NormalizeBlend.)
From the camera's point of view, a large number of images can be considered as being taken from the center of a sphere, each looking out in a different orientation. This part performs blending using the inverse mapping of the spherical coordinate warp: for each pixel (x_sph, y_sph) in the panorama, the corresponding location in each input image is found, and the contributions are accumulated and normalized. The computation that maps a panorama pixel back to planar image coordinates is sketched below.
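This is a minimal sketch of the inverse spherical warp, in the spirit of WarpSphericalField but not the project's code; (cx, cy) denotes the image center and f the focal length in pixels, both assumed known.

```cpp
#include <cmath>

struct PixelCoord { float x, y; };

// Maps a panorama pixel (xsph, ysph) back to coordinates in the planar
// input image: panorama pixel -> sphere angles -> 3D ray -> pinhole projection.
PixelCoord sphericalToPlanar(float xsph, float ysph,
                             float f, float cx, float cy)
{
    // Angles on the sphere corresponding to the panorama pixel.
    float theta = (xsph - cx) / f;
    float phi   = (ysph - cy) / f;

    // Point on the unit sphere seen along that viewing direction.
    float X = std::sin(theta) * std::cos(phi);
    float Y = std::sin(phi);
    float Z = std::cos(theta) * std::cos(phi);

    // Project the ray onto the planar image (pinhole model).
    return { f * X / Z + cx, f * Y / Z + cy };
}
```

Each input image's pixels are accumulated at their warped panorama positions with blending weights and then normalized, which is the role of AccumulateBlend and NormalizeBlend.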
Experimental Results
The input images are:
After executing my algorithm, the output image is: