Recently, work on improving the naturalness of stitched images has attracted increasing attention. Previous methods suffer from severe projective distortion and unnatural rotation, especially when many images are involved or the images cover a very wide field of view. In this paper, we propose a novel natural image stitching method that takes the guidance of vanishing points into account to tackle these failures. Inspired by the observation that mutually orthogonal vanishing points in a Manhattan world provide useful orientation clues, we design a scheme to effectively estimate a similarity prior for each image. Feeding the estimated priors into a popular mesh deformation framework as global similarity constraints, we achieve impressive natural stitching results. Compared with existing methods, including APAP, SPHP, AANAP, and GSP, our method achieves state-of-the-art performance in both quantitative and qualitative experiments on natural image stitching.
In this paper, we propose to take the vanishing point (VP) as an effective global constraint and develop a novel similarity prior estimation method for natural image stitching. We focus on estimating the 2D rotation θ of each image, and exploit the VP guidance through two of its advantages: (1) the orientation clues from VPs allow us to estimate initial 2D rotations for the input images; (2) the global consistency of VPs in a Manhattan world enables a novel scheme that estimates the prior robustly. The determined similarity prior is then fed into a mesh deformation framework as global similarity constraints, stitching multiple images into a panorama with a natural look.
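To make the VP guidance concrete, the following is a minimal sketch of how orientation clues can be recovered from a pair of mutually orthogonal Manhattan VPs. It is not the paper's implementation: the function names, the use of a known intrinsic matrix `K`, and the Gram-Schmidt orthogonalization step are illustrative assumptions. The idea is that each detected VP back-projects to a 3D direction, and the Manhattan axes expressed in the camera frame encode the camera's orientation, from which per-image rotation priors can be derived.

```python
import numpy as np

def vp_direction(vp, K):
    """Back-project a homogeneous 2D vanishing point to a unit 3D direction.

    Assumes a known intrinsic matrix K (illustrative assumption).
    """
    d = np.linalg.inv(K) @ np.asarray(vp, dtype=float)
    return d / np.linalg.norm(d)

def rotation_prior(vp_x, vp_y, K):
    """Estimate a camera orientation from two orthogonal Manhattan VPs.

    The columns of the returned matrix are the Manhattan axes expressed
    in the camera frame; aligning these axes across images yields
    relative rotation clues for the similarity prior.
    """
    dx = vp_direction(vp_x, K)
    dy = vp_direction(vp_y, K)
    # Detected VPs are noisy, so enforce orthogonality via Gram-Schmidt,
    # then complete a right-handed frame with the cross product.
    dy = dy - dx * (dx @ dy)
    dy /= np.linalg.norm(dy)
    dz = np.cross(dx, dy)
    return np.stack([dx, dy, dz], axis=1)
```

The returned matrix is a valid rotation (orthonormal, determinant +1), so the in-plane rotation of each image can be read off from it and used to initialize the similarity transform.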
In summary, we make three main contributions in this paper:
Figure 1. The flowchart of the proposed rotation estimation scheme.
Figure 1 presents the flowchart of the algorithm. More details are described in our paper, which has been submitted to IEEE Transactions on Image Processing.
Overall, we tested the proposed stitching method on two datasets: the VPG dataset and the GSP dataset.
VPG is a Manhattan dataset collected by ourselves. As shown in Figure 2, it consists of 36 sets of images covering both typical street-view scenes and indoor scenes. In Section 3.1, we use the VPG dataset to systematically compare our method with existing state-of-the-art methods. GSP is a general dataset provided by Chen et al. and contains 42 sets of images. Since the Manhattan assumption is not necessarily satisfied in the GSP dataset, in Section 3.2 we use it to evaluate the stitching performance of the proposed method on general cases.
We adopt two quantitative metrics to evaluate panorama naturalness: LD and GDIC. The smaller the values of these two indexes, the higher the naturalness of the panorama. More details about these two metrics can be found in our paper.
| No. | (LD↓, GDIC↓) | Stitching Results |
|---|---|---|
| No.08, outdoor, 24 images | AANAP: (5.17, 9.49) | |
| No.09, outdoor, 72 images | AANAP: (5.95, 25.06) | |
| No.03, indoor, 16 images | AANAP: (1.20, 2.33) | |
| No.05, indoor, 36 images | AANAP: (1.47, 7.24) | |
| No.28, outdoor, 12 images | | |
| No.33, outdoor, 5 images | | |
| No.16, indoor, 20 images | | |
| No.22, indoor, 10 images | | |
Figure 3 gives the VP divergence distributions of the VPG and GSP datasets. Note the distribution difference between the two datasets, which indicates that the Manhattan assumption is not necessarily satisfied in the GSP dataset. Experimenting on GSP is therefore a much more general case, as it simulates the practical situation in which no Manhattan prior is available during stitching. The results can thus demonstrate the effectiveness of the proposed degeneration scheme.
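To illustrate how a degeneration check of this kind can work, here is a toy sketch of a divergence measure over per-image VP directions. The exact definition of ε and the decision threshold used in the paper are not reproduced here; this measure (one minus the mean resultant length of the direction vectors) and the threshold value are purely illustrative assumptions.

```python
import numpy as np

def vp_divergence(directions, threshold=0.12):
    """Toy VP divergence over per-image unit directions for one Manhattan axis.

    Returns (divergence, degeneration_flag). Divergence is 1 minus the
    norm of the mean direction: 0 when all images agree perfectly, and
    larger as the directions scatter. The threshold is illustrative only,
    NOT the value used in the paper.
    """
    D = np.asarray(directions, dtype=float)
    D /= np.linalg.norm(D, axis=1, keepdims=True)  # normalize each direction
    eps = 1.0 - np.linalg.norm(D.mean(axis=0))
    return eps, eps > threshold
```

When the flag fires, the Manhattan assumption is deemed unreliable for that image set, and a method with such a check would fall back to a degeneration scheme rather than trust the VP-based prior.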
| VP Divergence | Stitching Results |
|---|---|
| 35 images, ε = 0.044 | |
| 5 images, ε = 0.048 | |
| 15 images, ε = 0.161 (degeneration occurs) | |
| VP Divergence | VP Divergence | VP Divergence | VP Divergence |
|---|---|---|---|
| 2 images, ε = 0.0204 | 4 images, ε = 0.0019 | 5 images, ε = 0.0068 | 6 images, ε = 0.0005 |
| 11 images, ε = 0.0740 | 3 images, ε = 0.0002 | 5 images, ε = 0.0114 | 21 images, ε = 0.0781 |
| 10 images, ε = 0.1917 (degeneration occurs) | 5 images, ε = 0.1291 (degeneration occurs) | 7 images, ε = 0.1464 (degeneration occurs) | 15 images, ε = 0.2022 (degeneration occurs) |
Apart from naturalness, alignment accuracy is another common concern in image stitching. In this part, we show that the naturalness improvement provided by VPG is compatible with other high-accuracy stitching frameworks.
To demonstrate this property, we conduct experiments on two advanced stitching frameworks: the Dual-Feature Warping (DFW) framework and the Generalized Content-Preserving Warping (GCPW) framework. The coordination between naturalness and alignment accuracy can be observed in Table 5.
| No. | (GDIC↓, MSE↓) | Stitching Results |
|---|---|---|