Vanishing Point Guided Natural Image Stitching

Kai Chen, Jian Yao*, Jingmin Tu, Yahui Liu, Yinxuan Li and Li Li

School of Remote Sensing and Information Engineering, Wuhan University, Wuhan, Hubei, P.R.China

*Email: chenkai@whu.edu.cn, jian.yao@whu.edu.cn | Web: http://cvrs.whu.edu.cn


1. Abstract

Recently, improving the naturalness of stitched images has attracted increasing attention. Previous methods suffer from severe projective distortion and unnatural rotation, especially when the number of involved images is large or the images cover a very wide field of view. In this paper, we propose a novel natural image stitching method that exploits the guidance of vanishing points to tackle these failures. Inspired by the observation that the mutually orthogonal vanishing points of a Manhattan world provide useful orientation clues, we design a scheme to effectively estimate a prior of image similarity. The estimated prior is then fed into a popular mesh deformation framework as a global similarity constraint to achieve natural stitching. Compared with existing methods, including APAP, SPHP, AANAP, and GSP, our method achieves state-of-the-art performance in both quantitative and qualitative experiments on natural image stitching.

2. Approach

In this paper, we propose to take the vanishing point (VP) as an effective global constraint and develop a novel similarity prior estimation method for natural image stitching. We focus on the problem of estimating the prior rotation angle θ of each image, and exploit the VP guidance through two of its advantages: (1) the orientation clues from VPs are used to estimate initial 2D rotations for the input images; (2) the global consistency of VPs in a Manhattan world supports a novel scheme that estimates the prior robustly. The determined similarity prior is then fed into a mesh deformation framework as a global similarity constraint to stitch multiple images into a panorama with a natural look.
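As a concrete illustration of advantage (1), the sketch below shows one plausible way to turn a detected horizontal vanishing point into an initial 2D rotation angle. It is a minimal sketch under our own assumptions (pinhole camera, principal point at the image center), not the exact formulation of the paper:

```python
import numpy as np

def initial_rotation_from_vp(vp, principal_point):
    """Estimate an initial 2D rotation angle for one image.

    Minimal sketch (not the paper's exact formulation): the direction
    from the principal point to a horizontal vanishing point
    approximates the scene's horizontal axis in the image, so rotating
    by the negative of its angle levels the image.
    """
    dx = vp[0] - principal_point[0]   # vp, principal_point: (x, y) in pixels
    dy = vp[1] - principal_point[1]
    theta = np.arctan2(dy, dx)        # angle of the horizontal scene axis
    return -theta                     # rotation that aligns it with the x-axis

# Hypothetical usage: take this angle as the similarity-prior rotation
# theta_i of image i in the mesh deformation.
# theta_i = initial_rotation_from_vp(vp_i, (width / 2.0, height / 2.0))
```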

In summary, we make three main contributions in this paper:

  • A robust scheme to determine the image similarity prior from the VP clues of the scene, based on which a novel natural stitching method named VPG is developed to significantly improve the naturalness of output panoramas. Comparisons with SOTA methods are given in Section 3.1.

  • A degeneration mechanism that allows the proposed VPG to be safely applied to general scenes. When the scene satisfies the Manhattan assumption, VPG produces a more natural panorama than other methods; otherwise, it automatically falls back to a standard stitching scheme, so the output keeps a relatively natural look unaffected by unreliable VP guidance (see the sketch after this list). Stitching results for general cases are provided in Section 3.2.

  • Abundant analyses of the proposed VPG algorithm. The results further reveal two additional good properties: first, VPG is not influenced by the choice of reference image; second, it is compatible with other high-alignment-accuracy stitching frameworks, so good naturalness and high alignment accuracy can be achieved together. Panoramas with both properties are presented in Section 3.3.
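The following is a minimal sketch of the fall-back decision mentioned in the second contribution above. The VP-divergence threshold TAU is our own assumption (Tables 3 and 4 below suggest that degeneration occurs for ε roughly above 0.1); the function and argument names are hypothetical:

```python
# Hypothetical VP-divergence threshold; the exact criterion used by VPG
# is defined in the paper.
TAU = 0.1

def choose_similarity_prior(vp_divergence, vp_prior, fallback_prior):
    """Use the VP-guided prior only when the scene looks Manhattan."""
    if vp_divergence < TAU:
        return vp_prior        # Manhattan assumption holds: trust the VPs
    return fallback_prior      # degenerate to a standard stitching prior
```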


Figure 1. The flowchart of the proposed rotation estimation scheme.

Figure 1 presents the flowchart of the algorithm. More details are described in our paper, which has been submitted to IEEE Transactions on Image Processing.

3. Experimental Results

Overall, we tested the proposed stitching method on two datasets: the VPG dataset and the GSP dataset.

VPG is a Manhattan dataset collected by ourselves. As shown in Figure 2, it consists of 36 sets of images, covering both typical street-view scenes and indoor scenes. In Section 3.1, we use the VPG dataset to systematically compare our method with existing SOTA methods. GSP is a general dataset provided by Chen et al. [2]; it contains 42 sets of images. Since the Manhattan assumption is not necessarily satisfied in the GSP dataset, in Section 3.2 we use it to evaluate the stitching performance of the proposed method in general cases.

Figure 2. The VPG dataset. Sets 01-06 and 13-24 are indoor cases; sets 07-12 and 25-36 are outdoor street-view cases.

3.1 Comparisons with SOTA Methods on VPG Dataset

Two quantitative evaluation metrics for panorama naturalness:

  • Local-Distortion Index (LD): a local measurement that evaluates the projective distortion of non-overlapping image regions by analyzing local homographies.
  • Global-Direction-Inconsistency Index (GDIC): a global measurement that quantifies unnatural rotation artifacts with the aid of camera orientation parameters.

The smaller the values of these two indexes, the more natural the panorama. More details about both metrics can be found in our paper.
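Since the exact definitions are given in the paper, the following is only a hedged sketch of a GDIC-style score: the wrapped angular deviation between the 2D rotations applied by the stitcher and reference rotations derived from camera orientation parameters, averaged over all images.

```python
import numpy as np

def gdic_like(estimated_angles, reference_angles):
    """Mean absolute angular deviation (in degrees) between the rotations
    the stitcher applied and rotations derived from known camera
    orientations. Smaller means fewer unnatural-rotation artifacts."""
    est = np.asarray(estimated_angles, dtype=float)
    ref = np.asarray(reference_angles, dtype=float)
    diff = np.angle(np.exp(1j * (est - ref)))  # wrap differences to (-pi, pi]
    return float(np.degrees(np.mean(np.abs(diff))))
```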

VPG vs. AANAP (CVPR 2015) and GSP (ECCV 2016)

Table 1. Comparative results (LD↓, GDIC↓) on 4 synthetic image sets (stitching-result thumbnails not reproduced here).

No.     Scene     Images   AANAP           GSP-2D          GSP-3D          VPG
No.08   outdoor   24       (5.17, 9.49)    (3.00, 2.70)    (2.06, 2.05)    (2.69, 0.50)
No.09   outdoor   72       (5.95, 25.06)   (2.68, 4.26)    (2.69, 0.86)    (2.50, 0.52)
No.03   indoor    16       (1.20, 2.33)    (1.82, 1.66)    (1.59, 3.41)    (1.57, 0.77)
No.05   indoor    36       (1.47, 7.24)    (1.85, 6.20)    (1.84, 1.83)    (1.68, 0.55)


Table 2. Comparative results on 4 real image sets (qualitative; stitching-result thumbnails not reproduced here).

No.     Scene     Images
No.28   outdoor   12
No.33   outdoor   5
No.16   indoor    20
No.22   indoor    10

3.2 Performance Evaluation on GSP Dataset

Figure 3 gives the VP divergence distributions of the VPG and GSP datasets. Note the difference between the two distributions, which indicates that the Manhattan assumption is not necessarily satisfied in the GSP dataset. Experiments on GSP therefore cover the more general case, simulating practical applications in which no Manhattan prior is known during stitching, so the results demonstrate the effectiveness of the proposed degeneration scheme.

Figure 3. Left: typical VP divergence distributions of Manhattan and non-Manhattan scenes. Right: the associated VP divergence distributions of the VPG dataset and the GSP dataset.
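The paper defines its own VP divergence ε; the sketch below shows one plausible orthogonality-based formulation for a single image's Manhattan VP triplet, under our own assumptions:

```python
import numpy as np

def vp_divergence(vp_dirs):
    """Hedged sketch of a VP-divergence measure: the mean absolute cosine
    between the three pairwise Manhattan vanishing directions. Exactly
    orthogonal triplets give 0; larger values suggest the Manhattan
    assumption is violated and degeneration should be triggered.

    vp_dirs -- 3x3 array, one unit 3D vanishing direction per row
    """
    d = np.asarray(vp_dirs, dtype=float)
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    pairs = [(0, 1), (0, 2), (1, 2)]
    return float(np.mean([abs(np.dot(d[i], d[j])) for i, j in pairs]))
```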

Table 3. Comparative results on the GSP dataset (stitching-result thumbnails not reproduced here).

Image set    VP divergence ε    Note
35 images    0.044
5 images     0.048
15 images    0.161              degeneration occurs


Table 4. More stitching results on the GSP dataset (stitching-result thumbnails not reproduced here).

Image set    VP divergence ε    Note
2 images     0.0204
4 images     0.0019
5 images     0.0068
6 images     0.0005
11 images    0.0740
3 images     0.0002
5 images     0.0114
21 images    0.0781
10 images    0.1917             degeneration occurs
5 images     0.1291             degeneration occurs
7 images     0.1464             degeneration occurs
15 images    0.2022             degeneration occurs

3.3 Scalability for High Alignment Accuracy

Apart from naturalness, alignment accuracy is another common concern in stitching. In this part, we show that the naturalness improvement provided by VPG is compatible with high-accuracy stitching frameworks.

To demonstrate this property, we conduct experiments on two advanced stitching frameworks: the Dual-Feature Warping (DFW) framework [3] and the Generalized Content-Preserving Warping (GCPW) framework [4]. The coordination between naturalness and alignment accuracy can be observed in Table 5.
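The sketch below illustrates, under our own assumptions, how the VPG similarity prior can be attached to a host framework's objective as one extra quadratic term; `E_align`, `similarity_residual`, and the weight `lambda_s` are hypothetical placeholders, not the actual DFW or GCPW interfaces:

```python
import numpy as np

def total_energy(V, E_align, similarity_residual, lambda_s=1.0):
    """Host alignment energy plus a quadratic VPG similarity term.

    V                   -- flattened mesh vertex coordinates
    E_align             -- callable: the host framework's alignment energy
    similarity_residual -- callable: residuals of V against the
                           VPG-estimated per-image similarity
                           (rotation theta_i, scale s_i)
    """
    r = np.asarray(similarity_residual(V), dtype=float)
    return float(E_align(V)) + lambda_s * float(np.dot(r, r))
```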

Table 5. Comparative results (GDIC↓, MSE↓) after using different stitching frameworks. Note that VPG+DFW and VPG+GCPW achieve both better naturalness and higher alignment accuracy than GSP-3D.

No.     GSP-3D          VPG             VPG+DFW         VPG+GCPW
No.01   (4.54, 11.87)   (0.73, 11.97)   (1.00, 10.89)   (1.09, 10.19)
No.13   (-, 6.50)       (-, 6.15)       (-, 5.93)       (-, 5.49)


4. Dataset

5. References

1. Chung-Ching Lin and Sharathchandra U. Pankanti: Adaptive As-Natural-As-Possible Image Stitching. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
2. Yu-Sheng Chen and Yung-Yu Chuang: Natural Image Stitching with the Global Similarity Prior. In: Proceedings of the European Conference on Computer Vision (ECCV), 2016.
3. Shiwei Li, Lu Yuan, Jian Sun, and Long Quan: Dual-Feature Warping-Based Motion Model Estimation. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015.
4. Kai Chen, Jingmin Tu, Jian Yao, and Jie Li: Natural Image Stitching with the Global Similarity Prior. IEEE Access, 2018.