Acta Geodaetica et Cartographica Sinica ›› 2019, Vol. 48 ›› Issue (6): 727-736.doi: 10.11947/j.AGCS.2019.20180432

• Photogrammetry and Remote Sensing •

A template matching method of multimodal remote sensing images based on deep convolutional feature representation

NAN Ke, QI Hua, YE Yuanxin   

  1. Faculty of Geosciences and Environmental Engineering, Southwest Jiaotong University, Chengdu 611756, China
  • Received:2018-09-14 Revised:2019-03-20 Online:2019-06-20 Published:2019-07-09
  • Supported by:
    The Science and Technology Program of Sichuan Province(No. 2017SZ0027)

Abstract: Due to significant non-linear radiometric differences between multimodal remote sensing images (e.g., optical, infrared, and SAR), traditional methods cannot effectively extract common features between such images and thus perform poorly in image matching. To address this, deep learning is introduced in the present study to design a matching method based on a Siamese network, which aims to extract common features between multimodal images. The network is first optimized by removing the pooling layers and extracting the feature layer from the Siamese network, so as to maintain the integrity and positional accuracy of the feature information and enable the effective extraction of common features between multimodal images. A template matching strategy is then adopted to achieve high-precision matching of multimodal images. The proposed method is evaluated on multiple multimodal remote sensing image pairs. The results show that it outperforms traditional template-matching methods in both matching correct ratio and matching accuracy.
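To illustrate the template matching strategy described above, the following is a minimal NumPy sketch of the matching step only: it assumes the dense feature maps of the template and search windows have already been produced by the two Siamese branches (the network itself is not reproduced here), and it locates the template by sliding it over the search feature map with a normalized cross-correlation score. All function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def match_template(search_feat, templ_feat):
    """Slide a template feature map over a search feature map and
    return the best-matching offset plus the full similarity map.

    search_feat: (H, W, C) deep features of the search image
    templ_feat:  (h, w, C) deep features of the template, h<=H, w<=W
    Similarity is zero-mean normalized cross-correlation (NCC),
    so an exact copy of the template scores 1.0.
    """
    H, W, _ = search_feat.shape
    h, w, _ = templ_feat.shape
    t = templ_feat - templ_feat.mean()
    t_norm = np.linalg.norm(t)
    sim = np.full((H - h + 1, W - w + 1), -np.inf)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            patch = search_feat[y:y + h, x:x + w, :]
            p = patch - patch.mean()
            denom = np.linalg.norm(p) * t_norm
            sim[y, x] = (p * t).sum() / denom if denom > 0 else 0.0
    # Best offset = argmax of the similarity surface
    iy, ix = np.unravel_index(np.argmax(sim), sim.shape)
    return (iy, ix), sim
```

In the paper's pipeline this sliding-window search would run over the network's feature layer rather than raw pixels; because the pooling layers are removed, each position in the feature map keeps a direct correspondence to an image location, which is what makes the returned offset usable as a sub-window match position.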

Key words: multimodal image, image matching, deep learning, Siamese network
