Acta Geodaetica et Cartographica Sinica ›› 2020, Vol. 49 ›› Issue (5): 611-621. doi: 10.11947/j.AGCS.2020.20190274

• Photogrammetry and Remote Sensing •

Feature-representation-transfer based road extraction method for cross-domain aerial images

WANG Shuyang1, MU Xiaodong1, HE Hao2, YANG Dongfang2, MA Chenhui1   

  1. College of Operational Support, Rocket Force University of Engineering, Xi'an 710025, Shaanxi, China;
    2. College of Missile Engineering, Rocket Force University of Engineering, Xi'an 710025, Shaanxi, China
  • Received: 2019-06-28; Revised: 2019-12-23; Published: 2020-05-23
  • Corresponding author: HE Hao, E-mail: hehao209@126.com
  • About the first author: WANG Shuyang (1991-), female, PhD candidate; research interests: remote sensing image processing and computer vision. E-mail: yelvlanshu@163.com
  • Supported by:
    The National Natural Science Foundation of China (Nos. 61403398, 61673017); The General Project of the Natural Science Foundation of Shaanxi Province (No. 2017JM6077)

Feature-representation-transfer based road extraction method for cross-domain aerial images

WANG Shuyang1, MU Xiaodong1, HE Hao2, YANG Dongfang2, MA Chenhui1   

  1. The Rocket Force University of Engineering, College of Operational Support, Xi'an 710025, China;
    2. The Rocket Force University of Engineering, College of Missile Engineering, Xi'an 710025, China
  • Received:2019-06-28 Revised:2019-12-23 Published:2020-05-23
  • Supported by:
    The National Natural Science Foundation of China (Nos. 61403398, 61673017); The General Project of the Natural Science Foundation of Shaanxi Province (No. 2017JM6077)

Abstract: To address the insufficient generalization ability of traditional road extraction methods on new data, this paper studies a cross-domain road extraction method realized by feature representation transfer and an encoder-decoder network. First, a basic road extraction model built on an encoder-decoder network is constructed for the road extraction task on a single data source. Then, based on the structure of the road extraction network and the cycle-consistency principle, a cycle generative adversarial network for cross-domain image feature transfer is proposed, which maps target-domain images into the feature space of the source domain. The pre-trained road extraction model is then applied to the feature-transferred target-domain images, so that cross-domain road extraction is achieved. Experimental results show that the proposed method extends the generalization ability of the road extraction network and extracts road targets from cross-domain images accurately and effectively. Compared with the results without feature transfer, the proposed method greatly improves the road extraction metrics, raising F1 by more than 50%. The method requires neither annotations of the target domain nor fine-tuning of the road extraction network; only the feature transfer model from the target domain to the source domain needs to be trained, so the time and labor costs are low and the method has good application value.
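The abstract states only that the feature transfer network follows the cycle-consistency principle; its exact training objective is not given here. As a point of reference, a standard cycle-consistency formulation (CycleGAN-style, with hypothetical notation G: target domain T → source domain S and F: S → T; the paper's actual weighting and network structure may differ) is:

    \mathcal{L}_{\text{cyc}}(G, F) = \mathbb{E}_{x \sim p_T}\big[\lVert F(G(x)) - x \rVert_1\big] + \mathbb{E}_{y \sim p_S}\big[\lVert G(F(y)) - y \rVert_1\big]

    \mathcal{L} = \mathcal{L}_{\text{GAN}}(G, D_S) + \mathcal{L}_{\text{GAN}}(F, D_T) + \lambda\, \mathcal{L}_{\text{cyc}}(G, F)

where D_S and D_T are the discriminators of the source and target domains and λ weights the cycle-consistency term; only G is needed at inference time to map target-domain images into the source feature space.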

Key words: road extraction, remote sensing, transfer learning, deep learning, generative adversarial network, encoder-decoder network

Abstract: Aiming at the insufficient generalization ability of traditional road extraction methods when applied to a new dataset, this paper proposes a cross-domain road extraction method realized by feature representation transfer and an encoder-decoder network. Firstly, a basic road extraction model based on an encoder-decoder network is designed to segment roads from a single data source. Then, based on the structure of the road extraction network and the cycle-consistency principle, a cycle generative adversarial network for feature transfer of cross-domain imagery is proposed, which maps the features of target-domain images into the feature space of the source domain. Finally, the pre-trained road extraction model is applied to the target-domain images after feature transfer, so that cross-domain road extraction is realized. The experimental results show that the proposed method improves the generalization ability of the road extraction network and can extract road targets from cross-domain images accurately and effectively. Compared with the results without feature transfer, the proposed method greatly improves the road extraction metrics and increases the F1-score by more than 50%. The proposed method requires no annotation of the target-domain images and no fine-tuning of the road extraction network; it only needs to train the feature transfer model from the target domain to the source domain, so its time and labor costs are low and it has good application value.
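For concreteness, the inference pipeline summarized above (translate a target-domain image into the source feature space with the trained generator, then apply the pre-trained road extraction network without any fine-tuning) could look like the following minimal PyTorch sketch. All class names, layer choices, and the toy models are hypothetical stand-ins for illustration, not the authors' implementation.

    # Minimal sketch of cross-domain road extraction: feature transfer + pre-trained segmenter.
    import torch
    import torch.nn as nn

    class TargetToSourceGenerator(nn.Module):
        """Placeholder generator G that maps a target-domain image into the source domain."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
            )
        def forward(self, x):
            return self.net(x)

    class RoadSegNet(nn.Module):
        """Placeholder encoder-decoder that outputs a per-pixel road probability map."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2))
            self.decoder = nn.Sequential(
                nn.Upsample(scale_factor=2), nn.Conv2d(64, 1, 3, padding=1))
        def forward(self, x):
            return torch.sigmoid(self.decoder(self.encoder(x)))

    @torch.no_grad()
    def extract_roads(target_image, generator, road_model, threshold=0.5):
        """Map the target-domain image into the source feature space, then segment roads
        with the road extraction model pre-trained on the source domain only."""
        translated = generator(target_image)      # target domain -> source feature space
        probability = road_model(translated)      # pre-trained encoder-decoder, no fine-tuning
        return (probability > threshold).float()  # binary road mask

    if __name__ == "__main__":
        generator, road_model = TargetToSourceGenerator().eval(), RoadSegNet().eval()
        dummy = torch.rand(1, 3, 256, 256)        # stand-in for a target-domain aerial tile
        mask = extract_roads(dummy, generator, road_model)
        print(mask.shape)                         # torch.Size([1, 1, 256, 256])

In practice the generator and road model would be loaded from trained checkpoints; only the generator needs to be trained for a new target city, which is what keeps the annotation and fine-tuning cost low.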

Key words: road extraction, remote sensing, transfer learning, deep learning, generative adversarial network, encoder-decoder network
