ZHAO Xuejun, BAO Zhuangzhuang, WU Huaxing, et al. Remote Sensing Images Extracting Roads Based on Conditional Random Field and U-net[J]. Journal of Detection & Control, 2020, 42(03): 24.

Remote Sensing Images Extracting Roads Based on Conditional Random Field and U-net

Journal of Detection & Control [ISSN: 1008-1194 / CN: 61-1316/TJ]

Volume:
42
Issue:
2020, No. 03
Pages:
24
Publication Date:
2020-06-15

Article Info

Title:
Remote Sensing Images Extracting Roads Based on Conditional Random Field and U-net
Article ID:
1008-1194(2020)03-0024-07
Author(s):
ZHAO Xuejun1, BAO Zhuangzhuang1, WU Huaxing2, DONG Yuhao1, LI Zhiwei1
1. Department of Basic Sciences, Air Force Engineering University, Xi'an, Shaanxi 710051, China; 2. Equipment Management and Unmanned Aerial Vehicle Engineering College, Air Force Engineering University, Xi'an, Shaanxi 710051, China
Keywords:
road extraction; deep learning; semantic segmentation; conditional random field
CLC Number:
P237
Document Code:
A
Abstract:
To address the problems that current methods for extracting roads from remote sensing image datasets segment object details poorly and cannot cleanly resolve two-way lanes and overpasses, a road extraction method based on conditional random fields and a U-shaped network is proposed. Because roads in remote sensing images have rich local detail but simple semantic features, the method adjusts the number of filters and uses atrous (dilated) convolution to enlarge the receptive field, improving the network's feature-extraction capability. Furthermore, because road targets occupy only a small fraction of a remote sensing image and are easily eroded by the background, the Lovasz-Softmax loss function is adopted and a conditional random field is appended at the back end, further exploiting global context information to refine the segmentation results. Experimental results show that the method achieves a recall of 82.8%, a precision of 80.9%, and an F1-score of 81.8%, with an IoU of 85% and a running speed of 10 FPS, which basically meets the requirements of video segmentation and allows deployment on edge devices.

Memo:
Received: 2019-12-05
Funding: Supported by the National Natural Science Foundation of China (61472443)
Biography: ZHAO Xuejun (1972-), male, from Hanzhong, Shaanxi, China; Ph.D., associate professor; research interests: military simulation theory and technology, pattern recognition. E-mail: 292457155@qq.com
Last Update: 2020-07-15