1
Adegun AA, Fonou Dombeu JV, Viriri S, Odindi J. State-of-the-Art Deep Learning Methods for Objects Detection in Remote Sensing Satellite Images. SENSORS (BASEL, SWITZERLAND) 2023; 23:5849. [PMID: 37447699] [DOI: 10.3390/s23135849]
Abstract
Introduction: Object detection in remotely sensed satellite images is critical to socio-economic, bio-physical, and environmental monitoring, which is necessary for the prevention of natural disasters such as flooding and fires, for socio-economic service delivery, and for general urban and rural planning and management. Although deep learning approaches have recently gained popularity in remotely sensed image analysis, they have struggled to detect image objects efficiently due to complex landscape heterogeneity, high inter-class similarity and intra-class diversity, and the difficulty of acquiring suitable training data that represents these complexities, among other factors. Methods: To address these challenges, this study applied multi-object detection deep learning algorithms with a transfer learning approach to remotely sensed satellite imagery captured over a heterogeneous landscape. A new dataset of diverse features with five object classes, collected from Google Earth Engine at various locations in southern KwaZulu-Natal province in South Africa, was used to evaluate the models. The dataset images contain objects of varying sizes and resolutions. Five object detection methods based on R-CNN and YOLO architectures were investigated through experiments on this newly created dataset. Conclusions: This paper provides a comprehensive performance evaluation and analysis of recent deep learning-based object detection methods for detecting objects in high-resolution remote sensing satellite images. The models were also evaluated on two publicly available datasets: VisDrone and PASCAL VOC2007. Results showed that the highest detection accuracy for the vegetation and swimming pool classes exceeded 90%, and the fastest detection speed, 0.2 ms, was achieved by YOLOv8.
Affiliation(s)
- Adekanmi Adeyinka Adegun
- School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban 4041, South Africa
- Jean Vincent Fonou Dombeu
- School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Pietermaritzburg 3209, South Africa
- Serestina Viriri
- School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban 4041, South Africa
- John Odindi
- School of Agricultural, Earth and Environmental Sciences, University of KwaZulu-Natal, Pietermaritzburg 3209, South Africa
2
Wang J, Cui Z, Jiang T, Cao C, Cao Z. Lightweight Deep Neural Networks for Ship Target Detection in SAR Imagery. IEEE TRANSACTIONS ON IMAGE PROCESSING 2022; PP:565-579. [PMID: 37015502] [DOI: 10.1109/tip.2022.3231126]
Abstract
In recent years, deep convolutional neural networks (DCNNs) have been widely used for ship target detection in synthetic aperture radar (SAR) imagery. However, the vast storage and computational cost of DCNNs limits their application on spaceborne or airborne onboard devices with limited resources. In this paper, a set of lightweight detection networks for SAR ship target detection is proposed. To obtain these lightweight networks, this paper designs a network structure optimization algorithm based on the multi-objective firefly algorithm (termed NOFA). In our design, the NOFA algorithm encodes the filters of a well-performing ship target detection network into a list of probabilities, which determine whether the lightweight network inherits the corresponding filter structure and parameters. The multi-objective firefly optimization algorithm (MFA) then continuously optimizes the probability list and finally outputs a set of lightweight network encodings that can meet different trade-offs between detection network precision and size. Finally, network pruning transforms the encoding that meets the task requirements into a lightweight ship target detection network. Experiments on the SSDD and SDCD datasets prove that the proposed method provides more flexible and lighter detection networks than traditional detection networks.
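The probability-list encoding described above can be illustrated with a minimal sketch. The function names, the 0.5 keep-threshold, and the per-filter parameter counts are our own illustrative assumptions, not the paper's implementation:

```python
def decode_pruning_mask(probabilities, threshold=0.5):
    """Turn a per-filter probability list into a keep/drop mask: the
    lightweight network inherits filter i only if probabilities[i]
    reaches the threshold (0.5 is an assumed cutoff)."""
    return [p >= threshold for p in probabilities]

def pruned_size(filter_params, mask):
    """Parameter count of the pruned network, given per-filter parameter
    counts and the keep/drop mask."""
    return sum(n for n, keep in zip(filter_params, mask) if keep)
```

An optimizer such as MFA would then search over the probability lists, trading the pruned parameter count against detection precision.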
3
Yasir M, Jianhua W, Mingming X, Hui S, Zhe Z, Shanwei L, Colak ATI, Hossain MS. Ship detection based on deep learning using SAR imagery: a systematic literature review. SOFT COMPUTING 2022. [DOI: 10.1007/s00500-022-07522-w]
4
A Real-Time Ship Detector via a Common Camera. JOURNAL OF MARINE SCIENCE AND ENGINEERING 2022. [DOI: 10.3390/jmse10081043]
Abstract
Advanced radars and satellites are suitable for remote monitoring but fail to meet the economic requirements of short-range detection. Compared with such long-range technologies, common visible-light sensors offer richer features that help distinguish object classes. Ship detection based on visible-light cameras should therefore complement remote detection technologies. However, unlike detectors applied to inland transportation, fast ship detectors that can detect multiple ship classes are notably lacking. To fill this gap, we propose a real-time ship detector based on fast U-Net and remapping attention (FRSD) via a common camera. The proposed fast U-Net compresses features in the channel dimension to decrease the number of training parameters. The introduced remapping attention boosts performance in various rain-fog weather conditions while maintaining real-time speed. The proposed ship dataset contains more than 20,000 samples, alleviating the lack of ship datasets covering various classes. Cross-background data augmentation is proposed to further increase the diversity of detection backgrounds. In addition, the proposed rain-fog dataset, containing more than 500 rain-fog images, simulates various marine rain-fog scenarios and is applied to the test images to validate the robustness of ship detectors. Experiments demonstrate that FRSD performs robustly and detects 9 classes with an mAP of more than 83%, reaching a state-of-the-art level.
5
Small Ship Detection Based on Hybrid Anchor Structure and Feature Super-Resolution. REMOTE SENSING 2022. [DOI: 10.3390/rs14153530]
Abstract
Small ships in remote sensing images have blurred details and are difficult to detect. Existing algorithms usually detect small ships using predefined anchors of different sizes. However, limited by the number of available sizes, anchor-based methods struggle to match small ships of different sizes and structures during training, which easily causes misdetections. In this paper, we propose a hybrid anchor structure to generate region proposals for small ships, taking advantage of both anchor-based methods, with their high localization accuracy, and anchor-free methods, with their fewer misdetections. To unify output evaluation and obtain the best output, a label reassignment strategy is proposed, which reassigns sample labels according to the harmonic intersection-over-union (IoU) computed before and after regression. In addition, an adaptive feature pyramid structure is proposed to enhance the features at important locations of the feature map, making the features of small ship targets more prominent and easier to identify. Moreover, feature super-resolution is introduced for the region-of-interest (RoI) features of small ships to generate super-resolution feature representations at small computational cost, together with generative adversarial training to improve the realism of the super-resolution features. Ship proposals are then classified and regressed using these super-resolution features to obtain more accurate detection results. Detailed ablation and comparison experiments demonstrate the effectiveness of the proposed method.
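The harmonic-IoU label reassignment can be sketched as follows. The paper's exact formulation is not reproduced here, so the helper names and the 0.5 positive threshold are illustrative assumptions:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def harmonic_iou(iou_before, iou_after, eps=1e-6):
    """Harmonic mean of the IoUs before and after box regression."""
    return 2.0 * iou_before * iou_after / (iou_before + iou_after + eps)

def reassign_labels(proposals, gt, pos_thresh=0.5):
    """Relabel each (pre-regression, post-regression) box pair as
    positive (1) or negative (0) by its harmonic IoU against the
    ground-truth box."""
    labels = []
    for before, after in proposals:
        h = harmonic_iou(iou(before, gt), iou(after, gt))
        labels.append(1 if h >= pos_thresh else 0)
    return labels
```

The harmonic mean is small unless both IoUs are high, so a proposal is kept as positive only when it matches the ground truth both before and after regression.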
6
Ship Detection in SAR Images Based on Multi-Scale Feature Extraction and Adaptive Feature Fusion. REMOTE SENSING 2022. [DOI: 10.3390/rs14030755]
Abstract
Deep learning has attracted increasing attention across a number of disciplines in recent years. In the field of remote sensing, ship detection based on deep learning for synthetic aperture radar (SAR) imagery is replacing traditional methods as the mainstream research approach. The multiple scales of ship objects make the detection of ship targets in SAR images a challenging task. This paper proposes a new methodology for better detection of multi-scale ship objects in SAR images, based on YOLOv5 with a small model size (YOLOv5s), namely the multi-scale ship detection network (MSSDNet). We construct two modules in MSSDNet: the CSPMRes2 (Cross Stage Partial network with Modified Res2Net) module for improving feature representation capability, and the FC-FPN (Feature Pyramid Network with Fusion Coefficients) module for fusing feature maps adaptively. First, the CSPMRes2 module introduces a modified Res2Net (MRes2) with a coordinate attention module (CAM) for multi-scale feature extraction in the scale dimension; the CSPMRes2 module is then used as a basic module in the depth dimension of the MSSDNet backbone. Thus, the MSSDNet backbone can extract features in both the depth and scale dimensions. In the FC-FPN module, we set a learnable fusion coefficient for each feature map participating in fusion, which helps the FC-FPN module choose the best features to fuse for multi-scale object detection tasks. After feature fusion, the output is passed through a CSPMRes2 module for better feature representation. Performance evaluation is conducted using an RTX 2080 Ti GPU on two datasets, SSDD and SARShip. Experiments on these datasets confirm that MSSDNet achieves superior multi-scale ship detection compared with state-of-the-art methods. Moreover, in comparisons of model size and inference time, MSSDNet also has clear advantages over related methods.
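One simple way a learnable per-map fusion coefficient can work is to softmax-normalize the coefficients and take a weighted sum of same-shape feature maps. A minimal NumPy sketch follows; the softmax normalization is our own assumption, and in a real network the coefficients would be trained by backpropagation:

```python
import numpy as np

def fuse_features(feature_maps, coefficients):
    """Fuse same-shape feature maps with per-map coefficients that are
    softmax-normalised so the fusion weights sum to 1."""
    w = np.exp(coefficients - np.max(coefficients))  # stable softmax
    w = w / w.sum()
    return sum(wi * f for wi, f in zip(w, feature_maps))
```

With equal coefficients this degenerates to a plain average; training would push the weights toward the scales most useful for the task.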
7
SII-Net: Spatial Information Integration Network for Small Target Detection in SAR Images. REMOTE SENSING 2022. [DOI: 10.3390/rs14030442]
Abstract
Ship detection based on synthetic aperture radar (SAR) images has made breakthroughs in recent years. However, small ships, which may be mistaken for speckle noise, pose enormous challenges to accurate detection in SAR images. To enhance the detection performance for small ships in SAR images, a novel detection method named the spatial information integration network (SII-Net) is proposed in this paper. First, a channel-location attention mechanism (CLAM) module, which extracts position information along two spatial directions, is proposed to enhance the detection ability of the backbone network. Second, a high-level features enhancement module (HLEM) is customized to reduce the loss of small target location information in high-level features by using multiple pooling layers. Third, in the feature fusion stage, a refined branch is presented to distinguish the location information between the target and the surrounding region by highlighting the feature representation of the target. The public datasets LS-SSDD-v1.0, SSDD, and SAR-Ship-Dataset are used for ship detection tests. Extensive experiments show that SII-Net outperforms state-of-the-art small target detectors and achieves the highest detection accuracy, especially when the target size is smaller than 30 by 30 pixels.
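A module that extracts position information along two spatial directions can be sketched with coordinate-attention-style pooling. The exact CLAM design is not reproduced here, so this NumPy fragment is only an illustrative stand-in:

```python
import numpy as np

def directional_pools(x):
    """Average-pool a (C, H, W) feature map along each spatial direction:
    one descriptor per row (pooled over W) and one per column (pooled
    over H), preserving position along the other axis."""
    h_pool = x.mean(axis=2)  # (C, H)
    w_pool = x.mean(axis=1)  # (C, W)
    return h_pool, w_pool

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def apply_location_attention(x):
    """Re-weight the map by the outer product of the two directional
    descriptors squashed to (0, 1) -- a minimal, untrained stand-in
    for a channel-location attention module."""
    h_pool, w_pool = directional_pools(x)
    attn = sigmoid(h_pool[:, :, None]) * sigmoid(w_pool[:, None, :])
    return x * attn
```

Pooling along one direction at a time keeps positional information along the other, which is what lets such modules localize small targets better than global pooling.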
8
Abstract
Most existing SAR moving target shadow detectors not only tend to miss detections because of their limited feature extraction capacity in complex scenes, but also tend to produce numerous false alarms due to their poor foreground-background discrimination capacity. To solve these problems, this paper proposes a novel deep learning network called "ShadowDeNet" for better shadow detection of moving ground targets in video synthetic aperture radar (SAR) images. It relies on five major tools to guarantee its superior detection performance: (1) histogram equalization shadow enhancement (HESE) for enhancing shadow saliency to facilitate feature extraction; (2) a transformer self-attention mechanism (TSAM) for focusing on regions of interest to suppress clutter interference; (3) shape deformation adaptive learning (SDAL) for learning the deformed shadows of moving targets to handle motion speed variations; (4) semantic-guided anchor-adaptive learning (SGAAL) for generating optimized anchors that match shadow location and shape; and (5) online hard-example mining (OHEM) for selecting typical difficult negative samples to improve background discrimination capacity. We conduct extensive ablation studies to confirm the effectiveness of each of the above contributions. We perform experiments on the public Sandia National Laboratories (SNL) video SAR data. Experimental results show the state-of-the-art performance of ShadowDeNet, with a best f1 accuracy of 66.01%, in contrast to five other competitive methods. Specifically, ShadowDeNet surpasses the experimental baseline Faster R-CNN by 9.00% f1 accuracy and the previously best model by 4.96% f1 accuracy. Furthermore, ShadowDeNet sacrifices only a slight amount of detection speed, within an acceptable range.
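Histogram equalization, the classical operation behind the HESE step, can be sketched for an 8-bit grayscale image. This is a generic implementation that assumes a non-constant image, not the paper's exact preprocessing:

```python
import numpy as np

def histogram_equalize(img):
    """Classical histogram equalisation of an 8-bit grayscale image:
    build the cumulative histogram, map it to the full 0..255 range,
    and apply the resulting lookup table."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()  # first non-zero CDF value
    n = img.size
    lut = np.clip(np.round((cdf - cdf_min) / (n - cdf_min) * 255), 0, 255)
    return lut.astype(np.uint8)[img]
```

Stretching the intensity distribution this way increases the contrast between dark target shadows and their surroundings, which is why it helps the downstream feature extractor.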
9
BiFA-YOLO: A Novel YOLO-Based Method for Arbitrary-Oriented Ship Detection in High-Resolution SAR Images. REMOTE SENSING 2021. [DOI: 10.3390/rs13214209]
Abstract
Due to its great application value in the military and civilian fields, ship detection in synthetic aperture radar (SAR) images has always attracted much attention. However, ship targets in high-resolution (HR) SAR images exhibit multi-scale, arbitrarily oriented, and densely arranged characteristics, posing enormous challenges to detecting ships quickly and accurately. To address these issues, a novel YOLO-based arbitrary-oriented SAR ship detector using bi-directional feature fusion and angular classification (BiFA-YOLO) is proposed in this article. First, a novel bi-directional feature fusion module (Bi-DFFM) tailored to SAR ship detection is applied to the YOLO framework. This module efficiently aggregates multi-scale features through bi-directional (top-down and bottom-up) information interaction, which helps in detecting multi-scale ships. Second, to effectively detect arbitrarily oriented and densely arranged ships in HR SAR images, we add an angular classification structure to the head network. This structure accurately obtains ships' angle information without the boundary discontinuity problem or complicated parameter regression. Meanwhile, in BiFA-YOLO, a random rotation mosaic data augmentation method is employed to suppress the impact of angle imbalance. Compared with other conventional data augmentation methods, the proposed method better improves the detection performance for arbitrarily oriented ships. Finally, we conduct extensive experiments on the SAR ship detection dataset (SSDD) and large-scene HR SAR images from the GF-3 satellite to verify our method. The proposed method reaches a detection performance of precision = 94.85%, recall = 93.97%, average precision = 93.90%, and F1-score = 0.9441 on SSDD. The detection speed of our method is approximately 13.3 ms per 512 × 512 image. In addition, comparison experiments with other deep learning-based methods and verification experiments on large-scene HR SAR images demonstrate that our method shows strong robustness and adaptability.
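The angular-classification idea, replacing continuous angle regression with discrete angle bins to avoid the boundary discontinuity at 0°/180°, can be sketched as follows; the 1-degree bin width is an illustrative assumption:

```python
def angle_to_class(theta_deg, num_bins=180):
    """Map a ship's orientation (degrees, period 180) to one of
    `num_bins` discrete classes -- 1-degree bins by default."""
    bin_width = 180.0 / num_bins
    return int((theta_deg % 180.0) / bin_width) % num_bins

def class_to_angle(cls, num_bins=180):
    """Recover the bin-centre angle from a predicted class index."""
    bin_width = 180.0 / num_bins
    return cls * bin_width + bin_width / 2.0
```

Because 179.9° and 0.1° map to adjacent or wrapped classes rather than to numerically distant regression targets, the loss no longer penalizes predictions that are geometrically almost correct.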
10
SAR Ship Detection Dataset (SSDD): Official Release and Comprehensive Data Analysis. REMOTE SENSING 2021. [DOI: 10.3390/rs13183690]
Abstract
The SAR Ship Detection Dataset (SSDD) is the first open dataset widely used to research state-of-the-art ship detection from synthetic aperture radar (SAR) imagery based on deep learning (DL). According to our investigation, up to 46.59% of the 161 public reports confidently select SSDD to study DL-based SAR ship detection. Undoubtedly, this reveals the popularity and great influence of SSDD in the SAR remote sensing community. Nevertheless, the coarse annotations and ambiguous usage standards of its initial version hinder fair methodological comparison and effective academic exchange. Additionally, its single-function horizontal-vertical rectangle bounding box (BBox) labels can no longer satisfy the current research needs of the rotatable bounding box (RBox) task and the pixel-level polygon segmentation task. Therefore, to address these two dilemmas, in this review, advocated by the publisher of SSDD, we make an official release of SSDD based on its initial version. SSDD's official release covers three types: (1) a bounding box SSDD (BBox-SSDD), (2) a rotatable bounding box SSDD (RBox-SSDD), and (3) a polygon segmentation SSDD (PSeg-SSDD). We relabel the ships in SSDD more carefully and finely, and then explicitly formulate some strict usage standards, e.g., (1) the training-test division, (2) the inshore-offshore protocol, (3) a reasonable ship-size definition, (4) the determination of densely distributed small ship samples, and (5) the determination of ship samples densely berthed in parallel at ports. These usage standards are formulated objectively based on the usage differences among the existing 75 (161 × 46.59%) public reports. They will be beneficial for fair method comparison and effective academic exchange in the future. Most notably, we conduct a comprehensive data analysis on BBox-SSDD, RBox-SSDD, and PSeg-SSDD. Our analysis results can provide valuable suggestions for future scholars to design DL-based SAR ship detectors with higher accuracy and stronger robustness when using SSDD.
11
Jiang Y, Li W, Liu L. R-CenterNet+: Anchor-Free Detector for Ship Detection in SAR Images. SENSORS (BASEL, SWITZERLAND) 2021; 21:5693. [PMID: 34502583] [PMCID: PMC8434279] [DOI: 10.3390/s21175693]
Abstract
In recent years, the rapid development of deep learning (DL) has provided new methods for ship detection in synthetic aperture radar (SAR) images. However, four challenges remain in this task. (1) Ship targets in SAR images are very sparse. A large number of unnecessary anchor boxes may be generated on the feature map when using traditional anchor-based detection models, which greatly increases the amount of computation and makes real-time detection difficult. (2) Ship targets in SAR images are relatively small. Most detection methods perform poorly on small ships in large scenes. (3) The terrestrial background in SAR images is very complicated. Ship targets are susceptible to interference from complex backgrounds, leading to serious false and missed detections. (4) Ship targets in SAR images are characterized by a large aspect ratio, arbitrary orientation, and dense arrangement. Traditional horizontal-box detection lets non-target areas interfere with the extraction of ship features, and it is difficult to accurately express the length, width, and axial information of ship targets. To solve these problems, we propose an effective lightweight anchor-free detector called R-CenterNet+ in this paper. Its features are as follows: the Convolutional Block Attention Module (CBAM) is introduced into the backbone network to improve the focus on small ships; the Foreground Enhance Module (FEM) introduces foreground information to reduce the interference of complex backgrounds; and a detection head that outputs a ship angle map is designed to realize rotated detection of ship targets. To verify the validity of the proposed model, experiments are performed on two public SAR image datasets, i.e., the SAR Ship Detection Dataset (SSDD) and AIR-SARShip. The results show that the proposed R-CenterNet+ detector detects both inshore and offshore ships with higher accuracy than traditional models, with an average precision of 95.11% on SSDD and 84.89% on AIR-SARShip, at a fast detection speed of 33 frames per second.
Affiliation(s)
- Lin Liu
- College of Geodesy and Geomatics, Shandong University of Science and Technology, Qingdao 266590, China; (Y.J.); (W.L.)
12
Effect of Attention Mechanism in Deep Learning-Based Remote Sensing Image Processing: A Systematic Literature Review. REMOTE SENSING 2021. [DOI: 10.3390/rs13152965]
Abstract
Machine learning, particularly deep learning (DL), has become a central and state-of-the-art method for several computer vision applications and remote sensing (RS) image processing. Researchers continually try to improve the performance of DL methods by developing new architectural designs and/or new techniques, such as attention mechanisms. Since the attention mechanism was proposed, regardless of its type, it has been increasingly used in diverse RS applications to improve the performance of existing DL methods. However, these methods are scattered over different studies, which impedes the selection and application of feasible approaches. This study provides an overview of the developed attention mechanisms and how to integrate them with different deep learning neural network architectures. In addition, it investigates the effect of the attention mechanism on deep learning-based RS image processing. We identified and analyzed the advances in the corresponding attention mechanism-based deep learning (At-DL) methods. A systematic literature review was performed to identify the trends in publications, publishers, improved DL methods, data types used, attention types used, and overall accuracies achieved using At-DL methods, and to extract the current research directions, weaknesses, and open problems, providing insights and recommendations for future studies. For this, five main research questions were formulated to extract the required data and information from the literature. Furthermore, we categorized the papers by the RS image processing task addressed (e.g., image classification, object detection, and change detection) and discussed the results within each group. In total, 270 papers were retrieved, of which 176 were selected according to the defined exclusion criteria for further analysis and detailed review. The results reveal that most papers reported an increase in overall accuracy when using the attention mechanism within DL methods for image classification, image segmentation, change detection, and object detection using remote sensing images.
13
Injection of Traditional Hand-Crafted Features into Modern CNN-Based Models for SAR Ship Classification: What, Why, Where, and How. REMOTE SENSING 2021. [DOI: 10.3390/rs13112091]
Abstract
With the rise of artificial intelligence, many advanced synthetic aperture radar (SAR) ship classifiers based on convolutional neural networks (CNNs) have achieved better accuracies than classifiers based on traditional hand-crafted features. However, most existing CNN-based models uncritically abandon traditional hand-crafted features and rely excessively on the abstract features of deep networks. This may be controversial, potentially creating challenges for further improving classification performance. Therefore, this paper preliminarily explores the possibility of injecting traditional hand-crafted features into modern CNN-based models to further improve SAR ship classification accuracy. Specifically, we (1) illustrate what this injection technique is, (2) explain why it is needed, (3) discuss where it should be applied, and (4) describe how it is implemented. Experimental results on two open datasets, the three-category OpenSARShip-1.0 and the seven-category FUSAR-Ship, indicate that injecting traditional hand-crafted features into CNN-based models effectively improves classification accuracy. Notably, the maximum accuracy improvement reaches 6.75%. Hence, we hold the view that traditional hand-crafted features should not be uncritically abandoned, because they can also play an important role in CNN-based models.
14
Zou L, Zhang H, Wang C, Wu F, Gu F. MW-ACGAN: Generating Multiscale High-Resolution SAR Images for Ship Detection. SENSORS 2020; 20:s20226673. [PMID: 33233434] [PMCID: PMC7700639] [DOI: 10.3390/s20226673]
Abstract
In high-resolution synthetic aperture radar (SAR) ship detection, the number of SAR samples seriously affects the performance of deep learning-based algorithms. In this paper, to meet the application requirements of high-resolution ship detection with small sample sets, a high-resolution SAR ship detection method combining an improved sample generation network, Multiscale Wasserstein Auxiliary Classifier Generative Adversarial Networks (MW-ACGAN), and the YOLOv3 network is proposed. First, the multi-scale Wasserstein distance and gradient penalty loss are used to improve the original Auxiliary Classifier Generative Adversarial Networks (ACGAN), so that the improved network can stably generate high-resolution SAR ship images. Second, a multi-scale loss term and multi-scale image output layers are added to the network, so that multi-scale SAR ship images can be generated. Then, the original ship data set and the generated data are combined into a composite data set to train the YOLOv3 detection network, in order to solve the problem of low detection accuracy on small sample data sets. Experimental results on Gaofen-3 (GF-3) 3 m SAR data show that the MW-ACGAN network can generate multi-scale and multi-class ship slices, whose ResNet18 confidence scores are higher than those of slices from the original ACGAN, with an average score of 0.91. The detection results show that a YOLOv3 model trained on the composite data set reaches a detection accuracy as high as 94%, far better than one trained only on the original SAR data set. These results show that our method makes the best use of the original data set and improves the accuracy of ship detection.
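The Wasserstein critic loss and gradient penalty mentioned above have standard forms, sketched here in NumPy given precomputed critic scores and gradient norms. In practice the gradient norms are obtained by automatic differentiation at points interpolated between real and generated images; the penalty weight of 10 is the conventional WGAN-GP default, not necessarily the paper's setting:

```python
import numpy as np

def wasserstein_critic_loss(real_scores, fake_scores):
    """Critic's Wasserstein loss: minimised when real samples score
    higher than generated ones."""
    return fake_scores.mean() - real_scores.mean()

def gradient_penalty(grad_norms, weight=10.0):
    """Gradient penalty term: pushes the norm of the critic's gradient
    (evaluated at real/fake interpolates) toward 1 to enforce the
    1-Lipschitz constraint."""
    return weight * np.mean((grad_norms - 1.0) ** 2)
```

The penalty replaces weight clipping in the original WGAN, which is what lets the critic train stably at high resolutions.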
Affiliation(s)
- Lichuan Zou
- Key Laboratory of Digital Earth Science, Aerospace information Research Institute, Chinese Academy of Sciences, Beijing 100094, China; (L.Z.); (C.W.); (F.W.); (F.G.)
- College of Resources and Environment, University of Chinese Academy of Sciences, Beijing 100049, China
- Hong Zhang
- Key Laboratory of Digital Earth Science, Aerospace information Research Institute, Chinese Academy of Sciences, Beijing 100094, China; (L.Z.); (C.W.); (F.W.); (F.G.)
- Correspondence: ; Tel.: +86-10-8217-8186
- Chao Wang
- Key Laboratory of Digital Earth Science, Aerospace information Research Institute, Chinese Academy of Sciences, Beijing 100094, China; (L.Z.); (C.W.); (F.W.); (F.G.)
- College of Resources and Environment, University of Chinese Academy of Sciences, Beijing 100049, China
- Fan Wu
- Key Laboratory of Digital Earth Science, Aerospace information Research Institute, Chinese Academy of Sciences, Beijing 100094, China; (L.Z.); (C.W.); (F.W.); (F.G.)
- Feng Gu
- Key Laboratory of Digital Earth Science, Aerospace information Research Institute, Chinese Academy of Sciences, Beijing 100094, China; (L.Z.); (C.W.); (F.W.); (F.G.)
- College of Resources and Environment, University of Chinese Academy of Sciences, Beijing 100049, China
15
LS-SSDD-v1.0: A Deep Learning Dataset Dedicated to Small Ship Detection from Large-Scale Sentinel-1 SAR Images. REMOTE SENSING 2020. [DOI: 10.3390/rs12182997]
Abstract
Ship detection in synthetic aperture radar (SAR) images is becoming a research hotspot. In recent years, with the rise of artificial intelligence, deep learning has come to dominate the SAR ship detection community for its higher accuracy, faster speed, and reduced human intervention, among other benefits. However, there is still a lack of a reliable deep learning SAR ship detection dataset that can support the practical transfer of ship detection to large-scene spaceborne SAR images. To solve this problem, this paper releases a Large-Scale SAR Ship Detection Dataset-v1.0 (LS-SSDD-v1.0) from Sentinel-1 for small ship detection against large-scale backgrounds. LS-SSDD-v1.0 contains 15 large-scale SAR images whose ground truths were correctly labeled by SAR experts with support from the Automatic Identification System (AIS) and Google Earth. To facilitate network training, the large-scale images are directly cut into 9000 sub-images without bells and whistles, which also provides convenience for presenting detection results on large-scale SAR images. Notably, LS-SSDD-v1.0 has five advantages: (1) large-scale backgrounds, (2) small ship detection, (3) abundant pure backgrounds, (4) a fully automatic detection flow, and (5) numerous and standardized research baselines. Last but not least, leveraging the abundant pure backgrounds, we also propose a Pure Background Hybrid Training mechanism (PBHT-mechanism) to suppress false alarms from land in large-scale SAR images. Ablation study results verify the effectiveness of the PBHT-mechanism. LS-SSDD-v1.0 can inspire related scholars to conduct extensive research into SAR ship detection methods with engineering application value, which is conducive to the progress of SAR intelligent interpretation technology.
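Cutting a large scene into fixed-size sub-images, as described above, can be sketched as follows; the 800 × 800 tile size and the zero-padding of edge tiles are our own illustrative assumptions, not necessarily the dataset's exact preprocessing:

```python
import numpy as np

def cut_into_tiles(image, tile=800):
    """Cut a large 2-D scene into non-overlapping tile x tile sub-images,
    zero-padding edge tiles to full size so every output has the same
    shape for network training."""
    h, w = image.shape
    tiles = []
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = np.zeros((tile, tile), dtype=image.dtype)
            sub = image[y:y + tile, x:x + tile]
            patch[:sub.shape[0], :sub.shape[1]] = sub
            tiles.append(patch)
    return tiles
```

Keeping the cut positions on a regular grid also makes it straightforward to map per-tile detections back onto the original large-scale scene.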