1
Abstract
Data analysis methods have scarcely kept pace with the rapid increase in Earth observations, spurring the development of novel algorithms, storage methods, and computational techniques. For scientists interested in Mars, the problem is always the same: there is simultaneously never enough of the right data and an overwhelming amount of data in total. Finding sufficient data needles in a haystack to test a hypothesis requires hours of manual data screening, and more needles and hay are added constantly. To date, the vast majority of Martian research has focused either on one-off local/regional studies or on hugely time-consuming manual global studies. Machine learning, in its numerous forms, can help with such future work: it has the potential to map and classify a large variety of both features and properties on the surface of Mars and to aid in the planning and execution of future missions. Here, we outline the current extent of machine learning as applied to Mars, summarize why machine learning should be an important tool for planetary geomorphology in particular, and suggest numerous research avenues and funding priorities for future efforts. We conclude that: (1) moving toward methods that require less human input (i.e., self- or semi-supervised) is an important paradigm shift for Martian applications; (2) new robust methods using generative adversarial networks to generate synthetic high-resolution digital terrain models represent an exciting new avenue for Martian geomorphologists; (3) more effort and money must be directed toward developing standardized datasets and benchmark tests; and (4) the community needs a large-scale, generalized, and programmatically accessible geographic information system (GIS).
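Conclusion (1), the shift toward methods that need less human labeling, can be illustrated with a minimal self-training (pseudo-labeling) loop. This is only a sketch of the general idea, not anything from the paper: the nearest-centroid classifier, the margin-based confidence score, and the threshold of 0.5 are all illustrative assumptions.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Compute one centroid per class from the labeled pixels."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def predict_with_confidence(X, classes, centroids):
    """Assign each pixel to its nearest centroid; confidence is the
    margin between the second-best and best centroid distances."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    order = np.argsort(d, axis=1)
    labels = classes[order[:, 0]]
    rows = np.arange(len(X))
    margin = d[rows, order[:, 1]] - d[rows, order[:, 0]]
    return labels, margin

def pseudo_label_round(X_lab, y_lab, X_unlab, margin_thresh=0.5):
    """One round of self-training: unlabeled pixels classified with a
    large margin are promoted into the labeled set."""
    classes, centroids = nearest_centroid_fit(X_lab, y_lab)
    labels, margin = predict_with_confidence(X_unlab, classes, centroids)
    keep = margin > margin_thresh
    X_new = np.concatenate([X_lab, X_unlab[keep]])
    y_new = np.concatenate([y_lab, labels[keep]])
    return X_new, y_new, keep
```

In practice the same loop would be run for several rounds with a stronger classifier, letting a small hand-labeled seed set grow without further human input.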
2
Impact of Image-Processing Routines on Mapping Glacier Surface Facies from Svalbard and the Himalayas Using Pixel-Based Methods. REMOTE SENSING 2022. [DOI: 10.3390/rs14061414]
Abstract
Glacier surface facies are valuable indicators of changes experienced by a glacial system. The interplay of accumulation and ablation facies, followed by intermixing with dust and debris, as well as the local climate, all induce observable and mappable changes on the supraglacial terrain. In the absence of, or gaps in, continuous field monitoring, remote sensing observations become vital for maintaining a constant supply of measurable data. However, satellite observations suffer from atmospheric effects, resolution disparity, and the use of a multitude of mapping methods. Efficient image-processing routines are hence necessary to prepare and test the derivable data for mapping applications. The existing literature provides an application-centric view of the selection of image-processing schemes. This can create confusion, as it is not clear which method of atmospheric correction would be ideal for retrieving facies spectral reflectance, nor have the effects of pansharpening on facies been examined. Moreover, with a variety of supervised classifiers and target detection methods now available, it is prudent to test the impact of variations in processing schemes on the resultant thematic classifications. In this context, the current study set its experimental goals. Using very-high-resolution (VHR) WorldView-2 data, we tested the effects of three common atmospheric correction methods, viz. Dark Object Subtraction (DOS), Quick Atmospheric Correction (QUAC), and Fast Line-of-Sight Atmospheric Analysis of Hypercubes (FLAASH), and two pansharpening methods, viz. Gram–Schmidt (GS) and Hyperspherical Color Sharpening (HCS), on the thematic classification of facies using 12 supervised classifiers. The conventional classifiers included: Mahalanobis Distance (MHD), Maximum Likelihood (MXL), Minimum Distance to Mean (MD), Spectral Angle Mapper (SAM), and Winner Takes All (WTA).
The advanced/target detection classifiers consisted of: Adaptive Coherence Estimator (ACE), Constrained Energy Minimization (CEM), Matched Filtering (MF), Mixture-Tuned Matched Filtering (MTMF), Mixture-Tuned Target-Constrained Interference-Minimized Filter (MTTCIMF), Orthogonal Space Projection (OSP), and Target-Constrained Interference-Minimized Filter (TCIMF). This experiment was performed on glaciers at two test sites: Ny-Ålesund, Svalbard, Norway; and the Chandra–Bhaga basin, Himalaya, India. Overall, the FLAASH correction delivered the most realistic reflectance spectra, while DOS delivered the least realistic. Spectra derived from HCS-sharpened subsets matched the average reflectance trends, whereas GS reduced the overall reflectance. WTA classification of the DOS subsets achieved the highest overall accuracy (0.81), while MTTCIMF classification of the FLAASH subsets yielded the lowest overall accuracy (0.01). Nevertheless, FLAASH consistently performed better (less variable and generally more accurate) than DOS and QUAC, making it the more reliable and hence recommended algorithm. Although HCS pansharpening achieved a lower error rate (0.71) than GS pansharpening (0.76), neither significantly improved accuracy or efficiency. The Ny-Ålesund glacier facies were best classified by the MXL (error rate = 0.49) and WTA (error rate = 0.53) classifiers, whereas the Himalayan glacier facies were best classified by MD (error rate = 0.61) and WTA (error rate = 0.45). A final comparison of classifiers by total error rate across all atmospheric corrections and pansharpening methods yielded the following reliability order: MXL > WTA > MHD > ACE > MD > CEM = MF > SAM > MTMF = TCIMF > OSP > MTTCIMF. The findings suggest that for VHR visible near-infrared (VNIR) mapping of facies, FLAASH is the best atmospheric correction, while MXL may deliver reliable thematic classification. Moreover, an extensive account of the varying effects of each processing scheme is provided, and may be transferable to other VHR VNIR mapping methods.
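Two of the simpler building blocks compared in this study can be sketched compactly: Dark Object Subtraction, which removes a crude per-band haze estimate, and the Spectral Angle Mapper, which classifies each pixel by the angle between its spectrum and a set of reference spectra. The code below is a minimal textbook-style sketch, not the study's implementation; array shapes and endmember values are illustrative assumptions.

```python
import numpy as np

def dark_object_subtraction(image):
    """DOS: assume the darkest pixel in each band should be ~zero and
    subtract the per-band minimum as a crude haze estimate.
    `image` has shape (rows, cols, bands)."""
    dark = image.reshape(-1, image.shape[-1]).min(axis=0)
    return image - dark  # broadcasts over the band axis

def spectral_angle(pixel, reference):
    """SAM: angle (radians) between a pixel spectrum and a reference
    (endmember) spectrum; a smaller angle means a better match."""
    cos = np.dot(pixel, reference) / (
        np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sam_classify(image, endmembers):
    """Assign each pixel the endmember with the smallest spectral angle.
    `endmembers` has shape (n_classes, bands)."""
    flat = image.reshape(-1, image.shape[-1])
    angles = np.stack([[spectral_angle(p, e) for e in endmembers]
                       for p in flat])
    return angles.argmin(axis=1).reshape(image.shape[:2])
```

Because SAM compares only spectral shape (angle), not magnitude, it is relatively insensitive to the overall brightness changes that pansharpening methods such as GS introduce.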
3
Automated Delineation of Supraglacial Debris Cover Using Deep Learning and Multisource Remote Sensing Data. REMOTE SENSING 2022. [DOI: 10.3390/rs14061352]
Abstract
High-mountain glaciers can be covered with varying degrees of debris. Debris over glaciers (supraglacial debris) significantly alters glacier melt, velocity, ice geometry, and, thus, the overall response of glaciers to climate change. Accumulated supraglacial debris impedes the automated delineation of glacier extent owing to its reflectance properties, which are similar to those of the surrounding periglacial debris (debris adjacent to the glaciated area). Here, we propose an automated scheme for supraglacial debris mapping using a synergistic approach of deep learning and multisource remote sensing data. A combination of multisource remote sensing data (visible, near-infrared, shortwave infrared, thermal infrared, microwave, elevation, and surface slope) is used as input to a fully connected feed-forward deep neural network (i.e., a deep artificial neural network). The network is designed by choosing the optimum number and size of hidden layers through trial and error. It is trained over eight sites spread across the Himalayas and tested over three sites in the Karakoram region. Our results show 96.3% accuracy of the model on the test data. The robustness of the proposed scheme is tested over 900 km2 and 1710 km2 of glacierized regions, representing a high degree of landscape heterogeneity. The study provides proof of concept that deep neural networks can automate debris-covered glacier mapping using multisource remote sensing data.
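The core architecture described here, a fully connected feed-forward network over a per-pixel stack of multisource features, can be sketched in a few lines. This is an untrained toy forward pass only: the paper tuned its hidden-layer count and sizes by trial and error, so the widths (16, 8), the ReLU/sigmoid choices, and the seven-feature input below are placeholder assumptions, not the published architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DebrisMLP:
    """Feed-forward network mapping a per-pixel stack of multisource
    features (e.g. VNIR/SWIR/TIR bands, SAR backscatter, elevation,
    slope) to a debris / not-debris probability. Layer widths are
    placeholders, not the tuned architecture from the paper."""

    def __init__(self, n_features=7, hidden=(16, 8)):
        sizes = (n_features,) + hidden + (1,)
        self.weights = [rng.normal(0.0, 0.1, (a, b))
                        for a, b in zip(sizes, sizes[1:])]
        self.biases = [np.zeros(b) for b in sizes[1:]]

    def forward(self, X):
        """X: (n_pixels, n_features) -> (n_pixels,) debris probabilities."""
        h = X
        for W, b in zip(self.weights[:-1], self.biases[:-1]):
            h = relu(h @ W + b)
        return sigmoid(h @ self.weights[-1] + self.biases[-1]).ravel()
```

Stacking heterogeneous inputs (optical, thermal, microwave, topographic) into one feature vector is what lets the network separate supraglacial from periglacial debris despite their near-identical optical reflectance.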
4
Glacier Monitoring Based on Multi-Spectral and Multi-Temporal Satellite Data: A Case Study for Classification with Respect to Different Snow and Ice Types. REMOTE SENSING 2022. [DOI: 10.3390/rs14040845]
Abstract
Remote sensing techniques are frequently applied to survey remote areas where conventional surveying techniques remain difficult or impracticable. In this paper, we focus on one such remote glacier area, the Tyndall Glacier in the Southern Patagonian Icefield in Chile. Based on optical remote sensing data in the form of multi-spectral Sentinel-2 imagery, we analyze the extent of different snow and ice classes on the surface of the glacier by means of pixel-wise classification. Our study comprises three main steps: (1) labeled Sentinel-2-compliant data are obtained from theoretical spectral reflectance curves, as no training data are available for the investigated area; (2) four different classification approaches are compared in their ability to identify the five defined snow and ice types, two of them unsupervised (k-means clustering and rule-based classification via snow and ice indices) and two supervised (Linear Discriminant Analysis and a Random Forest classifier); (3) we first focus on the pixel-wise classification of Sentinel-2 imagery and then use the best-performing approach for a multi-temporal analysis of the Tyndall Glacier area. While the classification results show that all of the approaches are suitable for detecting different snow and ice classes on the glacier surface, the multi-temporal analysis clearly reveals the seasonal development of the glacier. The change of snow and ice types on the glacier surface is evident, especially between the end of the ablation season (April) and the end of the accumulation season (September) in Southern Chile.
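The rule-based branch of step (2), classification via snow and ice indices, can be sketched with the Normalized Difference Snow Index, which for Sentinel-2 is computed from the green band (B3) and a shortwave-infrared band (B11). The three-class rule and its thresholds below are illustrative assumptions for a toy example, not the index set or thresholds used in the study.

```python
import numpy as np

def ndsi(green, swir):
    """Normalized Difference Snow Index from Sentinel-2 band 3 (green)
    and band 11 (SWIR) reflectances; snow is bright in green and dark
    in SWIR, so it yields high NDSI values."""
    return (green - swir) / (green + swir + 1e-12)

def rule_based_snow_ice(green, swir, snow_thresh=0.6, ice_thresh=0.3):
    """Toy rule-based classifier: 2 = snow, 1 = ice, 0 = other.
    The thresholds are illustrative, not the study's values."""
    index = ndsi(green, swir)
    labels = np.zeros(index.shape, dtype=int)
    labels[index > ice_thresh] = 1   # moderate NDSI: ice
    labels[index > snow_thresh] = 2  # high NDSI: snow
    return labels
```

Because such rules need no training data, they complement the supervised approaches in the study, for which the labels had to be derived from theoretical reflectance curves.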