1. Clark HP, Smith AG, McKay Fletcher D, Larsson AI, Jaspars M, De Clippele LH. New interactive machine learning tool for marine image analysis. Royal Society Open Science 2024; 11:231678. PMID: 39157716; PMCID: PMC11328963; DOI: 10.1098/rsos.231678.
Abstract
Advancing imaging technologies are drastically increasing the rate of marine video and image data collection. Often these datasets are not analysed to their full potential as extracting information for multiple species is incredibly time-consuming. This study demonstrates the capability of the open-source interactive machine learning tool, RootPainter, to analyse large marine image datasets quickly and accurately. The ability of RootPainter to extract the presence and surface area of the cold-water coral reef associate sponge species, Mycale lingua, was tested in two datasets: 18 346 time-lapse images and 1420 remotely operated vehicle video frames. New corrective annotation metrics integrated with RootPainter allow objective assessment of when to stop model training and reduce the need for manual model validation. Three highly accurate M. lingua models were created using RootPainter, with an average dice score of 0.94 ± 0.06. Transfer learning aided the production of two of the models, increasing analysis efficiency from 6 to 16 times faster than manual annotation for time-lapse images. Surface area measurements were extracted from both datasets allowing future investigation of sponge behaviours and distributions. Moving forward, interactive machine learning tools and model sharing could dramatically increase image analysis speeds, collaborative research and our understanding of spatiotemporal patterns in biodiversity.
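The dice score reported in this entry quantifies overlap between a predicted segmentation mask and a ground-truth mask (1.0 is perfect agreement). A minimal sketch of the metric follows; the function name and NumPy formulation are illustrative, not taken from the cited paper:

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom
```

For example, masks sharing one of two foreground pixels each give a dice score of 0.5.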
Affiliation(s)
- H. Poppy Clark
- Marine Biodiscovery Centre, Department of Chemistry, University of Aberdeen, Aberdeen AB24 3UE, UK
- Abraham George Smith
- Department of Computer Science, University of Copenhagen, 2100 Copenhagen, Denmark
- Daniel McKay Fletcher
- Rural Economy, Environment and Society, Scotland’s Rural College, Edinburgh EH9 3JG, UK
- Ann I. Larsson
- Tjärnö Marine Laboratory, Department of Marine Sciences, University of Gothenburg, Sweden
- Marcel Jaspars
- Marine Biodiscovery Centre, Department of Chemistry, University of Aberdeen, Aberdeen AB24 3UE, UK
- Laurence H. De Clippele
- School of Biodiversity, One Health & Veterinary Medicine, University of Glasgow, Glasgow G61 1QH, UK
2. Aziz RM, Mahto R, Das A, Ahmed SU, Roy P, Mallik S, Li A. CO-WOA: Novel Optimization Approach for Deep Learning Classification of Fish Image. Chem Biodivers 2023; 20:e202201123. PMID: 37394680; DOI: 10.1002/cbdv.202201123.
Abstract
Fish are among the most significant groups of cold-blooded animals. It is crucial to recognize and categorize the most significant species of fish, since diseases and decay in different seafood species exhibit different symptoms. Systems based on enhanced deep learning can replace the currently cumbersome and sluggish traditional approaches in this area. Although it seems straightforward, classifying fish images is a complex procedure. In addition, the scientific study of population distribution and geographic patterns is important for advancing the field. The goal of the proposed work is to identify the best-performing strategy using cutting-edge computer vision, the Chaotic Oppositional Based Whale Optimization Algorithm (CO-WOA), and data mining techniques. Performance comparisons with leading models, such as Convolutional Neural Networks (CNN) and VGG-19, are made to confirm the applicability of the suggested method. The suggested feature extraction approach with the proposed deep learning model was used in the research, yielding an accuracy rate of 100%. The performance was also compared to cutting-edge image processing models, namely Convolutional Neural Networks, ResNet150V2, DenseNet, Visual Geometry Group-19, Inception V3 and Xception, which achieved accuracies of 98.48%, 98.58%, 99.04%, 98.44%, 99.18% and 99.63%, respectively. Using an empirical method leveraging artificial neural networks, the proposed deep learning model was shown to be the best model.
Affiliation(s)
- Rabia Musheer Aziz
- Mathematics Division, School of Advanced Sciences and Languages, VIT Bhopal University, Kothrikalan, Sehore 466116, M.P., India
- Rajul Mahto
- School of Computing Science and Engineering, VIT Bhopal University, Kothrikalan, Sehore 466116, M.P., India
- Aryan Das
- School of Computing Science and Engineering, VIT Bhopal University, Kothrikalan, Sehore 466116, M.P., India
- Saboor Uddin Ahmed
- School of Computing Science and Engineering, VIT Bhopal University, Kothrikalan, Sehore 466116, M.P., India
- Priyanka Roy
- Mathematics Division, School of Advanced Sciences and Languages, VIT Bhopal University, Kothrikalan, Sehore 466116, M.P., India
- Saurav Mallik
- Molecular and Integrative Physiological Sciences, Department of Environmental Health, Harvard T. H. Chan School of Public Health, Boston, MA 02115, USA
- Department of Pharmacology & Toxicology, University of Arizona, Tucson, AZ 85721, USA
- Aimin Li
- Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- School of Computer Science and Engineering, Xi'an University of Technology, Shaanxi 710048, China
3. Image dataset for benchmarking automated fish detection and classification algorithms. Sci Data 2023; 10:5. PMID: 36596792; PMCID: PMC9810604; DOI: 10.1038/s41597-022-01906-1.
Abstract
Multiparametric video-cabled marine observatories are becoming strategic to monitor the marine ecosystem remotely and in real time. These platforms can produce continuous, high-frequency and long-lasting image datasets that require automation in order to extract biological time series. The OBSEA, located 4 km from Vilanova i la Geltrú at 20 m depth, was used to produce coastal fish time series continuously over the 24-h cycle during 2013-2014. The image content of the photos was extracted via tagging, resulting in 69,917 fish tags of 30 identified taxa. We also provide a meteorological and oceanographic dataset filtered by a quality control procedure to define the real-world conditions affecting image quality. The tagged fish dataset can be of great importance for developing Artificial Intelligence routines for the automated identification and classification of fishes in extensive time-lapse image sets.
4. Lopez-Vazquez V, Lopez-Guede JM, Marini S, Fanelli E, Johnsen E, Aguzzi J. Correction: Lopez-Vazquez et al. Video Image Enhancement and Machine Learning Pipeline for Underwater Animal Detection and Classification at Cabled Observatories. Sensors 2020, 20, 726. Sensors (Basel) 2022; 23:16. PMID: 36617155; PMCID: PMC9823824; DOI: 10.3390/s23010016.
Abstract
The authors wish to correct the following error in the original paper [...].
Affiliation(s)
- Vanesa Lopez-Vazquez
- DS Labs, R+D+I unit of Deusto Sistemas S.A., 01015 Vitoria-Gasteiz, Spain
- University of the Basque Country (UPV/EHU), Nieves Cano, 12, 01006 Vitoria-Gasteiz, Spain
- Jose Manuel Lopez-Guede
- Department of System Engineering and Automation Control, Faculty of Engineering of Vitoria-Gasteiz, University of the Basque Country (UPV/EHU), Nieves Cano, 12, 01006 Vitoria-Gasteiz, Spain
- Simone Marini
- Institute of Marine Sciences, National Research Council of Italy (CNR), 19032 La Spezia, Italy
- Stazione Zoologica Anton Dohrn (SZN), 80122 Naples, Italy
- Emanuela Fanelli
- Stazione Zoologica Anton Dohrn (SZN), 80122 Naples, Italy
- Department of Life and Environmental Sciences, Polytechnic University of Marche, Via Brecce Bianche, 60131 Ancona, Italy
- Espen Johnsen
- Institute of Marine Research, P.O. Box 1870, 5817 Bergen, Norway
- Jacopo Aguzzi
- Stazione Zoologica Anton Dohrn (SZN), 80122 Naples, Italy
- Instituto de Ciencias del Mar (ICM) of the Consejo Superior de Investigaciones Científicas (CSIC), 08003 Barcelona, Spain

5. Marini S, Bonofiglio F, Corgnati LP, Bordone A, Schiaparelli S, Peirano A. Long-term Automated Visual Monitoring of Antarctic Benthic Fauna. Methods Ecol Evol 2022. DOI: 10.1111/2041-210x.13898.
Affiliation(s)
- Simone Marini
- Institute of Marine Sciences, National Research Council of Italy (CNR), 19132 La Spezia, Italy
- Stazione Zoologica Anton Dohrn, 80121 Naples, Italy
- Federico Bonofiglio
- Institute of Marine Sciences, National Research Council of Italy (CNR), 19132 La Spezia, Italy
- Lorenzo P. Corgnati
- Institute of Marine Sciences, National Research Council of Italy (CNR), 19132 La Spezia, Italy
- Andrea Bordone
- ENEA Marine Environment Research Centre, 19132 La Spezia, Italy
- Stefano Schiaparelli
- DISTAV, Università di Genova, 16132 Genova, Italy
- MNA Italian National Antarctic Museum (Section of Genoa), 16132 Genoa, Italy
- Andrea Peirano
- ENEA Marine Environment Research Centre, 19132 La Spezia, Italy

6. Jose JA, Kumar C, Sureshkumar S. Region-Based Split Octonion Networks with Channel Attention Module for Tuna Classification. Int J Pattern Recogn 2022. DOI: 10.1142/s0218001422500306.
Abstract
Tuna is a popular food because of its nutritional value and taste. Demand for various species of tuna increases over time, necessitating a system that sorts tuna into distinct species in the export sector in order to accelerate the process. This work proposes an automated tuna classification system based on a split octonion network. The images are initially preprocessed and divided into region images. Each region image is applied to a split octonion network with eleven layers. In addition, a split octonion channel attention module is presented, which is fed to the last two convolutional layers. The features from the three octonion networks are fused and applied to a series of dense layers. In the last layer, a softmax classifier is utilized for final classification. Results show that the proposed region-based split octonion network with attention module gives an accuracy of 98.01% on the tuna database. The region-based tuna classification model is fine-tuned for the categorization of six species from the QUT-FishBase and Fish-Pak datasets. The system shows accuracies of 97.83% and 98.17% on QUT-FishBase and Fish-Pak, respectively. The proposed methodology is also compared with existing approaches using a variety of evaluation criteria.
Affiliation(s)
- C. Kumar
- Rajiv Gandhi Institute of Technology, Kottayam, India
- S. Sureshkumar
- Kerala University of Fisheries and Ocean Studies, Kochi, India

7. Unlocking the Potential of Deep Learning for Migratory Waterbirds Monitoring Using Surveillance Video. Remote Sensing 2022. DOI: 10.3390/rs14030514.
Abstract
Estimates of migratory waterbird populations provide the essential scientific basis to guide the conservation of coastal wetlands, which are heavily modified and threatened by economic development. New equipment and technology have been increasingly introduced in protected areas to expand monitoring efforts, among which video surveillance and other unmanned devices are widely used in coastal wetlands. However, the massive amount of video records brings the dual challenge of storage and analysis. Manual analysis methods are time-consuming and error-prone, representing a significant bottleneck to rapid data processing and to the dissemination and application of results. Recently, video processing with deep learning has emerged as a solution, but its ability to accurately identify and count waterbirds across habitat types (e.g., mudflat, saltmarsh and open water) is untested in coastal environments. In this study, we developed a two-step automatic waterbird monitoring framework. The first step involves automatic video segmentation, selection, processing and mosaicking of video footage into panorama images covering the entire monitoring area, which are subjected to the second step of counting and density estimation using a depth density estimation network (DDE). We tested the effectiveness and performance of the framework in Tiaozini, Jiangsu Province, China, a restored wetland providing key high-tide roosting ground for migratory waterbirds in the East Asian-Australasian flyway. The results showed that our approach achieved an accuracy of 85.59%, outperforming many other popular deep learning algorithms. Furthermore, the standard error of our model was very small (se = 0.0004), suggesting high stability of the method. The framework is also computationally efficient: it takes about one minute to process a theme covering the entire site using a high-performance desktop computer. These results demonstrate that our framework can accurately extract ecologically meaningful data and information from video surveillance footage to assist biodiversity monitoring, filling the gap in the efficient use of existing monitoring equipment deployed in protected areas.
8. Bergler C, Gebhard A, Towers JR, Butyrev L, Sutton GJ, Shaw TJH, Maier A, Nöth E. FIN-PRINT: a fully-automated multi-stage deep-learning-based framework for the individual recognition of killer whales. Sci Rep 2021; 11:23480. PMID: 34873193; PMCID: PMC8648837; DOI: 10.1038/s41598-021-02506-6.
Abstract
Biometric identification techniques such as photo-identification require an array of unique natural markings to identify individuals. From 1975 to the present, Bigg's killer whales have been photo-identified along the west coast of North America, resulting in one of the largest and longest-running cetacean photo-identification datasets. However, data maintenance and analysis are extremely time- and resource-consuming. This study transfers the procedure of killer whale image identification into a fully automated, multi-stage deep learning framework, entitled FIN-PRINT. It is composed of multiple sequentially ordered sub-components. FIN-PRINT is trained and evaluated on a dataset collected over an 8-year period (2011-2018) in the coastal waters off western North America, including 121,000 human-annotated identification images of Bigg's killer whales. First, object detection is performed to identify unique killer whale markings, resulting in 94.4% recall, 94.1% precision and 93.4% mean average precision (mAP). Second, all previously identified natural killer whale markings are extracted. The third step introduces a data enhancement mechanism by filtering between valid and invalid markings from previous processing levels, achieving 92.8% recall, 97.5% precision and 95.2% accuracy. The fourth and final step involves multi-class individual recognition. When evaluated on the network test set, it achieved an accuracy of 92.5% with 97.2% top-3 unweighted accuracy (TUA) for the 100 most commonly photo-identified killer whales. Additionally, the method achieved an accuracy of 84.5% and a TUA of 92.9% when applied to the entire 2018 image collection of the 100 most common killer whales. The source code of FIN-PRINT can be adapted to other species and will be publicly available.
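The top-3 unweighted accuracy (TUA) reported in this entry is an instance of top-k accuracy: a prediction counts as correct if the true class appears among the k highest-scoring classes. A minimal sketch follows; the function name and array layout are illustrative assumptions, not taken from the cited paper:

```python
import numpy as np

def top_k_accuracy(scores: np.ndarray, labels: np.ndarray, k: int = 3) -> float:
    """Fraction of samples whose true label is among the k highest-scoring classes.

    scores: (n_samples, n_classes) array of class scores per sample.
    labels: (n_samples,) array of true class indices.
    """
    top_k = np.argsort(scores, axis=1)[:, -k:]       # indices of the k largest scores per row
    hits = np.any(top_k == labels[:, None], axis=1)  # is the true label among them?
    return float(hits.mean())
```

With k = 1 this reduces to ordinary classification accuracy.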
Affiliation(s)
- Christian Bergler
- Department of Computer Science, Pattern Recognition Lab, Friedrich-Alexander-University Erlangen-Nuremberg, Martensstr. 3, 91058 Erlangen, Germany
- Alexander Gebhard
- Department of Computer Science, Pattern Recognition Lab, Friedrich-Alexander-University Erlangen-Nuremberg, Martensstr. 3, 91058 Erlangen, Germany
- Jared R. Towers
- Bay Cetology, 257 Fir Street, Alert Bay, BC V0N 1A0, Canada
- Pacific Biological Station, Fisheries and Oceans Canada, 3190 Hammond Bay Road, Nanaimo, BC V9T 6N7, Canada
- Leonid Butyrev
- Department of Computer Science, Pattern Recognition Lab, Friedrich-Alexander-University Erlangen-Nuremberg, Martensstr. 3, 91058 Erlangen, Germany
- Gary J. Sutton
- Bay Cetology, 257 Fir Street, Alert Bay, BC V0N 1A0, Canada
- Pacific Biological Station, Fisheries and Oceans Canada, 3190 Hammond Bay Road, Nanaimo, BC V9T 6N7, Canada
- Tasli J. H. Shaw
- Bay Cetology, 257 Fir Street, Alert Bay, BC V0N 1A0, Canada
- Pacific Biological Station, Fisheries and Oceans Canada, 3190 Hammond Bay Road, Nanaimo, BC V9T 6N7, Canada
- Andreas Maier
- Department of Computer Science, Pattern Recognition Lab, Friedrich-Alexander-University Erlangen-Nuremberg, Martensstr. 3, 91058 Erlangen, Germany
- Elmar Nöth
- Department of Computer Science, Pattern Recognition Lab, Friedrich-Alexander-University Erlangen-Nuremberg, Martensstr. 3, 91058 Erlangen, Germany

9. A novel lifelong learning model based on cross domain knowledge extraction and transfer to classify underwater images. Inf Sci 2021. DOI: 10.1016/j.ins.2020.11.048.
10. Irfan M, Jiangbin Z, Iqbal M, Arif MH. Enhancing learning classifier systems through convolutional autoencoder to classify underwater images. Soft Comput 2021. DOI: 10.1007/s00500-021-05738-w.
11.
Abstract
A wide range of applications in marine ecology extensively uses underwater cameras. Still, to efficiently process the vast amount of data generated, we need tools that can automatically detect and recognize species captured on film. Classifying fish species from videos and images in natural environments can be challenging because of noise and variation in illumination and the surrounding habitat. In this paper, we propose a two-step deep learning approach for the detection and classification of temperate fishes without pre-filtering. The first step is to detect each single fish in an image, independent of species and sex. For this purpose, we employ the You Only Look Once (YOLO) object detection technique. In the second step, we adopt a Convolutional Neural Network (CNN) with the Squeeze-and-Excitation (SE) architecture for classifying each fish in the image without pre-filtering. We apply transfer learning to overcome the limited training samples of temperate fishes and to improve the accuracy of the classification. This is done by training the object detection model with ImageNet and the fish classifier via a public dataset (Fish4Knowledge), whereupon both the object detection model and the classifier are updated with the temperate fishes of interest. The weights obtained from pre-training are applied to post-training as priors. Our solution achieves state-of-the-art accuracy of 99.27% using the pre-training model. The accuracies using the post-training model are also high: 83.68% and 87.74% with and without image augmentation, respectively. This strongly indicates that the solution is viable with a more extensive dataset.
12. Anton V, Germishuys J, Bergström P, Lindegarth M, Obst M. An open-source, citizen science and machine learning approach to analyse subsea movies. Biodivers Data J 2021; 9:e60548. PMID: 33679174; PMCID: PMC7930014; DOI: 10.3897/bdj.9.e60548.
Abstract
Background
The increasing access to autonomously-operated technologies offers vast opportunities to sample large volumes of biological data. However, these technologies also impose novel demands on ecologists, who need tools for data management and processing that are efficient, publicly available and easy to use. Such tools are starting to be developed for a wider community, and here we present an approach that combines essential analytical functions for analysing large volumes of image data in marine ecological research.
New information
This paper describes the Koster Seafloor Observatory, an open-source approach to analysing large amounts of subsea movie data for marine ecological research. The approach incorporates three distinct modules to: manage and archive the subsea movies; involve citizen scientists to accurately classify the footage; and, finally, train and test machine learning algorithms for detection of biological objects. This modular approach is based on open-source code and allows researchers to customise and further develop the presented functionalities for various types of data and questions related to the analysis of marine imagery. We tested our approach for monitoring cold water corals in a Marine Protected Area in Sweden using videos from remotely operated vehicles (ROVs). Our study resulted in a machine learning model with adequate performance, trained entirely with classifications provided by citizen scientists. We illustrate the application of machine learning models for automated inventories and monitoring of cold water corals. Our approach shows how citizen science can be used to effectively extract occurrence and abundance data for key ecological species and habitats from underwater footage. We conclude that the combination of open-source tools, citizen science systems, machine learning and high-performance computational resources is key to successfully analysing large amounts of underwater imagery in the future.
Affiliation(s)
- Victor Anton
- Wildlife.ai, New Plymouth, New Zealand
- Per Bergström
- Department of Marine Sciences, Göteborg University, Gothenburg, Sweden
- Mats Lindegarth
- Department of Marine Sciences, Göteborg University, Gothenburg, Sweden
- Matthias Obst
- Department of Marine Sciences, Göteborg University, Gothenburg, Sweden
- SeAnalytics AB, Gothenburg, Sweden

13. N.S. A, D. S, S. RK. Naive Bayesian fusion based deep learning networks for multisegmented classification of fishes in aquaculture industries. Ecol Inform 2021. DOI: 10.1016/j.ecoinf.2021.101248.
14. Manco L, Maffei N, Strolin S, Vichi S, Bottazzi L, Strigari L. Basic of machine learning and deep learning in imaging for medical physicists. Phys Med 2021; 83:194-205. DOI: 10.1016/j.ejmp.2021.03.026.
15. An Automated Pipeline for Image Processing and Data Treatment to Track Activity Rhythms of Paragorgia arborea in Relation to Hydrographic Conditions. Sensors 2020; 20:6281. PMID: 33158174; PMCID: PMC7662914; DOI: 10.3390/s20216281.
Abstract
Imaging technologies are being deployed on cabled observatory networks worldwide. They allow for the monitoring of the biological activity of deep-sea organisms on temporal scales that were never attained before. In this paper, we customized Convolutional Neural Network image processing to track behavioral activities in an iconic conservation deep-sea species—the bubblegum coral Paragorgia arborea—in response to ambient oceanographic conditions at the Lofoten-Vesterålen observatory. Images and concomitant oceanographic data were taken hourly from February to June 2018. We considered coral activity in terms of bloated, semi-bloated and non-bloated surfaces, as proxy for polyp filtering, retraction and transient activity, respectively. A test accuracy of 90.47% was obtained. Chronobiology-oriented statistics and advanced Artificial Neural Network (ANN) multivariate regression modeling proved that a daily coral filtering rhythm occurs within one major dusk phase, being independent from tides. Polyp activity, in particular extrusion, increased from March to June, and was able to cope with an increase in chlorophyll concentration, indicating the existence of seasonality. Our study shows that it is possible to establish a model for the development of automated pipelines that are able to extract biological information from times series of images. These are helpful to obtain multidisciplinary information from cabled observatory infrastructures.
16. ENDURUNS: An Integrated and Flexible Approach for Seabed Survey Through Autonomous Mobile Vehicles. J Mar Sci Eng 2020. DOI: 10.3390/jmse8090633.
Abstract
The oceans cover more than two-thirds of the planet and hold the vastest share of its natural resources. Nevertheless, only a fraction of the ocean depths has been explored. Within this context, this article presents the H2020 ENDURUNS project, a novel scientific and technological approach for prolonged underwater autonomous seabed survey operations, either in the deep ocean or in coastal areas. The proposed approach combines a hybrid Autonomous Underwater Vehicle, capable of moving either with thrusters or as a sea glider, with an Unmanned Surface Vehicle equipped with satellite communication facilities for interaction with a land station. Both vehicles are equipped with energy packs that combine hydrogen fuel cells and Li-ion batteries to extend the duration of survey operations. The Unmanned Surface Vehicle employs photovoltaic panels to increase the autonomy of the vehicle. Since these missions generate a large amount of data, both vehicles carry onboard central processing units capable of executing data analysis and compression algorithms for the semantic classification and transmission of the acquired data.
17. The Hierarchic Treatment of Marine Ecological Information from Spatial Networks of Benthic Platforms. Sensors 2020; 20:1751. PMID: 32245204; PMCID: PMC7146366; DOI: 10.3390/s20061751.
Abstract
Measuring biodiversity simultaneously in different locations, at different temporal scales, and over wide spatial scales is of strategic importance for the improvement of our understanding of the functioning of marine ecosystems and for the conservation of their biodiversity. Monitoring networks of cabled observatories, along with other docked autonomous systems (e.g., Remotely Operated Vehicles [ROVs], Autonomous Underwater Vehicles [AUVs], and crawlers), are being conceived and established at a spatial scale capable of tracking energy fluxes across benthic and pelagic compartments, as well as across geographic ecotones. At the same time, optoacoustic imaging is sustaining an unprecedented expansion in marine ecological monitoring, enabling the acquisition of new biological and environmental data at an appropriate spatiotemporal scale. At this stage, one of the main problems for an effective application of these technologies is the processing, storage, and treatment of the acquired complex ecological information. Here, we provide a conceptual overview on the technological developments in the multiparametric generation, storage, and automated hierarchic treatment of biological and environmental information required to capture the spatiotemporal complexity of a marine ecosystem. In doing so, we present a pipeline of ecological data acquisition and processing in different steps and prone to automation. We also give an example of population biomass, community richness and biodiversity data computation (as indicators for ecosystem functionality) with an Internet Operated Vehicle (a mobile crawler). Finally, we discuss the software requirements for that automated data processing at the level of cyber-infrastructures with sensor calibration and control, data banking, and ingestion into large data portals.
18. A Flexible Autonomous Robotic Observatory Infrastructure for Bentho-Pelagic Monitoring. Sensors 2020; 20:1614. PMID: 32183233; PMCID: PMC7146179; DOI: 10.3390/s20061614.
Abstract
This paper presents the technological developments and the policy contexts for the project “Autonomous Robotic Sea-Floor Infrastructure for Bentho-Pelagic Monitoring” (ARIM). The development is based on national experience with robotic component technologies that are combined and merged into a new product for autonomous and integrated ecological deep-sea monitoring. Traditional monitoring is often vessel-based and thus resource-demanding, and it is economically unviable to fulfil the current policy for ecosystem monitoring with traditional approaches. This project therefore developed platforms for bentho-pelagic monitoring using an arrangement of crawler and stationary platforms at the Lofoten-Vesterålen (LoVe) observatory network (Norway). Visual and acoustic imaging along with standard oceanographic sensors have been combined to support advanced and continuous spatiotemporal monitoring near cold water coral mounds. Just as important are the automatic processing techniques under development, which have been implemented to allow quantification of species (or categories of species) through tracking and classification. At the same time, real-time outboard-processed three-dimensional (3D) laser scanning has been implemented to increase mission autonomy, delivering quantifiable information on habitat features (i.e., for seascape approaches). The first version of platform autonomy has already been tested under controlled conditions with a tethered crawler exploring the vicinity of a cabled stationary instrumented garage. Our vision is that elimination of the tether, in combination with inductive battery recharge through fuel cell technology, will facilitate self-sustained long-term autonomous operations over large areas, serving not only the needs of science but also subsea industries such as oil and gas, and mining.
19. Billings G, Johnson-Roberson M. SilhoNet-Fisheye: Adaptation of a ROI-Based Object Pose Estimation Network to Monocular Fisheye Images. IEEE Robot Autom Lett 2020. DOI: 10.1109/lra.2020.2994036.