1
Huang D, Bai H, Wang L, Hou Y, Li L, Xia Y, Yan Z, Chen W, Chang L, Li W. The Application and Development of Deep Learning in Radiotherapy: A Systematic Review. Technol Cancer Res Treat 2021;20:15330338211016386. [PMID: 34142614] [PMCID: PMC8216350] [DOI: 10.1177/15330338211016386]
Abstract
With the massive use of computers, the growth and explosion of data have greatly promoted the development of artificial intelligence (AI). The rise of deep learning (DL) algorithms, such as convolutional neural networks (CNNs), has provided radiation oncologists with many promising tools that can simplify the complex radiotherapy process in the clinical work of radiation oncology, improve the accuracy and objectivity of diagnosis, and reduce the workload, thus enabling clinicians to spend more time on high-level decision-making tasks. As DL moves closer to clinical practice, radiation oncologists will need to become more familiar with its principles to evaluate and use this powerful tool properly. In this paper, we explain the development and basic concepts of AI and discuss its application in radiation oncology according to the task categories of DL algorithms. This work clarifies the possibilities for the further development of DL in radiation oncology.
Affiliation(s)
- Danju Huang, Han Bai, Li Wang, Yu Hou, Lan Li, Yaoxiong Xia, Zhirui Yan, Wenrui Chen, Li Chang, Wenhui Li
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
2
Alemayoh TT, Lee JH, Okamoto S. New Sensor Data Structuring for Deeper Feature Extraction in Human Activity Recognition. Sensors (Basel) 2021;21:2814. [PMID: 33923706] [PMCID: PMC8073736] [DOI: 10.3390/s21082814]
Abstract
For the effective application of thriving human-assistive technologies in healthcare services and human-robot collaborative tasks, computing devices must be aware of human movements. Developing a reliable real-time activity recognition method is imperative for the continuous and smooth operation of such smart devices, and light, intelligent methods that use ubiquitous sensors are pivotal to achieving this. In this study, motivated by the correlation within time-series data, a new method of data structuring for deeper feature extraction is introduced. The activity data were collected using a smartphone through an iOS application developed exclusively for this purpose. Data from eight activities were shaped into single- and double-channel forms to extract deep temporal and spatial features of the signals. In addition to the time domain, the raw data were represented in the Fourier and wavelet domains. Among the several neural network models used to fit the deep-learning classification of the activities, a convolutional neural network with a double-channeled time-domain input performed well. The method was further evaluated on other public datasets, where it also obtained better performance. Finally, the practicability of the trained model was tested in real time on a computer and a smartphone, where it demonstrated promising results.
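The data structuring described in this abstract can be illustrated with a minimal numpy sketch. This is not code from the paper: the window length (128 samples), the choice of accelerometer plus gyroscope as the two channels, and the use of an FFT magnitude for the frequency-domain view are all assumptions made here for illustration; the idea shown is only the reshaping of windowed time-series data into a multi-channel, image-like tensor that a 2-D CNN could consume.

```python
import numpy as np

# Hypothetical sliding window: 128 samples of 3-axis accelerometer
# and 3-axis gyroscope readings (random stand-ins for sensor data).
rng = np.random.default_rng(0)
acc = rng.standard_normal((128, 3))   # accelerometer window
gyro = rng.standard_normal((128, 3))  # gyroscope window

# Double-channel structuring: stack the two sensors as separate
# channels of one (channels, time, axes) tensor, i.e. one small
# "image" per channel for a 2-D convolutional network.
double_channel = np.stack([acc, gyro], axis=0)
assert double_channel.shape == (2, 128, 3)

# A frequency-domain representation of the same window: magnitude
# of the real FFT along the time axis, per sensor axis.
freq = np.abs(np.fft.rfft(double_channel, axis=1))
assert freq.shape == (2, 65, 3)  # rfft of 128 samples -> 65 bins
```

Stacking sensors as channels (rather than concatenating them along one axis) lets a convolution see temporally aligned accelerometer and gyroscope values together, which is one plausible reading of the "double-channel" structuring the abstract describes.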
Affiliation(s)
- Jae Hoon Lee
- Department of Mechanical Engineering, Graduate School of Science and Engineering, Ehime University, Matsuyama 790-8577, Japan; (T.T.A.); (S.O.)
3
Li X, Zhang Y, Marsic I, Sarcevic A, Burd RS. Deep Learning for RFID-Based Activity Recognition. Proceedings of the … International Conference on Embedded Networked Sensor Systems 2016;2016:164-175. [PMID: 30381808] [PMCID: PMC6205502] [DOI: 10.1145/2994551.2994569]
Abstract
We present a system for activity recognition from passive RFID data using a deep convolutional neural network. Instead of selecting features and using a cascade structure that first detects object use from the RFID data and then predicts the activity, we feed the RFID data directly into a deep convolutional neural network. Because our system treats activity recognition as a multi-class classification problem, it is scalable to applications with a large number of activity classes. We tested our system using RFID data collected in a trauma room, including 14 hours of data from 16 actual trauma resuscitations. Our system outperformed existing systems developed for activity recognition and achieved performance on process-phase detection similar to that of systems requiring wearable sensors or manually generated input. We also analyzed the strengths and limitations of our current deep learning architecture for activity recognition from RFID data.
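The multi-class formulation in this abstract can be sketched in a few lines of numpy. This is an illustration only, not the paper's architecture: the tag count (20), window length (60 time steps), number of activity classes (10), and the single linear softmax head standing in for the convolutional network are all assumptions made here. What it shows is the framing itself: windowed RFID signal-strength readings shaped into an image-like input, mapped to a probability distribution over activity classes in one step rather than through a cascade.

```python
import numpy as np

# Hypothetical RFID input: received signal strength (RSSI) from
# 20 passive tags over 60 time steps, framed as one
# single-channel "image" (channels, time, tags) for a network.
rng = np.random.default_rng(1)
rssi = rng.uniform(-80.0, -30.0, size=(60, 20))
x = rssi[np.newaxis, :, :]
assert x.shape == (1, 60, 20)

# Multi-class head (stand-in for the CNN): logits for K activity
# classes, turned into probabilities with a numerically stable softmax.
K = 10
w = rng.standard_normal((x.size, K)) * 0.01
logits = x.reshape(-1) @ w
probs = np.exp(logits - logits.max())
probs /= probs.sum()
assert probs.shape == (K,)
assert abs(probs.sum() - 1.0) < 1e-9
```

Predicting all K classes from one softmax is what makes the formulation scale with the number of activities: adding a class adds one output unit, rather than another stage in a detection cascade.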
Affiliation(s)
- Xinyu Li, Yanyi Zhang, Ivan Marsic
- Department of Electrical and Computer Engineering, Rutgers University, New Brunswick, NJ, USA
- Aleksandra Sarcevic
- College of Computing and Informatics, Drexel University, Philadelphia, PA, USA
- Randall S Burd
- Division of Trauma and Burn Surgery, Children's National Medical Center, Washington, D.C., USA