1
Yu J, Jin X, Du W, Bai Y, Zhou X, Gao M, Li S, Qin J, Chen X, Liu Y, Yu J, Chen C, Xie Q, Xie S, Kong X, Zhan W, Yu Y, Li K, Ji Q, Chen F, Chen P. Unveiling facial kinship: The BioKinVis dataset for facial kinship verification and genetic association studies. Electrophoresis 2024; 45:794-804. [PMID: 38161244] [DOI: 10.1002/elps.202300169] [Received: 07/31/2023] [Revised: 12/01/2023] [Accepted: 12/13/2023]
Abstract
Facial image-based kinship verification represents a burgeoning frontier within the realms of computer vision and biomedicine. Recent genome-wide association studies have underscored the heritability of human facial morphology, revealing its predictability based on genetic information. These revelations form a robust foundation for advancing facial image-based kinship verification. Despite strides in computer vision, there remains a discernible gap between the biomedical and computer vision domains. Notably, the absence of family photo datasets established through biological paternity testing methods poses a significant challenge. This study addresses this gap by introducing the biological kinship visualization dataset, encompassing 5773 individuals from 2412 families with biologically confirmed kinship. Our analysis delves into the distribution and influencing factors of facial similarity among parent-child pairs, probing the potential association between forensic short tandem repeat polymorphisms and facial similarity. Additionally, we have developed a machine learning model for facial image-based kinship verification, achieving an accuracy of 0.80 on the dataset. To facilitate further exploration, we have established an online tool and database, accessible at http://120.55.161.230:88/.
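The abstract does not specify the model's internals, but facial kinship verifiers of this kind are commonly built on embedding similarity: extract a feature vector per face and declare kinship when the vectors are close enough. The sketch below illustrates that pattern only; the 128-dimensional vectors stand in for real face embeddings, and the 0.5 threshold is an illustrative assumption, not a value from the paper.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_kinship(emb_a: np.ndarray, emb_b: np.ndarray,
                   threshold: float = 0.5) -> bool:
    """Declare a kin relation when embedding similarity exceeds a threshold."""
    return cosine_similarity(emb_a, emb_b) >= threshold

# Stand-in embeddings: the "child" vector shares most of the "parent"
# vector's variation, while the "unrelated" vector is drawn independently.
rng = np.random.default_rng(0)
parent = rng.standard_normal(128)
child = 0.8 * parent + 0.2 * rng.standard_normal(128)
unrelated = rng.standard_normal(128)

print(verify_kinship(parent, child))      # highly similar pair
print(verify_kinship(parent, unrelated))  # near-orthogonal pair
```

In practice the embeddings would come from a face-recognition network and the threshold would be tuned on labeled kin/non-kin pairs.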
Affiliation(s)
- Jian Yu, Xiaozhe Jin, Weijie Du, Yantong Bai, Xin Zhou, Mengli Gao, Shuwen Li, Jiarui Qin, Xuanlong Chen, Yuhao Liu, Jianing Yu, Chen Chen, Qiheng Xie, Sumei Xie, Xiaochao Kong, Wenxuan Zhan, Yanfang Yu, Kai Li, Qiang Ji, Feng Chen, Peng Chen: Department of Forensic Medicine, Nanjing Medical University, Nanjing, Jiangsu, P. R. China
2
Wu X, Zhang X, Feng X, Bordallo Lopez M, Liu L. Audio-Visual Kinship Verification: A New Dataset and a Unified Adaptive Adversarial Multimodal Learning Approach. IEEE Transactions on Cybernetics 2024; 54:1523-1536. [PMID: 36417714] [DOI: 10.1109/tcyb.2022.3220040]
Abstract
Facial kinship verification refers to automatically determining whether two people have a kin relation from their faces. It has become a popular research topic due to potential practical applications. Over the past decade, many efforts have been devoted to improving verification performance from human faces alone, without other biometric information such as speaking voice. In this article, to interpret and benefit from multiple modalities, we propose for the first time to combine human faces and voices to verify kinship, which we refer to as audio-visual kinship verification. We first establish a comprehensive audio-visual kinship dataset, called TALKIN-Family, that consists of familial talking facial videos under various scenarios. Based on this dataset, we present an extensive evaluation of kinship verification from faces and voices. In particular, we propose a deep-learning-based fusion method, called unified adaptive adversarial multimodal learning (UAAML). It consists of an adversarial network and an attention module built on unified multimodal features. Experiments show that audio (voice) information is complementary to facial features and useful for the kinship verification problem. Furthermore, the proposed fusion method outperforms baseline methods. In addition, we evaluate human verification ability on a subset of TALKIN-Family. The results indicate that humans achieve higher accuracy when they have access to both faces and voices, and that machine-learning methods can effectively and efficiently outperform human ability. Finally, we discuss future work and research opportunities with the TALKIN-Family dataset.
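The abstract's central claim, that voice evidence is complementary to face evidence, can be illustrated with the simplest possible fusion: a weighted combination of per-modality kinship scores. This is not the UAAML method (which fuses learned features adversarially with attention); the fixed weights and the 0.5 decision threshold below are illustrative assumptions.

```python
import numpy as np

def fuse_scores(face_score: float, voice_score: float,
                face_weight: float = 0.6, voice_weight: float = 0.4) -> float:
    """Score-level fusion: normalised weighted sum of per-modality scores."""
    w = np.array([face_weight, voice_weight])
    w = w / w.sum()  # ensure the modality weights sum to 1
    return float(w @ np.array([face_score, voice_score]))

def verify(face_score: float, voice_score: float,
           threshold: float = 0.5) -> bool:
    """Accept a kin hypothesis when the fused score clears the threshold."""
    return fuse_scores(face_score, voice_score) >= threshold

# A pair whose faces are ambiguous (0.45) but whose voices are clearly
# similar (0.90) is accepted once the voice evidence is fused in:
print(verify(0.45, 0.90))  # True:  0.6*0.45 + 0.4*0.90 = 0.63
print(verify(0.45, 0.20))  # False: 0.6*0.45 + 0.4*0.20 = 0.35
```

Feature-level fusion, as in the paper's method, instead combines modality embeddings before scoring, which lets the weighting adapt per sample rather than stay fixed.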