1. Scurich N, Faigman DL, Albright TD. Scientific guidelines for evaluating the validity of forensic feature-comparison methods. Proc Natl Acad Sci U S A 2023;120:e2301843120. [PMID: 37782809; PMCID: PMC10576079; DOI: 10.1073/pnas.2301843120]
Abstract
When it comes to questions of fact in a legal context, particularly questions about measurement, association, and causality, courts should employ ordinary standards of applied science. Applied sciences generally develop along a path that proceeds from a basic scientific discovery about some natural process to the formation of a theory of how the process works and what causes it to fail, to the development of an invention intended to assess, repair, or improve the process, to the specification of predictions of the instrument's actions and, finally, empirical validation to determine that the instrument achieves the intended effect. These elements are salient and deeply embedded in the cultures of the applied sciences of medicine and engineering, both of which primarily grew from basic sciences. However, the inventions that underlie most forensic science disciplines have few roots in basic science, and they do not have sound theories to justify their predicted actions or results of empirical tests to prove that they work as advertised. Inspired by the "Bradford Hill Guidelines", the dominant framework for causal inference in epidemiology, we set forth four guidelines that can be used to establish the validity of forensic comparison methods generally. This framework is not intended as a checklist establishing a threshold of minimum validity, as no magic formula determines when particular disciplines or hypotheses have passed a necessary threshold. We illustrate how these guidelines can be applied by considering the discipline of firearm and tool mark examination.
Affiliations
- Nicholas Scurich, Department of Psychological Science and Department of Criminology, Law and Society, University of California, Irvine, CA 92697
- David L. Faigman, University of California College of the Law, San Francisco, CA 94102
2. Perceptions of blind proficiency testing among latent print examiners. Sci Justice 2023;63:200-205. [PMID: 36870700; DOI: 10.1016/j.scijus.2022.12.005]
Abstract
In recent years, scholars have levied multiple criticisms against traditional proficiency testing procedures in forensic laboratories. Consequently, on several occasions, authorities have formally recommended that laboratories implement blind proficiency testing procedures. Implementation has been slow, but laboratory management has increasingly expressed interest in initiating blind testing in at least some forensic disciplines, and some laboratories now conduct blind testing in almost all disciplines. However, little is known about how a key population, forensic examiners, perceives blind proficiency testing. We surveyed active latent print examiners (N = 338) to explore perceptions of blind proficiency testing and to determine whether beliefs differ between examiners who work for laboratories with and without blind proficiency testing. Results suggest that examiners do not hold particularly strong beliefs about such procedures, but that examiners who work in laboratories with blind proficiency testing view the procedures significantly more positively than those who do not. Further, examiner responses provide insight into potential obstacles to continued implementation.
3. McAndrew WP, Speaker PJ, Houck MM. Interpol review of forensic management, 2019-2022. Forensic Sci Int Synerg 2023;6:100301. [PMID: 36655022; PMCID: PMC9840933; DOI: 10.1016/j.fsisyn.2022.100301]
Abstract
This paper reviews and summarizes the forensic management literature from late 2019 to late 2022, covering laboratory strategic, tactical, and operational decision-making; benchmarking; quality assurance; and managerial impacts of technological developments.
Affiliations
- William P. McAndrew (corresponding author), Department of Finance and Economics, Dahlkemper School of Business, Gannon University, USA
- Paul J. Speaker, Department of Finance, John Chambers College of Business & Economics, West Virginia University, USA
- Max M. Houck, Global Forensic and Justice Center, Florida International University, USA
4. Growns B, Dunn JD, Helm RK, Towler A, Kukucka J. The low prevalence effect in fingerprint comparison amongst forensic science trainees and novices. PLoS One 2022;17:e0272338. [PMID: 35951612; PMCID: PMC9371274; DOI: 10.1371/journal.pone.0272338]
Abstract
The low prevalence effect is a phenomenon whereby target prevalence affects performance in visual search (e.g., baggage screening) and comparison (e.g., fingerprint examination) tasks, such that people more often fail to detect infrequent target stimuli. For example, when exposed to higher base-rates of 'matching' (i.e., from the same person) than 'non-matching' (i.e., from different people) fingerprint pairs, people more often misjudge 'non-matching' pairs as 'matches', an error that can falsely implicate an innocent person for a crime they did not commit. In this paper, we investigated whether forensic science training may mitigate the low prevalence effect in fingerprint comparison. Forensic science trainees (n = 111) and untrained novices (n = 114) judged 100 fingerprint pairs as 'matches' or 'non-matches' where the matching pair occurrence was either high (90%) or equal (50%). Some participants were also asked to use a novel feature-comparison strategy as a potential attenuation technique for the low prevalence effect. Regardless of strategy, both trainees and novices were susceptible to the effect, such that they more often misjudged non-matching pairs as matches when non-matches were rare. These results support the robust nature of the low prevalence effect in visual comparison and have important applied implications for forensic decision-making in the criminal justice system.
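To make the mechanism concrete, the sketch below (an editorial illustration, not the study's analysis) simulates visual comparison as a signal-detection task in which the decision criterion shifts toward the common response under high match prevalence; the Gaussian score distributions and the size of the criterion shift are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def false_match_rate(prevalence, n_trials=100_000, shift=0.5):
    """Simulate 'match'/'non-match' judgments at a given match prevalence.

    Assumes match scores ~ N(1, 1), non-match scores ~ N(-1, 1), and a
    criterion that relaxes toward 'match' as matches become more common
    (the shift magnitude is an illustrative assumption).
    """
    is_match = rng.random(n_trials) < prevalence
    scores = rng.normal(np.where(is_match, 1.0, -1.0), 1.0)
    criterion = -shift * (prevalence - 0.5) * 2  # zero at 50% prevalence
    called_match = scores > criterion
    # Error rate on the rare category: non-matches judged as 'matches'.
    return called_match[~is_match].mean()

for p in (0.5, 0.9):
    print(f"match prevalence {p:.0%}: false-match rate {false_match_rate(p):.1%}")
```

Under these assumptions, the false-match rate on the rare non-matching pairs rises from roughly 16% at equal prevalence to roughly 27% at 90% match prevalence, mirroring the pattern the study reports.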
Affiliations
- Bethany Growns, College of Social Sciences and International Studies, University of Exeter, Exeter, United Kingdom
- James D. Dunn, School of Psychology, University of New South Wales, Sydney, Australia
- Rebecca K. Helm, College of Social Sciences and International Studies, University of Exeter, Exeter, United Kingdom
- Alice Towler, School of Psychology, University of New South Wales, Sydney, Australia
- Jeff Kukucka, Department of Psychology, Towson University, Towson, MD, United States of America
5. Swapping Carrots for Sticks: Forensic Science Provider Views of the Forensic Regulator Act 2021. Sci Justice 2022;62:506-514. [DOI: 10.1016/j.scijus.2022.07.010]
6. Dorfman AH, Valliant R. Inconclusives, errors, and error rates in forensic firearms analysis: Three statistical perspectives. Forensic Sci Int Synerg 2022;5:100273. [PMID: 35800204; PMCID: PMC9254335; DOI: 10.1016/j.fsisyn.2022.100273]
Abstract
Error rates published in recent open black box studies of forensic firearms examiner performance have been very low, typically below one percent. These low error rates have been challenged, however, as not properly taking into account one of the categories, "Inconclusive", that examiners can reach in comparing a pair of bullets or cartridges. These challenges have themselves been challenged; how to consider inconclusives and their effect on error rates is currently a matter of sharp debate. We review several viewpoints that have been put forth, and then examine the impact of inconclusives on error rates from three fresh statistical perspectives: (a) an ideal perspective using objective measurements combined with statistical algorithms, (b) basic sampling theory and practice, and (c) standards of experimental design in human studies. Our conclusions vary with the perspective: (a) inconclusives can be simple errors (or, on the other hand, simply correct or at least well justified); (b) inconclusives need not be counted as errors to bring into doubt assessments of error rates; (c) inconclusives are potential errors; more explicitly, inconclusives in studies are not necessarily the equivalent of inconclusives in casework and can mask potential errors in casework. From all these perspectives, it is impossible to simply read out trustworthy estimates of error rates from the studies carried out to date. At most, one can put reasonable bounds on the potential error rates, and these are much larger than the nominal rates reported in the studies. Obtaining straightforward, sound estimates of error rates requires a challenging but critical improvement to the design of firearms studies. A proper study, one in which inconclusives are not potential errors and which yields direct, sound estimates of error rates, will require new objective measures or blind proficiency testing embedded in ordinary casework.
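To make the bounding argument concrete, the following sketch uses invented counts (not data from the paper) to show how far apart the nominal error rate and the bounds can sit once inconclusives are acknowledged as potential errors.

```python
# Invented counts for different-source comparisons in a hypothetical
# black box study; for illustration only.
false_id = 2        # different-source pairs called 'identification'
correct_elim = 598  # different-source pairs called 'elimination'
inconclusive = 400  # different-source pairs called 'inconclusive'
total = false_id + correct_elim + inconclusive

# Nominal rate, as typically reported: inconclusives not counted as errors.
nominal = false_id / (false_id + correct_elim)

# Lower bound: every inconclusive was a justified call.
lower = false_id / total

# Upper bound: every inconclusive masked an error that would matter in casework.
upper = (false_id + inconclusive) / total

print(f"nominal false-identification rate: {nominal:.2%}")               # 0.33%
print(f"bounds on the potential error rate: [{lower:.2%}, {upper:.2%}]")  # [0.20%, 40.20%]
```

With these invented counts, the nominal rate stays below one percent while the upper bound is two orders of magnitude larger, which is the sense in which trustworthy error rates cannot simply be read out of existing studies.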
Affiliations
- Alan H. Dorfman, National Center for Health Statistics (retired), Bethesda, MD, 20814-1345, USA
7. Neuman M, Hundl C, Grimaldi A, Eudaley D, Stein D, Stout P. Blind testing in firearms: Preliminary results from a blind quality control program. J Forensic Sci 2022;67:964-974. [PMID: 35349174; PMCID: PMC9313831; DOI: 10.1111/1556-4029.15031]
Abstract
Open proficiency tests meet accreditation requirements and measure examiner competence but may not represent actual casework. In December 2015, the Houston Forensic Science Center began a blind quality control program in firearms examination. Mock cases are created to mimic routine casework so that examiners are unaware they are being tested. Once a blind case is assigned to an examiner, the evidence undergoes microscopic examination and comparison to determine whether the fired evidence submitted was fired in the same firearm. Fifty-one firearms blind cases resulting in 570 analysis and comparison determinations were reported between December 2015 and June 2021. No unsatisfactory results were obtained; however, 40.3% of comparisons in which the ground truth was either elimination or identification resulted in inconclusive conclusions. Given the quality of some of the evidence submitted, inconclusive results were not unexpected. Comparisons with a ground truth of elimination were reported as inconclusive at a rate of 74%, while comparisons with a ground truth of identification were reported as inconclusive at a rate of 31%. Bullets (61.8%) were the main contributors to inconclusive conclusions; variables such as the assigned examiner, training program, examiner experience, and the intended complexity of the case did not significantly contribute to the results. The program demonstrates that the quality management system and firearms section procedures obtain accurate and reliable results, and it provides examiners added confidence in court. Additionally, the program can be tailored to target specific research questions and provide opportunities for collaboration with other laboratories and researchers.
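The headline percentages above are simple conditional rates; the sketch below tabulates invented determination records (the exact per-category splits are not given here, so the counts are chosen only to approximately reproduce the reported 31%, 74%, and 40.3% figures).

```python
from collections import Counter

# Invented (ground_truth, conclusion) records standing in for the 570
# blind QC determinations; counts are illustrative approximations.
records = (
    [("identification", "identification")] * 308
    + [("identification", "inconclusive")] * 139
    + [("elimination", "elimination")] * 32
    + [("elimination", "inconclusive")] * 91
)

counts = Counter(records)
for truth in ("identification", "elimination"):
    total = sum(n for (t, _), n in counts.items() if t == truth)
    inconclusive = counts[(truth, "inconclusive")]
    print(f"{truth}: {inconclusive}/{total} = {inconclusive / total:.0%} inconclusive")

overall = sum(n for (_, c), n in counts.items() if c == "inconclusive")
print(f"overall: {overall}/{len(records)} = {overall / len(records):.1%} inconclusive")
```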
Affiliations
- Maddisen Neuman, Houston Forensic Science Center, Houston, Texas, USA; Center for Statistics and Applications in Forensic Evidence (CSAFE), Ames, Iowa, USA
- Callan Hundl, Houston Forensic Science Center, Houston, Texas, USA
- Donna Eudaley, Houston Forensic Science Center, Houston, Texas, USA
- Darrell Stein, Houston Forensic Science Center, Houston, Texas, USA
- Peter Stout, Houston Forensic Science Center, Houston, Texas, USA
8. Airlie M, Robertson J, Krosch MN, Brooks E. Contemporary issues in forensic science: Worldwide survey results. Forensic Sci Int 2021;320:110704. [PMID: 33581657; DOI: 10.1016/j.forsciint.2021.110704]
Abstract
Forensic science continues to be openly challenged and criticized. The aim of this study was to understand forensic workplaces and the current and potential future issues perceived by forensic scientists via a detailed survey. An online survey was designed and disseminated to forensic practitioners and researchers worldwide; 544 participants from more than 20 countries took part. Participants rated ten forensic disciplines with subjective methodologies, responded on a five-point Likert scale to 22 statements addressing subjectivity and objectivity, validation and proficiency testing, and error and bias, and answered demographic questions about their workplace type, level of experience, and qualifications. Participants also commented freely on forensic issues of specific importance to them. The purpose of this paper is to report the survey results and the consensus captured on several key issues. Overall, participants believed forensic science must be valid and reliable and supported the development of objective methodologies, validation, and further investigation into the application of statistics, the use of error rates, and the implications of cognitive bias. Participants raised consensus concerns about the provision of expert evidence and other broader issues. This information and understanding from the forensic front line are essential for forensic science to best address current challenges and criticisms, not only of forensic evidence for the court but also in applications of forensic science beyond the courtroom.
Affiliations
- Melissa Airlie, University of Canberra, National Centre for Forensic Studies, Faculty of Science and Technology, 11 Kirinari Street, Bruce, ACT, 2617, Australia; Forensic Services Group, Queensland Police Service, Level 4, 200 Roma Street, Brisbane, Queensland, 4000, Australia
- James Robertson, University of Canberra, National Centre for Forensic Studies, Faculty of Science and Technology, 11 Kirinari Street, Bruce, ACT, 2617, Australia
- Matt N Krosch, Forensic Services Group, Queensland Police Service, Level 4, 200 Roma Street, Brisbane, Queensland, 4000, Australia
- Elizabeth Brooks, University of Canberra, National Centre for Forensic Studies, Faculty of Science and Technology, 11 Kirinari Street, Bruce, ACT, 2617, Australia
9. Expert Reliability in Legal Proceedings: "Eeny, Meeny, Miny, Moe, With Which Expert Should We Go?". Sci Justice 2020;61:37-46. [PMID: 33357826; DOI: 10.1016/j.scijus.2020.09.006]
Abstract
Between Expert Reliability refers to the extent to which different experts examining identical evidence make the same observations and reach the same conclusions. Some areas of expert decision making have been shown to entail questions with relatively low Between Expert Reliability, but the disagreement between experts is not always communicated to the legal actors making decisions on the basis of the expert evidence. In this paper, we discuss the issue of Between Expert Reliability in legal proceedings, using forensic age estimation as a case study. Across national as well as international jurisdictions, there is large variation in which experts are hired to conduct age estimations and in the methods they use. At the same time, age estimations can be fully decisive for outcomes, for example in asylum law and criminal law. Using datasets obtained from the Swedish legal context, we identify that radiologists and odontologists examining knee or teeth images to estimate age seem to disagree relatively often both within their own disciplines (radiologist 1 v. radiologist 2, or odontologist 1 v. odontologist 2) and across disciplines (radiologist v. odontologist). This may have large implications, for example in cases where only one expert from the respective field is involved. The paper discusses appropriate ways for legal actors to deal with the possibility of lacking Between Expert Reliability. This is indeed a challenging task, given that legal actors are legal experts but not necessarily scientific experts.
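Disagreement of the kind described here is commonly quantified with chance-corrected agreement statistics; the sketch below computes Cohen's kappa for two hypothetical experts classifying the same cases as 'adult' or 'minor' (the ratings and the two-category coding are invented for illustration, not drawn from the Swedish datasets).

```python
from collections import Counter

# Invented parallel ratings by two experts on the same ten cases.
expert1 = ["adult", "adult", "minor", "adult", "minor",
           "adult", "adult", "minor", "adult", "adult"]
expert2 = ["adult", "minor", "minor", "adult", "adult",
           "adult", "adult", "minor", "minor", "adult"]

n = len(expert1)
observed = sum(a == b for a, b in zip(expert1, expert2)) / n

# Agreement expected by chance, from each expert's marginal frequencies.
c1, c2 = Counter(expert1), Counter(expert2)
expected = sum(c1[k] * c2[k] for k in c1) / n ** 2

kappa = (observed - expected) / (1 - expected)
print(f"observed agreement {observed:.2f}, Cohen's kappa {kappa:.2f}")
```

Raw agreement of 0.70 shrinks to a kappa of about 0.35 once chance agreement is removed, a gap that legal actors rarely see when only one expert per discipline is involved.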
10. Dror IE. The Error in "Error Rate": Why Error Rates Are So Needed, Yet So Elusive. J Forensic Sci 2020;65:1034-1039. [PMID: 32315087; DOI: 10.1111/1556-4029.14435]
Abstract
Establishing error rates is crucial for knowing how well one is performing, determining whether improvement is needed, measuring whether interventions are effective, as well as for providing transparency. However, the flurry of activities in establishing error rates for the forensic sciences has largely overlooked some fundamental issues that make error rates a problematic construct and limit the ability to obtain a meaningful error rate. These include knowing the ground truth, establishing appropriate databases, determining what counts as an error, characterizing what is an acceptable error rate, ecological validity, and transparency within the adversarial legal system. Without addressing these practical and theoretical challenges, the very notion of a meaningful error rate is limited.
Affiliations
- Itiel E Dror, UCL Centre for the Forensic Sciences, University College London, London, U.K.; Cognitive Consultants International (CCI-HQ), London, U.K.
11. Dror IE, Scurich N. (Mis)use of scientific measurements in forensic science. Forensic Sci Int Synerg 2020;2:333-338. [PMID: 33385131; PMCID: PMC7770438; DOI: 10.1016/j.fsisyn.2020.08.006]
Abstract
Forensic science error rate studies have not given sufficient attention or weight to inconclusive evidence and inconclusive decisions. Inconclusive decisions can be correct decisions, but they can also be incorrect decisions. Errors can occur when inconclusive evidence is determined as an identification or exclusion, or conversely, when same- or different-source evidence is incorrectly determined as inconclusive. We present four common flaws in error rate studies: 1. Not including test items which are more prone to error; 2. Excluding inconclusive decisions from error rate calculations; 3. Counting inconclusive decisions as correct in error rate calculations; and 4. Examiners resorting to more inconclusive decisions during error rate studies than they do in casework. These flaws seriously undermine the credibility and accuracy of error rates reported in studies. To remedy these shortcomings, we present the problems and show the way forward by providing a corrected experimental design that quantifies error rates more accurately.
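Flaws 2 and 3 are easy to see numerically; the sketch below applies three treatments of inconclusives to the same invented set of different-source comparisons (the counts, and the third treatment shown, are illustrative choices, not the paper's corrected experimental design).

```python
# Invented results for 100 different-source comparisons.
exclusions, identifications, inconclusives = 70, 3, 27

# Flaw 2: inconclusives dropped from the denominator.
rate_excluded = identifications / (exclusions + identifications)

# Flaw 3: inconclusives counted as correct decisions.
rate_as_correct = identifications / 100

# One alternative treatment: inconclusives on decidable evidence counted
# as errors, exposing the potential errors they can otherwise mask.
rate_as_error = (identifications + inconclusives) / 100

print(f"inconclusives excluded:   {rate_excluded:.1%}")    # 4.1%
print(f"inconclusives as correct: {rate_as_correct:.1%}")  # 3.0%
print(f"inconclusives as errors:  {rate_as_error:.1%}")    # 30.0%
```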
Affiliations
- Itiel E. Dror (corresponding author), University College London (UCL), 35 Tavistock Square, London, WC1H 9EZ, U.K.
- Nicholas Scurich, University of California, Irvine, 4312 Social and Behavioral Sciences Gateway, Irvine, CA, 92697, USA