1
Lai DKH, Cheng ESW, Mao YJ, Zheng Y, Yao KY, Ni M, Zhang YQ, Wong DWC, Cheung JCW. Sonoelastography for Testicular Tumor Identification: A Systematic Review and Meta-Analysis of Diagnostic Test Accuracy. Cancers (Basel) 2023; 15:3770. [PMID: 37568585; PMCID: PMC10417060; DOI: 10.3390/cancers15153770]
Abstract
The objective of this review was to summarize the applications of sonoelastography in testicular tumor identification and to examine their diagnostic test performance. Two authors independently searched English-language journal articles and full conference papers from CINAHL, Embase, IEEE Xplore®, PubMed, Scopus, and Web of Science from inception, and organized them into a PIRO (patient, index test, reference test, outcome) framework. Eleven studies (n = 11) were eligible for data synthesis, nine of which (n = 9) used strain elastography and two of which (n = 2) used shear-wave elastography. Meta-analyses were performed on the distinction between neoplasm (tumor) and non-neoplasm (non-tumor) from four study arms, and between malignancy and benignity from seven study arms. The pooled sensitivity for classifying malignancy versus benignity was 86.0% (95% CI, 79.7% to 90.6%). There was substantial heterogeneity in the classification of neoplasm versus non-neoplasm and in the specificity of classifying malignancy versus benignity, which could not be resolved by a subgroup analysis of sonoelastography techniques. The heterogeneity may be associated with the high risk of bias and applicability concerns, including a wide spectrum of testicular pathologies and verification bias in the reference tests. Key technical obstacles in the index test were manual compression in strain elastography, qualitative observation of non-standardized color codes, and locating the regions of interest (ROIs), in addition to decisions in feature extraction. Future research may focus on multiparametric sonoelastography using deep learning models and ensemble learning. A decision model on the benefit-risk of surgical exploration (the reference test) could also be developed to direct the test-and-treat strategy for testicular tumors.
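For context, a pooled estimate like the 86.0% sensitivity with its 95% CI is typically produced by inverse-variance pooling of logit-transformed proportions under a random-effects model. A minimal sketch of that calculation, using invented per-study counts rather than the data from this review:

```python
import math

# Hypothetical per-study counts (true positives, false negatives);
# illustrative only, not the studies from the review.
studies = [(43, 5), (28, 6), (51, 9), (19, 2), (64, 11), (33, 4), (22, 5)]

# Logit-transform each study's sensitivity with its within-study variance.
# A 0.5 continuity correction guards against zero cells.
logits, variances = [], []
for tp, fn in studies:
    a, b = tp + 0.5, fn + 0.5
    logits.append(math.log(a / b))
    variances.append(1 / a + 1 / b)

# DerSimonian-Laird random-effects pooling.
w = [1 / v for v in variances]
fixed = sum(wi * yi for wi, yi in zip(w, logits)) / sum(w)
q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, logits))
df = len(studies) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)                  # between-study variance
w_re = [1 / (v + tau2) for v in variances]
pooled = sum(wi * yi for wi, yi in zip(w_re, logits)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))

# Back-transform the pooled logit and its 95% CI to the probability scale.
inv_logit = lambda x: 1 / (1 + math.exp(-x))
sens = inv_logit(pooled)
lo, hi = inv_logit(pooled - 1.96 * se), inv_logit(pooled + 1.96 * se)
print(f"pooled sensitivity {sens:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```

Pooling on the logit scale keeps the back-transformed confidence interval inside the 0-1 range, which is why reported intervals like 79.7% to 90.6% are asymmetric around the point estimate.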
Affiliation(s)
- Derek Ka-Hei Lai
- Department of Biomedical Engineering, Faculty of Engineering, The Hong Kong Polytechnic University, Hong Kong, China
- Ethan Shiu-Wang Cheng
- Department of Electronic and Information Engineering, Faculty of Engineering, The Hong Kong Polytechnic University, Hong Kong, China
- Ye-Jiao Mao
- Department of Biomedical Engineering, Faculty of Engineering, The Hong Kong Polytechnic University, Hong Kong, China
- Yi Zheng
- Department of Biomedical Engineering, Faculty of Engineering, The Hong Kong Polytechnic University, Hong Kong, China
- Ke-Yu Yao
- Department of Materials, Imperial College, London SW7 2AZ, UK
- Ming Ni
- Department of Orthopaedics, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200240, China
- Laboratory of Prevention and Treatment of Bone and Joint Diseases, Shanghai Institute of Traumatology and Orthopaedics, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200240, China
- Ying-Qi Zhang
- Department of Orthopaedics, Tongji Hospital, School of Medicine, Tongji University, Shanghai 200065, China
- Duo Wai-Chi Wong
- Department of Biomedical Engineering, Faculty of Engineering, The Hong Kong Polytechnic University, Hong Kong, China
- James Chung-Wai Cheung
- Department of Biomedical Engineering, Faculty of Engineering, The Hong Kong Polytechnic University, Hong Kong, China
- Research Institute of Smart Ageing, The Hong Kong Polytechnic University, Hong Kong, China
2
Different evidence summaries have implications for contextualizing findings of meta-analysis of diagnostic tests. J Clin Epidemiol 2019; 109:51-61. [PMID: 30654146; DOI: 10.1016/j.jclinepi.2019.01.002]
Abstract
OBJECTIVE To evaluate diagnostic tests, analysts use meta-analyses to provide inputs to parameters in decision models. Choosing parameter estimands from meta-analyses requires understanding both the meta-analytic and the decision-making contexts. STUDY DESIGN AND SETTING We expand on an analysis comparing positron emission tomography (PET), PET with computed tomography (PET/CT), and conventional workup (CW) in women with suspected recurrent breast cancer. We discuss Bayesian meta-analytic summaries (the posterior mean over a set of existing studies, the posterior estimate in an existing study, and the posterior predictive mean in a new study) used to estimate the diagnostic test parameters (prevalence, sensitivity, specificity) needed to calculate quality-adjusted life years in a decision model contextualizing PET, PET/CT, and CW. RESULTS The mean and the predictive mean give similar estimates, but the latter carries greater uncertainty: PET/CT outperforms CW on average but may not do better than CW when implemented in future settings. CONCLUSION Selecting estimands for decision model parameters from meta-analyses requires understanding the relationship between the decision setting and the settings of the meta-analyzed studies, specifically whether the former resembles one study setting, all study settings, or a new setting. We provide an algorithm recommending appropriate estimands as input parameters in decision models for diagnostic tests, so that the output parameters are consistent with the decision context.
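The distinction between the mean summary and the predictive summary can be sketched numerically. The following uses a simple method-of-moments calculation as a stand-in for the paper's fully Bayesian analysis, with invented study effects; it shows why the predictive summary for a new setting carries extra uncertainty (the between-study variance is added back in):

```python
import math

# Illustrative per-study effects and within-study variances on the logit
# scale (invented; not the PET/CT data from the paper).
y  = [1.8, 2.1, 1.5, 2.4, 1.9]
s2 = [0.10, 0.08, 0.12, 0.09, 0.11]

# Method-of-moments between-study variance (DerSimonian-Laird).
w = [1 / v for v in s2]
mu_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
q = sum(wi * (yi - mu_fixed) ** 2 for wi, yi in zip(w, y))
c = sum(w) - sum(wi * wi for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(y) - 1)) / c)

# Summary for the mean over study settings versus the predictive summary
# for a NEW setting: same point estimate, wider uncertainty.
w_re = [1 / (v + tau2) for v in s2]
mu = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
var_mean = 1 / sum(w_re)          # uncertainty in the average effect
var_pred = var_mean + tau2        # uncertainty for a new setting

print(f"mean summary:       {mu:.2f} +/- {1.96 * math.sqrt(var_mean):.2f}")
print(f"predictive summary: {mu:.2f} +/- {1.96 * math.sqrt(var_pred):.2f}")
```

The widened predictive interval is the mechanism behind the paper's result: a test that outperforms its comparator on average across existing studies may not do so in a future setting.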
3
Deverka P, Messner DA, McCormack R, Lyman GH, Piper M, Bradley L, Parkinson D, Nelson D, McLeod HL, Smith ML, Jacques L, Dutta T, Tunis SR. Generating and evaluating evidence of the clinical utility of molecular diagnostic tests in oncology. Genet Med 2015; 18:780-7. [PMID: 26633547; DOI: 10.1038/gim.2015.162]
Abstract
PURPOSE Enthusiasm for molecular diagnostic (MDx) testing in oncology is constrained by gaps in the required evidence regarding its impact on patient outcomes, i.e., its clinical utility (CU). This effectiveness guidance document proposes recommendations for the design and evaluation of studies intended to meet the evidence expectations of payers while also reflecting the information needs of patients and clinicians. METHODS Our process included literature reviews and key-informant interviews, followed by iterative virtual and in-person consultation with an expert technical working group and an advisory group comprising life-sciences industry experts, public and private payers, patients, clinicians, regulators, researchers, and other stakeholders. RESULTS Treatment decisions in oncology represent high-risk clinical decision making, and the recommendations therefore give preference to randomized controlled trials (RCTs) for demonstrating CU. The guidance also describes circumstances under which alternatives to RCTs could be considered, specifying the conditions under which test developers could use prospective-retrospective studies with banked biospecimens, single-arm studies, prospective observational studies, or decision-analytic modeling techniques that make a reasonable case for CU. CONCLUSION Using a process driven by multiple stakeholders, we developed a common framework for designing and evaluating studies of the clinical validity and CU of MDx tests, achieving a balance between the internal validity of the studies and the relevance, feasibility, and timeliness of generating the desired evidence.
Affiliation(s)
- Donna A Messner
- Center for Medical Technology Policy, Baltimore, Maryland, USA
- Gary H Lyman
- Division of Medical Oncology, Fred Hutchinson Cancer Research Center, University of Washington, Seattle, Washington, USA
- Margaret Piper
- Center for Health Research, Kaiser Permanente Northwest, Portland, Oregon, USA
- Linda Bradley
- Department of Pathology and Laboratory Medicine, Alpert Medical School, Brown University, Providence, Rhode Island, USA
- David Parkinson
- New Enterprise Associates, Inc., Menlo Park, California, USA
- Tania Dutta
- Center for Medical Technology Policy, Baltimore, Maryland, USA
- Sean R Tunis
- Center for Medical Technology Policy, Baltimore, Maryland, USA
4
Dahabreh IJ, Gatsonis C. A flexible, multifaceted approach is needed in health technology assessment of PET. J Nucl Med 2014; 55:1225-7. [PMID: 25047328; DOI: 10.2967/jnumed.114.142331]
5
A framework for crafting clinical practice guidelines that are relevant to the care and management of people with multimorbidity. J Gen Intern Med 2014; 29:670-9. [PMID: 24442332; PMCID: PMC3965742; DOI: 10.1007/s11606-013-2659-y]
Abstract
Many patients of all ages have multiple conditions, yet clinicians often lack explicit guidance on how to approach clinical decision-making for such people. Most recommendations from clinical practice guidelines (CPGs) focus on the management of single diseases, and may be harmful or impractical for patients with multimorbidity. A major barrier to the development of guidance for people with multimorbidity stems from the fact that the evidence underlying CPGs derives from studies predominantly focused on the management of a single disease. In this paper, the investigators from the Improving Guidelines for Multimorbid Patients Study Group present consensus-based recommendations for guideline developers to make guidelines more useful for the care of people with multimorbidity. In an iterative process informed by review of key literature and experience, we drafted a list of issues and possible approaches for addressing important coexisting conditions in each step of the guideline development process, with a focus on considering relevant interactions between the conditions, their treatments and their outcomes. The recommended approaches address consideration of coexisting conditions at all major steps in CPG development, from nominating and scoping the topic, commissioning the work group, refining key questions, ranking importance of outcomes, conducting systematic reviews, assessing quality of evidence and applicability, summarizing benefits and harms, to formulating recommendations and grading their strength. The list of issues and recommendations was reviewed and refined iteratively by stakeholders. This framework acknowledges the challenges faced by CPG developers who must make complex judgments in the absence of high-quality or direct evidence. These recommendations require validation through implementation, evaluation and refinement.
6
O'Connor A, Lovei GL, Eales J, Frampton G, Glanville J, Pullin A, Sargeant J. Implementation of systematic reviews in EFSA scientific outputs workflow. EFSA Supporting Publications 2012. [DOI: 10.2903/sp.efsa.2012.en-367]
Affiliation(s)
- G.K. Frampton
- Southampton Health Technology Assessments Centre, University of Southampton, UK
7
Abstract
Synthesizing information on test performance metrics such as sensitivity, specificity, predictive values, and likelihood ratios is often an important part of a systematic review of a medical test. Because many metrics of test performance are of interest, the meta-analysis of medical tests is more complex than the meta-analysis of interventions or associations. Sometimes a helpful way to summarize medical test studies is to provide a "summary point": a summary sensitivity and a summary specificity. At other times, when the sensitivity or specificity estimates vary widely or when the test threshold varies, it is more helpful to synthesize the data using a "summary line" that describes how the average sensitivity changes with the average specificity. Choosing the most helpful summary is subjective, and in some cases both summaries provide meaningful and complementary information. Because sensitivity and specificity are not independent across studies, the meta-analysis of medical tests is fundamentally a multivariate problem and should be addressed with multivariate methods. More complex analyses are needed when studies report results at multiple thresholds for a positive test. At the same time, quantitative analyses are used to explore and explain any observed dissimilarity (heterogeneity) in the results of the examined studies. This can be performed in the context of proper (multivariate) meta-regressions.
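The "summary line" idea can be illustrated with the classic Moses-Littenberg regression, a simpler precursor of the multivariate (bivariate) models the abstract recommends; the (sensitivity, specificity) pairs below are invented for illustration:

```python
import math

logit = lambda p: math.log(p / (1 - p))

# Hypothetical (sensitivity, specificity) pairs from studies using
# different positivity thresholds; illustrative values only.
pairs = [(0.95, 0.60), (0.90, 0.70), (0.85, 0.78), (0.78, 0.85), (0.70, 0.92)]

# Moses-Littenberg: regress D = logit(sens) + logit(spec) (the log
# diagnostic odds ratio) on S = logit(sens) - logit(spec), a proxy for
# the threshold. A full analysis would instead fit a bivariate
# random-effects model, which handles the sens/spec correlation properly.
D = [logit(se) + logit(sp) for se, sp in pairs]
S = [logit(se) - logit(sp) for se, sp in pairs]
n = len(pairs)
s_bar, d_bar = sum(S) / n, sum(D) / n
b = sum((si - s_bar) * (di - d_bar) for si, di in zip(S, D)) / \
    sum((si - s_bar) ** 2 for si in S)
a = d_bar - b * s_bar

# The fitted line traces how average sensitivity trades off against
# average specificity as the threshold moves (an SROC curve).
curve = []
for spec in (0.60, 0.75, 0.90):
    ls = (a - (1 + b) * logit(spec)) / (1 - b)   # solve the line for sens
    sens = 1 / (1 + math.exp(-ls))
    curve.append((spec, sens))
    print(f"specificity {spec:.2f} -> summary sensitivity {sens:.3f}")
```

The downward-sloping curve is the "summary line" the abstract contrasts with a single summary point: as the implied threshold demands higher specificity, the summary sensitivity falls.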
8
Abstract
INTRODUCTION Grading the strength of a body of diagnostic test evidence involves challenges over and above those of grading the evidence from health care intervention studies. This chapter identifies those challenges and outlines principles for grading the body of evidence on diagnostic test performance. CHALLENGES Diagnostic test evidence is challenging to grade because standard tools for grading evidence were designed for questions about treatment rather than diagnostic testing, and because the clinical usefulness of a diagnostic test depends on multiple links in a chain of evidence connecting the performance of a test to changes in clinical outcomes. PRINCIPLES Reviewers grading the strength of a body of evidence on diagnostic tests should consider the principal domains of risk of bias, directness, consistency, and precision, as well as publication bias, dose-response association, plausible unmeasured confounders that would decrease an effect, and strength of association, similar to what is done to grade evidence on treatment interventions. Given that most evidence regarding the clinical value of diagnostic tests is indirect, an analytic framework must be developed to clarify the key questions, and the strength of evidence for each link in that framework should be graded separately. However, if reviewers choose to combine domains into a single grade of evidence, they should explain their rationale for the particular summary grade and the relevant domains that were weighed in assigning it.