Interrater reliability between in-person and telemedicine evaluations in obstructive sleep apnea. J Clin Sleep Med 2021;17:1435-1440. [PMID: 33687321; PMCID: PMC8314612; DOI: 10.5664/jcsm.9220]
[Received: 11/04/2020] [Revised: 02/19/2021] [Accepted: 02/19/2021] [Indexed: 11/13/2022]
Abstract
STUDY OBJECTIVES
We examined how telemedicine evaluation compares to face-to-face evaluation in identifying risk for sleep-disordered breathing.
METHODS
This was a randomized interrater reliability study of 90 participants referred to a university sleep center. Each participant was first evaluated in person by a clinician investigator and then randomized to a second clinician investigator, who performed the evaluation remotely via audio-video conferencing. The primary comparator was pretest probability for obstructive sleep apnea.
RESULTS
The primary outcome, pretest probability for obstructive sleep apnea, showed a weighted kappa value of 0.414 (standard error 0.090, P = .002), suggesting moderate agreement between the 2 raters. Kappa values for secondary outcomes varied widely but were generally lower for physical examination findings than for historical elements.
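For readers unfamiliar with the statistic, a linear-weighted Cohen's kappa credits partial agreement between ordinal ratings (e.g., low/intermediate/high pretest probability). The sketch below, using entirely synthetic ratings (not the study's data), shows one way to compute it:

```python
# Illustrative only: linear-weighted Cohen's kappa on synthetic ratings.
# Categories are ordinal, e.g. 0 = low, 1 = intermediate, 2 = high
# pretest probability; the rating lists below are hypothetical.

def weighted_kappa(rater_a, rater_b, n_categories):
    """Linear-weighted Cohen's kappa for two raters on ordinal categories."""
    n = len(rater_a)
    # Observed joint proportions
    obs = [[0.0] * n_categories for _ in range(n_categories)]
    for a, b in zip(rater_a, rater_b):
        obs[a][b] += 1.0 / n
    # Marginal distributions for each rater
    pa = [sum(obs[i][j] for j in range(n_categories)) for i in range(n_categories)]
    pb = [sum(obs[i][j] for i in range(n_categories)) for j in range(n_categories)]
    # Linear disagreement weights: |i - j| / (k - 1)
    num = den = 0.0
    for i in range(n_categories):
        for j in range(n_categories):
            w = abs(i - j) / (n_categories - 1)
            num += w * obs[i][j]            # observed weighted disagreement
            den += w * pa[i] * pb[j]        # chance-expected weighted disagreement
    return 1.0 - num / den

# Hypothetical ratings from two raters across 10 participants
a = [0, 1, 2, 2, 1, 0, 2, 1, 0, 2]
b = [0, 1, 1, 2, 2, 0, 2, 0, 0, 2]
print(round(weighted_kappa(a, b, 3), 3))  # → 0.681
```

Values near 0.41-0.60 are conventionally read as moderate agreement, which is how the study's 0.414 is interpreted.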
CONCLUSIONS
Telemedicine evaluation of pretest probability for obstructive sleep apnea showed moderate interrater agreement with in-person assessment. The low interrater reliability for physical exam elements suggests that telemedicine assessment for obstructive sleep apnea may be hampered by a suboptimal physical exam. Employing standardized scales for obstructive sleep apnea during telemedicine evaluations may aid risk stratification and ultimately lead to more tailored clinical management.