Diagnostic Accuracy of Virtual Cognitive Assessment and Testing

Systematic Review and Meta-analysis

Jennifer A. Watt MD PhD; Natasha E. Lane MD PhD; Areti Angeliki Veroniki PhD; Manav V. Vyas MBBS MSc; Chantal Williams MSc; Naveeta Ramkissoon MPH; Yuan Thompson PhD; Andrea C. Tricco PhD; Sharon E. Straus MD MSc; Zahra Goodarzi MD MSc


J Am Geriatr Soc. 2021;69(6):1429-1440. 

Our systematic review is a comprehensive synthesis of studies comparing the diagnostic accuracy of videoconference with in-person cognitive assessments and of virtual cognitive test cutoffs suggestive of dementia or MCI. Further, we qualitatively synthesized barriers and facilitators to virtual cognitive assessment. We found three studies demonstrating that videoconference-based cognitive assessments have good accuracy compared with in-person cognitive assessments, but we did not identify any studies comparing the accuracy of telephone-based with in-person cognitive assessments (based on established criteria such as the DSM)—this is an important knowledge gap given that two-thirds of older adults received virtual care via telephone during the COVID-19 pandemic.[46] However, our systematic review and meta-analysis identified thresholds suggestive of cognitive impairment based on the TICS (the most studied telephone-based cognitive test in our systematic review). Cognitive tests such as the TICS, coupled with appropriate inquiry into functional ability, could aid clinicians in completing telephone-based cognitive assessments, but the baseline prevalence of dementia or MCI (which differs across nursing home, memory clinic, and primary care settings) will affect the post-test probability of diagnosis. Although scores from other virtual cognitive tests (e.g., MMSE, MoCA) demonstrated moderate-to-high correlation with those obtained in person, there was substantial variability across studies, and relatively few studies reported cutoffs consistent with dementia or MCI. Identified barriers and facilitators to virtual cognitive assessment and testing (e.g., hearing impairment, presence of a caregiver to support technology use), or the length of time between in-person and virtual cognitive assessments and testing, may explain some of this variability. 
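The dependence of post-test probability on baseline prevalence can be illustrated with a short Bayesian calculation. The sensitivity, specificity, and prevalence figures below are hypothetical, chosen only to show the arithmetic; they are not estimates from this review.

```python
def post_test_probability(sensitivity, specificity, prevalence):
    """Probability of disease given a positive test (positive predictive
    value), via Bayes' theorem:
      P(disease | +) = sens*prev / (sens*prev + (1 - spec)*(1 - prev))
    """
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Hypothetical test with 90% sensitivity and 85% specificity, applied in a
# high-prevalence setting (e.g., memory clinic, 50%) versus a low-prevalence
# setting (e.g., primary care, 10%):
print(round(post_test_probability(0.90, 0.85, 0.50), 2))  # 0.86
print(round(post_test_probability(0.90, 0.85, 0.10), 2))  # 0.40
```

With identical test characteristics, the same positive result carries a much lower post-test probability of dementia in a low-prevalence setting, which is why the practice setting matters when interpreting a score below a cutoff.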
Our results are timely and important given renewed and growing interest in conducting virtual cognitive assessments and testing: there is a large evidence base supporting virtual cognitive testing and assessments, but there are also important knowledge gaps to be filled.

Our systematic review highlights the diagnostic accuracy of a cognitive test that may be unfamiliar to clinicians—the TICS.[5] The TICS was adapted from and correlates highly with the MMSE (Pearson correlation 0.94, p < 0.0001).[5,7] The TICS comprises 11 tasks designed to assess the cognitive domains of orientation, memory, attention/calculation, and language.[5] In the initial TICS validation study, Brandt et al. enrolled 100 patients with dementia (mean TICS 13.2 [SD 8.53], range 0–31) and 33 control patients (mean TICS 35.79 [SD 1.75], range 31–39).[5] They chose a cutoff of fewer than 31 points as supportive of cognitive impairment, which corresponded to a sensitivity of 94% and a specificity of 100% for identifying dementia.[5] However, subsequent studies enrolled patient populations in which (1) the difference in mean TICS scores between those with and without dementia was smaller, (2) there was greater variability in TICS scores within groups of patients with and without dementia, and (3) mean TICS scores in persons with dementia were lower than those in the study by Brandt et al.[21,23,25] Two subgroup analyses in our systematic review and meta-analysis suggest that language and education may be important determinants of optimal TICS cutoffs supportive of a diagnosis of dementia: 27 was the optimal cutoff where the mean years of formal education exceeded 8, and 29 was the optimal cutoff where cognitive testing was conducted in English. Greater diversity in study patient populations may at least partially explain the lower optimal TICS cutoff identified in our meta-analysis.
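How a chosen cutoff translates into sensitivity and specificity can be sketched on score lists. The scores below are invented for illustration; they are not the Brandt et al. data.

```python
def cutoff_accuracy(dementia_scores, control_scores, cutoff):
    """Scores below `cutoff` are flagged as impaired.
    Sensitivity: fraction of dementia cases correctly flagged.
    Specificity: fraction of controls correctly not flagged.
    """
    sensitivity = sum(s < cutoff for s in dementia_scores) / len(dementia_scores)
    specificity = sum(s >= cutoff for s in control_scores) / len(control_scores)
    return sensitivity, specificity

# Hypothetical TICS-like scores (0-41 scale), for illustration only.
dementia = [8, 14, 20, 25, 28, 30]
controls = [31, 33, 34, 36, 38]

print(cutoff_accuracy(dementia, controls, 31))  # (1.0, 1.0)
print(cutoff_accuracy(dementia, controls, 27))  # sensitivity drops to 4/6
```

Lowering the cutoff trades sensitivity for specificity: in populations with lower and more variable scores among persons with dementia, the cutoff that best balances the two will differ from the original validation sample, which is consistent with the subgroup findings above.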

Barriers and facilitators described by study authors illustrate key considerations for clinicians who are adapting their practices to incorporate virtual cognitive assessments and virtual cognitive testing. Clinicians and patients engaged in virtual cognitive testing experienced some barriers similar to those present in face-to-face cognitive assessments and testing (e.g., the impact of culture, education, and language on the conduct and interpretation of cognitive testing). Other barriers are unique to the virtual environment: lack of access to or familiarity with videoconferencing technology, loss of certain nonverbal cues that suggest potential cognitive impairment, and a lesser ability to prevent disruptions that might occur outside of the clinician's office. Barriers and facilitators associated with telemedicine use by patients and frontline staff have been previously described (e.g., involvement of support persons to facilitate assessment, internet or phone availability, and lack of training).[34,35] We have added to this evidence by highlighting unique considerations for conducting cognitive testing in persons who may have cognitive or sensory impairment.[47,48] Clinicians will need to tailor their approach to virtual cognitive testing to ensure that poor performance reflects patients' cognitive abilities rather than challenges imposed by the testing environment.

Our systematic review and meta-analysis has several limitations. First, important considerations in understanding the validity and reliability of each cognitive test's diagnostic accuracy, including the training of persons administering tests or the baseline prevalence of cognitive impairment, were not always reported. Second, in many cases, the type of dementia or MCI being assessed was not specified or multiple types were assessed concurrently; we could not comment on the diagnostic accuracy of the TICS in persons with Alzheimer's disease or vascular dementia, specifically. Lastly, identified barriers and facilitators associated with virtual cognitive testing describe only those reported by study authors in their clinical setting, which may not generalize to the experiences of patients, caregivers, and clinicians across different care settings.

In conclusion, although there is evidence supporting virtual cognitive testing and assessments, important knowledge gaps related to telephone assessments, in particular, must be filled because many older adults have not been able to access videoconference assessments during the COVID-19 pandemic.[46,49] The TICS and modified TICS are more extensively validated than other virtual cognitive tests, but virtual cognitive tests familiar to clinicians (e.g., MMSE and MoCA) demonstrated moderate-to-high correlation with in-person test versions and should be studied further to better understand variability in study-specific estimates. Patients, caregivers, clinicians, researchers, and policy-makers must consider both the diagnostic accuracy of virtual cognitive assessments and the barriers faced by patients, caregivers, and clinicians in accessing and using these assessments to continue supporting patient needs during the COVID-19 pandemic and beyond.