- Author: Chan-Young PARK; Minsoo KIM; YongSoo SHIM; Nayoung RYOO; Hyunjoo CHOI; Ho Tae JEONG; Gihyun YUN; Hunboc LEE; Hyungryul KIM; SangYun KIM; Young Chul YOUN
- Publication Type: Original Article
- From: Dementia and Neurocognitive Disorders 2024;23(1):1-10
- Country: Republic of Korea
- Language: EN
- Abstract:
Background and Purpose: Voice, which reflects cerebral function, holds potential for analyzing and understanding brain function, especially in the context of cognitive impairment (CI) and Alzheimer’s disease (AD). This study used voice data to distinguish between normal cognition and CI or Alzheimer’s disease dementia (ADD).
Methods: This study enrolled 3 groups of subjects: 1) 52 subjects with subjective cognitive decline; 2) 110 subjects with mild CI; and 3) 59 subjects with ADD. Voice features were extracted using Mel-frequency cepstral coefficients (MFCCs) and Chroma.
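The abstract does not describe the extraction pipeline; as a rough illustration of what MFCC features are, here is a minimal NumPy sketch (the sampling rate, frame size, hop, and filter counts below are arbitrary illustrative choices, not the authors'). Chroma features would analogously project each frame's spectrum onto 12 pitch classes.

```python
import numpy as np

def mel_filterbank(sr, n_fft, n_mels):
    # Triangular filters spaced evenly on the mel scale.
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bin_pts = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bin_pts[i - 1], bin_pts[i], bin_pts[i + 1]
        fb[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fb[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    return fb

def mfcc_frames(x, sr=16000, n_fft=512, hop=256, n_mels=26, n_coeffs=13):
    # Frame the signal, take the power spectrum, apply the mel filterbank,
    # log-compress, then decorrelate with a DCT-II to get the MFCCs.
    frames = [x[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(x) - n_fft + 1, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    mel = np.log(spec @ mel_filterbank(sr, n_fft, n_mels).T + 1e-10)
    k = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs), k + 0.5) / n_mels)
    return mel @ dct.T

# Example: MFCCs for one second of a 440 Hz tone at 16 kHz.
t = np.arange(16000) / 16000
feats = mfcc_frames(np.sin(2 * np.pi * 440 * t))
print(feats.shape)  # (frames, coefficients)
```

In practice a library such as librosa would be used rather than a hand-rolled filterbank, but the pipeline above is the standard MFCC recipe: windowed FFT, mel filterbank, log, DCT.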
Results: A deep neural network (DNN) model showed promising performance, with an accuracy of roughly 81% across 10 trials in predicting ADD, which increased to an average of about 82.0%±1.6% when evaluated against an unseen test dataset.
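The abstract gives no architecture details, so the following is only a toy illustration of the evaluation pattern described (train a small feed-forward network, then measure accuracy on a held-out split). The synthetic two-class Gaussian features, layer sizes, and learning rate are assumptions standing in for the study's real MFCC/Chroma vectors and model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-subject voice feature vectors:
# two classes with shifted means, 13 features each.
X = np.vstack([rng.normal(0.0, 1.0, (200, 13)),
               rng.normal(1.5, 1.0, (200, 13))])
y = np.repeat([0, 1], 200)
idx = rng.permutation(400)
X, y = X[idx], y[idx]
X_tr, y_tr = X[:320], y[:320]          # training split
X_te, y_te = X[320:], y[320:]          # unseen test split

# One hidden layer (tanh), sigmoid output, gradient descent on logistic loss.
W1 = rng.normal(0, 0.1, (13, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (16, 1));  b2 = np.zeros(1)
lr = 0.5
for _ in range(500):
    h = np.tanh(X_tr @ W1 + b1)                  # hidden activations
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))     # predicted P(class 1)
    g = (p - y_tr[:, None]) / len(X_tr)          # dLoss/dlogit
    gh = (g @ W2.T) * (1.0 - h ** 2)             # backprop into hidden layer
    W2 -= lr * (h.T @ g);    b2 -= lr * g.sum(0)
    W1 -= lr * (X_tr.T @ gh); b1 -= lr * gh.sum(0)

# Evaluate on the held-out split, as the study does with its test dataset.
h = np.tanh(X_te @ W1 + b1)
pred = (1.0 / (1.0 + np.exp(-(h @ W2 + b2))) > 0.5).ravel()
acc = (pred == y_te).mean()
print(f"held-out accuracy: {acc:.2f}")
```

The reported ~82% figure comes from real clinical voice data; on the easily separable synthetic data above the toy model will score much higher, so the numbers are not comparable, only the train/test protocol.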
Conclusions: Although the results did not demonstrate the accuracy required for a definitive clinical tool, they provide a compelling proof of concept for the use of voice data in cognitive status assessment. DNN algorithms using voice offer a promising approach to the early detection of AD and could improve the accuracy and accessibility of diagnosis, ultimately leading to better outcomes for patients.