1.Detection of endpoint for segmentation between consonants and vowels in aphasia rehabilitation software based on artificial intelligence scheduling.
Xingjuan DENG ; Ji CHEN ; Jie SHUAI
Journal of Biomedical Engineering 2009;26(4):886-899
To improve the efficiency of aphasia rehabilitation training, an artificial-intelligence scheduling function was added to the aphasia rehabilitation software, improving its performance. Taking into account the characteristics of aphasic patients' speech and the requirements of the scheduling function, the authors designed an endpoint detection algorithm. It first determines reference endpoints, then uses them to extract each word and locate reasonable segmentation points between consonants and vowels. Experimental results show that the algorithm achieves its detection objectives with a high accuracy rate and is therefore suitable for endpoint detection in the speech of patients with aphasia.
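As a sketch of the kind of endpoint detection the abstract describes, the following Python example marks speech endpoints from short-time energy and places a consonant-vowel boundary where energy rises and the zero-crossing rate falls; the frame sizes and thresholds are illustrative assumptions, not the authors' algorithm.

```python
# Illustrative energy / zero-crossing-rate endpoint detection with a simple
# consonant-vowel split; all thresholds and frame sizes are assumptions.
import numpy as np

def frame_signal(x, frame_len=256, hop=128):
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop:i * hop + frame_len] for i in range(n)])

def detect_word_and_cv_boundary(x, frame_len=256, hop=128):
    frames = frame_signal(x, frame_len, hop)
    energy = np.sum(frames ** 2, axis=1)                                 # short-time energy
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)  # zero-crossing rate

    speech = energy > 0.1 * energy.max()          # crude reference endpoints
    idx = np.flatnonzero(speech)
    if idx.size == 0:
        return None
    start, end = idx[0], idx[-1]

    # Inside the word, take the first frame with high energy and low ZCR as the
    # consonant-to-vowel transition (vowels: high energy, low ZCR; many
    # consonants show the opposite pattern).
    vowel_like = (energy > 0.4 * energy.max()) & (zcr < np.median(zcr))
    cv = np.flatnonzero(vowel_like[start:end + 1])
    cv_frame = start + (cv[0] if cv.size else 0)
    return start * hop, cv_frame * hop, end * hop + frame_len
```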
Algorithms ; Aphasia ; etiology ; rehabilitation ; Artificial Intelligence ; Endpoint Determination ; Humans ; Language Therapy ; instrumentation ; Phonetics ; Software ; Speech Intelligibility ; Speech Production Measurement ; instrumentation ; Stroke ; complications ; Stroke Rehabilitation ; Verbal Behavior
2.Comparison of Speech Rate and Long-Term Average Speech Spectrum between Korean Clear Speech and Conversational Speech
Jeeun YOO ; Hongyeop OH ; Seungyeop JEONG ; In Ki JIN
Journal of Audiology & Otology 2019;23(4):187-192
BACKGROUND AND OBJECTIVES: Clear speech is an effective communication strategy used in difficult listening situations that draws on techniques such as accurate articulation, a slow speech rate, and the inclusion of pauses. Although excessively slow speech and improperly amplified spectral information can degrade overall speech intelligibility, moderate amplitude increments in the mid-frequency bands (1 to 3 dB) and speech rates roughly 50% slower than those of conversational speech have been reported as factors of clear speech that improve speech intelligibility. The purpose of this study was to identify whether the amplitude increments in mid-frequency regions and the slower speech rates evident in English clear speech also occur in Korean clear speech. SUBJECTS AND METHODS: To compare the acoustic characteristics of the two methods of speech production, the voices of 60 participants were recorded during conversational speech and then again during clear speech using a standardized sentence material. RESULTS: The speech rate and long-term average speech spectrum (LTASS) were analyzed and compared. Speech rates for clear speech were slower than those for conversational speech, and increased amplitudes in the mid-frequency bands were evident in the LTASS of clear speech. CONCLUSIONS: The observed differences in acoustic characteristics between the two types of speech production suggest that Korean clear speech can be an effective communication strategy for improving speech intelligibility.
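For readers who want to reproduce the kind of analysis reported here, the sketch below estimates an LTASS with Welch's method together with a mid-band (1-3 kHz) level and a simple speech-rate measure; the band edges, FFT length, and function names are assumptions, not the study's procedure.

```python
# Rough LTASS and speech-rate sketch; parameters are illustrative assumptions.
import numpy as np
from scipy.signal import welch

def ltass_db(x, fs):
    # Long-term average power spectrum in dB.
    f, pxx = welch(x, fs=fs, nperseg=4096)
    return f, 10 * np.log10(pxx + 1e-12)

def mid_band_level(f, spec_db, lo=1000.0, hi=3000.0):
    # Mean spectral level in the 1-3 kHz region.
    band = (f >= lo) & (f <= hi)
    return spec_db[band].mean()

def speech_rate(n_syllables, duration_s):
    # Syllables per second, given a manual syllable count for the recording.
    return n_syllables / duration_s
```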
Acoustics ; Rehabilitation ; Speech Acoustics ; Speech Intelligibility ; Speech Perception ; Voice
3.The Dynamic Range for Korean Standard Sentence Material: A Gender Comparison in a Male and a Female Speaker.
Journal of Audiology & Otology 2015;19(2):97-100
BACKGROUND AND OBJECTIVES: The purpose of this study was to identify differences between the dynamic ranges (DRs) of a male and a female speaker using Korean standard sentence material, with particular consideration given to effects within predefined segmented frequency bands. MATERIALS AND METHODS: We used the Korean standard sentence lists for adults as stimuli. Each sentence was normalized to a root-mean-square level of 65 dB sound pressure level. The sentences were then modified to remove pauses, and the modified sentences were passed through a filter bank for frequency analysis. Finally, the DR was quantified using a histogram of the cumulative envelope distribution levels of the speech in each frequency band. RESULTS: For DRs averaged across all frequency bands, there were no significant differences between the male and the female speaker. However, within the predefined frequency bands, the DRs of male and female speech differed significantly in several bands. CONCLUSIONS: This study shows that the DR of speech for the male speaker differed from that for the female speaker in nine of the 21 frequency bands. These differences suggest that a standardized DR of male speech in the band-audibility function of the speech intelligibility index may differ from that of female speech derived in the same way. Further studies are required to derive standardized DRs for Korean speakers.
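A minimal sketch of a per-band dynamic-range estimate in the spirit of the method described above: a band-pass filter bank, envelope extraction, and percentiles of the envelope-level distribution. The filter design and the 1st/99th-percentile definition of DR are illustrative assumptions, not the paper's exact specification.

```python
# Per-band dynamic range from envelope-level percentiles; illustrative only.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_dynamic_range(x, fs, lo, hi, low_pct=1, high_pct=99):
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    band = sosfiltfilt(sos, x)                                # band-limited speech
    env_db = 20 * np.log10(np.abs(hilbert(band)) + 1e-12)     # envelope level in dB
    return np.percentile(env_db, high_pct) - np.percentile(env_db, low_pct)

def dynamic_ranges(x, fs, band_edges):
    # band_edges: list of (low_Hz, high_Hz) pairs defining the filter bank.
    return [band_dynamic_range(x, fs, lo, hi) for lo, hi in band_edges]
```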
Adult ; Female* ; Humans ; Male* ; Speech Intelligibility
4.Effects of Workplace Noise and Hearing Protective Devices on Worker's Speech Intelligibility.
Hyoung Ouk PARK ; Chang Sun SIM ; Joong Keun KWON ; Kyoo Sang KIM ; Young Joo KWON ; Nam Jeong KIM ; Min Suk SEO ; Ji Ho LEE
Korean Journal of Occupational and Environmental Medicine 2010;22(2):154-165
OBJECTIVE: The objective of this study was to evaluate the degree of speech intelligibility in workers wearing hearing protective devices and its dependence on background noise type. METHODS: A total of 82 shipyard employees and normal-hearing volunteers were recruited from a company in Ulsan and classified into a hearing-impaired group or a control group according to hearing status. They underwent pure tone and speech audiometry in varying test environments, including different signal-to-noise ratios and background noise types, under headphones and in a free-field environment. RESULTS: For both groups, the attenuation provided by hearing protective devices was greater at high frequencies. For the hearing-impaired group, differences in speech discrimination scores between the unprotected and the protected ear did not differ significantly across background noise types when speech was presented at 65 dBA. However, speech intelligibility in a white noise background was lower than in the other background noise types when speech was presented at 75 dBA. CONCLUSIONS: Workers' speech intelligibility increased with increasing signal-to-noise ratio and also depended on background noise type. Workers and their colleagues should be trained to pronounce words more loudly and more clearly when wearing hearing protective devices, or communication methods that do not rely on sound signals should be considered.
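A small sketch of how speech can be mixed with background noise at a target signal-to-noise ratio, as in the test conditions above; the RMS-based scaling is a generic approach, not the study's own setup.

```python
# Mix speech with noise at a target SNR using RMS-based scaling (generic sketch).
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    noise = np.resize(noise, speech.shape)            # loop/truncate noise to length
    rms_s = np.sqrt(np.mean(speech ** 2))
    rms_n = np.sqrt(np.mean(noise ** 2))
    gain = rms_s / (rms_n * 10 ** (snr_db / 20))      # scale noise to hit the target SNR
    return speech + gain * noise
```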
Audiometry, Speech ; Ear ; Ear Protective Devices ; Hearing ; Humans ; Noise ; Signal-To-Noise Ratio ; Speech Intelligibility ; Speech Perception
5.A comparison study of two different segmentation methods to measure intelligibility in Mandarin.
Sha LIU ; De-min HAN ; Ning ZHANG ; Na HAN ; Xue-qing CHEN ; Bo LIU ; Ying KONG ; Lian-sheng GUO ; Hui LIU
Chinese Journal of Otorhinolaryngology Head and Neck Surgery 2007;42(4):241-244
OBJECTIVE: To confirm alternative methods for the Mandarin hearing in noise test (MHINT) for obtaining the performance-intensity (PI) function, and to establish appropriate rules for scoring sentence intelligibility.
METHODS: Sentence speech reception thresholds were obtained for 30 normally hearing subjects with the MHINT test using several adaptive rules based on either character or word segmentation of the sentences. Each adaptive rule was intended to measure the threshold at a different point on the PI function. By also measuring sentence intelligibility at each threshold, the accuracy with which the PI function was estimated could be evaluated for each type of segmentation. The results of the two segmentation methods were compared.
RESULTS: There was no significant difference in the reception threshold of sentences (RTS) between the group that used character segmentation and the group that used word segmentation (P > 0.05), nor in their speech recognition scores (SRS, P > 0.05). When the same subject was tested with both character segmentation and word segmentation, there was no significant difference in RTS between the two segmentation types (P > 0.05), nor in SRS (P > 0.05).
CONCLUSIONS: This method can be used to evaluate sentence intelligibility not only in normally hearing individuals and people with hearing loss, but also in people who use hearing assistive devices.
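As an illustration of an adaptive sentence-threshold procedure of the general kind used here, the sketch below runs a simple one-up/one-down track scored on the proportion of characters or words repeated correctly; the step size, criterion, averaging rule, and the hypothetical present_sentence callback are assumptions, not the MHINT protocol.

```python
# One-up/one-down adaptive track for a sentence reception threshold (sketch).
def adaptive_srt(present_sentence, start_level=0.0, step_db=2.0,
                 n_trials=20, criterion=0.5):
    level = start_level
    track = []
    for _ in range(n_trials):
        # present_sentence(level) is a hypothetical callback returning the
        # proportion of scoring units (characters or words) repeated correctly.
        prop_correct = present_sentence(level)
        track.append(level)
        level += -step_db if prop_correct >= criterion else step_db
    # Estimate the threshold as the mean level over the second half of the track.
    half = len(track) // 2
    return sum(track[half:]) / (len(track) - half)
```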
Acoustic Stimulation ; Adolescent ; Adult ; Audiometry, Speech ; methods ; Female ; Humans ; Language ; Male ; Speech Intelligibility ; Speech Perception ; Young Adult
6.Nasalance in Cochlear Implantees.
Swapna SEBASTIAN ; N SREEDEVI ; Anjali LEPCHA ; John MATHEW
Clinical and Experimental Otorhinolaryngology 2015;8(3):202-205
OBJECTIVES: Speech intelligibility is severely affected in children with congenital profound hearing loss, and hypernasality is a problem commonly encountered in their speech. Auditory information received from cochlear implants is expected to be far superior to that from hearing aids. Our study aimed to compare the percentage of nasalance in the speech of cochlear implantees with that of hearing aid users and of children with normal hearing. METHODS: Three groups of subjects took part in the study. Groups I and II comprised 12 children each, aged 4-10 years, with prelingual bilateral profound hearing loss, using multichannel cochlear implants and digital hearing aids, respectively. Both groups had received at least one year of speech therapy intervention since cochlear implant surgery or hearing aid fitting. The third group consisted of age- and sex-matched children with normal hearing. The subjects were asked to say a sentence consisting only of oral sounds and no nasal sounds ("Buy baby a bib"), and the nasalance score was calculated as a percentage. RESULTS: Statistical analysis revealed that the children using hearing aids showed a high percentage of nasalance in their speech. The cochlear implantees showed a lower percentage of nasalance than the hearing aid users, but did not reach the level of their normal-hearing peers. CONCLUSION: The quality of speech of the cochlear implantees was superior to that of the hearing aid users, but did not match that of the normal controls. The study suggests that, even after cochlear implantation, some acoustic variables in the speech of children with hearing impairment remain at deviant levels and need attention. Further research is needed to explore the effect of age at implantation on reducing nasality and attaining normative values in cochlear implantees, and to compare unilateral and bilateral implantees.
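A minimal sketch of the generic nasalance computation (nasal energy divided by nasal plus oral energy, expressed as a percentage), assuming separately recorded nasal and oral channels such as a nasometer provides; this is not the study's instrumentation code.

```python
# Generic nasalance score from separate nasal and oral channel recordings.
import numpy as np

def nasalance_percent(nasal, oral):
    e_nasal = np.sqrt(np.mean(np.asarray(nasal, dtype=float) ** 2))  # nasal channel RMS
    e_oral = np.sqrt(np.mean(np.asarray(oral, dtype=float) ** 2))    # oral channel RMS
    return 100.0 * e_nasal / (e_nasal + e_oral)
```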
Acoustics ; Child ; Cochlear Implantation ; Cochlear Implants ; Hearing ; Hearing Aids ; Hearing Loss ; Humans ; Speech Intelligibility ; Speech Therapy
7.Development of early auditory and speech perception skills within one year after cochlear implantation in prelingual deaf children.
Ying FU ; Yuan CHEN ; Xin XI ; Mengdi HONG ; Aiting CHEN ; Qian WANG ; Lena WONG
Chinese Journal of Otorhinolaryngology Head and Neck Surgery 2015;50(4):274-280
OBJECTIVE: To investigate the development of early auditory capability and speech perception in prelingually deaf children after cochlear implantation, and to study the feasibility of currently available Chinese assessment instruments for evaluating early auditory skills and speech perception in hearing-impaired children.
METHODS: A total of 83 children with severe-to-profound prelingual hearing impairment participated in this study. Participants were divided into four groups according to age at surgery: A (1-2 years), B (2-3 years), C (3-4 years), and D (4-5 years). The auditory skills and speech perception ability of the CI children were evaluated by trained audiologists using the Infant-Toddler/Meaningful Auditory Integration Scale (IT-MAIS/MAIS) questionnaire, the Mandarin Early Speech Perception (MESP) test, and the Mandarin Pediatric Speech Intelligibility (MPSI) test. The questionnaires were administered in face-to-face interviews with the parents or guardians. Each child was assessed before the operation and at 3, 6, and 12 months after switch-on.
RESULTS: After cochlear implantation, early auditory development and speech perception gradually improved. All MAIS/IT-MAIS scores showed a similar increasing trend with rehabilitation duration (F=5.743, P=0.007). Preoperative and postoperative MAIS/IT-MAIS scores of children in age group C (3-4 years) were higher than those of the other groups. Children with longer hearing aid experience before the operation demonstrated higher MAIS/IT-MAIS scores than those with little or no hearing aid experience (F=4.947, P=0.000). The MESP test showed that children were not able to perceive speech as well as they could detect speech signals; however, as the duration of CI use increased, speech perception ability improved substantially. Even so, only about 40% of the subjects could be evaluated with the most difficult MPSI subtest in quiet at 12 months after switch-on, and as the MCR decreased, the proportion of children who could be tested with the MPSI declined.
CONCLUSIONS: Within one year after CI, children develop early auditory and speech perception capabilities over time. The Chinese versions of the IT-MAIS/MAIS, MESP, and MPSI are useful instruments for documenting early auditory and speech perception skills in children after cochlear implantation.
Child, Preschool ; Cochlear Implantation ; Deafness ; surgery ; Hearing Aids ; Humans ; Infant ; Speech Intelligibility ; Speech Perception ; Surveys and Questionnaires
8.Improvement of Speech Intelligibility in Patients with Sensorineural Hearing Loss Using Noise Reduction Algorithms.
Sung Hwa HONG ; Hyo Chang WOO ; Won Ho CHUNG ; Yang Sun CHO ; Byung Suk HA ; Kye Hoon PARK ; Dong Wook KIM
Korean Journal of Otolaryngology - Head and Neck Surgery 2001;44(6):588-594
BACKGROUND & OBJECTIVES: Speech intelligibility is significantly decreased in noisy environments in patients with sensorineural hearing loss (SNHL) compared to those without hearing loss. This is caused by an increased hearing threshold, recruitment, and decreased frequency selectivity. The hearing threshold and recruitment can be grossly controlled by a hearing aid with a compression circuit, and it is well established that various techniques, such as noise reduction algorithms, can be used to increase the signal-to-noise ratio (SNR) in order to compensate for decreased speech intelligibility in noisy environments. These algorithms are incorporated into some commercially available digital hearing aids. This study was designed to measure the effect of noise on the speech intelligibility of patients with sensorineural hearing loss and the effect of reducing the background noise with a noise reduction algorithm. MATERIALS AND METHODS: Twenty-one ears with sensorineural hearing loss were selected for this study. Speech reception thresholds (SRTs) and speech discrimination scores (SDSs) were measured with and without noise, and after reduction of the noise with a noise reduction algorithm. RESULTS: SRTs were increased in noise compared with a quiet environment. When noise was reduced with the algorithm we prepared, SRTs decreased by 4.6 dB, 4.2 dB, and 4.9 dB at +6, 0, and -6 dB SNR, respectively, compared with the same noise conditions without the algorithm. SDSs decreased by 24-50% in noise and partially recovered when noise was reduced. CONCLUSION: Speech intelligibility deteriorated in noise but partially improved with the noise reduction algorithm. Further development of the noise reduction algorithm is needed to improve SNRs.
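As a hedged illustration of a noise reduction algorithm of the general family evaluated here, the sketch below applies basic spectral subtraction; the STFT parameters, noise estimate from the leading frames, and over-subtraction factor are assumptions, not the algorithm the authors prepared.

```python
# Basic spectral-subtraction noise reduction (illustrative sketch).
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(x, fs, noise_frames=10, alpha=2.0, floor=0.05):
    f, t, X = stft(x, fs=fs, nperseg=512)
    mag, phase = np.abs(X), np.angle(X)
    # Estimate the noise magnitude spectrum from the leading (speech-free) frames.
    noise_mag = mag[:, :noise_frames].mean(axis=1, keepdims=True)
    # Over-subtract the noise estimate while keeping a spectral floor.
    clean_mag = np.maximum(mag - alpha * noise_mag, floor * mag)
    _, y = istft(clean_mag * np.exp(1j * phase), fs=fs, nperseg=512)
    return y
```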
Ear ; Hearing ; Hearing Aids ; Hearing Loss ; Hearing Loss, Sensorineural* ; Humans ; Noise* ; Speech Intelligibility* ; Speech Perception
9.Interpersonal Competence in College Students with Cochlear Implants: A Preliminary Study.
Young Mi CHOI ; Hee Hwa KIM ; Sung Wook JEONG ; Lee Suk KIM
Korean Journal of Otolaryngology - Head and Neck Surgery 2018;61(7):341-348
BACKGROUND AND OBJECTIVES: Although many studies have shown that cochlear implantation promotes restoration of adequate hearing and verbal ability, there is little research showing whether patients with cochlear implants (CI) adapt well to society by forming good interpersonal relationships. The purpose of this study was to investigate the interpersonal competence of college students who acquired good verbal ability through CI. SUBJECTS AND METHOD: Interpersonal competence was examined using the Korean version of the Interpersonal Competence Questionnaire. Participants were 23 college students with CI who had good verbal communication abilities; their interpersonal competence was compared with that of 28 college students with normal hearing (NH). To identify the factors contributing to interpersonal competence, regression analysis was performed using speech perception, vocabulary, and speech intelligibility as independent variables. RESULTS: College students with CI showed significantly weaker interpersonal competence than those with NH. Among the five sub-factors of interpersonal competence, assertion and conflict management did not differ significantly between the two groups, whereas initiation, caring for others, and appropriate disclosure were significantly weaker in the CI group than in the NH group. However, college students with CI who had normal vocabulary and a Speech Intelligibility Rating score of 5 showed good interpersonal competence, comparable to that of students with NH. CONCLUSION: College students with CI who have acquired good verbal communication ability demonstrate lower interpersonal competence than those with NH. Therefore, it is necessary to support them in improving interpersonal competence as well as speech-language ability.
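A minimal sketch of the kind of regression reported above, predicting an interpersonal-competence score from speech perception, vocabulary, and speech intelligibility via ordinary least squares; the variable names and estimator choice are assumptions for illustration, not the study's analysis code.

```python
# Ordinary-least-squares regression of competence on three predictors (sketch).
import numpy as np

def fit_ols(speech_perception, vocabulary, intelligibility, competence):
    # Design matrix: intercept column followed by the three predictors.
    X = np.column_stack([np.ones(len(competence)),
                         speech_perception, vocabulary, intelligibility])
    coef, *_ = np.linalg.lstsq(X, np.asarray(competence, dtype=float), rcond=None)
    return coef  # intercept followed by one slope per predictor
```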
Cochlear Implantation ; Cochlear Implants* ; Communication ; Disclosure ; Hearing ; Humans ; Mental Competency* ; Methods ; Speech Intelligibility ; Speech Perception ; Vocabulary
10.The effect of palatal height on the Korean vowels.
Bo Yoon CHUNG ; Young Jun LIM ; Myung Joo KIM ; Shin Eun NAM ; Seung Pyo LEE ; Ho Beom KWON
The Journal of Korean Academy of Prosthodontics 2010;48(1):69-74
PURPOSE: The purpose of this study was to analyze the influence of palatal height on Korean vowels and speech intelligibility in Korean adults and to produce baseline data for future prosthodontic treatment. MATERIAL AND METHODS: Forty-one healthy Korean men and women who had no problems with pronunciation, hearing, or communication and no history of airway disease participated in this study. Subjects were classified into H, M, and L groups after clinical determination of palatal height on study casts. Seven Korean vowels were used as sample vowels, and the subjects' clear speech sounds were recorded on a computer using the Multispeech software program. The F1 and F2 values of the three groups were obtained and compared. In addition, the vowel working spaces of the three groups, defined by the corner vowels /a/, /i/, and /u/, were obtained and their areas were compared. The Kruskal-Wallis test and Mann-Whitney U test were used for statistical analysis, and P < .05 was considered statistically significant. RESULTS: There were no significant differences in formant frequencies among the three groups except for the F2 formant frequency between the H and L groups (P = .003). The vowel working spaces of the three groups were similar in shape, and no significant differences in their areas were found. CONCLUSION: Palatal height did not affect vowel frequencies for most of the vowels or speech intelligibility. The dynamics of tongue activity seem to compensate for the morphological difference.
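A short sketch of the vowel working space area computed from the /a/, /i/, /u/ corner vowels in the F1-F2 plane using the triangle (shoelace) formula; the example formant values in the comment are invented, not data from the study.

```python
# Vowel working space area from three corner vowels in the F1-F2 plane.
def vowel_space_area(corners):
    # corners: list of (F1, F2) pairs in Hz for /a/, /i/, /u/.
    (x1, y1), (x2, y2), (x3, y3) = corners
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0

# e.g. vowel_space_area([(800, 1300), (300, 2300), (350, 800)])  # area in Hz^2
```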
Adult ; Female ; Hearing ; Humans ; Male ; Phonetics ; Prosthodontics ; Speech Intelligibility ; Tongue