1. Perception of Smile Aesthetics and Attractiveness among Saudi Females.
Nozha Sawan ; Mamata Hebbal ; Abeer Alshami ; Afnan Ben Gassem ; Yara Alromaih ; Eman Alsagob
Archives of Orofacial Sciences 2022;17(1):113-122
ABSTRACT
Smile aesthetics, defined as the static and dynamic relationship of the dentition and supporting structures to the facial soft tissues, is one of the most important elements of facial attractiveness. The objective of the study was to assess the perception of smile aesthetics and attractiveness through digital image manipulation of aesthetic variables and to compare those perceptions according to diverse sociodemographic data among female Saudi laypeople attending the dental clinic. In this cross-sectional study, 193 female Saudi participants were randomly selected and consented to answer the study questionnaire. Nine smile photographs were created to compare perceptions of different smile aesthetics. Two groups were recruited: 120 participants in the first group (under 30 years old) and 73 participants in the second group (30 years old or above). All participants in both groups were asked to rate the attractiveness of each smile image using multiple-choice options. A statistically significant finding was that normal buccal corridors were chosen as the most attractive smile by 42.5% of the participants in the younger group and by a significantly higher proportion (49%) of the participants with a bachelor's degree or higher level of education (p < 0.05). Laypeople's preferences regarding smile attractiveness vary, but a normal appearance was the ideal choice for the majority. Orthodontic treatment should consider the general sociocultural understanding of smile perception.
Esthetics, Dental--psychology; Facial Recognition; Saudi Arabia
2. Interaction Between Conscious and Unconscious Information-Processing of Faces and Words.
Shiwen REN ; Hanyu SHAO ; Sheng HE
Neuroscience Bulletin 2021;37(11):1583-1594
It is widely acknowledged that holistic processing is a key characteristic of face perception. Although holistic processing implies the automatic integration of face parts, it is unclear whether such processing requires awareness of the face parts. Here, we investigated the interactions between visible face parts and face parts rendered invisible using continuous flash suppression (CFS). In the first experiment, with the upper half-face visible and the lower half-face invisible, perceived face identity was influenced by the invisible lower half-face, suggesting that integration occurs between visible and invisible face parts, a variant of the "composite face effect". In the second experiment, we investigated the influence of visible face parts on the processing of invisible face parts, as measured by the time it took the invisible parts to break out from CFS. The results showed a visible-to-invisible facilitation effect: aligned invisible face parts broke through CFS faster than misaligned ones. Visible eyes had a stronger influence on the invisible nose/mouth than the other way around. Such facilitation of processing from visible to invisible parts was also found when Chinese characters were used as stimuli. These results show that information integration occurs across the consciousness boundary.
Awareness; Consciousness; Eye; Face; Facial Recognition; Photic Stimulation
3. Faces of the Face.
Archives of Plastic Surgery 2017;44(3):251-256
No abstract available.
Face; Pattern Recognition, Visual; Human; Sociology, Medical; Facial Expression; Facial Muscles; Animals; Biological Evolution; Species Specificity
4. Objective assessment of facial paralysis using local binary pattern in infrared thermography.
Xulong LIU ; Wenxue HONG ; Tao ZHANG ; Zhenying WU
Journal of Biomedical Engineering 2013;30(1):34-38
Facial paralysis is a frequently occurring disease in which damage to the facial nerve causes loss of the voluntary muscles on one side of the face, resulting in an inability to close the eye and drooping of the corner of the mouth. To date, there have been few objective methods for quantitatively diagnosing and assessing the disease in clinical treatment. The skin temperature distribution of a healthy human body exhibits contralateral symmetry, and facial paralysis usually alters this distribution. This paper presents the use of the histogram distance of the bilateral local binary pattern (LBP) in facial infrared thermography to measure the degree of asymmetry of the facial temperature distribution for objectively assessing the severity of facial paralysis. Using this method, we performed a controlled trial to assess facial nerve function in healthy subjects and in patients with Bell's palsy. The results showed that the mean sensitivity and specificity of the method were 0.86 and 0.89, respectively. The correlation coefficient between the degree of asymmetry of the facial temperature distribution and the severity of facial paralysis averaged 0.657. Therefore, the histogram distance of the local binary pattern in facial infrared thermography is an efficient clinical indicator for the diagnosis and assessment of facial paralysis.
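The core computation this abstract describes, comparing LBP histograms of the two facial halves, can be sketched as follows. This is a minimal illustration in Python, not the authors' implementation: the mid-line split, the uniform-LBP parameters, and the chi-square histogram distance are all assumptions made for the example.

```python
# Minimal sketch (not the authors' implementation): measure facial thermal
# asymmetry as the chi-square distance between LBP histograms of the two
# halves of an infrared face image.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(patch, points=8, radius=1):
    """Uniform-LBP histogram of a single-channel patch, normalized to sum to 1."""
    # Note: LBP on floating-point temperatures works but scikit-image may warn;
    # quantize the image first if exact reproducibility matters.
    codes = local_binary_pattern(patch, points, radius, method="uniform")
    n_bins = points + 2  # uniform patterns plus one "non-uniform" bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def thermal_asymmetry(ir_face):
    """Chi-square distance between LBP histograms of the left and right halves.

    `ir_face` is assumed to be a 2-D array of facial skin temperatures,
    already cropped and aligned so the facial midline is the center column.
    """
    mid = ir_face.shape[1] // 2
    left = ir_face[:, :mid]
    right = np.fliplr(ir_face[:, mid:])  # mirror so both halves share orientation
    h1, h2 = lbp_histogram(left), lbp_histogram(right)
    eps = 1e-12
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

# Example with synthetic data: a perfectly symmetric face gives a near-zero distance.
half = np.random.default_rng(0).normal(33.0, 0.5, size=(128, 64))
symmetric_face = np.hstack([half, np.fliplr(half)])
print(thermal_asymmetry(symmetric_face))  # ~0.0; larger values indicate more asymmetry
```

A single scalar like this could then be compared against a clinical grading of severity, which is the kind of correlation the abstract reports.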
Facial Paralysis/diagnosis/physiopathology; Humans; Infrared Rays; Pattern Recognition, Automated/methods; Skin Temperature; Thermography/instrumentation
5. Gender difference in face recognition.
Yi GAI ; Yingjie LI ; Yisheng ZHU ; Lun ZHAO
Journal of Biomedical Engineering 2009;26(1):47-49
Event-related potentials (ERP) were used to study gender differences in face recognition. The stimuli were 10 upright and 10 inverted face photographs, and the subjects, half female and half male, were asked to judge whether each face was upright or inverted. N170 waveforms were observed over the occipito-temporal regions and exhibited right-hemispheric dominance. The gender of the face photographs had no significant effect on the N170 waveforms, but there were significant differences in N170 amplitude and latency between male and female participants. Moreover, male participants showed a marked difference in N170 latency when viewing same-gender versus different-gender faces; no such difference was found in the latencies of female participants.
Adult; Brain/physiology; Electroencephalography; Evoked Potentials; Face; Facial Expression; Female; Form Perception/physiology; Humans; Male; Pattern Recognition, Visual/physiology; Recognition (Psychology); Sex Factors; Young Adult
6. Characteristics of facial expression recognition ability in patients with Lewy body disease.
Yuriko KOJIMA ; Tomohiro KUMAGAI ; Tomoo HIDAKA ; Takeyasu KAKAMU ; Shota ENDO ; Yayoi MORI ; Tadashi TSUKAMOTO ; Takashi SAKAMOTO ; Miho MURATA ; Takehito HAYAKAWA ; Tetsuhito FUKUSHIMA
Environmental Health and Preventive Medicine 2018;23(1):32-32
BACKGROUND:
The facial expressions of medical staff are known to greatly affect the psychological state of patients, making them feel uneasy or, conversely, cheering them up. The aim of this study was to clarify the characteristics of facial expression recognition ability in patients with Lewy body disease and thereby identify ways to facilitate smooth communication between caregivers and patients whose cognitive function has deteriorated.
METHODS:
Between March 2016 and July 2017, we examined the recognition of six facial expressions ("happiness," "sadness," "fear," "anger," "surprise," and "disgust") in 107 outpatients and inpatients aged 60 years or more whom hospital specialists had diagnosed with the Lewy body diseases Parkinson's disease, Parkinson's disease with dementia, or dementia with Lewy bodies. Based on the results of a facial expression recognition test, we classified the patients by cluster analysis and characterized each type.
RESULTS:
In patients with Lewy body disease, recognition of happiness was unaffected by aging, age of onset, disease duration, cognitive function, and apathy, whereas recognizing the facial expression of fear was difficult. In addition, recognition of sadness and anger declined with aging, cognitive decline, and apathy. In particular, cognitive decline reduced recognition of all facial expressions except happiness. Based on test accuracy rates, the cluster analysis yielded three types: a "stable type," a "mixed type," and a "reduced type." In the reduced type, overall facial expression recognition declined except for happiness, and in the mixed type, recognition of anger was particularly impaired.
CONCLUSION:
There were several facial expressions that the patients with Lewy body disease were unable to identify accurately. Caregivers are advised to compensate in such situations with words, physical contact, and other cues so as to convey the intended feeling to patients of each type.
Aged; Aged, 80 and over; Cluster Analysis; Cognition/physiology; Emotions; Facial Expression; Facial Recognition/physiology; Female; Humans; Lewy Body Disease/physiopathology/psychology; Male; Middle Aged
7. Discriminative Effects of Social Skills Training on Facial Emotion Recognition among Children with Attention-Deficit/Hyperactivity Disorder and Autism Spectrum Disorder.
Ji Seon LEE ; Na Ri KANG ; Hui Jeong KIM ; Young Sook KWAK
Journal of the Korean Academy of Child and Adolescent Psychiatry 2018;29(4):150-160
OBJECTIVES: This study investigated the effect of social skills training (SST) on facial emotion recognition and discrimination in children with attention-deficit/hyperactivity disorder (ADHD) and autism spectrum disorder (ASD). METHODS: Twenty-three children aged 7 to 10 years participated in our SST. They included 15 children diagnosed with ADHD and 8 with ASD. The participants' parents completed the Korean version of the Child Behavior Checklist (K-CBCL), the ADHD Rating Scale, and Conner's Scale at baseline and post-treatment. The participants completed the Korean Wechsler Intelligence Scale for Children-IV (K-WISC-IV) and the Advanced Test of Attention at baseline and the Penn Emotion Recognition and Discrimination Task at baseline and post-treatment. RESULTS: No significant changes in facial emotion recognition and discrimination occurred in either group before and after SST. However, when controlling for the processing speed of K-WISC and the social subscale of K-CBCL, the ADHD group showed more improvement in total (p=0.049), female (p=0.039), sad (p=0.002), mild (p=0.015), female extreme (p=0.005), male mild (p=0.038), and Caucasian (p=0.004) facial expressions than did the ASD group. CONCLUSION: SST improved facial expression recognition for children with ADHD more effectively than it did for children with ASD, in whom additional training to help emotion recognition and discrimination is needed.
Autism Spectrum Disorder*; Autistic Disorder*; Checklist; Child Behavior; Child*; Discrimination (Psychology); Facial Expression; Facial Recognition; Female; Humans; Intelligence; Male; Parents; Social Skills*
8. Facial expression recognition based on feature selection by quadratic mutual information.
Ling ZHANG ; Yuanwen ZOU ; Tianfu WANG ; Jiangli LIN ; Deyu LI
Journal of Biomedical Engineering 2008;25(3):510-514
To address the imprecise positioning of feature points and the redundancy of feature data in facial expression recognition with active appearance models (AAM), automatic adjustment of the initial model for AAM fitting is proposed in this paper. The specific aims are to improve positioning precision and to make the acquired features reflect the variation of expressions more effectively. The feature selection problem is solved by adopting quadratic mutual information and reducing the feature dimension. A support vector machine (SVM) classifier is used for expression recognition. Experimental results on the CAS-PEAL facial expression database show that the proposed method effectively improves the performance of facial expression recognition, with a maximum recognition rate of 83.33%.
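The select-then-classify pipeline the abstract outlines can be sketched as follows. This is a hedged illustration, not the paper's code: scikit-learn's mutual_info_classif (a standard k-nearest-neighbor MI estimator) stands in for the paper's quadratic mutual information criterion, the feature matrix is synthetic rather than AAM-derived, and the number of retained features is arbitrary.

```python
# Minimal sketch (not the paper's implementation): rank features by a
# mutual-information score, keep the top k, and classify expressions with an SVM.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder data: rows would be AAM shape/appearance feature vectors,
# labels the expression classes (6 classes assumed here).
X = rng.normal(size=(300, 60))
y = rng.integers(0, 6, size=300)
X[:, :5] += y[:, None] * 0.8  # make a few features informative about the label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Feature selection: score each feature against the class labels, keep the top k.
mi_scores = mutual_info_classif(X_train, y_train, random_state=0)
top_k = np.argsort(mi_scores)[::-1][:10]

# Expression recognition with an SVM on the reduced feature set.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train[:, top_k], y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test[:, top_k])))
```

Ranking features before classification is what reduces the feature dimension referred to in the abstract; only the scoring function (quadratic MI in the paper) differs from this stand-in.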
Algorithms; Computer Simulation; Facial Expression; Humans; Image Interpretation, Computer-Assisted/methods; Models, Biological; Pattern Recognition, Automated/methods; Signal Processing, Computer-Assisted
9. Nerve Canals at the Fundus of the Internal Auditory Canal on High-Resolution Temporal Bone CT.
Joon Ha JI ; Eun Kyung YOUN ; Seung Chul KIM
Journal of the Korean Radiological Society 2001;45(6):565-570
PURPOSE: To identify and evaluate the normal anatomy of the nerve canals in the fundus of the internal auditory canal that can be visualized on high-resolution temporal bone CT. MATERIALS AND METHODS: We retrospectively reviewed high-resolution (1-mm slice thickness and interval, contiguous scans) temporal bone CT images of 253 ears in 150 patients who had not suffered trauma or undergone surgery. Those with a history of uncomplicated inflammatory disease were included, but those with symptoms of vertigo, sensorineural hearing loss, or facial nerve palsy were excluded. Three radiologists determined the detectability and location of the canals for the labyrinthine segment of the facial nerve, the superior vestibular nerve, the cochlear nerve, and the saccular branch and posterior ampullary nerve of the inferior vestibular nerve. RESULTS: Five bony canals in the fundus of the internal auditory canal were identified as nerve canals. Four canals were identified on axial CT images in 100% of cases; the so-called singular canal was identified in only 68%. On coronal CT images, the canals for the labyrinthine segment of the facial nerve and the superior vestibular nerve were seen in 100% of cases, but those for the cochlear nerve, the saccular branch of the inferior vestibular nerve, and the singular canal were seen in 90.1%, 87.4%, and 78% of cases, respectively. In all detectable cases, the canal for the labyrinthine segment of the facial nerve coursed anterolaterally from the anterosuperior portion of the fundus of the internal auditory canal. The canal for the cochlear nerve was located just below that for the labyrinthine segment of the facial nerve, while the canal for the superior vestibular nerve lay posterior to these two canals. The canal for the saccular branch of the inferior vestibular nerve was located just below the canal for the superior vestibular nerve, and the canal for the posterior ampullary nerve, the so-called singular canal, ran laterally or posterolaterally from the posteroinferior aspect of the canal for the saccular branch. CONCLUSION: Five bony nerve canals in the fundus of the internal auditory canal were detected with high frequency on high-resolution temporal bone CT. Familiarity with these structures can prevent their confusion with, or misinterpretation as, a fracture line, and further study such as MR imaging may be required when enlargement or erosion of these nerve canals is present.
Cochlear Nerve; Ear; Facial Nerve; Hearing Loss, Sensorineural; Humans; Magnetic Resonance Imaging; Paralysis; Recognition (Psychology); Retrospective Studies; Temporal Bone*; Vertigo; Vestibular Nerve
10. Studies of visual mismatch negativity elicited by cartoon facial expressions.
Shumei JI ; Wei LI ; Peng LIU ; Zhijie BIAN
Journal of Biomedical Engineering 2013;30(3):476-480
A modified "cross-modal delayed response" paradigm was used to investigate whether visual mismatch negativity (VMMN) can be elicited by cartoon facial expressions and to clarify the mechanism underlying the automatic processing of facial expressions. Subjects were instructed to discriminate the type of tone they heard as quickly and accurately as possible, and to act only when they heard the response imperative signal. Neutral, happy, and angry faces were presented during the intervals between a tone and the response imperative signal. VMMN was obtained by subtracting the event-related potential (ERP) elicited by neutral faces from that elicited by happy or angry faces. The angry-related VMMN was more negative than the happy-related VMMN, and both were more negative over the left than the right cerebral hemisphere. The results indicate that VMMN can be elicited by cartoon facial expressions and that facial expressions can be processed automatically.
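The difference-wave subtraction described in this abstract can be illustrated with a short numpy sketch. The data, epoch length, sampling rate, and measurement window below are illustrative assumptions, not the authors' analysis pipeline.

```python
# Minimal sketch (not the authors' pipeline): compute a VMMN-style difference wave
# by subtracting the average ERP to neutral faces from the average ERP to angry faces.
import numpy as np

fs = 500                        # sampling rate in Hz (assumed)
n_trials, n_samples = 80, 400   # 400 samples = 800-ms epochs (assumed)
rng = np.random.default_rng(0)

# Placeholder single-channel epochs (trials x samples) for the two conditions.
neutral_trials = rng.normal(0.0, 5.0, size=(n_trials, n_samples))
angry_trials = rng.normal(0.0, 5.0, size=(n_trials, n_samples))

# Average across trials to obtain the ERP for each condition, then subtract.
erp_neutral = neutral_trials.mean(axis=0)
erp_angry = angry_trials.mean(axis=0)
vmmn_angry = erp_angry - erp_neutral  # emotional-minus-neutral difference wave

# Mean amplitude in a post-stimulus window (here 150-350 ms) as a simple VMMN measure.
t = np.arange(n_samples) / fs
window = (t >= 0.15) & (t <= 0.35)
print("mean VMMN amplitude in window (uV):", vmmn_angry[window].mean())
```

The same subtraction with happy-face epochs gives the happy-related VMMN, allowing the amplitude comparison reported in the abstract.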
Adult; Brain/physiology; Cartoons as Topic; Evoked Potentials, Visual/physiology; Facial Expression; Female; Humans; Male; Pattern Recognition, Visual/physiology; Photic Stimulation; Visual Perception/physiology; Young Adult