1. Performance of ChatGPT 3.5 and 4 on U.S. dental examinations: the INBDE, ADAT, and DAT
Mahmood DASHTI ; Shohreh GHASEMI ; Niloofar GHADIMI ; Delband HEFZI ; Azizeh KARIMIAN ; Niusha ZARE ; Amir FAHIMIPOUR ; Zohaib KHURSHID ; Maryam Mohammadalizadeh CHAFJIRI ; Sahar GHAEDSHARAF
Imaging Science in Dentistry 2024;54(3):271-275
Purpose:
Recent advancements in artificial intelligence (AI), particularly tools such as ChatGPT developed by OpenAI, a U.S.-based AI research organization, have transformed the healthcare and education sectors. This study investigated the effectiveness of ChatGPT in answering dentistry exam questions, demonstrating its potential to enhance professional practice and patient care.
Materials and Methods:
This study assessed the performance of ChatGPT 3.5 and 4 on U.S. dental exams - specifically, the Integrated National Board Dental Examination (INBDE), Dental Admission Test (DAT), and Advanced Dental Admission Test (ADAT) - excluding image-based questions. Using customized prompts, ChatGPT’s answers were evaluated against official answer sheets.
Results:
ChatGPT 3.5 and 4 were tested with 253 questions from the INBDE, ADAT, and DAT exams. For the INBDE, both versions achieved 80% accuracy in knowledge-based questions and 66-69% in case history questions. In ADAT, they scored 66-83% in knowledge-based and 76% in case history questions. ChatGPT 4 excelled on the DAT, with 94% accuracy in knowledge-based questions, 57% in mathematical analysis items, and 100% in comprehension questions, surpassing ChatGPT 3.5’s rates of 83%, 31%, and 82%, respectively. The difference was significant for knowledge-based questions (P = 0.009). Both versions showed similar patterns in incorrect responses.
Conclusion:
Both ChatGPT 3.5 and 4 effectively handled knowledge-based, case history, and comprehension questions, with ChatGPT 4 being more reliable and surpassing the performance of 3.5. ChatGPT 4’s perfect score in comprehension questions underscores its trainability in specific subjects. However, both versions exhibited weaker performance in mathematical analysis, suggesting this as an area for improvement.
5. RELATIONSHIP BETWEEN SELF-ACCEPTANCE AND SELF-EFFICACY AMONG PAKISTANI FEMALE LIVER CANCER PATIENTS
Maryam Khurshid ; Mamoona Ismail Loona ; Amna Hassan ; Sameena Iqbal ; Sameera Mirza
ASEAN Journal of Psychiatry 2021;22(8):1-5
Introduction:
It is widely recognized that liver cancer can influence a woman's sense of self-acceptance and self-efficacy. However, limited literature addresses levels of self-acceptance and self-efficacy among women with liver cancer in the Pakistani population.
Objectives:
The aim was to study the role of self-acceptance and self-efficacy among women with liver cancer, and to analyse the influence of the women's socio-personal characteristics on their self-efficacy.
Data Source and Settings:
To address the objectives, a sample of 60 women (N=60) was selected from different hospitals in Islamabad.
Period:
The duration of the study was 6 months.
Methods:
The present study used two scales: the Self-Efficacy Scale and the Self-Acceptance Scale. The clinical group was selected through random sampling, with data collected from different hospitals in Islamabad.
Results:
To determine psychometric properties, alpha reliability coefficients, t-test analyses, and ANOVA were computed. Self-acceptance and self-efficacy were found to be significantly associated with each other. The results indicated that a low sense of identity as a whole leads women to low self-efficacy regarding their abilities in daily life. There were also significant differences across demographic variables in self-acceptance and self-efficacy.
Conclusion:
It is concluded from the findings of the current study that women suffering from liver cancer demonstrated a low level of self-acceptance. Several factors associated with self-acceptance and self-efficacy were also identified among female liver cancer patients in Islamabad.