1. The performance of ChatGPT-4o in medical imaging evaluation: a cross-sectional study
Elio Stefan ARRUZZA; Carla Marie EVANGELISTA; Minh CHAU
Journal of Educational Evaluation for Health Professions 2024;21(1):29.
This study investigated the performance of ChatGPT-4o in evaluating the quality of positioning in radiographic images. Thirty radiographs depicting a variety of knee, elbow, ankle, hand, pelvis, and shoulder projections were produced using anthropomorphic phantoms and uploaded to ChatGPT-4o. The model was prompted to identify any positioning errors, justify its assessment, and suggest improvements. A panel of radiographers graded the model's responses against established positioning criteria on a 1–5 scale. In only 20% of projections did ChatGPT-4o correctly recognize all errors with justification and offer correct suggestions for improvement. The most common score was 3 (9 cases, 30%), wherein the model recognized at least 1 specific error and provided a correct improvement. The mean score was 2.9. Overall, accuracy was low, with most projections receiving only partially correct responses. The findings reinforce the importance of robust radiography education and clinical experience.
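The summary statistics above (mean score, modal score, and the proportion of fully correct responses) can be reproduced from a simple list of panel grades. Below is a minimal Python sketch; the 30 grades are hypothetical values constructed only to match the reported summary figures, not the study's data.

from statistics import mean, mode

# Hypothetical panel grades for 30 projections on the study's 1-5 scale,
# constructed to match the reported summary statistics (not the study's data).
grades = [3, 5, 2, 1, 3, 4, 2, 5, 3, 1,
          2, 3, 5, 1, 3, 4, 2, 3, 5, 1,
          3, 2, 5, 1, 3, 4, 2, 3, 5, 1]

n = len(grades)
fully_correct = sum(1 for g in grades if g == 5)  # a grade of 5 = all errors identified
print(f"Mean score: {mean(grades):.1f}")          # 2.9, as reported
print(f"Modal score: {mode(grades)} ({grades.count(mode(grades))} cases)")  # 3 (9 cases)
print(f"Fully correct: {fully_correct}/{n} ({100 * fully_correct / n:.0f}%)")  # 6/30 (20%)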
2. Effectiveness of ChatGPT-4o in developing continuing professional development plans for graduate radiographers: a descriptive study
Minh CHAU; Elio Stefan ARRUZZA; Kelly SPUUR
Journal of Educational Evaluation for Health Professions 2024;21(1):34.
Purpose:
This study evaluates the use of ChatGPT-4o in creating tailored continuing professional development (CPD) plans for radiography students, addressing the challenge of aligning CPD with Medical Radiation Practice Board of Australia (MRPBA) requirements. We hypothesized that ChatGPT-4o could support students in CPD planning while meeting regulatory standards.
Methods:
A descriptive, experimental design was used to generate 3 unique CPD plans with ChatGPT-4o, each tailored to a hypothetical graduate radiographer in a different clinical setting. Each plan followed MRPBA guidelines, with a focus on computed tomography specialization by the second year. From October 2024 to November 2024, 3 MRPBA-registered academics assessed the plans against the criteria of appropriateness, timeliness, relevance, reflection, and completeness. Ratings were analyzed using the Friedman test and the intraclass correlation coefficient (ICC) to measure consistency among evaluators.
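For readers unfamiliar with this kind of rating analysis, the sketch below shows how a Friedman test and ICC might be computed for a single scenario. It assumes each of the 3 raters scored the 5 criteria on a 5-point scale; the scores are invented for illustration and are not the study's data.

import pandas as pd
from scipy.stats import friedmanchisquare
import pingouin as pg  # pip install pingouin

# Hypothetical scores for one scenario: rows = the 5 assessment criteria,
# columns = the 3 raters (5-point scale). Not the study's data.
ratings = pd.DataFrame(
    {"rater_1": [4, 3, 5, 4, 3],
     "rater_2": [4, 4, 4, 3, 3],
     "rater_3": [5, 3, 4, 4, 2]},
    index=["appropriateness", "timeliness", "relevance", "reflection", "completeness"],
)

# Friedman test: do the 3 raters' score distributions differ across the criteria?
stat, p = friedmanchisquare(ratings["rater_1"], ratings["rater_2"], ratings["rater_3"])
print(f"Friedman chi-square = {stat:.2f}, P = {p:.3f}")

# ICC: agreement among raters across the criteria (all 6 ICC types are returned).
long = ratings.reset_index().melt(id_vars="index", var_name="rater", value_name="score")
icc = pg.intraclass_corr(data=long, targets="index", raters="rater", ratings="score")
print(icc[["Type", "ICC"]])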
Results:
The CPD plans generated by ChatGPT-4o generally adhered to regulatory standards across scenarios. The Friedman test indicated no significant differences among raters (P=0.420, 0.761, and 0.807 for scenarios 1, 2, and 3, respectively), suggesting consistent scores within each scenario. However, ICC values were low (–0.96, 0.41, and 0.058 for scenarios 1, 2, and 3, respectively), revealing variability among raters, particularly on the timeliness and completeness criteria, and indicating limitations in ChatGPT-4o's ability to address individualized and context-specific needs.
Conclusion:
ChatGPT-4o demonstrates the potential to ease the cognitive demands of CPD planning, offering structured support in CPD development. However, human oversight remains essential to ensure plans are contextually relevant and deeply reflective. Future research should focus on enhancing artificial intelligence’s personalization for CPD evaluation, highlighting ChatGPT-4o’s potential and limitations as a tool in professional education.
3. The effectiveness of cultural competence education in enhancing knowledge acquisition, performance, attitudes, and student satisfaction among undergraduate health science students: a scoping review
Journal of Educational Evaluation for Health Professions 2021;18(1):3.
Purpose:
Cultural competence in healthcare assists in the delivery of culturally sensitive and high-quality services. This scoping review aims to provide an overview of the available evidence and to examine the effectiveness of classroom-based intervention strategies used to enhance the cultural competence of undergraduate health science students.
Methods:
A comprehensive and systematic literature search was undertaken in databases, including Cochrane Library, Medline, and Emcare. Articles were eligible if they employed an experimental study design to assess classroom-based cultural competency education for university students across the health science disciplines. Two reviewers independently screened and extracted relevant data pertaining to study and participant characteristics using a charting table. The outcomes included knowledge, attitudes, skills, and perceived benefits.
Results:
Ten studies were analysed. Approaches to cultural education varied in the mode, frequency, and duration of interventions. For the knowledge outcome, students who received cultural education interventions achieved higher post-test scores than their own baseline, but these scores did not differ significantly from those of students who received no intervention. Data on the skills domain showed positive effects for students after the interventions. Overall, students were satisfied with their experiences and demonstrated improvements in confidence and attitudes towards culturally competent practice.
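The knowledge finding above rests on two different comparisons: a within-group pre/post gain (paired) and a between-group post-test contrast (independent). The sketch below illustrates that distinction with simulated scores; none of the numbers come from the reviewed studies.

import numpy as np
from scipy.stats import ttest_rel, ttest_ind

rng = np.random.default_rng(seed=1)
pre = rng.normal(60, 10, 40)           # intervention group, baseline knowledge test (simulated)
post = pre + rng.normal(5, 8, 40)      # same students after the intervention (simulated)
control_post = rng.normal(63, 10, 40)  # students with no intervention, post-test only (simulated)

# Within-group comparison: paired test of post vs pre for the same students.
t_w, p_w = ttest_rel(post, pre)
print(f"Pre vs post (paired): t = {t_w:.2f}, P = {p_w:.4f}")

# Between-group comparison: independent test of intervention vs no-intervention post-tests.
t_b, p_b = ttest_ind(post, control_post)
print(f"Intervention vs control (independent): t = {t_b:.2f}, P = {p_b:.4f}")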
Conclusion:
Across health science disciplines, cultural competency interventions were shown to be effective in enhancing knowledge acquisition, performance of skills, attitudes, and student satisfaction. Future research should address the widespread absence of control arms in the current literature and assess long-term effects and patient-related outcomes.