Donna M. Windish, Associate Professor of General Internal Medicine, Department of Internal Medicine, Yale School of Medicine; Director, Resident Research, Yale Primary Care Residency Program, Yale School of Medicine; Program Director, General Internal Medicine Medical Education Fellowship, Yale School of Medicine; Director, Advancement of Clinician-Educator Scholarship (ACES) Faculty Development Program, Department of Internal Medicine, Yale School of Medicine
∗ Corresponding author: donna.windish@yale.edu

Received 2021 Feb 22; Accepted 2021 Jul 19.

Copyright © 2021 Windish. This is an open-access publication distributed under the terms of the Creative Commons Attribution license.
All appendices are peer reviewed as integral parts of the Original Publication.
Clinician-educators often need to produce scholarship for academic promotion. While some programs exist to build faculty development skills, few provide adequate statistical training to help educators evaluate their work.
From January 2020 through January 2021, faculty at three academic centers attended one of five in-person or virtual seminars with dedicated statistical training for medical education interventions. These 90-minute seminars included a 45-minute PowerPoint presentation of common statistical tests used for educational interventions followed by small breakout groups to help attendees work on additional practice examples. After each seminar, surveys were distributed in person or virtually to obtain feedback.
Forty-three faculty attended the five seminars, with a range of surgical and nonsurgical specialties represented. Of these attendees, 38 (88%) completed session evaluations. The majority of respondents (n = 34, 90%) rated the session as extremely useful in helping them know how to use statistics in their scholarly work. Most participants agreed or strongly agreed they had adequate time to practice skills (n = 30, 79%). Self-rated confidence in using statistics was significantly higher after the session compared to before (3.00 post vs. 1.97 pre, p < .0001). Most participants (n = 32, 84%) rated the session as excellent and the small-group practice as most useful (n = 16, 42%), but many (n = 26, 69%) wanted more skills practice.
This intervention shows that dedicated training on biostatistics used in educational interventions can help clinician-educators improve self-rated confidence and knowledge in choosing statistical tests in educational scholarship.
Keywords: Statistics, Faculty Development, Case-Based Learning, Quantitative Research

By the end of this activity, learners will be able to:
Describe the application of the following statistical areas to educational interventions: study designs, variable types, exploratory data analysis, confirmatory (inferential) data analysis, and basic interpretation of results.
Use a four-step approach to choosing a statistical test for educational cases.

Producing scholarship is often a key determinant of academic advancement for clinician-educators regardless of home institution. 1–3 The value of educational scholarship is well recognized and is receiving increased support. 4 To help clinician-educators cultivate faculty development skills, some institutions have established academies of medical educators 5 or education scholarship units. 6 Despite these initiatives, many institutions may not provide adequate statistical or methodological support for the development and evaluation of educators' work. 3
A recent Association of American Medical Colleges survey showed that a majority of medical schools lack specific biostatistics training. 7 This lack of training can contribute to low statistical knowledge among resident trainees 8–11 and subsequently to low statistical literacy among faculty. 11–13 A recent scoping review of clinician-educator faculty development programs found that few programs focus on research or scholarship skills. 14 Without statistical knowledge, clinician-educators may be at a disadvantage in publishing their scholarly work and thus potentially miss opportunities to be promoted.
Resources exist that address understanding statistical concepts and evidence-based medicine. The JAMA Guide to Statistics and Methods contains a series of articles addressing statistical techniques used in clinical research. 15 The goal of the series is to help clinicians understand and learn how to critically appraise the medical literature. One article in the series reviews the reporting guidelines for survey studies. 16 Since survey research is a common tool in educational interventions, educators might find this particular article helpful in their work. A recent publication in MedEdPORTAL describes a module for teaching students basic biostatistics and evidence-based medicine. 17 The authors of that resource review study design strengths and weaknesses, how to appraise the literature, and how to assess the clinical importance of published studies. Another workshop in MedEdPORTAL contains an interactive review of basic biostatistics and discusses how to apply Bayes' theorem to testing and decision-making. 18 It uses a flipped classroom approach with quizzes to assess knowledge gained. Each of the three publications just described can aid educators in understanding basic statistical concepts, evidence-based medicine, and critical reading of the literature. None provide a dedicated guide to choosing statistical tests for analyzing one's own educational interventions.
In 2006, Windish and Diener-West developed a guide to help clinician-educators understand and choose statistical tests. 19 Since then, little has been published that provides specific training on statistics for educational interventions with detailed examples. The resource presented here is a unique contribution to the literature, aimed at building knowledge of biostatistics through educational examples that clinician-educators will find germane to their scholarship. The resource includes an instructional video identical to the content presented in faculty development seminars taught to medical educators at multiple institutions. It also provides active learning opportunities through additional educational examples for practicing and applying what the video teaches. This resource can be used as a seminar at other institutions and can also serve as an enduring reference for individuals conducting educational research.
Five faculty development seminars were offered at three schools of medicine from January 2020 through January 2021. Seminars were held in person or virtually via Zoom, included six to 12 participants each, and lasted 90 minutes. Each seminar was led by the author and included a 45-minute PowerPoint presentation that reviewed study designs, variable types, exploratory data analysis, confirmatory data analysis, basic interpretation of results, and a four-step approach to choosing a statistical test. 19 Statistical content was chosen to address the low literacy regarding these concepts documented in prior studies of residents and educators. 8–13 A video of the seminar's PowerPoint presentation is available in Appendix A. All figures in the presentation were created by the author using Stata statistical software version 14.2 (StataCorp) from fabricated data for illustrative purposes only. The photographs in the apple-pie analogy for regression analysis are author owned. In each seminar, statistical concepts were introduced and interwoven throughout the presentation using an example of an educational intervention aimed at improving second-year medical students' counseling skills, confidence in medical interviewing, professionalism skills, and pass rate. This example was designed to address how to evaluate a curriculum using different evaluation strategies, including the broad categories of assessing knowledge, attitudes, and skills. Statistical concepts included continuous, ordinal, and dichotomous outcome variables; parametric tests; nonparametric tests; and paired analyses.
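The full four-step selection guide is detailed in the cited reference; as a rough sketch only (the exact decision points below are this illustration's assumptions, not a reproduction of the published guide), the kind of reasoning involved — outcome variable type, number of groups, pairing, and distributional assumptions — can be expressed as a small decision helper:

```python
def choose_test(outcome, groups=2, paired=False, parametric=True):
    """Illustrative sketch of statistical-test selection based on
    outcome type, number of groups, pairing, and parametric assumptions.
    Consult the published four-step guide for actual analyses."""
    if outcome == "continuous":
        if groups == 2:
            if paired:
                return "paired t test" if parametric else "Wilcoxon signed-rank test"
            return "Student t test" if parametric else "Mann-Whitney U test"
        # three or more groups
        return "one-way ANOVA" if parametric else "Kruskal-Wallis test"
    if outcome == "ordinal":
        if groups == 2:
            return "Wilcoxon signed-rank test" if paired else "Mann-Whitney U test"
        return "Kruskal-Wallis test"
    if outcome == "dichotomous":
        return "McNemar test" if paired else "chi-square test"
    raise ValueError("unrecognized outcome type")

# e.g., pre/post confidence scores from the same learners, treated as continuous:
choose_test("continuous", groups=2, paired=True)  # "paired t test"
```

The mappings shown (e.g., paired t test for paired continuous outcomes, McNemar test for paired dichotomous outcomes) are standard pairings, but real data may require checking assumptions such as normality before relying on a parametric test.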
After the PowerPoint presentation, faculty divided into smaller groups of two to five people who worked together for 20 minutes on additional practice examples provided on worksheets (Appendix B). This small-group practice allowed participants to apply the statistical knowledge learned in the presentation. All figures in the worksheets were created by the author from manufactured data and used for illustrative purposes. Half of the small groups completed questions from case 1, and the other half completed questions from case 2. Case 1 addressed the following statistical concepts: Student t test, correlation, and multiple logistic regression. Case 2 had participants work through examples that used a paired t test and analysis of variance. The last 15 minutes of the seminar featured a debrief of the practice examples with answers provided in the larger group (Appendix C).
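As a minimal illustration of the paired analysis used in case 2 (the numbers below are invented for this sketch, not data from the seminar worksheets), a paired t statistic on pre/post scores can be computed directly from the within-person differences:

```python
import math
import statistics

def paired_t(pre, post):
    """Paired t statistic and degrees of freedom for before/after
    scores measured on the same participants."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)              # mean within-person change
    se = statistics.stdev(diffs) / math.sqrt(n)  # standard error of the mean difference
    return mean_d / se, n - 1

# hypothetical confidence ratings (1-5 scale) for eight participants
pre = [2, 1, 2, 3, 2, 1, 2, 3]
post = [3, 3, 3, 4, 3, 2, 3, 4]
t_stat, df = paired_t(pre, post)  # t_stat = 9.0, df = 7
```

The t statistic would then be compared against a t distribution with n − 1 degrees of freedom; in practice this would be done in a statistical package such as Stata, which the seminar materials use.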
Seminars were held at the Yale School of Medicine, the Washington University School of Medicine in St. Louis, and the University of Wisconsin School of Medicine and Public Health. The two sessions held at Yale were hosted by the Department of Medicine and were open to all faculty in the department, with one in-person session and one virtual session. The two in-person sessions held at Washington University in St. Louis were hosted by the Academy of Educators and were open to all faculty in any discipline throughout the university. One seminar was hosted virtually for the University of Wisconsin and was open to educators in graduate and undergraduate medical education.
At the end of each seminar, faculty were asked to complete a session evaluation (Appendix D). Questions asked participants to rate the following:
The usefulness of the session in helping them know how to use statistics in their current scholarly work (5-point scale: 1 = extremely useful, 5 = extremely useless),
The adequacy of the faculty facilitator (5-point scale: 1 = extremely adequate, 5 = extremely inadequate),