Background

Currently, residency programs lack objective predictors for passing the sequenced American Board of Anesthesiology (ABA) certification examinations on the first attempt. Our hypothesis was that performance on the ABA/American Society of Anesthesiologists In-Training Examination (ITE) and other variables can predict combined success on the ABA Part 1 and Part 2 examinations.

Method

The authors studied 2,458 subjects who took the ITE immediately after completing the first year of clinical anesthesia training and took the ABA Part 1 examination for primary certification immediately after completing residency training 2 yr later. ITE scores and other variables were used to predict which residents would complete the certification process (passing the ABA Part 1 and Part 2 examinations) in the shortest possible time after graduation.

Results

ITE scores alone accounted for most of the explained variation in the desired outcome of certification in the shortest possible time. In addition, ITE scores accounted for almost half of the observed variance and most of the explained variance in ABA Part 1 scores. A combined model using ITE scores, residency program accreditation cycle length, country of medical school, and gender best predicted which residents would complete the certification examinations in the shortest possible time.

Conclusions

The principal implication of this study is that higher ABA/American Society of Anesthesiologists ITE scores taken at the end of the first clinical anesthesia year serve as a significant and moderately strong predictor of high performance on the ABA Part 1 (written) examination, and a significant predictor of success in completing both the Part 1 and Part 2 examinations within the calendar year after the year of graduation from residency. Future studies may identify other predictors, and it would be helpful to identify factors that predict clinical performance as well.

  • ❖ Performance on in-training examinations in other specialties correlates with performance on certification examinations, but whether this applies to the two-stage certification process in anesthesiology is not known

  • ❖ Performance on the American Board of Anesthesiology/American Society of Anesthesiologists In-Training Examination after 1 yr of clinical anesthesia served as a significant predictor of successful completion of the two-stage certification process within 1 yr of completion of training

The American Board of Anesthesiology/American Society of Anesthesiologists (ABA/ASA) In-Training Examination (ITE) assesses resident knowledge annually as the resident progresses through anesthesiology training. In addition, the ITE is expected to predict whether a resident will, upon graduation, pass the ABA Part 1 examination. This expectation follows logically from the properties of the two examinations: they are highly similar in content, and both are highly reliable. However, this expectation has never been validated.

To achieve ABA certification, a resident must pass both a cognitive examination (a written multiple choice examination known as the ABA Part 1 examination) and a structured oral examination (the ABA Part 2 examination). Residency program directors lack standardized tools to predict whether a resident will pass the ABA Part 2 examination and must rely on measures such as a resident's performance on practice oral examinations and observations of the resident's communication skills in discussing the management of patients. Our hypothesis is that ITE performance predicts combined success on the ABA Part 1 and Part 2 examinations.

The importance of an objective tool for predicting whether a resident will achieve certification was demonstrated by studies in which faculty members and residents were asked to predict residents' performance on ITEs. Replogle and Johnson1 found a positive predictive value of 0.72 between composite American Board of Family Medicine ITE scores taken over a 3-yr period and the American Board of Family Practice certification examination. Hawkins et al.2 examined the ability of faculty members to predict the performance of their residents on the internal medicine ITE. Faculty members were asked to predict their residents' performance compared with a national peer group. Those authors found that faculty members were often inaccurate in predicting the performance of their residents; in particular, faculty members tended to overestimate the performance of residents who demonstrated deficiencies in knowledge on the ITE. Parker et al.3 examined performance on the family medicine ITE and found that residents tended to greatly overestimate their own performance on the examination, and that this was especially true of those who scored in the lowest quartile. The latter two studies suggest that both faculty and residents tend to overestimate the performance of those residents who most need remediation in cognitive knowledge. These studies underscore the importance of an objective instrument that predicts difficulties in completing the certification process.

Previous studies in other medical specialties have investigated the ability of an ITE to predict performance on a cognitive primary certification examination.1,4–6 These studies have uniformly found a strong correlation between ITE performance and certification examination performance. The present investigation examines the hypothesis that ITE performance can predict early success on the ABA Part 1 and Part 2 examination sequence and assesses the capacity of the ITE and other factors to identify residents who will experience difficulty in successfully completing that sequence.

Subjects

The potential subjects for the study were physicians who met all of the following conditions:

  • completed their clinical anesthesia third (CA-3) resident year between 2002 and 2004,

  • took the ABA/ASA ITE 2 yr before completion of residency training, and

  • took the ABA Part 1 examination for primary certification in the year that they completed training.

Candidates who passed the Part 1 examination on their first attempt and did not take the Part 2 examination in the following calendar year were excluded from the analysis. The calendar year after graduation was used in this analysis because candidates who passed the ABA Part 1 examination immediately after graduation would have had their first opportunity to take the ABA Part 2 examination in either April or September to October of the following year.

There were 2,458 subjects who satisfied these conditions. The subjects were divided into two groups: those who achieved certification in the shortest possible time (by passing the Part 1 and Part 2 examinations on the first attempt) and those who did not. A total of 1,671 subjects (68%) were in the first group and 787 subjects (32%) were in the second group.

Statistical Analysis

Variables.

The independent variables in the study included data collected by the ABA and data on anesthesiology residency programs available through the Accreditation Council for Graduate Medical Education. The purpose of the study was to determine whether the independent variables can predict which candidates complete certification in the shortest possible time. Categorical variables were restricted to two categories for use in the correlational and regression analyses.

Independent Variables.

The independent variables investigated included the following: scaled score on the ABA/ASA ITE (taken after completion of the clinical anesthesia first [CA-1] resident year, most often the same as completion of Postgraduate Year 2, 2 yr before graduation); gender; medical degree (M.D. or D.O.); country of medical school (American medical school graduate or international medical school graduate); known history of substance abuse (yes or no); number of unsatisfactory clinical competency committee reports (these reports are submitted by the residency program to the ABA for each 6-month period of residency training in clinical anesthesia from the CA-1 yr through graduation); and length of the accreditation review cycle of the Accreditation Council for Graduate Medical Education (in years; a longer cycle is postulated to be an indicator of program quality).

Dependent Variable.

The dependent variable was classified into one of two groups: completed certification in the shortest possible time or did not complete certification in the shortest possible time. By definition, candidates in the latter group failed at least one examination.

A stepwise logistic regression analysis using SPSS® (SPSS, Inc., Chicago, IL) was conducted to ascertain how well the independent variables predicted group membership. A preliminary analysis investigated whether the independent variables could predict the subjects' scores on the Part 1 examination taken in the CA-3 yr of residency.

Summary Data—Independent Variables

Summary data of the independent variables are presented in tables 1 and 2. Correlational data are presented in table 3. A number of independent variables had low correlations with each other; however, only one correlation was high enough to strongly suggest a lack of independence between the variables. Birth country and country of medical school had a correlation of 0.69 (P < 0.001); therefore, birth country was dropped from subsequent analyses.

Table 1.  Summary Data of Independent Categorical Variables


Table 2.  Summary Data of Other Variables


Table 3.  Correlations between Independent Variables


Regression Analysis—Prediction of Part 1 Scores

The first regression analysis investigated scores on the Part 1 examination as the predicted variable. Table 4 compares means between the categorical independent variables and Part 1 scores. The results of the stepwise regression analysis are shown in table 5. The model was significant and produced an adjusted R² of 0.46 (F(4, 2453) = 517.9, P < 0.001) for predicting scores on the Part 1 examination. The ITE score alone accounted for almost half of the observed variance (R² = 0.45) and most of the explained variance. The model for predicting Part 1 scores is 91.799 + (6.275 × ITE score) + (2.53 × program cycle length) − 5.333 (if examinee is a graduate of an international medical school) − 39.419 (if examinee has a history of substance abuse). The other variables did not contribute significantly to the model. The findings concerning substance abuse must be interpreted with caution, because they are based on only seven subjects. Using the ITE score alone, the model for predicting Part 1 scores is 97.307 + (6.385 × ITE score). As an example, consider an American medical school graduate who scored 25 on the ITE, has no history of substance abuse, and comes from a program with a cycle length of 3 yr. The predicted Part 1 score for this physician is 91.799 + (6.275 × 25) + (2.53 × 3) − (5.333 × 0) − (39.419 × 0) = 256.3.
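The worked example can be checked with a short script. This is a minimal sketch using the coefficients reported in the text; the function and parameter names are illustrative, not from the study, and the passing score of 209 is the one reported elsewhere in this section.

```python
# Sketch of the full stepwise regression model for predicting ABA Part 1
# scores. Coefficients are taken directly from the text; function and
# parameter names are illustrative, not from the study.

PART1_PASSING_SCORE = 209  # Part 1 passing score reported in the text

def predict_part1_score(ite_score, cycle_length_yr,
                        international_graduate=False,
                        substance_abuse_history=False):
    """Predicted ABA Part 1 score from the full regression model."""
    return (91.799
            + 6.275 * ite_score
            + 2.53 * cycle_length_yr
            - 5.333 * int(international_graduate)
            - 39.419 * int(substance_abuse_history))

def predict_part1_score_ite_only(ite_score):
    """Predicted ABA Part 1 score from the ITE-only model."""
    return 97.307 + 6.385 * ite_score

# Worked example from the text: American medical school graduate,
# ITE score of 25, program cycle length of 3 yr.
score = predict_part1_score(25, 3)
print(round(score, 1))               # 256.3
print(score >= PART1_PASSING_SCORE)  # True (predicted to pass)
```

Note that the two equations are separate fits, so the ITE-only model gives a slightly different prediction (97.307 + 6.385 × 25 ≈ 256.9) than the full model for the same candidate.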

Table 4.  Independent Categorical Variables and Part 1 Scores


Table 5.  Stepwise Regression Analysis: Part 1 Scores


The passing score for the Part 1 examination is 209. The regression model was used to predict each subject's score to determine how well the model actually predicted pass-fail status for individual subjects. Figure 1 shows the results of this analysis. Not surprisingly, the model was more accurate in predicting pass-fail status when the predicted score was farther from the passing score of 209.

Fig. 1. Accuracy of full model in predicting American Board of Anesthesiology Part 1 Pass-Fail Status.


Regression Analysis—Prediction of Certification Status

A subsequent logistic regression analysis investigated certification status as the predicted variable to see whether the independent variables could predict which examinees would achieve certification status in the shortest possible time (1 yr after completion of residency). Table 6 compares the categorical independent variables and certification status. The results of the stepwise logistic regression analysis are shown in table 7. The model was significant (P < 0.001) and produced a Cox and Snell R² of 0.19 for predicting certification status. The ITE score alone produced an R² of 0.15, which was significant (P < 0.001).

Table 6.  Independent Categorical Variables and Certification Status (% Achieving Certification in Shortest Possible Time)


Table 7.  Stepwise Logistic Regression Analysis: Certification Status


The model for predicting certification status is −4.410 + (0.182 × ITE score) + (0.208 × program cycle length) − 0.591 (if examinee is a graduate of an international medical school) + 0.382 (if examinee is a woman). The other variables did not contribute significantly to the model.

Note that the model seems to suggest that being a woman is a positive predictor, even though women were less likely than men to complete certification in the shortest possible time (table 6). This finding is explained by the relationship between gender and examination performance. Women on average scored lower on the ITE than men (P < 0.001). However, those women who took the Part 2 examination in the year after graduation from residency were slightly more likely than men to pass the examination (82.6% vs. 80.0%). This last finding did not reach significance (P = 0.252).

To mostly eliminate negative model scores, we scaled the model equation by adding a constant of 10, changing the equation to 5.59 + (0.182 × ITE score) + (0.208 × program cycle length) − 0.591 (if examinee is a graduate of an international medical school) + 0.382 (if examinee is a woman). The range of predicted scores was 5.6–14.6. These predicted scores do not correspond to examination scores or percentages; rather, higher scores predict a greater likelihood that the examinee will achieve certification in the shortest possible time. Figure 2 shows the results of comparing the predicted status to the actual status.

Fig. 2. Relationship between model score and the percentage of examinees who achieved certification in the shortest possible time.


Consider the same example used in the Regression Analysis—Prediction of Part 1 Scores section: an American medical school graduate who scored 25 on the ITE and comes from a program with a cycle length of 3 yr. In addition, suppose this physician is a woman; the model score for this physician is 5.59 + (0.182 × 25) + (0.208 × 3) − (0.591 × 0) + (0.382 × 1) = 11.1. As figure 2 shows, physicians with a model score between 11 and 11.49 completed certification in the shortest possible time approximately 80% of the time.
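The shifted model score can likewise be reproduced with a short function. This is a sketch using the coefficients reported in the text (the intercept of −4.410 shifted by +10); the function and parameter names are illustrative, not from the study.

```python
# Sketch of the shifted logistic-model score for certification status.
# Coefficients are taken from the text; function and parameter names
# are illustrative, not from the study.

def certification_model_score(ite_score, cycle_length_yr,
                              international_graduate=False,
                              woman=False):
    """Model score; higher values predict a greater likelihood of
    achieving certification in the shortest possible time."""
    return (5.59                       # intercept of -4.410 shifted by +10
            + 0.182 * ite_score
            + 0.208 * cycle_length_yr
            - 0.591 * int(international_graduate)
            + 0.382 * int(woman))

# Worked example from the text: American medical school graduate,
# ITE score of 25, program cycle length of 3 yr, woman.
print(round(certification_model_score(25, 3, woman=True), 1))  # 11.1
```

The result falls inside the 5.6–14.6 range of predicted scores reported in the text; per figure 2, scores in this band corresponded to roughly an 80% rate of certification in the shortest possible time.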

Discussion

Our principal finding is that a resident's score on the ABA/ASA ITE taken immediately after completion of the CA-1 yr serves as a predictor of completing the ABA certification process on schedule, that is, within 1 yr of graduating from residency. In the multivariable analysis, country of medical school, gender, training program cycle length, and the ITE score together account for 19% of the variability in predicting group membership, which was the maximum achieved by the stepwise logistic model.

This study was based on data from 2002 to 2004, when the ITE was administered in July. Recently, the ITE has moved to a March administration date. However, there is no indication that the March administration date has impacted the ITE scores of CA-1 residents. In March 2009, the average ITE score for CA-1 residents was 27. From 2002 to 2008, the average ITE score for CA-1 residents was 26, with a range of 24–29.

One might expect that scores from earlier written examinations would predict the outcome of a subsequent written examination, as was demonstrated in this study. Prediction of success on sequential written and oral examinations is less intuitive. The ABA certification process involves two examinations that must be passed in sequence: the ABA Part 1 examination, which is a paper-and-pencil multiple choice examination, and the ABA Part 2 examination, which is a structured oral examination given by a total of four examiners in two consecutive 35-min sessions. Of those candidates who did not achieve certification, 437 (55%) failed the first Part 1 examination and 350 (45%) passed ABA Part 1 but failed the ABA Part 2 examination. As would be expected, the group that failed the Part 1 examination had lower initial scaled scores on the ITE (19.6 vs. 24.2). Previous internal analyses indicate that very high pass scores on the ABA Part 1 examination (i.e., > 300) predict more than 90% success on the ABA Part 2 examination on the first attempt, whereas barely passing scores on the ABA Part 1 examination (i.e., 209–220) predict approximately a 50% chance of passing the ABA Part 2 examination on the first attempt.

Numerous studies have successfully correlated previous written multiple choice examination scores with subsequent multiple choice examination scores. This relationship has been established between admission tests and written board examinations,7 as well as between various components of the three-part United States Medical Licensing Examination and subsequent United States Medical Licensing Examination steps,8 undergraduate medical specialty "shelf" examinations,9,10 or postgraduate medical specialty in-training or written examinations.5,11,12 Similarly, scores on ITEs taken during residency training have correlated significantly with performance on written certifying examinations in internal medicine,6,13 oral and maxillofacial surgery,14 family practice,1 orthopedics,5 and neurosurgery.4

We chose the ITE as the baseline examination for future predictions because this examination is taken by anesthesiology residents at the end of the CA-1 yr. At that time, most residents have completed one clinical base year (mainly nonanesthesia clinical rotations) and 1 yr of clinical anesthesiology. They would have had considerable exposure to anesthesiology and hence should have a solid understanding of anesthesiology terminology. Typically, there is substantial improvement in performance on the ITE between the end of the CA-1 yr and the CA-3 yr. At the time of this study, the ABA Part 1 examination was a subset of the ABA/ASA ITE, such that more than 80% of the questions on ABA Part 1 also appeared on the ITE. However, there were no questions in common between the ITE taken by these candidates at the end of their CA-1 yr and the ABA Part 1 examination taken shortly after graduation from residency 2 yr later.

We did not find previous studies correlating written examination test performance to oral examination test performance. However, Muller et al.15 found low correlations (0.16–0.38) between an objective structured clinical examination (OSCE) of clinical skills and the subsequently administered United States Medical Licensing Examination Step 2 examination. Simon et al.16 found a somewhat higher correlation (0.41) between the United States Medical Licensing Examination OSCE examination and the United States Medical Licensing Examination Step 1 examination. In a small study of general surgery interns (Postgraduate Year 1), Schwartz et al.17 found a correlation of 0.50 between the OSCE examination and the subsequent surgery ITE. An OSCE shares some elements with an oral examination, for example, communication skills, organization skills, and problem solving. Nevertheless, an OSCE involves interaction with an actor playing the part of a patient, whereas the ABA Part 2 examination involves interaction with experienced anesthesiologists who are trained to administer a structured oral examination in which the grading includes the examiners' assessments of the candidate's clinical judgment, clinical application of knowledge, adaptability in changing clinical scenarios, and organization and presentation.

Other Predictors

One categorical variable that predicted outcome was location of medical school. Graduates of medical schools outside the United States had a greater chance of failing to achieve certification in the shortest possible time than did American medical graduates. Another variable that contributed significantly to the model was the quality of the training program as indicated by the length of the accreditation cycle of the Accreditation Council for Graduate Medical Education. This organization awards progressively longer accreditation cycles based on the perceived quality of the residency training program. Although this is an imperfect marker of quality, its contribution to the stepwise analysis was significant. The fact that ITE score plus cycle length is a better predictor than ITE score alone suggests either that programs with longer accreditation cycles better prepare their candidates to pass the certification examinations or that they attract candidates who are more likely to pass ABA examinations on their first attempt.

Gender also emerged as a significant factor in the regression analysis. Men performed better than women on the ITE and Part 1 examinations. On the Part 2 examination, a trend was noted: women performed better, although the difference did not reach significance. Haist et al.18 investigated the effect of age and gender on medical school performance, as judged by a clinically based performance examination given in the fourth year, an academic performance score (based on medical school grade point average and on United States Medical Licensing Examination Step 1 and Step 2 scores), and the presence or absence of academic difficulty (defined as low grade point average). Those authors found that women performed better than men on the clinical performance examination, that older women were least likely to have academic difficulties, and that younger men were most likely to experience academic difficulties. Those authors also cited several previous studies indicating that either men or women scored higher on several examinations typically taken during medical school.18

There is a relative paucity of information about the performance of international medical graduates on standardized examinations as compared with American medical graduates. Hallock and Kostis19 reviewed the 50-yr history of the Educational Commission for Foreign Medical Graduates (ECFMG) in 2006, noting that the ECFMG's creation of a required clinical skills examination in 1998 using standardized patients addressed the perceived inadequacy of a written examination alone in assessing bedside clinical skills and spoken English skills. Those authors also noted that just 44.5% of first-time ECFMG applicants between 1958 and 2005 were eventually awarded ECFMG certificates, but that recent success rates appear higher. Improved recent success is perhaps explained by the fact that the total number of new international applicants per year since 1999 has been approximately half what it was for much of the period between 1970 and 1998. Unsurprisingly, van Zanten et al.20 found that native English-speaking candidates received higher proficiency scores from standardized patients on a high-stakes clinical skills examination conducted in English. This may be similar to what one could expect on a two-examination sequence involving one written and one oral examination conducted in English. Sierles et al.21 reported that international medical graduates were more likely to fail a simulated psychiatry/neurology oral examination than were American medical graduates. Part and Markert22 reported that recent clinical experience and higher scores on Parts I and II of the ECFMG Medical Sciences Examination predicted better first postgraduate year performances among internal medicine residents who were foreign-born international medical graduates.

There are some limitations to our study. First, association does not necessarily imply causation. For example, does the fact that all seven residents identified with substance abuse issues failed the Part 1 examination indicate that substance abuse interfered with the ability to pass the examination, or were other factors to blame? Second, factors that we either did not or could not analyze might account for some of the variability in outcomes or might interact with variables included in our analysis.

The principal implication of this study is that higher ABA/ASA ITE scores taken at the end of the CA-1 yr serve as a significant and moderately strong predictor of high performance on the ABA Part 1 (written) examination, accounting for almost half of the observed variance, and a significant predictor of success in completing both the Part 1 and Part 2 examinations within 1 yr of graduating from residency. Like most predictions, this one is imperfect. Other factors may contribute to future performance as well. For example, study habits and structured resident-presented didactic sessions have correlated with or improved ITE scores in surgery.23,24 Attendance at didactic conferences improved scores on the surgery ITE25 but did not affect scores on the internal medicine ITE.26 These findings probably merit additional study, but data on study habits and didactic conferences were not available to the ABA. Future studies may identify other predictors, and it would be helpful to identify factors that predict clinical performance as well.

The authors thank the following members of the Research Committee of the American Board of Anesthesiology, Inc., for providing expert direction and advice during all phases of the project: J. Jeffrey Andrews, M.D., Professor, Department of Anesthesiology, University of Texas Health Science Center at San Antonio, San Antonio, Texas; Douglas B. Coursin, M.D., Professor, Department of Anesthesiology, University of Wisconsin–Madison Medical School, Madison, Wisconsin; Steven C. Hall, M.D., Professor, Department of Pediatric Anesthesia, Children's Memorial Hospital, Chicago, Illinois; and Francis P. Hughes, Ph.D., Executive Director, Professional Affairs, the American Board of Anesthesiology, Inc., Raleigh, North Carolina.

1.
Replogle W, Johnson W: Assessing the predictive value of the American Board of Family Practice In-training examination. Fam Med 2004; 36:185–8
2.
Hawkins R, Sumption K, Gaglione M, Holmboe E: The in-training examination in internal medicine: Resident perceptions and lack of correlation between resident scores and faculty predictions of resident performance. Am J Med 1999; 106:206–10
3.
Parker R, Alford C, Passmore C: Can family medicine residents predict their performance on the in-training examination? Fam Med 2004; 36:705–9
4.
Garvin P, Kaminsky D: Significance of the in-training examination in a surgical residency program. Surgery 1984; 96:109–13
5.
Klein G, Austin M, Randolph S, Sharkey P, Hilibrand A: Passing the boards: Can USMLE and orthopaedic in-training examination scores predict passage of the ABOS part-I examination? J Bone Joint Surg Am 2004; 86:1092–5
6.
Rollins L, Martindale J, Edmond M, Manser T, Scheld W: Predicting pass rates on the American Board of Internal Medicine certifying examination. J Gen Intern Med 1998; 13:414–6
7.
Bailey J, Yackle K, Yuen M, Voorhees L: Preoptometry and optometry school grade point average and optometry admissions test scores as predictors of performance on the National Board of Examiners in optometry part I (basic science) examination. Optom Vis Sci 2000; 77:188–93
8.
Andriole D, Jeffe D, Hageman H, Whelan A: What predicts USMLE step 3 performance? Acad Med 2005; 80(10 suppl):S21–4
9.
Myles T, Henderson R: Medical licensure examination scores: Relationship to obstetrics and gynecology examination scores. Obstet Gynecol 2002; 100:955–8
10.
Myles T, Galvez-Myles R: USMLE Step 1 and 2 scores correlate with family medicine clinical and examination scores. Fam Med 2003; 35:510–3
11.
Carmichael K, Westmoreland J, Thomas J, Patterson R: Relation of residency selection factors to subsequent orthopaedic in-training examination performance. South Med J 2005; 98:528–32
12.
Fish D, Radfar-Baublitz L, Choi H, Felnesthal G: Correlation of standardized testing results with success on the 2001 American Board of Physical Medicine and Rehabilitation part 1 board certificate examination. Am J Phys Med Rehabil 2003; 82:686–91
13.
Brill-Edwards P, Couture L, Evans G, Hamilton P, Hramiak I, Megran D, Schmuck M, Cole G, Mikhael N, Norman G: Predicting performance on the Royal College of Physicians and Surgeons of Canada internal medicine written examination. CMAJ 2001; 165:1305–7
14.
Ellis E, Haug R: A comparison of performance on the OMSITE and ABOMS written qualifying examination. J Oral Maxillofac Surg 2000; 58:1401–6
15.
Muller E, Harik P, Margolis M, Clauser B, McKinley D, Boulet J: An examination of the relationship between clinical skills examination performance and performance on USMLE step 2. Acad Med 2003; 78(10-suppl):S27–9
16.
Simon S, Volkan K, Hamann C, Duffey C, Fletcher S: The relationship between second-year medical students' OSCE scores and USMLE step 1 scores. Med Teach 2002; 24:535–9
17.
Schwartz R, Donnelly M, Sloan D, Johnson S, Strodel W: The relationship between faculty ward evaluations, OSCE, and ABSITE as measures of surgical intern performance. Am J Surg 1995; 169:414–7
18.
Haist S, Wilson J, Elam C, Blue A, Fosson S: The effect of gender and age on medical school performance: An important interaction. Adv Health Sci Educ 2000; 5:197–205
19.
Hallock J, Kostis J: Celebrating 50 years of experience: An ECFMG perspective. Acad Med 2006; 81(12 Suppl):S7–16
20.
van Zanten M, Boulet J, McKinley D, Whelan G: Evaluating the spoken English proficiency of international medical graduates: Detecting threats to the validity of standardised patient ratings. Med Educ 2003; 37:69–76
21.
Sierles F, Daghestani A, Weiner C, deVito R, Fichtner C, Garfield D: Psychometric properties of ABPN-style oral examinations administered jointly by two residency programs. Acad Psychiatry 2001; 25:214–22
22.
Part H, Markert R: Predicting the first-year performances of international medical graduates in an internal medicine residency. Acad Med 1993; 68:856–8
23.
Bull D, Stringham J, Karwande S, Neumayer L: Effect of a resident self-study and presentation program on performance on the thoracic surgery in-training examination. Am J Surg 2001; 181:142–4
24.
Derossis A, Da Rosa D, Schwartz A, Hauge L, Bordage G: Study habits of surgery residents and performance on American Board of Surgery in-training examinations. Am J Surg 2004; 188:230–6
25.
Godellas C, Huang R: Factors affecting performance on the American Board of Surgery in-training examination. Am J Surg 2001; 181:294–6
26.
Caccamese S, Eubank K, Hebert R, Wright S: Conference attendance and performance on the in-training examination in internal medicine. Med Teach 2004; 26:640–4