Introduction:

In the last two posts I introduced the fact that there exists a culture of ‘failure to fail’ in medicine. Faculty themselves have a large part to play and, apart from being too lenient, also unwittingly introduce bias during learner evaluations. In this final post I want to share my approach to trainee evaluation.

I believe that most clinicians already know how to make judgements about learners – they just don’t know that they know! Furthermore, if you follow the six steps below I think you’ll start to make more effective judgements and provide more meaningful feedback to learners. Just maybe – you might even identify [and start the process of helping] a failing learner. Comments welcome.

FACT: Most clinicians ALREADY possess skills transferable to trainee evaluation


  • ER docs routinely diagnose and treat conditions in the ER with little information
    • We have the ability to quickly discern sick from not sick
    • What’s more – (if you think about it) you also possess the right words to describe why … pale, cyanotic, listless versus pink, alert, smiling
    • The same skills can be used to identify competence [introduced self, gave clear question to consultant, provided concise history] and incompetence [pertinent negatives missing from history, inability to generate basic ddx, brusque interpersonal interactions]


  • Many clinicians also possess transferable “Soft Skills”. We use these daily in difficult patient interactions.
  • Guess what? The exact same skills can be brought to bear in difficult trainee evaluations!
    • listening empathetically
    • breaking bad news
    • dealing with emotions
    • dealing with hostility

My 6-step Prescription for Evaluation Success:

I have tried to illustrate above that the same skills that make you an astute clinician will also allow you to discern whether a learner may be under-performing and then let them know. Why not stick to what works? Below is my six-step prescription for evaluation success. I am hoping that it is pleasantly familiar to you.

STEP 1: Take a history from the learner at the beginning of the shift.

The ED STAT course run by CAEP [link] teaches Faculty to understand where the learner is coming from at the start of the shift.

  • Ask the learner about:
    • Level of training
    • EM experience
    • Home program
  • Provide an orientation to the ER
  • Ascertain their learning goals
  • Set expectations
    • based on level of training
    • that they will be receiving feedback about their performance
    • [in fact - tell them about these six steps]

STEP 2: Examine the learner’s skills

Use the IPPA approach!

  • Inspection: watch the learner
    • taking a history
    • performing physical exam manoeuvres
    • giving the diagnosis/ddx
    • giving discharge instructions
  • Palpation: perform physical exams together
    • to confirm or refute findings
    • to correct errors
    • to role model
    • to learn from them
      • learners often remember all the special manoeuvres better
      • they ask questions that challenge you to read up – like “which cranial nerve is glossopharyngeal?”
  • Percussion: use your finely tuned senses to ‘tap out’ gaps in:
    • knowledge
    • clinical skills
    • attitude
    • [Hint: the mnemonic KSALTs can help identify where the problem lies with the learner. Read this [link] to the ALIEM MEdIC case].
  • Auscultation: listen to the learner
    • during consultations
    • during interactions with other providers
    • [HINT: eavesdrop on their histories from behind the curtain]

STEP 3: Diagnose the Learner:


  • Is the learner competent or not?
  • Think about biases you may be introducing and try to avoid them – especially central tendency bias [link].
  • Use my ABC RIMES approach:
    • Attitude - make comments like “self-directed” vs “unmotivated”
    • Behaviours – “procedural skills above peers” vs “poor attention to workplace safety”
    • Competencies: [I use a modification of the RIME approach [link] with the addition of an ‘S’]
        • Reporting – give feedback on history-taking, written communication and presentations
        • Interpretation – give feedback on interpretation of X-rays, EKGs and ‘putting information together’
        • Managerial skills – give feedback on efficiency, multi-tasking, stewardship
        • Educator/Expertise – give feedback on knowledge and teaching skills
        • Soft skills – give feedback on communication, team-skills, honesty, reliability, insight, receptivity to feedback
  • For those who are slavish to the CanMEDs brand, try the MS CAMP mnemonic:
    • Medical expert
    • Scholar
    • Communicator/ Collaborator
    • Advocate
    • Manager
    • Professional

STEP 4: Provide your diagnosis

  • Think of your favourite uncle or granddad – would you be happy with the care that they received? If so – great!
  • Provide feedback using the ABC RIMES framework [here's an earlier post on providing feedback].
  • If the feedback is going to be negative – you need to prepare. Here’s more information on getting ready for a difficult conversation [link].
    • If you’ve primed the learner at the beginning of the shift – they shouldn’t be surprised.
    • Have the courage to practise tough love – trainees are adults, they should be expected to receive feedback like adults.

STEP 5: Document! Document! Document!


  • ALL verbal feedback needs to be written down.
  • Use strong descriptive terms rather than weak ones. Saying ‘good job’ is good for self-confidence, but learners need specifics.
  • Use the ABC RIMES or MS CAMP framework – effective evaluation forms make this task easier.
  • RANK THE LEARNER. They cannot all be ‘above average’ – some trainees are below average.
  • Even though a learner may just be having a bad day, this still needs to be documented so that patterns can emerge.

STEP 6: Consult them out!

Most of us feel that problem learners are “someone else’s problem” – you’re right! You need to consult them out. But like all consults – you need to give the consultant the right information [see above].

  • Remediation is NOT your responsibility
  • Issues need to be referred to:
    • Rotation coordinator
    • Program director

Summary:

Most clinicians already possess the skills needed to make judgements about failing learners. Furthermore, they also possess the soft skills to give negative face-to-face evaluations. If all clinicians used these skills [together with direct observation and expectation-setting] we might identify more learners who need help. Ultimately we will achieve more success at graduating cadres, 100% of whom are competent, caring physicians.

Future Directions:

There needs to be a huge emphasis on culture change. This is going to require:

  1. More faculty development and engagement. BOTH faculty and programs need to come together to do this. [I'd appreciate comments on how to bring faculty to the table because at my institution this is problematic]
  2. Learners themselves understanding how to receive and act on feedback.

References:

ED STAT! Course Material

Hauer et al 2010. Twelve tips for implementing tools for direct observation of medical trainees’ clinical skills during patient encounters.

Kogan et al 2010. What drives faculty ratings of residents’ clinical skills? The impact of faculty’s own clinical skills.

Kogan et al 2012. Faculty staff perceptions of feedback to residents after direct observation of clinical skills.

 

 

Introduction:

We know that the best way to evaluate learners is by directly observing what they do in the workplace. Unfortunately [for a variety of reasons] we do not do enough of this. In my last post I described some reasons why we sometimes fail at making appropriate judgements about failing learners.

When it comes to providing feedback, there is much room for improvement. We know that feedback can be influenced by the source, the recipient and the message. What most people don’t know is that, when you’re evaluating a learner, you yourself could be unwittingly introducing bias – just like when we make diagnoses.

Types of Evaluation Bias:

The Halo Effect

  • If a learner really excels in one area, this may positively influence their evaluation in other areas.
  • For example, a resident quickly and successfully intubates a patient in respiratory arrest. The evaluator is so impressed that she minimizes deficiencies in knowledge, punctuality and ED efficiency.

Central Tendency Bias

  • It is a human tendency to NOT give an extreme answer.
  • Recall from part 1 [link] that faculty tend to overestimate learners’ skills and furthermore tend to pass them on rather than fail them.
  • This may explain why learners are all ranked “above average”.

The Hawk/Dove Effect [Severity/Leniency Bias]

  • Some faculty are inherently more strict [aka “Hawks”] while others are more lenient [aka “Doves”]
  • Research is inconclusive on which demographic factors predispose evaluators to being one or the other
    • One study suggested ethnicity and experience may be correlated with hawkishness

The Recency Effect

  • This is more relevant to end-of-year evaluations.
  • Despite the fact that we all have highs and lows, recent performance tends to overshadow remote performance

The Contrast Effect

  • After just working a shift with an exemplary learner, the evaluator may be unfairly harsh when faced with a less capable learner.

Personal Bias [Perception Filters]

  • Evaluators can be biased by their own mental ‘filters’. These develop from their experiences, background etc. They will therefore form subjective opinions based on:
    • First Impressions [Primacy Effect]
    • Bad Impressions [Horn Effect] – the opposite of the halo effect [e.g. a shabbily dressed trainee may be brilliant, but might be under-appreciated]
    • The “similar-to-me” effect – faculty will be more favourable to a trainee who is perceived to be similar to themselves.

Opportunity for EM Faculty

Okay. In Part 1 you saw how we as faculty are partly to blame for evaluation failure. Above, I have illustrated how we also introduce bias when we try to evaluate learners. Now for the good news …

  • What faculty need to ascertain is:
    • how well the trainee provides HIGH QUALITY PATIENT CARE
      • what are the features of this?
      • how do I put this into words?
  • This can easily be done by:
    1. making observations – thankfully, in the ER we have all the tools we need [read below]
    2. synthesizing observations into an evaluation - in Part 3 of this blog I’ll show you how I go about putting my observations into words.

Each ER shift is a perfect laboratory for direct observation


  • During a shift in the ED, multiple domains of competency can be assessed – such as physical exam, procedural skills and written communication. In fact ALL of the CanMEDs competencies can be assessed in real time.
  • Additionally, we [at U of S] already do things that are encouraged:
    • We provide daily feedback using an effective [validated] assessment tool [those daily encounter cards were designed using current evidence on evaluation]
    • Learners receive multiple assessments from several faculty members throughout their rotation
    • The ER also provides an avenue for 360-degree evaluation [by ancillary staff, co-learners and patients]

Summary:

Hopefully you now know a bit more about why we as medical faculty fail at failing learners and how evaluation bias plays into this. I have tried to show that the ER provides a perfect learning lab for direct observation and feedback. In the next post I will prescribe my personal formula for successful trainee assessment. Comments please! [There's a link at the top under the title of the post]

References:

Management Study Guide Website

Dartmouth Website

 


I recently gave a talk to fellow faculty on the phenomenon of “failure to fail” in emergency medicine. I am no expert, but I have tried to synthesize the details in a useful way. I have broken it down into three parts. Part 1 deals with the phenomenon of Failure to Fail. In separate posts I will introduce some forms of evaluator bias and then provide a prescription for more effective learner assessment in the ER. As always, comments are welcome!

Introduction:

  1. We expect our medical trainees to acquire the fundamental clinical skills
  2. We expect them to evolve from novice to expert.
  3. Our goal is to graduate cadres of competent physicians who will serve their communities safely, effectively and conscientiously.

The Importance of Direct observation and Work-Based Assessment:

The current model of medical training is a blend of didactic teaching, clinical learning, simulation and self-directed endeavours. We then try to evaluate the learners formatively and summatively through written exams and standardised clinical scenarios.

We are learning that the best tool to evaluate learners is direct observation in the work context. This requires four things:

  1. Deliberate practice [on the part of the trainee]
  2. Intentional observation [on the part of the faculty]
  3. Feedback
  4. Action planning

This will place even more emphasis on direct faculty oversight. We will therefore need to develop faculty members’ skills and coach them in how to:

  • Perform direct observation
  • Perform a valid [i.e. repeatable] evaluation of skills
  • Provide effective feedback


Current State of Trainee Evaluation

FACT: the current model sometimes fails to discriminate between learners and to fail the underperforming ones

      • This occurs in spite of observed unsatisfactory performance.
      • This occurs despite faculty confidence in their ability to discriminate.
      • Most faculty agree that this is the single most important problem with trainee evaluation.

We’re trying to understand why… This is what I found.

  •  The Wobegon Effect – [wikilink]
    • From the fictional town featured on a radio show – a town where

 “all the women are strong, all the men are good looking, and all the children are above average”

    • Describes the human tendency to overestimate one’s achievements and capabilities in relation to others. [Also known as illusory superiority - link] 
  • Grossly inflated performance ratings have been found practically everywhere in North America:
    • Business – both employees and managers
    • University professors – overestimate their excellence [gulp]
    • Studies on ‘bad drivers’ – everyone has one of these in their family! 
  • Not surprisingly, this phenomenon is equally pervasive in medicine
    • Faculty struggle to provide honest feedback and consistent [valid] evaluations. [In one study, raters rated 66% of trainees "above average" – this is simply not possible! Pubmed]
    • Fazio et al [see references] demonstrated that 40% of IM clerks should have failed, yet were passed on…FORTY PERCENT!!!
    • A study by Harasym et al [Link] showed that OSCE examiners are more likely to err on the side of passing than failing students


    • Residents [in particular the lowest-performing ones] overestimate their competency compared to their ratings by faculty peers and nurses [Pubmed Link]
    • Moreover, the biggest overestimations lay in the so-called “soft skills” – communication, teamwork and professionalism. These are often the problems that give faculty and colleagues headaches with a particular learner.
    • One reason might be that soft skills are hard to quantify – unlike suturing skills, where incompetence is quickly identified
  • The end result is a culture of “failure to fail” where …
    • Many graduates are not acquiring the required skill set
    • We are failing to serve patient needs
      • Reduced safety, increased diagnostic error and reduced patient satisfaction
    • There is increasing negative fallout for the ENTIRE profession – our reputations are being besmirched in the digital era.
    • Ultimately, public trust is being eroded.
    • We cannot succeed at our job without public confidence in what we do.

Why we fail to fail learners:

Barriers to adequate Evaluation:

Learner factors

  • Learners are all different. Moreover, the same learner’s skills will vary over time as they grow and develop.
  • We all have good and bad days.
  • There exists a phenomenon called the “group-work effect”, whereby medical teams can mask the deficiencies of individual learners.

Institution factors

  • The tools of evaluation are flawed – some evaluation forms are poorly designed to discriminate between learners
  • We all work in the current culture of “too busy to teach”.
  • An incredible amount of work is needed to change this culture.

Faculty Factors

  • Faculty feel confident in their ability when polled.
  • Faculty feel a sense of responsibility to the patient, the profession and the learner, BUT …
  • Raters themselves are the largest source of variance in ratings of learners:
    • Examiners account for 40% of the variance in scores on OSCEs
    • Examiners’ ratings are FOUR times more varied than the “true” scores of students
    • Some tend to be more strict – “Hawks” … some are more lenient – “Doves”
    • There is a negligible effect of gender, ethnicity and age/experience [although one UK study found that hawks are more likely to be from an ethnic minority and older – link]
  • Clinical competence of faculty members is also correlated with better evaluations [link]
    • In one interesting study, faculty took an OSCE themselves and then rated students … the results showed that:
      • Faculty use their own practice style as a frame of reference to rate learners
      • Better performers on the OSCE were more insightful and attentive evaluators

A 2013 convenience sample of U of S EM faculty identified these top three reasons:

1. Fear of being perceived as unfair

2. Lack of confidence in the supporting evidence when considering a “fail”

3. Uncertainty about how to identify specific behaviours

What I discovered in the literature:

  1. Competing demands [clinical vs educational] mean that education suffers.
  2. Lack of documentation – preceptors fail to record day-to-day performance, so when it comes to the end-of-rotation evaluation there is not enough evidence.
  3. The interpersonal conflict model describes the following phenomenon:
  • Faculty members’ goal is to improve trainees’ skills – preceptors do care a lot!
    • They perceive the need to emphasize the positives and be gentle [to protect learner self-esteem and maintain an alliance/engage the trainee].
    • Faculty try to make feedback constructive without the learner feeling like it’s a personal attack.
    • This creates tension when one is forced to give negative and critical feedback.
    • The emotional component of giving negative feedback makes it even more difficult.
    • Consequently this tension forces us to overemphasize the positives.
    • This creates mixed messages regarding feedback, and learners walk away with the wrong message.

4. Lack of self-efficacy:

  • There is a lack of knowledge of what specifically to document: a) faculty don’t know what type of information to jot down, and b) faculty struggle to identify specific behaviours associated with failure.

[The reported low self-confidence during evaluations is actually a product of our training [or rather the lack thereof]. No one teaches you how to navigate the minefields of evaluation. This is particularly evident for soft skills. Staff often think that their judgements are merely subjective interpretations.]

5. Anticipating an [arduous] appeal process – the extra commitment, having to defend one’s actions/comments, and fear of escalation [e.g. legal action].
6. Lack of remediation options – there is a lack of faculty support, which leaves preceptors unsure about what to do or advise after diagnosing a problem.

Summary:

We have seen that the current model of medical training is failing to identify and fail underperforming learners. There are several reasons why, but faculty themselves play a large role in this culture of “Failure to Fail”. In my next post I will highlight some biases that we encounter when judging learners and provide a prescription for more effective learner evaluation.

Acknowledgement: thanks to Dr Jason Frank @drjfrank for pointing me in the right direction [authors Kogan and Holmboe are the ones you should search out in particular].

References:

Dudek et al 2005. Failure to fail: the perspectives of clinical supervisors.

Fazio et al 2013. Grade inflation in the internal medicine clerkship: a national survey.

Harasym et al 2008. Undesired variance due to examiner stringency/leniency effect in communication skill scores assessed in OSCEs.

Kogan and Holmboe 2013. Realizing the promise and importance of performance-based assessment.