Like many institutions, we have a mix of EM resident learners rotating through our departments. Expectations [and competencies] of a junior learner differ greatly from those of a senior learner. For example:

  • PGY 1 – Focus on clinical skills, e.g. X-ray reading and procedures
  • PGY 2–4 – Focus on more challenging patient encounters, e.g. medical and procedural management of the septic patient
  • PGY 5 – Focus on managerial roles, e.g. taking referrals from family doctors

At our recent Faculty Development Workshop, my brilliant colleague Dr. Rob Woods gave an engaging presentation on teaching senior learners in the ED. He subsequently facilitated an impromptu crowd-sourcing session with the participants. The result was an easy-to-apply rubric of expectations for learners at various seniority levels in the ER. We hope you find it useful:

[Rubric: Teaching EM Trainees at Different Levels of Training]

Introduction:

In the last two posts I introduced the fact that there exists a culture of ‘failure to fail’ in medicine. Faculty themselves have a large part to play and, apart from being too lenient, also unwittingly introduce bias during learner evaluations. In this final post I want to share my approach to trainee evaluation.

I believe that most clinicians already know how to make judgements about learners – they just don’t know that they know! Furthermore, if you follow the six steps below, I think you’ll start to make more effective judgements and provide more meaningful feedback to learners. Just maybe, you might even identify [and start the process of helping] a failing learner. Comments welcome.

FACT: Most clinicians ALREADY possess skills transferable to trainee evaluation


  • ER docs routinely diagnose and treat conditions with limited information
    • We have the ability to quickly discern sick from not sick
    • What’s more – if you think about it – you also possess the right words to describe why: pale, cyanotic, listless versus pink, alert, smiling
    • The same skills can be used to identify competence [introduced self, gave clear question to consultant, provided concise history] and incompetence [pertinent negatives missing from history, inability to generate a basic ddx, brusque interpersonal interactions]


  • Many clinicians also possess transferable “Soft Skills”. We use these daily in difficult patient interactions.
  • Guess what? The exact same skills can be brought to bear in difficult trainee evaluations!
    • listening empathetically
    • breaking bad news
    • dealing with emotions
    • dealing with hostility

My 6-step Prescription for Evaluation Success:

I have tried to illustrate above that the same skills that make you an astute clinician will also allow you to discern whether a learner is under-performing, and then to let them know. Why not stick with what works? Below is my six-step prescription for evaluation success. I hope you find it pleasantly familiar.

STEP 1: Take a history from the learner at the beginning of the shift.

The ED STAT course run by CAEP [link] teaches faculty to understand where the learner is coming from at the start of the shift.

  • Ask the learner about:
    • Level of training
    • EM experience
    • Home program
  • Provide an orientation to the ER
  • Ascertain their learning goals
  • Set expectations
    • based on level of training
    • that they will be receiving feedback about their performance
    • [in fact, tell them about these six steps]

STEP 2: Examine the learner’s skills

Use the IPPA approach!

  • Inspection: watch the learner
    • taking a history
    • performing physical exam manoeuvres
    • giving the diagnosis/ddx
    • giving discharge instructions
  • Palpation: perform physical exams together
    • to confirm or refute findings
    • to correct errors
    • to role model
    • to learn from them
      • learners often remember all the special manoeuvres better
      • they ask questions that challenge you to read up – like “which cranial nerve is glossopharyngeal?”
  • Percussion: use your finely tuned senses to ‘tap out’ gaps in:
    • knowledge
    • clinical skills
    • attitude
    • [Hint: the mnemonic KSALTs can help identify where the problem lies with the learner. Read this ALiEM MEdIC case [link].]
  • Auscultation: listen to the learner
    • during consultations
    • during interactions with other providers
    • [HINT: eavesdrop on their histories from behind the curtain]

STEP 3: Diagnose the Learner:


  • Is the learner competent or not?
  • Think about biases you may be introducing and try to avoid them – especially central tendency bias [link].
  • Use my ABC RIMES approach:
    • Attitude – make comments like “self-directed” vs “unmotivated”
    • Behaviours – “procedural skills above peers” vs “poor attention to workplace safety”
    • Competencies – [I use a modification of the RIME approach [link] with the addition of an ‘S’]
        • Reporting – give feedback on history-taking, written communication and presentations
        • Interpretation – give feedback on interpretation of X-rays, EKGs and ‘putting information together’
        • Managerial skills – give feedback on efficiency, multi-tasking and stewardship
        • Educator/Expertise – give feedback on knowledge and teaching skills
        • Soft skills – give feedback on communication, teamwork, honesty, reliability, insight and receptivity to feedback
  • For those who are slavish to the CanMEDS brand, try the MS CAMP mnemonic:
    • Medical expert
    • Scholar
    • Communicator/ Collaborator
    • Advocate
    • Manager
    • Professional

STEP 4: Provide your diagnosis

  • Think of your favourite uncle or granddad – would you be happy with the care they received? If so, great!
  • Provide feedback using the ABC RIMES framework [here's an earlier post on providing feedback].
  • If the feedback is going to be negative, you need to prepare. Here’s more information on getting ready for a difficult conversation [link].
    • If you’ve primed the learner at the beginning of the shift – they shouldn’t be surprised.
    • Have the courage to practise tough love – trainees are adults and should be expected to receive feedback like adults.

STEP 5: Document! Document! Document!


  • ALL verbal feedback needs to be written down.
  • Use strong descriptive terms rather than weak ones. Saying ‘good job’ boosts self-confidence, but learners need specifics.
  • Use the ABC RIMES or MS CAMP framework – effective evaluation forms make this task easier.
  • RANK THE LEARNER. They cannot all be ‘above average’ – some trainees are below average.
  • Even if a learner is just having a bad day, this needs to be documented so that patterns can emerge.

STEP 6: Consult them out!

Most of us feel that problem learners are “someone else’s problem” – and you’re right! You need to consult them out. But, like all consults, you need to give the consultant the right information [see above].

  • Remediation is NOT your responsibility
  • Issues need to be referred to:
    • Rotation coordinator
    • Program director

Summary:

Most clinicians already possess the skills needed to make judgements about failing learners. Furthermore, they also possess the soft skills to deliver negative face-to-face evaluations. If all clinicians used these skills [together with direct observation and expectation-setting], we might identify more learners who need help. Ultimately, we will achieve more success at graduating cadres 100% of whom are competent, caring physicians.

Future Directions:

There needs to be a huge emphasis on culture change. This is going to require:

  1. More faculty development and engagement. BOTH faculty and programs need to come together to make this happen. [I’d appreciate comments on how to bring faculty to the table, because at my institution this is problematic.]
  2. Learners themselves need to understand how to receive and act on feedback.

References:

ED STAT! Course Material

Hauer et al 2010. Twelve tips for implementing tools for direct observation of medical trainees’ clinical skills during patient encounters.

Kogan et al 2010. What drives faculty ratings of residents’ clinical skills? The impact of faculty’s own clinical skills.

Kogan et al 2012. Faculty staff perceptions of feedback to residents after direct observation of clinical skills.

Introduction:

We know that the best way to evaluate learners is by directly observing what they do in the workplace. Unfortunately [for a variety of reasons] we do not do enough of this. In my last post I described some reasons why we sometimes fail at making appropriate judgements about failing learners.

When it comes to providing feedback, there is much room for improvement. We know that feedback can be influenced by the source, the recipient and the message. What most people don’t know is that, when you’re evaluating a learner, you yourself could be unwittingly introducing bias – just as we do when we make diagnoses.

Types of Evaluation Bias:

Halo Effect:

  • If a learner really excels in one area, this may positively influence their evaluation in other areas.
  • For example, a resident quickly and successfully intubates a patient in respiratory arrest. The evaluator is so impressed that she minimizes deficiencies in knowledge, punctuality and ED efficiency.

Central Tendency Bias:

  • It is a human tendency to NOT give an extreme answer.
  • Recall from part 1 [link] that faculty tend to overestimate learners’ skills and furthermore tend to pass them on rather than fail them.
  • This may explain why learners are all ranked “above average”.

Hawk/Dove Effect:

  • Some faculty are inherently more strict [aka “Hawks”] while others are more lenient [“Doves”].
  • Research is inconclusive about which demographic factors predispose evaluators to being one or the other.
    • One study suggested ethnicity and experience may be correlated with hawkishness

Recency Bias:

  • This is more relevant to end-of-year evaluations.
  • Despite the fact that we all have highs and lows, recent performance tends to overshadow remote performance.

Contrast Effect:

  • After just working a shift with an exemplary learner, the evaluator may be unfairly harsh when faced with a less capable one.

Personal Perception Biases:

  • Evaluators can be biased by their own mental ‘filters’. These develop from their experiences, background, etc. They will therefore form subjective opinions based on:
    • First Impressions [Primacy Effect]
    • Bad Impressions [Horn Effect] – the opposite of the Halo Effect [e.g. a shabbily dressed trainee may be brilliant but under-appreciated]
    • The “Similar-to-Me” Effect – faculty will be more favourable to a trainee who is perceived to be similar to themselves.

Opportunity for EM Faculty

Okay. In part 1 you saw how we as faculty are partly to blame for evaluation failure. Above, I have illustrated how we also introduce bias when we try to evaluate learners. Now for the good news …

  • What faculty need to ascertain is:
    • how well the trainee delivers HIGH-QUALITY PATIENT CARE
      • what are the features of this?
      • how do I put this into words?
  • This can easily be done by:
    1. making observations – thankfully, in the ER we have all the tools we need [read below]
    2. synthesizing observations into an evaluation – in Part 3 of this blog I’ll show you how I go about putting my observations into words.

Each ER shift is a perfect laboratory for direct observation


  • During a shift in the ED, multiple domains of competency can be assessed – such as physical exam, procedural skills and written communication. In fact, ALL of the CanMEDS competencies can be assessed in real time.
  • Additionally, we [at U of S] already do several of the things that are encouraged:
    • We provide daily feedback using an effective [validated] assessment tool. [Our daily encounter cards were designed using current evidence on evaluation.]
    • Learners receive multiple assessments from several faculty members throughout their rotation.
    • The ER also provides an avenue for 360-degree evaluation [by ancillary staff, co-learners and patients].

Summary:

Hopefully you now know a bit more about why we as medical faculty fail at failing learners and how evaluation bias plays into this. I have tried to show that the ER provides a perfect learning lab for direct observation and feedback. In the next post I will prescribe my personal formula for successful trainee assessment. Comments please! [There's a link at the top under the title of the post]

References:

Management Study Guide Website

Dartmouth Website