Last year, I was chatting with some judges in the Mid-Atlantic Slack about review counts, and one of them mentioned that he felt the number of reviews written wasn’t a great metric to track – that quality mattered more than quantity. My response was that quantity is its own quality.
Receiving a well-written review is obviously more beneficial than receiving a poorly written review. But the difference in benefit between receiving a review and not receiving one is much greater. And while I always strive to make the reviews I write meaningful and helpful, that effort doesn’t mean they’re all terrific works of art that should go on the Feedback Loop fridge.
At the end of last year, I wrote about how my goal was to write a lot of reviews. I feel good about that. I’ve formed better habits: taking notes, investing feedback in at least one teammate, and actually sitting down to write reviews. Although I’m a little behind on those habits this year, I’m in a good place to write regular post-event reviews.
So this year, I intend to focus on the quality of the reviews I write. Quality is a tough thing to measure, but it’s important to me to find some way to measure it. I don’t like the idea of reaching the end of the year and giving myself a trophy simply because there was a goal. Goals are relevant only when there’s risk involved, the possibility that they might not be reached.
With that in mind, here are my 2017 review goals:
Write at least one review per Grand Prix or Open.
This one is scaled back from last year, partially because of my focus on quality. Additionally, I’m going to be in roles this year where I don’t get to work closely with a single judge all day as often as I used to. So while I still want to write multiple reviews for every Grand Prix, I’m going to be okay if I don’t write about individual interactions every day.
Give feedback to the head judge of every Grand Prix I work.
People tend to be better at giving feedback down or sideways than they are at reviewing up. They often feel like head judges must have a reason for every action, or they’re afraid of retribution for critical feedback, or they simply don’t interact with the head judge often enough to take notes. As a result, head judges unfortunately don’t receive as much feedback as floor judges do.
I plan to use a few specific tools to meet this goal. The first is simply conversation. As a Level 3 judge, I’ll often be a team lead working closely with the head judge; as a result, I’ll have better opportunities to observe them than many floor judges do. Every time I interact with a head judge on an appeal or investigation, I want to follow up with a conversation.
The second tool is the Grand Prix Head Judge Feedback Form – a link usually sent out after a Grand Prix to allow judges to share their thoughts with the head judges. This form is part of the renewal process for Grand Prix head judges, so it’s taken pretty seriously.
And the third tool is good, old-fashioned review-writing. I plan to write reviews that reflect my feedback from the forms, but in a more personal, narrative voice.
Write at least 50% of my total reviews as re-views.
I want to invest in reviewing judges whom I’ve reviewed in the past. One way to improve my review quality this year is by “specializing” a little bit – coming back to judges with whom I’ve worked closely before and charting how I believe they’ve grown (or failed to grow). When I anticipate working with a judge I’ve reviewed, or who has reviewed me, I’ll re-read those reviews for extra context before giving new feedback.
Beyond making these reviews more specific, this is a quality gauge: I want to see whether my reviews are having an effect on judges. If I write a review for a judge in June, work with them again in October, and could write the exact same review, perhaps a new strategy is needed. I can then evaluate whether my feedback was delivered in a way that encouraged that judge to build on their strengths or mitigate their weaknesses.
As with last year, I’m going to be tracking my reviews all year in a spreadsheet, using some key performance indicators to measure progress and make adjustments. Those KPIs are listed below, followed by a rough sketch of how the repeat-review percentages might be tallied:
- Percentage of GP HJ Feedback forms completed (Target: 90% or better)
- Percentage of GP HJs reviewed (Target: 50% or better)
- Percentage of “re-views,” or a second review of a judge (Target: 50% or better)
- Percentage of “re-reviews,” or a third (or later) review of a judge (Target: 25% or better)
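For anyone who likes to automate this kind of tracking, here’s a minimal sketch in Python of how the two repeat-review percentages could be tallied from a chronological log of reviews. Everything here is hypothetical – the names, the flat log format, and the tally logic are one possible approach, not a description of my actual spreadsheet. The first two KPIs are plain ratios (forms completed and head judges reviewed, each divided by GPs worked), so they’re omitted.

```python
from collections import Counter

# Hypothetical chronological log of reviews written, by subject's name.
# A real tracker would also need to count reviews from earlier years.
reviews_in_order = ["Alice", "Bob", "Alice", "Carol", "Alice", "Bob"]

seen = Counter()   # running count of reviews written per judge
re_views = 0       # second review of a judge (per the definition above)
re_reviews = 0     # third or later review of a judge

for judge in reviews_in_order:
    seen[judge] += 1
    if seen[judge] == 2:
        re_views += 1
    elif seen[judge] >= 3:
        re_reviews += 1

total = len(reviews_in_order)
print(f"re-views:   {re_views / total:.0%} (target: 50%+)")
print(f"re-reviews: {re_reviews / total:.0%} (target: 25%+)")
```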
Along the way, I want to continue to level up my verbal feedback. I’m requesting roles where verbal feedback is a critical component, like Day 2 Team Lead Shadow and Level 3 Panelist. To excel in these roles, I want to keep that live feedback skill as sharp as I can.
It’s only February. If you haven’t worked out your 2017 feedback goals yet, you’ve still got plenty of time. Unsure where to start or how to track your progress? Leave a comment below and we can discuss ideas and metrics.
Is this supposed to be titled “Quantity is its own quality”?
Hey – thanks for looking; while writing this, I had to double-check each instance a few times and still missed some until Angela caught it!
And nope, because I wanted to focus on quality this year over quantity, I wanted to turn the phrase around and make it “Quality is its Own Quantity.”
And now I’m double-checking that line…