Consistency in QA evaluations is key to delivering top-notch customer support. Enter the Calibration feature—a powerful tool 🛠️ that aligns your QA team by standardizing evaluations and uncovering discrepancies 🔍.
With Calibration, your team can ensure a unified understanding 🤝 and consistent ticket assessments. Ready to elevate your QA process? Let’s explore how it works and why it’s a game-changer! 🚀✨
🧐 Purpose of the Calibration Feature
The QA Calibration feature boosts consistency and reliability in evaluations by bringing your team together for calibration sessions 🤝. In these sessions, QA team members evaluate the exact same conversations and compare their assessments with those of their colleagues, fostering alignment.
Here’s why it’s a must-have:
- Standardize Evaluations: Ensure every team member assesses tickets consistently ⚖️.
- Spot Training Needs: Identify unclear criteria or areas open to different interpretations 🔍.
- Refine QA Processes: Foster discussions to resolve discrepancies and improve evaluation standards 🚀.
💪 Setting Up a Calibration Session
The Calibration feature lets you create sessions where multiple QA experts evaluate the same set of cases for comparison and alignment. Here’s how QA Hosts can set up a session:
1. Navigate to the Calibrate Tab: Open the QA House and go to the Calibrate tab, then click the Create Session button to get started!
2. Configure the Session (see the sketch after these steps):
- Choose a Due Date: Set a deadline for QA experts to submit their evaluations. This ensures everyone has enough time to complete their tasks 🗓️.
- Select a QA Scorecard: Pick the scorecard the team will use for calibration, defining the criteria for evaluation 📝.
- Add QA Experts: Specify which team members will participate. These experts must complete their evaluations by the deadline 👥.
- Assign Tickets: Choose the tickets and associated agents to be evaluated. This ensures all participants review the same interactions for comparison 🔍.
3. Notifications Sent: Once the session is scheduled, all participants receive a notification prompting them to complete their evaluations by the deadline.
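If it helps to think of the session as data, the four settings in step 2 map onto a simple record. Below is a minimal sketch in Python; the class and field names are hypothetical illustrations of the configuration described above, not the product's actual API.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class CalibrationSession:
    """Hypothetical shape of a calibration session; field names are
    illustrative stand-ins for the four settings configured in step 2."""
    due_date: date          # deadline for experts to submit their evaluations
    scorecard_id: str       # QA scorecard that defines the evaluation criteria
    expert_ids: list[str] = field(default_factory=list)  # participating QA experts
    ticket_ids: list[str] = field(default_factory=list)  # tickets everyone reviews

# Example: three experts calibrating on the same two tickets
session = CalibrationSession(
    due_date=date(2025, 6, 30),
    scorecard_id="support-scorecard-v2",   # hypothetical scorecard identifier
    expert_ids=["alice", "bob", "carol"],
    ticket_ids=["T-1042", "T-1055"],
)
```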
➡️ Next Steps for the Host
As the session Host, you’ll also need to submit your own evaluations for the assigned cases. During the session, participants’ evaluations will be systematically compared to yours, helping to identify discrepancies and align understanding.
🧗🏽‍♂️ Conducting a Calibration Session
Once the setup is complete, participants can begin evaluating their assigned tickets:
1. Submitting Evaluations:
- Each QA expert evaluates their assigned tickets by clicking the "Rate" button. The number on the button indicates how many evaluations the expert still needs to submit for the session.
- Participants can revise their evaluations anytime before the deadline: simply click the "Rate" button again, even if it shows "0" tasks remaining, to make edits.
- The Host decides when to close submissions or lets the session close automatically when the deadline passes. The deadline can also be extended, even if the session status is "Idle".
🚨 While the Calibration session status is "In Progress", both participants and the Host can submit and edit their evaluations.
⭐️ Important: While submitting their answers, participants cannot see evaluations submitted by their colleagues or the Host. Each participant only has access to their own evaluations (see the visibility sketch after this list).
2. Reviewing Results:
- After submissions are completed, the team usually gathers for an alignment meeting. During the meeting, the Host clicks the "Start Session" button, making all submitted results visible to participants and platform Admins.
- Once results are shared, the team can identify discrepancies and discuss differences in their interpretations of the QA criteria.
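The visibility behavior described above (blind while submitting, shared once the Host starts the session) can be summarized in a few lines. This is a conceptual sketch based only on what this article states; the function and its logic are our assumptions, not the product's implementation.

```python
def can_view_evaluation(viewer: str, author: str, session_status: str) -> bool:
    """Illustrative 'blind review' rule. Statuses are the ones this article
    mentions: "In Progress" and "Idle" (submissions phase), then
    "Calibration" and "Complete" (results shared)."""
    if session_status in ("In Progress", "Idle"):
        # While submitting, each participant sees only their own evaluations.
        return viewer == author
    if session_status in ("Calibration", "Complete"):
        # After the Host clicks "Start Session", results become visible
        # to participants and platform Admins.
        return True
    return False

assert can_view_evaluation("alice", "bob", "In Progress") is False  # blind phase
assert can_view_evaluation("alice", "bob", "Calibration") is True   # results shared
```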
🤝 Discussing and Aligning on Evaluations
Calibration sessions create opportunities to align on evaluation standards:
1. Analyzing Discrepancies:
- The system highlights the match percentage for each QA criterion, showing how aligned the team is on specific points (a conceptual sketch of this calculation follows this section).
- This data helps pinpoint QA criteria that might require clearer boundaries or additional explanation for better alignment.
2. Making Adjustments:
- The Host (typically a QA Admin) can adjust evaluations during the session. These updates dynamically recalculate the match rate for each participant in real time.
- Notes can be added directly within the session to document key discussions and decisions for future reference.
- 🟢 While the team is reviewing results together, the session status is marked as "Calibration".
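As a mental model for the match percentage, think of it as the share of assignments where a participant's rating for a criterion agrees with the Host's baseline. The sketch below is an assumption about the calculation (the article doesn't specify whether partial agreement counts), but it also shows why Host adjustments recalculate match rates in real time: changing a baseline value changes the comparison for every participant.

```python
def criterion_match_rate(participant: dict[str, int], host: dict[str, int]) -> float:
    """Percentage of commonly rated tickets where the participant's rating
    for a criterion equals the Host's baseline rating.

    Keys are ticket IDs, values are scorecard ratings. Illustrative only:
    the product may score near-misses or weight criteria differently.
    """
    common = participant.keys() & host.keys()
    if not common:
        return 0.0
    matches = sum(participant[t] == host[t] for t in common)
    return 100.0 * matches / len(common)

host_tone = {"T-1042": 5, "T-1055": 3}
print(criterion_match_rate({"T-1042": 5, "T-1055": 4}, host_tone))  # 50.0

# If the Host adjusts T-1055's baseline to 4 during the discussion,
# the participant's match rate updates accordingly.
host_tone["T-1055"] = 4
print(criterion_match_rate({"T-1042": 5, "T-1055": 4}, host_tone))  # 100.0
```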
🏁 Completing the Calibration Session
1. Finalizing the Session:
- Once the team has reached a consensus, the session is marked as "Complete", locking all evaluations from further editing.
- This ensures the agreed-upon standards are preserved for reference in future evaluations.
2. Post-Session Review:
- Participants can revisit the session and review notes from discussions to continuously improve their evaluation practices.
- QA Admins can access insights on participants’ individual Calibration match rates over a defined timeframe in Calibration Insights (a rough sketch of this aggregation follows).
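How those individual match rates roll up over a timeframe isn't spelled out here, but a simple average per participant is a reasonable mental model. The sketch below is purely illustrative; the `sessions` structure and the averaging logic are assumptions, not how Calibration Insights is necessarily computed.

```python
from datetime import date

def average_match_rate(sessions: list[dict], participant: str,
                       start: date, end: date) -> float | None:
    """Average a participant's overall session match rates within [start, end].

    Each entry in `sessions` is an illustrative record of the form
    {"date": date, "match_rates": {participant_name: percentage}}.
    Returns None if the participant has no sessions in the window.
    """
    rates = [s["match_rates"][participant] for s in sessions
             if start <= s["date"] <= end and participant in s["match_rates"]]
    return sum(rates) / len(rates) if rates else None

history = [
    {"date": date(2025, 5, 6),  "match_rates": {"alice": 82.0, "bob": 74.0}},
    {"date": date(2025, 5, 27), "match_rates": {"alice": 90.0}},
]
print(average_match_rate(history, "alice", date(2025, 5, 1), date(2025, 5, 31)))  # 86.0
```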
The QA Calibration feature is a game-changer for fostering consistency and alignment in QA evaluations. By encouraging open discussions and shedding light on evaluation discrepancies, it promotes a culture of continuous improvement.
With Calibration, you can ensure your customer support quality standards are upheld consistently across the board. Try it today and take the first step toward a more aligned, effective QA process! 🚀