🚨 Before you start using the Quality Assurance tool, it's crucial to set it up in a way that suits your requirements for every existing QA Scorecard.
🤫 If you prefer every QA Scorecard to have a sample of random cases, there is no need to narrow down the selection with filters.
There are several reasons why reviewing the Settings is important before defining the QA workflow:
- Ensuring that an adequate number of QA evaluations are performed for each agent within the week.
- Ensuring that only conversations that are relevant or significant to you are reviewed.
- Ensuring that quality standards are effectively communicated to the rest of your team.
- Keeping management informed about the targets to be achieved and working towards reaching them.
Let's start our setup by visiting Settings -> Quality Assurance -> Case funnel
The Case Funnel Settings will help you specify the type of cases that require the attention of your QA experts, and the number of QA evaluations expected per agent per week, for every QA Scorecard separately.
1. Select the name of the QA Scorecard to be filtered
2. Choose the tickets to be rated by QA
To begin with the setup process, we need to define our preferences by selecting whether we're only interested in Solved cases or if we'd also like to include ongoing conversations in our QA review. 🤝
- Solved and Handled cases: This category includes all cases where an agent has left a comment, whether internal or external. These cases will be displayed on the Scorecard for QA evaluation.
- Solved cases: This category includes only cases that have been resolved by the agent, regardless of whether they responded to the customer in the case. Even if the agent only solved the case or merged it with another one, it will still appear on the Scorecard for evaluation.
💡Pro-tip: If you want to provide feedback in real-time, we recommend selecting the Solved and Handled option. This allows you to offer tips and feedback to your agent while they are still working on the same case.
3. Narrow down the selection
Now it's time to specify the type of cases that require QA rating, ensuring that your QA experts are not spending their time evaluating unimportant cases. 👆🏽
To define your preferences, use the two filters: the channel filter and the metric filter.
❗️ The QA Insights Dashboard helps you understand the results at the team and agent level. Check it out here!
- Channel filter: Define channel groups through which agents provided support - Email, Chat, and Phone. If multiple channel groups are selected, we include all cases supported in each of these channels.
💡Pro-tip: If no channels are defined, we include cases supported through any channel.
💡Pro-tip x2: You can always adjust the settings to focus on a specific support channel for the week. For instance, if you introduced Live Chat support and need to ensure the same high-quality level as with Email support, set a filter to focus only on chat tickets this week or month!
❗️ Did you know that you can quickly switch between Chat, Email, and Phone QA results in the QA Insights dashboard? This helps quickly understand which channel needs more attention during coaching. Learn more about the QA Insights dashboard here!
- Metrics filter: Focus on cases that meet your preferred criteria (e.g., have at least 2 messages from the agent before being solved) or fall outside your desired outcome (e.g., the first reply time is longer than 1h). If more than one metric is selected, we include cases that meet at least one of the metric criteria (e.g., either had more than 2 Public Replies or Chat Messages before being solved, or had a First Reply Time longer than 1h).
💡Pro-tip: If no metrics are selected, we include cases regardless of their metric results.
💡Pro-tip x2: If only the CSAT metric is selected, and both positive and negative ratings are chosen, we include only the cases rated by the customer.
💡Pro-tip x3: You can leave the metric limits half-open. For instance, if you want at least 4 Public replies or Chat Messages sent before the case is resolved, you can select Messages to Solve as (4;-) (as shown in the example)👇🏽.
After defining your preferred criteria, your QA experts will be presented with cases for QA rating that have at least one of the defined tags, were supported through one of the chosen channels, and match at least one of the selected metrics and values.
Using the example above👆🏽, one of the cases suggested for QA rating could be related to "Shipping," supported through Chat, and negatively rated by the customer.
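If it helps to picture the logic, here is a minimal sketch of how the channel and metric filters could combine. The function names, field names, and metric keys are assumptions for illustration only, not Kaizo's actual implementation:

```python
# Illustrative sketch only (assumed field and metric names, not Kaizo's code).

def matches_channel(case, channels):
    # No channels selected -> cases supported through any channel are included.
    return not channels or case["channel"] in channels

def matches_metrics(case, metric_filters):
    # No metrics selected -> metric results are ignored.
    if not metric_filters:
        return True
    # A case only needs to satisfy at least ONE selected metric (OR logic).
    for name, (low, high) in metric_filters.items():
        value = case["metrics"].get(name)
        if value is None:
            continue
        # Half-open limits: None means "no limit on this side", e.g. (4, None).
        if (low is None or value >= low) and (high is None or value <= high):
            return True
    return False

# Example: Chat cases with at least 4 messages to solve OR a first reply time over 1 hour.
channels = {"Chat"}
metric_filters = {"messages_to_solve": (4, None), "first_reply_time_hours": (1, None)}

case = {"channel": "Chat", "metrics": {"messages_to_solve": 6}}
print(matches_channel(case, channels) and matches_metrics(case, metric_filters))  # True
```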
4. Set the recommended weekly workload
The next step in your QA setup is to decide on the number of cases that you want your QA team to evaluate each week. You have two options:
- Set a fixed number of cases to rate.
- Set a percentage of all solved/handled cases. For example, if your agents solved 100 cases last week, and you set a minimum of 10% of solved cases to rate, a total of 10 random cases will be rated on the Scorecard.
💡 Pro-Tip: If you choose to rate 5 cases per agent or 10% of handled cases, and an agent handled 100 cases during the week, 10 cases will be suggested for QA evaluation for that week. We pick the higher of the two numbers, so if you want the same number of QA ratings each week for everyone, we suggest setting the percentage to 0 (see the sketch after these tips).
💡 Pro-Tip x2: Rate enough cases to see an agent's actual performance. For example, if an agent solved 100 cases during the previous week, but you only rated one, you wouldn't get a good understanding of the agent's performance (that ticket could be either very good or very bad, which means you won't be objective in your rating). We recommend rating 5%-10% of all cases that your agents have handled. This ensures objective feedback and a realistic IQS (Internal Quality Score).
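Here is a rough sketch of the "higher of the two settings" rule described above. The function name and rounding behaviour are assumptions for illustration, not the exact calculation Kaizo performs:

```python
# Sketch of the weekly workload rule: the higher of the fixed number and the
# percentage of handled cases wins (rounding here is an assumption).

def weekly_qa_workload(handled_cases: int, fixed_cases: int, percentage: float) -> int:
    """Suggested number of cases to rate for one agent in a given week."""
    from_percentage = round(handled_cases * percentage / 100)
    # Set the percentage to 0 if you want the fixed number to apply to
    # everyone every week.
    return max(fixed_cases, from_percentage)

print(weekly_qa_workload(handled_cases=100, fixed_cases=5, percentage=10))  # 10
print(weekly_qa_workload(handled_cases=100, fixed_cases=5, percentage=0))   # 5
```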
"How can I ensure that my filters will work for everyone?"
While it's great to define narrow filters, it can be difficult to predict if every agent will support the same types of tickets within the week or have enough of these cases to achieve the weekly QA rating goal. So how can we ensure that the weekly rating process goes smoothly?
Review how many cases matching your filter each agent has supported on average per week over the past 30 days.
Using the example above 👆🏽, we can expect around 100 possible cases to be rated after selecting our filter. As our limit is set to 5, we will have plenty of options to choose from for the QA rating.
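As a quick sanity check, you can compare the weekly average of matching cases against your workload goal. The numbers below are illustrative assumptions based on the example above:

```python
# Back-of-the-envelope check (illustrative numbers): does the filter leave
# enough matching cases per agent to hit the weekly rating goal?

avg_weekly_matching_cases = 100  # from the 30-day review of your filter
weekly_goal = 5                  # recommended weekly workload per agent

if avg_weekly_matching_cases >= weekly_goal:
    print("Plenty of matching cases to choose from for QA rating")
else:
    print("Consider widening the filters or lowering the weekly goal")
```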
🚨 You can still swap a case for another one if you think it is not worth a QA rating. We will propose another case that matches your defined filters.
What's next?
Set up your company's QA standards by creating your own QA scorecard. You can find more information on how to do this at this link.
In the meantime, QA experts can learn about how the QA rating process works within Kaizo by visiting this link. Happy rating! 😎