Assessing an agent's support quality on a ticket often takes more than the message content alone. Reviewing ZD events, ticket details, and relevant productivity metrics can provide valuable insight into the agent's performance.
Forget manual First Reply Time and Messages To Solve calculations: get the comprehensive ticket insights you need for QA rating!
🤩 Check out our updated ticket window Conversation and Details tabs!
- Clearer visibility of the messages and the channels used by the customer and the agent
- Better overview of the ticket statuses between interactions
- More space to review even long conversations
- Clear overview of the ticket Details
💡What Performance Metrics are included?
- Average Customer Sentiment for the ticket
- Average Agent Empathy Score on the ticket
- First Reply Time (FRT) of the ticket (if applicable)
- Reply Time (mean) (RT) of the ticket (if applicable)
- Messages To Solve in the ticket
- Number of times the ticket was Reopened
- CSAT result of the ticket
❗️Note: Each metric is calculated relative to the agent currently undergoing QA rating. As a result, certain metrics may not be applicable to every individual agent.
Let's consider an example: Agent A initially handled a ticket and provided the first reply to the customer. Subsequently, Agent B took over and responded to the customer's second and third emails. Agent B successfully resolved the ticket, resulting in a positive CSAT rating.
In this scenario, Agent A will have a First Reply Time (FRT) result for the ticket since they sent the initial message. However, they will not have a Reply Time (mean) (RT) metric as only the first message was sent by Agent A. Agent A's CSAT rating will be associated with this ticket.
On the other hand, Agent B will have the Reply Time (mean) (RT) metric, reflecting their average response time across all messages they sent in the ticket. Agent B will not have a First Reply Time (FRT) result since the first message was not sent by them. Agent B's CSAT result for this ticket will also be displayed.
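Under these attribution rules, FRT and RT (mean) can be sketched in code. The message structure and field names below are illustrative assumptions, not the product's actual data model:

```python
from statistics import mean

def attribute_metrics(messages):
    """Sketch of per-agent FRT and RT (mean) attribution.

    messages: chronological list of dicts with 'sender' (either
    'customer' or an agent name) and 'ts' (a timestamp in seconds).
    These field names are assumptions for illustration.
    """
    frt = {}          # agent -> First Reply Time
    reply_times = {}  # agent -> list of reply times (excluding the first reply)
    pending_customer_ts = None   # earliest unanswered customer message
    first_reply_seen = False

    for msg in messages:
        if msg["sender"] == "customer":
            if pending_customer_ts is None:
                pending_customer_ts = msg["ts"]
        elif pending_customer_ts is not None:
            delta = msg["ts"] - pending_customer_ts
            if not first_reply_seen:
                # the very first agent reply counts toward FRT only
                frt[msg["sender"]] = delta
                first_reply_seen = True
            else:
                reply_times.setdefault(msg["sender"], []).append(delta)
            pending_customer_ts = None

    rt_mean = {agent: mean(ts) for agent, ts in reply_times.items()}
    return frt, rt_mean
```

In the Agent A / Agent B scenario above, this sketch would give Agent A an FRT but no RT (mean), and Agent B an RT (mean) but no FRT, matching the described behavior.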
🧐How will this information benefit me during the QA rating process?
1. Comparisons to team averages
Benchmarking agents against the team's average results provides useful context: expectations should stay flexible, and comparing an agent to the team's average performance helps identify outliers and areas for improvement.
Results that are worse than the team's average result for the comparison week will be highlighted in red.
By hovering over a metric result, you can compare the result for this specific ticket to the team's average for the same metric across all tickets for that week (for example, while QA rating Week 20, you can compare the metric results to Week 21). Unusually bad or unusually good results may uncover interesting insights and prompt further investigation.
CSAT results and ticket reopenings are not compared to the team's average but are always highlighted in red. This ensures attention is given to negative CSAT ratings or instances where a ticket was Reopened by the customer.
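The highlighting rules described above can be summarized in a short sketch. The metric names and the "higher is worse" assumption for time and count metrics are illustrative, not the product's exact implementation:

```python
def should_highlight(metric, value, team_avg=None):
    """Red-highlight rule sketch: negative CSAT and reopens are flagged
    unconditionally; other metrics are flagged when worse than the
    team's weekly average."""
    if metric == "csat":
        return value == "negative"   # a negative CSAT is always highlighted
    if metric == "reopens":
        return value > 0             # any reopen is always highlighted
    # for time/count metrics, assume higher = worse (an assumption)
    return team_avg is not None and value > team_avg
```

So a ticket reopened twice is flagged regardless of the team's average, while a First Reply Time is flagged only if it exceeds the team's weekly average.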
💡Pro-tip: It's important to consider not only results that are worse than the team's average but also unusually good results, as they can have interesting consequences. For example, if you notice that an agent's Reply Time is significantly lower than the team's average, it may initially seem like the agent did a great job. However, upon reviewing the ticket, you may realize that the customer's issue was more complex than anticipated, requiring more time for investigation rather than a quick response. This could lead to incorrect support and potentially result in a negative CSAT rating or further customer inquiries.
2. Better understand which tickets are selected for QA rating via the Ticket Filtering feature
The ticket proposed for QA rating will match the criteria set inside the Ticket funnel in Settings. For example, if your QA Admin set up the filter to review only tickets with a negative CSAT rating, every proposed ticket will include this metric.
Set up the Ticket funnel in Settings to focus on the cases and tickets that matter.
❗️If no filtering criteria are defined in Settings, a random ticket will be proposed for QA rating.
3. Productivity metrics integrated into QA
QA scorecards can be customized to include productivity metrics, eliminating the need for manual calculations. By simply glancing at the top bar of the QA pop-up, you can quickly access agent productivity insights, saving time and effort.
4. Quicker understanding of what went wrong in the ticket
When reviewing conversations, it can be challenging to pinpoint where things went wrong without extensive investigation. However, leveraging relevant metrics can guide your focus from the start. For example, imagine opening a ticket with a negative CSAT rating and over 10 Messages To Solve. Additionally, you discover that the ticket was reopened 5 times, which is significantly higher than the average. This suggests that the agent may have overlooked some of the customer's questions or failed to provide the requested information. By considering these metrics while reading the conversation, you can quickly identify potential issues and uncover valuable opportunities for improvement.
Customer Sentiment and Agent Empathy Score metrics provide valuable insight into the support interaction. When both metrics show a positive result, it suggests the agent provided excellent support that the customer appreciated. Such a positive experience is likely to lead to a favorable CSAT rating, and acknowledging these tickets with a great QA review is motivating. Conversely, low results, particularly in the Agent Empathy Score, call for further investigation and a closer reading of the conversation. These metrics serve as triggers to dig into the details and spot where empathy may have been lacking or where the customer's sentiment was negatively impacted.