What is your key Fin metric?

Hey everyone!

I’m curious to hear from other Fin users – what key metric are you focusing on to demonstrate Fin’s success to your organization? Personally, I’ve found that resolution rate isn’t always the best indicator since so many variables can impact it. For example, a customer might receive a perfect answer but still choose to speak to an agent, or Fin could close a conversation automatically due to spam, both of which can skew the resolution rate.

I’m leaning toward deflection rate as a more telling metric – measuring the percentage of support handled by Fin versus what’s escalated to humans. But I realize this might be oversimplifying things. Would love to hear what others are tracking and why!
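For what it’s worth, here’s the back-of-the-envelope version of that calculation as a Python sketch. The field names are invented for illustration and just stand in for whatever your conversation export actually contains:

```python
# Rough sketch of the deflection calculation. Assumes an export of
# conversation records with a flag for teammate involvement -- the
# field names here are made up for illustration, not real columns.

conversations = [
    {"id": 1, "teammate_involved": False},  # Fin handled it end to end
    {"id": 2, "teammate_involved": True},   # escalated to a human
    {"id": 3, "teammate_involved": False},
]

total = len(conversations)
escalated = sum(1 for c in conversations if c["teammate_involved"])
deflection_rate = (total - escalated) / total if total else 0.0

print(f"Deflection rate: {deflection_rate:.0%}")  # -> 67%
```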

 

Hi John! 

Love this topic. I never really thought about resolution rate versus deflection rate, but that is definitely something I will be thinking about going forward.

Another metric I like to keep a close eye on is Fin’s CSAT rating. It helps me gauge how users are reacting to the support offered. I can also check each conversation to better understand why the user left the rating and fine-tune the content where I see inaccurate or lower-quality material being fed to Fin.
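In case it helps anyone, this is roughly how I pull the low-rated conversations for review. It’s only a sketch against Intercom’s REST API as I understand it (the list-conversations endpoint and the conversation_rating field) – please double-check the exact field names against the current API docs before relying on it:

```python
# Sketch: surface conversations with a low CSAT rating for manual review.
# Assumes Intercom's "list all conversations" endpoint and the
# conversation_rating field from their API docs; verify the exact
# response shape against your API version before relying on this.
import requests

API_TOKEN = "your-intercom-access-token"  # placeholder
headers = {"Authorization": f"Bearer {API_TOKEN}", "Accept": "application/json"}

resp = requests.get("https://api.intercom.io/conversations", headers=headers)
resp.raise_for_status()

for convo in resp.json().get("conversations", []):
    rating = (convo.get("conversation_rating") or {}).get("rating")
    if rating is not None and rating <= 2:  # low end of the 1-5 scale
        print(f"Review conversation {convo['id']} (rating: {rating})")
```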

I would also love to see what others think about this subject! 😊


@Adam Warden Agree with you on CSAT. We’re doing a weekly check of negative CSAT ratings and escalating them to a human to review. It would be nice if a workflow could do this automatically, but I don’t think that capability exists. 


Hey @John Pjontek, support engineer Paul here.

Great question! Resolution rate is just one piece of the puzzle, and it can be misleading on its own.

Here are a few key metrics other teams are using to measure Fin’s success:

  • Deflection rate – how often Fin handles a conversation without needing a teammate
  • Involvement rate – how frequently Fin joins conversations
  • CSAT – how happy customers are with Fin’s replies
  • Automated resolution rate – % of issues Fin fully resolves on its own
  • Time to resolution – how fast Fin gets to a useful answer
  • Engagement – how much and how deeply users interact with Fin

Tracking a mix of these gives a much clearer picture of how Fin is performing and where it can improve. Happy to help you dig into these if you'd like!
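If it’s useful, here’s an illustrative sketch of how a few of those could be computed together from exported conversation records. Every field name is hypothetical – a stand-in for whatever your own export or reporting data contains:

```python
# Illustrative only: computing a mix of Fin metrics from exported
# conversation records. All field names are hypothetical stand-ins.
from statistics import median

conversations = [
    # fin_involved: Fin participated; resolved_by_fin: closed with no teammate
    # csat: 1-5 rating or None; minutes_to_resolution: time to a useful answer
    {"fin_involved": True,  "resolved_by_fin": True,  "csat": 5,    "minutes_to_resolution": 3},
    {"fin_involved": True,  "resolved_by_fin": False, "csat": 2,    "minutes_to_resolution": 45},
    {"fin_involved": False, "resolved_by_fin": False, "csat": None, "minutes_to_resolution": 30},
    {"fin_involved": True,  "resolved_by_fin": True,  "csat": 4,    "minutes_to_resolution": 5},
]

total = len(conversations)
fin_convos = [c for c in conversations if c["fin_involved"]]
ratings = [c["csat"] for c in fin_convos if c["csat"] is not None]

involvement_rate = len(fin_convos) / total
resolution_rate = sum(c["resolved_by_fin"] for c in fin_convos) / len(fin_convos)
avg_csat = sum(ratings) / len(ratings)
median_ttr = median(c["minutes_to_resolution"] for c in fin_convos)

print(f"Involvement rate:          {involvement_rate:.0%}")
print(f"Automated resolution rate: {resolution_rate:.0%}")
print(f"Average Fin CSAT:          {avg_csat:.1f}/5")
print(f"Median time to resolution: {median_ttr} min")
```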


Quoting the comment above: “It would be nice if a workflow could do this automatically, but I don’t think that capability exists.”

That capability does exist – I strongly recommend triggering workflows off the back of CSAT ratings with negative sentiment:

  • Route to a team inbox or individual agent with a message to follow up
  • Solicit deeper, more qualitative feedback via Surveys
  • Trigger a “book a call” via Google Calendar/Meet or Calendly (integrations available)

Yeah, everyone has listed the obvious already:

  • Deflection
  • Resolution
  • CSAT

All three are at the top of the Fin AI Agent > Performance report.

Something deeper might be in the weeds of your unresolved conversations. When a live agent was tagged in, was it because Fin answered factually incorrectly, didn’t answer the intended question, or didn’t have the content required to generate a proper response?

I have our live support agents tag me in any conversations they notice where Fin could have performed better. Reading all non-positive Fin CSAT conversations can also guide you to some areas to improve.

I don’t have a metric that tracks these conversations. I have created a back-office ticket process for tagging them, but I’ve been inconsistent in using it and asking others to do so.
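If I ever get consistent with that tagging, turning it into a metric would be simple enough – something like this sketch, where the tag names are just the failure categories I described above:

```python
# Sketch: tally why Fin fell short, from conversations tagged by agents.
# The tag names are hypothetical, matching the categories above.
from collections import Counter

tagged_conversations = [
    {"id": 101, "fin_failure_tag": "factually-incorrect"},
    {"id": 102, "fin_failure_tag": "missed-intent"},
    {"id": 103, "fin_failure_tag": "missing-content"},
    {"id": 104, "fin_failure_tag": "missing-content"},
]

counts = Counter(c["fin_failure_tag"] for c in tagged_conversations)
for reason, n in counts.most_common():
    print(f"{reason}: {n} ({n / len(tagged_conversations):.0%})")
```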


Hey @John Pjontek, this is a great topic for discussion. Based on research and experience helping companies measure their chatbot performance, I think it’s worth considering metrics in both quantitative (deflection %, resolution %, etc.) and qualitative (CSAT, effort, NPS, etc.) terms. I recently wrote a blog post covering this topic in more detail.

