
There are actions that users of our app can’t perform on their own; they need to reach out to our support team and ask us to do it for them. Where that’s the case, it’s clearly stated in our help center articles.

So now they reach out, and there’s Fin (whom we call “Alvar”) trying to answer the question. Only, he tells the customer that they need to contact the support team, which is exactly what they just did 🤯.


See this example (I’ve got MANY more).


Surely Fin should be taught that he is a member of the support team, and when he thinks he should tell the customer to contact support, what he should really do is assign the conversation to a human and say nothing. 

 

Anyone experience the same issue? 

Any practical workarounds? 

What we do now is have humans monitor the Fin inbox and jump in, making a joke about Fin being a new team member who isn’t that good yet… Obviously not efficient (not to mention that Intercom still charges us in this case, at least in the email scenario).
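In the meantime, the “assign to a human and say nothing” behaviour can be approximated from the outside. Below is only a rough sketch, not an official feature: it assumes you receive Fin’s outgoing replies via a webhook, and it uses Intercom’s “Manage a conversation” endpoint (`POST /conversations/{id}/parts` with `message_type: "assignment"`). The access token, admin ID, team inbox ID, and the phrase list are all placeholders you would replace with your own values.

```python
import re

# Phrases that suggest Fin is redirecting the customer to the very team
# it belongs to. Extend this list for every language/wording you see.
SUPPORT_REDIRECT_PATTERNS = [
    r"contact (our |the )?support",
    r"reach out to (our |the )?(support )?team",
    r"nous contacter par chat ou par e-mail",  # French variant seen in Fin replies
]

def mentions_contacting_support(reply_text: str) -> bool:
    """Return True if an outgoing reply tells the customer to contact support."""
    text = reply_text.lower()
    return any(re.search(pattern, text) for pattern in SUPPORT_REDIRECT_PATTERNS)

def maybe_reassign(conversation_id: str, reply_text: str) -> bool:
    """If the reply redirects to support, hand the conversation to a human inbox.

    ACCESS_TOKEN, BOT_ADMIN_ID and TEAM_INBOX_ID are placeholders; see
    Intercom's "Manage a conversation" API reference for the exact payload.
    """
    if not mentions_contacting_support(reply_text):
        return False
    import requests  # third-party: pip install requests
    requests.post(
        f"https://api.intercom.io/conversations/{conversation_id}/parts",
        headers={"Authorization": "Bearer ACCESS_TOKEN"},
        json={
            "message_type": "assignment",
            "type": "team",
            "admin_id": "BOT_ADMIN_ID",      # admin performing the assignment
            "assignee_id": "TEAM_INBOX_ID",  # human team inbox to receive it
        },
    )
    return True
```

This doesn’t stop Fin from sending the unhelpful reply, but it would at least land the conversation in a human inbox instead of leaving the customer in a loop.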

Hi @Henrik Lenerius -- Cam from the Intercom Technical Support Engineering team here 😁

Certainly keen to hear if any community members are experiencing this as well and what workarounds people may have found for this! 

Along with that, the team and I would be quite keen to get some additional details from you on the instances where this is occurring, to see if we can identify a solution for you! I have a feeling that with some slight adjustments to a few Workflow settings and Fin AI Agent content source details, we may be able to reduce how often this happens, if not stop it altogether. If you’re open to that, please feel free to start a new Chat with us in Messenger and mention your post here so you’re routed appropriately, and we’d be glad to look into this further for you 👍


It’s happening for us as well. Fin should simply send the conversation to an inbox when it says “contact the team at xxx”.
Otherwise the customer ends up in an infinite loop, getting replies from the AI while our process says they should reach out to support.

 


This is happening to us as well. It’s been a very high frustration, as it sends customers in circles. They’ll end the chat, reach out again, ask the same question, and Fin will tell them again to reach out to support. Our articles mention in certain places that customers should contact support in case of an issue in a certain area. These instances cannot be removed from our help center, and Fin continues to grab that snippet from the articles.


@Cam G. 
Any update on this one please?

Here is another reply Fin sent where it tells the customer to reach out to the team via email or chat, while it is the team.
 

“Hello R,

We are sorry to hear that your package arrived damaged. This kind of situation should not happen, and we will find a solution for you. Please contact us by chat or by email as soon as possible. We will review your case and determine the best way to quickly send you a replacement item in good condition.”


@Henrik Lenerius  I had a good chuckle at your post title 😂

 

And I immediately knew what you meant! We've been helping teams implement Fin and trying to deal with this challenge of Fin seeing itself as outside the team. 

 

I was talking with one of the Intercom team at the 'Pitfalls of AI' workshop, which was a great venue for these conversations, and shared that it would be amazing to have some way to custom-prompt Fin to deal with these situations. I'm sure that if the Intercom team is aware, they will find ways to sort this out. 

 

I have tried Snippets as well, but without much success. 

 

 

A few of the issues we see are similar to yours…

 

Unlike human agents, Fin answers by saying the company name a lot. 

 

'Company name' offers 3 types of products that meet your needs, they are the following…

 

Instead of 'We offer 3 types of products that meet your needs...'

 

And then there's the handoff or instruction that often says something like what you shared above... 

 

You'll need to provide the following information, when you have that ready contact customer support at 'email address' or 'number'

 

This is probably influenced by content on the website or in documentation somewhere, but it should say something more like 'when you have that information, feel free to reply here or open a new conversation', especially if the user is in chat. 

 

 

Often the user's response is something like 'how do I contact customer support?' if the email address isn't listed. For someone who is already contacting customer support, that is probably very frustrating: like being told 'please hold while I transfer you', but worse, because it's not transferring you. 

 

I feel like Fin should be able to understand this context better.

 

I noticed there's a new experimental feature to choose the length of the response, I hope this becomes something you can configure by prompt as well or Fin can also determine the right situations to adapt but I do think most AI tools that include a way for the business or user to pre-prompt are the most effective. 

 

I'd love to be able to prompt at a high level to tweak the overall content and override things in the source material that aren't ideal for an AI response. 

 

Overall, Fin is a huge improvement, but these small yet important things can depersonalize the service and the experience for the customer. 

 

Some of these things may be doable with Workflows, but not everything, so I'm glad we are having these discussions! 

 

I will reach out on chat and try to point to some of these conversations, if it's helpful for the team to tweak the responses and use that data to help everyone. 

 

 

