Articles: Allow to exclude page for indexing & remove automatic links | Community
Answered

Articles: Allow to exclude page for indexing & remove automatic links

  • October 24, 2025
  • 2 replies
  • 44 views

I want to suggest 2 features for Articles: 
1. A checkbox that allows us to exclude a single page from indexing, e.g. pages with contact information. Those pages should stay in the Help Center and remain available to Fin, but not to bots. A single disallow added to robots.txt would be enough.
2. If you type an email address or a URL without linking it, it is linked automatically when you save, and if you want that information excluded from crawlers, you can't.
 

Best answer by Paul B12


2 replies

Paul Byrne
Intercom Team
  • Intercom Team
  • 116 replies
  • Answer
  • October 30, 2025

howdy ​@TusFacturasAPP Paul here from support engineering to help you out 🤝 

Thanks, great idea. Short version: we don’t have a single “hide-from-crawlers-but-still-public-and-available-to-Fin” checkbox today. You can achieve the goal in two practical ways depending on hosting:

 

1) If your Help Center uses a custom domain you control, add a robots.txt disallow for that article path (search engines and our external scraper, Apify, are normally controlled at the domain level).
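A minimal robots.txt sketch for option 1 — the article path here is hypothetical; substitute the real path of the page you want to hide:

```text
# Served from the root of your custom Help Center domain
User-agent: *
Disallow: /en/articles/123-contact-information
```

Note that Disallow blocks well-behaved crawlers from fetching the page, but it does not remove an already-indexed URL; pairing it with removal via the search engine's own tools is the usual follow-up.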

 

2) If your article is an Intercom Article, use audience/privacy controls (Private Articles). Search engines do not list private articles, and Fin will still use them for eligible users.

For the auto-linking problem: the editor automatically detects and links email addresses and URLs, and there's no UI toggle to disable this yet. Recommended workarounds are to obfuscate the address (e.g., [at] or a zero-width space inside the token), render the contact details as an image, or post the HTML via the Articles API with markup that prevents linkification.

I can raise both items as product suggestions: a per-article "noindex for crawlers" checkbox, and an editor toggle to disable auto-linking or set rel="nofollow" on generated links.
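A small sketch of the obfuscation workarounds mentioned above (the helper names are mine, not an Intercom API): both variants break the contiguous token that auto-linkers match, either visibly with "[at]" or invisibly with a zero-width space.

```python
# Hypothetical helpers for pre-processing contact text before pasting it
# into the editor, so the auto-linker no longer sees a mailto-able token.
ZWSP = "\u200b"  # zero-width space: invisible to readers, breaks the match


def obfuscate_email(address: str) -> str:
    """Visible variant: support@example.com -> 'support [at] example.com'."""
    local, _, domain = address.partition("@")
    return f"{local} [at] {domain}"


def zwsp_email(address: str) -> str:
    """Invisible variant: wrap the '@' in zero-width spaces."""
    return address.replace("@", ZWSP + "@" + ZWSP)


print(obfuscate_email("support@example.com"))  # support [at] example.com
```

The zero-width-space variant keeps the address readable on the page, but be aware it also breaks copy-paste into a mail client, so the "[at]" form is often kinder to users.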

 

Thanks again!


  • New Participant
  • 4 replies
  • October 31, 2025

I want to suggest 2 features for Articles: 
1. A checkbox that allows us to exclude a single page from indexing, e.g. pages with contact information. Those pages should stay in the Help Center and remain available to Fin, but not to bots. A single disallow added to robots.txt would be enough.
2. If you type an email address or a URL without linking it, it is linked automatically when you save, and if you want that information excluded from crawlers, you can't.

Adding a checkbox to exclude individual pages from indexing would allow sensitive pages, such as those with contact information, to remain accessible in the help center or for internal users while being hidden from search engines, with a single disallow in robots.txt being sufficient. Similarly, the automatic linking of emails and URLs is convenient but limits control over crawler access, so providing an option to exclude these links from indexing or crawling would help protect sensitive information. Implementing these features would improve both flexibility and content security.