Answered

A/B testing seems to be scoped to the individual email in a series. Is there a way to A/B test across the whole series instead?

  • October 14, 2020
  • 8 replies
  • 148 views

  • Connector
  • 5 replies

The only workaround I can think of is to set up a rule that evaluates the user ID for odd or even numbers and creates separate segments based on that. It would be nice if Intercom had a proper cross-series A/B testing feature; without one, it's difficult to measure the real impact of an A/B test beyond open and click rates.
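For reference, the odd/even workaround above can be sketched in a few lines. This is a minimal sketch, not an Intercom feature: the function name and the use of a hash (so the split also works for non-numeric IDs and stays stable across sessions) are my own choices.

```python
import hashlib

def assign_variant(user_id: str) -> str:
    """Deterministically bucket a user into cohort A or B.

    Hashing the ID gives a stable, roughly 50/50 split even when IDs
    are strings; for purely numeric IDs, int(user_id) % 2 would do.
    """
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"
```

Each cohort would then map to its own segment (e.g. via a custom attribute), and each segment gets its own series.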

Best answer by Anonymous

Great question! We're actually looking into this problem right now (as we speak in fact!) Would love to understand more about what you're looking to solve with A/B testing across the Series.


8 replies

  • Expert User
  • 1152 replies
  • October 15, 2020

Hello @ezra​ ,

 

You're asking about A/B testing for a Series: what would be the difference versus creating separate A/B tests for each email, post, or chat?

You can already create A/B tests within a Series for emails, posts, and chats:

[image: series-abtest]

The main difference I can see is wait times; an A/B test for wait times isn't possible. Everything else can already be built within a Series.




  • Author
  • Connector
  • 5 replies
  • October 16, 2020

@roy s11​ Thank you for your response. I'm aware of the A/B test feature. The issue is that these user segments do not seem to be passed along to the next message. So it's fine if you just want to measure open and click rates, but if you want to create user segments and test across multiple messages in a campaign with the same users, that doesn't seem to be possible.

 

@Shwaytaj Raste​ Let's say I have a series targeting newly signed up users, but I want to run A/B tests for different onboarding paths. Right now the A/B test does not allow me to tag people who saw one message over another, so by the time they reach the next message, the two groups are undifferentiated again.
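(Editor's note: one interim workaround for the "undifferentiated groups" problem is to tag each cohort via Intercom's REST API, since a tag persists and later messages in the series can filter on it. The endpoint URL and payload shape below are assumptions based on Intercom's legacy tagging API; verify against the current API reference before relying on them.)

```python
import json

# Assumed endpoint for Intercom's (legacy) tagging API; an access token
# would be sent as a Bearer header. Verify against the current docs.
INTERCOM_TAGS_URL = "https://api.intercom.io/tags"

def build_tag_payload(tag_name, user_ids):
    """Build the JSON body that tags a batch of users with one cohort
    tag, e.g. 'onboarding-path-a', so later series messages can key
    their audience rules on that tag."""
    body = {"name": tag_name, "users": [{"user_id": uid} for uid in user_ids]}
    return json.dumps(body)

# The body would then be POSTed to INTERCOM_TAGS_URL, for example with
# requests.post(INTERCOM_TAGS_URL, data=body, headers={...}).
```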


  • 0 replies
  • October 16, 2020

That makes sense. Just pulling on that thread a bit:

  1. I'm assuming that you want to test "Path A" vs "Path B". So maybe you want to test whether an in-app + email is more effective than just an email. Is that correct?
  2. How would you decide which is better? What criteria would you use?
  3. Are control groups important?

  • Author
  • Connector
  • 5 replies
  • October 19, 2020

Sure, so for example we might want to evenly split onboarding across two paths (A and B), where each path takes a different approach to onboarding the user.

 

To make this more concrete, let's say one flow focuses on longform emails, while the other is driven by emails with imagery and short text blurbs.

 

Currently, we can split test this at the message level, but the moment that message ends, the two groups merge back into a single pool. There's no way to track the performance of the longform vs. visual emails.

 

If an A/B split were available as a node, the same way conditionals and wait periods are, you could split a path predictably into different segments and then measure the results of each parallel campaign.


  • 0 replies
  • October 19, 2020

Thanks for sharing this. That resonates with the way we are thinking as well. Curious to know how you would measure success. Is it just the engagement rates for the message that matters or would you set a broader "goal" to measure success?


  • Author
  • Connector
  • 5 replies
  • October 19, 2020

It would be important to measure goals. Engagement alone doesn't tell us anything about account activation (our KPI) or paid conversions.


  • Expert User
  • 1152 replies
  • December 20, 2020

@ezra​, Intercom has released a new feature that lets us create A/B tests across a whole Series.

Here's more info about that.

