The only workaround I can think of is to set up a rule that evaluates the user ID for odd or even numbers and creates separate segments based on that. It would be nice if Intercom had a proper cross-series A/B testing feature. Otherwise it's difficult to really measure the impact of an A/B test beyond something like open and click rates.
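For reference, here's a rough sketch of that workaround (TypeScript, assuming a numeric user ID and a client-side Messenger install; `ab_cohort` is just a made-up custom attribute name, not anything built into Intercom). The idea is to compute the parity once and push it to Intercom as a custom attribute, so Series audience rules can target each half:

```typescript
// Sketch only: assumes window.Intercom is the standard Messenger snippet.
declare global {
  interface Window {
    Intercom: (command: string, payload?: Record<string, unknown>) => void;
  }
}

// Even IDs go to cohort "A", odd IDs to cohort "B": a rough 50/50 split.
function assignCohort(userId: number): "A" | "B" {
  return userId % 2 === 0 ? "A" : "B";
}

// Push the cohort to Intercom as a custom attribute via the Messenger
// "update" call, so segments/rules keyed on ab_cohort stay stable across
// every message in the Series.
export function syncCohortToIntercom(userId: number): void {
  window.Intercom("update", {
    user_id: String(userId),
    ab_cohort: assignCohort(userId),
  });
}
```

It's clunky, since the split lives in our own code rather than in the Series builder, but it at least keeps the two groups distinct from message to message.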

Hello @ezra,

 

You're talking about creating A/B testing for a Series. What would be the difference compared to creating separate A/B tests for the email, post, and chat messages?

You can already create A/B tests within a Series for email, post, and chat messages:

(screenshot: series-abtest)

The main difference I see is wait times: you can't A/B test those. Everything else can already be created within a Series.


Great question! We're actually looking into this problem right now (as we speak, in fact!). Would love to understand more about what you're looking to solve with A/B testing across the Series.


@roy s11​ Thank you for your response. I'm aware of the A/B test feature. The issue is that these user segments do not seem to be passed along to the next message. So it's fine if you just want to measure open and click rates, but if you want to create user segments and test across multiple messages in a campaign with the same users, that doesn't seem to be possible.

 

@Shwaytaj Raste​ Let's say I have a series targeting newly signed up users, but I want to run A/B tests for different onboarding paths. Right now the A/B test does not allow me to tag people who saw one message over another, so by the time they reach the next message, the two groups are undifferentiated again.


That makes sense. Just pulling at that thread a bit -->

  1. I'm assuming that you want to test "Path A" vs "Path B". So maybe you want to test whether an in-app + email is more effective than just an email. Is that correct?
  2. How would you decide which is better? What criteria would you use?
  3. Are control groups important?

Sure, so for example we might want to evenly split onboarding across two paths (A and B), where each path takes a different approach to onboarding the user.

 

To make this more concrete, let's say one flow focuses on longform emails while the other is driven more by emails with imagery and short text blurbs.

 

Currently, we can split test this at the message level, but the moment that message ends, the two groups seem to merge back into a single pool. There's no way to track the performance of the longform vs. the visual emails.

 

If an A/B split were available as a node, the same way conditionals and wait periods are, you could split a path predictably into different segments, and ideally then measure the results of each parallel campaign.


Thanks for sharing this. That resonates with the way we're thinking as well. Curious to know how you would measure success. Is it just the engagement rates for the message that matter, or would you set a broader "goal" to measure success?


It would be important to measure goals. Engagement alone doesn't really tell us anything about account activation (our KPI) or paid conversions.
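To make that concrete with a made-up example: once each user carries a cohort label (like the `ab_cohort` attribute sketched above), measuring a goal per path is just grouping on that label. Here `activated` stands in for whatever downstream KPI you export from Intercom or your own analytics:

```typescript
// Hypothetical exported user records: which A/B cohort they fell into and
// whether they hit the downstream goal (activation, paid conversion, etc.).
interface ExportedUser {
  abCohort: "A" | "B";
  activated: boolean;
}

// Goal conversion rate per cohort, so "Path A" vs. "Path B" can be compared
// on the KPI rather than on opens and clicks alone.
function goalRateByCohort(users: ExportedUser[]): Record<"A" | "B", number> {
  const totals = { A: 0, B: 0 };
  const hits = { A: 0, B: 0 };
  for (const u of users) {
    totals[u.abCohort] += 1;
    if (u.activated) hits[u.abCohort] += 1;
  }
  return {
    A: totals.A ? hits.A / totals.A : 0,
    B: totals.B ? hits.B / totals.B : 0,
  };
}
```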


@ezra, Intercom has released a new feature that allows you to create A/B tests across the whole Series.

Here's more info about that.

