Conversion Events

To track engagement metrics and understand how your messaging drives your KPIs, Braze allows you to set Conversion Events for each of your campaigns and Canvases. A Conversion Event is a success metric that tracks whether a recipient of your messaging performs a high-value action within a set amount of time after receiving your engagement, letting you attribute those valuable actions to the different points of engagement reaching the user. For example, if you’re creating a personalized holiday campaign for active users, a Conversion Event of “Starts Session” within 2 or 3 days may be appropriate, as it gives you a sense of the rate at which your message nudged users to come back.

Along with “Make a Purchase,” events like “Start a Session,” “Upgrade App,” or any of your Custom Events can be selected as Conversion Events. Below are further details on the feature, as well as steps needed to implement them.

Primary Conversion Event

The Primary Conversion Event is the first event added during campaign or Canvas creation, and it is the one that has the most bearing on your engagement and reporting. It is used to:

  • Compute the winning message variation in multivariate campaigns or Canvases.
  • Determine the window in which revenue is calculated for the campaign or Canvas.
  • Adjust message distributions for campaigns and Canvases using Intelligent Selection.

Step 1: Create a Campaign with Conversion Tracking

Navigate to the Braze Campaigns page in your company dashboard and click “Create Campaign,” then select the type of campaign you’d like to create.

After setting up your campaign’s messages and, for non-API campaigns, its schedule, you’ll have the option to add up to four Conversion Events for tracking. We highly recommend using as many as you need, since adding a second (or third) Conversion Event can significantly enrich your reporting. For example, if you have a campaign or Canvas targeting lapsing users, a retention-centric Conversion Event of “Starts Session” within 3 days is valuable, but you may also want a secondary Conversion Event that tracks another high-value Custom Event. This way, you can return to the dashboard and understand not only the extent to which your campaign or Canvas ushers users back into your application, but also how involved and active those sessions are.

Step 2: Add Conversion Events

For each conversion event you wish to track, select the event and conversion deadline:

  1. Select the general type of event you’d like to use.

    Conversion Event Selection

    • Opens App: A user is counted as having converted when they open any one of the apps that you specify (defaults to all apps in the app group).
    • Makes Purchase: A user is counted as having converted when they purchase the product you specify (defaults to any product).
    • Performs Custom Event: A user is counted as having converted when they perform one of your existing custom events (no default, you must specify the event).
    • Upgrade App: A user is counted as having converted when they upgrade the app version on any one of the apps that you specify (defaults to all apps in the app group). Braze will perform a best-efforts numerical comparison to determine if the version change was in fact an upgrade. For example, a user would convert if they upgrade from version 1.2.3 to 1.3.0 of the application, while Braze will not register a conversion if a user downgrades from 1.2.3 to 1.2.2. However, if the app’s version names contain strings, such as “1.2.3-beta2”, then Braze will not be able to determine if a version change was in fact an upgrade. In that situation, Braze will count it as a conversion when the user’s most recent app version changes.
  2. Set a “conversion deadline.” You can allow up to a 30-day window during which a conversion will be counted if the user takes the specified action.
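The “Upgrade App” comparison described above can be sketched as a best-effort numeric comparison that treats any version change as a conversion when the version strings contain non-numeric components. This is a hypothetical helper illustrating the documented behavior, not Braze’s actual implementation:

```python
def is_upgrade(old: str, new: str) -> bool:
    """Best-effort check mirroring the upgrade logic described above.

    If both versions are purely numeric (e.g. "1.2.3"), compare them
    numerically; otherwise (e.g. "1.2.3-beta2"), count any change as a
    conversion, since an ordering can't be determined.
    """
    def parse(version):
        parts = version.split(".")
        if all(p.isdigit() for p in parts):
            return [int(p) for p in parts]
        return None  # contains non-numeric components

    old_parts, new_parts = parse(old), parse(new)
    if old_parts is None or new_parts is None:
        return old != new  # any version change counts
    return new_parts > old_parts  # strict numeric upgrade only


print(is_upgrade("1.2.3", "1.3.0"))        # True: numeric upgrade
print(is_upgrade("1.2.3", "1.2.2"))        # False: downgrade
print(is_upgrade("1.2.3-beta2", "1.2.3"))  # True: change, ordering unknown
```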

Once you’ve selected your conversion events, continue the campaign creation process and begin sending your campaign.

Step 3: View Results

Navigate to the “Details” page to view details for each conversion event associated with the campaign you just created. Regardless of your selected conversion events, you can also see the total revenue that can be attributed to this specific campaign — as well as specific variants — during the window of the Primary Conversion Event (please note, if no conversion events were selected during campaign creation, the time period defaults to 3 days). Additionally, for multivariate messages, you can see the number of conversions and conversion percentage for your control group and each variant.

View Results

Conversion Tracking Rules

Conversion Events allow you to attribute user action back to a point of engagement. That said, there are a few things to note regarding how Braze handles conversions when there are multiple in play. Please find these scenarios outlined below.

  • A user can only convert once on each conversion event for a campaign or Canvas. For instance, assume a campaign has only one conversion event which is “makes any purchase.” If a user who receives this campaign makes 2 separate purchases within the conversion deadline, only one conversion will be counted.
  • If a user performs one conversion event within the conversion deadlines of two separate campaigns or Canvases that they received, the conversion will register on both.
  • A user will count as converted if they performed the specified conversion event within the window, even if they did not open or click the message.
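The rules above can be illustrated with a small sketch (a hypothetical data model, not Braze’s internals): a single action registers a conversion for every campaign whose deadline window it falls inside, while each campaign records at most one conversion per event.

```python
from datetime import datetime, timedelta

def attribute_conversions(receipts, action_time):
    """Return the campaigns credited with a conversion for one action.

    receipts: {campaign_name: received_at}; each campaign is assumed
    to use a 3-day conversion deadline. One action converts every
    campaign whose window it falls inside, but a campaign counts at
    most one conversion per event, so repeat actions add nothing.
    """
    deadline = timedelta(days=3)
    return {name for name, received in receipts.items()
            if received <= action_time <= received + deadline}

receipts = {
    "winback": datetime(2024, 1, 1, 9),
    "holiday": datetime(2024, 1, 2, 9),
}
# One purchase on Jan 3 falls inside both 3-day windows, so it
# registers on both campaigns; a second purchase in the same
# windows would add no new conversions.
print(attribute_conversions(receipts, datetime(2024, 1, 3, 12)))
```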

Rate-Limiting

Braze allows you to control marketing pressure by implementing two different types of rate-limiting for your campaigns. The first focuses on providing the best experience for the end user, while the second takes into consideration the bandwidth of your servers.

User Centric Rate-Limiting

As you create more segments, there will be cases where their memberships overlap. If you’re sending campaigns to those segments, you want to be sure you aren’t messaging your users too often. If a user receives too many messages within a short period, they will feel overwhelmed and either turn off push notifications or uninstall your app.

Relevant Segment Filters

Braze provides the following filters in order to help you limit the rate at which your users receive messages:

  • Last Engaged With Message
  • Last Received Any Campaign
  • Last Received Push Campaign
  • Last Received Email Campaign
  • Last Viewed News Feed

Implementing Filters

Consider the following example segment:

Rate_Limit_Example

This is a standard re-engagement segment. If other, more targeted segments have received notifications recently, you may not want those users to be targeted by more generic campaigns directed at this segment. By appending the “Last Received Push Campaign” filter to this segment, you ensure that any user who received another notification in the past 24 hours slides out of this campaign for the next 24 hours. If they still meet the segment’s other criteria 24 hours later and haven’t received any more notifications, they will slide back into the segment.

Appending this filter to all segments targeted by campaigns would cause your users to receive a maximum of one push every 24 hours. You could then prioritize your messaging by ensuring that your most important messages are delivered before less important messages.

Setting A Max User Cap

Additionally, in the ‘Target Users’ section of your campaign composition, you can limit the total number of users that will receive your message. This feature serves as a check that is independent of your campaign filters, allowing you to segment users freely without worrying about over-messaging them.

Total Limit Example

Using the filters in this way, you’ll be able to limit the rate at which your users receive notifications on a per channel basis or globally across all message types.

Setting a Max Impression Cap

For in-app messages, you can control marketing pressure by setting a maximum number of impressions that will be displayed to your user base, after which Braze will not send down more messages to your users. However, it is important to note that this cap is not exact. New in-app message rules are sent down to an app on session start, meaning that it is possible for Braze to send an in-app message down to the user before the cap is hit, but by the time the user triggers the message, the cap has now been hit. In this situation, the device will still display the message.

For example, let’s say you have a game with an in-app message that triggers when a user beats a level, and you cap it at 100 impressions. There have been 99 impressions so far. Alice and Bob both open the game and Braze tells their devices that they are eligible to receive the message when they beat a level. Alice beats a level first and gets the message. Bob beats the level next, but since his device has not communicated with Braze’s servers since his session start, his device is unaware that the message has met its cap and he will also receive the message. However, once an impression cap has been hit, the next time any device requests the list of eligible in-app messages, that message will not be sent down and will be removed from that device.

Delivery Speed Rate-Limiting

If you anticipate large campaigns driving a spike in user activity and overloading your servers, you can specify a per minute rate limit for sending messages. While targeting users during campaign creation, you can click into Advanced Options and select a rate limit (in various increments from 10K to 500K messages per minute).

Per Minute Rate Limit Example

For instance, if you are trying to send out 75K messages with a 10K per minute rate limit, the delivery will be spread out over 8 minutes: your campaign will deliver 10K in each of the first 7 minutes and the remaining 5K in the final minute. Be wary, however, of delaying time-sensitive messages with this form of rate-limiting. If the segment contains 30M users and the rate limit is 10K per minute, a large portion of the user base won’t receive the message until the following day or later.
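The arithmetic above can be sketched as a simple even split into per-minute batches (actual Braze pacing may differ):

```python
def delivery_schedule(total, per_minute):
    """Split a send of `total` messages into per-minute batches,
    each no larger than the per-minute rate limit."""
    schedule = []
    remaining = total
    while remaining > 0:
        batch = min(per_minute, remaining)
        schedule.append(batch)
        remaining -= batch
    return schedule

# 75K messages at 10K/minute: seven full batches of 10K,
# then a final batch of 5K, for 8 minutes total.
print(delivery_schedule(75_000, 10_000))
```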

It is important to note that when sending a multi-channel campaign with a speed rate limit, each channel is sent independently of the others. The effect is that users could receive the different channels at different times, and it is not predictable which channel they will get first. For example, if you send a campaign that contains an email and a push notification, you may have 10K users with valid push tokens but 50K users with valid email addresses. If you set the campaign to send 100 messages per minute (a slow rate limit for the campaign size), a user could receive the push notification in the first batch of sends and the email in the last batch of sends, almost 9 hours later.

Rate Limiting and Connected Content Retries

When the Connected Content Retry feature is enabled, Braze will retry failed calls while still respecting the rate limit you set. Consider again the 75K messages with a 10K per minute rate limit. Suppose the call fails or is slow in the first minute and only 4K messages send. Instead of making up the shortfall by sending the remaining 6K messages in the second minute on top of the 10K already scheduled, Braze moves those 6K failed messages to the “back of the queue,” adding an additional minute, if necessary, to the total time it takes to send your campaign.

Minute | No Failure | 6K Failure in Minute 1
1      | 10K        | 4K
2      | 10K        | 10K
3      | 10K        | 10K
4      | 10K        | 10K
5      | 10K        | 10K
6      | 10K        | 10K
7      | 10K        | 10K
8      | 5K         | 10K
9      | 0          | 1K
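A small simulation of the re-queue behavior described above (a sketch of the documented behavior, not Braze’s actual scheduler) reproduces the failure column of the table:

```python
def schedule_with_retry(total, per_minute, failures):
    """Simulate rate-limited sending where failed messages rejoin
    the back of the queue rather than being resent immediately.

    failures: {minute (1-based): messages that fail in that minute}.
    Returns the number of successful sends per minute.
    """
    pending = total
    sent_per_minute = []
    minute = 1
    while pending > 0:
        attempted = min(per_minute, pending)
        failed = min(failures.get(minute, 0), attempted)
        delivered = attempted - failed
        sent_per_minute.append(delivered)
        pending -= delivered  # failures go to the back of the queue
        minute += 1
    return sent_per_minute

# 75K messages, 10K/minute, 6K failures in minute 1:
print(schedule_with_retry(75_000, 10_000, failures={1: 6_000}))
# [4000, 10000, 10000, 10000, 10000, 10000, 10000, 10000, 1000]
```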

Multi-Channel Campaigns

Keep in mind that the per minute rate limit is adjusted on a per-campaign basis. If multiple channels are utilized within a campaign, the rate limit will apply to each of those channels. If your campaign utilizes email and in-app banners with a rate limit of 10K per minute, we will send 20K total messages each minute (10K email, 10K in-app).

Multi-Platform Push Campaigns

For push campaigns delivering on multiple platforms, the rate limit selected will be equally distributed across platforms. A push campaign leveraging Android, iOS and Windows with a 10K rate limit per minute will equally distribute the 10K messages across the 3 platforms.

Frequency Capping

As your user base continues to grow and your messaging scales to include life cycle, triggered, transactional and conversion campaigns, it’s important to prevent your notifications from appearing spammy or disruptive. By granting greater control over your users’ experience, Frequency Capping enables you to create the campaigns you desire without overwhelming your audience.

Feature Overview

Frequency Capping can be set up for each app group by selecting Global Campaign Settings found underneath the Campaigns tab. From here, you can choose:

  • What message channel you’d like to cap - push, email, webhook or any of those three
  • How many times each user should receive that channel within a certain time frame, which can be measured in minutes, days, weeks (7 days) and months

Each line of frequency caps will be connected using an “AND,” and you’re able to add as many as you wish. In addition, you may include multiple caps for the same message types. For instance, you can cap users to no more than 1 push per day and no more than 3 pushes per week.

Frequency Capping

Delivery Rules

There may be some campaigns - transactional messages, in particular - that you wish to always reach the user, even if she has already reached her frequency cap. For example, a delivery app may wish to send an email or push when an item is delivered regardless of how many campaigns the user has received. If you want a particular campaign to override Frequency Capping rules, you can set this up when scheduling that campaign’s delivery by checking the box next to “Ignore frequency capping settings for this campaign”. When sending API campaigns, which are often transactional, you’ll have the ability to specify that a campaign should ignore Frequency Capping rules within the API request by setting “override_messaging_limits” to “true.”
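For API campaigns, the override mentioned above is a boolean field in the request body. The sketch below shows an illustrative payload shape only; field names other than "override_messaging_limits" are assumptions, so consult the Braze API reference for the exact schema of your endpoint:

```python
import json

# Hypothetical request body for a transactional send that should
# bypass Frequency Capping rules (only "override_messaging_limits"
# is taken from the documentation; the rest is illustrative).
payload = {
    "external_user_ids": ["user-123"],
    "override_messaging_limits": True,  # ignore Frequency Capping
    "messages": {
        "apple_push": {"alert": "Your order has been delivered!"},
    },
}
print(json.dumps(payload, indent=2))
```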

By default, new campaigns and Canvases that do not obey frequency caps will also not count toward them. This is configurable for each campaign or Canvas. Note that this changes the default behavior when you turn off Frequency Capping for a campaign or Canvas; the change is backwards compatible and does not affect messages that are currently live.

Frequency Capping Update

  • Different channels within a multi-channel campaign will individually count towards the frequency cap. For instance, if you create a multi-channel campaign with, say, both push and email, and have Frequency Capping set up for both of those channels, then the push will count toward 1 push campaign and the email message will count toward 1 email message campaign. The campaign will also count toward 1 “campaign of any type.” If users are capped to 1 push and 1 email campaign per day and someone receives this multi-channel campaign, then she will no longer be eligible for push or email campaigns for the rest of the day (unless a campaign ignores Frequency Capping rules).

  • Triggered in-app messages will count toward the global frequency cap; however, they cannot themselves be frequency capped. For instance, an in-app message confirming a purchase will appear after every purchase regardless of Frequency Capping.

  • Keep in mind that Frequency Capping applies to the campaigns a user receives, and not to individual messages. That is, if you have a recurring campaign that users are re-eligible to receive, each recurring message of each channel will be counted as just one campaign. For instance, if a user receives a recurring push campaign that delivers every day, and she is capped to 2 push campaigns per week, the recurring push messages will collectively count towards 1 push campaign for frequency capping purposes, and for that week, this user will still be eligible to receive a different campaign that contains push.

Multivariate Testing

Introduction to Multivariate Testing

What is Multivariate Testing?

A multivariate test is an experiment that compares users’ responses to multiple versions of the same marketing campaign. These versions share similar marketing goals, but differ in wording and style. The objective is to identify the version of the campaign that best accomplishes your marketing goals. This section walks through how to use multivariate testing to test out differences in content. If you’d like to evaluate differences in message scheduling or timing (for instance, sending an abandoned cart message after 1 hour of inactivity versus 1 day of inactivity), please refer to our section on setting up a Canvas.

As an example of a multivariate test, suppose you have two options for a push notification:

  • This deal expires tomorrow!
  • This deal expires in 24 hours!

Using a multivariate test, you can see which wording results in a higher conversion rate. The next time you send a push notification about a deal, you’ll know which type of wording is more effective.

The Benefits of Multivariate Testing

Multivariate testing gives you an easy, clear way to learn about your audience. You no longer have to guess what users will respond to - every campaign becomes an opportunity to try different variants of a message and gauge audience response.

Specific scenarios in which multivariate testing could come in handy include:

  • The first time you’re trying out a messaging type. Worried about getting in-app messaging right the first time? Multivariate testing allows you to experiment and learn what resonates with your users.
  • The creation of onboarding campaigns and other campaigns that are constantly being sent out. Since most of your users will encounter this campaign, why not ensure that it’s as effective as possible?
  • Cases in which you have multiple ideas for messages to send. If you are unsure of which to choose, run a multivariate test and then make a data-driven decision.
  • Investigating whether your users respond to “tried and true” marketing techniques. Marketers often stick to conventional tactics to engage with users, but every product’s user base is different. Sometimes, repeating your call to action and using social proof won’t get you the results you desired. Multivariate testing lets you step outside the box and discover unconventional tactics that work for your specific audience.

Five Rules for Multivariate Testing

Multivariate testing can unveil powerful insights regarding your users. To ensure that your test results are truly reflective of your users’ behaviors, you need to:

  1. Run the test on a large number of users. Large samples ensure that your results reflect the preferences of your average user and are less likely to be swayed by outliers. Larger sample sizes also allow you to identify winning variants that have smaller margins of victory.

  2. Randomly sort users into different test groups. Braze’s multivariate testing feature allows you to create up to eight randomly selected test groups. Randomizing is designed to remove bias in the test set and increase the odds of the test groups being similar in composition. This ensures that differing response rates are due to differences in your messages rather than your samples.

  3. Know what elements you are trying to test. Multivariate testing allows you to test differences between several versions of a message. In some cases, a simple test may be most effective, since isolating changes allows you to identify which elements had the greatest impact on response. Other times, presenting more differences between variants will let you examine outliers and compare different sets of elements. Neither method is necessarily wrong, provided you are clear from the beginning what you are trying to test for.

  4. Decide how long your test will run for before beginning the test, and don’t end your test early. Marketers are often tempted to stop tests after they see results that they like, biasing their findings. Resist the temptation to peek and never end your test early!

  5. Include a control group if possible. Including a control group lets you know whether your messages have a greater impact on user conversion than sending no message at all. Learn more about control groups here.

Creating Multivariate Tests with Braze

How To Create a Multivariate Test

Step 1: Create your campaign

On the Campaigns section of Braze’s dashboard, click “Create Campaign” and select a channel for the campaign.

AB_1_campaign

Step 2: Compose your variants

Create up to 8 variants of your message. For some ideas on how to get started differentiating your variants, see here.

AB_2_variants

Step 3: Schedule your campaign

Test scheduling works the same as scheduling any other Braze campaign. All of Braze’s standard campaign scheduling options are available.

Step 4: Choose a segment to run your campaign on

Select a segment to send the campaign to. For best practices around choosing a segment to test with, see here.

AB_4_segments

Optional Step: Limit the number of users to test with

Sometimes, you’ll want to limit the number of users you send a test to, so that later, once you know which of your variants performs best, you still have users who haven’t received the campaign and can be sent the winner. This is typically most useful if your test isn’t scheduled to recur and you won’t be using Intelligent Selection.

Limit Users

Step 5: Distribute users among your variants

Decide what percentage of your target segment should receive each of your variants, or have Intelligent Selection handle the distribution for you.

AB_control

Step 6: Designate a Conversion Event

Setting a conversion event for a campaign allows you to see how many of the recipients of that campaign performed a particular action after receiving it. Unlike other campaigns, multivariate tests must define a conversion event. You can read more about our Conversion Events feature here.

Step 7: Review and launch

On the confirmation page, review the details of your multivariate campaign and launch the test!

Step 8: View results

Once your campaign has launched, you can check how each variant is performing by clicking on your campaign in the Campaigns section of the dashboard.

AB_8_view_results

You will see a summary of the campaign and statistics on how the different variants are performing. If one variant is outperforming the others with better than 95% confidence, Braze will mark that variant with a banner indicating that it is best performing.

Messaging type | Projected winner is selected based on…
Push           | Conversion rate (if unavailable, open rate)
Email          | Conversion rate (if unavailable, click rate)
In-app message | Conversion rate (if unavailable, click rate)

Step 9: Select a winner and continue sending

Once you are satisfied that one of your variants is the best performing, you can edit the campaign and adjust the distribution of users so that 100% of users receive the best performing variant. All users in the segment who have not yet received the campaign (including users who enter the segment at a future date) will then get the best performing variant, provided your campaign is scheduled to send again in the future. If you configured your campaign to only send to a limited number of users initially, this is typically when you would also remove that restriction, so that the best variant can reach as many of your segment users as possible.

Keep in mind that Braze recommends that you wait to select a winner like this until you have 95% confidence in a particular variant. If no variant did better than the others with 95% confidence, it’s possible that multiple variants received similar response rates, or your sample size of users was not large enough to yield 95% confidence. While you can still select a winner, the chances that your selection will not reflect users’ true preferences will be greater than 5%. Follow-up tests may yield more informative results.

AB_winner

Tips For Different Channels

Depending on which channel you select, you’ll be able to test different components of your message. Try to compose variants with an idea of what you want to test and what you hope to prove. What levers do you have to pull and what are the desired effects? While there are millions of possibilities that you can investigate using a multivariate test, we have some suggestions to get you started:

Channel | Aspects of Message You Can Change | Results To Look For
Push | Wording, punctuation, use of images and emojis, deep links, presentation of numbers (e.g. “triple” vs. “increase by 200%”), presentation of time (e.g. “ends at midnight” vs. “ends in 6 hours”) | Open and conversion rate
Email | Subject, display name, salutation | Open and conversion rate
Email | Use of images, organization of text (e.g. bulleted vs. numbered lists) | Click and conversion rate
Email | Reply-to field, sign-off | Unsubscribe and conversion rate
In-app Notification | Aspects listed for “Push,” message format | Click and conversion rate

In addition, the ideal length of your test may also vary depending on channel. Keep in mind the average amount of time most users may need to engage with each channel. For instance, if you’re testing a push, you may achieve significant results faster than when testing email, since users see pushes immediately, but it may be days before they see or open an email. If you’re testing in-app messages, keep in mind that users must open the app in order to see the campaign, so you should wait longer in order to collect results from both your most active app openers as well as your more typical users.

If you’re unsure how long your test should run for, the Intelligent Selection feature can be useful for finding a winning variant efficiently.

Choosing a Segment

Since different segments of your users may respond differently to messaging, the success of a particular message says something about both the message itself and its target segment. Therefore, try to design a test with your target segment in mind. For instance, while active users may have equal response rates to “This deal expires tomorrow!” and “This deal expires in 24 hours!”, users who haven’t opened the app for a week may be more responsive toward the latter wording since it creates a greater sense of urgency.

Additionally, when choosing which segment to run your test on, be sure to consider whether the size of that segment will be large enough for your test. In general, multivariate tests with more variants require a larger test group to achieve statistically significant results. This is because more variants will result in fewer users seeing each individual variant. As a crude guide, you will likely need around 15,000 users per variant (including the control) to achieve 95% confidence in your test results. However, the exact number of users you need could be higher or lower than that depending on your particular case. For more exact guidance on variant sample sizes, consider referring to Optimizely’s Sample Size Calculator.
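The rough per-variant guidance above can be sanity-checked with the standard two-proportion sample size formula. This is a generic statistical sketch for planning purposes (not Braze’s methodology), assuming 95% confidence and 80% power:

```python
from math import sqrt, ceil

def users_per_variant(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate users needed per variant to detect a change in
    conversion rate from p1 to p2, using the standard two-proportion
    formula at 95% confidence (z_alpha) and 80% power (z_beta).
    A rough planning guide only.
    """
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Detecting a lift from a 20% to a 22% conversion rate requires a
# few thousand users per variant; smaller lifts require far more.
print(users_per_variant(0.20, 0.22))
```

Note how quickly the requirement grows as the expected lift shrinks; this is why tests with many variants, each seeing a small slice of the segment, need much larger audiences.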

Intelligent Selection

Intelligent Selection is a feature that analyzes the performance of a campaign or Canvas twice a day and automatically adjusts the percentage of users that receive each message variant. A variant that appears to be performing better than others will go to more users, while variants that are underperforming will be targeted at fewer users. Each adjustment is made using a statistical algorithm that makes sure we are adjusting for real performance differences and not just random chance.

By looking at performance data in real-time and shifting campaign traffic toward winning variants gradually, Intelligent Selection ensures that more users receive your best performing variant, without any sacrifice in statistical confidence. Intelligent Selection will also rule out underperforming variants and identify high performing variants faster than a traditional A/B test. With Intelligent Selection, you can test more frequently and with greater confidence that your users will see your best message.

Intelligent Selection is ideal for campaigns that are scheduled to send multiple times. As Intelligent Selection needs initial results to begin adjusting your campaign, a campaign that sends only once will not benefit.

Intelligent_Selection_Shot

Including a Control Group

When you create a multivariate test, you can reserve a percentage of your target audience for a randomized control group. Users in the control group will not receive the test, but Braze will monitor their conversion rate for the duration of the campaign. When viewing your results, you’ll be able to compare the conversion rates of your variants against a baseline conversion rate provided by your control group. This lets you compare not only the effects of your variants, but also compare the effects of your variants against the conversion rate that would result if you didn’t send a message at all.

The size of the control group for a campaign with Intelligent Selection will be based on the number of variants. If each variant is being sent to more than 20% of users, then the control group will be 20% and the variants will be split evenly across the remaining 80%. However, if you have multiple variants such that each variant is being sent to less than 20% of users, then the control group will have to become smaller. Once Intelligent Selection starts analyzing the performance of your test, the control group will grow or shrink based on the results.

Understanding Your Multivariate Test Results

Congratulations on getting to this stage! Receiving your multivariate test results is, however, not the last step of the testing process. Now you need to understand what your results mean and apply these insights to your engagement strategy.

Understanding Confidence

An important part of your results is the confidence of your test. The confidence shows the reliability of your test results - the greater the confidence, the more reliable your findings. For instance, your results may show that “A” had a 20% click rate and “B” had a 25% click rate. This seems to indicate that “B” is the more effective message. Having a confidence of 95% means that the difference between the two click rates is likely due to an actual difference in users’ responses, and that there is only a 5% likelihood that the difference is due to chance.

In general, a confidence of at least 95% is necessary to show that your results are reflective of users’ actual preferences, and not due to chance. In rigorous scientific tests, 95% confidence is the common benchmark used to determine statistical significance. If you continually fail to achieve 95% confidence, try increasing your sample size or decreasing the number of variants.
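The confidence figure in the click-rate example above can be reproduced with a standard two-proportion z-test. This is a generic statistical sketch of the idea, not Braze’s exact computation:

```python
from math import sqrt, erf

def confidence(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: confidence that two variants truly
    differ, given conversion counts and sample sizes."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = abs(p_a - p_b) / se
    # Two-sided p-value via the normal CDF, then confidence = 1 - p.
    p_value = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))
    return 1 - p_value

# 20% vs. 25% click rate with 1,000 users per variant clears the
# 95% bar; a 20% vs. 20.5% difference at the same size does not.
print(round(confidence(200, 1000, 250, 1000), 3))
print(round(confidence(200, 1000, 205, 1000), 3))
```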

Braze will mark any variants that have 95% confidence with a banner indicating that they are projected as best performing.

Statistically Insignificant Results — Now What?

A test that doesn’t have a confidence of 95% can still hold important insights. Here are a few things you can learn from a test with statistically insignificant results:

  • It’s possible that all of your variants had roughly the same effect. Knowing this saves you the time you would’ve spent making these changes. Sometimes, you may find that conventional marketing tactics, such as repeating your call to action, don’t necessarily work for your audience.
  • While your results may have been due to chance, they can inform the hypothesis for your next test. If multiple variants appear to have roughly the same results, run some of them again alongside new variants to see if you can find a more effective alternative. If one variant does better, but not by a significant amount, you can perform another test in which this variant’s difference is more exaggerated.
  • Keep testing! A test with insignificant results should lead to certain questions. Was there truly no difference between your variants? Should you have structured your test differently? You can answer these questions by running follow-up tests.

While multivariate testing is useful for discovering which type of messaging generates the most response from your audience, it’s also important to understand which alterations in messaging have only a negligible effect. This allows you to either continue testing for another more effective alternative, or save the time that may have been spent deciding between two alternate messages.

Whether or not your multivariate test has a clear winner, it can be helpful to run a follow-up test to confirm your results or apply your findings to a slightly different scenario.

One multivariate test can (and should!) inspire ideas for future multivariate tests, as well as guide you toward changes in your messaging strategy. Possible follow-up actions include:

  • Changing your messaging strategy based on test results. Your multivariate results may lead you to change the way you word or format your messaging.

  • Changing the way you understand your users. Each multivariate test will shed light on your users’ behaviors, how users respond to different messaging channels, and the differences (and similarities) among your segments.

  • Improving the way you structure future multivariate tests. Was your sample size too small? Were the differences between your variants too subtle? Each multivariate test provides an opportunity to learn how to improve future tests. If your confidence is low, your sample size may be too small and should be enlarged for future tests. If you find no clear difference between how your variants performed, it’s possible that their differences were too subtle to have a discernible effect on users’ responses.

  • Running a follow-up test with a larger sample size. Larger samples will increase the chances of detecting small differences between variants.

  • Running a follow-up test using a different messaging channel. If you find that a particular strategy is very effective in one channel, you may want to test that strategy in other channels. If one type of message is effective in one channel but not effective in another, you may be able to conclude that certain channels are more conducive to certain types of messages. Or, perhaps there is a difference between users who are more likely to enable push notifications and those who are more likely to pay attention to in-app messages. Ultimately, running this sort of test will help you learn about how your audience interacts with your different communication channels.

  • Running a follow-up test on a different segment of users. To do this, create another test with the same messaging channel and variants, but choose a different segment of users. For instance, if one type of messaging was extremely effective for engaged users, it may be useful to investigate its effect on lapsed users. It’s possible that the lapsed users will respond similarly, or they may prefer one of the other variants. This sort of test will help you learn more about your different segments and how they respond to different sorts of messages. Why make assumptions about your segments when you can base your strategy on data?

  • Running a follow-up test based on insights from a previous test. Use the intuitions you gather from past tests to guide your future ones. Does a previous test hint at one messaging technique being more effective? Are you unsure about what specific aspect of a variant made it better? Running follow-up tests based on these questions will help you generate insightful findings about your users.
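The sample-size point above can be made concrete with the standard two-proportion sample-size approximation. This sketch assumes 95% confidence and roughly 80% power; it’s an illustrative rule of thumb, not a Braze feature:

```python
from math import ceil, sqrt

def sample_size_per_variant(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Rough per-variant sample size needed to detect the difference
    between click rates p1 and p2 at 95% confidence with ~80% power
    (standard two-proportion approximation)."""
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p1 - p2) ** 2
    return ceil(n)

# Detecting a 20% vs. 25% gap needs on the order of a thousand users
# per variant; a subtler 20% vs. 21% gap needs tens of thousands.
print(sample_size_per_variant(0.20, 0.25))
print(sample_size_per_variant(0.20, 0.21))
```

The takeaway: the subtler the difference you hope to detect, the larger the sample each variant needs.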

Sending Test Messages

Before sending out a messaging campaign to your users, you may want to test it to make sure it looks right and operates in the intended manner. Creating and sending test messages to select devices or members of your team is very simple using the tools in the dashboard.

Sending a Test Mobile Push Notification or Email Message

After drafting your Push or Email, you have the ability to send the message to your own device to see what it looks like in real time. In the settings bar, click the “eye” icon, input your email address or userID, and click “Send Test.” This will send the message that you drafted to your device.

This is what the testing process will look like for a push message.

Test Push

And here is what the process will look like for an email message.

Test Email

Keep in mind that you will need either your user’s email address or Braze UserID to send test messages to your device.

Sending a Test Web Push Notification

After drafting your web push, you have the ability to send the message to your computer to see what it looks like in real time. In the settings bar, click the “eye” icon, and click “Send Test to Myself.”

Test Web Push

If you have already accepted push messages from the Braze Dashboard, you will see the push come through in the corner of your screen. Otherwise, please click “Allow” when prompted, and the message will then appear.

Sending a Test In-App Message

If you have push notifications set up within your app and on your test device, you have the ability to send test in-app messages to your app to see what it looks like in real time. In the settings bar, click the “eye” icon, input your email address or userID, and click “Send Test:”

Test In App

This will send a push message to your device like the following:

Test Push for In App

Directly clicking and opening the push message will take you to your app, where you can view your in-app message test.

Sending a Test News Feed Card

Sending a test News Feed card requires you to set up a test segment and then send a test campaign.

Creating a Designated Test Segment

Once you set up a test segment, you can use it for any of these messaging channels. The process is very simple and, if configured properly, will only need to be done once.

Navigate to the “Segments” page in the dashboard and create a new segment. In the dropdown menu under “Add Filter”, you’ll find our testing filters at the bottom of the list.

Testing Filters

Our testing filters allow you to select users with specific email addresses or external user IDs.

Testing Filter Options

These filters have three options:

  1. “Equals” - This will look for an exact match of the email or user ID that you provide. Use this if you only want to send the test campaigns to devices associated with a single email or user ID.
  2. “Does Not Equal” - Use this if you want to exclude a particular email or user ID from test campaigns.
  3. “Matches” - This will find users that have email addresses or user IDs that match part of the search term you provide. You could use this to find only the users that have an “@yourcompany.com” address, allowing you to send messages to everyone on your team.

These filters can also be used in conjunction with each other to narrow down your list of test users. For example, the test segment could include an email address filter that matches “@braze.com” and another filter that does not equal “sales@braze.com”. You can also select multiple specific emails by using the matches option and separating the email addresses with a “|” character (e.g. matches “email1@braze.com|email2@braze.com”).
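The exact matching implementation isn’t documented here, but the “matches” behavior described above, including the “|” separator, works like a regular-expression search with alternation. A hypothetical sketch of that selection logic (the function name and user list are illustrative, not part of Braze):

```python
import re

def select_test_users(pattern, emails):
    """Keep the emails that match the pattern anywhere in the address,
    mirroring the 'matches' filter; '|' acts as an 'or' between terms."""
    return [e for e in emails if re.search(pattern, e)]

users = ["email1@braze.com", "email2@braze.com",
         "sales@braze.com", "someone@gmail.com"]

print(select_test_users("@braze.com", users))                         # whole team
print(select_test_users("email1@braze.com|email2@braze.com", users))  # two testers
```

Combining a broad "matches" filter with a "does not equal" exclusion, as in the example above, narrows the segment the same way chaining these checks would.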

After adding the testing filters to your test segment, you can verify that you’ve selected only the users you intended. Click “Preview” at the top of the segment editor, or export that segment’s user data to CSV by clicking the gear icon in the right-hand corner of the editor and selecting “CSV Export All User Data” from the dropdown menu.

Verify Test Segment

Exporting the segment’s user data to CSV will give you the most accurate picture of who falls under that segment. The “Preview” tab shows only a sample of the users in the segment - see more details about this in our FAQ - and therefore may appear not to include all intended members.

Once you’ve confirmed that you’re only targeting the users that you want to receive the test message, you can either select this segment in an existing campaign that you want to test or click the “Start Campaign” button in the segment menu.

Sending the Test Campaign

In order to send test News Feed cards, you need to target your previously created test segment. Begin by creating a Multichannel campaign and following the usual steps. When you reach the “Target Users” section, select your test segment as shown below.

Test Segment

Finish confirming your campaign and launch it to test your News Feed cards.

Be sure to check the box titled “Allow users to become re-eligible to receive campaign” under the Schedule portion of the campaign wizard if you intend to use a single campaign to send a test message to yourself more than once.

Sending a Test Campaign Personalized with User Attributes

If you are using personalization in your message, you’ll need to take some additional steps to properly preview your campaign and check that user data is properly populating the content.

When sending a test message, make sure to choose either the option to “Select Existing User” or preview as a “Custom User.”

Testing a personalized message

If selecting a user, enter the user ID or email of a specific app user into the search field. You can then use the dashboard preview to see how that user’s message would appear, and send a test message to your device that reflects what that user would see.

Select a user

If previewing as a customized user, you’ll be able to enter text for various fields available for personalization, such as first name and any custom attributes. Once again, you can enter your own email address to send a test to your device.

Custom user

Sending a Test Campaign Personalized with Custom Event Properties

Testing campaigns personalized with Custom Event Properties differs slightly from testing other types of campaigns outlined above. The most robust way to test campaigns personalized using Custom Event Properties is to trigger the campaign yourself. Begin by writing up the copy involving the event property:

Composing Test Message with Properties

Then use Action Based Delivery to deliver the campaign when the event occurs. Note that if you’re testing an iOS Push campaign, you should set a delay of at least 1 minute to allow yourself time to exit the app, since iOS doesn’t display push notifications for the currently open app. Other types of campaigns can be set to deliver immediately.

Test Message Delivery

As described above, target users as you would for any test, using either a testing filter or simply your own email address, and finish creating the campaign.

Test Message Targeting

Go into your app and complete the custom event; the campaign will trigger and you should see the message customized with the event property:

Test Message Example

Alternatively, if you are saving custom user IDs, you can also test the campaign by sending a customized test message to yourself. After writing the copy for your campaign, click the eye icon in the upper-right corner of the preview, then select “Customized User”. Add the custom event property at the bottom of the page, add your user ID or email address to the top box, and click “Send Test”. You should receive a message personalized with the property.

Testing Using Customized User