Create In-App Messaging Experiments with A/B Testing
When you are reaching out to your users or starting a new marketing campaign, you want to make sure that you get it right. A/B testing can help you find the optimal wording and presentation by testing message variants on selected portions of your user base. Whether your goal is better retention or conversion on an offer, A/B testing can perform statistical analysis to determine if a message variant is outperforming the baseline for your selected objective.
To A/B test feature variants with a baseline, do the following:
- Create your experiment.
- Validate your experiment on a test device.
- Manage your experiment.
Create an experiment
An experiment that uses Firebase In-App Messaging lets you evaluate multiple variants of a single in-app message.
Sign in to the Firebase console and verify that Google Analytics is enabled in your project so that the experiment has access to Analytics data.
If you did not enable Google Analytics when creating your project, you can enable it on the Integrations tab, which you can access using settings > Project settings in the Firebase console.
In the Engage section of the Firebase console navigation menu, click A/B Testing.
Click Create experiment, and then select In-App Messaging when prompted for the service you want to experiment with.
Alternatively, on the Firebase console navigation menu, expand Engage, then click In-App Messaging. Then click New experiment.
Enter a Name and optional Description for your experiment, and click Next.
Fill out the Targeting fields, first choosing the app that uses your experiment. You can also target a subset of your users to participate in your experiment by choosing options that include the following:
- Version: One or more versions of your app
- User audience: Analytics audiences used to target users who might be included in the experiment
- User property: One or more Analytics user properties for selecting users who might be included in the experiment (see the sketch after this list)
- Country/Region: One or more countries or regions for selecting users who might be included in the experiment
- Device language: One or more languages and locales used to select users who might be included in the experiment
- First open: Target users based on the first time they ever opened your app
- Last app engagement: Target users based on the last time they engaged with your app
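If you plan to target on a user property, your app must set that property through the Analytics SDK before the experiment can match on it. Here is a minimal Kotlin sketch; the property name plan_tier and its value are hypothetical placeholders:

```kotlin
import com.google.firebase.analytics.ktx.analytics
import com.google.firebase.ktx.Firebase

// Set an Analytics user property that experiment targeting can match on.
// "plan_tier" and "premium" are hypothetical placeholders; user property
// values set on the client are always strings.
Firebase.analytics.setUserProperty("plan_tier", "premium")
```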
Set the Percentage of target users: Select the percentage of your app's user base matching the criteria set under Target users that you want to evenly divide between the baseline and one or more variants in your experiment. This can be any percentage between 0.01% and 100%. Percentages are randomly reassigned to users for each experiment, including duplicated experiments.
In the Variants section, configure a baseline in-app message to send to the baseline group using the message design interface you use for a normal in-app messaging campaign.
To add a variant to your experiment, click Add Variant. By default, experiments have one baseline and one variant.
(optional) Enter a more descriptive name for each variant.
(optional) At the top of the Variants section, click the Compare variants button to compare one or more message variants side by side with the baseline message.
Define a goal metric for your experiment to use when evaluating experiment variants, along with any additional metrics you want to use from the list. These metrics include built-in objectives (engagement, purchases, revenue, retention, and so on), Analytics conversion events, and other Analytics events.
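Goal metrics and conversion events are computed from the Analytics events your app logs. As a minimal Kotlin sketch of that instrumentation (the currency and amount are hypothetical placeholders), a purchase event usable with the Purchase revenue metric might be logged like this:

```kotlin
import com.google.firebase.analytics.FirebaseAnalytics
import com.google.firebase.analytics.ktx.analytics
import com.google.firebase.analytics.ktx.logEvent
import com.google.firebase.ktx.Firebase

// Log a standard purchase event; A/B Testing can use the revenue it carries
// when "Purchase revenue" is the experiment's goal metric.
Firebase.analytics.logEvent(FirebaseAnalytics.Event.PURCHASE) {
    param(FirebaseAnalytics.Param.CURRENCY, "USD") // placeholder currency
    param(FirebaseAnalytics.Param.VALUE, 9.99)     // placeholder amount
}
```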
Configure scheduling for the experiment:
- Set a Start and End date for the experiment.
- Set how in-app messages are triggered across all variants (a sketch of firing a trigger programmatically follows this list).
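Messages in every variant display when their configured trigger fires. Besides relying on Analytics events, you can fire a trigger programmatically. A minimal Kotlin sketch, assuming a hypothetical event name exp_promo_trigger that matches the trigger configured for the experiment's messages:

```kotlin
import com.google.firebase.inappmessaging.FirebaseInAppMessaging

// Programmatically fire the trigger configured for the experiment's messages.
// "exp_promo_trigger" is a hypothetical event name; it must match the
// trigger you set in the message design interface.
FirebaseInAppMessaging.getInstance().triggerEvent("exp_promo_trigger")
```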
Click Review to save your experiment.
You are allowed up to 300 experiments per project, which could consist of up to 24 running experiments, with the rest as draft or completed.
Validate your experiment on a test device
For each Firebase installation, you can retrieve the installation auth token associated with it. You can use this token to test specific experiment variants on a test device with your app installed. To validate your experiment on a test device, do the following:
- Get the installation auth token as follows:
Swift
```swift
do {
  let result = try await Installations.installations()
    .authTokenForcingRefresh(true)
  print("Installation auth token: \(result.authToken)")
} catch {
  print("Error fetching token: \(error)")
}
```
Objective-C
```objective-c
[[FIRInstallations installations] authTokenForcingRefresh:true
                                               completion:^(FIRInstallationsAuthTokenResult *result,
                                                            NSError *error) {
  if (error != nil) {
    NSLog(@"Error fetching Installation token %@", error);
    return;
  }
  NSLog(@"Installation auth token: %@", [result authToken]);
}];
```
Java
```java
FirebaseInstallations.getInstance().getToken(/* forceRefresh */ true)
    .addOnCompleteListener(new OnCompleteListener<InstallationTokenResult>() {
      @Override
      public void onComplete(@NonNull Task<InstallationTokenResult> task) {
        if (task.isSuccessful() && task.getResult() != null) {
          Log.d("Installations", "Installation auth token: " + task.getResult().getToken());
        } else {
          Log.e("Installations", "Unable to get Installation auth token");
        }
      }
    });
```
Kotlin
```kotlin
val forceRefresh = true
FirebaseInstallations.getInstance().getToken(forceRefresh)
    .addOnCompleteListener { task ->
        if (task.isSuccessful) {
            Log.d("Installations", "Installation auth token: " + task.result?.token)
        } else {
            Log.e("Installations", "Unable to get Installation auth token")
        }
    }
```
Web
```js
import { getInstallations, getToken } from "firebase/installations";

const installations = getInstallations(app);
// getToken returns a Promise<string>, so await the result.
const installationAuthToken = await getToken(installations);
```
- On the Firebase console navigation bar, click A/B Testing.
- Click Draft (and/or Running for Remote Config experiments), hover over your experiment, click the context menu (more_vert), and then click Manage test devices.
- Enter the installation auth token for a test device and choose the experiment variant to send to that test device.
- Run the app and confirm that the selected variant is being received on the test device.
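If the selected variant doesn't appear, one thing worth checking is whether message display is suppressed in your test build. As a hedged aside (this is a general In-App Messaging setting, not specific to A/B Testing), on Android you can re-enable display with:

```kotlin
import com.google.firebase.inappmessaging.FirebaseInAppMessaging

// Re-enable in-app message display in case a debug build suppressed it.
FirebaseInAppMessaging.getInstance().setMessagesSuppressed(false)
```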
To learn more about Firebase installations, see Manage Firebase installations.
Manage your experiment
Whether you create an experiment with Remote Config, the Notifications composer, or Firebase In-App Messaging, you can then validate and start your experiment, monitor your experiment while it is running, and increase the number of users included in your running experiment.
When your experiment is done, you can take note of the settings used by the winning variant, and then roll out those settings to all users. Or, you can run another experiment.
Start an experiment
- In the Engage section of the Firebase console navigation menu, click A/B Testing.
- Click Draft, and then click the title of your experiment.
- To validate that your app has users who would be included in your experiment, expand the draft details and check for a number greater than 0% in the Targeting and distribution section (for example, 1% of users matching the criteria).
- To change your experiment, click Edit.
- To start your experiment, click Start Experiment. You can run up to 24 experiments per project at a time.
Monitor an experiment
Once an experiment has been running for a while, you can check in on its progress and see what your results look like for the users who have participated in your experiment so far.
- In the Engage section of the Firebase console navigation menu, click A/B Testing.
Click Running, and then click on, or search for, the title of your experiment. On this page, you can view various observed and modeled statistics about your running experiment, including the following:
- % difference from baseline: A measure of the improvement of a metric for a given variant as compared to the baseline. Calculated by comparing the value range for the variant to the value range for the baseline.
- Probability to beat baseline: The estimated probability that a given variant beats the baseline for the selected metric.
- observed_metric per user: Based on experiment results, this is the predicted range that the metric value will fall into over time.
- Total observed_metric: The observed cumulative value for the baseline or variant. The value is used to measure how well each experiment variant performs, and is used to calculate Improvement, Value range, Probability to beat baseline, and Probability to be the best variant. Depending on the metric being measured, this column may be labeled "Duration per user," "Revenue per user," "Retention rate," or "Conversion rate."
After your experiment has run for a while (at least 7 days for FCM and In-App Messaging or 14 days for Remote Config), data on this page indicates which variant, if any, is the "leader." Some measurements are accompanied by a bar chart that presents the data in a visual format.
Roll out an experiment to all users
After an experiment has run long enough that you have a "leader," or winning variant, for your goal metric, you can release the experiment to 100% of users. This lets you select a variant to publish to all users moving forward. Even if your experiment has not created a clear winner, you can still choose to release a variant to all of your users.
- In the Engage section of the Firebase console navigation menu, click A/B Testing.
- Click Completed or Running, click an experiment that you want to release to all users, click the context menu, and then click Roll out variant.
Roll out your experiment to all users by doing one of the following:
- For an experiment that uses the Notifications composer, use the Roll out message dialog to send the message to the remaining targeted users who were not part of the experiment.
- For a Remote Config experiment, select a variant to determine which Remote Config parameter values to update. The targeting criteria defined when creating the experiment are added as a new condition in your template, to ensure the rollout only affects users targeted by the experiment. After clicking Review in Remote Config to review the changes, click Publish changes to complete the rollout.
- For an In-App Messaging experiment, use the dialog to determine which variant needs to be rolled out as a standalone In-App Messaging campaign. Once selected, you are redirected to the FIAM compose screen to make any changes (if required) before publishing.
Expand an experiment
If you find that an experiment isn't bringing in enough users for A/B Testing to declare a leader, you can increase distribution of your experiment to reach a larger percentage of the app's user base.
- In the Engage section of the Firebase console navigation menu, click A/B Testing.
- Select the running experiment that you want to edit.
- In the Experiment overview, click the context menu, and then click Edit running experiment.
- The Targeting dialog displays an option to increase the percentage of users who are in the running experiment. Select a number greater than the current percentage and click Publish. The experiment will be pushed out to the percentage of users you have specified.
Duplicate or stop an experiment
- In the Engage section of the Firebase console navigation menu, click A/B Testing.
- Click Completed or Running, hold the pointer over your experiment, click the context menu, and then click Duplicate experiment or Stop experiment.
Web client identification and experiment persistence
Note: This section is relevant only for web applications.
When a user launches a web application using Firebase A/B Testing in a browser for the first time, a unique Firebase installation ID (FID) is generated. This FID is persistently stored in the browser's IndexedDB to identify the app instance across sessions.
Firebase A/B Testing uses the FID to assign users to experiment variants, and Google Analytics uses it for event aggregation to measure and analyze user behavior within each variant.
Because the FID is stored in IndexedDB, Firebase A/B Testing treats a user as a new user if they access your app from a different browser or in an incognito/private window, or if they clear their browser's IndexedDB. This means that a user might be included in different experiment variants when using different browsers or browsing sessions.
User targeting
You can target the users to include in your experiment using the following user-targeting criteria.
| Targeting criterion | Operator(s) | Value(s) | Note |
|---|---|---|---|
| Version | contains, does not contain, matches exactly, contains regex | Enter a value for one or more app versions that you want to include in the experiment. | When using any of the contains, does not contain, or matches exactly operators, you can provide a comma-separated list of values. When using the contains regex operator, you can create regular expressions in RE2 format. Your regular expression can match all or part of the target version string. You can also use the ^ and $ anchors to match the beginning, end, or entirety of a target string. |
| User audience(s) | includes all of, includes at least one of, does not include all of, does not include at least one of | Select one or more Analytics audiences to target users who might be included in your experiment. | Some experiments that target Google Analytics audiences may require a few days to accumulate data because they are subject to Analytics data processing latency. You are most likely to encounter this delay with new users, who are typically enrolled into qualifying audiences 24-48 hours after creation, or for recently-created audiences. |
| User property | For text: contains, does not contain, exactly matches, contains regex; for numbers: <, ≤, =, ≥, > | An Analytics user property is used to select users who might be included in an experiment, with a range of options for selecting user property values. On the client, you can set only string values for user properties. For conditions that use numeric operators, the Remote Config service converts the value of the corresponding user property into an integer/float. | When using the contains regex operator, you can create regular expressions in RE2 format. Your regular expression can match all or part of the target string. You can also use the ^ and $ anchors to match the beginning, end, or entirety of a target string. |
| Country/Region | N/A | One or more countries or regions used to select users who might be included in the experiment. | |
| Languages | N/A | One or more languages and locales used to select users who might be included in the experiment. | |
| First open | More than, Less than, Between | Target users based on the first time they ever opened your app, specified in days. | |
| Last app engagement | More than, Less than, Between | Target users based on the last time they engaged with your app, specified in days. | |
A/B Testing metrics
When you create your experiment, you choose a primary, or goal metric, that is used to determine the winning variant. You should also track other metrics to help you better understand each experiment variant's performance and track important trends that may differ for each variant, like user retention, app stability, and in-app purchase revenue. You can track up to five non-goal metrics in your experiment.
For example, say you've added new in-app purchases to your app and want to compare the effectiveness of two different "nudge" messages. In this case, you might set Purchase revenue as your goal metric because you want the winning variant to represent the notification that resulted in the highest in-app purchase revenue. And because you also want to track which variant resulted in more future conversions and retained users, you might add the following in Other metrics to track:
- Estimated total revenue to see how your combined in-app purchase and ad revenue differs between the two variants
- Retention (1 day), Retention (2-3 days), Retention (4-7 days) to track your daily/weekly user retention
The following tables provide details on how goal metrics and other metrics are calculated.
Goal metrics
| Metric | Description |
|---|---|
| Crash-free users | The percentage of users who have not encountered errors in your app that were detected by the Firebase Crashlytics SDK during the experiment. Note: Firebase Crashlytics is not supported for web applications. |
| Estimated ad revenue | Estimated ad earnings. |
| Estimated total revenue | Combined value for purchase and estimated ad revenues. |
| Purchase revenue | Combined value for all purchase and in_app_purchase events. |
| Retention (1 day) | The number of users who return to your app on a daily basis. |
| Retention (2-3 days) | The number of users who return to your app within 2-3 days. |
| Retention (4-7 days) | The number of users who return to your app within 4-7 days. |
| Retention (8-14 days) | The number of users who return to your app within 8-14 days. |
| Retention (15+ days) | The number of users who return to your app 15 or more days after they last used it. |
| first_open | An Analytics event that triggers when a user first opens an app after installing or reinstalling it. Used as part of a conversion funnel. |
Other metrics
| Metric | Description |
|---|---|
| notification_dismiss | An Analytics event that triggers when a notification sent by the Notifications composer is dismissed (Android only). |
| notification_receive | An Analytics event that triggers when a notification sent by the Notifications composer is received while the app is in the background (Android only). |
| os_update | An Analytics event that tracks when the device operating system is updated to a new version. To learn more, see Automatically collected events. This metric is not supported for web applications. |
| screen_view | An Analytics event that tracks screens viewed within your app. To learn more, see Track Screenviews. |
| session_start | An Analytics event that counts user sessions in your app. To learn more, see Automatically collected events. |