Dynamic Optimization - Experiment Configuration

How to configure a dynamic optimization experiment in the Phrasee platform


Configuration panel

The Dynamic Optimization configuration panel enables you to schedule broadcast or trigger experiments in your delivery provider.

The Dynamic Optimization configuration panel is currently available for these platforms:

  • Adobe Campaign Classic

  • Adobe Journey Optimizer (optimization on open data only; clicks coming soon)

  • Bloomreach

  • Braze

  • Iterable

  • MessageGears (triggers only)

  • MoEngage (optimization on open data only; clicks coming soon)

  • Responsys (optimization on open data only)

  • Salesforce Marketing Cloud

When everything is configured and your campaign or workflow is pushed live, the delivery platform will send a request to Phrasee to retrieve a language variant and its associated open tracking pixel. The language variant will be embedded within the message via a parameter and the open pixel within the body of the message.

Each platform accomplishes this differently, so check our documentation for your individual platform to determine the best way to set up your deployment.
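The retrieval step above can be sketched as a simple per-subscriber GET request. Everything in this sketch is a placeholder assumption; the real endpoint URL and parameter names come from the Dynamic Optimization panel for your experiment:

```python
import urllib.parse

# Hypothetical base URL; your real endpoint is shown in the
# Dynamic Optimization panel for your experiment.
BASE_URL = "https://dynamic.example.invalid/v1/variant"

def build_variant_request(experiment_id: str, subscriber_id: str) -> str:
    """Build the per-subscriber URL a delivery platform would call
    to retrieve a language variant and its open tracking pixel."""
    query = urllib.parse.urlencode({
        "experiment": experiment_id,
        "subscriber": subscriber_id,
    })
    return f"{BASE_URL}?{query}"

url = build_variant_request("exp-123", "subscriber-456")
```

In practice your delivery platform builds this call for you once the connection is configured; the sketch only shows the shape of the exchange.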

Ensure you're following recommended testing guidelines.

The process

1. Open the Dynamic Optimization settings panel

First, follow your usual process for generating experiment language variants in Phrasee.

Once you've approved the language variants, you'll progress to the Results tab, and a plug icon will appear in the upper-right corner of the screen. Click on this icon to open the Dynamic Optimization configuration panel.

You may need to expand the panel by clicking the arrow icon at the top.

Select Dynamic Optimization to open the settings.

Once you open the panel, you'll have a number of options to configure. Certain options will only be available for trigger experiments. We'll go through each setting below in the order they appear in the panel.

2. Choose the optimization mode

The Optimization mode dropdown lets you select from the available optimization modes. For triggers, there are three modes available. For broadcast campaigns, you are limited to Fast, maximize revenue mode.

Let's examine what each mode does.

Fast, maximize revenue

This mode employs a methodology to find the best-performing variant as quickly as possible. Once a winning variant is found, the majority of sends will use this variant, maximizing overall revenue for the campaign and minimizing the opportunity cost of testing. This is the only mode available for broadcast experiments.
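Phrasee doesn't publish the algorithm behind this mode, but the behaviour described (routing most sends to the current leader while still exploring the rest) can be illustrated with a toy epsilon-greedy bandit. This is a sketch of the general idea, not Phrasee's actual method:

```python
import random

def choose_variant(open_rates: dict[str, float], epsilon: float = 0.1) -> str:
    """Toy epsilon-greedy selection: usually exploit the current
    best-performing variant, occasionally explore another at random."""
    if random.random() < epsilon:
        return random.choice(list(open_rates))
    return max(open_rates, key=open_rates.get)

rates = {"control": 0.18, "variant_a": 0.22, "variant_b": 0.15}
picks = [choose_variant(rates) for _ in range(1000)]
# Most picks go to the current leader, variant_a.
```

The key property is the one the article describes: once a leader emerges, the majority of sends use it, minimizing the opportunity cost of continued testing.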

Fast, repeat sends to the same audience

This uses the same optimization method as Fast, maximize revenue. The difference is that you can choose to settle on a group of champions rather than just one.

The default is three champions, which means any one of the winning three variants will be sent to subscribers most of the time. This option works well when you are sending a campaign to the same audience week after week, or even multiple times within the same week. This provides the recipient with a variety of different high-performing lines to reduce fatigue.

Slow, maximize statistical significance

This uses a slow, deliberate method to find the best-performing variant. The approach uses a straight split test to find the statistically worst-performing variant. The worst performer is then dropped and the process is repeated until two variants are left: the human control and the best-performing Phrasee variant. The system then runs a head-to-head test with these last two variants. Once a true winner is found, the final split is user-configurable.

The default settings will send 80% of the deployment to the winner and 20% to the loser. This process requires Drop bad variants to be active. If Drop bad variants is inactive, the optimization will give all live variants an equal proportion of the send. While guaranteeing the best line wins, this option can take up to 10 times longer to complete than Fast, maximize revenue.
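The elimination loop described above can be sketched as follows. This is a simplified model: it compares raw open rates rather than running real statistical significance tests, and the 80/20 figure is just the documented default:

```python
def eliminate_worst(variants: dict[str, float]) -> list[str]:
    """Repeatedly drop the worst performer until only the human control
    and the best Phrasee variant remain (toy version: raw open rates
    stand in for statistical significance tests)."""
    live = dict(variants)
    while len(live) > 2:
        worst = min(live, key=live.get)
        if worst == "control":  # the control survives to the final head-to-head
            worst = sorted(live, key=live.get)[1]
        live.pop(worst)
    return sorted(live, key=live.get, reverse=True)

final = eliminate_worst({"control": 0.17, "a": 0.21, "b": 0.14, "c": 0.19})
winner, runner_up = final
split = {winner: 0.8, runner_up: 0.2}  # default final split: 80% winner, 20% loser
```

With the example open rates, variants b and c are dropped in turn, leaving variant a to face the control head-to-head.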

3. Choose the optimization metric

The metric you select here will tell Phrasee which metric it should use when making decisions about adjusting the proportion of the audience to serve to each variant and when to drop variants performing badly.

Depending on your platform and the methodology agreed with Phrasee, you may have up to three options:

  1. Opens to sends - Phrasee will use unique open rate to determine the performance of a variant. By default, we collect open rates through the use of an open tracking pixel for dynamic optimization experiments.

  2. Clicks to sends - Phrasee will use unique click-through rate to determine the performance of a variant. Phrasee has webhooks available where certain platforms can provide click tracking data.

  3. Clicks to opens - This uses click-to-open rate (CTOR) to determine performance. This is best used when optimizing variants within an email body while also testing subject line.
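Each metric is a simple ratio of unique engagement counts to sends (or, for CTOR, to opens). A quick sketch:

```python
def optimization_metrics(sends: int, unique_opens: int,
                         unique_clicks: int) -> dict[str, float]:
    """Compute the three candidate optimization metrics from raw counts."""
    return {
        "opens_to_sends": unique_opens / sends,           # unique open rate
        "clicks_to_sends": unique_clicks / sends,         # unique click-through rate
        "clicks_to_opens": unique_clicks / unique_opens,  # click-to-open rate (CTOR)
    }

m = optimization_metrics(sends=10_000, unique_opens=2_200, unique_clicks=330)
# opens_to_sends = 0.22, clicks_to_sends = 0.033, clicks_to_opens = 0.15
```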

Your Phrasee Customer Success representative will make a recommendation based on your use case in consultation with our Data Science team.

If your project has been configured with the split calculator, it will have selected the best option for the experiment.

4. Configure the optimization schedule

Input the Start schedule and End schedule for your dynamic optimization experiment. This is the duration of time that Phrasee will be listening for new data and adjusting your variants and proportions accordingly.

There are two important things to note about the schedule:

  1. Once the experiment has been activated, you can still adjust the optimization Start schedule until that point in time has passed. Once the optimization period has begun, you are unable to adjust the Start schedule to stop it.

  2. Similarly, you can adjust the End schedule after the experiment has begun to increase the length of time Phrasee will listen for data and adjust variants and proportions. However once the configured End schedule point in time has passed, you are not able to adjust it to make the experiment run longer.

If you need to change the Start schedule or End schedule after that time has passed, you will need to make a new experiment.

Broadcast experiments

For broadcast experiments, the minimum optimization schedule is three days. Even though you'll likely only be deploying on one of those days, it's critical that Phrasee keep listening for data.

This is because broadcast campaigns have roughly a 72- to 96-hour maturation period. During this time, your dashboard will continue to update the Mature Data tab. You can ask Phrasee to listen for additional data on a broadcast experiment for up to seven days.

Trigger experiments

For trigger experiments, you are able to configure them to run as long as you like. Keeping in mind that once the End schedule has passed it cannot be extended, it is best practice to set a trigger campaign to run longer than you think you will need it. You can always adjust the End schedule to a final end date if you ever settle on one.

Because Phrasee dynamic trigger experiments are built to run in perpetuity, you may find you never need to turn one off and create another experiment for a particular marketing touchpoint unless the goal or content of the touchpoint changes.

5. Triggers only: Determine if Phrasee should automatically drop bad variants

With Phrasee's dynamic optimization, you have the ability to manually drop variants by using the Status dropdown next to each variant in the Results tab.

However, Phrasee dynamic optimization works best when you put that decisioning in the hands of the Phrasee Brain.

By default when you create a new experiment, Phrasee will toggle Drop bad variants on. If you decide you want to turn it off, simply uncheck the box next to Drop bad variants. This is not generally recommended, but it is available as an option should you need it.

6. Triggers only: Choose how Phrasee should introduce new variants

The controls for introducing new variants are in a dropdown just after the Optimization schedule box. Again, this is for triggers only. Broadcast experiments do not run long enough to test more than the initial variants generated.

You have four different options for controlling the introduction of new variants in a dynamic trigger experiment:

  1. Automatic - This is the default. If Phrasee is winning with a strong uplift, we will delay adding new variants for up to 3 months to maximize the overall performance of the experiment. If we're not winning by much or are losing to the control, we will keep introducing new variants until we find a strong winner. This option also controls the number of new variants that we introduce, so if we are winning, we may only introduce a few new variants at a time.

  2. Time based - This lets you instruct Phrasee to wait a configurable number of days before introducing new variants. The default for this setting is 30 days.

  3. Do not introduce new language - This does what it says on the tin. Phrasee will not add new variants, allowing the experiment to find a winner from the current variants, irrespective of whether a Phrasee variant is winning or not.

  4. Continuous testing - This mode will simply allow Phrasee to ask for the approval of new language as soon as an old variant is dropped.
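The four modes amount to different answers to the question "should new variants be offered now?". As a rough illustration only (the uplift threshold, the 90-day delay trigger, and the mode names below are assumptions, not Phrasee's actual logic):

```python
def should_introduce_new_variants(mode: str, uplift: float,
                                  days_since_last: int,
                                  wait_days: int = 30,
                                  variant_just_dropped: bool = False) -> bool:
    """Toy decision rule illustrating the four modes.
    Thresholds (5% uplift, 90 days) are illustrative assumptions."""
    if mode == "automatic":
        # Strong uplift: hold off for a while; otherwise keep testing.
        return uplift < 0.05 or days_since_last >= 90
    if mode == "time_based":
        return days_since_last >= wait_days
    if mode == "none":
        return False
    if mode == "continuous":
        return variant_just_dropped
    raise ValueError(f"unknown mode: {mode}")
```

Whatever the real thresholds are, the shape is the same: Automatic trades exploration off against current uplift, Time based is a fixed clock, and the other two modes are the always-off and as-needed extremes.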

No matter which mode you choose, Phrasee will never introduce language to a live experiment you haven't approved. You must approve newly offered variants from the Language Approval tab just as you would in a fresh experiment.

Otherwise, your new variants will just sit there in an unyielding approval limbo forever and ever. The horror! 😱

7. Triggers only: Choose if Phrasee can drop your human control

This is another option that does what it says on the tin: If the control is performing badly and this setting is toggled on, Phrasee will drop it. Uplift and other performance metrics will use the last known human control performance metrics prior to it being dropped.

By default, this option is toggled off. We generally wouldn't recommend selecting this option, particularly at the start of an experiment. It's important for Phrasee to be able to establish a solid benchmark against which to measure its own hypotheses.

If you wish to enable it, simply check the box next to Allow the human control to be automatically dropped.

8. Determine the human control's minimum audience share

By default, Phrasee will always send the human control variant you entered to at least 2% of your audience, no matter how poorly the variant is performing.

You can choose to toggle Minimum sends to the human control off or set it lower, which would allow Phrasee to drop the percentage below the 2% threshold. Alternatively, you can increase it and ensure Phrasee always sends a larger percentage of your audience the human control.

Note that if your human control is winning, Phrasee will increase the share to it no matter what percentage you've entered here.
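The interaction between the optimizer's proposed shares and the minimum-send floor can be modelled as a rescaling step. This is a simplified sketch under stated assumptions (function and key names are illustrative, and the real system works on live data rather than a fixed dictionary):

```python
def allocate_with_control_floor(shares: dict[str, float],
                                control_min: float = 0.02) -> dict[str, float]:
    """Rescale optimizer shares so the human control never drops below
    its configured minimum (default 2%)."""
    shares = dict(shares)
    if shares["control"] < control_min:
        deficit = control_min - shares["control"]
        others = {k: v for k, v in shares.items() if k != "control"}
        total = sum(others.values())
        for k in others:  # take the deficit proportionally from the rest
            shares[k] -= deficit * others[k] / total
        shares["control"] = control_min
    return shares

out = allocate_with_control_floor({"control": 0.0, "a": 0.7, "b": 0.3})
# control is lifted to 0.02; a and b scale down to roughly 0.686 and 0.294
```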

9. Triggers only: Choose the percentage of the audience Phrasee can use for ongoing testing

The final setting available for triggers is Minimum percentage of audience used for testing (1-20%). If you have chosen settings in your experiment that allow Phrasee to introduce and test new variants, Phrasee will use this setting to determine what percentage of your audience can be used to test new variants you approve.

The default is 20%. This is also the maximum Phrasee would ever use for ongoing testing of new language. You can adjust this as low as one percent or turn it off completely by unchecking the box next to Minimum percentage of audience used for testing (1-20%).

Though there is always an opportunity cost to testing, ongoing testing of trigger language helps you keep pace with your audience's changing tastes and engagement, so your language stays relevant and keeps earning engagement. Disabling this function risks declining engagement over time as language becomes stale or falls out of step with your audience's evolving preferences.

10. Click Start the Awesome

With all of your settings in place, click Start the Awesome. The button will turn green and an alert will appear at the top of the panel to let you know that dynamic optimization has been enabled for your experiment.

Note: This does not actually schedule your send. It has only enabled the optimization on Phrasee's side. You need to complete experiment implementation and deployment steps within your delivery provider.

Certain providers require copying and pasting a URL to pull information from the dynamic optimization endpoint. When the URL appears in the panel, copy it into your delivery provider platform. This configures Phrasee as a data source from which language variants are returned on a per-subscriber basis.

In some cases, you may need to replace our placeholder values (e.g. <unique customer ID>, <unique delivery ID>) with the actual dynamic variables from your platform. We cannot always preconfigure these, as many vary based on how your deployment platform instance was provisioned.
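As an illustration, substituting platform personalization strings for the placeholders might look like this. The template URL is hypothetical, and the Salesforce Marketing Cloud-style merge tags shown are just one example; use your own platform's syntax:

```python
# Hypothetical template, as it might appear in the panel.
template = ("https://dynamic.example.invalid/v1/variant"
            "?customer=<unique customer ID>&delivery=<unique delivery ID>")

# Swap the placeholders for your platform's dynamic variables
# (here, SFMC-style personalization strings as an example).
url = (template
       .replace("<unique customer ID>", "%%=v(@subscriberKey)=%%")
       .replace("<unique delivery ID>", "%%jobid%%"))
```

The delivery platform then expands its own merge tags at send time, so each subscriber's request to the endpoint carries their real identifiers.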

Platform-specific settings

As mentioned above, each platform operationalizes its sends differently and, therefore, the process for adding Dynamic Optimization to sends in those platforms differs. Please find your platform below and read the article for the type of experiment you're running.



Last reviewed: April 18, 2024
