
Commit 2c85500

Adding hyperlink within doc (#2035)
* Adding hyperlink within doc: updating Building an Experiment -> Select an Experiment hyperlink
* grammar
* use callout component

Co-authored-by: myronkaifung <[email protected]>
1 parent 5ee7a96 commit 2c85500


pages/docs/reports/experiments.mdx

Lines changed: 12 additions & 11 deletions
@@ -16,13 +16,14 @@ The Experiment report analyzes how one variant impacts your metrics versus other

### Step 1: Select an Experiment

-Click 'New Experiment' from the Experiment report menu and select your experiment. Any experiment started in the last 30 days will automatically be detected and populate in the dropdown. To analyze experiments that started prior to 30 days, please hard-code the experiment name
-
-NOTE: Only experiments tracked via exposure events, i.e $experiment_started, can be analyzed in the experiment report. Read more on how to track experiments here.
+Click 'New Experiment' from the Experiment report menu and select your experiment. Any experiment started in the last 30 days will automatically be detected and populated in the dropdown. To analyze experiments that began before 30 days, please hard-code the experiment name

+<Callout type="info">
+Only experiments tracked via exposure events, i.e, `$experiment_started`, can be analyzed in the experiment report. Read more on how to track experiments [here](/docs/reports/experiments#adding-experiments-to-an-implementation).
+</Callout>
### Step 2: Choose the ‘Control’ Variant

-Select the ‘Variant’ that represents your control. All your other variant(s) will be compared to the control, i.e, how much better they perform vs the control variant.
+Select the ‘Variant’ that represents your control. All your other variant(s) will be compared to the control, i.e, how much better are they performing vs the control variant.

### Step 3: Choose Success Metrics

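The callout added in this hunk hinges on the `$experiment_started` exposure event. Below is a minimal TypeScript sketch of firing it with the `mixpanel-browser` SDK; the project token, experiment name, and variant name are placeholders, and the "Experiment name" / "Variant name" property names are an assumption, so adapt them to your implementation.

```typescript
import mixpanel from "mixpanel-browser";

// Placeholder token; use your own project's token.
mixpanel.init("YOUR_PROJECT_TOKEN");

// Fire the exposure event when a user is bucketed into a variant, so the
// Experiment report can auto-detect the experiment in its dropdown.
function trackExperimentExposure(experimentName: string, variantName: string): void {
  mixpanel.track("$experiment_started", {
    "Experiment name": experimentName, // assumed property names
    "Variant name": variantName,
  });
}

trackExperimentExposure("new-onboarding-flow", "control");
```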
@@ -66,7 +67,7 @@ In the above image for example, max p=0.025 [(1-0.95)/2]
So, if an experiment's results show

- p ≤ 0.025: results are statistically significant for this metric, i.e, you can be 95% confident in the lift seen if the change is rolled out to all users.
-- p > 0.025: results are not statistically significant for this metric, i.,e you cannot be very confident in the results if the change is rolled out broadly.
+- p > 0.025: results are not statistically significant for this metric, i.e, you cannot be very confident in the results if the change is rolled out broadly.

### How do you read lift?

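The rule in this hunk comes from a two-sided test: max p = (1 − confidence level) / 2, which gives 0.025 at 95% confidence. A tiny illustrative TypeScript helper, not part of any Mixpanel API:

```typescript
// Two-sided significance check per the doc: max p = (1 - confidenceLevel) / 2.
function isStatSig(pValue: number, confidenceLevel: number = 0.95): boolean {
  const maxP = (1 - confidenceLevel) / 2; // 0.025 at 95% confidence
  return pValue <= maxP;
}

console.log(isStatSig(0.01)); // true  -> statistically significant
console.log(isStatSig(0.04)); // false -> 0.04 > 0.025, not significant
```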
@@ -113,7 +114,7 @@ You can also add the experiment breakdowns and filters directly in a report via

The Experiment report behavior is powered by [borrowed properties](/docs/features/custom-properties#borrowed-properties).

-For every user event, we identify if the event is performed after being exposed to an experiment. If it were, then we would borrow the variant details from the tracked $experiment_started to attribute the event to the proper variant.
+For every user event, we identify if the event is performed after being exposed to an experiment. If it were, then we would borrow the variant details from the tracked `$experiment_started` to attribute the event to the proper variant.

### FAQs
1. If a user switches variants mid-experiment, how do we calculate the impact on metrics?
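To make the borrowing behavior concrete, here is a rough TypeScript sketch of the attribution logic the paragraph describes; the `MixpanelEvent` shape and `attributeToVariant` helper are hypothetical illustrations, not Mixpanel's actual implementation.

```typescript
// Hypothetical event shape for illustration only.
interface MixpanelEvent {
  name: string;
  time: number; // epoch ms
  properties: Record<string, string>;
}

// Walk one user's events in time order; every event after an exposure
// borrows the variant from the most recent `$experiment_started`.
function attributeToVariant(userEvents: MixpanelEvent[]): MixpanelEvent[] {
  const sorted = [...userEvents].sort((a, b) => a.time - b.time);
  let currentVariant: string | undefined;
  return sorted.map((event) => {
    if (event.name === "$experiment_started") {
      currentVariant = event.properties["Variant name"];
      return event;
    }
    if (currentVariant === undefined) return event; // pre-exposure: nothing to borrow
    return {
      ...event,
      properties: { ...event.properties, "Variant name": currentVariant },
    };
  });
}
```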
@@ -124,7 +125,7 @@ For every user event, we identify if the event is performed after being exposed

We consider the complete user’s behavior for every experiment that they are a part of.

-We believe this will still give accurate results for a particular experiment, as the users have been randomly allocated. So there should be enough similar users, ie. part of multiple experiments, across both control and variants for a particular experiment.
+We believe this will still give accurate results for a particular experiment, as the users have been randomly allocated. So there should be enough similar users, ie, part of multiple experiments, across both control and variants for a particular experiment.

3. For what time duration do we associate the user being exposed to an experiment to impact metrics?

@@ -169,8 +170,8 @@ You can specify the event and property that should be used as the exposure event
Experimentation is priced based on MEUs - Monthly Experiment Users. Only users exposed to an experiment in a month are counted towards this tally.

### FAQ
-#### How are MEUs different than MTUs (Monthly Tracked Users)?
-MTUs count any user who has tracked an event to the project in the calendar month. MEU is a subset of MTUs; it’s only users who have tracked an exposure experiment event (ie, $experiment_started) in the calendar month.
+#### How are MEUs different than MTUs (Monthly Tracked Users)?
+MTUs count any user who has tracked an event to the project in the calendar month. MEU is a subset of MTU; it’s only users who have tracked an exposure experiment event (ie, `$experiment_started`) in the calendar month.

#### How can I estimate MEUs?
If you actively run experiments, you can look at the number of monthly users exposed to an experiment. Note that the MEU calculation is different if users are, on average, exposed to 30 or more experiments in a month.
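The MEU/MTU distinction in this hunk reduces to set membership. A self-contained TypeScript sketch of the two counts, using a hypothetical `TrackedEvent` shape; it deliberately ignores the doc's caveat about users exposed to 30 or more experiments in a month.

```typescript
// Hypothetical shape: one row per event tracked in the calendar month.
interface TrackedEvent {
  userId: string;
  eventName: string;
}

// MTU: any user with any tracked event this month.
// MEU: the subset of those users with an exposure event (`$experiment_started`).
function countMonthlyUsers(monthEvents: TrackedEvent[]): { mtu: number; meu: number } {
  const allUsers = new Set(monthEvents.map((e) => e.userId));
  const exposedUsers = new Set(
    monthEvents
      .filter((e) => e.eventName === "$experiment_started")
      .map((e) => e.userId),
  );
  return { mtu: allUsers.size, meu: exposedUsers.size };
}
```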
@@ -214,8 +215,8 @@ You can see your experiment MEU usage by going to Organization settings > Plan D
### Post Experiment Analysis Decision
Once the experiment is ready to review, you can choose to 'End Analysis'. Once complete, you can log a decision, visible to all users, based on the experiment outcome:

-- Ship Variant (any of the variants): You had a statistically significant result. You have made a decision to ship a variant to all users. NOTE: Shipping variant here is just a log; it does not actually trigger rolling out the feature flag unless you are using Mixpanel feature flags (in beta today).
-- Ship None: You may not have had any statistically significant results, or even if you have statistically significant results, the lift is not sufficient to warrant a change in user experience. You decide not the ship the change.
+- Ship Variant (any of the variants): You had a statistically significant result. You have made a decision to ship a variant to all users. NOTE: Shipping variant here is just a log; it does not actually trigger rolling out the feature flag unless you are using Mixpanel feature flags **(in beta today)**.
+- Ship None: You may not have had any statistically significant results, or even if you have statistically significant results, the lift is not sufficient to warrant a change in user experience. You decide not to ship the change.
- Defer Decision: You may have a direction you want to go, but need to sync with other stakeholders before confirming the decision. This is an example where you might defer decision, and come back at a later date and log the final decision.

### Experiment Management
