* Adding hyperlink within doc
Updating Building an Experiment -> Select an Experiment hyperlink
* grammar
* use callout component
---------
Co-authored-by: myronkaifung <[email protected]>

pages/docs/reports/experiments.mdx (12 additions, 11 deletions)

### Step 1: Select an Experiment

Click 'New Experiment' from the Experiment report menu and select your experiment. Any experiment started in the last 30 days will automatically be detected and populated in the dropdown. To analyze experiments that began more than 30 days ago, please hard-code the experiment name.

<Callout type="info">
Only experiments tracked via exposure events, i.e., `$experiment_started`, can be analyzed in the Experiment report. Read more on how to track experiments [here](/docs/reports/experiments#adding-experiments-to-an-implementation).
</Callout>
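
For context, here is a minimal sketch of what tracking that exposure event could look like with the Mixpanel Python SDK. The property names `Experiment name` and `Variant name` are illustrative assumptions; send whatever properties your experimentation tooling actually attaches to `$experiment_started`.

```python
# A minimal sketch (not an official snippet): tracking an exposure event
# so the Experiment report can detect the experiment and its variants.
from mixpanel import Mixpanel

mp = Mixpanel("YOUR_PROJECT_TOKEN")  # placeholder token

def track_exposure(distinct_id: str, experiment_name: str, variant_name: str) -> None:
    """Send the $experiment_started exposure event for a user."""
    # Property names below are assumptions for illustration.
    mp.track(distinct_id, "$experiment_started", {
        "Experiment name": experiment_name,
        "Variant name": variant_name,
    })

# Example: user "user_123" sees the "alpha" variant of "new_onboarding_flow"
track_exposure("user_123", "new_onboarding_flow", "alpha")
```
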
### Step 2: Choose the ‘Control’ Variant

Select the 'Variant' that represents your control. All your other variant(s) will be compared to the control, i.e., how much better they perform vs. the control variant.

### Step 3: Choose Success Metrics

In the above image, for example, max p = 0.025 [(1 - 0.95)/2].

So, if an experiment's results show:

- p ≤ 0.025: results are statistically significant for this metric, i.e., you can be 95% confident in the lift seen if the change is rolled out to all users.
- p > 0.025: results are not statistically significant for this metric, i.e., you cannot be very confident in the results if the change is rolled out broadly.
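
To make the cutoff concrete, here is a rough, self-contained sketch of how a one-sided p-value for a conversion metric could be computed and compared against max p = 0.025. This is illustrative only; the choice of test and the sample numbers are assumptions, not Mixpanel's actual statistics engine.

```python
# Illustrative only: a pooled two-proportion z-test compared against the
# 0.025 threshold from the text. Mixpanel's engine may use a different test.
from math import sqrt
from statistics import NormalDist

def one_sided_p_value(control_conv, control_n, variant_conv, variant_n):
    """P-value for 'variant converts better than control' under a pooled z-test."""
    p1, p2 = control_conv / control_n, variant_conv / variant_n
    pooled = (control_conv + variant_conv) / (control_n + variant_n)
    se = sqrt(pooled * (1 - pooled) * (1 / control_n + 1 / variant_n))
    z = (p2 - p1) / se
    return 1 - NormalDist().cdf(z)  # one-sided tail probability

# Hypothetical numbers: 10,000 users per group, 1,000 vs 1,100 conversions.
p = one_sided_p_value(1_000, 10_000, 1_100, 10_000)
print(f"p = {p:.4f} -> {'significant' if p <= 0.025 else 'not significant'} at max p = 0.025")
```
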
### How do you read lift?
The Experiment report behavior is powered by [borrowed properties](/docs/features/custom-properties#borrowed-properties).

For every user event, we identify whether the event was performed after the user was exposed to an experiment. If so, we borrow the variant details from the tracked `$experiment_started` event to attribute the event to the proper variant.
### FAQs
1. If a user switches variants mid-experiment, how do we calculate the impact on metrics?

We consider the user's complete behavior for every experiment that they are a part of.

We believe this will still give accurate results for a particular experiment, as the users have been randomly allocated. So there should be enough similar users (i.e., users who are part of multiple experiments) across both control and variants for a particular experiment.

3. For what time duration do we associate a user's exposure to an experiment with impact on metrics?

Experimentation is priced based on MEUs - Monthly Experiment Users. Only users exposed to an experiment in a month are counted towards this tally.
### FAQ

#### How are MEUs different than MTUs (Monthly Tracked Users)?

MTUs count any user who has tracked an event to the project in the calendar month. MEUs are a subset of MTUs; they include only users who have tracked an exposure experiment event (i.e., `$experiment_started`) in the calendar month.

#### How can I estimate MEUs?
If you actively run experiments, you can look at the number of monthly users exposed to an experiment. Note that the MEU calculation is different if users are, on average, exposed to 30 or more experiments in a month.
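
If you want a rough self-service estimate from your own exported event data, a minimal sketch might look like the following. It assumes you already have events with `event`, `distinct_id`, and `time` (unix seconds) fields, and it ignores the 30-plus-experiments-per-user caveat above, so treat it as an approximation rather than billing logic.

```python
# Rough MEU estimate from exported event data (illustrative, not Mixpanel's billing logic).
# MEU ~= distinct users with at least one $experiment_started event in the calendar month.
from datetime import datetime, timezone

def estimate_meu(events, year, month):
    """events: iterable of dicts with 'event', 'distinct_id', and 'time' (unix seconds)."""
    exposed = set()
    for e in events:
        ts = datetime.fromtimestamp(e["time"], tz=timezone.utc)
        if e["event"] == "$experiment_started" and (ts.year, ts.month) == (year, month):
            exposed.add(e["distinct_id"])
    return len(exposed)

# Hypothetical data: two users exposed in March 2024, one exposure outside the month.
sample = [
    {"event": "$experiment_started", "distinct_id": "u1", "time": 1709600000},  # 2024-03
    {"event": "$experiment_started", "distinct_id": "u2", "time": 1710000000},  # 2024-03
    {"event": "$experiment_started", "distinct_id": "u1", "time": 1712100000},  # 2024-04
    {"event": "Purchase", "distinct_id": "u3", "time": 1709700000},
]
print(estimate_meu(sample, 2024, 3))  # -> 2
```
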

You can see your experiment MEU usage by going to Organization settings > Plan D…

### Post Experiment Analysis Decision
Once the experiment is ready to review, you can choose to 'End Analysis'. After that, you can log a decision, visible to all users, based on the experiment outcome:

- Ship Variant (any of the variants): You had a statistically significant result and have made the decision to ship a variant to all users. NOTE: Shipping a variant here is just a log; it does not actually trigger rolling out the feature flag unless you are using Mixpanel feature flags **(in beta today)**.
- Ship None: You may not have had any statistically significant results, or even if you did, the lift is not sufficient to warrant a change in user experience. You decide not to ship the change.
- Defer Decision: You may have a direction you want to go, but need to sync with other stakeholders before confirming the decision. This is an example where you might defer the decision, and come back at a later date to log the final decision.