
Commit e097516

remove duplicate column from Experiments table (#2056)
* typo
* grammar (#2055)
1 parent 850a026 commit e097516


pages/docs/reports/experiments.mdx

Lines changed: 17 additions & 18 deletions
@@ -22,7 +22,7 @@ NOTE: Only experiments tracked via exposure events, i.e $experiment_started, can
 
 ### Step 2: Choose the ‘Control’ Variant
 
-Select the ‘Variant’ that represents your control. All your other variant(s) will be compared to the control, i.e how much better are they performing vs the control variant.
+Select the ‘Variant’ that represents your control. All your other variant(s) will be compared to the control, i.e., how much better they perform vs the control variant.
 
 ### Step 3: Choose Success Metrics
 
@@ -34,7 +34,7 @@ Enter either the sample size (the number of users to be exposed to the experimen
 
 ### Step 5: Confirm other Default Configurations
 
-Mixpanel has set default automatic configurations, seen below . If required, please modify them as needed for the experiment
+Mixpanel has set default automatic configurations, seen below. If required, please modify them as needed for the experiment.
 
 1. **Experiment Model type**: Sequential
 2. **Confidence Threshold**: 95%
@@ -44,18 +44,18 @@ Mixpanel has set default automatic configurations, seen below . If required, ple
 
 The Experiments report identifies significant differences between the Control and Variant groups. Every metric has two key attributes:
 
-- p-value : this shows if the variants’ delta impact vs the control is statistically significant
-- lift : the variants’ delta impact on the metric vs control
+- p-value: this shows if the variants’ delta impact vs the control is statistically significant
+- lift: the variants’ delta impact on the metric vs control
 
 Metric rows in the table are highlighted when any difference is calculated with high confidence. Specifically, if the difference is greater than the confidence interval you set up during the experiment configuration
 
-- Positive differences, where the variant value is higher than control, are highlighted in green
-- Negative differences, where the variant value is lower than control, are highlighted in red
+- Positive differences, where the variant value is higher than the control, are highlighted in green
+- Negative differences, where the variant value is lower than the control, are highlighted in red
 - Statistically insignificant results remain gray
 
 ### How do you read statistical significance?
 
-The main reason you look at statistical significance (p-value) is to get confidence on what it means for the larger roll out.
+The main reason you look at statistical significance (p-value) is to get confidence on what it means for the larger rollout.
 
 ![image](/exp_stat_sig.png)
 
@@ -65,8 +65,8 @@ In the above image for example, max p=0.025 [(1-0.95)/2]
 
 So, if an experiment's results show
 
-- p ≤ 0.025 : results are statistically significant for this metric, i.e you can be 95% confidence in the lift seen if the change is rolled out to all users.
-- p > 0.025 : results are not statistically significant for this metric, i.e you cannot be very confident on the results if the change is rolled out broadly.
+- p ≤ 0.025: results are statistically significant for this metric, i.e., you can be 95% confident in the lift seen if the change is rolled out to all users.
+- p > 0.025: results are not statistically significant for this metric, i.e., you cannot be very confident in the results if the change is rolled out broadly.
 
 ### How do you read lift?
 
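For reference, the 0.025 cutoff comes straight from the 95% confidence threshold noted above (max p = (1 - 0.95) / 2). A minimal sketch of that check; the function and variable names are illustrative only, not a Mixpanel API:

```ts
// Map the 95% confidence threshold to the two-sided p-value cutoff used above.
const CONFIDENCE_THRESHOLD = 0.95;            // default from Step 5
const maxP = (1 - CONFIDENCE_THRESHOLD) / 2;  // = 0.025

function isSignificant(pValue: number): boolean {
  // p ≤ 0.025 → statistically significant at 95% confidence
  return pValue <= maxP;
}

console.log(isSignificant(0.01)); // true  -> row is highlighted
console.log(isSignificant(0.10)); // false -> row stays gray
```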
@@ -104,7 +104,7 @@ NOTE: If you are using a ‘sequential’ testing experiment model type, you can
 ### Diagnosing experiments further in regular Mixpanel reports
 Click 'Analyze' on a metric to dive deeper into the results. This will open a normal Mixpanel insights report for the time range being analyzed with the experiment breakdown applied. This allows you to view users, view replays, or apply additional breakdowns to further analyze the results.
 
-You can also add the experiment breakdowns and filters directly in a report via the Experiments tab in the query builder. This lets you do on-the-fly analysis with the experiment groups. Under the hood, the experiment breakdown and filter works the same as the Experiment report.
+You can also add the experiment breakdowns and filters directly in a report via the Experiments tab in the query builder. This lets you do on-the-fly analysis with the experiment groups. Under the hood, the experiment breakdown and filter work the same as the Experiment report.
 
 
 ## Looking under the hood - How does the analysis engine work?
@@ -113,7 +113,7 @@ You can also add the experiment breakdowns and filters directly in a report via
 
 The Experiment report behavior is powered by [borrowed properties](/docs/features/custom-properties#borrowed-properties).
 
-For every user event, we identify if the event is performed after being exposed to an experiment. If it was, then we borrow the variant details from the tracked $experiment_started to attribute the event to the proper variant.
+For every user event, we identify if the event is performed after being exposed to an experiment. If it is, we borrow the variant details from the tracked $experiment_started to attribute the event to the proper variant.
 
 ### FAQs
 1. If a user switches variants mid-experiment, how do we calculate the impact on metrics?
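The borrowed-properties paragraph above can be pictured with a small sketch. This is illustrative only, assuming a simple "latest prior exposure wins" rule and made-up types; it is not Mixpanel's actual engine and does not describe how the variant-switching FAQ is resolved:

```ts
// Illustration only: events performed after a "$experiment_started" exposure
// borrow that exposure's variant so they can be attributed to the right group.
type TrackedEvent = { name: string; time: number; variant?: string };

function attributeEvents(events: TrackedEvent[]): TrackedEvent[] {
  let exposedVariant: string | undefined;
  return events
    .slice()
    .sort((a, b) => a.time - b.time)
    .map((e) => {
      if (e.name === "$experiment_started") exposedVariant = e.variant;
      // events before any exposure stay unattributed
      return { ...e, variant: e.variant ?? exposedVariant };
    });
}

// Example: the payment page view after exposure is attributed to "variant_a".
console.log(
  attributeEvents([
    { name: "App Open", time: 1 },
    { name: "$experiment_started", time: 2, variant: "variant_a" },
    { name: "Payment Page Viewed", time: 3 },
  ])
);
```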
@@ -150,11 +150,11 @@ You can specify the event and property that should be used as the exposure event
 ![image](/exp_settings_rescale.png)
 
 ### When to track an exposure event?
-- An exposure event ONLY needs to be sent the first time a user is exposed to an experiment as long as the user is always in the initial bucketed variant. Exposure events don’t have to be sent subsequently in new sessions.
+- An exposure event ONLY needs to be sent the first time a user is exposed to an experiment, as long as the user is always in the initial bucketed variant. Exposure events don’t have to be sent subsequently in new sessions.
 - If a user is part of multiple experiments, send a corresponding exposure event for each experiment.
 - Send exposure event only when a user is actually exposed, not at the start of a session.
 
-For example,if you want to run an experiment on the payment page of a ride-sharing app, you only really care about users who open the app, book a ride, and then reach the payment page. Users who only open the app and do other activities shouldn't be considered in the sample size. So exposure event should ideally be implemented to track only once the payment page is reached.
+For example, if you want to run an experiment on the payment page of a ride-sharing app, you only really care about users who open the app, book a ride, and then reach the payment page. Users who only open the app and do other activities shouldn't be considered in the sample size. So the exposure event should ideally be implemented to track only once the payment page is reached.
 
 - Send exposure details and not the assignment.
 
@@ -170,10 +170,10 @@ Experimentation is priced based on MEUs - Monthly Experiment Users. Only users e
 
 ### FAQ
 #### How are MEUs different than MTUs (Monthly Tracked Users)?
-MTUs count any user who has tracked an event to the project in the calendar month. MEU is a subset of MTU, it’s only users who have tracked an exposure experiment event (ie, $experiment_started) in the calendar month.
+MTUs count any user who has tracked an event to the project in the calendar month. MEU is a subset of MTUs; it’s only users who have tracked an exposure experiment event (i.e., $experiment_started) in the calendar month.
 
 #### How can I estimate MEUs?
-If you actively run experiments you can look at the number of monthly users exposed to an experiment. Note the MEU calculation is different if users are, on average, exposed to 30 or more experiments in a month.
+If you actively run experiments, you can look at the number of monthly users exposed to an experiment. Note that the MEU calculation is different if users are, on average, exposed to 30 or more experiments in a month.
 
 If not running experiments, below are some rough estimations of MEUs based on the number of MTUs being tracked to the project.
 | **MTU bucket** | **Estimated MEU (% MTU)** |
@@ -182,11 +182,10 @@ If not running experiments, below are some rough estimations of MEU's based on t
 | Medium (100k - 1M) | 40-75% |
 | Large (1M - 10M) | 25-60%|
 | Very large (10M - 100M) | 20-50% |
-| Medium (100k - 1M) | 40-75% |
 | 100M + | 10-25% |
 
 #### Does it matter how many experiments a user is exposed to within the month?
-We’ve accounted for a MEU to be exposed to up to 30 experiments per month. If the average number of experiment exposure events per MEU is over 30, then the MEUs will be calculated as the total number of exposure events divided by 30.
+We’ve accounted for an MEU to be exposed to up to 30 experiments per month. If the average number of experiment exposure events per MEU is over 30, then the MEUs will be calculated as the total number of exposure events divided by 30.
 
 #### What happens if I go over my purchased MEU bucket?
 You can continue using Mixpanel Experiment Report, but you will be charged a higher rate for the overages.
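The divide-by-30 rule above, sketched as a quick calculation; the function name, inputs, and rounding choice are illustrative, not a billing API:

```ts
// If the average number of exposure events per exposed user exceeds 30 in a
// month, MEUs are counted as total exposure events / 30 (rounding up here is
// an assumption); otherwise MEUs equal the number of exposed users.
function monthlyExperimentUsers(exposedUsers: number, exposureEvents: number): number {
  const avgExposuresPerUser = exposureEvents / exposedUsers;
  return avgExposuresPerUser > 30 ? Math.ceil(exposureEvents / 30) : exposedUsers;
}

// Example: 1,000 exposed users with 45,000 exposure events in a month
// -> 45 exposures per user on average, so MEU = 45,000 / 30 = 1,500.
console.log(monthlyExperimentUsers(1000, 45000)); // 1500
```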
@@ -215,7 +214,7 @@ You can see your experiment MEU usage by going to Organization settings > Plan D
 ### Post Experiment Analysis Decision
 Once the experiment is ready to review, you can choose to 'End Analysis'. Once complete, you can log a decision, visible to all users, based on the experiment outcome:
 
-- Ship Variant (any of the variants): You had a statistically significant result. You have made a decision to ship a variant to all users. NOTE: Shipping variant here is just a log, it does not actually trigger rolling out the feature flag unless you are using Mixpanel feature flags *(in beta today).*
+- Ship Variant (any of the variants): You had a statistically significant result. You have made a decision to ship a variant to all users. NOTE: Shipping variant here is just a log; it does not actually trigger rolling out the feature flag unless you are using Mixpanel feature flags (in beta today).
 - Ship None: You may not have had any statistically significant results, or even if you have statistically significant results, the lift is not sufficient to warrant a change in user experience. You decide not to ship the change.
 - Defer Decision: You may have a direction you want to go, but need to sync with other stakeholders before confirming the decision. This is an example where you might defer the decision and come back at a later date to log the final decision.
 