
Commit f614e27
Merge branch 'kunakl07-master' into dev
2 parents: 1e23453 + a1393c3

File tree: 2 files changed (+7, -7 lines)


docs/source/content/query_strategies/Acquisition-functions.rst

+1 -1

@@ -3,7 +3,7 @@
 Acquisition functions
 =====================

-In Bayesian optimization, a so-called *acquisition funciton* is used instead of the uncertainty based utility measures of active learning. In modAL, Bayesian optimization algorithms are implemented in the ``modAL.models.BayesianOptimizer`` class. Currently, there are three available acquisition funcions: probability of improvement, expected improvement and upper confidence bound.
+In Bayesian optimization, a so-called *acquisition funciton* is used instead of the uncertainty based utility measures of active learning. In modAL, Bayesian optimization algorithms are implemented in the ``modAL.models.BayesianOptimizer`` class. Currently, there are three available acquisition functions: probability of improvement, expected improvement and upper confidence bound.

 Probability of improvement
 --------------------------

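For context on the class this hunk documents, here is a minimal sketch of how ``modAL.models.BayesianOptimizer`` is typically wired up with one of the three acquisition functions; the toy pool and initial evaluations below are assumptions for illustration, not part of the patched docs.

.. code:: python

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from modAL.models import BayesianOptimizer
    from modAL.acquisition import max_EI  # alternatives: max_PI, max_UCB

    # toy candidate pool and a few initial evaluations (illustrative assumption)
    X_pool = np.linspace(0, 10, 1000).reshape(-1, 1)
    X_initial = np.array([[1.0], [5.0], [9.0]])
    y_initial = np.sin(X_initial).ravel()

    optimizer = BayesianOptimizer(
        estimator=GaussianProcessRegressor(),
        X_training=X_initial, y_training=y_initial,
        query_strategy=max_EI  # expected improvement as the acquisition function
    )

    # the acquisition function picks the next point to evaluate
    query_idx, query_inst = optimizer.query(X_pool)
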
docs/source/content/query_strategies/Disagreement-sampling.rst

+6 -6

@@ -3,7 +3,7 @@
 Disagreement sampling
 =====================

-When you have several hypothesis about your data, selecting the next instances to label can be done by measuring the disagreement between the hypotheses. Naturally, there are many ways to do that. In modAL, there are three built-in disagreement measures and query strategies: *vote entropy*, *consensus entropy* and *maximum disagreement*. In this quick tutorial, we are going to review them. For more details, see Section 3.4 of the awesome book `Active learning by Burr Settles <http://active-learning.net/>`__.
+When you have several hypotheses about your data, selecting the next instances to label can be done by measuring the disagreement between the hypotheses. Naturally, there are many ways to do that. In modAL, there are three built-in disagreement measures and query strategies: *vote entropy*, *consensus entropy* and *maximum disagreement*. In this quick tutorial, we are going to review them. For more details, see Section 3.4 of the awesome book `Active learning by Burr Settles <http://active-learning.net/>`__.

 Disagreement sampling for classifiers
 -------------------------------------

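The three built-in strategies named in this hunk are exposed in ``modAL.disagreement``. As a rough, self-contained sketch of how a committee of classifiers uses one of them (the dataset and committee setup here are assumptions for illustration):

.. code:: python

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.neighbors import KNeighborsClassifier
    from modAL.models import ActiveLearner, Committee
    from modAL.disagreement import vote_entropy_sampling

    X, y = load_iris(return_X_y=True)
    rng = np.random.default_rng(0)

    # three learners, each trained on a different random subset of the data
    learners = []
    for _ in range(3):
        idx = rng.choice(len(X), size=20, replace=False)
        learners.append(ActiveLearner(estimator=KNeighborsClassifier(n_neighbors=3),
                                      X_training=X[idx], y_training=y[idx]))

    committee = Committee(learner_list=learners,
                          query_strategy=vote_entropy_sampling)
    # or: consensus_entropy_sampling, max_disagreement_sampling

    # the instance the committee disagrees on most, by vote entropy
    query_idx, query_inst = committee.query(X)
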
@@ -52,7 +52,7 @@ Instead of calculating the distribution of the votes, the *consensus
 entropy* disagreement measure first calculates the average of the class
 probabilities of each classifier. This is called the consensus
 probability. Then the entropy of the consensus probability is calculated
-and the instance with largest consensus entropy is selected.
+and the instance with the largest consensus entropy is selected.

 For an example, let's suppose that we continue the previous example with
 three classifiers, classes ``[0, 1, 2]`` and five instances to classify.

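The calculation this hunk describes is short enough to spell out directly; the probabilities below are made-up numbers, and modAL's built-in version of this rule is ``modAL.disagreement.consensus_entropy_sampling``.

.. code:: python

    import numpy as np
    from scipy.stats import entropy

    # hypothetical class probabilities from three learners for two instances,
    # shape (n_learners, n_instances, n_classes)
    vote_proba = np.array([
        [[0.8, 0.1, 0.1], [0.3, 0.3, 0.4]],
        [[0.7, 0.2, 0.1], [0.4, 0.3, 0.3]],
        [[0.9, 0.1, 0.0], [0.2, 0.4, 0.4]],
    ])

    consensus_proba = vote_proba.mean(axis=0)        # consensus probability per instance
    consensus_entropy = entropy(consensus_proba.T)   # entropy of the consensus, per instance
    query_idx = consensus_entropy.argmax()           # instance with the largest consensus entropy
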
@@ -100,7 +100,7 @@ Even though the votes for the second instance are ``[1, 1, 2]``, since the class
 Max disagreement
 ^^^^^^^^^^^^^^^^

-The disagreement measures so far take the actual *disagreement* into account in a weak way. Instead of this, it is possible to to measure each learner's disagreement with the consensus probabilities and query the instance where the disagreement is largest for some learner. This is called *max disagreement sampling*. Continuing our example, if the vote probabilities for each learner and the consensus probabilities are given, we can calculate the `Kullback-Leibler divergence <https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence>`__ of each learner to the consensus prediction and then for each instance, select the largest value.
+The disagreement measures so far take the actual *disagreement* into account in a weak way. Instead of this, it is possible to measure each learner's disagreement with the consensus probabilities and query the instance where the disagreement is largest for some learner. This is called *max disagreement sampling*. Continuing our example, if the vote probabilities for each learner and the consensus probabilities are given, we can calculate the `Kullback-Leibler divergence <https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence>`__ of each learner to the consensus prediction and then for each instance, select the largest value.

 .. code:: python

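The KL-based selection rule this hunk describes can be sketched with the same made-up probabilities as above; modAL's built-in version is ``modAL.disagreement.max_disagreement_sampling``.

.. code:: python

    import numpy as np
    from scipy.stats import entropy

    # hypothetical per-learner probabilities, shape (n_learners, n_instances, n_classes)
    vote_proba = np.array([
        [[0.8, 0.1, 0.1], [0.3, 0.3, 0.4]],
        [[0.7, 0.2, 0.1], [0.4, 0.3, 0.3]],
        [[0.9, 0.1, 0.0], [0.2, 0.4, 0.4]],
    ])
    consensus_proba = vote_proba.mean(axis=0)

    # KL divergence of each learner from the consensus, for every instance
    n_learners, n_instances, _ = vote_proba.shape
    kl = np.array([[entropy(vote_proba[l, i], qk=consensus_proba[i])
                    for i in range(n_instances)]
                   for l in range(n_learners)])

    max_disagreement = kl.max(axis=0)      # largest disagreement by any learner, per instance
    query_idx = max_disagreement.argmax()  # instance where that disagreement is largest
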
@@ -123,15 +123,15 @@ In this case, one of the learner highly disagrees with the others in the class o
 Disagreement sampling for regressors
 ------------------------------------

-Since regressors in general don't provide a way to calculate prediction probabilities, disagreement measures for classifiers may not work with regressors. Despite of this, ensemble regression models can be always used in an active learning scenario, because the standard deviation of the predictions at a given point can be thought of as a measure of disagreement.
+Since regressors, in general, don't provide a way to calculate prediction probabilities, disagreement measures for classifiers may not work with regressors. Despite this, ensemble regression models can be always used in an active learning scenario, because the standard deviation of the predictions at a given point can be thought of as a measure of disagreement.

 Standard deviation sampling
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^

 .. figure:: img/er-initial.png
    :align: center

-When a committee of regressors is available, uncertainty of predictions can be estimated by calculating the standard deviation of predictions. This is done by the ``modAL.disagreement.max_std_sampling`` function.
+When a committee of regressors is available, the uncertainty of predictions can be estimated by calculating the standard deviation of predictions. This is done by the ``modAL.disagreement.max_std_sampling`` function.

 Disagreement measures in action
 -------------------------------

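As a rough sketch of the regressor case this hunk touches (the toy data and two-member committee are assumptions for illustration):

.. code:: python

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from modAL.models import ActiveLearner, CommitteeRegressor
    from modAL.disagreement import max_std_sampling

    rng = np.random.default_rng(0)
    X_pool = np.linspace(0, 10, 500).reshape(-1, 1)
    y_pool = np.sin(X_pool).ravel() + rng.normal(scale=0.1, size=500)

    # two regressors trained on different random subsets of the pool
    learners = []
    for _ in range(2):
        idx = rng.choice(len(X_pool), size=15, replace=False)
        learners.append(ActiveLearner(estimator=GaussianProcessRegressor(),
                                      X_training=X_pool[idx], y_training=y_pool[idx]))

    committee = CommitteeRegressor(learner_list=learners,
                                   query_strategy=max_std_sampling)

    # query the point where the members' predictions spread out the most
    query_idx, query_inst = committee.query(X_pool)
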
@@ -151,7 +151,7 @@ The consensus predictions of these learners are
 .. figure:: img/dis-consensus.png
    :align: center

-In this case, the disagreement measures from left to right are vote entropy, consensus entropy and max disagreement.
+In this case, the disagreement measures from left to right are vote entropy, consensus entropy, and max disagreement.

 .. figure:: img/dis-measures.png
    :align: center
