8 files changed +154 −154 lines changed

@@ -559,7 +559,7 @@ <h2 id="kullback-leibler-divergence" class="anchor">Kullback-Leibler divergence </h2>
 D_{KL}(p \| q) = \int_x p(x) \log \frac{p(x)}{q(x)} dx.
 $$
 
-<p>The KL-divegrnece \( D_{KL} \) achieves the minimum zero when \( p(x) == q(x) \) everywhere.</p>
+<p>The KL-divergence \( D_{KL} \) achieves the minimum zero when \( p(x) == q(x) \) everywhere.</p>
 
 <p>Note that the KL divergence is asymmetric. In cases where \( p(x) \) is
 close to zero, but \( q(x) \) is significantly non-zero, the \( q \)'s effect
@@ -579,7 +579,7 @@ <h2 id="jensen-shannon-divergence" class="anchor">Jensen-Shannon divergence </h2>
 D_{JS}(p \| q) = \frac{1}{2} D_{KL}(p \| \frac{p + q}{2}) + \frac{1}{2} D_{KL}(q \| \frac{p + q}{2})
 $$
 
-<p>Many practitioners believe that one reason behind GANs' big success is
+<p>Many practitioners believe that one reason behind GANs' big success (to be discussed later) is
 switching the loss function from asymmetric KL-divergence in
 traditional maximum-likelihood approach to symmetric JS-divergence.
 </p>
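The two properties this diff touches on — KL is zero when \( p = q \) but asymmetric, while JS is symmetric — can be checked numerically. A minimal sketch for discrete distributions; the function names and example distributions below are illustrative, not from the post:

```python
import numpy as np

def kl_divergence(p, q):
    """Discrete KL divergence D_KL(p || q) = sum_x p(x) log(p(x) / q(x))."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    # Terms where p(x) == 0 contribute nothing to the sum.
    return float(np.sum(np.where(p > 0, p * np.log(p / q), 0.0)))

def js_divergence(p, q):
    """JS divergence: average of KL from p and from q to the mixture m = (p + q) / 2."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = 0.5 * (p + q)
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

p = [0.1, 0.4, 0.5]
q = [0.4, 0.4, 0.2]

print(kl_divergence(p, p))  # 0.0: KL attains its minimum when p == q everywhere
print(kl_divergence(p, q), kl_divergence(q, p))  # the two directions differ: KL is asymmetric
print(js_divergence(p, q), js_divergence(q, p))  # equal: JS is symmetric
```

Swapping the arguments changes KL but not JS, which is the motivation the edited paragraph attributes to GANs' loss choice.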
@@ -485,7 +485,7 @@ <h2 id="kullback-leibler-divergence">Kullback-Leibler divergence </h2>
 $$
 <p><br>
 
-<p>The KL-divegrnece \( D_{KL} \) achieves the minimum zero when \( p(x) == q(x) \) everywhere.</p>
+<p>The KL-divergence \( D_{KL} \) achieves the minimum zero when \( p(x) == q(x) \) everywhere.</p>
 
 <p>Note that the KL divergence is asymmetric. In cases where \( p(x) \) is
 close to zero, but \( q(x) \) is significantly non-zero, the \( q \)'s effect
@@ -508,7 +508,7 @@ <h2 id="jensen-shannon-divergence">Jensen-Shannon divergence </h2>
 $$
 <p><br>
 
-<p>Many practitioners believe that one reason behind GANs' big success is
+<p>Many practitioners believe that one reason behind GANs' big success (to be discussed later) is
 switching the loss function from asymmetric KL-divergence in
 traditional maximum-likelihood approach to symmetric JS-divergence.
 </p>
@@ -490,7 +490,7 @@ <h2 id="kullback-leibler-divergence">Kullback-Leibler divergence </h2>
 D_{KL}(p \| q) = \int_x p(x) \log \frac{p(x)}{q(x)} dx.
 $$
 
-<p>The KL-divegrnece \( D_{KL} \) achieves the minimum zero when \( p(x) == q(x) \) everywhere.</p>
+<p>The KL-divergence \( D_{KL} \) achieves the minimum zero when \( p(x) == q(x) \) everywhere.</p>
 
 <p>Note that the KL divergence is asymmetric. In cases where \( p(x) \) is
 close to zero, but \( q(x) \) is significantly non-zero, the \( q \)'s effect
@@ -510,7 +510,7 @@ <h2 id="jensen-shannon-divergence">Jensen-Shannon divergence </h2>
 D_{JS}(p \| q) = \frac{1}{2} D_{KL}(p \| \frac{p + q}{2}) + \frac{1}{2} D_{KL}(q \| \frac{p + q}{2})
 $$
 
-<p>Many practitioners believe that one reason behind GANs' big success is
+<p>Many practitioners believe that one reason behind GANs' big success (to be discussed later) is
 switching the loss function from asymmetric KL-divergence in
 traditional maximum-likelihood approach to symmetric JS-divergence.
 </p>
@@ -567,7 +567,7 @@ <h2 id="kullback-leibler-divergence">Kullback-Leibler divergence </h2>
 D_{KL}(p \| q) = \int_x p(x) \log \frac{p(x)}{q(x)} dx.
 $$
 
-<p>The KL-divegrnece \( D_{KL} \) achieves the minimum zero when \( p(x) == q(x) \) everywhere.</p>
+<p>The KL-divergence \( D_{KL} \) achieves the minimum zero when \( p(x) == q(x) \) everywhere.</p>
 
 <p>Note that the KL divergence is asymmetric. In cases where \( p(x) \) is
 close to zero, but \( q(x) \) is significantly non-zero, the \( q \)'s effect
@@ -587,7 +587,7 @@ <h2 id="jensen-shannon-divergence">Jensen-Shannon divergence </h2>
 D_{JS}(p \| q) = \frac{1}{2} D_{KL}(p \| \frac{p + q}{2}) + \frac{1}{2} D_{KL}(q \| \frac{p + q}{2})
 $$
 
-<p>Many practitioners believe that one reason behind GANs' big success is
+<p>Many practitioners believe that one reason behind GANs' big success (to be discussed later) is
 switching the loss function from asymmetric KL-divergence in
 traditional maximum-likelihood approach to symmetric JS-divergence.
 </p>