
Commit c8755f5 (1 parent: 8f29d05)

Commit message: update

File tree

8 files changed: +154 −154 lines


doc/pub/week13/html/week13-bs.html

Lines changed: 2 additions & 2 deletions
@@ -559,7 +559,7 @@ <h2 id="kullback-leibler-divergence" class="anchor">Kullback-Leibler divergence
 D_{KL}(p \| q) = \int_x p(x) \log \frac{p(x)}{q(x)} dx.
 $$
 
-<p>The KL-divegrnece \( D_{KL} \) achieves the minimum zero when \( p(x) == q(x) \) everywhere.</p>
+<p>The KL-divergence \( D_{KL} \) achieves the minimum zero when \( p(x) == q(x) \) everywhere.</p>
 
 <p>Note that the KL divergence is asymmetric. In cases where \( p(x) \) is
 close to zero, but \( q(x) \) is significantly non-zero, the \( q \)'s effect
@@ -579,7 +579,7 @@ <h2 id="jensen-shannon-divergence" class="anchor">Jensen-Shannon divergence </h2
 D_{JS}(p \| q) = \frac{1}{2} D_{KL}(p \| \frac{p + q}{2}) + \frac{1}{2} D_{KL}(q \| \frac{p + q}{2})
 $$
 
-<p>Many practitioners believe that one reason behind GANs' big success is
+<p>Many practitioners believe that one reason behind GANs' big success (to be discussed later) is
 switching the loss function from asymmetric KL-divergence in
 traditional maximum-likelihood approach to symmetric JS-divergence.
 </p>
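As a quick sanity check on the properties stated in the paragraphs above (KL divergence attains its minimum of zero when p == q, and is asymmetric, while the JS divergence is symmetric), here is a minimal discrete-distribution sketch in Python; the distributions `p` and `q` are made-up illustrative values, not anything from the lecture material:

```python
import math

def kl(p, q):
    # Discrete KL divergence D_KL(p || q); assumes q[i] > 0 wherever p[i] > 0.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js(p, q):
    # Jensen-Shannon divergence: average of the KL divergences of p and q
    # against the mixture m = (p + q) / 2, as in the formula above.
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = [0.1, 0.4, 0.5]
q = [0.3, 0.3, 0.4]

print(kl(p, p))            # zero: the minimum, attained when p == q
print(kl(p, q), kl(q, p))  # two different values: KL is asymmetric
print(js(p, q), js(q, p))  # equal values: JS is symmetric
```

Note that `kl` silently skips terms with p[i] == 0; if instead q[i] == 0 where p[i] > 0, the KL divergence is infinite, which is exactly the sensitivity to near-zero q(x) described in the text.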

doc/pub/week13/html/week13-reveal.html

Lines changed: 2 additions & 2 deletions
@@ -485,7 +485,7 @@ <h2 id="kullback-leibler-divergence">Kullback-Leibler divergence </h2>
 $$
 <p>&nbsp;<br>
 
-<p>The KL-divegrnece \( D_{KL} \) achieves the minimum zero when \( p(x) == q(x) \) everywhere.</p>
+<p>The KL-divergence \( D_{KL} \) achieves the minimum zero when \( p(x) == q(x) \) everywhere.</p>
 
 <p>Note that the KL divergence is asymmetric. In cases where \( p(x) \) is
 close to zero, but \( q(x) \) is significantly non-zero, the \( q \)'s effect
@@ -508,7 +508,7 @@ <h2 id="jensen-shannon-divergence">Jensen-Shannon divergence </h2>
 $$
 <p>&nbsp;<br>
 
-<p>Many practitioners believe that one reason behind GANs' big success is
+<p>Many practitioners believe that one reason behind GANs' big success (to be discussed later) is
 switching the loss function from asymmetric KL-divergence in
 traditional maximum-likelihood approach to symmetric JS-divergence.
 </p>

doc/pub/week13/html/week13-solarized.html

Lines changed: 2 additions & 2 deletions
@@ -490,7 +490,7 @@ <h2 id="kullback-leibler-divergence">Kullback-Leibler divergence </h2>
 D_{KL}(p \| q) = \int_x p(x) \log \frac{p(x)}{q(x)} dx.
 $$
 
-<p>The KL-divegrnece \( D_{KL} \) achieves the minimum zero when \( p(x) == q(x) \) everywhere.</p>
+<p>The KL-divergence \( D_{KL} \) achieves the minimum zero when \( p(x) == q(x) \) everywhere.</p>
 
 <p>Note that the KL divergence is asymmetric. In cases where \( p(x) \) is
 close to zero, but \( q(x) \) is significantly non-zero, the \( q \)'s effect
@@ -510,7 +510,7 @@ <h2 id="jensen-shannon-divergence">Jensen-Shannon divergence </h2>
 D_{JS}(p \| q) = \frac{1}{2} D_{KL}(p \| \frac{p + q}{2}) + \frac{1}{2} D_{KL}(q \| \frac{p + q}{2})
 $$
 
-<p>Many practitioners believe that one reason behind GANs' big success is
+<p>Many practitioners believe that one reason behind GANs' big success (to be discussed later) is
 switching the loss function from asymmetric KL-divergence in
 traditional maximum-likelihood approach to symmetric JS-divergence.
 </p>

doc/pub/week13/html/week13.html

Lines changed: 2 additions & 2 deletions
@@ -567,7 +567,7 @@ <h2 id="kullback-leibler-divergence">Kullback-Leibler divergence </h2>
 D_{KL}(p \| q) = \int_x p(x) \log \frac{p(x)}{q(x)} dx.
 $$
 
-<p>The KL-divegrnece \( D_{KL} \) achieves the minimum zero when \( p(x) == q(x) \) everywhere.</p>
+<p>The KL-divergence \( D_{KL} \) achieves the minimum zero when \( p(x) == q(x) \) everywhere.</p>
 
 <p>Note that the KL divergence is asymmetric. In cases where \( p(x) \) is
 close to zero, but \( q(x) \) is significantly non-zero, the \( q \)'s effect
@@ -587,7 +587,7 @@ <h2 id="jensen-shannon-divergence">Jensen-Shannon divergence </h2>
 D_{JS}(p \| q) = \frac{1}{2} D_{KL}(p \| \frac{p + q}{2}) + \frac{1}{2} D_{KL}(q \| \frac{p + q}{2})
 $$
 
-<p>Many practitioners believe that one reason behind GANs' big success is
+<p>Many practitioners believe that one reason behind GANs' big success (to be discussed later) is
 switching the loss function from asymmetric KL-divergence in
 traditional maximum-likelihood approach to symmetric JS-divergence.
 </p>
Binary file changed (0 bytes), not shown.
