Merge pull request #488 from PierreCounathe/pierrecounathe/unit-4-propositions

Unit 4 Proposal Updates
simoninithomas authored Apr 19, 2024
2 parents e9f1aff + 732d543 commit ebfd6d5
Showing 2 changed files with 5 additions and 6 deletions.
units/en/unit4/pg-theorem.mdx (7 changes: 3 additions & 4 deletions)
@@ -21,17 +21,16 @@ So we have:

We can rewrite the gradient of the sum as the sum of the gradient:

\\( = \sum_{\tau} \nabla_\theta (P(\tau;\theta)R(\tau)) = \sum_{\tau} \nabla_\theta P(\tau;\theta)R(\tau) \\), since \\(R(\tau)\\) does not depend on \\(\theta\\)

We then multiply every term in the sum by \\(\frac{P(\tau;\theta)}{P(\tau;\theta)}\\) (which is possible since it equals 1):

\\( = \sum_{\tau} \frac{P(\tau;\theta)}{P(\tau;\theta)}\nabla_\theta P(\tau;\theta)R(\tau) \\)

We can simplify this further, since

\\( \frac{P(\tau;\theta)}{P(\tau;\theta)}\nabla_\theta P(\tau;\theta) = P(\tau;\theta)\frac{\nabla_\theta P(\tau;\theta)}{P(\tau;\theta)} \\)

Thus we can rewrite the sum as:

\\( = \sum_{\tau} P(\tau;\theta) \frac{\nabla_\theta P(\tau;\theta)}{P(\tau;\theta)}R(\tau) \\)
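
For reference, the step this rewriting sets up is the log-derivative (likelihood-ratio) identity; a minimal LaTeX sketch, writing \\(J(\theta)\\) for the objective being differentiated here:

```latex
% Log-derivative (likelihood-ratio) identity:
%   \nabla_\theta \log P(\tau;\theta) = \frac{\nabla_\theta P(\tau;\theta)}{P(\tau;\theta)}
% Substituting it turns the gradient into an expectation over trajectories:
\nabla_\theta J(\theta)
  = \sum_{\tau} P(\tau;\theta) \, \nabla_\theta \log P(\tau;\theta) \, R(\tau)
  = \mathbb{E}_{\tau \sim P(\tau;\theta)} \big[ \nabla_\theta \log P(\tau;\theta) \, R(\tau) \big]
```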

units/en/unit4/policy-gradient.mdx (4 changes: 2 additions & 2 deletions)
@@ -109,8 +109,8 @@ In a loop:

We can interpret this update as follows:

- \\(\nabla_\theta log \pi_\theta(a_t|s_t)\\) is the direction of **steepest increase of the (log) probability** of selecting action \\(a_t\\) from state \\(s_t\\).
This tells us **how we should change the weights of the policy** if we want to increase/decrease the log probability of selecting action \\(a_t\\) at state \\(s_t\\).

- \\(R(\tau)\\) is the scoring function (see the sketch after this list):
- If the return is high, it will **push up the probabilities** of the (state, action) combinations.
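
To make this interpretation concrete, here is a minimal PyTorch-style sketch of the resulting update for one sampled trajectory; it is an illustration with made-up numbers, not the course's notebook code, and `log_probs` stands in for the per-step log-probabilities produced by the policy network:

```python
import torch

# Made-up per-step log-probabilities log pi_theta(a_t | s_t) for one sampled
# trajectory (in practice these come from the policy network), and its return R(tau).
log_probs = torch.tensor([-0.2, -1.3, -0.7], requires_grad=True)
R_tau = 5.0

# REINFORCE maximizes sum_t log pi_theta(a_t | s_t) * R(tau),
# so we minimize the negative of that quantity.
loss = -(log_probs.sum() * R_tau)
loss.backward()

# A gradient-descent step (theta <- theta - lr * grad) then raises the
# log-probabilities of the taken actions when R(tau) > 0 and lowers them
# when R(tau) < 0: the "push up / push down" behaviour described above.
print(log_probs.grad)  # tensor([-5., -5., -5.]): the gradient is -R(tau) per step
```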
