Merged
3 changes: 2 additions & 1 deletion lectures/_toc.yml
@@ -13,7 +13,8 @@ parts:
  numbered: true
  chapters:
  - file: more_julia/generic_programming
- - file: more_julia/general_packages
+ - file: more_julia/auto_differentiation
+ - file: more_julia/quadrature_interpolation
  - file: more_julia/data_statistical_packages
  - file: more_julia/optimization_solver_packages
  - caption: Software Engineering
21 changes: 10 additions & 11 deletions lectures/about_lectures.md
@@ -45,10 +45,18 @@ While Julia has many features of a general purpose language, its specialization
using Matlab or Fortran than using a general purpose language - giving it an advantage in being closer
to both mathematical notation and direct implementation of mathematical abstractions.

Julia has both a large number of useful, well-written libraries and many incomplete, poorly maintained proofs of concept.

A major advantage of Julia libraries is that, because Julia itself is sufficiently fast, there is less need to mix in low-level languages like C and Fortran.

As a result, most Julia libraries are written exclusively in Julia.

Not only does this make the libraries more portable, it also makes them much easier to dive into, read, learn from, and modify.

### A Word of Caution

The disadvantage of specialization is that Julia tends to be used by domain experts, and consequently
-the ecosystem and language for non-mathematical/non-scientfic computing tasks is inferior to Python.
+the ecosystem and language for non-mathematical/non-scientific computing tasks is inferior to Python.

Another disadvantage is that, since it tends to be used by experts and is on the cutting edge, the tooling is
much more fragile and rudimentary than Matlab.
@@ -59,12 +67,6 @@ not expect the development tools to be quite as stable, or to be comparable to Matlab

Nevertheless, the end-result will always be elegant and grounded in mathematical notation and abstractions.

For these reasons, Julia is most appropriate at this time for researchers who want to:

1. invest in a language likely to mature in the 3-5 year timeline
1. use one of the many amazing packages that Julia makes possible (and are frequently impossible in other languages)
1. write sufficiently specialized algorithms that the quirks of the environment are much less important than the end-result

## Advantages of Julia

Despite the short-term cautions, Julia has both immediate and long-run advantages.
@@ -76,16 +78,13 @@ The advantages of the language itself show clearly in the high quality packages,
- Interval Constraint Programming and rigorous root finding: [IntervalRootFinding.jl](https://github.com/JuliaIntervals/IntervalRootFinding.jl)
- GPUs: [CuArrays.jl](https://github.com/JuliaGPU/CuArrays.jl)
- Linear algebra for large systems (e.g. structured matrices, matrix-free methods, etc.): [IterativeSolvers.jl](https://juliamath.github.io/IterativeSolvers.jl/dev/), [BlockBandedMatrices.jl](https://github.com/JuliaMatrices/BlockBandedMatrices.jl), [InfiniteLinearAlgebra.jl](https://github.com/JuliaMatrices/InfiniteLinearAlgebra.jl), and many others
-- Automatic differentiation: [Zygote.jl](https://github.com/FluxML/Zygote.jl) and [ForwardDiff.jl](https://github.com/JuliaDiff/ForwardDiff.jl)
+- Automatic differentiation: [ForwardDiff.jl](https://github.com/JuliaDiff/ForwardDiff.jl) and [Enzyme.jl](https://github.com/EnzymeAD/Enzyme.jl)

These are in addition to the many mundane but essential packages available. While there are examples of these packages in other languages, no
other language can achieve the combination of performance, mathematical notation, and composition that Julia provides.

The composition of packages is especially important, and is made possible through Julia's use of something called [multiple-dispatch](https://en.wikipedia.org/wiki/Multiple_dispatch).

The promise of Julia is that you write clean mathematical code, and have the same code automatically work with automatic differentiation, interval arithmetic, and GPU arrays--all of which may be used in
cutting-edge algorithms in packages and combined seamlessly.

## Open Source

All the computing environments we work with are free and open source.
2 changes: 1 addition & 1 deletion lectures/dynamic_programming/jv.md
@@ -137,7 +137,7 @@ and use Monte Carlo integration, or discretize.

Here we will use [Gauss-Jacobi Quadrature](https://en.wikipedia.org/wiki/Gauss–Jacobi_quadrature) which is ideal for expectations over beta.

-See {doc}`general packages <../more_julia/general_packages>` for details on the derivation in this particular case.
+See {doc}`quadrature and interpolation <../more_julia/quadrature_interpolation>` for details on the derivation in this particular case.
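As an editor's aside (the lectures themselves use Julia), the same idea can be sketched in Python via SciPy's `roots_jacobi`: after mapping $[0,1]$ to $[-1,1]$, the Beta$(a,b)$ density matches the Jacobi weight $(1-t)^{b-1}(1+t)^{a-1}$ up to a constant. The function name `beta_expectation` is illustrative, not from the lecture.

```python
import numpy as np
from scipy.special import roots_jacobi, beta as beta_fn

def beta_expectation(f, a, b, n=10):
    """Approximate E[f(X)] for X ~ Beta(a, b) with Gauss-Jacobi quadrature.

    roots_jacobi(n, alpha, beta) returns nodes/weights for the weight
    (1 - t)^alpha (1 + t)^beta on [-1, 1]; the map x = (t + 1)/2 turns
    this into the Beta(a, b) density up to the constant B(a, b) 2^(a+b-1).
    """
    t, w = roots_jacobi(n, b - 1.0, a - 1.0)
    const = beta_fn(a, b) * 2.0 ** (a + b - 1.0)
    return np.sum(w * f((t + 1.0) / 2.0)) / const

# E[X] for Beta(2, 3) is a/(a+b) = 0.4; a handful of nodes is exact
# for polynomial integrands.
mean = beta_expectation(lambda x: x, 2.0, 3.0, n=5)
```

Since the quadrature is exact for polynomials of the relevant degree, low moments come out to machine precision with only a few nodes.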

```{code-cell} julia
function gauss_jacobi(F::Beta, N)
2 changes: 2 additions & 0 deletions lectures/getting_started_julia/julia_by_example.md
@@ -930,6 +930,8 @@ until $| x^{n+1} - x^n|$ is below a tolerance

For those impatient to use more advanced features of Julia, implement a version of Exercise 8(a) where `f_prime` is calculated with auto-differentiation.

See {doc}`auto-differentiation <../more_julia/auto_differentiation>` for more.
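As an illustrative aside (this is not the lecture's ForwardDiff.jl code), the mechanism behind forward-mode auto-differentiation can be sketched in Python with a minimal dual-number class, then plugged into Newton's method so no hand-coded `f_prime` is needed. All names here (`Dual`, `derivative`, `newton`) are hypothetical:

```python
class Dual:
    """Minimal dual number: a value plus a derivative (epsilon) part."""
    def __init__(self, val, eps=0.0):
        self.val, self.eps = val, eps

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.eps + o.eps)
    __radd__ = __add__

    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val - o.val, self.eps - o.eps)

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (u v)' = u' v + u v'
        return Dual(self.val * o.val, self.eps * o.val + self.val * o.eps)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f at x + 1*epsilon and read off the derivative part."""
    return f(Dual(x, 1.0)).eps

def newton(f, x0, tol=1e-10, maxiter=100):
    """Newton's method with the derivative supplied by forward-mode AD."""
    x = x0
    for _ in range(maxiter):
        step = f(x) / derivative(f, x)
        x -= step
        if abs(step) < tol:
            break
    return x

root = newton(lambda x: x * x - 2.0, 1.0)  # converges to sqrt(2)
```

This is essentially what ForwardDiff.jl does, except that Julia's multiple dispatch and compiler specialization make the dual-number arithmetic run at native speed.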

```{code-cell} julia
using ForwardDiff

70 changes: 0 additions & 70 deletions lectures/introduction_dynamics/finite_markov.md
@@ -1156,69 +1156,6 @@ and return the list of pages ordered by rank.

When you solve for the ranking, you will find that the highest ranked node is in fact `g`, while the lowest is `a`.

(mc_ex3)=
### Exercise 3

In numerical work it is sometimes convenient to replace a continuous model with a discrete one.

In particular, Markov chains are routinely generated as discrete approximations to AR(1) processes of the form

$$
y_{t+1} = \rho y_t + u_{t+1}
$$

Here $\{ u_t \}$ is assumed to be i.i.d. and $N(0, \sigma_u^2)$.

The variance of the stationary probability distribution of $\{ y_t \}$ is

$$
\sigma_y^2 := \frac{\sigma_u^2}{1-\rho^2}
$$

Tauchen's method {cite}`Tauchen1986` is the most common method for approximating this continuous state process with a finite state Markov chain.

A routine for this already exists in [QuantEcon.jl](http://quantecon.org/quantecon-jl), but let's write our own version as an exercise.

As a first step we choose

* $n$, the number of states for the discrete approximation
* $m$, an integer that parameterizes the width of the state space

Next we create a state space $\{x_0, \ldots, x_{n-1}\} \subset \mathbb R$
and a stochastic $n \times n$ matrix $P$ such that

* $x_0 = - m \, \sigma_y$
* $x_{n-1} = m \, \sigma_y$
* $x_{i+1} = x_i + s$ where $s = (x_{n-1} - x_0) / (n - 1)$

Let $F$ be the cumulative distribution function of the normal distribution $N(0, \sigma_u^2)$.

The values $P(x_i, x_j)$ are computed to approximate the AR(1) process --- omitting the derivation, the rules are as follows:

1. If $j = 0$, then set

$$
P(x_i, x_j) = P(x_i, x_0) = F(x_0-\rho x_i + s/2)
$$

1. If $j = n-1$, then set

$$
P(x_i, x_j) = P(x_i, x_{n-1}) = 1 - F(x_{n-1} - \rho x_i - s/2)
$$

1. Otherwise, set

$$
P(x_i, x_j) = F(x_j - \rho x_i + s/2) - F(x_j - \rho x_i - s/2)
$$

The exercise is to write a function `approx_markov(rho, sigma_u, m = 3, n = 7)` that returns
$\{x_0, \ldots, x_{n-1}\} \subset \mathbb R$ and an $n \times n$ matrix
$P$ as described above.

* Even better, write a function that returns an instance of [QuantEcon.jl's](http://quantecon.org/quantecon-jl) MarkovChain type.
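For reference, the rules in the deleted exercise admit a short sketch. This Python version (an editor's illustration; the lectures use Julia, and `scipy.stats.norm` stands in for the normal CDF $F$) follows the three rules above line by line:

```python
import numpy as np
from scipy.stats import norm

def approx_markov(rho, sigma_u, m=3, n=7):
    """Tauchen (1986) discretization of y' = rho*y + u, u ~ N(0, sigma_u^2).

    Returns the state vector x and the n x n transition matrix P,
    following the rules stated in the exercise.
    """
    sigma_y = sigma_u / np.sqrt(1.0 - rho**2)      # stationary std dev
    x = np.linspace(-m * sigma_y, m * sigma_y, n)  # evenly spaced states
    s = x[1] - x[0]                                # step size
    F = norm(scale=sigma_u).cdf
    P = np.empty((n, n))
    for i in range(n):
        P[i, 0] = F(x[0] - rho * x[i] + s / 2)               # rule 1
        P[i, n - 1] = 1.0 - F(x[n - 1] - rho * x[i] - s / 2)  # rule 2
        for j in range(1, n - 1):                             # rule 3
            P[i, j] = (F(x[j] - rho * x[i] + s / 2)
                       - F(x[j] - rho * x[i] - s / 2))
    return x, P

x, P = approx_markov(0.9, 0.1)
```

Because consecutive interior terms telescope and the two endpoint rules absorb the tails, every row of $P$ sums to one by construction.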

## Solutions

### Exercise 1
@@ -1313,10 +1250,3 @@ tags: [remove-cell]
@test ranked_pages['l'] ≈ 0.032017852378295776
end
```

### Exercise 3

A solution from [QuantEcon.jl](https://github.com/QuantEcon/QuantEcon.jl) can be found [here](https://github.com/QuantEcon/QuantEcon.jl/blob/master/src/markov/markov_approx.jl).

[^pm]: Hint: First show that if $P$ and $Q$ are stochastic matrices then so is their product --- to check the row sums, try postmultiplying by a column vector of ones. Finally, argue that $P^n$ is a stochastic matrix using induction.
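The hint in the footnote is easy to check numerically before proving it. A small sketch (illustrative matrices, not from the lecture) postmultiplies a product of stochastic matrices by a column of ones to recover the row sums:

```python
import numpy as np

# Two arbitrary stochastic matrices (nonnegative entries, rows sum to 1)
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
Q = np.array([[0.5, 0.5],
              [0.2, 0.8]])

ones = np.ones(2)
# Postmultiplying by a column of ones gives the row sums; if P 1 = 1
# and Q 1 = 1, then (P Q) 1 = P (Q 1) = P 1 = 1, so P Q is stochastic.
row_sums = (P @ Q) @ ones
```

By induction the same argument shows $P^n \mathbf{1} = \mathbf{1}$ for every $n$, which is the claim in the footnote.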
