Raise version, a couple of very minor grammar / style issues
jirilebl committed May 15, 2019
1 parent 87e31fa commit 4d9d64c
Showing 9 changed files with 53 additions and 52 deletions.
28 changes: 14 additions & 14 deletions ch-approximate.tex
@@ -138,10 +138,10 @@ \subsection{Complex numbers and limits}

It is not hard to show that the algebraic operations are
continuous. This is because convergence in
-$\R^2$ is the same as convergence for each component. So for example:
-write $z_n = x_n + iy_n$ and
-$w_n = s_n + it_n$, and suppose that
-$\lim \, z_n = z = x+iy$ and $\lim \, w_n = w = s+it$.
+$\R^2$ is the same as convergence for each component. For example,
+write $z_n = x_n + i\,y_n$ and
+$w_n = s_n + i\,t_n$, and suppose that
+$\lim \, z_n = z = x+i\,y$ and $\lim \, w_n = w = s+i\,t$.
Let us show
\begin{equation*}
\lim_{n\to\infty} z_n w_n = zw
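A short sketch of the estimate that completes this claim (an editorial illustration, not part of the commit), using only the triangle inequality and the fact that the convergent sequence $\{ z_n \}$ is bounded, say $\lvert z_n \rvert \leq M$ for all $n$:
\begin{equation*}
\lvert z_n w_n - zw \rvert
\leq \lvert z_n \rvert \, \lvert w_n - w \rvert + \lvert w \rvert \, \lvert z_n - z \rvert
\leq M \lvert w_n - w \rvert + \lvert w \rvert \, \lvert z_n - z \rvert ,
\end{equation*}
and both terms on the right go to $0$ as $n \to \infty$.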
@@ -237,12 +237,12 @@ \subsection{Complex-valued functions}

When we deal with complex-valued functions
$f \colon X \to \C$, what we often do is to write
-$f = u+iv$ for real-valued functions $u \colon X \to \R$ and $v \colon X \to
+$f = u+i\,v$ for real-valued functions $u \colon X \to \R$ and $v \colon X \to
\R$.

-On thing we often wish to do is to integrate
-$f \colon [a,b] \to \C$. What we do is to write
-$f = u+iv$ for real-valued $u$ and $v$.
+Suppose we wish to integrate
+$f \colon [a,b] \to \C$. We write
+$f = u+i\,v$ for real-valued $u$ and $v$.
We say that $f$ is \emph{Riemann integrable}\index{Riemann
integrable!complex-valued function}
if $u$ and $v$ are Riemann
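For a quick instance of the definition being set up here, namely $\int_a^b f = \int_a^b u + i \int_a^b v$ (an editorial example, not part of the commit): with $f(x) = x + i\,x^2$ on $[0,1]$,
\begin{equation*}
\int_0^1 f(x)\,dx = \int_0^1 x\,dx + i \int_0^1 x^2\,dx = \frac{1}{2} + \frac{i}{3} .
\end{equation*}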
@@ -254,8 +254,8 @@ \subsection{Complex-valued functions}
multivariable, etc.).

Similarly when we differentiate, write $f \colon [a,b] \to \C$ as
-$f = u+iv$. Thinking of $\C$ as $\R^2$ we find that $f$ is differentiable
-if $u$ and $v$ are differentiable. For such a function the derivative
+$f = u+i\,v$. Thinking of $\C$ as $\R^2$ we say that $f$ is differentiable
+if $u$ and $v$ are differentiable. For a function valued in $\R^2$, the derivative
was represented by a vector in $\R^2$. Now a vector in $\R^2$ is a complex
number. In other words
we write
@@ -2099,7 +2099,7 @@ \subsection{Exercises}
\sum_{k=0}^\infty \frac{{(-1)}^k}{2k+1} x^{2k+1}
\end{equation*}
converges for all $-1 \leq x \leq 1$ (including the end points).
-Hint: integrate the finite sum, not the series.
+Hint: Integrate the finite sum, not the series.
\item
Use this to show that
\begin{equation*}
@@ -2122,7 +2122,7 @@ \section{Fundamental theorem of algebra}

In this section we study the local behavior of polynomials
and the growth of polynomials as $z$ goes to infinity. As an application
-we prove the fundamental theorem of algebra: any polynomial
+we prove the fundamental theorem of algebra: Any polynomial
has a complex root.

\begin{lemma} \label{lemma:polyalwaysgetssmaller}
@@ -3746,7 +3746,7 @@ \subsection{Exercises}
Suppose $f \colon [0,1] \to \R$ is continuous and
$\int_0^1 f(x) x^n ~dx = 0$ for all $n = 0,1,2,\ldots$.
Show that $f(x) = 0$ for all $x \in [0,1]$.
-Hint: approximate by polynomials to show that $\int_0^1 {\bigl( f(x)
+Hint: Approximate by polynomials to show that $\int_0^1 {\bigl( f(x)
\bigr)}^2 ~ dx = 0$.
\end{exercise}

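A sketch of how the hint is typically carried out (an editorial aside, assuming the Weierstrass approximation theorem): for any polynomial $p$, linearity and the hypothesis give $\int_0^1 f(x) p(x)\,dx = 0$, hence
\begin{equation*}
\int_0^1 {\bigl( f(x) \bigr)}^2 \, dx
= \int_0^1 f(x) \bigl( f(x) - p(x) \bigr) \, dx
\leq \Bigl( \sup_{x \in [0,1]} \lvert f(x) - p(x) \rvert \Bigr) \int_0^1 \lvert f(x) \rvert \, dx .
\end{equation*}
Choosing $p$ uniformly close to $f$ makes the right-hand side arbitrarily small, so $\int_0^1 {\bigl( f(x) \bigr)}^2\,dx = 0$, and continuity then forces $f \equiv 0$.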
@@ -4915,7 +4915,7 @@ \subsection{Exercises}
$f \colon \overline{\D} \to \C$ which is analytic on $\D$
and such that on the boundary of $\D$ we have
$f(e^{i\theta}) = \sum_{n=0}^\infty c_n e^{i\theta}$.\\
-Hint: if $z=re^{i\theta}$, then $z^n = r^n e^{in\theta}$.
+Hint: If $z=re^{i\theta}$, then $z^n = r^n e^{in\theta}$.
\end{exercise}

\begin{exercise}
4 changes: 2 additions & 2 deletions ch-contfunc.tex
@@ -2456,9 +2456,9 @@ \subsection{Limits at infinity}
\lim_{n \to \infty} \sin(\pi n) = 0, \qquad \text{but} \qquad
\lim_{x \to \infty} \sin(\pi x) ~ \text{does not exist}.
\end{equation*}
-Of course the notation is ambiguous: are we thinking of the
+Of course the notation is ambiguous: Are we thinking of the
sequence $\{ \sin (\pi n) \}_{n=1}^\infty$ or the function $\sin(\pi x)$
-of a real variable. We are simply using the convention
+of a real variable? We are simply using the convention
that $n \in \N$, while $x \in \R$. When the notation is not clear,
it is good to explicitly mention where the variable lives, or what kind
of limit are you using. If there is possibility of confusion, one can
8 changes: 4 additions & 4 deletions ch-metric.tex
@@ -2889,12 +2889,12 @@ \subsection{Uniform continuity}
d_Y\bigl(f(p),f(q)\bigr) \leq K d_X(p,q)
\ \ \ \ \text{for all } p,q \in X.
\end{equation*}
-A function that is Lipschitz is uniformly continuous:
-just take $\delta = \nicefrac{\epsilon}{K}$.
+A Lipschitz function is uniformly continuous:
+Take $\delta = \nicefrac{\epsilon}{K}$.
A function can be uniformly continuous
but not Lipschitz,
-as we already saw in the case
-of functions on the real line.
+as we already saw: $\sqrt{x}$ on $[0,1]$
+is uniformly continuous but not Lipschitz.

It is worth mentioning that,
if a function is Lipschitz, it tends to be
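To spell out the example the new text cites (an editorial sketch, not part of the commit): for $0 \leq y \leq x \leq 1$ one has
\begin{equation*}
\lvert \sqrt{x} - \sqrt{y} \rvert \leq \sqrt{x - y} ,
\end{equation*}
so $\delta = \epsilon^2$ witnesses uniform continuity, while the difference quotient $\frac{\sqrt{x} - \sqrt{0}}{x - 0} = \frac{1}{\sqrt{x}}$ is unbounded as $x \to 0^+$, so no Lipschitz constant $K$ can work.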
2 changes: 1 addition & 1 deletion ch-multivar-int.tex
@@ -2219,7 +2219,7 @@ \subsection{The set of Riemann integrable functions}
\begin{enumerate}[(i)]
\item
$\sR(R)$ is a real algebra:
-if $f,g \in \sR(R)$ and $a \in \R$, then $af \in \sR(R)$, $f+g \in \sR(R)$
+If $f,g \in \sR(R)$ and $a \in \R$, then $af \in \sR(R)$, $f+g \in \sR(R)$
and $fg \in \sR(R)$.
\item
If $f,g \in \sR(R)$ and
35 changes: 18 additions & 17 deletions ch-one-dim-ints-sv.tex
@@ -648,7 +648,7 @@ \subsection{Path integral of a one-form}

The notation makes sense from the formula you remember from calculus,
let us state it somewhat informally:
-if $x_j(t) = \gamma_j(t)$, then $dx_j = \gamma_j^{\:\prime}(t) dt$.
+If $x_j(t) = \gamma_j(t)$, then $dx_j = \gamma_j^{\:\prime}(t) dt$.

Paths can be cut up or concatenated. The proof is a direct application
of the additivity of the Riemann integral, and is left as an exercise.
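A concrete instance of the informal formula above (an editorial example, not part of the commit): take $\omega = x\,dy$ and $\gamma(t) = (t, t^2)$ for $t \in [0,1]$, so that $dy = \gamma_2^{\:\prime}(t)\,dt = 2t\,dt$ and
\begin{equation*}
\int_\gamma x\,dy = \int_0^1 t \, (2t) \, dt = \frac{2}{3} .
\end{equation*}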
@@ -778,10 +778,10 @@ \subsection{Path integral of a one-form}

\begin{proof}
Assume first that $\gamma$ and $h$ are both smooth.
-Write $\omega = \omega_1 dx_1 + \omega_2 dx_2 + \cdots +
-\omega_n dx_n$.
-Suppose that $h$ is orientation preserving. Using
-the change of variables formula for the Riemann integral,
+Write $\omega = \omega_1 \, dx_1 + \omega_2 \, dx_2 + \cdots +
+\omega_n \, dx_n$.
+Suppose that $h$ is orientation preserving. Use
+the change of variables formula for the Riemann integral:
\begin{equation*}
\begin{split}
\int_{\gamma} \omega
@@ -817,8 +817,8 @@ \subsection{Path integral of a one-form}

Due to this proposition (and the exercises), if $\Gamma
\subset \R^n$ is the image of a simple piecewise smooth path
-$\gamma\bigl([a,b]\bigr)$, then if we somehow indicate the orientation, that
-is, the direction in which we traverse the curve, then we can write
+$\gamma\bigl([a,b]\bigr)$, then as long as we somehow indicate the orientation, that
+is, the direction in which we traverse the curve, we can write
\begin{equation*}
\int_{\Gamma} \omega ,
\end{equation*}
@@ -828,8 +828,8 @@ \subsection{Path integral of a one-form}

Recall that \emph{simple} means that $\gamma$ restricted to $(a,b)$ is
one-to-one, that is, $\gamma$ is one-to-one except perhaps at the endpoints.
-We also often relax the simple path condition a little bit.
-For example, as long as
+We may relax the condition that the path is simple a little bit.
+For example, it is enough to suppose that
$\gamma \colon [a,b] \to \R^n$ is one-to-one except at finitely many points. That
is, there are only finitely many points $p \in \R^n$ such that
$\gamma^{-1}(p)$ is more than one point. See the exercises. The issue about the
@@ -1376,7 +1376,7 @@ \subsection{Path independent integrals}
\begin{equation*}
f(x) := \int_{p}^x \omega .
\end{equation*}
-Write $\omega = \omega_1 ~dx_1 + \omega_2 ~dx_2 + \cdots + \omega_n ~dx_n$.
+Write $\omega = \omega_1 \,dx_1 + \omega_2 \,dx_2 + \cdots + \omega_n \,dx_n$.
We wish to show that for every $j = 1,2,\ldots,n$, the
partial derivative $\frac{\partial f}{\partial x_j}$ exists
and is equal to $\omega_j$.
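A sketch of the computation that usually proves this (editorial, assuming $h$ is small enough that the segment from $x$ to $x + h e_j$ stays in the domain, where $e_j$ is the $j$th standard basis vector): parametrizing that segment and using path independence,
\begin{equation*}
\frac{f(x + h e_j) - f(x)}{h}
= \frac{1}{h} \int_0^h \omega_j ( x + t e_j ) \, dt
\longrightarrow \omega_j(x)
\qquad \text{as } h \to 0 ,
\end{equation*}
by continuity of $\omega_j$.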
@@ -1533,7 +1533,8 @@ \subsection{Path independent integrals}
path independence, or in other words an
\emph{\myindex{antiderivative}}
$f$ whose total derivative is the given one-form
-$\omega$. Since the criterion is local, we generally only get the result
+$\omega$. Since the criterion is local, we generally only find the
+function $f$
locally. We can find the antiderivative in any so-called
\emph{\myindex{simply connected}} domain, which informally is a domain where
any path between two points can be ``continuously deformed''
@@ -1564,9 +1565,9 @@ \subsection{Path independent integrals}
differentiable one-form defined on $U$. That is, if
\begin{equation*}
\omega =
-\omega_1 ~dx_1 +
-\omega_2 ~dx_2 + \cdots +
-\omega_n ~dx_n ,
+\omega_1 \,dx_1 +
+\omega_2 \,dx_2 + \cdots +
+\omega_n \,dx_n ,
\end{equation*}
then $\omega_1,\omega_2,\ldots,\omega_n$ are continuously differentiable
functions. Suppose that for every $j$ and $k$
@@ -1725,8 +1726,8 @@ \subsection{Vector fields}
\subsection{Exercises}

\begin{exercise}
-Find an $f \colon \R^2 \to \R$ such that $df = xe^{x^2+y^2} dx +
-ye^{x^2+y^2} dy$.
+Find an $f \colon \R^2 \to \R$ such that $df = xe^{x^2+y^2}\, dx +
+ye^{x^2+y^2} \, dy$.
\end{exercise}

\begin{exercise}
@@ -1812,7 +1813,7 @@ \subsection{Exercises}
\begin{exercise}[Hard]
Take
\begin{equation*}
-\omega(x,y) = \frac{-y}{x^2+y^2} dx + \frac{x}{x^2+y^2} dy
+\omega(x,y) = \frac{-y}{x^2+y^2} \, dx + \frac{x}{x^2+y^2} \, dy
\end{equation*}
defined on $\R^2 \setminus \{ (0,0) \}$. Let $\gamma \colon [a,b] \to \R^2
\setminus \{ (0,0) \}$ be a closed piecewise smooth path.
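It may help to first compute one instance by hand (an editorial sketch, not part of the commit): on the unit circle $\gamma(t) = (\cos t, \sin t)$, $t \in [0,2\pi]$, we have $x^2 + y^2 = 1$ along the path, so
\begin{equation*}
\int_\gamma \omega
= \int_0^{2\pi} \bigl( (-\sin t)(-\sin t) + (\cos t)(\cos t) \bigr) \, dt
= \int_0^{2\pi} dt
= 2\pi .
\end{equation*}
In particular, the integral over a closed path need not vanish, so $\omega$ has no antiderivative on all of $\R^2 \setminus \{ (0,0) \}$.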
4 changes: 2 additions & 2 deletions ch-real-nums.tex
@@ -855,7 +855,7 @@ \subsection{Maxima and minima}
such that
$\sup\, A \in A$, then we can use the word
\emph{\myindex{maximum}} and the notation $\max\, A$\glsadd{not:max} to denote the supremum.
-Similarly for infimum: when a set $A$ is bounded below
+Similarly for infimum: When a set $A$ is bounded below
and $\inf\, A \in A$, then we can use the
word \emph{\myindex{minimum}} and the notation $\min\, A$.\glsadd{not:min}
For example,
@@ -997,7 +997,7 @@ \subsection{Exercises}
\begin{enumerate}[a)]
\item
Show that the power is well-defined even
-if the fraction is not in lowest terms: if $\nicefrac{p}{q} =
+if the fraction is not in lowest terms: If $\nicefrac{p}{q} =
\nicefrac{m}{k}$ where $m$ and $k > 0$ are integers, then
${(x^{1/q})}^p = {(x^{1/m})}^k$.
\item
12 changes: 6 additions & 6 deletions ch-several-vars-ders.tex
@@ -124,7 +124,7 @@ \subsection{Vector spaces}
\begin{example}
An example vector space is $\R^n$, where addition
and multiplication by a scalar is done componentwise:
-if $a \in \R$, $v = (v_1,v_2,\ldots,v_n) \in \R^n$, and $w =
+If $a \in \R$, $v = (v_1,v_2,\ldots,v_n) \in \R^n$, and $w =
(w_1,w_2,\ldots,w_n) \in \R^n$, then
\begin{align*}
& v+w :=
@@ -594,7 +594,7 @@ \subsection{Linear mappings}
admits a product (composition of operators),
it is often called an \emph{\myindex{algebra}}.

-An immediate consequence of the definition of a linear mapping is: if
+An immediate consequence of the definition of a linear mapping is: If
$A$ is linear, then $A0 = 0$.

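The one-line verification of this consequence (an editorial note, not part of the commit): by additivity,
\begin{equation*}
A0 = A(0 + 0) = A0 + A0 ,
\end{equation*}
and subtracting $A0$ from both sides gives $A0 = 0$.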
\begin{prop}
@@ -1770,8 +1770,8 @@ \subsection{Determinants}
basis.

If we compute the determinant of the matrix $A$, we obtain
-the same determinant if we had used any other basis:
-in the other basis the matrix would be $B^{-1}AB$.
+the same determinant if we use any other basis:
+In the other basis the matrix would be $B^{-1}AB$.
It follows that
\begin{equation*}
\det \colon L(X) \to \R
@@ -1846,7 +1846,7 @@ \subsection{Determinants}
The proof is left as an exercise. The proposition says that we can compute
the determinant by doing elementary row operations. For computing the
determinant one doesn't have to factor the matrix into a product of
-elementary matrices completely: usually one would only do row operations
+elementary matrices completely. Usually one would only do row operations
until we find an \emph{\myindex{upper triangular matrix}}, that is a matrix
$[a_{i,j}]$ where $a_{i,j} = 0$ if $i > j$. Computing their determinant is
not difficult (exercise).
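A small editorial example of the procedure described here (not part of the commit): one row operation, subtracting twice the first row from the second, which does not change the determinant, already yields an upper triangular matrix, whose determinant is the product of its diagonal entries:
\begin{equation*}
\det \begin{bmatrix} 2 & 1 \\ 4 & 5 \end{bmatrix}
= \det \begin{bmatrix} 2 & 1 \\ 0 & 3 \end{bmatrix}
= 2 \cdot 3 = 6 .
\end{equation*}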
@@ -2091,7 +2091,7 @@ \subsection{Exercises}
\begin{enumerate}[a)]
\item
Prove that the estimate $\snorm{A-B} < \frac{1}{\snorm{A^{-1}}}$ is the
-best possible in the following sense: for any invertible
+best possible in the following sense: For any invertible
$A \in GL(\R^n)$ find a $B$ where equality is satisfied and $B$ is
not invertible.
\item
6 changes: 3 additions & 3 deletions realanal.tex
@@ -341,8 +341,8 @@
by Ji{\v r}\'i Lebl\\[3ex]}
\today
\\
-(version 5.1)
-% --- 5th edition, 1st update)
+(version 5.2)
+% --- 5th edition, 2nd update)
\end{minipage}}

%\addtolength{\textwidth}{\centeroffset}
@@ -360,7 +360,7 @@
\bigskip

\noindent
-Copyright \copyright 2009--2018 Ji{\v r}\'i Lebl
+Copyright \copyright 2009--2019 Ji{\v r}\'i Lebl

%PRINT
% not for lulu
6 changes: 3 additions & 3 deletions realanal2.tex
@@ -381,8 +381,8 @@
by Ji{\v r}\'i Lebl\\[3ex]}
\today
\\
-(version 2.1)
-% --- 2nd edition, 1st update)
+(version 2.2)
+% --- 2nd edition, 2nd update)
\end{minipage}}

%\addtolength{\textwidth}{\centeroffset}
@@ -400,7 +400,7 @@
\bigskip

\noindent
-Copyright \copyright 2012--2018 Ji{\v r}\'i Lebl
+Copyright \copyright 2012--2019 Ji{\v r}\'i Lebl

%PRINT
% not for lulu
