diff --git a/ch-approximate.tex b/ch-approximate.tex index c8a240d..53855c0 100644 --- a/ch-approximate.tex +++ b/ch-approximate.tex @@ -138,10 +138,10 @@ \subsection{Complex numbers and limits} It is not hard to show that the algebraic operations are continuous. This is because convergence in -$\R^2$ is the same as convergence for each component. So for example: -write $z_n = x_n + iy_n$ and -$w_n = s_n + it_n$, and suppose that -$\lim \, z_n = z = x+iy$ and $\lim \, w_n = w = s+it$. +$\R^2$ is the same as convergence for each component. For example, +write $z_n = x_n + i\,y_n$ and +$w_n = s_n + i\,t_n$, and suppose that +$\lim \, z_n = z = x+i\,y$ and $\lim \, w_n = w = s+i\,t$. Let us show \begin{equation*} \lim_{n\to\infty} z_n w_n = zw @@ -237,12 +237,12 @@ \subsection{Complex-valued functions} When we deal with complex-valued functions $f \colon X \to \C$, what we often do is to write -$f = u+iv$ for real-valued functions $u \colon X \to \R$ and $v \colon X \to +$f = u+i\,v$ for real-valued functions $u \colon X \to \R$ and $v \colon X \to \R$. -On thing we often wish to do is to integrate -$f \colon [a,b] \to \C$. What we do is to write -$f = u+iv$ for real-valued $u$ and $v$. +Suppose we wish to integrate +$f \colon [a,b] \to \C$. We write +$f = u+i\,v$ for real-valued $u$ and $v$. We say that $f$ is \emph{Riemann integrable}\index{Riemann integrable!complex-valued function} if $u$ and $v$ are Riemann @@ -254,8 +254,8 @@ \subsection{Complex-valued functions} multivariable, etc.). Similarly when we differentiate, write $f \colon [a,b] \to \C$ as -$f = u+iv$. Thinking of $\C$ as $\R^2$ we find that $f$ is differentiable -if $u$ and $v$ are differentiable. For such a function the derivative +$f = u+i\,v$. Thinking of $\C$ as $\R^2$ we say that $f$ is differentiable +if $u$ and $v$ are differentiable. For a function valued in $\R^2$, the derivative was represented by a vector in $\R^2$. Now a vector in $\R^2$ is a complex number. 
In other words we write @@ -2099,7 +2099,7 @@ \subsection{Exercises} \sum_{k=0}^\infty \frac{{(-1)}^k}{2k+1} x^{2k+1} \end{equation*} converges for all $-1 \leq x \leq 1$ (including the end points). -Hint: integrate the finite sum, not the series. +Hint: Integrate the finite sum, not the series. \item Use this to show that \begin{equation*} @@ -2122,7 +2122,7 @@ \section{Fundamental theorem of algebra} In this section we study the local behavior of polynomials and the growth of polynomials as $z$ goes to infinity. As an application -we prove the fundamental theorem of algebra: any polynomial +we prove the fundamental theorem of algebra: Any polynomial has a complex root. \begin{lemma} \label{lemma:polyalwaysgetssmaller} @@ -3746,7 +3746,7 @@ \subsection{Exercises} Suppose $f \colon [0,1] \to \R$ is continuous and $\int_0^1 f(x) x^n ~dx = 0$ for all $n = 0,1,2,\ldots$. Show that $f(x) = 0$ for all $x \in [0,1]$. -Hint: approximate by polynomials to show that $\int_0^1 {\bigl( f(x) +Hint: Approximate by polynomials to show that $\int_0^1 {\bigl( f(x) \bigr)}^2 ~ dx = 0$. \end{exercise} @@ -4915,7 +4915,7 @@ \subsection{Exercises} $f \colon \overline{\D} \to \C$ which is analytic on $\D$ and such that on the boundary of $\D$ we have $f(e^{i\theta}) = \sum_{n=0}^\infty c_n e^{in\theta}$.\\ -Hint: if $z=re^{i\theta}$, then $z^n = r^n e^{in\theta}$. +Hint: If $z=re^{i\theta}$, then $z^n = r^n e^{in\theta}$. \end{exercise} \begin{exercise} diff --git a/ch-contfunc.tex b/ch-contfunc.tex index a7790a8..c828a16 100644 --- a/ch-contfunc.tex +++ b/ch-contfunc.tex @@ -2456,9 +2456,9 @@ \subsection{Limits at infinity} \lim_{n \to \infty} \sin(\pi n) = 0, \qquad \text{but} \qquad \lim_{x \to \infty} \sin(\pi x) ~ \text{does not exist}. \end{equation*} -Of course the notation is ambiguous: are we thinking of the +Of course the notation is ambiguous: Are we thinking of the sequence $\{ \sin (\pi n) \}_{n=1}^\infty$ or the function $\sin(\pi x)$ -of a real variable.
We are simply using the convention +of a real variable? We are simply using the convention that $n \in \N$, while $x \in \R$. When the notation is not clear, it is good to explicitly mention where the variable lives, or what kind of limit you are using. If there is a possibility of confusion, one can diff --git a/ch-metric.tex b/ch-metric.tex index f0082cb..21fb04d 100644 --- a/ch-metric.tex +++ b/ch-metric.tex @@ -2889,12 +2889,12 @@ \subsection{Uniform continuity} d_Y\bigl(f(p),f(q)\bigr) \leq K d_X(p,q) \ \ \ \ \text{for all } p,q \in X. \end{equation*} -A function that is Lipschitz is uniformly continuous: -just take $\delta = \nicefrac{\epsilon}{K}$. +A Lipschitz function is uniformly continuous: +Take $\delta = \nicefrac{\epsilon}{K}$. A function can be uniformly continuous but not Lipschitz, -as we already saw in the case -of functions on the real line. +as we already saw: $\sqrt{x}$ on $[0,1]$ +is uniformly continuous but not Lipschitz. It is worth mentioning that, if a function is Lipschitz, it tends to be diff --git a/ch-multivar-int.tex b/ch-multivar-int.tex index 4846667..be01ad1 100644 --- a/ch-multivar-int.tex +++ b/ch-multivar-int.tex @@ -2219,7 +2219,7 @@ \subsection{The set of Riemann integrable functions} \begin{enumerate}[(i)] \item $\sR(R)$ is a real algebra: -if $f,g \in \sR(R)$ and $a \in \R$, then $af \in \sR(R)$, $f+g \in \sR(R)$ +If $f,g \in \sR(R)$ and $a \in \R$, then $af \in \sR(R)$, $f+g \in \sR(R)$ and $fg \in \sR(R)$. \item If $f,g \in \sR(R)$ and diff --git a/ch-one-dim-ints-sv.tex b/ch-one-dim-ints-sv.tex index ccb8c11..8e6ea96 100644 --- a/ch-one-dim-ints-sv.tex +++ b/ch-one-dim-ints-sv.tex @@ -648,7 +648,7 @@ \subsection{Path integral of a one-form} The notation makes sense from the formula you remember from calculus; let us state it somewhat informally: -if $x_j(t) = \gamma_j(t)$, then $dx_j = \gamma_j^{\:\prime}(t) dt$. +If $x_j(t) = \gamma_j(t)$, then $dx_j = \gamma_j^{\:\prime}(t) dt$. Paths can be cut up or concatenated.
The proof is a direct application of the additivity of the Riemann integral, and is left as an exercise. @@ -778,10 +778,10 @@ \subsection{Path integral of a one-form} \begin{proof} Assume first that $\gamma$ and $h$ are both smooth. -Write $\omega = \omega_1 dx_1 + \omega_2 dx_2 + \cdots + -\omega_n dx_n$. -Suppose that $h$ is orientation preserving. Using -the change of variables formula for the Riemann integral, +Write $\omega = \omega_1 \, dx_1 + \omega_2 \, dx_2 + \cdots + +\omega_n \, dx_n$. +Suppose that $h$ is orientation preserving. Use +the change of variables formula for the Riemann integral: \begin{equation*} \begin{split} \int_{\gamma} \omega @@ -817,8 +817,8 @@ \subsection{Path integral of a one-form} Due to this proposition (and the exercises), if $\Gamma \subset \R^n$ is the image of a simple piecewise smooth path -$\gamma\bigl([a,b]\bigr)$, then if we somehow indicate the orientation, that -is, the direction in which we traverse the curve, then we can write +$\gamma\bigl([a,b]\bigr)$, then as long as we somehow indicate the orientation, that +is, the direction in which we traverse the curve, we can write \begin{equation*} \int_{\Gamma} \omega , \end{equation*} @@ -828,8 +828,8 @@ \subsection{Path integral of a one-form} Recall that \emph{simple} means that $\gamma$ restricted to $(a,b)$ is one-to-one, that is, $\gamma$ is one-to-one except perhaps at the endpoints. -We also often relax the simple path condition a little bit. -For example, as long as +We may relax the condition that the path is simple a little bit. +For example, it is enough to suppose that $\gamma \colon [a,b] \to \R^n$ is one-to-one except at finitely many points. That is, there are only finitely many points $p \in \R^n$ such that $\gamma^{-1}(p)$ is more than one point. See the exercises. The issue about the @@ -1376,7 +1376,7 @@ \subsection{Path independent integrals} \begin{equation*} f(x) := \int_{p}^x \omega . 
\end{equation*} -Write $\omega = \omega_1 ~dx_1 + \omega_2 ~dx_2 + \cdots + \omega_n ~dx_n$. +Write $\omega = \omega_1 \,dx_1 + \omega_2 \,dx_2 + \cdots + \omega_n \,dx_n$. We wish to show that for every $j = 1,2,\ldots,n$, the partial derivative $\frac{\partial f}{\partial x_j}$ exists and is equal to $\omega_j$. @@ -1533,7 +1533,8 @@ \subsection{Path independent integrals} path independence, or in other words an \emph{\myindex{antiderivative}} $f$ whose total derivative is the given one-form -$\omega$. Since the criterion is local, we generally only get the result +$\omega$. Since the criterion is local, we generally only find the +function $f$ locally. We can find the antiderivative in any so-called \emph{\myindex{simply connected}} domain, which informally is a domain where any path between two points can be ``continuously deformed'' @@ -1564,9 +1565,9 @@ \subsection{Path independent integrals} differentiable one-form defined on $U$. That is, if \begin{equation*} \omega = -\omega_1 ~dx_1 + -\omega_2 ~dx_2 + \cdots + -\omega_n ~dx_n , +\omega_1 \,dx_1 + +\omega_2 \,dx_2 + \cdots + +\omega_n \,dx_n , \end{equation*} then $\omega_1,\omega_2,\ldots,\omega_n$ are continuously differentiable functions. Suppose that for every $j$ and $k$ @@ -1725,8 +1726,8 @@ \subsection{Vector fields} \subsection{Exercises} \begin{exercise} -Find an $f \colon \R^2 \to \R$ such that $df = xe^{x^2+y^2} dx + -ye^{x^2+y^2} dy$. +Find an $f \colon \R^2 \to \R$ such that $df = xe^{x^2+y^2}\, dx + +ye^{x^2+y^2} \, dy$. \end{exercise} \begin{exercise} @@ -1812,7 +1813,7 @@ \subsection{Exercises} \begin{exercise}[Hard] Take \begin{equation*} -\omega(x,y) = \frac{-y}{x^2+y^2} dx + \frac{x}{x^2+y^2} dy +\omega(x,y) = \frac{-y}{x^2+y^2} \, dx + \frac{x}{x^2+y^2} \, dy \end{equation*} defined on $\R^2 \setminus \{ (0,0) \}$. Let $\gamma \colon [a,b] \to \R^2 \setminus \{ (0,0) \}$ be a closed piecewise smooth path. 
diff --git a/ch-real-nums.tex b/ch-real-nums.tex index 861720b..f74d0af 100644 --- a/ch-real-nums.tex +++ b/ch-real-nums.tex @@ -855,7 +855,7 @@ \subsection{Maxima and minima} such that $\sup\, A \in A$, then we can use the word \emph{\myindex{maximum}} and the notation $\max\, A$\glsadd{not:max} to denote the supremum. -Similarly for infimum: when a set $A$ is bounded below +Similarly for infimum: When a set $A$ is bounded below and $\inf\, A \in A$, then we can use the word \emph{\myindex{minimum}} and the notation $\min\, A$.\glsadd{not:min} For example, @@ -997,7 +997,7 @@ \subsection{Exercises} \begin{enumerate}[a)] \item Show that the power is well-defined even -if the fraction is not in lowest terms: if $\nicefrac{p}{q} = +if the fraction is not in lowest terms: If $\nicefrac{p}{q} = \nicefrac{m}{k}$ where $m$ and $k > 0$ are integers, then ${(x^{1/q})}^p = {(x^{1/k})}^m$. \item diff --git a/ch-several-vars-ders.tex b/ch-several-vars-ders.tex index eb3038e..973c53b 100644 --- a/ch-several-vars-ders.tex +++ b/ch-several-vars-ders.tex @@ -124,7 +124,7 @@ \subsection{Vector spaces} \begin{example} An example vector space is $\R^n$, where addition and multiplication by a scalar are done componentwise: -if $a \in \R$, $v = (v_1,v_2,\ldots,v_n) \in \R^n$, and $w = +If $a \in \R$, $v = (v_1,v_2,\ldots,v_n) \in \R^n$, and $w = (w_1,w_2,\ldots,w_n) \in \R^n$, then \begin{align*} & v+w := @@ -594,7 +594,7 @@ \subsection{Linear mappings} admits a product (composition of operators), it is often called an \emph{\myindex{algebra}}. -An immediate consequence of the definition of a linear mapping is: if +An immediate consequence of the definition of a linear mapping is: If $A$ is linear, then $A0 = 0$. \begin{prop} @@ -1770,8 +1770,8 @@ \subsection{Determinants} basis. If we compute the determinant of the matrix $A$, we obtain -the same determinant if we had used any other basis: -in the other basis the matrix would be $B^{-1}AB$.
+the same determinant if we use any other basis: +In the other basis the matrix would be $B^{-1}AB$. It follows that \begin{equation*} \det \colon L(X) \to \R @@ -1846,7 +1846,7 @@ \subsection{Determinants} The proof is left as an exercise. The proposition says that we can compute the determinant by doing elementary row operations. For computing the determinant one doesn't have to factor the matrix into a product of -elementary matrices completely: usually one would only do row operations +elementary matrices completely. Usually one would only do row operations until one finds an \emph{\myindex{upper triangular matrix}}, that is, a matrix $[a_{i,j}]$ where $a_{i,j} = 0$ if $i > j$. Computing its determinant is not difficult (exercise). @@ -2091,7 +2091,7 @@ \subsection{Exercises} \begin{enumerate}[a)] \item Prove that the estimate $\snorm{A-B} < \frac{1}{\snorm{A^{-1}}}$ is the -best possible in the following sense: for any invertible +best possible in the following sense: For any invertible $A \in GL(\R^n)$ find a $B$ where equality is satisfied and $B$ is not invertible.
\item diff --git a/realanal.tex b/realanal.tex index 2e25aa4..25da803 100644 --- a/realanal.tex +++ b/realanal.tex @@ -341,8 +341,8 @@ by Ji{\v r}\'i Lebl\\[3ex]} \today \\ -(version 5.1) -% --- 5th edition, 1st update) +(version 5.2) +% --- 5th edition, 2nd update) \end{minipage}} %\addtolength{\textwidth}{\centeroffset} @@ -360,7 +360,7 @@ \bigskip \noindent -Copyright \copyright 2009--2018 Ji{\v r}\'i Lebl +Copyright \copyright 2009--2019 Ji{\v r}\'i Lebl %PRINT % not for lulu diff --git a/realanal2.tex b/realanal2.tex index 8ed358c..3499abf 100644 --- a/realanal2.tex +++ b/realanal2.tex @@ -381,8 +381,8 @@ by Ji{\v r}\'i Lebl\\[3ex]} \today \\ -(version 2.1) -% --- 2nd edition, 1st update) +(version 2.2) +% --- 2nd edition, 2nd update) \end{minipage}} %\addtolength{\textwidth}{\centeroffset} @@ -400,7 +400,7 @@ \bigskip \noindent -Copyright \copyright 2012--2018 Ji{\v r}\'i Lebl +Copyright \copyright 2012--2019 Ji{\v r}\'i Lebl %PRINT % not for lulu
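As a side note on the patched hard exercise in ch-one-dim-ints-sv.tex: the path integral of the one-form $\omega = \frac{-y}{x^2+y^2}\,dx + \frac{x}{x^2+y^2}\,dy$ around the unit circle can be sanity-checked numerically. This is a minimal sketch, not part of the book's sources; the function names, the midpoint-rule quadrature, and the step count are all my own choices:

```python
import math

def path_integral(omega, gamma, dgamma, a, b, n=10000):
    # Midpoint-rule approximation of the path integral of a one-form:
    # integral over [a, b] of omega(gamma(t)) . gamma'(t) dt.
    h = (b - a) / n
    total = 0.0
    for k in range(n):
        t = a + (k + 0.5) * h
        w1, w2 = omega(*gamma(t))     # components of omega at gamma(t)
        dx, dy = dgamma(t)            # components of gamma'(t)
        total += (w1 * dx + w2 * dy) * h
    return total

# The one-form from the exercise, defined on R^2 minus the origin.
def omega(x, y):
    r2 = x * x + y * y
    return (-y / r2, x / r2)

# Unit circle traversed once counterclockwise.
gamma = lambda t: (math.cos(t), math.sin(t))
dgamma = lambda t: (-math.sin(t), math.cos(t))

result = path_integral(omega, gamma, dgamma, 0.0, 2.0 * math.pi)
print(result)  # approximately 2*pi
```

The integrand along the unit circle is identically $1$, so the quadrature recovers $2\pi$ essentially exactly; the nonzero value around a closed path is what shows this closed one-form has no antiderivative on all of $\R^2 \setminus \{(0,0)\}$.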