\chapter{Fourier series and PDEs} \label{FS:chapter}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Boundary value problems} \label{bvp:section}
%mbxINTROSUBSECTION
\sectionnotes{2 lectures\EPref{, similar to \S3.8 in \cite{EP}}\BDref{,
\S10.1 and \S11.1 in \cite{BD}}}
\subsection{Boundary value problems}
Before we tackle the Fourier series, we study
the so-called
\emph{boundary value problems\index{boundary value problem}}
(or \emph{endpoint problems\index{endpoint problem}}). Consider
\begin{equation*}
x'' + \lambda x = 0, \quad x(a) = 0, \quad x(b) = 0,
\end{equation*}
for some constant $\lambda$, where $x(t)$ is defined for $t$ in the interval
$[a,b]$.
Previously we specified the value of the solution and its derivative
at a single point. Now we specify the value of the solution at two different
points. As $x=0$ is a solution, existence of
solutions is not a problem. Uniqueness of solutions is another issue.
The general solution to $x'' + \lambda x = 0$ has two
arbitrary constants\footnote{%
See \subsectionvref{subsection:fourfundamental} or \examplevref{example:expsecondorder} and
\examplevref{example:sincossecondorder}.}.
It is, therefore,
natural (but wrong) to believe that requiring two
conditions guarantees a unique solution.
\begin{example}
Take $\lambda = 1$,
$a=0$, $b=\pi$. That is,
\begin{equation*}
x'' + x = 0, \quad x(0) = 0, \quad x(\pi) = 0.
\end{equation*}
Then $x = \sin t$ is another solution (besides $x=0$) satisfying both boundary
conditions. There are more solutions still. The general
solution of the differential equation is $x= A \cos t + B \sin t$.
The condition $x(0) = 0$ forces $A=0$. Letting $x(\pi) = 0$ does not
give us any more information as $x = B \sin t$ already satisfies both
boundary conditions.
Hence, there are infinitely many solutions of the form $x = B \sin t$,
where $B$ is an arbitrary constant.
\end{example}
\begin{example}
On the other hand, consider $\lambda = 2$. That is,
\begin{equation*}
x'' + 2 x = 0, \quad x(0) = 0, \quad x(\pi) = 0.
\end{equation*}
Then the general solution is
$x= A \cos ( \sqrt{2}\,t) + B \sin ( \sqrt{2}\,t)$. Letting $x(0) = 0$ still
forces $A = 0$. We apply the second condition to find
$0=x(\pi) = B \sin ( \sqrt{2}\,\pi)$.
As $\sin ( \sqrt{2}\,\pi) \not= 0$ we obtain
$B = 0$. Therefore $x=0$ is the unique solution to this problem.
\end{example}
What is going on? We will be interested in finding which
constants $\lambda$ allow a nonzero solution, and we will be interested in
finding those solutions. This problem is an analogue of finding
eigenvalues and eigenvectors of matrices.
\subsection{Eigenvalue problems}
For basic Fourier series theory we will need
the following three eigenvalue problems.
We will consider more general equations and boundary conditions,
but we will postpone this until
\chapterref{SL:chapter}.
\begin{equation} \label{bv:eq1}
x'' + \lambda x = 0, \quad x(a) = 0, \quad x(b) = 0 ,
\end{equation}
\begin{equation} \label{bv:eq2}
x'' + \lambda x = 0, \quad x'(a) = 0, \quad x'(b) = 0 ,
\end{equation}
and
\begin{equation} \label{bv:eq3}
x'' + \lambda x = 0, \quad x(a) = x(b), \quad x'(a) = x'(b) .
\end{equation}
A number $\lambda$ is called an
\emph{eigenvalue\index{eigenvalue of a boundary value problem}}
of \eqref{bv:eq1}
(resp.\ \eqref{bv:eq2} or \eqref{bv:eq3}) if and only if
there exists a nonzero (not identically zero) solution to \eqref{bv:eq1}
(resp.\ \eqref{bv:eq2} or \eqref{bv:eq3})
given that specific $\lambda$. A
nonzero solution is called a corresponding
\emph{\myindex{eigenfunction}}\index{corresponding eigenfunction}.
Note the similarity to eigenvalues and eigenvectors of matrices. The
similarity is not just coincidental. If we think of the equations as
differential operators, then we are doing exactly the same thing.
Think of a function $x(t)$
as a vector with infinitely many components (one for each $t$).
Let $L = -\frac{d^2}{{dt}^2}$ be the linear operator.
Then the eigenvalue/eigenfunction pair should be $\lambda$ and
nonzero $x$ such that $Lx = \lambda x$.
In other words,
we are looking for nonzero functions $x$
satisfying certain endpoint conditions that solve
$(L- \lambda)x = 0$. A lot of the formalism from linear algebra still
applies here, though we will not pursue this line of reasoning too far.
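To see the analogy concretely, we can discretize the operator. Here is a
minimal numerical sketch, assuming Python with the \texttt{numpy} library
available: we replace $L = -\frac{d^2}{{dt}^2}$ on $[0,\pi]$ with the standard
second-difference matrix for the conditions $x(0) = x(\pi) = 0$, and the
smallest eigenvalues of the matrix approximate the eigenvalues $k^2$ computed
in the next example.
\begin{verbatim}
# Discretize L = -d^2/dt^2 on (0, pi) with x(0) = x(pi) = 0.
# The matrix eigenvalues approximate the eigenvalues k^2 of the
# boundary value problem.
import numpy as np

n = 200
h = np.pi / (n + 1)   # grid spacing
# Standard second-difference matrix for -x'' with Dirichlet conditions.
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
eigenvalues = np.sort(np.linalg.eigvalsh(A))
print(eigenvalues[:4])   # approximately 1, 4, 9, 16
\end{verbatim}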
\begin{example} \label{bvp:eig1ex}
Let us find the eigenvalues and eigenfunctions of
\begin{equation*}
x'' + \lambda x = 0, \quad x(0) = 0, \quad x(\pi) = 0 .
\end{equation*}
%For reasons that will be clear from the computations,
We have to handle
the cases $\lambda > 0$, $\lambda = 0$, $\lambda < 0$ separately.
First suppose that $\lambda > 0$. Then
the general solution to $x''+\lambda x = 0$ is
\begin{equation*}
x = A \cos ( \sqrt{\lambda}\, t) + B \sin ( \sqrt{\lambda}\, t).
\end{equation*}
The condition $x(0) = 0$ implies immediately $A = 0$.
Next
\begin{equation*}
0 = x(\pi) = B \sin ( \sqrt{\lambda}\, \pi ) .
\end{equation*}
If $B$ is zero, then $x$ is not a nonzero solution. So to get a nonzero
solution we must have that $\sin ( \sqrt{\lambda}\, \pi) = 0$. Hence,
$\sqrt{\lambda}\, \pi$ must be an integer multiple of $\pi$. In other words,
$\sqrt{\lambda} = k$ for a positive integer $k$.
Hence the positive eigenvalues are
$k^2$ for all integers $k \geq 1$. Corresponding eigenfunctions
can be taken as $x=\sin (k t)$. Just like for eigenvectors, constant
multiples of an eigenfunction are also eigenfunctions,
so we only need to pick one.
Now suppose that $\lambda = 0$. In this case the equation is $x'' = 0$,
and its general solution is $x = At + B$. The condition $x(0) = 0$ implies
that $B=0$, and $x(\pi) = 0$ implies that $A = 0$. This means that $\lambda
= 0$ is \emph{not} an eigenvalue.
Finally, suppose that $\lambda < 0$. In this case we have the general
solution\footnote{Recall that
$\cosh s = \frac{1}{2}(e^s+e^{-s})$
and
$\sinh s = \frac{1}{2}(e^s-e^{-s})$. As an exercise
try the computation with the general solution written as
$x = A e^{\sqrt{-\lambda}\, t} + B e^{-\sqrt{-\lambda}\, t}$ (for
different $A$ and $B$ of course).}
\begin{equation*}
x = A \cosh ( \sqrt{-\lambda}\, t) + B \sinh ( \sqrt{-\lambda}\, t ) .
\end{equation*}
Letting $x(0) = 0$ implies that $A = 0$ (recall $\cosh 0 = 1$ and $\sinh 0 =
0$). So our solution must be $x = B \sinh ( \sqrt{-\lambda}\, t )$ and satisfy
$x(\pi) = 0$. This is only possible if $B$ is zero. Why? Because
$\sinh \xi$ is only zero when $\xi=0$. You should plot $\sinh$ to see this
fact.
We can also see this from the definition of $\sinh$.
We get $0 = \sinh \xi = \frac{e^\xi -
e^{-\xi}}{2}$. Hence $e^\xi = e^{-\xi}$, which implies $\xi = -\xi$ and that is only
true if $\xi=0$. So there are no negative eigenvalues.
In summary, the eigenvalues and corresponding eigenfunctions are
\begin{equation*}
\lambda_k = k^2 \qquad \text{with an eigenfunction} \qquad x_k = \sin (k t)
\qquad \text{for all integers } k \geq 1 .
\end{equation*}
\end{example}
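Computations like the one above are easy to double-check with a computer
algebra system. Here is a minimal sketch, assuming Python with the
\texttt{sympy} library installed, that verifies symbolically that
$x = \sin(kt)$ satisfies the equation and both boundary conditions:
\begin{verbatim}
# Verify that x = sin(k t) satisfies x'' + k^2 x = 0
# together with x(0) = 0 and x(pi) = 0.
import sympy as sp

t = sp.symbols('t')
for k in range(1, 5):
    x = sp.sin(k * t)
    residual = sp.simplify(sp.diff(x, t, 2) + k**2 * x)
    assert residual == 0          # the ODE holds
    assert x.subs(t, 0) == 0      # x(0) = 0
    assert x.subs(t, sp.pi) == 0  # x(pi) = 0
print("sin(k t) is an eigenfunction for lambda = k^2, k = 1,...,4")
\end{verbatim}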
\begin{example}
Let us compute the
eigenvalues and eigenfunctions of
\begin{equation*}
x'' + \lambda x = 0, \quad x'(0) = 0, \quad x'(\pi) = 0 .
\end{equation*}
Again we have to handle the cases $\lambda > 0$, $\lambda = 0$, $\lambda
< 0$ separately.
First suppose that $\lambda > 0$.
The general solution to $x''+\lambda x = 0$ is
$x = A \cos ( \sqrt{\lambda}\, t) + B \sin ( \sqrt{\lambda}\, t)$. So
\begin{equation*}
x' = -A\sqrt{\lambda}\, \sin ( \sqrt{\lambda}\, t) + B\sqrt{\lambda}\,
\cos (\sqrt{\lambda}\, t) .
\end{equation*}
The condition $x'(0) = 0$ implies immediately $B = 0$.
Next
\begin{equation*}
0 = x'(\pi) = -A\sqrt{\lambda}\, \sin ( \sqrt{\lambda}\, \pi) .
\end{equation*}
Again $A$ cannot be zero if $\lambda$ is to be an eigenvalue,
and $\sin ( \sqrt{\lambda}\, \pi)$ is only zero
if
$\sqrt{\lambda} = k$ for a positive integer $k$.
Hence the positive eigenvalues are again
$k^2$ for all integers $k \geq 1$. And the corresponding eigenfunctions
can be taken as $x=\cos (k t)$.
Now suppose that $\lambda = 0$. In this case the equation is $x'' = 0$
and the general solution is $x = At + B$ so $x' = A$. The condition
$x'(0) = 0$ implies that
$A=0$. The condition $x'(\pi) = 0$ also implies $A=0$.
Hence $B$ could be anything (let us take it to be 1). So $\lambda = 0$
is an eigenvalue and $x=1$ is a corresponding eigenfunction.
Finally, let $\lambda < 0$. In this case the general solution is
$x = A \cosh ( \sqrt{-\lambda}\, t) + B \sinh ( \sqrt{-\lambda}\, t)$
and
\begin{equation*}
x' = A\sqrt{-\lambda}\, \sinh ( \sqrt{-\lambda}\, t)
+ B\sqrt{-\lambda}\, \cosh ( \sqrt{-\lambda}\, t ) .
\end{equation*}
We have already seen (with roles of $A$ and $B$ switched) that for this
expression to be zero at $t=0$ and $t=\pi$, we must have $A=B=0$. Hence there are
no negative eigenvalues.
In summary, the eigenvalues and corresponding eigenfunctions are
\begin{equation*}
\lambda_k = k^2 \qquad \text{with an eigenfunction} \qquad x_k = \cos (k t)
\qquad \text{for all integers } k \geq 1 ,
\end{equation*}
and there is another eigenvalue
\begin{equation*}
\lambda_0 = 0 \qquad \text{with an eigenfunction} \qquad x_0 = 1.
\end{equation*}
\end{example}
The following problem is the one that leads to the general Fourier
series.
\begin{example} \label{bvp-periodic:example}
Let us compute the
eigenvalues and eigenfunctions of
\begin{equation*}
x'' + \lambda x = 0, \quad x(-\pi) = x(\pi), \quad x'(-\pi) = x'(\pi) .
\end{equation*}
We have not specified the values or the derivatives
at the endpoints, but rather that they are the same at the beginning and
at the end of the interval.
Let us skip $\lambda < 0$. The computations are the same as before,
and again we find
that there are no negative eigenvalues.
For $\lambda = 0$, the general solution is $x = At + B$. The condition
$x(-\pi) = x(\pi)$ implies that $A=0$ ($A\pi + B = -A\pi +B$ implies $A=0$).
The second condition $x'(-\pi) = x'(\pi)$ says nothing about $B$ and hence
$\lambda=0$ is an eigenvalue with a corresponding eigenfunction $x=1$.
For $\lambda > 0$ we get that
$x = A \cos ( \sqrt{\lambda}\, t ) + B \sin ( \sqrt{\lambda}\, t)$.
Now
\begin{equation*}
\underbrace{A \cos (-\sqrt{\lambda}\, \pi) + B \sin (-\sqrt{\lambda}\,
\pi)}_{x(-\pi)}
=
\underbrace{A \cos ( \sqrt{\lambda}\, \pi ) + B \sin ( \sqrt{\lambda}\,
\pi)}_{x(\pi)} .
\end{equation*}
We remember that $\cos (- \theta) = \cos (\theta)$ and
$\sin (-\theta) = - \sin (\theta)$. Therefore,
\begin{equation*}
A \cos (\sqrt{\lambda}\, \pi) - B \sin ( \sqrt{\lambda}\, \pi)
=
A \cos (\sqrt{\lambda}\, \pi) + B \sin ( \sqrt{\lambda}\, \pi).
\end{equation*}
Hence either $B=0$ or $\sin ( \sqrt{\lambda}\, \pi) = 0$.
Similarly (exercise) if we differentiate $x$ and plug in the second
condition we find that $A=0$ or $\sin ( \sqrt{\lambda}\, \pi) = 0$.
Therefore, unless we want $A$ and $B$ to both be zero (which we do not)
we must have $\sin ( \sqrt{\lambda}\, \pi ) = 0$. Hence, $\sqrt{\lambda}$
is an integer and the eigenvalues are yet again $\lambda = k^2$ for
an integer $k \geq 1$. In this case, however,
$x = A \cos (k t) + B \sin (k t)$ is an eigenfunction for any $A$ and any $B$.
So we have two linearly independent eigenfunctions $\sin (kt)$ and $\cos (kt)$.
Remember that for a matrix we can also have two eigenvectors
corresponding to a single eigenvalue if the eigenvalue is repeated.
In summary, the eigenvalues and corresponding eigenfunctions are
\begin{align*}
& \lambda_k = k^2 & & \text{with eigenfunctions} & &
\cos (k t) \quad \text{and}\quad \sin (k t)
& & \text{for all integers } k \geq 1 , \\
& \lambda_0 = 0 & & \text{with an eigenfunction} & & x_0 = 1.
\end{align*}
\end{example}
\subsection{Orthogonality of eigenfunctions}
Something that will be very useful in the next section is the
\emph{\myindex{orthogonality}} property of the eigenfunctions. This is an analogue
of the following fact about eigenvectors of a matrix. A matrix is
called
\emph{symmetric\index{symmetric matrix}}
if $A = A^T$ (it is equal to its transpose).
\emph{Eigenvectors for two distinct eigenvalues of a symmetric
matrix are orthogonal.}
%That symmetry is required.
%We will not prove this fact here.
The
differential operators we are dealing with act much like a symmetric matrix.
We, therefore, get the following theorem.
%\medskip
%
%Suppose $\lambda_1$ and $\lambda_2$ are two distinct eigenvalues of $A$
%and $\vec{v}_1$ and $\vec{v}_2$ are the corresponding eigenvectors. Then
%we of course have that $A \vec{v}_1 = \lambda_1 \vec{v}_1$ and
%$A \vec{v}_2 = \lambda_2 \vec{v}_2$.
%\begin{equation*}
%\langle A \vec{v}_1 , \vec{v}_2 \rangle = \lambda_1 \langle \vec{v}_1 , \vec{v}_2 \rangle
%\qquad
%\langle A \vec{v}_2 , \vec{v}_1 \rangle = \lambda_2 \langle \vec{v}_2 , \vec{v}_1 \rangle
%\end{equation*}
%
%\begin{equation*}
%\langle A \vec{v}_1 , \vec{v}_2 \rangle -
%\langle A \vec{v}_2 , \vec{v}_1 \rangle
%=
%(\lambda_1 - \lambda_2 ) \langle \vec{v}_1 , \vec{v}_2 \rangle
%\end{equation*}
%
%\begin{equation*}
%\langle (A-A^T) \vec{v}_1 , \vec{v}_2 \rangle
%=
%(\lambda_1 - \lambda_2 ) \langle \vec{v}_1 , \vec{v}_2 \rangle
%\end{equation*}
\begin{theorem} \label{bvp:orthogonaleigen}
Suppose that $x_1(t)$ and $x_2(t)$ are two eigenfunctions of the problem
\eqref{bv:eq1}, \eqref{bv:eq2} or \eqref{bv:eq3}
for two different
eigenvalues $\lambda_1$ and $\lambda_2$. Then they are
\emph{orthogonal\index{orthogonal!functions}}
in the sense that
\begin{equation*}
\int_a^b x_1(t) x_2(t) \,dt = 0 .
\end{equation*}
\end{theorem}
The terminology comes from the fact that the integral is a type of
inner product. We will expand on this in the next section. The theorem
has a very short, elegant, and illuminating proof, so let us give it here.
First, we have the following two equations.
\begin{equation*}
x_1'' + \lambda_1 x_1 = 0
\qquad \text{and} \qquad
x_2'' + \lambda_2 x_2 = 0.
\end{equation*}
Multiply the first by $x_2$ and the second by $x_1$ and subtract to get
\begin{equation*}
(\lambda_1 - \lambda_2) x_1 x_2 = x_2'' x_1 - x_2 x_1'' .
\end{equation*}
Now integrate both sides of the equation:
\begin{equation*}
\begin{split}
(\lambda_1 - \lambda_2) \int_a^b x_1 x_2 \,dt
& =
\int_a^b x_2'' x_1 - x_2 x_1'' \,dt \\
& =
\int_a^b \frac{d}{dt} \left( x_2' x_1 - x_2 x_1' \right) \,dt \\
& =
\Bigl[ x_2' x_1 - x_2 x_1' \Bigr]_{t=a}^b
= 0 .
\end{split}
\end{equation*}
The last equality holds because of the boundary conditions. For example, if
we consider \eqref{bv:eq1} we have $x_1(a) = x_1(b) = x_2(a) = x_2(b) = 0$
and so $x_2' x_1 - x_2 x_1'$ is zero at both $a$ and $b$.
As $\lambda_1 \not= \lambda_2$, the theorem follows.
\begin{exercise}[easy]
Finish the proof of the theorem (check the last equality in the proof) for the cases
\eqref{bv:eq2} and \eqref{bv:eq3}.
\end{exercise}
The function $\sin (n t)$ is an eigenfunction for the problem
$x''+\lambda x = 0$, $x(0) = 0$, $x(\pi) = 0$.
Hence for positive
integers $n$ and $m$ we have the integrals
\begin{equation*}
\int_{0}^\pi \sin (mt) \sin (nt) \,dt = 0 ,
\quad
\text{when } m \not = n.
\end{equation*}
Similarly,
\begin{equation*}
\int_{0}^\pi \cos (mt) \cos (nt) \,dt = 0 ,
\quad
\text{when } m \not = n,
\qquad \text{and} \qquad
\int_{0}^\pi \cos (nt) \,dt = 0 .
\end{equation*}
And finally we also get
\begin{equation*}
\int_{-\pi}^\pi \sin (mt) \sin (nt) \,dt = 0 ,
\quad
\text{when } m \not = n,
\qquad \text{and} \qquad
\int_{-\pi}^\pi \sin (nt) \,dt = 0 ,
\end{equation*}
\begin{equation*}
\int_{-\pi}^\pi \cos (mt) \cos (nt) \,dt = 0 ,
\quad
\text{when } m \not = n,
\qquad \text{and} \qquad
\int_{-\pi}^\pi \cos (nt) \,dt = 0 ,
\end{equation*}
and
\begin{equation*}
\int_{-\pi}^\pi \cos (mt) \sin (nt) \,dt = 0
\qquad \text{(even if $m=n$).}
\end{equation*}
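These relations are also easy to verify numerically. Here is a quick sanity
check, assuming Python with the \texttt{numpy} and \texttt{scipy} libraries
available:
\begin{verbatim}
# Numerically check the orthogonality relations above.
import numpy as np
from scipy.integrate import quad

for m in range(1, 4):
    for n in range(1, 4):
        ss, _ = quad(lambda t: np.sin(m*t) * np.sin(n*t), -np.pi, np.pi)
        cc, _ = quad(lambda t: np.cos(m*t) * np.cos(n*t), -np.pi, np.pi)
        sc, _ = quad(lambda t: np.cos(m*t) * np.sin(n*t), -np.pi, np.pi)
        if m != n:
            assert abs(ss) < 1e-12 and abs(cc) < 1e-12
        assert abs(sc) < 1e-12   # holds even when m = n
\end{verbatim}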
%\medskip
%
%The theorem is also true when different boundary conditions are applied as
%well. For example, if we require $x'(a) = x'(b) = 0$, or
%$x(a) = x'(b) = 0$, or
%$x'(a) = x(b) = 0$. See the proof.
%By what we have seen previously we apply the theorem to find the integrals
%\begin{equation*}
%\int_{-\pi}^\pi \sin (mt) \sin (nt) \,dt = 0 \qquad \text{and} \qquad
%\int_{-\pi}^\pi \cos (mt) \cos (nt) \,dt = 0 ,
%\end{equation*}
%when $m \not = n$, and
%\begin{equation*}
%\int_{-\pi}^\pi \sin (mt) \cos (nt) \,dt = 0 ,
%\end{equation*}
%for all $m$ and $n$.
\subsection{Fredholm alternative}
We now touch on a very useful theorem in the theory of differential
equations. The theorem holds in a more general setting than we are
going to state it, but for our purposes the following statement is
sufficient. We will give a slightly more general version in
\chapterref{SL:chapter}.
\begin{theorem}[Fredholm alternative%
\footnote{Named after the Swedish mathematician
\href{https://en.wikipedia.org/wiki/Fredholm}{Erik Ivar Fredholm}
(1866--1927).}]\index{Fredholm alternative!simple case}
\label{thm:fredholmsimple}
Exactly one of the following statements holds.
Either
\begin{equation} \label{simpfredhomeq}
x'' + \lambda x = 0, \quad x(a) = 0, \quad x(b) = 0
\end{equation}
has a nonzero solution, or
\begin{equation} \label{simpfrednonhomeq}
x'' + \lambda x = f(t), \quad x(a) = 0, \quad x(b) = 0
\end{equation}
has a unique solution for every function $f$ continuous on $[a,b]$.
\end{theorem}
The theorem is also true for the other types of
boundary conditions we considered.
The theorem means that if $\lambda$ is not an eigenvalue, the nonhomogeneous
equation \eqref{simpfrednonhomeq} has a unique solution for every right-hand
side. On the other hand if $\lambda$ is an eigenvalue, then
\eqref{simpfrednonhomeq} need not have a solution for every $f$,
and furthermore,
even if it happens to have a solution, the solution is not
unique.
We also want to reinforce the idea here that linear differential operators have
much in common with matrices. So it is no surprise that
there is a finite-dimensional version of the Fredholm alternative for matrices
as well. Let $A$ be an $n \times n$ matrix. The Fredholm alternative then
states that either $(A-\lambda I) \vec{x}
= \vec{0}$ has a nontrivial solution, or $(A-\lambda I) \vec{x} = \vec{b}$
has a unique solution for every $\vec{b}$.
A lot of intuition from linear algebra can be applied to linear differential
operators, but one must be careful of course. For example, one
difference we have already seen is that in general a differential operator
will have infinitely many eigenvalues, while a matrix has only finitely many.
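To illustrate the matrix version numerically, consider the following sketch,
assuming Python with \texttt{numpy}; the matrix and the values of $\lambda$
are chosen purely for illustration:
\begin{verbatim}
# The matrix Fredholm alternative in action.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # symmetric, eigenvalues 1 and 3
I = np.eye(2)
b = np.array([1.0, 0.0])

# lambda = 2 is not an eigenvalue: (A - 2I)x = b has a unique solution.
print(np.linalg.solve(A - 2*I, b))

# lambda = 3 is an eigenvalue: (A - 3I) is singular, so solve() fails.
try:
    np.linalg.solve(A - 3*I, b)
except np.linalg.LinAlgError:
    print("(A - 3I) is singular; no unique solution")
\end{verbatim}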
\subsection{Application}
Let us consider a physical application of an endpoint problem.
Suppose we have a tightly stretched, quickly spinning elastic
string or rope of uniform linear density $\rho$, measured, for example, in
$\unitfrac{kg}{m}$.
Let us put this problem into the $xy$-plane, where both $x$ and $y$
are measured in meters. The $x$-axis represents the
position on the string. The string rotates at angular velocity $\omega$,
in $\unitfrac{radians}{s}$.
Imagine that the whole $xy$-plane rotates at angular velocity $\omega$.
This way, the string stays in this $xy$-plane and $y$
measures its deflection from the equilibrium position, $y=0$, on the $x$-axis.
Hence the graph of $y$ gives the shape of the string.
We consider an ideal string with
no volume, just a mathematical curve.
We suppose the tension on the string is a constant $T$ in Newtons.
%If we take a small segment and we look at the tension at the endpoints, we
%see that this force is tangential and we will assume that the magnitude is
%the same at both end points. Hence the magnitude
%is constant everywhere and we will
%call its magnitude $T$.
Assuming that the deflection is small,
we can use Newton's second law (let us skip the derivation) to get the equation
\begin{equation*}
T y'' + \rho \omega^2 y = 0 .
\end{equation*}
To check the units notice that the units of $y''$ are $\unitfrac{m}{m^2}$, as the derivative is
in terms of $x$.
Let $L$ be the length of the string (in meters), and suppose the string
is fixed at the beginning and end
points. Hence, $y(0) = 0$ and $y(L) = 0$. See
\figurevref{bvp:whirstringfig}.
\begin{myfig}
\capstart
\inputpdft{bvp-whirstring}
\caption{Whirling string.\label{bvp:whirstringfig}}
\end{myfig}
We rewrite the equation as
$y'' + \frac{\rho \omega^2}{T} y = 0$.
The setup is similar to \examplevref{bvp:eig1ex}, except for the
interval length being $L$ instead of $\pi$. We are looking for eigenvalues
of $y'' + \lambda y = 0$, $y(0) = 0$, $y(L) = 0$, where
$\lambda = \frac{\rho \omega^2}{T}$. As before
there are no nonpositive eigenvalues. With $\lambda > 0$,
the general solution to the equation is $y = A \cos ( \sqrt{\lambda} \,x ) + B
\sin ( \sqrt{\lambda} \,x )$. The condition $y(0) = 0$ implies that $A = 0$ as
before. The condition $y(L) = 0$ implies that
$\sin ( \sqrt{\lambda} \, L) = 0$ and hence
$\sqrt{\lambda} \, L = k \pi$ for some integer $k > 0$, so
\begin{equation*}
\frac{\rho \omega^2}{T} = \lambda = \frac{k^2 \pi^2}{L^2} .
\end{equation*}
What does this say about the shape of the string? It says that for
all parameters $\rho$, $\omega$, $T$ not satisfying the equation above, the
string is in the equilibrium position, $y=0$. When
$\frac{\rho \omega^2}{T} = \frac{k^2 \pi^2}{L^2}$, then the string will
\myquote{pop out} some distance $B$. We cannot compute $B$
with the information we have.
Let us assume that $\rho$ and $T$ are fixed and we are changing $\omega$.
For most values of $\omega$, the string is in the equilibrium state. When
the angular velocity $\omega$ hits a value
$\omega = \frac{k \pi \sqrt{T}}{L\sqrt{\rho}}$, then the string
pops out and has the shape of a sine wave crossing the
$x$-axis $k-1$ times between the endpoints.
For example, at $k=1$, the string does not cross the $x$-axis,
and the shape looks like the one in \figurevref{bvp:whirstringfig}.
On the other hand, when $k=3$, the string crosses the $x$-axis
twice; see \figurevref{bvp:whirstring2fig}.
When $\omega$ changes again, the string returns to
the equilibrium position. The higher the angular velocity,
the more times it crosses the $x$-axis when it is popped out.
\begin{myfig}
\capstart
\inputpdft{bvp-whirstring2}
\caption{Whirling string at the third eigenvalue ($k=3$).\label{bvp:whirstring2fig}}
\end{myfig}
For another example, consider a spinning jump rope (where $k=1$, as the rope
is completely \myquote{popped out}): if you
pull on the ends to increase the tension, then the angular velocity must also
increase for the rope to stay \myquote{popped out}.
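The critical angular velocities
$\omega = \frac{k \pi \sqrt{T}}{L\sqrt{\rho}}$ are easy to tabulate.
Here is a minimal sketch in Python; the parameter values are purely
illustrative:
\begin{verbatim}
# Critical angular velocities of the whirling string.
import math

rho = 1.0   # linear density (kg/m) -- illustrative value
T   = 1.0   # tension (N)           -- illustrative value
L   = 1.0   # length (m)            -- illustrative value

for k in range(1, 4):
    omega = k * math.pi * math.sqrt(T / rho) / L
    print(f"k = {k}: string pops out at omega = {omega:.4f} rad/s")
\end{verbatim}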
\subsection{Exercises}
Hint for the following exercises: Note that
if $\lambda > 0$, then
$\cos \bigl( \sqrt{\lambda}\, (t - a) \bigr)$
and $\sin \bigl( \sqrt{\lambda}\, (t - a) \bigr)$
are also solutions of the homogeneous
equation.
\begin{exercise}
Compute all
eigenvalues and eigenfunctions of
$x'' + \lambda x = 0$, $x(a) = 0$, $x(b) = 0$ (assume $a < b$).
\end{exercise}
\begin{exercise}
Compute all
eigenvalues and eigenfunctions of
$x'' + \lambda x = 0$, $x'(a) = 0$, $x'(b) = 0$ (assume $a < b$).
\end{exercise}
\begin{exercise}
Compute all
eigenvalues and eigenfunctions of
$x'' + \lambda x = 0$, $x'(a) = 0$, $x(b) = 0$ (assume $a < b$).
\end{exercise}
\begin{exercise}
Compute all
eigenvalues and eigenfunctions of
$x'' + \lambda x = 0$, $x(a) = x(b)$, $x'(a) = x'(b)$ (assume $a < b$).
\end{exercise}
\begin{exercise}
We skipped the case of $\lambda < 0$ for
the boundary value problem
$x'' + \lambda x = 0$, $x(-\pi) = x(\pi)$, $x'(-\pi) = x'(\pi)$.
Finish the calculation and show that there are no negative eigenvalues.
\end{exercise}
\setcounter{exercise}{100}
\begin{exercise}
Consider a spinning string of length 2 and linear density 0.1 and tension 3.
Find the smallest angular velocity for which the string pops out.
\end{exercise}
\exsol{%
$\omega = \pi \sqrt{\frac{15}{2}}$
}
\begin{exercise}
Suppose $x'' + \lambda x = 0$ and $x(0)=1$, $x(1) = 1$.
Find all $\lambda$ for which there is more
than one solution. Also find the corresponding solutions (only for the
eigenvalues).
\end{exercise}
\exsol{%
$\lambda_k = 4 k^2 \pi^2$ for $k = 1,2,3,\ldots$
\quad
$x_k = \cos (2k\pi t) + B \sin (2k\pi t)$ \quad (for any $B$)
}
\begin{exercise}
Suppose $x'' + x = 0$ and $x(0)=0$, $x'(\pi) = 1$.
Find all solutions, if any exist.
\end{exercise}
\exsol{%
$x(t) = - \sin(t)$
}
\begin{exercise}
Consider
$x' + \lambda x = 0$ and $x(0)=0$, $x(1) = 0$. Why does it not
have any eigenvalues? Why does any first order equation with two endpoint
conditions such as the ones above have no eigenvalues?
\end{exercise}
\exsol{%
The general solution is $x = C e^{-\lambda t}$. Since $x(0) = 0$, we get $C=0$, and so $x(t) = 0$.
Therefore,
the solution is always identically zero. One condition is always
enough to guarantee a unique solution for a first order equation.
}
\begin{exercise}[challenging]
Suppose $x''' + \lambda x = 0$ and $x(0)=0$, $x'(0) = 0$, $x(1) = 0$.
Suppose that $\lambda > 0$. Find an equation that all such
eigenvalues must satisfy.
Hint: Note that $-\sqrt[3]{\lambda}$ is a root
of $r^3+\lambda = 0$.
\end{exercise}
\exsol{%
$\frac{\sqrt{3}}{3} e^{\frac{-3}{2}\sqrt[3]{\lambda}}
- \frac{\sqrt{3}}{3} \cos \bigl( \frac{\sqrt{3}\, \sqrt[3]{\lambda}}{2} \bigr)
+ \sin \bigl( \frac{\sqrt{3}\, \sqrt[3]{\lambda}}{2}\bigr) = 0$
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\sectionnewpage
\section{The trigonometric series} \label{ts:section}
%mbxINTROSUBSECTION
\sectionnotes{2 lectures\EPref{, \S9.1 in \cite{EP}}\BDref{,
\S10.2 in \cite{BD}}}
\subsection{Periodic functions and motivation}
As motivation for studying Fourier series, suppose we have the problem
\begin{equation} \label{ts:deq}
x'' + \omega_0^2 x = f(t) ,
\end{equation}
for some periodic function $f(t)$.
We already solved
\begin{equation} \label{ts:deqcos}
x'' + \omega_0^2 x = F_0 \cos ( \omega t) .
\end{equation}
One way to solve \eqref{ts:deq} is to
decompose $f(t)$ as a sum of cosines (and sines) and then
solve many problems of the form \eqref{ts:deqcos}. We then use
the principle of superposition to sum up all the resulting solutions
and obtain a solution to \eqref{ts:deq}.
Before we proceed, let us discuss periodic functions in a little
more detail.
A function is said to be \emph{\myindex{periodic}} with period $P$ if
$f(t) = f(t+P)$ for all $t$. For brevity we say $f(t)$ is $P$-periodic.
Note that a $P$-periodic function is also $2P$-periodic, $3P$-periodic
and so on.
For example, $\cos (t)$ and $\sin (t)$ are
$2\pi$-periodic. So are $\cos (kt)$ and $\sin (kt)$ for all integers $k$. The
constant functions are an extreme example. They are periodic for any period
(exercise).
Normally we start with a function $f(t)$ defined on some interval $[-L,L]$,
and we want to
\emph{extend $f(t)$ periodically}\index{extend periodically}\index{periodic extension}
to make it
a $2L$-periodic function. We do this extension
by defining a new function $F(t)$
such that for $t$ in $[-L,L]$, $F(t) = f(t)$. For $t$ in $[L,3L]$,
we define $F(t) = f(t-2L)$, for $t$ in $[-3L,-L]$, $F(t) = f(t+2L)$, and
so on.
For this to make sense, we need $f(-L) = f(L)$.
We could also have started with $f$
defined only on the half-open interval $(-L,L]$ and then defined $f(-L) = f(L)$.
\begin{example}
Define $f(t) = 1-t^2$ on $[-1,1]$. Now extend $f(t)$ periodically to
a 2-periodic function. See \figurevref{ts:perextofinvertedparabolafig}.
\begin{myfig}
\capstart
\diffyincludegraphics{width=3in}{width=4.5in}{ts-perextofinvertedparabola}
\caption{Periodic extension of the function
$1-t^2$.\label{ts:perextofinvertedparabolafig}}
\end{myfig}
\end{example}
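Such a periodic extension is straightforward to compute. Here is a minimal
sketch, assuming Python with \texttt{numpy}: we shift $t$ into the base
interval by subtracting the appropriate multiple of $2L$ (using the half-open
interval $[-L,L)$ for definiteness):
\begin{verbatim}
import numpy as np

def periodic_extension(f, t, L):
    # Shift t into the base interval [-L, L) by removing whole
    # periods of length 2L.
    return f((t + L) % (2 * L) - L)

f = lambda t: 1 - t**2
t = np.linspace(-3.0, 3.0, 7)
print(periodic_extension(f, t, 1.0))  # the 2-periodic extension of 1 - t^2
\end{verbatim}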
You should be careful to distinguish between $f(t)$ and its extension. A common
mistake is to assume that a formula for $f(t)$ holds for its extension. This
can be especially confusing when the formula for $f(t)$ is itself periodic,
but perhaps with a different period.
\begin{exercise}
Define $f(t) = \cos t$ on $[\nicefrac{-\pi}{2},\nicefrac{\pi}{2}]$. Take the $\pi$-periodic
extension and sketch its graph. How does it compare to the graph of
$\cos t$?
\end{exercise}
\subsection{Inner product and eigenvector decomposition}
Suppose we have a \emph{\myindex{symmetric matrix}},
that is $A^T = A$. As we remarked before,
eigenvectors of $A$ are then orthogonal. Here the word
\emph{orthogonal}\index{orthogonal!vectors} means
that if $\vec{v}$ and $\vec{w}$ are two
eigenvectors of $A$ for distinct eigenvalues,
then $\langle \vec{v} , \vec{w} \rangle = 0$.
In this case the inner product $\langle \vec{v} , \vec{w} \rangle$
is the \emph{\myindex{dot product}},
which can be computed as $\vec{v}^T\vec{w}$.
To decompose a vector $\vec{v}$ in terms of mutually orthogonal
vectors $\vec{w}_1$ and $\vec{w}_2$ we write
\begin{equation*}
\vec{v} = a_1 \vec{w}_1 + a_2 \vec{w}_2 .
\end{equation*}
Let us find the formula for $a_1$ and $a_2$. First let us compute
\begin{equation*}
\langle \vec{v} , \vec{w_1} \rangle
=
\langle a_1 \vec{w}_1 + a_2 \vec{w}_2 , \vec{w_1} \rangle
=
a_1 \langle \vec{w}_1 , \vec{w_1} \rangle
+
a_2 \underbrace{\langle \vec{w}_2 , \vec{w_1} \rangle}_{=0}
=
a_1 \langle \vec{w}_1 , \vec{w_1} \rangle .
\end{equation*}
Therefore,
\begin{equation*}
a_1 =
\frac{\langle \vec{v} , \vec{w_1} \rangle}{
\langle \vec{w}_1 , \vec{w_1} \rangle} .
\end{equation*}
Similarly
\begin{equation*}
a_2 =
\frac{\langle \vec{v} , \vec{w_2} \rangle}{
\langle \vec{w}_2 , \vec{w_2} \rangle} .
\end{equation*}
You probably remember this formula from vector calculus.
\begin{example}
Write
$\vec{v} = \left[ \begin{smallmatrix} 2 \\ 3 \end{smallmatrix} \right]$
as a linear combination of
$\vec{w_1} = \left[ \begin{smallmatrix} 1 \\ -1 \end{smallmatrix} \right]$
and
$\vec{w_2} = \left[ \begin{smallmatrix} 1 \\ 1 \end{smallmatrix} \right]$.
First note that $\vec{w}_1$ and $\vec{w}_2$ are orthogonal
as $\langle \vec{w}_1 , \vec{w}_2 \rangle = 1(1) + (-1)1 = 0$.
Then
\begin{align*}
& a_1 =
\frac{\langle \vec{v} , \vec{w_1} \rangle}{
\langle \vec{w}_1 , \vec{w_1} \rangle}
=
\frac{2(1) + 3(-1)}{1(1) + (-1)(-1)} = \frac{-1}{2} ,
\\
& a_2 =
\frac{\langle \vec{v} , \vec{w_2} \rangle}{
\langle \vec{w}_2 , \vec{w_2} \rangle}
=
\frac{2 + 3}{1 + 1} = \frac{5}{2} .
\end{align*}
Hence
\begin{equation*}
\begin{bmatrix} 2 \\ 3 \end{bmatrix}
=
\frac{-1}{2}
\begin{bmatrix} 1 \\ -1 \end{bmatrix}
+
\frac{5}{2}
\begin{bmatrix} 1 \\ 1 \end{bmatrix} .
\end{equation*}
\end{example}
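A quick numerical check of this arithmetic, assuming Python with
\texttt{numpy}:
\begin{verbatim}
import numpy as np

v  = np.array([2.0, 3.0])
w1 = np.array([1.0, -1.0])
w2 = np.array([1.0, 1.0])

a1 = v @ w1 / (w1 @ w1)   # -1/2
a2 = v @ w2 / (w2 @ w2)   #  5/2
print(a1, a2, np.allclose(v, a1 * w1 + a2 * w2))
\end{verbatim}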
\subsection{The trigonometric series}
Instead of decomposing a vector in terms of eigenvectors of a matrix,
we decompose a function in terms of eigenfunctions of a certain
eigenvalue problem. The eigenvalue problem we use for
the Fourier series is
\begin{equation*}
x'' + \lambda x = 0, \quad x(-\pi) = x(\pi), \quad x'(-\pi) = x'(\pi) .
\end{equation*}
We computed that the eigenfunctions are $1$, $\cos (k t)$, and
$\sin (k t)$. That is, we want to find a representation of a
$2\pi$-periodic function $f(t)$ as
\begin{equation*}
\mybxbg{~~
f(t) = \frac{a_0}{2} +
\sum_{n=1}^\infty a_n \cos (n t) + b_n \sin (n t) .
~~}
\end{equation*}
This series is called the \emph{\myindex{Fourier series}}%
\footnote{Named after the French mathematician
\href{https://en.wikipedia.org/wiki/Joseph_Fourier}{Jean Baptiste Joseph Fourier}
(1768--1830).} or the
\emph{\myindex{trigonometric series}} for $f(t)$.
We write the coefficient of the eigenfunction 1 as $\frac{a_0}{2}$
for convenience.
We could also think of $1 = \cos (0t)$, so that
we only need to look at $\cos (kt)$ and $\sin (kt)$.
As for matrices we want to find a \emph{\myindex{projection}}
of $f(t)$ onto the subspaces given by the eigenfunctions. So we want to
define an \emph{\myindex{inner product of functions}}. For example, to
find $a_n$ we want to compute $\langle \, f(t) \, , \, \cos (nt) \, \rangle$.
We define the inner product as
\begin{equation*}
\langle \, f(t)\, , \, g(t) \, \rangle \overset{\text{def}}{=}
\int_{-\pi}^\pi f(t) \, g(t) \, dt .
\end{equation*}
With this definition of the inner product,
we saw in the previous section that the eigenfunctions $\cos (kt)$
(including the constant eigenfunction), and
$\sin (kt)$ are \emph{orthogonal\index{orthogonal!functions}} in the sense
that
\begin{align*}
\langle \, \cos (mt)\, , \, \cos (nt) \, \rangle = 0 & \qquad \text{for } m \not= n , \\
\langle \, \sin (mt)\, , \, \sin (nt) \, \rangle = 0 & \qquad \text{for } m \not= n , \\
\langle \, \sin (mt)\, , \, \cos (nt) \, \rangle = 0 & \qquad \text{for all } m \text{ and } n .
\end{align*}
For $n=1,2,3,\ldots$
we have
\begin{align*}
\langle \, \cos (nt) \, , \, \cos (nt) \, \rangle &=
\int_{-\pi}^\pi \cos(nt)\cos(nt) \, dt
=
\pi,
\\
\langle \, \sin (nt) \, , \, \sin (nt) \, \rangle &=
\int_{-\pi}^\pi \sin(nt)\sin(nt) \, dt
=
\pi,
\end{align*}
by elementary calculus. For the constant we get
\begin{equation*}
\langle \, 1 \, , \, 1 \, \rangle
=
\int_{-\pi}^\pi 1 \cdot 1 \, dt
= 2\pi.
\end{equation*}
The coefficients are given by
\begin{equation*}
\mybxbg{~~
\begin{aligned}
& a_n =
\frac{\langle \, f(t) \, , \, \cos (nt) \, \rangle}{\langle \, \cos (nt) \, , \,
\cos (nt) \, \rangle}
=
\frac{1}{\pi} \int_{-\pi}^\pi f(t) \cos (nt) \, dt , \\
& b_n =
\frac{\langle \, f(t) \, , \, \sin (nt) \, \rangle}{\langle \, \sin (nt) \, , \,
\sin (nt) \, \rangle}
=
\frac{1}{\pi} \int_{-\pi}^\pi f(t) \sin (nt) \, dt .
\end{aligned}
~~}
\end{equation*}
Compare these expressions with the finite-dimensional example.
For $a_0$ we get a similar formula
\begin{equation*}
\mybxbg{~~
a_0 = 2
\frac{\langle \, f(t) \, , \, 1 \, \rangle}{\langle \, 1 \, , \,
1 \, \rangle}
=
\frac{1}{\pi} \int_{-\pi}^\pi f(t) \, dt .
~~}
\end{equation*}
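These formulas are straightforward to evaluate by numerical quadrature.
Here is a minimal sketch, assuming Python with \texttt{numpy} and
\texttt{scipy}, using $f(t) = t^2$ as an illustrative example:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def fourier_coefficients(f, N):
    # a[n] and b[n] per the boxed formulas; a[0] is a_0.
    a = [quad(lambda t: f(t) * np.cos(n * t), -np.pi, np.pi)[0] / np.pi
         for n in range(N + 1)]
    b = [quad(lambda t: f(t) * np.sin(n * t), -np.pi, np.pi)[0] / np.pi
         for n in range(1, N + 1)]
    return a, b

a, b = fourier_coefficients(lambda t: t**2, 3)
print(np.round(a, 6))  # [2*pi^2/3, -4, 1, -4/9]
print(np.round(b, 6))  # [0, 0, 0] -- t^2 is even
\end{verbatim}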
Let us check the formulas using the orthogonality properties. Suppose for
a moment that
\begin{equation*}
f(t) = \frac{a_0}{2} + \sum_{n=1}^\infty a_n \cos (n t) + b_n
\sin (n t) .
\end{equation*}
Then for $m \geq 1$ we have
\begin{equation*}
\begin{split}
\langle \, f(t)\,,\,\cos (mt) \, \rangle
& =
\Bigl\langle \, \frac{a_0}{2} + \sum_{n=1}^\infty a_n \cos (n t) + b_n
\sin (n t) \,,\, \cos (mt) \, \Bigr\rangle \\
& =
\frac{a_0}{2}
\langle \, 1 \, , \, \cos (mt) \, \rangle
+ \sum_{n=1}^\infty
a_n \langle \, \cos (nt) \, , \, \cos (mt) \, \rangle +
b_n \langle \, \sin (n t) \, , \, \cos (mt) \, \rangle \\
& =
a_m \langle \, \cos (mt) \, , \, \cos (mt) \, \rangle .
\end{split}
\end{equation*}
And hence
$a_m =
\frac{\langle \, f(t) \, , \, \cos (mt) \, \rangle}{\langle \, \cos (mt) \, , \,
\cos (mt) \, \rangle}$.
\begin{exercise}
Carry out the calculation for $a_0$ and $b_m$.
\end{exercise}
\begin{example}
Take the function
\begin{equation*}
f(t) = t
\end{equation*}
for $t$ in $(-\pi,\pi]$. Extend $f(t)$ periodically and write it
as a Fourier series. This function is called the \emph{\myindex{sawtooth}}.
\begin{myfig}
\capstart
\diffyincludegraphics{width=3in}{width=4.5in}{ts-sawtooth}
\caption{The graph of the sawtooth function.\label{ts:sawtoothfig}}
\end{myfig}
The plot of the extended periodic function is given in
\figurevref{ts:sawtoothfig}.
Let us compute the coefficients. We start with $a_0$,
\begin{equation*}
a_0 = \frac{1}{\pi} \int_{-\pi}^\pi t \,dt = 0 .
\end{equation*}
We will often use the result from calculus that says that the integral of an odd
function over a symmetric interval is zero. Recall that an
\emph{\myindex{odd function}} is a
function $\varphi(t)$ such that $\varphi(-t) = -\varphi(t)$. For example
the functions $t$, $\sin t$, or (importantly for us)
$t \cos (nt)$ are all odd functions. Thus
\begin{equation*}
a_n = \frac{1}{\pi} \int_{-\pi}^\pi t \cos (nt) \,dt = 0 .
\end{equation*}
Let us move to $b_n$. Another useful fact from calculus
is that the integral of an even function over
a symmetric interval is
twice the integral of the same function over half the interval.
Recall an \emph{\myindex{even function}}
is a
function $\varphi(t)$ such that $\varphi(-t) = \varphi(t)$. For example
$t \sin (nt)$ is even.
\begin{equation*}
\begin{split}
b_n & = \frac{1}{\pi} \int_{-\pi}^\pi t \sin (nt) \,dt \\
& = \frac{2}{\pi} \int_{0}^\pi t \sin (nt) \,dt \\
& = \frac{2}{\pi} \left(
\left[ \frac{-t \cos (nt)}{n} \right]_{t=0}^{\pi}
+
\frac{1}{n}
\int_{0}^\pi \cos (nt) \,dt
\right)
\\
& = \frac{2}{\pi} \left(
\frac{-\pi \cos (n\pi)}{n}