1. Find the coordinate vector of the given vector with respect to the given basis of R^2.

2. Find the coordinate vector of the given vector with respect to the given basis. Note that B is a rotation-dilation matrix. Compare with Exercise 30.
The problem of coordinate changes is a multidimensional version of the problem of unit changes, which you may have encountered in math or physics courses (or in your daily lives, when dealing with foreign systems of measurement or foreign currencies). We illustrate this point with a simple example. Suppose I collect data regarding the height and weight of people. For each person, I represent the data as a vector; for example:

x = [ 1.83 ]
    [ 84   ]
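In code, a unit change is multiplication by an invertible (here diagonal) matrix, and converting back is multiplication by its inverse. A minimal numpy sketch; the conversion factors and variable names are illustrative, not from the text:

```python
import numpy as np

# A person's data in SI units: height in meters, weight in kilograms.
x_si = np.array([1.83, 84.0])

# Changing units is a change of coordinates: each new coordinate is a fixed
# multiple of an old one, so the conversion is a diagonal matrix.
# (Illustrative factors: 1 m = 100 cm, 1 kg is roughly 2.20462 lb.)
to_cm_lb = np.diag([100.0, 2.20462])

x_new = to_cm_lb @ x_si                    # height in cm, weight in lb
x_back = np.linalg.inv(to_cm_lb) @ x_new   # converting back recovers the SI data

print(x_new)    # first entry is 183.0
print(x_back)   # [1.83, 84.0]
```

The same pattern, with a non-diagonal invertible matrix, is exactly a change of basis.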
3. Find the matrix of the given linear transformation T with respect to the given basis of R^2.
382 • Chap. 7 Coordinate Systems
Sec. 7.1 Coordinate Systems in R^n
4. Find the matrix of the given linear transformation T(x) = Ax with respect to the given basis.

12. In the figure below, sketch the vector x with the given coordinate vector, where the basis of R^2 consists of the vectors shown.
5. Let T: R^2 -> R^2 be the orthogonal projection onto the line spanned by the given vector.
a. Find the matrix of T with respect to the basis consisting of [3; 1] and [1; 3]. Draw a sketch.
b. Use your answer in part a to find the standard matrix of T.
6. Let T: R^3 -> R^3 be the linear transformation with the given standard matrix. Find the matrix of this transformation with respect to the given basis.

7. Consider the linear transformation T: R^3 -> R^3 that is the reflection in the plane given by the given equation.
a. Find the matrix of this transformation with respect to the given basis. Draw a sketch.
b. Use your answer in part a to find the standard matrix of T.

13. Consider the vectors u, v, w sketched below. Find the coordinate vector of w with respect to the basis u, v.

14. Given a hexagonal tiling of the plane, such as you might find on a kitchen floor, consider the basis of R^2 consisting of the vectors u, w sketched below.
8. Consider a linear transformation T: R^2 -> R^2. We are told that the matrix of T with respect to the given basis is the given 2x2 matrix. Find the standard matrix of T.

9. Consider a linear transformation T: R^2 -> R^2. We are told that the matrix of T with respect to the given basis is [a b; c d]. Find the standard matrix of T in terms of a, b, c, d.
10. Consider the basis of R^2 consisting of the two given vectors. Find the coordinate vector of the given vector x with respect to this basis. Illustrate the result with a sketch.

11. Do Exercise 10 for the given vector x.

14. (continued)
a. Find the coordinate vectors of OP and OQ with respect to the basis.
b. We are told that the coordinate vector of OR is the given vector. Sketch the point R. Is R a vertex or a center of a tile?
c. We are told that the coordinate vector of OS is the given vector. Is S a vertex or a center of a tile?
15. Find the coordinate vector of e1 with respect to the given basis of R^3.

16. If B is a basis of R^n, is the transformation T from R^n to R^n given by T(x) = [x]_B linear? Justify your answer.

17. Consider the basis B of R^2 consisting of the two given vectors. We are told that [x]_B is the given vector, for a certain vector x in R^2. Find x.

18. Let B be the basis of R^n consisting of the vectors v1, v2, ..., vn, and let T be some other basis of R^n. Is [v1]_T, ..., [vn]_T a basis of R^n as well? Explain.

23. Let L be the line in R^3 spanned by the given vector u. Let T: R^3 -> R^3 be the rotation about this line through an angle of pi/2, in the direction indicated in the sketch below. Find the matrix A such that T(x) = Ax.
19. Consider the basis B of R^2 consisting of the two given vectors, and let R be the basis consisting of the two other given vectors. Find a matrix P such that [x]_R = P[x]_B for all x in R^2.

20. Find a basis B of R^2 such that the two given vectors have the given coordinate vectors with respect to B.

21. Consider two orthogonal unit vectors u1 and u2 in R^3, and form the basis B consisting of the vectors u1, u2, u3 = u1 x u2. Find the matrix of the given linear transformation with respect to the basis B.

22. Consider a 3x3 matrix A and a vector v in R^3 such that A^3 v = 0, but A^2 v != 0.
a. Show that the vectors A^2 v, Av, v form a basis of R^3. Hint: Demonstrate linear independence.
b. Find the matrix of the transformation T(x) = Ax with respect to the basis A^2 v, Av, v.

24. Consider the regular tetrahedron sketched below, whose center is at the origin. Let v0, v1, v2, v3 be the position vectors of the four vertices of the tetrahedron: v0 = OP0, ..., v3 = OP3.
a. Find the sum v0 + v1 + v2 + v3.
b. Find the coordinate vector of v0 with respect to the basis v1, v2, v3.
c. Let T be the linear transformation with T(v0) = v3, T(v3) = v1, and T(v1) = v0. What is T(v2)? Describe the transformation T geometrically (as a reflection, rotation, projection, or whatever). Find the matrix B of T with respect to the basis v1, v2, v3. Find the complex eigenvalues of B. What is B^3? Explain.
25. Consider a rotation T(x) = Ax in R^3 (that is, T is an orthogonal transformation and det(A) = 1).
a. Consider an orthonormal basis B of R^3. Is the matrix B of T with respect to B orthogonal? Is it necessarily a rotation matrix?
b. Now suppose B is an orthonormal basis v1, v2, v3 of R^3, where v1 is a fixed point of T, that is, T(v1) = v1. (Such a vector exists by Euler's theorem; see Exercise 6.3.34.) What can you say about the matrix B of T with respect to B? Describe the first row and the first column of B, and explain why the minor B11 is a 2x2 rotation matrix. Explain the significance of this result in geometric terms.

26. Consider a linear transformation T(x) = Ax from R^n to R^n. Let B be the matrix of T with respect to the basis -e1, e2, -e3, ..., (-1)^n en of R^n. Describe the entries of B in terms of the entries of A.

27. Consider a linear transformation T(x) = Ax from R^n to R^n. Let B be the matrix of T with respect to the basis en, e(n-1), ..., e2, e1 of R^n. Describe the entries of B in terms of the entries of A.

28. This problem refers to Leontief's input-output model, first discussed in Exercises 1.1.20 and 1.2.37. Consider three industries I1, I2, I3, each of which produces only one good, with unit prices p1 = 2, p2 = 5, p3 = 10 (in dollars), respectively. Let A = [a_ij] be the given 3x3 matrix that lists the interindustry demand in terms of dollar amounts. The entry a_ij tells us how many dollars' worth of good i are required to produce one dollar's worth of good j. Alternatively, the interindustry demand can be measured in units of goods by means of the matrix B, where b_ij tells us how many units of good i are required to produce one unit of good j. Find the matrix B for the economy discussed here. Also write an equation relating the three matrices A, B, and S, where

S = [ 2 0 0 ]
    [ 0 5 0 ]
    [ 0 0 10 ]

is the diagonal matrix listing the unit prices on the diagonal. Justify your answer carefully.

29. Consider a real 2x2 matrix A with eigenvalues p +- iq and corresponding eigenvectors v +- iw. Show that the matrix [w v] is invertible. Hint: We can write Av = pv - qw and Aw = qv + pw.

30. Consider a real 2x2 matrix A with eigenvalues p +- iq and corresponding eigenvectors v +- iw. Find the matrix B of the linear transformation T(x) = Ax with respect to the basis consisting of v and w. Compare with Example 5.

DIAGONALIZATION AND SIMILARITY

The introductory example of the last section (pp. 374 and 375) suggests the following result:

Fact 7.2.1
The matrix of a linear transformation with respect to an eigenbasis is diagonal. More specifically, consider a linear transformation T(x) = Ax, where A is an n x n matrix. Suppose B is an eigenbasis for T, consisting of the vectors v1, v2, ..., vn, with A vi = lambda_i vi. Then the matrix B of T with respect to B is

B = S^-1 A S = [ lambda_1 0 ... 0; 0 lambda_2 ... 0; ...; 0 0 ... lambda_n ],

where S = [ v1 v2 ... vn ].

To justify this fact, consider the ith column of B (see Fact 7.1.5):

(ith column of B) = [T(vi)]_B = [lambda_i vi]_B = lambda_i e_i,

the vector with lambda_i in the ith component and zeros elsewhere. Applying this observation to i = 1, 2, ..., n, we prove our claim.
The converse of Fact 7.2.1 is true as well: if the matrix of a linear transformation T with respect to a basis B is diagonal, then B is an eigenbasis for T (Exercise 17). Fact 7.2.1 motivates the following definition:
Definition 7.2.2
Diagonalizable matrices
An n x n matrix A is called diagonalizable if there is an invertible matrix S such that S^-1 A S is diagonal.

As we observed above, S^-1 A S is diagonal if (and only if) the columns of S form an eigenbasis for A. This implies the following result:

Fact 7.2.3
The matrix A is diagonalizable if (and only if) there is an eigenbasis for A. In particular, if an n x n matrix A has n distinct eigenvalues, then A is diagonalizable (by Fact 6.3.6).

A matrix with real entries may be diagonalizable over C, but not over R. For example, the rotation matrix A = [0 -1; 1 0] has no real eigenvectors, so that A is not diagonalizable over R. However, there is a complex eigenbasis for A, namely [i; 1], [-i; 1], with associated eigenvalues i, -i (see Example 6, Section 6.4). Therefore, A is diagonalizable over C. The matrix S = [i -i; 1 1] does the job:

S^-1 A S = [ i 0; 0 -i ].

Note that "most" n x n matrices (with real or complex entries) are diagonalizable over C, because they have n distinct complex eigenvalues. An example of a matrix which is not diagonalizable over C is the shear matrix [1 1; 0 1].

Asking whether a matrix A is diagonalizable is the same as asking whether there is an eigenbasis for A. In Chapter 6 we outlined a method for answering this question. We summarize this process below:

Algorithm 7.2.4
Diagonalization
Suppose we are asked to decide whether an n x n matrix A is diagonalizable and, if so, to find an invertible S such that S^-1 A S is diagonal. This process can be carried out over R, over C, or over any other field.
a. Find the eigenvalues of A, that is, solve the equation f_A(lambda) = 0.
b. For each eigenvalue lambda, find a basis of the eigenspace E_lambda = ker(lambda I_n - A).
c. The matrix A is diagonalizable if (and only if) the dimensions of the eigenspaces add up to n. In this case, we find an eigenbasis v1, v2, ..., vn for A by combining the bases of the eigenspaces we found in step b. Let S = [v1 v2 ... vn]. Then S^-1 A S is diagonal, with the corresponding eigenvalues on the diagonal.

EXAMPLE 1 > Diagonalize the matrix

A = [ 1 0 1 ]
    [ 0 1 0 ]
    [ 1 0 1 ].

Solution
We proceed step by step as outlined above:
a. Find the eigenvalues:

f_A(lambda) = det[ lambda-1 0 -1; 0 lambda-1 0; -1 0 lambda-1 ]
            = (lambda-1)((lambda-1)^2 - 1) = (lambda-1)(lambda^2 - 2 lambda) = (lambda-1)(lambda-2) lambda = 0.

The eigenvalues are 1, 2, and 0, so that A is diagonalizable (over R).
b. Find an eigenvector for each eigenvalue:

E_1 = ker[ 0 0 -1; 0 0 0; -1 0 0 ] = ker[ 1 0 0; 0 0 1; 0 0 0 ] = span([0; 1; 0]).
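The steps of Algorithm 7.2.4 can be sketched numerically: numpy.linalg.eig returns the eigenvalues and a matrix S whose columns are eigenvectors, and one can then check that S^-1 A S is diagonal. The 3x3 matrix below (with eigenvalues 0, 1, and 2) is an illustration, and the library-based eigenvector computation stands in for the hand computation of eigenspaces:

```python
import numpy as np

# A sample diagonalizable matrix with three distinct eigenvalues.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])

eigenvalues, S = np.linalg.eig(A)   # columns of S are eigenvectors

D = np.linalg.inv(S) @ A @ S        # should be diagonal
off_diagonal = D - np.diag(np.diag(D))

print(np.round(np.diag(D), 6))        # the eigenvalues, in numpy's order
print(np.allclose(off_diagonal, 0))   # True: S^-1 A S is diagonal
```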
Similarly, we find eigenvectors for the eigenvalues 2 and 0:

E_2 = ker(2 I_3 - A) = span([1; 0; 1])   and   E_0 = ker(-A) = span([1; 0; -1]).

c. Let

S = [ 0 1 1 ]
    [ 1 0 0 ]
    [ 0 1 -1 ].

Then

S^-1 A S = [ 1 0 0 ]
           [ 0 2 0 ]
           [ 0 0 0 ].

Note that you need not actually compute S^-1 A S. The theoretical work above guarantees that S^-1 A S is diagonal, with the eigenvalues on the diagonal. To check your work, you may wish to verify that S^-1 A S = D or, equivalently, that AS = SD (here we need not compute the inverse of S):

AS = [ 0 2 0; 1 0 0; 0 2 0 ] = SD.

Diagonalization is a useful tool to study the powers of a matrix: if S^-1 A S = D, then A = S D S^-1 and

A^t = (S D S^-1)(S D S^-1) ... (S D S^-1) = S D^t S^-1        (t factors S D S^-1).

Note that D^t is easy to compute: raise the scalars on the diagonal to the tth power. If

D = [ lambda_1 0 ... 0; 0 lambda_2 ... 0; ...; 0 0 ... lambda_n ],   then   D^t = [ lambda_1^t 0 ... 0; 0 lambda_2^t ... 0; ...; 0 0 ... lambda_n^t ].
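The powers formula A^t = S D^t S^-1 can be checked directly; the matrix below is an arbitrary diagonalizable example, not one from the text:

```python
import numpy as np

# A sample 2x2 matrix with distinct eigenvalues 5 and 2, hence diagonalizable.
A = np.array([[4.0, 2.0],
              [1.0, 3.0]])

lam, S = np.linalg.eig(A)
t = 10
Dt = np.diag(lam ** t)                 # raise the diagonal entries to the t-th power
At = S @ Dt @ np.linalg.inv(S)         # A^t = S D^t S^-1

print(np.allclose(At, np.linalg.matrix_power(A, t)))  # True
```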
EXAMPLE 2 > Diagonalize the rotation-dilation matrix

A = [ p -q ]
    [ q  p ],

where p and q are real numbers with q != 0.

Solution
The eigenvalues of A are lambda_{1,2} = p +- iq (see Section 6.5, Example 2). Then

E_{p+iq} = ker[ iq q; -q iq ] = span([i; 1]),

and similarly E_{p-iq} = span([-i; 1]). If

S = [ i -i ]
    [ 1  1 ],

then

S^-1 [ p -q; q p ] S = [ p+iq 0; 0 p-iq ].

Consider a dynamical system

x(t+1) = A x(t)   with initial state x(0) = x0,

where A is diagonalizable, with S^-1 A S = D. Then

x(t) = A^t x0 = S D^t S^-1 x0.

This is a more succinct way to write the formula for dynamical systems derived in Chapter 6. To see why, write

S = [ v1 v2 ... vn ]   and   S^-1 x0 = [x0]_B = [ c1; c2; ...; cn ].

Then

x(t) = S D^t S^-1 x0 = S [ c1 lambda_1^t; c2 lambda_2^t; ...; cn lambda_n^t ]
     = c1 lambda_1^t v1 + c2 lambda_2^t v2 + ... + cn lambda_n^t vn.

This is the formula familiar from Chapter 6 (Fact 6.1.3).
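The closed formula x(t) = c1 lambda_1^t v1 + ... + cn lambda_n^t vn can be sketched as follows; the matrix A and initial state x0 are illustrative choices, not from the text:

```python
import numpy as np

# A sample diagonalizable matrix (eigenvalues 1.0 and 0.7) and initial state.
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])
x0 = np.array([1.0, 0.0])

lam, S = np.linalg.eig(A)
c = np.linalg.solve(S, x0)        # coordinates of x0 with respect to the eigenbasis

def x(t):
    # c1*lam1^t*v1 + c2*lam2^t*v2, computed as one matrix-vector product
    return S @ (c * lam ** t)

# Compare with iterating the system x(t+1) = A x(t) directly.
x_iter = x0
for _ in range(25):
    x_iter = A @ x_iter

print(np.allclose(x(25), x_iter))  # True
```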
Similarity
So far we have focused on the diagonalizable matrices, that is, those matrices for which there is an eigenbasis. If there is no eigenbasis for a linear transformation T(x) = Ax from C^n to C^n, we may still look for a basis B of C^n that is "well adjusted" to T, in the sense that the matrix B of T with respect to B is easier to work with than A itself. To put it differently, for a (nondiagonalizable) matrix A we may wish to find an invertible S such that B = S^-1 A S is of a simpler form than A itself. In view of our interest in dynamical systems, the main requirement will be that the powers B^t of B are easy to compute. Then we can "do" the dynamical system x(t+1) = A x(t):

x(t) = A^t x0 = (S B S^-1)^t x0 = S B^t S^-1 x0.

For the discussion of this problem the following terminology is useful:

Definition 7.2.5
Similar matrices
Consider two n x n matrices A and B. We say that A is similar to B if there is an invertible matrix S such that B = S^-1 A S.

In terms of linear transformations, this means that A is similar to B if B is the matrix of T(x) = Ax with respect to some basis B. Note that a matrix by definition is diagonalizable if it is similar to a diagonal matrix. The verification of the following facts is left as Exercise 20:
- An n x n matrix A is similar to itself.
- If A is similar to B, then B is similar to A.
- If A is similar to B, and B is similar to C, then A is similar to C.

As the term suggests, similar matrices have many properties in common:

Fact 7.2.6
Suppose A is similar to B. Then
a. f_A(lambda) = f_B(lambda).
b. A and B have the same eigenvalues, with the same algebraic and geometric multiplicities.
c. det(A) = det(B).
d. tr(A) = tr(B).
e. rank(A) = rank(B).

We will demonstrate claim a; the verification of the other claims is left to the reader (Exercises 27, 28, 61). For all scalars lambda,

f_B(lambda) = det(lambda I_n - B) = det(lambda I_n - S^-1 A S) = det(S^-1 (lambda I_n - A) S)
            = det(S^-1) det(lambda I_n - A) det(S) = det(lambda I_n - A) = f_A(lambda).

EXAMPLE 3 > Find all matrices B similar to I_n.

Solution
If B is similar to I_n, then B = S^-1 I_n S = I_n. Therefore, the only matrix similar to I_n is I_n itself.

EXAMPLE 4 > Are the two given matrices A and B similar?

Solution
Find the eigenvalues of A and B. It turns out that both matrices have the eigenvalues 2 and 3. This means that both A and B are diagonalizable, with diagonal matrix D = [2 0; 0 3]. Because A and B are both similar to D, they are similar to each other.

EXAMPLE 5 > Consider a real 2x2 matrix A with eigenvalues p +- iq and corresponding eigenvectors v +- iw. In Example 5 of Section 7.1 we have seen that A is similar to

B = [ p -q ]
    [ q  p ].

More specifically, B = S^-1 A S, where S = [w v]. Use this similarity to derive a real closed formula for x(t) = A^t x0.

Solution
We can write

B = r [ cos(phi) -sin(phi) ]
      [ sin(phi)  cos(phi) ],

a rotation-dilation, so that

B^t = r^t [ cos(phi t) -sin(phi t) ]
          [ sin(phi t)  cos(phi t) ].

Then

x(t) = A^t x0 = S B^t S^-1 x0 = r^t [ w v ] [ cos(phi t) -sin(phi t); sin(phi t) cos(phi t) ] [ a; b ],

where [a; b] is the coordinate vector of x0 with respect to the basis w, v. Recall that we first stated this formula in Fact 6.5.3.

Consider a nondiagonalizable matrix A. Is A similar to some matrix B "in simple form"? In this introductory text, we will not address this issue in full generality. If you are interested, read the chapter on "Jordan Normal Form" in an advanced linear algebra text (for example, Section 6.2 in the text Linear Algebra by Friedberg, Insel, and Spence; Prentice Hall). We conclude this section with three examples concerning nondiagonalizable 2x2 matrices.
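The shared invariants of Fact 7.2.6 can be spot-checked numerically for a particular pair of similar matrices; the matrices A and S below are arbitrary examples:

```python
import numpy as np

# Build a pair of similar matrices B = S^-1 A S and compare their invariants.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
S = np.array([[1.0, 2.0],
              [1.0, 1.0]])           # any invertible matrix works here

B = np.linalg.inv(S) @ A @ S

print(np.allclose(np.trace(A), np.trace(B)))                 # True: same trace
print(np.allclose(np.linalg.det(A), np.linalg.det(B)))       # True: same determinant
print(np.allclose(np.poly(A), np.poly(B)))                   # True: same characteristic polynomial
print(np.linalg.matrix_rank(A) == np.linalg.matrix_rank(B))  # True: same rank
```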
EXAMPLE 6 > Consider a real 2x2 matrix A (other than I_2) such that f_A(lambda) = (lambda - 1)^2. Show that A represents a shear.

Solution
Choose a nonzero vector v in the one-dimensional eigenspace E_1 of A, and a vector w that is not contained in E_1. By Definition 2.2.4, we have to show that Aw - w is a scalar multiple of v. See Figure 1. Consider the matrix B of the transformation T(x) = Ax with respect to the basis v, w. Since Av = v, the first column of B is [1; 0]:

B = [ 1 a ]
    [ 0 b ].

Since f_A(lambda) = f_B(lambda) = (lambda - 1)(lambda - b), we have b = 1, so that

B = [ 1 a ]
    [ 0 1 ].

The second column of B tells us that Aw = av + w, or

Aw - w = av,

as claimed. Note that we can write the transformation T in even simpler form: the matrix of T with respect to the basis Aw - w, w is [1 1; 0 1].

Figure 1

EXAMPLE 7 > Consider the dynamical system x(t+1) = A x(t) with the given matrix A and initial state x0. Find a closed formula for x(t) and sketch the trajectory.

Solution
We have f_A(lambda) = lambda^2 - 2 lambda + 1 = (lambda - 1)^2, so that A represents a shear, by Example 6. By definition of a shear, x(0), x(1), x(2), ... will be equally spaced points along a straight line parallel to E_1, as sketched in Figure 2. Note that

x(t) = x0 + t(x(1) - x0) = x0 + t(A x0 - x0).

In our example, we compute x(1) = A x0 and find the trajectory sketched in Figure 2.

Figure 2

EXAMPLE 8 > Consider the dynamical system x(t+1) = A x(t) with x0 = [0; 1], where A is the given 2x2 matrix. Interpret the linear transformation T(x) = Ax geometrically, and find a closed formula for x(t).

Solution
We have f_A(lambda) = lambda^2 - lambda + 1/4 = (lambda - 1/2)^2; the eigenvalues of A are lambda_1 = lambda_2 = 1/2. Therefore, the matrix M = 2A has the eigenvalue 1 with algebraic multiplicity 2. By Example 6, the matrix M represents a shear. Thus the matrix A = (1/2)M represents a shear-dilation, that is, a shear followed by a dilation by a factor of 1/2. We can find M^t x0 as in Example 7:

M^t x0 = x0 + t(M x0 - x0) = [0; 1] + t[2; 0] = [2t; 1].
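The shear criterion of Example 6 and the trajectory formula of Example 7 can be verified for a sample shear matrix; the matrix and initial state below are illustrative, not the ones in the text:

```python
import numpy as np

# A sample shear matrix: f_A(lambda) = (lambda - 1)^2, equivalently (A - I)^2 = 0.
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
I = np.eye(2)
x0 = np.array([0.0, 1.0])

assert np.allclose((A - I) @ (A - I), 0)   # the nilpotent part squares to zero

def x_closed(t):
    # Equally spaced points along a line parallel to E_1.
    return x0 + t * (A @ x0 - x0)

x_iter = x0
for t in range(1, 6):
    x_iter = A @ x_iter
    assert np.allclose(x_iter, x_closed(t))

print("closed formula matches iteration")
```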
Then

x(t) = A^t x0 = (1/2)^t M^t x0 = (1/2)^t [ 2t; 1 ].

See Figure 3.

Figure 3

Let us summarize our observations in Examples 6 and 7: consider an invertible real 2x2 matrix A that is nondiagonalizable over C. Then f_A(lambda) = (lambda - lambda_0)^2 for some nonzero lambda_0, and A represents a shear-dilation, that is, a shear followed by a dilation by a factor of lambda_0. Note that we now understand the dynamical system x(t+1) = A x(t) for all real 2x2 matrices A (the case when f_A(lambda) = lambda^2 is left as Exercise 56).

EXERCISES
GOALS Use the concept of a diagonalizable matrix and the idea of similarity of matrices. Analyze the dynamical system x(t+1) = A x(t) for any real 2x2 matrix A and sketch a phase portrait.
Decide which of the matrices A in Exercises 1 to 12 are diagonalizable (over R). If possible, find an invertible S and a diagonal D such that S^-1 A S = D. Do not use technology.

1.-12. (the given matrices)

13. Diagonalize the given matrix A over C.

14. Diagonalize the given matrix A over C.

15. Consider the given 2x2 matrix A. Verify that A is a shear matrix, and find an invertible matrix S such that S^-1 A S is of the given form.

16. Consider the given matrix A. Find an invertible matrix S such that S^-1 A S is of the given form.

17. Show that if the matrix of a linear transformation T with respect to a basis B is diagonal, then B is an eigenbasis for T.

18. Let A be the given 2x2 matrix. Find a closed formula for A^t = [a(t) b(t); c(t) d(t)], entry by entry. What can you say about the proportion a(t) : b(t) : c(t) : d(t) as t goes to infinity?

19. Let A be the given 2x2 matrix. Find a closed formula for A^t, entry by entry.

20. Justify the following facts: If A, B, and C are n x n matrices, then
- A is similar to itself.
- If A is similar to B, then B is similar to A.
- If A is similar to B, and B is similar to C, then A is similar to C.

21. Are the two given matrices similar?

22. Are the two given matrices similar?

23. If two matrices A and B both represent a shear in R^2, is B necessarily similar to A?
24. In Example 4 we found that the given matrices A and B are similar. Find an invertible matrix S such that B = S^-1 A S.

25. Consider two 2x2 matrices A and B with det(A) = det(B) and tr(A) = tr(B). Are A and B necessarily similar?

26. Consider two 2x2 matrices A and B with det(A) = det(B) = 7 and tr(A) = tr(B) = 7. Are A and B necessarily similar?

27. a. Consider the matrices A and B = S^-1 A S. Show that if x is in the kernel of B, then Sx is in the kernel of A.
b. Show that rank(A) = rank(B).

28. Consider the matrices A and B = S^-1 A S. Suppose x is an eigenvector of B, with associated eigenvalue lambda. Show that Sx is an eigenvector of A with the same eigenvalue. Compare the geometric multiplicities of lambda as an eigenvalue of A and of B.

For the matrices A in Exercises 29 to 36, find closed formulas for the components of the dynamical system x(t+1) = A x(t) with the given x(0). Sketch the trajectory.

29.-36. (the given matrices)

37. Are the two given matrices A and B similar? Hint: Consider A^2 and B^2.

38. True or false? If two n x n matrices A and B have the same eigenvalues, with the same algebraic and geometric multiplicities, then A and B are similar.

39. If A and B are invertible, and A is similar to B, is A^-1 similar to B^-1?

40. If A is similar to B, and A is invertible, is B necessarily invertible?

41. If A is similar to B, is A^T necessarily similar to B^T?

42. If A is diagonalizable, is A^T similar to A?

43. If A is diagonalizable, is A^2 diagonalizable as well?

44. If A is diagonalizable and t is an arbitrary positive integer, is A^t diagonalizable as well?

45. If A is diagonalizable and invertible, is A^-1 diagonalizable as well?

46. If A is invertible, is A necessarily diagonalizable over C? Conversely, if A is diagonalizable, is A necessarily invertible?

47. If A and B are diagonalizable, is A + B necessarily diagonalizable?

48. If A and B are diagonalizable, is AB necessarily diagonalizable?

49. If A^2 is diagonalizable, is A itself necessarily diagonalizable?

50. If A and B are n x n matrices and A is invertible, show that AB is similar to BA.

51. Give an example of two (noninvertible) 2x2 matrices A and B such that AB is not similar to BA.

52. a. Consider two matrices A and B, where A is invertible. Show that the matrices AB and BA have the same characteristic polynomial (and therefore the same eigenvalues, with the same algebraic multiplicities). Hint: Exercise 50 is helpful.
b. Now consider two n x n matrices A and B which may or may not be invertible. Show that AB and BA have the same characteristic polynomial, and therefore the same eigenvalues. Hint: For a fixed scalar lambda, consider the function

f(x) = det(lambda I_n - (A - x I_n)B) - det(lambda I_n - B(A - x I_n)).

Show that f(x) is a polynomial in x. What can you say about its degree? Explain why f(x) = 0 when x is not an eigenvalue of A. Conclude that f(x) = 0 for all x; in particular, f(0) = 0.

53. Find closed formulas for the entries of the tth power of the given shear matrix.

54. Show that [the given statement holds].

55. Consider a real 2x2 matrix A (other than I_2) such that tr(A) = 2 and det(A) = 1. Is the linear transformation T(x) = Ax necessarily a shear? Explain.
56. Let A be a 2x2 matrix with characteristic polynomial lambda^2. Show that A is similar to the given matrix. What can you say about A^2?

57. Show that any 2x2 matrix is similar to its transpose.

58. Show that any complex 3x3 matrix is similar to an upper triangular 3x3 matrix.

59. Consider two real n x n matrices A and B which are "similar over C"; that is, there is a complex invertible n x n matrix S such that B = S^-1 A S. Show that A and B are in fact "similar over R"; that is, there is a real R such that B = R^-1 A R. Hints: Write S = S1 + i S2, where S1 and S2 are real. Consider the function f(x) = det(S1 + x S2), where x is a complex variable. Show that f(x) is a nonzero polynomial. Conclude that there is a real number x0 such that f(x0) != 0. Show that R = S1 + x0 S2 does the job.

60. In this exercise we will show that the geometric multiplicity of an eigenvalue is less than or equal to the algebraic multiplicity. Suppose lambda_0 is an eigenvalue of an n x n matrix A with geometric multiplicity d.
a. Explain why there is a basis B of R^n whose first d vectors are eigenvectors of A, with eigenvalue lambda_0.
b. Let B be the matrix of the linear transformation T(x) = Ax with respect to B. What do the first d columns of B look like?
c. Explain why the characteristic polynomial of B is of the form f_B(lambda) = (lambda - lambda_0)^d g(lambda), for some polynomial g(lambda). Conclude that the algebraic multiplicity of lambda_0 as an eigenvalue of B (and A) is at least d.

61. Consider two similar matrices A and B. Show that
a. A and B have the same eigenvalues, with the same algebraic multiplicities.
b. det(A) = det(B).
c. tr(A) = tr(B).

62. We say that two n x n matrices A and B are simultaneously diagonalizable if there is an n x n matrix S such that S^-1 A S and S^-1 B S are both diagonal.
a. Are the two given matrices simultaneously diagonalizable? Explain.
b. Show that if A and B are simultaneously diagonalizable, then AB = BA.
c. Give an example of two n x n matrices such that AB = BA, but A and B are not simultaneously diagonalizable.
d. Let D be a diagonal n x n matrix with n distinct entries on the diagonal. Find all n x n matrices B which commute with D.
e. Show that if AB = BA and A has n distinct eigenvalues, then A and B are simultaneously diagonalizable. Hint: Part d is useful.

63. Consider a diagonalizable n x n matrix A with m distinct eigenvalues lambda_1, ..., lambda_m. Show that (A - lambda_1 I_n)(A - lambda_2 I_n) ... (A - lambda_m I_n) = 0.

64. Consider a diagonalizable n x n matrix A with characteristic polynomial f_A(lambda) = lambda^n + a_{n-1} lambda^{n-1} + ... + a_1 lambda + a_0. Show that f_A(A) = A^n + a_{n-1} A^{n-1} + ... + a_1 A + a_0 I_n = 0.

65. Consider a real 2x2 matrix A with f_A(lambda) = (lambda - 1)^2. Find f_A(A). Compare with Exercise 64.

66. Is there a 3x3 matrix A with all of the following properties?
- All eigenvalues of A are integers.
- A is not diagonalizable over C.
- det(A^T A) = 36.
Give an example, or show that none can exist.

67. True or false? If 2 + 3i is an eigenvalue of a real 3x3 matrix A, then A is diagonalizable over C.

68. For real 2x2 matrices A, give a complete proof of Fact 6.5.2.
SYMMETRIC MATRICES

In this section we will work over R, except for a brief digression into C in the discussion of Fact 7.3.3. Our work in the last five sections dealt with the following central question: When is a given square matrix A diagonalizable; that is, when is there an eigenbasis for A? In geometry, we prefer to work with orthonormal bases, which raises the question: For which matrices is there an orthonormal eigenbasis? Or, equivalently: for which matrices A is there an orthogonal matrix S such that S^-1 A S = S^T A S is diagonal?
(Recall that S^-1 = S^T for orthogonal matrices, by Fact 4.3.7.) We say that A is orthogonally diagonalizable if there is an orthogonal S such that S^-1 A S = S^T A S is diagonal. Then the question is: Which matrices are orthogonally diagonalizable? Simple examples of orthogonally diagonalizable matrices are diagonal matrices (we can set S = I_n) and the matrices of orthogonal projections and reflections (exercise).
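For symmetric matrices, numpy provides an orthogonal diagonalization directly via numpy.linalg.eigh: the columns of the returned matrix are unit eigenvectors, so S is orthogonal and S^T A S is diagonal. The symmetric matrix below is a sample with eigenvalues 3 and 8:

```python
import numpy as np

# A sample symmetric matrix.
A = np.array([[7.0, 2.0],
              [2.0, 4.0]])

lam, S = np.linalg.eigh(A)   # eigenvalues in ascending order, orthonormal eigenvectors

print(np.allclose(S.T @ S, np.eye(2)))          # True: S is orthogonal
print(np.allclose(S.T @ A @ S, np.diag(lam)))   # True: S^T A S is diagonal
print(np.round(lam, 6))                          # [3. 8.]
```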
EXAMPLE 1 > If A is orthogonally diagonalizable, what is the relationship between A^T and A?

Solution
We have

S^-1 A S = D   or   A = S D S^-1 = S D S^T

for an orthogonal S and a diagonal D. Then

A^T = (S D S^T)^T = S D^T S^T = S D S^T = A.

We find that A^T = A.

Surprisingly, the converse is true as well:

Fact 7.3.1
Spectral theorem
A matrix A is orthogonally diagonalizable (that is, there is an orthogonal S such that S^-1 A S = S^T A S is diagonal) if and only if A is symmetric (that is, A^T = A).

We will prove this theorem later in this section, based on two preliminary results, Facts 7.3.2 and 7.3.3. First we will illustrate the spectral theorem with an example.

EXAMPLE 2 > For the symmetric matrix A = [7 2; 2 4], find an orthogonal S such that S^-1 A S is diagonal.

Solution
We will first find an eigenbasis. The eigenvalues of A are 3 and 8, with corresponding eigenvectors [1; -2] and [2; 1], respectively. See Figure 1. Note that the two eigenspaces, E_3 and E_8, are perpendicular (this is no coincidence, as we will see in Fact 7.3.2). Therefore, we can find an orthonormal eigenbasis simply by dividing the given eigenvectors by their lengths:

u1 = (1/sqrt(5)) [1; -2],   u2 = (1/sqrt(5)) [2; 1].

If we define the orthogonal matrix

S = [u1 u2] = (1/sqrt(5)) [ 1 2 ]
                          [ -2 1 ],

then S^-1 A S will be diagonal, namely, S^-1 A S = [3 0; 0 8].

Figure 1

The key observation we made in Example 2 generalizes as follows:

Fact 7.3.2
Consider a symmetric matrix A. If v1 and v2 are eigenvectors of A with distinct eigenvalues lambda_1 and lambda_2, then v1 . v2 = 0; that is, v2 is orthogonal to v1.

Proof We compute the product v1^T A v2 in two different ways:

v1^T A v2 = v1^T (lambda_2 v2) = lambda_2 (v1 . v2),
v1^T A v2 = v1^T A^T v2 = (A v1)^T v2 = (lambda_1 v1)^T v2 = lambda_1 (v1 . v2).

Comparing the results, we find

lambda_1 (v1 . v2) = lambda_2 (v1 . v2),   or   (lambda_1 - lambda_2)(v1 . v2) = 0.
Since the first factor in this product, lambda_1 - lambda_2, is nonzero, the second factor, v1 . v2, must be zero, as claimed.

Fact 7.3.2 tells us that the eigenspaces of a symmetric matrix are perpendicular to one another. Here is another illustration of this property:

EXAMPLE 3 > For the symmetric matrix

A = [ 1 1 1 ]
    [ 1 1 1 ]
    [ 1 1 1 ],

find an orthogonal S such that S^-1 A S is diagonal.

Solution
The eigenvalues are 0 and 3, with

E_0 = span([-1; 1; 0], [-1; 0; 1])   and   E_3 = span([1; 1; 1]).

Note that the two eigenspaces are indeed perpendicular to one another, in accordance with Fact 7.3.2; see Figure 2 (the eigenspaces E_0 and E_3 are orthogonal complements). We can construct an orthonormal eigenbasis for A by picking an orthonormal basis of each eigenspace (using Gram-Schmidt in the case of E_0); see Figure 3. In Figure 3, the vectors u1, u2 form an orthonormal basis of E_0, and u3 is a unit vector in E_3. Then u1, u2, u3 is an orthonormal eigenbasis for A. We can let S = [u1 u2 u3] to diagonalize A orthogonally. If we apply Gram-Schmidt(1) to the vectors spanning E_0, we find

u1 = (1/sqrt(2)) [-1; 1; 0]   and   u2 = (1/sqrt(6)) [-1; -1; 2].

The computations are left as an exercise. For E_3 we get

u3 = (1/sqrt(3)) [1; 1; 1].

Therefore, the orthogonal matrix

S = [ -1/sqrt(2)  -1/sqrt(6)  1/sqrt(3) ]
    [  1/sqrt(2)  -1/sqrt(6)  1/sqrt(3) ]
    [      0       2/sqrt(6)  1/sqrt(3) ]

(1) Alternatively, we could find a unit vector u1 in E_0 and a unit vector u3 in E_3, and then set u2 = u3 x u1.
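The Gram-Schmidt construction of such an orthonormal eigenbasis can be carried out numerically for the all-ones 3x3 matrix, a symmetric matrix whose eigenspaces are E_0 (the plane x + y + z = 0) and E_3 = span([1; 1; 1]):

```python
import numpy as np

A = np.ones((3, 3))

w1 = np.array([-1.0, 1.0, 0.0])   # spanning vectors of E_0
w2 = np.array([-1.0, 0.0, 1.0])
v3 = np.array([1.0, 1.0, 1.0])    # spans E_3

u1 = w1 / np.linalg.norm(w1)
w2_perp = w2 - (w2 @ u1) * u1     # Gram-Schmidt: remove the component along u1
u2 = w2_perp / np.linalg.norm(w2_perp)
u3 = v3 / np.linalg.norm(v3)

S = np.column_stack([u1, u2, u3])
print(np.allclose(S.T @ S, np.eye(3)))   # True: S is orthogonal
print(np.round(S.T @ A @ S, 6))          # diagonal, with entries 0, 0, 3
```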
406 •
Sec. 7.3 Symmetrie Matrices •
Chap. 7 Coordinate Systems
40 7
For a I x 1 matrix A, we can Iet S = [1). Now ass ume the claim is true for n  1; we show that it holds for n . Pick a :eal eigenvalue A. of A (this is possible by Fact 7 .3.3) and choose an eigenvector v1 of length 1 for A.. We can find an orthonormal basis ii 1, ii 2 , . . • , iin of JRil (think about how you could construct such a basis). Form the orthogonal matrix
diagonalizes the matrix A:
s 1 AS = 00 00 0] 0 [0 0 3
By Fact 7.3.2, if a symmetric matrix is diagonalizable, then it is orthogonally diagonalizable. We still have to show that symmetric matrices are diagonalizable in the first place (over ℝ). The key point is the following observation:

Fact 7.3.3
The eigenvalues of a symmetric matrix A are real.

Proof  (This proof may be skipped in a first reading of this text without harming your understanding of the concepts.) Consider two complex conjugate eigenvalues p ± iq of A with corresponding eigenvectors v̄ ± iw̄ (compare with Exercise 6.5.31b). We wish to show that these eigenvalues are in fact real, that is, q = 0. Note first that

(v̄ + iw̄)ᵀ(v̄ − iw̄) = ‖v̄‖² + ‖w̄‖²

(verify this). Now we compute the product

(v̄ + iw̄)ᵀ A (v̄ − iw̄)

in two different ways:

(v̄ + iw̄)ᵀ A (v̄ − iw̄) = (v̄ + iw̄)ᵀ (p − iq)(v̄ − iw̄) = (p − iq)(‖v̄‖² + ‖w̄‖²),
(v̄ + iw̄)ᵀ A (v̄ − iw̄) = (A(v̄ + iw̄))ᵀ (v̄ − iw̄) = (p + iq)(v̄ + iw̄)ᵀ(v̄ − iw̄) = (p + iq)(‖v̄‖² + ‖w̄‖²),

where the second computation uses the symmetry of A. Comparing the results, we find that p + iq = p − iq, so that q = 0, as claimed. ∎

The proof above is not very enlightening. A more transparent proof would follow if we were to define the dot product for complex vectors, but to do so would lead us too far afield.

We are now ready to prove Fact 7.3.1: symmetric matrices are orthogonally diagonalizable. Even though this is not logically necessary, let us first examine the case of a symmetric n × n matrix A with n distinct eigenvalues (recall that this is the case for "most" matrices). By Fact 7.3.3, the n distinct eigenvalues are real. For each eigenvalue, we can choose an eigenvector of length 1. By Fact 7.3.2, these eigenvectors will form an orthonormal eigenbasis; that is, the matrix A will be orthogonally diagonalizable, as claimed.

Proof (of Fact 7.3.1): This proof is somewhat technical; it may be skipped in a first reading of this text without harm. We prove by induction on n that a symmetric n × n matrix A is orthogonally diagonalizable. For a 1 × 1 matrix A, we can let S = [1]. Now assume the claim is true for n − 1; we show that it holds for n. Pick a real eigenvalue λ of A (this is possible by Fact 7.3.3), and choose an eigenvector v̄₁ of length 1 for λ. We can find an orthonormal basis v̄₁, v̄₂, …, v̄ₙ of ℝⁿ (think about how you could construct such a basis). Form the orthogonal matrix

P = [ v̄₁ v̄₂ ⋯ v̄ₙ ]

and compute P⁻¹AP. The first column of P⁻¹AP is λē₁ (why?). Also note that P⁻¹AP = PᵀAP is symmetric: (PᵀAP)ᵀ = PᵀAᵀP = PᵀAP, because A is symmetric. Combining these two statements, we conclude that P⁻¹AP is of the form

P⁻¹AP = [ λ  0
          0  B ],    (I)

where B is a symmetric (n − 1) × (n − 1) matrix. By induction, B is orthogonally diagonalizable; that is, there is an orthogonal (n − 1) × (n − 1) matrix Q such that

Q⁻¹BQ = D    (II)

is a diagonal (n − 1) × (n − 1) matrix. Now introduce the orthogonal n × n matrix

R = [ 1  0
      0  Q ].

Then, combining equations (I) and (II) above, we find that

R⁻¹P⁻¹APR = [ λ  0
              0  D ]    (III)

is diagonal. Consider the orthogonal matrix S = PR (recall Fact 4.3.4a: the product of orthogonal matrices is orthogonal). Note that S⁻¹ = (PR)⁻¹ = R⁻¹P⁻¹. Therefore, equation (III) can be written

S⁻¹AS = [ λ  0
          0  D ],

proving our claim. ∎
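The two facts just proved can be illustrated numerically. A sketch with NumPy (the random symmetric matrix here is an arbitrary sample, not part of the text):

```python
import numpy as np

# Illustration of Facts 7.3.1-7.3.3: a symmetric matrix has real
# eigenvalues, and its unit eigenvectors form an orthogonal matrix S
# that diagonalizes it.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M + M.T                          # force symmetry

eigenvalues, S = np.linalg.eigh(A)   # eigh is tailored to symmetric matrices

assert np.all(np.isreal(eigenvalues))                # real eigenvalues
assert np.allclose(S.T @ S, np.eye(4))               # S is orthogonal
assert np.allclose(S.T @ A @ S, np.diag(eigenvalues))  # S^{-1} A S diagonal
```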
The method outlined in the proof of Fact 7.3.1 is not a sensible way to find the matrix S in a numerical example. Rather, we can proceed as in Example 3:

Algorithm 7.3.4  Orthogonal diagonalization of a symmetric matrix A
a. Find the eigenvalues of A, and find a basis of each eigenspace.
b. Using Gram–Schmidt, find an orthonormal basis of each eigenspace.
c. Form an orthonormal eigenbasis v̄₁, v̄₂, …, v̄ₙ for A by combining the vectors you found in part b, and let S = [v̄₁ v̄₂ ⋯ v̄ₙ]. Then S is orthogonal (by Fact 7.3.2), and S⁻¹AS will be diagonal (by Fact 7.2.1).

We conclude this section with an example of a geometric nature:

EXAMPLE 4 ► Consider an invertible symmetric 2 × 2 matrix A. Show that the linear transformation T(x̄) = Ax̄ maps the unit circle into an ellipse, and find the lengths of the semimajor and the semiminor axes of this ellipse in terms of the eigenvalues of A. Compare with Exercise 2.2.50.

Solution
The spectral theorem tells us that there is an orthonormal eigenbasis v̄₁, v̄₂ for T, with associated real eigenvalues λ₁ and λ₂. Suppose that |λ₁| ≥ |λ₂|. These eigenvalues will be nonzero, since A is invertible. The unit circle in ℝ² consists of all vectors of the form

x̄ = cos(t) v̄₁ + sin(t) v̄₂.

The image of the unit circle consists of the vectors

T(x̄) = cos(t) T(v̄₁) + sin(t) T(v̄₂) = cos(t) λ₁v̄₁ + sin(t) λ₂v̄₂,

an ellipse whose semimajor axis λ₁v̄₁ has the length ‖λ₁v̄₁‖ = |λ₁|, while the length of the semiminor axis is ‖λ₂v̄₂‖ = |λ₂|. See Figure 4 (the unit circle and its image, an ellipse). In the example illustrated in Figure 4, the eigenvalue λ₁ is positive and λ₂ is negative. ◄

EXERCISES
GOALS  Find orthonormal eigenbases for symmetric matrices. Apply the spectral theorem.

For each of the matrices in Exercises 1 to 6, find an orthonormal eigenbasis. Do not use technology.

For each of the matrices A in Exercises 7 to 11, find an orthogonal matrix S and a diagonal matrix D such that S⁻¹AS = D. Do not use technology.

12. Let L: ℝ³ → ℝ³ be a reflection in the line spanned by
a. Find an orthonormal eigenbasis 𝔅 for L.
b. Find the matrix B of L with respect to 𝔅.
c. Find the matrix A of L with respect to the standard basis of ℝ³.
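The geometric claim of Example 4 can be checked numerically. The sketch below (NumPy) uses an arbitrarily chosen invertible symmetric matrix — an assumption for illustration — and confirms that the longest and shortest image vectors of the unit circle have lengths |λ₁| and |λ₂|:

```python
import numpy as np

# Example 4 check: the image of the unit circle under T(x) = Ax is an
# ellipse with semi-axis lengths |lambda_1| >= |lambda_2|.
A = np.array([[3.0, 1.0],
              [1.0, -2.0]])          # sample invertible symmetric matrix

eigenvalues = np.linalg.eigvalsh(A)   # real, since A is symmetric

t = np.linspace(0, 2 * np.pi, 3601)
circle = np.vstack([np.cos(t), np.sin(t)])      # points on the unit circle
image_lengths = np.linalg.norm(A @ circle, axis=0)

semimajor, semiminor = image_lengths.max(), image_lengths.min()
expected = np.sort(np.abs(eigenvalues))[::-1]   # |lambda_1|, |lambda_2|

assert np.allclose([semimajor, semiminor], expected, atol=1e-5)
```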
13. Consider a symmetric 3 × 3 matrix A with A² = I₃. Is the linear transformation T(x̄) = Ax̄ necessarily the reflection in a subspace of ℝ³?

14. In Example 3 of this section we diagonalized the matrix

A = [ 1 1 1
      1 1 1
      1 1 1 ]

by means of an orthogonal matrix S. Use this result to diagonalize the following matrices orthogonally (find an orthogonal S and a diagonal D in each case):

15. If A is invertible and orthogonally diagonalizable, is A⁻¹ orthogonally diagonalizable as well?

16. a. Find the eigenvalues of the matrix
A=
1
1
J 1 1
1
1
1 1 1
1 1 1
1
J 1 1 1
1 1
with their multiplicities (note that the algebraic multiplicity agrees with the geometric multiplicity; why?). Hint: What is the kernel of A?
b. Find the eigenvalues of the matrix
1 3
1 1
l
1
3
1
1 1
3
B=
1
J J 1 3
19. Consider a linear transformation L from ℝⁿ to ℝᵐ. Show that there is an orthonormal basis v̄₁, v̄₂, …, v̄ₙ of ℝⁿ such that the vectors L(v̄₁), L(v̄₂), …, L(v̄ₙ) are orthogonal (note: some of the vectors L(v̄ᵢ) may be zero). Hint: Consider an orthonormal eigenbasis v̄₁, v̄₂, …, v̄ₙ for the symmetric matrix AᵀA.

20. Consider a linear transformation T from ℝⁿ to ℝᵐ, where n does not exceed m. Show that there is an orthonormal basis v̄₁, …, v̄ₙ of ℝⁿ and an orthonormal basis w̄₁, …, w̄ₘ of ℝᵐ such that T(v̄ᵢ) is a scalar multiple of w̄ᵢ, for i = 1, …, n. Hint: Exercise 19 is helpful.
o
2 0 k 0
l k
A
=
~
0 2 0 k
~]
where k is a constant.
a. Find a value of k such that the matrix A is diagonalizable.
b. Find a value of k such that A is not diagonalizable.

23. If an n × n matrix A is both symmetric and orthogonal, what can you say about the eigenvalues of A? What about the eigenspaces? Interpret the linear transformation T(x̄) = Ax̄ geometrically in the cases n = 2 and n = 3.

24. Consider the matrix
0
0
0 0
1
A=
1 3
18. Consider some unit vectors v̄₁, …, v̄ₙ in ℝⁿ such that the angle between v̄ᵢ and v̄ⱼ is 60° for all i ≠ j. Find the n-volume of the n-parallelepiped spanned by v̄₁, …, v̄ₙ. Hint: Let A = [v̄₁ ⋯ v̄ₙ] and think about the matrix AᵀA and its determinant. Exercise 17 is useful.

21. Consider a symmetric 3 × 3 matrix A with eigenvalues 1, 2, 3. How many different orthogonal matrices S are there such that S⁻¹AS is diagonal?

22. Consider the matrix
with their multiplicities. Do not use technology.
c. Use your result in part b to find det(B).

17. Use the approach of Exercise 16 to find the determinant of the n × n matrix B that has p's on the diagonal and q's elsewhere.
A = J₄ = [ 0 0 0 1
           0 0 1 0
           0 1 0 0
           1 0 0 0 ].
Find an orthonormal eigenbasis for A.

25. Consider the matrix
A = J₅ = [ 0 0 0 0 1
           0 0 0 1 0
           0 0 1 0 0
           0 1 0 0 0
           1 0 0 0 0 ].
Find an orthogonal 5 × 5 matrix S such that S⁻¹AS is diagonal.

26. Let Jₙ be the n × n matrix with all ones on the "other diagonal" and zeros elsewhere. (In Exercises 24 and 25 we studied J₄ and J₅, respectively.) Find the eigenvalues of Jₙ, with their multiplicities.
b. Consider a complex n × n matrix A that has zero as its only eigenvalue (with algebraic multiplicity n). Use Exercise 37 to show that A is nilpotent.

39. Let us first introduce two notations. For a complex n × n matrix A, let |A| be the matrix whose ijth entry is |aᵢⱼ|. For two real n × n matrices A and B, we write A ≤ B if aᵢⱼ ≤ bᵢⱼ for all entries. Show that
a. |AB| ≤ |A| |B|, for all complex n × n matrices A and B, and
b. |Aᵗ| ≤ |A|ᵗ, for all complex n × n matrices A and all positive integers t.

40. Let U ≥ 0 be a real upper triangular n × n matrix with zeros on the diagonal. Show that

(Iₙ + U)ᵗ ≤ tⁿ(Iₙ + U + U² + ⋯ + Uⁿ⁻¹)

for all positive integers t. See Exercises 38 and 39.
27. Diagonalize the n × n matrix with all ones along both diagonals (the main diagonal and the "other diagonal") and zeros elsewhere.

28. Diagonalize the 13 × 13 matrix with all ones in the last row and the last column, and zeros elsewhere.

29. Consider a symmetric matrix A. If the vector v̄ is in the image of A and w̄ is in the kernel of A, is v̄ necessarily orthogonal to w̄? Justify your answer.

30. Consider an orthogonal matrix R whose first column is v̄. Form the symmetric matrix A = v̄v̄ᵀ. Find an orthogonal matrix S and a diagonal matrix D such that SᵀAS = D. Describe S in terms of R.

31. True or false? If A is a symmetric matrix, then rank(A) = rank(A²).

32. Consider the n × n matrix with all ones on the main diagonal and all q's elsewhere. For which choices of q is this matrix invertible? Hint: Exercise 17 is helpful.

33. For which angle(s) α can you find three distinct unit vectors in ℝ² such that the angle between any two of them is α? Draw a sketch.

34. For which angle(s) α can you find four distinct unit vectors in ℝ³ such that the angle between any two of them is α? Draw a sketch.

35. Consider n + 1 distinct unit vectors in ℝⁿ such that the angle between any two of them is α. Find α.

36. Consider a symmetric n × n matrix A with A² = A. Is the linear transformation T(x̄) = Ax̄ necessarily the orthogonal projection onto a subspace of ℝⁿ?

37. We say that an n × n matrix A is triangulizable if A is similar to an upper triangular n × n matrix B.
a. Give an example of a matrix with real entries that is not triangulizable over ℝ.
b. Show that any n × n matrix with complex entries is triangulizable over ℂ. Hint: Give a proof by induction analogous to the proof of Fact 7.3.1.

38. a. Consider a complex upper triangular n × n matrix U with zeros on the diagonal. Show that U is nilpotent, that is, Uⁿ = 0. (Compare with Exercises 56 and 57 of Section 3.3.)

41. Let R be a complex upper triangular n × n matrix with |rᵢᵢ| < 1 for i = 1, …, n. Show that

lim_{t→∞} Rᵗ = 0

(meaning that the moduli of all entries of Rᵗ approach zero). Hint: We can write |R| ≤ λ(Iₙ + U) for some positive real number λ < 1 and an upper triangular matrix U ≥ 0 with zeros on the diagonal. Exercises 39 and 40 are helpful.

42. a. Let A be a complex n × n matrix such that |λ| < 1 for all eigenvalues λ of A. Show that

lim_{t→∞} Aᵗ = 0

(meaning that the moduli of all entries of Aᵗ approach zero).
b. Prove Fact 6.5.2.
QUADRATIC FORMS

In this section we will present an important application of the spectral theorem (Fact 7.3.1). In a multivariable calculus text, we found the following problem:

EXAMPLE 1 ► Consider the function

q(x₁, x₂) = 8x₁² − 4x₁x₂ + 5x₂².

Note that q(0, 0) = 0. Determine whether this function has a strict minimum at (x₁, x₂) = (0, 0), that is, whether q(x₁, x₂) > 0 for all (x₁, x₂) ≠ (0, 0).

There are a number of ways to do this problem, some of which you may have seen in previous courses. Here we present an approach which uses matrix techniques. We first develop some theory, and then do the example.
414 •  Chap. 7 Coordinate Systems    Sec. 7.4 Quadratic Forms  •  415

Note that we can write

q(x₁, x₂) = 8x₁² − 4x₁x₂ + 5x₂² = x̄ · [ 8x₁ − 2x₂
                                        −2x₁ + 5x₂ ]:

we "split" the contribution −4x₁x₂ equally among the two components. More succinctly, we can write

q(x̄) = x̄ · Ax̄,  where  A = [  8 −2
                              −2  5 ],

or

q(x̄) = x̄ᵀAx̄.
Let us present these ideas in greater generality:

Definition 7.4.1  Quadratic forms
A function q(x₁, x₂, …, xₙ) from ℝⁿ to ℝ is called a quadratic form if it is a linear combination of functions of the form xᵢxⱼ (where i and j may be equal). A quadratic form can be written as

q(x̄) = x̄ · Ax̄ = x̄ᵀAx̄,

for a symmetric n × n matrix A.

The matrix A above is symmetric by construction. By the spectral theorem (Fact 7.3.1), there is an orthonormal eigenbasis v̄₁, v̄₂ for A. We find

v̄₁ = (1/√5) [ 2; −1 ]  and  v̄₂ = (1/√5) [ 1; 2 ],

with associated eigenvalues λ₁ = 9 and λ₂ = 4 (verify this). If we write x̄ = c₁v̄₁ + c₂v̄₂, we can express the value of the function as follows:

q(x̄) = x̄ · Ax̄ = (c₁v̄₁ + c₂v̄₂) · (9c₁v̄₁ + 4c₂v̄₂) = 9c₁² + 4c₂².
(Recall that v̄₁ · v̄₁ = 1, v̄₁ · v̄₂ = 0, and v̄₂ · v̄₂ = 1, since v̄₁, v̄₂ is an orthonormal basis of ℝ².) The formula q(x̄) = 9c₁² + 4c₂² shows that q(x̄) > 0 for all nonzero x̄, because at least one of the terms 9c₁² and 4c₂² is positive.

Our work above shows that the c₁c₂ coordinate system defined by an orthonormal eigenbasis for A is "well adjusted" to the function q. The formula

9c₁² + 4c₂²

is easier to work with than the original formula

8x₁² − 4x₁x₂ + 5x₂²,

because no term involves c₁c₂:

q(x₁, x₂) = 8x₁² − 4x₁x₂ + 5x₂² = 9c₁² + 4c₂².

The two coordinate systems are shown in Figure 1.

EXAMPLE 2 ► Consider the function

q(x₁, x₂, x₃) = 9x₁² + 7x₂² + 3x₃² − 2x₁x₂ + 4x₁x₃ − 6x₂x₃.

Find a symmetric matrix A such that q(x̄) = x̄ · Ax̄ for all x̄ in ℝ³.

Solution
As pointed out in Example 1, we let

aᵢᵢ = (coefficient of xᵢ²),  aᵢⱼ = aⱼᵢ = ½ · (coefficient of xᵢxⱼ) if i ≠ j.

Therefore,

A = [  9 −1  2
      −1  7 −3
       2 −3  3 ].
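The recipe of Example 2 is mechanical enough to check in code; a short NumPy sketch:

```python
import numpy as np

# Symmetric matrix of the quadratic form in Example 2:
# q = 9x1^2 + 7x2^2 + 3x3^2 - 2x1x2 + 4x1x3 - 6x2x3.
# Diagonal entries are the coefficients of xi^2; each off-diagonal pair
# a_ij = a_ji gets half the coefficient of xi*xj.
A = np.array([[ 9.0, -1.0,  2.0],
              [-1.0,  7.0, -3.0],
              [ 2.0, -3.0,  3.0]])

def q(x):
    x1, x2, x3 = x
    return 9*x1**2 + 7*x2**2 + 3*x3**2 - 2*x1*x2 + 4*x1*x3 - 6*x2*x3

# q(x) = x . Ax for every x; spot-check on random vectors.
rng = np.random.default_rng(1)
for _ in range(5):
    x = rng.standard_normal(3)
    assert np.isclose(q(x), x @ A @ x)
```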
The observation we made in Example 1 above can now be generalized as follows:

Fact 7.4.2
Consider a quadratic form q(x̄) = x̄ · Ax̄ from ℝⁿ to ℝ. Let 𝔅 be an orthonormal eigenbasis for A, with associated eigenvalues λ₁, …, λₙ. Then

q(x̄) = λ₁c₁² + λ₂c₂² + ⋯ + λₙcₙ²,

where the cᵢ are the coordinates of x̄ with respect to 𝔅.

Again, note that we have been able to get rid of the mixed terms: no summand involves cᵢcⱼ (with i ≠ j) in the formula above. To justify the formula stated in Fact 7.4.2, we can proceed as in Example 1. We leave the details as an exercise.

When we study a quadratic form q, we are often interested in finding out whether q(x̄) > 0 for all nonzero x̄ (as in Example 1). In this context it is useful to introduce the following terminology:

Definition 7.4.3  Positive definite quadratic forms
Consider a quadratic form q(x̄) = x̄ · Ax̄, where A is a symmetric n × n matrix. We say that A is positive definite if q(x̄) is positive for all nonzero x̄ in ℝⁿ, and we call A positive semidefinite if q(x̄) ≥ 0 for all x̄ in ℝⁿ. Negative definite and negative semidefinite symmetric matrices are defined analogously. Finally, we call A indefinite if q takes positive as well as negative values.

EXAMPLE 3 ► Consider an m × n matrix A. Show that the function q(x̄) = ‖Ax̄‖² is a quadratic form, find its matrix, and determine its definiteness.

Solution
We can write q(x̄) = (Ax̄) · (Ax̄) = (Ax̄)ᵀ(Ax̄) = x̄ᵀAᵀAx̄ = x̄ · (AᵀAx̄). This shows that q is a quadratic form, with matrix AᵀA. This quadratic form is positive semidefinite, because q(x̄) = ‖Ax̄‖² ≥ 0 for all vectors x̄ in ℝⁿ. Note that q(x̄) = 0 if and only if x̄ is in the kernel of A. Therefore, the quadratic form is positive definite if and only if ker(A) = {0̄}. ◄

By Fact 7.4.2, the definiteness of a symmetric matrix A is easy to determine from its eigenvalues:

Fact 7.4.4
A symmetric matrix A is positive definite if (and only if) all of its eigenvalues are positive. The matrix A is positive semidefinite if (and only if) all of its eigenvalues are positive or zero.

These facts follow immediately from the formula

q(x̄) = λ₁c₁² + ⋯ + λₙcₙ²

(Fact 7.4.2).

The determinant of a positive definite matrix is positive, since the determinant is the product of the eigenvalues. The converse is not true, however: consider a symmetric 3 × 3 matrix A with one positive and two negative eigenvalues. Then det(A) is positive, but q(x̄) = x̄ · Ax̄ is indefinite. In practice, the following criterion for positive definiteness is often used (a proof is outlined in Exercise 38):

Fact 7.4.5
Consider a symmetric n × n matrix A. For m = 1, …, n, let A⁽ᵐ⁾ be the m × m matrix obtained by omitting all rows and columns of A past the mth. These matrices A⁽ᵐ⁾ are called the principal submatrices of A. The matrix A is positive definite if (and only if) det(A⁽ᵐ⁾) > 0, for all m = 1, …, n.

As an example, consider the matrix

A = [  9 −1  2
      −1  7 −3
       2 −3  3 ]

from Example 2. We have

det(A⁽¹⁾) = det[9] = 9 > 0,
det(A⁽²⁾) = det [ 9 −1; −1 7 ] = 62 > 0,
det(A⁽³⁾) = det(A) = 89 > 0.

We can conclude that A is positive definite. Alternatively, we could find the eigenvalues of A and use Fact 7.4.4. Using technology, we find that λ₁ ≈ 10.7, λ₂ ≈ 7.1, and λ₃ ≈ 1.2, confirming our result.
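The principal-minor criterion of Fact 7.4.5 is easy to try out in code; a NumPy sketch, recomputing the minors 9, 62, 89 for the matrix from Example 2:

```python
import numpy as np

# Fact 7.4.5 (Sylvester-style criterion): a symmetric matrix is positive
# definite iff every leading principal submatrix has positive determinant.
def principal_minors(A):
    return [np.linalg.det(A[:m, :m]) for m in range(1, A.shape[0] + 1)]

A = np.array([[ 9.0, -1.0,  2.0],
              [-1.0,  7.0, -3.0],
              [ 2.0, -3.0,  3.0]])          # the matrix from Example 2

minors = principal_minors(A)                # 9, 62, 89 as in the text
assert np.allclose(minors, [9, 62, 89])

# Cross-check with Fact 7.4.4: all eigenvalues of A are positive.
eigenvalues = np.linalg.eigvalsh(A)
assert np.all(eigenvalues > 0)
```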
◆ Principal Axes

¹The basic properties of quadratic forms were first derived by the Dutchman Johan de Witt (1625–1672) in his Elementa curvarum linearum. De Witt was one of the leading statesmen of his time, guiding his country through two wars against England. He consolidated his nation's commercial and naval power. De Witt met an unfortunate end when he was torn to pieces by a pro-English mob. (He should have stayed with math!)
When we study a function f(x₁, x₂, …, xₙ) from ℝⁿ to ℝ, we are often interested in the solutions of the equation

f(x₁, x₂, …, xₙ) = k,

for a fixed k in ℝ, called the level sets of f (level curves for n = 2, level surfaces for n = 3). Here we will think about the level curves of a quadratic form q(x₁, x₂) of two variables. For simplicity, we focus on the level curve q(x₁, x₂) = 1.

Let us first think about the case when there is no mixed term in the formula. We trust that you have had at least a brief encounter with these level curves in a previous course. Let us discuss the two major cases:

Case 1 ◆ q(x₁, x₂) = ax₁² + bx₂² = 1, where b > a > 0. This curve is an ellipse, as shown in Figure 2. The lengths of the semimajor and the semiminor axes are 1/√a and 1/√b, respectively. This ellipse can be parameterized by

[ x₁; x₂ ] = cos(t) [ 1/√a; 0 ] + sin(t) [ 0; 1/√b ].

Case 2 ◆ q(x₁, x₂) = ax₁² + bx₂² = 1, where a is positive and b negative. This curve is a hyperbola, with x₁-intercepts [ ±1/√a; 0 ], as shown in Figure 3. What is the slope of the asymptotes, in terms of a and b?

Now consider the level curve

q(x̄) = x̄ · Ax̄ = 1,

where A is an invertible symmetric 2 × 2 matrix. By Fact 7.4.2, we can write this equation as

λ₁c₁² + λ₂c₂² = 1,

where c₁, c₂ are the coordinates of x̄ with respect to an orthonormal eigenbasis for A, and λ₁, λ₂ are the associated eigenvalues. As we discussed above, this curve is an ellipse if both eigenvalues are positive and a hyperbola if one eigenvalue is positive and one negative (what happens when both eigenvalues are negative?).

EXAMPLE 4 ► Sketch the curve

8x₁² − 4x₁x₂ + 5x₂² = 1

(see Example 1).

Solution
In Example 1 we found that we can write this equation as

9c₁² + 4c₂² = 1,

where c₁, c₂ are the coordinates of x̄ with respect to the orthonormal eigenbasis

v̄₁ = (1/√5) [ 2; −1 ]  and  v̄₂ = (1/√5) [ 1; 2 ]

for A = [ 8 −2; −2 5 ]. We sketch this ellipse in Figure 4. ◄
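The coordinate change behind Example 4 can be verified numerically; a NumPy sketch that samples points on the curve x̄ · Ax̄ = 1 and checks 9c₁² + 4c₂² = 1 in eigen-coordinates:

```python
import numpy as np

# On the curve x.Ax = 1 with A = [[8,-2],[-2,5]], the coordinates c1, c2
# with respect to the orthonormal eigenbasis satisfy 9c1^2 + 4c2^2 = 1.
A = np.array([[ 8.0, -2.0],
              [-2.0,  5.0]])
v1 = np.array([2.0, -1.0]) / np.sqrt(5)   # eigenvector for lambda = 9
v2 = np.array([1.0,  2.0]) / np.sqrt(5)   # eigenvector for lambda = 4

rng = np.random.default_rng(2)
for _ in range(5):
    d = rng.standard_normal(2)
    x = d / np.sqrt(d @ A @ d)            # scale d so that x . Ax = 1
    c1, c2 = x @ v1, x @ v2               # coordinates w.r.t. v1, v2
    assert np.isclose(9*c1**2 + 4*c2**2, 1.0)
```

The scaling step works because A is positive definite, so d · Ad > 0 for every nonzero d.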
The c₁- and the c₂-axes are called the principal axes of the quadratic form q(x₁, x₂) = 8x₁² − 4x₁x₂ + 5x₂². Note that these are the eigenspaces of the matrix

A = [  8 −2
      −2  5 ]

of the quadratic form.

Definition 7.4.6  Principal axes
Consider a quadratic form q(x̄) = x̄ · Ax̄, where A is a symmetric n × n matrix with n distinct eigenvalues. Then the eigenspaces of A are called the principal axes of q (note that these will be one-dimensional).

Return to the case of a quadratic form of two variables. We can summarize our findings as follows:

Fact 7.4.7
Consider the curve C in ℝ² defined by

q(x₁, x₂) = ax₁² + bx₁x₂ + cx₂² = 1.

Let λ₁ and λ₂ be the eigenvalues of the matrix [ a b/2; b/2 c ] of q. If both λ₁ and λ₂ are positive, then C is an ellipse. If there is a positive and a negative eigenvalue, then C is a hyperbola.

EXERCISES
GOALS  Apply the concept of a quadratic form. Use an orthonormal eigenbasis for A to analyze the quadratic form q(x̄) = x̄ · Ax̄.

For each of the quadratic forms q listed in Exercises 1 to 3, find a symmetric matrix A such that q(x̄) = x̄ · Ax̄.
1. q(x₁, x₂) = 6x₁² − 7x₁x₂ + 8x₂².
2. q(x₁, x₂) = x₁x₂.
3. q(x₁, x₂, x₃) = 3x₁² + 4x₂² + 5x₃² + 6x₁x₃ + 7x₂x₃.

Determine the definiteness of the quadratic forms in Exercises 4 to 7.
4. q(x₁, x₂) = 6x₁² + 4x₁x₂ + 3x₂².
5. q(x₁, x₂) = x₁² + 4x₁x₂ + x₂².
6. q(x₁, x₂) = 2x₁² + 6x₁x₂ + 4x₂².
7. q(x₁, x₂, x₃) = 3x₁² + 4x₁x₃.

8. If A is a symmetric matrix, what can you say about the definiteness of A²? When is A² positive definite?

9. A real square matrix A is called skew-symmetric if Aᵀ = −A.
a. If A is skew-symmetric, is A² skew-symmetric as well? Or is A² symmetric?
b. If A is skew-symmetric, what can you say about the definiteness of A²? What about the eigenvalues of A²?
c. What can you say about the complex eigenvalues of a skew-symmetric matrix? Which skew-symmetric matrices are diagonalizable over ℝ?

10. Consider a quadratic form q(x̄) = x̄ · Ax̄ on ℝⁿ and a fixed vector v̄ in ℝⁿ. Is the transformation

L(x̄) = q(x̄ + v̄) − q(x̄) − q(v̄)

linear? If so, what is its matrix?

11. If A is an invertible symmetric matrix, what is the relationship between the definiteness of A and A⁻¹?

12. Show that a quadratic form q(x̄) = x̄ · Ax̄ of two variables is indefinite if (and only if) det(A) < 0. Here, A is a symmetric 2 × 2 matrix.

13. Show that the diagonal elements of a positive definite matrix A are positive.

14. Consider a 2 × 2 matrix A = [ a b/2; b/2 c ], where a and det(A) are both positive. Without using Fact 7.4.5, show that A is positive definite. Hint: Show first that c is positive, and thus tr(A) is positive. Then think about the signs of the eigenvalues.
Sketch the curves defined in Exercises 15 to 20. In each case, draw and label the principal axes, label the intercepts of the curve with the principal axes, and give the formula of the curve in the coordinate system defined by the principal axes.
15. 6x₁² + 4x₁x₂ + 3x₂² = 1.
16. x₁x₂ = 1.
17. 3x₁² + 4x₁x₂ = 1.
18. x₁² + 4x₁x₂ + 4x₂² = 1.
19. 9x₁² − 4x₁x₂ + 6x₂² = 1.
20. 3x₁² + 6x₁x₂ + 5x₂² = 1.
21. a. Sketch the following three surfaces:

x₁² + 4x₂² + 9x₃² = 1,
x₁² + 4x₂² − 9x₃² = 1,
−x₁² − 4x₂² + 9x₃² = 1.

Which of these are bounded? Which are connected? Label the points closest to and farthest from the origin (if there are any).
b. Consider the surface

x₁² + 2x₂² + 3x₃² + x₁x₂ + 2x₁x₃ + 3x₂x₃ = 1.

Which of the three surfaces in part a does this surface qualitatively resemble most? Which points on this surface are closest to the origin? Give a rough approximation; use technology.
22. On the surface

−x₁² + x₂² − x₃² + 10x₁x₃ = 1,

find the two points closest to the origin.

23. Consider an n × n matrix M that is not symmetric, and define the function g(x̄) = x̄ · Mx̄ from ℝⁿ to ℝ. Is g necessarily a quadratic form? If so, give a symmetric matrix A (in terms of M) such that g(x̄) = x̄ · Ax̄.
24. Consider a quadratic form q(x̄) = x̄ · Ax̄, where A is a symmetric n × n matrix. Find q(ē₁). Give your answer in terms of the entries of the matrix A.

25. Consider a quadratic form q(x̄) = x̄ · Ax̄, where A is a symmetric n × n matrix. Let v̄ be a unit eigenvector of A, with associated eigenvalue λ. Find q(v̄).

26. Consider a quadratic form q(x̄) = x̄ · Ax̄, where A is a symmetric n × n matrix. True or false? If there is a nonzero vector v̄ in ℝⁿ such that q(v̄) = 0, then A is not invertible.

27. True or false? If A and B are two distinct symmetric n × n matrices, then the quadratic forms q(x̄) = x̄ · Ax̄ and p(x̄) = x̄ · Bx̄ are distinct as well.

28. True or false? If A is a symmetric n × n matrix whose entries are all positive, then the quadratic form q(x̄) = x̄ · Ax̄ is positive definite.

29. True or false? If A is a symmetric n × n matrix such that the quadratic form q(x̄) = x̄ · Ax̄ is positive definite, then all entries of A are positive.

30. True or false? If q is an indefinite quadratic form from ℝⁿ to ℝ, then there is a nonzero vector x̄ in ℝⁿ such that q(x̄) = 0.

31. Consider a quadratic form q(x̄) = x̄ · Ax̄, where A is a symmetric n × n matrix with the positive eigenvalues λ₁ ≥ λ₂ ≥ ⋯ ≥ λₙ. Let Sⁿ⁻¹ be the set of all unit vectors in ℝⁿ. Describe the image of Sⁿ⁻¹ under q, in terms of the eigenvalues of A.

32. Show that any positive definite n × n matrix A can be written as A = BBᵀ, where B is an n × n matrix with orthogonal columns. Hint: There is an orthogonal matrix S such that S⁻¹AS = SᵀAS = D is a diagonal matrix with positive diagonal entries. Then A = SDSᵀ. Now write D as the square of a diagonal matrix.

33. For the matrix A = [ 8 −2; −2 5 ], write A = BBᵀ as discussed in Exercise 32. See Example 1.

34. Show that any positive definite matrix A can be written as A = B², where B is a positive definite matrix.

35. For the matrix A = [ 8 −2; −2 5 ], write A = B² as discussed in Exercise 34. See Example 1.

36. Cholesky factorization for 2 × 2 matrices. Show that any positive definite 2 × 2 matrix A can be written uniquely as A = LLᵀ, where L is a lower triangular 2 × 2 matrix with positive entries on the diagonal. Hint: Solve the equation

[ a b; b c ] = [ x 0; y z ] [ x y; 0 z ].

37. Find the Cholesky factorization (discussed in Exercise 36) for A =

38. A Cholesky factorization of a symmetric matrix A is a factorization of the form A = LLᵀ, where L is lower triangular with positive diagonal entries. Show that for a symmetric n × n matrix A the following are equivalent:
i. A is positive definite.
ii. All principal submatrices A⁽ᵐ⁾ of A are positive definite (see Fact 7.4.5).
iii. det(A⁽ᵐ⁾) > 0 for m = 1, …, n.
iv. A has a Cholesky factorization A = LLᵀ.
Hints: Show that i implies ii, ii implies iii, iii implies iv, and iv implies i. The hardest step is the implication from iii to iv: arguing by induction on n, you may assume that A⁽ⁿ⁻¹⁾ has a Cholesky factorization A⁽ⁿ⁻¹⁾ = BBᵀ; then seek a factorization of the block form

A = [ B  0; x̄ᵀ  t ] [ Bᵀ  x̄; 0  t ].

Explain why the scalar t is positive. Therefore, we have the Cholesky factorization of A.
(continued)
424 •  Chap. 7 Coordinate Systems    Sec. 7.5 Singular Values  •  425

This reasoning also shows that the Cholesky factorization of A is unique. Alternatively, you can use the LDLᵀ factorization of A to show that iii implies iv (see Exercise 4.3.33). To show that i implies ii, consider a nonzero vector x̄ in ℝᵐ, and define the vector ȳ in ℝⁿ obtained by padding x̄ with n − m zeros. Then

x̄ᵀA⁽ᵐ⁾x̄ = ȳᵀAȳ > 0.

39. Find the Cholesky factorization of the matrix

40. Consider an invertible n × n matrix A. What is the relationship between the matrix R in the QR factorization of A and the matrix L in the Cholesky factorization of AᵀA?

41. Consider the quadratic form

q(x₁, x₂) = ax₁² + bx₁x₂ + cx₂².

We define q₁₁ = ∂²q/∂x₁², q₁₂ = q₂₁ = ∂²q/∂x₂∂x₁, and q₂₂ = ∂²q/∂x₂². The discriminant D of q is defined as

D = det [ q₁₁  q₁₂
          q₂₁  q₂₂ ].

The second derivative test tells us that if D and q₁₁ are both positive, then q(x₁, x₂) has a minimum at (0, 0). Justify this fact, using the theory developed in this section.

42. For which choices of the constants p and q is the n × n matrix B positive definite? (B has p's on the diagonal and q's elsewhere.) Hint: Exercise 7.3.17 is helpful.

43. For which angles α can you find a basis of ℝⁿ such that the angle between any two vectors in this basis is α?

SINGULAR VALUES

We start with a somewhat technical example.

EXAMPLE 1 ► Consider an m × n matrix A. The n × n matrix AᵀA is symmetric; therefore, there is an orthonormal eigenbasis v̄₁, v̄₂, …, v̄ₙ for AᵀA, by the spectral theorem (Fact 7.3.1). Denote the eigenvalue of AᵀA associated with v̄ᵢ by λᵢ. What can you say about the vectors Av̄₁, …, Av̄ₙ? Think about their lengths and the angles they enclose.

Solution
Let us compute the dot products Av̄ᵢ · Av̄ⱼ:

Av̄ᵢ · Av̄ⱼ = (Av̄ᵢ)ᵀAv̄ⱼ = v̄ᵢᵀAᵀAv̄ⱼ = v̄ᵢᵀ(λⱼv̄ⱼ) = λⱼ(v̄ᵢ · v̄ⱼ) = { 0 if i ≠ j;  λⱼ if i = j }.

The vectors Av̄₁, …, Av̄ₙ are orthogonal, and the length of Av̄ⱼ is √λⱼ. ◄

This observation motivates the following definition:

Definition 7.5.1  Singular values
The singular values of an m × n matrix A are the square roots of the eigenvalues of the symmetric n × n matrix AᵀA (listed with their algebraic multiplicities). It is customary to denote the singular values by σ₁, σ₂, …, σₙ, and to list them in decreasing order:

σ₁ ≥ σ₂ ≥ ⋯ ≥ σₙ.

We summarize the observations made in Example 1:

Fact 7.5.2
Let L(x̄) = Ax̄ be a linear transformation from ℝⁿ to ℝᵐ. Then there is an orthonormal basis v̄₁, …, v̄ₙ of ℝⁿ such that
a. the vectors L(v̄₁), …, L(v̄ₙ) are orthogonal, and
b. the length of L(v̄ⱼ) is σⱼ, the jth singular value of A.
To construct v̄₁, …, v̄ₙ, find an orthonormal eigenbasis for the matrix AᵀA.
Part a of Fact 7.5.2 is the key statement of this section; make sure you understand the significance of this claim. See Exercise 2.2.47 for a special case. Here are two numerical examples:
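The computation in Example 1 can also be mirrored in code; a NumPy sketch (the matrix here is an arbitrary sample, chosen only for illustration):

```python
import numpy as np

# Example 1 in code: an orthonormal eigenbasis v_1, ..., v_n of A^T A is
# mapped by A to orthogonal vectors whose lengths are sqrt(lambda_i).
rng = np.random.default_rng(3)
A = rng.standard_normal((4, 3))                 # arbitrary sample matrix

eigenvalues, V = np.linalg.eigh(A.T @ A)        # columns of V: eigenbasis
images = A @ V                                  # columns: A v_1, ..., A v_n

# (A v_i) . (A v_j) = lambda_j (v_i . v_j), so the Gram matrix is diagonal.
gram = images.T @ images
assert np.allclose(gram, np.diag(eigenvalues))
assert np.allclose(np.linalg.norm(images, axis=0),
                   np.sqrt(np.abs(eigenvalues)))   # lengths sqrt(lambda_i)
```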
EXAMPLE 2 ► Let L(x̄) = Ax̄, where

A = [  6  2
      −7  6 ].
a. Find the singular values of A.
b. Find orthonormal vectors v̄₁ and v̄₂ in ℝ² such that L(v̄₁) is orthogonal to L(v̄₂).
c. Sketch and describe the image of the unit circle under the transformation L.

Solution
a. We need to find the eigenvalues of the matrix AᵀA first:

AᵀA = [ 6 −7; 2 6 ] [ 6 2; −7 6 ] = [ 85 −30; −30 40 ].

The characteristic polynomial of AᵀA is

λ² − 125λ + 2500 = (λ − 100)(λ − 25).

The eigenvalues of AᵀA are λ₁ = 100 and λ₂ = 25. The singular values of A are

σ₁ = √100 = 10  and  σ₂ = √25 = 5.

b. Find an orthonormal eigenbasis for AᵀA (see Example 1):

E₁₀₀ = ker [ −15 −30; −30 −60 ] = span [ −2; 1 ],
E₂₅ = ker [ 60 −30; −30 15 ] = span [ 1; 2 ].

Therefore, the vectors

v̄₁ = (1/√5) [ −2; 1 ]  and  v̄₂ = (1/√5) [ 1; 2 ]

do the job. We can check that the vectors

Av̄₁ = (1/√5) [ −10; 20 ]  and  Av̄₂ = (1/√5) [ 10; 5 ]

are perpendicular. Let us also check that the lengths of Av̄₁ and Av̄₂ are the singular values of A:

‖Av̄₁‖ = (1/√5)‖[ −10; 20 ]‖ = (1/√5)√500 = 10 = σ₁,
‖Av̄₂‖ = (1/√5)‖[ 10; 5 ]‖ = (1/√5)√125 = 5 = σ₂.

The vectors v̄₁, v̄₂ and their images L(v̄₁), L(v̄₂) are shown in Figure 1 (different scales are used for domain and codomain).

c. The unit circle consists of all vectors of the form x̄ = cos(t)v̄₁ + sin(t)v̄₂. The image of the unit circle consists of the vectors L(x̄) = cos(t)L(v̄₁) + sin(t)L(v̄₂); that is, the image is an ellipse whose semimajor and semiminor axes are L(v̄₁) and L(v̄₂), respectively. Note that ‖L(v̄₁)‖ = σ₁ = 10 and ‖L(v̄₂)‖ = σ₂ = 5. See Figure 2. ◄

Fact 7.5.3
Let L(x̄) = Ax̄ be an invertible linear transformation from ℝ² to ℝ². The image of the unit circle under L is an ellipse E. The lengths of the semimajor and semiminor axes of E are the singular values σ₁ and σ₂ of A, respectively (see Figure 2: the image of the unit circle is an ellipse).
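Example 2 is easy to confirm with NumPy's built-in SVD routine; a short sketch:

```python
import numpy as np

# Check Example 2: the singular values of A are 10 and 5, and the
# orthonormal eigenbasis of A^T A is mapped to orthogonal vectors.
A = np.array([[ 6.0, 2.0],
              [-7.0, 6.0]])

singular_values = np.linalg.svd(A, compute_uv=False)
assert np.allclose(singular_values, [10.0, 5.0])

v1 = np.array([-2.0, 1.0]) / np.sqrt(5)
v2 = np.array([ 1.0, 2.0]) / np.sqrt(5)
assert np.isclose((A @ v1) @ (A @ v2), 0.0)      # images are orthogonal
assert np.isclose(np.linalg.norm(A @ v1), 10.0)  # length sigma_1
assert np.isclose(np.linalg.norm(A @ v2), 5.0)   # length sigma_2
```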
EXAMPLE 3 ► Consider the linear transformation L(x) = Ax, where

A = [ 1 1 0 ; 0 1 1 ].

a. Find the singular values of A.
b. Find orthonormal vectors v1, v2, v3 in R³ such that L(v1), L(v2), and L(v3) are orthogonal.
c. Sketch and describe the image of the unit sphere under the transformation L.

Solution

a. AᵀA = [ 1 1 0 ; 1 2 1 ; 0 1 1 ]. The eigenvalues are λ1 = 3, λ2 = 1, λ3 = 0. The singular values of A are

σ1 = √3,  σ2 = 1,  σ3 = 0.

b. Find an orthonormal eigenbasis v1, v2, v3 for AᵀA (we omit the details):

v1 = (1/√6) [ 1 ; 2 ; 1 ],  v2 = (1/√2) [ 1 ; 0 ; −1 ],  v3 = (1/√3) [ 1 ; −1 ; 1 ],

where E_0 = ker(AᵀA) = span [ 1 ; −1 ; 1 ].

We compute Av1, Av2, Av3 and check orthogonality:

Av1 = (1/√6) [ 3 ; 3 ],  Av2 = (1/√2) [ 1 ; −1 ],  Av3 = 0.

We can also check that the length of Avi is σi:

||Av1|| = (1/√6) √18 = √3 = σ1,  ||Av2|| = (1/√2) √2 = 1 = σ2,  ||Av3|| = 0 = σ3.

c. The unit sphere in R³ consists of all vectors of the form

x = c1 v1 + c2 v2 + c3 v3,  where  c1² + c2² + c3² = 1.

The image of the unit sphere consists of the vectors

L(x) = c1 L(v1) + c2 L(v2),  where  c1² + c2² ≤ 1  (recall that L(v3) = 0).

The image is the full ellipse shaded in Figure 3. ◄

[Figure 3: the unit sphere in R³ and its image under L, a full (filled) ellipse.]

Example 3 shows that some of the singular values of a matrix may be zero. Suppose the singular values σ1, ..., σs of an m × n matrix A are nonzero, while σ_{s+1}, ..., σn are zero. Choose vectors v1, ..., vs, v_{s+1}, ..., vn for A as introduced in Fact 7.5.2. Note that ||Avi|| = σi = 0 and therefore Avi = 0 for i = s + 1, ..., n. We claim that the vectors Av1, ..., Avs form a basis of the image of A. Indeed, these vectors are linearly independent (because they are orthogonal and nonzero), and they span the image, since any vector in the image of A can be written as

Ax = A(c1 v1 + ··· + cs vs + ··· + cn vn) = c1 Av1 + ··· + cs Avs.

This shows that s = dim(im A) = rank(A).

Fact 7.5.4
If A is an m × n matrix of rank r, then the singular values σ1, ..., σr are nonzero, while σ_{r+1}, ..., σn are zero.

• The Singular Value Decomposition

Just as we expressed the Gram–Schmidt process in terms of a matrix decomposition (the QR factorization), we will now express Fact 7.5.2 in terms of a matrix decomposition. Consider a linear transformation L(x) = Ax from Rⁿ to Rᵐ, and choose an orthonormal basis v1, ..., vn as in Fact 7.5.2. Let r = rank(A). We know that the vectors Av1, ..., Avr are orthogonal and nonzero, with ||Avi|| = σi. We introduce the unit vectors

u1 = (1/σ1) Av1,  ...,  ur = (1/σr) Avr.
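The construction of the unit vectors ui = (1/σi) Avi can be sketched in code for the matrix of Example 3. Python and NumPy are our choices for illustration; the text itself works by hand.

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])

# Orthonormal eigenbasis of A^T A from Example 3 (nonzero singular values only).
v1 = np.array([1.0, 2.0, 1.0]) / np.sqrt(6)
v2 = np.array([1.0, 0.0, -1.0]) / np.sqrt(2)
sigma1, sigma2 = np.sqrt(3.0), 1.0

# Unit vectors u_i = (1/sigma_i) A v_i
u1 = (A @ v1) / sigma1   # equals (1/sqrt 2)[1, 1]
u2 = (A @ v2) / sigma2   # equals (1/sqrt 2)[1, -1]

print(u1, u2)
```

Note that u1 and u2 come out orthonormal automatically, which is the point of Fact 7.5.2.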
We know that

Avi = σi ui  for i = 1, ..., r  and  Avi = 0  for i = r + 1, ..., n.

We can express these equations in matrix form as follows:

A [ v1 | ··· | vr | v_{r+1} | ··· | vn ] = [ σ1 u1 | ··· | σr ur | 0 | ··· | 0 ].

We can expand the sequence u1, ..., ur to an orthonormal basis u1, ..., um of Rᵐ. Then we can write

A [ v1 | ··· | vn ] = [ u1 | ··· | um ] Σ,

or, more succinctly,

AV = UΣ.

Note that V is an orthogonal n × n matrix, U is an orthogonal m × m matrix, and Σ is an m × n matrix whose first r diagonal entries are σ1, ..., σr, and all other entries are zero. Multiplying the equation AV = UΣ with Vᵀ from the right, we find that A = UΣVᵀ.

Fact 7.5.5
Singular value decomposition (SVD)
Any m × n matrix A can be written as

A = UΣVᵀ,

where U is an orthogonal m × m matrix; V is an orthogonal n × n matrix; and Σ is an m × n matrix whose first r diagonal entries are the nonzero singular values σ1, ..., σr of A, and all other entries are zero (r = rank(A)).

Alternatively, this singular value decomposition can be written as

A = σ1 u1 v1ᵀ + ··· + σr ur vrᵀ,

where the ui and the vi are the columns of U and V, respectively (Exercise 29).

A singular value decomposition of a 2 × 2 matrix A is presented in Figure 4.

[Figure 4: the decomposition A = UΣVᵀ of a 2 × 2 matrix, shown as the orthogonal matrix Vᵀ, followed by Σ = [ σ1 0 ; 0 σ2 ], followed by the orthogonal matrix U, with Av1 = σ1 u1 and Av2 = σ2 u2.]
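Fact 7.5.5 can be illustrated numerically. The sketch below (Python/NumPy, our illustration) assembles U, Σ, and V for the matrix of Example 2 from the quantities computed earlier and checks the factorization A = UΣVᵀ.

```python
import numpy as np

A = np.array([[6.0, 2.0],
              [-7.0, 6.0]])

# Columns of V: the orthonormal eigenvectors of A^T A (Example 2).
V = np.array([[2.0, 1.0],
              [-1.0, 2.0]]) / np.sqrt(5)
# Columns of U: u_i = (1/sigma_i) A v_i (Example 4 below).
U = np.array([[1.0, 2.0],
              [-2.0, 1.0]]) / np.sqrt(5)
Sigma = np.diag([10.0, 5.0])

# The singular value decomposition: A = U Sigma V^T
assert np.allclose(A, U @ Sigma @ V.T)
print("A = U Sigma V^T verified")
```

Both U and V are orthogonal, so the assertion checks exactly the statement of Fact 7.5.5 for this matrix.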
Here are two numerical examples.

EXAMPLE 4 ► Find an SVD for A = [ 6 2 ; −7 6 ]. (Compare with Example 2.)

Solution

In Example 2 we found

v1 = (1/√5) [ 2 ; −1 ]  and  v2 = (1/√5) [ 1 ; 2 ],  so that  V = (1/√5) [ 2 1 ; −1 2 ].

The columns u1 and u2 of U are defined by

u1 = (1/σ1) Av1 = (1/√5) [ 1 ; −2 ]  and  u2 = (1/σ2) Av2 = (1/√5) [ 2 ; 1 ],

and therefore

U = (1/√5) [ 1 2 ; −2 1 ].

Finally,

Σ = [ σ1 0 ; 0 σ2 ] = [ 10 0 ; 0 5 ].

You can check that A = UΣVᵀ. ◄

EXAMPLE 5 ► Find an SVD for A = [ 1 1 0 ; 0 1 1 ]. (Compare with Example 3.)

Solution

Using our work in Example 3, we find that

V = [ 1/√6  1/√2  1/√3 ; 2/√6  0  −1/√3 ; 1/√6  −1/√2  1/√3 ],

U = [ 1/√2  1/√2 ; 1/√2  −1/√2 ],  and  Σ = [ √3 0 0 ; 0 1 0 ].

Check that A = UΣVᵀ. ◄

Consider a singular value decomposition

A = UΣVᵀ,

with U = [ u1 | ··· | um ] and V = [ v1 | ··· | vn ]. We know that

Avi = σi ui  for i = 1, ..., r  and  Avi = 0  for i = r + 1, ..., n.

These equations tell us that

ker(A) = span(v_{r+1}, ..., vn)  and  im(A) = span(u1, ..., ur).

(Fill in the details.) We see that an SVD provides us with orthonormal bases for the kernel and image of A. Likewise, we have

Aᵀ = VΣᵀUᵀ,  or  AᵀU = VΣᵀ.

Reading the last equation column by column, we find that

Aᵀui = σi vi  for i = 1, ..., r  and  Aᵀui = 0  for i = r + 1, ..., m.

(Observe that the roles of the ui and the vi are reversed.) As above, we have

im(Aᵀ) = span(v1, ..., vr)  and  ker(Aᵀ) = span(u_{r+1}, ..., um).

In Figure 5 we make an attempt to visualize these observations. We represent each of the kernels and images simply as a line. Note that im(A) and ker(Aᵀ) are orthogonal complements, as observed in Fact 4.4.1.

[Figure 5: the subspaces im(Aᵀ) = span(v1, ..., vr), ker(A) = span(v_{r+1}, ..., vn), im(A) = span(u1, ..., ur), and ker(Aᵀ) = span(u_{r+1}, ..., um), each drawn as a line; A maps vi to σi ui.]
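These observations can be checked in code for the matrix of Example 5. The sketch below (Python/NumPy, our illustration) reads off an orthonormal basis of ker(A) from the last columns of V and an orthonormal basis of im(A) from the first columns of U.

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))    # rank of A; here r = 2

kernel_basis = Vt[r:].T       # columns v_{r+1}, ..., v_n span ker(A)
image_basis = U[:, :r]        # columns u_1, ..., u_r span im(A)

# Check: A maps the kernel basis to zero.
print(A @ kernel_basis)       # approximately the zero vector
```

Here the kernel basis is a scalar multiple of (1/√3)[1; −1; 1], the eigenvector of AᵀA with eigenvalue 0 found in Example 3.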
To summarize:

Avi = σi ui if i ≤ r,  Avi = 0 if i > r;
Aᵀui = σi vi if i ≤ r,  Aᵀui = 0 if i > r;
span(v1, ..., vr) = im(Aᵀ),  span(v_{r+1}, ..., vn) = ker(A);
span(u1, ..., ur) = im(A),  span(u_{r+1}, ..., um) = ker(Aᵀ).
We conclude this section with a brief discussion of one of the many applications of the SVD: an application to data compression. We follow the exposition of Gilbert Strang (Linear Algebra and Its Applications, 3d ed., Harcourt, 1986). Suppose a satellite transmits a picture containing 1000 × 1000 pixels. If the color of each pixel is digitized, this information can be represented in a 1000 × 1000 matrix A. How can we transmit the essential information contained in this picture without sending all 1,000,000 numbers? Suppose we know an SVD

A = σ1 u1 v1ᵀ + ··· + σr ur vrᵀ.

Even if the rank of the matrix A is large, most of the singular values will typically be very small (relative to σ1). If we neglect those, we get a good approximation A ≈ σ1 u1 v1ᵀ + ··· + σs us vsᵀ, where s is much smaller than r. For example, if we choose s = 10, we need to transmit only the 20 vectors σ1 u1, ..., σ10 u10 and v1, ..., v10 in R^1000, which is 20,000 numbers.
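The compression idea can be sketched as follows (Python/NumPy, our illustration; the small random matrix stands in for the 1000 × 1000 picture). Keeping only the k largest terms σi ui viᵀ gives a rank-k approximation whose error, measured in the spectral norm, is the largest neglected singular value, so the error shrinks as k grows.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))   # stand-in for the picture matrix

U, s, Vt = np.linalg.svd(A)

def truncate(k):
    """Rank-k approximation: sigma_1 u_1 v_1^T + ... + sigma_k u_k v_k^T."""
    return (U[:, :k] * s[:k]) @ Vt[:k]

for k in (5, 20, 50):
    error = np.linalg.norm(A - truncate(k), 2)   # spectral-norm error
    print(k, error)
```

With k = 50 (the full rank) the matrix is reconstructed exactly; for smaller k the transmitted data shrinks from 2500 numbers to 100k numbers.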
EXERCISES

GOALS  Find the singular values and a singular value decomposition of a matrix. Interpret the singular values of a 2 × 2 matrix in terms of the image of the unit circle.

1. Find the singular values of A = [ ~ ~ ].

2. Let A be an orthogonal 2 × 2 matrix. Use the image of the unit circle to find the singular values of A.

3. Let A be an orthogonal n × n matrix. Find the singular values of A algebraically.

4. Find the singular values of A = [ ~ ~ ].

5. Find the singular values of A = [ ~ ~ ]. Explain your answer geometrically.

6. Find the singular values of A = [ p −q ; q p ]. Find a unit vector v1 such that ||Av1|| = σ1. Sketch the image of the unit circle.

Find singular value decompositions for the matrices listed in Exercises 7 to 14. Work with paper and pencil.

7. [ ~ ~ ]   8. [ ~ ~ ]   9. [ ~ ~ ]   10. [ ~ ~ ]   11. [ ~ ~ ]   12. [ ~ ~ ]
13. [ 6 2 ; −7 6 ]  (see Example 4)   14. [ 1 1 0 ; 0 1 1 ]  (see Example 5)

15. If A is an invertible 2 × 2 matrix, what is the relationship between the singular values of A and A⁻¹? Justify your answer in terms of the image of the unit circle.

16. If A is an invertible n × n matrix, what is the relationship between the singular values of A and A⁻¹?

17. Consider an m × n matrix A with rank(A) = n and a singular value decomposition A = UΣVᵀ. Show that the least-squares solution of a linear system Ax = b can be written as

x* = ((b · u1)/σ1) v1 + ··· + ((b · un)/σn) vn.

18. Consider the 4 × 2 matrix

A = (1/2) [ 1 1 ; 1 −1 ; 1 −1 ; 1 1 ].

Use the result of Exercise 17 to find the least-squares solution of the linear system

Ax = [ 1 ; 4 ; −1 ; 3 ].

Work with paper and pencil.

19. Consider an m × n matrix A of rank r and a singular value decomposition A = UΣVᵀ. Explain how you can express the least-squares solution of a system Ax = b as a linear combination of the columns v1, ..., vn of V.
20. a. Explain how any square matrix A can be written as

A = QS,

where Q is orthogonal and S is symmetric positive semidefinite. Hint: Write A = UΣVᵀ = UVᵀVΣVᵀ.
b. Is it possible to write A = S1 Q1, where Q1 is orthogonal and S1 is symmetric positive semidefinite?

21. Find a decomposition A = QS as discussed in Exercise 20 for A = [ 6 2 ; −7 6 ]. (Compare with Examples 2 and 4.)

22. Consider an arbitrary 2 × 2 matrix A and an orthogonal 2 × 2 matrix S.
a. Explain in terms of the image of the unit circle why A and SA have the same singular values.
b. Explain algebraically why A and SA have the same singular values.

23. Consider a singular value decomposition A = UΣVᵀ of an m × n matrix A. Show that the columns of U form an orthonormal eigenbasis for AAᵀ. What are the associated eigenvalues? What does your answer tell you about the relationship between the eigenvalues of AᵀA and AAᵀ?

24. If A is a symmetric n × n matrix, what is the relationship between the eigenvalues and the singular values of A?

25. Let A be a 2 × 2 matrix and u a unit vector in R². Show that

σ2 ≤ ||Au|| ≤ σ1,

where σ1, σ2 are the singular values of A. Illustrate this inequality with a sketch, and justify it algebraically.

26. Let A be an m × n matrix and v a vector in Rⁿ. Show that

σn ||v|| ≤ ||Av|| ≤ σ1 ||v||,

where σ1 and σn are the largest and the smallest singular values of A, respectively. Compare with Exercise 25.

27. Let λ be a real eigenvalue of an n × n matrix A. Show that

σn ≤ |λ| ≤ σ1,

where σ1 and σn are the largest and the smallest singular values of A, respectively.

28. If A is an n × n matrix, what is the product of its singular values σ1, ..., σn? State the product in terms of the determinant of A. For a 2 × 2 matrix A, explain this result in terms of the image of the unit circle.

29. Show that an SVD

A = UΣVᵀ

can be written as

A = σ1 u1 v1ᵀ + ··· + σr ur vrᵀ.

30. Find a decomposition

A = σ1 u1 v1ᵀ + σ2 u2 v2ᵀ

for A = [ 6 2 ; −7 6 ] (see Exercise 29 and Example 2).

31. Show that any matrix of rank r can be written as the sum of r matrices of rank 1.

32. Consider an m × n matrix A, an orthogonal m × m matrix S, and an orthogonal n × n matrix R. Compare the singular values of A and SAR.

33. True or false? If A is an n × n matrix, then the singular values of A² are the squares of the singular values of A. Explain.

34. For which square matrices A is there a singular value decomposition A = UΣVᵀ with U = V?

35. Consider a singular value decomposition A = UΣVᵀ of an m × n matrix A with rank(A) = n. Let v1, ..., vn be the columns of V and u1, ..., um the columns of U. Without using the results in Chapter 4, compute (AᵀA)⁻¹Aᵀui. Explain the result in terms of least-squares approximations.

36. Consider a singular value decomposition A = UΣVᵀ of an m × n matrix A with rank(A) = n. Let u1, ..., um be the columns of U. Without using the results in Chapter 4, compute A(AᵀA)⁻¹Aᵀui. Explain your result in terms of Fact 4.4.8.

37. If the singular values of an n × n matrix A are all 1, is A necessarily orthogonal?

38. True or false? Similar matrices have the same singular values.
LINEAR SYSTEMS OF DIFFERENTIAL EQUATIONS

8.1 AN INTRODUCTION TO CONTINUOUS DYNAMICAL SYSTEMS

There are two fundamentally different ways to model the evolution of a dynamical system over time: the discrete approach and the continuous approach. As a simple example, consider a dynamical system with only one component.

EXAMPLE 1 ► Suppose you wish to open a bank account in Switzerland, and you shop around for the best interest rate. You learn that Union Bank of Switzerland (UBS) pays 7% interest, compounded annually. Its competitor, Credit Suisse (CS), offers 6% interest, compounded continuously. Everything else being equal, where should you open the account?

Solution

Let us think about the two banks separately. At UBS, the balance x(t) grows by 7% each year if no deposits or withdrawals are made:

x(t + 1) = x(t) + 0.07 x(t) = 1.07 x(t),

where the term 0.07 x(t) is the interest. This equation describes a discrete linear dynamical system with one component. The balance after t years is

x(t) = 1.07^t x0.

The balance grows exponentially with time.

At CS, by definition of continuous compounding, the balance x(t) grows at an instantaneous rate of 6% of the current balance:

dx/dt = 6% of the balance x(t),  or  dx/dt = 0.06x.

Here, we use a differential equation to model a continuous linear dynamical system with one component. We will solve the differential equation in two ways, by separating variables and by making an educated guess.

Let us try to guess the solution. We think about an easier problem first. Do we know a function x(t) that is its own derivative: dx/dt = x? You may recall from calculus that x(t) = e^t is such a function (some people define x(t) = e^t by this property). More generally, the function x(t) = Ce^t is its own derivative, for any constant C. How can we modify x(t) = Ce^t to get a function whose derivative is 0.06 times itself? By the chain rule, x(t) = Ce^{0.06t} will do:

dx/dt = d/dt (Ce^{0.06t}) = 0.06 Ce^{0.06t} = 0.06 x(t).

Note that x(0) = Ce^0 = C; that is, C is the initial value, x0. We conclude that the balance after t years is

x(t) = e^{0.06t} x0.

Again, the balance x(t) grows exponentially.

Alternatively, we can solve the differential equation dx/dt = 0.06x by separating variables. Write

dx/x = 0.06 dt

and integrate both sides:

ln(x) = 0.06t + k,  for some constant k.

Now exponentiate:

x = e^{ln(x)} = e^{0.06t + k} = e^{0.06t} C,

where C = e^k.
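As a quick numerical sanity check (a Python sketch, our addition; the initial balance of 1000 is a hypothetical value), the function x(t) = x0 e^{0.06t} does satisfy dx/dt = 0.06x when the derivative is approximated by a difference quotient:

```python
import math

x0 = 1000.0    # hypothetical initial balance
k = 0.06       # continuous interest rate

def x(t):
    """Solution of dx/dt = k x with x(0) = x0."""
    return x0 * math.exp(k * t)

# Central difference quotient approximating dx/dt at t = 5.
h, t = 1e-6, 5.0
derivative = (x(t + h) - x(t - h)) / (2 * h)
print(derivative, k * x(t))   # the two values nearly agree
```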
Which bank offers the better deal? We have to compare the exponential functions 1.07^t and e^{0.06t}. Using a calculator, we compute a few values (for example, 1.07^10 ≈ 1.967, while e^{0.6} ≈ 1.822) to see that UBS offers the better deal. The extra interest from continuous compounding does not make up for the one-point difference in the nominal interest rates. ◄
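The calculator comparison can be reproduced in a few lines of Python (our illustration; the sampled years are our choice). UBS comes out ahead for every t, since ln(1.07) ≈ 0.0677 exceeds 0.06.

```python
import math

# Compare the two balances for an initial deposit of 1 unit.
for t in (1, 10, 30):
    ubs = 1.07 ** t           # 7% compounded annually
    cs = math.exp(0.06 * t)   # 6% compounded continuously
    print(t, round(ubs, 3), round(cs, 3))
```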
We can generalize the work we have done with the differential equation dx/dt = 0.06x:

Fact 8.1.1
Consider the linear differential equation

dx/dt = kx,

with given initial value x0 (k is an arbitrary constant). The solution is

x(t) = e^{kt} x0.

The quantity x will grow or decay exponentially (depending on the sign of k). See Figure 1.

[Figure 1: (a) x(t) = e^{kt} with positive k: exponential growth. (b) x(t) = e^{kt} with negative k: exponential decay.]

Now consider a dynamical system with state vector x(t) and components x1(t), ..., xn(t). In Chapter 6 we used the discrete approach to model this dynamical system: we take a snapshot of the system at times t = 1, 2, 3, ..., and we describe the transformation the system undergoes between these snapshots. If x(t + 1) depends linearly on x(t), we can write

x(t + 1) = Ax(t),

for some n × n matrix A.

In the continuous approach, we model the gradual change the system undergoes as time goes by. Mathematically speaking, we model the (instantaneous) rates of change of the components of the state vector x(t), that is, their derivatives dxi/dt. If these rates depend linearly on x1, x2, ..., xn, then we can write

dx1/dt = a11 x1 + a12 x2 + ··· + a1n xn
dx2/dt = a21 x1 + a22 x2 + ··· + a2n xn
...
dxn/dt = an1 x1 + an2 x2 + ··· + ann xn

or, in matrix form,

dx/dt = Ax,  where  A = [ a11 a12 ··· a1n ; a21 a22 ··· a2n ; ... ; an1 an2 ··· ann ].

The derivative of the parameterized curve x(t) is defined componentwise:

dx/dt = [ dx1/dt ; dx2/dt ; ... ; dxn/dt ].

We summarize these observations:

A linear dynamical system can be modeled by

x(t + 1) = Bx(t)  (discrete model)

or

dx/dt = Ax  (continuous model).

A and B are n × n matrices, where n is the number of components of the system.
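The two models are closely related: taking many small discrete steps of the continuous model approximates its solution. The sketch below (Python/NumPy, our addition; the matrix, step size, and Euler stepping scheme are our choices and are not discussed in the text) integrates dx/dt = Ax for a rotation-type matrix whose exact solution is known.

```python
import numpy as np

# A hypothetical 2 x 2 system dx/dt = A x, chosen so the exact
# solution is x(t) = [cos t, -sin t] when x(0) = [1, 0].
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
x = np.array([1.0, 0.0])

# Euler's method: x(t + dt) is approximately x(t) + dt * A x(t).
dt, steps = 0.001, 1000   # integrate up to t = 1
for _ in range(steps):
    x = x + dt * (A @ x)

print(x)   # close to [cos 1, -sin 1]
```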
We will first think about the equation

dx/dt = Ax

from a graphical point of view when A is a 2 × 2 matrix. We are looking for the parameterized curve

x(t) = [ x1(t) ; x2(t) ],  with  x(0) = x0,

that represents the evolution of the system from a given initial value. Each point on the curve x(t) will represent the state of the system at a certain moment in time, as shown in Figure 2.

[Figure 2: a trajectory x(t) in the x1–x2 plane, starting at x(0) = x0.]

It is natural to think of the trajectory x(t) in Figure 2 as the path of a moving particle in the x1–x2 plane. As you may have seen in a previous course, the velocity vector dx/dt of this moving particle is tangent to the trajectory at each point. See Figure 3.

[Figure 3: it is sensible to attach the velocity vector dx/dt at the endpoint of the state vector x(t), indicating the path the particle would take if it were to maintain the direction it has at time t.]

In other words, to solve the system

dx/dt = Ax

for a given initial state x0, we have to find the trajectory in the x1–x2 plane that starts at x0 and whose velocity vector at each point x is the vector Ax. The existence and uniqueness of such a trajectory seems intuitively obvious. Our intuition can be misleading in such matters, however, and it is comforting to know that we can establish the existence and uniqueness of the trajectory later. See Fact 8.1.2, Fact 8.2.3, and Exercise 9.4.48.

We can represent A graphically as a vector field in the x1–x2 plane: at the endpoint of each vector x we attach the vector Ax. To get a clear picture, we often sketch merely a direction field for Ax, which means that we will not necessarily sketch the vectors Ax to scale (we care only about their direction). To find the trajectory x(t), you simply follow the vector field (or direction field); that is, you follow the arrows of the field, starting at the point representing the initial state x0. The trajectories are also called the flow lines of the vector field Ax. To put it differently, imagine a traffic officer standing at each point of the plane, showing us in which direction to go and how fast to move (in other words, defining our velocity). As we follow his directions, we trace out a trajectory.

EXAMPLE 2 ► Consider the linear system dx/dt = Ax, where A = [ 1 2 ; 4 3 ]. In Figure 4, we sketch a direction field for Ax. Draw rough trajectories for the three given initial values.

[Figure 4: a direction field for Ax, with three initial points marked.]
Solution

Sketch the flow lines for the three given points by following the arrows, as shown in Figure 5. This picture does not tell the whole story about a trajectory x(t). We don't know the position x(t) of the moving particle at a specific time t. In other words, we know roughly which path the particle takes, but we don't know how fast it moves along that path. ◄

[Figure 5: the flow lines through the three given points.]

As we look at Figure 5, our eye's attention is attracted to two special lines, along which the vectors Ax point either radially away from the origin or directly toward the origin. In either case, the vector Ax is parallel to x:

Ax = λx,

for some scalar λ. This means that the nonzero vectors along these two special lines are just the eigenvectors of A, and the special lines themselves are the eigenspaces. See Figure 6.

[Figure 6: (a) Ax = λx, for a positive λ. (b) Ax = λx, for a negative λ.]

We have seen earlier that the eigenvalues of A = [ 1 2 ; 4 3 ] are 5 and −1, with corresponding eigenvectors [ 1 ; 2 ] and [ 1 ; −1 ]. These results agree with our graphical work in Figures 4 and 5. See Figure 7.

[Figure 7: the eigenspaces of A superimposed on the direction field.]

As in the case of a discrete dynamical system, we can sketch a phase portrait for the system dx/dt = Ax that shows some representative trajectories. See Figure 8.

[Figure 8: a phase portrait for dx/dt = Ax.]

In summary: if the initial state x0 is an eigenvector, then the trajectory moves along the corresponding eigenspace, away from the origin if the eigenvalue is positive and toward the origin if the eigenvalue is negative. If the eigenvalue is zero, then x0 is an equilibrium solution: x(t) = x0 for all times t.
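The special directions observed in Example 2 can be confirmed numerically. The sketch below (Python/NumPy, our addition) recovers the eigenvalues 5 and −1 and checks that each computed eigenvector really satisfies Ax = λx.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [4.0, 3.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)   # 5 and -1, possibly in the other order

# Each column of `eigenvectors` satisfies A v = lambda v,
# so it is a scalar multiple of [1, 2] or [1, -1].
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```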
How can we solve the system dx/dt = Ax analytically? We start with a simple case.

EXAMPLE 3 ► Find all solutions of the system

dx/dt = [ 2 0 ; 0 3 ] x.

Solution

The two differential equations

dx1/dt = 2x1,
dx2/dt = 3x2

are unrelated, or uncoupled; we can solve them separately, using the formula stated in Fact 8.1.1:

x1(t) = e^{2t} x1(0),
x2(t) = e^{3t} x2(0).

Both components of x(t) grow exponentially, and the second one will grow faster than the first. In particular, if one of the components is initially 0, it remains 0 for all future times. In Figure 9, we sketch a rough phase portrait for this system. ◄

[Figure 9: a rough phase portrait for the system in Example 3.]

Now let's do a slightly harder example:

EXAMPLE 4 ► Find all solutions of the system

dx/dt = [ 1 2 ; −1 4 ] x.

Solution

Above, we have seen that the eigenvalues and eigenvectors of A tell us a lot about the behavior of the solutions of the system dx/dt = Ax. The eigenvalues of A are λ1 = 2 and λ2 = 3, with corresponding eigenvectors v1 = [ 2 ; 1 ] and v2 = [ 1 ; 1 ]. This means that

S⁻¹AS = B,  where  S = [ 2 1 ; 1 1 ]  and  B = [ 2 0 ; 0 3 ],

the matrix considered in Example 3. We can write the system

dx/dt = Ax

as

dx/dt = SBS⁻¹x,  or  S⁻¹ dx/dt = BS⁻¹x,

or (see Exercise 51)

d/dt (S⁻¹x) = B(S⁻¹x).

Let us introduce the notation c(t) = S⁻¹x(t); note that c(t) is the coordinate vector of x(t) with respect to the eigenbasis v1, v2. Then the system takes the form

dc/dt = Bc,

which is just the equation we solved in Example 3. We found that the solutions are of the form

c(t) = [ c1 e^{2t} ; c2 e^{3t} ],

where c1 and c2 are arbitrary constants. Therefore, the solutions of the original system

dx/dt = Ax

are

x(t) = Sc(t) = c1 e^{2t} [ 2 ; 1 ] + c2 e^{3t} [ 1 ; 1 ].
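The change-of-coordinates recipe of Example 4 can be sketched in code (Python/NumPy, our illustration; the initial state is a hypothetical value): find the coordinates of x0 in the eigenbasis, solve the uncoupled system, and transform back.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [-1.0, 4.0]])
S = np.array([[2.0, 1.0],
              [1.0, 1.0]])          # columns: eigenvectors [2, 1] and [1, 1]
lambdas = np.array([2.0, 3.0])      # the corresponding eigenvalues

x0 = np.array([1.0, 0.0])           # hypothetical initial state

def x(t):
    c0 = np.linalg.solve(S, x0)     # coordinates of x0 in the eigenbasis
    c_t = c0 * np.exp(lambdas * t)  # uncoupled: c_i(t) = c_i(0) e^{lambda_i t}
    return S @ c_t                  # back to standard coordinates

# Check against the differential equation with a difference quotient.
h, t = 1e-6, 0.5
assert np.allclose((x(t + h) - x(t - h)) / (2 * h), A @ x(t), atol=1e-3)
print(x(1.0))
```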
We can write this formula in more general terms as

x(t) = c1 e^{λ1 t} v1 + c2 e^{λ2 t} v2.

Note that c1 and c2 are the coordinates of x(0) with respect to the basis v1, v2, since

x(0) = c1 v1 + c2 v2.

It is informative to consider a few special trajectories: if c1 = 1 and c2 = 0, the trajectory

x(t) = e^{2t} [ 2 ; 1 ]

moves along the eigenspace E2 spanned by [ 2 ; 1 ]. Likewise, if c1 = 0 and c2 = 1, we have the trajectory e^{3t} [ 1 ; 1 ], moving along the eigenspace E3.

If c2 ≠ 0, then the entries of c2 e^{3t} [ 1 ; 1 ] will become much larger (in absolute value) than the entries of c1 e^{2t} [ 2 ; 1 ] as t goes to infinity. The dominant term c2 e^{3t} [ 1 ; 1 ], associated with the larger eigenvalue, determines the behavior of the system in the distant future. The state vector x(t) is almost parallel to E3 for large t. For large negative t, on the other hand, the state vector is very small and almost parallel to E2.

In Figure 10, we sketch a rough phase portrait for the system dx/dt = Ax. This is a linear distortion of the phase portrait we sketched in Figure 9. More precisely, the matrix S = [ 2 1 ; 1 1 ] transforms the phase portrait in Figure 9 into the phase portrait sketched in Figure 10 (transforming e1 into [ 2 ; 1 ] and e2 into [ 1 ; 1 ]), as expected. ◄

[Figure 10: a rough phase portrait for dx/dt = Ax in Example 4.]

Our work in Examples 3 and 4 generalizes readily to any n × n matrix A that is diagonalizable over R, i.e., for which there is an eigenbasis in Rⁿ:

Fact 8.1.2
Consider the system dx/dt = Ax. Suppose there is a real eigenbasis v1, ..., vn for A, with associated eigenvalues λ1, ..., λn. Then the general solution of the system is

x(t) = c1 e^{λ1 t} v1 + ··· + cn e^{λn t} vn.

The ci are the coordinates of x0 with respect to the basis v1, ..., vn.

We can think of the general solution as a linear combination of the solutions e^{λi t} vi associated with the eigenvectors vi. Note the similarity between this solution and the general solution of the discrete dynamical system x(t + 1) = Ax(t):

x(t) = c1 λ1^t v1 + ··· + cn λn^t vn.

The terms λi^t are replaced by e^{λi t}. We have already observed this fact in a dynamical system with only one component (see Example 1).

EXAMPLE 5 ► Consider a system dx/dt = Ax, where A is diagonalizable over R. When is the zero state a stable equilibrium solution? Give your answer in terms of the eigenvalues of A.

Solution

Note that lim_{t→∞} e^{λt} = 0 if (and only if) λ is negative. Therefore, we observe stability if (and only if) all eigenvalues of A are negative. ◄

Consider an invertible 2 × 2 matrix A with two distinct eigenvalues λ1 > λ2. Then the phase portrait of dx/dt = Ax looks qualitatively like one of the three sketches in Figure 11. We observe stability only in Figure 11c. Note that the direction of the trajectories outside the eigenspaces always approaches the direction of E_{λ1} as t goes to infinity. Compare with Figure 6.1.11.

[Figure 11: the three qualitative phase portraits for distinct real eigenvalues λ1 > λ2: (a) both positive, (b) opposite signs, (c) both negative.]
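Example 5's stability criterion can be packaged as a small test (Python/NumPy sketch, our addition; the two sample matrices are hypothetical): the zero state is stable exactly when every eigenvalue is negative.

```python
import numpy as np

def is_stable(A):
    """Stability of the zero state for dx/dt = A x, with A
    diagonalizable over R: all eigenvalues must be negative."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

A_stable = np.array([[-2.0, 0.0],
                     [0.0, -1.0]])   # eigenvalues -2, -1: stable
A_saddle = np.array([[2.0, 0.0],
                     [0.0, -1.0]])   # eigenvalues 2, -1: not stable

print(is_stable(A_stable), is_stable(A_saddle))
```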
EXERCISES

GOALS  Use the concept of a continuous dynamical system. Solve the differential equation dx/dt = kx. Solve the system dx/dt = Ax when A is diagonalizable over R, and sketch the phase portrait for 2 × 2 matrices A.

Solve the initial value problems posed in Exercises 1 to 5. Graph the solution.

1. dx/dt = 5x with x(0) = 7.
2. dx/dt = −0.71x with x(0) = −e.
3. dP/dt = 0.03P with P(0) = 7.
4. dy/dt = 0.8t with y(0) = −0.8.
5. dy/dt = 0.8y with y(0) = −0.8.

Solve the nonlinear differential equations in Exercises 6 to 11 using the method of separation of variables (p. 439): write the differential equation dx/dt = f(x) as dx/f(x) = dt and integrate both sides.

6. dx/dt = 1/x, x(0) = 1.
7. dx/dt = x², x(0) = 1. Describe the behavior of your solution as t increases.
8. dx/dt = √x, x(0) = 4.
9. dx/dt = x^k (with k ≠ 1), x(0) = 1.
10. dx/dt = 1/cos(x), x(0) = 0.
11. dx/dt = 1 + x², x(0) = 0.

12. Find a differential equation of the form dx/dt = kx for which x(t) = 3^t is a solution.

13. In 1778, a wealthy Pennsylvanian merchant named Jacob DeHaven lent $450,000 to the Continental Congress to support the troops at Valley Forge. The loan was never repaid. Mr. DeHaven's descendants are taking the United States Government to court to collect what they believe they are owed. The going interest rate at the time was 6%. How much were the DeHavens owed in 1990
a. if interest is compounded yearly?
b. if interest is compounded continuously?
(Adapted from The New York Times, May 27, 1990.)

14. The carbon in living matter contains a minute proportion of the radioactive isotope C-14. This radiocarbon arises from cosmic-ray bombardment in the upper atmosphere and enters living systems by exchange processes. After the death of an organism, exchange stops, and the carbon decays. Therefore, carbon dating enables us to calculate the time at which an organism died. Let x(t) be the proportion of the original C-14 still present t years after death. By definition, x(0) = 1 = 100%. We are told that x(t) satisfies the differential equation

dx/dt = −(1/8270) x.

a. Find a formula for x(t). Determine the half-life of C-14 (that is, the time it takes for half of the C-14 to decay).
b. The Ice Man. In 1991, the body of a man was found in melting snow in the Alps of Northern Italy. A well-known historian in Innsbruck, Austria, determined that the man had lived in the Bronze Age, which started about 2000 B.C. in this region. Examination of tissue samples performed independently at Zürich and Oxford revealed that 47% of the C-14 present in the body at the time of his death had decayed. When did this man die? Is the result of the carbon dating compatible with the estimate of the Austrian historian?

15. Justify the "rule of 69": if a quantity grows at an instantaneous rate of k%, then its doubling time is about 69/k. Example: In 1995 the population of India grew at a rate of about 1.9%, with a doubling time of about 69/1.9 ≈ 36 years.
Consider the system

dx/dt = [ λ1 0 ; 0 λ2 ] x.

For the values of λ1 and λ2 given in Exercises 16 to 19, sketch the trajectories for all nine initial values shown below. For each of the points, trace out both the future and the past of the system.

[A grid of nine initial points in the x1–x2 plane.]

16. λ1 = 1, λ2 = −1.
17. λ1 = 1, λ2 = 2.
18. λ1 = 1, λ2 = 1.
19. λ1 = 0, λ2 = 1.

20. Consider the system dx/dt = Ax with A = [ 0 −1 ; 1 0 ]. Sketch a direction field for Ax. Based on your sketch, describe the trajectories geometrically. From your sketch, can you guess a formula for the solution with x0 = [ 1 ; 0 ]? Verify your guess by substituting into the equation.

21. Consider the system dx/dt = Ax with A = [ 0 1 ; 0 0 ]. Sketch a direction field of Ax. Based on your sketch, describe the trajectories geometrically. Can you find the solutions analytically?

22. Consider a linear system dx/dt = Ax of arbitrary size. Suppose x1(t) and x2(t) are solutions of the system. Is the sum x(t) = x1(t) + x2(t) a solution as well? How do you know?

23. Consider a linear system dx/dt = Ax of arbitrary size. Suppose x1(t) is a solution of the system and k is an arbitrary constant. Is x(t) = k x1(t) a solution as well? How do you know?

24. Let A be an n × n matrix and k a scalar. Consider the two systems below:

(I) dx/dt = Ax
(II) dc/dt = (A + kI_n) c

Show that if x(t) is a solution of system (I), then c(t) = e^{kt} x(t) is a solution of system (II).

25. Let A be an n × n matrix and k a scalar. Consider the two systems below:

(I) dx/dt = Ax
(II) dc/dt = kAc

Show that if x(t) is a solution of system (I), then c(t) = x(kt) is a solution of system (II). Compare the vector fields of the two systems.

In Exercises 26 to 31, solve the system with the given initial value.

26. dx/dt = [ ~ ~ ] x with x(0) = [ ~ ].
27. dx/dt = [ 4 ~ ; ~ ~ ] x with x(0) = [ ~ ].
28. dx/dt = [ ~ ~ ] x with x(0) = [ ~ ].
29. dx/dt = [ ~ ~ ] x with x(0) = [ ~ ].
30. dx/dt = [ ~ ~ ~ ; ~ ~ ~ ; ~ ~ ~ ] x with x(0) = [ ~ ].
31. dx/dt = [ ~ ~ ] x with x(0) = [ ~ ].

Sketch rough phase portraits for the dynamical systems given in Exercises 32 to 39.

32. dx/dt = [ ~ ~ ] x.
33. dx/dt = [ ~ ~ ] x.
34. dx/dt = [ ~ ~ ] x.
35. dx/dt = [ ~ ~ ] x.
 + 1) = [00..92 0.22 ]()
36. x(t
_ 37. x(t
1.
. = + 1)
[ _ 1_ 02
c. What will happen in the long term ? Does tbe outcome depend on the initial populations? If so, how?
x t ·
Jc)
o.3 1. x t · 7
43. Answer the questions posed in Exercise 42 for tbe system below:
0.2] ()
dx
1.1 38. x(t + 1) = [ 0.4 O.S x t ·
39.
_
X
(t
+ 1) =
[o.s 0 _3
 o.4 ] 1.6
455

dt
c)
=
Sx 
y
dy
X I ·

dt = 2x + 4y
 .
40. Find a 2 x 2 matrix A such that the system dx jdt = Ax has
x(t
 [2e2 ' + 3e3'] 3e2r + 4 e3r
as one of its solutions. Consider a noninvertible 2 x 2 matrix A with two distinct eigenvalues (note 41. that one of the eioenvalues must be 0). Choose two eigenvectors ü, and ÜJ with eioenvalue~ ), 1 = 0 and A.2 . Suppose A.2 is negative. Sketch a phase p~rtrait fo; the system dx jdt = A.t , clearly indicating the shape and longterm behavior of the trajectories.
44. Answer tbe questions posed in Exercise 42 for tbe system below: dx  = x+ 4y
dt
dy
=2x y dt
45. Two berds of vicious animals are figbting each other to the death. During the fight, the populations x(t ) and y(t) of the two species can be modeled by the system below: 1 dx
dt
 4y
dy
 = x dt
a. Wbat is the significance of the constants 4 and 1 in these equations? Which species has the more vicious (or more efficient) fighters? b. Sketch a phase portrait for this system. c. Who wins the fight (in the sense that some individuals ofthat species are left while the other herd .is eradicated)? How does your answer depend on tbe initial populations? 46. Repeat Exerc.ise 45 for the system 42. Consider the interaction of two species of animals in a habitat. We are tol d that the change of the populations x( r ) and y(r) can be modeled by the equations
dx
 py
dt dy
dx = 1.4x  1.2y dt


d ...!... = 0.8x 
dt
1.4y
dt
=qx
'
where time t is measured in years. a. What kind of interaction do we observe (symbiosis, competition, predatorprey)? b. Sketch a phase porlnit for this system. From the nature of the problem, we are interested only in the first qu adrant.
where p and q are two positive constants. 2
'This is the simples!. in a serie of c01nbm models developed by F. W. Lanchester duri ng World War 1 (F. W. Lanchester. Aircraft in W'wfare, tile Daw11 of 1he Fourlir Arm. Tiptree, Constable and Co.. Ltd., 1916). 2The result is known as Lanchester's square luw.
47. The interaction of two populations of animals is modeled by the differential equations
dx/dt = x + ky
dy/dt = kx − 4y
for some positive constant k.
a. What kind of interaction do we observe? What is the practical significance of the constant k?
b. Find the eigenvalues of the coefficient matrix of the system. What can you say about the signs of the eigenvalues? How does your answer depend on the value of the constant k?
c. For each case you discussed in part b, sketch a rough phase portrait. What does each phase portrait tell you about the future of the two populations?

48. Repeat Exercise 47 for the system
dx/dt = x + ky
dy/dt = −x − 4y,
where k is a positive constant.

49. Here is a continuous model of a person's glucose regulatory system (compare with Exercise 6.1.36). Let g(t) and h(t) be the excess glucose and insulin concentrations in a person's blood. We are told that
dg/dt = −g − 0.2h
dh/dt = 0.6g − 0.2h,
where time t is measured in hours. After a heavy holiday dinner, we measure g(0) = 30 and h(0) = 0. Find closed formulas for g(t) and h(t). Sketch the trajectory.

50. Consider a linear system dx/dt = Ax, where A is a 2 × 2 matrix which is diagonalizable over ℝ. When is the zero state a stable equilibrium solution? Give your answer in terms of the determinant and the trace of A.

51. Let x(t) be a differentiable curve in ℝⁿ and S an n × n matrix. Show that
d/dt (Sx) = S dx/dt.

52. Find all solutions of the system
dx/dt = [λ 1; 0 λ] x,
where λ is an arbitrary constant. Hint: Exercises 21 and 24 are helpful. Sketch a phase portrait. For which choices of λ is the zero state a stable equilibrium solution?

53. Solve the initial value problem
dx/dt = [p −q; q p] x with x₀ = [a; b].
Sketch the trajectory for the cases when p is positive, negative, or 0. In which cases does the trajectory approach the origin? Hint: Exercises 20, 24, and 25 are helpful.

54. Consider a door that opens to only one side (as most doors do). A spring mechanism closes the door automatically. The state of the door at a given time t (measured in seconds) is determined by the angular displacement α(t) (measured in radians) and the angular velocity ω(t) = dα/dt. Note that α is always positive or zero (since the door opens to only one side), but ω can be positive or negative (depending on whether the door is opening or closing). When the door is moving freely (nobody is pushing or pulling), its movement is subject to the following differential equations:
dα/dt = ω   (the definition of ω)
dω/dt = −2α − 3ω   (−2α reflects the force of the spring, and −3ω models friction)
a. Sketch a phase portrait for this system.
b. Discuss the movement of the door represented by the qualitatively different trajectories. For which initial states does the door slam (i.e., reach α = 0 with velocity ω < 0)?

55. Answer the questions posed in Exercise 54 for the system
dα/dt = ω
dω/dt = −pα − qω,
where p and q are positive, and q² > 4p.
8.2 THE COMPLEX CASE: EULER'S FORMULA

Consider a linear system
dx/dt = Ax,
where the n × n matrix A is diagonalizable over ℂ: there is a complex eigenbasis v₁, . . . , vₙ for A, with associated complex eigenvalues λ₁, . . . , λₙ. You may wonder whether the formula
x(t) = c₁e^{λ₁t}v₁ + · · · + cₙe^{λₙt}vₙ
(with complex cᵢ) produces the general complex solution of the system, just as in the real case (Fact 8.1.2). Before we can make sense out of the formula above, we have to think about the idea of a complex-valued function, and in particular about the exponential function e^{λt} for complex λ.

Complex-Valued Functions

A complex-valued function z = f(t) is a function from ℝ to ℂ (with domain ℝ and codomain ℂ): the input t is real, and the output z is complex. Here are two examples:
z = t + it²
z = cos(t) + i sin(t).
For each t, the output z can be represented as a point in the complex plane. As we let t vary, we trace out a trajectory in the complex plane. In Figure 1 we sketch the trajectories of the two complex-valued functions defined above.

[Figure 1: (a) the trajectory of z = t + it²; (b) the trajectory of z = cos(t) + i sin(t).]

We can write a complex-valued function z(t) in terms of its real and imaginary parts:
z(t) = x(t) + i y(t).
(Consider the two examples above.) If x(t) and y(t) are differentiable real-valued functions, then the derivative of the complex-valued function z(t) is defined by
dz/dt = dx/dt + i dy/dt.
For example, if z = t + it², then
dz/dt = 1 + 2it.
If z(t) = cos(t) + i sin(t), then
dz/dt = −sin(t) + i cos(t).

Please verify that the basic rules of differential calculus (the sum, product, and quotient rules) apply to complex-valued functions. The chain rule holds in the following form: if z = f(t) is a differentiable complex-valued function and t = g(s) is a differentiable function from ℝ to ℝ, then
dz/ds = (dz/dt)(dt/ds).

The derivative dz/dt of a complex-valued function z(t), for a given t, can be visualized as a tangent vector to the trajectory at z(t), as shown in Figure 2.

[Figure 2: the derivative dz/dt as a tangent vector to the trajectory at z(t).]

Next let's think about the complex-valued exponential function z = e^{λt}, where λ is complex and t real. How should the function z = e^{λt} be defined? We can get some inspiration from the real case: the exponential function x = e^{kt} (for real k) is the unique function such that dx/dt = kx and x(0) = 1 (compare with Fact 8.1.1).
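Both derivative computations above can be spot-checked numerically with a central finite difference; the sample point t = 0.7 is arbitrary.

```python
# Spot-check of dz/dt = dx/dt + i dy/dt for the two textbook examples,
# using a central finite difference.
import cmath

def deriv(f, t, h=1e-6):
    return (f(t + h) - f(t - h)) / (2 * h)   # central difference

f1 = lambda t: t + 1j * t**2                      # z = t + i t^2
f2 = lambda t: cmath.cos(t) + 1j * cmath.sin(t)   # z = cos t + i sin t

t = 0.7
d1 = deriv(f1, t)      # should be close to 1 + 2it
d2 = deriv(f2, t)      # should be close to -sin t + i cos t
print(d1, d2)
```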
We can use this fundamental property of real exponential functions to define the complex exponential functions:

Definition 8.2.1  Complex exponential functions
If λ is a complex number, then z = e^{λt} is the unique complex-valued function such that
dz/dt = λz and z(0) = 1.
(The existence of such a function, for any λ, will be established below; the proof of uniqueness is left as Exercise 34.)

It follows that the unique complex-valued function z(t) with dz/dt = λz and z(0) = z₀ is z(t) = e^{λt}z₀, for an arbitrary complex initial value z₀.

Let us first consider the simplest case, z = e^{it}, where λ = i. We are looking for a complex-valued function z(t) such that dz/dt = iz and z(0) = 1. From a graphical point of view, we are looking for the trajectory z(t) in the complex plane that starts at z = 1 and whose tangent vector dz/dt = iz is perpendicular to z at each point (see Example 1 of Section 6.4). In other words, we are looking for the flow line of the vector field in Figure 3 starting at z = 1.

[Figure 3: the vector field dz/dt = iz in the complex plane.]

The unit circle, with parametrization
z(t) = cos(t) + i sin(t),
satisfies these requirements:
dz/dt = −sin(t) + i cos(t) = i(cos(t) + i sin(t)) = iz(t),
and z(0) = 1. See Figure 4. We have shown the following fundamental result:

Fact 8.2.2  Euler's formula
e^{it} = cos(t) + i sin(t)

[Figure 4: the unit circle, with parametrization z(t) = cos(t) + i sin(t).]

The case t = π leads to the intriguing formula e^{iπ} = −1; this has been called the most beautiful formula in all of mathematics.¹ Euler's formula can be used to write the polar form of a complex number more succinctly:
z = r(cos φ + i sin φ) = re^{iφ}.

[Figure 5: Euler's likeness and his celebrated formula are shown on a Swiss postage stamp.]

Now consider z = e^{λt}, where λ is an arbitrary complex number, λ = p + iq. By manipulating exponentials as if they were real, we find that
e^{λt} = e^{(p+iq)t} = e^{pt}e^{iqt} = e^{pt}(cos(qt) + i sin(qt)).
We can validate this result by checking that the complex-valued function
z(t) = e^{pt}(cos(qt) + i sin(qt))
does indeed satisfy the definition of e^{λt}, namely, dz/dt = λz and z(0) = 1:
dz/dt = pe^{pt}(cos(qt) + i sin(qt)) + e^{pt}(−q sin(qt) + iq cos(qt))
 = (p + iq)e^{pt}(cos(qt) + i sin(qt)) = λz.

EXAMPLE 2 ▸ Sketch the trajectory of the complex-valued function z(t) = e^{(0.1+i)t}.

Solution
z(t) = e^{0.1t}e^{it} = e^{0.1t}(cos(t) + i sin(t))
The trajectory spirals outward, as shown in Figure 6, since e^{0.1t} grows exponentially. ◂

[Figure 6: the outward spiral traced by z(t) = e^{(0.1+i)t}.]

EXAMPLE 3 ▸ For which complex numbers λ is lim_{t→∞} e^{λt} = 0?

Solution
Recall that e^{λt} = e^{pt}(cos(qt) + i sin(qt)), where λ = p + iq, so that |e^{λt}| = e^{pt}. This quantity approaches zero if (and only if) p is negative, that is, if e^{pt} decays exponentially. We summarize: lim_{t→∞} e^{λt} = 0 if (and only if) the real part of λ is negative. ◂

We are now ready to tackle the problem posed at the beginning of this section: consider a system dx/dt = Ax, where the n × n matrix A has a complex eigenbasis v₁, . . . , vₙ, with eigenvalues λ₁, . . . , λₙ. Find all complex solutions x(t) of this system. By a complex solution we mean a function from ℝ to ℂⁿ (that is, t is real and x is in ℂⁿ). In other words, the component functions x₁(t), . . . , xₙ(t) of x(t) are complex-valued functions. As you review our work in the last section, you will find that the approach we took in the real case applies to the complex case as well, without modifications:

Fact 8.2.3
Consider a linear system dx/dt = Ax. Suppose there is a complex eigenbasis v₁, . . . , vₙ for A with associated complex eigenvalues λ₁, . . . , λₙ. Then the general complex solution of the system is
x(t) = c₁e^{λ₁t}v₁ + · · · + cₙe^{λₙt}vₙ,
where the cᵢ are arbitrary complex numbers.

We can check that the given curve x(t) satisfies the equation dx/dt = Ax: we have
dx/dt = c₁λ₁e^{λ₁t}v₁ + · · · + cₙλₙe^{λₙt}vₙ
(by Definition 8.2.1), and
Ax(t) = c₁e^{λ₁t}Av₁ + · · · + cₙe^{λₙt}Avₙ = c₁λ₁e^{λ₁t}v₁ + · · · + cₙλₙe^{λₙt}vₙ,
because the vᵢ are eigenvectors. The two answers match.

When is the zero state a stable equilibrium solution for the system dx/dt = Ax? Considering Example 3 and the form of the solution given in Fact 8.2.3, we can conclude that this is the case if (and only if) the real parts of all eigenvalues are negative (at least when A is diagonalizable over ℂ). The nondiagonalizable case is left as Exercise 9.4.48.

Fact 8.2.4
For a system dx/dt = Ax, the zero state is an asymptotically stable equilibrium solution if (and only if) the real parts of all eigenvalues of A are negative.

¹Benjamin Peirce (1809–1880), a Harvard mathematician, after observing that e^{iπ} = −1, used to turn to his students and say, "Gentlemen, that is surely true, it is absolutely paradoxical, we cannot understand it, and we don't know what it means, but we have proved it, and therefore we know it must be the truth." Do you not now think that we understand not only that the formula is true but also what it means?
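These results lend themselves to quick numerical checks. The sketch below verifies Euler's formula, the decay criterion of Example 3, and the eigenvalue test of Fact 8.2.4; the two 2 × 2 matrices at the end are made-up examples, not taken from the text.

```python
# Numerical sketches of Fact 8.2.2, Example 3, and Fact 8.2.4.
import cmath
import math

print(cmath.exp(1j * math.pi))        # very close to -1 (Euler's formula)

lam = -0.5 + 3j                        # p = Re(lam) = -0.5 < 0, so e^{lam t} -> 0
mags = [abs(cmath.exp(lam * t)) for t in (1.0, 5.0, 10.0)]
print(mags)                            # strictly decreasing toward 0

def eigenvalues_2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] from the characteristic equation."""
    tr, det = a + d, a * d - b * c
    root = cmath.sqrt(tr * tr - 4 * det)
    return (tr + root) / 2, (tr - root) / 2

def is_stable(a, b, c, d):
    """Fact 8.2.4: stable iff every eigenvalue has negative real part."""
    return all(ev.real < 0 for ev in eigenvalues_2x2(a, b, c, d))

print(is_stable(-1, 2, -2, -1))        # eigenvalues -1 +/- 2i -> stable
print(is_stable(0.1, 1, -1, 0.1))      # eigenvalues 0.1 +/- i -> unstable
```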
EXAMPLE 4 ▸ Consider the system dx/dt = Ax, where A is a (real) 2 × 2 matrix. When is the zero state a stable equilibrium solution for this system? Give your answer in terms of tr(A) and det(A).

Solution
We observe stability either if A has two negative eigenvalues or if A has two conjugate eigenvalues p ± iq, where p is negative. In both cases, tr(A) is negative and det(A) is positive. Check that in all other cases tr(A) ≥ 0 or det(A) ≤ 0. ◂

Fact 8.2.5
Consider a linear system dx/dt = Ax, where A is a real 2 × 2 matrix. Then the zero state is an asymptotically stable equilibrium solution if (and only if) tr(A) < 0 and det(A) > 0.

As a special case of Fact 8.2.3, let's consider the system
dx/dt = Ax,
where A is a real 2 × 2 matrix with eigenvalues λ₁,₂ = p ± iq and corresponding eigenvectors v₁,₂ = v ± iw. Taking the same approach as in the discrete case (Fact 6.5.3), we can write the real solution x(t) = c₁e^{λ₁t}v₁ + c₂e^{λ₂t}v₂ (where c₁ = h + ik and c₂ = c̄₁) as
x(t) = 2 Re(c₁e^{λ₁t}v₁)
 = 2 Re((h + ik)e^{pt}(cos(qt) + i sin(qt))(v + iw))
 = 2e^{pt}(h cos(qt)v − h sin(qt)w − k cos(qt)w − k sin(qt)v)
 = 2e^{pt}[w v][−k cos(qt) − h sin(qt); −k sin(qt) + h cos(qt)]
 = 2e^{pt}[w v][cos(qt) −sin(qt); sin(qt) cos(qt)][−k; h].
Note that x(0) = −2kw + 2hv. Let a = −2k and b = 2h for simplicity.

Fact 8.2.6
Consider a linear system dx/dt = Ax, where A is a real 2 × 2 matrix with eigenvalues p ± iq. Consider an eigenvector v + iw with eigenvalue p + iq. Then
x(t) = e^{pt}[w v][cos(qt) −sin(qt); sin(qt) cos(qt)][a; b],
where a and b are the coordinates of x₀ with respect to the basis w, v. The trajectories are either ellipses (linearly distorted circles), if p = 0, or spirals, spiraling outward if p is positive and inward if p is negative.

Note the similarity of the formula in Fact 8.2.6 to the formula
x(t) = rᵗ[w v][cos(φt) −sin(φt); sin(φt) cos(φt)][a; b]
in the case of the discrete system x(t + 1) = Ax(t) (Fact 6.5.3). The factor rᵗ is replaced by e^{pt}, and φ is replaced by q. This makes good sense, because λᵗ = rᵗ(cos(φt) + i sin(φt)) in the formula for the discrete system is replaced by e^{λt} = e^{pt}(cos(qt) + i sin(qt)) in the continuous case (Fact 8.2.3).

EXAMPLE 5 ▸ Solve the system
dx/dt = [−3 2; −5 3] x with x₀ = [0; 1].

Solution
The eigenvalues are λ₁,₂ = ±i, so that p = 0 and q = 1. This tells us that the trajectory is an ellipse. To determine the direction of the trajectory (clockwise or counterclockwise) and its rough shape, we can draw the vector field Ax for a few simple points x, say x = ±e₁, and sketch the flow line starting at [0; 1]. See Figure 7.

[Figure 7: the vector field Ax at a few sample points, with the flow line through (0, 1).]
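The result of this example can be checked numerically. The sketch below assumes the system matrix A = [[−3, 2], [−5, 3]] and the trajectory x(t) = (2 sin t, cos t + 3 sin t), as read from this example, and compares a finite-difference derivative of x(t) with Ax(t).

```python
# Check that x(t) = (2 sin t, cos t + 3 sin t) solves dx/dt = Ax for
# A = [[-3, 2], [-5, 3]] and satisfies the initial condition x(0) = (0, 1).
import math

def x(t):
    return (2 * math.sin(t), math.cos(t) + 3 * math.sin(t))

def A_times(v):
    return (-3 * v[0] + 2 * v[1], -5 * v[0] + 3 * v[1])

def dx(t, h=1e-6):
    """Central finite-difference derivative of the curve x(t)."""
    p, m = x(t + h), x(t - h)
    return ((p[0] - m[0]) / (2 * h), (p[1] - m[1]) / (2 * h))

t = 0.9
print(x(0))          # the initial state (0, 1)
print(dx(t))
print(A_times(x(t))) # should agree with dx(t)
```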
Now let us find a formula for the trajectory:
E_i = ker[−3 − i  2; −5  3 − i] = span[2; 3 + i],
and
[2; 3 + i] = [2; 3] + i[0; 1] = v + iw.
The linear system x₀ = aw + bv has the solution a = 1, b = 0. Therefore,
x(t) = e^{pt}[w v][cos(qt) −sin(qt); sin(qt) cos(qt)][a; b]
 = [0 2; 1 3][cos(t); sin(t)]
 = cos(t)[0; 1] + sin(t)[2; 3]
 = [2 sin(t); cos(t) + 3 sin(t)].
You can check that
dx/dt = Ax and x(0) = [0; 1].
The trajectory is the ellipse shown in Figure 8.

[Figure 8: the elliptical trajectory of Example 5.]

Consider a 2 × 2 matrix A. The various scenarios for the system dx/dt = Ax can be conveniently represented in the tr(A)–det(A) plane, where a 2 × 2 matrix A is represented by the point (tr(A), det(A)). Recall that the characteristic equation is
λ² − tr(A)λ + det(A) = 0,
and the eigenvalues are
λ₁,₂ = (1/2)(tr(A) ± √((tr(A))² − 4 det(A))).
Therefore, the eigenvalues of A are real if (and only if) the point (tr(A), det(A)) is located below or on the parabola
det(A) = (tr(A)/2)²
in the tr(A)–det(A) plane. See Figure 9. Note that there are five major cases, corresponding to the regions in Figure 9, and some exceptional cases, corresponding to the dividing lines. The case when
det(A) = (tr(A)/2)²
is discussed in Exercises 35 and 37. What does the phase portrait look like when det(A) = 0 and tr(A) ≠ 0?

In Figure 10 we take another look at the five major types of phase portraits. Both in the discrete and in the continuous case, we sketch the phase portraits produced by various eigenvalues. We include the case of an ellipse, since it is important in applications.

[Figure 9: the tr(A)–det(A) plane, divided into regions by the parabola det(A) = (tr(A)/2)² and the coordinate axes.]
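The five major regions of the tr(A)–det(A) plane can be separated mechanically. The following minimal sketch classifies a 2 × 2 system; the exceptional borderline cases (points on the parabola or on the axes) are lumped in with neighboring generic regions.

```python
# A minimal classifier for the tr(A)-det(A) plane of dx/dt = Ax, with A a
# real 2x2 matrix given as ((a, b), (c, d)).  Borderline cases (disc = 0
# or det = 0) are folded into the neighboring generic regions.
def classify(A):
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det      # eigenvalues are real iff disc >= 0
    if disc >= 0:
        if det < 0:
            return "saddle"
        return "stable node" if tr < 0 else "unstable node"
    if tr == 0:
        return "center (ellipses)"
    return "stable spiral" if tr < 0 else "unstable spiral"

print(classify(((-3, 2), (-5, 3))))   # Example 5: tr = 0, det = 1
print(classify(((1, 1), (0, -2))))    # real eigenvalues 1 and -2
```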
468 •
Ch ap. 8 Linea r Systems of Diffe rent ial Equati o ns Sec. 8.2 Th e Camplex Case: Eul er's Fo rmula •
Figure 10 The major types of
phase partraits.
Discrete
Continuous
469
EXERCISES
GOALS  Use the definition of the complex-valued exponential function z = e^{λt}. Solve the system dx/dt = Ax for a 2 × 2 matrix A with complex eigenvalues p ± iq.

1. Find e^{2πi}.

2. Find e^{(1/2)πi}.

3. Write z = −1 + i in polar form as z = re^{iφ}.

4. Sketch the trajectory of the complex-valued function z = e^{3it}. What is the period?

5. Sketch the trajectory of the complex-valued function z = e^{(0.1−2i)t}.

6. Find all complex solutions of the system
dx/dt = [∗ 2; ∗ 3] x
in the form given in Fact 8.2.3. (Asterisks mark entries that are illegible in this copy.) What solution do you get if you let c₁ = c₂ = 1?

7. Determine the stability of the system
dx/dt = [∗ 2; ∗ 4] x.

8. Consider a system dx/dt = Ax, where A is a symmetric matrix. When is the zero state a stable equilibrium solution? Give your answer in terms of the definiteness of the matrix A.

9. Consider a system dx/dt = Ax, where A is a 2 × 2 matrix with tr(A) < 0. We are told that A has no real eigenvalues. What can you say about the stability of the system?

10. Consider a quadratic form q(x) = x · Ax of two variables, x₁ and x₂. Consider the following system of differential equations:
dx₁/dt = ∂q/∂x₁
dx₂/dt = ∂q/∂x₂,
or, more succinctly,
dx/dt = grad(q).
a. Show that the system dx/dt = grad(q) is linear by finding a matrix B (in terms of the symmetric matrix A) such that grad(q) = Bx.
b. When q is negative definite, draw a sketch showing possible level curves of q. On the same sketch, draw a few trajectories of the system dx/dt = grad(q). What does your sketch suggest about the stability of the system dx/dt = grad(q)?
c. Do the same as in part b for an indefinite quadratic form.
d. Explain the relationship between the definiteness of the form q and the stability of the system dx/dt = grad(q).

11. Do parts a and d of Exercise 10 for a quadratic form of n variables.

12. Determine the stability of the system
dx/dt = [−1 ∗ ∗; 1 0 −1; ∗ ∗ 2] x.

13. If the system dx/dt = Ax is stable, is dx/dt = A⁻¹x stable as well? How can you tell?

14. Negative feedback loops. Suppose some quantities x₁(t), x₂(t), . . . , xₙ(t) can be modeled by differential equations of the form
dx₁/dt = −k₁x₁ − bxₙ
dx₂/dt = x₁ − k₂x₂
⋮
dxₙ/dt = xₙ₋₁ − kₙxₙ,
where b is positive and the kᵢ are positive (the matrix of this system has negative numbers on the diagonal, 1's directly below the diagonal, and a negative number in the top right corner). We say that the quantities x₁, . . . , xₙ describe a (linear) negative feedback loop.
a. Describe the significance of the entries in the system above, in practical terms.
b. Is a negative feedback loop with two components (n = 2) necessarily stable?
c. Is a negative feedback loop with three components necessarily stable?

15. Consider a noninvertible 2 × 2 matrix A with a positive trace. What does the phase portrait of the system dx/dt = Ax look like?

16. Consider the system
dx/dt = [0 a; 1 b] x,
where a and b are arbitrary constants. For which choices of a and b is the zero state a stable equilibrium solution?

17. Consider the system
dx/dt = [−1 k; k ∗] x,
where k is an arbitrary constant. For which choices of k is the zero state a stable equilibrium solution?

18. Consider a diagonalizable 3 × 3 matrix A such that the zero state is a stable equilibrium solution of the system dx/dt = Ax. What can you say about the determinant and the trace of A?

19. True or false? If the trace and the determinant of a 3 × 3 matrix A are both negative, then the origin is a stable equilibrium solution of the system dx/dt = Ax. Justify your answer.

20. Consider a 2 × 2 matrix A with eigenvalues ±πi. Let v + iw be an eigenvector of A with eigenvalue πi. Solve the initial value problem
dx/dt = Ax with x₀ = w.
Draw the solution in the accompanying figure. Mark the vectors x(0), x(1/4), x(1/2), x(1), and x(2).

21. Ngozi opens a bank account with an initial balance of 1000 Nigerian naira. Let b(t) be the balance in the account at time t; we are told that b(0) = 1000. The bank is paying interest at a continuous rate of 5% per year. Ngozi makes deposits into the account at a continuous rate of s(t) (measured in naira per year). We are told that s(0) = 1000, and that s(t) is increasing at a continuous rate of 7% per year. (Ngozi can save more as her income goes up over time.)
a. Set up a linear system of the form
db/dt = ?b + ?s
ds/dt = ?b + ?s
(time is measured in years).
b. Find b(t) and s(t).
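Triangular systems of the general shape appearing in Exercise 21 are easy to sanity-check numerically. The rates (4% and 6%) and initial values below are hypothetical, chosen so as not to give the exercise away; the closed-form expressions follow the standard pattern for such systems.

```python
# Euler integration of db/dt = r*b + s, ds/dt = k*s, checked against the
# closed-form solution.  All numbers are illustrative.
import math

r, k = 0.04, 0.06
b, s = 500.0, 500.0
dt, T = 1e-4, 10.0                     # integrate for 10 "years"

for _ in range(int(T / dt)):
    b, s = b + (r * b + s) * dt, s + k * s * dt

# Closed form: s(t) = s0 e^{kt},
#              b(t) = (b0 - s0/(k - r)) e^{rt} + s0/(k - r) e^{kt}.
s_exact = 500.0 * math.exp(k * T)
b_exact = (500.0 - 500.0 / (k - r)) * math.exp(r * T) \
          + 500.0 / (k - r) * math.exp(k * T)
print(round(s, 2), round(s_exact, 2))
print(round(b, 2), round(b_exact, 2))
```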
22. For each of the linear systems below, find the matching phase portrait on the following page. (Entries marked ∗ are illegible in this copy.)
a. x(t + 1) = [2 2.5; ∗ 0.5] x(t)
b. x(t + 1) = [−1.5 ∗; −1 0.5] x(t)
c. dx/dt = [2.5 0.5; ∗ ∗] x
d. dx/dt = [∗ 0; 1 ∗] x
e. dx/dt = [3 0; −2.5 0.5] x

[Eight phase portraits, labeled I through VIII, appear here for matching with the systems in Exercise 22.]

Find all real solutions of the systems in Exercises 23 to 26.
23. dx/dt = [0 −9; 4 0] x
24. dx/dt = [2 4; −4 2] x
25. dx/dt = [∗ ∗; ∗ ∗] x
26. dx/dt = [−11 15; −7 6] x

Solve the systems in Exercises 27 to 30. Give the solution in real form. Sketch the solution.
27. dx/dt = [1 1; ∗ ∗] x with x(0) = [∗; ∗]
28. dx/dt = [∗ ∗; −1 ∗] x with x(0) = [∗; ∗]
29. dx/dt = [∗ ∗; 1 1] x with x(0) = [∗; ∗]
30. dx/dt = [7 10; −4 −5] x with x(0) = [∗; ∗]

31. Consider the mass–spring system sketched below.

[Figure: a block attached to a spring; continued on page 474.]
Let x(t) be the deviation of the block from the equilibrium position at time t. Consider the velocity v(t) = dx/dt of the block. There are two forces acting on the mass: the spring force F_s, which is assumed to be proportional to the displacement x, and the force F_f of friction, which is assumed to be proportional to the velocity:
F_s = −px,  F_f = −qv,
where p > 0 and q ≥ 0 (q is 0 if the oscillation is frictionless). Therefore, the total force acting on the mass is
F = F_s + F_f = −px − qv.
By Newton's second law of motion we have
F = ma = m dv/dt,
where a represents acceleration and m the mass of the block. Combining the last two equations, we find that
m dv/dt = −px − qv,
or
dv/dt = −(p/m)x − (q/m)v.
Let b = p/m and c = q/m for simplicity. Then the dynamics of this mass–spring system are described by the system
dx/dt = v
dv/dt = −bx − cv   (b > 0, c ≥ 0).
Sketch a phase portrait for this system in each of the following cases, and describe briefly the significance of your trajectories in terms of the movement of the block. Comment on the stability in each case.
a. c = 0 (frictionless). Find the period.
b. c² < 4b (underdamped).
c. c² > 4b (overdamped).

32. Prove the product rule for derivatives of complex-valued functions.

33. a. For a differentiable complex-valued function z(t), find the derivative of 1/z(t).
b. Prove the quotient rule for derivatives of complex-valued functions.
In both parts of this exercise you may use the product rule (Exercise 32).

34. Let z₁(t) and z₂(t) be two complex-valued solutions of the initial value problem
dz/dt = λz with z(0) = 1
(where λ is a complex number). Suppose that z₂(t) ≠ 0 for all t.
a. Using the quotient rule (Exercise 33), show that the derivative of z₁(t)/z₂(t) is zero. Conclude that z₁(t) = z₂(t).
b. Show that the initial value problem
dz/dt = λz with z(0) = 1
has a unique complex-valued solution z(t). Hint: One solution is given in the text.

35. Consider a real 2 × 2 matrix A with f_A(λ) = λ². Recall that A² = 0 (see Exercise 7.2.56). Show that the initial value problem
dx/dt = Ax with x(0) = x₀
has the solution
x(t) = x₀ + tAx₀.
Sketch the vector field Ax and the trajectory x(t).

36. Use Exercise 35 to solve the initial value problem
dx/dt = [∗ ∗; ∗ ∗] x with x(0) = [∗; ∗]
(asterisks mark entries that are illegible in this copy). Sketch the vector field and the trajectory x(t).

37. Consider a real 2 × 2 matrix A with f_A(λ) = (λ − λ₀)². Solve the initial value problem
dx/dt = Ax with x(0) = x₀.
For which values of λ₀ is the zero state a stable equilibrium solution of the system? Hint: Use Exercise 8.1.24 and Exercise 35 above.

38. Use Exercise 37 to solve the initial value problem
dx/dt = [−1 ∗; ∗ ∗] x with x(0) = [∗; ∗].
Sketch the trajectory x(t).

39. Solve the system
dx/dt = [λ 1 0; 0 λ 1; 0 0 λ] x.
Compare with Exercise 8.1.24. When is the zero state a stable equilibrium solution?

40. An eccentric mathematician is able to gain autocratic power in a small Alpine country. In her first decree, she announces the introduction of a new currency, the Euler, which is measured in complex units. Banks are ordered to pay only imaginary interest on deposits.
a. If you invest 1000 Euler at 5i% interest, compounded annually, how much money do you have after 1 year, after 2 years, after t years? Describe the effect of compounding in this case. Sketch a trajectory showing the evolution of the balance in the complex plane.
b. Do part a in the case when the 5i% interest is compounded continuously.
c. Suppose people's social standing is determined by the modulus of the balance of their bank account. Under these circumstances, would you choose an account with annual compounding or with continuous compounding of interest?
(This problem is based on an idea of Prof. D. Mumford, Harvard University.)
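The three damping regimes of Exercise 31 can be explored numerically before sketching. This is a rough forward-Euler sketch with illustrative parameter values; it only records whether the block ever overshoots the equilibrium (a zero crossing of x) and how much motion remains at the end.

```python
# Exercise 31's regimes in code: dx/dt = v, dv/dt = -b*x - c*v.
def settle_behavior(b, c, x0=1.0, v0=0.0, dt=1e-3, T=40.0):
    """Euler-integrate; report (ever crossed zero, remaining |x| + |v|)."""
    x, v, crossed = x0, v0, False
    for _ in range(int(T / dt)):
        x, v = x + v * dt, v + (-b * x - c * v) * dt
        if x < 0:
            crossed = True
    return crossed, abs(x) + abs(v)

osc, _ = settle_behavior(b=1.0, c=0.0)      # frictionless: keeps oscillating
under, r1 = settle_behavior(b=1.0, c=0.5)   # c^2 < 4b: oscillates while decaying
over, r2 = settle_behavior(b=1.0, c=3.0)    # c^2 > 4b: creeps back, no overshoot
print(osc, under, over, round(r1, 6), round(r2, 6))
```

The overdamped block never swings past the equilibrium, while the underdamped one does; both damped cases end up with essentially no motion left, matching the stability discussion of Fact 8.2.5.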
9  LINEAR SPACES

9.1  AN INTRODUCTION TO LINEAR SPACES

In this chapter, we present the basic concepts of linear algebra in a more general context. Here is an introductory example, where we use many concepts of linear algebra in the context of functions, rather than vectors in ℝⁿ. Consider the differential equation (DE)
d²x/dt² = −x,  or  d²x/dt² + x = 0.
Going through a list of "simple" functions, we can guess the solutions
x₁(t) = cos(t) and x₂(t) = sin(t).
We observe that all "linear combinations"
x(t) = c₁ cos(t) + c₂ sin(t)
are solutions as well (verify this). Using techniques from Chapter 8, we can show that these are in fact all the solutions of the DE (see Exercise 50). This means that the functions x₁(t) = cos(t) and x₂(t) = sin(t) form a "basis" of the "solution space" V of the DE. The "dimension" of V is 2.

Now let C∞ be the set of all smooth functions from ℝ to ℝ (these are the functions that we can differentiate as many times as we want). Because x₁(t) = cos(t) and x₂(t) = sin(t) are smooth functions and V is closed under linear combinations, V is a "subspace" of C∞. Let us define the transformation T: C∞ → C∞ given by
T(x) = d²x/dt² + x.
It follows from the basic rules for differentiation that
T(x + y) = T(x) + T(y)  and  T(kx) = kT(x),
for all smooth functions x and y, and all real scalars k. We can summarize these properties by saying that T is a "linear transformation" from C∞ to C∞. The "kernel" of T is
ker(T) = {x in C∞ : T(x) = 0} = {x in C∞ : d²x/dt² + x = 0} = V.
In Section 9.4 we will see that thinking of the solution space V as the kernel of the linear transformation T helps us analyze V (just as it was useful to interpret the solution space of a linear system Ax = 0 as the kernel of the linear transformation T(x) = Ax).

What are the "eigenfunctions" and "eigenvalues" of the linear transformation T? We are looking for nonzero smooth functions x and scalars λ such that
T(x) = λx,  or  d²x/dt² + x = λx.
In Section 9.4 we will solve this problem systematically. Here let us just give some examples:
T(eᵗ) = 2eᵗ, so that eᵗ is an eigenfunction with eigenvalue 2.
T(t) = t, so that t is an eigenfunction with eigenvalue 1.
T(cos(t)) = 0, so that cos(t) is an eigenfunction with eigenvalue 0.
We will now make these informal ideas more precise.

Definition 9.1.1  Linear spaces
A (real) linear space¹,² V is a set endowed with a rule for addition (if f and g are in V, then so is f + g) and a rule for scalar multiplication (if f is in V and k is in ℝ, then kf is in V) such that these operations satisfy the eight axioms below³ (for all f, g, h in V and all c, k in ℝ):
1. (f + g) + h = f + (g + h).
2. f + g = g + f.
3. There is a neutral element n in V such that f + n = f, for all f in V. This n is unique and denoted by 0.
4. For each f in V there is a g in V such that f + g = 0. This g is unique and denoted by (−f).
5. k(f + g) = kf + kg.
6. (c + k)f = cf + kf.
7. c(kf) = (ck)f.
8. 1f = f.

This definition contains a lot of "fine print." In brief, a linear space is a set with two reasonably defined operations, addition and scalar multiplication, that allow us to form linear combinations of the elements of this set. Probably the most important examples of linear spaces, besides ℝⁿ, are spaces of functions, as in the introductory example.

EXAMPLE 1 ▸ In ℝⁿ, the prototype linear space, the neutral element is the zero vector, 0. ◂

EXAMPLE 2 ▸ On page 477 we introduced the linear space C∞, the set of all smooth functions from ℝ to ℝ (functions we can differentiate as many times as we want), with the operations
(f + g)(t) = f(t) + g(t)
(kf)(t) = kf(t)
for all t in ℝ. Figure 1 illustrates the rule for scalar multiplication of functions from ℝ to ℝ. Draw a similar sketch illustrating the rule for addition. ◂

EXAMPLE 3 ▸ Another linear space is F(ℝ, ℝⁿ), the set of all functions from ℝ to ℝⁿ, that is, all parameterized curves in ℝⁿ. The operations are defined as in Example 2. ◂

¹The term vector space is more commonly used.
²If we work with complex scalars, then V is a complex linear space. Our spaces are real unless stated otherwise.
³These axioms were established by the Italian mathematician Giuseppe Peano (1858–1932) in his Calcolo Geometrico of 1888. Peano called V a "linear system."
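The three eigenfunction claims above are easy to spot-check numerically, approximating the second derivative by a central difference; the sample point t = 0.8 is arbitrary.

```python
# Numerical check of the eigenfunction examples for T(x) = x'' + x:
# T(e^t) = 2e^t, T(t) = t, T(cos t) = 0.
import math

def T(f, t, h=1e-4):
    second = (f(t + h) - 2 * f(t) + f(t - h)) / (h * h)  # approx. x''(t)
    return second + f(t)

t = 0.8
print(T(math.exp, t), 2 * math.exp(t))     # eigenvalue 2
print(T(lambda s: s, t), t)                # eigenvalue 1
print(T(math.cos, t))                      # eigenvalue 0
```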
[Figure 1: the rule for scalar multiplication of functions: the graphs of f and kf.]

EXAMPLE 4 ▸ Let X be an arbitrary set. Then V = F(X, ℝⁿ), the set of all functions from X to ℝⁿ with the operations defined in Example 2, is a linear space. ◂

EXAMPLE 5 ▸ If addition and scalar multiplication are given as in Definition 1.3.9, then ℝ^{m×n}, the set of all m × n matrices, is a linear space. The neutral element is the zero matrix. ◂

EXAMPLE 6 ▸ The complex numbers ℂ form a real linear space (and also a complex linear space). ◂

EXAMPLE 7 ▸ The set of all infinite sequences of real numbers is a linear space, where addition and scalar multiplication are defined componentwise:
(x₀, x₁, x₂, . . .) + (y₀, y₁, y₂, . . .) = (x₀ + y₀, x₁ + y₁, x₂ + y₂, . . .)
k(x₀, x₁, x₂, . . .) = (kx₀, kx₁, kx₂, . . .).
The neutral element is the sequence (0, 0, 0, . . .). ◂

EXAMPLE 8 ▸ The set of all geometric vectors in a (coordinate-free) plane is a linear space. The rules for addition and scalar multiplication are illustrated by Figure 2. ◂

[Figure 2: the sum v + w and scalar multiples of geometric vectors in the plane.]

Consider the elements f₁, f₂, . . . , fₘ and f of a linear space. We say that f is a linear combination of the fᵢ if
f = c₁f₁ + c₂f₂ + · · · + cₘfₘ
for some scalars c₁, . . . , cₘ. Since the basic notions of linear algebra (initially introduced for ℝⁿ) are defined in terms of linear combinations, we can now generalize these notions to linear spaces without modifications. A short version of the rest of this section would say that the concepts of subspace, linear independence, basis, dimension, linear transformation, kernel, image, and eigenvalue can be defined for linear spaces in just the same way as for ℝⁿ. What follows is the long version, with many examples.

Definition 9.1.2  Subspaces
A subset W of a linear space V is called a subspace of V if
a. W contains the neutral element 0 of V.
b. W is closed under addition (if f and g are in W, then so is f + g).
c. W is closed under scalar multiplication (that is, if f is in W and k is a scalar, then kf is in W).
We can summarize parts b and c by saying that W is closed under linear combinations.

Note that a subspace W of a linear space V is a linear space in its own right. (Why do the axioms hold for W?)

EXAMPLE 9 ▸ Here are two subspaces of C∞:
a. P, the set of all polynomial functions f(t) = a₀ + a₁t + · · · + aₙtⁿ.
b. Pₙ, the set of all polynomial functions of degree ≤ n. ◂

EXAMPLE 10 ▸ The upper triangular matrices form a subspace of ℝ^{n×n}, the space of all n × n matrices. (Verify this.) ◂

EXAMPLE 11 ▸ The quadratic forms of n variables form a subspace of F(ℝⁿ, ℝ). If q(x) = xᵀAx and p(x) = xᵀBx are two quadratic forms, then so is
(q + p)(x) = q(x) + p(x) = xᵀ(A + B)x.
Think about scalar multiples. ◂

EXAMPLE 12 ▸ Consider an n × n matrix A. We claim that the set W of all solutions of the system dx/dt = Ax is a subspace of F(ℝ, ℝⁿ). We check that W is closed under scalar multiplication. If x(t) is a solution and k is a scalar, then
d/dt (kx) = k dx/dt = kAx = A(kx),
so that kx(t) is a solution as well. Check that W is closed under addition and that W contains the curve x(t) = 0. ◂

EXAMPLE 13 ▸ Consider the set W of all noninvertible 2 × 2 matrices in ℝ^{2×2}. Is W a subspace of ℝ^{2×2}?
482 • Chap. 9 Linear Spaces
Sec. 9.1 An Introduction to Linear Spaces • 483
Solution
W contains the neutral element [0 0; 0 0] of R^{2×2} and is closed under scalar multiplication, but W is not closed under addition. As a counterexample, consider the sum
[1 0; 0 0] + [0 0; 0 1] = [1 0; 0 1]:
the two summands are in W, but their sum is not. Therefore, W is not a subspace of R^{2×2}.

Next we generalize the notions of linear independence and basis.

Definition 9.1.3  Linear independence, span, basis
Consider the elements f_1, ..., f_n of a linear space V.
a. We say that the f_i span V if every f in V can be expressed as a linear combination of the f_i.
b. We say that the f_i are linearly independent if the equation
c_1 f_1 + c_2 f_2 + ... + c_n f_n = 0
has only the trivial solution c_1 = c_2 = ... = c_n = 0, that is, if none of the f_i is a linear combination of the others.¹
c. We say that the f_i are a basis of V if they are linearly independent and span V. This is the case if (and only if) every f in V can be written uniquely as a linear combination of the f_i.²

Fact 9.1.4  Dimension
If a linear space V has a basis with n elements, then all other bases of V consist of n elements as well. We then say that the dimension of V is n: dim(V) = n.

We defer the proof of this fact to the next section.

EXAMPLE 14 ► Find a basis of R^{2×2} and thus determine dim(R^{2×2}).

Solution
We can write any 2×2 matrix A = [a b; c d] as
[a b; c d] = a[1 0; 0 0] + b[0 1; 0 0] + c[0 0; 1 0] + d[0 0; 0 1],
where the four matrices on the right are denoted E_11, E_12, E_21, E_22, respectively. This shows that the matrices E_11, E_12, E_21, E_22 span R^{2×2}. The four matrices are also linearly independent: none of them is a linear combination of the others, since each has a 1 in a position where the three others have 0. This shows that E_11, E_12, E_21, E_22 is a basis of R^{2×2}; that is, dim(R^{2×2}) = 4.

EXAMPLE 15 ► Find a basis of P_3, the space of all polynomials of degree ≤ 3.

Solution
We can write any polynomial f(t) of degree ≤ 3 as
f(t) = a + bt + ct² + dt³ = a·1 + b·t + c·t² + d·t³.
This shows that the polynomials 1, t, t², t³ span P_3. Are the four polynomials also linearly independent? One way to find out is to consider a relation
c_0·1 + c_1·t + c_2·t² + c_3·t³ = c_0 + c_1 t + c_2 t² + c_3 t³ = 0.
If some of the c_i were nonzero, this polynomial could have at most three zeros; therefore, all the c_i must be zero. We conclude that 1, t, t², t³ is a basis of P_3, so that dim(P_3) = 4.

EXAMPLE 16 ► Consider a system dx/dt = Ax, where A is an n×n matrix. Suppose there is a real eigenbasis v_1, v_2, ..., v_n for A, with associated eigenvalues λ_1, ..., λ_n. In Example 12 we have seen that the solutions of the system form a subspace W of F(R, R^n). Find a basis for W, and thus determine dim(W).

Solution
By Fact 8.1.2 we can write the solutions of the system as
x(t) = c_1 e^{λ_1 t} v_1 + ... + c_n e^{λ_n t} v_n;
that is, the curves x_1(t) = e^{λ_1 t} v_1, ..., x_n(t) = e^{λ_n t} v_n span W. Are the x_i(t) linearly independent? Consider a relation
c_1 e^{λ_1 t} v_1 + ... + c_n e^{λ_n t} v_n = 0.
Evaluating this equation at t = 0, we find that
c_1 v_1 + ... + c_n v_n = 0.
Since the v_i are linearly independent, we can conclude that c_1 = c_2 = ... = c_n = 0; that is, the x_i(t) are linearly independent. Therefore, the x_i(t) are a basis of W, and dim(W) = n.

¹See Definitions 3.2.3 and 3.2.4 and Fact 3.2.5.
²See Definition 3.2.3 and Fact 3.2.7.
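Example 14 can be double-checked numerically: flattening each 2×2 matrix into a vector in R^4 turns the spanning and independence questions into a rank computation. A minimal NumPy sketch (the sample matrix A is a hypothetical choice):

```python
import numpy as np

# The four matrices E11, E12, E21, E22 from Example 14, flattened into R^4.
E = [np.array([[1, 0], [0, 0]]),
     np.array([[0, 1], [0, 0]]),
     np.array([[0, 0], [1, 0]]),
     np.array([[0, 0], [0, 1]])]
V = np.column_stack([m.flatten() for m in E])  # 4x4 matrix of coordinate vectors

# Full rank 4 means the matrices are linearly independent and span R^{2x2},
# confirming dim(R^{2x2}) = 4.
print(np.linalg.matrix_rank(V))  # 4

# Any 2x2 matrix A is the combination a*E11 + b*E12 + c*E21 + d*E22.
A = np.array([[5, -2], [7, 3]])  # hypothetical example matrix
coeffs = np.linalg.solve(V, A.flatten())
print(coeffs)  # [ 5. -2.  7.  3.]
```

Here V happens to be the identity matrix, which is exactly why the coordinates of A with respect to this basis are just its entries.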
EXAMPLE 17 ► Find a basis of C as a real linear space.

Solution
Any complex number z can be written uniquely as
z = x·1 + y·i
for two real numbers x and y. Therefore, 1, i is a basis of C, and dim(C) = 2. The graphical representation of the complex numbers in the complex plane is based on this fact.

EXAMPLE 18 ► Find a basis of the space of all skew-symmetric 3×3 matrices A, and determine the dimension of this space. Recall that a matrix A is called skew-symmetric if A^T = −A. Note that this implies that the diagonal elements of A are 0, since a_ii = −a_ii.

Solution
A skew-symmetric 3×3 matrix can be written as
A = [0 a b; −a 0 c; −b −c 0],
where a, b, c are arbitrary constants. Writing A as a linear combination with the arbitrary constants as coefficients, we find that
A = a[0 1 0; −1 0 0; 0 0 0] + b[0 0 1; 0 0 0; −1 0 0] + c[0 0 0; 0 0 1; 0 −1 0].
These three matrices are clearly linearly independent; they are a basis of the space of skew-symmetric 3×3 matrices. This space is therefore three-dimensional.

Based on our work in Examples 14 to 18, we present the following "recipe" for finding a basis of a linear space V:

Finding a basis of a linear space V
a. Write down a typical element of V in terms of some arbitrary constants.
b. Using the arbitrary constants as coefficients, express your typical element as a linear combination.
c. Verify that the elements of V in this linear combination are linearly independent; then they form a basis of V.

EXAMPLE 19 ► Find a basis of the space V of all polynomials f(t) in P_2 such that f′(1) = 0.

Solution
For this space, writing down a typical element of V requires some work. Consider a polynomial f(t) = a + bt + ct² in P_2, with f′(t) = b + 2ct, so that f′(1) = b + 2c. The condition f′(1) = 0 means that b + 2c = 0; that is, b = −2c. Thus a typical element of V has the form
f(t) = a − 2ct + ct²,
where a and c are arbitrary constants. Following step b in the recipe, we write
f(t) = a·1 + c·(t² − 2t).
Because the two functions 1 and t² − 2t are linearly independent, they form a basis of V. Check that the functions satisfy the requirement f′(1) = 0. See Figure 3.

Figure 3: the graph of h(t) = t² − 2t = t(t − 2), one of the two basis functions of V.

Not all linear spaces have a finite basis f_1, f_2, ..., f_n. Consider, for example, the space P of all polynomials. We will show that an arbitrary sequence f_1, f_2, ..., f_n of polynomials does not span P, and therefore is not a basis of P. Let N be the maximum of the degrees of the polynomials f_1, f_2, ..., f_n. Then all linear combinations of f_1, ..., f_n will be in P_N (the space of the polynomials of degree ≤ N). A polynomial of higher degree, such as f(t) = t^{N+1}, will not be in the span of f_1, ..., f_n. Therefore, the sequence f_1, f_2, ..., f_n is not a basis of P.
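The dimension count of Example 18 can be verified with the same flattening trick as before: the three basis matrices, viewed as vectors in R^9, should have rank 3, and any combination of them should be skew-symmetric. A minimal sketch:

```python
import numpy as np

# The three basis matrices found in Example 18.
B1 = np.array([[0, 1, 0], [-1, 0, 0], [0, 0, 0]])
B2 = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]])
B3 = np.array([[0, 0, 0], [0, 0, 1], [0, -1, 0]])

# Independence: flatten into R^9 and check that the rank is 3.
V = np.column_stack([m.flatten() for m in (B1, B2, B3)])
print(np.linalg.matrix_rank(V))  # 3

# A generic combination a*B1 + b*B2 + c*B3 is skew-symmetric, so the three
# matrices span the skew-symmetric 3x3 matrices: the space is 3-dimensional.
A = 2 * B1 - 5 * B2 + 4 * B3     # hypothetical coefficients a, b, c
print(np.array_equal(A.T, -A))   # True
```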
Definition 9.1.5  Infinite-dimensional linear spaces
A linear space which does not have a finite basis f_1, f_2, ..., f_n is called infinite-dimensional.¹

As we have just seen, the space P of all polynomials is infinite-dimensional.

Next we generalize the notion of a linear transformation.

Definition 9.1.6  Linear transformations
Consider two linear spaces V and W. A function T from V to W is called a linear transformation if
T(f + g) = T(f) + T(g)  and  T(kf) = kT(f),
for all f and g in V and all real scalars k. For a linear transformation T from V to W, we let
ker(T) = {f in V : T(f) = 0}
and
im(T) = {T(f) : f in V}.

Note that ker(T) is a subspace of V and im(T) is a subspace of W.²

EXAMPLE 20 ► Let D: C^∞ → C^∞ be given by D(f) = f′. What is the kernel of D? This kernel consists of all smooth functions f such that D(f) = f′ = 0. As you may recall from calculus, these are the constant functions f(t) = k. Therefore, the kernel of D is one-dimensional (the function f(t) = 1 is a basis). What is the image of D? The image consists of all smooth functions g such that g = f′ for some f in C^∞, that is, all smooth functions g which have a smooth antiderivative f. The fundamental theorem of calculus tells us that all smooth functions (in fact, all continuous functions) have an antiderivative. We can conclude that im(D) = C^∞.

EXAMPLE 21 ► Let C[0, 1] be the linear space of all continuous functions from the closed interval [0, 1] to R. We define
I: C[0, 1] → R,  I(f) = ∫₀¹ f(t) dt.
We adopt the simplified notation I(f) = ∫₀¹ f. To check that I is linear, we apply basic rules of integration:
∫₀¹ (f + g) = ∫₀¹ f + ∫₀¹ g,  so that I(f + g) = I(f) + I(g);
∫₀¹ (kf) = k ∫₀¹ f,  so that I(kf) = kI(f).
What is the image of I? For each real number k, there is a function f(t) such that k = I(f): one possible choice is the constant function f(t) = k. Therefore, im(I) = R.

EXAMPLE 22 ► Consider the function T: R² → C given by T[x; y] = x + iy. We leave it as an exercise to check that T is linear. The transformation T is clearly invertible, with
T⁻¹(x + iy) = [x; y].
We can easily go back and forth between [x; y] and x + iy.

The invertible linear transformation T in Example 22 makes C into a carbon copy of R² (as far as sums and real scalar multiples are concerned). We say that T is an isomorphism and that the linear spaces R² and C are isomorphic, which means in Greek that they have the same structure. Each carries some structure not shared by the other (for example, the dot product in R² and the complex product in C), but as far as the two basic operations of linear algebra are concerned (addition and real scalar multiplication), they behave in the same way.

Definition 9.1.7  Isomorphisms
An invertible linear transformation is called an isomorphism. We say that the linear spaces V and W are isomorphic if there is an isomorphism from V to W.

¹More advanced texts introduce the concept of an infinite basis for such spaces. See Facts 3.1.4 and 3.1.6.
²To show that kernel and image are subspaces, you need the following auxiliary results, which we leave as Exercises 48 and 49 for those with an interest in axiomatics:
a. If 0_V is the neutral element of a linear space V, then 0_V + 0_V = 0_V and k·0_V = 0_V, for all scalars k.
b. If T is a linear transformation from V to W, then T(0_V) = 0_W, where 0_V and 0_W are the neutral elements of V and W.
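The linearity checks of Example 21 can be replayed symbolically: for sample functions f and g (hypothetical choices), the two integration rules reduce to identities. A minimal SymPy sketch:

```python
import sympy as sp

t = sp.symbols('t')

def I(f):
    # I(f) = integral of f from 0 to 1, as in Example 21
    return sp.integrate(f, (t, 0, 1))

f = 3 * t**2 + 1       # two sample continuous functions (hypothetical choices)
g = sp.sin(t)
k = 7

print(sp.simplify(I(f + g) - (I(f) + I(g))))   # 0
print(sp.simplify(I(k * f) - k * I(f)))        # 0
```

A symbolic check like this is not a proof, of course; the proof is the pair of integration rules quoted in the example.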
Below, we state some useful facts concerning isomorphisms; after the proof, we give two more examples of isomorphisms.

Fact 9.1.8
a. If T is an isomorphism, then so is T⁻¹.
b. A linear transformation T from V to W is an isomorphism if (and only if) ker(T) = {0} and im(T) = W.
c. Consider an isomorphism T from V to W. If f_1, f_2, ..., f_n is a basis of V, then T(f_1), T(f_2), ..., T(f_n) is a basis of W.
d. If V and W are isomorphic and dim(V) = n, then dim(W) = n.

Part d should come as no surprise, since isomorphic spaces have the same structure, as far as linear algebra is concerned.

Proof (This proof may be skipped in a first reading.)
a. We must show that T⁻¹ is linear. Consider two elements f and g of the codomain of T (that is, the domain of T⁻¹). Then
T⁻¹(f + g) = T⁻¹(T T⁻¹(f) + T T⁻¹(g)) = T⁻¹(T(T⁻¹(f) + T⁻¹(g))) = T⁻¹(f) + T⁻¹(g).
In a similar way you can show that T⁻¹(kf) = kT⁻¹(f), for all f in the codomain of T and all scalars k.
b. Suppose first that T is an isomorphism. To find the kernel of T, we have to solve the equation T(f) = 0. Applying T⁻¹ on both sides, we find that f = 0, so that ker(T) = {0}, as claimed. Any g in W can be written as g = T(T⁻¹(g)), so that im(T) = W. Conversely, suppose that ker(T) = {0} and im(T) = W. We have to show that the equation T(f) = g has a unique solution f for any g in W (by Definition 2.3.1). There is at least one such solution f, since im(T) = W. Consider two solutions, f_1 and f_2: T(f_1) = T(f_2) = g. Then 0 = T(f_1) − T(f_2) = T(f_1 − f_2), so that f_1 − f_2 is in the kernel of T; that is, f_1 − f_2 = 0, and f_1 = f_2, as claimed.
c. We will show first that the T(f_i) span W. For any g in W, we can write T⁻¹(g) = c_1 f_1 + ... + c_n f_n, because the f_i span V. Applying T on both sides and using linearity, we find that g = c_1 T(f_1) + ... + c_n T(f_n), as claimed. To show the linear independence of the T(f_i), consider a relation c_1 T(f_1) + ... + c_n T(f_n) = 0, or T(c_1 f_1 + ... + c_n f_n) = 0. Since the kernel of T is {0}, we have c_1 f_1 + ... + c_n f_n = 0. Then the c_i are zero, since the f_i are linearly independent.
d. Follows from part c.

EXAMPLE 23 ► Show that the linear spaces R^{n×m} and R^{m×n} are isomorphic.

Solution
We need to find an isomorphism L from R^{n×m} to R^{m×n}, that is, an invertible linear transformation. Check that L(A) = A^T does the job.

EXAMPLE 24 ► Let V = F(R, R) be the linear space of all functions from R to R. We define T: V → V by
(Tf)(t) = f(t − 1),
where we write Tf instead of T(f) to simplify the notation. Note that T moves the graph of f to the right by one unit. See Figure 4. Is T linear? We first have to check that T(f + g) = Tf + Tg, or (T(f + g))(t) = (Tf + Tg)(t), for all t in R. Now
(T(f + g))(t) = (f + g)(t − 1) = f(t − 1) + g(t − 1)
and
(Tf + Tg)(t) = (Tf)(t) + (Tg)(t) = f(t − 1) + g(t − 1);
the two results agree. We leave it as an exercise to check that
T(kf) = k(Tf).
(What does this statement mean in terms of shifting graphs?) Note that the transformation T is invertible: we can get f back from Tf by shifting the graph of Tf by one unit to the left (see Figure 5):
(T⁻¹g)(t) = g(t + 1).
We conclude that T is an isomorphism from V to V.

Figure 4: the graph of Tf is the graph of f shifted one unit to the right.
Figure 5: the graph of T⁻¹g is the graph of g shifted one unit to the left.
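Example 24's shift operator and its inverse can be sketched directly on ordinary Python functions; the sample functions below are hypothetical choices, and the checks sample a few points rather than proving anything:

```python
# (T f)(t) = f(t - 1) shifts a graph right; (T^{-1} g)(t) = g(t + 1) shifts left.
def T(f):
    return lambda t: f(t - 1)

def T_inv(g):
    return lambda t: g(t + 1)

f = lambda t: t**2 + 3*t       # sample functions (hypothetical choices)
g = lambda t: 2*t - 1

# Additivity: T(f + g) agrees with Tf + Tg at sample points.
h = T(lambda t: f(t) + g(t))
for t in (-2.0, 0.0, 1.5):
    assert abs(h(t) - (T(f)(t) + T(g)(t))) < 1e-12

# T^{-1} undoes T, so T is invertible (an isomorphism of F(R, R)).
for t in (-2.0, 0.0, 1.5):
    assert abs(T_inv(T(f))(t) - f(t)) < 1e-12
print("shift operator checks pass")
```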
Next we generalize the notion of an eigenvalue.

Definition 9.1.9  Eigenvalues, diagonalization
Consider a linear transformation T: V → V, where V is a linear space. A scalar λ is called an eigenvalue of T if there is a nonzero f in V such that
T(f) = λf.
If λ is an eigenvalue of T, we define the eigenspace
E_λ = {f in V : T(f) = λf}.
Now suppose that V is finite-dimensional. Then the transformation T is called diagonalizable if there is a basis f_1, ..., f_n of V such that T(f_i) is a scalar multiple of f_i, for i = 1, ..., n. Such a basis is called an eigenbasis for T.

EXAMPLE 25 ► Find all eigenvalues and eigenspaces of the linear transformation D: C^∞ → C^∞ given by
D(x) = dx/dt.

Solution
We have to solve the differential equation
D(x) = dx/dt = λx.
By Fact 8.1.1, the solutions of this DE for a fixed λ are the functions
x(t) = C e^{λt},
where C is an arbitrary constant. This shows that all scalars λ are eigenvalues of D; the eigenspace E_λ is one-dimensional, spanned by e^{λt}.

EXAMPLE 26 ► Consider the linear transformation L: R^{n×n} → R^{n×n} given by
L(A) = A^T.
Find all eigenvalues and eigenspaces of L. Is L diagonalizable?

Solution
If L(A) = λA, then A = L(L(A)) = λ²A, so that λ² = 1. The only possible eigenvalues are λ_1 = 1 and λ_2 = −1. Now
E_1 = {A : A^T = A} = {symmetric matrices}
and
E_{−1} = {A : A^T = −A} = {skew-symmetric matrices}.
We leave it as an exercise to show that
dim(E_1) = 1 + 2 + ... + n = n(n + 1)/2
and
dim(E_{−1}) = 1 + 2 + ... + (n − 1) = (n − 1)n/2,
so that
dim(E_1) + dim(E_{−1}) = n² = dim(R^{n×n}).
We can find an eigenbasis for L by choosing a basis of each eigenspace; thus L is diagonalizable. For example, in the case n = 2 we have the eigenbasis
[1 0; 0 0], [0 0; 0 1], [0 1; 1 0]  (a basis of E_1),  [0 1; −1 0]  (a basis of E_{−1}).
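For a fixed n, the transposition map L of Example 26 can be represented as an n² × n² matrix acting on flattened matrices, and its eigenvalues computed numerically. A sketch for n = 3, where the eigenspace dimensions should be 6 and 3:

```python
import numpy as np

n = 3
# Build the n^2 x n^2 matrix of L(A) = A^T acting on flattened matrices:
# transposition sends the entry at flat position i*n+j to position j*n+i.
M = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        M[j * n + i, i * n + j] = 1

eigvals = np.linalg.eigvals(M)

# Every eigenvalue is +1 or -1, with multiplicities matching Example 26:
# dim(E_1) = n(n+1)/2 = 6 and dim(E_{-1}) = n(n-1)/2 = 3 for n = 3.
print(int(np.sum(np.isclose(eigvals, 1))))    # 6
print(int(np.sum(np.isclose(eigvals, -1))))   # 3
```

The matrix M is a symmetric permutation matrix (an involution), which is another way to see that its eigenvalues must be ±1.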
For the convenience of the reader, we include a list of notations introduced in this section:
Notations
F(X, Y): the set of all functions from X to Y.
R^{m×n}: the linear space of all real m×n matrices.
P: the linear space of all polynomials with real coefficients.
P_n: the linear space of all polynomials of degree ≤ n.
C^∞: the linear space of all smooth functions from R to R (those functions that we can differentiate as many times as we want).
C[0, 1]: the linear space of all continuous functions from the closed interval [0, 1] to R.
D: the linear transformation from C^∞ to C^∞ given by Df = f′ (the derivative).

EXERCISES
GOALS: Apply basic notions of linear algebra (defined earlier for R^n) to linear spaces.

Which of the subsets of P_3 given in Exercises 1 to 5 are subspaces of P_3? Find a basis for those that are subspaces.
1. {p(t): p(0) = 2}
2. {p(t): p(2) = 0}
3. {p(t): p′(1) = p(2)} (p′ is the derivative)
4. {p(t): ∫₀¹ p(t) dt = 0}
5. {p(t): p(−t) = −p(t), for all t}

Which of the subsets of R^{3×3} given in Exercises 6 to 11 are subspaces of R^{3×3}?
6. The invertible 3×3 matrices.
7. The symmetric 3×3 matrices.
8. The skew-symmetric 3×3 matrices (recall that A is skew-symmetric if A^T = −A).
9. The diagonal 3×3 matrices.
10. The 3×3 matrices for which the vector [ ] is an eigenvector.
11. The 3×3 matrices in reduced row-echelon form.

Let V be the space of all infinite sequences of real numbers (see Example 7). Which of the subsets of V given in Exercises 12 to 15 are subspaces of V?
12. The arithmetic sequences, i.e., sequences of the form (a, a + k, a + 2k, a + 3k, ...), for some constants a, k.
13. The geometric sequences, i.e., sequences of the form (a, ar, ar², ar³, ...), for some constants a and r.
14. The sequences (x_0, x_1, ...) which converge to 0 (that is, lim x_n = 0).
15. The "square-summable" sequences (x_0, x_1, ...), that is, those for which the sum of the x_i² converges.

Find a basis for each of the spaces in Exercises 16 to 28, and determine its dimension.
16. R^{3×2}.
17. R^{m×n}.
18. P_n.
19. The real linear space C².
20. The space of all matrices A in R^{2×2} with tr(A) = 0.
21. The space of all symmetric 2×2 matrices.
22. The space of all symmetric n×n matrices.
23. The space of all skew-symmetric 2×2 matrices.
24. The space of all skew-symmetric n×n matrices.
25. The space of all polynomials f(t) in P_2 such that f(1) = 0.
26. The space of all polynomials f(t) in P_3 such that f(1) = 0 and ∫ f(t) dt = 0.
27. The space of all 2×2 matrices A that commute with B = [1 0; 0 2].
28. The space of all 2×2 matrices A that commute with B = [1 1; 0 1].
29. In the linear space of infinite sequences, consider the subspace W of arithmetic sequences, i.e., those sequences in which the difference of any two consecutive entries is the same; for example,
(4, 7, 10, 13, 16, ...), where the difference of two consecutive entries is 3, and
(10, 6, 2, −2, −6, ...), where the difference of two consecutive entries is −4.
Find a basis for W, and thus determine the dimension of W.
30. A function f(t) is called even if f(−t) = f(t), for all t in R, and odd if f(−t) = −f(t), for all t. Are the even functions a subspace of F(R, R), the space of all functions from R to R? What about the odd functions? Justify your answers carefully.
31. Find a basis of each of the following linear spaces, and thus determine their dimension (see Exercise 30).
a. {f in P_4: f is even}.
b. {f in P_4: f is odd}.
32. Let L(R^n, R^m) be the set of all linear transformations from R^n to R^m. Is L(R^n, R^m) a subspace of F(R^n, R^m), the space of all functions from R^n to R^m? Justify your answer carefully.

Decide which of the transformations in Exercises 33 to 41 are linear. For those that are linear, determine whether they are isomorphisms.
33. T: P_3 → R given by T(f) = ∫₁³ f(t) dt.
34. T: R^{2×2} → R^{2×2} given by T(A) = A + I_2.
35. T: R^{3×3} → R given by T(A) = tr(A).
36. T: R^{2×2} → R given by T(A) = det(A).
37. T: C → C given by T(z) = (3 + 4i)z.
38. T: R^{2×2} → R^{2×2} given by T(A) = S⁻¹AS, where S = [ ].
39. T: R^{2×2} → R^{2×2} given by T(A) = [ ]A.
40. T: P_6 → P_6 given by T(f) = f″ + 4f′.
41. T: P_2 → R² given by T(f) = [ ].
42. Do the positive semidefinite 2×2 matrices form a subspace of R^{2×2}?
43. In the space F(R, R) of all functions from R to R, consider the subset of all functions with period 3; that is, those functions f(t) such that f(t + 3) = f(t), for all t in R. Do these functions form a subspace of F(R, R)? Justify your answer carefully.
44. Consider the transformation T: R^{m×n} → F(R^n, R^m) given by (TA)(v) = Av. Show that T is linear. What is the kernel of T? Show that the image of T is the space L(R^n, R^m) of all linear transformations from R^n to R^m (see Exercise 32). Find the dimension of L(R^n, R^m).
45. If T is a linear transformation from V to W and L is a linear transformation from W to U, is the composite transformation L ∘ T from V to U linear? How can you tell? If T and L are isomorphisms, is L ∘ T an isomorphism as well?
46. Consider an n×n matrix A that is diagonalizable over R. Let W be the solution space of the system dx/dt = Ax (see Example 12). Is the transformation T: W → R^n given by T(x(t)) = x(0) linear? Is it an isomorphism?
47. For the transformation T defined in Example 24, verify that T(kf) = k(Tf), for all real scalars k and all functions f in F(R, R). Draw a sketch illustrating this formula.
48. Show that if 0 is the neutral element of a linear space V, then 0 + 0 = 0 and k·0 = 0, for all scalars k.
49. Show that if T is a linear transformation from V to W, then T(0_V) = 0_W, where 0_V and 0_W are the neutral elements of V and W, respectively.
50. Find all solutions of the differential equation
d²x/dt² + x = 0.
Hint: Introduce the auxiliary function y = dx/dt. The DE above can be converted into the system
dx/dt = y,
dy/dt = −x.
Apply techniques introduced in Chapter 8.
51. Use the idea presented in Exercise 50 to find all solutions of the DE
d²x/dt² + 3 dx/dt + 2x = 0.
52. Use the idea presented in Exercise 50 to find all solutions of the DE
d²x/dt² − 4 dx/dt + 13x = 0.

Find the (real) eigenvalues and eigenspaces of the linear transformations in Exercises 53 to 60. Find an eigenbasis if possible.
53. L: R^{2×2} → R^{2×2} given by L(A) = A + A^T.
54. T: C^∞ → C^∞ given by T(f) = f + f′.
55. T: C → C given by T(z) = z̄.
56. T: V → V given by T(x_0, x_1, x_2, x_3, ...) = (x_1, x_2, x_3, ...), where V is the space of infinite sequences of real numbers.
57. T: C^∞ → C^∞ given by (Tf)(t) = f(−t).
58. T: P_2 → P_2 given by (Tf)(t) = t·f′(t).
59. T: R^{2×2} → R^{2×2} given by T(A) = [ ]A.
60. T: R^{2×2} → R^{2×2} given by T(A) = S⁻¹AS, where S = [ ].
61. Find an eigenbasis for the linear transformation T(A) = S⁻¹AS from R^{n×n} to R^{n×n}, where S is an invertible diagonal matrix.
62. Let R⁺ be the set of positive real numbers. On R⁺ we define the "exotic"¹ operations
x ⊕ y = xy (usual multiplication)  and  k ⊙ x = x^k.
a. Show that R⁺ with these operations is a linear space; find a basis of this space.
b. Show that T(x) = ln(x) is a linear transformation from R⁺ to R, where R is endowed with the ordinary operations. Is T an isomorphism?
63. Is it possible to define "exotic" operations on R² so that dim(R²) = 1?
64. Let X be the set of all citizens of Timbuktu. Can you define operations on X that make X into a real linear space? Explain.

¹Exotic in the sense of "strikingly out of the ordinary" (Webster).

9.2 COORDINATES IN A LINEAR SPACE

In this section we continue generalizing the basic concepts of linear algebra from R^n to linear spaces.

Definition 9.2.1  Coordinates
Consider a linear space V with a basis B consisting of f_1, ..., f_n. Then any f in V can be written uniquely as
f = c_1 f_1 + ... + c_n f_n,
for some scalars c_1, c_2, ..., c_n. The c_i are called the coordinates of f with respect to B, and the vector
[f]_B = [c_1; c_2; ...; c_n]
is called the coordinate vector of f.

EXAMPLE 1 ► Consider the linear space P_2, with the basis B consisting of 1, t, t².¹ For f(t) = 7 − 3t + 11t², find [f]_B.

Solution
The coordinates of f with respect to B are 7, −3, and 11, so that
[f]_B = [7; −3; 11].

Consider a basis B of a linear space V, consisting of f_1, ..., f_n. Introducing coordinates in V with respect to B allows us to transform V into R^n: we can define the coordinate transformation
f ↦ [f]_B
from V to R^n. This coordinate transformation is invertible; its inverse is the transformation
[c_1; ...; c_n] ↦ c_1 f_1 + ... + c_n f_n
from R^n to V. The coordinate transformation is also linear, because
[kf]_B = k[f]_B  and  [f + g]_B = [f]_B + [g]_B.
We verify the first identity, leaving the second as Exercise 25. If f = c_1 f_1 + ... + c_n f_n, then kf = kc_1 f_1 + ... + kc_n f_n, so that
[kf]_B = [kc_1; kc_2; ...; kc_n] = k[c_1; c_2; ...; c_n] = k[f]_B.

We have shown the following result:

Fact 9.2.2  Coordinate transformation
Consider a linear space V, with a basis B consisting of f_1, ..., f_n. Then the coordinate transformation
f ↦ [f]_B
from V to R^n is an isomorphism, i.e., an invertible linear transformation.

¹We refer to the basis 1, t, ..., t^n as the standard basis of P_n.
498 • Chap. 9 Linear Spaces
Sec. 9.2 Coordinates in a Linear Space • 499
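Fact 9.2.2 can be made concrete for P_2: the coordinate map and its inverse are just "read off the coefficients" and "rebuild the polynomial". A minimal sketch, with the second polynomial a hypothetical choice:

```python
import numpy as np

# Coordinate transformation for P2 with respect to the standard basis 1, t, t^2:
# f(t) = c0 + c1*t + c2*t^2  <->  [c0, c1, c2]  (Example 1 and Fact 9.2.2).
def coords(c0, c1, c2):
    return np.array([c0, c1, c2])

def from_coords(c):
    return lambda t: c[0] + c[1] * t + c[2] * t**2

f = coords(7, -3, 11)          # f(t) = 7 - 3t + 11t^2 from Example 1
g = coords(2, 0, -1)           # a second polynomial (hypothetical choice)

# The coordinate map is linear: [f + g] = [f] + [g] and [5f] = 5[f],
# because polynomial addition and scaling act coefficientwise.
assert np.array_equal(f + g, coords(9, -3, 10))
assert np.array_equal(5 * f, coords(35, -15, 55))

# The map is invertible: the coordinate vector recovers the polynomial.
print(from_coords(f)(2))       # 7 - 3*2 + 11*4 = 45
```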
This fact allows us to give a concise conceptual proof of Fact 9.1.4 (this proof was deferred). Consider a linear space V with two bases A (with n elements) and B (with m elements). We claim that n = m. To prove this, consider the coordinate transformations
T_A(f) = [f]_A  from V to R^n
and
T_B(f) = [f]_B  from V to R^m.
See Figure 1. Then T_B ∘ T_A⁻¹ is an isomorphism (an invertible linear transformation) from R^n to R^m (see Fact 9.1.8a and Exercise 9.1.45). The existence of such an invertible linear transformation (described by an invertible matrix) implies that n = m (by Fact 2.3.3), as claimed.

Figure 1: the coordinate transformations T_A and T_B take V to R^n and R^m, respectively.

Fact 9.2.2 tells us that by introducing a basis in a linear space V we "make V into a copy of R^n" (this is really just an extension of Descartes' concept of analytical geometry). This translation process makes computations easier, because we have powerful numerical tools in R^n, based on matrix techniques. We do not need a separate theory for finite-dimensional linear spaces, since each such space is isomorphic to some R^n (it has the same structure as R^n). For example, we can say that n + 1 elements of an n-dimensional linear space are linearly dependent, since the corresponding result holds for R^n (by Fact 3.3.4).

Here are some (rather harmless) examples of coordinate transformations.

EXAMPLE 2 ► The coordinate transformation for P_2 with respect to the standard basis 1, t, t² is
P_2 → R³,  a_0 + a_1 t + a_2 t² ↔ [a_0; a_1; a_2].
We use double-headed arrows to emphasize the invertibility of the coordinate transformation.

EXAMPLE 3 ► The coordinate transformation for C with respect to the standard basis 1, i is
C → R²,  x + iy ↔ [x; y].

EXAMPLE 4 ► The coordinate transformation of R^{2×2} with respect to the standard basis
[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]
is
R^{2×2} → R⁴,  [a b; c d] ↔ [a; b; c; d].

EXAMPLE 5 ► Let V be the solution space of the differential equation f″ + f = 0. In the introduction to Section 9.1 we discussed the solutions, f(t) = c_1 cos(t) + c_2 sin(t). The coordinate transformation for V with respect to the basis cos(t), sin(t) is
V → R²,  c_1 cos(t) + c_2 sin(t) ↔ [c_1; c_2].

EXAMPLE 6 ► Determine whether three given 2×2 matrices are linearly independent. If not, find a nontrivial relation among them.

Solution
We translate the problem into R⁴, using the standard coordinate transformation (Example 4). Then the question is: are the three coordinate vectors in R⁴ linearly independent? We can use matrices to answer this question: form the 4×3 matrix whose columns are these coordinate vectors and compute its reduced row-echelon form (rref).
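The method of Example 6 can be sketched as follows; the three matrices here are hypothetical stand-ins (the entries in the scanned example are not legible), chosen so that a relation exists:

```python
import sympy as sp

# Three sample 2x2 matrices (hypothetical; A3 = A1 + A2, so a relation exists).
A1 = sp.Matrix([[1, 1], [1, 1]])
A2 = sp.Matrix([[1, 2], [3, 4]])
A3 = sp.Matrix([[2, 3], [4, 5]])

# Standard coordinate transformation (Example 4): flatten each into R^4,
# then stack the coordinate vectors as columns of a 4x3 matrix.
V = sp.Matrix.hstack(*(M.reshape(4, 1) for M in (A1, A2, A3)))

null = V.nullspace()
print(len(null))  # 1: nonzero null space, so the matrices are dependent

# Read a nontrivial relation c1*A1 + c2*A2 + c3*A3 = 0 off the null space.
c = null[0]
relation = c[0] * A1 + c[1] * A2 + c[2] * A3
print(relation)   # the 2x2 zero matrix
```

Computing the rref of V by hand gives the same information; the null space just packages the relation directly.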
Therefore, the three vectors (and hence the three matrices) are linearly dependent; a nontrivial relation among them can be read off the rref.

EXAMPLE 7 ► Consider a polynomial f(t) = a_0 + a_1 t + ... + a_m t^m and an n×n matrix A. We can evaluate f at A; that is, we can consider the n×n matrix f(A) = a_0 I_n + a_1 A + ... + a_m A^m. For a given n×n matrix A, show that there is a nonzero polynomial f(t) of degree ≤ n² such that f(A) = 0.

Solution
Since the linear space R^{n×n} is n²-dimensional, the n² + 1 matrices I_n, A, A², ..., A^{n²} are linearly dependent; that is, there is a nontrivial relation c_0 I_n + c_1 A + c_2 A² + ... + c_m A^m = 0 (with m = n²) among them. The polynomial f(t) = c_0 + c_1 t + ... + c_m t^m has the desired property.

The Matrix of a Linear Transformation

Next we examine how we can write a linear transformation in coordinates. For example, consider the linear transformation T: P_2 → P_1 given by
T(f) = f′ + f″.
More explicitly, we can write
T(a + bt + ct²) = (b + 2ct) + 2c = (b + 2c) + 2ct.
Using the standard basis A = 1, t, t² of P_2, we can transform P_2 into R³. Likewise, we can use the standard basis B = 1, t of P_1 to transform P_1 into R². Let us apply these coordinate transformations to the input and the output of the transformation T:
a + bt + ct²  maps under T to  (b + 2c) + 2ct,
[a; b; c]  maps to  [b + 2c; 2c].
Written in coordinates, the transformation T maps [a; b; c] into
[b + 2c; 2c] = [0 1 2; 0 0 2] [a; b; c].
The matrix
M = [0 1 2; 0 0 2]
is called the matrix of the transformation T with respect to the bases A and B; it describes the transformation T in coordinates.

Definition 9.2.3  The matrix of a linear transformation
Consider a linear transformation T: V → W, where dim(V) = n and dim(W) = m. Suppose we are given a basis A of V and a basis B of W. Then the matrix M of the linear transformation T_B ∘ T ∘ T_A⁻¹ is called the matrix of T with respect to A and B. Note that
[T(f)]_B = M[f]_A,  for all f in V.
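The matrix M = [0 1 2; 0 0 2] derived above can be recomputed column by column with SymPy, applying T to each standard basis polynomial of P_2 and reading off coordinates with respect to 1, t:

```python
import sympy as sp

t = sp.symbols('t')
T = lambda f: sp.diff(f, t) + sp.diff(f, t, 2)   # T(f) = f' + f''

basis_P2 = [sp.S(1), t, t**2]    # standard basis A of P2
cols = []
for f in basis_P2:
    g = sp.expand(T(f))
    # coordinates of T(f) with respect to the standard basis 1, t of P1
    cols.append([g.coeff(t, 0), g.coeff(t, 1)])

M = sp.Matrix(cols).T
print(M)  # Matrix([[0, 1, 2], [0, 0, 2]])
```

This is exactly the column-by-column recipe formalized in Fact 9.2.4 below it in the text.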
Compare with Definition 7.1.3. The situation is summarized by the diagram
f  maps under T to  T(f),
[f]_A  maps under M to  [T(f)]_B,
with the coordinate transformations T_A and T_B connecting V to R^n and W to R^m.

We can describe the matrix M column by column (compare with Fact 7.1.5). Suppose the basis A of V consists of f_1, ..., f_n. Applying the formula
[T(f)]_B = M[f]_A
to f = f_i, we find that
[T(f_i)]_B = M[f_i]_A = M e_i = (ith column of M).

Fact 9.2.4
Consider a linear transformation T from V to W. Suppose A is a basis of V, consisting of f_1, ..., f_n, and B is a basis of W. Let M be the matrix of T with respect to A and B. Then
ith column of M = [T(f_i)]_B.

Fact 9.2.4 provides us with a mechanical procedure for finding the matrix of a linear transformation, as illustrated below.

EXAMPLE 8 ► Use Fact 9.2.4 to find the matrix of the linear transformation T(f) = f′ + f″ from P_2 to P_1 with respect to the standard bases.

Solution
We apply the transformation T to the functions 1, t, and t² of the standard basis of P_2. We then write the resulting functions in coordinates with respect to the standard basis of P_1. Next we combine the three resulting vectors in R² to construct the matrix M of the transformation T:
1 maps to 0, with coordinates [0; 0];
t maps to 1, with coordinates [1; 0];
t² maps to 2t + 2, with coordinates [2; 2];
so that
M = [0 1 2; 0 0 2].

EXAMPLE 9 ► For two given real numbers p and q, find the matrix of the linear transformation
T(z) = (p + iq)z
from C to C with respect to the standard basis B of C, consisting of 1 and i.

Solution
1 maps to p + iq, with coordinates [p; q];
i maps to (p + iq)i = −q + ip, with coordinates [−q; p];
so that
M = [p −q; q p].
Note that M is a rotation-dilation matrix.

Fact 9.2.5
The matrix M of the linear transformation T(z) = (p + iq)z from C to C with respect to the standard basis 1, i is the rotation-dilation matrix
M = [p −q; q p].

EXAMPLE 10 ► Let V be the linear space consisting of all functions of the form f(t) = a cos(t) + b sin(t), that is, the sinusoidal functions with period 2π. Find the matrix M of the linear transformation
(Tf)(t) = f(t − δ)
from V to V with respect to the basis B consisting of cos(t) and sin(t). Note that T moves the graph of f to the right by δ units (compare with Example 24 of Section 9.1).

Solution
Be prepared to use the addition theorems for sine and cosine:
cos(t) maps to cos(t − δ) = cos(δ)cos(t) + sin(δ)sin(t), with coordinates [cos(δ); sin(δ)];
sin(t) maps to sin(t − δ) = −sin(δ)cos(t) + cos(δ)sin(t), with coordinates [−sin(δ); cos(δ)];
so that
M = [cos(δ) −sin(δ); sin(δ) cos(δ)],
a rotation matrix.
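The rotation matrix of Example 10 can be checked numerically: acting with M on the coordinates [a; b] of f should give the coordinates of the shifted function f(t − δ). A sketch with hypothetical values of δ, a, b:

```python
import numpy as np

delta = 0.7  # an arbitrary shift (hypothetical value)

# Matrix of (Tf)(t) = f(t - delta) with respect to cos(t), sin(t), Example 10.
M = np.array([[np.cos(delta), -np.sin(delta)],
              [np.sin(delta),  np.cos(delta)]])

a, b = 2.0, -3.0                  # f(t) = a cos t + b sin t (hypothetical)
a2, b2 = M @ np.array([a, b])     # coordinates of Tf

# Tf should equal f(t - delta) at every sample point.
for t in np.linspace(0, 2 * np.pi, 9):
    lhs = a2 * np.cos(t) + b2 * np.sin(t)       # Tf via the matrix M
    rhs = a * np.cos(t - delta) + b * np.sin(t - delta)  # direct shift
    assert abs(lhs - rhs) < 1e-12
print("rotation-matrix check passes")
```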
A problem about a linear transformation T can often be solved by solving the corresponding problem for the matrix M of T with respect to some bases. This technique can be used to find the kernel and image of T, to determine whether T is an isomorphism, to find the eigenvalues of T, or to solve an equation T(f) = g for a given g.

ker(T) ⊆  V  --T-->  W  ⊇ im(T)
          |          |
          v          v
ker(M) ⊆ R^n --M--> R^m ⊇ im(M)
EXAMPLE 11 ► Consider the linear transformation from P2 to R^{2x2} given by

T(f) = f(A),   where A = [1  2]
                         [3  4]

(compare with Example 7). Find the matrix of T with respect to the standard bases. Use this matrix to find bases for the kernel and image of T.

Solution
Construct the matrix M of T column by column, using Fact 9.2.4. We have

T(1) = I_2 = [1  0],   T(t) = A = [1  2],   T(t^2) = A^2 = [ 7  10],
             [0  1]               [3  4]                   [15  22]

so that

M = [1  1   7]
    [0  2  10]
    [0  3  15]
    [1  4  22].

Let us find bases for the image and kernel of M. Row reduction gives

rref(M) = [1  0  2]
          [0  1  5]
          [0  0  0]
          [0  0  0].

Image: A basis of the image of M is

[1]   [1]
[0]   [2]
[0] , [3]
[1]   [4]

(we pick the pivot columns). Transforming these vectors back into R^{2x2}, the codomain of T, we find that

I_2 = [1  0]   and   A = [1  2]
      [0  1]             [3  4]

form a basis of the image of T.

Kernel: To find the kernel of M, we have to solve the system

x_1 + 2x_3 = 0
x_2 + 5x_3 = 0.

A basis of the kernel of M is

[-2]
[-5]
[ 1].

Transforming this vector back into P2, the domain of T, we find that f(t) = -2 - 5t + t^2 is a basis of the kernel of T. Note that f(t) is the characteristic polynomial of A (compare with Exercise 7.2.64).

ker(T) ⊆  P2  --T--> R^{2x2} ⊇ im(T)
          |            |
          v            v
ker(M) ⊆ R^3  --M-->  R^4   ⊇ im(M)

The vertical arrows above represent coordinate transformations. ◄

EXAMPLE 12 ► Let V be the linear space consisting of all functions of the form f(t) = a cos(t) + b sin(t). Consider the linear transformation T from V to V given by

T(f) = f'' - 2f' - 3f.

a. Is T an isomorphism?
b. Find all solutions f in V of the differential equation

f'' - 2f' - 3f = cos(t).

Solution
Find the matrix M of T with respect to the basis B consisting of cos(t) and sin(t):

T(cos(t)) = -4cos(t) + 2sin(t), with coordinate vector [-4, 2];
T(sin(t)) = -2cos(t) - 4sin(t), with coordinate vector [-2, -4].

Therefore,

M = [-4  -2]
    [ 2  -4].

Note that M is a rotation–dilation matrix.
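The conclusion of Example 11, that the kernel of T is spanned by the characteristic polynomial of A, can be sanity-checked by evaluating that polynomial at A; this is an instance of the Cayley–Hamilton theorem. A small sympy sketch:

```python
import sympy as sp

A = sp.Matrix([[1, 2], [3, 4]])

# The kernel vector [-2, -5, 1] found in Example 11 corresponds to
# f(t) = -2 - 5t + t^2, the characteristic polynomial of A.
f_of_A = -2 * sp.eye(2) - 5 * A + A**2
print(f_of_A)   # the zero matrix, so f is indeed in the kernel of T
```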
506 • Chap. 9 Linear Spaces
Sec. 9.2 Coordinates in a Linear Space • 507

a. Since M is invertible, T is an isomorphism.
b. We write the equation T(f) = cos(t) in coordinates:

[T(f)]_B = [cos(t)]_B,   or   M x̄ = [1]    where x̄ = [f]_B.
                                    [0],

The solution is

x̄ = M⁻¹ [1] = (1/20) [-4   2] [1] = [-0.2]
        [0]          [-2  -4] [0]   [-0.1].

Transforming this answer back into V, we find that the unique solution in V of the given differential equation is

f(t) = -0.2 cos(t) - 0.1 sin(t).

Check this answer. ◄

EXERCISES

GOALS: Use the concept of coordinates. Find the matrix of a linear transformation.

1. Are the polynomials f(t) = 7 + 3t + t², g(t) = 9 + 9t + 4t², h(t) = 3 + 2t + t² linearly independent?

2. Are the matrices … linearly independent?
 f ∈  V  --T-->  V  ∋ cos(t)
      |          |
      v          v
x̄ ∈ R^2 --M--> R^2 ∋ [1]
                     [0]
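The coordinate method of Example 12 reduces the differential equation to a 2x2 linear system, which can be checked numerically. A minimal sketch (assuming numpy):

```python
import numpy as np

# Example 12 in coordinates: with basis B = (cos t, sin t),
# T(f) = f'' - 2f' - 3f has the matrix M below, and solving
# T(f) = cos(t) amounts to solving M x = [1, 0]^T.
M = np.array([[-4.0, -2.0],
              [ 2.0, -4.0]])
x = np.linalg.solve(M, np.array([1.0, 0.0]))
print(x)   # approximately [-0.2, -0.1], i.e. f(t) = -0.2 cos(t) - 0.1 sin(t)
```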
The basic facts we have derived for linear transformations from R^n to R^m generalize easily to linear transformations between finite-dimensional linear spaces.

Fact 9.2.6

If T is a linear transformation from V to W, where ker(T) and im(T) are finite-dimensional linear spaces, then V is finite-dimensional, and

dim(ker(T)) + dim(im(T)) = dim(V).

Compare with Fact 3.3.9.

3. Do the polynomials f(t) = 1 + 2t + 9t² + t³, g(t) = 1 + 7t + 7t³, h(t) = 1 + 8t + t² + 5t³, k(t) = 1 + 8t + 4t² + 8t³ form a basis of P3?

4. Consider the polynomials f(t) = t + 1 and g(t) = (t + 2)(t + k), where k is an arbitrary constant. For which choices of the constant k are the three polynomials f(t), tf(t), g(t) a basis of P2?

In Exercises 5 to 10 find the matrix of the given linear transformation with respect to the standard bases.

5. T: P2 → R² given by T(f) = [f(…), …].

6. T: P2 → P2 given by (Tf)(t) = f(t + 1).

7. T: P3 → R² given by

T(f) = [      f′(1)      ]
       [∫ from -1: f(t) dt].

8. T: P2 → R³ given by

T(f) = [f(…)]
       [f(…)]
       [f(2)].

Here is another useful result:
Fact 9.2.7

Consider a linear transformation T from V to W, where V and W are finite-dimensional linear spaces. Then T is an isomorphism if (and only if)

a. dim(V) = dim(W) and
b. ker(T) = {0}.

Proof

If T is an isomorphism, then dim(V) = dim(W) by Fact 9.1.8d, and ker(T) = {0} by Fact 9.1.8b. Conversely, suppose that dim(V) = dim(W) and ker(T) = {0}. Consider the matrix M of T with respect to some bases. Then M is a square matrix with ker(M) = {0}, so that M is invertible by Fact 3.1.7. Therefore, T is an isomorphism. ◄
9. T: R^{2x2} → R given by T(A) = tr(A).

10. F: R^{2x2} → R^{2x2} given by F(A) = ½A + ½Aᵀ.
11. For the transformation T in Exercise 5, find bases of ker(T) and im(T).
12. Is the transformation T in Exercise 6 an isomorphism? Find the eigenvalues and eigenspaces of T. Is T diagonalizable?
13. Find a basis of the kernel of the linear transformation T in Exercise 7.
14. For the transformation T in Exercise 8, find bases of ker(T) and im(T). Is T an isomorphism?
508 • Chap. 9 Linear Spaces
Sec. 9.2 Coordinates in a Linear Space
15. Describe the kernel and image of the transformation F in Exercise 10. Find the eigenvalues and eigenspaces of F. Is F diagonalizable?
16. Let V be the linear space of all quadratic forms q(x₁, x₂) = ax₁² + bx₁x₂ + cx₂² in two variables. Consider the linear transformation T from V to V given by

T(f) = (∂f/∂x₁)x₁ + (∂f/∂x₂)x₂.

a. Find the matrix of T with respect to the basis x₁², x₁x₂, x₂².
b. Find bases of the kernel and image of T.
c. Find the eigenvalues and eigenspaces of T. Is T diagonalizable?

17. Consider the linear transformation T: P2 → R^{2x2} given by

T(f) = [f′(3)  3f(1)]
       [ …      f(5)].

a. Find the matrix of T with respect to the standard bases.
b. Find bases for the kernel and the image of T.

18. Consider the linear transformation T: P2 → P2 given by

T(f) = f + af′ + bf″,

where a and b are arbitrary constants.
a. Find the matrix of T with respect to the standard basis of P2.
b. If g is an arbitrary polynomial in P2, how many solutions f in P2 does the differential equation

f + af′ + bf″ = g

have? Justify your answer.

19. Consider a linear transformation T: V → V with ker(T) = {0}. If V is finite-dimensional, then T is an isomorphism, by Fact 9.2.7. Show that this is not necessarily the case if V is infinite-dimensional: for V = P, give an example of a linear transformation T: V → V with ker(T) = {0} which is not an isomorphism (recall that P is the space of all polynomials).

20. Consider two finite-dimensional linear spaces V and W. If V and W are isomorphic, then they have the same dimension (by Fact 9.1.8d). Conversely, if V and W have the same dimension, are they necessarily isomorphic? Justify your answer carefully.

21. Consider the linear space V that consists of all functions of the form

f(t) = c₁ cos(t) + c₂ sin(t),

where c₁ and c₂ are arbitrary constants. Consider the linear transformation T: V → V given by

T(f) = f″ + af′ + bf.

a. Find the matrix of the linear transformation T with respect to the basis cos(t), sin(t). Here a and b are arbitrary constants.
b. Find the function(s) f in V such that

T(f) = f″ + af′ + bf = cos(t).

Your solution will contain the arbitrary constants a and b. For which choices of a and b is there no such function f? (In physics, the differential equation f″ + af′ + bf = cos(t) describes a forced oscillator.)

22. Let V be the linear space of all functions of the form

f(t) = c₁ cos(t) + c₂ sin(t) + c₃ t cos(t) + c₄ t sin(t).

Consider the linear transformation T: V → V given by

T(f) = f″ + f.

a. Find the matrix of T with respect to the basis cos(t), sin(t), t cos(t), t sin(t).
b. Find all solutions f in V of the differential equation

f″ + f = cos(t).

Graph your solution(s). (The differential equation f″ + f = cos(t) describes a forced undamped oscillator. In this example, we observe the phenomenon of resonance.)

23. Consider the linear transformation T: Pn → R^{n+1} given by

T(f) = [f(a₀)]
       [f(a₁)]
       [  ⋮  ]
       [f(aₙ)],

where the aᵢ are distinct constants.
a. Find the kernel of T. Hint: A nonzero polynomial of degree ≤ n has at most n zeros.
b. Is T an isomorphism?
c. What does your answer in part b tell you about the possibility of fitting a polynomial of degree ≤ n to n + 1 given points (a₀, b₀), (a₁, b₁), ..., (aₙ, bₙ) in the plane?

24. Consider the linear space V of all infinite sequences of real numbers. We define the subset W of V consisting of all sequences (x₀, x₁, x₂, ...) such that xₙ₊₂ = xₙ₊₁ + 6xₙ for all n ≥ 0.
a. Show that W is a subspace of V.
b. Determine the dimension of W.
c. Does W contain any geometric sequences of the form (1, c, c², c³, ...), for some constant c? Find all such sequences in W.
d. Can you find a basis of W consisting of geometric sequences?
510 • Chap. 9 Linear Spaces
Sec. 9.3 Inner Product Spaces

e. Consider the sequence in W whose first two terms are x₀ = 0, x₁ = 1. Find x₂, x₃, x₄. Find a closed formula for the nth term xₙ of this sequence. Hint: Write this sequence as a linear combination of the sequences you found in part d.
25. Consider a basis B of a linear space V. Show that

[f + g]_B = [f]_B + [g]_B,

for all f and g in V.

26. Consider the linear transformation I(f) = ∫ from 0 to 1: f(x) dx from C[0, 1] to R (see Example 21 of Section 9.1). Show that the kernel of I is infinite-dimensional.

27. Let A be an m x n matrix with rank(A) = n. Show that the linear transformation L(x̄) = Ax̄ from R^n to im(A) is an isomorphism. Show that the inverse is L⁻¹(ȳ) = (AᵀA)⁻¹Aᵀȳ.

28. Let A be an m x n matrix. Show that the linear transformation L(x̄) = Ax̄ from im(Aᵀ) to im(A) is an isomorphism. Is the linear transformation F(ȳ) = Aᵀȳ from im(A) to im(Aᵀ) necessarily the inverse of L?

29. a. Let T be a linear transformation from V to V, where V is a finite-dimensional complex linear space. Show that the transformation T has complex eigenvalues.
b. Consider two matrices A and B in C^{n x n} such that AB = BA. Show that A and B have a common eigenvector in C^n. Hint: Let V be an eigenspace for A. Show that Bx̄ is in V for all x̄ in V. This implies that we can define the linear transformation T(x̄) = Bx̄ from V to V.

30. Let C be the set of all rotation–dilation matrices

[p  -q]
[q   p],

a subspace of R^{2x2}. Thinking of C and ℂ as real linear spaces, define an isomorphism

T: ℂ → C

such that T(zw) = T(z)T(w). The existence of such an isomorphism implies that ℂ and C are isomorphic not just as linear spaces, but as fields.

31. In Exercises 4.3.34 and 6.4.37 we have presented two different ways to think about the quaternions. Mimic the approach of Exercise 30 to show that the two sets H and 𝐇 carry the same structure as far as addition and multiplication are concerned.

32. Consider a basis f₁, ..., fₙ of P_{n-1}. Let a₁, ..., aₙ be distinct real numbers. Consider the n x n matrix M whose ijth entry is fⱼ(aᵢ). Show that the matrix M is invertible. Hint: If the vector

[c₁]
[⋮ ]
[cₙ]

is in the kernel of M, then the polynomial f = c₁f₁ + ··· + cₙfₙ in P_{n-1} vanishes at a₁, ..., aₙ; therefore, f = 0.

33. Let a₁, ..., aₙ be distinct real numbers. Show that there are "weights" w₁, ..., wₙ such that

∫ from -1 to 1: f(t) dt = Σ (i = 1 to n) wᵢ f(aᵢ),

for all polynomials f(t) in P_{n-1}. Hint: It suffices to prove the claim for a basis f₁, ..., fₙ of P_{n-1}. Exercise 32 is helpful.

34. Find the weights w₁, w₂, w₃ in Exercise 33 for a₁ = -1, a₂ = 0, a₃ = 1 (compare with Simpson's rule in calculus).

35. Consider a linear transformation

T: V → V,

where V is a finite-dimensional linear space. How do you think the determinant of T is defined? Explain.

INNER PRODUCT SPACES

In the last two sections, we focused on those concepts of linear algebra that can be defined in terms of linear combinations alone, i.e., in terms of sums and scalar multiples. Other important concepts relating to vectors in R^n are defined in terms of the dot product: length, angles, and orthogonality (orthogonal projections, orthonormal bases, orthogonal transformations). It is sometimes useful to define a product analogous to the dot product in linear spaces other than R^n. These generalized dot products are called inner products.

Definition 9.3.1

Inner products

An inner product in a linear space V is a rule that assigns a real scalar (denoted by ⟨f, g⟩) to any pair f, g of elements of V, such that the following properties hold for all f, g, h in V, and all c in R:

a. ⟨f, g⟩ = ⟨g, f⟩
b. ⟨f + g, h⟩ = ⟨f, h⟩ + ⟨g, h⟩
c. ⟨cf, g⟩ = c⟨f, g⟩
d. ⟨f, f⟩ > 0 for all nonzero f in V.

A linear space endowed with an inner product is called an inner product space. The prototype of an inner product space is R^n with the dot product: ⟨v̄, w̄⟩ = v̄ · w̄. Here are some other examples:

EXAMPLE 1 ► Consider the linear space C[a, b] consisting of all continuous functions whose domain is the closed interval [a, b], where a < b. See Figure 1.
512 • Chap. 9 Linear Spaces
Sec. 9.3 Inner Product Spaces • 513
[Figure 1: a continuous function defined on the closed interval [a, b].]

For functions f and g in C[a, b], we define

⟨f, g⟩ = ∫ from a to b: f(t)g(t) dt.

The verification of the first three axioms for an inner product is straightforward. For example,

⟨f, g⟩ = ∫ from a to b: f(t)g(t) dt = ∫ from a to b: g(t)f(t) dt = ⟨g, f⟩.

The verification of the last axiom requires a bit of calculus. We leave it as Exercise 1.

Recall that the Riemann integral ∫ from a to b: f(t)g(t) dt is the limit of the Riemann sum

Σ (k = 1 to m) f(tₖ)g(tₖ) Δt,

where the tₖ can be chosen as equally spaced points in the interval [a, b]. See Figure 2. Then

⟨f, g⟩ = ∫ from a to b: f(t)g(t) dt ≈ ( [f(t₁), f(t₂), ..., f(tₘ)] · [g(t₁), g(t₂), ..., g(tₘ)] ) Δt,

for large m.

[Figure 2: the graphs of f and g, sampled at equally spaced points t₁, ..., tₘ in [a, b].]

This approximation shows that the inner product ⟨f, g⟩ = ∫ from a to b: f(t)g(t) dt for functions is a continuous version of the dot product: the more subdivisions you choose, the better the dot product on the right will approximate the inner product ⟨f, g⟩. ◄

EXAMPLE 2 ► Let ℓ² be the space of all "square-summable" infinite sequences, i.e., sequences

x̄ = (x₀, x₁, x₂, ..., xₙ, ...)

such that Σ (i = 0 to ∞) xᵢ² = x₀² + x₁² + ··· converges. In this space we can define the inner product

⟨x̄, ȳ⟩ = Σ (i = 0 to ∞) xᵢyᵢ = x₀y₀ + x₁y₁ + ···

(show that this series converges). The verification of the axioms is straightforward. ◄

EXAMPLE 3 ► In R^{m x n} we can define the inner product

⟨A, B⟩ = trace(AᵀB).

We will verify the first and the fourth axioms:

⟨A, B⟩ = trace(AᵀB) = trace((AᵀB)ᵀ) = trace(BᵀA) = ⟨B, A⟩.

To check that ⟨A, A⟩ > 0 for nonzero A, write A in terms of its columns:

A = [v̄₁  v̄₂  ...  v̄ₙ].

Then

⟨A, A⟩ = trace(AᵀA) = ‖v̄₁‖² + ‖v̄₂‖² + ··· + ‖v̄ₙ‖²,

since the ith diagonal entry of AᵀA is v̄ᵢ · v̄ᵢ = ‖v̄ᵢ‖².
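The symmetry and positivity verified in Example 3 can be spot-checked numerically. A small numpy sketch (the random test matrices are an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))
B = rng.standard_normal((3, 2))

inner = lambda X, Y: np.trace(X.T @ Y)   # <A, B> = trace(A^T B)

# Symmetry: trace(A^T B) = trace(B^T A).
print(np.isclose(inner(A, B), inner(B, A)))    # True
# Positivity: <A, A> equals the sum of the squared entries,
# i.e. the sum of the squared column norms.
print(np.isclose(inner(A, A), (A**2).sum()))   # True
```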
514 • Chap. 9 Linear Spaces
Sec. 9.3 Inner Product Spaces • 515
If A is nonzero, then at least one of the v̄ᵢ is nonzero, and the sum ‖v̄₁‖² + ‖v̄₂‖² + ··· + ‖v̄ₙ‖² is positive, as desired. ◄

We can introduce the basic concepts of geometry for an inner product space exactly as we did in R^n for the dot product.

Definition 9.3.2

Norm, orthogonality

The norm of an element f of an inner product space is

‖f‖ = √⟨f, f⟩.

Two elements f, g of an inner product space are called orthogonal (or perpendicular) if

⟨f, g⟩ = 0.

We can define the distance of two elements of an inner product space as the norm of their difference:

dist(f, g) = ‖f - g‖.

The results and procedures discussed for the dot product generalize to arbitrary inner product spaces. For example, the theorem of Pythagoras holds; the Gram–Schmidt process can be used to construct an orthonormal basis of a (finite-dimensional) inner product space; and the Cauchy–Schwarz inequality tells us that |⟨f, g⟩| ≤ ‖f‖ ‖g‖ for two elements f, g of an inner product space.

Consider the space C[a, b], with the inner product defined in Example 1. In physics, the quantity ‖f‖² can often be interpreted as energy: it describes the acoustic energy of a periodic sound wave f(t) and the elastic potential energy of a uniform string with vertical displacement f(x) (see Figure 3). The quantity ‖f‖² may also measure thermal or electric energy.

[Figure 3: a string attached at (a, 0) and (b, 0), with vertical displacement f(x) at x.]

EXAMPLE 4 ► In the inner product space C[0, 1] with ⟨f, g⟩ = ∫ from 0 to 1: f(t)g(t) dt, find ‖f‖ for f(t) = t².

Solution

‖f‖ = √⟨f, f⟩ = √(∫ from 0 to 1: t⁴ dt) = 1/√5. ◄

EXAMPLE 5 ► Show that f(t) = sin(t) and g(t) = cos(t) are perpendicular in the inner product space C[0, 2π] with ⟨f, g⟩ = ∫ from 0 to 2π: f(t)g(t) dt.

Solution

⟨f, g⟩ = ∫ from 0 to 2π: sin(t)cos(t) dt = [½ sin²(t)] from 0 to 2π = 0. ◄

EXAMPLE 6 ► Find the distance of f(t) = t and g(t) = 1 in C[0, 1].

Solution

dist(f, g) = ‖f - g‖ = √(∫ from 0 to 1: (t - 1)² dt) = 1/√3. ◄

Orthogonal Projections

In an inner product space V, consider a subspace W with orthonormal basis g₁, ..., gₘ. The orthogonal projection proj_W f of an element f of V onto W is defined as the unique element of W such that f - proj_W f is orthogonal to W. As in the case of the dot product in R^n, the orthogonal projection is given by the formula below.

Fact 9.3.3

Orthogonal projection

If g₁, ..., gₘ is an orthonormal basis of a subspace W of an inner product space V, then

proj_W f = ⟨g₁, f⟩g₁ + ··· + ⟨gₘ, f⟩gₘ

for all f in V.

(Verify this by checking that ⟨f - proj_W f, gᵢ⟩ = 0 for i = 1, ..., m.) We may think of proj_W f as the element of W closest to f. In other words, if we choose another element h of W, then the distance between f and h will exceed the distance between f and proj_W f. As an example, consider a subspace W of C[a, b], with the inner product introduced in Example 1. Then proj_W f is the function g in W that is closest to f, in the sense that

dist(f, g) = ‖f - g‖ = √(∫ from a to b: (f(t) - g(t))² dt)

is least.
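The norms and distances in Examples 4 through 6 can be reproduced symbolically. A minimal sympy sketch (the helper `ip` is ours):

```python
import sympy as sp

t = sp.symbols('t')

# <f, g> = integral of f*g over [a, b], as in Example 1.
ip = lambda f, g, a, b: sp.integrate(f * g, (t, a, b))

# Example 4: the norm of t^2 in C[0, 1].
print(sp.sqrt(ip(t**2, t**2, 0, 1)))           # sqrt(5)/5, i.e. 1/sqrt(5)
# Example 5: sin and cos are orthogonal in C[0, 2*pi].
print(ip(sp.sin(t), sp.cos(t), 0, 2*sp.pi))    # 0
# Example 6: the distance of t and 1 in C[0, 1].
print(sp.sqrt(ip(t - 1, t - 1, 0, 1)))         # sqrt(3)/3, i.e. 1/sqrt(3)
```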
516 • Chap. 9 Linear Spaces
Sec. 9.3 Inner Product Spaces • 517
The requirement that

∫ from a to b: (f(t) - g(t))² dt

be minimal is a continuous least-squares condition, as opposed to the discrete least-squares conditions we discussed in Section 4.4. We can use the discrete least-squares condition to fit a function g of a certain type to some data points (aₖ, bₖ), while the continuous least-squares condition can be used to fit a function g of a certain type to a given function f ("functions of a certain type" are frequently polynomials of a certain degree or trigonometric functions of a certain form). See Figures 4a and 4b. We can think of the continuous least-squares condition as a limiting case of a discrete least-squares condition by writing

∫ from a to b: (f(t) - g(t))² dt = lim (m → ∞) Σ (k = 1 to m) (f(tₖ) - g(tₖ))² Δt.

[Figure 4a: Discrete least-squares condition: Σ (bᵢ - g(aᵢ))² is minimal.]
[Figure 4b: Continuous least-squares condition: ∫ from a to b: (f(t) - g(t))² dt is minimal.]

EXAMPLE 7 ► Find the linear function of the form g(t) = a + bt that best approximates the function f(t) = eᵗ over the interval from -1 to 1, in a continuous least-squares sense. See Figure 5.

[Figure 5: the graphs of f(t) = eᵗ and of proj_P1 f on [-1, 1].]

Solution

We need to find proj_P1 f. We first find an orthonormal basis of P1 for the given inner product; then we will use Fact 9.3.3. In general, we have to use the Gram–Schmidt process to find an orthonormal basis of an inner product space. Because the two functions 1, t in the standard basis of P1 are already orthogonal, that is,

⟨1, t⟩ = ∫ from -1 to 1: t dt = 0,

we merely need to divide each function by its norm:

‖1‖ = √(∫ from -1 to 1: 1 dt) = √2   and   ‖t‖ = √(∫ from -1 to 1: t² dt) = √(2/3).

An orthonormal basis of P1 is

1/√2   and   √(3/2) t.

Now

proj_P1 f = ½⟨1, f⟩·1 + (3/2)⟨t, f⟩·t = ½(e - e⁻¹) + 3e⁻¹ t.

(We omit the straightforward computations.) ◄
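The projection found in Example 7 can be cross-checked by solving a fine discrete least-squares problem, which, as discussed above, approximates the continuous one. A numpy sketch (the grid size is an arbitrary choice of ours):

```python
import numpy as np

# Discrete least-squares fit of g(t) = a + b*t to f(t) = exp(t) on [-1, 1];
# with a fine grid this approximates the continuous projection of Example 7.
ts = np.linspace(-1.0, 1.0, 20001)
design = np.stack([np.ones_like(ts), ts], axis=1)   # columns for 1 and t
coef, *_ = np.linalg.lstsq(design, np.exp(ts), rcond=None)

a_exact = (np.e - 1.0 / np.e) / 2.0   # (e - e^-1)/2 from Example 7
b_exact = 3.0 / np.e                  # 3 e^-1 from Example 7
print(coef, a_exact, b_exact)
```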
518 • Chap. 9 Linear Spaces
Sec. 9.3 Inner Product Spaces

What follows is one of the major applications of this theory.
Fourier Analysis¹

In the space C[-π, π] we introduce an inner product that is a slight modification of the definition given earlier:

⟨f, g⟩ = (1/π) ∫ from -π to π: f(t)g(t) dt.

The factor 1/π is introduced to facilitate the computations. Convince yourself that this is indeed an inner product (compare with Exercise 7). More generally, we can consider this inner product in the space of all piecewise continuous functions defined on the interval [-π, π]. These are functions f(t) that are continuous except for a finite number of jump-discontinuities, that is, points c where the one-sided limits lim (t → c⁻) f(t) and lim (t → c⁺) f(t) both exist but are not equal. Also it is required that f(c) equal one of the two one-sided limits. See Figure 6.

[Figure 6: f(t) has a jump-discontinuity at t = c.]

For a positive integer n, consider the subspace Tₙ of C[-π, π] which is defined as the span of the functions 1, sin(t), cos(t), sin(2t), cos(2t), ..., sin(nt), cos(nt). The space Tₙ consists of all functions of the form

f(t) = a + b₁ sin(t) + c₁ cos(t) + ··· + bₙ sin(nt) + cₙ cos(nt),

called trigonometric polynomials of order ≤ n. From calculus you may recall the Euler identities:

∫ from -π to π: sin(pt) cos(mt) dt = 0,   for integers p, m,
∫ from -π to π: sin(pt) sin(mt) dt = 0,   for distinct integers p, m,
∫ from -π to π: cos(pt) cos(mt) dt = 0,   for distinct integers p, m.

These equations tell us that the functions 1, sin(t), cos(t), ..., sin(nt), cos(nt) are orthogonal to one another (and therefore linearly independent). Another of Euler's identities tells us that

∫ from -π to π: sin²(mt) dt = π   and   ∫ from -π to π: cos²(mt) dt = π,

for positive integers m. This means that the functions sin(t), cos(t), ..., sin(nt), cos(nt) are of norm 1 with respect to the given inner product. This is why we chose the inner product as we did, with the factor 1/π. The norm of the function f(t) = 1 is

‖f‖ = √((1/π) ∫ from -π to π: 1 dt) = √2;

therefore,

g(t) = f(t)/‖f(t)‖ = 1/√2

is a function of norm one.

Fact 9.3.4

Let Tₙ be the space of all trigonometric polynomials of order ≤ n, with the inner product

⟨f, g⟩ = (1/π) ∫ from -π to π: f(t)g(t) dt.

Then the functions

1/√2, sin(t), cos(t), sin(2t), cos(2t), ..., sin(nt), cos(nt)

form an orthonormal basis of Tₙ.

For an arbitrary function f in C[-π, π], we can consider

fₙ = proj_Tₙ f.

As we discussed above, fₙ is the trigonometric polynomial in Tₙ that best approximates f, in the sense that

dist(f, fₙ) < dist(f, g)

for all other g in Tₙ.

¹Named after the French mathematician Jean-Baptiste-Joseph Fourier (1768–1830), who developed the subject in his Théorie analytique de la chaleur (1822), where he investigated the conduction of heat in very thin sheets of metal. Baron Fourier was also an Egyptologist and government administrator; he accompanied Napoleon on his expedition to Egypt in 1798.
520 • Chap. 9 Linear Spaces
Sec. 9.3 Inner Product Spaces • 521
We can use Facts 9.3.3 and 9.3.4 to find a formula for fₙ = proj_Tₙ f.

Fact 9.3.5

Fourier coefficients

If f is a piecewise continuous function defined on the interval [-π, π], then its best approximation fₙ in Tₙ is

fₙ(t) = proj_Tₙ f(t) = a₀ (1/√2) + b₁ sin(t) + c₁ cos(t) + ··· + bₙ sin(nt) + cₙ cos(nt),

where

bₖ = ⟨f(t), sin(kt)⟩ = (1/π) ∫ from -π to π: f(t) sin(kt) dt,
cₖ = ⟨f(t), cos(kt)⟩ = (1/π) ∫ from -π to π: f(t) cos(kt) dt,
a₀ = ⟨f(t), 1/√2⟩ = (1/(√2 π)) ∫ from -π to π: f(t) dt.

The bₖ, the cₖ, and a₀ are called the Fourier coefficients of the function f. The function

fₙ(t) = a₀ (1/√2) + b₁ sin(t) + c₁ cos(t) + ··· + bₙ sin(nt) + cₙ cos(nt)

is called the nth-order Fourier approximation of f. Note that the constant term, written somewhat awkwardly, is

a₀ (1/√2) = (1/(2π)) ∫ from -π to π: f(t) dt,

which is the average value of the function f between -π and π. It makes sense that the best way to approximate f(t) by a constant function is to take its average value.

The function bₖ sin(kt) + cₖ cos(kt) is called the kth harmonic of f(t). Using elementary trigonometry, we can write the harmonic alternatively as

bₖ sin(kt) + cₖ cos(kt) = Aₖ sin(k(t - δₖ)),

where Aₖ = √(bₖ² + cₖ²) is the amplitude of the harmonic and δₖ is the phase shift.

Consider the sound generated by a vibrating string, such as in a piano or on a violin. Let f(t) be the air pressure at your eardrum as a function of time t (the function f(t) is measured as a deviation from the normal atmospheric pressure). In this case, the harmonics have a simple physical interpretation: they correspond to the various sinusoidal modes at which the string can vibrate. See Figure 7. The fundamental frequency (corresponding to the vibration shown at the bottom in Figure 7) gives us the first harmonic of f(t), while the overtones (with frequencies that are integer multiples of the fundamental frequency) give us the other terms of the harmonic series. The quality of a tone is in part determined by the relative amplitudes of the harmonics. When you play concert A (440 Hz) on a piano, the first harmonic is much more prominent than the higher ones, but the same tone played on a violin gives prominence to higher harmonics (especially the fifth). See Figure 8. Similar considerations apply to wind instruments; they have a vibrating column of air instead of a vibrating string.

The human ear cannot hear tones whose frequencies exceed 20,000 Hz. We pick up only finitely many harmonics of a tone. What we hear is the projection of f(t) onto a certain Tₙ.

[Figure 7: the sinusoidal modes of a vibrating string.]
[Figure 8: the amplitudes Aₖ of the harmonics k = 1, 2, ..., 6 for a piano and for a violin.]

EXAMPLE 8 ► Find the Fourier coefficients for the function

f(t) = t on the interval -π ≤ t ≤ π.

Solution

bₖ = ⟨f, sin(kt)⟩ = (1/π) ∫ from -π to π: t sin(kt) dt
   = (1/π) [-(1/k) t cos(kt)] from -π to π + (1/(kπ)) ∫ from -π to π: cos(kt) dt   (integration by parts)
   = -(2/k) cos(kπ)
   = 2/k if k is odd, -2/k if k is even.

The cₖ and a₀ are zero, since the integrands are odd functions. The first few Fourier polynomials are:

f₁ = 2 sin(t)
f₂ = 2 sin(t) - sin(2t)
f₃ = 2 sin(t) - sin(2t) + (2/3) sin(3t)
f₄ = 2 sin(t) - sin(2t) + (2/3) sin(3t) - (1/2) sin(4t).

See Figure 9. ◄
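The coefficients found in Example 8 can be computed symbolically. A minimal sympy sketch:

```python
import sympy as sp

t = sp.symbols('t')
k = sp.symbols('k', integer=True, positive=True)

# b_k = (1/pi) * integral of t*sin(kt) over [-pi, pi]  (Example 8)
b_k = sp.simplify(sp.integrate(t * sp.sin(k * t) / sp.pi, (t, -sp.pi, sp.pi)))
print(b_k)
print([b_k.subs(k, n) for n in range(1, 5)])   # [2, -1, 2/3, -1/2]
```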
522 • Chap. 9 Linear Spaces
Sec. 9.3 Inner Product Spaces • 523
[Figure 9: the graphs of f(t) = t and of its Fourier approximations f₁, ..., f₄.]

How do the errors ‖f - fₙ‖ and ‖f - fₙ₊₁‖ of the nth and the (n+1)th Fourier approximations compare? We hope that fₙ₊₁ will be a better approximation than fₙ, or at least no worse:

‖f - fₙ₊₁‖ ≤ ‖f - fₙ‖.

This is indeed the case, by definition: since Tₙ is contained in Tₙ₊₁, and fₙ is a polynomial in Tₙ₊₁,

‖f - fₙ₊₁‖ ≤ ‖f - g‖

for all g in Tₙ₊₁, in particular for g = fₙ. In other words, as n goes to infinity, the error ‖f - fₙ‖ becomes smaller and smaller (or at least not larger). Using somewhat advanced calculus, one can show that this error approaches zero:

lim (n → ∞) ‖f - fₙ‖ = 0.

What does this tell us about lim (n → ∞) ‖fₙ‖? By the theorem of Pythagoras, we have

‖f‖² = ‖f - fₙ‖² + ‖fₙ‖².

As n goes to infinity, the first summand, ‖f - fₙ‖², approaches 0, so that

lim (n → ∞) ‖fₙ‖ = ‖f‖.

We have an expansion of fₙ in terms of an orthonormal basis:

fₙ = a₀ (1/√2) + b₁ sin(t) + c₁ cos(t) + ··· + bₙ sin(nt) + cₙ cos(nt),

where the bₖ, the cₖ, and a₀ are the Fourier coefficients. We can express ‖fₙ‖ in terms of these Fourier coefficients, using the theorem of Pythagoras:

‖fₙ‖² = a₀² + b₁² + c₁² + ··· + bₙ² + cₙ².

Combining the last two "boxed" equations, we get the following identity:

Fact 9.3.6

a₀² + b₁² + c₁² + ··· + bₙ² + cₙ² + ··· = ‖f‖²

The infinite series of the squares of the Fourier coefficients of a piecewise continuous function f converges to ‖f‖².

For the function f(t) = t studied in Example 8, this means that

4 + 4/4 + 4/9 + ··· + 4/n² + ··· = ‖f‖² = (1/π) ∫ from -π to π: t² dt = (2/3)π²,

or

Σ (n = 1 to ∞) 1/n² = 1 + 1/4 + 1/9 + 1/16 + ··· = π²/6,

an equation discovered by Euler.

Fact 9.3.6 has a physical interpretation when ‖f‖² represents energy. For example, if f(x) is the displacement of a vibrating string, then bₖ² + cₖ² represents the energy of the kth harmonic, and Fact 9.3.6 tells us that the total energy ‖f‖² is the sum of the energies of the harmonics.

There is an interesting application of Fourier analysis in quantum mechanics. In the 1920s quantum mechanics was presented in two quite distinct forms: Werner Heisenberg's matrix mechanics and Erwin Schrödinger's wave mechanics. Schrödinger (1887–1961) later showed that the two theories are mathematically equivalent: they use isomorphic inner product spaces. Heisenberg works with the space ℓ² introduced in Example 2, while Schrödinger works with a function space related to C[-π, π]. The isomorphism from Schrödinger's space to ℓ² is established by taking Fourier coefficients (see Exercise 13).
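Euler's identity above can be checked numerically from partial sums. A minimal sketch:

```python
import math

# Partial sum of Euler's series  1 + 1/4 + 1/9 + ... = pi^2 / 6.
# The tail beyond N terms is smaller than 1/N, so N = 100000 gives
# agreement to roughly 5 decimal places.
partial = sum(1.0 / n**2 for n in range(1, 100001))
print(partial, math.pi**2 / 6)
```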
EXERCISES

GOALS: Use the idea of an inner product, and apply the basic results derived earlier for the dot product in R^n to inner product spaces.

1. In C[a, b], define the product

⟨f, g⟩ = ∫ from a to b: f(t)g(t) dt.

Show that this product satisfies the property

⟨f, f⟩ > 0 for all nonzero f.
524 • Chap. 9 Linear Spaces
Sec. 9.3 Inner Product Spaces • 525
2. Does the equation

⟨f, g + h⟩ = ⟨f, g⟩ + ⟨f, h⟩

hold for all elements f, g, h of an inner product space? Explain.

3. Consider a matrix S in R^{n x n}. In R^n, define the product

⟨x̄, ȳ⟩ = (Sx̄)ᵀSȳ.

a. For which choices of S is this an inner product?
b. For which choices of S is ⟨x̄, ȳ⟩ = x̄ · ȳ (the dot product)?

4. In R^{m x n}, consider the inner product

⟨A, B⟩ = trace(AᵀB)

defined in Example 3.
a. Find a formula for this inner product in R^{m x 1} = R^m.
b. Find a formula for this inner product in R^{1 x n} (i.e., the space of row vectors with n components).

5. Is ⟨⟨A, B⟩⟩ = trace(ABᵀ) an inner product in R^{m x n}? (The notation ⟨⟨A, B⟩⟩ is chosen to distinguish this product from the one considered in Example 3 and Exercise 4.)

6. a. Consider an m x n matrix P and an n x m matrix Q. Show that

trace(PQ) = trace(QP).

b. Compare the two inner products in R^{m x n}:

⟨A, B⟩ = trace(AᵀB)   and   ⟨⟨A, B⟩⟩ = trace(ABᵀ)

(see Example 3 and Exercises 4 and 5).

7. Consider an inner product ⟨v, w⟩ in a space V, and a scalar k. For which choices of k is

⟨⟨v, w⟩⟩ = k⟨v, w⟩

an inner product?

8. Consider an inner product ⟨v, w⟩ in a space V. Let w be a fixed element of V. Is the following transformation linear?

T: V → R given by T(v) = ⟨v, w⟩.

What is its image? Give a geometric interpretation of its kernel.

9. Recall that a function f(t) from R to R is called even if f(-t) = f(t), for all t, and odd if f(-t) = -f(t), for all t.

10. True or false? If f is a continuous even function and g is a continuous odd function, then f and g are orthogonal in C[-1, 1]. Explain.

Consider the space P2 with inner product

⟨f, g⟩ = ∫ from -1 to 1: f(t)g(t) dt.

Find an orthonormal basis of the space of all functions in P2 orthogonal to f(t) = t.

11. The angle between two nonzero elements v and w of an inner product space is defined as

∠(v, w) = arccos( ⟨v, w⟩ / (‖v‖ ‖w‖) ).

In the space C[-π, π] with inner product

⟨f, g⟩ = (1/π) ∫ from -π to π: f(t)g(t) dt,

find the angle between f(t) = cos(t) and g(t) = cos(t + δ), where 0 < δ < π. Hint: Use the formula cos(t + δ) = cos(t)cos(δ) - sin(t)sin(δ).

12. Find all Fourier coefficients of the absolute value function

f(t) = |t|.

13. For a function f in C[-π, π] (with the inner product defined on page 518), consider the sequence of all its Fourier coefficients,

(a₀, b₁, c₁, b₂, c₂, ..., bₙ, cₙ, ...).

Is this infinite sequence in ℓ²? If so, what is the relationship between

‖f‖ (the norm taken in C[-π, π])

and the norm of the sequence of Fourier coefficients (the norm taken in ℓ²)? (The inner product space ℓ² was introduced in Example 2.)

14. Which of the following is an inner product in P2? Explain.
a. ⟨f, g⟩ = f(1)g(1) + f(2)g(2)
b. ⟨⟨f, g⟩⟩ = f(1)g(1) + f(2)g(2) + f(3)g(3)

15. For which choices of the constants a, b, c, and d is the following an inner product in R²?

⟨[x₁, x₂], [y₁, y₂]⟩ = ax₁y₁ + bx₁y₂ + cx₂y₁ + dx₂y₂

16. a. Find an orthonormal basis of the space P1 with inner product

⟨f, g⟩ = ∫ from 0 to 1: f(t)g(t) dt.

(continued)
526 • Chap. 9 Linear Spaces
Sec. 9.3 Inner Product Spaces • 527
Ch ap. 9 Li nea r Spaces b. Find the linear polynomi al g r) = a+br that be t approx.im ates the function I (t) = 1'2 in the intervaJ [0. 1] in the (conti nuous) least. quares en e. Draw a sketch. 17. Consi der a linear space V . For which linear transfo rrnations T : V~ 1
(v. w)
For three polynornials
an inner product in \1? 18. Consider an orthonormal ba is B of the inner p roduct space V. For an element I of V, what is the relationship between II !II and II [.f]e II (the norm in lR" defined by the dot product)? 19. For which 11 x 11 mauices A is an inner product in lR"? Hint: Show first that A must be symmetric. Then give your answer in terms of the definiteness of A .
20. Consider the inner product
⟨v, w⟩ = vᵀ [5 2; 2 1] w
in ℝ² (see Exercise 19).
a. Find all vectors in ℝ² perpendicular to [1; 1] (with respect to this inner product). Draw a sketch.
b. Sketch all vectors in ℝ² with ‖v‖ = 1 (with respect to this inner product).

21. Consider a positive definite quadratic form q(x) in ℝⁿ. Does the formula
⟨v, w⟩ = q(v + w) − q(v) − q(w)
define an inner product in ℝⁿ? How can you tell?

22. If f(t) is a continuous function, what is the relationship between
(∫₀¹ f(t) dt)²    and    ∫₀¹ (f(t))² dt?
Hint: Use the Cauchy–Schwarz inequality.

23. In the space P₁ of the polynomials of degree ≤ 1, we define the inner product
⟨f, g⟩ = ½(f(0)g(0) + f(1)g(1)).
Find an orthonormal basis for this inner product space.

24. Consider the linear space P of all polynomials, with inner product
⟨f, g⟩ = ∫₀¹ f(t)g(t) dt.
For three polynomials f, g, h we are given the following inner products:

⟨ , ⟩ | f  g  h
  f   | 4  0  8
  g   | 0  1  3
  h   | 8  3  50

For example, ⟨f, f⟩ = 4 and ⟨g, h⟩ = ⟨h, g⟩ = 3.
a. Find ⟨f, g + h⟩.
b. Find ‖g + h‖.
c. Find proj_E h, where E = span(f, g). Express your solution as a linear combination of f and g.
d. Find an orthonormal basis of span(f, g, h). Express the functions in your basis as linear combinations of f, g, and h.

25. Find the norm ‖x‖ of
x = (1, ½, ¼, . . . , (½)ⁿ, . . .)
in ℓ² (ℓ² is defined in Example 2).

26. Find the Fourier coefficients of the piecewise continuous function
f(t) = { −1 if t < 0
       {  1 if t ≥ 0.
Sketch the graphs of the first few Fourier polynomials.

27. Find the Fourier coefficients of the piecewise continuous function
f(t) = { 0 if t < 0
       { t if t ≥ 0.

28. Apply Fact 9.3.6 to your answer in Exercise 26.

29. Apply Fact 9.3.6 to your answer in Exercise 27.

30. Consider an ellipse E in ℝ² whose center is the origin. Show that there is an inner product ⟨·, ·⟩ in ℝ² such that E consists of all vectors x with ‖x‖ = 1, where the norm is taken with respect to the inner product ⟨·, ·⟩.
31. Gaussian integration: In an introductory calculus course you may have seen approximation formulas for integrals of the form
∫ₐᵇ f(t) dt ≈ Σᵢ₌₁ⁿ wᵢ f(aᵢ),
where the aᵢ are equally spaced points in the interval (a, b), and the wᵢ are certain "weights" (Riemann sums, trapezoidal sums, Simpson's rule). Gauss has shown that, with the same computational effort, we can get better approximations if we drop the requirement that the aᵢ be equally spaced. Below we discuss his approach.
Consider the space Pₙ with the inner product
⟨f, g⟩ = ∫₋₁¹ f(t)g(t) dt.
Let f₀, f₁, . . . , fₙ be an orthonormal basis of this space, with degree(fₖ) = k (to construct such a basis, apply Gram–Schmidt to the standard basis 1, t, . . . , tⁿ). It can be shown that fₙ has n distinct roots a₁, a₂, . . . , aₙ in the interval (−1, 1). We can find "weights" w₁, w₂, . . . , wₙ such that

(*)    ∫₋₁¹ f(t) dt = Σᵢ₌₁ⁿ wᵢ f(aᵢ)

for all polynomials f of degree less than n (see Exercise 9.2.33). In fact, much more is true: the formula (*) holds for all polynomials f(t) of degree less than 2n.
You are not asked to prove the assertion above for arbitrary n, but work out the case n = 2: find a₁, a₂ and w₁, w₂, and show that the formula
∫₋₁¹ f(t) dt = w₁ f(a₁) + w₂ f(a₂)
holds for all cubic polynomials.
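For n = 2 this can be carried out numerically. In the Python sketch below, the nodes ±1/√3 (the roots of the second Legendre polynomial) and the weights w₁ = w₂ = 1 are taken as given — they are the standard two-point Gauss rule, stated here without derivation — and we check exactness on a sample cubic:

```python
import math

# Two-point Gauss rule on [-1, 1]: nodes are the roots of the
# Legendre polynomial P2(t) = (3t^2 - 1)/2, i.e. t = +/- 1/sqrt(3);
# both weights equal 1.
a1, a2 = -1 / math.sqrt(3), 1 / math.sqrt(3)
w1, w2 = 1.0, 1.0

def gauss2(f):
    """Approximate the integral of f over [-1, 1] with the 2-point rule."""
    return w1 * f(a1) + w2 * f(a2)

# Check exactness on the cubic f(t) = t^3 + 2t^2 + 3t + 4,
# whose exact integral over [-1, 1] is 4/3 + 8 = 28/3.
f = lambda t: t**3 + 2 * t**2 + 3 * t + 4
print(abs(gauss2(f) - 28 / 3) < 1e-12)  # True
```

The odd-degree terms cancel at the two symmetric nodes, which is why this rule is exact through degree 3.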
LINEAR DIFFERENTIAL OPERATORS

In this final section, we will study an important class of linear transformations from C∞ to C∞. Here C∞ denotes the linear space of complex-valued smooth functions (from ℝ to ℂ), which we consider as a linear space over ℂ.

Definition 9.4.1    Linear differential operators
A transformation T: C∞ → C∞ of the form
T(f) = f^(n) + aₙ₋₁f^(n−1) + · · · + a₁f' + a₀f
is called an nth-order linear differential operator.¹,² Here f^(k) denotes the kth derivative of f, and the aₖ are complex scalars.

¹More precisely, this is a linear differential operator with constant coefficients. More advanced texts consider the case when the aₖ are functions.
²The term "operator" is often used for a transformation whose domain and codomain consist of functions.

Verify that a linear differential operator is indeed a linear transformation (exercise). Examples of linear differential operators are
D(f) = f',    T(f) = f'' − 5f' + 6f,    and    L(f) = f''' − 6f'' + 5f,
of first, second, and third order, respectively.
If T is an nth-order linear differential operator and g is a smooth function, then the equation
T(f) = g,    or    f^(n) + aₙ₋₁f^(n−1) + · · · + a₁f' + a₀f = g,
is called an nth-order linear differential equation (DE). The DE is called homogeneous if g = 0, and inhomogeneous otherwise. Examples of linear DE's are
f'' − f' − 6f = 0    (second order, homogeneous)
and
f'(t) − 5f(t) = sin(t)    (first order, inhomogeneous).
Note that solving a homogeneous DE T(f) = 0 amounts to finding the kernel of T. We will first think about the relationship between the solutions of the DE's T(f) = 0 and T(f) = g.
More generally, consider a linear transformation T from V to W, where V and W are arbitrary linear spaces. What is the relationship between the kernel of T and the solutions f of the equation T(f) = g, provided that this equation has solutions at all (compare with Exercise 1.3.48)? Here is a simple example:

EXAMPLE 1 ► Consider the linear transformation
T(x) = [1 2 3; 2 4 6] x
from ℝ³ to ℝ². Describe the relationship between the kernel of T and the solutions of the linear system
T(x) = [6; 12],
both algebraically and geometrically.

Solution
Using Gauss–Jordan elimination, we find that the kernel of T consists of all vectors of the form
[x₁; x₂; x₃] = x₂[−2; 1; 0] + x₃[−3; 0; 1].
The solution set of the system T(x) = [6; 12] consists of all vectors of the form
[x₁; x₂; x₃] = [6 − 2x₂ − 3x₃; x₂; x₃] = [6; 0; 0] + x₂[−2; 1; 0] + x₃[−3; 0; 1].
The kernel of T and the solution set of T(x) = [6; 12] form two parallel planes in ℝ³, as shown in Figure 1. ◄

[Figure 1: the kernel of T and the solutions of T(x) = [6; 12] are parallel planes; a particular solution of the system, and a vector in the kernel of T (translated), are marked.]

These observations generalize as follows:

Fact 9.4.2
Consider a linear transformation T from V to W, where V and W are arbitrary linear spaces. Suppose we have a basis f₁, f₂, . . . , fₙ of the kernel of T. Consider an equation T(f) = g with a particular solution fₚ. Then the solutions f of the equation T(f) = g are of the form
f = c₁f₁ + c₂f₂ + · · · + cₙfₙ + fₚ,
where the cᵢ are arbitrary constants.

Note that T(f) = T(c₁f₁ + · · · + cₙfₙ + fₚ) = 0 + g = g, so that f is indeed a solution. Verify that all solutions are of this form.
What is the significance of Fact 9.4.2 for linear differential equations? At the end of this section we will demonstrate the following fundamental result:

Fact 9.4.3
The kernel of an nth-order linear differential operator is n-dimensional.

Fact 9.4.2 now provides us with the following strategy for solving linear differential equations:

Fact 9.4.4
To solve an nth-order linear DE
T(f) = g,
we have to find
a. a basis f₁, . . . , fₙ of kernel(T), and
b. a particular solution fₚ of the DE.
Then the solutions f are of the form
f = c₁f₁ + · · · + cₙfₙ + fₚ,
where the cᵢ are arbitrary constants.

EXAMPLE 2 ► Find all solutions of the DE
f''(t) + f(t) = eᵗ.
We are told that fₚ(t) = ½eᵗ is a particular solution (verify this).
Solution
Consider the linear differential operator T(f) = f'' + f. A basis of the kernel of T is f₁(t) = cos(t) and f₂(t) = sin(t) (compare with Exercise 9.1.50). Therefore, the solutions f of the DE f'' + f = eᵗ are of the form
f(t) = c₁ cos(t) + c₂ sin(t) + ½eᵗ,
where c₁ and c₂ are arbitrary constants. ◄

We now present an approach that allows us to find solutions to homogeneous linear DE's more systematically.
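The solution family above can be sanity-checked numerically. The Python sketch below (picking c₁ = c₂ = 1, an arbitrary choice) approximates f'' by central differences and verifies that f'' + f = eᵗ at a few sample points:

```python
import math

# One member of the solution family, with c1 = c2 = 1:
f = lambda t: math.cos(t) + math.sin(t) + 0.5 * math.exp(t)

def second_derivative(f, t, h=1e-5):
    # Central-difference approximation of f''(t).
    return (f(t + h) - 2 * f(t) + f(t - h)) / h**2

# Residual of the DE f'' + f = e^t at several sample points.
ok = all(abs(second_derivative(f, t) + f(t) - math.exp(t)) < 1e-4
         for t in [-1.0, 0.0, 0.7, 2.0])
print(ok)  # True
```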
• The Eigenfunction Approach to Solving Linear DE's

Definition 9.4.5    Eigenfunctions
Consider a linear differential operator T from C∞ to C∞. A smooth function f is called an eigenfunction of T if T(f) = λf for some complex scalar λ; this scalar λ is called the eigenvalue associated with the eigenfunction f.

EXAMPLE 3 ► Find all eigenfunctions and eigenvalues of the operator D(f) = f'.

Solution
We have to solve the differential equation
D(f) = λf,    or    f' = λf.
We know that, for a given λ, the solutions are all exponential functions of the form f(t) = Ce^(λt). This means that all complex numbers are eigenvalues of D, and the eigenspace associated with the eigenvalue λ is one-dimensional, spanned by e^(λt). ◄

More generally, if T(f) = f^(n) + aₙ₋₁f^(n−1) + · · · + a₁f' + a₀f, then
T(e^(λt)) = (λⁿ + aₙ₋₁λⁿ⁻¹ + · · · + a₁λ + a₀)e^(λt).
This observation motivates the following definition:

Definition 9.4.6    Characteristic polynomial
Consider the linear differential operator
T(f) = f^(n) + aₙ₋₁f^(n−1) + · · · + a₁f' + a₀f.
The characteristic polynomial of T is defined as
p_T(λ) = λⁿ + aₙ₋₁λⁿ⁻¹ + · · · + a₁λ + a₀.

EXAMPLE 4 ► Find all exponential functions e^(λt) in the kernel of the linear differential operator T(f) = f'' + f' − 6f.

Solution
The characteristic polynomial is p_T(λ) = λ² + λ − 6 = (λ + 3)(λ − 2), with roots 2 and −3. Therefore, the functions e^(2t) and e^(−3t) are in the kernel of T. We can check this:
T(e^(2t)) = 4e^(2t) + 2e^(2t) − 6e^(2t) = 0    and    T(e^(−3t)) = 9e^(−3t) − 3e^(−3t) − 6e^(−3t) = 0. ◄

Since most polynomials of degree n have n distinct complex roots, we can find n distinct exponential functions e^(λ₁t), . . . , e^(λₙt) in the kernel of most nth-order linear differential operators. Note that these functions are linearly independent (they are eigenfunctions of D with distinct eigenvalues; the proof of Fact 6.3.5 applies). Now we can use Fact 9.4.3:

Fact 9.4.8
Consider an nth-order linear differential operator T whose characteristic polynomial p_T(λ) has n distinct roots λ₁, . . . , λₙ. Then the exponential functions
e^(λ₁t), . . . , e^(λₙt)
form a basis of the kernel of T, that is, a basis of the solution space of the homogeneous DE
T(f) = 0.

See Exercise 38 for the case of an nth-order linear differential operator whose characteristic polynomial has fewer than n distinct roots.

EXAMPLE 5 ► Find all solutions f of the differential equation
f'' + 2f' − 3f = 0.
Solution
The characteristic polynomial of the operator T(f) = f'' + 2f' − 3f is p_T(λ) = λ² + 2λ − 3 = (λ + 3)(λ − 1), with roots 1 and −3. The exponential functions eᵗ and e^(−3t) form a basis of the solution space; that is, the solutions are of the form
f(t) = c₁eᵗ + c₂e^(−3t). ◄
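The computation in Example 5 is easy to mirror in code. The Python sketch below finds the roots of the characteristic polynomial with the quadratic formula and then checks, by finite differences, that each e^(λt) solves the DE:

```python
import math

# Characteristic polynomial of T(f) = f'' + 2f' - 3f:
# p(lam) = lam^2 + 2*lam - 3.
a, b, c = 1.0, 2.0, -3.0
disc = math.sqrt(b * b - 4 * a * c)
roots = sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])
print(roots)  # [-3.0, 1.0]

def residual(lam, t, h=1e-5):
    # Evaluate f'' + 2f' - 3f for f(t) = e^(lam*t), numerically.
    f = lambda s: math.exp(lam * s)
    f2 = (f(t + h) - 2 * f(t) + f(t - h)) / h**2
    f1 = (f(t + h) - f(t - h)) / (2 * h)
    return f2 + 2 * f1 - 3 * f(t)

print(all(abs(residual(lam, t)) < 1e-3
          for lam in roots for t in [0.0, 0.5, 1.0]))  # True
```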
Fact 9.4.7
If T is a linear differential operator, then e^(λt) is an eigenfunction of T, with associated eigenvalue p_T(λ), for all λ:
T(e^(λt)) = p_T(λ)e^(λt).
In particular, if p_T(λ) = 0, then e^(λt) is in the kernel of T.

EXAMPLE 6 ► Find all solutions f of the differential equation
f'' − 6f' + 13f = 0.
Solution
The characteristic polynomial is p_T(λ) = λ² − 6λ + 13, with complex roots 3 ± 2i. The exponential functions
e^((3+2i)t) = e^(3t)(cos(2t) + i sin(2t))    and    e^((3−2i)t) = e^(3t)(cos(2t) − i sin(2t))
form a basis of the solution space. We may wish to find a basis of the solution space consisting of real-valued functions. The following observation is helpful: if f(t) = g(t) + ih(t) is a solution of the DE T(f) = 0, then Tf = Tg + iTh = 0, so that g and h are solutions as well. We can apply this remark to the real and the imaginary parts of the solution e^((3+2i)t): the functions
e^(3t) cos(2t)    and    e^(3t) sin(2t)
are a basis of the solution space (they are clearly linearly independent), and the general solution is
f(t) = e^(3t)(c₁ cos(2t) + c₂ sin(2t)). ◄

Fact 9.4.9
Consider a differential equation
f'' + af' + bf = 0,
where the coefficients a and b are real. Suppose the zeros of p_T(λ) are p ± iq, with q ≠ 0. Then the solutions of the given DE are
f(t) = e^(pt)(c₁ cos(qt) + c₂ sin(qt)),
where c₁ and c₂ are arbitrary constants.
The special case when a = 0 and b > 0 is important in many applications. Then p = 0 and q = √b, so that the solutions of the DE
f'' + bf = 0
are
f(t) = c₁ cos(√b t) + c₂ sin(√b t).

Note that the function
f(t) = e^(pt)(c₁ cos(qt) + c₂ sin(qt))
is the product of an exponential and a sinusoidal function. The case when p is negative comes up frequently in physics, when we model a damped oscillator. See Figure 2.

What about nonhomogeneous differential equations? Let us discuss an example that is particularly important in applications.

EXAMPLE 7 ► Consider the differential equation
f''(t) + f'(t) − 6f(t) = 8 cos(2t).
a. Let V be the linear space consisting of all functions of the form c₁ cos(2t) + c₂ sin(2t). Show that the linear differential operator T(f) = f'' + f' − 6f defines an isomorphism from V to V.
b. Part a implies that the DE T(f) = 8 cos(2t) has a unique particular solution fₚ(t) in V. Find this solution.
c. Find all solutions of the DE T(f) = 8 cos(2t).

Solution
a. Consider the matrix A of T with respect to the basis cos(2t), sin(2t). A straightforward computation shows that
A = [−10 2; −2 −10],
a rotation-dilation matrix. Since A is invertible, T defines an isomorphism from V to V.
b. If we work in coordinates with respect to the basis cos(2t), sin(2t), the DE T(f) = 8 cos(2t) takes the form Ax = [8; 0], with the solution
x = A⁻¹[8; 0] = (1/104)[−10 −2; 2 −10][8; 0] = [−10/13; 2/13].
The particular solution in V is
fₚ(t) = −(10/13) cos(2t) + (2/13) sin(2t).
A more straightforward way to find fₚ(t) is to set fₚ(t) = P cos(2t) + Q sin(2t) and substitute this trial solution into the DE to determine P and Q.
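The 2×2 solve in part b can be reproduced directly; a Python sketch (using the matrix A = [[−10, 2], [−2, −10]] of T on span{cos 2t, sin 2t}, and the explicit 2×2 inverse):

```python
# Matrix of T(f) = f'' + f' - 6f with respect to the basis
# cos(2t), sin(2t):  T(cos 2t) = -10 cos 2t - 2 sin 2t,
#                    T(sin 2t) =   2 cos 2t - 10 sin 2t.
A = [[-10.0, 2.0],
     [-2.0, -10.0]]

# Solve A x = [8, 0] via the 2x2 inverse A^{-1} = adj(A)/det(A).
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # 104
x1 = (A[1][1] * 8 - A[0][1] * 0) / det        # -10/13
x2 = (-A[1][0] * 8 + A[0][0] * 0) / det       #   2/13

print(det)                          # 104.0
print(abs(x1 + 10 / 13) < 1e-12)    # True
print(abs(x2 - 2 / 13) < 1e-12)     # True
```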
c. In Example 4 we have seen that the functions f₁(t) = e^(2t) and f₂(t) = e^(−3t) form a basis of the kernel of T. By Fact 9.4.4, the solutions of the DE are of the form
f(t) = c₁f₁(t) + c₂f₂(t) + fₚ(t) = c₁e^(2t) + c₂e^(−3t) − (10/13) cos(2t) + (2/13) sin(2t). ◄

Let us summarize the methods developed in Example 7:

Fact 9.4.10
Consider the linear differential equation
f''(t) + af'(t) + bf(t) = C cos(ωt),
where a, b, C, and ω are real numbers. Suppose that a ≠ 0 or b ≠ ω². This DE has a particular solution of the form
fₚ(t) = P cos(ωt) + Q sin(ωt).
Now use Facts 9.4.4 and 9.4.8 to find all solutions f of the DE. What goes wrong when a = 0 and b = ω²?

• The Operator Approach to Solving Linear DE's

We will now present an alternative, deeper approach to DE's, which allows us to solve any linear DE (at least if we can find the zeros of the characteristic polynomial). This approach will lead us to a better understanding of the kernel and image of a linear differential operator; in particular, it will enable us to prove Fact 9.4.3.
Let us first introduce a more succinct notation for linear differential operators. Recall the notation Df = f' for the derivative operator. We let
Dᵐ = D ∘ D ∘ · · · ∘ D    (m times).
Then the operator
T(f) = f^(n) + aₙ₋₁f^(n−1) + · · · + a₁f' + a₀f
can be written more succinctly as
T = Dⁿ + aₙ₋₁Dⁿ⁻¹ + · · · + a₁D + a₀,
the characteristic polynomial p_T(λ) "evaluated at D". For example, the operator
T(f) = f'' + f' − 6f = (D² + D − 6)f
can be written as T = D² + D − 6. Treating T formally as a polynomial in D, we can write
T = (D + 3) ∘ (D − 2).
We can verify that this formula gives us a decomposition of the operator T:
((D + 3) ∘ (D − 2))f = (D + 3)(f' − 2f) = f'' − 2f' + 3f' − 6f = f'' + f' − 6f.
This works because D is linear: we have D(f' − 2f) = f'' − 2f'.
The fundamental theorem of algebra (Fact 6.4.2) now tells us the following:

Fact 9.4.11
An nth-order linear differential operator T can be expressed as the composite of n first-order linear differential operators:
T = Dⁿ + aₙ₋₁Dⁿ⁻¹ + · · · + a₁D + a₀ = (D − λ₁)(D − λ₂) · · · (D − λₙ),
where the λᵢ are complex numbers.

We can therefore hope to understand all linear differential operators by studying first-order operators.

EXAMPLE 8 ► Find the kernel of the operator T = D − a, where a is a complex number. Do not use Fact 9.4.3.

Solution
We have to solve the homogeneous differential equation T(f) = 0, or f'(t) − af(t) = 0, or f'(t) = af(t). By definition of an exponential function, the solutions are the functions of the form f(t) = Ce^(at), where C is an arbitrary constant. (See Definition 8.2.1.) ◄

Fact 9.4.12
The kernel of the operator T = D − a is one-dimensional, spanned by f(t) = e^(at).
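The factorization T = (D + 3) ∘ (D − 2) of the operator f ↦ f'' + f' − 6f, discussed above, can also be checked in code. In the Python sketch below, operators are represented as functions mapping functions to functions, with D a central-difference derivative; because the difference operator is linear, the composite agrees with D² + D − 6 up to roundoff:

```python
import math

h = 1e-5

def D(f):
    # Numerical derivative operator (central difference).
    return lambda t: (f(t + h) - f(t - h)) / (2 * h)

def plus(f, c):
    # The operator (D + c) applied to f:  f -> f' + c*f.
    return lambda t: D(f)(t) + c * f(t)

f = lambda t: math.sin(t) + t**2   # an arbitrary smooth test function

# Left side: ((D + 3) o (D - 2)) f
lhs = plus(plus(f, -2.0), 3.0)
# Right side: (D^2 + D - 6) f = f'' + f' - 6f
rhs = lambda t: D(D(f))(t) + D(f)(t) - 6 * f(t)

print(all(abs(lhs(t) - rhs(t)) < 1e-4 for t in [-1.0, 0.0, 1.5]))  # True
```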
Next we think about the nonhomogeneous equation
(D − a)f = g,    or    f'(t) − af(t) = g(t),
where g(t) is a smooth function. It will turn out to be useful to multiply both sides of this equation with the function e^(−at):
e^(−at)f'(t) − ae^(−at)f(t) = e^(−at)g(t).
We recognize the left-hand side of this equation as the derivative of the function e^(−at)f(t), so that we can write
(e^(−at)f(t))' = e^(−at)g(t).
Now we can integrate:
e^(−at)f(t) = ∫ e^(−at)g(t) dt,    and    f(t) = e^(at) ∫ e^(−at)g(t) dt,
where ∫ e^(−at)g(t) dt denotes the indefinite integral, that is, the family of all antiderivatives of the function e^(−at)g(t), involving a parameter C.

Fact 9.4.13
Consider the differential equation
f'(t) − af(t) = g(t),
where g(t) is a smooth function and a a constant. Then
f(t) = e^(at) ∫ e^(−at)g(t) dt.

EXAMPLE 9 ► Find the solutions f of the DE
f' − af = ce^(at),
where c is an arbitrary constant.

Solution
Using Fact 9.4.13 we find that
f(t) = e^(at) ∫ e^(−at)ce^(at) dt = e^(at) ∫ c dt = e^(at)(ct + C),
where C is another arbitrary constant. ◄

Fact 9.4.13 shows that the differential equation (D − a)f = g has solutions f for any smooth function g; this means that im(D − a) = C∞.
Now consider an nth-order DE T(f) = g, where
T = Dⁿ + aₙ₋₁Dⁿ⁻¹ + · · · + a₁D + a₀ = (D − λ₁)(D − λ₂) · · · (D − λₙ₋₁)(D − λₙ).
We can break this DE down into n first-order DE's:
f → (D − λₙ) → fₙ₋₁ → (D − λₙ₋₁) → fₙ₋₂ → · · · → f₁ → (D − λ₁) → g.
We can successively solve the first-order DE's
(D − λ₁)f₁ = g
(D − λ₂)f₂ = f₁
· · ·
(D − λₙ₋₁)fₙ₋₁ = fₙ₋₂
(D − λₙ)f = fₙ₋₁.
In particular, the DE T(f) = g does have solutions f.

Fact 9.4.14
The image of any linear differential operator (from C∞ to C∞) is C∞; that is, any linear DE T(f) = g has solutions f.

EXAMPLE 10 ► Find all solutions of the DE
T(f) = f'' − 2f' + f = 0.
Note that p_T(λ) = λ² − 2λ + 1 = (λ − 1)² has only one root, 1, so that we cannot use Fact 9.4.8.

Solution
We break the DE down into two first-order DE's, as discussed above:
f → (D − 1) → f₁ → (D − 1) → 0.
The DE (D − 1)f₁ = 0 has the general solution f₁(t) = c₁eᵗ, where c₁ is an arbitrary constant. Then the DE (D − 1)f = f₁ = c₁eᵗ has the general solution f(t) = eᵗ(c₁t + c₂), where c₂ is another arbitrary constant (see Example 9). The functions eᵗ and teᵗ form a basis of the solution space (i.e., of the kernel of T). Note that the kernel is two-dimensional, since we pick up an arbitrary constant each time we solve a first-order DE. ◄

Now we can explain why the kernel of an nth-order linear differential operator T is n-dimensional. Roughly speaking, this is true because the general solution of the DE T(f) = 0 contains n arbitrary constants (we pick up one each time we solve a first-order linear DE).
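The basis eᵗ, teᵗ found in Example 10 can be sanity-checked numerically; a Python sketch using central differences:

```python
import math

# Candidate kernel basis for T(f) = f'' - 2f' + f (double root 1):
basis = [lambda t: math.exp(t), lambda t: t * math.exp(t)]

def residual(f, t, h=1e-5):
    # Evaluate f'' - 2f' + f numerically.
    f2 = (f(t + h) - 2 * f(t) + f(t - h)) / h**2
    f1 = (f(t + h) - f(t - h)) / (2 * h)
    return f2 - 2 * f1 + f(t)

print(all(abs(residual(f, t)) < 1e-4
          for f in basis for t in [0.0, 0.5, 1.0]))  # True
```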
Here is a more formal proof. We will argue by induction on n. Fact 9.4.12 takes care of the case n = 1. We can write an nth-order linear differential operator as T = L ∘ (D − λₙ), where L is of order n − 1.
Arguing by induction, we can assume that the kernel of L is (n − 1)-dimensional, with basis h₁, h₂, . . . , hₙ₋₁. Then the solutions f of the DE
(D − λₙ)f = c₁h₁ + · · · + cₙ₋₁hₙ₋₁
are
f(t) = e^(λₙt) ∫ e^(−λₙt)(c₁h₁(t) + · · · + cₙ₋₁hₙ₋₁(t)) dt.
Let Hᵢ(t) be an antiderivative of e^(−λₙt)hᵢ(t), for i = 1, . . . , n − 1. Let Fᵢ(t) = e^(λₙt)Hᵢ(t); note that (D − λₙ)Fᵢ = hᵢ by construction. Then the solutions f(t) above can be written as
f(t) = e^(λₙt)(c₁H₁(t) + · · · + cₙ₋₁Hₙ₋₁(t) + C) = c₁F₁(t) + · · · + cₙ₋₁Fₙ₋₁(t) + Ce^(λₙt),
where C is an arbitrary constant. This shows that ker(T) is spanned by the n functions F₁(t), . . . , Fₙ₋₁(t), e^(λₙt). We claim that these functions are linearly independent. Consider a relation
c₁F₁(t) + · · · + cₙ₋₁Fₙ₋₁(t) + Ce^(λₙt) = 0.
Applying the operator D − λₙ to both sides, we find that
c₁h₁(t) + · · · + cₙ₋₁hₙ₋₁(t) = 0.
We conclude that the cᵢ must be zero, since the functions hᵢ(t) are linearly independent; then C = 0 as well. We have shown that the functions F₁(t), . . . , Fₙ₋₁(t), e^(λₙt) form a basis of ker(T), so that ker(T) is n-dimensional, as claimed.
EXERCISES
GOAL Solve linear differential equations.

Find all real solutions of the differential equations in Exercises 1 to 22.
1. f'(t) − 5f(t) = 0.
2. dx/dt + 3x = 7.
3. f'(t) + 2f(t) = e^(3t).
4. dx/dt − 2x = cos(3t).
5. f'(t) − f(t) = t.
6. f'(t) − 2f(t) = e^(2t).
7. f''(t) + f'(t) − 12f(t) = 0.
8. d²x/dt² + 3 dx/dt − 10x = 0.
9. f''(t) − 9f(t) = 0.
10. f''(t) + f(t) = 0.
11. d²x/dt² + 2 dx/dt = 0.
12. f''(t) − 4f'(t) = 0.
13. f''(t) − 4f'(t) + 13f(t) = 0.
14. f''(t) + 3f'(t) = 0.
15. f''(t) = 0.
16. f''(t) + 4f'(t) + 13f(t) = 0.
17. f''(t) + 2f'(t) + f(t) = sin(t).
18. f''(t) + 3f'(t) + 2f(t) = cos(t).
19. d²x/dt² + 2x = cos(t).
20. f'''(t) − 3f''(t) + 2f'(t) = 0.
21. f'''(t) + 2f''(t) − f'(t) − 2f(t) = 0.
22. f'''(t) − f''(t) − 4f'(t) + 4f(t) = 0.

Solve the initial value problems in Exercises 23 to 29.
23. f'(t) − 5f(t) = 0, f(0) = 3.
24. dx/dt + 3x = 7, x(0) = 0.
25. f'(t) + 2f(t) = 0, f(1) = 1.
26. f''(t) − 9f(t) = 0, f(0) = 0, f'(0) = 1.
27. f''(t) + 9f(t) = 0, f(0) = 0, f(π/2) = 1.
28. f''(t) + f'(t) − 12f(t) = 0, f(0) = f'(0) = 0.
29. f''(t) + 4f(t) = sin(t), f(0) = f'(0) = 0.

30. The temperature of a hot cup of coffee can be modeled by the DE
T'(t) = k(T(t) − A).
a. What is the significance of the constants k and A?
b. Solve the DE for T(t), in terms of k, A, and the initial temperature T₀. Hint: There is a constant particular solution.

31. The speed v(t) of a falling object can sometimes be modeled by
m dv/dt = mg − kv,    or    dv/dt + (k/m)v = g,
where m is the mass of the body, g the gravitational acceleration, and k a constant related to the air resistance. Solve this DE when v(0) = 0. Describe the long-term behavior of v(t). Sketch a graph.

32. Consider the balance B(t) of a bank account, with initial balance B(0) = B₀. We are withdrawing money at a continuous rate r (in DM/year). The interest rate is k (%/year), compounded continuously. Set up a differential equation for B(t) and solve it in terms of B₀, r, and k. What will happen in the long run? Describe all possible scenarios. Sketch a graph for B(t) in each case.
33. Consider a pendulum of length L. Let x(t) be the angle the pendulum makes with the vertical (measured in radians). For small angles, the motion is well approximated by the DE
d²x/dt² = −(g/L)x
(where g is the acceleration due to gravity, g ≈ 9.81 m/sec²). How long does the pendulum have to be so that it swings from one extreme position to the other in exactly one second? Note: x(t) is negative when the pendulum is on the left.

[Figure: the two extreme positions of the pendulum.]

Historical note: The result of this exercise was considered as a possible definition of the meter. The French committee reforming the measures in the 1790's finally adopted another definition: a meter is the 10,000,000th part of the distance from the North Pole to the Equator, measured along the meridian through Paris.

34. Consider a wooden block in the shape of a cube, with edge 10 cm. The density of the wood is 0.8 g/cm³. The block is submersed in water; a guiding mechanism guarantees that the top and the bottom surfaces of the block are parallel to the surface of the water at all times. Let x(t) be the depth of the block in the water at time t. Assume that x is between 0 and 10 at all times.
a. Two forces are acting on the block: its weight and the buoyancy (the weight of the displaced water). Recall that the density of water is 1 g/cm³. Find formulas for these two forces.
b. Set up a differential equation for x(t). Find the solution, assuming that the block is initially completely submersed (x(0) = 10) and at rest.
c. How does the period of the oscillation change if you change the dimensions of the block (consider a larger or smaller cube)? What if the wood has a different density, or if the initial state is different? What if you conduct the experiment on the moon?

35. The displacement x(t) of a certain oscillator can be modeled by the DE
d²x/dt² + 3 dx/dt + 2x = 0.
a. Find all solutions of this DE.
b. Find the solution with initial values x(0) = 1, x'(0) = 0. Graph the solution.
c. Find the solution with initial values x(0) = 1, x'(0) = −3. Graph the solution.
d. Describe the qualitative difference of the solutions in parts b and c, in terms of the motion of the oscillator. How many times will the oscillator go through the equilibrium state x = 0 in each case?

36. The displacement x(t) of a certain oscillator can be modeled by the DE
d²x/dt² + 2 dx/dt + 101x = 0.
Find all solutions of this DE, and graph a typical solution. How many times will the oscillator go through the equilibrium state x = 0?

37. The displacement x(t) of a certain oscillator can be modeled by the DE
d²x/dt² + 6 dx/dt + 9x = 0.
Find the solution x(t) for the initial values x(0) = 0, x'(0) = 1. Sketch the graph of the solution. How many times will the oscillator go through the equilibrium state x = 0 in this case?

38. a. If p(t) is a polynomial and λ a scalar, show that
(D − λ)(p(t)e^(λt)) = p'(t)e^(λt).
b. If p(t) is a polynomial of degree less than m, what is (D − λ)ᵐ(p(t)e^(λt))?
c. Find a basis of the kernel of the linear differential operator (D − λ)ᵐ.
d. If λ₁, . . . , λᵣ are distinct scalars and m₁, . . . , mᵣ are positive integers, find a basis of the kernel of the linear differential operator
(D − λ₁)^(m₁) · · · (D − λᵣ)^(mᵣ).
39. Find all solutions of the linear DE
f'''(t) + 3f''(t) + 3f'(t) + f(t) = 0.
(Hint: Use Exercise 38.)
40. Find all olution of the linear DE d 3x

dr
(Hint: U e Exercise 38.)
d2x
dx
dr
dt
+ 2 
 x = O.
41. If T is an nth-order linear differential operator and λ is an arbitrary scalar, is λ necessarily an eigenvalue of T? If so, what is the dimension of the eigenspace associated with λ?

42. Let C∞ be the space of all real-valued smooth functions.
a. Consider the linear differential operator T = D² from C∞ to C∞. Find all (real) eigenvalues of T. For each eigenvalue, find a basis of the associated eigenspace.
b. Let P be the subspace of C∞ consisting of all periodic functions f(t) with period one (that is, f(t + 1) = f(t), for all t). Consider the linear differential operator L = D² from P to P. Find all (real) eigenvalues and eigenfunctions of L.

43. The displacement of a certain forced oscillator can be modeled by the DE
d²x/dt² + 5 dx/dt + 6x = cos(t).
a. Find all solutions of this DE.
b. Describe the long-term behavior of this oscillator.

44. The displacement of a certain forced oscillator can be modeled by the DE
d²x/dt² + 4 dx/dt + 5x = cos(3t).
a. Find all solutions of this DE.
b. Describe the long-term behavior of this oscillator.
45. Use Fact 9.4.13 to solve the initial value problem
dx/dt = [1 2; 0 1] x    with    x(0) = [−1; 1].
Hint: Find first x₂(t) and then x₁(t).
46. Use Fact 9.4.13 to olve the initial value problern
~~ ~ [ g
with
X(0)
~
IJ.
1
ul
Hint: Findfirst x 3 (t), then x 2 (r), and then x 1 (t). 47. Consider the initial value problem
dx
 = Ai dt
with
x(O) =
io ,
where A is an upper triangular n x n matrix with m distinct diagonal entries AJ. . .. , ),m· See the examp les in Exercises 45 and 46. a. Show that this problem has a unique sol ution x(t), whose cornponents x;(t) are of the form x;(t) = PJ(t)e>" r + · ·+ p111 (c)e>.",r,
for some polynomials PJ (t ). Hint: Findfirs t x"(t ), then x" _ 1 (t ) and so on.
b. ~h~w that. the zero state is a table eq uilibriurn soluti on of thi system if an
only If) the real pmt of all the ). 1. is neoati ve
48. Consicler an n x n . h m d1.stmct . • • "' e1genvalue ;,_ 1 ma t n· x A Wit a . S how that the inüial value problern
dx
dl
=Ai
with
). ' • .. '
x(O) = ;
m·
0
ha a unique solution ,r(t).
b. Show that the zero tate i a table equilibrium so lution of the ystern
dx
dt= A .i
if and only if the real part of all the A· is negative. Hint: Exercise 7.3 .37 ' and Exercise 47 above are helpful.
a. Find all solution of thi OE. b. Oescri be the longterrn beha ior of this o cillator. 44. The di placement of a certai n forced oscil lator can be modeled by the OE
 . , + 4  + 5x
App. A Vectors •
VECTORS

Here we will provide a concise summary of basic facts on vectors. In Section 1.2, vectors are defined as matrices with only one column:
v = [v₁; v₂; . . . ; vₙ].
The scalars vᵢ are called the components of the vector.¹ The set of all vectors with n components is denoted by ℝⁿ. You may be accustomed to a different notation for vectors. Writing the components in a column is the most convenient notation for linear algebra.

¹In vector and matrix algebra, the term "scalar" is synonymous with (real) number.

+ Vector Algebra

Definition A.1    Vector addition
a. The sum of two vectors v and w in ℝⁿ is defined "componentwise":
v + w = [v₁; v₂; . . . ; vₙ] + [w₁; w₂; . . . ; wₙ] = [v₁ + w₁; v₂ + w₂; . . . ; vₙ + wₙ].
b. Scalar multiplication
The product of a scalar k and a vector v is defined componentwise as well:
kv = k[v₁; v₂; . . . ; vₙ] = [kv₁; kv₂; . . . ; kvₙ].

The negative or opposite of a vector v in ℝⁿ is defined as
−v = (−1)v.
The difference v − w of two vectors v and w in ℝⁿ is defined componentwise. Alternatively, we can express the difference of two vectors as
v − w = v + (−w).
The vector in ℝⁿ that consists of n zeros is called the zero vector in ℝⁿ:
0 = [0; 0; . . . ; 0].

Fact A.2    Rules of vector algebra
The following formulas hold for all vectors u, v, w in ℝⁿ and for all scalars c and k:
1. (u + v) + w = u + (v + w)    (addition is associative)
2. v + w = w + v    (addition is commutative)
3. v + 0 = v
4. For each v in ℝⁿ there is a unique x in ℝⁿ such that v + x = 0, namely, x = −v.
5. k(v + w) = kv + kw
6. (c + k)v = cv + kv
7. c(kv) = (ck)v
8. 1v = v

These rules follow from the corresponding rules for scalars (commutativity, associativity, distributivity).

[Figure 3: The components of a vector in standard representation are the coordinates of its endpoint.]
+ Geometrical Representation of Vectors

The standard representation of a vector
x = [x₁; x₂]
in the Cartesian coordinate plane is as an arrow (a directed line segment) connecting the origin to the point (x₁, x₂), as shown in Figure 1. Occasionally it is helpful to translate (or shift) the vector in the plane (preserving its direction and length), so that it will connect some point (a₁, a₂) to the point (a₁ + x₁, a₂ + x₂). See Figure 2. In this text, we consider the standard representation of vectors, unless we explicitly state that the vector has been translated.
A vector in ℝ² (in standard representation) is uniquely determined by its endpoint. Conversely, with each point in the plane we can associate its position vector, which connects the origin to the given point. See Figure 3.
We need not clearly distinguish between a vector and its endpoint; we can identify them as long as we consistently use the standard representation of vectors. For example, we will talk about "the vectors on a line L" when we really mean the vectors whose endpoints are on the line L (in standard representation). Likewise, we can talk about "the vectors in a region R" in the plane. See Figure 4.
Adding vectors in ℝ² can be represented by means of a parallelogram, as shown in Figure 5. If k is a positive scalar, then kv is obtained by stretching the vector v by a factor of k, leaving its direction unchanged. If k is negative, then the direction is reversed. See Figure 6.

[Figure 1; Figure 2; Figure 4: (a) x is a vector on the line L, (b) x is a vector in the region R; Figure 5: vector addition by parallelogram; Figure 6: scalar multiples of a vector.]
Definition A.3
We say that two vectors v̄ and w̄ in ℝⁿ are parallel if one of them is a scalar multiple of the other.

EXAMPLE 3 ► The vectors … and … are parallel, since …

EXAMPLE 4 ► The vectors … and … are parallel, since …

Let us briefly recall Cartesian coordinates in space: If we choose an origin O and three mutually perpendicular coordinate axes through O, we can describe any point in space by a triple of numbers, (x₁, x₂, x₃). See Figure 7. The standard representation of the vector
x̄ = [x₁; x₂; x₃]
is the arrow connecting the origin to the point (x₁, x₂, x₃), as shown in Figure 8.

Figure 7
Figure 8

EXAMPLE 6 ► …

+ Dot Product, Length, Orthogonality

Definition A.4
Consider two vectors v̄ and w̄ with components v₁, v₂, …, vₙ and w₁, w₂, …, wₙ, respectively. Here v̄ and w̄ may be column or row vectors, and they need not be of the same type (these conventions are convenient in linear algebra). The dot product of v̄ and w̄ is defined as
v̄ · w̄ = v₁w₁ + v₂w₂ + ··· + vₙwₙ.
Note that the dot product of two vectors is a scalar.

EXAMPLE 5 ► [1; 2; 3] · [3; −1; −1] = 1 · 3 + 2 · (−1) + 1 · (−1) = 0
Fact A.5  Rules for dot products
The following equations hold for all column or row vectors ū, v̄, w̄ with n components, and for all scalars k:
a. v̄ · w̄ = w̄ · v̄
b. (ū + v̄) · w̄ = ū · w̄ + v̄ · w̄
c. (kv̄) · w̄ = k(v̄ · w̄)
d. v̄ · v̄ > 0 for all nonzero v̄

The verification of these rules is straightforward. Let us justify rule d: Since v̄ is nonzero, at least one of the components vᵢ is nonzero, so that vᵢ² is positive. Then
v̄ · v̄ = v₁² + v₂² + ··· + vᵢ² + ··· + vₙ²
is positive as well.
Let us think about the length of a vector. The length of a vector
x̄ = [x₁; x₂]
in ℝ² is √(x₁² + x₂²), by the Pythagorean theorem. See Figure 9. This length is often denoted by ‖x̄‖. Note that we have
x̄ · x̄ = [x₁; x₂] · [x₁; x₂] = x₁² + x₂² = ‖x̄‖²;
therefore,
‖x̄‖ = √(x̄ · x̄).
Verify that this formula holds for vectors in ℝ³ as well. We can use this formula to define the length of a vector in ℝⁿ:

Figure 9

Definition A.6
The length (or norm) ‖x̄‖ of a vector x̄ in ℝⁿ is
‖x̄‖ = √(x̄ · x̄) = √(x₁² + x₂² + ··· + xₙ²).

EXAMPLE 7 ► Find ‖x̄‖ for x̄ = …
Solution
‖x̄‖ = √(49 + 1 + 49 + 1) = 10

Definition A.7
A vector ū in ℝⁿ is called a unit vector if ‖ū‖ = 1; that is, the length of the vector ū is 1.

Consider two perpendicular vectors x̄ and ȳ in ℝ², as shown in Figure 10. By the theorem of Pythagoras,
‖x̄ + ȳ‖² = ‖x̄‖² + ‖ȳ‖²
or
(x̄ + ȳ) · (x̄ + ȳ) = x̄ · x̄ + ȳ · ȳ.
By Fact A.5,
x̄ · x̄ + 2(x̄ · ȳ) + ȳ · ȳ = x̄ · x̄ + ȳ · ȳ;
therefore,
x̄ · ȳ = 0.
You can read these equations backward to show that x̄ · ȳ = 0 if and only if x̄ and ȳ are perpendicular. This reasoning applies to vectors in ℝ³ as well. We can use this characterization to define perpendicular vectors in ℝⁿ:

Figure 10

Definition A.8
Two vectors v̄ and w̄ in ℝⁿ are called perpendicular (or orthogonal) if v̄ · w̄ = 0.
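The definitions of length and orthogonality nearby translate directly into code. A minimal Python sketch (illustrative, not part of the text; the vector in the norm check is an assumed reading of the squares 49, 1, 49, 1 in Example 7):

```python
from math import isclose, sqrt

def norm(x):
    """Length of a vector (Definition A.6): square root of the dot product with itself."""
    return sqrt(sum(xi * xi for xi in x))

def is_perpendicular(v, w, tol=1e-12):
    """Two vectors are perpendicular iff their dot product is zero (Definition A.8)."""
    return isclose(sum(vi * wi for vi, wi in zip(v, w)), 0.0, abs_tol=tol)

print(norm([7, 1, 7, 1]))                 # sqrt(49 + 1 + 49 + 1) = 10.0
print(is_perpendicular([1, 2], [-2, 1]))  # True
```

The tolerance parameter guards against floating-point round-off when the exact dot product would be zero.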
+ Cross Product
Here we present the cross product for vectors in ℝ³ only; for a generalization to ℝⁿ, see Exercises 5.2.30 and 5.3.17.
In Chapter 5 we discuss the cross product in the context of linear algebra.

Definition A.9
The cross product v̄ × w̄ of two vectors v̄ and w̄ in ℝ³ is …
Unlike the dot product, the cross product v̄ × w̄ is a vector in ℝ³.

EXAMPLE 8 ► …

Fact A.10  Geometric interpretation of the cross product
Let v̄ × w̄ be the cross product of two vectors v̄ and w̄ in ℝ³. Then
a. v̄ × w̄ is orthogonal to both v̄ and w̄.
b. The length of the vector v̄ × w̄ is numerically equal to the area of the parallelogram defined by v̄ and w̄ (see Figure 11a).
c. If v̄ and w̄ are not parallel, then the vectors v̄, w̄, v̄ × w̄ form a right-handed system (see Figure 11b).

Figure 11 (a) ‖v̄ × w̄‖ is numerically equal to the shaded area. (b) A right-handed system.

ANSWERS TO ODD-NUMBERED EXERCISES

CHAPTER 1
1.1
1. (x, y) = (−1, 1)
3. No solutions
5. (x, y) = …
7. No solutions
9. (x, y, z) = (1, 1 − 2t, t), where t is arbitrary
11. (x, y) = (4, 1)
13. No solutions
15. (x, y, z) = (0, 0, 0)
17. (x, y) = (0, 0)
19. a. Products are competing. b. P₁ = 26, P₂ = 46
21. a = 400, b = 300
23. a. (x, y) = (t, 2t); b. (x, y) = (t, −3t); c. (x, y) = (0, 0)
25. a. If k ≠ 7 b. If k = 7, there are infinitely many solutions. c. If k = 7, the solutions are (x, y, z) = (1 − t, 2t − 3, t).
27. 7 children (3 boys and 4 girls)
29. f(t) = 1 − 5t + 3t²
31. If a − 2b + c = 0
33. a. The intercepts of the line x + y = 1 are (1, 0) and (0, 1). The intercepts of the line x + (t/2)y = t are (t, 0) and (0, 2). The lines intersect if t ≠ 2. b. …
35. There are many correct answers. Example: …

1.2
1. …
3. …
5. …
7. …
9. …
11. …
13. No solutions
15. …
17. x̄ = … (components include −459/434 and 699/434)
19. …
21. 4 types
23. …
25. Yes; perform the operations backward.
27. No; you cannot make the last column zero by elementary row operations.
29. a = 2, b = c = d = 1
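Fact A.10a (the cross product is orthogonal to both factors) is easy to check numerically. A Python sketch of the ℝ³ cross product (illustrative; the sample vectors are made up):

```python
def cross(v, w):
    """Cross product of two vectors in R^3."""
    return [v[1] * w[2] - v[2] * w[1],
            v[2] * w[0] - v[0] * w[2],
            v[0] * w[1] - v[1] * w[0]]

def dot(v, w):
    return sum(vi * wi for vi, wi in zip(v, w))

v, w = [1, 2, 3], [4, 5, 6]
n = cross(v, w)
print(n)                      # [-3, 6, -3]
print(dot(n, v), dot(n, w))   # 0 0  (orthogonal to both v and w)
```

By Fact A.10b, the norm of the returned vector would give the area of the parallelogram defined by v̄ and w̄.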
31. f(t) = 1 − 5t + 4t² + 3t³ − 2t⁴
33. f(t) = −5 + 13t − 10t² + 3t³
35. …
37. [ : :]
~
: u; !~] : ;:e~ ~T:::e ~b'"M'
4
~ittruy
wh"e t ;,
33.
1. a. No solutions b. One solution c. Infinitely many solutions
uJ n n
3. rank is 1
5. a. x·[…] + y·[…] = […] b. x = 3, y = 2
7. One solution
9
[~~nm~m
21.
~;]
= Ci .X I
2 1.
the graph is a plane through the
2.2 L
5.A= [ 1 ~ ~ :~ ]
Ax = c has infinitely many
7. T is linear; A = 9. Not invertible
b. False
~
~
~
A= [l ,ab a
39.
[~~ ] · [~~ ]

~j~
J
!]
# 0.
COSCt . [  SIO Ct
43.
X3
. ..
A  = a2 1/ [ 17. Reftection in the origin; this transformat1on own inverse.
IS
1ts
7.
~
[:
~[
25
x
= [~
!J
.1:. Expres f(t) in term
of a, b, c, d .
49. The image is an ellipse with semimajor axes ±5ē₁ and semiminor axes ±…ē₂.
i]
7 24
J, a clockwise rotation througlJ
45. Write x̄ = c₁v̄₁ + c₂v̄₂ and use Fact 2.2.1 to compute T(x̄) and L(x̄).
2
;n R defined by U,e ,eotot<
51. The curve C is the image of the unit circle under the Iransformation with matrix [ 1 2
w w J.
9. A shear parallel to the e2 axis II .
sinet COS Ct
J
t he angle et.
5. About 2.5 radians In this case,
 b2 1 +ab
41. refQx = refp .t
3. The parallelegra m in JR 3 defined by the vectors T( eI) and T (e2)
v"]
Vz
15. A is inv.ertible if a ;;j:. 0 or b 1
~ ~ ~]
47. Write T(.'r)
[vl
I I. Tbe inverse is [ _
solutions
ny:::'nr
hear back.
0 0 0
33. Use a parallelogram. 35. Yes; mimic Example 2.2.2. 37. Write w̄ = c₁v̄₁ + c₂v̄₂; then T(w̄) = c₁T(v̄₁) + c₂T(v̄₂)
v3].
= C ]X J +c2X2 +c3X3 =
Y
1
~ ~ J;you
31. A = [
v = [ ~~'VJJ , then the matrix of T is Vz
0
27. Yes; use Fact 2.2.1. 29. Yes
45 . False
3. Not linear
!]
25. [ i on the line
[2 3 4].
[v1
0
u~ n
23.
43. a. T is linear, re presented by the matrix
b. If
0
0
+ k(T(w) T (v) )
+ c2x2;
J
[~ ~ ~]
tepre<e"ted by the
origin in JR3.
CHAPTER 2 2.1 I . Not linear
25. The system Ax̄ = c̄ has infinitely many solutions or none. 27. a. True
,., .
39. True
41 . Y
when i ;;j:. j
~["tT r] a dilation by a factor of 2 0
un
37. T (x) = T (v) segment.
= 2u ; uj
19
33.1 [: ;J 35.
J
17. Projection onto the line spanned by [: .
C3
123
;
*
41. One solution 43. No solutions 47. a. ; = 0 is a solution. b. By part a and Fact 1.3.3 _ _ _ c. A(x 1 + x2) = Ax1 +_ A x~ = 0 + 0 = 0 d. A(kx) = k (Ax ) = kO = 0 49. a. Infinitely many solution or none b. One solution or none c. No solutions d. Infinitely many solutions 51. lf n. = r and s = p
C.
['i~J
~
: ]
57. [:1] = [~] + [~]
m
23. [
[~0 ~0 ~1
::~,::'Fr ~~J
J l.
2u 1u2 2u~ I
15. a;; = 2u7  I, and a;j
27. Reflection in the ē₁-axis 29. Reflection in the origin
+2
15. 70 17. Undefined 19.
Ax = X.
I [ 2u~2u1u2
13.
23. Clockwise rotation through an angle of 90° followed by a dilation by a factor of 2; invertible. 25. Dilation by a factor of 2
: ::, m~ ~ mm
II . Undefined 13. [
39.
ē₁-axis; not invertible.
21. Rotation through an angle of 90° in the clockwise direction; invertible.
o: ,A
[:]
39. a. Neither the manufacturing nor the energy sector makes demands on agriculture. b. x₁ ≈ 18.67, x₂ ≈ 22.60, x₃ ≈ 3.63 41. m₁ = 2m₂ 43. a ≈ 12.17, b ≈ 1.15, c ≈ 0.18. The longest day is about 13.3 hours.
1.3
19. Projection onto the
24 ] 7
2.3
I. [ 3. [
~ ~ J
t ~ J
45 . A = BS 
n
3. U ndefined 5. Not invertible 7 . Not inverti ble
5. [ :
9. Not invertible
u1n
l1
13.
15.
1l
2 :0
l
 6
9. [
1
9
2
3
19.
X1
X3
+
= 3 )' t = Yl

49. mat ri x of L :
~ ~J
4)'2 )'3 1.5)'2 + 0.5 Y3
25. True
27
21. Not invertible 23. Invertible 25. Invertible 29. For all k except k = 1 and k = 2
31. It's never invertible
c. Yes; use Fact 2.3.5. d. Invertible if all diagonal entries are nonzero
37. (cA)⁻¹ = (1/c)A⁻¹ 39. M is invertible; if mᵢⱼ = k (where i ≠ j), then the ijth entry of M⁻¹ is −k; all other entries are the same.
41. The transformations in parts a, c, d are invertible, while the projection in part b is not. 43. Yes; x̄ = B⁻¹(A − …
45. a. 3³ = 27
b. n
1
y
3
c.
123
}3 = 64 (seconds)
47. f(x) = x 1 is not inven ible, but the equation f(x) 0 has the unique solution x 0.
=
2.4
I. [
~ ~J
=
33. No = 35. a.
x
s=
l
~
n[! n ~ n = [
~ ~ l [~ ~ l [~ ~ l [! ~ l [~ ~ J
. [ whe re k is nonzero aod c is arbitrary. Cases 3 and 4 represent shears, and case 5 is a reflection.
13.
foc rubitrary
I
aod '
.
6 1. a. Wn te A =
=
b . .X = ih c. rank(A) rank(B) d. By Example 1.3.3a
=
L
=m
~ ~
l
41. a. The matrices DαDβ and DβDα both represent the counterclockwise rotation through the
43. A = [cos(2π/3) −sin(2π/3); sin(2π/3) cos(2π/3)]
1 [ ~ ~]
A
3
[L(m) 0] L3
A1 ] .~~.
,
15.
nJj
=
[u(m)
 [ A(nIl AW k for x, y and 1. A
J
 . in(2n /3) cos(2n /3)


X
I
0
[A;II 0
0 1] A22
67. (i th row of A B ) = (i th row of A)B
J_ 
69. rank(A) = rank (A 11 )
+ rank(A23)
73. (ijth entry of AB ) = L~= l b kj ::::;
sr
0
0
0
2
0 0 0
I
1 0
[~l [~]
y]
1
a ;k b kj ::::
~]
21. All of ℝ³ 23. kernel is {0̄}, image is all of ℝ² 25. Same as Exercise 23 27. f(x) = x³ − x
29.
f [: ] = [
:ii:~~ c~~i:~ ] cos(rp)
31:o:r~r,~rroal
71. Only A = !"
s L~= l
3
1 0 0 0 0
19. The Jine spanned by [ _
65. A is invertible if both A 11 and A22 are invertible. In this case,  l=
2
17. All of IR2
0
L4 ' U
Ü][L' 0] [U'
for an arbitrary k
angle α + β. b. DαDβ = DβDα = [cos(α + β) −sin(α + β); sin(α + β) cos(α + β)]
[ A(m)
Use Fact 2.4. 12. c. Solve the equation
37. Al l diagonal 2 x 2 matrices 39. T he matrices [
[
~ ~ J
~ ~ 2: ~ ~ 0
l. ker(A) = {0}
57. Yes; use Fact 2.4.4.
2
33. If a² + b² = 1 35. a. Invertible if a, d, f are all nonzero b. Invertible if all diagonal entries are nonzero
CHAPTER 3
1
53. a. Use Exercise 52; let S = E₁E₂ ··· Eₚ
55
lilil
~ ~]
51. Yes; yes; each elementary row operation can be "undone" by an elementary row operation.
b.
29. For example, B = [
27. Not invertible
0
3.1
21. True 23 . True
81. False. Consider A = B = [
 1 0
[ ~ ~ ~] 0
19. Fa1se
+ 0.5 )'3
f(g(x)) = { x, if x is even; x + 1, if x is odd }. The functions f and g are not invertible.
~ ~ ~]
0
17. False
17. Not invertible XJ = 3)' 1  2.5)'2
=! [~ I ~ ]
4
13. [II) 15. h: Fact 2.4.9 applies to square matrice o nl y.
~ ~ =~  5
2
79. g(f(x)) = x , for all x.
matrix of T: [ 
I I. [ I 0]
2
 )
[~ ~ ~]
7.
~
_:
47. A
1
33 T
m~ x
+ 2y + 3l
oootdloote
1
35. ker(T) is the plane with normal vector v̄; im(T) = ℝ.
37. im(A) = span(ē₁, ē₂); ker(A) = span(ē₁); im(A²) = span(ē₁); ker(A²) = span(ē₁, ē₂); A³ = 0, so ker(A³) = ℝ³ and im(A³) = {0̄}.
39. a. ker(B) is contained in ker(AB), but they need not be equal. b. im(AB) is contained in im(A), but they need not be equal.
41. a. im(A) is the line spanned by […], and ker(A) is the perpendicular line, spanned by […]. b. A² = A; if v̄ is in im(A), then Av̄ = v̄. c. Orthogonal projection onto the line spanned by […].
43. Suppose A is an m × n matrix of rank r. Let B be the matrix you get when you omit the first r rows and the first n columns of rref[A : Iₘ]. (What can you do when r = m?)
45. There are n − r nonleading variables, which can be chosen freely. The general vector in the kernel can be written as a linear combination of n − r vectors, with the nonleading variables as coefficients.
47. im(T) = L₁ and ker(T) = L₂
51. ker(AB) = {0̄}
53. a. … b. ker(H) = span(v̄₁, v̄₂, v̄₃, v̄₄), by part a, and im(M) = span(v̄₁, v̄₂, v̄₃, v̄₄), by Fact 3.1.3. Thus ker(H) = im(M). H(Mx̄) = 0̄, since Mx̄ is in im(M) = ker(H).

3.2
1. Not a subspace 3. W is a subspace 7. Yes 9. Dependent
11. Independent
13. Dependent 15. Dependent 17. Independent 19. Dependent 21. Dependent
25. …
51. a. Consider a relation c₁v̄₁ + ··· + cₚv̄ₚ + d₁w̄₁ + ··· + d_qw̄_q = 0̄. Then c₁v̄₁ + ··· + cₚv̄ₚ = −d₁w̄₁ − ··· − d_qw̄_q is 0̄, because this vector is both in V and in W. The claim follows. b. From part a we know that the vectors v̄₁, …, v̄ₚ, w̄₁, …, w̄_q are linearly independent. Consider a vector x̄ in V + W. By definition of V + W we can write x̄ = v̄ + w̄ for a v̄ in V and a w̄ in W. The v̄ is a linear combination of the v̄ᵢ, and w̄ is a linear combination of the w̄ⱼ. This shows that the vectors v̄₁, …, v̄ₚ, w̄₁, …, w̄_q span
l I. Independent
27
29.
2
3
4
5
I 0 0 0
0 I 0 0
0 0 I 0
0 0 0
Ul m [!l [~ J
mm·m
I. [
fo r ii;. 37 . The vectors T (Ü 1), .. . , T (Ü 111 ) arenot neces ari ly independent. 39. The vectors Üt, ... , ü", are linearly independent. 4 1. The colu mns of B are li nearly independent, while the columns of A are dependent. 43. The vectors are linearly independent.
v
5
[~Jfn
[~ !]
49. L
~
im
I I.
13
ke< [
0
 l
 I
J
basis of image:
[
0
0
2
3 0
0
I 0 0
I
 1
1
I
1 0 0 0
0
0
0
I 0 0
0
0 0
25 .
4 I
I
[~lm·[nm [!J nr~J
27. They do.
29. [
0
[~]
31 h
m·m
19.
[lll~J [~ll
Jl[;]
l~ r ~l
33. The dimension of a hyperplane in ℝⁿ is n − 1. 35. The dimension is
15. 5
17
m~ ~
1 0 0 0
9.
45. Yes 47 .
n
[
fil
r;r n1[!J.[~ J :1~l
23
n
3
7.
35. Consider a relation c1ii1 + · · · + c;Ü; + · · · + c",ii", = Ö where c; =/: 0 but Cj = 0 for all j > i . Solve
0 I 0 0
bMis of imoge
V + W.
3.3
~]
lll UHil
x
,.m m 33
21 b'5 is ofkemel [
37. A =
11 
11 _
I.
1.
l~ ~ ~ ~ ~l 0 0
0 0
0 0
0 0
0 0
39. ker(C) is at least Idimensional, and ker(C) i
contai ned in ker(A) 4 1. A basis of V i also a basi, of
w. by Fact 3.3.4c.
43. dim( V + W) = dim(V ) + di m W), by E, ercise 3.2.5 1
21. For example: b = d = e = g = 0, …
45. The first p columns of rref(A) contain leading 1's because the v̄ᵢ are linearly independent. Now apply Fact 3.3.7.
49. [0 1 0 2 0], [0 0 1 3 0], [0 0 0 0 1]
25. a. …
51. a. A and E have the same row space, since elementary row operations leave the row space unchanged. b. rank(A) = dim(rowspace(A)), by part a and Exercise 50.
55. Suppose rank(A) = m. The submatrix of A consisting of the m pivot columns of A is invertible, since the pivot columns are linearly independent. Conversely, if A has an invertible m × m submatrix, then the columns of that submatrix span ℝᵐ, so im(A) = ℝᵐ and rank(A) = m.
57. Let m be the smallest number such that Aᵐ = 0. By Exercise 56 there are m linearly independent vectors in ℝⁿ; therefore, m ≤ n, and
b. By part a, ll
rank A)
~
rank( B )
]. 3.
JT76 J54
5. arccos(…) 7. obtuse 9. acute 11. arccos(…)
33.
I. [
= 2
v, . x)v, + 2cü2 . •r)v2 .r
~~~]
3. [ 4/ 5 ]
3~5
7.
[ •
[~j~ ] · [ ~j~ ] , [ ~~~ ]
[i].[f]
19. a. Onhogonal projection onto L j_ b. Reftection in L .l c. Reflection in L
· [
2/3
2/ 3
:~ ~ii:~]
3 1.
e,. e2, e3
13.
37. 'L
4/ 15
1/2
 1/2
39
I
I
· _I m_ [~ ] 3
0
•
_ I_ [
JJ
v ·_
∠(L(v̄), L(w̄)) = arccos((L(v̄) · L(w̄)) / (‖L(v̄)‖ ‖L(w̄)‖)) = arccos((v̄ · w̄) / (‖v̄‖ ‖w̄‖)) = ∠(v̄, w̄). (The equation L(v̄) · L(w̄) = v̄ · w̄ is shown in Exercise 2.)
=/
11
column is a unit vector orthogonal to ā₁; there are two choices: [−sin φ; cos φ] and [sin φ; −cos φ]. Solution: [cos φ −sin φ; sin φ cos φ] and [cos φ sin φ; sin φ −cos φ], for arbitrary φ.
II. For example, T (x ) =
in (ip) CO (ip)
1[ ~ ; ~] 2
2
x
I
13. No, by Fact 4.3 .2 15. No : con ider A
= [~
~ J and B = [ ~ ~ J
17. Ye. since (A 1/ . = (AT ) 1 = A 1
 2/3
 J
· <+
J
Ul}, [IJ J
3 x ( L () L () ) _
9. The first column is a unit vector; we can write ā₁ = [cos φ; sin φ] for some φ. The second
19. (ijth entry of A) = vᵢvⱼ 21. All entries of A are 1/n
I[: :j[34]
[ :~] . [=:i~] . [:~] 1/2
33.},
Then
(Av̄) · w̄ = (Av̄)ᵀw̄ = v̄ᵀAᵀw̄ = v̄ · (Aᵀw̄).
7. Yes, since AAᵀ = Iₙ
;jn
2/ 3
1/ 10
II. [4~5 J•[~~;:~] 3/5
!j~
~~ J
5. Yes, by Fact 4.3.4a.
4/ 15
29.
4.3
L Ü) · L (ÜJ)
35. [~j~ ] · [ ~j~J
] . [
1/ 2
]
_7 {
nn ks[ =iJ
lf (as n + oo) 9
4~5
.JT8
J
J r J. r
3/ 5]
3
Q2] [ ~~
A, = Q , R, is the QR fac torizatio n of A 1• 45. Ye
[Y~~;~:~ Jl[~ :~1 =: u~ !] 3/ 5
l/3
+
1/ 10
A2] = [Q,
A = [ A,
 1; .JT8J 3  1/ .J IS [ 4/ .J I8 O
l/ 2
25.
3n
[i~ ~;;;gl [~ ~~]
23.
2/ 3
~) ~ 0 . 12 (radians)
: t~Irenl~J 17
R(x)
x is a
43 . Wri te the QR facto rizatio n of A in partitioned form as
n; nn~ :n
t
I= 8
35. No; if ū is a unit vector in L, then x̄ · proj_L x̄ = x̄ · (ū · x̄)ū = (ū · x̄)² ≥ 0
5 .
Jn)
2/ 3 19. 2/ 3 [ 1/ 3
21
+9+4 + I+
a ;;
[:~; _:~; J[~
17 . 1.
31. p ≤ ‖x̄‖. Equality holds if (and only if) x̄ is a linear combination of the vectors v̄ᵢ.
CHAYfER 4 4.1
) k2 ( Ü . v) =
ll~ll ii ll = 1 .~ 1 lliill =
29. By Pythagoras, ‖x̄‖ = √(49 + 9 + 4 + 1 + 1) = 8
b. 0, 1, or 2 ~
=
multiplying the ith row of A with −1 whenever aᵢᵢ is negative.
[3]
 2/ 3
UJ
27
4.2 61. a. 3, 4, or 5 63. a. rank (A B) b. rank (A B)
‖kv̄‖ = √((kv̄) · (kv̄)) = √(k²(v̄ · v̄)) = |k| ‖v̄‖
~j~ J
15. [
= 2· c = T • f   2
II
5
:J
 J
_I_ [ ~] · .J42 1
41. Q is diagonal with qᵢᵢ = 1 if aᵢᵢ > 0 and qᵢᵢ = −1 if aᵢᵢ < 0. You can get R from A by
23. A represents the reflection in the line spanned by v̄ (compare with Example 2), and B represents the reflection in the plane with normal vector v̄. 25. dim(ker(A)) = n − rank(A) (by Fact 3.3.…) and dim(ker(Aᵀ)) = m − rank(Aᵀ) = m − rank(A) (by Fact 4.3.9c). Therefore, the dimensions of the two kernels are equal if (and only if) m = n, that is, if A is a square matrix.
27. AᵀA = (QR)ᵀ(QR) = RᵀQᵀQR = RᵀR
11. b. L⁺(L(x̄)) = projᵥx̄, where
31. a. Iₘ = Q₁ᵀQ₁ = SᵀQ₂ᵀQ₂S = SᵀS, so that S is orthogonal. b. R₂R₁⁻¹ is both orthogonal (by part a) and upper triangular, with positive diagonal entries. By Exercise 50a we have R₂R₁⁻¹ = Iₘ, so that R₂ = R₁ and Q₁ = Q₂, as claimed. 33.
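The QR arguments in these exercises rest on the factorization A = QR with orthonormal columns in Q and positive diagonal entries in R. A bare-bones Gram–Schmidt sketch for a 2 × 2 matrix in Python (an illustration under those conventions; the input matrix is made up):

```python
from math import sqrt

def qr_2x2(a):
    """QR factorization of a 2x2 matrix given as a list of columns a = [v1, v2]."""
    v1, v2 = a
    r11 = sqrt(v1[0] ** 2 + v1[1] ** 2)
    u1 = [v1[0] / r11, v1[1] / r11]        # first orthonormal column
    r12 = u1[0] * v2[0] + u1[1] * v2[1]    # component of v2 along u1
    p = [v2[0] - r12 * u1[0], v2[1] - r12 * u1[1]]
    r22 = sqrt(p[0] ** 2 + p[1] ** 2)
    u2 = [p[0] / r22, p[1] / r22]          # second orthonormal column
    return [u1, u2], [[r11, r12], [0.0, r22]]

q, r = qr_2x2([[3, 4], [1, 3]])   # columns (3, 4) and (1, 3)
print(q)  # orthonormal columns
print(r)  # upper triangular with positive diagonal
```

With this sign convention R's diagonal is always positive, which is what makes the factorization unique, as Exercise 31 argues.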
u·
cf
W = im (A) = (ke r(A T)) .L
U e approx imatio n log(z) 0.664 log(g)
.
e. L + (y) =
~ ~ ~·
3. 5. 7. 9. II.
17. Yes; note that ker(A = ker AT A).
1. im(A ) =
pan [
~]
and ker(A T ) = spa.n [ ;
l
Ax *ll = 42
23. [
~]
25 .
I  3t] , f or ar b'1trary t
3. The vecrors form a ba is of :?n.
 v.L
= (ker(A)) .L = im(AT) , where
I
A= [ I
B~is
of
I
2
vL
I
[
I]
5
4.
[iHH
27. [
·* = x2 ·* ~ ~ 2l 29 ..x,
S: parallelto ker(A)
37. a. Try to
[
~n ~ ~:~: ~ J.
approximation log(d) = 0.915 + 0.017t. b. Exponentiate the equation in part a: d = 10^(log d) = 10^(0.915 + 0.017t) ≈ 8.221 · 10^(0.017t) ≈ 8.221 · 1.04ᵗ. ‖x̄₀‖ < ‖x̄‖ for all other vectors x̄ in S.
11. b. L(L⁺(ȳ)) = ȳ
y
c. L +(L (x)) = proj vx. where
V= (ker(A)) .L
= im (AT)
d. im ( L + ) = im(AT and ker(L + ) = {Ö}
oL+GJ~ [~
!]>
n  1?; 1> j
n
II
19.
c. Predicts 259 displays for the A320; there are many fewer, since the A320 is highly computerized. 39. a. Try to solve the system c₀ + log(600,000)c₁ = log(250), c₀ + log(200,000)c₁ = log(60), c₀ + log(60,000)c₁ = log(25), c₀ + log(10,000)c₁ = log(12), c₀ + log(2,500)c₁ = log(5)
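Least-squares answers such as these come from the normal equations AᵀAx̄* = Aᵀb̄. A generic Python sketch for fitting y ≈ c₀ + c₁x (the sample data are made up, not the book's):

```python
def lstsq_line(xs, ys):
    """Fit y = c0 + c1*x by solving the 2x2 normal equations A^T A c = A^T y."""
    n = len(xs)
    sx, sxx = sum(xs), sum(x * x for x in xs)
    sy, sxy = sum(ys), sum(x * y for x, y in zip(xs, ys))
    det = n * sxx - sx * sx          # determinant of A^T A
    c0 = (sxx * sy - sx * sxy) / det
    c1 = (n * sxy - sx * sy) / det
    return c0, c1

print(lstsq_line([0, 1, 2], [1, 3, 5]))  # exact fit: (1.0, 2.0)
```

For the log-log fits in these exercises, xs and ys would hold the logarithms of the raw data, and the result would be exponentiated as in part b.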
21.
23.
u e linearity in the column
[;~ ] = [:~ ] and [;~ ] = [:~ ] are
+b =
0.
~

~J
b. Tbey are 0
c. det(A) = det(Aᵀ) = det(−A) = (−1)ⁿ det(A) = −det(A), so det(A) = 0
2
x 3
1 ) 2 .
det~A )
51. det(A ) = I if 11 i. even and det(A) odd (the re are n 2 inversions 5.2 I.  3
29. AᵀA = [‖v̄‖² v̄·w̄; v̄·w̄ ‖w̄‖²], so det(AᵀA) = ‖v̄‖²‖w̄‖² − (v̄·w̄)² ≥ 0, by the Cauchy–Schwarz inequality. Equality holds only if v̄ and w̄ are parallel.
31. Expand down the first column: f(x) = −x det(A₄₁) + constant, so f′(x) = −det(A₄₁) = −24.
33. T is linear.
35. det(Q₁) = det(Q₂) = 1 and det(Qₙ) = 2 det(Qₙ₋₁) − det(Qₙ₋₂), so det(Qₙ) = 1 for all n.
47 . det( A) = ( l )l'det(A) 49. det(A  I) =
olution.
±I
27. a. [
45 . k
7. I
 Clj)
25 . det(A TA) = (det (A))2 > 0
41. det(A) 2x  x =  x(x The matrix A is in ve rtible except when x = 0 or x = I. 43.  k
3. 9
[1 (a; i>j
The eq ua tion is of rhe form px 1 + qx2 lhat i , it defines a line.
39. Let aᵢᵢ be the first diagonal entry that does not belong to the pattern. The pattern must contain a number in the ith row to the right of aᵢᵢ and also a number in the ith column below aᵢᵢ.
5. 24
Cl; .
and Exerci e 17)
=
~
n
i= l
35. det(A) = 1; there are 50 · 99 inversions. 37. det(M) = det(A) det(C)
Use
J
Now det(A) = f(aₙ) = k(aₙ − a₀)(aₙ − a₁) ··· (aₙ − aₙ₋₁) = ∏_{n≥i>j} (aᵢ − aⱼ), as claimed.
31.  I ' 0, 2 33. det(A) = 0
+ 0 . 1 sin(l )  1.41 cos(r ) co + 35c 1= log(3 5) co + 46c 1= log(46) olve the . y tem + (77) co 59 c,= 1og co + 69c 1=log ( l 33)
Least quares so luti on [
6
29 . I
3 1. 3 + 1.51
im(A T) = (ker(A J)..L
15. Analogous to Example 4. 17. a. det [1 a₀; 1 a₁] = a₁ − a₀
24
23. in venible if a, b, c are aJJ nonzero 25. invert ible 27 . 3, 8
33 . approximately 1.5
9.
II. 8
b. Use Laplace expansion down the last column to see that f(t) is a polynomial of degree ≤ n. The coefficient k of tⁿ is ∏_{i>j} (aᵢ − aⱼ).
aqr 21 . invert ible
,n
7. im (A) = (ker(A)).L
9.  72
ao a 1
19. a ps 
1
All
13. 8
0 0 0 13. 36 15. 24 17.  1
19. [ : ]
~ J. llh 
1.6 16] 0.664 ·  1.6 16 +
CHAPTER 5 5.1 I. 2
15. Let 8 = (AT A)  1 AT .
2 1. ; • = [
=
~[
b. Exponentiate the eq uation in part a. z = IOiog(;:l = IQ 1.616+0.664 1og(g) ~ 0.0242. 80.664
d. im ( L + ) =[i1rA~]) and ke r( L + ) = ker (A r)
A = L DU. then A T = U T D L T i the L DU
factorization of A T_ Since A = AT the two fa torizations are idenrica.l. o that U = L T. a claimed.
4.4
Leastsquares. o luti o n [ co ]
V = (ker(A)) ..L = im (AT) c. L ( L + (y)) = projiV\·, where
29 . By Exercise 4. _.45, we can wri te AT = QL whe re Q i orthogonal and L is lower rriangular. Then A = (QL T = LT Q T doe the job.
=
I if 11 i
37. det(P 1) = I and det(P") = det(P11 _ 1) , by ex pan ion dow n the fir t co lumn , o det ( P11 ) = 1 for all n. 39. a. Note thar det(A)det(A  1) =I , and both fac tors are integers. b. Use 1he fo rmu la fo r rhe inver e of a 2 m
5.3
I. 50
~ ~ ~] ; A 1
25. adj(A) = [
3. 13
 2
det~A)
7. 110
9. Geometrically: ‖v̄₁‖ is the base and ‖v̄₂‖ sin α the height of the parallelogram defined by v̄₁ and v̄₂. Algebraically: In Exercise 5.2.29 we learned that det(AᵀA) = ‖v̄₁‖²‖v̄₂‖² − (v̄₁ · v̄₂)² =
•dj(A)
~
0  •dj(A)
I
~ ~ [ 
23. a. S⁻¹v̄ᵢ = ēᵢ, because Sēᵢ = v̄ᵢ b. S⁻¹AS = S⁻¹A[v̄₁ … v̄ₙ] =
‖v̄₁‖²‖v̄₂‖² − ‖v̄₁‖²‖v̄₂‖² cos²α = ‖v̄₁‖²‖v̄₂‖² sin²α, so that |det(A)| =
29. dx 1 =  D  1R2( l  Rt ( I  a) 2de2, dy 1 = o  1(1  a) R_(R t ( I  a) + a)de_ > 0, dp = oI R t R2d e1 > 0 .
√(det(AᵀA)) = ‖v̄₁‖‖v̄₂‖ sin α.
31. ITTTT
2
2
2
35. x(t)
S⁻¹[λ₁v̄₁ … λₙv̄ₙ] = [λ₁ē₁ … λₙēₙ] =
! ~]
27. x =  2 ° ? > 0: )' = 2  bb? < O;x decrea es a + ba + a b increa es.

[
0
:
fi nd A
= [
v;
v;
17. a. V(v 1 , v2 , Ü3,
ü,
x ii2 x Ü3) =
V (v 1 , ii2, ü3)1 Jli, x ii2 x Ü3ll becau e
v1 x ü2 x ii3 is orthogonal to ii t , ii2, b.
vcü,. ii2. li), v, x
~
and V:,.
13.
15.
[ü 1 ii2 Ü3] = ii 1 · Cii2 x Ü3) is positive if (and only if) ii 1 and ~ x Ü3 enclose aJ1 acute angle.
19. det
21. a. reverse
23 .
.x,
~ ~]
det [ =
det [
~ ~ ]
det [ X2
= de{
c. reverses
b. preserves
 1_ .  17 '
~ ~ J
~ ~ ]
17. 19.
6 =
T7
21.
We need a matrix
J,[:J. with A.
We
2
4] .
we
'"d the e;ge" b' ;, s 1AS.
wi th
S⁻¹ASv̄ = λv̄. Then ASv̄ = Sλv̄ = λSv̄, and λ is an eigenvalue of A (Sv̄ is an eigenvector). Likewise, if v̄ is an eigenvector of A, then S⁻¹v̄ is an eigenvector of S⁻¹AS with the same eigenvalue. 41. …
Ö. The
~ ~J 2d

r
0
7. ker(λIₙ − A) ≠ {0̄}, because (λIₙ − A)v̄ = 0̄. The matrix λIₙ − A is not invertible.
+ :b 
J
c rrespondi ng eigenvector ü. then
5. Yes
l l. [ 2a
:
39. Jf A i an eigenvalue of
CHAPTER 6
6.1
1. Yes; the eigenvalue is λ³ 3. Yes; the eigenvalue is λ + 2
9. [
[ 
37. a. A represents a reflection in a line followed b a di lation by a factor of ._/32 + 42 = 5. The eigenvalues are therefore 5 and  5. b. So lving the lineru· systems Ax = 5x
27 .
x v3) =
I det[Ü t x ~ x v3 v, Ü1 li) ]J = llvt x Ü2 x Ü3 ll 2, by defin ition of tbe cross product. c. V(v 1, ii2, Ü3) = lllit x Ü2 x ii31l. by parts a and b.
4
2
[";"f[ :~
V(v 1, ~ .. .. , vk) and
1
J= [ ~ ~ Jand . olve for
A [:
25.
Both det(AᵀA) and V(v̄₁, …, v̄ₖ) are zero. One of the v̄ᵢ is a linear combination of v̄₁, …, v̄ᵢ₋₁. Then proj_{Vᵢ₋₁}v̄ᵢ = v̄ᵢ and v̄ᵢ − proj_{Vᵢ₋₁}v̄ᵢ = 0̄. This shows that V(v̄₁, …, v̄ₖ) = 0, by Definition 5.3.6. Fact 4.4.3 implies that det(AᵀA) = 0.
J+ 6
a. soc1ated e1genval ues 2 and 6, re pectively. Let
A"
diagonal entries A. 1, . . . , A." .
parallelogram defined by ii , a11d ii2.
15.
[:
A wi th eigenvectors [:
] , the diagonal matri x with
II . 1det(A) 1 = 12, the expansion factor of T on tbe
13.
1
11
0 At · ·
=2
w
29.
~]
6.2
w
1. 1, 3
3. 1, 3
J,
31. All vectors of the form […], where t ≠ 0 (solve the linear system Ax̄ = 4x̄). The nonzero vectors in L are the eigenvectors with eigenvalue 1, and the nonzero vectors in L⊥ have eigenvalue −1. Construct an eigenbasis by picking one of each. No eigenvectors and eigenvalues (compare with Example 3). The nonzero vectors in L are the eigenvectors with eigenvalue 1, and the nonzero vectors in the plane L⊥ have eigenvalue 0. Construct an eigenbasis by picking one nonzero vector in L and two linearly independent vectors in L⊥. (Compare with Example 2.) All nonzero vectors in ℝ³ are eigenvectors with eigenvalue 5. Any basis of ℝ³ is an eigenbasis.
5. none 7. 1, 1, 1
9. 1, 2, 2 11. −1
13. I 3 1.
15. Eigenvalues λ₁,₂ = 1 ± √k. Two distinct real eigenvalues if k is positive; none, if k is negative.
17. A represents a reflection–dilation, with a dilation factor of √(a² + b²). The eigenvalues are ±√(a² + b²).
19. True (the discriminant tr(A)² − 4 det(A) is positive)
21. det(A) is the product of the eigenvalues, and tr(A) is their sum. To see this, write fA(λ) = (λ − λ₁) ··· (λ − λₙ) = λⁿ − (λ₁ + ··· + λₙ)λⁿ⁻¹ + ··· + (−1)ⁿλ₁ ··· λₙ and compare with the formula in Fact 6.2.5.
33. c(t) = 300(1.1)ᵗ − 200(0.9)ᵗ, r(t) = 900(1.1)ᵗ − 100(0.9)ᵗ
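For a 2 × 2 matrix, fA(λ) = λ² − tr(A)λ + det(A), so the eigenvalue sum and product can be checked directly. A Python sketch (illustrative; the matrix is made up):

```python
from math import sqrt

def eigenvalues_2x2(a, b, c, d):
    """Real eigenvalues of [[a, b], [c, d]] from the characteristic polynomial
    lambda^2 - tr*lambda + det (assumes a nonnegative discriminant)."""
    tr, det = a + d, a * d - b * c
    root = sqrt(tr * tr - 4 * det)
    return (tr + root) / 2, (tr - root) / 2

l1, l2 = eigenvalues_2x2(1, 2, 2, 1)   # eigenvalues 3 and -1
print(l1 + l2)  # 2.0  (= tr A)
print(l1 * l2)  # -3.0 (= det A)
```

This is exactly the relation stated in Exercise 21, specialized to n = 2.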
23. f_B(λ) = det(λIₙ − B) = det(λIₙ − S⁻¹AS)
= det(S⁻¹(λIₙ − A)S) = (det(S))⁻¹
A[
~ ] = [ ~ ] and A [ _ :] =
consider A = [0.9 0; 0 0.9]
det(λIₙ − A) det(S)
= fA(λ). A and B have the same eigenvalues, with the same algebraic multiplicities.
5
e need not be an eigen
then |λ| ≤ 1.
S)
33.:
Note that |a − c| < 1. Phase portrait when a > c:
35 A
line spanned by [
:~ [~(
~]
1

Cl
~ li ~ ~ !]
J;
line spanned b
[ _: ]
1. eigenbasis: […], […], with eigenvalues 7, 9.
3. eigenbasis: […], […], with eigenvalues 4, 9.
7. eigenbasis: …
9. eigenbasis: ē₁, ē₂, ē₃, with eigenvalues 1, 2, 3.
…with eigenvalues 1, 1, 0.
11. eigenbasis: […], with eigenvalues 3, 0, 0.
13 b. A' = [ A'e 1
Ne1 ] approaches
j
U~ l
' 7. :anbaL [; :] 29.
Ae e,
e
= so that is an eigenvector with associated eigenvalue I.
31. A and Aᵀ have the same eigenvalues, by Exercise 22. Since the row sums of Aᵀ are 1, we can use the results of Exercises 29 and 30: 1 is an eigenvalue of A; if λ is an eigenvalue of A,
e;geob.,;,
wirb eigenvalues 0. I ,  I .
15.
dgeov~tOß
[!l[n
with eigenvalue 0, l.
=
A =
= [:
= 2[
l
ancl
'
CQ
4 1.
23. The only eigenvalue of A is 1, with E₁ = span(ē₁). There is no eigenbasis. A represents a shear parallel to the x₁-axis.
n
< I.
~:~ ~:n [ ~ ~J
a. A = [ b. 8 =
;r heohoo""
Cad w;,
, u] b=
c. The eigenval ues of A are 0.5 and 0. 1, those of 8. 0.5,  0. 1, I . If ü i an eigenvector
25. Tbe geometri c mu ltipl icity is always 1.
of A. then [
~]
F"rthe<more,
[(I :)'b ] ~
2
27. fA(λ) = λ² − 5λ + 6 = (λ − 2)(λ − 3), so that the eigenvalues are 2, 3. 29. Both multiplicities are n − r.
o eigenbasis.
33 . a. AÜ·ÜJ
i an eigenvector of B.
43. Let x(t) = [
A
so.r(t)=!
j (I) = 450( 1.2)' + I 00(  0.8)' +50( 0.4)'
[:
~
l]
.!.
Eigenbasis for
!
wiili
e;geoval o~
[~1] +(~)
r+ l [
I]
~
I,
fort >O.
The proportion in the long run i I :2: I . 45 . I ( rref(A) i likely to be the matri wirb all I 's direc tl y above the main diagonal ancl 0 s everywhere eise)
a t ) = I 00( 1.2)' + 50(  0.8)' + I 00(  0.4)'
~
I ) = A.Y(t)
t.o Xo ~ •, ~ ! [}~ [ _n +~ [H
with eigenva lues 1.2, 0.8.  0.4; .Xo = SOv 1+ 50ü 2 + 501 .
39 a h
! ! ~ J. u] ·[_n~ [~n 0
m. [n[n
T he populatio n approach the proport ion 9:6:2.
~~~]. Then .'i(t +
where A = [
b. E1 = V and E 1 = v.t, o that the mul tip licity of I is m and that of I i n  m .
19. The geornetric multiplici ty of the eigenvalue I is I if a =/= 0 and c =/= 0; 3 if a = b = c = 0; ancl 2 otherwi se. Therefore, there is an e igenbas is on ly if A = h
for any
w(t
35. a. E₁ = V and E₀ = V⊥, so that the geometric multiplicity of 1 is m and that of 0 is n − m. The algebraic multiplicities are the same (see Exercise 31).
el' e3
b = [~ ]
initial va lue.
w
17 . eigenbasis: e2 , e4 , e1, with eigenvalues I , I , 0, 0.
0
1
d. Willapproach (h A) 
b. Suppose Av̄ = λv̄ and Aw̄ = μw̄. Then Av̄ · w̄ = λ(v̄ · w̄) and v̄ · Aw̄ = μ(v̄ · w̄). By part a, λ(v̄ · w̄) = μ(v̄ · w̄), so that (λ − μ)(v̄ · w̄) = 0. Since λ ≠ μ, it follows that v̄ · w̄ = 0, as claimed.
11!(1) = 300(1.2)'  100( 0.8)'  100(0.4) '
m;"
e1genvector of B with eigenvalue I .
= (Av̄)ᵀw̄ = v̄ᵀAᵀw̄ = v̄ᵀAw̄ = v̄ · Aw̄
o7. E;geoba. ;. focA
~ (1 + 'f) [:] + Hl' [ _:] +
H )' '* [=
: ] . The un ique so luti on i
u=~ J
X(t )
that is,
3 1. They are the ame.
5. No real eigenva lue . 27. a.
n
U] U]
~] A [ ~ ~] = [ ~
17
37. We ca n wri te fA (A.) = (A.  A.o) 2 g (A.) . By th procluct ru le, f~ A.) = 2(A.  A.o) (A.) + (A.  A.o)g'(A.), o that 1 (A.o) = 0. 6.3
2 1· We want A
A[
~]b)..
 5
Tr
(a c) [ _ : ] .
r 1
ector of A;
6.4
I. JfS(co
3.
<i
+ i sin(i
)
cose~k) + i sine~k). for k = 0, .. . , 11

I
5. If z = r(cos φ + i sin φ), then w = ⁿ√r (cos((φ + 2πk)/n) + i sin((φ + 2πk)/n)), for k = 0, …, n − 1.
7. Clockwise rotation through an angle of π/4, followed by a dilation by a factor of √2.
9. Spirals outward, since |z| > 1.
11. fA(λ) = (λ − 1)(λ − 1 − 2i)(λ − 1 + 2i)
13. ℚ is a field.
15. The binary digits form a field.
17. ℍ is not a field (multiplication is noncommutative).
19. a. tr(A) = m, det(A) = 0 b. tr(B) = 2m − n, det(B) = (−1)^(n−m) (compare
~
1i
Exercise 30), so that lim_{t→∞} Aᵗ exists and has identical columns (see Exercise 30). Therefore, the columns of Aᵗ are nearly identical for large t.
7.1
4> = a rctan (~ ) : . pi rals o ut wa rd. J,;';r [
2 1. ;i·(l) = v
1/
I.
5 sin ((/11) ] 1 co. (1) + 3 sin (/11) , w lere
3.
4> = ar wn(*) ; pira ls outward.
:
 (l)' [
:2 3. .\ (t )
2
CO
9.
3 1. a. Follow from w + :: = w + ijth entry of
z and wz =
11.
= L,a;~: bkj = '[) ;k Vkj
= L,a;k bkj
13.
= ijth e ntry of A B) b. Take the conj ugate of both ide of the JS.
equa1ion A(ü + iw ) =
ü + i w) .
and u e part a:
17.
A(ü i ÜJ) = (p i q)(Ü
iw) 19.
33. The matrix represents a rotation–dilation with a dilation factor of √(0.99² + 0.01²) < 1. Trajectory
xo=
c1Ü1 + · · · +
2 1.
6.5
17. x (t ) = [  . in(
 23 1] 154
c"ü" .
[_lJ [=:]
[ ~]
[~
~]
:l
is a stable cqu ilibrium.
is
n ~] [ 00.36 .48
[ ;
1
I + 1
J
ü+
iw
:
J.
~
:]
h and 8 = [ ~
:
J
J
rn
1.
33 .
[ I~ J
35.
21 l [
I
 t
2
I
J
4 1. Ye
08 ]  0 .6
43. Yes
0
45 . Yes
ü i w ] and
[~
D= [
~
_
i] 1
47. No are
49 . . o; con ider A = [
s ~ n ~ !J. v ~ [~ ~
9. not diagonalizab lc
5 1. A = [
n
5. not diagonalizab le over IR.
7
]
39. Ye
0.48 0 .64 0.6
I . S = 12
s=
I
0; con ider A =
inve rt ible.
3.
 4
37. Not similar, because A² = 0 and B² ≠ 0.
0
29. Both [
not bounded. This does not contradict part a, since there is no eigenbasis for the matrix
39. [
3
‖x̄(t)‖ ≤ |c₁| ‖v̄₁‖ + ··· + |cₙ| ‖v̄ₙ‖ = M
7.2
8 0
[
J.D = [ p +0 iq
1
2 29. [ /
[n [~n [; ~ ]  0 .8
=
s=
25.
27. b_ij = a_{n+1−i, n+1−j}
:J[n
15.
I 1
[~0 0~ ~]  J
23. Ye. ; both are sim il ar to [
and
[~
s = [:
•
2 1. Ye
23.
(use the triangle inequality ‖v̄ + w̄‖ ≤ ‖v̄‖ + ‖w̄‖, and observe that |λᵢᵗ| ≤ 1)
13.
19. A1 = 21 [ l  t 1
~]
Then
b. The trajectoryx(l ) =
0.3 ]
O. J
0
e;
= c₁λ₁ᵗv̄₁ + ··· + c₅λ₅ᵗv̄₅ is nearly parallel to v̄₁ for large t. 1. stable 3. not stable 5. not stable 7. not stable 9. not stable 11. For |k| < 1 13. For all k 15. Never stable
b. [ 0.9 0.3
0
spirals inward. 35. a. Choose an eigenbasis v̄₁, ..., v̄ₙ and write
33. c. Hint: Let λ₁, λ₂, ..., λ₅ be the eigenvalues, with λ₁ > |λⱼ| for j = 2, ..., 5. Let v̄₁, v̄₂, ..., v̄₅ be corresponding eigenvectors. Write ēᵢ = c₁v̄₁ + ··· + c₅v̄₅. Then the ith column of Aᵗ is Aᵗēᵢ
[~
D=
6
~ ~]
7 [  149 . 99
25. not stable 27. stable 29. may or may not be stable; consider A = ±
~] ,
[~ ] [~ ~ ]
5. a. [
5 ·in ((/11 ] ' whe re ( ! ) + 3 in((/11)
4> = arc tan (i) ; piral. in ward.
A8
±1, ±1, ±i, −1, −1, 3. tr(A) = λ₁ + λ₂ + λ₃ = 0 and det(A) = λ₁λ₂λ₃ = bcd > 0. Therefore, there are one positive and two negative eigenvalues; the positive one is largest in absolute value. 31. A is a regular transition matrix (compare with
23 . 25 . 27 . _9.
[  . in (1) ] . wherc os(1)
w:;
with Exercise 6.3.35) 21. 2 ± 3i
JT3r
CHAPTER 7
53. [
n
~ ~]
~ ~ J and 8 = [ ~ ~
J
~ C~l ]
55. Ye (by Exa mple 6) 61 . 65.
ote tha t
JA (A) =
.f.1(),) =
fa (A.)
0
67. True ( A has three di sti nct eigenva lues)
3 1. True
5. α = arccos(−1/n). Hint: If v̄₀, ..., v̄ₙ are such vectors, let A = [v̄₀ ... v̄ₙ]. Then the noninvertible matrix AᵀA has 1's on the diagonal and cos(α) everywhere else. Now use Exercise 17.
11. The same (the eigenvalues of A and A⁻¹ have the same signs).
27. True
13.
29. False
= q(e;) > 0.
15. Ellipse; principal axes spanned by (1/√5)[2; 1] and (1/√5)[−1; 2]; equation 7c₁² + 2c₂² = 1
31. The closed interval
35. B
[
L a;kbkj
I]
·
2
Y
equation 4c₁² − c₂² = 1.
?
~ L la;k\lbkj \
19. A pair of lines; principal axes spanned by [
k= l
= ijth enLry
[~ ~0
I I. Same S as in 9, D =
b. By induction on t, using part a: |Aᵗ| = |Aᵗ⁻¹A| ≤ |Aᵗ⁻¹||A| ≤ |A|ᵗ⁻¹|A| = |A|ᵗ
41. Let λ be the maximum of all |r_ii|, for i = 1, ..., n. Note that λ < 1. Then |R| ≤ λ(Iₙ + U), where U is upper triangular with u_ii = 0 and u_ij = |r_ij|/λ for j > i. Note that Uⁿ = 0 (see Exercise 38a). Now |Rᵗ| ≤ |R|ᵗ ≤ λᵗ(Iₙ + U)ᵗ ≤ λᵗ tⁿ (Iₙ + U + ··· + Uⁿ⁻¹). From calculus we know that lim λᵗ tⁿ = 0 as t → ∞.
13. Yes (reflection in E₁) 15. Yes (can use the same orthonormal eigenbasis) 17. Let A be the n × n matrix whose entries are all 1. The eigenvalues of A are 0 (with multiplicity n − 1) and n. Now B = qA + (p − q)Iₙ, so that the eigenvalues of B are p − q (with multiplicity n − 1) and qn + p − q. Therefore, det(B) = (p − q)ⁿ⁻¹(qn + p − q). 21. 48 = 6 · 4 · 2 (note that A has 6 unit eigenvectors)
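The determinant formula in answer 17 can be verified for a small case. This is my own numerical sketch (the helper det3 and the sample values p = 2, q = 5, n = 3 are not from the text):

```python
def det3(m):
    # determinant of a 3x3 matrix by cofactor expansion along the first row
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

p, q, n = 2.0, 5.0, 3
B = [[p if i == j else q for j in range(n)] for i in range(n)]  # p on diagonal, q elsewhere
predicted = (p - q) ** (n - 1) * (q * n + p - q)
print(det3(B), predicted)    # both 108.0
print(det3(B) == predicted)  # → True
```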
23. The only possible eigenvalues are 1 and −1 (because A is orthogonal), and the eigenspaces E₁ and E₋₁ are orthogonal complements (because A is symmetric). A represents the reflection in a subspace of ℝⁿ.
25.
s= h
27. If
11
0 0 0
1 0 0 0  1
0
0
I
I
0 0
0
0
./2
I
 I
0
0
0 0
n is even, we have the eigenbasis ē₁ − ēₙ, ē₂ − ēₙ₋₁, ..., ē_{n/2} − ē_{n/2+1}, ē₁ + ēₙ, ē₂ + ēₙ₋₁, ..., ē_{n/2} + ē_{n/2+1}, with associated eigenvalues 0 (n/2 times) and 2 (n/2 times).
29. Yes
1
,.....
7.4
I. [
3
ii,
[~ LH
.../2
n
_
39. L
~ [  0.47 ~:~~]. v2 ~ [~:;~]. ü ~ [ ~::~] 0.78 0.41 3
Since all eigenvalues are positive, the surface is an ellipsoid. The points farthest from the origin are
=[
7.5
~n
4 0]3
1
~ ~ ~] 4
3
I
43. For 0 < a < a.rccos I. = 2, a2 = l
a,

a. The first is an ellipsoid, the second a hyperboloid of one sheet, and the third a hyperboloid of two sheets (see any text in multivariable calculus). Only the ellipsoid is bounded, and the first two surfaces are connected. b. The matrix A of this quadratic form has positive eigenvalues λ₁ ≈ 0.6, λ₂ ≈ 4.44, and λ₃ = 1, with corresponding unit eigenvectors
~.5 ~.5]
(~ ) II I
3. All singular values are 1 (since AᵀA = Iₙ) 5. σ₁ = σ₂ = √(p² + q²)
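For the rotation-dilation matrix A = [[p, −q], [q, p]], AᵀA = (p² + q²)I₂, so both singular values equal √(p² + q²), as answer 5 states. A quick check with sample values p = 3, q = 4 (my own illustration, not from the text):

```python
import math

p, q = 3.0, 4.0
A = [[p, -q], [q, p]]
# form A^T A entry by entry
ata = [[sum(A[k][i] * A[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
print(ata)                   # → [[25.0, 0.0], [0.0, 25.0]]
print(math.sqrt(ata[0][0]))  # → 5.0, the common singular value
```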
7. [
9
.
II
Js
~ ~ J[~
n[~ ~ J
un [~ ~ Js [~ n J
[! ~ n[~ nr~ ~] 3
13. /_ [
~ 3s]
15 [ ~
~]
15. The singular values of A⁻¹ are the reciprocals of those of A. 21. [
0.8 0.6] [ 9  0.6 0.8 2
2] 6
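The fact in answer 15, that the singular values of A⁻¹ are the reciprocals of those of A, can be checked for a 2 × 2 example via the eigenvalues of AᵀA. Everything below (the helper function and the sample matrix) is my own sketch, not from the text:

```python
import math

def singular_values(a, b, c, d):
    # singular values of [[a, b], [c, d]] from the eigenvalues of A^T A
    t = a * a + b * b + c * c + d * d      # trace of A^T A
    det_sq = (a * d - b * c) ** 2          # det(A^T A) = det(A)^2
    disc = math.sqrt(max(t * t - 4 * det_sq, 0.0))
    return math.sqrt((t + disc) / 2), math.sqrt((t - disc) / 2)

a, b, c, d = 2.0, 1.0, 0.0, 3.0
det_a = a * d - b * c
inv = (d / det_a, -b / det_a, -c / det_a, a / det_a)  # A^{-1} for a 2x2 matrix
s1, s2 = singular_values(a, b, c, d)
t1, t2 = singular_values(*inv)
print(abs(t1 - 1 / s2) < 1e-12 and abs(t2 - 1 / s1) < 1e-12)  # → True
```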
23. AAᵀūᵢ = σᵢ²ūᵢ for i = 1, ..., r, and AAᵀūᵢ = 0 for i = r + 1, ..., m. The nonzero eigenvalues of AᵀA and AAᵀ are the same.
?
5. indefinite
1 ü, ± 11
~
1. 15 ] . ± [ 0._6  0.63
vAt
7. indefinite 9. a. A² is symmetric b. A² = −AᵀA is negative semidefinite, so that its eigenvalues are ≤ 0. c. The eigenvalues of A are imaginary (that is, of the form bi, for a real b). The zero matrix is the only skew-symmetric matrix that is diagonalizable over ℝ.
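Part b of answer 9 can be illustrated with the simplest skew-symmetric matrix. The sample value b = 3 is my own choice, not from the text:

```python
b = 3.0
A = [[0.0, -b], [b, 0.0]]  # skew-symmetric: A^T = -A
A2 = [[sum(A[i][k] * A[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
# A^2 = -b^2 * I, a negative-semidefinite matrix; eigenvalues of A are +/- bi
print(A2[0][0] == -b * b and A2[1][1] == -b * b and A2[0][1] == 0.0)  # → True
```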
Sc~
]; equation 5c₁² = 1. Note that we can write x₁² + 4x₁x₂ + 4x₂² = (x₁ + 2x₂)² = 1, so that x₁ + 2x₂ = ±1
and [
of\A\\ B\
37. L = _I_ [
[An, ).. 1J
6 2]  3 4
= ~ [ ~~
?
k= l II
33 . 8 =_I_ [
17. Hyperbola; principal axes spanned by [2; 1] and
II
39. a. ijth entry of |AB| = |Σₖ a_ik b_kj| ≤ Σₖ |a_ik| |b_kj| = ijth entry of |A||B|
a;;
and those closest are
±
1
11
u2 ~
vA2
23. Yes; A = ½(M + Mᵀ)
q(v̄) = v̄ · λv̄ = λ.
25. Choo e vectors ü1 and Ü? a in Fact • 7 .52 . . W n.te 0.1 5 ] . ± [ 0.26 0.37
ft
Note thal ow
= Ci
VJ
+ c2
2·
19.
so that
[]AÜI I2
2 39. E 1 . = span [  1] and E1.4 = span [ 3 ] . .L.00 ks ro ugbly li ke the phase portrait in Exerci se 35.
cf ii Aii l I I ~ + c~ I ] Aii2le
=
?
?
= cjuj
~
(/)
41.
__.. trajectory I
?
+ c:zui
?
?)
2
?
:S (Cl f C2 Uj EA,
We conclude that ‖Av̄‖ ≤ σ₁. The proof of σ₂ ≤ ‖Av̄‖ is analogous.
27. Apply Exercise 26 to a unit eigenvector v̄ with associated eigenvalue λ.
33. False; consider A = [ T
T
35. (A A) 1 A Üi
l
0 37. Yes, s ince A A
=
 [l ']
2 1. x(t) =
for i = l , ... , r
= r + I , . . . , 111 31 <(,)
!11
trajectory 3
door s Iams 1·r w(O) a(O)
23 . x(t ) is a so lution.
29.x(t) = 2 for i
xo
1
0
27. x(l) = 0.2e
~ ~] .
_l_ Vi = l!i
7
v witJ1
 trajectory 2 EA.,
61
[
~ ] + 0.4e l [ ~ J
~ ,, [
1
s1
3. √2 e^{3πi/4} 43. a. Competition
2
5. e0 11 ( cos(2t)  i sin(2t)); spiraJs inward, in the
b. Y
clockwise direction
n
9. stable
X
c. Species 1 "wins" if
CHAPTER 8
8.1
1. x(t) = 7e^{5t}
_ng~
< 2.
45 . b. y
3. P(t) = 7e^{0.03t}
1 ~ 1 , has a vertical asymptote =(Cl  k)t. + I ) 1/ (i  k)
9. x(t)
at r = I.
15. X
II. x(t) = tan (t)
13. a. about 104 billion dollars b. about 150 billion dollars 15. The solution of the equation
T
6f
c. Species I "wins" if y(O) < l · x(O) 2 47 . a. syrnbiosis b. the eigenvalues are ~ (  5 ± .J9 + 4k2). There are two negati ve eigenvalues if k < 2; if k > 2 there is a negati ve and a positive eigenvalue.
35. ekT! LOO
11. a. B = 2A d. The zero state is a stable equilibrium of the system dx̄/dt = grad(q) if (and only if) q is negative definite (then the eigenvalues of A and B are all negative). 13. The eigenvalues of A⁻¹ are the reciprocals of the eigenvalues of A; the real parts have the same sign.
0.8e0·81
7. x(t) =
'
7. not stable
33.
5. y(t) = 
I. 1
8.2
[ 2]+ e [I] _
< A.2 .
=
2 is
49. g(t) = 45e^{−0.8t} − 15e^{−0.4t} and h(t) = −45e^{−0.8t} + 45e^{−0.4t}
17.
37. Eu
= span [ ~ Jand
E1.6
= span [ ~
l
Looks
rough ly like the phase portrait in Figure '10.
19. False; consider A with eigenvalues 1, 2, −4. 21. a.
e^{pt} [cos(qt); sin(qt)], a spiral if p ≠ 0 and a circle if p = 0. Approaches the origin if p is negative.
53. x(t) =
17. If lk I < 1
= H
55. Eigenvalues λ₁,₂ = ½(−q ± √(q² − 4p)); both eigenvalues are negative
~~
d
I af
= O.Sb
=
+
s
I
0.07s
b. b(t) = 50.000e 0·071  49,000e 0·51 s(t) = I , 000e0·071 ?3
~ ·
[x(t); y(t)] = [cos(3t), −sin(3t); sin(3t), cos(3t)] [a; b], where a and b are arbitrary constants
25.
Eigenvalue 2 + 4i with corresponding eigenvector [ _ ]; p = 2, q = 4.
X(l ) = e2
[
~
w=
39. The _ystem
e Fact 8.2. 6, ' ith
~
[
J.
V= [ _
n.
c~ng;~
c:~~:~ ]
2 in (2r  cos (21) counterclockwi e direction.
E[ ig~nvaJ] ue_i
with
1
sin ) [ in (l ) +CO (I orientation.
c
[:]=
] .
I
I
in r)
=
A n ellipse wi th clockwi e

35.
trajectory .r (t)
/
ker A)
//
/
21
:n::.nJ' Un~n~r ;
tr
~::e~ ~].[~ ~].[~ tY~
a
23 . : ::"
Dimension: 1. 25. Basis: 1 − t. Note that Ax̄ is parallel to ker(A) for all x̄ because A² = 0. Check analytically that x̄(t) = x̄₀ + tAx̄₀ solves dx̄/dt = Ax̄:
dx

dl
_
Ax̄ = Ax̄₀ + tA²x̄₀ = Ax̄₀. 37. Use Exercise 35 to solve the system
dc̄/dt = (A − λ₀I₂)c̄,
then apply Exercise 8.1.24.
~] +
c2e
21
[
~
17'
b. Basi. of ker(T) : 5  61 + 12 0 3 1 ] [ Basis of im(T ): [
J
The
2 1 .
21
span
23.
9.3
_:]). Es=
9.2
29. Basis: (1, 1, 1, ...), (0, 1, 2, 3, ...). Dimension: 2.
5. Dimension: 2.
. 7.
[~
u~
9. [I
0 0
im (T):
25.
I)
13. t − t³, 1 + 2t − 3t². 15. ker(F) = E₀ = skew-symmetric matrices, im(F) = E₁ = symmetric matrices. F is diagonalizable.
=
j I + ! + ~ + . .. = :16· ao
h
k = 0 for all k. ·v 2 2 bk = 7(i[ if k is odd { 0 if k i even. l 7T 2 29. I: 7'Y = u0 k odd k
~J
Ul [~ J
in(!)) .
~ ~ J is sy mmetric and positi ve
19. A must be positive definite (compare with Exercise 15). 21. Yes. 23. 1, 2t − 1.
~]
11. Basis of ker(T): 1 − 2t + t². Basis of
(b  l ~ 2 +a 2 ((b  l ) cos(r) +a
definite. This means that b = c, a > 0, and ad − bc > 0. 17. If ker(T) = {0}.
27.
2/ 3
J
No solution if a = 0 and b ≠ 0. b. Yes a. ker(T) = {0} c. There is (just one) such polynomial. a. If S is invertible. b. If S is orthogonal. Yes. For positive k. True. The angle is θ.
15. If the matrix [
The e four matrices
A. O v = A..f(ü) . I. Ye
3. Ye
3. 5. 7. 9. II.
(l
b I
13. The two norms are equal, by Fact 9.3.6.
v̄ ⊕ w̄ = f(v̄) + f(w̄)
Dimension: 2.
31. a. Basis: 1, t, t². Dimension: 3. b. Basis: 1, t³. Dimension: 2. 33. Linear; not an isomorphism. 35. Linear; not an isomorphism. 37. Isomorphism. 39. Linear; not an isomorphism. 41. Linear; not an isomorphism. 43. Subspace.
(U ~ l [~ ~ ]) .
a
b. f (t ) =
fo rm an eigenbas is.
a
2.
~ ] , [~
[b 1
· a.
61. The standard basis of ℝⁿˣⁿ (the matrices with one entry equal to 1 and all other entries equal to 0). 63. Yes. Let f be an invertible function from ℝ² to ℝ, and define
~ ~ ] , [ ~ ~] . 4
59. E_ , = span ([ _ :
]
degree :::: I, for exa mple, g(l ) = 1.
= c ,e r + c2e  2r _
55. Eigenspaces: E₁ = real numbers, E₋₁ = imaginary numbers. Eigenbasis 1, i. 57. Eigenspaces: E₁ = even functions, E₋₁ = odd functions.
3
0  1 ' 0 5 19. T (J) = gf , where g is a polynomial of
[!~~ rTfiTT~ m irn E~rr;
mn .
27. Basis: [
2
= Ax0
Cte  r [
x(l )
CHAPTER 9
9.1
1. Not a subspace. 3. Subspace with basis t − 1, 2 − t², 5 − t³. 5. Subspace with basis 1, t². 7. Subspace. 9. Subspace. 11. Not a subspace. 13. Not a subspace. 15. Subspace. 17. Matrices with one entry equal to 1 and all other
uj j j
=
53 . E igen paces: E2 = sy mmeti c matrices,
/ /
=
so lu l.l ons of the given OE are
in ward, in the
c[ ~rre t]on[d~:g(~;g]envector
I + i · x (l ) =
[~ ] =
where k 1 , k2 • h ar arbitrary con tants. The olut ion of the gi en y tem are .~ ( 1 ) = e )..1 r), by Exerci se 8. 1.24. The zer state is a stable equili bri um soluti n if and onl y if) the real part of A. i negative.
~].
e r [ cos (2 l ) + sin ( l ) ] . Spi ral
29.
=
49. 0_W = T(0_V) − T(0_V) = T(0_V + 0_V) − T(0_V) = T(0_V) + T(0_V) − T(0_V) = T(0_V)
n[~~~~;; c~:~::) ] [~ ]

corresponding eigenvector [ e r [
[ ~0
51. Convert the DE into the system dx/dt = y, dy/dt = −2x − 3y, with solutions
27. Eigen alue I + 2i with
~(r) =
dc =
dt
A23
45. Yes, yes.
9.4
=
I . Ce 51
3. ~ e 31 + ce1 (use Fa t 9.4. 1') 5.  I  r + Ce1 7. c ,e·lr + c2e3r
9. II.
+ c2e  31 e (c1 cos(r) + c2 sin(l)) c 1e 1
31
13. e^{t}(c₁ + c₂t) (compare with Example 10).
+ C2 l + '2 1)  ~ CO ( 1) cos(1) + c 1 cos(J2t ) + c~ . i n( v'2t ) Ci + C2e + C3e  lt C(
17. e 1 (C t 19. 2 1.
1
1
T . 3e 51 25 . e  21 +~ _7. 
SUBJECT INDEX
in 1)
29.
1i n
31.
u r )=
,.....lim
r) 
b. 2e 1

e 21
d. In part a the oscillator goes through the equilibrium state once; in part b it never reaches it.
37. x(t) = te^{3t} 39. e^{t}(c₁ + c₂t + c₃t²) 41. λ is an eigenvalue with dim(E_λ) = n, because E_λ is the kernel of the nth-order linear differential
t
sin(2r )
qf CI 
v(r) =
. 5. a. c 1e 1 + c1 e ~ 1 c.  e  t + _ e 2t
'.!.!..Kk '·
operator T(x̄) − λx̄.
ekt fm)
.n. !'o
= terminal velocity.
45. e, [ l
o
(1)+
=
121]
1~
in (l )+c , e 21 +c2 e 31
A
Affine system, 337
Algebraic multiplicity of an eigenvalue, 312
and geometric multiplicity, 325, 400
and complex eigenvalues, 349
Algorithm, 23
Angle, 186
and orthogonal transformations, 208
Argument (of a complex number), 343
of a product, 345
Associative law for matrix multiplication, 104
Asymptotically stable equilibrium, 358 (see stable equilibrium)
Augmented matrix, 14
B
Basis, 151, 482
and coordinates, 377
and dimension, 162, 482
and unique representation, 154
finding basis of a linear space, 484
of an image, 167
of a kernel, 164
of ℝⁿ, 169
standard, 163
Binary digits, 143, 351
Bounded trajectory, 367
C
Carbon dating, 451
Cauchy-Schwarz inequality, 186, 515
and angles, 187
and triangle inequality, 192
Center of mass, 31
Characteristic polynomial, 311
and algebraic multiplicity, 312
and its derivative, 320
of linear differential operator, 532
of similar matrices, 393
Cholesky factorization, 423
Circulant matrix, 355
Classical adjoint, 285
Closed-formula solution
for discrete dynamical system, 301
for inverse, 285
for least-squares approximation, 223
for linear system, 283
Codomain (of a function), 55
Coefficient matrix
of a linear system, 14
of a linear transformation, 53
Column of a matrix, 12
Column space of a matrix, 132
Column vector, 13
Commuting matrices, 102
Complement, 174
Complex eigenvalues, 348
and determinant, 350
and rotation-dilation matrices, 359, 361, 380, 465
and stable equilibrium, 359
and trace, 350
Complex numbers, 341
and rotation-dilations, 345, 503, 510
in polar form, 344
Complex-valued functions, 458
derivative of, 459
exponential, 460
Component of a vector, 13
Composite functions, 97
Computational complexity, 94, 122
Consistent linear system, 33
Continuous least-squares condition, 236, 516
Continuous linear dynamical system, 442
stability of, 463
with complex eigenbasis, 463
with real eigenbasis, 449
Coordinates, 377, 496
Coordinate transformation, 497
Coordinate vector, 377, 496
Correlation (coefficient), 188, 189, 190
Cramer's Rule, 283, 285
Cross product
in ℝ³, 65, 554
in ℝⁿ, 269
Cubic equation (Cardano's formula), 320, 356
Cubic splines, 27
D
Data compression, 433
Data fitting, 225
multivariate, 228
De Moivre's formula, 346
Determinant, 243
and characteristic polynomial, 311
and complex eigenvalues, 350
and Cramer's rule, 283
and elementary row operations, 256
and invertibility, 257
and Laplace expansion, 262
and QR factorization, 277
as area, 274
as expansion factor, 279
as volume, 276
is linear in rows and columns, 252, 253
of inverse, 259
of orthogonal matrix, 273
of permutation matrix, 248
of product, 258, 272
of rotation matrix, 273
of similar matrices, 393
of 3 × 3 matrix, 240, 276
of transpose, 251
of triangular matrix, 246
of 2 × 2 matrix, 239, 274
Vandermonde, 266
Diagonalizable matrices, 388, 389
and eigenbases, 388
and powers, 391
orthogonally, 402, 408
simultaneously, 400
Diagonal matrix, 13
Diagonal of a matrix, 13
Dilation, 71
Dimension, 162, 482
and isomorphism, 488
of image, 167, 506
of kernel, 165, 506
of orthogonal complement, 220
Direction field, 443
Discrete linear dynamical system, 301
and complex eigenvalues, 359, 361, 363
and stable equilibrium, 358, 359
Distance, 514
Distributive laws, 106
Domain of a function, 55
Dominant eigenvector, 448
Dot product, 28, 211, 551
and matrix product, 103, 211
and product Ax̄, 41
rules for, 552
Dynamical system (see continuous, discrete linear dynamical system)
E
Eigenbasis, 326, 490
and continuous dynamical systems, 449, 463
and diagonalization, 388
and discrete dynamical systems, 301
and distinct eigenvalues, 329
and geometric multiplicity, 330
Eigenfunctions, 532
Eigenspaces, 321, 490
and geometric multiplicity, 325
and principal axes, 420
Eigenvalue(s), 298, 490
algebraic multiplicity of, 312
and characteristic polynomial, 308
and determinant, 350
and positive (semi)definite matrices, 417
and QR factorization, 316
and shear-dilations, 395
and shears, 393
and singular values, 425
and stable equilibrium, 359, 463
and trace, 350
complex, 349, 350, 361
geometric multiplicity of, 325
of orthogonal matrix, 300
of rotation-dilation matrix, 359
of similar matrices, 393
of symmetric matrix, 406
of triangular matrix, 309
power method for finding, 353
Eigenvectors, 298
and linear independence, 329
dominant, 448
of symmetric matrix, 403
Elementary matrix, 115
Elementary row operations, 21
and determinant, 256
and elementary matrices, 115
Ellipse, 85
as image of the unit circle, 85, 408, 427
as level curve of quadratic form, 418, 420
as trajectory, 363, 370, 465
Equilibrium (state), 358, 368
Error (in least-squares solution), 221
Error-correcting codes, 144
Euler identities, 518, 519
Euler's formula, 461
Euler's theorem, 335, 386
Exotic operations (in a linear space), 496
Expansion factor, 279
Exponential function, 440
complex-valued, 460
Flow line, 443
Fourier analysis, 236, 518, 520
Function, 55
Fundamental theorem of algebra, 347
G
Gaussian integration, 527
Gauss-Jordan elimination, 22
and determinant, 256, 257
and inverse, 91
flow chart for, 24
Geometric multiplicity of an eigenvalue, 325
and algebraic multiplicity, 325, 400
and eigenbases, 330
Golden section, 340
Gram-Schmidt process, 201, 515
and determinant, 273
and orthogonal diagonalization, 408
and QR factorization, 202
H
Harmonics, 520
Hilbert space ℓ₂, 195, 513
and quantum mechanics, 523
Homogeneous linear system, 48
Hyperbola, 420
Hyperplane, 173
I
F
Factorization
Cholesky, 423
LDLᵀ, 218, 424
LDU, 116, 218
LLᵀ, 423
LU, 116, 250
QR, 202, 217, 277, 316, 424
SDS⁻¹, 391
SDSᵀ, 402
UΣVᵀ, 431
Field, 348, 351
Identity matrix Iₙ, 57
Identity transformation, 57
Image of a function, 128
Image of a linear transformation, 132, 486
and rank, 167
is a subspace, 133, 146
orthogonal complement of, 219
written as a kernel, 143
Image (of a subset), 67
of the unit circle, 85, 408, 427
of the unit square, 68
Imaginary part of a complex number, 342
Implicit function theorem, 272
Inconsistent linear system, 6, 21, 33, 42
and least squares, 221
Indefinite matrix, 416
Infinite-dimensional linear space, 486
Inner product (space), 511
Input-output analysis, 7, 29, 96, 120, 386
Intermediate value theorem, 84
Intersection of subspaces, 156
dimension of, 174
Inverse (of a matrix), 88, 91
and Cramer's rule, 285
determinant of, 259
of an orthogonal matrix, 212
of a product, 105
of a 2 × 2 matrix, 61, 91
of a transpose, 213
Inversion (in a matrix), 243
Invertible function, 86
Invertible matrix, 88, 89
and kernel, 138
and determinant, 257
Isomorphism, 487
and dimension, 488
J Jordan normal form, 393
K
Kernel, 134, 486
and invertibility, 138
and linear independence, 154
and rank, 165
dimension of, 165, 167, 506
is a subspace, 138, 146, 486
k-parallelepiped, 276
k-volume, 276
L
Laplace expansion, 262
LDLᵀ factorization, 218, 424
LDU factorization, 116, 218
Leading one, 19
Leading variable, 20
Least-squares solutions, 23, 195, 222
and normal equation, 223
minimal, 231
Left inverse, 159
Length of a vector, 177, 552
and orthogonal transformations, 207
Linear combination, 38, 479
and span, 131
Linear dependence, 151
Linear differential equation, 529
homogeneous, 529
order of, 529
solution of, 531
solving a first-order, 440, 538
solving a second-order, 534, 535
Linear differential operator, 528
characteristic polynomial of, 532
eigenfunctions of, 532
kernel of, 531
image of, 539
Linear independence
in ℝⁿ, 151
in a linear space, 482
and dimension, 164
and kernel, 154
and relations, 152
of eigenvectors, 329
of orthonormal vectors, 179
Linearity of the determinant, 252, 253
Linear relations, 152
Linear space(s), 479
basis of, 482
dimension of, 482
finding basis of, 484
infinite-dimensional, 486
isomorphic, 487
Linear system
closed-formula solution for, 283
consistent, 33
homogeneous, 48
inconsistent, 6, 33
least-squares solutions of, 222
matrix form of, 39
minimal solution of, 231
number of solutions of, 33
of differential equations: see continuous linear dynamical system
unique solution of, 36
vector form of, 37
with fewer equations than unknowns, 35
Linear transformation, 56, 66, 486
image of, 132, 486
kernel of, 134, 486
matrix of, 59, 378, 501, 502
Lower triangular matrix, 13
LU factorization, 116, 250
and principal submatrices, 117
M
Main diagonal of a matrix, 13
Mass-spring system, 472
Matrix, 12 (for a composite entry such as "zero matrix" see zero)
Minimal solution of a linear system, 231
Minor of a matrix, 260
Modulus (of a complex number), 343
of a product, 345
Momentum, 31
Multiplication (of matrices), 101
and determinant, 258
column by column, 102
entry by entry, 103
is associative, 104
is noncommutative, 102
of partitioned matrices, 107
N
Negative feedback loop, 470
Neutral element, 479
Nilpotent matrix, 176, 412
Norm of a vector, 177
in an inner product space, 514
Normal equation, 223
O
Orthogonal complement, 179
dimension of, 220
is a subspace, 180
of an image, 219
Orthogonally diagonalizable matrix, 402, 408
Orthogonal matrix, 207, 212
determinant of, 273
eigenvalues of, 300
has orthonormal columns, 209
inverse of, 212
transpose of, 212
Orthogonal projection, 76, 181, 515
and Gram-Schmidt process, 201
and reflection, 76
as closest vector, 221
matrix of, 214, 224
Orthogonal transformations, 207
and orthonormal bases, 209
preserve angles, 208
preserve dot product, 215
Orthogonal vectors, 177, 514, 553
and Pythagorean theorem, 185
Orthonormal bases, 180, 183
and Gram-Schmidt process, 201
and orthogonal transformations, 209
and symmetric matrices, 402
Orthonormal vectors, 178
are linearly independent, 179
Oscillator, 509
P
Parallelepiped, 275
Parallel vectors, 550
Parametrization (of a curve), 129
Partitioned matrices, 107
Pattern (in a matrix), 242
Permutation matrix, 94
determinant of, 248
Perpendicular vectors, 177, 514, 553
Phase portrait
of discrete systems, 296, 301
of continuous systems, 445, 450
summary, 468
Piecewise continuous function, 1
Pivot columns, 167
Pivoting, 23
Polar form (of