Linear Algebra With Applications

LINEAR ALGEBRA WITH APPLICATIONS

Otto Bretscher, Harvard University

Prentice Hall, Upper Saddle River, New Jersey 07458

Library of Congress Cataloging-in-Publication Data
Bretscher, Otto.
Linear algebra with applications / Otto Bretscher.
p. cm. Includes index.
ISBN 0-13-190729-8
1. Algebras, Linear. I. Title.
QA184.B73 1996    512'.5-dc21    94-36942 CIP

Acquisitions Editor: George Lobell; Editorial Assistant: Gale Epps; Assistant Editor: Audra J. Walsh; Editorial Director: Tim Bozik; Editor-in-Chief: Jerome Grant; Assistant Vice President of Production and Manufacturing: David W. Riccardi; Editorial/Production Supervision: Richard DeLorenzo; Managing Editor: Linda Mihatov Behrens; Executive Managing Editor: Kathleen Schiaparelli; Manufacturing Buyer: Alan Fischer; Manufacturing Manager: Trudy Pisciotti; Director of Marketing: John Tweeddale; Marketing Assistant: Diana Penha; Creative Director: Paula Maylahn; Art Director: Maureen Eide; Cover and Interior Design/Layout: Maureen Eide; Cover Image: Thorncrown Chapel, copyright by R. Greg Hursley; Eureka Springs, Arkansas.

© 1997 by Prentice-Hall, Inc.
Simon & Schuster / A Viacom Company
Upper Saddle River, NJ 07458

All rights reserved. No part of this book may be reproduced, in any form or by any means, without permission in writing from the publisher.

Printed in the United States of America.
10 9 8 7 6 5 4 3 2 1
ISBN 0-13-190729-8

Prentice-Hall International (UK) Limited, London
Prentice-Hall of Australia Pty. Limited, Sydney
Prentice-Hall Canada Inc., Toronto
Prentice-Hall Hispanoamericana, S.A., Mexico
Prentice-Hall of India Private Limited, New Delhi
Prentice-Hall of Japan, Inc., Tokyo
Simon & Schuster Asia Pte. Ltd., Singapore
Editora Prentice-Hall do Brasil, Ltda., Rio de Janeiro

CONTENTS

1 Linear Equations
  1.1 Introduction to Linear Systems
  1.2 Matrices and Gauss-Jordan Elimination
  1.3 On the Solutions of Linear Systems

2 Linear Transformations
  2.1 Introduction to Linear Transformations and Their Inverses
  2.2 Linear Transformations in Geometry
  2.3 The Inverse of a Linear Transformation
  2.4 Matrix Products

3 Subspaces of R^n and Their Dimensions
  3.1 Image and Kernel of a Linear Transformation
  3.2 Subspaces of R^n; Bases and Linear Independence
  3.3 The Dimension of a Subspace of R^n

4 Orthogonality and Least Squares
  4.1 Orthonormal Bases and Orthogonal Projections
  4.2 Gram-Schmidt Process and QR Factorization
  4.3 Orthogonal Transformations and Orthogonal Matrices
  4.4 Least Squares and Data Fitting

5 Determinants
  5.1 Introduction to Determinants
  5.2 Properties of the Determinant
  5.3 Geometrical Interpretations of the Determinant; Cramer's Rule

6 Eigenvalues and Eigenvectors
  6.1 Dynamical Systems and Eigenvectors: An Introductory Example
  6.2 Finding the Eigenvalues of a Matrix
  6.3 Finding the Eigenvectors of a Matrix
  6.4 Complex Eigenvalues
  6.5 Stability

7 Coordinate Systems
  7.1 Coordinate Systems in R^n
  7.2 Diagonalization and Similarity
  7.3 Symmetric Matrices
  7.4 Quadratic Forms
  7.5 Singular Values

8 Linear Systems of Differential Equations
  8.1 An Introduction to Continuous Dynamical Systems
  8.2 The Complex Case: Euler's Formula

9 Linear Spaces
  9.1 An Introduction to Linear Spaces
  9.2 Coordinates in a Linear Space
  9.3 Inner Product Spaces
  9.4 Linear Differential Operators

APPENDIX A Vectors
Answers to Odd-Numbered Exercises
Index

To my parents, Otto and Margrit Bretscher-Zwicky, with love and gratitude

PREFACE

Key Features

• Linear transformations are introduced early in the text to make the discussion of matrix operations more meaningful and easier to visualize. Mappings and transformations become a theme in the text thereafter.
• Discrete and continuous dynamical systems are used as a unifying theme, a motivation for eigenvectors, and a major application of linear algebra.
• The reader will find an abundance of thought-provoking (and occasionally delightful) problems and exercises.
• Abstract concepts are introduced gradually throughout the text. The major ideas are carefully developed at various levels of generality before the student is introduced to abstract vector spaces.
• Visualization and geometrical interpretation are emphasized extensively throughout.

A police officer on patrol at midnight, so runs an old joke, notices a man crawling about on his hands and knees under a streetlamp. He walks over to investigate, whereupon the man explains in a tired and somewhat slurred voice that he has lost his housekeys. The policeman offers to help, and for the next five minutes he too is searching on his hands and knees. At last he exclaims, "Are you absolutely certain that this is where you dropped the keys?"

"Here? Absolutely not. I dropped them a block down, in the middle of the street."

"Then why the devil have you got me hunting around this lamppost?"

"Because this is where the light is."

It is mathematics, and not just (as Bismarck claimed) politics, that consists in "the art of the possible." Rather than search in the darkness for solutions to problems of pressing interest, we contrive a realm of problems whose interest lies above all in the fact that solutions can conceivably be found. Perhaps the largest patch of light surrounds the techniques of matrix arithmetic and algebra, and in particular matrix multiplication and row reduction. Here we might begin with Descartes, since it was he who discovered the conceptual meeting-point of geometry and algebra in the identification of Euclidean space with R^3; the techniques and applications proliferated since his day. To organize and clarify those is the role of a modern linear algebra course.

+ Computers and Computation

An essential issue that needs to be addressed in establishing a mathematical methodology is the role of computation and of computing technology. Are the proper subjects of mathematics algorithms and calculations, or are they grand theories and abstractions that evade the need for computation? If the former, is it important that the student learn to carry out the computations with pencil and paper, or should the algorithm "press the calculator's x^(-1) button" be allowed to substitute for the traditional method of finding an inverse? If the latter, should the abstractions be taught through elaborate notational mechanisms or through computational examples and graphs?

We seek to take a consistent approach to these questions: algorithms and computations are primary, and precisely for this reason computers are not. Again and again we examine the nitty-gritty of row reduction or matrix multiplication in order to derive new insights. Most of the proofs, whether of the rank-nullity theorem, the volume-change formula for determinants, or the spectral theorem for symmetric matrices, are in this way tied to hands-on procedures.

The aim is not just to know how to compute the solution to a problem, but to imagine the computations. The student needs to perform enough row reductions by hand to be equipped to follow a line of argument of the form, "If we calculate the reduced row echelon form of such a matrix . . . ," and to appreciate in advance what the possible outcomes of a particular computation are.

In applications, the solution to a problem is hardly more important than recognizing its range of validity, and appreciating how sensitive it is to perturbations of the input. We emphasize the geometric and qualitative nature of the solutions, notions of approximation, stability, and "typical" matrices. The discussion of Cramer's rule, for instance, underscores the value of closed-form solutions for visualizing a system's behavior and understanding its dependence on initial conditions.

The availability of computers is, however, neither to be ignored nor regretted. Each student and instructor will have to decide how much practice is needed to be sufficiently familiar with the inner workings of the algorithm. As the explicit computations are gradually replaced by a theoretical overview of how the algorithm works, the burden of calculation will be taken up by technology, particularly for those wishing to carry out the more numerical and applied exercises.

It is possible to turn your linear algebra course into a more computer-oriented or enhanced course by wrapping with this text either ATLAST Computer Exercises for Linear Algebra (1997, edited by S. Leon, E. Herman, and R. Faulkenberry) or Linear Algebra Labs with MATLAB, 2nd edition (1996, by D. Hill and D. Zitarelli). Each of these supplements goes beyond just using the computer for computational matters. Each takes the standard topics in linear algebra and finds a method of illuminating key ideas visually with the computer. Thus both have M-files available that can be delivered by the Internet.

+ Examples, Exercises, Applications, and History

The exercises and examples are the heart of this book. Our objective is not just to show our readers a "patch of light" where questions may be posed and solved, but to convince them that there is indeed a great deal of useful, interesting material to be found in this area, if they take the time to look around. Consequently, we have included genuine applications of the ideas and methods under discussion to a broad range of sciences: physics, chemistry, biology, economics, and, of course, mathematics itself. Often we have simplified them to sharpen the point, but they use the methods and models of contemporary scientists.

With such a large and varied set of exercises in each section, instructors should have little difficulty in designing a course that is suited to their aims and to the needs of their students. Quite a few straightforward computation problems are offered, of course. Simple (and, in a few cases, not so simple) proofs and derivations are required in some exercises. In many cases, theoretical principles that are discussed at length in more abstract linear algebra courses are here found broken up in bite-size exercises.

The examples make up a significant portion of the text; we have kept abstract exposition to a minimum. It is a matter of taste whether general theories should give rise to specific examples, or be pasted together from them. In a text such as this one, attempting to keep an eye on applications, the latter is clearly preferable: the examples always precede the theorems in this book.

Scattered throughout the mathematical exposition are quite a few names and dates, some historical accounts, and anecdotes as well. Students of mathematics are too rarely shown that the seemingly strange and arbitrary concepts they study are the results of long and hard struggles. It will encourage the readers to know that a mere two centuries ago some of the most brilliant mathematicians were wrestling with problems such as the meaning of dimension or the interpretation of e^{it}, and to realize that the advance of time and understanding actually enables them, with some effort of their own, to see farther than those great minds.

+ Outline of the Text

Chapter 1. This chapter provides a careful introduction to the solution of systems of linear equations by Gauss-Jordan elimination. Once the concrete problem is solved, we restate it in terms of matrix formalism and discuss the geometric properties of the solutions.

Chapter 2. Here we raise the abstraction a notch and reinterpret matrices as linear transformations. The reader is introduced to the modern notion of a function, as an arbitrary association between an input and an output, which leads into a discussion of inverses. The traditional method for finding the inverse of a matrix is explained: it fits in naturally as a sort of automated algorithm for Gauss-Jordan elimination. We define linear transformations primarily in terms of matrices, since that is how they are used; the abstract concept of linearity is presented as an auxiliary notion. Rotations in R^2 are emphasized, both as archetypal, easily visualized examples, and as preparation for future applications.

Chapter 3. We introduce subspaces, images and kernels, linear independence, and bases, in the context of R^n. This allows a thorough discussion of the central concepts, without requiring a digression at this early stage into the slippery issue of coordinate systems.

Chapter 4. This chapter includes some of the most basic applications. We introduce orthonormal bases and the Gram-Schmidt process, along with the QR factorization. The calculation of correlation coefficients is discussed, and the important technique of least-squares approximations is explained, in a number of different contexts.

Chapter 5. Our discussion of determinants is algorithmic, based on the counting of "patterns" (a transparent way to deal with permutations). We derive the properties of the determinant from a careful analysis of this procedure and tie it together with Gauss-Jordan elimination. The goal is to prepare for the main application of determinants: the computation of characteristic polynomials.

Chapter 6. This chapter introduces the central application of the latter half of the text: linear dynamical systems. We begin with discrete systems, and are

naturally led to seek eigenvectors, which characterize the long-term behavior of the system. Qualitative behavior is emphasized, particularly stability conditions. Complex eigenvalues are explained, without apology, and tied into earlier discussions of two-dimensional rotation matrices.

Chapter 7. Having introduced the change of coordinates to an eigenbasis in Chapter 6, we now explore the "general" theory of coordinate systems (still firmly fixed in R^n, however). This leads to a more comprehensive discussion of diagonalization, and to the spectral theorem for symmetric matrices. These ideas are applied to quadratic forms, and conic sections are explained.

Chapter 8. Here we apply the methods developed for discrete dynamical systems to continuous ones, that is, to systems of first-order linear differential equations. Again, the cases of real and complex eigenvalues are discussed.

Chapter 9. The book ends where many texts begin: with the theory of abstract vector spaces (which are here called "linear spaces," to prevent the confusion that some students experience with the term "vector"). The chapter begins with assembling the underlying principles of the many examples that have come before; from this lush growth the general theory then falls like ripe fruit. Once the reader has been convinced that there is nothing new here, the book concludes with important applications that are fundamentally new, in particular Fourier analysis. (with David Steinsaltz)

+ Acknowledgments

I first thank my students, colleagues, and assistants at Harvard University for the key role they have played in developing this text out of a series of rough lecture notes. The following professors, who have taught the course with me, have made invaluable contributions: Persi Diaconis, Edward Frenkel, David Kazhdan, Barry Mazur, David Mumford, and Shlomo Sternberg.

I owe special thanks to William Calder and Robert Kaplan for their thoughtful review of the manuscript. I wish to thank Sylvie Bessette for the careful preparation of the manuscript and Paul Nguyen for his well-drawn figures. I am grateful to those who have contributed to the book in many ways: Menoo Cung, Srdjan Divac, Robin Gottlieb, Luke Hunsberger, Bridget Neale, Akilesh Palanisamy, Rita Pang, Esther Silberstein, Radhika de Silva, Jonathan Tannenhauser, Selina Tinsley, and Larry Wilson.

I have received valuable feedback from the book's reviewers: Frank Beatrous, University of Pittsburgh; Tracy Bibelnieks, University of Minnesota; Jeff D. Farmer, University of Northern Colorado; Konrad J. Heuvers, Michigan Technological University; Michael Kallaher, Washington State University; Daniel King, Oberlin College; Richard Kubelka, San Jose State University; Peter C. Patton, University of Pennsylvania; Jeffrey M. Rabin, University of California, San Diego; Daniel B. Shapiro, Ohio State University; and David Steinsaltz, Technische Universität Berlin.

I also thank my editor, George Lobell, for his encouragement and advice, and Rich DeLorenzo for coordinating book production. The development of this text was supported by a grant from the In-

Otto Bretscher
Department of Mathematics
Harvard University
Cambridge, MA 02138 USA
e-mail: [email protected]

1 LINEAR EQUATIONS

1.1 INTRODUCTION TO LINEAR SYSTEMS

Traditionally, algebra was the art of solving equations or systems of equations. The word algebra comes from the Arabic al-jabr, which means reduction. The term was first used in a mathematical sense by Mohammed al-Khowarizmi, who lived about A.D. 800 in Baghdad. Linear algebra, then, is the art of solving systems of linear equations.

The need to solve systems of linear equations frequently arises in mathematics, statistics, physics, astronomy, engineering, computer science, and economics.

Solving systems of linear equations is not conceptually difficult. For small systems, ad hoc methods certainly suffice. Larger systems, however, require more systematic methods. The approach generally used today was beautifully explained 2000 years ago in a Chinese text, the "Nine Chapters on the Mathematical Art" (Chiu-chang Suan-shu), which contains the following example.¹

The yield of one sheaf of inferior grain, two sheaves of medium grain, and three sheaves of superior grain is 39 tou.² The yield of one sheaf of inferior grain, three sheaves of medium grain, and two sheaves of superior grain is 34 tou. The yield of three sheaves of inferior grain, two sheaves of medium grain, and one sheaf of superior grain is 26 tou. What is the yield of inferior, medium, and superior grain?

In this problem the unknown quantities are the yields of one sheaf of inferior, one sheaf of medium, and one sheaf of superior grain. Let us denote these

¹B. L. van der Waerden: Geometry and Algebra in Ancient Civilizations. Springer-Verlag, Berlin, 1983.
²A tou is a bronze bowl used as a food container during the middle and late Chou dynasty (c. 900-255 B.C.).

quantities by x, y, and z, respectively. The problem can then be represented by the following system of linear equations:

x + 2y + 3z = 39
x + 3y + 2z = 34
3x + 2y + z = 26

To solve for x, y, and z, we need to transform this system from the form above into the form

x = ...
y = ...
z = ...

In other words, we need to eliminate the terms that are off the diagonal (2y and 3z in the first equation, x and 2z in the second, and 3x and 2y in the third) and make the coefficients of the variables along the diagonal equal to 1.

We can accomplish these goals step by step, one variable at a time. In the past you may have simplified systems of equations by adding equations to one another or subtracting them. In this system, we can eliminate the variable x from the second equation by subtracting the first equation from the second:

x + 2y + 3z = 39
x + 3y + 2z = 34   (subtract the 1st equation)
3x + 2y + z = 26

->  x + 2y + 3z = 39
    y - z = -5
    3x + 2y + z = 26

To eliminate the variable x from the third equation, we subtract the first equation from the third equation three times. We multiply the first equation by 3 to get

3x + 6y + 9z = 117   (3 x 1st equation)

and then subtract this result from the third equation:

x + 2y + 3z = 39
y - z = -5
3x + 2y + z = 26   (subtract 3 x 1st equation)

->  x + 2y + 3z = 39
    y - z = -5
    -4y - 8z = -91

Similarly, we eliminate the variable y off the diagonal:

x + 2y + 3z = 39   (subtract 2 x 2nd equation)
y - z = -5
-4y - 8z = -91     (add 4 x 2nd equation)

->  x + 5z = 49
    y - z = -5
    -12z = -111

Before we eliminate the variable z off the diagonal, we make the coefficient of z on the diagonal equal to 1, by dividing the last equation by -12:

x + 5z = 49
y - z = -5
-12z = -111   (divide by -12)

->  x + 5z = 49
    y - z = -5
    z = 9.25

Finally, we eliminate the variable z off the diagonal:

x + 5z = 49   (subtract 5 x last equation)
y - z = -5    (add the last equation)
z = 9.25

->  x = 2.75
    y = 4.25
    z = 9.25

The yields of inferior, medium, and superior grain are 2.75, 4.25, and 9.25 tou per sheaf, respectively. By substituting these values, we can check that x = 2.75, y = 4.25, z = 9.25 is indeed the solution of the system:

2.75 + 2 x 4.25 + 3 x 9.25 = 39
2.75 + 3 x 4.25 + 2 x 9.25 = 34
3 x 2.75 + 2 x 4.25 + 9.25 = 26

Happily, in linear algebra you are almost always able to check your solutions. It will help you if you get into the habit of checking now.

+ Geometric Interpretation

How can we interpret this result geometrically? Each of the three equations of the system defines a plane in x-y-z space. The solution set of the system consists of those points (x, y, z) which lie in all three planes, that is, the intersection of the three planes. Algebraically speaking, the solution set consists of those ordered triples of numbers (x, y, z) which satisfy all three equations simultaneously. Our computations show that the system has only one solution, (x, y, z) = (2.75, 4.25, 9.25). This means that the three planes intersect at the point (x, y, z) = (2.75, 4.25, 9.25), as shown in Figure 1.

[Figure 1. Three planes in space, intersecting at a point.]
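The elimination above is mechanical enough to hand to a machine. The following is a minimal Python sketch (our own illustration, not part of the original text) that retraces exactly the row operations performed above; the function name `subtract` and the list layout are incidental choices, and exact rational arithmetic is used so the yields come out as exactly 2.75, 4.25, and 9.25.

```python
# Each equation is stored as [a, b, c, d], meaning ax + by + cz = d.
from fractions import Fraction

eqs = [[Fraction(n) for n in row] for row in
       [[1, 2, 3, 39], [1, 3, 2, 34], [3, 2, 1, 26]]]

def subtract(target, source, factor):
    """Subtract factor * (source equation) from the target equation, in place."""
    for j in range(4):
        eqs[target][j] -= factor * eqs[source][j]

subtract(1, 0, 1)                          # 2nd equation minus the 1st
subtract(2, 0, 3)                          # 3rd equation minus 3 x 1st
subtract(0, 1, 2)                          # eliminate y off the diagonal ...
subtract(2, 1, -4)                         # ... and below it
eqs[2] = [v / eqs[2][2] for v in eqs[2]]   # divide the last equation by -12
subtract(0, 2, 5)                          # eliminate z off the diagonal
subtract(1, 2, -1)

print([float(e[3]) for e in eqs])          # [2.75, 4.25, 9.25]
```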

4•

Sec. 1.1 Introduction to Linear Systems •

Chap. 1 Linear Equations

While three planes in space usually intersect at a point, they may have a line in common (see Figure 2a) or may not have a common intersection at all, as shown in Figure 2b. Therefore, a system of three equations with three unknowns may have a unique solution, infinitely many solutions, or no solutions at all.

[Figure 2a. Three planes having a line in common.]
[Figure 2b. Three planes with no common intersection.]

+ A System with Infinitely Many Solutions

Next, let's consider a system of linear equations that has infinitely many solutions:

2x + 4y + 6z = 0
4x + 5y + 6z = 3
7x + 8y + 9z = 6

We can solve this system using elimination as discussed above. For simplicity, we label the equations with Roman numerals.

2x + 4y + 6z = 0   (divide by 2)
4x + 5y + 6z = 3
7x + 8y + 9z = 6

->  x + 2y + 3z = 0
    4x + 5y + 6z = 3   (-4(I))
    7x + 8y + 9z = 6   (-7(I))

->  x + 2y + 3z = 0
    -3y - 6z = 3   (divide by -3)
    -6y - 12z = 6

->  x + 2y + 3z = 0   (-2(II))
    y + 2z = -1
    -6y - 12z = 6   (+6(II))

->  x - z = 2
    y + 2z = -1
    0 = 0

After omitting the trivial equation 0 = 0, we have only two equations with three unknowns. The solution set is the intersection of two nonparallel planes in space, that is, a line. This system has infinitely many solutions. The two equations above can be written as follows:

x = z + 2
y = -2z - 1

We see that both x and y are determined by z. We can freely choose a value of z, an arbitrary real number; then the two equations above give us the values of x and y for this choice of z. For example:

• Choose z = 1. Then x = z + 2 = 3 and y = -2z - 1 = -3. The solution is (x, y, z) = (3, -3, 1).
• Choose z = 7. Then x = z + 2 = 9 and y = -2z - 1 = -15. The solution is (x, y, z) = (9, -15, 7).

More generally, if we choose z = t, an arbitrary real number, we get x = t + 2 and y = -2t - 1. Therefore the general solution is

(x, y, z) = (t + 2, -2t - 1, t) = (2, -1, 0) + t(1, -2, 1).

This equation represents a line in space, as shown in Figure 3.

[Figure 3. The line (x, y, z) = (t + 2, -2t - 1, t).]
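Since the solutions form a line, every value of the parameter t must satisfy all three equations. The short Python check below (an illustration added here, not the book's) samples a few values of t and confirms that each sampled point of the line (x, y, z) = (2, -1, 0) + t(1, -2, 1) solves the system.

```python
def residuals(x, y, z):
    """How far (x, y, z) is from satisfying each of the three equations."""
    return (2*x + 4*y + 6*z - 0,
            4*x + 5*y + 6*z - 3,
            7*x + 8*y + 9*z - 6)

for t in [-2, 0, 1, 7, 3.5]:
    x, y, z = t + 2, -2*t - 1, t
    assert residuals(x, y, z) == (0, 0, 0), (t, residuals(x, y, z))
print("all sampled points on the line satisfy the system")
```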

6•

Chap. 1 Linear Equations Sec. 1.1 Introduction to Li near Systems •

+ A System without Solutions

In the system below, perform the eliminations yourself to obtain the result shown:

x + 2y + 3z = 0
4x + 5y + 6z = 3
7x + 8y + 9z = 0

->  x - z = 2
    y + 2z = -1
    0 = -6

Whatever values we choose for x, y, and z, the equation 0 = -6 cannot be satisfied. This system is inconsistent, that is, it has no solutions.

EXERCISES

GOAL Set up and solve systems with as many as three linear equations with three unknowns, and interpret the equations and their solutions geometrically.

In Exercises 1 to 10, find all solutions of the linear systems using elimination as discussed in this section. Then check your solutions.

1. x + 2y = 1
   2x + 3y = 1

2. 4x + 3y = 2
   7x + 5y = 3

3. 2x + 4y = 3
   3x + 6y = 2

4. 2x + 4y = 2
   3x + 6y = 3

5. 2x + 3y = 0
   4x + 5y = 0

6. x + 2y + 3z = 8
   x + 3y + 3z = 10
   x + 2y + 4z = 9

7. x + 2y + 3z = 1
   x + 3y + 4z = 3
   x + 4y + 5z = 4

8. x + 2y + 3z = 0
   4x + 5y + 6z = 0
   7x + 8y + 10z = 0

9. x + 2y + 3z = 1
   3x + 2y + z = 1
   7x + 2y - 3z = 1

10. x + 2y + 3z = 1
    2x + 4y + 7z = 2
    3x + 7y + 11z = 8

In Exercises 11 to 13, find all solutions of the linear systems. Represent your solutions graphically, as intersections of lines in the x-y plane.

11. x - 2y = 2
    3x + 5y = 17

12. x - 2y = 3
    2x - 4y = 6

13. x - 2y = 3
    2x - 4y = 8

In Exercises 14 to 16, find all solutions of the linear systems. Describe your solutions in terms of intersecting planes. You need not sketch these planes.

14. x + 4y + z = 0
    4x + 13y + 7z = 0
    7x + 22y + 13z = 1

15. x + y - z = 0
    4x - y + 5z = 0
    6x + y + 4z = 0

16. x + 4y + z = 0
    4x + 13y + 7z = 0
    7x + 22y + 13z = 0

17. Find all solutions of the linear system
    x + 2y = a
    3x + 5y = b,
    where a and b are arbitrary constants.

18. Find all solutions of the linear system
    x + 2y + 3z = a
    x + 3y + 8z = b
    x + 2y + 2z = c,
    where a, b, c are arbitrary constants.

19. Consider a two-commodity market. When the unit prices of the products are P1 and P2, the quantities demanded, D1 and D2, and the quantities supplied, S1 and S2, are given by
    D1 = 70 - 2P1 + P2,    S1 = -14 + 3P1,
    D2 = 105 + P1 - P2,    S2 = -7 + 2P2.
    a. What is the relationship between the two commodities? Do they compete, like Volvos and BMWs, or do they complement one another, like shirts and ties?
    b. Find the equilibrium prices, that is, the prices for which supply equals demand for both products.

20. The US economist and Nobel laureate Wassily Leontief (born 1906 in St. Petersburg, Russia) was interested in the following question: What output should each of the industries in an economy produce to satisfy the total demand for all products? Here we consider a very simple example of input-output analysis, an economy with only two industries, A and B. Assume that the consumer demand for their products is, respectively, 1,000 and 780, in millions of dollars per year.
    [Figure: the two industries A and B, with the consumer demands on each.]
    Which outputs a and b (in millions of dollars per year) should the two industries generate to satisfy the demand? You may be tempted to say 1,000 and 780, respectively, but things are not quite as simple as that. We have to take into account the interindustry demand as well. Let us say that industry A produces electricity. Of course, producing almost any product will require electric power. Suppose that industry B needs 10 cents' worth of electricity for each $1 of output B produces, and that industry A needs 20 cents' worth of B's products for each $1 of output A produces. Find the outputs a and b needed to satisfy both consumer and interindustry demands.

21. Find the outputs a and b needed to satisfy the consumer and interindustry demands given below (see Exercise 20).
    [Figure: the two industries with their consumer and interindustry demands.]

22. Consider the differential equation
    d²x/dt² - dx/dt - x = cos(t).
    This equation could describe a forced damped oscillator, as we will see in Chapter 9. We are told that the differential equation has a solution of the form
    x(t) = a sin(t) + b cos(t).
    Find a and b, and graph the solution.

23. Find all solutions of the system . . . for
    a. λ = 5,  b. λ = 10,  c. λ = 15.

24. On your next trip to Switzerland, you should take the scenic boat ride from Rheinfall to Rheinau and back. The trip downstream from Rheinfall to Rheinau takes 20 minutes, and the return trip takes 40 minutes; the distance between Rheinfall and Rheinau along the river is 8 kilometers. How fast does the boat travel (relative to the water), and how fast does the river Rhein flow in this area? You may assume both speeds to be constant throughout the journey.

25. Consider the linear system
    x + y - z = -2
    3x - 5y + 13z = 18
    x - 2y + 5z = k,
    where k is an arbitrary number.
    a. For which value(s) of k does this system have one or infinitely many solutions?
    b. For each value of k you found in part a, how many solutions does the system have?
    c. Find all solutions for each value of k.

26. Consider the linear system
    x + y - z = 2
    x + 2y + z = 3
    x + y + (k² - 5)z = k,
    where k is an arbitrary constant. For which choice(s) of k does this system have a unique solution? For which choice(s) of k does the system have infinitely many solutions? For which choice(s) of k is the system inconsistent?

27. Emile and Gertrude are brother and sister. Emile has twice as many sisters as brothers, and Gertrude has just as many brothers as sisters. How many children are there in this family?

28. In a grid of wires, the temperature at exterior mesh points is maintained at constant values (in °C), as shown in the figure below. When the grid is in thermal equilibrium, the temperature T at each interior mesh point is the average of the temperatures at the four adjacent points. For example,
    T2 = (T3 + T1 + 200 + 0)/4.
    Find the temperatures T1, T2, and T3 when the grid is in thermal equilibrium.
    [Figure: a grid of wires with interior mesh points T1, T2, T3; the exterior mesh points are held at the constant temperatures shown, 200° and 0° among them.]

29. Find a polynomial of degree 2 (a polynomial of the form f(t) = a + bt + ct²) whose graph goes through the points (1, -1), (2, 3), and (3, 13). Sketch the graph of this polynomial.

30. Find a polynomial of degree 2 (a polynomial of the form f(t) = a + bt + ct²) whose graph goes through the points (1, p), (2, q), and (3, r), where p, q, r are arbitrary constants. Does such a polynomial exist for all choices of p, q, r?

31. Find all points (a, b, c) in space for which the system
    x + 2y + 3z = a
    4x + 5y + 6z = b
    7x + 8y + 9z = c
    has at least one solution.

32. Linear systems are particularly easy to solve when they are in triangular form, that is, all entries above or below the diagonal are zero.
    a. Solve the lower triangular system
       x1 = -3
       -3x1 + x2 = 14
       x1 + 2x2 + x3 = 9
       -x1 + 8x2 - 5x3 + x4 = 33
       by forward substitution, finding first x1, then x2, then x3, then x4.
    b. Solve the upper triangular system
       x1 + 2x2 - x3 + 4x4 = -3
       x2 + 3x3 + 7x4 = 5
       x3 + 2x4 = 2
       x4 = 0.

33. Consider the linear system
    x + y = 1
    x + (t/2)y = t,
    where t is a nonzero constant.
    a. Determine the x- and y-intercepts of the lines x + y = 1 and x + (t/2)y = t; sketch these lines. For which values of the constant t do these lines intersect? For these values of t, the point of intersection (x, y) depends on the choice of the constant t; that is, we can consider x and y as functions of t. Draw rough sketches of these functions. Explain briefly how you found these graphs. Argue geometrically, without solving the system algebraically.
    b. Now solve the system algebraically. Verify that the graphs you sketched in part a are compatible with your algebraic solution.

34. "A certain person buys sheep, goats, and hogs, to the number of 100, for 100 crowns; the sheep cost him 1/2 a crown a-piece; the goats, 1 1/3 crowns; and the hogs, 3 1/2 crowns. How many had he of each?" (From Leonhard Euler: Elements of Algebra, St. Petersburg, 1770. Translated by Rev. John Hewlett.) Find all solutions to this problem.

35. Find a system of linear equations with three unknowns whose solutions are
    x = 6 + 5t,   y = 4 + 3t,   z = 2 + t,
    where t is an arbitrary constant.

36. Boris and Marina are shopping for chocolate bars. Boris observes, "If I add half my money to yours, it will be enough to buy two chocolate bars." Marina naively asks, "If I add half my money to yours, how many can we buy?" Boris replies, "One chocolate bar." How much money did Boris have? (From Yuri Chernyak and Robert Rose: The Chicken from Minsk. Basic Books, New York, 1995.)

37. Here is another method to solve a system of linear equations: Solve one of the equations for one of the variables, and substitute the result into the other equations. Repeat this process until you run out of variables or equations. Consider the example discussed earlier in this section:
    x + 2y + 3z = 39
    x + 3y + 2z = 34
    3x + 2y + z = 26.
    We can solve the first equation for x:
    x = 39 - 2y - 3z.
    Then we substitute this into the other equations:
    (39 - 2y - 3z) + 3y + 2z = 34
    3(39 - 2y - 3z) + 2y + z = 26.
    We can simplify:
    y - z = -5
    -4y - 8z = -91.
    Now y = z - 5, so that -4(z - 5) - 8z = -91, or -12z = -111. Then we find that z = 111/12 = 9.25, y = z - 5 = 4.25, and x = 39 - 2y - 3z = 2.75.
    Explain why this method is essentially the same as the method discussed in this section; only the bookkeeping is different.

1.2 MATRICES AND GAUSS-JORDAN ELIMINATION

Systems of linear equations are usually solved on a computer. Suppose we wish to solve the following system on a computer:

2x + 8y + 4z = 2
2x + 5y + z = 5
4x + 10y - z = 1

What information about this system would the computer need to solve it? With the right software, all you need to enter is the pattern of coefficients of the variables and the numbers on the right-hand side of the equations:

[ 2  8  4  2 ]
[ 2  5  1  5 ]
[ 4 10 -1  1 ]

All the information about the system is conveniently stored in this array of numbers, called a matrix. Since the matrix above has three rows and four columns, it is called a 3 x 4 matrix (read "three by four").¹ Note that the first column of this matrix corresponds to the first variable of the system, while the first row corresponds to the first equation.

It is customary to label the entries of a 3 x 4 matrix A with double subscripts, as follows:

A = [ a11 a12 a13 a14 ]
    [ a21 a22 a23 a24 ]
    [ a31 a32 a33 a34 ]

The first subscript refers to the row, and the second to the column: the entry aij is located in the ith row and the jth column.

Two matrices A and B are equal if they have the same size and corresponding entries are equal: aij = bij.

If the number of rows of a matrix A equals the number of columns (A is n x n), then A is called a square matrix, and the entries a11, a22, ..., ann form the (main) diagonal of A. A square matrix A is called diagonal if all its entries off the main diagonal are zero, that is, aij = 0 whenever i ≠ j. A square matrix A is called upper triangular if all its entries below the main diagonal are zero, that is, aij = 0 whenever i exceeds j. Lower triangular matrices are defined analogously. A matrix whose entries are all zero is called a zero matrix and is denoted by 0 (regardless of its size).

Consider the matrices

A = [ 1 2 3 ]   B = [ 1 2 ]   C = [ 1 0 0 ]   D = [ 1 4 ]   E = [ 3 0 0 ]
    [ 4 5 6 ]       [ 3 4 ]       [ 0 3 0 ]       [ 0 3 ]       [ 4 1 0 ]
                                  [ 0 0 2 ]                     [ 0 0 2 ]

The matrices B, C, D, E are square, C is diagonal, C and D are upper triangular, and C and E are lower triangular.

Matrices with only one column or row are of particular interest. A matrix with only one column is called a column vector, or simply a vector. The entries of a vector are called its components. The set of all column vectors with n components is denoted by R^n. A matrix with only one row is called a row vector. In this text, the term vector refers to column vectors, unless stated otherwise. The reason for our preference for column vectors will become apparent in the next section.

¹The term matrix was first used in this sense by the English mathematician J. J. Sylvester, in 1850.
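The definitions above translate directly into small tests on a matrix's entries. The following Python/NumPy sketch (our own illustration; the predicate names are not from the text) encodes them, and confirms that a diagonal matrix such as C is both upper and lower triangular.

```python
import numpy as np

def is_square(A):
    return A.shape[0] == A.shape[1]

def is_diagonal(A):
    # all entries off the main diagonal are zero
    return is_square(A) and np.all(A[~np.eye(len(A), dtype=bool)] == 0)

def is_upper_triangular(A):
    # all entries below the main diagonal are zero
    return is_square(A) and np.all(A[np.tril_indices(len(A), k=-1)] == 0)

def is_lower_triangular(A):
    # all entries above the main diagonal are zero
    return is_square(A) and np.all(A[np.triu_indices(len(A), k=1)] == 0)

C = np.diag([1, 3, 2])
print(is_diagonal(C), is_upper_triangular(C), is_lower_triangular(C))
# True True True
```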

Examples of vectors are

[ 1 ]
[ 2 ]
[ 3 ]
[ 4 ]

a (column) vector in R^4, and

[ 1 5 5 3 7 ],

a row vector with five components. Note that the n columns of an m x n matrix are vectors in R^m.

In previous courses in mathematics or physics, you may have thought about vectors from a more geometric point of view. (See the appendix for a summary of basic facts on vectors.) In this course, it will often be helpful to think about a vector numerically, as a (finite) sequence of numbers, which we will usually write in a column.

In our digital age, information is often transmitted and stored as a sequence of numbers, that is, as a vector. A sequence of 10 seconds of music on a CD is stored as a vector with 440,000 components. A weather photograph taken by a satellite is digitized and transmitted to Earth as a sequence of numbers.

Consider again the system

2x + 8y + 4z = 2
2x + 5y + z = 5
4x + 10y - z = 1

Sometimes we are interested in the matrix

[ 2  8  4 ]
[ 2  5  1 ]
[ 4 10 -1 ]

which contains the coefficients of the system, called its coefficient matrix. By contrast, the matrix

[ 2  8  4  2 ]
[ 2  5  1  5 ]
[ 4 10 -1  1 ]

which displays all the numerical information contained in the system, is called its augmented matrix. For the sake of clarity, we will often indicate the position of the equal signs in the equations by a dotted line:

[ 2  8  4 : 2 ]
[ 2  5  1 : 5 ]
[ 4 10 -1 : 1 ]

Even when you solve a linear system by hand rather than by computer, it may be more efficient to perform the elimination on the augmented matrix rather than on the system of equations. The two approaches apply the same concepts, but working with the augmented matrix requires less writing, saves time, and is easier to read. Instead of dividing an equation by a scalar,¹ you can divide a row by a scalar. Instead of adding a multiple of an equation to another equation, you can add a multiple of a row to another row.

As you perform elimination on the augmented matrix, you should always remember the linear system lurking behind the matrix. To illustrate this method, we perform the elimination both on the augmented matrix and on the linear system it represents.

2x + 8y + 4z = 2   (divide by 2)     [ 2  8  4 :  2 ]  (divide by 2)
2x + 5y + z = 5                      [ 2  5  1 :  5 ]
4x + 10y - z = 1                     [ 4 10 -1 :  1 ]

x + 4y + 2z = 1                      [ 1  4  2 :  1 ]
2x + 5y + z = 5    (-2(I))           [ 2  5  1 :  5 ]  (-2(I))
4x + 10y - z = 1   (-4(I))           [ 4 10 -1 :  1 ]  (-4(I))

x + 4y + 2z = 1                      [ 1  4  2 :  1 ]
-3y - 3z = 3       (divide by -3)    [ 0 -3 -3 :  3 ]  (divide by -3)
-6y - 9z = -3                        [ 0 -6 -9 : -3 ]

x + 4y + 2z = 1    (-4(II))          [ 1  4  2 :  1 ]  (-4(II))
y + z = -1                           [ 0  1  1 : -1 ]
-6y - 9z = -3      (+6(II))          [ 0 -6 -9 : -3 ]  (+6(II))

x - 2z = 5                           [ 1  0 -2 :  5 ]
y + z = -1                           [ 0  1  1 : -1 ]
-3z = -9           (divide by -3)    [ 0  0 -3 : -9 ]  (divide by -3)

x - 2z = 5         (+2(III))         [ 1  0 -2 :  5 ]  (+2(III))
y + z = -1         (-(III))          [ 0  1  1 : -1 ]  (-(III))
z = 3                                [ 0  0  1 :  3 ]

x = 11                               [ 1  0  0 : 11 ]
y = -4                               [ 0  1  0 : -4 ]
z = 3                                [ 0  0  1 :  3 ]

The solution is often represented as a vector:

[ x ]   [ 11 ]
[ y ] = [ -4 ]
[ z ]   [  3 ]

¹In vector and matrix algebra, the term "scalar" is synonymous with (real) number.
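For readers who want to experiment, here is a hedged NumPy version of the same computation (ours, not the book's): each statement is one elementary row operation on the augmented matrix, mirroring the annotations above.

```python
import numpy as np

M = np.array([[2,  8,  4, 2],
              [2,  5,  1, 5],
              [4, 10, -1, 1]], dtype=float)

M[0] /= 2              # make the first diagonal entry 1
M[1] -= 2 * M[0]       # -2(I)
M[2] -= 4 * M[0]       # -4(I)
M[1] /= -3             # make the second diagonal entry 1
M[0] -= 4 * M[1]       # -4(II)
M[2] += 6 * M[1]       # +6(II)
M[2] /= -3             # make the third diagonal entry 1
M[0] += 2 * M[2]       # +2(III)
M[1] -= M[2]           # -(III)

print(M[:, 3])         # [11. -4.  3.]
```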

In this example, the process of elimination works very smoothly. We can eliminate all entries off the diagonal and can make every coefficient on the diagonal equal to 1. The process of elimination works well unless we encounter a zero along the diagonal. These zeros represent missing terms in some equations. The following example illustrates how to solve such a system:

x3 - x4 - x5 = 4
2x1 + 4x2 + 2x3 + 4x4 + 2x5 = 4
2x1 + 4x2 + 3x3 + 3x4 + 3x5 = 4
3x1 + 6x2 + 6x3 + 3x4 + 6x5 = 6

The augmented matrix of this system is

A = [ 0 0 1 -1 -1 : 4 ]
    [ 2 4 2  4  2 : 4 ]
    [ 2 4 3  3  3 : 4 ]
    [ 3 6 6  3  6 : 6 ]

As in the previous examples, we are trying to bring the matrix into diagonal form. To keep track of our work, we will place a cursor in the matrix, as you would on a computer screen. Initially, the cursor is placed at the top position of the first nonzero column of the matrix (here, the zero entry in the top left corner). Our first goal is to make the cursor entry equal to 1. We can accomplish this in two steps, as follows:

Step 1. If the cursor entry is 0, swap the cursor row with some row below to make the cursor entry nonzero.*

Swapping the first two rows, we obtain

[ 2 4 2  4  2 : 4 ]      2x1 + 4x2 + 2x3 + 4x4 + 2x5 = 4
[ 0 0 1 -1 -1 : 4 ]      x3 - x4 - x5 = 4
[ 2 4 3  3  3 : 4 ]      2x1 + 4x2 + 3x3 + 3x4 + 3x5 = 4
[ 3 6 6  3  6 : 6 ]      3x1 + 6x2 + 6x3 + 3x4 + 6x5 = 6

In Step 1 we are merely writing down the equations in a different order. This will certainly not affect the solutions of the system.

Step 2. Divide the cursor row by the cursor entry to make the cursor entry equal to 1.

Dividing the first row by 2, we obtain

[ 1 2 1  2  1 : 2 ]      x1 + 2x2 + x3 + 2x4 + x5 = 2
[ 0 0 1 -1 -1 : 4 ]      x3 - x4 - x5 = 4
[ 2 4 3  3  3 : 4 ]      2x1 + 4x2 + 3x3 + 3x4 + 3x5 = 4
[ 3 6 6  3  6 : 6 ]      3x1 + 6x2 + 6x3 + 3x4 + 6x5 = 6

Step 2 does not change the solutions of the system, because the equation corresponding to the cursor row has the same solutions before and after the operation.

Step 3. Eliminate all other entries in the cursor column, by subtracting suitable multiples of the cursor row from the other rows.†

Subtracting 2 times the first row from the third and 3 times the first row from the fourth, we obtain

[ 1 2 1  2  1 : 2 ]      x1 + 2x2 + x3 + 2x4 + x5 = 2
[ 0 0 1 -1 -1 : 4 ]      x3 - x4 - x5 = 4
[ 0 0 1 -1  1 : 0 ]      x3 - x4 + x5 = 0
[ 0 0 3 -3  3 : 0 ]      3x3 - 3x4 + 3x5 = 0

Convince yourself that this operation does not change the solutions of the system (Exercise 28). Now we have taken care of the first column (the first variable), so we can move the cursor to a new position. Following the approach taken in Section 1.1, we move the cursor down diagonally, that is, down one row and over one column; the cursor is now at the second entry of the second column.

For our method to work as before, we need a nonzero cursor entry. Since not only the cursor entry but also all entries below are zero, we cannot accomplish this by swapping the cursor row with some row below, as we did in Step 1. It would not help us to swap the cursor row with the row above; this would affect the first column of the matrix, which we have already fixed. Thus we have to give up on the second column (the second variable); we will move the cursor to the next column.

Step 4. Move the cursor down diagonally, that is, down one row and over one column. If the new cursor entry and all entries below are zero, move the cursor to the next column (remaining in the same row). Repeat this step if necessary. Then return to Step 1.

Here, since the cursor entry (the second entry of the third column) is 1, we can proceed directly to Step 3 and eliminate all other entries in the cursor column:

[ 1 2 0  3  2 :  -2 ]
[ 0 0 1 -1 -1 :   4 ]
[ 0 0 0  0  2 :  -4 ]
[ 0 0 0  0  6 : -12 ]

Applying Step 4 again, we move the cursor to the fourth column (fourth variable), where the cursor entry and all entries below are zero, and then on to the fifth column, whose cursor entry is 2. Dividing the third row by 2 (Step 2) and then eliminating all other entries in the fifth column (Step 3), we obtain

E = [ 1 2 0  3 0 :  2 ]
    [ 0 0 1 -1 0 :  2 ]
    [ 0 0 0  0 1 : -2 ]
    [ 0 0 0  0 0 :  0 ]

When we try to apply Step 4 to this matrix, we run out of columns: the process of row reduction comes to an end. We say that the matrix E is in reduced row echelon form, or rref for short. We write E = rref(A), where A is the augmented matrix of the system.

To summarize: A matrix is in reduced row echelon form if it satisfies all of the following conditions:

a. If a row has nonzero entries, then the first nonzero entry is 1, called the leading 1 in this row.
b. If a column contains a leading 1, then all other entries in that column are zero.
c. If a row contains a leading 1, then each row above contains a leading 1 further to the left.

*To make the process unambiguous, swap the cursor row with the first row below that has a nonzero entry in the cursor column.
†We may also add a multiple of a row, of course. Think of this as subtracting a negative multiple of a row.


A matrix E in reduced row echelon form may contain rows of zeros, as in the example above. By condition c, these rows must appear as the last rows of the matrix. Convince yourself that the procedure outlined above (repeatedly performing Steps 1 to 4) indeed produces a matrix with these three properties.

Below, we mark the leading 1's (circled in the original display) in the reduced row echelon form of the matrix:

[ (1)  2   0   3   0  :  2 ]
[  0   0  (1) -1   0  :  2 ]
[  0   0   0   0  (1) : -2 ]
[  0   0   0   0   0  :  0 ]

This matrix represents the following system:

x1 + 2x2 + 3x4 = 2
x3 - x4 = 2
x5 = -2

Again, we say that this system is in reduced row echelon form. The leading variables correspond to the leading 1's in the echelon form of the matrix. We can also draw the staircase formed by the leading variables; that is where the name echelon form comes from: according to Webster, an echelon is a formation "like a series of steps."

Now we can solve each of the equations above for its leading variable:

x1 = 2 - 2x2 - 3x4
x3 = 2 + x4
x5 = -2

We can freely choose the nonleading variables, x2 = s and x4 = t, where s and t are arbitrary real numbers. The leading variables are then determined by our choices for s and t; that is, x1 = 2 - 2s - 3t, x3 = 2 + t, and x5 = -2. This system has infinitely many solutions, namely

x1 = 2 - 2s - 3t,  x2 = s,  x3 = 2 + t,  x4 = t,  x5 = -2   (s, t arbitrary).

We can represent the solutions as vectors in R^5:

[ x1 ]   [ 2 - 2s - 3t ]
[ x2 ]   [      s      ]
[ x3 ] = [    2 + t    ]
[ x4 ]   [      t      ]
[ x5 ]   [     -2      ]

We often find it helpful to write this solution as

[  2 ]     [ -2 ]     [ -3 ]
[  0 ]     [  1 ]     [  0 ]
[  2 ] + s [  0 ] + t [  1 ]
[  0 ]     [  0 ]     [  1 ]
[ -2 ]     [  0 ]     [  0 ]

For example, if we set s = t = 0, we get the particular solution

[  2 ]
[  0 ]
[  2 ]
[  0 ]
[ -2 ]

Here is a summary of the elimination process outlined above.

Solving systems of linear equations

Write the augmented matrix of the system. Place a cursor in the top entry of the first nonzero column of this matrix.

Step 1. If the cursor entry is zero, swap the cursor row with some row below to make the cursor entry nonzero.
Step 2. Divide the cursor row by the cursor entry.
Step 3. Eliminate all other entries in the cursor column, by subtracting suitable multiples of the cursor row from the other rows.
Step 4. Move the cursor down one row and over one column. If the new cursor entry and all entries below are zero, move the cursor to the next column (remaining in the same row). Repeat the last step if necessary. Return to Step 1.

The process ends when we run out of rows or columns. Then the matrix is in reduced row echelon form (rref). Write down the linear system corresponding to this matrix, and solve each equation in the system for the leading variable. You may choose the nonleading variables freely; the leading variables are then determined by these choices. If the echelon form contains the equation 0 = 1, then there are no solutions; the system is inconsistent.

The operations performed in Steps 1, 2, and 3 are called elementary row operations: swap two rows, divide a row by a scalar, or subtract a multiple of a row from another row.
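The summary lends itself to a direct implementation. The sketch below is our own illustration, not the book's code; the tolerance parameter `tol` is an implementation choice for floating-point zeros. It follows Steps 1 through 4 with an explicit cursor (i, j) and reproduces the matrix E = rref(A) from the example above.

```python
import numpy as np

def rref(A, tol=1e-12):
    M = np.array(A, dtype=float)
    rows, cols = M.shape
    i = j = 0                                   # cursor position
    while i < rows and j < cols:
        # Step 4 (column part): if the cursor entry and all entries below
        # are zero, give up on this column and move to the next one.
        if np.all(np.abs(M[i:, j]) < tol):
            j += 1
            continue
        # Step 1: swap with the first lower row whose cursor-column entry is nonzero
        p = i + int(np.nonzero(np.abs(M[i:, j]) >= tol)[0][0])
        M[[i, p]] = M[[p, i]]
        # Step 2: divide the cursor row by the cursor entry
        M[i] /= M[i, j]
        # Step 3: eliminate all other entries in the cursor column
        for r in range(rows):
            if r != i:
                M[r] -= M[r, j] * M[i]
        i += 1                                  # Step 4: move down diagonally
        j += 1
    return M

A = [[0, 0, 1, -1, -1, 4],
     [2, 4, 2,  4,  2, 4],
     [2, 4, 3,  3,  3, 4],
     [3, 6, 6,  3,  6, 6]]
print(rref(A))
# rows: [1 2 0 3 0 2], [0 0 1 -1 0 2], [0 0 0 0 1 -2], [0 0 0 0 0 0]
```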

Below is an inconsistent system:

x1 - 3x2 - 5x4 = -7
3x1 - 12x2 - 2x3 - 27x4 = -33
-2x1 + 10x2 + 2x3 + 24x4 = 29
-x1 + 6x2 + x3 + 14x4 = 17

The augmented matrix of the system is

[  1  -3  0  -5 :  -7 ]
[  3 -12 -2 -27 : -33 ]
[ -2  10  2  24 :  29 ]
[ -1   6  1  14 :  17 ]

The reduced row echelon form for this matrix is

[ 1 0 0 1 : 0 ]
[ 0 1 0 2 : 0 ]
[ 0 0 1 3 : 0 ]
[ 0 0 0 0 : 1 ]

(We leave it to you to perform the elimination.) Since the last row of the echelon form represents the equation 0 = 1, the system is inconsistent.

This method of solving linear systems is sometimes referred to as Gauss-Jordan elimination, after the German mathematician Carl Friedrich Gauss (1777-1855; see Figure 1), perhaps the greatest mathematician of modern times, and the German engineer Wilhelm Jordan (1844-1899). Gauss himself called the method eliminatio vulgaris. Recall that the Chinese were using this method 2000 years ago.

[Figure 1. Carl Friedrich Gauss appears on the German 10-mark note (in fact, this is the mirror image of a well-known portrait of Gauss). Reproduced by permission of the German Bundesbank.]

How Gauss developed this method is noteworthy. On January 1, 1801, the Sicilian astronomer Giuseppe Piazzi (1746-1826) discovered a planet, which he named "Ceres," in honor of the patron goddess of Sicily. Today, Ceres is called an asteroid or minor planet; it is only about 1000 km in diameter. The public was very interested in this discovery. At that time, the number of planets in the solar system was still an issue debated by many philosophers and representatives of the Church. Piazzi was able to track the orbit of Ceres for forty days, but then it was lost from view because Piazzi fell ill. Gauss, however, at the age of 24, succeeded in calculating the orbit of Ceres, even though the task seemed hopeless on the basis of so few observations. His computations were so accurate that the German astronomer W. Olbers (1758-1840) located the asteroid on December 31, 1801. In the course of his computations, Gauss had to solve systems of 17 linear equations. In dealing with this problem, Gauss also used the method of least squares, which he had developed around 1794 (see Section 4.4). Since Gauss at first refused to reveal the methods that led to this amazing accomplishment, some even accused him of sorcery. Gauss later described his methods of orbit computation in his book Theoria Motus Corporum Coelestium (1809).

The method of solving a linear system by Gauss-Jordan elimination is called an algorithm,¹ that is, "a procedure for solving a mathematical problem in a finite number of steps that frequently involves repetition of an operation" (Webster's New World Dictionary, Third Edition). Gauss-Jordan elimination is well suited for solving linear systems on a computer, at least in principle. In practice, however, some tricky problems associated with roundoff errors can occur. Numerical analysis tells us that we can reduce the proliferation of roundoff errors by modifying Gauss-Jordan elimination with partial or complete pivoting techniques.

Partial pivoting requires us to modify Step 1 of the algorithm as follows: Swap the cursor row with a row below to make the cursor entry as large as possible (in absolute value). This swap is performed even if the initial cursor entry is nonzero, as long as there is an entry below with a larger absolute value. In the first example worked in the text, we would start by swapping the first row with the last:

[ 1 2 3 : 39 ]       [ 3 2 1 : 26 ]
[ 1 3 2 : 34 ]  ->   [ 1 3 2 : 34 ]
[ 3 2 1 : 26 ]       [ 1 2 3 : 39 ]

In modifying Gauss-Jordan elimination, an interesting question arises: If we transform a matrix A into a matrix B by a sequence of elementary row operations, and if B is in reduced row echelon form, is it necessarily true that B = rref(A)? Fortunately (and perhaps surprisingly), this is indeed the case. In this text we will not utilize this fact, so there is no need to present the somewhat technical proof. If you feel ambitious, try to work out the proof yourself, after studying Chapter 3 (see Exercises 3.3.65 to 3.3.67).

¹The word algorithm is derived from the name of the mathematician al-Khowarizmi, who introduced the term algebra into mathematics (see the opening of Section 1.1).
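As a minimal sketch of partial pivoting (again ours, with an assumed helper `pivot_row`), only the row-selection rule of Step 1 changes: pick the row whose cursor-column entry is largest in absolute value, even when the cursor entry is already nonzero.

```python
import numpy as np

def pivot_row(M, i, j):
    """Index of the row at or below i with the largest |entry| in column j."""
    return i + int(np.argmax(np.abs(M[i:, j])))

M = np.array([[1., 2., 3., 39.],
              [1., 3., 2., 34.],
              [3., 2., 1., 26.]])
p = pivot_row(M, 0, 0)       # p == 2: swap the first row with the last,
M[[0, p]] = M[[p, 0]]        # exactly as in the display above
print(M[0])                  # [ 3.  2.  1. 26.]
```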

In Figure 2 we present a flow chart for Gauss-Jordan elimination.

[Figure 2. Flow chart for Gauss-Jordan elimination, where A is an m x n matrix. The chart traces Steps 1 through 4 (among its boxes: "Pick the smallest such h; swap the ith row with the hth row," "Subtract a_hj times the ith row from the hth row, for all h ≠ i") and ends with "Display reduced row echelon form."]

EXERCISES

GOAL Use Gauss-Jordan elimination to solve linear systems. Do simple problems using paper and pencil, and use technology to solve more complicated problems.

In Exercises 1 to 12, find all solutions of the equations with paper and pencil using Gauss-Jordan elimination. Show all your work.

1. x + y - 2z = 5
   2x + 3y + 4z = 2

2. 3x + 4y - z = 8
   6x + 8y - 2z = 3

3. 2x - y = 5
   3x + 4y = 2

4. x2 + 2x4 + 3x5 = 0
   4x4 + 8x5 = 0

5. x1 + 2x2 + 2x4 + 3x5 = 0
   x3 + 3x4 + 2x5 = 0
   x3 + 4x4 - x5 = 0
   x5 = 0

8. 4x1 + 3x2 + 2x3 - x4 = 4
   5x1 + 4x2 + 3x3 - x4 = 4
   -2x1 - 2x2 - x3 + 2x4 = -3
   11x1 + 6x2 + 4x3 + x4 = 11

12. Solve the system in Exercise 8 as a system in the variables x1, x2, x3, x4, x5.

Solve the linear systems in Exercises 13 to 17. You may use technology.

13. 3x + 11y + 19z = -2
    7x + 23y + 39z = 10
    -4x - 3y - 2z = 6

14. 3x + 5y + 3z = 25
    7x + 9y + 19z = 65
    -4x + 5y + 11z = 5

15. 3x1 + 6x2 + 9x3 + 5x4 + 25x5 = 53
    7x1 + 14x2 + 21x3 + 9x4 + 53x5 = 105
    -4x1 - 8x2 - 12x3 + 5x4 - 10x5 = 11

16. 2x1 + 4x2 + 3x3 + 5x4 + 6x5 = 37
    4x1 + 8x2 + 7x3 + 5x4 + 2x5 = 74
    -2x1 - 4x2 + 3x3 + 4x4 - 5x5 = 20
    x1 + 2x2 + 2x3 - x4 + 2x5 = 26
    5x1 - 10x2 + 4x3 + 6x4 + 4x5 = 24

17. 3x + 6y + 14z = 22
    7x + 14y + 30z = 46
    4x + 5y + 7z = 6

Sec. 1.2 Matrices and Gau -J ordan · limin ation •

18. Determine which of the matrices below are in reduced row echelon form:

    a. | 1 2 0 0 |      b. | 0 1 4 0 |      c. | 1 2 0 3 |      d. | 0 1 0 |
       | 0 0 1 0 |         | 0 0 0 0 |         | 0 0 2 0 |         | 1 0 0 |
                           | 0 0 0 1 |

19. Find all 4 x 1 matrices in reduced row echelon form.

20. We say that two m x n matrices in reduced row echelon form are of the same type if they contain the same number of leading 1's in the same positions. For example,

    | 1 2 0 |       | 1 4 0 |
    | 0 0 1 |  and  | 0 0 1 |

    are of the same type. How many types of 2 x 2 matrices in reduced row echelon form are there?

21. How many types of 3 x 2 matrices in reduced row echelon form are there (see Exercise 20)?

22. How many types of 2 x 3 matrices in reduced row echelon form are there (see Exercise 20)?

23. Suppose you apply Gauss-Jordan elimination to a matrix. Explain how you can be sure that the resulting matrix is in reduced row echelon form.

24. Suppose matrix A is transformed into matrix B by means of an elementary row operation. Is there an elementary row operation that transforms B into A? Explain.

25. Suppose matrix A is transformed into matrix B by a sequence of elementary row operations. Is there a sequence of elementary row operations that transforms B into A? Explain (see Exercise 24).

26. Consider an m x n matrix A. Can you transform rref(A) into A by a sequence of elementary row operations (see Exercise 25)?

27. Is there a sequence of elementary row operations that transforms

    | 1 2 3 |        | 1 0 0 |
    | 4 5 6 |  into  | 0 1 0 | ?
    | 7 8 9 |        | 0 0 0 |

    Explain.

28. Suppose you subtract a multiple of an equation in a system from another equation in the system. Explain why the two systems (before and after this operation) have the same solutions.

29. Balancing a chemical reaction. Consider the chemical reaction

    a NO2 + b H2O -> c HNO2 + d HNO3,

    where a, b, c, and d are unknown positive integers. The reaction must be balanced; that is, the number of atoms of each element must be the same before and after the reaction. For example, because the number of oxygen atoms must remain the same,

    2a + b = 2c + 3d.

    While there are many possible choices for a, b, c, and d which balance the reaction, it is customary to use the smallest possible positive integers. Balance this reaction.

30. Find a polynomial of degree 3 (a polynomial of the form f(t) = a + bt + ct^2 + dt^3) whose graph goes through the points (0, 1), (1, 0), (-1, 0), and (2, -15). Sketch the graph of this cubic.

31. Find the polynomial of degree 4 whose graph goes through the points (1, 1), (2, -1), (3, -59), (-1, 5), and (-2, -29). Graph this polynomial.

32. Cubic splines. Suppose you are in charge of the design of a roller coaster ride. This simple ride will not make any left or right turns; that is, the track lies in a vertical plane. The figure below shows the ride as viewed from the side. The points (a_i, b_i) are given to you, and your job is to connect the dots in a reasonably smooth way. Let a_(i+1) > a_i.

    [Figure: the points (a_0, b_0), ..., (a_n, b_n) of the ride, viewed from the side.]

    One method often employed in such design problems is the technique of cubic splines. We choose f_i(t), a polynomial of degree 3, to define the shape of the ride between (a_(i-1), b_(i-1)) and (a_i, b_i), for i = 1, ..., n.
    Obviously, it is required that f_i(a_i) = b_i and f_i(a_(i-1)) = b_(i-1), for i = 1, ..., n. To guarantee a smooth ride at the points (a_i, b_i), we want the first and the second derivatives of f_i and f_(i+1) to agree at these points:

    f_i'(a_i) = f_(i+1)'(a_i)  and  f_i''(a_i) = f_(i+1)''(a_i),  for i = 1, ..., n - 1.

    Explain the practical significance of these conditions. Explain why, for the convenience of the riders, it is also required that

    f_1''(a_0) = f_n''(a_n) = 0.

    Show that satisfying all these conditions amounts to solving a system of linear equations. How many variables are in this system? How many equations? (Note: it can be shown that this system has a unique solution.)

33. Find a polynomial f(t) of degree 3 such that f(1) = 1, f(2) = 5, f'(1) = 2, and f'(2) = 9, where f'(t) is the derivative of f(t). Graph this polynomial.

34. The dot product of two vectors x = [x1; x2; ...; xn] and y = [y1; y2; ...; yn] in R^n is defined by

    x . y = x1 y1 + x2 y2 + ... + xn yn.

    Note that the dot product of two vectors is a scalar. We say that the vectors x and y are perpendicular if x . y = 0. Find all vectors in R^3 perpendicular to [1; 1; 1]. Draw a sketch.

35. Find all vectors in R^4 which are perpendicular to the three vectors

    [1; 1; 1; 1],  [1; 2; 3; 4],  [1; 9; 9; 7]

    (see Exercise 34).

36. Find all solutions x1, x2, x3 of the equation

    b = x1 v1 + x2 v2 + x3 v3,

    where b, v1, v2, and v3 are given vectors.

37. For some background on this exercise, see Exercise 1.1.20. Consider an economy with three industries, I1, I2, I3. What outputs x1, x2, x3 should they produce to satisfy both consumer demand and interindustry demand? The demands put on the three industries are shown below.

    [Figure: the three industries I1, I2, I3 with the interindustry demands and the consumer demands.]

38. If we consider more than three industries in an input-output model, it is cumbersome to represent all the demands in a diagram as in Exercise 37. Suppose we have the industries I1, I2, ..., In, with outputs x1, x2, ..., xn. The output vector is

    x = [x1; x2; ...; xn].

    The consumer demand vector is

    b = [b1; b2; ...; bn],

    where b_i is the consumer demand on industry I_i. The demand vector for industry I_j is

    v_j = [a_1j; a_2j; ...; a_nj],

    where a_ij is the demand industry I_j puts on industry I_i, for each $1 of output industry I_j produces. For example, a_32 = 0.5 means that industry I2 needs 50 cents' worth of products from industry I3 for each $1 worth of goods I2 produces. The coefficient a_ii need not be 0: producing a product may require goods or services from the same industry.
    a. Find the four demand vectors for the economy in Exercise 37.
    b. What is the meaning in economic terms of x_j v_j?
    c. What is the meaning in economic terms of x1 v1 + x2 v2 + ... + xn vn + b?
    d. What is the meaning in economic terms of the equation

       x1 v1 + x2 v2 + ... + xn vn + b = x?

39. Consider the economy of Israel in 1958.* The three industries considered here are I1: agriculture, I2: manufacturing, I3: energy. Outputs and demands are measured in millions of Israeli pounds, the currency of Israel at that time. We are told that

    b = [13.2; 17.6; 1.8],  v1 = [0.293; 0.014; 0.044],  v2 = [0; 0.207; 0.01],  v3 = [0; 0.017; 0.216].

    a. Why do the first components of v2 and v3 equal 0?
    b. Find the outputs x1, x2, x3 required to satisfy demand.

    *W. Leontief, Input-Output Economics, Oxford University Press, 1966.

40. Consider some particles in the plane with position vectors r1, r2, ..., rn and masses m1, m2, ..., mn. The position vector of the center of mass of this system is

    r_cm = (1/M)(m1 r1 + m2 r2 + ... + mn rn),

    where M = m1 + m2 + ... + mn. Consider the triangular plate sketched below. How must a total mass of 1 kg be distributed among the three vertices of the plate so that the plate can be supported at the point indicated in the sketch, that is, so that r_cm is the position vector of that point? Assume that the mass of the plate itself is negligible.

    [Figure: a triangular plate with its three vertices and the support point.]

41. The momentum P of a system of n particles in space with masses m1, m2, ..., mn and velocities v1, v2, ..., vn is defined as

    P = m1 v1 + m2 v2 + ... + mn vn.

    Now consider two elementary particles with the velocities shown in the figure. The particles collide. After the collision, their respective velocities are observed to be as shown. Assume that the momentum of the system is conserved throughout the collision. What does this experiment tell you about the masses of the two particles?

    [Figure: particle 1 and particle 2, with their velocities before and after the collision.]


42. The sketch below represents a maze of one-way streets in a city in the United States. The traffic volume through certain blocks during an hour has been measured. Suppose that the vehicles leaving the area during this hour were exactly the same as those entering it.

    [Figure: a street map with the measured traffic volumes; the streets are JFK Street, Dunster Street, Mt. Auburn Street, and Winthrop Street.]

    What can you say about the traffic volume at the four locations indicated by a question mark? Can you figure out exactly how much traffic there was on each block? If not, describe one possible scenario. For each of the four locations, find the highest and the lowest possible traffic volume.

43. Let S(t) be the length of the tth day of the year in Bombay, India (measured in hours, from sunrise to sunset). We are given the following values of S(t):

    t    | 47   | 74 | 273
    S(t) | 11.5 | 12 | 12

    For example, S(47) = 11.5 means that the time from sunrise to sunset on February 16 is 11 hours and 30 minutes. For locations close to the equator, the function S(t) is well approximated by a trigonometric function of the form

    S(t) = a + b cos(2 pi t / 365) + c sin(2 pi t / 365)

    (the period is 365 days, or 1 year). Find this approximation for Bombay and graph your solution. According to this model, how long is the longest day of the year in Bombay?

1.3  ON THE SOLUTIONS OF LINEAR SYSTEMS

In the last section we discussed how a system of linear equations can be solved by Gauss-Jordan elimination. Now we will investigate what this method tells us about the number of solutions of a linear system. How many solutions can a linear system possibly have? How can we tell whether a system has any solutions at all?

First, we observe that a linear system has no solutions if (and only if) its reduced row echelon form contains a row of the form

[ 0 0 0 ... 0 | 1 ],

representing the equation 0 = 1. In this case we say that the system is inconsistent.

If a linear system is consistent, that is, if it does have solutions, how many solutions can it have? The number of solutions depends on whether or not there are nonleading variables. If there is at least one nonleading variable, then there will be infinitely many solutions, since we can assign any value to a nonleading variable. If all variables are leading, on the other hand, there will be only one solution, since we cannot make any choices in assigning values to the variables. We have shown:(1)

Fact 1.3.1  Number of solutions of a linear system
A linear system has either
  - no solutions (it is inconsistent),
  - exactly one solution (if the system is consistent and all variables are leading), or
  - infinitely many solutions (if the system is consistent and there are nonleading variables).

(1) Starting in this section, we will number the definitions we give and the facts we derive. The nth fact stated in Section p.q is labeled as Fact p.q.n.


EXAMPLE 1 > The reduced row echelon forms of the augmented matrices of three systems are given below. How many solutions are there in each case?

a. | 1 2 0 | 2 |      b. | 1 0 0 | 1 |      c. | 1 0 | 2 |
   | 0 0 1 | 3 |         | 0 1 0 | 2 |         | 0 1 | 3 |
   | 0 0 0 | 0 |         | 0 0 1 | 3 |         | 0 0 | 1 |

Solution
a. Infinitely many solutions (the second variable is nonleading).
b. Exactly one solution (all variables are leading).
c. No solution (the third row represents the equation 0 = 1). <

Example 1 shows that the number of leading 1's in the echelon form tells us about the number of solutions of a linear system. This observation motivates the following definition:

Definition 1.3.2  Rank(1)
The rank of a matrix A is the number of leading 1's in rref(A).

(1) This is a preliminary, rather technical definition. In Chapter 3 we will gain a better conceptual understanding of the rank of a matrix.

EXAMPLE 2 >

rank | 1 2 3 | = 2,   since   rref | 1 2 3 |   | 1 0 -1 |
     | 4 5 6 |                     | 4 5 6 | = | 0 1  2 |
     | 7 8 9 |                     | 7 8 9 |   | 0 0  0 |

has leading 1's in its first two rows. <

EXAMPLE 3 > Consider a system of m linear equations with n unknowns. Its coefficient matrix A has the size m x n. Show that:
a. rank(A) <= m and rank(A) <= n.
b. If rank(A) = m, then the system is consistent.
c. If rank(A) = n, then the system has at most one solution.
d. If rank(A) < n, then the system has either infinitely many solutions, or none.

Solution
a. By the definition of the reduced row echelon form, there is at most one leading 1 in each row and in each column. If there is a leading 1 in each row, then rank(A) = m; otherwise, rank(A) < m. Likewise for the columns.
b. The equation rank(A) = m means that there is a leading 1 in each row of the echelon form of A. This implies that the echelon form of the augmented matrix does not contain the row [0 0 0 ... 0 | 1]. Therefore, the system must have at least one solution.
c. The equation rank(A) = n means that there is a leading 1 in each column, i.e., all variables are leading. Therefore, either the system is inconsistent or it has a unique solution (by Fact 1.3.1).
d. There are nonleading variables in this case. Therefore, either the system is inconsistent or it has infinitely many solutions (by Fact 1.3.1). <

EXAMPLE 4 > Consider a linear system with fewer equations than unknowns. How many solutions could this system have?

Solution
Suppose there are m equations and n unknowns; we are told that m < n. Let A be the coefficient matrix of the system. By Example 3a,

rank(A) <= m < n.

Thus the system will have infinitely many solutions or no solutions at all (by Example 3d). The key observation is that there are nonleading variables in this case. <

Fact 1.3.3  A linear system with fewer equations than unknowns has either no solutions or infinitely many solutions.

To illustrate this observation, consider two equations in three variables: two planes in space either intersect in a line or are parallel (see Figure 1), but they will never intersect at a point! This means that a system of two equations with three unknowns cannot have a unique solution.

[Figure 1: (a) Two planes intersect in a line. (b) Two parallel planes.]

EXAMPLE 5 > Consider a linear system of n equations with n unknowns. When does this system have a unique solution? Use the rank of the coefficient matrix A.

Solution
We observe first that rank(A) <= n, by Example 3a. If rank(A) = n, then the system has a unique solution, by Example 3, parts b and c. However, if rank(A) < n, then the system does not have a unique solution, by Example 3d. <

Fact 1.3.4  A linear system of n equations with n unknowns has a unique solution if (and only if) the rank of its coefficient matrix A is n. This means that

rref(A) = | 1 0 0 ... 0 |
          | 0 1 0 ... 0 |
          | 0 0 1 ... 0 |
          | .         . |
          | 0 0 0 ... 1 |
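Definition 1.3.2 and Fact 1.3.4 are easy to check by machine. Here is a brief sketch using the sympy library (my choice of tool; any software with an rref command works the same way):

    from sympy import Matrix, eye

    A = Matrix([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
    R, pivot_columns = A.rref()           # reduced row echelon form, pivot columns
    print(R)                              # Matrix([[1, 0, -1], [0, 1, 2], [0, 0, 0]])
    print(len(pivot_columns), A.rank())   # 2 2: the rank, counted two ways

    B = Matrix([[1, 3], [2, 5]])
    print(B.rref()[0] == eye(2))          # True: rank 2, so Bx = b has a unique solution

The second check illustrates Fact 1.3.4: a square coefficient matrix yields a unique solution exactly when its rref is the identity pattern shown above.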

+ The Vector Form and the Matrix Form of a Linear System

We now introduce some notations that allow us to represent systems of linear equations more succinctly. As a simple example, consider the linear system

| 3x +  y = 7 |
|  x + 2y = 4 |

In Section 1.1 we interpreted the solution of this system as the intersection of two lines in the x-y plane. Here is another interpretation. We can write the system above as

[3x + y; x + 2y] = [7; 4],   or   x [3; 1] + y [1; 2] = [7; 4].

To solve this system means to write the vector [7; 4] as the sum of a scalar multiple of [3; 1] and a scalar multiple of [1; 2]. This problem and its solution can be represented geometrically, as illustrated in Figure 2.

[Figure 2: the vector [7; 4] represented as 2 [3; 1] + 1 [1; 2].]

The unique solution of this system is x = 2, y = 1. The equation

x [3; 1] + y [1; 2] = [7; 4]

is called the vector form of the linear system. Note that the vectors [3; 1] and [1; 2] are the columns of its coefficient matrix.

Now consider the general linear system

a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
   ...
am1 x1 + am2 x2 + ... + amn xn = bm.

We can write

[a11 x1 + a12 x2 + ... + a1n xn; a21 x1 + a22 x2 + ... + a2n xn; ...; am1 x1 + am2 x2 + ... + amn xn] = [b1; b2; ...; bm],

or, more succinctly,

x1 [a11; a21; ...; am1] + x2 [a12; a22; ...; am2] + ... + xn [a1n; a2n; ...; amn] = [b1; b2; ...; bm].

This is the vector form of the linear system.


In this context, the following definition is useful:

Definition 1.3.5  Linear combinations
A vector b in R^m is called a linear combination of the vectors v1, v2, ..., vn in R^m if there are scalars x1, x2, ..., xn such that

b = x1 v1 + x2 v2 + ... + xn vn.

Solving a linear system with augmented matrix [A | b] amounts to writing the vector b as a linear combination of the column vectors of A.

We now give another definition that permits an even more compact representation of a linear system. We define the linear combination x1 v1 + x2 v2 + ... + xn vn to be the product of the m x n matrix with columns v1, v2, ..., vn and the vector x = [x1; x2; ...; xn].

Definition 1.3.6  The product Ax
If the column vectors of an m x n matrix A are v1, v2, ..., vn, and x is a vector in R^n, then the product Ax is defined as

Ax = x1 v1 + x2 v2 + ... + xn vn.

In words, Ax is the linear combination of the columns of A with the components of x as coefficients.

Note that the product Ax is defined only if the number of columns of A matches the number of components of x. Also, note that the product Ax is a vector in R^m. This definition allows us to represent the linear system with vector form x1 v1 + ... + xn vn = b as

Ax = b,

the matrix form of the linear system. Below are some examples of matrix products.

EXAMPLE 6 >

| 0  2 |
| 2 -1 | [1; 2] = 1 [0; 2; 3] + 2 [2; -1; 2] = [4; 0; 7]. <
| 3  2 |

EXAMPLE 7 > The product

| 0  2 |
| 2 -1 | [1; 2; 3]
| 3  2 |

is undefined, because the number of columns of the matrix does not match the number of components of the vector. <

EXAMPLE 8 > If

D = | 1 0 0 0 |
    | 0 1 0 0 |
    | 0 0 1 0 |
    | 0 0 0 1 |

and x is any vector in R^4, find Dx.

Solution

Dx = x1 [1; 0; 0; 0] + x2 [0; 1; 0; 0] + x3 [0; 0; 1; 0] + x4 [0; 0; 0; 1] = [x1; x2; x3; x4] = x.

Thus Dx = x, for all x in R^4. <

EXAMPLE 9 > Represent the system

| 2x1 - 3x2 + 5x3 = 7 |
| 9x1 + 4x2 - 6x3 = 8 |

in matrix form Ax = b.

Solution
Begin by writing the system in vector form:

x1 [2; 9] + x2 [-3; 4] + x3 [5; -6] = [7; 8].

Thus Ax = b, where

A = | 2 -3  5 |,   x = [x1; x2; x3],   b = [7; 8]. <
    | 9  4 -6 |

In components, the product of an m x n matrix A with a vector x in R^n is

Ax = x1 [a11; a21; ...; am1] + x2 [a12; a22; ...; am2] + ... + xn [a1n; a2n; ...; amn]
   = [a11 x1 + a12 x2 + ... + a1n xn; a21 x1 + a22 x2 + ... + a2n xn; ...; am1 x1 + am2 x2 + ... + amn xn].

Here are two important algebraic rules concerning the product Ax.

Fact 1.3.7  For an m x n matrix A, two vectors x and y in R^n, and a scalar k,
a. A(x + y) = Ax + Ay,
b. A(kx) = k(Ax).

We will prove the first equation, and leave the verification of the second as Exercise 45.

Proof  If v1, v2, ..., vn are the columns of A, then

A(x + y) = (x1 + y1) v1 + (x2 + y2) v2 + ... + (xn + yn) vn
         = x1 v1 + x2 v2 + ... + xn vn + y1 v1 + y2 v2 + ... + yn vn
         = Ax + Ay.  # (1)

(1) The box # marks the end of a proof.

In Definition 1.3.6 we expressed the product Ax in terms of the columns of the matrix A. This product can also be expressed in terms of the rows of A. This characterization involves the dot product of two vectors; you may wish to review this concept in the Appendix (Definition A4). In the component formula above, note that the first component of Ax is the dot product of the first row of A with x, and so on; the ith component of Ax is the dot product of the ith row of A with x. Thus we have shown:

Fact 1.3.8  If x is a vector in R^n and A is an m x n matrix with row vectors w1, w2, ..., wm, then

Ax = [w1 . x; w2 . x; ...; wm . x]

(i.e., the ith component of Ax is the dot product of wi and x).

EXAMPLE 10 >

| 0  2 |
| 2 -1 | [1; 2] = [0*1 + 2*2; 2*1 + (-1)*2; 3*1 + 2*2] = [4; 0; 7].
| 3  2 |

(Compare with Example 6.) <
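Definition 1.3.6 and Fact 1.3.8 describe the same product from two angles, and it is worth convincing yourself numerically that the column picture and the row picture agree. A small sketch in plain Python, using the matrix of Examples 6 and 10 as reconstructed above from a damaged passage:

    A = [[0, 2], [2, -1], [3, 2]]
    x = [1, 2]

    # column picture (Definition 1.3.6): x1 * (column 1) + x2 * (column 2)
    col1 = [row[0] for row in A]
    col2 = [row[1] for row in A]
    by_columns = [x[0] * c1 + x[1] * c2 for c1, c2 in zip(col1, col2)]

    # row picture (Fact 1.3.8): the ith component is the dot product of row i with x
    by_rows = [sum(a * b for a, b in zip(row, x)) for row in A]

    print(by_columns)   # [4, 0, 7]
    print(by_rows)      # [4, 0, 7], the same vector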


c

L t us defi ne two othe r operatio n in o l in g matri ce .

Definition 1.3.9

We ca n find a vector in 1R111 such th at the ys te m E.l: = is incon si. te nt: Any vecto r whose ~ast com po ne nt is no nzero will do . Using this vecto r how can we find a vec to r b in 111 such th at the system A.l: = b is in con is tent? T he key idea i to work backw ard thro ug h Ga us -Jordan e limjnatio n. We know th at there is a seq uence of e leme ntary row o peratio n, which tran fo rm A into E. By invetting each of th e e Operatio ns a nd pe t·fo rming them in re er e o rder, we ca n find a seq uence of e le me nt ary row Operat ions that tran fo rms E int A (fo r exam ple, in tead of di vid ing a row by k we multipl y the ame row b k ). If we apply the ame ro w Operation to the a ug mented matri x [ E : c], we end

Sums of matrices T he sum of rwo matrices of the - ame size is defi ned entry by ntry. 11

.

0!

]

+

[

. . . a,;m

.

h11

b,;,t

c

111

x n matti

c,

A.r

= b i inco n i. tent, a de ired up w ith a matrix [ A : b] s uch that the , y -rem (reca ll that the sy tem Ex = is inco nsi te nt and eJe men tary row operatio n do not change the e t o f o luti o ns . For example Iet

Scalar multiples of m atrices Tbe produ r of a sca lar k w ith an

c

43

is defined entr by entry.

Cl ! !

k

: [

Clm \

ote th at the e definitions genera lize the co rre pond ing operation fo r vecror. (see the Appendix , Defi ni tion Al ).

EXAMPLE 11 ~

A= EXAlVIPLE 12 ~

r~

".) [ - 21

3 4 -!-

ilJ

r~ ~l

t

4

We w ill co nclude thi secti on with an example that applies many of the concepts we have introduced.

EXAl\•I PLE 13 ~ Consider an 111 x n malri x A wi th rank(A) < m . Show that there is a vector IR"' such that the syste m A.l: = b is inconsistent.

I 2

2

4 8 2 4

-7-(2)

3

h in

3

6 2

t

-1-

r~

Solution Consider the reduced row echelon form E = rref(A) . S ince the rank of the matri x A is les th an the number of its rows, the last row of E does not conta in a leading I and is therefore zero.

4 "

..)

;l

J2

- 4(/ I) - 3(1 1) - (II)

t

-1-

E = rref(A) =

L 000

00

E=

r~ ~l 0

0 0

I

I

0

0

Jl

cJ =

2

0 0 0 0

il

+ 4 ( // +3 (11) + (I I )

44 •

Chap. 1 Li nea r Equation s

Sec. 1.3 On th e Sal ution s of Li near Sys tems •

The rea onin g above how that the sy tem

45

7. Con ider the vector v 1 ii2 , ii 3 in JR2 ketched below. How man y olurion does the system

Ai = b

in con istent.
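The construction of Example 13 can be confirmed by machine. The following sketch (using sympy; the matrix A is the one reconstructed above from a damaged passage) checks that the right-hand side b = [0; 0; 1] produced by running the row operations backward indeed yields an inconsistent system: the rref of the augmented matrix contains the telltale row [0 0 | 1].

    from sympy import Matrix

    A = Matrix([[2, 4], [4, 8], [3, 6]])
    b = Matrix([0, 0, 1])
    augmented = A.row_join(b)            # the augmented matrix [A | b]
    print(augmented.rref()[0])           # contains the row [0, 0, 1]: 0 = 1, inconsistent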

EXERCISES

GOALS  Use the reduced row echelon form of the augmented matrix to find the number of solutions of a linear system. Apply the definition of the rank of a matrix. Compute the product Ax in terms of the columns or the rows of A. Represent a linear system in vector or in matrix form.

1. The reduced row echelon forms of the augmented matrices of three systems are given below. How many solutions does each system have?

   a. | 1 0 0 | 1 |      b. | 1 0 | 6 |      c. | 0 1 | 0 |
      | 0 0 1 | 0 |         | 0 1 | 0 |         | 0 0 | 1 |
      | 0 0 0 | 0 |

Find the rank of the matrices in Exercises 2 to 4.

2. | 1 2 3 |      3. | 1 1 1 |      4. | 1 4 7 |
   | 0 1 2 |         | 1 1 1 |         | 2 5 8 |
   | 0 0 1 |         | 1 1 1 |         | 3 6 9 |

5. a. Write the system

      |  x + 2y =  7 |
      | 3x +  y = 11 |

      in vector form.
   b. Use your answer in part a to represent the system geometrically. Solve the system and represent the solution geometrically.

6. Consider the vectors v1, v2, v3 in R^2 sketched below (v1 and v2 are parallel). How many solutions does the system

   x1 v1 + x2 v2 = v3

   have? Argue geometrically.

7. Consider the vectors v1, v2, v3 in R^2 sketched below. How many solutions does the system

   x1 v1 + x2 v2 = v3

   have? Argue geometrically.

8. Consider the vectors v1, v2, v3, v4 in R^2 sketched below. How many solutions does the system

   x1 v1 + x2 v2 + x3 v3 = v4

   have? Argue geometrically.

9. Write the system

   |  x + 2y + 3z = 1 |
   | 4x + 5y + 6z = 4 |
   | 7x + 8y + 9z = 9 |

   in matrix form.

Compute the dot products in Exercises 10 to 12 (if the products are defined).

10. [1; 2; 3] . [1; 1; 1]      11. [1; 9; 9; 7] . [1; 1; 1; 1]      12. [1; 2; 3; 4] . [1; 2; 3]

Compute the products Ax in Exercises 13 to 15 using paper and pencil. In each case, compute the product two ways: in terms of the columns of A (Definition 1.3.6), and in terms of the rows of A (Fact 1.3.8).

13. | 1 2 | [1; 2]      14. | 5 6 | [-1; 1]      15. [1 2 3 4] [1; 2; 3; 4]
    | 3 4 |                 | 7 8 |

Compute the products Ax in Exercises 16 to 19 using paper and pencil (if the products are defined).

16. | 1 2 | [-3; -5]      17. | 4 1 | [1; 2; 3]      18. | 1 2 | [1; 2]      19. | 1 2 3 | [1; 2]
    | 3 4 |                   | 2 5 |                     | 3 4 |                 | 4 5 6 |
                                                          | 5 6 |

20. a. Find [1; -1; 4] + [3; 2; 1].
    b. Find 4 [-1; 2; 3].

21. Use technology to compute the product

    | 1 2 3 | [7; 8; 9].
    | 4 5 6 |
    | 7 8 9 |

22. Consider a linear system of three equations with three unknowns. We are told that the system has a unique solution. What does the reduced row echelon form of the coefficient matrix of this system look like? Explain your answer.

23. Consider a linear system of four equations with three unknowns. We are told that the system has a unique solution. What does the reduced row echelon form of the coefficient matrix of this system look like? Explain your answer.

24. Let A be a 4 x 4 matrix, and let b and c be two vectors in R^4. We are told that the system Ax = b has a unique solution. What can you say about the number of solutions of the system Ax = c?

25. Let A be a 4 x 4 matrix, and let b and c be two vectors in R^4. We are told that the system Ax = b is inconsistent. What can you say about the number of solutions of the system Ax = c?

26. Let A be a 4 x 3 matrix, and let b and c be two vectors in R^4. We are told that the system Ax = b has a unique solution. What can you say about the number of solutions of the system Ax = c?

27. True or false? Justify your answers. Consider a system Ax = b.
    a. If Ax = b is inconsistent, then rref(A) contains a row of zeros.
    b. If rref(A) contains a row of zeros, then Ax = b is inconsistent.

28. True or false? Justify your answers. Suppose the matrix E is in reduced row echelon form.
    a. If we omit a row of E, then the remaining matrix is in reduced row echelon form.
    b. If we omit a column of E, then the remaining matrix is in reduced row echelon form.

29. True or false? Justify your answer. Consider a system Ax = b. This system is consistent if (and only if) rank(A) = rank[A | b].

30. If the rank of a 5 x 3 matrix A is 3, what is rref(A)?

31. If the rank of a 4 x 4 matrix A is 4, what is rref(A)?

32. True or false? Consider the system Ax = b, where A is an n x n matrix. This system has a unique solution if (and only if) rank(A) = n.

33. Let A be the n x n matrix with all 1's on the diagonal and all 0's outside the diagonal. What is Ax, where x is a vector in R^n?

34. We define the vectors

    e1 = [1; 0; 0],  e2 = [0; 1; 0],  e3 = [0; 0; 1]

    in R^3.
    a. For an arbitrary 3 x 3 matrix

       A = | a b c |
           | d e f |,
           | g h k |

       compute Ae1, Ae2, and Ae3.
    b. If B is an m x 3 matrix with columns v1, v2, and v3, what is Be1? Be2? Be3?

35. In R^n, we define

    e_i = [0; ...; 0; 1; 0; ...; 0]   (the ith component is 1, all other components are 0).

    If A is an m x n matrix, what is Ae_i?

36. Find a 3 x 3 matrix A such that

    A e1 = [1; 2; 3],  A e2 = [4; 5; 6],  A e3 = [7; 8; 9].

37. Find all vectors x such that Ax = b, where

    A = | 1 2 3 |        b = [1; 2; 0].
        | 0 0 1 |
        | 0 0 0 |

38. a. Using technology, generate a "random" 3 x 3 matrix A (the entries may be either single-digit integers or numbers between 0 and 1, depending on the technology you are using). Find rref(A). Repeat this experiment a few times.
    b. What does the reduced row echelon form of "most" 3 x 3 matrices look like? Explain.

39. Repeat Exercise 38 for 3 x 4 matrices.

40. Repeat Exercise 38 for 4 x 3 matrices.

41. How many solutions do "most" systems of three linear equations with three unknowns have? Explain in terms of your work in Exercise 38.

42. How many solutions do "most" systems of three linear equations with four unknowns have? Explain in terms of your work in Exercise 39.

43. How many solutions do "most" systems of four linear equations with three unknowns have? Explain in terms of your work in Exercise 40.

44. Consider an m x n matrix A with more rows than columns (m > n). Show that there is a vector b in R^m such that the system Ax = b is inconsistent.

45. Consider an m x n matrix A, a vector x in R^n, and a scalar k. Show that

    A(kx) = k(Ax).

46. Find the rank of the matrix

    | a b c |
    | 0 d e |
    | 0 0 f |,

    where a, d, f are nonzero, and b, c, e are arbitrary numbers.

47. A linear system of the form

    Ax = 0

    is called homogeneous. Justify the following facts:
    a. All homogeneous systems are consistent.
    b. A homogeneous system with fewer equations than unknowns has infinitely many solutions.
    c. If x1 and x2 are solutions of the homogeneous system Ax = 0, then x1 + x2 is a solution as well.
    d. If x is a solution of the homogeneous system Ax = 0, and k is an arbitrary constant, then kx is a solution as well.

48. Consider a solution x1 of the linear system Ax = b. Justify the facts stated in parts a and b:
    a. If xh is a solution of the system Ax = 0, then x1 + xh is a solution of the system Ax = b.
    b. If x2 is another solution of the system Ax = b, then x2 - x1 is a solution of the system Ax = 0.
    c. Now suppose A is a 2 x 2 matrix. A solution vector x1 of the system Ax = b is sketched below (in R^2). We are told that the solutions of the system Ax = 0 form the line sketched below. Sketch the line consisting of all solutions of the system Ax = b.

    [Figure: the vector x1 and the line of solutions of Ax = 0.]

    If you are puzzled by the generality of this problem, think about an example first.

49. Consider the table below. For some linear systems Ax = b, you are given either the rank of the coefficient matrix A, or the rank of the augmented matrix [A | b]. For each system, state whether the system could have no solution, one solution, or infinitely many solutions. There may be more than one possibility for some systems. Justify your answers.

        Number of equations | Number of unknowns | Rank of A | Rank of [A | b]
    a.          3           |         4          |           |        2
    b.          4           |         3          |     3     |
    c.          4           |         3          |           |        4
    d.          3           |         4          |     3     |

50. Consider a linear system Ax = b, where A is a 4 x 3 matrix. We are told that rank[A | b] = 4. How many solutions does this system have?

51. Consider an m x n matrix A, an r x s matrix B, and a vector x in R^p. For which choices of m, n, r, s, and p is the product

    A(Bx)

    defined?

52. Consider the matrices

    A = | 1 2 |      and      B = | 0 -1 |.
        | 3 4 |                   | 1  0 |

    Can you find a 2 x 2 matrix C such that

    A(Bx) = Cx,

    for all vectors x in R^2?

53. If A and B are two m x n matrices, is

    (A + B)x = Ax + Bx,

    for all x in R^n?

54. Consider two vectors v1 and v2 in R^3 that are not parallel. Which vectors in R^3 are linear combinations of v1 and v2? Describe the set of these vectors geometrically. Include a sketch in your answer.

55. Is the vector [7; 10] a linear combination of [1; 2] and [3; 4]?

56. Is the vector

    [30; -1; 38; 56; 62]

    a linear combination of

    [1; 7; 1; 9; 4],  [5; 6; 3; 2; 8],  [9; 2; 3; 5; 2],  [-2; -5; 4; 7; 9]?

57. Express the vector [7; 4] as the sum of a vector on the line y = 3x and a vector on the line y = x/2.

    [Figure: the two lines y = 3x and y = x/2, and the vector to be decomposed.]


your encodecl position will be

Y=[y'Y2 ]=[l31] 220 (see Figure 1). The cocling transformation can be represented as

LINEAR TRANSFORMATIONS

or, more uccinctly, as

Iy=

Ax

j.

Tbe _matrix A is called the coefficient matrix of the tran formation , or simpl y its matnx. A transfonnation of tbe form

y =Ax .

1

i called a linear transform.ation. We will discuss thi s important concept in greater detail below.

INTRODUCTION TO LINEAR TRANSFORMATIONSAND THEIR INVERSES

~s tbe _hip reacbes a new position, tbe sailor on duty at headquarter in Marsedl e recetves the encoded messaae b

lmagine yourself cruising in tbe Mediterranean as a crew member on a French coast guard boat, looking for spies. Periodically, your boat radios its position to headquarters in Marseille. You have to expect that communications will be imercepted. So, before you broadcast anything, you have to tran form tbe actual position of the boat,

b=

u~~J.

He must determine tbe actual position of the boat. He will have to solve the linear system

Ax= b, or, more explicitly,

I

(x 1 for Eastern longitude, x 2 for Northern latitude), into an encoded position

x, +3x 2 = 1331 + Sx2 = 223 ·

2x t

Figure I

You use tbe followin g code: Yt = x, .Y2 = 2x ,

+ 3x2 + Sx2.

For example, when the actual position of your boat is 5°E, 42°N or

52

I

.

:~l]ual :os[iti~n] LJ~

[ ·' 2

42

y

encoded posi rion

[Y 1] Yo

=

[131] 220


Here is his solution. Is it correct?

x = [x1; x2] = [4; 43]

As the boat travels on and dozens of positions are radioed in, the sailor gets a little tired of solving all those linear systems, and he thinks there must be a general formula to simplify this task. He wants to solve the system

|  x1 + 3x2 = y1 |
| 2x1 + 5x2 = y2 |

when y1 and y2 are arbitrary constants, rather than particular numerical values. He is looking for the decoding transformation

y -> x,

that is, the inverse(1) of the coding transformation

x -> y.

The method of solution is nothing new. We apply elimination as we have for a linear system with known values y1 and y2:

|  x1 + 3x2 = y1 |             |  x1 + 3x2 = y1        |
| 2x1 + 5x2 = y2 |  - 2 (I) -> |      -x2  = -2y1 + y2 |  / (-1) ->

|  x1 + 3x2 = y1       |              | x1 = -5y1 + 3y2 |
|       x2  = 2y1 - y2 |  - 3 (II) -> | x2 =  2y1 -  y2 |

The formula for the decoding transformation is

x1 = -5y1 + 3y2
x2 =  2y1 -  y2,

or

x = By,   where B = | -5  3 |.
                    |  2 -1 |

Note that the decoding transformation is linear and that its coefficient matrix is B. The relationship between the two matrices A and B is shown in Figure 2.

[Figure 2: coding with matrix A = [1 3; 2 5], from x to y, and decoding with matrix B = [-5 3; 2 -1], from y back to x.]

Since the decoding transformation x = By is the inverse of the coding transformation y = Ax, we say that the matrix B is the inverse of the matrix A. We can write this as B = A^(-1).

Not all linear transformations are invertible. Suppose some ignorant officer chooses the code

| y1 =  x1 + 2x2 |
| y2 = 2x1 + 4x2 |

with matrix

| 1 2 |
| 2 4 |

for the French coast guard boats. When the sailor in Marseille has to decode a position, for example,

b = [89; 178],

he will be chagrined to discover that the system

|  x1 + 2x2 =  89 |
| 2x1 + 4x2 = 178 |

has infinitely many solutions, namely

x1 = 89 - 2t,   x2 = t,

where t is an arbitrary number. Because this system does not have a unique solution, it is impossible to recover the actual position from the encoded position: the coding transformation and the coding matrix A are noninvertible. This code is useless!
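The coding and decoding transformations of this section can be checked numerically. A short sketch with numpy (my choice of tool; the numbers are those of the text):

    import numpy as np

    A = np.array([[1, 3], [2, 5]])      # coding matrix
    B = np.array([[-5, 3], [2, -1]])    # decoding matrix, B = A^(-1)

    x = np.array([5, 42])               # actual position: 5 deg E, 42 deg N
    y = A @ x
    print(y)                            # [131 220], the encoded position
    print(B @ y)                        # [ 5 42], decoding recovers x

    print(B @ np.array([133, 223]))     # [ 4 43], the sailor's answer checks out

    C = np.array([[1, 2], [2, 4]])      # the ignorant officer's code
    print(np.linalg.det(C))             # 0.0: C is not invertible, the code is useless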

(1) We will discuss the concept of the inverse of a transformation more systematically in Section 2.3.

Now let us discuss the important concept of linear transformations in greater detail. Since linear transformations are a special class of functions, it may be helpful to review the concept of a function first.
Consider two sets X and Y. A function T from X to Y is a rule that associates with each element x of X a unique element y of Y. The set X is called the domain of the function, and Y is its codomain. We will sometimes refer to x as the input of the function, and to y as its output. Figure 3 shows an example where domain X and codomain Y are finite.

[Figure 3: Domain X and codomain Y of a function T.]


In precalculus and calculus you studied functions whose input and output are scalars, that is, whose domain and codomain are the real numbers R or subsets of R; for example,

f(x) = e^x,      g(t) = (t^2 - 2)/(t - 1).

In multivariable calculus you may have encountered functions whose input and/or output were vectors.

EXAMPLE 1 > y = x1^2 + x2^2 + x3^2
This formula defines a function from R^3 to R. The input is the vector x = [x1; x2; x3], and the output is the scalar y. <

EXAMPLE 2 > r = [cos(t); sin(t); t]
This formula defines a function from R to R^3, with input t and output r. <

We now return to the topic of linear transformations.

Definition 2.1.1  Linear transformations
A function T from R^n to R^m is called a linear transformation if there is an m x n matrix A such that

T(x) = Ax,   for all x in R^n.

This is a preliminary definition; a more conceptual characterization of linear transformations will follow in Fact 2.2.1.
Make sure you understand that a linear transformation is a special kind of function. The input and the output are both vectors. If we denote the output vector T(x) by y, we can write

y = Ax.

Let us write this equation in terms of its components:

[y1; y2; ...; ym] = [a11 x1 + a12 x2 + ... + a1n xn; a21 x1 + a22 x2 + ... + a2n xn; ...; am1 x1 + am2 x2 + ... + amn xn],

or

y1 = a11 x1 + a12 x2 + ... + a1n xn
y2 = a21 x1 + a22 x2 + ... + a2n xn
   ...
ym = am1 x1 + am2 x2 + ... + amn xn.

The output variables y_i are linear functions of the input variables x_j. In some branches of mathematics, a first-order function with a constant term, such as y = 3x1 - 7x2 + 5x3 + 8, is called linear. Not so in linear algebra: the linear functions of n variables are those of the form y = c1 x1 + c2 x2 + ... + cn xn, for some coefficients c1, c2, ..., cn.

EXAMPLE 3 > The linear transformation

y1 = 7x1 + 3x2 - 9x3 + 8x4
y2 = 6x1 + 2x2 - 8x3 + 7x4
y3 = 8x1 + 4x2       + 7x4

is represented by the 3 x 4 matrix

A = | 7 3 -9 8 |
    | 6 2 -8 7 |
    | 8 4  0 7 |. <

EXAMPLE 4 > The coefficient matrix of the identity transformation

y1 = x1
y2 = x2
  ...
yn = xn

(a linear transformation from R^n to R^n whose output equals its input) is the n x n matrix

| 1 0 ... 0 |
| 0 1 ... 0 |
| .       . |
| 0 0 ... 1 |

(all entries on the main diagonal are 1, and all other entries are 0). This matrix is called the identity matrix and is denoted by I_n:

I2 = | 1 0 |,      I3 = | 1 0 0 |,      etc.
     | 0 1 |            | 0 1 0 |
                        | 0 0 1 |

We have already seen the identity matrix in other contexts. For example, we have shown that a linear system Ax = b of n equations with n unknowns has a unique solution if and only if rref(A) = I_n (see Fact 1.3.4). <

EXAMPLE 5 > Give a geometric interpretation of the linear transformation

y = Ax,   where A = | 0 -1 |.
                    | 1  0 |


Solution
First, we rewrite the linear transformation to show the components:

[y1; y2] = | 0 -1 | [x1; x2] = [-x2; x1].
           | 1  0 |

Now consider the geometric relationship between the input vector x = [x1; x2] and the corresponding output vector y = [y1; y2] = [-x2; x1]. We observe that the two vectors x and y have the same length, sqrt(x1^2 + x2^2), and that they are perpendicular to one another (because the dot product equals 0). From the signs of the components, we know that if x is in the first quadrant, then y will be in the second, as shown in Figure 4. The output vector y is obtained from x by rotating through an angle of 90 degrees (pi/2 radians) in the counterclockwise direction. Check that the rotation is indeed counterclockwise when x is in the second, third, or fourth quadrant. <

[Figure 4: the input x and the output y = T(x), perpendicular to x.]

EXAMPLE 6 > Consider the linear transformation T(x) = Ax, with

A = | 1 2 3 |.
    | 4 5 6 |
    | 7 8 9 |

Find T([1; 0; 0]) and T([0; 0; 1]).

Solution
A straightforward computation shows that

T([1; 0; 0]) = [1; 4; 7]   and   T([0; 0; 1]) = [3; 6; 9].

Note that T([1; 0; 0]) is the first column of the matrix A, and T([0; 0; 1]) is its third column. <

This observation can be generalized as follows:

Fact 2.1.2  Consider a linear transformation T from R^n to R^m. Then the matrix of T is

A = [ T(e1)  T(e2)  ...  T(en) ],

where e_i = [0; ...; 0; 1; 0; ...; 0] (the ith component is 1, all other components are 0).

To justify this result, write A = [v1 v2 ... vn], where v1, v2, ..., vn are the columns of A. Then

T(e_i) = A e_i = 0 v1 + ... + 1 v_i + ... + 0 vn = v_i,

the ith column of A, by definition of the product A e_i.

The vectors e1, e2, ..., en in R^n are sometimes referred to as the standard vectors in R^n. The standard vectors e1, e2, e3 in R^3 are often denoted by i, j, k.

EXERCISES

GOAL  Use the concept of a linear transformation in terms of the formula y = Ax, and interpret simple linear transformations geometrically. Find the inverse of a linear transformation from R^2 to R^2 (if it exists). Find the matrix of a linear transformation column by column.

Consider the transformations from R^3 to R^3 defined in Exercises 1 to 3. Which of these transformations are linear?

1. y1 = 2x2, y2 = x2 + 2, y3 = 2x2

2. y1 = 3x3, y2 = x2, y3 = x1

3. y1 = x2 - x3, y2 = x1 x3, y3 = x1 - x2

4. Find the matrix of the linear transformation

   y1 = 9x1 + 3x2 - 3x3
   y2 = 2x1 - 9x2 +  x3
   y3 = 4x1 - 9x2 - 2x3
   y4 = 5x1 +  x2 + 5x3.

5. Consider a linear transformation T from R^3 to R^2 for which the values T(e1), T(e2), and T(e3) are given. Find the matrix A of T.

6. Consider the transformation T from R^2 to R^3 given by

   T([x1; x2]) = x1 v1 + x2 v2,

   for two given vectors v1 and v2 in R^3. Is this transformation linear? If so, find its matrix.

7. Suppose v1, v2, ..., vn are arbitrary vectors in R^m. Consider the transformation from R^n to R^m given by

   T([x1; x2; ...; xn]) = x1 v1 + x2 v2 + ... + xn vn.

   Is this transformation linear? If so, find its matrix A in terms of the vectors v1, v2, ..., vn.

8. Find the inverse of the linear transformation

   y1 =  x1 +  7x2
   y2 = 3x1 + 20x2.

In Exercises 9 to 12, decide whether the given matrix is invertible. Find the inverse if it exists. In Exercise 12, the constant k is arbitrary.

9. | 2 3 |      10. | 1 2 |      11. | 1 2 |      12. | 1 k |
   | 6 9 |          | 3 9 |          | 3 4 |          | 0 1 |

13. Prove the following facts:
    a. The 2 x 2 matrix

       A = | a b |
           | c d |

       is invertible if and only if ad - bc != 0. Hint: consider the cases a != 0 and a = 0 separately.
    b. If

       | a b |
       | c d |

       is invertible, then

       | a b |^(-1)        1      |  d -b |
       | c d |       = --------- | -c  a |.
                       ad - bc

    (The formula in part b is worth memorizing.)

14. a. For which choices of the constant k is the matrix

       | 2 3 |
       | 5 k |

       invertible?
    b. For which choices of the constant k are all entries of its inverse integers?

15. For which choices of the constants a and b is the matrix

    A = | a -b |
        | b  a |

    invertible? What is the inverse in this case?

Give a geometric interpretation of the linear transformations defined by the matrices in Exercises 16 to 23. In each case, decide whether the transformation is invertible. Find the inverse if it exists, and interpret it geometrically.

16. | 3 0 |     17. | 0 1 |     18. | 0.5 0   |     19. | 1  0 |
    | 0 3 |         | 1 0 |         | 0   0.5 |         | 0 -1 |

20. | 1 2 |     21. | -1  0 |    22. | 0 -1 |     23. | -1 0 |
    | 0 1 |         |  0 -1 |        | 1  0 |         |  0 1 |

Consider the circular face drawn below. For each of the matrices A in Exercises 24 to 30, draw a sketch showing the effect of the linear transformation T(x) = Ax on this face.

[Figure: a circular face.]

24. | 3 0 |     25. | 0 1 |     26. | -1  0 |    27. | 2 0 |
    | 0 3 |         | 1 0 |         |  0 -1 |        | 0 1 |

28. | 1 0 |     29. | 1 1 |     30. | 0 -1 |
    | 0 4 |         | 0 1 |         | 1  0 |

31. In Chapter 1 we mentioned that the image of Gauss shown on a German bill is a mirror image of Gauss' actual likeness. What linear transformation T can you apply to get the actual picture back?

32. Find an n x n matrix A such that Ax = 3x, for all x in R^n.

33. Consider the transformation T from R^2 to R^2, which rotates any vector x through an angle of 45 degrees in the counterclockwise direction. You are told that T is a linear transformation (this will be shown in the next section). Find the matrix of T.

34. Consider the transformation T from R^2 to R^2, which rotates any vector x through a given angle phi in the counterclockwise direction (compare with Exercise 33). You are told that T is linear. Find the matrix of T in terms of phi.

35. In the French coast guard example in this section, suppose you are a spy watching the boat and listening in on the radio messages from the boat. You collect the following data: the messages radioed for two known actual positions (the four vectors are given). Can you crack their code, that is, find the coding matrix, assuming that the code is linear?

36. Consider a linear transformation T from R^n to R^m. Using Fact 1.3.7, justify the following equations:

    T(v + w) = T(v) + T(w),   for all vectors v, w in R^n;
    T(kv) = kT(v),            for all vectors v in R^n and all scalars k.

37. Consider a linear transformation T from R^2 to R^2. Suppose v and w are two arbitrary vectors in R^2, and x is a third vector whose endpoint is on the line segment connecting the endpoints of v and w. Is the endpoint of the vector T(x) necessarily on the line segment connecting the endpoints of T(v) and T(w)? Justify your answer.
    Hint: we can write x = v + k(w - v), for some scalar k between 0 and 1. Exercise 36 is helpful.

38. The two column vectors v1 and v2 of a 2 x 2 matrix A are sketched below. Consider the linear transformation T(x) = Ax from R^2 to R^2. Sketch the vector T(x) for the given vector x.

    [Figure: the vectors v1 and v2.]

39. True or false? If T is a linear transformation from R^n to R^m, then

    T([x1; x2; ...; xn]) = x1 T(e1) + x2 T(e2) + ... + xn T(en),

    where e1, e2, ..., en are the standard vectors in R^n.

40. Describe all linear transformations from R (= R^1) to R. What do their graphs look like?

41. Describe all linear transformations from R^2 to R (= R^1). What do their graphs look like?

42. When you represent a three-dimensional object graphically in the plane (on paper, the blackboard, or a computer screen), you have to transform spatial coordinates [x1; x2; x3] into plane coordinates [y1; y2]. The simplest choice is a linear transformation, for example, the one given by the matrix

    | -1/2  1  0 |
    | -1/2  0  1 |.

    a. Use this transformation to represent the unit cube with corner points

       [0; 0; 0], [1; 0; 0], [0; 1; 0], [0; 0; 1], [1; 1; 0], [1; 0; 1], [0; 1; 1], [1; 1; 1].

       Include the images of the x1, x2, and x3 axes in your sketch.
    b. Which points [x1; x2; x3] are transformed to [0; 0]? Explain.

43. a. Consider the vector v = [2; 3; 4]. Is the transformation T(x) = v . x (the dot product) from R^3 to R linear? If so, find the matrix of T.
    b. Consider an arbitrary vector v in R^3. Is the transformation T(x) = v . x linear? If so, find the matrix of T (in terms of the components of v).
    c. Conversely, consider a linear transformation T from R^3 to R. Show that there is a vector v in R^3 such that T(x) = v . x, for all x in R^3.

44. The cross product of two vectors in R^3 is defined by

    [a1; a2; a3] x [b1; b2; b3] = [a2 b3 - a3 b2; a3 b1 - a1 b3; a1 b2 - a2 b1].

    Consider an arbitrary vector v in R^3. Is the transformation T(x) = v x x from R^3 to R^3 linear? If so, find its matrix in terms of the components of the vector v.

45. True or false? If T is a linear transformation from R^3 to R^3, then

    T(v x w) = T(v) x T(w),

    for all vectors v and w in R^3. See Exercise 44 for the definition of the cross product.

2.2  LINEAR TRANSFORMATIONS IN GEOMETRY

In the last section, we defined a linear transformation as a function y = T(x) from R^n to R^m such that

y = Ax,

for some m x n matrix A.

EXAMPLE 1 > Consider a linear transformation T(x) = Ax from R^n to R^m.
a. What is the relationship between T(v), T(w), and T(v + w), where v and w are vectors in R^n?
b. What is the relationship between T(v) and T(kv), where v is a vector in R^n and k is a scalar?


Sec. 2.2 Li near Transformati ons in Geometry •

Chap. 2 Linea r Transfo rma tions T(kv) = kT(Ü)

T (x)

. ''

= T l~; ] =

T (x ,e,

+ x 2 e2 + . . . + x" e" )

Xn

'



Figure 1 (o) lllustroting the property r + wl = r !vl + T!Wl.

= T (x ,e,)



rv

+ T (x2e2) + ... + T (x"e")

(b) lllustroting the property b)

(a)

Solution

T (e")

a. Applying Fact 1.3.7 we find that T (Ü

+ w)

=

A

ü + w)

=



+ Aw

=

T ( Ü)

+ T (w).

In words: The transform of the sum of two vector equal the um of the transforms.

T (k v)

=

A(k - )

kT (Ü) .

Xn

Solution

e

e

The unü square consi ts of aJJ vectors of tbe form .t = x 1 1 + x 2 2 , where x 1 and x2 are between 0 and 1. Therefore, the image of the uni t square consists of all vectors of the form

Figure I ilJu strates these two properties in the case of the linear Iransformati on T from 2 to ~ 2 which rotate a vector through an angle of 90° in the counterclockwi se direction (compare with Example 2. 1.5). In Example 1 we saw that a linear transformation sati sfies the two equation s T ( ü + w) = T (v + T (w ) and T (k ü) = k T (Ü). Now we will show that the converse is true as well: Any tran sformation from R" to R"' which sati sfies thesc two equations is a linear tran form ation.

T (-'i) = T (x ,e,

+ X2e2)

=

XtT

(e ,)

+ x2 T (e_).

where x 1 and x 2 are between 0 and 1. The last step folJows from Fact 2.2.l. One such vector T (x) is shown in Figure 3. Figure 2

x

T ~

A tran formation T from IR" to IR"' is linear ' if (and onl y if)

+ T (w),

(by Definition 1.3.6) •

are sketched in Fi gure 2. 1 Sketch the image2 of the uni t quare under thi s trau formati on.

In words: The transform of a calar mulüpl e of a vector i the caJar multipl e of the transfonn . -<1111

a . T ( ü + w) = T (v)

= A,t

EXAMPLE 2 ~ Con ider a linear Iransform ati on T from R 2 to ~ 2 . The vectors T(e 1 and T( e 2)

= = kAÜ

l

:_:~ ]

Here is an example illu trating Fact 2.2. 1.

b. Again , apply Fact 1.3.7:

Fact 2.2.1

(by property a) (by property b)

= X t T (e, ) + X2 T (e2) + ' .. + x" T (e/1 )

r !kVJ = kT !vJ.

67

for all ü, ÜJ rin R", and

b. T (k v) = kT (v) , for all ü in R" and all sca ars k.

Proof

In Example I we saw that a linear transformation sati sfi es the equations a and b. To prove the converse, consider a Iransformation T from IR" to !R"' that satisfies equations a and b. We must show that there is a matrix A such that T Ci ) = Ax, for all X in !R". Let eI ' . . . ' eil be the Standard vectors introduced in Fact 2.1. 2.

1

ln ma ny texts, a linear transfonnaii on is defined as a fun ction from IR" to IR 111 wit.h these two propen ies. T he order of presentatio n doe not rea ll y mauer; what is important is that you thin k o f a linear Irans formation both as a fun ction T from lR" to IR"' of the form T (x) = Ax and as a fun cti on w hich has the propen ies a and b ·tated in Fact 2.2. 1.

Do ma in

Codomain

1Note that the.re are two sli ghtly di ffere nt ways to re present a linear Lran formati n fr m R 2 to IR geometricall y. Somet imes we wil l draw two di ffe rent planes to represen t do main and codo ma in (a here) . . and sometimes wc will draw the in put .r und the outpul _v = T(.r ) in the same pl ane (as in Examp le 2. 1.5). The fit" l represent, tion is les crowded. wh.i le the second one cln rifie the geome tric relation hip between X and T (x ) (for examp le, in Exa mple 2. 1.5, ;r ~ nd TC:r) are perpendicu lar). 2The imoge o f a subset S of the do ma in consists of Lhe vectors T (x). for a ll .1' in S.
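Properties a and b of Fact 2.2.1 are easy to spot-check numerically for any matrix transformation. A quick sketch (the matrix, the test vectors, and the scalar are arbitrary choices of mine):

    import numpy as np

    rng = np.random.default_rng(0)
    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    T = lambda x: A @ x

    v, w = rng.standard_normal(2), rng.standard_normal(2)
    k = 2.5
    print(np.allclose(T(v + w), T(v) + T(w)))   # property a holds
    print(np.allclose(T(k * v), k * T(v)))      # property b holds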


Here is an example illustrating Fact 2.2.1.

EXAMPLE 2 > Consider a linear transformation T from R^2 to R^2. The vectors T(e1) and T(e2) are sketched in Figure 2.(1) Sketch the image(2) of the unit square under this transformation.

[Figure 2: the domain with e1 and e2, and the codomain with T(e1) and T(e2).]

Solution
The unit square consists of all vectors of the form x = x1 e1 + x2 e2, where x1 and x2 are between 0 and 1. Therefore, the image of the unit square consists of all vectors of the form

T(x) = T(x1 e1 + x2 e2) = x1 T(e1) + x2 T(e2),

where x1 and x2 are between 0 and 1. The last step follows from Fact 2.2.1. One such vector T(x) is shown in Figure 3. The vector T(x) in Figure 3 is in the shaded parallelogram shown in Figure 4. Conversely, any vector b in the shaded parallelogram can be written as

b = x1 T(e1) + x2 T(e2) = T(x1 e1 + x2 e2),

for two scalars x1 and x2 between 0 and 1. This shows that the image of the unit square is the parallelogram defined by T(e1) and T(e2). <

[Figures 3 and 4: a vector T(x), and the shaded parallelogram spanned by T(e1) and T(e2).]

For generalizations of this example, see Exercises 35 and 36.

(1) Note that there are two slightly different ways to represent a linear transformation from R^2 to R^2 geometrically. Sometimes we will draw two different planes to represent domain and codomain (as here), and sometimes we will draw the input x and the output y = T(x) in the same plane (as in Example 2.1.5). The first representation is less crowded, while the second one clarifies the geometric relationship between x and T(x) (for example, in Example 2.1.5, x and T(x) are perpendicular).
(2) The image of a subset S of the domain consists of the vectors T(x), for all x in S.

EXAMPLE 3 > Consider a linear transformation T from R^2 to R^2 such that T(v1) = (1/2) v1 and T(v2) = 2 v2, for the vectors v1 and v2 sketched in Figure 5. On the same axes, sketch T(x), for the vector x given in the same figure. Explain your solution.

Solution
Using a parallelogram, we can represent x as a linear combination of v1 and v2:

x = c1 v1 + c2 v2,

as shown in Figure 6. By Fact 2.2.1,

T(x) = T(c1 v1 + c2 v2) = c1 T(v1) + c2 T(v2) = (1/2) c1 v1 + 2 c2 v2.

The vector c1 v1 is cut in half, and the vector c2 v2 is doubled, as shown in Figure 7. <

[Figures 5, 6, 7: the vectors v1, v2, x; the parallelogram representing x = c1 v1 + c2 v2; and the image y = T(x).]

If you think of the domain R^2 of the transformation T as a rubber sheet, you can imagine that T expands the sheet by a factor of 2 in the v2-direction and contracts it by a factor of 1/2 in the v1-direction.

+ Rotations

Next, we present some classes of linear transformations from R^2 to R^2 that are of interest in geometry.
Consider the transformation T from R^2 to R^2 that rotates a vector x through an angle alpha in the counterclockwise direction,(1) as shown in Figure 8. Recall that in Example 2.1.5 we studied a rotation with alpha = pi/2.

EXAMPLE 4 > Let T be the counterclockwise rotation through an angle alpha.
a. Draw sketches to illustrate that T is a linear transformation.
b. Find the matrix of T.

[Figure 8: a rotation.]

(1) It is easiest to define a rotation in terms of polar coordinates: the length of T(x) equals the length of x, and the argument (or polar angle) of T(x) exceeds the argument of x by alpha.


Solution
a. See Figure 1 for the case alpha = pi/2. For an arbitrary alpha, we illustrate only the property T(kv) = kT(v), leaving the property T(v + w) = T(v) + T(w) as an exercise. Figure 9 shows that the vectors T(kv) and kT(v) are equal: convince yourself that the two vectors have the same length and the same argument.

[Figure 9: T(kv) = kT(v) for a rotation.]

b. The matrix of the transformation T is

   [ T(e1)  T(e2) ],

   by Fact 2.1.2. To find T(e1) and T(e2), apply basic trigonometry, as shown in Figure 10:

   T(e1) = [cos(alpha); sin(alpha)],   T(e2) = [-sin(alpha); cos(alpha)].

   Note that the vectors T(e1) and T(e2) both have length 1. <

[Figure 10: T(e1) and T(e2) on the unit circle.]

Fact 2.2.2  Rotations
The matrix of a counterclockwise rotation through an angle alpha is

| cos(alpha)  -sin(alpha) |
| sin(alpha)   cos(alpha) |.

This formula is important in many applications, from physics to computer graphics; it is worth memorizing. Here are two special cases:

- The matrix of a counterclockwise rotation through an angle of pi/2 (or 90 degrees) is

  | cos(pi/2)  -sin(pi/2) |   | 0 -1 |
  | sin(pi/2)   cos(pi/2) | = | 1  0 |,

  as in Example 2.1.5.

- The matrix of a counterclockwise rotation through an angle of pi/6 (or 30 degrees) is

  | cos(pi/6)  -sin(pi/6) |   | sqrt(3)/2    -1/2    |
  | sin(pi/6)   cos(pi/6) | = |   1/2      sqrt(3)/2 |.
+ Rotation-Dilations EXAMPLE 5 ..... Give a geometric interpretation of the linear transformation T (i) = [

where a and b are arbitrary constants.

~

-b]a

x,

-!l

th at i , the colu mn of the

e1

and e2), write the vector T (e 1) = [

~ J in term

e

o f it polar coordin-

ate. :

[a] [ CO Ci ] r

b

- . r sin cx

·

as illus trated in Figure 1- . Then

as in Example 2. 1.5.

.

(not ju t

is

l

cos 2

sm -

=[

matrix. Note that T (e1) and T (e2 ) are perpendi c ular vector of the ame Jength, a how n in F igure 11. We can ee that the vector T (e 1) and T (e2 ) are obtained from 1 and 2 by fi r t perfo m1ing a counterclockwi e rotation through the angle a, where tan cx = b/ a. and then a dilation by the fac tor r = Ja2 + b 2 , the length of the ve tor T (e J) and T (e2). To veri fy th at the tran form ation act in the ame way on all ve tor

in~ ] = [O -1] 0 '

JT

and T (e2

e

incx] · CO Ci

cos ~

= [~]

[~ The matrix [

~

- r sin cx] - r [ co

r cos cx

cx

sm cx

-

in cx ] · cos Ci

- b ] is a scalar mul tiple of a rotatjon matrix ( ee De fi ni tion 1.3.9 (/

and Fact 2.2.2). The refore,

- [a -b]-

T (x) =

b

(I

x =r

[ CO. . Ci 111 CX

A ve tor X is first rotated th rough an a ng le cx in the counterc lockw ise di recti on; the re ulting vector i then mul t.i plied by r, whi ch repre e nt.. a dil ati on, a · how n in Figure 13. -
72 •

Sec. 2.2 Linear Transfo rmations in G ometry •

hap. 2 Linear Tran forma tions T(.t ) =

[a b

- b] X_

a

= I'

[cos er sm a

- sin er cos er

.

73

J~ .I

:r Figure 13 }

Figure 14

':7

Fact 2.2.3  Rotation-dilations
To interpret the linear transformation

T(x) = | a -b | x
       | b  a |

geometrically, write the vector [a; b] in polar coordinates:

[a; b] = [r cos(alpha); r sin(alpha)],

where r = sqrt(a^2 + b^2) and tan(alpha) = b/a. Then T is a counterclockwise rotation through the angle alpha followed by a dilation by the factor r. We call T a rotation-dilation.
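Fact 2.2.3 can be read off numerically: given a and b, compute r and alpha and check that the matrix factors as claimed. In the sketch below, atan2 is used rather than solving tan(alpha) = b/a directly, so that alpha lands in the correct quadrant even when a is negative; the values a = b = 1 are an arbitrary choice of mine.

    import math
    import numpy as np

    a, b = 1.0, 1.0                      # the matrix [[1, -1], [1, 1]]
    r = math.hypot(a, b)                 # sqrt(a^2 + b^2) = sqrt(2)
    alpha = math.atan2(b, a)             # pi/4

    M = np.array([[a, -b], [b, a]])
    R = np.array([[math.cos(alpha), -math.sin(alpha)],
                  [math.sin(alpha),  math.cos(alpha)]])
    print(r, alpha)                      # 1.414..., 0.785...
    print(np.allclose(M, r * R))         # True: a rotation followed by a dilation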

+ Shears

EXAMPLE 6 > Consider the linear transformation

y = | 1 1/2 | x.
    | 0  1  |

To understand this transformation geometrically, sketch the image of the unit square.

Solution
By Example 2, the image of the unit square is the parallelogram defined by T(e1) and T(e2), as illustrated in Figure 14 (compare with Figure 4). Note that

T(e1) = e1   and   T(e2) = [1/2; 1]. <

[Figure 14: the unit square and its image, the parallelogram defined by T(e1) = e1 and T(e2) = [1/2; 1].]

Here is a more tangible description of this transformation. Consider a stack of thin sheets of cardboard, viewed from the side, as shown in Figure 15. Align a board vertically against the edges of the sheets, as shown. Hold the bottom edge of the board in place while pushing the top edge to the right. The higher up a sheet is, the further it gets pushed to the right, with its elevation unchanged.

[Figure 15: the board and the stack of cardboard sheets.]

The transformation

T(x) = | 1 k | x
       | 0 1 |

is called a shear parallel to the x1-axis. More generally, we define:

Definition 2.2.4  Shears
Let L be a line(1) in R^2. A linear transformation T from R^2 to R^2 is called a shear parallel to L if
a. T(v) = v, for all vectors v in L, and
b. T(x) - x is in L, for all vectors x in R^2.

(1) Convention: all lines considered in this text run through the origin unless stated otherwise.
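The two defining properties of Definition 2.2.4 can be verified numerically for the shear [1 k; 0 1], with L the x1-axis; here k = 1/2, the matrix of Example 6. A minimal sketch:

    import numpy as np

    k = 0.5
    A = np.array([[1.0, k], [0.0, 1.0]])

    v = np.array([3.0, 0.0])     # a vector on L (second component is 0)
    print(A @ v)                 # [3. 0.]: T(v) = v, property a

    x = np.array([3.0, 4.0])
    print(A @ x - x)             # [2. 0.]: T(x) - x lies on L, property b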


Property a means that the transformation leaves the line L unchanged, and property b says that the tip of any vector x is moved parallel to the line L, as illustrated in Figure 16.

[Figure 16: a shear moves the tip of x parallel to L.]

Let us check properties a and b for the linear transformation

T(x) = | 1 1/2 | x
       | 0  1  |

discussed in Example 6, where L is the x1-axis.
a. T([x1; 0]) = [x1; 0]. The line L remains unchanged.
b. T([x1; x2]) - [x1; x2] = [x1 + (1/2)x2; x2] - [x1; x2] = [(1/2)x2; 0]. The vector T(x) - x is in L.

EXAMPLE 7 > Consider two (nonzero) perpendicular vectors u and w in R^2. Show that the transformation

T(x) = x + (u . x) w

is a shear parallel to the line L spanned by w (see Figure 17).

Solution
We leave it to you to verify that T is a linear transformation. We will check properties a and b of a shear (Definition 2.2.4).
a. If v is a vector on L, then

   T(v) = v + (u . v) w = v,

   because u . v = 0 (since the vectors u and v are perpendicular).
b. For any vector x in R^2, the vector

   T(x) - x = x + (u . x) w - x = (u . x) w

   is in L, because this vector is a scalar multiple of w. <

[Figure 17: the shear T(x) = x + (u . x) w, parallel to the line L spanned by w.]

+ Projections and Reflections

Consider a line L in R^2. For any vector v in R^2, there is a unique vector w in L such that v - w is perpendicular to L, as shown in Figure 18. This vector is called the orthogonal projection of v onto L, denoted by proj_L(v) ("orthogonal" means "perpendicular"). Intuitively, you can think of proj_L(v) as the shadow v casts on L if we shine a light straight down on L.

[Figure 18: projecting onto a line.]

How can we generalize the idea of an orthogonal projection to lines in R^n? Let L be a line in R^n, that is, the set of all scalar multiples of some nonzero vector u, which we may choose as a unit vector. For a given vector v in R^n, is it possible to find a unique vector w in L such that v - w is perpendicular to L (that is, perpendicular to u)? If such a vector exists, it is a scalar multiple w = ku of u. See Figure 19. We have to choose k so that v - w is perpendicular to u, that is,

u . (v - w) = u . (v - ku) = u . v - k(u . u) = u . v - k = 0.

We have used the fact that u . u = 1 (since u is a unit vector). Therefore, v - ku is perpendicular to u if (and only if) k = u . v.

[Figure 19: the vectors v, u, and w = ku on the line L.]

We summarize:

Fact 2.2.5  Orthogonal projections
- Let L be a line in R^n consisting of all scalar multiples of some unit vector u. For any vector v in R^n, there is a unique vector w in L such that v - w is perpendicular to L, namely, w = (u . v)u. This vector w is called the orthogonal projection of v onto L:

  proj_L(v) = (u . v) u.

- The transformation T(v) = proj_L(v) from R^n to R^n is linear.

The verification of linearity is straightforward; use Fact 2.2.1. We will check property b here, and leave property a as an exercise:

proj_L(kv) = (u . kv) u = k(u . v) u = k proj_L(v).

Consider again a line L and a vector v in R^2. The reflection ref_L(v) of v in the line L is shown in Figure 20. Figure 21 illustrates the relationship between projection and reflection: the projection is the midpoint,

proj_L(v) = (1/2)(v + ref_L(v)).

[Figures 20 and 21: the reflection of v in L, and the relationship between projection and reflection.]

Now solve for ref_L(v):

ref_L(v) = 2(proj_L(v)) - v = 2(u . v) u - v,

where u is a unit vector in L. We can use the formula 2(proj_L(v)) - v to define the reflection in a line in R^n.

Definition 2.2.6  Reflections
Let L be a line in R^n. For a vector v in R^n, the vector 2(proj_L(v)) - v is called the reflection of v in L:

ref_L(v) = 2(proj_L(v)) - v = 2(u . v) u - v,

where u is a unit vector in L.

If v is perpendicular to L, we expect that ref_L(v) = -v. This result can be confirmed by computation: if v is perpendicular to L, then u . v = 0, so that

ref_L(v) = 2(u . v) u - v = 0 - v = -v.

We leave it as an exercise for you to verify that the reflection in a line in R^n is a linear transformation from R^n to R^n.

EX ERCI SES

GOALS

Check whether a transformarion is linear. Use the marri ces of rotation and rotation-dilation s. Apply the definitions of shear . projections. and ref1ections.

1. Sketch the image of the unit square under the Linea r Iransformation Figure 20

- [3 2I] -

T (x) =

1

X.

2. Find the matrix of a rotation through an angle of 60° in the counterclockwi e direction. 3. Consider a linear tran formation T from JR2 to JR 3 . Use T (e 1) and T(e 2 ) to describe the image of th.e unit square geom trically. 4. Interpret the following linear tran formation geometrically: - = [ l T (x)

1

-1]~ ~

78 •

Sec. 2.2 Linear Tra nsformation s in Geom etry •

hap. 2 Linear Tran fo rmation s

13. Suppose a li ne L in IR 2 co ntains the uni t vector

5. T he matrix

- 0.8 [ 0.6 repre

- 0.6 ] - 0.8

nts a rotatio n. Find the ang le of rotati n (i n radian s).

6. Let L be th line in IR' that con i t of a ll scalruc multiple of the vector [

Find the orthogonal projectio n of the

ector [ : ]

~].

Find the matrix A of the linear transfo rmation T (t) = ref'-x . Give the entri e of A in terms of u 1 and u 2 •

14. S uppose a line L in ~" contains the uni t vector

- r~~J

nto L.

u=

7. Le t L be the line in IR' which co nsists of all calruc multipl s of

the reflection o f the

79

Ul

.

u"

F ind

ecl
.

a. Find the matrix A of the li near transfo rm ati on T (x) = proj Lx . Gi ve the e ntries of A in terms of the co mponents u 1 of ü. b. What is the sum of the d iagonal entries of the matri x you fo und in part a? 15. Suppo e a line L in lR" conta.ins the uni t vector

8. Interpret tbe following linear transfom1 ati on geo metri call y: T Cx) = [

~

-

~ J~

ü=

9. Interpret the following li near tran Formati on geo meLTicall y: T x) =

un

r:; J. u"

Find the matrix A of the linear transformati on T (,r ) of A iJ1 terms of the components u ; of ü.

K

10. Find the matrix of the orthogonal projection onto the line L in ~ 2 be low.

16. Let T (x ) = refLx be the re flection in the Line L

=

reft_x. Gi ve the entri es

mR2

sho wn below .

a. Draw sketches to illustrate that T is li near. b. Find the matrix of T terms of ct .

m

L

11. Refer to Exercise 10. Fi nd the matri x of the refl ectio n in the line L.

17. Consider the linear transformation

12. S uppose a line L in ~ 2 contain the unit vector

- [u1]

LI =

T (x- ) = [ 1 1

.

u2

Fi nd the m atri x A of the linear transformati on T (x ) entrie of A in terms of u 1 and u 2 .

proj '-x . Give the

1] I

X.

a. Ske tch the image of the unit square under this transformati on. b. Explain how T can be interpre ted as a projecti on fo llowed by a dil ation.

80 •

Chap. 2 Linear Transformati ons

Sec. 2.2 Lin ear Transformatio ns in

18. fnterpret the tran fo rmatio n

eo metry •

81

29. Let T and L be tran sformations from IR" to IR". Suppose L is the inverse of T , that is,

.r.

T (L (.K)) =

a a reft ection fo ll owed b a dilalion.

fo r all

Find the mau·ices of the linear tran form ation from JR3 to JR 3 give n in Exerci e 19 to 23 . Some of these tran fonnati ons have not bee n form all y defi ned in the text. Use common sen e. You may as ume that all rhese t.ransformation are linear.

J

21. The rotation about the 7-axi through an angle of rr /2, counterclockw i e as viewed from the positive z-axis. 22. The rotation about the y-axi through an angle a, counterclockwi se a viewed from the positive y-ax.is. 23. The reftection in the plane y = . 24. Consider the linear transformation

rcx>=[i ~J .x.

where k i an arbitrary constant.

Interpret your result geometrically. 26. Let

rcx>=

1 [ -I

4] -

3

X

be a Linear transformation. Is T a shear? Explain. 27. Consider two linear transformations y = T(x ) and = L(Y), where T goes frorn IR" to !Rm and L goes from IRm to IRP. Is the tran. formation z= L(T(,r) linear as weil ? (The rransforrnarion z = L(T (x )) is called the composite ofT and L.) 28. Let

z

A=[~ ~]

and

a linear transform ation , i L hnear a well? Hint: T(L (x) )

+ T(L ()i) )

= T(L (x)

+ L ()i

),

because T is linear. Now apply L on both side . 30. Find a nonzero 2 x 2 matrix A such that Ax is parallel to the vector [

B = [P r

31. Find a nonzero 3 x 3 matrix A such that Ai is perpeod;cular to [

,r in

~

J,for

q] . s

Find the rnatrix of the linear transformation T(x) = B(Ax) (see Exerci se 27). Hinr: Find T{e 1) and T(e2 ) .

~

J.

for all

IR3 .

= [ c?

32. Consider the rotation matrix D

a

sm a

111 -

a

COSCi

J and

the vector ü =

[~~~; J, where a and ß are arbitrary angles. a. Draw a sketch to explain why Dü

a. Sketch the image of the unit square under this transformation . b. Show that T is a shear. c. Find the inverse transformation and describe it geometrically.

~ ~] ,

L (T (x)) = x

aU .r. in !R2 .

19. The orthogonal projection onto the .x- y plane. 20. The reftection in the x- z pl ane.

25. Find the inverse of the matrix [

x in IR". lf T i x+ y =

and

=

[c?s(a + ß>].

SIO(a + ß) b. Compute Dü. Use the resuJt to derive the add.ition theorems for ine and cosine: COS (et

+ ß) =

. .. ,

sin (a

+ ß) =

.. . .

33. Consider two distinct hnes L 1 and L 2 in .IR 2 • Explain why a vector ü in IR2 can be expressed uniquely a

where Ü1 is in L1 and Ü2 in L 2 . Draw a sketch. The tran formation T (v) = ü1 is called the projection onto L 1 along L 2 . Show aJgebraically that T i Linear. 34. One of the five matrices below represents an orthogonal projection onto a line and another represents a reftection in a line. Identify both and briefty ju tify your choi.ce.

A =

J

[I

3 ;

2 2]

J 2

2 1

'

'

B =

t!

D= -~u 72] ' 3

2

2 1 2

:J.

3

~ = [ -1

E=

f!

c= 3 2 - 1

2

;J

- I

n

82 •

Sec. 2.2 Linear Transformations in Geometry •

Chap. 2 Linear Transformations 2

83

2

35. Let T be an in vertible linear Iransformation from lR to JR . Let P be a parallelegram in JR 2 w ith one vertex at tbe origin . Is the image of P a pru
p

39. Let

36. Let T be an invertible linear transformation fro m JR 2 to JR-. Let P be a parallelogram in JR 2 . Is the image of P a parallelogram as weil ? Explain.

be nonzero perpendi c ular vectors. Find the matrix A of the hear T(x )

= ; + Cii -x)w

(See Example 7). Gi ve the e ntries of A in terms of a and b.

40. Let P and Q be two perpendicular Iines in iR 2 . For a vector ; in JR 2 , what is projpx + projQx? Give your answer in term of x. Draw a sketch to ju tify

tr

your an wer.

41. Let P and Q be two perpendicular lines in JR 2 . For a vector ; in JR2 • what i the relationship between refpx and refQx? Draw a ketch to ju tiEY your ans wer.

37. Let T be a linear transformation from JR2 to JR2 . Three vectors 1, ~, w in lR 2 and the vectors T(ii 1), T (v 2 ) are shown below. Sketch T (w). Explain.

v

42. Let T(,t) = projL ..r be the projection onto a line in '";,". What is the re lation hip between T (.t) and T(T(x))? Justify your answer carefully.

43. Find the inver e of the rotation matrix A = [

Xz

l) '

c? a 5111 Ct

-sina ] . CO a

Interpret the Jjnear transformation defined by A -

1

geo metric all . E plain.

44. Find the in verse of the rotation-dilat ion matri

A= [a -ba ] b (assunting that A is not the zero matrix). Interpret the linear tran formation defined by A - I geometrically. Explain. 38. Let T be a linear transformation from JR 2 to JR 2 . Let ii 1, ü2 , w be three vectors in lR 2 , as shown on the following page. We are told that T(ii 1) = ii 1 and T(ii2) = 3ii2 • On the same axes, sketch T(w).

45. Consider two linear tran formation T and L from JR2 to JR 2 . We are told that T (v 1) = L(ii,) and T(v _) = L (Ü_) for the vectors ü1 and ü2 skerched on th following page. Show that T (-"r) = L (,t), for all vec tors ;r in .JR2 .

84 •

Sec. 2.2 Li nea r Transformations in Geo met ry •

hap. 2 Linear Transformation

85

Find the functio n f(t) defi ned in Exerci e 47, graph it (u ing techno logy , and find a number c between 0 and rr/2 . uch that j(c) = 0. U e your an wer to find two perpendicul ar unit vectors v1 and ü2 such that T(ÜJ) and TC 2 ) are perpendi cu lar. Draw a sketch. 49. Sketch the im age of the unit circle under the linea r transformation

- [50 0]2 -

T(x) =

cul ar vectors 46. Con ider a hear r from ~- to ~ . Show that there are perpendi 2 - and in 2 . uch that T (r) = .X + (v · ~r)w , for all .r in ~ . 2 2 47. Let r be a linear tran formati on from JR to ~ . Consider the function 2

w

X.

50. Let T be an invertible linear transfom1ation from ' 2 to JR2 . Show that the image of the unit circle is an eLEpse centered at the origin. 1 Hint: Con ider two perpendicular unit vecLOrs v1 and ü2 such that T (ii 1) and T (~) are perpendicular (see Exerci e 47d). The unit circle consist of aU vectors of the form ü = cos(t)ü 1 + sin(t)v2 ,

from ~ to JR. Show: a. The function j(t ) i continuous. You may take fo r granted that the function in (t) and cos(r) are continuous, and also that ums and product of continuou function s are continuou .

b. f(rr/2) = - f( O). c. There is a number c between 0 and rr/2 uch that f(c) = 0. U e the intermediate value theorem of calculus, whi ch teil u the following: lf a function g(t ) is continuou for a ::: t ::: b, and L i a number between g(a) and g(b), then there i at least one number c between a and b such that g(c)

=

where r i a parameter. 51. Let w1 and 2 be two nonparallel vector in ~ 2 . Coo ider the curve C in JR2 which consist of all ector of the form co (t 1 + in (r) where r i a parameter (draw a sketch). Show that C is an eUip e. Hint: You can interpret C as the image of the unit circle under a suitable Linear transformation; then use Exercise 50.

w

w

w_,

52. Consider a linear traosfonnation T from JR2 to JR2 . Let C be an ellipse in JR 2 . Show that the image of C under T i an ellipse as weil. Hint: Use the result of Exercise 51.

L.

d. There are two perpendicular unit vectors v1 and ~ in ~ 2 such that the vectors T (ii 1) and T (v 2 ) are perpendicular as well. (See Fact 7.5.2 for a generalization.) 1

An ellipse in JR- centered at the origiu may be defined as a curve that can be parametrized a ·

w

w2.

w2.

Suppose the length of Wt exceed- the length of Then we for two perpendicular vec tors 1 and cull the vectors 1 the semimajor axes of the ellipse. and ±w1 the semiminor axes. Convemion: All ell ipse considered in this tex t are ce ntered at the origin unless stated otherw i e.

±w

48. Refer to Exercise 4 7. Consider the linear transformation

r (x)

= [

~

_

~ J; .

-

86 •

Sec. 2.3 TheInverse of a Linear Transformation •

Chap. 2 Linea r Transformations

.3

T- i

T

THE INVERSE OF A LINEAR TRANSFORMATION

87

y

Let' s first review the concept of an invertible function.

Figure 2 Afunc1ion T. lts inverse r-l.

Definition 2.3.1

Invertib le functions ow consider the case of a linear transfo nnation from lR" to _ m given by

A function T:X~

Y

.Y =

where A is an m x n matrix . The tran Formation y =

is called invertible if the equation T x) = y ha a unique solution x in X for each y in Y. Here X and Y are arbitrary ets.

Ax

Ax ,

is invertible if the linear system

Ax =

v

ha a unique so lution ,i in lR" for alt in !R117 • Using tecbnique developed in Section 1.3 ("On the so lution of linear ystems') we can deterrnine for which matrice A thi s is the case. We examine the ca e m < n, m = n , and m > n eparate ly.

Consider the examples in Figure L, where X and Y are finite set . If a function T: X ~ Y is invertible. its inverse y - l: Y ~ X is defined by

y - 1 (y = (the unique x in X suchthat T x) =

).

See the example in Figure 2. Note that for all x in X , and for a1l y in Y.

( T - 1) - 1 = T .

If a function is given by a formula, we may be able to find its inverse by olving the formula for the input variable( ). For example the inverse of the function

x3

-

1

(from lR to JR)

is

x = J 5y+l Figure 3 Figure 1 Tis invertible. Ris not invertible: the equotion R(x) = y0 hos two solutions, x1 ond is no x such that S(x) = Yo· T

R

y

no olutions or infinitely many sol ution s, for any given y in ._"' (Fact 1.3.3). Therefore, the tran Formation ji = .X i noninvertible. As an example, consider the case when m = 2 and n = 3, i.e., the ca e o f a linear tran formation from 3 to 1?.' 2 . Figure 3 ugge ts rhat ome coJJap ing take place a we tran form JR 3 linearly .into IR2 . In other ward , we expect that m any point in ~ 3 get transformed inro the ame point in R 2 . The algebraic rea o ning above l:_ow that thi is exactly what happen : for some y in JR2 for example for y = 0), the equaüon T(."i) = A.r = y wi ll ha e infinüely many o lution x in JR 3 . See Exerci e 2. 1.42 for an example. (Surpri ingJy, there actually are invertjbJe (but nonlinear transformation from JR 3 ro JR2 . More generaJly, there are nonlinear in ertible tran formation from II (Q J! l1l fof any tWQ positive integer /1 and m.)

If a fu nction T is invertible, then so i y - I , and

y= - 5

Ax

< n • The system = has fewer equation (m) than unknowns (n) . Since there must be nonJeading variable in this ca e, the sy tem ha either

Case 1: m

y - 1(T X)= x T(T-l(y)) = y,

.Y

X7.

S is not invertible: there

s

T(.r =A.r

88 •

Sec. 2.3 The lnver e of a Linear Transformatio n •

Chap. 2 Linear Transformations Case 2: rn = n • Here. the number of equation in the sy tem A .~ = b matches rhe m1mber of unkno n . By Fact 1.3.4, the sy t m A.r = y has a unique so lution .\- if and only if

I 0 nef(A) =

0 I

0 0 0

0 0

0 0

Fact 2.3.3

An m x n matrix A i invertible if and only if a. A is a square matrix (i.e., m = n), and

b. rref(A ) = !".

tnY ~J./

tri

-v-.

1 ft..l

rn".(

= !., .

EXAMPLE l ... Is the matrix A invertible?

0 0 0

.v Ax

We concl ude that the tran formation = i invertible if and only / , or, equi alentl y. if rank A) = 11. 11 Con ider the linear tran formation from !R2 to !R2 we tudied in You may have een in the la t exercise ection that rotation , reflection are invert.ible. An orthogonal projection onto a line i noninvertible.

h [~ ~] 2 5 8

if rref(A) = Section 2.2 . , and shears (Why ?)

Solution

Ax

Case 3: rn > n • The tran formation )i = i nonjnvertible, because we can find a vector y in !R111 uch that the ystem Ax = y is inconsistent ( ee Example 1.3.1 3; note that rank(A)::: n < 111 here). As an exam ple, consider the ca e when m = 3 and n = 2, i.e., the case of a linear transformation from _ 2 to JR3, a shown in Figure 4. lntuitively. we expect that we cannot obtai n all points in JR3 by transforming JR2 linearly into lR . For many vector .v in JR 3 the eq uation T (.\-) = .Y wi ll be inconsistent.

[~ ~] [~ J] 2 5 8

A man·ix A is called invertible if the linear transformation y = kr i invertible. The matrix of the in ver e tran formation 1 i denoted by A- I. If the transformation y = is inve1tible, its inver e is y.

Ax

x =A-I

3]

2

-6 -3 - 6 - 12

2

-2(TI)

1

-+

+6(ll)

[~

0 1 0

-+ -;- -3)

-i]

A i not invertible, mce rref(A) =I h

Fact 2.3.4

Invertible matrices

u

-+

-4(1 -7(1)

-6

We now introduce some notation and summarize the results we derived above.

Definition 2.3.2

89

Let A be an n x n matrix. a. Con ider a vector b in lR". If A is invertible, then the y tem kr = b has the unique olution = A- 1b. lf Ais noninvertible, then the sy tem A.\- = b ha infinitely many olutions or none.

x

b. Consider the special case when b = Ö. The system A.t = Ö has .r = Ö a a so lution. lf A is invenible, then tills i the only olution. If A .i noninvertible, then there are infinitely many other olutions.

Figure 4

If a matrix A i invertible, how can we find the inverse matrix A- I? Con ider the matrix A =

1 1 2 3 [ 3 8

~]

or, equivalently, the linear Iransformation Yt

l] = [ 2x Y y

+ X2 + X 3] + 3x· + 2x . 3x, + 8x2 + 2x XJ

1The

invcr e transformation is linear (see Exercise 2.2.29).

[ Y3

1

90 •

Sec. 2.3 Th e lnver e of a Lin ear Transformation •

Chap. 2 Linear Transformation To fi nd the in v r

tran for mation. we olve thi " yste m for the inputvariab le x;.

+

X1

X2

2x 1 + 3x2 3x 1 + 8x2

+ X3 = + 2x3 = + 2x > =

+ X:!+

x,

X

5x2 -

91

Thi s process ca n be de cribed succinctly as fo llows :

.\' 1

- 2(1) - (1)

.\'3

Fact 2.3.5

Finding the inverse of a matrix To fi nd the inve rse of an n x n matrix A , form the n x (2n) matrix [A : I ,.] and compute rref[A : I"].

- (ll)

= Yl = - 2 )1 1 + x 3 = - 3y ,

X3

• lf rref[A : I" ] i of the form [I" : B], then A is in vert ible and A-

1

=

B.

• If rref[A : I" ] is of anoth er form (i.e. , its left half i · not I" , the n A

-5(II)

not in vertib le ( note that the left half of rret1A : I"] i n:ef(A)).

x,

+ .\'2

x,

+

= 3y,- Y2 = - 2y , + Y2 7 y , - 5 y2 + X3 =

--7

X3

X3

X X

X3

- (Hl)

= 3y, - Y2 = -2y , + n = - 7y, + 5y2- )'3 = 1Oy1 = -2yl =

- 7 )'I

6Y2

-

The inverse of a 2 x 2 matrix is particularly easy to find .

-;- (- 1)

)'3

Fact 2.3.6

The inverse of a 2 x 2 matrix

--7

a. The 2 x 2 matrix

A= [ ~ ~ ]

+ v3

is invertible if and only if ad - bc =!= 0.

+ Y2 + 5 Y2 -

b. If

)13

Wehave fo und the inverse tran formation ; its matrix i

B = A- 1 =[ ~~

in ve11ib le. then

~]·

-~

a [c

5 - 1

-7

b]-l

d

d -ba ].

I [ - ad- bc - c

We can write the computations performed above in ma!rix form .

u u [~

1

I

3 8

2 2

1 0 0 1 0 0

0

1

3

l

0

-2

0

- I

7

0 0 l 0 0

n n u --7

- 2(1) - 3(1)

- 1

[~

--7

1

- 5

lO

-6

- 2

I

-7

5

-:-(- 1)

j]

1

I

1 5

0 -1

0]

-(!!)

-3 0

1

-5(U)

Campare with Exercise 2 .1.13 .

0

--7

EXERCI S ES GOALS

0

1

I

0

0

1 0 -2 I

3 - 2

- 1

-7

5

1

j]

- (lll)

Apply the concept of an in vertibl e f unctio n. Derermine whether a matrix (or a linear transform ati on) is invertible, and find the in ver e if it ex i t .

--7

Oe ide whether the matrice s in Exercise 1 to 15 are in ertib le. lf they are, find the inverse. Do the computation with paper and pe ncil. Show all your work.

]. u~ ]

2. [

~

:]

3. [

~

7]

92 •

Chap. 2 Linear Transformations

s.

u3 n

9. [;

12. [

14.

Sec. 2.3 Th e Inverse of a Linear Transformation •

6.

~]

[~0 0~ ~1 ] 10.

:]

I

l

I

2 3

[ 1

29. For which choices of the constant k is the matrix be low inve rtibl e?

[i [ -~

15. [

·

18.

y 1 = 3x 1 + Sx2 . ·_ = Sx 1 8x2

17

+

.V t = =

~

4 7 11

1

[

25 50

25

32. Find aJI matrice [ :

·

)12

)' I

=

X1

x,

+

X2

= x,

+

b]c 0

uch that ad- bc

X3

+ 3x3 + 9x3

A =

=

I a nd A-

_! l

1

=

A.

wbere a a nd bare arbitrary

a [

0

0 0] b

0

.

0 0 c

a. For which c hoice of a, b, and c is A invertible? If it i in vertible, w hat is A- 1?

(

b. For which choice of the diagonal element i a diagonaJ matrix of arb itrary size) invertible?

WJ1icb of the fo!Jowi.ng fu nctions ible?

f

35. Con si der the upper triangular 3 x 3 matrix

from lR to lR (Exerci es 21 to 24) are in vert~

{/ A =

22. f(x) = 2x

[YI ]=[

J

[ 0

b d

c] e

.

0 f

a. For which choice of a. b. c. d . e, and f is A in ertible? b. More generally, when i an upper triangular matrix (of arbitrary s ize)

[Y1] = [x1+x2 J· x 1· x2

invertible? c. If an upper triangular matrix is invertib le , is it inve r e an upper triangular matrix as weil ? d. When is a lower triangular matrix invertibJe?

2

x2 27. • Y2 xf + x2 Y2 4 4 28. Find the inverse of the Lineartransformation T : JR -+ JR below. 26

0

(Exerci es 25

from JR 2 to

Which of the foUowing (nonlinear) transformarion to 27) are invertible? Find the inverse if it exists.

[Y Y2I] =[xt] x2

0 - c

co nstant . For which choices of a a nd b i A- I = A?

7x2

25.

a

34. Consider the di agonal matri.x

Y l = x , + 3x2 + 3x3 20. y_ = x 1 +4x2 + 8x3 Y3 = 2x , + + 12x3

21. f(x) = x

~]

0

-a -b

33. Cons ider the matri ce of the form A = [ :

+ 2x2 + 8x2

19. Y2 = + 2x2 Y3 = x 1 + 4x2

2

~ 0~]

31. For which choices ofthe co nstants a, b. a nd c i the matrix bel ow invert ib le?

~]

3 7 14

2

= x1 = 4x l

1

X2

Y2 .r3

• 3

I ] k k2

- c

- b

Decide w hether the linear tran formation in Exercise 16 to 20 are invertib le. Find the inverse transformation if it ex.ists. Do the computations with p aper and penciJ . Show all our ork.

16

1

2 4

30. For wh ic h c ho ices of the constants b and c is the matri x below in vertib le?

i -i ~ ~]

runJ

93

36. To deterrn ine whether a square matrix A is invertible, it i not alway - necT

X? XI

J

r

;~ =X 1

r

- 16 22~

J+

X2

r ~ J + r ~ J+ r ~ J -3

13

-2

X3

8

-2

X4

3



essary to bring it into reduced row echelon form . Ju tify the following rule: To determi ne whether a square matrix A i invertible, reduce it to triangular form (upper or lower), by elementary row operations. A is in e rtible if (a nd on ly if) a ll entries on the diagonal of this triang ul ar form ar nonzero.

94 •

hap. 2 Linea r Transfo rmations

Sec. 2.3 The In verse of a Linear Transformation •

37. Jf A i, an in ertible matri and c i a nonzero calar, i th matri x cA invertible? [f so. w hat is the relationship between A- I and (cA )- 1?

38. Find A- I fo r A = [

~ ~I ]

39. Consider a quare matrix w hich differ from the idenlity matrix at ju t entry, off the diagonal. For exam ple,

A~ -~ [

0 0] I

0

0 J

ne

1

In brreneral is a matrix M o f thi form invertible? If so, what is the M - ?

40. Show that if a squarematrixA ha two equal column , then A i not invertible. 41. Which of the following linear Iransformatio ns T: IR 3 ~ IR 3 are invertib le? Find the in verse of th ose that are invertible. a. R efl ection in a plane . b. Projection onto a plane. c. Dilation by 5 (i.e. , T (v) =S v, fo r all vectors ü). d. Rotation about an axis. 42. A square matrix i called a permuta!ion matrix if it contai n rhe entry I exactl y once in each row and in each column , with all otl1er e ntrie bei ng 0 . Examples are /" and

0 0 l] [ 1 0 0 I

0 0

.

Are permutati on matrices in ertible? 1f o, i tlle inver e a permutation matrix as weil ?

43. Con ider two in vertible

the mul tiplicatio ns and di visions are counted . As an example, we exami ne in vertin g a 2 x 2 matrix by e li mination.

[~

u

x 11 matrices A and 8. the linear tran Form ation y = A( Bx) invertible? If o, w hat is the inver e? Hint: Solve the equation y = A( B x) fust fo r Bx, and tllen for x. 11

[~

n

1

b d

0 .(.

b' d

.

95

e

0

-:-a,

requires 2 operati ons: b ja and ! ja

~]

(where b' = b ja, and e = ! ja) -c (I ), requires 2 operatio ns: cb' and ce

~]

-:-d', requires 2 o pe ration

.(.

b'

e

d'

g .J,

[~

b'

~]

e

g'

-b'(! /) , require 2 operation

,).

[6

0

e'

1

g'

{]

The w hole process requires 8 operations. Note that we do not count operatio n witll a predictable res ult such as I a , Oa, aja , Oja. a. How many operations are required to in vert a 3 x 3 matrix by eliminati on? b. How many operati ons are required to in vert an n x n matrix by elimin ation? c. If it takes a slow hand-held calculator 1 second to invert a 3 x 3 matrix. how long wiU it take the same calcul ator to in vert a 12 x 12 matrix? A ume that the m atrices are in verted by Gau s-Jordan elimination , and th at the duratio n of the co mputati on depends o nly on the number of mul tipli cati o n and divisions involved.

46. Consider tlle linear sy te m

44. Consider tbe n x n matrix Mn which contai n all in teger l , 2, 3 . . . n 2 a its e ntries, w ritten in eque nce, col urrm by co lu mn ; for exam ple.

M,

~ ~ ~ :! !!] [

a. Derermine the rank of M4. b. Deterrrune the rank of M", fo r an arbi trary n :::: 2. c. For whi ch integers n is M 11 in vertibl e?

45. To gauge the complexity o f a computational task, mathem ari c ians and computer scienti ts count the numbe r of e le mentary Operatio ns (addiri ons, s ubtracti o ns, multiplication , and di vision ) required. S ince additions and s ubtractions require very little work compared to multipli cati on. and divi sio ns (think about performing these operations o n e ight-di git numbers), often only

w here A is an invertible matrix. We can olve this y tem in two d iffere nt ways: • B y findin g tlle reduced row echelon form of the aug mented matri x [A : b]. • By computing A-

1

,

and using tlle formul a

.r =

A-

1

b.

In generaJ , which approach reqwre fewer Operations? See E ercise 45.

47. Give an example of a noninvertible function f from IR ro IR and a number b such that the equation f (x ) = b

ha a unique solution.

96 •

Sec. 2.4 Matrix Prod ucts •

hap. 2 Linear Tra nsformations 111

97

such th at the

_ lf the matrix (/" - A) is invertibl e,2 we can expres .X a a f-unction of b (in fact, as a linear transformati on): _,-b. X = Un - A)

has a unique solution. Why doe n' t thi example contradict Fact 2.3.4a? 49. lnpur- ourpur analysis. (This exerci e builds on Exerci es 1. 1.20, 1.2.3 7, 1. 2.38. and 1.2.39.) Con ider th indu tries J 1. h , . . .. l n in an econo my. Suppo. e the consumer demand vector is b, th outpur vect.or .i _an.d the dem and vector of the jth indu try is Üj (the ith co mponent aiJ of vj I S the demand indu try l j puts on industry J; , per unit of output of l j . As we. have een in Exercise 1.2.38, the output ."K j u t meets tb e aggregat:e demand 1f

a. Consider the example of the economy of Israel in 1958 (discussed in Exercise 1.2.39) . Find the technology matrix A , the m atrix (!11 - A), and its in verse, Un - A) - 1• b. In the example discussed in part a, suppose tbe consumer demand on agriculture (Industry 1) is 1 unit (1 million pounds , and the demand on the other two industries are zero. Wh at outpur ; is required in th is case? How does your answer relate to the matrix (!11 - A) - 1? c. Expl ai n, in terms of eco nornics, why the diagonal elements of the rn atri x (!" - A) - 1 you fo und in part a must be at least 1. d . If the con umer demand on man ufacturing increa es by l (from whatever it was), and the consumer dernand on the two otber industries remain the same, bow will the output bave to cbange? How does your an wer relate to the matrix U~~ -Ar ' ? e. Using your answers in parts a to d as a guide, explai n in general (not just fo r this example) what the columns and tbe entrie of tbe matrix (/11 - A) - 1 tel! you, in tenns of econornics. Those wbo have studied multi vari able calcu lus may wish to consider the partial derivatives

48. Give an exam ple of an m x n matrix A and a vector b in IR li near ystem A .~

= b

x

X '--y---'

agg regate demancl

outpul

Tbis equ ation can be wri t:ten more succinctly a

-

Vn

OX;

or Ai+ b = x. The matrix A is ca lled the rechnology matrix of thi economy; its coeffi cient.s aij describe tbe interindustry demand , whi ch depends on the technology u ed in the producti on proce s. The equ ation

Ax +b = x

abj 50. This exercise refers to Exercise 49a. Consider the entry k = a 11 = 0.293 of the technology matrix A. Verify that tbe entry in tbe first row and the fir t column of (!11 - A) - 1 is tbe value of tbe geometrical eries l + k + e + ... . Interpret thi s observati on in tenn of economic .

describes a linear system ; we can wri te it in the custo mary fo rm :

x - Ax = b r"; - Ax = b u " - A)x = "b

x

lf we want to know the output required to sati sfy a given consumer demand b (this was our objecti ve in the previous exercises), we can solve this linear system , preferably via the augmented matrix. In economics, however, we often ask other questi ons: l f b changes, how will .X change in response? If the consumer demand on one indu stry increases by 1 unit, and the consumer demand on the other industries remain s unchanged, how will ; change? 1 If we ask question s like these, we think of the output as a f un.ction of the consumer demand b.

x

1

The re levance of quesli ons like these became parl icul arl y c lear durin g WWII . when lhc demand on certa in indu lries s uddenl y c hanged dramalicall y. When U.S. Pres ide nt F. 0 . Roosevell as ked for 50,000

MATRIX PRODUCTS Recall the composite of two functions: The composite of the funcöons y = sin (x) and z = cos(y) .is z = cos(sin (x)), as illustrated in Figure l. Sirnilarly, we can compose two linear Iransformation . To understand tbis concept, let's return to the coding example di cussed in Section 2. 1. Recall that the pos ition

~r = [ ~~ ]

of your boat is encoded and th at

airplanes 10 be bu ilt , il was easy e nough to pred icl thal lhe coun1ry would have 10 produce rnore a lum inurn . Unexpcclcd ly. lhe demand fo r copper drn malically increased {why?). A copper hon age then occurrcd , which was solved by borrowin g il ver from Fort Knox. People realized thal in pm-oulpul ana lys is ca n be effec1i ve in mode ling and predicting c ha ins of increased demand like lh i . After WWf!, tJ.lis 1echniq ue rapidly gained acceptance, and was soon used 10 mode l the econo mie o f more 1han 50 counlrie . 2 T his will always be the case for n ' ·producli ve" econo my. See Exercise 2.4.75.

98 •

hap. 2 Linear Transform at ions

Sec. 2.4 Yfatrix Products •

99

o that Z1 = 6(x 1 + 2x2) + 7(3x l + 5x2 ) = (6 · 1 + 7 · 3)xJ + 6 · 2 + 7 · 5)xz Figure I

= 27xl + 47x2 ,

X------------

Z2 = 8(x 1 + 2x2) + 9(3xl + 5x2) = (8 · l + 9 · 3)x l + (8 · 2 + 9 · 5)x2 = 35xl + 6lx2 .

o. (_in (.r))

you radio the encoded po iti on .Y = [

~~]

to M arseill e. The coding trao for m-

ation i

"=A.r.

with A = [ ;

6 · 1+7·3 [ 8·1+9 · 3

;J.

s.v.

with B = [

47] 61 .

~ ~]

z

T(v

+

ÜJ

= ß (A(ü + w)) = B (AÜ + Aw) = B Av) + B(AÜJ ) = T(v ) + T (ÜJ = ß (A(kü)) = B k(A Ü) = k ß Av)) = kT(v )

Once we know that T i linear we ca n find it matrix by computing the vector T (e1) = B (Ae 1) and T 2 ) = ß (Ae2 ); the matrix of T i then [ T 1) T (e2 ], by Fact 2.1.2.

e

e

T (e1)

= B (Ae 1) = B(fi r

T(e2)

=B

t co lumn of A

= [ ~ ~] [~ J = [;~]

B (A.t) ,

z

the compo ite of the two tran fo rmation ji = Ai and = B y. ls this transformation = T (i) linear. and , if o, what i its matrix ? We will how two approaches to these important que tions, one u ing brute force (a), and one using ome theory (b).

z

B (Ax)

linear:

T(kü)

this time. and the ailor in Mar eille radio the encoded position to Pari s ( ee Figure 2). We can think of the mes age = received in Pari s as a function of the actual po it ion i. of the boat:

z=

6·2+7· 5]-[27 8·2+9 · 5 - 35

b. We can use Fact 1.3.7 to how that the transformation T(i)

In Section 2.1 we left out one detai I: your po ition i rad ioed on to Pari s, a you wou ld expect in a centrally govemed country like France. Before broadca ting to Pari . the po iti on y is agai n encoded , using the linear transformarion : =

This shows that the compo ite is indeed Jjnea r, with matrix

Aez)

= B(

econd column of A)

= [ ~ ~]

DJ= [ :n.

We find that the matrix of the linear tran formation T (-"r ) = B (Ai) i

a . We write the component of the two transformations and substitute. - ~ = 6y l + 7 y2

Z2 = 8y1 + 9y2

Figure 2

and

Yl = x 1 + 2x2 Y2 = 3x l + 5x2

T hi s re ult agrees with the result in a, of our e. The matrix of the linear transformation T(i) = B tA.r) is call d the product of the matrices B and A, written a BA . Thi mean t.hat Marsei lle: ji

_

ßoat: x /

~,whoreA y = A-r

= [ ;

n

T(r ) = B(A:r) = ( BA )i

for all vectors

x in IR 2 (see Figure 3).

100 •

Sec. 2.4 Ma trix Prod uct •

hap. 2 Linear Transformations

== B(Ax) = where BA =

101

(BA)x ,

Mar ·eille: _\-

[ 27 47 ] 35 61

/

/_

_,. = Ax. whereA =

ji- -in

IR"} ll * q rr>q

Y IO IA.

[I?]S 3

Boal:i

Figure 3

.r in IR''

Figure 6

Now let's Iook at the product of !arger matrices. Let B be an m x n matrix and A an 11 x p matrix. These matrice represent linear transformation a hown in Figure 4. Again, tbe composite tran formation z = B Ax) is linear (the justification given above applies in this more general case as weil) . The matrix of the Unear transformatioo = B(Ax ) i called the product of tbe matrices B and A, written as BA . Note tbat BA i an m x p matrix (as it represents a linear tran formaöon from P to IR 111 ) . As in the case of ~ 2 , tbe equation

Matrix multiplication

Definition 2.4.1

a. Let B be an m x n matrix and A a q x p matrix. The product BA i defined if (and only if) n = q .

z

z=

b. If B i an m x n matrix and A an n x p matrix then tbe product BA i defined as the matrix of the linear transformation T(x ) = B(Ax) . Thi means that T (x) = B(Ax) = ( BA )~'r, for all in JRP . The product BA is an m x p matrix.

x

B (Ax) = ( BA )-:

x

holds for all vectors in ~", by definition of the product BA ( ee Figure 5). In the definition of the matrix product B A, the number of column of B tches tbe number of rows of A. What happeos if the e two number are different? Suppose B is an m x n matrix and A is a q x p matrix , with n # q. In thi case, the tran fonnations = B ji and y = Ax cannot be composed, since the codomaio of y = is different from the dom ai n of = B _- ( ee Figure 6). To put it more plainl y: The output of y = Ax is not an acceptab.le = B 5. In this case, tbe matrix product BA i. input for the transformation undefined.

z

A.r

z

z

0

Although this definition of matrix muJtiplication does not 0oive us concrete for computing the product of two numerically given matrices, such 0

~nstruc~ons

mstructJOns can be derived ea ily from the definition. As in Definition 2.4. 1 Iet B be an m x n matrix and A an n x p matrix. Let 's tbink about the column of the matrix BA . (ith column of BA ) = (BA )e; = B(Ae;) = B (ith column of A)

Figure S

Figure 4

If we denote the columns of A by ü1, Ü2 •

ji in IR"

z

= B(Ax) = (BA)x

x in IR"

Üp, we can write

ji in IR"

u" x in IR''

. . . ,

BÜp

102 •

Sec. 2.4 Matrix Product •

Chap. 2 Linear Transformations

Fact 2.4.2

Let ü,, Ü2, .. . , Üp be Lhe co1umns of A. T hen, by Fact 2.4.2,

The matrix product, column by column Let B be an 111 x 11 matrix and A an Üp · Then the product. B A is

BA = B

Vi

11

103

x p matrix with co1umns

ü,

u2, . ..

V2

The ijth entry of the product BA is the ith com ponent of the vector B -1, which is the dot produ ct of the ith row of B and ü1 , by Fact 1. 3.8. To find BA , we can multiply B with the co lumns of A, and combine the resulting vector .

Fact 2.4.4

The matrix product, entry by entry Let B be an m x n matri x and A an n x p matrix. The ijth e ntry of BA i the dot product of the ith row of B and the j th co1umn of A:

Thi is exactly how we computed the product

b,,

bl 2

bi n

b21

.b22

b zn

BA =

b;J

b in

b ;z

column of AB is

U;][~ l

the econd i

bm1

bml

U;] [~ l

1 2 5

J[68 79 J = [22 58

25 66

an2

+

b; zG2j

+ ···+

b; 11 0 11 j

=

J.

L

b;kakj.

k= l

•1

EXAMPLE 1 ....

0

/"-...

__./

\"rl

/f 6 \Ls

7][1

9

3

2 1{2 [6·1 + 7 ·3

s Jr s . 1 + 9 . 3

y

6. 2 + 7 .

../

s J=Z[27

8 .2+9 .5

) 35

47 61

J

We have done these computation before (where ?).

+

Matrix Algebra Here we discuss ome ru1e of matrix a1gebra.

Matrix multiplication is noncommutative: AB =!= BA , in general. However at times it does happen that AB = BA ; then we say that the matrices A and

• Consider an invertib1e n n matrix A. By definition of the in er e. A mu1tip1ied with its inverse repre ents the identity transformarion.

B commute.

Fact 2.4.5 lt is useful to have a formu la for the ijth entry of the product BA of an m x n matrix B and an n x p matrix A .

anp

bmn

II

Campare the two previous disp1ays to see that AB =I BA : matrix mu1tiptication is noncommutative . This should come as no surpri se, because the mat1ix product represents a composite of transformations. Even for functions of one variable, the order in which we compose matters. Refer to the first examp1e in this sect ion and note that the function s cos(sin(x)) and sin(cos(x)) are different.

Fact 2.4.3

{lllj

a 2p

is the m x p matrix whose ijth entry is b;JGJj

AB = [ 3

a,,J

l""

a:, ,

on page 99, using approach b. For practice Iet u multip1y the same matrices in the reverse order. The first

a11

Gij

a 12

a22

a 21

For an inve1tible n x n matrix A, AA - 1 =In and

A - 1 A = I 11 •

104 •

Chap . 2 Li n ea r Transformations

Sec. 2.4 Ma trix Products •

• Compo in g a lin ea r tran formatio n w ith the identity transformation , on e ither side. leave tJ1e transformation unchanged.

the equation by B -

1

105

from the left: B-

1

y = B-

1

B Ax

=

l"A x =Ai

Now we multipl y by A - t from the left:

Fact 2.4.6

For an m x n matrix A, A l " = ImA

=

A - 1B -

A.

1

y=

A - 1 Ax = i

Thi s computation shows that the linear transformati o n

y =BA i

• lf A is an 111 x n matrix, B an I! x p m atrix, and C a p x q m atrix, w hat is the relalion hip b tween (A B )C and A(BC)? One ay tothink abo ut this prob le m (perhap not the most e legant) is to write C in tenns of its col umns: C = [ ü, Ü2 Üq ] . Tben (A B )C =

AB)[ ü,

vq ] = f (AB)Üt

U2

(A B )~

in invertible, and that its inverse i x=A- 1 B-

(AB)Üq ]

Fact 2.4.8

1

lf A and B are inverti bl e n x n matrices, then B A is in verti bJe as weil, and

and

BA r ' = A - 1 B -

A(BC = A [ Bii t

y.

1



A(Bvq) ].

B ü_

Pay artenti o n to the order of the matri ces.

Since ( AB - ; = A BÜ;) by definition of the matrix product, we find that (A B )C = A BC). Paris:

Fact 2.4.7

[~J

To verify th.i resu lt, we can multipl y A and check tb at the res ult i /11 :

Matrix multiplication is associative

1

s - 1 by B A

(i n eitber order),

(A B )C = A(BC)

We can write impl y ABC for the product (AB ) C = A ( BC).

BA

A-l ß- 1

[

Mar eil le:

A mo re conceptu al justification of thi s property is that matrix multipbcation i associative because compo ition of functions i associative. The two linear transformation T (x) = ((A B )C)x

and

L(i ) = (A( BC)) ,r

Boat:

[~J

;.~

]

~

Figure 8

are identical becau. e, by definition of matrix multipli cation ,

~~

IRq-

""--c

Rp ---+-

" - ~"'

~

(AB)C~

Figure 7

T (.r) = ((A B )C)i = (A B )(Cx) = A (B(Ct))

and

Fact 2.4.9

Everything worked out! To understand the order of the factors in the formula ( B A) - 1 = A - 1 B- 1 better, th.ink about our French coa t g uard story again. Torecover the actua.l position; from the doubly encoded po itioo ::, you firs t apply the decoding transformation y = B-I and then the decoclin g tran fo rmation .X = A- 1 y. Theinverse of:: = Bkf. is therefore; = A- t s - 1 ~, a illu trated in Figure 8. Tbe following result is often useful in find.ing inverse :

z

Let A and B be two n x n matrices such that

L (x) = (A ( BC)) x = A ( (BC)x ) = A ( B (Cx)) .

The domain and codomains of the linear transformation defined by the matrices A, B , C, BC, AB, A ( BC) , and (AB)C are shown in Figure 7 . • If A and B are invertible n x n matrices, i BA invertible as weil? lf so, what is its inverse? To find the inverse of the linear transformation

y= we solve the equation for

x,

BA x,

in two steps. First, we mu ltiply both sides of

BA = I" .

Theh a. A and B are both invertib le,

b. A- 1 =Band B - 1 = A, and

c. AB = I 11 • It follows from the definüion of an invertible function that if A 8 = In and 1 and B = A - t. Fact 2.4 .9

BA = 111 , then A and B are inverses, tbat is, A = B -

106 •

hap. ~ Linea r Transformations

Sec. 2.4 Matrix Product •

make the point th ar the eq uation BA = !" alone guaranr es ~ha~ A and ~ at:e inver es. Exerci I- and 79 will help you to under. tand the s1g nd1cance of" tht. laim bert er.

Proof

+ Partitioned Matrices lt _i : o~1etim~ u sef~l to break a !arge matrix down into small er ub matrice by s lt ~ tn g 1t up wtth h on zo~tal o r vertical lines th at go all the way through th e matrix.

A.r

To demoo strate th ar A is in verübl e it uffices to how that the linear sy ·te m =Ö has on ly the olurion .K = Ö(by Fact 2.3.:.4b)._ lf we n1ll!tipl y tl1e equatio n A ..\- = Öby B from the left, \ e find that BA.\-= BO = 0, o .\- = 0. ince BA = !" . There fo re, A is invertible. If we multipl y the equ ati o n B A= !" by A - 1 fr m the ri g ht, we find th at B = A - 1• T he matrix B. being the in verse of A. i it lf in vertible. and B - 1 = (A - 1) - 1 = A (rhi s fo ll ow from the definition of an invertib le fu nction; see • page 6) . Finall y, AB= AA - 1 = !" .

Fot exa mpl e, we can thmk of the 4 x 4 matrix

A=

l [ d ad- bc -c

i the in verse of

it suffices to verify that B A = h BA= - 1-[ d

ad- bc

~

b] = I [ ad - bc

ad - bc ac - ac

A

!].

bd - bd ] ad - bc

=

U~ J ß =

Here is another example illustraring Fact 2.4.9.

EX.J\MPLE 2 ..... Suppose A. B , and C are three n x n matrice. and ABC= I". Show that B

s-

1

Fact 2.4.12

=

Wrile ABC = (A B )C !". Wehave C(AB) = !," by Fact 2.4.9c. Since matJ.i x multiplication i as ociati e, we can write (CA ) B = !" . Applying Fact 2.4.9 again , ~ we conclude that B i invettible, ancl B- 1 =CA.

Fact 2.4.10

3

2

=

l: :! :l ~~: ~~~ l =[

5

4

3

2

etc.

[

41

7

2 5

36 ] = [ 4l 52 63 8 9 7 8 9

J

= [

~II 21

A useful property of partitioned matrices is the following :

in term of A and C .

Solution

4

T he ubmatrices in such a partition need not be of equa l size; for exarn ple, we could have

1, -

Compare with Fact 2.3.6.

invertible, and expre

9 8 7 6

a a 2 x 2 matri x who e "entries · are fou r 2 x 2 matrice :

with A11 =

d

-c

A = [

l~ ~ ~ ~l 5

You can u e Fact 2.4.9 to chec k o ur work w hen you compu te the in ver. e of a matrix. For example. to check th at B _

107

Multiplying partitioned matrices Partirioned matrice can be multiplied as thou 011h the ubmat.rices were calar , i. e., u ing the formul a in Fact 2 .4.4.

Distributive property for matrices lf A, B are

111

x n matrices, ancl C, D are n x p matrices, the n A(C + D ) = AC+AD , (A + B )C = AC+ BC .

and is the partitioned mat.rix whose ij th 'entry" is the matrix

You will be asked to veri fy thi property in Exe rci e 63.

II

A ;t B lj

Fact 2.4.11

If A is an m x n matri x, B a n n x p matrix, ancl k a scalar, then (kA)B

= A(kB) = k(AB).

You will be asked to verify this property in Exercise 64.

+ A ;2 B2j + · · · + A;"B j = 11

L Aii:Bkj · k= l

·provided that all the product A ;k Bkj are defined . Verifying thi · fact i left as an exercis . On the ne t page i a nume ri cal example.

108 •

Sec. 2.4 Mat ri x Produc

hap . 2 Linear Transfor mat ion s

7

~ [[~ m~

= [

-~ ~~

I

109

We have to solve thi s . ystem for the ubmatrice Bij · By equ atio n l , A 11 mu t be in.vertibl e, and B 11 = Ali . By equati on 3, B21 = 0 ( multi ply by A-II1 from t he nght). Equaüon 4 now . im pli fies to B22 A 22 = 1111 • Th refo re, A22 mu t be 1 invertible, and Bn = A22 . Lastly, equation 2 become A A 12 + B1 2A22 = 0, or 1 B 12A n -- - A-11 A 12, 01· B 12=- A -11 1A12A - I . 22

E:XAMPLE 3 ....

[~ ~ I -:JI~ ; ~ ] l



Ii

8 9

;J+[-:J[7

[~ m~J + [ - :J [9] J

8]

a. A is in vertib le if (and o nl y if) both A 11 a ncl A 22 are inve1tib le (no condition is imposed on A 12).

~~ J.

b. lf A is in vertible its in verse i

Compute this product without u ing a partition and see whether you find the same re ult. ~ In thi s simple example, using a partition i somewhat poinrles ; Example 3 merely illu trate Fact 2.4.12. Example 4 below how a mo re sen ibl e applicati on of the concept of partjti.oned matrice .

Yerify this result for the example below.

EXAMPLE 5 ....

EXAMPLE 4 .... Let A be a partitioned matri x 0 0 0

where A 11 is an n x n matrix, A22 is an m x m matri.x, and A 12 i an n x m matri x. a . For which choices of A 11, A 12, and An is A invertible? b. If Ai invertible, what is A- 1 (in terms of A 11> A 12 , A_2)?

Solution We are looking for an (11

+ m)

x (n

+ m)

[

/~,].

/"

O

1 0 0 0 1 0 0 0

2

-1

- ]

0 0 0

2

I

-3

-3

1

0 I 0

0 0 0

0 0

-; l

GOALS Compute matrix product column by column and e nrry by entry. Interpret matrix multiplicatio n in term of the underly ing Jjnear transformation . e the rules of matrix al gebra. Multiply pa.rtitioned matrice . lf pos ible. compute the matrix pr duct in Exerci e penc il.

B21

B22

[~ -n [i 6]

7.

[~

where 8 11 is n x n, B22 is m x m , etc. The fact that B is the inverse of A means that

or, using Fact 2.4.12,

9. BIIAII B12An B21AII B22 A22

= / II = 0 = 0

=Im

[~ nu ~J

4.

B = [B 11 B1 2 ]

+

0 0 0

s

- I

E X E RJ,C{(

1.

Let u partition B in the same way a. A:

B21A12

4

3 6

2

matri.x B . uch that

BA=/" +"'=

B1rA 12 +

I 2

12.

0 1 - I

~n [!

u;J[-~ -4J m[J

2

31

2. 5. 2

2

[ -~ -~J [ 7

u!] [~ ~]

!]

10. [I

l to 13. u ing pape r and

n

3.

[I4 2s 3][1 2] 6 3 4

6.

[(/ b] [ -b] c

d

d -·

2

31m

8. [~ ~ J[~ ~ J 0

- II

un

13. [ 0

0

11.

11

[J

[ a d

b

g

h

e

{][!]

(I

110 •

Sec. 2.4 Matrix Product •

Chap. 2 Linear Transformations

111

7·?1

14. For rhe matri es A --

[I

B= [I

I

2

-I]

0 .

3 ],

I

D

~

[ \

l

E

29. Find a nonzero 2 x 2 matrix B suchthat [;

~ ]; ,: [ ~ ~ ] ·

30. If A is a noninvertible n x n matrix , can you alway matrix B such that AB = 0? 31. For the matrix

~ [5] ,

]

B=

find a matrix A such that

determinewhichofthe25matri x products AA. A B . AC , ... , E D , E Eare defined , and compute those which are defined . 15. Compute the matrix product

-2 5

-5 ][8~ 11

How many so lutions A doe this problern have?

-1]

32. Can you fi.nd a 3 x 2 matrix A and a 2 x 3 matrix B ucb that the product AB is h? Hint: Con ider any 3 x 2 matrix A and any 2 x 3 matrix B. There is a nonzero ,r in ~ 3 such that Bx = Ö. (Why?) Now think about A Bx .

2 .

- I

33. Can you find a 3 x 2 matrix A and a 2 x 3 matrix B ucb that the product AB is invertible? (The hint in Exercise 32 still works.)

Explain why the re ult does not contradi t Fact 2.4.9. For two invertible n x n matrices A and B, detennine which of the formulas tated in Exerci ' 16 to 25 are nece sarily uue .

+ A) =I"- k . (J"-rl) (A + 8 ) 2 = A 2 + 2AB + 8 2 .

16. (/11 ,. 17.

-

A)(l"

(I;, 1A).:

18. k is invertible, and (A 2 ) - 1 = (A- 1) 2 . ~ 19. A +Bis invertible, and A + B )- 1 = A-

20. (A- ß) ( A + ß) = A 2 21. A ßß - IA - 1 = 1" . 22. ABA - 1 = B. 23. (ABA - 1) 3 = AB 3 A -

24. Un

+ A)(l + A-

25. A -

18

-

82

1

find a nonzero n x n

34. Consider two n x n m atrice A and B such tbat the prod uct AB is invertible. Show that the matrices A and B are botb invertible. Hint: A B(A B ) - I = !" and (AB) - 1AB= ! " . Use Fact 2.4.9. 35. Consider an m x rz matrix B and an n x m matrix A such that

a. Find all soltitions of tbe linear ystem

+ s- 1

A,r =

.

Ö.

b. Show that the linear system

1

) = 21" + A +A-I is invertible, and (A- 1 8) - 1 = ß - 1 A . 1

is consistent, for alJ vector b in lR"'. c. What can you say about rank(A)? What about rank( B)? d. Explain why m ::=: rt.

11

U e the given partition to compute the product in Exerci es 26 and 27 . Check your work by computing the same products without using a partition . Show all your work.

~- [t ~ ~ n[~ i:n ~ r ~ H:~ 1 Z?. [

28. Find a nonzero 2 x 2 matrix A such that A

1

2

= [ ~ ~].

~ff;EJ ~~)

36. Find all 2 x 2 matrice X uch that AX = B , where

37. Find all 2 x 2 matrice X which commute with A = [

6 ~] .

38. Find all 2 x 2 matrices X which commute with A = [

6 ; ].

1

39. Find tbe 2 x 2 matrice X w hich commute w ith all 2

2 matrice .

112 •

Sec. 2.4 Matrix Product •

Chap. 2 Linear Transformations

2 rnatri e . A and B. We are told that

40. Consid r tvvo _

8 - 1 =[~

;].m1d

A[

(AB)- = [~ ~ J. 1

~] =

U]

and A [;] =

form the matrix equation A [

Find A . 41. Consider the matrix

Ul ~

113

These two equation · can be combined to

; ] = [

i

~].

45. Using the last exercise as a guide, justify the following statement: Let Ü1, Üz, ... , Ün be some vector in R" such that the matrix -

ina ] . a.

CO

We know that the linear transformation T(x) = D(l .~ i a counterclockwi e . rotation through an angle a . a. For two angles, a and ß consider the product D(l Dß and DßDC: . Arg~mg geometrically de crib the linear transforrnation _v = D(l D px and y = DpD(l>r. Al:e the two transformations the same? b. Now compute the products D(l Dß and DßD(l. Do the re ults make sen e in terms of your answer in part a? Recall the trigonometric identitie in (a

± ß) =

sin a cos ß

± cos a si n ß

cos(a ± ß = co a cos ß -=f sin a sin ß .

w

w

is invertible. Let Ui1, in !R"' . Then there 2 , .. . , 11 be arbitrary vector for all i a unique linear Iransformation T: lR" -+ !Rm such that T ( v; ) = i = I , .. . , n. Find the matrix A of th.is rransformation in term of S and

w;,

42. Consider the lines P and Q in JR2 sketcbed below . Con ider the linear tran formation T (x) = ref Q ( ref p (.t)), that is, we first reflect x in P llild then reflect tbe result in Q. B =

WJ

w_

W 11

p

46. Find the matrix of the linear transformation T: JR2 -+ JR3 with

.\

a. For tbe vector x given in the figure, sketch T(x). What angle do the vectors x and T (x) enclose? What is the relationship between the lengths of x and T(x)? b. Use your answer in part a to describe the transformation T geometrically as a reflection, rotation, shear, or projection. c. Find tbe matrix of T. 43. Find a 2 x 2 matrix A #- fz such tbat A 3 = h

44. Find all linear Iransformations T: JR2 -+ JR2 such that T [; T [;] = [

~].

J=

[

i]

(compare with Exercise 45).

47. Find the matrix A of the linear transformation T: JR2 -+ R 2 with

and

Hint: We are looking for the 2 x 2 matrices A such that

(compare with Exercise 45).

114 •

Chap. 2 Linear Transformations

Sec. 2.4 \tlatrix Product •

48. Con ider the regu lar tetrahedron sketched below, who e cent er is at the origin .

49. Find the matrices of the tran s formation 50. Consider the matrix

E -

3-

-

[-'] - I

115

T a nd L defined in Exerci e 48.

] 0 ~ [ -~ 0I 0 0

1

and an arbitrary 3 x 3 matri x

I

[

b

(/

A =

e

:

h

a . Compute EA . Cornment on the relation hip between A and E A. in tenns of the technique of elimü1ation we learned in Section 1.2. b. Con ider the matrix

0

Let T : ~ 3 -7 ~ 3 be the rotation about the axi throu gh the points 0 and P2 which tran form P 1 inro P3 . Find the image of the four corners of the tetrahedron under thi tran formation . T

Po -7

P,

-7

p3

p2 -7 p3 -7 Let L: ~3 -7 ~ 3 be the reflection in the plane through the poims 0, Po, and P3 . Find the images of lhe four corners of the telrahedran under lhi transformation.

Po

L -7

P,

-7

!

4

0

~]

and an arbitrary 3 x 3 matrix A. Compute EA. Comment on the relationhip between A and E A . c. Can you think of a 3 x 3 matrix E such that EA i obtained from A by wapping the Ja t two row for any 3 x 3 matri x A)? d. The matrice of the forms introduced in parts a. b, and c are ca lled elemenrary: An n x n matrix E i e lementary if it can be obtained from / 11 by performing one of the three elementary row operatio ns on / 11 • Oe cribe the format of the three type of e lementary matrice . 51. Are elementary matrices invertible? If o, i the inver e of an e le mentary matrix elementary as weil? Explain the ignificance of your an wer in term of elementary row operation .

52. a . Justify the fo ll owing: If A i an m x n matrix , then there are elementary 111 x m matrice E 1 E1, .. . , Ep uch that

p2 --7 p3 --7 Describe the transformations in parts a to c geometrically.

b. Find uch e lementary matrices E 1, E 2 ,

a. T- 1 b. L - 1 c. T 2 = T o T (the composite ofT with it elf) d. Find the images of the fo ur corners under the transformations T o L and L o T. Are the two transformations the same?

Po

T oL

LoT

--7

Po

P,

--7

P,

--7

p2 p3

-7

p2 p3

--7

-7

-7

--7

e. Find the images of the fo ur corners under the transformatior} L o T o L. Describe this Iransformation geometricall y.

A=

...,

Ep for

[0I 23 ].

53. a. Justify the fo llowing: If A i an m 111 x m matrix S uch that

11

matrix , then th re i an in ve rtible

rref(A) = SA .

b. F ind such an invertib le matrix S for

116 •

Chap. 2 Linear Transformati ons

Sec. 2.4 Matrix Prod uct •

54. a. Justify the following: A ny invertible matrix is a product of e leme ntary matrice .

b. Write A = [

~ ~]

a a product of elementary matrice .

y = Ex

56. Consider an invertibl

t

ol e a

linear sy tem

55. Write all po sible form of eiern ntary 2 x2 matrices E. Jn each case, describe the transformation

59. Knowing an LU -factoriza tion of a matri x A make it much ea ier

117

Ax = 'b . Cons ider the LU -factori zation

geo meu·ically.

n x

matrix A and an n x n mau'ix 8 . A certa in equence of elementary row operations tran forms A into 1". a. What do you get when you apply the ame row Operations in the ame order to the matrix AB ? b. What do you get when you apply the same row Operations to 1"? 11

A=

l

-~

2

- I

-5

1 - I

4 6

6 6

20

;~] Jl-1 -~ ! 43

8

HJ l~~

-5

0

J

Suppo e we have to o lve tbe ystem Ax = LU ,r =

b,

2 I

0 0

where

57. Is the product of two lower triangular matrices a lower triangular matrix a weil? Explain your an wer.

58. Con ider the rnatrix

= Ux, and solve the ystem L ji = b, by forward ubstitution (fi nding first y ,, then Y2 etc .) Do thi using paper and pencil. Show all your work. b. Solve the system !j ."K = ji, by back substitution , to find the olution of the y tem Ax = b. Do thi usjng paper and pencil. Show all your work.

a. Set ji a. Find lower triangular elementary matrices E ,, E2, .... Em s uch that the

x

product

is an upper triangular matrix U. Hinr: Modify Gau -Jordan eljmination a follows : In Step 3, elimjnate only the entries in the cur or co.lumn below tbe Jeading l ' . Thi can be accomplished by mean of lower triangular elementary matrice . Al o, you can skip Step 2 (divi ion by tbe c ur or entry). b. Find lower triangular elementary matrices M 1, M2 , .•• , Mm and an upper triangular matrix U such tbat

/'~

x- -- ---------.... E A

60. Show that the matrix A = [

b]

~

cannot be written in the form A = LU ,

where L i lower triangular and U i upper triangular.

61. ln this exerci e we wil l examine which invertible n x n matrice A admit c. Find a lower triangular matrix L and an upper triangular matrix U such that A =LU. Such a representation of an invertible matrix is called an LU -factori zation. The method outlined in thi s exercise to find an LU -factorization can be streamJined somewhat, but we bave seen the major ideas. An LU factorization (as introduced here) does not always exür (see Exercise 60) . d . Find a lower triangular matrix L with 1 's on the diagonal, an upper tri angular matrix U with I 's on the diagonal, and a diagonal matrix D such that A = LDV. Such a representation of an invertib le matrix is called an L DU -facrorization.

an LU -factorization A = LU , a discu ed in Exercise 58. The following definition will be u eful: Form = I , ... , n the principal submarrix A (m) of A is obtained by omitting all row and columns of A past the mth. For example, the matrix

A

= [;

2 5

8

~]

ha the pr.incipal submatrices

A<'> = (1],

A<2l _ [ 1

-

4

;J.

A'"

= A = [;

2 5

8

n

118 •

Chap. 2 Linear Transformation

Sec. 2.4 Matrix Products •

We , ill show that an invertibl 11 x 11 matri x A adm.its an LU -factorization A = LU if (and only if) all it · principal submatrice are invertible. a. Let A = LU b an LU -factorization of an n x 11 matrix A. Use partitioned matrice to show that A (m) = L (m l u <ml for m = .I , ... , n. b. U e part a to how that. if an invertible n n matri~ A l~ as an LUfactorization then all its principal ubmatrices A (m ) are mvert1ble. c. Consider an 11 x 11 matrix A whose principal submatrices are alt invertible. Show that A admit an LU -factorizat.ion. Him: By induction you can a ume that A
63. Prove the disTributive laws for matrices: A(C

+ D) =

+ B )C =

64. Consider an m x n matrix A, an (kA)B

11

where B and R are n x 11 matrice R is invertible), k i a calar and ii i a row vector with n components. Compute s- 1 AS.

69. Consider the partitioned matri x

where A 11 is an invertible matrix. Derermine the rank of A in te rm of the ranks ofthe matrices A 11 , A 12, A 13 , and A23 . 70. Consider the partitioned matri x

where ii is a vector in IR", and w is a row vector with n compone nt . For which choices of ii and i A invertible? In these ca e , what is A- I? 71. Find al t invertib.le n x 11 matrices A such that A 2 = A. 72. Find an n x 11 matri x A whose entries are identical such that A2 = A .

w

73. Consider two AC+ AD

and A

AC

+ BC.

x p matrix B, and a scalar k. Show that

= A(kB) = k

AB)

65. Consider a partitioned matrix

A = [ A~I A~2 ]

11 x n matrices A and B wbose entrie are positive or zero. Suppo e that all entrie of A are less than or equal to s, and all colurnn ums of B are les than or equal to r (tbe jth column sum of a matrix i the sum of the entrie in its jth col umn ). Show that aU entrie of the matri x AB are le th an or equal to sr .

74. (This exercise builds on Exercise 73 .) Consider an n x n matrix A who e e ntrie are positive or zero. Suppose tbat all column um of A are !es than 1. Let r be the largest column sum of A. a. Show that the entrie of A" are less than or equal to r 11 , fo r alt positi e integers n. b. Show that lim A"

n->

where A 11 and A22 are square matrice . For which choices of A 11 and A22 is A invertible? In these ca es, what is A- I?

66. Consider a partitioned matrix A=[Atl A21

0]

A22

~ ~]

and

S= [

b ~ ].

=0

(meaning that all entrie of A 11 approach zero).

c. Show that the infinüe series /II

where A 11 and A 22 are square matrices. For which choices of A t, , A21, and A22 is A invertible? In these cases, what is A _ , ? 67. Consider two matrices A and B whose product AB is defined. Describe the ith row of the product AB in terms of the rows of A and the matrix B. 68. Consider the partitioned matrices A= [

119

+ A + A 2 + .. . + AII + ...

converges (entry by entry) . d. Compute the product (!" - A) (!"

+ A + A 2 + ... + A

II

Simplify the result. Then Iet n go to infinity, and tim show that 1

Un - A) - = In+ A + A 2

75. (Thi s exercise builds on Exercise c i e 2.3.49 .)

+ · · · + A" + · · ·.

73 and 74 above, a

weil a

Exer-

120 •

Chap. 2 Linea r Transformation

Sec. 2.4 Matrix Products •

onsider tbe in du tri 1 1 • • • •• J" in an economy . We say that industry J. i produclive if th e jth co lumn sum of th, technology matri x A is Ies tl~a n 1. What does thi mean in renn of economjcs? b. We say that an economy i produclive if all of its indu tri es are productive. Exercise 74 show that if A is the technology rnatrix of a produclive econom then tl1e matrix !" - A is invertible. What doe this result tell you about the ability of a prod uctive economy to sati sfy any kind of co nsumer demand '. c. Interpret rhe formula

a.

(!" - A) -

I

= !"

121

c. Find the matrix for the composite Iransformation which li ght undergoe as it fir t passe through the sung lasses and then the eye. d. As yo u put on the unglas es, the sign al you recei ve (inte n ity, long- and short-wave s ignal s) undergoes a transformation . Find the matrix M of th i transformation.

J(- i;

+ A + A 2 + ... + A + .... II

M

derived in Exerci e 74d in tenn of econom.ic . 76. The color of light can be repre ented in a vector

[~]

Li ght passcs through glasses and thc n through eyes.

where R = amount of red , G = amount of green, and B = amount of blue. The human eye and the brain transform the incoming signal ioto the signa1

77. A vii Jage i divided into three mutu ally exclu ive group called clans. Each per. on in the village belang to a c lan, and thl identification i permane nt. There arerigid rules conceming marriage: A per on from one clan can marry only a person from one other clan . These rules are encoded in tbe matrix A below. The fact that the 2-3 entry i I indicate that marriage between a man ·from c lan lll and a woman from clan Il i allowed. The clan of a chlJd i detennined by the mother' clan, a indicated by the matrix B . According to tbi scheme ibling belang to the ame clan.

wbere Husband ' clan intensity

R+G + B

I

I = - ---"3 --

L= R- G R+G shon-wave igoal S =B - - long-wave ignal

2

a. Find the matrix P representing the transfonnation from

b. Consider a pair of yellow sunglasses for water sports which cuts out all blue light and passes all red and green light. Find the matrix A whlch represents the Iransformation incoming light undergoes as it passes through the sung lasses.

A=

[~

Mother· clan

Ilill

I 0 ~ o(D

0 0

li

m

Wife clan

I

Ilffi

B~u

0 0 1 I 0

T ll

Child' clan

JlJ

The identification of a person with clan I can be repre ented by the vector

and likewi e for the two other clans. Matrix A tran form the husband' clan into the wife '. clan (if x represents the husband ' clan, then Ax represent the wife's clan). a. Are the matrices A and B invertibl ? Findtheinver e if they exi t. What do your answers mean , in practical term ? b. What i the meaning of 8 2 , in term of the rule of the community.

122 •

Chap. 2 Linear Tran sformation

c. V hat i, the rn e·~n i ng of AB and BA, in tenn of rhe rul e of the c mmu nity? Are AB and BA the ":une? d. Bueya i" a _ ung ' oman ' ho ha ' many ma le firs r c u in , bolh on her mother· and on her father' side . The kin hip betw en Bueya and each of her ma le cousi n ca n be represented by one of the fou r diag rarns below:

Sec. 2.4 Ma tri x Product • to find AB for A = [

~ ~ J and B = [ ~ ~ J. Fir t c

123

mpute

h 1 = (a + d)(p + s) h2 = (c+d)p lt 3 = a(q - s) lt 4 = d(r-p) hs = a + b). h6=(c - a)(p+q) h 1 = (b- d) (r + s).

Then

! B"'Y'

~

0

79. Let N be the er of allposit ive integers, I. 2, 3, .... We define two funcli n .f and g from N to N:

A mo lo ""' ""''"

f(x) = 2x , g

/

/

/

S2 0

0 S2

00

~

~

~ 0

I

0

I

0

( ) X

fo r all x in ·

x /2 = {

X+

if .r is eve n jf X iS Odd

1)/2

Find formula for the co mpo ite function g(f(x)) and f(g(x ). I one of them Lhe identity tran fom1ation from · to N? Are the fun ction f and g invertible? 80. Ceometrica / optics. Consider a thin bi-convex lens with two pheri cal face .

l

In each of the four cases, find the matri x which g1ve yo u the cou in 's cl an in terms of Bueya'. clan. e. Accorcling to the rules of the vi ll age, cou ld Bueya marry a firs t cou. in? (We do not know Bueya·s clan.) 78. As backgrou nd to this exercise, . ee Exerci . e 2.3.45. a. If you use Fact 2.4.4, how many mu ltiplications of numbers are necessa ry to multipl y two 2 x 2 matri ce ? b. [f you use Fact 2.4.4, how man y multiplication · are needed to multiply an m x n and an n x p matrix? In 1969, lhe German mathematician Volker Strassen surpri ed the mathematical commun ity by showing that two 2 x 2 matrice can be muitipliecl with onl y seven multiplications of numbers. Here is hi s trick: Suppose you have

This is a good model for th e lens of the human eye and for the len e u ed in many optica l in trument , uch a readi ng gla es, amera . micro cope , and tele copes. The line through the centers of the sp here defining the t-; o faces i called the optical axis of the Jen . .

,'

enter of sphere dcfin ing the righ1 fa e

,'

""" Cemcr of 'flhcre defimng thc left facc

124 •

Chap. 2 Linear Transformation

Sec. 2.4 Matrix Products •

In thi exerci e. we Jearn how we can track the path of a ray of light as it pa se through the lens, provided that the fo ll owing conditions are satis fi ed :

125

We wanl to know how the o utgoing ray depe nds on the incomin g ray , that i , we are interested in the tran Formati o n

• The ray hes in a plane with the opticaJ axis. • The ang le tJ1e ray makes ' ith th optica l axi

s ma ll. We wil l see that T can be approx.irn ated by a linear Irans form ati on provi ded lhat m is s mall , a we assumed . To tudy thjs transformatio n, we di ide the palh of the ray into three seg rnents, as s hown below .

To keep track of the ray, we introduce two reference planes perpendicular to the optical axis, to lhe left and to tbe right of the Jen .

n

IIT

We have introduced two auxi li ary reference plane , directly to the le ft and to t he right of the lens. Our tran Formation [

x ]

~ [ Y] can now be represented

m n a the compo ite of three simpler Iransformation :

From the definition of the slope of a line we get the relation and y = w + Rn . Left refercnce plane

u= x

Ri ght rcference plane

II

We can characterize the incomi ng ray by its . lope m and its intercept x with the left reference plane. Likewise, we characterize the outgoing ray by slope n and i ntercept y .

[ mu ] = [ x

+ Lm ] m

= [ I L] [ x ] ; 01 m

[~J [6 ~tl [:.J -. [ ~J [6

[ y ] = [ I R] [ w] n 01 n

nL:J

+ Lm

126 •

Chap. 2 Linear Transformation

Sec. 2.4 Matrix Products •

It would Iead u too far into phy ic ro derive a fonnula for the transformati n

here. * Under the a umption approximated by

we have made, lhe tran Formation

1•

weil

12 7

you want y to be independent of x (y must depend on the slope m alone) . Explain why l jk is called thefocallength of the len . b. What value of k enables you to read thi s text from a di tance of L 0.3 meters? Consider the figure below (whi ch i not to scale).

for some po itive con tant k (thi formula implies that w = v).

The transformation [ x ] 111

~ [ 1lY] is represented by the marrix

product

[l R][ I 0][1 L]-[1 - Rk L+R-kLR ] 0

1

-k

1

0

I

-

-k

I - kL

.

a. Focusing parallel rays. Consider the Jens in the human eye, wi th the retina a the right reference plane. In an adu lt, the di stance R i about 0.025 meters (about l inch). Tbe ciuary muscles allow you to vary the shape of tbe Jensand thu tbe Jen constant k , within a certain range. What value of k enables you to fo us paralle l incoming ray , as shown in t:he figure ? This alue of k will all ow you to . ee a distant object clearly. (The cu tomary unit of measurement for k is 1 di opter = 1 m1 ter .

c. The telescope. An astronornical telescope consists of two lense with the same optical ax.is.

t.,:'

-'

r ; ; - - - --- :

D

Left reference plane

y

Right reference plane

Find the matrix of the tran Formation [ x ] ~ [ in term of 1• 2 , m n and D . For given values of k 1 and k_, how do you choo e D o that parallel incorning rays are converted into parallel outgoing ray ? What is the relation hip betweeo D and the focal lenglh of the two len e , I jk 1 and l/k2? 81. Tru e orfalse? l f A i an 111 x 11 matrix and B i an 11 x p matrix , then

Y],

rref(A B ) = rref(A) rref(B) . Hint: In terrns of the transformation

*See, for exampl c, Paul Samberg and Shlorno Sternberg: A Course in Matlr ematics .for Student.~ c~( Physics 1. Cambridge University Press. I 991.

k k

Sec. 3.1 Image and Kernet of a Linear Transformation •

129

f X: the domain of f

Y: Lhe codomai n off

image /)

SUBSPACES OF ~n AND THEIR DIMENSIONS

Figure 1

EXAMPLE 2 .... The image of the exponential function

f: ~ ~

~

g i ven by

y = ex

consi ts of a ll positive number .

1

EXAMPLE 3 .... More generall y, the image of a fu nction

IMAGE AND KERNEL OF A LINEAR TRANSFORMATION

f:

~

Y,

EXAMPLE 4 .... The im age of tbe fun ction

f: ~

where X and Y are arbitrary e t .

Definition 3.1.1

f

from X to Y. Then

image Cf) = {values the fun ction

f

= (y in Y: y = f(x) , for some x in X}. 1 Y of f.

\

tC\.Y')'

EXAMPLE 1 .... Consider the function in Figure I, whose domain and codomain are finite set . ~

1

128

given by

f( t ) = [

c~~(:~ ]

takes in Y)

= (f x):x in X}

Note that image(f) is a subset of the~aj

~ ~-

the unit circle ( ee Figure 3). The function f i · calJed a parmnetri-ation of the urrit citcle. More generaJi y a parametri zation of a c urve C in ~ 2 i a fu nction g from lll to R 2 wbo e image is C. ,....

Image Consider a function

lll

consi st. of all numbers b uch that the line y = b intersects the grapb of f (po ibl y more than once), as illustrated in Fi g ure 2. <1111

You may be farniliar w ith the notion of the image of a function

f:X

~

Some authors use the Lerm " ran ge" for what we call the i mage, w hi le othcrs use the term " range" for what we call the codomain. Make sure Lo check which detinition is uscd when you cncounter thc tcrm ·•range" in a text.

EXAMPLE 5 .... U the function f: X~ Y is invertible, th en image(f) = Y . For eacb y in Y tbe re one (and o nl y one) x in X such that y = f(x), namely, x = f - 1 (y). ~ Figure 2 b is in the image of f, since b = f( xtl = f! x2J = f(xJl = f( x4 ); c is not in the image of f because there is no x such that c = f(x).

)'

c

___J

I

'

'

'

'

______ _ _ J I _ _ ____ __ J I __ ____ _____ _ _____ J I _ _ _

'

L-~----~·------~ · ------------~ ·-.x

130 •

hap . 3 Subspaces of ~~~ and Their Dimen ions

Sec. 3.1 Im age an d Kemel of a Linear Tra nsfo rmation •

---- _1\r) = [ cos(r) 111 (1)

J

im(7)

Figure 5 Figure 3

Since the vectors [

thogonally into the 1 2 plane, a illu trated in Figure 4. The image of T i the 1 2 plane in ~ 3 .

e -e

<111111

~]

and [

~]

scalar multiples of the vector [ ;

EXAMPLE 6 ~ Con ider the linear transfonnation T from ~3 to ~ 3 which projects a vector or-

e -e

131

are paral le l, the image of T

tbe line of al t

J,as illu tr ated in Fi gure 5.

EXAMPLE 8 ~ Describe the image of the linear tra nsformati on T: R2

-+

JR3 given by the matrix

EXA..iVIPLE 7 ~ Describe the im age of the Jj near n·an formation T : R 2 -+ ~ 2 given by the matri x A =

u~l

Solution Solution

The image T con i ts of alJ vector of the form

The imageofT consist of ali value ofT , i.e., all vectors of the form

T

[XI]= A [.r1] = [ 21 63][XL ]= .\ z X2 X2

Xl [

J] +

2

Xz

[3]

T [ ;;

6

l ~ [: ~ J[;; l ~

X,

mm + ,,

that is all li near combinations of the vec tors

UJ

and

Ul

Th is is the plane '· pan ned" by the two vecto rs. in an intui tive, geomerric en e (see Figure 6). <111111

Figure 4

The ob ervatio ns made in Exam ple 7 an d 8 motivate the following defi niti on.

ü

Defin ition 3.1.2

Span Co n ider the vec tors ·ü1 • ü_ . .. . ü" in R 111 • T he s t of a ll linear combin ation of the vectors ü1, ü2 , • • • , ü" is called their span . span (Ü

T(üJ

1.... . Ü") = {c1Ü1+···+c"ü":

;ar bitra.ry ca lar }

132 •

hap. 3 Su bspaces of R" and Their Dirnen ions

ec. 3. 1 Im age a nd Kerne! of a Linear Transformatio n •

133

• lf ii is in im(T ) and k i an arbitrary ca lar, then k- is in the image as we il (we say the image is closed under cal ar multiplication). Again, Iet' verify thi s: The vector ii can be written as ü = T (w) for ome w in " . T hen kü = kT (w) = T (kw), o th at kii is in the image.

[il

Let' s summarize:

0

Fact 3.1.4

Properties of the image The image of a linear transformation T (from IR" to !Rm has the folJowing properties:

Figure 6

a. The zero vector Ö in JRm i contai ned in im(T). Fact 3.1.3

b. The image is closed under addition: lf ii 1 and ii 2 are both in im(T), then so is üi + Ü2. c. The image i closed under calar multiplication: If a vector ii is in im (T) and k is an arbitrary scalar, then k v i in the image as weil.

For a linear transfonnation T: IR" --+ lR111

given by

T (x) = Ax

tbe imageofT is the span of the colu mn of A.I We denote the image ofT by im (T) or im (A). To justify this fact, we write the transformation T Examples 7 and 8:

T (x ) =Ax=

vi

111

vector form a ·

X? ] XI :- = XI VI + X2Ü2 + · · · + Xn Vn

Vn

[

/1i

v

Figure 7 lf is in the imoge, then so ore oll vectors on the line L in JR3 sponned by v.

.X n

1

It follow s from prope11ies b and c that the image is closed under linear combinations : If some vectors iii , ~ •. .. , Üp are in the image, and c" c2, ... , cp are arbitrary scalars, then c 1 iii + c2 ii 2 + · · · + cP iiP i in the image as weil. If the codomain of the transformation is IR 3 (that is, m = 3), we can interpret thi property geometrically, as shown in Figures 7 and 8.

In

L

k ii

This shows that the image of T consi ts of all linear combinations of the vectors ii;, that is, the span of iii , ii 2, . . . , iiw Consider a linear transformation T from IR" to lR111 gi ven by T (x) = Ax, for . ome m x n matrix A. The imageofT has some remarkable properties: • The zero vector

Ö in !Rm is contai ned in im(T), since Ö=



The imageofT is also called the column ~pace of A.

1

- - - - --- - -,-

= T(Ö).

• lf ii i and ii 2 are in im(T), then so is iii + ii2 (we say the image is closed under addüion). Let's veri fy thi s: Since ÜI and ii 2 are in im(T), there are vectors WI and w2 in lR." ·uch that iii = T (w i) and 2 = T(iil 2 ). Then ii i + ii2 = T(wi ) + T(w2) = T(w i + w2), so that iii + 2 is in the image as weil.

v v

Figure 8 lf v1 ond v2 ore in the imoge, then so ore oll vectors in the plane Ein IR 3 sponned by VI ond v2•

' '' ''

E

134 •

Ch ap. 3 Subspaces of ~~~ and Their Di mensions

Sec. 3.1 Image and Kern et o f a Lin ea r Transfo rmatio n •

/

135

EXAMPLE 9 .... Co n ider an n x n matrix A. S how that im(A 2 ) i contained in im (A) , i.e., each vector in im(A 2 ) i al o in im(A).

Solu tion

T

w

Consider a vecto r in im(A 2 ) . B y definitjon oF the image, we ca n write ÜJ = A 2 ü = AA ü, for ome vector ü in IR" . We h ave to how th at ÜJ can be re pre e nted as = A ü, For ome ü in IR". Let ü = A ü. Tben

w

w=

AA Ü = Aii . Figure 9

+ The Kernel of a Linear Transfo nnation When you tudy functi ons y = f x of o ne variable, you are offen interested in the eros oF this function , i.e . the solu tions of tbe equ ation f(x) = 0. For example, tbe function ) = in (x ) has in fi nitely man y zero name ly, a ll integer mul tiples of n . The zeros of a linear transformation are of intere t as weil.

Definition 3.1.5

Kemel

EXAMPLE 10 .... Consider again the o rthogonal proj ecti on o nto the mation T fro m IR 3 to IR 3 , a show n in F igure I 0.

1

2

plane, a linear tran For-

T he kernet of T con ist of all vector who e orthogonal projectio n onto pl ane is Ö. The e are the vectors o n the -axi (the calar multi pl e the 1 of eJ). ~

e -e2

e3

EXAMPLE 11 .... Fi nd the kerne t of the linear tran Formati on T fro m IR to IR 2 given by 111

- [I

1

Consider a linear tran formation T (."i) = Ai from IR" to IR • The kemel of T i tbe set of all zeros of the transformation, i.e. , the olution of the equati o n T (i) = Ö (see Figure 9). In otber words: The kerne t of T is the soluti on et of the linear syste m

T (x ) =

T(i) = [

Figure I 0

For a linear transformation jj

• im (T ) is a subset of the codomain IRm of T , and

• ker(T) is a subset of the domain IR" ofT .

The kemel of T is also called thc null space of A .

2l

J1

J-

X.

We have to olve the equati on

We denote tbe kerne! of T by ker(T) or ker (A) .

T: lR" --+ IR"' ,

1

Solution

Ai = 0.

1

e -e

T(Ü)

~

1 3

J- 0,X =

136 •

Chap. 3 Subspaces of R" and Their Dimensions

Sec. 3.1 Image and Kernel of a Linear Transformation •

i.e., we ha e to ol e a linear sy tem . Since we ha e stud ied thi prob le rn carefully in Section 1.2, we only sketch the solution here.

.[I

rref

1 1] _ [ I 1 2 3 - 0

0 I

13 7

The kerne! of T consists of the solutions of the system

x1 x2

- 1] 2

6x3

-

+ 2x3

The so lutions are the vectors

+ 6xs =

0

2x 5 = 0 x 4 + 2xs = 0 -

l[ l

1 X2 x X3

X=

=

-2s 6s - +6t21 S

,

X4

-21

X5

I

[

where s and 1 are arbitrary constants.

In the solution above. t i an arbitrary constant. The kerne! i the line panned by

the vector [

-~]

m~ 3

~

Con ider a linear transformation T(x) = Ai from IR" to IR"', where n exceeds m (a in Example 11). There will be nonleading vmiable for the equation T (i) = A .~ = Ö, that is, this sy. tem ha infinitely many so lutions. Therefore, the kernel of T con ist of infinitely many vectors. Thi s agree with our intuition : we expect some coll apsing to take place a we transform IR" into IR 111 •

ker(T )

~

I

6s- 6t -2s + 21 s -2t

I

' · t ru-bittary scalru-s

We can write

6s - 61 -2s + 2t s -21

6

-6

-2

2 0

+t

1 0 0

=

EXAMPLE 12 .... Find the kerne! of the linear Iransformation T : IR5 ~ IR4 given by the following

-2

matrix: This shows that ker T ) is spanned by the vector

5

4

6

6

3 6 10 7

7 8 6 6

6

-2 1

0 0

and

[

-~ -2 I

Solution We have to solve the equation T (x) = Ai=

rref(AJ

= [

~

!

0 0

Ö.

-6 2 0 0

Consider a linear transformatlon T fro m IR" to IR"' given by T (.r) = A,r, for some m x 11 matrix A .

138 •

Sec. 3.] Im age and Kerne t of a Lin ear Transfo rmat io n •

Chap . 3 Subspace of IR" a nd Th eir Dime nsions

Defin ition o f in ver1ibi lity (2.3. 1 and 2.3.2)

T he kerne ! of T ha the follow ing propertie. : Fact 3. 1.7 (vi)

Fact 3.1.6

139

~

,...

(i)

~

,... (ii )

Properties of the kernel a . The zero vecto r 0 in lR" is contain d in k r(T).

Fact 2.3.3

b. The kerne ! i clo-ed unde r ums. c. T he kern e! is c losed unde r scalar multipl

. (iii )

C o mpare the e pro perti of the kerne t w ith the corre po nd ing propertie of the image Ii ted in F act 3. 1.4. The veiifi cation of Fac t 3. 1.6 is Straig htforward and left as Exercise 49. Co n ider a quare matri x A of ize n x 11 . we j u t ob erved , the kern of A co ntain s tbe zero vecto r in iR.". Tt is pos ib le rhat the kern e t co n i ts of the zero ecror_alone . fo r exam pl e. if A = ~· · More ge nerall y , if i in _ertibl . the n ke r(A) = {0) , since the y tem Ai = 0 ha o nl y the o lutio n .x = 0 . However, if A i not invertible, the n the _Y tem A.t = 0 ha in fi nite ly m any o lution (by Fact 2.3.4b), o that ker A -+ {0}.

Fact 3.1.7

Defi ni tion ofrank ( 1.3.2)

(iv

Examples 3b a nd 13 of Section 1.3 '

C o n ider a square matri x A. The n ( v)

ker(A) = {0}

Figure II

if (and o nly if) A i in vertibl e .

EX ERCI SES We co nclude thi s . ecti o n with a summary that re lates ma ny co ncep t we have introduced thus far. L a te r, we w ill ex pand thi s sum mary .

Summary 3.1.8

Le t A be an 11 x 11 m atii x . The fo ll ow in g tate me nts are equi val e nt (i.e ., they are either all true or al l fa l e): i. A i in vertibl e. ii. The linear syste m

Ax = b ha

a unique so lutio n

,r, fo r all b

GOALS U e the concept of the image and the kerne! of a linear tran sformati o n (or a matrix). Express the image and the kerne! of any matiix as the pan o f some ector . Use kemel and image to de termine whether a matri x is in vertible. For each m atri x A in Exerc ises I to 13, fipd vector that Use paper and pencil. l. A=U

~]

4. A = [ 1

2

in lR" .

iii. rref(A) = !" . iv. ra nk (A) = n . v. im (A) = IR" .

2. A =

3]

5.

A~

[~

~]

[ :

1 2 3

pan the kerne ! of A.

3. A =

n

6.

A

~

[~ ~] [:

:]

vi. ke r(A) = {0} . 1

imagc) .

Figure 11 briefly reca lls the justificatio n for these equiv ale nce :

Note that

b is

in Lhe image of A if and only if the · yste m Ax =

b is consis te nt

(by defin it io n of the

140 •

Chap. 3 Subspaces of IR" and Thei r Dirn nsion

7. A

lo.A

12.

A

~

[

l

~ [~ =

Sec. 3.1 Image and Kerne! of a Linear Tranformation •

n n

2 3 2

8. A

=[~

2 3 1 2 0 0

[ - 1I

~

11.

- I I

-I

I

0

-2

-1 -2

-2

0 3

- 1

~]

I 2

['

h~ I

il

13. A

0 0 0 0

=

[' ~]

9. A

= :

0 1 4 -1

2 - 3

2 0 0 0 0

-6 3 0 1 0 0 0

0 0 I

0 0

-!l 0 0 0 0 0

3 2 I. 0 0

Foreach matrix A in Exerc ises 14 to 16, find vector that pan the im age of A. Give a few vecror as po ible. U e paper and penc il.

15. A = [

~

1

I

2

3 4

HJ

I]

141

27. Give an exampl e of a noninvertible fun cti o n

f: lR

~ IR

w ith im (.f) = IR.

28. Give an exampl e of a parametrizatio n of the ellipse ?

l

r+ - = 1 4 in IR2

(

ee Example 4).

29. Gi ve an example of a function wbo e image i the unit sphere

x2 + / + z2 = I in 1R 3 .

30. Give an example of a matri x A such that im (A) is panned by the vector [ ~] .

m~'.

31. Gi ve an example of a matrix A

vector

uch th at im (A) i the pl ane w ith norma l

in

32. Give an example of a linear transformation whose image i tbe line panned For each matrix A in Exercises 17 to 22, describe the image of the tran form ation T (x) = A.r geometricall y a a Jjne, pl ane, etc. in 1R2 or JR 3 ).

u ~]

17. A

=

19. A

= [ -~ -4 -6

21. A

2

~

3

_:]

[! i] 7 9 6

u 1~ ]

18. A

=

20. A

~[:

22. h

1

1

]

1

[~

4

1 5

in JR 3 .

l]

33. Give an example of a linear transformation w hose kerne! is the plane x + 2 y + 3z = 0 in JR 3 . 34. Give an example of a linear transforn1ation whose kerne! i the line panned

~]

by

Describe the images and kernels of the transformations in Exercises 23 to 25 geometricall y.

23. Reftection in the line y = x/3 in JR2 . 24. Orthogonal projection onto the plane x

+ 2y + 3z =

givenby

0 in !R 3 .

where a , b, c are arbitrary scalars?

f(t) = t 3 + at 2 +br +c,

in JR 3 . 35. Consider a nonzero vector ü in JR 3 . Arguing geometrically, de cribe the image and the kerne! of the linear transformation

25. Rotation through an angle of rr /4 in the counterclockwise direction (in 1R2 ). 26. What is the image of a function f: JR ~ JR

by

.

T: JR 3 ~ lR

g iven by

T (i) = ü . .r.

36. Consider a nonzero vector ü in JR3 . Arguing geometrica lly, de cribe the image and the kerne! of the linear tran formation

T: JR 3

""7

JR 3

g iven by

T (.r ) = ü x .r.

142 •

Sec. 3.1 Image and Kerne! of a Li n ear Transformation •

Cha p. 3 Subspaces of IR" and Th eir Dimensions

For whi ch vectors ji is thi s system cons i tent? The answer allow expre s im (A) as the kerne! of a 2 x 4 matrix B .

37. For the matrix

143 you to

43. Using your work in Exerci se 42 as a guide, explain how yo u can write tbe image of any rnat1i x A as the kerne! of some matrix B.

de cribe th im ages and kern I of the m atrice A, A

2

,

3

and A g o metrica ll y.

38. Con ider a quare matrix A. a. What i the relation hip bet• een ker(A and ker(A- ? More gene rally 4 what can o u say about ker(A), ker(A 2 ), ker(A 3 ), ker(A ) , .. .. 3 2 b. What can you say about im(A , im (A ), im(A ) •.. . ? 39. Con ider an m x 17 m atrix A and an 11 x p matrix B. a . What i the relationship between ker AB) and ker(B)? Are they always equal? I o ne of them aJways contaioed in the other? b. W hat is the relarion hip b rween im (A and im (A B)?

40. Con ider an 111 x 17 matrix A and an 11 x p matri x 8. lf ker(A = im (B , w hat can you ay about the product AB?

.

.

41. Con 1der the mamx A =

[ 0.36 0.48 ]. 0.4 _ 8 0 64

a. Descri be ker(A) and im(A ) geometri call y. b. Find A2 . If v is in the image of A w hat can you say about Av? c. Based on your answers in parts a and b, describe the transformation T ,'t) = Ax geomerricaJly.

42. Express the image of the matrix

~ l1 ~] 1

2

A

3 4

,.,

.)

5 7

a the kerne! of some matrix B. Hint: The image of A co nsists of all vectors y in !R4 such that the system Ax = ji is consistent. W1ite thi s system more explic itly:

x 1 + x2 + X3 + 6x4 = Yl x, + 2x2 + 3x3 + 4x4 = Y2 x, + 3x2 + 5x3 + 2x4 = Y3 x, +4x2 + 7x3 Y4 Now reduce rows:

x1 x2

x3 + 8x4 = + 2x3 - 2x4 =

0 0

=

4 y3 - 3y4 Y3 + Y4 Y1 - 3)13 + 2 Y4 Y2 - 2y3 + Y4 -

44. Consider a matrix A, and Iet B = rref A . a . Is ker(A) nece sarily equal to ker(B )? Explain. b. Is im(A) necessarily equal to im (B)? Expiain. 45. Consider an m x n matrix A with rank(A) = r < n. Explain how you can write ker(A) as the span of n. - r vectors.

..-

~ · Con ider a 3 x 4 matrix A in red uced row echelon fo rm . What can yo u say

about the image of A? Describe all case in tem1 of rank(A), and draw a sketch for each.

47. Let T be the projection along a line L 1 onto a line L 2 (see Exerci e 2.2.33 ). Describe geometrically the image and the kerne) ofT .

48. Consider a 2 x 2 matrix A with A2 = A . a. If w is in the image of A, what is the relationship between w and A w? b. What can you say about A if rank (A) = 2? What if rank(A) = 0? c. If rank(A) = 1, show tbat the linear transformation T (x) = Ax is the projection onto im(A) aJong ker(A ) (see Exercise 2.2.33) . 49. Verify that the kerne! of a linear transformation is closed under addition and scalar multiplication (Fact 3 .1.6).

SO. Consider a square matrix A with ker(A 2 ) = ker(A 3 ). Is ker(A 3 ) = ker(A 4 )? Justify your answer.

51. Consider an m _x n matrix A and an n x p matrix B uch that ker(A) = {0} and ker(B) = [0}. Find ker(A B).

52. Consider a p x n matrix A and a q x n matrix B. and form the partitioned matrix

What is the relationship between ker(A), ker(B) , and ker(C) ? 53. In Exercises 53 and 54, we will work with the binary digit (or bit ) 0 and J, instead of the real numbers R Addition and multiplication in thi y tem are defined as usual , except for the rule 1 + 1 = 0. We denote thi number y tem with lF2 , or simply IF. The et of aJJ vectors with 17 component in IF is denoted by lF"; note that IF" consists of 2" vectors. (Why?) In information technology, a vector in IF 8 is called a byte (i.e. a byte is a string of 8 binary digits). The basic ideas of linear algebra introduced so far (for the real number ) apply to lF without modifications.

144 •

Chap. 3 Subspaces of ~" and Their Dimens ions

Sec. 3. 1 Image an d Kerne! of a Linear Transfo rm at io n •

A Hamming matrix with 111 rows i a matri x which ·ontain all non zero vector in F111 a it co lu mns (in any ord er). Note that there are 2111 - I column . Here is an ex ample:

145

will work with blocks of o nl y fo ur binary digits, i.e., with vector in Ir". For example: . .. j t O I J j l O O t j i O I O j l O l l j l OOO I .. .

0 0 l 0

0

0

I 0

ll

3 row

Suppo e these vectors in JF4 have to be transmitred from one computer to another, say from a satellite to ground control in KoUI·ou, French Gui ana (the Stati on of the Eu ropean Space Agency) . A vec tor ii. in JF4 is first tran fo rmed in to a vector v = Mu in JF1 where M is the matrix you found in Exerci e 53 . The last fo ur entries of are just the entrie of the first three entrie of v are included to detect errors. The vector v is now transmitted to Kourou . We assume that at most one error will occur during tran rril ssion, that i , tbe vector ÜJ received in Kourou will be either (if no error ha occurred), or ÜJ = + if tbere is an error in the itb component of the vector. a . Let H be the Hamroing matrix introdu ced in Exercise 53. How can the computer in Kourou use H w to detem1ine whetber there was an error in the transmission? If there is no error, what is H w? If there is an error, how can the computer determine in which component the error was made? b. Suppose the vector

23 - I = 7 colurru1

v

a. Ex press the kerne! of H a the pan of fo ur v ctors in IF 7 of the form

*

*

* U1 =

*1

0 0 0

2 =

* *0

U3 =

* * * 0

I

0

0 0

0

1

*

v

v e. ,

*

U4 =

u;

*

0 0 0

1

b. Form tbe 7 x 4 matrix

0 1 0

W =

1

0 0 is received in Kourou . Derermine whether an error wa transmission and, if so, correct it (i.e., find v and il). Explain wby im (M) = ker(H ). If

x is an

arbitrary vector in

made m the

JF4 , what is

H (M x)?

54. (See Exercise 53 for some background.) When in formation is transmitted, there may be some errors in the communication. We present a method of adding extra information to messages so that most eiTors that occur during transmission can be detected and corrected. Such methods are refen·ed to as error-correcting coäes (contrast these to codes whose purpose is to conceal infom1arion). The pictures of man ' s first landing on the moon (in 1969) were televised just as they had been received and were not very clear, since they contained many errors induced during transmi ssion . On later missions, much clearer error-corrected pictures were obtained. In computers, information is stored and processed in the form of string of binary digits, 0 and l. Thi s st.ream of binary digits i · often broken up into "blocks" of eight binary digits (bytes). For the sake of simplicity, we

Kourou

JF'3 H

SateUite

detect error M 11

encode

tra nsmiss ion

JF'1 ii

~

possible error

JF7 ÜJ

correct error

JF1 ii

IF'~

decode

ü

146 •

Chap. 3 Subspaces of IR" and Their Dim ensio n Sec. 3.2

2

SUBSPACES OF IR''; BASES AND LINEAR INDEPENDENCE

ubspaccs o f

;

Ba es a nd Linear lnd epe nd nce •

147

y

w

In the last ction we have een that both the image and the kerne[ of a linear Iransfo rm ation contain the zero vector (of th e codoma.in and the domain. re -pectively) , are clo ed under addition. a nd are c lo ed unde r calar multip lication . S ubset of IR" that ha ve the e three properties are called ubspace of IR".

Definition 3.2.1

11

Subspaces of IR" ub e t W of IR" i called a subspa e of IR" if it ha the fo ll owing propertie : a. W contains the zero vector in IR" .

b. W i closed unde r adclition: If

w, + w2.

w1 ancl w2

c. W i closed unde r scalar multiph cation: If

are both in W , then

wi

o i

in W and k i an arbitrary

Figure 1

X

calar, the n kw is in W. y

Fact 3.1.4 a nd 3.1.6 tel ! us the following:

w Fact 3.2.2

If T : !R"

--7

IR"' i a linear transformation, then

• ker(T ) is a subspace of IR11 , and • im (T) is a subspace of IR"'. ii in W

The term space m ay eem surpn smg in this context. We have seen in examples that an image or a kemel can well be a line or a plane, w hi c h is no t something we ordina rily call a "space." In modern mathematics the term "space" i a pplied to structu res Lhat are not even re mote ly imil ar to the space o f o ur experience (i.e., the space in which we li ve, w hi ch i nai vely ide ntified w.ith JR 3 ). The term space is used to describe IR" for any value of n (not just n = 3). A.ny subset of the space IR" with the properties li sted above i ca ll ed a s ubspace of IR", even if it i. not ·'three-dimensional." EXAMPLE 1 ..... Js W = { [

~]

2

2

.r

Figure 2

- ii= (- l )ii

in JR : x ::: 0 , y ::: 0} a subspace of JR ?

Solution Note tha t W consists of a lt vect.ors in the first quadrant, including the positive axes, as shown in Figure I.

The answer is no: W contains the zcro vector. and it i c losed unde r addition. but it i not c losed under muJtipli cation wi th negative sca la r ( ee Fi oure 2). ~

148 •

Sec. 3.2 Su bspa ces of IR"; Bases and Linear lndependence •

Chap. 3 Subspaces of R" and Their Dimensions

149

y

L

X

0

Figure 3 Figure 5 y

as a linear combination of ii 1 and i:i2 • Therefore, ü is contained in W (since W i closed under linear combination ). Thi hows that W = 2 , a claimed. <11111

ii

Similarly, the on ly subspaces of JR( 3 are JR( 3 itself the plane through tbe origin , the line tbrough the origin, and the set {0} (Exerci e 5). ote the hierarchy of subspace , arranged according to their dimen ions (the concept of dimension will be made precise in the next section). X

ÜJ

11:

Subspaces of JR 2 Dimension Dimension Dirnenion Dimension

3 2 I 0

Subspaces of JR 3 ]R3

JR2

planes through Ö lines th!ough Ö (0}

lines through 0 {0}

f1911re 4

Solution Note that W consists of all vectors in the first or third quadrant (including the axes). See Figure 3. Again, the answer is no: While W contains 6 and is closed <11111 under scalar multiplication, it is not closed under addition (see Figure 4).

EXAMPLE 3 .... Show that the only subspaces of ~2 are IR2 itself, tbe set {0}, and any of the lines through the origin.

We have seen that both kerne! and image of a linear transformation T are subspaces (of the domain and rhe codomain of T , respectively). Conversely, can we express any ubspace V of IR(" as the kernel or the image of a Linear transformation (or, equivalently, of a matrix)? Let us consider so me examples. A plane E in JR(3 is usually described either by giving a linear equation such as x1

+ 2x2 + 3x3 =

0,

or by giving E parametrically, as the span of two vectors : for example,

Solution Suppose W is a subspace of IR2 which is neither the set {Ö} nor a line through the origin. Wehave to show tha~ W = JR( 2 . Pick a nonzero vector iJ 1 in W (we can find such a vector, since W =I= {0}). The ubspace W contains the Line L spanned by Üt , .but W does not equal L. Therefore, we can .find a vector 2 in W which is not in L (see Figure 5). Using a parallelogram, any vector in JR2 can be expressed

v

v

UJ [-H and

In other words, E is described either a ker [ 1

2

3]

150 •

Sec. 3.2 Sub paces of lR 11 ; Bases and Linear Independence •

Chap. 3 Subspace of !R.11 and Their Dimensions

151

or im [

·~ - 1

_; ] . 1

Similarly, a line L in rn! 3 may be described either parametrically, as the spa n of the vecror

~~

or by the two linear equations XI XI -

I

X2 -

2xz +

X) X3

I

/J

= 0 = 0 .

Figure 6

6 _:;how~ tha~ we need on ly li 1 and ü2 to span the image of A. Since + u2, the vectors Ü3 and ü4 are redundant: tbey are li near co mbmatJon of u1 and ü2 .

Therefore.

- 1

-2

J

- 1l .

A subspace of rn!" i u ually eilher presented as the olution set of a homogeneous linear system (i. e., a a kerne!), or given parametrically as the span of ome vectors (i.e., as an image). In Exercise 38 you will see that any subspace of IR" can be represented as the image of a matrix. Sometimes a subspace that has been defined as a kemel must be given a an image, or vice versa. The translation from kerne\ to image is traightforward: we can represent the solution set of a linear system parametrically by Gau s-Jordan elimination (see Chapter I) . A way to write the image of a matrix as a kerne\ i di scu ed in Exerci e 42 and 43 of Section 3.1.

+

Basesand Linear Independence EXAMPLE 4 ..... Consider the matrix

F~gure

_

hu~ ~ n

Find vectors li 1, ü2 , . .. , Ü111 in IR3 tbat span the image of A. What is tbe smallest number of vectors needed to span the image of A?

Solution

=

v3

~u1 . and u4 ~ v 1

im (A) =

pan (li 1, Ü2 , Ü3,

v4 ) =

span(li 1, ü2 )

The image of A can be spanned by two vectors, but clearly not by fewer than ~~ _

_ We

VI'

V2

oft~ n ~vish

.. . ' Vm 10

to expre s a ubspace V of IR" as the span of some vector V. It is reasonable to require that none of the 1 be a linear



combination of the other ; otherw ise we might as we!J omit it. We fi rst introduce some u eful terms:

Definition 3.2.3

Linear independence; basis Con icler a sequence Ü1, . . . , Ü111 of vectors in a subspace V of JR". The vectors Ü1, . . . , Üm are called linearly independent if none of them i a ljnear combination of the others. 1 Otherwise the vectors are called linearly dependent . We . ay that the vector ü1 ••• , Ü111 form a basis of V if tbey span V and are linearly independent. 2 The vector u" u2 , ü3 , Ü4 in Example 4 span V = im (A), but they are linearly dependent because Ü-t = ü1 + ü2 • Therefore, they do not fom1 a basis of V. The vectors 1• ü2 on the other band, do pan V and are Iinearly

v

We know from Fact 3.1.3 that the image of A is spanned by the columns of A: 1Thi s

det'i nition makes -ense i f' 111 is
We say th at a ·in gle vector, v 1 , is linear!

indcpendcnt if ii 1 'I Ö. 2B conve ntion. Lhe empty et 0 is a basis of the space {0} .

152 •

ha t . 3 Subspaces of Ii(" and The ir Dimens ions Sec. 3.2 Subspaces of JR"; Bases and Linear lndependence • independen t (neither of them i a mul tiple of the other). T h refore, the ve~ tor u1 - , are a ba i of \1 = im(A). T he vector u1. - 2 do not form a ba. 1 of IR 3. h;w -v r. While they ar linearly independent, they do not span IR 3 (they span only a plane). T he following defi nition w ill he lp us to ga in a better conceptual understand ing of linear independen e.

Solution To fi nd the relations among these vectors, we have to solve the vector equation

I

Con ider the vector

3

+c, [

4 5

Linear relations

u1 u2•. . .•

10

+ C4

11

25

~m

= 0

u;.

u;.

For example, am o ng the vector introdu ed in Examp le 4 we have the nontrivial relation

Proof

p

o r the matrix eq uati on Cm -m

is call ed a (lin ear) relation among tbe vector There i al ways the trivial relation, with c 1 = c2 = .. · = cm = 0. Nontrivi.al relation may or rnay not ex.i t among the ectors

Fact 3.2.5

+c,

1

4 9 16

Um in IR" . An eq uatio n of the form

C1 - 1 + C2 -2 + · · ·

because u4

i]

2 Cl

Definition 3 .2.4

153

[i

6 7

2

Cl

0

11 25

C4

0

5

8 9

7

10

I]~~ r~1~ [0 ~ 4

3

A

In other words, we have to fi nd the kernet o f A. To do . o, we compute rref(A ). Usi ng technology, we find that

= u, + u2.

The vectors ü1, ••• , um in lR" are linearly dependenr if (and only if) there are no nt1·ivial relation amo ng the m.

rref(A) =

1 0 0 0 0 l 0 0 0 0 I 0 0 0 0 l 0 0 0 0

• If one of the Ü; is a linear combination of the others, then

and we can fi nd a nontrivial relation by subtracting equation above: C[V I

+·· · +Ci- I U;- 1 + (- l ) u; +

Ci+ l Vi + l

v; from both sides of the

+ ··· +

CmVm =

0

• Conversely, if the re is a nontri vial relation

Cl ul + ... +

C;V;

+ . .. +

This shows that the kerne! of A is (Ö}, because there is a leading I in each column of rref(A). There is o nly the tri vial re lati on among the four vectors, and they are therefore linearly independe nt. <011111 Since any relation

Cl UI +

Cm Um = 0 , with C; =/:- 0 ,

C2

Vz

+ ···+

Ö

Cm Um =

can alternati vely be wrinen as

then we can solve fo r Ü; and thus expre s Ü; as a linear combination of the other vector . •

EXAMPLE 5 ..,._ Determine whether the fo llowing vectors are li nearly independe nt. 1 2 3

4 5

6 7 8 9 10

2 3

5 7 11

C2 cl .

1

4 9 16

25

[

J

= 0,

C;n

we can generalize the observation made in Example 5.

154 •

Sec. 3.2 Sub paces of IR"; Base and Linear lnclepcnd nce •

Chap. 3 Subspace of IR 11 and Their Dirnen ions

155

~-----------------------r

Fact 3.2.6

The vect r - , ..... -", in !R" ar linear! ind pendent if (a nd on ly il)

v",

ker

=

{0)

E= im(AJ

or. eqUI alentl y. if there is a leadin g I in ea h column of Figure 7

rref be expre. ed uniquel y a a linear combinat ion of the

0 = Oü, + Oü2

v,,

namel y. a

+ · · · + Oü",.

Butthis mea n th at there i only the tri ial relation among the v;: they are linear! indepenclent (Fact 3.2.5). • We co nclude thi ectio n with an important alternative characterization of a basi . If the vectors v1, v2 , . .. , v", are a basis of the subspace V of lR", then any vector - in V can be expres ·ed a a linear combination of the ba is vecto r ( ince they pan V) . Thi s repre entation i in fact u11.ique.

Fact 3.2.7

Proof

v

v",

Con ider the vecto r iJ 1, 2 , ... , in a sub. pace V of lR". T he vector Ü; are a basi of V if (and only if) any vector ü in V be expre. ed uniquely as a linear combi nation of the vector- V;.

ü = c,ü, + c2v2 + · · · + CmÜ", = d, ü, + d2Ü2 + · · · + dmv",. By ubtraction, we find -

)v, + (c2 -

d1

dz)Ü2

+ · · · + (cm

but also

an

Suppo e the vector v; fom1 a basis of V , and consider a vector ü in V. Si nce. the ba i vector span V, the vector v can be written as a I inear comb ination of tbe v;. We have to demonstrate rhat thi s repre en tation is unique. To do ·o, we con ider two repre. entations of ü, name ly

(c 1

As an example, consider the plane E = im (A) in ~ 3 introduced in Example 4 ( ee Figure 7). The vectors v1, ü2 v3 , v4 do not form a ba i of E. sin ce any vector in E can be expre ed in more than one way as a linear combination of the Ü;. For example, we can write

v-1

= O:V , + Oü2 + Oü3 + I. ü.j .

Because any vector in E can be ex pre ed uniquely a a li near combination of ü1 and iJ2 alone, the vectors ii ,, v2 fo rm a ba is of E.

EX ERCI S ES GOALS Check whether a subset of !R" is a ubspace. App ly the .concept of linear independence (i n terms of Definiti o n 3.2.3. Fact 3.2.5, and Fact 3.2.6). Apply th oncept of a ba is, both in t rms of Defi nition 3.2.3 and in terrn of Fact 3.2.7.

- d",)Ü," = 0,

which is a relation among the Ü; . Since the Ü; are linearl y in dependent, we have c; - d; = 0, or c; = d;, fo r all i: the two repre entations of iJ are ide ntical, as claimed . Conversely, uppose that each vector in V can be expres ecl uni q uely as a li near com binati on of the vector v; . Clearly, the v; pan V . T he zero vector can

Which of the sets W in Exerci es 1- 3 are sub pace of JR 3 ?

156 •

Chap. 3 Subspaces of ~~~ and Their Dimensions

Sec. 3.2 Su~spaces of R11 ; Basesand Lin ear Independ ence •

15 7

ln E xerci se 20 and 21 , decide whe th er the vector are linearly indepe ndent. Y u may use technology .

3. W =

I[4;:;~ : ~~ ] :

x , y, z

7x + 8y + 9z

arbi~ary constants}

4. Consider the vectors ü1, ü2 , • . . , Üm in iR" . I sub pace of iR"? Justify your answer.

O-m - ln l:!J

span(i:it , . . . , tim) neces arily a

5. Give a geometrical description of all subspaces of JR3 . Ju tjfy your answer.

I

.21.

6. Con ider two sub paces \1 and W of iR". a . I the intersection \1 n W necessarily a ubspace of iR"? b. Is the union \1 U W necessarily a sub pace of IR"?

1

7 3 5 4 2

2 5 3 9 7

[i m

-or which choices of the constants a, b, c d , e, linearly independent? Justify your answer.

7. Consider a nonempty subset W of iR" that is closed under addition and under scalar multiplication . Is W necessarily a subspace of lR"? Explain.

f are the following vector

~ Find a nontrivial relation among the following vectors:

[!]. in IR", with ii 111 = 0. Are these vectors linearly jndependent? How can you teil?

9. Consider the vectors i:i 1, ü2 ,

... , V111

In Exercises 10 to 19 use paper and pencil to decide whether the given vectors are linearly independent.

@

Consider a subspace \1 of IR". We define the orthogonal complement V .1.. of \1 as the set of those vectors in IR" which are perpendic ular to all vector in \1, that is, ü = 0, for all in \1 . Show that V ..L is a subspace of Jl{".

w u



24. Con ider the cise 23).

lin~ L spanned by[;·]

in IR 3 . F ind a basis of L ..L (see Exer-

3

25. Consider the subspace L of IR 5 spanned by the vector below. F ind a basi of L ..l. (see Exercise 23).

26. For whi ch choices of the constan ts a , b, . .. , m are the vectors below linearly independent? {/

b

c d

I

0

e 1 0 0 0 0

f

k

g

/'11.

h

1

j

0 0 0

1

158 •

Chapo 3 Subspaces of IR11 and Their Dimensi o ns

Sec. 302 Subspaces of R 11 ; Basesand Linear Independence •

Find a ba i of the imag o f each matrix in Ex rci es 27 to 33°

159

lf you are puzzled, think fir t abou t the peci al case whe n V i a plane in JR 3 What is m in thi s ca e? c. Show that any subspace V of lR" can be represented as the image of a matrixo 0

27.

30.

33.

un [~

0 0

[~

1

2

0 0 0

q

I

28.

!] 0

Jo

[i

2 3

2 31. [ 1 3 5 0 3 I 4 0 0 0 '0

n

I

67 8

29.

J

32.

[!

~]

2 5

39. Cons ider some linearl y independent vectors ü1, ii2 , 000, Ü111 in lR" and a ector ü in lR" that is not contained in the pan of ü" ü2 , 000 Ümo Are the vector ü, , Ü2, 000, Ü111 , Ü necessari ly lin early independent? Justify your answer.

[~ i

40. Con sider a n m x n matrix A a nd an

11 x p m atrix B We are told that the co lumn s of A and the column s of B are linearly independento Are the colurnn of the product AB linearly independeot a weil ?

J

0

41. Con ider a n m x n rnatrix A and a n n x m matrix B (with 11 =!= m) uch that AB= Im (we say tbat Ais a left inverse of B)o Are the columns of B linearly independent? What abou t the columns of A?

!]

42. Co nsider sorne perpendicular unit vectors ü1, ii2, 000, Ü111 in lR11 0 Show that the e vectors are necessari ly linearly indepe nde nt. Hint: What happen whe n

34. Con ider the 5 x 4 rnatrix A below:

you fo rm the dot product of Ü; and both ides of the equation below?

u,

A =

3

U-1

43. Con ider three linearly independent vectors ü1,

~.

ü3 in lR11 Are the vector 0

ü, ü, + ü2 , ü1 + ü2 + ü3 linearly independent as well? How can you tell? 44. Consider the li.nearJy indepe nde nt vectors Ü1, ü2 , 000, Üm in lR 11 and Iet A be We are told that the vector [

jJis in the kerne! of A.

Write V, as a linear

an invertible m x m matrixo Are the column o f the foiJowing matrix linearly independent?

combination of - , , Ü2, Ü3o

35. Consider the linearly dependent vectors ü1, ü2 , 000, Ü111 in lR" where ü1 =!= 00 Show that one of the vector Ü; (for i = 2, 000, 11) is a linea r combi nalion of the previous vectors ü1, ü2 , 000 Ü; _ 1

U111

A

o

36. Con si der a linear transfom1ation T from lR" to JRP a nd some linearly dependent vector ü1, ~. 000, ü", in lR"o Are the vectors T(Ü J), T (~), 000, T ( Ü111 ) linearl y dependent? How can you tel!?

37. Consider a linear transfom1ation T from lR" to JRP and some linea rly independent vectors ü,, Ü2, 000, Ü111 in lR"o Are the vectors T( v 1) , T(ü2 ) , 000, T(Ü 111 ) necessarily linearl y inde pe ndent? How can you tell?

38. a. Show that we can find at mo t n linearly independent vectors in lR"

46. Find a basis of the ke rne! of the matrix belowo

0

b. Let V be a subspace of lR" Let m be the largest number of linearly independent vectors we can find in V (note that m :s 11 , by part a)o C hoose so me linearly independent vectors ü,, ü2 , 000, Ü111 in V 0 Show that the vectors ü,, Ü2, 000, Ü,11 spa n V, and a re therefore a bas is of V 0 This exercise shows that a ny subspace of ~" ba a basi so 0

45. Are the co lumns of an invertible matrix Jinearl y independent?

l [

2

0 0

0

3 4

~]

Ju stify your a nswer carefully , ioeo. ex pl ain how you know that the vector you found a re linearly independent and pan the kerne!.

160 •

h ap. 3 Sub paces of IR 11 and Th eir Dirn nsi n

Se . 3.3 The Dime nsion of a Subspace o f IR" •

,

47. Con ider three linear!_ independent ve tor ü1• ü2, ü3 in lR

48. Expre the plane E in JR 3 with equation1 .q a matri · A and as the image of a matrix B.

JR 3

+ 4x 2 + Sx3 =

4

.

Find

0 a the kernet of

i]

49. Express the llne L in spanned by the vecto r [ as the image of a matJi.x A and as the kerne! of a rn atri x B. 1 50. Consider two ub paces V and W of lR". Let V + W be the et of all vector in lR" of the fom1 ü + tÜ, where ü i in V and ÜJ in W . I V + W nece aril y a subspace of lR"? lf V and W are two di tinct Jine in lR-', what is V + W? Draw a ketch. 51. Consider tv~o sub pace V and W of IR" who e intersection con i t only of the vector 0. a. Con ider linearl y independent vector u1, ü2 , •• • , Üp in V and w1, w2 . . . , wq in W. Explain why the vectors ü1, ü2 , .. . , Üp, w1, w2 , ... , Ü!q are linearly independent. b. Con ider a basis ü,, Ü2, . . . , ÜP of V and a basis ÜJ 1, w2 , .. . , Ülq of W . Explain why Ü1, u2 , •• • , ü", w1, w2 , ... , wq i a ba i of V + W see Exercise 50). 52. True or false? If V is a ubspace of IR", then there is an n x 11 matrix A such tbat V = im(A).

Figure 1 The vectors

v v2 form o 1,

bosis of E.

w w

w

Consider some vectors ü1, ü2 , • •. , vP and 1 , 2 , ••. , 9 in a subspace V of lRn ·. If tbe vectors Ü; are linearly independent, and the vectors Wj span V, then p ~ q.

For example, Iet V be a plane in JR3 . Our geometric intuition tells us that we can choose at most two linearly independent vectors in V, so that p ~ 2, and we need atleast two vectors to span V , so that 2 ~ q : Therefore, the inequality p ~ q does indeed hold in thi s case.

Proof

Tbis proof is rather technical and does not reaUy explain the result conceptually. In Chapter 7, when we study coordinate systems, we will gain a more conceptual understanding of tlü matter. Since the Wj span V, we can express each as a linear combination of the vectors wj :

v;

(Hint: See Exercise 38.)

We write each of the e equations in matrix form:

THE DIMENSION OF A SUBSPACE OF

161

~n

Consider a plane E in IR 3 . Using our geometri c intllltiOn, we observe that all bases of E consist of exactly two vectors (any two nonparallel vectors in E will do; see Figure I). One vector is not enough to span E , and three or more vectors are linearly dependent. More generally, all bases of a subspace V of IR" consist of the same number of vectors. To prove thi s impo1tant fact, we need an auxiliary result.

Wq

r

a~t ] :

{/ lq

_-

-

VJ

Wq

ra~l ] Opq

=

Üp

162 •

Sec. 3.3 The Dim ension of a Sub pa e of J?." •

Chap. 3 Sub pace of IR" and Their Dim en io ns We combine all the e equ ation int o o ne matri x equatio n:

Thi s algebraic definition of dimen sion re pre. ents a maj r step in the development of linear al gebra: it a llow u. to co nceive of pace. w ith m re than three dimen s.ion s. Thi s idea is of'ten poorly understood in popu lar culture, where ·ome mystici sm . uJTounds higher-dimen ·ional space . the German mathematici an Hermann Weyl ( 1885-1955) said: " We are by no mean ob li ged to . eek illumination fro m the mystic doctrines of spiriti t to obta in a clearer i ion of multidime nsion a l geometry" Raum, Zeit, Materie, 19 18) . The first mathematician who th ought about dimen ion fro m
Ar

The kerne! of A is comained in th e kernel of N (if A.\- = 0, then M = But the kemel of i {Ö}, ince the Ü; are linearl y independent (Fact 3.2.6). Tberefore, the kerne] of A is {Ö} a weil. This impli e th at the q x p matrix A has at least as rn an y row as colu mns, o that p .::: q (by Fact 1.3.3). •

N.'i = Ö.

Now we are ready ro pro e rhe fo ll owing:

Fact 3.3.2

Proof

idee peut etre conteslee, m.ais elle a, ce me semble, quelque merite, quand ce ne serait que celui de Ia nouveaute] . (Encyclopec/ie, vo l. 4, 1754.

All bases of a ub pace V of iil&" co n ist of the same nurnber of vector .

Consider two bases ü1, • • • , Üp a:nd ÜJ 1, ••• , wq of V. Since the Ü; are linearly independent and the Üij pan V, we bave p.::: q. by Fact 3.3.1. Likewi e, . ince the Üij are linearly independent and the Ü; spa n V, we have q .::: p. Therefore,

p=q.



Consider a line L and a plane E in JR 3 . A ba is of L co nsi t of ju t one vector (any nonzero vector in L wi ll do) , while all ba es of E con ·ist of two vector . A basis of JR 3 consists of three vectors (t he vectors e1. e2, e3 are o ne po sible choice). In each case, the number of vector. in a ba is COJTespond to what we intuiti vely ense is the dimension of the sub pace.

Definition 3.3.3

163

Dimension Consider a sub pace V of lR". The number of vectors in a basis of V is caUed the dimension of V denoted by dim(V). 1

1 For lhi s defi nition to make sen e. we have to be sure that any subspace of IR'." has. a basis . This verificat ion is left a Exercise 3.2.38.

The homme d'esprit wa no doubt d ' Alembert himself. D ' Alembert wi hed to proteer him e lf against being attacked fo r what appeared as a ris ky idea. The idea of dimen ion wa later tudied more y tematically b the German mathematician Hermann Günther Gra smann (1809-1877), who introduced the concept of a subspace of IR". Gras mann' methods were only slo wly adapted, partly becau e of hi ob eure writing . Gra mann ex pre ed hi idea in the book

Die lineare Ausdehnung lehre. ein neuer Zweig der Mathematik (The The01y of Linear Extension, a New Branch of Mathematics) , in 1844. Simi lar work \ a done at about the ame time by the Swi mathematician Lud wig S hlä fli ( 18 141895). Today , dimen ion is a tandard and central tool in mathemaric . . a v eil a in phy ics and stati stic . The idea ha been applied to certain n nlinear ubset of IR", called manifold , th us genera lizing t.he idea of curve and • urface in IR 3 . Let u rerurn to the more mundane: w hat i the d imen io n of IR" it e lf? Clearly. IR" ought to have dimension /!. Thi i- indeed the ca. e: thc vectO I'" el 2 , ... , 11 form a basi of IR" ca ll ed it ' sranda rd basis. A plane E in IR 3 is rwo-dimen ional. Above, we mentioned th at we ca nnot find mo re than two linearly inde pende nl ectors in E . and that we need at I a t two vector to pan E. lf two ector in E are linea rl y indepen de nt, they form a basi of E. Likewi e . if tvvo vec tors pan E, 1hey fo rm a ba i of E. These ob erva tion s ca n be generali zed a f llow :

e

e

164 •

Chap. 3 Subspaces of JR:" and Their Dimensions Sec. 3.3 The Dirnen io n of a Su bspace of JR" •

Fact 3.3.4

Consid r a subspace V of IR" with dim (V) = d.

Thi corre po nds to the ys tem

a. We an find at most d linearly independent vector in V.

x 1 + 2.x2

I

b. We need at least d vector to span V . c. Jf d vectors in V are linearly independent then they form a basis of V . d. If d vector

X3

+ 3x4 + 3x4 + 5.xs

= 0 =

I

0

with general solution

pan V , then they form a ba i of V . Vi

~:: : l~

The point of parts c and d i the following : by Definition 3.2.3, some vectors form a basi of a ubspace V if they are linearly independent and span V. However, when we are deaüng with "the right number' of vectors (namel y, the dimension of V), ir suffice to check onl y one of the rwo prope.rties ; the other will follow .

Proof

165

tl

+t

[

~

V3

+' [

i

The three vector ü, Ü2, Ü3 pan ker(A), by co ostruction. To verify that they are al o linearly independent, con ider the component corresponding to the nonleading variables: the secood, fourth , and fifth. As you exami ne the econd components, I , 0, and 0, respectively, you realize that ü1 can not be a linear combination of Ü2 and ü3 . Similarly, we can ee that ü2 isn't a linear combination of ü, and ü3· likewise for Ü3 . Wehave howo that the vector ü 1, ü2, ü3 form a basis of the kerne] of A. The nurober of vector in thi basis i.e., the di men ion of ker(A)) i the nurober of nonJeading variable .

We demonstrate parts a and c, and Ieave b and d a Exerci e 58 and 59. Part a: Consider ome linearly independent vectors Ü1• ü2 , • • • ÜP in V and choo e a basis iü~, iü 2 , ..• , iüd of V . Since the ÜJ; pan V we have p s d , as claimed (by Fact 3.3.1). Part c: Consider some linearly independent vector ü1 , .•• , üd in V . We have to how that the v; pan V. Pick a ü in V . Then the ector ü1, ••• , üd v will be linearly dependent. by part a. Therefore, there is a nontrivial relation

dim (ker(A)) We know that c =!= 0. Why?) We cao solve the relation for ü and thu s express ü as a linear combination of the -;. We have hown that an y vector ü in V i a linear combination of the Ü;, that is, the Ü; span V. • From Section 3. 1 we know that the kerne! and image of a linear transforrnatio n are subspace of the domain and the codomain of the transformation, respectively. We will oow examine how we can find base of kerne) and image, and thus determine their d.imensions.

s [

V2

(number of non leading variable ) (number of column of A ) - number of Jead ing vari able (number of column of A ) - rank (A) = 5 - 2 = 3

The method we u ed in Example 1 applie to all matrices .

Fact 3.3.5

Con ider an m x n matrix A . Then, dim(ker(A)) = n- rank(A) .

+ Einding a Basis of the Kernel EXAMPLE 1 ..... Find a basis of t~ ~etof the fo ll owing matrix , and determine the dimension of the kernel. A =

+ Einding a Basis of the Image Again , we study an exam ple, m aking sure that our procedure applie to the general case.

[I2 24 0 39 0]5

EXAMPLE 2 .... Fi nd a ba i

Solution Wehave to solve the linear system A

= [ 21

2 0

4

3

9

0]

5

Ai = Ö. As in Chapter l , fi rst we fi nd rref(A) .

- 2(/) -+

rref(A)

= [ 01

2

0

0

33 0]5 .

T IR

5

~

IR' with matrix A

~ ~ ~

and determ ine the dimens io n of the image.

[

I 1

2

0 - 1

- 1

2 1

166 •

Chap . 3 Subspaces of

I!("

Sec. 3.3 The D imens io n of a Sub pace of ~~~ •

and Their Dimensio ns

_Co n. ider the co lu m ns ii 1, ii2 , ü5 of A corre po nd ing to the column of E 2, 5 are linearl y independent, so are the vectors u, , u2, us . T he vectors ii 1, ü2, ü5 span the im age of A . . in ce anv vector in ' the i.mage of A can be expressed as

Solution

co n ta 1 n 1 ~g ~e ~ead in g l" s. Sin ce ÜJ 1,

We know rhat the columns of A pan the image of A Fact 3. 1.3) , bu t they are linearly dependent in thi s exampl e. (Why?) To construct a bas is of im (A), we could find a relatio n among the co lumn of A, exp ress o ne of the co lumns as a linear combination of the other , and then omü this vector as redun dant. After repeati ng this p ro edure a few time , we would have some linearly inde pendent vector that still pan im (A), i. e. we would have a basi of im (A) . We pro pose a method fo r fi nding a basis of im (A) whi ch u es the ideas just di scussed , but preseot them in om ewhat streamlined form. We fi rst find the reduced row echelon fo rm of A.

u, A=

V2

U3

'U4

u

I

2

0 0 1

0

1

- l

l

2 1 - 1

-

vs

tll"2

U/3

0 I

I - 2

0 0

0 0

Ul4

E

rref(A)

c, ii ,

2

-3 0 0

As shown above, we denote the i th columns of A and E by Ü; and ÜJ;, respectively. We have ro express ome of the Ü; a linear combinations of the other vectors Üj to identify redundant vectors. The con·e pondin g problem for the ÜJ; is easily so lved

ln~ö

Since A and E have the same kerne ls (the reduced row eche lon form wa de fined so that the systems Ai = Ö and Ex = Ö have the sa me soluti o n ), it follo w that

L

(ii 1 - 2ii2) + c4(2ii 1 - 3ü2)

Definition 3.3.6

<011111

Fact 3.3.7

A column of a matri x A is call
The pi vot ·c olumn. of a matrix A fo rm a bas is of im (A) .

Fact 3.3.8

For any matrix A, rank (A) = dim(im(A)).

Tru s interpretation of the rank a a dimen io n i co nceptua ll y more sati factory than the rather technical Defi niti o n 1.3.2. Co nsider an m x n matrix A . Facts 3.3.5 and 3.3.8 teil us that dim (ker(A)) dim (im (A))

=

n - rank(A). rank(A).

and

Adding the e two equati o n , we obtain the follo wing :

Fact 3 .3.9

Tf A is an m x n matrix , then dim (ker (A)

+ dim (im (A)) =

For matri ces of a g iven size the !arger th and vice versa. We ca n inte rpret the fo rm ul a

11 .

k rne l. the . malle r the irnage ,

n - dim (ker(A)) = dim (im (A))

I

+ c5ü5.

The fo ll owing de finiti o n w ill be help ful in stating these re ults more genera lly:

Let us explain why correspondin g rel ations ho ld amo ng the columns of A and E = rre f(A) . Consider a relatio n c 1ii 1 + c 2iJ 2 + · · · + c 5ii 5 = Ö among the columns of A. Thi s relation can altemati vely be written as

A

~\

+ c2ii2 + c

B ecause the number of pivot columns i the rank of A by definiti o n, we have the fo ll owing re ult:

ln general, we can express any column o f rref(A) th at does not contain a leading 1 a a linear combination of column that do co ntain a leading 1. lt may surprise you that the . ame relationship hold among the correspo nding columns of the matri x A. Verify the following for yourself :

j,

c 1ü,

i.e., it ca n be written a a linear combinati on of ü1, ü2 , ü5 a lone . We have show n that ii 1, ü2 , ü5 is a basis of im (A), and dim (im (A)) = 3.

by inspecti on:

-;

+ c2iJ2 + c3Ü3 + c4Ü4 + csils =

ww

ws

il- ~ ~ U !l IJ II

16 7

168 •

Sec. 3.3 The Dimension of a Sub pace of IR" •

hap. 3 Subspaces of IR'' and Their Dimen ions

169

The linear system

L = im(A)

v"

[~; l =b c"

has a uniq ue solution if (and only if) the n x n matrix

=

ker (A) L... : the plane perpendicular to L

Figure 2

geometri call y a fo ll ow : Con ider a lLnear tran Formati on

T (i) = A.l-

" to IR"' .

from

Note that n is the dimension of the do rnain IR" of the tran fo rmation T . The qu antiry dim (ker(A)) counts the number of dim en ions th at co ll apse as we perform tbe transformation, and dim (im (A ) ouots the number of dimen ion th at ur ive after the transformatioo. A an example cons ider the orthogonal projection ont o a line L in IR 3 ( ee Figure 2). Here, the dimen ion 11 of the domain i 3, two dimen ions coli apse (the kerne! i a plane) and we are left 'i ith the I-dimensional image L.

is invertible. We have shown tbe following re ult:

Fact 3.3.10

The vectors ü ~, Ü2, . . . , Ü11 in IR" form a basis of lR" if (and only if) the matrix

u" n - dim(ker(A)) = dim(im(A))

I

3 -2 = I

is io vertible.

+ Bases of ~

11

EXAMPLE 3 ... Are the fo llowing vector a basis of IR4 ? We know that any basis of IR" con ·ists of n vectors, since we have the standard basis e1, ••• , 11 • Conversely, how can we teil whether 11 given vector u1, ü2 , ••• , Ün in IR" fo rm a ba is? The Ü; fo rm a basis of IR" if every vector b in IR" can be written uniquely (Fact 3.2.7): as a linear combin atio n of the

e

v;

Solution

VII

c,l [::

We have to check whether the matrix

[~ i i ~]

170 •

h ap. 3 Subspaces of ~~~ and Th eir Dimen ions Sec. 3.3 Thc Dim ens io n of a Subspace of R11 is invertible . U ing te hno logy. we fin d that

l I] [~ ~ 8 9

rref

The ec tor -; fo rm a basi of IR

1 7

4.

= 14"

5 3

4

7.

.

un [~

3

5. [ 5

-4

Summary 3.3.11

l 2 0 0

~]

0

8.

li IO.l~

- I

- I

l ]

Con ide r an n x n matrix

i. A i invertibl e.

[~

14.

un

a unique o lu tio n ;:r, fo r allb in IR" .

iv. rank(A) = n .

17.

V. im (A) = IR" .

=

[0}.

li

vii. The Ü; are a ba is of IR". viii. The Ü; span IR" .

;J

11.

iii. LTef(A) = /".

vi. ker(A)

3

6 2 4

9 4 9

2 6

0

2

1 4

-3 -6 3

- 1

n

i] -i]

II

Then, the fo ll owing tatement are eq uivalent:

Ax = b has

2

171

In Exerci e I I to 20, fi nd a ba i of the image of the oiven matrix and th u ' determi ne the dimensio n of the image. U e paper and pen~i l.

A=

ii. The linear sy tem

2

6. [;

l

Fact ..).3 .4 part c and d), applied to \1 = IR" and Fact 3.3. 10 provide us wid1 new rules for o ur ummary. 9. [ l

3]



19. [

2 2

0

-4

2

-;]

u n [1 3 2

13.

3]

18.

-3 4 -6 - 1 3

ix. The Ü; are linearly independe nt.

u l]

15. [ 5

~]

2 2

~

12.

~

16.

l~

20. [

i

8 6 4 2

1 1 1 1

2

3 2 9 6

6 2 4

3 2

~~]

1 2

9

~]

~]

4 9

For eachmatri x in Ex rci es 2 1 to 23, find a basi o f the kerne! and a ba i o f the image. and thus determine their dimensions. U e Fact 3. 3.9 to c heck your \ o rk.

EXERCISES

GOALS

Use the concept of dimen ion. Apply the ba ic res ults a bout linearl y independent vectors in a d-dime nsional space (Fact 3.3.4). Find a ba is o f the kernel and the im age of a linea r transfo rm ati o n.

21.

In Exercise 1 to I 0, find a ba i of the ke rn e! of the o jven matrix , and thu s detem1ine . the dimension of the kerne!. Use paper and pe1~cil.

[-: 2

I l

23.

- I I - I

-2

-1 0 -2 - 1

l 2l]

- 2 0 3

3 4

[lI 22 2

1

22

.

2 0

4 0

3

2 1 3 - I

j]

172 •

Ch ap. 3 Subspaces of IR" and Their Dim ensions

Sec. 3.3 The Dimcnsi n of a Subspacc of JR" •

24. Find a basis of the ubspace of IR! 5 spanned by the vector below:

9 6 3

1

25. Pick a basis of

JR!5

0 4 5 5

8 4 9 1 5

3 2 4 1 2

3 6

1

2 3 2

from the fo llowing vectors (if pos ible) :

[~

4 9 I

'

5

2

4 3 2 1

4

I 2 9 1 8

26. Consider the matrix A =

[~

1 2 0 0

0 I

Find a linear transfo rmation T from l!i 3 to IR 4 such that ker(T) = {0) and im (T) = V. Oe cribe T by it matrix A .

32. Fi nd a ba is of the ubspace of to both

4

[-!]

which con i t of all vector perpendicular

and

m

33. A sub ·pace V of IR" i calJed a hype1plane if V i defi ned by the homogeneou linear eq uation

4 I 7 0

2 3 5 7 5

31. Le t ·V be the ub pace of IR4 defi ned by the eq uation

0 4 5 5

8

3 2 4 1

3 6 9 6 3

173

~ ].

where at least one of the coefficient c; is no nzero. \Vhat i the dimen ion of a hyperplane in "? Ju tify your an wer carefully. What i a hyperplane in JR2 ? What is it in IR! 3 ?

34. Co nsider a subspace V in IR!" whi ch is defined by m homogeneou li near eq uations: a, , x, G2 1X 1

Find a matrix B such that ker(A) = im (B).

+ a 12X2 + · · · + G J X + G22X2 + · · · + G2nXn 11

11

= 0 = 0

27. Determine wbetber the vectors below form a basis of ffi!4 .

What i the relation hip between the dim ensio n of V and the q uantity n - m? State your an swer a an inequa lity. Explain carefull y.

28. For which choi ce(s) of the constant k do the vectors below fo nn a basis of ffi! 4 ?

[~] · [~] ·

[!] ·

[~]

29. Find a basis of the subspace of IR! 3 defined by the equ atio n 2x,

+ 3x2 +x3 =

0.

35. Consider a nonzero vector ü in IR." . What is the dimension of the space of all vector in IR!" whi ch are perpendicul ar to ü?

36. Can you find a 3 x 3 matrix A such that im (A) = ker (A)? Expl ain. 37. Give an example of a 4 x 5 matri x A with dim(ker(A)) = 3. 38. a. Consider a linear Iran sform atio n T from IR 5 to JR3 . What are the po ible values of dim(ker(T ))? Expl ain. b. Co n ider a linear Iransformation T fro m IR 4 to IR 7 . What are the po - ib le value of di m(im (T))? Explai n. 39. We are told that a certai n 5 x 5 matrix A can be written as A = BC

4

30. Find a basis of the subspace of IR! defined by the equation

2x , -x2 +2x3 + 4x4

= 0.

where B i a 5 x 4 matrix and C i 4 x 5. E pl ai n how you know tb at not in vertible.

1,

17 4 •

hap. 3 Subspace

Sec. 3.3 The Dimension of a u bspace of ;Rn •

f JR" and Their Dimensio n

40.

on ider two ubspa e V a nd W of IR", w he re V i co ntained in W. Exp lain w hy dim(V ) ::: dim (W). (Thi s statement eem intuiti ve ly rathe r obv io u . Still, we can no t rely o n o ur inruition when dea lin g w ith IR".)

~n Exerci es 4~ to 5_2, we wi ll study the row space of a matrix. The ro w pace of an ~ n ~ n matnx A ts defined a · the span of the row vectors of A, i.e., the set of thetr hnear combi nations. For example, the row space of the matrix

41. Consider two ubspace V a nd W of IR", wh re V is co nta:ined in W. In E ercise 40 we Iearned Lh at dim (V) :=: dim (W). Show that if dim (V) = dim(W), then V = W. 42. Co ns id er a ubspace V of

17 5

2

3

I

1

2 2

IR" with dim(V) = n . Ex pl ai n w hy V= IR".

43. Cons ider two ub pace V and W of IR" , w ith

nW

= (Ö} . What is the re lat:ion h.ip between dim(V ) dim (W) and dim (V + W)? (For the de fin itio n of \1 + W, see E ·e rc i 3.-.50; Exerci e 3.2.5 1 i he lpful.) 44. Two ubspace V a nd W of IR" are call ed complemenrs if a ny vector in IR" can be expre. sed u nique ly as ; = ü + where ü is in V and Ü!_ is in W. Show that V and W a.re complements if (and on ly if) V n W = [0] and

x

w,

dim (V + dim (W) = n. 45. Consider some linearly independent vector ü,, Ü2•. .. , ü" in a subs pace V of IR", and some vector wh ich pan V. Show that there i 1, a bas is of V which con i t of oll the Ü; a nd some of the Wj. Hint: Find a ba is of the image of the matrix

w w_, ..., wq

the et of all row vectors of the form

a[l

2

3

4]+b[l

1] +c [ 2

2

2

3].

49. Find a basis of the row sp ace of the matrix be low. 0

1 0

0

0

l

[ 0 0

0 0

0 0

E=

2 0OJ 3 0

l 0 0

50. ~onside~ an m x n matrix E in reduced row echelon fo rm . Using your work 111 Exerc1se 4_9 as a guide, exp lain how you can find a bas i of the row pace of E. What 1s the relationship between the dimen ion of the row space and the rank of E?

51. Con sider an arbitrary m x n matrix A.

a. What is the relationship between the row spaces of A a nd E = rref(A)?

A=

Hint: Examine how the row space is affected by e lementary row operation s. b. What is the relationship between the dimension of the row space of A and the rank of A ·.

Wq

46. Use Exercise 45 to construct a basi of IR4 which co n i t of the vector

1

[il [il

A= [

in IR4. 47. Con ider two ubspace V and W of !R". Show that and some of the vectors

e,' e2, e3, e4

dim (V) + dim (W) = dim (V

n W) + dim(V +

v w iv

w

W)

ivq

48. Use Exercise 47 to answer the following question : If V and W are subspaces of IR 10 , with dim V) = 6 a nd dim(W) = 7 , w hat are the poss ible dime n io ns of V

n W?

l

1

2 2 2 1 2 3 1 3 5

53. Cons ider an n x n matrix A . Show that there are scala.r c0 , c 1, ••• , c" (not all zero) suchthat the matrix co ln + c 1 A + c2 A2 + · · · + cnA" is noninve rtible. Hint: Pick an arbitrary nonzero vector in lR". The n the n + 1 vector- ü A ü, A 2 v, .. . , A" ii will be linearly dependent. (Much more i true: There ar~ scalar co, c,, . .. , c" uch that co l" + c 1A + c2A 2 + · · · + 11 A" = 0 . You are

v

(for the defi nitio n of V + W, see Exerci. e 3.2.50). Hin/: Pi ck a basis ü" ü2 , .•• , Üm of V n W . Using Exercise 45, co n truc t bases ü 1, Ü2, ... , iim. ü,, ii2, ... , of V and ü 1, ü2 , ... , ii 111 • of W . 1, w 2 , ... , Show that ü 1 , 1~ 2 , . . . , Üm , ii , , ii2, . .. , 17 , 1 , 2, . .. w11 is a basis of V+ W. (Demonstratin g linear independe nce is a bit tricky .)

v"

52. Find a basis of the row space of the matrix below.

not asked to demoostrate this Fac t here.)

54. Con ider the matrix l A = [2

-2]

1 .

Find calars co, c 1, c2 (not all zero) uch that the mau·ix c0h + c 1A noninvert:ible (see Exerci e 53 .

+ c1 A 1 i

17 6 •

l

Ch ap. 3 Subspaces of R11 and Their Dirnen io ns

55. Con ider an m

has an invertibl

n m atri x A. Show tbat the rank of A is m if (and ? nl y if) A 111

x m ubmatri · i.e .. a matri x obtained by de le t1ng n -

1n

col umn of 111 56. An 11 x 11 matrix A i called nilporenr if A = 0 fo r some positive integer m . Examp le are trian gular matri ces who e entri o n th di agona l are all 0. Con ider a nilpotent n n man·i x A, and choo e the rn a ll~st nu mber m ' UCh that A"' = 0. Pick a ve tor - in IR" uch that Am- li) =!= 0. Show that. the vector v, Av, A 2 ü, ... , A"' _ 1 _ are linearly inde_pendent. Hinr : Co nsider a re latio n co - + c 1 A v + c2 A2 - + · · · + c", _ 1Am - I = 0. Mul tiply both ides of the equation with Am-I to show that co = 0. ext, how that c1 = 0, etc. e the re ult demon57. Co nsider a nilpotent n x n matrix A ( ee Exerc i e 56). strated in Exerci e 56 to how that A" = 0. 58. Explain why yo u need at lea t d vecto rs to span a space of d imension d

ORTHOGONALITY AND LEAST SQUARES

v

(Fact 3.3 .4b).

59. Prove Fact 3.3.4d: If d ve tor span a d-dimensional space, they fo rm a ba is of the space.

60. If a 3 x 3 matrix A repre e nt the orthogo nal proje tio n onto a plane in !R

3

,

what is rank (A )? 61. Consider a 4 x 2 matrix A and a 2 x 5 matrix B. a. What are the po sible dime nsio ns of the kerne! of A B ? b. What are the pos ible dimensions of the image of A B ? 62. Tru e or f alse? There is a 7 x 6 matrix A with rank (A ) = 4 and a 5 x 7 m atrix B with rank B ) = 4 such that BA is the zero matti x.

ORTHONORMAL BASES AND ORTHOGONAL PROJECTIONS Not all bases are created equal. Wben working in a plane E in 11(3 , for example, it is particularly convenient to use a basis ü1, ü2 consisting of two perpendicular unit vectors (see Figure 1). In thi s section we will develop some vocabulary to deal witb uch bases. Then we will use a basis consisting of perpendicular unit vectors to study the orthogonal projection onto a subspace. In the next section we will construct such a basis of a sub pace of IR" . First, we review some basis concepts.

63. Co nsider an m x n matri x A and an n x p matrix B. a. Wbat can you say abo ut the re lationship between rank (A) and rank(A B)? b. What can you say abo ut the relationship between rank( B) and rank(A B)? 64. Consider two m x n matrices A and B . Wbat can you say abo ut the relationship

between tbe quantitites rank(A) , rank(B), and rank(A + B )? 65. Consider a 4 x 5 matrix A in reduced row echelon form. Suppose A has the same kerne! as the matri x

2 0]

0 1 0 0 0 1 3 0 B= 0 0 0 0 1 . [ 0 0 0 0 0 Show that A = B. Hint : Think about relations among the columns of A . 66. Consider two m x n matrice A and B in reduced row ecbelon form. Show that if A and B have the same kernet , then A = B. U e Exercise 65 as a guide. 67. Suppose a matri x A in reduced row eche lon form can be obtained from a matrix M by a sequence of elementary row Operations. Show that A = rref(M) .

Definition 4.1.1

Orthogonality, length a. Two vectors v and

v·w=O.

win IR" are called perpendicular or orthogonal

1

if

b. The length (or norm) of a vector v in II(" is llilll = ~c. A vector ü in II(" is called a unit vector if its lengrh is I, that i , IJ ÜII = 1, or ii · ü = 1. lf

vi

a nonzero vector in II(" , then u- = - 1

IJÜIJ

v-

is a unit vector (Exercise 25b).

1The

two terms are synonymous: "perpendicu lar'" comes from Latin and .. orthogo nal'" fro m G ree k.

177

178 •

Sec. 4. 1 Ortho normal Bases and Orthogona l Projections •

hap. 4 Orthogonality and Lea t qua res

1 79

The fo iJ ow i.ng properties of orthonormal vectors are often u eful :

Fact 4.1.3

a. Orthonormal vectors are linearly independent. b. Orthonormal vectors ü1, • •• , Ü11 in IR" form a basi of lR11 •

L...-----~ l' t

Proof

a. Con sider a relation

Figure 1

Definition 4.1 .2

among the orthonormal. vectors ü1, ü2 , ..• , Ü111 in IR.11 • Let us form the dot product of each side of thi s eq uation with ü;:

Orthonormal vectors The ectors ü1, ü2 ... , -~~~ in IR11 are called orth onom w / if they are all unit ector and orthogonal ro o ne another:

ifi = J, if i =/= j.

Becau e the dot product i di tri butive (see Fact AS in the Appendix), c , (ü, · Ü;)

EXAMPLE l ... The ectors

e e1 • . .. , e 1•

a] [-sina]

EXAMPLE 2 ... For any scalar a the vector [ co. , IO Ct'

CO Ct'

.

111

(Üm · Ü;) =

0.

We know that Ü; · Ü; = 1, and al1 otber dot products are zero. Therefore, c; = 0. Since this bolds fo r aU i = 1, . .. , m, ü follows that the vector Ü; are linearly independent.

in ~" are orthononnal.

11

+ c2(Ü2 · Ü;) + · · · + C;(Ü; · Ü;) + · · · + C

are orthononnal ( ee F1 gure 2 .

b. Thi fo ll ows from part a and Summary 3.3.11 (any n linearly independent vector in ][{II form a ba is of lR"). •

....

EXAM PLE 3 ... The ectors -

u, =

4

l

l /2 1/ 2

~~

J .

l

l/2J l/2 - 1/ 2 , - 1/ 2

V3

=

l

l/2J

- 1/2 1/ 2 - l /2

Definition 4.1.4

Orthogonal complement Consider a subspace V of ][{". The orthogonal complement V .l of V is the set of those vectors ,r in IR" which are ot1hogona1 to all vectors in V:

v .L

in IR are orthonormal (verify this) . Can you find a vector ü4 in 1R4 such that all Lhe vectors ü1, 2 , 3 , Ü4 are orth onormal (Exercise 16)? _...

= {x in ][{17 :

ü · ,t = 0, for all

v in

V}.

v v

v

If we have a basis Ü1, 2 • . . . , Ü111 of V , then v.1 i the et of those vector whi h are orthogonal to all the Ü; (Exercise 22):

Figure 2 [

-sin a

cos (l

x in IR"

J

V .l cos a ] [ stn a

=

{- . x tn

1[])11

JN>.

:

u; · x-

. 1. = = 0 , for

I , ... ,

m J.

For example, a vector .~ in IR 3 is orthogonal to all vectors in a plane E if (and only it) i.t is orthogonal to two vectors ü1, ü2 that form a ba ·i of E (see Figure 3). In Figure 4, we sketch the orthogonal complements of a line L and o f a plane E in JR 3 . Note tbat both L.l and E.l are subspace of IR 3 .

180 •

Chap. 4 Orthogonality and Least Squares

181

Sec. 4.1 Ort honormal Bases and Orthogonal Proje tions • X

x-w

Figure S

X

V

This vector ÜJ is cal led the orthogonal projection of; onto V, denoted by proj 11 ; Jn Fact 2.2.5 we stated the fo nn ula

Figure 3

.

x (v1·x)v 1.

proj 11 =

Now co nsider a subspace V of ii{" with arb itrary dimension m . Suppo e we have an orthonormal bas is Ü1 ~, .. . , Ü111 of V . Con ider a vector _-y: in ". 1 it still pos ible to fi nd a vector ÜJ in V such tb at ; - i in v .L ? If uch a ex i t we can wri te

w

-

-

0

0

for some sca.lars c; (since ÜJ is in V). It is required that

X-

E

L""

Figure 4

Proof

ÜJ

= X-

CJ Ü1 -

CzUz -

· ·· -

C111

Vm

be perpendicul ar to V, that is, perpendicul ar to ali the vectors v;:

V;. (x -

Fact 4.1.5

w

w) = Ü;. (x -

=

If V i a subspace of iR", theu it orthogonal complement v .L is a sub pace of lR" as weil.

x-

C;(Ü; · Ü;)- · · · - C111 (Ü; · Üm)

v; .x. wm

Thi s equation holds if (and only if) c; = We have sbown that there is a unique namel y,

w=



(vl .

V uch that

,r - w is

in V .L ,

x)vl + ... + (Ü", ..r)Üm .

Let us summarize.

Fact 4.1.6

v

C;V; - ... - CmVm)

v; ·-r -CJ(U; · Ü1)- · · · -

v

As mentioned at the beginning of thi s section, it is often con venient to work with an orthononnal basis of a subspace V of iR" (an orthonormal basis is a bas is of V th at consists of orthonormal vectors). In the next secti on, we will see how such an orthonormal bas is of a subspace can be fo und. Assuming we have an orthon ormal basis of a subspace V of lR", we will now examine how we can find the orthogonal projecti on of a vector in IR" onto V. Before we deal with the general case, Jet us brieft y revi ew the case of a onedimensional subspace V of !R". In thi s case, an orthonormal bas is of V consists of a single unit vector, 1 • In Secti on 2.2, we fo und that for any vector ; in lR" there ÜJ is perpendicul ar to V (see Figure 5). is a unique vector ÜJ in V such that

VJ - ... -

= Ü; ·X - C; = 0

We will verify that v .L is closed under scalar multiplication and Jeave the verification of the rwo other properties a Exercise 23. Consider a vector win v .L and a scalar k. We have to show that kw is orthogonal to all vectors in V. Pi ck an • arbitrary vector v in V. Then (kw) · v = k(w · v) = 0, as claimed.

+ Orthogonal Projections

CJ

Orthogonal projection

v v

v

Consider a subspace V of lR" with orthonormal basis 1, 2 , .•. 111 • For any vector -r in IR" there is a unique vector w in V such that _-y: - w is in v .L. This vector is called the orthogonal projecrion of; onto V, denoted by proj 11 .r. We bave the fo rmula

w

proj vx = Cv1 · ,r) üJ + · ··+(um

· x)vm.

The transformation T (x) = proj 11 ,r from lR" to lR" is linear. We Jeave the verifi ca üon of rhe last assertion as Exerci e 24.

182 •

Chap. 4 Orthogooality aod Least Squares

Sec. 4.1 Orthono rmal Ba es and Orthogonal Pro ject ions •

183

The n

· - c- -)- c- - -

proJ vX = VJ·X v1 + Vz·x )vz = 6-v1+ 2v- 2 =

To check this ans wer, verify that ,'

x-

proj

vx i

[ ~] [ -~] _i [~] ~

[:l

x = Cii1 · x)ü1 for all

Note that proj vi i tbe sum of aJ I vectors (Ü; · ,t) Ü; repre e nti ng the orthogonal projections of onto tbe Lines spam1ed by the vectors Ü;. This i perhaps a good way to memorize the formula for proj 1 For example, projectiog a vector in 1~3 onhogonally oot.o the x 1-xz plane amounts to the same as projecting it onto the x 1-axis, then onto the xz -axis, and then adding the resultant vector (see Figure 6).

x

x in IR". x

x

- 1

Figure 7

Solution The two columns of A form a basis of V. Since they happen to be orthogonal we can construct an 01thonormal basis of V merely by dividing these two vectors by their Iength (2 for botb vectors):

l/2] - -l/2 l/2] l/2 ' [1/2 [ - ~~; .

+ ... + (Ü/1. x )Ün

Thi means that if you proj ect .r onto all the lines panned by the basi vector Ü; and add the resullant vectors, you get the vector back. Figure 7 illustrates this in the case n = 2. Wh at is tl1e practical ignificance of Fact 4.1.7? Whenever we have a basi ü1, ... , Ü11 of IR", any vector in ffi. 11 can be expressed uniquely as a Linear combination of the Ü; , by Fact 3.2.7 :

1 1] [i -~ .

Find proj vx, for

+ ... + Ciin · ,t)ün ,

Consider an orthonormal basis ü1, . . . , Ü11 of IR" . Then

for aJ I

EXAMPLE 4 ~ Con s ider the subspace V= im (A) of ~4 • where 1

x

x in IR" . ; = (Ü J . x)iiJ

,x.

-

.

What happens wheo we apply Fact 4.1.6 to the su bspace V = lR" of lR" with = ,r , for all in lR" . Therefore, orthonormaJ ba is ü~. ü2 , . . . iin? Clearly, proj

Fact 4.1.7

VJ =

~

....

,'

[~ ]

A =

=

perpend icular to both ü1 and ii2 .

vx

Figure 6

+

1/2

Vz =

0

184 •

Chap. 4 Orth ogonality and Least quares

Sec. 4.1 Orthonormal Bases and Orthogo nal Pro jections •

To fi nd the calars C; . we need to so lve a lin ear ys te m, w hi ch may invo lve a fa ir amount o.f compu tati on. However. if the ba is Ü1• .. • , ü" i o rtho no rmal, we can find the ·; much more eas ily: C; =

~ [ ~] a

Figure 9

= 3l [

I '

; ]

X

a linear combination

of

Ui



V;· X

EXAMPLE 5 .. By u ·ing paper and pencil , expres the vectm X

-

translated

185

Fact 4.1.8

ih- =~[-~] 3 ' 2

Theorem of Pythagoras Con ider two vectors

x and y in ffi:

11



The eq uatio n

+ .YII 2 = llxll 2 + II.YII 2 y are orthogonal (see Fig ure

11x holds if (and onl y if)

x and

9).

Solution Since

v1• Ü] . v3 i

an orthononnal basi of JR 3 , we have

.Proof

The veri fica tion is straightforward : 11 -r

+ .Y II 2

=c.x +

)i) · c.x

+ )i)

= llx ll2 + II.YII2

+

= .x + 2c.x .

.'i · )i) + .Y . .Y if (and only if) .X· y = 0

=

llxll

2

+ 2 (.'i . )i) + 11511 2 •

Now we can generali ze Example 6.

From Pythagoras to Cauchy EXAMPLE 6 ~ Co nsider a line L in JR3 and a vector

x in

JR 3 . W hat can you say about the

relation hip between the length of the vectors

x and projJ ?

Fact 4.1.9

Consider a subspace V of IR" and a vector

x in lR

11



Then,

ll proj v.'ill :::: ll.r ll . The Statement is an equ al ity if (and o n.ly if) .~ is in V .

Solution Appl ying the theorem of Pythagoras to the shaded right tri angle in F ig ure 8, we find that Jl proj ur ll :::: ll xll . The statement is an equality if (and on ly if) i in ~

x

Proof

We can wri te .X = proj v.X (see Fi gure 10).

+ (x-

11-rll 2 =

Does this inequality hold in higher-dimensio nal ca es? We have to examine whether the theorem of Pythagoras holds in IR".

lt fo llows that

proj v.r) and appl y the theorem of Pythagoras

II proj 1, .rll 2 + 11-r - proj 11 .r ll 2



II proj 11 ,t ll :::: llxll , a claimed.

Figure 10

Figure 8

x - proj .r 1,

L

(lranslated)

186 •

Sec. 4.1 Ort honormal Bases and Orthogonal Project ions •

Chap. 4 Orthogonality and Least Squa re

18 7

y

\1

Figure 12

Figure II

y in For example, Iet V be a one-di men ional sub pace of ~~~ (nonzero) vector )i. We introduce tbe unit vector

panned by a

cos (X =

1 u = IJ )i ll y

in \1 (see Figure 11 ). We k:now that proj v.r for any .r in

~~~ .

This formula allows us to find the angle between two nonzero vector ~3 :

X· y

-=-- :-

a = arcco

or

lli ii ii.Y II

x and

x ·y

11-tii iJy iJ .

In lR", where we have no intui tive notion of an angle between two vecto r , we ca n use this form ul a to define the angle:

Definition 4.1.11

= Cü . x )ü = ~ CY. x)y , IJ y ll-

Angle between two vectors Consider two nonzero vec tors xa nd vectors i defined a

Fact 4.1. 9 teils us that

y in !R".

The angle a between the e

X· )l

ll.ill:::: II proj vxll =

1[ ; 112 cY · x)jil\ = 11

ll; ll 2 15i · .rl ll5i ii-

To justify rhe last step, note that llkiill = lk l ll vll , fo r all vector scalars k (Exercise 25a). We conclude that

v in

a

!R" and all We have to make sure that

lx · .Y I _ m s llxii-

X·y arccos _ _ llxiiii.YII

Multiplying this equation by II .YIL we find the follow.i ng useful inequality:

Fact 4.1.10

= arccos lli iiii.YII .

is defined, i.e., that X·)l

Cauchy-Schwarz inequality 1 lf

x and

IJxl l ll)ll

ji are arbitrary vectors in !R", then

is between -1 and 1 or, equivalently, that

Ii · .Y I s lli ii ii.Y IIThis statement is an equality if (and only if)

x and ji are para lleL

Consider two nonzero vectors x and ji in ~ 3 . You may know an expression for the dot product ji in terms of the angle a between the two vectors (see Figure 12): x · ji = llxiiii.Y II cosa.



!11,:11·11~11/- 11ir11· 1 ~11 s L This follows from the Cauchy-Schwarz inequality, l.r · _YI s ll.i ii ii.Y II (Fact 4 . 1. 10): divide both ides by ll,riiii.YII-

EXAMPLE 7 .... Find the angle between the vectors

and 1

Named after the French mathcmatician Augustin-Lou is Cauchy (1789- 1857) and the German math emat ician Herman n Amandus Schwarz ( 1843- 192 1).

i =

[iJ

188 •

Chap. 4 Orthogonality and Least Squares

Sec. 4.1 Orthonormal Bases and Orthogonal Projections •

Solution X·V

a

[

Ca ncer rate (de viation from mean)

X

= arccos ll.ill ll._vll = arccos "]":2 = 3

189

United Statcs



Here is an application to statistics o:f some concepts introduced in this section. 10



Great Britain

Consider the meat consumption (in grams per day per person) and incidence of colon cancer (per 100,000 women per year) in various industrialized countries: - 100 Country

Meat consomption

Cancer rate

Japan Finland Israel Great Britain United States

26 101 124 205 284

7.5 9.8 \6.4 23.3 34

Mean

148

18.2

e

100

I rael e Finland e

-10

Japa n

Figure 13

Can we detect a positive or negative correlation 1 between meat consumption and cancer rate? Does a country witb high meat consumption have high cancer rates, and vice versa? By "high" we mean "above average," of course. A quick Iook at the data showssuch a positive correlation: in Great Britain and the United States, both meat consumption and cancer rate are above average. In the three other countries they are below average. This positive correlation becomes more apparent when we Iist the data above as deviations from the mean (above o.r below the average).

A positive correlation is indicated when most of the data points (in our ca e, all of them) are 1ocated in the fir t and third quadrant. To process these data numerically, it is convenient to represent d1e deviation for both characterist:ics (meat con umption and cancer rate) as vectors in JR5 :

X= Country

Meat consumption (deviation from mean)

Cancer rate (deviation from mean)

Japan Fin land Israel Great Britain United States

-122 - 47 -24 57 136

- 10.7 - 8.4 - 1.8 5.1 15.8

Perhaps even more informative is a scatter plot of the deviation data (Figure 13).

1

We are using the term "correlation" in a colloquial, qualitative sense. Our goal is to quantify this

tenn.

Meat consurnption (de viat ion from mean)

- 122 47 - 24 57 136

y=

-10.7 8.4 - 1.8 5.1 15 .8

We wiJl call these two vectors the deviation vectors of the two characterisücs. In the case of a positive correlation. most of the cotTesponding entries x,-, y; of the deviation vectors have the same sign (both positive or both negative). In our example, thi s is the case for al l entries. This means that the product x; y,- will be positive most of the time; hence, the sum of all these products will be positive ....... But thjs sum is simply the dot product of the two devi ation vectors. Still using the term "correlation' in a colloquial sense, we conclude the following: Consider two characteristics of a population, with deviation vectors ,( and )i. There is a positive correlation between the two characteristics if (and only if) _\: · .Y > o.

190 •

Chap. 4 Orthogonality and Least Sq uares

Sec. 4.1 Orthonormal Bases and Orthogonal Pro jections • Obw se angle

Righ1 angle

Acute angle X

r= I

r= - 1

(a)

(b)

X

.r

Figure 15 (aI y= mx, for positive m. (b) y= mx, for negative m.

.Y r=l

y

y

w Figure 14 (o) Positive correlation:

191

~

x. y>

y

Y;= 111.X;

r=- 1

~

0. (b ) No correlotion:

x· y= 0. (c) Negative correlation: x· y < 0. X

A positive correlation between tbe characteristics means that the angle cx between the deviation vectors i le tban 90° (see Figure 14). We can u e the cosine of the aDgle a between .:r and :li as a quantitative n1easure for tbe correlation between the two characteristics.

Definition 4.1.12

Figure 16

Correlation coefficient

The correlation coefficient r is always between -1 and 1; the cases when r = 1 (representing a perfect positive correlation) and r = -I (perfect negative correlation) are of pruticular interest (see Figure 15). In both cases, the data points (x; , y1) will be on the straight line y = mx (see Figure 16).

The correlation coefficient r between two characteristics of a population is the cosine of the angle a between the deviation vectors .X and y for the two characteristi cs:

r = cos(a)

X·y

= - -Jixii ii.YII EXERCI S ES

In the case of meat consumption and cancer, we find that

GOALS Apply the basic concepts of geometry in IR" : length angle..s orthogonality. Use the idea of an orthogonal projection onto a subspace. Find thi s projection if an orthonormal basis of the subspace is given.

4 182.9 ,. ~ -19- 8-.5-3-. -2 1-.5- 3-9 ~ 0 ·9782 · The angle between the two deviation vectors is arccos(r) ~ 0.2 1 (radians) ~ 120. Note that the length of the deviati on vectors is inelevant for the correlation : if we had measured tbe cancer rate per I ,000,000 women (instead of 100,000), the vector y would be 10 times Jonger, but the correlati on would be the same.

Find the length of each of the vectors ü in Exerc ises 1-3.

c=z:ss

192 •

hap . .f Orthogon ali tya nd Least qua re ·

Sec. 4. 1 Orthono rmal .Bases and Orth ogonal Proj ection •

Find the anale a between each of the pairs of vectors ü and

"'

4. I; = [ :

6. ü

lv

v in Exercises 4-6.

n;

= [ 1

l.

~ ~i ~ m [

19 3

of the weight. The strin g of the pulley line ha the ame ten ion everywhere ; hence, the fo rces F2 and F3 have the same magnitu cle a F1. A su me that the magnitude of each force is 10 pounds. Find the angle et so tbat the magni tude of the force exerted on the leg is 16 pound . . Round your answer to the nearest degree. (Adapted fro m E. Batsehelet lntroductiort to Ma thernatics fo r Life Scientists, Spri nger, 1979).

v

Foreach pair of vector 1;, li sted in Exerci es 7-9, detemline w he the r the angle bet:ween ii and is acute. obtuse, or rigbt.

v

et

7. ü = [ _; ]

v= [ ~]

;

9.ü~L1J , ~[il 10. For which choice( ) of the constant k are the vectors

perpendicular?

14. Leonardo da Vinci and the resolution offorces. Leonardo (1452- 1519) a ked hirnself how the weight of a body upported by two strings of different lengtb, is apportioned between the two string .

11. Con ider the vectors

in IR".

v.

a . For n = 2, 3, 4 find the angle between ü and For n = 2 and 3, repre ent the vectors graphically. b. Find the Iimit of this angle as n approaches infinity. 12. Give an algebraic proof for the triangle inequality

8

E

A

Ionger Lring

shoner string

weig ht

llii + wll :::: llii ll + llwllDraw a sketch . Hint: Expand Cauchy-Schwarz inequality.

llii + wll 2

=

(ii +

w).

(ii + w). Then use the

13. Leg traction : The accompanying figure shows how a leg may be stretched by a pulley line for therapeutic purposes. We denote by F1 the vertical force

Three force are acting at tbe point D : the ten ions F 1 and and the weight W. Leonardo believed that

IIFdl IIF2II

EA

EB

F~

in l.he Lring

194 •

Chap. 4 Orthogonality and Least Squares

Sec. 4.1 Orthonorma l Base and E

B

A

rthogonal Projection •

195

i· i6 , ...)

a. Check that ,r = ( I . !, ! , is in e2 , and find llxll . Reca llth formul a for the geo metri c seri es: I + a + a 1 + a 3 + · · · = 1/( I - a , if - I < a < 1. b. Find the ang le between ( I , 0 , 0 , . . .) and ( I , ~ .!· c. Givc an cx.ampl e of a seq uencc (x 1 , .x2 , . . . that c nvergcs t (th at is X 11 = 0) but does 1101 belang LO f2. lim 11 d. Let L be the subspace ofe 2 spanned by ( 1. 2I , 4I , 8I , . . . . p·1nd tI1e 11 hogona I projecti on of ( I , 0, 0, ... ) o nto L. . The Hilbert pace e2 was in it ial ly used mo tly in phy ic : Werner !"1e1 enberg ' s formulation of quantum mechani c i in term of f2. Today, th 1 pace i used in many other applicat ion , including econom ics ( ee, fo r examp1e, the work of the Catalan econorni t Andreu M as-Co lel l).

k· .·..

D

w Wa he righr? Source: Les Manuscrits de Uonard_ de Vin ci, publi hed by Ravai sson-Mollien, Pari 1890.) Hi11ts: Re olve F 1 into a horizonral and a ertical component" do the ame for F2. Since the y tem i at re t, the equation F1 + F2 + ~V = Öhold . Expres the ratio and

19. For a Jine L in JP. 2 , draw a ketch to interpret the foll win g tran Formation geometrical ly:

EA EB

a. T(x) = ; - proh.X . b. T (x) = x- 2 proj'"i. c. T; = 2 proj'"; - i . 20. Refer to F igure 13.

in terms of a and ß. u ing trigonometric function , a nd compare the re ult . 15. Con ider the ector

~

Cancer rate (deviation from mean

- lil

Un ited States



4

Find a basis of the sub pace of 1!( con isting of all vectors perpendicular to ü. 16. Con ider the vectors

-= VI

10

1/2j2l 1 1/2 '

lj2l - = -1/2 1/2

r

VJ

1/2

4

ru



Great Britain

Meat con umption (deviai io n from mean

- 1/2

4

in Ii{ . Can you find a vector u4 in 1!( such that the vectors orthonormaJ? If o, how many such vectors are there? 17. Find a basis for w.L where

1

,

i:i2 , iJ3 ü4 are



- 100

100

Israel

..



Finland

- 10

Japan

18. Here i_s an _"infinite-dimensional" version of Euclidean space: in the space of all mfi~Jte sequences, consider the subspace e2 of ". quare-summable" sequences, I.e., those sequences (x1, x2, . .. ) for which the infinite series xf + + · · · converges. For .X and ji in e2 we define

xi

II.XII = V/x +x 22 + · · ·, 2

1

(Why does the series x1 Y1

'-------

+ x2Y 2 + ...

x- . y- -x - I y I + x2Y2 + converge?)

.• ..

The Ieast-squares /in e for tbe e data is the line y = mx that fit the da_ta best, in rhat the um of t.he square of the verticaJ distances between the Im and the data point is minimal. We want to minimi ze the sum (mx 1 - Y1 )

2

+ ( m x2

-

n)

2

+ · · · + (mxs

2

- Ys) ·

196 •

Chap. 4 Orthogonality and Least Squares

Sec. 4.1 Orthonormal Basesa nd Orth gonal Projections •

19 7

25. a. Con ider a vector ü in IR", and a ca lar k. Show that V = /IJ.X

llkvll = lkl llüll . b. Show th at if ü is a non zero vector jn lR", then ü = 1 ~ 1 ü is a unit vector.

11LY; --

!}

- - -----

:" :

:~ ] onto the sub pace of 111

tn.r;- Y;

... - -- --- -

26. F;nd the orthogonal pmjecüon of [

3

spanned by

(x;.)';)

[-H

X;

and In vector notation, to minimize the sum mean to find the scalar m such that

27. Find the orthogonal projection of 9e1 onto the sub pace of JR 4 spanned by

llmx - .YII 2 is minimal. Arguing geometrically, explain how you can fi nd m. U e the accompanying sketch, which is not to scale.

and

28. Find the orthogonal projection of

Find m numerically, and explain the relationship between m and the correlation coefficient r. You may find the following information helpful :

x·y=

4182.9,

11x11

~

198.53,

II.YII

onto the ub pace of JR 4 panned by

~ 21.539.

To check whether your solution m is reasonable, draw the line y = mx in the scatter plot reproduced at the beginning of thi exercise. (A more thorough discussion of Ieast-squares approximations will follow in Section 4.4.) 21. Find scalars a, b, c, d, e, j, g uch that the vectors

Ul [=lJ

[!J.

29. Consider the mthonormal vectors VI, Li2 , Li 3 , Ü4, ü5 in JR 10 . Find the length of the vector

x

30. Consider a subspace V of I!(", and a vector in I!(". Let is the relation shjp between the following quantitie ?

are orthonormal. 22. Consider a basi Üt, ~ •... , ü", of a subspace V of IR". Show that V .L = {

x in I!(" : Ü; · x ~ 0

II.YII 2

for all i = I , . .. , m} .

23. Complete the proof of Fact 4.1.5: the orthogonal complement space V of IR" is a subspace of lR" as weil.

v.t

of a ub-

24. Complete the proof of Fact 4.1.6: orthogonal projections are linear Iransformations.

and

.v =

projvx· What

y ·x

31. Con .ider the orthonormal vectors VI, Ü2 •... , Ü111 in IR" , and an arbitrary vector in lR". What is the relationship between the two quantities below?

x

p=

- 2 ( -VI ·X)

+ cV2-

-)2 + · · · +

·X

When are the two quantities equal?

c-

_)ry

V", ·X -

and

198 •

Sec. 4.2 Gram-Schmidt Processand QR Factorizati on •

Cha p. 4 Orthogonality and Least quares

32. Con ider two vector ii 1 and ü1 in IR!" . Fonn the matrix

V

199 V

For wluch choice of ü1 and ii2 i the mattix G invertible? 33. Con ider a plane E in JR! 3 witb orthonormal ba i ü,, ii2. Let ."t be a vector in JR!3 . Find a formula for the reflection R(x ) of -~ in the plane E . 34. Con ider three unü vectors ü1• ü2 , ü3 in IR!" . We are told that ii , · ü:! = ii, · ii 3 = 1/2. What are the po sible value of Ü2 · Ü3? What cou ld the angle between the vector ii2 and ü3 be? Gi e examples; draw ketche for the ca e n = 2 and n = 3.

35. Can you find a line L in IR!" and a vector

x in IR!" such thar

0

Figure 1

V

V

is negative? Explain , arguing aJgebraically.

0

0

2

Figure 2

GRAM-SCHMIDT PROCESS AND QR FACTORIZATION In tbe last section we have seen that it is sametime useful to have an orthonormal basis of a subspace of IR!" . Now we how how to construct such a basis. We pre ent an algorittun that allows us to convert any ba ·is ii1. ü2, . . . . Ü111 of a subspace V of IR!" into an orthononnal basis ÜJ 1 ÜJ2 .. .. , W111 of V . Let us first think about simple cases. If V is a line wi th basis ii 1, we can find an orthonormal basis w1 simply by dividing ii 1 by its length:

When V is a plane with basis ii 1 ü2 , we first divide the vector ii 1 by its length to get a unit vector

(see Figure 1). Now comes the crucial step: we have to find a vector in V orthogonal to ÜJ , (initially, we will not insist that th is vector be a unit vector). Recalling our work on orthogonal projections, we realize that ii 2 - projL ii 2 = ii2 - (w 1 . ii 2)ÜJ, i a natural choice, where L is the line spanned by 1• See F.igure 2.

w

The last tep is easy : we divide tbe vector il2 - projL ii2 by its ]ength to get the second vector 2 of an orthonormal ba i (see Figure 3).

w

Figure 3 V

V

L 0

200 •

Sec. 4.2 Gram-Schmidt Process and QR Factorizatio n •

hap. 4 Orthogonality and Lea t q uare ·

201

Now that we l<Jlow how to fi nd an o rthonorm al basis of a plane, how wo ul d we proceed in the case of a three-dimensio nal subspace V of lR" with basis v1, ü2. Ü3? We can first find an orthonorm al bas is w1 , 2 of the pl ane E = span(ü 1 , ~) as above. T he n, we consider the vector ü3 - proj Eü3 and di vide it by its Jength to get W3, as show n in Figure 4. (How do we know that ü3 - proj Eü3 is no nzero?) Reca ll fro m Fact 4.1.6 that

EXAlVIPLE 1.... Fin d an orLhonorma l ba i of the ubspa e

w

of

IR?.\

with bas is

Ü3 -

proj Eü3 = Ü3- (w 1 · Ü3) w 1 -

w2· Ü3)ÜI2.

U ing the same method, we can construct an o rtho normaJ ba i of any ubspace of lR". Unfo rtunate ly, the notatio n gets a bit heavy in the general case.

The Gram-Schmidt process 1

Algorithm 4.2.1 Solution

Con ·ide r a subspace V of lR" with bas is ü1, ü2 , • •• , Ü111 • We wish to construct an orlhonormal basis w1, ÜJ2, . .. , 111 of V . Let Ül 1 = 0/llv lii)ÜI. As we define fo r i = 2, . .. , m, we may ass ume that an orthonormal basis üi 1, ÜJ2 , . .. , ÜJ;- 1 of V; - 1 = span (ÜJ ü2, .. . , Ür - I) ha already been found . Let

U ing the termino logy in trodu ced above, we find the fol lowing re ul t

_ Wi

1I //22 ]

1 _

=

ll vdi UI

w

w;

[ l/2

= 1/2 .

I

W;

=

-

IIV; - proj V,_, v;ll ( U;

-

proj v,_,Ü;) .

Note that

u;- proj v,_,v; = ü; - (w 1 · v;)w l - · · · - (w; - 1 · v;)w; - ). by Fact 4. 1.6. Beca use the computati on of ü2 - proj Lü 2 is rather messy and an error i ea y to make, it is a good idea to check that the vector yo u fo und is indeed perpendicular to w1 (or, equi valently, to u1) :

If you are confused by these fo rmul as, go back to the cases where V i a two- or three-dimensional pace. Figure 4

F inall y,

We have found an orthonormal basis of V:

Wi

=

1/2 1/2 ] [

1/2

1/ 2

'

E

E

E

' amed aftc r the Danis h ac tuary Jörge n Gram ( 1850-19 16) and lhe Genna n mmhematician Schmidt ( 1876-1959).

rhardt

202 •

+

Chap. 4 Orthogonality and Least Squ ares

Sec. 4.2 Gram-Schm idt Process and QR Factorization •

The QR Factorization

The verification of the unigueness of the Q R factotization is left as E xercise ~. 3. 31. To find the QR factorization of a matrix M we pe1form the GramSchnudt process on the columns of M, constructing Q and R column by column. No extra computations are required: all the information necessary to build M and R .is pro:'ided by the Gram-Schmidt process. Q R factorization is an effective way to orgamze and record the work performed in the Gram-Schmidt process; it useful for many computational and tbeoretical purposes .

The Gram-Schmjdt proces s can be presented succi nctly in m atrix fon~. U sin? the tenninology introduced in Algorithm 4.2.1, let us express the vector v; as a hnear

tü1.. .. , w;. v1= l! v11iw1 and v; = pro.i v,_1v; + )I Ü;

combination of

-

pro.i v,_1 v; llw;

= (w 1 . v; )w1 + ... + (w;- 1 . i:i;)w;-1+

llv; -

for i = 2, ... , m.. Let r 11= ll viii. r;; = for i > j. Then v1= r 11 iü t

llv; - pro.i v,_1 ü; llw;

pro.i v,_1 ü;ll for i > 1, and l'j;

203

EXAMPLE 2 ... Find the QR f actorization for

= iiJj ·v;

v2 = r 12w 1 + rnw2 Solution We can use tbe work done in Example 1. We can write tbese equations in matrix form:

WJ

-

Wm

lV2

u~

rl" 1"22

,..1111 r 2m

0

rn~m

l

llii i l/

-!.

M

QR factorization

v

Consider an n x m matrix M with linearly independent columns 1, . . . , V111 • Tben there is an n x m matr ix Q whose column s are orthonormal and an upper ttiangular m x m matrix R with positive diagonal e ntries such that

r 11

where

= IIÜI II

and

r;;

=

llv; - proj v,_1 Ü; II (for i > 1),

W2

-5/~44l 3/.JM

2

12

3/.JM [ 0 .J44

J

-1 //44

Note that you can check this answer.

Note that M is an n x m matrix with linearly iJ1dependent columns, Q is an n x m matrix with orthonormal columns, and R is a:n upper triangular m x m matrix witb positive entries on the diagonal.

This representation is unique. Furthermore,

= [·w1

ll iiz - proj v, ii2ll

M=QR

M= QR.

QR

1/2 1/ 2 1/2 [ 1/ 2

\/

Fact 4.2.2

=

- J[ I"J0J

EX E RC I S ES GOALS Perform the Gram-Schmidt process, and thus find the QR factori zation of a matrix . Using paper and pencil, perfom1 the Gram-Schmidt process on the equences of vectors given in Exercises I to 14.

204 •

Chap. 4 Orthogonality and Least Squares

s.

10.

Sec. 4.2 Gram-Schmidt Process and QR Factorizat ion •

[il UJ

30. Consider two linem:ly

i~dependent

vec tors

v1 =

[ : ] and

v2 = [ ~ ]

205 in 11l2.

Draw sketches (as 111 Ftgures 1 to 3 of this section) to illustrate the GramSchmidt process for v1, 2 • You need not peiform the process algebraically. 31. Perform the Gram-Schmidt process on the follo wing bas is of lll3 :

v

rll m

I~ [1]· [~] · [_~] ·

Here, a, c, and f arepositive constants and the other constants are arbitrary. Hlustrate your work with a sketch, as in Figure 4 of this section . 32. Find an Ot1honormal basis of the plane Xt

+ x2 +x3 =0.

33. Find an orthononnal basis of the kemel of the matrix

Using paper and pencil, find the QR factorizations of the matrices cises 15 to 28 (compare with Exercises 1 to 14).

16.

19.

22.

-n un [~

r~ ~J . 2

17.

20.

23.

-2

j] ~- rt ~ n ~- [~

26.

lD

u_:n [~ ~

[~

n

!]

- 5 1 3 1

2 4] [ 3 0

4 2

6

13

.

A=[i

4 '

Illustrate your work with sketches, as in Figures 1 to 3 of this section.

~l

~ ~ ~l

35. Find an ortbonormal basis of the image of the matrix

A=[;

2

i].

70

-2

36. Consider tbe matrix

~ [~

I

1

1

2

- 1 1 -1

Find the QR factorization of M. 37. Consider the matrix

M ~ ~ [i

- [-3]

- 1

34. Find an orthonormal basis of the kerne! of the matrix

M=

29. Perform the Gram-Schmidt process on tbe basis of l!l below:

=

A=U -i

.

2

VJ

Exer-

1 - 1

- 1

1

- 1 I

-1

-l] [HJ. - 1

Find tbe QR factorization of M . 38. Find the QR factorization of

-3 0 0 0

0

0

206 •

Sec. 4.3 Orthogonal Transfo rmations and Orthogona l Matrices •

Chap. 4 Orthogonality and Least Squares

39. Find an orthonormal ba is

w1. w2. w3 of IR3such thal

20 7

3 ORTHOGONAL TRANSFORMATIONS AND ORTHOGONAL MATRICES fn geometry, we are particularly interested in those linear Iran sformations which preserve the le ngth of vectors.

and

span (W,, W2)

~ span (UJ. [_ [])

Definition 4.3.1

Orthogonal transformations and orthogonal matrices A linear transformation T from IR" to IR" is called orthogonal if it preserves the le ngth of vectors:

40. Consider an invertible n x n matrix A whose column are orthogonal but not neces arily orthonormal . What doe the Q R factorization of A Iook like? 41. Consider an invertible upper triangular n x n matrix A. What does the Q R fac-

IIT(x) ll = llx ll, for all .K in IR". lf T (x) = Ax is an orthogonal tran format.ion, we say tbat A

torization of A Iook like? 42. The two column vectors ü1 and ü2 of a 2 x 2 matri x A are shown in the figure below. Let A = Q R be the Q R factorization of A. Represent tbe diagonal entries r 11 and r22 of R a length in the figure. Interpret the product ~"11r22 as an area.

an

orthogonal malrix.

EXAMPLE 1 ..... The rotation

is an orthogonal transformation from IR2 to IR2 , and A =

[c?sifY

-

Slll f/Y

inqy]

cosqy

an orthogonal matrix, for all angles f/Y.

43. Consider a partitioned marrix A2 ]

with linearly independent column (A 1 is an n x m 1 matrix and A 2 is n x m2). Suppose you know tbe QR factorization of A. Explain how this allows you to find the QR factorization of A 1• 44. Consider an n x m matrix A with rank(A) < m. Is it alway possible to write the matrix A as A = QR

where Q is an n x m matrix with ortbonormal columns and R is upper triangular? Explain.

45. Consider an n x m matrix A with rank(A) = m. Is it always possible to write Aas A = QL

where Q is an n x m matrix with ortbonormal columns and L is a Jower triangular m x m matrix with positive diagonal entries? Explain.

x

Jvx -x

V of ~" . Fora vector in IR" , the vector R(,r) = 2 pr i called the reflection of .X in V (compare with Defi nition 2 .2 .6; ee Figure I . Show that reflection are orthogonal tran formations .

EXAMPLE 2 ..... Consider a ub pace A= [ AI

Figure 1

208 •

Sec. 4.3 Orthogonal Transformations and Ort hogona l Matri ces •

Chap. 4 Orthogonality and Lea t Squares Solution we can w1i te R(.r ) = proj 11 x + ( proj 11 x- .r) and .r = proj 11.r + (x -

proj 11 .r).

209

The two shaded triangles are congruent, because corresponding ides are the a.me length (s ince T preserves length). Since D 1 is a ri ght tri ang le, o is D2 • Here is an alternative characteri zalion of orthogonal transformations:

By

the theorem of Pythagoras, we have 2

II R(x) \1 2 = II proj 11 xll 2 + II proj v-r - .rll = II proj 11 .r ll- + 11-r - proj .r f

=

11-rll

Fact 4.3.3

2

a. A li near transformation T from IR" to IR" is orthogonal if (and onl y if) the

.

vectors

Fact 4.3.2

Proof

Con ider an orthogonal transformati on T fro m IR to IR". If the vectors iü in IR" are orthogonal, then o are T(u) and T (w).

Figure 3 illustrates part a for a li near transformation from JR2 to JR2 .

v and Proof

+ T (w)ll 2 =

II T (ü)ll

2

We prove part a; part b then follows from Fact 2.1 .2. If T is orthogonal, then by de finition the T (e;) are unit vectors and by Fact 4.3. 2 they are orthogonal. Conversely, suppose the T (e;) form an orthonormal basis. Consider a vector = x,e, + x2 e2 + · · · +xnen in IR". Tben

x

By the theorem of Pythagoras, we have to sbow that IIT (ü)

+ .. · +x"T (en)i1 2 llx , T(e,)ll 2 + \\x2T Ce2)ll 2 + · · · + ll xnT (en)li 2 x~ +xi + · ·· + x; llx ,T(e,) +

+ IIT(w)ll 2 .

Let's see. !I T Cü)

+ T Cw)ll 2

II T (v+w

f

+ wll 2 lliill 2 + Hwf ii T(v)li 2 + IIT (w)\1 2

llü =

T (e2) , .. . , T (e11 ) form an orthonormal basis of IR11 •

b. An n x n. matrix A is orthogonal if (and only if) its columns form an orthonormal basis of IR".

As the name sugge ts orthogonal transformations preserve right ang les. In fact, orthogonal tran fo rmations preserve all angles (Exerci e 3).

11

T(e,),

(T is linear)

x2 T Ce2)



(T is orthogonal)

(ü and

ware 01t hogonal)

(T is orthogonal)

Wa m in.g: A matrix with orthogonal columns need not be an orthogonal



matrix. As an example, consider the matrix A = [

Fact 4.3 .2 is perhaps berter explained with a sketch (see Figure 2).

EXAMPLE

A

~~ 2

T

--------------

I

~

3 ~ Show that the matrix A is orthogonal.

Figure 2

üi (translated

(by Pythagoras)

[:

l l

-1 -1

- 1

1 1

-1 1

Figure 3

.....--------.. T

e2

T(ii 2)

-ll - 1

-!l

Solution  Check that the columns of A form an orthonormal basis of ℝ⁴. ◄

Here are some algebraic properties of orthogonal matrices:

Fact 4.3.4
a. The product AB of two orthogonal n × n matrices A and B is orthogonal.
b. The inverse A⁻¹ of an orthogonal n × n matrix A is orthogonal.

Proof  In part a, the linear transformation T(x̄) = ABx̄ preserves length, because ‖T(x̄)‖ = ‖A(Bx̄)‖ = ‖Bx̄‖ = ‖x̄‖. In part b, the linear transformation T(x̄) = A⁻¹x̄ preserves length, because ‖A⁻¹x̄‖ = ‖A(A⁻¹x̄)‖ = ‖x̄‖. Figure 4 illustrates part a. ∎

[Figure 4: x̄ ↦ Bx̄ ↦ ABx̄, where B preserves length and A preserves length.]

EXAMPLE 4 ► Consider the orthogonal matrix

    A = 1/7 [ 2  6  3
              3  2 -6
              6 -3  2 ].

Form another 3 × 3 matrix B whose ijth entry is the jith entry of A; that is, the rows of B correspond to the columns of A:

    B = 1/7 [ 2  3  6
              6  2 -3
              3 -6  2 ].

Compute BA, and explain the result.

Solution

    BA = 1/49 [ 49  0  0
                 0 49  0
                 0  0 49 ] = I₃.

This result is no coincidence: the ijth entry of BA is the dot product of the ith row of B and the jth column of A. By definition of B, this is just the dot product of the ith column of A and the jth column of A. Since A is orthogonal, this dot product is 1 if i = j and 0 otherwise. ◄

Let us generalize.

◆ The Transpose of a Matrix

Definition 4.3.5  The transpose of a matrix; symmetric matrices
Consider an m × n matrix A. The transpose Aᵀ of A is the n × m matrix whose ijth entry is the jith entry of A. We say that a square matrix A is symmetric if Aᵀ = A.

EXAMPLE 5 ► If

    A = [ 1 2 3
          4 5 6 ],   then   Aᵀ = [ 1 4
                                   2 5
                                   3 6 ]. ◄

EXAMPLE 6 ► The matrix

    A = [ 1 2
          2 3 ]

is symmetric, because Aᵀ = A. ◄

Note that the transpose of a (column) vector v̄ is a row vector: if

    v̄ = [ 1 ; 2 ; 3 ],   then   v̄ᵀ = [ 1 2 3 ].

The transpose gives us a convenient way to express the dot product of two (column) vectors as a matrix product.

Fact 4.3.6  If v̄ and w̄ are two (column) vectors in ℝⁿ, then

    v̄ · w̄   =   v̄ᵀ w̄.
    (dot product)  (matrix product)

For example,

    [ 1 ; 2 ; 3 ] · [ 4 ; 5 ; 6 ] = [ 1 2 3 ] [ 4 ; 5 ; 6 ] = 32.
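Readers who want to experiment can check such computations numerically. Here is a minimal sketch in Python with the NumPy library (the variable names are ours; any matrix package would do), verifying the computation of Example 4 and the dot-product formula of Fact 4.3.6:

```python
import numpy as np

# The orthogonal matrix of Example 4 and the matrix B built from its columns.
A = np.array([[2, 6, 3],
              [3, 2, -6],
              [6, -3, 2]]) / 7.0
B = A.T                                  # rows of B are the columns of A

print(B @ A)                             # approximately the 3x3 identity matrix
print(np.allclose(B @ A, np.eye(3)))     # True

# Fact 4.3.6: the dot product expressed as a matrix product.
v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])
print(np.dot(v, w), v.T @ w)             # both give 32.0
```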

Here are some algebraic properties of transposes:

Fact 4.3.7
a. If A is an m × n matrix and B an n × p matrix, then (AB)ᵀ = BᵀAᵀ. Note the order of the factors.
b. If an n × n matrix A is invertible, then so is Aᵀ, and (Aᵀ)⁻¹ = (A⁻¹)ᵀ.
c. For any matrix A, rank(A) = rank(Aᵀ).

Proof
a. Compare entries:

    ijth entry of (AB)ᵀ = jith entry of AB = (jth row of A) · (ith column of B);
    ijth entry of BᵀAᵀ = (ith row of Bᵀ) · (jth column of Aᵀ) = (ith column of B) · (jth row of A).

The two entries agree.
b. We know that AA⁻¹ = Iₙ. Transposing both sides and using part a, we find that

    (AA⁻¹)ᵀ = (A⁻¹)ᵀAᵀ = Iₙ.

By Fact 2.4.9, it follows that (Aᵀ)⁻¹ = (A⁻¹)ᵀ.
c. Consider the row space of A, i.e., the span of the rows of A. It is not hard to show that the dimension of this space is rank(A) (see Exercises 49-52, Section 3.3):

    rank(Aᵀ) = dimension of the span of the columns of Aᵀ
             = dimension of the span of the rows of A
             = rank(A). ∎

Now we can succinctly state the observation made in Example 4.

Fact 4.3.9  Consider an n × n matrix A. The matrix A is orthogonal if (and only if) AᵀA = Iₙ or, equivalently, if A⁻¹ = Aᵀ.

To justify this fact, write A in terms of its columns:

    A = [ v̄₁ v̄₂ ··· v̄ₙ ].

Then

    AᵀA = [ v̄₁ᵀ
            v̄₂ᵀ
             ⋮
            v̄ₙᵀ ] [ v̄₁ v̄₂ ··· v̄ₙ ]  =  [ v̄₁·v̄₁  v̄₁·v̄₂  ···  v̄₁·v̄ₙ
                                           v̄₂·v̄₁  v̄₂·v̄₂  ···  v̄₂·v̄ₙ
                                             ⋮       ⋮             ⋮
                                           v̄ₙ·v̄₁  v̄ₙ·v̄₂  ···  v̄ₙ·v̄ₙ ].

By Fact 4.3.3b, this product is Iₙ if (and only if) A is orthogonal.

Later in this text, we will frequently work with matrices of the form AᵀA. It is helpful to think of AᵀA as a table displaying the dot products v̄ᵢ · v̄ⱼ among the columns of A, as shown above.

We summarize the various characterizations we have found of orthogonal matrices.

Summary 4.3.8  Orthogonal matrices
Consider an n × n matrix A. Then the following statements are equivalent:
i. A is an orthogonal matrix.
ii. The transformation L(x̄) = Ax̄ preserves length; that is, ‖Ax̄‖ = ‖x̄‖ for all x̄ in ℝⁿ.
iii. The columns of A form an orthonormal basis of ℝⁿ.
iv. AᵀA = Iₙ.
v. A⁻¹ = Aᵀ.
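As a quick numerical illustration of Summary 4.3.8, the following Python/NumPy sketch checks characterizations ii through v for the matrix of Example 4 (the code is our own illustration, not part of the text):

```python
import numpy as np

A = np.array([[2, 6, 3],
              [3, 2, -6],
              [6, -3, 2]]) / 7.0

# (ii) A preserves length for an arbitrary test vector.
x = np.random.randn(3)
print(np.isclose(np.linalg.norm(A @ x), np.linalg.norm(x)))   # True

# (iii)/(iv) A^T A is the table of dot products among the columns of A.
print(A.T @ A)                                 # approximately I_3
print(np.allclose(A.T @ A, np.eye(3)))         # True

# (v) the inverse of an orthogonal matrix is its transpose.
print(np.allclose(np.linalg.inv(A), A.T))      # True
```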

◆ The Matrix of an Orthogonal Projection

The transpose allows us to write a formula for the matrix of an orthogonal projection. Consider first the orthogonal projection

    proj_L x̄ = (v̄₁ · x̄) v̄₁

onto a line L in ℝⁿ, where v̄₁ is a unit vector in L. If we view the vector v̄₁ as an n × 1 matrix and the scalar v̄₁ · x̄ as a 1 × 1 matrix, we can write

    proj_L x̄ = v̄₁ (v̄₁ · x̄) = v̄₁ v̄₁ᵀ x̄ = M x̄,

where M = v̄₁ v̄₁ᵀ. Note that v̄₁ is an n × 1 matrix and v̄₁ᵀ is 1 × n, so that M is n × n, as expected. More generally, consider the projection

    proj_V x̄ = (v̄₁ · x̄) v̄₁ + ··· + (v̄ₘ · x̄) v̄ₘ

onto a subspace V of ℝⁿ with orthonormal basis v̄₁, ..., v̄ₘ. We can write

    proj_V x̄ = v̄₁ v̄₁ᵀ x̄ + ··· + v̄ₘ v̄ₘᵀ x̄ = (v̄₁ v̄₁ᵀ + ··· + v̄ₘ v̄ₘᵀ) x̄.

We have shown the following result:

Fact 4.3.10  The matrix of an orthogonal projection
Consider a subspace V of ℝⁿ with orthonormal basis v̄₁, v̄₂, ..., v̄ₘ. The matrix of the orthogonal projection onto V is

    A Aᵀ,   where   A = [ v̄₁ v̄₂ ··· v̄ₘ ].

Pay attention to the order of the factors (AAᵀ as opposed to AᵀA).

EXAMPLE 7 ► Find the matrix of the orthogonal projection onto the subspace of ℝ⁴ spanned by

    v̄₁ = 1/2 [ 1 ; 1 ; 1 ; 1 ]   and   v̄₂ = 1/2 [ 1 ; -1 ; -1 ; 1 ].

Solution  Note that the vectors v̄₁ and v̄₂ are orthonormal. Therefore, the matrix is

    A Aᵀ = 1/4 [ 1  1                            1/2 [ 1 0 0 1
                 1 -1    [ 1  1  1  1                  0 1 1 0
                 1 -1      1 -1 -1  1 ]    =           0 1 1 0
                 1  1 ]                                1 0 0 1 ].  ◄
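Here is a minimal Python/NumPy check of Example 7 and of Fact 4.3.10 (our own sketch; the variable names are assumptions):

```python
import numpy as np

# Orthonormal basis of the subspace V of Example 7, as the columns of A.
A = 0.5 * np.array([[1.0,  1.0],
                    [1.0, -1.0],
                    [1.0, -1.0],
                    [1.0,  1.0]])

P = A @ A.T           # matrix of the orthogonal projection onto V (Fact 4.3.10)
print(P)              # 0.5 * [[1,0,0,1],[0,1,1,0],[0,1,1,0],[1,0,0,1]]

# Projecting twice is the same as projecting once: P P = P.
print(np.allclose(P @ P, P))   # True
```

The idempotence check P² = P in the last line anticipates Exercise 22 below.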

EXERCISES

GOALS  Use the various characterizations of orthogonal transformations and orthogonal matrices. Find the matrix of an orthogonal projection. Use the properties of the transpose.

1. Consider an m × n matrix A, a vector v̄ in ℝⁿ, and a vector w̄ in ℝᵐ. Show that (Av̄) · w̄ = v̄ · (Aᵀw̄).
2. Consider an orthogonal transformation L from ℝⁿ to ℝⁿ. Show that L preserves the dot product: v̄ · w̄ = L(v̄) · L(w̄), for all v̄ and w̄ in ℝⁿ.
3. Show that an orthogonal transformation L from ℝⁿ to ℝⁿ preserves angles: the angle between two nonzero vectors v̄ and w̄ in ℝⁿ equals the angle between L(v̄) and L(w̄). Conversely, is any linear transformation that preserves angles orthogonal?
4. Consider a linear transformation L from ℝⁿ to ℝᵐ which preserves length. What can you say about the kernel of L? What is the dimension of the image? What can you say about the relationship between n and m? If A is the matrix of L, what can you say about the columns of A? What is AᵀA? What about AAᵀ? Illustrate your answers with an example, where n = 2 and m = 3.
5. If a matrix A is orthogonal, is A² orthogonal as well?
6. If a matrix A is orthogonal, is Aᵀ orthogonal as well?
7. Are the rows of an orthogonal matrix A orthonormal?
8. a. Consider an n × m matrix A such that AᵀA = Iₘ. Is it necessarily true that AAᵀ = Iₙ? Explain.
   b. Consider an n × n matrix A such that AᵀA = Iₙ. Is it necessarily true that AAᵀ = Iₙ? Explain.
9. Find all orthogonal 2 × 2 matrices.
10. Find all orthogonal 3 × 3 matrices of the form

    [ a  b   1/√2
      c  d  -1/√2
      e  f    0   ].


11. Find an orthogonal transformation T : ℝ³ → ℝ³ such that

    T([ 2/3 ; 2/3 ; 1/3 ]) = [ 0 ; 0 ; 1 ].

12. Find an orthogonal matrix of the form

    [ 2/3   1/√2   a
      2/3  -1/√2   b
      1/3    0     c ].

13. Is there an orthogonal transformation T : ℝ³ → ℝ³ such that

    T([ 2 ; 3 ; 0 ]) = [ 3 ; 0 ; 2 ]   and   T([ -3 ; 2 ; 0 ]) = [ 2 ; -3 ; 0 ]?

14. If A is any m × n matrix, show that the matrix AᵀA is symmetric. What about AAᵀ?
15. If two n × n matrices A and B are symmetric, is AB necessarily symmetric?
16. If an n × n matrix A is symmetric, is A² necessarily symmetric?
17. If an invertible n × n matrix A is symmetric, is A⁻¹ necessarily symmetric?
18. An n × n matrix A is called skew-symmetric if Aᵀ = -A.
   a. Give an example of a skew-symmetric 3 × 3 matrix.
   b. If A is skew-symmetric, what can you say about A²?
19. Consider a line L in ℝⁿ, spanned by a unit vector v̄ = [ v₁ ; v₂ ; ... ; vₙ ]. Consider the matrix A of the orthogonal projection onto L. Describe the ijth entry of A, in terms of the components vᵢ of v̄.
20. Consider the subspace W of ℝ⁴ spanned by the two vectors given in the text [entries illegible in this scan]. Find the matrix of the orthogonal projection onto W.
21. Find the matrix A of the orthogonal projection onto the line in ℝⁿ spanned by the vector [ 1 ; 1 ; ... ; 1 ] (all n components are 1).
22. Let A be the matrix of an orthogonal projection. Find A² in two ways:
   a. Geometrically (what happens when you apply an orthogonal projection twice?).
   b. By computation, using the formula given in Fact 4.3.10.
23. Consider a unit vector ū in ℝ³. We define the matrices

    A = 2ūūᵀ - I₃   and   B = I₃ - 2ūūᵀ.

Describe the linear transformations defined by these matrices geometrically.
24. Consider an m × n matrix A. Find dim(im(A)) + dim(ker(Aᵀ)), in terms of m and n.
25. For which m × n matrices A does the equation dim(ker(A)) = dim(ker(Aᵀ)) hold? Explain.
26. Consider a QR factorization M = QR. Show that R = QᵀM.
27. If A = QR is a QR factorization, what is the relationship between AᵀA and RᵀR?
28. Consider an invertible n × n matrix A. Can you write A as A = LQ, where L is a lower triangular matrix and Q is orthogonal? Hint: Consider the QR factorization of Aᵀ.
29. Consider an invertible n × n matrix A. Can you write A = RQ, where R is an upper triangular matrix and Q is orthogonal?
30. a. Find all n × n matrices which are both orthogonal and upper triangular, with positive diagonal entries.
   b. Show that the QR factorization of an invertible n × n matrix is unique. Hint: If A = Q₁R₁ = Q₂R₂, then the matrix Q₂⁻¹Q₁ = R₂R₁⁻¹ is both orthogonal and upper triangular, with positive diagonal entries.
31. a. Consider the matrix product Q₁ = Q₂S, where both Q₁ and Q₂ are n × m matrices with orthonormal columns. Show that S is an orthogonal matrix. Hint: Compute Q₁ᵀQ₁ = (Q₂S)ᵀQ₂S. Note that Q₁ᵀQ₁ = Q₂ᵀQ₂ = Iₘ.
   b. Show that the QR factorization of an n × m matrix M is unique. Hint: If M = Q₁R₁ = Q₂R₂, then Q₁ = Q₂R₂R₁⁻¹. Now use part a and Exercise 30a.
32. Consider the 3 × 3 matrix A with the LDU factorization A = LDU given in the text [entries illegible in this scan]. Find the LDU factorization of Aᵀ (see Exercise 2.4.58d).
33. Consider a symmetric invertible n × n matrix A which admits an LDU factorization A = LDU (see Exercises 58, 61, and 62 of Section 2.4). Recall that this factorization is unique (see Exercise 2.4.62). Show that U = Lᵀ. (This is sometimes called the LDLᵀ factorization of a symmetric matrix A.)
34. This exercise shows one way to define the quaternions, discovered in 1843 by the Irish mathematician Sir W. R. Hamilton (1805-1865). Consider the set H of all 4 × 4 matrices M of the form

    M = [ p -q -r -s
          q  p  s -r
          r -s  p  q
          s  r -q  p ],

where p, q, r, s are arbitrary real numbers. We can write M more succinctly in partitioned form as

    M = [ A  -Bᵀ
          B   Aᵀ ],

where A and B are rotation-dilation matrices.
   a. Show that H is closed under addition: if M and N are in H, then so is M + N.
   b. Show that H is closed under scalar multiplication: if M is in H and k is an arbitrary scalar, then kM is in H.
   c. Show that H is closed under multiplication: if M and N are in H, then so is MN.
   d. Show that if M is in H, then so is Mᵀ.
   e. For a matrix M in H, compute MᵀM.
   f. Which matrices M in H are invertible? If M is invertible, is M⁻¹ always in H?
   g. If M and N are in H, does the equation MN = NM always hold?

4.4 LEAST SQUARES AND DATA FITTING

In this section, we will present an important application of the ideas introduced in this chapter. First we take another look at orthogonal complements and orthogonal projections.

◆ Another Characterization of Orthogonal Complements

Consider a subspace V = im(A) of ℝⁿ, where A = [ v̄₁ v̄₂ ··· v̄ₘ ]. Then

    V⊥ = { x̄ in ℝⁿ : v̄ · x̄ = 0, for all v̄ in V }
       = { x̄ in ℝⁿ : v̄ᵢ · x̄ = 0, for i = 1, ..., m }
       = { x̄ in ℝⁿ : v̄ᵢᵀ x̄ = 0, for i = 1, ..., m }.

In other words, V⊥ is the kernel of the matrix

    [ v̄₁ᵀ
      v̄₂ᵀ
       ⋮
      v̄ₘᵀ ]  =  Aᵀ.

Fact 4.4.1  For any matrix A,

    ( im(A) )⊥ = ker(Aᵀ).

Here is a very simple example: consider the line

    V = im [ 1 ; 2 ; 3 ].

Then

    V⊥ = ker [ 1 2 3 ]

is the plane with equation x₁ + 2x₂ + 3x₃ = 0 (see Figure 1).

[Figure 1: V⊥ = ker[1 2 3], the plane x₁ + 2x₂ + 3x₃ = 0, and V = im[1; 2; 3], the line spanned by [1; 2; 3].]
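Numerically, a basis of V⊥ = ker(Aᵀ) can be extracted from the singular value decomposition. Here is a minimal Python/NumPy sketch for the example above (using the SVD here is our choice of method, not the text's):

```python
import numpy as np

A = np.array([[1.0], [2.0], [3.0]])   # V = im(A), a line in R^3

# Right singular vectors of A^T beyond rank(A^T) = 1 span ker(A^T).
U, s, Vt = np.linalg.svd(A.T)
null_basis = Vt[1:]        # two orthonormal vectors spanning the plane V-perp
print(null_basis)

# Each basis vector satisfies x1 + 2*x2 + 3*x3 = 0:
print(null_basis @ np.array([1.0, 2.0, 3.0]))   # approximately [0, 0]
```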

Here are some properties of the orthogonal complement which are obvious for subspaces of ℝ² and ℝ³:

Fact 4.4.2  Consider a subspace V of ℝⁿ. Then
a. dim(V) + dim(V⊥) = n,
b. (V⊥)⊥ = V,
c. V ∩ V⊥ = {0̄}.

Proof
a. Let T(x̄) = proj_V x̄ be the orthogonal projection onto V. Note that im(T) = V and ker(T) = V⊥. Fact 3.3.9 tells us that

    n = dim(ker(T)) + dim(im(T)) = dim(V) + dim(V⊥).

b. First observe that V ⊆ (V⊥)⊥, since a vector in V is orthogonal to every vector in V⊥ (by definition of V⊥). Furthermore, the dimensions of the two spaces are equal, by part a:

    dim((V⊥)⊥) = n - dim(V⊥) = n - (n - dim(V)) = dim(V).

It follows that the two spaces are equal (Exercise 3.3.41).
c. If x̄ is in V and in V⊥, then x̄ is orthogonal to itself; that is, x̄ · x̄ = ‖x̄‖² = 0, and thus x̄ = 0̄. ∎

The following somewhat technical result will be useful later:

Fact 4.4.3
a. If A is an m × n matrix, then ker(A) = ker(AᵀA).
b. If A is an m × n matrix with ker(A) = {0̄}, then AᵀA is invertible.

Proof
a. Clearly, the kernel of A is contained in the kernel of AᵀA. Conversely, consider a vector x̄ in the kernel of AᵀA, so that AᵀAx̄ = 0̄. Then Ax̄ is in the image of A and in the kernel of Aᵀ. Since ker(Aᵀ) is the orthogonal complement of im(A), by Fact 4.4.1, the vector Ax̄ is 0̄, by Fact 4.4.2c; that is, x̄ is in the kernel of A.
b. Note that AᵀA is an n × n matrix. By part a, ker(AᵀA) = {0̄}, and AᵀA is therefore invertible (see Summary 3.3.11). ∎

◆ An Alternative Characterization of Orthogonal Projections

Fact 4.4.4  Consider a vector x̄ in ℝⁿ and a subspace V of ℝⁿ. Then the orthogonal projection proj_V x̄ is the vector in V closest to x̄, in that

    ‖x̄ - proj_V x̄‖ < ‖x̄ - v̄‖,

for all v̄ in V different from proj_V x̄.

To justify this fact, apply the theorem of Pythagoras to the shaded right triangle in Figure 2.

[Figure 2: x̄, its projection proj_V x̄ in V, a vector v̄ in V, and the translated leg x̄ - proj_V x̄ of the shaded right triangle.]

◆ Least-Squares Approximations

Consider an inconsistent linear system Ax̄ = b̄. The fact that this system is inconsistent means that the vector b̄ is not in the image of A (see Figure 3). Although this system cannot be solved, we might be interested in finding a good approximate solution. We can try to find a vector x̄* such that Ax̄* is "as close as possible" to b̄. In other words, we try to minimize the error ‖b̄ - Ax̄‖.

[Figure 3: the vector b̄ outside the plane im(A).]

Definition 4.4.5  Least-squares solution
Consider a linear system Ax̄ = b̄, where A is an m × n matrix. A vector x̄* in ℝⁿ is called a least-squares solution of this system if

    ‖b̄ - Ax̄*‖ ≤ ‖b̄ - Ax̄‖   for all x̄ in ℝⁿ.

See Figure 4. The term "least-squares solution" reflects the fact that we are minimizing the sum of the squares of the components of the vector b̄ - Ax̄. If the system Ax̄ = b̄ happens to be consistent, then the "least-squares solutions" are its exact solutions: the error ‖b̄ - Ax̄‖ is zero.

[Figure 4: b̄, the image of A, and Ax̄* = proj_{im(A)} b̄.]

How can we find the least-squares solutions of a linear system Ax̄ = b̄? Consider the following string of equivalent statements:

    The vector x̄* is a least-squares solution of the system Ax̄ = b̄
    ⟺ (Def. 4.4.5)            ‖b̄ - Ax̄*‖ ≤ ‖b̄ - Ax̄‖ for all x̄ in ℝⁿ
    ⟺ (Fact 4.4.4)            Ax̄* = proj_V b̄, where V = im(A)
    ⟺ (Facts 4.1.6 and 4.4.1)  b̄ - Ax̄* is in V⊥ = ( im(A) )⊥ = ker(Aᵀ)
    ⟺                         Aᵀ(b̄ - Ax̄*) = 0̄
    ⟺                         AᵀAx̄* = Aᵀb̄.

Take another look at Figures 2 and 4.

Fact 4.4.6  The least-squares solutions of the system

    Ax̄ = b̄

are the exact solutions of the (consistent) system

    AᵀAx̄ = Aᵀb̄.

The system AᵀAx̄ = Aᵀb̄ is referred to as the normal equation of Ax̄ = b̄.

The case when ker(A) = {0̄} is of particular importance. Then the matrix AᵀA is invertible (by Fact 4.4.3), and we can give a closed formula for the least-squares solution.

Fact 4.4.7  If ker(A) = {0̄}, then the linear system Ax̄ = b̄ has the unique least-squares solution

    x̄* = (AᵀA)⁻¹Aᵀb̄.

From a computational point of view, it may be more efficient to solve the normal equation AᵀAx̄ = Aᵀb̄ by Gauss-Jordan elimination rather than by using Fact 4.4.7.

EXAMPLE 1 ► Use Fact 4.4.7 to find the least-squares solution x̄* of the system Ax̄ = b̄, for the 3 × 2 matrix A and the vector b̄ given in the text [entries illegible in this scan]. What is the geometric relationship between Ax̄* and b̄?

Solution  We compute

    x̄* = (AᵀA)⁻¹Aᵀb̄.

As discussed above, b̄* = Ax̄* is the orthogonal projection of b̄ onto the image of A. Check that b̄ - Ax̄* is indeed perpendicular to the two column vectors of A (see Figure 5). ◄

[Figure 5: the vector b̄ - Ax̄* (translated), perpendicular to the image of A.]
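Here is a minimal Python/NumPy sketch of Facts 4.4.6 and 4.4.7. Since the data of Example 1 are illegible in this copy, the small system below is our own illustration:

```python
import numpy as np

# A small inconsistent system, for illustration only.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 4.0])

# Least-squares solution via the normal equation (Fact 4.4.6).
x_star = np.linalg.solve(A.T @ A, A.T @ b)
print(x_star)

# The same result from NumPy's built-in least-squares routine.
print(np.linalg.lstsq(A, b, rcond=None)[0])

# b - A x* is perpendicular to the columns of A (it lies in ker(A^T)).
print(A.T @ (b - A @ x_star))    # approximately [0, 0]
```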

If x̄* is a least-squares solution of the system Ax̄ = b̄, then b̄* = Ax̄* is the orthogonal projection of b̄ onto im(A). We can use this fact to find a new formula for orthogonal projections (compare with Fact 4.1.6 and Fact 4.3.10). Consider a subspace V of ℝⁿ and a vector b̄ in ℝⁿ. Choose a basis v̄₁, ..., v̄ₘ of V, and form the matrix A = [ v̄₁ ··· v̄ₘ ]. Note that ker(A) = {0̄}, since the columns of A are linearly independent. The least-squares solution of the system Ax̄ = b̄ is x̄* = (AᵀA)⁻¹Aᵀb̄. As observed above, the orthogonal projection of b̄ onto V is

    proj_V b̄ = Ax̄* = A(AᵀA)⁻¹Aᵀb̄.

We have shown the following result:

Fact 4.4.8  The matrix of an orthogonal projection
Consider a subspace V of ℝⁿ with basis v̄₁, v̄₂, ..., v̄ₘ. Set

    A = [ v̄₁ v̄₂ ··· v̄ₘ ].

Then the matrix of the orthogonal projection onto V is

    A(AᵀA)⁻¹Aᵀ.

We are not required to find an orthonormal basis of V here. If the vectors v̄ᵢ happen to be orthonormal, then AᵀA = Iₘ, and the formula simplifies to AAᵀ (see Fact 4.3.10).

EXAMPLE 2 ► Find the matrix of the orthogonal projection onto the subspace of ℝ⁴ spanned by the two vectors given in the text [entries illegible in this scan].

Solution  Let A be the 4 × 2 matrix whose columns are the two given vectors, and compute A(AᵀA)⁻¹Aᵀ. ◄

◆ Data Fitting

Scientists are often interested in fitting a function of a certain type to data they have gathered. The functions considered could be linear, polynomial, rational, trigonometric, or exponential. The equations we have to solve as we fit data are frequently linear (see Exercises 29 and 30 of Section 1.1, and Exercises 30 to 33 of Section 1.2).

EXAMPLE 3 ► Find a cubic polynomial whose graph passes through the points (1, 3), (-1, 13), (2, 1), (-2, 33).

Solution  We are looking for a function

    f(t) = c₀ + c₁t + c₂t² + c₃t³

such that f(1) = 3, f(-1) = 13, f(2) = 1, f(-2) = 33; that is, we have to solve the linear system below.

    c₀ +  c₁ +  c₂ +  c₃ =  3
    c₀ -  c₁ +  c₂ -  c₃ = 13
    c₀ + 2c₁ + 4c₂ + 8c₃ =  1
    c₀ - 2c₁ + 4c₂ - 8c₃ = 33

This linear system has the unique solution (c₀, c₁, c₂, c₃) = (5, -4, 3, -1); that is, the cubic polynomial whose graph passes through the four given data points is

    f(t) = 5 - 4t + 3t² - t³,

as shown in Figure 6. ◄

[Figure 6: the graph of f(t) = 5 - 4t + 3t² - t³ through the points (1, 3), (-1, 13), (2, 1), (-2, 33).]
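The 4 × 4 system of Example 3 is easy to set up and solve numerically; here is a sketch in Python/NumPy (the use of np.vander to build the coefficient matrix is our choice):

```python
import numpy as np

# Interpolation conditions f(t) = c0 + c1 t + c2 t^2 + c3 t^3 at the
# four data points of Example 3.
t = np.array([1.0, -1.0, 2.0, -2.0])
b = np.array([3.0, 13.0, 1.0, 33.0])
A = np.vander(t, 4, increasing=True)   # columns: 1, t, t^2, t^3

c = np.linalg.solve(A, b)
print(c)    # [ 5. -4.  3. -1.], i.e., f(t) = 5 - 4t + 3t^2 - t^3
```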

is as close as possible to Frequently, a data-fitting problern Ieads to a linear system with more equations tban variabl es (thi s happens wben tbe number of d ata points exceeds the number of parameters in the function we seek). Such a system is usually inconsistent, and we willlook for the Ieast-squares solution(s).

EXAMPLE 4 .... Fit a quadr atic function to the four data points (a , , b ,) (a3, b3)

=

(1, 4), and (a4, b4)

=

= ( - 1, 8), (a2, b2) = (0, 8),

(2, 16).

This means that

Solution We are looking for a function f( t ) = c0 j(a 1) = b1 f(a.2) = b~ f(a.3) = b3 j(a4) = b4

/I I] [ I

0-1

1a1

~

+

= 8 = 8 + c 1 + c2 = 4 co + 2c 1 + 4c2 = 16

Co -

or

+ c 1t + c2 t 2 suchthat

CJ

Figure 8

is minimal: the sum of the squares of the vertical distances between graph and data points is minimal (compare with Figure 8). ~
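A short Python/NumPy check of Example 4 (our own sketch):

```python
import numpy as np

# The inconsistent system of Example 4 and its least-squares solution.
a = np.array([-1.0, 0.0, 1.0, 2.0])
b = np.array([8.0, 8.0, 4.0, 16.0])
A = np.column_stack([np.ones(4), a, a**2])   # columns: 1, t, t^2

x_star = np.linalg.solve(A.T @ A, A.T @ b)   # normal equation
print(x_star)                                # [ 5. -1.  3.]: f*(t) = 5 - t + 3t^2

# The residual b - A x* is orthogonal to the columns of A:
print(A.T @ (b - A @ x_star))                # approximately [0, 0, 0]
```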

EXAMPLE 5 ► Find the linear function c₀ + c₁t which best fits the data points (a₁, b₁), (a₂, b₂), ..., (aₙ, bₙ), using least squares. Assume that a₁ ≠ a₂.

Solution  We have to attempt to solve the system

    c₀ + c₁a₁ = b₁
    c₀ + c₁a₂ = b₂
        ⋮
    c₀ + c₁aₙ = bₙ,

or Ax̄ = b̄, where

    A = [ 1 a₁         x̄ = [ c₀         b̄ = [ b₁
          1 a₂               c₁ ],             b₂
          ⋮  ⋮                                 ⋮
          1 aₙ ],                              bₙ ].

Note that rank(A) = 2, since a₁ ≠ a₂.

The least-squares solution is

    [ c₀* ; c₁* ] = (AᵀA)⁻¹Aᵀb̄,   where   AᵀA = [ n    Σaᵢ          Aᵀb̄ = [ Σbᵢ
                                                   Σaᵢ  Σaᵢ² ],             Σaᵢbᵢ ].

We have found:

    c₀* = ( Σaᵢ² Σbᵢ - Σaᵢ Σaᵢbᵢ ) / ( n Σaᵢ² - (Σaᵢ)² ),
    c₁* = ( n Σaᵢbᵢ - Σaᵢ Σbᵢ ) / ( n Σaᵢ² - (Σaᵢ)² ).

These formulas are well known to statisticians. You do not need to memorize them. ◄

We conclude this section with an example for multivariate data fitting.

EXAMPLE 6 ► In the accompanying table we list the scores of five students in the three exams given in a class.

                h     m     f
    Gabriel    76    48    43
    Kanya      92    92    90
    Jessica    68    82    64
    Janelle    86    68    69
    Wynn       54    70    50

Find the function of the form f = c₀ + c₁h + c₂m which best fits these data, using least squares. What score f does your formula predict for Selina, another student, whose scores in the first two exams were h = 92 and m = 72?

Solution  We have to attempt to solve the system

    c₀ + 76c₁ + 48c₂ = 43
    c₀ + 92c₁ + 92c₂ = 90
    c₀ + 68c₁ + 82c₂ = 64
    c₀ + 86c₁ + 68c₂ = 69
    c₀ + 54c₁ + 70c₂ = 50.

The least-squares solution is

    [ c₀* ; c₁* ; c₂* ] = (AᵀA)⁻¹Aᵀb̄ ≈ [ -42.4 ; 0.639 ; 0.799 ].

The function which gives the best fit is approximately

    f = -42.4 + 0.639h + 0.799m.

This formula predicts the score

    f = -42.4 + 0.639 · 92 + 0.799 · 72 ≈ 74

for Selina. ◄
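The multivariate fit of Example 6 works the same way numerically; here is a minimal Python/NumPy sketch (variable names are ours):

```python
import numpy as np

# Example 6: first-exam score h, second-exam score m, final-exam score f.
h = np.array([76.0, 92.0, 68.0, 86.0, 54.0])
m = np.array([48.0, 92.0, 82.0, 68.0, 70.0])
f = np.array([43.0, 90.0, 64.0, 69.0, 50.0])

A = np.column_stack([np.ones(5), h, m])
c = np.linalg.solve(A.T @ A, A.T @ f)
print(c)                                   # approximately [-42.4, 0.639, 0.799]

# Predicted final-exam score for Selina (h = 92, m = 72):
print(c @ np.array([1.0, 92.0, 72.0]))     # approximately 74
```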

EXERCISES

GOALS  Use the formula (im(A))⊥ = ker(Aᵀ). Apply the characterization of proj_V x̄ as the vector in V "closest to x̄." Find the least-squares solutions of a linear system Ax̄ = b̄, using the normal equation AᵀAx̄ = Aᵀb̄.

1. Consider the subspace im(A) of ℝ², for the 2 × 2 matrix A given in the text [entries illegible in this scan]. Find a basis of ker(Aᵀ), and draw a sketch illustrating the formula (im(A))⊥ = ker(Aᵀ) in this case.
2. Consider the subspace im(A) of ℝ³, for the matrix A given in the text [entries illegible in this scan]. Find a basis of ker(Aᵀ), and draw a sketch illustrating the formula (im(A))⊥ = ker(Aᵀ) in this case.
3. Consider a subspace V of ℝⁿ. Let v̄₁, ..., v̄ₚ be a basis of V and w̄₁, ..., w̄_q a basis of V⊥. Is v̄₁, ..., v̄ₚ, w̄₁, ..., w̄_q a basis of ℝⁿ? Explain.
4. Let A be an m × n matrix. Is the formula ker(A) = ker(AᵀA) necessarily true? Explain.
5. Let V be the solution space of the linear system

    x₁ +  x₂ +  x₃ +  x₄ = 0
    x₁ + 2x₂ + 5x₃ + 4x₄ = 0.

Find a basis of V⊥.
6. If A is an m × n matrix, is the formula im(A) = im(AAᵀ) necessarily true? Explain.
7. Consider a symmetric n × n matrix A. What is the relationship between im(A) and ker(A)?
8. Consider a linear transformation L(x̄) = Ax̄ from ℝⁿ to ℝᵐ, with ker(L) = {0̄}. The pseudo-inverse L⁺ of L is the transformation from ℝᵐ to ℝⁿ given by

    L⁺(ȳ) = ( the least-squares solution of L(x̄) = ȳ ).

   a. Show that the transformation L⁺ is linear. Find the matrix A⁺ of L⁺, in terms of the matrix A of L.
   b. If L is invertible, what is the relationship between L⁺ and L⁻¹?
   c. What is L⁺(L(x̄)), for x̄ in ℝⁿ?
   d. What is L(L⁺(ȳ)), for ȳ in ℝᵐ?
   e. Find L⁺ for the linear transformation given in the text [matrix illegible in this scan].
9. Consider the linear system Ax̄ = b̄, for the 2 × 2 matrix A and the vector b̄ given in the text [entries illegible in this scan].
   a. Draw a sketch showing the following subsets of ℝ²: the kernel of A, and (ker(A))⊥; the image of Aᵀ; the solution set S of the system Ax̄ = b̄.
   b. What relationship do you observe between ker(A) and im(Aᵀ)? Explain.
   c. What relationship do you observe between ker(A) and S? Explain.
   d. Find the unique vector x̄₀ in the intersection of S and (ker(A))⊥. Show x̄₀ on your sketch.
   e. What can you say about the length of x̄₀, compared to the length of all other vectors in S?
10. Consider a consistent system Ax̄ = b̄.
   a. Show that this system has a solution x̄₀ in (ker(A))⊥. Hint: An arbitrary solution x̄ of the system can be written as x̄ = x̄ₕ + x̄₀, where x̄ₕ is in ker(A) and x̄₀ is in (ker(A))⊥.
   b. Show that the system Ax̄ = b̄ has only one solution in (ker(A))⊥. Hint: If x̄₀ and x̄₁ are two solutions in (ker(A))⊥, think about x̄₁ - x̄₀.
   c. If x̄₀ is the solution in (ker(A))⊥ and x̄₁ is another solution of the system Ax̄ = b̄, show that ‖x̄₀‖ < ‖x̄₁‖. The vector x̄₀ is called the minimal solution of the linear system Ax̄ = b̄.
11. Consider a linear transformation L(x̄) = Ax̄ from ℝⁿ to ℝᵐ, where rank(A) = m. The pseudo-inverse L⁺ of L is the transformation from ℝᵐ to ℝⁿ given by

    L⁺(ȳ) = ( the minimal solution of the system L(x̄) = ȳ )

(see Exercise 10).
   a. Show that the transformation L⁺ is linear.
   b. What is L(L⁺(ȳ)), for ȳ in ℝᵐ?
   c. What is L⁺(L(x̄)), for x̄ in ℝⁿ?
   d. Determine image and kernel of L⁺.
   e. Find L⁺ for the linear transformation

    L(x̄) = [ 1 0 0
             0 1 0 ] x̄.

12. Using Exercise 10 as a guide, define the term "minimal least-squares solution" of a linear system. Explain why the minimal least-squares solution x̄* of a linear system Ax̄ = b̄ is in (ker(A))⊥.
13. Consider a linear transformation L(x̄) = Ax̄ from ℝⁿ to ℝᵐ. The pseudo-inverse L⁺ of L is the transformation from ℝᵐ to ℝⁿ given by

    L⁺(ȳ) = ( the minimal least-squares solution of the system L(x̄) = ȳ )

(see Exercises 8, 11, and 12 for special cases).
   a. Show that the transformation L⁺ is linear.
   b. What is L⁺(L(x̄)), for x̄ in ℝⁿ?
   c. What is L(L⁺(ȳ)), for ȳ in ℝᵐ?
   d. Determine image and kernel of L⁺ (in terms of im(Aᵀ) and ker(Aᵀ)).
   e. Find L⁺ for the linear transformation

    L(x̄) = [ 2 0 0
             0 0 0 ] x̄.

14. In the figure below we show the kernel and the image of a linear transformation L from ℝ² to ℝ², together with some vectors v̄₁, w̄₁, w̄₂, w̄₃. We are told that L(v̄₁) = w̄₁. For i = 1, 2, 3, find the vectors L⁺(w̄ᵢ), where L⁺ is the pseudo-inverse of L defined in Exercise 13. Show your solutions in the figure, and explain how you found them.

[Figure: the kernel and image of L, with the vectors v̄₁, w̄₁, w̄₂, w̄₃.]

15. Consider an m × n matrix A with ker(A) = {0̄}. Show that there is an n × m matrix B such that BA = Iₙ. Hint: AᵀA is invertible.
16. Use the formula (im(A))⊥ = ker(Aᵀ) to prove the equation

    rank(A) = rank(Aᵀ).

17. Does the equation rank(A) = rank(AᵀA) hold for all m × n matrices A? Explain.
18. Does the equation rank(A) = rank(AAᵀ) hold for all m × n matrices A? Explain. Hint: Exercise 17 is useful.
19. Find the least-squares solution x̄* of the system Ax̄ = b̄, for the matrix A and the vector b̄ given in the text [entries illegible in this scan]. Use only paper and pencil. Draw a sketch.
20. By using paper and pencil, find the least-squares solution x̄* of the system Ax̄ = b̄, for the 3 × 2 matrix A and the vector b̄ given in the text [entries illegible in this scan]. Verify that the vector b̄ - Ax̄* is perpendicular to the image of A.
21. Find the least-squares solution x̄* of the system Ax̄ = b̄ [entries illegible in this scan]. Determine the error ‖b̄ - Ax̄*‖.
22. Find the least-squares solution x̄* of the system Ax̄ = b̄ [entries illegible in this scan]. Determine the error ‖b̄ - Ax̄*‖.
23. Find the least-squares solution x̄* of the system Ax̄ = b̄ [entries illegible in this scan]. Explain.
24. Find the least-squares solution x̄* of the system Ax̄ = b̄ [entries illegible in this scan]. Draw a sketch showing the vector b̄, the image of A, the vector Ax̄*, and the vector b̄ - Ax̄*.
25. Find the least-squares solutions x̄* of the system Ax̄ = b̄ [entries illegible in this scan]. Use only paper and pencil. Draw a sketch.
26. Find the least-squares solutions x̄* of the system Ax̄ = b̄, where

    A = [ 1 2 3
          4 5 6
          7 8 9 ]

and b̄ is the given vector [entries illegible in this scan]. Use paper and pencil. Draw a sketch showing the vector b̄, the image of A, the vector Ax̄*, and the vector b̄ - Ax̄*.

27. Consider an inconsistent linear system Ax̄ = b̄, where A is a 3 × 2 matrix. We are told that the least-squares solution of this system is the vector x̄* given in the text [entries illegible in this scan]. Consider an orthogonal 3 × 3 matrix S. Find the least-squares solution(s) of the system

    SAx̄ = Sb̄.

28. Consider an orthonormal basis ū₁, ū₂, ..., ūₙ in ℝⁿ. Find the least-squares solution(s) of the system

    Ax̄ = ūₙ,   where   A = [ ū₁ ū₂ ··· ūₙ₋₁ ].

29. Find the least-squares solution of the system Ax̄ = b̄, for the matrix A and the vector b̄ given in the text, whose entries involve both very large and very small powers of 10 [exact entries illegible in this scan]. Describe and explain the difficulties you may encounter if you use technology. Then find the solution using paper and pencil.
30. Fit a linear function of the form f(t) = c₀ + c₁t to the data points (0, 0), (0, 1), (1, 1), using least squares. Use only paper and pencil. Sketch your solution, and explain why it makes sense.
31. Fit a linear function of the form f(t) = c₀ + c₁t to the data points (0, 3), (1, 3), (1, 6), using least squares. Sketch the solution.
32. Fit a quadratic polynomial to the data points (0, 27), (1, 0), (2, 0), (3, 0), using least squares. Sketch the solution.
33. Find the trigonometric function of the form f(t) = c₀ + c₁ sin(t) + c₂ cos(t) which best fits the data points (0, 0), (1, 1), (2, 2), (3, 3), using least squares. Sketch the solution, together with the function g(t) = t.
34. Find the function of the form

    f(t) = c₀ + c₁ sin(t) + c₂ cos(t) + c₃ sin(2t) + c₄ cos(2t)

which best fits the data points (0, 0), (0.5, 0.5), (1, 1), (1.5, 1.5), (2, 2), (2.5, 2.5), (3, 3), using least squares. Sketch the solution, together with the function g(t) = t.
35. Suppose you wish to fit a function of the form

    f(t) = c + p sin(t) + q cos(t)

to a given continuous function g(t) on the closed interval from 0 to 2π. One approach is to choose n equally spaced points aᵢ between 0 and 2π (say, aᵢ = i · (2π/n), for i = 1, ..., n). We can fit a function

    fₙ(t) = cₙ + pₙ sin(t) + qₙ cos(t)

to the data points (aᵢ, g(aᵢ)), for i = 1, ..., n. Now examine what happens to the coefficients cₙ, pₙ, qₙ of fₙ(t) as n approaches infinity. To find fₙ(t), we have to make an attempt to solve the equations

    fₙ(aᵢ) = g(aᵢ),   for i = 1, ..., n,

or

    cₙ + pₙ sin(a₁) + qₙ cos(a₁) = g(a₁)
    cₙ + pₙ sin(a₂) + qₙ cos(a₂) = g(a₂)
        ⋮
    cₙ + pₙ sin(aₙ) + qₙ cos(aₙ) = g(aₙ),

or Aₙx̄ = b̄ₙ, where

    Aₙ = [ 1 sin(a₁) cos(a₁)        x̄ = [ cₙ         b̄ₙ = [ g(a₁)
           1 sin(a₂) cos(a₂)              pₙ                g(a₂)
           ⋮    ⋮        ⋮                 qₙ ],              ⋮
           1 sin(aₙ) cos(aₙ) ],                             g(aₙ) ].

   a. Find the entries of the matrix AₙᵀAₙ and the components of the vector Aₙᵀb̄ₙ.
   b. Find

    lim_{n→∞} (2π/n) AₙᵀAₙ   and   lim_{n→∞} (2π/n) Aₙᵀb̄ₙ.

Hint: Interpret the entries of the matrix (2π/n)AₙᵀAₙ and the components of the vector (2π/n)Aₙᵀb̄ₙ as Riemann sums. Then the limits are the corresponding Riemann integrals. Evaluate as many integrals as you can. Note that

    lim_{n→∞} (2π/n) AₙᵀAₙ

is a diagonal matrix.
   c. Find

    lim_{n→∞} [ cₙ ; pₙ ; qₙ ] = lim_{n→∞} (AₙᵀAₙ)⁻¹Aₙᵀb̄ₙ
                              = lim_{n→∞} [ ((2π/n)AₙᵀAₙ)⁻¹ ((2π/n)Aₙᵀb̄ₙ) ]
                              = [ lim_{n→∞} (2π/n)AₙᵀAₙ ]⁻¹ · lim_{n→∞} (2π/n)Aₙᵀb̄ₙ.

The resulting vector [ c ; p ; q ] gives you the coefficients of the desired function: write

    f(t) = lim_{n→∞} fₙ(t).

The function f(t) is called the first Fourier approximation of g(t). The Fourier approximation satisfies a "continuous" least-squares condition, which we will make more precise in Chapter 9.

36. Let S(t) be the number of daylight hours on the tth day of the year 1997, in Rome, Italy. We are given the following data for S(t):

    Day            t     S(t)
    January 28    28      10
    March 17      77      12
    May 3        124      14
    June 16      168      15

We wish to fit a trigonometric function of the form

    f(t) = a + b sin(2πt/365) + c cos(2πt/365)

to these data. Find the best approximation of this form, using least squares.

37. The table below lists several commercial airplanes, the year t they were introduced, and the number d of displays in the cockpit.

    Plane                     Year t    Displays d
    Douglas DC-3               '35          35
    Lockheed Constellation     '46          46
    Boeing 707                 '59          77
    Concorde                   '69         133

   a. Fit a linear function of the form log(d) = c₀ + c₁t to the data points (tᵢ, log(dᵢ)), using least squares.
   b. Use your answer in part a to fit an exponential function d = kaᵗ to the data points (tᵢ, dᵢ).
   c. The Airbus A320 was introduced in 1988. Based on your answer in part b, how many displays do you expect in the cockpit of this plane? (There are 93 displays in the cockpit of an Airbus A320. Explain.)

38. In the table below, we list the height h, the gender g, and the weight w of some young adults.

    Height h                   Gender g                      Weight w
    (in inches above 5 feet)   (1 = "female," 0 = "male")    (in pounds)
         2                         0                            110
        12                         1                            180
         5                         1                            120
        11                         1                            160
         6                         0                            160

Fit a function of the form

    w = c₀ + c₁h + c₂g

to these data, using least squares. Before you do the computations, think about the signs of c₁ and c₂. What signs would you expect if these data were representative of the general population? Why? What is the sign of c₀? What is the practical significance of c₀?

39. In the table below, we list the estimated number g of genes and the estimated number z of cell types for various organisms.

    Organism         Number of genes, g    Number of cell types, z
    Humans                 600,000                  250
    Annelid worms          200,000                   60
    Jellyfish               60,000                   25
    Sponges                 10,000                   12
    Yeast                    2,500                    5

   a. Fit a function of the form log(z) = c₀ + c₁ log(g) to the data points (log(gᵢ), log(zᵢ)), using least squares.
   b. Use your answer in part a to fit a power function z = kgⁿ to the data points (gᵢ, zᵢ).
   c. Using the theory of self-regulatory systems, scientists developed a model that predicts that z is a square-root function of g, that is, z = k√g, for some constant k. Is your answer in part b reasonably close to this form?

40. Consider the data in the table below.

    Planet     a, mean distance from the Sun    D, period of revolution
               (in astronomical units)          (in Earth years)
    Mercury            0.387                          0.241
    Earth              1                              1
    Jupiter            5.20                          11.86
    Uranus            19.18                          84.0
    Pluto             39.53                         248.5

Use the method discussed in Exercise 39 to fit a power function of the form D = kaⁿ to these data. Explain, in terms of Kepler's laws of planetary motion. Explain why the constant k is close to 1.

41. In the table below we list the public debt D of the United States (in billions of dollars), in the year t (as of September 30).

    t      '70    '75    '80     '85      '90      '95
    D      370    533    908    1,823    3,233    4,871

   a. Fit a linear function of the form log(D) = c₀ + c₁t to the data points (tᵢ, log(Dᵢ)), using least squares. Use the result to fit an exponential function to the data points (tᵢ, Dᵢ).
   b. What debt does your formula in part a predict for the year 2020?

5  DETERMINANTS

5.1 INTRODUCTION TO DETERMINANTS

In Chapter 2 we found a criterion for the invertibility of a 2 × 2 matrix: the matrix

    A = [ a b
          c d ]

is invertible if (and only if)

    ad - bc ≠ 0,

by Fact 2.3.6. The quantity ad - bc is called the determinant of the matrix A, denoted by det(A). If the matrix A is invertible, then its inverse can be expressed in terms of the determinant:

    A⁻¹ = 1/(ad - bc) [ d -b        = 1/det(A) [ d -b
                       -c  a ]                  -c  a ].

It is natural to ask whether the concept of a determinant can be generalized to square matrices of arbitrary size. Can we assign a number det(A) to any square matrix A (expressed in terms of the entries of A), such that A is invertible if (and only if) det(A) ≠ 0?

◆ The Determinant of a 3 × 3 Matrix

Let

    A = [ a₁₁ a₁₂ a₁₃
          a₂₁ a₂₂ a₂₃
          a₃₁ a₃₂ a₃₃ ].

Let us denote the three columns by ū, v̄, and w̄. The matrix A is not invertible if the three vectors ū, v̄, and w̄ are contained in a plane E. (This means that the image of A is not all of ℝ³.) In this case, the vector ū is perpendicular to the cross product v̄ × w̄, i.e.,

    ū · (v̄ × w̄) = 0

(compare with Fact A.10a in the Appendix). See Figure 1.

[Figure 1: the plane E containing v̄ and w̄, the normal vector v̄ × w̄, and the vector ū.]

What does the condition ū · (v̄ × w̄) = 0 mean in terms of the entries of the matrix A?

    ū · (v̄ × w̄) = [ a₁₁ ; a₂₁ ; a₃₁ ] · ( [ a₁₂ ; a₂₂ ; a₃₂ ] × [ a₁₃ ; a₂₃ ; a₃₃ ] )
                = [ a₁₁ ; a₂₁ ; a₃₁ ] · [ a₂₂a₃₃ - a₃₂a₂₃ ; a₃₂a₁₃ - a₁₂a₃₃ ; a₁₂a₂₃ - a₂₂a₁₃ ]
                = a₁₁(a₂₂a₃₃ - a₃₂a₂₃) + a₂₁(a₃₂a₁₃ - a₁₂a₃₃) + a₃₁(a₁₂a₂₃ - a₂₂a₁₃)
                = a₁₁a₂₂a₃₃ - a₁₁a₃₂a₂₃ + a₂₁a₃₂a₁₃ - a₂₁a₁₂a₃₃ + a₃₁a₁₂a₂₃ - a₃₁a₂₂a₁₃.

We define the last expression as the determinant of A. The work above shows that the 3 × 3 matrix A is invertible if (and only if) its determinant is nonzero. Since the formula for the determinant of a 3 × 3 matrix is rather long, you may wonder how you can memorize it. Here is a convenient rule:

Fact 5.1.1  Sarrus's rule¹
To find the determinant of a 3 × 3 matrix A, write the first two columns of A to the right of A. Then multiply the entries along the six diagonals shown below, adding the three "descending" diagonal products and subtracting the three "ascending" ones:

    det(A) = a₁₁a₂₂a₃₃ + a₁₂a₂₃a₃₁ + a₁₃a₂₁a₃₂ - a₁₃a₂₂a₃₁ - a₁₁a₂₃a₃₂ - a₁₂a₂₁a₃₃.

¹Stated by Pierre Frédéric Sarrus (1798-1861) of Strasbourg, c. 1820.

EXAMPLE 1 ► Find det(A) for

    A = [ 1 2  3
          4 5  6
          7 8 10 ].

Solution

    det(A) = 1·5·10 + 2·6·7 + 3·4·8 - 3·5·7 - 1·6·8 - 2·4·10 = -3.

This matrix is invertible. ◄

EXAMPLE 2 ► Find the determinant of the upper triangular matrix below.

    A = [ a b c
          0 d e
          0 0 f ]

Solution  We find that det(A) = adf, because all other contributions in Sarrus's formula are zero. The determinant of an upper (or lower) triangular 3 × 3 matrix is the product of its diagonal entries. ◄

EXAMPLE 3 ► For which choices of the scalar λ is the matrix

    A = [ 1-λ   1    1
           1   1-λ   1
           1    1   1-λ ]

invertible?

Solution

    det(A) = (1 - λ)³ + 1 + 1 - (1 - λ) - (1 - λ) - (1 - λ)
           = 1 - 3λ + 3λ² - λ³ + 2 - 3 + 3λ
           = 3λ² - λ³
           = λ²(3 - λ).

The determinant is 0 if λ = 0 or λ = 3. The matrix A is invertible if λ is different from 0 and 3. ◄
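Sarrus's rule is easy to turn into a short program. Here is a sketch in Python with NumPy (the function name sarrus is our own), checked against Example 1:

```python
import numpy as np

def sarrus(A):
    """Determinant of a 3x3 matrix by Sarrus's rule (Fact 5.1.1)."""
    return (A[0,0]*A[1,1]*A[2,2] + A[0,1]*A[1,2]*A[2,0] + A[0,2]*A[1,0]*A[2,1]
          - A[0,2]*A[1,1]*A[2,0] - A[0,0]*A[1,2]*A[2,1] - A[0,1]*A[1,0]*A[2,2])

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])
print(sarrus(A))            # -3.0, as in Example 1
print(np.linalg.det(A))     # also approximately -3.0
```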

◆ The Determinant of an n × n Matrix

We may be tempted to define the determinant of an n × n matrix by generalizing Sarrus's rule (Fact 5.1.1). For a 4 × 4 matrix, a naive generalization of Sarrus's rule produces the expression

    a₁₁a₂₂a₃₃a₄₄ + ··· + a₁₄a₂₁a₃₂a₄₃ - a₁₄a₂₃a₃₂a₄₁ - ··· - a₁₃a₂₂a₃₁a₄₄.

This expression can be 0 for an invertible matrix: this happens, for instance, for the invertible matrix obtained from I₄ by swapping the second and third rows, since each of the eight "wrap-around diagonals" then contains a zero entry. This shows that we cannot define the determinant by generalizing Sarrus's rule in this way: recall that the determinant of an invertible matrix must be nonzero.

We have to look for a more subtle structure in the formula

    det(A) = a₁₁a₂₂a₃₃ + a₁₂a₂₃a₃₁ + a₁₃a₂₁a₃₂ - a₁₃a₂₂a₃₁ - a₁₁a₂₃a₃₂ - a₁₂a₂₁a₃₃

for the determinant of a 3 × 3 matrix. Note that each of the six terms in this expression is a product of three factors involving exactly one entry from each row and each column of the matrix. For lack of a better word, we call such a choice of a number in each row and column of a square matrix a pattern in the matrix. The simplest pattern is the diagonal pattern, where we choose all numbers aᵢᵢ on the main diagonal. For you chess players, a pattern in an 8 × 8 matrix corresponds to placing 8 rooks on a chessboard so that none of them can attack another.

How many patterns are there in an n × n matrix? Let us see how we can construct a pattern column by column. In the first column we have n choices. For each of these, we then have n - 1 choices left in the second column. Therefore, we have n(n - 1) choices for the numbers in the first two columns. For each of these, there are n - 2 choices in the third column, and so on. When we come to the last column, we have no choice, because there is only one row left. We conclude that there are n(n - 1)(n - 2) ··· 3 · 2 · 1 patterns in an n × n matrix. The quantity 1 · 2 · 3 ··· (n - 2)(n - 1)n is called n! (read "n factorial").

We observed that the determinant of a 3 × 3 matrix is obtained by adding up the products associated with some of the patterns in the matrix and then subtracting the products associated with some other patterns. We note first that the product a₁₁a₂₂a₃₃ associated with the diagonal pattern is added. To find out whether the product associated with another pattern is added or subtracted, we have to determine how much this pattern deviates from the diagonal pattern. More specifically, we say that two numbers in a pattern are inverted if one of them is to the right and above the other. The six patterns of a 3 × 3 matrix have 0, 1, 1, 2, 2, and 3 inversions, respectively; the diagonal pattern is the only pattern without any inversions. When we compute the determinant of a 3 × 3 matrix, the product associated with a pattern is added if the number of inversions in the pattern is even and is subtracted if this number is odd. Using this observation as a guide, we now define the determinant of a larger square matrix.

Definition 5.1.2  Determinant
A pattern in an n × n matrix is a way to choose n entries of the matrix such that there is one chosen entry in each row and in each column of the matrix. Two numbers in a pattern are inverted if one of them is located to the right and above the other in the matrix. We obtain the determinant¹ of an n × n matrix A by adding up the products associated with all patterns with an even number of inversions and subtracting the products associated with all patterns with an odd number of inversions.

¹It appears that determinants were first considered by the Japanese mathematician Seki Kowa (1642-1708). Seki may have known that the determinant of an n × n matrix has n! terms and that rows and columns are interchangeable (compare with Fact 5.2.1). The French mathematician Alexandre-Théophile Vandermonde (1735-1796) was the first to give a coherent and systematic exposition of the theory of determinants. Throughout the 19th century, determinants were considered the ultimate tool in linear algebra, used extensively by Cauchy, Jacobi, Kronecker, and others. Recently, determinants have gone somewhat out of fashion, and some people would like to see them eliminated altogether from linear algebra. (See, for example, Sheldon Axler's article "Down with Determinants" in The American Mathematical Monthly, February 1995, where we read: "This paper will show how linear algebra can be done better without determinants." Read it and see what you think.)
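Definition 5.1.2 translates directly into code: enumerate all patterns (equivalently, all permutations), count inversions, and sum the signed products. Here is a minimal Python sketch (the function name is ours; this is hopelessly slow for large n, which is the point of Section 5.2):

```python
import numpy as np
from itertools import permutations

def det_by_patterns(A):
    """Determinant via Definition 5.1.2: sum over all patterns, each
    product signed by the parity of its number of inversions."""
    n = A.shape[0]
    total = 0.0
    for p in permutations(range(n)):      # p[i] = column chosen in row i
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if p[i] > p[j])  # entry in row i is above and right of row j's
        product = 1.0
        for i in range(n):
            product *= A[i, p[i]]
        total += (-1) ** inversions * product
    return total

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])
print(det_by_patterns(A), np.linalg.det(A))   # both approximately -3
```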

EXAMPLE 4 ► Count the number of inversions in the pattern indicated in the 5 × 5 matrix below.

[5 × 5 matrix with a circled pattern; entries illegible in this scan.]

Solution  There are seven inversions, as indicated by the bars. ◄

EXAMPLE 5 ► Apply Definition 5.1.2 to a 2 × 2 matrix, and verify that the result agrees with the definition given on page 239.

Solution  There are two patterns in the 2 × 2 matrix

    A = [ a b
          c d ]:

the diagonal pattern {a, d}, with no inversions, and the pattern {b, c}, with one inversion. Therefore,

    det(A) = ad - bc. ◄

EXAMPLE 6 ► Find det(A) for

    A = [ 2 0 0 0 0
          0 3 0 0 0
          0 0 5 0 0
          0 0 0 7 0
          0 0 0 0 9 ].

Solution  The diagonal pattern (with no inversions) makes the contribution 2 · 3 · 5 · 7 · 9 = 1890. All other patterns contain at least one zero and will therefore make no contribution toward the determinant. We can conclude that

    det(A) = 2 · 3 · 5 · 7 · 9 = 1890.

More generally, we observe that the determinant of a diagonal matrix is the product of the diagonal entries. ◄

EXAMPLE 7 ► Find det(A) for the 6 × 6 matrix A below, which has a single nonzero entry in each row and in each column. [Exact placement of the entries illegible in this scan.]

Solution  As in Example 6, only one pattern makes a nonzero contribution toward the determinant: the pattern consisting of the six nonzero entries. This pattern has 7 inversions, so its product is subtracted:

    det(A) = -3 · 2 · 1 · 8 · 5 · 2 = -480. ◄

EXAMPLE 8 ► Find det(A) for the 5 × 5 matrix A below. [Entries illegible in this scan.]

Solution  Most of the patterns in this matrix contain zeros. Again, only one pattern makes a nonzero contribution, but this fact is not as obvious here as it was in Examples 6 and 7. In the second column, we must choose the second component, 3. Then, in the fourth column we must choose the third component, 2. Next, think about the last column; and so on. The resulting pattern has an odd number of inversions, so that

    det(A) = -6 · 3 · 4 · 2 · 1 = -144. ◄

EXAMPLE 9 ► Find det(A) for

    A = [ 1 2 3 4 5
          0 2 3 4 5
          0 0 3 4 5
          0 0 0 4 5
          0 0 0 0 5 ].

Solution  Note that A is an upper triangular matrix. The diagonal pattern makes the contribution 1 · 2 · 3 · 4 · 5 = 120. Any pattern other than the diagonal pattern contains at least one entry below the diagonal (Exercise 39), and the contribution such a pattern makes to the determinant is therefore 0. We conclude that

    det(A) = product of diagonal entries = 1 · 2 · 3 · 4 · 5 = 120. ◄

We can generalize this result:

Fact 5.1.3  The determinant of an (upper or lower) triangular matrix is the product of the diagonal entries of the matrix.

EXAMPLE 10 ► Find det(A) for

    A = [ a b 0 0
          e f 0 0
          g h k l
          0 0 0 n ].

Solution  Most of the 4! = 24 patterns in this matrix contain zeros. To get a pattern that could be nonzero, we have to choose the entry n in the last row and the entry k in the third. In the first two rows we are then left with two choices, {a, f} (no inversions) and {b, e} (one inversion). Thus,

    det(A) = afkn - ebkn = (af - eb)kn.

Note that the first factor is the determinant of the matrix

    [ a b
      e f ]. ◄

EXERCISES

GOALS  Use patterns and inversions to find the determinant of an n × n matrix. Use determinants to check the invertibility of 2 × 2 and 3 × 3 matrices.

Find the determinants of the matrices in Exercises 1 to 20. [The twenty matrices are illegible in this scan.]

Use the determinant to find out which matrices in Exercises 21 to 25 are invertible. [Matrices illegible in this scan.]

For the matrices A in Exercises 26 to 32, find all (real) numbers λ such that the matrix λIₙ - A is not invertible. [Matrices illegible in this scan.]

33. If a matrix A has a row of zeros or a column of zeros, what can you say about its determinant?
34. A square matrix is called a permutation matrix if each row and each column contains exactly one entry 1, with all other entries being 0. Examples are Iₙ and

    [ 0 1 0
      0 0 1
      1 0 0 ].

What are the possible values of the determinant of a permutation matrix? Explain.
35. Find the determinant of the 100 × 100 matrix A whose ijth entry is 1 if i + j = 101, all other entries being 0 (the entries 1 fill the counterdiagonal of A).
36. Find the determinant of the n × n matrix A whose ijth entry is 1 if i + j = n + 1, all other entries being 0. Give your answer in terms of the remainder you get when you divide n by 4.
37. For 2 × 2 matrices A, B, C, form the partitioned 4 × 4 matrix

    M = [ A 0
          C B ].

Express det(M) in terms of the determinants of A, B, and C.
38. For 2 × 2 matrices A, B, C, D, form the partitioned 4 × 4 matrix

    M = [ A B
          C D ].

Is it necessarily true that det(M) = det(A) det(D) - det(B) det(C)?
39. Explain why any pattern in a matrix A, other than the diagonal pattern, contains at least one entry below the diagonal and at least one entry above the diagonal.
40. For which choices of α is the given 3 × 3 matrix A, whose entries involve cos α and sin α, invertible? [Matrix illegible in this scan.]
41. For which choices of x is the given matrix A invertible? [Matrix illegible in this scan.]

In Exercises 42 to 45, let A be an arbitrary 2 × 2 matrix with det(A) = k.

42. If B is obtained from A by multiplying the first row by 3, what is det(B)?
43. If C is obtained by swapping the two rows of A, what is det(C)?
44. If D is obtained from A by adding 6 times the second row to the first, what is det(D)?
45. What is det(Aᵀ)?
46. Consider two vectors v̄ and w̄ in ℝ³. Form the matrix

    A = [ v̄ × w̄   v̄   w̄ ].

Express det(A) in terms of ‖v̄ × w̄‖. For which choices of v̄ and w̄ is A invertible?
47. If A is an n × n matrix, what is the relationship between det(A) and det(-A)?
48. If A is an n × n matrix and k is a constant, what is the relationship between det(A) and det(kA)?
49. If A is an invertible 2 × 2 matrix, what is the relationship between det(A) and det(A⁻¹)?
50. Does the matrix A below have an LU factorization (see Exercises 2.4.58 and 2.4.61)? [Matrix illegible in this scan.]
51. Find the determinant of the (2n) × (2n) matrix

    A = [ 0   Iₙ
          Iₙ  0 ].

52. Is the determinant of the 5 × 5 matrix A below, several of whose entries equal 1000 while the rest are small, positive or negative? How can you tell? Do not use technology. [Matrix partially illegible in this scan.]

5.2 PROPERTIES OF THE DETERMINANT

The main goal of this section is to show that a square matrix of any size is invertible if (and only if) its determinant is nonzero. As we work toward this goal, we will discuss a number of other properties of the determinant that are of interest in their own right.

◆ The Determinant of the Transpose

EXAMPLE 1 ► Let A be the 5 × 5 matrix given in the text [entries illegible in this scan]. Express det(Aᵀ) in terms of det(A). You need not compute det(A).

Solution  For each pattern in A we can consider the corresponding (transposed) pattern in Aᵀ. The two patterns (in A and Aᵀ) involve the same numbers, and they contain the same number of inversions, but the roles of the two numbers in each inversion are reversed. Therefore, the two patterns make the same contribution to the respective determinants. Since these observations apply to all patterns of A, we can conclude that det(Aᵀ) = det(A). ◄

Since we have not used any special properties of the matrix A in Example 1, we can state more generally:

Fact 5.2.1  Determinant of the transpose
If A is a square matrix, then

    det(Aᵀ) = det(A).

This symmetry property will prove very useful. Any property of the determinant expressed in terms of the rows holds for the columns as well, and vice versa.

◆ Linearity Properties of the Determinant

EXAMPLE 2 ► Consider the transformation T(x̄) = det(A), from ℝ⁴ to ℝ, where A is a 4 × 4 matrix in which three columns are filled with fixed numbers and the remaining column is the vector x̄ = [ x₁ ; x₂ ; x₃ ; x₄ ] [the fixed entries are illegible in this scan]. Is this transformation linear?

Solution  Each pattern in the matrix A involves three of the fixed numbers and one of the variables xᵢ; that is, the product associated with a pattern has the form kxᵢ, for some constant k. The determinant itself, obtained by adding and subtracting such products, has the form

    T(x̄) = c₁x₁ + c₂x₂ + c₃x₃ + c₄x₄,

for some constants cᵢ. Therefore, the transformation T is indeed linear, by Definition 2.1.1. (What is the matrix of T?) ◄

We can generalize:

Fact 5.2.2  Linearity of the determinant in the columns
Consider the vectors v̄₁, ..., v̄ⱼ₋₁, v̄ⱼ₊₁, ..., v̄ₙ in ℝⁿ. Then the transformation

    T(x̄) = det [ v̄₁ ··· v̄ⱼ₋₁ x̄ v̄ⱼ₊₁ ··· v̄ₙ ]

from ℝⁿ to ℝ is linear. This property is called linearity of the determinant in the jth column. Likewise, the determinant is linear in the rows.

For example, the transformation taking [ x₁ ; x₂ ; x₃ ; x₄ ] to the determinant of the 4 × 4 matrix whose third row is [ x₁ x₂ x₃ x₄ ] (the other rows being fixed) is linear, by linearity in the third row.

Consider the linear transformation

    T(x̄) = det [ v̄₁ ··· x̄ ··· v̄ₙ ]

introduced in Fact 5.2.2. Using the alternative characterization of a linear transformation (Fact 2.2.1), we infer that

    T(x̄ + ȳ) = T(x̄) + T(ȳ)   and   T(kx̄) = kT(x̄),

for all vectors x̄ and ȳ in ℝⁿ and all scalars k. We can write these results more explicitly:

Fact 5.2.3  Linearity of the determinant in the columns
a. If three n × n matrices A, B, C are the same except for the jth column, and the jth column of C is the sum of the jth columns of A and B, then det(C) = det(A) + det(B):

    det [ v̄₁ ··· x̄ + ȳ ··· v̄ₙ ] = det [ v̄₁ ··· x̄ ··· v̄ₙ ] + det [ v̄₁ ··· ȳ ··· v̄ₙ ].

b. If two n × n matrices A and B are the same except for the jth column, and the jth column of B is k times the jth column of A, then det(B) = k det(A):

    det [ v̄₁ ··· kx̄ ··· v̄ₙ ] = k det [ v̄₁ ··· x̄ ··· v̄ₙ ].

The same holds for the rows. For example, the determinant of a matrix whose second row is x̄ + ȳ is the sum of the determinants of the two matrices obtained by replacing that row by x̄ and by ȳ, respectively (the other rows being kept fixed).

◆ Determinants and Gauss-Jordan Elimination

Suppose you have to find the determinant of a 20 × 20 matrix. Since there are 20! ≈ 2 · 10¹⁸ patterns in this matrix, you would have to perform more than 10¹⁹ multiplications to compute the determinant using Definition 5.1.2. Even if a computer performed 1 billion multiplications a second, it would still take over 1000 years to carry out these computations. Clearly, we have to look for more efficient ways to compute the determinant.

So far in this text, we have found Gauss-Jordan elimination to be a powerful tool for solving numerical problems in linear algebra. If we could understand what happens to the determinant of a matrix as we row-reduce it, we could use Gauss-Jordan elimination to compute determinants as well. We have to understand what happens to the determinant of a matrix as we perform the three elementary row operations: (a) dividing a row by a scalar, (b) swapping two rows, and (c) adding a multiple of a row to another row.

EXAMPLE 3 ► Consider a 5 × 5 matrix A and the matrix B obtained from A by swapping the first two rows [the entries of A are illegible in this scan]. Express det(B) in terms of det(A).

Solution  For each pattern in A we can consider the corresponding pattern in B. The two patterns (in A and B) involve the same numbers, but the number of inversions for the pattern in B is 1 less than for A: we lose the inversion formed by the numbers in the first two rows of A (if the numbers in the first two rows of A do not form an inversion, then an inversion is created in B). This implies that the respective contributions the two patterns make to the determinants of A and B have the same absolute value, but opposite signs. Since these remarks apply to all patterns of A, we can conclude that

    det(B) = -det(A). ◄

If the matrix B is obtained from A by swapping any two rows (rather than the first two), keeping track of the number of inversions is a little trickier. However, it is still true that the number of inversions changes by an odd number for each pattern (exercise), so that det(B) = -det(A), as above.

EXAMPLE 4 ► If a matrix A has two equal rows, what can you say about det(A)?

Solution  Swap the two equal rows, and call the resulting matrix B. Since we have swapped two equal rows, we have B = A. Now

    det(A) = det(B) = -det(A),

so that det(A) = 0. ◄

- - 256 •

Sec. 5.2 Properties of the Determinant •

Chap. 5 Determinants

c. FinaUy, what happens to the determ:inant if we add k times the ith row to ./' the jth row?

25 7

Then det( rref(A)) = ( - 1)'

1

k tkz . . . k,

det(A)

or V;

A=

-+

det(A) = ( -

B=

by Fact 5.2.4. Let us examine ilie cases when A is invertible and when it is not. If A is invertible, then rref(A) = / 11 , so that det(rref(A)) = 1, and

V_;

det (A) = (- l)sktkz . . . k,.

By linearity in ilie j th row, we find that

~

~act5.2.5

= det(A)

+kdet

det(B) = det

V;

Vj

A square matrix A is inve.rtible if and only if det(A) # 0. If A is invertible, the discussion above also produces a convenient method to compute the deterrninant, using Gauss-Jordan elimination.

by Example 4.

Fact 5.2.4

----

Note that this quantity is not 0 because all the scalars k; are different from 0. If A is not invertible, then nef(A) is an upper triangular matrix with some zeros on the diagonal, so iliat det ( rref(A)) = 0 and det(A) = 0. We have established the following fundamental result:

v·/,

V;

lYk 1k 2 ... k, det( rref(A)) ,

0 Elementary row operations and determinants

Algorithm 5.2.6 I

Consider an invertible matrix .4. Suppose you swap rows s times as you row-reduce 1 A and you divide various rows by the scalars kt , kz , .. . , k, . Then

a. If B is obtained from A by dividing a row of A by a scalar k , then det(B)

= (l jk ) det(A). Suppose you have to find ilie determinant of a 20 x 20 matrix. Gauss-Jordan elimination of an n x n matrix requires less than n 3 operations, that is, less than 8000 for a 20 x 20 matrix. lf a computer performed 1 billion operations a se-cond, iliis would take less than 100 ~ 000 of a second (compared to the more than 1000 years it takes when we use Definition 5.1.2). Here is a much briefer example.

b. If B is obtained from A by a row swap, then det(B ) = - det(A). c. If B is obtained from A by adding a multiple of a row of A to another row, ilien det(B) = det(A). Analogaus results hold for elementary column operations.

L)

EXAMPLE 5 ... Find

: ~ _!]. I

Now that we understand how elementary row operations affect determinants, we can describe the relationship between the determinant of a matrix and that of its reduced row echelon form. Suppose that in the course of Gauss-Jordan elimination we swap rows s times and divide various rows by the scalars k 1, k 2 , •. • , k,.

1

2

I Here, it is not necessary to reduce A all the way to rref. It suffices to bring A into upper triangu lar form with l 's on the diagonal. (Why?)

258 •

Sec. 5.2 Properties of the Determinant •

Chap. 5 Determinants

+- The Determinant of an Inverse

Solution We perform Gauss-Jordan elimination , keeping a note of all row swaps and row divisions we do .

l

~ ~ ~]~ lb ~ ~ ~] . ;-2 l~ I I

2

2 I

- 1 2

I I

l

2 I

- 1

1 L

--7

2

0 0

If A is an in vertible n x n matrix, what is the relationship bet.ween det(A) and det(A - 1)? By definition of the inverse matri x, we have

~ ~] ~

2

- 1 2

AA - 1 = ! 11 •

- (!) - (I )

By taking the deterrrunants of both side and using Fact 5.2.7, we find that det(AA - 1)

lI~ l ~ jl J~ Il~0 l j iJ +;:) l~ 6 ~

= det (A) det(A - 1) = det(/ = 1. 11

)

1

- 1

_;

0

0

-2

..;- (-2)

0

0

0

_i J

Fact 5.2.8

Determinant of an inverse If A is an in vert.ible matri x, then

I

Wemade two swaps and performed row divi sions by 2, - .L , -2, so that det(A ) = ( - 1) 2 · 2 · (- 1) · ( - 2)

= 4.

ctet(A - ' )

1 - . det(A)

= (ctet(A)r ' = -

<11111

EXAMPLE 6 .... If S is an in vertible n x n maui x, and A an arbitrary n x n matri x, what is the

+ The Determinant of a Product

relationship between det(A ) and det(S- 1 AS)?

Con ider two n x n maui ces A and B . What is the relation hip between det (A B ), det(A) , and det(B)? First suppo e A is invertible. Our work applies the following auxiliary result (the proof of which is left for Exercise 20). Consider an invertible n x 11 matrix A and an arbitrary n x n matrix B . Then, rref[ A : A B ] = [In

Solution det cs - ' AS) = detcs- ' ) det(A) det(S) = ( det(S) I det (A) det(S) = det (A)

(Fact 5.2.7) (Fact 5.2.8) (The determinants commute, because they are scalars.)

r

B].

Suppo e we swap rows s times and divide rows by k 1, k_, ... , kr as we perform this elirnination. Considerj'ng tbe left and right ·'hal ves" of the matrices separately, we conclude that

0

~-.

Thus, det(S- 1 AS)

::\l P ! I I • .::_ +- Minors and Laplace Expansion 1

. / v\

~

and det (A B ) = (-I Y k ,k 2 ... k, det (B) = det(A) det(B).

Therefore, det(A B) = det(A) det ( B ) when A is invertible. If A is not invertible, then neither is A B (exerci e), so that det(AB ) = det(A) det(B) = 0 .

= det (A).

r

Recall the formula det(A) =

Fact 5.2.7

259

a 11a22a33

+ a n a 23a3 1 + a13a2 1a32

- a13a22a3 1 - a,,a23a32 -

a,2a2 1a 33

for the determinant of a 3 x 3 matrix (Fact 5.1.1). Collecting the two terms involving a 11 and then those involvin g a 2 1 and a 3 , , we can write det(A)

=

a 11 (a22a33 -

+ a 2 1 a 2a 13 + a 1 (a 12a23 -

a3_a23) a 12a 3)

a22a13)

Determinant of a product lf A and B are

11

x n matrices, then

(Where have we seen this fonnula before?)

det(A B) = det(A ) det(B ).

An alternative proof of thi s re ult is outlined in Exerci se 42.

I Named aftcr the French mathematicia n Pierre-Simon Marqu i de l a pl acc ( 1749-1827). l aplace is perhaps be st known f r his investigation inlo the stab ilily of the solar system. He was also a prominent member of 1he co mmitt ee whi ch aided in the organization of the metric syste m.

260 •

Sec. 5.2 Properties of the Determinant •

Chap. 5 Determinants Note that computing the determi nant this way require o nl y 9 multiplications, compared to 12 for Sarrus ' s formul a. Let us analyze the tructure oftbis formu ht a Ii ttle more closely . Tbe tenns (022 0 33 - a 32a 23 , (a 32a 13 - a ,2a33) , and (a 12a 23 a 22 o 13 ) can be thought of as the determinants of submatrice of A, as follows . The expression a 22 a 33 - a 32a 23 is the detemtinant of the mat:rix we get w hen we omit the first row and the first column of A: -

[a'

a ,

a 22

a"

0 33

0 32

~ au det [a , a,

+ 031 det [ a0 'I

J

a,;

J - a def '

G t2

an 032

0 33

a. 1

0 32

G t2

0 22

21

an] (/23

We ca n now represent the determinant of a 3 x 3 matrix more succinctly:

T hi s representati on of the determi nant is called the Laplace expansion of det(A) do wn the .first column. Likewise, we can expand along the first row ( since det(AT ) = det(A)) :

L ikewise for the other sununands: det(A )

261

an

j

0 33

In fac t, we can expand along any row or dow n any colurnn (we can verify this directly, or argue in terms of row or column swaps). For example, the Laplace expansio n down the second colurnn is

.

and tbe L aplace expansion along the thi rd row is

To state these observations more succinctly, we introduce som e tenni nology.

Definition 5.2.9

The m le for the signs is as follo ws: The summand aij det (Aij) has a negative sign if the sum of the two indices i + j is odd. The signs follow a Checkerboard pattern :

Minors '

For an n x n matrix A , Iet Aij be the matrix obtained when we omit the itb row and the jth column of A . The (n - 1) x (n - l ) matrix A;j is called a minor of A.

A=

G I!

01 2

Gt j

0 111

0 2!

an

0 2j

(/211

[

~

+

~]

The following example will help us to generali ze Laplace expansion to larger matrices.

EXAMPLE 7 .... Consider an n x n matrix A whose jth column is an

a",

Gi2

0112

G;j

a"j

a;"

G I!

(/ 12

(/2 1

ac2

an

0;2

Clll i

(/11 2

0 0

e;: a ," a2n

a " ll

A=

o,,

0 12

lj

a1 11

o,z,

an

2j

G2n

au,

0

a""

/

A ij =

j th column

a",

On 2

nj

What is the relationship between det (A) and det(A;j)?

262 •

Chap. 5- Determ.inants

263

Sec. 5.2 Properlies of the Determinant •

Solution

Proof

As we compute det(A ), we need to con ider only tho e pattems in A that involve the 1 in the jth colurru1, since all other pattems will contai.n a zero. Each such pattern corresponds to a pattern of Aij . involving the same numbers except for the 1 we have omitted. For example,

Sin ce det(Ar) = det(A), it suffices to dernon trate Fact 5.2. 10 for the colum.ns. Consid er an n x n matrix

VII

5

8

4

A=

3

~ i = 3

3

4

0

5

with jth column

8 9

8 9 7

4

~

7

58

'\ j =4 8 inversion

5 inversions

The products associated with the two pattems are the same, but what happen to tbe nurnber of inversions? We lose aU the inversion involving the 1 in the jth colurnn. We leave it as Exercise 36 to show that the number of such inversion is even if i + j is even and odd if i + j is odd (in our example, both numbers are odd). Since these remarks apply to all the patterns of A involving the 1 in the jth column, we can conclude that det(A) = ± det(A ;j ) , taking the plus sign if i + j is even and the negative sign if i + j is odd. We can write more succinctly:

By linearity of the determinant in the jth colurnn (Fact 5 .2.3), we can write

det(A)=O tjdet

Vt

Vn

+

Clz j

det

VJ

VII

det(A) = (- J)i+j det(A ij).

Now we are ready to show how to compute the detenninant of any n x n rnatrix by Laplace expansion. + · ·· + G11j det

Fact 5.2.10

VJ

VII

Laplace expansion We can compute the determinant of an n x n matrix by Laplace expansion along any row or down any column. Expansion along the ith row: n

det(A) = L (- J) i+ ja;jdet (A;j) j=J



(by Example 7). Again, the signs follow a Checkerboard pattern:

+

Expansion down the jth colurnn: n

det(A) = L (-l)i+j aijdet(Aij) i= l

+

+

+ +

+

+ +

264 •

Chap. 5 Determinants

Sec. 5.2 Prope rties of the Determlnant •

EXAMPLE 8 .... Use Laplace expan ion to compute det(A) for

!~

A= [ : 5

0

3.

[-l -2

0

-1 2 1 6

2 I 14

-2]

10

6 10 33

4.

.

0 0 0

2 0 5

2 3

0 7

4

3

5

1

9

0 0 9

0 4

265

1 2 9 1 8

1

1 1 2 2 2 2 1 3 3 3 1 1 4 4 1 1 I 5

Solution Looking for rows or colurnns with niany zeros to simplify the computations, we can choo e the econd col umn. det(A) = -a 12 det(A 12 ) +an det(A22 -an det (A32) = 1 det

= det

+ a42 det(A42)

6. Find the deterrninant of the matrix

[~l_ ~ r ~ ~]

;t~

I 1 2] [5 0 3 9

= _ det [ ;

2

0

~J

- 2 det

~ ] + 3 det [ ~

2det [

5.

~

[I 1 2] 9

3

0

5

0

3

M"=

~ ] + 3 det

; ] - 2 ( 2 det [ ;

U~ ])

1 I I

1 2 2

1

1

2 3

2

3

1 2

3

n

for arbitrary n (the ijth entry of M~ is the rninimum of i and j). 7. Find the determinant of the following matrix using paper and pencil:

/' Expand down the last column

A=

= -20 - 21 - 2(-30- 18) =55.

I 2

0

3 0 0

2 0 0

0 0

9 7 I 5 0 7 0 5

1

8 6 3 4 3

8. Consider the linear transformation Computing the determinant usi ng Laplace expansion is a bit more efficient

than using the definition of the determ.inant, but a Iot less efficient tha.n Gau sJordan eümination (exerci e). T (x) = det

x

v2

v3

v"

EX ER C I S ES GOALS

Compute the determinant by Gauss-Jordan eümination or by Laplace expansion. Find the detenninants of products, inverses, and transposes. Apply the linearity of the determinant in the rows and columns. Use the determinant to check whether a matrix i invertible.

Find the determinants of the matrices in Exercises 1 to 5, using either Gauss-Jordan elünination or Laplace expansion, but no technology.

1. [;

2 5

8

~l

10

2.

[~

1 0 9 9 9

I

0 9

n

from !R" to JR, where ii2. ii3, . . . , v" are linearly independent vector in !R". Describe image and kerne) of this transformation. and determine their dimensions. Consider a 4 x 4 matrix A with rows ii,, V2, V), deterrninants in Exercises 9 to 14.

9. det [

-9H

• det

[U

V4.

If det (A) = 8, find the

11. det

[q V4

266 •

Sec. 5.2 Properties of the Determinant •

Chap. 5 Determinants

u.

l"' ~; "l 9

det

Explai n why f(t) is a polynomial of n th degree. Find the coefficient k of . . . , a" _ 1 • Explain why

t" using Yandermonde 's formula for a0

13.

r

26 7

f(ao) = f(a J) = · · · = f(a" _ J) = 0.

Conclude that

= k(t - ao)(t

f(t)

15. Ju tify the following statement: if a square mau·ix A ha

two identical

columns, then det(A) = 0.

for the calar k you found above. Substitute t = a,. to dernon trate Vandennonde 's fo rmula.

18. Use Exercise 17 to find

16. Consider two distinct number , a and b. We define the function

1

1]

b

a a2

f(t)= det [ ;

'2

det

.

b-

a. Show that f( r) is a quadrati c function . What is the coefficient of t 2 ? b. Explain why f(a) = f(b) = 0. Conclude that f( t ) = k(r - a)(r - b) , for ome constant k . Find k, using your work in part a. c. For which values of r i the matrix invertible?

A=

•• • ,

19. For n distinct scalars al> a 2 ,

1 1

4

9

8

27

1

a" a;

o"0

a]

a"n

Cl]

1 3

1

1

4

5

16 25 64 125 16 81 256 625

. ..,

a 11 , find 02 2 C/2

a2

ci}_

... a"n

a" ] n

:T

det

a"1

02

20. a. For an invertible n x n matrix A and an arbitrary n x n matrix B show that rref[A

Yandermonde showed that det (A) =

I 2

[ 0a,

a,.. We defi ne the (n + 1) x (n + 1)

1 ao a2

0

1 1

without u ing technology.

17. Vandermonde determinants [introduced by Alexandre Theo phile Vandermonde]. Consider distinct calars a0 , a 1, matrix

- a1) ... (r - a,. _ 1)

AB]

= (I,.

B] .

Hint: The left part of rref[A AB] is rref(A) = /". Write rref[ A : A B] = [111 : M]; we have to show that M = B . To demonstrate this note that the vector

n

(a; - aj) ,

i>j

B ~; ] [-e;

the product of all difference (a; - aj). where i exceed j .

a. Yerify this formula in the ca e of n = I. b. Suppose the Vandermonde formula holds for n - l. You are asked to dernarrstrate it for n . Consider the function

is in the kerne! of [A

all i

1 ao

f(t) = det

a2

0

a"0

C/ j C/2

1

a"I

AB] and therefore in the kerne! of (I"

= I, .. . , n.

b. What does the formula a" _ J (/ 2

11 - 1

a"n-

1

'2 t"

rref[A : AB]= [I" : B] teil you if you set B = A-I?

M], for

268 •

Chap. 5 Determinants

Sec. 5.2 Properties of the Oeterminant •

ZV Consider two distinct point [ :~ ] and [ ~~ ]

in the plane. Ex plain why the

solutions [ ~~ ] of the equation

30. The cross product in IR". Consider the vectors ü2 ü3 , u·ansformatio n - -

T (i) = det

form a Iine and why thi line goe through the two poin ts [ 22. Consider three disti nct points [

:~

l [~~ J[~~ ]

1

l

x~

x,

c,1 C?

X2

c~

+ xi

and [

~~ ] .

in the plane. Describe the

set of all points [ ~~ ] atisfying the equation

det

:~ ]

+ c~

l

x

ü" in IR". The

-

VII

linear. Therefore, there i a unique vector ü in IR" uch that T(x ) = x · ü

for all x in IR" (compare with Exercise 2.1.43c). This vector ü is called the cross product of ü2 , ü3 , ... , ü,., written a

- = V2- X V]-

U

= 0.

... ,

269

-

X · · · X Vn.

In other words, the cross product is defined by th fact th at

-

VJ

23. Consider an 11 x n matrix A such that both A and A- t have integer entries. Wbat are the possible values of det(A)?

If det(A) = 3 for some 11 x 11 matrix, what is det(ATA)? iS)If A is an invertible matrix, what can you say about the 24.

sign of det (AT A)?

26. lf A is an orthogonal matrix, what are the possible values of det(A)? 27. We say that an 11 x n matrix A is skew-symmetric if AT = -A . a. Give an example of a nonzero kew-symmetric 2 x 2 matrix . b. What can you say about the diagonal elements of a skew-symmetric matrix? c. Consider a skew-symmetric n x n matrix A, where n is odd. Show that A is noninvertible, by showing that det(A) = 0. 28. Consider an m x n matrix A = QR

where Q is an m x n matrix with orthonormal columns, and R is an upper triangular n x n matrix with positive diagonal entries r 11 , ••• , r 1111 • Express det(A TA) in terms of the scalars ru. What can you say about the sign of det(AT A)?

w

v w].

29. Consider two vectors ü and in IR". Form the matrix A = [ Expre s det(A T A) in terms of lliill, llwll , and ü. w. What can you say about the sign of the result?

x

for all in IR" . Note that the cross product in IR" is defined only for n - 1 vectors (for example, you cannot form the cross product of just 2 vectors in IR4 ). Since the ith component of a vector w is we can find the cross product by components as follows:

e; · w,

ith component of V2

X

VJ

X . .. X

Vn

= e; . (v2 X ... X v/1)

~ ~; ~ det [

v,

v,

a. When is ii2 x ü3 x · · · x Ü11 = 0? Give yorn answer in term of linear independence. b. Find e2x e3 x ... x e". c. Show that ii2 x ü3 x · · · x ü" is orthogonal to all the vectors ii;, for i = 2 ... 'n. d. What is the relationship between Ü2 x Ü3 x · · · x ü" and ii3 x ii2 x · · · x ü"? (We have swapped the first two factors .) e. Express det [ii2 x ii3 x · · · x ii" ii2 v3 v/1 ] in terms of llii2 V] X (continued) ···X V" II.

2 70 •

Sec. 5.2 Prope rties of the Determinant •

Chap. S Determinants 3

f. How do we k:now that the cro product of two vector in IR as defined here, is the same as the usual cro s product in IR 3 ( ee Defi nition A.9 in the Appendix)? 31. Find the derivative of the function 1

1 9 9

0 0 f(x) = det 1 X 7 0

2 2 0 2 0

4 4 4

3 3 3 9

and

1 ; ] =7 1

0 4

det [

~

1 2 3

n

= ll

a. Find

d] e

35. Con ider the n x n matrix Qn defined the sarne way as Mn in Exerci e 34 except that the nnth entry is 1 (instead of 2). For example,

Find a formula for det(Q 11 ), first in terms of det(Q 11 _ J) and det(Q 11 _ 2 ), and then in terms of n.

1

32. I am thinking of ome numbers a, b c d, e, and f such that

det[:

271

36. Consider a pattern in an n Show that the number of and odd if (i + j) is odd. to tbe left and above aij . terms of k.

x n matrix, and choose an entry aij in thi s pattem. inversions involvi ng a;j is even if (i + j ) is even Hint: Suppose there are k numbers in the pattem Express the number of inversions involving aij in

37. Let P" be the n x n matrix whose entries are all ones, except for zero directly below the main diagonal; for example,

. Ps =

f b. Find

1 1 0 1 1 0

1

1 1 1 1 1 1 1 0 1 1 1 0 1 1

1 I 1

Find the determinant of Pn. 38. A zero-one matrix is a rnatrix whose entries are all zeros and ones. What is the smallest possible nurober of zeros you can have in an invertible zero-one matrix of size n x n? Exercise 37 is helpful.

33. Is the function

T[

!]

~

= ad

39. Con ider an invertible 2 x 2 matrix A with integer entries. a. Show that if the entries of A-I are integers, then det(A)

+ bc

34. Let M 11 be the n x n matrix with 2' s on the main diagonal , 1' directly above and below the main diagonal, and O's elsewhere. For example,

Ms=

0 0 0

1 0 0 0 2 1 0 0 1 2 0 0 1 2 I 0 0 1 2

We define Dn = det(M11 ). a. Find D1 , D2 , D 3 , and D4 . b . If n exceeds 2, find a formula for Dn in terms of D 11 _ your answer. c. Find a formula for D 11 in terms of n.

1 or det (A) =

-1.

linear in the rows and columns of the matrix?

2 1

=

b. Show the converse: if det(A) = 1 or det (A) = -1, then the entries of A-I are integers. 40. Let A and B be 2 x 2 matrices with integer entries such that A , A + B A + 28. A + 38, and A + 4B are all invertible matrices whose inverses have integer entries. Show that A +58 is invertible and that its inverse has integer entries. (Th:is question was in the William Lowell Putnam Mathematical Competition 2

in 1994.) Hint: Consider the function f(t) = (det(A + r8)) - 1. Show that thi is a polynomial; what can you say about its degree? Find the value f(O), f(l), f(2), f(3), f( 4 ), using Exercise 39. Now you can determine f( r) by using a familiar result: if a polynornial f(t) of degree s m has more than m zeros, then f( t) = 0 for al l t. 1

and D 11 _ 2 • Justify

41. Fora fixed positive integer n , Iet D be a function which as ign to any n x n matrix A a number D(A) such that a. D is linear in the rows (see Facts 5.2.2 and 5.2.3),

2 72 •

Sec. 5.3 Geometrical Interpreta tions of the Determin ant; Cramer's Rule • Chap. 5 Determinan ts

2 73

(Fact 4.3.7). Taking the determinants of both sides and using Facts 5.2. 1 and 5.2.7 , we find that

b. D ( B ) = -D (A) if B i obtai ned fro m A by a row w~p, and c. D 1" ) = 1. Show that D (A) = det(A) for all 11 x n matrices A . Hint: Consider E = rref(A). Think about the relationhip between D (A) and D(E), mirnicklog Algorithm 5.2.6. Note: The point of this exerci se is that det.(A) can be characteri zed by the three properlies a, b, and c above ; the determin ant can in facr be de.fined in term of these properties. Ever ince rhis approach wa fir t presented in the 1880s by the German mathematician Kar] Weierstra ( 1817-1897), this definition ha been generally u ed in advanced linear al gebra courses because it allow a more elegant presentation of the theory of determinants. 42. U e the characterization of the determinant given in Exercise 4 1 to show that

det(AT A ) = det(AT ) det(A)

= (det(A)) 2 =

1.

Therefore, det(A) is either 1 or - 1.

Fact 5.3.1

The deterrninant of an orthogonal matri x is either 1 or -1 . For example, -sina] cos a

det(AM ) = det(A) det(M).

=

1

for rotation matrices, and

Hint: For a fixed invertible matri x M consider the function det(A M) D(A) = det(M) .

det [ c?s a s1n a

Show that this function ha the three properlies a, b and c li sted in Exercise 42, and therefore D (A) = det (A) . 43. Consider a linear transformation T from ~n+m to ~~~~ . The matrix A of T can be written in partitioned form as A = [ A1 A2 ] , where A 1 is m x n and A2 is m x m. Suppose that det(A2) =1= 0. Show that for every vector in ~" there is a unique y in ~m such that

x

sin a - cos a

J = -1.

For matrices of !arger sizes, we define:

Definition 5.3.2

An orthogonal matrix A with det(A) = I is called a rotation rnatrix, and the linear transformation T(x) = A-t is called a rotation.

+ The Determinant as Area and Volume Show that the transformation

_t

-4

y

from IR" to R m is linear, and find its matrix M (in terms of A 1 and A2). (This is the linear version of the implicit function theorem of multivariable calculus.)

3

GEOMETRICAL INTERPRETATIONS OF THE DETERMINANT; CRAMER'S RULE We now present several ways to think about the detenninant in geometric terms. Here is a preliminary exercise.

We will now examine what the Gram-Schrnidt process (or, equivalently, the QR factorization) tells us about the determinant of a matrix.

EXAMPLE 2 .... Below, weillustrate the Gram-Schrnidt process for two linearly independent vectors ii 1 and ü2 in JR 2 . As usual, we define V1 = span(vJ).

w- , =w _

- _ Uuill

L w,

EXAMPLE 1 .... What are the possible values of the determinant of an orthogonal matrix A? Solution We know that AT A = 111

What is the relationship between the determinants of the matrice A , B. C, and Q?

274 •

Chap. 5 Determinants

Sec . 5.3 GeomelTical Interpretations of the Determinant; Cramer's Rule •

Solution

2 75

Now coosider an invertible n x n matrix

Since we ar dividing the fi rst column by lliiiii det(A) det B) = ~· Since we are subtracti ng a multiple of tl1e first column from the econd, det(C) = det(B). Since we are dividing the . econd colu11111 by llwll

=

llii2 -

By Fact 4.2.2 we can write

proj v, il2ll .

det (C) det(Q) = ~·

A = QR

where Q is an orthogonal matrix , and R is an upper tri.angular matrix whose diagonal enrries are

We can conclude that det (A = llii,ll ll ~- proj 11, ii2ll det( Q).

and We conclude that

Since det(Q = 1 or det( Q) = -1 , by Fact 5.3. 1, we find that det (A = ±llii, ll llii2 -

for i :::: 2.

det (A) = det ( Q) det(R)

proj 11, ii2ll

= ±llii1ll llii2- proj 11, ii2ll ... llii"- proj 11"_ , Ünll

or because det(Q) = ±I by Fact 5.3.1 (since Q is orthogonal). Since R is triangular, its determinant is the product of the diagonal entries (by Fact 5.1.3). Tbis formula provides us with a simple geomenic interpretation of the detenninant: The area of the shaded parallelogram in Figure l is (base)(height) = llii !llll ii2 -

Fact 5.3.3

proj 11, ii2ll = I det (A) I.

Consider a 2 x 2 matrix A = [ u1 ii2 ]. Then the area of the parallelegram defined by u1 and 2 is I det(A) 1.

v

Fact 5.3.4

v

If A is an n x n matrix with colu0111s ii 1, ... , 11 , then I det(A) I = IIu, ll llÜ2 -

proj v, ii2 ll . . . IIV11 -

proj v._, Ünll

where Vi = span(u, u2 , . . . , üi).

Above, we verified this result for an invertible n x n matrix A; we Jeave the noninvertible case as Exercise 8. As an example consider the 3 x 3 matrix

There are easier ways to derive this fact, but they do not generalize to higher-dimensional cases as well as the approach taken above.

Figure 1 Height =

IIii2 - projvhll

12:~ Base = ll iidl

.7 v,

with ldet(A)I = lliiiii ii 'Ü2 -

proj 11, ii2ll llii3 -

proj 111 Ü 11 .

As we observed above, II ii, ll IIÜ2 - proj v,ii2ll is the area of the parallelogram defined by 1 and 2 (see Figure 1). Now consider the parallelepiped defined by ii,, Ü2,

v

v

2 76 •

Sec. 5.3 Geometr i.cal Interpretations of the Determinant; Cra mer's Rute •

Chap. 5 Determinants

277

Alternative ly, we can write the formula for the k-volume as

v

Let A be the n x k matrix whose columns are 1, ... , ük . We will show that the k-vol ume V(Ü t , ... , vk) is closely related to the quantity det(AT A) . 1f the col umns of A are linearly independent, we can consi der the Q R factorization A = QR. The n AT A = R r QT QR = R T R, because QT Q = l k (since the column of Q are orthonormal). Therefore,

Height

Figure 2

l

det(AT A)

= det (R T R) = (det(R)) 2 = (rt t r22 ... rkk) 2

v

and 3 , that is. the set of all vectors of the form are between 0 and 1, as shown in Figure 2. The volume of this parallelepiped is volume

Ct V t

+ c2ii2 + c3u3,

= ( 11 iitll ll ii2 -

where the c;

rroj v,ii2 11 ... ll ih - projvk-l iik ll r

= (v(v], ... ,Vk)r.

= (base)(height) = IIvtll II ii2 - proj v, ii2 II II u3 - proj v2 u3 II = I det(A) I

We can conclude:

Fact 5.3.7

by Fact 5.3.4.

Consider the vectors ü1, ii 2 , . . . , ük in IR". Then the k-volume of the kparallelepiped defined by the vectors 1, . .. , ük is

v

) det (A 7 A),

Fact 5.3.5

Consider a 3 x 3 matrix A = [v1 ii2 ü3 ]. Then the volume of the parallelepiped defined by 1, ü2, and Ü3 is I det(A) I.

v

where A is the n x k matrix witb colurnns ii 1, ii2 , In particular, if k = n, thi s volume is

• •• ,

vk.

I det (A) I (compare with Fact 5.3.4).

Let us generalize these Observations to higher dimensions.

Definition 5.3.6

k-parallelepipeds and k-volurne Consider the vectors v1, v2 , . . . , vk in IR". The k-parallelepiped defined by tbe vectors Ü1, . . . , iik is tbe set of all vectors in IR" of the form c 1ü2 + c2 ü2 + · · ·+ Ck vk. where 0 :::: c; :::: I. The k-volume V (v 1, ... , ük) of this k-parallelepiped is defined recursively by V(ii 1) = llii!ll and V(ii t •.. . , vk) = V(Ü t, ... , ük- t) ll ih -

We Jeave it to the reader to verify Fact 5.3 .7 for linearly dependent vectors (Exercise 15). As a simple example, consider tbe 2-volume (i.e. , area) of the 2-parallelepiped (i.e., parallelogran1) defined by the vectors

v1, . . . , ük

"·=[:]

projv._. vkll·

and

V, =

m

in ~R 3 . By Fact 5.3 .7, this area is Note tbat this formula for the k-volume mirnies the formula det ( [ (base )(height) which we used to compute the area of a parallelogram (that is, a 2-parallelepiped) and the volume of a 3-parallelepiped in IR3.

~

11] [I m= 2

3

1 1

det

[~ ~~J = ~.

In this special case, we can also determine the area as the norm II cross product of the two vectors (exercise).

v1 x Ü2 II of the

2 78 •

Sec. 5.3 Geometrical Interpretations of the Deterrninant; Cramer's Rule •

Chap. 5 Determinants

+ The Determinant as Expansion Factor Consider a linear transformation T from iR2 to IR 2 . In Chapter 4 we exam.ined how a )jnear transfom1ation T affects various geometric quantities such as lengths and angles. For exarnple, we ob erved that a rotation preserves both the lengtb of vectors and the angle between vector . Sim.ilarl y, we can ask how a linear transformation affects t.he area of a region Q in the plane (see Figure 3). We migbt be interested in finding the expansion factor , the ratio

T (x)

Ax

Figure 5

lt is remarkable that the bnear Iransformation T (x) = Ax expands tbe area of all parallelograms by the same factor (namely, 1 det(A) I).

The simplest example is the case when Q is the unit square ( ee Figure 4). Since the area of Q i 1 here, tbe expansion factor is simply the area of the parallelograrn T (Q) , which is Idet(A)I, by Fact 5.3.3. More generally Iet Q be the parallelogram defined by iit and ii2 as shown in Figure 5. Let B = [ii t Ü2]. Then

area of T (Q) = l det[Aii 1

=

~

area of T (Q) area of Q ·

area of Q = I det(B) I,

2 79

Fact 5.3.8

Expansion factor Consider a linear transformation T (x) = Ax from JR 2 to iR2 . Then 1 det(A ) 1 is the expansionfactor area of T(Q) area of Q of T on parallelograms Q . Likewise, foralinear transformation T (X) = A.X from lR" to IR11 , 1 det (A) 1 is the expansion factor of T on n-parallelepipeds:

and Aii2] I = ldet(A.B) I = l det(A)IIdet(B) I,

V(Aiit • . .. , Aiin) = I det(A)I V(iit •... , ii,.)

and the expan ion factor is

for all vectors ü1 , • • • ii,. in IR".

area of T(Q) Idet(A.) I I det(B)I - -- - = =I det (A)I. I det(B) \ area of Q

This interpretation allows us tothink about tbe formulas det(A - t) = lj det(A) and det(AB ) = det(A) det(B) from a geometric point of view. See Figure 6 and 7. The expansion factor I det(A - 1)1 is tbe reciprocaJ of the expansion factor I det(A )I :

Figure 3

1 I det(A - I)I = I det(A)I

The expansion factor I det(AB) I of the composite transformation is the product of the expansion factors I det(A) I and I det(B) I: Idet(AB) I = I det(A)I I det(B) I Figure 4 T(x )

=Ai=[: :]x

Figure 6

.i' =A- ')i

....

280 •

Sec. 5.3 Geometrical Interpretations of the Determinant; Cramer's Rule •

hap. 5 D termin ants

281

Tbis formula can be used to hnd a closed-formula solurion for a linear sy tem :: = AB .r OllX !

I a 2 1X1

+a1 2X2 = b11

+ a22x2 = b 2

when the coefficient matrix is invertible. We write the system as A;( =

b, where

Then

Figure 7

To write th.i s form ula more succi nctly, we observe that Using techniques of calculus, you can verify that I det(A) I gives us the expansion factor of the transfonnation T (x) = Ax on any regi on Q in the plane. The approach uses in cribed parallelograrn (or even square ) to approximate the area of the region, as shown in Figure 8. Note that the expansion factor of T on each of these squares is I det(A) I. Choosing smaUer and smaller squares and applying calculus you can conclude that the expan ion factor of T on Q itself is I det(A) I. We will conclude this chapter with the discussion of a closed-form solurion for tbe linear system Ax =bin the case wben the coefficient matrix A is invettible.

a22 b1 - a12b2 = det [ a 11 b2 - a21b 1

= det [

:~

replace the first colum.n of A by

011

replace the econd column of A by

02 1

Let Ai be the matrix obtajned by replaci ng the ith colurnn of A by

The olution of the systern Ax = X

b.

b can

b.

b:

now be written a

det(A 1)

---! det (A) '

X2

=

det(A 2 ) det(A)

EXAMPLE 3 ..... Use the formula above to solve the system lf a matrix a.12

a·n

2x, +3x2= 7 1 14x 1 + Sx2 = 13 ·

J

is invertible, we can express its inverse in terms of its detenninant: A- ' = _ 1_ [ a22 det(A) - a21

- al2

J

Solution det [

x, =

a11

1 ~ ~] =2,

det [; ragure 8 T(!l)

rcx> =Ai ~

a sq uare S

T(S)

area (T(S)) area (S)

= [det (A)l

n

det [;

~~ J=1

det [;

;J

X2 =

This rnethod is not particularly helpful for . olving numerically given linear systems; Gauss-Jordan elirnination is preferable in this case. However, in many applications we have to deal with systems whose coefficients contain parameters. Often we want to know how the solution changes as we change the parameters. The closed-fonnula solution given above is weH suited to deal with such questions.

282 •

Sec. 5.3 Geometrical Interpretations of the Determi nant; Cra mer's Rule •

Chap. 5 Determinants

EXMIPLE 4 .... Solve the y tem

I

ax2 = 0 - ax 1 + (b - l )x2 = C ·

(b- I )x i +

I

where a, b C are arbitrary positive con rants.

I

I

I

b

b

b

I d

I d

-

283

I d

Solution der [

~

b:

XI =

der [ b- 1 -a

b

1] ~ 1]

= der [ b- 1 -a

=

~]

der [ b - I -a X2

a

b:

- aC (b- 1)2

c

a

(a)

+

c

{b)

a

c

(c)

Figure 9 (o) 8oth components x ond y of the solution ore positive. (b) ax;aa < 0: os a increoses, the component x of the solution decreoses. (c) ax;ac > 0: os c increoses, the component x of the solution increoses.

a2

b - 1)C

l]

(b - 1) 2 + a 2

Fact 5.3.9

Cramer's rule Con ider the linear system

Ax = h,

EXAMPLE 5 .... Consider the linear sy tem

ll

ax +by = l cx +dy = l .

where d > b > 0

and

a > c >

w here A is an in vertible n x n matri x. The components x; of the olution vecto r -"r are

0.

det (A;) ' ' - det (A) ' r· -

How doe the solution x c hange as we change the parameter a a nd c? More precisely, find ax ;aa and ax ;ac, and determine the igns of these quantities.

w here A; is the matiix obtained from A by replacing the i th column of A by b.

Solution det [ X

=

~ ~]

- --=------':=a det [ c

db ]

d_ b

ax

> 0

=

ad - bc

ax ac

aa

'

b(d - b)

_ _

_

...:_ 2

- d(d - b) - - - -2 < 0, (ad - bc)

This result is due to the S wi s mathematician Gabriet Cramer ( 1704- 1752). T he rule appeared in an appendix to his 1750 book, Introducrion l 'analy e des lignes courbes algebriques.

a

Proof > 0.

(a d - bc)

See Figure 9 .

Write A = ( WI ÜJ2 ... th e n det( ;) = det[ Üit w2 = del[w, w 2

An interesting application of these simple results in biology is to tbe srudy of castes [see E. 0. Wilson, "The Ergonomics of Caste in the Socia l ln sects," American Naturalist, 102 (923): 41 -66 ( 1968)]. The closed formula for solving linear syst.ems of two equ ations with rwo unknowns generalizes easily to !arger system s.

= det[Üit w2 = X;

w; ... W J. 11

b .. .

Tf ,'Y i

w"]

the so lution of the sy te m A ,r = b,

= der[ w, w2 .. . A.i . . .

w" ]

+x2w2 + ·· · +x;w; + · · · +x"w") ... w"] x;w; . .. wll ] = X; det[iv, iü_ .. ' W; ' '. wll ] (.r t WI

det(A).

Note that we have used the linearity of the de r rmin ant in the ith column .

284 •

Sec. 5.3 Geo metri ca l In terpretations o f the Determjnant;

Chap. S Determ.inants

285

We have shown the following result:

Therefore, det(A;) = det.(A) .

X;

Cramer·

ramer's Rule •



ru le allows us to find a closed formula for A -

J,

Fact 5.3.10

Corollary to Cramer's rule Con sider an in verti ble n x n matri x A. The classical adjoint adj(A) n x n matrix whose ijth entry is (- l )i+j det(Aj;). Then

generalizing the

result

a [c

b

d

]-J

for 2 x 2 matrice . Consider an invertible n x

A- 1

=

[m".

n1 21

m" J

L

[

= det(A)

1 A- 1 = -d - - adj (A). et(A)

d

-~ For an invertible

11

matrix A and write adj (A) = [

111 12

tn) j

117 22

m - 1.

'" '"] tn 2n

...

m"j

m "2

11l:llll

We know that AA- 1 = / 11 • Pieking out the jth column of A-

lntj ] A . = [ 11! 2j

1

,

m;j

02 1

A;=

GjJ

Gj2

0 0

-b] a

and

A-I -

1 [ d ad - bc - c

eh - bk

bf cd- af ae - bd

1

A- 1 = - det(A)

ak- cg

dh _ eg

bg- ah

a11 2

0

t

a2n

.

S E S

1. Find the area of the parallelogram defi ned by [ ~] and [ ~

jth row

a""

ith column

By Example 7 of Section 5.2, det(A;) = (- J)i+j det (Aji ), so that . .. _ ( l)i+j det(Aj; ) m,J - . det(A)

ce]

GOALS Interpret the determinant a an area or volume and a factor. Use Cramer' s rule.

G(n

Gjll


[ekfh fg -dk

2. Find the area of the triangle defined by [ a"J

- ab ]

the formula i ej·

EX ER

an a22

d

-c

we find that

= det(A;)/ det(A), where the ith column of A is replaced

GJJ

2x 2matrix A = [ ac db] , we find

(compare with Fact 2.3 .6). For an invertible 3 x 3 matrix

.

m"j

By Cramer's rule, by ej.

the

[;]

~]

and [

~].

~

l

an e xpansion

286 •

Sec. 5.3 Geometrical In terpretations of the Determina nt; Cramer's Ru te •

Chap. 5 Determinants

287

8. Demonstrate the equ ati on

3. Find the area of tbe triangle sketched below.

I det(A) I = llii, llllii2 -

proj v, ii2l l .. . ll iin- proj v"_' ii"ll

for a noninvertible n x n matrix A = ( jj 1 i:i 2 Ü11 ] (Fact 5.3 .4). 2 9. Suppose two nonzero vector ü1 and 2 in JR enclose an angle a (with 0 ::::: a ::=:: n). Explai n why

v

I det( ü,

ü2 ] I= ll iJ I!I IIi:i2ll sin (a ) .

v

· · 4. Consider tbe area A of the tnangle w1·tb vertlce

[a'] [b' ] [c'] 02

,

b

2

,

c

2

. E xpress

A in terms of

5. The tetrahedron defined by tbree vectors ii" ii2, ii3 in JR 3 is the set of all vectors of the form c 1ii 1 + c2ii2 + c3Ü3, wbere c; 2: 0 and c , + c2 + c3 ::=:: l. Draw a ketch to explain why tbe volume of this letrahedra n i o ne-sixth of the volume of tbe parallelepiped defined by i:i, ii2, Ü3. 6. What is tbe relationship between the volume of the tetrahedron defined by the vectors

10. Consider an n x 11 matrix A = ( i:i 1 iJ 2 11 ]. What is the relationship between the product lliJI!III ~II ... IJÜ11 II and 1det(A) I? When is 1det(A) I = llii, ll lli:i2 ll ... llii"IJ? 11. Consider a linear transformation T (-"r) = Ai from IR.2 to JR2 . Suppose for two vectors ü, and v2 in IR.2 we have T (v 1) = 3ii 1 and T (ü2 ) = 4ü2. What can you say about I det(A) I? Justify your answer carefully. Draw a sketch. 12. Consider those 4 x 4 matri ce whose entries are all 1, - 1, or 0 . What is the max imal value of the determinant of a matrix of this type ? Give an example of a matrix whose determinant has this maximal value. 13. Find the area (or 2-volume) of the parallelogram (or 2-parallelepiped) defined by the vectors

and

14. Find the 3-volume of the 3-parallelepiped defined by the vectors and tbe area of the triangle with vertices

(see Exercises 4 and 5)? Explain thi s relationship geometrically. Hint: Consider the top face of the tetrahedron. 7. Find the area of the region sketched below.

l~l·

l1l ·

l~l-

15. Demonstrate Fact 5.3.7 for linearly dependent vectors ü" .. . , vk. 16. True or fals e? If Q is a parallelogram in IR 3 and T (.r ) = Ai is a linear transformation from IR. 3 to IR. 3, then area of T (Q) = I det (A) I (area of

Q).

17. (For some background on the cross product in IR.", ee Exercise 5 .2.30.) Consider three linear! y independent vectors ü" Ü1 , ü3 in IR.4 . a. What is the relationship between V (ü, . ~. Ü3) and V (ii 1, i:i2 , ü3 Ü1 x ~ x 3 )? See Definition 5 .3.6. Exerci e 5.2.30c i belpful. b. Express V (Ü1, Üz, v3, ii, x Ü2 x Ü3) in terms of II ü, x Ü2 x Ü3IJ . c. Use parts a and b to express V(i:i ,, ii2 Ü3) in term of llii, x Ü2 x u3 11. I your result till true when the Ü; are linearly dependent ? (Note the analogy to the fact that for two vectors ü 1 and ii2 in IR. 3 , IJÜ 1 x ii2 11 is the area of the parallelogram defined by ü, and ii2.)

v

288 •

Sec. 5.3 Geometrica!Jnterpretations of the Determinant;

Chap. 5 Determinants

18. lf T Cx) = Ax i an invertible linear tran formation from IR 2 to IR 2 • then the image T (Q) of the unit ircle Q is an ellipse (see E erci e 2.2.50).

a. Sketch th.is ellip e when A = [

~ ~

ramer's Ru le •

289

25. Find the class ical adjoint of the matrix

J

where p and q are positive. What

is it area? b. For an arbitrary invertible transfom1ation T (x) = Ax, denote the length of the semi-major and the semi-minor axis of T (Q) by a and b, respectively. What is the relat.ion hip between a, b, and det(A )?

and use the result to find A -

J.

26. Consider an 11 x n matrix A with integer entries such that det(A) = 1. Are the entries of A- I necessarily integers? Explain.

27. Consider two positive numbers a and b. Solve the system below.

~ ~~~!; :~ 1 What are the signs of the solutions x and y? increases?

How does x change as b

28. In an economics text 1 we find the system below: sY +ar = JO +G m Y - hr = Ms -Mo

Solve for Y and r. 29. ln an economics text2 we find the system below:

c. For the transformation T(x) = its axes. Hint: Con ider T [

[i ;];,

~]

Rl

sketch this ellipse and determine

and T [ _

1- a

~].

basis Ü1, Ü2 , Ü3 of IR 3 is called

19. A positively orien.ted if ii 1 encloses an acute angle with ii2 x v3. Illustrate th.is definition with a sketch. Show that the basis is positively oriented if (and only if) det [ ü1 v2 v3 ] is positive. 20. We say that a linear transformation T from IR 3 to JR 3 preserves orientation. if it transfonns any positively oriented bas is into another positively oriented basis (see Exercise 19). Explain why a linear transformation T (x) = Ax preserves orientation if (and only if) det(A) is positive. 21. Arguing geometrically determine whether the followino- orthoo-onal transforTll>3 Tll>3 b b . f maLions rom m. to ~ preserve or reverse orientation (see Exercise 20). a. Reftection in a plane. b. Reftection in a line. c. Reftection in the origin .

3x+ 7y = 1 1 14x+ 1l y = 3 · 2x+3y = 8 24. 4y + 5z = 3

22.

6x

+ 7z=-1

dx l

o0

dp

-R2 de2

J

Ci

Sol ve for dx1, dy1 , and dp. ln your answer, you may refer to the detenninant of the coefficient matrix a D (you need not compute D). The quantities R 1, R 2 and D are positive, and a is between 0 and 1. lf de 2 is positive, what can you say about the signs of dy 1 and dp?

30. (For those who have studied multivariable calculus.) Let T be an invertible linear transformation from IR 2 to IR.2 , repre ented by the matrix M . Let Q 1 be the unit sq uare in IR. 2 and Q 2 its image under T (see figure on page 290). Consider a continuous function f(x , y) from IR 2 to ~. and define the function g(u, u) = f( T (u, v)). What is the relationship between the two double integrals below?

II nz

Use Cramer's rule to solve the systems in Exercises 22 to 24.

~g ~:12 ] [ d y 1J _ [ -

- (1 _ a ) 2

- R2

f(x, y)d A

a.nd

II

g(u, v)dA

n1

Your answer will involve the matrix M . Hinr: What happens when f(x , ) = 1, for all x, y?

1Simon

2 Simon

and Blume, Matlr ematics for Economists, Norton. 1994. and Blume, op. cit.

290 •

Chapo S Determinants y

V

T ~

X

EIGENVALUES AND EIGENVECTORS 31. True or False? a. U A i an n x n matrixo then det (2A ) = 2det(A)o b. Suppose A and B are 11 x 11 matrices, and A is invertibleo Then der (A BA = det(B)o

c. The transformation T (x) = [

~

; ] ; preserve

1 I)

the area of a parallelo-

gramo d. If A is an n x n matrix, then det(AA T) = det(A T A)o e. lf all diagonal entries of a square matrix A are odd integers, and all other entries are even, then A is inverribleo

32. True or Fa/se? a. The matrix 127 4

9 5

b. c. d. e.

3 153 8 4

j]

is inve11ibleo If all entries of a square matrix A are zeros and ones, then det(A) is 1, 0, or - I. lf A and B are 11 x n matrices, then det(A + B ) = det(A ) + det(B )o Suppo e A is an n x n matrix and B is obtained by swapping two rows of Ao lf det(B) < det(A), then A is inve11ibleo If all diagonal entries of a square matrix A are even integers, and all other entries are odd, then A is invertibleo

DYNAMICAL SYSTEMS AND EIGENVECTORS: AN INTRODUCTORY EXAMPLE A tretch of de ert in northwestern Mexico i populated mai.nly by two pecies of anima ls: coyote and roadrunnerso We wi h to model the population c(t) and r (t ) of coyotes and roadrunners t years from now if the current populations c 0 and ro are known o1 For thi s habitat, the following equations model the transformation that this system wi ll undergo from one year to the next , from timet to time (t + 1) : c(t + 1) = 0086c(t ) r (t +I) = -0 012c(t )

I

+ 0008r (t ) I + 1. 14r(t)

0

Why is the coefficient of c(t ) in the first equation less than I. while the coefficient of r (t ) in the second equation exceed I? What i the practical significance of the s1gn of the other two coefficient , 0 008 and - 0012? The two equations above can be •vritten in matrix form :

I)]

c(t + [ r (t+ l )

= [

Oo86c (t) + 0 008r (f) ] = [ 0 086 -0012c(t +1.14r(t) -0012

0 0008 ] [ c(r) ] 1.14

r (l )

0

The vector x(t) = [ c(t) r (t)

J

is caJled the state vecror of the y tem at time t becau e it completely de cribe thi ystem at time t 0 lf we Iet A = [

0086 -0012

0 008] , 1.14

1Thc point of' this lighthcarted stor io 10 present an introd uctory examp lc where neithcr mcsoy data nor a compli cated scenario dis tracl us from thc mathematica l ideas we w ish 10 d<::velopo

291

292 •

Sec. 6.1 Dynamica l Systems and Eigenvecto rs: An Introductory Example •

hap. 6 Eigenvalue and Eigenvectors we can write the matrix equation abo e more succinctly ac

x( I)= [I330 IO] = 11 · xo



l ."tCt + I ) = Ax(t )

293

- [100 300 ]

xo =

The transformation the system undergoes over the period of one year is linear, represenred by the matrix A.

x(t )~xCt + I) Suppo e we know that the initial state i

Figure 1 - 0) = xo - = [ co x( ro ] . lt is now easy to compute We wish to find x(t), for an arbitrary po üive integer t . _

,1

_

A

_

A

_(

)

A

A

arbitrary t:

; (2) = Ax ( 1) = 1.1 Axo = 1. I 2.t 0

-( )

.x(0)----*.l."(l)---+X(2) -'- H 3 -----+· · ·---*X

x(t ), for

A

=

Ax(2)

x(t) =

l.l'io

x(3 )

f -----* · · ·

=

1.1 2 A;to = l.l 3io

We can find x(t) by applying the transformation t time to x(O):

I x(t ) = A' x(O) =

1

A -'to

I

Although it is extremely tediou s to find x(t) with paper and pencil for large t , we can easily compute x(t) using technology. For example, given

We keep multiplying the state vector by 1.1 each time we apply the transformation A. Recall that our goal is to find closed formulas for c(t and r(t). Wehave

~( ) =

- = [ 100 xo 100 ] '

x f

[ c(t) ] = 1. 1 ' '= I . I' r (t ) xo

f\l3 10 ,

L~.)v

J

so that we find that x(IO)

=

10

A xo

~ [ 1 ~~].

To under tand the long-term behavior of this system and how it depends on the initial values we must go beyond numerical experimentation. It would be useful to have closedformulas for c(t) and r(t), expressing the e quantities as function s oft. We will at first do this for certain (carefully chosen) initial state vectors.

c(f) = 100(1.1)'

and r(t) = 300(1.1)' .

Both populations will grow exponentially, by 10% each year.

Case 2 • Suppose we have c0 = 200 and ro = 100. Then - 1

coyote and 300 roadrunners, . o that

x

xO) = Ax0 =[ - 00.86 . 12

0

= [

~~ ] .

Then

o.o8J [ 'oo ]=[' ' o] 1.14 300 330 .

x(l)

= Axo = l.lio

[

0.86 -0. 12

0 .08] [200] [ 180] 0 91.14 100 = 90 = · xa.

Both populations decline by 10% in the first year and will therefore decline another 10% each ubsequent year:

so that

Note that each population has grown by 10%. This means that the state vector

x(l) is a scalar multiple of x0 (see Figure 1):

-

x() = Axo =

Case 1 • Suppose we have co = 100 and r0 = 300. lnitiaLiy, there are 100

c(t) = 200(0.9)' and r(t) = 100(0.9)'.

294 •

Sec. 6.1 Dynamical Systems and Ei.genvectors: An Introductory Example •

Chap. 6 Eigenvalues and Eigenvectors The initial populations seem to be mismatched: ~oo many coyotes chasing too few roadrunners, a bad state of affairs for both spectes. 1

Case 3 •

Suppose we bave c0 = ro = 1000. Then -

-

x(I) = Axo =

[

0.86 - 0.12

0.08] [ 1000] - [ 940] 1.14 1000 1020 ·

Things are not working out as nicely as in the ~r~t. two cas:s we considered: ~he state vector x(l) is not a scalar multiple of the tmttal state xo. Just by computmg x(2), -~(3), . . . , we could not easily detect a trend that would allow us to generate a closed formula for c(r) and r(t). Wehave to look foranother approach. The idea is to work with the two vectors -

VI=

[ 300 100]

-

and

I I I I I r (l ) I I I I I _ _ ______ I c(t}

Figure 2

v;

-

+ 1).

Sometimes we are interested in the state of the system in the past, at times - 1., -2, .... Note that x(O) = Ax(- 1), so that i(-1) = A - 1.Xo if A is invertible (as in our exa:rnple). Likewise, x(-t) = (N) - 1.r0 , for t = 2, 3, ....

[ 1000] 1000 Figure 3

of the coyote-roadrunner system:

A straighttorward computation shows that

Recall that A1

t

v

xo =

H.ow can we represent the computations petformed on the preceding page graphically? Figure 3 shows the representation 0 = 2iJ 1 +4ü2 of 10 as the sum of a vector in L1 and a vector in L 2 • The formula

now teils us tbat tbe component in L 1 grows by 10% each year, while the component in L2 shrinks by 10%. The component (0.9)'4ih in L2 approaches Ö, which means that the tip of the state vector .X (t) approaches the line L 1, so that the slope of the state vector will approach 3, the slope of L 1• To make a clearer picture of the evolution of the system, we can sketch just the endpoints of the state vectors .:t(t). Then the changing state of the systemwill be traced out as a sequence of points in the e-r plane. It is natural to connect the dots to create the illusion of a continuous trajectory (although, of course, we do not know what really happens between times t and

considered in the first two cases, for which A 1 was easy to compute. Since and ü2 form a basis of iR2 , any vector in iR2 can be written uniquely as a 1 linear combination of 1 and ü2 . This holds in particular for the initial state vector

v

Note that the ratio r (t)jc t) can be interpreted as the slope of the state vector x (t), as shown in Figure 2.

x

[200 100 ]

=

v2

x(t ) = [ c(t) ] r (t)

c1

= 2,

c2

r

= 4:

v = (1.1) Ü1 and N il2 = (0.9Y Ü2. Therefore, x(t) = A xo = A (2ü1 +4Üz) = 2A Üt +4Nv2 1

1

1

1

1

= 2(1.1) VI = 2(1.1)

1 ,[

1

+ 4(0.9) ü2 1

1000

;~~ ] + 4(0.9) i~~ ] . 1

[

Considering the components of this equation, we can find formulas for c(t) and r(t): c(t) = 200(l.lY

r(t) = 600(1.1)

1

+ 800(0.9)' + 400(0.9) 1

Since the terms involving 0.9 1 approach zero as t increases, both populations eventually grow by about 10% a year, and their ratio r(t)jc(t) a:pproaches 600/200 = 3.

295

c

296 •

Sec. 6.1 Dynamical Systems and Eigenvectors: An Introducto ry Example •

hap. 6 Eigenvalues and Ei en ectors

r

The trajectory (future and pa t) for our coyote-roadrunner ystem is shown in

LI = Span [1 00] 300

Figure 4 . To 0oet a feelino0 for the lon 0o-tenn behavior of thi s sy tem and how it depends on th ini tial state, we can draw a rough sketch that shows a number of different trajectories, representing the vruious qualitative types of behavior. Such a sketch is called a phase portrait of the system. In our example, a phase portrait might how the three cases di cu ed above, a weil as a traje tory that tarts above L 1 and one that starts below L 2 . See Figure 5. To ketch these trajectorie . express the initiaJ ta te vector as the sum of a vector 1 in L 1 and a vector 2 in L2 . Then see how the e two vectors change over time. If .r0 = w1 + ÜJ2, then

w

xo

w

~- =

We see that tbe two population will prosper over the long term provided that the ratio r 0 fc 0 of tbe initiaJ population exceeds 1(2; otherwi e, botb populatio n will die out. From a matbematical point of view , it is informati ve to ketch a phase portrait of this sy tem in the whole e-r plane, even though the trajectories outside tbe first quadrant are meaningles in term of our population tudy ( ee Figure 6). Figure S

Figure 4

c

r

Figure 6

1000

1000

c

r

span

[z100oo]

29 7

298 •

Chap. 6 Eigenvalues and Eigenvectors

,Se · 6.1 Dynamical Systems and Eigenvectors: An Introd uctory Example •

299

+ Eigenvectors and Eigimvalues In the example di scu sed above. it turned out to be very u eful to have ome nonzero vectors 1 such that A ü i a sca lar multipl e of

L

v, that is ,



for some scalar A.. Such vectors are important in many otber context as we ll.

Definition 6.1.1

T(tÜ) =

Eigenvectors 1 and eigenvalues Con ider an n x n matrix A. A nonzero vector ü in IR" i called an eige-nvector of A if A ü is a scaJar multiple of ü, that is, if

Aü = A.ü ,

EXAMPLE

2~ Let T

be the ortho/cmaJ projection onto a Iine L in IR.2 • Describe the eigenvectors of T geo_metrically and find all eigenvaJ ues of T.

Solution We find the e igenvectors by inspection : can you think-of any nonzero vectors ü in the plane uch that T (v) i a caJar,. multiple of ü? Clearly, any vector ü in L will do (wi th T(v) = lil), as wei l as any vector perpendicular to L (witb T(w) = Ö= Ow). See'Figure 8. The eigenvalue are 1 and 0. <11111

v

. A nonzero vector ü is an eigenvector for A if the ectors and Au are parallel , as shown in Figure 7 (see Definition A.3 in the Appendix).

w

1 ~ Find all eigenvectors and eigenvalues of the identity matrix /" .

EXAMPLE

Solution

Ow

Figure 8

for some scalar A.. (Note tbm this calar A. may be zero. l]le scalar A. above is caJ ied tbe eigenvalue associated with tbe eigenvector u.

EXAMPLE

Ö=

3 ~ Let T: IR 2 --+ ~2 be the rotation in the plane through an angle of 90° ·in the counterclockwise direction . Find all eigenvalues and eigenvectors of T .

Su1ce l"ü = ü = lii for all ü in ~~~. al l nonzero vectors in IR" are e igenvectors, wüh eigenvalue I . ~

Solution

v

lf ü i any nonzero vector in IR2 , tben T(Ü) is not parallel to (it.s perpendicular). See Figure 9. There are no eigenvectors and eigenvalues here. <11111

Flgure 7 Aii

ii Aii= ii

Aii = 2ii eigenva lue: 2

1

Aii

- Figure 9

/

= ii = l ii

eigenvaiÜe: I

From Eierman eigen: proper, characleristic.

T(ii) AÜ = -ii = (- l )ii eigenvalue: - 1

Aii

= Ö= Oii

eigenvaluc: 0

300 •

Sec. 6.1 Dyn amica l Systems and Eigen vectors: An ln troduct01y Examp le •

Chap. 6 Eigenvalues and Eigenvectors

301

Then Aii

Figure 10

(o) A = 1. (b) A = - 1.

x(t) = A'xo.

= ii = l ii

/

Such a system is called a discrete linear dy_MtmiJ::aLSY- · "m ("discrete" indicates that we model the change the sy tem undergoes from time t to time t + l , rather than modeling the continuous rate of change, which is described by diffe rential equations). For an initial state 0, it i often our goal to find closed f o rm.ulas for x 1(l ), x2 (t ), ... , x,. (t ), that is, formula s express ing x;(l ) as a function oft alone (as oppo ed to a recursive fonnula , for example, whi ch wouJd merely express x; (t + 1) in tenn of x, (t ), x 2 (t ), . . . , x,.(t )).

(b)

(al

x

EXAMPLE 4 ~ What are the possible real 1 eigenvalues of an orthogonal matr.ix A? Solution Recall that the linear Iransformation T (x) = kr pre erves length : II T (x) II = I!Ai ll = 11-rl!, for all vectors ; (Definition 4.3.1 ). Consider an eigenvector ü of A , with eigenvalue ),:

Fact 6.1.3

Con ider the dyo arnical system

Aü = A.ü.

x(t

Then

+o=

A x(t )

with initial state

lllill

=

IIAli ll = IIA.li ll

=

IA.IIIli l! ,

so that A. = l or A. = -1. The two possibilities are illustrated in Figure 10.

Fact 6.1.2

;x

Discrete dynamical systems

Suppose we can find a basi ü1, li2 , of A , with AÜ;

The possible real eigenvalues of an orthogonal matrix are 1 and - 1.

•.•

ü,. of IR" coosisting of eigenvector

= >,,ü;.

Write i 0 as a linear combioation of the Ü;:

As an example, consider the reflection in a Line in IR2 (Exercise 15). Then

• Dynamical Systems and Eigenvectors

Consider a physical system whose state at any given time t is described by some quantities x1(t), x2(t), ..., xn(t). (In our introductory example, there were two such quantities, the populations c(t) and r(t).) We can represent the quantities x1(t), x2(t), ..., xn(t) by the state vector

    x(t) = [ x1(t) ]
           [ x2(t) ]
           [  ...  ]
           [ xn(t) ]

Suppose the state of the system at time t + 1 is determined by the state at time t, and that the transformation the system undergoes from time t to time t + 1 is linear, represented by an n x n matrix A:

    x(t + 1) = A x(t).

Such a system is called a discrete linear dynamical system ("discrete" indicates that we model the change the system undergoes from time t to time t + 1, rather than modeling the continuous rate of change, which is described by differential equations). For an initial state x0, it is often our goal to find closed formulas for x1(t), x2(t), ..., xn(t), that is, formulas expressing xi(t) as a function of t alone (as opposed to a recursive formula, for example, which would merely express xi(t + 1) in terms of x1(t), x2(t), ..., xn(t)).

Fact 6.1.3  Discrete dynamical systems
Consider the dynamical system

    x(t + 1) = A x(t)    with initial state x0.

Then x(t) = A^t x0. Suppose we can find a basis v1, v2, ..., vn of R^n consisting of eigenvectors of A, with Avi = λi vi. Write x0 as a linear combination of the vi:

    x0 = c1 v1 + c2 v2 + ... + cn vn.

Then

    x(t) = c1 λ1^t v1 + c2 λ2^t v2 + ... + cn λn^t vn.
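The recipe of Fact 6.1.3 is easy to carry out numerically. The following sketch (ours, not from the text; the matrix A and the initial state x0 are made-up stand-ins with eigenvalues 1.1 and 0.9) uses numpy to find an eigenbasis, expresses x0 in that basis, and checks the closed formula against repeated multiplication by A.

    import numpy as np

    # A hypothetical 2x2 system and initial state (stand-ins, not from the text).
    A = np.array([[0.86, 0.08],
                  [-0.12, 1.14]])
    x0 = np.array([100.0, 300.0])

    # Eigenvalues lambda_i and an eigenvector basis v_1, ..., v_n (columns of S).
    eigenvalues, S = np.linalg.eig(A)

    # Coordinates c_i of x0 with respect to the eigenbasis: solve S c = x0.
    c = np.linalg.solve(S, x0)

    # Closed formula of Fact 6.1.3: x(t) = sum of c_i * lambda_i^t * v_i.
    def x(t):
        return S @ (c * eigenvalues**t)

    # Check against brute-force iteration x(t+1) = A x(t).
    x_iter = x0.copy()
    for _ in range(20):
        x_iter = A @ x_iter
    print(np.allclose(x(20), x_iter))   # True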

We are left with two questions: How can we find the eigenvalues and eigenvectors of an n x n matrix A? When is there a basis of R^n consisting of eigenvectors of A? These issues will keep us busy for the rest of this chapter.

If A is a 2 x 2 matrix, we can sketch a phase portrait for the system x(t + 1) = A x(t), showing the trajectories x(t) for various initial states x0. Even though each trajectory is just a sequence of points, x(0) = x0, x(1), x(2), ..., we often gain a better sense of the evolution of the system if we "connect the dots" and sketch continuous trajectories. In Figure 11 we sketch phase portraits for the case when A has two eigenvalues λ1 > λ2 > 0 with associated eigenvectors v1 and v2. We leave out the special case when one of the eigenvalues is 1. Start by sketching the trajectories along the lines L1 = span(v1) and L2 = span(v2). As you sketch the other trajectories x(t) = c1 λ1^t v1 + c2 λ2^t v2, think about the summands c1 λ1^t v1 and c2 λ2^t v2. Note that for large positive t the vector x(t) will be almost parallel to L1, since λ1^t will be much larger than λ2^t. Likewise, for large negative t the vector x(t) will be almost parallel to L2. [Figure 11: phase portraits (a), (b), (c).]

We conclude this section with a third (and final) version of the summing-up theorem (compare with Summary 3.3.11).

Summary 6.1.4
Consider an n x n matrix A = [v1 v2 ... vn]. Then the following statements are equivalent:
(i) A is invertible.
(ii) The linear system Ax = b has a unique solution x, for all b in R^n.
(iii) rref(A) = In.
(iv) rank(A) = n.
(v) im(A) = R^n.
(vi) ker(A) = {0}.
(vii) The vi are a basis of R^n.
(viii) The vi span R^n.
(ix) The vi are linearly independent.
(x) det(A) ≠ 0.
(xi) 0 is not an eigenvalue of A.

Characterization (x) was stated in Fact 5.2.5. The equivalence of (vi) and (xi) follows from the definition of an eigenvalue. (Note that an eigenvector of A with eigenvalue 0 is a nonzero vector in ker(A).)

EXERCISES

GOALS  Apply the concept of eigenvalues and eigenvectors. Use eigenvectors to analyze discrete dynamical systems.

In Exercises 1 to 4, let A be an invertible n x n matrix and v an eigenvector of A with associated eigenvalue λ.
1. Is v an eigenvector of A³? If so, what is the eigenvalue?
2. Is v an eigenvector of A⁻¹? If so, what is the eigenvalue?
3. Is v an eigenvector of A + 2In? If so, what is the eigenvalue?
4. Is v an eigenvector of 7A? If so, what is the eigenvalue?
5. If a vector v is an eigenvector of both A and B, is v an eigenvector of A + B?
6. True or false? A square matrix A is invertible if (and only if) 0 is not an eigenvalue of A.
7. If v is an eigenvector of the n x n matrix A with associated eigenvalue λ, what can you say about ker(λIn - A)? Is the matrix λIn - A invertible?
8. Find all 2 x 2 matrices for which e1 = [1; 0] is an eigenvector with associated eigenvalue 5.
9. Find all 2 x 2 matrices for which e1 is an eigenvector.
10. Find all 2 x 2 matrices for which […] is an eigenvector with associated eigenvalue 5.
11. Find all 2 x 2 matrices for which […] is an eigenvector.
12. Consider the matrix A = […]. Show that 2 and 4 are eigenvalues of A and find all corresponding eigenvectors.
13. Show that 4 is an eigenvalue of A = [-6 6; -15 13] and find all corresponding eigenvectors.
14. Find all 4 x 4 matrices for which e2 is an eigenvector.

Arguing geometrically, find all eigenvectors and eigenvalues of the linear transformations listed in Exercises 15 to 22. Find a basis consisting of eigenvectors if possible.
15. Reflection in a line L in R².
16. Rotation through an angle of 180° in R².
17. Counterclockwise rotation through an angle of 45° followed by a dilation by 2 in R².
18. Reflection in a plane E in R³.
19. Orthogonal projection onto a line L in R³.
20. Rotation about the e3-axis through an angle of 90°, counterclockwise as viewed from the positive e3-axis, in R³.
21. Dilation by 5 in R³.
22. The shear with T(v) = v and T(w) = v + w, for the vectors v and w in R² sketched below. [Sketch of v and w.]

23. a. Consider an invertible matrix S = [v1 v2 ... vn]. Find S⁻¹vi. Hint: S ei = vi.
    b. Consider an n x n matrix A and an invertible n x n matrix S = [v1 v2 ... vn] whose columns are eigenvectors of A, with Avi = λi vi. Find S⁻¹AS. Hint: Think about the product S⁻¹AS column by column.

In Exercises 24 to 29, consider a dynamical system x(t + 1) = A x(t) with two components. Below, we sketch the initial state vector x0 and two eigenvectors, v1 and v2, of A (with eigenvalues λ1 and λ2, respectively). For the given values of λ1 and λ2, sketch a rough trajectory. Think about the future and the past of the system. [Sketch of x0, v1, and v2.]
24. λ1 = 1.1, λ2 = 0.9.
25. λ1 = 1, λ2 = 0.9.
26. λ1 = 1.1, λ2 = 1.
27. λ1 = 0.9, λ2 = 0.8.
28. λ1 = 1.2, λ2 = 1.1.
29. λ1 = 0.9, λ2 = 0.9.

In Exercises 30 to 32, consider the dynamical system

    x(t + 1) = [ λ   0  ] x(t).
               [ 0  0.9 ]

Sketch a phase portrait of this system for the given values of λ:
30. λ = 1.2.
31. λ = 1.
32. λ = 0.9.
33. Consider the coyotes and roadrunners system discussed in this section. Find closed formulas for c(t) and r(t), for the initial populations c0 = 100, r0 = 800.
34. Find a 2 x 2 matrix A such that […] and […] are eigenvectors of A, with eigenvalues 5 and 10, respectively.
35. Find a 2 x 2 matrix A such that x(t) = […] is a trajectory of the dynamical system x(t + 1) = A x(t).
36. Imagine that you are diabetic, and you have to pay close attention to how your body metabolizes glucose. Let g(t) represent the excess glucose concentration in your blood, usually measured in milligrams of glucose per 100 milliliters of blood. (Excess means that we measure how much the glucose concentration deviates from your fasting level, i.e., the level your system approaches after many hours of fasting.) A negative value of g(t) indicates that the glucose concentration is below fasting level at time t. Shortly after you eat a heavy meal, the function g(t) will reach a peak, and then it will slowly return to 0. Certain hormones help regulate glucose, especially the hormone insulin. Let h(t) represent the excess hormone concentration in your blood. Researchers have developed mathematical models for the glucose regulatory system. Here is one such model, in slightly simplified form (these formulas apply between meals; obviously, the system is disturbed during and right after a meal):

    g(t + 1) = a g(t) - b h(t)
    h(t + 1) = c g(t) + d h(t)

where time t is measured in minutes; a and d are constants slightly less than 1; and b and c are small positive constants. For your system the equations might be

    g(t + 1) = 0.978 g(t) - 0.006 h(t)
    h(t + 1) = 0.004 g(t) + 0.992 h(t)

The term -0.006 h(t) in the first equation is negative because insulin helps your body absorb glucose. The term 0.004 g(t) is positive because glucose in your blood stimulates the cells of the pancreas to secrete insulin (for a more thorough discussion of this model, read E. Ackerman et al., "Blood glucose regulation and diabetes," Chapter 4 in Concepts and Models of Biomathematics, Marcel Dekker, 1969). Consider the coefficient matrix

    A = [ 0.978  -0.006 ]
        [ 0.004   0.992 ]

of this dynamical system.
    a. Verify that [1; -2] and [3; -1] are eigenvectors of A. Find the associated eigenvalues.
    b. After you have consumed a heavy meal, the concentrations in your blood are g0 = 100 and h0 = 0. Find closed formulas for g(t) and h(t). Sketch the trajectory. Briefly describe the evolution of this system in practical terms.
    c. For the case discussed in part b, how long does it take for the glucose concentration to fall below fasting level? (This quantity is useful in diagnosing diabetes: a period of more than four hours may indicate mild diabetes.)
37. Consider the matrix A = […].
    a. Use the geometric interpretation of this transformation to find the eigenvalues of A.
    b. Find two linearly independent eigenvectors for A.
38. Consider the growth of a lilac bush. The state of this lilac bush for several years (at year's end) is sketched below. Let n(t) be the number of new branches (grown in the year t) and a(t) the number of old branches. In the sketch, the new branches are represented by shorter lines. Each old branch will grow two new branches in the following year. We assume that no branches die.
    [Sketch: year 0: n(0) = 1, a(0) = 0; year 1: n(1) = 0, a(1) = 1; year 2: n(2) = 2, a(2) = 1; year 3: n(3) = 2, a(3) = 3; year 4: n(4) = 6, a(4) = 5.]
    a. Find the matrix A such that [n(t + 1); a(t + 1)] = A [n(t); a(t)].
    b. We are told that […] and […] are eigenvectors of A. Find the associated eigenvalues.
    c. Find closed formulas for n(t) and a(t).
39. Consider two n x n matrices A and S, where S is invertible. Show that the matrices A and S⁻¹AS have the same eigenvalues. Hint: If v is an eigenvector of S⁻¹AS, then Sv is an eigenvector of A.
40. Suppose v is an eigenvector of the n x n matrix A, with eigenvalue 4. Explain why v is an eigenvector of A² + 2A + 3In. What is the associated eigenvalue?
41. True or false?
    a. If 0 is an eigenvalue of the matrix A, then det(A) = 0.
    b. If […] is an eigenvector of […], then […] is an eigenvector of […].
    c. If A is a 4 x 4 matrix with A⁴ = 0, then the only possible eigenvalue of A is 0.
42. True or false?
    a. If v is a nonzero vector in R^n, then v is an eigenvector of the matrix v vᵀ.
    b. If v is an eigenvector of A and B, then v is an eigenvector of AB.
    c. If v is an eigenvector of A, then v is in ker(A) or in im(A).

FINDING THE EIGENVALUES OF A MATRIX

In the last section, we saw how eigenvalues help us analyze a dynamical system

    x(t + 1) = A x(t).

Now we will see how we can actually find them. Consider an n x n matrix A and a scalar λ. By definition, λ is an eigenvalue of A if there is a nonzero vector v in R^n such that

    Av = λv,

or

    λv - Av = 0,    or    (λIn - A)v = 0.

This means, by definition of the kernel, that ker(λIn - A) ≠ {0} (i.e., there are other vectors in the kernel besides the zero vector). This is the case if (and only if) the matrix λIn - A is not invertible (by Fact 3.1.7), that is, if det(λIn - A) = 0 (by Fact 5.2.5).

Fact 6.2.1
Consider an n x n matrix A and a scalar λ. Then λ is an eigenvalue of A if (and only if) det(λIn - A) = 0.

To clarify things, we represent the observations made above as a series of equivalent statements:

    λ is an eigenvalue of A.
    ⇕
    There is a nonzero vector v such that Av = λv or (λIn - A)v = 0.
    ⇕
    ker(λIn - A) ≠ {0}.
    ⇕
    λIn - A is not invertible.
    ⇕
    det(λIn - A) = 0.

EXAMPLE 1 > Find the eigenvalues of the matrix

    A = [ 1  2 ]
        [ 4  3 ]

Solution
By Fact 6.2.1, we have to look for numbers λ such that det(λI2 - A) = 0.

    det(λI2 - A) = det( [ λ  0 ] - [ 1  2 ] ) = det [ λ-1   -2  ]
                       [ 0  λ ]   [ 4  3 ]         [ -4   λ-3 ]
                 = (λ - 1)(λ - 3) - 8
                 = λ² - 4λ - 5
                 = (λ - 5)(λ + 1)

The equation det(λI2 - A) = (λ - 5)(λ + 1) = 0 holds for λ1 = 5 and λ2 = -1. These two scalars are the eigenvalues of A. <

EXAMPLE 2 > Find the eigenvalues of

    A = [ 1  2  3  4  5 ]
        [ 0  2  3  4  5 ]
        [ 0  0  3  4  5 ]
        [ 0  0  0  4  5 ]
        [ 0  0  0  0  5 ]

Solution
Again, we have to solve the equation det(λI5 - A) = 0.

    det(λI5 - A) = det [ λ-1   -2    -3    -4    -5  ]
                       [  0   λ-2   -3    -4    -5  ]
                       [  0    0   λ-3    -4    -5  ]
                       [  0    0    0    λ-4   -5  ]
                       [  0    0    0     0   λ-5 ]
                 = (λ - 1)(λ - 2)(λ - 3)(λ - 4)(λ - 5) = 0

Recall that the determinant of a triangular matrix is just the product of the diagonal entries (Fact 5.1.3). The solutions are 1, 2, 3, 4, 5; these are the eigenvalues of A. <

Fact 6.2.2
The eigenvalues of a triangular matrix are its diagonal entries.
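Both facts are easy to check numerically. The minimal sketch below (ours, not from the text) feeds the triangular matrix of Example 2 to numpy and tests the criterion det(λIn - A) = 0 of Fact 6.2.1 at a few candidate values of λ.

    import numpy as np

    # The triangular matrix from Example 2.
    A = np.array([[1, 2, 3, 4, 5],
                  [0, 2, 3, 4, 5],
                  [0, 0, 3, 4, 5],
                  [0, 0, 0, 4, 5],
                  [0, 0, 0, 0, 5]], dtype=float)

    # Fact 6.2.2: the eigenvalues of a triangular matrix are its diagonal entries.
    print(sorted(np.linalg.eigvals(A).real))    # [1.0, 2.0, 3.0, 4.0, 5.0]

    # Fact 6.2.1: lambda is an eigenvalue exactly when det(lambda*I - A) = 0.
    for lam in [1, 2, 6]:
        d = np.linalg.det(lam * np.eye(5) - A)
        print(lam, np.isclose(d, 0.0))           # 1 True, 2 True, 6 False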

By Fact 6.2.1, we can think of the eigenvalues of an n x n matrix A as the zeros of the function

    fA(λ) = det(λIn - A).

EXAMPLE 3 > Find fA(λ) for the 2 x 2 matrix

    A = [ a  b ]
        [ c  d ]

Solution
A straightforward computation shows that

    fA(λ) = det(λI2 - A) = det [ λ-a   -b  ] = λ² - (a + d)λ + (ad - bc).
                               [ -c   λ-d ]

This is a quadratic polynomial with constant term det(A) = ad - bc. This makes sense because the constant term is fA(0) = det(0·I2 - A) = det(-A) = det(A). The coefficient of λ is the negative of the sum of the diagonal entries of A. Since this sum is important in many other contexts as well, we introduce a name for it.

Characteristic polynomial
The function fA(λ) = det(λIn - A) is called the characteristic polynomial of A.

Definition 6.2.3  Trace
The sum of the diagonal elements of an n x n matrix A is called the trace of A, denoted by tr(A).

Fact 6.2.4
If A is a 2 x 2 matrix, then

    fA(λ) = λ² - tr(A)λ + det(A).

For the matrix A = [1 2; 4 3] in Example 1 we have tr(A) = 4 and det(A) = -5, so that

    fA(λ) = λ² - 4λ - 5,

as we found earlier.
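Fact 6.2.4 gives a quick recipe for 2 x 2 eigenvalues: apply the quadratic formula to λ² - tr(A)λ + det(A). A short sketch of that recipe (ours, not from the text) follows, checked on the matrix of Example 1.

    import numpy as np

    def eigenvalues_2x2(A):
        """Real eigenvalues of a 2x2 matrix via Fact 6.2.4:
        f_A(lambda) = lambda^2 - tr(A)*lambda + det(A)."""
        tr = A[0, 0] + A[1, 1]
        det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]
        disc = tr**2 - 4 * det
        if disc < 0:
            return []                     # no real eigenvalues
        r = np.sqrt(disc)
        return [(tr + r) / 2, (tr - r) / 2]

    A = np.array([[1.0, 2.0], [4.0, 3.0]])   # the matrix of Example 1
    print(eigenvalues_2x2(A))                # [5.0, -1.0]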

What is the format of fA(λ) for an n x n matrix A? We have

    fA(λ) = det(λIn - A) = det [ λ-a11   -a12   ...   -a1n  ]
                               [ -a21   λ-a22   ...   -a2n  ]
                               [  ...     ...           ... ]
                               [ -an1   -an2    ...  λ-ann ]

The contribution a pattern makes to this determinant is a polynomial in λ of degree less than or equal to n. This implies that the determinant itself, being the sum of all these contributions, is a polynomial of degree less than or equal to n. We can be more precise: the diagonal pattern of the matrix λIn - A makes the contribution

    (λ - a11)(λ - a22) ... (λ - ann) = λ^n - (a11 + a22 + ... + ann)λ^(n-1) + (a polynomial of degree ≤ (n - 2))
                                    = λ^n - tr(A)λ^(n-1) + (a polynomial of degree ≤ (n - 2)).

Any other pattern involves at least two scalars off the diagonal (Exercise 5.1.39), and its contribution toward the determinant is therefore a polynomial of degree less than or equal to n - 2. This implies that

    fA(λ) = λ^n - tr(A)λ^(n-1) + (a polynomial of degree ≤ (n - 2)).

The constant term of this polynomial is fA(0) = det(-A) = (-1)^n det(A). It is possible to describe the coefficients of λ, λ², ..., λ^(n-2), but these formulas are complicated, and we do not need them here.

Fact 6.2.5  Characteristic polynomial
Consider an n x n matrix A. Then fA(λ) = det(λIn - A) is a polynomial of degree n, of the form

    fA(λ) = λ^n - tr(A)λ^(n-1) + ... + (-1)^n det(A).

Note that Fact 6.2.4 represents a special case of Fact 6.2.5. What does Fact 6.2.5 tell us about the number of eigenvalues of an n x n matrix A? We know from elementary algebra that a polynomial of degree n has at most n zeros. Therefore, an n x n matrix has at most n eigenvalues. If the characteristic polynomial fA(λ) is of odd degree, then there is at least one zero, by the intermediate value theorem, since

    lim (λ → ∞) fA(λ) = ∞    and    lim (λ → -∞) fA(λ) = -∞.

See Figure 1. [Figure 1: graph of fA(λ) for odd n; the graph crosses the λ-axis at an eigenvalue of A.]
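The pattern of Fact 6.2.5 can be observed numerically. In the sketch below (ours; the 3 x 3 matrix is a made-up stand-in), numpy's poly routine returns the coefficients of det(λI - A), and we check the λ^(n-1) coefficient against -tr(A) and the constant term against (-1)^n det(A).

    import numpy as np

    # A sample 3x3 matrix (our stand-in) to illustrate Fact 6.2.5.
    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    n = A.shape[0]

    # np.poly returns the coefficients of det(lambda*I - A),
    # from lambda^n down to the constant term.
    coeffs = np.poly(A)
    print(coeffs[0], np.isclose(coeffs[1], -np.trace(A)))        # 1.0 True
    print(np.isclose(coeffs[-1], (-1)**n * np.linalg.det(A)))    # True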

EXAMPLE 4 > Find the eigenvalues of

    A = [ 1  2  3  4  5 ]
        [ 0  2  3  4  5 ]
        [ 0  0  1  2  3 ]
        [ 0  0  0  2  3 ]
        [ 0  0  0  0  1 ]

Solution
The characteristic polynomial is fA(λ) = (λ - 1)³(λ - 2)². The eigenvalues are 1 and 2. Since 1 is a root of multiplicity 3 of the characteristic polynomial, we say that the eigenvalue 1 has algebraic multiplicity 3. Likewise, the eigenvalue 2 has algebraic multiplicity 2. <

Definition 6.2.6  Algebraic multiplicity
We say that an eigenvalue λ0 of a square matrix A has algebraic multiplicity k if

    fA(λ) = (λ - λ0)^k g(λ)

for some polynomial g(λ) with g(λ0) ≠ 0, that is, if λ0 is a root of multiplicity k of fA(λ).

In Example 4, we had

    fA(λ) = (λ - 1)³ (λ - 2)²,    with λ0 = 1 and g(λ) = (λ - 2)².

The algebraic multiplicity of the eigenvalue λ0 = 1 is k = 3.
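Numerically, the algebraic multiplicities show up as repeated roots of the characteristic polynomial. The sketch below (ours, not from the text) counts the (rounded) eigenvalues of the matrix from Example 4.

    import numpy as np
    from collections import Counter

    # The matrix from Example 4; its eigenvalues are 1, 1, 1, 2, 2.
    A = np.array([[1, 2, 3, 4, 5],
                  [0, 2, 3, 4, 5],
                  [0, 0, 1, 2, 3],
                  [0, 0, 0, 2, 3],
                  [0, 0, 0, 0, 1]], dtype=float)

    # Counting (numerically rounded) eigenvalues gives the algebraic multiplicities.
    mult = Counter(np.round(np.linalg.eigvals(A).real, 6))
    print(mult)    # Counter({1.0: 3, 2.0: 2})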

EXAMPLE 5 > Find the eigenvalues of

    A = [  2  -1  -1 ]
        [ -1   2  -1 ]
        [ -1  -1   2 ]

with their algebraic multiplicities.

Solution

    fA(λ) = det [ λ-2    1    1  ]
                [  1   λ-2    1  ]
                [  1    1   λ-2 ]
          = λ³ - 6λ² + 9λ = λ(λ - 3)²

We find two distinct eigenvalues, 3 and 0, with algebraic multiplicities 2 and 1, respectively. <

Let us summarize:

Fact 6.2.7
An n x n matrix has at most n eigenvalues, even if they are counted with their algebraic multiplicities. If n is odd, then an n x n matrix has at least one eigenvalue.

If n is even, an n x n matrix A need not have any real eigenvalues. Perhaps the simplest example is

    A = [ 0  -1 ]
        [ 1   0 ]

with fA(λ) = λ² + 1. See Figure 2. [Figure 2: graph of fA(λ) = λ² + 1; it has no real zeros.] Geometrically it makes sense that A has no real eigenvalues: the transformation T(x) = Ax is a counterclockwise rotation through an angle of 90°, so that no nonzero vector x is mapped to a scalar multiple of itself.

EXAMPLE 6 > Describe all possible cases for the number of real eigenvalues of a 3 x 3 matrix and their algebraic multiplicities. Give an example in each case and graph the characteristic polynomial.

Solution
The characteristic polynomial either factors completely,

    fA(λ) = (λ - λ1)(λ - λ2)(λ - λ3),

or it has a quadratic factor without real zeros:

    fA(λ) = (λ - λ1) p(λ).

In the first case, the λi could all be distinct, two of them could be equal, or they could all be equal. This leads to the following possibilities:

    Case    No. of Distinct Eigenvalues    Algebraic Multiplicities
     1                  3                       1 each
     2                  2                       2 and 1
     3                  1                       3
     4                  1                       1

Examples for each case follow.

Case 1 • A = [1 0 0; 0 2 0; 0 0 3], with fA(λ) = (λ - 1)(λ - 2)(λ - 3). See Figure 3.
Case 2 • A = [1 1 0; 0 1 0; 0 0 2], with fA(λ) = (λ - 1)²(λ - 2). See Figure 4.
Case 3 • A = I3, with fA(λ) = (λ - 1)³. See Figure 5.
Case 4 • A = [1 0 0; 0 0 -1; 0 1 0], with fA(λ) = (λ - 1)(λ² + 1). See Figure 6.

[Figures 3 to 6: graphs of the characteristic polynomials in Cases 1 to 4.]

You can recognize an eigenvalue λ0 whose algebraic multiplicity exceeds 1 on the graph of fA(λ) by the fact that fA(λ0) = fA′(λ0) = 0 (the derivative is zero). The verification of this observation is left as Exercise 37. <

• Finding the Eigenvalues of a Matrix in Practice

To find the eigenvalues of an n x n matrix, we have to find the zeros of the characteristic polynomial, a polynomial of degree n. For n = 2, this is a trivial matter: we can either factor the polynomial by inspection or use the quadratic formula. The problem of finding the zeros of a polynomial of higher degree is nontrivial; it has been of considerable interest throughout the history of mathematics. In the early 1500s, Italian mathematicians found formulas in the cases n = 3 and n = 4, published in the Ars Magna by Gerolamo Cardano.¹ See Exercise 38 for the case n = 3. During the next 300 years, people tried to find a general formula to solve the quintic (a polynomial equation of fifth order). In 1824 the Norwegian mathematician Niels Henrik Abel (1802-1829) showed that no such general solution is possible, putting an end to the long search. The first numerical example of a quintic that cannot be solved by radicals was given by the French mathematician Evariste Galois (1811-1832). (Note the short life spans of these two brilliant mathematicians. Abel died from tuberculosis; Galois, in a duel.)

It is usually impossible to find the exact eigenvalues of a matrix. To find approximations for the eigenvalues, you could graph the characteristic polynomial (using technology). The graph may give you an idea of the number of eigenvalues and their approximate values. Numerical analysts tell us that this is not a very efficient way to go about finding the eigenvalues; other techniques are used in practice (see Exercise 6.4.33 for an example; another approach uses QR factorization).² If you have to find the eigenvalues of a matrix, it may be worth trying out a few small integers, such as ±1 and ±2 (those matrices considered in introductory linear algebra texts often happen to have such eigenvalues).

¹Cardano (1501-1576) was a humanist with a wide range of interests. In his book Liber de ludo aleae he presents the first systematic computation of probabilities. Trained as a physician, he gave the first clinical description of typhus fever. In his book Somniorum Synesiorum (Basel, 1562) he explores the meaning of dreams (sample: "To dream of living in a new and unknown city means imminent death"). Still, he is best known today as the most outstanding mathematician of his time and the author of the Ars Magna. In 1570 he was arrested on accusation of heresy; he lost his academic position and the right to publish. For an English translation of parts of Chapter XI of the Ars Magna (dealing with cubic equations), see D. J. Struik (editor), A Source Book in Mathematics 1200-1800, Princeton University Press, 1986.
²See, for example, J. Stoer and R. Bulirsch, Introduction to Numerical Analysis, Springer-Verlag, 1980.
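In practice, then, one calls a numerical eigenvalue routine rather than solving fA(λ) = 0 symbolically. A minimal sketch (ours, not from the text; the random 5 x 5 matrix is a stand-in) using numpy, whose eigvals is based on iterative methods of the kind mentioned above:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5))     # a random 5x5 matrix, as a stand-in

    lam = np.linalg.eigvals(A)
    print(lam)                           # typically a mix of real and complex values

    # Sanity check: det(lambda*I - A) vanishes (up to roundoff) at each eigenvalue.
    for l in lam:
        print(abs(np.linalg.det(l * np.eye(5) - A)) < 1e-8)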

EXERCISES

GOALS  Use the characteristic polynomial fA(λ) = det(λIn - A) to find the eigenvalues of a matrix A, with their algebraic multiplicities.

For each of the matrices in Exercises 1 to 13, find all real eigenvalues, with their algebraic multiplicities. Show your work. Do not use technology.
1. […]   2. […]   3. […]   4. […]   5. […]
6. […]   7. […]   8. […]   9. […]   10. […]
11. […]   12. […]   13. […]

14. Consider a 4 x 4 matrix A = [B C; 0 D], where B, C, and D are 2 x 2 matrices. What is the relationship between the eigenvalues of A, B, C, and D?
15. Consider the matrix A = [1 k; 1 1], where k is an arbitrary constant. For which values of k does the matrix have two distinct real eigenvalues? When is there no real eigenvalue?
16. Consider the matrix A = [a b; b c], where a, b, and c are nonzero constants. For which choices of a, b, and c does A have two distinct eigenvalues?
17. Consider the matrix A = [a b; b -a], where a and b are arbitrary constants. Find all eigenvalues of A. Explain in terms of the geometric interpretation of the linear transformation T(x) = Ax.
18. Consider the matrix A = [a b; b a], where a and b are arbitrary constants. Find all eigenvalues of A.
19. True or false? If the determinant of a 2 x 2 matrix A is negative, then A has two distinct real eigenvalues.
20. Consider a 2 x 2 matrix A with two distinct real eigenvalues, λ1 and λ2. Express det(A) in terms of λ1 and λ2. Do the same for the trace of A. Hint: The characteristic polynomial is fA(λ) = (λ - λ1)(λ - λ2). Verify your answers in the case of the matrix A = [1 2; 4 3].
21. Consider an n x n matrix A with n distinct real eigenvalues, λ1, λ2, ..., λn. Express the determinant of A in terms of the λi. Do the same for the trace of A.
22. Consider an arbitrary n x n matrix A. What is the relationship between the characteristic polynomials of A and Aᵀ? What does your answer tell you about the eigenvalues of the two matrices?

23. Consider two n x n matrices A and B, where B = S⁻¹AS for an invertible n x n matrix S. What is the relationship between the characteristic polynomials of A and B? What does your answer tell you about the eigenvalues of the two matrices?
24. Find all eigenvalues of the matrix

    A = [ 0.5  0.25 ]
        [ 0.5  0.75 ]

25. Consider a 2 x 2 matrix of the form

    A = [ a  b ]
        [ c  d ]

where a, b, c, d are positive numbers such that a + c = b + d = 1 (the matrix in Exercise 24 has this format). A matrix of this form is called a regular transition matrix. Verify that [b; c] and [1; -1] are eigenvectors of A. What are the associated eigenvalues? Is the absolute value of these eigenvalues more or less than 1? Sketch a phase portrait.
26. Based on your answer to Exercise 25, sketch a phase portrait of the dynamical system

    x(t + 1) = [ 0.5  0.25 ] x(t).
               [ 0.5  0.75 ]

27. a. Based on your answer to Exercise 25, find closed formulas for the components of the dynamical system

    x(t + 1) = [ 0.5  0.25 ] x(t),
               [ 0.5  0.75 ]

    with initial value x0 = e1. Then do the same for the initial value x0 = e2. Sketch the two trajectories.
    b. Consider the matrix A = [0.5 0.25; 0.5 0.75]. Using technology, compute some powers of the matrix A: say A², A⁵, A¹⁰, .... What do you observe? Explain your answer carefully.
    c. If A = [a b; c d] is an arbitrary regular transition matrix, what can you say about the powers A^t as t goes to infinity?
28. Consider the isolated Swiss town of Andelfingen, inhabited by 1200 families. Each family takes a weekly shopping trip to the only grocery store in town, run by Mr. and Mrs. Wipf, until the day when a new, fancier (and cheaper) chain store, Migros, opens its doors. It is not expected that everybody will immediately run to the new store, but we do anticipate that 20% of those shopping at Wipf's one week switch to Migros the following week. Some people who do switch miss the personal service (and the gossip) and switch back: we expect that 10% of those shopping at Migros one week go to Wipf's the following week. The state of this town (as far as grocery shopping is concerned) can be represented by the vector

    x(t) = [ w(t) ]
           [ m(t) ]

where w(t) and m(t) are the numbers of families shopping at Wipf's and at Migros, respectively, t weeks after Migros opens. Suppose w(0) = 1200 and m(0) = 0.
    a. Find a 2 x 2 matrix A such that x(t + 1) = A x(t). Verify that A is a regular transition matrix.
    b. How many families will shop at each store after t weeks? Give closed formulas.
    c. The Wipfs expect that they must close down when they have less than 250 customers a week. When does that happen?
29. Consider an n x n matrix A such that the sum of the entries in each row is 1. Show that the vector e = [1; 1; ...; 1] in R^n is an eigenvector of A. What is the corresponding eigenvalue?
30. a. Consider an n x n matrix A such that the sum of the entries in each row is 1 and all entries are positive. Consider an eigenvector v of A with positive components. Show that the associated eigenvalue is less than or equal to 1. Hint: Consider the largest entry of v. What can you say about the corresponding entry of Av?
    b. If we drop the requirement that the components of the eigenvector v be positive, is it still true that the associated eigenvalue is less than or equal to 1 in absolute value? Justify your answer.
31. Consider a matrix A with positive entries such that the entries in each column add up to 1. Explain why 1 is an eigenvalue of A. What can you say about the other eigenvalues? Is e = [1; 1; ...; 1] necessarily an eigenvector? Hint: Consider Exercises 22, 29, and 30.
32. Consider the matrix

    A = [ 0  1  0 ]
        [ 0  0  1 ]
        [ k  3  0 ]

where k is an arbitrary constant. For which values of k does A have three distinct real eigenvalues? What happens in the other cases? Hint: Graph the function g(λ) = λ³ - 3λ. Find its local maxima and minima.

33. a. Find the characteristic polynomial of the matrix

    A = [ 0  1  0 ]
        [ 0  0  1 ]
        [ a  b  c ]

    b. Can you find a 3 x 3 matrix M whose characteristic polynomial is λ³ - 17λ² + 5λ - π?
34. Suppose a certain 4 x 4 matrix A has two distinct real eigenvalues. What could the algebraic multiplicities of these eigenvalues be? Give an example for each possible case and sketch the characteristic polynomial.
35. Give an example of a 4 x 4 matrix A without real eigenvalues.
36. For an arbitrary positive integer n, give a 2n x 2n matrix A without real eigenvalues.
37. Consider an eigenvalue λ0 of an n x n matrix A. We are told that the algebraic multiplicity of λ0 exceeds 1. Show that fA′(λ0) = 0, that is, the derivative of the characteristic polynomial of A vanishes at λ0.
38. In his groundbreaking text Ars Magna (Nuremberg, 1545), the Italian mathematician Gerolamo Cardano explains how to solve cubic equations. In Chapter XI, he considers the following example:

    x³ + 6x = 20.

    a. Explain why this equation has exactly one (real) solution. Here, this solution is easy to find by inspection. The point of the exercise is to show a systematic way to find it.
    b. Cardano explains his method as follows (we are using modern notation for the variables): "I take two cubes v³ and u³ whose difference shall be 20, so that the product vu shall be 2, that is, a third of the coefficient of the unknown x. Then, I say that v - u is the value of the unknown x." Show that if v and u are chosen as stated by Cardano, then x = v - u is indeed the solution of the equation x³ + 6x = 20.
    c. Solve the system

        v³ - u³ = 20,    vu = 2

    to find u and v.
    d. Consider the equation

        x³ + px = q,

    where p is positive. Using your work in parts a, b, and c as a guide, show that the unique solution of this equation is

        x = ∛( q/2 + √( q²/4 + p³/27 ) ) - ∛( -q/2 + √( q²/4 + p³/27 ) ).

    What can go wrong when p is negative?
    e. Consider an arbitrary cubic equation

        x³ + ax² + bx + c = 0.

    Show that the substitution x = t - (a/3) allows you to write this equation as

        t³ + pt = q.

FINDING THE EIGENVECTORS OF A MATRIX

After we have found an eigenvalue λ of an n x n matrix A, we may be interested in the corresponding eigenvectors. We have to find the vectors v in R^n such that

    Av = λv    or    (λIn - A)v = 0.

In other words, we have to find the kernel of the matrix λIn - A. The following notation is useful:

Definition 6.3.1  Eigenspace
Consider an eigenvalue λ of an n x n matrix A. Then the kernel of the matrix λIn - A is called the eigenspace associated with λ, denoted by Eλ:

    Eλ = ker(λIn - A)

Note that Eλ consists of all solutions v of the equation

    Av = λv.

In other words, the eigenspace Eλ consists of all eigenvectors with eigenvalue λ, together with the zero vector.

EXAMPLE 1 > Let T(x) = Ax be the orthogonal projection onto a plane E in R³. Describe the eigenspaces geometrically.

Solution

We can find the eigenvectors and eigenvalues by inspection: the nonzero vectors v in E are eigenvectors with eigenvalue 1, because Av = v = 1v. The eigenspace E1 consists of all vectors v in R³ such that Av = 1v = v; that is, E1 is just the plane E. Likewise, E0 is the line E⊥ perpendicular to E. Note that E0 is simply the kernel of A; that is, E0 consists of all solutions of the equation Av = 0. See Figure 1. [Figure 1: E1 = E, the vectors v with Av = 1v = v; E0 = E⊥, the vectors v with Av = 0v = 0.] <

To find the eigenvectors associated with a known eigenvalue λ algebraically, we seek a basis of the eigenspace Eλ. Since Eλ = ker(λIn - A), this amounts to finding a basis of a kernel, a problem we can handle (see Section 3.3).

EXAMPLE 2 > Find the eigenvectors of the matrix

    A = [ 1  2 ]
        [ 4  3 ]

Solution
In Section 6.2, Example 1, we saw that the eigenvalues are 5 and -1. Then

    E5 = ker(5 I2 - A) = ker [  4  -2 ]
                             [ -4   2 ]

It is easy enough to find this kernel algebraically, using the reduced row echelon form:

    E5 = ker [  4  -2 ] = ker [ 1  -1/2 ] = span [ 1/2 ] = span [ 1 ]
             [ -4   2 ]       [ 0   0   ]        [  1  ]        [ 2 ]

We can also think about this problem geometrically: we are looking for a vector that is perpendicular to the two rows (which are parallel, of course). A vector perpendicular to [4, -2] is [2, 4] (swap the two components and change one of the signs). Therefore,

    E5 = span [ 1 ]        and        E-1 = ker(-I2 - A) = ker [ -2  -2 ] = span [  1 ]
              [ 2 ]                                            [ -4  -4 ]        [ -1 ]

We can (and should) check that the vectors we found are indeed eigenvectors of A, with the eigenvalues we claim:

    A [ 1 ] = [  5 ] = 5 [ 1 ],        A [  1 ] = [ -1 ] = (-1) [  1 ]
      [ 2 ]   [ 10 ]     [ 2 ]           [ -1 ]   [  1 ]        [ -1 ]

Both eigenspaces are lines, as shown in Figure 2. [Figure 2: the lines E5 = span[1; 2] and E-1 = span[1; -1].] <

To find thi kerne! , bring the matrix into reduced row echelon fom1 and olve the corresponding system.

[~

1

-1]

0

0

- I

-1

rref -+

The general solution of the system

and -)

- ;] = span [

-1]

IX2

X3

= 01 = 0

[~

1 0 0

!]

324 •

Sec. 6. 3 Fi nding the Eigen vectors of a Ma trix •

ha p. 6 Eigenvalues and Eigenvectors

325

To discu s these different case , the following termin ology is useful :

Definition 6.3.2

Geometrie multiplicity C 01~ sid e r an eigenvalue Ä of a matri x A. Then the dimension of the eigenspace

Therefore

E;,_

Eo = ker

I

but

0

G+

Ä.

(algebraic multip licity of eigenvalue I) = 2,

l

0 0

[ 0

ca lled the geometric multiplicity of

Example 3 shows that the geometri c multiplicity of an eigenvalue may be di fferent from the al gebraic multipli city: we have

Now Iet us think about the eigenspace Eo: I

IS

X2

f.:\ = 0 ~ = 0

The generat solution is

?] ~2 [-X

= X2

(geometric mu ltiplicity of eigenvalue 1) = 1.

I

[-1]

However, the following inequality always holds:

Fact 6.3.3

Consider an eigenvalue ), of a matrix A . Then

~

Therefore,

Both eigenspaces are lines in the x 1x 2-plane, as shown in Figure 3.

(geometric multiplicity of Ä) :::: (algebraic multiplicity of Ä).

To give an elegant proof, we need some additional macrunery, wruch we will develop in Chapter 7· the proof will then be left as Exercise 7.2.60. The following example illustrate the relationsrup between the algebraic and geometri c multiplicity of an eigenvalue.
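Both multiplicities are easy to compute numerically: the algebraic multiplicity by counting repeated eigenvalues, the geometric one from dim Eλ = n - rank(λIn - A). A sketch (ours, not from the text), checked on the matrix of Example 3:

    import numpy as np

    def multiplicities(A, lam, tol=1e-8):
        """Algebraic and geometric multiplicity of the eigenvalue lam (numerically)."""
        eigs = np.linalg.eigvals(A)
        algebraic = int(np.sum(np.abs(eigs - lam) < tol))
        # geometric multiplicity = dim ker(lam*I - A) = n - rank(lam*I - A)
        n = A.shape[0]
        geometric = n - np.linalg.matrix_rank(lam * np.eye(n) - A, tol=tol)
        return algebraic, geometric

    A = np.array([[1.0, 1.0, -1.0],
                  [0.0, 0.0, -1.0],
                  [0.0, 0.0,  1.0]])   # the matrix of Example 3
    print(multiplicities(A, 1.0))      # (2, 1): algebraic 2, geometric 1
    print(multiplicities(A, 0.0))      # (1, 1)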

EXAMPLE 4 > Consider an upper triangular matrix of the form

    A = [ 1  •  •  •  • ]
        [ 0  2  •  •  • ]
        [ 0  0  4  •  • ]
        [ 0  0  0  4  • ]
        [ 0  0  0  0  4 ]

where the bullets are placeholders for arbitrary entries. Without referring to Fact 6.3.3, what can you say about the geometric multiplicity of the eigenvalue 4?

Solution

    E4 = ker(4 I5 - A) = ker [ 3  •  •  •  • ]
                             [ 0  2  •  •  • ]
                             [ 0  0  0  •  • ]
                             [ 0  0  0  0  • ]
                             [ 0  0  0  0  0 ]

The reduced row echelon form is

    [ 1  0  •  •  • ]
    [ 0  1  •  •  • ]
    [ 0  0  0  •  • ]
    [ 0  0  0  0  • ]
    [ 0  0  0  0  0 ]

where any of the bullets in rows 3 and 4 could be leading 1's. The number of leading variables will be between 2 and 4, and the dimension of the kernel (i.e., the dimension of E4) will be between 3 (= 5 - 2) and 1 (= 5 - 4). We can conclude that the geometric multiplicity of the eigenvalue 4 is between 1 and 3. This result agrees with Fact 6.3.3, since the algebraic multiplicity of the eigenvalue 4 is 3. <

Using Example 4 as a guide, can you give a proof of Fact 6.3.3 for triangular matrices? See Exercise 30.

When we analyze a dynamical system x(t + 1) = A x(t), where A is an n x n matrix, we may be interested in finding a basis v1, ..., vn of R^n that consists of eigenvectors of A (recall the introductory example in Section 6.1, and Fact 6.1.3). Such a basis deserves a name.

Definition 6.3.4  Eigenbasis
Consider an n x n matrix A. A basis of R^n consisting of eigenvectors of A is called an eigenbasis for A.

Below we discuss the following questions:
• How can we tell whether there is an eigenbasis for a given matrix A?
• How can we find an eigenbasis if there is one?

Recall Examples 1 to 3 above.

Example 1 Revisited • Projection onto a plane E in R³. Pick a basis v1, v2 of E and a nonzero v3 in E⊥. The vectors v1, v2, v3 form an eigenbasis. See Figure 4.

Example 2 Revisited • A = [1 2; 4 3]. The vectors [1; 2] and [1; -1] form an eigenbasis for A, as shown in Figure 5.

Example 3 Revisited • A = [1 1 -1; 0 0 -1; 0 0 1]. There are not enough eigenvectors to form an eigenbasis. For example, we are unable to express e3 as a linear combination of eigenvectors. See Figure 6.

[Figures 4 to 6: the eigenvectors found in Examples 1 to 3.]

Based on Example 3, it is easy to find a criterion that guarantees there will be no eigenbasis for a given n x n matrix A: if the sum of the dimensions of the eigenspaces is less than n, then there are not enough linearly independent eigenvectors to form an eigenbasis. Conversely, suppose the dimensions of the eigenspaces do add up to n, as in the first two examples discussed above. Can we construct an eigenbasis for A simply by picking a basis of each eigenspace and combining these vectors? This approach certainly works in the two examples above. However, in more complicated cases we must worry about the linear independence of the n vectors we find by combining bases of the various eigenspaces. Let us first think about a simple case.

EXAMPLE 5 > Consider a 3 x 3 matrix A with three eigenvalues, 1, 2, and 3. Let v1, v2, and v3 be corresponding eigenvectors. Are the vi necessarily linearly independent?

Solution
Consider the plane E spanned by v1 and v2. We have to examine whether v3 could be contained in this plane. More generally, we examine whether the plane E could contain any eigenvectors besides the multiples of v1 and v2. Consider a vector x = c1 v1 + c2 v2 in E (with c1 ≠ 0 and c2 ≠ 0). Then

    Ax = c1 Av1 + c2 Av2 = c1 v1 + 2 c2 v2.

This vector is not a scalar multiple of x; that is, x is not an eigenvector of A. See Figure 7. [Figure 7: x = c1 v1 + c2 v2 and Ax = c1 v1 + 2 c2 v2 in the plane E.]

We have shown that the plane E does not contain any eigenvectors besides the multiples of v1 and v2; in particular, v3 is not contained in E. We conclude that the vectors v1, v2, and v3 are linearly independent. <

The observation we made in Example 5 generalizes to higher dimensions.

Fact 6.3.5
Consider eigenvectors v1, v2, ..., vm of an n x n matrix A, with distinct eigenvalues λ1, λ2, ..., λm. Then the vi are linearly independent.

Proof  The proofs of Facts 6.3.5 and 6.3.7 are somewhat technical. You may wish to skip them in a first reading of this text. We argue by induction on m. We leave the case m = 1 as an exercise, and assume that the claim holds for m - 1. Consider a relation

    (I)    c1 v1 + ... + c_{m-1} v_{m-1} + c_m v_m = 0.

First, we apply the transformation T(x) = Ax to both sides of equation (I), keeping in mind that Avi = λi vi:

    (II)   c1 λ1 v1 + ... + c_{m-1} λ_{m-1} v_{m-1} + c_m λ_m v_m = 0.

Next, we multiply both sides of equation (I) by λm:

    (III)  c1 λ_m v1 + ... + c_{m-1} λ_m v_{m-1} + c_m λ_m v_m = 0.

Now subtract equation (III) from equation (II):

    (IV)   c1 (λ1 - λ_m) v1 + ... + c_{m-1} (λ_{m-1} - λ_m) v_{m-1} = 0.

By induction, the vi in equation (IV) are linearly independent. Therefore, equation (IV) must represent the trivial relation; that is, ci (λi - λm) = 0, for i = 1, ..., m - 1. The eigenvalues were assumed to be distinct; therefore λi - λm ≠ 0, for i = 1, ..., m - 1. We conclude that ci = 0, for i = 1, ..., m - 1. Equation (I) now tells us that c_m v_m = 0, so that c_m = 0 as well.  ∎

The proof above gives us the following result:

, - - - -- --- -

AX = c 1ii 1 + 2c2 ii 2

Fact 6.3.6

If an n x n m atrix A has n distinct eigenva lues, then there is an e igenbasis for A: we can construct an e igenbasis by choos ing an eigenvector fo r each eigenvalue.

EXAMPLE 6 ..... Ts there an eigenba is for the fo ll owing matrix?

I I I I-X

I I ~- -- - -

I

I I

_ __ _ _ _; ii ,

I

=

c 1-u 1 + c2 u- 2

A=

1 2 0 2 0 0 0 0 0 0 0 0

3 3

4 4 4 4 0

3 0 0 0 0

5 5 5 5 5 0

6 6 6 6 6 6

330 •

hap. 6 Eigenva lue and Eigenvectors

Sec. 6.3 Findin g th e Eigenvectors of a Ma trix •

Solution Ye . because there are 6 di tin ct igenvalues, the di agonal entrie of A: 1, 2, 3,

~

4, 5. 6.

What if an n x n mau'ix A has Jess than n di stin ct eig · nvalues? Then we have to consider the geo metric multiplicities of the eigenvalue . .
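Fact 6.3.6 can be confirmed numerically for the matrix of Example 6: since the 6 eigenvalues are distinct, the eigenvector matrix returned by numpy is invertible, so its columns are an eigenbasis. A short sketch (ours, not from the text):

    import numpy as np

    # The 6x6 matrix of Example 6: entry (i, j) equals j for j >= i, and 0 below
    # the diagonal, so the diagonal is 1, 2, 3, 4, 5, 6.
    A = np.triu(np.tile(np.arange(1, 7, dtype=float), (6, 1)))

    lam, S = np.linalg.eig(A)
    print(np.round(sorted(lam.real), 6))      # six distinct eigenvalues 1, ..., 6
    # The eigenvector matrix S is invertible, i.e., its columns are an eigenbasis.
    print(np.linalg.matrix_rank(S) == 6)      # True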

What if an n x n matrix A has fewer than n distinct eigenvalues? Then we have to consider the geometric multiplicities of the eigenvalues.

Fact 6.3.7
Consider an n x n matrix A. If the geometric multiplicities of the eigenvalues of A add up to n, then there is an eigenbasis for A: we can construct an eigenbasis by choosing a basis of each eigenspace and combining these vectors.

Proof  Suppose the eigenvalues are λ1, λ2, ..., λm, with dim(E_{λi}) = di. We first choose a basis v1, v2, ..., v_{d1} of E_{λ1}, then a basis v_{d1+1}, ..., v_{d1+d2} of E_{λ2}, and so on. We have to show that the vectors v1, ..., vn found in this way are linearly independent. Consider a relation

    c1 v1 + ... + c_{d1} v_{d1} + c_{d1+1} v_{d1+1} + ... + c_{d1+d2} v_{d1+d2} + ... + cn vn = 0,
    \_________  w1 in E_{λ1}  _________/  \_________  w2 in E_{λ2}  _________/

grouping the terms into vectors w1, w2, ..., wm, where wi lies in E_{λi}. Each of the vectors wi above either is an eigenvector with eigenvalue λi or else is 0. We wish to show that the wi are in fact all 0. We argue indirectly, assuming that some of the wi are nonzero and showing that this assumption leads to a contradiction. Those wi which are nonzero are eigenvectors with distinct eigenvalues; they are linearly independent, by Fact 6.3.5. But here is our contradiction: the sum of the wi is 0; therefore, they are linearly dependent. Because w1 = 0, it follows that c1 = c2 = ... = c_{d1} = 0, since v1, v2, ..., v_{d1} are linearly independent. Likewise, all the other cj are zero.  ∎

We conclude this section by working another example of a dynamical system:

EXAMPLE 7 > Consider an Albanian mountain farmer who raises goats. This particular breed of goats has a life span of three years. At the end of each year t, the farmer conducts a census of his goats. He counts the number of young goats j(t) (those born in the year t), the middle-aged ones m(t) (born the year before), and the old ones a(t) (born in the year t - 2). The state of the herd can be represented by the vector

    x(t) = [ j(t) ]
           [ m(t) ]
           [ a(t) ]

How do we expect the population to change from year to year? Suppose that for this breed and this environment the evolution of the system can be modeled by

    x(t + 1) = A x(t),    where    A = [ 0    0.95  0.6 ]
                                        [ 0.8  0     0   ]
                                        [ 0    0.5   0   ]

We leave it as an exercise to interpret the entries of A in terms of reproduction rates and survival rates. Suppose the initial populations are j0 = 750 and m0 = a0 = 200. What will the populations be after t years, according to this model? What will happen in the long term?

Solution
To answer these questions, we have to find an eigenbasis for A (see Fact 6.1.3).

Step 1  Find the eigenvalues of A. The characteristic polynomial is

    fA(λ) = det [ λ    -0.95  -0.6 ]
                [ -0.8  λ      0   ] = λ³ - 0.76λ - 0.24.
                [ 0    -0.5    λ   ]

Note that λ1 = 1 is an eigenvalue. To find the others, we factor:

    fA(λ) ÷ (λ - λ1) = (λ³ - 0.76λ - 0.24) ÷ (λ - 1) = λ² + λ + 0.24 = (λ + 0.6)(λ + 0.4)

The characteristic polynomial is

    fA(λ) = (λ - 1)(λ + 0.6)(λ + 0.4),

and the eigenvalues of A are λ1 = 1, λ2 = -0.6, λ3 = -0.4. Now we know that an eigenbasis exists for A, by Fact 6.3.6 (or Example 5).

Step 2  Construct an eigenbasis by finding a basis of each eigenspace.

    E1 = ker(I3 - A) = ker [ 1    -0.95  -0.6 ]   = ker [ 1  0  -2.5 ]
                           [ -0.8  1      0   ]         [ 0  1  -2   ]
                           [ 0    -0.5    1   ]         [ 0  0   0   ]

(use technology to find the rref). The corresponding system is

    x1 - 2.5 x3 = 0
    x2 - 2 x3   = 0

The general solution is

    [ 2.5 x3 ]        [ 2.5 ]
    [ 2 x3   ] = x3 [ 2   ]
    [ x3     ]        [ 1   ]

so that E1 = span[2.5; 2; 1] = span[5; 4; 2]. Similarly, we find

    E-0.6 = span [  9  ]        and        E-0.4 = span [ -2 ]
                 [ -12 ]                                [  4 ]
                 [  10 ]                                [ -5 ]

We have constructed the eigenbasis

    v1 = [ 5 ],  v2 = [  9  ],  v3 = [ -2 ]
         [ 4 ]        [ -12 ]        [  4 ]
         [ 2 ]        [  10 ]        [ -5 ]

with associated eigenvalues λ1 = 1, λ2 = -0.6, λ3 = -0.4.

Step 3  Express the initial state vector as a linear combination of eigenvectors. We have to solve the linear system x0 = c1 v1 + c2 v2 + c3 v3. Using technology, we find that c1 = 100, c2 = 50, and c3 = 100.

Step 4  Write the closed formulas and discuss the long-term behavior. See Fact 6.1.3. Then

    x(t) = A^t x0 = c1 A^t v1 + c2 A^t v2 + c3 A^t v3 = c1 λ1^t v1 + c2 λ2^t v2 + c3 λ3^t v3

         = 100 [ 5 ] + 50 (-0.6)^t [  9  ] + 100 (-0.4)^t [ -2 ]
               [ 4 ]               [ -12 ]                [  4 ]
               [ 2 ]               [  10 ]                [ -5 ]

The individual populations are given by

    j(t) = 500 + 450 (-0.6)^t - 200 (-0.4)^t,
    m(t) = 400 - 600 (-0.6)^t + 400 (-0.4)^t,
    a(t) = 200 + 500 (-0.6)^t - 500 (-0.4)^t.

In the long term, the populations approach the equilibrium values

    j = 500,    m = 400,    a = 200.
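A minimal sketch (ours, not from the text) confirming the computation of Example 7: iterate x(t + 1) = A x(t) directly and compare with the closed formula found above.

    import numpy as np

    # The goat model of Example 7.
    A = np.array([[0.0, 0.95, 0.6],
                  [0.8, 0.0,  0.0],
                  [0.0, 0.5,  0.0]])
    x = np.array([750.0, 200.0, 200.0])     # initial populations j, m, a

    def closed_formula(t):
        v1 = np.array([5.0, 4.0, 2.0])      # eigenvalue 1
        v2 = np.array([9.0, -12.0, 10.0])   # eigenvalue -0.6
        v3 = np.array([-2.0, 4.0, -5.0])    # eigenvalue -0.4
        return 100 * v1 + 50 * (-0.6)**t * v2 + 100 * (-0.4)**t * v3

    for t in range(1, 31):
        x = A @ x
    print(np.allclose(x, closed_formula(30)))   # True
    print(closed_formula(30).round(1))           # close to [500, 400, 200]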

EXERCISES

GOALS  For a given eigenvalue, find a basis of the associated eigenspace. Use the geometric multiplicities of the eigenvalues to determine whether there is an eigenbasis for a matrix.

For each of the matrices in Exercises 1 to 18 below, find all (real) eigenvalues. Then find a basis of each eigenspace, and find an eigenbasis if you can. Do not use technology.
1. […]   2. […]   3. […]   4. […]   5. […]   6. […]
7. […]   8. […]   9. […]   10. […]   11. […]   12. […]
13. […]   14. […]   15. […]   16. […]   17. […]   18. […]

19. Consider the matrix

    A = [ 1  a  b ]
        [ 0  1  c ]
        [ 0  0  1 ]

where a, b, c are arbitrary constants. How does the geometric multiplicity of the eigenvalue 1 depend on the constants a, b, c? When does A have an eigenbasis?
20. Consider the matrix

    A = [ 1  a  b ]
        [ 0  1  c ]
        [ 0  0  2 ]

where a, b, c are arbitrary constants. How do the geometric multiplicities of the eigenvalues 1 and 2 depend on the constants a, b, c? When is there an eigenbasis for A?
21. Find a 2 x 2 matrix A for which […] and […]. How many such matrices are there?
22. Find all 2 x 2 matrices A for which E7 = R².
23. Find all eigenvalues and eigenvectors of A = […]. Is there an eigenbasis? Interpret your result geometrically.
24. Find a 2 x 2 matrix A for which […] is the only eigenspace.
25. What can you say about the geometric multiplicity of the eigenvalues of a matrix of the form A = […], where a, b, c are arbitrary constants?
26. True or false? If a 6 x 6 matrix A has a negative determinant, then A has at least one positive eigenvalue. Hint: Sketch the graph of the characteristic polynomial.
27. Consider a 2 x 2 matrix A. Suppose that tr(A) = 5 and det(A) = 6. Find the eigenvalues of A.
28. Consider the matrix

    Jn(λ) = [ λ  1  0  ...  0 ]
            [ 0  λ  1  ...  0 ]
            [ ...             ]
            [ 0  0  0  ...  λ ]

(all λ's on the diagonal and 1's directly above), where λ is an arbitrary constant. Find the eigenvalue(s) of Jn(λ), and determine their algebraic and geometric multiplicity.
29. Consider a diagonal n x n matrix A with rank(A) = r < n. Find the algebraic and the geometric multiplicity of the eigenvalue 0 of A in terms of r and n.
30. Consider an upper triangular n x n matrix A with aii ≠ 0 for i = 1, 2, ..., m and aii = 0 for i = m + 1, ..., n. Find the algebraic multiplicity of the eigenvalue 0 of A. Without using Fact 6.3.3, what can you say about the geometric multiplicity?
31. Suppose there is an eigenbasis for a matrix A. What is the relationship between the algebraic and geometric multiplicities of its eigenvalues?
32. Consider an eigenvalue λ of an n x n matrix A. We know that λ is an eigenvalue of Aᵀ as well (since A and Aᵀ have the same characteristic polynomial). Compare the geometric multiplicities of λ as an eigenvalue of A and Aᵀ.
33. Consider a symmetric n x n matrix A.
    a. Show that if v and w are two vectors in R^n, then Av · w = v · Aw.
    b. Show that if v and w are two eigenvectors of A, with distinct eigenvalues, then w is orthogonal to v.
34. Consider a rotation T(x) = Ax in R³ (that is, A is an orthogonal 3 x 3 matrix with determinant 1). Show that T has a nonzero fixed point (i.e., a vector v with T(v) = v). This result is known as Euler's theorem, after the great Swiss mathematician Leonhard Euler (1707-1783). Hint: Consider the characteristic polynomial fA. Pay attention to the intercepts with both axes. Use Fact 6.1.2.
35. Consider a subspace V of R^n with dim(V) = m.
    a. Suppose the n x n matrix A represents the orthogonal projection onto V. What can you say about the eigenvalues of A and their algebraic and geometric multiplicities?
    b. Suppose the n x n matrix B represents the reflection in V. What can you say about the eigenvalues of B and their algebraic and geometric multiplicities?
36. Let x(t) and y(t) be the annual defense budgets of two antagonistic nations (expressed in billions of US dollars). The change of these budgets is modeled by the following equations:

    x(t + 1) = a x(t) + b y(t)
    y(t + 1) = b x(t) + a y(t)

where a is a constant slightly less than 1, expressing the fact that defense budgets tend to decline when there is no perceived threat. The constant b is a small positive number. You may assume that a exceeds b. Suppose x(0) = 3 and y(0) = 0.5. What will happen in the long term? There are three possible cases, depending on the numerical values of a and b. Sketch a trajectory for each case, and discuss the outcome in practical terms. Include the eigenspaces in all your sketches.
37. Consider a modification of Example 7: suppose the transformation matrix is

    A = [ 0    1.4  1.2 ]
        [ 0.8  0    0   ]
        [ 0    0.4  0   ]

The initial populations are j(0) = 600, m(0) = 100, and a(0) = 250. Find closed formulas for j(t), m(t), and a(t). Describe the long-term behavior. What can you say about the proportion j(t) : m(t) : a(t) in the long term?
38. A street magician at Montmartre begins to perform at 11:00 p.m. on Saturday night. He starts out with no onlookers, but he attracts passersby at a rate of 10 per minute. Some get bored and wander off: of the people present t minutes after 11:00 p.m., 20% will have left a minute later (but everybody stays for at least a minute). Let C(t) be the size of the crowd t minutes after 11:00 p.m. Find a 2 x 2 matrix A such that

    [ C(t + 1) ] = A [ C(t) ]
    [    1     ]     [  1   ]

Find a closed formula for C(t), and graph this function. What is the long-term behavior of C(t)?
39. Three friends, Alberich, Brunnhilde, and Carl, play a number game together: each thinks of a (real) number and announces it to the others. In the first round, each player finds the average of the numbers chosen by the two others; that is his or her new score. In the second round, the corresponding averages of the scores in the first round are taken, and so on. Here is an example:

                        A     B     C
    Initial choice      7    11     5
    After 1st round     8     6     9
    After 2nd round     7.5   8.5   7

Whoever is ahead after 1001 rounds wins.
    a. The state of the game after t rounds can be represented as a vector:

        x(t) = [ a(t) ]   Alberich's score
               [ b(t) ]   Brunnhilde's score
               [ c(t) ]   Carl's score

    Find the matrix A such that x(t + 1) = A x(t).
    b. With the initial values mentioned above (a0 = 7, b0 = 11, c0 = 5), what is the score after 10 rounds? After 50 rounds? Use technology.
    c. Now suppose that Alberich and Brunnhilde initially pick the numbers 1 and 2, respectively. If Carl picks the number c0, what is the state of the game after t rounds? (Find closed formulas for a(t), b(t), c(t), in terms of c0.) For which choices of c0 does Carl win the game?

40. In an unfortunate accident involving an Austrian truck, 100 kg of a highly toxic substance are spilled into Lake Silvaplana, in the Swiss Engadine Valley. The river Inn carries the pollutant down to Lake Sils and later to Lake St. Moritz. [Figure: the three lakes connected by the River Inn.] This sorry state, t weeks after the accident, can be described by the vector

    x(t) = [ x1(t) ]   pollutant in Lake Silvaplana
           [ x2(t) ]   pollutant in Lake Sils
           [ x3(t) ]   pollutant in Lake St. Moritz

(in kg). Suppose

    x(t + 1) = [ 0.7  0    0   ] x(t).
               [ 0.1  0.6  0   ]
               [ 0    0.2  0.8 ]

    a. Explain the significance of the entries of the transformation matrix in practical terms.
    b. Find closed formulas for the amount of pollutant in each of the three lakes t weeks after the accident. Graph the three functions against time (on the same axes). When does the pollution in Lake Sils reach a maximum?
41. Consider a dynamical system

    [ x1(t) ]
    [ x2(t) ]

whose transformation from time t to time t + 1 is given by the following equations:

    x1(t + 1) = 0.1 x1(t) + 0.2 x2(t) + 1,
    x2(t + 1) = 0.4 x1(t) + 0.3 x2(t) + 2.

Such a system, with constant terms in the equations, is not linear but affine.
    a. Find a 2 x 2 matrix A and a vector b in R² such that x(t + 1) = A x(t) + b.
    b. Introduce a new state vector

        y(t) = [ x1(t) ]
               [ x2(t) ]
               [   1   ]

    with a "dummy" 1 in the last component. Find a 3 x 3 matrix B such that y(t + 1) = B y(t). How is B related to the matrix A and the vector b in part a? Can you write B as a partitioned matrix involving A and b?
    c. What is the relationship between the eigenvalues of A and B? What about eigenvectors?
    d. For arbitrary values of x1(0) and x2(0), what can you say about the long-term behavior of x1(t) and x2(t)?
42. A machine contains the grid of wires sketched below. At the seven indicated points the temperature is kept fixed at the given values (in °C). Consider the temperatures T1(t), T2(t), and T3(t) at the other three mesh points. Because of heat flow along the wires, the temperature Ti(t) changes according to the formula

    Ti(t + 1) = Ti(t) - (1/10) Σ (Ti(t) - Tadj(t)),

where the sum is taken over the four adjacent points in the grid and time is measured in minutes. Note that each of the four terms we subtract represents the cooling caused by heat flowing along one of the wires.

    [Grid sketch: mesh points T1, T2, T3 with fixed boundary temperatures 0, 200, 0, 0, 200, 0, 400.]

    a. Find a 3 x 3 matrix A and a vector b in R³ such that x(t + 1) = A x(t) + b.
    b. Introduce the state vector

        y(t) = [ T1(t) ]
               [ T2(t) ]
               [ T3(t) ]
               [   1   ]

    with a "dummy" 1 as the last component. Find a 4 x 4 matrix B such that y(t + 1) = B y(t). (This technique for converting an affine system into a linear system is introduced in Exercise 41; see also Exercise 38.)
    c. Suppose the initial temperatures are T1(0) = T2(0) = T3(0) = 0. Using technology, find the temperatures at the three points at t = 10 and t = 30. What long-term behavior do you expect?
    d. Using technology, find numerical approximations for the eigenvalues of the matrix B. Find an eigenvector for the largest eigenvalue. Use the results to confirm your conjecture in part c.
43. The color of snapdragons is determined by a pair of genes, which we designate by the letters A and a. The pair of genes is called the flower's genotype. Genotype AA produces red flowers, genotype Aa pink ones, and genotype aa white ones. A biologist undertakes a breeding program, starting with a large population of flowers of genotype AA. Each flower is fertilized with pollen from a plant of genotype Aa (taken from another population), and one offspring is produced. Since it is a matter of chance which of the genes a parent passes on, we expect half of the flowers in the next generation to be red (genotype AA) and the other half pink (genotype Aa). All the flowers in this generation are now fertilized with pollen from plants of genotype Aa (taken from another population), and so on.
    a. Find closed formulas for the fractions of red, pink, and white flowers in the t-th generation. We know that r(0) = 1 and p(0) = w(0) = 0, and we found that r(1) = p(1) = 1/2 and w(1) = 0.
    b. What is the proportion r(t) : p(t) : w(t) in the long run?
44. Leonardo of Pisa: The rabbit problem. Leonardo of Pisa (c. 1170-1240), also known as Fibonacci, was the first outstanding European mathematician after the ancient Greeks. He traveled widely in the Islamic world and studied Arabic mathematical writings. His work is in the spirit of the Arabic mathematics of his day. Fibonacci brought the decimal-position system to Europe. In his book Liber abaci (1202), Fibonacci discusses the following problem:

    How many pairs of rabbits can be bred from one pair in one year? A man has one pair of rabbits at a certain place entirely surrounded by a wall. We wish to know how many pairs can be bred from it in one year, if the nature of these rabbits is such that they breed every month one other pair and begin to breed in the second month after their birth. Let the first pair breed a pair in the first month, then duplicate it and there will be 2 pairs in a month. From these pairs one, namely the first, breeds a pair in the second month, and thus there are 3 pairs in the second month. From these in one month two will become pregnant, so that in the third month 2 pairs of rabbits will be born. Thus there are 5 pairs in this month. From these in the same month 3 will be pregnant, so that in the fourth month there will be 8 pairs. From these pairs 5 will breed 5 other pairs, which added to the 8 pairs gives 13 pairs in the fifth month, from which 5 pairs (which were bred in that same month) will not conceive in that month, but the other 8 will be pregnant. Thus there will be 21 pairs in the sixth month. When we add to these the 13 pairs that are bred in the 7th month, then there will be in that month 34 pairs [and so on, 55, 89, 144, 233, 377, ...]. Finally there will be 377. And this number of pairs has been born from the first-mentioned pair at the given place in one year.

Let j(t) be the number of juvenile pairs and a(t) the number of adult pairs after t months. Fibonacci starts his thought experiment in rabbit breeding with one adult pair, so j(0) = 0 and a(0) = 1. At t = 1, the adult pair will have bred a (juvenile) pair, so a(1) = 1 and j(1) = 1. At t = 2, the initial adult pair will have bred another (juvenile) pair, and last month's juvenile pair will have grown up, so a(2) = 2 and j(2) = 1.
    a. Find formulas expressing a(t + 1) and j(t + 1) in terms of a(t) and j(t). Find the matrix A such that x(t + 1) = A x(t), where

        x(t) = [ j(t) ]
               [ a(t) ]

    b. Find closed formulas for a(t) and j(t). Note: You will have to deal with irrational quantities here.
    c. Find the limit of the ratio a(t)/j(t) as t approaches infinity. The result is known as the golden section. The golden section of a line segment AB is given by the point P such that AB/AP = AP/PB. [Sketch: segment AB with interior point P.]

COMPLEX EIGENVALUES In Section 6.1 we considered the rotation matrix

x(l) =

[~~~~ J.

A =[ ~ -~ ]

b. Find closed formulas for a(t) and j (t). Note: You will have to deai with irrational quant ities here. c. Find the Iimit of the ratio a(t) / j (r ) as t approaches infinity. The result is known as the golden section. The goiden section of a li.ne segment Aß is given by tbe point P such that

with characteristic polynornial

We observed that this matrix bas no real eigenvalues, because the equation A.2

AP

+I=0

A.

2

=- 1

has no real solutions. However, if we allow complex solutions, then the equation

PB

)...2

A

or

p

= - 1

B has the two solutions A. 1, 2 = ±i; the matrix A has the complex eigenvalues ±i.

45. Consider a "random" n x n matri x A with zeros everywhere on the diagonal and below tbe diagonal. What is the geometric m ultiplicity of the eigenvalue 0 likely to be? 46. a. Sketch a phase portrait for the dynamical system x( t A= [;

~

l

+ I) =

Ax(t), where

• Complex Numbers: A Brief Review Let us review some basic facts about complex numbers. We trust that you have at .least a fl.eeting acquaintance with complex numbers. Without attempting a defi nition, we recall that a complex number can be expressed as

z =a+ib.

342 •

Sec. 6.4 Camplex Eigenvalues •

hap. 6 Eigenval ues and Eigenvectors lmaginary axis 3i

- = ~

- -- - -

+ 3i

iz = - b

343

+ ia

2i

z=

(l

+ ib

Figure 2 Rea l axis

Figure 1

where a and b are real number . 1 Addüion of complex number natural way. by the ruJe

(a

+ ib) + (c + id ) =

(a

defined in a

: = a + ib

+ c) + i (b + d ),

and multipli ation is defined by th e ru le

(a

+ ib )(c + id ) =

(ac - bd)

+i

ad

~~z

+ bc),

that i , we Jet i · i = -1 and di tribute. lf z = a +ib is a complex nurnber we caJI a it real part (denoted by Re(z)) and b its imaginary part (denoted by lm(z)). A complex number of tbe form ib (with a = 0) is called imaginmy . The set of all complex numbers is denoted by C. The real numbers, ~' fonn a subset of C (namely , tho e complex numbers w ith imaginary part 0). Complex numbers can be represented graphically as vectors (or points) in the complex plane? as shown in F igure 1.

= a- ib

Figure 3

EXAMPLE 1 ~ Con ider a nonzero complex number z. What is the geometric relationship between

z

- = a

+ ib

and i- in the complex plane?

Solu tion If z

=

a

+ i b,

then iz

=

-b

i z) by rotating the vector [

+ ia.

~]

4> = arg(z)

We obtain the vector [ - b ] (representing a

(repre enting z) through an angle of 90° in the

The conjugate of a complex number z = a

z= a -

Figure 4

~

counterclockwise direction. See Figure 2.

+ ib

i defined by

ib

1 The !e uer i for the imaginary unit was imroduced by Leonhard Eul er. the mos t prolific mathematician in hi story. Fora fascinating gli mpse at the hi story of the complex numbers see Tob ias Dantzig, Number: The Lang uage of Science. Macmillan. 1954. 2 AI o ca lled "Argand plane; · after the Sw iss mathematician Jean Roben Arga nd ( 1768- 1822). The representation of complex numbers in the plane was introduced independently by Argand , by Gaus . and by the orwegian mathematician Caspa r We sei (1745- 1818).

(the ign of rhe imaginary part is re ersed). We say tbat z and -= form a conjugare pair of complex numbers. Geometrically, the conjugate: i the reflection of in the rea l axi , a shown in Figure 3. Sometimes it is useful to de crib a complex number .in polar coordinates as shown in Figure 4. The length r of the vector i called the modulus of .::, denoted by lzl. The angle qy is ca lled an argument of z; note that the argument i determined only up to a multiple of 2rr (mathemaücian ay ·' modulo 'hr" . For example, fo r <: = - I we can choo e tbe argumenr rr or - rr or 3rr. 7

344 •

Sec. 6.4 Camplex Eigenvalues •

Chap. 6 Eigenvalues and Eigenvectors

345

w

Figure S

- 2

Figure 7

In general, if z = r (cos a

+ i sin a)

zw = rs( cos(a + r (cos

f/J)

and w = s(cos ß + i sin ß ), tben ß) + i sin(a + ß )) .

When we multiply two complex numbers, we mu1tiply the moduli and we add the arguments:

Figure 6

EXAMPLE 2 ..... Find the modulus and argument of z = -2 + 2i .

lzw l = lzl lwl arg (zw) =arg z +arg w

Solution

lzl = J 22 + 22 = J8. Representing z in the complex plane, we see tbat ~ rr is an

(modulo 2rr )

~

argument of z. See Figure 5.

If z is a complex nurober with modulus r and argument
EXAMPLE 4 ..... Describe the transformation

T (z) = (3 + 4i )z from C to C geometrically.

as z = r (cos ) + ir (sin<jJ)

= r (cos<jJ +

i sin<jJ) ,

Solution

as shown in Figure 6. The representation

IT(z)l = 13 + 4i l lzl = 51zj,

Iz=

r(cos <jJ

+ i sin<jJ)

arg(T (z)) = arg(3 + 4i ) + arg(z) = aretau (

is called the polar form of tbe complex number z.

The transformation T is a rotation-dilation in the complex plane . See Figure 8 .~

EXAMPLE 3 ..... Consider the complex numbers z = cos a + i sina and w = cos ß + i sinß. Find the polar form of tbe product zw.

Solution Apply the addition formulas from trigonometry (see Exercise 2.2.32):

Figure 8 Rotale through about 53 o ond streich the vector by a factor of 5.

)

zw = (cos a + i sin a ) (cos ß + i sin ß ) = (cos a cos ß - sina sin ß ) + i(sin a cos ß + cos a sin ß) = cos(a + ß ) + i sin(a + ß) We conclude that the modulus of zw is 1, and a + ß is an argument of z w. See Rgure7. ~

~) + arg (z) ~ 53° + arg (z).

T(z2 )

346 •

Sec. 6.4 Camplex Eigenvalues •

Chap. 6 Eigenvalues and Eigenvectors

34 7

Tbe polar form is convenient for finding powers of a complex number z: if z = r(cosrj>

+ i si n rj>)

then 2

7

2

= r 2 ( cos(2r/>)

+ i sin(2r/>))

11

= r 11 ( cos(nrj>)

+ i sin (nr/>))

-I

for any positive integer n. Each time we multiply by z the modulus is multiplied by r and the argument increases by rj>. The formula above is due to the French mathematician Abraham de tvfoivre (1667-1754).

Fact 6.4.1

De Moivre's formula (cos rj>

-i

+ i sin 1> )"

= cos(nrj>)

Figure 9

+ i sin(nr/>)

EXA.M PLE S ~ Consider the complex number z = 0.5 + 0.8i. Represent the powers z2 , z3 , the complex plane. What is lim -"?

.. .

in

1)400

Fact 6.4.2

Fundamental theorem of algebra Any polynornial p (A) with complex coefficients can be written as a product of linear fac tors:

Solution

p(),)

To study the powers, write

z in polar form: z = r (cos 1>

+ i sin rj> ) ,

where

r

=

= k(),-

AJ)(A- A2) ... ( l - A11 )

,

for some complex. numbers A1, A2, . .. , A", and k. (Tbe A; need not be distinct.) Therefore, a polynomial p (Ä) of degree n has precisely n roots if tbey are properly counted with tbeir multiplicities.

Jo.52 + 0.8 2 :::::: 0.943 For example, the polynornial

and

0.8 :::::: 58°. 0.5

= arctan -

p(A) = A2

which does not bave any real zeros, can be factared over C:

Wehave

p (A.) = (A.

z" = r" ( cos(n) + i sin(n)). The vector representation of z"+ 1 is a little sborter than that of z" (by about 5.7%), and z" + 1 makes an angle ljJ :::::: 58° with z". If we connect the tips of consecutive vectors, we see a trajectory that spirals in toward the origin, as shown in Figure 9. Note that lim i ' = 0, since r = lzl < 1. ~

+ i)(A. -

i).

More generally, for a quadratic polynornial q().)=),2 + bA.+c,

where b and c are real, we can find the complex roots -b

± ,Yb2 -

4a c

AJ .2= - - - - - -

n---+- oo

Perhaps the most remarkable property of the complex numbers is expressed in the fundan1ental theorem of algebra, first: demonstrated by Carl Friedrich Gauss (in his thesis, at age 22).

+ 1,

2

and

348 •

Sec. 6.4 Camplex Eigenvalue •

hap. 6 Eigenvalues and Eigenvector Proving the fundamental theorem would Iead us too far afield. Read any intro1 duction to complex analy i or check Gauss ' s ori g in al proof.

The vectors [

~

l [-~]

349

form a complex eigenbasis (of C 2 ) for A. We can check

thi · result:

+ Complex Eigenvalues and Eigenvectors The complex mtmber bare some basic algebraic properties with the rea l number ; mathematician ummarize these properties by say in g that both the real nurober 2 IR and the complex numbe r C form a field. To what extent do the re ult and rechnique derived in this text thu far still apply when we work in C, i.e., when we co n ider comp lex calars, vectors with complex components, and matrice with compl ex e ntri es? We observe that everything work in
EXAMPLE 6 ..... Find a complex eigenba is for

A = [

~

-10 ].

We have already observed that the complex eigenvalues are Al.2 = ±i. Then

~]

=

pan [ ; ]

and

E_; = ker( -i/ 2

-i

-

A) = ker [ _ 1

Fact 6.4.3

A complex n x n matri x has n complex eigenvalues if e igenvalue are c unted with their algebraic mul tipli cities.

Although a complex n x n matrix may have fewer than n distinct complex eigenvalues (exarnples are / 11 and [

Solution

E; = ker(i h - A) = ker [ _ :

The great advantage in working with complex eigenvalue is that the characteri sü c pol ynomial always factors completel y (by the fund amental theorem , Fact 6.4.2) :

1]

-i

= span

[-i ]

1 .

~ ~ ] ),

thi s is literally a coincidence (some

of the A; in the factorization of the characteri tic polynomial coincide). "Most' ' complex n x n matrices do have n di tinct complex eigenvalue . We can conclude that there is a complex eigenbasis for most complex n x n matrices (by Fact 6.3.6). [n thi text , we focu on those complex n x n matrices for wh.ich there is a complex eigenbasi and often dismi s the other as aberration . Much attention is given to these other matrices in more advanced linear algebra course , where ub tit:ute for eigenba e. are constructed.

EXAMPLE 7 ..... Consider an n x n matrix

A with complex eigenvalue At. A2, ... , A", Ii ted with their algebraic multiplicities. What is the relationship betwee n the A; and the determinant of A? Hint: Evaluate the characteri tic polynomial at A = 0.

Solution 1C. F. Gau s: Werke , lll , 3-56. For an English trans lation ee D. J. Struik (ed itor): A Source ßook in Motlzemarics 1200-1800, Princeton University Press, 1986. 2 Here is a Ii t of these properties: I . Addition is commutati ve. 2. Add ition is as ociati ve. 3. There is a unique number 11 ·uch that a + 11 = a , for all numbers a. Thi s number n i denoted by 0. 4. Foreach number o there is a unique number b such that a + b = 0. This number b is denotcd by -o . (Comment: Thi s propeny says that we can ubtract in thi s number ystem .) 5. Mu ltipli cation i commutative. 6. Multiplication i a sociative. 7. There is a unique number e such that ea a, for a ll numbers a . This number e is denoted by I. 8. For each nonzero number a thcre is a unique number b such that ab = I . This number b is denoted by a - 1. (Comm enr: Thi s propen y ays that we can divide by a no nzero nurnbcr. ) 9. Mullipli cation disuibutes over addition: a(b + c) = ab+ ac . 10. The nurnbers 0 and I introduced above arenot eq ual.

=

/A(A) = det (A /" - A) = (A - At)(A - A2) . . . (A - A")

.f.4 (0) =

det -A) = (- t)"AIA2 . . . A"

Recall that det( -A) = ( - 1)" det(A) by linearity

in the row . We can conclude

that

Can you interpret thi re ult geometrically when A i, a 3 x 3 matrix with a real eigenbasis? Hint: Think about the expan ion factor. See Exerci e 18. Above we have found that the determin ant of a matrix i the product of .it complex eigenvalues. Likewise, the trace i the surn of the eigenva lue . The verification is left as Exerci e 35.

350 •

Chap. 6 Eigenvalues and Eigenvertors

Fact 6.4.4

Sec. 6.4 Complex Eigenval ues •

Consider an n n matrix A with cornplex eigenvalues A1, A2, .. . , A", listed wi th their algebraic rnultiplicities. Then tr(A) = A1 + A.2

351

For Exercises 13 to 17 : Which of the follow ing- are jields (with the customary addi tion and multiplication)? 13. The rational numbers Ql. 14. The integers Z .

+ · · · + A"

and

15. The binary digits (introduced in Exercises 3.1.53 and 3.1.54).

det(A) =At · A2 · · · · · A.".

16. The rotati on-dil ation matrices of the form [ P. q

Note that this result is obvious for a triangul ar matri x: 1n thi s case the eigenvalues are the diagonal entiies.

EXERCISES

GOALS Use the basic properties of complex numbers. Write products and powers of complex numbers in polar form . Appl y the fund amental tbeorem of algebra. 1. Write the complex number z = 3 - 3i in polar form. 2. Find all complex numbers z such that z4 = I. Represent your answers graphically in the complex plane. 3. For an arbitrary positive integer n, find all complex numbers z such that z" = 1 (in polar form). Represent your answers graphically. 4. Show that if z is a nonzero complex number, then there are exactly two complex numbers w such that w2 = z. If z is in polar fom1, describe w in polar form. 5. Show that if z is a nonzero complex number, then there are exactl y n complex numbers w such that w" = z. If z is in polar form, wiite w in polar form. Represent the vectors w in the complex plane. 6. If z is a nonzero complex number in polar form , describe 1/z in polar fmm. What is the relationship between the complex conjugate and l jz.? Represent and 1/z in the complex plane. the numbers z, 7. Describe the transfonnation T (z) = (1 - i )z fron1 C to C geometrically. 8. Use de Moivre' s formula to express cos(3<j>) and sin (3<j>) in terms of cosq) and sin <j>. 9. Consider tbe complex number z = 0 .8 - 0.7i . Represent the numbers z2, z\

z,

z

.. . in the complex plane and explain their long-term behavior. 10. Prove the fundamental theorem of algebra for cubic pol ynomi als with real coefficients. 11. Express the polynomial f ().) = A3 - 3A2 + 7 A- 5 as a product of linear factors over C. 12. Consider a polynomial f (.!.) with real coefficients. Show that if a complex number .l.o is a root of f (.l.), then so is its complex conjugate, I 0 .

-q] , wbere p and q are real p

numbers. 17. The set H considered in Exercise 4 .3 .34. 18. Consider a real 2 x 2 matrix A with two distinct real eigenvalues, )._ I and A.2 . Explain the formula det(A) = A1A2 (or at least Idet(A) I = IA.1A21) geometrically, think:ing of I det(A) I as an expansion factor. Illustrate your explanation with a sk.etch . Is tbere a similar geometric interpretation for a 3 x 3 matrix? 19. Consider a subspace V of IR", with dim ( V) = m < n . a. If the n x n matrix A represents the orthogonal projection onto V , what is tr(A)? Wbat is det(A)? b. If the n x n matrix B represents the refiection in V , what is tr(A) ? What is det(A)? Find all complex eigenvalues of the matrices in Exercises 20 to 26 (includi ng the real ones, of course). Do not use techuology. Show all your work. 20. [32

24.

[~

-5 ] -3 . I

0

-7

21. [ 1

n

~

25.

- 15 ]

-7 .

[~

22.

0 0 0 0 1 0 0 1

[_! 1~ J.

n

23. [ 1

26

·

1 0 0

n -:l

[0 0 1 0 0 1

- 1

1 0 0

1

1 l l

1

.

1

27. Suppose a real 3 x 3 matrix A has only two distinct eigenvalues. Suppose that tr(A) = 1 and det (A) = 3. Find the eigenvalues of A with tbeir algebraic multiplicities. 28. Suppose a 3 x 3 matrix A has the real eigenvalue 2 and two complex conjugate eigenvalues. Also, suppose that det (A) = 50 and tr(A) = 8. Find the complex eigenvalues. 29. Consider a matrix of the form 0 a A= [

~ ~

where a, b, c, d are positive real numbers. Suppose the matrix A has three distinct real eigenvalues. Wbat can you say about the signs of the eigenvalues (how many of them are positive, negative, zero)? ls the e.igenval ue with the largest absolute value positive or negative?

352 •

Sec. 6.4 Ca m plex Eigenva lues •

Ch ap. 6 Eigen values an d Eigenvect o rs al!ed a regul ar tran ition matrix if a ll en~ri es of A 11 matrix A i are positive, and the e ntries in each co lu mn add up to 1 ( ee Exerc1.ses 24 to 3 1 of Section 6.2). A n example i how n be low.

30. A real n x

A=

0.4 0. 3 0 .1 ] 0.5 0.1 0.2 [ 0.1 0.6 0.7

• 1 i a n ei oenvalu of A , with d im ( E 1) = 1. • If )... i a Zomplex eigenvalue of A o ther than I, the n IA.I < I.

x

a. Consider a reo uJ ar n x n tran iti o n matrix A a nd a vecto r in IR" w hose entries add u; to 1. Show that the e ntrie of A.i wi ll also add up to 1 .. b. Pi ck a regular tra nsition matrix A. and com pute ome power of A u s1~ g technology): A 2 , •.. . A 10 , .. . , A 100 , •• .. Wbar do you o bser ve? Expla.!Jl your ob ervatio n . He re, you may assume that there is a com plex eigenba is fo r A . Hint: Think abo ut N column by column ; the ith column of N ts A1 31. Fo rm a 5 x 5 m arrix by w ri ting the integers I, 2 , 3 , 4 , 5 into each co lumn in auy order you want. He re is an example:

e;.

A=

5 l 3 2 4

2 3 3 5 5 4 2 1

l 4

2 3 4 l 5

For examp le, 20% of the peo ple w ho u e AT&T go to S print one month later. a. We introd uce the state vector

x(t ) =

Yo u may take the fo llowing properties o~ a r g ul ar tra nsitio n matri x fo r granted (a partial proof i outJined in Exerc1se 6.2.3 1):

2 3

4 5

(Optional question for combinatori cs afici o nados: how m a ny suc h matrices are there?) Take higher and higher power of the m atri x you have chosen (using technology) and compare the columns of the matrices yo u get. What do you observe ? Explain the result (Exercise 30 i helpful).

32. M ost long-distance te lephone service in the United S tates is prov ided by three companies, AT&T, MCI, and Sprint. The tbree companies are in fierce competition, affering discounts or even ca. h to those w ho sw itch . If the figure-s advertised by the companies are to be believed , people are switching their long distance provider from o ne mo nth to the nex t according to the following pattern :

353

t)]

[

a m (l )

s( t )

fractio n us ing AT&T fraction us in g MCl fraction us ing Sprint

Find the matri x A such that x(r + I) = Ax(t ) , ass uming that the c u to mer base re mai ns unch anged. Note that A is a reg ul ar tra ns itio n matri x. b. Whi c h fract ion of the cusro mer will be with each company in the long term ? Do yo u have to know the curre nt market shares to answer thi s q uestio n? U e the power method introduced in E xercise 30.

33. Th e power method fo r fin ding eigenvalu.es. Conside r E xercises 30 an d 31 fo r some background . Usi ng techn ology, gene rate a ra ndo m 5 x 5 matrix A with no nnegative entries (de pending on the technology yo u are using, the e ntrie co uJd be integers between 0 and 9, o r number be tween 0 and 1). Usin g technology, compute B = A 20 (or another hi g h po wer of A). We w ish to compare the co lurnns of B . This i. hard to do by inspecti o n, particularly because the e ntri es of B are probably rather large . To get a be tter hold on B , form the diagonal 5 x 5 m atrix D whose ith diagonal element i b 1; , the itb eleme nt of the first row of B . Co mpute C = BD - 1 • a. How i C o btai ned fro m B? Gi ve your a nswer in tenn of e le mentary row or column Operations. b. Take a Iook at the co lumns of the matrix C you get. What do you o bserve? W hat does yo ur an wer tell you abo ut the colurnns o f B = A 20 ? c. Explain tbe o bservation you m ade in part b. You m ay assume that A has 5 di stinct (complex) eigenvalues and th at the eigenvalue w ith max imal modulu s is real and positi ve (we cannot explain here why thi w ill " us ually" be the case). d. Compute AC. What is the sig nificance of the e ntries in tbe to p row of thi matr ix in terms of the ei genvalu e of A? What is the sig nificance of the columns of C (or B ) in te nns o f the e igenvectors of A?

34. Exercise 33 illu trates how you can use the powers of a m atrix to find it SPRINT

~ 10%

MCI

I

10%

AT&T 20%

dominant eigenvaJue (i.e., the ei genvalue with m axim al modulu s), at lea t whe n this eige nvalue is real. But what about the other ei genvalues? a. C o n ider an 11 x n matri x A w ith 11 di tinct complex e igenvalues A., , A._ , ... , A.," where A. 1 i real. Suppose you have a good (real) approx imatio n A. of )... 1 (good in tha t IA. - A. 1 1 < IA. - A.; 1. for i = 2 . . . . , n ). Con ider the matri x AI" - A. What are it e igenvalue ? Wh ic h ha the ma llest modulu . No consider the matrix (Al" - A) - 1 • What are it · e igenvalues? Whi ch has the !arge t modulus? What i the re lationship be tween lhe eigenvecto rs of A

354 •

Sec. 6.4 Camplex Eigenva lues •

Chap. 6 Eigen alues and Eigen ectors and (A/11 - A) - 1? Consid r higher and high er po~ers ?f (A l" - A.) - I. How does thi help you to find an eigenvector of A wllh e1genvalue A. 1, and A. 1 itself? U e tbe re ults of Exerci e 33. b. As an example of part a. consider the matrix

A

=

I 2 4 5

[ 7

8

~]. 10

35. Demonstrate the formula tr(A ) = )q + Al

lf current age-dependent birth and death rates are extrapolated, we have the following model :

x(t + 1) =

We wish to find the eigenvector and eigenvalues of A without using the corresponding command on tbe computer (which i , after all, a "black box'' . First. we find approximations for the eigenvalues by graphing the characteristic polynomial (use technology). Approximate the three real eioenvalues of A to the nearest integer. One of the three eigenvalues 0 of A i negative. Find a good approx.imation for thi s eigenvalue and a corre ponding eigenvector by using the procedure outlined in part a. You are not asked to do the ame for the two other eigenvalues.

1.1 0.82 0 0 0 0

1.6 0 0 .89 0 0 0

A = [

36. ln 1990, tbe population of the African country Benin was about 4.6 rnillion people. lts composition by age was as follows:

Age Bracket

0-15

15-30

30-45

45-60

60-75

75- 90

Percent of Popu lation

46.6

25.7

14.7

8.4

3.8

0.8

We represent tbe. e data in a state vector whose components are the populations in the various age brackets, in rnillions:

x(O) = 4.6

::::::::

2.14 1.18 0.68 0.39 0.17 0.04

We measure time in increments of 15 years , with t = 0 in 1990. For example, x(3) gives the age composition in the year 2035 (1990 + 3 · 15).

0 0 0 0 0 .53 0

0 0 0 0 0 0.29

0 0 0 0 0 0

x(t ) = Ax(r).

~

-;J,

where w and z are arbitrary complex numbers. a. Show that IHI is closed under addition and multiplication, i.e. , the um and the product of two matrices in IHI are again in IHI. b. Which matrices in IHI are invertible? c. If a matrix in IHI is invertible, is tbe inverse in IHI as weil? d. Find two matrices A and B in IHI such that AB =1- BA . IHI is an example of a skew fie/d: it saüsfies all axioms for a field except for the commutativity of mulüplication. [The skew field IHI wa introduced by the Irish mathematician Sir William Hamilton (1805- 1865); it elements are called the quatemions. Another way to define tbe quaternions is discussed in Exercise 4.3.34.]

!n

38. Con ider tbe matrix

0.466 0.257 0 . 147 0 .084 0 .038 0.008

0.6 0 0 0.8 L 0 0

a . Explain the significance of all the entries in the matrix A in terms of population dynamics. b. Find the eigenvalue of A with largest modulus and an a sociated eigenvector (use technology). What is the significance of tbese quantities in terms of population dynarnics? (For a summary on matrix technique used in the study of age-structured populations, see Drnitrü 0. Logofet. Motrices and Graphs: Stability Problems in Mathematical Ecology, Chapters 2 and 3, CRC Press, 1993.) 37. Con ider the et IHI of all complex 2 x 2 matrice of the form

+ · · · + A"

where the A; are the complex eigenvalues of the matrix A, counted with their 1 algebraic multiplicities. Hint: Consider the coefficient of A" - in !11 (A) = (A - ),!)(A.- ),1 ) · · · (A. - A.11 ) and compare the result witb Fact 6.2.5.

355

c,~ [~ ~

a. Find the powers of C4: CJ C~, C! .... . b. Find all complex eigenvaJue of C4, and con truct a complex eigenbasis. c. A 4 x 4 matrix is called circu/ont if it i of the form

M

~ l~ ~ ~

n

(

356 •

Chap. 6 Eigenvalues and Eigenvectors

Sec. 6.5 Stability •

Circulant matrices plav an important role in statistics. Show that any circulant 4 x 4 matrix M can be expressed as a linear combination of / 4, C4, C~ , CJ. Use this representation to find an eigenbasis for M. W hat are the eigenvaJues (in terms of a, b , c, d )? 39. Consider tbe n x n matrix C11 which has ones clirectly below the main diagonal and in the rigbt upper corner, and zeros everywhere else (see Exercise 38 for a discussion of C4). a. Describe the powers of C". b. Find aJI complex eigenvalues of C11 , and construct a comp lex eigenbasis. c. GeneraJize part c of Exercise 38. 40. Consider a cubic equation

In our case: 14p 2

!,

t.

(~) = p , sin (~) =2p , sin

sin

+ 12p 3 -

1 = 0.

Now Iet p

1

=-. X

At this point we inten"Upt Einstein 's work and ask you to finish the job. Hint: Exercise 40 is helpful. Find the exact solution (in tenns of trigonometric and inverse trigonometric functions), and give a numerical approximation as weiL (By the way, Einstein, who was allowed to use a logarithm table, solved the problem correctly.) Source: The Collected Papers of Albert Einstein, Val. 1, Princeton University Press, 1987.

x 3 + px = q, where (p/ 3)3 + (q/ 2)2 is negative. Show that thi s equation has three rea l solutions; write the solutions in the fonn xi = A cos(c/>J) for j = l , 2, 3, expressing A and cPi in terms of p and q . How many of the solutions are in the intervaJ <J-p/3, 2/ -p/ 3)? Can there be solutions !arger than 2J- p j3? Hint: Cardano's fonnula derived in Exercise 6.2.38 is useful. 41. In hishigh school final examination (Aarau, Switzerland, 1896), young Albert Einstein ( 1879-1955) was given the following problem: In a triangle ABC Iet P be the center of the inscribed circle. We are told that A P = 1, B P = and C P = Find the radius p of the inscribed circle. Einstein worked through this problern as follows:

35 7

STABILITY In applications, the long-term behavior is often the most important qualitative feature of a ~ynarnicaJ system. We are frequently faced \Vith the following situation: the state 0 represents an equilibrium of the system (in pbysics, ecology, or economics, for example). If the system is disturbed (moved into another state, away from the equilibrium _9) and then left to its own devices, will it always return to the equilibcium state 0?

EXAMPLE 1 ..... Cons.ider a dynamical system x (t+ 1) = A x(t) where Ais an n xn matrix . Suppose an initial state vector i o is given. We are told that A bas n distinct complex

(~) = 3p.

eigenvalues and that the modulus of eacb eigenvalue is less than I. What can you say about the Iang-tenn behavior of the system, that is, about lim x(t )? 1-+ 00

c Solution

(not to scale)

v;.

For each complex eigenvalue A; , we can choose a complex eigenvector Then the form a complex eigenbasis for A (by Fact 6.3 .6). We can write _{0 as a complex linear combination of the ii; :

v;

Then For every triangle the following equation holds: . 2 sm

(a) . (ß)2 + . (Y) + . (a) . (ß) . (y) = 2 + 2 2 2 2 sm2

sm2

2 sm

sm

Sill

B y Example 5 of Section 6.4

1.

lim :>-; 1 -->00

= 0,

since lA; I < 1.

358 •

Sec. 6.5 Stabili ty •

Chap. 6 Eigen values and Eigenvectors

359

Therefore , Jim .r (l) = o . t~

For the di scussion o f the long- term behavio r of a dynami ca1 ystem the fo llowing defi nition i u eful :

Definition 6.5.1

Stahle equilibrium Consider a dynami ca l system

Figure Ib Not osymptoticolly stoble . .x cr + I ) = Ax(t ).

We say that

Öis an (asymptoticall y) stable equilibrium for thi s yste m if lim ,_,.

x t) =

Genera li zing the result of Example 1 above, we have the foll ow ing result:

0

Fact 6.5.2

fo r all it trajectories. 1

Consider a dynami cal sy tem .r(f + I ) = Ax(t ) . The zero state is asymptotica ll y stable if (and onl y if) the modulus of all complex eigenva lue of A is le than l.

ote tbat the zero state is stable if (and only if) lim A 1 = 0 1->

(meaning that all entries of N approach zero). See E xercise 36. Consider tbe examples shown in Fi gure 1. Figure 1a Asymptoficolly stoble.

fac t onl y w hen there i a complex eigenbas is fo r

Ex ampl e I illust.rate thi

A . Recall th at this is the case fo r " mosf' matrices A. In Exercises 37 to 42 o f

Secti on 7.3 we wi ll di scu s the case whe n there is no complex eigenbasi . For an il lustrati on of Fact 6.5.2 see Fig ure 11 of Secti on 6. 1. where we ketched the phase portraits fo r 2 x 2 m atrice wüh two di tinct po iti ve eigenva tue . We w ill now turn our atte nti.on to the phase portraits fo r 2 x 2 m atrices with eigenvalues p ± i q (where q =/= 0).

EXAMPLE 2 ... Con ider the dynamical y tem ,i:(t + I ) = [

~

-q] p

.X (t ),

w he re p is an arbitrary real number and q is nonzero. Examine the tability of thi y tem. Sketc h phase portra it . Discus your re ult in term of Fact 6.5.2.

Solu tion We ca n interpr t the linear tran form atio n T (.x) = [ P

q

-q]x p

a a rotation-dil ati o n with a di lati 11 factor f r = J p - q 2 . I 11 Fig urc 2 we dra\! so me trajectories fo r thi y tem, for vari o u va lu e of p 2 + q 2 . N t th at 1

In th is tex t, '·stabl e" will always mean asymptotically stablc. Seve ral othe r not io ns of stability are used in applied mathematics.

p -q] x (f) = [q p

1 -

.t

_

Xo -

I

[ CO

(rj> f

in (rj> t )

-

i n(r/> t)

CO(r/>t)

JXo. _

360 •

Sec. 6.5 Stabili ty •

Chap. 6 Eigenvalue an d Eigen vectors

361

gi~e a somewhat ro undabout expressio n fo r the trajectory x(t): the vector x(t , Wil h rea l ~o m po n e nts, is expressed in tenn of complex qu antities. We may be

lO terested In a " rea l formuJ a" for x(t), that is, a formuJ a ex pressing x(t) in tern1s of rea l quantiti es alone. S uch a fo rmula i useful , fo r exampl e, when ketchi ng a phase portrait. It is he lpful to wri te Ä 1 in po lar fo rm : ).. 1 = r (co () + i in )) · the n 1 Ä 1 ~ r' (cos(cjJt) + i sin (t)). Si nce the vectors c 1)..'1ü1 and c 2 ~ ü2 are compl ex conJu gates, we ca n write x(t) = 2 Re (c)

(b)

(al

Figure 2 (o) p2 + q1 spiral outward.

2 1: trojectories spiral inward. (b) p2 + q = 1: trajectories are circles. (c) p

2

<

+ q2 >

= 2 Re((/z + ik)r'(cos(t ) + i in (t ))(ü + iw))

1: trajectories

= 2r ' (h cos(cjJl)ü - h sin (t)w- k cos(t)w- k sin (t)ü)

= Fi gure 2 illustrate that the zero tate is stabl e if p + q < 1. Al tern ati vely, we can find the eigenvalues and apply Fact 6.5.2.

AJ.2 =

-

2 p'A

2

I'A II = I'A2l =

= p

c?s(t ) sm(t)

- sin (t) ] [ cos(t ) h

·

Note that x(O) = - 2kw + 2hü. Let us set a = - 2k and b = 2h for impli city. The computati ons above may seem o mewhat contri ved; we have carefull y arranged things in s uch a way that the resuJt involves the rotation matrix

.

± rq

J P + q2 2

COS(t ) [ in cjJ r)

Again, we o bserve stabili ty if p 2 + q 2 < I , by Fact 6.5.2.

In E xarnple 2 we have seen that the matrix [ P - q ] ha the eigenvalue q p p ± i q , and we have sketched phase portraits for the dyna:mical system .xcr + l ) =[P q

-q] .xcr). p

Fact 6.5.3

Co nsider a real 2 x 2 matrix A with eigenvalues Ä1,2

c,'A',ü , + c?.Ä 2V2 , 1

where Üu = ü ± iw are ei genvectors with the ei genvalues Ä. 1, 2 (see Exercise 31). We leave it to tbe reader to show that c 1 and c2 are complex conju gate numbers: c~, 2 = h ± i k see Exerci se 32). Note that the formula

± i q, it is

= J7

± iq

= r(cos()

± i sin)

and corresponding e igenvector ü ± i~. Consider the dynarnical y te m :xcr + 1) = Ax(t )

1

1Convemion: Whenever we consider eigenvectors ii ± i 7v with assoc iated eigenvalues p understood th at p and q are real; q is nonzero; and ii and ÜJ are vectors with rea l components.

- sin (t ) ] CO (c/J t ) .

Let u summarize.

More generall y, if A is any real 2 x 2 matrix with e igenvalues 'A 1, 2 = p ± i q, what does the phase portrait of the dynamical system x(t + 1) = Ai 7(t) Iook like? Consider the traj ectory x(t ) = A'.xo for a given (real) initi al state .Xo. By Fact 6.1.3 we have x(r ) =

-k]

v][

= .2r' [ w

+ p2 + q 2

2p±)4 p2 - 4p 2 - 4q 2

ü] [-- kk c?s(t ) - h sin (c/Jt ) ] sm( t ) + h cos(t )

2 ,., [ w

2

2

JA()...) = )..?.

(c 1)..'1ü1)

with initial state x(O) = The n

xo. Write x

x(t ) = r' [

w

0

=

aw + bv.

a]

- sin (t ) ] [ cos(t ) b

·

In Exa mple 5 of Section 7.2, we will present a more conceptu al deri vaü on of thi re ult . There is no need to memorize the formula in Fact 6.5 .3, but make s ure you understand its implications tated in Fact 6.5.4 below.

362 •

Sec. 6.5 Stability •

Chap. 6 Eigenvalues and Eigenvectors

363

t = ':!

Figure 3 Figure 5 Wbat does the trajectory ."i(t given in F act 6.5 .3 Iook like? Let u fir t think about the trajectory cos(
- sin(
If r = I, we have fo und the trajectory x(l) . If r exceeds 1, then the exponential growth fac tor r 1 will produce Ionger and langer vectors as t increases: the trajecto ry will spiral outward, as shown in Fi gure 5. Likew i e, if r is Ies than 1, then the trajectory spirals in ward.

J[ab J.

Since the matrix represents a rotation, these po int will be located o n a circle as shown in Figure 3. The invertible matri x [w v] transforms the circle in Figure 3 into an ellipse (compare with Exercise 2.2.50). See Figure 4. Let u summarize our work thus far:

Fact 6.5.4

Co nsider a dynamical sy tem x(r

+ I) = Ax(r ),

w here A is a real 2 x 2 matrix with eigen values -

in (
A.1.2

= p ± iq = r(cos(
;r

If r = 1. the points (t) are located on an ellipse; if r exceeds 1, the trajectory pirals outward; and if r is le s than 1, the trajectory spirals inward. points on an ell ipse

Figure4

I =

2

Fact 6.5.4 provides another illustration of Fact 6.5.2: The zero state i stable if (and only if) r = /A.ll = /A.2I < l.

EXAMPLE 3 ..... Consider the dynamical system + I) =

x(t

U -5] _

1

x (r ),

. h trutta . . . I state xo - = [ 0 ] . Find real closed formulas for the components of x(l), wtt 1 and sketch the trajectory.

Solution We wi ll use the terminology introduced in Fact 6.5.3 throughout. The characteristic polynomial of A = [

~

=n

is A.2

A. 1 ? =

·-

-

2A.

+ 2, with eigenvalues

2±J4 - 8 . = I ± t. 2

364 •

Chap. 6 Eigenvalues and Eigenvectors

Sec. 6.5 Stab.i lity •

We write A. 1 in polar form

365

I r= 2

o that

../2

r=

J[

a.nd

Next we need to find an eigenvector ü +

E 1+i = ker [

~]

and ü = [

=;l

Since

iw for the eigenval ue

I

+ i:

cos ('.fr) -sin('.f r)] [ 1J [ sin (%r) cos ('.fr) 0

- 2 +i _

1

We write the initial state vector [

rp -- -4 '

x=

('.f r) [0 -5] [ cos sin(%r) I

- 2

(a)

.x0 = [ ~] as a linear comb in ation of

- sin (%r) cos (t r)

J[1J 0

(b)

ÜJ

ÜJ we have simply

0

x(r )

xo= lw + Oü,

=

(·f21' VL}

Figure 6

so that a =I and b = 0.

[01 -5][ cosSin('.f('.ft)t) - 2

- sin('.ft ) CO (%t)

J[1J 0

(c)

Tbe traj ecto ry spirals outward, since r = .J2 exceeds 1. In Figure 6, we develop the trajectory step by step as we have done in Fig ure 3 to 5 above. Note that we are using different scales in Fi gures 6a 6b, and 6c. ....

Now we are ready to u. e the fom1Ula stated in Fact 6.5.3 .

If you have to sketch a trajectory as in Fig ure 6c without the aid of tecbnology, it he lps to compute and plot the first few poin ts .r(O), x(l), .1:(2) ,.. . until you can see a trend . Then use Fact 6.5.4 to continue the trajectory.

EX ER C I S E S GOALS

Use eigenva lues to determine the stability of a dynamical yste m. Analyze rhe dynarni cal system ~r(r + 1) = A-r(t) , where A is a real 2 x 2 matrix wi th eigenvalues p ± iq. Tbe components of x (t) are x , (r) = - 5(.J2)1 si n

(~c)

x2 (r ) = C.J2)' (cos (

~~) - 2 si n (~t) ) .

For t:he mat:rices A in Exercises 1 to 10 deterrnine whether the zero state i a stable equilibrium of the dynarnical system .r(t + 1) = A.x(r) .

J

0 1• A = [ 00.9 0.8 .

J

0.8 0.7 3. A = [ - 0.7 0.8 .

366 •

Sec. 6.5 Stabili ty •

36 7

hap. 6 Eigenvalue and Eigenvecto r [ - 0.9 0.4 4. A =

-0.4 ] - 0.9 .

- [ 0.5 - 0.3

5.

- 2.5 ] 7. A = [2.4 I - 0.6 . 9. A =

[08

0 0.6

8. A = [

-06]

0 0.7 0

0.6] 1.4 .

0 0.8

10. A =

.

[- 1 - 1.2

6. A =

b. l

~.6 ].

- 0.2 ] 0.7 .

[03 03] 0 .3 0.3 0.3

0.3 0.3

0.3 0.3

.

11.

A =

14. A = [

k

12. A = 15.

.

I

[~~

A =[o.~l

k ] 0.6 .

13. A = [007

k1 ].

~:~

16. A = [

k ] - 0.9 . k ] 0.3 .

For the matrice A in Exercises 17 to 24 find real clo ed form ulas for tbe trajectory .r(t + l = Ai t), where .t(O = [

17. A =

[ ~:~

20. A = [ _ 4 3 23. A =

[ -0.5 -0. 6

~

l

Draw a rough sketch. [ - 0.8 -0 .8

-0. 8 ] 0.6 .

18. A =

~] ·

21. A = [

1.5 ]

I -2

24. A =U .2

1.3 .

0.6 ] - 0.8 .

~l

19. A = [ ; 22. A =

[~

- 32 ] . - 15] -11 .

x

26. x(t + I ) = A Tx(t). 25. ;er+ I)= A- ' .xcr). 28. x(r + I ) = (A - 2 /" )x(r) . 21 . .xcr + 1) = - A.t(t). 29. x(r + 1) = (A + I" )x(t ). 30. Let A be a rea l 2 x 2 matrix. Show that the zero state is a stable eq uilibrium of the dynarni cal system x(r + I) = Ax(t) if (and only if) itr(A) I - 1 < det(A) < I. 31. Consider a complex m x n matrix A. The conju gate the conjugate of each entry of A . For example, if 5] 9 '

then

- [2 A=

I

(see Exercise 6.1.36 for some background). Sketch tbe trajectory for the initial state g(O) = 100 and h (O) = 0. Wbat does the trajectory tell you in practical terms? Be as specific as possible. 34. Consider a dynamical system x(t +I) = Ax(t) , where Aisareal n x n matrix. a. If ldet(A)I ::: 1, wbat can you say about the stability of the zero state? b. lf ldet(A) I < 1, what can you say about tbe stability of the zero state? 35. a. Consider a real n x n matrix with n distinct real eigenvalues A. 1 • • • A.", where lA.; I ::: 1 for all i = 1, .. . , n. Let x(t) be a trajectory of the dynarnical system x(t + 1) = Ax(t). Show tbat this trajectory is bounded: that is, there isa positive number M such that llx(t)ll ::: M for allpositive integers t. b. Are all trajectories of the dynarnical system

x(r+ l) =[~

-3 ] - 2.6 .

Consider an invertible n x n matrix A such that the zero state is a stable equilibrium of the dynarnical sy tem (t + 1) = Ax(t ) . What can you ay about the tability of the systems Listed in Exercises 25 to 30?

A=[2+3i 2i

b. Let A be a real n x n matrix and v+iw an eigenvector of A with eiaenval ue p+iq. Show that the vector v-iw is an eigenvector of A with ei;en value p- iq. 32. Consider a real 2 x 2 matrix A with eigenvalues p ± iq and corre ponding eigenvectors ± iw . Show that if a real vector .X0 is written as -~o = c 1 ( v + i ÜJ) + c2 ( v - i ÜJ), then c 2 = c 1• 33. The glucose regulatory system of a certain patieot cao be described by the equation s g(t + 1) = 0.99 g(t) _ 0.0 1 h (t ) h(t + 1) = 0.01 g( r) + 0.99 h (t)

v

Consider the matrices A in Exercise II to 16. For which real nu mbers k is the zero state a stable equilibri um of the dynamical sy tem .r(r + 1) = kr(t)?

[kO 0~9] . ~ k]

a. Show that if A and B are compl ex m x n and n x p matrices , re pectively then A B = A B.

-

A

3i . 2L

is defi ned by taking

i].t(t)

bounded? Explain . 36. Show that the zero state is a stable equilibrium of the dynarnical ;r(t +I)= Ax(t) if (and only if)

ystem

( -'>

(meaning that all entries of N approach zero). 37. Consider the national income of a country, which consi t of con umption , inve tment, and govemment expenditures. Here we assume the govemment expenditure tobe constant, at Go, while the national income Y (t), consumption C(r), and investment I (I) change over time. Accord.ing to a imple model , we have Y(t) = C(t) + I (t ) +Go (0 < y < 1) C(t + l ) = y Y (r) (et > 0) I(r. + 1) = et(C(t + 1) - C(t)) where y is the marginal propensity to con ume and et is the acceleration coefficient. (See Paul E. Samuel · on, "Imeractions between the Multiplier

368 •

Sec. 6.5 Stability •

Chap. 6 Eigenvalues and Eigenvectors

Analy i and the Principle of Aceeieration " Re1iew of Economic Statisrics, May 1939 . pp. 75-78 .) . a. Find the equilibrium solution of these equ atwns, when Y (t + 1 = Y (1) , C (1 + 1) = C(l ) , and I (r + 1) = l (t ) . b. Let y (1) , c(r), i(r) be the dev iations of Y(1), C(r), I (1) from the equilibriurn state you fo und in part a. T he e quantities are related by the equations y (t )

c (t i (l

39. Consider the dynarn:ical system X1(t

+ 1) =

x2(t + I ) =

+ 0. 2x 2 (t ) + I , 0.4x l (t) + 0 .3x2(t ) + 2

O. lx 1(t )

(See Exercise 6.3 .41 ). Find the equilibrium state of th.i s system and determine its stability (see Exercise 38). Sketch a phase portrait. 40. Consider the matri x

= c(l ) +i (t )

+ l ) = yy(t) + 1) = a(c (t + 1) -

369

c (t ))

-q p -s r

(Verify this!) By substituling y(1) into the econd equat.ion, set up equations of the form c(t+ l ) = p c(t) + q .i (r) l · l i(t + 1) = r c(t + s 1(1)

c. When a = 5 and y = 0.2, determine the stability of the zero state of th.i s system. d. When a = 1 (and y is arbitrary , 0 < y < 1), deterrnine the stability of the zero state. e. For eacb of the four sector in the a - y plane below, determine the stability of the zero state.

- r

s p

-q

-sl - r

q p

where p, q , r, s are arbitrary real numbers (compare with Exerci e 4.3.34). a. Compute AT A. b. For which choices of p , q , r, s is A invertible? Find the inverse if it ex.ists. c. Find the determinant of A . d. Find the complex eigenvalues of A . e. If .X is a vector in IR4 , what is the relationship between Jlx II and IIAi II? f. Consider the numbers

4a 'Y = ( I

+ a )2

and

3

2

4

Ci

Express the number 2 183

Discuss the various cases, in practical terms. 38. Consider an affine transformation

T (i) = Ai +

as the sum of the squares of four integers,

b,

where A is an n x n matrix and b is a vector in !Rn (compare with Exercise 6.3.41 ). Suppose that 1 is not an eigenvalue of A . a. Find the vector ii in !Rn such that T(ii ) = ii; this vector is called tbe equilibrium state of the dynarnical system x(t + 1) = T (x(t )) . b. When is the equilibrium ii in part a stable (meaning that lim x (t ) = ii for all trajectories)? ~--+

Hint: Parteis useful. Note that 2183 = 59 · 37. g. The French mathematician Joseph-Louis Lagrange (1736-1 8 1 ) howed that any prime number can be expres ed a the um of the square of four integers. Using this fact and your work in pmt f as a guide, show that any positive integer can be expressed in thi s way.

3 70 •

Chap. 6 Eigenva lues and Eigenvectors

41. End a 2 x 2 matii · A without real eigenvalue · and a vector -~o in JR uch that fo r all positivei nt ger r the point A1.xoi lo ated on the e llipse sketched 2

below.

COORDINATE SYSTEMS COORDINATE SYSTEMS IN JRn 42. We quote from a texr on computer graphic (M. Beeler et al., "HAKMEM," .rvm Artjficial Intelligence Report AIM-239 , 1972):

Here is an elegant way to draw almost circles on a point-plotting di splay.

We now tak:e a fresh Iook at the example in Section 6.1 about coyotes and roadrunners. Consider the matrix A = [

CIRCLE ALGORITHM : NEW X NEW Y

= =

OLD X OLD Y

K

+

K

* OLD * NEW

Y;

X.

Thi makes a very round ellipse centered at the origin with its ize determined by the initial point. Tbe circle algorithm was invented by mistake when 1 tried to save a register in a display hack!

with eigenvectors ii, = [

[i~]

7]

0.86 -0.12

0.08] 1.14

and Ü2 = [ ; ] . Suppose the initial state is

x=

to make the numbers more manageable, we measure the populaüons of

animals in hundreds) . We wish to analyze the dynamical system .r(t+ l ) = A.x(t) . First, we express ,r as a linear combination of the vectors 1 and ü2 :

v

(In the formula above, k is a small number.) Here, a dynamical system is

or

defined in "computer lingo." In our terminology, the fonnulas are x(t+ I =x(t) - k y(l) y(l

+ 1) =

y( t)

+ kx(t + 1).

a. Find the matrix of th.is transformation (note the entry x(t + I) in the second formula). b. Explain why the trajectories are elljpses, as claimed above.

The unique solution of this linear system is c, = 4, c2 = 2. We can imagine that the vectors ii 1 = [

7]

and Ü2 = [ ; ] define a coordinate system in the plane, as

sketched in Figure I. The figure confirms our computation: the coordinates of ,{, with respect to the basis iil> ii2, are c 1 = 4 and c2 = 2. In the c,-c2 coordinate system, we can represent ; by its coordirwte vecror

371

3 72 •

Chap. 7

Sec. 7.1 Coordinate ystem in lR 11

oordinate ystems

EXAMPLE 2 .... Find

[XJ8

for

x=

[

~]

1



373

(for the bas is ß introduced above).

Solution To find the coordin ates of

x, we solve the linear system or

10

A Jjttle computation hows that c 1 = 2 and c2

= 3, so that

It is important to point out that a ba is is a sequence of vectors and not ju a set (the order matters). Consider the ba is

5

L

UJ. [7] n

of JR2, denoted by (we have rever ed the order of tbe vectors Ü1. Ü2 above) . Tben the order of the coordinates i reversed as well ; for example: Figure 1

5

[xJn = Let us denote the basis ü1, respect to ß is denoted by [x] 8 :

v2 by ß.

Then the coordin ate vector of .X with

for

[;]

,t =

L~ J.

1 2 . ß.., cons1.stmg . o f the vectors v- 1 = [ ] an d-v2 = [ ] . Now return to the bas1s 3 1

The equation

definino0 the coordinates of a vector X

Solution In Figure 1, we can read off the coordinates of ü1: they are Algebraically, we have

Either way, we find that

c1

= 1 and

c2

= 0.

IR 2 can be written in matri x form as

I I] [I [ C J ~I [ ~I ~2

-

=

.r in

-

Ci

-

=

lf we denote the matrix

by S, the equalion takes the compact form and, likewise,

1 .-r

= s [.r] 8



3 74 •

Chap. 7

Sec. 7.1 Coordinate Systems in R" •

oordin ate ystems

[x]6

W can now " rite a c losed fonnula for

3 75

in term of x and

[x-] ß = s-1XFor example, if

x=

[

~ ] , we find that

1

[21 31]- [7 ]_1[ 3 11 - S - 1 1

-

[x ]ß=

S

- 1-

x=

10

Note tbat this agree with the re ult in Example 2. Now we go a tep further. We wish to tudy the linear u·ansformatio n

-

-

T(x) = Ax =

[ 0.86 -0.12

0.08 ] 1.14 x

5

in the oordinate sy tem defined by the ba is ß. More precisely. we wi h to find the mat1ix B th at transforms

[r<x)] 8 . Recall fir t tbat ü1

0.9ü 1 and If .J;

= [ ~]

and ü2

A~ = l.lÜ2.

= c 1ü1 +c2 ü2 ,

rhen T(X)

= [ ~]

[.q 6

into

are eigenvector of A with A ii1

= 0.9c 1ii 1 + U c2 ii2 .

=

Or, expressed in terms of

Figure 2

coordinate vectors: if

tben

0] [Ci ]= [0.09

l.l

C2

Because ü1, ü2 is an eigenbasi , w ith as ociated eigenva1 ues 0.9 and 1.1 , respectively, B is a diagonal matrix , wirb the eigen value on the diagonaL We wil l rerurn to tbis point in tbe next ection (Fact 7 .2.1 ). Whm is the relation hip between the " tandard matrix

The matrix

A = [

B _ [0.9

-

0

0 ] 1.1

0.86 -0.12

ofT (whi ch tran forms .X into T (.r)): the matri

which transforms [x] 8 into [T (x) ]8 i called the rnatrix ofthe linear transformation T with respect to the basis ß. 1 Tbi means that in the c 1-c2 coordi nate sysrem defined by the basis ß , the first component is contracted by 10% and the seco nd component is stretched by 10%, as illustrated in Figure 2.

B=[o0.9 1.l0] ofT with respecr to ß (which transform

s -1

Some authors introduce the notation [Tla for thi s matrix.

0.08] 1.14

which repre ent the basis ß?

[x]8

into [T(x)] 6 ) : and the matri x

[2 1] 1 3

3 76 •

ha p. 7

Sec. 7.1 Coord inate Systems in

oord inate Systems A

We can gene rali ze the concepts introduced in the example above:

rc.v)

X

Definition 7 .1.1

Coordinates Co nsider a basi ß of IR", consis ting of the vectors ü1, ii2 , vector >i: in IR" can be written uniq uely a

s

x=

c1ii1

C on ider the diagram in Figure 3. After a Iittle ''diagram-chas ing" we can conclude that

is called the coordinate vector of More explicitly:

B [x ]

6

= [T(.t)] 8 = s- 1T

x)

= s- 1Ax = s- 1AS [.K) 6 ,

Verify thi s formula by computing

Ün. 1 T he n any

x

r

x w ith

respec t to tbe bas is ß , and the

~:l c"

denoted by

[x]8 .

Consider the Standard basis A o f Ji{"' consisting of tbe vectors e". A vector

so tba t B =

•..•

+ c2ii2 + · · · + c"li".

The c; are called the coordinates o f vector

B

Figure 3

iR'' • 377

el' e2, .. ..

s- 1AS.

s- 1AS fo r the

matii ces A a nd S given in our

example. The formula B =

s- 1AS

in Ji{" can be represented a

A = s s s- 1

or

give us a new way of thinking about the dyna mi cal system x(r

+ 1) =

A>r (r)

or

x x

i.e. tbe coorilinates of with respec t to the tanda rd ba i A are just the compone nts of the vector (the x; ). means that the coordinate vector [.r ]A i x itself:

1

.r(r) = A xo.

Fir t note that

t times

so rhat

l

~ [0.9 0 J[Cic2 ] v 0 1.1 2

I

Thi agrees with the fonnula we fo und in Fact 6 . 1.3.

1

1

Tm

In m any applications, in physic , stati stics, and geome try in particu lar, it i ofte n use ful to work in a coorilinate syste m th at i '·weil adju ted" to a proble rn at band. For example, if there i a n eige nba i for a Linear tran formati on T from IR" to IR" , it makes sense to work in the coordinate syste m defined by thi eigenbasis (a we have do nein C hapte r 6 , without fo rm all y introducing the co ncept of coordi nates) .

1 For simplicity we stiife the defi nitions of this ecti on ··over IR." Everything works thc ame way over C (or uny other tield).

3 78 •

Chap. 7 Coordinate Systems

Sec. 7.1 Coordinate Systems in

--------------------------------~

Fact 7.1.2

Solution

Consider a basis ß of IR". Then ; =

s [.t]8

B = S- 1 AS=[l . 1

for any vector ; in IR" , where S is th~ matr~ x wh? e columns ar~ the basis vector (in the given order). Since S 1 an mvertJble n x n matn x, we can write thi equation a

x = S [x)z3 is simpl y a compact way

3

- 4

I

[r cx>]8 = s [x]8

to write for all

which is the definition of coordinates.

x in lR".

Applying thi s equation to = B

[ü;] 8

x = Ü;, we find tbat

=Be;= ith column of B.

Thi s Observation allows us to find the matrix of a linear transfonnation column by column :

The matrix of a linear transformation Consider a linear transformation T from !R" to IR", and a basi ß of IR". The x n matrix B which transforms into [T (x) ]8 is called the m atrixof T w ith respect to ß:

[xL,

11

for all

Fact 7.1.4

[5 -6] [I

. <=:onsider a linear transformation T from lR" to lR" and a bas i ß of JR" consJstmg of the vectors ü, , Ü2, ... , ü". Let B be the matrix of T with respect to ß , defined by the equation

[T (v;)]8 Definition 7.1.3

2 ] -' l

Work out the details of this computation yourself.

[x-]ß = s-'-x. Again note that the formul a the equation

JR'' • 379

Fact 7.1.5

Consjder a linear transformation T from JR" to IR." and a bas i ß of JR" consjsting of the vectors ii,, ü2 , .•. , Ü11 • Then the matrix B ofT with respect to ß is

x in JRn.

If A is the standard matrix of a linear transformation T (that is, T (;"t ) = Ax , for all ; in IR"), and B is the matrix of T witl1 respect to some basis ß , then

B

= s- 1As,

that is, the ith column of B is the coordinate vector of T (Ü; ) with respect to the basis ß.

Note that Fact 7.1.5 generalizes the fonnula

where S is the matrix whose columns are tlle vectors of the basis ß . Note that the "srandard matrix" A ofT, introduced in Chapter 2, is simply the matrix of T with respect to the Standard basis A , because the equation

[T (x)]A =

A

rce")

[x]A

siroplifies to for the standard matrix A of a linear transformation T (Fact 2.1.2). ln the introductory example, we have

T(x ) = Ax.

EXAMPLE 3 ..... Find the matrix

B of the linear transformation T(.r) =

U=~ J ;

with respect to the basis ß consisting of the vect.ors [ : ] and [

8

7].

~ [ (T(V,JJn

(T(V,)Jn]

~ [ (09V,Js (I.IV,Js] ~ [009

101].

EXAMPLE 4 ..... Consider two orthogonal unit vectors v1 and ü2 in JR 3 • Form the ba is ü1, i), v3 = ü, x ii2 of JR.l , denoted by ß . Find the matrix B of the linear tran formatio~ T(x) = ü1 x ;r. with respect to the basis ß.

380 •

ec. 7.1 Coordinate ystem in

hap. 7 Coordjnate Sy tems

11



381

~ he~oe the first component lists the height in meters and the econd one the mass

Solution

111 kilogram . For my friend s in the United States, I need to ex press these measure me nt in their system:

U e Fact 7 .1.5 to construct th

I foot ~ 0.305 meters,

1 pound =

-V (

X

-

Vz

=

-3

-!]

0

~

0.454 kilograms.

lf I in sist on using matrix techniques to perfmm these unit change , I can in troduce the basi. ß of IR 2 consisting of the vectors and

[

~.454 ]

0

T hen

[x]a = s-';

draw a sketch

1~5 J

gives the height in feet and the weight in pou nds, as desired. Draw a sketch superimposing the coordi nate system in term of feet and pound over the metric coordinate y tem. Use two different co lor

EXMIPLE 5 ..... Con ider a real 2 x 2 matrix A with eigenvalues p±iq and corresponding eigenvector - ±iw. It can be hown that the vector u and ÜJ in IR2 are linearly independent (Exercise 29). Find t11e matrix B of the linear Lransfom1ation T (x) = Ax with respect to t11e basis ß consi ti ng of ÜJ and ü.

= [

0

EX ER C I S ES GOALS

Use the concept of coordinates. Apply ilie defi ni tion of the matrix of a linear tra nsformation with respect to a basis. Relate this rnatrix to the standard matrix of the transformation. Find the matrix of a linear tran formation (with respect to any bas is) column by column.

Solution Because

ACü + iw ) =


~ 1.

we have

Find the coordinate vector of [

~]

1

with respect to the ba i [; ] . [ ; ] . 1

2. Find the coordinate vector of

Aw =

pw+qv AÜ = - qw + pÜ

[i]

and

with res pect to the basis ote that B is a rotation-dilation matrix . Compare wi th Exercise 30.

~

The problern of coordinate changes is a multidi men ional version of the probl ern of unit changes which you may have encountered .in math or physics courses (or in your dai ly lives, when dealing with fo re.ign systems of measurement or fo reign currencies). We illustrate th.is poi nt with a simple example. Suppose I collect data regarding the height and weight of people. For each person, 1 represent the data as a vector; fo r example:

-- [1.83] 84

X-

l~J · l~J· l~J· l~J

3. Find the matrix of the linear tran fo rmati on

T (_v) = [ with res pect to ilie ba i [

~

l [~ l

~

; ]

x

382 •

hap . 7 Coordinate System

Sec. 7.1 Coordinate Sys tems in IP. 11

4. Find the matrix of t.he linear mm formation

-1]-

~ = [ _7 T (x)

8

6

with respect to the ba i

12. ln the figure below, sketch the vector .X with basis of JR 2 consisting of the vectors ii,

X

1]· [21] [-3

s. Let T: JR2 -* JR 2 be the orthogonal

projection onto the line spanned by [

~

. [3] [-1]

, . Draw a ketch. 1 3 b. Use you r answer in part a to find the standard matrix of T .

6. Let T: JR3

---7

13. Consider the vectors ii,

v, w sketched

0

2

iR with tandard matrix [

Fi nd the matrix oftb is transfonnation with respect to the ba i [

where ß i the

w.

below . Find the coordinate vector of

ii (Iran ·Ja!Cd)

Draw a sketch. b. Use your answer in part a to fi nd the standard matrix of T. -*

l

ii.

a. Find the matrix of thi transformation with re pect to the bas i

7. Con ider tbe linear tran formation T: IR

~

0

ÜJ with respect to the basis ii ,

be tbe reflection in tbe plane given by the equation

2

= [ -

383

l

a. Find the matrixofT with r pect to tbe bas1s

!Pl3

[.XJ 8



~

:

l

/(

14. Given a hexagonal tiling of the plane. such as you might find a n a kitche n floor, con ider the ba i.s ß of ~2 con isting of the vector ii ÜJ ketched below.

R

~] , [ ~ ] .

8. Consider a linear tran sforma lion T: IR 2 --+ iR2 • We are told that the matrix of T with respect to the ba i.

Dl U]

is [

~ ~ J.

Fi nd the standard matrix

ofT. 9. Con ider a linear tran formation T : T with respect to the ba i · [

~] ,

[

2

--+ ~ 2 .

b]

is [

~

We are told th at the matrix of

!].

Find the Standard matrix

ofT in tenns of a , b, c, d .

10. Consider the basis ß of

[.XJ 8

for

x= [ ~].

11 . Da Exerci e 10 for

~2

con isting of the vectors [

II Iu strate the re ult with a sketch.

x= [ ~

J

~]

and [ -

~].

Find

a. Fi nd the coordinate vector [ 0 P ]

b. We are to ld that [ OR ]

8

=

UJ

8

and [ O.....Q ]

8

.

Sketch the po in t R . ls R a vertex

ce nter of a ti Je? e. We are to ld thar [

ÖS ]

8

= [ :;].

a ccnte r r a vertc

f a til ?

r a

384 •

Sec. 7.1

hap. 7 Coord inate y -tem

15. Find tJ1e coorclinate vector of e1 with respect to the ba i

oordinate Systems in IR" •

23. Let L be the line in JR 3 spanned by the vector ü =

Ul nl m

0.6 ] [ ~-8

. Let T: R 3

385 R 3 be

the rotation about thi s line through an angle of rr j2, in the direction indicated in the sketch be.low. Find the matrix A suchthat T (x) =Ai .

of ~ 3 . 16. If ß i a ba is of IR!", i the tran formation T from IR!" to IR!" given by T (x) = [x ] 8 L

linear? Ju tify your an wer. 17. Con ider the ba i ß of R 2 con i t.ing of the vectors [ ; ] and [ told that

[.x] 8

= [

1 ~]

~].

We are

for a certain vector

,r in R- .

Fi nd

XI .

x.

18. Let ß be the ba is of IR!" consisting of the ector Ü1, Ü2, .. . , ü" , and Iet T be some other basis of IR!". Is

a basis of

~n

as well? Explain.

19. Consider the ba is ß of

n

, ... "' ... '

06

~2 con

isting of the vector [

~]

and [ ;

l

ii •• ••

• ••• • •• • • ••••••••••••••• • ~

24. Con ider the regular tetrahedron ketched below, whose center i at the origin.

an d Iet

be the basis consisting of [ ~ ] , [ ~] . Find a matri x P such that

for all .X in iR2 . 2.0. Find a basis ß of IR- uch that and 21. Consider two orthogonal unit vectors consisting of the vectors ü1, ü2 , ü3 =

ü1 and ü2 in JR 3 and form the basis ß ü1 x ü2 • Find the matrix of the linear

transformation

with respect to the ba is ß. 22. Consider a 3 x 3 matrix A and a vector ü in IR uch th at A 3 ü = 0, but A 2 ü =/= Ö. a. Show that the vectors A2 ü, AÜ ü form a bas is of JR: 3 . Hint : Demonstrate linear independence. b. Find the matrix of the transformation T x) = Ax with re pect to the ba i A 2 ü, AÜ , ü.

Let ii 0 , ü1, ü2 , ii3 be the position vector of the four vertice. of the tetrahedr n: ..... ü0 = OP o, .. . , ii. = OP 3. a. Find the sum iio + ii1 + Ü2 + Ü3 . b. Find the coordinate vector of Üo with re pect to the ba i VJ. IJ:!, VJ. c. Let T be the linear transformat.ion with T(Üo) = Ü3, T(v3) = v1 . and T(ü 1) = ü0 . What is T (v 2 )? De. cribe t.he transformation T geometrically (as a reflection, rotation, projection , or whatever). Fi nd the matri 8 ofT with respecr to the basi ii 1, ii2 , Find the complex eigenval u of 8. 3 What is 8 ? Expl.ain.

v.

386 •

Chap. 7 Coordinate Systems

Sec. 7.2 Diagonalizati on and Similarity •

25. Con ider a rotation T (x) = Ax in ffil. 3 (that is, T is an orthogonal tran formation. and det(A) = I). a. Consider an orthonormal basis ß of ffil. 3 . Is the matrix B of T with respect . . . ? to ß orthoaonaJ? Js it necessanly a rotatwn matnx. b. Now supp~se ß i an ?rthon~rmal ~asi Ü1, Ü2, V3 of IR\_ where Ü1 is_,a fixed point of T , that ts, T v 1) = v 1 • (Such a vector ex tsts by _Eu Iet s theorem· see Exercise 6.3.34.) What can you say about Lhe matr1x 8 of T with ~espect to ß? Describe the first row and the tirst column of 8 , and explain why the rninor 8 11 is a 2 x 2 rotation matrix . ExpJain the significance of this result in geometric terms. 26. Consider a linear tran formation T (,\-) = kr from IR" to IR". Let B be the matrix ofT with re pect to the ba is - e3, ... , (- l )"e" of IR" . Describe the entries of B in terms of the entries of A.

-e1, e2,

29. Consider a real 2 x 2 matrix A with eigenvalues p ± iq and corresponding eigenvectors ± iw. Show that the matri x [ w ii ] i invertible. Hint: We ca n write

v

30. Consider a real 2 x 2 matri x A with eigenvalue p ± iq and corre ponding eigenvectors v ± iw. Find the matrix B of the linear Iransformation T(x) = Ax with respect to the basis consisting of v and w. Campare with Example 5.

DIAGONALIZATION AND SIMILARITY

27. Consider a linear transformation T (x) = Ax from IR" to IR". Let B be the matrix ofT with respect to the basis e1" en-1, ... , e2, e1 of IR". Describe the entrie of B in terms of the entrie of A. 28. This problern refers to Leontief s input-output model , first di cussed in the Exerci es 1.1.20 and 1.2.37 . Consider three industries /1 , h / 3 each of which produces only one good, with unü prices P I = 2 P2 = 5, PJ = LO (in doiJars), respectively. Let tbe three products be good 1, good 2, and good 3. Let

012 013] = [0.3 0.1 o 22

a 23

0 32

0 33

0.2

0.2 0.3 0.2

387

The introductory example of the last section (pp. 374 and 375) sugge t the following resu!t:

Fact 7.2.1

The matrix of a linear transformation with respect to an eigenbasis is diagonaL More specifically, consider a linear Iransformation T (.r) = Ax, where A is an n x n matrix . Suppose ß is an eigenbasi for T consisting of the vectors v1, ü2 , ..• , 11 , with A = .A; Then the matrix B of T with respect to ß is

v

v;

v;.

!l'' JJ 0

0.1 ] 0.3 0.1

8

~ s-' AS ~

)'2

0

be the matrix that lists the interindustry demand in terms of dollar amounts. The entry oij teils us how many doiJars' worth of good i are required to produce one doUar's worth of good j. AJtematively, the interindustry demand can be measured in units of goods by means of the matrix

where bij teils us how many units of good i are required to produce one unit of good j. Find the mat.rix B for the economy discussed here. Also wri.te an equation relating the tbree matrices A, B, and S, where

S=

2 0 0 5

[0 0

~~J

is the diagonal matrix listing the unit prices on the diagonal. Justify your answer carefully.

where

To justify this fact, consider the column of B (see Fact 7.1.5):

0 0 (ith column of

B) = [T(Ü;)] 6

=

[.A;v;]6

=

.A;

+- i th

component

0 Applying this observation to i = 1, 2, .. . ,

11 ,

we prove our claim.



388 •

Sec. 7.2 Diagonaliza tion and Sirrti la rity •

Chap. 7 Coordin ate Systems The con verse of Fact 7 .2.1 is true as weil : if the matrix of a li near tra n fo m1 ation T with respect to a basis ß i diagonal, lhe n ß is an eigenbasis fo r T (Exercise 17). Fact 7.2. I motivates the followin g detinition:

Asking whether. a matrix A is diaoo naJizable is the ame as a kin bo whethe r . . b . there IS an e tgenbasts fo r A. l n C hapter 6 we outli ned a method for answering thi s questio n. We summarize th is proce s below:

Algorithm 7.2.4 Definition 7.2.2

Diagonalizable matrices An n x 11 matrix A is called diagonalizable if there is an invertibl e matri x S such that s-' AS is di agonal.

389

Diagonalization Suppose we are asked to decide whe ther an n x n mat rix A is diagonali zable, and, if so, to fi nd an invertibl e S such that s-' AS is di agonal. Th is process can be carried out over IR, over C , or over any other fie ld.

a. Fi nd the eigenvalues of

A , that is, olve the eq uatio n / A(A.) = 0.

b. For each eigenvaJ ue A., fi nd a bas i of the eigen pace As we observed above, s- 1 AS is diagonal if (and only il:) the column. of S form an eigenbasis for A. This impli es the following result:

E 1. = ker(AI" - A ).

c. T he matrix A is diagonaJi zable if (and only if) the dimens ion of tbe eigenspaces add up to n . In thi case, we find an eigenbas is ti 1 ti2,

Fact 7.2.3

The matrix A is diagonalizable if (and only il:) there is an eigenbasis for A . In particula r, if an 11 x n matri x A has n di stinct eigenvalues, then A .is diagonah zable (by Fact 6.3.6).

. . . , ü" for A by combining the bases of the eigenspaces we fo und in step b. Let S = [ 1 ti2 iJ"]. Then s-' AS is diagonal with the cotTes ponding eigenvalues on the di agonal.

u

EXAMPLE 1 ~ Diagonali ze the matri x A matrix with real entries may be di agonaliza ble over C, but not over 1lt For example, the rotation matrix

Solution has no real eigenvectors, so that A is not diagonalizable over IR. However, there is a complex eigenbasis for A, namel y,

We proceed step by step as outlined above:

a. Find the eigenvalues: 1

JA (A.) with associated eigenvalues i, - i (see Example 6, Section 6.4). T herefore, A is diagonalizable over C. The matrix S=

u -n

= det [ A. - 0 - 1 A. - 1) 3

= (A. -

-

I )(A.

2

),

~1 0

~

] A. - I

(A. - I ) = (A. - I ) ( A. - 1)2 -

2A.) = (A.- I )(A. - 2)A.

1)

-

=0

The eigenva lu es are 1 2, and 0, so that A is diago nali zab le (over IR). b. Find an eigenvector for eac h eigenva lue :

does the job:

s- ' A S = [ ~

-~J

E 1 = ker

Note that "most" n x n matrices (with real or complex entries) are diagonalizable over C , because they have n distinct complex ei genvalues. An example of a matrix which is not diagonalizable over C is the shear matrix [

.

b ~ ].

0

0

0

0

[ - 1 0

-l] [I 00] ~ = ker ~ ~ ~

;

390 •

Sec. 7.2 Diagona lization and Similarity •

Chap. 7 Coordinate Sy tem where

Si milarly, we find eigenvector for the eigenvalues 2 and 0: l

2

t

t

s~

-i]

0

J .

Diagonalization is a usefu l too l to study the powers of a matrix: if

[-n m m

c. Let

s- 1 AS =

u -1] 0

A=

.

and

0 0] 2

0

1 tim es

Note that D' is easy to compu te: raise the calars on the di agonal to the t th power. Tf

.

0 0 Note that you need not actua lly comp ute s- 1 AS. The theoretical work above guarantee that s- 1 AS is diagonal, wi th the eigenvalues on the diagonal. To check yo ur work you may wish to verify that s- 1 AS = D , or, equivalently, that AS = SD (here we need not compu te the inverse of S).

-1 ] [0 ~] ~] [! -I][I 0~] ~ u ~]

AS ~ [6

1 0

so~[!

0 1

=

I 0

0 OJ . ' 0

2 0 2

0 1

0 0

0

2 0

.xcr where A is di agonali zable, with

.J

EXAMPLE 2 ..... Diagonalize the rotation-dilation matrix

where p and q are real numbers with q

I

s-

=1=

A.~

0

0

.

. ..

S=

Solution The eigenvalues of A are A. 1.2 = p ± iq (see Section 6.5, Example 2) . T hen

AS

=

Ax(t). D . Then 1-

'U J

'U2

v"

'

D' =

[~

A.'2

0

0

.

0

J

S

- 1-

xo = [xo] 8 =

1

C JA 1

c2 A.~

v"

and

-~ J.

c"A.~,

Now

s-J[ ~7!} "'

-Pq

Js =

[ P -lo - iq

o

p-iq

J'

= c 1A.11v1 + c_A.~Ü2

+ · · · + c"A.~, i.i".

Thi is the formul a famili ar from Chapter 6 (Fact 6.1.3).

[,,: J. C? -

c"

Then

· -- ker [ iq E p+rq -q

E1,-;q = span [

n

Thi is a more succinct way to write the formula fo r dynarnical y tems derived in Chapter 6. To see why, write

'

0.

+ 1) = 1

X(I) = A' Xo = s D' s- I io

-q] p

[~

0

Consider a dynami cal system

2

A = [ :

D'=

then

A."

2

l 0

sos- 1

1

Then

0 L 1 0

D,

then

1

0

391

392 •

+

Chap. 7

oordinate System

Sec. 7.2 Diagonalization and Similarity •

Similarity

p ± iq and correspondi ng ei genIn Example 5 of Secti on 7. 1 we have seen that A is imi lar to

EXAMPLE 5 ...... Consider a real 2 x 2 matri x A with eigenvalues vectors

So far we have focu ed o n the diagonalizable matrices. that is, those matrices fo r which there is an eigenbas is. lf there i. no eigenba i for a linear transfo rmati on T (_r ) = A.r from C " to C ". we may still Iook for a ba is ß of C" thar i "weil adj usted'' to T , in the sense thar rhe matrix B of T with r spe t to ß i ea ier to work wilh than A it elf. To put it differently, for a (nondi agonalizable) marrix A we may wi h to fi nd an inve1tible S such that B = s- 1 AS is of a simpl er form than A itse lf. Ln view of ou r intere t in dynamical sy tems, the main requ irement will be that the power 8 1 of B are ea to ompute. Then we can "do" the dynami cal y tem x(t + I) = A,\- (1) :

I x(r ) =

N ,\-o =

v± iw.

[p More specifica lly, B = s- AS, where S = q -q]· p 1

B =

[w

ü ]. Use thi

simil ari ty to derive a real closed fo rmula for x (t ) = N x 0 .

Solution We can write - sin(
csss- 1Y-"ro = ss ~ s- 1.Xo I

.

Then

-()t = A1-xo = sBIS

For the di cu sion of this prob lem the fo llowi ng termi nology is useful :

Definition 7.2.5

393

X

1[-

- 1-

xo = r

- ] [ COS(cPI)

w

v

Similar matrices

where [ ~ ] is tbe coordinate vector of

Con ider two 11 x 11 matrice A and B. We say that A is similar to B if there is an invertible matrix S such that B = s- 1 A S.

that we fi rst stated thi formula in Fact 6.5.3.

x0

sin(
- sin (
with respect to tbe basis

a] b

'

w, v.

Recall

~

As the term suggests, sirrülar matrices have many properties in common: In terms of hnear tran Formations, this mean that A is similar to B if 8 is tbe matrix of T (x) = Ax with re pect to some ba is ß. Note that a matrix by definiti on is diagonaLizable if it i similar to a diagonal matri x. The verification of tbe following fact is left a Exercise 20: • An n x

11

Suppo e A is sirnilar to B. Then

a. !A ().. ) = f n ()...). b. A and B have the same eigenvalues, with the same algebraic and geometric multiplicities.

matrix A is simi lar to itself.

• If A is sirnilar to B , then B is irnilar to A. • l f A i sirnilar to B , and B i sirnilar to C, then A i

EXAMPLE 3 ...... Find all matrices

Fact 7.2.6

c. det(A)

im.il ar to C.

= det( B) .

d. tr(A) = tr(B).

B similar to !".

e. rank (A) = rank(B).

Solution If B i similar to !" is 111 itself.

l 11 ,

then B =

EXAMPLE 4 ...... Are the matrices A = [ ~

s- 1 J"S =

-n

! 11 • Therefore, the only marrix . imi lar to

and B = [

~

=~ ~~ ]

fa(A) = det(Al11

simi lar?

[2 0]

. Because A and B are both sirnil ar to D = 0 3 are sirnilar to each other.

B ) = det (Al" -

= (det S) - det(A/11

Find the eigenvalues of A and B . It turns out that both matrice have the eigenvalues 2 and 3. This means that both A and B are diagonalizable, wi th diagonal

.

-

1

Solution

matnx D =

We will demonstrate clai m a. The verification of the other claims i left to the reader (Exercises 27, 28, 61). For all scalars A,

[2 0] 0

3

, they <11111

-

s - I AS)

= det (S- 1()...111

-

A)S)

A) det(S) = det(Ain - A) =JA().) .



Consider a nondiagonalizable matri x A. l s A irnilar to some matrix B " in simple form "? ln thi s introductory text, we will not addres this issue in full aenerality. If you are interested, read the chapter on ' Jordan Normal Form " f'n an advanced linear algebra text (for example, Section 6.2 in the text Linear Algebra by Friedberg, In el, and Spence; Prentice Hall). . . . We conclude thi s section with three examples concerrung nondiagonahzable 2 x 2 matrices.

394 •

Sec. 7.2 Diagonalization and Similarity •

Chap. 7 Coordinate Systems

395

Figure 1

EXAMPLE 6 .... Consider a real 2 x 2 matrix A (other tban

/2) suchthat / A('A) = ('A -

2

1)

. Show

Figure 2

that A represents a shear.

Solution

Solution Cboose a nonzero vector ü in the one-dimensional eigenspace E 1 of A and a vector that is not contained in 1• By Definjtion 2.2.4, we have to how tbat ÜJ is a scalar multiple of ü. See Figure l. Consider tbe matrix B of tbe transformation T (x) = A-t with respect to the

w

E

Aw-

basis ü, ÜJ. Since A ü = ü, the first col umn of B is [

~}.

Wehave !A ('A) = ),2 -2}..+ 1 = (A.-1) 2, so that A represents a shear, by Example 6. By definüion of a shear, x(O), x(l), x(2), ... will be equally spaced points along a straight

li~e p~allel to E1•

= Axo = [ ~]

We compute .X ( I )

sketched 1n F1gure 2. Note that x(t) = io + t(x(l ) In our example, we bave

xo)

= xo

+I (Axo

and find the trajectory

- xo).

B=[~ ~ ] Since fA(A.) = fB(A.), we have b = 1, so that

EXAMPLE 8 .... Consider the dynamical system

B=[~ ~] ·

x(t

The second column of B teils us that

+ 1)

= Ax(t)

with

xo

= [~] ,

~* ~ ].Interpret t~e linear transformatjon cally, and find a clo ed form ula for x 1). where A = [

or

Aw - w=

aü,

Solution

as claimed. Note tbat we can write the tran formation Tin even imp ler form: the matrix of

T witb respect to the basis Aw- w, wis [ ~ ~].

<111

wbere A = [

=!

~

l

We have

JA

A.

= A.2 -).. + ~

= ('A- ~) 2 ; the eigenvalues of A are >.. 1 = .A2

Therefore, the matrix M = 2A = [

~t ; ]

= ~­

has the eigenvalue J with algebraic

multiplicity 2. By Example 6, the matrix M represents a hear. Thu the matrix A = represents a shear-dilation , that i , a shear followed by a dilation by a

1M

EXAMPLE 7 .... Consider the dynarrucal ystem

x 1 + J) =

T (x) = Ax geometri-

A-t t)

with

x0 = [ ~]

Find a closed formula for x(t) and sketch the trajecrory.



factor of We can find M 1

x

0

as in Example 7:

1Mxo=xo+t(Mxo - xo =

[

0] [2] [ 1 +t

1 =

l 21 + t] ·

396 •

Chap. 7 Coordinate Systems

7.

10.

Figure 3

0

[~

0 I

0

-n

8.

[I 2 3] 0 0

2 0

3 3

[: l]

II. [

~ -3

Then _

I _

x(t) = A xo =

(

1)

2

I

I _

M xo =

(

1)

2

I

[

I

2t ]

-~ ~]

14. Diagonalize Lhe matrix A = [

=:

+t

s- 1 A S is of the form [ ~ ~ .

16. Consider the matrix A = [ ~

-~

See Figure 3. Let us summarize our observation in Examples 6 and 7: consider an invertible real 2 x 2 matrix A that is nondiagonalizable over C. Then ! A(A.) = (Ä - .>. d for some nonzero A.Q, and A represents a shear-d.i lation , that i , a shear followed by a dilation by a factor of .>.. 0 . . . Note that we now un~erstand the dynarnical ystem

.r. cr +

an inverüble matrix S such that

JA (A.)

GOALS

Use the concept of a diagonalizable matrix and the idea of similarity of matrices. Analyze the dynarnical y tem x (t + 1) = Ax(t) for any reaJ 2 x 2 matrix A and sketch a pha e portrait. Decide which of the matrices A in Exercises I to 12 are diagonalizable (over JR). lf possible, find an invertible S and a diagonal D uch that s- 1AS = D. Do not use technology.

4•

0

l~

0 0

l I

j

0

l

over C. over C. Find an in vet1ible mat.rix S uch tbat

Verify that A i a hear mat:rix , and find

s- JAS =

[

6 ~ J.

-n.

ß i di agonal , then ß is an eigeobas is forT .

18. Let A = [

~

Find a clo ed formula for A 1 (entry by enrry). lf A1

what can you infinity?

[2 -1]

J.

12.

I 1

17. Show that if the matrix of a Linear tran formalion T with respect to a basi

= A. 2 is left as Exercise 56).

E X E R C I S ES

[~ ~]

-~] -5

[~ ~] 1 1 0

I)= Ax(t ) ,

for all real 2 x 2 matrices A (the case when

1.

~]

1

-~ ] .

l

9.

- 1 3 9

13. Di agonali ze the matri x A = [

15. Consider the matrix A = [ _ ;

1

39 7

Sec. 7.2 Diago naliza ti o n and Simil arity •

5] 5• [--] 2 5

3.

[ ~ =~]

6

u=i ~]

.

19. Let A = [

[a (t )

_

-

c(t )

b(t ) ] d (t )

·

ay about the proportion a t ) : b (t ): c t ) : d (t ) as

-~ ~].

Find a closed formula for N

1

goe

to

entry by entry .

20. Justify the following facts : lf A B , and C are n x n matrice

then

• A is im.ilar to itself.

• lf A is sim.ilar to B , then B is similar to A.

• lf A i

imilar to B , and B i

im.ilar to C, then A i

21. Are the matrice [

~ ~]

22. Are the matrice [

~ ~] and [ =~ ~] sim.ilar?

and [

~ ~]

im.ilar to C .

im.ilar?

23. Jf two matrices A and B both represent a hear in JR2 to A?

B nece

arily imilar

398 •

Chap. 7 Coordinate Systems

Sec. 7.2 Diagonalization a nd Similarity •

24. In Exa mple 4 we found that the matrices A = [ 25. 26. 27.

28.

~ -~ ] a nd 8 = [ =~ 1 ~ ]

are imilar. Find an invertibl e matri x S such that 8 = s- l A S. Consider two 2 x2 matrices A and 8 with det(A) = det(B) and tr (A) = t.r(8). Are A and B nece sarily similar? Consider two 2 x 2 matrice A and 8 with d t(A) = det(B = 7 and tr(A) = tr(B) = 7. Are A and B neces ari ly simil ar? · A an d B = s- 1 AS . Show that i f ."K is in the kerne! a. Con ider tb e matnces of 8 then S.r i in the kerne! of A. b. Shm tbat rank (A) = rank(8 ). Con ider the matrice A and 8 = s- 1 AS. Suppo e .r i an eigenvector of 8 , with a . ociated eigenvalue A.. Show that Si i aJ1 eigenvector of A wi.th the same eigenvalue. Campare the geometric multiplicities of A. a. an eigenval ue of A and B.

For the matrices A in Exerci e 29 to 36, find clo ed formula for the co mponent of the dynamical system .r (l

+ 1) = A.~(t)

wirb

_r(O)

= [~] .

Sketch the trajectory.

~

i]

30. A = [

32. A =

[~

~]

33. A =

35. A =

[i -i]

29. A = [

6 -i]

[i -6] -~ !]

43. If A is diagonalizable, i A 2 diagonali zable as well? 44. If A is diagonalizable and r is an arbi trary positive integer, is A' diagonali zab le as weil? 45. lf A i diagonalizable and invertible, i A - I diagonali zable as well? 46. lf A is in vertible, is A necessarily di agonalizable over C? Conver ely if A is diagonalizable, is A neces arily invertible? 47. Jf A and 8 are di agonalizable, is A

34. A = [ -_ l

4

~]

36. A = [

+B

necessmily diagonalizable?

48. ff A and 8 are diagonaJizable, is AB necessari ly di agonalizable? 2

49. lf A i diagonalizable, i A itself necessaril y di agonalizable? 50. Tf A and B are n x n matrice and A i in vertibl e, how th at AB to BA.

imiJar

51. Give an exampl e of two (nonin vertible) 2 x 2 matrice A and B uch that A B is not imilar to BA . 52. a. Consider two matrices A and 8 , where A i in vertible. Show that the matrices AB and BA have the same characteristic polynornial (and therefore the same eigenvalues with the ame algebraic multiplicities). Hillt : Exerci e 50 is helpfu l. b. Now co n ider two n x n matlice A and B which may or may not be invertible. Show tbat A 8 and BA have the same characteristic polynorniaL and therefore the same eigenval ues. H int: For a fixed scalar A., consider the function

n

31. A = [ ;

399

j(x)

= det (Ain -

(A -

X

ln) 8) - det (Hn- B (A - x l n)).

Show that f x) is a polynornial in x. What can you say about it degree? Explain why f(x) = 0 when x i not an eigenvalue of A. Conc lude tbat f(x) = 0 for all x; in partic ular, /(0) = 0. 53. Find clo ed formulas for the entries of the hear matrix

37. Are the matli ces A and 8 below similar?

h [~

l 0 0 0

Hint : Consider A2 and 8 2 .

0 0 0 0

n

s~ [~

I 0 0 0

0 l 0 0

~l

38. Tru e or false? If two n x n matrice A and 8 have the ame eigenvalue , with the ame algebraic and geometric multipliciti e , then A and B are imilar. 39. If A and B are invertible, and A is simiJar to B , is A- 1 imi lar to B- 1 ? 40. If A i similar to 8 , and A is invertible, is B necessarily in vertible? . 41. If Ai s similar to 8 , i AT necessaril y sirni lar t.o B T? 42. lf A is diagonalizable, is AT similar to A?

54. Show that

55. Consider a real 2 x 2 matri x A (other than / 2 ) uch that tr(A) = 2 and det (A) = l. 1 tbe linear t.ran form ation T(x) = Ai nece aril y a he ar . Explain.

400 •

Chap. 7

oordinat

Sec. 7.3 Symmetrie Matrices •

ystems

56. Let A be a 2 similar to [

2

2 matrix \ ith chara teri'tic polynorniaJ A . Show that A

~ ~

l

2

What can you ay abou r A ?

57. Show that any ~ x 2 matrix i si milar to it transpose. 58. ShO\ that any complex 3 x 3 matrix i imilar to an upper triangular 3 x 3 matri ·. 59. Con ider two real n x 11 matrices A and B whi h are "si milar over C": that i the re i a compl e invertib le 11 11 matTix S uch that B = - I AS. Show that A and B are in fact '·simil ar over ~· · : that i , there is a real R such that B = R - 1A R. Hints: Write S = S1 + iS2, where S1 and S2 are real. Con ider the function f(x) = det(S 1 +.rS2 ), where x is a compl ex variable . Show that f(x) i a nonzero poly nornial. Conclude that rhere is a rea l number x 0 uch that f(x o) f= 0. Show that R = S1 + xoS2 doe the j ob.

60. In this exercise we will show that the geometric multiplicity of an eigenvalue is les than or eq ual to the algebraic multip li c ity . Suppose Ao i an e igenva lue of an n x n matrix A with geometric multiplicity d. a. Explain why there is a basi ß of ~~~ whose firs t d vector are eigenvectors of A, with e igenvalue Ao· b. Let B be the matrix of the linear tran formation T (x = A .~ with respect to ß . What do the fir t d columns of B Iook like? c. Explain why the characteri tic polynomi al of B i of the form

!s (),)

= (A - Ao)d g (A) ,

for some polynomial g()..). Conclude that the algebraic multipli ity of A.o as an eigenvalue of B (and A) i at lea t d.

61. Con ider two sirnilar matrices A and B . Show that a. A and B have the . ame eigenvalues, with the same algebrai c multipliciti e . b. det(A) = det (B). c. tr(A) = tr(B). 62. We say that two n x n matrices are simulraneously diagonalizable if there is an n x n matrix S such that s- 1 AS and s- 1B S are both diagonal. a. Are the matrices

0 0] 1 0 0 I

and

simultaneously diagonalizable? Explain. b. Show that if A and B are simultaneously diagonalizable then AB = BA . c. Give an example of two n x n matrice such that AB = BA , but A and B are not simul taneously diagonalizable. d. Let D be a diagonal 11 x n matri x with 11 di stinct entries on the diagonal. Find all n x 11 matrices B which commute with D.

e. Show tbat if A B = BA and A has n distinct eigenvalue are simultaneously diagonalizable. Hint: Part d is usefu l.

401

then A and B

63. Consider a diagonalizable n x n matrix A with m di stinct eigenval ues A 1, . . . , A111 • Show that (A - Alln)(A- A.2/n) ... (A - A.111 / 11 ) = 0.

64. Consider a diagonalizable n x n matrix A witb characteristic polynorni al

!A (A)

= An + a"_l),n-l + · · · + a1A. + ao.

Show that

65. Consider a real 2 x 2 matrix A with fA(A) with Exercise 64.

=

(A- 1) 2 . Find fA(A). Campare

66. Is tbere a 3 x 3 matrix A with all of the following properties? • All eigenvaJues of A are integer . • A is not diagonalizable over C.

• det(A TA) = 36. Gi ve an exampJe or show that none can exist.

67. True orfalse? lf 2 + 3i is an eigenvalue of a real 3 x 3 matrix A, then A i diagonalizable over C. 68. For real 2 x 2 matrices A, give a complete proof of Fact 6.5.2.

SYMMETRIC MATRICES In this section we will work over R except for a brief digression into C, in tbe discussion of Fact 7 .3.3. Our work in the last five sections dealt with tbe following central question: When is a given square matrix A diagonalizable; tbat is, when is there an eigenbasis for A? In geometry, we prefer to work with orthonormal bases, wbich raise the question:

For wbich matrices is tbere an orthonormal eigenbasis? Or, equivalently for which matrices A is there an orthogonal matrix S uch that s- 1AS = ST AS is diagonal?

402 •

Chap. 7 Coordinate Systems

Sec. 7.3 Symmetrie Matrices •

403

(Recall that s- 1 = sT for orthogonal matrices, by Fact 4.3. 7.) We say that A is orrhogonally diagonalizable if there is an orthogonal S uch that s- l A S = ST AS is diagonal. Then the question is:

Whicb matrice are ort11ogonally di agonalizable? Simple exam ples of ortllogonally diagonalizable matrice are diagonal matrices (we can et S = !") and tlle matrices of ortllogonal projections and reflection (exercise).

EXAMPLE 1 ~ If A is orthogonally diagonalizable, what is tlle relationship between AT and A?

Figure 1

Solution Note that tl1e two eigenspaces, E3 and E 8 , are perpendicular (this is no coincidence, as we wi ll see in Fact 7.3.2). Therefore, we can find an ortbonormal eigenbasis simply by dividing tbe given eigenvectors by their lengtbs:

Wehave

s- 1AS = D

or

A

= SDS- 1 = S DST.

- .J51 [-12], - .J51[1]

for an orthogonal S and a diagonal D. Then AT = (S DST)T

ul=

= SDTST = SDST = A.

V2

h[i· ~,] ~ 5s[-~ n

AT= A .

Surprisingly, the converse is true as weil:

then s- 1A s will be diagonal, namely, s- l A s =

Spectral theorem

AT= A).

We will prove tllls theorem Iater in tllis section, based on two preliminary results, Facts 7 .3.2 and 7.3.3 . First we will illustrate the spectral theorem witll an example.

2 ~ For tlle symmetric matrix A = [ ~

~

l

;

J. find an orthogonal s suchthat s- AS is

Consider a symmetric matrix A . lf ii1 and ii2 are eigenvectors of A, witl1 distinct eigenvalues, )q and A. 2, then ii1 · iiz = 0, that is, ii2 is orthogonal to ii1.

Fact 7.3.2

Proof We compute the product in two different ways :

1

- ) = /\.2 ' (- ) = v-T1 (A.2v2 v1 · u1 vi Av2 = üf ATÜ2 = (AÜJ)TÜ2 = ()..1Ü 1)Tv2 = A.1(Ü 1 · Ü2) T Avz v- 1

diagonal .

Comparing the results, we find

Solution

A. 1Cü 1 · vz) = A2Cii1 · ii_)

We will first find an eigenbasis. The eigenvalues of A are 3 and 8, witll corresponding eigenvectors [ _

[~

The key ob ervation we made in Example 2 generalizes as follows:

A matrix A is orthogonally diagonalizable (that is, there is an orthogonal S such that s- 1 AS = srAS is diagonal) if and only if A is symmetric (that is,

EXAMPLE

·

U we define the orthogonal matrix

We find that

Fact 7.3.1

2

=

iJ

and [;]. respectively. See Figure 1.

or

404 •

Sec. 7.3 Symmetri Matrices •

Chap. 7 CoordinateS stems

405

Since tbe fir t factor in Lhi product, A., - A.2 . is nonzero the ·e ond factor u1 . v2, mu t be zero a claimed. • Fact 7.3.2 tel! us that the eig nspace of a symmetri mati-ix are perpendicul ar to one another. Here i another iUu tration of thi property: EXAMPLE 3 .... For the ymmetric matri ·

find an orthogonal S such that

s-' AS i

diagonal.

Figure 3

Solution

In Figu re 3, the vectors ii 1, ii2 form an orthonormal bas i of E0 , and ii3 i a unit vector in E3. Then ii,, ii 2, ii3 i. an orthonormal eigenbasi fo r A. We can Iet S = [ii, ii2 ii3 ] to diagonaJize A orthogonall y. If we apply Gram-Schm.idt 1 to the vector

The eigenvalues are 0 and 3, with

Note tbat the two eigen paces are indeed perpendicul ar to one another, in accordance with Fact 7.3.2. See Figure 2. We can construct an orthononnal eigen ba i for A by picking an orthonorma l ba i of each eigen pace (using Gram-Schmidt in the case of E 0 ) . See Figure 3.

panning E0 , we find and

3

f19ure 2 The eigenspoces fo ond f ore orthogonal complements.

[I] E3 = span

~

The computations are left as an exercise. For E 3 we get

Therefore, the orthogon al matri x

- 1/../2 - l j -/6 [

1Allernati ve ly,

Ü2 =

ii

X ii1 .

l j ./2

I/J3 l/J3 ] 2! -16 t;J3

-l j -/6

0

wc could fi nd a unit ve tor ii1 in Eo and

:1

un it vec10r ii3 in EJ, :1nd then set

406 •

Sec. 7.3 Symmetrie Matrices •

Chap. 7 Coordinate Systems

40 7

For a I x 1 matrix A, we can Iet S = [1). Now ass ume the claim is true for n - 1; we show that it holds for n . Pick a :eal eigenvalue A. of A (this is possible by Fact 7 .3.3) and choose an eigenvector v1 of length 1 for A.. We can find an orthonormal basis ii 1, ii 2 , . . • , iin of JRil (think about how you could construct such a basis). Form the orthogonal matrix

diagonalizes the matrix A:

s- 1 AS = 00 00 0] 0 [0 0 3

By Fact 7 .3.2, if a symmetric matrix is diagonalizable, tben it is orthogonally diagonalizable. We till have to how that sy mmetric matrices are diagonalizable in tbe first place (over IR). The key point is the following observation: VII

Fact 7.3.3 Proof

I

The eigenvalues of a symmetric matrix A are real.

(This proof may be skipped in a first reading of this text without harming your understandlog of the concepts.) Consider two complex conjugate eigenvalues p±iq of A with corresponding eigenvectors ii±i (compare with Exercise 6.5.31 b). We wish to show that these eigenvalues are in fact real, that is, q = 0. Note first that

and compute P - 1A P .

w

Cii + iw) 7 cii- iw) = lliill

2

+ !lwf

Tbe fir t column of p- l AP is A.e 1 (why?). Also note tbat p - 1 AP = p T AP is symmetric: (P 7 AP) 7 = P 7 A 7 P = P 7 AP, because Ais symmetric. Combining these two statements we conclude that p - 1AP is of the form p - 'AP - [ A.

(verify this). Now we compute the product

-

Cii + iwl ACii - iw) Cii + iwl A(ii- iw )

Comparing the results, we find that p + iq

=

'

Q- 1 BQ = D

+ iw) 7 (ii -

iw)

is a diagonal (n - 1) x (n - 1) matrix. Now introduce the orthogonal n x n matrix R=

p- iq, so that q = 0, as claimed. •

The proof above is not very enlightening. A more transparent proof would follow if we were to define the dot product for complex vectors, but to do so would Iead us too far afield. We are now ready to prove Fact 7.3.1: symmetric matrices are orthogonally diagonalizable. Even though this is not logically necessary, Iet us first examine the case of a symmetric n x n matrix A with n distinct eigenvalues (recall that this is the case for " most" matrices). By Fact 7.3.3, the n distinct eigenvalues are real. For each eigenvalue, we can choose an eigenvector of length 1. By Fact 7.3 .2, these eigenvectors will form an orthonormal eigenbasis, that is, the matrix A will be orthogonally diagonalizable, as claimed. Proof

0]

B

where B is a symmetric (n- 1) x n - 1) matrix. By induction, B is orthogonally diagonalizable; that is, there is an orthogonal (n - 1) x (n - 1) matrix Q such that

in two different ways:

Cii + iw) 7 CP - iq )(ii - iw )
0

(of Fact 7.3.1): This proof is somewhat technical; it may be sk.ipped in a fi rst reading of this text without harm. · We prove by induction on n that a synunetric n x n matrix A is orthogonally diagonalizable.

[~ ~l

Then

is diagonal. Combining equations (1) and (TI) above, we find that

R- 1 P- 1 APR=[~ ~]

(lll)

is diagonaL Consider the orthogonal matrix S = PR (recall Fact 4.3.4a: the 1 1 1 product of orthogonal matrice is orthogonal). Note that s- 1 = ( P R)- = R- p - . Therefore equation (ill) can be written

s- 1 AS = proving our claim.

[

~ ~

l



408 •

hap . 7 Coord inate

Sec. 7.3 Symmetrie Matrices •

tems

409

The method outlined in the proof of Fact 7.3 .1 i not a en ible way to fi nd the matrix S in a numerical exampJe. Ratl1er, we can proceed as in Example 3:

Algorithrn 7.3.4

Orthogonal diagonalization of a symmetric m atri x A a. Fi nd the eigenva lue of A. and find a basi of each eigenspace. b. Usi ng Gram-Schmid t, find an orthonormal ba i of each eigenspace. c. Form an orthonormal eigenba is iJ, . Ü2, ... , ii" for A by combining the vectors you fou nd in part b, and Iet

Un it circle

Figure 4

EXER CI S ES GOALS Find orthonormal eigenba e fo r symmetric matrices. Appl y the pectra l theorem. 5 i. orthogonal (by Fact 7.3 .2) and

s-' AS will be diagonal (by Fact 7 .2 .1).

For each of the matrices in Exerci es 1 to 6, find an orthonormal eigenba i . Do not use technology.

We conclude thi sectio n with an example of a geometric nature:

[~

6.

[~ ~

EXAMPLE 4 ..... Con ider an in vertible symmetric 2 x 2 matrix A. Show that the linear Iransformatio n T (.i) = A.t maps the unit circle into an ellipse, and find the lengths of the emimajor and the semin:tinor axe of this ellipse in term of the eigenvalues of A . Compare with Exerci e 2.2.50.

n

3.

j]

For each of the matrice. A in tbe Exercise 7 to 11 fi nd an orthogonal matrix S and a diagonal matrix D such that s-' AS = D . Do not use technology.

Solution The spectral theorem teils us that there is an onhonormal eigenbasi.s Ü1, ii2 for T , with associated real eigenvalues ), 1 and A. 2 . Suppose that IA.II ::::. IA.2 1. These eigenvalues will be nonzero, since A is invertible. Tbe unit circle in ~2 consists of all vectors of the form

ii = cos(t)v 1 + sin(t)ii2 . The image of the unit circle consists of the vectors

7. A = [

~ ~] I

10. A =

-2

-2 4 [ 2 -4

12. Let L: ~ ~ ~ 3 be a refl ecti on in the line spanned by

+ sin (t) T(v2 ) =cos(t)A. 1 u1 +sin(t)A.2 u2 ,

T(v) = cos(t) T(v,)

an ellipse whose semimajor ax is A. 1 v1 has the length IIA. 1 ü1 11 = IA. 1 1, while the length of the semi mj nor axjs is IIA. 2 ii2 11 = IA.2 1. See Figure 4. ln the example illustrated in Figure 4 the eigenva lue A. 1 is positive and A.2 is ~ negati ve.

a. Find an orthonormaJ eigenba i. ß fo r L . b. Find the matri x B of L with re. pect to ß. c. Find the matri x A of L with re pect to the tandard ba i of ~ 3 .

410 •

Sec. 7.3 Symmetrie Matrices •

Chap. 7 Coord.inate S tems

transformaüon . k = h r rhe . 3linear ? 13. Cons1"der a ymrne tr"1c 3 X 3 l11atrix . A with T (x) = A.r necessarily the reftectJOn m a subspace of ~ · In Example 3 in this ection we diagonalized th matnx 14. 1 1

by means of an orthogonal matri x S. Use t~i re ult to ~agonal ize the fo l-

:.w;[n~

TTJ' orthog:~a[JI-t~r and: D]

1D

eac:. ca[r ~

!]

2 2 2 I 1 -2 2 2 0 lf A is invertible and orthogonally diagonalizable, i At orthogonally diag15. onalizable as well ? 16. a. Find tbe eigenvalue of rhe matrix

A=

1

1

J 1 1

1

1

1 1 1

1 1 1

1

J 1 1 1

1 1

wirh their multip1icities (note that tbe algebraic mu1tiplicity agrees with tbe geometric multiplicity; why?). Hint: What i the kerne! of A? b. Find tbe eigenvalues of tbe matrix J

1 3

1 1

l

1

3

1

1 1

3

B=

1

J J 1 3

19. Consider a linear transfonnation L from IR" to IR"'. Show that there i an orth onorm al basis ü1, ü2 , • .. , vn of IR" suchthat the vector L(ii 1) , L (v 2 ), • • • • L (Ü") are orthogonal (note: some of the vectors L (v;) may be zero). Hint: Consider an orthonormal eigenba is ü1, ii2 , . . . , vn for the sy mmetric matrix A TA. 20. Consider a linear transfonnation T fro m IR" to JRm, where n doe not exceed m. Show that there is an orthononnal ba is 1, •• • , ii" of IR" and an orthonorrnal Ü!m of IR"' such that T (Ü;) is a scaJar multiple of for i = 1. ba is .. . n . Hint: Exercise 19 is helpful.

v

w, , ... ,

o

2 0 k 0

l k

A

=

~

0 2 0 k

~]

where k i a constant. a. Find a value of k such tbat the mattix A i diagonalizable. b. Find a value of k such that A is not diagonalizable. 23. lf an n x n matrix A is both syrrunetric and orthogonal, what can you say about the eigenvalues of A? \\'bat about tbe eigenspaces? Interpret the linear transformation T (x) = Ax geometrically in the cases rz = 2 and n = 3. 24. Con ider the matrix

0

0

0 0

1

A=

1 3

18. Consider sorne unit vectors v1 , . • • , V11 in IR" such that the angle between Ü; and üj is 60° for all i =f. j . Find the n-volume of the n-parallelepiped span~ed by ü1, • •• , V11 • Hint: Let A = ( v1 v"] and think about the rnatrix ArA and its detenninant. Exercise 17 is usefu l.

w; ,

21. Consider a symmetric 3 x 3 matrix A with eigenvalues l , 2, 3. How many different orthogonal matri ces S are there such that s- 1 AS is diagonal? 22. Con ider the matrix

1 1

with their multiplicities. Do not use technology . c. Use your result in part b to find det(B). 17. Use the approach of Exercise 16 to find the detenni nant of the n x n matrix B that has p's on the diagonal and q 's elsewhere.

411

[

0 I

1 0

Find an orthonormal eigenba i for A . 25. Consider the matrix

00 0 0 I

00 0 I 0

00 1 0 0

01 0 0 0

01] 0 . 0 0

Find an orthogonal 5 x 5 matrix S uch that s- tAS is diagonal. 26. Let }11 be the n x n matri x with aJI ones on the '·other diagonal" and zero elsewhere. (In Exercises 24 and 25 we tudied 14 and J . respectively.) Find the eigenvalues of 111 , with their multiplicities.

412 •

Sec. 7.4 Quadra tic Forms •

Chap. 7 Coordinate System

b. Co_n ider a c? mp lex n x n matrix A that has zero a it only eigenva lue (wtth algebraJc mul tipli city n). Use Exercise 37 to how that A is nilpotent 39. Let us first introdu ce two notati ons. For a complex n x n matri x A , Iet lAI be the matrix whose ij th entry is lai;" IFor two rea l n x n matrices A and B we wri te A < B if a-.1 < b·1. fo r all entri e . Show that ' - ' a. lA B I S IAII BI, for all corn plex n x n matrices A and B , and b. lA' I S lAI' , fo r all co mpl ex n x n matrices A and allpos iti ve integer t_ 40. Let U ::::: 0 be a real upper triangul ar n x n matri x with zero on the di agon al. Show that

27. Di agonati ze the n x n matri x I

0

0 0 0

0

0

0 0 l 0

(all one along botb di agonal

1 0 0

I

0 0

0

1 0 0

and zeros el ew here).

28. Diagonalize the 13 x 13 matrix 0

0

0

0

Un + u I .::: r"Un + u + U 2 + .. . + un- I)

1

0 0 0

0 ]

0 0 0

0

for all pos iti ve integers t. See Exercises 38 and 39. 41. Let R be a complex upper tri angular n x n matrix with Show that

1

29. 30.

31. 32.

33. 34. 35.

413

Ir;; 1< 1 for i

= 1. . .. , n .

lim R' =0

(all ones in the la t row and the last column, and zeros e l ewhere). Con ider a symmetric matri x A. If the vector ü i in the image of A and is in the kernel of A, is ü necessarily orthogonal to w? Ju li fy your answer. Consider an orthogonal matri x R wbose first column is ü. Form the synunetric matrix A = vür. Find an orthogon al matiix S and a di agonal matrix D such that s-t AS = D. Describe S in terms of R . True or false? If A is a symmetric matrix, then ran.k(A) = rank (A 2 ) . Consider the n x 11 matrix with all ones on the main di agonal and all q 's elsewhere. For which choice of q is this matri x in vertibl e? Hint: Exercise 17 i helpful. For whi ch angle(s) a cao you find three dislinct unit vectors in ~ 2 such that the angle between any two of tbem is a ? Draw a ketch. For which angle(s) a can you find four distinct unit vectors in ~ 3 such that the angle between any two of them is a ? Draw a sketch . Consider 11 + 1 distinct unit vectors in ~~~ such that the angle between any two of them is a. Find a .

w

36. Consider a syrnmetric 11 x n matrix A with .4 2 = A. Ts the Lineartransformation T (.r = A.i= necessaril y the orthogonal projection onto a subspace of IR"? 37. We ay that an 11 x n matrix A is triangulizable if A i simil ar to an upper triangular n x n matrix B. a. Gi ve an example of a matrix with real entries that i not trianguli zable over ~b. Show that any n x n matrix with complex entries is tri anoulizable over C. Hint: Gi ve a proof by induction analogaus to the proof ~f Fact 7.3.1. 38. a . Consider a complex upper triangular n x n matrix U with zeros on the diagonal. Show that U is nilpotent, that is, un = 0 . (Compare with Exercises 56 and 57 of Section 3.3.)

r~

(meaning that the modulu s of all e ntri es of R' approaches zero . Hint : We ca n write IRI S )..(in+ U ), fo r ome po iti ve real numbe r ).. < 1 and an upper triangul ar matr ix U ::::: 0 with zero on the diagonal. Exerci es 39 and 40 are helpful. 42. a . Let A be a complex n x n matrix such that 1)..1 < I for all eigenvalues ), of A. Show that lim A 1 = 0 / -4

(meaning that the modu lu of all entries of A ' approache zero) . b. Prove Fact 6.5 .2.

QUADRATIC FORMS In thi s ection we will present an important apptication of the pectral theore m (Fact 7 .3. 1). In a multi vari abl e calculu text, we found the foll o\ ing proble m:

EXAMPLE 1 .... Consider the fun ction q (x1. x2) = 8x~ - 4.x lx2 + 5x= .

Note that q (0 0 = 0. Determine whether this fun ction ha a tri ct m1mmum at (.x 1,.x2 = (0 , 0), that is, whether q (x1,x2) > 0, for all (.x 1, x 2) =1- (0 , 0). There are a number of way to do thi. proble rn ome of which you may ha ve seen in previous courses. Here we pre ent an approach whi ch u e matrix techniques. We first develop some theory. and the n do the examp le.

414 •

Chap. 7 Coordinate Systems

Sec. 7.4 Quadratic Forms •

Note tbat we can write

I

415

/'

We "split" the contribution -4x 1x 2 equally among tb.e two components. More succinctly, we can write

qC-"i) = ;. Ax ,

where

A = [

-~

-2] 5 ,

or

Figure 1 q (x-) = x-r Ax- .

Let us present these ideas in greater generality:

Tbe matrix A is symmetric by construction. By tbe spectral tbeorem (Fact 7 .3. 1), there is an ortbonormal eigenbasis Ü1, 2 for A . We find

v

- = .j51 [- 2]}

Vt

- ./51[1] v2

2 ,

=

witb associated eigenva1ues ). 1 = 9 and )..2 = 4 (verify this). If we write = c 1 ü1 + c2 ü2 , we can express the value of the function as follows:

x

Definition 7.4.1

Quadratic forms A function q (XI. x 2 , . . . , Xn) from ~n to ~ is called a quadratic form if it is a linear cornbination of functions of tbe form x;Xj (where i and j may be equal). A quadratic form can be written as q (x ) = ; . Ax =

;r Ai ,

for a syrnmetric n x n matrix A .

v

(Recall that 1 · ü, = I , ü, · Ü2 = 0, and ~ · ~ = 1, si nce ü1, ü2 is an ortbonormal basis of ~2 .) The formula q (x) = 9cf +4ci shows that q (x) > 0 fo r all nonzero x, because at least one of the terms 9cf and 4ci is positive. Our work above shows that tbe c 1-c 2 coordinate systern defi ned by an orthonormal eigenbasis for A is "well adjusted" to the function q. The formul a

EXAMPLE 2 ~ Consider the function q (X J, X2, X3) = 9.xf

+ 7xi + 3xj

- 2X JX2 + 4XlX3- 6x2X3.

Find a symmetric matrix A such that q (x) = x · Ai for all i in R 3 .

9ci + 4c~ Solution is easier to work with than tbe original formula

As pointed out in Exan1ple l , we Iet a;; = (coefficient of xf), aij = aj; = tccoeffi cient of x ;.xj).

because no term involves c 1c2 : q (x,, x2) = 8xf - 4x ,x2

+ Sxi

= 9cf + 4c~ The two coordinate systerns are sbown in Figure 1.

Therefore,

if i =I= j.

416 •

Chap. 7

Sec. 7.4 Quadratic Form •

oord inate Systems

By Fact 7.4.2, tbe definitenes of a ymmetric matri x A i ea y to determine from it e igenval ues:

The ob en ation we made in Example 1 abov can now be generalized as follows:

Fact 7.4.2

Con ider a quadratic form q ,\-) = ,\- . A.\- from IR" to lR. Let ß be an orthonorm al eigeobasis for A , witb a ociated eigenvalue At, . . . ,).." . T hen q( i ) = ), , c j

Fact 7.4.4

+ )..2d + · · · + ).."c~ . pect ro ß .1

A symmetric matrix A is positive definite if and on ly if) all of its eigenvalue are positive. The matrix A is positive semidefinite if (and on ly if) al l of it e igenvalue arepositive or zero. These facts foiJow immediately from the formula -

q (x )

Aoain. note that we have been able to get rid of tbe mixed term : no umma:d i~volve c;c1 , itb i =j:. j) in the fonnula abo e. To j ustify tbe formula rated in Fact 7 .4.2. we can proceed as in Example 1. We leave the detail a an

Fact 7.4.5

Positive definite quadratic forms Consider a quadratic form q (x) = ,\- -Ai, where A is a ymmetric 11 x n matrix. We say that A i positive definite if q (x ) is positive fo r all nonzero x in :IR", and we call A positive semidefinite if q(x ) 2: 0, for aH x in :IR". Negative definite and negative semidefinite ymmetric matrices are de.fined analogously. Finally, we call A indefinite if q takes po. itive as weil as negative value. .

EXAMPLE 3 ..... Consider an

m

x

n matrix A. Show that the functi on q (x ) =

= At C21 + · · · + AnC2 11

(Fact 7.4.2).



The determinant of a positive definite matrix is positive, ince the determinant is the product of the eigenvalues. The converse is not true, however: consider a symmetric 3 x 3 matrix A with one po itive and two negative eigenvalues. Then det(A) is positive, but q (x) = ,"Y · Ai is indefinite. In practice. the fo llowing criterion for positive definiteness is often used (a proof i outlined in Exercise 38):

exercise. When we study a quadratic fo rm q we are often intere ted in fi nding out \ herher q .\- ) > 0 fo r all nonzero .\- as in Example 1). In thi context it i u eful to introduce the following terminology :

Definition 7 .4.3

417

Consider a symmetric n x I! matrix A. F or m = 1, ... , 11 , Iet A (m ) be tbe m x m matrix obtained by omitting all row and columns of A pa t the mtb. These matrices A (ml are called the principal submatrices of A. The matrix A i positive definite if (and only if) det(A (ml ) > 0, for aJI m = 1, ... , 11. A an example, i::onsider the matrix

-1 7 -3

II A~ f is a quadratic

form, find its matrix , and determine its definitene

-n

from Example 2. det(A ' l = det[9] = 9 > 0 9 det(A <2> = det [ - l ] = 62 > 0 - l 7 det(A t 3>) = det A) = 89 > 0

Solution \ e can \ rite q .\-) = Ax ) . (Ai ) = (Axl (A.r) = ; r AT Ai = ; . (AT Ai). This shows that q i a quadratic form, with matrix AT A. This quadratic form i positi e emidefinite, becau e q (.r = II A.r ll 2 2: 0 for a!J vector ; in 2". ote that q (x = 0 if and on ly if i i in tbe kerne! of A. Therefore, the quadratic form i po iti e definite if and onl if ker(A) = {Ö}. ~

We can conclude that A i po itive definite. Alternatively, we could find the ei genva lu e technology, we find that )q ~ 10.7, A2 ~ 7 .1, and

f A and u e Fact 7.4 . . in g ~ J .2, c nfirming ur result.

A.J

+ Princip a l Axes 1The

basic propert ie o f quadrati fonns were fi rst derived by the Dutchman Johan de Witt (16251672) in his Eiementa curvamm linearum. De Witt was one of the leadin g state me n o f his time, guidin g hi country lhrough two war against England . He consolidated his nati on· s com me rc ial and naval power. De Witt met an unfortunate end when he w tom to piece by a pro-En gli h mob. (He . hou ld h, ve staycd with math !).

When we study a fu nction j(.x ,, x _, .. . , Xn) from IR" to IR . w are often intere. ted in the solution of the equation j(Xt,X2, ... ,X11 )

=k

418 •

Chap. 7 Coordinate Systems

Sec. 7.4 Quadratic Forms •

419

Now consider the Ievel curve q (x) = x ·Ai = 1,

where A is an invertible symmetric 2 x 2 matrix. By Fact 7.4.2, we can write this equation as Ä1ct

+ Ä2d

= 1,

where c1 , c2 are the coordinates of .X with respect to an orthonormal eigenbasis for A, and AJ, Ä2 are the associated eigen values. As we discussed above, this curve is an ellipse if both eigenvalues are positive and a hyperbola if one eigenvalue is positive and one negative (what happens when both eigenvalues are negative?).

Figure 2 for a fixed k in ~. called the Ievel sets of f (Ievel cur-ves for n = 2, Level swfaces for n = 3). Here we will t.hink about the Ievel curves of a quadratic form q (x ~ 2 ) of two variables. For simplicity, we focu s on the level curve q (x 1, x 2) = I. Let us first think about tbe case wben there i no mixed term in the formula. We trust that you had at least a brief encounter with those Ievel curves in a previous course. Let us discuss the two major cases:

EXAMPLE 4 .... Sketch the curve

1•

Case 1 • q (x 1, x 2 )

(see Example 1). Solution

In Example 1 we found that we can write this equation as

= ax~ + bxi =

I, where b > a > 0. This curve is an ellipse, as shown in Figure 2. The lengths of the semimajor and the semiminor axes are 1j ..j0. and 1/.Jb, respectively. Thi s ellip e can be parameterized by

9cT where c 1, c2 are the coordinates of

0 ] . [XI] = COS (t ) [ Jjfa] +sm. (t ) [ lj.Jb X 2

Ü

= a x ~ + bx~ =

is a hyperbola, _:,ith x 1 -intercepts [

I, where a is positive and b negative. This

±l~..jO.l

for A

as shown in Figure 3. What is the

slope of the asymptotes, in terms of a and b? Figure 4 Figure 3

= [ -~ -;

J

=

1,

x with respect to the orthonormal eigenbasis

- .j51 [-J2]

U]

Case 2 • q (x1, x2)

+ 4d =

V? =-1

-

.../5

[1] 2

We sketch this ellipse in Figure 4.

420 •

Chap. 7 Coordinate Systems

,. q.q

Sec. 7.4 Quad ratic Forms •

Th c 1- and the c 2 -ax ar called the prin ipal axes of the quadratic fo rm 2 ote that tlle - are the e igen pace of the matrix 1. ) -_ g·,.21 _ 4-r ,.2 ·I·r 2 + 5·r 2·

421

9. Arealsquarematrix A i called skew-symmetric if Ar = -A . a. If A i skew-sy mmetric, is A 2 kew-symmetric as weil ? Or i A 2 ymmetric ?

b. lf A is skew-symmetric, what can you say about the defin iteness of A 2 ? What about the eigenvalues of A 2 ? of the quadratic form .

Definition 7.4.6

c. What ca n you say about the cornplex eigenvalue of a skew- ymmetric matrix ? Which skew-sy mmetric matrices are diagonalizable over ~ ? 10. Consider a quadratic form q(x) = x . Ax on ~~~ a nd a fi xed vector v in iR" . Is the transformation

Principal axes Con ider a quadratic form q x ) = X . Ax, where A i a ymmetric 11 X 11 matrix with n distinct eigenvalue . Then tbe eigen pace of A a.re called the principal axes of q (note that tllese wi ll be one-dimen iona l). Return to the case of a quadratic form of two variab le . We can ummarize our findings as follow :

+ v)- q (.r) -

L (.r) = q(,r

q (v)

linear? lf 0 what is its matrix ? 1

11. If A is an invertible symmetric matri x, what is the relationship betweeo the defi ni tenes of A and A - 1 ? 12. Show that a quadratic form q (x) = ,r -kr of two varia bles is indefinite if (and on ly if) det(A) < 0. Here, A is a symmetric 2 x 2 m a trix .

13. Show that the diagonal e lements of a positive definite matrix A are positive.

Fact 7.4.7

2

Con ider the curve C in iR defined by

Let >.. 1 and >..2 be the eigenvalues of the matrix [

~ J where a and det(A) are both positive.

14. Con ider a 2 x 2 matrix A = [ :

? ' 1. q(x 1, x 2 =ax"j+bx 1x2+cx2=

2

b~2 b~ ]

Without u ing Fact 7.4.5, show that A i positi ve definite. Hint: Show first that c i positive, and thu tr(A) i positive. Then think about the si.gns of the eigenvaJues.

of q.

lf both ), and >..2 are positive, then C is an ellipse. If there i a positive and a negative eigenval ue, tben C is a hyperbo/a. 1

EXERCISES

GOALS Apply the concept of a quadratic fom1 . U e an ortbonormal eigenba i for A to analyze the quadratic fonn q(x) = x · A.t. For each of the quadratic fonns q li ted in Exerci e matrix A suchthat q (x) = x . Ax.

1. q (x1 x2) = 6xr - 7..r Jx2 + 8xi . 2. q(X J X2) = XJX2· 3. q(x J x2 1x3) = 3x~ + 4x~ + 5xj

1 to 3 find a ymmetric

I

I

+ 6x1x3 + 7x2x3.

Detennine the definüeness of the quadrati c forms in Exercises 4 to 7 .

4. 5. 6. 7. 8.

q (xJ x2) = 6xT I

+ 4xlx2 + 3xi.

q(x 11x 2 ) = x~ +4x l x2+xi. q (x J, x2 ) =

2xT + 6x1 x2 + 4xi .

q (x 1 x21 x3) = 3x~ I

+ 4x 1x3.

If A is a symmetric matrix, what can you say about the definiteness of A2 ? When is A 2 positive definite?

Sketch tbe cu rve defined in Exercises 15 to 20. In each case, draw and Iabel the principal axe , Iabel the intercepts of the curve with the principal axe , and give the formula of the curve in the coordinate ystem defined by the principal axe

+ 4xJXz + 3xi = 1. 3x~ + 4XJX2 = J. x? + 4x 1.rz + 4xi = 1.

15. 6x?

16. XJXz =1 .

17.

18.

19.

20.

9xT- 4x lx2 + 6x~ = l . -3xT + 6x1xz + 5xi = I.

21. a. Sketch the following three surfaces:

x? +

4xi

+ 9x j

+

4x 2

-

xf

- xr - 4x~

1

I

9xj

1,

+ 9xj

1.

Which of these are bounded? Which are connected? Label the point close t to and farthe t from the origin (if there are any . b. Consider the surface

xr + 2xi + 3xj +

XJ-'"2

+ 2XJXJ + 3x2X3 =

I.

Whi ch of the three surface in part a doe this surface qualitati vely re emble most? Which point on thi urface are clo est to the origin? Gi ve a rough approximation ; u"e technology.

422 •

Sec. 7.4 Quadratic Forms •

Chap. 7 Coordinate Systems 22. On the surface

- xf + xi - X~ + l ÜX JX3 =

1,

find the two points close t to the origin. 23. Consider an n x n matrix M that is not symmetric, and define the function g(-~) = .~ . Mx from JR" to JR. Is g necessarily a qu adratic form? lf so, give a symmetric matri x A (in terms of M) such that g(.~) = .~ ·

A,t .

32. Show that any positive definite 11 x n matrix A can be written a A = 8 B T , where 8 is an n x n matrix with orthogonal columns. Hint: There is an orthogonal matrix S suchthat s- 1 AS = sr AS = D is a diagonal matrix with positive diagonal entries. Then A = S D ST. Now write D as the square of a diagonal matrix. 33. For the matrix A = [ -~

q(.~)

=; . A x,

where A is a symmetric 11 x n matrix. Find q(eJ ) . Give your answer in tetms of the entries of the matrix A . 25. Consider a quadratic form q(x) = x

q(x)

= ;. Ax ,

where A is a symmetric n x 11 matrix . True o r fa lse? lf there is a nonzero vector ii in IR." such tbat q (ii) = 0 then A is not invertible. 27. True o r false? lf A and B are two distinct symmetric n x n matrices, then the quadratic forms q(x) = Ax and p (."i) = are di stinct as well. 28. True or false? If A is a symmetric n x 11 matrix whose entries are all positive, then the quadratic form

x.

x· Bx

q(x) = x · Ai is positive definite. 29. True or false? If A is a symmetric n x n matrix such that the quadratic form q (i) =

x · Ax

[~ ~ ]=[~ ~][~ ~ l 37. Find the Cholesky factorization (discussed in Exercise 36) for A = [

-~

-;J.

38. A Cholesky fa ctorization of a symmetric matrix A is a factorization of the form A = LLT , wbere L is lower triangular with positive diagonal entries. Show that for a symmetric rz x n matrix A the following are equivalent:

i. A is positive definite. ii. All principal submatrices A (m ) of A arepositive definite (see Fact 7 .4.5). iii. det(A (m)) > 0 for m = 1, ... , n. iv. A has a Cholesky factorization A = L L r. Hints : Show that i implies ii, ii implies iü, iii impl:ies iv, and iv implies i. The hardest step is the implication from iii to iv: arguing by induction on n, you may assume that A
A = [

is positive definite, then all entries of A are positi ve. 30. True or false? If q is an indefinite quadratic form from JR" to JR, then there is a nonzero vector x in lR" such that q (x) = 0. 31. Consider a quadratic fonn q(i) = · Ai, where A is a symmetric n x n matrix with the positive eigenvalues ;. 1 2: ;.2 2: . . . 2: ;." . Let S"- 1 be the set of all unit vectors in lR". Describe the image of s"- 1 under q , in terms of the eigen values of A.

-;] write A = 8 2 as discussed in Exercise 34.

See Example 1. 36. Cholesky factorization for 2 x 2 matrices. Show that any positive definite 2 x 2 matrix A can be written uniquely as A = LLT, where L is a lower triangular 2 x 2 matrix with positive entries on the diagonal. Hint: Solve the equation

· Ai ,

wbere A is a symmetric n x n matrix. Let ii be a unit eigen vector of A , witb associated eigenvalue A.. Find q (ii) . 26. Consider a quadratic form

- ;] write A = B8 T as discussed in Exercise 32.

See Example 1. 34. Show that any positive definite matrix A can be written as A = 8 2 , where B is a positive definite matrix. 35. For the matrix A = [ -~

24. Consider a quadratic form

4 23

A~; >

n [ffr ~] [B~r ~ ]. =

Explain why the scalar t is positive. Therefore we have the Cholesky factorization

x

(continued )

424 •

Ch ap. 7 Coord inate ystems

Sec. 7.5 Singular Values •

Tlli rea onin o al o shows t.hat the C hole ky facto ri zatio n of A is un ique. Alternati vely,b_ ou can use the LDL T factorizatio n of A to show that ii i _ . imp lie iv ( ee E e rci e 4 .3.33). To show that i implies ii consider a no nzero vector x tn IR"' and de fin e

5

425

SINGULAR VALUES We Start with a somewhat technical example.

EXAMPLE 1 .... Consider an m x n matrix A. Tbe n x n matrix

AT A is symmetric; therefore, there is an orthonorrnal eigenbasis ü1, ü2 , . . . , Ün for AT A, by the spectral theorem (Fact 7 .3. 1). Denote the eigenvalue of A T A associated with Ü; by A.;. Wbat can you say about the vectors A Ü1, • . • , A Ü11 ? Th.ink about their lengths and the angles

they enclose. in IR" fi ll in

11 -

111

zeros). Then .\-TA (111

Solution

= YT Ay > 0.

;

Let us compute the dot products A Ü; . A Üj.

39. Find the Cholesky factorizatio n of the matrix

A~H ~~

2n

40. Consider an invertible 11 x 11 matrix A. Wh at is the relatio n hip between the matrix R in the QR factorization of A and the matrix L in the Chole ·ky factorizati on of AT A? 41. Consider the qu adratic form

(A -)T A-T ATA-T . - - AA V;· Vj= V; Vj= U; Vj = V;AjVj = Aj(V; · Uj ) =

{0

Aj

The vectors AÜt , .. . , AÜn are orthogonal , and the length of AÜj is

ifi=j=j ifi=j

jfj.

~

This observation motivates the following definition: Definition 7.5.1

Singular Values The singular values of an m x n matrix A are the square roots of the eigenvalues of the symmetric n x n matrix AT A (listed with their algebraic multi pljcities). It is customary to denote the singular values by at. a~, .. . . a11 , and to Ii st them in decreasing order:

We define

Tbe discriminant D of q is defined as We summarize the observation made in Example 1: D = det [ q

11

Cf2 1

The second derivati ve test teils us that if D and q 11 are both positi ve, then q (x 1, x 2) has a minimum at (0, 0). Justify th.i s fact, using the theory developed in thi s section. 42. For wh.ich choices of the constant p and q i the n x n m atrix

p

B=

Cf

q p

Cf

q

.

[

l]

positive definite? (B has p ' s on the diagonal and q 's elsewhere.) Hint: Exercise 7.3.17 is he lpful. 43. For whi ch angles a can you fi nd a basis of IR" such that the angle between any two vectors in thi s basis is a ?

Fact 7.5.2

Let L (x ) = Ai be a linear transformation from IR" to IRm. Then there is an orthonormal basis ü1, ••• , Ü11 of IR11 such that

a. the vectors

L (Ü1),

•• •,

L (vn ) are orthogonal, and

b. the lengtb of L(Üj) is aj, the jth singular value of A . To construct ü1,

••• ,

Ü11 find an orthonormal eigenbasis for the matrix ArA .

Part a of Fact 7.5.2 is the key statement of thi ection ; make ure you understand the significance of this claim. See Exercise 2.2.47 for a special case. Here are two numerical exa:mples:

EXAMPLE 2 .... Let L(x) =Ai ,

where

A = [

-~ ~J.

426 •

Chap. 7 Coordinate Sy tems

Sec. 7.5 Singular Values •

42 7

a. Find the si ngul ar value of A . b. Find orthonormal vector -, and iJ! in IR 2 such that L (ü,) is orthogona l to L(Üz).

LCi) = Ai

c. Sketch and descri be the image of the unit circle under the transformation L.

~

5

Solution

a. We need to find the eigen alues of the matrix. AT A fi rst. AT A

= [ ~ - ~ ] [ _~

~ ] = [ -~~ -~~]

The characteristic polynomial of AT A is

125A. + 2500 = (A.- lOO)(A.- 25) .

;_2 -

The eicrenvalues of AT A are A. 1 = 100 and A.z = 25. The singular value of ö A are

c. The unit circle consists of all vectors of the form .i = cos(t ü1 + sin(t)ii2 . The image of the unit circle consisrs of the vectors L (.i) = cos(t L (ii 1) + in (r) L Üz); that is, the image is an ellipse whose semimajor and semiminor axe are L (ü,) and L(Üz), respectively. Note that IIL(Ü 1) 11 = a, = 10 and IIL(ii2)11 = a2 = 5. See Figure 2. <111111

and b. Find an orthonormal eigenbasi for AT A ( ee Example 1).

]5 E IOo = ker [ 30 E 25

=

-~~

ker [

30 60 ] = span [ - 2] 1 _

~~ ] = span [ ~]

Therefore, the vector

Fact 7.5.3

-u, = -J5l [- 2] 1

(Different scales are used for domain and codomain.)

Figure 1

and

Let L (r) = A.i be an inverti ble linear transformation from IR2 to IR 2 • The image of the unit circle under L i an elJipse E . The lengths of the ernimajor and erniminor a.xes of E are the Singular values a 1 and a2 of A , respectively (see Figure 2).

do the job. We can check that the vectors

-

Au,=

1 [ - 20 10] .J5 Figure 2

and - = Avz

L(.i')

1 [ 10] -J5 5

= Ai

~

are perpendicular. Let us also checkthat the lengths of Aii, and Avz are tbe singular values of A: IIAii , ll =

IIA vzll The vectors ii 1,

iiz

JsKoü ~ I [ ~] I = Js Jt2s

~ I [ _;~J I =

= 10 = a ,

1

=

= 5

= az

and their images L(ii i), L (Ü 2 ) are shown in Figure 1.

Uni I circle

the unit circle: an ellipse

428 •

Chap. 7 Coordinate Sy tems

Sec. 7.5 Singular Values •

429

EXAMPLE 3 .... Con ider the linear tran formation L(.r )

= A.r,

= [~

A

where

L(."i) =Ai

I

~

1

a. Find the singular val ue ' of A. b. Find orthonormal vectors -~> ii2 , ii3 in ~3 such that L(ii1 ) , L (ii2), and L(Ü3) are orthogonal. c. Sketch and describe the image of the unit sphere under the transformation L.

Unil sphere in ~3

Solution Figure 3

a. 1 1

The image is tbe full e llipse shaded in Fig ure 3. Example 3 shows that some of the si ngular value of a matrix may be zero. Suppose the singtllar values a 1, . .. , a,. of an m x I! matri x A are nonzero, while as+ l ... , a" are zero. Choose vectors ü1, .. . , ii,., ü +I · . . . , ii" for A a introduced in Fact 7.5 .2. Note that II A ii; II = a; = 0 and tberefore A Ü; = 6 for i = s + l , . . . , n. We claim that the vectors Aü 1 , •• • , Aüs form a basis of the image of A. lndeed, these vectors are linearly independent (because they are orthogonal and nonzero), and they span tbe image, since any vector in the image of A can be written as

The eigenvalues are >.. 1 = 3, >.. 2 = I , >.. 3 = 0. The ingular value of A are

b. Find an orthonormal eigenbasis u1, u2 , u3 for AT A (we ornit the details).

Eo

~ ker ~ span [ - : ] A)

kr = A(c 1 ü + ... + c,. v + ... + c"Ü") 5

1

= c 1 Aü 1 + · · · + c AÜ5 .

Thi s show that s = dim(im A) = rank (A).

~] •

1 ii2 = - [

.J2

-1

Fact 7.5.4

We compute Aii" Av2 , Aii 3 and check orthogonality:

- .J61[3]

Au 1 =

• • •

ar

are

• The Singular Value Decomposition

c. The unit sphere in JR 3 consists of all vectors of the form c1il 1 + c2 Ü2

+ c3Ü3 ,

where

cT + c~ + cj =

The image of the unit sphere consists of the vectors

L (x) = c1L Cii1) where

1 ,

3 ,

We can also check that the length of Au; is a; :

x=

lf A is an m x n matrix of rank r, then the singular vaJue a nonzero, while ar + l, ... , a" are zero.

ci + ci : : : I (recall that L (ii3) =

+ c2 L Cii2),

0).

l.

Ju t a we expre sed the Gram-Schmidt process in term of a matrix decompo ition (the QR factorization), we will now expres Fact 7 . 5.~ in t rm. of a matrix decomposition. Consider a linear transfonnation L(,r) = A.r from IR" to IR"', and choose an orthonormal basis ii" ... , ü" as in Fact 7.5.2. Let r = rank (A). We k..11ow that the Aur areorthogonal and nonzero, with IIAÜ; II = a; . We introduce vectors Aü the unit vectors 1

,

• • • ,

-

111

I

-

= - Av1 , a1

I , !Ir = -Aur. ar

430 •

Sec. 7.5 Si ngul ar Va lues •

Chap. 7 Coordinate Systems We can expand the sequence ü 1, Then we can write

.. • ,

ü,. to an orthonormaJ basis

u1, .. .• Üm

or, more succinctly,

of IR"'.

AV = U'E .

Note that V is an orthogonal n x n matrix, U is an orthogonal m x m matrix, and "E is an m x n matri x whose first r diagonal entries are a ~> .. . , a,., and all other entries are zero. Mul ti plyi ng the equation A V = U "E witb V7 from the right, we find that A = U"EV 7 .

and A

v;= 0

431

for i = r

+1

. . . , n.

We can express these equations in matrix fo rm a foll ow

Fact 7.5.5

Singular value decomposition (SVD) Any m x n matrix A can be written as

Vr

Vr+l

v"

A = U"EV 7 ,

where U is an orthogonal m x m matrix; V is an orthogonal n x n matrix; and "E is an m x n matrix whose first r diagonal entries are tbe nonzero singular values a 1, ••• , a,. of A , and aJI other entries are zero (r = rank(A)). Alternatively, this singular value decomposition can be written as

V

- - T A =a1u- 1-uT1 + · · · +a,.u,.v,. ,

where the Ü; and the Ü; are tbe columns of U and V , respectively (Exercise 29).

a,.u,.

0

0

A ingular value decomposition of a 2 x 2 mat:r:Lx A is presented in Figure 4.

a1

-

u2

0 ~

Ul

u,.

0

0

E \I T

A =

figure 4

a,.

Aii l = lT IÜI

ul

0

0

VI = I'

I~

orlhogonal

al

fnho~onal

e2 lT2e l

0 Ul

u,.

Ur+ l

a,.

Um

el

0

0

u

L

L

lT 1e 1

= [ rrl

0

0 ] cr2

432 •

Sec. 7.5 Singular Values •

Chap. 7 Coordinate Systems

Consider a singular value decomposition

Here are two numerical examples.

2-lJ. 6

6

EXAMPLE 4 ..,. Find an SVD for A = [ _ 7 Solution

433

A = U L:V r ,

(Compare with Example 2 .)

where

- "J51 [_2] and- "J51 [l ] so hat 1 [- 21 2l].

In Example 2 we fou nd u1 =

v2

1

=

2 ,

t

v,.

and

U=

UJ

Um

V = "JS

The columns ii 1 and 1~ 2 of U are defined by

.. u

1

=

I _ 1 [ 1] , cr Av 1 = J5 _ 2

1

- = 1 - = "J51[2]

112

CT2

We know tbat A v;

A i.i;

and therefore

1 [- 2l 2] u = "J5 l .

for i = 1, . . . , r

and

1 '

A U2

= cr;Ü ; =Ö

fori=r + l , . .. , n.

These equations tell us tbat ker(A) = span(i.ir+ 1, ... , Ün and

Finally,

o]=[1o0 o5 ].

cr2

You can check that

im(A) = span(Ü1, .. . , Ür). (Fill in tbe details.) We see that an SVD provides us with orthorrormal bases for tbe kerne! and image of A . Likewise, we bave AT = v r; Tu T

EXAMPLE 5 .... Find an SVD for A = [

~ ~

6].

or

AT U = vr; T.

Reading the last equation colunm by colunm, we find that (Compare with Examp1e 3.)

for i = 1, ... , r and

Solution

T0... AU;=

Using our work in Example 3, we find that V =

l j../2 1/J6 1;.J3] 0 2/J6 - 1;.J3 ' [ Jj./6 - t;../2 1/.J3

u = [ 1/.J~ - 1/../2 ] 1/.J2

:E=[.J303 o1 o]0 . Checkthat A = U:EV r .

(Observe that the roles of the Ü; and the As above we have

+ 1, . . . , m .

v; are reversed.)

and

1;.J2 '

and

for i = r

ker(AT) = span(Ür+ l, . .. , Üm). In Figure 5 we make an attempt to vi sualize these Observations. We represent each of the kernels and images simply as a line. Note that im(A) and ker(AT ) are orthogonal complements, as observed in Fact 4.4.1.

434 •

Sec. 7.5 Singular Values •

Chap. 7 Coordinate Systems

6. span (ii, +

1 •• • • •

AÜ;

ii")

=a ; ll; if i s r =Ö if i > r

=ker(A)

= ker(A7')

the singular values of A = [

II Au11l

spau (ii,...1• 1 • • •• • ii..,)

Aii;

Fin~

~ ~

Figure S

11. We conclude th.is section with a brief discussion of one of the many applications of the SVD-an application to data compression. We follow the exposition of Gilbert Strang (Linear Algebra and Its Applications, 3d ed., Harcourt, 1986). Suppose a satellite transmits a picture containing 1000 x I 000 pixels. If the color of each pixel is digitized, th.is information can be represeoted in a 1000 x 1000 matrix A . How can we transmit the essential ioformation contained in thi.s picture without sending an 1,000,000 numbers? Suppose we know an SVD - -T A=a 1u- 1v-T1 +--·+a,u,v,.

Even if the rank of the matrix A is !arge, most of the singular values will typically be very small (relative to a 1). If we neglect those, we get. a good approximation A ~ 1v[ + · · · + 5 5 v[, wbere s is much smaller than r. For example, if we cboose s = 10, we need to transmit only the 20 vectors a 1 ii 1, . . . , 1000 , which is 20,000 numbers. a10 10 and ti,, .. . , VJO in IR

a,u

au

v

1

such th at

Find Sin gular value clecompositions for the rnatrice l[st.ed in Exercises 7 to 14. Work with paper and pencil.

[~ -~J 9. [~ ~]

ATii ; = a;v; ifi :Sr ATii; =Öifi > r

Find a unit vector

= a 1. Sketch the image of the unit circle.

7. span (ii 1 , •• • , ii,.) = im(AT)

J.

435

13.

8.

[~ ~] [-~ ;J

[~ -q]p

-7]

10.

[~

12.

[! i]

14.

[~

6

(see Ex.ample 4

( ee Example 5)

;J

15. Jf A is an invertible 2 x 2 matrix, what is the relationship between the singular values of A and A- 1? Justify your answer in terms of tbe image of the unit circle. 16. If A is an invertible n x n matrix, what is the relationship between the singular values of A and A - 1? 17. Coosider an m x n matrix A with rank (A) = n and a singular value decomposition A = U:EVr. Show that the Ieast-squares solution of a linear system = b can be writteo as

A-r

-

-

b·Ü 1-

b ·Ü 11

_

X" = - - U I +· · · + - -VII .

u

0'11

0'1

18. Consider the 4 x 2 matrix

EXE R C I S ES GOALS

Find the singular values and a singular value decomposition of a matrix. of the unit Interpret the singular values of a ' 2 x 2 matrix in terms of the imacre 0 circle.

1. Find the singular values of A = [

~

_

~ J.

l

A= w

[I

1

-~

1 - 1 - 1

I - 1 1 - 1

_:J[~ ~J [! -4] - 1

1

3 .

0 0

Use the result of Exercise 17 to find the lea t- ·quares olution of the linear system

2. Let A be an orthogonal 2 x 2 matrix. Use the image of the unit circle to find the singular values of A. 3. Let A be an orthogonal n x n matrix. algebraically. 4. Find the singular values of A

= [ ~ ~ J.

5. Find the singular values of A = [ : metrically.

Find the singular values of A

-;

J.

Explain your answer geo-

Work with paper and pencil. 19. Consider an m x n matrix A of rank r and a singular value decompo ition A = U :E vr . Explain how you can expre ·s the least- quares Solution f a -~~ of V. sy. tem Ax = ;; a linear combination. of the columns v 1 ,

•• • •

436 •

hap. 7

oordinate Systems

Sec. 7.5

20. a. Explain how any "quare matrix A can be written as

ingul ar Values •

43 7

29. Show that an SVD

A = QS ,

where Q i orthogo nal and S is symmetric posiöve emidefinite. Hint: Write A = U'EV T = UV T V'EVr . b . 1 it po ible to write = S 1 Q 1, w here Q1 is orthogonal and S 1 i symmetric positive semidefini re ? 21. Find a decompo iti on A = Q5 a discus ed in Exercise 20 for A = [ (Compare wi th Examples 2 and 4.)

can be written as

30. Find a decomposition

-~ ~ ] ·

22. Con ider an arb itrary ~ x 2 matrix A and an orthogonal 2 x 2 matrix 5. a. Explain in terms of the image of the unit circle w hy A and SA have the same ingul ar values. b. Explain aJgebraically why A and SA have tbe same singular values .

- -T A = a 1u1 v1

for A = [ _

~ ~]

(see Exerci e 29 and Example 2).

31. Show that any matrix of rank r can be written as the su m of r matrice of ran k I . 32. Consider an m x n matrix A an orth ogonal m x m matrix 5 and an orthogonal n x n matrix R . Campare rhe Singular values of A and S A R. 33. True orfalse? If A is an n x n matrixl then th e singul ar values of A2 are the squares of the ingul ar values of A. Explain. 34. For which sq uare matrices A is there a si ngular value decomposition A = UL.VT with U =V ? 1

23. Con ider a singular va lue decomposiöon

of an m x 11 matrix A. Show that the colunms of U form an or1honorma l eigenbasi for A AT. Wbat are the a ociated eigenvalue ? What does your answer tell you about the relationship between the eigenva lues of A r A and AAT?

1

35. Consider a singular value decomposition A = ur.vr of an m x n matri x A with rank(A) = n. Let ü1, ••• V11 be the col umns of V and ü 1• • •• ii 111 the columns of U. Without using the res ults in Chapter 4, compute (Ar A) - 1 ArÜ;. Explain the re ult in terms of Ieast-square approximations. 36. Consider a singular value decomposition A = U'EVT of an m x 11 matrix A with rank (A) = n. Let ~~ 1• . .. iim be the columns of U . Without u ing the results in Chapter 4, compute A(ATA)- 1 ArÜ;. Explain your result in terms of Fact 4.4.8. 37. If the ingular values of an n x n matrix A are all 1. is A nece arily orthogonal? 38. Tru.e or false'? Sirnilar matrices have the same ingular value . I

24. If A is a symmetric 11 x n matrix. whal is the relation hip between the eigenvatue and tbe singul ar values of A? 25. Let A be a 2 x 2 matrix and ü a unit vecror in JR2 . Show that

1

where a 1 a 2 are the sing ular values of A. Illu trate thi inequality with a . ketch, and justi fy it algebraicall y. 26. Let A be an m x n matrix and ü a vector in lR". Show that

where a1 and a" are the largest and the sm allest si ngul ar va lues of A, respectively. Campare with Exercise 25.

27. Let J.. be a real eigenvalue of an n x n matrix A. Show that

where tively.

a, and a"

are the largest and the sm allest singul ar values of A, respec-

28. If A is an 11 x 11 matri x, what is the product of its singular values a 1 , • • • , an? State the product in tenns of the determinant of A. For a 2 x 2 matrix Al explain thi. resull in tenns of the image of the unit circle.

+ a2u- 2v-T2

I

Sec. 8.1 An lntroduction to Cont inuous Dynamical Systems •

439

At CS, by definition of continuous compounding, the balance x (t) grow at an instan.taneous rate of 6% of the cunent balance: dx - = 6% of balance x( r) dt

LINEAR SYSTEMS OF DIFFERENTIAL EQUATIONS AN INTRODUCTION TO CONTINUOUS DYNAMICAL SYSTEMS

or dx

-

dt

Here, we use a differential equation to model a continuous linear dyoamical system with one component. We will so lve the differential equation in two ways, by separating variables and by making an educated guess. Let us try to guess the solution. We think about an easier problern fir t. Do we know a function x( t ) tbat is its own derivative: dxjdt = x? You may recall from calculu that x(t ) = e1 i such a function (some people define x(r) = e1 by thi s property). More generally, the function x(t) = Ce 1 is its own derivative, for aoy constaot C. How can we modify x(r) = Ce 1 to get a function whose derivative is 0.06 times itself? By the chain rule, x(t) = Ce 0 ·061 will do:

There are two fundamentally different way to model the evolution of a dynamical system over time: tbe discrete approach and the continu.ous approach. As a imple example, consider a dynarnical ystem with onJy ooe component.

EXAMPLE 1 .... Suppo e you wi h to opeo a bank account in Switzerland and you shop araund for tbe be t interest rate. You learn that Union Bank of Switzerland (UBS) pay 7 % interest, compounded annually. Its competitor, Credit Suis e (CS , offer 6% interest compouoded continuously. Everything el e being equal, where hou ld you open the account?

= 0.06x .

dx dt

= !!.__ dt

(ce

0 061 ) ·

= 0.06Ce 0·061 = 0.06x(t )

Note that x(O) = Ce 0 = C; that is, C is the initial value, x 0 . We conclude that the balance after t year is x(l)

=

e0.06t xo.

Again, the baJance x(t) grows exponentially. Alternatively, we can solve the differential equation dx j dt = 0.06x by eparating variables. Wri te

Solution Let us think about the two banks: at UBS, the balance x(r) grows by 7% each year if no deposits or withdrawals are made.

dx

= 0.06dt

X

+ I inte rest I and integrate both ide : x( t x( r

+ l) + l)

x(r) l.07x(t)

+

.j, 0.07x (t )

This equation describes a discrete linear dynamical system with one component. The balance after r years i

ln (x = 0 .06t for ome con tanr k.

The balance grows exponentially with time.

438

ow exponentiate: X

x(r) = (J.07rxo

where C =

ek.

+ k,

=

e ln (x)

= e0.06t+k = e0.06t C,

440 •

Sec. 8.1 An lntroduct io n to Co nt inuo us Dynamical System s •

Chap. 8 Linear Systems of Diffe rential Equations W lü ch bank offers the better dea l? We have to comparc the ex po n nti al functio n 1.OJI and e0·061 . Us ing a caJ ul ator, we o mpute

to see that UBS offe rs the better dea l. The extra interest from co ntinuo us compoundi ng doe" not mak up for th e o ne-poin t dj ffere nce in the nomin al intere. t rate.

....

dx jdt

We can generali ze the work we have clone with the differenti al equation = 0 .06x.

or

for some n x n. matri x A .

In tbe ~ontinuous approach, we mode l tbe gradu al change the sy tem undergoes as ttme goes by. Mathem ati ca ll y speaki ng, we model the (instantane~u s) .rates of change of the components of the state vector x(t ) , that is, the ir denvattves

dt

Fact 8.1.1

lf these rates depend linearly on x 1, x 2 ,

Consider the linear di fferential equaüon dx - = kx dt '

clx 1

~

t) =

=

~ = a 2 1X 1

1

. .. ,

G JJX t +a 12 X 2

dx2

with given initial val ue x 0 (k is an arbitrary consrant). The so lution is x

x," then we can write

+

+ a 22 x 2 +

e k XQ .

Tbe quantity x will gro w or decay exponen ti all y (dependin g on the sign of k). See Figure 1. or, in matrix form, Now consider a dynam.icaJ ystem with state vector x (t ) and component s x 1(t ), ... . Xn (t ) . ln Chapter 6 we use the discrete approach to mode l thi s dynamical system: we take a snap hot of the system at ümes t = 1, 2, 3, ... , and we describe the Iransformation tbe system undergoes between these snap hots. If ,'t (t + I ) depends li nearly on (t ), we can write

x

I

.x cr + 1) = Ax(r)

where

1

A= Figure 1 (o) x{t) =

ekl with positive k. Exponenliol growth. (b) x{ t)

=

ekr with negative k. Exponentiol decay.

[""

Gt:~

021

a 22

a:,t

Gn 2

a," ] (1 2 11

Gn n

X

The derivative of the parameterized curve .x(l) is definecl component-wi se:

clx 1

X

:t

-

dt

5

~ 10

(a)

15

0.5

cLt dt 0

2 (b)

441

dx 2

dt dx" dt

442 •

Chap. 8 Linear Systems of Differential Equations Sec. 8.1 An Introduction to Continuous Dynamical Systems • We summarize these observ·nions:

In other words, to solve the system d.r _ -=Ax dt '

A li11ear dynamical system can be modeled by x (r

+ 1) =

(di screte model)

Bx(t)

x

or

A and B are

11

x

11

d.r A(continuous mode l). - = X dr matrices, where 11 is the number of components of the

systent.

for a given initial state 0 we have to find the trajectory in the x 1-x 2 plane that ta11 at i o and whose velocity vector at each point i is the vector Ax. The existence and uniqueness of such a trajectory seems intuitively obvious. Our intu.ition can be misleading in such matters, however, and it is comforting to know that we can e tablish the existence and uniquene s of the trajectory later. See Fact 8.1 .2. Fact 8.2.3, and Exercise 9.4.48. We cao represent graphically as a vector field in the x 1-x2 plane: at the endpoint of each vector x we attach the vector Ax. To get a clear picture, we often sketch merely a direction field for Ai, which means that we wi ll not necessarily sketch the vectors Ax to scale (we care only about their direction). To find the trajectory ,r(t), you simply follow the vec tor field (or direction field); that is you fo!Jow the anows of the field , starring at the point representing the initial state 0 . The trajectories are also called the fiow lines of the vector field Ax. To put it differently , imagine a traffic officer standing at each point of the plane, showing us in which direction to go and how fast to move (in other word , defining our veloci ty). As we follow his directions we trace out a trajectory.

A-r

We will first think about the equation d.r - = Ax 4

dr

from a graphical point of view when A is a 2 x 2 matrix . We are looking for the parameterized curve

-=c) _ .~ 1 -

[x.tx2(t) C r) J xo.

that represents the evolution of the system from a given initial value. Each point 00 the curve x(r) will represent the state of the system at a ce1tam moment in time, a shown in Figure 2. . It is naturaltothink of the trajectory ,~(t) in Figure 2 as the path of a movu:g particle in the x 1-x 2 plane. As you may have seen in a previ~us cour e, the vel~ct~ vector dx f d t of this moving particle is tangent to the traJectory at each pmnt. See Figure 3. Figure 3

Figure 2

~

.T(O) = i o

Trajectory

/

1It --

443

is se nsible to attac h the veloci ty vector dx jdt at the en dpoint of Lhe staLe vector x(r). indicating the path the panicle would take if it wcre to maintain the direclion it has at time t.

x

EXAMPLE 2 ..... Con ider the linear system d.r f dc = Ai where A = [ sketch a direcrion field for values.

Ax. Draw rough

~

;

J

In Figure 4, we

trajectories for the three given initi al

444 •

Chap. 8 Linear Systems of Differential Equ ations

Sec. 8.1 An Introd ucti n to Co ntinuous Dynami cal Systems •

445

Figure 7

Figure S

Solution

We have seen earlier that the eigenvalues of A =

Sketch the ftow lines for tbe three given points by following the arrows, as shown in Figure 5. Thi picture does not teil the whole story about a trajectory x(t ). We don ' t know the position x(t) of the moving particle at a specific time t. In other words, we k.now roughly wbich path the particle takes, but we don ' t know how fast it moves along that path. ~

for some scalar ).. Tbis means that the nonzero vectors along these two special lines are j ust the eigenvectors of A , and the special lines themselves are the eigenspaces. See Figure 6.

Figure 6 (o) AX = ü, for o positive J... (b) AX = ü , for o negative A.

(a)

(b)

are 5 and -1.

with corresponding eigenvectors [ ~] and [ _ ~ ] . These results agree with our graph.ical work in Figures 4 and 5. See Figu.re 7. A in the case of a discrete dy namical system, we can sketch a phase portrait for the system d,r jdt = Ai that shows some representative trajectories. See Figure 8. In Su mmary: if the initia] tate ,rois an eigenvector, then the trajectory mo e aJong tl1e corresponding eigenspace, away from the origi n if tbe eigenvalue i

As we Look at Figure 5 our eye' s attention is attracted to two special line , along which the vectors Ax point eilher radially away from t.he origin or directly toward the origin. In either case, the vector Ax is parallel to .i : Ai = J..x ,

[! ; ]

Figure 8

x_

446 •

Chap. 8 Linear Systems of Differential Equations

Sec. 8.1 An Introduction to Continuous DynamicaJ Systems •

po 1u and toward the origin if the eig~nvalu e _is negativ .. lf the eigenvalue is zero, then 0 is an equilibrium olution : x(t ) = xo. for all Limes ' ·

x

Now .let' s do

a slightly barder example:

EXAMPLE 4 ..... Find all so lutions of the system

How can we solve the system dx jdr = A:i: ana lytically ? We starl with a simp le case..

EXAMPLE 3 ..... Findall solutions of tbe y tem

447

2]-

d,:r = [

.1 - 1 4

dt

X.

Solu tion Above, we have seen that tbe eigenvalues and eigenvectors of A tell us a lot about the behavior of tbe solutions of the system dx/dt = Ax. Tbe eigenvalues of A are A. 1 = 2 and A.2

= 3 with

corresponding eigenvectors ü1

Soluti on This means that

The two differential equation dxt = 2xt

dt

s- tAS

S= [

= B , where

i ~J

iJ

aod i:i2

= [~ ~

=[

iJ.

J. the matrix

considered in Exarnple 3. We can write tbe system

dx2

dx

dt

-=Ax

are unrelated or uncoupled· we can solve tbem separately u ing the formula tated in Fact 8.1.1. 21 X t (I) = e Xt (0), 31 X 2 (t) = e X2 (0),

aod B

=[

_

dt

as cLt

_

1 -=sssx dt

or

s- 1 d,i: = Bs- 1.-r

or

dt

or (see Exercise 51) Botb components of; t ) grow exponeotially, and the econd one wi ll grow ~aster than the first. In particular, if one of the components i initially 0, it remaw s 0 for aJl future times. In Figure 9, we s ketcb a rough pha e portrait for this Y tem. ~

.:!.._cs- t;) dt

=

Bcs- 1.r).

Let LI S i ntroduce the notation c(t) = s- t; (t); note that c(l) i the coordi nate vector of; (t ) with respect to the eigenbasis Üt, ü2 • Theo the system takes the form

dc

_

-dt = Be ' Figure 9

whi ch is just the equation we o lved in Example 3." We fo und that the olutioos are of the form c(t)

=

[ ee~;c~c2 J,

where c 1 and c2 are arbitrary con tants. Therefore, the olution of the original sy tem d,t - = A.r

dt

are

448 •

Chap. 8 Linear Systems of Differential Equations

Sec. 8.1 An Introduction to Continuous D:rnamical Systems •

We can write tbis formula in more general term s as ~

A lt -

x(r) = c 1e · v,

+ c2e

This is a linear di stortion of the phase portrait we sketched in Figure 9.

~ore precisely, the mat1ix S =

J.., r -

- v1 .

Note that c 1 and c2 are the coordinates of .~ (0 with respect to tbe bas.is ince

It is informative to consider a few special trajectories: if c , trajectory x- (t) = e 2r

2 1

[

moves along the eigenspace E2 spanned by [ and c2

=

=

l and c2

v,, v~ .

= 0,

lf c2

-=F

0, then the entries of c2 e 31

value) than the entlies of c1 e21

[iJ

i] transform~

the phase portraits in Figure 9

and

e into [ ~ ] ).

e

1

into the eigenvector

~

2

Our work in Examples 3 and 4 generalizes readily to any n x n matrix A that is diagonalizable over R i.e., for which there is an eigenbasis in ~Rn:

the

J

~] , as expected.

Fact 8.1.2 Likewise, if c , = 0

~],

[

~]

[

i]

will become much !arger (in absolute

as t goes to infinity. The dominant tenn

Consider the system dxjdt = Ax. Suppose there is a real eigenbasis ii " .. . , v" for A, with associated eigenvalues }. I , . .. ' A". Then the general Solution of the system is

x

0

with respect to the basis ü1,

. _. ,

V

12



We can think of the general solution as a linear combination of the solutions e;..,, Ü; associated with tbe eigenvectors ii;. Note the similarity between this solution and the general solution of the discrete dynamica1 system ,i (t + 1) = Ax(t):

associated with the !arger eigenvalue, determines the behavior of the

system in the distant future. The state vector .t(t) is almost parallel to E3 for !arge

r. For Iarge negative 1, on the otber band, tbe state vector is very small and almost parallel to E2. . In Figure 10, we sketch a rough phase portrait for the system dxjdt =Ai.

Figure 10

7

1, we have the trajectory

moving along the eigenspace E3.

[

[

mto the phase portrait sketched in Figure 10 (tran~fonning

The c; are the Coordinates of

c2 e 3'

449

Xz

The terms )...~ are replaced by e;..,,. We have already observed this fact in a dynamical system with only one component (see Example 1).

EXAMPLE 5 ~ Consider a ystem d ,i 1dt

= A.x , where A is diagonalizable over R When is the zero state a stable equilibrium solution? Give your answer in terms of the eigenvalues of A.

Solution Note that lim eM = 0 if (and only if) ).. is negative. Therefore, we observe tability

,.....

if (and only if) all eigenvalues of A are negative.

~

Consider an invertible 2 x 2 matrix A with two distinct eige.nvalues AJ > )•2 · Then the phase portrait of d.xjdt =Ai Iooks qualitatively like one of the three sketches in Figure 11. We observe stability only in Figure llc. Note that the direction of the trajectories outside the eigenspaces always approaches the direction of E;.. 1 as t goes to infinity. Compare with Figure 6.1.1 I.

450 •

Chap. 8 Linear Systems of Differential Equations

Sec. 8.1 An Introduction to Contin uous Dynamical Systems •

dx 8. -, Gt

= ../X, x(O)

dx = c.1t

9. -

xk

451

= 4.

(with k =/:. 1), x(O)

=

I.

d.x 1 10. -d = - (- ), x(O) = 0. {

COS X

dx . ? 11. -, = I +x-, x(O) = 0. Gl

12. Find a differential eg uation of the form dx jdt

= kx

for which x(t)

= 31 is

a

solutioo. (c)

Figure 11

EXERCISES

GOALS Use the concept of a continuous dyna:mic.al system. Solve the differential equation dx jdt = kx. Solve the system d.t / dt = when Ais diagonalizable over R and sketch the phase portrait for 2 x 2 mat1ices A.

Ax

Solve the initial value problems posed in Exercises 1 to 5. Graph the solution.

dx 1. - = 5x with x(O) = 7. dt dx

.

2. -

= -0.7lx

w1th x(O)

3. d p dt dy 4. dt dy 5. dt

= 0.03P

with P (O)

dt

= - e. = 7.

= 0 .8t

with y(O) = - 0.8.

= 0 .8y

with y(O)

= -0.8.

Solve the nonlinear differential equations in Exercises 6 to ll using the metJ1od of separation of variables (p. 439): write the differential equation dxjdt = .f(x) as dxj f(x) = dt and integrate both si des. dx I 6. - = - , x(O) = 1. dt X dx ? . 7. - = x-, x(O) = 1. Describe the behavior of your solution as t wcreases. dt

13. In 1778, a wealtJ1y Pennsylvanian merchant named Jacob DeHaven lent $450,000 to the Contineutal Congress to support the troops at Valley Forge. The loan was never repaid. Mr. DeHaven 's descendants are taking the United States Government to coutt to collect what they believe they are owed. Tbe going interest rate at the time was 6%. How much were the DeHavens owed in 1990 a. if interest is compounded yearly? b. if interes t is compounded continuously? (Adapted from The New York Tim es, May 27, 1990.) 14. The carbon in living matter contains a minute proportion of tlle radioactive isotope C-14. This radiocarbon arises from cosrnic-ray bombardment in the upper atmosphere and enters living systems by exchange processes. After tlle death of an organism, exchange stops, and tlle carbon decays. Tberefore, carbon dating enables us to calcuJate the time at which an organism died. Let x(t) = proportion of the original C- 14 still present t years after death. By definition, x(O) = 1 = 100%. We are told tllat x(t) satisfies tlle differential eguation

dx I - = - - -x. dt

8270

a. Find a formu la for x(t). Determine tlle half-l.ife of C- 14 (that is, the time it takes for half of tlle C- 14 to decay). b. The Iee man. In 1991 , the body of a man was found in melting snow in tlle Alps of Nortllern ItaJy. A well-known historian in Innsbruck, Austria, deterrnined that the man bad lived in the Bronze Age , which starred about 2000 s.c. in this region . Examination of tissue samples performed independently at Zürich and Oxford reveaJed tllat 47% of the C- 14 present in the body at the time of his death had decayed. When did this man die ? ls the result of the carbon dating c.ompatible with the estimate of the Austrian hi storian?

15. Justify the "rule of 69": if a quantity grows at an instantaneou rate of ko/o , then its doubling time is about 69 /k. Example: In 1995 the population of India grew at a rate of about 1.9%, witll a doubling time of about 69/1.9 ~ 36 years.

452 •

Sec. 8. 1 An Introd uctio n to Continuous Dynami cal Sys tems •

453

Ch ap. 8 Linear Syste ms of Differential Equations

24. Let A be an n x n matrix and k a sca lar. Consider the two y tems below: Consider the system

~ = [~ ~J.x. For the va1ues of ;, and A.2 given in Exercises 16 to 19, sketch the trajectories 1 for all nine initial values hown below. Foreach of the points, trace out both the

dx

(I)

-

(11)

dl =CA+ k i")c

= A-t

dl

dc

Show that if .~(t) is a Solution of system (l), then c t ) = of ystem (II).

future and tbe pa t of the sy t m .

ek 1 x(t)

is a solution

25. Let A be an n x n matrix and k a scalar. Consider the two systems below:

cLx







~~ =

~ J.X

[ ;

cL.r = [ -4 27. dr 16. A.t = 1, >-2 = - 1

28.

~~ =

[:

29.

:~ =

U ~ J.r

17. AI = 1, A2 = 2

19.

),1

= 0, )..2

-2

=1

20. Consider tbe system dx jdt = Ax with A = [

~

-

~ J. Sketch a direction

fi eld for Ai. Based on your sketch, describe the trajectories geometrically. From your sketch, can you guess a fonn ula for the o lution wi.th

xo = [ ~ ]?

Veri fy your guess by sub tituti ng in to tbe equation .

21. Consider the system dx jdt = Ax with A = [

~ ~ J. Sketch a direction field

of Ax. B ased o n your sketch, describe the trajectories geometricaUy . Can you find the solutions analytically? 22. Consider a linear system dx /dt = Ax of arbitrar y size. Suppose it (t) and x 2 (t) are solutions of the system. Is the sum x(t) = x 1(t) + 2 (t) a solution as well? How do you know?

d."V = [ I 30. dt

2

31. dx dt

-- = kAc dt

dc

~ J.r

with

with with

J. .t"(O) = [ ~ J. x(O) = [:

J_ with x(O) _ = [_ 21 ] .

2 x

4

[7 ~ 3

Jx

~ J. 1 J. ,i(O) = [ O

x (O) = [

2

~] ;

2

with

x(O)

=

[-~l ] ·

Sketch ro ugh phase pmtraits fo r the dynarnicaJ systems gi ven in Exercises 32 to 39.

x _ [ 3I

32. ddt 33.

x

23. Consider a linear system dx jdt = Ax of arbi trary size. Suppose Xt (t ) is a solution of the system and k is an arbitrary constant. Is x(t) = kx t(r) a solution as well? How do you know?

=

with

_3 3

2

).z =

(II)

In Exerci e 26 to 31, solve the ystem with the given initial value. 26.

18. >. 1 = -1 ,

- = Ax dt

Show that if x(t) is a Solution of y tem (I), then c(l ) = x (kt ) is a olution of ystem (fl ). Campare the vector fields of tbe two sys tem .

----~--~---+-------- X I



_

(I)

34.

35.

2]

0

X-

.

~;: = [ -~ -~ J.r.

n

~;~ = [ : X. :~ = [ ~ ; J.r.

454 •

Chap. 8 Linear Systems of Differential Equations

Sec. 8.1 An lntrodu ction to Continuous Dynamical Systems •

- + 1) = [00..92 0.22 ]-()

36. x(t

_ 37. x(t

1.

. = + 1)

[ _ 1_ 02

c. What will happen in the long term ? Does tbe outcome depend on the initial populations? If so, how?

x t ·

J-c)

o.3 1. x t · 7

43. Answer the questions posed in Exercise 42 for tbe system below:

0.2] -()

dx

1.1 38. x(t + 1) = [ -0.4 O.S x t ·

39.

_

X

(t

+ 1) =

[o.s 0 _3

- o.4 ] 1.6

455

-

dt

-c)

=

Sx -

y

dy

X I ·

-

dt = -2x + 4y

- .

40. Find a 2 x 2 matrix A such that the system dx jdt = Ax has

x(t

-- [2e2 ' + 3e3'] 3e2r + 4 e3r

as one of its solutions. Consider a noninvertible 2 x 2 matrix A with two distinct eigenvalues (note 41. that one of the eioenvalues must be 0). Choose two eigenvectors ü, and ÜJ with eioenvalue~ ), 1 = 0 and A.2 . Suppose A.2 is negative. Sketch a phase p~rtrait fo; the system dx jdt = A.t , clearly indicating the shape and long-term behavior of the trajectories.

44. Answer tbe questions posed in Exercise 42 for tbe system below: dx - = x+ 4y

dt

dy

-=2x- y dt

45. Two berds of vicious animals are figbting each other to the death. During the fight, the populations x(t ) and y(t) of the two species can be modeled by the system below: 1 dx

dt

- 4y

dy

- =- x dt

a. Wbat is the significance of the constants -4 and -1 in these equations? Which species has the more vicious (or more efficient) fighters? b. Sketch a phase portrait for this system. c. Who wins the fight (in the sense that some individuals ofthat species are left while the other herd .is eradicated)? How does your answer depend on tbe initial populations? 46. Repeat Exerc.ise 45 for the system 42. Consider the interaction of two species of animals in a habitat. We are tol d that the change of the populations x( r ) and y(r) can be modeled by the equations

dx

- py

dt dy

dx = 1.4x - 1.2y dt

-

-

d ...!... = 0.8x -

dt

1.4y

dt

=-qx

'

where time t is measured in years. a. What kind of interaction do we observe (symbiosis, competition, predatorprey)? b. Sketch a phase porlnit for this system. From the nature of the problem, we are interested only in the first qu adrant.

where p and q are two positive constants. 2

'This is the simples!. in a serie of c01nbm models developed by F. W. Lanchester duri ng World War 1 (F. W. Lanchester. Aircraft in W'wfare, tile Daw11 of 1he Fourlir Arm. Tiptree, Constable and Co.. Ltd., 1916). 2The result is known as Lanchester's square luw.

456 •

hap . 8 Linear

tems of Differential Equations

Sec. 8.1 An lntroduction to Continuous Dynamlcal Systems •

47. The intera tion of two popul ati on of anim als is modeled by the differential

52. Find all solutions of the sy. tem

equation

dx

dx - =-x+k y

dr

dv __:_ = kx - 4v

dr

·

di

say about the sign of the on the value of the constant c. Foreach case you di cussed does each pha e portrait tell

eigenvalues? How doe your an wer depend

k? in part b, sketch a rough pbase portrait. What you about the fut ure of the two population ?

48. Repeat Exercise 47 for the sy tem dx

-

dr

= -x+ky

dy X

dt

-

dt

- g - 0.2h

~;~ = [ ~

0.6g - 0.2h.

where timet is measured in hours. After a heavy holiday dinner, we measure g(O) = 30 and h(O) = 0. Find closed formulas for g(l) and h (t ). Sketch the

trajectory. 50. Consider a linear system dx jdr = Ax, where A is a 2 x 2 matrix which is diagonalizable over JR. Wben is the zero tate a stable eq uilibrium solution? Gi ve your answer in terrns of the determinant and the trace of A.

51. Let x(t ) be a differentiable cur-ve in JRn and S an n x n matrix. Show that d

_

- (Sx) dt

X,

-! ]x

l

xo = [ b

with

Sk~tch the trajectory for the case

when p is positive, negative or 0. In which cases does the trajectory approach the origin? Hint: Exercises 20, 24. and 25 are helpful. 54. Conside_r a cloor that open to on ly one side (as mo 't doors do). A spring mechant 111 closes the door automaticaUy. The tate of the door at a oiven time r (measured in econds) is deterrnined by the angular displacemen7 a(t) (measured in radians) and the angular velocity r.v i) = da jdt . Note that a i alway positive or zero (since the door opens to only one side • but r.v can be positive or negative (depending on whether the door i opening or closing).

When the door is moving freely (nobody i pushing or pulling), its movement. is ubject to the foUowing differential equation :

da dt dr.v

=

).,

0

-4 y

where k i a po itive constant. 49. Hereis a continuou model of a per on ' s glucose regulatory syste m (compare with Exercise 6.1.36). Let g(r) and h (r ) be the excess glucose and in sulin concentrations in aper on's blood. We are told that

dg dr dh

1]-

[ ).,

=

where ).. is an arbitrary constant. Hint: Exercises 21 and 24 are helpful. Sketch a phase porh·ait. For which choices of 'A is the zero state a table eq uilibrium olution ? 53. Solve the initial value problem

for ome po iti ve con tant k.

a. What kind of interaction do we ob. erve? What is the practica l ignifi cance of the con tant k? b. Find the eigenvalue of the oeffic ient matrix of the sy tem . What can you

45 7

dx

= S- . dt

dr

(the definition of r.v)

(l)

( -2a reflects the force of the spring, and -3r.v model friction)

= - 2a - 3w

a. Sketch a phase pmtrait for this system. b. Di cuss the movement of the door repre ented by the quali tativel y d ifferent trajectories. For which initial st.ates doe the door slam (i .e., reach a = 0 with velocity (tJ < 0)? 55. Answer the que tion posed in Exerci se 54 for the sy tem

da dt d (tJ

dt

(/)

= - pet - qr.v

where p and q are po itive, and q 2 > 4p.

458 •

2

Sec. 8.2 The Co mplex Case: Euler's Formula •

Chap. 8 Linear System of Differential Equations

We can write a complex-valued funct ion z(t ) in terms of it nary parts: real and imagi -

THE COMPLEX CASE: EULER'S FORMULA Con ider a linear system

z(t ) = x(t )

dx

_

+ i y (t )

iCon~ider the two ex~ple_s abo ve.) l f x(t) and

y (t ) are differentiable real-valued unctiOns, then the den vattve o f the comp lex-valued function z(r) is defined by

- = Ax dt '

dz dx dy -= - + i-

, bere the n x n matrix A i di agonalizable over C: ther i a complex eigenbasi jj " . . . • ü" fo r A. with a ociated complex eigenva lue A~o ... , A" . You may ' onder ' hetber the formul a

dt

dt

dt .

For examp le, if

then

with com plex c;) produce the general complex solution of the system, just as in the real case (Fact 8.1.2). Before we can make sen e out of the fommla above, we have to think about the idea of a complex-valued fun ction and in partic ul ar about the exponential function e )J for complex A.

dz dt =

I+

2il .

If Z(l ) = COS(I )

+

459

+ i sin (t),

then

Complex-Valued Functions

dz dt = - sin (t)

A complex-valued ftmctio n z = f(t) is a function from IR to C (with domain lR and codomain C ): the input t is real, and the output z i complex. Here are two examples:

Pl e~se ve1ify that the basic rules of differential calculu (the sum, product, and quot1 ent rules) apply to complex-valued functions. The chai n rule holds in tbe fo llowi ng fom1: if z = f(l) i a differentiable complex-valued function and t = g(s) i a differentiable function fro m lR to IR, tben

7 =L + i t 2 Z = COS(f ) + i sin (t)

For each r, the ou tpur ' can be represented a a point in the complex plane. As we Iet t vary , we trace out a trajecto ry in the complex pla ne. ln Figure l we sketch the trajectories of the two complex- valued fun ction defined above.

r.-e 1

(a) The lrajedory of

Z=f + if 2.

1 ~

(b) The trajectory of

z = castn + isin(t).

d-

d z dt

ds

dt ds ·

T_he d_erivative d z / dt of a compl ex-valued function z (t) , for a given t, can be vJsuahzed as a tangent vector to the trajectory at z(t), as shown in Fig ure 2. Next let' s think about the complex-valued exponential function z = e).' where A is complex and t real. How should the function z = eJ..J be defin ed? We can get ome in piration from the real case: the exponential function x = ekr (fo r real k) i the unique function uch that dxjdt = kx and x(O) = 1 (compare with Fact 8. 1.1 ).

= 1rll =i

1 = - I

Figure 2 t =O z=O

I =

0

z= l

!!I:at l = .I dt

I

= 31Tf2

z= (a)

(b)

- i

+ i cos (t ) .

460 •

Sec. 8.2 The Camp lex Case: Euler's Formula •

hap. 8 Linear Sy tem s of Differential Equation

461

We an u e thi funda me nta l property of real xpone ntial fun ctio ns to defin e z(l) = cos(t)

the complex exponentiaJ function s:

+ i sin(t)

I

Complex exponential functions

Definition 8.2.1

If ).. i a complex number, the n = e1'1 is the unique com ple -va lued fun ction 7

su h that d~

- = )... dt

and

- (0) = l.

(The exi tence of uch a function, for any

"J.. ,

will be established below; the

Figure 4

proof of uniquene s is left as Exercise 34.) It follow that the unique com plex-valued function z(r) wi.th and

z(O)

=

The unit circ/e, with parametri zation -:(1) = cos(t) req u iremen ts:

o

dz

dr

.

= - Sll1 (t )

+ i CO

+ i sin (i),

ati fi e these

(1) = i z(t)

IS

and z(O) = 1. See Figure 4. We have bown the following fundame ntal re ult: fo r an arbitrary complex initial value zo. Let us first con ider the impl e t ca e, - = e;', where )... = i . We are loolO ng for a complex-valued function z(t) ucb that d- jdt = i - and z(O) = l. From a graphical point of view, we are looki ng for the trajectory z(L) in the complex plane tbat starts at z = 1 and whose tangent vector d z/ dt = i is perpendicular to z at each poi.nt (see Example I of Seclion 6.4). In other ward , we are Iook.ing for the ftow line of the vector field in Figure 3 tm·ting at z = I.

Fact 8.2.2

/ r = COS(t ) + i PRO JUV NTUTE 1957

~ . ~

.;;

~ '~

z

;

3

. ),

.

Figure 3

~

5

+5

in (t )

The case 1 = rc Ieads to the intriguing formula eirr = - 1; this ha been called the most beautiful formula in all of mathematics. 1 Euler's formula can be u ed to write the polar form of a complex number more succinclly:

~



M

-. . < .,.

/

Euler's formula

_ ; . .·!, -~... ~.

.

'

EL\/

TlA

Figure S Euler's likeness end his celebroted formulo ore shown on o Swiss postage stomp.

z = r (cos<j> + i sin <j>) = re;q, Now consider z = e)J , where ).. is an arbitrary compl ex number, )... By manipulating exponentials as if they were real , we find that

e'J = e
= eP'eiqr = eP' ( co

(qt)

+i

= p + iq .

in(qt ).

We can va lidate lhi s result by checking th at the complex-valued function z(t) = eP'( co (qt)

+i

in (qt))

1 Benjami n Peirce ( 1809- 1880), n Hurvard mathcmatician. after observing tiM em = - I, u. cd to turn to his studems and say, ·'Gentlemen, that is surcl y truc. it is absolu tcl y paradox i al, wc cn nnot under ta nd it, and we don't know what it means, but we have pro ed it, und thercforc wc knO\ it must bc lhc tru th ." Do you not now thin k that we understand not onl y that the formul a i' truc but al o what it m.:a ns?

462 •

hap. 8 Linear Systems of Differential Equation s

Sec. 8.2 The Co mplex Case: Euler's Fo rmula •

does indeed satisfy the definition of e).,1 , namel y, d-;, j d r

=

AZ and z(O)

=

~i genbas is. ii t, . . . , ii", with eigenvalues A1,

1:

+ iq )e"

1

(

co (qt ) + i sin (ql) ) = AZ

, An. Findall complex soluti ons

x(t ) of tht s system . By a complex solution we mean a functio n from IR to C"

d z =pe" 1 (co (qt )+ i in (qr))+ ePf ( - qsin (q t)+ iqcos(qt )) dr

= (p

••.

463

(that is, t is. 1: al and .r is in C"). In other words , the component funct.ions x 1 (t ), ... , x"(t ) ol· x(t ) are cornplex-valued functions. A · you review our work in the last section , you will find tbat the approach we took to the real case applies to the complex case as well, without rnod.ifications:

.J

EXAMPLE 2 ~ Sketch the trajectory of the comple -valued fun ctio n z (t = e
Solution

Fact 8.2.3 -;,(1)

=

eO.I I eil=

eO

I r ( COS (I )

+i

Con ider a linear system

in (r) )

dx

-

The trajectory spiral nent.ially.

outward a

dt

shown in Figure 6 since e 0· 1' grows expo-

_

=Ax .

Suppose there is a complex eigenba .is ii 1, •• • , v" for A with associated complex eigenvalues A1, . .. , An. Then the general complex solutioo of the sy tem is

<11111

EXAMPLE 3 ~ For which complex number ), is Lim eA1 = 0? 1->

Solution

where the c; are arbitrary cornp lex numbers.

Recall that e~·'

= e
We can checkthat the given curve .'i(t) sarisfies the equation dxjdt = A.i: we have

so that leAI 1 = e"1 . This quantiry approaches zero if (and only if) p is negative, that is, if e"' decays exponentiall y. We summarize: Lim e).,1 = 0 if (and only if) the real part of A is negative. <11111 1->

We are now ready to tackle the problern po ed at the beginning of th i ection: consider a . y tem dx /dt = Ax , where the n x n matrix A ha a comple

(by Definition 8.2 .1 ), and

because the Ü; are eigenvectors. The two answers match. When i the zero state a stable equiiibrium solution for the system d.i jdt = A.r? Con ideri ng Example 3 and the form of the solution given in Fact 8.2.3, we can conc lude that this is the case if (and only if) the real part of all eigenvalue are negative at least when A i diagonaiizable over C). The nondiagonalizable case is left a Exercise 9.4.48.

Figure 6

Fact 8.2.4

For a system c1.-r

_

-=Ax dr ·

the zero tate is an asymptotically stable eq uilibrium solut.ion if (and only if) the real part of all eigenvalues of A are negative.

464 •

Sec. 8.2 The Camplex Case: Euler's Fo rmu la •

465

Chap. 8 Linear Systems of Differenti al Eq uation ~ he~~ A i_s a r~al 2x2 matrix with eigenvalues p±iq. Consider an eigenvector v + 1w w1th e1genvalue p + iq. Tben

EXAMPLE 4 .... Consider the ystem d-~ jd r = A.i=. where ~ is ~ (r al) 2 x 2 matrix. When i.s the zero tate a table equilibrium ·so lutJOn fo r th t system? Give your an wer in terms of tr(A) and det(A ) .

x(t) = e"l [w

ility either if A has two negative eigenvalue or if A . has two . We o b erve Stab conj ugate eigenvalue p±iq, wber~ p is negative. In bot:h case , tr(A) l ' negatlVe and det (A) is positive. Check that 111 all other cases tr(A) 2: 0 o r det(A) S 0. <1111

dx

_

EXAMPLE

5 ~ Salve the system

ddxt

As a specia1 case of Fact 8.2.3, let's consider the system d,r _ - = Ax. dt w here A j a real 2 x 2 matrix with eigenvalues A. 1,2 = p ± iq and corresponding

few simple points .X, say

1

x(t) = 2Re (c !e"' ÜI)

+ iw)) k cos(qt )w - k sin (qt)ti)

Figure 7.

= 2Re(Ch +ik)e"1 (cos(qr) +i sin (qt))(ti

w

= 2e"'[w Note that x(O) = - 2kw

_ J [ - k cos(qt)- h sin (q t) J v - k sin (q t) + h cos(qr) v][cos(qt) sin (q r)

+ 2hv.

- sin (qt) J[ - k]· cos(qt) h

Let a = - 2k and b = 2h for si mpli c ity.

Cons ider a linear system

dx

_

- = Ax , dr

[35 -2]_ _

3

x

w1.th

x_0 =

[o] 1

.

The e igenvalues are A.1.2 = ±i, so that p = 0 and q = 1. This teiLs us that the trajectory is an ellipse. To detennine the d.irection of the trajec tory (clockwise or counterclockwise) and its rough shape, we can draw the vector field Ai for a

v

= 2 e"' [

=

Solution

eigenvettors ii1.2 = ± iw. . Tak:ing the same approach as in the di crete case (Fact 6 .5.3), we ~an wnte tbe real SOlUtion x(t ) = C le ). 11 V1 + c2e).1 1 V2 (wbere C ! = h + ik and C2 = C!) as

2e"' (h cos(qr)v- h sin (q r)ÜJ -

- si n(c'j>t) ] [ cos(c'j>t ) b

in the case of the discrete system x(t + 1) = Ax(t) (Fact 6.5.3). The factor r' is replaced by e" 1 , and 4> is replaced by q. This make good sense, because A.1 = r 1 ( cos(c'j>t) + i sin (c'j>t)) in the formula for the discrete system is replaced by e'·1 = e"' ( cos(q r) + i sin(qt)) in the continuous case (Fact 8.2.3).

where A is a real 2 x 2 matrix. Then the zero tate is an asymptotical ly stab le equi librium solut.ion if (and only if) tr(A) < 0 and det(A) > 0.

=

a]

x(t) = r 1[ Ü;

Consider the system

w, v.

The trajectories are either ellipses (l.inearly d.istorted circles), .if p = 0, or spirals, spiraling outward if p is positive andinward if p is negative. Note the similarity of the fonnula in Fact 8.2.6 to the formu la

- =Ax , dr

Fact 8.2.6

- sin(q t) ] [ a ] cos(qt) b

where a and b are the coord.inates of .X0 with respect to the bas is

Solution

Fact 8.2.5

u ][ c?s(qt) sm(qt)

Figure 7

x = ±e~. and sketch the flow line starring at [ ~ J.

See

466 •

Chap.

Linear

tem of Di ffe rential Equation

Sec. 8.2 The Complex Case: Euler's Formula •

o w Iet u fin d a fom1Uia fo r the trajectory. i- 3 Ei = ker [ _ 5

I -

2

[i

is represented by the point ( tr(A) , det(A)). Recall that the characteristic equation is

. 2 .., ] = span [ .-23 ]

I + J

~ 3 J = ..._...,_., [ =~ J+ i [

46 7

2

>.. -

n

tr(A)A. + det (A) = 0

and the eigenvalues are

'-,..-'

ii T he linear

tem

,r0 =

.t(l = e P' [ ÜJ

=

=

-3

AJ.2 =

tr(A)

± J (tr(A))2 -

4 det(A)).

-] [ CO (q t ) in (qt )

[ CO (t ) in(t)

-

Therefore, the eigenvalues of A are real if (and only if) the point (tr(A) , det(A)) is located below or on the parabola

a]

in q1 ) ] [ CO (qt ) b

det (A) = (

- . in (t ) ] [ I ] cos(t )

tr~A) ) 2

0 in the tr(A) - det (A) plane. See Figure 9. Note that there are fi ve major cases, cotTe ponding to the regions in Figure 9, and ome exceptional cases, coiTesponding to the dividing line . The ca e when

[ ~ =~ J [ c~~i;~ ] = Co (t ) [ ~] +sin (t) [ =~ J

] - 2 in (t ) - [ CO (t) - 3 in (t) .

det(A) = ( - tr (A) - ) 2

You can check that

c1.-r _ = Ax dt

-

and

i (O) = [

~].

Consider a 2 x 2 matrix A. The variou seenarios for the system dx / dt = A.r can be conveniently represented in the tr(A) - det(A) p lane, where a 2 x 2 matrix A

Figure 9 31T

r =2

2

is di cus ed in Exercises 35 and 37 . What does the phase portrait Iook like when det(A) = 0 and rr(A) =!= 0? In Figure 10 we take another Iook at the five major types of phase porrrai ts. Both in the d.iscrete and in tbe continuous case, we sketcb the phase portraits produced by variou eigenvalue . We include the ca e of an ellipse, since it is important in application .

The trajectory is the ellip e shown in Figure 8.

Figure 8

~(

aw + b- ha the oluti on a = 1 b = 0. T herefore,

-2]

[~

Ü;

- det(A) = (

tr(A)

2

)2

468 •

Ch ap. 8 Linea r Systems of Diffe rent ial Equati o ns Sec. 8.2 Th e Camplex Case: Eul er's Fo rmula •

Figure 10 The major types of

phase partraits.

Discrete

Continuous

469

E X. E R C I S E S GOALS Use tbe definition of the complex-valu ed expo nential function Solve the system

cLx

z = e'·

_

- = Ax dt for a 2 x 2 mattix A with complex eigenvalues p 1. Find e2n i . 2. Find eC 112lrr i .

± iq.

3. Write z = - I+ i in polar form as z =reit/> . 4. Sketch the trajectory of the complex-valued function =

Z

e 3ir .

What .is the petiod? 5. Sketch the trajectory of the complex- valued function

z=

e (-O. l -2i )r .

6. Find all complex solutions of the system

-2]-

~: =[;

-3

X

in the form given in Fact 8.2.3. What solution do you get if you Iet c 1 C2

= I?

7. Derermine the stability of the system

"-1.2 = p ± iq p2 + q 2 > 1

AJ.2 = p

±

p>O

2]-

~~ = [ -~

iq

-4

X.

8. Consider a system

dx

_

-=Ax dt ,

"-1.2 = p ± iq p2 + q2 < 1

"-1.2 =

p ± iq

p
where A is a symmettic matrix. When is the zero state a stable equilibrium solution? Give your answer in terms of the definiteness of the matrix A . 9. Consider a system

d.r _ dt = Ax , where A is a 2 x 2 matrix with tr(A) < 0. We are told that A has no real eigenvalues. What can you say about the stability of the system ?

"-1.2 = p ± iq p2 + q2 = 1

A. t ,2

= ± iq

10. Consider a quadratic form $q(\vec{x}) = \vec{x} \cdot A\vec{x}$ of two variables, $x_1$ and $x_2$. Consider the following system of differential equations:
$$\frac{dx_1}{dt} = \frac{\partial q}{\partial x_1}, \qquad \frac{dx_2}{dt} = \frac{\partial q}{\partial x_2},$$
or, more succinctly,
$$\frac{d\vec{x}}{dt} = \operatorname{grad}(q).$$
a. Show that the system $d\vec{x}/dt = \operatorname{grad}(q)$ is linear by finding a matrix $B$ (in terms of the symmetric matrix $A$) such that $\operatorname{grad}(q) = B\vec{x}$.
b. When $q$ is negative definite, draw a sketch showing possible level curves of $q$. On the same sketch, draw a few trajectories of the system $d\vec{x}/dt = \operatorname{grad}(q)$. What does your sketch suggest about the stability of the system $d\vec{x}/dt = \operatorname{grad}(q)$?
c. Do the same as in part b for an indefinite quadratic form.
d. Explain the relationship between the definiteness of the form $q$ and the stability of the system $d\vec{x}/dt = \operatorname{grad}(q)$.
11. Do parts a and d of Exercise 10 for a quadratic form of $n$ variables.
12. Determine the stability of the system
$$\frac{d\vec{x}}{dt} = \begin{bmatrix} \cdot & 1 & 0 \\ -1 & \cdot & \cdot \\ \cdot & -1 & -2 \end{bmatrix} \vec{x}.$$
13. If the system $d\vec{x}/dt = A\vec{x}$ is stable, is $d\vec{x}/dt = A^{-1}\vec{x}$ stable as well? How can you tell?
14. Negative feedback loops. Suppose some quantities $x_1(t), x_2(t), \ldots, x_n(t)$ can be modeled by differential equations of the form
$$\frac{dx_1}{dt} = -k_1 x_1 - b x_n, \qquad \frac{dx_2}{dt} = x_1 - k_2 x_2, \qquad \ldots, \qquad \frac{dx_n}{dt} = x_{n-1} - k_n x_n,$$
where $b$ is positive and the $k_i$ are positive (the matrix of this system has negative numbers on the diagonal, 1's directly below the diagonal, and a negative number in the top right corner). We say that the quantities $x_1, \ldots, x_n$ describe a (linear) negative feedback loop.
a. Describe the significance of the entries in the system above, in practical terms.
b. Is a negative feedback loop with two components ($n = 2$) necessarily stable?
c. Is a negative feedback loop with three components necessarily stable?
15. Consider a noninvertible $2 \times 2$ matrix $A$ with a positive trace. What does the phase portrait of the system $d\vec{x}/dt = A\vec{x}$ look like?
16. Consider the system
$$\frac{d\vec{x}}{dt} = \begin{bmatrix} 0 & a \\ 1 & b \end{bmatrix} \vec{x},$$
where $a$ and $b$ are arbitrary constants. For which choices of $a$ and $b$ is the zero state a stable equilibrium solution?
17. Consider the system
$$\frac{d\vec{x}}{dt} = \begin{bmatrix} -1 & k \\ -k & \cdot \end{bmatrix} \vec{x},$$
where $k$ is an arbitrary constant. For which choices of $k$ is the zero state a stable equilibrium solution?
18. Consider a diagonalizable $3 \times 3$ matrix $A$ such that the zero state is a stable equilibrium solution of the system $d\vec{x}/dt = A\vec{x}$. What can you say about the determinant and the trace of $A$?
19. True or false? If the trace and the determinant of a $3 \times 3$ matrix $A$ are both negative, then the origin is a stable equilibrium solution of the system $d\vec{x}/dt = A\vec{x}$. Justify your answer.
20. Consider a $2 \times 2$ matrix $A$ with eigenvalues $\pm\pi i$. Let $\vec{v} + i\vec{w}$ be an eigenvector of $A$ with eigenvalue $\pi i$. Solve the initial value problem
$$\frac{d\vec{x}}{dt} = A\vec{x} \quad \text{with} \quad \vec{x}_0 = \vec{w}.$$
Draw the solution in the accompanying figure. Mark the vectors $\vec{x}(0)$, $\vec{x}(1)$, $\vec{x}(2)$, and $\vec{x}(4)$.
21. Ngozi opens a bank account with an initial balance of 1000 Nigerian naira. Let $b(t)$ be the balance in the account at time $t$; we are told that $b(0) = 1000$. The bank is paying interest at a continuous rate of 5% per year. Ngozi makes deposits into the account at a continuous rate of $s(t)$ (measured in naira per year). We are told that $s(0) = 1000$, and that $s(t)$ is increasing at a continuous rate of 7% per year. (Ngozi can save more as her income goes up over time.)
a. Set up a linear system of the form
$$\frac{db}{dt} = \,?\,b + \,?\,s, \qquad \frac{ds}{dt} = \,?\,b + \,?\,s$$
(time is measured in years).
b. Find $b(t)$ and $s(t)$.


22. For each of the linear systems below, find the matching phase portrait among the eight portraits, labeled I through VIII, on the following page.
a. $\vec{x}(t+1) = \begin{bmatrix} 2 & \cdot \\ 2.5 & 0.5 \end{bmatrix} \vec{x}(t)$
b. $\vec{x}(t+1) = \begin{bmatrix} -1.5 & \cdot \\ -1 & 0.5 \end{bmatrix} \vec{x}(t)$
c. $\dfrac{d\vec{x}}{dt} = \begin{bmatrix} -2.5 & 0.5 \\ \cdot & \cdot \end{bmatrix} \vec{x}$
d. $\dfrac{d\vec{x}}{dt} = \begin{bmatrix} \cdot & 0 \\ 1 & \cdot \end{bmatrix} \vec{x}$
e. $\dfrac{d\vec{x}}{dt} = \begin{bmatrix} 3 & 0 \\ -2.5 & 0.5 \end{bmatrix} \vec{x}$

(Phase portraits I through VIII, each drawn on a coordinate grid running from $-3$ to $3$ on both axes.)

Find all real solutions of the systems in Exercises 23 to 26.

23. $\dfrac{d\vec{x}}{dt} = \begin{bmatrix} 0 & -9 \\ 4 & 0 \end{bmatrix} \vec{x}$
24. $\dfrac{d\vec{x}}{dt} = \begin{bmatrix} 2 & 4 \\ -4 & 2 \end{bmatrix} \vec{x}$
25. $\dfrac{d\vec{x}}{dt} = \begin{bmatrix} \cdot & \cdot & -2 \\ 0 & -4 & 0 \\ -2 & -3 & -3 \end{bmatrix} \vec{x}$
26. $\dfrac{d\vec{x}}{dt} = \begin{bmatrix} -11 & 15 \\ 7 & -6 \end{bmatrix} \vec{x}$

Solve the systems in Exercises 27 to 30. Give the solution in real form. Sketch the solution.

27. $\dfrac{d\vec{x}}{dt} = \begin{bmatrix} \cdot & \cdot \\ -2 & 3 \end{bmatrix} \vec{x}$ with $\vec{x}(0) = \begin{bmatrix} \cdot \\ \cdot \end{bmatrix}$
28. $\dfrac{d\vec{x}}{dt} = \begin{bmatrix} -1 & \cdot \\ \cdot & \cdot \end{bmatrix} \vec{x}$ with $\vec{x}(0) = \begin{bmatrix} \cdot \\ \cdot \end{bmatrix}$
29. $\dfrac{d\vec{x}}{dt} = \begin{bmatrix} -1 & 1 \\ \cdot & \cdot \end{bmatrix} \vec{x}$ with $\vec{x}(0) = \begin{bmatrix} \cdot \\ \cdot \end{bmatrix}$
30. $\dfrac{d\vec{x}}{dt} = \begin{bmatrix} 7 & 10 \\ -4 & -5 \end{bmatrix} \vec{x}$ with $\vec{x}(0) = \begin{bmatrix} \cdot \\ \cdot \end{bmatrix}$

31. Consider the mass-spring system sketched below.

(Sketch of the mass-spring system: a block attached to a spring.)

Let $x(t)$ be the deviation of the block from the equilibrium position at time $t$. Consider the velocity $v(t) = dx/dt$ of the block. There are two forces acting on the mass: the spring force $F_s$, which is assumed to be proportional to the displacement $x$, and the force $F_f$ of friction, which is assumed to be proportional to the velocity:
$$F_s = -px, \qquad F_f = -qv,$$
where $p > 0$ and $q \geq 0$ ($q$ is 0 if the oscillation is frictionless). Therefore, the total force acting on the mass is
$$F = F_s + F_f = -px - qv.$$
By Newton's second law of motion we have
$$F = ma = m\frac{dv}{dt},$$
where $a$ represents acceleration and $m$ the mass of the block. Combining the last two equations, we find that
$$m\frac{dv}{dt} = -px - qv, \quad \text{or} \quad \frac{dv}{dt} = -\frac{p}{m}x - \frac{q}{m}v.$$
Let $b = p/m$ and $c = q/m$ for simplicity. Then the dynamics of this mass-spring system are described by the system
$$\frac{dx}{dt} = v, \qquad \frac{dv}{dt} = -bx - cv \qquad (b > 0,\; c \geq 0).$$
Sketch a phase portrait for this system in each of the following cases, and describe briefly the significance of your trajectories in terms of the movement of the block. Comment on the stability in each case.
a. $c = 0$ (frictionless). Find the period.
b. $c^2 < 4b$ (underdamped).
c. $c^2 > 4b$ (overdamped).
32. Prove the product rule for derivatives of complex-valued functions.
33. a. For a differentiable complex-valued function $z(t)$, find the derivative of
$$\frac{1}{z(t)}.$$
b. Prove the quotient rule for derivatives of complex-valued functions.
In both parts of this exercise you may use the product rule (Exercise 32).
34. Let $z_1(t)$ and $z_2(t)$ be two complex-valued solutions of the initial value problem
$$\frac{dz}{dt} = \lambda z \quad \text{with} \quad z(0) = 1$$
(where $\lambda$ is a complex number). Suppose that $z_2(t) \neq 0$ for all $t$.
a. Using the quotient rule (Exercise 33), show that the derivative of
$$\frac{z_1(t)}{z_2(t)}$$
is zero. Conclude that $z_1(t) = z_2(t)$.
b. Show that the initial value problem
$$\frac{dz}{dt} = \lambda z \quad \text{with} \quad z(0) = z_0$$
has a unique complex-valued solution $z(t)$. Hint: One solution is given in the text.
35. Consider a real $2 \times 2$ matrix $A$ with $f_A(\lambda) = \lambda^2$. Recall that $A^2 = 0$ (see Exercise 7.2.56). Show that the initial value problem
$$\frac{d\vec{x}}{dt} = A\vec{x} \quad \text{with} \quad \vec{x}(0) = \vec{x}_0$$
has the solution
$$\vec{x}(t) = \vec{x}_0 + tA\vec{x}_0.$$
Sketch the vector field $A\vec{x}$ and the trajectory $\vec{x}(t)$.
36. Use Exercise 35 to solve the initial value problem
$$\frac{d\vec{x}}{dt} = \begin{bmatrix} 1 & 1 \\ -1 & -1 \end{bmatrix} \vec{x} \quad \text{with} \quad \vec{x}(0) = \begin{bmatrix} \cdot \\ \cdot \end{bmatrix}.$$
Sketch the vector field and the trajectory $\vec{x}(t)$.
37. Consider a real $2 \times 2$ matrix $A$ with $f_A(\lambda) = (\lambda - \lambda_0)^2$. Solve the initial value problem
$$\frac{d\vec{x}}{dt} = A\vec{x} \quad \text{with} \quad \vec{x}(0) = \vec{x}_0.$$
For which values of $\lambda_0$ is the zero state a stable equilibrium solution of the system? Hint: Use Exercise 8.1.24 and Exercise 35 above.
38. Use Exercise 37 to solve the initial value problem
$$\frac{d\vec{x}}{dt} = \begin{bmatrix} -1 & \cdot \\ \cdot & \cdot \end{bmatrix} \vec{x} \quad \text{with} \quad \vec{x}(0) = \begin{bmatrix} \cdot \\ \cdot \end{bmatrix}.$$
Sketch the trajectory $\vec{x}(t)$.
39. Solve the system
$$\frac{d\vec{x}}{dt} = \begin{bmatrix} \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot \end{bmatrix} \vec{x}.$$
Compare with Exercise 8.1.24. When is the zero state a stable equilibrium solution?
40. An eccentric mathematician is able to gain autocratic power in a small Alpine country. In her first decree, she announces the introduction of a new currency, the Euler, which is measured in complex units. Banks are ordered to pay only imaginary interest on deposits.
a. If you invest 1000 Euler at $5i\%$ interest, compounded annually, how much money do you have after 1 year, after 2 years, after $t$ years? Describe the effect of compounding in this case. Sketch a trajectory showing the evolution of the balance in the complex plane.
b. Do part a in the case when the $5i\%$ interest is compounded continuously.
c. Suppose people's social standing is determined by the modulus of the balance of their bank account. Under these circumstances, would you choose an account with annual compounding or with continuous compounding of interest?
(This problem is based on an idea of Prof. D. Mumford, Harvard University.)

9 LINEAR SPACES

9.1 AN INTRODUCTION TO LINEAR SPACES

In this chapter, we present the basic concepts of linear algebra in a more general context. Here is an introductory example, where we use many concepts of linear algebra in the context of functions, rather than vectors in $\mathbb{R}^n$. Consider the differential equation (DE)
$$\frac{d^2x}{dt^2} = -x, \quad \text{or} \quad \frac{d^2x}{dt^2} + x = 0.$$
Going through a list of "simple" functions, we can guess the solutions
$$x_1(t) = \cos(t) \quad \text{and} \quad x_2(t) = \sin(t).$$
We observe that all "linear combinations"
$$x(t) = c_1\cos(t) + c_2\sin(t)$$
are solutions as well (verify this). Using techniques from Chapter 8, we can show that these are in fact all the solutions of the DE (see Exercise 50). This means that the functions $x_1(t) = \cos(t)$ and $x_2(t) = \sin(t)$ form a "basis" of the "solution space" $V$ of the DE. The "dimension" of $V$ is 2.

Now let $C^\infty$ be the set of all smooth functions from $\mathbb{R}$ to $\mathbb{R}$ (these are the functions that we can differentiate as many times as we want). Because $x_1(t) = \cos(t)$ and $x_2(t) = \sin(t)$ are smooth functions and $V$ is closed under linear combinations, $V$ is a "subspace" of $C^\infty$. Let us define the transformation $T: C^\infty \to C^\infty$ given by
$$T(x) = \frac{d^2x}{dt^2} + x.$$
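These guesses are easy to confirm with a computer algebra system. The following sketch assumes the third-party sympy library (the code is ours, not the book's); it checks that $\cos t$, $\sin t$, and an arbitrary linear combination of them all satisfy the DE.

```python
import sympy as sp

t, c1, c2 = sp.symbols('t c1 c2')

def residual(x):
    # plug a candidate x(t) into the left-hand side of x'' + x = 0
    return sp.simplify(sp.diff(x, t, 2) + x)

print(residual(sp.cos(t)))                     # 0
print(residual(sp.sin(t)))                     # 0
print(residual(c1*sp.cos(t) + c2*sp.sin(t)))   # 0: every linear combination solves the DE
```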

It follows from the basic rules for differentiation that
$$T(x + y) = T(x) + T(y) \quad \text{and} \quad T(kx) = kT(x),$$
for all smooth functions $x$ and $y$, and all real scalars $k$. We can summarize these properties by saying that $T$ is a "linear transformation" from $C^\infty$ to $C^\infty$. The "kernel" of $T$ is
$$\ker(T) = \{x \text{ in } C^\infty : T(x) = 0\} = \left\{x \text{ in } C^\infty : \frac{d^2x}{dt^2} + x = 0\right\} = V.$$
In Section 9.4 we will see that thinking of the solution space $V$ as the kernel of the linear transformation $T$ helps us analyze $V$ (just as it was useful to interpret the solution space of a linear system $A\vec{x} = \vec{0}$ as the kernel of the linear transformation $T(\vec{x}) = A\vec{x}$).

What are the "eigenfunctions" and "eigenvalues" of the linear transformation $T$? We are looking for nonzero smooth functions $x$ and scalars $\lambda$ such that
$$T(x) = \lambda x.$$
In Section 9.4 we will solve this problem systematically. Here let us just give some examples:
$$T(e^t) = 2e^t,$$
so that $e^t$ is an eigenfunction with eigenvalue 2;
$$T(t) = t,$$
so that $t$ is an eigenfunction with eigenvalue 1;
$$T(\cos(t)) = 0,$$
so that $\cos(t)$ is an eigenfunction with eigenvalue 0.

We will now make these informal ideas more precise.

Definition 9.1.1 Linear spaces
A (real) linear space¹ ² $V$ is a set endowed with a rule for addition (if $f$ and $g$ are in $V$, then so is $f + g$) and a rule for scalar multiplication (if $f$ is in $V$ and $k$ is in $\mathbb{R}$, then $kf$ is in $V$) such that these operations satisfy the eight axioms below³ (for all $f$, $g$, $h$ in $V$ and all $c$, $k$ in $\mathbb{R}$):
1. $(f + g) + h = f + (g + h)$.
2. $f + g = g + f$.
3. There is a neutral element $n$ in $V$ such that $f + n = f$, for all $f$ in $V$. This $n$ is unique and denoted by 0.
4. For each $f$ in $V$ there is a $g$ in $V$ such that $f + g = 0$. This $g$ is unique and denoted by $(-f)$.
5. $k(f + g) = kf + kg$.
6. $(c + k)f = cf + kf$.
7. $c(kf) = (ck)f$.
8. $1f = f$.

This definition contains a lot of "fine print." In brief, a linear space is a set with two reasonably defined operations, addition and scalar multiplication, that allow us to form linear combinations of the elements of this set. Probably the most important examples of linear spaces, besides $\mathbb{R}^n$, are spaces of functions, as in the introductory example.

EXAMPLE 1 ► In $\mathbb{R}^n$, the prototype linear space, the neutral element is the zero vector, $\vec{0}$. ◄

EXAMPLE 2 ► On page 477 we introduced the linear space $C^\infty$, the set of all smooth functions from $\mathbb{R}$ to $\mathbb{R}$ (functions we can differentiate as many times as we want), with the operations
$$(f + g)(t) = f(t) + g(t) \quad \text{and} \quad (kf)(t) = kf(t),$$
for all $t$ in $\mathbb{R}$. Figure 1 illustrates the rule for scalar multiplication of functions from $\mathbb{R}$ to $\mathbb{R}$. Draw a similar sketch illustrating the rule for addition. ◄

EXAMPLE 3 ► Another linear space is $F(\mathbb{R}, \mathbb{R}^n)$, the set of all functions from $\mathbb{R}$ to $\mathbb{R}^n$, that is, all parameterized curves in $\mathbb{R}^n$. The operations are defined as in Example 2. ◄

¹The term vector space is more commonly used.
²If we work with complex scalars, then $V$ is a complex linear space. Our spaces are real unless stated otherwise.
³These axioms were established by the Italian mathematician Giuseppe Peano (1858–1932) in his Calcolo Geometrico of 1888. Peano called $V$ a "linear system."
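The eigenfunction computations above can also be checked symbolically. A minimal sketch, again assuming sympy (the operator name T below is ours):

```python
import sympy as sp

t = sp.symbols('t')

def T(x):
    # the linear transformation T(x) = x'' + x from the introductory example
    return sp.simplify(sp.diff(x, t, 2) + x)

print(T(sp.exp(t)))   # 2*exp(t): eigenfunction with eigenvalue 2
print(T(t))           # t:        eigenfunction with eigenvalue 1
print(T(sp.cos(t)))   # 0:        eigenfunction with eigenvalue 0
```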

Figure 1 (The graphs of a function $f$ and of a scalar multiple $kf$.)

EXAMPLE 4 ► Let $X$ be an arbitrary set. Then $V = F(X, \mathbb{R}^n)$, the set of all functions from $X$ to $\mathbb{R}^n$ with the operations defined in Example 2, is a linear space. ◄

EXAMPLE 5 ► If addition and scalar multiplication are given as in Definition 1.3.9, then $\mathbb{R}^{m \times n}$, the set of all $m \times n$ matrices, is a linear space. The neutral element is the zero matrix. ◄

EXAMPLE 6 ► The complex numbers $\mathbb{C}$ form a real linear space (and also a complex linear space). ◄

EXAMPLE 7 ► The set of all infinite sequences of real numbers is a linear space, where addition and scalar multiplication are defined componentwise:
$$(x_0, x_1, x_2, \ldots) + (y_0, y_1, y_2, \ldots) = (x_0 + y_0,\; x_1 + y_1,\; x_2 + y_2,\; \ldots),$$
$$k(x_0, x_1, x_2, \ldots) = (kx_0, kx_1, kx_2, \ldots).$$
The neutral element is the sequence $(0, 0, 0, \ldots)$. ◄

EXAMPLE 8 ► The set of all geometric vectors in a (coordinate-free) plane is a linear space. The rules for addition and scalar multiplication are illustrated by Figure 2. ◄

Figure 2 (The sum $\vec{v} + \vec{w}$ and scalar multiples of geometric vectors in the plane.)

Consider the elements $f_1, f_2, \ldots, f_m$ and $f$ of a linear space. We say that $f$ is a linear combination of the $f_i$ if
$$f = c_1 f_1 + c_2 f_2 + \cdots + c_m f_m$$
for some scalars $c_1, \ldots, c_m$. Since the basic notions of linear algebra (initially introduced for $\mathbb{R}^n$) are defined in terms of linear combinations, we can now generalize these notions to linear spaces without modifications. A short version of the rest of this section would say that the concepts of subspace, linear independence, basis, dimension, linear transformation, kernel, image, and eigenvalue can be defined for linear spaces in just the same way as for $\mathbb{R}^n$. What follows is the long version, with many examples.

Definition 9.1.2 Subspaces
A subset $W$ of a linear space $V$ is called a subspace of $V$ if
a. $W$ contains the neutral element 0 of $V$;
b. $W$ is closed under addition (if $f$ and $g$ are in $W$, then so is $f + g$);
c. $W$ is closed under scalar multiplication (that is, if $f$ is in $W$ and $k$ is a scalar, then $kf$ is in $W$).
We can summarize parts b and c by saying that $W$ is closed under linear combinations.

Note that a subspace $W$ of a linear space $V$ is a linear space in its own right. (Why do the axioms hold for $W$?)

EXAMPLE 9 ► Here are two subspaces of $C^\infty$:
a. $P$, the set of all polynomial functions $f = a_0 + a_1 t + \cdots + a_n t^n$;
b. $P_n$, the set of all polynomial functions of degree $\leq n$. ◄

EXAMPLE 10 ► The upper triangular matrices are a subspace of $\mathbb{R}^{n \times n}$, the space of all $n \times n$ matrices. (Verify this.) ◄

EXAMPLE 11 ► The quadratic forms of $n$ variables form a subspace of $F(\mathbb{R}^n, \mathbb{R})$. If $q(\vec{x}) = \vec{x}^T A \vec{x}$ and $p(\vec{x}) = \vec{x}^T B \vec{x}$ are two quadratic forms, then so is $(q + p)(\vec{x}) = q(\vec{x}) + p(\vec{x}) = \vec{x}^T (A + B)\vec{x}$. Think about scalar multiples. ◄

EXAMPLE 12 ► Consider an $n \times n$ matrix $A$. We claim that the set $W$ of all solutions of the system $d\vec{x}/dt = A\vec{x}$ is a subspace of $F(\mathbb{R}, \mathbb{R}^n)$. We check that $W$ is closed under scalar multiplication. If $\vec{x}(t)$ is a solution and $k$ is a scalar, then
$$\frac{d}{dt}(k\vec{x}) = k\frac{d\vec{x}}{dt} = kA\vec{x} = A(k\vec{x}),$$
so that $k\vec{x}(t)$ is a solution as well. Check that $W$ is closed under addition and that $W$ contains the curve $\vec{x}(t) = \vec{0}$. ◄

EXAMPLE 13 ► Consider the set $W$ of all noninvertible $2 \times 2$ matrices in $\mathbb{R}^{2 \times 2}$. Is $W$ a subspace of $\mathbb{R}^{2 \times 2}$?

I W a ub pace

482 •

Chap. 9 Linear Spaces

Sec. 9.1 An Introduction to Li near Spaces •

Solution

w contain

483

Solution the neutral element [

~ ~]

of JR 2

2

and is closed under scalar multi-

We can write any 2 x 2 matrix A

plication, but W is not closed under addition. As a counterexarnple, consider the sum

in W

in W

2 2

Therefore, W is not a subspace of JR x

not in W

.

Next we generauze the notions of linear independence and basi

= [ ~ ~]

as

[~ ~ ] =a[~ ~]+b[~ ~] + c [~ ~]+d[~ ~l '-v--'

'-v--'

'--...--'

'---v--'

E11

E1 2

E2 1

E22

This shows that the matrices E 11, E 12, E 21, &z 2 span JR 2x 2. The four rnatrices are also linearly independent: none of them is a linear combination of the otbers since each has a 1 in a position where the three others have 0. Thi s shows tha; E11, E1 2, E21, E22 is a basis of JR 2 Y 2 ; tb at is, dirn (JR2x2) = 4. ~

EXAMPLE 15 ..... Find a basis of ? 3, the space of all polynomials of degree ::; 3. Definition 9.1.3

Linear independence, span, basis Consider the elements

!I' .... _{,,

Solution

of a unea:r space V.

a. We say tbat the f; span V if every f in V can be expressed as a linear

We can write any polynornial f(t) of degree ::; 3 as f(t) =

combination of the f;.

+c2h+···+c"J;, = 0

CQ · 1 + CJ · l

bas only the trivial solution Ct = C2 = · · · = C11 =

+ +ct +

0,

that is, if none of the f; is a linear combination of the other

2

bt

dt 3 =



1

+ b. t +

c. r

2 +

d.

r3 .

This shows that the polynom:ials l t , t 2 , 13 span P3 . Are the four polynomials also linearly independent? One way to find out is to consider a relation

b. We say tbat the f; are linearlv independent if the equation CJ!J

a

fJ- 1

c. We say that tbe f; are a basis of V if they are unearly independent and span V. This is the ca e if (and only if) every f in V can be written uniquely as a linear combination of the J; .2

+ C2 · 12 +

C) ·

t3

= Co+ c 1 t + c 2t 2

+ c3 13 =

0.

lf some of the c; were nonzero. thi s polynomial could bave at mo. t rhree zero ; therefore, all tbe c; must be zero. We conclude that I , 1 , r2 , 13 is a basis of p 3 o that dim ( P3) = 4. ~

EXAMPLE 16 ..... Consider a system d,r jdt = A-i' where A is an 11 x n matrix. Suppose there is a real eigenbasis ü,, Ü2, ... , Ün for A with associated eigenvalues A1, . .. , An. In Example 12 we have seen that the solutions of tbe sy rem form a sub pace W of F(IR, lR"). Find a basis for W and thus determ.ine dim(W) .

Solution

Fact 9.1.4

Dimension Jf a linear space V has a basis with n elernents, then all other bases of V consist of n elements as weil. We then say that the dimen sion of V is n: dim (V) = n.

By Fact 8.1.2 we can write the solutions of the syste m as

,((/) = c , e.A ,r Ü1 + · · · + c

11

that is rhe curve x ,(t) = eAtli/ ,, ... , xf/(1) independent? Consider a relation C

We defer the proof of thi s fact to the next section.

EXAMPLE 14 ..... Find a basis of JR2 x 2 and tbus deterrnine dim (JR 2 x 2).

See Definiti ons 3.2.3 and 3.2.4 and Fact 3.2.5. 2 See Defi niti on 3.2.3 and Fact 3.2.7.

1

,c,>.,r-v I + · · · + c e>.,. r;;

Evaluating this equation at t

II

= 0, CJ'ÜI

1

= e1'"

e.A,.r Ü11 ,

pan

ÜJI

-0-

""-

w.

Are the :\-;(1) linearly

.

we find that

+ ·· · +

C11 Ün = Ü.

Since the Ü; are linearly independent, we can concl ucle that c 1 = c2 = ... = c11 = 0; that is. the (t are linearly independent. T herefore, the (t) are a ba is of W and dim (W) = n . ~

x;

x;
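The flattening idea behind Example 14, identifying a $2 \times 2$ matrix with a vector in $\mathbb{R}^4$, also gives a mechanical test for linear independence of matrices. A sketch assuming numpy:

```python
import numpy as np

def flatten(M):
    # identify a 2x2 matrix with a vector in R^4
    return np.asarray(M, dtype=float).reshape(4)

E11, E12 = [[1, 0], [0, 0]], [[0, 1], [0, 0]]
E21, E22 = [[0, 0], [1, 0]], [[0, 0], [0, 1]]

V = np.column_stack([flatten(M) for M in (E11, E12, E21, E22)])
print(np.linalg.matrix_rank(V))   # 4: the four matrices are linearly independent,
                                  # so dim(R^{2x2}) = 4, as in Example 14
```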

EXAMPLE 17 ► Find a basis of $\mathbb{C}$ as a real linear space.

Solution
Any complex number $z$ can be written uniquely as
$$z = x \cdot 1 + y \cdot i$$
for two real numbers $x$ and $y$. Therefore, $1$, $i$ is a basis of $\mathbb{C}$, and $\dim(\mathbb{C}) = 2$. The graphical representation of the complex numbers in the complex plane is based on this fact. ◄

EXAMPLE 18 ► Find a basis of the space of all skew-symmetric $3 \times 3$ matrices $A$, and determine the dimension of this space. Recall that a matrix $A$ is called skew-symmetric if $A^T = -A$. Note that this implies that the diagonal elements of $A$ are 0, since $a_{ii} = -a_{ii}$.

Solution
A skew-symmetric $3 \times 3$ matrix can be written as
$$A = \begin{bmatrix} 0 & a & b \\ -a & 0 & c \\ -b & -c & 0 \end{bmatrix},$$
where $a$, $b$, $c$ are arbitrary constants. Writing $A$ as a linear combination with the arbitrary constants as coefficients, we find that
$$\begin{bmatrix} 0 & a & b \\ -a & 0 & c \\ -b & -c & 0 \end{bmatrix} = a\begin{bmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} + b\begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{bmatrix} + c\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{bmatrix}.$$
These three matrices are clearly linearly independent: they are a basis of the space of skew-symmetric $3 \times 3$ matrices. This space is therefore three-dimensional. ◄

Based on our work in Examples 14 to 18, we present the following "recipe" for finding a basis of a linear space $V$:

Finding a basis of a linear space $V$
a. Write down a typical element of $V$ in terms of some arbitrary constants.
b. Using the arbitrary constants as coefficients, express your typical element as a linear combination.
c. Verify that the elements of $V$ in this linear combination are linearly independent; then they form a basis of $V$.

EXAMPLE 19 ► Find a basis of the space $V$ of all polynomials $f(t)$ in $P_2$ such that $f'(1) = 0$.

Solution
For this space, writing down a typical element of $V$ requires some work. Consider a polynomial $f(t) = a + bt + ct^2$ in $P_2$, with $f'(t) = b + 2ct$, so that $f'(1) = b + 2c$. The condition $f'(1) = 0$ means that $b + 2c = 0$; that is, $b = -2c$. Thus a typical element of $V$ has the form
$$f(t) = a - 2ct + ct^2,$$
where $a$ and $c$ are arbitrary constants. Following step b in the recipe, we write
$$f(t) = a \cdot 1 + c \cdot (t^2 - 2t).$$
Because the two functions $1$ and $t^2 - 2t$ are linearly independent, they form a basis of $V$. Check that the functions satisfy the requirement $f'(1) = 0$. See Figure 3. ◄

Figure 3 (The graphs of $f_1(t) = 1$ and $f_2(t) = t^2 - 2t = t(t - 2)$.)

Not all linear spaces have a finite basis $f_1, f_2, \ldots, f_n$. Consider, for example, the space $P$ of all polynomials. We will show that an arbitrary sequence $f_1, f_2, \ldots, f_n$ of polynomials does not span $P$, and therefore isn't a basis of $P$. Let $N$ be the maximum of the degrees of the polynomials $f_1, f_2, \ldots, f_n$. Then all linear combinations of $f_1, \ldots, f_n$ will be in $P_N$ (the space of the polynomials of degree $\leq N$). A polynomial of higher degree, such as $f(t) = t^{N+1}$, will not be in the span of $f_1, \ldots, f_n$. Therefore, the sequence $f_1, f_2, \ldots, f_n$ is not a basis of $P$.
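The "recipe" can be mechanized: the condition $f'(1) = 0$ is one linear equation in the coefficients $(a, b, c)$ of $f(t) = a + bt + ct^2$, so a basis of $V$ corresponds to a basis of the kernel of that equation. A sketch assuming sympy:

```python
import sympy as sp

# f(t) = a + b t + c t^2 has f'(1) = b + 2c, so the constraint f'(1) = 0
# reads [0 1 2] . (a, b, c)^T = 0
C = sp.Matrix([[0, 1, 2]])
for v in C.nullspace():
    print(v.T)
# (1, 0, 0)  <->  f(t) = 1
# (0, -2, 1) <->  f(t) = -2t + t^2, matching the basis found in Example 19
```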

Definition 9.1.5 Infinite-dimensional linear spaces
A linear space which does not have a finite basis $f_1, f_2, \ldots, f_n$ is called infinite-dimensional.¹

As we have just seen, the space $P$ of all polynomials is infinite-dimensional.

Next we generalize the notion of a linear transformation.

Definition 9.1.6 Linear transformations, kernel, image
Consider two linear spaces $V$ and $W$. A function $T$ from $V$ to $W$ is called a linear transformation if
$$T(f + g) = T(f) + T(g) \quad \text{and} \quad T(kf) = kT(f)$$
for all $f$ and $g$ in $V$ and all real scalars $k$. For a linear transformation $T$ from $V$ to $W$, we let
$$\ker(T) = \{f \text{ in } V : T(f) = 0\}$$
and
$$\operatorname{im}(T) = \{T(f) : f \text{ in } V\}.$$
Note that $\ker(T)$ is a subspace of $V$ and $\operatorname{im}(T)$ is a subspace of $W$.²

EXAMPLE 20 ► Let $D: C^\infty \to C^\infty$ be given by $D(f) = f'$. What is the kernel of $D$? This kernel consists of all smooth functions $f$ such that $D(f) = f' = 0$. As you may recall from calculus, these are the constant functions $f(t) = k$. Therefore, the kernel of $D$ is one-dimensional (the function $f(t) = 1$ is a basis). What is the image of $D$? The image consists of all smooth functions $g$ such that $g = f'$ for some $f$ in $C^\infty$, that is, all smooth functions $g$ which have a smooth antiderivative $f$. The fundamental theorem of calculus tells us that all smooth functions (in fact, all continuous functions) have an antiderivative. We can conclude that $\operatorname{im}(D) = C^\infty$. ◄

EXAMPLE 21 ► Let $C[0, 1]$ be the linear space of all continuous functions from the closed interval $[0, 1]$ to $\mathbb{R}$. We define
$$I: C[0, 1] \to \mathbb{R} \quad \text{given by} \quad I(f) = \int_0^1 f(t)\,dt.$$
We adopt the simplified notation $I(f) = \int_0^1 f$. To check that $I$ is linear, we apply basic rules of integration:
$$\int_0^1 (f + g) = \int_0^1 f + \int_0^1 g = I(f) + I(g) \quad \text{and} \quad \int_0^1 (kf) = k\int_0^1 f = kI(f).$$
What is the image of $I$? For each real number $k$, there is a function $f(t)$ such that $k = I(f)$: one possible choice is the constant function $f(t) = k$. Therefore,
$$\operatorname{im}(I) = \mathbb{R}. \; ◄$$

EXAMPLE 22 ► Consider the function $T: \mathbb{R}^2 \to \mathbb{C}$ given by $T\begin{bmatrix} x \\ y \end{bmatrix} = x + iy$. We leave it as an exercise to check that $T$ is linear. The transformation $T$ is clearly invertible, with
$$T^{-1}(x + iy) = \begin{bmatrix} x \\ y \end{bmatrix}.$$
We can easily go back and forth between $\begin{bmatrix} x \\ y \end{bmatrix}$ and $x + iy$. ◄

The invertible linear transformation $T$ in Example 22 makes $\mathbb{C}$ into a carbon copy of $\mathbb{R}^2$ (as far as sums and real scalar multiples are concerned). We say that $T$ is an isomorphism, and that the linear spaces $\mathbb{R}^2$ and $\mathbb{C}$ are isomorphic, which means in Greek that they have the same structure. Each carries some structures not shared by the other (for example, the dot product in $\mathbb{R}^2$ and the complex product in $\mathbb{C}$), but as far as the two basic operations of linear algebra are concerned (addition and real scalar multiplication), they behave in the same way.

Definition 9.1.7 Isomorphisms
An invertible linear transformation is called an isomorphism. We say that the linear spaces $V$ and $W$ are isomorphic if there is an isomorphism from $V$ to $W$.

¹More advanced texts introduce the concept of an infinite basis for such spaces.
²See Facts 3.1.4 and 3.1.6. To show that kernel and image are subspaces, you need the following auxiliary results, which we leave as Exercises 48 and 49 for those with an interest in axiomatics: a. If $0_V$ is the neutral element of a linear space $V$, then $0_V + 0_V = 0_V$ and $k0_V = 0_V$, for all scalars $k$. b. If $T$ is a linear transformation from $V$ to $W$, then $T(0_V) = 0_W$, where $0_V$ and $0_W$ are the neutral elements of $V$ and $W$.
488 •

hap. 9 Lin ea r Space Sec. 9.1 An Tntrod uction to Linear paces •

489

Below, we tate om u fu l fact coocerning i orn rphi ms: Be low, we give two mo re exa mples of isomorphi ms.

EXAMPLE 23 .... Show that the linear spaces lR"x"' and m. IID mx11 are tsomorp · h' tc. Fact 9.1.8

a. Jf T is an isomorph i. rn , then so is r- 1.

Solution

b. A linear tran fo rmatio n T from V to W i an i omorphism if (and only if) ker(T) = (0 } and im(T) = W.

We need to find ao isomorphi sm L fro m lR"x"' to JRm x n th t · · • . , a ts an m venible Li near tr an s1orm <11111 att on. Check that L (A) = AT does the j ob.

c. Con ider an i omorphi m T from V to W . Lf / 1, h, . . . J,, is a ba i of V, tben T (f 1 ). T(h .. . . T(J,,) is a ba i of W. d. lf II and W are i o morph ic and dim ( V) = 11 then dim ( W) = 11.

EXAMPLE 24 .... Let

F(R R) b h . , e t e lmear space of all functions from T :V

~V

a. We mu st s how that r - 1 i linear. Con ider two e lemen ts codo main of T (that i , the do ma in of r - 1). Then

or

f an d g of the

( T U+ g))(l) = (T J

for all

r -t Cf+ g) = r - 1(rr - 1Cf)+

1

+ T g)(t) .

in R Now (T U+ g ))(t)

( ince T is linear)

In a similar way you can how that r- t (kf) = kr - t (f), fo r all codomain of T and all calar. k.

= U + g)(t -

1)

= f( t - I)+ g(t

- 1)

and (T f

f in the

to solve the equation T (f = 0. Applying r - t on both sides, we find that ! = 0 so that ker T ) = (0}, as claimed . Any g in W ca n be wri tten as g = TT -t(g), so th at im (T) = W . Con versel y, s uppo e that ker(T) = {0} and im (T) = W. We have to show that the equati o n T (J) = g has a unique so lu tio n f for any g in W (by Defini tion 2.3. 1) . There is at least one uch solution f, in ce im (T) = W. Con ider two o luti ons, ft and !z: T (Jt) = T(h) = g. Then 0 = T Ut) - T (h) = T Ut - h), so th at ft - h i in the kerne] ofT ; that i , !t - h = 0 , and f t = h, a claimed. c. We will show first that the T U;) pan W. For any g in W, we can write 1 T - (g) = c1f1 +· · ·+c"J,, because the J; span V. Appl yi ng T on both side and u ing linearity, we find that g = c tT (f1) + .. . +c" T (f"), as c laimed. To show the linear independence of the T (f;), consider a re lation Ct T Cft) + · · · + c" T (j,,) = 0 or T (c if1 + · · · + c" j,,) = 0 . Since the kerne! of T is 0 , we have c 1! 1 + · · · + c" j,, = 0. The n the c; are zero since the fi are linearly independent.



+ T g)(t ) =

(T f)( t)

+ (T g)(t ) =

f( t - l )

+ g(t

- 1):

the two re ult agree. We leave it as an exercise to check that

b. Suppose first that T i an i omorphi sm. To fi nd the kerne! of T , we have

d. Fe llows from part c.

( T f)( t ) = f(t- 1),

T U+g) =Tf+ T g

T lli proof may be skipped in a fir t reading.

r r - 1Cg)) = y - 1( r (r - 1Cf) + y - 1(g))) = r - t cn + r - t cg).

by

JR to R We define

where we write T f instead of T (.f) to simplify the notation. Note th at T moves the graph of f to the right by one urtit. See Figure 4. l s T Lin ear? We first have to check that ~

Part d ~ hou ld come a no urprise, since isomorphi c paces have the sa me stru ture, as far as linear a lgebra i concerned.

Proof

v--

T (kf) = k (T f).

(What doe thi Statement mean in tenn o f shifting graph ?) . . Note th at the transform ation T is in vertible: we can get f back fro m Tf by shtfttng the graph of T f by one unit to the left (see Fig ure 5). (1' - 1 g)(t ) = g(t + I )

We conclude that T i an i ·omorphi m from V to V.

Figure 4

t - I

490 •

Chap. 9 Li near Spaces Sec. 9.1 An lntrod uction to Lin ear Spaces •

491

~he~~ C ~ s ·

an arbitrary_constan_t. Thi~ shows that all scal ars ./,. are eigenvalue of e e1genspace E;, 1 one-dimenswnal, panned by e ).t . ~

EXAMPLE 26 .... Consider the linear Iransformation r+ I

Figure S

L.

T!l) II X /1 ~ JD) II X /1 !1'0. ----r Jl'l.



g1ven by

T

L (A) = A .

Find all eigenvalues and eigenspace of L. Is L di agonalizable?

ext we generali ze the not:ion of an eigen alue.

Solution Definition 9.1.9

I~

Eigenvalues, diagonalization

L (A ) = 'AA, then L (L (A)) = A = ), 2 A

e1gen va lues a.re

Co nsider a linear transform arion T:

V --+ V ,

= 1 and

.1,. 2

o that

.1,. 2

= 1. The only possible

= - I. Now

E, = {A : Ar = A) = {symmetric matrices }

where V i a linear pace . A scalar 'A is cal led an eigenvalue of T if there is a nonzero f in V uch rhat

T (f)

.1,. 1

and

= ), j. E_ , = {A : Ar = - A} = {skew-symrnetric matrices}.

If ). is an eigenvalu e of T , we define rhe eigenspace

We leave it as an exercise to show th at

EJ.. = {f in V: T (f) = 'Af}. Now suppose that V is finite-dimen sio nal. The n the transformation T is call ed diagonalizable if there is a ba is f 1 , •• • , J,, of V such that T /;) is a scalar multiple of f;, for i = 1, ... , n. Such a basi i called an eigenbasis for T.

.

d1m(E 1) = l + 2 + ... + n =

rz (n + 1) -----=-

2

and

EXAl\tPLE 25 .... Find all eigenvalues and eigenspaces of the linear transformation D: C 00 --+ C""

given by

dx

D (x) = -

dt

.

.

dim (L 1) =1+2+· ··+(n-l ) =

(n- l )n

2

,

so that

Solution We have to olve the differenti al equ ati o n

dx

We can find an eigenbasis for L by choosing a basis of each eigenspace; thu L is diagonalizable. For example, in the case n = 2 we have the eigenba i

D x) = - = 'Ax. dt

[~ ~l

By Fact 8. 1.1 the solutions of thi s OE fo r a fi xed )._ are the functio ns ~

t basi. of E 1

[~ /'

n.

[-~ 6] t ba is of E _ 1

492 •

Chap. 9 Linear Spaces

For the convenience of the reader, we include a list of notations introduced in this section :

Sec. 9.1 An Introducnon to Linear Spaces •

493

~m x n

the set of all functions from X to Y . the linear space of all real m x n matrices.

Let V be the space of , 11 · fi · . \VI . h f ' a In mte sequences of real numbers (see Example 7) . llc o the Subsets of V given in Exercises 12 to 15 are subspaces of V ? 2 1 . The arithmet~ic seq uences, i.e., sequences of the form (a , a + k , a + 2k . a + 3k , · · .), for some constants a , k. · 13 · The geometric sequences, i.e. , seq uences of tbe form (a , ar, ar 2, ar3, .. .). for · some constants a and r .

p

the linear space of all polynormals with real coefficients.

14. The sequences (xo, x1 , .. .) which converge to 0 (that is, lim x = 0).

P"

tbe linear space of all polynormals of degree :S n. the linear space of all smooth functions from ~ to ~ (those functions that we can ditferentiate as many time as we want).

15. The "square-summable" sequences (x 0 , x 1, converges.

C[O, 1]

the linear space of a.ll contin uou s functi.ons from the closed interval [0, 1] to ~.

Find a basis for each of the spaces in Exercises 16 to 28, and determine its dimension .

D

the linear transformation from C 00 to C )O given by Df = derivative).

Notations F (X , Y)

coo

.f' (the

17-loCO

16.

~3 x 2 .

17,

}Rtlll X T/ •

18. P". 19. The real linear space

EX ER C I S ES

GOALS spaces.

Apply basic notions of linear algebra (defined earlier for

~")

to linear

Which of the subsets of P3 given in Exercises 1 to 5 are subspaces of P3? Find a basis for those that are subspaces. 1. {p(l): p(O) = 2) 2. {p(t): p(2) = 0} 3. {p(r): p' (l) = p(2)} (p' is the derivative)

1 1

4. {p(t) :

tbat is, those for which

f

x1

i=O

([2.

The space of all matrices A in ~ 2 x 2 with tr(A ) = 0. The space of a.ll symmetric 2 x 2 matrices. Tbe space of all symmetric n x n matrices. The space of all skew-symrnetric 2 x 2 matrices. The space of all skew-symmetric n x n matrices. The space of all polynormals f (t) in P2 such that f(l) = 0.

26. The space ofall polynomials f(t) in P3 suchthat f(l) = 0 and J~ f(t)dt = 0.

1

27. The space of all 2 x 2 matrices A that commute with B = [ 1 0

0] 2 .

J

p(t) dt = 0}

5. (p(t): p( - t) = - p(t) for all t} Which of the subsets of ~ 3 x 3 given in Exercises 6 to 11 are subspaces of IR 3 x 3 ? 6. The invertible 3 x 3 matrices. 7. The symrnetric 3 x 3 rnatrices.

28. The space of all 2 x 2 matrices A that commute with B = [ 1 1 0 1 . 29. In the linear space of infinite sequences, consider the subspace W of arithmetic sequences, i.e., those sequences in which the difference of any two consecutive entries is the same, for example, (4, 7, 10, 13, 16 . . . .) ,

where the ctifference of two consecutive entries is 3 and

8. The skew-symmetric 3 x 3 matrices (recall that A is skew-symmetric if AT= - A). 9. The diagonal 3 x 3 matrices.

10. The 3 x 3 matrices for which [

20. 21. 22. 23. 24. 25.

• •• ) ,

11

~] is an eigenvector.

11. The 3 x 3 matrices in reduced row echelon fonn.

(10, 6. 2, -2. -6, ... ), where the ctifference of two consecutive entries is -4. Find a basis for and thu s determine the dimension of W.

w

30. A function f(t) is called even if .f(-t) = f( t ), for all t in ~ . and odd if f( - t) = - f(t) , for all t. Are the even functions a subspace of F(IR, ~), the space of all functions from .IR to .IR? What about the odd functions? Justify your answers carefully.

494 •

Chap. 9 Li near Spaces

Sec. 9.1 An Introduction to Linear Spaces •

31. Find a basis of eacb of the fo llow in g linear paces, and thus determ ine their dimension (see Exerci e 30). a. [f in P_.: f i even }. b. {f in P4 : f is odd}. 32. Let L (iR". IT{111 ) be tbe et of aJI li near transfonnarions from iR" to ~~~~ . I · L(iR" , IT{111 ) a subspace of F (~", iR"'), the pace of all functions from iR" to iR"'? Ju tify your answer carefully.

495

46. Consider an n x n matrix A that is diagooalizab le over R Let W be the solu tion space of the system dx / dt = ki: ( ee Example 12). Is the transformation T: W--+ lll"

given by

T(x(l)) = x(O)

linear? Is it an isomorphism?

47. For the transfonn ation T defi ned in Example 24, veri fy tbat T(kf) = k(Tj) ,

Decide which of the rransformatio n in Exerci e 33 to 41 are linear. For those th at are linear, detennine whetber they are isomorphism .

33. T: P3

~ iR

1 3

given by

T(f ) =

.f(t) dt .

34. given by T (A) = A + 35. T : JR3 x 3 ~ lR given by T(A) = tr(A). 36. T : JR 2 x:>. ~ lR given by T (A) = det(A ) . 37. T: C ~ C given by T( z) = (3 + 4i)-.

38. T: JR2x 2

~ ]Jl2x 2

given by

T (A) =

39. T:

~ ]Jl2x 2

given by

T (A) = [ ;

JR2x2

40. T: P6 ~ P6

41. T : P2

~ lll

2

given by

Tl.f) =

given by

T(J)

/ 2.

s- 1AS,

f in F(JR, IR). Draw a sketch illustrati ng

48. S how that if 0 is the neutral element of a linear space V then 0 + 0 = 0 and kO = 0, for all scalars k. 49. Show that if T is a linear transformation from V to W, tben T (O v) = Ow, where Ov and Ow are the neutral elements of V and W, res pectively. SO. Find all so lutions of the differential equation

-2

T: JR2x 2 ~ JR2 2

for allreal scalars k and al l fu nctio ns th.i s fonn ula.

where S =

d 2x

u J.

-

~

Hint : ln troduce tbe auxiü a.ry funcrion y = dx /dt . The DE above can be converted into the system

~ JA.

dx dt dy - = -x dt

!" +4f'.

= [ j<~L J.

. . 2 x 2 matn.ces "1orm a 42. Do the positive senndefi mte 43. In the space F (R !Pl) of al1 func ti ons from iR to IR, functions with period 3; th at is, those functions f(t) for all t in lll. Do these functio ns fonn a subspace answer carefull y.

+x = O.

dt 2

su b space of .JR2 x 2?. consider the subset of al l such th at f(t+3). = .f(t), of F (IR, iR)? Just1fy your

y

Apply tecbniques introduced in Chapter 8.

51. Use the idea presented in Exercise 50 to fi nd all solutions of the DE d 2x

-

d t2

dx

+ 3-

dt

+2x = 0.

52. U e the idea presented in Exercise 50 to fi nd all solutions of the OE

44. Con. ider the transform ation

d 2x

-

d t2

T :IRmxn ~ F(lll" , IR/11)

dx

- 4-

dt

+ 13x =

0.

Fi nd the (real) eigenvalues and eigenspaces of the li near transformati ons in Exercises 53 to 60. Find an eigenbasi if po sible.

given by (T A)(v) = Aü.

Show rhat T is linear. Wh at is the kerne! of T? Show that the image of T i the space L (lll" , JR"') of all linear transformations from IR" to lR"' (see Exercise 32). Find the dimension of L(JR" , IRm) . 45. If T is a linear transfonnation fro m V to W and L is a linear transformation fro m W to U, is the composüe transfonn ation L o T fro m V to U · linear? How can you te ll? If T and L are iso mo rphi sms, is L o T an isomorphism as well ?

53. 54. 55. 56.

L: !Pl2 x 2 ~ JR 2 x 2 T: C 00 --+ C 00

T: C ~ C

given by

L (A) = A +AT.

given by T(f) = given by T( z) = z.

f + f' .

T: V ~ V given by T (xo , x ,, x 2, X3, . . . ) = (x,, x_, space of infinite sequences of real numbers.

57. T:C 00 --+ C given by (Tf)(t) = f(- t ). 58. T: P2 --+ P2 given by (T f)(t) = t · f ' (t) .

X3, .. . ),

where V is the

496 •

hap. 9 Linear Spaces

59. T : JR2 x2 ---+ JR 2

2

given by

T ( A) =

60. T : JR.2 x2-+JR 2 •2 given by

T (A) =

[!

Sec. 9.2 2

EXAMPLE 1 .... C 0 11 sider the linear space P2, wiili the basis

] A.

S- 1 AS,

7 - 3 I + llt 2 , find

whereS = [ ~ ~ ] ·

B

UJa.

consisting of l

,

497

t , r2_1 For f (t ) =

Solution

61. Find an igen ba i for the linear tran Formation T (A) = s- l AS from "JR.II X /1 to lR" x" , where S i an invertible diagonal matrix. 62. Let :JR.+ be tl1e et of positive real numbers. On JR+ we defi ne the "exotic" 1 operati on (usuaJ multiplication) x EB y = xy

The coordinate of

f wi th respect to

[fl,

and

are 7, - 3, and 11 , so that

r

B

~ ~n

~onsid_er a basis B o f a linear pace V, consisting of f 1 , • • • , f,, . Introducin o coordm ates 111 ~ with respect to B allows us to transform V into "IR": we ca~ define the coordmate transformation

k O x = xk.

a. Show that JR+ wiili the e operati ons is a linear space; fi nd a basis of this space. b. Show that T (x ) = ln (x) i a linear tran Formation from JR+ to :IR., where 1R is endowed wiili ilie ordinary operations. Is T an i omorphi m? 63. I it po ible to defi ne "exotic" 1 operation on !R( 2 o that dim (lR2 ) = I? 64. Let X be the set of all citizen of Timbuktu. Can you define operarion on X that make X into a real linear space? Expl ai n.

2

oorcUnates in a Linear Space •

I~

UJa

from V to lR" . This coordinate transformation is in vertible: its in ver e is the tran Formation

[ ~; l ~

c,f<+

+ c.J,,

CII J

9.2 COORDINATES IN A LINEAR SPACE

fro m IR" to V . Tbe coordinate tran Formation is also linear, because [kf ]a = k(J] and 8 identity, leaving tbe econd as kc 1 f 1 + ... + k c" J,, , o that 11

In thi s section we continue generalizing the basic concepts of li near algebra from IR(" to Linear paces.

Definition 9.2.1

[/ + g] a = [/Ja + [g]a. We verify the first Exercise 25. If I = c,l1 + · · · + c J," ilien kf = 1

Coordinates Consider a linear space V with a basis f in V can be written uniquel y as

f

=

B

con i ting of

f 1,

• • • ,

[k!J a =

J,,. The n any

r

l

[c'l c2

= k

k c"

ctf, + · · · + CnJ,"

for some scalars c1. c2 , . . . , c". The c; are called the coordinates of re pect to B, and the vector

k c2 kc;

:

= k (J] 8 .

C11

We have shown ilie following result:

f with Fact 9.2.2

Coordinate transformation Consider a linear space V , wiili a basis B, consisting of j 1 , coordinate tramformation

f is called the coordinate vector of f, de noted by [!] 8 .

1

Exotic in the sen e of ·• trikingly out of the ord inary" (Webster) .

~

. . .•

ln · Then the

UJa

from V to IR" is an isom01phism, i.e., an in vertible linear tran formation.

1

We re fer

10

the ba.s is I. 1• .. • • 111 a.s the swnda rd basis of P11 •

~"'""'

498 •

Sec. 9.2 Coordi nates in a Linear Space •

Chap. 9 Linear paces

- ···- ·-·-

499

EXAMPLE 3 ..... The coordinate transformation for C with respect to the standard basis 1, i is

c x+iy

~

JR2

~

[ xy ] .

EXAMPLE 4 ..... The coordinate transformation of iR 2 x 2 with respect to tbe standard basis

[b ~] '

Figure 1

[~

b]. [~ ~l

[~ ~]

is JR2 x 2

Th i fact allow us ro gi e a conci e con ceptu al proof of Fact 9. 1.4 (rhis proof wa deferred). Con ider a linear space V with two bases A (wi th n e lements) and B (with m lement . We claim that 11 = m. To prov rhis fac t con ider the coordinate transformations

T_",

Cf)

=

[fh

[~

~]

~

~

from V to iR"

]R4

m

V be the solution space of the differential equation !" + f = 0. In rhe introduction to Section 9. 1 we discussed the solutions, f(t) = c 1cos(t) + c2 sin(r). The coordinate Iransformation for V with respect to the basis cos(t), sin(r), is

EXAMPLE 5 ..... Let

and from V to iR 111 • See Figure I. Then T6 o T~- J i an isomorphism (an inve1tible linear transformation) from lR" to lR111 (see Fact 9.1.8a and Exercise 9.1.45). The exi tence of such an invertible Jjoear transformation (described by an invertible matrix) implies that n = m (by Fact 2.3.3), as clairned. Fact 9.2.2 teils us that by introducing a ba i in a linear space V we " make V into a copy of iR"" (rhis is really just an extension of Descartes' concept of analytical geometry). This translation process makes computation easier because we have powerful numerical tools in iR", based on matrix techniques. We do not need a separate theory for finite-dimensional linear spaces, since each such space is isomorphic to some iR" (it has the same structure as lR"). For example, we can say that n + 1 elements of an n-dimen ional linear space are linearly dependent, since the corresponding result holds for iR" (by Fact 3.3.4). Here are some (rather harmless) examples of coordi.nate Iransformation .

EXAMPLE 2 ..... The coordinate transformation for P2 with respect to the Standardbasis 1, p2

~

1, t

2

V

c 1 cos(t)

+ c2

EXAMPLE 6 ..... Derermine whether the matrices [ ~

in (t)

~

~

JR2

~

[ cc2' ] ·

l [; ~ l [ i 1

~~]

are linearly inde-

pendent. If not, find a nontrivial relation among them.

Solution We translate the problem into iR4 , using the standard coordinate tran fonnation (Example 4). Then the question is: are the vectors

is

JR;3

linearly independent? We can use matrices to answer this q ue tion.

ao+ a,t+a, t' +--+ [ :; ] We u e double-headed arrows to emphasize the invertibi lity of the coordinate ~ transformation.

~~]

11 12

rref

500 •

hap. 9 Linear paces

Sec. 9.2 Coordinates in a Linear Space •

Therefore. the tbree vector are li nearly de pendenl, with

Written in coordinates, the transformation T maps

m

into

[b

+ 2c ] 2c

= [ 0

0

1 2] [ 0 2

~ J. c

The matrix

or

EXAMPLE 7 ..... Consider a polynomial J(t = a0 + a 1t + · · · +_ 0 111 1111 and an n x n_ matrix

is called the matrix of the transformation T with respect to the bases A and B: it de cribes the transformation T in coordinates.

A.

We can evaluate f at A' that is, we can con 1der the n x 11 matnx f( A ) = J + a 1 A + ... + a 111 A 111 • For a given n x 11 matrix A , show that there is a nonzero 00 11 polynomial f (T of degree ~ n 2 uch tJ1at f( A ) = 0.

T

Solution

M

. . . 1"' A , A2 , . . . , A"2 Since the linear space lR" x" J. n 2 -d1mensiOnal, the n 2 + 1 matnces 2 are lioearly dependent , i.e., there i a nontrivial relation co /" + c,A + c2A + · · · + 111 2 c111 A"' =O(withm =11 ) amongthem. Thepolynomial f( t ) = co + c lt+ · · · + cm1 ha the desired property. ~

+

501

This diagram can be drawn more succinctly as follows: T

The Matrix of a Linear Transformation Next we examine how we can write a linear transfom1ation in coordinates. For example con ider the linear transformation T : P2 -+ P1 given by

T(J ) =

+ bt + ct 2) =

Definition 9.2.3 (b

+ 2 ct ) + 2c =

(b

+ 2c) + 2ct

The matrix of a linear transformation Consider a linear transformation

or

T: a

+ bt + ct

2

T

(b

[T(f)] a

J' + J".

More expli citly, we can write T (a

M

+ 2c) + 2ct.

3 U ing the standard basis A = I , t , t 2 of P2 , we can tran sform P2 into lR . 2 Likewise, we can use the standard basis B = l , t of P1 to transform P1 into 1R .

V-+ W.

where dim (V) = n and di m (W) = m. Suppose we are. given a basis A of V and a bas i B of W: V

T

w

T

Let us apply these coordinate tran formations to the input and the output of the Iransformation T above: T

(b

+ 2c) + 2ct

1 Ts

[

Then the matrix M of the linear tran fo m1ation Ta o T o m.arrix ofT with re.spect ro A and B. Note that

b + 2c ]

2c

[T(J)] a

lR"

[T(J)] a = M[Jh ,

fo r all

f in

V.

r_."- 1 i ca lled the

502 •

Chap. 9 Linear Spaces T

\1

w

1

1

Ts

TA

M

IR"

Sec. 9.2 Coordinates in a Linear Space •

T

f

EXAMPLE 9.... For two given real nurnber

P an

d

q,

fi d .~.. n ute matrix of the linear transformation

T,.

'

T( z) = (p

JRIII

(f)A

M

[T(f )]a

503

+ iq) z

from C to C with respect to the Standard basis B of C consisting of 1 and i .

Solution Compare with Definition 7.1.3. . We can describe the matrix M column by colurnn compare w1th Fact 7.1.5) . Suppose the ba i A of V consi t of ! 1 , • •• , fi,. App lying t11 e formula

T

p

+ iq

- q+ip

[T(J )]a

=

--~

M[f ]A

Note that M is a rotation-di1 ation matrix. to

f

= j; , we find that

Fact 9.2.5

[T /;) ] 8 = M[f;]A = M e; = i th column of M .

The matrix M of the linear transformation T(z) = (p

Fact 9.2.4

+ iq) z

from C to C with respect to the standard basis of
Consider a linear transformation T from V to W . Suppose A i a basis of V, consisting of f~> .. . , j," and B i a basis of W. Let M be the matrix of T with re pect to A and B. Then

M = [:

-:

l

ith column of M = [T(f; )]a. a co (t ) + b in (t ), iliat i the inusoidal funclions with period 2n. Find the matri x M of the linear tran formation

EXAMPLE 10 .... Let V be tbe linear pace con isting of all function s of the form f (t) = Fact 9.2.4 provide u with a mechanical proced ure for finding the matrix of a linear transformation , as illu strated below.

(T.f)(t)

EXAMPLE 8 .... Use F act 9.2.4 to find the matrix of the linear transformation T(f ) =

!' + !" from

=

f(t - 8)

fro m V to V with respect to the basis B con i ting of co (t) and sin(t ). ote that T moves the graph of f to the right by 8 units (compare with Example 24 of Section 9.1).

P2 to P1 wiili respect to the tandard ba es.

Solution We apply the transfonnation T to ilie functions 1, r , and t 2 of the standard basis of P2 . We then write the resulling function s in coordinate with respect to tbe standard basis of P1 • Next we combine the three resulting vectors in IR2 to construct the matrix M of the transformation T. T

0

Solution Be prepared to u e tl1e addition theorems for ine and co ine. cos t)

T --~

co (1 - 8) = cos(8) cos(t) si n(l - o) = -

in(t)

+ sin (o)

in (o) CO (t ) +

in r )

CO (o)

[

si n(t

[

where -

2t + 2 - -

in(8) CO

-+

a rotation matrix .

(o )

J'

(8) in (8)

CO

-

J)

in (8) ] CO

(8

M.

504 •

hap. 9 Linear Spac ·

Sec. 9.2

proble m about a linear tran Formation T can oft n be done by ol ving lhe corresponding problem for the matri x M of T with r pect to o n~e bases. Thi tech nique can be used to find the k rnel and image of T. to de termt~e whether T is an isomorph.ism, to find the e igenvalues ofT , or to ol ve an equat10n T(J ) = g for a given g . ker(T)

c

T

V

--7

w

:::J

im T)

:::J

t im M)

t ker(M)

s;

1

~II

--7

~111

T (f) = .f( A) ,

2

12

=

is a ba ·i of the image of

r.

[1 0] 0

A= [ 1

1

3

+ 2xJ =Ü

x 2+ 5x 3 =0

[ =~~~ ]

~ ~]

(compare with Example 7 . Find the matrix of T with re p ct to the standard ba e . U e this matrix to find ba e for the kerne! and image of T .

A ba; of the kemel of M;

Solution

! · we find that

Construct the matrix M of T column by column, u ing Fact 9.2.4:

JS

A

{2

=

[il

u~]

[l~J

A= [ 157 2210] 2

= XJ

t

1

M =

[~

4

16l 15 22

[

~

; ~~J 3 4

15 22

-

rre f -7

Image: A basis of the image of M is

=~] . Tcan fomting back illto P

1,

s;

? 2 ~ ~ 2 x 2 :::J

t

ker(M)

2 3

[=;]. the domaill of

f( t ) = - 2- 5t + 12 i a ba is of the kerne! ofT . Note that f (t) the characteristic polynomial of A (compare with Exerc i e 7.2.64). im(T)

t

t

s; ~ 3 ~ IR4

:::J

im (M )

V be the linear pace consi ting of all functions of the form f t = a cos t b sin (t). Con ider the linear transformation T from V to V given by

EXAMPLE 12 ..... Let

T(.f ) =

f "- 2/' -

+

3f.

a. ls T an isomorphism? b. Find all olutions f in V of the differential equation

Let us find bases for the image and kerne! of M:

0 I

[

ker(T)

m



The general olution is

-1.3

h

2] 4

Kerne!: To find the kerne! of M we have to solve the y tem

gi e n by

A= [

where

505

(we pick the pivot columns). Tran formin g these vectors back into R 2 x 2, the codomain of T , we find that

The vertical arrow · abo e repre ent coordinate tran fo rm ati o n .

EXAMPLE 11 ..... Con ider the linear transformation from P?. to IR 2

oordinates in a Linea r Space •

!"- 2 f'-

3f

= cos (t).

Solution Find rhe matrix M ofT with respectto the ba i B consi ting of co (1) and in (r): cos (t)

- 4cos (t ) + 2 in(t )

---7

si n(t)

- 2cos(t)- 4 in (r)

---7

Note that /VI i. a rotation-dilation marrix.

506 •

Chap. 9 Linear Spaces Sec. 9.2 Coordinates in a Linear Space • 50 7 Consider the matrix M of T . h . matrix with ker(M) = {O ) ~It re~p~ct to. some bas t . Then M i a quare an isornorphism. 'so tat M lS mverttb le by Fact 3. 1.7. Therefore, Ti

a. Since M is invertib le. T is an isomotphism. b. We wtite the equation T (f) = co (t) in coordinate : [T(f)]a = [cos (t)]a

or



EX E R C I S E S where

The solution is _

x = M

_ 1 [

.X = [f) s.

l] I [-4- 2 2][1]

0 = 20

-4

0

GOALS Us e th e concept of coordmates. . . formatwn.

= [ - 0.2]

1. ~e the polynomiaJs f(t) = 7 + 3t + ltnearly independent?

-0. 1 .

Transfonning th.is an wer back into V, we find that the uni.que olution in V of the given differential eq uation is

Find the matrix of a linear trans-

,2 g(t) =

9 + 9r + 4r 2, h (t = 3 + 2t + r2

2. Are the matrices

J(t) = -0.2cos(r) - 0. 1 in(r) .

linearly independent?

Check th.is answer.

f

E

V

V

3

CO

t

t

t X E

T

JR2

(r )

t

M

ll{2 3

[

~]

The ba ic facts we have derived for linear transformation from lR" to IR111 generalize easi1y to linear transformations between finite-dimensional Linear spaces.

3. Do the

If T is a ünear transformation from V to W, where ker T and im(T) are finite-dimensional linear spaces, then V is finite-dimensional, and dim (ker(T))

+ dim (im(T))

= dim(V).

j(r) = 1 + 2t

+ 9r2 + r3,

g(t)

= 1 + 7 t + 71 3

h(t) _

' 3 4. Con tder _the polynomials f(t) = r + 1 and g(t) = (r + 2)(t + k) where k 1 an ar~ttrary constant. For wh.ich cboices of tbe constant k are the three polynomtals f(t), tf(l), g(t) a basis of p2 ? In Exercises 5 to 10 find the matrix of the given linear tran formation with respect to the tandard bases.

T:P2~lR 2

given by

T(f)=[f,~ 1?)] ·

6. T : P2 ~ Pz

given by

(T f)(t) = f(t

5. Fact 9.2.6

p~lynot~aJs

l+B_t + r + 5t,k(t ) = 1+8t+4t-+8z 3 formabasi ofP ?

7. T: P3

Compare with Fact 3.3.9.

~

JR- given by

T(f)

=

+ l).

[!'f(l) J. f(r)dt

- I

Here i another useful result:

8. T: P2

~ JR

3

given by

T(f) =

[f~~~~ ] . f(2)

Fact 9.2.7

Consider a linear transformation T from V to W , where V and Ware finitedimensional linear spaces. Then T i an isomorphj m if (and only if) a. dim(V) = dim (W) and b. ker(T) = {0}.

Proof

If T is an isomorphism, then dim (V) = dim (W) by Fact 9.l.8d, and ker(T) = {0} by Fact 9.l.8b. Conversely, suppose that dim(V) = dim(W) and ker(T) = (0].

9. T: JR1

2

~ IR

given by

10. F : JR 2 x - ~ IR 2 "' 2

T(A) = tr(A ).

given by

F (A) =

tA + iAT.

11. For the tran formation T in Exercise 5. find bases of ker(T) and im (T). 12. Ts the transformation T in Exercise 6 an isomorphi sm? Find the eigenvalues and eigenspaces of T. Is T diagonali zable? 13. Find a basis of the kerne! of t11e linear transformation T in Exerci e 7. 14. For the transformation T in Exercise 8, find bases of ker(T) and im (T) . I T an isomorphi m?

508 •

Sec. 9.2

Ch ap. 9 Linear Space

15. Oe rib the kerne! and image of the transfonnation F in Exerci e 10. Find the e igenvalu es and eigen pace of F . I F diagonal izable? •

)

,

16. Let \1 be the li near space of al l quadratt forms q (x 1, x 2 = a.lj in two ariable . Concider the linear tran Formatio n

ar

of

OXI

OX2

+ /; -~'"1-r2 + c~-_?2

T (J) = - ·- x_ - - x2

oord inates in a Lin ear Spa ce •

a. Find the matri x of the linea r tran Formation T (f) =

T:V

-7

509

V given by

f" + af' + bf

w_ith respect to_ the basis cos(t), in (t). Here a and b are arbitrary co n tant . b. Fmd the functwn (s) f in V such that T (j)

=

f" + af' + bf =CO

(l ).

You_r sol ution will contain the arbitrary constant a and b. For which fro m \1 to V. a. Find the matri x of T wi th re pect to the basis x~ , .r1x2, x~. b. Find bases of the kerne! and im age ofT. c. Find the eigenval u and eigen pace ofT. I T diagonali zable?

17. Consi der the linear tra nsfonnation T:

2x 'P2 -7 llll IN.

gt.

en by

T (j) - [ f'o(3)

3f(l) ]

- f(S)

.

P2

given by

T (J) =

T(f )

= f + af' + bj",

where a and b are arbitrary constants. a. Find the matrix of T with respect to the srand ard bas i of P2. b. If g is an arbitrary polyno mial in P2 how many so lutio n f in P2 does the differenti al equarion

g

have? Ju. ti fy your an wer. 19. Consider a linear tran . Formati on T: V -7 V with ker(T ) = (0 }. If V is finitedimensional , then T is an isomorphi sm, by Fact 9.2.7. Show that this is not necessarily the ca e if V i infinite-dimensional: for \1 = P give an exampl e of a linear transfonnation T : V -7 V with ker(T ) = (0} which is not an isomorphism (recall that P is the space of all polynomial s). 20. Consider two finite-dimen ional linear spaces \1 and W. lf V and W are isomorphic, then they have the same dimension (by Fact 9. 1.8d). Conversely, if V and W have the same dimensio n, are they neces. arily isomorphi c? Ju stify your an wer carefull y. 21. Con ider the linear space V th at consists of all fun crions of the form f( l ) = c 1 COS(I )

where c 1 and c2 are arbitrary constant .

+ C2 sin (t),

-7

(t)

+ c4 t

in (t) .

V gi ven by

f" + f.

J" + f

= CO

(t).

Graph your soluti on(s). (The di ffe rential equation f" + j = co s(l) decribes a forced undamped oscillator. In this example, we observe the phenomenon of resonan.ce.) 23. Consider the linear transformation T: P11 -7 JR"+ 1 given by

f(no) ] f(a,) T (f) =

f + af' + bf " =

+ c3 r co

a. Find the matri x of T with re pect to the basis cos(t) , sin (t), t co (t ), t in (r). b. Find all olutions f in W of the differenti al equ ation T (j) =

18. Con ider the linear transformation -7

f( r) = c 1cos(r) + c2 sin (r) Consider the linear transformati on T : V

a. Find the matri x of T with respect to the stan dard bases. b. Find bases for tbe kernel and the image of T.

T : P2

cho J c~s of ~ and b i there 11 0 such fun~ti on j? a n ph y ic ' the diffe rential equatt on f + af' + bf = cos(r) de cnbe a fo rced o c illator.) 22. Let V be the linear space of a ll functi ons of tbe form

[

.

,

J(~n)

where the a; are distinct constants. a. Find the kerne! ofT. Hint: A nonzero polynom.i a1 of degree

~ 11 ha - at most n zeros . b. I T an i omorphi m? c. What does your answer in part b tell you about the po ibility o f fitrin g a polynomial of degree ~ n to 11 +I given points (ao, bo), (a 1. b 1) •.•.• (an . b11 ) in the plane? 24. Consider the linear space V of all infinite eque nce of rea l number . We define the subset W of L consi ting of all sequences (xo, x 1• x 2 , ... ) uch that X n+2 = Xn+l + 6 xn for all 11 ~ 0 . a. Show that W is a subspace of V . b. Determine the dimension of W . c. Does W contain any geometri c ·eque nce of the form ( I . c. c 2 • c· . . .. ), for some constant c? Find all such . equ ences in W . d. Can you find a basi of W con isting of geo metric sequence, ?

510 •

hap. 9 Linear Spaces

Sec. 9.3 Inner Product paces • e. Con ider the equence in W w ho -e fir t two term are .ro = . 0. x 1 = I . Find x 2 , x 3 , x 4 . Fin d a clo ed fo rmula for the nth term x" of th1 s sequence.

33. Let a , ... , a" be distinct rea l numbers. w1, . . . , w" such that

Hinr : Write thi

equence a a linear combination of lhe sequences you found in part d. 25. Consider a basis B of a I in ear space V. Show that (f

+ g]s

= (f]s

f_

t

w;j(a;),

34. Find the weights w 1, w 2 , w 3 in Exerci e 33 for a 1 = - J, a2 = 0, a = 1 3 (compare with Simpson s rule in calculu ). 35. Consider a linear Lransformation

fo r all fand g in V .

26. Con ider the linear tran formatio n I (f) = f dx fTo m C[O, 1] to IR (see Example 2 1 of Section 9. 1). Show that the kerne) of I i infin ite-dimensional. 27. Let A be an m x n m atrix with rank (A) = n. Show that the linear transforma1 0 .f(x)

T : V ---+ V,

tion L (x) = Ax from II(" to im(A) is an isomorphism. Show that the inverse is L - 1 (}i) = (AT A) - 1 Ary.

where \1 is a finite-dimensional linear space. How do you think the deten ninant ofT is defined ? Explai n.

28. Let A be an 1n x n mat:Ii x. Show that the line~ tran fo rm ati on L (x) = _A-r fr~~ im (AT) to im(A) is an isomorphi m. Is the Im ear transformat1on F(y ) = A y from im(A) to im(A T) nece aril y the inverse of L? 29. a. Let T be a linear tran for mation from V to \1. where V is a finitedimensional complex linear pace. Show that the transformation T ha comp1ex eigenvalues. b. Consi der two matrice A and B in C" " such that AB = BA. Show that A and B have a con.unon eigenvector in C". Hint: Let V be an eigenspace for A . Show that Bx is in V fo r all .X in II. Thi implies that we can define the linear transfom1 atio n T (x) = B X. fro m II to \1.

2 2

f(t) dt = 1

Show that there are "weigbts"

fo r all polynomials f(t ) in P"_ 1 • Hint: It suffi ces to prove the cl aim for a basis !1 , ... , j;, of P"_ 1. Exercise 32 is helpful.

+ [g]a,

30. Let C be the set of all rotation-dilation matrices [ :

1

511

-~ ] ,

INNER PRODUCT SPACES In the last two sections, we foc used on those concepts of linear algebra that can be defi ned in terms of linear combinati ons alone, i.e., in term of surns and scal ar multiples. Other important concepts relating to vectors in IR" are defined in terms of the do t product: length , angles, and orthogonality (orthogonal projections, orthonormal bases, orthogonal transformations). It is sometimes usefu l to defi ne a product analogaus to the dot product in linear spaces other than lR" . The e generalized dot products are cal led inner products.

a subspace of

Thinkina of C and C as real linear spaces, defi ne an iso morpbi sm

"'

T: C -* C

Definition 9.3.1

uch that T (zw) = T (z) T (w). The existence of such an i omorphism implies that C and C are isomorphic not just as linear space , but as fi elds. 31. In Exercises 4.3.34 and 6.4.37 we have presented two di fferent ways to think about the quatemion s. Mimic the approach of Exerci e 30 to show that the two sets H and lHl carry the same structure as far as additi on and multiplication are concerned. 32. Consider a basis .f1, ... , f" of P"_ 1 • Let a 1, . .. , a" be di tinct real numbers. Con ider the n x n matrix M whose i j th entry is jj (a;). Show Lb at the matrix M is invertible. Hint: If the vector

Inner products An inner product in a linear pace V is a ru le tbat assign a real scalar (denoted by (f, g)) to any pair f, g of elements of V, such that the fo Uowing propertie hold for all f g, h in V, and all c in IR:

a. (f, g) = (g, f) b. (f + g , h) = (f. h) + (g, h) c. (cf. g) = c(f. g) d. (j, f) > 0 for all nonzero

f in

V.

A linear space endowed with an inner product is cal led an inner p roduct space. The prototype of an inner product space is II(" with the dot product: (ü, w) =

ü . u) . Here are some other exampl s: is the kernel of M , then the polynorn.i al f at a 1, • • • , a"; therefore, f = 0.

= c 1 f 1 +· · ·+c,,f"

in P"_ J vani she

EXAMPLE

t ~ Con s ider the linear space C[a. bJ con i ting of all con.tinu ou fun ction domain i the closed intervaJ [a. b J, where a < b. See F1 gure 1.

who e

512 •

Sec. 9.3 Inner Prod uct pa ce •

hap.9 LinearSpaces

513

. .Thi . approxi~ation shows th at the inn er product (f, g) = J:' .f(t g(t ) dt for functions JS a contmuous version of the dot product: the more subd ivisio ns you choose, the better the dot product on the right wilJ approx im ate the in ner product (f ,g ). ~

EXAMPLE 2 .... Let e2 be the space of aLt "sq uare-summabl e" infinite sequence , i.e ., seq ue nces X =

uch that

Figure 1

I: xl

=

i=O

(Xo , X 1

.x2 ,

... • X 11

xö + x~ + · · · converges.

, • • •

In this space we can defin e the inner

product For fu nctions

f

(x,

and g in Cla , b], we define (J, g) =

lb

=

lb

.f(t)g(r) dr

f( t )g(t ) dl.

=

X;y;

= X o )'O + X I YI + · · ·

i =O

(show that thi s se1ies converges) . T he verifi catio n of the axioms i Ca mpare with Exercises 4. 1.1 8 and 9.1.1 5.

The veri fication of the fir t three ax iom fo r an inner prod uct i For e ample, (f, g )

y) = L

ib

traightforward .

traightforward . ~

EXAMPLE 3 .... In IR"' x11 we can defi ne the inner product (A B ) = trace (ATB ).

g (t).f(t ) d t

= (g,

f).

We wi ll verify the fir t and the fourth ax iom .

T he verifi cation of the last axiom requires a bit of calcu lus. We leave i t as Exercise I . .f(t)g (t ) dt i the Iimit of the R iemann RecaJJ that the Riem ann integral

J:

(A , ß )

=

trace (Ar B )

=

trace((Ar ß )T ) = trace (B r A)

=

(8 , A)

To check that (A, A) > 0 for nonzero A write A in terms of its columns:

111

um

L

.f (tk) g (rd ßt , where the

tk

can be chosen as eq uall y paced poin ts i n tbe

i= l

interval [a , b]. See F igure 2 . T hen

1 b

=

(f, g)

.f(t )g(t ) dt

~

A=

b

.{ f((12) lJ

111

.f(tk) g(tk) M

=

.: ([

J Um )

J· [ : J)

V)

li2

g(t g (t2) J)

ßt

g(t", )

fo r !arge m.

(A, A )

Figure 2

__.

---

__. /

/

.,....---------

=

trace(ATA)

=

trace

II

j(l)

-T VII

/

g(t)

111 ~~ ~ 1 2



= trace

(l .

514 •

Chap. 9 Lin ear Spaces Sec. 9.3 In ner Product Spaces •

If A i no nzero, the n at lea t one of the Ü; i nonzero, and the sum !lli , J1 2 + II id~ + · · · + II Ü11 JI 2 is po itive, a de ired . ~

2

We can in troduce the ba ic conc pts of geometry for an inner product sp ace exactl y as we did in IR11 for the dot product.

Definition 9.3.2

The norm of an eleme nt

(f,.

g)

= fo

[ " Sln . (t)

EXAMPLE 6 .... Find the distance of f( t )

Norm, orthogonality

=

=t

cos(t) d t

and g(t)

=

[

J

1 sin2 (t) 12Tr

2

=0

0

= 1 in C[O, 1].

Solution

f of an inner product pace is 11111

515

Solution

Ju. .n.

dist(f, g)

=

Two elements j. g of an inner product pace are ca lled orthogonal (or perpendicul ar) if . T~e results and procedures discussed for the dot product generaUze to arbJ trary mne~ product space . For example. the theorem of Pythagoras holds; the ~ram-~chnudt process can be used to construct an orthonormal ba is of a (finitedunen JOnal) mner product space· and the Cauchy-Schwarz inequality teU us that l(f, g)l S ll fll llg ll for two elements f , g of an inner product space.

(f,g) =0.

We can defi.ne the distance of two elements of an inner product pace as the norm of their difference: dist(f. g) =

+ Orthogonal Projections

II! - gll

Consider the space C [a , b ]. with the inner product defi ned in Exa mple I . In physics, the quanti ty 11 f 11 2 can often be interpreted as energy. For exa mpl ~, it describes the acoustic eneray of a periodic sound wave f( t ) and the elast1c potential energy of a unifo rm s~ring with verti cal displa~ement f(x) (see Fi gure 3). The quantity 11!11 2 may a1 o measure thermal or electnc energy .

EXAMPLE 4 .... ln the inner product space C[O, 1] with (f,g) f(t)

=

=

t 2.

j 01 f(t)g( t )

dt , find

In an inner product space V , consider a subspace W with orthonormal basis g , 1 . . . , g"'. The orthogonal projection proj w.f of an element f of V onto W is defined as the unique element of W such that f - proi f is orthoaona1 to w . J IV . e · As 111 the case of the dot product in !Rn, the Oitbogonal projection is given by the formula below.

11!11 fo r Fact 9.3.3

lf g , , .. . , 8m is an orthonormal basis of a ubspace W of an inner product space V, then

Solution 11111

= Ju,n =

jfo'

Orthogonal projection

l

4

dt

=

[f for all

EXAMPLE 5 .... Show that f(t) = sin (1) and g(t) = cos(t ) are perpendi cular in the inner product space C[O, 2JT] with (f, g) = J~rr f(l)g(l ) d t. Figure 3

Displacement fix)

Vertical di splacemen t at x

A string attached at (a, 0) and (b, 0)

X

V.

(Verify this by checking that (.f - proj w.f, g;) = 0 for i = 1. . .. , m.) We may think of proj IV f as the element of W closest to f. In other words, if we choose another element h of W , then the di stance between f and Ir will exceed the di tance between f and proj w f. As an example, consider a ubspace W of C[a . b], with the inn er product introduced in Example 1. Then proj IV f is the function g in W that i clo est to f, in the sense that

I a

f in

di t(.f, g) = b

is least.

II! - gll

=

1"

2

(f(t) - g(t) ) dt

516 •

Chap. 9 Li near Spaces Sec. 9.3 Inner Product Spaces •

517

Solution pro· f

We need to find .

~P1

.

we

·

fi

rst find an ortbonom1al basis of P for tbe

~;aen Smnher pdroduct, then we will use Fact 9.3. 3. ln general , we have t~ use the

°·

· . B m- c hrru t process to find an rth ononna1. ba IS of an mner product space ec~u e t e two function s I, I in the Standard basis of P 1 ace already orthogonal.

th at. IS ,

,

(1 , I )

Discrele Ieast-squares condition: L:~ 1 (b, - g (a,)) 2 is minimal. Figure 4a

=

r'

}_ I

f

d1

= Ü.

we merely need to divide each function by it norm:

The requirement that

11111 =

l

m

2

a

.Ji

and

12 li l/l =jr t dt = Jf 1- 1 . 3

and

~t.

ow

. 1 prü]pJ = 2 (l , f ) l

3

+ 2 ('

f )l

I = - (e - e- 1) +3e- 11

2

(We orllit the Straightforward computations.)

2

L

k=l

EXAMPLE 7 ..... Find the linear function of the form g (c) = a

-/2

1 -1

(! (t )- g (t )) dr = !im "'(/ (tk) - g(td ) ßt . m 4-

1 dt =

An orthonormal basis of P1

be mimmal is a continu.ous Ieas t-squares condition , as opposed to the di crete lea t-squares cooditions we discus ed in Section 4.4. We can use the di screte Ieast-squares condition to fit a function g of a certain type to some data points (ab bk) , while the continuous least-squaces condition can be used to fit a function g of a certain type to a given function f ("function of a certain type' are frequently polynormals of a certain degree or trigonometric function s of a certain fom1 ). See Figures 4a and 4b_ We can think of the continuous least- quares condition as a limüing case of a discrete Ieast-squares condition by writing b

II:

See Figure 5.

+ br

that be t approximates tbe function f (l) = e 1 over the interval from - 1 to 1, in a continuous least- quares sense.

(ontinuous Ieast-squares condition: t !f!n - g( tJF dt is minimal.

Figure 4b

Figure S fl..l)

projp,.f

fl..t) - g(f)

.____"" ' - I

a

b

:' I

518 •

Chap. 9 Linear Space

Sec. 9.3 Inner Product Spaces • What follow

one of the major applicati on of thi theory .

These equations tel l usthat the functions l , sin(t), cos (r), . .. , si n(nt ), cos(n.t) are orthogonal to one another (and tberefore li.nearly iodependent). Another of Euler' s identities teils u that

+ Fourier Analysis'

i:

In th pace C[ - TC, TC] we introduce an inner prod uct that i a light modi ficatio n of th definition given earlier: (/, g)

=;l

l(t)g(t) dt

t---+ c

+ 1; 1 sin(t) + c 1 cos(t) + · · · + b" sin

+ Cn cos(nt)

in(pr) cos(mt ) dt

= 0.

for integers p, m ,

sin(pt) sin(m.t) dt

= 0,

for disti nct iotegers p, m,

J::. cos(pf) cos(mt) dr = 0,

for distin t integer p, m.

I:

2

cos (mt) dt =TC ,

therefore, 1

f(t)

= lll(t)/1 = .Ji ;

g(t)

,

is a fu.nction of nonn one.

called trigonometric polynomials of order ::: n . From calculus you may recall the Euler identiries:

~~:

i:

11"-;r ldt = ·h;



nt )

=

11111= ;

Also it is requi red that .f(c equal one of the two one-sided Iim its. See Fig~re Fora po itive integer n , consider the sub pace T,, of C[-TC,TC] which 1 defined as the span of the functions I , sin(t), cos(t), sin(2t). cos(2t), . .. , sin(nt), cos(nt). The space T" consists of all functions of the form f(t) = a

dt

for positive integer m. This means that the functions sin(t), co (t), ... , sio (nt ), co (nt ) ar~ of nonn I witb respect to the given inner product. This is why we chose tbe Inner product as we did, with the factor .!. . The norm of the fu nction l(t) = 1 i "

The factor 1/TC i introduced to facilitate the computation . Convince yourself that this is indeed an inner product (compare with Exercise 7) . More generally, we can consider this inner product in the space of ~I piecewise continuous fimctio ns defined in the interval [- TC, TC]. These are functw ns I (t) that are continuous except forafinite number ofjump-discontinu.ilies, that is, point c where the one- ided Jimit lim f(t) and lim... .f(t) both exist but are not equal. r~ c -

2

sin (mt)

1" - 71

519

Fact 9.3.4

Let T,, be the space of alJ trigonometric polynornials of order ::: n , with the inner product (J, g)

11"

=;

-71

.f(t)g(t) dt.

Then the function I

.

.

.Ji' sm (t), cos(t), sm(2t). cos(2!), ... ,

fW hos o jump-discontinuity ot t = c.

i n(nt), cos(nt)

Figure 6

fo rm an orthonom1al basi of T,, .

For an arbitrary fu nction

I

in C[-TC JT], we can con ider

c

As we discussed above, J,, is the trigonometric polynomial in T,, that best approximate .f. in the sen e that 1Named after the Fre nch mathematician Jean-Bapti ste-Joseph Fourier ( 1768- 1830). who developecl the subject in his Theorie analytiqrre de Ia clwleur ( 1822), where he investi gated the conduction of heat in very thin s heet~ of melal. Baron Fourier was also an Egyptologist and govern ment admini strator; he accompanied Napo leon on hi s ex pedition to Egypl in 1798.

dist(f. J,,) < dist(f, g) for al l other g in T".

520 •

Sec. 9.3 Inner Product Spaces •

Chap. 9 Linear paces

521

We can u e Fact 9 . .3 and 9.3.4 to find a formula for f,, = pro_ir,.J.

I

A, [

35 F ac t 9..

Fourier coefficients If f i a piecewise continuou functio11 defined 011 the i11te rv al [ - n. ;r ] , then

I

Piano

2

3 4

5

6

• k

it best approximat ion /" in T" i .f;,(r)

=

= ao ~ + b , sin(l) + c, co

proj 7J(t)

+h1, si11 (n1)

+ ···

(l )

A, [

+ c" cos(nf),

I

Figure 7

where

f ;r

l

bk = (f(r). in kt)) = ck

= (f(r)

a0 =

(

~

os kr )) =

f(r), -

I ) =

..J2

L:

f(l) in (kt) dr.

- rr

lf

l

M

...;2n

f( r) co (kr) dr.

-n

f.

The function fn(t) = ao

.

~ + b 1sin

r)

....;2

+ c 1cos(t) + · · · + h

11

sin(nr)

+ c" cos(nl)

called the nth-order Fourier approximation of f .

EXAMPLE 8 ... Find tbe Fourier coefficients for the function

Note that the constant term , written somewhat awkwardly is

1

M

1 =-

2n

f ;r

~~

f(f) dt ,

- ;r

J

• k

Ak si11 (k( t - 8k)),

where Ak = b~ + c~ is the amplitude of the harmo11ic and 8k is the phase shif1. Consider the sound generated by a vibrating string, suc h a in a piano or 011 a violin . Let f(t) be the air pressure at your eardrum a a fu11ction of timet (the function f(t) is meas ured as a deviation from the normal atmospheric pressure). In thi s case, the harmonics have a simple physical interpretation: lbey correspond to the various si11usoidal modes at which the string can vibrate. See Figure 7. The fundamental frequency (corresponding to the vibration shown at the bottarn in Figure 7) gives us the.first harmonic of .f(t) , while the overtones (w ith

f(t) = t on the interval -rr ::::: t ::::: n.

1 ["

}_rr sin (kf)t dt

1- U l cos(kt)t

I

~

value of f(t). The function bk sin (k r) + ck cos(kt) is calied the kth harmonic of f(t). Using elementary trigonometry, we can write the hat111onic altern atively as

+ ck cos(kr) =

6

+~{

cos(kt) dt

l

Ontegration by parts)

2 . - - if k is even k

which i the average value of the function f between 0 and 2n. It make sense that the best way to approximate f(t) by a constant function is to take the average

bk sin (kr)

5

frequencies that are integ~r mu.l tiples of the fundrunental frequency) give us the other ter.ms of th~ harmoruc senes. The quality of a tone is in part determined by the .relattve runphtudes t?e harmonics. When you play concert A (440Hz) on a p1ano the first harmomc 1s much more prominent tban the higher ones, but the same tone playe? on a vio~n . gives p~ornin~nce to higber harmonics (especially the fifth): Se.e Ftgure 8. S~~ constderat:J.ons apply to wind instrument ; they have a v1bratmg colunm of rur mstead of a vibrating string. . The human ear carmot hear tones whose frequencies exceed 20,000 Hz. We p1ck up only finitely many hru1nonics of a tone. What we hear is the projection of f (t) onto a certain T,, .

.

...;2

3 4

Figure 8

bk = (f, s1n(kt)) = ;

ao

2

o!

f ;r .f(t) dt.

The bk> the ck. and a 0 are call ed the Fourier coefficients of the fu nction

Violin

II

k

AU

if k is odd

2

-

k and ao are zero, since the integrands are odd functions . The first few Fourier polynornials are:

ck

f 1= h =

!3

2sin(t) 2 sin(t) - sin(21)

= 2sin(t)- sin(2t)

14 = 2 sin(t) See Figure 9.

+ ~ si n(3 t) 1

?

si n(2t)

+~

in(3t)-

2 sin (4t).

522 •

Sec. 9.3 Inner Prod uct paces • Ch ap. 9 Li near pace

523

Co mbinin oo the last t wo " boxe d , eq uatton . , we get the fo llowing identity:

Fact 9.3.6

2 00

+ b2I + Cl2 + · · · + b~? + C2 + · · · = 11

ll f !1

2

The .in fi nite _ seri~s of the quares of the Fo urier coefficient of a piecew i e contmuous l un ctiOn f converges to 11 ! 1 2.

For the function f( t ) studied in Example 8, thi mean that

4 + -4 4

+ -4 + .. . + ?4 + . . . = -I 9

JT

n-

!"

(2

dt

= -2 JT 2

- rr

3

or

Figure 9

I

How do the en·ors llf - fnll and II! - f u+ t ll of the 11. th and the (n + l ) th Fourier approximation compare? We hope that f u+l will be a better approximati on than

f

11 ,

11

is a po lynomial in T,,+ 1,

ince T,, is

cont ained in T,,+ l, and

II f - fn+ t ll s II f -

1

rr 2

an equ atio n cl i covered by Euler. Fact.9.3.6 h.as a ph.ysical interpretati o n w hen 11! 11 2 represent e nergy. F or exampl e, 1f f (x) IS the di splacement of a vibrating tring then b~ + c~ repre ents ~he energy of the kth h ~m o ni c, and Fact 9.3.6 teil s us that the total energy 1 1.fll 2 1s the um of the energ1es of the harmonics. There is an interesting appli cation of Fourier analysis in quantum mechan. tcs. In the . 1920s quantum mecbanics was presented in two quite distinct forrn s: Werner Het enberg ' s matri x mechanics and Erwin Schrödinoer' wave mech anic . Sch~·ödinger (1887-1 9? 1) later .sh?wed that the two theori~s are mathe matically equt valent: they use tsomorphic tnner product spaces. Hei enbero work w ith the space f2 introduced in Example 2, while Schrödinger works wftb a fun ction space. related to ~[ -JT , rr]. The isomorphism from Schröd.inger' pa e to 2 is established by tak mg Fourier coeffic ients (see Exerci e 13).

II J - fn + I II S II f - J,, II

f

l

11 =1

or at lea t no wor e:

Thi s is indeed the ca e, by definitio n:

I

L ?n- =1 + -4 + -9 + -16 +· · · = 6- '

g II.

for all g in Tu+ l • in particular for g = f,,. In other words a n goe to infi nity, the error II! - f,,ll becomes maller and smaller (or at lea t not !arger). U in g somewhat advanced calculus, one can how that thi error ap proache zero:

e

What does this teU us about lim

111;,11?

By the theore m of Pyth ago ras, we

"""""'

have

As n goes to infinity, rhe fi rst summand , lim

n~ oo

We have an expansio n of

f

11

11 !" 11

II f =

f 11 11

2

,

EX E R C I S E S

approaches 0 , so th at

GOALS Use the idea o f an inner product, and apply the ba ic result derived earli er for the dot product in !R" to inner product spaces.

11!11 .

1. In C [a. b ], define t.he product

in terms of an orthonormal basis:

1

f,, = a 0- - + b 1 sin (t) + Ct co (t ) + · · · + bu in (n t) +

-/2

Cn

where the b b the ck. and a0 are the Fourier coefficie nts. We ca n ex pre. s terms of these Fourier coeffi cients, usin g the theorem of Pythago ras :

II J,,ll 2 =

a~ + bT+ CT+ · · · + b~ + c;,



(f, g ) =

cos(n t ),

II !"II

in

1b

f( t )g(t ) dt .

Show that this product sati sfie the property (J, f) > 0

for all nonzero f .

524 •

Chap. 9 Linear Spaces Sec. 9.3 Inner Prod uct Space •

2. Does the equation

True or false? If f · (.f, g

+ h) =

f

(.f. g ) + (f, h )

hold for all elements f, g , h of an inner product pace? Explain.

10.

·

f

c ·

u.g) = ~

(.r , ji) = (Sx )TS.y .

4. In

~m x u ,

consider tbe inner product

.

h . IS a contmuous even functiO n and g is a con tinuo u odd unc wn , t e n fa nd gare orthogona l in C[- l 1] E I . . , . . xp a m . onsJc1er the space p2 with inner product

3. Consider a matrix S in ~" x n . ln ~~~, de-11 ne tbe product

a. For which choice of S is this an inner product? b. For which cboices of S is (x, ji) = .t · y (the dot product)?

525

1:

f (t )g(l ) dt .

Find an orthonormal basi of the space of all function s ·,n (t ) = 1. Pz orthogonal to

f

11. !he angle between two nonzero element JS defined a v and w of an inner prod uc t space

(A , B ) = trace(AT B ) 4-(v, w) = arcco

defined in Example 3. a . Find a formula for this inner product in ffi."' x 1 = ffi."' . b. Find a formula for thi inner product in ~ 1 x u (i.e., the space of row veclor with n component ). 5. Is ((A. B )) = trace(ABT ) an inner product in ~m " ? (The notation (( A , B)) is chosen to di stinguish this product from the one considered in Example 3 and Exercise 4.) 6 . a. Consider an m x n mat.rLx P and an n x m matrix Q. Show that

In the space C[- rr , rr] with inner product

(f, g) = ;1

fi~d

12.

(A, B )

=

~m x u

1" -;r

f(t)g( t ) dt.

the angle between f(t ) = cos(r) and g(t) = cos(t + 8), where 0 < 8 < rr + 8) = co (1) cos(8) - sin (t si n(8). .

~tnt: Use the formu la cos(t

Fmd all Fourier coefficient of the ab olute value function

trace( PQ ) = trace(QP) .

b. Campare the two inner product in

(v, w)

llvll llwll .

f(t) =

lt l.

13. For ~ function f in C[ -rr , rr_] (with .the inner product defined on p age 518

below:

coos1der the sequeoce of all 1t Founer coefficients,

trace(A T B

(ao. b, , c, , b2 , c2, .. . , b" , c", .. .).

and

this infinite sequence in f2 ? lf so, what is the relationship betwee n

((A , B)) = trace (A BT )

ll.f II

(see Example 3 and Exercises 4 and 5 ). 7. Consider an inner product (u , w} in a space V , and a scalar k. For which choices of k is

an inner product?

8. Consider an inner product (u, w} in a space V. Let

w be a fixed element of

V. ls the following transfonnation linear? T:

V -r IR

given by

Wbat i its image? Give a geometric interpretation of its kerne!.

= f(t),

(the norm taken in

e2 was

for all t,

and

introduced in Examp]e 2.)

14. Which of the following i an inner product in P 2 ? Explain . a. (f, g) = f(l)g(l) + f 2)g(2) b. ((f,g)) = f(l)g(l) + f(2)g(2) + f(3)g(3) product in

JR 2 ?

""

(L;~] ' [ ~~ ]) = ax1 Y1 + bXIJ'2 +

for all

CX2 )11

+ dx2y2

16. a . Find an orthonormal ba is of the space P1 with inne r product (f,g) =

odd if f( - t) = - f(t) ,

2 )?

15. For which choices of the constants a, b, c, and d is the followin cr a n inner T(v ) = (u , w)

9. Recall that a function f(t) from ~ to ~ is called even if f( -1)

and

(The inner product space

((u, w}} = k(v, w)

(the norm taken in C[ - rr. rr])

1'

f( t g(t ) dt .

t.

(con tinued)

526 •

Sec. 9.3 Inner Product Spaces •

Ch ap. 9 Li nea r Spaces b. Find the linear polynomi al g r) = a+br that be t approx.im ates the function I (t) = 1'2 in the intervaJ [0. 1] in the (conti nuous) least-. quares en e. Draw a sketch. 17. Consi der a linear space V . For which linear transfo rrnations T : V~ 1-

(v. w)

For three polynornials

an inner product in \1? 18. Consider an orthonormal ba is B of the inner p roduct space V. For an element I of V, what is the relationship between II !II and II [.f]e II (the norm in lR" defined by the dot product)? 19. For which 11 x 11 mauices A is an inner product in lR"? Hint: Show first that A must be symmetric. Then give your answer in terms of the definiteness of A .

8

I

4

o

8

g h

0 1 8 3

3 50

c. Find proj Eh where E = span f g). Express your solution a linear combinations of f and g. d. Fin? an o_rthononnal ba i of span(f, g , h ). Express the functions in your basts as linear combinations of f, g, ancl h . 25. Find the norrn 11-t ll of

x = (I , ~ ~· ...,~ · .. .) (e2

a. Find all vectors in JR2 perpendicul ar to [

W

~]

f (with respect to th i inner

(I) = { -I

v

+ w)

- q ü)- q (w )

define an inner product in lR11 ? How can you tell ? 22. If .f(l ) is a continuou function, what is the relationship between and

Hint: U e the Cauchy-Schwarz inequality. 23. In the pace P1 of the polynomi al of clegree ::: 1, we define the inner procluct (f, g) =

~ (!(ü)g(O) + j(l )g (l)).

Find an orthononnal basis for this inner product space. 24. Consider the linear space P of all polynomials, with inner product

1 1

(f, g )

=

f (r)g (r) dt.

!f

l <

1 lf t

~

0 0.

Sketch the graphs of the fir t fe w Fourier polynornials. 27. Find the Fourier coefficient of the piecewise continuou function

product). Draw a sketch. b. Sketch all vectors in JR2 with lliill = 1 (with respect to thi inner product). 21. Cousider a positive defi nite quadrati c form q (,t) in !Rn. Doe the formu la (ü, w ) = q (ü

e2 .

is defi ned in Example 2).

2

in ~ (see Exercise 19).

in

26. Find the Fourier coefficient of the piecewise continuou function

[5 2]2

h

a. Find (f, g + h). b. Find \lg + hll-

(li . ÜJ) = VT AÜJ

-T V

g

For example, (f, f) = 4 and (g, h) = (h, g) = 3.

dot product

( V,~ W- ) =

f1 we are g1 ·ven tbe fo Uowtng . . mner prod ucts:

(, ) I

IR"

= T (v) · T (w)

20. Consider the inner product

f , g,

52 7

f(t) = { 0

.

~f

t < 0

[ Lf t

~ Ü.

28. Apply Fact 9.3.6 to your an wer in Exerci e 26. 29. Apply Fact 9.3.6 to your an wer in Exerci e 27. 30. Consider an ellip e E in JR 2 whose center is the origin. Show that there is an inner product (· , ·) in IR2 suchthat E con i ts of all vectors with !lx ll = 1, where the norrn is taken with re pect to the inner product (·, . ) .

x

31. Gaussian Integ ration: In an introductory calculus cour e you may bave een approximation fonnulas for integrals of the form b

1 a

ll

j(r) dt

~ .{; w;J(a;) ,

where the a; are equally spaced points in the interval (a , b), and the w; are certain "weights" (Riemann sum , trapezoidal sums, Simpson 's rule). Gaus has shown that with the same computational effort we can get better approximalions if we drop the requirement that the a; be equ ally paced. BelO\.v we discuss bis approach.

528 •

hap. 9 Linear Spaces Sec. 9.4 Linear Differential Operator •

529

Con ider tlle pace P" with the inner product Verify that a linear differential operator is indeed a ünear tran Formation (exercise). {f, o) = [ I j(l g(l ) d t .

Examples of linear differential operator are

degree(ji,. ) = L f . · · · ·Jr" be an orthonormal ba is of thi·d spacetl , with et Iio,I t ddbI d· 1 h , k (to· con truct u 11 a ba 1·s~ . appl y Gram-Schn11 t ro 1e a n ar t , ... , r") . lt can be shown rhat f 11 has n di srinct root. a 1, a2 . . .. , a" 111 t e interval ( - 1, 1). We can find . weight " wl , W2 , .... w" uch th at I

(*)

[

1

8

T (J) =

L(f)

w;j(a 1),

than 2n. · b' b k You arenot asked to prove t11e a ertion above for ar Jtrary n , ut wor out the case n = 2: find 0 1, 0 2 and w 1, w 2 , and how that the formu la 1 j(1) d t

=

w 1j(a 1)

6 f"

and

+ SJ,

tf T i an nth-order linear differential operator and g is a smooth function then the equation

for a11 polynomial of degree less than n (see Exerc i ~ 9.2.33). In fact, much more is true: The formula (* holds for all pol ynol11J a1s f (t ) of degree less

[

!"- S f' + 6f.

= f"' -

of first, second, and third order, respecti vel y.

"

j(t ) dr =

= f ',

D (f)

T(J )

=g

or

! (") + an - 1f tll - 1) + ... + a l ! ' + aof = g is called an nth-order linear differential equation (OE). The OE is called homogeneous if g = 0, andinhomogeneaus otherwi e. Examples of linear OE' s are

+ w2f(a_)

1

holds for all cubic polynorruals.

!" - f'- 6f =

0

(second order, homogeneous)

and f '(t ) - Sf (t ) = sin(t)

LINEAR DIFFERENTIAL OPERATORS In this final section, we will study an important class of linear tran sformations from c to c . Here C 00 denotes the linear space of complex-valued smooth functions (from JR to q , wbich we consider as a linear space over

Definition 9.4.1

Linear differential operators A Iransformation T:C

~

C00

T(f) = j 111 ) + a11 _ J! (II - IJ + · · · + aJ!'

Note that solving a homogeneaus OE T (J) = 0 amount to finding the kerne! ofT . We wiU first thin.k about the relationship between the solutions of the OE' T (J) = 0 and T(J ) = g. More generally, consider a linear transformation T from V to W where V and W are arbitrary linear paces. What is the relationship between the kerne! of T and the solutions f of the equation T(J ) = g, provided that this equation has olution at all (compare with Exerci e 1.3.48)? Here is a simple example:

EXAJ\tfPLE 1 .... Consider the linear transformation

of the fonn

f (k)

T (.r ) = [

~

;

~ J.:r

from

3

to JR2 . Oe cribe

rhe relationship between the kernel of T and the solutions of the linear ystem

+ aof

is called an 11th-order linear differential operator. 1•2 Here derivative of J , and the ak are complex scalars.

(first order, inhomogeneou ).

T (x ) = [

denotes the kth

~

1

J

both algebraically and geometrically.

Solution

--~More precisely, this is a linear differential operator wirh consranr coefjiciems. More advanced texLS consider the case when the ak are funclions. .. . . . . . of 2The term "operator'' is often u ed for a Iransformation whose domam and codomam consJst function s.

Using Gauss-Jordan elirrunation, we find that rhe kerne! ofT consi ts of aJI vector of the form

Sec. 9.4 Linear Differen tial Operat ors •

530 •

Chap. 9 Lin ear Spaces

. Note tha.t T (f) ~ T(cJ/, + · · · + c11 f 11 ) + T (fr,) = 0 + g = g , so that f is mdeed a solutLon. Venfy that aU solutions are of tb.is form . What i ~ the s!gnificance of Fact 9.4.2 for linear differenti al equati ons? At the end of thts sectLOn we will demoostrate the following fundamental re ult:

-olulions of

T(i )=

531

Ln Fact 9.4.3

The kerne] of an nth-order linear differenti al operator is n-dimen ional.

kernet ofT

. Fa~t 9.4.2 now provides us with the following strategy for solving Linear dtfferent.J al eq uations: Figure 1

.F act 9.4.4

To solve an nth-order linear OE

with basis

T(f) = g

we have to fi nd

- = [ 6] ·

The solution set of the system T (x) form

12

con

1

a. a basi of all vectors of the

t

f = cJ!J + · · · + Cnf,, + JP,

6-2x-3x3] [-2]+ .[-3]+ [ =

l 0

X2

X3

X3

a vector in the kerne! of T

where the c; are arbitrary constant .

0 1

a particular olution of the sy. tem T(x)

planes in JR3 , as shown in Figure 1.

EXAMPLE 2 .... Find all olution of the OE

= [ ~~}

~]

The kerne) of T and the solution set of T (x) = [ 1

J"( c)

fo rm two parallel

...

These observations generalize as follows :

Fact 9.4.2

Consider a linear transformation T from V to W , where V and W are arbitrary linear spaces. Suppose we have a basis f ,, h, .. . f " o~ the kernel of T · Consider an equation T (f) = g with a particular solutLOn fp · Then the solutions f of the equation T (f) = g are of the form

f = e lf, +

c2f2 + · · · + Cnf,, + fp ,

where the c; are arbitrary constants.

of kernel (T), and

Then the solutions f are of the form

2

X2

! 1 ... , fn

b. a particular solution /p of the OE.

We are told that / p(t)

=

+ f( c) =

e'.

~e' is a particular olution (verify tb.is).

Solution Consider the linear differential operator T f = f '' + f. A ba is of the kerne! of T is f 1 (t ) = cos(r) and h(t ) = sin(r) (compare with Exercise 9.1.50). 1 Therefore, the olutions f of the OE ! " + f = e are of the form .

f(t) = c, cos(r) + c2 m(t)

I

+ 2e

1 ,

where c 1 and c2 are arbitrary constant . We now present an approach that allows us to find olutions to homogeneaus linear OE' more systematically .

532 •

Chap. 9 Linear Spa s Sec. 9.4 Linear Differential Operators •

• The Eigenfunction Approach to Solving Linear DE's

EXAMPLE ... Find all exponentia1 function e;. 1 in the kernet of the linear differential operator T (.f)

Definition 9.4.5

E~IPLE

533

4

= .f" + J' -

6f.

Eigentunetions

Solution

Con ider a lin ear different ial operator T from C to C . A s mooth functi on f is called an eigenfunction ofT if T (f) ~ f..J f?r ome .compl ex . calar f..; this scaJar)... is caJJed the eigen alue assoc1ated w1th the eJgenfunctJon f.

The characteri stic polynomial is Pr (J..) = ).. 2 +)... - 6 = (.A + 3) ).. _ 2), witb root 2 and -~. Therefore, the functions e21 and e- 31 are in the kerne] ofT. We can check th1s:

3 ... Findall eigenfunction and eigenvalue of the operator D (J) =

f' .

and

Solution We have to solve the differential equation

F=

S_in~e

rno t po lynomials of degree n have 11 distinct complex root , we can find n d1 tinct exponential function e/. 11 • • • , eA. 1 in the kerne] of most nth-order linear d.1 fferentia~ Operators. Note that these functions are linearly independent (they are e1genfunctwns of D witb di tinct eigenvalues· the proof of Fact 6.3.5 applies). Now we can use Fact 9.4.3.

f..J.

We know that for a ~iven )... the solution are all exponential functions of the form f(t) = C eM. This ~ean that all comp.lex nu~bers ar~ eige~walues of D, and tl~ eigenspace a ociated with the eigenvalue ).. 1s one-d1menswnal , panned by e
= J
T(f ) then 11

T(e ·

Con~ider an nth-order linear differential operator T who e cbaracteristic polynonual Pr (J..) has 11 disünct roots .A 1, ••• , An. Then the exponentia1 f unctions

Fact 9.4.8

)

= (f.."

form a basis of the kernet of T , that is, a basis of the solution pace of tbe homogeneaus DE

+ Dn- IA."- 1+ · · · + a1f.. + ao)e 1

J.

T(f) = 0.

This ob ervation motivates the following definition:

Definition 9.4.6

See Exerci e 38 for the case of an n th-order linear differential operator whose characteristic polynomial has less than n distinct roots.

Characteristic polynomial Consider the linear differential operator T(J ) =

EXAMPLE 5 ... Find all solution f of the differential equation

/">+ a" _ J/
! " + 2!' - 3f = 0.

The characteristic polynomial of T i defined as Pr(f..)

= A + On- IA 11

11

- l

+ ... +DIA+ ao.

Solution Tbe characteri stic polynomial of the operator T(J) = !" + -!'- 3f i Pr ,\) = 2 + 2f..31- 3 = (f.. + 3)(A - 1), with root I and - 3. The exponentia l fun tion e1 and e- form a ba is of the olution space, i.e .. the solution are of the form )..

Fact 9.4.7

If T is a linear differential operator, then e}.J is an eigenfunction of T, with associated eigenvalue Pr (J..) for all A.: T(e'J ) = p 7 ('J..)eÄ1

In particular, if p 7 ()..) = 0, then e/.1 is in the kernet of T.

EXAMPLE 6 ... Find all solution f of the differential eq uation

!" -

6J' + 13! = 0.

534 •

Ch ap. 9 Linear Spaces

Sec. 9.4 Lin ear Differenti al Operator •

535

f( t) =eP'(cl cos(qt) + c2 sin(ql))

Solution The characteri ti polynomial i Pr (A.) = A. The exponential functions

2

6A. + I 3, with complex roots 3 ± 2i.

-

+ i sin (2t))

et3+ 2ilt

= e31 ( cos (2t

e<3 - 2i l t

= e31 ( cos (2t - i sin (2r))

and

form a basis of the so lutio n space. We may wish to find a basis of the soluüon pace consisting of real-valued functio ns. The foll owing observation is helpful : if f(t) = g r) + i h(r ) is a olution of the DE T(f) = 0, tben Tf = Tg + i Th = 0, o th at g and h are solutions as weil. We can appl y this remark to the real and the imaginary parrs of the solu tion e <3+ 2i) t : the functions e 31 co (2t )

e 31 sin(2t)

and

Figure 2

What .about . nonhomogeneou. differenti al equations? Let us di scuss an example that ts part.Icularly important in applications.

EXAMPLE 7 .... Consider the differential equation are a basis of the solution space (they are clearly linearly independent), and the general solutio n is

! " (!)

+ f ' (t)

- 6 f( t ) = 8cos(2t )

a. Let. V be the linear space consisting of all functions of tbe fom1 ci cos(2t ) + c2 Sin (2r) ..Show that the linear differential operator T (f) defines an Isomorphism from V to V .

Fact 9.4.9

!" + af' + bf

T (f) =

1

6!

c. Find all solution s of the DE T (f) = 8 cos(2t).

= 0,

where the coefficient a and b are real. Suppose the zero of Pr (A.) are p ± iq , wi th q =!=- 0. Then the solution of tbe given DE are eP

+!' _

j"

b. Part a. implies. that ~he DE T (f) = 8 co (2t) has a unique particular solution .ft,(t ) 10 V. Fmd thi s solution.

Consider a di fferen tial equation

f(t) =

=

(c i cos(qt)

+ c2 sin (qt

),

Solution

a. Consider the matrix

A

of T with respect to the basi

where c 1 and c2 are arbitrary constants. Tbe special case when a = 0 and b > 0 is important in many application s. Then p = 0 and q = -Jb, so th at the solutions of the DE

!" + bj =

0

are f(t) = Ci cos(.Jbr)

+ c2 sin (.Jbt ).

A= [ - 10 -2

2] -10 '

a rotation-dilarion matri x. Since A. is invertible, T defines an isomorphism from V to V .

b. If we work in coordinates with respect to tbe ba is cos(2t). sin (_r) the DE T (f) = 8 cos(2t) takes the form A.x = [

ote that the fun ction 1

f( t ) = eP (ci cos(q t)

+ c2 sin(qt ))

is the produ ct of an ex ponential and a inusoidal function . Tbe case wben p is negative comes up frequ ently in physics, when we model a damped oscillator. See Figure 2.

o (2t ) . -in(2t). A

traightforward computation shows that

x = A- 1 [ 8 ]= - ' 0 104

~

l

with the solution

[-lo -2] [8] [-10/13] 2

-10

0

=

The part.icular solution in V is J~(t )

= -

lO

8

cos (2t) +

2

T3

in(2r).

2/ 1

.

536 •

Chap. 9 Linear Spaces Sec. 9.4 Linear Differential Opera tors •

A more Straightforward way to find f 1)(t) is to set f,, (t) ~ P cos 2i) + Q sin(2t) and substirute this trial solution into the OE to determme P and Q.

can be written as

c. In Example 4 we have seen that the functions / 1(t ) = e~r and f2 (!) = e- 31 fom 1 a basis of the kerne] ofT. By Fact 9.4.4, the so luti ons of the DE are of the fom1 f (t)

T = D

c l e 2t

2

+D

= (D + 3) o (D

T

(CD +3) o (D -2))! = (D +3)(f' - 2f) =

C cos(wt) ,

where a. b, C, and w are real numbers. Suppose that a DE has a particular solution of the form

=

P cos(wt )

+

=0

and b

#

0 or b

#

w 2 . This

Fact 9.4.11

T =D" + a"_ 1D" - 1 + · . . + a 1 D +ao = (D- A.1)(D- A.2) ... (D- ).") , where the A.; are complex nurnbers.

f of the OE.

= w 2?

. We can therefore hope to understand alllinear differential operators by studymg first-order operators.

The Operator Approach to Solving Linear DE's We will now present an alternative, deeper approacb to DE's, which _al~ows us to solve any linear OE (at least if we can find the zeros of the charactenstJc p~lyno­ mial). This approach will Iead us to a better understanding of the kerne! and 1mage of a linear differential operator; in particular, it will enable us to prove Fact 9.4.3. Let us first introduce a more succinct notation for linear differential operators. Recall the notation Df = f' for the derivative operator. We Iet

EXAMPLE 8 ~ Find the kerne! of the operator T =D - a where

a is a complex number. Do not

use Fact 9.4.3.

Solution We have to solve the homogeneaus differential equation T (f) = 0 or j'(t ) = 0 or f'(t) = af(t). By definition of an exponential function, the solutions are the functions of the form f(t) = Cear, where Cis an arbitrary constant. (See Definition 8.2.1). ~ af(t)

Dm = D o D o ... o D m times

that is,

Fact 9.4.12

The kerne! of the operator

T= D-a Then the operator

is one-dimensional, spanned by T(f ) = j
+ a" _IJ
f(t) = ea'.

can be written more succinctly as

T = D 11 +

a.n - 1D" -

1

+ ·· · +

a. 1 D

the characteristic polynomial PrO.) "evaluated at D". For example, the operator T (f) =

= f"- 2f' .

An nth-order linear differential operator T can be expressedas the composite of n first-order linear differential operators:

+ Q sin (wt).

Now use Facts 9.4.4 and 9.4.8 to find all solutions What goes wrong when a

6f = (D 2 + D- 6)f.

The fundamental theorem of algebra (Fact 6.4.2) now teils us the following:

+ af' (t) + bf(t) =

f p(t )

f" -2j' +3!' -

This works because D is linear: we have D(f' - 2f)

Consider the linear differential equation J"(t)

- 2).

We can verif-y that thi s formula gives us a decomposition of the operator T :

10 2 . 2) + c2 e-3r - 13 cos(2t ) + ]3 sm ( t .

Let us summarize the methods developed in Example 7:

Fact 9.4.10

- 6.

Treating T formally as a polynomial in D , we can wri.te

= c1/1 (I)+ c2!2Ct) + f~, (t ) =

53 7

f" + 6 j' -

6

+ ao,

Next we think about the nonhomogeneaus equation

(D - a)f=g or f'(t) - af(t)

= g(t) ,

538.

hap. 9 Linear Spaces Sec. 9.4 Linear Differentia l Operato rs •

· 1~ w ill- atturn out to where g(r) is a smooth fu nct10n. · be u eful to multipl y both ide of t.hi s equation wi th the fu nctwn e

539

We can break thi s DE down in to n fi rst-order DE 's : D->.,.

e-fll j'(r) - ae- fll f r) = e- nt g(r)

D-).,. _1

f ~ Jn - i ~ fll -2

. the left- h an d J.de of thi equation as the deri vaLive of the fun ction We reco 0omze . e-m j(r), o that we can wnte

We can successive ly solve the first-order DE's

(e - at f(l)) ' = e - at g(t .

O\ we can integrate:

(D - A. ,) j,

= g

( D -A.2)h

= Ii

(D - AII - I) f n- l = f n- 2

and j(r) = e"'

I

(D - A" ) j = f n- l· e - a'g(t ) dt ,

In particul ar. the DE T (f )

where Je-a t g t ) dt denote the inde~ nite i?tegral , that i ' rhe fa mil y of all aJ1lideri ari ve of the fu nction e - at g(r), lllvolvwg a param eter C.

Fact 9.4.13

Fact 9.4.14

= g does have solutions

The image of aU linear differential Operators (fro m C is, any li near DE T(f ) = g ha solutions j.

f .

to

c )

is

c::c , that

Con ider the differential equation f' (r) - af (t ) = g (r),

EXAMPLE 10 .... Fi nd all soJutions of the DE

where g(t ) is a smooth funcüon and a a constant. Then j (r ) = e111

I

T (f) =

e -at g (t ) dt .

2

Note that Pr (A.) = A. - 2)... use Fact 9.4.8.

Fact 9.4. 13 shows that the differential equaüon (D - a)f = g has olutions f , for any smooth function g ; thi means that im(D- a) = C00 .

D-l

The DE ( D - l )j, = 0 has the generaJ o lution arbitrary con. tant.

Solution

e- at ce 0 1 d t = ea1

Solution

f ....----....... .f, ....----....... 0

where c is an arbitrary constant.

I

()... - I )2 has onlv one root I o that we cannot • ' '

D- 1

! ' - aJ = ceat ,

j(t ) = e01

2j' + j = 0.

We break the DE down into two first-order DE's, as discussed above.

EXAMPLE 9 .... Find the solutions f of the DE

U ing Fact 9.4.13 we find that

+1=

f"-

I

c d t = e"' (c t

+ C),

.f1 (1)

= c 1e' , where c is an 1

Then the DE (D - l ).f = .f1 = c ,e' ha the generaJ solu tion f (t ) = e1 (c t +c_), 1 where c_ i another arbitrary constant ( ee Example 9). 1 1 The functions e and t e form a basis of the solutio n space (i.e., of the kerne! of T ). Note the kerne! is two-dimen ional. since we pick up an arbi trary constant each time we solve a first-orcler DE. ..

where C is another arbitrary constant. Now consider an nth-order DE T (f) = g, where T = D 11 + a11 _ 1 D" - 1 + · · · + a, D + ao , = ( D - A.,)( D - A.2 ) . . . ( D - A11 - I)(D - A.n) .

Now we can explain why the kerne! of an n tb-order linear di fferential operator T is n-dimensional. Roughly speaking, thi i true because the general o luti on of the DE T(f) = 0 contain n arbitrary con tants (we pick up one each time we solve a fir t-01·der linear DE).

540 •

Chap. 9 Linear Spaces

Sec. 9.4 Linear Differential Opera tor •

Hereisamore formal proof. We will argue by indu tion o n n . Fact 9.4. 12 take care of t11e ca e 11 = 1. We can write an nth-order linear d iffere ntial operator a T = L o (D - A. 11 ), where L i of order n - I :

f

L

11. -dt 2

Arguing by induction. we can assum that the kerne! of L is (n - I ) -dimensiona l, with ba is h 1.h 2 • •• • , h 11 _ 1• Then the ol uti ons f ofthe DE ( D - A11 )j =

are j(1) = e;.., ,

c J11

J

+ · · · + Cn-

e-J..·'(c 1h, 1)

lhn - 1

+ ··· +c"_,flu - IC!)) dt

Let H; 1) be an antiderivative of e_;..,., h ;(t ), for i = 1, ... , 11 - I. Let F;(t) = e>.. , H;(t); note that (D- A.11 ) F; = h ; by constr uction. Then the so.l utions f(t) above can be written a f(t) =

=

(c, H, (t ) + · · · + c"_, H 11 - 1(t ) + C) c 1 F1 (t) + · · · + C 1 F,,_, (/) + Ce'-•'.

e }."r

11 _

where C i an arbitrary constant. 1l1i how that ker(T) i spanned by the n function F, (1), .... Fn-1 (t), e/.." 1• We claim that these functions are linearly independent. Consider a relation Ci F1

(t)

+ ·· ·+ C

11

+ Ce/.."

- t F11 - i (t)

1

= 0.

(! )

+ · · · + Cu -l h u- 1(1) =

0.

We conclude that the c; must be zero, si nce the function s h; (t ) are linearly independent; then C = 0 a weil. We have hown lhat the functions F1(!), ... , F11-1 (t ), e /.." 1 form a ba i of ker(T), so that ker(T) is n-dimensional, as claimed.

d 2x dx + 3- - 1Ox = 0 2 dt d1 · 10. J"(t ) + f(t) = 0.

l 2f(t) = 0.

8. -

dx dt

- ?-

13. f"(t)

-

+2'· - 0 " - .

+ 2J'(t) + f(t)

12. J"(t ) - 4f' (t) = 0.

+ 13f (t ) =

14. f"(t) + 3f'(t) = 0. 16. J"(t) + 4j'(t ) + 13.f(t) =

15 . .f"(l) = 0.

17. f "(t) + 2f' (t) + f(t) = sin (t). d 2x 19. df2 + 2x = COS(l).

18. f"( t ) + 3f' (t ) + 2f(t) = 20. f"'(l) - 3 f"( t )

21. .f"' (t )+2J"(t )- f'( t ) -2f (t ) = 0.

22.

!

111

0.

COS(l ).

cos(t .

+ 2f'(t ) =

0.

(1) - j"(1) -4 j'(t ) +4j(r) = 0.

Salve the initial va lue problems in Exercise 23 to 29. 23 . .f' (t )- Sj"(1) dx

24. -d

= 0,

j(O)

= 3.

+ 3.x = 7, x(O) = 0. + 2j(t) = 0, f( l) =

t 25. f'(t)

I. 26. J"(l)- 9f(t ) = 0, .f(O) = 0, f'(O) = 1. 27. j"(1)+9j(t) =0. f(0) =0, f(!}) =l. 28. J"(t) + J'(t) - 12f(1) = 0, f(O) = f'(O) = 0. 29. f"(l) + 4.f (t ) = sin(1), f(O) = f'(O) = 0.

30. The temperature of a hot cup of coffee can be modeled by the DE

Applying the operator D - A11 on both side , we find that CJh l

+ J'( t )-

9. J"(t)-9f( t ) = 0. d 2x

.......--......,,.......--......o D -/.. 0

7. J"(!)

541

T ' (t) = -k(T(l)-

A).

a. What i the ignificance of the constants k and A? b. Solve the DE for T(t) , in term of k, A, and the initial temperature To. Hint: There i a constant particular solution . 31. The peed v(1) of a falling object can omelimec be modeled by dv

m- =mg - kv dt

or

EXERCISES

dv

GOAL

-

dt

Solve linear differential equation .

Find all real solutions of the differential equation in Exercise I to 22.

1. f'(t)- S f( t ) = 0.

3. f '(l)

+ 2f(t)

= e31.

5. J'(t) - f(t) = t.

dx

2 . - + 3x = 7. dt dx

4. -

dt

- 2.x = cos(3t).

6. f'( t ) - 2f(t) = e21 .

k

+ - v=g. m

where 111 i tl1e mas · of the body, g the g ravitational when acceleration. and k a con tant related to the air res istance. Solve thi DE when v(O) = 0. De cribe the long term behavior of (!). Ske tch a graph. 32. Consider the balance B(l) of a bank account, with initial balance B(O) = Bo. We are withdrawing money at a continuou rate r (in DM/year). Th intere t rate is k (%/year), compounded continuously. Set up a differenrial equation for B (t ) and solve it in term of Bo, r and k . 'Nhat will happ n in the lon a run ? Describe all pos ible sc~::narios. Sketch a graph for B(t in each ca e.

542 •

Chap. 9 Linear Spaces

Sec. 9.4 Linear Differen tial Operators •

33. Con ider a pendulum of lengtl1 L . Let x t ) be the angle tl1e pendul.um .make , with tbe vertical (measured in radi ans). For mall angle , the mot10n ts weil approximated by the DE d 2x g --- r dt- L-

(wbere g is the acceleration due to gravi.ty, g ~ 9.81 m/sec ). Ho"':' .long does the pendulum have to be o tbat it swings from one extreme po 1t1on to the oilier in exactly one second? 2

543

35. The di splacement x(t ) of a cettain oscillator can be modeled by the DE d 2x dx -1 2 + 3 -d + 2x =0. c. I t a. Find all solutions of tlli s DE. b. Find the Soluti on with initial values x(O) = I , x' (0) solution.

=

0.

Graph the

c. Find the solution with initial values x (O) = l , x ' (O) = - 3. Graph the solution. d. Describe tlle qualitative difference of the solutions in parts b and parts c, in tenns of the motion of the oscil lator. How many times will the oscillator go through tlle equilibrium state x = 0 in each case?

36. The displacement x(t) of a certain oscillator can be modeled by the DE d 2x

-

dt 2

N ote: x(l) i negati ve when the pendul um is on the left.

The two exLreme posi tion of the pendu lum.

dt

Find all olutions of this DE, and graph a typical solution. How many times will the oscillator go tlrrough ilie equilibrium state x = 0?

37. The di placernent x ( t ) of a certain oscillator can be modeled by ilie DE d 2x dx ? +6- + 9x=0. dtdt

Historical nore: The result of this exerci e was considered a a po · ible definition of ilie meter. The Frencb comm.ittee reforming the measures in the

1790's finally adopted another definition: a meter i · the lO,OOO,OOOili part of ilie distance from ilie Norili Pole to tbe Equator, measured along the meridian though Paris. 34. Consider a wooden block in the shape of a cube, with edge 10 cm. The density of ilie wood is 0.8 gj cm 3 . The block is submersed in water; a guiding mechanism guarantees iliat tbe top and ilie bottom surfaces of the block are parallel to ilie surface of ilie water at all times. Let x (r) be tbe depth ~f the block in the waterat time t. A ume that x is between 0 and 10 at all umes.

dx

+2- + lülx = 0.

-

Find the solution x (r ) for the in.itial values x (O) = 0, x '(O) = 1. Sketch the graph of tbe solution . How many time will the oscillator go through the equilibrium state x = 0 in this ca e? 38. a. lf p (t ) is a polynom.ial and ). a scalar, show that ( D- A.)(p(t)el 1 ) = p ' (t ) e)J .

b. If p(r) is a polynom.ial of degree les than m , what is ( D - A.t' (p (t )e'1 ) ?

c. Find a basis of tlle kerne! of the linear differential operator ( D - A.) 111 • d. If A. 1, . .• , A., are distinct calars and rn 1• • • • , m, are positive integers, find a basis of the kerne] of the linear differential operator (D- A. 1) 111 '

a. Two forces are acting on the block: it weight and the buoyancy (the weight of the displaced water). Recall iliat the den sity of water is 1 g/cm 3 . Find formulas for these two forces. b. Set up a differential equation for x(t ). Find the olution, assum.ing that the block is initially completely submersed (x( O) = 10) and at rest. c. How does the period of the oscillation change if you chaoge the dimen ions of the block (consider a !arger or smaller cube)? What if the wood has a different density, or if the initial state is different? What if you conduct the experiment on the moon?

•••

(D-A.,)'"' .

39. Find all olutions of tlle linear DE 111 /

(!)

+ 3/" (1) + 3f'(t ) + f

(t) = 0.

(Hint: U e Exercise 38.)

40. Find all olution of the linear DE d 3x

-

dr

(Hint: U e Exercise 38.)

d2x

dx

dr

dt

+ -2- -

- x = O.

544 •

hap. 9 Linear Space

Sec. 9.4 Linear Differential Operators •

41. If T i an nth-order linear differ nüal operator and ). i an arbitrary sca lar, i ). nec ssarily an eigenva lue of T ? [f so, what i lhe dimen sion of the e igen pace associated wirh ). ? 42. Let C be the s pace of all real-valued smooth functi ons. a. Consider the lin ar differential operator T = D 2 from C to cco. Find all (real) eigenvalue ofT. Foreach eigenva lue . find a ba is of the associated e igenspace. b. Let P be the ub pa e f C con i ting of all periodic funcüons f( t with period one (that i . .f(t + 1) = f(t), for a ll r). Con ider the linear differential operator L = D2 fro m P to P. Findall (r a l) e igenvalues and eigenfunctions of L. 43. The displacement of a certain forced oscillator can be modeled by the OE d 2x dt 2

dx + 5 dc

+ 6x

d 2x

dx

dr-

dt

= cos(t).

= cos(3t).

a. Find all solutions of this OE. b. Describe the long-terrn behavior of tbis oscillator.

45. Use Fact 9.4.13 to solve the initial value problern c.dtx.r- --

[o1 21 Jx-

with

.T(Ü) = [ _

Hint: Find first x 2 (t) and then x 1(t).

!n

46. Use Fact 9.4.13 to olve the initial value problern

~~ ~ [ g

with

X(0)

~

IJ.

1

ul

Hint: Findfirst x 3 (t), then x 2 (r), and then x 1 (t). 47. Consider the initial value problem

dx

- = Ai dt

with

x(O) =

io ,

where A is an upper triangular n x n matrix with m distinct diagonal entries AJ. . .. , ),m· See the examp les in Exercises 45 and 46. a. Show that this problem has a unique sol ution x(t), whose cornponents x;(t) are of the form x;(t) = PJ(t)e>" r + · ·-+ p111 (c)e>.",r,

for some polynomials PJ (t ). Hint: Findfirs t x"(t ), then x" _ 1 (t ) and so on.

b. ~h~w that. the zero state is a table eq uilibriurn soluti on of thi system if an

only If) the real pmt of all the ). 1. is neoati ve

48. Consicler an n x n . h m d1.stmct . • • "' e1genvalue ;,_ 1 ma t n· x A Wit a . S how that the inüial value problern

dx

dl

=Ai

with

). ' • .. '

x(O) = ;



0

ha a unique solution ,r(t).

b. Show that the zero tate i a table equilibrium so lution of the ystern

dx

dt= A .i

if and only if the real part of all the A· is negative. Hint: Exercise 7.3 .37 ' and Exercise 47 above are helpful.

a. Find all solution of thi OE. b. Oescri be the long-terrn beha ior of this o cillator. 44. The di placement of a certai n forced oscil lator can be modeled by the OE

- . , + 4 - + 5x

545

App. A Vector •

547

b. Scalar multiplication The product of a · calar k and a vector ü i defined componentwi e as weil : kÜ=k

v,v2

r~II

J

=

kv2 J rkvl

k~/1

EXAMPLE 1 ....

VECTORS EXAMPLE 2 .... Here we will provide a conci e ununary of basic facts on vectors. In Section 1.2, vectors are detined as matrice with only one column : ü = [

~ J.

The scalars v;

v"

The negative or opposite of a vector ü in ~~~ is defin ed as

are called the components of the vecto r. 1 The set of all vectors with n components is denoted by ~" . You may be accustomed to a different notation for vectors. Writin g the components in a column is the most convenient notation for linear algebra.

+

-Ü=(- l )Ü.

w

The dijf~rence ü of two vectors ü and ÜJ in ~~~ is defined componentWlse. Alternat1vely, we can express tbe di ffere nce of two vector a

ü-

Vector Algebra Definition A.1

The vector in IR" that consi ts of n zero i cal led the zero vector in IR.":

Vector addition a. The sum of two vectors ü and

win ~~~

is defined "componentwise":

Fact A.2 _

_

v+w=

1

546

w= u+ c- w) .

v'. J+ [w ,J .'J [v,+. w + W2

V2

W2

V2

[ ~~~

~II

V" ~ Wn

In vector and matrix algebra. the term "scalar" i synonymaus wi th (real) number.

Rules of vector algebra The followin g formulas hold for all vectors ü v, w c and k:

10

"

and fo r all calars

1. (Ü + ü) + w= ü + (Ü + tv) (additi on i associative) 2. ü + w= ÜJ + ii (addi tion i commutative) 3. ii + Ö= ii 4. ~or ea~ h ü in IR" there is a unique ;r in IR" such that ü + i: = 0, namely, x = - v.

548 •

App. A Vectors App. A Vectors •

+ iü) = k Ü + kiil c + k )v = cü + kv

5. k (Ü

549

6. 7. c(kv) = (ck)ü 8. 1- = -

The e rule follow from the corre ponding rule fo r calars (commutativity, associativity, ilistributivity); for example,

Figure 3 The components of o vedor in standard represenlation are the coordinates of its endpoint.

L

+ Geometrical Representation of Vectors The stan.dard representation of a vector

R

- [Äl]

x=

in the Cartesian coorctinate plane i as an arrow (a ilirected lined segment) connecting the origin to the point (x 1 • x 2 ), a shown in Figure I. Occasionally it is helpful to translate (or shift) the vector in the plane (preerving its direction and length), o that it will connect some point (a 1, a 2) to the point (a r + x 1, a2 + x2) . See Figure 2. In this text, we consider the standard representation of vectors, unl e explicitJy state that the vector ha been tran lated .

we

A vector in 1R2 (in standard representation) is uniquely determined by its endpoint. Conversely, with each point in the pl ane we can associate its position vector, which connects the origin to the given point. See Figure 3.

Figure 1

xis a vedor on the line l. (b) is a vector in the region K.

Figure 4 (a)

-'2

x

(a)

(b)

We need not clearly distinouish betw . . identify them as long as we consi' tentJ ' use e~: a vector and tts end_point; we can For example, we will talk abou/ "the vec:~~dard represe~tatwn of vector . mean the vectors whose endpoints are o th r s ot~ a !Jne L when we really Likewise we can talk about "the t ~ e 1 ~e L (m Standard representation). ' vec or tn a reo 100 R" · th 1 . tn efp ane. See Frgure 4 . Adding vector in JR2 can be representedo b . Y means 0 a paraiJelograrn, a bown in Figure 5. If k is a positive scalar, then kv is obtained b , . factor of k leaving its düection unchanged If k . ) stre_tchmg the vec~or v by a 15 · negatJve, then the direction is reserved. See Figure 6.

Figure 2 Figure S

Figure 6

tran lated i '

----- --- --- --- -- -----'

translated ü V

translated ÜJ

./ (- t)

jj

550 •

App. A Vectors

Definition A.3    Parallel vectors

We say that two vectors $\vec v$ and $\vec w$ in $\mathbb{R}^n$ are parallel if one of them is a scalar multiple of the other.

Let us briefly recall Cartesian coordinates in space: if we choose an origin $O$ and three mutually perpendicular coordinate axes through $O$, we can describe any point in space by a triple of numbers $(x_1, x_2, x_3)$. See Figure 7. The standard representation of the vector
\[
\vec x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
\]
is the arrow connecting the origin to the point $(x_1, x_2, x_3)$, as shown in Figure 8.

Figure 7  Cartesian coordinates in space; the point $(x_1, x_2, x_3)$.  Figure 8  The standard representation of a vector in $\mathbb{R}^3$.

EXAMPLE 3 ▶ The vectors
\[
\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} \quad\text{and}\quad \begin{bmatrix} 2 \\ 4 \\ 6 \end{bmatrix}
\]
are parallel, since
\[
\begin{bmatrix} 2 \\ 4 \\ 6 \end{bmatrix} = 2\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}. \qquad ◀
\]

EXAMPLE 4 ▶ The vectors
\[
\begin{bmatrix} 3 \\ -1 \\ 2 \end{bmatrix} \quad\text{and}\quad \begin{bmatrix} -6 \\ 2 \\ -4 \end{bmatrix}
\]
are parallel, since the second is $(-2)$ times the first. ◀

◆ Dot Product, Length, Orthogonality

Definition A.4    Dot product

Consider two vectors $\vec v$ and $\vec w$ with components $v_1, \ldots, v_n$ and $w_1, \ldots, w_n$, respectively. Here $\vec v$ and $\vec w$ may be column or row vectors, and they need not be of the same type (these conventions are convenient in linear algebra). The dot product of $\vec v$ and $\vec w$ is defined as
\[
\vec v \cdot \vec w = v_1 w_1 + v_2 w_2 + \cdots + v_n w_n.
\]
Note that the dot product of two vectors is a scalar.

EXAMPLE 5 ▶
\[
\begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix} \cdot \begin{bmatrix} 3 \\ -1 \\ -1 \end{bmatrix} = 1 \cdot 3 + 2(-1) + 1(-1) = 0. \qquad ◀
\]

EXAMPLE 6 ▶ A row vector may be dotted with a column vector:
\[
\begin{bmatrix} 1 & 2 & 3 \end{bmatrix} \cdot \begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix} = 4 + 10 + 18 = 32. \qquad ◀
\]
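Since a column vector is an $n \times 1$ matrix, the dot product of two column vectors $\vec v$ and $\vec w$ in $\mathbb{R}^n$ can also be written as a matrix product (a forward-looking aside; the connection is developed once matrix multiplication is available):
\[
\vec v \cdot \vec w = \vec v^{\,T}\vec w = \begin{bmatrix} v_1 & \cdots & v_n \end{bmatrix} \begin{bmatrix} w_1 \\ \vdots \\ w_n \end{bmatrix}.
\]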

Fact A.5    Rules for dot products

The following equations hold for all column or row vectors $\vec u$, $\vec v$, $\vec w$ with $n$ components, and for all scalars $k$:

a. $\vec v \cdot \vec w = \vec w \cdot \vec v$
b. $(\vec u + \vec v) \cdot \vec w = \vec u \cdot \vec w + \vec v \cdot \vec w$
c. $(k\vec v) \cdot \vec w = k(\vec v \cdot \vec w)$
d. $\vec v \cdot \vec v > 0$ for all nonzero $\vec v$.

The verification of these rules is straightforward. Let us justify rule d: since $\vec v$ is nonzero, at least one of the components $v_i$ is nonzero, so that $v_i^2$ is positive. Then
\[
\vec v \cdot \vec v = v_1^2 + v_2^2 + \cdots + v_n^2
\]
is positive as well.

Let us think about the length of a vector. The length of a vector
\[
\vec x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
\]
in $\mathbb{R}^2$ is $\sqrt{x_1^2 + x_2^2}$, by the Pythagorean theorem. See Figure 9. This length is often denoted by $\lVert \vec x \rVert$. Note that we have
\[
\vec x \cdot \vec x = x_1^2 + x_2^2 = \lVert \vec x \rVert^2,
\]
so that
\[
\lVert \vec x \rVert = \sqrt{\vec x \cdot \vec x}.
\]
Verify that this formula holds for vectors in $\mathbb{R}^3$ as well. We can use this formula to define the length of a vector in $\mathbb{R}^n$:

Figure 9  The length of $\vec x$ in $\mathbb{R}^2$ is $\sqrt{x_1^2 + x_2^2}$.

Definition A.6    Length (norm)

The length (or norm) $\lVert \vec x \rVert$ of a vector $\vec x$ in $\mathbb{R}^n$ is
\[
\lVert \vec x \rVert = \sqrt{\vec x \cdot \vec x} = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}.
\]

EXAMPLE 7 ▶ Find $\lVert \vec x \rVert$ for
\[
\vec x = \begin{bmatrix} 7 \\ 1 \\ 7 \\ 1 \end{bmatrix}.
\]

Solution
\[
\lVert \vec x \rVert = \sqrt{49 + 1 + 49 + 1} = \sqrt{100} = 10. \qquad ◀
\]

Definition A.7    Unit vectors

A vector $\vec u$ in $\mathbb{R}^n$ is called a unit vector if $\lVert \vec u \rVert = 1$; that is, the length of the vector $\vec u$ is 1.
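Any nonzero vector $\vec x$ can be scaled to a unit vector pointing in the same direction by dividing by its length (a standard normalization, illustrated here with the vector from Example 7):
\[
\vec u = \frac{1}{\lVert \vec x \rVert}\,\vec x = \frac{1}{10}\begin{bmatrix} 7 \\ 1 \\ 7 \\ 1 \end{bmatrix} = \begin{bmatrix} 0.7 \\ 0.1 \\ 0.7 \\ 0.1 \end{bmatrix},
\qquad
\lVert \vec u \rVert = \frac{1}{10}\sqrt{49 + 1 + 49 + 1} = 1.
\]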

=

2

V 11

positive as well. Let us think about the length of a vecto r. The length of a vector

X· X =

y

v· w

c. (kü) . w= k(v. w) d. ü. ü > 0 for all non zero

Figure 9

553

x · ji =

0.

~ou can read ~ese equ~tions ba~kward t? show that ,t . .Y = 0 if and onl y jf ;t= and y are perpend1cular. This reasomng appbes to vectors in JR3 as wtf~. We can use this cbaracterization to define perpe nd icul ar vectors in II{" :

x in lR" is

jxT + xi + · · · + x;. Definition A.8

EXA MPLE 7 ..... Find llx II for

Two vectors v and w in II{" are called perpendicular (or orthogonal) if

ü . w= 0.
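For instance (a check with vectors chosen here for illustration), the vectors
\[
\vec v = \begin{bmatrix} 2 \\ -1 \\ 3 \end{bmatrix} \quad\text{and}\quad \vec w = \begin{bmatrix} 1 \\ 2 \\ 0 \end{bmatrix}
\]
are orthogonal, since $\vec v \cdot \vec w = 2 \cdot 1 + (-1)2 + 3 \cdot 0 = 0$; Example 5 above exhibits another such pair.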

◆ Cross Product

Here we present the cross product for vectors in $\mathbb{R}^3$ only; for a generalization to $\mathbb{R}^n$ see Exercises 5.2.30 and 5.3.17. In Chapter 5 we discuss the cross product in the context of linear algebra.

Definition A.9    Cross product

The cross product $\vec v \times \vec w$ of two vectors $\vec v$ and $\vec w$ in $\mathbb{R}^3$ is the vector
\[
\vec v \times \vec w = \begin{bmatrix} v_2 w_3 - v_3 w_2 \\ v_3 w_1 - v_1 w_3 \\ v_1 w_2 - v_2 w_1 \end{bmatrix}.
\]
Unlike the dot product, the cross product $\vec v \times \vec w$ is a vector in $\mathbb{R}^3$.
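As an illustration of Definition A.9 (the vectors here are chosen for concreteness):
\[
\begin{bmatrix} 1 \\ 0 \\ 2 \end{bmatrix} \times \begin{bmatrix} 0 \\ 1 \\ 3 \end{bmatrix}
= \begin{bmatrix} 0 \cdot 3 - 2 \cdot 1 \\ 2 \cdot 0 - 1 \cdot 3 \\ 1 \cdot 1 - 0 \cdot 0 \end{bmatrix}
= \begin{bmatrix} -2 \\ -3 \\ 1 \end{bmatrix}.
\]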

Fact A.10    Geometric interpretation of the cross product

Let $\vec v \times \vec w$ be the cross product of two vectors $\vec v$ and $\vec w$ in $\mathbb{R}^3$. Then:

a. $\vec v \times \vec w$ is orthogonal to both $\vec v$ and $\vec w$.
b. The length of the vector $\vec v \times \vec w$ is numerically equal to the area of the parallelogram defined by $\vec v$ and $\vec w$ (see Figure 11a).
c. If $\vec v$ and $\vec w$ are not parallel, then the vectors $\vec v$, $\vec w$, $\vec v \times \vec w$ form a right-handed system (see Figure 11b).

Figure 11  (a) $\lVert \vec v \times \vec w \rVert$ is numerically equal to the shaded area. (b) A right-handed system.
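Part a of Fact A.10 can be verified directly from Definition A.9 with dot products; for the vectors of the computation above,
\[
\begin{bmatrix} 1 \\ 0 \\ 2 \end{bmatrix} \cdot \begin{bmatrix} -2 \\ -3 \\ 1 \end{bmatrix} = -2 + 0 + 2 = 0
\qquad\text{and}\qquad
\begin{bmatrix} 0 \\ 1 \\ 3 \end{bmatrix} \cdot \begin{bmatrix} -2 \\ -3 \\ 1 \end{bmatrix} = 0 - 3 + 3 = 0,
\]
as claimed.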

ANSWERS TO ODD-NUMBERED EXERCISES

CHAPTER 1

1.1
1. (x, y) = (−1, 1)
3. No solutions
5. (x, y) = (0, 0)
7. No solutions
9. (x, y, z) = (1, 1 − 2t, t), where t is arbitrary
11. (x, y) = (4, 1)
13. No solutions
15. (x, y, z) = (0, 0, 0)
17. (x, y) = (−5a + 2b, 3a − b)
19. a. The products are competing. b. P1 = 26, P2 = 46
21. a = 400, b = 300
23. a. (x, y) = (t, 2t) b. (x, y) = (t, −3t) c. (x, y) = (0, 0)
25. a. If k = 7 b. If k = 7, there are infinitely many solutions. c. If k = 7, the solutions are (x, y, z) = (1 − t, 2t − 3, t).
27. 7 children (3 boys and 4 girls)
29. f(t) = 1 − 5t + 3t²
31. If a − 2b + c = 0
33. a. The intercepts of the line x + y = 1 are (1, 0) and (0, 1); the intercepts of the line x + y/2 = t are (t, 0) and (0, 2t). The lines intersect if t ≠ 2.
35. There are many correct answers.

1.2
13. No solutions
21. 4 types
25. Yes; perform the operations backward.
27. No; you cannot make the last column zero by elementary row operations.
29. a = 2, b = c = d = 1
39. a. Neither the manufacturing nor the energy sector makes demands on agriculture. b. x1 ≈ 18.67, x2 ≈ 22.60, x3 ≈ 3.63
41. m1 = 2m2
43. a ≈ 12.17, b ≈ −1.15, c ≈ 0.18. The longest day is about 13.3 hours.

1.3
1. a. No solutions b. One solution c. Infinitely many solutions
3. The rank is 1.
5. b. x = 3, y = 2
7. One solution
25. The system Ax = c has infinitely many solutions or none.
27. a. True b. False
41. One solution
43. No solutions
47. a. x = 0 is a solution. b. By part a and Fact 1.3.3. c. A(x1 + x2) = Ax1 + Ax2 = 0 + 0 = 0 d. A(kx) = k(Ax) = k0 = 0
49. a. Infinitely many solutions or none b. One solution or none c. No solutions d. Infinitely many solutions
51. If n = r and s = p

CHAPTER 2

2.1
1. Not linear
3. Not linear
7. T is linear.
33. Use a parallelogram.
35. Yes; mimic Example 2.2.2.
37. Write w = c1 v1 + c2 v2; then T(w) = c1 T(v1) + c2 T(v2).
39. True
41. Yes
43. a. T is linear.

2.2
3. The parallelogram in ℝ³ defined by the vectors T(e1) and T(e2)
5. About 2.5 radians
9. A shear parallel to the e2 axis
15. aii = 2ui² − 1, and aij = 2ui uj when i ≠ j
17. Reflection in the origin; this transformation is its own inverse.
19. Projection onto the e1 axis; not invertible
21. Rotation through an angle of 90° in the clockwise direction; invertible
23. Clockwise rotation through an angle of 90° followed by a dilation by a factor of 2; invertible
25. Dilation by a factor of 2
27. Reflection in the e1 axis
29. Reflection in the origin
41. ref_Q(x) = −ref_P(x)
43. [cos α, sin α; −sin α, cos α]: a clockwise rotation through the angle α
45. Write x = c1 v1 + c2 v2 and use Fact 2.2.1 to compute T(x) and L(x).
47. Express f(t) in terms of a, b, c, d.
49. The image is an ellipse with semimajor axes ±5 e1 and semiminor axes ±2 e2.
51. The curve C is the image of the unit circle under the transformation with matrix [v  w].

2.3
5. Not invertible
7. Not invertible
9. Not invertible
15. A is invertible if a ≠ 0 or b ≠ 0.
21. Not invertible
23. Invertible
25. Invertible
29. For all k except k = 1 and k = 2
31. It's never invertible.
33. If a² + b² = 1
35. a. Invertible if a, d, f are all nonzero b. Invertible if all diagonal entries are nonzero c. Yes; use Fact 2.3.5. d. Invertible if all diagonal entries are nonzero
37. (cA)⁻¹ = (1/c)A⁻¹
39. M is invertible; if mij = k (where i ≠ j), then the ijth entry of M⁻¹ is −k; all other entries are the same.
41. The transformations in parts a, c, d are invertible, while the projection in part b is not.
43. Yes
45. a. 3³ = 27 b. n³ c. 12³/3³ = 64 (seconds)
47. f(x) = x² is not invertible, but the equation f(x) = 0 has the unique solution x = 0.

2.4
3. Undefined
11. Undefined
15. 70
17. Undefined
21. True
23. True
27. Not invertible
33. No
41. a. The matrices Dα Dβ and Dβ Dα both represent the counterclockwise rotation through the angle α + β. b. Dα Dβ = Dβ Dα = [cos(α+β), −sin(α+β); sin(α+β), cos(α+β)]
43. A = [cos(2π/3), −sin(2π/3); sin(2π/3), cos(2π/3)]
51. Yes; yes; each elementary row operation can be "undone" by an elementary row operation.
53. a. Use Exercise 52; let S = E1 E2 ⋯ Ep.
57. Yes; use Fact 2.4.4.
65. A is invertible if both A11 and A22 are invertible.
67. (ith row of AB) = (ith row of A)B
69. rank(A) = rank(A11) + rank(A23)
71. Only A = In
73. (ijth entry of AB) = Σk aik bkj ≤ s Σk bkj ≤ sr
79. g(f(x)) = x, for all x; f(g(x)) = x if x is even, x + 1 if x is odd. The functions f and g are not invertible.
81. False

CHAPTER 3

3.1
1. ker(A) = {0}
17. All of ℝ²
21. All of ℝ³
23. The kernel is {0}; the image is all of ℝ².
25. Same as Exercise 23
27. f(x) = x³ − x
33. T(x) = x + 2y + 3z
35. ker(T) is the plane with normal vector v; im(T) = ℝ
37. im(A) = span(e1, e2), ker(A) = span(e1); im(A²) = span(e1), ker(A²) = span(e1, e2); A³ = 0, so ker(A³) = ℝ³ and im(A³) = {0}
39. a. ker(B) is contained in ker(AB), but they need not be equal. b. im(AB) is contained in im(A), but they need not be equal.
41. a. im(A) is a line, and ker(A) is the perpendicular line. b. A² = A; if v is in im(A), then Av = v. c. Orthogonal projection onto the line im(A)
43. Suppose A is an m × n matrix of rank r. Let B be the matrix you get when you omit the first r rows and the first n columns of rref[A : Im]. (What can you do when r = m?)
45. There are n − r nonleading variables, which can be chosen freely; the general vector in the kernel can be written as a linear combination of n − r vectors, with the nonleading variables as coefficients.
47. im(T) = L1 and ker(T) = L2
51. ker(AB) = {0}
53. b. ker(H) = span(v1, v2, v3, v4), by part a, and im(M) = span(v1, v2, v3, v4), by Fact 3.1.3; thus ker(H) = im(M). H(Mx) = 0, since Mx is in im(M) = ker(H).

3.2
1. Not a subspace
3. W is a subspace.
7. Yes
9. Dependent
11. Independent
13. Dependent
15. Dependent
17. Independent
19. Dependent
21. Dependent
35. Consider a relation c1 v1 + ⋯ + ci vi + ⋯ + cm vm = 0, where ci ≠ 0 but cj = 0 for all j > i; solve for vi.
37. The vectors T(v1), …, T(vm) are not necessarily independent.
39. The vectors v1, …, vm are linearly independent.
41. The columns of B are linearly independent, while the columns of A are dependent.
43. The vectors are linearly independent.
45. Yes
51. a. Consider a relation c1 v1 + ⋯ + cp vp + d1 w1 + ⋯ + dq wq = 0. Then c1 v1 + ⋯ + cp vp = −d1 w1 − ⋯ − dq wq is 0, because this vector is both in V and in W. The claim follows. b. From part a we know that the vectors v1, …, vp, w1, …, wq are linearly independent. Consider a vector x in V + W. By definition of V + W we can write x = v + w for a v in V and a w in W. Here v is a linear combination of the vi, and w is a linear combination of the wj. This shows that the vectors v1, …, vp, w1, …, wq span V + W.

3.3
15. 5
27. They do.
33. The dimension of a hyperplane in ℝⁿ is n − 1.
39. ker(C) is at least 1-dimensional, and ker(C) is contained in ker(A).
41. A basis of V is also a basis of W, by Fact 3.3.4c.
43. dim(V + W) = dim(V) + dim(W), by Exercise 3.2.51
45. The first p columns of rref(A) contain leading 1's because the vi are linearly independent. Now apply Fact 3.3.7.
49. [0 1 0 2 0], [0 0 1 3 0], [0 0 0 0 1]
51. a. A and E have the same row space, since elementary row operations leave the row space unchanged. b. rank(A) = dim(rowspace(A)), by part a and Exercise 50.
55. Suppose rank(A) = m. The submatrix of A consisting of the m pivot columns of A is invertible, since the pivot columns are linearly independent. Conversely, if A has an invertible m × m submatrix, then the columns of that submatrix span ℝᵐ, so im(A) = ℝᵐ and rank(A) = m.
57. Let m be the smallest number such that Aᵐ = 0. By Exercise 56 there are m linearly independent vectors in ℝⁿ; therefore, m ≤ n.
61. a. 3, 4, or 5 b. 0, 1, or 2

CHAPTER 4

4.1
7. Obtuse
9. Acute
19. a. Orthogonal projection onto L⊥ b. Reflection in L⊥ c. Reflection in L
29. By Pythagoras, ‖x‖ = √(49 + 9 + 4 + 1 + 1) = 8.
31. p ≤ ‖x‖²; equality holds if (and only if) x is a linear combination of the vectors vi.
35. No; if u is a unit vector in L, then x · proj_L x = x · (u · x)u = (u · x)² ≥ 0.

4.2
43. Write the QR factorization of A in partitioned form as A = [A1  A2] = [Q1  Q2] R; then A1 = Q1 R1 is the QR factorization of A1.
45. Yes

4.3
1. (Av) · w = (Av)ᵀ w = vᵀ Aᵀ w = v · (Aᵀ w)
5. Yes, by Fact 4.3.4a
7. Yes, since A Aᵀ = In
9. The first column is a unit vector; we can write a1 = [cos φ; sin φ] for some φ. The second column is a unit vector orthogonal to the first; there are two choices: [−sin φ; cos φ] and [sin φ; −cos φ]. Solution: [cos φ, −sin φ; sin φ, cos φ] and [cos φ, sin φ; sin φ, −cos φ], for arbitrary φ.
13. No, by Fact 4.3.2
15. No
17. Yes, since (A⁻¹)ᵀ = (Aᵀ)⁻¹ = A⁻¹
19. (ijth entry of A) = vi vj
21. All entries of A are 1/n.
23. A represents the reflection in the line spanned by v (compare with Example 2), and B represents the reflection in the plane with normal vector v.
25. dim(ker(A)) = n − rank(A) and dim(ker(Aᵀ)) = m − rank(Aᵀ) = m − rank(A), by Fact 4.3.9c. Therefore, the dimensions of the two kernels are equal if (and only if) m = n, that is, if A is a square matrix.
29. By Exercise 4.2.45, we can write Aᵀ = QL, where Q is orthogonal and L is lower triangular. Then A = (QL)ᵀ = Lᵀ Qᵀ does the job.
31. a. Im = Q1ᵀ Q1 = Sᵀ Q2ᵀ Q2 S = Sᵀ S, so that S is orthogonal. b. R2 R1⁻¹ is both orthogonal (by part a) and upper triangular, with positive diagonal entries, so R2 R1⁻¹ = Im; thus R2 = R1 and Q1 = Q2, as claimed.
41. Q is diagonal with qii = 1 if aii > 0 and qii = −1 if aii < 0. You can get R from A by multiplying the ith row of A with −1 whenever aii is negative.

4.4
3. The vectors form a basis of ℝⁿ.
7. im(A) = (ker(A))⊥
11. b. L(L⁺(y)) = proj_W y, where W = im(A) = (ker(Aᵀ))⊥ c. L⁺(L(x)) = proj_V x, where V = (ker(A))⊥ = im(Aᵀ) d. im(L⁺) = im(Aᵀ) and ker(L⁺) = {0}
15. Let B = (Aᵀ A)⁻¹ Aᵀ.
17. Yes; note that ker(A) = ker(Aᵀ A).
21. ‖b − Ax*‖ = 42
31. 3 + 1.5t
37. a. The least-squares approximation is log(d) = 0.915 + 0.017t. b. Exponentiate the equation in part a: d = 10^{log d} = 10^{0.915 + 0.017t} ≈ 8.221 · 10^{0.017t} ≈ 8.221 · 1.041ᵗ. c. Predicts 259 displays for the A320; there are many fewer, since the A320 is highly computerized.
39. a. Try to solve the system c0 + log(600,000)c1 = log(250), c0 + log(200,000)c1 = log(60), c0 + log(60,000)c1 = log(25), c0 + log(10,000)c1 = log(12), c0 + log(2,500)c1 = log(5). The least-squares solution is c0 ≈ −1.616, c1 ≈ 0.664; use the approximation log(z) = −1.616 + 0.664 log(g). b. Exponentiate: z = 10^{log z} ≈ 10^{−1.616 + 0.664 log(g)} ≈ 0.0242 · g^{0.664}.

CHAPTER 5

5.1
1. −2
3. 9
5. 24
7. 1
9. −72
11. 8
13. 36
15. 24
17. −1
21. Invertible
23. Invertible if a, b, c are all nonzero
25. Invertible
27. 3, 8
29. 1
31. −1, 0, 2
33. det(A) = 0
35. det(A) = 1; there are 50 · 99 inversions.
37. det(M) = det(A) det(C)
39. Let aii be the first diagonal entry that does not belong to the pattern. The pattern must contain a number in the ith row to the right of aii and also a number in the ith column below aii.
41. The matrix A is invertible except when x = 0 or x = 1.
47. det(−A) = (−1)ⁿ det(A)
49. det(A⁻¹) = 1/det(A)
51. det(A) = 1 if n is even and det(A) = −1 if n is odd (there are n/2 inversions; use Exercise 17).

5.2
1. −3
17. a. det = a1 − a0 b. Use Laplace expansion down the last column to see that f(t) is a polynomial of degree ≤ n; the coefficient k of tⁿ is Π_{n>i>j}(ai − aj). Now det(A) = f(an) = k(an − a0)(an − a1)⋯(an − a_{n−1}) = Π_{i>j}(ai − aj), as claimed.
25. det(Aᵀ A) = (det(A))² > 0
29. Aᵀ A = [‖v‖², v · w; v · w, ‖w‖²], so det(Aᵀ A) = ‖v‖² ‖w‖² − (v · w)² ≥ 0, by the Cauchy–Schwarz inequality. Equality holds only if v and w are parallel.
31. Expand down the first column: f(x) = −x det(A41) + constant, so f′(x) = −det(A41) = −24.
33. T is linear.
35. det(Q1) = det(Q2) = 1 and det(Qn) = 2 det(Q_{n−1}) − det(Q_{n−2}), so det(Qn) = 1 for all n.
37. det(P1) = 1 and det(Pn) = det(P_{n−1}), by expansion down the first column, so det(Pn) = 1 for all n.
39. a. Note that det(A) det(A⁻¹) = 1, and both factors are integers. b. Use the formula for the inverse of a 2 × 2 matrix.

5.3
1. 50
3. 13
7. 110
9. Geometrically: ‖v1‖ is the base and ‖v2‖ sin α the height of the parallelogram defined by v1 and v2. Algebraically: in Exercise 5.2.29 we learned that det(Aᵀ A) = ‖v1‖² ‖v2‖² − (v1 · v2)² = ‖v1‖² ‖v2‖² − ‖v1‖² ‖v2‖² cos²α = ‖v1‖² ‖v2‖² sin²α, so that |det(A)| = √(det(Aᵀ A)) = ‖v1‖ ‖v2‖ sin α.
11. |det(A)| = 12, the expansion factor of T on the parallelogram defined by v1 and v2
15. det[v1 v2 v3] = v1 · (v2 × v3) is positive if (and only if) v1 and v2 × v3 enclose an acute angle.
17. a. V(v1, v2, v3, v1 × v2 × v3) = V(v1, v2, v3) ‖v1 × v2 × v3‖, because v1 × v2 × v3 is orthogonal to v1, v2, and v3. b. V(v1, v2, v3, v1 × v2 × v3) = |det[v1 × v2 × v3  v1  v2  v3]| = ‖v1 × v2 × v3‖², by definition of the cross product. c. V(v1, v2, v3) = ‖v1 × v2 × v3‖, by parts a and b.
21. a. Reverses b. Preserves c. Reverses
25. A⁻¹ = (1/det(A)) adj(A)
29. dx1 = −D⁻¹ R2 (1 − R1)(1 − α)² de2, dy1 = D⁻¹(1 − α) R2 (R1(1 − α) + α) de2 > 0, dp = D⁻¹ R1 R2 de2 > 0

CHAPTER 6

6.1
1. Yes; the eigenvalue is λ³.
3. Yes; the eigenvalue is λ + 2.
5. Yes
7. ker(λIn − A) ≠ {0}, because (λIn − A)v = 0; the matrix λIn − A is not invertible.
23. a. S⁻¹ vi = ei, because S ei = vi. b. S⁻¹AS = S⁻¹A[v1 … vn] = S⁻¹[λ1 v1 … λn vn] = [λ1 e1 … λn en], the diagonal matrix with diagonal entries λ1, …, λn.
31. All vectors of the form [5t; t], where t ≠ 0 (solve the linear system Ax = 4x).
33. c(t) = 300(1.1)ᵗ − 200(0.9)ᵗ, r(t) = 900(1.1)ᵗ − 100(0.9)ᵗ
37. a. A represents a reflection in a line followed by a dilation by a factor of √(3² + 4²) = 5; the eigenvalues are therefore 5 and −5. b. Solve the linear systems Ax = 5x and Ax = −5x.
39. If λ is an eigenvalue of S⁻¹AS with corresponding eigenvector v, then S⁻¹ASv = λv, so A(Sv) = λ(Sv), and λ is an eigenvalue of A (Sv is an eigenvector). Likewise, if v is an eigenvector of A, then S⁻¹v is an eigenvector of S⁻¹AS with the same eigenvalue.

6.2
1. 1, 3
3. 1, 3
5. None
7. 1, 1, 1
9. 1, 2, 2
11. −1
13. 1
15. Eigenvalues λ1,2 = 1 ± √k; two distinct real eigenvalues if k is positive, none if k is negative
17. A represents a reflection-dilation, with a dilation factor of √(a² + b²); the eigenvalues are ±√(a² + b²).
19. True (the discriminant tr(A)² − 4 det(A) is positive)
21. det(A) is the product of the eigenvalues, and tr(A) is their sum. To see this, write fA(λ) = (λ − λ1)⋯(λ − λn) = λⁿ − (λ1 + ⋯ + λn)λⁿ⁻¹ + ⋯ + (−1)ⁿ λ1⋯λn and compare with the formula in Fact 6.2.5.
23. fB(λ) = det(λIn − B) = det(λIn − S⁻¹AS) = det(S⁻¹(λIn − A)S) = (det(S))⁻¹ det(λIn − A) det(S) = fA(λ). A and B have the same eigenvalues, with the same algebraic multiplicities.
29. Ae = e, so that e is an eigenvector with associated eigenvalue 1.
31. A and Aᵀ have the same eigenvalues, by Exercise 22. Since the row sums of Aᵀ are 1, we can use the results of Exercises 29 and 30: 1 is an eigenvalue of A; if λ is an eigenvalue of A, then |λ| ≤ 1.
37. We can write fA(λ) = (λ − λ0)² g(λ). By the product rule, fA′(λ) = 2(λ − λ0)g(λ) + (λ − λ0)² g′(λ), so that fA′(λ0) = 0.

6.3
1. Eigenbasis with eigenvalues 7, 9
3. Eigenbasis with eigenvalues 4, 9
7. Eigenbasis e1, e2, e3, with eigenvalues 1, 2, 3
9. Eigenbasis with eigenvalues 1, 1, 0
11. Eigenbasis with eigenvalues 3, 0, 0
13. Eigenbasis with eigenvalues 0, 1, −1
15. Eigenvectors with eigenvalues 0, 1
17. Eigenbasis e2, e4, e1, e3, with eigenvalues 1, 1, 0, 0
19. The geometric multiplicity of the eigenvalue 1 is 1 if a ≠ 0 and c ≠ 0; 3 if a = b = c = 0; and 2 otherwise. Therefore, there is an eigenbasis only if A = I3.
23. The only eigenvalue of A is 1, with E1 = span(e1); there is no eigenbasis. A represents a shear parallel to the x1-axis.
25. The geometric multiplicity is always 1.
27. fA(λ) = λ² − 5λ + 6 = (λ − 2)(λ − 3), so that the eigenvalues are 2, 3.
29. Both multiplicities are n − r.
31. They are the same.
33. a. Av · w = (Av)ᵀ w = vᵀ Aᵀ w = vᵀ Aw = v · Aw b. Suppose Av = λv and Aw = μw. Then (Av) · w = λ(v · w) and v · (Aw) = μ(v · w). By part a, λ(v · w) = μ(v · w), so that (λ − μ)(v · w) = 0. Since λ ≠ μ, it follows that v · w = 0, as claimed.
35. a. E1 = V and E0 = V⊥, so that the geometric multiplicity of 1 is m and that of 0 is n − m; the algebraic multiplicities are the same (see Exercise 31). b. E1 = V and E−1 = V⊥, so that the multiplicity of 1 is m and that of −1 is n − m.
37. Eigenbasis with eigenvalues 1.2, −0.8, −0.4; x0 = 50 v1 + 50 v2 + 50 v3; a(t) = 100(1.2)ᵗ + 50(−0.8)ᵗ + 100(−0.4)ᵗ, j(t) = 450(1.2)ᵗ + 100(−0.8)ᵗ + 50(−0.4)ᵗ, m(t) = 300(1.2)ᵗ − 100(−0.8)ᵗ − 100(−0.4)ᵗ
43. The proportion in the long run is 1:2:1.
45. 1 (rref(A) is likely the matrix with all 1's directly above the main diagonal and 0's everywhere else)

6.4
3. cos(2πk/n) + i sin(2πk/n), for k = 0, …, n − 1
5. If z = r(cos φ + i sin φ), then w = r^{1/n}(cos((φ + 2πk)/n) + i sin((φ + 2πk)/n)), for k = 0, …, n − 1.
7. Clockwise rotation through an angle of π/4 followed by a dilation by a factor of √2
9. Spirals outward, since |z| > 1
11. f(λ) = (λ − 1)(λ − 1 − 2i)(λ − 1 + 2i)
13. ℚ is a field.
15. The binary digits form a field.
17. ℍ is not a field (multiplication is noncommutative).
19. a. tr(A) = m, det(A) = 0 b. tr(B) = 2m − n, det(B) = (−1)^{n−m} (compare with Exercise 6.3.35)
21. 2 ± 3i
23. ±1, ±i
25. −1, −1, 3
27. tr(A) = λ1 + λ2 + λ3 = 0 and det(A) = λ1 λ2 λ3 = bcd > 0. Therefore, there are one positive and two negative eigenvalues; the positive one is largest in absolute value.
31. (1/5)A is a regular transition matrix (compare with Exercise 30), so that lim_{t→∞} ((1/5)A)ᵗ exists and has identical columns; therefore, the columns of Aᵗ are nearly identical for large t.
33. The matrix represents a rotation-dilation with a dilation factor of √(0.99² + 0.01²) < 1; the trajectory spirals inward.

6.5
1. Stable
3. Not stable
5. Not stable
7. Not stable
9. Not stable
11. For |k| < 1
13. For all k
15. Never stable
25. Not stable
27. Stable
29. May or may not be stable
33. c. Hint: let λ1, λ2, …, λ5 be the eigenvalues, with λ1 > |λj| for j = 2, …, 5, and let v1, …, v5 be corresponding eigenvectors. Write ei = c1 v1 + ⋯ + c5 v5; then the ith column of Aᵗ is Aᵗ ei = c1 λ1ᵗ v1 + ⋯ + c5 λ5ᵗ v5, which is nearly parallel to v1 for large t.
35. a. Choose an eigenbasis v1, …, vn and write x0 = c1 v1 + ⋯ + cn vn. Then ‖x(t)‖ ≤ |c1| ‖v1‖ + ⋯ + |cn| ‖vn‖ = M (use the triangle inequality ‖v + w‖ ≤ ‖v‖ + ‖w‖, and observe that |λiᵗ| ≤ 1). b. The trajectory is not bounded; this does not contradict part a, since there is no eigenbasis for the matrix.
39. a. (ijth entry of |AB|) = |Σk aik bkj| ≤ Σk |aik| |bkj| = (ijth entry of |A| |B|) b. By induction on t, using part a: |Aᵗ| = |Aᵗ⁻¹ A| ≤ |Aᵗ⁻¹| |A| ≤ |A|ᵗ⁻¹ |A| = |A|ᵗ
41. Let λ be the maximum of all |rii|, for i = 1, …, n; note that λ < 1. Then |R| ≤ λ(In + U), where U is upper triangular with uii = 0 and uij = |rij|/λ if j > i. Note that Uⁿ = 0 (see Exercise 38a). Now |Rᵗ| ≤ |R|ᵗ ≤ λᵗ(In + U)ᵗ. From calculus we know that lim_{t→∞} λᵗ tᵏ = 0.

CHAPTER 7

7.1
27. True
29. False
31. True

7.2
1. S = I2
5. Not diagonalizable over ℝ
9. Not diagonalizable
23. Yes; both are similar to the same diagonal matrix.
27. bij = a_{n+1−i, n+1−j}
37. Not similar, because A² = 0 and B² ≠ 0
41. Yes
43. Yes
45. Yes
47. No
49. No
55. Yes (by Example 6)
67. True (A has three distinct eigenvalues)

7.3
5. α = arccos(−1/n). Hint: if v0, …, vn are such vectors, let A = [v0 … vn]. Then the noninvertible matrix Aᵀ A has 1's on the diagonal and cos(α) everywhere else; now use Exercise 17.
13. Yes (reflection in E1)
15. Yes (we can use the same orthonormal eigenbasis).
17. Let A be the n × n matrix whose entries are all 1. The eigenvalues of A are 0 (with multiplicity n − 1) and n. Now B = qA + (p − q)In, so that the eigenvalues of B are p − q (with multiplicity n − 1) and qn + p − q. Therefore, det(B) = (p − q)^{n−1}(qn + p − q).
21. 48 = 6 · 4 · 2 (note that A has 6 unit eigenvectors)
23. The only possible eigenvalues are 1 and −1 (because A is orthogonal), and the eigenspaces E1 and E−1 are orthogonal complements (because A is symmetric); A represents the reflection in a subspace of ℝⁿ.
27. If n is even, we have the eigenbasis e1 − en, e2 − e_{n−1}, …, e_{n/2} − e_{n/2+1}, e1 + en, e2 + e_{n−1}, …, e_{n/2} + e_{n/2+1}, with associated eigenvalues 0 (n/2 times) and 2 (n/2 times).
29. Yes

7.4
5. Indefinite
7. Indefinite
9. a. A² is symmetric. b. A² = −Aᵀ A is negative semidefinite, so that its eigenvalues are ≤ 0. c. The eigenvalues of A are imaginary (that is, of the form bi, for a real b); the zero matrix is the only skew-symmetric matrix that is diagonalizable over ℝ.
11. The same (the eigenvalues of A and A⁻¹ have the same signs)
13. aii = q(ei) > 0
15. Ellipse; equation 7c1² + 2c2² = 1
17. Hyperbola; equation 4c1² − c2² = 1
19. A pair of lines: note that x1² + 4x1x2 + 4x2² = (x1 + 2x2)² = 1, so that x1 + 2x2 = ±1. The equation is of the form px1 + qx2 + b = 0; that is, it defines a line.
21. q(v) = v · λv = λ
23. Yes; A = (1/2)(M + Mᵀ)
25. a. The first is an ellipsoid, the second a hyperboloid of one sheet, and the third a hyperboloid of two sheets (see any text in multivariable calculus). Only the ellipsoid is bounded, and the first two surfaces are connected. b. The matrix A of this quadratic form has positive eigenvalues λ1 ≈ 0.56, λ2 ≈ 4.44, and λ3 = 1; since all eigenvalues are positive, the surface is an ellipsoid. The points farthest from the origin are ±(1/√λ1)u1, and those closest are ±(1/√λ2)u2.

7.5
3. All singular values are 1 (since Aᵀ A = In).
5. σ1 = σ2 = √(p² + q²)
15. The singular values of A⁻¹ are the reciprocals of those of A.
23. A Aᵀ ui = σi² ui for i = 1, …, r, and A Aᵀ ui = 0 for i = r + 1, …, m; the nonzero eigenvalues of Aᵀ A and A Aᵀ are the same.
25. Write v = c1 v1 + c2 v2; then ‖Av‖² = c1² ‖Av1‖² + c2² ‖Av2‖² = c1² σ1² + c2² σ2² ≤ (c1² + c2²)σ1² = σ1², so that ‖Av‖ ≤ σ1. The proof of σ2 ≤ ‖Av‖ is analogous.
27. Apply Exercise 26 to a unit eigenvector v with associated eigenvalue λ.
33. False
35. (Aᵀ A)⁻¹ Aᵀ ui = (1/σi) vi for i = 1, …, r
37. Yes

CHAPTER 8

8.1
1. x(t) = 7e^{5t}
3. P(t) = 7e^{0.03t}
9. x(t) = ((1 − k)t + 1)^{1/(1−k)}
11. x(t) = tan(t)
13. a. About 104 billion dollars b. About 150 billion dollars
15. The solution of the equation e^{kT} = 2 is T = (ln 2)/k ≈ 69/(100k).
23. a. db/dt = 0.05b + s, ds/dt = 0.07s b. b(t) = 50,000e^{0.07t} − 49,000e^{0.05t}, s(t) = 1,000e^{0.07t}

8.2
3. √2 e^{3πi/4}
5. e^{−0.1t}(cos(2t) − i sin(2t)); spirals inward, in the clockwise direction
7. Not stable
9. Stable
11. a. B = 2A d. The zero state is a stable equilibrium of the system dx/dt = grad(q) if (and only if) q is negative definite.
13. The eigenvalues of A⁻¹ are the reciprocals of the eigenvalues of A; the real parts have the same signs.
19. False; consider A with eigenvalues 1, 2, −4.
21. a. e^{pt}[cos(qt); sin(qt)]: a spiral if p ≠ 0 and a circle if p = 0; it approaches the origin if p is negative.
25. Eigenvalue 2 + 4i, so p = 2, q = 4; use Fact 8.2.6.
27. Eigenvalue −1 + 2i; the trajectory spirals inward, in the counterclockwise direction.
29. An ellipse with clockwise orientation
35. Note that Ax is parallel to ker(A) for all x, because A² = 0. Check analytically that x(t) = x0 + tAx0 solves dx/dt = Ax: dx/dt = Ax0, and Ax(t) = Ax0 + tA²x0 = Ax0.
37. Use Exercise 35 to solve the system dc/dt = (A − λ0 I2)c; then apply Exercise 8.1.24.
43. a. Competition c. Species 1 "wins" if y(0)/x(0) < 2.
45. b. y = 1/(1 − t) has a vertical asymptote at t = 1. c. Species 1 "wins" if y(0) < x(0)/2.
47. a. Symbiosis b. The eigenvalues are (1/2)(−5 ± √(9 + 4k²)); there are two negative eigenvalues if k < 2, and if k > 2 there is a negative and a positive eigenvalue.
49. g(t) = 45e^{−0.8t} − 15e^{−0.4t} and h(t) = −45e^{−0.8t} + 45e^{−0.4t}
51. Convert the differential equation into the system dx/dt = y, dy/dt = −2x − 3y; the solutions of the given equation are x(t) = c1 e^{−t} + c2 e^{−2t}.
55. Eigenvalues λ1,2 = (1/2)(−q ± √(q² − 4p)); both eigenvalues are negative.

CHAPTER 9

9.1
1. Not a subspace
3. Subspace with basis t − 1, 2 − t², 5 − t³
5. Subspace
7. Subspace
9. Subspace
11. Not a subspace
13. Not a subspace
15. Subspace
17. Matrices with one entry equal to 1 and all other entries equal to 0
29. Basis (1, 1, 1, …), (0, 1, 2, 3, …); dimension 2
31. a. Basis 1, t, t²; dimension 3 b. Basis 1, t³; dimension 2
33. Linear; not an isomorphism
35. Linear; not an isomorphism
37. Isomorphism
39. Linear; not an isomorphism
41. Linear; not an isomorphism
43. Subspace
45. Yes, yes
49. 0_W = T(0_V) − T(0_V) = T(0_V + 0_V) − T(0_V) = T(0_V) + T(0_V) − T(0_V) = T(0_V)
55. Eigenspaces: E1 = real numbers, E−1 = imaginary numbers; eigenbasis 1, i
57. Eigenspaces: E1 = even functions, E−1 = odd functions
61. The standard basis of ℝ^{n×n} (the matrices with one entry equal to 1 and all other entries equal to 0)
63. Yes. Let f be an invertible function from ℝ² to ℝ and define v ⊕ w = f⁻¹(f(v) + f(w)) and k ⊙ v = f⁻¹(k f(v)).

9.2
1. Yes
3. a. No solution if a = 0 and b ≠ 1 b. Yes
5. a. ker(T) = {0} c. There is (just one) such polynomial.
7. a. If S is invertible b. If S is orthogonal
9. Yes, for positive k
11. True; the angle is θ.
13. t − t³, 1 + 2t − 3t²
15. ker(F) = E0 = skew-symmetric matrices, im(F) = E1 = symmetric matrices; F is diagonalizable.

9.3
13. The two norms are equal, by Fact 9.3.6.
15. If the matrix is symmetric and positive definite; this means that b = c, a > 0, and ad − bc > 0.
17. If ker(T) = {0}
19. T(f) = gf, where g is a polynomial of degree ≥ 1 (for example, g(t) = t)
21. Yes
23. 1, 2t − 1
25. b. f(t) = ((b − 1)cos(t) + a sin(t))/((b − 1)² + a²)
29. Σ_{k odd} 1/k² = π²/8

9.4
1. Ce^{5t}
5. −1 − t + Ce^{t}
7. c1 e^{−4t} + c2 e^{3t}
9. c1 e^{t} + c2 e^{−3t}
11. e^{3t}(c1 cos(t) + c2 sin(t))
13. e^{−t}(c1 + c2 t) (compare with Example 10)
17. e^{−t}(c1 + c2 t) − (1/2)cos(t)
21. c1 + c2 e^{−t} + c3 e^{−2t}
23. 3e^{5t}
35. a. c1 e^{−t} + c2 e^{−2t} b. 2e^{−t} − e^{−2t} c. −e^{−t} + 2e^{−2t} d. In part c the oscillator goes through the equilibrium state once; in part b it never reaches it.
37. x(t) = te^{−3t}
39. e^{−t}(c1 + c2 t + c3 t²)
41. λ is an eigenvalue with dim(E_λ) = n, because E_λ is the kernel of the nth-order linear differential operator T(x) − λx.
43. v(t) = (mg/k)(1 − e^{−kt/m}); lim_{t→∞} v(t) = mg/k, the terminal velocity.
SUB]ECT INDEX

n ' er to Odd- umbered E. r i es

in 1)

29.

1i n

31.

u r )=

,.....lim

r) -

b. 2e- 1

-

e- 21

d. l n pm1 th e o. cillmor gocs through Lhe equ il i bri um state once; in part b it never reac he it.

37. x(l) = re- 31 39. - I (c l + C2 i + C3 f 2 ) 4 1. A. i an eigenva lue ' i th dim (EJJ = 11 , becau e EJ. i. the kem el of the 11th- rder l inear di fferential

t

sin(2r )

qf- CI -

v(r) =

. 5. a. c 1e- 1 + c1 e -~ 1 c. - e - t + _ e -2t

'.!.!..Kk '·

operator T (x ) - A.x.

e-kt fm)

.n. !'o

= tenninal velocity .

45. e, [ l

o

(1)+

=

121]

1~

in (l )+c , e- 21 +c2 e- 31

A Affin e . ys tem, 337 Algebraic multipli city of an eigen va lue 3 12 and geo metri c rnultipliciry, 325, 400 and co mp lex eigenvalue , 349 Algorithm, 23 Angle, 186 and orthogonal tra nsformations, 208 Argument (o f a co mplex number), 343 of a produ ct, 345 Associati ve law for matri x multiplicati on 104 A ymptoti ca lly table equilibrium, 358 ' (see table equilibri um) Augmented rn atri x, 14

B Basi , 15 I , 482 and coord inates, 377 and dimen ion 162 482 and un iqu e representation, 154 find ing ba i of a li near space, 484 of an i mage, 167 of a kernet, 164 of ~~~ . 169 standard , 163 Binm·y di gits, 143 , 35 1 Bounded trajecto ry, 367

c Carbon dat ing 45 1 Cauchy-Schwarz inequ ality, 186, 51 5 and angles, 187 and triangle inequ ality, 192 Center of mass, 3 1 Characteristi c polynomi al, 311 and algebraic multiplicity 312 and it deri va ti ve, 320

of linear differential operator 532 of imil ar matrices, 393 Chole ky factorization, 423 Ci rculant rnatri x, 355 Cla ica l adjoin t 285 Closed-forrnul a solution for discrete dynamica l svstem 30 1 fo r in verse, 285 ' for Ieas t-squares approximat ion, 223 for linear system, 283 Codomain (of a function), 55 Coefficient marrix of a linear y tem, 14 of a linear transformaüon, 53 Colu mn of a matrix 12 Colu mn pace of a matrix, 132 Column vector, 13 Cornmuting matri ces, 102 Complement , 174 Comp lex eigenvalues. 348 and determinant, 350 and rotation-di lati on matrice , 359, 36 1, 380. 465 and stable equitibrium, 359 and trac . 350 Camplex num bers, 34 1 and rotati on-dilati ons, 345. 503, 5 10 in polar fo rm, 344 Complex-valued functions. 458 deri va ti ve of, 459 ex pon nti aL 460 Component of a veclor. 13 Compo ite fun cti ons, 97 Cornputati onal co mplexity, 94, 122 Consistent linear system, 33 Continuous lea t-squ ares cond iti on. 2 6 5 16 Continuous linear dynami cal syst m, 442 stability of. 463 with cornpl ex eigenbasis. 463 with rea l eigenbasis, 449 Coordinate ·, 377 496

1-1 .

1-2 •

Subj ect Index •

ubject Index

Coordinate tran formation. 497 Coordinat.e vector, 377 , 496 Correlation (coefficient , 188, 189, 190 Cramer' Rule, 283 , 28Cro product in JR3 , 65 , 554 in IR" , 269 Cubic equation (Ca.rclano's formula) , 320, 356 Cubic spbnes, 27

D Data compre.ssion, 433 Data fitting, 225 multi ariate, 228 De Moivre' s forrnula 346 Deterrninant, 243 a.nd characteri tic polynorn:ial 311 a.nd complex eigenvalues, 350 a.nd Cra.mer' s rule, 283 and elementa.ry row operations, 256 and invertibibty, 257 a.nd Laplace expansion. 262 a.nd QR factorization , 277 a a.rea, 274 a.s expansion factor , 279 a volume, 276 i linear in row and column , 252, 253 of inverse, 259 of orthogonal matrix, 273 of permutation matrix, 248 of product. 258, 272 of rotation matrix, 273 of sirn:ila.r matrices. 393 of 3 x 3 matrix. 240, 276 of tra.nspose, 251 of triangular matrix, 246 of 2 x 2 matrix, 239, 274 Vandermonde, 266 Diagonabzahle matrices, 388, 389 ancl eigenbases, 388 and power 391 orthogonall y, 402, 408 imultaneously, 400 Diagonal matrix, 13

Diagonal of a matrix, 13 Dilati on 7 1 Dimension. 162, 48 and isomorphi m, 488 of imag , 167, 506 of kerne! , 165 506 of orthogonal complement, 220 Direction field , 443 Discrete linear dynamical sy tem, 301 and complex eigenva lue. , 359 36 1, 363 and table quibbrium. 358, 359 Di tance, 51 4 Distributive Laws, 106 Doma.in of a function. 55 Dominant eigenvector 448 Dot product 28, 2 11 , 55 1 and matrix product, 103, 2 11 a.nd prod uct Ax, 41 rules for, 552 Dyn amical system (see co ntinuous, di crete bnear dynarnical system)

E Eigenbasi , 326, 490 a.nd continuous dynamical ystem , 449 , 463 and di agon alization, 388 and di crete dynamical y tem , 30 I and di tinct eigen value , 329 and geometric multiplicity , 330 Eigenfunction , 532 Eigenspaces, 321 , 490 and geometric multiplicity, 325 and principal axes, 420 Eigenvalue(s), 298, 490 algebraic multiplicity of, 312 and characteristic polynomia.l, 308 and determinant, 350 and positive (semi)definite matrices, 417 and QR factorization, 316 and shear-dilations, 395 and shears, 393 and singular values, 425 and stable equilibrium, 359, 463

and Lrace, 350 co mpl ex, 349, 350, 36 1 geo metri c rnulti plic ity of, 325 or: orthogonal matri x, 300 ot r? tati on-dil ati on matri x, 359 of w ntlar matrices, 393 of ymmetric matri x, 406 of tri angul ar matri x, 309 . power method fo r findin g, 353 E1 genvectors 298 and linear independence 329 dominant, 448 ' of ymmetri c matrix, 403 Elernentary matri x, 115 Elementary row operati.ons, 2 1 and determin ant 256 and e lementary matrices l J 5 Ellip e, 85 a image of the unü circle, 85, 408, 427 as lev.el curve of qu adratic fo rm 418, 420 a traJectory, 363 370, 46 5 Equilibrium (state), 358, 368 Error (in Ieast-squares Solution), 221 Error-correcting codes, 144 Euler identities, 51 8 519 Eu ler' · forrnu1a , 461 Euler' theorem , 335 , 386 Exotic ?perations (in a linear space), 496 Expcu1SIOn factor, 279 Exponenti al function , 440 complex-va1ued , 460

Flo w line, 443 Fo uri~r anal y is, 236, 51 8, 520 Functlon 55 Fundamenta l theorem of algebra, 347

G Gaussian integration, 527 Gauss-Jordan elimination 22 and determinant, 256, 257 and inverse, 91 flow chart for, 24 Geometrie m~ltiplicity of an eigenva]ue, 325 and a~gebrcuc multiplicity, 325, 400 and e1genba es 330 Golden section, 340 Gram-Schmidt process, 201 , 515 and determinant, 273 and orthogonal diagonalization, 408 and QR factorization, 202

H Ha.rmonics, 520 Hilbert space e2 , 195, 513 and qua.ntum mechanics, 523 Hornogeneou linear system 48 Hyperbola, 420 Hyperplane, 173

I

F Factorization ' Cholesky , 423 LDC , 2 18, 424 LDU , 116, 218 L{r , 423 LU , 116, 250 QR , 202, 217, 277, 316, 424 sos - 1, 39 .1 SDS T, 402 UL.VT , 43 1 Field, 348, 351

Identity matrix /11 , 57 Identity tra.nsformation, 57 Image of a function , 128 Image of a li near Iransformation ' .1-32'486 and rank, 167 i a subspace, 133, 146 ort~ogonal complement of, 2 19 wntten as a kerne!, 143 Image (of a ubset), 67 of the unit circle, 85. 408 427 of the unit square. 68 lmag.i i~ary part of a complex number, 342 Imphcit function theorem, 272 .

I -3

I -4 •

Subject Index

Inconsistent linear sy tem, 6, 21, 33, 42 and lea t- quares , 221 Indefinite matrix, 416 Infi nite-dimensional linear space, 486 Inner product (space). 511 Input-output analysis, 7, 29, 96, 120, 386 Intermediate value tbeorem, 84 Inter ection of subspaces, 156 dimension of, 174 lnver e (of a matrix), 88, 91 and Cramer' s rule, 285 determinant of, 259 of an orthogonal matrix, 212 of a product. 105 of a 2 x 2 matrix, 61. 91 of a transpo e, 213 Inversion (in a matrix), 243 Invertible function. 86 lnvertible matrix, 88, 89 and kernel. 138 and deternunant, 257 Isomorphism, 487 and dimension, 488

J Jordan normal form, 393

K Kemel, 134 486 and invertibility, 138 and Linear independence, 154 and rank, 165 dimension of, 165, 167, 506 i a subspace, 138, 146, 486 k-parallelepiped, 276 k-volume 276

L Lap lace expansion, 262

LDLT factorization , 218, 424 LDU factorization , 116, 218

Leading one, 19 Leading variable 20 Lea t- quares o lution , 23, 195 222 and normal equation, 223 minimal 231 Left inver e, 159 Length of a vector, 177, 552. and orthogona l transformatJOn . 207 Linear combiJ1ation, 38, 479 and span, 131 Linear dependence 151 Linear differential eq uation, 529 homogeneaus 529 order of, 529 solution of, 531 solvi ng a first order, 440, 538 solving a second order 534, 535 Lmear differenria l operator, 528 cbaracteristic polynomial of, 532 eigenfunctions of, 532 kerne) of, 531 image of, 539 Linear independence in lR", 151 in a linear space, 482 and dimension, 164 and kemel, 154 and relations, 152 of eigenvectors 329 of orthonorma1 vector 179 Linearity of the determinant, 252, 253 Linear relation , 152 Linear space(s), 479 basis of, 482 dimension of, 482 finding basis of, 484 mfinite-dimensional, 486 isomorphic, 487 Linear system clo ed-formula solution for, 283 consistent, 33 homogeneous, 48 inconsistent, 6, 33 Ieast-squares solutions of, 222 matrix form of, 39

minimal so lution of, 231 nurnber of so lutions of, 33 of differential equations: see continuous linear dynamical system unique so lu tion of 36 vector form of, 37 with fewer equations than unknowns 35 Linear tran sforrnation , 56, 66, 486 ' image of, 132, 486 kerne! of, 134, 486 matrix of, 59, 378, 501 , 502 Lower triangular matrix , 13 LU factorizat ion, 116 250 and pri ncipal submatrices, 117

M Main diagonal of a matrix, 13 Mass- pring ystem, 472 Matrix, 12 (for a composite entry such as ' zero matrix" see zero) Minimal so lution of a linear , ystem, 23J Minor of a matrix , 260 Modulus (of a complex number), 343 of a product 345 Momentum, 31 Multiplication (of matrices), 101 and determinant, 258 column by co lumn , 102 entry by entry, 103 is associative, I 04 i. noncommutative, 102 of partitioned matrice , 107

N Negative feedback loop, 470 Neutral eJement, 479 Nilpotent matrix , 176, 412 Norm of a vector, 177 in an inner product space, 5 14 Normal equation, 223

Subject Index •

0 Orthogonal complemen t.., 179 dimension of 220 is a subspace, 180 of an image, 219 Orthogonally diagonalizable matrix. 402, 408 Orthogonal matrix, 207, 212 determinant of, 273 eigenvalues of, 300 has orthonormal columns. 209 in verse of, 212 transpo e of, 212 011hogona1 projection, 76, 181 , 515 and Gram-Schmidt proces , 201 and reflect:ion, 76 as close t vector, 221 matrix of, 214, 224 Orthogonal transformat:ion , 207 and Ot1honorma1 bases, 209 preserve angles, 208 preserves dot product 2 J 5 Orthogonal vectors. 177, 514, 553 and Pythagorean tbeorem, 185 Orthonorma1 bases, 180, 183 and Gram-Schmidt process, 201 and orthogonal transformations, 209 and symmetric matrice , 402 Ot1hononnal vectors. 178 are linearly mdependent, 179 0 cillator. 509

P
Parallelepiped, 275
Parallel vectors, 550
Parametrization (of a curve), 129
Partitioned matrices, 107
Pattern (in a matrix), 242
Permutation matrix, 94
  determinant of, A8
Perpendicular vectors, 177, 514, 553
Phase portrait
  of discrete system, 296, 301
  of continuous system, 445, 450
  summary, 468
Piecewise continuous function
Pivot columns, 167
Pivoting, 23
Polar form (of a complex number), 344
  and powers, 346
  and products, 345
  with Euler's formula, 461
Positive (semi)definite matrix, 416
  and eigenvalues, 417
  and principal submatrices, 417
Positively oriented basis
Power method for finding eigenvalues
Principal axes, 420
Principal submatrices, 117
  and positive definite matrices, 417
Product Ax, 38
  and dot product, 41
  and matrix multiplication, 10
Projection (see also orthogonal projection)
Pseudo-inverse, 230, 231
Pythagorean theorem, 185, 515, 553

Q
QR factorization, 202
  and Cholesky factorization, 424
  and determinant, 277
  and eigenvalues, 316
  is unique, 217
Quadratic forms, 415, 416
  indefinite, 416
  negative (semi)definite, 416
  positive (semi)definite, 416
  principal axes for, 420
Quaternions, 218, 369, 510
  form a skew field, 355

R
Rank (of a matrix), 34, 167
  and image, 167
  and kernel, 165
  and row space, 175
  and singular values, 429
  of similar matrices, 393
  of transpose, 213
Real part of a complex number, 342
Reduced row echelon form (rref), 19
  and determinant, 257
  and inverse, 89, 91
  and rank, 34
Reflection, 77, 112
Regular transition matrix, 31, 352
Relations (among vectors), 152
Resonance, 509
Riemann integral, 236, 512
Rotation matrix, 57, 70, 81, 112, 273
Rotation-dilation matrix, 72
  and complex eigenvalues, 359, 361, 380, 465
  and complex numbers, 345, 503, 510
Row of a matrix, 12
Row space, 175
Row vector, 13
Rule of 69, 451

S
Sarrus' rule, 240
  does not generalize, 242
Scalar, 15
Scalar multiple of matrices, 42
Second derivative test, 424
Separation of variables, 439
Shears, 73
  and eigenvalues, 393
Shear-dilation, 395
Similar matrices, 392
  and characteristic polynomial, 393
Simultaneously diagonalizable matrices, 400
Singular value decomposition (SVD), 431
Singular values, 425
  and ellipses, 427
  and rank, 429
Skew field, 355
Skew-symmetric matrices, 216, 421
  determinant of, 268
Smooth function, 477
Space, 146
  of functions, 479
Span
  in ℝⁿ, 131, 132
  in a linear space, 482
Spectral theorem, 402
Spirals, 346, 360, 363, 465
Square matrix, 13
Square-summable sequences, 194, 513
Stable equilibrium, 358
  of discrete system, 359
  of continuous system, 463
Standard basis, 163, 497, 499
State vector, 291
Subspaces
  of ℝⁿ, 146
  of a linear space, 481
Sum of subspaces, 160
  dimension of, 174
Sum of matrices, 42
Symmetric matrices, 211
  are orthogonally diagonalizable, 402
  have real eigenvalues, 406

T
Tetrahedron, 114, 286, 385
Theorem of Pythagoras, 185, 515, 553
Trace, 310
  and characteristic polynomial, 311
  and complex eigenvalues, 350
  and inner product, 513
  of similar matrices, 393
Trajectory, 295, 442
  bounded, 367
Transpose (of a matrix), 211
  and determinant, 251
  and inverse, 213
  and rank, 213
  of an orthogonal matrix, 212
  of a product, 213
Triangle inequality, 192
Triangular matrices, 13
  and determinant, 246
  and eigenvalues, 309
  invertible, 93
Triangulizable matrix, 412
Trigonometric polynomials, 518

U
Unit vector, 177, 553
Upper triangular matrix, 13

V
Vandermonde determinant, 266
Vector(s), 13
  addition, 546
  column, 13
  coordinate, 377, 496
  cross product of, 65, 269, 554
  dot product of, 551
  geometric representation of, 548
  length of, 177, 552
  orthogonal, 177, 553
  parallel, 550
  position, 548
  row, 13
  rules of vector algebra, 547
  scalar multiplication, 547
  standard representation of, 548
  unit, 177, 553
  velocity, 442
  vs. points, 549
  zero, 547
Vector field, 443
Vector form of a linear system, 37
Vector space, 479 (see linear space)
Velocity vector, 442

Z
Zero matrix, 13
Zero vector, 547


NAME INDEX

A
Abel, Niels Henrik, 316
Al-Khowarizmi, Mohammed, 1, 23
Archimedes, 341
Argand, Jean Robert, 342

C
Cardano, Gerolamo, 316, 320
Cauchy, Augustin-Louis, 186, 244
Cramer, Gabriel, 283

D
D'Alembert, Jean Le Rond, 163
De Moivre, Abraham, 346
Descartes, René, 498
De Witt, Johan, 416

E
Einstein, Albert, 356
Euler, Leonhard, 11, 335, 342, 346, 461, 518, 523

F
Fibonacci, 339
Fourier, Joseph, 236, 518

G
Galois, Evariste, 316
Gauss, Carl Friedrich, 22, 62, 342, 346, 527
Gram, Jörgen, 201
Grassmann, Hermann Günther, 163

H
Hamilton, William Rowan, 218, 355
Heisenberg, Werner, 195, 523
Hilbert, David, 195

J
Jacobi, Carl Gustav Jacob, 244
Jordan, Camille, 393
Jordan, Wilhelm, 22

K
Kepler, Johannes, 238
Kronecker, Leopold, 244

L
Lagrange, Joseph-Louis, 369
Laplace, Pierre Simon, 259
Leonardo da Vinci, 193
Leonardo of Pisa, 339
Leontief, Wassily, 7, 386

M
Mas-Colell, Andreu, 195
Mumford, David, 476

O
Olbers, Wilhelm, 23

P
Peano, Giuseppe, 479
Peirce, Benjamin, 461
Piazzi, Giuseppe, 22
Pythagoras, 184

R
Riemann, Bernhard, 236

S
Sarrus, Pierre Frédéric, 240
Schläfli, Ludwig, 163
Schmidt, Erhardt, 201
Schrödinger, Erwin, 523
Schwarz, Hermann Amandus, 186
Seki Kowa, 243
Strassen, Volker, 122
Sylvester, James Joseph, 12

V
Vandermonde, Alexandre-Théophile, 244, 266

W
Weierstrass, Karl, 272
Wessel, Caspar, 342
Weyl, Hermann, 163



