deercrossing


Notes on Linear Algebra Done Right

note: incomplete and may never be finished. also the formatting is bad because i just ripped this off of my notion

Chapter 1—Vector Spaces

1A

  • Definition of complex numbers, sets, tuples
  • $\mathbf{F}$ stands for $\mathbb{R}$ or $\mathbb{C}$ (fields)
  • $\mathbf{F}^n$ defined as the set of all lists of length $n$ of elements of $\mathbf{F}$

1B Definition of Vector Spaces

  • Vector space is a set $V$ with addition and scalar multiplication defined s/t:
    • addition commutes + associative
    • additive & multiplicative id.
    • additive inverse
    • distributive property
  • If $S$ a set, $\mathbf{F}^S$ is the set of functions from $S$ to $\mathbf{F}$ & is a vector space
  • Additive id. and inv. in a v.s. are unique
  • $V$ denotes a vector space over $\mathbf{F}$

1C Subspaces

  • For $U \subset V$: $U$ is a subspace of $V$ if $U$ is a vector space with the same additive id. and operations as on $V$.

    • In other words, iff $U$ satisfies $0 \in U$, closure under addition, and closure under scalar mult
  • Sum of subspaces: for $V_1, \dots, V_m$ subspaces of $V$,

    $$V_1 + \dots + V_m = \{v_1 + \dots + v_m : v_1 \in V_1, \dots, v_m \in V_m\}$$

    and is the smallest subspace of $V$ containing $V_1, \dots, V_m$.

  • For $V_1, \dots, V_m$ subspaces of $V$,

    $V_1 + \dots + V_m$ is a direct sum if each element can be written in only one way as a sum $v_1 + \dots + v_m$ where each $v_k \in V_k$; it is then denoted $V_1 \oplus \dots \oplus V_m$

    Direct sum iff the only way to write $0$ as a sum $v_1 + \dots + v_m$ is by taking each $v_k = 0$

  • Suppose $U, W$ subspaces of $V$. Then $U + W$ is a direct sum $\iff U \cap W = \{0\}$.
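The two-subspace criterion can be checked concretely for 1-dimensional subspaces of $\mathbf{F}^3$: $\operatorname{span}(u) + \operatorname{span}(w)$ is a direct sum exactly when $u$ and $w$ are not parallel. A minimal pure-Python sketch (the helper `parallel` is my own, not from the book):

```python
# For 1-dimensional U = span(u), W = span(w), U ∩ W = {0} iff u and w are
# not scalar multiples of each other, i.e. some 2x2 minor is nonzero.

def parallel(u, w):
    """True if u and w are scalar multiples (all 2x2 minors vanish)."""
    n = len(u)
    return all(u[i] * w[j] - u[j] * w[i] == 0
               for i in range(n) for j in range(i + 1, n))

u = (1, 0, 0)
w = (0, 1, 0)
print(not parallel(u, w))          # True: span(u) + span(w) is a direct sum
print(not parallel(u, (2, 0, 0)))  # False: the spans coincide
```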


Chapter 2—Finite Dimensional Vector Spaces

2A Span and Linear Independence

  • Defines linear combination, span
  • Span of a list of vectors in $V$ is the smallest subspace of $V$ containing all vectors in the list.
  • Vector space is finite dimensional if some list of vectors in it spans the space
  • Defines polynomials over $\mathbf{F}$, $\mathcal{P}(\mathbf{F})$, and polynomials of degree at most $m$ over $\mathbf{F}$, $\mathcal{P}_m(\mathbf{F})$.
  • A vector space is infinite dimensional if it is not finite dimensional.
  • A list of vectors in $V$ is linearly independent if the only choice of $a_1, \dots, a_m \in \mathbf{F}$ s/t $a_1 v_1 + \dots + a_m v_m = 0$ is $a_1 = \dots = a_m = 0$
  • Suppose $v_1, \dots, v_m$ linearly dependent in $V$. Then there exists $k \in \{1, 2, \dots, m\}$ s.t. $v_k \in \operatorname{span}(v_1, \dots, v_{k-1})$.
    • If the $k$-th term is removed from $v_1, \dots, v_m$, then the span of the remaining list equals the span of the original list
  • Every subspace of a finite dimensional vector space is finite dimensional
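The "removing a dependent vector keeps the span" fact can be checked numerically by comparing span dimensions. A sketch with a hand-rolled exact rank function (my own helper, assuming rational entries):

```python
from fractions import Fraction

def rank(rows):
    """Row rank via Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# v3 = v1 + v2, so the list is dependent; dropping v3 leaves the span intact.
v1, v2, v3 = [1, 0], [0, 1], [1, 1]
print(rank([v1, v2, v3]) == rank([v1, v2]))  # True: both spans have dimension 2
```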

2B Bases

  • Defines basis
  • A list $v_1, \dots, v_n$ of vectors in $V$ is a basis of $V$ iff every $v \in V$ can be written uniquely in the form $v = a_1 v_1 + \dots + a_n v_n$, where $a_1, \dots, a_n \in \mathbf{F}$.
  • Every spanning list contains a basis
  • Every finite dimensional vector space has a basis
  • Every linearly independent list of vectors in a finite dimensional vector space can be extended to a basis of the vector space
  • Every subspace of $V$ is part of a direct sum equal to $V$.
    • i.e., sps. $V$ finite dimensional and $U$ a subspace of $V$. Then there exists a subspace $W$ of $V$ s.t. $V = U \oplus W$.
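The unique representation in a basis can be computed explicitly in $\mathbf{F}^2$. A sketch using Cramer's rule for a 2×2 system (`coords` is my own helper name):

```python
# Write v uniquely as a*b1 + c*b2 for the basis b1 = (1,1), b2 = (1,-1) of R^2.
def coords(v, b1, b2):
    """Solve a*b1 + c*b2 = v by Cramer's rule (assumes b1, b2 independent)."""
    det = b1[0] * b2[1] - b2[0] * b1[1]
    a = (v[0] * b2[1] - b2[0] * v[1]) / det
    c = (b1[0] * v[1] - v[0] * b1[1]) / det
    return a, c

a, c = coords((3, 1), (1, 1), (1, -1))
print(a, c)  # 2.0 1.0, since (3,1) = 2*(1,1) + 1*(1,-1)
```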

2C Dimension

  • Any two bases of a finite dimensional vector space have the same length
  • Dimension of a finite-dimensional vector space is the length of any basis of the vector space, denoted $\dim V$
  • Dimension of a subspace of a finite dimensional vector space is at most that of the original vector space
  • $V$ finite dim. v.s. Then every linearly ind. list of vectors in $V$ of length $\dim V$ is a basis of $V$.
  • $V$ finite dim v.s. and $U$ a subspace of $V$ s.t. their dimensions are equal. Then $V = U$.
  • Every spanning list of vectors of $V$ of length $\dim V$ is a basis of $V$
  • If $V_1, V_2$ are subspaces of a finite dim v.s., then $\dim(V_1 + V_2) = \dim V_1 + \dim V_2 - \dim(V_1 \cap V_2)$.
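The dimension formula can be sanity-checked on concrete subspaces of $\mathbf{F}^3$; here $V_1 \cap V_2 = \operatorname{span}(e_2)$ by inspection, so the right-hand side is $2 + 2 - 1 = 3$. A sketch with a hand-rolled exact rank function (my own helper, not from the book):

```python
from fractions import Fraction

def rank(rows):
    """Row rank via Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

V1 = [[1, 0, 0], [0, 1, 0]]   # span(e1, e2)
V2 = [[0, 1, 0], [0, 0, 1]]   # span(e2, e3); V1 ∩ V2 = span(e2), dimension 1
dim_sum = rank(V1 + V2)       # dim(V1 + V2): rank of the stacked spanning lists
print(dim_sum == rank(V1) + rank(V2) - 1)  # True: 3 == 2 + 2 - 1
```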

Chapter 3—Linear Maps

3A Vector Space of Linear Maps

  • Definition of a linear map:
    • a linear map from $V$ to $W$ is a function $T: V \to W$ with the following properties:
      • additivity: $T(u+v) = T(u) + T(v)$ for all $u, v \in V$
      • homogeneity: $T(\lambda v) = \lambda (Tv)$ for all $\lambda \in \mathbf{F}$ and all $v \in V$
  • Set of linear maps from $V$ to $W$ denoted $\mathcal{L}(V, W)$
    • from $V$ to $V$ is $\mathcal{L}(V)$
  • Suppose $v_1, \dots, v_n$ a basis of $V$ and $w_1, \dots, w_n \in W$. Then there exists a unique linear map $T: V \to W$ such that $Tv_k = w_k$ for each $k = 1, \dots, n$.
  • Linear maps are closed under addition and scalar multiplication (i.e., summing two linear maps is still a linear map) defined as
    • $(S+T)(v) = Sv + Tv$, $(\lambda T)(v) = \lambda (Tv)$
  • $\mathcal{L}(V, W)$ is a vector space with the operations defined above

3B Null Spaces and Ranges

  • For $T \in \mathcal{L}(V, W)$, the null space of $T$ is the subset of $V$ whose vectors map to $0$ under $T$:
    • $\operatorname{null} T = \{v \in V : Tv = 0\}$
  • Null space is a subspace (above, $\operatorname{null} T$ is a subspace of $V$)
  • A function $T: V \to W$ is injective if $Tu = Tv \implies u = v$
  • $T \in \mathcal{L}(V, W)$. Then $T$ injective iff $\operatorname{null} T = \{0\}$.
  • For $T \in \mathcal{L}(V, W)$, the range of $T$ is the subset of $W$ consisting of the vectors equal to $Tv$ for some $v \in V$:
    • $\operatorname{range} T = \{Tv : v \in V\}$
  • The range is a subspace (above, $\operatorname{range} T$ is a subspace of $W$)
  • A function $T: V \to W$ is surjective if its range equals $W$
  • Fundamental theorem of linear maps
    • sps. $V$ finite dimensional and $T \in \mathcal{L}(V, W)$. Then $\operatorname{range} T$ is finite dimensional and $\dim V = \dim \operatorname{null} T + \dim \operatorname{range} T$.
  • Sps. $V, W$ finite dim. v.s. s.t. $\dim V > \dim W$. Then no linear map from $V$ to $W$ is injective.
  • A homogeneous system of linear equations with more variables than equations has nonzero solutions
  • A system of linear equations with more equations than variables has no solution for some choice of the constant terms
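The fundamental theorem can be illustrated on a concrete map $T: \mathbf{F}^3 \to \mathbf{F}^2$ given by a matrix. A sketch (the matrix, the null vector, and the helper `apply` are my own illustration; the null-space and range dimensions are checked by inspection):

```python
A = [[1, 0, 1],
     [0, 1, 1]]   # a linear map T : F^3 -> F^2

def apply(A, v):
    """Apply the matrix A to the vector v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# (1, 1, -1) spans null T (a 1-dimensional subspace, by inspection), and the
# columns (1,0), (0,1) already span F^2, so dim range T = 2.
print(apply(A, [1, 1, -1]))  # [0, 0]
print(3 == 1 + 2)            # dim V = dim null T + dim range T
```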

3C Matrices

  • Definition of a matrix
  • Definition of matrix of a linear map:
    • Sps. $T \in \mathcal{L}(V, W)$ and $v_1, \dots, v_n$ a basis of $V$ and $w_1, \dots, w_m$ a basis of $W$. The matrix of $T$ w.r.t. these bases, $\mathcal{M}(T)$, is the $m \times n$ matrix whose entries $A_{j,k}$ are defined by $Tv_k = A_{1,k} w_1 + \dots + A_{m,k} w_m$

  • If $T$ is a linear map from $\mathbf{F}^n$ to $\mathbf{F}^m$, assuming standard bases, we can think of elements of $\mathbf{F}^m$ as columns of $m$ numbers and the $k$-th column of $\mathcal{M}(T)$ as $T$ applied to the $k$-th standard basis vector

  • For the rest of the section: assume $U, V, W$ finite-dim. and with a chosen basis

    • Defines matrix addition
      • In particular, matrix of the sum of linear maps:
        • $S, T \in \mathcal{L}(V, W)$. Then $\mathcal{M}(S+T) = \mathcal{M}(S) + \mathcal{M}(T)$.
    • Defines scalar multiplication of matrix
      • Similarly, matrix of scalar times linear map:
        • $T \in \mathcal{L}(V, W)$ and $\lambda \in \mathbf{F}$. Then $\mathcal{M}(\lambda T) = \lambda \mathcal{M}(T)$.
    • Notation: for $m, n$ positive integers, the set of all $m \times n$ matrices with entries in $\mathbf{F}$ is denoted $\mathbf{F}^{m,n}$
      • With addition and scalar multiplication defined as above, $\mathbf{F}^{m,n}$ is a vector space of dimension $mn$.
  • Defines matrix multiplication

    • Motivation: matrix of a product of linear maps
      • If $T \in \mathcal{L}(U, V)$ and $S \in \mathcal{L}(V, W)$, then $\mathcal{M}(ST) = \mathcal{M}(S)\mathcal{M}(T)$
      • (Can use this motivation for understanding when matrices commute, when function composition retains certain properties)
    • Ways to think about matrix product entries:
      • entry in row $j$, column $k$ of $AB$ is row $j$ of $A$ times column $k$ of $B$
      • column $k$ of $AB$ equals $A$ times column $k$ of $B$
      • if $A$ is $m \times n$ and $b$ is $n \times 1$ with entries $b_1, \dots, b_n$,
        • $Ab = b_1 A_{\cdot,1} + \dots + b_n A_{\cdot,n}$

        • i.e., a linear combination of the columns of $A$ with the entries of $b$ as coefficients

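The linear-combination view of $Ab$ can be checked directly against the usual row-by-row product. A minimal pure-Python sketch (helper names `matvec` and `column_combo` are my own):

```python
def matvec(A, b):
    """Standard row-by-row matrix-vector product."""
    return [sum(a * x for a, x in zip(row, b)) for row in A]

def column_combo(A, b):
    """The same product viewed as sum_k b_k * (column k of A)."""
    m, n = len(A), len(A[0])
    out = [0] * m
    for k in range(n):
        for i in range(m):
            out[i] += b[k] * A[i][k]
    return out

A = [[1, 2], [3, 4], [5, 6]]
b = [10, 1]
print(matvec(A, b) == column_combo(A, b))  # True: both give [12, 34, 56]
```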

  • Column-row factorization & rank of matrices

    • Definition of column and row rank:
      • $A$ an $m \times n$ matrix with entries in $\mathbf{F}$
        • Column rank is the dimension of the span of the columns of $A$ in $\mathbf{F}^{m,1}$ (so at most $n$)
        • Row rank is the dimension of the span of the rows of $A$ in $\mathbf{F}^{1,n}$ (so at most $m$)
    • Definition of matrix transpose
    • Column-row factorization:
      • Suppose $A$ is an $m \times n$ matrix with entries in $\mathbf{F}$ and column rank $c \geq 1$. Then there exists an $m \times c$ matrix $C$ and a $c \times n$ matrix $R$, both with entries in $\mathbf{F}$, such that $A = CR$.
        • Pf. Each column of $A$ is an $m \times 1$ matrix. The list of columns of $A$ can be reduced to a basis of the span of the columns, a list of length $c$. These $c$ basis columns can be put together to form the $m \times c$ matrix $C$. Column $k$ of $A$ is a linear combination of the columns of $C$; make the coefficients of this linear combination into column $k$ of a $c \times n$ matrix, which we call $R$. Then $A = CR$.
      • Column rank = row rank = rank of a matrix
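The factorization can be exhibited by hand for a rank-1 matrix: the first column spans the column space, and $R$ holds the coefficients. A sketch (the matrices and `matmul` helper are my own example, not from the book):

```python
def matmul(A, B):
    """Plain triple-loop matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# A has column rank 1: its second column is twice the first.
A = [[1, 2],
     [2, 4]]
C = [[1],        # a basis of the column space, as the columns of C
     [2]]
R = [[1, 2]]     # coefficients expressing each column of A in that basis
print(matmul(C, R) == A)  # True: A = CR
```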

3D Invertibility and Isomorphisms

  • Defines invertible, inverse:
    • A linear map $T \in \mathcal{L}(V, W)$ is invertible if there exists a linear map $S \in \mathcal{L}(W, V)$ such that $ST$ equals the identity operator on $V$ and $TS$ equals the identity operator on $W$.
    • A linear map $S \in \mathcal{L}(W, V)$ satisfying $ST = I$ and $TS = I$ is called an inverse of $T$
    • An invertible linear map has a unique inverse
  • Invertibility $\iff$ injectivity and surjectivity
  • Sps. $V, W$ are finite-dim v.s. with $\dim V = \dim W$ and $T \in \mathcal{L}(V, W)$.
    • Then $T$ invertible $\iff$ $T$ injective $\iff$ $T$ surjective.
    • Sps. also that $S \in \mathcal{L}(W, V)$. Then $ST = I \iff TS = I$.
  • An isomorphism is an invertible linear map.
    • Two finite-dim vector spaces are called isomorphic if there is an isomorphism from one vector space onto the other one
  • $\mathcal{L}(V, W) \cong \mathbf{F}^{m,n}$ (where $m = \dim W$, $n = \dim V$)
  • $\dim \mathcal{L}(V, W) = (\dim V)(\dim W)$
  • Linear maps thought of as matrix multiplication
    • Matrix of a vector:
      • sps. $v \in V$ and $v_1, \dots, v_n$ a basis of $V$. Then the matrix of $v$ w.r.t. this basis is the $n \times 1$ matrix $\mathcal{M}(v) = \begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix}$, where $b_1, \dots, b_n$ are scalars s.t. $v = b_1 v_1 + \dots + b_n v_n$
    • $\mathcal{M}(T)_{\cdot,k} = \mathcal{M}(Tv_k)$ (if we have a linear map from $V$ to $W$, each v.s. with chosen basis, then the $k$-th column of $\mathcal{M}(T)$ equals $\mathcal{M}(Tv_k)$)
      • i.e., each column of a linear map represented as a matrix is the linear map's action on the corresponding basis vector in the domain, represented as a matrix
    • Hence linear maps act like matrix multiplication:
      • Suppose $T \in \mathcal{L}(V, W)$ and $v \in V$. Suppose $v_1, \dots, v_n$ a basis for $V$ and $w_1, \dots, w_m$ a basis for $W$. Then $\mathcal{M}(Tv) = \mathcal{M}(T)\mathcal{M}(v)$.
        • Pf. Sps. $v = b_1 v_1 + \dots + b_n v_n$, with coefficients in $\mathbf{F}$. Then $Tv = b_1 Tv_1 + \dots + b_n Tv_n$ (by linearity). Hence $\mathcal{M}(Tv) = b_1 \mathcal{M}(Tv_1) + \dots + b_n \mathcal{M}(Tv_n)$ (from the above). This equals $b_1 \mathcal{M}(T)_{\cdot,1} + \dots + b_n \mathcal{M}(T)_{\cdot,n} = \mathcal{M}(T)\mathcal{M}(v)$, as desired.
    • Sps. $V, W$ are finite-dim and $T \in \mathcal{L}(V, W)$. Then $\dim \operatorname{range} T$ equals the column rank of $\mathcal{M}(T)$.
      • Pf. Sps. $v_1, \dots, v_n$ a basis of $V$ and $w_1, \dots, w_m$ a basis of $W$. The linear map that takes $w \in W$ to $\mathcal{M}(w)$ is an isomorphism from $W$ onto $\mathbf{F}^{m,1}$. The restriction of this isomorphism to $\operatorname{range} T$ is an isomorphism from $\operatorname{range} T$ onto $\operatorname{span}(\mathcal{M}(Tv_1), \dots, \mathcal{M}(Tv_n))$. The $m \times 1$ matrix $\mathcal{M}(Tv_k)$ equals column $k$ of $\mathcal{M}(T)$.
  • Change of basis
    • Defines identity matrix
    • Defines invertible matrices (matrix inverse not introduced with det / computational method)
    • Suppose $T \in \mathcal{L}(U, V)$ and $S \in \mathcal{L}(V, W)$. If $u_1, \dots, u_m$ a basis of $U$ and $v_1, \dots, v_n$ a basis of $V$ and $w_1, \dots, w_p$ a basis of $W$, then $\mathcal{M}(ST, (u_1, \dots, u_m), (w_1, \dots, w_p)) = \mathcal{M}(S, (v_1, \dots, v_n), (w_1, \dots, w_p))\,\mathcal{M}(T, (u_1, \dots, u_m), (v_1, \dots, v_n))$.
      • Essentially the same result as linear map composition and matrix mult., with explicit bases
    • Suppose $u_1, \dots, u_n$ and $v_1, \dots, v_n$ are bases of $V$. Then the matrices $\mathcal{M}(I, (u_1, \dots, u_n), (v_1, \dots, v_n))$ and $\mathcal{M}(I, (v_1, \dots, v_n), (u_1, \dots, u_n))$ are inverses of each other
    • Change of basis formula:
      • Shorthand notation: $\mathcal{M}(T, (u_1, \dots, u_n)) = \mathcal{M}(T, (u_1, \dots, u_n), (u_1, \dots, u_n))$
      • Sps. $T \in \mathcal{L}(V)$. Sps $u_1, \dots, u_n$ and $v_1, \dots, v_n$ are bases of $V$. Let $A = \mathcal{M}(T, (u_1, \dots, u_n))$ and $B = \mathcal{M}(T, (v_1, \dots, v_n))$ and $C = \mathcal{M}(I, (u_1, \dots, u_n), (v_1, \dots, v_n))$. Then $A = C^{-1}BC$.
    • Suppose $v_1, \dots, v_n$ a basis of $V$ and $T \in \mathcal{L}(V)$ invertible. Then $\mathcal{M}(T^{-1}) = (\mathcal{M}(T))^{-1}$, where both matrices are w.r.t. the basis $v_1, \dots, v_n$.
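The change of basis formula can be verified numerically on $\mathbf{F}^2$. Below, $T$ is the operator with standard-basis matrix $A = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$; $B$ and $C$ were computed by hand for the basis $v_1 = (1,1)$, $v_2 = (1,-1)$ (this concrete example and the helpers are my own sketch):

```python
from fractions import Fraction as F

def matmul(X, Y):
    """2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate."""
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ M[1][1] / d, -M[0][1] / d],
            [-M[1][0] / d,  M[0][0] / d]]

# A = M(T, (e1, e2)): T in the standard basis.
A = [[F(2), F(0)], [F(0), F(3)]]
# B = M(T, (v1, v2)) for v1 = (1,1), v2 = (1,-1), computed by hand.
B = [[F(5, 2), F(-1, 2)], [F(-1, 2), F(5, 2)]]
# C = M(I, (e1, e2), (v1, v2)): columns are the v-coordinates of e1, e2.
C = [[F(1, 2), F(1, 2)], [F(1, 2), F(-1, 2)]]
print(matmul(inv2(C), matmul(B, C)) == A)  # True: A = C^{-1} B C
```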

3E Products and Quotients of Vector Spaces

  • Product of vector spaces is a vector space
  • Dimension of a product is the sum of dimensions


  • Definition of a translate:

    • For $v \in V$ and $U \subset V$, the translate $v + U$ is the subset $v + U = \{v + u : u \in U\}$

  • Defines a quotient space:
    • Sps. $U$ a subspace of $V$. Then the quotient space $V/U$ is the set of all translates of $U$. Thus, $V/U = \{v + U : v \in V\}$
  • Want this quotient space to be a vector space.
    • Sps. $U$ a subspace of $V$ and $v, w \in V$. Then $v - w \in U \iff v + U = w + U \iff (v+U) \cap (w+U) \neq \emptyset$
      • i.e., two translates of a subspace are either equal or disjoint
    • Defining addition and scalar multiplication on $V/U$:
      • Sps. $U$ a subspace of $V$. Then addition and scalar multiplication are defined as:
        • $(v+U) + (w+U) = (v+w) + U$
        • $\lambda(v+U) = (\lambda v) + U$
    • With the operations above, $V/U$ is a vector space.
  • Defines quotient map:
    • $U$ a subspace of $V$. The quotient map $\pi : V \to V/U$ is the linear map defined by $\pi(v) = v + U$ for each $v \in V$.
  • $\dim V/U = \dim V - \dim U$
  • Sps. $T \in \mathcal{L}(V, W)$. Define $\tilde{T}: V / \operatorname{null} T \to W$ by $\tilde{T}(v + \operatorname{null} T) = Tv$.
    • $\tilde{T} \circ \pi = T$, where $\pi$ is the quotient map of $V$ onto $V / \operatorname{null} T$
    • $\tilde{T}$ is injective
    • $\operatorname{range} \tilde{T} = \operatorname{range} T$
    • $V / \operatorname{null} T$ is isomorphic to $\operatorname{range} T$
    • Hence we can think of $\tilde{T}$ as a modified version of $T$, with a domain on which it is injective.
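The translate criterion ($v + U = w + U \iff v - w \in U$) is easy to picture for $U = \operatorname{span}((1,0))$ in $\mathbf{R}^2$: translates are horizontal lines, and two translates agree exactly when the second coordinates match. A sketch (helper names are my own):

```python
# U = span((1, 0)) in R^2: a vector lies in U iff its second coordinate is 0.
def in_U(x):
    return x[1] == 0

def same_translate(v, w):
    """v + U == w + U  iff  v - w lies in U."""
    return in_U((v[0] - w[0], v[1] - w[1]))

print(same_translate((2, 3), (5, 3)))  # True:  both translates are the line y = 3
print(same_translate((2, 3), (2, 4)))  # False: different horizontal lines
```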

3F Duality

  • A linear functional on $V$ is a linear map from $V$ to $\mathbf{F}$ (an element of $\mathcal{L}(V, \mathbf{F})$)

  • The dual space of $V$, denoted $V'$, is the vector space of all linear functionals on $V$, i.e., $V' = \mathcal{L}(V, \mathbf{F})$

  • Sps. $V$ is finite dimensional. Then $V'$ is also finite-dimensional and $\dim V' = \dim V$.

    • Pf. $\dim V' = \dim \mathcal{L}(V, \mathbf{F}) = (\dim V)(\dim \mathbf{F}) = \dim V$.
  • If $v_1, \dots, v_n$ a basis of $V$, then the dual basis of $v_1, \dots, v_n$ is the list $\phi_1, \dots, \phi_n$ of elements of $V'$, where each $\phi_j$ is the linear functional on $V$ such that

    $$\phi_j(v_k) = \begin{cases} 1 & k = j \\ 0 & k \neq j \end{cases}$$

  • The dual basis of a basis of $V$ consists of the linear functionals on $V$ that give the coefficients for expressing a vector in $V$ as a linear combination of the basis vectors:

    • Sps. $v_1, \dots, v_n$ a basis of $V$ and $\phi_1, \dots, \phi_n$ is the dual basis. Then $v = \phi_1(v) v_1 + \dots + \phi_n(v) v_n$ for each $v \in V$.
  • For $V$ finite dimensional, the dual basis is a basis of the dual space.

  • Sps. $T \in \mathcal{L}(V, W)$. The dual map of $T$ is the linear map $T' \in \mathcal{L}(W', V')$ defined for each $\phi \in W'$ by $T'(\phi) = \phi \circ T$.
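The "dual basis gives the coefficients" fact can be checked concretely in $\mathbf{R}^2$. For the basis $v_1 = (1,1)$, $v_2 = (1,-1)$, the dual basis works out by hand to $\phi_1(x,y) = (x+y)/2$ and $\phi_2(x,y) = (x-y)/2$; a sketch verifying the reconstruction formula (this example is my own):

```python
# Basis v1 = (1,1), v2 = (1,-1) of R^2 and its dual basis, computed by hand:
v1, v2 = (1, 1), (1, -1)
phi1 = lambda x, y: (x + y) / 2   # phi1(v1) = 1, phi1(v2) = 0
phi2 = lambda x, y: (x - y) / 2   # phi2(v1) = 0, phi2(v2) = 1

v = (3, 7)
a, b = phi1(*v), phi2(*v)         # coefficients of v in the basis v1, v2
recon = (a * v1[0] + b * v2[0], a * v1[1] + b * v2[1])
print(recon == (3.0, 7.0))        # True: v = phi1(v) v1 + phi2(v) v2
```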

CH. 3 NOT DONEEEE


Chapter 5—Eigenvalues and Eigenvectors

Standing notation: $\mathbf{F}$ is the reals or the complex numbers, $V$ is a vector space over $\mathbf{F}$

Eigenvalues

  • A linear map from a vector space to itself is called an operator

  • $T \in \mathcal{L}(V)$. A subspace $U$ of $V$ is called invariant under $T$ if $Tu \in U$ for every $u \in U$

    • Motivation: to understand the behavior of a linear operator, we only need to understand its behavior on subspaces that together decompose the entire $V$. However, to apply many useful tools, we want the restriction of $T$ to such a subspace (say, $V_k$) to map back into $V_k$; hence we would like to study invariant subspaces

    • Thus $U$ is invariant under $T$ iff $T|_U$ is an operator on $U$

  • The simplest possible nontrivial invariant subspaces (other than $\{0\}$ and $V$) are invariant subspaces of dimension 1.

    • Take any $v \in V$ with $v \neq 0$ and let $U = \{\lambda v : \lambda \in \mathbf{F}\} = \operatorname{span}(v)$. $U$ is a 1-dimensional subspace of $V$ (and every 1-dimensional subspace of $V$ is of this form for some appropriate $v$).
    • If $U$ is invariant under an operator $T \in \mathcal{L}(V)$, then $Tv \in U$, and so there exists a scalar $\lambda \in \mathbf{F}$ such that $Tv = \lambda v$.
    • Conversely, if $Tv = \lambda v$ for some $\lambda \in \mathbf{F}$, then $\operatorname{span}(v)$ is a 1-dimensional subspace of $V$ invariant under $T$.
  • $T \in \mathcal{L}(V)$. A number $\lambda \in \mathbf{F}$ is an eigenvalue of $T$ if there exists $v \in V$ such that $v \neq 0$ and $Tv = \lambda v$.

    • For an eigenvalue $\lambda$, a corresponding vector $v \in V$ with $v \neq 0$ s.t. $Tv = \lambda v$ is an eigenvector
  • Every list of eigenvectors corresponding to distinct eigenvalues of $T \in \mathcal{L}(V)$ is linearly independent

  • For a finite-dimensional vector space $V$, each operator on $V$ has at most $\dim V$ distinct eigenvalues
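These definitions can be checked on a concrete operator on $\mathbf{F}^2$; for an upper-triangular matrix the eigenvalues are the diagonal entries, and the two eigenvectors below correspond to distinct eigenvalues, hence are linearly independent. A sketch (the matrix and eigenvectors are my own example):

```python
def apply(T, v):
    """Apply the matrix T to the vector v."""
    return [sum(t * x for t, x in zip(row, v)) for row in T]

T = [[2, 1],
     [0, 3]]
v1, lam1 = [1, 0], 2     # T v1 = 2 v1
v2, lam2 = [1, 1], 3     # T v2 = 3 v2
print(apply(T, v1) == [lam1 * x for x in v1])  # True
print(apply(T, v2) == [lam2 * x for x in v2])  # True
# Distinct eigenvalues, so v1, v2 are linearly independent (nonzero 2x2 det):
print(v1[0] * v2[1] - v1[1] * v2[0] != 0)      # True
```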