$\newcommand{\eps}{\varepsilon} \newcommand{\kron}{\otimes} \DeclareMathOperator{\diag}{diag} \DeclareMathOperator{\trace}{trace} \DeclareMathOperator{\rank}{rank} \DeclareMathOperator*{\minimize}{minimize} \DeclareMathOperator*{\maximize}{maximize} \DeclareMathOperator{\subjectto}{subject to} \newcommand{\mat}[1]{\boldsymbol{#1}} \renewcommand{\vec}[1]{\boldsymbol{\mathrm{#1}}} \newcommand{\vecalt}[1]{\boldsymbol{#1}} \newcommand{\conj}[1]{\overline{#1}} \newcommand{\normof}[1]{\|#1\|} \newcommand{\onormof}[2]{\|#1\|_{#2}} \newcommand{\MIN}[2]{\begin{array}{ll} \minimize_{#1} & {#2} \end{array}} \newcommand{\MINone}[3]{\begin{array}{ll} \minimize_{#1} & {#2} \\ \subjectto & {#3} \end{array}} \newcommand{\MINthree}[5]{\begin{array}{ll} \minimize_{#1} & {#2} \\ \subjectto & {#3} \\ & {#4} \\ & {#5} \end{array}} \newcommand{\MAX}[2]{\begin{array}{ll} \maximize_{#1} & {#2} \end{array}} \newcommand{\MAXone}[3]{\begin{array}{ll} \maximize_{#1} & {#2} \\ \subjectto & {#3} \end{array}} \newcommand{\itr}[2]{#1^{(#2)}} \newcommand{\itn}[1]{^{(#1)}} \newcommand{\prob}{\mathbb{P}} \newcommand{\probof}[1]{\prob\left\{ #1 \right\}} \newcommand{\pmat}[1]{\begin{pmatrix} #1 \end{pmatrix}} \newcommand{\bmat}[1]{\begin{bmatrix} #1 \end{bmatrix}} \newcommand{\spmat}[1]{\left(\begin{smallmatrix} #1 \end{smallmatrix}\right)} \newcommand{\sbmat}[1]{\left[\begin{smallmatrix} #1 \end{smallmatrix}\right]} \newcommand{\RR}{\mathbb{R}} \newcommand{\CC}{\mathbb{C}} \newcommand{\eye}{\mat{I}} \newcommand{\mA}{\mat{A}} \newcommand{\mB}{\mat{B}} \newcommand{\mC}{\mat{C}} \newcommand{\mD}{\mat{D}} \newcommand{\mE}{\mat{E}} \newcommand{\mF}{\mat{F}} \newcommand{\mG}{\mat{G}} \newcommand{\mH}{\mat{H}} \newcommand{\mI}{\mat{I}} \newcommand{\mJ}{\mat{J}} \newcommand{\mK}{\mat{K}} \newcommand{\mL}{\mat{L}} \newcommand{\mM}{\mat{M}} \newcommand{\mN}{\mat{N}} \newcommand{\mO}{\mat{O}} \newcommand{\mP}{\mat{P}} \newcommand{\mQ}{\mat{Q}} \newcommand{\mR}{\mat{R}} \newcommand{\mS}{\mat{S}} 
\newcommand{\mT}{\mat{T}} \newcommand{\mU}{\mat{U}} \newcommand{\mV}{\mat{V}} \newcommand{\mW}{\mat{W}} \newcommand{\mX}{\mat{X}} \newcommand{\mY}{\mat{Y}} \newcommand{\mZ}{\mat{Z}} \newcommand{\mLambda}{\mat{\Lambda}} \newcommand{\mSigma}{\ensuremath{\mat{\Sigma}}} \newcommand{\mPbar}{\bar{\mP}} \newcommand{\ones}{\vec{e}} \newcommand{\va}{\vec{a}} \newcommand{\vb}{\vec{b}} \newcommand{\vc}{\vec{c}} \newcommand{\vd}{\vec{d}} \newcommand{\ve}{\vec{e}} \newcommand{\vf}{\vec{f}} \newcommand{\vg}{\vec{g}} \newcommand{\vh}{\vec{h}} \newcommand{\vi}{\vec{i}} \newcommand{\vj}{\vec{j}} \newcommand{\vk}{\vec{k}} \newcommand{\vl}{\vec{l}} \newcommand{\vm}{\vec{m}} \newcommand{\vn}{\vec{n}} \newcommand{\vo}{\vec{o}} \newcommand{\vp}{\vec{p}} \newcommand{\vq}{\vec{q}} \newcommand{\vr}{\vec{r}} \newcommand{\vs}{\vec{s}} \newcommand{\vt}{\vec{t}} \newcommand{\vu}{\vec{u}} \newcommand{\vv}{\vec{v}} \newcommand{\vw}{\vec{w}} \newcommand{\vx}{\vec{x}} \newcommand{\vy}{\vec{y}} \newcommand{\vz}{\vec{z}} \newcommand{\vpi}{\vecalt{\pi}} \newcommand{\vlambda}{\vecalt{\lambda}}$

# Common matrix structures

We now illuminate some of the relationships between matrix computations and linear algebra.

Why is this stuff important? The key concept is the rank of a matrix, which gives the dimension of the vector-space associated with the matrix. So it's worth reviewing linear algebra up to the point of rank.

# Sets of vectors

Linearly independent A set of vectors $\{ \vx_1, \ldots, \vx_k \}$ in $\RR^{n}$ is called linearly independent if $\sum_{i=1}^{k} \alpha_i \vx_i = 0$ implies $\alpha_i = 0$ for all $i$.

Examples The vectors $\vx_1 = \bmat{1 \\ 2}$ and $\vx_2 = \bmat{2 \\ 3}$ are linearly independent. This can be verified by showing that the system of equations $\alpha_1 \vx_1 + \alpha_2 \vx_2 = 0$ only has the solution $\alpha_1 = \alpha_2 = 0$. However, the vectors $\vx_1 = \bmat{1 \\ 2}$ and $\vx_2 = \bmat{2 \\ 4}$ are not linearly independent because $2\vx_1 - \vx_2 = 0$.

As a matrix The property of being linearly independent is easy to state as a matrix. Suppose that $\mX$ is the $n \times k$ matrix where $\vx_i$ is the $i$th column, that is, $\mX = \bmat{\vx_1 & \cdots & \vx_k}$. Then the set of vectors is linearly independent if $\mX \va = 0$ implies that $\va = 0$.
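As a quick numerical sketch (assuming NumPy, which the notes don't use; the helper name `columns_independent` is my own), the matrix formulation turns the independence test into a rank check, since $\mX \va = 0$ forces $\va = 0$ exactly when the rank of $\mX$ equals the number of columns:

```python
import numpy as np

# Columns are the vectors x_1 and x_2 from the examples above.
X_indep = np.array([[1.0, 2.0],
                    [2.0, 3.0]])   # x_1 = (1,2), x_2 = (2,3)
X_dep = np.array([[1.0, 2.0],
                  [2.0, 4.0]])     # x_2 = 2 * x_1, so dependent

def columns_independent(X):
    # X a = 0 implies a = 0 exactly when rank(X) equals the column count.
    return bool(np.linalg.matrix_rank(X) == X.shape[1])

print(columns_independent(X_indep))  # True
print(columns_independent(X_dep))    # False
```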

Span (not spam) The span of a set of vectors is the set of all linear combinations: $\text{span}(\vx_1, \ldots, \vx_k) = \{ \alpha_1 \vx_1 + \cdots + \alpha_k \vx_k : \alpha_i \in \RR \}$.

# Subspaces

Defining a vector space is best left to Wikipedia.

Suffice it to say that the set $\RR^{n}$ is a vector-space with the field of real-numbers as scalars.

A subset $V \subset \RR^{n}$ is called a subspace if it is itself a vector-space under the same operations.

Example Let $V = \{ \alpha \vx : \alpha \in \RR \}$ for some vector $\vx \in \RR^{n}$. Then $V$ is a subspace of $\RR^{n}$.

Spans and subspaces The example we just saw shows that $\text{span}(\vx)$, the span of a single vector, is a subspace. This is true in general: $\text{span}(\vx_1, \ldots, \vx_k)$ is a subspace.

Linearly independent spans Let $\vx_1, \ldots, \vx_k$ be linearly independent. Then for $\vb \in \text{span}(\vx_1, \ldots, \vx_k)$, there exists a unique set of $\alpha_i$'s such that $\vb = \sum_{i=1}^{k} \alpha_i \vx_i$. As a matrix, this is saying that the system of equations $\mX \va = \vb$, where $\mX = \bmat{\vx_1 & \cdots & \vx_k}$, has a unique solution $\va$ with $a_i = \alpha_i$.
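A small NumPy sketch of this fact (the notes don't include code; the matrix and coefficients here are illustrative): when the columns of $\mX$ are independent, solving $\mX \va = \vb$ by least squares recovers the unique coefficients.

```python
import numpy as np

# Columns of X are two linearly independent vectors in R^3.
X = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 1.0]])

# A vector b in span of the columns, built with known coefficients.
alpha_true = np.array([3.0, -2.0])
b = X @ alpha_true

# Independent columns guarantee the least-squares solution of X a = b
# is exactly the unique coefficient vector.
alpha, *_ = np.linalg.lstsq(X, b, rcond=None)
print(alpha)
```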

Subspaces to bases and dimensions For any subspace $V \subseteq \RR^{n}$, we can always find a set of linearly independent vectors $S = \{ \vx_1, \ldots, \vx_k \}$ such that $V = \text{span}(\vx_1, \ldots, \vx_k)$. We call any such set a basis for the subspace $V$.

IMPORTANT Any basis for a subspace always has the same number of vectors. Thus, the number of vectors in a basis is a unique property of the subspace, and it is called the dimension of the subspace.

This ends our discussion of subspaces. Now we'll see how we can use subspaces to discuss matrices.

# Matrices to subspaces

Let $\mA \in \RR^{m \times n}$ be a matrix with columns $\va_1, \ldots, \va_n$, so that $\mA = \bmat{\va_1, \ldots, \va_n}$.

Range The range of a matrix is the subspace $\text{range}(\mA) = \{ \mA \vx : \vx \in \RR^{n} \}$. Note that $\mA \vx = \sum_{i=1}^{n} x_i \va_i$, so $\text{range}(\mA) = \text{span}(\va_1, \ldots, \va_n)$. So the range is just one particular span of a set of vectors.
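A NumPy sketch of range membership (the helper `in_range` is illustrative, not a standard routine): a vector lies in $\text{range}(\mA)$ exactly when the least-squares residual of $\mA \vx = \vv$ is numerically zero.

```python
import numpy as np

# Columns of A span its range: the subspace {(a, b, a + b)} in R^3.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# b = A x for some x, so b lies in range(A); c does not.
b = A @ np.array([2.0, -1.0])
c = np.array([1.0, 0.0, 0.0])

def in_range(A, v, tol=1e-10):
    # v is in range(A) iff the least-squares residual is (numerically) zero.
    x, *_ = np.linalg.lstsq(A, v, rcond=None)
    return bool(np.linalg.norm(A @ x - v) <= tol)

print(in_range(A, b), in_range(A, c))  # True False
```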

# Rank

Perhaps the most important thing in these notes is the concept of rank. At this point, rank is simple.

The rank of a matrix is defined as $\text{rank}(\mA) = \dim(\text{range}(\mA))$. That is, the rank of $\mA$ is the dimension of the subspace given by the range of $\mA$. This property is fundamentally important.

For instance, if $\mA \in \RR^{m \times n}$ with $m \ge n$ and $\text{rank}(\mA) = n$, then we know that $\mA$ has a set of linearly independent column vectors!

Example Here's where we can use some of our matrix algebra to prove a statement. Let $\mP$ be an $n \times n$ permutation matrix. Show that $\text{rank}(\mA \mP) = \text{rank}(\mA)$.

Proof Sketch Multiplying by a permutation matrix on the right just reorders the columns of $\mA$. Reordering a set of vectors doesn't change its span, so $\text{range}(\mA \mP) = \text{range}(\mA)$. Thus, the dimension of that subspace, which is the rank, won't change.
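The claim is easy to sanity-check numerically, assuming NumPy (a sketch with a random matrix, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random 4 x 3 matrix and a 3 x 3 permutation matrix.
A = rng.standard_normal((4, 3))
P = np.eye(3)[[2, 0, 1]]   # rows of the identity, reordered

# A @ P reorders the columns of A, so the span of the columns,
# and hence the rank, is unchanged.
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(A @ P))
```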

Key question How do we compute rank?

## Useful matrix decompositions

Let $\mA \in \RR^{m \times n}$ be a matrix. The following matrix decompositions exist for any matrix:

1. $\mA = \mQ \mR$ where $\mQ$ is $m \times m$ and orthogonal, and $\mR$ is $m \times n$ and upper-triangular.

2. $\mA = \mU \mat{\Sigma} \mV^T$ where $\mU$ is $m \times m$ and orthogonal, $\mV$ is $n \times n$ and orthogonal, and $\mat{\Sigma}$ is $m \times n$ and diagonal, with diagonal entries $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_{\min(m,n)} \ge 0$. (That is, sorted in non-increasing order and non-negative.)

3. $\mA = \mP \mL \mU \mQ$ where $\mP$ and $\mQ$ are permutation matrices and $\mL$ and $\mU$ are lower and upper triangular, respectively.

These decompositions expose the rank of a matrix in various ways. For instance, the number of entries on the diagonal of $\mSigma$ that are non-zero is equal to the rank of the matrix.
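For instance, here is a NumPy sketch of the SVD-based rank computation (the tolerance rule mirrors the common default used by routines like `numpy.linalg.matrix_rank`; the test matrix is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Build a 5 x 4 matrix of rank 2: the product of a 5 x 2 and a 2 x 4
# matrix has columns that span a 2-dimensional subspace.
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))

# The SVD exposes the rank as the number of nonzero singular values;
# numerically, "nonzero" means above a tolerance scaled by sigma_1.
sigma = np.linalg.svd(A, compute_uv=False)
tol = sigma[0] * max(A.shape) * np.finfo(A.dtype).eps
numerical_rank = int(np.sum(sigma > tol))
print(numerical_rank)  # 2
```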