
# Inverses

In this lecture, we'll look into the inverse of a matrix, and find out which matrices are invertible. These notes roughly follow Trefethen and Bau, section 1.

## Background

Let $\mA = \bmat{ \va_1 & \cdots & \va_n} \in \RR^{m \times n}$. Recall that $\text{range}(\mA) = \{ \mA \vx : \vx \in \RR^{n} \}$ is the span of the columns $\va_1, \ldots, \va_n$, and that $\text{rank}(\mA)$ is the dimension of $\text{range}(\mA)$.

We call $\mA \in \RR^{m \times n}$ full rank if $\text{rank}(\mA) = \min(m, n)$.
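To make this concrete, here is a small numpy sketch (numpy is my illustration choice here, not something the notes prescribe) that computes the rank of a tall matrix before and after its columns are made linearly dependent:

```python
import numpy as np

# A 3x2 matrix with linearly independent columns: full rank.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
print(np.linalg.matrix_rank(A))  # min(m, n) = 2, so A is full rank

# Second column is twice the first: the columns are dependent, rank drops.
B = np.array([[1.0, 2.0],
              [0.0, 0.0],
              [1.0, 2.0]])
print(np.linalg.matrix_rank(B))  # 1, so B is rank deficient
```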

## Full rank matrices

Full rank matrices have the important property that they give rise to one-to-one maps between $\RR^{n}$ and $\RR^{m}$. Let's show this.

Theorem (from Trefethen and Bau, Theorem 1.2). Let $m \ge n$. A matrix $\mA \in \RR^{m \times n}$ is full rank if and only if it maps no two distinct vectors to the same vector.

Proof. If a matrix $\mA$ is full rank, then it has $n$ linearly independent column vectors, and these vectors are a basis for $\text{range}(\mA)$. This, in turn, implies that any vector in $\text{range}(\mA)$ has a unique representation in this basis. (If not, then $\mA \vc_1 = \mA \vc_2$ for distinct $\vc_1, \vc_2$, so $\mA (\vc_1 - \vc_2) = 0$ and $\mA$ has linearly dependent columns, which it can't!) Thus, any vector $\mA \vy$ corresponds to a unique $\vy$.

We also have to prove the reverse direction, which is easier via the contrapositive. If $\mA$ is not full rank, then its columns are linearly dependent. Hence, there exists a nonzero vector $\vc$ such that $\mA \vc = 0$. Let $\vy$ be any vector in $\RR^{n}$; then $\mA \vy = \mA(\vy + \vc)$, and so we have two distinct vectors that give us the same result.

There's a great picture I could put here, but it's too tricky. The point is that we have a one-to-one map between $\RR^{n}$ and $\text{range}(\mA)$, which is a subset of $\RR^{m}$ when $m \ge n$. Because this map is one-to-one, it's invertible! So we can take any vector $\vb \in \text{range}(\mA)$ and write $\vb = \mA \vx$ for some unique $\vx \in \RR^{n}$.
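The contrapositive step above can be checked numerically. In this hedged numpy sketch (the matrix and vectors are my own illustrative choices), a rank-deficient matrix has a nonzero nullspace vector $\vc$, and two distinct inputs $\vy$ and $\vy + \vc$ land on the same output:

```python
import numpy as np

# Rank-deficient A: the third column equals the sum of the first two.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [2.0, 3.0, 5.0],
              [1.0, 1.0, 2.0]])

# c is in the nullspace of A: col1 + col2 - col3 = 0.
c = np.array([1.0, 1.0, -1.0])
y = np.array([2.0, -1.0, 3.0])

print(np.allclose(A @ c, 0))            # True: A c = 0
print(np.allclose(A @ y, A @ (y + c)))  # True: two distinct inputs, same output
```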

## Linear systems

It's worth repeating this equation because it's so fundamental to the rest of the class -- and the entire field. We call

$$\mA \vx = \vb$$

a linear system of equations.

Usually these are defined with square matrices $\mA$.

## Square, full rank matrices

Let $\mA \in \RR^{n \times n}$ be a full-rank matrix. What we've shown above is that any vector in $\RR^{n}$ can be written as $\mA \vx$ for some unique $\vx$.

Thus, we can find the following $n$ vectors:

$$\mA \vx_1 = \ve_1, \quad \mA \vx_2 = \ve_2, \quad \ldots, \quad \mA \vx_n = \ve_n,$$

where $\ve_i$ is the $i$th standard basis vector. If we write this as a matrix equation, we have:

$$\mA \mX = \mI, \quad \text{where } \mX = \bmat{\vx_1 & \cdots & \vx_n}.$$

The matrix $\mX$ is called the inverse of $\mA$ and is usually written $\mA^{-1}$.
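This column-by-column construction can be sketched directly in numpy (again as an illustration, not a recommended algorithm): solve $\mA \vx_i = \ve_i$ for each standard basis vector $\ve_i$ and stack the solutions as columns.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))  # a random square matrix is full rank almost surely

# Solve A x_i = e_i for each standard basis vector e_i,
# and stack the solutions x_i as the columns of X.
I = np.eye(4)
X = np.column_stack([np.linalg.solve(A, I[:, i]) for i in range(4)])

print(np.allclose(A @ X, I))  # True: X satisfies A X = I, so X is the inverse
```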

## The matrix inverse

We've shown that any full-rank $\mA \in \RR^{n \times n}$ has an inverse $\mA^{-1}$ such that

$$\mA \mA^{-1} = \mI.$$

Let's study a few properties of this inverse.

First, does $\mA^{-1} \mA = \mI$ too? We'll show this is the case. Let $\mA \mX = \mI$ and let $\mY \mA = \mI$. Then

$$\mY \mA \mX = (\mY \mA) \mX = \mI \mX = \mX,$$

but also

$$\mY \mA \mX = \mY (\mA \mX) = \mY \mI = \mY.$$

Thus, $\mX = \mY$.
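A quick numerical sanity check of this fact, sketched with numpy's built-in inverse (my choice of tool, not the notes'): the same matrix works as both a right and a left inverse.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
X = np.linalg.inv(A)

# One matrix X satisfies both A X = I and X A = I, as argued above.
print(np.allclose(A @ X, np.eye(5)))  # True
print(np.allclose(X @ A, np.eye(5)))  # True
```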

Second, $(\mA \mB)^{-1} = \mB^{-1} \mA^{-1}$, assuming that $\mA$ and $\mB$ are square and invertible. The standard way to check that you have the inverse of a particular matrix is just to show that it satisfies $\mA \mX = \mI$. In this case:

$$(\mA \mB)(\mB^{-1} \mA^{-1}) = \mA (\mB \mB^{-1}) \mA^{-1} = \mA \mA^{-1} = \mI,$$

where we've canceled all the inverse pairs represented by parentheses.
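The same "check that the candidate satisfies $\mA \mX = \mI$" recipe carries over to code. A minimal numpy sketch with random matrices of my choosing:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# Candidate inverse of AB from the identity (AB)^{-1} = B^{-1} A^{-1}.
candidate = np.linalg.inv(B) @ np.linalg.inv(A)

# Verify it by checking (AB) * candidate = I.
print(np.allclose((A @ B) @ candidate, np.eye(4)))  # True
```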

## When is a matrix full rank?

The following set of statements from Trefethen and Bau helps to characterize when a matrix has an inverse. Let $\mA \in \RR^{n \times n}$; these statements are all equivalent to each other:

1. $\mA$ has an inverse $\mA^{-1}$
2. $\mA$ has rank $n$
3. $\text{range}(\mA) = \RR^{n}$.
4. (not fully covered) $\text{null}(\mA) = \{ 0 \}$ (null is the nullspace).
5. (not fully covered) 0 is not an eigenvalue of $\mA$
6. (not fully covered) 0 is not a singular value of $\mA$
7. (not fully covered) $\text{det}(\mA) \not= 0$
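These conditions hold (or fail) together, which we can spot-check numerically. In this sketch (a hand-picked invertible matrix, with a tolerance of my choosing to stand in for "nonzero"), every test comes back positive:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])  # an invertible 2x2 matrix
tol = 1e-12  # illustrative threshold for "numerically nonzero"

print(np.linalg.matrix_rank(A))                           # rank n = 2
print(abs(np.linalg.det(A)) > tol)                        # det(A) != 0
print(np.all(np.abs(np.linalg.eigvals(A)) > tol))         # 0 is not an eigenvalue
print(np.all(np.linalg.svd(A, compute_uv=False) > tol))   # 0 is not a singular value
```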

## Solving a linear system

Let $\mA \in \RR^{n \times n}$ be full-rank. Then the linear system

$$\mA \vx = \vb$$

has solution

$$\vx = \mA^{-1} \vb.$$

Be warned, this is not a good way to find $\vx$ unless $\mA$ is very special. In this class, we will see many superior procedures to find $\vx$ that satisfies this linear system. A good way to demonstrate that you have not learned the material is to utilize this idea in your programs.
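In numpy terms, the warning amounts to: call `np.linalg.solve(A, b)` rather than forming `np.linalg.inv(A) @ b`. Both give the mathematical solution, but solving directly (via a factorization under the hood) is cheaper and generally more accurate. A hedged sketch with a random system of my choosing:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((100, 100))
b = rng.standard_normal(100)

# Preferred: solve the system directly.
x_solve = np.linalg.solve(A, b)

# Discouraged: explicitly form the inverse, then multiply.
x_inv = np.linalg.inv(A) @ b

print(np.allclose(A @ x_solve, b))   # True: the residual is tiny
print(np.allclose(x_solve, x_inv))   # True here, but solve is the better habit
```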