# Network & Matrix Computations

### CIVL 2123

$\newcommand{\mat}[1]{\boldsymbol{#1}} \renewcommand{\vec}[1]{\boldsymbol{\mathrm{#1}}} \newcommand{\vecalt}[1]{\boldsymbol{#1}} \newcommand{\conj}[1]{\overline{#1}} \newcommand{\normof}[1]{\|#1\|} \newcommand{\onormof}[2]{\|#1\|_{#2}} \newcommand{\itr}[2]{#1^{(#2)}} \newcommand{\itn}[1]{^{(#1)}} \newcommand{\eps}{\varepsilon} \newcommand{\kron}{\otimes} \DeclareMathOperator{\diag}{diag} \DeclareMathOperator{\trace}{trace} \newcommand{\prob}{\mathbb{P}} \newcommand{\probof}[1]{\prob\left\{ #1 \right\}} \newcommand{\pmat}[1]{\begin{pmatrix} #1 \end{pmatrix}} \newcommand{\bmat}[1]{\begin{bmatrix} #1 \end{bmatrix}} \newcommand{\spmat}[1]{\left(\begin{smallmatrix} #1 \end{smallmatrix}\right)} \newcommand{\sbmat}[1]{\left[\begin{smallmatrix} #1 \end{smallmatrix}\right]} \newcommand{\RR}{\mathbb{R}} \newcommand{\CC}{\mathbb{C}} \newcommand{\eye}{\mat{I}} \newcommand{\mA}{\mat{A}} \newcommand{\mB}{\mat{B}} \newcommand{\mC}{\mat{C}} \newcommand{\mD}{\mat{D}} \newcommand{\mE}{\mat{E}} \newcommand{\mF}{\mat{F}} \newcommand{\mG}{\mat{G}} \newcommand{\mH}{\mat{H}} \newcommand{\mI}{\mat{I}} \newcommand{\mJ}{\mat{J}} \newcommand{\mK}{\mat{K}} \newcommand{\mL}{\mat{L}} \newcommand{\mM}{\mat{M}} \newcommand{\mN}{\mat{N}} \newcommand{\mO}{\mat{O}} \newcommand{\mP}{\mat{P}} \newcommand{\mQ}{\mat{Q}} \newcommand{\mR}{\mat{R}} \newcommand{\mS}{\mat{S}} \newcommand{\mT}{\mat{T}} \newcommand{\mU}{\mat{U}} \newcommand{\mV}{\mat{V}} \newcommand{\mW}{\mat{W}} \newcommand{\mX}{\mat{X}} \newcommand{\mY}{\mat{Y}} \newcommand{\mZ}{\mat{Z}} \newcommand{\mLambda}{\mat{\Lambda}} \newcommand{\mPbar}{\bar{\mP}} \newcommand{\ones}{\vec{e}} \newcommand{\va}{\vec{a}} \newcommand{\vb}{\vec{b}} \newcommand{\vc}{\vec{c}} \newcommand{\vd}{\vec{d}} \newcommand{\ve}{\vec{e}} \newcommand{\vf}{\vec{f}} \newcommand{\vg}{\vec{g}} \newcommand{\vh}{\vec{h}} \newcommand{\vi}{\vec{i}} \newcommand{\vj}{\vec{j}} \newcommand{\vk}{\vec{k}} \newcommand{\vl}{\vec{l}} \newcommand{\vm}{\vec{l}} \newcommand{\vn}{\vec{n}} \newcommand{\vo}{\vec{o}} \newcommand{\vp}{\vec{p}} \newcommand{\vq}{\vec{q}} \newcommand{\vr}{\vec{r}} \newcommand{\vs}{\vec{s}} \newcommand{\vt}{\vec{t}} \newcommand{\vu}{\vec{u}} \newcommand{\vv}{\vec{v}} \newcommand{\vw}{\vec{w}} \newcommand{\vx}{\vec{x}} \newcommand{\vy}{\vec{y}} \newcommand{\vz}{\vec{z}} \newcommand{\vpi}{\vecalt{\pi}}$

# Lecture 1 - Norms, Linear Systems, and Eigenvalues

## Outline

In this lecture, we’ll treat a few prerequisites from dense linear algebra.

1. Norms
2. Linear systems
3. Eigenvalue problems

After the lecture, you should be able to use these concepts to solve problems.

These notes are based on Golub and Van Loan, Sections 2.2 and 2.3.

## Norms

### Vector norms

Let $\vx, \vy \in \RR^{n}$, and $\alpha \in \RR$. A function $f$ that satisfies the following three properties is called a vector norm:

1. $f(\vx) \ge 0$ and $f(\vx) = 0$ only when $\vx = 0$.
2. $f(\vx + \vy) \le f(\vx) + f(\vy)$
3. $f(\alpha \vx) = |\alpha| f(\vx)$.

In such cases, we write $\normof{\vx} = f(\vx)$.

The most common vector norm is, by far, the Euclidean norm, also called the 2-norm:

$$\normof{\vx}_2 = \sqrt{\sum_{i=1}^n x_i^2} = \sqrt{\vx^T \vx}.$$

Usually, this is the norm that people refer to when they aren’t specific about another choice.

A more general norm is the $p$-norm ($p \ge 1$):

$$\normof{\vx}_p = \Bigl( \sum_{i=1}^n |x_i|^p \Bigr)^{1/p}.$$

This becomes the Euclidean norm when $p=2$, which is why the Euclidean norm is also called the 2-norm. There are two other common choices:

$$\normof{\vx}_1 = \sum_{i=1}^n |x_i| \qquad \text{and} \qquad \normof{\vx}_{\infty} = \max_{i} |x_i|.$$
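To make these definitions concrete, here is a minimal sketch in Python (assuming NumPy as the tooling; the vector `x` and the choice $p=3$ are just illustrative) that computes each norm directly from its definition and compares against `numpy.linalg.norm`:

```python
import numpy as np

x = np.array([3.0, -4.0, 1.0])

# p-norm from the definition: (sum_i |x_i|^p)^(1/p)
def pnorm(x, p):
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

print(pnorm(x, 1), np.linalg.norm(x, 1))             # 1-norm: sum of absolute values
print(pnorm(x, 2), np.linalg.norm(x, 2))             # 2-norm (Euclidean norm)
print(pnorm(x, 3), np.linalg.norm(x, 3))             # a general p-norm with p = 3
print(np.max(np.abs(x)), np.linalg.norm(x, np.inf))  # infinity-norm: largest absolute entry
```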

THEOREM (Equivalence of Norms) Let $\normof{\vx}_a$ and $\normof{\vx}_b$ be any two vector norms on $\RR^n$. Then there exist fixed positive constants $c_1$ and $c_2$ such that

$$c_1 \normof{\vx}_a \le \normof{\vx}_b \le c_2 \normof{\vx}_a$$

for all $\vx \in \RR^{n}$.

Example. For instance, $\normof{\vx}_2 \le \normof{\vx}_1 \le \sqrt{n}\, \normof{\vx}_2$ for all $\vx \in \RR^n$.
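A quick numerical sanity check of the bound $\normof{\vx}_2 \le \normof{\vx}_1 \le \sqrt{n}\, \normof{\vx}_2$, one concrete instance of the theorem (a Python/NumPy sketch; the dimension and trial count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
for _ in range(1000):
    x = rng.standard_normal(n)
    n1 = np.linalg.norm(x, 1)
    n2 = np.linalg.norm(x, 2)
    # check  ||x||_2 <= ||x||_1 <= sqrt(n) * ||x||_2  for this sample
    assert n2 <= n1 + 1e-12
    assert n1 <= np.sqrt(n) * n2 + 1e-12
print("both inequalities held on all random trials")
```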

Vector convergence. The importance of this theorem comes from the following definition. Let $\vx\itn{1}, \vx\itn{2}, \ldots$ be a sequence of vectors. We say that $\vx\itn{k}$ converges to $\vx$ if

$$\lim_{k \to \infty} \normof{\vx\itn{k} - \vx} = 0.$$

Because of the equivalence of norms, if this limit holds in one vector norm, it holds in every vector norm. For the forthcoming problems with PageRank, showing such results in the 1-norm will be especially convenient.
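As an illustrative sketch (Python with NumPy; the sequence $\vx\itn{k} = \vx + 2^{-k}\vd$ is invented purely for the demo), the error of a convergent sequence shrinks in every norm at once:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])     # the limit vector
d = np.array([1.0, -1.0, 0.5])    # an arbitrary direction

for k in range(1, 6):
    xk = x + 2.0 ** (-k) * d      # a sequence x^(k) that converges to x
    err = xk - x
    print(k,
          np.linalg.norm(err, 1),        # error in the 1-norm
          np.linalg.norm(err, 2),        # error in the 2-norm
          np.linalg.norm(err, np.inf))   # error in the infinity-norm
```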

A small digression
Many people call these norms the $\ell_1, \ell_2,$ and $\ell_{\infty}$ norms, or even the $L_1, L_2$, and $L_{\infty}$ norms. In my view, these are misnomers, although they aren't incorrect. Usually the $\ell_1$ and $L_1$ norms apply to sequences and functions, respectively:
$\ell_1 : \sum_{i=1}^\infty |x_i|$
$L_1 : \int_{D} |f| \, d\mu$ (over an appropriately measurable space). So I prefer 1-norm, 2-norm, and $\infty$-norm to the “L” versions.

### Matrix norms

Let $\mA \in \RR^{m \times n}$ be a matrix. Then any function $f$ satisfying

1. $f(\mA) \ge 0$ and $f(\mA) = 0$ only when $\mA = 0$,

2. $f(\mA + \mB) \le f(\mA) + f(\mB)$,

3. $f(\alpha \mA) = |\alpha| f(\mA)$

is a matrix norm, and we write $\normof{\mA} = f(\mA)$.

A frequently used matrix norm is the Frobenius norm:

$$\normof{\mA}_F = \sqrt{\sum_{i=1}^m \sum_{j=1}^n A_{ij}^2}.$$

Just like the vector 2-norm, the Frobenius norm treats the entries of $\mA$ as one long vector of length $mn$.
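A small sketch (Python with NumPy; the matrix is arbitrary) computing the Frobenius norm from the definition and checking it against both the library routine and the 2-norm of the flattened matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

# Frobenius norm: square root of the sum of squared entries,
# i.e. the vector 2-norm of A flattened into one long vector
fro = np.sqrt(np.sum(A ** 2))
print(fro, np.linalg.norm(A, 'fro'), np.linalg.norm(A.ravel(), 2))
```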

## Linear systems

### A problem from http://www.raymondcheong.com/rankings/background.html

A team's average score margin (ASM) depends on the strength of its opponents. Suppose we had a measure of each team's strength, $\vr$ (i.e. the ranking), such that $r_i - r_j$ is the expected score margin when team $i$ plays team $j$. For $\vr$ to be consistent with the available data, the average discrepancy between the actual and expected score margins should be zero. In other words, for each team $i$, $\vr$ ought to satisfy the equation:

$$\frac{1}{n_i} \sum_{j \in \mathcal{O}_i} \bigl[ s_{ij} - (r_i - r_j) \bigr] = 0,$$

where $\mathcal{O}_i$ is the set of opponents team $i$ played, $n_i$ is the number of games team $i$ played, and $s_{ij}$ is the actual score margin in the game between teams $i$ and $j$.
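Here is a sketch of how these equations become a linear system (Python with NumPy; the three-team schedule and score margins are invented for illustration). Each team contributes one equation, but the raw system is singular because adding the same constant to every rating changes nothing; one common fix, as in Massey-style ratings, is to replace one equation with the constraint that the ratings sum to zero:

```python
import numpy as np

# made-up schedule: each game is (team_i, team_j, score_i - score_j)
games = [(0, 1, 7), (1, 2, 3), (0, 2, 10), (2, 1, -4)]
nteams = 3

M = np.zeros((nteams, nteams))
p = np.zeros(nteams)
for i, j, margin in games:
    # team i's equation: (# games) * r_i - sum of opponents' ratings = total margin
    M[i, i] += 1
    M[j, j] += 1
    M[i, j] -= 1
    M[j, i] -= 1
    p[i] += margin
    p[j] -= margin

# M is singular: shifting every rating by a constant gives the same margins.
# Replace the last equation with "ratings sum to zero" to pin down a unique r.
M[-1, :] = 1.0
p[-1] = 0.0

r = np.linalg.solve(M, p)
print(r)  # higher rating = stronger team under this made-up data
```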

## Exercises

These are not required.

1. Give an example showing that the $p$-norm isn’t a norm when $p < 1$.

2. Show that $\lim_{p \to \infty} \normof{\vx}_p = \normof{\vx}_{\infty}$ as defined above.

3. Show that the $F_{\infty}$ norm is not submultiplicative via an example.