$\newcommand{\eps}{\varepsilon} \newcommand{\kron}{\otimes} \DeclareMathOperator{\diag}{diag} \DeclareMathOperator{\trace}{trace} \DeclareMathOperator{\rank}{rank} \DeclareMathOperator*{\minimize}{minimize} \DeclareMathOperator*{\maximize}{maximize} \DeclareMathOperator{\subjectto}{subject to} \newcommand{\mat}[1]{\boldsymbol{#1}} \renewcommand{\vec}[1]{\boldsymbol{\mathrm{#1}}} \newcommand{\vecalt}[1]{\boldsymbol{#1}} \newcommand{\conj}[1]{\overline{#1}} \newcommand{\normof}[1]{\|#1\|} \newcommand{\onormof}[2]{\|#1\|_{#2}} \newcommand{\MIN}[2]{\begin{array}{ll} \minimize_{#1} & {#2} \end{array}} \newcommand{\MINone}[3]{\begin{array}{ll} \minimize_{#1} & {#2} \\ \subjectto & {#3} \end{array}} \newcommand{\MINthree}[5]{\begin{array}{ll} \minimize_{#1} & {#2} \\ \subjectto & {#3} \\ & {#4} \\ & {#5} \end{array}} \newcommand{\MAX}[2]{\begin{array}{ll} \maximize_{#1} & {#2} \end{array}} \newcommand{\MAXone}[3]{\begin{array}{ll} \maximize_{#1} & {#2} \\ \subjectto & {#3} \end{array}} \newcommand{\itr}[2]{#1^{(#2)}} \newcommand{\itn}[1]{^{(#1)}} \newcommand{\prob}{\mathbb{P}} \newcommand{\probof}[1]{\prob\left\{ #1 \right\}} \newcommand{\pmat}[1]{\begin{pmatrix} #1 \end{pmatrix}} \newcommand{\bmat}[1]{\begin{bmatrix} #1 \end{bmatrix}} \newcommand{\spmat}[1]{\left(\begin{smallmatrix} #1 \end{smallmatrix}\right)} \newcommand{\sbmat}[1]{\left[\begin{smallmatrix} #1 \end{smallmatrix}\right]} \newcommand{\RR}{\mathbb{R}} \newcommand{\CC}{\mathbb{C}} \newcommand{\eye}{\mat{I}} \newcommand{\mA}{\mat{A}} \newcommand{\mB}{\mat{B}} \newcommand{\mC}{\mat{C}} \newcommand{\mD}{\mat{D}} \newcommand{\mE}{\mat{E}} \newcommand{\mF}{\mat{F}} \newcommand{\mG}{\mat{G}} \newcommand{\mH}{\mat{H}} \newcommand{\mI}{\mat{I}} \newcommand{\mJ}{\mat{J}} \newcommand{\mK}{\mat{K}} \newcommand{\mL}{\mat{L}} \newcommand{\mM}{\mat{M}} \newcommand{\mN}{\mat{N}} \newcommand{\mO}{\mat{O}} \newcommand{\mP}{\mat{P}} \newcommand{\mQ}{\mat{Q}} \newcommand{\mR}{\mat{R}} \newcommand{\mS}{\mat{S}} 
\newcommand{\mT}{\mat{T}} \newcommand{\mU}{\mat{U}} \newcommand{\mV}{\mat{V}} \newcommand{\mW}{\mat{W}} \newcommand{\mX}{\mat{X}} \newcommand{\mY}{\mat{Y}} \newcommand{\mZ}{\mat{Z}} \newcommand{\mLambda}{\mat{\Lambda}} \newcommand{\mSigma}{\ensuremath{\mat{\Sigma}}} \newcommand{\mPbar}{\bar{\mP}} \newcommand{\ones}{\vec{e}} \newcommand{\va}{\vec{a}} \newcommand{\vb}{\vec{b}} \newcommand{\vc}{\vec{c}} \newcommand{\vd}{\vec{d}} \newcommand{\ve}{\vec{e}} \newcommand{\vf}{\vec{f}} \newcommand{\vg}{\vec{g}} \newcommand{\vh}{\vec{h}} \newcommand{\vi}{\vec{i}} \newcommand{\vj}{\vec{j}} \newcommand{\vk}{\vec{k}} \newcommand{\vl}{\vec{l}} \newcommand{\vm}{\vec{m}} \newcommand{\vn}{\vec{n}} \newcommand{\vo}{\vec{o}} \newcommand{\vp}{\vec{p}} \newcommand{\vq}{\vec{q}} \newcommand{\vr}{\vec{r}} \newcommand{\vs}{\vec{s}} \newcommand{\vt}{\vec{t}} \newcommand{\vu}{\vec{u}} \newcommand{\vv}{\vec{v}} \newcommand{\vw}{\vec{w}} \newcommand{\vx}{\vec{x}} \newcommand{\vy}{\vec{y}} \newcommand{\vz}{\vec{z}} \newcommand{\vpi}{\vecalt{\pi}} \newcommand{\vlambda}{\vecalt{\lambda}}$

# Vector norms

These notes introduce vector norms and discuss their properties.

## Rationale for a vector norm

The idea of a vector norm is to measure the size of a vector with reference to the $0$ vector. Such a norm also provides us with a distance measure between vectors, which lets us answer questions such as: is $\vx$ close to $\vy$?

## Definition of a vector norm

Let $\vx, \vy \in \RR^{n}$, and $\alpha \in \RR$. A function $f$ that satisfies the following three properties is called a vector norm:

1. $f(\vx) \ge 0$ and $f(\vx) = 0$ only when $\vx = 0$.
2. $f(\vx + \vy) \le f(\vx) + f(\vy)$ (the triangle inequality).
3. $f(\alpha \vx) = |\alpha| f(\vx)$.

In such cases, we write $f(\vx) = \normof{\vx}$.
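The three properties can be checked numerically for a concrete choice of $f$; here is a minimal sketch in Python, using the sum of absolute values (the 1-norm, which appears below) and randomly drawn vectors:

```python
import math
import random

def norm1(x):
    """Sum of absolute values: one concrete f satisfying the axioms."""
    return sum(abs(xi) for xi in x)

random.seed(0)
n = 5
x = [random.uniform(-1, 1) for _ in range(n)]
y = [random.uniform(-1, 1) for _ in range(n)]
alpha = -2.5

# 1. nonnegativity, and zero only at the zero vector
assert norm1(x) >= 0 and norm1([0.0] * n) == 0.0
# 2. triangle inequality (small tolerance for floating point)
assert norm1([a + b for a, b in zip(x, y)]) <= norm1(x) + norm1(y) + 1e-12
# 3. absolute homogeneity
assert math.isclose(norm1([alpha * xi for xi in x]), abs(alpha) * norm1(x))
```

Of course, a finite set of random trials does not prove the properties; it only fails to falsify them.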

## The Euclidean norm

The most common vector norm is, by far, the Euclidean norm, also called the 2-norm:

$$ \normof{\vx}_2 = \sqrt{\sum_{i=1}^n x_i^2} = \sqrt{\vx^T \vx}. $$

Usually, this is the norm that people refer to when they aren't specific about another choice.
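In code, the Euclidean norm is just a square root of a sum of squares; a minimal sketch in Python:

```python
import math

def norm2(x):
    """Euclidean (2-)norm: square root of the sum of squared entries."""
    return math.sqrt(sum(xi * xi for xi in x))

print(norm2([3.0, 4.0]))  # 5.0
```

For production use, `math.hypot` computes the same quantity with better protection against overflow in the intermediate squares.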

## The p-norms

A more general norm is the $p$-norm ($p \ge 1$):

$$ \normof{\vx}_p = \left( \sum_{i=1}^n |x_i|^p \right)^{1/p}. $$

This becomes the Euclidean norm when $p = 2$, which is why the Euclidean norm is also called the 2-norm. There are two other common choices:

$$ \normof{\vx}_1 = \sum_{i=1}^n |x_i| \qquad \text{and} \qquad \normof{\vx}_{\infty} = \max_{1 \le i \le n} |x_i|. $$
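A single function can cover all of these cases; a sketch in Python, with `float('inf')` standing in for the $\infty$-norm:

```python
def pnorm(x, p):
    """p-norm of x for finite p >= 1; p = float('inf') gives the max-norm."""
    if p == float('inf'):
        return max(abs(xi) for xi in x)
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)

x = [1.0, -2.0, 2.0]
print(pnorm(x, 1))             # 5.0 (sum of magnitudes)
print(pnorm(x, 2))             # 3.0 (Euclidean norm)
print(pnorm(x, float('inf')))  # 2.0 (largest magnitude)
```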

## A more interesting norm

The above norms are commonly used. A less common norm is:

$$ \normof{\vx}_{(k)} = \text{the sum of the $k$ largest magnitude entries in } \vx. $$

Note that for $k = 1$, this is the $\infty$-norm defined above, just the largest magnitude entry, and for $k = n$, this is the $1$-norm.
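A sketch of this norm in Python (the name `knorm` is ours): sort the magnitudes in decreasing order and sum the top $k$:

```python
def knorm(x, k):
    """Sum of the k largest-magnitude entries of x (1 <= k <= len(x))."""
    mags = sorted((abs(xi) for xi in x), reverse=True)
    return sum(mags[:k])

x = [3.0, -1.0, 2.0, -4.0]
print(knorm(x, 1))       # 4.0  -- the infinity-norm
print(knorm(x, len(x)))  # 10.0 -- the 1-norm
```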

## Equivalence of norms

Does it matter which norm you use? The following theorem helps us understand that something that is small in one norm cannot be arbitrarily large in another.

THEOREM (Equivalence of Norms) Let $\normof{\vx}_a$ and $\normof{\vx}_b$ be any two vector norms on $\RR^n$. Then there exist fixed constants $c_1, c_2 > 0$ such that

$$ c_1 \normof{\vx}_a \le \normof{\vx}_b \le c_2 \normof{\vx}_a, $$

which holds for any $\vx \in \RR^{n}$.
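For a concrete pair of norms the constants are easy to exhibit: $c_1 = 1$ and $c_2 = n$ work when $a$ is the $\infty$-norm and $b$ is the $1$-norm, since the largest magnitude entry is at most the sum of all magnitudes, which in turn is at most $n$ times the largest. A quick numerical check in Python:

```python
import random

def norm1(x):
    return sum(abs(xi) for xi in x)

def norminf(x):
    return max(abs(xi) for xi in x)

random.seed(1)
n = 10
for _ in range(1000):
    x = [random.uniform(-1, 1) for _ in range(n)]
    # c1 = 1 and c2 = n for a = infinity-norm, b = 1-norm
    assert norminf(x) <= norm1(x) <= n * norminf(x)
```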

### Example

For the $1$-norm and the $2$-norm, the constants can be made explicit:

$$ \normof{\vx}_2 \le \normof{\vx}_1 \le \sqrt{n} \, \normof{\vx}_2. $$

### Vector convergence

The importance of this theorem is shown by the following definition. Let $\vx\itn{1}, \vx\itn{2}, \ldots$ be a sequence of vectors. We say that $\vx\itn{k}$ converges to $\vx$ if

$$ \lim_{k \to \infty} \normof{\vx\itn{k} - \vx} = 0. $$

Because of the equivalence of norms, if this holds in one vector norm, it holds in every vector norm, so we can establish convergence using whichever norm is most convenient. For the forthcoming problems with PageRank, showing such results with the 1-norm will be especially nice.
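A small numerical illustration: the sequence $\vx\itn{k} = \vx + \tfrac{1}{k}\ones$ converges to $\vx$, and the error shrinks in the 1-norm and the $\infty$-norm together:

```python
def norm1(x):
    return sum(abs(xi) for xi in x)

def norminf(x):
    return max(abs(xi) for xi in x)

x = [1.0, 2.0, 3.0]
errs = []
for k in [1, 10, 100, 1000]:
    xk = [xi + 1.0 / k for xi in x]     # x^(k) -> x as k grows
    d = [a - b for a, b in zip(xk, x)]  # error vector x^(k) - x
    errs.append((norm1(d), norminf(d)))
    print(k, norm1(d), norminf(d))

# the error goes to zero in both norms together
assert errs[-1][0] < errs[0][0] and errs[-1][1] < errs[0][1]
```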

## A small digression

Many people call these norms the $\ell_1, \ell_2,$ and $\ell_{\infty}$ norms, or even the $L_1, L_2$, and $L_{\infty}$ norms. In my view, these are misnomers, although they aren't incorrect. Usually the $\ell_1$ and $L_1$ norms apply to sequences and functions, respectively:

$$ \ell_1 : \sum_{i=1}^\infty |x_i| \qquad \qquad L_1 : \int_{D} |f| \, d\mu $$

(the latter over an appropriately measurable space). So I prefer 1-norm, 2-norm, and $\infty$-norm to the "$L$" versions.