[Image: a large network]

Network & Matrix Computations

David Gleich

Purdue University

Fall 2011

Course number CS 59000-NMC

Tuesdays and Thursdays, 10:30am-11:45am

CIVL 2123

Lecture 1 - Norms, Linear Systems, and Eigenvalues


In this lecture, we’ll treat a few prerequisites from dense linear algebra.

  1. Norms
  2. Linear systems
  3. Eigenvalue problems

After the lecture, you should be able to use these concepts to solve problems.

All of these notes are based on Golub and Van Loan, sections 2.2 and 2.3.


Vector norms

Let \vx, \vy \in \RR^n and \alpha \in \RR. A function f : \RR^n \to \RR that satisfies the following three properties is called a vector norm:

  1. f(\vx) \ge 0, and f(\vx) = 0 only when \vx = 0.
  2. f(\vx + \vy) \le f(\vx) + f(\vy).
  3. f(\alpha \vx) = |\alpha| f(\vx).

In such cases, we write

f(\vx) = \normof{\vx}.

The most common vector norm is, by far, the Euclidean norm, also called the 2 norm:

\normof{\vx} = \normof{\vx}_2 = \sqrt{\sum_{i=1}^n |x_i|^2}.

Usually, this is the norm that people refer to when they aren’t specific about another choice.

A more general norm is the p-norm (p \ge 1):

\normof{\vx}_p = \left( \sum_{i=1}^n |x_i|^p \right)^{1/p}.

This becomes the Euclidean norm when p = 2; hence, when p = 2 it is also called the 2-norm. There are two other common choices:

p = 1 \quad : \quad \normof{\vx}_1 = \sum_{i=1}^n |x_i|

p = \infty \quad : \quad \normof{\vx}_{\infty} = \max_{1 \le i \le n} |x_i|
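As a quick check (a NumPy sketch, not part of the original notes), the three definitions above agree with `numpy.linalg.norm`:

```python
import numpy as np

x = np.array([3.0, -4.0, 0.0, 1.0])

# The p-norms computed directly from the definitions above.
norm1 = np.sum(np.abs(x))               # 1-norm
norm2 = np.sqrt(np.sum(np.abs(x)**2))   # 2-norm (Euclidean)
norminf = np.max(np.abs(x))             # infinity-norm

# NumPy's built-in routine gives the same values.
assert np.isclose(norm1, np.linalg.norm(x, 1))
assert np.isclose(norm2, np.linalg.norm(x, 2))
assert np.isclose(norminf, np.linalg.norm(x, np.inf))
print(norm1, norm2, norminf)
```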

THEOREM (Equivalence of Norms) Let \normof{\cdot}_a and \normof{\cdot}_b be any two vector norms on \RR^n. Then there exist fixed positive constants c_1 and c_2 such that

c_1 \normof{\vx}_a \le \normof{\vx}_b \le c_2 \normof{\vx}_a,

which hold for any \vx \in \RR^n.


\normof{\vx}_2 \le \normof{\vx}_1 \le \sqrt{n} \normof{\vx}_2

\normof{\vx}_{\infty} \le \normof{\vx}_2 \le \sqrt{n} \normof{\vx}_{\infty}

\normof{\vx}_{\infty} \le \normof{\vx}_1 \le n \normof{\vx}_{\infty}
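These three pairs of bounds can be spot-checked numerically; the following sketch (not from the notes) verifies them for random vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
for _ in range(1000):
    x = rng.standard_normal(n)
    n1 = np.linalg.norm(x, 1)
    n2 = np.linalg.norm(x, 2)
    ninf = np.linalg.norm(x, np.inf)
    # The equivalence bounds, with a tiny slack for rounding.
    assert n2 <= n1 + 1e-12 and n1 <= np.sqrt(n) * n2 + 1e-12
    assert ninf <= n2 + 1e-12 and n2 <= np.sqrt(n) * ninf + 1e-12
    assert ninf <= n1 + 1e-12 and n1 <= n * ninf + 1e-12
print("all equivalence bounds hold")
```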

Vector convergence. The importance of this theorem is given by the following definition. Let \vx\itn{1}, \vx\itn{2}, \ldots be a sequence of vectors. We say that \vx\itn{k} converges to \vx if

\lim_{k\to\infty} \normof{\vx\itn{k} - \vx} = 0.

Because of the equivalence of norms, we can show this result for any vector norm. For the forthcoming problems with PageRank, showing such results with the 1-norm will be especially nice.
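To make the definition concrete, here is a small sketch (the example vectors are mine, not from the notes): a sequence whose error halves each step converges, and the 1-norm and \infty-norm errors shrink at the same rate, differing only by a constant factor, as the equivalence theorem promises.

```python
import numpy as np

x = np.array([0.25, 0.5, 0.25])   # the limit (a PageRank-style probability vector)
v = np.array([1.0, -2.0, 1.0])    # a fixed perturbation direction

# x_k = x + v / 2**k converges to x as k grows.
for k in range(1, 6):
    xk = x + v / 2**k
    err1 = np.linalg.norm(xk - x, 1)        # equals 4 / 2**k here
    errinf = np.linalg.norm(xk - x, np.inf)  # equals 2 / 2**k here
    # The two error measures differ by the same constant at every step.
    assert np.isclose(err1, 2 * errinf)
print("convergence rate is norm-independent")
```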

A small digression
Many people call these norms the \ell_1 and \ell_2 norms, or even the L_1, L_2, and L_\infty norms. In my view, these are misnomers, although they aren’t incorrect. Usually the \ell_p and L_p norms apply to sequences and functions, respectively:

\normof{x}_{\ell_p} = \left( \sum_{i=1}^{\infty} |x_i|^p \right)^{1/p} \qquad \normof{f}_{L_p} = \left( \int |f(x)|^p \, dx \right)^{1/p}

(over an appropriately measurable space). So I prefer 1-norm, 2-norm, and \infty-norm to the “L” versions.

Matrix norms

Let \mA be an m \times n matrix. Then any function f where

  1. f(\mA) \ge 0, and f(\mA) = 0 only when \mA = 0,

  2. f(\mA + \mB) \le f(\mA) + f(\mB),

  3. f(\alpha \mA) = |\alpha| f(\mA)

is a matrix norm and we write:

    \normof{\mA} \text{ for } f(\mA).

A frequently used matrix norm is the Frobenius norm:

\normof{\mA}_F = \sqrt{\sum_{i=1}^m \sum_{j=1}^n |A_{i,j}|^2 }

Just like the vector 2-norm, the Frobenius norm treats the entries of \mA as one long vector.
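That observation is easy to verify numerically (a sketch, not from the notes): flattening the matrix and taking the vector 2-norm gives the Frobenius norm.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Frobenius norm from the definition: sqrt of the sum of squared entries.
frob = np.sqrt(np.sum(np.abs(A)**2))

# Identical to the vector 2-norm of the stacked entries ...
assert np.isclose(frob, np.linalg.norm(A.ravel(), 2))
# ... and to NumPy's built-in Frobenius norm.
assert np.isclose(frob, np.linalg.norm(A, 'fro'))
print(frob)
```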

Linear systems

A problem from ranking sports teams

A team’s ASM (average score margin) depends on the strength of its opponents. Suppose we had a measure r_i of each team’s strength (i.e. the ranking), such that r_i - r_j is the expected score margin when team i plays team j. For r to be consistent with the available data, the average discrepancy between the actual and expected score margins should be zero. In other words, for each team i, r ought to satisfy the equation:

\sum_{j \in Games(i)} (r_i - r_j) - (S_i - S_j) = 0
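Collecting these equations over all teams gives a linear system in the ratings r. The sketch below (with made-up game data; the team indices and scores are purely illustrative) builds and solves such a Massey-style system. Note that the system is singular, since ratings are only determined up to a constant shift, so one equation is replaced by the normalization \sum_i r_i = 0.

```python
import numpy as np

# Hypothetical game data: (team_i, team_j, points_i, points_j).
games = [(0, 1, 24, 10), (1, 2, 17, 21), (2, 3, 30, 14),
         (0, 3, 28, 7), (1, 3, 20, 13)]

n = 4
M = np.zeros((n, n))   # one equation per team
b = np.zeros(n)        # accumulated actual score margins
for i, j, si, sj in games:
    M[i, i] += 1; M[j, j] += 1   # each game adds (r_i - r_j) to team i's equation
    M[i, j] -= 1; M[j, i] -= 1
    b[i] += si - sj
    b[j] += sj - si

# M is singular (adding a constant to every r_i changes nothing),
# so replace the last equation with the normalization sum(r) = 0.
M[-1, :] = 1.0
b[-1] = 0.0

r = np.linalg.solve(M, b)
print("ratings:", r)
print("teams strongest to weakest:", np.argsort(-r))
```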


These are not required.

  1. Give an example showing that the p-norm isn’t a norm when 0 < p < 1.

  2. Show that \normof{\vx}_{\infty} = \lim_{p \to \infty} \normof{\vx}_p as defined above.

  3. Show that the max-norm \max_{i,j} |A_{i,j}| is not submultiplicative via an example.