$\newcommand{\eps}{\varepsilon} \newcommand{\kron}{\otimes} \DeclareMathOperator{\diag}{diag} \DeclareMathOperator{\trace}{trace} \DeclareMathOperator{\rank}{rank} \DeclareMathOperator*{\minimize}{minimize} \DeclareMathOperator*{\maximize}{maximize} \DeclareMathOperator{\subjectto}{subject to} \newcommand{\mat}[1]{\boldsymbol{#1}} \renewcommand{\vec}[1]{\boldsymbol{\mathrm{#1}}} \newcommand{\vecalt}[1]{\boldsymbol{#1}} \newcommand{\conj}[1]{\overline{#1}} \newcommand{\normof}[1]{\|#1\|} \newcommand{\onormof}[2]{\|#1\|_{#2}} \newcommand{\MIN}[2]{\begin{array}{ll} \minimize_{#1} & {#2} \end{array}} \newcommand{\MINone}[3]{\begin{array}{ll} \minimize_{#1} & {#2} \\ \subjectto & {#3} \end{array}} \newcommand{\MINthree}[5]{\begin{array}{ll} \minimize_{#1} & {#2} \\ \subjectto & {#3} \\ & {#4} \\ & {#5} \end{array}} \newcommand{\MAX}[2]{\begin{array}{ll} \maximize_{#1} & {#2} \end{array}} \newcommand{\MAXone}[3]{\begin{array}{ll} \maximize_{#1} & {#2} \\ \subjectto & {#3} \end{array}} \newcommand{\itr}[2]{#1^{(#2)}} \newcommand{\itn}[1]{^{(#1)}} \newcommand{\prob}{\mathbb{P}} \newcommand{\probof}[1]{\prob\left\{ #1 \right\}} \newcommand{\pmat}[1]{\begin{pmatrix} #1 \end{pmatrix}} \newcommand{\bmat}[1]{\begin{bmatrix} #1 \end{bmatrix}} \newcommand{\spmat}[1]{\left(\begin{smallmatrix} #1 \end{smallmatrix}\right)} \newcommand{\sbmat}[1]{\left[\begin{smallmatrix} #1 \end{smallmatrix}\right]} \newcommand{\RR}{\mathbb{R}} \newcommand{\CC}{\mathbb{C}} \newcommand{\eye}{\mat{I}} \newcommand{\mA}{\mat{A}} \newcommand{\mB}{\mat{B}} \newcommand{\mC}{\mat{C}} \newcommand{\mD}{\mat{D}} \newcommand{\mE}{\mat{E}} \newcommand{\mF}{\mat{F}} \newcommand{\mG}{\mat{G}} \newcommand{\mH}{\mat{H}} \newcommand{\mI}{\mat{I}} \newcommand{\mJ}{\mat{J}} \newcommand{\mK}{\mat{K}} \newcommand{\mL}{\mat{L}} \newcommand{\mM}{\mat{M}} \newcommand{\mN}{\mat{N}} \newcommand{\mO}{\mat{O}} \newcommand{\mP}{\mat{P}} \newcommand{\mQ}{\mat{Q}} \newcommand{\mR}{\mat{R}} \newcommand{\mS}{\mat{S}} 
\newcommand{\mT}{\mat{T}} \newcommand{\mU}{\mat{U}} \newcommand{\mV}{\mat{V}} \newcommand{\mW}{\mat{W}} \newcommand{\mX}{\mat{X}} \newcommand{\mY}{\mat{Y}} \newcommand{\mZ}{\mat{Z}} \newcommand{\mLambda}{\mat{\Lambda}} \newcommand{\mPbar}{\bar{\mP}} \newcommand{\ones}{\vec{e}} \newcommand{\va}{\vec{a}} \newcommand{\vb}{\vec{b}} \newcommand{\vc}{\vec{c}} \newcommand{\vd}{\vec{d}} \newcommand{\ve}{\vec{e}} \newcommand{\vf}{\vec{f}} \newcommand{\vg}{\vec{g}} \newcommand{\vh}{\vec{h}} \newcommand{\vi}{\vec{i}} \newcommand{\vj}{\vec{j}} \newcommand{\vk}{\vec{k}} \newcommand{\vl}{\vec{l}} \newcommand{\vm}{\vec{m}} \newcommand{\vn}{\vec{n}} \newcommand{\vo}{\vec{o}} \newcommand{\vp}{\vec{p}} \newcommand{\vq}{\vec{q}} \newcommand{\vr}{\vec{r}} \newcommand{\vs}{\vec{s}} \newcommand{\vt}{\vec{t}} \newcommand{\vu}{\vec{u}} \newcommand{\vv}{\vec{v}} \newcommand{\vw}{\vec{w}} \newcommand{\vx}{\vec{x}} \newcommand{\vy}{\vec{y}} \newcommand{\vz}{\vec{z}} \newcommand{\vpi}{\vecalt{\pi}} \newcommand{\vlambda}{\vecalt{\lambda}}$

# Homework 3

Please answer the following questions in complete sentences and submit the solution on Blackboard by midnight on Friday, September 30th.


• 2016-09-27: Small clarifications based on Piazza questions.

## Problem 1: Warm up problems (15 points)

These questions are most similar to what I'd ask on an exam or on future quizzes.

1. Suppose $\mA$ is an $r \times s$ matrix. How many multiplication and addition operations are required for a general matrix-vector product $\mA \vx$?

2. Suppose $\mA$ is an $m \times n$ matrix and $\mD$ is an $m \times m$ diagonal matrix (that is, it has zeros everywhere except on the diagonal elements). Write down the simplest expression you can for the $ij$th entry of $\mD \mA$. (This will end up being used in the future for some problems!)

3. This is just like the previous problem, but now do the same thing where $\mA$ is an $m \times n$ matrix and $\mD$ is an $n \times n$ diagonal matrix. Write down the simplest expression for the $ij$th entry of $\mA \mD$.

## Problem 2: Implement MatMul

(If you wish, you may perform this task in a language besides Julia that provides a built-in matrix-matrix multiplication routine -- such as Matlab or Python -- but you must perform the same comparisons.)

1. Implement the following function:

"""
matmul
========

Compute Matrix-matrix multiplication or MatMul.

Functions
---------
- C = matmul(A,B) returns the matrix-matrix product of C=A*B
"""
function matmul(A,B)
end


using only scalar operations (i.e., without any of Julia's built-in matrix operations).
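To make the restriction concrete, here is a minimal sketch of a simpler operation, an inner product, written with only scalar additions and multiplications (the name `dot_scalar` is ours, purely for illustration, and is not part of the assignment):

```julia
# Inner product using only scalar + and * (no built-in vector operations).
# The name dot_scalar is illustrative only, not part of the assignment.
function dot_scalar(a, b)
    s = zero(eltype(a)) * zero(eltype(b))  # accumulator of a sensible type
    for k = 1:length(a)
        s += a[k] * b[k]
    end
    return s
end
```

Your `matmul` follows the same pattern, with additional loops over the rows of `A` and the columns of `B`.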

2. Report the output of your function for the following commands:

```julia
@show matmul([1 2], [2; 1])
@show matmul([1 2; -2 4; 0 3], [-1; 1])
@show matmul([-1 1], [1 -2 0; 2 4 3])
@show matmul(3, 5)
t = pi/2
@show matmul([cos(t) -sin(t); sin(t) cos(t)],
             [cos(t) sin(t); -sin(t) cos(t)]);
```

3. Compare the time and accuracy of your code to Julia's built-in matrix-matrix operator `*` on matrices of random normals with sizes ranging between $10$ and $1000$. To evaluate accuracy, use:

```julia
C = matmul(A,B)
D = A*B
diff = vecnorm(C-D);
```


To evaluate time, use:

```julia
dt = @elapsed C = matmul(A,B);
```


Use 50 different random samples for each size and report the mean time for Julia vs. your own code in one plot. (So the x-axis shows the problem size, and the y-axis shows two lines: the mean time over 50 trials of Julia's built-in operation, and the mean time over 50 trials of your own function.)

Also prepare the same type of plot of the maximum difference over the 50 trials between your code and Julia's code to show the accuracy.
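The loop above can be sketched as follows; this is a hedged outline, not the required implementation. The stand-in `matmul_stub` and the function name `benchmark` are ours, the stub should be replaced by your `matmul` from part 1, and the sizes shown should be extended up to $1000$ for the report (`norm` plays the role of `vecnorm` on newer Julia versions):

```julia
using LinearAlgebra  # for norm (called vecnorm on older Julia versions)

# Stand-in so the sketch runs on its own; replace with your matmul from part 1.
matmul_stub(A, B) = A * B

# For each size, record: mean time of mymul, mean time of built-in *,
# and the maximum difference over the trials.
function benchmark(mymul; sizes = [10, 50, 100], ntrials = 50)
    results = []
    for n in sizes
        t_mine = 0.0; t_builtin = 0.0; maxdiff = 0.0
        for trial = 1:ntrials
            A = randn(n, n); B = randn(n, n)
            t_mine    += @elapsed C = mymul(A, B)
            t_builtin += @elapsed D = A * B
            maxdiff = max(maxdiff, norm(C - D))
        end
        push!(results, (n, t_mine/ntrials, t_builtin/ntrials, maxdiff))
    end
    return results
end
```

Calling `benchmark(matmul_stub)` returns one tuple per size; the mean times give the two lines of the timing plot, and the last entry gives the accuracy plot.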

4. Write a new function:

"""
matmul2
========

Compute Matrix-matrix multiplication or MatMul faster

Functions
---------
- C = matmul(A,B) returns the matrix-matrix product of C=A*B
"""
function matmul(A,B)
end


that uses some of the ideas we talked about in class to make a faster matrix-matrix product. You may use Julia's built-in matrix-matrix multiplication for matrices up to $16 \times 16$, or matrix-vector products up to $256 \times 1$. Compare the time and accuracy of your new `matmul2` code to your original `matmul`.

## Problem 3: Make Yoda Spin!

1. G&C Chapter 2 Problem 11. The Julia code for this example is:

```julia
using Plots

# create a matrix whose columns contain the coordinates of
# each vertex.
U = [1.0 0 -1 0 1.0; 0 1 0 -1 0]

theta = pi/4.0

# Plot the red unit square.
# Note U[1,:] denotes the first row of U.
plot(U[1,:]',U[2,:]',fill=(0,:red))

# Perform the rotation.
R = [cos(theta) -sin(theta); sin(theta) cos(theta)];
V = R*U;
# Plot the blue square.
plot!(V[1,:]', V[2,:]',fill=(0,:blue))
```
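As a quick sanity check on the rotation matrix (this is also the pattern behind the last test case in Problem 2), rotating by $\theta$ and then undoing it with the transpose gives back the identity; a small sketch:

```julia
theta = pi/4.0
R = [cos(theta) -sin(theta); sin(theta) cos(theta)]
# A rotation matrix is orthogonal: its transpose undoes the rotation,
# so R'*R should be the 2x2 identity up to roundoff.
@assert maximum(abs.(R'*R - [1.0 0; 0 1])) < 1e-14
```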

2. G&C Chapter 2 Problem 12 (not assigned)

**I really wanted to assign this one, but I still haven't
figured out how to translate the Matlab code into Julia
code. This one might get assigned in the future.**


## Problem 4: Make an image small

Consider the following problem. We have a $32 \times 32$ pixel image. Each pixel is a real-valued number between $0$ and $1$. However, I want to show this on an iPhone, and I only have a $16 \times 16$ pixel area to show the image. In order to reduce the size of the image, I want to average groups of $4$ pixels. In this problem, we'll create a Julia program to do this.

Let's work through a smaller example first. For a $4 \times 4$ image, represented here by a matrix-like array, I want to compute the $2 \times 2$ image whose entries are the averages of each $2 \times 2$ block of pixels.

We will solve this problem using a matrix-vector product.

1. Suppose we consider the vectors $\vy \in \RR^{4}$ and $\vx \in \RR^{16}$ to be the new image and old image, respectively, each listing the pixels row by row. Write down the matrix $\mA$ such that $\vy = \mA \vx$.

In the remainder of the problem, we'll work through how to do this for a particular image in Julia.

2. Run

```julia
download("http://www.cs.purdue.edu/homes/dgleich/cs314-2016/homeworks/smallicon.csv","smallicon.csv")
X = readdlm("smallicon.csv",',')   # read the downloaded CSV into a matrix
```

in Julia. You should get a $32 \times 32$ matrix $\mX$. What is the sum of the diagonal elements of $\mX$?

3. Show the image:

```julia
using Images
grayim(X)
```


Or write it to a file:

```julia
using FileIO
using ImageMagick
save("icon1.png",X)
```


But that picture doesn't look right, does it? To get full points, make sure you adjust the figure so that it looks correct!

Show your final code and the image you get.

4. In what follows, we'll talk about two different types of indices. The image index of a pixel is a pair $(i,j)$ that identifies a row and column for the pixel in the image. The vector index of a pixel is the index of that pixel in a linear ordering of the image elements. For instance, in the small example above, pixel $(3,2)$ has linear index $10$, and pixel $(1,4)$ has linear index $4$. Julia can help us build a map between pixel indices and linear or vector indices:

```julia
N = reshape(1:(4*4), 4, 4)'
```


This creates the pixel-index-to-linear-index map for the problem above because

```julia
N[1,4]
N[3,2]
```

return the appropriate entries.
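A quick check (using Julia's square-bracket indexing) that the map gives the indices claimed above:

```julia
# Pixel-index to linear-index map for the 4x4 example; pixels are
# numbered along each row, hence the transpose.
N = reshape(1:(4*4), 4, 4)'
@assert N[1, 4] == 4    # pixel (1,4) has linear index 4
@assert N[3, 2] == 10   # pixel (3,2) has linear index 10
```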

In your own words, explain what the reshape operation does. What happens if you omit the final transpose above and try:

```julia
N = reshape(1:(4*4), 4, 4)
```


5. Now we need to construct the matrix $\mA$ in order to reduce the size of a $32 \times 32$ image to a $16 \times 16$ image as we did in part 1.
Suppose we call the output vector $\vy$ and the output image $\mY$. I'm giving you the following template, which I hope you can fill in. Feel free to construct an $\mA$ that accomplishes our image-reduction task any way you choose, but the following should provide some guidance.

```julia
NX = <fill in> # the map between pixel indices and linear indices for X
NY = <fill in> # the map between pixel indices and linear indices for Y
A = <fill in>  # the matrix we are trying to build
for i=1:32
    for j=1:32
        xi = <fill in> # the index of the pixel i,j in the vector x
        yi = <fill in> # hint, div(i,k) is integer division!

        A[yi,xi] = 1/4 # place the entry of the matrix
    end
end
```

6. In order to use the matrix $\mA$ we created, we need to convert the matrix $\mX$ into a vector! The reshape operation helps here:

```julia
x = reshape(X',32*32,1)
```

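As a sanity check on this vectorization convention, reshaping and transposing back recovers the original matrix; a small sketch on a $4 \times 4$ example (the names `X4` and `x4` are ours, chosen to avoid clobbering the image $\mX$):

```julia
# Round-trip check of the row-ordered vectorization on a small test matrix.
X4 = reshape(1.0:16.0, 4, 4)'     # any 4x4 matrix will do
x4 = reshape(X4', 4*4, 1)         # vectorize by rows, as for the 32x32 image
@assert reshape(x4, 4, 4)' == X4  # the round trip is lossless
```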

We can now rescale the image $\mX$ by multiplying by $\mA$ and reorganizing back into a matrix $\mY$.

```julia
y = A*x
Y = reshape(y,16,16)'
```


Show the image of $\mY$. Does that look correct?
(Hint: apply the same correction you did before!)