Assignment 5
Message Passing
Due Date: Nov. 18, 2014
In this assignment, you will extend your previous CHESS implementation to explore all 2-preemption schedules, and then parallelize it using MPI.
There are two things you need to consider: job assignment and
new synchronization points.
- Job assignment:
Make sure that no two processes work on the same schedule. In other
words, each schedule should be examined exactly once, by exactly one process.
- New synchronization points:
To explore two-preemption schedules,
processes should share newly discovered synchronization points with each other.
For example, at the very beginning, one process examines the 0-preemption
schedule and discovers a list of possible synchronization points. This process
should share that list with the other processes so that they can work on
1-preemption schedules. Similarly, when a process examines a 1-preemption
schedule, it should share the list of possible synchronization points
discovered after the first preemption.
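The job-assignment rule above can be met with a simple static partition, sketched below: every launcher computes locally which MPI rank owns a schedule, so no schedule is examined twice. This assumes schedules can be numbered with integer indices; the names `owner_rank` and `my_jobs` are illustrative, not a required interface, and a real system could instead use a dynamic master-worker queue.

```python
# Sketch of static job assignment: schedule i belongs to rank i mod P.
# Integer schedule IDs and these function names are assumptions made
# for illustration, not part of the assignment's required interface.

def owner_rank(schedule_id, world_size):
    """Deterministically map each schedule to exactly one MPI rank."""
    return schedule_id % world_size

def my_jobs(schedule_ids, rank, world_size):
    """The subset of schedules this rank is responsible for."""
    return [s for s in schedule_ids if owner_rank(s, world_size) == rank]

if __name__ == "__main__":
    schedules = list(range(10))
    parts = [my_jobs(schedules, r, 4) for r in range(4)]
    # Disjoint and complete: each schedule examined by exactly one rank.
    assert sorted(s for part in parts for s in part) == schedules
```

Because the mapping is deterministic, no MPI messages are needed to agree on ownership; launchers only need to exchange the newly discovered synchronization points themselves.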
Requirement:
- Run CHESS on n processes concurrently. Performance should improve close to linearly as the number of processes increases.
- Make CHESS explore all 2-preemption schedules.
NOTE: to test all n-preemption schedules, each process testing an
(n-1)-preemption schedule should report newly observed synchronization
points to the others so that another process can examine a new
n-preemption schedule.
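The rule in the NOTE above amounts to a small job-expansion step. The sketch below assumes, purely for illustration, that a job is identified by the tuple of synchronization points at which it preempts; your encoding may differ.

```python
# Sketch: turning reported sync points into new jobs. The job encoding
# (a tuple of preemption points) and the names are illustrative assumptions.

PREEMPTION_BOUND = 2  # this assignment explores up to 2 preemptions

def expand(job, reported_points):
    """Given a k-preemption job and the sync points reported after its
    last preemption, derive the (k+1)-preemption jobs to distribute."""
    if len(job) >= PREEMPTION_BOUND:
        return []  # bound reached; nothing new to schedule
    return [job + (p,) for p in reported_points]

if __name__ == "__main__":
    # The 0-preemption run reports sync points "a" and "b" ...
    jobs1 = expand((), ["a", "b"])      # one 1-preemption job per point
    # ... and each 1-preemption run reports further points.
    jobs2 = expand(("a",), ["b", "c"])  # 2-preemption jobs
    assert expand(("a", "b"), ["c"]) == []  # 2-preemption jobs are leaves
```

In the parallel system, the `reported_points` list is exactly what a launcher shares over MPI after one chess run, and the expanded jobs are what gets distributed next.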
Turnin:
- Send your code and the required documents to Jiangjun Huang.
- You must provide a README file that explains how to run your system.
Hint:
Because the MPI library itself uses the pthread library, you cannot call the
MPI library from inside your chess code. Instead, write a launcher that
executes the chess code in a separate process. The launcher uses the MPI
library to share new synchronization points and assign jobs, and then executes
one chess instance per assigned job. You can modify your launcher from
Assignment 3 if you have one.
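The launcher side of this hint can be sketched as follows. The chess interface assumed here (the schedule passed on the command line, one discovered synchronization point per line on stdout) is hypothetical; a tiny python3 command stands in for the chess binary so the sketch runs anywhere, and the MPI calls the real launcher would make are omitted.

```python
# Sketch of a launcher executing one chess instance per assigned job.
# The chess command-line interface assumed here is hypothetical.
import subprocess

def run_chess(chess_cmd, schedule):
    """Run chess in a separate process for one assigned schedule and
    collect the synchronization points it reports on stdout."""
    proc = subprocess.run(chess_cmd + [",".join(schedule)],
                          capture_output=True, text=True, check=True)
    return [line for line in proc.stdout.splitlines() if line]

if __name__ == "__main__":
    # Stand-in for ./chess: "reports" two sync points, ignores its argv.
    fake_chess = ["python3", "-c", "print('lock_A'); print('lock_B')"]
    points = run_chess(fake_chess, ("p1",))
    assert points == ["lock_A", "lock_B"]
```

Running chess in a child process this way keeps MPI (and its pthread usage) entirely inside the launcher, which is the point of the hint.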
In summary, your system works as follows:
- The launchers communicate with each other through MPI and distribute jobs.
- When a job is assigned to a launcher, the launcher executes chess in a
separate process and passes the assigned schedule to chess, as in
Assignment 3.
- After the chess process explores the schedule, the launcher collects the new
synchronization points and shares them with the other launchers using MPI.
- Repeat from the first step.