The objective of this lab is to extend XINU's fixed priority scheduling to dynamic priority scheduling that facilitates fair sharing of CPU cycles among processes. Through monitoring, we aim to compensate I/O-bound processes that voluntarily relinquish the CPU before depleting their time slice by improving their responsiveness relative to processes that hog the CPU. We do so by bumping up the priority of I/O-bound processes and lowering the priority of CPU-bound processes based on their behavior. In addition to dynamically adjusting process priority to effect fairness, our scheduler needs to be efficient. We will adopt a dynamic scheduling framework based on the multilevel feedback queue used in operating systems such as UNIX Solaris.
Please use a fresh copy of XINU, xinu-spring2023.tar.gz, except for preserving the mymotd() function from lab1 and removing all code related to xsh from main() (i.e., you are starting with an empty main() before adding code to perform testing). As noted before, main() serves as an app for your own testing purposes. The TAs, when evaluating your code, will use their own main() to evaluate your XINU kernel modifications.
We will modify XINU so that it can monitor CPU usage of processes. For a process that has not terminated, its CPU usage will be the time (in units of msec) that it has spent executing on Galileo's x86 CPU, i.e., in state PR_CURR. Estimating CPU usage accurately requires understanding device management (of which hardware clocks are a part), which we will cover later in the course. We will implement a method that is not very accurate but adequate for our purpose of gauging whether our dynamic priority scheduler implemented in Problem 4 achieves fair allocation of CPU cycles to processes.
Introduce a new process table field, uint32 prcpu, where XINU will record CPU usage (in units of msec). For a process that is not in state PR_CURR, prcpu will contain XINU's estimate of its CPU usage from the time of its creation until now, i.e., the moment prcpu is inspected. For the current process, CPU usage will be the sum of prcpu and currcpu, where currcpu is a global variable of type uint32 that estimates the time (in msec) that the current process has spent in state PR_CURR after being context-switched in. Modify XINU so that when a context switch occurs, the old process's prcpu field is updated as prcpu + currcpu. Then currcpu is reset to 0 so that the time the new process spends in state PR_CURR can be tracked. Since XINU's system timer is set to interrupt every 1 msec, increment currcpu in clkhandler() similar to how msclkcounter2 was updated in Problem 4.1 of lab1.
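A minimal sketch of where this bookkeeping could live, assuming the update of the outgoing process is placed in resched() just before ctxsw() is invoked (the exact placement is up to you):

/* In system/clkhandler.c: the clock interrupts every 1 msec           */
	msclkcounter2++;	/* system time since bootload (lab1)            */
	currcpu++;		/* time current process has spent in PR_CURR    */

/* In system/resched.c: when the scheduler decides to switch processes,  */
/* just before ctxsw() is called on the outgoing process ptold           */
	ptold->prcpu += currcpu;	/* fold elapsed time into old process   */
	currcpu = 0;			/* start timing the incoming process    */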
Please use the "volatile" qualifier when declaring currcpu which helps prevent gcc from incorrectly applying optimization that may introduce bugs. The same goes for other variables, including msclkcounter2, that may be subject to optimization by gcc. Although clkhandler() is called from clkdisp() in clkdisp.S there is no function that calls clkdisp() since it is invoked in response to an asynchronous interrupt. As gcc is a static tool it may generate machine code to increment currcpu in clkhandler() by keeping its value in a register (e.g., ECX) without writing its value to currcpu in main memory every time an increment ALU operation occurs in Galileo's x86 CPU. If the value of currcpu is read by, say, a process running main() then the value read from main memory may not reflect the increment operations performed on the CPU on ECX. Declaring currcpu with the qualifier "volatile" instructs gcc to refrain from such optimization.
Introduce a new system call, syscall cpuusage(pid32), in system/cpuusage.c that returns the total CPU usage (in units of msec) of the process whose PID is passed as the argument. If the argument specifies the PID of the current process, cpuusage() returns prcpu + currcpu. Note that prcpu of the current process is updated only when it is context-switched out. If the argument specifies a process that is not in state PR_CURR, return prcpu. Of course, the argument should be checked to verify that it is a valid PID; return SYSERR if it is not. Test your implementation to assess correctness. Describe in lab3ans.pdf your method for doing so. At the start of the problem description it is noted that the method for estimating CPU usage via currcpu is "not very accurate." Based on the material covered so far and our understanding of how XINU works on x86, discuss in lab3ans.pdf what the chief reason for the inaccuracy may be. Your answer should be technical and precise, and not allude to generalities or traits of operating systems that we have not yet covered.
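A sketch of what system/cpuusage.c could look like, assuming the prcpu field and currcpu global described above (further details are up to you):

/* cpuusage.c - cpuusage */

#include <xinu.h>

/*------------------------------------------------------------------------
 *  cpuusage  -  Return estimated CPU usage (msec) of a process
 *------------------------------------------------------------------------
 */
syscall	cpuusage(
	  pid32		pid		/* ID of process to inspect	*/
	)
{
	intmask	mask;			/* Saved interrupt mask		*/
	uint32	usage;			/* CPU usage to return		*/

	mask = disable();
	if (isbadpid(pid)) {
		restore(mask);
		return SYSERR;
	}

	if (pid == currpid) {
		/* prcpu of the current process is updated only at	*/
		/* context-switch time, so add the time accumulated	*/
		/* since it was switched in				*/
		usage = proctab[pid].prcpu + currcpu;
	} else {
		usage = proctab[pid].prcpu;
	}

	restore(mask);
	return (syscall) usage;
}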
Another important metric, especially for I/O-bound processes, is response time (in units of msec), defined as the time a process resided in XINU's readylist before it became current. In other words, how long a ready process waited in line before it was serviced. Introduce two new process table fields, uint32 prresptime and uint32 prctxswcount. The field prctxswcount, initialized to 0 when a process is created, counts how many times the process has been context-switched in, i.e., entered state PR_CURR from state PR_READY. The field prresptime, initialized to 0 upon process creation, sums the total time the process has spent in the readylist since its creation. We will estimate the average response time of a process over its lifetime by dividing prresptime by prctxswcount. Introduce a new system call, syscall responsetime(pid32), in system/responsetime.c that returns the average response time rounded to an integer (in units of msec) by dividing prresptime by prctxswcount.
When a process enters XINU's readylist, record the current time since the backend machine was bootloaded in a new process table field, uint32 prbeginready. Reuse the global variable msclkcounter2 from lab1 to keep track of system time elapsed since bootload. When a process transitions from PR_READY to PR_CURR, subtract prbeginready from msclkcounter2. If the difference is 0, set it to 1 (msec), in which case we overestimate response time. Add the result to prresptime and increment prctxswcount. When coding responsetime() we need to consider border cases. For example, a process may have become ready for the first time but not yet current. If so, prctxswcount will be 0 and responsetime() should return msclkcounter2 - prbeginready. If the pid specified in the argument of responsetime() is that of a ready process (i.e., it resides in the readylist), then responsetime() will add msclkcounter2 - prbeginready to prresptime (without modifying the stored prresptime) and divide the resulting value by prctxswcount + 1, returning the result as an integer. Note that responsetime() returns values that estimate response time, but it does not update prresptime and prctxswcount.
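A sketch of system/responsetime.c covering the border cases above, assuming the new prresptime, prctxswcount, and prbeginready fields (this is one possible structure, not the required one):

/* responsetime.c - responsetime */

#include <xinu.h>

/*------------------------------------------------------------------------
 *  responsetime  -  Return estimated average response time (msec)
 *------------------------------------------------------------------------
 */
syscall	responsetime(
	  pid32		pid		/* ID of process to inspect	*/
	)
{
	intmask	mask;			/* Saved interrupt mask		*/
	struct	procent	*prptr;		/* Entry in process table	*/
	uint32	waited;			/* Ongoing wait in readylist	*/
	uint32	avg;			/* Average response time	*/

	mask = disable();
	if (isbadpid(pid)) {
		restore(mask);
		return SYSERR;
	}
	prptr = &proctab[pid];

	if (prptr->prctxswcount == 0) {
		/* Ready for the first time but never yet current	*/
		avg = msclkcounter2 - prptr->prbeginready;
	} else if (prptr->prstate == PR_READY) {
		/* Include the current (still ongoing) wait without	*/
		/* updating prresptime or prctxswcount			*/
		waited = msclkcounter2 - prptr->prbeginready;
		avg = (prptr->prresptime + waited) /
		      (prptr->prctxswcount + 1);
	} else {
		avg = prptr->prresptime / prptr->prctxswcount;
	}

	restore(mask);
	return (syscall) avg;
}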
When modifying kernel code to implement CPU usage and response time monitoring, please check that your methods work correctly when a process that is context-switched in is a new process that is running for the first time. Keep in mind that for a new process ctxsw() does not return to resched() but directly jumps to user code (i.e., function pointer passed as first argument of create()). Test your implementation to assess correctness. Describe in lab3ans.pdf your method for doing so.
Note: When implementing and testing code to monitor CPU usage and response time, use the legacy fixed priority XINU kernel, not the kernel with dynamic priority scheduling from Problem 4. Only after verifying that CPU usage and response time monitoring work correctly under fixed priority scheduling should you use them in Problem 4 to evaluate dynamic priority scheduling.
We will use a dynamic process scheduling framework based on multilevel feedback queues to adaptively modify the priority and time slice of processes as a function of their observed behavior.
Classification of processes based on observation of recent run-time behavior must be done efficiently to keep the scheduler's overhead to a minimum. A simple strategy is to classify a process based on its most recent scheduling-related behavior: (a) if a process depleted its time slice, the process is viewed as CPU-bound; (b) if a process has not depleted its time slice and voluntarily gives up the CPU by making a blocking call, we will consider it I/O-bound. A third case (c) is a process that is preempted by a higher or equal priority process that was blocked but has become ready. When a process of equal priority becomes ready we have a choice of preempting or not preempting the current process; for simplicity we will choose the former. In case (c) we reserve judgment and remain neutral by neither increasing nor decreasing the current process's priority. Before resched() is called by kernel functions in the upper and lower halves, XINU needs to make note of which case applies to the current process, which, in turn, affects its future priority and time slice. In case (a), the current process is demoted in the sense that its priority is decreased following XINU's new scheduling table, which follows the structure of the UNIX Solaris dispatch table. Demotion of priority is accompanied by an increase of the time slice, in accordance with the needs of a CPU-bound process. In case (b), the current process is promoted in the sense that its priority is increased per XINU's scheduling table, which also decreases its time slice. In case (c), no changes to the process's priority or time slice are made; we remember the process's remaining time slice so that when it becomes current again its quantum is set to the saved value.
To keep coding to a minimum, we will use sleepms() as a representative blocking system call standing in for all other blocking system calls. Hence a process is considered I/O-bound if it makes frequent sleepms() system calls, thereby voluntarily relinquishing Galileo's CPU before its quantum expires. If a process calls sleepms() with a negative argument, sleepms() will return SYSERR. If the argument is 0, which is allowed in XINU and prompts the kernel to call resched(), we will treat it as I/O-bound behavior since a ready process of equal priority would preempt the current process. If a process knows that there are no ready processes of equal priority, then repeatedly calling sleepms(0) could be a way for the process to amplify its priority, a potential system vulnerability.
For preemption events we will only consider those triggered by system timer management in XINU's lower half, clkhandler(), which checks if a process in state PR_SLEEP needs to be woken up and inserted into the readylist. Time slice depletion is also detected by clkhandler() which calls resched() to let the scheduler determine who to run next.
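One possible way to make note of which case applies is a small global flag set just before resched() is invoked; the names reschedcause, RESCHED_TQEXP, RESCHED_SLEEP, and RESCHED_PREEMPT below are illustrative, not prescribed by this handout:

/* include/xsched.h (illustrative): record why resched() was entered	*/
#define	RESCHED_TQEXP	1	/* case (a): time slice depleted	*/
#define	RESCHED_SLEEP	2	/* case (b): blocked via sleepms()	*/
#define	RESCHED_PREEMPT	3	/* case (c): preempted by awakened process */

extern	volatile int32	reschedcause;

/* Set in sleepms() (upper half) before it blocks and reschedules:	*/
/*	reschedcause = RESCHED_SLEEP;					*/
/* Set in the lower half before resched() is reached:			*/
/*	reschedcause = RESCHED_TQEXP;   when the preempt counter hits 0	*/
/*	reschedcause = RESCHED_PREEMPT; when a sleeping process is woken */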
We will introduce a new kernel data structure, XINU scheduling table, which will implement a simplified version of the UNIX Solaris dispatch table containing 6 entries instead of 60. Hence the range of valid priority values is 0, 1, ..., 5. We will reserve priority value 0 for the idle/NULL process which must have priority strictly less than all other processes so that it only runs when there are no other ready processes in readylist. Hence normal processes take on priority values in the range 1, 2, ..., 5. All processes spawned using create() are subject to priority promotion/demotion based on recent behavior. The idle process is treated as a special case and its priority and time slice stay constant.
Define a structure in a new kernel header file, include/xsched.h:
struct xsched_tab {
	uint16 xtqexp;    // new priority: CPU-bound (time quantum expired)
	uint16 xslpret;   // new priority: I/O-bound (called sleepms())
	uint16 xquantum;  // time slice associated with priority level
};

Declare struct xsched_tab xdynprio[6] as a global in initialize.c. The array is indexed by process priority and initialized so that for the idle process, whose initial priority is set to 0 by sysinit() in system/initialize.c:
xdynprio[0].xtqexp = 0;
xdynprio[0].xslpret = 0;
xdynprio[0].xquantum = QUANTUM;

That is, the idle process, whose priority is initialized to 0, will not be promoted or demoted.
For normal processes spawned by calling create(), modify create() so that it ignores the value passed as the third argument (i.e., priority) if a macro DYNSCHEDENABLE is set as, #define DYNSCHEDENABLE 1, in xsched.h. If DYNSCHEDENABLE is set to 0, XINU performs legacy fixed-priority scheduling. Thus dynamic priority scheduling related modifications made to legacy XINU must be conditioned on DYNSCHEDENABLE so that they can be enabled/disabled. If dynamic priority scheduling is enabled, create() initializes a process's priority to 3 and its time slice to 30 (msec). Hence a newly created process is assigned a priority in the mid-range.
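The conditional compilation inside create() (system/create.c) could look roughly like this; the per-process time-slice field name prquantum is illustrative, not prescribed:

/* include/xsched.h */
#define	DYNSCHEDENABLE	1	/* 1: dynamic priority, 0: legacy fixed priority */

/* system/create.c -- inside create(), where the procent fields are filled in */
#if DYNSCHEDENABLE
	prptr->prprio = 3;		/* ignore caller-supplied priority	*/
	prptr->prquantum = 30;		/* mid-range time slice (msec)		*/
#else
	prptr->prprio = priority;	/* legacy fixed-priority behavior	*/
#endif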
For the remaining entries of xdynprio[i] for i > 0, configure their values as
xdynprio[i].xtqexp = max{1, i-1}
xdynprio[i].xslpret = min{5, i+1}
xdynprio[i].xquantum = 60 - 10*i

A CPU-bound process gets demoted by decrementing its priority, subject to a floor at priority level 1. An I/O-bound process gets promoted by incrementing its priority, subject to a ceiling at priority level 5. The time slice values associated with priority levels 1, 2, 3, 4, 5 are 50, 40, 30, 20, 10 msec, in accordance with the heuristic of assigning CPU-bound processes larger time budgets. If a process repeatedly depletes its time slice it will eventually hit rock bottom at priority level 1. Conversely, if a process repeatedly makes blocking system calls using sleepms(), it eventually reaches the highest priority level 5.
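For example, the table could be filled in by sysinit() along these lines (a sketch assuming xdynprio is declared globally in initialize.c):

/* system/initialize.c -- inside sysinit() */
	int32	i;

	xdynprio[0].xtqexp   = 0;		/* idle process: never moves	*/
	xdynprio[0].xslpret  = 0;
	xdynprio[0].xquantum = QUANTUM;

	for (i = 1; i <= 5; i++) {
		xdynprio[i].xtqexp   = (i - 1 < 1) ? 1 : i - 1;	/* max{1, i-1}	*/
		xdynprio[i].xslpret  = (i + 1 > 5) ? 5 : i + 1;	/* min{5, i+1}	*/
		xdynprio[i].xquantum = 60 - 10 * i;		/* 50,40,30,20,10 */
	}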
It is straightforward to implement a multilevel feedback queue with 6 entries that replaces XINU's readylist by defining a 1-D array of 6 elements of a struct type whose fields are pointers to the head and tail of a FIFO list. The FIFO list contains all ready processes at a specific priority level. The FIFO list at priority level i = 0, 1, ..., 5 is empty if the head and tail pointers are NULL. Insertion incurs constant overhead since a process is added at the end of the list pointed to by the tail pointer. Extraction incurs constant overhead since, after determining the highest priority level at which there exists a ready process (i.e., a FIFO that is not empty), the head pointer is used to extract the first process in the list. Since we will not be evaluating kernel performance using hundreds of processes, at which point the advantage of constant overhead may become significant, we will continue to utilize XINU's readylist to manage ready processes, which also reduces coding effort. Our focus in Problem 4 is dynamically adjusting the priorities of processes based on whether they behave in a CPU- or I/O-bound manner, and using the instrumentation tools from Problem 3 to assess if equitable sharing of CPU cycles is being achieved.
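Although we keep XINU's readylist, the multilevel structure described above could be declared as follows (purely illustrative; you are not required to implement it):

/* Illustrative only: a 6-level feedback queue with O(1) insert/extract.  */
/* Each level is a FIFO of the ready processes at that priority; chaining */
/* the FIFO would require a "next" link per process table entry.          */
struct mlfq_level {
	struct procent *head;	/* first ready process at this level (NULL if empty) */
	struct procent *tail;	/* last ready process at this level  (NULL if empty) */
};
struct mlfq_level mlfq[6];	/* index = priority level 0..5 */

/* Insertion: append at mlfq[prio].tail (constant time).                   */
/* Extraction: scan levels 5 down to 1 for a non-empty FIFO (at most 6     */
/* checks), then remove the process at mlfq[level].head (constant time).   */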
Perform basic debugging by checking the internal operation of the modified kernel to verify correct behavior. Use the macro XINUDEBUG for this purpose; it must be disabled to suppress debug messages in your submitted code. Use the macro XINUTEST to enable output of values during benchmarking, where equitable sharing of CPU cycles (gauged by CPU usage) and improved response time of I/O-bound processes are evaluated.
Benchmark apps

CPU-bound app. Code a function, void cpubnd(void), in system/cpubnd.c, that implements a while-loop. The while-loop checks if msclkcounter2 exceeds a threshold defined by the macro STOPPINGTIME, set to 8000 (msec) in process.h. A process executing cpubnd() will terminate when msclkcounter2 has reached about 8 seconds. By design, a process executing cpubnd() hogs Galileo's CPU and is therefore an extreme case of a CPU-bound app.
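A sketch of system/cpubnd.c, including the printing described under "Benchmark output" below (the exact output format is up to you):

/* cpubnd.c - cpubnd */

#include <xinu.h>

/*------------------------------------------------------------------------
 *  cpubnd  -  Extreme CPU-bound benchmark app: spin until STOPPINGTIME
 *------------------------------------------------------------------------
 */
void	cpubnd(void)
{
	while (msclkcounter2 < STOPPINGTIME) {
		;	/* busy-wait: hog the CPU */
	}

	/* Benchmark output: PID, app type, clock, CPU usage, response time */
	kprintf("pid=%d CPU-bound clk=%u cpu=%d resp=%d\n",
		getpid(), msclkcounter2, cpuusage(getpid()),
		responsetime(getpid()));
}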
I/O-bound app. Code a function, void iobnd(void), in system/iobnd.c, that implements a while-loop to check if msclkcounter2 has exceeded STOPPINGTIME. If so, iobnd() terminates. Unlike the body of cpubnd()'s while-loop, which is empty, iobnd()'s while-loop body has a for-loop followed by a call to sleepms() with argument 80 (msec). Try different values for the bound of the inner for-loop such that it consumes several milliseconds of CPU time. It should not exceed 10 msec, but otherwise the exact value is not important. The inner for-loop can contain arithmetic operations (even a nested for-loop) to help consume CPU time not exceeding 10 msec. Inspect the value of msclkcounter2 before and after the for-loop to calibrate the bound.
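A sketch of system/iobnd.c along these lines; LOOPBOUND is an illustrative name whose value must be calibrated on your backend:

/* iobnd.c - iobnd */

#include <xinu.h>

#define	LOOPBOUND	500000		/* calibrate so the burst stays under ~10 msec */

/*------------------------------------------------------------------------
 *  iobnd  -  I/O-bound benchmark app: brief bursts of work, then sleep
 *------------------------------------------------------------------------
 */
void	iobnd(void)
{
	volatile uint32	x = 0;		/* keeps gcc from removing the loop */
	uint32	i;

	while (msclkcounter2 < STOPPINGTIME) {
		/* Burn a few msec of CPU time */
		for (i = 0; i < LOOPBOUND; i++) {
			x += i;
		}
		sleepms(80);		/* voluntarily give up the CPU */
	}

	kprintf("pid=%d I/O-bound clk=%u cpu=%d resp=%d\n",
		getpid(), msclkcounter2, cpuusage(getpid()),
		responsetime(getpid()));
}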
Benchmark output. Before terminating, cpubnd() and iobnd() print PID, "CPU-bound" or "I/O-bound", msclkcounter2, CPU usage, and response time.
Workload scenarios

We will consider homogeneous and mixed workloads in benchmarks A-C. Benchmark D considers mixed workloads where starvation may occur.
Benchmark A. Spawn a workload generation process using create() that runs main() with priority 5. The process running main() spawns 6 app processes, each executing cpubnd(). Call resume() to ready the 6 processes after creating them. The workload generation process terminates after creating and resuming the six benchmark processes. Output by the 6 app processes upon termination at around 8 seconds of wall clock time should indicate approximately equal sharing of CPU time and similar response times. Discuss your results in lab3ans.pdf.
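A possible main() for benchmark A (illustrative; the 8192-byte stack size and the prototype placement are assumptions):

/* main.c - main (benchmark A driver; illustrative) */

#include <xinu.h>

extern	void	cpubnd(void);	/* normally prototyped in include/prototypes.h */

process	main(void)
{
	int32	i;
	pid32	apps[6];

	/* Spawn 6 CPU-bound app processes; with DYNSCHEDENABLE set to 1,	*/
	/* create() ignores the priority argument and assigns priority 3	*/
	/* and a 30 msec time slice to each new process				*/
	for (i = 0; i < 6; i++) {
		apps[i] = create(cpubnd, 8192, 3, "cpubnd", 0);
	}

	/* Ready all 6 apps; the workload generator then terminates and	*/
	/* each app prints its own statistics before exiting			*/
	for (i = 0; i < 6; i++) {
		resume(apps[i]);
	}
	return OK;
}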
Benchmark B. Repeat benchmark scenario A with the difference that the 6 app processes execute iobnd(). Since the apps are homogeneous, their CPU usage and response time should be similar. Discuss your findings in lab3ans.pdf. Compare the results of benchmarks A and B.
Benchmark C. Let the workload generator main() create 6 app processes, half of them executing cpubnd(), the other half iobnd(). Discuss your results in lab3ans.pdf.
Benchmark D. Code an app, void chameleon(void), in system/chameleon.c that tries to exploit weaknesses in how our dynamic priority XINU scheduler classifies processes to promote/demote priorities to get the best of both worlds: large CPU usage and small response time compared to processes running cpubnd() and iobnd(). Benchmark using 5 processes where 2 run cpubnd(), 2 iobnd(), and 1 chameleon(). Describe in lab3ans.pdf the logic behind your chameleon() app and why you expect it to achieve both large CPU usage and small response time. Analyze your results to assess if chameleon() indeed delivers.
Heuristic methods employed by modern kernels to adjust process priorities dynamically as a function of monitored process behavior are conducive to equitable sharing of CPU cycles and the resultant performance. However, they do not, without additional effort, guarantee that starvation does not occur. That is, a process may for a prolonged period receive disproportionately fewer CPU cycles even though it is ready. Describe in lab3ans.pdf how you would go about enhancing the dynamic priority scheduler of Problem 4 to detect starvation and take remedial action to mitigate it. Devise a new workload involving the CPU-bound app, cpubnd() from Benchmark A, and a revised I/O-bound app, iobnd9() in system/iobnd9.c, that may cause a CPU-bound process running cpubnd() to starve. Use a benchmark scenario involving 6 processes with 1 process running cpubnd() and the rest running iobnd9() to demonstrate starvation. Discuss your results in lab3ans.pdf. You do not need to implement mitigation of starvation.
Note: The bonus problem provides an opportunity to earn extra credits that count toward the lab component of the course. It is purely optional.
General instructions:
When implementing code in the labs, please maintain separate versions/copies of code so that mistakes such as unintentional overwriting or deletion of code are prevented. This is in addition to the efficiency that such organization provides. You may use any number of version control systems such as Git and RCS. Please make sure that your code is protected from public access. For example, when using Git, manage your code locally instead of on its online counterpart GitHub. If you prefer not to use version control tools, you may simply keep manual copies to track the different versions required for development and testing. More vigilance and discipline may be required when doing so.
The TAs, when evaluating your code, will use their own test code (principally main()) to drive your XINU code. The code you put inside main() is for your own testing and will, in general, not be considered during evaluation.
If you are unsure what you need to submit in what format, consult the TA notes link. If it doesn't answer your question, ask during PSOs and office hours which are scheduled M-F.
Specific instructions:
1. Format for submitting written lab answers and kprintf() added for testing and debugging purposes in kernel code:
2. Before submitting your work, make sure to double-check the TA Notes to ensure that any additional requirements and instructions have been followed.
3. Electronic turn-in instructions:
i) Go to the xinu-spring2023/compile directory and run "make clean".
ii) Go to the directory where lab3 (containing xinu-spring2023/ and lab3ans.pdf) is a subdirectory.
For example, if /homes/alice/cs354/lab3/xinu-spring2023 is your directory structure, go to /homes/alice/cs354
iii) Type the following command
turnin -c cs354 -p lab3 lab3
You can check/list the submitted files using
turnin -c cs354 -p lab3 -v
Please make sure to disable all debugging output before submitting your code.