Students may register for CS 358 and do a "theory" project or CS 342 and do a "systems" project.
Processes and objects are both relatively complex entities that are used to structure the control and data flow of a computation. It seems reasonable to expect some form of connection between the two constructs. In addition, it seems difficult to avoid design considerations that involve interactions between objects and processes.
There have been some designs, such as COOL at Stanford, that do not seem to combine objects and processes in any interesting way. However, it might be an interesting project to analyze this system, or try writing some code, in light of the ideas that arise in this class.
Some design possibilities:
class stack
  private
    contents : array[1..n] of int
    top : int
  constructor stack () =
    top := 0
  public
    push (x:int) : unit =
      if top < n
      then top := top+1; contents(top) := x
      else raise stack_full;
    pop ( ) : int =
      if top > 0
      then top := top-1; return contents(top+1)
      else raise stack_empty;
end stack

Using interleaving to explain the result, we could have this sequence of operations:
Initial state:  top = 1,  contents(1) = 3,  contents(2) = #

    Call:       push(4)            pop()
    Commands:   top := 2
                                   top := 1
                contents(1) := 4
                                   return #

Final state:    top = 1,  contents(1) = 4,  contents(2) = #

A general correctness condition for parallel execution of parts of a program is that this should be indistinguishable, in overall effect, from some serial ordering of the program parts. (This idea comes from distributed databases, say, where requests to the database are not inherently serialized.) However, the result of the interleaved sequence of commands above is not consistent with any sequential ordering of push and pop: executed serially, pop would return either 4 (if push runs first) or 3 (if pop runs first), never the undefined value #.
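The interleaving above can be replayed deterministically. The sketch below (our illustration, not part of the notes) represents the stack state as a dictionary, applies the four primitive commands in the interleaved order, and compares the result with both serial orders of push(4) and pop().

```python
# Replay the interleaved trace and both serial orders of push(4) / pop().
# State: "top" index, "contents" map (missing entries shown as "#"),
# and a log of values returned by pop.

def run(commands, top, contents):
    """Apply a sequence of primitive commands; return the final state."""
    state = {"top": top, "contents": dict(contents), "popped": []}
    for cmd in commands:
        cmd(state)
    return state

# Primitive commands of push(4): top := top+1, then contents(top) := 4.
push_incr  = lambda s: s.update(top=s["top"] + 1)
push_write = lambda s: s["contents"].update({s["top"]: 4})
# Primitive commands of pop(): top := top-1, then return contents(top+1).
pop_decr   = lambda s: s.update(top=s["top"] - 1)
pop_read   = lambda s: s["popped"].append(s["contents"].get(s["top"] + 1, "#"))

initial = {1: 3}  # top = 1, contents(1) = 3, contents(2) undefined ("#")

interleaved   = run([push_incr, pop_decr, push_write, pop_read], 1, initial)
push_then_pop = run([push_incr, push_write, pop_decr, pop_read], 1, initial)
pop_then_push = run([pop_decr, pop_read, push_incr, push_write], 1, initial)

print("interleaved:", interleaved["contents"], interleaved["popped"])    # {1: 4} ['#']
print("push; pop:  ", push_then_pop["contents"], push_then_pop["popped"])  # {1: 3, 2: 4} [4]
print("pop; push:  ", pop_then_push["contents"], pop_then_push["popped"])  # {1: 4} [3]
```

The interleaved run has pop returning the undefined value #, which matches neither serial order, so the execution is not serializable.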
Process calculus models of objects such as \cite{Walker,PierceTurner94:COPC} use records, each represented using a single channel, to represent objects. Since messages along a channel arrive serially, messages to objects are also serialized. We will investigate whether this approach is likely to yield useful parallelism for realistic programs.
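The channel-per-object encoding can be mimicked in a thread-based setting. In the sketch below (the `StackServer` class and its message format are our illustration, not taken from the cited papers), all requests to the object arrive on a single queue and are handled one at a time by a single server thread, so method invocations are serialized by construction:

```python
# "Object as a single channel": one request queue, one server thread.
import queue
import threading

class StackServer:
    def __init__(self):
        self.requests = queue.Queue()    # the object's single channel
        self.contents = []
        threading.Thread(target=self._serve, daemon=True).start()

    def _serve(self):
        # Messages on the channel arrive serially, so there is never
        # more than one method body executing at a time.
        while True:
            op, arg, reply = self.requests.get()
            if op == "push":
                self.contents.append(arg)
                reply.put(None)
            elif op == "pop":
                reply.put(self.contents.pop() if self.contents else "stack_empty")

    def call(self, op, arg=None):
        reply = queue.Queue(maxsize=1)   # private reply channel per request
        self.requests.put((op, arg, reply))
        return reply.get()               # block until the server answers

s = StackServer()
s.call("push", 3)
s.call("push", 4)
print(s.call("pop"), s.call("pop"))      # 4 3 -- no interleaving is possible
```

Serialization makes the stack trivially safe, but it also means that even independent requests cannot overlap, which is the parallelism question raised above.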
In Cliff Jones's $\pi o\beta\lambda$ language, for example, only one method can be invoked at a time, resulting in a rendez-vous that causes the caller (client) to block until the object (server) exits the rendez-vous. An object may continue execution of the method body after the rendez-vous, allowing the caller to proceed in parallel, but the object cannot accept another message until that computation is complete.
An alternative to parallel execution of methods is to serialize method invocation, then allow each method body to decide when it is "safe" for the next method to begin. In other words, each method body would be structured into a critical section and a non-critical section. The critical section would be executed first, excluding the execution of other methods. When the critical section is completed, the non-critical section could continue in parallel with the critical section (or other parts) of another method.
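One way to render this discipline concretely is with a mutual-exclusion lock: the lock is held exactly for the critical section, and releasing it plays the role of end_critical_region. The sketch below is our illustration (the split into sections is an assumption, not the notes' answer to the exercise below), using Python's threading.Lock:

```python
# Critical / non-critical sections via a lock held only for state updates.
import threading

class Stack:
    def __init__(self, n=10):
        self.lock = threading.Lock()      # guards the critical sections
        self.contents = [None] * (n + 1)  # 1-based indexing, as in the notes
        self.top = 0
        self.n = n

    def push(self, x):
        self.lock.acquire()               # begin critical section
        if self.top >= self.n:
            self.lock.release()
            raise OverflowError("stack_full")
        self.top += 1
        self.contents[self.top] = x
        self.lock.release()               # end_critical_region
        # non-critical section: e.g. logging or notification could run
        # here in parallel with the next method's critical section

    def pop(self):
        self.lock.acquire()               # begin critical section
        if self.top <= 0:
            self.lock.release()
            raise IndexError("stack_empty")
        self.top -= 1
        x = self.contents[self.top + 1]
        self.lock.release()               # end_critical_region
        return x                          # non-critical section

s = Stack()
s.push(3); s.push(4)
print(s.pop(), s.pop())                   # 4 3
```

Because the index update and the array access happen inside one critical section, the bad interleaving shown earlier (pop reading a slot that push has not yet written) cannot occur.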
Exercise: Structure the implementations of push and pop into critical and non-critical sections so that concurrent execution of these methods would be correct. Does this approach seem useful? Try rewriting the following implementation of queues using critical regions. Explain why your implementation is correct. You may assume that after insert passes the end of its critical region, a call to remove may be processed in parallel, but another call to insert may not. Mark the end of the critical region of each method with a statement like end_critical_region. Would you be able to do better if you could explicitly lock a part of the array without locking all of it?
class queue
  private
    contents : array[0..n-1] of int
    front, back : int
  constructor queue () =
    front := back := 0
  public
    insert (x:int) : unit =
      if (back+1) mod n != front
      then back := (back+1) mod n; contents(back) := x
      else raise queue_full;
    remove ( ) : int =
      if front != back
      then front := (front+1) mod n; return contents(front)
      else raise queue_empty;
end queue
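Before adding critical regions, it may help to pin down the intended serial behavior. The following is a direct Python transcription of the queue (our sketch; note the explicit parentheses in (back + 1) % n, and the usual circular-buffer convention of leaving one slot unused to distinguish full from empty):

```python
# Serial circular-buffer queue: front == back means empty; one slot
# is left unused so that a full queue is distinguishable from an empty one.
class Queue:
    def __init__(self, n=4):
        self.n = n
        self.contents = [None] * n
        self.front = self.back = 0

    def insert(self, x):
        if (self.back + 1) % self.n == self.front:
            raise OverflowError("queue_full")
        self.back = (self.back + 1) % self.n
        self.contents[self.back] = x

    def remove(self):
        if self.front == self.back:
            raise IndexError("queue_empty")
        self.front = (self.front + 1) % self.n
        return self.contents[self.front]

q = Queue()
q.insert(1); q.insert(2); q.insert(3)
print(q.remove(), q.remove(), q.remove())   # 1 2 3
```

Notice that insert touches only back and one array slot, while remove touches only front and a different slot; this disjointness is what makes the exercise's insert/remove overlap plausible, and what finer-grained locking of parts of the array could exploit.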
It is also not clear how much programmer attention is required. When objects are known to be local, it is tiresome to specify the action to take when communication fails or times out. However, some programmer effort may be necessary in the case of remote communication.