# CS 358. Concurrent Object-Oriented Programming Spring 1996

## Lectures 2-3. Basic Issues in Object-Orientation and Concurrency

References:
G.R. Andrews, Concurrent Programming: Principles and Practice, Benjamin/Cummings, 1991.
R. Milner, Semantics of Concurrent Processes, in J. van Leeuwen (ed.), Handbook of Theoretical Computer Science, Elsevier and MIT Press, 1990.

This lecture reviews background material that may be familiar to many students.

### Objects

An object consists of
• private data,
• public methods,
• a communication protocol.
In sequential languages, communication is generally done by function call. In this case, the communication protocol is given by a set of selectors, each naming a method of the object, together with the number and type of arguments (or parameters) that must be supplied when a method is invoked. In Smalltalk terminology, a message to an object consists of a selector name together with actual values for each of the required parameters.

In concurrent languages, methods may execute in response to other forms of communication, such as broadcast events of a specific form or point-to-point communication along specific channels. Depending on the programming language or definition of the object, communication with objects may be synchronous or asynchronous.

The methods of an object may use the private data to compute and return a value. When invoked or activated, a method may also send messages to other objects, create new objects, or (depending on the programming language) change the values of its private data.

An important aspect of objects is encapsulation, namely, the fact that access to private data, and possibly other private features such as private methods, is restricted to the methods of the object. Thus there are two views of an object, the internal view that includes private features, and the external view that is provided to client programs.
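
These two views can be sketched in a language such as Python, where privacy is a matter of convention (the Account class and its methods are invented for illustration; the leading underscore marks the balance as private):

```python
# A hypothetical bank-account object: the balance is private data,
# deposit/balance are public methods, and the set of method names
# with their parameter types forms the communication protocol.
class Account:
    def __init__(self):
        self._balance = 0          # private data (internal view only, by convention)

    def deposit(self, amount):     # public method: selector "deposit", one argument
        self._balance += amount

    def balance(self):             # public method: returns a value computed from private data
        return self._balance

# External view: client programs use only the public methods.
acct = Account()
acct.deposit(10)
acct.deposit(5)
print(acct.balance())  # → 15
```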

Relevance to concurrency:

• private data -- provides some notion of "locality"
• communication protocol -- object boundary identifies place where remote communication could take place

### Program Extensibility: Polymorphism, Subtyping and Inheritance

A program or expression is polymorphic if it works over many types of data. Object-oriented programs are implicitly polymorphic -- since a program accesses an object only through its interface, an object may be replaced by another that has the required interface. In particular, an object may be replaced by one that has a more specific interface with guaranteed additional behavior.

Subtyping is a relation on types. The type of an object usually consists of its method names (or selectors), the types of parameters for each method, and any additional description of its communication protocol that might be appropriate. The general description of subtyping is that A is a subtype of B, written A <: B, if any object of type A may be used in place of an object of type B. When A <: B, the interface information specified by B is a subset of the interface information specified by A.
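
This substitutability can be illustrated in a dynamically typed language such as Python, where only the substitution itself (not any compiler-checked subtype relation) is at work; the classes and methods below are invented for illustration:

```python
# A client that requires only a "speak" method in the interface.
class Greeter:
    def speak(self):
        return "hello"

class LoudGreeter:
    # A more specific interface: everything Greeter offers, plus more.
    def speak(self):
        return "hello"
    def shout(self):
        return "HELLO"

def greet_twice(g):
    # This client uses only the Greeter interface, so an object of the
    # richer type LoudGreeter may be used wherever a Greeter is expected.
    return g.speak() + " " + g.speak()

print(greet_twice(Greeter()))      # → hello hello
print(greet_twice(LoudGreeter()))  # → hello hello
```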

One potential confusion about subtyping is that many statically typed programming languages do not recognize all the subtyping that is possible. (There may be good reasons for this.) We can understand this by distinguishing semantic subtyping from syntactic subtyping. Semantic subtyping means that semantically, or in principle, objects of one type could be used in place of objects of another. In contrast, syntactic subtyping means that in a particular programming language, the compiler recognizes that one type named syntactically in a program is a subtype of another type named in a program. For most languages, every syntactic instance of subtyping recognized by the compiler is an instance of semantic subtyping, but not conversely.

Inheritance is a mechanism for reusing code, generally provided as a way of defining objects of one type from objects of a related but simpler type. In concurrent object-oriented languages, it might be possible to inherit behavior or parts of the communication protocol of an object (or class of objects).

### Concurrency

Some important issues that distinguish concurrent from sequential computation are:
• mechanisms to initiate and terminate independent processes,
• nondeterminism, due to uncertainties and variations in the speed of execution of different processes and the speed and reliability of communication,
• the need for communication between processes, achieved by mechanisms such as buffered or synchronous communication channels, broadcast, or shared variables,
• interference between processes or contention for resources,
• atomicity, which affects both the interaction between processes and the handling of error conditions (An atomic action is one which either completes successfully or terminates in a state that is indistinguishable from the state in which the action was initiated. A composite action may involve intermediate states that are observable by other processes; the process may halt in error before the entire action is complete, leaving some trace of its partial progress.),
• coordination mechanisms that explicitly or implicitly cause processes to wait for results from other processes before continuing. These mechanisms may be used to limit interference between processes.

There are also some changes in perspective. For example, a sequential program is rarely considered useful unless it terminates. However, in a concurrent context, a nonterminating process, or system that contains nonterminating processes, may be useful if it communicates with other processes. In fact, termination may be undesirable for control processes, operating systems and other applications.

A standard view of correctness is that any simultaneous execution of atomic processes A and B is allowed if the result is equivalent to some serial execution of one followed by the other. If concurrent or interleaved scheduling of two or more processes is equivalent to some serial execution, then we say the schedule is serializable. A simple example involves the following processes, each consisting of a single assignment.
    Process A:   y := x+1
    Process B:   x := y+1

If x=y=1 initially, then any serializable scheduling of processes A and B must terminate with either y=2 and x=3 or x=2 and y=3.

If each statement is compiled into a series of machine instructions, and the instructions are executed concurrently without any effort to restrict the order of reads and writes to x and y, then it is possible to get other results. For example, the constituent instructions could be executed in this order:


Process A                 Process B
-----------               -----------
load x into register 1    load y into register 2
increment register 1      increment register 2
store register 1 in y     store register 2 in x

This leaves x=y=2, which is not equivalent to any serial execution. Using locks, it is possible to prevent such non-serializable executions. However, it is also possible to produce deadlock unless specific conventions (or careful designs) are used.
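
The two outcomes can be reproduced with a small deterministic simulation; the sketch below (in Python, with invented step names) replays one serial and one interleaved schedule of the load/increment/store steps:

```python
# Deterministic simulation of the example: each assignment is split into
# load / increment / store steps, and we replay two schedules by hand.
def run(schedule):
    """Execute a schedule, where each step is (process, operation)."""
    mem = {"x": 1, "y": 1}
    reg = {}                      # one register per process
    for proc, op in schedule:
        src, dst = ("x", "y") if proc == "A" else ("y", "x")
        if op == "load":
            reg[proc] = mem[src]
        elif op == "incr":
            reg[proc] += 1
        elif op == "store":
            mem[dst] = reg[proc]
    return mem

# Serial schedule: all of A, then all of B.
serial = [("A", "load"), ("A", "incr"), ("A", "store"),
          ("B", "load"), ("B", "incr"), ("B", "store")]
print(run(serial))        # → {'x': 3, 'y': 2}

# Interleaved schedule in which both processes load before either stores.
interleaved = [("A", "load"), ("B", "load"),
               ("A", "incr"), ("B", "incr"),
               ("A", "store"), ("B", "store")]
print(run(interleaved))   # → {'x': 2, 'y': 2}, not equivalent to any serial run
```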

One mechanism for limiting interference between concurrent processes is to maintain read and write locks for shared data. If a process must carry out a transaction that involves reading or writing data, the process must first acquire the appropriate read or write lock. A standard convention is that when one process has a read lock, no other process may write to that location. However, other processes may be allowed to acquire read locks on the same location.

Deadlock is a situation in which a process can never proceed to termination due to the state of another process. This may occur with the example processes above, if each process simply locks each variable in succession. More specifically, process A may first lock x, compute x+1 and then try to lock y. In parallel, process B may lock y, compute y+1 and then try to lock x. The processes are then deadlocked since neither process can proceed. Each requires a lock held by the other.
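
The circular wait can be sketched with Python threads and locks. To keep the demonstration terminating, process A gives up after a timeout rather than waiting forever, and process B's hold on its lock is bounded; the variable names follow the example above:

```python
import threading
import time

# Process A holds the lock on x and wants the lock on y,
# while process B holds the lock on y (and would want x).
lock_x, lock_y = threading.Lock(), threading.Lock()
b_holds_y = threading.Event()
result = {}

def process_a():
    with lock_x:                 # A locks x ...
        b_holds_y.wait()         # ... waits until B certainly holds y ...
        result["A got y"] = lock_y.acquire(timeout=0.2)   # ... then tries y
        if result["A got y"]:
            lock_y.release()

def process_b():
    with lock_y:                 # B locks y ...
        b_holds_y.set()
        time.sleep(0.5)          # ... and keeps holding it (as if blocked on x)

a = threading.Thread(target=process_a)
b = threading.Thread(target=process_b)
a.start(); b.start(); a.join(); b.join()
print(result)   # → {'A got y': False}: A cannot acquire y while B holds it
```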

A technique that may be used to avoid deadlock is called two-phase locking. In simple forms of two-phase locking, a process is viewed as a sequence of independent tasks. For each task, the process must first acquire all read and write locks that are needed (or could be needed) for the task to complete. Then, as the computation proceeds, the process may release locks that are no longer needed for the current task. Thus there are two phases in the use of locks, a locking phase in which all locks are acquired, and a second phase in which all locks are released. (The locking phase may be implemented by establishing a critical section for each process. This is discussed below.) If a process proceeds after acquiring all locks, then it cannot deadlock during this task. Since similar reasoning holds for all tasks, we can see that this convention prevents deadlock.
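
A minimal sketch of this discipline in Python, assuming an invented transfer task that needs two locks: every lock is acquired before any work is done, and all locks are released only afterwards. Acquiring the locks in a fixed order (x before y) also rules out the circular wait described above.

```python
import threading

# Simple two-phase locking: acquire every lock the task needs up front,
# do the work, then release them all.
lock_x, lock_y = threading.Lock(), threading.Lock()
data = {"x": 1, "y": 1}

def transfer(amount):
    # Phase 1: acquire all needed locks, always x before y, so two
    # such tasks can never deadlock on each other.
    with lock_x, lock_y:
        # The task runs with every lock held; no other task interleaves.
        data["x"] -= amount
        data["y"] += amount
    # Phase 2: leaving the with-block releases all locks.

threads = [threading.Thread(target=transfer, args=(1,)) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(data)   # → {'x': -9, 'y': 11}
```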

The critical section problem is a traditional problem in concurrent programming. It is described in detail in many books on concurrent programming or operating system techniques. Suppose that we have some set of processes, each structured so it has a critical section in which some shared resources must be used, followed by some non-critical section code that may proceed without preventing any other process from entering its critical section. The problem is to design a protocol that meets four criteria:

• Mutual exclusion: at most one process may execute its critical section at any given time,
• Absence of Deadlock: If two or more processes are trying to enter their critical sections, at least one will succeed,
• Absence of Unnecessary Delay: If a process is trying to enter its critical section and the other processes are either in their non-critical sections or have terminated, then the first process may enter its critical section,
• Eventual Entry: A process that is attempting to enter its critical section will eventually succeed.
It is possible to satisfy the first three conditions using shared variables. The fourth condition is more difficult, generally requiring some sort of timeout procedure in the case that a process within its critical section somehow fails to proceed.
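
One classical shared-variable solution for two processes is Peterson's algorithm, sketched below in Python. The sketch relies on CPython's global interpreter lock to keep the simple reads and writes of the shared variables sequentially consistent (the shorter switch interval just speeds up the busy-waiting), so it should be read as an illustration of the protocol rather than as portable synchronization code:

```python
import sys
import threading

# Peterson's algorithm: mutual exclusion for two processes using only
# shared variables.
sys.setswitchinterval(0.0001)   # make busy-wait handoffs fast

flag = [False, False]   # flag[i]: process i wants to enter its critical section
turn = 0                # which process must wait when both want to enter
counter = 0             # the shared resource

def process(i, iterations):
    global turn, counter
    other = 1 - i
    for _ in range(iterations):
        # Entry protocol.
        flag[i] = True
        turn = other
        while flag[other] and turn == other:
            pass                 # busy-wait until it is safe to enter
        # Critical section: at most one process executes this at a time.
        counter += 1
        # Exit protocol.
        flag[i] = False
        # (Non-critical section would go here.)

t0 = threading.Thread(target=process, args=(0, 300))
t1 = threading.Thread(target=process, args=(1, 300))
t0.start(); t1.start(); t0.join(); t1.join()
print(counter)   # → 600
```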

### Process Calculus

There are a number of versions of process calculi, including Hoare's Communicating Sequential Processes (CSP) and Milner's Calculus of Communicating Systems (CCS). These calculi are interleaving models of concurrency, in that the concurrent processes are executed by interleaving their actions in some way. It is also typical for process calculi to provide a synchronous form of communication, where send and receive are performed simultaneously when both processes are ready to do so.

#### Main points

• Represent parallel execution by interleaving.
• Synchronous communication.
• Study process equivalence.

#### Processes

Assume a set of process variables X, Y, Z, ... and atomic actions a, b, c, and their complements a', b', c', ... The intuitive idea is that a and a' are dual, representing abstractly the principle that for an action such as "send the value of x on channel c" there is a complementary action "read the value of y from channel c" that must be performed simultaneously in a synchronous system. In a more detailed form of process calculus, we might have specific data values that could be sent or received as part of communication actions.

The set L of labels contains all of the atomic actions a, b, c, and their complements a', b', c', ..., together with a so-called silent action \tau. The processes are given by the grammar

E ::=   X  |  Halt | a.E  | E+E  |  E|E  |   E\L   |  fix_j { X_i = E_i | i in I}

where L is a set of labels. These have the following intuitive meanings:
• X is a process variable (to be bound by enclosing recursion)
• Halt is a process that does nothing (halts)
• a.E is the process that performs action a then behaves as E
• E_1+E_2, nondeterministic choice, may behave as E_1 or E_2
• E_1|E_2, parallel composition, performs E_1 and E_2 in parallel
• E\L, restriction, behaves like E, but with actions in L prohibited
• fix_j { X_i = E_i | i in I} is the jth component of the sequence of solutions to the recursive system X_i = E_i for i in I.
The most confusing construct is recursion. The idea behind fix_j { X_i = E_i | i in I} is that we might have a set of mutually recursive processes, such as
    X = a.Y
    Y = b.Z + c.X
    Z = a.a.Y + b.c.X

If we want to write an expression for the second process, Y, in this sequence, we can write
   fix_2 { X_1 = a.X_2,     X_2 = b.X_3 + c.X_1,    X_3 = a.a.X_2 + b.c.X_1 }

The subscript 2 in fix_2 indicates that, of the three processes listed, the expression is meant to give the second one. Processes 1 and 3 are there only to make it easier to write the second one.

A technical point that won't be very important in this course is that there is an alternative syntax for recursion. Assuming we always write finite expressions (i.e., the index set I is finite), we could rewrite any process fix_j { X_i = E_i | i in I} in the form fix_1 { X_1 = E_1 }, where E_1 only contains fix_1, not fix_j for j>1. The idea is just to replace mutually recursive declarations by nested declarations. Since this is always possible, we can replace the general form above by the simpler form mu X. E for the process X satisfying X=E.

#### Operational semantics

The execution of a process may be described using a labelled transition system. The assertion E --a--> E' means that expression E defines a process that can do atomic action a and then behave according to E'. We also write E --t--> E' for a sequence t of actions to mean that E defines a process that can do the sequence t of atomic actions and then behave according to E'.

Intuitively, E --a--> E' means, "process, or concurrent system consisting of one or more processes, E can do action a if the environment will permit it and, after doing so, will be in the configuration described by E'." An important aspect of the transition relation is the provision about the environment. This may be understood by considering a process

Process A:   send the value of x on channel C, then halt

when channels allow only synchronous communication. In a process-calculus-like formalism for describing communication along synchronous channels, we might write something like
             (Process A)  --Write(x,C)-->  Halt

However, in a synchronous setting, it would not be possible for Process A to actually do this unless there were another process ready to read from channel C. In process calculus, the dependence on the environment shows up in the behavior of the restriction operator "\", which restricts interaction to certain designated processes. This is illustrated by example after the presentation of the operational semantics.

The three-place relation (.) --(.)--> (.) on processes and atomic actions is defined by the following axiom and inference rules.

a.E   --a-->  E

E   --a-->  E'                            E   --a-->  E'
---------------------                     ---------------------
E | F   --a-->  E' | F                    F | E   --a-->  F | E'

E   --a-->  E'        F   --a'-->  F'
-----------------------------------------
E | F   --\tau-->  E' | F'

E   --a-->  E'                            E   --a-->  E'
---------------------                     ---------------------
E + F   --a-->  E'                          F + E   --a-->  E'

E   --a-->  E'
----------------------  (a, a' not in L)
E\L   --a-->  E'\L

[fix_k { X_i = E_i | i in I}/X_k ]_{1<=k<=n}  E_j   --a-->  E_j'
-------------------------------------------------------------------
fix_j { X_i = E_i | i in I}   --a-->  E_j'

Using the alternative syntax mu X.E for recursive processes, we can rewrite the last rule

[mu X.E/X]E   --a-->  E'
------------------------
mu X.E   --a-->  E'
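
The transition rules can be prototyped directly. The following Python sketch implements the relation for the fragment without recursion; the encoding of process terms as tuples is an invented convenience:

```python
# Terms are encoded as tuples; steps(e) computes all transitions
# e --a--> e' according to the rules for prefix, choice, parallel
# composition, and restriction.
Halt = ("halt",)

def prefix(a, e): return ("prefix", a, e)        # a.E
def choice(e, f): return ("choice", e, f)        # E + F
def par(e, f): return ("par", e, f)              # E | F
def restrict(e, labels): return ("restrict", e, frozenset(labels))  # E\L

def comp(a):
    """Complement of an action: a <-> a'."""
    return a[:-1] if a.endswith("'") else a + "'"

def steps(e):
    """All pairs (action, e') such that e --action--> e'."""
    kind = e[0]
    if kind == "halt":
        return []
    if kind == "prefix":                          # a.E --a--> E
        return [(e[1], e[2])]
    if kind == "choice":                          # a step from either side
        return steps(e[1]) + steps(e[2])
    if kind == "par":
        e1, e2 = e[1], e[2]
        out = [(a, par(r, e2)) for a, r in steps(e1)]       # left side moves
        out += [(a, par(e1, r)) for a, r in steps(e2)]      # right side moves
        for a, r1 in steps(e1):                             # synchronization:
            for b, r2 in steps(e2):                         # complementary actions
                if a != "tau" and b == comp(a):             # yield a silent step
                    out.append(("tau", par(r1, r2)))
        return out
    if kind == "restrict":                        # drop actions in L (or complements)
        L = e[2]
        return [(a, restrict(r, L)) for a, r in steps(e[1])
                if a == "tau" or (a not in L and comp(a) not in L)]
    raise ValueError(kind)

# (a.Halt | a'.Halt) restricted on a can only take the silent tau step.
p = restrict(par(prefix("a", Halt), prefix("a'", Halt)), {"a"})
print([a for a, _ in steps(p)])   # → ['tau']
```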

Examples:
1. The process a.(b.Halt + c.(d.Halt + e.Halt)) has several possible execution sequences of actions. Here are two that lead to Halt:

a.(b.Halt + c.(d.Halt + e.Halt))
--a-->   b.Halt + c.(d.Halt + e.Halt)
--b-->   Halt

a.(b.Halt + c.(d.Halt + e.Halt))
--a-->   b.Halt + c.(d.Halt + e.Halt)
--c-->   d.Halt + e.Halt
--d-->   Halt


2. The silent \tau action, which occurs when a computation involves actions "hidden" by the restriction operator "\", represents an internal communication between parts of a concurrent system. For example, in the computation
(a.Halt | a'.Halt)\ab  --\tau-->   (Halt | Halt)\ab

two local processes execute complementary actions. We can think of the expression (a.Halt | a'.Halt)\ab as an abstract description of a concurrent system consisting of two processes such as
Process A:   receive a value on channel C
Process B:   send a value on channel C

where channel C is local to this pair of processes and not considered observable (or open for transmission) by other processes.
3. Although the concept is a bit slippery, it is interesting to try to identify a class of processes that are "obviously" not able to perform any actions. Provisionally, let us say that a process is done if it is given by the following grammar:
D ::=   X  |  Halt |  D+D  |  D|D  |   D\L   |  mu X.D

The main property of these expressions is that they do not contain any atomic actions, so there is no action that we could expect these processes to perform. The processes that are not done seem, intuitively, to reflect common properties of starvation or deadlock.

An example of a "not done" process is

(a.Halt | b.Halt)\ab

Intuitively, this process is "deadlocked." It consists of two sub-processes that each "want" to perform an atomic action, but neither is allowed to since its environment cannot perform the complementary action. By analogy with the previous example (producing a silent action), we can think of the expression (a.Halt | b.Halt)\ab as an abstract description of a concurrent system consisting of two processes such as
Process A:   receive a value on channel C
Process B:   send a value on channel C'

where channels C and C' are local to processes A and B. Since communication is synchronous, neither process can proceed without another process acting in a complementary way. (Warning: this definition of "done" is not very robust, since some processes that are not done are equivalent, in a sense discussed below, to processes that are done.)
4. The expression
   fix_2 { X_1 = a.X_2,     X_2 = b.X_3 + c.X_1,    X_3 = a.a.X_2 + b.c.X_1 }

is an expression for the second process, Y, in the following sequence of recursive definitions
    X = a.Y
    Y = b.Z + c.X
    Z = a.a.Y + b.c.X

Using this rewriting of the "fix" expression, we can see
    Y --b-->--a-->--a--> Y
    Y --b-->--b-->--c-->--a--> Y

Using the rules above, we can see that the formal operational semantics gives us
    fix_2 {X_1 = a.X_2, X_2 = b.X_3 + c.X_1, X_3 = a.a.X_2 + b.c.X_1}
      --b-->--a-->--a-->
    fix_2 {X_1 = a.X_2, X_2 = b.X_3 + c.X_1, X_3 = a.a.X_2 + b.c.X_1}

    fix_2 {X_1 = a.X_2, X_2 = b.X_3 + c.X_1, X_3 = a.a.X_2 + b.c.X_1}
      --b-->--b-->--c-->--a-->
    fix_2 {X_1 = a.X_2, X_2 = b.X_3 + c.X_1, X_3 = a.a.X_2 + b.c.X_1}

5. The recursive process mu X.X does nothing.
6. The behavior of the recursive process mu X.((b.X | b'.X)\b) is often described as "infinite chatter." This process sits contentedly by itself and does silent \tau actions internally. In other words, this process continues indefinitely, but does not communicate, since the only actions it performs are local and not visible to other processes.

#### Execution Trees

It is useful to draw the set of possible executions of a process in a form called an execution tree. The execution tree Tree(E) for process E is defined by
• Tree(D) is a leaf if D cannot perform any atomic actions
• If E --a_1--> E_1, ..., E --a_n--> E_n are all the possible execution steps from E, then Tree(E) consists of a root node with n subtrees, Tree(E_1), ..., Tree(E_n). The root node is connected to subtree Tree(E_i) by an arc labelled with atomic action a_i.
• An execution tree may be finite or, if the process uses recursion, possibly infinite.

Examples:

1.
                                              /--b-->
    a.(b.Halt + c.(d.Halt + e.Halt))  --a-->|
                                             \          /--d-->
                                              \--c-->--|
                                                        \--e-->

2. For recursive processes, we can either draw a finite "tree" with cycles (a labelled directed graph); for the process P = mu P (a.(b.c.P + c.(d.Halt + b.P))), this graph has the transitions

    P --a--> Q
    Q --b--> R        R --c--> P
    Q --c--> S        S --d--> Halt
                      S --b--> P

(where Q, R, S name the intermediate states),

or unwind this into an infinite tree that cannot be completely drawn.

                                         /--b-->...
           /--b-->---c--->---a-->--|
          |                         \--c-->...
    --a-->|
          |          /--d-->
           \--c-->--|
                    |                /--b-->...
                     \--b-->---a-->-|
                                     \--c-->...

This infinite tree is the execution tree for the process; the finite cyclic graph is simply a convenient finite representation of this infinite tree.