\section{Modeling Adaptation \label{model}}
In this section, we introduce a formal model for adaptation in asynchronous programs. We model a program as an automaton; informally, an adaptation transforms one automaton into another. We first give an informal overview of adaptation in distributed systems and then present the formal model for reasoning about the correctness of adaptation. In the rest of the paper, for the sake of brevity, we use the term adaptation to mean {\em overlap} adaptation. However, our approach also applies to non-overlap adaptation.
We refer to the program before adaptation as the {\em old program} and the program after adaptation as the {\em new program}. An adaptation replaces the old program being executed by the system with the new program. We assume that the old program and the new program are independently correct, i.e., by themselves they execute and produce acceptable behavior. The goal of verifying adaptation is to ensure that $(i)$ the adaptation ends in a state from where the system satisfies the specification of the new program, and $(ii)$ the behavior during adaptation is acceptable (as defined by the specification during adaptation).
An adaptation in a distributed system involves multiple steps that are executed at various processes. We consider the replacement of a fraction at a single process to be an atomic step of adaptation. The key to verifying adaptation is to ensure that the atomic steps occur in an appropriate order and that the instants at which they occur are ``safe'', i.e., the specification during adaptation is satisfied. We refer to each atomic step of adaptation as an {\em atomic adaptation} (defined formally later).
%The adaptation from old program to new program may not be instantaneous. The system may go through intermediate states where the behavior of the old and the new program overlaps. For example, consider a protocol that provides secure communication between a sender and a receiver. Such a protocol consists of two types of fractions (\cite{kb03podsy}), namely, {\em encryption fraction} at the sender that encrypts the packets before sending and {\em decryption fraction} at the receiver that decrypts the encrypted packets received from the sender. To replace such a protocol, each fraction of the protocol needs to be replaced. Thus, the adaptation in distributed programs involves multiple steps that are executed at various processes. We consider the replacement of fraction at a single process as an atomic step of adaptation.
%Thus, an adaptation consists of a sequence of atomic adaptations.
To verify adaptation, both the ordering among atomic adaptations and the behavior during adaptation need to be verified. To verify the behavior during adaptation, we need to classify the states of the program during adaptation. The intermediate states that occur during adaptation are due to the overlapping of the old program and the new program, and the properties satisfied by these states may differ from those of either the old program or the new program. Consequently, the behavior expected during adaptation needs to be specified separately from the old program and the new program. Towards this end, we define the notion of an {\em intermediate program}. Intermediate programs arise due to the overlapping of the behavior of the old program and the new program. The first atomic adaptation modifies the old program into the first intermediate program; subsequent atomic adaptations modify one intermediate program into the next; and the last atomic adaptation results in the new program. The specification during adaptation identifies the requirements for these intermediate programs.
%In the example discussed earlier, consider the system in which the adaptation has replaced the encryption fractions at the sender but has yet to replace the decryption fractions at the receiver. The sender may continue to send packets that may be buffered at the receiver or the sender may be blocked from sending more packets till the receiver has replaced the decryption fractions. Clearly,
\subsection{Abstract Model of Adaptation}
We model a program as an automaton $\mathcal{A}$ represented as a tuple $\langle S, \Sigma, \delta, S_0 \rangle$, where
\begin{itemize}
\item $S(\mathcal{A})$ - a set of states
\item $\Sigma(\mathcal{A})$ - a set of actions
\item $\delta(\mathcal{A})$ - a state-transition relation, where $\delta(\mathcal{A}) \subseteq S(\mathcal{A}) \ \mathsf{x} \ \Sigma(\mathcal{A}) \ \mathsf{x} \ S(\mathcal{A})$
\item $S_0(\mathcal{A})$ - a nonempty subset of $S(\mathcal{A})$ known as initial states
\end{itemize}
Each element $(s, \pi, s^\prime)$ of $\delta(\mathcal{A})$ is known as a {\em transition}, where $s, s^\prime \in S(\mathcal{A})$ and $\pi \in \Sigma(\mathcal{A})$. If $\mathcal{A}$ has a transition $(s, \pi, s^\prime)$, then $\pi$ is {\em enabled} in state $s$ and executing action $\pi$ in state $s$ leads to state $s^\prime$. A transition of the form $(s, \_, s^\prime)$ denotes that $\exists \pi : \pi \in \Sigma(\mathcal{A}) : (s, \pi, s^\prime) \in \delta(\mathcal{A})$.
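For concreteness, the automaton model above can be sketched in Python as follows. The encoding and all names (\texttt{Automaton}, \texttt{enabled}, \texttt{successors}) are ours, not part of the formal model:

```python
from dataclasses import dataclass

# Illustrative encoding of an automaton A = <S, Sigma, delta, S0>.
@dataclass
class Automaton:
    states: set     # S(A)
    actions: set    # Sigma(A)
    delta: set      # delta(A): set of transition triples (s, pi, s')
    initial: set    # S0(A): nonempty subset of states

    def enabled(self, s):
        """Actions pi that are enabled in state s."""
        return {pi for (t, pi, _) in self.delta if t == s}

    def successors(self, s, pi):
        """States s' with (s, pi, s') in delta."""
        return {t for (u, a, t) in self.delta if u == s and a == pi}

# A tiny two-state example.
A = Automaton(states={0, 1}, actions={"flip"},
              delta={(0, "flip", 1), (1, "flip", 0)}, initial={0})
```

Nondeterminism is captured by \texttt{successors} returning a set: a state may have several outgoing transitions on the same action.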
%We represent {\em concurrency} by {\em interleaving} and hence the basic automaton suffices for modeling adaptation. This approach can be viewed as a reduction of concurrency to non-determinism, where a concurrent execution gives rise to many possible corresponding interleaving orders. We could have used more complex automata such as (\cite{peled95ABA, shutt95SFA, lynchBook}) to model concurrency, but we adopt interleaving model as it gives a simpler theory for specification and verification of adaptation.
We model an adaptation $\Delta$ as a 5-tuple $\langle \mathcal{I}, P, Q, \Sigma_a, S_{\textit{map}} \rangle$, where
\begin{itemize}
\item $\mathcal{I}$ - a set of automata
\item $P$ - an automaton of the old program, $P \in \mathcal{I}$
\item $Q$ - an automaton of the new program, $Q \in \mathcal{I}$
\item $\Sigma_a$ - a set of special type of actions known as {\em adaptive actions}
\item $S_{\textit{map}}$ - a state mapping, i.e., a partial function $S(\mathcal{A}) \ \mathsf{x} \ \Sigma_a \rightarrow S(\mathcal{A}^\prime)$, where $\mathcal{A}, \mathcal{A}^\prime \in \mathcal{I}$ and $\mathcal{A} \neq \mathcal{A}^\prime$
\end{itemize}
The old program, the new program, and all intermediate programs are modeled as automata. Given an adaptive action, the state mapping defines the states of the automaton in which the adaptive action can execute, and the corresponding states of the resulting automaton in which the adaptive action terminates. Note that the state mapping is a partial function, as it may not be possible to perform the corresponding atomic adaptation in all states of the automaton. Each element $((s, \pi_a), s^\prime)$ of $S_{\textit{map}}$ can be represented as a triplet $(s, \pi_a, s^\prime)$. Similar to the state-transition relation of an automaton, a state mapping $S_{\textit{map}}$ can be defined as a subset of $S(\mathcal{A}) \ \mathsf{x} \ \Sigma_a \ \mathsf{x} \ S(\mathcal{A}^\prime)$ with the restriction that if $(s, \pi_a, s^\prime) \in S_{\textit{map}}$ and $(s, \pi_a, s^{\prime\prime}) \in S_{\textit{map}}$ then $s^\prime = s^{\prime\prime}$. Each element of $S_{\textit{map}}$ is known as an {\em adaptive transition}.
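Viewed concretely, the state mapping can be encoded as a dictionary keyed by (state, adaptive action); this encoding enforces the restriction above automatically, since a key maps to at most one value. The following Python sketch uses hypothetical state and action names of our own:

```python
# Hypothetical sketch of S_map as a partial function.
# Keys are (state of A, adaptive action); values are states of A'.
# A dict enforces the restriction that (s, pi_a) determines s' uniquely.
s_map = {
    ("old_idle", "replace_sender"): "mid_idle",
    ("mid_idle", "replace_receiver"): "new_idle",
}

def adaptive_transitions(s_map):
    """View S_map as the equivalent set of adaptive transitions (s, pi_a, s')."""
    return {(s, pi_a, t) for ((s, pi_a), t) in s_map.items()}
```

States not appearing as keys are precisely those in which the corresponding atomic adaptation cannot be performed.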
%Note that the range of $S_{\textit{map}}$ is $S(\mathcal{A}^\prime)$ and not $S_0(\mathcal{A}^\prime)$. In other words, we do not require an adaptive action to terminate in an initial state of the resulting automaton.
Broadly speaking, the state mapping of adaptation $\Delta$ defines an {\em automata-transformation} (partial) function $\delta_a : \mathcal{I} \ \mathsf{x} \ \Sigma_a \rightarrow \mathcal{I}$. Each element $((\mathcal{A}, \pi_a), \mathcal{A}^\prime)$ (equivalently, $(\mathcal{A}, \pi_a, \mathcal{A}^\prime)$) of $\delta_a$ is known as an {\em atomic adaptation}. Thus, each atomic adaptation is modeled as transforming one automaton into another.
The automata-transformation function can be represented as an adaptation lattice, defined as follows:
\noindent {\bf Adaptation Lattice}. \ An {\em adaptation lattice} (cf. Fig. \ref{til}) is a finite directed acyclic graph in which each node is labeled with an automaton and each edge is labeled with an atomic adaptation, such that,
\begin{enumerate}
\item There is a single {\em start node} $P$ having no incoming edges. The start node is associated with the automaton representing the old program. The automata-transformation function (correspondingly, $S_{\textit{map}}$) satisfies the following condition:
$\forall \mathcal{A}, \pi_a :: (\mathcal{A}, \pi_a, P) \not\in \delta_a$
\item There is a single {\em end node} $Q$ having no outgoing edges. The end node is associated with the automaton representing the new program. The automata-transformation function (correspondingly, $S_{\textit{map}}$) satisfies the following condition:
$\forall \mathcal{A}, \pi_a :: (Q, \pi_a, \mathcal{A}) \not\in \delta_a$
\item Each intermediate node $R$ has at least one incoming edge and at least one outgoing edge. It is associated with the automaton representing an intermediate program. The automata-transformation function (correspondingly, $S_{\textit{map}}$) satisfies the following condition:
$\forall \mathcal{A} : \mathcal{A} \neq P : (\exists\mathcal{A}^\prime, \pi_a :: (\mathcal{A}^\prime, \pi_a, \mathcal{A}) \in \delta_a) \ \wedge $
$\forall \mathcal{A} : \mathcal{A} \neq Q : (\exists\mathcal{A}^\prime, \pi_a :: (\mathcal{A}, \pi_a, \mathcal{A}^\prime) \in \delta_a)$
\end{enumerate}
\vspace*{3mm}
A path in the lattice from the start node to the end node is called an {\em adaptation path}.
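The three lattice conditions can be checked mechanically. The following Python sketch (our own encoding, with automata named by strings and $\delta_a$ given as a set of triples) validates a lattice and enumerates its adaptation paths by DAG traversal:

```python
def check_lattice(delta_a, P, Q):
    """Assert the adaptation-lattice conditions on delta_a (set of triples)."""
    nodes = {a for (a, _, _) in delta_a} | {b for (_, _, b) in delta_a}
    has_in = {b for (_, _, b) in delta_a}
    has_out = {a for (a, _, _) in delta_a}
    assert P not in has_in                # start node: no incoming edges
    assert Q not in has_out               # end node: no outgoing edges
    for n in nodes - {P, Q}:              # intermediates: both kinds of edges
        assert n in has_in and n in has_out

def adaptation_paths(delta_a, P, Q):
    """All adaptation paths from start node P to end node Q."""
    if P == Q:
        return [[Q]]
    return [[P] + rest
            for (a, _, b) in sorted(delta_a) if a == P
            for rest in adaptation_paths(delta_a, b, Q)]

# A diamond-shaped example lattice with two intermediate programs.
lattice = {("P", "a1", "R1"), ("P", "a2", "R2"),
           ("R1", "a2", "Q"), ("R2", "a1", "Q")}
check_lattice(lattice, "P", "Q")
```

Because the lattice is a finite DAG, the recursion in \texttt{adaptation\_paths} terminates; here it yields the two paths through $R1$ and $R2$.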
\begin{figure}[htb]
\centering\epsfig{file=tilfig.eps}
\caption{\small An example of an adaptation lattice.}
\label{til}
\vspace*{-3mm}
\end{figure}
Based on the above definition, an adaptation can also be viewed as an automaton, where
\begin{itemize}
\item $S(\Delta) = \bigcup\limits_{\mathcal{A} \ \in \ \mathcal{I}} \{(\mathcal{A}, s) \ | \ s \in S(\mathcal{A})\}$
\item $\Sigma(\Delta) = \bigcup\limits_{\mathcal{A} \ \in \ \mathcal{I}} \{(\mathcal{A}, \pi) \ | \ \pi \in \Sigma(\mathcal{A})\} \ \cup \ \Sigma_a$
\item $\delta(\Delta) = \bigcup\limits_{\mathcal{A} \ \in \ \mathcal{I}} \{((\mathcal{A},s), (\mathcal{A},\pi), (\mathcal{A},s^\prime)) \ | \ (s, \pi, s^\prime) \in \delta(\mathcal{A})\} \ \cup$ \\
\hspace*{30mm}$\{((\mathcal{A},s), \pi_a, (\mathcal{A}^\prime,s^\prime)) \ | \ (s, \pi_a, s^\prime) \in S_{\textit{map}}\}$
\item $S_0(\Delta) \subseteq \{(P, s) \ | \ s \in S(P)\}$
\end{itemize}
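The union construction above can be sketched as follows. For clarity, this hypothetical encoding makes the automaton names explicit in the adaptive transitions, writing each element of $S_{\textit{map}}$ as a 5-tuple:

```python
def adaptation_automaton(family, adaptive):
    """Build (S(Delta), delta(Delta)) from a family of automata and S_map.

    family:   name -> {'states': set, 'delta': set of (s, pi, s')}
    adaptive: set of (name, s, pi_a, name', s') adaptive transitions
    """
    # S(Delta): states tagged with the automaton they belong to.
    states = {(n, s) for n, a in family.items() for s in a["states"]}
    # Ordinary transitions stay within one automaton.
    delta = {((n, s), (n, pi), (n, t))
             for n, a in family.items() for (s, pi, t) in a["delta"]}
    # Adaptive transitions cross from one automaton to another.
    delta |= {((n, s), pi_a, (n2, t)) for (n, s, pi_a, n2, t) in adaptive}
    return states, delta

family = {"P": {"states": {0, 1}, "delta": {(0, "send", 1)}},
          "Q": {"states": {2}, "delta": set()}}
adaptive = {("P", 1, "replace", "Q", 2)}
```

Tagging each state with its automaton keeps the intermediate automata identifiable inside $\Delta$, which matters later when properties must be verified per intermediate program.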
%The advantage of modelling adaptation as an automaton is that it allows us to use existing techniques of model-checking and theorem-proving to verify properties of adaptive systems. However, at the same time it is important to identify individual intermediate automata during adaptation as there are properties during adaptation that need to be verified for each intermediate automaton.
We now introduce some notations and terminology used in specifying and verifying adaptive programs.
\noindent {\bf Enabled}. \ An action $\pi$ of $\mathcal{A}$ is {\em enabled} in state $s$ if $\exists s^\prime::(s, \pi, s^\prime) \in \delta(\mathcal{A})$. An adaptive action $\pi_a$ is enabled in $\mathcal{A}$ if $\exists s \in S(\mathcal{A}), s^\prime \in S(\mathcal{A}^\prime) :: (s, \pi_a, s^\prime) \in S_{\textit{map}}$.
\noindent {\bf State predicate}. \ A state predicate $X$ of $\mathcal{A}$ is any subset of $S(\mathcal{A})$. We say that $X$ is true in state $s$ iff $s \in X$.
\noindent {\bf Closure}. \ A state predicate $X$ of $\mathcal{A}$ is closed in $\mathcal{A}$ (equivalently, in $\delta(\mathcal{A})$) iff the following condition holds:
$\forall s, s^\prime, \pi :: ((s, \pi, s^\prime) \in \delta(\mathcal{A})) \Rightarrow (s \in X \Rightarrow s^\prime \in X)$
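Closure is straightforward to test mechanically; the following sketch (a helper of our own) checks whether any transition leaves the predicate:

```python
def is_closed(X, delta):
    """X (a set of states) is closed in delta iff no transition exits X."""
    return all(t in X for (s, _, t) in delta if s in X)

# Example: once the system reaches states {1, 2}, it never leaves them.
delta = {(0, "a", 1), (1, "a", 2), (2, "a", 2)}
```

Here $\{1, 2\}$ is closed, while $\{0, 1\}$ is not, since the transition $(1, a, 2)$ exits it.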
\noindent {\bf Computation}. \ A computation of program $\mathcal{A}$ (respectively, adaptation $\Delta$) is a sequence of states $\sigma = \langle s_0, s_1, ...\rangle$ satisfying the following conditions:
\begin{itemize}
\item For the first state $s_0$ in $\sigma$, $s_0 \in S_0(\mathcal{A})$ (respectively, $S_0(\Delta)$)
\item If $\sigma$ is infinite then $\forall j: j > 0: (\exists \pi :: (s_{j-1}, \pi, s_j) \in \delta(\mathcal{A}))$ (respectively, $\delta(\Delta)$)
\item If $\sigma$ is finite and terminates in state $s_l$, then $\forall j: 0 < j \leq l: (\exists \pi :: (s_{j-1}, \pi, s_j) \in \delta(\mathcal{A}))$ (respectively, $\delta(\Delta)$), and there does not exist a state $s$ and action $\pi$ such that $(s_l, \pi, s) \in \delta(\mathcal{A})$ (respectively, $\delta(\Delta)$)
\end{itemize}
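For finite sequences, the three conditions can be checked directly. The sketch below (a hypothetical helper of our own) tests initiality, step-validity, and maximality of the final state:

```python
def is_computation(sigma, initial, delta):
    """Check that a finite state sequence sigma is a computation:
    it starts in an initial state, every step follows some transition,
    and no action is enabled in the final state (maximality)."""
    if not sigma or sigma[0] not in initial:
        return False
    for j in range(1, len(sigma)):
        if not any(s == sigma[j - 1] and t == sigma[j] for (s, _, t) in delta):
            return False
    # Maximality: the final state must have no outgoing transition.
    return not any(s == sigma[-1] for (s, _, _) in delta)

delta = {(0, "a", 1), (1, "b", 2)}
```

With this $\delta$, $\langle 0, 1, 2 \rangle$ is a computation, but $\langle 0, 1 \rangle$ is not, since action $b$ is still enabled in state $1$.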
\noindent {\bf Specification}. \ A specification of $\mathcal{A}$ is a set of {\em acceptable} computations. Following Alpern and Schneider \cite{liveness-alpern}, a specification can be decomposed into a safety specification and a liveness specification. As shown in \cite{thesis-sandeep}, for a rich class of specifications, the safety specification can be represented as a set of bad transitions that must not occur in program computations; i.e., the safety specification is a subset of $S(\mathcal{A}) \ \mathsf{x} \ S(\mathcal{A})$.
% Also, as shown in \cite{multitolerance}, for a rich class of specifications, safety specification can be represented as a set of bad transitions that should not occur in program computations. Hence, we represent this safety specification by a set of bad transitions that should not occur in program computations.
\noindent {\bf Satisfies}. \ $\mathcal{A}$ satisfies a specification iff each computation of $\mathcal{A}$ is in the specification. $\mathcal{A}$ satisfies a specification from $X$ iff $(i)$ $X$ is closed in $\mathcal{A}$, and $(ii)$ each computation of $\mathcal{A}$ is in the specification and starts from a state where $X$ is true (i.e., $S_0(\mathcal{A}) \subseteq X$).
% each computation of $\mathcal{A}$ that starts from a state where $X$ is true is in the specification.
\noindent {\bf Invariant}. \ The state predicate $X$ of $\mathcal{A}$ is an invariant iff $\mathcal{A}$ satisfies specification from $X$. Note that $X \supseteq S_0(\mathcal{A})$. Informally speaking, the invariant predicate characterizes the set of all states reached in the ``correct'' computations of $\mathcal{A}$.
\noindent {\bf Safety specification during adaptation}. \ Similar to the specification of $\mathcal{A}$, the safety specification during adaptation $\Delta$ is specified as a set of bad transitions that must not occur in computations of $\Delta$, i.e., as a subset of $S(\Delta) \ \mathsf{x} \ S(\Delta)$.
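Under this representation, checking a computation against a safety specification reduces to inspecting consecutive state pairs, as in the following sketch (names are ours):

```python
def satisfies_safety(sigma, bad):
    """A computation satisfies the safety specification iff none of its
    consecutive state pairs is in the set of bad transitions."""
    return all((sigma[j - 1], sigma[j]) not in bad
               for j in range(1, len(sigma)))

# Hypothetical example: the transition from state 1 back to state 0 is bad.
bad = {(1, 0)}
```

This per-pair check is what makes the bad-transition representation convenient for model checking: a violation is always witnessed by a single transition.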
\noindent {\bf Liveness specification during adaptation}. \ We argue that the specification during adaptation should be a safety specification, because one typically wants the adaptation to complete as quickly as possible. Hence, it is undesirable to delay the adaptation in order to satisfy a liveness specification during adaptation. Rather, it is desirable to guarantee that, after adaptation, the program reaches states from where its (new) safety and liveness specification is satisfied. Thus, the implicit liveness specification during adaptation is that the adaptation completes; in other words, each intermediate program eventually executes its adaptive action. For this reason, we omit the representation of the liveness specification of a program.
\iffalse
\subsection{Programming Notation}
In the previous subsection, we described adaptation using basic automata. In this section, we give programming notations used in describing the examples in following sections. Our notations are based on guarded commands \cite{inv-dij,closure-arora,networkprotocol-gouda}, which makes the translation from program to automata transparent.
\noindent {\bf Program and process}. \ A program $\mathcal{P}$ is specified by a finite set of processes. A process $p$ is specified by a set of local constants, a set of local variables and a finite set of {\em actions}. The processes in a program communicate with one another by sending and receiving messages over unbounded channels that connect the processes. We use the notation $\textit{ch.p.q}$ to denote the content of the channel from process $p$ to process $q$, and $\#\textit{ch.p.q}$ to denote the number of messages in the channel from $p$ to $q$.
\noindent {\bf Component and fraction}. \ A component is a finite set of fractions that are involved in providing a common functionality. A component fraction is a set of local constants, a set of local variables, and a finite set of actions that are associated with a single process. A fraction also has {\em input}, which are values of the variables supplied by the program. Addition of component to a program is the union of the variables and actions of component and the program.
\noindent {\bf Action}. \ An action of $p$ is uniquely identified by a name, and is of the form
\vspace*{-2mm}
\begin{center}
$\langle\textit{name}\rangle \ : \ \langle\textit{guard}\rangle \ \rightarrow \ \langle\textit{statement}\rangle$
\end{center}
\vspace*{-2mm}
\noindent A {\em guard} is a combination of one or more of the following: a state predicate of $p$, a receiving guard of $p$, or a timeout guard. A receiving guard of $p$ is of the form \verb+rcv+ $\langle\textit{message}\rangle$ \verb+from+ $\langle q\rangle$. A timeout guard is of the form \verb+timeout+ $\langle\textit{state predicate of } \mathcal{P}\rangle$. The {\em statement} of an action updates zero or more variables and/or sends one or more messages. An action can be executed only if its guard evaluates to true. To execute an action, the statement of that action is executed atomically. A sending statement of $p$ is of the form \verb+send+ $\langle\textit{message}\rangle$ \verb+to+ $\langle q_1, ..., q_n \rangle$, which sends a message to one or more processes $q_1, ..., q_n$. We say that an action of $p$ is enabled in a state of $\mathcal{P}$ iff its guard evaluates to true in that state.
\noindent {\bf Atomic adaptation}. \ An atomic adaptation is a special type of action, whose statement is of the form
$\mathit{TransformTo}(p^\prime, \Phi)$
\noindent which denotes that when this statement is executed, the current process is replaced by $p^\prime$ and state-mapping $\Phi$ is used to initialize the variable of $p^\prime$. From modeling perspective, we consider the whole process to be changed, even if only a small part of it is actually changed. Hence, we describe atomic adaptation as a generic form of replacing processes. In actual implementation, this can be modeled in a different ways, such as blocking of some method, or loading/unloading of some class. However, from verification point of view it is not important to consider the details of how the atomic adaptation is implemented. Additionally, considering each atomic adaptation as a generic form of process replacement gives the developer freedom to choose the implementation based on the platform and language used.
\fi
\iffalse
\subsection{Programming Notation}
In the previous subsection, we described adaptation and adaptive programs using basic state-transition machine. In this section, we give programming notations to specify states and transitions of the program. Our notations are based on guarded commands \cite{inv-dij,closure-arora,networkprotocol-gouda}.
%\subsection{Concrete Model of Adaptive Programs}
%In this subsection, we discuss the concrete model of distributed adaptive programs that can be mapped to the abstract model discussed in the previous subsection. We note that general purpose languages such as C++/Java that are used in existing adaptation techniques (e.g., \cite{conadasof-schlichting, kb03podsy, filter-mckinley, evolution-arora, unanticipated-cahill, thesis-masoud, bca-ralph, friends-fabre}) are not suitable for our task as verification of programs in these languages is difficult even in the absence of adaptation. For this reason, while proving correctness of programs, we often consider their abstract version where several implementation details are omitted.
%Of course, the abstract model chosen during the verification of adaptation should be such that it is possible to obtain an abstract version of the programs and components at hand, and it is possible to translate programs in the abstract model into a concrete implementation. With this intuition, we use a variation of the guarded commands \cite{inv-dij} for our purpose. We note that there are techniques \cite{ger00} that allow one to obtain a guarded command representation from a program in general purpose language such as C. Also, there are techniques \cite{mcguire01,na02,da02} that allow one to transform a program in guarded commands into a program in general purpose languages. The modeling is based on the work in \cite{closure-arora, networkprotocol-gouda}.
%\vspace*{-2.5mm}
%\subsection{Modeling Adaptive Program}
%\vspace*{-1.5mm}
\noindent {\bf Program and Process}. \ A program $\mathcal{P}$ is specified by a finite set of processes. A process $p$ is specified by a set of local constants, a set of local variables and a finite set of {\em actions}.% (defined later in this section).
%Variables declared in $\mathcal{P}$ can be read and written by the actions of all processes.
%Each process can have local variables and constants. Further, auxiliary variables are defined that are used in specification and verification of program. These variables are not implemented by the program.
The processes in a program communicate with one another by sending and receiving messages over unbounded channels that connect the processes. We use the notation $\textit{ch.p.q}$ to denote the content of the channel from process $p$ to process $q$, and $\#\textit{ch.p.q}$ to denote the number of messages in the channel from $p$ to $q$.
\noindent {\bf Component and Fraction}. \ A component is a finite set of fractions that are involved in providing a common functionality. A component fraction is a set of local constants, a set of local variables, and a finite set of actions that are associated with a single process. A fraction also has input parameters, which are values of the variables supplied by the program; and output parameters, which are values returned to the program.
%\noindent {\bf State and State Predicate}. \ A {\em state of process $p$} is defined by a value for each variable of $p$, chosen from the predefined domain of the variable. A {\em state of program $\mathcal{P}$} is defined by
%a value for each global variable of $\mathcal{P}$,
%the state of all its processes and the contents of all channels. A state predicate of $p$ is a boolean expression over the constants and variables of $p$. A state predicate of $\mathcal{P}$ is a boolean expression over constants and variables of all processes
%, all global constants and global variables,
%and the contents of all channels.
\noindent {\bf Action}. \ An action of $p$ is uniquely identified by a name, and is of the form
\vspace*{-2mm}
\begin{center}
$\langle\textit{name}\rangle \ : \ \langle\textit{guard}\rangle \ \rightarrow \ \langle\textit{statement}\rangle$
\end{center}
\vspace*{-2mm}
\noindent A {\em guard} is a combination of one or more of the following: a state predicate of $p$, a receiving guard of $p$, or a timeout guard. A receiving guard of $p$ is of the form \verb+rcv+ $\langle\textit{message}\rangle$ \verb+from+ $\langle q\rangle$. A timeout guard is of the form \verb+timeout+ $\langle\textit{state predicate of } \mathcal{P}\rangle$. The {\em statement} of an action updates zero or more variables and/or sends one or more messages. An action can be executed only if its guard evaluates to true. To execute an action, the statement of that action is executed atomically. A sending statement of $p$ is of the form \verb+send+ $\langle\textit{message}\rangle$ \verb+to+ $\langle q_1, ..., q_n \rangle$, which sends a message to one or more processes $q_1, ..., q_n$. We say that an action of $p$ is enabled in a state of $\mathcal{P}$ iff its guard evaluates to true in that state.
\noindent {\bf Adaptive Action}. \ Each adaptive action is a special type of {\em action} and is represented by a {\em name} and has a {\em guard}. An adaptive action is executed when the guard corresponding to it becomes true. We consider a generic adaptive action that replaces a set of actions of the program while ensuring appropriate state transfer:
\vspace*{-2mm}
\begin{center}
$\langle\textit{name}\rangle \ : \ \langle\textit{guard}\rangle \ \rightarrow \ \textit{transform}(\mathcal{P}^\prime(p^\prime),\Phi(V))$
\end{center}
\vspace*{-2mm}
\noindent where $V$ is the set of variables of current process, $\mathcal{P}^\prime(p^\prime)$ denotes the process $p^\prime$ of program $\mathcal{P}^\prime$ that will replace the current process, and $\Phi(V)$ defines the state mapping function to initialize the state of the process $p^\prime$. For modeling, we restrict ourselves to this generic adaptive action; however, in the concrete implementation, this can be modeled in different ways, such as disabling or enabling some program statements, loading or unloading of a class, or changing references to a class.
%\noindent {\bf Computation}. \ A computation of $\mathcal{P}$ is a sequence of states $s_0, s_1, ...$ of $\mathcal{P}$ such that for each $j$, $j > 0$, $s_j$ is obtained from state $s_{j-1}$ by executing an action of $\mathcal{P}$ that is enabled in the state $s_{j-1}$, and satisfies the following conditions: ($i$) {\em Non-determinism}: Any action that is enabled in a state of $\mathcal{P}$ can be selected for execution at that state, ($ii$) {\em Atomicity}: Enabled actions in $\mathcal{P}$ are executed one at a time, ($iii$) {\em Fairness}: If an action is continuously enabled along the states in the sequence, then that action is eventually chosen for execution, and ($iv$) {\em Maximality}: Maximality of the sequence means that if the sequence is finite then the guard of each action in $\mathcal{P}$ is false in the final state.
%\noindent {\bf Closure}. \ Let $S$ be a state predicate. An action $ac$ of $p$ preserves $S$ iff in any state where $S$ is true and $ac$ is enabled, atomically executing the statement of $ac$ yields a state where $S$ continues to be true. $S$ is closed in a set of actions iff each action in that set preserves $S$.
%\noindent {\bf Specification}. \ The program specification is a set of {\em acceptable} computations. Following Alpern and Schneider \cite{liveness-alpern}, specification can be decomposed into a safety specification and a liveness specification. Also, as shown in \cite{multitolerance}, for a rich class of specifications, safety specification can be represented as a set of bad transitions that should not occur in program computations. Hence, we represent this safety specification by a set of bad transitions that should not occur in program computations. We omit the representation of liveness specification here as it is not required for our purpose.
%\noindent {\bf Satisfies}. \ $\delta({\mathcal{A})$ satisfies specification from $X$ iff each computation of $\mathcal{A}$ that starts from a state where $X$ is true is in the specification.
%\noindent {\bf Invariant}. \ The state predicate $X$ of $\mathcal{A}$ is an invariant iff $(i)$ $X$ is closed in $\delta(\mathcal{A})$, and $(ii)$ $\delta(\mathcal{A})$ satisfies specification from $S$. Informally speaking, the invariant predicate characterizes the set of all states reached in the ``correct'' computations of $\mathcal{A}$.
%\noindent {\bf Hoare triple}. \ If $Q$ and $R$ are state predicates and $ac$ is an action, then the Hoare triple is denoted as $\{Q\}ac\{R\}$ and it means that if execution of $ac$ is begun in a state satisfying $Q$, then it is guaranteed to terminate in a finite amount of time in a state satisfying $R$. $Q$ and $R$ are referred to as the precondition and postcondition for the triplet, respectively.
%\noindent {\bf Leads to}. We use the temporal logic formula \ $P \rightsquigarrow Q$ (pronounced $P$ leads to $Q$) described in \cite{tla-lamport}. It states that if $P$ ever becomes true, then $Q$ will be true at same time or later.
\noindent {\em Notation}. \ We use the notation $X \veebar Y \veebar Z$ to imply that only one of $X$, $Y$, or $Z$ is true at a time.
\subsection{Classifying Adaptation}
In this section, we discussed the most general form of adaptation in asynchronous system. There are several adaptation semantics that can be classified using the adaptation lattice approach. In this subsection, we discuss some of them.
\noindent {\bf Instantaneous Adaptation}. \ This is the case where there are no intermediate programs, and an adaptive action transforms the old program to the new program in one step (atomically). This type of adaptation typically occurs in a single-process program. The adaptation lattice in this case consists of only two nodes, start node and then end node.
\noindent {\bf Overlap Adaptation}. \ In this case, during adaptation there is overlap of the behavior of old program and the new program. In other words, if component $C$ is replaced by component $C^\prime$, then during adaptation system there are states when system is having fractions of both $C$ and $C^\prime$. The adaptation lattice in this case consists of at least one intermediate node.
\fi