SECTION II.
The Language and the Memory Model
A. The Language
We use the expression-oriented language presented in Fig. 1 as our core language; it captures the essential features of the C11 memory model. With support for atomic reads/writes (loads/stores), fences, and compare-and-swap (CAS), our language can express C11 programs that use various kinds of inter-thread synchronisation mechanisms, including the powerful release-sequence.
Our language has variable names (represented by metavariable $x$
) and integer values (represented by metavariable $V$
) as values ($\textit {Val}$
). Pointer arithmetic, $\mathtt {let}$
-binding, loop command $\mathtt {repeat\, } e$
, conditional statement $\mathtt {if} {\dots }\mathtt {then} {\dots }\mathtt {else}$
, thread forking $\mathtt {fork\, } e$
, memory allocation $\mathtt {alloc}(V)$
, memory load (read) $[v]_{O}$
and store (write) $[v]_{O} \mathtt {\,:=\, } v$
, atomic update operation $\mathtt {CAS}_{O,O}(v,v,v)$
, and fence operations $\mathtt {fence}_{O}$
are supported as our expressions ($e$
). Specifically, in the loop command $\mathtt {repeat\, } e$
the loop body $e$
will be repeatedly executed until a non-zero value is returned.
Note that a memory order $O$
needs to be specified for some expressions, indicating the degree of memory relaxation that can be applied to the annotated operation. Following the C11 language, we require that each memory location be either atomic or non-atomic, and this classification cannot change once the location is defined. For an atomic location $v_{1}$
, the memory order $O$
used in a load operation $[v_{1}]_{O}$
can be acquire ($\mathtt {acq}$
) or relaxed ($\mathtt {rlx}$
); the memory order $O$
in a store operation $[v_{1}]_{O} \mathtt {\,:=\, } v$
can be either release ($\mathtt {rel}$
) or relaxed ($\mathtt {rlx}$
). Meanwhile, memory accesses to non-atomic locations can only be annotated as non-atomic ($\mathtt {na}$
). The compare-and-swap expression $\mathtt {CAS}_{O_{1},O_{2}}(v_{1},v_{2},v_{3})$
requires $v_{1}$
to be the address of an atomic location and performs the following steps in a single atomic move: firstly, the value of $v_{1}$
is loaded with the memory order $O_{1}$
(which can be either $\mathtt {acq}$
or $\mathtt {rlx}$
), then the value is used to “compare” with the expected value $v_{2}$
; if they are the same, the value $v_{3}$
is stored to location $v_{1}$
with the memory order $O_{2}$
(which can be $\mathtt {rel}$
or $\mathtt {rlx}$
) (the “swapping”), and the numerical value 1 is returned to indicate success; otherwise 0 is returned to indicate that the compare-and-swap failed.
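This CAS behaviour maps directly onto C11's `<stdatomic.h>`. The sketch below (the wrapper name `cas_demo` and its 1/0 return convention are our assumptions, chosen to mirror the core language, not part of the paper's formalism) shows a CAS whose reading part uses an acquire order and whose writing part uses a release order:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <assert.h>

/* Returns 1 on a successful swap and 0 on failure, mirroring the
 * return values of CAS in the core language. */
int cas_demo(atomic_int *loc, int expected, int desired) {
    int exp = expected;
    bool ok = atomic_compare_exchange_strong_explicit(
        loc, &exp, desired,
        memory_order_acq_rel,   /* success: acquire read + release write */
        memory_order_acquire);  /* failure: only the read happens */
    return ok ? 1 : 0;
}
```

Note that C11 requires a separate memory order for the failure case, which may not be stronger than the success order; since a failed CAS performs no write, a release order would be meaningless there.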
B. The Graph Semantics
The C11 memory model resides at the language level, aiming to abstract away the differences between the underlying hardware memory models. It is therefore not straightforward to express operationally. Instead, the axiomatic approach is often used to formalise the C11 memory model: execution graphs represent candidate executions (with the program actions/events as vertices and their relations as edges), and a set of axioms decides whether an execution is legal. This is the approach adopted by Batty et al. [4] to give the first formalisation of the C11 memory model. The memory models used in the C11 program logics [5], [6], [8]–[11] are defined in a similar manner but with certain simplifications, e.g., only accepting synchronisations created in a simple way, without release-sequences involved. This work follows the graph-based axiomatic semantics, with extra information added for threads (highlighted in Fig. 2), and supports synchronisations with fully featured release-sequences.
As shown in Fig. 2, the event graph $\mathcal {G}~(A,T, {\mathsf {sb}}, {\mathsf {mo}}, {\mathsf {rf}})$
concerns a set of events, and records their action type information in $A$
(the action types will be further discussed shortly), their thread identities in $T$
(i.e. to which threads they belong), as well as their relations in ${\mathsf {sb}}, {\mathsf {mo}}$
, and ${\mathsf {rf}}$
. The sequenced-before relation (${\mathsf {sb}}$
) represents the non-transitive program order. All store operations accessing the same location form a strict total order, which we record in the modification-order (${\mathsf {mo}}$
). When a load operation reads from a store operation, this relation is tracked in the reads-from map (${\mathsf {rf}}$
). We formalise the thread pool as $\mathcal T$
which tracks each numerically indexed thread’s last event and the expressions to be executed.
As shown in Fig. 3 and Fig. 4, we adopt a two-layer semantics, namely, event-step and machine-step, following GPS and GPS+. An event-step ($e \mathop{\rightarrow }\limits^{\alpha } e'$
) is the execution of $e$
, resulting in a return value or a remainder expression ($e'$
) and an action ($\alpha $
) to be added to the execution graph in the machine-steps. The executions of arithmetic, let-banding, repeat, and conditional expressions returns different values or remainder expressions but they only generates skip actions ($\mathbb {S}$
) as they do not involve memory accesses. The allocation expression $\mathtt {alloc}(n)$
generates an allocation action $\mathbb {A}(\ell..\ell +n-1)$
, which indicates $n$
fresh memory locations ranging from $\ell $
to $\ell +n-1$
are allocated and $\ell $
is returned. Store and fence expresses generate corresponding write ($\mathbb {W}$
) and fence ($\mathbb {F}$
) actions with the specified memory orders; we let them always return 0, as they should not be used to change the program control flow. Note that the event-step rule for a read action $\mathbb {R}$
only specifies that it should return some numerical value $V$
. The actual value that can be read is constrained by the memory model axioms ($\mathtt {consistentC11}$
) in the machine-steps (discussed shortly in this section), with the global execution taken into consideration. There are two rules for CAS expressions, capturing the success and failure cases respectively. In the case of success, an update action $\mathbb {U}(\ell, V_{o}, V_{n}, O_{r}, O_{w})$
is generated, where the current value $V_{o}$
stored at location $\ell $
is required to be the same as that specified in the expression; otherwise, the CAS is considered failed and is treated as an atomic read action that reads some value other than $V_{o}$
.
We use the machine configuration $\langle \mathcal T; \mathcal G\rangle $
to represent the execution states, where the thread pool $\mathcal T$
contains the expressions to be executed in each thread and the execution graph $\mathcal G$
is a record of the execution history. Machine-step rules are used to update the machine configurations. The first rule states that an arbitrary thread from $\mathcal T$
can take a move ($e \mathop{\rightarrow }\limits^{\alpha } e'$
) and generate a new machine configuration ($\langle \mathcal T'; \mathcal G'\rangle $
) based on the current one ($\langle \mathcal T; \mathcal G\rangle $
) with the new event ($a'$
) and the corresponding relations added to the event graph given that the C11 memory model axioms are preserved in the extended graph (${\mathsf {consistentC11}}(\mathcal G')$
). More specifically, assume that thread $i$
in the thread pool $\mathcal T$
is chosen to execute; its last event is $a$
and the expression to be executed is $e$
. Then $e$
is reduced to $e'$
following the corresponding event-step semantics rule, yielding a new action $\alpha $
with a new event name $a'$
. We update $i$
in the thread pool with this information $(a', e')$
, then add the newly generated event to the event graph as follows, yielding a new graph $\mathcal G'$
. Firstly, the mappings $a'\mapsto \alpha $
and $a'\mapsto i$
are added to the action map ($\mathcal G'.A$
) and the thread map ($\mathcal G'.T$
), respectively. This information will be crucial for reasoning about programs with C11 release-sequences, as we will need to know whether different writes are from the same thread or not. Secondly, we know that $a'$
comes after $a$
, so we record it in the sequenced-before relation ($\mathcal G'. {\mathsf {sb}}$
). Finally, the modification-order only gets updated if $a'$
is a write ($\mathbb {W}$
) or a successful update action ($\mathbb {U}$
), so we have $\mathcal G'. {\mathsf {mo}}\supseteq \mathcal G. {\mathsf {mo}}$
. Similarly, the read-from relation only gets updated if $a'$
is a read ($\mathbb {R}$
) or an update action ($\mathbb {U}$
) which reads from a write/update action $b$
. The exact way these two relations are updated is restricted by the C11 memory model axioms $\mathtt {consistentC11}$, to be presented in the next subsection.
The second machine-step rule indicates that the $\mathtt {fork\, } e$
command creates a new thread (i.e. $j$
) that will be added to the thread pool. The expression $e$
is waiting to be executed in the new thread while the parent thread ($i$
) continues with whatever is left in the evaluation context $K[{0}]$
.
A thread terminates when its expression is reduced to a pure value, and the program terminates when all its threads terminate.
C. The Memory Model
A memory model defines how different CPU cores can access a shared memory, and thus controls how multithreaded programs behave. As a language-level memory model that must be sufficiently general, the C11 memory model regulates program behaviours using a group of axioms over the event graph. Having discussed the event graph, we now introduce several derived relations before formally presenting the C11 memory model axioms.
1) Happens-Before Relation
The happens-before relation is the cornerstone of the causality in C11 programs. That is, for two events $a$
and $b$
, unless we can establish that $a$
happens before $b$
($a\mathop{\rightarrow }\limits^{ {\mathsf {hb}}}b$
), there is no guarantee that $a$
’s effect will be observed by $b$
. The happens-before relation is derived from the sequenced-before and the synchronised-with relations (to be discussed shortly in §II-C2), i.e., ${\mathsf {hb}}\triangleq ({\mathsf {sb}}\cup {\mathsf {sw}})^{+}$
. Intuitively, the happens-before relation preserves the program order for events within the same thread; events from different threads, however, can be ordered in ${\mathsf {hb}}$
only if their threads are synchronised at appropriate locations.
This idea is demonstrated in Fig. 5 with an unsuccessful message passing program. In this example, we assume both $x$
and $y$
are initialised as 0. In the first thread, the flag $y$
is changed to 1 (event $b$
) after the message $x$
is set to be 42 (event $a$
); the second thread first reads $y$
to be 1 (event $c$
) then reads $x$
(event $d$
). Though a chain of relations can be established as $a\mathop{\rightarrow }\limits^{ {\mathsf {sb}}}b\mathop{\rightarrow }\limits^{ {\mathsf {rf}}}c\mathop{\rightarrow }\limits^{ {\mathsf {sb}}}d$
, the stale value 0 can still be read by $d$
as the read-from relation between $b$
and $c$
is not strong enough to form a synchronisation, thus the happens-before relation $a\mathop{\rightarrow }\limits^{ {\mathsf {hb}}}d$
cannot be derived, and $a$
’s effect is not guaranteed to be seen by $d$
.
2) Release-Sequence and Synchronisation
As a weak memory model, the C11 memory model allows threads to have different observations of the memory. When necessary, however, one thread’s observation can be passed to another if they are synchronised. To form a synchronisation and share its observation, the sharer’s thread must first perform a release action. Intuitively, the release action labels its up-to-date memory observation as ready to be shared. Then a sequence of store operations, i.e., a release-sequence, works like a chain of messengers notifying readers that some information can be acquired. However, the acquisition only succeeds after an acquire action is performed in the reader’s thread. In this way, the acquire action is synchronised with the release action and is obliged to acknowledge the memory modifications that happened before the release action.
The release-sequence plays the crucial role in this process. It is led by an action with the release memory order, i.e., the release head, which can either be a release fence or a release store, and is followed by the longest sub-sequence of store operations from the modification order ($\mathsf {mo}$
) where these store operations are either in the same thread as the release head or atomic update operations (i.e., $\mathbb {U}$
).
In the example shown in Fig. 6a, the release write event $a$
is the release head; and the release-sequence also contains $b$
and $c$
as they follow $a$
in the ${\mathsf {mo}}$
order and each is either in $a$
’s thread ($b$
) or is an atomic update ($c$
). Another thread can acquire $a$
’s observation by reading from any event from this sequence. Fig. 6b shows that a release-sequence can be interrupted if a non-update store operation ($c$
in this case) from a different thread is positioned in the chain of ${\mathsf {mo}}$
order. In the case shown in Fig. 6b, the release-sequence only contains the head $a$
. As shown in Fig. 6c, a relaxed write can lead a hypothetical release-sequence, in which case it is called a hypothetical release head. If an operation acquires from this sequence, it is synchronised with the release fence prior to the hypothetical release head, if there is one.
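A simplified variant of the Fig. 6a scenario can be sketched in C11 (function names are our assumptions): a release store heads the sequence, a relaxed CAS from a second thread extends it as an atomic update, and a third party that acquires from the CAS's write still synchronises with the release head.

```c
#include <stdatomic.h>
#include <pthread.h>
#include <assert.h>

static atomic_int data, flag;

static void *head(void *arg) {      /* thread of the release head */
    (void)arg;
    atomic_store_explicit(&data, 42, memory_order_relaxed);
    atomic_store_explicit(&flag, 1, memory_order_release); /* head */
    return NULL;
}

static void *updater(void *arg) {   /* joins the release-sequence */
    (void)arg;
    int exp = 1;
    while (!atomic_compare_exchange_weak_explicit(
               &flag, &exp, 2,
               memory_order_relaxed, memory_order_relaxed))
        exp = 1;   /* retry until the head's write of 1 is seen */
    return NULL;
}

int run_release_sequence(void) {
    atomic_store(&data, 0);
    atomic_store(&flag, 0);
    pthread_t a, b;
    pthread_create(&a, NULL, head, NULL);
    pthread_create(&b, NULL, updater, NULL);
    while (atomic_load_explicit(&flag, memory_order_acquire) != 2)
        ;   /* acquire from the CAS's write, not the head itself */
    int v = atomic_load_explicit(&data, memory_order_relaxed);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return v;
}
```

Since an atomic update always continues the release-sequence of the head regardless of its thread, the acquiring reader is guaranteed to observe `data` as 42.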
To formally define the synchronised-with relation, we first define a predicate (along with some shorthand definitions) that indicates if an action $b$
is qualified to be a member of the release-sequence led by action $a$
:\begin{align*} \mathsf {rs\_{}element}(a,b)\triangleq&{\mathsf {sameThread}}^{\mathcal G}(a,b)\lor {\mathsf {isCAS}} ^{\mathcal G}(b) \\ {\mathsf {sameThread}}^{ {\mathcal {G}}}(a,b)\triangleq&{\mathcal {G}}.T(a)= {\mathcal {G}}.T(b) \\ {\mathsf {isCAS}}^ {\mathcal {G}}(b)\triangleq&{\mathcal {G}}.A(b)= \mathbb {U}(-,-,-,-,-) \\ \mathsf {W}_{O}(a)\triangleq&{\mathcal {G}}.A(a) \\=&\mathbb {W}(-,-,O) \lor \mathbb {U} (-,-,-,-,O) \\ \mathsf {R}_{O}(a)\triangleq&{\mathcal {G}}.A(a) \\=&\mathbb {R}(-,-,O) \lor \mathbb {U} (-,-,-,O,-)\end{align*}
For a store action $a$
, we say that event $b$
is in $a$
’s release-sequence, $a\mathop{\rightarrow }\limits^{ {\mathsf {rs}}}b$
or ${\mathsf {rs}}(a,b)$
, if and only if:\begin{equation*} \mathsf {W}_{O}(a)\land \left ({\begin{aligned} & a=b \lor \mathsf {rs\_{}element}(a,b)\land a\overset { {\mathsf {mo}}}{\rightarrow }b \land \\ & \forall c. a\overset { {\mathsf {mo}}}{\rightarrow }c\overset { {\mathsf {mo}}}{\rightarrow }b\Rightarrow \mathsf {rs\_{}element}(a,c) \end{aligned} }\right)\end{equation*}
Then we can formally define the synchronised-with relation, as shown in Fig. 7.
D. Demonstrating C11 Synchronisations
We have introduced the highly flexible release-sequence-based C11 synchronisation mechanism. In Fig. 8, we demonstrate how C11 synchronisations can be formed in different manners by repairing the message-passing protocol of Fig. 5 in various ways. Recall that, as summarised in Table 1, existing C11 program logics usually support simplified versions of the C11 synchronisation mechanism, due to its complexity, with only limited scenarios allowed to form synchronisations. These demonstrations are also used to illustrate which types of C11 synchronisations can be supported by existing C11 program logics.
The first C11 program logics, RSL and GPS, can only be used to reason about C11 programs whose synchronisations are formed between release-write/acquire-read pairs, as shown in Fig. 8a. With C11 fences supported, FSL and GPS+ can also reason about programs like the one shown in Fig. 8b. Still, they do not accept release-sequences with more than one element. FSL++ overcomes this limitation, but, except for the release head, it only accepts atomic update operations in a release-sequence (Fig. 8c). To the best of our knowledge, only this work supports reasoning about C11 programs with synchronisations based on fully featured release-sequences, including the scenario shown in Fig. 8d.
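The simplest of these repairs, in the style of Fig. 8a, pairs a release write to the flag with an acquire read. A hedged C11 sketch (function names are our assumptions):

```c
#include <stdatomic.h>
#include <pthread.h>
#include <assert.h>

static atomic_int msg_ra, flag_ra;

static void *writer_ra(void *arg) {
    (void)arg;
    atomic_store_explicit(&msg_ra, 42, memory_order_relaxed);
    atomic_store_explicit(&flag_ra, 1, memory_order_release);
    return NULL;
}

int run_rel_acq_mp(void) {
    atomic_store(&msg_ra, 0);
    atomic_store(&flag_ra, 0);
    pthread_t t;
    pthread_create(&t, NULL, writer_ra, NULL);
    while (atomic_load_explicit(&flag_ra, memory_order_acquire) != 1)
        ;   /* acquire read synchronises with the release write */
    int v = atomic_load_explicit(&msg_ra, memory_order_relaxed);
    pthread_join(t, NULL);
    return v;   /* the hb edge now forces 42 to be observed */
}
```

Unlike the all-relaxed variant, the release/acquire pair creates a synchronised-with edge, so the stale read of 0 is ruled out.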
E. The Axiomatic Model
With the preparations made for synchronisations, happens-before relations, etc., in this subsection we present the axiomatic definitions of the C11 memory model in Fig. 9, following an approach similar to that of [4]. Intuitively, the axioms are regulations that rule out illegal executions, e.g., “no one can read from an event that happens after itself” or “an update action cannot be interrupted”. These axiomatic rules also leave us enough room to ensure the aforementioned principle: no guarantee of observation without happens-before relations.
Specifically, ConsistentMO1 states that ${\mathsf {mo}}$
is a binary relation over writing actions. ConsistentMO2 requires all writing actions in ${\mathsf {mo}}$
to follow a strict total order. ConsistentRF1 indicates that there is always at least one writing action before a reading action on the same location, that is, all locations are initialised before being read. ConsistentRF2 says that a reading action cannot read from a writing action that happens after itself. For a non-atomic reading action, ConsistentRFNA requires that it read from a writing action that happens before it. Coherence puts restrictions on happens-before relations and modification orders, e.g., a reading action should not read an older value than its happens-before ancestors have observed. AtomicCAS enforces that no interruption can happen to an atomic update action. ConsistentAlloc states that the sets of locations allocated by two allocation actions do not intersect, that is, no location is allocated more than once. Acyclic is introduced to rule out the out-of-thin-air read problem, following [6], [9].
Note that while atomic locations are meant to be accessed concurrently, concurrent accesses (accesses that are not ordered in $\mathsf {hb}$
) to non-atomic locations with at least one write action lead to a hazardous situation called a data-race, in which the program behaviour is undefined. A memory error is another hazardous situation, which involves accessing a location before it is allocated. Definitions for these two hazardous situations are needed to complete our semantic model, as we use them to rule out executions with undefined results.\begin{align*}&\hspace {-0.6pc}\mathsf {dataRace}({\mathcal {G}}) \triangleq \exists \ell. \exists a, b \in {\mathsf {dom}} ({\mathcal {G}}.A). \\&\quad \left ({\begin{array}{l} {\mathcal {G}}.A(a) = \mathbb {W}(\ell, -, \mathtt {na}) \land {\mathcal {G}}.A(b) = \mathbb {W}(\ell, -, \mathtt {na}) \lor \\ {\mathcal {G}}.A(a) = \mathbb {W}(\ell, -, \mathtt {na}) \land {\mathcal {G}}.A(b) = \mathbb {R}(\ell, -, \mathtt {na}) \lor \\ {\mathcal {G}}.A(a) = \mathbb {R}(\ell, -, \mathtt {na}) \land {\mathcal {G}}.A(b) = \mathbb {W}(\ell, -, \mathtt {na}) \\ \end{array} }\right) \\&\quad \land \neg ((a, b) \in {\mathsf {hb}} \lor (b, a) \in {\mathsf {hb}}) \\&\hspace {-0.6pc}\mathsf {memErr}({\mathcal {G}}) \triangleq \exists \ell. \exists b \in {\mathsf {dom}} ({\mathcal {G}}.A). ({\mathcal {G}}.A(b) = \mathbb {W}(\ell, -, -) \\&\qquad \lor {\mathcal {G}}.A(b) = \mathbb {R}(\ell, -, -) \lor {\mathcal {G}}.A(b) = \mathbb {U}(\ell, -,-,-,-)) \\&\qquad \land \nexists a \in {\mathsf {dom}} ({\mathcal {G}}.A). {\mathcal {G}}.A(a) = \mathbb {A}(\vec {\ell }) \land \ell \in \vec {\ell } \land (a, b) \in {\mathsf {hb}}\end{align*}
SECTION IV.
Reasoning About C11 Release-Sequences and Fractional Permissions
The use of release-sequences gives C11 programs great flexibility in choosing the best way to synchronise their threads. However, as discussed in previous sections, no existing program logic supports formal verification of the use of fully featured release-sequences, due to their complexity. Also, no work in the GPS family supports fractional permissions. In this section, we introduce our new reasoning framework, GPS++, which supports the aforementioned features with the aid of several novel techniques.
A. A New Type of Assertion and the Enhanced Protocol System
The key to reasoning about the C11 synchronisation process is dealing with the relaxed write operations involved. As discussed in §II-C2, unlike a release write, a relaxed write cannot form a synchronisation by itself. That is, a relaxed write can share no resource with its readers unless (1) there is a release fence prior to it; or (2) it belongs to a release write’s release-sequence. To reason about the behaviour of a relaxed write in a C11 synchronisation, its context must be taken into consideration. The first scenario is relatively easy, as we can adopt the shareable assertion introduced by GPS+ to indicate whether there is a prior release fence that makes some resource available for the relaxed write to share. The second scenario is more complicated, as we need to know whether there is a prior release write to the same memory location and, if so, whether the release-sequence led by that release write is still valid at the point where the relaxed write takes place.
To tackle this problem, a naive solution is to introduce location-based restricted shareable assertions. That is, a release write operation on location $\ell $
may create an assertion $\langle P\rangle _\ell $
, which indicates $P$
is shareable by the following relaxed write operations on $\ell $
that are assumed to be members of its release-sequence. However, as discussed in §II-C2, a release-sequence can be interrupted by non-update writes from other threads, and this definition is not sufficient to detect such potential interruptions. Therefore, we introduce state-based restricted shareable assertions (restricted-shareable assertions for short) instead. Specifically, a release write that changes location $\ell $
to state $s$
may make some resource $P$
shareable, $\langle P\rangle _{s}$
, for the members of its release-sequence. To check if a following relaxed write belongs to the release-sequence and can use the restricted-shareable resource $\langle P\rangle _{s}$
, we first check (1) whether they are operations on the same location; (2) whether they are in the same thread; and (3) whether the sequence is free from interruptions. The check for condition (1) can be done simply by examining whether the two writes follow the same protocol. To enable the checks for conditions (2) and (3), we extend the state interpretation $\tau (s, z)$
used in GPS+ to the form like $\tau (s, z, tid, upd)$
, where $tid$
indicates in which threads the target location can be transformed to the state $s$
, and $upd$
is 1 if the state $s$
can only be reached by atomic update operations or 0 otherwise. With these preparations, we derive the following predicates:\begin{align*}&\hspace {-2pc} {\mathsf {sameThread}}(s,s') \\\triangleq&\forall t, t', v, v', c, c'. (\tau (s, v, t, c) \not \Rightarrow {\tt false}\land \\&\tau (s', v', t', c') \not \Rightarrow {\tt false}) \Rightarrow t = t' \\&\hspace {-2pc} {\mathsf {isCAS}}(s) \\\triangleq&\forall v, t, u. (\tau (s, v, t, u) \not \Rightarrow {\tt false}) \Rightarrow u = 1\end{align*}
With these definitions, the thread and interruption checks can be formalised as:\begin{equation*} \forall s''. s'\sqsupseteq _\tau s''\sqsupseteq _{\tau } s\Rightarrow {\mathsf {sameThread}}(s'',s)\lor {\mathsf {isCAS}} (s'').\end{equation*}
where $s$
is the state established by the release head and $s'$
is the target state of the relaxed write being checked. The following properties of our new restricted-shareable assertions can be derived from our semantic model:\begin{align*} \begin{array}{c} {~[\underline {{\mathrm {SEPARATION-R}}}]}\\ \langle P_{1}*P_{2}\rangle _{s}\Leftrightarrow \langle P_{1}\rangle _{s}*\langle P_{2}\rangle _{s} \end{array} \qquad \begin{array}{c} {~[\underline {{\mathrm {UNSHARE-R}}}]}\\ \langle P\rangle _{s}\Rrightarrow P \end{array}\end{align*}
The [SEPARATION-R] rule indicates that a restricted-shareable assertion can be split, and that several restricted-shareable assertions can be merged if they are restricted to the same state. The [UNSHARE-R] rule states that a restricted-shareable assertion can be transformed back into its normal form via a ghost move.
The new restricted-shareable assertion, the enhanced protocol system, and their properties are semantically supported by our upgraded resource model, which will be presented in later sections.
B. Reasoning About C11 Release-Sequences
With our new restricted-shareable assertions and the enhanced protocol system introduced, new reasoning rules can be devised to handle C11 programs with fully featured release-sequences. In this section, we first introduce the essential rules most related to C11 synchronisations in Fig. 10. Rules dealing with fractional permissions are presented in §IV-C, while other rules are discussed in §IV-D.
Unlike a release write in GPS+, which only needs to be concerned with what resource it can share with its readers, a release write in this work can also initiate a release-sequence; that is, it can make some resource shareable by the qualified relaxed writes that follow. This idea is formalised in our [RELEASE-STORE] rule: part of the resource $P$
currently held in the release write’s precondition can be transformed into a restricted shareable assertion $\langle Q_{1}\rangle _{s''}$
.
We require the $P$
used in the rule must be normal, i.e., it cannot contain any special form of assertion (e.g., shareable assertions or waiting-to-be-acquired assertions). This ensures that $Q_{1}$
is also free from special assertions and we do not create the problematic nesting of special assertions by putting $Q_{1}$
into $\langle {\dots } \rangle _{s''}$
. The formal definition for the normality check is ${\mathsf {normal}}(P)\triangleq P\Rightarrow {\tt false}\lor \langle P \rangle \not \Rightarrow {\tt false} $
. This is not the only occasion where the normality check is used: allowing special assertions to be transmitted across threads also raises problems, so we require state assertions to be normal as well, preventing special assertions from being included.
The [RELAXED-STORE-2] rule illustrates how a relaxed write works in a release write’s release-sequence. With the restricted-shareable assertion $\langle P_{2}\rangle _{s_{o}}$
created by a release write and passed down to the relaxed write, the relaxed write knows it may be in a release-sequence created at state $s_{o}$
. The validity of the release-sequence needs to be checked using the second premise (recall §IV-A); then $P_{2}$
can be used to imply the target state interpretation (the first premise).
Our reasoning framework is compatible with the rules developed in GPS+ for reasoning about the synchronisation initiated by a relaxed write with the help of a prior release fence; therefore, we inherit these rules as our [RELAXED-STORE-1] and [RELEASE-FENCE] rules. They state that a release fence can turn some resource into an (unrestricted) shareable resource, which can then be used by any relaxed write that follows. Intuitively, these two rules are sound for the C11 release-based synchronisation mechanism because every relaxed write after a release fence is a (hypothetical) release head and is allowed to share the observations established at the point where the release fence took place (recall §II-C2).
On the other side, if the reader is an acquire read, the [ACQUIRE-LOAD] rule applies. It states that when observing the location $\ell $
at a certain state $s'$
, some knowledge $\square Q$
can be learnt from the state interpretation. Similarly, as shown in the [RELAXED-LOAD] rule, a relaxed read can also retrieve some knowledge from the state interpretation, but this knowledge $\boxtimes Q$
is not instantly usable and is waiting to be acquired by a following acquire fence, which may transform it into normal knowledge according to the [ACQUIRE-FENCE] rule. These three rules are adopted from GPS+ with minor changes, as our extension with release-sequences remains compatible with the principles working on the reader’s side.
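The fence rules on both sides can be sketched together in C11 (function names are our assumptions): the producer's release fence makes the message shareable for the following relaxed store, and the consumer's relaxed load yields knowledge that only becomes usable after its acquire fence.

```c
#include <stdatomic.h>
#include <pthread.h>
#include <assert.h>

static atomic_int msg, ready;

static void *producer(void *arg) {
    (void)arg;
    atomic_store_explicit(&msg, 42, memory_order_relaxed);
    atomic_thread_fence(memory_order_release);  /* makes msg shareable  */
    atomic_store_explicit(&ready, 1, memory_order_relaxed);
    return NULL;
}

int run_fence_mp(void) {
    atomic_store(&msg, 0);
    atomic_store(&ready, 0);
    pthread_t t;
    pthread_create(&t, NULL, producer, NULL);
    while (atomic_load_explicit(&ready, memory_order_relaxed) != 1)
        ;                                       /* waiting-to-be-acquired */
    atomic_thread_fence(memory_order_acquire);  /* acquires the knowledge */
    int v = atomic_load_explicit(&msg, memory_order_relaxed);
    pthread_join(t, NULL);
    return v;
}
```

Both accesses to the flag are relaxed; it is the fence pair that creates the synchronised-with edge, so the consumer is guaranteed to see 42.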
Compare-and-swap (CAS) plays an important role in C11 concurrent programming: it is the foundation of the implementation of many locks and non-blocking algorithms. A CAS can join a release-sequence without being in the same thread as the release head; therefore, sophisticated concurrent algorithms like the atomic reference counter [11] can use CAS operations to create synchronisations between many different threads. GPS+ provides some basic support for CASes without user-specified memory orders. In this work, we devise a set of rules to cover CAS operations with all possible memory order specifications. We first take a close look at the [ACQ-REL-CAS] rule. The first premise corresponds to the case of success, where $\ell $
’s value is the expected value $v_{o}$. In this case, the acquire-release CAS performs as a release store. However, unlike a normal release write, which can only use the resource $P$
in its precondition to imply its target state’s interpretation, a successful CAS can also use the resource from the state interpretation of $\ell $
’s current state ($\tau (s', v_{o},-,-)$
). In this way, the CAS can retransmit the information passed down in its release-sequence. Moreover, a successful CAS can retrieve non-knowledge resources from the state interpretation of $\ell $
’s current state. The second premise corresponds to the case of failure where $\ell $
is found to have some value other than $v_{o}$
. In this case, the acquire-release CAS performs as an acquire read and some knowledge $\square R$
can be retrieved from the actual state observed.
The ideas for the other CAS rules are similar. However, when a CAS has a relaxed memory order for its reading component ([RLX-REL-CAS], [RLX-RLX-CAS-1], and [RLX-RLX-CAS-2]), it can still retrieve some information ($Q$
) from the state it reads, but this information must be marked as “waiting-to-be-acquired” in its postcondition ($\boxtimes Q$
). When a CAS has a relaxed memory order for its writing component ([ACQ-RLX-CAS], [RLX-RLX-CAS-1], and [RLX-RLX-CAS-2]), it can only use resources that are already shareable to derive its target state interpretation.
C. Dealing With Fractional Permissions
As discussed in §II-E, while atomic locations are designed for concurrent accesses, concurrent accesses (i.e., the accesses not ordered in ${\mathsf {hb}}$
) to a non-atomic location with at least one of them being a store operation lead to data-races. To ensure that verified programs are data-race-free, previous work in the GPS family models each non-atomic location as a resource that can only be exclusively held by one thread at a time, which means these logics cannot support reasoning about programs with concurrent non-atomic reads (even though such reads do not constitute a race condition). To verify real-world concurrent programs with concurrent non-atomic reads (e.g. the readers-writer-lock algorithm), we introduce fractional permissions for non-atomic locations.
The fractional permissions technique is an instance of partial permissions [12], [13]. In our setting, a fraction in the interval $[{0, 1}]$
is used to represent the portion of the ownership to a non-atomic location. The full permission $\ell \mathop{\mapsto }\limits^{1} -$
is needed for a thread to write to $\ell $
; while for a non-atomic read, only a fraction of the permission will be sufficient:\begin{align*}&{\frac { \begin{array}{c} [\underline {{\mathrm {NON-ATOMIC-STORE}}}] \\ \\ \end{array} }{ \{ \mathsf {uninit}(\ell)\lor \ell \mathop{\mapsto }\limits^{1} -\} [\ell]_ {\mathtt {na}}:=v \{\ell \mathop{\mapsto }\limits^{1} v\} }} \\&{\frac { \begin{array}{c} [\underline {{\mathrm {NON-ATOMIC-LOAD}}}]\\ p \in (0, 1] \end{array} }{ \{\ell \mathop{\mapsto }\limits^{p} v\} [\ell]_ {\mathtt {na}}\{x. x=v * \ell \mathop{\mapsto }\limits^{p} v\}}}\end{align*}
The empty permission $\ell \mathop{\mapsto }\limits^{0} -$
is semantically equivalent to $\tt emp$
. Permissions can also be combined or separated as defined below:\begin{align*}&\qquad \qquad \qquad [\underline {{\mathrm {SEPARATION-F}}}] \\&\quad \ell \overset {p}{\mapsto }v*\ell \overset {q}{\mapsto }v\Longleftrightarrow \begin{cases} \ell \overset {p\oplus q}{\mapsto }v & {\mathrm {if}}~ p\oplus q~ \text{is defined} \\ {\tt false}& {\mathrm {otherwise}} \end{cases} \\&{\mathrm {where}}\quad p\oplus q = \begin{cases} p+q & {\mathrm {if}}~ p, q, p+q\in [{0,1}] \\ {undefined} & {\mathrm {otherwise}} \end{cases}\end{align*}
According to the composition rules, a full permission (writing permission) is not compatible with another full permission or any other non-zero permissions. As a result, a program verified by our logic would not have any race condition where a write goes in parallel with other accesses to the same non-atomic location.
D. Other Rules
Besides the rules highlighted in previous subsections, we also have the following rules that make our reasoning system complete. We gather them into groups for the convenience of discussion.
The following inference rules depict properties of knowledge assertions. That is, knowledge can be transformed back to its normal form; the knowledge symbol can be safely nested; a piece of knowledge acts like pure information, and thus separating conjunction coincides with logical conjunction; a picked escrow, an assertion about an atomic location, and a pure term are all knowledge; and a duplicable ghost term is also a form of knowledge.\begin{align*}&\qquad \quad [\underline {{\mathrm {KNOWLEDGE-MANIPULATION-1}}\dots 7}]\\&\quad \square P\Rightarrow P \qquad \square P\Rightarrow \square \square P \qquad \square P * Q\Leftrightarrow \square P\land Q \\&[\sigma]\Rightarrow \square [\sigma]\quad \begin{array}{|c|c|}\hline {t:t'} & {\tau } \\ \hline \end{array}\Rightarrow \square \begin{array}{|c|c|}\hline {t:t'} & {\tau } \\ \hline \end{array}\quad t = t' \Rightarrow \square t = t' \\&\qquad \qquad \qquad {\frac {t \cdot _\mu t = t}{ \begin{array}{:c:c:} \hdashline {\gamma:t} & {\mu } \\ \hdashline \end{array} \Rightarrow \square \begin{array}{:c:c:} \hdashline {\gamma:t} & {\mu } \\ \hdashline \end{array}}}\end{align*}
The first inference rule below states that ghost terms can be composed or separated according to their PCM definitions. The second inference rule states that two atomic assertions about the same location are only coherent if their protocols are the same and one state is reachable from the other.\begin{align*}&\qquad \qquad \qquad [\underline {{\mathrm {SEPARATION-1}}\dots 2}]\\&\qquad \begin{array}{:c:c:} \hdashline {\gamma:t} & {\mu } \\ \hdashline \end{array} * \begin{array}{:c:c:} \hdashline {\gamma:t'} & {\mu } \\ \hdashline \end{array} \Leftrightarrow \begin{array}{:c:c:} \hdashline {\gamma:t\cdot _\mu t'} & {\mu } \\ \hdashline \end{array} \\&\begin{array}{|c|c|}\hline {\ell:s} & {\tau } \\ \hline \end{array} * \begin{array}{|c|c|}\hline {\ell:s'} & {\tau '} \\ \hline \end{array} \Rightarrow \tau = \tau '\land (s \sqsubseteq _\tau s' \lor s' \sqsubseteq _\tau s)\end{align*}
The following rules describe possible ghost moves. Particularly, similar to the [UNSHARE-R] rule we have discussed before, the fifth rule allows us to change an unrestricted shareable assertion to its normal form. The seventh rule states that a new ghost term can be created out of thin air with a fresh identifier. The eighth rule states that a ghost variable can be updated to a new value as long as the new value is compatible with the environment. The last two rules are inherited from GPS/GPS+ to cope with escrows.\begin{align*}&\qquad \qquad \qquad [\underline {{\mathrm {GHOST-MOVE-1}}\dots 8}]\\&{\frac {P \Rightarrow Q}{P \Rrightarrow Q}} \quad {\frac {P \Rrightarrow Q}{P * R \Rrightarrow Q*R}} \quad {\frac {P \Rrightarrow Q\quad Q\Rrightarrow R}{P \Rrightarrow R}} \quad \langle P\rangle \Rrightarrow P \\&\quad \mathsf {true} \Rrightarrow \exists \gamma. \begin{array}{:c:c:} \hdashline {\gamma:t} & {\mu } \\ \hdashline \end{array}\qquad {\frac {\forall t_{F}:[\![\mu]\!]. t_{1}\#_\mu t_{F} \Rightarrow t_{2}\#_\mu t_{F}}{ \begin{array}{:c:c:} \hdashline {\gamma:t_{1}} & {\mu } \\ \hdashline \end{array} \Rrightarrow \begin{array}{:c:c:} \hdashline {\gamma:t_{2}} & {\mu } \\ \hdashline \end{array}}} \\&\qquad \qquad {\frac { \begin{array}{c} \sigma: P \rightsquigarrow Q \end{array} }{ Q \Rrightarrow [\sigma]}} \qquad {\frac { \begin{array}{c} \sigma: P \rightsquigarrow Q \end{array} }{ P \wedge [\sigma] \Rrightarrow Q}}\end{align*}
Note that, following the GPS/GPS+ logic, for PCM terms of type $\mu $
we use the shorthand notation $t_{1}\#_\mu t_{2}$
to indicate that $t_{1} \cdot _\mu t_{2}$
is defined. The type annotation can be omitted when it is clear from context.
The following rule is for memory allocation. Starting from any valid precondition, $\mathtt {alloc}(n)$
allocates $n$
fresh, contiguous locations, which are marked as uninitialised, and returns the first location.\begin{align*}&\qquad \qquad \qquad \qquad [\underline {{\mathrm {ALLOCATION}}}]\\&{ \left \{{ \mathsf {true}}\right \}\, \mathtt {alloc}(n)\,\left \{{x. x\not =0* \mathsf {uninit}(x)* {\dots }* \mathsf {uninit}(x+n-1)}\right \}}\end{align*}
The following two rules are for atomic initialisation. In the precondition, $P$
must hold, as moving an atomic location into a particular state requires the state interpretation to be satisfied. The standard consequence and frame rules are also listed here.\begin{align*}&\qquad \qquad \qquad [\underline {{\mathrm {INITIALISATION-1}}\dots 2}]\\&\qquad \quad {\frac {P\Rightarrow \tau (s,v)}{ { \left \{{ \mathsf {uninit}(\ell)*P}\right \}\,[\ell]_ {\mathtt {rel}}:=v\,\left \{{ \begin{array}{|c|c|}\hline {\ell:s} & {\tau } \\ \hline \end{array}}\right \}}}} \\&\quad \qquad {\frac {P\Rightarrow \tau (s,v)}{ { \left \{{ \mathsf {uninit}(\ell)*\langle P\rangle }\right \}\,[\ell]_ {\mathtt {rlx}}:=v\,\left \{{ \begin{array}{|c|c|}\hline {\ell:s} & {\tau } \\ \hline \end{array}}\right \}}}} \\&\qquad \quad {\frac { \begin{array}{c} [\underline {{\mathrm {CONSEQUENCE-RULE}}}]\\ P'\Rrightarrow P \qquad { \left \{{P}\right \}\,e\,\left \{{x. Q}\right \}} \qquad \forall x. Q \Rrightarrow Q' \end{array} }{ \left \{{P'}\right \}\,e\,\left \{{x. Q'}\right \}}} \\&\qquad \qquad \qquad {\frac { \begin{array}{c} [\underline {{\mathrm {FRAME-RULE}}}]\\ { \left \{{P}\right \}\,e\,\left \{{x. Q}\right \}} \end{array} }{ \left \{{P*R}\right \}\,e\,\left \{{x. Q*R}\right \}}}\end{align*}
Essentially, the following rules state that the special assertions must not be nested. Nesting special assertions is problematic as it may introduce assertions that violate the design of the whole system. For instance, assuming we allow an assertion of the form $\boxtimes \langle P\rangle $
, it immediately becomes shareable, $\langle P\rangle $
, after an acquire fence, even though such a fence does not have releasing semantics. Therefore, we prevent such nesting at the resource-model level, and these inference rules are corollaries of our resource model design. Note that the annotation (e.g. $a$
) used in these rules can be any valid label (for restricted shareable assertions) or nothing (for unrestricted shareable assertions).\begin{align*}&\qquad \qquad [\underline {{\mathrm {ASSERTION-PROPERTY-1}}\dots 7}]\\&\qquad \qquad \qquad \square \langle P \rangle _{a} \Rightarrow {\tt false}~ \text{if} ~{\tt EMP}{\not \in } [\![P]\!]^\rho \\&\qquad \qquad \qquad \quad \boxtimes \langle P \rangle _{a} \Rightarrow {\tt false}~ \text{if} ~{\tt EMP}{\not \in } [\![P]\!]^\rho \\&\qquad \qquad \qquad \boxtimes \boxtimes P \Rightarrow {\tt false}~ \text{if} ~{\tt EMP}{\not \in } [\![P]\!]^\rho \\&\qquad \qquad \qquad \quad \langle \boxtimes P \rangle _{a} \Rightarrow {\tt false}~ \text{if} ~{\tt EMP}{\not \in } [\![P]\!]^\rho \\&\qquad \qquad \qquad \square \boxtimes P \Rightarrow {\tt false}~ \text{if} ~{\tt EMP}{\not \in } [\![P]\!]^\rho \\&\qquad \qquad \qquad \quad \langle P \rangle _{a} * \langle Q \rangle _{a} \Leftrightarrow \langle P * Q \rangle _{a} \\&\qquad \qquad \qquad \langle \langle P \rangle _{a_{1}}\rangle _{a_{2}} \Rightarrow {\tt false}~ \text{if} ~{\tt EMP}{\not \in } [\![P]\!]^\rho \\&\qquad \qquad [\underline {{\mathrm {PURE-REDUCTION-AXIOM-1}}\dots 2}]\\&\{ \mathsf {true}\} v \{x. x\!=\!v\}\qquad \{ \mathsf {true}\} v\!==\!v' \{x. x\!=\!1\Leftrightarrow v\!=\!v'\}\end{align*}
We also have the following rules for conditional statements, let-binding, fork expression, and the repeat loop. The fork rule states that given $e$
can be safely executed from the precondition $Q$
, a thread with the precondition $\{P*Q\}$
can fork a new thread to execute $e$
, leaving only $P$
to the parent thread.\begin{align*}&\qquad \quad {\frac {\begin{array}{c} [\underline {{\mathrm {CONDITIONAL}}}]\\ { \left \{{P*v\not =0}\right \}\,e_{1}\,\left \{{x.Q}\right \}}\quad { \left \{{P*v=0}\right \}\,e_{2}\,\left \{{x.Q}\right \}}\\ \end{array}}{ { \left \{{P}\right \}\, \mathtt {if\, } v \mathtt {then\, } e_{1} \mathtt {else\, } e_{2}\,\left \{{x.Q}\right \}}}} \\&{\frac { \begin{array}{c} [\underline {{\mathrm {LET-BINDING}}}]\\ { \left \{{P}\right \}\,e\,\left \{{x. Q}\right \}} \qquad \forall x. { \left \{{Q}\right \}\,e'\,\left \{{y. R}\right \}} \end{array}}{ { \left \{{P}\right \}\, \mathtt {let\, } x = e \mathtt {\, in\, } e'\,\left \{{y. R}\right \}}}} {\frac {\begin{array}{c} [\underline {{\mathrm {FORK}}}]\\ { \left \{{Q}\right \}\,e\,\left \{{ \mathsf {true}}\right \}} \end{array}}{ { \left \{{P*Q}\right \}\, \mathtt {fork\, } e\,\left \{{P}\right \}}}} \\&\qquad \qquad {\frac { \begin{array}{c} [\underline {{\mathrm {REPEAT}}}]\\ { \left \{{P}\right \}\,e\,\left \{{x. (x = 0 \wedge P) \vee (x \not = 0 \wedge Q)}\right \}} \end{array}}{ { \left \{{P}\right \}\, \mathtt {repeat\, } e \mathtt {end}\,\left \{{x. Q}\right \}}}}\end{align*}
In this section, we first demonstrate our logic with an illustrative example using a release-sequence to pass messages between three threads. Then, we further illustrate the power of our logic by using it to verify a readers-writer-lock implementation where both the release-sequence and concurrent reads are involved.
A. An Illustrative Example
In Fig. 11 we show a message passing program. In this example, the initial values for $x$
and $y$
are both 0. In the first thread, the message $x$
is set to be 42 then $y$
is set to be 1. As the write operation to $y$
is a release write, it initiates a release-sequence that contains the following relaxed write and may contain the CAS in the second thread. In the third thread, $y$
is repeatedly checked until a non-zero value is observed. Then the message $x$
is examined. Note that, for readability we use $x = e_{1};e_{2}$
as an equivalent expression for the command $\mathtt {let\, } x = e_{1} \mathtt {\, in\, } e_{2}$
(or simply $e_{1};e_{2}$
if the evaluation result of $e_{1}$
is not used in $e_{2}$
). For the same reason, we use $||$
to separate the threads forked.
We assert that at the end of the execution, the reading of $x$
must return the new value 42. Intuitively, this is because for the third thread to exit the loop, a non-zero value $y$
must be observed, which can only be the result from one of the writes in the release-sequence led by the release write to $y$
in the first thread. Therefore a synchronisation is formed between the release write to $y$
and the acquire read of $y$
, ensuring the information about $x=42$
is available when the third thread reads the value of $x$
. To formally reason about this procedure, the protocols for $x$
and $y$
must be defined first. We call $x$
’s states ${\mathbf {x}_{o}}$
(the initial state) and ${\mathbf {x}_{n}}$
(the new state). Its protocol ${\mathbf {P}_{x}}$
allows one possible state transition: ${\mathbf {x}_{o}}\sqsubseteq _ {\mathbf {P}_{x}} {\mathbf {x}_{n}} $
. The state interpretations can be defined as:\begin{equation*} {\mathbf {P}_{x}}(s,v,t,c)\triangleq s= {\mathbf {x}_{n}}\land v=42\land t=1\land c=0,\end{equation*}
which states that thread 1 is allowed to change $x$
to state ${\mathbf {x}_{n}}$
by writing 42 to it, and that the write need not be a CAS.
There are four states for $y$
: ${\mathbf {y}_{0}}, {\mathbf {y}_{1}}, {\mathbf {y}_{2}}$
and ${\mathbf {y}_{3}}$
and the following transitions are permitted:\begin{equation*} {\mathbf {y}_{0}}\sqsubseteq _ {\mathbf {P}_{y}} {\mathbf {y}_{1}}, {\mathbf {y}_{1}}\sqsubseteq _ {\mathbf {P}_{y}} {\mathbf {y}_{2}}, {\mathbf {y}_{1}}\sqsubseteq _ {\mathbf {P}_{y}} {\mathbf {y}_{3}}, \textit {and}~ {\mathbf {y}_{2}}\sqsubseteq _ {\mathbf {P}_{y}} {\mathbf {y}_{3}}.\end{equation*}
The state interpretations are defined as:\begin{align*}&\hspace {-1.2pc} {\mathbf {P}_{y}}(s,v,t,c) \triangleq \\&\quad s= {\mathbf {y}_{1}}\land v=1\land t=1\land c=0 \land \square \, \begin{array}{|c|c|}\hline {x: {\mathbf {x}_{n}}} & { {\mathbf {P}_{x}}} \\ \hline \end{array} \\&\lor s= {\mathbf {y}_{2}}\land v=2\land t=2\land c=1 \land \square \, \begin{array}{|c|c|}\hline {x: {\mathbf {x}_{n}}} & { {\mathbf {P}_{x}}} \\ \hline \end{array} \\&\lor s= {\mathbf {y}_{3}}\land v=3\land t=1\land c=0 \land \square \, \begin{array}{|c|c|}\hline {x: {\mathbf {x}_{n}}} & { {\mathbf {P}_{x}}} \\ \hline \end{array},\end{align*}
which indicates which value, thread, and CAS indicator are needed to move $y$
to a corresponding state. Most importantly, the interpretations also specify that the stores must have the knowledge $\begin{aligned} \square \, \begin{array}{|c|c|}\hline {x: {\mathbf {x}_{n}}} & { {\mathbf {P}_{x}}} \\ \hline \end{array} \end{aligned}$
at hand before the actions can be taken. Therefore, when the acquire load in the third thread reads from any one of them, the knowledge about $x$
can be retrieved.
The proof of the program is illustrated in Fig. 12. All threads start by observing $x$
and $y$
in their initial states. In the first thread, the relaxed store to $x$
moves $x$
to its new state thus we have $\begin{aligned} \begin{array}{|c|c|}\hline {x: {\mathbf {x}_{n}}} & { {\mathbf {P}_{x}}} \\ \hline \end{array} \end{aligned}$
in (1.2). This resource is essential for the next command to be performed, as the knowledge $\begin{aligned} \begin{array}{|c|c|}\hline {x: {\mathbf {x}_{n}}} & { {\mathbf {P}_{x}}} \\ \hline \end{array} \end{aligned}$
is required by the state interpretation of ${\mathbf {y}_{1}}$
. With $\begin{aligned} \begin{array}{|c|c|}\hline {x: {\mathbf {x}_{n}}} & { {\mathbf {P}_{x}}} \\ \hline \end{array} \end{aligned}$
at hand, the release write to $y$
can be performed and moves $y$
to the state ${\mathbf {y}_{1}}$
. Moreover, according to the [RELEASE-STORE] rule and the rules about knowledge, it can make a restricted-shareable copy of $\begin{aligned} \begin{array}{|c|c|}\hline {x: {\mathbf {x}_{n}}} & { {\mathbf {P}_{x}}} \\ \hline \end{array} \end{aligned}$
, which can be used by the relaxed store that follows. When processing the relaxed store to $y$
, the [RELAXED-STORE-2] rule is applied. The release-sequence validity check succeeds, as the only state that can lie between the release head ${\mathbf {y}_{1}}$
and the target state ${\mathbf {y}_{3}}$
is ${\mathbf {y}_{2}}$
, a state that can only be reached by a CAS operation, and a CAS does not interrupt a release-sequence.
Under a different scheduling, the CAS from the second thread may find $y$
to be in state ${\mathbf {y}_{0}}$
, ${\mathbf {y}_{1}}$
, or ${\mathbf {y}_{3}}$
before its execution. The CAS only succeeds if it observes ${\mathbf {y}_{1}}$
. But even when it fails, the protocol still holds. According to the [ACQ-REL-CAS] rule, the CAS operation’s postcondition can be derived as shown in (2.2).
In the third thread, $y$
is repeatedly read. According to the [REPEAT] rule and the definitions of ${\mathbf {P}_{y}}$
, exiting the loop requires $y$
to be at least in state ${\mathbf {y}_{1}}$
, as denoted in (3.2). According to the [ACQUIRE-LOAD] rule, some common knowledge can be retrieved from the state interpretation, which is $\begin{aligned} \begin{array}{|c|c|}\hline {x: {\mathbf {x}_{n}}} & { {\mathbf {P}_{x}}} \\ \hline \end{array} \end{aligned}$
in this case. Therefore, when $x$
is read in the last step, it is guaranteed to return the latest value 42.
Our verification system can also detect possible interruptions in a release-sequence and will not allow the verification to go through. This is illustrated by the example shown in Fig. 13, where the CAS operation in the second thread is changed to a relaxed store.
For the new program, state transitions and interpretations for $y$
have to be changed to:\begin{align*}&{\mathbf {y}_{0}}\sqsubseteq _ {\mathbf {P}_{y}} {\mathbf {y}_{1}} \land {\mathbf {y}_{0}}\sqsubseteq _ {\mathbf {P}_{y}} {\mathbf {y}_{2}} \land { {\mathbf {y}_{1}}\sqsubseteq _ {\mathbf {P}_{y}} {\mathbf {y}_{2}} }\,\land \\&{\mathbf {y}_{1}}\sqsubseteq _ {\mathbf {P}_{y}} {\mathbf {y}_{3}} \land {\mathbf {y}_{2}}\sqsubseteq _ {\mathbf {P}_{y}} {\mathbf {y}_{1}} \land { {\mathbf {y}_{2}}\sqsubseteq _ {\mathbf {P}_{y}} {\mathbf {y}_{3}} }\land {\mathbf {y}_{3}}\sqsubseteq _ {\mathbf {P}_{y}} {\mathbf {y}_{2}} \\&{\mathbf {P}_{y}}(s,v,t,c) \triangleq \\&\quad s= {\mathbf {y}_{1}}\land v=1\land t=1\land c=0 \land \square \, \begin{array}{|c|c|}\hline {x: {\mathbf {x}_{n}}} & { {\mathbf {P}_{x}}} \\ \hline \end{array}\\&\lor s= {\mathbf {y}_{2}}\land v=2\land t=2\land c=0 \\&\lor s= {\mathbf {y}_{3}}\land v=3\land t=1\land c=0 \land \square \, \begin{array}{|c|c|}\hline {x: {\mathbf {x}_{n}}} & { {\mathbf {P}_{x}}} \\ \hline \end{array}\end{align*}
If we attempt to apply [RELAXED-STORE-2] to the command $[y]_ {\mathtt {rlx}}:= 3$
and change $y$
to state ${\mathbf {y}_{3}}$
using the restricted-shareable resource obtained from release head ${\mathbf {y}_{1}}$
, the validity check for the release-sequence (the second premise) will fail: according to $y$
’s protocol definition, a non-CAS state ${\mathbf {y}_{2}}$
from a different thread may interrupt the sequence. The verification thus fails, as expected.
B. Verifying the Readers-Writer-Lock
In this section, we use our reasoning system to verify the readers-writer-lock implementation shown in Fig. 14. Note that, this lock has a bounded capacity N, i.e. it allows at most N readers (or one writer) to access the protected non-atomic data field at a time. Note also that, for readability we use the following field offsets: $x. {\tt data}\triangleq x+0$
, which is the location of the data field; and $x. {\tt count}\triangleq x+1$
, which refers to a counter that keeps track of the number of active players (a reader is counted as 1, while a writer is solely counted as N). Once the lock is created and initialised by the $\tt new()$
function, a reader can atomically increase the counter by 1 (when there are vacancies available, i.e. $\textit {r}_{1} \not = {\mathsf {N}}$
in Fig. 14) to inform other players that there is an active reader, which prevents a writer from obtaining the lock. After using the shared resource, the reader relinquishes its lock by atomically decreasing the counter by 1. A writer waits until the counter is 0 (which indicates that there are neither active readers nor writers); then it atomically sets the counter to N, indicating that the shared resource is fully occupied, and can safely modify the non-atomic data. When finished, the writer releases the lock by setting the counter back to 0.
Our readers-writer-lock allows concurrent reads. To verify such an algorithm, we use fractional permissions. The idea is to divide the permission to access $x. {\tt data}$
into N pieces ($1/{\sf {N}}~ {\mathrm {each}}$
). Correspondingly, the value of the counter represents how many pieces of the fractional permission have been distributed. By atomically increasing the counter by 1, a reader gains one piece of $1/{\sf {N}}$
permission, which is sufficient for it to perform the read action. However, a writer will have to update the counter from 0 to N to retrieve the full permission that consists of all of the N pieces. When releasing the lock, the permissions go back to the invariant (or protocol, in our terminology). This design is demonstrated in the execution graph shown in Fig. 15, which is annotated with the permissions transferred along the execution. In this particular execution, $\tt Writer 1$
first sets $x. {\tt data}$
to 42 after obtaining the full permission. Then this information is released together with the full ownership of the protected data by the release write $c$
. In fact, $c$
initiates a release-sequence, from which the two readers both retrieve one piece of the $1/{\sf {N}}$
permission to access $x. {\tt data}$
and read the value 42. The rest of the permission goes to $\tt Writer 2$
when $j$
reads from the release-sequence (which is underlined): $\underline {c\mathop{\rightarrow }\limits^{ {\mathsf {mo}}}d\mathop{\rightarrow }\limits^{ {\mathsf {mo}}}g\mathop{\rightarrow }\limits^{ {\mathsf {mo}}}f\mathop{\rightarrow }\limits^{ {\mathsf {mo}}}i}\mathop{\rightarrow }\limits^{ {\mathsf {rf}}}j$
. The two $1/{\sf {N}}$
permissions assigned to the readers are also transferred to $\tt Writer 2$
via release-sequences: $\underline {f\mathop{\rightarrow }\limits^{ {\mathsf {mo}}}i}\mathop{\rightarrow }\limits^{ {\mathsf {rf}}}j$
and $\underline {i}\mathop{\rightarrow }\limits^{ {\mathsf {rf}}}j$
. Thereupon, $\tt Writer 2$
can write to $x. {\tt data}$
freely.
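As a sanity check on Fig. 15, the fractions in this execution add up to a whole permission. The following bookkeeping is our own paraphrase, with all shares written over the common denominator ${\sf N}$:

```latex
\begin{align*}
\text{released at } c &: \tfrac{\mathsf{N}}{\mathsf{N}}
  &&\text{(Writer 1 returns the full permission to the protocol)}\\
\text{taken by the two readers} &: \tfrac{1}{\mathsf{N}}+\tfrac{1}{\mathsf{N}}
  &&\text{(leaving } \tfrac{\mathsf{N}-2}{\mathsf{N}} \text{ in the protocol)}\\
\text{collected by Writer 2 at } j &:
  \tfrac{\mathsf{N}-2}{\mathsf{N}}+\tfrac{1}{\mathsf{N}}+\tfrac{1}{\mathsf{N}}
  = \tfrac{\mathsf{N}}{\mathsf{N}}.
\end{align*}
```

The remainder $({\sf N}-2)/{\sf N}$ arrives via the long underlined release-sequence, and the two reader shares via the two shorter ones, so Writer 2 reassembles the full permission.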
Now we embed this idea into the definition of the counter’s protocol ${\mathbf {P}_{c}}$
and prove that our algorithm works as intended while the protocol is preserved. Firstly, we choose $\mathbf {c}_{i,j}$
as the counter’s states, where $i$
tracks how many pieces of the fractional permissions have been issued so far and $j$
represents the number of the fractional permissions that have been returned. As the capacity of our lock is N, we require that for any valid state $i\in [j,j+{\sf {N}}]$
holds. For valid states, the state transitions are defined as $\mathbf {c}_{i,j}\sqsubseteq _ {\mathbf {P}_{c}} \mathbf {c}_{i+1,j}$
(when the counter is increased and new permission is issued) and $\mathbf {c}_{i,j}\sqsubseteq _ {\mathbf {P}_{c}} \mathbf {c}_{i,j+1}$
(when the counter is decreased and some permission is returned). The state interpretation is defined as below:\begin{align*} {\mathbf {P}_{c}}(s,v,t,u)\triangleq&s\!=\! \mathbf {c}_{i,j}\land v\!=\!i-j\land (i\!=\!j\lor i>j\land u\!=\!1) \\&*\exists v'. x. {\tt data}\mathop {\longmapsto }\limits ^{({\sf {N}}-v)/{\sf {N}}}_{}v'.\end{align*}
This definition specifies that at state $\mathbf {c}_{i,j}$
, we have $x. {\tt count} = i - j$
. Both CASes and atomic writes can change $x. {\tt count}$
to 0 ($i=j$
); however, we must use a CAS ($u=1$
) to change $x. {\tt count}$
to other states (where $i > j$
). Most importantly, at state $\mathbf {c}_{i,j}$
we must ensure that there is $({\sf {N}}-(i-j))/{\sf {N}}$
permission under the guard of the protocol. This enables a player to retrieve some permission when it increases the counter (move $i$
forward) and forces a player to return permission when it decreases the counter (move $j$
forward). With these preparations, we can verify our readers-writer-lock algorithm. Firstly, we demonstrate in Fig. 16 that the $\tt new()$
function prepares the lock invariant.
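The state space of ${\mathbf {P}_{c}}$ and its side conditions can also be checked with a small executable model. The encoding below is our own illustration (the names and the value of N are assumptions): states are pairs $(i,j)$, a transition either issues a $1/{\sf N}$ piece or returns one, validity requires $i\in[j,j+{\sf N}]$, and the protocol guards the fraction $({\sf N}-(i-j))/{\sf N}$ of $x.{\tt data}$.

```c
#include <stdbool.h>

#define N 8  /* lock capacity, as in Fig. 14 */

typedef struct { int i, j; } pstate;  /* c_{i,j}: pieces issued / returned */

/* A state c_{i,j} is valid when i lies in [j, j+N]. */
bool valid(pstate s) { return s.j <= s.i && s.i <= s.j + N; }

/* The counter value at c_{i,j} is i - j. */
int counter(pstate s) { return s.i - s.j; }

/* Numerator of the fraction of x.data still guarded by the protocol:
   (N - (i - j)) / N. */
int guarded_num(pstate s) { return N - counter(s); }

/* Transition c_{i,j} to c_{i+1,j}: the counter is increased and a new
   1/N piece is issued; rejected if it would leave the valid range. */
bool step_issue(pstate *s) {
    pstate t = { s->i + 1, s->j };
    if (!valid(t)) return false;
    *s = t;
    return true;
}

/* Transition c_{i,j} to c_{i,j+1}: the counter is decreased and a
   1/N piece is returned to the protocol. */
bool step_return(pstate *s) {
    pstate t = { s->i, s->j + 1 };
    if (!valid(t)) return false;
    *s = t;
    return true;
}
```

In particular, issuing all N pieces drives the guarded fraction to 0 (the writer's situation), and no further piece can be issued until one is returned.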
Then, as shown in Fig. 17, a reader begins with some (possibly outdated) knowledge of the counter. It repeatedly reads the counter until it observes some value that is not equal to N, which indicates that the lock is not fully occupied. According to the protocol, we can deduce that at state (2) $r_{2}$
is actually smaller than N. This is critical for our reasoning: when the reader increases the counter in the next step, we must know that doing so cannot push the counter over the bound and break the protocol. Therefore, after a successful CAS that increases the counter by 1, the reader exits the loop knowing that the protocol is preserved and a fraction of the ownership of $x. {\tt data}$
is retrieved as shown in state (4). At this stage, the retrieved resource is waiting-to-be-acquired, as the CAS itself uses only a relaxed memory order for its load. An acquire fence turns the resource into its normal form at (5). Then $x. {\tt data}$
can be read according to the [NON-ATOMIC-LOAD] rule. When unlocking, the reader first gets the latest value of the counter (in state (7)). As it holds fractional ownership of $x. {\tt data}$
at hand, we can deduce that the environment cannot change the counter to 0, which would require all the fractions of $x. {\tt data}$
’s ownership to be returned. This idea is also formalised in the [RELAXED-LOAD] rule, according to which we can only read states whose interpretation is compatible with the resource we currently hold. Thus, we know that the counter’s latest value must be greater than 0 and can be safely decreased by 1 using a release CAS when we return the fractional permission (state (9)). Finally, we return the value read from $x. {\tt data}$
.
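The compatibility argument used at state (7) reduces to a one-line fraction count (our paraphrase of the [RELAXED-LOAD] side condition): the reader's share and the protocol's share of $x.{\tt data}$ cannot exceed the full permission, so for any observable state $\mathbf{c}_{i,j}$ we get

```latex
\begin{equation*}
\frac{1}{\mathsf{N}} + \frac{\mathsf{N}-(i-j)}{\mathsf{N}} \le 1
\;\Longrightarrow\; i - j \ge 1,
\end{equation*}
```

i.e. every state the reader can read has a counter value of at least 1, so the decrementing release CAS can never drive the counter below 0.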
The verification of the writer’s program is shown in Fig. 18. A writer’s lock can only be acquired when the value of the counter is 0. When it reads 0 from the counter, it starts attempting to update the counter to N using a CAS. When the CAS succeeds, the full ownership of $x. {\tt data}$
can be retrieved according to the protocol and the [RELAXED-LOAD] rule (in state (3)). Then, an acquire fence makes the waiting-to-be-acquired resource locally available before it can be changed to the new value $v$
that is given in the parameters. Releasing the writer’s lock is easier than releasing a reader’s lock. As the writer owns the full permission to the protected data (state (6)), it knows that the environment cannot change the counter to another state (which would require adding or removing fractional permissions from the protocol) while it holds the lock. Therefore, the writer can simply use a release write to change the counter back to 0 and release the full ownership of the data.
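The writer's release step rests on the dual counting argument (again our paraphrase): holding the full permission forces the protocol's share to 0, so the only state compatible with the writer's resources satisfies

```latex
\begin{equation*}
\frac{\mathsf{N}}{\mathsf{N}} + \frac{\mathsf{N}-(i-j)}{\mathsf{N}} \le 1
\;\Longrightarrow\; i - j \ge \mathsf{N},
\end{equation*}
```

which together with the validity bound $i-j\le\mathsf{N}$ pins the counter at exactly N; hence no environment transition on the counter is possible while the writer holds the lock.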
In this section, we formulate the soundness of our proposed program logic. As in GPS/GPS+, our reasoning framework is compositional. That is, triples can be proved individually, and a larger proof can then be composed by connecting the proved triples with the $\tt let$
and $\tt fork$
rules provided. To bridge the gap between the localised reasoning and the threads’ global interactions and non-sequentially-consistent behaviours, we formulate the notions of local safety and global safety and provide the soundness proof at both layers.
A. Local Safety
Rely-guarantee reasoning [15], [29] is deeply rooted in the soundness of GPS++. As in GPS/GPS+, we formulate local safety to indicate that, given a thread’s rely-condition respected by other threads’ guarantee-conditions, the thread conforms to its own guarantee. However, resource maps are used as the base model in our proofs to capture subtle C11 synchronisation features such as release-sequences.
Based on the rely and guarantee definitions we have introduced in the previous section, we define ${\mathsf {LSafe}}_{n}(e, \Phi)$
as the set of resource maps on which the command $e$
can safely execute for $n$
steps and end up with $\Phi $
, which is the interpretation of the triple’s postcondition with the return value filled into its placeholders, being satisfied:\begin{align*}&\mathcal {R}\in {\mathsf {LSafe}}_{0}(e, \Phi) \triangleq {\mathrm {always}} \\&\mathcal {R}\in {\mathsf {LSafe}}_{n+1}(e, \Phi) \triangleq {\mathrm {If}}~ e \in \textit {Val} ~{\mathrm {then}}~ \mathcal {R}\Rrightarrow [\![\Phi (e)]\!]^\rho \\&\quad {\mathrm {If}}~ e = K[\mathtt {fork\, } e'] ~{\mathrm {then}} \\&\quad \qquad \mathcal {R}\in {\mathsf {LSafe}}_{n}(K[{0}], \Phi) * {\mathsf {LSafe}}_{n}(e', \mathsf {true}) \\&\quad {\mathrm {If}}~ e \xrightarrow {\alpha } e' ~{\mathrm {then}}~ \forall \mathcal {R} _{F} {\scriptstyle \#} \mathcal {R}. \forall \mathcal {R} _{pre} \Rrightarrow {\mathsf {rely}} (\mathcal {R}\oplus \mathcal {R}_{F}, \alpha). \exists P'. \\&\quad \qquad \mathcal {R}_{pre} \in [\![P']\!]^\rho \land \forall \mathcal {R} ' \in [\![P']\!]^\rho. (\mathcal {R}_{pre}, \mathcal {R}') \in \mathsf {wpe} (\alpha) \\&\quad \qquad \Longrightarrow \exists \mathcal {R}_{post}. (\mathcal {R}_{post} \oplus \mathcal {R} _{F}, -) \in {\mathsf {guar}}(\mathcal {R}_{pre}, \mathcal {R}', \alpha) \\&\qquad \quad \qquad \land \mathcal {R} _{post} \in {\mathsf {LSafe}} _{n}(e', \Phi) \end{align*}
It is worth noting that, with the possible environment moves taken into consideration, the expression $e$
actually works on some $\mathcal {R}'$
that follows the action’s rely condition. Note also that the $\mathsf {wpe}$
provides a sanity check to rule out obviously problematic environment changes.\begin{align*} \begin{array}{l|l} \alpha & (\mathcal {R}_ {\mathsf {pre}}, \mathcal {R}') \in \mathsf {wpe} (\alpha)~ {\mathrm {if}}\\ \hline \mathbb {A}(\ell _{1}..\ell _{n}) & \forall i. 1\leq i\leq n\Rightarrow \mathcal {R} '(\ell _{i})=\bot \\ \mathbb {W}(\ell,-, \mathtt {at}) & \mathcal {R}_ {\mathsf {pre}}(\mathsf {L})[\ell]\! =\! \mathtt {at}(-)\land \mathcal {R} '(\mathsf {L})[\ell] \!=\! \mathtt {at}(-) \!\Rightarrow \!\\ &\quad \exists \mathcal {R}_{E}\in {\mathsf {envMv}} (\mathcal {R}_ {\mathsf {pre}},\ell,-).\\ &\qquad \mathcal {R} _{E}(\mathsf {L})[\ell]\!=\! \mathcal {R}'(\mathsf {L})[\ell] \\ \mathbb {U}(\ell,-,-,-) & \mathcal {R}_ {\mathsf {pre}}(\mathsf {L})[\ell]\!=\! \mathtt {at}(-)\!\Rightarrow \! \mathcal {R}'(\mathsf {L})[\ell]\!\equiv \! \mathcal {R}_{pre}(\mathsf {L})[\ell] \end{array}\end{align*}
Intuitively, the definitions above state that a memory allocation action only allocates fresh locations; an atomic write may observe its target location at a state other than the state in its precondition, while this is not allowed for an atomic update action.
As in GPS and GPS+, we formulate the local soundness definition as:\begin{equation*} \rho \vDash { \left \{{P}\right \}\,e\,\left \{{x.Q}\right \}} \triangleq \!\!\forall n, \mathcal {R}\in [\![P]\!]^\rho. \mathcal {R}\!\Rrightarrow {\mathsf {LSafe}} _{n}(e, \lambda x. [\![Q]\!]^\rho).\end{equation*}
The local soundness provides semantics for our Hoare triples. It states that starting from any computation state $\mathcal {R}$
in the triple’s precondition $P$
, it is safe for the expression $e$
to execute as many steps as necessary; and when $e$
terminates, all of its possible result states satisfy the triple’s postcondition $\lambda x. [\![Q]\!]^\rho $
, where $x$
is $e$
’s return value.
Ghost moves and corollary inference rules also play an important role in our reasoning system. To validate the (local) soundness of our reasoning system, we first demonstrate the correctness of our ghost move and corollary inference rules introduced in §IV.
Our reasoning system features new corollary inference rules (of the form $P\Rightarrow Q$
), namely [SEPARATION-R], [SEPARATION-F], [SEPARATION-1…2], and [KNOWLEDGE-MANIPULATION-1…7], to deal with the newly introduced types of assertions. These rules’ correctness is ensured by the enhanced resource model and is formalised in Corollary 1 (whose proof is left in the appendix).
Corollary 1 (Soundness of Corollary Inference Rules):
Our corollary inference rules are semantically sound. That is, given an inference rule allowing $P\Rightarrow Q$
we have $\exists \mathcal {R}. \lfloor \mathcal {R} \rfloor \cap [\![P]\!]^\rho \subseteq [\![Q]\!]^\rho $.
In our reasoning system we support ghost moves depicted by rules [UNSHARE-R] and [GHOST-MOVE-1…8]. A ghost move is a transition that only modifies auxiliary/logical computation states. This is ensured by the resource-level ghost moves and formalised in Corollary 2 (whose proof is left in the appendix).
Corollary 2 (Soundness of Ghost Move Rules):
Our ghost move rules are semantically sound. That is, given a ghost move rule allowing $P\Rrightarrow Q$
, we have $\forall \mathcal {R} \in [\![P]\!]^\rho. \mathcal {R}\Rrightarrow [\![Q]\!]^\rho $
.
Then we formalise two of our structural rules below.
Theorem 1 (Consequence Rule):
Given $\rho \vDash P'\Rrightarrow P$
, $\rho \vDash { \left \{{P}\right \}\,e\,\left \{{x. Q}\right \}}$
, and $\forall x. \rho \vDash Q\Rrightarrow Q'$
, we can prove that $\rho \vDash { \left \{{P'}\right \}\,e\,\left \{{x. Q'}\right \}}$
.
Proof:
From the first premise of the theorem, we have the following property $\forall \mathcal {R} \in [\![P']\!]^\rho. \mathcal {R}\Rrightarrow [\![P]\!]^\rho $
.
From the second premise of the theorem, we have the following property $\forall n, \mathcal {R}\in [\![P]\!]^\rho. \mathcal {R}\Rrightarrow {\mathsf {LSafe}}_{n}(e, \lambda x. [\![Q]\!]^\rho) $
.
According to [GHOST-MOVE-3] rule (ghost transitive rule), we have the proof obligation transformed to the form (1) $\forall n, \mathcal {R}\in [\![P']\!]^\rho. \mathcal {R}\Rrightarrow {\mathsf {LSafe}}_{n}(e, \lambda x. [\![Q]\!]^\rho)$
. So far, the precondition strengthening is proven.
In (1), we choose an arbitrarily large $n$
, and unfold ${\mathsf {LSafe}}$
by its definition. For an $e$
that is terminating, the proof obligation can be reduced to $\forall n, \mathcal {R}\in [\![P']\!]^\rho. \mathcal {R}\Rrightarrow \lambda x. [\![Q]\!]^\rho $
. From the third premise, we have $\forall x. \forall \mathcal {R} \in [\![Q]\!]^\rho. \mathcal {R}\Rrightarrow [\![Q']\!]^\rho $
. By putting them together the consequence rule is proven.
Theorem 2 (Frame Rule):
Given $\rho \vDash { \left \{{P}\right \}\,e\,\left \{{x. Q}\right \}}$
, we have $\rho \vDash { \left \{{P*R}\right \}\,e\,\left \{{x. Q*R}\right \}}$
.
Proof:
From the premise of the theorem, we have the following property: $\forall n, \mathcal {R}\in [\![P]\!]^\rho. \mathcal {R}\Rrightarrow {\mathsf {LSafe}}_{n}(e, \lambda x. [\![Q]\!]^\rho)$.
With this, we are going to prove that $\forall n, \mathcal {R}\in [\![P*R]\!]^\rho. \mathcal {R}\Rrightarrow {\mathsf {LSafe}}_{n}(e, \lambda x. [\![Q]\!]^\rho)* [\![R]\!]^\rho $
.
According to the definition of separation assertions, the proof obligation can be transformed into:\begin{align*}&\hspace {-0.5pc}\forall n, \mathcal {R}_{1}, \mathcal {R}_{2}. \mathcal {R}_{1} {\scriptstyle \#} \mathcal {R} _{2}\land \mathcal {R} _{1}\in [\![P]\!]^\rho \land \mathcal {R} _{2}\in [\![R]\!]^\rho \Rightarrow \mathcal {R}_{1} \Rrightarrow \\& \qquad\qquad\qquad\qquad\displaystyle { {\mathsf {LSafe}}_{n}(e, \lambda x. [\![Q]\!]^\rho)\land \mathcal {R} _{2}\in [\![R]\!]^\rho }\end{align*}
By simplification, we can translate the formula above into the form $\forall n, \mathcal {R}_{1}\in [\![P]\!]^\rho. \mathcal {R}_{1}\Rrightarrow {\mathsf {LSafe}} _{n}(e, \lambda x. [\![Q]\!]^\rho) $
, which matches the premise.
The frame rule is proven.
Finally, we formalise the local soundness of our reasoning system as shown below.
Theorem 3 (Local Soundness):
Our verification logic is locally sound. That is, if ${ \left \{{P}\right \}\,e\,\left \{{x. Q}\right \}}$
is provable, then for all closing $\rho $
we have $\rho \vDash { \left \{{P}\right \}\,e\,\left \{{x. Q}\right \}}$
.
Proof:
The expression $e$
may be a single expression or a series of expressions connected by let-binding. We first prove that given a single expression $e$
, our reasoning rules are locally sound. Then we prove by structural induction that our reasoning system is locally sound for $e$
with arbitrary layers of let-binding.
For the rule [PURE-REDUCTION-1], where $e$
is a value $v$
(or an arithmetic term that results in $v$
), we are going to prove that $\forall n, \mathcal {R}\in [\![\mathsf {true}]\!]^\rho. \mathcal {R}\Rrightarrow {\mathsf {LSafe}}_{n}(v, \lambda x. [\![x=v]\!]^\rho) $
. The case $n=0$
holds trivially. In the case that $n>0$
, as $e\in \textit {Val}$
we have: $\mathcal {R}\Rrightarrow {\mathsf {LSafe}}_{n}(v, \lambda x. [\![x=v]\!]^\rho)\triangleq \mathcal {R}\Rrightarrow \lambda x$
.
$[\![x=v]\!]^\rho v$
, which can be derived from $\mathcal {R}\in [\![\mathsf {true}]\!]^\rho $
that is given by the precondition. The proof for [PURE-REDUCTION-1] is finished.
For the rule [PURE-REDUCTION-2], where $e$
is a relational statement $v==v'$
, we are going to prove that for all $n$
and $\mathcal {R}\in [\![\mathsf {true}]\!]^\rho $
:
$\mathcal {R}\Rrightarrow {\mathsf {LSafe}}_{n}((v==v'), \lambda x. [\![x=1\Leftrightarrow v=v']\!]^\rho) $
. According to the event-step rules in our semantics, the expression $v==v'$
will be evaluated as 1 if $v=v'$
and 0 otherwise. Therefore, in the case $v=v'$
we have that the term $\mathcal {R}\Rrightarrow {\mathsf {LSafe}}_{n}((v==v'), \lambda x. [\![x=1\Leftrightarrow v=v']\!]^\rho)$
is semantically equivalent to $\mathcal {R}\in [\![1=1\Leftrightarrow v=v']\!]^\rho $
, which can be further reduced to $\mathcal {R}\in [\![\mathsf {true}]\!]^\rho $
that is given by the precondition. Similarly, in the case $v\not =v'$
, we have that the term $\mathcal {R}\Rrightarrow {\mathsf {LSafe}}_{n}((v==v'), \lambda x. [\![x=1\Leftrightarrow v=v']\!]^\rho)$
is semantically equivalent to $\mathcal {R}\in [\![0=1\Leftrightarrow {\tt false}]\!]^\rho $
which can be derived from $\mathcal {R}\in [\![\mathsf {true}]\!]^\rho $
that is given by the precondition. The proof for [PURE-REDUCTION-2] is finished.
For the [FORK] rule, where $e= \mathtt {fork\, } e'$
, we are going to prove that $\forall n, \mathcal {R}\in [\![P*Q]\!]^\rho. \mathcal {R}\Rrightarrow {\mathsf {LSafe}}_{n}(\mathtt {fork\, } e', [\![P]\!]^\rho)$, with the premise $\forall n, \mathcal {R}\in [\![Q]\!]^\rho. \mathcal {R}\Rrightarrow {\mathsf {LSafe}}_{n}(e', [\![\mathsf {true}]\!]^\rho)$
. The case $n=0$
is trivial. In the case $n>0$
, according to the definition of local safety, the proof obligation \begin{equation*} \mathcal {R}\in {\mathsf {LSafe}}_{n}((\mathtt {fork\, } e'), [\![P]\!]^\rho)\end{equation*}
is equivalent to \begin{equation*} \mathcal {R}\in {\mathsf {LSafe}}_{n-1}(0, [\![P]\!]^\rho)* {\mathsf {LSafe}}_{n-1}(e', [\![\mathsf {true}]\!]^\rho),\end{equation*}
and this formula can be further reduced to the following form according to the definition of local safety: $\mathcal {R}\in [\![P]\!]^\rho * {\mathsf {LSafe}}_{n-1}(e', [\![\mathsf {true}]\!]^\rho)$
. According to the definition of separation assertions we have:\begin{align*}&\hspace {-0.5pc}\forall \mathcal {R} \in [\![P*Q]\!]^\rho. \exists \mathcal {R} ', \mathcal {R}''. \mathcal {R}= \mathcal {R}'\oplus \mathcal {R} ''\\& \qquad\qquad\qquad\qquad\qquad\qquad\displaystyle {\land \mathcal {R} '\in [\![P]\!]^\rho \land \mathcal {R}''\in [\![Q]\!]^\rho. }\end{align*}
By putting together with the premise, we have \begin{align*}&\quad \forall \mathcal {R}\in [\![P*Q]\!]^\rho. \exists \mathcal {R} ', \mathcal {R}''. \mathcal {R}= \mathcal {R}'\oplus \mathcal {R} ''\\&\land \mathcal {R} '\in [\![P]\!]^\rho \land \mathcal {R}''\in {\mathsf {LSafe}} _{n-1}(e', [\![\mathsf {true}]\!]^\rho),\end{align*}
which implies the proof obligation. The proof for [FORK] is finished.
For the rules: [ALLOCATION], [INITIALISATION-1…2], [ACQUIRE-LOAD], [RELAXED-LOAD], [NON-ATOMIC-LOAD], [RELEASE-STORE], [RELAXED-STORE], [NON-ATOMIC-STORE], [ACQ-REL-CAS], [RLX-REL-CAS], [ACQ-RLX-CAS-1], [ACQ-RLX-CAS-2], [RLX-RLX-CAS-1], [RLX-RLX-CAS-2], [RELE- ASE-FENCE], and [ACQUIRE-FENCE], where $e$
is an allocation, initialisation, load, store, CAS, or fence expression, the event step used is $e\xrightarrow {\alpha }v$, where $\alpha $ can be $\mathbb {A}$, $\mathbb {R}$, $\mathbb {W}$, $\mathbb {U}$, or $\mathbb {F}$. We prove the soundness of the triple by unfolding the rely (Fig. 19), guarantee (Fig. 20), and $\mathsf {wpe}$ definitions corresponding to the action $\alpha $
; then it is trivial to check that \begin{align*}&\forall n>0, \mathcal {R}\in [\![P]\!]^\rho. \mathcal {R}\Rrightarrow {\mathsf {LSafe}}_{n}(e, [\![x. Q]\!]^\rho) \\ {\mathrm {where}}~&\mathcal {R}\Rrightarrow {\mathsf {LSafe}}_{n}(e, [\![x. Q]\!]^\rho) \triangleq \\&\forall \mathcal {R} _{F} {\scriptstyle \#} \mathcal {R}. \forall \mathcal {R} _{pre} \Rrightarrow {\mathsf {rely}} (\mathcal {R}\oplus \mathcal {R}_{F}, \alpha).\\&\exists P'. \mathcal {R}_{pre} \in [\![P']\!]^\rho \land \forall \mathcal {R} ' \in [\![P']\!]^\rho. \\&(\mathcal {R}_{pre}, \mathcal {R}') \in \mathsf {wpe} (\alpha) \Rightarrow \\&\exists \mathcal {R} _{post}. (\mathcal {R}_{post} \oplus \mathcal {R} _{F}, -) \in {\mathsf {guar}} (\mathcal {R}_{pre}, \mathcal {R}', \alpha)\\&\qquad \land \mathcal {R}_{post} \in [\![\Phi (v)]\!]^\rho\end{align*}
We have also discussed that when $n=0$
, the local safety holds by definition. Therefore the aforementioned rules are locally sound.
For the case $e= \mathtt {if\, } v \mathtt {then\, } e_{1} \mathtt {else\, } e_{2}$
in the rule of [CONDITIONAL], we prove that:\begin{equation*} \forall n, \mathcal {R}\in [\![P]\!]^\rho. \mathcal {R}\Rrightarrow {\mathsf {LSafe}}_{n}((\mathtt {if\, } v \mathtt {then\, } e_{1} \mathtt {else\, } e_{2}), \lambda x. [\![Q]\!]^\rho).\end{equation*}
In the case $v\not =0$
, we can add the pure assertion to the triple’s precondition according to the semantics for assertions, i.e. $\forall \mathcal {R}. \mathcal {R}\in [\![P]\!]^\rho \Rightarrow \mathcal {R}\in [\![P\land v\not =0]\!]^\rho =[\![P*v\not =0]\!]^\rho $
; then the triple is validated by the first premise. Similarly, in the case of $v=0$
, the triple is validated by the second premise. The proof for [CONDITIONAL] is finished.
For the [REPEAT] rule, where $e= \mathtt {repeat\, } e' \mathtt {end}$
, we prove that $\forall n, \mathcal {R}\in [\![P]\!]^\rho. \mathcal {R}\Rrightarrow {\mathsf {LSafe}}_{n}((\mathtt {repeat\, } e'~\mathtt {end}), \lambda x. [\![Q]\!]^\rho). $
In the case that $e'$
is evaluated as some non-zero value $v$
, the proof obligation is reduced to $\forall n, \mathcal {R}\in [\![P]\!]^\rho. \mathcal {R}\Rrightarrow \lambda x. [\![Q]\!]^\rho $
, which is validated by the premise. Otherwise according to the event-step semantics for repeat and conditional expressions, the definition of local safety, the rely/guarantee conditions for $\mathbb {S}$
action, this proof obligation can be reduced to: for all $n>0$ and $\mathcal {R}\in [\![P]\!]^\rho $, $\mathcal {R}\Rrightarrow {\mathsf {LSafe}}_{n-1}((\mathtt {repeat\, } e'~ \mathtt {end}), \lambda x. [\![Q]\!]^\rho)$.
This transformation can be recursively performed until $e'$
is evaluated as non-zero or it reaches ${\mathsf {LSafe}}_{0}$
, which holds trivially. The proof for [REPEAT] is finished.
For the [LET-BINDING] rule, where $e=(\mathtt {let\, } x=e' \mathtt {\, in\, } e'')$
, we prove that:\begin{equation*} \forall n, \mathcal {R}\in [\![P]\!]^\rho. \mathcal {R}\Rrightarrow {\mathsf {LSafe}}_{n}((\mathtt {let\, } x=e' \mathtt {\, in\, } e''), \lambda y. [\![R]\!]^\rho)\end{equation*}
by using structural induction. We first prove triples with single layer let-binding, that is $e'$
is one of the aforementioned expressions and does not contain let-binding, as the base case.
For $e=(\mathtt {let\, } x = \mathtt {fork\, } e' \mathtt {\, in\, } e'')$
, we have the following premises:\begin{equation*} \rho \vDash { \left \{{P*Q}\right \}\, \mathtt {fork\, } e'\,\left \{{x. P}\right \}} ~{\mathrm {and}}~ \rho \vDash \forall x. { \left \{{P}\right \}\,e''\,\left \{{y. P'}\right \}}.\end{equation*}
We are going to prove that $\rho \vDash { \left \{{P*Q}\right \}\,e''[0/x]\,\left \{{y. P'}\right \}}. $
From the first premise, we have:\begin{equation*}\forall n, \mathcal {R}\in [\![P*Q]\!]^\rho. \mathcal {R}\in {\mathsf {LSafe}}_{n}(\mathtt {fork\, } e',[\![x. P]\!]^\rho),\end{equation*}
which can be unfolded to the following form according to the definition of local safety:\begin{equation*}\forall n, \mathcal {R}\in [\![P*Q]\!]^\rho. \mathcal {R}\in {\mathsf {LSafe}}_{n}([\![P[0/x]]\!]^\rho)* {\mathsf {LSafe}}_{n}(e', \mathsf {true}).\end{equation*}
From the second premise, we have $\rho \vDash \forall x. { \left \{{P}\right \}\,e''\,\left \{{y. P'}\right \}}$
and thus:\begin{align*}&\hspace {-0.5pc}\forall n, \mathcal {R}'\!\in \![\![P[0/x]]\!]^\rho. \mathcal {R}'\!\in \! {\mathsf {LSafe}}_{n}(\mathtt {let\, } x\! =\! 0 \mathtt {\, in\, } e'',[\![y. P']\!]^\rho) \\& \qquad\qquad\qquad\qquad\qquad\qquad\displaystyle {= {\mathsf {LSafe}}_{n}(K[{0}],[\![y. P']\!]^\rho). }\end{align*}
Therefore, we can derive that:\begin{align*}&\forall n, \mathcal {R}\in [\![P*Q]\!]^\rho. \\&\mathcal {R}\in {\mathsf {LSafe}}_{n}(K[{0}],[\![y. P']\!]^\rho) * {\mathsf {LSafe}}_{n}(e', \mathsf {true}).\end{align*}
For other single expressions, let us assume that $e'$
can be reduced to numerical value $v$
. According to the event-step definition for let-binding and the corresponding rely/guarantee definitions, the proof obligation can be transformed into $\forall n, \mathcal {R}\in [\![P]\!]^\rho. \mathcal {R}\Rrightarrow {\mathsf {LSafe}}_{n}(e''[v/x], \lambda y. [\![R]\!]^\rho) $
. At the same time the first premise can be simplified as $\forall \mathcal {R} \in [\![P]\!]^\rho. \mathcal {R}\in [\![Q]\!]^\rho $
; together with the second premise, the proof obligation is met.
Then we move on to the inductive case. Let us assume $\rho \vDash { \left \{{P}\right \}\,e'\,\left \{{x. Q}\right \}}$
, where $e'=(\mathtt {let\, } x = e_{1} \mathtt {\, in\, } e_{2})$
, and prove $\rho \vDash { \left \{{P}\right \}\, \mathtt {let\, } x = e' \mathtt {\, in\, } e''\,\left \{{x. Q}\right \}}$
. According to the event-step definition for evaluation context, we need to evaluate $e'$
first. According to the assumption, $e'$
can be safely executed for as many steps as needed until it is reduced to a numerical value $v$
. Then we have $\forall \mathcal {R} \in [\![P]\!]^\rho. \mathcal {R}\in [\![Q]\!]^\rho $
; together with the second premise $\forall n, v, \mathcal {R}\in [\![Q]\!]^\rho. \mathcal {R}\in {\mathsf {LSafe}}_{n}(e'',\lambda y. [\![R]\!]^\rho) $, the soundness of the triple:\begin{equation*} \forall n, \mathcal {R}\in [\![P]\!]^\rho. \mathcal {R}\in {\mathsf {LSafe}}_{n}(e''[v/x],\lambda y. [\![R]\!]^\rho)\end{equation*}
is proven.
B. Global Safety and the Final Soundness Theorem
As our target programs run in a concurrent environment, in addition to local safety it is also necessary to demonstrate that, given a provable triple $\{P\}e\{x.Q\}$, the executions of $e$
are free from data races, memory errors, and dangling reads under all possible thread interleavings. We therefore formulate the global soundness of the proposed program logic similarly to its predecessors GPS/GPS+. However, the new logic provides full support for C11 release sequences, which makes some critical properties, such as data-race freedom, considerably trickier to prove.
Before we can formally define global soundness, we first give definitions for program executions (with an arbitrary number of steps) ${\mathsf {execs}}(e)$
and the semantics for C11 programs $[\![e]\!]$
in Fig. 21 on top of the machine-step semantics discussed earlier in §II.
The definitions indicate that the result of a program $e$
is either some value that can be validated by a legal execution or an error state if $e$
allows race conditions or memory errors in its execution.
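To make these semantics concrete, consider the classic message-passing idiom, which the core language can express with a release store and an acquire loop. The C sketch below is our own illustration (using C11 atomics and POSIX threads), not an artefact of the paper; for this program the result set $[\![e]\!]$ contains only the value 37 and no error state, since the release/acquire pair rules out a race on the non-atomic location.

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

static int data;        /* non-atomic location */
static atomic_int flag; /* atomic location     */

static void *writer(void *arg) {
    (void)arg;
    data = 37;                                             /* [data]_na := 37 */
    atomic_store_explicit(&flag, 1, memory_order_release); /* [flag]_rel := 1 */
    return NULL;
}

/* repeat [flag]_acq until non-zero, then read data (race-free:
 * the acquire load that sees 1 synchronises with the release store). */
int run_message_passing(void) {
    atomic_init(&flag, 0);
    data = 0;
    pthread_t t;
    pthread_create(&t, NULL, writer, NULL);
    while (atomic_load_explicit(&flag, memory_order_acquire) == 0)
        ;
    int result = data;
    pthread_join(t, NULL);
    return result;
}
```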
With these preparations, we define the global soundness as:\begin{equation*}\textit {if } \vdash { \left \{{ \mathsf {true}}\right \}\,e\,\left \{{x.P}\right \}} \textit {then } [\![e]\!] \subseteq \lbrace V\mid [\![P[V/x]]\!] \not = \emptyset \rbrace.\end{equation*}
Intuitively, this definition requires that a provable Hoare triple about a closed program $e$
must precisely predict the results of $e$
with respect to its executions under the C11 memory model. To demonstrate the global soundness of our proposed Hoare triples, a property called global safety is defined as follows:\begin{align*}&{\mathsf {GSafe}}_{n}(\mathcal {T}_{ins}, \mathcal G, \mathcal {L}) \triangleq \\&\mathsf {valid}(\mathcal G, \mathcal {L}, N) = N \wedge \mathsf {compat}(\mathcal G, \mathcal {L}) \wedge \mathsf {conform}(\mathcal G, \mathcal {L}, N) \wedge \\&\forall a \in N. \mathcal {L}({\mathsf {sb}}, a, \bot) = {{\oplus }}\lbrace \mathcal {R}\mid \exists i. \mathcal {T}_{ins} (i) = (a, -, \mathcal {R}, -)\rbrace \wedge \\&\forall i. \mathcal {T}_{ins}(i) = (a, e, \mathcal {R}, \Phi) \Longrightarrow \mathcal {R} \in {\mathsf {LSafe}} _{n}(e, \Phi) \\&{\mathrm {where}}\\&N \triangleq {\mathsf {dom}}(\mathcal G.A) \text { and } \mathcal {T}_{ins}\in IThreadMap\triangleq \lbrace \mathbb {N}\rightarrow (a, e, \mathcal {R}, \Phi)\rbrace\end{align*}
We annotate/label the edges in our execution graphs with the computation resources they carry from one node to another, and a labelling map $\mathcal {L}$
is used to record that information. The map $\mathcal {T}_{ins}$
is an instrumented thread pool, in which each thread is described by a tuple $(a, e, \mathcal {R}, \Phi)$
. We have $a$
as the thread’s last generated event in the graph, $e$
as the thread’s continuation, $\mathcal {R}$
as the resource map the thread currently holds (representing the thread’s computation state), and $\Phi $
as the postcondition expected after the thread’s execution
. The instrumented thread pool $\mathcal {T}_{ins}$
can be projected down to a machine thread pool $\mathcal T$
(recall §II-B) using $\mathsf {erase}(\mathcal {T}_{ins})$
, where $\forall i. \mathcal {T}_{ins}(i) = (a, e, \mathcal {R}, \Phi)\Rightarrow \mathsf {erase}(\mathcal {T}_{ins})(i)=(a, e)$
. The predicate $\mathsf {valid}$
signifies that a set of nodes has all of its edges properly labelled. By requiring $N\triangleq {\mathsf {dom}} (\mathcal G.A)$
, the global safety definition demands that every node in the graph be properly labelled. The predicate $\mathsf {compat}(\mathcal G, \mathcal {L})$
indicates that for any group of $\mathsf {hb}$
-independent edges in the event graph $\mathcal G$
, the sum of the resources they carry is defined. A group of edges is $\mathsf {hb}$
-independent if for any pair of edges $(a,a')$
and $(b,b')$
in the group we have $\neg {\mathsf {hb}} ^{=}(a',b)$
. Note that we only consider the compatibility of the resource maps’ local components, which is sufficient because the local components are the actual resources involved in computation. We also show in our proofs that compatibility is preserved when the resources under other labels are merged into the local component. The predicate $\mathsf {conform}$
states that the ${\mathsf {mo}}$
order for all atomic writes is consistent with the predefined state transitions.
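The $\mathsf {hb}$-independence side condition inside $\mathsf {compat}$ can be sketched as a simple check over an explicit happens-before relation. The encoding below (a boolean matrix for $\mathsf{hb}^{=}$ and an `Edge` struct) is ours, purely for illustration:

```c
#include <assert.h>
#include <stdbool.h>

/* A group of labelled edges is hb-independent when, for any two
 * distinct edges (a,a') and (b,b') in the group, !hb^=(a',b).
 * Only for such groups must the sum of carried resources be defined. */
typedef struct { int src, dst; } Edge;

bool hb_independent(bool hb_eq[4][4], const Edge *es, int n) {
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            /* es[i]'s target must not happen-before (or equal) es[j]'s source */
            if (i != j && hb_eq[es[i].dst][es[j].src])
                return false;
    return true;
}
```

For instance, with $\mathsf{hb}$ edges $0\to 1$ and $2\to 3$, the pair of edges $(0,1)$ and $(2,3)$ is independent, while $(0,1)$ and $(1,3)$ is not.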
Intuitively, the global safety definition ${\mathsf {GSafe}}_{n}(\mathcal {T}, \mathcal G, \mathcal {L})$
indicates that based on an event graph $\mathcal G$
and with the resource maps recorded in $\mathcal {L}$
, it is safe for any thread from the thread pool $\mathcal {T}$
to execute $n$
steps. We aim to demonstrate that this global safety property is preserved throughout the execution of any program verified in our logic. To achieve this, we introduce the method used to update the labelling $\mathcal {L}$
of the event graph when a new node (i.e., a new event) is added. The labelling process is formalised as five lemmas which demonstrate that global safety is restored once the new node has been added. In this section we focus on conveying the high-level ideas of this process, leaving the lemmas’ proofs and the detailed definitions to the appendix.
When adding a node to the event graph, its ${\mathsf {sb}}$
incoming edge is to be labelled first. Suppose that the node to be added is $b$
and $a$
is its ${\mathsf {sb}}$
predecessor in the event graph. Initially, node $a$
’s ${\mathsf {sb}}$
outgoing edge points to a sink node, i.e., ${\mathsf {sb}}(a,\bot)$
, and is labelled with the resource map $\mathcal {R}_ {\mathsf {sb}}$
which will be passed to $a$
’s ${\mathsf {sb}}$
successor. If $a$
is followed by a fork command and $b$
is the first event in the forked thread, part of the $\mathcal {R}_ {\mathsf {sb}}$
, namely $\mathcal {R}$
, is taken and used to label the ${\mathsf {sb}}(a,b)$
edge, while the remaining resource $\mathcal {R}_{rem}$
is left in $a$
’s ${\mathsf {sb}}$
sink edge for $a$
’s local thread. If there is no new thread involved and $b$
is from the same thread as $a$
, the entire $\mathcal {R}_ {\mathsf {sb}}$
should be used to label ${\mathsf {sb}}(a,b)$
. Note that we assume the initial event graph contains a skip node $\mathbb {S}$
with all its outgoing edges labelled as empty; therefore, if $b$
is the first event generated in the program, for generality it will take that skip node as its ${\mathsf {sb}}$
predecessor. We illustrate this process in Fig. 22 and formalise it as the following lemma.
Lemma 1 (Step Preparation):
\begin{align*} \textit {if }&{\mathsf {consistentC11}}(\mathcal G) \\&\land {\mathsf {consistentC11}} (\mathcal G')\\&\land {\mathsf {dom}} (\mathcal G'.A)= {\mathsf {dom}}(\mathcal G.A)\uplus \{b\}\\&\land \mathcal L({\mathsf {sb}},a,\bot)= \mathcal {R}\oplus \mathcal {R}_{rem}\\&\land {\mathsf {dom}} (\mathcal G.A)\subseteq {\mathsf {valid}} (\mathcal G,\mathcal L, {\mathsf {dom}}(\mathcal G.A))\\&\land {\mathsf {compat}} (\mathcal G,\mathcal {L}) \land {\mathsf {conform}} (\mathcal G,\mathcal {L},N)\\&\land \forall c\in {\mathsf {dom}} (\mathcal G.A). \mathcal G.A(c)=\mathcal G'.A(c)\\&\land \mathcal G'. {\mathsf {sb}}=\mathcal G. {\mathsf {sb}}\uplus \{(a,b)\}\\&\land \forall c\in {\mathsf {dom}} (\mathcal G.A). \mathcal G. {\mathsf {rf}}(c)=\mathcal G'. {\mathsf {rf}}(c)\\&\land \mathcal G'. {\mathsf {mo}}\supseteq \mathcal G. {\mathsf {mo}}\\ \textit {then }&\exists \mathcal L'. {\mathsf {dom}}(\mathcal G.A)= {\mathsf {valid}}(\mathcal G',\mathcal {L}', {\mathsf {dom}}(\mathcal G.A))\\&\land {\mathsf {compat}} (\mathcal G',\mathcal {L}')\\&\land {\mathsf {conform}} (\mathcal G',\mathcal {L}', {\mathsf {dom}}(\mathcal G'.A))\\&\land \mathcal L'({\mathsf {sb}},a,\bot)= \mathcal {R}_{rem}\\&\land {\mathsf {in}} (\mathcal L',b, {\mathsf {sb}})= \mathcal {R}\\&\land {\mathsf {in}} (\mathcal L',b, {\mathsf {rf}})= {\tt EMP}\\&\land {\mathsf {in}} (\mathcal L',b, {\mathsf {esc}})= {\tt EMP}\\&\land \forall a'\not =a. \mathcal L'({\mathsf {sb}},a',b)= {\tt EMP}\\&\land {\mathsf {out}} (\mathcal L',b, {\mathsf {all}})= {\tt EMP}\\&\land \forall a'\not =a. \mathcal L'({\mathsf {sb}},a',\bot)=\mathcal L({\mathsf {sb}},a',\bot)\end{align*}
Note that the shorthand notations ${\mathsf {in}}(\mathcal {L}, a, \mathsf {t})$
and ${\mathsf {out}}(\mathcal {L}, a, \mathsf {t})$
respectively stand for the sum of the resource maps labelling $a$
’s incoming or outgoing edges of type $\mathsf {t}$
.
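The ${\mathsf {sb}}$-labelling step of Lemma 1 can be caricatured as conservation of resources. In the sketch below we abstract resource maps to a single commutative count (an assumption of this sketch, not the paper's model); `label_sb` moves resources from $a$'s ${\mathsf {sb}}$ sink edge onto the new ${\mathsf {sb}}(a,b)$ edge, splitting them at a fork:

```c
#include <assert.h>

/* Toy model: at a fork, only the portion r for the forked thread
 * moves to sb(a,b) and the remainder R_rem stays on a's sb sink;
 * within a single thread, the whole R_sb moves. */
typedef struct { int sink; int edge_ab; } SbLabels;

SbLabels label_sb(int r_sb, int is_fork, int r /* portion for the forked thread */) {
    SbLabels l;
    if (is_fork) {
        l.edge_ab = r;        /* R labels sb(a,b) into the new thread */
        l.sink    = r_sb - r; /* R_rem stays for a's own thread       */
    } else {
        l.edge_ab = r_sb;     /* same thread: the whole R_sb moves    */
        l.sink    = 0;
    }
    return l;
}
```

In either case the two labels sum back to the original $\mathcal{R}_{\mathsf{sb}}$: nothing is created or lost.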
Next, the new node’s ${\mathsf {rf}}$
incoming edge will be labelled. Note that this labelling process applies to atomic reads and CASes: a non-atomic load simply returns the value recorded in its thread-local resource map, whereas an atomic load (or a CAS) may read from any write permitted by the C11 consistency predicate ${\mathsf {consistentC11}}$
. Initially, a writing event’s ${\mathsf {rf}}$
outgoing resource, which can be referred to as $r_ {\mathsf {rf}}$
, is associated with its ${\mathsf {rf}}$
sink edge. When the new node reads from that write, the connecting ${\mathsf {rf}}$
edge is labelled in one of four ways, according to the read event’s type (atomic read or CAS) and the memory order used (relaxed or acquire). If the new event is a relaxed read, its ${\mathsf {rf}}$
incoming edge is labelled with a resource map ${\tt EMP}[\mathsf {A}\mapsto |r_ {\mathsf {rf}}|]$
, indicating that it can retrieve some knowledge from the write but the knowledge is “waiting-to-be-acquired”. If the new event is an acquire read, the retrieved knowledge is directly put under the local label: ${\tt EMP}[\mathsf {L}\mapsto |r_ {\mathsf {rf}}|]$
. The labelling for CASes is similar, except that the information a CAS can retrieve is not limited to knowledge. Note that we always leave $|r_ {\mathsf {rf}}|$
in the writer’s ${\mathsf {rf}}$
sink edge for other readers to read. This process is illustrated in Fig. 23. By labelling the new ${\mathsf {rf}}$
edge in this way, the following lemma can be proved.
Lemma 2 (Rely Step):
\begin{align*} \textit {if }&\mathcal G.A(a)=\alpha \\&\land {\mathsf {dom}} (\mathcal G.A)=N\uplus \{a\}\\&\land N\in \mathsf {prefix} (\mathcal G)\land N\subseteq {\mathsf {valid}} (\mathcal G,\mathcal {L},N)\\&\land {\mathsf {in}} (\mathcal {L},a, {\mathsf {all}})= {\mathsf {out}}(\mathcal {L},a, {\mathsf {all}})= {\tt EMP}\\&\land {\mathsf {compat}} (\mathcal G,\mathcal {L}) \land {\mathsf {conform}} (\mathcal G,\mathcal {L},N)\\&\land {\mathsf {consistentC11}} (\mathcal G)\\&\land {\mathsf {in}} (\mathcal {L},a, {\mathsf {rf}})= {\tt EMP}\land {\mathsf {in}}(\mathcal {L},a, {\mathsf {esc}})= {\tt EMP}\\ \textit {then }&\exists \mathcal L'. N\subseteq {\mathsf {valid}} (\mathcal G,\mathcal {L}',N)\\&\land {\mathsf {compat}} (\mathcal G,\mathcal {L}')\\&\land {\mathsf {conform}} (\mathcal G,\mathcal {L}',N)\\&\land {\mathsf {in}} (\mathcal {L}',a, {\mathsf {sb}})\oplus {\mathsf {in}} (\mathcal {L}',a, {\mathsf {rf}}) \in {\mathsf {rely}} ({\mathsf {in}}(\mathcal {L}',a, {\mathsf {sb}}),\alpha)\\&\land {\mathsf {in}} (\mathcal {L}',a, {\mathsf {esc}})= {\mathsf {out}}(\mathcal {L}',a, {\mathsf {all}})= {\tt EMP}\\&\land \forall b,c. \mathcal {L}'({\mathsf {sb}},b,c)=\mathcal {L}({\mathsf {sb}},b,c)\\&\land \forall b. \mathcal {L}'({\mathsf {sb}},b,\bot)=\mathcal {L}({\mathsf {sb}},b,\bot)\end{align*}
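The ${\mathsf {rf}}$-labelling rule for plain atomic reads can be sketched as follows. The two-component `RMap` (local `L` versus waiting-to-be-acquired `A`) and the integer abstraction of $|r_{\mathsf{rf}}|$ are our simplifications, not the paper's definitions:

```c
#include <assert.h>

/* EMP[A |-> |r_rf|] for a relaxed read versus EMP[L |-> |r_rf|]
 * for an acquire read.  Knowledge |r_rf| is duplicable, so the
 * writer's rf sink edge keeps a copy either way. */
typedef struct { int L, A; } RMap; /* EMP is {0, 0} */
enum order { RLX, ACQ };

RMap label_rf_incoming(int knowledge_rf /* |r_rf| */, enum order o) {
    RMap m = {0, 0};                  /* start from EMP         */
    if (o == ACQ) m.L = knowledge_rf; /* directly usable        */
    else          m.A = knowledge_rf; /* waiting-to-be-acquired */
    return m;
}
```

A later acquire fence could then move the `A` component into `L`, matching the "waiting-to-be-acquired" reading above.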
When a piece of resource $\mathcal {R}$
is packed into an escrow by event $a$
, $\mathcal {R}$
is removed from $a$
’s working resource map and put into $a$
’s ${\mathsf {esc}}$
sink edge for safe keeping. Another event $b$
owning the resource $\mathcal {R}'$
that is required to open the escrow may retrieve $\mathcal {R}$
through a new escrow edge created between $a$
and itself. Then $\mathcal {R}'$
is dumped to $b$
’s escrow sink edge. This process (and other local ghost moves) is depicted in Fig. 24 and is formalised in the following lemma.
Lemma 3 (Ghost Step):
\begin{align*} \textit {if}&{\mathsf {dom}}(\mathcal G.A)=N\uplus \{a\}\land N\in \mathsf {prefix} (\mathcal G) \\&\land N\subseteq {\mathsf {valid}} (\mathcal G,\mathcal {L}, {\mathsf {dom}}(\mathcal G.A))\\&\land {\mathsf {compat}} (\mathcal G,\mathcal {L}[({\mathsf {esc}},-,a,\bot):=\mathcal {L} ({\mathsf {esc}},-,a,\bot)\oplus \mathcal {R}]) \\&\land {\mathsf {conform}} (\mathcal G,\mathcal {L},N)\land {\mathsf {consistentC11}} (\mathcal G)\\&\land \mathcal {R}_{before}\triangleq {\mathsf {in}} (\mathcal {L},a, {\mathsf {sb}})\oplus {\mathsf {in}}(\mathcal {L},a, {\mathsf {rf}})\oplus {\mathsf {in}} (\mathcal {L},a, {\mathsf {esc}})\\&\land \mathcal {R}_{after}\triangleq \mathcal {R} \oplus {\mathsf {out}} (\mathcal {L},a, {\mathsf {esc}}) \oplus {\mathsf {out}} (\mathcal {L},a, {\mathsf {cond}})\\&\land \mathcal {R}_{before}\Rrightarrow _{\mathcal {I}} \mathcal {R}_{after}\land | \mathcal {R}_{before}|\leq \mathcal {R} \land \mathcal {R} \Rrightarrow \mathcal P\\&\land \forall c. \mathcal {L}({\mathsf {esc}},-,a,c)= {\tt EMP}\\&\land \forall (\sigma, \mathcal {R}_{E})\in \mathcal {I}. {\mathsf {interp}}(\sigma)= (\mathcal Q,\mathcal Q')\Rightarrow \mathcal {R} _{E}\in \mathcal Q'\\&\land \mathcal {L} ({\mathsf {esc}},a,\bot) =\\&{{\oplus }}\left \{{ \mathcal {R}_{E}\mid \begin{array}{l} (\sigma, \mathcal {R}_{E})\in \mathcal I, \mathcal {R}_{E}\in \mathcal Q',\\ {\mathsf {interp}}(\sigma)=(\mathcal Q,\mathcal Q'),\\ (\not \exists b. {\mathsf {hb}}^{=}(a,b)\land \mathcal {L} ({\mathsf {cond}},b,\bot)\in \mathcal Q)\\ \end{array} }\right \}\\ \textit {then}&\exists \mathcal L',\mathcal I', \mathcal {R}', \mathcal {R}'_{before}, \mathcal {R}'_{after}\in \mathcal P. 
N\subseteq {\mathsf {valid}} (\mathcal G,\mathcal {L}', {\mathsf {dom}}(\mathcal G.A)) \\&\land {\mathsf {compat}} (\mathcal G,\mathcal {L}'[({\mathsf {esc}},-,a,\bot):=\mathcal {L}' ({\mathsf {esc}},-,a,\bot)\oplus \mathcal {R} ']) \\&\land {\mathsf {conform}} (\mathcal G,\mathcal {L}',N)\\&\land \mathcal {R}'_{before}\triangleq {\mathsf {in}} (\mathcal {L}',a, {\mathsf {sb}})\oplus {\mathsf {in}}(\mathcal {L}',a, {\mathsf {rf}})\oplus {\mathsf {in}} (\mathcal {L}',a, {\mathsf {esc}})\\&\land \mathcal {R}'_{after}\triangleq \mathcal {R} '\oplus {\mathsf {out}} (\mathcal {L}',a, {\mathsf {esc}}) \oplus {\mathsf {out}} (\mathcal {L}',a, {\mathsf {cond}}) \\&\land \mathcal {R}'_{before}\Rrightarrow _{\mathcal {I}'} \mathcal {R}'_{after}\\&\land \forall b. \mathcal L'({\mathsf {sb}},b,\bot)=\mathcal L({\mathsf {sb}},b,\bot)\\&\land \forall b. \mathcal L'({\mathsf {rf}},b,\bot)=\mathcal L({\mathsf {rf}},b,\bot)\\&\land \forall b,c. \mathcal L'({\mathsf {sb}},b,c)=\mathcal L({\mathsf {sb}},b,c) \\&\land \forall b,c. \mathcal L'({\mathsf {rf}},b,c)=\mathcal L({\mathsf {rf}},b,c)\\&\land \forall c. \mathcal L'({\mathsf {esc}},-,a,c)= {\tt EMP}\\&\land \forall (\sigma, \mathcal {R}_{E})\in \mathcal I'. {\mathsf {interp}}(\sigma)= (\mathcal Q,\mathcal Q')\Rightarrow \mathcal {R} _{E}\in \mathcal Q'\\&\land \mathcal {L}' ({\mathsf {esc}},a,\bot)=\\&{{\oplus }}\left \{{ \mathcal {R}_{E}\mid \begin{array}{l} (\sigma, \mathcal {R}_{E})\in \mathcal I', \mathcal {R}_{E}\in \mathcal Q',\\ {\mathsf {interp}}(\sigma)=(\mathcal Q,\mathcal Q'),\\ (\not \exists b. {\mathsf {hb}}^{=}(a,b)\land \mathcal {L} ({\mathsf {cond}},b,\bot)\in \mathcal Q)\\ \end{array} }\right \}\end{align*}
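The escrow exchange described before Lemma 3 can be modelled as two bookkeeping moves over integer-valued resources (again our own toy encoding, not the paper's semantics): packing parks $\mathcal{R}$ on $a$'s ${\mathsf {esc}}$ sink, and opening trades $b$'s $\mathcal{R}'$ for it.

```c
#include <assert.h>

/* Toy model: each node tracks its working resource map and its
 * esc sink edge, both abstracted to counts. */
typedef struct { int work; int esc_sink; } Node;

void pack_escrow(Node *a, int r) {
    a->work     -= r; /* R leaves a's working resource map   */
    a->esc_sink += r; /* ... and is kept on a's esc sink edge */
}

void open_escrow(Node *a, Node *b, int r, int r_required) {
    b->work     -= r_required; /* b gives up R' ...            */
    b->esc_sink += r_required; /* ... onto its own esc sink    */
    a->esc_sink -= r;          /* R travels along the new esc  */
    b->work     += r;          /* edge from a to b             */
}
```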
The following lemma demonstrates that, by labelling the graph in the way described, the sum of the new node’s incoming resource maps satisfies its $\mathsf {wpe}$
requirements.
Lemma 4 (Protocol Equivalence for Writes):
\begin{align*} \textit {if }&{\mathsf {dom}}(\mathcal G.A)=N\uplus {a} \land N\in \mathsf {prefix} (\mathcal G) \\&\land N\subseteq {\mathsf {valid}} (\mathcal G,\mathcal {L}, {\mathsf {dom}}(\mathcal G.A))\\&\land {\mathsf {compat}} (\mathcal G,\mathcal {L}[({\mathsf {esc}},-,a,\bot)\\&:=\mathcal {L} ({\mathsf {esc}},-,a,\bot)\oplus \mathcal {R}]) \\&\land {\mathsf {conform}} (\mathcal G,\mathcal {L},N) \land {\mathsf {consistentC11}} (\mathcal G)\\&\land {\mathsf {in}} (\mathcal L,a, {\mathsf {all}})\!\Rrightarrow _{\mathcal {I}} \mathcal {R}\!\oplus \! {\mathsf {out}}(\mathcal L,a, {\mathsf {esc}})\!\oplus \! {\mathsf {out}}(\mathcal {L},a, {\mathsf {cond}}) \\&\land | {\mathsf {in}}(\mathcal {L},a, {\mathsf {sb}})\oplus {\mathsf {in}} (\mathcal {L},a, {\mathsf {rf}})|\leq \mathcal {R} \\ \textit {then }&({\mathsf {in}}(\mathcal L,a, {\mathsf {sb}})\!\oplus \! {\mathsf {in}}(\mathcal L,a, {\mathsf {rf}}), {\mathsf {in}}(\mathcal L,a, {\mathsf {all}}))\!\in \! \mathsf {wpe}(\mathcal G.A(a))\end{align*}
Finally, the new node’s outgoing edges (${\mathsf {sb}}$
and ${\mathsf {rf}}$
) will be labelled in the guarantee step, using the corresponding resource map and resource ($\mathcal {R}_ {\mathsf {sb}}, r_ {\mathsf {rf}}$
) derived from the action’s guarantee definition. These resources are initially assigned to the corresponding sink edges of the new node, until the node’s future ${\mathsf {sb}}$
and ${\mathsf {rf}}$
successors are added to the graph and take the resources to label the corresponding edges. Note that, since the annotation for the read-from sink edge, $r_ {\mathsf {rf}}$
, is a resource rather than a resource map, we require it to be compatible with the local components of the other resource maps during the ${\mathsf {compat}}$
check.
Lemma 5 (Guarantee Step):
\begin{align*} \textit {if}&\mathcal G.A(a)=\alpha \land {\mathsf {dom}} (\mathcal G.A)=N\uplus {a} \land N\in \mathsf {prefix} (\mathcal G) \\&\land N\subseteq {\mathsf {valid}} (\mathcal G,\mathcal {L}, {\mathsf {dom}}(\mathcal G.A))\\&\land {\mathsf {compat}} (\mathcal G,\mathcal {L}[({\mathsf {esc}},-,a,\bot):=\mathcal {L} ({\mathsf {esc}},-,a,\bot)\oplus \mathcal {R}]) \\&\land {\mathsf {conform}} (\mathcal G,\mathcal {L},N) \land {\mathsf {consistentC11}} (\mathcal G)\\&\land \mathcal {R} _ {\mathsf {pre}}= {\mathsf {in}}(\mathcal L,a, {\mathsf {sb}})\oplus {\mathsf {in}} (\mathcal L,a, {\mathsf {rf}})\\&\land {\mathsf {in}} (\mathcal {L},a, {\mathsf {all}})\Rrightarrow _{\mathcal {I}} \mathcal {R}\oplus {\mathsf {out}}(\mathcal L,a, {\mathsf {esc}})\oplus {\mathsf {out}} (\mathcal {L},a, {\mathsf {cond}})\\&\land \mathcal {R} _ {\mathsf {pre}}\in {\mathsf {rely}}(-,\alpha) \land | {\mathsf {in}}(\mathcal {L},a, {\mathsf {all}})|\leq \mathcal {R} \\&\land \forall (\sigma, \mathcal {R}_{E})\in \mathcal {I}. {\mathsf {interp}}(\sigma)= (\mathcal Q,\mathcal Q')\Rightarrow \mathcal {R} _{E}\in \mathcal Q'\\&\land \mathcal {L} ({\mathsf {esc}},a,\bot) =\\&{{\oplus }}\left \{{ \mathcal {R}_{E}\mid \begin{array}{l} (\sigma, \mathcal {R}_{E})\in \mathcal I, \mathcal {R}_{E}\in \mathcal Q',\\ {\mathsf {interp}}(\sigma)=(\mathcal Q,\mathcal Q'),\\ (\not \exists b. {\mathsf {hb}}^{=}(a,b)\land \mathcal {L} ({\mathsf {cond}},b,\bot)\in \mathcal Q)\\ \end{array} }\right \}\\&\land {\mathsf {out}} (\mathcal L,a, {\mathsf {sb}})= {\mathsf {out}}(\mathcal L,a, {\mathsf {rf}})= {\tt EMP}\\&\land (\mathcal {R}_ {\mathsf {sb}},r_ {\mathsf {rf}})\!\in \! {\mathsf {guar}}(\mathcal {R}_ {\mathsf {pre}}, \mathcal {R},\alpha) \!\land \! \mathsf {wpe}(\alpha, \mathcal {R}_ {\mathsf {pre}}, {\mathsf {in}}(\mathcal L,a, {\mathsf {all}}))\\ \textit {then}&\exists \mathcal L'. 
{\mathsf {dom}}(\mathcal G.A)= {\mathsf {valid}}(\mathcal G,\mathcal {L}', {\mathsf {dom}}(\mathcal G.A)) \\&\land {\mathsf {compat}} (\mathcal G,\mathcal {L}') \land {\mathsf {conform}} (\mathcal G,\mathcal {L}', {\mathsf {dom}}(\mathcal G.A))\\&\land \forall b\not =a. \mathcal L'({\mathsf {sb}},b,\bot)=\mathcal L({\mathsf {sb}},b,\bot) \land \mathcal L'({\mathsf {sb}},a,\bot)\!=\! \mathcal {R}_ {\mathsf {sb}}\end{align*}
In what follows we formulate Theorem 4, Instrumented Execution. Intuitively, this theorem states that if a program is globally safe to execute for $n+1$
steps from the current machine configuration, then an arbitrarily scheduled move leads to a new machine configuration from which the remainder of the program is globally safe for another $n$
steps.
Theorem 4 (Instrumented Execution):
If we have ${\mathsf {GSafe}}_{n+1}(\mathcal {T}_{ins}, \mathcal G, \mathcal {L}) \land \langle \mathsf {erase}(\mathcal {T}_{ins});\mathcal G\rangle \longrightarrow \langle \mathcal T';\mathcal G'\rangle $
then $\exists \mathcal {T}'_{ins},\mathcal L'. \mathsf {erase}(\mathcal {T}'_{ins}) = \mathcal T'\land {\mathsf {GSafe}}_{n}(\mathcal {T}'_{ins}, \mathcal G', \mathcal {L'})$
.
Proof:
Starting from ${\mathsf {GSafe}}_{n+1}(\mathcal {T}_{ins},\mathcal G, \mathcal {L})$
, a machine step transforms the graph into $\mathcal G'$
with a new event $b$
and the thread pool into $\mathcal T'$
, leaving $n$
more locally safe steps for the active thread.
By applying Lemma 1, we have that there exists a labelling $\mathcal L_{1}$
derived from the original labelling $\mathcal L$
with ${\mathsf {in}}(\mathcal L_{1},b, {\mathsf {sb}})= \mathcal {R}\land \mathcal L_{1}({\mathsf {sb}},a,\bot)= \mathcal {R}_{rem}$
, which ensures:\begin{align*}&{\mathsf {dom}}(\mathcal G.A)= {\mathsf {valid}}(\mathcal G',\mathcal L_{1}, {\mathsf {dom}}(\mathcal G.A))\\&\land {\mathsf {compat}} (\mathcal G',\mathcal L_{1}) \land {\mathsf {conform}} (\mathcal G',\mathcal L_{1}, {\mathsf {dom}}(\mathcal G'.A)).\end{align*}
By applying Lemma 2, we have that there exists $\mathcal L_{2}$
updated from $\mathcal L_{1}$
with the new node’s incoming $\mathsf {rf}$
label, which maintains the ${\mathsf {valid}}$
set.
By applying Lemma 3, we have that there exists $\mathcal L_{3}$
updated from $\mathcal L_{2}$
with escrow incoming and outgoing edges taken into consideration. With the new labelling, the ${\mathsf {compat}}$
and ${\mathsf {conform}}$
properties are recovered.
By applying Lemma 4, we have that with $\mathcal L_{3}$
, the sum of the new event’s incoming resource maps satisfies its $\mathsf {wpe}$
definitions.
By applying Lemma 5, we have that there is a labelling $\mathcal L'$
with the new event’s outgoing edges updated. The ${\mathsf {valid}}$
, ${\mathsf {compat}}$
, and ${\mathsf {conform}}$
along with other properties are preserved for the new graph and labelling. Therefore, we can conclude ${\mathsf {GSafe}}_{n}(\mathcal {T}'_{ins},\mathcal G', \mathcal {L'})$
.
Now we present one more lemma, whose proof is deferred to the appendix, to demonstrate that all possible executions are free from data races, memory errors, and uninitialised reads.
Lemma 6 (Error Free):
If we have ${\mathsf {GSafe}}_{n}(\mathcal {T}_{ins},\mathcal G, \mathcal {L})$
then $\neg {\mathsf {dataRace}} (\mathcal G)$
, $\neg {\mathsf {memErr}} (\mathcal G)$
, and there are no dangling reads.
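The data-race condition ruled out by Lemma 6 can be pictured operationally. The following Python sketch is an illustration only, not the paper's formal ${\mathsf {dataRace}}$ predicate: it flags two conflicting accesses to the same location, at least one of them non-atomic, that are unordered by happens-before (given here as a transitively closed set of pairs).

```python
from itertools import combinations

def has_data_race(events, hb):
    """events: dict id -> (kind, loc, atomicity); kind in {'R','W'},
    atomicity in {'na','rlx','acq','rel'}. hb: set of ordered pairs."""
    for a, b in combinations(events, 2):
        (ka, la, aa), (kb, lb, ab) = events[a], events[b]
        conflicting = la == lb and 'W' in (ka, kb)   # same location, one write
        nonatomic = 'na' in (aa, ab)                  # at least one non-atomic
        unordered = (a, b) not in hb and (b, a) not in hb
        if conflicting and nonatomic and unordered:
            return True
    return False

# Two unordered non-atomic writes to x race; an hb edge removes the race.
evts = {1: ('W', 'x', 'na'), 2: ('W', 'x', 'na')}
assert has_data_race(evts, hb=set())
assert not has_data_race(evts, hb={(1, 2)})
```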
Finally, we give the global soundness theorem, Adequacy.
Theorem 5 (Adequacy):
If we have $\vdash { \left \{{ \mathsf {true}}\right \}\,e\,\left \{{x.P}\right \}}$
then $[\![e]\!] \subseteq \lbrace V\mid [\![P[V/x]]\!] \not = \emptyset \rbrace $
.
Proof:
From $\vdash { \left \{{ \mathsf {true}}\right \}\,e\,\left \{{x.P}\right \}}$
, we can derive that from $\mathsf {true}$
the program $e$
is locally safe for an arbitrarily large number of steps. We assume $e$
terminates in $n$
steps. That is, according to the step-level semantics $e$
will be reduced to some pure value, which can be referred to as $V$
, after $n$
steps. We assert ${\mathsf {LSafe}}_{n+1}(e,[\![x.P]\!])$
, based on which we construct:\begin{align*} {\mathsf {GSafe}}_{n+1} \left ({\begin{array}{l} [0\mapsto ({\mathsf {start}},e, {\tt emp},[\![x.P]\!])],\\ ([{\mathsf {start}}\mapsto \mathbb {S}],\emptyset,\emptyset,\emptyset), \\{}[({\mathsf {sb}}, {\mathsf {start}},\bot)\mapsto {\tt EMP}]\uplus [({\mathsf {rf}}, {\mathsf {start}},\bot)\mapsto {\tt EMP}]\\ \end{array}}\right).\end{align*}
By applying the Instrumented Execution theorem $n$
times, we can derive:
$\exists \mathcal T'_{ins}, \mathcal {R}. {\mathsf {GSafe}}_{1}(\mathcal T'_{ins} \uplus [0 \mapsto (-, V, \mathcal {R}, [\![x.P]\!])],-,-)$
. From this condition we can infer that $\mathcal {R}\Rrightarrow [\![P[V/x]]\!]$
and thus we can conclude that $[\![P[V/x]]\!]\not =\emptyset $
.
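As a concrete sanity check (an illustrative instantiation, not part of the proof), take $e = 42$ with the postcondition $x.\ x = 42$:\begin{align*} \vdash \{\mathsf{true}\}\ 42\ \{x.\ x = 42\} \;\Longrightarrow\; [\![42]\!] = \{42\} \subseteq \{\,V \mid [\![V = 42]\!]^{\rho} \neq \emptyset\,\} = \{42\},\end{align*}
since by the semantics of equality assertions $[\![42 = 42]\!]^{\rho}$ contains every resource map, whereas $[\![V = 42]\!]^{\rho} = \emptyset$ for any $V \neq 42$.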
Appendix
More Definitions and Proof of Lemmas
In this section we present the less interesting yet indispensable definitions used in our reasoning system and semantic framework, and the proofs of the lemmas and corollaries discussed in the main text.
Our reasoning system features new types of assertions and the corresponding inference rules to reason about them. We first present the semantics of our assertions, and then prove the soundness of our inference rules of the form $P\Rightarrow Q$
.\begin{align*} \begin{array}{cc|l} &R & \mathcal {R} \in [\![R]\!]^{\rho } iff\\ \hline (1)& t = t' & [\![t]\!]^{\rho } = [\![t']\!]^{\rho } \\ (2)& t \sqsubseteq _{\tau } t' & [\![t]\!]^{\rho } \sqsubseteq _{\tau } [\![t']\!]^{\rho } \\ \hline (3)& \mathsf {uninit}(t) & \mathcal {R}(\mathsf {L}).\Pi ([\![t]\!]^{\rho }) = \mathsf {uninit}\\ (4)& t \overset {f}{\mapsto } t' & \mathcal {R}(\mathsf {L}).\Pi ([\![t]\!]^{\rho }) = \mathtt {na}([\![t']\!]^{\rho }, f)\land f\in (0,1] \\ (5)& \begin{array}{|c|c|}\hline {t:t'} & {\tau } \\ \hline \end{array} & \exists S. \mathcal {R}(\mathsf {L}).\Pi ([\![t]\!]^{\rho }) = \mathtt {at}(\tau, S) \land [\![t']\!]^{\rho } \in S \\ (6)& \begin{array}{:c:c:} \hdashline {t:t'} & {\mu } \\ \hdashline \end{array} & \mathcal {R}(\mathsf {L}).g(\mu)([\![t]\!]^{\rho }) \geq [\![t']\!]^{\rho } \\ (7)& {}[\sigma] & \sigma \in \mathcal {R}(\mathsf {L}).\Sigma \\ \hline (8)& P \land Q & \mathcal {R} \in [\![P]\!]^{\rho } \cap [\![Q]\!]^{\rho } \\ (9)& P \lor Q & \mathcal {R} \in [\![P]\!]^{\rho } \cup [\![Q]\!]^{\rho } \\ (10)& P \Rightarrow Q & \lfloor \mathcal {R}\rfloor \cap [\![P]\!]^{\rho } \subseteq [\![Q]\!]^{\rho } \\ (11)& \forall X. P & \mathcal {R}\in \bigcap _{d \in \mathtt {sort}(X)} [\![P]\!]^{\rho [X \mapsto d]} \\ (12)& \exists X. P & \mathcal {R}\in \bigcup _{d \in \mathtt {sort}(X)} [\![P]\!]^{\rho [X \mapsto d]} \\ (13)& P_{1} * P_{2} & \mathcal {R} \in [\![P_{1}]\!]^{\rho } * [\![P_{2}]\!]^{\rho } \\ \hline (14)& \square P & | {\tt EMP}[\mathsf {L}\mapsto \mathcal {R}(\mathsf {L})]| \in [\![P]\!]^\rho \\ (15)& \langle P \rangle & {\tt EMP}[\mathsf {L}\mapsto \mathcal {R}(\mathsf {A})] \in [\![P]\!]^\rho \\ (16)& \langle P \rangle _{s} & {\tt EMP}[\mathsf {L}\mapsto \mathcal {R}(s)] \in [\![P]\!]^\rho \\ (17)& \boxtimes P & {\tt EMP}[\mathsf {L}\mapsto \mathcal {R}(\mathsf {A})] \in [\![P]\!]^\rho \\ \end{array}\end{align*}
Corollary 1 (Soundness of Corollary Inference Rules):
Our corollary inference rules are semantically sound. That is, given an inference rule allowing $P\Rightarrow Q$
, we have $\forall \mathcal {R}. \lfloor \mathcal {R} \rfloor \cap [\![P]\!]^\rho \subseteq [\![Q]\!]^\rho $.
Proof:
For the [SEPARATION-R] rule: $\langle P*Q\rangle _{s}\Leftrightarrow \langle P\rangle _{s}*\langle Q\rangle _{s}$
, from left to right by the definition (16), we have $\mathcal {R}\in [\![\langle P* Q\rangle _{s}]\!]^\rho \triangleq {\tt EMP} [\mathsf {L}\mapsto \mathcal {R}(s)]\in [\![P* Q]\!]^\rho $
. According to the definition (13), the term can be transformed into:\begin{align*}&\exists r_{1},r_{2}. \mathcal {R}(s)=r_{1}\oplus r_{2} \land \\&{\tt EMP}[\mathsf {L}\mapsto r_{1}]\in [\![P]\!]^\rho \land {\tt EMP}[\mathsf {L}\mapsto r_{2}]\in [\![Q]\!]^\rho,\end{align*}
which implies the right-hand-side term. The right-to-left direction is symmetric. The proof of this rule is finished.
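The split used in this proof can be illustrated in a minimal model where a resource is a finite map and $\oplus$ is disjoint union (a simplifying assumption for illustration; the paper's $\oplus$ is more refined).

```python
# Resource composition as disjoint map union: defined only when domains
# are disjoint, mirroring the partiality of (+).
def compose(r1, r2):
    assert set(r1).isdisjoint(r2), "undefined composition"
    return {**r1, **r2}

# <P * Q>_s holds when R(s) splits as r1 (+) r2 with r1 in [[P]] and
# r2 in [[Q]]; that split is exactly what <P>_s * <Q>_s asserts.
r_s = {'x': 1, 'y': 2}
r1, r2 = {'x': 1}, {'y': 2}
assert compose(r1, r2) == r_s
```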
For the [KNOWLEDGE-MANIPULATION-1] rule: $\square P\Rightarrow P$
, according to definition (14) the semantics of its left-hand side is $\mathcal {R}\in [\![\square P]\!]^\rho \triangleq | {\tt EMP}[\mathsf {L}\mapsto \mathcal {R}(\mathsf {L})]|\in [\![P]\!]^\rho $
. As $| {\tt EMP}[\mathsf {L}\mapsto \mathcal {R}(\mathsf {L})]|\leq \mathcal {R} $
, $\mathcal {R}$
is also in $[\![P]\!]^\rho $
. The proof of this rule is finished.
Next we prove the rule [KNOWLEDGE-MANIPULATION-2]: $\square P\Rightarrow \square \square P$
. Starting from the rule’s left-hand side, we have $| {\tt EMP}[\mathsf {L}\mapsto \mathcal {R}(\mathsf {L})]|\in [\![P]\!]^\rho $
. Therefore, $|| {\tt EMP}[\mathsf {L}\mapsto ~\mathcal {R}(\mathsf {L})]||\in [\![P]\!]^\rho $
. The proof of this rule is finished.
For [KNOWLEDGE-MANIPULATION-3]: $\square P*Q\Leftrightarrow \square P\land Q$
, as $\square P$
is knowledge, the semantic definition (8) $[\![\square P]\!]^\rho \cap [\![Q]\!]^\rho $
is equivalent to (13) $[\![\square P]\!]^\rho * [\![Q]\!]^\rho $
in terms of evaluation results. Therefore, the rule is proven.
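The intuition that knowledge collapses $*$ into $\land$ can be checked in a toy model where knowledge is a set of duplicable facts and composition is union (an illustrative assumption, not the paper's resource algebra): because knowledge $k$ satisfies $k \oplus k = k$, a separating split never demands more than the conjunction does.

```python
# Knowledge modelled as a frozenset of facts; composition is union,
# which is idempotent on knowledge (duplicability).
def compose(k1, k2):
    return k1 | k2

k = frozenset({'loc x governed by tau'})
assert compose(k, k) == k                 # knowledge is duplicable
resource = k | frozenset({'owns y'})
assert compose(k, resource) == resource   # adding knowledge already held is a no-op
```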
The rules [KNOWLEDGE-MANIPULATION-4…7] are semantically sound because, according to the definition of the stripping operation and the corresponding semantic definitions ((7), (5), (1), and (6)), the resource representing these assertions does not change after stripping. Therefore they can be transformed into knowledge form.
For [SEPARATION-1]: $\begin{aligned} \begin{array}{:c:c:} \hdashline {\gamma:t} & {\mu } \\ \hdashline \end{array} * \begin{array}{:c:c:} \hdashline {\gamma:t'} & {\mu } \\ \hdashline \end{array} \Leftrightarrow \,\,\begin{array}{:c:c:} \hdashline {\gamma:t\cdot _\mu t'} & {\mu } \\ \hdashline \end{array} \end{aligned}$
, starting from the rule’s left hand side, we have:\begin{align*}&\mathcal {R}\in \left[{\!\left[{ \begin{array}{:c:c:} \hdashline {\gamma:t} & {\mu } \\ \hdashline \end{array} * \begin{array}{:c:c:} \hdashline {\gamma:t'} & {\mu } \\ \hdashline \end{array}}\right]\!}\right]^\rho \triangleq \\&\exists \mathcal {R} _{1}\in \left[{\!\left[{ \begin{array}{:c:c:} \hdashline {\gamma:t} & {\mu } \\ \hdashline \end{array} }\right]\!}\right]^\rho, \mathcal {R}_{2}\in \left[{\!\left[{ \begin{array}{:c:c:} \hdashline {\gamma:t'} & {\mu } \\ \hdashline \end{array} }\right]\!}\right]^\rho. \mathcal {R}= \mathcal {R}_{1}\oplus \mathcal {R} _{2},\end{align*}
which is equivalent to the semantics of the rule’s right hand side. The rule is proven.
For the [SEPARATION-2] rule: $\begin{aligned} \begin{array}{|c|c|}\hline {\ell:s} & {\tau } \\ \hline \end{array} * \begin{array}{|c|c|}\hline {\ell:s'} & {\tau '} \\ \hline \end{array} \Rightarrow \,\,\tau = \tau '\land (s \sqsubseteq _\tau s' \lor s' \sqsubseteq _\tau s) \end{aligned}$
, starting from the rule’s left hand side, we have:\begin{align*}&\mathcal {R}\in \left[{\!\left[{ \begin{array}{|c|c|}\hline {\ell:s} & {\tau } \\ \hline \end{array} * \begin{array}{|c|c|}\hline {\ell:s'} & {\tau '} \\ \hline \end{array} }\right]\!}\right]^\rho \triangleq \\&\exists \mathcal {R} _{1}\in \left[{\!\left[{ \begin{array}{|c|c|}\hline {\ell:s} & {\tau } \\ \hline \end{array}}\right]\!}\right]^\rho, \mathcal {R}_{2}\in \left[{\!\left[{ \begin{array}{|c|c|}\hline {\ell:s'} & {\tau '} \\ \hline \end{array}}\right]\!}\right]^\rho. \mathcal {R}= \mathcal {R}_{1} \oplus \mathcal {R} _{2}.\end{align*}
Given $\mathcal {R}_{1} {\scriptstyle \#} \mathcal {R} _{2}$
, according to the definition of protocol compositions, the right hand side is implied. The rule is proven.
For [ASSERTION-PROPERTY-1…7], we prove that special assertions cannot be nested. According to definitions (14-17), the nesting shown on the left-hand side results in an empty resource map being checked against the original assertion $P$
, which implies false unless $P$
is ${\tt emp}$
. The proof is finished.
A ghost move is a transition that only changes auxiliary/logical computation states. This is ensured by our resource-level ghost moves. We first present our resource-level ghost moves below:\begin{align*} (1) {\frac { \mathcal {R}\in [\![P]\!]^\rho }{ \mathcal {R}\Rrightarrow [\![P]\!]^\rho }} \qquad (2) {\frac { \mathcal {R}_{0}\in [\![P]\!]^\rho \quad \forall \mathcal {R} \in [\![P]\!]^\rho. \mathcal {R}\Rrightarrow [\![P']\!]^\rho }{ \mathcal {R}_{0}\Rrightarrow [\![P']\!]^\rho }} \\ \\ (3) {\frac { m\in [\![\mu]\!] }{ \mathcal {R}\Rrightarrow \lfloor \mathcal {R} \rfloor *\{ {\tt EMP}[\mathsf {L}\mapsto (\bot, [\mu \mapsto [i\mapsto m]],\emptyset)]\} }} \\ \\ (4) {\frac { \forall g_{F}\#g. g_{F}\#g' }{\tt EMP[\mathsf {L}\mapsto (\Pi, g,\Sigma)]\Rrightarrow \lfloor {\tt EMP}[\mathsf {L}\mapsto (\Pi, g',\Sigma)]\rfloor }} \\ \\ (5) {\frac { {\mathsf {interp}}(\sigma) = ([\![P]\!]^\rho, [\![P']\!]^\rho) \quad \mathcal {R} ' \in [\![P']\!]^\rho \quad \mathcal {R}[\mathsf {L}] = (\Pi, g, \Sigma)}{ \mathcal {R}\oplus \mathcal {R}' \Rrightarrow \lfloor \mathcal {R}[\mathsf {L}\mapsto (\Pi, g, \Sigma \cup \{\sigma \})]\rfloor }} \\ \\ (6) {\frac { {\mathsf {interp}}(\sigma) = ([\![P]\!]^\rho, [\![P']\!]^\rho)\quad \sigma \in \mathcal {R}[\mathsf {L}].\Sigma \quad \mathcal {R}\in [\![P]\!]^\rho }{ \mathcal {R}_{0} \oplus \mathcal {R} \Rrightarrow \lfloor \mathcal {R}_{0} \rfloor * [\![P']\!]^\rho }} \\ \\ (7) {\frac { \begin{array}{c} \mathcal {R}'[\mathsf {L}]= \mathcal {R}[\mathsf {L}]\oplus r\quad l\in \{ \mathsf {S}\}\cup \mathbb {S} \quad \mathcal {R} '[l]\oplus r= \mathcal {R}[l]\\ \forall l'\not = \mathsf {L}\lor l. \mathcal {R}'[l']= \mathcal {R}[l'] \end{array} }{ \mathcal {R}\Rrightarrow \lfloor \mathcal {R} '\rfloor }}\end{align*}
Corollary 2 (Soundness of Ghost Moves):
Our ghost move rules are semantically sound. That is, given a ghost move rule allowing $P\Rrightarrow Q$
, we have $\forall \mathcal {R} \in [\![P]\!]^\rho. \mathcal {R}\Rrightarrow [\![Q]\!]^\rho $
.
Proof:
We prove the ghost move rules one by one.
For the [GHOST-MOVE-1] rule, we are going to prove that $\forall \mathcal {R} \in [\![P]\!]^\rho. \mathcal {R}\Rrightarrow [\![Q]\!]^\rho $
, given the premise $\forall \mathcal {R}. \lfloor \mathcal {R} \rfloor \cap [\![P]\!]^\rho \subseteq [\![Q]\!]^\rho $
. The premise can be simplified as $[\![P]\!]^\rho \subseteq [\![Q]\!]^\rho $
. Therefore, we have $\forall \mathcal {R} \in [\![P]\!]^\rho. \mathcal {R}\in [\![Q]\!]^\rho $
. By using the resource level ghost move (1), we have $\forall \mathcal {R} \in [\![P]\!]^\rho. \mathcal {R}\Rrightarrow [\![Q]\!]^\rho $
. [GHOST-MOVE-1] is proven.
For the [GHOST-MOVE-2] rule, we are going to prove that $\forall \mathcal {R} \in [\![P*R]\!]^\rho. \mathcal {R}\Rrightarrow [\![Q*R]\!]^\rho. $
According to the definition of separation assertions, it can be transformed to:\begin{align*}&\forall \mathcal {R} \in [\![P]\!]^\rho, \mathcal {R}'\in [\![R]\!]^\rho. \\&\mathcal {R}\oplus \mathcal {R}'\Rrightarrow \{ \mathcal {R}_{1}\oplus \mathcal {R} _{2}\mid \mathcal {R}_{1}\in [\![Q]\!]^\rho \land \mathcal {R}_{2}\in [\![R]\!]^\rho \}.\end{align*}
According to the premise, we have $\forall \mathcal {R} \in [\![P]\!]^\rho. \mathcal {R}\Rrightarrow ~[\![Q]\!]^\rho $
. We can check that the proof obligation is valid for all possible ghost moves allowed by the resource-level ghost move rules. [GHOST-MOVE-2] is proven.
For the [GHOST-MOVE-3] rule, initially we have the following properties: $\forall \mathcal {R} \in [\![P]\!]^\rho. \mathcal {R}\Rrightarrow [\![Q]\!]^\rho $
and $\forall \mathcal {R}\in [\![Q]\!]^\rho. \mathcal {R}\Rrightarrow [\![R]\!]^\rho $,
and we are going to prove that $\forall \mathcal {R} \in [\![P]\!]^\rho. \mathcal {R}\Rrightarrow ~[\![R]\!]^\rho $
. By the first premise, every $\mathcal {R} \in [\![P]\!]^\rho $
can be transformed into some resource in $[\![Q]\!]^\rho $
under all possible ghost moves. Together with the second premise, the rule is proven.
The [GHOST-MOVE-3] and [UNSHARE-R] rules can be proven by moving the resource under the shareable labels to the resource maps’ local component, which is allowed by the resource-level ghost move (7).
For the [GHOST-MOVE-4] rule, we are going to prove that $\begin{aligned} \forall \mathcal {R} \in [\![\mathsf {true}]\!]^\rho. \mathcal {R}\Rrightarrow \left[{\!\left[{\exists \gamma. \begin{array}{:c:c:} \hdashline {\gamma:t_{1}} & {\mu } \\ \hdashline \end{array}}\right]\!}\right]^\rho, \end{aligned}$
which can be done by applying the resource level ghost move (3).
[GHOST-MOVE-5] is a corollary of the resource-level ghost move (4), [GHOST-MOVE-6] of the resource-level ghost move (5), and [GHOST-MOVE-7] of the resource-level ghost move (6).
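The side condition of the resource-level ghost move (4), that every frame $g_{F}$ compatible with $g$ must remain compatible with $g'$, can be illustrated in a toy partial commutative monoid of fractional permissions (an assumption for illustration; the paper's ghost algebra is more general).

```python
from fractions import Fraction as F

# Toy PCM: ghost state is a fraction in [0,1], composition is addition,
# and compatibility g # g_F holds iff g + g_F <= 1.
def compatible(g, gf):
    return g + gf <= 1

# The frame-preservation check of ghost move (4): every frame compatible
# with the old state must stay compatible with the new one.
def frame_preserving(g, g_new, frames):
    return all(compatible(g_new, gf) for gf in frames if compatible(g, gf))

frames = [F(k, 4) for k in range(5)]                   # candidate environment frames
assert frame_preserving(F(1, 2), F(1, 4), frames)      # giving up permission: allowed
assert not frame_preserving(F(1, 2), F(3, 4), frames)  # grabbing more: some frame breaks
```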
Now we present the proofs of the labelling lemmas we have discussed for our reasoning system’s global safety, with the following definitions (which have been informally discussed in the main text):\begin{align*}&\mathcal {R}\Rrightarrow _{\mathcal {I}} \mathcal {R}'\triangleq \exists g. \mathcal {R}'=(\mathcal {R}.\Pi, g, \mathcal {R}.\Sigma \cup \{\sigma \mid (\sigma,-)\in \mathcal I\}) \\&N \triangleq {\mathsf {dom}} (G.A) \\&a\in \mathsf {valid}(G, \mathcal {L},N)\triangleq \exists \mathcal {R},\mathcal I. \\&\quad \mathcal {L} \in \textit {labelling}(G) \\&\quad \mathsf {in}({\mathsf {sb}})\oplus \mathsf {in}({\mathsf {rf}})\oplus \mathsf {in}({\mathsf {esc}}) \Rrightarrow _{\mathcal {I}} \mathcal {R}\oplus \mathsf {out}({\mathsf {esc}})\oplus \mathsf {out}({\mathsf {cond}})\\&\quad (\mathsf {out}({\mathsf {sb}}),\mathsf {out}({\mathsf {rf}}))\in \mathsf {guar}(\mathsf {in}({\mathsf {sb}})\oplus \mathsf {in}({\mathsf {rf}}), \mathcal {R},G.A(a)) \\&\quad (\forall b\in N. \mathsf {isUpd}(b)\land {\mathsf {rf}} (b)=a)\Longrightarrow (\mathcal {L}({\mathsf {rf}},a,b)=\mathsf {out}({\mathsf {rf}})) \\&\quad (\not \exists b\in N. \mathsf {isUpd}(b)\land {\mathsf {rf}} (b)=a)\Longrightarrow (\mathcal {L}({\mathsf {rf}},a,\bot)=\mathsf {out}({\mathsf {rf}})) \\&\quad | \mathcal {L}({\mathsf {rf}},a,\bot)|=|\mathsf {out}({\mathsf {rf}})| \\&\quad \forall (\sigma, \mathcal {R}_{E})\in \mathcal I. \mathsf {interp}(\sigma)= (\mathcal P,\mathcal P') \Longrightarrow \mathcal {R} _{E}\in \mathcal P' \\&\quad \mathcal {L} ({\mathsf {esc}},a,\bot) =\\&\qquad {{\oplus }} \left \{{ \mathcal {R}_{E}\mid \begin{array}{l} (\sigma, \mathcal {R}_{E})\in \mathcal I, \mathcal {R}_{E}\in \mathcal P',\\ \mathsf {interp}(\sigma)=(\mathcal P,\mathcal P'),\\ (\not \exists b. {\mathsf {hb}}^{=}(a,b)\land \mathcal {L} ({\mathsf {cond}},b,\bot)\in \mathcal P)\\ \end{array}}\right \} \\&\quad \textit {where } \mathsf {in}(x)\triangleq {{\oplus }} \{ \mathcal {L}(x,b,a)\mid (x,b,a)\in {\mathsf {dom}} (\mathcal {L})\} \\&\qquad \quad \mathsf {out}(x)\triangleq {{\oplus }} \{ \mathcal {L}(x,a,c)\mid (x,a,c)\in {\mathsf {dom}} (\mathcal {L})\} \\&\mathsf {compat}(G, \mathcal {L})\triangleq \forall \epsilon \subseteq {\mathsf {dom}}(\mathcal {L}).\\&\quad (\not \exists e_{1},e_{2}\in \epsilon. G. {\mathsf {hb}}^{*}(\mathsf {target}(e_{1}), \mathsf {source}(e_{2})))\\&\qquad \Longrightarrow {{\oplus }}_{\eta \in \epsilon } \mathcal {L}(\eta)~\textit {defined}\\&\mathsf {conform}(G, \mathcal {L},N)\triangleq \forall \ell. \forall a,b\in N. \\&\quad G. {\mathsf {mo}}_{\ell, \mathtt {at}}(a,b)\Longrightarrow \mathsf {out}(\mathcal {L},a, {\mathsf {rf}})[\ell] \sqsubseteq _ {\mathtt {at}}\mathsf {out}(\mathcal {L},b, {\mathsf {rf}})[\ell]\end{align*}
Lemma 1 (Step Preparation):
\begin{align*} \textit {if}&{\mathsf {consistentC11}}(\mathcal G) \\&\land {\mathsf {consistentC11}} (\mathcal G')\\&\land {\mathsf {dom}} (\mathcal G'.A)= {\mathsf {dom}}(\mathcal G.A)\uplus \{b\}\\&\land \mathcal L({\mathsf {sb}},a,\bot)= \mathcal {R}\oplus \mathcal {R}_{rem}\\&\land {\mathsf {dom}} (\mathcal G.A)\subseteq {\mathsf {valid}} (\mathcal G,\mathcal L, {\mathsf {dom}}(\mathcal G.A))\\&\land {\mathsf {compat}} (\mathcal G,\mathcal {L}) \land {\mathsf {conform}} (\mathcal G,\mathcal {L},N)\\&\land \forall c\in {\mathsf {dom}} (\mathcal G.A). \mathcal G.A(c)=\mathcal G'.A(c)\\&\land \mathcal G'. {\mathsf {sb}}=\mathcal G. {\mathsf {sb}}\uplus \{(a,b)\}\\&\land \forall c\in {\mathsf {dom}} (\mathcal G.A). \mathcal G. {\mathsf {rf}}(c)=\mathcal G'. {\mathsf {rf}}(c)\\&\land \mathcal G'. {\mathsf {mo}}\supseteq \mathcal G. {\mathsf {mo}}\\ \textit {then}&\exists \mathcal L'. {\mathsf {dom}}(\mathcal G.A)= {\mathsf {valid}}(\mathcal G',\mathcal {L}', {\mathsf {dom}}(\mathcal G.A))\\&\land {\mathsf {compat}} (\mathcal G',\mathcal {L}')\\&\land {\mathsf {conform}} (\mathcal G',\mathcal {L}', {\mathsf {dom}}(\mathcal G'.A))\\&\land \mathcal L'({\mathsf {sb}},a,\bot)= \mathcal {R}_{rem}\\&\land {\mathsf {in}} (\mathcal L',b, {\mathsf {sb}})= \mathcal {R}\\&\land {\mathsf {in}} (\mathcal L',b, {\mathsf {rf}})= {\tt EMP}\\&\land {\mathsf {in}} (\mathcal L',b, {\mathsf {esc}})= {\tt EMP}\\&\land \forall a'\not =a. \mathcal L'({\mathsf {sb}},a',b)= {\tt EMP}\\&\land {\mathsf {out}} (\mathcal L',b, {\mathsf {all}})= {\tt EMP}\\&\land \forall a'\not =a. \mathcal L'({\mathsf {sb}},a',\bot)=\mathcal L({\mathsf {sb}},a',\bot)\end{align*}
Proof:
Firstly, we prove the ${\mathsf {compat}}$
property. Notice that all edges that are ${\mathsf {hb}}$
-independent of the edge ${\mathsf {sb}}(a,b)$
are ${\mathsf {hb}}$
-independent of the ${\mathsf {sb}}(a,\bot)$
edge as well. Suppose the sum $r$
of the resources carried under their local labels is incompatible with $\mathcal {L}({\mathsf {sb}},a,b)(\mathsf {L})$
, i.e., $\neg r\# \mathcal {L}({\mathsf {sb}},a,b)(\mathsf {L})$
. According to our labelling rules, $\mathcal {L}({\mathsf {sb}},a,b)(\mathsf {L})\leq \mathcal {L} ({\mathsf {sb}},a,\bot)(\mathsf {L})$
. That is, there exists some $r'$
such that $\mathcal {L}({\mathsf {sb}},a,\bot)(\mathsf {L})= \mathcal {L}({\mathsf {sb}},a,b)(\mathsf {L})\oplus r'$
. Therefore we can deduce $\neg r\# \mathcal {L}({\mathsf {sb}},a,\bot)(\mathsf {L})$
, as $r\oplus \mathcal {L} ({\mathsf {sb}},a,\bot)(\mathsf {L})=r\oplus \mathcal {L} ({\mathsf {sb}},a,b)(\mathsf {L})\oplus r'$
, whose result is undefined. However, $\neg r\# \mathcal {L}({\mathsf {sb}},a,\bot)(\mathsf {L})$
contradicts the premise that ${\mathsf {compat}}(\mathcal G,\mathcal {L})$
holds. Therefore, the sum of the resources carried by all the edges ${\mathsf {hb}}$
-independent of ${\mathsf {sb}}(a,b)$
is compatible with its resource, and we can derive ${\mathsf {compat}}(\mathcal G',\mathcal {L'})$.
To prove the ${\mathsf {conform}}$
property, notice that the atomic locations are unchanged in this step’s labelling process. Therefore ${\mathsf {conform}}(\mathcal G',\mathcal {L}', {\mathsf {dom}}(\mathcal G'.A))$
is essentially equivalent to ${\mathsf {conform}}(\mathcal G,\mathcal {L}, {\mathsf {dom}}(\mathcal G.A))$
. To prove the ${\mathsf {valid}}$
property and the validity of the updated labelling, we unfold the corresponding definitions and check the requirements against our labelling results.
Lemma 2 (Rely Step):
\begin{align*} \textit {if}&\mathcal G.A(a)=\alpha \\&\land {\mathsf {dom}} (\mathcal G.A)=N\uplus \{a\}\\&\land N\in \mathsf {prefix} (\mathcal G)\land N\subseteq {\mathsf {valid}} (\mathcal G,\mathcal {L},N)\\&\land {\mathsf {in}} (\mathcal {L},a, {\mathsf {all}})= {\mathsf {out}}(\mathcal {L},a, {\mathsf {all}})= {\tt EMP}\\&\land {\mathsf {compat}} (\mathcal G,\mathcal {L}) \land {\mathsf {conform}} (\mathcal G,\mathcal {L},N)\\&\land {\mathsf {consistentC11}} (\mathcal G)\\&\land {\mathsf {in}} (\mathcal {L},a, {\mathsf {rf}})= {\tt EMP}\land {\mathsf {in}}(\mathcal {L},a, {\mathsf {esc}})= {\tt EMP}\\&\land {\mathsf {out}} (\mathcal {L},a, {\mathsf {all}})= {\tt EMP}\\ \textit {then}&\exists \mathcal L'. N\subseteq {\mathsf {valid}} (\mathcal G,\mathcal {L}',N)\\&\land {\mathsf {compat}} (\mathcal G,\mathcal {L}')\\&\land {\mathsf {conform}} (\mathcal G,\mathcal {L}',N)\\&\land {\mathsf {in}} (\mathcal {L}',a, {\mathsf {sb}})\oplus {\mathsf {in}} (\mathcal {L}',a, {\mathsf {rf}}) \in {\mathsf {rely}} ({\mathsf {in}}(\mathcal {L}',a, {\mathsf {sb}}),\alpha)\\&\land {\mathsf {in}} (\mathcal {L}',a, {\mathsf {esc}})= {\mathsf {out}}(\mathcal {L}',a, {\mathsf {all}})= {\tt EMP}\\&\land \forall b,c. \mathcal {L}'({\mathsf {sb}},b,c)=\mathcal {L}({\mathsf {sb}},b,c)\\&\land \forall b. \mathcal {L}'({\mathsf {sb}},b,\bot)=\mathcal {L}({\mathsf {sb}},b,\bot)\end{align*}
Proof:
We first focus on the ${\mathsf {compat}}$ property. For non-update reading actions, we argue that only knowledge is taken into the new node, and knowledge is always compatible with the environment. For a relaxed atomic update, the resource taken in from its ${\mathsf {rf}}$ edge is left to be acquired, so the local resources remain compatible. For an acquire atomic update, which can take non-duplicable resources into its local resources, we notice that there must be a nearest release action $b$ that made the resource shareable. Therefore $b$ happens before $a$, which does not break the ${\mathsf {compat}}$ property.
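This acquire/release reasoning can be illustrated with a minimal C++11 message-passing sketch (our own example, not drawn from the paper): the release store plays the role of the release action $b$ that makes the payload shareable, and the acquire load plays the role of $a$, so $b$ happens before $a$ and the non-atomic payload safely enters $a$'s local resources.

```cpp
// Minimal message-passing sketch (illustrative only): the release store is
// the release action b; the acquire load that reads from it is the action a.
#include <atomic>
#include <cassert>
#include <thread>

static int payload = 0;              // non-atomic location
static std::atomic<int> flag{0};     // atomic location

int message_pass() {
    std::thread producer([] {
        payload = 42;                                 // non-atomic write
        flag.store(1, std::memory_order_release);     // release action b
    });
    while (flag.load(std::memory_order_acquire) == 0) {} // acquire action a
    int seen = payload;              // safe: b happens-before a
    producer.join();
    return seen;
}
```

Because the acquire load reads from the release store, reading `payload` afterwards is race free, matching the claim that the transfer does not break ${\mathsf {compat}}$.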
We assert that ${\mathsf {conform}}(\mathcal G',\mathcal {L}', {\mathsf {dom}}(\mathcal G'.A))$ is essentially equivalent to ${\mathsf {conform}}(\mathcal G,\mathcal {L}, {\mathsf {dom}}(\mathcal G.A))$, as there are no changes to atomic locations in this labelling step. To prove the ${\mathsf {valid}}$ property and the validity of the updated labelling, we unfold the corresponding definitions and check the requirements against our labelling results.
Lemma 3 (Ghost Step):
\begin{align*} \textit {if}&{\mathsf {dom}}(\mathcal G.A)=N\uplus \{a\}\land N\in \mathsf {prefix} (\mathcal G) \\&\land N\subseteq {\mathsf {valid}} (\mathcal G,\mathcal {L}, {\mathsf {dom}}(\mathcal G.A))\\&\land {\mathsf {compat}} (\mathcal G,\mathcal {L}[({\mathsf {esc}},-,a,\bot):=\mathcal {L} ({\mathsf {esc}},-,a,\bot)\oplus \mathcal {R}]) \\&\land {\mathsf {conform}} (\mathcal G,\mathcal {L},N)\land {\mathsf {consistentC11}} (\mathcal G)\\&\land \mathcal {R}_{\mathsf {before}}\triangleq {\mathsf {in}} (\mathcal {L},a, {\mathsf {sb}})\oplus {\mathsf {in}}(\mathcal {L},a, {\mathsf {rf}})\oplus {\mathsf {in}} (\mathcal {L},a, {\mathsf {esc}})\\&\land \mathcal {R}_{\mathsf {after}}\triangleq \mathcal {R} \oplus {\mathsf {out}} (\mathcal {L},a, {\mathsf {esc}}) \oplus {\mathsf {out}} (\mathcal {L},a, {\mathsf {cond}})\\&\land \mathcal {R}_{\mathsf {before}}\Rrightarrow _{\mathcal {I}} \mathcal {R}_{\mathsf {after}}\land | \mathcal {R}_{\mathsf {before}}|\leq \mathcal {R} \land \mathcal {R} \Rrightarrow \mathcal P\\&\land \forall e. \mathcal {L}({\mathsf {esc}},-,a,e)= {\tt EMP}\\&\land \forall (\sigma, \mathcal {R}_{E})\in \mathcal {I}. {\mathsf {interp}}(\sigma)= (\mathcal Q,\mathcal Q')\Rightarrow \mathcal {R} _{E}\in \mathcal Q'\\&\land \mathcal {L} ({\mathsf {esc}},a,\bot) =\\&{{\oplus }}\left \{{ \mathcal {R}_{E}\mid \begin{array}{l} (\sigma, \mathcal {R}_{E})\in \mathcal I, \mathcal {R}_{E}\in \mathcal Q',\\ {\mathsf {interp}}(\sigma)=(\mathcal Q,\mathcal Q'),\\ (\not \exists b. {\mathsf {hb}}^{=}(a,b)\land \mathcal {L} ({\mathsf {cond}},b,\bot)\in \mathcal Q)\\ \end{array} }\right \}\\ \textit {then}&\exists \mathcal L',\mathcal I', \mathcal {R}', \mathcal {R}'_{\mathsf {before}}, \mathcal {R}'_{\mathsf {after}}\!\in \!\mathcal P. 
N\subseteq {\mathsf {valid}} (\mathcal G,\mathcal {L}', {\mathsf {dom}}(\mathcal G.A)) \\&\land {\mathsf {compat}} (\mathcal G,\mathcal {L}'[({\mathsf {esc}},-,a,\bot):=\mathcal {L}' ({\mathsf {esc}},-,a,\bot)\oplus \mathcal {R} ']) \\&\land {\mathsf {conform}} (\mathcal G,\mathcal {L}',N)\\&\land \mathcal {R}'_{\mathsf {before}}\triangleq {\mathsf {in}} (\mathcal {L}',a, {\mathsf {sb}})\oplus {\mathsf {in}}(\mathcal {L}',a, {\mathsf {rf}})\oplus {\mathsf {in}} (\mathcal {L}',a, {\mathsf {esc}})\\&\land \mathcal {R}'_{\mathsf {after}}\triangleq \mathcal {R}'\oplus {\mathsf {out}} (\mathcal {L}',a, {\mathsf {esc}}) \oplus {\mathsf {out}} (\mathcal {L}',a, {\mathsf {cond}}) \\&\land \mathcal {R}'_{\mathsf {before}}\Rrightarrow _{\mathcal {I}'} \mathcal {R}'_{\mathsf {after}}\\&\land \forall b. \mathcal L'({\mathsf {sb}},b,\bot)=\mathcal L({\mathsf {sb}},b,\bot)\\&\land \forall b. \mathcal L'({\mathsf {rf}},b,\bot)=\mathcal L({\mathsf {rf}},b,\bot)\\&\land \forall b,c. \mathcal L'({\mathsf {sb}},b,c)=\mathcal L({\mathsf {sb}},b,c) \\&\land \forall b,c. \mathcal L'({\mathsf {rf}},b,c)=\mathcal L({\mathsf {rf}},b,c)\\&\land \forall e. \mathcal L'({\mathsf {esc}},-,a,e)= {\tt EMP}\\&\land \forall (\sigma, \mathcal {R}_{E})\in \mathcal I'. {\mathsf {interp}}(\sigma)= (\mathcal Q,\mathcal Q')\Rightarrow \mathcal {R} _{E}\in \mathcal Q'\\&\land \mathcal {L}' ({\mathsf {esc}},a,\bot)=\\&{{\oplus }}\left \{{ \mathcal {R}_{E}\mid \begin{array}{l} (\sigma, \mathcal {R}_{E})\in \mathcal I', \mathcal {R}_{E}\in \mathcal Q',\\ {\mathsf {interp}}(\sigma)=(\mathcal Q,\mathcal Q'),\\ (\not \exists b. {\mathsf {hb}}^{=}(a,b)\land \mathcal {L} ({\mathsf {cond}},b,\bot)\in \mathcal Q)\\ \end{array} }\right \}\end{align*}
Proof:
To prove the ${\mathsf {compat}}$ property, we first assert that the escrowed resource retrieved by $a$ must have been put under escrow by an event that happens before $a$. This is because our labelling process ensures that the escrowed resource is initially attached to the creator’s escrow sink edge, and can only appear in a node’s local component if that node follows the creator in a chain of $({\mathsf {sb}}\cup {\mathsf {sw}})^{+}$ edges. Then we assert that the ${\mathsf {compat}}$ property holds for the updated graph, following the same argument as in the ${\mathsf {compat}}$ proof of Lemma 2 (Rely Step). We also assert that the ${\mathsf {conform}}$ and ${\mathsf {valid}}$ properties hold and that the new labelling is valid, for the same reasons discussed in the proof of the previous lemma.
Lemma 4 (Protocol Equivalence for Writes):
\begin{align*} \textit {if}&{\mathsf {dom}}(\mathcal G.A)=N\uplus \{a\} \land N\in \mathsf {prefix} (\mathcal G) \\&\land N\subseteq {\mathsf {valid}} (\mathcal G,\mathcal {L}, {\mathsf {dom}}(\mathcal G.A))\\&\land {\mathsf {compat}} (\mathcal G,\mathcal {L}[({\mathsf {esc}},-,a,\bot):=\mathcal {L} ({\mathsf {esc}},-,a,\bot)\oplus \mathcal {R}]) \\&\land {\mathsf {conform}} (\mathcal G,\mathcal {L},N) \land {\mathsf {consistentC11}} (\mathcal G)\\&\land {\mathsf {in}} (\mathcal L,a, {\mathsf {all}})\Rrightarrow _{\mathcal {I}} \mathcal {R}\oplus {\mathsf {out}}(\mathcal L,a, {\mathsf {esc}})\oplus {\mathsf {out}} (\mathcal {L},a, {\mathsf {cond}}) \\&\land | {\mathsf {in}}(\mathcal {L},a, {\mathsf {sb}})\oplus {\mathsf {in}} (\mathcal {L},a, {\mathsf {rf}})|\leq \mathcal {R} \\ \textit {then}&({\mathsf {in}}(\mathcal L,a, {\mathsf {sb}})\oplus {\mathsf {in}} (\mathcal L,a, {\mathsf {rf}}), {\mathsf {in}}(\mathcal L,a, {\mathsf {all}}))\in \mathsf {wpe} (\mathcal G.A(a))\end{align*}
Proof:
We prove this lemma by first unfolding the definitions of ${\mathsf {valid}}$, ${\mathsf {compat}}$, and ${\mathsf {conform}}$. Then we perform a case analysis on the type of $a$ and check the corresponding $\mathsf {wpe}$ definitions.
Lemma 5 (Guarantee Step):
\begin{align*} \textit {if}&\mathcal G.A(a)=\alpha \land {\mathsf {dom}} (\mathcal G.A)=N\uplus \{a\} \land N\in \mathsf {prefix} (\mathcal G) \\&\land N\subseteq {\mathsf {valid}} (\mathcal G,\mathcal {L}, {\mathsf {dom}}(\mathcal G.A))\\&\land {\mathsf {compat}} (\mathcal G,\mathcal {L}[({\mathsf {esc}},-,a,\bot):=\mathcal {L} ({\mathsf {esc}},-,a,\bot)\oplus \mathcal {R}]) \\&\land {\mathsf {conform}} (\mathcal G,\mathcal {L},N) \land {\mathsf {consistentC11}} (\mathcal G)\\&\land \mathcal {R} _ {\mathsf {pre}}= {\mathsf {in}}(\mathcal L,a, {\mathsf {sb}})\oplus {\mathsf {in}} (\mathcal L,a, {\mathsf {rf}})\\&\land {\mathsf {in}} (\mathcal {L},a, {\mathsf {all}})\Rrightarrow _{\mathcal {I}} \mathcal {R}\oplus {\mathsf {out}}(\mathcal L,a, {\mathsf {esc}})\oplus {\mathsf {out}} (\mathcal {L},a, {\mathsf {cond}})\\&\land \mathcal {R} _ {\mathsf {pre}}\in {\mathsf {rely}}(-,\alpha) \land | {\mathsf {in}}(\mathcal {L},a, {\mathsf {all}})|\leq \mathcal {R} \\&\land \forall (\sigma, \mathcal {R}_{E})\in \mathcal {I}. {\mathsf {interp}}(\sigma)= (\mathcal Q,\mathcal Q')\Rightarrow \mathcal {R} _{E}\in \mathcal Q'\\&\land \mathcal {L} ({\mathsf {esc}},a,\bot) =\\&{{\oplus }}\left \{{ \mathcal {R}_{E}\mid \begin{array}{l} (\sigma, \mathcal {R}_{E})\in \mathcal I, \mathcal {R}_{E}\in \mathcal Q',\\ {\mathsf {interp}}(\sigma)=(\mathcal Q,\mathcal Q'),\\ (\not \exists b. {\mathsf {hb}}^{=}(a,b)\land \mathcal {L} ({\mathsf {cond}},b,\bot)\in \mathcal Q)\\ \end{array} }\right \}\\&\land {\mathsf {out}} (\mathcal L,a, {\mathsf {sb}})= {\mathsf {out}}(\mathcal L,a, {\mathsf {rf}})= {\tt EMP}\\&\land (\mathcal {R}_ {\mathsf {sb}},r_ {\mathsf {rf}})\!\in \! {\mathsf {guar}}(\mathcal {R}_ {\mathsf {pre}}, \mathcal {R},\alpha) \!\land \! \mathsf {wpe}(\alpha, \mathcal {R}_ {\mathsf {pre}}, {\mathsf {in}}(\mathcal L,a, {\mathsf {all}}))\\ \textit {then}&\exists \mathcal L'. 
{\mathsf {dom}}(\mathcal G.A)= {\mathsf {valid}}(\mathcal G,\mathcal {L}', {\mathsf {dom}}(\mathcal G.A)) \\&\land {\mathsf {compat}} (\mathcal G,\mathcal {L}') \land {\mathsf {conform}} (\mathcal G,\mathcal {L}', {\mathsf {dom}}(\mathcal G.A))\\&\land \forall b\not =a. \mathcal L'({\mathsf {sb}},b,\bot)=\mathcal L({\mathsf {sb}},b,\bot) \land \mathcal L'({\mathsf {sb}},a,\bot)= \mathcal {R}_ {\mathsf {sb}}\end{align*}
Proof:
For the new node $a$’s outgoing ${\mathsf {sb}}$ edge, we first prove its ${\mathsf {compat}}$ property. Suppose there is an edge $\xi $ that is $\mathsf {hb}$-independent of $a$’s outgoing $\mathsf {sb}$ edge ${\mathsf {sb}}(a,\bot)$ and whose resource map is incompatible with $\mathcal L({\mathsf {sb}},a,\bot)$. Suppose first that $a$ is not an acquire action, i.e., the incompatibility is not caused by moving resources from the current resource map’s $\mathsf {A}$ component to its local component. We then use case analysis to demonstrate that the action’s guarantee condition cannot introduce new incompatibility; and if the incompatibility is not newly introduced, it must already appear on one of the incoming edges, which violates the ${\mathsf {compat}}$
property in the premise. If $a$ is an acquire action, which could move resources from the $\mathsf {A}$ component to $\mathsf {L}$, we argue by contradiction that no incompatibility can be introduced in this process. Assuming the incompatible resource originally in $\mathsf {A}$ is $r$, there must be another node, say $b$, that moves $r$ from $\mathsf {L}$ to a shareable component, after which $r$ could be loaded into $\mathsf {A}$ in $a$’s thread, according to our labelling method. By the C11 memory model, we derive that $b$ happens before $a$. If there exists an edge $\xi $ that is $\mathsf {hb}$-independent of $\mathcal {L}'({\mathsf {sb}},a,\bot)$ and carries the incompatible resource $r$, it is also $\mathsf {hb}$-independent of one of $a$’s incoming edges that carries $r$ (the case where $r$ is created by $a$ is trivial), which violates ${\mathsf {compat}}(\mathcal G, \mathcal {L})$. With the same argument as used for the release fence case, we argue that the ${\mathsf {compat}}$ property holds for $a$’s outgoing ${\mathsf {rf}}$ edge.
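The fence-based transfer underlying the release fence case can be pictured with an analogous C++11 sketch (again our own illustration, not the paper's): a release fence makes the writer's local resources shareable before a relaxed store, and the reader's acquire fence moves them from the waiting-to-be-acquired component ($\mathsf{A}$) into its local component ($\mathsf{L}$).

```cpp
// Fence-based transfer sketch (illustrative only): release fence + relaxed
// store on the writer side; relaxed load + acquire fence on the reader side.
#include <atomic>
#include <cassert>
#include <thread>

static int data = 0;                 // non-atomic location
static std::atomic<int> ready{0};    // atomic location

int fence_pass() {
    std::thread writer([] {
        data = 7;                                            // local resource
        std::atomic_thread_fence(std::memory_order_release); // make it shareable
        ready.store(1, std::memory_order_relaxed);
    });
    while (ready.load(std::memory_order_relaxed) == 0) {}    // resource waits in A
    std::atomic_thread_fence(std::memory_order_acquire);     // move A into L
    int r = data;                    // safe only after the acquire fence
    writer.join();
    return r;
}
```

The relaxed load alone would leave the transferred resource waiting to be acquired; only the acquire fence afterwards completes the synchronisation, mirroring the $\mathsf{A}$-to-$\mathsf{L}$ move in the proof.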
For the new graph’s ${\mathsf {conform}}$ property, we prove by exhaustion on all possible types of actions, checking the unfolded ${\mathsf {conform}}$ definition against the guarantee conditions for the corresponding type of action. As in the previous lemmas, the ${\mathsf {valid}}$ property and the validity of the updated labelling can be demonstrated by unfolding the corresponding definitions and checking the requirements against our labelling results.
Lemma 6 (Error Free):
If ${\mathsf {GSafe}}_{n}(\mathcal {T}_{ins},\mathcal G, \mathcal {L})$
then we have $\neg {\mathsf {dataRace}} (\mathcal G)$
, $\neg {\mathsf {memErr}} (\mathcal G)$
, and all reads are initialised.
Proof:
We first prove, by contradiction, that an event graph supporting safe executions is free from race conditions. Suppose events $a$ and $b$ in $\mathcal G$ cause a data race. By the definition of a race condition we can deduce that $\neg {\mathsf {hb}} (a,b)\land \neg {\mathsf {hb}}(b,a)$. Also, we can derive that there is a location $\ell $ that appears in both $a$’s and $b$’s incoming edges and holds some non-trivial (not $\bot $) non-atomic values. However, given that $a$’s incoming edges are $\mathsf {hb}$-independent of $b$’s, the appearance of the non-atomic resource for location $\ell $ in both groups violates the $\mathsf {compat}$ property in the global safety definitions. Therefore, no two events in $\mathcal G$ could raise a data race.
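For concreteness, the race criterion used in this argument (two accesses to the same non-atomic location, at least one of them a write, with neither event happening before the other) can be stated as a predicate over an explicit event graph. The `Event` and `Hb` encodings below are our own simplification, not the paper's formal definitions.

```cpp
// Simplified data-race check over an explicit event graph (our encoding).
#include <cassert>
#include <set>
#include <utility>

struct Event { int id; int loc; bool isWrite; bool atomic; };

// hb is given as an explicit, transitively closed set of ordered id pairs.
using Hb = std::set<std::pair<int, int>>;

bool dataRace(const Event& a, const Event& b, const Hb& hb) {
    return a.loc == b.loc && !a.atomic && !b.atomic &&
           (a.isWrite || b.isWrite) &&
           !hb.count({a.id, b.id}) && !hb.count({b.id, a.id});
}
```

Adding an ${\mathsf {hb}}$ pair between the two events is exactly what removes the race, which is the contradiction the proof exploits.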
To prove that the graph is free from memory errors, i.e., that there is no memory access to unallocated memory locations, first notice that our instrumented semantics ensures that, to manipulate a location $\ell $, an event $b$ must hold the information about $\ell $ in the local component of one of its incoming edges. We then define a recursive search procedure that finds the action allocating $\ell $ and demonstrates that the memory accesses to $\ell $ are error free.
Firstly, we search backwards from $b$ along the ${\mathsf {sb}}^{+}$ edges in the graph, until we reach an event $a$ where the information about $\ell $ is not in any of its incoming edges’ local components. In the case that $a$ is an allocation action and it allocates $\ell $, the search ends. In the case that $a$ is an acquire fence that moves $\ell $ from the waiting-to-be-acquired component of one of its incoming edges to the local component of its outgoing resource maps, we assert that, according to our labelling process, there exists a read or update event $a_{0}$ with relaxed memory order prior to $a$ in the ${\mathsf {sb}}^{+}$ relation, such that the information about $\ell $ appears in $a_{0}$’s read-from incoming edge under the waiting-to-be-acquired label. We then recursively perform this search procedure starting from the write event that $a_{0}$ reads from, until we find the allocation action. If neither of the above cases applies, we check whether $a$’s immediate $\mathsf {sb}$ successor $a'$ is a read or update event with acquire memory order; if so, we recursively perform this search procedure starting from the write event that $a'$ reads from. We assert that these are all the cases that need to be considered, as any other case would violate the global safety definitions according to our labelling process. Therefore, a globally safe event graph is error free.