
© 2012 IEEE


SECTION I

Trust management [12] is an access control paradigm for decentralized systems that has attracted a lot of attention over the last 15 years. Research so far has focussed on concrete architectures and policy languages for trust management, and on policy analysis. This paper attempts to shed light on some of the more foundational aspects of trust management.

Trust management can be succinctly characterized by two distinctive features:

- 1) The access policy of the relying party is specified in a high-level *policy language* (e.g. [11], [45], [26], [37], [34], [39], [38], [23], [10], [9], [31], [6]).
- 2) Access decisions do not depend solely on the local policy, but also on digitally signed *credentials* that are submitted to the relying party together with the access request. Access is granted only if a *proof of compliance* can be constructed, showing that the requested permission $Q$ is provable from the policy $P$ combined with the set of credentials $C$.

The first feature effectively decouples the policy from the implementation of the enforcement mechanism, improving maintainability and flexibility in a context of quickly evolving access control requirements.

The second feature is necessitated by the fact that, in large decentralized systems, the relying party generally does not know the identity of the users requesting access in advance. Therefore, authorization has to be based on attributes rather than identity. Authority over these attributes may be delegated to trusted third parties, who may then issue credentials that assert these attributes or re-delegate authority to yet another party. The credentials that are used in trust management may thus be quite expressive, containing attributes, constraints and conditions, and delegation assertions. For this reason, the language for specifying credential assertions is typically the same as the one for specifying the local policy.

Given a derivability relation $\Vvdash$ between sets of assertions and permissions, the basic mechanics of a trust management system can be specified as follows: a user's request $Q$ is granted iff $P\cup C\Vvdash Q$, where $P$ is the relying party's local policy and $C$ is the set of supporting credentials submitted by the user. All policy languages mentioned above can be specified in terms of such a derivability relation $\Vvdash$; in the common case of Datalog-based policy languages, the relation $\Vvdash$ is simply the standard Datalog entailment relation [20].
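For propositional (ground) Datalog, this decision procedure can be sketched in a few lines of Python. This is an illustrative model only; the clause encoding and atom names such as `canExec` are our own, not from any particular policy language:

```python
def entails(clauses, atom):
    # gamma ||- p via least-fixpoint (minimal Herbrand model) evaluation.
    # A clause is (head, frozenset_of_body_atoms); a fact has an empty body.
    known, changed = set(), True
    while changed:
        changed = False
        for head, body in clauses:
            if head not in known and body <= known:
                known.add(head)
                changed = True
    return atom in known

def grants(policy, credentials, query):
    # The basic trust management decision: grant Q iff P ∪ C ||- Q.
    return entails(policy | credentials, query)

policy = {("canExec", frozenset({"isAdmin"}))}   # canExec :- isAdmin
creds = {("isAdmin", frozenset())}               # credential asserting isAdmin
print(grants(policy, set(), "canExec"))          # False: no proof of compliance
print(grants(policy, creds, "canExec"))          # True: P ∪ C derives the permission
```

Submitting the credential changes the outcome of the same query, which is exactly the dynamic behavior the observational equivalence above quantifies over.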

Hence we arrive at a natural notion of observational equivalence on policies that captures the essential aspects of trust management: two policies $P$ and $P^{\prime}$ are equivalent iff for all sets $C$ of credentials and all requests $Q$, $$P\cup C\Vvdash Q \iff P^{\prime}\cup C\Vvdash Q.$$

The fundamental question we are concerned with in this paper is whether an adequate model-theoretic semantics of trust management exists, i.e., one that matches this notion of observational equivalence. Neither the standard model-theoretic Datalog semantics based on minimal Herbrand models (for Datalog-based languages) nor the Kripke semantics for authorization logics related to ABLP [2] are adequate in this sense. While these semantics are sufficient for determining which permissions are granted by a *fixed* policy $P$ and a *fixed* set $C$ of supporting credentials, they do not provide any insight into questions that are particular to trust management, such as:

- (a) Given the semantics of a policy $P$, which permissions $Q$ are granted when $P$ is combined with credential set $C$?
- (b) Given the semantics of two policies $P_{1}$ and $P_{2}$, what is the semantics of their composition $P_{1}\cup P_{2}$ ?
- (c) What can an external user infer about an unknown policy merely by successively submitting requests together with varying sets of credentials and observing the relying party's responses?

We present the first formal trust management semantics that accurately captures the action of dynamically submitting varying sets of credentials. It is compositional with respect to policy union and provides full abstraction [43] with respect to observational equivalence. These two properties together enable it to answer the questions (a) and (b) above.

Furthermore, we develop an axiomatization that is sound and complete with respect to the model-theoretic semantics, and provides inferentially complete object-level reasoning about a trust management system's observables. For example, judgements such as “*if a policy grants access to* $Q_{1}$ *when combined with set* $C_{1}$, *and denies access to* $Q_{2}$ *when combined with set* $C_{2}$, *then it must grant access to* $Q_{3}$ *when combined with* $C_{3}$” can be expressed as a formula in the logic, and be proved (or disproved) within it. It is this expressive power that enables the logic to directly answer questions such as (c) above, and thus to analyze *probing attacks*, a recently identified class of attacks in which the attacker infers confidential information by submitting credentials and observing the trust management system's reactions [31], [4], [8]. Perhaps even more strikingly, it is expressive enough to prove general *meta-theorems* about trust management systems, e.g. “if a policy satisfies some negation-free property, then this property will still hold when the policy is combined with an arbitrary credential set”.

A language-independent semantics would be too abstract to provide any interesting insights. Our trust management semantics is specific to Datalog, and thus applicable to the wide range of Datalog-based policy languages. Datalog has arguably been the most popular logical basis for languages in this context; examples include Delegation Logic [37], SD3 [34], RT [39], [38], Binder [23], Cassandra [10], [9], and SecPAL [6].

The remainder of the paper is structured as follows. We introduce in Section II a simple language for reasoning about Datalog-based trust management policies, together with a relation $\Vvdash$ that captures the intuitive operational meaning of policies and credential submissions. This relation itself is straightforward, but, as we argue in Section III, universal truths (that hold for all policies) are both useful and highly non-trivial. This justifies the need for a logic with a formal semantics whose notion of validity coincides with the intuitive notion of universal truth in trust management systems (Section IV). The corresponding axiomatization is presented in Section V. Section VI describes our implementation of a theorem prover for the logic. Applications and performance results are discussed in Section VII. We review related work in Section VIII and conclude with Section IX. The proofs of our theorems are lengthy; we relegate them to a technical report [?]. Our implementation is available at http://research.microsoft.com/counterdog.

SECTION II

We fix a countable set ${\bf At}$ of propositional variables called *atoms*.^{1} A Datalog *clause* is either an atom $p$ or of the form $p:-p_{1}, \ldots,p_{n}$, where $p,p_{1}, \ldots,p_{n}\in {\bf At}$. A *policy* $\gamma$ is a finite set of clauses. We write $\Gamma$ to denote the set of all policies.

Atoms correspond to atomic facts that are relevant to access control, e.g. “Alice can execute run.exe” or “Bob is a part time student” or “the system is in state Red”. From the point of view of the Datalog engine, the atoms have no inherent meaning beyond the logical dependencies specified within the policy (and the submitted credentials). It is the responsibility of the *reference monitor*, which acts as an interface between requesters and resources, to query the policy in a meaningful way. For instance, if Alice attempts to execute run.exe, the reference monitor would check if the corresponding atom CanExec(Alice, run.exe) is derivable from the policy in union with Alice's submitted credentials.

To specify when an atomic query $p\in {\bf At}$ is derivable from a policy $\gamma$, we introduce the relation symbol $\Vvdash$: $$\gamma \Vvdash p\ {\rm iff}\ p\in\gamma\ {\rm or}\ \exists\vec{p}\subseteq_{\rm fin} {\bf At} :(p:-\vec{p})\in\gamma\wedge\forall p^{\prime}\in\vec{p}.\ \gamma \Vvdash p^{\prime}.\eqno{\hbox{(1)}}$$

We can straightforwardly extend $\Vvdash$ to Boolean compound formulas $\varphi$, and the trivially true query: $$\eqalignno{ &\gamma \Vvdash \top.&\hbox{(2)}\cr &\gamma \Vvdash\neg\varphi \ {\rm iff}\ \gamma \not\Vvdash \varphi.\cr &\gamma \Vvdash\varphi \wedge \varphi^{\prime}\ {\rm iff}\ \gamma \Vvdash\varphi\ {\rm and}\ \gamma \Vvdash \varphi^{\prime}.}$$

The relation $\gamma \Vvdash \varphi$ may be read as “$\varphi$ *holds in* $\gamma$”.

It is the negated case where Datalog differs from classical logic: in the latter, $\neg p$ is entailed by a set of formulas $\gamma$ only if $p$ is false in *all* models of $\gamma$. In Datalog, on the other hand, only the *minimal* model of $\gamma$ is considered. This fits in well with the decentralized security model, where knowledge is generally incomplete, and thus the absence of information should lead to fewer permissions.

The purpose of our language is not just to specify concrete policies, but to speak and reason about policy behaviors in a trust management context. In particular, recall that the outcome of queries is not just dependent on the service's policy alone, but also on the submitted *credentials*, which are also Datalog clauses. To express statements about such interactions, we introduce the notation $\square_{\gamma}\varphi$, which informally means “if the set of credentials $\gamma$ were submitted to the policy, then $\varphi$ would be true”. The policy is evaluated in union with the credentials, so we define
$$\gamma \Vvdash \square_{\gamma^{\prime}} \varphi \ {\rm iff} \ \gamma \cup \gamma^{\prime}\Vvdash \varphi.\eqno{\hbox{(3)}}$$

The full syntax of *formulas* in our trust management reasoning language is thus summarized by the following grammar:
$$\varphi::= \top \vert p\vert \neg\varphi\vert \varphi\wedge\varphi\vert \square_{\gamma}\varphi$$ We write $\Phi$ to denote the set of all formulas.

As usual, we define $\varphi\vee\varphi^{\prime}$ as $\neg(\neg\varphi\wedge\neg\varphi^{\prime})$, $\varphi\rightarrow\varphi^{\prime}$ as $\neg\varphi\vee\varphi^{\prime}$, and $\varphi \longleftrightarrow \varphi^{\prime}$ as $(\varphi\rightarrow\varphi^{\prime})\wedge(\varphi^{\prime}\rightarrow\varphi)$. The unary operators $\square$ and $\neg$ bind more tightly than the binary ones, and $\wedge$ and $\vee$ more tightly than $\rightarrow$ and $\longleftrightarrow$. Implication $(\rightarrow)$ is right-associative, so we write $\varphi_{1}\rightarrow \varphi_{2}\rightarrow \varphi_{3}$ for $\varphi_{1}\rightarrow (\varphi_{2}\rightarrow\varphi_{3})$.

Let $\gamma_{0}$ be the Datalog policy $\{p:-q, r;p:-s;q:-p, t;q:-u\}$ (we use the semicolon as separator in clause sets, to avoid the ambiguity with the comma).

- 1) Without supporting credentials, no atom holds in $\gamma_{0}$: $$\gamma_{0}\Vvdash \neg v,\ {\rm for\ all}\ v\in {\bf At}.$$
- 2) If $u$ and $r$ were submitted as supporting credentials, then $p$ would hold in $\gamma_{0}$: $$\gamma_{0}\Vvdash \square_{\{u;\ r\}}p.$$
- 3) If credential $s$ were submitted, and then $t$ were submitted, then $q$ would hold in $\gamma_{0}$: $$\gamma_{0}\Vvdash\square_{\{s\}}\square_{\{t\}}q.$$ This is, of course, equivalent to submitting both at the same time: $\gamma_{0}\Vvdash\square_{\{s;t\}}q$.
- 4) Submitted credentials may include non-atomic clauses: $$\gamma_{0}\Vvdash\square_{\{s:- q;\ u\}}p.$$
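The four judgements above can be checked mechanically with a small evaluator for $\Vvdash$. This is a sketch; the tuple encoding of formulas and clauses is our own illustrative choice:

```python
def model(clauses):
    # Least fixpoint of a propositional Datalog clause set.
    # A clause is (head, frozenset_of_body_atoms).
    known, changed = set(), True
    while changed:
        changed = False
        for head, body in clauses:
            if head not in known and body <= known:
                known.add(head)
                changed = True
    return known

def c(head, *body):
    return (head, frozenset(body))

def holds(gamma, phi):
    # Evaluate gamma ||- phi; negation is negation-as-failure, and a box
    # evaluates its body under the union with the submitted clause set.
    tag = phi[0]
    if tag == "top":
        return True
    if tag == "atom":
        return phi[1] in model(gamma)
    if tag == "not":
        return not holds(gamma, phi[1])
    if tag == "and":
        return holds(gamma, phi[1]) and holds(gamma, phi[2])
    if tag == "box":
        return holds(gamma | phi[1], phi[2])
    raise ValueError(tag)

gamma0 = {c("p", "q", "r"), c("p", "s"), c("q", "p", "t"), c("q", "u")}
print(holds(gamma0, ("not", ("atom", "p"))))                               # True (item 1)
print(holds(gamma0, ("box", {c("u"), c("r")}, ("atom", "p"))))             # True (item 2)
print(holds(gamma0, ("box", {c("s")}, ("box", {c("t")}, ("atom", "q"))))) # True (item 3)
print(holds(gamma0, ("box", {c("s", "q"), c("u")}, ("atom", "p"))))        # True (item 4)
```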

When are two policies (observationally) equivalent? Intuitively, they are equivalent if they both make the same set of statements true, under every set of submitted credentials. This notion can be formalized using the standard Datalog containment relation $\preceq$, as follows:

(Containment, equivalence). Let $\gamma_{1}, \gamma_{2}\in\Gamma$. Then $\gamma_{1}$ *is contained in* $\gamma_{2}$ $(\gamma_{1}\preceq\gamma_{2})$ iff for all finite $\vec{p}\subseteq {\bf At}$ and $p\in {\bf At}$,
$$\gamma_{1}\cup\vec{p}\Vvdash p\Rightarrow\gamma_{2}\cup\vec{p}\Vvdash p.$$ Two policies $\gamma_{1}$ and $\gamma_{2}$ are *equivalent* $(\gamma_{1}\equiv\gamma_{2})$ iff $\gamma_{1}\preceq\gamma_{2}$ and $\gamma_{2}\preceq\gamma_{1}$.

This definition may seem a bit narrow at first, but the following proposition shows that it actually coincides with the intuitive notion that exactly the same set of formulas (including $\square$-formulas!) holds in two equivalent policies.

Let $\gamma_{1}, \gamma_{2}\in\Gamma$. $$\gamma_{1}\equiv\gamma_{2}\ {\rm iff}\ \forall\varphi\in\Phi.\ \gamma_{1}\Vvdash \varphi\Leftrightarrow\gamma_{2}\Vvdash\varphi.$$

- $\emptyset\preceq\gamma$, for all $\gamma\in\Gamma$.
- $\{a\}\preceq\{a;b\}\preceq\{a;b;c\}$
- $\{a:- b, c\}\preceq\{a:- b\}\preceq\{a\}$
- $\{a:-d;d:-b\}\equiv\{a:-b, c;a:-d;d:-b\}$.
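These containments can be verified by brute force. The definition quantifies over all finite $\vec{p}\subseteq {\bf At}$, but for finite propositional policies it suffices (we state this without proof) to range over the atoms mentioned in either policy, since submitted atoms outside that set cannot trigger any clause body. A sketch, with our own encoding of clauses:

```python
from itertools import chain, combinations

def model(clauses):
    # Least fixpoint of a propositional Datalog clause set.
    known, changed = set(), True
    while changed:
        changed = False
        for head, body in clauses:
            if head not in known and body <= known:
                known.add(head)
                changed = True
    return known

def c(head, *body):
    return (head, frozenset(body))

def atoms(gamma):
    # All atoms mentioned in a policy.
    return set().union(*({h} | set(b) for h, b in gamma)) if gamma else set()

def contained(g1, g2):
    # g1 ⪯ g2: under every submission of atomic credentials, g2 derives
    # at least the atoms that g1 derives. We only range over submissions
    # built from atoms mentioned in either policy (see the caveat above).
    a = sorted(atoms(g1) | atoms(g2))
    for sub in chain.from_iterable(combinations(a, k) for k in range(len(a) + 1)):
        facts = {c(p) for p in sub}
        if not model(g1 | facts) <= model(g2 | facts):
            return False
    return True

print(contained(set(), {c("a")}))                                # True
print(contained({c("a")}, {c("a"), c("b")}))                     # True
print(contained({c("a", "b", "c")}, {c("a", "b")}))              # True
print(contained({c("a", "d"), c("d", "b")},
                {c("a", "b", "c"), c("a", "d"), c("d", "b")})
      and contained({c("a", "b", "c"), c("a", "d"), c("d", "b")},
                    {c("a", "d"), c("d", "b")}))                 # True (equivalent)
```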

SECTION III

The relation $\Vvdash$ from Section II is a straightforward specification of what a policy engine in a trust management system does. It is merely the standard Datalog evaluation relation extended with the $\square$-operator for expressing the action of submitting supporting credentials. The relation is easy to evaluate (for a given $\gamma$ and $\varphi$), and it directly reflects the intuition of the operational workings of a trust management system. So why should we bother developing a formal semantics that, as we shall see, is much more complex? There are three compelling reasons:

- A model-theoretic semantics lets us interpret and manipulate policies as mathematical objects in a syntax-independent way. It also provides additional insights into, and intuitions about, trust management systems.
- To prove that a formula is not a theorem, it is often easier to construct a counter-model (or in our case, a counter-world) than to work directly in the proof theory.
- The relation $\Vvdash$ actually does not even provide a proof theory for formulas $\varphi$: it is of no help in answering the more interesting (but much harder) question of whether $\varphi$ is *valid*, i.e., whether it holds in *all* policies $\gamma$. A formal semantics is the first step towards a corresponding proof theory.

The first two answers also apply to the question on the benefits of having a model-theoretic semantics for any logic. The third point is perhaps the most important from a practical perspective: in policy analysis, we are not mainly interested in the consequences of concrete policies and concrete sets of submitted credentials, but in *universal truths* $\varphi$ that hold in *all* policies (or all policies that satisfy some properties).

We write $\Vvdash \varphi$ iff $\varphi$ holds in all policies, i.e., $\forall\gamma\in\Gamma.\ \gamma \Vvdash \varphi$.

The following examples illustrate that the reasoning techniques required in proving universal truths $\varphi$ are beyond those directly provided by the definition of $\Vvdash$.

If $p$ is true in some policy when credential $q:-r$ is submitted, then $p$ would also be true in the same policy if credential $q$ were submitted: $$\Vvdash\square_{\{q:-r\}}p\rightarrow\square_{\{q\}}p$$ Intuitively, $q$ is “more informative” than $q:-r$ (more formally, $\{q:-r\}\preceq\{q\}$), and providing more information can only lead to more (positive) truths, as Datalog is monotonic.

If submitting $a$ and $b$ individually is not sufficient for making $c$ hold in some policy, but submitting both of them together *is* sufficient, then $a$ cannot possibly hold in the policy:
$$\Vvdash \neg \square_{\{a\}}c\wedge\neg \square_{\{b\}}c\wedge\square_{\{a;\ b\}}c\rightarrow\neg a$$ For suppose $a$ were true in the policy. Then submitting both $a$ and $b$ would be equivalent to submitting just $b$, but this contradicts the observation that submitting solely $b$ does not make $c$ true.
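Universal validity quantifies over infinitely many policies, so it cannot be established by enumeration. It can, however, be tested exhaustively over a bounded policy space. The following sketch (our own construction, with a hypothetical clause universe over atoms $a, b, c$) enumerates all $2^{12}$ policies built from that universe and confirms that the implication holds in every one of them; this is evidence for the bounded fragment, not a proof of validity:

```python
from itertools import chain, combinations

def model(clauses):
    # Least fixpoint (minimal Herbrand model) of a propositional Datalog policy.
    known, changed = set(), True
    while changed:
        changed = False
        for head, body in clauses:
            if head not in known and body <= known:
                known.add(head)
                changed = True
    return known

def c(head, *body):
    return (head, frozenset(body))

def holds(gamma, phi):
    # gamma ||- phi for compound formulas; a box evaluates under the union.
    tag = phi[0]
    if tag == "atom":
        return phi[1] in model(gamma)
    if tag == "not":
        return not holds(gamma, phi[1])
    if tag == "and":
        return all(holds(gamma, f) for f in phi[1:])
    if tag == "box":
        return holds(gamma | phi[1], phi[2])
    raise ValueError(tag)

def implies(ante, cons):
    # Material implication, encoded via negation and conjunction as in the text.
    return ("not", ("and", ante, ("not", cons)))

# Clause universe over atoms a, b, c: all facts, all single-atom bodies,
# and the three two-atom-body clauses.
universe = [c(h) for h in "abc"] \
         + [c(h, b) for h in "abc" for b in "abc" if h != b] \
         + [c("a", "b", "c"), c("b", "a", "c"), c("c", "a", "b")]

phi = implies(("and",
               ("not", ("box", frozenset({c("a")}), ("atom", "c"))),
               ("not", ("box", frozenset({c("b")}), ("atom", "c"))),
               ("box", frozenset({c("a"), c("b")}), ("atom", "c"))),
              ("not", ("atom", "a")))

policies = chain.from_iterable(combinations(universe, k)
                               for k in range(len(universe) + 1))
print(all(holds(frozenset(g), phi) for g in policies))  # True
```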

If $a$ does not hold in some policy, and submitting $d$ is not sufficient for making $e$ hold, but submitting both credentials $b:-a$ and $d:-c$ *is* sufficient, then $c$ must hold in that policy, and furthermore, $a$ would hold if credential $d$ were submitted:
$$\Vvdash \neg a\wedge\square_{\{d\}}\neg e\wedge\square_{\{b:-a;\ d:-c\}}e\rightarrow c\wedge\square_{\{d\}}a.$$ This small example is already too complex to explain succinctly by informal arguments, but it illustrates that reasoning about universal truths is far from trivial. We later present a formal proof of this statement in Example V.4.

There is a class of attacks on trust management systems called *probing attacks* [31], [4], [8], in which the attacker gains knowledge of secrets about the policy by submitting a series of access requests together with sets of supporting credentials, and by observing the system's reactions. Checking if a probing attack allows the attacker to infer a secret can be very complex, but it turns out that we can express probing attacks succinctly and directly as universal truths in our language.

Here is a simple (and naïve) example of a probing attack. A service $S$ has a policy $\gamma$ that includes the publicly readable rule $$S.canRegister(x):-x.hasConsented(S).\eqno{\hbox{(4)}}$$ Informally, this should mean “$S$ says that $x$ can register with the service if $x$ says (or has issued a credential saying) that he or she consents to $S$'s terms and conditions”. The service also exposes the query $S.canRegister(x)$ to any user $x$.

Suppose the user (and attacker) $A$ self-issues a conditional credential
$$A.hasConsented(S):-A.isRegistered(B),\eqno{\hbox{(5)}}$$ which informally means “$A$ says that $A$ consents to $S$'s terms and conditions, if $A$ says that $B$ is registered”. $A$ then submits this credential together with the query $S.canRegister(A)$, and observes that the answer is “no”. From this single observation, she learns that neither $A.hasConsented(S)$ nor $A.isRegistered(B)$ holds in $\gamma$; otherwise the query would have yielded the answer “yes”. This is not very interesting so far, as she has only learnt about the falsity of statements made by herself.

But suppose she can also issue *delegation* credentials of the form $A.p:-D.p$. Such credentials are usually used to express delegation of authority; for example, to delegate authority over who is a student to university $U$, $A$ would issue the credential $A.isStudent(x):-U.isStudent(x)$. But here $A$ abuses this mechanism by issuing the delegation credential
$$A.isRegistered(B):-S.isRegistered(B).\eqno{\hbox{(6)}}$$ Now she submits this credential together with the first conditional credential, and evaluates the same query. By observing the service's reaction to this second probe, and combining this with her previous observation, she then learns whether $B$ is registered (according to $S$!) or not: the service's answer is “yes” iff $\gamma \Vvdash S.isRegistered(B)$. She has thus *detected* a fact in $\gamma$ that had nothing to do with the original query, and may well be confidential. Moreover, it is generally not possible to protect against probing attacks by simple syntactic input sanitization, and enforcing strict non-interference would cripple the intended policy (see [4] for details).
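The two probes can be simulated concretely. This is a sketch in which credentials (4)-(6) are encoded as propositional clauses of our own making; the hidden policy is visible to the attacker only through query answers:

```python
def model(clauses):
    # Least fixpoint of a propositional Datalog clause set.
    known, changed = set(), True
    while changed:
        changed = False
        for head, body in clauses:
            if head not in known and body <= known:
                known.add(head)
                changed = True
    return known

def c(head, *body):
    return (head, frozenset(body))

def probe(policy, credentials, query):
    # The service answers whether policy ∪ credentials ||- query.
    return query in model(policy | credentials)

c1 = c("A.hasConsented(S)", "A.isRegistered(B)")   # credential (5)
c2 = c("A.isRegistered(B)", "S.isRegistered(B)")   # credential (6)
query = "S.canRegister(A)"

def attack(hidden_policy):
    # First probe: submit c1 alone; the text assumes the answer is 'no'.
    assert not probe(hidden_policy, {c1}, query)
    # Second probe: submit c1 and c2; 'yes' iff S.isRegistered(B) holds in gamma.
    return probe(hidden_policy, {c1, c2}, query)

rule4 = c("S.canRegister(A)", "A.hasConsented(S)") # public rule (4), for x = A
print(attack({rule4}))                             # False: B is not registered
print(attack({rule4, c("S.isRegistered(B)")}))     # True: the secret is detected
```

In both runs the attacker only submits her own credentials and observes yes/no answers, yet the second probe's answer coincides exactly with whether $S.isRegistered(B)$ holds in the hidden policy.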

We now show how this attack can be expressed as a universal truth. Let $c_{1}$ and $c_{2}$ be the credentials (5) and (6), respectively. $A$'s knowledge about the public clause (4) in the policy translates into $$\varphi_{1}=\square_{\{A.hasConsented(S)\}}S.canRegister(A).$$ Her first observation is translated into $$\varphi_{2}=\square_{\{c_{1}\}}\neg S.canRegister(A),$$ and the second observation into $$\varphi_{3}=\square_{\{c_{1};c_{2}\}} S.canRegister(A)\quad{\rm or}\quad \varphi_{3}^{\prime}=\square_{\{c_{1};c_{2}\}}\neg S.canRegister(A),$$ depending on the service's reaction. Then the following holds: $$\eqalignno{ &\Vvdash\varphi_{1}\wedge\varphi_{2}\wedge\varphi_{3}\rightarrow S.isRegistered(B)\cr &\Vvdash\varphi_{1}\wedge\varphi_{2}\wedge\varphi_{3}^{\prime}\rightarrow\neg S.isRegistered(B)}$$ We will later present a logic that can prove such statements, and thus can also be used to reason about probing attacks (see Example V.5).

Note that Examples III.3 and III.4 can also be interpreted as probing attacks. For instance, in Example III.4, let us assume that $e$ is the only query publicly exposed by the service, and the attacker initially only knows that $a$ does not hold in the service's policy. The attacker possesses three authenticated credentials: $d$, $b:-a$, and $d:-c$. By submitting first $d$ together with the query $e$, and after that $\{b:-a;\ d:-c\}$ together with the same query, and by observing the service's reactions to these two *probes*, the attacker detects (provided she is sufficiently clever) that $c\wedge\square_{\{d\}}a$ holds in the policy. Depending on the circumstances, this may constitute a breach of secrecy.

We can succinctly define the notions of probes, probing attack, detectability and opacity from [4], [8] in our language.

A *probe* $\pi$ is a formula of the form $\square_{\gamma}\psi$, where $\gamma\in\Gamma$ is called the *probe credential set* and $\psi$ is a $\square$-free formula from $\Phi$ called the *probe query*.

An *observation* of a probe $\pi$ under a policy $\gamma_{0}$ is $\pi$ if $\gamma_{0}\Vvdash\pi$, and $\neg\pi$ otherwise.

A *probing attack* on $\gamma_{0}$ consisting of probes $\{\pi_{1}, \ldots, \pi_{n}\}$ is the conjunction of the observations of $\pi_{i}\in\{\pi_{1}, \ldots, \pi_{n}\}$ under $\gamma_{0}$.

Clearly, by the above definition, if $\varphi$ is a probing attack on $\gamma_{0}$, then $\gamma_{0}\Vvdash\varphi$. But there may be other policies $\gamma$ that also have the property that $\varphi$ holds in them. In the absence of other additional knowledge, the attacker cannot distinguish between $\gamma_{0}$ and any such $\gamma$. To put it positively, the attacker learns from the probing attack $\varphi$ precisely that $\gamma_{0}$ is in the equivalence class of policies in which $\varphi$ holds. We denote this equivalence class induced by probing attack $\varphi$ by $\vert \varphi\vert =\{\gamma\ \vert\ \gamma \Vvdash \varphi\}$.

Now if in all these policies, some property $\varphi^{\prime}$ holds, then the attacker *knows* with absolute certainty that $\varphi^{\prime}$ holds in $\gamma_{0}$ in particular, in which case we say that $\varphi^{\prime}$ is *detectable*. Conversely, if there exists some policy within $\vert \varphi\vert$ in which $\varphi^{\prime}$ does *not* hold, the attacker cannot be certain that $\varphi^{\prime}$ holds in $\gamma_{0}$, in which case we say that $\varphi^{\prime}$ is *opaque*.

(Detectability, opacity). A formula $\varphi^{\prime}\in\Phi$ is *detectable* in a probing attack $\varphi$ on a policy $\gamma_{0}$ iff
$$\forall\gamma\in\vert \varphi\vert .\ \gamma \Vvdash\varphi^{\prime}.$$ A formula $\varphi^{\prime}$ *is opaque* in a probing attack $\varphi$ iff it is not detectable in $\varphi$, or equivalently,
$$\exists\gamma\in\vert \varphi\vert .\ \gamma \not\Vvdash \varphi^{\prime}.$$

(Probing attacks). A formula $\varphi^{\prime}$ is detectable in a probing attack $\varphi$ iff $\Vvdash\varphi\rightarrow\varphi^{\prime}$.

This theorem again underlines the importance of being able to reason about universal truths.

SECTION IV

The model-theoretic semantics we are looking for has to satisfy four requirements:

- 1) Capturing trust management: given $\varphi$ and the semantics of $\gamma$, it is possible to check if $\gamma \Vvdash \varphi$.
- 2) Supporting a notion of validity: $\varphi$ is valid (in the model theory) iff $\Vvdash\varphi$.
- 3) Full abstraction [43]: two policies are equivalent $(\equiv)$ iff their respective semantics are equal.
- 4) Compositionality: the semantics of $\gamma_{1}\cup\gamma_{2}$ can be computed from the individual semantics of $\gamma_{1}$ and $\gamma_{2}$.

We first consider some simple approaches to developing a formal semantics that may immediately come to mind, and show why they fail.

The standard model-theoretic interpretation of a set of Datalog clauses is its minimal Herbrand model, i.e., the set of atoms that hold in it. But in this approach, the policy $\gamma_{0}$ from Example II.1 would have the same semantics as the empty policy $\emptyset$, namely the empty model, even though the two policies are clearly not equivalent (Def. II.2). Hence such a semantics would not be fully abstract. This semantics is not compositional either: from the semantics of $\{p:-q\}$ (which is again empty) and of $\{q\}$, we cannot construct the semantics of their union. Therefore, this semantics is clearly unsuitable in a trust management context, where it is common to temporarily extend the clause set with a set of credentials. In fact, this semantics fails on all four counts regarding our requirements.

We could also attempt to interpret a Datalog clause $p:-p_{1}, \ldots,p_{n}$ as an implication $p_{1}\wedge\ldots\wedge p_{n}\rightarrow p$ in classical (or intuitionistic) logic, and a policy $\gamma$ as the conjunction of its clauses: $\llbracket\gamma \rrbracket=\bigwedge_{c\in\gamma}\llbracket c\rrbracket$. As shown by Gaifman and Shapiro [27], this semantics would indeed be both compositional and fully abstract. However, this interpretation does not correctly capture the trust management relation $\Vvdash$, as we show now. First of all, we would need to translate $\square$-formulas into logic. The obvious way of doing this would be to interpret $\square_{\gamma}\varphi$ as the implication $\llbracket \gamma \rrbracket \rightarrow \llbracket \varphi \rrbracket$. Then, for instance, we have $\{p:-q\}\Vvdash\square_{\{q\}}p$, and correspondingly also $\llbracket \{p:-q\}\rrbracket \models \llbracket \square_{\{q\}}p\rrbracket$, since $\llbracket \{p:-q\} \rrbracket=\llbracket \square_{\{q\}}p\rrbracket=q\rightarrow p$. Thus we might be led to conjecture $$\gamma\Vvdash \varphi {\buildrel ? \over \iff} \llbracket\gamma \rrbracket \models \llbracket \varphi \rrbracket.$$

Unfortunately, this correspondence does not hold in general. Consider the formula $\varphi=\neg q\wedge\square_{\{q\}}p$. From this we can conclude that $\llbracket \varphi\rrbracket =\neg q\wedge(q\rightarrow p)$. But $\{p:-q\}\Vvdash\varphi$, whereas $\llbracket\{p:-q\}\rrbracket \nvDash \llbracket \varphi\rrbracket$. We could try to fix this by only considering the minimal model of the semantics, since ${\bf minMod}(\llbracket\{p:-q\} \rrbracket)\models\neg q$. But we can break this again: $\emptyset\not\Vvdash \varphi$, whereas ${\bf minMod}(\llbracket \emptyset \rrbracket)\models \llbracket \varphi \rrbracket$.

The crucial observation that leads to an adequate semantics is that both Datalog clauses and the trust management specific □-actions are *counterfactual*, rather than implicational, in nature. For instance, $p:-\vec{p}$ can be interpreted as the counterfactual “if $\vec{p}$ were added to the policy, then $p$ would hold”. Similarly, $\square _{\gamma}\varphi$ can be read as “if $\gamma$ were added to the policy, then $\varphi$ would hold”. (Note that the counterfactual conditional “if A were true then B would hold” is strictly stronger than the material implication “A $\rightarrow$ B”, which vacuously holds whenever A is not true.)

Therefore, we can unify the notations and write $\square_{\wedge\vec{p}}p$ instead of $p:-\vec{p}$. Moreover, instead of writing a policy $\gamma$ as a set, we can just as well write it as a conjunction of clauses. We can thus rewrite the syntax for policies and formulas from Section II in the following, equivalent, form: $$\eqalignno{ &{\rm Policies}\ \gamma ::=\top\vert p\vert \square_{\wedge\vec{p}}p\vert \gamma\wedge\gamma\cr &{\rm Formulas}\ \varphi ::=\gamma\vert \neg\varphi\vert \varphi\wedge\varphi\vert \square_{\gamma}\varphi}$$ As before, we write $\Gamma$ and $\Phi$ to denote the set of all policies and formulas, respectively. The relation $\Vvdash$ is also defined as before, with the obvious adaptations to the new syntax.

Henceforth, we treat $\vec{p}$ as syntactic sugar for $\wedge\vec{p}$, and $p:-\vec{p}$ for $\square_{\wedge\vec{p}}p$.

Interpreting $\square$-formulas as counterfactuals, we can now give them a multi-modal Kripke semantics in the spirit of Lewis and Stalnaker [36], [49]: the counterfactual $\square_{\gamma}\varphi$ holds in a possible world $w$ if $\varphi$ holds in those $\gamma$-satisfying worlds $w^{\prime}$ that are *closest* to $w$. We will express the closeness relation using a ternary accessibility relation $R$, and later apply rather strong conditions on $R$ in order to make it match the intended trust management context.

(Model, entailment). A *model* $M$ is a triple $\langle W, R, V\rangle$, where $W$ is a set, $R \subseteq \wp (W)\times W\times W$, and $V:{\bf At}\rightarrow\wp(W)$.

Given a model $M$, we inductively define the model-theoretic entailment relation $\Vdash_{M}\subseteq W\times\Phi$ as follows. For all $w\in W$: $$\eqalignno{ &w\Vdash_{M}\top\cr &w\Vdash_{M}p\ {\rm iff}\ w\in V(p)\cr &w\Vdash_{M}\neg\varphi\ {\rm iff}\ w\nVdash_{M}\varphi\cr &w\Vdash_{M}\varphi_{1}\wedge\varphi_{2}\ {\rm iff}\ w\Vdash_{M}\varphi_{1}\ {\rm and}\ w\Vdash_{M}\varphi_{2}\cr &w\Vdash_{M}\square_{\gamma}\varphi\ {\rm iff}\ \forall w^{\prime}.\ R_{\vert \gamma\vert_{M}}(w, w^{\prime})\Rightarrow w^{\prime}\Vdash_{M}\varphi,}$$ where $\vert \gamma\vert_{M}=\{w\in W\ \vert\ w\Vdash_{M}\gamma\}$. Similarly, we write $\vert w\vert_{M}$ to denote the set $\{\gamma\in\Gamma\ \vert\ w\Vdash_{M}\gamma\}$.

Intuitively, a world $w\in W$ corresponds to a policy; more precisely, to the $\preceq$-maximal policy in $\vert w\vert_{M}$. Vice versa, a policy $\gamma$ corresponds to a world, namely the $\preceq_{M}$-minimal world in $\vert \gamma\vert_{M}$, where $\preceq_{M}$ is an ordering on worlds that reflects the containment relation $\preceq$ on policies (Def. IV.3). (Actually, in Def. IV.4, we associate $\gamma$ simply with the entire cone $\vert \gamma \vert_{M}$.)

(World containment). Given a model $M = \langle W, R, V\rangle$ and $x, y\in W$, $$x\preceq_{M}y\ {\rm iff}\ \forall\gamma\in\Gamma : x\Vdash_{M}\gamma\ {\rm implies}\ y\Vdash_{M}\gamma.$$

(Semantics). The *semantics* of $\gamma$ (with respect to $M$) is $\vert \gamma\vert_{M}$.

As it is, this definition keeps the meaning of $R$ completely abstract, but we can already prove that the semantics is compositional, irrespective of $R$:

(Compositionality). For all models $M$, and $\gamma_{1}, \gamma_{2}\in\Gamma$: $$\vert \gamma_{1}\wedge\gamma_{2}\vert_{M}=\vert \gamma_{1}\vert_{M}\cap\vert \gamma_{2}\vert_{M}.$$

In order to satisfy the remaining three requirements listed at the beginning of this section, we have to put some restrictions on the models, and in particular on the accessibility relation $R$. We call models that satisfy these constraints *TM models* (Def. IV.7). Intuitively, $R_{\vert \gamma\vert_{M}}(w, w^{\prime})$ should hold if $w^{\prime}$ is a world that is closest to $w$ of those worlds in which $\gamma$ holds. But what do we mean by ‘closest’? If we interpret worlds as policies, then $w^{\prime}$ is the policy that results from *adding* $\gamma$, and *nothing more but* $\gamma$, to $w$. So we have to consider all worlds that are larger than $w$ (since we are adding to $w$) and also satisfy $\gamma$, and of these worlds we take the $\preceq_{M}$-minimal ones (since we are adding nothing more but $\gamma$) (Def. IV.7 (1)).

The other two constraints (Def. IV.7 (2) and IV.7 (3)) ensure that there is a one-to-one correspondence between policies and worlds.

If $(X,\leq)$ is a pre-ordered set ($\leq$ is a reflexive and transitive relation on $X$) and $Y$ a finite subset of $X$, then ${\bf min}_{\leq}(Y)=\{y\in Y\mid\forall y^{\prime}\in Y:\ y^{\prime}\nless y\}$ and ${\bf max}_{\leq}(Y)=\{y\in Y\mid\forall y^{\prime}\in Y:\ y^{\prime}\ngtr y\}$.
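Computationally, these operators are straightforward. The following sketch (the helper names `minimal` and `maximal` are ours, not from the paper) computes them for an arbitrary preorder given as a predicate; $y'$ is strictly below $y$ iff $y'\leq y$ but not $y\leq y'$.

```python
def minimal(le, ys):
    """min_<=(Y): the elements of Y with no element strictly below them."""
    ys = list(ys)
    return {y for y in ys
            if not any(le(z, y) and not le(y, z) for z in ys)}

def maximal(le, ys):
    """max_<=(Y): dually, the elements of Y with nothing strictly above them."""
    return minimal(lambda a, b: le(b, a), ys)

# Example preorder: divisibility on a small set of positive integers.
divides = lambda a, b: b % a == 0
```

For instance, under divisibility on $\{2, 3, 4, 6\}$, the minimal elements are $\{2, 3\}$ and the maximal ones $\{4, 6\}$.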

*Definition IV.7* (Trust management model). A model $M=\langle W, R, V\rangle$ is a *TM model* iff

- 1) $\forall\gamma\in\Gamma,\ x, y\in W.\ R_{\vert\gamma\vert_{M}}(x, y)$ iff $y\in{\bf min}_{\preceq_{M}}\{w\mid w\succeq_{M}x\wedge w\in\vert\gamma\vert_{M}\}$,
- 2) $\forall\gamma\in\Gamma,\ \exists w\in W.\ \gamma\in{\bf max}_{\preceq}\vert w\vert_{M}$, and
- 3) $\forall w\in W,\ \exists\gamma\in\Gamma.\ \gamma\in{\bf max}_{\preceq}\vert w\vert_{M}$.

To gain a better intuition for TM models, it is useful to consider the following, particular TM model: imagine a labeled directed graph with a vertex for each $\gamma\in\Gamma$ (these are the worlds $W$). There is an edge from $\gamma_{1}$ to $\gamma_{2}$, labeled with $\gamma$, whenever $\gamma_{2}=\gamma_{1}\cup\gamma$ (corresponding to the accessibility relation $R_{\vert \gamma\vert }$).
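This canonical construction is small enough to build explicitly for a toy universe. The sketch below (illustrative names only; a clause is a `(head, body)` pair and a policy is a set of clauses) enumerates the worlds for two atoms and checks that following two edges labeled $\gamma_{1}$ and $\gamma_{2}$ in sequence is the same as following the single edge labeled $\gamma_{1}\cup\gamma_{2}$, the graph-level reading of combining credential submissions.

```python
from itertools import chain, combinations

atoms = ("p", "q")
# All clauses head :- body over the two atoms.
clauses = [(h, frozenset(b)) for h in atoms
           for b in chain.from_iterable(combinations(atoms, n) for n in range(3))]
# The worlds of the canonical TM model: one vertex per (small) policy.
worlds = [frozenset(s) for s in
          chain.from_iterable(combinations(clauses, n) for n in range(3))]

def step(world, gamma):
    """Follow the unique edge labeled gamma: submitting gamma yields world U gamma."""
    return world | gamma

g1 = frozenset({("p", frozenset())})        # credential set {p}
g2 = frozenset({("q", frozenset({"p"}))})   # credential set {q :- p}
for w in worlds:
    # Two submissions in sequence coincide with submitting the union.
    assert step(step(w, g1), g2) == step(w, g1 | g2)
```

This is exactly the intuition behind Axiom (Perm) in Section V.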

So a TM model models all possible policies and all possible trust management interactions (submitting a set of credentials $\gamma$ for the duration of a query) with these policies. The following theorem shows that TM models indeed precisely capture the trust management relation $\Vvdash$, and Theorem IV.9 states that the semantics is fully abstract.

(Capturing trust management). Let $M=\langle W, R, V\rangle$ be a TM model, $\gamma\in\Gamma$ and $\varphi\in\Phi$. Then $$\gamma\Vvdash\varphi\ {\rm iff}\ \forall w\in{\bf min}_{\preceq_{M}}\vert\gamma\vert_{M}.\ w\Vdash_{M}\varphi.$$

*Theorem IV.9* (Full abstraction). For all TM models $M$ and $\gamma_{1},\gamma_{2}\in\Gamma$: $$\gamma_{1}\equiv\gamma_{2}\ {\rm iff}\ \vert\gamma_{1}\vert_{M}=\vert\gamma_{2}\vert_{M}.$$

The property that is hardest to satisfy (and to prove) is the requirement that the model theory should support a notion of validity that coincides with judgements of the form $\Vvdash\varphi$, i.e., universal truths about trust management policies. This is formalized in Theorem IV.11.

(Trust management validity). $\varphi$ is *TM-valid* (we write $\Vdash_{\rm TM}\varphi$) iff for all TM models $M=\langle W, R, V\rangle$ and $w\in W$: $w\Vdash_{M}\varphi$.

*Theorem IV.11* (Supporting validity). $$\Vdash_{\rm TM}\varphi\ {\rm iff}\ \Vvdash\varphi.$$

*Example IV.12*. Consider the following (false) statement: “in all policies in which $p\rightarrow q$ holds, $\square_{p}q$ also holds.” By the contrapositive of Theorem IV.11, we can prove that this is not true, i.e., $\not\Vvdash(p\rightarrow q)\rightarrow\square_{p}q$, by identifying a counter-world $w$ in a TM model $M$ such that $w\Vdash_{M}(p\rightarrow q)\wedge\neg\square_{p}q$. By Def. IV.2, this is equivalent to $$w\Vdash_{M}\neg p\wedge\neg\square_{p}q\ \ {\rm or}\ \ w\Vdash_{M}q\wedge\neg\square_{p}q.$$Let $w$ be a $\preceq_{M}$-minimal world in all of $W$. By minimality, $w\Vdash_{M}\gamma$ only if $\gamma$ is universally true. Neither $p$ nor $\square_{p}q$ (assuming $p\neq q$) is universally true, hence $w\Vdash_{M}\neg p$ and $w\Vdash_{M}\neg\square_{p}q$, as required.
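The counter-world argument can also be checked directly against the Datalog reading of policies. In the sketch below (a naive least-fixpoint evaluator with made-up names, not the paper's tooling), the empty policy plays the role of the $\preceq_{M}$-minimal world: $p\rightarrow q$ holds there vacuously, yet submitting the fact $p$ does not make $q$ derivable, so $\square_{p}q$ fails.

```python
def least_model(policy):
    """Least model of a set of propositional Datalog clauses (head, body)."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, body in policy:
            if body <= model and head not in model:
                model.add(head)
                changed = True
    return model

empty = frozenset()                 # the minimal world of the example
m0 = least_model(empty)
assert "p" not in m0                # so p -> q holds vacuously at this world
m1 = least_model(empty | {("p", frozenset())})   # submit the credential p
assert "p" in m1 and "q" not in m1  # ...but q is still not derivable: no box_p q
```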

In this section, we developed an adequate model-theoretic semantics for trust management. We started by interpreting both Datalog clauses and trust management interactions as counterfactuals, taking a generic counterfactual model theory as the basis. We then customized the theory by adding constraints on the models of interest to arrive at TM models. The resulting semantics satisfies all four requirements from Section II, and it provides an intuition of a trust management service as a vertex in a labeled directed graph, where the reachable vertices represent the clause sets that result from combining the service's policy with the credential set (the edge label) submitted to the service.

However, this semantics still does not give us much insight into proving judgements of the form $\Vvdash\varphi$ (or, equivalently, $\Vdash_{\rm TM}\varphi)$. For this purpose, we equip the model theory with a corresponding proof theory in the following section.

SECTION V

In standard modal logic, it is usually straightforward to derive an axiom in the proof theory from each frame condition in the model theory, i.e., a restriction on the accessibility relation $R$. (For example, reflexivity of $R$ corresponds to the axiom $\square\varphi\rightarrow\varphi$.) This constructive method can also be applied to counterfactual multi-modal logic, if the frame conditions are relatively simple [48]. In our case, however, the restriction on $R$ (Def. IV.7 (1)) is too complex to be simply ‘translated’ into an axiom. The axiomatization presented below was actually conceived by guessing the axioms and rules, and adjusting them until the system was provably sound and complete with respect to the model theory.

In the proof system below, let $\varphi, \varphi^{\prime}, \varphi^{\prime \prime}\in\Phi$, $\gamma, \gamma^{\prime}\in\Gamma, p\in$ **At** and $\vec{p}\subseteq$ **At**. The proof system consists of the following axiom schemas:
$$\eqalignno{ &\vdash\varphi\rightarrow\varphi^{\prime}\rightarrow\varphi &\hbox{(C11)}\cr
&\vdash(\varphi\rightarrow\varphi^{\prime}\rightarrow\varphi^{\prime\prime})\rightarrow(\varphi\rightarrow\varphi^{\prime})\rightarrow\varphi\rightarrow\varphi^{\prime\prime}
&\hbox{(C12)}\cr
&\vdash(\neg\varphi\rightarrow\neg\varphi^{\prime})\rightarrow\varphi^{\prime}\rightarrow\varphi
&\hbox{(C13)}\cr
&\vdash\square
_{\gamma}(\varphi\rightarrow\varphi^{\prime})\rightarrow\square
_{\gamma}\varphi\rightarrow\square
_{\gamma}\varphi^{\prime}&\hbox{(K)}\cr
&\vdash\square _{\gamma}\gamma &\hbox{(C1)}\cr
&\vdash\square _{\gamma}\varphi\rightarrow\gamma\rightarrow\varphi &\hbox{(C2)}\cr
&\vdash\square _{(p:-\vec{p})}\varphi\rightarrow({\vec{p}}\rightarrow
p)\rightarrow\varphi &\hbox{(Dlog)}\cr
&\qquad{\rm provided}\ \varphi \ {\rm is} \square -{\rm free}\cr
&\vdash\square _{\gamma}\neg\varphi \longleftrightarrow\neg\square
_{\gamma}\varphi &\hbox{(Fun)}\cr
&\vdash\square _{\gamma\wedge\gamma^{\prime}}
\varphi\longleftrightarrow\square _{\gamma}\square _{\gamma^{\prime}}\varphi &\hbox{(Perm)}}$$

Additionally, there are three proof rules: $$\eqalignno{ &{\bf If}\ \vdash\varphi\ {\bf and}\ \vdash\varphi\rightarrow\varphi^{\prime}\ {\bf then}\ \vdash\varphi^{\prime}. &\hbox{(MP)}\cr &{\bf If}\ \vdash\varphi\ {\bf then}\ \vdash\square_{\gamma}\varphi. &\hbox{(N)}\cr &{\bf If}\ \vdash\gamma\rightarrow\gamma^{\prime}\ {\bf and}\ \varphi\ {\bf is}\ \neg{\rm -free},\ {\bf then}\ \vdash\square_{\gamma^{\prime}}\varphi\rightarrow\square_{\gamma}\varphi. &\hbox{(Mon)}}$$

Axioms (C11)–(C13) and Modus Ponens (MP) are from the Hilbert-style axiomatization of classical propositional logic [47]. It is easy to see that they are sound, irrespective of $R$, since the Boolean operators $\top$, $\wedge$ and $\neg$ are defined classically for $\Vdash_{M}$. Axiom (K) is the multi-modal version of the basic Distribution Axiom that is part of every modal logic ($\square(\varphi\rightarrow\varphi^{\prime})\rightarrow\square\varphi\rightarrow\square\varphi^{\prime}$). Similarly, Rule (N) is the multi-modal version of the basic Necessitation Rule (if $\vdash\varphi$ then $\vdash\square\varphi$).

Axioms (C1) and (C2) are also standard in counterfactual logic [48]. The former is the trivial statement that if $\gamma$ were the case, then $\gamma$ would hold. The latter states that the counterfactual conditional is stronger than material implication.

At first sight, Axiom (Dlog) may look similar to Axiom (C2), but the two are actually mutually independent. In fact, while the latter is standard, Axiom (Dlog) is deeply linked with the intuition that the possible worlds correspond to Datalog policies. Recall that, intuitively, the left hand side means “$\varphi$ would hold in the policy if the credential $p:-\vec{p}$ were submitted”. Now we expand the right hand side of the implication to $$(\vec{p}\wedge\neg p)\vee\varphi.$$So the axiom tells us that the left hand side holds only if it is the case that

- either $\varphi$ holds in the policy anyway, even without submitting $p:-\vec{p}$,
- or the action of submitting the credential must be crucial for making $\varphi$ true, but this is only possible if the conditions $\vec{p}$ of the credential are all satisfied in the policy, and furthermore $p$ does not already hold in the policy (or else the credential could not possibly be crucial).

But the axiom only holds for $\square$-free $\varphi$. To see why, consider the following instance of Axiom (Dlog), ignoring the side condition: $\square_{q:-p}\square_{p}q\rightarrow(p\rightarrow q)\rightarrow\square_{p}q$. The left hand side is an instance of Axiom (C1), since $q:-p$ is just syntactic sugar for $\square_{p}q$, so the formula simplifies to $(p\rightarrow q)\rightarrow\square_{p}q$, which is not TM-valid, as shown in Example IV.12.

The following lemma is a useful bidirectional variant of Ax. (Dlog):

*Lemma V.2*. Let $p, q\in{\bf At}$ and $\vec{p}\subseteq{\bf At}$. Then $$\vdash\square_{(p:-\vec{p})}q\longleftrightarrow q\vee(\neg p\wedge\vec{p}\wedge\square_{p}q).$$
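Since the policies involved are finite, the lemma can be sanity-checked by brute force against the Datalog reading of $\square$. The sketch below (illustrative code, not the paper's tooling) verifies that both sides of the equivalence agree for the credential $p:-r$ over a few hundred small policies; here $\vec{p}=\{r\}$ is evaluated as “$r$ is derivable in the current policy”.

```python
from itertools import chain, combinations

def least_model(policy):
    """Least model of a set of propositional Datalog clauses (head, body)."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, body in policy:
            if body <= model and head not in model:
                model.add(head)
                changed = True
    return model

def box(policy, gamma, atom):
    """box_gamma atom at world `policy`: atom is derivable after submitting gamma."""
    return atom in least_model(frozenset(policy) | frozenset(gamma))

atoms = ("p", "q", "r")
clauses = [(h, frozenset(b)) for h in atoms
           for b in chain.from_iterable(combinations(atoms, n) for n in range(3))]
cred = ("p", frozenset({"r"}))     # the credential p :- r
p_fact = ("p", frozenset())

checked = 0
for w in map(frozenset,
             chain.from_iterable(combinations(clauses, n) for n in range(3))):
    lhs = box(w, {cred}, "q")      # box_{p :- r} q
    m = least_model(w)
    rhs = ("q" in m) or ("p" not in m and "r" in m and box(w, {p_fact}, "q"))
    assert lhs == rhs
    checked += 1
```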

Axiom (Fun) is also remarkable in that it is rather non-standard in modal logic. It is also the reason why it is not useful to define a dual $\lozenge$-operator (i.e., $\lozenge_{\gamma}\varphi=\neg\square_{\gamma}\neg\varphi$) in our logic: $\square$ and $\lozenge$ would be equivalent. The axiom is equivalent to the property that the accessibility relation $R$ in a TM model $M=\langle W, R, V\rangle$ is essentially functional, i.e., for all $w\in W$ and $\gamma\in\Gamma$:

- $\exists w^{\prime}$. $R_{\vert \gamma\vert _{M}}(w, w^{\prime})$, and
- $\forall w_{1}, w_{2}$. $R_{\vert \gamma\vert _{M}}(w, w_{1})\wedge R_{\vert \gamma\vert _{M}}(w, w_{2})\Rightarrow w_{1}\preceq_{M} w_{2}\wedge w_{2}\preceq_{M} w_{1}$.

On the intuitive Datalog level, Axiom (Fun) can easily be seen to be sound, since the statement “$\varphi$ would not hold if $\gamma$ were submitted” is equivalent to “it is not the case that $\varphi$ would hold if $\gamma$ were submitted”.

Axiom (Perm) also corresponds to a property of $R$, namely that it is transitive (that's the ‘if’ direction) and dense (the ‘only if’ direction). It captures the intuition that submitting two credential sets in sequence is equivalent to just submitting their union.

Rule (Mon) expresses a monotonicity property on the subscripts of $\square$, and can be reduced to a monotonicity property of TM models and $\neg$-free $\varphi$: $$\forall w, w^{\prime}\in W.\ w\Vdash_{M}\varphi\wedge w^{\prime}\succeq_{M}w\Rightarrow w^{\prime}\Vdash_{M}\varphi.$$The intuition here is that submitting more or stronger credentials can only make more (positive) facts true. It is easy to see that this does not hold in general for negated statements: suppose $p$ does not hold in a policy (with no submitted credentials); then the negated fact $\neg p$ holds. But $\neg p$ may cease to hold when credentials are submitted, in particular, when $p$ itself is submitted. In other words, even though $p\rightarrow\top$ is valid, $\square_{\top}\neg p\rightarrow\square_{p}\neg p$ is not.
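On the Datalog level, this monotonicity, and its failure for negation, is easy to witness concretely. The sketch below (illustrative names only) shows that submitting a credential can only grow the least model, while the negated fact $\neg p$ is lost as soon as $p$ is submitted.

```python
def least_model(policy):
    """Least model of propositional Datalog clauses (head, frozenset(body))."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, body in policy:
            if body <= model and head not in model:
                model.add(head)
                changed = True
    return model

w = frozenset({("q", frozenset({"p"}))})       # policy: q :- p
before = least_model(w)
after = least_model(w | {("p", frozenset())})  # submit the credential p
assert before <= after                         # positive facts are preserved...
assert "p" not in before and "p" in after      # ...but the negated fact not-p is lost
```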

The main result of this section is that the axiomatization is sound and complete with respect to the model theory (Theorem V.3).

*Theorem V.3* (Soundness and Completeness). $$\Vdash_{\rm TM}\varphi\ {\rm iff}\ \vdash\varphi.$$

The proof of soundness ($\vdash\varphi$ implies $\Vdash_{\rm TM}\varphi$) formalizes the intuitions given above and proceeds, as usual, by structural induction on $\varphi$. The proof of completeness ($\Vdash_{\rm TM}\varphi$ implies $\vdash \varphi$) is less standard, and can be roughly outlined thus:

- 1) We will prove the equivalent statement that if $\varphi$ is consistent (with respect to $\vdash$), then there exists a TM model $M= \langle W, R, V\rangle$ and $w\in W$ such that $w\Vdash_{M}\varphi$.
- 2) From Lemma V.2, it can be shown that every $\varphi$ is equivalent to a formula $\varphi^{\prime}$ that only consists of conjunctions and negations of policies in $\Gamma$ (i.e., one that does not contain vertically nested boxes).
- 3) Based on the property of TM models that every world corresponds to some policy in $\Gamma$, it is then possible to identify $w\in W$ such that $w\Vdash_{M}\varphi^{\prime}$, whenever $M$ is a TM model.
- 4) By soundness, this implies that $w\Vdash_{M}\varphi$. Furthermore, we can show that at least one TM model exists, and hence we arrive at the required existential conclusion.

Together with Theorem IV.11, we obtain $$\Vvdash\varphi \iff \Vdash_{\rm TM}\varphi \iff \vdash\varphi.$$We can thus use the axiomatization to prove universal truths about trust management systems.

We sketch a formal proof of the formula from Example III.4: $$\vdash\neg a\wedge\square_{d}\neg e\wedge\square_{b:-a\wedge d:-c}e\rightarrow c\wedge\square_{d}a.$$

We first show that $d$ is equivalent to $\square_{\top}d$. The direction $\vdash\square_{\top}d\rightarrow d$ follows directly from Axiom (C2). The same axiom also yields $\vdash\square_{\top}\neg d\rightarrow\neg d$, the contrapositive of which, together with Axiom (Fun), gives $\vdash d\rightarrow\square_{\top}d$. Therefore $\vdash d\longleftrightarrow\square_{\top}d$.

Since $\vdash c\rightarrow {\top}$, we have $\vdash\square _{\top}d\rightarrow\square _{c}d$, according to Rule (Mon), and hence equivalently $\vdash d\rightarrow d:-c$. Taking this as the premise of Rule (Mon), we get $\vdash\square _{d:-c}e\rightarrow\square _{d}e$, the contrapositive of which is $\vdash\square _{d}\neg e\rightarrow\square _{d:-c} \neg e$, by Axiom (Fun).

Therefore, the assumption $\square_{d}\neg e$ from the antecedent of the formula implies $\square_{d:-c}\neg e$. Conjoining this with the assumption $\square_{b:-a\wedge d:-c}e$, which is equivalent to $\square_{d:-c}\square_{b:-a}e$ by Axiom (Perm), we get $$\square_{d:-c}(\neg e\wedge\square_{b:-a}e)\eqno{\hbox{(7)}}$$(as it can easily be shown that $\square_{d:-c}$ distributes over $\wedge$).

By Axiom (Dlog), $\vdash\square_{b:-a}e\rightarrow e\vee(a\wedge\neg b)$. Therefore, formula (7) implies $$\square_{d:-c}(a\wedge\neg b),\eqno{\hbox{(8)}}$$since Axiom (K) allows us to apply Modus Ponens under $\square_{d:-c}$. We have thus shown that the antecedent of the original formula implies $\square_{d:-c}a$. Furthermore, as we have shown, $\vdash d\rightarrow d:-c$, and hence by Rule (Mon), $\vdash\square_{d:-c}a\rightarrow\square_{d}a$. Modus Ponens yields one of the consequents of the original formula, $\square_{d}a$.

For the other consequent, $c$, we apply Axiom (Dlog) to formula (8), which yields $(a\wedge\neg b)\vee(c\wedge\neg d)$. Combining this with the antecedent $\neg a$, we can then conclude $c$. ■
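As an independent cross-check of this derivation, the theorem can also be tested semantically by brute force: by soundness it must hold at every world of a TM model, i.e., for every policy. The sketch below (a naive least-fixpoint evaluator with made-up names) verifies the implication over several thousand small policies with single-atom bodies.

```python
from itertools import chain, combinations

def least_model(policy):
    """Least model of propositional Datalog clauses (head, body)."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, body in policy:
            if body <= model and head not in model:
                model.add(head)
                changed = True
    return model

def box(policy, gamma, atom):
    """box_gamma atom at world `policy`: atom holds after submitting gamma."""
    return atom in least_model(frozenset(policy) | frozenset(gamma))

atoms = "abcde"
clauses = [(h, frozenset(b)) for h in atoms
           for b in [()] + [(x,) for x in atoms]]
cred = frozenset({("b", frozenset("a")), ("d", frozenset("c"))})  # b :- a, d :- c
d_fact = frozenset({("d", frozenset())})

hits = 0
for w in map(frozenset,
             chain.from_iterable(combinations(clauses, n) for n in range(4))):
    m = least_model(w)
    antecedent = ("a" not in m and not box(w, d_fact, "e")
                  and box(w, cred, "e"))
    if antecedent:                 # then the consequent c and box_d a must hold
        assert "c" in m and box(w, d_fact, "a")
        hits += 1
```

The antecedent is satisfiable in this family (for instance by the policy $\{e:-b,\ a:-d,\ c\}$), so the check is not vacuous.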

We sketch a formal proof of the probing attack result from Section III-A. For brevity, we introduce abbreviated names for the atoms:
$$\eqalignno{ &as=A.hasConsented(S)\cr &sa=S.canRegister(A)\cr &ab=A.isRegistered(B)\cr &secret=S.isRegistered(B)}$$The statement that the attacker can detect *secret* in the probing attack can then be expressed as
$$\vdash\square_{as}sa\wedge\square_{as:-ab}\neg sa\wedge\square_{as:-ab\wedge ab:-secret}sa\rightarrow secret.$$

Assume the left hand side of the formula that we want to prove. From the previous proof, we have seen that $sa$ is equivalent to $\square_{\top}sa$. Since $\vdash(as:-ab)\rightarrow\top$, we thus have $sa\rightarrow\square_{as:-ab}sa$, by Rule (Mon). Combining the contrapositive of this with the assumption, we get $\neg sa$. From the assumption and Ax. (C2), we get $as\rightarrow sa$, which together with $\neg sa$ gives $\neg as$.

Using Lemma V.2, we can prove that $\square_{as:-ab}sa$ is equivalent to $sa\vee(ab\wedge\neg as\wedge\square_{as}sa)$.

Since the assumption $\square_{as:-ab}\neg sa$ is equivalent to $\neg\square_{as:-ab}sa$ (by Ax. (Fun)), it is therefore also equivalent to $$\neg sa\wedge(\neg ab\vee as\vee\neg\square_{as}sa).$$We have already proved $\neg as$, and $\square_{as}sa$ is in the antecedent. Therefore, we can conclude $\neg ab$.

Now consider $\square_{as:-ab}\neg sa \wedge\square _{as:-ab\wedge ab:-secret}sa$ in the assumption. By Ax. (Fun) and (Perm) and distributivity of □, this is equivalent to $\square _{as:-ab}(\neg sa\wedge\square _{ab:-secret}sa)$. By Ax. (K), we can apply Ax. (Dlog) on the inner box under the outer box to get
$$\square_{as:-ab}(\neg sa\wedge(sa\vee(secret\wedge\neg ab))),$$which implies $\square_{as:-ab}secret$. Again applying Ax. (Dlog) yields $secret\vee(ab\wedge\neg as)$. But since we have proved $\neg ab$ above, we can conclude that *secret* follows from the assumptions. ■

SECTION VI

Hilbert-style axiomatizations are notoriously difficult to use directly for building proofs, and they are also difficult to mechanize directly, because they are not goal-directed. In this section, we describe how a goal formula $\varphi$ can be transformed into an equivalent formula in classical propositional logic that can be verified by a standard SAT solver. We have implemented a tool based on the contents of this section; some uses of the tool are described in Section VII.

Our axiomatization has certain characteristics that enable such a transformation. Firstly, Lemma V.2 shows that $\varphi$ can be transformed into a formula in which all subscripts of boxes are $\square$-free, and Ax. (Fun) and (Perm) allow us to distribute boxes through conjunctions, disjunctions and negations. This forms the basis of a *normalization* transform.

Secondly, for a given $\varphi$, it is sufficient to encode just a finite number of axiom instantiations in classical propositional logic in order to characterize the non-classical properties of □. This process is called *saturation*.

In this section, we use *literal* to mean a (possibly negated) atom, and $\square$*-literal* to mean a (possibly negated) atom with some prefix of boxes. For example, $\square_{\square_{r}p\wedge p}q$ and $p$ are both $\square$-literals ($p$ is logically equivalent to $\square_{\top}p$), whereas $p$ is also a literal but $\square_{q}p$ is not.

The reasoning process is described in more detail next.

Following parsing, the goal formula is simplified through the elimination of subsumed subformulas; e.g., $\square _{a:-b}c\wedge\square _{a}c$ is simplified to $\square _{a:-b}c$. The formula is then *normalized* by computing a negation normal form and distributing all boxes, such that boxes are only applied to literals, and negation is only applied to □ -literals. We also use Ax. (Perm) to collect strings of boxes into a single box. Normalization takes care of Ax. (K), (Fun), and (Perm).
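The normalization step can be sketched as a standard negation-normal-form transform in which, thanks to Ax. (Fun), negation also commutes with boxes. The formula constructors below are made up for illustration, and box subscripts are treated as opaque labels (the actual tool also normalizes them).

```python
def nnf(f):
    """Push negations inward; ("not", g), ("and", g, h), ("or", g, h),
    ("box", label, g) and ("atom", name) are our illustrative constructors."""
    op = f[0]
    if op == "not":
        g = f[1]
        if g[0] == "not":
            return nnf(g[1])                       # double negation
        if g[0] == "and":
            return ("or", nnf(("not", g[1])), nnf(("not", g[2])))
        if g[0] == "or":
            return ("and", nnf(("not", g[1])), nnf(("not", g[2])))
        if g[0] == "box":                          # Ax. (Fun): not box = box not
            return ("box", g[1], nnf(("not", g[2])))
        return f                                   # negated atom: already a literal
    if op in ("and", "or"):
        return (op, nnf(f[1]), nnf(f[2]))
    if op == "box":
        return ("box", f[1], nnf(f[2]))
    return f                                       # atom

phi = ("not", ("box", "g", ("and", ("atom", "p"), ("atom", "q"))))
assert nnf(phi) == ("box", "g", ("or", ("not", ("atom", "p")),
                                       ("not", ("atom", "q"))))
```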

Next, the goal formula is *expanded* by applying Lemma V.2 exhaustively until all subscripts of $\square$-literals are $\square$-free. Expansion is a very productive process: it can cause the goal formula's size to increase exponentially. This step takes care of Ax. (Dlog) and Rule (N).

The resulting formula is negated and added to the *clause set*. The clause set collects formulas which will ultimately be passed to a SAT solver.

Saturation generates propositional formulas that faithfully characterize the □ -literals occurring in the clause set.

- 1) Let $\beta=\square_{\bigwedge_{i=1}^{n}(q_{i}:-\vec{q_{i}})}p$ be a $\square$-literal occurring in the clause set. If $\vdash\bigwedge_{i=1}^{n}(\vec{q_{i}}\rightarrow q_{i})\rightarrow p$ holds (which is checked by the underlying SAT solver), we replace all occurrences of $\beta$ by $\top$. This step is a generalization of Ax. (C1).
- 2) For each □ -literal $\square _{\gamma}p$ (where $\gamma\neq {\top}$) occurring in the clause set, we add the formulas $p\rightarrow\square _{\gamma}p$ and $\square _{\gamma}p\rightarrow \gamma\rightarrow p$.
- 3) For each pair of $\square$-literals $\square_{\gamma_{1}}p,\ \square_{\gamma_{2}}p$ (where $\gamma_{1}\neq\gamma_{2}$) occurring in the clause set, we add the formula $$\square_{\gamma_{1}}\gamma_{2}\wedge\square_{\gamma_{2}}p\rightarrow\square_{\gamma_{1}}p.$$Intuitively, this formula encodes the transitivity of counterfactuals. Steps (2) and (3) together cover Ax. (C2) and Rule (Mon). Since the second step may create new $\square$-literals, the process is repeated until a fixed point is reached.

After saturation completes, all $\square$-literals in the clause set are uniformly substituted by fresh propositional literals. The resulting formulas are then checked by a standard SAT solver. Our implementation offers the choice between using the in-memory API of Z3 and producing output in the DIMACS [24] format used by many SAT solvers such as MiniSAT [25].
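For illustration, the final serialization step is simple once $\square$-literals have been replaced by fresh propositional variables. The sketch below (a hypothetical helper, not Counterdog's actual code) emits a CNF in the DIMACS format: a header line followed by one space-separated, zero-terminated clause per line, with negative integers for negated literals.

```python
def to_dimacs(num_vars, cnf):
    """Serialize a CNF (list of clauses; literals are nonzero ints,
    negative = negated) into DIMACS text for a SAT solver."""
    lines = [f"p cnf {num_vars} {len(cnf)}"]
    lines += [" ".join(map(str, clause)) + " 0" for clause in cnf]
    return "\n".join(lines)

# The negated goal is valid iff the solver reports UNSAT on input like this.
dimacs = to_dimacs(2, [[1, -2], [2], [-1]])
```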

The classical axioms (C11)–(C13) and Rule (MP) are covered by the SAT solver. We have therefore covered all axioms and rules, and thus the goal formula is valid iff the SAT solver reports unsatisfiability (since we negated the goal).

SECTION VII

As an example of how the axiomatization can be used for security analysis, and to compare the performance of our implementation, we conducted a small case study on analyzing probing attacks, based on the benchmark test cases described by Becker and Koleini [8], [7]. Their benchmark was set up to test the performance of their tool (henceforth referred to as BK) for verifying opacity and detectability in probing attacks. BK's algorithm attempts to construct a policy that is observationally equivalent to all probes but makes the fact to be detected false. The fact is opaque if BK manages to construct such a policy, and detectable otherwise. In contrast, Counterdog is a general theorem prover for our logic. By Theorems III.7, IV.11, and V.3, Counterdog can be used to check opacity and detectability by constructing a formula corresponding to a probing attack and then proving it mechanically.

To keep this paper self-contained, we briefly describe the tested scenarios, and refer the reader to [7] for a more detailed explanation.

The compute cluster Clstr under attack has the following policy $\gamma_{\rm Clstr}$:

Here, $x$ and $y$ range over a set of users, and $j$ ranges over a set of compute job identifiers. The first parameter of each predicate should be interpreted as the principal who says, or vouches for, the predicate. The policy stipulates that, according to Clstr, members who own a job can execute it, if Clstr can read the data associated with it according to data center Data. *Clstr* delegates authority over job ownership and membership to trusted third parties (TTP). Data delegates authority over read permissions to job data to data owners. Data also delegates authority over job data ownership to TTPs. Furthermore, both Clstr and Data say that certificate authority CA is a TTP.

In the basic test case (TC1), the attacker Eve possesses four credentials $\gamma_{{\rm Eve}}$:

With her four credentials and the query, Eve can form $2^{4}=16$ *probes* (cf. Def. III.5) of the form $\square_{\gamma}\varphi_{\rm Eve}$, for each $\gamma\subseteq\gamma_{\rm Eve}$. These result in 16 *observations* under $\gamma_{\rm Clstr}$: the observation corresponding to probe $\pi$ is just $\pi$ if $\gamma_{\rm Clstr}\Vvdash\pi$, and otherwise it is $\neg\pi$. The resulting *probing attack* $\varphi_{a}$ under $\gamma_{\rm Clstr}$ is then the conjunction of all 16 observations.
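The probe/observation construction is mechanical. The following sketch uses a deliberately tiny stand-in for the real policies (two credentials instead of four; the clause names are invented, not the benchmark's) and enumerates all $2^{n}$ probes, recording for each whether the query succeeds.

```python
from itertools import chain, combinations

def least_model(policy):
    """Least model of propositional Datalog clauses (head, body)."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, body in policy:
            if body <= model and head not in model:
                model.add(head)
                changed = True
    return model

# Stand-ins for gamma_Clstr and gamma_Eve (not the actual benchmark clauses).
clstr = frozenset({("exec", frozenset({"mem", "own"}))})
eve = [("mem", frozenset()), ("own", frozenset())]

probes = [frozenset(s) for s in
          chain.from_iterable(combinations(eve, n) for n in range(len(eve) + 1))]
# Observation for probe gamma: positive if the query succeeds, else negative.
observations = [(sorted(h for h, _ in g),
                 "exec" in least_model(clstr | g)) for g in probes]
assert len(probes) == 2 ** len(eve)
```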

In TC1, Eve wishes to find out whether Bob is not a member of Clstr; in other words, whether $\neg mem({\rm Clstr},{\rm Bob})$ is *detectable*. By Theorems III.7, IV.11, and V.3, this is equivalent to checking
$$\vdash\varphi_{a}\rightarrow\neg mem({\rm Clstr},{\rm Bob}).$$This is provable, and therefore Eve can detect that Bob is not a member.

In TC2, the atomic clause $mem({\rm Clstr},{\rm Bob})$ is added to $\gamma_{\rm Clstr}$, and the fact to be detected is changed to $mem({\rm Clstr},{\rm Bob})$. The corresponding formula is not provable, and hence $mem({\rm Clstr},{\rm Bob})$ is opaque.

In TC3, based on TC1, three irrelevant atomic clauses $p_{1}, p_{2}, p_{3}$ are added to $\gamma_{\rm Eve}$, increasing the number of probes to $2^{7}=128$. The fact to be detected remains the same, and is indeed detectable.

TC4 was omitted, as it only tests a specific switch in Becker and Koleini's tool which is not relevant in our case.

In TC5, based on TC1, the probe query is changed to $\varphi_{\rm Eve}=canExe({\rm Clstr},{\rm Eve},{\rm Job})\wedge\neg isBanned({\rm Clstr},{\rm Eve})$. The fact remains detectable.

In TC6, based on TC5, the probe set is manually pruned to a minimal set that is sufficient to prove detectability. This reduces the number of probes from 16 down to only 3.

To get comparable performance numbers, we ran BK and Counterdog on these test cases. For our experiments we used an Intel Xeon E5630 2.53 GHz with 6 GB RAM. The table below summarizes the timings for all test cases, comparing BK with Counterdog.

Counterdog outperforms BK in all test cases. The performance gain is most notable in the more expensive test cases. To test whether this is generally the case, we performed a test series based on TC3, adding extra irrelevant clauses to the probe credential set $\gamma_{\rm Eve}$ one by one. This doubles the number of probes (and thus the size of the formula to be proved) at each step.

Figure 1 compares the performance of both tools for this test series. Counterdog's performance gain over BK increases exponentially with each added credential in $\gamma_{\rm Eve}$. A probe credential set of size 14 (resulting in 16,384 probes) was the maximum that BK could handle before running out of memory, taking 408 s (compared to 7 s with Counterdog). We tested Counterdog with up to 18 credentials (resulting in 262,144 probes), which took 179 s. A simple extrapolation suggests that Counterdog could check a probing attack based on TC3 extended to $10^{8}$ probes within less than three hours.

As we have seen, the axiomatization of the semantics, together with our implementation, enables us to mechanically prove universal truths about trust management systems, that is, statements that are implicitly quantified over all policies: a theorem $\vdash\varphi$ is equivalent to $\Vvdash\varphi$, by Theorems IV.11 and V.3, which can be interpreted as “all policies $\gamma$ satisfy the property $\varphi$”.

But we want to go further than that. In this subsection, we show that we can use our implementation to automate proofs of *meta-theorems* about trust management. These are statements containing universally quantified meta-variables ranging over atoms, conjunctions of atoms, $\Gamma$ or $\Phi$. Our axiom schemas and Lemma V.2 are examples of such meta-theorems, with meta-variables $p,\vec{p},\gamma, \varphi$ etc.

In classical logic, as well as in all *normal* modal logics, proving such meta-theorems is trivial: if a propositional formula $f$ is a theorem, then substituting an arbitrary formula $f^{\prime}$ for all occurrences of an atom $p$ in $f$ will also yield a theorem. In fact, the axiomatization of such logics often explicitly includes a uniform substitution rule, and a finite number of axioms (rather than axiom schemas, as in our case).

Our logic breaks the uniform substitution property, as some of the axioms and rules have syntactic side conditions (e.g. Ax. (Dlog), Rule (Mon)). It is thus not a normal modal logic in the strict sense, but this does not pose any problems, and is perhaps even to be expected, as many belief-revision and other non-monotonic logics also break uniform substitution [41].

The only downside is that proving meta-theorems is non-trivial, and manual proofs generally require structural induction over the quantified meta-variables. It is therefore not obvious whether proving meta-theorems can be automated easily. After all, the range of the quantifiers is huge, and even infinite if **At** is infinite. We answer this question in the affirmative by presenting a number of proof-theoretical theorems on the provability of meta-theorems (meta-meta-theorems, so to speak) that show that it is sufficient to consider only a small number of base case instantiations of meta-variables.

We will use *contexts* to formalize the notion of meta-theorem. A context is a $\Phi$-formula with a ‘hole’ denoted by $[\cdot]$. We define three different kinds of contexts in Fig. 2: $\Phi$*-hole*, $\Gamma$*-hole*, and *At-hole contexts*. Intuitively, the holes in a $\Phi$-hole ($\Gamma$-hole, At-hole, respectively) context can be filled with any $\varphi\in\Phi$ ($\gamma\in\Gamma$, $\vec{p}\subseteq_{\rm fin}{\bf At}$, respectively) to form a well-formed $\Phi$-formula.

If ${\cal A}$ is a $\Phi$-hole ($\Gamma$-hole, At-hole, respectively) context and $\alpha\in\Phi$ ($\alpha\in\Gamma$, $\alpha\subseteq_{\rm fin}{\bf At}$, respectively), we write ${\cal A}[\alpha]$ to denote the $\Phi$-formula resulting from replacing all holes in ${\cal A}$ by $\alpha$.

It is easy to see that every $\Phi$-hole context is also a $\Gamma$-hole context, and every $\Gamma$-hole context is also an At-hole context. Each of the three types of contexts completely covers all of $\Phi$; in particular, the case $\square_{\gamma\wedge[\cdot]}{\cal A}$ (for $\Gamma$-hole and At-hole contexts) is covered because $\square_{\gamma\wedge[\cdot]}{\cal A}$ is equivalent to $\square_{\gamma}\square_{[\cdot]}{\cal A}$.

*Theorem VII.1*. Let ${\cal E}$ be an At-hole context, and let $p$ be an atom that does not occur in ${\cal E}$. Then $$\vdash{\cal E}[p]\ {\rm iff}\ \forall\vec{p}\subseteq_{\rm fin}{\bf At}.\ \vdash{\cal E}[\vec{p}].$$

*Theorem VII.2*. Let ${\cal D}$ be a $\Gamma$-hole context, and let $p$ and $q$ be atoms that do not occur in ${\cal D}$. Then $$\vdash{\cal D}[p]\wedge{\cal D}[\square_{q}p]\ {\rm iff}\ \forall\gamma\in\Gamma.\ \vdash{\cal D}[\gamma].$$

*Theorem VII.3*. Let ${\cal C}$ be a $\Phi$-hole context, and let $p, q$ and $r$ be atoms that do not occur in ${\cal C}$. Let $S=\{p,\ \square_{q}p,\ \square_{q:-r}p\}$. Then $$(\forall\varphi_{s}\in S.\ \vdash{\cal C}[\varphi_{s}]\wedge{\cal C}[\neg\varphi_{s}])\ \ {\rm iff}\ \ \forall\varphi\in\Phi.\ \vdash{\cal C}[\varphi].$$

These theorems enable us to mechanically prove meta-theorems about trust management. Essentially, they reduce a meta-level quantified validity judgement to a small number of concrete instances. There is only one case to consider for universally quantified atoms or conjunctions of atoms (Theorem VII.1), two cases for meta-variables ranging over policies (Theorem VII.2), and six cases (three of them negated) for meta-variables ranging over arbitrary formulas (Theorem VII.3).

Consider, for instance, the meta-statement $$\forall\varphi\in\Phi.\ \vdash\varphi\longleftrightarrow\square_{\top}\varphi.$$

More formally and equivalently, we could write $$\forall\varphi\in\Phi.\ \vdash{\cal C}[\varphi],\ {\rm where}\ {\cal C}=[\cdot]\longleftrightarrow\square_{\top}[\cdot].$$This can then be mechanically proved by proving just the six basic instances from Theorem VII.3.

It is easy to extend this method further. Meta-theorems with multiple meta-variables can also be mechanically proved with this approach by combining the theorems. For example, $$\forall\varphi,\varphi^{\prime}\in\Phi,\ \gamma\in\Gamma.\ \vdash\square_{\gamma}(\varphi\wedge\varphi^{\prime})\longleftrightarrow(\square_{\gamma}\varphi\wedge\square_{\gamma}\varphi^{\prime})$$reduces to $6\times 6\times 2=72$ propositional cases: six each for $\varphi$ and $\varphi^{\prime}$, and two for $\gamma$.
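Counting these instances is a one-liner. In the sketch below the strings are merely schematic names for the base instantiations of Theorems VII.2 and VII.3 (in an actual check, each meta-variable would get its own fresh atoms).

```python
# Base instantiations for a Phi-meta-variable (Theorem VII.3: three shapes,
# each also negated) and for a Gamma-meta-variable (Theorem VII.2).
phi_cases = ["p", "~p", "box_q p", "~box_q p", "box_{q:-r} p", "~box_{q:-r} p"]
gamma_cases = ["p", "box_q p"]

# All case combinations for a meta-theorem over phi, phi', and gamma.
cases = [(f1, f2, g)
         for f1 in phi_cases for f2 in phi_cases for g in gamma_cases]
assert len(cases) == 6 * 6 * 2 == 72
```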

We can also prove meta-theorems with side conditions. If a meta-variable $\varphi$ ranges over negation-free formulas from $\Phi$, it is sufficient to prove the three positive instances from Theorem VII.3. Similarly, for meta-variables ranging over $\square$-free $\varphi$ (as in Ax. (Dlog)), the number of cases reduces to two ($\varphi\mapsto p$ and $\varphi\mapsto\neg p$).

In the following, we discuss a number of meta-theorems that we verified using the tool, based on Theorems VII.1–VII.3 (in addition to proving them manually). These meta-theorems provide interesting general insights into Datalog-based trust management systems. Moreover, they have also been essential in our (manual) proofs of soundness and completeness (Theorem V.3).
$$\forall\varphi, \varphi^{\prime}\in\Phi, \gamma\in\Gamma.\ \vdash\square_{\gamma}(\varphi\wedge\varphi^{\prime})\longleftrightarrow(\square_{\gamma}\varphi\wedge\square_{\gamma}\varphi^{\prime})$$As in standard modal logic, the $\square$-operator distributes over conjunction. The trust management interpretation is equally obvious: submitting credential set $\gamma$ to a policy results in a new policy that satisfies the property $\varphi\wedge\varphi^{\prime}$ iff the new policy satisfies both $\varphi$ and $\varphi^{\prime}$. Proving this theorem took 574 ms.
$$\forall\varphi, \varphi^{\prime}\in\Phi, \gamma\in\Gamma.\ \vdash\square_{\gamma}(\varphi\vee\varphi^{\prime})\longleftrightarrow(\square_{\gamma}\varphi\vee\square_{\gamma}\varphi^{\prime})$$In most modal logics, $\square$ does *not* distribute over $\vee$. This theorem holds only because the accessibility relation is functional, or equivalently, because the result of combining a credential set with a policy is always uniquely defined. But again, the theorem is obviously true in the trust management interpretation: submitting credential set $\gamma$ to a policy results in a new policy that satisfies the property $\varphi\vee\varphi^{\prime}$ iff the new policy satisfies either $\varphi$ or $\varphi^{\prime}$. (577 ms)
$$\forall\varphi\in\Phi.\ \vdash\varphi\longleftrightarrow \square_{\top}\varphi$$Submitting an empty credential set is equivalent to not submitting anything at all. (3 ms)
$$\forall\varphi\in\Phi^{+}, \gamma\in\Gamma.\ \vdash\varphi\rightarrow\square_{\gamma}\varphi,$$where $\Phi^{+}$ denotes the set of $\neg$-free formulas in $\Phi$. This can be interpreted as a monotonicity property of the accessibility relation, and also of credential submissions: positive properties are retained after credential submissions. (95 ms)
$$\forall\varphi\in\Phi, \gamma, \gamma^{\prime}\in\Gamma.\ \vdash\square_{\gamma}\square_{\gamma^{\prime}}\varphi\longleftrightarrow\square_{\gamma^{\prime}}\square_{\gamma}\varphi$$This property, corresponding to a commutative accessibility relation, is also unusual in multi-modal logics. A simple corollary is that all permutations of arbitrary strings of boxes are equivalent, or that the order in which credentials are submitted is irrelevant. (3059 ms)
$$\forall\varphi\in\Phi, p\in {\bf At}, \vec{p}\subseteq_{\rm fin} {\bf At}.\ \vdash\vec{p}\rightarrow(\square_{p}\varphi\longleftrightarrow\square_{p:-\vec{p}}\varphi)$$If $\vec{p}$ holds in a policy, then submitting the atomic credential $p$ results in a policy that is indistinguishable from the policy resulting from submitting the conditional credential $p:-\vec{p}$. (30 ms)
$$\forall\varphi\in\Phi^{+}, \gamma_{1}, \gamma_{2}\in\Gamma.\ \vdash\square_{\gamma_{1}}\gamma_{2}\wedge\square_{\gamma_{2}}\varphi\rightarrow\square_{\gamma_{1}}\varphi$$This theorem asserts that credential-based derivations can be applied transitively. More precisely: if, after submitting credential set $\gamma_{1}$, the credential set $\gamma_{2}$ would be derivable from the combined policy, and if submitting $\gamma_{2}$ directly would be sufficient for making property $\varphi$ true, then $\gamma_{1}$ alone would also be sufficient. This only holds for negation-free $\varphi$. A simple counter-example can be constructed by instantiating $\gamma_{1}=p$, $\gamma_{2}={\top}$, and $\varphi=\neg p$. (1078 ms)
$$\forall\varphi\in\Phi, \gamma\in\Gamma.\ \vdash\gamma\rightarrow(\varphi\longleftrightarrow\square_{\gamma}\varphi)$$If a policy contains the clauses $\gamma$, then submitting $\gamma$ as a credential set is equivalent to not submitting anything at all. This holds even for properties $\varphi$ containing negation. (157 ms)

In Section VI, we gave some informal justifications as to why the reduction to propositional logic, which our implementation is based on, is not only sound (which is relatively easy to prove manually) but also complete with respect to the axiomatization. We did not prove completeness entirely by hand, but instead used the implementation itself to assist in the proof, thereby letting the implementation effectively prove its own completeness!

The main reason why this is possible is the ability to prove meta-theorems mechanically (Theorems VII.1–VII.3). With this feature in place, we mechanically verified all axiom *schemas*. What this proves is that the reduction rules, as implemented, cover all axioms.

It remained to show that all rules are covered as well. Recall that there are three rules: Modus Ponens, (Mon), and (N). Modus Ponens is built into the underlying SAT solver. We manually proved that Rule (Mon) can be replaced by the axiom schema $\vdash\square_{\gamma_{1}}\gamma_{2}\wedge\square_{\gamma_{2}}\varphi\rightarrow\square_{\gamma_{1}}\varphi$, which we mechanically verified.

To prove coverage of Rule (N), we perform a rule induction over $\vdash\varphi$ in order to conclude that $\square_{\gamma}\varphi$ is provable by the implementation. All the base cases, i.e., the cases where $\varphi$ is an instance of an axiom, were proven mechanically, again as meta-theorems (for example, for Axiom (C1), we prove $\forall\gamma, \gamma^{\prime}\in\Gamma.\ \vdash\square_{\gamma^{\prime}}(\square_{\gamma}\gamma)$). The two remaining cases, where $\vdash\varphi$ is obtained by a rule application, were easy to prove manually.

Together, these results prove that every theorem $\vdash\varphi$ is also provable by the implementation; in other words, the implementation is complete. The correctness of the proof rests on a few assumptions: the soundness of the implementation itself, the correctness of the underlying SAT solver, and the correctness of our manual proofs. We are confident about the implementation's soundness, as the reduction rules it is based on are sound, and it has been extensively tested. To achieve an even higher level of confidence in the semi-mechanically proven completeness result, one could mechanically verify all computer-generated subproofs, since an automated proof *verifier* would be much smaller and simpler than our proof generator.

SECTION VIII

Blaze et al. coined the term ‘trust management’ in their seminal paper [12], referring to a set of principles for managing security policies, security credentials and trust relationships in a decentralized system. In their proposed paradigm, decentralization is facilitated by making policies depend on submitted credentials and by enabling local control over trust relationships. Policies, credentials and trust relationships should be expressed in a common language, thereby separating policy from the application. Early examples of trust management languages include PolicyMaker [12], KeyNote [11], and SPKI/SDSI [45], [26].

Li et al. [40], [39] argue that authorization in decentralized systems should depend on delegatable attributes rather than identity, and call systems that support such policies and credentials attribute-based access control (ABAC) systems. In essence, their ABAC paradigm is a refinement of trust management that makes the requirements on the expressiveness of credentials and policies more explicit: principals may assert parameterized attributes about other principals; authority over attributes may be delegated to other principals (that possess some specified attribute) via trust relationship credentials; and attributes may be inferred from other attributes. Their proposed policy language, RT, satisfies all these requirements. Like its predecessor, Delegation Logic (DL) [37], RT can be translated into Datalog. (A more expressive variant of RT, ${\rm RT}^{c}$ [38], can be translated into Datalog with constraints [33].)

Datalog has also been chosen as the basis of many other trust management languages. Examples include a language by Bonatti and Samarati [13], [14], SD3 [34], Binder [23], Cassandra [10], [9], a language by Wang et al. [53], one by Giorgini et al. [29], [30] and SecPAL [5], [6].

Apart from their relation to Datalog, what most of these languages have in common is that attributes are qualified by a principal who “says” them and vouches for their truth. In a credential, this principal coincides with the credential's issuer. For example, in Binder, the fact (or condition) that principal $A$ is a student, according to authority $C$, could be expressed as $C$. *isStudent* $(A)$; similarly, in SecPAL, one would write $C$ **says** $A$ *isStudent*. This qualifier does not extend Datalog's expressiveness, as it is easy to translate a qualified atom $C.p(\vec{e})$ into a normal Datalog atom $p(C,\vec{e})$.

The says operator can be traced back to an authorization logic by Abadi et al. (ABLP) [2], [35]. Even though it predates the paper by Blaze et al., ABLP could be seen as a trust management language. It introduced the says operator, but in contrast to the simpler Datalog-based languages, ABLP and related languages such as ICL [28], CCD [1] and DKAL [31], [32] treat the **says** (or **said**, in the case of DKAL) construct as a proper unary operator in the logic, which cannot be simply translated into an extra predicate parameter. Our semantics therefore does not cover these languages.

The Datalog-based languages inherit their semantics from Datalog. The most common way to present Datalog's semantics is as the minimal fixed point of the immediate consequence operator ${\bf T}_{\gamma}$, parameterized on a Datalog program $\gamma$ [20]. The result is the set of all atoms $p$ that are true in $\gamma$. Our inductive definition of $\gamma\Vvdash p$ coincides with this semantics: $\gamma\Vvdash p$ iff $p\in {\bf T}_{\gamma}^{\omega}(\emptyset)$. A model-theoretic semantics can be given by taking the minimal Herbrand model (i.e., the intersection of all Herbrand models) of $\gamma$, and a proof-theoretic semantics can be defined using resolution strategies [3]. All three flavours of the standard semantics are equivalent, but, as we have shown in Section IV, they are not adequate for modeling Datalog-based trust management policies that are combined with varying sets of credentials.
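For a propositional program, this fixed-point semantics is a few lines of code. The following sketch (the clause representation as `(head, frozenset(body))` pairs is our own) computes ${\bf T}_{\gamma}^{\omega}(\emptyset)$ by iterating the immediate consequence operator from the empty set:

```python
def immediate_consequence(program, facts):
    """One application of T_gamma: add every head whose body is satisfied."""
    return facts | {head for head, body in program if body <= facts}

def true_atoms(program):
    """Iterate T_gamma from the empty set to its least fixed point;
    the result is the set of atoms true in the program."""
    facts = set()
    while True:
        step = immediate_consequence(program, facts)
        if step == facts:
            return facts
        facts = step

# gamma = { p.   r :- p.   q :- p, r. }
gamma = {("p", frozenset()),
         ("r", frozenset({"p"})),
         ("q", frozenset({"p", "r"}))}
print(sorted(true_atoms(gamma)))  # ['p', 'q', 'r']
```

Because the number of atoms is finite, the iteration is guaranteed to terminate; the result is exactly the minimal Herbrand model of the program.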

Abadi et al. [2] define ABLP axiomatically and then give it a model-theoretic semantics based on Kripke structures. However, the axiomatization is not complete with respect to the semantics. Further work along these lines has been done by Garg and Abadi [28], who present sound and complete translations from a minimal logic with a says operator called ICL, and various extensions of it, into the classical modal logic S4. Similarly, Gurevich and Neeman [32] provide a Kripke semantics for DKAL2, the successor of DKAL [31]. These modal semantics are straightforward compared to the one presented here, but this is because they have a completely different focus, namely providing a modal interpretation of the says (or said) operator. As we have argued above, this operator is not very interesting in the context of the more practical, Datalog-based, languages (at least from a foundational point of view). The focus of our semantics is to give a modal interpretation of the turnstile operator : - in Datalog policies and of credential submissions in a trust management context.

It has been noted before that the standard Datalog semantics does not enjoy compositionality and full abstraction relative to program union. Gaifman and Shapiro [27] propose a semantics for logic programs that is compositional, fully abstract and preserves congruence with respect to program union. These properties are achieved by interpreting logic program clauses as implicational formulas, as a result of which all dependencies between atoms are preserved. However, this semantics does not give us the desired behavior (see Section IV). The problem, in essence, stems from the fact that material implication is inadequate as an interpretation of conditional if-then statements [46], and thus also of Datalog clauses (in our context) and credential submissions: if $\neg p$ holds in a policy, it follows that $p\rightarrow q$ also holds, for every $q$. However, it should *not* follow that the clause $q:-p$ is contained in the policy; and similarly, it is *not* justified to infer that $q$ would hold if credential $p$ were submitted and combined with the policy.
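The gap can be seen on a deliberately trivial two-atom example (the evaluation below is our own sketch, not the paper's semantics): in the empty policy $\neg p$ holds, so the material implication $p\rightarrow q$ is vacuously true, yet combining the policy with the credential $p$ still does not make $q$ derivable.

```python
# Atoms derivable from the empty policy: none, so "not p" holds there.
policy = set()
p_true, q_true = "p" in policy, "q" in policy
print((not p_true) or q_true)  # True: p -> q holds as a material implication

# Submitting the credential p means evaluating the union of policy and
# credential: the combined program {p.} derives p, but still not q.
combined = policy | {"p"}
print("q" in combined)  # False: the counterfactual reading rejects q
```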

One of the main claims of this paper is that clauses and credential submissions ought to be modeled as counterfactual statements. The complexity of our semantics, then, stems from the fact that simple, truth-functional Boolean operators cannot offer an adequate account of counterfactuals. Stalnaker [49] and Lewis [36] were the first to propose a Kripke semantics for counterfactuals, based on a similarity ordering on worlds: essentially, “if $p$ were true, then $q$ would be true” holds in a world $w$ if, among the worlds in which $p$ is true, those most similar to $w$ also make $q$ true. Our semantics is based on the same basic framework. Our definition of what “most similar” means is novel, as is our counterfactual interpretation of Datalog. Therefore, our work could also be seen as a novel semantics for Datalog in general. However, the action of dynamically injecting varying sets of clauses into a Datalog program is a characteristic that is rather specific to trust management, hence it is more appropriate to frame our semantics specifically as a trust management semantics.
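A toy rendering of the Stalnaker-Lewis clause over finite worlds (worlds as sets of true atoms, with the size of the symmetric difference standing in for the similarity ordering; both modeling choices are ours, not the paper's notion of similarity):

```python
def would(w, p, q, worlds):
    """Evaluate "if p were true, q would be true" at world w: among the
    worlds satisfying p, those most similar to w must all satisfy q.
    Similarity here is (inversely) the symmetric difference with w."""
    p_worlds = [v for v in worlds if p in v]
    if not p_worlds:
        return True  # vacuously true when p holds in no world
    closest = min(len(v ^ w) for v in p_worlds)
    return all(q in v for v in p_worlds if len(v ^ w) == closest)

worlds = [frozenset(s) for s in (set(), {"p"}, {"q"}, {"p", "q"})]
w = frozenset()  # a world where neither p nor q is true
print(would(w, "p", "q", worlds))  # False, although p -> q is true at w
```

Note the contrast with material implication: at $w$, $p\rightarrow q$ is vacuously true, but the counterfactual fails because the closest $p$-world, $\{p\}$, does not satisfy $q$.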

Much work has been done on axiomatizations of multimodal counterfactual logic. A good overview can be found in a paper by Ryan and Schobbens [48]. Their paper also contains a comprehensive listing of axioms proposed in the literature together with the corresponding frame conditions.

At first sight, hypothetical Datalog [15], [16] bears some resemblance to our logic. Hypothetical Datalog allows clauses such as $p:-(q:{\rm add}\ r)$, meaning “$p$ is derivable provided that, if $r$ were added to the rule base, $q$ would hold”. In our logic, this would correspond to $(q:-r)\rightarrow p$. But our logic is significantly more expressive: hypothetical Datalog cannot express statements that are hypothetical at the top level and/or hypothetically add non-atomic clauses, for instance “$p$ would hold if the conditional (credential) ‘if $r$ were true then $q$ would hold’ were added”. In our logic, this statement can be expressed as $\square_{q:-r}p$. Moreover, the work on hypothetical Datalog (and similar works on hypothetical reasoning) is only concerned with query evaluation against concrete rule bases, and not with the harder problem of universal validity.

We identified the security analysis of probing attacks as one practical area on which the present work is likely to have an impact. The problem of probing attacks has gained attention only rather recently. Gurevich and Neeman were the first to identify this general vulnerability of logic-based trust management systems [31]. In [4], probing attacks are framed in terms of the information flow property opacity [42], [18] and its negation, detectability. Becker and Koleini developed a tool for checking detectability of confidential facts in Datalog policies, based on constructing a counter-policy, i.e., one that conforms to the given probes but makes the confidential fact false. The fact is detectable if and only if no such policy can be found. We use and extend their benchmark to compare the performance of our logic-based approach in Section VII-A.
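The counter-policy idea can be sketched by brute force over a finite clause space (the clause and probe representations, and all names below, are our own simplifications; the actual tool is far more sophisticated): the secret is detectable iff every policy that conforms to the observed probe outcomes derives it.

```python
from itertools import chain, combinations

def consequences(program):
    """Naive least-fixpoint evaluation of a propositional Datalog program,
    given as an iterable of (head, frozenset(body)) clauses."""
    program, facts, changed = list(program), set(), True
    while changed:
        changed = False
        for head, body in program:
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True
    return facts

def detectable(secret, probes, clause_space):
    """secret: an atom; probes: (credential_clauses, query, observed) triples;
    clause_space: the finite set of clauses candidate policies draw from."""
    policies = chain.from_iterable(
        combinations(clause_space, n) for n in range(len(clause_space) + 1))
    for policy in map(set, policies):
        conforms = all((query in consequences(policy | creds)) == observed
                       for creds, query, observed in probes)
        if conforms and secret not in consequences(policy):
            return False  # a conforming counter-policy refutes detectability
    return True
```

For instance, if the only observation is that submitting the credential $p$ makes $q$ derivable, then the policy $\{q:-p\}$ is a conforming counter-policy for a secret $s$, so $s$ is not detectable from that probe alone.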

SECTION IX

Logics and semantics have long played an important, and successful, role in security research, especially in the area of cryptographic protocols [50]. (A prominent example has been the struggle to find an adequate semantics for BAN logic [19]; see, e.g., [21], [17], [51], [52].) The area of trust management, however, has hitherto not been investigated from a foundational, semantics-based point of view.

Evaluating a Datalog policy is a straightforward task, and so is taking the union of two sets of Datalog clauses. At first sight, then, it may come as an unwelcome surprise that our semantics, and the axiomatization, of Datalog-based trust management is so complex. Indeed, if one were only interested in the results of evaluating access queries against a concrete policy under a concrete set of submitted credentials, then a formal semantics would be unnecessary. But if one is interested in *reasoning about* the behavior of trust management systems, it is necessary to formulate universal truths that are quantified over all policies. Proving such statements is remarkably hard, even though the base language is so simple. As we have seen, neither the standard Datalog semantics nor the Kripke semantics for ABLP and related languages properly captures Datalog-based trust management. The situation has actually been worse than that of BAN logic,^{3} since, prior to the present work, not even a sound and complete proof system existed, let alone a formal semantics.

Our formal semantics is defined by the notion of TM models and the TM validity judgement $\Vdash_{\rm TM}$, and the axiomatization of TM validity is given by the proof system $\vdash$. Theorem V.3 shows that the proof system is sound and complete with respect to the semantics. So what role does the relation $\Vvdash$, which we introduced in Section II, play? We need it because a *semantics*, despite the term's etymology, does not by itself convey the *meaning* of the logic. As Read [44] puts it,

[f]ormal semantics cannot itself be a theory of meaning. It cannot explain the meaning of the terms in the logic, for it merely provides a mapping of the syntax into another formalism, which itself stands in need of interpretation.

Of course, the relation $\Vvdash$ is also just “another formalism”, but it is one that is much closer to the natural language description of what a trust management system does, and can therefore more easily be accepted as “obviously” correct. Without it (and Theorem IV.11 providing the glue), there would be a big gap between the intuitive meaning of the language and its formalization.

What the formal semantics does provide is a number of alternative, less obvious, interpretations of a trust management system. TM models are abstract, purely mathematical objects that are independent of the language's syntax. They capture precisely (and only) the essential aspects of a trust management system. The easiest interpretation of a TM model is a graph in which two policies are connected when one is the result of submitting a set of credentials to the other.

A deeper alternative interpretation is that trust management logic is a counterfactual logic - a logic that avoids the paradoxes of material implication. Both policy clauses as well as statements about credential submissions are counterfactual, rather than implicational, statements. They state what *would* be the case if something else *were* the case.

As Ryan and Schobbens have noted, counterfactual statements can also be interpreted as hypothetical minimal updates to a knowledge base [48]. Under this interpretation, a credential submission $\square _{\gamma}\varphi$ would be equivalent to saying that $\varphi$ holds in a policy after it has been minimally updated with credential set $\gamma$. The restrictions on the accessibility relation (Def. IV.7) can then be seen as a precise specification of what constitutes a minimal update to a policy.

Hence, from a foundational point of view, our semantics provides new insights into the nature of trust management. From a more practical point of view, it led us to an axiomatization that can be mechanized. We showed how our implementation could be put to good use by applying it to the analysis of probing attacks. It is the first automated tool that can feasibly check real-world probing attacks of realistic size, comprising millions of probes. But our implementation is a general automated theorem prover for our language, the expressiveness of which goes far beyond that needed for probing attacks. In particular, we used the implementation to prove general meta-theorems about trust management - some of which are intuitively obvious (but not necessarily easy to prove), and some of which are decidedly non-trivial (such as Lemma V.2 or lemmas that help prove the implementation's own completeness).

Our logic is decidable, since every formula is equivalent to a (potentially much larger) propositional formula. However, the complexity of the logic remains an open question. We also leave the development of a first-order version of the logic to future work: in this version, atoms would be predicates with constant and variable parameters, and clauses would be implicitly closed under universal quantification.

Alessandra Russo is funded in part through the US Army Research Laboratory and the UK Ministry of Defence under Agreement Number W911NF-06-3-0001. We thank Mark Ryan and Stephen Muggleton for fruitful discussions, and Christoph Wintersteiger for his support with Z3. We are also grateful for the valuable comments from the anonymous reviewers.

^{1}In practice, first-order predicates are used as atoms instead of propositional letters, but if the domain is finite, as is usually the case, the first-order case reduces to the propositional one. We choose the latter presentation for simplicity.

^{2}Z3 [22] is an SMT solver, but we only use its SAT solving capabilities.

^{3}Cohen and Dam succinctly described the BAN situation thus [21]: “While a number of semantics have been proposed for BAN and BAN-like logics, none of them capture accurately the intended meaning of the epistemic modality in BAN […]. This situation is unsatisfactory. Without a semantics, it is unclear what is established by a derivation in the proof system of BAN: A proof system is merely a definition, and as such it needs further justification.”
