
SECTION I

INTRODUCTION

Trust management [12] is an access control paradigm for decentralized systems that has attracted much attention over the last 15 years. Research so far has focused on concrete architectures and policy languages for trust management, and on policy analysis. This paper attempts to shed light on some of the more foundational aspects of trust management.

A. Trust Management

Trust management can be succinctly characterized by two distinctive features:

  • 1) The access policy of the relying party is specified in a high-level policy language (e.g. [11], [45], [26], [37], [34], [39], [38], [23], [10], [9], [31], [6]).
  • 2) Access decisions do not depend solely on the local policy, but also on digitally signed credentials that are submitted to the relying party together with the access request. Access is granted only if a proof of compliance can be constructed, showing that the requested permission $Q$ is provable from the policy $P$ combined with the set of credentials $C$.

The first feature effectively decouples the policy from the implementation of the enforcement mechanism, improving maintainability and flexibility in a context of quickly evolving access control requirements.

The second feature is necessitated by the fact that, in large decentralized systems, the relying party generally does not know the identity of the users requesting access in advance. Therefore, authorization has to be based on attributes rather than identity. Authority over these attributes may be delegated to trusted third parties, who may then issue credentials that assert these attributes or re-delegate authority to yet another party. The credentials that are used in trust management may thus be quite expressive, containing attributes, constraints and conditions, and delegation assertions. For this reason, the language for specifying credential assertions is typically the same as the one for specifying the local policy.

B. Trust Management Semantics

Given a derivability relation $\Vvdash$ between sets of assertions and permissions, the basic mechanics of a trust management system can be specified as follows: a user's request $Q$ is granted iff $P\cup C\Vvdash Q$, where $P$ is the relying party's local policy and $C$ is the set of supporting credentials submitted by the user. All policy languages mentioned above can be specified in terms of such a derivability relation $\Vvdash$; in the common case of Datalog-based policy languages, the relation $\Vvdash$ is simply the standard Datalog entailment relation [20].

Hence we arrive at a natural notion of observational equivalence on policies that captures the essential aspects of trust management: two policies $P$ and $P'$ are equivalent iff for all sets $C$ of credentials and all requests $Q$,
$$P\cup C\Vvdash Q \iff P'\cup C\Vvdash Q.$$

The fundamental question we are concerned with in this paper is whether an adequate model-theoretic semantics of trust management exists, i.e., one that matches this notion of observational equivalence. Neither the standard model-theoretic Datalog semantics based on minimal Herbrand models (for Datalog-based languages) nor the Kripke semantics for authorization logics related to ABLP [2] are adequate in this sense. While these semantics are sufficient for determining which permissions are granted by a fixed policy $P$ and a fixed set $C$ of supporting credentials, they do not provide any insight into questions that are particular to trust management, such as:

  • (a) Given the semantics of a policy $P$, which permissions $Q$ are granted when $P$ is combined with credential set $C$?
  • (b) Given the semantics of two policies $P_{1}$ and $P_{2}$, what is the semantics of their composition $P_{1}\cup P_{2}$?
  • (c) What can an external user infer about an unknown policy merely by successively submitting requests together with varying sets of credentials and observing the relying party's responses?

C. Technical Contributions

We present the first formal trust management semantics that accurately captures the action of dynamically submitting varying sets of credentials. It is compositional with respect to policy union and provides full abstraction [43] with respect to observational equivalence. These two properties together enable it to answer the questions (a) and (b) above.

Furthermore, we develop an axiomatization that is sound and complete with respect to the model-theoretic semantics, and provides inferentially complete object-level reasoning about a trust management system's observables. For example, judgements such as “if a policy grants access to $Q_{1}$ when combined with set $C_{1}$, and denies access to $Q_{2}$ when combined with set $C_{2}$, then it must grant access to $Q_{3}$ when combined with $C_{3}$” can be expressed as a formula in the logic, and be proved (or disproved) within it. It is this expressive power that enables the logic to directly answer questions such as (c) above, and thus to analyze probing attacks, a recently identified class of attacks in which the attacker infers confidential information by submitting credentials and observing the trust management system's reactions [31], [4], [8]. Perhaps even more strikingly, it is expressive enough to prove general meta-theorems about trust management systems, e.g. “if a policy satisfies some negation-free property, then this property will still hold when the policy is combined with an arbitrary credential set”.

A language-independent semantics would be too abstract to provide any interesting insights. Our trust management semantics is specific to Datalog, and thus applicable to the wide range of Datalog-based policy languages. Datalog has arguably been the most popular logical basis for languages in this context; examples include Delegation Logic [37], SD3 [34], RT [39], [38], Binder [23], Cassandra [10], [9], and SecPAL [6].

The remainder of the paper is structured as follows. We introduce in Section II a simple language for reasoning about Datalog-based trust management policies, defined by a relation $\Vvdash$, that captures the intuitive operational meaning of policies and credential submissions. This relation itself is straightforward, but, as we argue in Section III, universal truths (those that hold for all policies) are both useful and highly non-trivial. This justifies the need for a logic with a formal semantics whose notion of validity coincides with the intuitive notion of universal truths in trust management systems (Section IV). The corresponding axiomatization is presented in Section V. Section VI describes our implementation of a theorem prover for the logic. Applications and performance results are discussed in Section VII. We review related work in Section VIII and conclude with Section IX. The proofs of our theorems are lengthy; we relegate them to a technical report [?]. Our implementation is available at http://research.microsoft.com/counterdog.

SECTION II

A SIMPLE TRUST MANAGEMENT LANGUAGE

We fix a countable set ${\bf At}$ of propositional variables called atoms. A Datalog clause is either an atom $p$ or of the form $p:-p_{1},\ldots,p_{n}$, where $p,p_{1},\ldots,p_{n}\in {\bf At}$. A policy $\gamma$ is a finite set of clauses. We write $\Gamma$ to denote the set of all policies.

Atoms correspond to atomic facts that are relevant to access control, e.g. “Alice can execute run.exe” or “Bob is a part time student” or “the system is in state Red”. From the point of view of the Datalog engine, the atoms have no inherent meaning beyond the logical dependencies specified within the policy (and the submitted credentials). It is the responsibility of the reference monitor, which acts as an interface between requesters and resources, to query the policy in a meaningful way. For instance, if Alice attempts to execute run.exe, the reference monitor would check if the corresponding atom CanExec(Alice, run.exe) is derivable from the policy in union with Alice's submitted credentials.

To specify when an atomic query $p\in {\bf At}$ is derivable from a policy $\gamma$, we introduce the relation symbol $\Vvdash$:
$$\gamma \Vvdash p\ \text{ iff }\ p\in\gamma\ \text{ or }\ \exists\vec{p}\subseteq_{\rm fin} {\bf At}:(p:-\vec{p})\in\gamma\wedge\forall p'\in\vec{p}.\ \gamma \Vvdash p'.\tag{1}$$
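
For concreteness, the following sketch shows one way to compute relation (1) for propositional Datalog by least-fixpoint iteration; the encoding of clauses as (head, body) pairs is our own illustration, not part of the formal development.

```python
# A minimal sketch of relation (1) for propositional Datalog.
# A clause p :- p1, ..., pn is encoded as the pair (p, frozenset({p1, ..., pn}));
# an atomic clause p is encoded as (p, frozenset()).

def derives(policy, atom):
    """Return True iff policy ||- atom, by least-fixpoint iteration."""
    facts = set()
    changed = True
    while changed:
        changed = False
        for head, body in policy:
            if head not in facts and body <= facts:
                facts.add(head)
                changed = True
    return atom in facts

# Example: the policy {p :- q; q} derives p, but not r.
policy = {("p", frozenset({"q"})), ("q", frozenset())}
assert derives(policy, "p") and not derives(policy, "r")
```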

We can straightforwardly extend $\Vvdash$ to Boolean compound formulas $\varphi$, and the trivially true query:
$$\begin{array}{l}
\gamma \Vvdash \top.\\
\gamma \Vvdash\neg\varphi\ \text{ iff }\ \gamma \not\Vvdash \varphi.\\
\gamma \Vvdash\varphi \wedge \varphi'\ \text{ iff }\ \gamma \Vvdash\varphi\ \text{ and }\ \gamma \Vvdash \varphi'.
\end{array}\tag{2}$$

The relation $\gamma \Vvdash \varphi$ may be read as “$\varphi$ holds in $\gamma$”.

It is the negated case where Datalog differs from classical logic: in the latter, $\neg p$ is entailed by a set of formulas $\gamma$ only if $p$ is false in all models of $\gamma$. In Datalog, on the other hand, only the minimal model of $\gamma$ is considered. This fits in well with the decentralized security model, where knowledge is generally incomplete, and thus the absence of information should lead to fewer permissions.

The purpose of our language is not just to specify concrete policies, but to speak and reason about policy behaviors in a trust management context. In particular, recall that the outcome of queries is not just dependent on the service's policy alone, but also on the submitted credentials, which are also Datalog clauses. To express statements about such interactions, we introduce the notation $\square_{\gamma}\varphi$, which informally means “if the set of credentials $\gamma$ were submitted to the policy, then $\varphi$ would be true”. The policy is evaluated in union with the credentials, so we define
$$\gamma \Vvdash \square_{\gamma'} \varphi \ \text{ iff } \ \gamma \cup \gamma'\Vvdash \varphi.\tag{3}$$

The full syntax of formulas in our trust management reasoning language is thus summarized by the following grammar:
$$\varphi ::= \top \mid p \mid \neg\varphi \mid \varphi\wedge\varphi \mid \square_{\gamma}\varphi$$
We write $\Phi$ to denote the set of all formulas.

As usual, we define $\varphi\vee\varphi'$ as $\neg(\neg\varphi\wedge\neg\varphi')$, $\varphi\rightarrow\varphi'$ as $\neg\varphi\vee\varphi'$, and $\varphi \longleftrightarrow \varphi'$ as $(\varphi\rightarrow\varphi')\wedge(\varphi'\rightarrow\varphi)$. The unary operators $\square$ and $\neg$ bind more tightly than the binary ones, and $\wedge$ and $\vee$ more tightly than $\rightarrow$ and $\longleftrightarrow$. Implication $(\rightarrow)$ is right-associative, so we write $\varphi_{1}\rightarrow \varphi_{2}\rightarrow \varphi_{3}$ for $\varphi_{1}\rightarrow (\varphi_{2}\rightarrow\varphi_{3})$.

Example II.1

Let $\gamma_{0}$ be the Datalog policy $\{p:-q,r;\ p:-s;\ q:-p,t;\ q:-u\}$ (we use the semicolon as separator in clause sets, to avoid ambiguity with the comma).

  • 1) Without supporting credentials, no atom holds in $\gamma_{0}$: $$\gamma_{0}\Vvdash \neg v,\ \text{ for all }\ v\in {\bf At}.$$
  • 2) If $u$ and $r$ were submitted as supporting credentials, then $p$ would hold in $\gamma_{0}$: $$\gamma_{0}\Vvdash \square_{\{u;\ r\}}p.$$
  • 3) If credential $s$ were submitted, and then $t$ were submitted, then $q$ would hold in $\gamma_{0}$: $$\gamma_{0}\Vvdash\square_{\{s\}}\square_{\{t\}}q.$$ This is, of course, equivalent to submitting both at the same time: $\gamma_{0}\Vvdash\square_{\{s;t\}}q$.
  • 4) Submitted credentials may include non-atomic clauses: $$\gamma_{0}\Vvdash\square_{\{s:-q;\ u\}}p.$$
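
The claims of this example can be checked mechanically. The following self-contained sketch (our own illustration, using an ad-hoc tuple encoding of formulas) extends the fixpoint evaluator to compound formulas according to (2) and (3), and verifies all four items for $\gamma_0$.

```python
# A self-contained sketch of the relation ||- on full formulas (2)-(3).
# Formulas are nested tuples: ('atom', p), ('not', f), ('and', f, g),
# ('box', credentials, f), or the constant 'TOP'. Clauses are
# (head, frozenset_of_body_atoms) pairs; the encoding is only illustrative.

def derives(policy, atom):
    facts, changed = set(), True
    while changed:
        changed = False
        for head, body in policy:
            if head not in facts and body <= facts:
                facts.add(head)
                changed = True
    return atom in facts

def holds(policy, f):
    if f == 'TOP':
        return True
    op = f[0]
    if op == 'atom':
        return derives(policy, f[1])
    if op == 'not':
        return not holds(policy, f[1])
    if op == 'and':
        return holds(policy, f[1]) and holds(policy, f[2])
    if op == 'box':                      # (3): evaluate in the union
        return holds(policy | f[1], f[2])
    raise ValueError(f)

def clause(head, *body):
    return (head, frozenset(body))

# gamma_0 from Example II.1
g0 = {clause('p', 'q', 'r'), clause('p', 's'), clause('q', 'p', 't'), clause('q', 'u')}

assert not holds(g0, ('atom', 'p'))                                               # II.1(1)
assert holds(g0, ('box', {clause('u'), clause('r')}, ('atom', 'p')))              # II.1(2)
assert holds(g0, ('box', {clause('s')}, ('box', {clause('t')}, ('atom', 'q'))))   # II.1(3)
assert holds(g0, ('box', {clause('s', 'q'), clause('u')}, ('atom', 'p')))         # II.1(4)
```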

When are two policies (observationally) equivalent? Intuitively, they are equivalent if they both make the same set of statements true, under every set of submitted credentials. This notion can be formalized using the standard Datalog containment relation $\preceq$, as follows:

Definition II.2

(Containment, equivalence). Let $\gamma_{1}, \gamma_{2}\in\Gamma$. Then $\gamma_{1}$ is contained in $\gamma_{2}$ ($\gamma_{1}\preceq\gamma_{2}$) iff for all finite $\vec{p}\subseteq {\bf At}$ and $p\in {\bf At}$:
$$\gamma_{1}\cup\vec{p}\Vvdash p\Rightarrow\gamma_{2}\cup\vec{p}\Vvdash p.$$
Two policies $\gamma_{1}$ and $\gamma_{2}$ are equivalent ($\gamma_{1}\equiv\gamma_{2}$) iff $\gamma_{1}\preceq\gamma_{2}$ and $\gamma_{2}\preceq\gamma_{1}$.

This definition may seem a bit narrow at first, but the following proposition shows that it actually coincides with the intuitive notion that exactly the same set of formulas (including $\square$-formulas!) holds in two equivalent policies.

Proposition II.3

Let $\gamma_{1}, \gamma_{2}\in\Gamma$. Then
$$\gamma_{1}\equiv\gamma_{2}\ \text{ iff }\ \forall\varphi\in\Phi.\ \gamma_{1}\Vvdash \varphi\Leftrightarrow\gamma_{2}\Vvdash\varphi.$$

Example II.4

  1. $\emptyset\preceq\gamma$, for all $\gamma\in\Gamma$.
  2. $\{a\}\preceq\{a;b\}\preceq\{a;b;c\}$.
  3. $\{a:-b,c\}\preceq\{a:-b\}\preceq\{a\}$.
  4. $\{a:-d;\ d:-b\}\equiv\{a:-b,c;\ a:-d;\ d:-b\}$.

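Containment of small propositional policies can be checked by brute force. The following sketch (our own illustration) verifies item 4 of Example II.4, restricting credential sets to atoms occurring in either policy; a fresh atom can neither trigger a clause body nor become derivable unless it is itself submitted, so this restriction does not hide counterexamples here.

```python
# Brute-force check of containment (Def. II.2) for small propositional policies.
from itertools import chain, combinations

def derives(policy, atom):
    facts, changed = set(), True
    while changed:
        changed = False
        for head, body in policy:
            if head not in facts and body <= facts:
                facts.add(head)
                changed = True
    return atom in facts

def atoms(policy):
    return set(h for h, _ in policy) | set().union(*(b for _, b in policy))

def contained(g1, g2):
    universe = atoms(g1) | atoms(g2)
    subsets = chain.from_iterable(combinations(universe, n) for n in range(len(universe) + 1))
    for creds in subsets:
        extra = {(a, frozenset()) for a in creds}
        for p in universe:
            if derives(g1 | extra, p) and not derives(g2 | extra, p):
                return False
    return True

c = lambda head, *body: (head, frozenset(body))
g1 = {c('a', 'd'), c('d', 'b')}
g2 = {c('a', 'b', 'c'), c('a', 'd'), c('d', 'b')}
assert contained(g1, g2) and contained(g2, g1)   # Example II.4(4)
```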
SECTION III

UNIVERSAL TRUTHS

The relation $\Vvdash$ from Section II is a straightforward specification of what a policy engine in a trust management system does. It is merely the standard Datalog evaluation relation extended with the $\square$-operator for expressing the action of submitting supporting credentials. The relation is easy to evaluate (for a given $\gamma$ and $\varphi$), and it directly reflects the intuition of the operational workings of a trust management system. So why should we bother developing a formal semantics that, as we shall see, is much more complex? There are three compelling reasons:

  • A model-theoretic semantics lets us interpret and manipulate policies as mathematical objects in a syntax-independent way. It also provides additional insights into, and intuitions about, trust management systems.
  • To prove that a formula is not a theorem, it is often easier to construct a counter-model (or in our case, a counter-world) than to work directly in the proof theory.
  • The relation $\Vvdash$ actually does not even provide a proof theory for formulas $\varphi$: it is of no help in answering the more interesting (but much harder) question of whether $\varphi$ is valid, i.e., whether it holds in all policies $\gamma$. A formal semantics is the first step towards a corresponding proof theory.

The first two reasons apply to the question of the benefits of a model-theoretic semantics for any logic. The third point is perhaps the most important from a practical perspective: in policy analysis, we are not mainly interested in the consequences of concrete policies and concrete sets of submitted credentials, but in universal truths $\varphi$ that hold in all policies (or in all policies that satisfy some given properties).

Definition III.1

We write $\Vvdash \varphi$ iff $\varphi$ holds in all policies, i.e., $\forall\gamma\in\Gamma.\ \gamma \Vvdash \varphi$.

The following examples illustrate that the reasoning techniques required in proving universal truths $\varphi$ are beyond those directly provided by the definition of $\Vvdash$.

Example III.2

If $p$ is true in some policy when credential $q:-r$ is submitted, then $p$ would also be true in the same policy if credential $q$ were submitted: $$\Vvdash\square_{\{q:-r\}}p\rightarrow\square_{\{q\}}p$$ Intuitively, $q$ is “more informative” than $q:-r$ (more formally, $\{q:-r\}\preceq\{q\}$), and providing more information can only lead to more (positive) truths, as Datalog is monotonic.

Example III.3

If submitting $a$ and $b$ individually is not sufficient for making $c$ hold in some policy, but submitting both of them together is sufficient, then $a$ cannot possibly hold in the policy: $$\Vvdash \neg \square_{\{a\}}c\wedge\neg \square_{\{b\}}c\wedge\square_{\{a;\ b\}}c\rightarrow\neg a$$ For suppose $a$ were true in the policy. Then submitting both $a$ and $b$ would be equivalent to submitting just $b$, but this contradicts the observation that submitting solely $b$ does not make $c$ true.

Example III.4

If $a$ does not hold in some policy, and submitting $d$ is not sufficient for making $e$ hold, but submitting both credentials $b:-a$ and $d:-c$ is sufficient, then $c$ must hold in that policy, and furthermore, $a$ would hold if credential $d$ were submitted: $$\Vvdash \neg a\wedge\square_{\{d\}}\neg e\wedge\square_{\{b:-a;\ d:-c\}}e\rightarrow c\wedge\square_{\{d\}}a.$$ This small example is already too complex to explain succinctly by informal arguments, but it illustrates that reasoning about universal truths is far from trivial. We later present a formal proof of this statement in Example V.4.
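
Universal truths quantify over all policies, so they cannot be established by evaluating $\Vvdash$ on any single policy; they can, however, be sanity-checked by sampling. The following sketch (our own illustration, not a proof) tests the formulas of Examples III.3 and III.4 against randomly generated policies.

```python
# Random sanity check (not a proof) of the universal truths in Examples III.3
# and III.4, using a fixpoint evaluator for relation (1).
import random
from itertools import combinations

def derives(policy, atom):
    facts, changed = set(), True
    while changed:
        changed = False
        for head, body in policy:
            if head not in facts and body <= facts:
                facts.add(head)
                changed = True
    return atom in facts

ATOMS = ['a', 'b', 'c', 'd', 'e']
CLAUSES = [(h, frozenset(b)) for h in ATOMS
           for n in range(3) for b in combinations(ATOMS, n)]

random.seed(0)
for _ in range(10000):
    g = frozenset(random.sample(CLAUSES, random.randint(0, 6)))
    # Example III.3:  ~[]_{a}c /\ ~[]_{b}c /\ []_{a;b}c  ->  ~a
    if (not derives(g | {('a', frozenset())}, 'c')
            and not derives(g | {('b', frozenset())}, 'c')
            and derives(g | {('a', frozenset()), ('b', frozenset())}, 'c')):
        assert not derives(g, 'a')
    # Example III.4:  ~a /\ []_{d}~e /\ []_{b:-a; d:-c}e  ->  c /\ []_{d}a
    if (not derives(g, 'a')
            and not derives(g | {('d', frozenset())}, 'e')
            and derives(g | {('b', frozenset({'a'})), ('d', frozenset({'c'}))}, 'e')):
        assert derives(g, 'c') and derives(g | {('d', frozenset())}, 'a')
```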

A. Probing Attacks

There is a class of attacks on trust management systems called probing attacks [31], [4], [8], in which the attacker gains knowledge of secrets about the policy by submitting a series of access requests together with sets of supporting credentials, and by observing the system's reactions. Checking if a probing attack allows the attacker to infer a secret can be very complex, but it turns out that we can express probing attacks succinctly and directly as universal truths in our language.

Here is a simple (and naïve) example of a probing attack. A service $S$ has a policy $\gamma$ that includes the publicly readable rule $$S.canRegister(x):-x.hasConsented(S).\tag{4}$$ Informally, this should mean “$S$ says that $x$ can register with the service if $x$ says (or has issued a credential saying) that he or she consents to $S$'s terms and conditions”. The service also exposes the query $S.canRegister(x)$ to any user $x$.

Suppose the user (and attacker) $A$ self-issues a conditional credential $$A.hasConsented(S):-A.isRegistered(B),\tag{5}$$ which informally means “$A$ says that $A$ consents to $S$'s terms and conditions, if $A$ says that $B$ is registered”. $A$ then submits this credential together with the query $S.canRegister(A)$, and observes that the answer is ‘no’. From this single observation, she learns that neither $A.hasConsented(S)$ nor $A.isRegistered(B)$ holds in $\gamma$, or else the query would have yielded the answer “yes”. This is not very interesting so far, as she has only learnt about the falsity of statements made by herself.

But suppose she can also issue delegation credentials of the form $A.p:-D.p$, where $D$ is some other principal. Such credentials are usually used to express delegation of authority; for example, to delegate authority over who is a student to university $U$, $A$ would issue the credential $A.isStudent(x):-U.isStudent(x)$. But here $A$ abuses this mechanism by issuing the delegation credential $$A.isRegistered(B):-S.isRegistered(B).\tag{6}$$ Now she submits this credential together with the first conditional credential, and evaluates the same query. By observing the service's reaction to this second probe, and combining it with her previous observation, she learns whether $B$ is registered (according to $S$!) or not: the service's answer is “yes” iff $\gamma \Vvdash S.isRegistered(B)$. She has thus detected a fact in $\gamma$ that had nothing to do with the original query, and may well be confidential. Moreover, it is generally not possible to protect against probing attacks by simple syntactic input sanitization or by enforcing strict non-interference without crippling the intended policy (see [4] for details).

We now show how this attack can be expressed as a universal truth. Let $c_{1}$ and $c_{2}$ be the credentials (5) and (6), respectively. $A$'s knowledge about the public clause (4) in the policy translates into $$\varphi_{1}=\square_{\{A.hasConsented(S)\}}S.canRegister(A).$$ Her first observation is translated into $$\varphi_{2}=\square_{\{c_{1}\}}\neg S.canRegister(A),$$ and the second observation into $$\varphi_{3}=\square_{\{c_{1},c_{2}\}} S.canRegister(A)\quad\text{or}\quad \varphi_{3}'=\square_{\{c_{1},c_{2}\}}\neg S.canRegister(A),$$ depending on the service's reaction. Then the following holds: $$\Vvdash\varphi_{1}\wedge\varphi_{2}\wedge\varphi_{3}\rightarrow S.isRegistered(B)$$ $$\Vvdash\varphi_{1}\wedge\varphi_{2}\wedge\varphi_{3}'\rightarrow\neg S.isRegistered(B)$$ We will later present a logic that can prove such statements, and thus can also be used to reason about probing attacks (see Example V.5).
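
To make the two probes concrete, the following sketch simulates them against a propositional encoding of the scenario, using the atom abbreviations of Example V.5; the two concrete service policies are our own hypothetical instantiations, one in which $B$ is registered and one in which it is not.

```python
# Simulating the two probes of the naive probing attack. Atoms follow the
# abbreviations of Example V.5:
#   as_ = A.hasConsented(S), sa = S.canRegister(A),
#   ab  = A.isRegistered(B), secret = S.isRegistered(B).

def derives(policy, atom):
    facts, changed = set(), True
    while changed:
        changed = False
        for head, body in policy:
            if head not in facts and body <= facts:
                facts.add(head)
                changed = True
    return atom in facts

rule4 = ('sa', frozenset({'as_'}))        # (4) S.canRegister(A) :- A.hasConsented(S)
c1 = ('as_', frozenset({'ab'}))           # (5) A.hasConsented(S) :- A.isRegistered(B)
c2 = ('ab', frozenset({'secret'}))        # (6) A.isRegistered(B) :- S.isRegistered(B)

for secret_holds in (True, False):
    gamma = {rule4} | ({('secret', frozenset())} if secret_holds else set())
    probe1 = derives(gamma | {c1}, 'sa')          # first probe: submit c1, query sa
    probe2 = derives(gamma | {c1, c2}, 'sa')      # second probe: submit c1 and c2
    # Given the first answer is 'no', the second answer reveals the secret.
    assert probe1 is False
    assert probe2 == secret_holds
```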

Note that Examples III.3 and III.4 can also be interpreted as probing attacks. For instance, in Example III.4, let us assume that $e$ is the only query publicly exposed by the service, and the attacker initially only knows that $a$ does not hold in the service's policy. The attacker possesses three authenticated credentials: $d$, $b:-a$ and $d:-c$. By submitting first $d$ together with the query $e$, and after that $\{b:-a;\ d:-c\}$ together with the same query, and by observing the service's reactions to these two probes, the attacker detects (provided she is sufficiently clever) that $c\wedge\square_{\{d\}}a$ holds in the policy. Depending on the circumstances, this may constitute a breach of secrecy.

We can succinctly define the notions of probes, probing attack, detectability and opacity from [4], [8] in our language.

Definition III.5

A probe $\pi$ is a formula of the form $\square_{\gamma}\psi$, where $\gamma\in\Gamma$ is called the probe credential set and $\psi$ is a $\square$-free formula from $\Phi$ called the probe query.

An observation of a probe $\pi$ under a policy $\gamma_{0}$ is $\pi$ if $\gamma_{0}\Vvdash\pi$, and $\neg\pi$ otherwise.

A probing attack on $\gamma_{0}$ consisting of probes $\{\pi_{1}, \ldots, \pi_{n}\}$ is the conjunction of the observations of $\pi_{i}\in\{\pi_{1}, \ldots, \pi_{n}\}$ under $\gamma_{0}$.

Clearly, by the above definition, if $\varphi$ is a probing attack on $\gamma_{0}$, then $\gamma_{0}\Vvdash\varphi$. But there may be other policies $\gamma$ that also have the property that $\varphi$ holds in them. In the absence of additional knowledge, the attacker cannot distinguish between $\gamma_{0}$ and any such $\gamma$. To put it positively, the attacker learns from the probing attack $\varphi$ precisely that $\gamma_{0}$ is in the equivalence class of policies in which $\varphi$ holds. We denote this equivalence class induced by the probing attack $\varphi$ by $\vert\varphi\vert =\{\gamma\mid \gamma \Vvdash \varphi\}$.

Now if in all these policies some property $\varphi'$ holds, then the attacker knows with absolute certainty that $\varphi'$ holds in $\gamma_{0}$ in particular, in which case we say that $\varphi'$ is detectable. Conversely, if there exists some policy within $\vert\varphi\vert$ in which $\varphi'$ does not hold, the attacker cannot be certain that $\varphi'$ holds in $\gamma_{0}$, in which case we say that $\varphi'$ is opaque.

Definition III.6

(Detectability, opacity). A formula $\varphi'\in\Phi$ is detectable in a probing attack $\varphi$ on a policy $\gamma_{0}$ iff $$\forall\gamma\in\vert\varphi\vert.\ \gamma \Vvdash\varphi'.$$ A formula $\varphi'$ is opaque in a probing attack $\varphi$ iff it is not detectable in $\varphi$, or equivalently, $$\exists\gamma\in\vert\varphi\vert.\ \gamma \not\Vvdash \varphi'.$$

Theorem III.7

(Probing attacks). A formula $\varphi'$ is detectable in a probing attack $\varphi$ iff $\Vvdash\varphi\rightarrow\varphi'$.

This theorem again underlines the importance of being able to reason about universal truths.

SECTION IV

SEMANTICS

The model-theoretic semantics we are looking for has to satisfy four requirements:

  • 1) Capturing trust management: given $\varphi$ and the semantics of $\gamma$, it is possible to check if $\gamma \Vvdash \varphi$.
  • 2) Supporting a notion of validity: $\varphi$ is valid (in the model theory) iff $\Vvdash\varphi$.
  • 3) Full abstraction [43]: two policies are equivalent $(\equiv)$ iff their respective semantics are equal.
  • 4) Compositionality: the semantics of $\gamma_{1}\cup\gamma_{2}$ can be computed from the individual semantics of $\gamma_{1}$ and $\gamma_{2}$.

A. Naïve Approaches

We first consider some simple approaches to developing a formal semantics that may immediately come to mind, and show why they fail.

The standard model-theoretic interpretation of a set of Datalog clauses is its minimal Herbrand model, i.e., the set of atoms that hold in it. But in this approach, the policy $\gamma_{0}$ from Example II.1 would have the same semantics as the empty policy $\emptyset$, namely the empty model, even though the two policies are clearly not equivalent (Def. II.2). Hence such a semantics would not be fully abstract. This semantics is not compositional either: from the semantics of $\{p:-q\}$ (which is again empty) and of $\{q\}$, we cannot construct the semantics of their union. Therefore, this semantics is clearly unsuitable in a trust management context, where it is common to temporarily extend the clause set with a set of credentials. In fact, this semantics fails on all four of our requirements.

We could also attempt to interpret a Datalog clause $p:-p_{1},\ldots,p_{n}$ as an implication $p_{1}\wedge\ldots\wedge p_{n}\rightarrow p$ in classical (or intuitionistic) logic, and a policy $\gamma$ as the conjunction of its clauses: $\llbracket\gamma\rrbracket=\bigwedge_{c\in\gamma}\llbracket c\rrbracket$. As shown by Gaifman and Shapiro [27], this semantics would indeed be both compositional and fully abstract. However, this interpretation does not correctly capture the trust management relation $\Vvdash$, as we show now. First of all, we would need to translate $\square$-formulas into logic. The obvious way of doing this would be to interpret $\square_{\gamma}\varphi$ as the implication $\llbracket\gamma\rrbracket \rightarrow \llbracket\varphi\rrbracket$. Then, for instance, we have $\{p:-q\}\Vvdash\square_{q}p$, and correspondingly also $\llbracket\{p:-q\}\rrbracket \models \llbracket\square_{q}p\rrbracket$, since $\llbracket\{p:-q\}\rrbracket=\llbracket\square_{q}p\rrbracket=q\rightarrow p$. Thus we might be led to conjecture $$\gamma\Vvdash\varphi \,{\buildrel ? \over \iff}\, \llbracket\gamma\rrbracket \models \llbracket\varphi\rrbracket.$$

Unfortunately, this correspondence does not hold in general. Consider the formula $\varphi=\neg q\wedge\square_{q}p$. From this we can conclude that $\llbracket\varphi\rrbracket =\neg q\wedge(q\rightarrow p)$. But $\{p:-q\}\Vvdash\varphi$, whereas $\llbracket\{p:-q\}\rrbracket \nvDash \llbracket\varphi\rrbracket$. We could try to fix this by only considering the minimal model of the semantics, since ${\rm minMod}(\llbracket\{p:-q\}\rrbracket)\models\neg q$. But we can break this again: $\emptyset\not\Vvdash\varphi$, whereas ${\rm minMod}(\llbracket\emptyset\rrbracket)\models \llbracket\varphi\rrbracket$.

B. A Counterfactual Kripke Semantics

The crucial observation that leads to an adequate semantics is that both Datalog clauses and the trust management specific $\square$-actions are counterfactual, rather than implicational, in nature. For instance, $p:-\vec{p}$ can be interpreted as the counterfactual “if $\vec{p}$ were added to the policy, then $p$ would hold”. Similarly, $\square_{\gamma}\varphi$ can be read as “if $\gamma$ were added to the policy, then $\varphi$ would hold”. (Note that the counterfactual conditional “if A were true then B would hold” is strictly stronger than the material implication “A $\rightarrow$ B”, which vacuously holds whenever A is not true.)

Therefore, we can unify the notations and write $\square_{\wedge\vec{p}}\,p$ instead of $p:-\vec{p}$. Moreover, instead of writing a policy $\gamma$ as a set, we can just as well write it as a conjunction of clauses. We can thus rewrite the syntax for policies and formulas from Section II in the following, equivalent, form:
$$\begin{array}{ll}
{\rm Policies} & \gamma ::= \top \mid p \mid \square_{\wedge\vec{p}}\,p \mid \gamma\wedge\gamma\\
{\rm Formulas} & \varphi ::= \gamma \mid \neg\varphi \mid \varphi\wedge\varphi \mid \square_{\gamma}\varphi
\end{array}$$
As before, we write $\Gamma$ and $\Phi$ to denote the set of all policies and formulas, respectively. The relation $\Vvdash$ is also defined as before, with the obvious adaptations to the new syntax.

Notation IV.1

Henceforth, we treat $\vec{\varphi}$ as syntactic sugar for $\wedge\vec{\varphi}$, and $p:-\vec{p}$ for $\square_{\vec{p}}\,p$.

Interpreting $\square$-formulas as counterfactuals, we can now give them a multi-modal Kripke semantics in the spirit of Lewis and Stalnaker [36], [49]: the counterfactual $\square_{\gamma}\varphi$ holds in a possible world $w$ if $\varphi$ holds in those $\gamma$-satisfying worlds $w'$ that are closest to $w$. We will express the closeness relation using a ternary accessibility relation $R$, and later apply rather strong conditions on $R$ in order to make it match the intended trust management context.

Definition IV.2

(Model, entailment). A model $M$ is a triple $\langle W, R, V\rangle$, where $W$ is a set, $R \subseteq \wp(W)\times W\times W$, and $V:{\bf At}\rightarrow\wp(W)$.

Given a model $M$, we inductively define the model-theoretic entailment relation $\Vdash_{M}\subseteq W\times\Phi$ as follows. For all $w\in W$:
$$\begin{array}{l}
w\Vdash_{M}\top\\
w\Vdash_{M}p\ \text{ iff }\ w\in V(p)\\
w\Vdash_{M}\neg\varphi\ \text{ iff }\ w\nVdash_{M}\varphi\\
w\Vdash_{M}\varphi_{1}\wedge\varphi_{2}\ \text{ iff }\ w\Vdash_{M}\varphi_{1}\ \text{ and }\ w\Vdash_{M}\varphi_{2}\\
w\Vdash_{M}\square_{\gamma}\varphi\ \text{ iff }\ \forall w'.\ R_{\vert\gamma\vert_{M}}(w, w')\Rightarrow w'\Vdash_{M}\varphi,
\end{array}$$
where $\vert\gamma\vert_{M}=\{w\in W\mid w\Vdash_{M}\gamma\}$. Similarly, we write $\vert w\vert_{M}$ to denote the set $\{\gamma\in\Gamma\mid w\Vdash_{M}\gamma\}$.

Intuitively, a world $w\in W$ corresponds to a policy; more precisely, to the $\preceq$-maximal policy in $\vert w\vert_{M}$. Vice versa, a policy $\gamma$ corresponds to a world, namely the $\preceq_{M}$-minimal world in $\vert\gamma\vert_{M}$, where $\preceq_{M}$ is an ordering on worlds that reflects the containment relation $\preceq$ on policies (Def. IV.3). (Actually, in Def. IV.4, we associate $\gamma$ simply with the entire cone $\vert\gamma\vert_{M}$.)

Definition IV.3

(World containment). Given a model $M=\langle W, R, V\rangle$ and $x, y\in W$, $$x\preceq_{M}y\ \text{ iff }\ \forall\gamma\in\Gamma:\ x\Vdash_{M}\gamma\ \text{ implies }\ y\Vdash_{M}\gamma.$$

Definition IV.4

(Semantics). The semantics of $\gamma$ (with respect to $M$) is $\vert\gamma\vert_{M}$.

As it is, this definition keeps the meaning of $R$ completely abstract, but we can already prove that the semantics is compositional, irrespective of $R$:

Theorem IV.5

(Compositionality). For all models $M$, and $\gamma_{1}, \gamma_{2}\in\Gamma$: $$\vert\gamma_{1}\wedge\gamma_{2}\vert_{M}=\vert\gamma_{1}\vert_{M}\cap\vert\gamma_{2}\vert_{M}.$$

In order to satisfy the remaining three requirements from the beginning of this section, we have to put some restrictions on the models, and in particular on the accessibility relation $R$. We call models that satisfy these constraints TM models (Def. IV.7). Intuitively, $R_{\vert\gamma\vert_{M}}(w, w')$ should hold if $w'$ is a world that is closest to $w$ of those worlds in which $\gamma$ holds. But what do we mean by ‘closest’? If we interpret worlds as policies, then $w'$ is the policy that results from adding $\gamma$, and nothing more but $\gamma$, to $w$. So we have to consider all worlds that are larger than $w$ (since we are adding to $w$) and also satisfy $\gamma$, and of these worlds we take the $\preceq_{M}$-minimal ones (since we are adding nothing more but $\gamma$) (Def. IV.7 (1)).

The other two constraints (Def. IV.7 (2) and IV.7 (3)) ensure that there is a one-to-one correspondence between policies and worlds.

Definition IV.6

If $(X,\leq)$ is a pre-ordered set ($\leq$ is a reflexive transitive relation on $X$) and $Y$ a finite subset of $X$, then ${\bf min}_{\leq}(Y)=\{y\in Y\mid \forall y'\in Y:\ y'\not< y\}$, and ${\bf max}_{\leq}(Y)=\{y\in Y \mid \forall y'\in Y:\ y'\not> y\}$.

Definition IV.7

(Trust management model). A model Formula$M= \langle W, R, V\rangle$ is a TM model iff

  • 1) $\forall\gamma\in\Gamma,\ x, y\in W$: $R_{\vert\gamma\vert_{M}}(x, y)$ iff $y\in{\bf min}_{\preceq_{M}}\{w \mid w\succeq_{M}x\wedge w\in\vert\gamma\vert_{M}\}$,
  • 2) $\forall\gamma\in\Gamma.\ \exists w\in W.\ \gamma\in{\bf max}_{\preceq}\vert w\vert_{M}$, and
  • 3) $\forall w\in W.\ \exists\gamma\in\Gamma.\ \gamma\in{\bf max}_{\preceq}\vert w\vert_{M}$.

To gain a better intuition for TM models, it is useful to consider the following particular TM model: imagine a labeled directed graph with a vertex for each $\gamma\in\Gamma$ (these are the worlds $W$). There is an edge from $\gamma_{1}$ to $\gamma_{2}$, labeled with $\gamma$, whenever $\gamma_{2}=\gamma_{1}\cup\gamma$ (corresponding to the accessibility relation $R_{\vert\gamma\vert}$).
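
As a visualization aid, the following sketch (our own illustration) builds a small finite fragment of this graph, restricted to policies over a fixed clause universe; the real model has a vertex for every policy in $\Gamma$.

```python
# Build a finite fragment of the labeled policy graph: vertices are policies
# over a small clause universe, and there is an edge g1 --g--> g2 whenever
# g2 = g1 union g. (Clauses are (head, frozenset_of_body_atoms) pairs.)
from itertools import chain, combinations

CLAUSE_UNIVERSE = [('p', frozenset()), ('q', frozenset()), ('p', frozenset({'q'}))]

def powerset(xs):
    return chain.from_iterable(combinations(xs, n) for n in range(len(xs) + 1))

vertices = [frozenset(s) for s in powerset(CLAUSE_UNIVERSE)]
edges = [(g1, g, g1 | g) for g1 in vertices for g in vertices]

print(len(vertices), "policies,", len(edges), "labeled edges")
# e.g. the edge ({p :- q}, {q}, {p :- q; q}) models submitting credential q
# to the policy {p :- q} for the duration of a query.
```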

So a TM model models all possible policies and all possible trust management interactions (submitting a set of credentials $\gamma$ for the duration of a query) with these policies. The following theorem shows that TM models indeed precisely capture the trust management relation $\Vvdash$, and Theorem IV.9 states that the semantics is fully abstract.

Theorem IV.8

(Capturing trust management). Let $M =\langle W, R, V\rangle$ be a TM model, $\gamma\in\Gamma$ and $\varphi\in\Phi$. Then $$\gamma \Vvdash \varphi\ \text{ iff }\ \forall w\in {\bf min}_{\preceq_{M}}\vert\gamma\vert_{M}.\ w\Vdash_{M}\varphi.$$

Theorem IV.9

(Full abstraction). For all TM models $M$ and $\gamma_{1}, \gamma_{2}\in\Gamma$: $$\gamma_{1}\equiv\gamma_{2}\ \text{ iff }\ \vert\gamma_{1}\vert_{M}=\vert\gamma_{2}\vert_{M}.$$

The property that is hardest to satisfy (and to prove) is the requirement that the model theory should support a notion of validity that coincides with judgements of the form $\Vvdash\varphi$, i.e., universal truths about trust management policies. This is formalized in Theorem IV.11.

Definition IV.10

(Trust management validity). $\varphi$ is TM-valid (we write $\Vdash_{\rm TM}\varphi$) iff for all TM models $M=\langle W, R, V\rangle$ and $w\in W$: $w\Vdash_{M}\varphi$.

Theorem IV.11

(Supporting validity). $$\Vdash_{\rm TM}\varphi\ \text{ iff }\ \Vvdash\varphi.$$

Example IV.12

Consider the following (false) statement: “in all policies in which $p\rightarrow q$ holds, $\square_{p}q$ also holds.” By the contrapositive of Theorem IV.11, we can prove that this is not true, i.e., $\not\Vvdash (p\rightarrow q)\rightarrow\square_{p}q$, by identifying a counter-world $w$ in a TM model $M$ such that $w\Vdash_{M}(p\rightarrow q)\wedge\neg\square_{p}q$. By Def. IV.2, this is equivalent to $$w\Vdash_{M}\neg p\wedge\neg\square_{p}q\ \text{ or }\ w\Vdash_{M}q\wedge\neg\square_{p}q.$$ Let $w$ be a $\preceq_{M}$-minimal world in all of $W$. By minimality, $w\Vdash_{M}\gamma$ only if $\gamma$ is universally true. Neither $p$ nor $\square_{p}q$ (assuming $p\neq q$) is universally true, hence $w\Vdash_{M}\neg p$ and $w\Vdash_{M}\neg\square_{p}q$, as required.
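
On the operational side, the empty policy already witnesses this failure: in $\emptyset$ the implication $p\rightarrow q$ holds vacuously (since $p$ does not hold), yet $\square_p q$ fails. A two-line check with the fixpoint evaluator from Section II, under the same illustrative clause encoding:

```python
# The empty policy is a counterexample to (p -> q) -> []_p q:
# p -> q holds in {} (p is not derivable), but submitting p does not make q hold.
def derives(policy, atom):
    facts, changed = set(), True
    while changed:
        changed = False
        for head, body in policy:
            if head not in facts and body <= facts:
                facts.add(head)
                changed = True
    return atom in facts

empty = set()
p_implies_q = (not derives(empty, 'p')) or derives(empty, 'q')   # p -> q in {}
box_p_q = derives(empty | {('p', frozenset())}, 'q')             # []_p q in {}
assert p_implies_q and not box_p_q
```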

In this section, we developed an adequate model-theoretic semantics for trust management. We started by interpreting both Datalog clauses and trust management interactions as counterfactuals, and taking a generic counterfactual model theory as the basis. We then customized the theory by adding constraints on the models of interest to arrive at TM models. The resulting semantics satisfies all four requirements stated at the beginning of this section, and it provides an intuition of a trust management service as a vertex in a labeled directed graph, where the reachable vertices represent the clause sets resulting from combining the service's policy with the credential set (the edge label) submitted to the service.

However, this semantics still does not give us much insight into proving judgements of the form $\Vvdash\varphi$ (or, equivalently, $\Vdash_{\rm TM}\varphi$). For this purpose, we equip the model theory with a corresponding proof theory in the following section.

SECTION V

AXIOMATIZATION

In standard modal logic, it is usually straightforward to derive an axiom in the proof theory from each frame condition in the model theory, i.e., a restriction on the accessibility relation $R$. (For example, reflexivity of $R$ corresponds to the axiom $\square\varphi\rightarrow\varphi$.) This constructive method can also be applied to counterfactual multi-modal logic, if the frame conditions are relatively simple [48]. In our case, however, the restriction on $R$ (Def. IV.7 (1)) is too complex to be simply ‘translated’ into an axiom. The axiomatization presented below was instead conceived by guessing the axioms and rules, and adjusting them until the system was provably sound and complete with respect to the model theory.

Definition V.1

In the proof system below, let $\varphi, \varphi', \varphi''\in\Phi$, $\gamma, \gamma'\in\Gamma$, $p\in{\bf At}$ and $\vec{p}\subseteq{\bf At}$. The proof system consists of the following axiom schemas:
$$\begin{array}{lr}
\vdash\varphi\rightarrow\varphi'\rightarrow\varphi & \hbox{(C11)}\\
\vdash(\varphi\rightarrow\varphi'\rightarrow\varphi'')\rightarrow(\varphi\rightarrow\varphi')\rightarrow\varphi\rightarrow\varphi'' & \hbox{(C12)}\\
\vdash(\neg\varphi\rightarrow\neg\varphi')\rightarrow\varphi'\rightarrow\varphi & \hbox{(C13)}\\
\vdash\square_{\gamma}(\varphi\rightarrow\varphi')\rightarrow\square_{\gamma}\varphi\rightarrow\square_{\gamma}\varphi' & \hbox{(K)}\\
\vdash\square_{\gamma}\gamma & \hbox{(C1)}\\
\vdash\square_{\gamma}\varphi\rightarrow\gamma\rightarrow\varphi & \hbox{(C2)}\\
\vdash\square_{(p:-\vec{p})}\varphi\rightarrow(\vec{p}\rightarrow p)\rightarrow\varphi\quad\text{provided }\varphi\text{ is }\square\text{-free} & \hbox{(Dlog)}\\
\vdash\square_{\gamma}\neg\varphi\longleftrightarrow\neg\square_{\gamma}\varphi & \hbox{(Fun)}\\
\vdash\square_{\gamma\wedge\gamma'}\varphi\longleftrightarrow\square_{\gamma}\square_{\gamma'}\varphi & \hbox{(Perm)}
\end{array}$$

Additionally, there are three proof rules:
$$\begin{array}{lr}
\textbf{If }\vdash\varphi\textbf{ and }\vdash\varphi\rightarrow\varphi'\textbf{ then }\vdash\varphi'. & \hbox{(MP)}\\
\textbf{If }\vdash\varphi\textbf{ then }\vdash\square_{\gamma}\varphi. & \hbox{(N)}\\
\textbf{If }\vdash\gamma\rightarrow\gamma'\textbf{ and }\varphi\text{ is }\neg\text{-free}\textbf{ then }\vdash\square_{\gamma'}\varphi\rightarrow\square_{\gamma}\varphi. & \hbox{(Mon)}
\end{array}$$

Axioms (C11)-(C13) and Modus Ponens (MP) are from the Hilbert-style axiomatization of classical propositional logic [47]. It is easy to see that they are sound, irrespective of $R$, since the Boolean operators $\top$, $\wedge$ and $\neg$ are defined classically for $\Vdash_{M}$. Axiom (K) is the multi-modal version of the basic Distribution Axiom that is part of every modal logic $(\square(\varphi\rightarrow\varphi')\rightarrow\square\varphi\rightarrow\square\varphi')$. Similarly, Rule (N) is the multi-modal version of the basic Necessitation Rule (if $\vdash\varphi$ then $\vdash\square\varphi$).

Axioms (C1) and (C2) are also standard in counterfactual logic [48]. The former is the trivial statement that if $\gamma$ were the case, then $\gamma$ would hold. The latter axiom states that the counterfactual conditional is stronger than material implication.

At first sight, Axiom (Dlog) may look similar to Axiom (C2), but the two are actually mutually independent. In fact, while the latter is standard, Axiom (Dlog) is deeply linked with the intuition that the possible worlds correspond to Datalog policies. Recall that, intuitively, the left hand side means “$\varphi$ would hold in the policy if the credential $p:-\vec{p}$ were submitted”. Now we expand the right hand side of the implication to $$(\vec{p}\wedge\neg p)\vee\varphi.$$ So the axiom tells us that the left hand side holds only if it is the case that

  • either $\varphi$ holds in the policy anyway, even without submitting $p:-\vec{p}$,
  • or the action of submitting the credential must be crucial for making $\varphi$ true, but this is only possible if the conditions $\vec{p}$ of the credential are all satisfied in the policy, and furthermore $p$ does not already hold in the policy (or else the credential could not possibly be crucial).

But the axiom only holds for $\square$-free $\varphi$. To see why, consider the following instance of Axiom (Dlog), ignoring the side condition: $\square_{q:-p}\square_{p}q\rightarrow(p\rightarrow q)\rightarrow\square_{p}q$. The left hand side is an instance of Axiom (C1), since $q:-p$ is just syntactic sugar for $\square_{p}q$, so the formula simplifies to $(p\rightarrow q)\rightarrow\square_{p}q$, which is not TM-valid, as shown in Example IV.12.

The following lemma is a useful bidirectional variant of Ax. (Dlog):

Lemma V.2

Let $p, q\in{\bf At}$ and $\vec{p}\subseteq {\bf At}$. Then $$\vdash\square_{(p:-\vec{p})}q \longleftrightarrow q\vee(\neg p\wedge\vec{p}\wedge\square_{p}q).$$
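
By soundness (Theorem V.3 below) and Theorem IV.11, this equivalence is in particular a universal truth under $\Vvdash$, so it can be sanity-checked operationally. The following sketch (our own illustration, not a proof) does exactly that on random policies.

```python
# Operational sanity check (not a proof) of Lemma V.2 on random policies:
#   []_{p :- p_vec} q   <->   q  \/  (~p /\ p_vec /\ []_{p} q)
import random
from itertools import combinations

def derives(policy, atom):
    facts, changed = set(), True
    while changed:
        changed = False
        for head, body in policy:
            if head not in facts and body <= facts:
                facts.add(head)
                changed = True
    return atom in facts

ATOMS = ['p', 'q', 'r', 's']
CLAUSES = [(h, frozenset(b)) for h in ATOMS
           for n in range(3) for b in combinations(ATOMS, n)]

random.seed(1)
for _ in range(5000):
    g = frozenset(random.sample(CLAUSES, random.randint(0, 5)))
    p_vec = frozenset(random.sample(ATOMS, random.randint(0, 2)))
    lhs = derives(g | {('p', p_vec)}, 'q')
    rhs = derives(g, 'q') or (not derives(g, 'p')
                              and all(derives(g, x) for x in p_vec)
                              and derives(g | {('p', frozenset())}, 'q'))
    assert lhs == rhs
```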

Axiom (Fun) is also remarkable in that it is rather non-standard in modal logic. It is also the reason it is not useful to define a dual $\lozenge$-operator (i.e., $\lozenge_{\gamma}\varphi=\neg\square_{\gamma}\neg\varphi$) in our logic, since $\square$ and $\lozenge$ would be equivalent. The axiom is equivalent to the property that the accessibility relation $R$ in a TM model $M=\langle W, R, V\rangle$ is essentially functional, i.e., for all $w\in W$ and $\gamma\in\Gamma$:

  • $\exists w'.\ R_{\vert\gamma\vert_{M}}(w, w')$, and
  • $\forall w_{1}, w_{2}.\ R_{\vert\gamma\vert_{M}}(w, w_{1})\wedge R_{\vert\gamma\vert_{M}}(w, w_{2})\Rightarrow w_{1}\preceq_{M} w_{2}\wedge w_{2}\preceq_{M} w_{1}$.

On the intuitive Datalog level, Axiom (Fun) can easily be seen to be sound, since the statement “$\varphi$ would not hold if $\gamma$ were submitted” is equivalent to “it is not the case that $\varphi$ would hold if $\gamma$ were submitted”.

Axiom (Perm) also corresponds to a property of $R$, namely that it is transitive (that's the ‘if’ direction) and dense (the ‘only if’ direction). It captures the intuition that submitting two credential sets in sequence is equivalent to submitting their union.

Rule (Mon) expresses a monotonicity property on the subscripts of $\square$, and can be reduced to a monotonicity property of TM models and $\neg$-free $\varphi$: $$\forall w, w'\in W.\ w\Vdash_{M}\varphi\wedge w'\succeq_{M}w\Rightarrow w'\Vdash_{M}\varphi.$$ The intuition here is that submitting more or stronger credentials can only make more (positive) facts true. It is easy to see that this does not hold in general for negated statements: suppose $p$ does not hold in a policy (with no submitted credentials); then the negated fact $\neg p$ holds. But $\neg p$ may cease to hold when credentials are submitted, in particular, when $p$ is submitted. In other words, even though $p\rightarrow\top$ is valid, $\square_{\top}\neg p\rightarrow\square_{p}\neg p$ is not.

The main result of this section is that the axiomatization is sound and complete with respect to the model theory (Theorem V.3).

Theorem V.3

(Soundness and Completeness). $$\Vdash_{\rm TM}\varphi\ \text{ iff }\ \vdash\varphi.$$

The proof of soundness ($\vdash\varphi$ implies $\Vdash_{\rm TM}\varphi$) formalizes the intuitions given above and proceeds, as usual, by structural induction on $\varphi$. The proof of completeness ($\Vdash_{\rm TM}\varphi$ implies $\vdash\varphi$) is less standard, and can be roughly outlined thus:

  • 1) We will prove the equivalent statement that if $\varphi$ is consistent (with respect to $\vdash$), then there exists a TM model $M= \langle W, R, V\rangle$ and $w\in W$ such that $w\Vdash_{M}\varphi$.
  • 2) From Lemma V.2, it can be shown that every $\varphi$ is equivalent to a formula $\varphi'$ that only consists of conjunctions and negations of policies in $\Gamma$ (i.e., one that does not contain vertically nested boxes).
  • 3) Based on the property of TM models that every world corresponds to some policy in $\Gamma$, it is then possible to identify $w\in W$ such that $w\Vdash_{M}\varphi'$, whenever $M$ is a TM model.
  • 4) By soundness, this implies that $w\Vdash_{M}\varphi$. Furthermore, we can show that at least one TM model exists, and hence we arrive at the required existential conclusion.

Together with Theorem IV.11, we have the result $$\Vvdash\varphi \iff \Vdash_{\rm TM}\varphi \iff \vdash\varphi.$$ We can thus use the axiomatization to prove universal truths about trust management systems.

Example V.4

We sketch a formal proof of the formula from Example III.4: $$\vdash \neg a\wedge\square_{d}\neg e\wedge\square_{b:-a\wedge d:-c}e \rightarrow c\wedge\square_{d}a$$

Proof

We first show that $d$ is equivalent to $\square_{\top}d$. The direction $\vdash\square_{\top}d\rightarrow d$ follows directly from Axiom (C2). The same axiom also yields $\vdash\square_{\top}\neg d\rightarrow\neg d$, the contrapositive of which, together with Axiom (Fun), gives $\vdash d\rightarrow\square_{\top}d$. Therefore $\vdash d\longleftrightarrow\square_{\top}d$.

Since $\vdash c\rightarrow \top$, we have $\vdash\square_{\top}d\rightarrow\square_{c}d$, according to Rule (Mon), and hence equivalently $\vdash d\rightarrow d:-c$. Taking this as the premise of Rule (Mon), we get $\vdash\square_{d:-c}e\rightarrow\square_{d}e$, the contrapositive of which is $\vdash\square_{d}\neg e\rightarrow\square_{d:-c}\neg e$, by Axiom (Fun).

Therefore, the assumption $\square_{d}\neg e$ from the antecedent of the formula implies $\square_{d:-c}\neg e$. Conjoining this with the assumption $\square_{b:-a\wedge d:-c}e$, which is equivalent to $\square_{d:-c}\square_{b:-a}e$ by Axiom (Perm), we get $$\square_{d:-c}(\neg e\wedge\square_{b:-a}e)\tag{7}$$ (as it can easily be shown that $\square_{d:-c}$ distributes over $\wedge$).

By Axiom (Dlog), $\vdash\square_{b:-a}e\rightarrow e\vee(a\wedge\neg b)$. Therefore, formula (7) implies $$\square_{d:-c}(a\wedge\neg b),\tag{8}$$ since Axiom (K) allows us to apply Modus Ponens under $\square_{d:-c}$. We have thus shown that the antecedent of the original formula implies $\square_{d:-c}a$. Furthermore, as we have shown, $\vdash d\rightarrow d:-c$, and hence by Rule (Mon), $\vdash\square_{d:-c}a\rightarrow\square_{d}a$. Modus Ponens yields one of the consequents of the original formula, $\square_{d}a$.

For the other consequent, $c$, we apply Axiom (Dlog) to formula (8), which yields $(a\wedge\neg b)\vee(c\wedge\neg d)$. Combining this with the antecedent $\neg a$, we can then conclude $c$. ■

Example V.5

We sketch a formal proof of the probing attack result from Section III-A. For brevity, we introduce abbreviated names for the atoms:
$$\begin{array}{l}
as=A.hasConsented(S)\\
sa=S.canRegister(A)\\
ab=A.isRegistered(B)\\
secret=S.isRegistered(B)
\end{array}$$
The statement that the attacker can detect $secret$ in the probing attack can then be expressed as $$\vdash\square_{as}sa\wedge\square_{as:-ab}\neg sa\wedge\square_{as:-ab\wedge ab:-secret}sa\rightarrow secret.$$

Proof

Assume the left hand side of the formula that we want to prove. From the previous proof, we have seen that $sa$ is equivalent to $\square_{\top}sa$. Since $\vdash (as:-ab)\rightarrow\top$, we thus have $sa\rightarrow\square_{as:-ab}sa$, by Rule (Mon). Combining the contrapositive of this with the assumption, we get $\neg sa$. From the assumption and Ax. (C2), we get $as\rightarrow sa$, which together with $\neg sa$ gives $\neg as$.

Using Lemma V.2, we can prove that $\square_{as:-ab}sa$ is equivalent to $sa\vee(ab\wedge\neg as\wedge\square_{as}sa)$.

Since the assumption $\square_{as:-ab}\neg sa$ is equivalent to $\neg\square_{as:-ab}sa$ (by Ax. (Fun)), it is therefore also equivalent to $$\neg sa\wedge(\neg ab\vee as\vee\neg\square_{as}sa).$$ We have already proved $\neg as$, and $\square_{as}sa$ is in the antecedent. Therefore, we can conclude $\neg ab$.

Now consider $\square_{as:-ab}\neg sa \wedge\square_{as:-ab\wedge ab:-secret}sa$ in the assumption. By Ax. (Fun) and (Perm) and distributivity of $\square$, this is equivalent to $\square_{as:-ab}(\neg sa\wedge\square_{ab:-secret}sa)$. By Ax. (K), we can apply Ax. (Dlog) on the inner box under the outer box to get $$\square_{as:-ab}(\neg sa\wedge(sa\vee(secret\wedge\neg ab))),$$ which implies $\square_{as:-ab}secret$. Again applying Ax. (Dlog) yields $secret\vee(ab\wedge\neg as)$. But since we have proved $\neg ab$ above, we can conclude that $secret$ follows from the assumptions. ■

SECTION VI

MECHANIZING THE LOGIC

Hilbert-style axiomatizations are notoriously difficult to use directly for building proofs, and they are also difficult to mechanize directly, because they are not goal-directed. In this section, we describe how a goal formula Formula$\varphi$ can be transformed into an equivalent formula in classical propositional logic that can be verified by a standard SAT solver. We have implemented a tool based on the contents of this section; some uses of the tool are described in Section VII.

Our axiomatization has certain characteristics that enable such a transformation. Firstly, Lemma V.2 shows that $\varphi$ can be transformed into a formula in which all subscripts of boxes are $\square$-free, and Ax. (Fun) and (Perm) allow us to distribute boxes through conjunctions, disjunctions and negations. This forms the basis of a normalization transform.

Secondly, for a given $\varphi$, it is sufficient to encode just a finite number of axiom instantiations in classical propositional logic in order to characterize the non-classical properties of $\square$. This process is called saturation.

In this section, we use literal to mean a (possibly negated) atom, and $\square$-literal to mean a (possibly negated) atom with some prefix of boxes; e.g. $\square_{\square_{r}p\wedge p}q$ and $p$ are both $\square$-literals ($p$ is logically equivalent to $\square_{\top}p$), whereas $p$ is also a literal but $\square_{q}p$ is not.

The reasoning process is described in more detail next.

Normalization and expansion

Following parsing, the goal formula is simplified through the elimination of subsumed subformulas; e.g., $\square_{a:-b}c\wedge\square_{a}c$ is simplified to $\square_{a:-b}c$. The formula is then normalized by computing a negation normal form and distributing all boxes, such that boxes are only applied to literals, and negation is only applied to $\square$-literals. We also use Ax. (Perm) to collect strings of boxes into a single box. Normalization takes care of Ax. (K), (Fun), and (Perm). A sketch of the box-distribution and negation-normal-form steps is shown below.
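
The following sketch (our own illustration, with an ad-hoc tuple representation of formulas) shows the core of these two steps; subsumption elimination is not included.

```python
# A sketch of the normalization step. Formulas are nested tuples:
# ('atom', p), ('not', f), ('and', f, g), ('or', f, g), ('box', gamma, f),
# where gamma is a tuple of clauses. Boxes distribute over /\, \/ and ~
# (Ax. K, Fun), and nested boxes are merged into one (Ax. Perm).

def push_boxes(f, prefix=()):
    op = f[0]
    if op == 'atom':
        return ('box', prefix, f) if prefix else f
    if op == 'not':
        return ('not', push_boxes(f[1], prefix))
    if op in ('and', 'or'):
        return (op, push_boxes(f[1], prefix), push_boxes(f[2], prefix))
    if op == 'box':                       # merge with the accumulated prefix (Perm)
        return push_boxes(f[2], prefix + tuple(f[1]))
    raise ValueError(f)

def nnf(f, neg=False):
    """Push negations inward; they stop at atoms and box-literals."""
    op = f[0]
    if op == 'not':
        return nnf(f[1], not neg)
    if op in ('and', 'or'):
        dual = {'and': 'or', 'or': 'and'}
        return (dual[op] if neg else op, nnf(f[1], neg), nnf(f[2], neg))
    return ('not', f) if neg else f       # atom or box-literal

# Example: ~ []_{g}(p /\ ~q)  becomes  ~[]_{g}p \/ []_{g}q.
g = (('r', ()),)                          # the credential set {r}
goal = ('not', ('box', g, ('and', ('atom', 'p'), ('not', ('atom', 'q')))))
print(nnf(push_boxes(goal)))
```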

Next, the goal formula is expanded by applying Lemma V.2 exhaustively until all subscripts of $\square$-literals are $\square$-free. Expansion is a very productive process: it can cause the goal formula's size to increase exponentially. This step takes care of Ax. (Dlog) and Rule (N).

The resulting formula is negated and added to the clause set. The clause set collects formulas which will ultimately be passed to a SAT solver.

Saturation

Saturation generates propositional formulas that faithfully characterize the $\square$-literals occurring in the clause set.

  • 1) Let $\beta=\square_{\bigwedge_{i=1}^{n}(q_{i}:-\vec{q_{i}})}\,p$ be a $\square$-literal occurring in the clause set. If $\vdash\bigwedge_{i=1}^{n}(\vec{q_{i}}\rightarrow q_{i})\rightarrow p$ holds (which is checked by the underlying SAT solver), we replace all occurrences of $\beta$ by $\top$. This step is a generalization of Ax. (C1).
  • 2) For each $\square$-literal $\square_{\gamma}p$ (where $\gamma\neq\top$) occurring in the clause set, we add the formulas $p\rightarrow\square_{\gamma}p$ and $\square_{\gamma}p\rightarrow \gamma\rightarrow p$.
  • 3) For each pair of $\square$-literals $\square_{\gamma_{1}}p, \square_{\gamma_{2}}p$ (where $\gamma_{1}\neq\gamma_{2}$) occurring in the clause set, we add the formula $$\square_{\gamma_{1}}\gamma_{2}\wedge\square_{\gamma_{2}}p\rightarrow\square_{\gamma_{1}}p.$$ Intuitively, this formula encodes the transitivity of counterfactuals. Steps (2) and (3) together cover Ax. (C2) and Rule (Mon). Since the second step may create new $\square$-literals, the process is repeated until a fixed point is reached.

Propositionalization and SAT solving

After saturation completes, all □-literals in the clause set are uniformly substituted by fresh propositional literals. The resulting formulas are then checked by a standard SAT solver. Our implementation offers the choice between using the in-memory API of Formula${\rm Z}3^{2}$ and producing output in the DIMACS [24] format used by many SAT solvers such as MiniSAT [25].

The classical axioms (C11)–(C13) and Rule (MP) are covered by the SAT solver. We have therefore covered all axioms and rules, and thus the goal formula is valid iff the SAT solver reports unsatisfiability (since we negated the goal).
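The final step can be pictured with a short sketch (again ours, and only illustrative) that replaces every □-literal in the clause set by a fresh propositional variable and hands the result to Z3 via its Python bindings; the tuple representation of formulas is the same hypothetical one used in the normalization sketch above.

from z3 import Bool, And, Or, Not, Solver, unsat

fresh = {}                                        # box-literal -> fresh propositional variable

def to_z3(f):
    """Translate a normalized formula into a Z3 Boolean term."""
    tag = f[0]
    if tag == 'atom':
        return Bool(f[1])
    if tag == 'box':                              # uniform substitution of the box-literal
        key = (f[1], f[2])
        if key not in fresh:
            fresh[key] = Bool('boxlit_%d' % len(fresh))
        return fresh[key]
    if tag == 'not':
        return Not(to_z3(f[1]))
    if tag == 'and':
        return And(to_z3(f[1]), to_z3(f[2]))
    if tag == 'or':
        return Or(to_z3(f[1]), to_z3(f[2]))
    raise ValueError('unknown connective: %s' % tag)

def goal_is_valid(clause_set):
    """clause_set holds the saturation formulas plus the negated goal;
    the goal is valid iff this set is propositionally unsatisfiable."""
    solver = Solver()
    for f in clause_set:
        solver.add(to_z3(f))
    return solver.check() == unsat

Producing DIMACS output instead only changes the back end; the essential step is the uniform substitution of □-literals by fresh propositional letters.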

SECTION VII

APPLICATIONS AND PERFORMANCE

A. Probing attacks

As an example of how the axiomatization can be used for security analysis, and to compare the performance of our implementation, we conducted a small case study on analyzing probing attacks, based on the benchmark test cases described by Becker and Koleini [8], [7]. Their benchmark was set up to test the performance of their tool (henceforth referred to as BK) for verifying opacity and detectability in probing attacks. BK's algorithm attempts to construct a policy that yields the same observations for all probes but makes the fact to be detected false. The fact is opaque if BK manages to construct such a policy, and detectable otherwise. In contrast, Counterdog is a general theorem prover for our logic. By Theorems III.7, IV.11, and V.3, Counterdog can be used to check opacity and detectability by constructing a formula corresponding to a probing attack and then proving it mechanically.

To keep this paper self-contained, we briefly describe the tested scenarios, and refer the reader to [7] for a more detailed explanation.

The compute cluster Clstr under attack has the following policy Formula$\gamma_{\rm Clstr}$:

[Algorithm 1: the policy Formula$\gamma_{\rm Clstr}$. The Datalog listing is not reproduced in this rendering; see [7] for details.]

Here, Formula$x$ and Formula$y$ range over a set of users, and Formula$j$ ranges over a set of compute job identifiers. The first parameter of each predicate should be interpreted as the principal who says, or vouches for, the predicate. The policy stipulates that, according to Clstr, members who own a job can execute it, if Clstr can read the data associated with it according to data center Data. Clstr delegates authority over job ownership and membership to trusted third parties (TTP). Data delegates authority over read permissions to job data to data owners. Data also delegates authority over job data ownership to TTPs. Furthermore, both Clstr and Data say that certificate authority CA is a TTP.

TC1

In the basic test case (TC1), the attacker Eve possesses four credentials Formula$\gamma_{{\rm Eve}}$:

[Algorithm 2: Eve's credential set Formula$\gamma_{\rm Eve}$. The listing is not reproduced in this rendering; see [7] for details.]

With her four credentials and the query, Eve can form Formula$2^{4}=16$ probes (cf. Def. III.5) of the form Formula$\square _{\gamma}\varphi_{{\rm Eve}}$, for each Formula$\gamma\subseteq\gamma_{\rm Eve}$. These result in 16 observations under Formula$\gamma_{{\rm Clstr}}$: the observation corresponding to probe Formula$\pi$ is just Formula$\pi$ if Formula$\gamma_{\rm Clstr} \Vvdash\pi$, and otherwise it is Formula$\neg\pi$. The resulting probing attack Formula$\varphi_{a}$ under Formula$\gamma_{{\rm Clstr}}$  is then the conjunction of all 16 observations.
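The construction of Formula$\varphi_{a}$ is easy to mechanize; the following Python sketch (ours, not part of BK or Counterdog) enumerates the probes and observations, with a hypothetical callback entails standing in for the Datalog entailment check Formula$\gamma_{\rm Clstr}\Vvdash\pi$.

from itertools import combinations

def probes(credentials, query):
    """All probes box_gamma(query), one for each subset gamma of the attacker's credentials."""
    for k in range(len(credentials) + 1):
        for gamma in combinations(credentials, k):
            yield frozenset(gamma)

def probing_attack(policy, credentials, query, entails):
    """The observation for a probe is the probe itself if it succeeds against the
    policy, and its negation otherwise; the attack formula is their conjunction."""
    observations = []
    for gamma in probes(credentials, query):
        probe = ('box', gamma, query)
        holds = entails(policy, gamma, query)   # hypothetical: does the policy plus gamma entail the query?
        observations.append(probe if holds else ('not', probe))
    return observations                          # 2^4 = 16 observations in TC1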

In TC1, Eve wishes to find out if Bob is not a member of Clstr - in other words, if Formula$\neg mem({\rm Clstr, Bob})$ is detectable. By Theorems III.7, IV.11, and V.3, this is equivalent to checking FormulaTeX Source$$\vdash\varphi_{a}\rightarrow\neg mem({\rm Clstr, Bob}).$$This is provable, and therefore Eve can detect that Bob is not a member.

TC2

The atomic clause mem(Clstr, Bob) is added to Formula$\gamma_{\rm Clstr}$, and the fact to be detected is changed to mem(Clstr, Bob). The corresponding formula is not provable, and hence mem(Clstr, Bob) is opaque.

TC3

Based on TC1, three irrelevant atomic clauses Formula$p_{1}, p_{2}, p_{3}$ are added to Formula$\gamma_{\rm Eve}$, increasing the number of probes to Formula$2^{7}= 128$. The fact to be detected remains the same, and is indeed detectable.

TC4

This test case was omitted as it only tests a specific switch in Becker and Koleini's tool, which is not relevant in our case.

TC5

Based on TC1, the probe query is changed to Formula$\varphi_{\rm Eve}={\it canExe}({\rm Clstr, Eve, Job})\wedge\neg{\it isBanned}({\rm Clstr, Eve})$. The fact remains detectable.

TC6

Based on TC5, the probe set is manually pruned to a minimal set that is sufficient to prove detectability. This reduces the number of probes from 16 down to only 3.

To get comparable performance numbers, we ran BK and Counterdog on these test cases. For our experiments we used an Intel Xeon E5630 2.53 GHz with 6 GB RAM. The table below summarizes the timings for all test cases.

[Table of per-test-case timings for BK and Counterdog; not reproduced in this rendering.]

Counterdog outperforms BK in all test cases. The performance gain is most notable in the more expensive test cases. To test whether this holds more generally, we ran a test series based on TC3, adding irrelevant clauses to the probe credential set Formula$\gamma_{\rm Eve}$ one at a time. Each added clause doubles the number of probes (and thus the size of the formula to be proved).

Figure 1 compares the performance of both tools for this test series. Counterdog's performance gain over BK increases exponentially with each added credential in Formula$\gamma_{\rm Eve}$. A probe credential set of size 14 (resulting in 16,384 probes) was the maximum that BK could handle before running out of memory, taking 408 s (compared to 7 s with Counterdog). We tested Counterdog with up to 18 credentials (resulting in 262,144 probes), which took 179 s. A simple extrapolation suggests that Counterdog can check a probing attack based on TC3 extended to Formula$10^{8}$ probes within less than three hours.

Figure 1. Comparison of timings for the TC3-based test series on a double logarithmic scale. BK is the tool from [7], Cd is our tool Counterdog.

B. Proving Meta-Theorems

As we have seen, the axiomatization of the semantics together with our implementation enables us to mechanically prove universal truths about trust management systems - that is, statements that are implicitly quantified over all policies: a theorem (Formula$\vdash\varphi$) is equivalent to Formula$\Vvdash\varphi$, by Theorems IV.11 and V.3, which can be interpreted as “all policies Formula$\gamma$ satisfy the property Formula$\varphi$”.

But we want to go further than that. In this subsection, we show that we can use our implementation to automate proofs of meta-theorems about trust management. These are statements containing universally quantified meta-variables ranging over atoms, conjunctions of atoms, Formula$\Gamma$ or Formula$\Phi$. Our axiom schemas and Lemma V.2 are examples of such meta-theorems, with meta-variables Formula$p,\vec{p},\gamma, \varphi$ etc.

In classical logic as well as all normal modal logics, proving such meta-theorems is trivial: if a propositional formula Formula$f$ is a theorem, then substituting any arbitrary formula Formula$f^{\prime}$ for all occurrences of an atom Formula$p$ in Formula$f$ will also yield a theorem. In fact, the axiomatization of such logics often explicitly includes a uniform substitution rule, together with a finite number of axioms (rather than axiom schemas, as in our case).

Our logic breaks the uniform substitution property, as some of the axioms and rules have syntactic side conditions (e.g. Ax. (Dlog), Rule (Mon)). It is thus not a normal modal logic in the strict sense, but this does not pose any problems, and is perhaps even to be expected, as many belief-revision and other non-monotonic logics also break uniform substitution [41].

The only downside is that proving meta-theorems is non-trivial, and manual proofs generally require structural induction over the quantified meta-variables. It is therefore not obvious whether proving meta-theorems can be automated easily. After all, the range of the quantifiers is huge, and even infinite if At is infinite. We answer this question in the affirmative by presenting a number of proof-theoretical theorems on the provability of meta-theorems (meta-meta-theorems, so to speak) showing that it is sufficient to consider just a small number of base-case instantiations of the meta-variables.

We will use contexts to formalize the notion of meta-theorem. A context is a Formula$\Phi$-formula with a ‘hole’ denoted by Formula$[\cdot]$. We define three different kinds of contexts in Fig. 2: Formula$\Phi$-hole, Formula$\Gamma$-hole, and At-hole contexts. Intuitively, the holes in a Formula$\Phi$-hole (Formula$\Gamma$-hole, At-hole, respectively) context can be filled with any Formula$\varphi\in\Phi$ (Formula$\gamma\in\Gamma$, Formula$\vec{p}\subseteq_{\rm fin}{\bf At}$, respectively) to form a well-formed Formula$\Phi$-formula.

Figure 2. Formula$\Phi$-contexts with Formula$\Phi$-holes, Formula$\Gamma$-holes and At-holes, respectively. An At-hole context takes as argument an atom or a conjunction of atoms.

If Formula${\cal A}$ is a Formula$\Phi$-hole (Formula$\Gamma$-hole, At-hole, respectively) context and Formula$\alpha\in\Phi$ (Formula$\alpha\in\Gamma$, Formula$\alpha\subseteq_{\rm fin}{\bf At}$, respectively), we write Formula${\cal A}[\alpha]$ to denote the Formula$\Phi$-formula resulting from replacing all holes in Formula${\cal A}$ by Formula$\alpha$.

It is easy to see that every Formula$\Phi$-hole context is also a Formula$\Gamma$-hole context, and every Formula$\Gamma$-hole context is also an At-hole context. Each of the three types of contexts completely covers all of Formula$\Phi$; in particular, the case Formula$\square_{\gamma\wedge[\cdot]}{\cal A}$ (for Formula$\Gamma$-hole and At-hole contexts) is covered because Formula$\square_{\gamma\wedge[\cdot]}{\cal A}$ is equivalent to Formula$\square_{\gamma}\square_{[\cdot]}{\cal A}$.

Theorem VII.1

Let Formula${\cal E}$ be an At-hole context, and let Formula$p$ be an atom that does not occur in Formula${\cal E}$. Then FormulaTeX Source$$\vdash {\cal E}[p]\ {\rm iff}\ \forall\vec{p}\subseteq_{\rm fin}{\bf At}.\ \vdash {\cal E}[\vec{p}].$$

Theorem VII.2

Let Formula${\cal D}$ be a Formula$\Gamma$-hole context, and let Formula$p$ and Formula$q$ be atoms that do not occur in Formula${\cal D}$. Then FormulaTeX Source$$\vdash {\cal D}[p]\wedge {\cal D}[\square_{q}p]\ {\rm iff}\ \forall\gamma\in\Gamma.\ \vdash {\cal D}[\gamma].$$

Theorem VII.3

Let Formula${\cal C}$ be a Formula$\Phi$-hole context, and let Formula$p, q$ and Formula$r$ be atoms that do not occur in Formula${\cal C}$. Let Formula$S=\{p; \square_{q}p; \square_{q:-r}p\}$. Then FormulaTeX Source$$(\forall\varphi_{s}\in S.\ \vdash {\cal C}[\varphi_{s}]\wedge {\cal C}[\neg\varphi_{s}])\ \ {\rm iff}\ \ \forall\varphi\in\Phi.\ \vdash {\cal C}[\varphi].$$

These theorems enable us to mechanically prove meta-theorems about trust management. Essentially, they reduce a meta-level quantified validity judgement to a small number of concrete instances. There is only one case to consider for universally quantified atoms or conjunctions of atoms (Theorem VII.1), two cases for meta-variables ranging over policies (Theorem VII.2), and six cases (three of them negated) for meta-variables ranging over arbitrary formulas (Theorem VII.3).

Consider, for instance, the meta-statement FormulaTeX Source$$\forall\varphi\in\Phi.\ \vdash\varphi \longleftrightarrow \square_{\top}\varphi.$$

More formally and equivalently, we could write FormulaTeX Source$$\forall\varphi\in\Phi.\ \vdash {\cal C}[\varphi], \ {\rm where} \ {\cal C}=[\cdot] \longleftrightarrow \square_{{\top}}[\cdot].$$This can then be mechanically proved by proving just the six basic instances from Theorem VII.3.
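Concretely, with fresh atoms Formula$p, q, r$ and Formula$S=\{p; \square_{q}p; \square_{q:-r}p\}$, the six proof obligations are the instances Formula${\cal C}[\varphi_{s}]$ and Formula${\cal C}[\neg\varphi_{s}]$ for each Formula$\varphi_{s}\in S$: FormulaTeX Source$$p \longleftrightarrow \square_{\top}p,\qquad \neg p \longleftrightarrow \square_{\top}\neg p,$$FormulaTeX Source$$\square_{q}p \longleftrightarrow \square_{\top}\square_{q}p,\qquad \neg\square_{q}p \longleftrightarrow \square_{\top}\neg\square_{q}p,$$FormulaTeX Source$$\square_{q:-r}p \longleftrightarrow \square_{\top}\square_{q:-r}p,\qquad \neg\square_{q:-r}p \longleftrightarrow \square_{\top}\neg\square_{q:-r}p.$$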

It is easy to extend this method further. Meta-theorems with multiple meta-variables can also be mechanically proved with this approach by combining the theorems. For example, FormulaTeX Source$$\forall\varphi,\varphi^{\prime}\in\Phi, \gamma\in\Gamma.\ \vdash\square _{\gamma}(\varphi\wedge\varphi^{\prime})\longleftrightarrow(\square _{\gamma}\varphi\wedge\square _{\gamma}\varphi^{\prime})$$reduces to Formula$6\times 6\times 2=72$ propositional cases: six each for Formula$\varphi$ and Formula$\varphi^{\prime}$, and two for Formula$\gamma$.
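A driver that enumerates these cases is straightforward; the sketch below (ours) is schematic, with a hypothetical callback prove standing for the decision procedure of Section VI, and with instances represented as strings for readability. It glosses over the requirement that the fresh atoms chosen for different meta-variables must be distinct and must not occur in the context.

from itertools import product

# Base instances dictated by Theorems VII.2 and VII.3 (p, q, r fresh atoms):
PHI_CASES = ['p', 'box(q, p)', 'box(q :- r, p)',
             'not p', 'not box(q, p)', 'not box(q :- r, p)']
GAMMA_CASES = ['p', 'box(q, p)']

def check_meta_theorem(context, prove):
    """context(phi1, phi2, gamma) builds one concrete goal formula; the meta-theorem
    holds iff all 6 * 6 * 2 = 72 base instances are theorems."""
    return all(prove(context(phi1, phi2, gamma))
               for phi1, phi2, gamma in product(PHI_CASES, PHI_CASES, GAMMA_CASES))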

We can also prove meta-theorems with side-conditions. If a meta-variable Formula$\varphi$ ranges over negation-free formulas from Formula$\Phi$, it is sufficient to prove the three positive instances from Theorem VII.3. Similarly, for meta-variables ranging over □-free Formula$\varphi$ (as in Ax. (Dlog)), the number of cases reduces to two (instantiating Formula$\varphi$ with Formula$p$ and with Formula$\neg p$).

In the following, we discuss a number of meta-theorems that we verified using the tool, based on Theorems VII.1–VII.3 (in addition to proving them manually). These meta-theorems provide interesting general insights into Datalog-based trust management systems. Moreover, they have also been essential in our (manual) proofs of soundness and completeness (Theorem V.3).

  • FormulaTeX Source$$\forall\varphi, \varphi^{\prime}\in\Phi, \gamma\in\Gamma.\ \vdash\square_{\gamma}(\varphi\wedge\varphi^{\prime})\longleftrightarrow(\square_{\gamma}\varphi\wedge\square_{\gamma}\varphi^{\prime})$$ As in standard modal logic, the □-operator distributes over conjunction. The trust management interpretation is equally obvious: submitting credential set Formula$\gamma$ to a policy results in a new policy that satisfies the property Formula$\varphi\wedge\varphi^{\prime}$ iff the new policy satisfies both Formula$\varphi$ and Formula$\varphi^{\prime}$. Proving this theorem took 574 ms.

  • FormulaTeX Source$$\forall\varphi, \varphi^{\prime}\in\Phi, \gamma\in\Gamma.\ \vdash\square_{\gamma}(\varphi\vee\varphi^{\prime})\longleftrightarrow(\square_{\gamma}\varphi\vee\square_{\gamma}\varphi^{\prime})$$ In most modal logics, □ does not distribute over Formula$\vee$. This theorem holds only because the accessibility relation is functional, or equivalently, because the result of combining a credential set with a policy is always uniquely defined. But again, the theorem is obviously true in the trust management interpretation: submitting credential set Formula$\gamma$ to a policy results in a new policy that satisfies the property Formula$\varphi\vee\varphi^{\prime}$ iff the new policy satisfies either Formula$\varphi$ or Formula$\varphi^{\prime}$. (577 ms)

  • FormulaTeX Source$$\forall\varphi\in\Phi.\ \vdash\varphi\longleftrightarrow \square_{\top}\varphi$$ Submitting an empty credential set is equivalent to not submitting anything at all. (3 ms)

  • FormulaTeX Source$$\forall\varphi\in\Phi^{+}, \gamma\in\Gamma.\ \vdash\varphi\rightarrow\square_{\gamma}\varphi,$$ where Formula$\Phi^{+}$ denotes the set of Formula$\neg$-free formulas in Formula$\Phi$. This can be interpreted as a monotonicity property of the accessibility relation, and also of credential submissions: positive properties are retained after credential submissions. (95 ms)

  • FormulaTeX Source$$\forall\varphi\in\Phi, \gamma, \gamma^{\prime}\in\Gamma.\ \vdash\square_{\gamma}\square_{\gamma^{\prime}}\varphi\longleftrightarrow\square_{\gamma^{\prime}}\square_{\gamma}\varphi$$ This property, corresponding to a commutative accessibility relation, is also unusual in multi-modal logics. A simple corollary is that all permutations of arbitrary strings of boxes are equivalent, or that the order in which credentials are submitted is irrelevant. (3059 ms)

  • FormulaTeX Source$$\forall\varphi\in\Phi, p\in {\bf At}, \vec{p}\subseteq_{\rm fin} {\bf At}.\ \vdash\vec{p}\rightarrow(\square_{p}\varphi\longleftrightarrow\square_{p:-\vec{p}}\varphi)$$ If Formula$\vec{p}$ holds in a policy, then submitting the atomic credential Formula$p$ results in a policy that is indistinguishable from the policy resulting from submitting the conditional credential Formula$p:-\vec{p}$. (30 ms)

  • FormulaTeX Source$$\forall\varphi\in\Phi^{+}, \gamma_{1}, \gamma_{2}\in\Gamma.\ \vdash\square_{\gamma_{1}}\gamma_{2}\wedge\square_{\gamma_{2}}\varphi\rightarrow\square_{\gamma_{1}}\varphi$$ This theorem asserts that credential-based derivations can be applied transitively. More precisely: if, after submitting credential set Formula$\gamma_{1}$, the credential set Formula$\gamma_{2}$ would be derivable from the combined policy, and if submitting Formula$\gamma_{2}$ directly would be sufficient for making property Formula$\varphi$ true, then Formula$\gamma_{1}$ alone would also be sufficient. This only holds for negation-free Formula$\varphi$. A simple counter-example can be constructed by instantiating Formula$\gamma_{1}=p, \gamma_{2}={\top}$, and Formula$\varphi=\neg p$. (1078 ms)

  • FormulaTeX Source$$\forall\varphi\in\Phi, \gamma\in\Gamma.\ \vdash\gamma\rightarrow(\varphi\longleftrightarrow\square_{\gamma}\varphi)$$ If a policy contains the clauses Formula$\gamma$, then submitting Formula$\gamma$ as credential set is equivalent to not submitting anything at all. This holds even for properties Formula$\varphi$ containing negation. (157 ms)

C. Proving one's own completeness

In Section VI, we gave some informal justifications as to why the reduction to propositional logic, on which our implementation is based, is not only sound (which is relatively easy to prove manually) but also complete with respect to the axiomatization. We did not prove completeness entirely by hand; instead, we used the implementation itself to assist in the proof, thereby letting the implementation effectively prove its own completeness!

The main reason why this is possible is the ability to prove meta-theorems mechanically (Theorems VII.1–VII.3). With this feature in place, we mechanically verified all axiom schemas. What this proves is that the reduction rules, as implemented, cover all axioms.

It remained to show that all rules are covered as well. Recall that there are three rules, Modus Ponens, (Mon), and (N). Modus Ponens is built into the underlying SAT solver. We manually proved that Rule (Mon) can be replaced by the axiom schema Formula$\vdash\square _{\gamma_{1}}\gamma_{2}\wedge\square _{\gamma_{2}}\varphi\rightarrow\square _{\gamma_{1}}\varphi$, which we mechanically verified.

To prove coverage of Rule (N), we perform a rule induction over Formula$\vdash\varphi$ in order to conclude that Formula$\square_{\gamma}\varphi$ is provable by the implementation. All the base cases, i.e., the cases where Formula$\varphi$ is an instance of an axiom, were proven mechanically, again as meta-theorems (for example, for Ax. (C1), we prove Formula$\forall\gamma, \gamma^{\prime}\in\Gamma.\ \vdash\square_{\gamma^{\prime}}(\square_{\gamma}\gamma)$). The two remaining cases, where Formula$\vdash\varphi$ is a rule application, were easy to prove manually.

Together, these results prove that Formula$\vdash\varphi$ implies that Formula$\varphi$ is also provable by the implementation; in other words, the implementation is complete. The correctness of the proof rests on a few assumptions: the soundness of the implementation itself, the correctness of the underlying SAT solver, and the correctness of our manual proofs. We are confident about the implementation's soundness, as the reduction rules it is based on are sound, and it has been extensively tested. To achieve an even higher level of confidence in the semi-mechanically proven completeness result, one could mechanically verify all computer-generated subproofs, since an automated proof verifier would be much smaller and simpler than our proof generator.

SECTION VIII

RELATED WORK

Trust Management

Blaze et al. coined the term ‘trust management’ in their seminal paper [12], referring to a set of principles for managing security policies, security credentials and trust relationships in a decentralized system. In their proposed paradigm, decentralization is facilitated by making policies depend on submitted credentials and by enabling local control over trust relationships. Policies, credentials and trust relationships should be expressed in a common language, thereby separating policy from the application. Early examples of trust management languages include PolicyMaker [12], KeyNote [11], and SPKI/SDSI [45], [26].

Li et al. [40], [39] argue that authorization in decentralized systems should depend on delegatable attributes rather than identity, and call systems that support such policies and credentials attribute-based access control (ABAC) systems. In essence, their ABAC paradigm is a refinement of trust management that makes the requirements on the expressiveness of credentials and policies more explicit: principals may assert parameterized attributes about other principals; authority over attributes may be delegated to other principals (that possess some specified attribute) via trust relationship credentials; and attributes may be inferred from other attributes. Their proposed policy language, RT, satisfies all these requirements. Like its predecessor, Delegation Logic (DL) [37], RT can be translated into Datalog. (A more expressive variant of RT, Formula${\rm RT}^{c}$ [38], can be translated into Datalog with constraints [33].)

Datalog has also been chosen as the basis of many other trust management languages. Examples include a language by Bonatti and Samarati [13], [14], SD3 [34], Binder [23], Cassandra [10], [9], a language by Wang et al. [53], one by Giorgini et al. [29], [30] and SecPAL [5], [6].

Apart from their relation to Datalog, what most of these languages have in common is that attributes are qualified by a principal who “says” the attribute and thereby vouches for its truth. In a credential, this principal coincides with the credential's issuer. For example, in Binder, the fact (or condition) that principal Formula$A$ is a student, according to authority Formula$C$, could be expressed as Formula$C$.isStudent Formula$(A)$; similarly, in SecPAL, one would write Formula$C$ says Formula$A$ isStudent. This qualifier does not extend Datalog's expressiveness, as it is easy to translate a qualified atom Formula$C.p(\vec{e})$ into a normal Datalog atom Formula$p(C,\vec{e})$.

The says operator can be traced back to an authorization logic by Abadi et al. (ABLP) [2], [35]. Even though it predates the paper by Blaze et al., ABLP could be seen as a trust management language. It introduced the says operator - but in contrast to the simpler Datalog-based languages, ABLP and related languages such as ICL [28], CCD [1] and DKAL [31], [32] treat the says (or said, in the case of DKAL) construct as a proper unary operator in the logic, which cannot be simply translated into an extra predicate parameter. Our semantics therefore does not cover these languages.

Previous work on trust management semantics

The Datalog-based languages inherit their semantics from Datalog. The most common way to present Datalog's semantics is as the minimal fixed point of the immediate consequence operator Formula${\bf T}_{\gamma}$, parameterized on a Datalog program Formula$\gamma$ [20]. The result is the set of all atoms Formula$p$ that are true in Formula$\gamma$. Our inductive definition of Formula$\gamma\Vvdash p$ coincides with this semantics: Formula$\gamma\Vvdash p$ iff Formula$p\in {\bf T}_{\gamma}^{\omega}(\emptyset)$. A model-theoretic semantics can be given by taking the minimal Herbrand model (i.e., the intersection of all Herbrand models) of Formula$\gamma$, and a proof-theoretic semantics can be defined using resolution strategies [3]. All three flavours of the standard semantics are equivalent, but, as we have shown in Section IV, they are not adequate for modeling Datalog-based trust management policies that are combined with varying sets of credentials.

Abadi et al. [2] define ABLP axiomatically and then give it a model-theoretic semantics based on Kripke structures. However, the axiomatization is not complete with respect to the semantics. Further work along these lines has been done by Garg and Abadi [28], who present sound and complete translations from a minimal logic with a says operator called ICL, and various extensions of it, into the classical modal logic S4. Similarly, Gurevich and Neeman [32] provide a Kripke semantics for DKAL2, the successor of DKAL [31]. These modal semantics are straightforward compared to the one presented here, but this is because they have a completely different focus, namely providing a modal interpretation of the says (or said) operator. As we have argued above, this operator is not very interesting in the context of the more practical, Datalog-based, languages (at least from a foundational point of view). The focus of our semantics is to give a modal interpretation of the turnstile operator :- in Datalog policies and of credential submissions in a trust management context.

Related logics and logic programming

It has been noted before that the standard Datalog semantics does not enjoy compositionality and full abstraction relative to program union. Gaifman and Shapiro [27] propose a semantics for logic programs that is compositional, fully abstract and preserves congruence with respect to program union. These properties are achieved by interpreting logic program clauses as implicational formulas, as a result of which all dependencies between atoms are preserved. However, this semantics does not give us the desired behavior (see Section IV). The problem, in essence, stems from the fact that material implication is inadequate as an interpretation of conditional if-then statements [46], and thus also of Datalog clauses (in our context) and credential submissions: if Formula$\neg p$ holds in a policy, it follows that Formula$p\rightarrow q$ also holds, for every Formula$q$. However, it should not follow that the clause Formula$q:-p$ is contained in the policy; and similarly, it is not justified to infer that Formula$q$ would hold if credential Formula$p$ were submitted and combined with the policy.

One of the main claims of this paper is that clauses and credential submissions ought to be modeled as counterfactual statements. The complexity of our semantics, then, stems from the fact that simple, truth-functional Boolean operators cannot offer an adequate account of counterfactuals. Stalnaker [49] and Lewis [36] were the first to propose a Kripke semantics for counterfactuals, based on a similarity ordering on worlds: essentially, “if Formula$p$ were true, then Formula$q$ would be true” holds in a world Formula$w$ if, of those worlds in which Formula$p$ is true, the ones that are most similar to Formula$w$ also make Formula$q$ true. Our semantics is based on the same basic framework. Our definition of what “most similar” means is novel, as is our counterfactual interpretation of Datalog. Therefore, our work could also be seen as a novel semantics for Datalog in general. However, the action of dynamically injecting varying sets of clauses into a Datalog program is a characteristic that is rather specific to trust management, hence it is more appropriate to frame our semantics specifically as a trust management semantics.

Much work has been done on axiomatizations of multimodal counterfactual logic. A good overview can be found in a paper by Ryan and Schobbens [48]. Their paper also contains a comprehensive listing of axioms proposed in the literature together with the corresponding frame conditions.

At first sight, hypothetical Datalog [15], [16] bears some resemblance to our logic. Hypothetical Datalog allows clauses such as Formula$p:-$ (Formula$q$: add Formula$r$), meaning “Formula$p$ is derivable provided that, if Formula$r$ were added to the rule base, Formula$q$ would hold”. In our logic, this would correspond to Formula$(q:-r)\rightarrow p$. But our logic is significantly more expressive: hypothetical Datalog cannot express statements that are hypothetical at the top-level and/or hypothetically add non-atomic clauses, for instance “Formula$p$ would hold if the conditional (credential) ‘if Formula$r$ were true then Formula$q$ would hold’ were added”. In our logic, this statement can be expressed as Formula$\square _{q:-r}p$. Moreover, the work on hypothetical Datalog (and similar works on hypothetical reasoning) is only concerned with query evaluation against concrete rule bases, and not with the harder problem of universal validity.

Probing attacks

We identified the security analysis of probing attacks as one practical area on which the present work is likely to have an impact. The problem of probing attacks has gained attention only rather recently. Gurevich and Neeman were the first to identify this general vulnerability of logic-based trust management systems [31]. In [4], probing attacks are framed in terms of the information flow property opacity [42], [18] and its negation, detectability. Becker and Koleini [7] developed a tool for checking detectability of confidential facts in Datalog policies, based on constructing a counter-policy, i.e., one that conforms to the given probes but makes the confidential fact false. The fact is detectable if and only if no such policy can be found. We use and extend their benchmark to compare the performance of our logic-based approach in Section VII-A.

SECTION IX

CONCLUDING DISCUSSION

Logics and semantics have long played an important, and successful, role in security research, especially in the area of cryptographic protocols [50]. (A prominent example has been the struggle to find an adequate semantics for BAN logic [19]; see e.g. [21], [17], [51], [52].) The area of trust management, however, has hitherto not been investigated from a foundational, semantics-based point of view.

Evaluating a Datalog policy is a straightforward task, and so is taking the union of two sets of Datalog clauses. At first sight, then, it may come as an unwelcome surprise that our semantics and axiomatization of Datalog-based trust management are so complex. Indeed, if one were only interested in the results of evaluating access queries against a concrete policy under a concrete set of submitted credentials, then a formal semantics would be unnecessary. But if one is interested in reasoning about the behavior of trust management systems, it is necessary to formulate universal truths that are quantified over all policies. Proving such statements is remarkably hard, even though the base language is so simple. As we have seen, neither the standard Datalog semantics nor the Kripke semantics for ABLP and related languages properly captures Datalog-based trust management. The situation has actually been worse than that of BAN logic,³ since, prior to the present work, not even a sound and complete proof system existed, let alone a formal semantics.

Our formal semantics is defined by the notion of TM models and the TM validity judgement Formula$\Vdash_{\rm TM}$, and the axiomatization of TM validity is given by the proof system Formula$\vdash$. Theorem V.3 shows that the proof system is sound and complete with respect to the semantics. So what role does the relation Formula$\Vvdash$, which we introduced in Section II, play? We need it, because a semantics, despite the term's etymology, does not really convey the meaning of the logic. As Read [44] puts it,

[f]ormal semantics cannot itself be a theory of meaning. It cannot explain the meaning of the terms in the logic, for it merely provides a mapping of the syntax into another formalism, which itself stands in need of interpretation.

Of course, the relation Formula$\Vvdash$ is also just “another formalism”, but it is one that is much closer to the natural language description of what a trust management system does, and can therefore more easily be accepted as “obviously” correct. Without it (and Theorem IV.11 providing the glue), there would be a big gap between the intuitive meaning of the language and its formalization.

What the formal semantics does provide is a number of alternative, less obvious, interpretations of a trust management system. TM models are abstract, purely mathematical objects that are independent of the language's syntax. They capture precisely (and only) the essential aspects of a trust management system. The easiest interpretation of a TM model is a graph in which two policies are connected when one is the result of submitting a set of credentials to the other.

A deeper alternative interpretation is that trust management logic is a counterfactual logic - a logic that avoids the paradoxes of material implication. Both policy clauses as well as statements about credential submissions are counterfactual, rather than implicational, statements. They state what would be the case if something else were the case.

As Ryan and Schobbens have noted, counterfactual statements can also be interpreted as hypothetical minimal updates to a knowledge base [48]. Under this interpretation, a credential submission Formula$\square _{\gamma}\varphi$ would be equivalent to saying that Formula$\varphi$ holds in a policy after it has been minimally updated with credential set Formula$\gamma$. The restrictions on the accessibility relation (Def. IV.7) can then be seen as a precise specification of what constitutes a minimal update to a policy.

Hence, from a foundational point of view, our semantics provides new insights into the nature of trust management. From a more practical point of view, it led us to an axiomatization that can be mechanized. We showed how our implementation could be put to good use by applying it to the analysis of probing attacks. It is the first automated tool that can feasibly check real-world probing attacks of realistic size, comprising millions of probes. But our implementation is a general automated theorem prover for our language, the expressiveness of which goes far beyond that needed for probing attacks. In particular, we used the implementation to prove general meta-theorems about trust management - some of which are intuitively obvious (but not necessarily easy to prove), and some of which are decidedly non-trivial (such as Lemma V.2 or lemmas that help prove the implementation's own completeness).

Our logic is decidable, since every formula is equivalent to a (potentially much larger) propositional formula. However, the complexity of the logic remains an open question. We also leave the development of a first-order version of the logic to future work: in this version, atoms would be predicates with constant and variable parameters, and clauses would be implicitly closed under universal quantification.

ACKNOWLEDGEMENTS

Alessandra Russo is funded in part through the US Army Research laboratory and the UK Ministry of Defence under Agreement Number W911NF-06-3-0001. We thank Mark Ryan and Stephen Muggleton for fruitful discussions, and Christoph Wintersteiger for his support with Z3. We are also grateful for the valuable comments from the anonymous reviewers.

Footnotes

1. In practice, first-order predicates are used as atoms instead of propositional letters, but if the domain is finite, as is usually the case, the first-order case reduces to the propositional one. We choose the latter presentation for simplicity.

2. Z3 [22] is an SMT solver, but we only use its SAT solving capabilities.

3. Cohen and Dam succinctly described the BAN situation thus [21]: “While a number of semantics have been proposed for BAN and BAN-like logics, none of them capture accurately the intended meaning of the epistemic modality in BAN […]. This situation is unsatisfactory. Without a semantics, it is unclear what is established by a derivation in the proof system of BAN: A proof system is merely a definition, and as such it needs further justification.”


Authors

Moritz Y. Becker, Alessandra Russo, Nik Sultana
