Reasoning About Privacy Properties of Architectures Supporting Group Authentication and Application to Biometric Systems


Abstract

This paper follows a recent line of work that advocates the use of formal methods to reason about privacy properties of system architectures. We propose an extension of an existing formal framework, motivated by the need to reason about properties of architectures including group authentication functionalities. By group authentication, we mean that a user can authenticate on behalf of a group of users, thereby keeping a form of anonymity within this set. Then we show that this extended framework can be used to reason about privacy properties of a biometric system in which users are authenticated through the use of group signatures.




1 Introduction

The privacy-by-design approach promotes the consideration of privacy requirements from the early design stage of a system. As an illustration of the importance of this topic, the General Data Protection Regulation adopted by the European trilogue (the European Commission, the European Parliament and the Council) in December 2015 [7] introduces privacy-by-design and privacy-by-default as legal obligations. Architectural choices have a strong effect on the privacy properties provided by a system. For this reason, the authors of [1] argue that key decisions regarding the design of a system should be taken at the architecture level. They introduce a formal framework for reasoning about privacy properties of architectures. The description of an architecture within this framework specifies the capacities of each component, the communications between them, the location of the computations and the data, and the trust relationships between the stakeholders. A dedicated privacy logic is used to express the privacy properties of the architectures. The use of formal methods enables precise definitions of properties and comparisons between architectures. It also makes it possible to provide a rigorous justification for the design choices.

As a first contribution of this paper, we propose an extension of this formal framework and show that it can be used to reason about properties of architectures supporting group authentication. By group authentication, we mean that a user can authenticate on behalf of a group of users. Several cryptographic primitives have been designed to achieve this goal. Our work provides the formal tools needed to reason about the properties of architectures involving these primitives, especially the guarantees that are provided in terms of privacy.

As a second contribution of this paper, we apply our extended framework to biometric systems. In a biometric system, users are authenticated with their biometric traits. The work of [3] uses the formal framework of [1] to reason about privacy properties of biometric architectures but it cannot deal with group signatures. We show that the extended framework can be used to reason about privacy properties of a biometric system in which users are authenticated by group signatures.

The interest of group signatures in the context of biometrics has been demonstrated in several settings. For example, the biometric system architecture analysed in this paper was proposed in TURBINE [16], a European project which aimed at solving privacy concerns regarding the use of fingerprint biometrics for ID management. The application of this architecture was a pharmacy product research system. Pharmacists, working for instance at their selling desks, authenticate themselves to a pharmacy administration system. Authentication is based on a card owned by the employee, as well as their fingerprint. Thanks to the use of group signatures, a remote server (which does not get the fingerprint) is convinced that a valid enrolled user authenticates, without knowing precisely who this user is among the set of valid users (i.e., the employees).

Organization of the paper. Section 2 supplies an overview of the formal framework of [1]. Section 3 introduces our extension of this model. Section 4 presents the biometric architecture we are interested in, describes it within the architecture language of the formal framework, and analyses its privacy properties. Finally, we discuss in Sect. 5 some variants of the biometric architecture, before concluding in Sect. 6.

2 Reasoning About Privacy Properties of Architectures

In this section, we provide an overview of the framework introduced in [1] which is the foundation for our work. The interested reader can refer to [1] for a more complete description of the framework.

This framework relies on a dedicated epistemic logic for expressing privacy properties. Epistemic logics are good candidates to express privacy properties since they deal with the notion of knowledge. However, the standard possible-worlds semantics for these logics leads to a well-known issue called the logical omniscience problem [9]. In a nutshell, any agent knows all the logical consequences of its knowledge. To get around this issue, the authors of [1] adopt an approach based on deductive algorithmic knowledge [13]. In this context, each component of an architecture is endowed with its own deductive capabilities.

Architectures are described with a dedicated architecture language. Then the semantics of a privacy property is defined as the architectures in which the property holds.

2.1 A Privacy Architecture Language

First of all, the functionality of a system is described by a set \(\varOmega = \{X = T\}\) of equations over the following term language.

A term T may be a variable X (\(X \in Var\)), a constant c (\(c \in Const\)), or a function F (\(F \in Fun\)) applied to some variables.

Then the architecture of a system is described by the following architecture language.

An architecture A is associated with a set of components \(\mathcal {C} = \{C_1, \dots , C_{|\mathcal {C}|}\}\). In the architectural primitives, i and j stand respectively for \(C_i\) and \(C_j\), and \(G \subseteq \mathcal {C}\) denotes a set of components.

In the above syntax, \(\{Z\}\) denotes a set of elements of category Z. Pred denotes a predicate, the set of predicates depending on the architectures to be considered. \(Has_i (X)\) denotes the fact that component \(C_i\) possesses (or is the origin of) the value of X, which may correspond to situations in which X is stored on \(C_i\) or \(C_i\) is a sensor collecting the value of X. \(Receive_{i,j} (\{St\}, \{X\})\) means that \(C_i\) can receive the values of the variables in \(\{X\}\) together with the statements in \(\{St\}\) from \(C_j\).

\(Attest_i (\{Eq\})\) is the declaration by \(C_i\) that the properties in \(\{Eq\}\) hold and \(Proof_i (\{P\})\) is the delivery by \(C_i\) of a set of proofs of properties. \(Verify_i\) is the verification by component \(C_i\) of the corresponding statements (proof or authenticity). \(Compute_G (X = T)\) means that the set of components G can compute the term T and assign its value to X, and \(Trust_{i, j}\) represents the fact that component \(C_i\) trusts component \(C_j\).
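
To fix intuitions, an architecture can be handled as a plain set of primitive instances. The following Python sketch is our own illustrative encoding (it is not part of the framework of [1], and the component and variable names are hypothetical); simplified versions of it are reused in the later sections.

from dataclasses import dataclass
from typing import FrozenSet, Tuple

@dataclass(frozen=True)
class Has:            # Has_i(X): C_i possesses (or is the origin of) X
    i: str
    X: str

@dataclass(frozen=True)
class Attest:         # Attest_i({Eq}): C_i declares that the equations hold
    i: str
    E: Tuple[str, ...]

@dataclass(frozen=True)
class Receive:        # Receive_{i,j}({St}, {X}): C_i receives statements and variables from C_j
    i: str
    j: str
    statements: Tuple[object, ...]
    variables: Tuple[str, ...]

@dataclass(frozen=True)
class Verify:         # Verify_i(St): C_i verifies a statement (proof or attestation)
    i: str
    statement: object

@dataclass(frozen=True)
class Compute:        # Compute_G(X = T): the components of G compute T and assign it to X
    G: FrozenSet[str]
    equation: str

@dataclass(frozen=True)
class Trust:          # Trust_{i,j}: C_i trusts C_j
    i: str
    j: str

# Toy architecture: a terminal T computes a decision, attests it, and a server S,
# which trusts T, receives and verifies the attestation.
att = Attest("T", ("dec = mu(br, bs, thr)",))
A = {
    Has("T", "br"), Has("T", "bs"),
    Compute(frozenset({"T"}), "dec = mu(br, bs, thr)"),
    Receive("S", "T", (att,), ("dec",)),
    Verify("S", att),
    Trust("S", "T"),
}
print(len(A))   # 6 architectural primitives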

Graphical data flow representations can be derived from architectures expressed in this language. For the sake of readability, we use both notations in the next sections.

All architectures are assumed to satisfy minimal consistency assumptions, in order to restrict the analysis to those which make sense. For instance, a component can send a variable only if it possesses, computes or receives this variable.

Events are instantiations of the architectural primitives (trust relations excepted). Traces are sequences of events, defined according to the following trace language.

\(\mathsf {Seq} (\epsilon )\) denotes an ordered sequence of events \(\epsilon \). When instantiating a primitive containing a variable X, the notation X : V means that the variable X receives the value V. Let Val be the set of values that the variables can take. \(T^\epsilon \) is a term in which values have been assigned to variables. The set \(Val_\bot \) is defined as \(Val \cup \{\bot \}\), where \(\bot \not \in Val\) is a specific symbol used to denote that a variable has not been assigned.

As for architectures, only traces satisfying consistency assumptions are considered.\(\langle \rangle \) denotes the empty trace (with no event).

A trace \(\theta \) of events is said to be compatible with an architecture A if each event in \(\theta \) (except the computations) can be obtained by instantiation of an element of A (Receive, Verify, etc.). Let T(A) be the set of traces which are compatible with an architecture A.

Each component \(C_i\) is associated with a dependence relation \(Dep_i\). For a variable Y and a set \(\mathcal {X}\) of variables, \(Dep_i (Y, \mathcal {X})\) – equivalently \((Y, \mathcal {X}) \in Dep_i\) – means that the value of Y can be obtained by the component \(C_i\) if it gets access to the value of X for each \(X \in \mathcal {X}\).

Each component \(C_i\) is also associated with a deductive system, denoted \(\triangleright _i\), allowing it to derive new knowledge. \(\triangleright _i\) is defined as a relation between equations, \(\{Eq_1, \dots , Eq_n\} \triangleright _i Eq_0\), where equations over terms are defined according to the following syntax.

By a slight abuse of notation, Eq overloads the Eq definition of the architecture language, where conjunctions of equations are also allowed.
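
The dependence relation and the deductive system of a component can be pictured in the same style. The sketch below is a minimal illustration under two simplifying assumptions of ours: equations are treated as opaque strings, and \(\triangleright _i\) is given by a finite set of rules. Note that, unlike a logically omniscient agent, the component only derives what its rules allow.

from typing import FrozenSet, Set, Tuple

# Dep_i: (Y, {X1, ..., Xn}) means that C_i can obtain Y once it has X1, ..., Xn.
Dep_i: Set[Tuple[str, FrozenSet[str]]] = {
    ("db", frozenset({"br_1", "br_2"})),              # hypothetical dependences
    ("br_1", frozenset({"db"})),
}

# The deductive system as a finite set of rules ({Eq_1, ..., Eq_n}, Eq_0).
Rules_i: Set[Tuple[FrozenSet[str], str]] = {
    (frozenset({"x = F(y)", "y = c"}), "x = F(c)"),   # hypothetical rule
}

def closure(knowledge: Set[str], rules: Set[Tuple[FrozenSet[str], str]]) -> Set[str]:
    """Saturate a knowledge state under the rules (iterate to a fixpoint)."""
    known = set(knowledge)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

print(closure({"x = F(y)", "y = c"}, Rules_i))
# {'x = F(y)', 'y = c', 'x = F(c)'} and nothing more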

Finally, the semantics of an architecture is defined from the traces of events. Each component is associated with a state. Each event in a trace of events affects the state of each component involved in the event. The semantics S(A) of an architecture A is defined as the set of states reachable by compatible traces.

2.2 A Privacy Logic

Privacy properties of architectures are expressed with the following language.

The knowledge operator \(K_i\) represents the knowledge of the component \(C_i\). The formula \(Has_i (X)\) represents the fact that \(C_i\) can get access to the variable X.

The semantics \(S (\phi )\) of a property \(\phi \) is defined as the set of architectures where \(\phi \) is satisfied. The fact that a property \(\phi \) is satisfied by a (consistent) architecture A is defined for each property as follows.

  • A satisfies \(Has_i (X)\) if there is a reachable state of \(C_i\) in which X is not undefined.

  • A satisfies \(Has^{none}_i (X)\) if no compatible trace leads to a state in which \(C_i\) assigns a value to X.

  • A satisfies \(K_i (Eq)\) if from all reachable states \(C_i\) can deduce Eq.

  • A satisfies \(\phi _1 \wedge \phi _2\) if A satisfies \(\phi _1\) and A satisfies \(\phi _2\).

Based on the semantics of properties, [1] introduces a set of deductive rules which can be used to reason about privacy properties of architectures. This deductive system is shown correct and complete with respect to the semantics of the properties.

\(A \vdash \phi \) denotes that \(\phi \) can be derived from A – in other words, that there exists a derivation tree such that each step belongs to the axiomatics and the leaf is \(A \vdash \phi \). A subset of this axiomatics, useful for this paper, is presented in Fig. 1a.

Fig. 1. Axiomatics

3 Adding a Group Attestation to the Formal Model

As a first step to extend the architecture language of [1], we introduce the primitive \(Attest_G (E)\), where G is a group of components and E a set of equations. This primitive generalizes \(Attest_i (E)\), which involves a single component \(C_i\). Section 3.1 defines the semantics of the traces containing these events and Sect. 3.2 extends the set of deductive rules.

3.1 Semantics of Traces

The semantics of a trace is defined by specifying, for each event, its effect on the states of the components.

The state of a component is either the Error state or a pair consisting of: (i) a variable state assigning values to variables, and (ii) a property state defining the current knowledge of a component. In the initial state of an architecture A, denoted \(Init^A = \langle Init^A_1, \dots , Init^A_{|\mathcal {C}|} \rangle \), the variables are undefined and the knowledge state only contains the trust primitives.

Let \(\sigma \) denote the global state, and \(\sigma _i\) denote the state of component \(C_i\). The semantics of traces, denoted \(S_T\), is defined recursively over sequences of events.

$$\begin{aligned} S_T (\langle \rangle , \sigma )&= \sigma \\ S_T (\epsilon \cdot \theta , \sigma )&= S_T (\theta , S_E (\epsilon , \sigma )). \end{aligned}$$

The function \(S_E\), which defines the effect of the events, is defined for each type of event. The modification of a state is denoted \(\sigma [\sigma _i/(v, pk)]\), the variable and knowledge states of \(C_i\) being replaced by v and pk respectively. \(\sigma [\sigma _i /Error]\) denotes that the Error state is reached for component \(C_i\). A component reaching an Error state is no longer involved in any action.

Restricting our attention to the events which contain a group attestation leads us to consider the events \(Verify_i (Attest_G (E))\) and \(Verify_i (Proof_j (E))\). The semantics of the verification events are defined according to the (implicit) semantics of the underlying verification procedures. In both cases, the knowledge state of the component is updated if the verification passes, otherwise the component reaches an Error state. The variable state is not affected. Informally, a verification event containing a generalized attestation statement generates new knowledge only if all possible authors of the attestation are trusted by the verifying component \(C_i\).

$$\begin{aligned} S_E (Verify_i (Proof_j (E)), \sigma ) =&{\left\{ \begin{array}{ll} \sigma [\sigma _i / Error] \quad \text {if the proof is not valid}, \\ \sigma [\sigma _i/ (\sigma _i^v, \sigma _i^{pk} \cup new^{pk}_{Proof})] \quad \text {otherwise}, \end{array}\right. } \end{aligned}$$
$$\begin{aligned} S_E (Verify_i (Attest_G (E)), \sigma ) =&{\left\{ \begin{array}{ll} \sigma [\sigma _i / Error] \quad \text {if the attestation is not valid}, \\ \sigma [\sigma _i / (\sigma _i^v, \sigma _i^{pk} \cup new^{pk}_{Attest})] \quad \text {otherwise}, \end{array}\right. } \end{aligned}$$

where the new knowledge \(new^{pk}_{Proof}\) is defined as:

$$\begin{aligned} new^{pk}_{Proof} = \{Eq \mid Eq \in E\} \cup \{Eq' \mid \exists \, Attest_G (E') \in E: Eq' \in E' \text { and } \forall k \in G: Trust_{i, k} \in \sigma _i^{pk}\} \end{aligned}$$
(1)

and the new knowledge \(new^{pk}_{Attest}\) is defined as:

$$\begin{aligned} new^{pk}_{Attest} = \{Eq \mid Eq \in E \text { and } \forall k \in G: Trust_{i, k} \in \sigma _i^{pk}\} \end{aligned}$$
(2)
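
The following sketch is an assumed Python reading of these definitions (our own simplification, restricted to knowledge states and to \(Verify_i (Attest_G (E))\) events); it makes the role of the trust condition explicit: the knowledge of the verifier grows by the attested equations only when every possible author of the attestation is trusted.

from typing import Dict, FrozenSet, List, Set, Tuple

ERROR = object()                                      # marker for the Error state
Knowledge = Dict[str, object]                         # component -> knowledge state or ERROR
Event = Tuple[str, str, FrozenSet[str], Tuple[str, ...], bool]
# ("verify_attest", i, G, E, valid): C_i verifies Attest_G(E); 'valid' records
# whether the underlying verification procedure succeeds.

def S_E(event: Event, sigma: Knowledge, trust: Set[Tuple[str, str]]) -> Knowledge:
    """Effect of a single Verify_i(Attest_G(E)) event on the knowledge states."""
    _, i, G, E, valid = event
    if sigma[i] is ERROR:                             # an Error component takes no further action
        return sigma
    new_sigma = dict(sigma)
    if not valid:
        new_sigma[i] = ERROR                          # invalid attestation: Error state
    elif all((i, k) in trust for k in G):             # all possible authors are trusted
        new_sigma[i] = set(sigma[i]) | set(E)         # new knowledge, cf. Eq. (2)
    return new_sigma

def S_T(trace: List[Event], sigma: Knowledge, trust: Set[Tuple[str, str]]) -> Knowledge:
    """The recursive definition of S_T, unrolled as a left-to-right fold of S_E."""
    for event in trace:
        sigma = S_E(event, sigma, trust)
    return sigma

# Toy run: the server S trusts U1 and U2 and verifies a group attestation.
trust = {("S", "U1"), ("S", "U2")}
init = {"S": set(), "U1": set(), "U2": set()}
trace = [("verify_attest", "S", frozenset({"U1", "U2"}), ("br_i in db",), True)]
print(S_T(trace, init, trust)["S"])                   # {'br_i in db'}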

3.2 Axiomatics

The next challenge in dealing with group attestation is to extend the set of deductive rules while preserving correctness and completeness. Our axioms for group attestation are presented in Fig. 1b. In the remainder of this section, we show that the correctness and the completeness of the axiomatics still hold with these new axioms.

Correctness. Let A be a consistent architecture and \(\phi \) a property. The correctness theorem states that if there exists a derivation tree for this property (\(A \vdash \phi \)), then this property holds in the architecture (\(A \in S (\phi )\)).

The proof is made by induction on the depth of the derivation tree of \(A \vdash \phi \). Let us restrict our attention to the cases where \((\mathsf {K4^{+}})\) and \((\mathsf {K5^{+}})\) are used. That is, let us assume that \(A \vdash K_i (Eq)\) and that the derivation tree is of depth 1. By definition of the set of axioms, such a proof is obtained by application of (K1), (K3), \((\mathsf {K4^{+}})\) or \((\mathsf {K5^{+}})\). Let us focus on the \(\mathsf {K4^{+}}\) and \(\mathsf {K5^{+}}\) cases.

\(\mathsf {K4^{+}}\). Let us assume that \(Verify_i (Proof_j (E)) \in A\), \(Attest_G (E') \in E\) and \(\forall k \in G\): \(Trust_{i, k} \in A\) for some i, j and G. Our goal is to prove that \(\forall Eq \in E'\): \(A \in S (K_i (Eq))\).

Let us consider a given state \(\sigma ' \in S_i (A)\). By the architecture semantics, there exists a consistent trace \(\theta '\), compatible with A, such that \(\sigma ' = S_T (\theta ', Init^A)\). Two cases may happen. Either \(\theta '\) contains an event \(Verify_i (Proof_j (E))\) such that \(Attest_G (E') \in E\), and we let \(\theta := \theta '\), or it does not. In the latter case, we extend \(\theta '\) into a trace \(\theta \) which contains such an event without breaking the consistency of the trace.

In either case, there exists a trace \(\theta \) which extends \(\theta '\) and contains an event \(Verify_i (Proof_j (E))\) such that \(Attest_G (E') \in E\). Let \(\sigma = S_T (\theta , Init^A)\). Since an Error state has not been reached (we have \(\sigma ' \in S_i (A)\)), and since \(\forall k \in G: Trust_{i, k} \in \sigma _i^{pk}\) by definition of the initial state, the semantics of the group attestation (Eq. (1)) gives \(\forall Eq \in E'\): \(Eq \in \sigma ^{pk}_i\).

By the definition of the architecture semantics, we deduce that \(\sigma \in S (A)\). The prefix order over the traces, together with the definition of the trace semantics, induces a prefix order over the states, hence \(\sigma \ge _i \sigma '\). By the reflexivity of the deductive algorithmic knowledge, we have \(\forall Eq \in E'\): \(\sigma ^{pk}_i \triangleright _i Eq\). By the semantics of the properties, we conclude that \(\forall Eq \in E'\): \(A \in S (K_i (Eq))\).

\(\mathsf {K5^{+}}\). Let us assume that \(Verify_i (Attest_G (E)) \in A\) and \(\forall k \in G\): \(Trust_{i, k} \in A\). We must show that \(\forall Eq \in E\): \(A \in S (K_i (Eq))\). The adaptation of the \(\mathsf {K4^{+}}\) case to the \(\mathsf {K5^{+}}\) case is straightforward, invoking Eq. (2) of the trace semantics instead of Eq. (1).

Completeness. Let A be a consistent architecture and \(\phi \) a property. The completeness theorem states that if the property holds in the architecture (\(A \in S (\phi )\)), then there exists a derivation tree for this property (\(A \vdash \phi \)).

The proof is made by induction over the definition of the property \(\phi \). We restrict our attention here to the knowledge operator \(K_i\). Let us assume that \(A \in S (K_i (Eq))\) for a given component \(C_i\) and equation Eq. We must show that \(A \vdash K_i (Eq)\).

By the semantics of properties, \(A \in S (K_i (Eq))\) means that \(\forall \sigma ' \in S_i (A)\): \(\exists \sigma \in S_i (A)\): \(\sigma ^{pk}_i \triangleright _i Eq\). By the semantics of architectures, \(\exists \theta \in T (A)\) such that \(\sigma = S_T (\theta , Init^A)\) and \(\sigma ^{pk}_i \triangleright _i Eq\). By the semantics of the traces, this implies one of the following statements: either there exists \(Compute_G (X = T^\epsilon ) \in \theta \) where \(Eq := (X = T)\), \(C_i \in G\) and \(T^\epsilon \) is obtained from T by assigning values to variables; or there exists \(Verify_i (Proof_j (E)) \in \theta \) where \(Eq \in E\); or there exists \(Verify_i (Proof_j (E)) \in \theta \) where \(Attest_G (E') \in E\), \(Eq \in E'\) and \(\forall k \in G\): \(Trust_{i, k} \in \sigma ^{pk}_i\); or there exists \(Verify_i (Attest_G (E)) \in \theta \) where \(Eq \in E\) and \(\forall k \in G\): \(Trust_{i, k} \in \sigma ^{pk}_i\).

By the compatibility of the traces, we deduce that: either \(Compute_G (X = T) \in A\) where \(Eq := (X = T)\) and \(C_i \in G\); or \(Verify_i (Proof_j (E)) \in A\) where \(Eq \in E\); or \(Verify_i (Proof_j (E)) \in A\) where \(Attest_G (E') \in E\), \(Eq \in E'\) and \(\forall k \in G\): \(Trust_{i, k} \in A\); or \(Verify_i (Attest_G (E)) \in A\) where \(Eq \in E\) and \(\forall k \in G\): \(Trust_{i, k} \in A\). We conclude that \(A \vdash K_i (Eq)\) by applying (respectively) (K1), (K3), \((\mathsf {K4^{+}})\) or \((\mathsf {K5^{+}})\).

4 Modelling a Biometric Architecture Supporting Group Authentication

4.1 A Biometric Architecture Using Group Signatures

Biometric systems involve two main phases: enrolment and verification (either authentication or identification) [10]. Enrolment is the registration phase, in which the biometric traits of a person are collected and recorded within the system. In the authentication mode, a fresh biometric trait is collected and compared with the registered one by the system to check that it corresponds to the claimed identity. In the identification mode, a fresh biometric trait is collected and the corresponding identity is searched for in a database of enrolled biometric references.

A group signature scheme [2] is an advanced cryptographic mechanism. It enables a user to sign messages on behalf of a group of users while staying anonymous inside this group. With a (public) verification algorithm, anyone can be convinced, given a group public key, a message and a signature, that a certain member of the group authenticates the message.

The biometric system introduced in [4] aims at achieving some anonymity from the server's point of view. The server is convinced that the authentication was successful for a certain enrolled user, but has no information about which one among them. During the enrolment, a biometric reference is registered by the issuer. The issuer derives a user secret key from the biometric template and computes a group secret key, that is, a certificate attesting the enrolment inside the group. The user gets a card containing their biometric reference and the group certificate.

During the verification phase, the terminal gets a fresh capture of the biometric trait and computes a fresh template. A match between the fresh template and the reference is performed by the terminal. In case of success, the terminal derives the user secret key from the reference, produces a group signature thanks to the user secret key and the certificate (both are needed to produce a valid signature), and sends the signature to the server. The server checks the signature attesting that a registered user authenticates. If the signature is valid, the server is convinced of the correctness of the matching. However, it has access neither to the biometric templates nor to the identity of the user who authenticates.
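
To make the data flow concrete, here is a rough Python sketch of this verification phase. All biometric and cryptographic operations (extract, match, derive_usk, group_sign, group_verify) and the key names (gpk, usk, cert) are hypothetical placeholders: the actual system of [4] instantiates them with a fingerprint matcher and the Boneh-Shacham group signature scheme [2].

def extract(rd):
    """Derive a fresh biometric template bs from the raw capture rd (placeholder)."""
    return ("tpl", rd)

def match(bs, br, thr):
    """Closeness test between the fresh and reference templates (placeholder)."""
    return bs[1] == br[1]

def derive_usk(br):
    """Derive the user secret key from the biometric reference (placeholder)."""
    return ("usk", br)

def group_sign(usk, cert, msg):
    """Group signature: both the user secret key and the certificate are needed."""
    return ("gsig", msg) if usk and cert else None

def group_verify(gpk, msg, sig):
    """Convinces the verifier that some group member signed msg, without saying who."""
    return sig == ("gsig", msg)

def terminal_verification(card, rd, thr, challenge):
    """Terminal side: TM matches, then TS signs on behalf of the group of enrolled users."""
    br, cert = card                          # reference template and group certificate
    bs = extract(rd)
    if not match(bs, br, thr):               # matching fails: no signature is produced
        return None
    return group_sign(derive_usk(br), cert, challenge)

def server_verification(gpk, challenge, sig):
    """Server side: sees neither br nor bs, nor which enrolled user authenticated."""
    return sig is not None and group_verify(gpk, challenge, sig)

card = (("tpl", "raw_alice"), "cert_alice")
sig = terminal_verification(card, "raw_alice", thr=0.1, challenge="session-42")
print(server_verification("gpk", "session-42", sig))   # True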

4.2 Description Within the Formal Framework

For the sake of clarity, let us distinguish the biometric system from its formalization. We denote by \(B_\mathsf {gs}\) the biometric system introduced in [4] and by \(A_\mathsf {gs}\) its definition within the formal framework, which we present below.

Fig. 2. High-level view of the biometric system architecture using group signatures

Upper case sans serif letters in \(A_\mathsf {gs}\) denote components. The components of the \(A_\mathsf {gs}\) architecture are a set of N enrolled users \(\mathcal {U} := \{\mathsf {U}_1, \dots , \mathsf {U}_N\}\) (each user \(\mathsf {U}_i\) owning a card \(\mathsf {C}_i\)), a server S, an issuer \(\mathsf {I}\) and a terminal modelled by two components \(\mathsf {TM}\) and \(\mathsf {TS}\). The issuer I enrols the users. The server S manages a database containing the enrolled templates. The terminal is equipped with a sensor used to acquire biometric traits. Formally, the terminal is split into two components TM and TS, corresponding to its two functionalities: the matcher TM acquires the fresh template and performs the comparison, and the signer TS authenticates on behalf of the group of users. As shown by the variants below, this distinction is motivated by the different trust assumptions a designer may consider.

Typewriter letters denote variables. \(\mathtt {br}_i\) denotes the biometric reference template of the user \(\mathsf {U}_i\), built during the enrolment phase. \(\mathtt {rd}\) denotes a raw biometric data provided by the user during the verification phase. \(\mathtt {bs}\) denotes a fresh template derived from \(\mathtt {rd}\) during the verification phase. A threshold \(\mathtt {thr}\) is used during the verification phase as a closeness criterion for the biometric templates. The output \(\mathtt {dec}\) of the verification is the result of the matching between the fresh template \(\mathtt {bs}\) and the enrolled templates \(\mathtt {br}_i\), considering the threshold \(\mathtt {thr}\). \(\mathtt {db}\) denotes the database of the registered biometric templates.

As in [3], we focus on the verification phase and assume that enrolment has already been done. The database \(\mathtt {db}\) is computed by the issuer from all the references, using the function \(DB \in Fun\). A verification process is initiated by the terminal receiving as input a raw biometric data \(\mathtt {rd}\) from the user. The terminal, more precisely the TM component, extracts the fresh biometric template \(\mathtt {bs}\) from \(\mathtt {rd}\) using the function \(Extract \in Fun\). The matching is expressed by the function \(\mu \in Fun\), which takes as arguments two biometric templates and the threshold \(\mathtt {thr}\). The terminal reads the biometric reference \(\mathtt {br}_i\) from the card. The user receives the final decision \(\mathtt {dec}\) of the matching from the terminal TM. Then the terminal, here the TS component, attests that the fresh template belongs to the set of enrolled templates.

The complete description of \(A_\mathsf {gs}\) within the architecture language is as follows. Figure 2 sketches this description. When indices i are used, it is assumed that the corresponding primitive exists in \(A_\mathsf {gs}\) for all users. For instance, \(Has_\mathsf {I} (\mathtt {br}_i) \in A_\mathsf {gs}\) implicitly means that \(\forall \mathsf {U}_i \in \mathcal {U}\): \(Has_\mathsf {I} (\mathtt {br}_i) \in A_\mathsf {gs}\).

To complete the description of \(A_\mathsf {gs}\), it remains to define the dependence relations between the variables. The database is computed from all the references: \(\forall j \in \mathcal {C}\): \((\mathtt {db}, \{\mathtt {br}_1, \dots , \mathtt {br}_N\}) \in Dep_j\). Conversely, access to \(\mathtt {db}\) gives access to all the \(\mathtt {br}_i\): \(\forall j \in \mathcal {C}, \mathsf {U}_i \in \mathcal {U}\): \((\mathtt {br}_i, \{\mathtt {db}\}) \in Dep_j\). Moreover, \(\forall j \in \mathcal {C}, \mathsf {U}_i \in \mathcal {U}\), we also have \((\mathtt {bs}, \{\mathtt {rd}\})\), \((\mathtt {dec}, \{\mathtt {br}_i, \mathtt {bs}\})\), \((\mathtt {dec}, \{\mathtt {br}_i, \mathtt {rd}\}) \in Dep_j\).
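
Since the full listing is best read together with Fig. 2, the following Python fragment gives an illustrative rendering of this description for N = 2 users. It is our own encoding, derived from the prose above rather than copied from the authors' listing, and it uses a lightweight tagged-tuple representation to keep the fragment short; the same dependence set Dep is used here for every component \(C_j\).

N = 2
USERS = [f"U{i}" for i in range(1, N + 1)]
CARDS = [f"C{i}" for i in range(1, N + 1)]

A_gs = set()
for i, (U, C) in enumerate(zip(USERS, CARDS), start=1):
    A_gs |= {
        ("has", "I", f"br_{i}"),                        # the issuer holds the references
        ("has", C, f"br_{i}"),                          # the card stores the reference
        ("receive", "TM", C, f"br_{i}"),                # the terminal reads the card
        ("receive", U, "TM", "dec"),                    # the user gets the decision
    }
A_gs |= {
    ("has", "TM", "rd"),                                # the sensor collects the raw data
    ("compute", frozenset({"TM"}), "bs = Extract(rd)"),
    ("compute", frozenset({"TM"}), "dec = mu(br_i, bs, thr)"),
    ("compute", frozenset({"I"}), "db = DB(br_1, ..., br_N)"),
    # TS authenticates on behalf of the group of users; the server verifies.
    ("verify_attest", "S", frozenset(USERS), ("br_i in db",)),
}
A_gs |= {("trust", "S", U) for U in USERS}              # the server trusts the users

Dep = {("db", frozenset(f"br_{i}" for i in range(1, N + 1)))}
Dep |= {(f"br_{i}", frozenset({"db"})) for i in range(1, N + 1)}
Dep |= {("bs", frozenset({"rd"}))}
Dep |= {("dec", frozenset({f"br_{i}", "bs"})) for i in range(1, N + 1)}
Dep |= {("dec", frozenset({f"br_{i}", "rd"})) for i in range(1, N + 1)}

print(len(A_gs), len(Dep))                              # sizes of the encoding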

4.3 Trusting a Group of Users

In the biometric system architecture \(A_\mathsf {gs}\), the group of users is trusted by the server, which is denoted \(\forall \mathsf {U}_i \in \mathcal {U}\): \(Trust_{\mathsf {S}, \mathsf {U}_i}\). However, the formalization does not specify which cryptographic primitive is used in the concrete \(B_\mathsf {gs}\) system. Let us discuss this point in more detail.

In a group signature scheme, users are typically not trusted; instead, a group manager, called the issuer, is trusted. When it enrols a user, the issuer provides a group secret key, also known as a membership certificate, concretely a signature of some secret user-specific data. In other words, it attests that the user is enrolled. Then the untrusted user proves that she is enrolled (by supplying a zero-knowledge proof of her user secret data and the corresponding membership certificate). In our case, the server does not trust the card, but trusts the issuer of the card. The card contains an attestation that the user was indeed enrolled by the issuer, here a certificate for a group signature, i.e., a group secret key.

The point to notice is that we do not model the internal machinery of the group signature scheme in our formal architecture. We only express the fact that the group is trusted. Whether this trust assumption is justified in practice is not part of the reasoning about the architecture: it rather concerns the justification of the choice of certain primitives to achieve the functionality. With the same trust assumption (all users are trusted), other primitives can be used, such as ring signatures [14], where a member authenticates on behalf of a group without a group manager.

The use of group signatures is a choice made at the protocol level. Checking the conformity between the protocols and the architecture is out of scope of this paper. This line of work has been initiated in [15].

4.4 Application of the Axiomatics

We now reason about the privacy properties of the \(A_\mathsf {gs}\) architecture from the server's point of view. \(A_\mathsf {gs}\) should enable the server to be sure that some enrolled user authenticates, while keeping the authenticated user anonymous from the server's point of view: \(A_\mathsf {gs} \vdash K_\mathsf {S} (\mathtt {br}_i \in \mathtt {db})\). Moreover, the server should have no access to the templates: \(A_\mathsf {gs} \vdash Has_\mathsf {S}^{none} (\mathtt {br}_i)\).

Regarding template protection, the statement \(A_\mathsf {gs} \vdash Has_\mathsf {S}^{none} (\mathtt {br}_i)\) is shown using rule HN. A subtlety here is the presence of the dependence between the biometric template \(\mathtt {br}_i\) and the database \(\mathtt {db}\). Therefore we first need to show \(A_\mathsf {gs} \nvdash Has_\mathsf {S} (\mathtt {db})\).

(Derivation tree omitted.)

Now HN can be applied.

(Derivation tree omitted.)

\(A_\mathsf {gs} \vdash Has_\mathsf {S}^{none} (\mathtt {bs})\) is also shown by an application of HN.

Since the server trusts the users, an application of \(\mathsf {K5^{+}}\) shows that the server is ensured that some enrolled user authenticates.

(Derivation tree omitted.)
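
These derivations can be mirrored by a small self-contained check. The encoding below is illustrative only; in particular, the function has is a crude over-approximation of the Has property (anything possessed, received, computed or reachable through a dependence), which is ours and not the HN rule of [1]. It confirms that the \(\mathsf {K5^{+}}\) side condition holds for the server, while neither \(\mathtt {br}_i\) nor \(\mathtt {db}\) is reachable by the server.

USERS = {"U1", "U2"}
A_gs = {
    ("trust", "S", "U1"), ("trust", "S", "U2"),
    ("verify_attest", "S", frozenset(USERS), ("br_i in db",)),
    ("has", "I", "br_1"), ("has", "C1", "br_1"), ("has", "TM", "rd"),
    ("receive", "TM", "C1", "br_1"),
    ("compute", frozenset({"TM"}), "bs"), ("compute", frozenset({"I"}), "db"),
}
Dep = {("br_1", frozenset({"db"})), ("db", frozenset({"br_1"})), ("bs", frozenset({"rd"}))}

def k5_plus(A, i, eq):
    """Side condition of K5+: Verify_i(Attest_G(E)) in A, eq in E, and C_i trusts all of G."""
    return any(p[0] == "verify_attest" and p[1] == i and eq in p[3]
               and all(("trust", i, k) in A for k in p[2])
               for p in A)

def has(A, dep, i, x):
    """Over-approximation of Has_i(x): x is possessed, received or computed by C_i,
    or reachable through a dependence whose sources C_i can all obtain."""
    if any(p[0] in ("has", "receive") and p[1] == i and p[-1] == x for p in A):
        return True
    if any(p[0] == "compute" and i in p[1] and p[2] == x for p in A):
        return True
    return any(y == x and all(has(A, dep - {(y, xs)}, i, z) for z in xs)
               for (y, xs) in dep)

print(k5_plus(A_gs, "S", "br_i in db"))   # True : the server knows some enrolled user authenticates
print(has(A_gs, Dep, "S", "br_1"))        # False: consistent with Has_S^none(br_1)
print(has(A_gs, Dep, "S", "db"))          # False: consistent with Has_S^none(db)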

5 Variants

Several variants [4] of the biometric system \(B_\mathsf {gs}\) can be expressed and analyzed in our formal framework.

5.1 Lowering the Trust on the Group Signing Functionality

If the server trusts the matching functionality TM of the terminal but does not trust its signing functionality TS, then the component TS must supply a proof that some user is enrolled: in the resulting architecture, denoted \(A_\mathsf {gs}^\mathsf {p}\), the server verifies a proof from TS which embeds the group attestation, instead of verifying the attestation directly.

An application of the new rule \(\mathsf {K4^{+}}\) then enables us to prove that the server is ensured that some enrolled user authenticates.

(Derivation tree omitted.)
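
A possible reading in the same style follows (again an assumed encoding, since only the informal description of \(A_\mathsf {gs}^\mathsf {p}\) is given here): the group attestation is now nested inside a proof supplied by TS, and the \(\mathsf {K4^{+}}\) side condition still only requires the server to trust the users, not TS.

USERS = frozenset({"U1", "U2"})
A_gs_p = {
    ("trust", "S", "U1"), ("trust", "S", "U2"),
    # Verify_S(Proof_TS(E)) where E contains Attest_USERS(br_i in db).
    ("verify_proof", "S", "TS", (("attest", USERS, ("br_i in db",)),)),
}

def k4_plus(A, i, eq):
    """Side condition of K4+: Verify_i(Proof_j(E)) in A, Attest_G(E') in E with
    eq in E', and C_i trusts every member of G."""
    for p in A:
        if p[0] != "verify_proof" or p[1] != i:
            continue
        for st in p[3]:
            if st[0] == "attest" and eq in st[2] and all(("trust", i, k) in A for k in st[1]):
                return True
    return False

print(k4_plus(A_gs_p, "S", "br_i in db"))   # True: K_S(br_i in db) also holds in A_gs^p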

5.2 Combination with Match-On-Card

In the \(A_\mathsf {gs}\) architecture, the card is a plastic card: the biometric reference is simply printed on it, together with a group secret key. To enhance the protection of the reference, a smart card can be used instead of a plastic card, as in the Match-On-Card (MOC) technology [8, 11, 12]. The card stores the reference template, and the reference never leaves the card. During a verification, the card receives the fresh biometric template, carries out the comparison with its reference, and sends the decision back. The terminal trusts the smart card for the correctness of the matching. This trust is justified by the fact that the card is a tamper-resistant hardware element.

The \(A_\mathsf {gs}\) architecture in which the plastic card is replaced by a smart card performing a MOC, denoted \(A_\mathsf {gs}^\mathsf {moc}\), is modelled as follows. In addition to the comparison, the card also computes the group authentication.

Using rule HN, it is easy to show that no component apart from I and \(\mathsf {C}_i\) gets access to \(\mathtt {br}_i\).

The terminal should be convinced that the matching is correct: \(A_\mathsf {gs}^\mathsf {moc} \vdash K_\mathsf {TM} (\mathtt {dec} = \mu (\mathtt {br}_i, \mathtt {bs}, \mathtt {thr}))\). The proof relies on the trust placed by the terminal component TM in the card, which now performs the matching.

(Derivation tree omitted.)

Regarding the group authentication, an application of \(\mathsf {K5^{+}}\) shows that the server is ensured that some enrolled user authenticates.

5.3 Anonymity Revocation

As shown in [4], an additional mechanism can be used to revoke the anonymity of a group authentication if there is any legal need to do so. After the matching phase, the terminal has to encrypt the fresh template under the public key of a specific tracing authority, to sign all the messages together, and to send the authentication result to the server. Then, at a later stage, the tracing authority may decrypt the template and check, with an access to the database of the issuer, that the templates were indeed close. This a posteriori check ensures a form of accountability which can be requested in certain contexts.

The formal model introduced in [1] includes an additional architectural primitive, called SpotCheck, which can be used to carry out a posteriori checks and therefore to describe the above variant. However, the model including the SpotCheck primitive is proven complete only when all the functions of the term language are at most unary. Since the comparison between templates, an essential operation of biometric systems, is inherently binary, we would then obtain a correct but incomplete system.

We leave for future work the definition of a formal model with a posteriori verifications which would be both correct and complete and would not suffer from this arity restriction in the term language.

6 Conclusion

In this paper, we have analysed the privacy properties of a biometric system in which users can remain anonymous from the point of view of a remote server, while the server is still convinced that a valid user authenticates. Table 1 sums up the properties of the different architectures considered here. Architecture \(A_\mathsf {gs}^\mathsf {moc}\) provides the best guarantees in terms of privacy. However, its deployment has a cost, since it requires that each user owns a card with powerful capabilities. Although quite demanding, these assumptions are not out of reach of current technology [5]. The main variant \(A_\mathsf {gs}\) is more realistic. The choice between \(A_\mathsf {gs}\) and \(A_\mathsf {gs}^\mathsf {p}\) depends on the trust placed in each component in a specific deployment. The possibility to express these trust assumptions in a formal way and to study their consequences is one of the main benefits of the framework presented here, because it provides rigorous justifications for making well-informed design choices for the architecture of a system.

Table 1. Comparison between architectures

References

  1. Antignac, T., Le Métayer, D.: Privacy architectures: reasoning about data minimisation and integrity. In: Mauw, S., Jensen, C.D. (eds.) STM 2014. LNCS, vol. 8743, pp. 17–32. Springer, Heidelberg (2014)

  2. Boneh, D., Shacham, H.: Group signatures with verifier-local revocation. In: ACM Conference on Computer and Communications Security, CCS 2004, pp. 168–177. ACM Press (2004)

  3. Bringer, J., Chabanne, H., Le Métayer, D., Lescuyer, R.: Privacy by design in practice: reasoning about privacy properties of biometric system architectures. In: Bjørner, N., Boer, F. (eds.) FM 2015. LNCS, vol. 9109, pp. 90–107. Springer, Heidelberg (2015)

  4. Bringer, J., Chabanne, H., Pointcheval, D., Zimmer, S.: An application of the Boneh and Shacham group signature scheme to biometric authentication. In: Matsuura, K., Fujisaki, E. (eds.) IWSEC 2008. LNCS, vol. 5312, pp. 219–230. Springer, Heidelberg (2008)

  5. Canard, S., Girault, M.: Implementing group signature schemes with smart cards. In: Smart Card Research and Advanced Application, CARDIS 2002, pp. 1–10. USENIX (2002)

  6. CAPPRIS. Collaborative Project on the Protection of Privacy Rights in the Information Society. Inria Project Lab on Privacy. https://cappris.inria.fr/

  7. European Parliament: European Parliament legislative resolution of 12 March 2014 on the proposal for a regulation of the European Parliament and of the Council on the protection of individuals with regard to the processing of personal data and on the free movement of such data. General Data Protection Regulation, Ordinary legislative procedure: first reading (2014)

  8. Govan, M., Buggy, T.: A computationally efficient fingerprint matching algorithm for implementation on smartcards. In: Biometrics: Theory, Applications, and Systems, BTAS 2007, pp. 1–6. IEEE (2007)

  9. Halpern, J.Y., Pucella, R.: Dealing with logical omniscience. In: Conference on Theoretical Aspects of Rationality and Knowledge, TARK 2007, pp. 169–176 (2007)

  10. Jain, A.K., Ross, A., Prabhakar, S.: An introduction to biometric recognition. IEEE Trans. Circuits Syst. Video Technol. 14(1), 4–20 (2004)

  11. National Institute of Standards and Technology (NIST): MINEX II – an assessment of Match-On-Card technology (2011). http://www.nist.gov/itl/iad/ig/minexii.cfm

  12. International Organization for Standardization: ISO/IEC 24787, Information technology – Identification cards – On-card biometric comparison (2010)

  13. Pucella, R.: Deductive algorithmic knowledge. J. Log. Comput. 16(2), 287–309 (2006)

  14. Rivest, R.L., Shamir, A., Tauman, Y.: How to leak a secret. In: Boyd, C. (ed.) ASIACRYPT 2001. LNCS, vol. 2248, p. 552. Springer, Heidelberg (2001)

  15. Ta, V.T., Antignac, T.: Privacy by design: on the conformance between protocols and architectures. In: Cuppens, F., Garcia-Alfaro, J., Zincir Heywood, N., Fong, P.W.L. (eds.) FPS 2014. LNCS, vol. 8930, pp. 65–81. Springer, Heidelberg (2015)

  16. TURBINE. TrUsted Revocable Biometric IdeNtitiEs. Collaborative European project 216339, call FP7-ICT-2007-1 (2007). http://cordis.europa.eu/project/rcn/85447_en.html


Acknowledgment

This work has been partially funded by the French ANR-12-INSE-0013 project BIOPRIV. Part of this work has been conducted within the Inria Project Lab on Privacy CAPPRIS [6].

Author information

Authors and Affiliations

  1. Morpho, Issy-Les-Moulineaux, France

    Julien Bringer, Hervé Chabanne & Roch Lescuyer

  2. Télécom ParisTech, Paris, France

    Hervé Chabanne

  3. INRIA, Université de Lyon, Lyon, France

    Daniel Le Métayer

Authors
  1. Julien Bringer


  2. Hervé Chabanne


  3. Daniel Le Métayer


  4. Roch Lescuyer


Corresponding author

Correspondence to Roch Lescuyer.

Editor information

Editors and Affiliations

  1. Fondazione Bruno Kessler, Trento, Italy

    Silvio Ranise

  2. The Mitre Corp, McLean, Virginia, USA

    Vipin Swarup


Copyright information

© 2016 IFIP International Federation for Information Processing


Cite this paper

Bringer, J., Chabanne, H., Le Métayer, D., Lescuyer, R. (2016). Reasoning About Privacy Properties of Architectures Supporting Group Authentication and Application to Biometric Systems. In: Ranise, S., Swarup, V. (eds) Data and Applications Security and Privacy XXX. DBSec 2016. Lecture Notes in Computer Science, vol. 9766. Springer, Cham. https://doi.org/10.1007/978-3-319-41483-6_22



