Unification (computer science)


In logic and computer science, specifically automated reasoning, unification is an algorithmic process of solving equations between symbolic expressions, each of the form Left-hand side = Right-hand side. For example, using x, y, z as variables, and taking f to be an uninterpreted function, the singleton equation set { f(1,y) = f(x,2) } is a syntactic first-order unification problem that has the substitution { x ↦ 1, y ↦ 2 } as its only solution.

Conventions differ on what values variables may assume and which expressions are considered equivalent. In first-order syntactic unification, variables range over first-order terms and equivalence is syntactic. This version of unification has a unique "best" answer and is used in logic programming and programming language type system implementation, especially in Hindley–Milner based type inference algorithms. In higher-order unification, possibly restricted to higher-order pattern unification, terms may include lambda expressions, and equivalence is up to beta-reduction. This version is used in proof assistants and higher-order logic programming, for example Isabelle, Twelf, and λProlog. Finally, in semantic unification or E-unification, equality is subject to background knowledge and variables range over a variety of domains. This version is used in SMT solvers, term rewriting algorithms, and cryptographic protocol analysis.

Formal definition


A unification problem is a finite set E = { l1 ≐ r1, ..., ln ≐ rn } of equations to solve, where li, ri are in the set T of terms or expressions. Depending on which expressions or terms are allowed to occur in an equation set or unification problem, and which expressions are considered equal, several frameworks of unification are distinguished. If higher-order variables, that is, variables representing functions, are allowed in an expression, the process is called higher-order unification, otherwise first-order unification. If a solution is required to make both sides of each equation literally equal, the process is called syntactic or free unification, otherwise semantic or equational unification, or E-unification, or unification modulo theory.

If the right side of each equation is closed (no free variables), the problem is called (pattern) matching. The left side (with variables) of each equation is called the pattern.[1]
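For illustration, terms and unification problems can be written down concretely in a small Python sketch; the Var and App classes below are illustrative assumptions, not part of any standard library.

from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str                    # a variable such as x, y, z

@dataclass(frozen=True)
class App:
    fn: str                      # a function or constant symbol such as f, g, a
    args: tuple = ()             # constants are applications with no arguments

# The introductory problem { f(1,y) = f(x,2) } as a list of equations:
problem = [(App("f", (App("1"), Var("y"))),
            App("f", (Var("x"), App("2"))))]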

Prerequisites


Formally, a unification approach presupposes

  • an infinite set V of variables (for higher-order unification, it is convenient to choose V disjoint from the set of lambda-term bound variables),
  • a set T of terms such that V ⊆ T (for first-order unification, T is usually the set of first-order terms built from variable and function symbols; for higher-order unification, T also contains lambda terms, possibly with higher-order variables),
  • a mapping vars, assigning to each term t its set vars(t) ⊆ V of free variables, and
  • an equivalence relation ≡ on T, indicating which terms are considered equal; for first-order E-unification, ≡ reflects the equational background knowledge about certain function symbols (for example, if ⊕ is considered commutative and associative, syntactically different terms built from ⊕ may be equivalent[note 1]), while for syntactic unification only literally identical terms are considered equal.

As an example of how the set of terms and theory affects the set of solutions, the syntactic first-order unification problem { y = cons(2,y) } has no solution over the set of finite terms. However, it has the single solution { y ↦ cons(2,cons(2,cons(2,...))) } over the set of infinite tree terms. Similarly, the semantic first-order unification problem { a⋅x = x⋅a } has each substitution of the form { x ↦ a⋅...⋅a } as a solution in a semigroup, i.e. if (⋅) is considered associative. But the same problem, viewed in an abelian group, where (⋅) is also considered commutative, has any substitution at all as a solution.

As an example of higher-order unification, the singleton set { a = y(x) } is a syntactic second-order unification problem, since y is a function variable. One solution is { x ↦ a, y ↦ (identity function) }; another one is { y ↦ (constant function mapping each value to a), x ↦ (any value) }.

Substitution

Main article: Substitution (logic)

A substitution is a mapping σ : V → T from variables to terms; the notation { x1 ↦ t1, ..., xk ↦ tk } refers to a substitution mapping each variable xi to the term ti, for i = 1, ..., k, and every other variable to itself; the xi must be pairwise distinct. Applying that substitution to a term t is written in postfix notation as t { x1 ↦ t1, ..., xk ↦ tk }; it means to (simultaneously) replace every occurrence of each variable xi in the term t by ti. The result tτ of applying a substitution τ to a term t is called an instance of that term t. As a first-order example, applying the substitution { x ↦ h(a,y), z ↦ b } to the term

f(x, a, g(z), y)

yields

f(h(a,y), a, g(b), y).
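A minimal sketch of substitution application, reusing the illustrative Var/App representation assumed above; simultaneous replacement is obtained by substituting only at variable positions of the original term.

def apply_subst(term, subst):
    # Replace every occurrence of a mapped variable by its image; other
    # variables are mapped to themselves, as in the definition above.
    if isinstance(term, Var):
        return subst.get(term.name, term)
    return App(term.fn, tuple(apply_subst(a, subst) for a in term.args))

# { x ↦ h(a,y), z ↦ b } applied to f(x,a,g(z),y) yields f(h(a,y),a,g(b),y):
t = App("f", (Var("x"), App("a"), App("g", (Var("z"),)), Var("y")))
sigma = {"x": App("h", (App("a"), Var("y"))), "z": App("b")}
print(apply_subst(t, sigma))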

Generalization, specialization


If a term t has an instance equivalent to a term u, that is, if tσ ≡ u for some substitution σ, then t is called more general than u, and u is called more special than, or subsumed by, t. For example, x ⊕ a is more general than a ⊕ b if ⊕ is commutative, since then (x ⊕ a){x ↦ b} = b ⊕ a ≡ a ⊕ b.

If ≡ is literal (syntactic) identity of terms, a term may be both more general and more special than another one only if both terms differ just in their variable names, not in their syntactic structure; such terms are called variants, or renamings, of each other. For example, f(x1,a,g(z1),y1) is a variant of f(x2,a,g(z2),y2), since f(x1,a,g(z1),y1) {x1↦x2, y1↦y2, z1↦z2} = f(x2,a,g(z2),y2) and f(x2,a,g(z2),y2) {x2↦x1, y2↦y1, z2↦z1} = f(x1,a,g(z1),y1). However, f(x1,a,g(z1),y1) is not a variant of f(x2,a,g(x2),x2), since no substitution can transform the latter term into the former one. The latter term is therefore properly more special than the former one.

For arbitrary ≡, a term may be both more general and more special than a structurally different term. For example, if ⊕ is idempotent, that is, if always x ⊕ x ≡ x, then the term x ⊕ y is more general than z,[note 2] and vice versa,[note 3] although x ⊕ y and z are of different structure.

A substitution σ is more special than, or subsumed by, a substitution τ if tσ is subsumed by tτ for each term t. We also say that τ is more general than σ. More formally, take a nonempty infinite set V of auxiliary variables such that no equation li ≐ ri in the unification problem contains variables from V. Then a substitution σ is subsumed by another substitution τ if there is a substitution θ such that for all terms X ∉ V, Xσ ≡ Xτθ.[2] For instance, {x ↦ a, y ↦ a} is subsumed by τ = {x ↦ y}, using θ = {y ↦ a}, but σ = {x ↦ a} is not subsumed by τ = {x ↦ y}, as f(x,y)σ = f(a,y) is not an instance of f(x,y)τ = f(y,y).[3]

Solution set


A substitution σ is a solution of the unification problem E if liσ ≡ riσ for i = 1, ..., n. Such a substitution is also called a unifier of E. For example, if ⊕ is associative, the unification problem { x ⊕ a ≐ a ⊕ x } has the solutions {x ↦ a}, {x ↦ a ⊕ a}, {x ↦ a ⊕ a ⊕ a}, etc., while the problem { x ⊕ a ≐ a } has no solution.

For a given unification problem E, a set S of unifiers is called complete if each solution substitution is subsumed by some substitution in S. A complete substitution set always exists (e.g. the set of all solutions), but in some frameworks (such as unrestricted higher-order unification) the problem of determining whether any solution exists (i.e., whether the complete substitution set is nonempty) is undecidable.

The set S is called minimal if none of its members subsumes another one. Depending on the framework, a complete and minimal substitution set may have zero, one, finitely many, or infinitely many members, or may not exist at all due to an infinite chain of redundant members.[4] Thus, in general, unification algorithms compute a finite approximation of the complete set, which may or may not be minimal, although most algorithms avoid redundant unifiers when possible.[2] For first-order syntactic unification, Martelli and Montanari[5] gave an algorithm that reports unsolvability or computes a single unifier that by itself forms a complete and minimal substitution set, called the most general unifier.

Syntactic unification of first-order terms

Schematic triangle diagram of syntactically unifying terms t1 and t2 by a substitution σ

Syntactic unification of first-order terms is the most widely used unification framework. It is based on T being the set of first-order terms (over some given set V of variables, C of constants and Fn of n-ary function symbols) and on ≡ being syntactic equality. In this framework, each solvable unification problem { l1 ≐ r1, ..., ln ≐ rn } has a complete, and obviously minimal, singleton solution set {σ}. Its member σ is called the most general unifier (mgu) of the problem. The terms on the left and the right hand side of each potential equation become syntactically equal when the mgu is applied, i.e. l1σ = r1σ ∧ ... ∧ lnσ = rnσ. Any unifier of the problem is subsumed[note 4] by the mgu σ. The mgu is unique up to variants: if S1 and S2 are both complete and minimal solution sets of the same syntactic unification problem, then S1 = {σ1} and S2 = {σ2} for some substitutions σ1 and σ2, and xσ1 is a variant of xσ2 for each variable x occurring in the problem.

For example, the unification problem { x ≐ z, y ≐ f(x) } has a unifier { x ↦ z, y ↦ f(z) }, because

x {x↦z, y↦f(z)} = z = z {x↦z, y↦f(z)}, and
y {x↦z, y↦f(z)} = f(z) = f(x) {x↦z, y↦f(z)}.

This is also the most general unifier. Other unifiers for the same problem are e.g. { x ↦ f(x1), y ↦ f(f(x1)), z ↦ f(x1) }, { x ↦ f(f(x1)), y ↦ f(f(f(x1))), z ↦ f(f(x1)) }, and so on; there are infinitely many similar unifiers.

As another example, the problem g(x,x) ≐ f(y) has no solution with respect to ≡ being literal identity, since any substitution applied to the left and right hand side will keep the outermost g and f, respectively, and terms with different outermost function symbols are syntactically different.

Unification algorithms

Robinson's 1965 unification algorithm

Symbols are ordered such that variables precede function symbols. Terms are ordered by increasing written length; equally long terms are ordered lexicographically.[6] For a set T of terms, its disagreement path p is the lexicographically least path where two member terms of T differ. Its disagreement set is the set of subterms starting at p, formally: { t|p : t ∈ T }.[7]

Algorithm:[8]

Given a set T of terms to be unified
Let σ initially be the identity substitution
do forever
    if Tσ is a singleton set then
        return σ
    fi
    let D be the disagreement set of Tσ
    let s, t be the two lexicographically least terms in D
    if s is not a variable or s occurs in t then
        return "NONUNIFIABLE"
    fi
    σ := σ{s↦t}
done
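The boxed algorithm can be turned into a rough Python sketch along the following lines; it handles just two terms, finds a disagreement pair by a simple left-to-right traversal instead of the formal lexicographic path order, and reuses the illustrative Var/App and apply_subst sketches above. It is a sketch of the idea, not Robinson's original formulation.

def occurs(v, term):
    # Does the variable name v occur anywhere in term?
    if isinstance(term, Var):
        return term.name == v
    return any(occurs(v, a) for a in term.args)

def disagreement(s, t):
    # First pair of corresponding subterms at which s and t differ, or None.
    if s == t:
        return None
    if isinstance(s, Var) or isinstance(t, Var) or s.fn != t.fn or len(s.args) != len(t.args):
        return s, t
    for a, b in zip(s.args, t.args):
        d = disagreement(a, b)
        if d is not None:
            return d
    return None

def robinson_unify(s, t):
    sigma = {}
    while True:
        s1, t1 = apply_subst(s, sigma), apply_subst(t, sigma)
        d = disagreement(s1, t1)
        if d is None:
            return sigma                      # the two instances coincide
        a, b = d
        if isinstance(b, Var) and not isinstance(a, Var):
            a, b = b, a                       # let a be the variable, if any
        if not isinstance(a, Var) or occurs(a.name, b):
            return None                       # "NONUNIFIABLE"
        sigma = {v: apply_subst(u, {a.name: b}) for v, u in sigma.items()}
        sigma[a.name] = b                     # σ := σ{a ↦ b}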

Jacques Herbrand discussed the basic concepts of unification and sketched an algorithm in 1930.[9][10][11] But most authors attribute the first unification algorithm to John Alan Robinson (cf. box).[12][13][note 5] Robinson's algorithm had worst-case exponential behavior in both time and space.[11][15] Numerous authors have proposed more efficient unification algorithms.[16] Algorithms with worst-case linear-time behavior were discovered independently by Martelli & Montanari (1976) and Paterson & Wegman (1976).[note 6] Baader & Snyder (2001) uses a similar technique as Paterson–Wegman, hence is linear,[17] but like most linear-time unification algorithms is slower than the Robinson version on small sized inputs due to the overhead of preprocessing the inputs and postprocessing of the output, such as construction of a DAG representation. de Champeaux (2022) is also of linear complexity in the input size but is competitive with the Robinson algorithm on small size inputs. The speedup is obtained by using an object-oriented representation of the predicate calculus that avoids the need for pre- and post-processing, instead making variable objects responsible for creating a substitution and for dealing with aliasing. de Champeaux claims that the ability to add functionality to predicate calculus represented as programmatic objects provides opportunities for optimizing other logic operations as well.[15]

The following algorithm is commonly presented and originates from Martelli & Montanari (1982).[note 7] Given a finite set G = { s1 ≐ t1, ..., sn ≐ tn } of potential equations, the algorithm applies rules to transform it into an equivalent set of equations of the form { x1 ≐ u1, ..., xm ≐ um } where x1, ..., xm are distinct variables and u1, ..., um are terms containing none of the xi. A set of this form can be read as a substitution. If there is no solution, the algorithm terminates with ⊥; other authors use "Ω" or "fail" in that case. The operation of substituting all occurrences of variable x in problem G with term t is denoted G{x↦t}. For simplicity, constant symbols are regarded as function symbols having zero arguments.

G ∪ { t ≐ t }   ⇒   G     delete
G ∪ { f(s0,...,sk) ≐ f(t0,...,tk) }   ⇒   G ∪ { s0 ≐ t0, ..., sk ≐ tk }     decompose
G ∪ { f(s0,...,sk) ≐ g(t0,...,tm) }   ⇒   ⊥     if f ≠ g or k ≠ m     conflict
G ∪ { f(s0,...,sk) ≐ x }   ⇒   G ∪ { x ≐ f(s0,...,sk) }     swap
G ∪ { x ≐ t }   ⇒   G{x↦t} ∪ { x ≐ t }     if x ∉ vars(t) and x ∈ vars(G)     eliminate[note 8]
G ∪ { x ≐ f(s0,...,sk) }   ⇒   ⊥     if x ∈ vars(f(s0,...,sk))     check
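The rules above translate almost literally into a short Python sketch, again reusing the illustrative Var/App, apply_subst and occurs helpers assumed earlier; eliminate is realized by substituting into the remaining equations and recording the binding.

def unify(equations):
    # Returns a substitution (dict from variable names to terms) or None (⊥).
    eqs, subst = list(equations), {}
    while eqs:
        s, t = eqs.pop()
        if s == t:
            continue                                        # delete
        if isinstance(s, App) and isinstance(t, App):
            if s.fn != t.fn or len(s.args) != len(t.args):
                return None                                 # conflict
            eqs.extend(zip(s.args, t.args))                 # decompose
            continue
        if not isinstance(s, Var):                          # swap
            s, t = t, s
        if occurs(s.name, t):
            return None                                     # check
        bind = {s.name: t}                                  # eliminate
        eqs = [(apply_subst(l, bind), apply_subst(r, bind)) for l, r in eqs]
        subst = {v: apply_subst(u, bind) for v, u in subst.items()}
        subst[s.name] = t
    return subst

# { x ≐ z, y ≐ f(x) } yields the most general unifier { x ↦ z, y ↦ f(z) }:
print(unify([(Var("x"), Var("z")), (Var("y"), App("f", (Var("x"),)))]))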

Occurs check

Main article: Occurs check

An attempt to unify a variable x with a term containing x as a strict subterm, x ≐ f(..., x, ...), would lead to an infinite term as solution for x, since x would occur as a subterm of itself. In the set of (finite) first-order terms as defined above, the equation x ≐ f(..., x, ...) has no solution; hence the eliminate rule may only be applied if x ∉ vars(t). Since that additional check, called the occurs check, slows down the algorithm, it is omitted e.g. in most Prolog systems. From a theoretical point of view, omitting the check amounts to solving equations over infinite trees, see the section on unification of infinite terms below.
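With the rule-based unify sketch above (an illustrative assumption), the occurs check is exactly what rejects the equation x ≐ f(x):

print(unify([(Var("x"), App("f", (Var("x"),)))]))    # None: x occurs in f(x)
# Omitting the check would instead bind x to the infinite term f(f(f(...))).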

Proof of termination


For the proof of termination of the algorithm consider a triple ⟨nvar, nlhs, neqn⟩ where nvar is the number of variables that occur more than once in the equation set, nlhs is the number of function symbols and constants on the left hand sides of potential equations, and neqn is the number of equations. When rule eliminate is applied, nvar decreases, since x is eliminated from G and kept only in { x ≐ t }. Applying any other rule can never increase nvar again. When rule decompose, conflict, or swap is applied, nlhs decreases, since at least the left hand side's outermost f disappears. Applying any of the remaining rules delete or check can't increase nlhs, but decreases neqn. Hence, any rule application decreases the triple ⟨nvar, nlhs, neqn⟩ with respect to the lexicographical order, which is possible only a finite number of times.

Conor McBride observes[18] that "by expressing the structure which unification exploits" in a dependently typed language such as Epigram, Robinson's unification algorithm can be made recursive on the number of variables, in which case a separate termination proof becomes unnecessary.

Examples of syntactic unification of first-order terms


In the Prolog syntactical convention a symbol starting with an upper case letter is a variable name; a symbol that starts with a lowercase letter is a function symbol; the comma is used as the logical and operator. For mathematical notation, x, y, z are used as variables, f, g as function symbols, and a, b as constants.

Prolog notation | Mathematical notation | Unifying substitution | Explanation
a = a | { a = a } | {} | Succeeds. (tautology)
a = b | { a = b } | | a and b do not match
X = X | { x = x } | {} | Succeeds. (tautology)
a = X | { a = x } | { x ↦ a } | x is unified with the constant a
X = Y | { x = y } | { x ↦ y } | x and y are aliased
f(a,X) = f(a,b) | { f(a,x) = f(a,b) } | { x ↦ b } | function and constant symbols match, x is unified with the constant b
f(a) = g(a) | { f(a) = g(a) } | | f and g do not match
f(X) = f(Y) | { f(x) = f(y) } | { x ↦ y } | x and y are aliased
f(X) = g(Y) | { f(x) = g(y) } | | f and g do not match
f(X) = f(Y,Z) | { f(x) = f(y,z) } | | Fails. The f function symbols have different arity
f(g(X)) = f(Y) | { f(g(x)) = f(y) } | { y ↦ g(x) } | Unifies y with the term g(x)
f(g(X),X) = f(Y,a) | { f(g(x),x) = f(y,a) } | { x ↦ a, y ↦ g(a) } | Unifies x with constant a, and y with the term g(a)
X = f(X) | { x = f(x) } | should be ⊥ | Returns ⊥ in first-order logic and many modern Prolog dialects (enforced by the occurs check); succeeds in traditional Prolog and in Prolog II, unifying x with the infinite term x = f(f(f(f(...)))).
X = Y, Y = a | { x = y, y = a } | { x ↦ a, y ↦ a } | Both x and y are unified with the constant a
a = Y, X = Y | { a = y, x = y } | { x ↦ a, y ↦ a } | As above (order of equations in set doesn't matter)
X = a, b = X | { x = a, b = x } | | Fails. a and b do not match, so x can't be unified with both
Two terms with an exponentially larger tree for their least common instance. Its DAG representation (rightmost, orange part) is still of linear size.

The most general unifier of a syntactic first-order unification problem of size n may have a size of 2^n. For example, the problem (((a*z)*y)*x)*w ≐ w*(x*(y*(z*a))) has the most general unifier { z ↦ a, y ↦ a*a, x ↦ (a*a)*(a*a), w ↦ ((a*a)*(a*a))*((a*a)*(a*a)) }, cf. picture. In order to avoid exponential time complexity caused by such blow-up, advanced unification algorithms work on directed acyclic graphs (dags) rather than trees.[19]
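The blow-up can be replayed with the earlier unify sketch (an illustrative assumption): building the problem above for the four variables z, y, x, w and measuring the sizes of the terms in the computed most general unifier shows them roughly doubling at each step.

def star(l, r):
    return App("*", (l, r))

def size(t):
    # Number of symbols in a term, counting variables and function symbols.
    return 1 if isinstance(t, Var) else 1 + sum(size(a) for a in t.args)

a = App("a")
left = right = a
for v in (Var("z"), Var("y"), Var("x"), Var("w")):
    left, right = star(left, v), star(v, right)   # (((a*z)*y)*x)*w  ≐  w*(x*(y*(z*a)))

mgu = unify([(left, right)])
print({v: size(t) for v, t in mgu.items()})       # {'w': 15, 'x': 7, 'y': 3, 'z': 1}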

Application: unification in logic programming


The concept of unification is one of the main ideas behind logic programming. Specifically, unification is a basic building block of resolution, a rule of inference for determining formula satisfiability. In Prolog, the equality symbol = implies first-order syntactic unification. It represents the mechanism of binding the contents of variables and can be viewed as a kind of one-time assignment.

In Prolog:

  1. A variable can be unified with a constant, a term, or another variable, thus effectively becoming its alias. In many modern Prolog dialects and in first-order logic, a variable cannot be unified with a term that contains it; this is the so-called occurs check.
  2. Two constants can be unified only if they are identical.
  3. Similarly, a term can be unified with another term if the top function symbols and arities of the terms are identical and if the parameters can be unified simultaneously. Note that this is a recursive behavior.
  4. Most operations, including +, -, *, /, are not evaluated by =. So for example 1+2 = 3 is not satisfiable because they are syntactically different. The use of integer arithmetic constraints #= introduces a form of E-unification for which these operations are interpreted and evaluated.[20]

Application: type inference


Type inference algorithms are typically based on unification, particularly Hindley–Milner type inference which is used by the functional languages Haskell and ML. For example, when attempting to infer the type of the Haskell expression True : ['x'], the compiler will use the type a -> [a] -> [a] of the list construction function (:), the type Bool of the first argument True, and the type [Char] of the second argument ['x']. The polymorphic type variable a will be unified with Bool and the second argument [a] will be unified with [Char]. a cannot be both Bool and Char at the same time, therefore this expression is not correctly typed.

Like for Prolog, an algorithm for type inference can be given:

  1. Any type variable unifies with any type expression, and is instantiated to that expression. A specific theory might restrict this rule with an occurs check.
  2. Two type constants unify only if they are the same type.
  3. Two type constructions unify only if they are applications of the same type constructor and all of their component types recursively unify.
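Treating types as first-order terms, the True : ['x'] example above can be replayed with the unify sketch from the syntactic unification section (an illustrative assumption); the clash between Bool and Char then surfaces as a conflict. The constructor names below are hypothetical encodings, not Haskell's internal representation.

def fn(arg, res):
    return App("->", (arg, res))       # the function type constructor

def lst(elem):
    return App("list", (elem,))        # the list type constructor

a, result = Var("a"), Var("result")
cons_type = fn(a, fn(lst(a), lst(a)))                       # a -> [a] -> [a]
use_type  = fn(App("Bool"), fn(lst(App("Char")), result))   # Bool -> [Char] -> result

print(unify([(cons_type, use_type)]))   # None: a cannot be both Bool and Char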

Application: feature structure unification

See also: Feature structure

Unification has been used in different research areas of computational linguistics.[21][22]

Order-sorted unification


Order-sorted logic allows one to assign a sort, or type, to each term, and to declare a sort s1 a subsort of another sort s2, commonly written as s1 ⊆ s2. For example, when reasoning about biological creatures, it is useful to declare a sort dog to be a subsort of a sort animal. Wherever a term of some sort s is required, a term of any subsort of s may be supplied instead. For example, assuming a function declaration mother : animal → animal, and a constant declaration lassie : dog, the term mother(lassie) is perfectly valid and has the sort animal. In order to supply the information that the mother of a dog is a dog in turn, another declaration mother : dog → dog may be issued; this is called function overloading, similar to overloading in programming languages.

Walther gave a unification algorithm for terms in order-sorted logic, requiring for any two declared sorts s1, s2 their intersection s1 ∩ s2 to be declared, too: if x1 and x2 are variables of sort s1 and s2, respectively, the equation x1 ≐ x2 has the solution { x1 = x, x2 = x }, where x : s1 ∩ s2.[23] After incorporating this algorithm into a clause-based automated theorem prover, he could solve a benchmark problem by translating it into order-sorted logic, thereby boiling it down an order of magnitude, as many unary predicates turned into sorts.
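A minimal sketch of this rule, with a hard-coded sort table standing in for the sort declarations (all names here are illustrative assumptions):

# Declared intersection sorts; in a real prover these come from the declarations.
intersection = {("dog", "animal"): "dog", ("animal", "dog"): "dog",
                ("dog", "dog"): "dog", ("animal", "animal"): "animal"}

counter = 0
def unify_sorted_vars(x1, s1, x2, s2):
    # Unify two variables x1:s1 and x2:s2 by binding both to a fresh
    # variable of the declared intersection sort s1 ∩ s2, if any.
    global counter
    meet = intersection.get((s1, s2))
    if meet is None:
        return None                       # no common subsort declared
    counter += 1
    fresh = f"v{counter}"
    return {x1: (fresh, meet), x2: (fresh, meet)}

print(unify_sorted_vars("x1", "dog", "x2", "animal"))   # both bound to a dog-sorted variable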

Smolka generalized order-sorted logic to allow for parametric polymorphism.[24] In his framework, subsort declarations are propagated to complex type expressions. As a programming example, a parametric sort list(X) may be declared (with X being a type parameter as in a C++ template), and from a subsort declaration int ⊆ float the relation list(int) ⊆ list(float) is automatically inferred, meaning that each list of integers is also a list of floats.

Schmidt-Schauß generalized order-sorted logic to allow for term declarations.[25] As an example, assuming subsort declarations even ⊆ int and odd ⊆ int, a term declaration like ∀ i : int. (i + i) : even allows one to declare a property of integer addition that could not be expressed by ordinary overloading.

Unification of infinite terms


Background on infinite trees:

Unification algorithm, Prolog II:

  • A. Colmerauer (1982). K.L. Clark; S.-A. Tarnlund (eds.). Prolog and Infinite Trees. Academic Press.
  • Alain Colmerauer (1984). "Equations and Inequations on Finite and Infinite Trees". In ICOT (ed.). Proc. Int. Conf. on Fifth Generation Computer Systems. pp. 85–99.

Applications:

E-unification


E-unification is the problem of finding solutions to a given set of equations, taking into account some equational background knowledge E. The latter is given as a set of universal equalities. For some particular sets E, equation solving algorithms (a.k.a. E-unification algorithms) have been devised; for others it has been proven that no such algorithms can exist.

For example, if a and b are distinct constants, the equation x*a ≐ y*b has no solution with respect to purely syntactic unification, where nothing is known about the operator *. However, if * is known to be commutative, then the substitution {x ↦ b, y ↦ a} solves the above equation, since

x*a {x↦b, y↦a}
= b*a          by substitution application
= a*b          by commutativity of *
= y*b {x↦b, y↦a}          by (converse) substitution application

The background knowledge E could state the commutativity of * by the universal equality "u*v = v*u for all u, v".
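For this particular theory (commutativity of * alone), a unifier for the example can be found by branching on the two argument orders of the outermost *; the sketch below reuses the syntactic unify function assumed earlier and is only an illustration, not a complete C-unification procedure (a full one would branch at every *-subterm and return a set of unifiers).

def unify_commutative(s, t):
    # Try both argument orders of the right-hand side's outermost *.
    if isinstance(s, App) and isinstance(t, App) and s.fn == t.fn == "*":
        for u, v in ((t.args[0], t.args[1]), (t.args[1], t.args[0])):
            sigma = unify([(s.args[0], u), (s.args[1], v)])
            if sigma is not None:
                return sigma
        return None
    return unify([(s, t)])

x, y, a, b = Var("x"), Var("y"), App("a"), App("b")
lhs, rhs = App("*", (x, a)), App("*", (y, b))
print(unify([(lhs, rhs)]))              # None: no purely syntactic unifier
print(unify_commutative(lhs, rhs))      # { x ↦ b, y ↦ a } modulo commutativity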

Particular background knowledge sets E

Used naming conventions
∀ u,v,w: u*(v*w) = (u*v)*w     A     Associativity of *
∀ u,v: u*v = v*u     C     Commutativity of *
∀ u,v,w: u*(v+w) = u*v + u*w     Dl     Left distributivity of * over +
∀ u,v,w: (v+w)*u = v*u + w*u     Dr     Right distributivity of * over +
∀ u: u*u = u     I     Idempotence of *
∀ u: n*u = u     Nl     Left neutral element n with respect to *
∀ u: u*n = u     Nr     Right neutral element n with respect to *

It is said that unification is decidable for a theory if a unification algorithm has been devised for it that terminates for any input problem. It is said that unification is semi-decidable for a theory if a unification algorithm has been devised for it that terminates for any solvable input problem, but may keep searching forever for solutions of an unsolvable input problem.

Unification is decidable for the following theories:

Unification is semi-decidable for the following theories:

One-sided paramodulation


If there is a convergent term rewriting system R available for E, the one-sided paramodulation algorithm[37] can be used to enumerate all solutions of given equations.

One-sided paramodulation rules
G ∪ { f(s1,...,sn) ≐ f(t1,...,tn) }; S   ⇒   G ∪ { s1 ≐ t1, ..., sn ≐ tn }; S     decompose
G ∪ { x ≐ t }; S   ⇒   G{x↦t}; S{x↦t} ∪ {x↦t}     if the variable x doesn't occur in t     eliminate
G ∪ { f(s1,...,sn) ≐ t }; S   ⇒   G ∪ { s1 ≐ u1, ..., sn ≐ un, r ≐ t }; S     if f(u1,...,un) → r is a rule from R     mutate
G ∪ { f(s1,...,sn) ≐ y }; S   ⇒   G ∪ { s1 ≐ y1, ..., sn ≐ yn, y ≐ f(y1,...,yn) }; S     if y1,...,yn are new variables     imitate

Starting with G being the unification problem to be solved and S being the identity substitution, rules are applied nondeterministically until the empty set appears as the actual G, in which case the actual S is a unifying substitution. Depending on the order the paramodulation rules are applied, on the choice of the actual equation from G, and on the choice of R's rules in mutate, different computation paths are possible. Only some lead to a solution, while others end at a G ≠ {} where no further rule is applicable (e.g. G = { f(...) ≐ g(...) }).

Example term rewrite systemR
1    app(nil, z) → z
2    app(x.y, z) → x.app(y, z)

For an example, a term rewrite system R is used defining the append operator of lists built from cons and nil; where cons(x,y) is written in infix notation as x.y for brevity; e.g. app(a.b.nil, c.d.nil) → a.app(b.nil, c.d.nil) → a.b.app(nil, c.d.nil) → a.b.c.d.nil demonstrates the concatenation of the lists a.b.nil and c.d.nil, employing the rewrite rules 2, 2, and 1. The equational theory E corresponding to R is the congruence closure of R, both viewed as binary relations on terms. For example, app(a.b.nil, c.d.nil) ≡ a.b.c.d.nil ≡ app(a.b.c.d.nil, nil). The paramodulation algorithm enumerates solutions to equations with respect to that E when fed with the example R.

A successful example computation path for the unification problem { app(x, app(y,x)) ≐ a.a.nil } is shown below. To avoid variable name clashes, rewrite rules are consistently renamed each time before their use by rule mutate; v2, v3, ... are computer-generated variable names for this purpose. In each line, the equation chosen from G for the next step is processed by the rule named in the following row. Each time the mutate rule is applied, the chosen rewrite rule (1 or 2) is indicated in parentheses. From the last line, the unifying substitution S = { y ↦ nil, x ↦ a.nil } can be obtained. In fact, app(x, app(y,x)) {y↦nil, x↦a.nil} = app(a.nil, app(nil, a.nil)) ≡ app(a.nil, a.nil) ≡ a.app(nil, a.nil) ≡ a.a.nil solves the given problem. A second successful computation path, obtainable by choosing "mutate(1), mutate(2), mutate(2), mutate(1)", leads to the substitution S = { y ↦ a.a.nil, x ↦ nil }; it is not shown here. No other path leads to a success.

Example unifier computation
Used rule | G | S
(start) | { app(x, app(y,x)) ≐ a.a.nil } | {}
mutate(2) | { x ≐ v2.v3, app(y,x) ≐ v4, v2.app(v3,v4) ≐ a.a.nil } | {}
decompose | { x ≐ v2.v3, app(y,x) ≐ v4, v2 ≐ a, app(v3,v4) ≐ a.nil } | {}
eliminate | { app(y, v2.v3) ≐ v4, v2 ≐ a, app(v3,v4) ≐ a.nil } | { x ↦ v2.v3 }
eliminate | { app(y, a.v3) ≐ v4, app(v3,v4) ≐ a.nil } | { x ↦ a.v3 }
mutate(1) | { y ≐ nil, a.v3 ≐ v5, v5 ≐ v4, app(v3,v4) ≐ a.nil } | { x ↦ a.v3 }
eliminate | { y ≐ nil, a.v3 ≐ v4, app(v3,v4) ≐ a.nil } | { x ↦ a.v3 }
eliminate | { a.v3 ≐ v4, app(v3,v4) ≐ a.nil } | { y ↦ nil, x ↦ a.v3 }
mutate(1) | { a.v3 ≐ v4, v3 ≐ nil, v4 ≐ v6, v6 ≐ a.nil } | { y ↦ nil, x ↦ a.v3 }
eliminate | { a.v3 ≐ v4, v3 ≐ nil, v4 ≐ a.nil } | { y ↦ nil, x ↦ a.v3 }
eliminate | { a.nil ≐ v4, v4 ≐ a.nil } | { y ↦ nil, x ↦ a.nil }
eliminate | { a.nil ≐ a.nil } | { y ↦ nil, x ↦ a.nil }
decompose | { a ≐ a, nil ≐ nil } | { y ↦ nil, x ↦ a.nil }
decompose | { nil ≐ nil } | { y ↦ nil, x ↦ a.nil }
decompose | {} | { y ↦ nil, x ↦ a.nil }

Narrowing

Triangle diagram of narrowing step s ↝ t at position p in term s, with unifying substitution σ (bottom row), using a rewrite rule l → r (top row)

If R is a convergent term rewriting system for E, an approach alternative to the previous section consists in successive application of "narrowing steps"; this will eventually enumerate all solutions of a given equation. A narrowing step (cf. picture) consists in

  • choosing a nonvariable subterm of the current term,
  • syntactically unifying it with the left hand side of a rule fromR, and
  • replacing the instantiated rule's right hand side into the instantiated term.

Formally, if l → r is a renamed copy of a rewrite rule from R, having no variables in common with a term s, and the subterm s|p is not a variable and is unifiable with l via the mgu σ, then s can be narrowed to the term t = sσ[rσ]p, i.e. to the term sσ, with the subterm at p replaced by rσ. The situation that s can be narrowed to t is commonly denoted as s ↝ t. Intuitively, a sequence of narrowing steps t1 ↝ t2 ↝ ... ↝ tn can be thought of as a sequence of rewrite steps t1 → t2 → ... → tn, but with the initial term t1 being further and further instantiated, as necessary to make each of the used rules applicable.

The above example paramodulation computation corresponds to the following narrowing sequence ("↓" indicating instantiation here):

app(x, app(y,x))
  ↓ x ↦ v2.v3
app(v2.v3, app(y, v2.v3))   ↝   v2.app(v3, app(y, v2.v3))
  ↓ y ↦ nil
v2.app(v3, app(nil, v2.v3))   ↝   v2.app(v3, v2.v3)
  ↓ v3 ↦ nil
v2.app(nil, v2.nil)   ↝   v2.v2.nil

The last term, v2.v2.nil, can be syntactically unified with the original right hand side term a.a.nil.

The narrowing lemma[38] ensures that whenever an instance of a term s can be rewritten to a term t by a convergent term rewriting system, then s and t can be narrowed and rewritten to a term s′ and t′, respectively, such that t′ is an instance of s′.

Formally: whenever sσ →* t holds for some substitution σ, then there exist terms s′, t′ such that s ↝* s′ and t →* t′ and s′τ = t′ for some substitution τ.

Higher-order unification

In Goldfarb's[39] reduction of Hilbert's 10th problem to second-order unifiability, the equation X1*X2 = X3 corresponds to the depicted unification problem, with function variables Fi corresponding to Xi and G fresh.

Many applications require one to consider the unification of typed lambda-terms instead of first-order terms. Such unification is often called higher-order unification. Higher-order unification is undecidable,[39][40][41] and such unification problems do not have most general unifiers. For example, the unification problem { f(a,b,a) ≐ d(b,a,c) }, where the only variable is f, has the solutions {f ↦ λxyz.d(y,x,c)}, {f ↦ λxyz.d(y,z,c)}, {f ↦ λxyz.d(y,a,c)}, {f ↦ λxyz.d(b,x,c)}, {f ↦ λxyz.d(b,z,c)} and {f ↦ λxyz.d(b,a,c)}. A well studied branch of higher-order unification is the problem of unifying simply typed lambda terms modulo the equality determined by αβη conversions. Gérard Huet gave a semi-decidable (pre-)unification algorithm[42] that allows a systematic search of the space of unifiers (generalizing the unification algorithm of Martelli–Montanari[5] with rules for terms containing higher-order variables) that seems to work sufficiently well in practice. Huet[43] and Gilles Dowek[44] have written articles surveying this topic.

Several subsets of higher-order unification are well-behaved, in that they are decidable and have a most-general unifier for solvable problems. One such subset is the previously described first-order terms. Higher-order pattern unification, due to Dale Miller,[45] is another such subset. The higher-order logic programming languages λProlog and Twelf have switched from full higher-order unification to implementing only the pattern fragment; surprisingly, pattern unification is sufficient for almost all programs, if each non-pattern unification problem is suspended until a subsequent substitution puts the unification into the pattern fragment. A superset of pattern unification called functions-as-constructors unification is also well-behaved.[46] The Zipperposition theorem prover has an algorithm integrating these well-behaved subsets into a full higher-order unification algorithm.[2]

In computational linguistics, one of the most influential theories of elliptical construction is that ellipses are represented by free variables whose values are then determined using higher-order unification. For instance, the semantic representation of "Jon likes Mary and Peter does too" is like(j,m) ∧ R(p), and the value of R (the semantic representation of the ellipsis) is determined by the equation like(j,m) = R(j). The process of solving such equations is called higher-order unification.[47]

Wayne Snyder gave a generalization of both higher-order unification and E-unification, i.e. an algorithm to unify lambda-terms modulo an equational theory.[48]

Notes

  1. ^ E.g. a ⊕ (b ⊕ f(x)) ≡ a ⊕ (f(x) ⊕ b) ≡ (b ⊕ f(x)) ⊕ a ≡ (f(x) ⊕ b) ⊕ a
  2. ^ since (x ⊕ y){x ↦ z, y ↦ z} = z ⊕ z ≡ z
  3. ^ since z{z ↦ x ⊕ y} = x ⊕ y
  4. ^ formally: each unifier τ satisfies ∀x: xτ = (xσ)ρ for some substitution ρ
  5. ^ Robinson used first-order syntactic unification as a basic building block of his resolution procedure for first-order logic, a great step forward in automated reasoning technology, as it eliminated one source of combinatorial explosion: searching for instantiation of terms.[14]
  6. ^ Independent discovery is stated in Martelli & Montanari (1982), sect. 1, p. 259. The journal publisher received Paterson & Wegman (1978) in Sep. 1976.
  7. ^ Alg. 1, p. 261. Their rule (a) corresponds to rule swap here, (b) to delete, (c) to both decompose and conflict, and (d) to both eliminate and check.
  8. ^ Although the rule keeps x ≐ t in G, it cannot loop forever since its precondition x ∈ vars(G) is invalidated by its first application. More generally, the algorithm is guaranteed to terminate always, see below.
  9. ^ In the presence of equality C, equalities Nl and Nr are equivalent; similarly for Dl and Dr.

References

  1. ^ Dowek, Gilles (1 January 2001). "Higher-order unification and matching". Handbook of Automated Reasoning. Elsevier Science Publishers B. V. pp. 1009–1062. ISBN 978-0-444-50812-6. Archived from the original on 15 May 2019. Retrieved 15 May 2019.
  2. ^ Vukmirović, Petar; Bentkamp, Alexander; Nummelin, Visa (14 December 2021). "Efficient Full Higher-Order Unification". Logical Methods in Computer Science. 17 (4): 6919. arXiv:2011.09507. doi:10.46298/lmcs-17(4:18)2021.
  3. ^ Apt, Krzysztof R. (1997). From Logic Programming to Prolog (1st ed.). London, Munich: Prentice Hall. p. 24. ISBN 013230368X.
  4. ^ Fages, François; Huet, Gérard (1986). "Complete Sets of Unifiers and Matchers in Equational Theories". Theoretical Computer Science. 43: 189–200. doi:10.1016/0304-3975(86)90175-1.
  5. ^ Martelli, Alberto; Montanari, Ugo (Apr 1982). "An Efficient Unification Algorithm". ACM Trans. Program. Lang. Syst. 4 (2): 258–282. doi:10.1145/357162.357169. S2CID 10921306.
  6. ^ Robinson (1965), nr. 2.5, 2.14, p. 25.
  7. ^ Robinson (1965), nr. 5.6, p. 32.
  8. ^ Robinson (1965), nr. 5.8, p. 32.
  9. ^ J. Herbrand: Recherches sur la théorie de la démonstration. Travaux de la société des Sciences et des Lettres de Varsovie, Class III, Sciences Mathématiques et Physiques, 33, 1930.
  10. ^ Jacques Herbrand (1930). Recherches sur la théorie de la démonstration (PDF) (Ph.D. thesis). A. Vol. 1252. Université de Paris. Here: pp. 96–97.
  11. ^ Claus-Peter Wirth; Jörg Siekmann; Christoph Benzmüller; Serge Autexier (2009). Lectures on Jacques Herbrand as a Logician (SEKI Report). DFKI. arXiv:0902.4682. Here: p. 56.
  12. ^ Robinson, J.A. (Jan 1965). "A Machine-Oriented Logic Based on the Resolution Principle". Journal of the ACM. 12 (1): 23–41. doi:10.1145/321250.321253. S2CID 14389185. Here: sect. 5.8, p. 32.
  13. ^ J.A. Robinson (1971). "Computational logic: The unification computation". Machine Intelligence. 6: 63–72.
  14. ^ David A. Duffy (1991). Principles of Automated Theorem Proving. New York: Wiley. ISBN 0-471-92784-8. Here: introduction of sect. 3.3.3 "Unification", p. 72.
  15. ^ de Champeaux, Dennis (Aug 2022). "Faster Linear Unification Algorithm" (PDF). Journal of Automated Reasoning. 66 (4): 845–860. doi:10.1007/s10817-022-09635-1.
  16. ^ Per Martelli & Montanari (1982).
  17. ^ Baader, Franz; Snyder, Wayne (2001). "Unification Theory" (PDF). Handbook of Automated Reasoning. pp. 445–533. doi:10.1016/B978-044450813-3/50010-2. ISBN 978-0-444-50813-3.
  18. ^ McBride, Conor (October 2003). "First-Order Unification by Structural Recursion". Journal of Functional Programming. 13 (6): 1061–1076. CiteSeerX 10.1.1.25.1516. doi:10.1017/S0956796803004957. ISSN 0956-7968. S2CID 43523380. Retrieved 30 March 2012.
  19. ^ E.g. Paterson & Wegman (1978), sect. 2, p. 159.
  20. ^ "Declarative integer arithmetic". SWI-Prolog. Retrieved 18 February 2024.
  21. ^ Jonathan Calder, Mike Reape, and Hank Zeevat, An algorithm for generation in unification categorial grammar. In Proceedings of the 4th Conference of the European Chapter of the Association for Computational Linguistics, pages 233–240, Manchester, England (10–12 April), University of Manchester Institute of Science and Technology, 1989.
  22. ^ Graeme Hirst and David St-Onge, Lexical chains as representations of context for the detection and correction of malapropisms, 1998.
  23. ^ Walther, Christoph (1985). "A Mechanical Solution of Schubert's Steamroller by Many-Sorted Resolution" (PDF). Artif. Intell. 26 (2): 217–224. doi:10.1016/0004-3702(85)90029-3. Archived from the original (PDF) on 2011-07-08. Retrieved 2013-06-28.
  24. ^ Smolka, Gert (Nov 1988). Logic Programming with Polymorphically Order-Sorted Types (PDF). Int. Workshop Algebraic and Logic Programming. LNCS. Vol. 343. Springer. pp. 53–70. doi:10.1007/3-540-50667-5_58.
  25. ^ Schmidt-Schauß, Manfred (Apr 1988). Computational Aspects of an Order-Sorted Logic with Term Declarations. Lecture Notes in Artificial Intelligence (LNAI). Vol. 395. Springer.
  26. ^ Gordon D. Plotkin, Lattice Theoretic Properties of Subsumption, Memorandum MIP-R-77, Univ. Edinburgh, Jun 1970.
  27. ^ Mark E. Stickel, A Unification Algorithm for Associative-Commutative Functions, Journal of the Association for Computing Machinery, vol. 28, no. 3, pp. 423–434, 1981.
  28. ^ F. Fages (1987). "Associative-Commutative Unification" (PDF). J. Symbolic Comput. 3 (3): 257–275. doi:10.1016/s0747-7171(87)80004-4. S2CID 40499266.
  29. ^ Franz Baader, Unification in Idempotent Semigroups is of Type Zero, J. Automat. Reasoning, vol. 2, no. 3, 1986.
  30. ^ J. Makanin, The Problem of Solvability of Equations in a Free Semi-Group, Akad. Nauk SSSR, vol. 233, no. 2, 1977.
  31. ^ Martin, U.; Nipkow, T. (1986). "Unification in Boolean Rings". In Jörg H. Siekmann (ed.). Proc. 8th CADE. LNCS. Vol. 230. Springer. pp. 506–513.
  32. ^ A. Boudet; J.P. Jouannaud; M. Schmidt-Schauß (1989). "Unification of Boolean Rings and Abelian Groups". Journal of Symbolic Computation. 8 (5): 449–477. doi:10.1016/s0747-7171(89)80054-9.
  33. ^ Baader and Snyder (2001), p. 486.
  34. ^ F. Baader and S. Ghilardi, Unification in modal and description logics, Logic Journal of the IGPL 19 (2011), no. 6, pp. 705–730.
  35. ^ P. Szabo, Unifikationstheorie erster Ordnung (First Order Unification Theory), Thesis, Univ. Karlsruhe, West Germany, 1982.
  36. ^ Jörg H. Siekmann, Universal Unification, Proc. 7th Int. Conf. on Automated Deduction, Springer LNCS vol. 170, pp. 1–42, 1984.
  37. ^ N. Dershowitz and G. Sivakumar, Solving Goals in Equational Languages, Proc. 1st Int. Workshop on Conditional Term Rewriting Systems, Springer LNCS vol. 308, pp. 45–55, 1988.
  38. ^ Fay (1979). "First-Order Unification in an Equational Theory". Proc. 4th Workshop on Automated Deduction. pp. 161–167.
  39. ^ Warren D. Goldfarb (1981). "The Undecidability of the Second-Order Unification Problem". TCS. 13 (2): 225–230. doi:10.1016/0304-3975(81)90040-2.
  40. ^ Gérard P. Huet (1973). "The Undecidability of Unification in Third Order Logic". Information and Control. 22 (3): 257–267. doi:10.1016/S0019-9958(73)90301-X.
  41. ^ Claudio Lucchesi: The Undecidability of the Unification Problem for Third Order Languages (Research Report CSRR 2059; Department of Computer Science, University of Waterloo, 1972).
  42. ^ Gérard Huet (1 June 1975). A Unification Algorithm for Typed Lambda-Calculus. Theoretical Computer Science.
  43. ^ Gérard Huet: Higher Order Unification 30 Years Later.
  44. ^ Gilles Dowek: Higher-Order Unification and Matching. Handbook of Automated Reasoning 2001: 1009–1062.
  45. ^ Miller, Dale (1991). "A Logic Programming Language with Lambda-Abstraction, Function Variables, and Simple Unification" (PDF). Journal of Logic and Computation. 1 (4): 497–536. doi:10.1093/logcom/1.4.497.
  46. ^ Libal, Tomer; Miller, Dale (May 2022). "Functions-as-constructors higher-order unification: extended pattern unification". Annals of Mathematics and Artificial Intelligence. 90 (5): 455–479. doi:10.1007/s10472-021-09774-y.
  47. ^ Gardent, Claire; Kohlhase, Michael; Konrad, Karsten (1997). "A Multi-Level, Higher-Order Unification Approach to Ellipsis". Submitted to European Association for Computational Linguistics (EACL). CiteSeerX 10.1.1.55.9018.
  48. ^ Wayne Snyder (Jul 1990). "Higher order E-unification". Proc. 10th Conference on Automated Deduction. LNAI. Vol. 449. Springer. pp. 573–587.
