In mathematics and probability theory, continuum percolation theory is a branch of mathematics that extends discrete percolation theory to continuous space (often Euclidean space ℝⁿ). More specifically, the underlying points of discrete percolation form types of lattices, whereas the underlying points of continuum percolation are often randomly positioned in some continuous space and form a type of point process. For each point, a random shape is frequently placed on it, and the shapes overlap with each other to form clumps or components. As in discrete percolation, a common research focus of continuum percolation is studying the conditions under which infinite or giant components occur.[1][2] Other concepts and analysis techniques are shared between these two types of percolation theory, as well as with the study of random graphs and random geometric graphs.
Continuum percolation arose from an early mathematical model for wireless networks,[2][3] which, with the rise of several wireless network technologies in recent years, has been generalized and studied in order to determine the theoretical bounds of information capacity and performance in wireless networks.[4][5] In addition to this setting, continuum percolation has gained application in other disciplines including biology, geology, and physics, such as the study of porous materials and semiconductors, while becoming a subject of mathematical interest in its own right.[6]
In the early 1960s Edgar Gilbert[3] proposed a mathematical model of wireless networks that gave rise to the field of continuum percolation theory, thus generalizing discrete percolation.[2] The underlying points of this model, sometimes known as the Gilbert disk model, were scattered uniformly in the infinite plane ℝ² according to a homogeneous Poisson process. Gilbert, who had noticed similarities between discrete and continuum percolation,[7] then used concepts and techniques from the theory of branching processes to show that a threshold value existed for the infinite or "giant" component.
The exact names, terminology, and definitions of these models may vary slightly depending on the source, which is also reflected in the use of point process notation.
A number of well-studied models exist in continuum percolation, which are often based on homogeneous Poisson point processes.
Consider a collection of points {xᵢ} in the plane ℝ² that form a homogeneous Poisson process Φ with constant (point) density λ. For each point of the Poisson process (i.e. xᵢ ∈ Φ), place a disk Dᵢ with its center located at the point xᵢ. If each disk Dᵢ has a random radius Rᵢ (drawn from a common distribution) that is independent of all the other radii and of all the underlying points {xᵢ}, then the resulting mathematical structure is known as a random disk model.
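To make this construction concrete, a realization of such a model can be simulated on a bounded window by first drawing a Poisson number of points and then placing them uniformly. A minimal Python sketch follows; the window size, density, and exponential radius distribution are illustrative assumptions, not part of the model's definition:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Homogeneous Poisson process on the window [0, L] x [0, L]:
# the number of points is Poisson(lam * L**2) and, given that count,
# the points are independently and uniformly distributed on the window.
lam, L = 1.0, 10.0
n = rng.poisson(lam * L**2)
points = rng.uniform(0.0, L, size=(n, 2))

# Random disk model: each point x_i carries a radius R_i drawn i.i.d.
# from a common distribution (exponential here, an illustrative choice),
# independent of all other radii and of the point positions.
radii = rng.exponential(scale=0.5, size=n)
```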
Given a random disk model, if the set union of all the disks {Dᵢ} is taken, then the resulting structure ⋃ᵢ Dᵢ is known as a Boolean–Poisson model (also known as simply the Boolean model),[8] which is a commonly studied model in continuum percolation[1] as well as stochastic geometry.[8] If all the radii are set to some common constant, say r > 0, then the resulting model is sometimes known as the Gilbert disk (Boolean) model.[9]
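In the Gilbert disk model, two disks of common radius r overlap exactly when their centers lie within distance 2r, so the clumps can be extracted with a union–find pass over overlapping pairs. A minimal sketch (the function name gilbert_components is our own, and a k-d tree would replace the quadratic scan in a serious simulation):

```python
import numpy as np

def gilbert_components(points, r):
    """Label the connected components (clumps) of a Gilbert disk model:
    disks of common radius r centred on `points`, with two disks in the
    same component when their centres are within distance 2r."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.hypot(*(points[i] - points[j])) <= 2 * r:
                parent[find(i)] = find(j)  # merge overlapping disks

    return [find(i) for i in range(n)]
```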
The disk model can be generalized to more arbitrary shapes where, instead of a disk, a random compact (hence bounded and closed in ℝ²) shape Sᵢ is placed on each point xᵢ. Again, each shape Sᵢ is drawn from a common distribution and is independent of all other shapes and of the underlying (Poisson) point process. This model is known as the germ–grain model, where the underlying points {xᵢ} are the germs and the random compact shapes Sᵢ are the grains. The set union of all the shapes forms a Boolean germ–grain model. Typical choices for the grains include disks, random polygons, and segments of random length.[8]
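As an illustration of non-disk grains, the following sketch draws a germ–grain realization whose grains are randomly oriented segments; the window size, intensity, and length distribution are again illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(seed=2)

# Germs: a homogeneous Poisson process on [0, L] x [0, L].
lam, L = 0.5, 10.0
germs = rng.uniform(0.0, L, size=(rng.poisson(lam * L**2), 2))

# Grains: segments of random length and uniformly random orientation,
# i.i.d. and independent of the germs (the length law is illustrative).
lengths = rng.uniform(0.5, 1.5, size=len(germs))
angles = rng.uniform(0.0, np.pi, size=len(germs))
half = 0.5 * lengths[:, None] * np.column_stack((np.cos(angles), np.sin(angles)))
segments = np.stack((germs - half, germs + half), axis=1)  # endpoints, shape (n, 2, 2)
```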
Boolean models are also examples of stochastic processes known as coverage processes.[10] The above models can be extended from the plane ℝ² to general Euclidean space ℝⁿ.
In the Boolean–Poisson model there can be isolated groups or clumps of disks that do not contact any other clumps of disks. These clumps are known as components. If the area (or volume in higher dimensions) of a component is infinite, it is said to be an infinite or "giant" component. A major focus of percolation theory is establishing the conditions under which giant components exist in models, which has parallels with the study of random networks. If no giant component exists, the model is said to be subcritical. The conditions for criticality of the giant component naturally depend on parameters of the model such as the density of the underlying point process.
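On a finite window, this phase transition shows up as a jump in the fraction of points belonging to the largest clump as the density is raised past the threshold. A sketch of such a density sweep for the Gilbert disk model, using SciPy's k-d tree and sparse connected-components routines (the densities, radius, and window size are illustrative):

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components
from scipy.spatial import cKDTree

def largest_component_fraction(lam, r, L, rng):
    """Fraction of points in the largest clump of a Gilbert disk model
    (density lam, disk radius r) on an L x L window -- a finite-window
    proxy for the emergence of a giant component."""
    n = rng.poisson(lam * L * L)
    if n == 0:
        return 0.0
    pts = rng.uniform(0.0, L, size=(n, 2))
    # Disks overlap exactly when their centres are within distance 2r.
    pairs = np.array(list(cKDTree(pts).query_pairs(2.0 * r)))
    if len(pairs) == 0:
        return 1.0 / n
    adj = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), shape=(n, n))
    _, labels = connected_components(adj, directed=False)
    return np.bincount(labels).max() / n

rng = np.random.default_rng(seed=3)
for lam in (0.5, 1.0, 1.5, 2.0):
    # The fraction stays small below the threshold and jumps above it.
    print(lam, largest_component_fraction(lam, r=0.5, L=40.0, rng=rng))
```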
The excluded area of a placed object is defined as the minimal area around the object into which an additional object cannot be placed without overlapping with the first object. For identical convex shapes of area A and perimeter C whose relative orientation is uniformly random, integral geometry gives the mean excluded area as 2A + C²/(2π). For example, in a system of randomly oriented homogeneous rectangles of length l, width w and aspect ratio r = l/w, the average excluded area is given by:[11]

$$\langle A_r \rangle = 2lw + \frac{2(l+w)^2}{\pi}$$
In a system of identical ellipses with semi-axes a and b, ratio r = a/b, and perimeter C, the average excluded area is given by:[12]

$$\langle A_r \rangle = 2\pi ab + \frac{C^2}{2\pi}$$
The excluded area theory states that the critical number density (percolation threshold) Nc of a system is inversely proportional to the average excluded area Ar:

$$N_c \propto \frac{1}{\langle A_r \rangle}.$$
It has been shown via Monte Carlo simulations that the percolation threshold in both homogeneous and heterogeneous systems of rectangles or ellipses is dominated by the average excluded area and can be approximated fairly well by the linear relation

$$N_c = \frac{c}{\langle A_r \rangle},$$

with the proportionality constant c in the range 3.1–3.5.[11][12]
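Assuming the formulas above, the predicted threshold for, say, randomly oriented unit squares follows by direct substitution; a short sketch (the value c = 3.3 is an arbitrary midpoint of the reported range):

```python
import math

def rectangle_excluded_area(l, w):
    # Mean excluded area of identical, randomly oriented rectangles:
    # <A_r> = 2*l*w + 2*(l + w)**2 / pi.
    return 2.0 * l * w + 2.0 * (l + w) ** 2 / math.pi

def predicted_threshold(l, w, c=3.3):
    # Excluded-area estimate N_c ~ c / <A_r>, with c in the 3.1-3.5 range.
    return c / rectangle_excluded_area(l, w)

print(rectangle_excluded_area(1.0, 1.0))  # unit squares: 2 + 8/pi ~ 4.55
print(predicted_threshold(1.0, 1.0))      # ~ 0.73 squares per unit area
```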
The applications of percolation theory are various and range from materials science to wireless communication systems. Often the work involves showing that a type of phase transition occurs in the system.
Wireless networks are sometimes best represented with stochastic models owing to their complexity and unpredictability, hence continuum percolation has been used to develop stochastic geometry models of wireless networks. For example, the tools of continuum percolation theory and coverage processes have been used to study the coverage and connectivity of sensor networks.[13][14] One of the main limitations of these networks is energy consumption, since each node usually has a battery and an embedded form of energy harvesting. To reduce energy consumption in sensor networks, various sleep schemes have been suggested that entail having a subcollection of nodes go into a low energy-consuming sleep mode. These sleep schemes clearly affect the coverage and connectivity of sensor networks. Simple power-saving models have been proposed, such as the uncoordinated 'blinking' model, where at each time interval each node independently powers down (or up) with some fixed probability. Using the tools of percolation theory, a blinking Boolean–Poisson model has been analyzed to study the latency and connectivity effects of such a simple power scheme.[13]
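The uncoordinated blinking scheme is easy to prototype: in each time slot every node is awake independently with probability p, so by Poisson thinning the awake nodes again form a homogeneous Poisson process, now of density pλ, and per-slot coverage is that of a sparser Boolean model. A sketch under illustrative parameter choices (density, window, sensing radius, and p are all assumptions):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(seed=4)

# Sensor locations: homogeneous Poisson process on [0, L] x [0, L].
lam, L, r_sense, p = 2.0, 20.0, 1.0, 0.6
nodes = rng.uniform(0.0, L, size=(rng.poisson(lam * L**2), 2))

# Probe locations used to estimate the covered area fraction per slot.
probes = rng.uniform(0.0, L, size=(5000, 2))

for t in range(3):  # a few independent time slots
    awake = nodes[rng.random(len(nodes)) < p]  # independent blinking
    dists, _ = cKDTree(awake).query(probes)    # distance to nearest awake sensor
    covered = (dists <= r_sense).mean()
    print(f"slot {t}: covered fraction ~ {covered:.3f}")
```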