The direct modeling of likelihood provides many advantages. For example, the negative log-likelihood can be directly computed and minimized as the loss function. Additionally, novel samples can be generated by sampling from the initial distribution and applying the flow transformation.
Let z_0 be a (possibly multivariate) random variable with distribution p_0(z_0).
For k = 1, ..., K, let z_k = f_k(z_{k-1}) be a sequence of random variables transformed from z_0. The functions f_1, ..., f_K should be invertible, i.e. the inverse function f_k^{-1} exists. The final output z_K models the target distribution. By the change-of-variables formula, the log likelihood is log p_K(z_K) = log p_0(z_0) − Σ_{k=1}^{K} log |det (∂f_k/∂z_{k−1})|.
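The change-of-variables bookkeeping can be sketched numerically. The two-layer scalar affine flow below is a hypothetical toy (parameters invented for illustration), but the recipe — invert each layer, accumulate log-absolute-determinants, evaluate the base density — is the general one:

```python
import numpy as np

# Hypothetical two-layer flow: each layer is an invertible scalar affine map
# z_k = a_k * z_{k-1} + b_k, with log|det Jacobian| = log|a_k|.
layers = [(2.0, 1.0), (0.5, -3.0)]  # (a_k, b_k), assumed parameters

def forward(z0):
    z, log_det = z0, 0.0
    for a, b in layers:
        z = a * z + b
        log_det += np.log(abs(a))
    return z, log_det

def log_likelihood(x):
    # Invert the flow to recover z_0, then apply the change-of-variables formula:
    # log p_K(x) = log p_0(z_0) - sum_k log|det(df_k/dz)|
    z, log_det = x, 0.0
    for a, b in reversed(layers):
        z = (z - b) / a
        log_det += np.log(abs(a))
    log_p0 = -0.5 * z**2 - 0.5 * np.log(2 * np.pi)  # standard normal base
    return log_p0 - log_det

# Sanity check: the transformed density integrates to ~1.
xs = np.linspace(-20.0, 20.0, 100001)
mass = np.sum(np.exp(log_likelihood(xs))) * (xs[1] - xs[0])
```

For these toy parameters the composed flow is x = z_0 − 2.5, so the model density is a unit-variance normal centered at −2.5, and the numerical mass is close to 1.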
Learning probability distributions by differentiating such log-Jacobians originated in the Infomax (maximum-likelihood) approach to ICA,[4] which forms a single-layer (K = 1) flow-based model. Relatedly, a single-layer precursor of conditional generative flows appeared in [5].
To efficiently compute the log likelihood, the functions f_1, ..., f_K should be easily invertible, and the determinants of their Jacobians should be simple to compute. In practice, the functions are modeled using deep neural networks, and are trained to minimize the negative log-likelihood of data samples from the target distribution. These architectures are usually designed such that only the forward pass of the neural network is required in both the inverse and the Jacobian determinant calculations. Examples of such architectures include NICE,[6] RealNVP,[7] and Glow.[8]
As is generally done when training a deep learning model, the goal with normalizing flows is to minimize the Kullback–Leibler divergence between the model's likelihood and the target distribution to be estimated. Denoting by p_θ(x) the model's likelihood and by p*(x) the target distribution to learn, the (forward) KL-divergence is:

D_KL(p*(x) ‖ p_θ(x)) = −E_{x∼p*}[log p_θ(x)] + E_{x∼p*}[log p*(x)].
The second term on the right-hand side of the equation corresponds to the entropy of the target distribution and is independent of the parameter θ we want the model to learn, which only leaves the expectation of the negative log-likelihood under the target distribution to minimize. This intractable term can be approximated with a Monte-Carlo method by importance sampling. Indeed, if we have a dataset {x_i}_{i=1}^{N} of samples each independently drawn from the target distribution p*(x), then this term can be estimated as:

−E_{x∼p*}[log p_θ(x)] ≈ −(1/N) Σ_{i=1}^{N} log p_θ(x_i).
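As a toy numerical illustration (the model and target densities below are invented stand-ins, not from the source), the intractable expectation is approximated by averaging the negative log-likelihood over samples from the target:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model density p_theta: a standard normal stands in for a flow's likelihood.
def log_p_theta(x):
    return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

# Samples from the (in practice unknown) target distribution p*, here N(1, 1).
data = rng.normal(loc=1.0, scale=1.0, size=200_000)

# Monte-Carlo estimate of the intractable term E_{x~p*}[-log p_theta(x)].
nll_estimate = -np.mean(log_p_theta(data))

# For this toy pair the expectation is analytic:
# 0.5*log(2*pi) + 0.5*(sigma^2 + mu^2) = 0.5*log(2*pi) + 1.0
nll_exact = 0.5 * np.log(2 * np.pi) + 1.0
```

Minimizing this sample average over θ is exactly the maximum-likelihood training objective described above.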
The planar flow is the earliest example.[11] Fix some activation function h, and let θ = (u, w, b) with the appropriate dimensions; then f(x) = x + u h(⟨w, x⟩ + b). The inverse f^{-1} has no closed-form solution in general.
The Jacobian determinant is |det(∂f/∂x)| = |1 + h′(⟨w, x⟩ + b) ⟨u, w⟩|, by the matrix determinant lemma.
For f to be invertible everywhere, this determinant must be nonzero everywhere. For example, h = tanh and ⟨u, w⟩ > −1 satisfies the requirement.
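The closed-form determinant can be checked numerically. The sketch below compares 1 + h′(⟨w,x⟩+b)⟨u,w⟩ against a finite-difference Jacobian of the planar map, with randomly chosen (hypothetical) parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
u, w, b = rng.normal(size=d), rng.normal(size=d), 0.3
# Keep the flow invertible: enforce <u, w> > -1 by flipping u if necessary.
if u @ w <= -1:
    u = -u

def planar(x):
    # f(x) = x + u * tanh(<w, x> + b)
    return x + u * np.tanh(w @ x + b)

x = rng.normal(size=d)

# Closed form via the matrix determinant lemma: det(I + h' u w^T) = 1 + h' <u, w>
h_prime = 1.0 - np.tanh(w @ x + b) ** 2
det_closed = 1.0 + h_prime * (u @ w)

# Numerical Jacobian for comparison (central differences); column i is df/dx_i.
eps = 1e-6
J = np.stack([(planar(x + eps * e) - planar(x - eps * e)) / (2 * eps)
              for e in np.eye(d)], axis=1)
det_numeric = np.linalg.det(J)
```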
The Real Non-Volume Preserving (Real NVP) model generalizes the NICE model by:[7]

f(x) = (x_{1:d}, x_{d+1:n} ⊙ e^{s(x_{1:d})} + t(x_{1:d})),

where s and t are arbitrary functions (e.g. neural networks) and ⊙ denotes elementwise multiplication.
Its inverse is x_{d+1:n} = (y_{d+1:n} − t(y_{1:d})) ⊙ e^{−s(y_{1:d})}, and its Jacobian determinant is exp(Σ_i s(x_{1:d})_i). The NICE model is recovered by setting s = 0. Since the Real NVP map keeps the first and second halves of the vector separate, a permutation is usually added after every Real NVP layer.
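A minimal sketch of one affine coupling layer follows. The scale and translation functions here are arbitrary fixed maps standing in for the neural networks s and t; the point is that the inverse and the log-determinant never require inverting s or t themselves:

```python
import numpy as np

rng = np.random.default_rng(2)
d, half = 6, 3

# Hypothetical stand-ins for the s(.) and t(.) networks (never inverted).
Ws = rng.normal(size=(half, half)) * 0.1
Wt = rng.normal(size=(half, half)) * 0.1

def s(x1): return np.tanh(Ws @ x1)
def t(x1): return Wt @ x1

def coupling_forward(x):
    x1, x2 = x[:half], x[half:]
    y2 = x2 * np.exp(s(x1)) + t(x1)
    # log|det Jacobian| is just the sum of the log-scales.
    return np.concatenate([x1, y2]), np.sum(s(x1))

def coupling_inverse(y):
    y1, y2 = y[:half], y[half:]
    x2 = (y2 - t(y1)) * np.exp(-s(y1))
    return np.concatenate([y1, x2])

x = rng.normal(size=d)
y, log_det = coupling_forward(x)
x_rec = coupling_inverse(y)
```

Because the first half passes through unchanged, both directions use only forward evaluations of s and t, which is what makes these layers cheap.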
In the generative flow (Glow) model,[8] each layer has 3 parts:
channel-wise affine transform, y = s ⊙ x + b, with Jacobian determinant ∏_c s_c (per spatial position).
invertible 1x1 convolution, y = Kx, with Jacobian determinant det K (per spatial position). Here K is any invertible matrix.
Real NVP coupling layer, with Jacobian as described in Real NVP.
The idea of using the invertible 1x1 convolution is to permute (mix) all channels in general, instead of merely swapping the first and second half, as in Real NVP.
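The following sketch illustrates why the 1x1 convolution is cheap to invert and to account for: it applies one c-by-c matrix K (hypothetical, randomly chosen here) at every spatial position, so the Jacobian is block-diagonal with h·w copies of K and the log-determinant is h·w·log|det K|:

```python
import numpy as np

rng = np.random.default_rng(3)
c, h, w = 3, 4, 4                            # channels, height, width
K = rng.normal(size=(c, c)) + 2 * np.eye(c)  # assumed invertible mixing matrix

def conv1x1(M, x):
    # A 1x1 convolution applies the same c-by-c matrix at every spatial
    # position, mixing all channels (a learned generalization of a permutation).
    return np.einsum('ij,jhw->ihw', M, x)

x = rng.normal(size=(c, h, w))
y = conv1x1(K, x)
x_rec = conv1x1(np.linalg.inv(K), y)  # inverse = same conv with K^-1

# Jacobian is block-diagonal with h*w copies of K:
sign, logdet_K = np.linalg.slogdet(K)
log_det = h * w * logdet_K
```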
Instead of constructing a flow by function composition, another approach is to formulate the flow as a continuous-time dynamic.[14][15] Let z_0 be the latent variable with distribution p(z_0). Map this latent variable to data space with the following flow function:

x = F(z_0) = z_T = z_0 + ∫_0^T f(z_t, t) dt,
where f is an arbitrary function and can be modeled with e.g. neural networks. By the instantaneous change-of-variables formula, the log-density then evolves according to ∂ log p(z_t)/∂t = −tr(∂f/∂z_t).
Since the trace depends only on the diagonal of the Jacobian ∂f/∂z_t, this allows a "free-form" Jacobian.[16] Here, "free-form" means that there is no restriction on the Jacobian's form. It is contrasted with previous discrete models of normalizing flow, where the Jacobian is carefully designed to be only upper- or lower-triangular, so that it can be evaluated efficiently.
The trace can be estimated by "Hutchinson's trick":[17][18]
Given any matrix W ∈ R^{n×n}, and any random vector u with E[uu^T] = I, we have E[u^T W u] = tr(W). (Proof: expand the expectation directly.)
Usually, the random vector u is sampled from N(0, I) (normal distribution) or from {±1}^n (Rademacher distribution).
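Hutchinson's trick is easy to verify numerically. The sketch below estimates the trace of a random matrix with Rademacher probe vectors:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
W = rng.normal(size=(n, n))

# Hutchinson's estimator: E[u^T W u] = tr(W) whenever E[u u^T] = I.
num_samples = 200_000
u = rng.choice([-1.0, 1.0], size=(num_samples, n))   # Rademacher vectors
estimates = np.einsum('bi,ij,bj->b', u, W, u)        # u^T W u per sample
trace_estimate = estimates.mean()
trace_exact = np.trace(W)
```

In a CNF, the same averaging is applied with W replaced by Jacobian-vector products of the network, so the full Jacobian is never materialized.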
When f is implemented as a neural network, neural ODE methods[19] are needed. Indeed, CNF was first proposed in the same paper that proposed neural ODE.
There are two main deficiencies of CNF. One is that a continuous flow must be a homeomorphism, thus preserving orientation and ambient isotopy (for example, it is impossible to flip a left hand into a right hand by continuously deforming space, impossible to turn a sphere inside out, or to undo a knot). The other is that the learned flow f might be ill-behaved, due to degeneracy (that is, there are an infinite number of possible f that all solve the same problem).
By adding extra dimensions, the CNF gains enough freedom to reverse orientation and go beyond ambient isotopy (just as one can pick up a polygon from a desk and flip it around in 3-space, or unknot a knot in 4-space), yielding the "augmented neural ODE".[20]
To regularize the flow, one can impose regularization losses. The paper[17] proposed the following regularization loss based on optimal transport theory:

∫_0^T ( λ_K ‖f(z_t, t)‖² + λ_J ‖∇_z f(z_t, t)‖²_F ) dt,

where λ_K, λ_J > 0 are hyperparameters. The first term punishes the model for oscillating the flow field over time, and the second term punishes it for oscillating the flow field over space. Both terms together guide the model into a flow that is smooth (not "bumpy") over space and time.
When a probabilistic flow transforms a distribution on an n-dimensional smooth manifold embedded in R^m, where n < m, and where the transformation is specified as a function f: R^m → R^m, the scaling factor between the source and transformed PDFs is not given by the naive computation of the determinant of the Jacobian (which is zero), but instead by the determinant(s) of one or more suitably defined matrices. This section is an interpretation of the tutorial in the appendix of Sorrenson et al. (2023),[22] where the more general case of non-isometrically embedded Riemannian manifolds is also treated. Here we restrict attention to isometrically embedded manifolds.
As running examples of manifolds with smooth, isometric embedding in R^m we shall use: the simplex Δ^n = {x ∈ R^{n+1} : x_i > 0, Σ_i x_i = 1}, and the unit sphere S^n = {x ∈ R^{n+1} : ‖x‖ = 1}.
As a first example of a spherical manifold flow transform, consider the normalized linear transform, f_A, which radially projects onto the unit sphere the output of an invertible linear transform, parametrized by the invertible matrix A:

f_A(x) = Ax / ‖Ax‖.
In full Euclidean space, f_A is not invertible, but if we restrict the domain and co-domain to the unit sphere, then f_A is invertible (more specifically, it is a bijection, a homeomorphism and a diffeomorphism), with inverse f_A^{-1} = f_{A^{-1}}. The Jacobian of f_A at x ∈ S^n, which is (I − yy^T)A/‖Ax‖ with y = f_A(x), has rank n and determinant zero; while, as explained here, the factor (see subsection below) relating source and transformed densities is: R_{f_A}(x) = |det A| / ‖Ax‖^{n+1}.
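The claim that the restriction to the sphere makes f_A invertible, with inverse f_{A^{-1}}, can be checked numerically (A below is a hypothetical, randomly chosen invertible matrix):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 3
A = rng.normal(size=(n, n)) + 2 * np.eye(n)   # assumed invertible

def normalized_linear(M, x):
    # f_M(x) = M x / ||M x||: linear map followed by radial projection.
    y = M @ x
    return y / np.linalg.norm(y)

# A point on the unit sphere.
x = rng.normal(size=n)
x /= np.linalg.norm(x)

# Restricted to the sphere, the inverse of f_A is f_{A^-1}.
y = normalized_linear(A, x)
x_rec = normalized_linear(np.linalg.inv(A), y)
```

The intuition: A^{-1} maps y back onto the ray through x, and renormalizing recovers x exactly because x already has unit norm.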
For n < m, let M be an n-dimensional manifold with a smooth, isometric embedding into R^m. Let f: M → M be a smooth flow transform with range restricted to M. Let x ∈ M be sampled from a distribution with density p_X. Let y = f(x), with resultant (pushforward) density p_Y. Let R_x ⊂ M be a small, convex region containing x and let R_y = f(R_x) be its image, which contains y; then by conservation of probability mass:

p_X(x) vol(R_x) ≈ p_Y(y) vol(R_y),
where volume (for very small regions) is given by Lebesgue measure in n-dimensional tangent space. By making the regions infinitesimally small, the factor relating the two densities is the ratio of volumes, which we term the differential volume ratio, R_f(x) = vol(R_y)/vol(R_x), so that p_Y(y) = p_X(x) / R_f(x).
To obtain concrete formulas for volume on the n-dimensional manifold, we construct R_x by mapping an n-dimensional rectangle in (local) coordinate space to the manifold via a smooth embedding function, e: R^n → R^m. At very small scale, the embedding function becomes essentially linear, so that R_x is a parallelotope (the multidimensional generalization of a parallelogram). Similarly, the flow transform f becomes linear, so that the image, R_y, is also a parallelotope. In R^n, we can represent an n-dimensional parallelotope with an n × n matrix whose column vectors are a set of edges (meeting at a common vertex) that span the parallelotope. The volume is given by the absolute value of the determinant of this matrix. If, more generally (as is the case here), an n-dimensional parallelotope is embedded in R^m, it can be represented with a (tall) m × n matrix, say P. Denoting the parallelotope spanned by the columns of P as [P], its volume is then given by the square root of the Gram determinant:

vol[P] = sqrt(det(P^T P)).
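The Gram-determinant volume formula can be exercised directly. The sketch below checks that it reduces to |det P| in the square case and gives the expected area for a unit square embedded in R^3:

```python
import numpy as np

def volume(P):
    # Volume of the parallelotope spanned by the columns of a (possibly tall)
    # m-by-n matrix P, via the square root of the Gram determinant.
    return np.sqrt(np.linalg.det(P.T @ P))

rng = np.random.default_rng(6)

# Square case: reduces to |det P|.
P_sq = rng.normal(size=(3, 3))
assert np.isclose(volume(P_sq), abs(np.linalg.det(P_sq)))

# Embedded case: a unit square lying in a 2-plane of R^3 has area 1.
P_tall = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [0.0, 0.0]])
area = volume(P_tall)
```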
In the sections below, we show various ways to use this volume formula to derive the differential volume ratio.
As a first example, we develop expressions for the differential volume ratio of a simplex flow, f: Δ^n → Δ^n, where Δ^n is the n-simplex embedded in R^{n+1}. Define the embedding function e: R^n → R^{n+1}:

e(x̃) = (x̃_1, ..., x̃_n, 1 − Σ_{i=1}^n x̃_i),
which maps a conveniently chosen, n-dimensional representation, x̃ = x_{1:n}, to the embedded manifold. The Jacobian is E = ∂e/∂x̃, an (n+1) × n matrix whose top n × n block is the identity and whose last row is all −1. To define vol(R_x), the differential volume element at the transformation input x = e(x̃), we start with a rectangle in x̃-space, having (signed) differential side-lengths dx_1, ..., dx_n, from which we form the square diagonal matrix D = diag(dx_1, ..., dx_n), the columns of which span the rectangle. At very small scale, we get R_x = [ED], with:

vol(R_x) = sqrt(det((ED)^T ED)) = sqrt(det(E^T E)) |det D| = sqrt(n+1) ∏_i |dx_i|,
For the 1-simplex (blue) embedded in R^2, when we pull back Lebesgue measure from tangent space (parallel to the simplex), via the embedding e, with Jacobian E = (1, −1)^T, a scaling factor of sqrt(2) results.
To understand the geometric interpretation of the factor sqrt(n+1), see the example for the 1-simplex in the diagram at right.
The differential volume element at the transformation output, y = f(x), is the parallelotope R_y = [FED], where F = ∂f/∂x is the Jacobian of f at x. Its volume is:

vol(R_y) = sqrt(det((FED)^T FED)) = sqrt(det(E^T F^T F E)) |det D|,
so that the factor |det D| cancels in the volume ratio, which can now already be numerically evaluated. It can however be rewritten in a sometimes more convenient form by also introducing the representation function, r(x) = x_{1:n}, which simply extracts the first n components; its Jacobian, R = ∂r/∂x, consists of the first n rows of the (n+1) × (n+1) identity matrix. Observe that, since f = e ∘ g ∘ r on the simplex, where g = r ∘ f ∘ e is the flow expressed in the n-dimensional representation, the chain rule for function composition gives: F E = E G, with G = ∂g/∂x̃. By plugging this expansion into the above Gram determinant and then refactoring it as a product of determinants of square matrices, we can extract the factor sqrt(det(E^T E)) = sqrt(n+1), which now also cancels in the ratio, which finally simplifies to the determinant of the Jacobian of the "sandwiched" flow transformation, g = r ∘ f ∘ e:

R_f(x) = |det G|,
which, writing x̃ = r(x), ỹ = r(y) and g = r ∘ f ∘ e, can be used to derive the pushforward density after a change of variables, ỹ = g(x̃):

p_Y(ỹ) = p_X(x̃) / |det ∂g/∂x̃|.
This formula is valid only because the simplex is flat and the Jacobian, E, is constant. The more general case for curved manifolds is discussed below, after we present two concrete examples of simplex flow transforms.
A calibration transform, which is sometimes used in machine learning for post-processing of the (class posterior) outputs of a probabilistic (n+1)-class classifier,[23][24] uses the softmax function to renormalize categorical distributions after scaling and translation of the input distributions in log-probability space. For x ∈ Δ^n and with parameters a ∈ R and b ∈ R^{n+1}, the transform can be specified as:

y = f_{a,b}(x) = softmax(a log x + b),
where the log is applied elementwise. After some algebra the differential volume ratio can be expressed as:

R_f(x) = a^n ∏_{i=1}^{n+1} (y_i / x_i).
This result can also be obtained by factoring the density of the SGB distribution,[25] which is obtained by sending Dirichlet variates through f_{a,b}.
While calibration transforms are most often trained as discriminative models, the reinterpretation here as a probabilistic flow also allows the design of generative calibration models based on this transform. When used for calibration, the restriction a > 0 can be imposed to prevent direction reversal in log-probability space. With the additional restriction b = 0, this transform (with discriminative training) is known in machine learning as temperature scaling.
The above calibration transform can be generalized to f_{A,b}, with parameters b ∈ R^{n+1} and invertible A ∈ R^{(n+1)×(n+1)}:[26]

y = f_{A,b}(x) = softmax(A log x + b),
where the condition that A has 1 (the all-ones vector) as an eigenvector, say A1 = λ1, ensures invertibility by sidestepping the information loss due to the invariance: softmax(z + c1) = softmax(z). Note in particular that A = aI is the only allowed diagonal parametrization, in which case we recover f_{a,b}, while (for n > 1) generalization is possible with non-diagonal matrices. The inverse is:

f_{A,b}^{-1}(y) = softmax(A^{-1}(log y − b)).
The differential volume ratio is:

R_f(x) = (|det A| / |λ|) ∏_{i=1}^{n+1} (y_i / x_i),

where λ is the eigenvalue of A associated with the eigenvector 1.
If f_{A,b} is to be used as a calibration transform, a further constraint could be imposed, for example that A be positive definite, so that the eigenvalue associated with 1 is positive, which avoids direction reversals. (This is one possible generalization of the restriction a > 0 to the parameter A.)
For n = 1 and A positive definite, f_{a,b} and f_{A,b} are equivalent in the sense that in both cases, the output log-odds as a function of the input log-odds is a straight line, the (positive) slope and offset of which are functions of the transform parameters. For n > 1, f_{A,b} does generalize f_{a,b}.
It must however be noted that chaining multiple f_{A,b} flow transformations does not give a further generalization, because:

f_{A_2,b_2} ∘ f_{A_1,b_1} = f_{A_2 A_1, A_2 b_1 + b_2}.
In fact, the set of transformations f_{A,b} forms a group under function composition. The set of transformations f_{a,b} forms a subgroup.
Also see: Dirichlet calibration,[27] which generalizes f_{A,b} by not placing any restriction on the matrix A, so that invertibility is not guaranteed. While Dirichlet calibration is trained as a discriminative model, f_{A,b} can also be trained as part of a generative calibration model.
Consider a flow f on a curved manifold, for example the unit sphere S^n, which we equip with the embedding function e, that maps a set of n angular spherical coordinates to S^n ⊂ R^{n+1}. The Jacobian E of e is non-constant and we have to evaluate it at both input (E_x) and output (E_y). The same applies to the representation function r that recovers spherical coordinates from points on S^n, for which we need the Jacobian at the output (R_y). The differential volume ratio now generalizes to:

R_f(x) = |det(R_y F E_x)| sqrt(det(E_y^T E_y) / det(E_x^T E_x)).
For geometric insight, consider S^2, where the spherical coordinates are co-latitude, θ, and longitude, φ. At (θ, φ), we get sqrt(det(E^T E)) = sin θ, which gives the radius of the circle at that latitude (compare e.g. the polar circle to the equator). The differential volume (surface area on the sphere) is: dV = sin θ dθ dφ.
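This factor can be confirmed numerically from the Gram determinant of the spherical-coordinate embedding:

```python
import numpy as np

def embed(theta, phi):
    # Embedding of S^2 via co-latitude theta and longitude phi.
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def sqrt_gram(theta, phi, eps=1e-6):
    # Numerical 3x2 Jacobian E of the embedding, then sqrt(det(E^T E)).
    E = np.stack([(embed(theta + eps, phi) - embed(theta - eps, phi)) / (2 * eps),
                  (embed(theta, phi + eps) - embed(theta, phi - eps)) / (2 * eps)],
                 axis=1)
    return np.sqrt(np.linalg.det(E.T @ E))

theta = 1.0  # co-latitude in radians
val = sqrt_gram(theta, 0.7)
# Matches the closed form sin(theta), independent of phi.
```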
The above derivation for R_f(x) is fragile in the sense that when using fixed functions e and r, there may be places where they are not well-defined, for example at the poles of the 2-sphere where longitude is arbitrary. This problem is sidestepped (using standard manifold machinery) by generalizing to local coordinates (charts), where in the vicinities of x and y, we map from local n-dimensional coordinates to the manifold and back using the respective function pairs (e_x, r_x) and (e_y, r_y). We continue to use the same notation for the Jacobians of these functions (E_x, R_y), so that the above formula for R_f(x) remains valid.
We can however choose our local coordinate system in a way that simplifies the expression for R_f(x) and indeed also its practical implementation.[22] Let π be a smooth idempotent projection (π ∘ π = π) from the projectible set, P ⊆ R^m, onto the embedded manifold. For example:
The positive orthant of R^{n+1} is projected onto the simplex as: π(z) = z / Σ_i z_i.
Non-zero vectors in R^{n+1} are projected onto the unit sphere as: π(z) = z / ‖z‖.
For every x ∈ M, we require of π that its Jacobian at x, Π = ∂π/∂z evaluated at z = x, has rank n (the manifold dimension), in which case Π is an idempotent linear projection onto the local tangent space (orthogonal for the unit sphere: Π = I − xx^T; oblique for the simplex: Π = I − x1^T). The columns of Π span the n-dimensional tangent space at x. We use the notation T_x, for any m × n matrix with orthonormal columns (T_x^T T_x = I_n) that span the local tangent space. Also note: Π T_x = T_x. We can now choose our local coordinate embedding function, e_x: R^n → M:

e_x(x̃) = π(x + T_x x̃).
Since the Jacobian E_x = Π T_x = T_x is injective (full rank: n), a local (not necessarily unique) left inverse, say r_x with Jacobian R_x, exists such that r_x(e_x(x̃)) = x̃ and R_x E_x = I_n. In practice we do not need the left inverse function itself, but we do need its Jacobian, for which the above equation does not give a unique solution. We can however enforce a unique solution for the Jacobian by choosing the left inverse as r_x(z) = T_x^T (z − x), so that:

R_x = T_x^T.
We can now finally plug E_x = T_x and R_y = T_y^T into our previous expression for R_f(x), the differential volume ratio, which because of the orthonormal Jacobians simplifies to:[28]

R_f(x) = |det(T_y^T F T_x)|.
For learning the parameters of a manifold flow transformation, we need access to the differential volume ratio, R_f(x), or at least to its gradient w.r.t. the parameters. Moreover, for some inference tasks, we need access to R_f(x) itself. Practical solutions include:
Sorrenson et al. (2023)[22] give a solution for computationally efficient stochastic approximation of the parameter gradient of log R_f(x).
For some hand-designed flow transforms, R_f(x) can be analytically derived in closed form, for example the above-mentioned simplex calibration transforms. Further examples are given below in the section on simple spherical flows.
On a software platform equipped with linear algebra and automatic differentiation, R_f(x) can be automatically evaluated, given access to only f and π.[29] But this is expensive for high-dimensional data, with computational cost at least cubic in the dimension. Even then, the slow automatic solution can be invaluable as a tool for numerically verifying hand-designed closed-form solutions.
In the machine learning literature, various complex spherical flows formed by deep neural network architectures may be found.[22] In contrast, this section compiles from the statistics literature the details of three very simple spherical flow transforms, with simple closed-form expressions for inverses and differential volume ratios. These flows can be used individually, or chained, to generalize distributions on the unit sphere, S^n. All three flows are compositions of an invertible affine transform in R^{n+1}, followed by radial projection back onto the sphere. The flavours we consider for the affine transform are: pure translation, pure linear and general affine. To make these flows fully functional for learning, inference and sampling, the tasks are:
To derive the inverse transform, with suitable restrictions on the parameters to ensure invertibility.
To derive in simple closed form the differential volume ratio, R_f(x).
An interesting property of these simple spherical flows is that they don't make use of any non-linearities apart from the radial projection. Even the simplest of them, the normalized translation flow, can be chained to form perhaps surprisingly flexible distributions.
The normalized translation flow, f_b: S^n → S^n, with parameter b ∈ R^{n+1}, ‖b‖ < 1, is given by:

f_b(x) = (x + b) / ‖x + b‖.
The inverse function may be derived by considering, for y = f_b(x): x = ℓy − b, where ℓ = ‖x + b‖ > 0, and then using ‖x‖ = 1 to get a quadratic equation to recover ℓ, which gives:

f_b^{-1}(y) = ℓy − b, with ℓ = ⟨y, b⟩ + sqrt(⟨y, b⟩² + 1 − ‖b‖²),
from which we see that we need ‖b‖ < 1 to keep ℓ real and positive for all y ∈ S^n. The differential volume ratio is given (without derivation) by Boulerice & Ducharme (1994) as:[30]

R_{f_b}(x) = (1 + ⟨x, b⟩) / ‖x + b‖^{n+1}.
This can indeed be verified analytically:
By a laborious manipulation of R_f(x) = |det(T_y^T F T_x)|.
By setting A = I in the differential volume ratio of the general normalized affine flow, which is given below.
Finally, it is worth noting that f_b^{-1} and f_{−b} do not have the same functional form.
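Both the inverse and the Boulerice–Ducharme volume ratio can also be checked numerically on the circle S^1, where the differential volume ratio is just the derivative of the output angle with respect to the input angle (b below is a hypothetical parameter with ‖b‖ < 1):

```python
import numpy as np

b = np.array([0.3, -0.4])          # ||b|| < 1 keeps the flow invertible

def f(x):
    z = x + b
    return z / np.linalg.norm(z)

def f_inv(y):
    yb = y @ b
    ell = yb + np.sqrt(yb**2 + 1.0 - b @ b)   # positive root of the quadratic
    return ell * y - b

# Round trip on the circle S^1.
alpha = 0.9
x = np.array([np.cos(alpha), np.sin(alpha)])
x_rec = f_inv(f(x))

# Check (1 + <x,b>) / ||x+b||^(n+1), with n = 1 on the circle, against the
# numerical derivative d(beta)/d(alpha) of the output angle.
def angle(a):
    y = f(np.array([np.cos(a), np.sin(a)]))
    return np.arctan2(y[1], y[0])

eps = 1e-6
ratio_numeric = (angle(alpha + eps) - angle(alpha - eps)) / (2 * eps)
ratio_closed = (1.0 + x @ b) / np.linalg.norm(x + b) ** 2
```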
The normalized linear flow, f_A: S^n → S^n, where the parameter A is an invertible (n+1) × (n+1) matrix, is given by:

f_A(x) = Ax / ‖Ax‖.
The differential volume ratio is:

R_{f_A}(x) = |det A| / ‖Ax‖^{n+1}.
This result can be derived indirectly via the angular central Gaussian distribution (ACG),[31] which can be obtained via a normalized linear transform of either Gaussian, or uniform spherical variates. The first relationship can be used to derive the ACG density by a marginalization integral over the radius; after which the second relationship can be used to factor out the differential volume ratio. For details, see the ACG distribution.
Despite normalizing flows' success in estimating high-dimensional densities, some downsides still exist in their designs. First of all, their latent space, onto which input data is projected, is not a lower-dimensional space; therefore, flow-based models do not allow for compression of data by default and require a lot of computation. However, it is still possible to perform image compression with them.[32]
Flow-based models are also notorious for failing to estimate the likelihood of out-of-distribution samples (i.e., samples that were not drawn from the same distribution as the training set).[33] Some hypotheses were formulated to explain this phenomenon, among which the typical set hypothesis,[34] estimation issues when training models,[35] or fundamental issues due to the entropy of the data distributions.[36]
One of the most interesting properties of normalizing flows is the invertibility of their learned bijective map. This property is given by constraints in the design of the models (cf. RealNVP, Glow) which guarantee theoretical invertibility. The integrity of the inverse is important in order to ensure the applicability of the change-of-variables theorem, the computation of the Jacobian of the map, as well as sampling with the model. However, in practice this invertibility is violated and the inverse map explodes because of numerical imprecision.[37]
^abcGrathwohl, Will; Chen, Ricky T. Q.; Bettencourt, Jesse; Sutskever, Ilya; Duvenaud, David (2018). "FFJORD: Free-form Continuous Dynamics for Scalable Reversible Generative Models".arXiv:1810.01367 [cs.LG].
^Lipman, Yaron; Chen, Ricky T. Q.; Ben-Hamu, Heli; Nickel, Maximilian; Le, Matt (2022-10-01). "Flow Matching for Generative Modeling".arXiv:2210.02747 [cs.LG].
^Grathwohl, Will; Chen, Ricky T. Q.; Bettencourt, Jesse; Sutskever, Ilya; Duvenaud, David (2018-10-22). "FFJORD: Free-form Continuous Dynamics for Scalable Reversible Generative Models".arXiv:1810.01367 [cs.LG].
^Chen, Ricky T. Q.; Rubanova, Yulia; Bettencourt, Jesse; Duvenaud, David K. (2018). "Neural Ordinary Differential Equations" (PDF). In Bengio, S.; Wallach, H.; Larochelle, H.; Grauman, K.; Cesa-Bianchi, N.; Garnett, R. (eds.). Advances in Neural Information Processing Systems. Vol. 31. Curran Associates, Inc. arXiv:1806.07366.
^Dupont, Emilien; Doucet, Arnaud; Teh, Yee Whye (2019). "Augmented Neural ODEs". Advances in Neural Information Processing Systems. 32. Curran Associates, Inc.
^Zhang, Han; Gao, Xi; Unterman, Jacob; Arodz, Tom (2019-07-30). "Approximation Capabilities of Neural ODEs and Invertible Residual Networks".arXiv:1907.12998 [cs.LG].
^abcdSorrenson, Peter; Draxler, Felix; Rousselot, Armand; Hummerich, Sander; Köthe, Ullrich (2023). "Learning Distributions on Manifolds with Free-Form Flows".arXiv:2312.09852 [cs.LG].
^Brümmer, Niko; van Leeuwen, D. A. (2006). "On calibration of language recognition scores".Proceedings of IEEE Odyssey: The Speaker and Language Recognition Workshop. San Juan, Puerto Rico. pp. 1–8.doi:10.1109/ODYSSEY.2006.248106.
^Ferrer, Luciana; Ramos, Daniel (2024). "Evaluating Posterior Probabilities: Decision Theory, Proper Scoring Rules, and Calibration".arXiv:2408.02841 [stat.ML].
^Kull, Meelis; Perelló-Nieto, Miquel; Kängsepp, Markus; Silva Filho, Telmo; Song, Hao; Flach, Peter A. (28 October 2019). "Beyond temperature scaling: Obtaining well-calibrated multiclass probabilities with Dirichlet calibration". arXiv:1910.12656 [cs.LG].
^The tangent matrices are not unique: if T has orthonormal columns and Q is an orthogonal n × n matrix, then TQ also has orthonormal columns that span the same subspace; it is easy to verify that |det(T_y^T F T_x)| is invariant to such transformations of the tangent representatives.
from torch.linalg import qr
from torch.func import jacrev

def logRf(pi, m, f, x):
    # log of the differential volume ratio |det(Ty^T Fx Tx)|,
    # given only the flow f and the manifold projection pi
    y = f(x)
    Fx, PI = jacrev(f)(x), jacrev(pi)
    Tx, Ty = [qr(PI(z)).Q[:, :m] for z in (x, y)]
    return (Ty.T @ Fx @ Tx).slogdet().logabsdet
^Boulerice, Bernard; Ducharme, Gilles R. (1994). "Decentered Directional Data". Annals of the Institute of Statistical Mathematics. 46 (3): 573–586. doi:10.1007/BF00773518.
^Tyler, David E. (1987). "Statistical analysis for the angular central Gaussian distribution on the sphere". Biometrika. 74 (3): 579–589. doi:10.2307/2336697. JSTOR 2336697.
^abHelminger, Leonhard; Djelouah, Abdelaziz; Gross, Markus; Schroers, Christopher (2020). "Lossy Image Compression with Normalizing Flows".arXiv:2008.10486 [cs.CV].
^Nalisnick, Eric; Matsukawa, Akihiro; Teh, Yee Whye; Gorur, Dilan; Lakshminarayanan, Balaji (2018). "Do Deep Generative Models Know What They Don't Know?". arXiv:1810.09136v3 [stat.ML].
^Nalisnick, Eric; Matsukawa, Akihiro; Teh, Yee Whye; Gorur, Dilan; Lakshminarayanan, Balaji (2019). "Detecting Out-of-Distribution Inputs to Deep Generative Models Using Typicality". arXiv:1906.02994 [stat.ML].
^Ping, Wei; Peng, Kainan; Gorur, Dilan; Lakshminarayanan, Balaji (2019). "WaveFlow: A Compact Flow-based Model for Raw Audio".arXiv:1912.01219 [cs.SD].
^Shi, Chence; Xu, Minkai; Zhu, Zhaocheng; Zhang, Weinan; Zhang, Ming; Tang, Jian (2020). "GraphAF: A Flow-based Autoregressive Model for Molecular Graph Generation".arXiv:2001.09382 [cs.LG].
^Yang, Guandao; Huang, Xun; Hao, Zekun; Liu, Ming-Yu; Belongie, Serge; Hariharan, Bharath (2019). "PointFlow: 3D Point Cloud Generation with Continuous Normalizing Flows".arXiv:1906.12320 [cs.CV].
^Kumar, Manoj; Babaeizadeh, Mohammad; Erhan, Dumitru; Finn, Chelsea; Levine, Sergey; Dinh, Laurent; Kingma, Durk (2019). "VideoFlow: A Conditional Flow-Based Model for Stochastic Video Generation".arXiv:1903.01434 [cs.CV].
^Rudolph, Marco; Wandt, Bastian; Rosenhahn, Bodo (2021). "Same Same But DifferNet: Semi-Supervised Defect Detection with Normalizing Flows".arXiv:2008.12577 [cs.CV].