
Perspective-n-Point

From Wikipedia, the free encyclopedia
Technique in computer vision

Perspective-n-Point[1] is the problem of estimating the pose of a calibrated camera given a set of n 3D points in the world and their corresponding 2D projections in the image. The camera pose consists of 6 degrees of freedom (DOF), made up of the rotation (roll, pitch, and yaw) and the 3D translation of the camera with respect to the world. This problem originates from camera calibration and has many applications in computer vision and other areas, including 3D pose estimation, robotics, and augmented reality.[2] A commonly used solution to the problem exists for n = 3, called P3P, and many solutions are available for the general case of n ≥ 3. A solution for n = 2 exists if feature orientations are available at the two points.[3] Implementations of these solutions are also available in open source software.

Problem Specification


Definition


Given a set of n 3D points in a world reference frame and their corresponding 2D image projections, as well as the calibrated intrinsic camera parameters, determine the 6 DOF pose of the camera in the form of its rotation and translation with respect to the world. This follows the perspective projection model for cameras:

$$s\,p_{c} = K\,[\,R \mid T\,]\,p_{w}$$

where $p_{w} = [\,x\;y\;z\;1\,]^{T}$ is the homogeneous world point, $p_{c} = [\,u\;v\;1\,]^{T}$ is the corresponding homogeneous image point, $K$ is the matrix of intrinsic camera parameters (where $f_{x}$ and $f_{y}$ are the scaled focal lengths, $\gamma$ is the skew parameter, which is sometimes assumed to be 0, and $(u_{0}, v_{0})$ is the principal point), $s$ is a scale factor for the image point, and $R$ and $T$ are the desired 3D rotation and 3D translation of the camera (extrinsic parameters) that are being calculated. This leads to the following equation for the model:

$$s\begin{bmatrix}u\\v\\1\end{bmatrix}=\begin{bmatrix}f_{x}&\gamma&u_{0}\\0&f_{y}&v_{0}\\0&0&1\end{bmatrix}\begin{bmatrix}r_{11}&r_{12}&r_{13}&t_{1}\\r_{21}&r_{22}&r_{23}&t_{2}\\r_{31}&r_{32}&r_{33}&t_{3}\end{bmatrix}\begin{bmatrix}x\\y\\z\\1\end{bmatrix}$$
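As an illustration, the projection model can be evaluated numerically. The sketch below (Python with NumPy; every camera parameter here is an illustrative made-up value, not from any real device) projects one homogeneous world point through $K[R \mid T]$ and recovers the pixel coordinates by dividing out the scale factor $s$:

```python
import numpy as np

# Intrinsics: focal lengths f_x, f_y, zero skew, principal point (u0, v0).
# All values are illustrative, not from a real camera.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Extrinsics: a small rotation about the y-axis plus a translation.
theta = 0.1
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(theta), 0.0, np.cos(theta)]])
T = np.array([[0.1], [-0.2], [5.0]])

p_w = np.array([[0.5], [0.3], [2.0], [1.0]])  # homogeneous world point

# s * p_c = K [R | T] p_w
sp_c = K @ np.hstack([R, T]) @ p_w
s = sp_c[2, 0]        # scale factor: the depth along the optical axis
p_c = sp_c / s        # homogeneous image point [u, v, 1]^T
u, v = p_c[0, 0], p_c[1, 0]
```

The scale factor $s$ equals the point's depth along the camera's optical axis, which is why the third component of $p_c$ becomes 1 after division.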

Assumptions and Data Characteristics


A few preliminary aspects of the problem are common to all solutions of PnP. Most solutions assume that the camera is already calibrated, so its intrinsic properties, such as the focal length, principal point, and skew parameter, are known. Some methods, such as UPnP[4] or the Direct Linear Transform (DLT) applied to the projection model, are exceptions to this assumption, as they estimate these intrinsic parameters along with the extrinsic parameters that make up the camera pose the original PnP problem seeks.

For every PnP solution, the chosen point correspondences must not be collinear. In addition, PnP can have multiple solutions, and choosing a particular one requires post-processing of the solution set. RANSAC is also commonly used with a PnP method to make the solution robust to outliers in the set of point correspondences. P3P methods assume that the data is noise-free, while most PnP methods assume Gaussian noise on the inlier set.

Methods


The following section describes two common methods for solving the PnP problem that are readily available in open source software, as well as how RANSAC can be used to deal with outliers in the data set.

P3P


When n = 3, the PnP problem is in its minimal form of P3P and can be solved with three point correspondences. However, with just three point correspondences, P3P yields up to four real, geometrically feasible solutions. For low noise levels, a fourth correspondence can be used to remove the ambiguity. The setup for the problem is as follows.

Let P be the center of projection for the camera, and let A, B, and C be 3D world points with corresponding image points u, v, and w. Let $X = |PA|$, $Y = |PB|$, $Z = |PC|$, $\alpha = \angle BPC$, $\beta = \angle APC$, $\gamma = \angle APB$, $p = 2\cos\alpha$, $q = 2\cos\beta$, $r = 2\cos\gamma$, $a' = |AB|$, $b' = |BC|$, $c' = |AC|$. This forms triangles PBC, PAC, and PAB, to which the law of cosines applies, giving a sufficient equation system for P3P:

$$\begin{cases}Y^{2}+Z^{2}-YZp-b'^{2}=0\\Z^{2}+X^{2}-XZq-c'^{2}=0\\X^{2}+Y^{2}-XYr-a'^{2}=0\end{cases}$$
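Each equation in this system is the law of cosines applied to one of the triangles PBC, PAC, and PAB. A quick numerical check (a sketch with an arbitrary synthetic configuration, not a P3P solver) confirms that the true distances X, Y, Z satisfy the system:

```python
import numpy as np

# Synthetic configuration: camera centre P at the origin, three world points.
P = np.zeros(3)
A = np.array([1.0, 0.0, 4.0])
B = np.array([0.0, 1.0, 5.0])
C = np.array([-1.0, 0.5, 6.0])

X = np.linalg.norm(A - P)   # X = |PA|
Y = np.linalg.norm(B - P)   # Y = |PB|
Z = np.linalg.norm(C - P)   # Z = |PC|

def two_cos(u, w):
    """2 * cosine of the angle between rays u and w from P."""
    return 2.0 * (u @ w) / (np.linalg.norm(u) * np.linalg.norm(w))

p = two_cos(B - P, C - P)   # p = 2 cos(alpha), alpha = angle BPC
q = two_cos(A - P, C - P)   # q = 2 cos(beta),  beta  = angle APC
r = two_cos(A - P, B - P)   # r = 2 cos(gamma), gamma = angle APB

a2 = np.sum((A - B) ** 2)   # a'^2 = |AB|^2
b2 = np.sum((B - C) ** 2)   # b'^2 = |BC|^2
c2 = np.sum((A - C) ** 2)   # c'^2 = |AC|^2

# Law of cosines in triangles PBC, PAC, PAB: each residual vanishes.
res1 = Y**2 + Z**2 - Y * Z * p - b2
res2 = Z**2 + X**2 - X * Z * q - c2
res3 = X**2 + Y**2 - X * Y * r - a2
```

A P3P solver works in the opposite direction: the angles (from the image points and intrinsics) and the inter-point distances are known, and the system is solved for the unknown depths X, Y, Z.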


Solving the P3P system results in up to four geometrically feasible real solutions for R and T. The oldest published solution dates to 1841.[5] A recent algorithm for solving the problem, together with a solution classification, is given in the 2003 IEEE Transactions on Pattern Analysis and Machine Intelligence paper by Gao et al.[6] An open source implementation of Gao's P3P solver can be found in OpenCV's calib3d module in the solvePnP function.[7] Several faster and more accurate versions have been published since, including Lambda Twist P3P,[8] which achieved state-of-the-art performance in 2018 with a 50-fold increase in speed and a 400-fold decrease in numerical failures. Lambda Twist is available as open source in OpenMVG and at https://github.com/midjji/pnp.

EPnP


Efficient PnP (EPnP) is a method developed by Lepetit et al. in their 2009 International Journal of Computer Vision paper[9] that solves the general problem of PnP for n ≥ 4. It is based on the idea that each of the n points (called reference points) can be expressed as a weighted sum of four virtual control points, so the coordinates of these control points become the unknowns of the problem. The final pose of the camera is then recovered from these control points.

As an overview of the process, first note that each of the n reference points in the world frame, $p_{i}^{w}$, and their corresponding image points, $p_{i}^{c}$, are weighted sums of the four control points, $c_{j}^{w}$ and $c_{j}^{c}$ respectively, and the weights are normalized per reference point as shown below. All points are expressed in homogeneous form.

$$p_{i}^{w}=\sum_{j=1}^{4}\alpha_{ij}\,c_{j}^{w}$$
$$p_{i}^{c}=\sum_{j=1}^{4}\alpha_{ij}\,c_{j}^{c}$$
$$\sum_{j=1}^{4}\alpha_{ij}=1$$
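The weights $\alpha_{ij}$ can be found by solving a small linear system, and, crucially for EPnP, the same weights remain valid after any rigid transform, which is why they carry over from the world frame to the camera frame. A minimal sketch (the control points and reference point below are illustrative choices, not the centroid-based selection used in the EPnP paper):

```python
import numpy as np

# Illustrative control points (any four non-coplanar points work; EPnP itself
# derives them from the data's centroid and principal directions).
C_w = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0]])

p_w = np.array([0.2, 0.3, 0.4])  # one reference point

# Solve sum_j alpha_j c_j = p_w subject to sum_j alpha_j = 1,
# stacked as a single 4x4 linear system.
A = np.vstack([C_w.T, np.ones(4)])   # rows: x, y, z, and the sum constraint
b = np.append(p_w, 1.0)
alpha = np.linalg.solve(A, b)

# The same weights reconstruct the point after any rigid motion, which is
# why alphas computed in the world frame carry over to the camera frame.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])     # 90-degree rotation about z
t = np.array([1.0, 2.0, 3.0])
C_cam = C_w @ R.T + t                # transformed control points
p_cam = R @ p_w + t                  # transformed reference point
recon = alpha @ C_cam                # weighted sum in the new frame
```

Because $\sum_j \alpha_{ij} = 1$, the translation part of the rigid motion is absorbed exactly when the weighted sum is applied to the transformed control points.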

From this, the projection equation for the image reference points becomes

$$s_{i}\,p_{i}^{img}=K\sum_{j=1}^{4}\alpha_{ij}\,c_{j}^{c}$$

where $p_{i}^{img}$ is the image reference point with pixel coordinates $[\,u_{i}\;v_{i}\;1\,]^{T}$, and the homogeneous image control point has the form $c_{j}^{c}=[\,x_{j}^{c}\;y_{j}^{c}\;z_{j}^{c}\,]^{T}$. Rearranging the image reference point equation yields the following two linear equations for each reference point:

$$\sum_{j=1}^{4}\alpha_{ij}f_{x}x_{j}^{c}+\alpha_{ij}(u_{0}-u_{i})z_{j}^{c}=0$$
$$\sum_{j=1}^{4}\alpha_{ij}f_{y}y_{j}^{c}+\alpha_{ij}(v_{0}-v_{i})z_{j}^{c}=0$$

Using these two equations for each of the n reference points, the system $Mx=0$ can be formed, where $x=[\,{c_{1}^{c}}^{T}\;{c_{2}^{c}}^{T}\;{c_{3}^{c}}^{T}\;{c_{4}^{c}}^{T}\,]^{T}$. The solution for the control points exists in the null space of $M$ and is expressed as

$$x=\sum_{i=1}^{N}\beta_{i}v_{i}$$

where $N$ is the number of null singular values of $M$ and each $v_{i}$ is the corresponding right singular vector of $M$; $N$ can range from 0 to 4. After calculating the initial coefficients $\beta_{i}$, the Gauss–Newton algorithm is used to refine them. The $R$ and $T$ matrices that minimize the reprojection error of the world reference points, $p_{i}^{w}$, against their corresponding actual image points, $p_{i}^{c}$, are then calculated.

This solution has $O(n)$ complexity and works in the general case of PnP for both planar and non-planar control points. Open source implementations of this method can be found in OpenCV's Camera Calibration and 3D Reconstruction module in the solvePnP function[7] as well as in the code published by Lepetit et al. on their website, CVLAB at EPFL.[10]

This method is not robust to outliers and generally compares poorly to RANSAC P3P followed by nonlinear refinement.[citation needed]

SQPnP


SQPnP was described by Terzakis and Lourakis in an ECCV 2020 paper.[11] It is a non-minimal, non-polynomial solver that casts PnP as a nonlinear quadratic program. SQPnP identifies regions in the parameter space of 3D rotations (i.e., the 8-sphere) that contain unique minima, with guarantees that at least one of them is the global one. Each regional minimum is computed with sequential quadratic programming initiated at nearest orthogonal approximation matrices.

SQPnP has similar or even higher accuracy than state-of-the-art polynomial solvers, is globally optimal, and is computationally very efficient, being practically linear in the number of supplied points n. A C++ implementation is available on GitHub; it has also been ported to OpenCV and included in the Camera Calibration and 3D Reconstruction module (solvePnP function).[12]

Using RANSAC


PnP is prone to errors if there are outliers in the set of point correspondences. Thus, RANSAC can be used in conjunction with existing solutions to make the final solution for the camera pose more robust to outliers. An open source implementation of PnP methods with RANSAC can be found in OpenCV's Camera Calibration and 3D Reconstruction module in the solvePnPRansac function.[12]


References

  1. Fischler, M. A.; Bolles, R. C. (1981). "Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography". Communications of the ACM. 24 (6): 381–395. doi:10.1145/358669.358692.
  2. Apple ARKit team (2018). "Understanding ARKit Tracking and Detection". WWDC.
  3. Fabbri, Ricardo; Giblin, Peter; Kimia, Benjamin (2012). "Camera Pose Estimation Using First-Order Curve Differential Geometry". Computer Vision – ECCV 2012. Lecture Notes in Computer Science. Vol. 7575. pp. 231–244. doi:10.1007/978-3-642-33765-9_17. ISBN 978-3-642-33764-2.
  4. Penate-Sanchez, A.; Andrade-Cetto, J.; Moreno-Noguer, F. (2013). "Exhaustive Linearization for Robust Camera Pose and Focal Length Estimation". IEEE Transactions on Pattern Analysis and Machine Intelligence. 35 (10): 2387–2400. doi:10.1109/TPAMI.2013.36.
  5. Quan, Long; Lan, Zhong-Dan (1999). "Linear N-Point Camera Pose Determination". IEEE Transactions on Pattern Analysis and Machine Intelligence.
  6. Gao, Xiao-Shan; Hou, Xiao-Rong; Tang, Jianliang; Cheng, Hang-Fei (2003). "Complete Solution Classification for the Perspective-Three-Point Problem". IEEE Transactions on Pattern Analysis and Machine Intelligence. 25 (8): 930–943. doi:10.1109/tpami.2003.1217599.
  7. "Camera Calibration and 3D Reconstruction". OpenCV.
  8. Persson, Mikael; Nordberg, Klas (2018). "Lambda Twist: An Accurate Fast Robust Perspective Three Point (P3P) Solver". The European Conference on Computer Vision (ECCV).
  9. Lepetit, V.; Moreno-Noguer, M.; Fua, P. (2009). "EPnP: An Accurate O(n) Solution to the PnP Problem". International Journal of Computer Vision. 81 (2): 155–166. doi:10.1007/s11263-008-0152-6.
  10. "EPnP: Efficient Perspective-n-Point Camera Pose Estimation". EPFL-CVLAB.
  11. Terzakis, George; Lourakis, Manolis (2020). "A Consistently Fast and Globally Optimal Solution to the Perspective-n-Point Problem". Computer Vision – ECCV 2020. Lecture Notes in Computer Science. Vol. 12346. pp. 478–494. doi:10.1007/978-3-030-58452-8_28. ISBN 978-3-030-58451-1.
  12. "Camera Calibration and 3D Reconstruction". OpenCV.
