Disclosure of Invention
In view of the above, the present invention provides a contour scanning imaging method, apparatus, device and storage medium based on two modalities, which address the problem in existing three-dimensional scanning imaging methods that registering all points of two human body point clouds with an ICP algorithm suffers from a registration sliding effect, resulting in low speed and accuracy of human body point cloud registration.
In order to achieve the above object, the following schemes are proposed:
a contour scanning imaging method based on two modes comprises the following steps:
Acquiring a two-dimensional image and an infrared image of a human body to be detected;
Performing phase unwrapping on the two-dimensional image to obtain a current body surface point cloud;
filtering special points in the current body surface point cloud, projecting the current body surface point cloud to an infrared image based on a preset coordinate conversion relation, and determining a target area according to the infrared image;
Extracting a body surface point cloud in a target area for point cloud registration, and calculating the difference between the current body surface point cloud and a reference body surface point cloud to obtain a rotation and translation relationship between the current body surface point cloud and the reference body surface point cloud;
rendering based on the current body surface point cloud to obtain a target image.
Preferably, the process of rendering based on the current body surface point cloud to obtain the target image comprises the following steps:
And rendering the current body surface point cloud by using multi-pass rendering, surface splatting and point sprite rendering techniques.
Preferably, the process of rendering the current body surface point cloud using multi-pass rendering includes:
performing multi-pass rendering on the current body surface point cloud through three rendering passes;
the rendering process of the three rendering passes is as follows:
The first rendering pass enables the depth test, disables color writing, enables depth writing, disables blending, disables point sprites, adjusts the depth of each fragment within a particle according to the particle radius, renders the current body surface point cloud, and outputs a depth texture map;
The second rendering pass disables the depth test, enables point sprites, enables color writing, disables depth writing, enables blending, sets the blending formula, adjusts the color value of each fragment using the depth value of each fragment from the first rendering pass, renders the depth texture map, and outputs a color texture map;
The third rendering pass disables blending, draws a full-screen quadrilateral covering the normalized device coordinate system, and applies the color texture map of the second rendering pass.
Preferably, the two-dimensional image is subjected to phase unwrapping by a four-step phase-shift method and a multi-frequency heterodyne method, and the process comprises:
acquiring each modulated phase of the two-dimensional image by adopting the four-step phase-shift method;
solving through an arctangent function based on the modulated phases to obtain the principal phase value;
calculating by the multi-frequency heterodyne method to obtain a unique principal value within [0, 2π): the phase differences of sinusoidal gratings of different frequencies are calculated, high-frequency phases are converted into low-frequency phases so that the phase-difference signal covers the whole field of view, and the absolute phase distribution of the two-dimensional image is obtained from the phase differences.
Preferably, the process of projecting the current body surface point cloud to the infrared image based on a preset coordinate conversion relation comprises the following steps:
According to the conversion relation between the point cloud coordinate system and the infrared camera coordinate system, converting the coordinates of each point in the current body surface point cloud from the point cloud coordinate system to the infrared camera coordinate system to obtain the infrared camera coordinates of each point, wherein the origin of the infrared camera coordinate system is the optical center of the lens, the x-axis and the y-axis are respectively parallel to the two sides of the imaging plane, and the z-axis is the optical axis of the lens;
according to the conversion relation between the infrared camera coordinate system and the infrared image coordinate system, the infrared camera coordinate of each point is converted from the infrared camera coordinate system to the infrared image coordinate system, so that the infrared image coordinate of each point is obtained, wherein the origin of the infrared image coordinate system is the midpoint of the imaging plane, and the x-axis and the y-axis are respectively parallel to the two sides of the imaging plane.
Preferably, the process of converting coordinates of each point in the current body surface point cloud from the point cloud coordinate system to the infrared camera coordinate system includes:
converting the coordinates of each point in the current body surface point cloud from the point cloud coordinate system to the infrared camera coordinate system by the following formula:
[xc yc zc]T = R[xw yw zw]T + t,
wherein xw, yw, zw are the three coordinate components of a point P in the point cloud coordinate system w, xc, yc, zc are the three coordinate components of the point P in the infrared camera coordinate system c, R is a 3×3 rotation matrix, and t is a 3×1 translation vector;
a process for converting the infrared camera coordinates of each point from the infrared camera coordinate system to the infrared image coordinate system, comprising:
based on the principle of similar triangles, the infrared camera coordinates of each point are converted from the infrared camera coordinate system to the infrared image coordinate system through the following formula:
x = f·xc/zc, y = f·yc/zc,
wherein f is the focal length of the infrared camera and (x, y) are the coordinates of the point in the infrared image coordinate system.
preferably, after extracting the body surface point cloud in the target area and performing point cloud registration, calculating to obtain a rotation and translation relationship between the current body surface point cloud and the reference body surface point cloud, the method further comprises:
Registering the current body surface point cloud with the reference body surface point cloud to obtain a single-mode registration result;
And carrying out weighted average on the target area point cloud registration result and the single-mode registration result to obtain a target registration result.
A two-modality based contour scanning imaging apparatus, comprising:
The image acquisition unit is used for acquiring a two-dimensional image and an infrared image of a human body to be detected;
the point cloud acquisition unit is used for performing phase unwrapping on the two-dimensional image to obtain a current body surface point cloud;
The point cloud processing unit is used for filtering special points in the current body surface point cloud, projecting the current body surface point cloud to an infrared image based on a preset coordinate conversion relation, and determining a target area according to the infrared image;
The point cloud registration unit is used for extracting the body surface point cloud in the target area to perform point cloud registration, calculating the difference between the current body surface point cloud and the reference body surface point cloud, and obtaining the rotation and translation relationship between the current body surface point cloud and the reference body surface point cloud;
And the image rendering unit is used for rendering based on the current body surface point cloud to obtain a target image.
A contour scanning imaging device based on two modes comprises a memory and a processor;
The memory is used for storing programs;
the processor is used for executing the program to carry out the steps of the contour scanning imaging method based on two modalities described above.
A storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the two modality based contour scanning imaging method described above.
According to the above technical scheme, the contour scanning imaging method based on two modalities first obtains a two-dimensional image and an infrared image of the human body to be detected; then performs phase unwrapping on the two-dimensional image; filters special points in the current body surface point cloud; projects the current body surface point cloud onto the infrared image based on a preset coordinate conversion relationship; extracts the points of the body surface point cloud in the target region and performs point cloud registration; calculates the difference between the current body surface point cloud and a reference body surface point cloud to obtain the rotation and translation relationship between them; and renders based on the current body surface point cloud to obtain a target image. According to the invention, on the basis of ordinary three-dimensional imaging, the infrared image is added, increasing the data dimension from three to four; this can weaken the registration sliding effect and improve the registration speed and accuracy.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
First, referring to fig. 1, a contour scanning imaging method based on two modes provided in this embodiment is described, where, as shown in fig. 1, the method includes:
step S01, a two-dimensional image and an infrared image of a human body to be detected are obtained.
Specifically, a two-dimensional image and an infrared image of the human body to be detected, acquired by an image acquisition system, are received. As shown in fig. 2, in the image acquisition system a two-dimensional pattern projector projects sinusoidal fringes onto the human body to be measured at a specific frequency. Then, the contour scanning imaging system collects two-dimensional images of the human body to be detected, and the infrared imaging system collects infrared images of the human body to be detected under an infrared light source; the infrared imaging system may be an infrared camera. In addition, so that the infrared imaging system can better collect the infrared image of the human body to be detected, a polarizer can be arranged in front of the infrared imaging system and the infrared light source to prevent reflections from degrading the infrared imaging, and a band-pass filter of the corresponding wavelength can be added in front of the lens of the infrared imaging system to shield it from ambient light.
Step S02, performing phase unwrapping on the two-dimensional image.
Specifically, the two-dimensional image and the infrared image can be processed on the GPU: the received two-dimensional image and infrared image are copied into the GPU, and the GPU performs phase unwrapping on the two-dimensional image to obtain the body surface point cloud. The phase unwrapping process is as follows:
Using the fringe projection technique, sinusoidal fringes are projected onto the object to be measured by a projection device, and several phase-shifted fringe patterns are projected to mark unique positions by the phase-shift method. The light intensity of the sinusoidal fringes is:
I(x, y) = a(x, y) + b(x, y)·cos(φ(x, y) + δ),
wherein a(x, y) is the ambient light intensity, namely the reflected light of the measured object, b(x, y) is the modulated light intensity, φ(x, y) is the initial phase information of the wavefront of the measured object, δ is the phase-shift amount, and (x, y) are the coordinates of a pixel in the fringe pattern.
The two-dimensional image can be subjected to phase unwrapping by a four-step phase-shift method and a multi-frequency heterodyne method; the specific process is as follows:
The four-step phase-shift method is adopted to obtain each modulated phase of the two-dimensional image; the four phase-shifted intensities are:
I1 = a + b·cos φ;
I2 = a + b·cos(φ + π/2);
I3 = a + b·cos(φ + π);
I4 = a + b·cos(φ + 3π/2).
Based on the modulated phases, the principal phase value is obtained by solving an arctangent function:
φ = arctan((I4 − I2)/(I1 − I3)).
Calculating by the multi-frequency heterodyne method to obtain a unique principal value within [0, 2π): the phase differences of sinusoidal gratings of different frequencies are calculated, high-frequency phases are converted into low-frequency phases so that the phase-difference signal covers the whole field of view, and the absolute phase distribution of the two-dimensional image is obtained from the phase differences.
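The heterodyne step can likewise be sketched for illustration: subtracting the wrapped phases of two gratings with close frequencies yields an equivalent phase of much lower frequency whose single period can cover the field of view (the 70- and 64-period frequencies below are assumed values, not from the text):

```python
import numpy as np

def heterodyne(phi_hi, phi_lo):
    """Beat phase of two wrapped phases, wrapped back into [0, 2*pi)."""
    return np.mod(phi_hi - phi_lo, 2.0 * np.pi)

# Gratings of 70 and 64 periods beat down to an equivalent 6-period phase.
x = np.linspace(0.0, 1.0, 512, endpoint=False)
phi70 = np.mod(2.0 * np.pi * 70.0 * x, 2.0 * np.pi)
phi64 = np.mod(2.0 * np.pi * 64.0 * x, 2.0 * np.pi)
beat = heterodyne(phi70, phi64)
```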
And S03, filtering special points in the current body surface point cloud, projecting the current body surface point cloud to an infrared image based on a preset coordinate conversion relation, and determining a target area according to the infrared image.
Specifically, in order to improve the accuracy and speed of subsequent registration and the rendering efficiency during rendering, insignificant special points in the body surface point cloud can be filtered out by a geometry shader; the remaining points of the current body surface point cloud are then projected onto the infrared image according to the preset coordinate conversion relationship, and the target region of the body surface point cloud is determined according to the target region segmented from the infrared image. The coordinate conversion relationship is determined in advance by calibrating the cameras with a checkerboard. For example, if a blood vessel region on the infrared image is used as the target region, the blood vessel region on the infrared image is segmented, the body surface point cloud is projected onto the infrared image, and the point cloud corresponding to the blood vessel region in the current body surface point cloud is determined.
And S04, extracting the body surface point cloud in the target area to perform point cloud registration.
Specifically, after the target region in the current body surface point cloud is determined, the body surface point cloud in the target region is extracted for point cloud registration. The point cloud registration may be ICP-based registration on the GPU. For example, point cloud registration is performed on the point cloud of the blood vessel region of the human body to be detected: the body surface point cloud calculated from the two-dimensional image is projected onto the blood-vessel-segmented region of the infrared image to obtain the real-time body surface target point cloud to be registered; a reference body surface point cloud is determined during positioning; subsequently the real-time body surface point cloud is registered with the reference body surface point cloud, and the rotation and translation relationship between the current body surface point cloud and the reference body surface point cloud is calculated. The movement of the patient can be known from this rotation and translation relationship.
And step S05, rendering based on the current body surface point cloud.
Specifically, multi-pass rendering, surface splatting and point sprite rendering techniques are used to render the current body surface point cloud directly, obtaining the target image. Multi-pass rendering of the current body surface point cloud is performed through three rendering passes. Fig. 3 is an illustration of the effect of a rendered target image. The whole multi-pass rendering process adopts surface splatting. Multi-pass rendering renders an object multiple times, with the result of each pass accumulated into the final rendered result. In point sprite rendering, one vertex is treated as one sprite; its advantage is that a texture can be attached to a single vertex, so work that originally required the four vertices of a rectangle can now be completed with one vertex, which reduces calculation and optimizes rendering speed.
Surface splatting is a point rendering and texture filtering technique that renders opaque and transparent surfaces directly from an unconnected point cloud. It extends texture resampling to irregularly spaced point samples based on a screen-space formulation of the elliptical weighted average (EWA) filter. Surface splatting assigns a radially symmetric Gaussian filter kernel to each footprint and reconstructs a continuous surface from the weighted average of the footprint data. It provides high-quality anisotropic texture filtering, hidden-surface removal, edge antialiasing, and order-independent transparency; it is an optimization method specific to rendering, but operates less efficiently when drawing highly complex models.
Point sprite rendering techniques are used to describe the process of moving a large number of particles across the screen. Graphics are composed of vertices, and texture mapping is performed using the vertices. Normally a rectangle is formed by 4 vertices; a point sprite can draw a two-dimensional texture image at any position on the screen using only one vertex, so the calculation for the other 3 vertices can be omitted. The point sprite rendering technique reduces calculation and improves rendering efficiency; it is an optimization for rendering speed.
The contour scanning imaging method based on two modalities first obtains a two-dimensional image and an infrared image of the human body to be detected; then performs phase unwrapping on the two-dimensional image; filters special points in the body surface point cloud; projects the body surface point cloud onto the infrared image based on a preset coordinate conversion relationship; extracts the points of the body surface point cloud in the target region and performs point cloud registration; calculates the difference between the current body surface point cloud and a reference body surface point cloud to obtain the rotation and translation relationship between them; and renders based on the current body surface point cloud to obtain a target image. On the basis of ordinary three-dimensional imaging, this embodiment adds an infrared image dimension, increasing the data dimension from three to four; this can weaken the registration sliding effect and further improve the speed and accuracy of point cloud registration.
In order to make the point cloud registration result more accurate, in step S04, the embodiment may further perform the following steps after extracting the body surface point cloud in the target area to perform the point cloud registration to obtain the rotation and translation relationship between the current body surface point cloud and the reference body surface point cloud:
registering the current body surface point cloud with the reference body surface point cloud to obtain a single-mode registration result. And carrying out weighted average on the target area point cloud registration result and the single-mode registration result to obtain a target registration result.
Specifically, the current body surface point cloud and the reference body surface point cloud are used for carrying out single-mode point cloud registration, and the rotation and translation relation between the current body surface point cloud and the reference body surface point cloud is calculated to obtain a single-mode registration result. And carrying out weighted average on the rotation and translation relation between the current body surface point cloud and the reference body surface point cloud obtained based on target area point cloud registration and the single-mode registration result to obtain a target registration result. The target registration result is used to represent the displacement of the patient.
The embodiment combines the single-mode registration result and the multi-mode registration result, so that the obtained target registration result can more accurately represent the displacement condition of the patient in the treatment process.
Next, this embodiment introduces the process in step S03 of filtering special points in the current body surface point cloud, projecting the current body surface point cloud onto the infrared image based on a preset coordinate conversion relationship, and determining the target region according to the infrared image:
First, in order to improve the accuracy and speed of subsequent registration and the rendering efficiency during rendering, special points in the current body surface point cloud may be filtered out by a geometry shader, which filters points whose coordinate values are (0, 0, 0). The process of filtering the special points in the current body surface point cloud is as follows:
a. The video memory address of the vertex array stored on the GPU for rendering is obtained.
b. Values are assigned to the array of step a within the functions that solve the phase and reconstruct the point cloud.
c. The vertex array of step a is passed into the rendering pipeline with rasterization discarded; the vertex shader of the pipeline passes the incoming vertex coordinates to the geometry shader, where a judgment is made: if a vertex's coordinate value is (0, 0, 0, 1) (homogeneous coordinates in the shader), the vertex is discarded; otherwise the vertex is emitted.
d. Before the point cloud is drawn, transform feedback is enabled; after drawing finishes, the number of points in the point cloud is obtained and recorded as count. Since the vertices emitted by the geometry shader are packed contiguously in video memory, the subsequent registration and rendering stages only need to process the first 3×count values in video memory.
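For illustration only, the effect of these filtering steps (discarding invalid vertices and packing the survivors contiguously, so that only 3×count values need further processing) can be mimicked on the CPU; this numpy sketch is the editor's, not the actual geometry shader:

```python
import numpy as np

def compact_points(verts):
    """Drop invalid (0, 0, 0) vertices and pack the survivors contiguously,
    mirroring what the geometry shader's discard-and-emit achieves."""
    keep = np.any(verts != 0.0, axis=1)     # True where any coordinate is nonzero
    packed = verts[keep]
    return packed, packed.shape[0]          # count: only 3*count values remain relevant

verts = np.array([[1.0, 2.0, 3.0],
                  [0.0, 0.0, 0.0],          # failed reconstruction, to be discarded
                  [4.0, 5.0, 6.0]])
packed, count = compact_points(verts)
```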
Then, the body surface point cloud is projected onto the infrared image based on the preset coordinate conversion relationship. The point cloud coordinate system describes the spatial positions of the camera and the object to be measured. The process is as follows:
a. And converting the coordinates of each point in the current body surface point cloud from the point cloud coordinate system to the infrared camera coordinate system according to the conversion relation between the point cloud coordinate system and the infrared camera coordinate system, and obtaining the infrared camera coordinates of each point.
Specifically, the transformation from the point cloud coordinate system to the infrared camera coordinate system is completed by a rigid-body transformation: only the spatial position and orientation are changed, by translation and rotation, and the shape of the object is not changed. The coordinates of each point in the body surface point cloud are converted from the point cloud coordinate system to the infrared camera coordinate system through the following formula:
[xc yc zc]T = R[xw yw zw]T + t,
wherein Pw = [xw yw zw]T, with xw, yw, zw the three coordinate components of a point P in the point cloud coordinate system w, and xc, yc, zc the three coordinate components of the point P in the infrared camera coordinate system c; R is a 3×3 rotation matrix and t is a 3×1 translation vector. The origin of the infrared camera coordinate system is the optical center of the lens, the x-axis and y-axis are respectively parallel to the two sides of the imaging plane, and the z-axis is the optical axis of the lens, perpendicular to the imaging plane.
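A minimal numpy sketch of this rigid-body transformation (the function name and the particular rotation and translation are the editor's test values, not from the text):

```python
import numpy as np

def to_camera(points_w, R, t):
    """Apply [xc yc zc]^T = R [xw yw zw]^T + t row-wise to an (N, 3) cloud."""
    return points_w @ R.T + t

# Sanity check: a 90-degree rotation about z plus a translation along z.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([0.0, 0.0, 5.0])
pc = to_camera(np.array([[1.0, 0.0, 0.0]]), R, t)
```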
B. And according to the conversion relation between the infrared camera coordinate system and the infrared image coordinate system, converting the infrared camera coordinate of each point from the infrared camera coordinate system to the infrared image coordinate system to obtain the infrared image coordinate of each point.
Specifically, the infrared image coordinate system is a planar coordinate system in which pixel positions are expressed in physical units (mm). The origin of the infrared image coordinate system is the intersection of the camera optical axis and the imaging plane, generally the midpoint of the imaging plane, and the x-axis and y-axis are respectively parallel to the two sides of the imaging plane. The conversion from 3D to 2D is completed according to the perspective projection relationship. According to the principle of similar triangles, the infrared camera coordinates of each point are converted from the infrared camera coordinate system to the infrared image coordinate system through the following formula:
x = f·xc/zc, y = f·yc/zc,
wherein f is the focal length of the infrared camera and (x, y) are the coordinates of the point on the imaging plane.
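The similar-triangles projection can be sketched as follows; the focal length f = 8.0 mm and the test point are assumed values for illustration only:

```python
import numpy as np

def project(points_c, f):
    """Similar-triangles pinhole projection: x = f*xc/zc, y = f*yc/zc."""
    z = points_c[:, 2:3]
    return f * points_c[:, :2] / z

# A point 2 m in front of the camera with an assumed focal length of 8 mm.
uv = project(np.array([[0.1, -0.2, 2.0]]), f=8.0)
```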
Finally, the process of determining the target area from the infrared image is as follows:
a. Mask extraction is performed on the infrared image: color inversion is applied to the grayscale infrared image, converting white pixels to black and black pixels to white, with the same operation applied to pixels of the other value ranges. A rough target region is selected on the infrared image, foreground and background are distinguished by a threshold segmentation method to produce a mask, and a bitwise AND operation is performed between the mask and the original infrared image, retaining the pixels of the selected region and setting the pixels of the remaining regions to 0.
b. The target region used for registration is extracted; for example, when the target region is a blood vessel, the blood vessel features may be extracted by a morphological algorithm. Using line segments of length 10 and orientation i×15° (0 ≤ i < 12) as structural elements b to cover blood vessels in all directions, image detail is extracted by the morphological top-hat transformation of the image f through the following formula:
h = f − (f ∘ b),
wherein h is the image after the top-hat transformation, ∘ denotes the opening operation with f ∘ b = (f ⊖ b) ⊕ b, ⊖ denotes the erosion operation, and ⊕ denotes the dilation operation.
c. Image enhancement: image contrast is enhanced using the contrast-limited adaptive histogram equalization (CLAHE) method. The image is divided into non-overlapping sub-blocks of equal size. The histogram of each sub-block is calculated, contrast limiting is applied according to the clip limit clipLimit, the pixel counts exceeding the threshold are redistributed uniformly over the probability density distribution, equalization is performed on the histogram, and blocking artifacts are reduced by bilinear interpolation.
d. Image denoising: Gaussian blur is used to smooth noise and reduce its influence. The Gaussian kernel is:
G(x, y) = (1/(2πσ²))·e^(−(x² + y²)/(2σ²)),
wherein x and y are pixel coordinates and σ is the standard deviation.
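An illustrative construction of the sampled Gaussian kernel (the kernel size is an assumed parameter; the kernel is renormalized so the discrete weights sum to 1, the usual convention for a blur kernel):

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Sampled 2-D Gaussian G(x, y) = exp(-(x^2 + y^2) / (2*sigma^2)) /
    (2*pi*sigma^2), renormalized so the discrete weights sum to 1."""
    r = np.arange(size) - size // 2
    xx, yy = np.meshgrid(r, r)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2)
    return g / g.sum()

k = gaussian_kernel(5, 1.0)
```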
The image is sharpened using the Laplacian second-order differential operator:
∇²f = ∂²f/∂x² + ∂²f/∂y².
Since the gray values of the image are integers in 0-255 and x takes discrete values, the formula needs to be generalized from one-dimensional to two-dimensional space. The derivation from the one-dimensional formula to the two-dimensional formula is as follows:
The second derivative of the discrete function is:
f″(x) = f′(x+1) − f′(x) = f(x+1) + f(x−1) − 2f(x).
The two-dimensional Laplacian characterizes the gain of the function at a point after small changes along 4 degrees of freedom, in the directions (1, 0), (−1, 0), (0, 1), (0, −1); including the diagonals gives 8 degrees of freedom. The variation pattern of this calculation is expressed through a Laplacian convolution kernel; the kernel used is:
0  1  0
1 −4  1
0  1  0.
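The 4-degree-of-freedom Laplacian can be sketched directly from this kernel; the zero-padded numpy version below is the editor's illustration, not the embodiment's implementation:

```python
import numpy as np

def laplacian(img):
    """4-neighbour discrete Laplacian with zero padding:
    f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4*f(x,y)."""
    p = np.pad(img.astype(float), 1)
    return (p[:-2, 1:-1] + p[2:, 1:-1] +
            p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * p[1:-1, 1:-1])

img = np.zeros((5, 5))
img[2, 2] = 1.0                             # a single bright pixel
lap = laplacian(img)
```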
The infrared image is Fourier transformed:
F(u, v) = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x, y)·e^(−j2π(ux/M + vy/N)),
wherein F(u, v) is the frequency-domain image of the infrared image, f(x, y) is the spatial-domain image, and the image width is M and height is N.
The zero-frequency component is moved to the center of the frequency-domain image, and Gaussian low-pass filtering is applied to the frequency-domain image of the infrared image:
H(u, v) = e^(−D²(u, v)/(2·D0²)),
wherein D0 is the cutoff frequency, which may be set to 2; the cutoff frequency determines the frequency bandwidth of the filter, i.e. the smoothness. D(u, v) is the distance from the image point (u, v) to the frequency center, (u, v) are the coordinates of each point on the frequency-domain image with value ranges [0, M−1] and [0, N−1], and M and N are the image width and height, respectively.
The filtered image is subjected to inverse zero-frequency shifting and a two-dimensional inverse discrete Fourier transform to restore the image:
f(x, y) = (1/(MN))·Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} F(u, v)·e^(j2π(ux/M + vy/N)).
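The whole frequency-domain pipeline (forward DFT, zero-frequency centering, Gaussian low-pass, inverse shift and inverse DFT) can be sketched with numpy's FFT routines; D0 and the image size here are assumed values for illustration:

```python
import numpy as np

def gaussian_lowpass(img, d0):
    """Gaussian low-pass in the frequency domain: H = exp(-D^2 / (2*d0^2)).

    fftshift moves the zero-frequency component to the centre of the
    spectrum, as in the description; the inverse shift and inverse DFT
    then restore the smoothed image.
    """
    M, N = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    u = np.arange(M) - M // 2               # distances to the frequency centre
    v = np.arange(N) - N // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2
    H = np.exp(-D2 / (2.0 * d0 ** 2))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))

rng = np.random.default_rng(0)
noisy = rng.normal(size=(32, 32))
smooth = gaussian_lowpass(noisy, d0=2.0)
```

Since H equals 1 at the frequency center, the mean (DC component) of the image is preserved while high-frequency noise is attenuated.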
e. threshold segmentation, namely segmenting the restored image by utilizing a binary threshold value, and converting the segmented image into a binary image. Wherein the white area represents the target area, and the rest areas are set to black. If the target region is a blood vessel, the blood vessel region is a white region.
f. The target-region image is denoised: a threshold Ts is set according to the area {Si} of each connected region of the binary image, the target regions are screened, and isolated noise points are removed. If the target region is a blood vessel, the blood vessels are screened.
g. The target-region point cloud is determined: if the pixel value of the infrared image at the projection of a certain point of the body surface point cloud is 1, the point belongs to the target-region point cloud. After the results for all points are calculated, the target-region point cloud can be extracted and registered using the ICP algorithm.
In this embodiment, the three-dimensional information is projected onto the IR image (infrared image) plane, the three-dimensional and two-dimensional images are combined using a perspective-n-point (PnP) algorithm, the 3D/IR matrix is calculated, and the three-dimensional body surface point cloud is associated with the information of the two-dimensional infrared image.
Further, this embodiment describes the process in step S04 of performing point cloud registration on the body surface point cloud in the target region; point cloud registration is performed on the GPU using the ICP algorithm. The iterative closest point (ICP) algorithm is mainly divided into four stages:
(1) The point cloud data of the original target region are sampled. Based on the distribution of normal directions of points on the object surface, a normal-space sampling method is used so that each normal direction retains a similar number of points, preserving the fine features of the target-region point cloud as far as possible.
(2) The initial corresponding point sets P and Q are determined: the distance from each point to the tangent plane is calculated using the point-to-plane metric, and the reference point set P = {p1, p2, ..., pn} and the data point set Q = {q1, q2, ..., qn} are determined.
(3) Erroneous corresponding point pairs are removed. Based on a constraint method of rigid-motion consistency, using the principle that corresponding points in the overlapping region of the object to be measured are preserved by rigid motion, the adjacency relation between corresponding point pairs is adopted as the consistency criterion, and for each point in Q the nearest point in P is found to form a matching point pair (p, q). If ‖p − q‖ > εσ, the point pair (p, q) is removed, where σ is the mean of the distances between all corresponding point pairs and ε is a given threshold.
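An illustrative sketch of this rejection rule (written here as keeping pairs with ‖p − q‖ ≤ ε·σ; the function name and test values are the editor's):

```python
import numpy as np

def reject_pairs(P, Q, eps):
    """Keep pairs with ||p - q|| <= eps * sigma, sigma being the mean pair distance."""
    d = np.linalg.norm(P - Q, axis=1)
    return d <= eps * d.mean()

P = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
Q = np.array([[0.1, 0.0, 0.0], [1.1, 0.0, 0.0], [9.0, 0.0, 0.0]])  # last pair is an outlier
mask = reject_pairs(P, Q, eps=2.0)
```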
(4) Solving the coordinate transformation: the weighted sum of squared Euclidean distances over all matching point pairs is taken as the objective function to be minimized. R and t are computed with the singular value decomposition (SVD) method so that the objective function is minimized; a least-squares problem is constructed so that the sum of squared errors becomes as small as possible. The objective function is:

E(R, t) = sum_{i=1}^{n} w_i * ||(R*q_i + t) - p_i||^2,

from which the corresponding rotation matrix R and translation vector t are computed, where w_i is the weight of each point pair and d is the dimension of the points; for the body surface point cloud, d = 3.

First, with R held fixed, the translation vector t is solved. Setting the partial derivative of the objective function with respect to t to zero gives:

t = p_bar - R*q_bar,

where p_bar = (sum_i w_i*p_i) / (sum_i w_i) and q_bar = (sum_i w_i*q_i) / (sum_i w_i) are the weighted centroids. Denoting the centered points as x_i = q_i - q_bar and y_i = p_i - p_bar, the objective function can be rewritten as:

E(R) = sum_{i=1}^{n} w_i * ||R*x_i - y_i||^2.

Secondly, after t is obtained, R is solved. Expanding the formula:

||R*x_i - y_i||^2 = x_i^T*x_i - 2*y_i^T*R*x_i + y_i^T*y_i,

since R^T*R = I. Because y_i^T*R*x_i is a scalar, the scalar property a = a^T gives y_i^T*R*x_i = x_i^T*R^T*y_i; substituting shows that minimizing E(R) is equivalent to maximizing sum_i w_i*y_i^T*R*x_i.

From the properties of the matrix trace it can be seen that:

sum_{i=1}^{n} w_i*y_i^T*R*x_i = tr(W*Y^T*R*X),

where X and Y are the d-by-n matrices whose columns are x_i and y_i respectively, and W = diag(w_1, ..., w_n).

Defining the covariance matrix S = X*W*Y^T and performing the SVD decomposition S = U*Sigma*V^T, substitution gives tr(W*Y^T*R*X) = tr(Sigma*V^T*R*U). From the properties of orthogonal matrices and the SVD decomposition, the maximum is attained at R = V*U^T.

If E(R, t) is greater than the set iteration-error threshold, a new point set Q' is obtained by transforming Q with R and t, corresponding point pairs are found again, and the iteration continues until E(R, t) falls below the set threshold.
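Under the derivation above (weighted centroids, covariance S = X*W*Y^T, R = V*U^T, t = p_bar - R*q_bar), stage (4) can be sketched in NumPy as follows. The determinant correction is a standard guard against reflections that the text does not spell out, and the function name is illustrative:

```python
import numpy as np

def solve_rigid_transform(P, Q, w):
    """Solve for R, t minimising sum_i w_i * ||(R q_i + t) - p_i||^2.

    P, Q: (N, 3) matched reference / data points; w: (N,) weights."""
    P = np.asarray(P, dtype=float)
    Q = np.asarray(Q, dtype=float)
    w = np.asarray(w, dtype=float)
    # Weighted centroids p_bar, q_bar.
    p_bar = (w[:, None] * P).sum(0) / w.sum()
    q_bar = (w[:, None] * Q).sum(0) / w.sum()
    X = (Q - q_bar).T                  # (3, N) centred data points
    Y = (P - p_bar).T                  # (3, N) centred reference points
    S = X @ np.diag(w) @ Y.T           # covariance matrix S = X W Y^T
    U, _, Vt = np.linalg.svd(S)
    V = Vt.T
    # Determinant correction: keeps R a proper rotation (no reflection).
    D = np.diag([1.0, 1.0, np.linalg.det(V @ U.T)])
    R = V @ D @ U.T
    t = p_bar - R @ q_bar
    return R, t
```

Applied to data related by an exact rigid motion, the solver recovers that motion; in the ICP loop it is called once per iteration on the pruned pair set.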
Still further, this embodiment describes the process of rendering based on the current body surface point cloud in step S05: the current body surface point cloud is rendered directly using multi-pass rendering, surface splatting and point sprite rendering techniques to obtain the target image.
1. The process of multi-pass rendering of the current body surface point cloud through three rendering passes is as follows:
The first rendering pass: enable the depth test, disable color writing, enable depth writing, disable blending and disable point sprites; adjust the depth of each fragment within a particle according to the particle radius, render the current body surface point cloud, and output a depth texture map. In principle, every pixel on the screen is computed by a shader; when objects occlude one another front to back, overlap, are blended, and so on, a single pixel may involve several fragment-shader invocations;
The second rendering pass: disable the depth test, enable color writing, disable depth writing, enable blending and set the blending equation; use the depth value of each fragment from the first rendering pass to adjust its color value, render with the depth texture map, and output a color texture map;
The third rendering pass: disable blending, draw a quad that fully covers the normalized device coordinate system, and apply the color texture map from the second rendering pass.
2. The rendering process of the surface splatting technique:
The point-based object is represented as a set of unconnected points {P_k} irregularly distributed in three-dimensional space. Each point is associated with a radially symmetric basis function r_k and with coefficients w_k^r, w_k^g, w_k^b representing the red, green and blue color components respectively; w_k is used as their generalized representation.
An arbitrary point Q on the surface is chosen, and a functional expression is established between Q and the points {P_k} in its minimal surrounding neighborhood. Treating the point Q and the neighboring points P_k through their corresponding two-dimensional plane coordinates u and u_k, the function on the plane is defined as the weighted sum of points:

f_c(u) = sum_{k in N} w_k * r_k(u - u_k),
Given the mapping x = m(u): R^2 -> R^2 from the function plane to the screen, the following steps are required:
(1) Transform f_c(u) to the screen space, generating a continuous screen-space signal:

g_c(x) = (f_c o m^{-1})(x) = f_c(m^{-1}(x)),

where "o" denotes function composition.
(2) Band-limit the screen-space signal with the prefilter h, obtaining the continuous function:

g'_c(x) = (g_c (*) h)(x),

where (*) denotes the convolution operation.

(3) Sample the result of the continuous function: multiplying the continuous result by the impulse train j(x) yields the discrete result g(x) = g'_c(x) * j(x).
(4) Expanding the above relations in reverse yields the detailed expression:

g'_c(x) = sum_{k in N} w_k * rho_k(x), with rho_k(x) = (r'_k (*) h)(x) and r'_k(x) = r_k(m^{-1}(x) - u_k),

where rho_k(x) is the resampling kernel. Each basis function r_k is independently transformed and filtered to construct its resampling kernel rho_k, and the process of accumulating these kernels in the screen space is surface splatting.
Furthermore, rho_k(x) can be further simplified by replacing m(u) with its local affine approximation at the point u_k:

m_k(u) = x_k + J_k * (u - u_k),

where x_k = m(u_k) and J_k is the Jacobian matrix of m at u_k. A further calculation gives:

rho_k(x) = (r'_k (*) h)(x), with r'_k(x) = r_k(J_k^{-1} * (x - x_k)),

where r'_k denotes the transformed basis function. The resampling kernel rho_k(x) of the screen space can thus be expressed as the convolution of the transformed basis function r'_k with the low-pass filter kernel h, even though the texture function is defined on an irregular grid.
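Assuming Gaussian basis functions and a Gaussian low-pass filter (a common choice in elliptical-weighted-average splatting, and an assumption beyond the text above), the resampling kernel collapses to a single Gaussian, because warping a Gaussian by the affine map scales its covariance by J V J^T and convolving Gaussians adds their covariances. A minimal sketch:

```python
import numpy as np

def gaussian2(x, mean, cov):
    """Evaluate a normalised 2-D Gaussian density at x."""
    d = np.asarray(x, dtype=float) - mean
    inv = np.linalg.inv(cov)
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
    return norm * np.exp(-0.5 * d @ inv @ d)

def resampling_kernel(x, x_k, J_k, V_k):
    """rho_k(x) for a Gaussian basis function with covariance V_k,
    warped by the local affine map (Jacobian J_k) and convolved with
    a unit-covariance Gaussian low-pass filter h."""
    cov = J_k @ V_k @ J_k.T + np.eye(2)   # warp, then add filter covariance
    return gaussian2(x, x_k, cov)
```

With an identity Jacobian and unit basis covariance, the kernel is a Gaussian of covariance 2I centred at the projected point x_k.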
3. The point sprite rendering process:
The fragment shader has a built-in read-only variable gl_PointCoord, which provides the two-dimensional texture coordinate of the current fragment within the point sprite.
The vertex shader has a built-in variable gl_PointSize, which controls the final rasterized size of the point in pixels. The size of a point can be determined from its distance to the viewpoint using the distance-attenuation formula:

size = size_0 * sqrt(1 / (a + b*d + c*d^2)),

where d is the distance from the point to the viewpoint, and a, b and c are the coefficients of the quadratic, which can be stored as uniform values or set as constants in the vertex shader: a controls the constant part of the final value, b controls the linear variation of the final value with distance, and c controls the quadratic variation of the final value with distance.
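A minimal sketch of this distance-based size computation (the clamping bounds and the function name are illustrative assumptions; in a shader the same expression would be written into gl_PointSize):

```python
import math

def point_size(base_size, d, a, b, c, min_size=1.0, max_size=64.0):
    """Distance attenuation for point sprites: the rasterised size shrinks
    with viewer distance d according to the quadratic a + b*d + c*d*d
    (a: constant term, b: linear falloff, c: quadratic falloff)."""
    size = base_size / math.sqrt(a + b * d + c * d * d)
    return max(min_size, min(max_size, size))  # clamp to a sane pixel range
```

For example, with a = 1, b = 1, c = 0 a point at distance 3 is drawn at half its base size.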
The two-mode-based contour scanning imaging device provided by the embodiment of the invention is described below; the device described below and the two-mode-based contour scanning imaging method described above may be cross-referenced.
Referring first to fig. 4, a two-modality-based contour scanning imaging apparatus is described, as shown in fig. 4, which may include:
An image acquisition unit 100 for acquiring a two-dimensional image and an infrared image of a human body to be measured;
the point cloud acquisition unit 200 is configured to perform phase unwrapping on the two-dimensional image to obtain a current body surface point cloud;
The point cloud processing unit 300 is configured to filter special points in the current body surface point cloud, project the current body surface point cloud onto the infrared image based on a preset coordinate conversion relationship, and determine a target area according to the infrared image;
the point cloud registration unit 400 is configured to extract a body surface point cloud in the target area for point cloud registration, calculate a difference between the current body surface point cloud and a reference body surface point cloud, and obtain a rotation and translation relationship between the current body surface point cloud and the reference body surface point cloud;
And the image rendering unit 500 is used for rendering based on the current body surface point cloud to obtain a target image.
The contour scanning imaging device based on the two modes, which is provided by the embodiment of the invention, can be applied to contour scanning imaging equipment based on the two modes. Fig. 5 shows a block diagram of a hardware architecture of a two-modality based contour scanning imaging apparatus, referring to fig. 5, the hardware architecture of the apparatus may comprise at least one processor 1, at least one communication interface 2, at least one memory 3 and at least one communication bus 4;
In the embodiment of the invention, the number of the processor 1, the communication interface 2, the memory 3 and the communication bus 4 is at least one, and the processor 1, the communication interface 2 and the memory 3 complete the communication with each other through the communication bus 4;
The processor 1 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention;
the memory 3 may comprise a high-speed RAM memory, and may further comprise a non-volatile memory, such as at least one magnetic disk memory;
the memory stores a program, and the processor can call the program stored in the memory, wherein the program is used for realizing each processing flow in the contour scanning imaging scheme based on the two modes.
The embodiment of the invention also provides a storage medium, which can store a program suitable for being executed by a processor, and the program is used for realizing each processing flow in the two-mode-based contour scanning imaging scheme.
Finally, it is further noted that relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between those entities or actions. Each embodiment in this specification is described in a progressive manner, each embodiment focuses on its differences from the other embodiments, and identical or similar parts of the embodiments may be referred to one another.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.