CN119338974A - Contour scanning imaging method, device, equipment and storage medium based on two modes - Google Patents

Contour scanning imaging method, device, equipment and storage medium based on two modes

Info

Publication number
CN119338974A
Authority
CN
China
Prior art keywords
point cloud
body surface
surface point
image
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202411365130.5A
Other languages
Chinese (zh)
Other versions
CN119338974B (en)
Inventor
贾梦宇
魏丽欣
王宇恒
杨帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Chengguang Medical Technology Co ltd
Original Assignee
Hangzhou Chengguang Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Chengguang Medical Technology Co., Ltd.
Priority to CN202411365130.5A
Publication of CN119338974A
Application granted
Publication of CN119338974B
Status: Active
Anticipated expiration

Abstract

The invention discloses a two-modality contour scanning imaging method, apparatus, device and storage medium. First, a two-dimensional image and an infrared image of the human body to be measured are acquired, and phase unwrapping is performed on the two-dimensional image to obtain the current body surface point cloud. Special points in the current body surface point cloud are filtered out, the point cloud is projected onto the infrared image according to a coordinate conversion relation, and a target area is determined from the infrared image. The points of the body surface point cloud falling in the target area are extracted for point cloud registration, and the difference between the current body surface point cloud and a reference body surface point cloud is calculated to obtain the rotation and translation relationship between them. Finally, rendering is performed on the current body surface point cloud to obtain a target image. By adding an infrared image to ordinary three-dimensional imaging, the data dimension is increased from three to four, which weakens the registration sliding effect and improves registration speed and accuracy.

Description

Contour scanning imaging method, device, equipment and storage medium based on two modes
Technical Field
The present invention relates to the field of three-dimensional scanning technologies, and in particular, to a contour scanning imaging method, apparatus, device and storage medium based on two modes.
Background
The body surface three-dimensional information refers to the three-dimensional shape, size and position information of the surface of a human body or object obtained by three-dimensional scanning. Its acquisition mainly depends on three-dimensional scanning technology, which can rapidly and accurately acquire three-dimensional data of an object surface by non-contact measurement. Three-dimensional scanning is widely applied in medical treatment, industrial design, forensic science, film and television special effects and other fields. In the medical field, it can be applied in radiotherapy surface guidance systems, which collect the patient's body surface three-dimensional information before and during treatment so that the patient can be positioned accurately, enabling precise radiotherapy of the affected area.
Currently, in order to obtain three-dimensional information of the body surface, a scanning imaging system projects a pattern onto the surface of the human body; the projected pattern includes, but is not limited to, one-dimensional line structured light and two-dimensional patterns. During three-dimensional body surface scanning, a camera captures the pattern projected onto the surface of the human body. If the surface is flat, the pattern captured by the camera is similar to the original pattern emitted by the projector; if the surface is uneven, the captured pattern is distorted. The depth information of each target point can then be calculated from the spatial positions of the camera and the projector.
The core of the three-dimensional scanning system is to compare the surface of the human body with a reference surface to detect spatial positioning deviations. In the field of computer vision, the Iterative Closest Point (ICP) algorithm is a common point cloud registration method; it iterates continuously to reduce the distance between two point clouds. However, when aligning objects with translational or rotational symmetry, ICP can suffer from a registration sliding effect. This is especially likely in areas of the human body such as the chest and the flat abdomen, which lowers the speed and accuracy of point cloud registration.
Disclosure of Invention
In view of the above, the present invention provides a contour scanning imaging method, device, equipment and storage medium based on two modes, which are used for solving the problem that the speed and accuracy of human body point cloud registration are low due to the registration sliding effect in the process of registering all points in two human body point clouds by using an ICP algorithm in the existing three-dimensional scanning imaging method.
In order to achieve the above object, the following schemes are proposed:
a contour scanning imaging method based on two modes comprises the following steps:
Acquiring a two-dimensional image and an infrared image of a human body to be detected;
Performing phase unwrapping on the two-dimensional image to obtain a current body surface point cloud;
filtering special points in the current body surface point cloud, projecting the current body surface point cloud to an infrared image based on a preset coordinate conversion relation, and determining a target area according to the infrared image;
Extracting a body surface point cloud in a target area for point cloud registration, and calculating the difference between the current body surface point cloud and a reference body surface point cloud to obtain a rotation and translation relationship between the current body surface point cloud and the reference body surface point cloud;
rendering is carried out based on the current body surface point cloud, and a target image is obtained.
Preferably, the process of rendering based on the current body surface point cloud to obtain the target image comprises the following steps:
And rendering the current body surface point cloud by using multi-channel rendering, surface splatting and point sprite rendering techniques.
Preferably, the process of rendering the current body surface point cloud using multi-channel rendering includes:
carrying out multi-channel rendering on the current body surface point cloud through three rendering channels;
the rendering process of the three rendering channels is as follows:
The first rendering channel enables the depth test, disables color writes, enables depth writes, disables blending, disables point sprites, adjusts the depth of each fragment within a particle according to the particle radius, renders the current body surface point cloud, and outputs a depth texture map;
The second rendering channel disables the depth test, enables point sprites, enables color writes, disables depth writes, enables blending, sets the blending formula, adjusts the color value of each fragment using the depth value of the corresponding fragment from the first rendering channel, renders the depth texture map, and outputs a color texture map;
and the third rendering channel disables blending, draws a full-screen quad in the normalized device coordinate system, and applies the color texture map of the second rendering channel.
Preferably, the two-dimensional image is subjected to phase unwrapping by a four-step phase shift method and a multi-frequency heterodyne method, and the process comprises:
Acquiring each phase to be modulated of a two-dimensional image by adopting a four-step phase shift method;
Solving through an arctangent function based on the phase to be modulated to obtain a phase main value;
The method comprises the steps of calculating by the multi-frequency heterodyne method to obtain a unique principal value within [0, 2π], calculating the phase differences of sinusoidal gratings of different frequencies, converting the high-frequency phases into a low-frequency phase so that the phase-difference signal covers the whole field of view, and obtaining the absolute phase distribution of the two-dimensional image from the phase difference.
Preferably, the process of projecting the current body surface point cloud to the infrared image based on a preset coordinate conversion relation comprises the following steps:
According to the conversion relation between the point cloud coordinate system and the infrared camera coordinate system, converting the coordinates of each point in the current body surface point cloud from the point cloud coordinate system to the infrared camera coordinate system to obtain the infrared camera coordinates of each point, wherein the origin of the infrared camera coordinate system is the optical center of the lens, the x-axis and y-axis are respectively parallel to the two sides of the imaging plane, and the z-axis is the optical axis of the lens;
according to the conversion relation between the infrared camera coordinate system and the infrared image coordinate system, the infrared camera coordinate of each point is converted from the infrared camera coordinate system to the infrared image coordinate system, so that the infrared image coordinate of each point is obtained, wherein the origin of the infrared image coordinate system is the midpoint of the imaging plane, and the x-axis and the y-axis are respectively parallel to the two sides of the imaging plane.
Preferably, the process of converting coordinates of each point in the current body surface point cloud from the point cloud coordinate system to the infrared camera coordinate system includes:
converting the coordinates of each point in the current body surface point cloud from the point cloud coordinate system to the infrared camera coordinate system by the following formula:
[x_c y_c z_c]^T = R [x_w y_w z_w]^T + t,
wherein x_w, y_w, z_w are the three coordinate components of a point P in the point cloud coordinate system w, x_c, y_c, z_c are the three coordinate components of P in the infrared camera coordinate system c, and R and t are the rotation matrix and the translation vector;
a process for converting infrared camera coordinates of points from an infrared camera coordinate system to an infrared image coordinate system, comprising:
based on the principle of similar triangles, the infrared camera coordinates of each point are converted from the infrared camera coordinate system to the infrared image coordinate system through the following formula:
x = f·x_c / z_c, y = f·y_c / z_c,
wherein f is the focal length of the lens.
preferably, after extracting the body surface point cloud in the target area and performing point cloud registration, calculating to obtain a rotation and translation relationship between the current body surface point cloud and the reference body surface point cloud, the method further comprises:
Registering the current body surface point cloud with the reference body surface point cloud to obtain a single-mode registration result;
And carrying out weighted average on the target area point cloud registration result and the single-mode registration result to obtain a target registration result.
A two-modality based contour scanning imaging apparatus, comprising:
The image acquisition unit is used for acquiring a two-dimensional image and an infrared image of a human body to be detected;
the point cloud acquisition unit is used for performing phase unwrapping on the two-dimensional image to obtain the current body surface point cloud;
The point cloud processing unit is used for filtering special points in the current body surface point cloud, projecting the current body surface point cloud to an infrared image based on a preset coordinate conversion relation, and determining a target area according to the infrared image;
The point cloud registration unit is used for extracting the body surface point cloud in the target area to perform point cloud registration, calculating the difference between the current body surface point cloud and the reference body surface point cloud, and obtaining the rotation and translation relationship between the current body surface point cloud and the reference body surface point cloud;
And the image rendering unit is used for rendering based on the current body surface point cloud to obtain a target image.
A contour scanning imaging device based on two modes comprises a memory and a processor;
The memory is used for storing programs;
the processor is used for executing the program, and the steps of the contour scanning imaging method based on the two modes are carried out.
A storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the two modality based contour scanning imaging method described above.
According to the technical scheme, the two-modality contour scanning imaging method first acquires a two-dimensional image and an infrared image of the human body to be measured, then performs phase unwrapping on the two-dimensional image, filters special points in the current body surface point cloud, projects the current body surface point cloud onto the infrared image based on a preset coordinate conversion relation, extracts the points of the body surface point cloud falling in the target area for point cloud registration, calculates the difference between the current body surface point cloud and the reference body surface point cloud to obtain the rotation and translation relationship between them, and finally renders based on the current body surface point cloud to obtain the target image. By adding an infrared image to ordinary three-dimensional imaging, the invention increases the data dimension from three to four, which weakens the registration sliding effect and improves registration speed and accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a contour scanning imaging method based on two modes according to an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of an image acquisition system according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an imaging effect according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a contour scanning imaging device based on two modes according to an embodiment of the present invention;
fig. 5 is a hardware structural block diagram of a contour scanning imaging device based on two modes according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
First, referring to fig. 1, a contour scanning imaging method based on two modes provided in this embodiment is described, where, as shown in fig. 1, the method includes:
step S01, a two-dimensional image and an infrared image of a human body to be detected are obtained.
Specifically, the two-dimensional image and the infrared image of the human body to be measured acquired by the image acquisition system are received. As shown in fig. 2, in the image acquisition system a two-dimensional pattern projector projects sinusoidal fringes of a specific frequency onto the human body to be measured. The contour scanning imaging system then collects two-dimensional images of the human body, while the infrared imaging system, which may be an infrared camera, collects infrared images of the human body under an infrared light source. In addition, so that the infrared imaging system can better capture the infrared image, a polarizer can be arranged in front of the infrared imaging system and the infrared light source to prevent reflections from degrading the infrared image, and a band-pass filter of the corresponding wavelength can be added in front of the lens of the infrared imaging system to shield it from ambient light.
Step S02, performing phase unwrapping on the two-dimensional image.
Specifically, the two-dimensional image and the infrared image can be processed on the GPU: the received images are copied into the GPU, and the GPU performs phase unwrapping on the two-dimensional image to obtain the body surface point cloud. The phase unwrapping process is as follows:
the sinusoidal stripes are projected to the object to be measured by a projection device by utilizing a stripe projection technology, and a plurality of phase shift clusters are projected to mark the unique position by utilizing a phase shift method. The light intensity formula of the sinusoidal fringes is as follows:
I(x, y) = a(x, y) + b(x, y)·cos(φ(x, y) + δ),
wherein a(x, y) is the ambient light intensity, i.e. the reflected light of the measured object, b(x, y) is the modulated light intensity, φ(x, y) is the initial phase information of the wave surface of the measured object, δ is the phase-shift displacement amount, and (x, y) are the coordinates of the pixel in the fringe pattern.
The two-dimensional image can be phase-unwrapped by the four-step phase shift method and the multi-frequency heterodyne method; the specific process is as follows:
The four-step phase shift method is adopted to obtain the four phase-shifted intensities of the two-dimensional image:
I1 = a + b·cos(φ); I2 = a + b·cos(φ + π/2); I3 = a + b·cos(φ + π); I4 = a + b·cos(φ + 3π/2);
Based on these, the phase principal value is solved through an arctangent function:
φ = arctan((I4 − I2) / (I1 − I3)).
Calculating by the multi-frequency heterodyne method then yields a unique principal value within [0, 2π]: the phase differences of sinusoidal gratings of different frequencies are calculated, the high-frequency phases are converted into a low-frequency phase so that the phase-difference signal covers the whole field of view, and the absolute phase distribution of the two-dimensional image is obtained from the phase difference.
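As an illustrative sketch only (not the claimed implementation), the four-step phase shift recovery of the wrapped phase and the heterodyne combination of two wrapped phases can be written in NumPy as follows; the function names are chosen here for illustration:

```python
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    """Four-step phase shift: recover the wrapped (principal-value) phase
    from four fringe images taken with shifts of 0, pi/2, pi, 3*pi/2.
    With I_k = a + b*cos(phi + k*pi/2):
      I4 - I2 = 2*b*sin(phi),  I1 - I3 = 2*b*cos(phi)."""
    return np.arctan2(i4 - i2, i1 - i3)

def heterodyne(phi_a, phi_b):
    """Multi-frequency heterodyne: the difference of two wrapped phases at
    nearby fringe frequencies is an equivalent phase of much lower
    frequency, folded back into [0, 2*pi)."""
    d = phi_a - phi_b
    return np.where(d < 0, d + 2.0 * np.pi, d)
```

Applying `wrapped_phase` pixel-wise to the four captured fringe images yields the phase principal value; repeating this per grating frequency and combining the results with `heterodyne` extends the unambiguous range across the field of view.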
And S03, filtering special points in the current body surface point cloud, projecting the current body surface point cloud to an infrared image based on a preset coordinate conversion relation, and determining a target area according to the infrared image.
Specifically, in order to improve the accuracy and speed of subsequent registration and the efficiency of rendering, insignificant special points in the body surface point cloud can be filtered out by a geometry shader; the remaining points of the current body surface point cloud are then projected onto the infrared image according to the preset coordinate conversion relation, and the target area of the body surface point cloud is determined from the target area segmented on the infrared image. The coordinate conversion relation is determined beforehand by calibrating the cameras with a checkerboard. For example, if a blood vessel region on the infrared image serves as the target area, the vessel region is segmented on the infrared image, the body surface point cloud is projected onto the infrared image, and the points of the current body surface point cloud corresponding to the vessel region are determined.
And S04, extracting the body surface point cloud in the target area to perform point cloud registration.
Specifically, after the target area in the current body surface point cloud is determined, the body surface point cloud in the target area is extracted for point cloud registration. The registration can be based on the ICP algorithm running on the GPU; for example, it is performed on the point cloud of the blood vessel region of the human body to be measured. The body surface point cloud computed from the two-dimensional image is projected onto the vessel-segmented region of the infrared image to obtain the real-time body surface target point cloud to be registered. A reference body surface point cloud is determined during positioning; afterwards, the real-time body surface point cloud is registered against the reference body surface point cloud, and the rotation and translation relationship between the current and reference body surface point clouds is calculated. The patient's movement can be derived from this rotation and translation relationship.
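The ICP registration referred to above can be sketched as a minimal brute-force CPU version (the embodiment runs registration on the GPU; the function names here are illustrative):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Kabsch/SVD solution for R, t minimizing ||R*src + t - dst||
    over matched point pairs - the inner step of ICP."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    h = (src - cs).T @ (dst - cd)          # cross-covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, cd - r @ cs

def icp(src, dst, iters=20):
    """Brute-force ICP: match each source point to its nearest target
    point, solve for the rigid motion, apply it, repeat."""
    r_tot, t_tot = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # nearest-neighbour correspondences (O(n*m); fine for a sketch)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        r, t = best_rigid_transform(cur, matched)
        cur = cur @ r.T + t
        r_tot, t_tot = r @ r_tot, r @ t_tot + t
    return r_tot, t_tot
```

The returned rotation and translation are exactly the "rotation and translation relationship" between the current and reference point clouds described in this step.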
And step S05, rendering based on the current body surface point cloud.
Specifically, multi-channel rendering, surface splatting and point sprite rendering techniques are used to render the current body surface point cloud directly and obtain the target image. The current body surface point cloud is rendered in multiple passes through the three rendering channels; fig. 3 illustrates the effect of a rendered target image. Surface splatting is used throughout the multi-channel rendering process. Multi-channel rendering renders an object several times, accumulating the result of each pass into the final rendered result; it is the overall architecture of the rendering method. Point sprite rendering treats a single vertex as a sprite: a texture can be attached to one vertex, so work that previously required a rectangle of four vertices can now be done with one, reducing computation and optimizing rendering speed.
Surface splatting is a point rendering and texture filtering technique that renders opaque and transparent surfaces directly from an unconnected point cloud. It extends texture resampling to irregularly spaced point samples based on the screen-space formulation of the Elliptical Weighted Average (EWA) filter. Surface splatting assigns a radially symmetric Gaussian filter kernel to each footprint and reconstructs a continuous surface from the weighted average of the footprint data. It provides high-quality anisotropic texture filtering, hidden-surface removal, edge antialiasing and order-independent transparency; it is a rendering-specific optimization, but runs less efficiently when drawing highly complex models.
Point sprite rendering is used to draw large numbers of particles moving across the screen. Graphics are composed of vertices, and textures are mapped onto them. Where a textured quad normally requires four vertices, a point sprite can draw a two-dimensional texture at any position on the screen using only one vertex, saving the computation of the other three. The point sprite technique therefore reduces computation and improves rendering efficiency; it is an optimization of rendering speed.
The two-modality contour scanning imaging method provided in this embodiment first acquires a two-dimensional image and an infrared image of the human body to be measured, then performs phase unwrapping on the two-dimensional image, filters special points in the body surface point cloud, projects the body surface point cloud onto the infrared image based on a preset coordinate conversion relation, extracts the points of the body surface point cloud falling in the target area for point cloud registration, calculates the difference between the current body surface point cloud and the reference body surface point cloud to obtain the rotation and translation relationship between them, and finally renders based on the current body surface point cloud to obtain the target image. On the basis of ordinary three-dimensional imaging, this embodiment adds an infrared image dimension, increasing the data dimension from three to four, which weakens the registration sliding effect and further improves the speed and accuracy of point cloud registration.
In order to make the point cloud registration result more accurate, in step S04, the embodiment may further perform the following steps after extracting the body surface point cloud in the target area to perform the point cloud registration to obtain the rotation and translation relationship between the current body surface point cloud and the reference body surface point cloud:
registering the current body surface point cloud with the reference body surface point cloud to obtain a single-mode registration result. And carrying out weighted average on the target area point cloud registration result and the single-mode registration result to obtain a target registration result.
Specifically, the current body surface point cloud and the reference body surface point cloud are used for carrying out single-mode point cloud registration, and the rotation and translation relation between the current body surface point cloud and the reference body surface point cloud is calculated to obtain a single-mode registration result. And carrying out weighted average on the rotation and translation relation between the current body surface point cloud and the reference body surface point cloud obtained based on target area point cloud registration and the single-mode registration result to obtain a target registration result. The target registration result is used to represent the displacement of the patient.
The embodiment combines the single-mode registration result and the multi-mode registration result, so that the obtained target registration result can more accurately represent the displacement condition of the patient in the treatment process.
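One possible realization of the weighted average of the two registration results is sketched below. The treatment of the rotation part (averaging the matrices and projecting back onto SO(3) with an SVD) is an assumption of this sketch, since the patent does not specify how rotations are averaged; it is only valid when the two rotations are close, as they are for small patient displacements:

```python
import numpy as np

def fuse_transforms(r1, t1, r2, t2, w=0.5):
    """Blend two rigid registration results (e.g. the target-region result
    and the whole-surface single-modality result) with weight w on the
    first. Translations are averaged directly; rotation matrices are
    averaged and projected back onto SO(3) via the polar factor of an SVD."""
    t = w * t1 + (1 - w) * t2
    m = w * r1 + (1 - w) * r2
    u, _, vt = np.linalg.svd(m)
    r = u @ vt
    if np.linalg.det(r) < 0:            # keep a proper rotation
        r = u @ np.diag([1.0, 1.0, -1.0]) @ vt
    return r, t
```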
Next, in step S03, the present embodiment filters a specific point in the current body surface point cloud, and projects the current body surface point cloud to an infrared image based on a preset coordinate conversion relationship, and introduces a process of determining a target area according to the infrared image:
First, in order to improve the accuracy and speed of subsequent registration and the efficiency of rendering, special points in the current body surface point cloud can be filtered by a geometry shader, which discards points whose coordinate value is (0, 0, 0). The process of filtering the special points in the current body surface point cloud is as follows:
a. and obtaining the video memory address of the vertex array which is stored on the GPU and used for rendering.
b. assigning values to the array of step a in the phase-solving and point cloud reconstruction functions.
c. passing the vertex array of step a into the rendering pipeline with rasterization discarded: the vertex shader of the pipeline passes the incoming vertex coordinates to the geometry shader, which tests each vertex, discards it if its coordinate value is (0, 0, 0, 1) (homogeneous coordinates in the shader), and otherwise emits it.
d. Before the point cloud is drawn, transform feedback is enabled; after drawing is finished, the number of points in the point cloud is obtained and recorded as count. Since the vertices emitted by the geometry shader are packed contiguously in video memory, the subsequent registration and rendering stages only need to process the first 3 × count values in the video memory.
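A CPU analogue of the geometry-shader filtering of steps a-d, for illustration only, can be written as:

```python
import numpy as np

def filter_special_points(cloud):
    """CPU analogue of the geometry-shader filter described above: drop
    vertices whose coordinates are (0, 0, 0) - the value written where
    phase unwrapping produced no valid 3D point - and pack the surviving
    points contiguously, as the shader does in video memory."""
    keep = np.any(cloud != 0.0, axis=1)   # True for any non-zero coordinate
    return cloud[keep]

pts = np.array([[0.1, 0.2, 0.3],
                [0.0, 0.0, 0.0],          # invalid point, to be discarded
                [0.4, 0.5, 0.6]])
filtered = filter_special_points(pts)
```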
Then, the body surface point cloud is projected to the infrared image based on a preset coordinate conversion relation. The point cloud coordinate system describes the spatial position of the camera and the object to be measured as follows:
a. And converting the coordinates of each point in the current body surface point cloud from the point cloud coordinate system to the infrared camera coordinate system according to the conversion relation between the point cloud coordinate system and the infrared camera coordinate system, and obtaining the infrared camera coordinates of each point.
Specifically, the transformation from the point cloud coordinate system to the infrared camera coordinate system is a rigid body transformation: only the spatial position and orientation change, through translation and rotation, and the shape of the object is unchanged. The coordinates of each point in the body surface point cloud are converted from the point cloud coordinate system to the infrared camera coordinate system through the following formula:
[x_c y_c z_c]^T = R [x_w y_w z_w]^T + t,
wherein P_w = [x_w y_w z_w]^T; x_w, y_w, z_w are the three coordinate components of a point P in the point cloud coordinate system w, and x_c, y_c, z_c are its three coordinate components in the infrared camera coordinate system c. R is a 3×3 rotation matrix and t a 3×1 translation vector. The origin of the infrared camera coordinate system is the optical center of the lens, the x-axis and y-axis are respectively parallel to the two sides of the imaging plane, and the z-axis is the optical axis of the lens, perpendicular to the imaging plane.
B. And according to the conversion relation between the infrared camera coordinate system and the infrared image coordinate system, converting the infrared camera coordinate of each point from the infrared camera coordinate system to the infrared image coordinate system to obtain the infrared image coordinate of each point.
Specifically, the infrared image coordinate system is a planar coordinate system in which pixel positions are expressed in physical units (mm). Its origin is the intersection of the camera optical axis with the imaging plane, generally the midpoint of the imaging plane, and its x-axis and y-axis are respectively parallel to the two sides of the imaging plane. The conversion from 3D to 2D is completed according to the perspective projection relation. By the principle of similar triangles, the infrared camera coordinates of each point are converted from the infrared camera coordinate system to the infrared image coordinate system through the following formula:
x = f·x_c / z_c, y = f·y_c / z_c,
wherein f is the focal length of the lens.
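The two conversions above (rigid transform into the camera frame, then pinhole projection onto the image plane) can be combined into one illustrative routine; the pixel-unit intrinsics fx, fy and principal point cx, cy are assumed here to come from the checkerboard calibration mentioned earlier:

```python
import numpy as np

def project_to_image(points_w, r, t, fx, fy, cx, cy):
    """Project point-cloud coordinates into the infrared image:
    rigid transform into the camera frame, then pinhole projection
    u = fx*xc/zc + cx, v = fy*yc/zc + cy."""
    pc = points_w @ r.T + t            # point cloud frame -> camera frame
    u = fx * pc[:, 0] / pc[:, 2] + cx  # similar-triangle projection
    v = fy * pc[:, 1] / pc[:, 2] + cy
    return np.stack([u, v], axis=1)
```

A point on the optical axis lands on the principal point; off-axis points are scaled by focal length over depth, exactly the similar-triangle relation of the formula above.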
Finally, the process of determining the target area from the infrared image is as follows:
a. Mask extraction is performed on the infrared image: the grayscale image is color-inverted, so white pixels become black, black pixels become white, and pixels in the other value ranges are inverted in the same way. A rough target area is selected on the infrared image, foreground and background are separated by threshold segmentation to produce a mask, and a bitwise AND of the mask and the original infrared image keeps the pixels of the selected area while setting the pixels of all other areas to 0.
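The inversion, threshold mask and bitwise AND of step a can be sketched on a tiny grayscale grid (pure Python, list-of-rows images; the threshold value is illustrative):

```python
def invert(gray):
    """Color inversion of an 8-bit grayscale image (list of rows)."""
    return [[255 - p for p in row] for row in gray]

def threshold_mask(gray, thresh):
    """Foreground/background split: 1 where the pixel exceeds thresh, else 0."""
    return [[1 if p > thresh else 0 for p in row] for row in gray]

def apply_mask(gray, mask):
    """Bitwise AND with the mask: keep selected pixels, zero the rest."""
    return [[p if m else 0 for p, m in zip(rg, rm)] for rg, rm in zip(gray, mask)]

img = [[10, 200], [250, 30]]
inv = invert(img)                # [[245, 55], [5, 225]]
mask = threshold_mask(inv, 128)  # [[1, 0], [0, 1]]
print(apply_mask(inv, mask))     # [[245, 0], [0, 225]]
```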
B. Target region extraction. The features used for registration are extracted from the target region; for example, when the target region is a blood vessel, the vessel features may be extracted by a morphological algorithm. Line segments of length 10 and orientation i×15° (0 ≤ i < 12) are used as structural elements b to cover vessels in all directions, and image detail is extracted by the morphological top-hat transformation of image f: h = f − (f ∘ b),
where h is the top-hat transformed image, ∘ denotes the opening operation, defined as (f ⊖ b) ⊕ b, with ⊖ the erosion operation and ⊕ the dilation operation.
C. Image enhancement. Image contrast is enhanced with contrast-limited adaptive histogram equalization (CLAHE). The image is divided into non-overlapping sub-blocks of equal size; the histogram of each sub-block is computed, contrast is limited according to a clipping threshold clipLimit, the pixel counts exceeding the threshold are redistributed uniformly over the probability density distribution, the histogram is equalized, and the blocking effect introduced by the subdivision is reduced by bilinear interpolation.
D. Image noise reduction. Gaussian blur is used to smooth noise and reduce its influence. The Gaussian kernel is G(x, y) = (1/(2πσ²))·exp(−(x² + y²)/(2σ²)), where x and y are pixel coordinates and σ is the standard deviation.
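A discrete Gaussian kernel evaluates G(x, y) on a grid around the centre and normalizes the weights to sum to 1 (the 1/(2πσ²) factor cancels in the normalization). A minimal sketch:

```python
import math

def gaussian_kernel(size, sigma):
    """size x size Gaussian kernel, normalized so its entries sum to 1."""
    c = size // 2
    k = [[math.exp(-((x - c) ** 2 + (y - c) ** 2) / (2 * sigma ** 2))
          for x in range(size)] for y in range(size)]
    s = sum(sum(row) for row in k)
    return [[v / s for v in row] for row in k]

k = gaussian_kernel(3, 1.0)
print(k[1][1])  # centre weight, the largest entry (about 0.204 here)
```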
The image is sharpened with the Laplacian second-order differential operator: ∇²f = ∂²f/∂x² + ∂²f/∂y².
Since image gray values are integers in 0–255 and x takes discrete values, the formula must be generalized from one-dimensional to two-dimensional space; the derivation from the one-dimensional formula to the two-dimensional formula is as follows:
the second derivative of the discrete function is:
f''(x) = f'(x+1) − f'(x) = f(x+1) + f(x−1) − 2f(x).
The two-dimensional Laplacian characterizes the change of the function at a point after small displacements along 4 degrees of freedom, the directions being (1, 0), (−1, 0), (0, 1), (0, −1); adding the diagonals gives 8 degrees of freedom. The computation is expressed as a Laplacian convolution kernel; the 4-neighborhood kernel used is [[0, 1, 0], [1, −4, 1], [0, 1, 0]], and the 8-neighborhood (diagonal-including) variant is [[1, 1, 1], [1, −8, 1], [1, 1, 1]].
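Convolving with the 4-neighbourhood Laplacian kernel [[0, 1, 0], [1, −4, 1], [0, 1, 0]] implements f(x+1, y) + f(x−1, y) + f(x, y+1) + f(x, y−1) − 4f(x, y); a minimal sketch over interior pixels:

```python
def laplacian(img):
    """Convolve the interior pixels of a grayscale image (list of rows)
    with the 4-neighbourhood Laplacian kernel [[0,1,0],[1,-4,1],[0,1,0]]."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = (img[y - 1][x] + img[y + 1][x] +
                         img[y][x - 1] + img[y][x + 1] - 4 * img[y][x])
    return out

flat = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]
peak = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
print(laplacian(flat)[1][1])  # 0: the Laplacian vanishes on constant regions
print(laplacian(peak)[1][1])  # -4: a bright spike gives a strong response
```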
The infrared image is Fourier transformed: F(u, v) = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x, y)·e^{−j2π(ux/M + vy/N)}, where F(u, v) is the frequency-domain image of the infrared image, f(x, y) is the spatial-domain image, and the image width is M and the height is N.
The zero-frequency component is moved to the center of the frequency-domain image, and Gaussian low-pass filtering is applied to the frequency-domain image of the infrared image: H(u, v) = e^{−D²(u, v)/(2·D0²)}, where D0 is the cut-off frequency, which can be set to 2 and determines the frequency bandwidth, i.e. the smoothness, of the filter; D(u, v) is the distance from the image point (u, v) to the frequency center; (u, v) are the coordinates of each point on the frequency-domain image, ranging over [0, M−1] and [0, N−1], with M and N the numbers of image columns and rows respectively.
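Once the spectrum is centred, the transfer function H(u, v) = exp(−D²(u, v)/(2·D0²)) can be tabulated directly; a minimal sketch (grid size and cut-off are illustrative):

```python
import math

def glpf(M, N, d0):
    """Gaussian low-pass transfer function H(u,v) = exp(-D^2 / (2*D0^2)),
    with D the distance from (u,v) to the centred zero frequency (M/2, N/2)."""
    H = [[0.0] * N for _ in range(M)]
    for u in range(M):
        for v in range(N):
            d2 = (u - M / 2) ** 2 + (v - N / 2) ** 2
            H[u][v] = math.exp(-d2 / (2 * d0 ** 2))
    return H

H = glpf(8, 8, 2.0)
print(H[4][4])  # 1.0 at the frequency centre; values decay towards the corners
```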
The filtered image is restored by inverse zero-frequency shifting and the two-dimensional inverse discrete Fourier transform: f(x, y) = (1/(MN))·Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} F(u, v)·e^{j2π(ux/M + vy/N)}.
e. Threshold segmentation. The restored image is segmented with a binary threshold and converted to a binary image, in which the white area represents the target area and the remaining areas are set to black. If the target region is a blood vessel, the vessel region is the white area.
F. Target-area image denoising. A threshold Ts is set according to the area {Si} of each connected component of the binary image; the components are screened against it and isolated noise points are removed. If the target area is a blood vessel, the vessel components are the ones screened.
G. Determination of the target-area point cloud. If the infrared-image pixel onto which a point of the body surface point cloud projects has value 1, that point belongs to the target-area point cloud. After all points have been evaluated, the target-area point cloud is extracted and registered with the ICP algorithm.
In this embodiment, the three-dimensional information is projected onto the IR (infrared) image plane, the three-dimensional and two-dimensional images are combined with a Perspective-n-Point (PnP) algorithm, the 3D/IR matrix is calculated, and the three-dimensional body surface point cloud is associated with the information of the two-dimensional infrared image.
Further, this embodiment describes the point cloud registration of the body surface point cloud in the target area in step S04. Registration is performed on the GPU with the ICP algorithm; the iterative closest point (ICP) algorithm is divided into four phases:
(1) The point cloud data of the original target area is sampled. Normal-space sampling is used so that each normal direction contains a similar number of points, based on the distribution of normals over the object surface, preserving the fine features of the target-area point cloud as far as possible.
(2) The initial corresponding point sets P and Q are determined. The distance from each point to the tangent plane is calculated with a point-to-plane algorithm, and the reference point set P = {p1, p2, ..., pn} and the data point set Q = {q1, q2, ..., qn} are determined.
(3) Erroneous corresponding pairs are removed. Based on a rigid-motion consistency constraint — rigid motion preserves the corresponding points of the overlapping area of the measured object — the adjacency relation between corresponding point pairs is used as the consistency criterion, and for each point in Q the nearest point in P is found to form a matching pair (p, q). If ‖p − q‖ > εσ, the pair (p, q) is removed, where σ is the average distance over all corresponding point pairs and ε is a given threshold.
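One reading of the rejection rule above — pairs farther apart than ε times the mean pair distance are dropped (the scale ε = 1.5 is illustrative) — can be sketched as:

```python
import math

def prune_pairs(pairs, eps):
    """Drop correspondences whose distance exceeds eps times the mean
    pair distance (one interpretation of the rejection rule)."""
    dists = [math.dist(p, q) for p, q in pairs]
    sigma = sum(dists) / len(dists)  # average pair distance
    return [pq for pq, d in zip(pairs, dists) if d <= eps * sigma]

pairs = [((0, 0, 0), (0, 0, 1)),
         ((1, 0, 0), (1, 0, 1)),
         ((2, 0, 0), (2, 0, 9))]  # gross outlier
print(len(prune_pairs(pairs, 1.5)))  # 2: the outlier pair is removed
```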
(4) The coordinate transformation is solved, with the weighted sum of squared Euclidean distances over all matching point pairs as the objective function. R and t are computed by singular value decomposition (SVD) to minimize the objective function, constructing a least-squares problem that makes the sum of squared errors minimal. The objective function is: E(R, t) = Σi ωi·‖qi − (R·pi + t)‖²,
from which the corresponding rotation matrix R and translation vector t are computed, where ωi is the weight of each point and d is the dimension of the points; for the body surface point cloud, d = 3.
First, with R held fixed, the translation vector t is solved; the objective function is then: E(t) = Σi ωi·‖qi − R·pi − t‖².
Taking the partial derivative of the current objective function with respect to t and setting it to zero gives: t = q̄ − R·p̄, where p̄ = (Σi ωi·pi)/(Σi ωi) and q̄ = (Σi ωi·qi)/(Σi ωi) are the weighted centroids. The centered coordinates are denoted xi = pi − p̄ and yi = qi − q̄.
Secondly, after t is obtained, it is substituted back and R is solved from: E(R) = Σi ωi·‖yi − R·xi‖².
The formula is expanded: ‖yi − R·xi‖² = yiᵀyi − 2·yiᵀR·xi + xiᵀRᵀR·xi; because RᵀR = I, the last term reduces to xiᵀxi. Since yiᵀR·xi is a scalar, the scalar property a = aᵀ gives yiᵀR·xi = (yiᵀR·xi)ᵀ = xiᵀRᵀyi. Substituting back, only the cross term depends on R, so minimizing E(R) is equivalent to maximizing Σi ωi·yiᵀR·xi.
From the properties of the matrix trace: Σi ωi·yiᵀR·xi = tr(WYᵀRX), where X = [x1, ..., xn] and Y = [y1, ..., yn] are the matrices of centered points and W = diag(ω1, ..., ωn).
The covariance matrix S = XWYᵀ is defined; SVD decomposition of S gives S = UΣVᵀ, and substitution yields tr(WYᵀRX) = tr(ΣVᵀRU). From the properties of orthogonal matrices and the SVD, the trace is maximized by R = VUᵀ.
If E (R, t) > E is set as the threshold value of the iteration error, a new Q' is obtained according to the conversion of R, t, the corresponding point pair is continuously found, and the iteration is performed until E (R, t) is smaller than the set threshold value E.
Still further, this embodiment describes the rendering based on the current body surface point cloud in step S05. The current body surface point cloud is rendered directly with multi-pass rendering, surface splatting and point sprite rendering techniques to obtain the target image.
1. The process of multi-channel rendering of the current body surface point cloud through three channels is as follows:
First render pass: the depth test is enabled, color writing is disabled, depth writing is enabled, blending is disabled and point sprites are disabled; the depth of each fragment within a particle is adjusted according to the particle radius, the current body surface point cloud is rendered, and a depth texture map is output. In principle, every pixel on the screen is computed by a shader; where occluding, overlapping or blended objects stack front to back, a single pixel may invoke several fragment shader executions;
Second render pass: the depth test is disabled, point sprites are enabled, color writing is enabled, depth writing is disabled, blending is enabled and the blending equation is set; the depth value of each fragment from the first render pass is used to adjust that fragment's color value, the depth texture map is rendered, and a color texture map is output;
Third render pass: blending is disabled, a full-screen quadrilateral is drawn in the normalized device coordinate system, and the color texture map of the second render pass is applied to it.
2. The rendering process of the surface splatting technique:
A point-based object is represented as a set of unconnected points {Pk} irregularly distributed in three-dimensional space, each associated with a radially symmetric basis function rk and with coefficients wk^r, wk^g, wk^b representing the red, green and blue color components; wk is used as a generalized notation for these coefficients.
An arbitrary point Q on the surface is chosen, and a functional expression of Q in terms of the points {Pk} in its minimal neighbourhood is established. Treating the point Q and the neighbouring points Pk through their corresponding two-dimensional plane coordinates u and uk, the function on the plane is defined as the weighted sum of points:
fc(u)=Σk∈Nwkrk(u-uk),
Given the mapping x = m(u): R² → R² from the function plane to the screen, the following steps are needed:
(1) Transforming fc (u) into a screen region, generating a continuous region signal:
gc(x) = (fc ∘ m⁻¹)(x) = fc(m⁻¹(x)),
where ∘ denotes function composition.
(2) Using the prefilter h, the band-limited screen-space signal is a continuous function: g′c(x) = (gc ⊗ h)(x), where ⊗ denotes the convolution operation.
(3) The result of the continuous function is sampled: multiplying the continuous result by the impulse train j(x) yields the discrete result g(x) = g′c(x)·j(x).
(4) Expanding the above relations in reverse yields the detailed expression: g′c(x) = Σ_{k∈N} wk·ρk(x),
where ρk(x) = ∫ h(x − ξ)·rk(m⁻¹(ξ) − uk) dξ is the resampling kernel. Each basis function rk is transformed and filtered independently to construct its resampling kernel ρk, and the process of accumulating these kernels in screen space is surface splatting.
Furthermore, ρk(x) can be further simplified by replacing m(u) with its local affine approximation at the point uk: mk(u) = xk + Jk·(u − uk),
where xk = m(uk) and Jk is the Jacobian matrix ∂m/∂u evaluated at uk. A further calculation then gives:
ρk(x) = (r′k ⊗ h)(x − xk), where r′k denotes the transformed basis function (rk warped by the affine mapping); the screen-space resampling kernel ρk(x) is thus expressed as a convolution of the transformed basis function r′k with the low-pass filter kernel h, even though the texture function is defined on an irregular grid.
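The weighted sum fc(u) = Σ_{k∈N} wk·rk(u − uk) underlying the splatting above can be evaluated directly once a concrete radially symmetric basis is chosen; a minimal sketch with a Gaussian rk (an assumed choice, not specified in the text):

```python
import math

def gaussian_rbf(u, uk, sigma=1.0):
    """A radially symmetric basis function r_k (Gaussian, an assumed choice)."""
    d2 = (u[0] - uk[0]) ** 2 + (u[1] - uk[1]) ** 2
    return math.exp(-d2 / (2 * sigma ** 2))

def f_c(u, centers, weights, sigma=1.0):
    """f_c(u) = sum_k w_k * r_k(u - u_k) over the neighbouring points."""
    return sum(w * gaussian_rbf(u, uk, sigma)
               for w, uk in zip(weights, centers))

centers = [(0.0, 0.0), (1.0, 0.0)]
weights = [1.0, 2.0]
print(f_c((0.0, 0.0), centers, weights))  # 1 + 2*exp(-0.5), about 2.213
```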
3. The point sprite rendering process:
The fragment shader has a built-in read-only variable gl_PointCoord, which gives texture coordinates interpolated across the point.
The vertex shader has a built-in output variable gl_PointSize, which controls the final rasterized size of the point, in pixels. The size of a point can be determined from its distance to the viewpoint by the distance-attenuation formula: size = size₀·sqrt(1/(a + b·d + c·d²)),
where d is the distance from the point to the viewpoint and a, b, c are the parameters of the quadratic, which can be stored as uniform values or set as constants in the vertex shader: a controls the constant part of the final value, b its linear variation with distance, and c its quadratic variation with distance.
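The quadratic attenuation described above is commonly applied as size·sqrt(1/(a + b·d + c·d²)); the exact form used in the embodiment is not given, so this is an assumption. A minimal sketch:

```python
import math

def point_size(base, d, a, b, c):
    """Distance-attenuated sprite size, one common form of the quadratic rule:
    base * sqrt(1 / (a + b*d + c*d*d)); a = constant, b = linear, c = quadratic."""
    return base * math.sqrt(1.0 / (a + b * d + c * d * d))

print(point_size(10.0, 0.0, 1.0, 0.0, 0.05))          # 10.0 at the viewpoint
print(point_size(10.0, 2.0, 1.0, 0.0, 0.05) < 10.0)   # True: shrinks with distance
```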
The two-modality-based contour scanning imaging device provided by the embodiment of the invention is described below; the device described below and the two-modality-based contour scanning imaging method described above may be referred to in correspondence with each other.
Referring first to fig. 4, a two-modality-based contour scanning imaging apparatus is described, as shown in fig. 4, which may include:
An image acquisition unit 100 for acquiring a two-dimensional image and an infrared image of a human body to be measured;
the point cloud acquisition unit 200 is configured to perform phase unwrapping on the two-dimensional image to obtain a current body surface point cloud;
The point cloud processing unit 300 is configured to filter special points in the current body surface point cloud, project the current body surface point cloud onto the infrared image based on a preset coordinate conversion relationship, and determine a target area according to the infrared image;
the point cloud registration unit 400 is configured to extract a body surface point cloud in the target area for point cloud registration, calculate a difference between the current body surface point cloud and a reference body surface point cloud, and obtain a rotation and translation relationship between the current body surface point cloud and the reference body surface point cloud;
And the image rendering unit 500 is used for rendering based on the current body surface point cloud to obtain a target image.
The contour scanning imaging device based on the two modes, which is provided by the embodiment of the invention, can be applied to contour scanning imaging equipment based on the two modes. Fig. 5 shows a block diagram of a hardware architecture of a two-modality based contour scanning imaging apparatus, referring to fig. 5, the hardware architecture of the apparatus may comprise at least one processor 1, at least one communication interface 2, at least one memory 3 and at least one communication bus 4;
In the embodiment of the invention, the number of the processor 1, the communication interface 2, the memory 3 and the communication bus 4 is at least one, and the processor 1, the communication interface 2 and the memory 3 complete the communication with each other through the communication bus 4;
The processor 1 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention;
the memory 3 may comprise high-speed RAM, and may further comprise non-volatile memory, such as at least one magnetic disk memory;
the memory stores a program, and the processor can call the program stored in the memory, wherein the program is used for realizing each processing flow in the contour scanning imaging scheme based on the two modes.
The embodiment of the invention also provides a storage medium, which can store a program suitable for being executed by a processor, and the program is used for realizing each processing flow in the two-mode-based contour scanning imaging scheme.
Finally, it is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. In the present specification, each embodiment is described in a progressive manner, and each embodiment is mainly described in a different point from other embodiments, and identical and similar parts between the embodiments are all enough to refer to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A contour scanning imaging method based on two modalities, characterized in that the method comprises:
acquiring a two-dimensional image and an infrared image of a human body to be measured;
performing phase unwrapping on the two-dimensional image to obtain a current body surface point cloud;
filtering special points in the current body surface point cloud, projecting the current body surface point cloud onto the infrared image based on a preset coordinate conversion relationship, and determining a target area according to the infrared image;
extracting the body surface point cloud in the target area for point cloud registration, and calculating the difference between the current body surface point cloud and a reference body surface point cloud to obtain the rotation and translation relationship between them;
rendering based on the current body surface point cloud to obtain a target image.

2. The two-modality contour scanning imaging method according to claim 1, characterized in that the process of rendering based on the current body surface point cloud to obtain the target image comprises:
rendering the current body surface point cloud using multi-pass rendering, surface splatting and point sprite rendering techniques.

3. The two-modality contour scanning imaging method according to claim 2, characterized in that the process of rendering the current body surface point cloud using multi-pass rendering comprises:
performing multi-pass rendering of the current body surface point cloud through three render passes, wherein the three passes proceed as follows:
first render pass: enable the depth test, disable color writing, enable depth writing, disable blending, disable point sprites, adjust the depth of each fragment in a particle according to the particle radius, render the current body surface point cloud, and output a depth texture map;
second render pass: disable the depth test, enable point sprites, enable color writing, disable depth writing, enable blending, set the blending equation, adjust the color value of each fragment using the depth values from the first render pass, render the depth texture map, and output a color texture map;
third render pass: disable blending, draw a full-screen quadrilateral in the normalized device coordinate system, and apply the color texture map of the second render pass.

4. The two-modality contour scanning imaging method according to claim 1, characterized in that phase unwrapping is performed on the two-dimensional image by the four-step phase shift method and the multi-frequency heterodyne method, the process comprising:
obtaining each phase to be modulated of the two-dimensional image by the four-step phase shift method;
solving with the arctangent function, based on the phases to be modulated, to obtain the principal phase value;
calculating by the multi-frequency heterodyne method to obtain a unique principal value in [0, 2π), calculating the phase difference of sinusoidal gratings of different frequencies to convert high-frequency phase into low-frequency phase so that the phase difference signal covers the entire field of view, and obtaining the absolute phase distribution of the two-dimensional image from the phase difference.

5. The two-modality contour scanning imaging method according to claim 1, characterized in that the process of projecting the current body surface point cloud onto the infrared image based on a preset coordinate conversion relationship comprises:
converting the coordinates of each point in the current body surface point cloud from the point cloud coordinate system to the infrared camera coordinate system according to their conversion relation, to obtain the infrared camera coordinates of each point, wherein the origin of the infrared camera coordinate system is the optical center of the lens, the x and y axes are respectively parallel to the two sides of the imaging plane, and the z axis is the optical axis of the lens;
converting the infrared camera coordinates of each point from the infrared camera coordinate system to the infrared image coordinate system according to their conversion relation, to obtain the infrared image coordinates of each point, wherein the origin of the infrared image coordinate system is the midpoint of the imaging plane and the x and y axes are respectively parallel to the two sides of the imaging plane.

6. The two-modality contour scanning imaging method according to claim 5, characterized in that the process of converting the coordinates of each point in the current body surface point cloud from the point cloud coordinate system to the infrared camera coordinate system comprises:
converting by the formula [xc yc zc]T = R[xw yw zw]T + t, where xw, yw, zw are the three coordinate components of a point P in the point cloud coordinate system w, xc, yc, zc are its three coordinate components in the infrared camera coordinate system c, and R and t are two transformation matrices;
and the process of converting the infrared camera coordinates of each point from the infrared camera coordinate system to the infrared image coordinate system comprises:
converting the infrared camera coordinates of each point based on the principle of similar triangles.

7. The two-modality contour scanning imaging method according to any one of claims 1-6, characterized in that, after extracting the body surface point cloud in the target area for point cloud registration and calculating the rotation and translation relationship between the current body surface point cloud and the reference body surface point cloud, the method further comprises:
registering the current body surface point cloud with the reference body surface point cloud to obtain a single-modality registration result;
taking a weighted average of the target-area point cloud registration result and the single-modality registration result to obtain the target registration result.

8. A contour scanning imaging apparatus based on two modalities, characterized by comprising:
an image acquisition unit for acquiring a two-dimensional image and an infrared image of a human body to be measured;
a point cloud acquisition unit for performing phase unwrapping on the two-dimensional image to obtain a current body surface point cloud;
a point cloud processing unit for filtering special points in the current body surface point cloud, projecting the current body surface point cloud onto the infrared image based on a preset coordinate conversion relationship, and determining a target area according to the infrared image;
a point cloud registration unit for extracting the body surface point cloud in the target area for point cloud registration, and calculating the difference between the current body surface point cloud and a reference body surface point cloud to obtain the rotation and translation relationship between them;
an image rendering unit for rendering based on the current body surface point cloud to obtain a target image.

9. A contour scanning imaging device based on two modalities, characterized by comprising: a memory and a processor;
the memory is used to store a program;
the processor is used to execute the program to implement the steps of the two-modality contour scanning imaging method according to any one of claims 1-7.

10. A storage medium on which a computer program is stored, characterized in that, when the computer program is executed by a processor, the steps of the two-modality contour scanning imaging method according to any one of claims 1-7 are implemented.
CN202411365130.5A (Active) — Dual-modality contour scanning imaging method, device, equipment and storage medium

Priority Applications (1)

Application Number: CN202411365130.5A | Priority Date: 2024-09-27 | Filing Date: 2024-09-27 | Title: Dual-modality contour scanning imaging method, device, equipment and storage medium

Publications (2)

CN119338974A — Publication Date: 2025-01-21
CN119338974B — Publication Date: 2025-07-08

Family

ID=94264375


Citations (4)

* Cited by examiner, † Cited by third party

CN103337071A (en) * — Priority 2013-06-19, Published 2013-10-02 — 北京理工大学 — Device and method for structure-reconstruction-based subcutaneous vein three-dimensional visualization
CN109499010A (en) * — Priority 2018-12-21, Published 2019-03-22 — 苏州雷泰医疗科技有限公司 — Radiotherapy auxiliary system based on infrared and visible light three-dimensional reconstruction, and method thereof
US11295460B1 (en) * — Priority 2021-01-04, Published 2022-04-05 — Proprio, Inc. — Methods and systems for registering preoperative image data to intraoperative image data of a scene, such as a surgical scene
CN116664687A (en) * — Priority 2023-05-16, Published 2023-08-29 — 南方科技大学 — Four-dimensional thermal imaging model generation method and device, and electronic equipment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party

ZWICKER M, et al., in: Computer Graphics, SIGGRAPH 2001 Conference Proceedings, Los Angeles, CA, Aug. 12-17, 2001. New York, NY: ACM. *


Legal Events

PB01 — Publication
SE01 — Entry into force of request for substantive examination
GR01 — Patent grant
