CN120374818A - Texture mapping method and equipment for oral cavity three-dimensional model aiming at soft tissues - Google Patents

Texture mapping method and equipment for oral cavity three-dimensional model aiming at soft tissues

Info

Publication number
CN120374818A
Authority
CN
China
Prior art keywords
dimensional
soft tissue
model
soft
texture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202510525121.6A
Other languages
Chinese (zh)
Inventor
王之玥
胡庆武
尚政军
艾明耀
赵鹏程
柳天成
王子赫
蒋楚剑
范兵
曹睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University (WHU)
Priority to CN202510525121.6A
Publication of CN120374818A
Legal status: Pending


Abstract


The present invention provides a texture mapping method and device for oral three-dimensional models of soft tissue, comprising: scanning and collecting two-dimensional images and three-dimensional point cloud data of the oral cavity; performing panoramic fusion and soft/hard tissue segmentation on the two-dimensional images; applying an offline parameterization method to the separated soft tissue, reconstructing the soft tissue three-dimensional model by setting deformation parameters and iteratively optimizing with segmented fusion; and mapping the texture information of the two-dimensional images onto the three-dimensional model, screening the texture information through probability map optimization to complete texture mapping of the oral soft tissue three-dimensional model. Through differentiated and efficient data processing, the invention achieves fast and accurate oral three-dimensional model reconstruction and texture mapping.

Description

Texture mapping method and equipment for oral cavity three-dimensional model aiming at soft tissues
Technical Field
The invention relates to the field of three-dimensional oral model reconstruction and texture mapping, and in particular to a texture mapping scheme for soft tissue in oral three-dimensional models.
Background
With the rapid development of technology, three-dimensional model construction and texture mapping are increasingly widely applied in stomatology and play an important role in orthodontics, restoration, and the diagnosis and treatment planning of oral diseases. Traditional oral three-dimensional model construction methods mainly comprise plaster impression taking and two-dimensional imaging, but both have marked limitations. Manual impression taking is inefficient and of limited precision, struggles with complex oral structures, and cannot intuitively display detailed features such as the colors and textures of teeth and soft tissue; two-dimensional imaging, owing to information loss from view-angle limitations and projection distortion, has difficulty accurately reflecting complex intraoral structures when reconstructing three-dimensional geometry. These limitations affect, to some extent, the accuracy of clinical diagnosis and the formulation of treatment plans.
In recent years, with the rapid development of oral three-dimensional scanning, handheld oral three-dimensional scanners have become popular; they can collect two-dimensional RGB images and three-dimensional point cloud data simultaneously. However, how to map the texture information of the two-dimensional images onto the three-dimensional point cloud model accurately and efficiently remains an open technical problem, particularly for the three-dimensional reconstruction and texture mapping of intraoral soft tissue, where the prior art rarely achieves results that are both accurate and fast. Because soft tissue has complex geometry and non-rigid characteristics, existing texture mapping methods cannot effectively maintain texture continuity and accuracy, which further increases the difficulty of implementation.
Therefore, developing an efficient and accurate texture mapping method for oral three-dimensional models of soft tissue, and in particular rapid texture mapping of oral soft tissue, is of great significance for improving the accuracy of oral diagnosis and treatment. Such a method must overcome the challenges of non-rigid soft tissue deformation to achieve high-quality three-dimensional model reconstruction and texture mapping.
Disclosure of Invention
The invention provides an oral three-dimensional model texture mapping method for soft tissue. A handheld oral three-dimensional scanner collects two-dimensional images and three-dimensional point clouds, and soft and hard tissue are separated after panoramic fusion of the two-dimensional images. An offline parameterization method then sets deformation parameters for the separated non-rigid soft tissue; through iterative optimization with segmented fusion, the deformed voxels accurately describe the fine deformation of the soft tissue, so that the soft tissue model is reconstructed accurately and finally receives high-definition texture mapping.
To this end, the texture mapping method designed by the invention is characterized in that, for the non-rigid deformation of the soft tissue region, an offline parameterized three-dimensional reconstruction method is applied after the soft tissue is separated from the hard tissue, ensuring the accuracy of soft tissue three-dimensional reconstruction and texture mapping while improving the overall efficiency of oral modeling.
In order to solve the technical problems, the technical scheme of the invention is an oral cavity three-dimensional model texture mapping method aiming at soft tissues, which comprises the following steps:
Scanning and collecting two-dimensional images and three-dimensional point cloud data of an oral cavity;
carrying out panorama fusion and soft and hard tissue segmentation on the two-dimensional image;
an off-line parameterization method is adopted for the separated soft tissue part, and a soft tissue three-dimensional model is reconstructed through setting deformation parameters and iteration optimization based on segmentation fusion;
and mapping the texture information of the two-dimensional image to a three-dimensional model, and optimizing and screening the texture information through a probability map to finish the texture mapping of the three-dimensional model of the soft tissue of the oral cavity.
Moreover, when the three-dimensional point cloud data are acquired, noise filtering and redundant frame elimination are performed on them, the redundant frame elimination being based on the overlapping rate between adjacent frames.
Moreover, the soft and hard tissue segmentation adopts an improved U-Net model, which adds an attention mechanism between the encoder and decoder, enhances the soft-hard tissue interface region through attention gates, and outputs a segmentation mask.
Moreover, feature point extraction for soft tissue adopts a deformable part model, feature point extraction for hard tissue adopts the ORB algorithm, and the two-dimensional image features and three-dimensional point cloud features are matched through a cross-modal model.
Moreover, when the offline parameterized three-dimensional reconstruction is performed, the soft tissue point cloud is converted into a voxel grid, translation, rotation, and scaling deformation parameters are defined for each voxel, and the deformation parameters are solved by optimizing an objective function.
Moreover, the voxel grid is optimized by dividing the soft tissue into a plurality of small regions based on the K-Means algorithm and performing segmented-fusion iterative optimization with each region treated as an approximate rigid body, until all voxels are fused into a whole.
Moreover, the texture mapping comprises calculating the color variance of the texture region corresponding to each triangular patch, generating a probability map, retaining texture regions whose probability values are above the corresponding threshold, and performing local and global toning.
In another aspect, the present invention also provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the oral three-dimensional model texture mapping method for soft tissue as described above when executing the program.
In another aspect, the present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements an oral three-dimensional model texture mapping method for soft tissue as described above.
In another aspect, the invention also provides a computer program product comprising a computer program which when executed by a processor implements the oral three-dimensional model texture mapping method for soft tissue as described above.
The invention has the following positive effects:
1) The invention provides an oral cavity three-dimensional model texture mapping method aiming at soft tissues, which realizes rapid and accurate oral cavity three-dimensional model reconstruction and texture mapping through a differentiated and efficient data processing method.
2) According to the invention, soft and hard tissues are segmented in advance on the panoramic two-dimensional image, enabling differentiated processing adapted to their different rigidity characteristics; for example, feature point extraction and three-dimensional reconstruction are performed with different methods. This differentiated adaptation ensures modeling accuracy while improving computational efficiency.
3) Aiming at the characteristic of non-rigid deformation of the soft tissue, the invention adopts an off-line parameterized three-dimensional reconstruction method, thereby remarkably improving the reconstruction precision of the soft tissue in the oral cavity.
4) The invention adopts probability map optimization to screen texture information, and ensures high-quality texture mapping effect.
Aiming at the technical difficulty of soft tissue reconstruction in the three-dimensional oral modeling process, the invention provides a high-efficiency solution. The method has wide application potential in a plurality of fields such as dental diagnosis and treatment planning, oral medicine digitization, remote medical treatment, dental research and development and the like, can remarkably improve the accuracy and efficiency of dental digitization, and promotes the further development of dental technology.
Drawings
In order to make the technical solution of the present invention more clear, the drawings contained in the document will be briefly described. It should be understood that these drawings represent only a few embodiments of the present invention. Those skilled in the relevant art will readily be able to derive additional possible figures from these figures without additional inventive work.
Fig. 1 is a flow chart of texture mapping of an oral cavity three-dimensional model for soft tissue according to an embodiment of the present invention.
FIG. 2 is a diagram of a network architecture based on an improved U-Net model in accordance with an embodiment of the present invention.
FIG. 3 is a specific block diagram of a network based on an improved U-Net model in accordance with an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without any inventive effort, are intended to be within the scope of the invention.
The invention provides an oral three-dimensional model texture mapping method for soft tissue, comprising the following steps: acquiring two-dimensional images and three-dimensional point cloud data with a handheld oral three-dimensional scanner and preprocessing the data to reduce redundancy; performing panoramic fusion on the two-dimensional images and segmenting soft and hard tissue on the two-dimensional panorama; extracting feature points from the two-dimensional panorama and the three-dimensional point cloud, then matching and partitioning the point cloud based on these feature points; introducing a parameterized warped voxel grid method for the soft tissue portion of the point cloud, setting an objective function for optimization, and iteratively optimizing through regional segmented fusion so that the warped voxel grid approaches the deformation of real soft tissue; and performing texture mapping, screening textures based on probability map optimization, and applying local and global color toning to the texture information. The method overcomes the challenges posed by non-rigid soft tissue deformation and achieves high-quality oral three-dimensional model reconstruction and texture mapping, which is significant for oral disease diagnosis, has wide application value in dental diagnosis, telemedicine, virtual reality education, and other fields, and promotes the innovation and development of dental technology.
As shown in fig. 1, an embodiment of the present invention provides a texture mapping method for an oral cavity three-dimensional model for soft tissue, which includes the following steps:
Step 1), oral data acquisition and preprocessing, wherein the key point of the step is to ensure that a handheld oral three-dimensional scanner can acquire high-quality two-dimensional RGB images and three-dimensional point cloud data. By preprocessing the data collected by the hand-held oral cavity three-dimensional scanner, useless data can be removed, the data quantity is reduced, and the real-time modeling efficiency of oral cavity scanning is improved.
The preferred implementation proposed in the examples is:
Step 1.1, acquiring two-dimensional images and three-dimensional point cloud data: a handheld oral three-dimensional scanner with high measurement accuracy (preferably better than 10 micrometers) scans all teeth of the upper and lower jaws and the surrounding soft tissue, collecting two-dimensional images and three-dimensional point cloud data of the oral cavity.
Standardized oral scanning under a standard environment is preferably recommended. A standard environment means a temperature of 18-30 °C, humidity of 30-70 %, clean and dust-free equipment, a stable patient head, and a relatively dry oral cavity. Standardized scanning means scanning in a fixed order; the preferred sequence scans the outer (buccal) surfaces of the teeth, then the inner (lingual) surfaces, and then the occlusal surfaces, ensuring that the tooth surfaces and the required soft tissue are fully covered without missing any details. In practice, products such as the 3DIFY-JMO1 oral scanner can be selected.
Step 1.2, performing noise filtering, discrete point removal, and other processing on the acquired three-dimensional point cloud data.
Specifically, it is preferable to perform voxel grid filtering or statistical outlier removal to reduce the data amount and primarily remove noise, and then combine a denoising algorithm based on a geometric model to further refine the processing.
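As a concrete illustration, the filtering above can be sketched with the Open3D library; the file names and the voxel size, neighbor count, and standard deviation ratio below are illustrative choices, not values prescribed by the invention.

    # Point cloud preprocessing sketch with Open3D (parameters illustrative).
    import open3d as o3d

    pcd = o3d.io.read_point_cloud("oral_scan.ply")  # hypothetical input file

    # Voxel grid filtering: downsample to reduce the data volume.
    down = pcd.voxel_down_sample(voxel_size=0.2)

    # Statistical outlier removal: discard points whose mean neighbor
    # distance deviates strongly from the global average.
    clean, kept_idx = down.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

    o3d.io.write_point_cloud("oral_scan_clean.ply", clean)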
Step 1.3, primarily screening the images based on two-dimensional image quality. This step screens out low-quality two-dimensional images and ensures texture quality.
The embodiment further proposes primarily screening the collected two-dimensional RGB images based on sharpness, contrast, illumination uniformity, and noise level indices.
Specifically, the acquired two-dimensional RGB image is preferably converted to a grayscale image; sharpness is measured with the Laplacian operator, contrast with the standard deviation, illumination uniformity with the ratio of standard deviation to mean, and noise level with a local variance method. Weights are set for the four indices according to their influence on image quality, an overall quality score is calculated, and a threshold is then set so that images scoring above it are retained as high-quality images.
It is preferable to set the weights of the four indices to 0.4, 0.3, 0.2, and 0.1 in order, and the threshold to 0.75. The quality score is computed as

$Q = w_1 S + w_2 C + w_3 U + w_4 N$

where $S$ is the sharpness, $C$ the contrast, $U$ the illumination uniformity, $N$ the noise level, and $w_1, \dots, w_4$ the corresponding weights. If the calculated $Q$ is above the threshold, the image is considered a high-quality image.
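The scoring can be sketched as follows; the per-index normalizations (the scale constants and the inversion of the noise index so that larger is always better) are assumptions made for illustration, since the source does not specify them.

    # Image quality screening sketch (OpenCV); normalizations are assumed.
    import cv2
    import numpy as np

    def quality_score(bgr, weights=(0.4, 0.3, 0.2, 0.1)):
        gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float64)
        sharp = cv2.Laplacian(gray, cv2.CV_64F).var()      # Laplacian sharpness
        contrast = gray.std()                              # std-dev contrast
        uniform = max(0.0, 1.0 - gray.std() / (gray.mean() + 1e-9))  # std/mean
        resid = gray - cv2.GaussianBlur(gray, (5, 5), 0)
        noise = resid.var()                                # local-variance noise
        # Assumed scale factors mapping each index into [0, 1].
        s = min(sharp / 1000.0, 1.0)
        c = min(contrast / 80.0, 1.0)
        n = max(0.0, 1.0 - noise / 50.0)  # inverted: less noise scores higher
        w1, w2, w3, w4 = weights
        return w1 * s + w2 * c + w3 * uniform + w4 * n

    # Keep frames whose score exceeds the 0.75 threshold:
    # high_quality = [f for f in frames if quality_score(f) > 0.75]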
Step 1.4, eliminating redundant images based on the overlapping rate between images. This step reduces data redundancy and improves the computational efficiency of subsequent steps. The overlapping rate between adjacent frames is calculated from image feature points; if it is above the threshold, the redundant frame is removed, finally yielding the image keyframes.
Specifically, the high-quality images after primary screening are traversed frame by frame: feature points of the current keyframe and of the next (adjacent) frame are extracted, feature point matching pairs between the two frames are computed, and the overlapping rate is calculated from the number of matched feature points. If the overlapping rate is lower than or equal to the threshold, the current keyframe is retained, the adjacent frame becomes the new keyframe, and the traversal continues. After traversal, the retained images are the keyframes of the two-dimensional RGB images; they reduce redundant data while guaranteeing full coverage and improve the computational efficiency of subsequent steps.
The overlapping rate between two frames can be computed from the matched feature points, for example as $R = M / \min(N_1, N_2)$, where $M$ is the number of matched feature point pairs and $N_1$, $N_2$ are the numbers of feature points extracted from the two frames.
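A sketch of the keyframe selection loop follows, using ORB features and the hedged overlap definition above; the 0.8 threshold is illustrative, as the source does not state a value.

    # Keyframe selection sketch: drop frames that overlap the current
    # keyframe too strongly (threshold illustrative).
    import cv2

    orb = cv2.ORB_create(nfeatures=1000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def overlap_rate(img_a, img_b):
        gray_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
        gray_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)
        kp_a, des_a = orb.detectAndCompute(gray_a, None)
        kp_b, des_b = orb.detectAndCompute(gray_b, None)
        if des_a is None or des_b is None:
            return 0.0
        matches = matcher.match(des_a, des_b)
        return len(matches) / max(1, min(len(kp_a), len(kp_b)))

    def select_keyframes(frames, threshold=0.8):
        keyframes = [frames[0]]
        for frame in frames[1:]:
            # Low overlap means new coverage: promote to keyframe;
            # high overlap means redundancy: discard.
            if overlap_rate(keyframes[-1], frame) <= threshold:
                keyframes.append(frame)
        return keyframes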
Step 2), feature extraction, comprising two-dimensional image feature extraction and three-dimensional point cloud feature extraction.
The preferred implementation proposed in the examples is:
Step 2.1, extracting two-dimensional image features, divided into three sub-steps:
Step 2.1.1, carrying out illumination correction and white balance processing on the two-dimensional images obtained in step 1.4.
Step 2.1.2, aligning the images, stitching them into a large panoramic image, and segmenting teeth (hard tissue) and soft tissue on the panorama.
Step 2.1.3, adopting different algorithms for the tooth (hard tissue) part and the soft tissue part to extract feature points respectively.
Preferably, the embodiment improves the U-Net model and trains it with labeled two-dimensional images (tooth, gum, and other tissue regions labeled manually). The U-Net encoder extracts features from the input image, and the decoder uses them to classify pixels, outputting the class (e.g., soft tissue or hard tissue) of each pixel; a cross-entropy loss optimizes the model to minimize the difference between predictions and ground-truth labels. The advantages of U-Net are that end-to-end training learns features directly from raw images and outputs high-quality segmentation, while skip connections better preserve spatial resolution and provide more accurate boundaries, suiting fine segmentation of soft and hard tissue.
Referring to fig. 2, the invention further adds an attention mechanism to the U-Net model and optimizes the architecture through Attention Gates to improve segmentation accuracy. An Attention Gate has two inputs: a feature map $x$ from the encoder and a gating feature map $g$ from the decoder. Through $1 \times 1$ convolutions and a Sigmoid activation function, an attention map $\alpha$ is generated and applied by element-wise multiplication, giving each pixel of the feature map a different weight and suppressing regions with lower weights. By means of the Attention Gate, the model pays more attention to the key regions of the soft-hard tissue boundary and reduces the influence of irrelevant regions.
Referring to fig. 3, in implementation the U-Net backbone may follow the prior art; this embodiment preferably uses a 4-layer encoder with a corresponding 4-layer decoder.
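A minimal PyTorch sketch of such an attention gate is given below (in the spirit of Attention U-Net); the channel sizes are illustrative, and x and g are assumed to share spatial dimensions (g upsampled beforehand).

    # Attention gate sketch: alpha = sigmoid(psi(relu(W_x x + W_g g))).
    import torch
    import torch.nn as nn

    class AttentionGate(nn.Module):
        def __init__(self, enc_ch, dec_ch, mid_ch):
            super().__init__()
            self.w_x = nn.Conv2d(enc_ch, mid_ch, kernel_size=1)  # encoder path
            self.w_g = nn.Conv2d(dec_ch, mid_ch, kernel_size=1)  # decoder path
            self.psi = nn.Conv2d(mid_ch, 1, kernel_size=1)
            self.relu = nn.ReLU(inplace=True)
            self.sigmoid = nn.Sigmoid()

        def forward(self, x, g):
            # Per-pixel weights in (0, 1); low weights suppress regions
            # far from the soft-hard tissue boundary.
            alpha = self.sigmoid(self.psi(self.relu(self.w_x(x) + self.w_g(g))))
            return x * alpha

    # Usage: gated = AttentionGate(64, 128, 32)(enc_feat, dec_feat)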
Preferably, a classical DPM (deformable part model) is chosen for object detection and shape analysis: soft tissue images are collected, the geometric feature points of soft tissue and their number are defined, and the DPM is trained on these labeled data to learn the local deformation characteristics of soft tissue. Its advantage is that, through per-part feature descriptions and a deformation model, it adapts to soft tissue morphological variation across individuals and conditions and effectively extracts local soft tissue features.
Specifically, illumination correction is applied to the screened image keyframes through adaptive histogram equalization, and white balance is corrected under the gray-world assumption, further ensuring image quality.
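The two corrections can be sketched as follows; the CLAHE parameters are illustrative.

    # Illumination correction (CLAHE on the L channel) and gray-world
    # white balance; parameters illustrative.
    import cv2
    import numpy as np

    def correct_illumination(bgr):
        l, a, b = cv2.split(cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB))
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

    def gray_world_white_balance(bgr):
        img = bgr.astype(np.float64)
        means = img.reshape(-1, 3).mean(axis=0)   # per-channel means
        img *= means.mean() / (means + 1e-9)      # rescale toward common gray
        return np.clip(img, 0, 255).astype(np.uint8)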
Specifically, based on the feature points and matches from step 1.4, the first frame is taken as the initial panorama; each subsequent image is aligned and stitched to the current panorama using RANSAC-based image registration, and seams are eliminated with Laplacian pyramid fusion. Fusing and stitching the large panorama through an incremental Laplacian pyramid reduces computation and improves stitching efficiency.
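The registration step can be sketched with OpenCV as below; the ORB matching and the RANSAC reprojection threshold are illustrative, and the Laplacian pyramid blending is omitted for brevity.

    # Incremental registration sketch: warp each new frame into the
    # current panorama's coordinate frame via a RANSAC homography.
    import cv2
    import numpy as np

    def register_to_panorama(pano, frame):
        orb = cv2.ORB_create(nfeatures=2000)
        kp1, d1 = orb.detectAndCompute(cv2.cvtColor(pano, cv2.COLOR_BGR2GRAY), None)
        kp2, d2 = orb.detectAndCompute(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # robust to outliers
        return cv2.warpPerspective(frame, H, (pano.shape[1], pano.shape[0]))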
Specifically, the trained U-Net semantic segmentation model segments the panoramic image, yielding segmentation masks for teeth (hard tissue) and soft tissue and facilitating the subsequent differentiated treatment of soft tissue.
Specifically, the tooth (hard tissue) portion is extracted using the tooth segmentation mask. After grayscale processing, parameters such as the number of feature points are set and the ORB feature extraction algorithm extracts the tooth (hard tissue) feature points.
Specifically, the soft tissue portions are extracted based on the soft tissue segmentation mask. And extracting characteristic points from the soft tissue part by using the trained DPM model.
Step 2.2, extracting three-dimensional point cloud features. Specifically, feature point extraction for the point cloud is preferably based on PointNet++.
Step 3), performing offline parameterized three-dimensional reconstruction on the soft tissue part.
Step 3.1, matching the feature points of teeth (hard tissue) and of soft tissue with the point cloud feature points respectively, and dividing the point cloud into tooth (hard tissue) and soft tissue parts.
Preferably, a cross-modal matching Transformer model is trained to ensure the accuracy of feature point matching.
The invention further provides a preferred implementation that uses a Transformer model to fuse the two-dimensional image features and the three-dimensional point cloud features into a unified feature space for feature point matching, comprising the following steps:
First, the extracted two-dimensional image features and three-dimensional point cloud features are mapped to the same dimension through linear transformations:

$F_{2d} = W_{2d} f_{2d}, \quad F_{3d} = W_{3d} f_{3d}$

where $F_{2d}$ is the embedded two-dimensional image feature vector, $F_{3d}$ the embedded three-dimensional point cloud feature vector, $W_{2d}$ and $W_{3d}$ the linear transformation weight matrices of the two modalities, and $f_{2d}$, $f_{3d}$ the original two-dimensional image and three-dimensional point cloud feature vectors.
Second, the same-dimension two-dimensional image and three-dimensional point cloud features are concatenated to obtain a joint feature vector containing both modalities, $F = [F_{2d}; F_{3d}]$.
The joint features are mapped into queries (Q), keys (K), and values (V) using linear transformations, and the self-attention mechanism of the Transformer model computes a weight matrix representing the similarity between different features:

$A = \mathrm{softmax}\!\left(\frac{Q K^{T}}{\sqrt{d_k}}\right)$

where $A$ represents the weight matrix, $\mathrm{softmax}(\cdot)$ the normalization function, and $d_k$ the key dimension.
Finally, the similarity between two-dimensional image feature points and three-dimensional point cloud feature points is measured with cosine similarity to find the corresponding feature points.
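A compact PyTorch sketch of this fusion-and-matching pipeline follows; the dimensions, the single self-attention layer, and the argmax assignment are illustrative simplifications.

    # Cross-modal feature fusion and matching sketch.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CrossModalMatcher(nn.Module):
        def __init__(self, dim_2d=256, dim_3d=128, dim=128, heads=4):
            super().__init__()
            self.proj_2d = nn.Linear(dim_2d, dim)  # plays the role of W_2d
            self.proj_3d = nn.Linear(dim_3d, dim)  # plays the role of W_3d
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

        def forward(self, f2d, f3d):
            # Map both modalities to a shared dimension, then concatenate.
            e2d, e3d = self.proj_2d(f2d), self.proj_3d(f3d)
            joint = torch.cat([e2d, e3d], dim=1)        # (B, N2 + N3, dim)
            fused, _ = self.attn(joint, joint, joint)   # self-attention
            g2d, g3d = fused[:, :e2d.shape[1]], fused[:, e2d.shape[1]:]
            # Cosine similarity between every 2D and every 3D feature point.
            sim = F.cosine_similarity(g2d.unsqueeze(2), g3d.unsqueeze(1), dim=-1)
            return sim.argmax(dim=-1)  # best 3D match index per 2D point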
The two-dimensional image characteristic points of the teeth (hard tissues) and the soft tissues are respectively matched with the point cloud, and the point cloud is divided into two parts of the soft tissues and the hard tissues, so that the special treatment of the soft tissues in the subsequent steps is realized.
Step 3.2, converting the point cloud into a voxel grid and introducing a parameterized warped voxel grid method for the soft tissue point cloud. Deformation parameters are introduced for each voxel of the soft tissue portion, whose fine deformation is described by per-voxel translation, rotation, and scaling; the deformation parameters are solved with an optimization algorithm.
Preferably, a reasonable voxel grid size is set to balance the speed of the optimization process against the accuracy of its result; a size of 0.5-1 mm is recommended and can be adjusted flexibly according to point cloud density and required precision.
Specifically, the point cloud is voxelized using the VoxelGrid class in the PCL library, dividing it into voxel grids of a specified size, and a parameterized warped voxel grid method is introduced: deformation parameters are introduced for each voxel of the soft tissue portion to accommodate the deformation of the non-rigid region. The deformation parameters comprise a translation vector, a rotation matrix, and a scaling factor, and the deformed voxel position is defined as

$v' = s R v + t$

where $v'$ is the position of the deformed voxel, $v$ the position of the initial voxel, $t$ the translation vector, $R$ a $3 \times 3$ rotation matrix, and $s$ the scaling factor.
The distance between the voxel positions and the corresponding point cloud is defined as the distance error objective function

$E_{\mathrm{dist}} = \sum_i \lVert v'_i - p_i \rVert^2$

where $v'_i$ is a voxel position in the voxel grid and $p_i$ its corresponding point in the point cloud; by adjusting the deformation of the voxel grid, each voxel is brought as close as possible to the actual position of the point cloud.
The smoothness of deformation parameters between adjacent voxels is defined as the smoothness constraint objective function

$E_{\mathrm{smooth}} = \sum_{(i,j)} \lVert \theta_i - \theta_j \rVert^2$

where $\theta_i$ and $\theta_j$ are the deformation parameters of adjacent voxels $i$, $j$ in the voxel grid; minimizing the difference of deformation parameters between adjacent voxels prevents local over-distortion or discontinuity of the grid and ensures stable and natural deformation.
The deformation parameters are preliminarily solved with a nonlinear least squares method.
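A sketch of such a solve for one region (treated as an approximate rigid body, anticipating step 3.3) is given below; the 7-parameter layout (translation, rotation vector, scale) and the smoothness weight lam are illustrative.

    # Nonlinear least squares over one region's deformation parameters.
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def residuals(theta, voxels, targets, neighbor_theta, lam=0.1):
        t, rotvec, s = theta[:3], theta[3:6], theta[6]
        R = Rotation.from_rotvec(rotvec).as_matrix()
        # Distance error: deformed voxels vs. corresponding cloud points.
        dist = (s * voxels @ R.T + t - targets).ravel()
        # Smoothness: stay close to the neighboring region's parameters.
        smooth = lam * (theta - neighbor_theta)
        return np.concatenate([dist, smooth])

    def solve_region(voxels, targets, neighbor_theta):
        theta0 = np.zeros(7)
        theta0[6] = 1.0  # identity deformation (zero motion, unit scale)
        return least_squares(residuals, theta0,
                             args=(voxels, targets, neighbor_theta)).x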
Step 3.3, clustering the soft tissue part into a plurality of small regions. Each small region is treated as an approximate rigid body, and its deformation parameters are optimized against the objective functions.
Specifically, based on the K-Means algorithm with uniformly selected initial cluster centers, adjacent voxels of the soft tissue part are grouped into small regions according to inter-voxel distance and voxel deformation parameters. The grouping minimizes the variance of deformation parameters within each region as far as possible, which further ensures the continuity of voxel grid deformation and better simulates the natural deformation of soft tissue. Each region is treated as an approximate rigid body and its deformation parameters are reinitialized; for each region, the parameters are optimized with nonlinear least squares against the distance error and smoothness constraint objectives.
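The region partition can be sketched as below, clustering jointly on voxel centers and current deformation parameters; the region count and the parameter scaling factor beta are illustrative.

    # K-Means partition of soft-tissue voxels into approximate rigid regions.
    import numpy as np
    from sklearn.cluster import KMeans

    def partition_voxels(centers, thetas, n_regions=50, beta=0.5):
        # centers: (N, 3) voxel centers; thetas: (N, 7) deformation params.
        feats = np.hstack([centers, beta * thetas])
        labels = KMeans(n_clusters=n_regions, n_init=10).fit_predict(feats)
        return labels  # voxels sharing a label form one near-rigid region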
Step 3.4, repeating step 3.3 for iterative optimization in a segmented fusion manner: on the same principle, small regions are locally fused into larger regions until all voxel grids of the soft tissue part are fused into one region. Each voxel grid retains its translation, rotation, and scaling deformation at every iteration as the initial input for the next iteration.
Preferably, a reasonable segmented fusion step length is set to balance optimization speed against accuracy; a step of 2 mm is suggested, i.e., the fused region's side length is about 2 mm longer than that of the regions before fusion, and it can be adjusted flexibly according to point cloud density and required precision.
Step 4), mapping the texture onto the oral three-dimensional model based on the two-dimensional panoramic image and the triangular mesh model.
Step 4.1, converting the voxel grid model into a triangular mesh model and mapping texture data onto it.
Specifically, the voxel grid model is converted into a triangular mesh model using the Open3D library. Texture coordinates are calculated for each triangular patch from the feature point matching result of step 3.1, and the pixel coordinates of the texture image are mapped onto the vertices of the triangular mesh using UV mapping.
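One concrete route for the mesh conversion is sketched below; the source names Open3D, and the marching cubes step from scikit-image is an assumption added here to make the sketch complete, not the patent's stated procedure.

    # Voxel model to triangle mesh via marching cubes, wrapped in Open3D.
    import numpy as np
    import open3d as o3d
    from skimage import measure

    def voxels_to_mesh(occupancy, voxel_size):
        # occupancy: 3D float array, > 0.5 inside the soft-tissue surface.
        verts, faces, _, _ = measure.marching_cubes(occupancy, level=0.5)
        mesh = o3d.geometry.TriangleMesh(
            o3d.utility.Vector3dVector(verts * voxel_size),
            o3d.utility.Vector3iVector(faces))
        mesh.compute_vertex_normals()
        return mesh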
Step 4.2, evaluating the texture detail of the model based on color variance, normalizing the variance into probability values to generate a probability map, and setting a threshold on the probability map to retain high-quality texture information.
Preferably, a reasonable probability threshold is set to balance texture quality and continuity; a threshold of 0.6-0.85 is suggested and can be fine-tuned according to texture quality requirements. Specifically, for each triangular patch, the color variance of its corresponding texture region is calculated as

$\sigma_j^2 = \frac{1}{n_j} \sum_{k=1}^{n_j} \lVert c_{j,k} - \bar{c}_j \rVert^2$

where $\bar{c}_j$ is the average color value of the $j$-th triangular patch, $c_{j,k}$ the color value of the $k$-th sample point in the $j$-th patch, $n_j$ the number of sample points in the patch's texture region, $k$ the sample point index, and $j$ the triangular patch index; the larger the variance, the richer the texture detail. The color variance is normalized to generate a probability map representing the detail richness of each texture region:

$P_j = \frac{\sigma_j^2 - \sigma_{\min}^2}{\sigma_{\max}^2 - \sigma_{\min}^2}$

where $\sigma_{\max}^2$ is the maximum color variance, $\sigma_{\min}^2$ the minimum color variance, and $P_j$ the probability value of the $j$-th triangular patch.
In the implementation, a threshold value can be set according to the probability map, a texture region with a probability value higher than the threshold value is reserved, and a texture region with a probability value lower than the threshold value is removed, so that a texture mapping result with high quality and rich details is obtained rapidly.
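The screening can be sketched as follows, assuming the per-patch sample colors have already been gathered from the UV-mapped texture; the 0.7 threshold is an illustrative value within the suggested 0.6-0.85 range.

    # Probability-map screening sketch over per-patch sampled colors.
    import numpy as np

    def probability_map(patch_colors):
        # patch_colors: list of (n_j, 3) arrays of sampled RGB values.
        var = np.array([((c - c.mean(axis=0)) ** 2).sum(axis=1).mean()
                        for c in patch_colors])
        return (var - var.min()) / (var.max() - var.min() + 1e-12)

    def keep_patches(patch_colors, threshold=0.7):
        p = probability_map(patch_colors)
        return np.nonzero(p >= threshold)[0]  # indices of retained patches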
Step 4.3, carrying out local and global color toning on the screened texture information to obtain a globally uniform model texture mapping result.
Preferably, a suitable toning method is selected according to the illumination conditions and tissue characteristics of the oral environment, and the weights of local and global toning are adjusted flexibly for the best visual effect. For example, for locally lit regions in the oral cavity, adaptive histogram equalization enhances local brightness and contrast while avoiding global over-enhancement; for tooth highlight regions, brightness stretching restores detail; and for soft tissue such as gums, adaptive contrast enhancement makes detail more visible while avoiding over-exposure. Under uneven global illumination, the weight of global toning is raised appropriately to guarantee overall brightness and contrast, whereas in detail-rich regions the weight of local toning is increased to strengthen local detail.
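A minimal sketch of weighted local/global toning is given below, blending CLAHE (local) with global histogram equalization; the fixed blend weight stands in for the adaptive weighting described above and is an illustrative simplification.

    # Weighted blend of local (CLAHE) and global (histogram equalization)
    # toning on a grayscale texture channel; weight illustrative.
    import cv2
    import numpy as np

    def blend_toning(gray, w_local=0.6):
        local_eq = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(gray)
        global_eq = cv2.equalizeHist(gray)
        out = (w_local * local_eq.astype(np.float64)
               + (1.0 - w_local) * global_eq.astype(np.float64))
        return np.clip(out, 0, 255).astype(np.uint8)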
In particular, those skilled in the art can implement the above process by using software technology. Accordingly, it is also within the scope of the present invention to provide an oral three-dimensional model texture mapping scheme for soft tissue, comprising a computer or server, on which the above process is performed to perform the oral three-dimensional model texture mapping for soft tissue.
In another embodiment, an electronic device is provided that includes at least one processor and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of texture mapping of an oral three-dimensional model for soft tissue described above.
In another embodiment, a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the above-described method of texture mapping of an oral three-dimensional model for soft tissue is also provided.
In another embodiment, a computer program product is also disclosed, comprising a computer program, characterized in that the computer program, when executed by a processor, implements the above-mentioned method for texture mapping of an oral cavity three-dimensional model for soft tissue.
The apparatus embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment, and those of ordinary skill in the art can understand and implement them without undue burden.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general hardware platform, or of course by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially, or in the part contributing to the prior art, in the form of a software product, which may be stored in a computer-readable storage medium such as ROM/RAM, a magnetic disk, or an optical disk, and which includes several instructions for causing a computer device (a personal computer, a server, a network device, etc.) to execute the method described in the respective embodiments or in some parts of the embodiments.
The foregoing is merely illustrative of the present invention, and the scope of the present invention is not limited thereto; any person skilled in the art can easily conceive of variations or substitutions within the scope of the present invention, and therefore the scope of the present invention shall be defined by the appended claims.

Claims (10)

CN202510525121.6A | Priority: 2025-04-24 | Filed: 2025-04-24 | Texture mapping method and equipment for oral cavity three-dimensional model aiming at soft tissues | Pending | CN120374818A (en)

Priority Applications (1)

Application Number: CN202510525121.6A | Priority Date: 2025-04-24 | Filing Date: 2025-04-24 | Publication: CN120374818A (en) | Title: Texture mapping method and equipment for oral cavity three-dimensional model aiming at soft tissues

Applications Claiming Priority (1)

Application Number: CN202510525121.6A | Priority Date: 2025-04-24 | Filing Date: 2025-04-24 | Publication: CN120374818A (en) | Title: Texture mapping method and equipment for oral cavity three-dimensional model aiming at soft tissues

Publications (1)

Publication Number: CN120374818A | Publication Date: 2025-07-25

Family

ID=96442991

Family Applications (1)

Application Number: CN202510525121.6A | Status: Pending | Publication: CN120374818A (en) | Priority Date: 2025-04-24 | Filing Date: 2025-04-24 | Title: Texture mapping method and equipment for oral cavity three-dimensional model aiming at soft tissues

Country Status (1)

Country: CN | CN120374818A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
CN120597577A (*) | Priority: 2025-08-07 | Published: 2025-09-05 | 珠海新茂义齿科技有限公司 | Digital base model design method based on jaw relation



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
