CN120672973A - Auxiliary analysis method and system for cosmetic surgery based on three-dimensional facial digital measurement - Google Patents

Auxiliary analysis method and system for cosmetic surgery based on three-dimensional facial digital measurement

Info

Publication number
CN120672973A
Authority
CN
China
Prior art keywords
image
data set
pixel
dataset
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202511190033.1A
Other languages
Chinese (zh)
Inventor
刘欢
文婷
邹俊峰
张霖
曹日芳
叶颖
周升
王明伟
郑承凤
谢俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changsha Meilai Medical Beauty Hospital Co ltd
Original Assignee
Changsha Meilai Medical Beauty Hospital Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changsha Meilai Medical Beauty Hospital Co ltd
Priority to CN202511190033.1A
Publication of CN120672973A
Legal status: Pending

Abstract

The invention discloses a cosmetic surgery auxiliary analysis method and system based on three-dimensional surface shape digital measurement. A medical image data set is acquired and denoised with a Gaussian filtering algorithm: if the difference between a pixel's gray value and its neighborhood mean exceeds a preset threshold, the pixel is judged to be a noise point and smoothed, yielding a first image data set. An energy function is constructed from the first image data set, and defect regions in the image are filled and repaired through superposition of single-point energy items and adjacent-point interaction energy items, yielding a second image data set. Point cloud data are extracted from the second image data set and a gradient field is constructed; if the point cloud density falls below a preset threshold, local densification is performed, and a three-dimensional face model is generated. Grid vertex coordinates and normal vector information of the model are then obtained, and an illumination intensity formula fuses the ambient light, diffuse reflection and specular reflection components into a visualized image, improving the accuracy and efficiency of medical image processing and three-dimensional visualization.

Description

Cosmetic shaping auxiliary analysis method and system based on three-dimensional surface shape digital measurement
Technical Field
The invention relates to the technical field of cosmetic and plastic surgery, and particularly discloses a cosmetic surgery auxiliary analysis method and system based on three-dimensional surface shape digital measurement.
Background
With the rapid development of the medical cosmetology industry, the demands of facial plastic surgery for accuracy and individualization are steadily increasing, and three-dimensional digital technology has become an indispensable core support of the modern cosmetic and plastic field. Traditional facial analysis mainly depends on two-dimensional photographs and the surgeon's experience; it can hardly capture the three-dimensional structural characteristics of the face accurately, and it lacks quantitative evaluation standards. Existing three-dimensional reconstruction techniques can acquire the spatial information of the face, but they have obvious shortcomings in data processing completeness and visual intuitiveness.
The key challenges faced by current face three-dimensional digital measurement techniques stem from the complex processing requirements of multi-source medical image data. The tomographic data generated by the medical imaging equipment contains a large amount of noise interference and data missing, and the original data cannot be directly used for three-dimensional model construction and must undergo a complex preprocessing flow. Insufficient data preprocessing directly affects the accuracy of subsequent three-dimensional reconstruction, and when noise points, hole defects or irregular grids exist in the original point cloud data, the geometric accuracy of the face model can be reduced. The lack of geometric precision further restricts the authenticity of the visual effect, and when the traditional surface drawing method and the traditional volume data drawing method process complex facial structures, the detail fidelity and the integral visual effect cannot be simultaneously considered, so that a doctor is difficult to obtain a visual and accurate three-dimensional visual reference.
The facial cosmetic and plastic application scene has special requirements on the interactivity and parameter adjustability of the three-dimensional model. The physician needs to be able to adjust the display parameters, material properties and viewing angle direction of the model in real time in order to view facial features from different angles and to formulate a surgical plan. However, the existing system has limitations in terms of smoothness of interaction operation and refinement degree of parameter adjustment, and lacks a visual interface and an operation tool designed for the requirements of beauty and plastic profession.
Therefore, how to construct an integrated multisource medical image data processing, high-precision three-dimensional model reconstruction and specialized visual interaction facial cosmetic shaping auxiliary analysis system, and to realize a complete technical chain from original image data to an operable three-dimensional model, becomes a key problem for promoting the digital development of cosmetic shaping.
Disclosure of Invention
The invention provides a cosmetic and plastic auxiliary analysis method and a cosmetic and plastic auxiliary analysis system based on three-dimensional surface shape digital measurement, which aim to solve at least one defect in the prior art.
One aspect of the invention relates to a cosmetic and plastic auxiliary analysis method based on three-dimensional surface shape digital measurement, comprising the following steps:
acquiring a medical image data set and performing noise processing with a Gaussian filtering algorithm: if the difference between a pixel's gray value and its neighborhood mean exceeds a preset threshold, the pixel is judged to be a noise point and smoothed, obtaining a first image data set;
constructing an energy function according to the first image data set, and filling and repairing a defect area in the image through superposition calculation of a single-point energy item and an adjacent point interaction energy item to obtain a second image data set;
extracting point cloud data from the second image data set and constructing a gradient field; if the point cloud density is lower than a preset threshold, performing local densification to generate a three-dimensional face model;
acquiring grid vertex coordinates and normal vector information of a three-dimensional face model, and generating a visual image by fusing an ambient light component, a diffuse reflection component and a specular reflection component through an illumination intensity calculation formula;
Establishing a parameter mapping matrix to carry out range constraint on transformation parameters input by a user, automatically correcting if the rotation angle exceeds a preset range, and calculating the updated grid vertex position through a coordinate transformation formula;
and updating the visualized image in real time according to the transformation parameters and the material attribute parameters, and outputting a facial feature analysis result.
Further, the step of obtaining a medical image data set and performing noise processing by adopting a gaussian filtering algorithm, and if the difference between the pixel gray value and the neighborhood mean value exceeds a preset threshold value, determining the pixel gray value as a noise point and performing smoothing processing to obtain a first image data set comprises the following steps:
Acquiring a medical image data set, extracting a pixel gray value of each pixel from the medical image data set by adopting pixel gray value analysis, and obtaining a first gray difference value set by calculating a difference value between the pixel gray value and a neighborhood pixel gray average value;
noise detection is carried out according to the first gray level difference value set, if the difference value between the gray level value of the pixel and the neighborhood mean value exceeds a preset threshold value, the pixel is judged to be a noise point, and a noise point position set is obtained;
smoothing the noise point position set by adopting a Gaussian filter algorithm, and adjusting the pixel gray value by adopting the Gaussian filter algorithm to the noise point to obtain a second image set;
And generating a data set from the second image set, and generating a first image data set by storing the processed data of the second image set.
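The noise-detection and smoothing steps above can be sketched in a few lines. This is a minimal illustration rather than the patented implementation; the 3×3 window, σ = 1 and the threshold value of 40 are assumed for demonstration only.

```python
import math

def gaussian_kernel(size=3, sigma=1.0):
    """Build a normalized 2-D Gaussian kernel."""
    c = size // 2
    k = [[math.exp(-((x - c) ** 2 + (y - c) ** 2) / (2 * sigma ** 2))
          for x in range(size)] for y in range(size)]
    s = sum(sum(row) for row in k)
    return [[v / s for v in row] for row in k]

def denoise(image, threshold=40.0, sigma=1.0):
    """Replace each interior pixel whose gray value deviates from its
    3x3 neighborhood mean by more than `threshold` (a noise point)
    with a Gaussian-weighted average of that neighborhood."""
    h, w = len(image), len(image[0])
    kernel = gaussian_kernel(3, sigma)
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neigh = [image[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            mean = sum(neigh) / 9.0
            if abs(image[y][x] - mean) > threshold:  # noise-point test
                out[y][x] = sum(kernel[dy + 1][dx + 1] * image[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    return out
```

A single bright outlier in an otherwise flat region is pulled back toward its neighborhood, while pixels within the threshold are left untouched, which is the edge-preserving behavior the method claims.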
Further, constructing an energy function according to the first image data set, filling and repairing a defect area in the image through superposition calculation of the single-point energy item and the adjacent point interaction energy item, and obtaining the second image data set comprises the following steps:
Acquiring pixel gray values and texture features of pixel points from a first image data set, partitioning an image by adopting a region segmentation algorithm, and determining a region containing defects by calculating the average value of the texture features and the distribution of the pixel gray values of each region to obtain a defect region set;
extracting boundary contours of the defect areas by adopting a boundary detection algorithm aiming at the defect area sets, and determining the boundary contour sets by calculating pixel gray value gradients of boundary pixel points;
Constructing an energy function according to the boundary contour set and the texture characteristics to obtain an energy distribution set;
And smoothing the energy distribution set by adopting a Gaussian filter algorithm, and filling and repairing by adjusting the pixel gray values of the pixel points in the defect area and combining with texture features to generate a second image data set.
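A minimal sketch of the energy-minimization idea behind the filling step: if the adjacent-point interaction energy item is taken as a squared difference between neighboring pixels, minimizing the total energy over the defect region reduces to repeatedly replacing each defect pixel with the mean of its 4-neighbors (the single-point item is enforced implicitly by keeping known pixels fixed). The function name and iteration count are illustrative, not from the patent.

```python
def inpaint(image, mask, iterations=200):
    """Fill pixels where mask is True by minimizing a smoothness
    energy E = sum over adjacent pairs of (I_p - I_q)^2: each defect
    pixel is iteratively replaced by the mean of its 4-neighbors."""
    h, w = len(image), len(image[0])
    img = [row[:] for row in image]
    for _ in range(iterations):
        for y in range(h):
            for x in range(w):
                if mask[y][x]:
                    neigh = []
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            neigh.append(img[ny][nx])
                    img[y][x] = sum(neigh) / len(neigh)
    return img
```

On a smooth gradient image, a corrupted pixel converges to the value its neighborhood implies, giving the seamless fill described above.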
Further, the step of extracting point cloud data from the second image data set and constructing a gradient field, and, if the point cloud density is lower than a preset threshold, performing local densification to generate the three-dimensional face model, includes:
acquiring pixel gray values and depth information of pixel points in the second image data set, and generating point cloud data containing three-dimensional space coordinates by a stereoscopic vision algorithm to obtain a point cloud data set;
calculating the number of points per unit volume in the point cloud data set and, if the point cloud density is lower than the preset threshold, locally densifying the low-density regions with an interpolation algorithm to obtain a densified point cloud data set;
calculating the pixel gray value gradient of each point in the densified point cloud data set, and constructing a gradient field describing the surface variation by a gradient descent algorithm to obtain a gradient field data set;
and generating a triangular grid from the gradient field data set and the densified point cloud data set with a grid generation algorithm, and constructing a three-dimensional face model by combining boundary contours and normal vectors to obtain a three-dimensional face model data set.
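The density check and local densification (rendered as "local encryption" in the machine translation; 加密 here means refinement) can be sketched as follows. The sphere-based density estimate and the midpoint-interpolation rule are simple illustrative choices, not the patent's actual interpolation algorithm.

```python
import math

def local_density(points, center, radius):
    """Points per unit volume inside a sphere around `center`."""
    r2 = radius ** 2
    inside = sum(1 for p in points
                 if sum((a - b) ** 2 for a, b in zip(p, center)) <= r2)
    volume = 4.0 / 3.0 * math.pi * radius ** 3
    return inside / volume

def densify(points, max_gap):
    """If a point's nearest neighbour is farther than `max_gap`,
    insert the midpoint between them (one interpolation step)."""
    new_pts = list(points)
    for i, p in enumerate(points):
        best, best_d = None, float("inf")
        for j, q in enumerate(points):
            if i == j:
                continue
            d = math.dist(p, q)
            if d < best_d:
                best, best_d = q, d
        if best is not None and best_d > max_gap:
            new_pts.append(tuple((a + b) / 2 for a, b in zip(p, best)))
    return new_pts
```

Repeating the `densify` pass halves the largest gaps each time, which is how sparse low-curvature regions would be filled in before meshing.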
Further, the step of obtaining mesh vertex coordinates and normal vector information of the three-dimensional face model, and fusing an ambient light component, a diffuse reflection component and a specular reflection component through an illumination intensity calculation formula to generate a visualized image includes:
obtaining a grid vertex data set of the three-dimensional face model, and calculating three-dimensional coordinates and normal vectors of each grid vertex by vector operation to obtain a grid vertex attribute data set;
According to the grid vertex attribute data set and the preset light source position, calculating the ambient light component of each grid vertex by adopting an illumination model, and combining the reflection coefficient of the surface material to obtain an ambient light intensity data set;
if the included angle between the normal vector in the grid vertex attribute data set and the light source direction is smaller than a preset threshold, calculating the diffuse reflection component with the diffuse reflection formula I_d = k_d (N·L), where I_d represents the diffuse reflection intensity, k_d the diffuse reflection coefficient, N the grid vertex normal vector and L the light source direction vector, to obtain a diffuse reflection intensity data set;
calculating the specular reflection component with the specular reflection formula I_s = k_s (R·V)^n from the diffuse reflection intensity data set and the viewing direction, and fusing the ambient light intensity data set and the diffuse reflection intensity data set to generate a visualized image data set, where I_s represents the specular reflection intensity, k_s the specular reflection coefficient, R the reflection vector, V the viewing vector and n the highlight (shininess) exponent.
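The fused intensity I = k_a·I_a + k_d(N·L) + k_s(R·V)^n is the classic Phong illumination model; a minimal sketch follows. The coefficient values and shininess exponent are assumptions for demonstration, not values from the patent.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_intensity(normal, light_dir, view_dir,
                    k_a=0.1, k_d=0.6, k_s=0.3, shininess=16, ambient=1.0):
    """I = k_a*I_a + k_d*(N.L) + k_s*(R.V)^n, with the dot products
    clamped at zero, matching the diffuse and specular formulas above."""
    N, L, V = normalize(normal), normalize(light_dir), normalize(view_dir)
    diffuse = k_d * max(dot(N, L), 0.0)
    nl = dot(N, L)
    # reflection of the light direction about the normal: R = 2(N.L)N - L
    R = tuple(2 * nl * Nc - Lc for Nc, Lc in zip(N, L))
    specular = k_s * max(dot(R, V), 0.0) ** shininess
    return k_a * ambient + diffuse + specular
```

With the light and viewer both along the surface normal the three components sum to their maximum; at grazing light angles only the ambient component remains, which is the shading behavior the visualization step relies on.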
Further, establishing a parameter mapping matrix to perform range constraint on transformation parameters input by a user, automatically correcting if the rotation angle exceeds a preset range, and calculating the updated grid vertex position through a coordinate transformation formula, wherein the step of calculating the updated grid vertex position comprises the following steps:
acquiring a transformation parameter data set input by a user, and constructing a parameter mapping matrix through matrix operation to obtain an initial transformation matrix, wherein the transformation parameter data set comprises a rotation angle, a translation amount and a scaling;
Judging whether the rotation angle in the initial transformation matrix exceeds a preset angle threshold value, and if so, correcting the rotation angle by adopting a linear interpolation method to obtain a corrected transformation matrix;
calculating the grid vertex data set of the three-dimensional face model with the coordinate transformation formula T(v) = M·v to obtain an updated grid vertex data set, where T(v) represents the transformed grid vertex coordinates, M the corrected transformation matrix and v the original grid vertex coordinates;
And generating grids of the three-dimensional face model by combining the updated grid vertex data set with preset rendering parameters to obtain a visualized grid vertex position data set.
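The range constraint and vertex update T(v) = M·v can be sketched as below. A rotation about the z-axis and the 15° limit (taken from the mandibular-angle example later in the text) are illustrative assumptions; the patent does not fix the axis or the threshold.

```python
import math

def clamp_angle(angle_deg, limit=15.0):
    """Constrain a rotation angle to [-limit, limit] degrees,
    the automatic correction applied to out-of-range input."""
    return max(-limit, min(limit, angle_deg))

def transform_vertices(vertices, angle_deg, translation):
    """Apply T(v) = M.v, with M a z-axis rotation followed by a
    translation; the angle is range-constrained first."""
    a = math.radians(clamp_angle(angle_deg))
    ca, sa = math.cos(a), math.sin(a)
    tx, ty, tz = translation
    out = []
    for x, y, z in vertices:
        out.append((ca * x - sa * y + tx,
                    sa * x + ca * y + ty,
                    z + tz))
    return out
```

A user request of 30° is silently corrected to 15° before the matrix is built, so no vertex can be moved beyond the constrained range.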
Further, the step of updating the visualized image in real time according to the transformation parameters and the material attribute parameters and outputting the facial feature analysis result comprises the following steps:
acquiring the transformation parameter data set and the material attribute data set input by the user, and constructing an initial transformation matrix and an initial material mapping matrix through matrix operations;
if the parameters in the initial transformation matrix exceed the preset range, correcting the transformation parameters by adopting a linear interpolation method to obtain a corrected transformation matrix;
calculating the grid vertices of the three-dimensional face model from the corrected transformation matrix with the coordinate transformation formula T(v) = M·v to obtain an updated grid vertex data set, where T(v) represents the transformed grid vertex coordinates, M the corrected transformation matrix and v the original grid vertex coordinates;
updating the initial material mapping matrix with a material mapping algorithm according to the material attribute data set to obtain an updated material mapping matrix;
generating real-time updated visual image data through the updated grid vertex data set and the updated material mapping matrix;
Extracting facial feature points from the visual image data updated in real time by adopting a facial feature extraction algorithm to obtain a facial feature point set;
and calculating the relative position relation between the facial feature points by adopting a geometric analysis method according to the facial feature point set to obtain a facial feature analysis result.
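One simple geometric analysis of relative positions between facial feature points is a left-right asymmetry score over mirrored landmark pairs; this sketch is an assumed metric for illustration, as the patent does not specify which geometric relations are computed.

```python
def asymmetry_score(landmarks, midline_x):
    """Mean horizontal asymmetry of mirrored landmark pairs.
    `landmarks` is a list of (left_point, right_point) tuples, each
    point an (x, y) coordinate; a perfectly symmetric face scores 0."""
    diffs = []
    for (lx, ly), (rx, ry) in landmarks:
        # distance of each landmark from the facial midline
        dl = midline_x - lx
        dr = rx - midline_x
        diffs.append(abs(dl - dr) + abs(ly - ry))
    return sum(diffs) / len(diffs)
```

A pair equidistant from the midline at the same height contributes zero; offsets in either axis accumulate into the score, which could feed the quantitative asymmetry index mentioned in the beneficial effects.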
Another aspect of the present invention relates to a cosmetic shaping auxiliary analysis system based on three-dimensional surface shape digital measurement, for implementing the cosmetic shaping auxiliary analysis method based on three-dimensional surface shape digital measurement, comprising:
The first acquisition module is used for acquiring a medical image data set and performing noise processing with a Gaussian filtering algorithm: if the difference between a pixel's gray value and its neighborhood mean exceeds a preset threshold, the pixel is judged to be a noise point and smoothed, obtaining the first image data set;
The second acquisition module is used for constructing an energy function according to the first image data set, filling and repairing a defect area in the image through superposition calculation of a single-point energy item and an adjacent point interaction energy item, and obtaining a second image data set;
The first generation module is used for extracting point cloud data from the second image data set and constructing a gradient field, and, if the point cloud density is lower than a preset threshold, performing local densification to generate a three-dimensional face model;
The second generation module is used for acquiring grid vertex coordinates and normal vector information of the three-dimensional face model, and generating a visual image by fusing an ambient light component, a diffuse reflection component and a specular reflection component through an illumination intensity calculation formula;
the calculation module is used for establishing a parameter mapping matrix to carry out range constraint on transformation parameters input by a user, automatically correcting the transformation parameters if the rotation angle exceeds a preset range, and calculating the updated grid vertex position through a coordinate transformation formula;
and the output module is used for updating the visual image in real time according to the transformation parameters and the material attribute parameters and outputting a facial feature analysis result.
Further, the first acquisition module includes:
The first acquisition unit is used for acquiring a medical image data set and adopting pixel gray value analysis, extracting the pixel gray value of each pixel from the medical image data set, and obtaining a first gray difference value set by calculating the difference value between the pixel gray value and the neighborhood pixel gray average value;
the second acquisition unit is used for carrying out noise detection according to the first gray level difference value set, and if the difference value between the gray level value of the pixel and the neighborhood mean value exceeds a preset threshold value, the pixel is judged to be a noise point, and a noise point position set is obtained;
The third acquisition unit is used for smoothing the noise point position set by adopting a Gaussian filter algorithm, and adjusting the pixel gray value by applying the Gaussian filter algorithm to the noise point to obtain a second image set;
and the first generation unit is used for generating a data set from the second image set and generating a first image data set by saving the processed data of the second image set.
Further, the second acquisition module includes:
A fourth obtaining unit, configured to obtain pixel gray values and texture features of pixel points from the first image data set, partition the image with a region segmentation algorithm, and determine the regions containing defects by calculating the texture feature average and pixel gray value distribution of each region, obtaining a defect region set;
a fifth obtaining unit, configured to extract the boundary contours of the defect regions with a boundary detection algorithm and determine the boundary contour set by calculating the pixel gray value gradients of the boundary pixel points;
a sixth obtaining unit, configured to construct an energy function from the boundary contour set and the texture features, obtaining an energy distribution set;
and a second generation unit, configured to smooth the energy distribution set with a Gaussian filtering algorithm and perform filling and repair by adjusting the pixel gray values of the pixel points in the defect region in combination with the texture features, generating a second image data set.
The beneficial effects obtained by the invention are as follows:
The invention provides a cosmetic surgery auxiliary analysis method and system based on three-dimensional surface shape digital measurement. Medical image data sets are denoised by Gaussian filtering; an energy function is constructed to repair defect regions; point cloud data are extracted and a gradient field constructed to generate a three-dimensional face model; and grid vertex coordinates and normal vector information of the model are obtained and multiple illumination components fused to generate a visualized image. The invention also establishes a parameter mapping matrix to constrain user-input transformation parameters, updates grid vertex positions through coordinate transformation, updates the visualized image in real time according to the transformation parameters and material properties, and outputs a facial feature analysis result. The method realizes construction and visualization from medical image data to a three-dimensional facial model, supports interactive user operation, can be widely applied to medical image analysis, facial modeling and related fields, and improves the precision and efficiency of medical image processing and three-dimensional visualization. Specifically, it has the following beneficial effects:
1. Improved precision of image preprocessing:
1. Robust noise suppression: through the Gaussian filtering algorithm and dynamic threshold judgment (smoothing when the difference between the pixel gray value and the neighborhood mean exceeds the preset threshold), interference such as salt-and-pepper noise and Gaussian noise in medical images can be effectively removed, making facial tissue boundaries clearer;
compared with traditional median filtering, Gaussian filtering suppresses noise while retaining edge details; it is particularly suitable for preserving fine structures such as facial skin texture and pores, avoiding the feature distortion caused by over-smoothing;
2. Intelligent defect repair: defect filling based on the energy function (superposition of single-point energy items and adjacent-point interaction energy items) can automatically repair local defects in the image such as light spots, scratches, acne pits and scars, so that the filled region joins seamlessly with the pixel gray values and texture features of the surrounding tissue;
application value: three-dimensional modeling errors caused by defects in the original images are avoided; for example, repairing a scar at the nasal ala lets subsequent rhinoplasty simulation fit the real facial structure more closely.
2. Detail enhancement of three-dimensional modeling:
1. Adaptive optimization of point cloud data: gradient field analysis and local densification (automatic densification when the point cloud density falls below the threshold) enhance the point cloud density at key facial features (such as the eye corners, lip lines and nose bridge), solving the point cloud sparsity of traditional laser scanning in low-curvature regions (such as the cheeks);
data comparison: in regions with large curvature change such as the nose tip, the point cloud density can be increased from 50 points/cm² in the traditional method to 200 points/cm², reducing the model surface error to within 0.1 mm;
2. Realistic physical illumination simulation: an illumination model combining ambient, diffuse and specular components (such as the Phong illumination model) can faithfully restore the optical characteristics of facial skin (such as the oily sheen of the forehead and the matte texture of the cheeks), avoiding the plastic look of traditional three-dimensional models;
clinical value: doctors can assess facial three-dimensionality through light-and-shadow changes, for example whether the light-shadow transition after apple-cheek filling is natural, reducing the deviation between postoperative and expected effects.
3. Safety and interaction efficiency of parameter control:
1. Constraint mechanism for transformation parameters: the parameter mapping matrix limits the ranges of rotation angles (such as the zygomatic arch inward-push angle and mandibular angle rotation amplitude), translation distances (such as the nasal bridge augmentation length) and similar parameters, automatically correcting out-of-range values (for example limiting the mandibular angle rotation to ≤ 15° to avoid the risk of nerve injury), preventing unreasonable operations at the algorithm level;
risk control: safety thresholds are preset in combination with an anatomical database; for example, in hump nose surgery, when the upward rotation angle of the nose tip exceeds 30°, the nostrils may become exposed;
2. Immersive real-time interaction: grid vertex positions are computed in real time from the coordinate transformation formulas (rotation matrix and translation vector); when the user adjusts a parameter (such as a slider changing the chin length), the three-dimensional model updates at a frame rate of 60 fps, achieving a "what you see is what you get" simulation effect;
doctor-patient communication optimization: patients can intuitively observe the facial changes of different shaping schemes (such as comparing the heights of two rhinoplasty prostheses), and doctors can quickly verify the aesthetic proportions of a design (such as the "three courts and five eyes" standard) through real-time rendering.
4. Comprehensive benefits of clinical application:
1. Accuracy of the surgical plan: combining the high-precision three-dimensional model (error < 0.3 mm) with illumination simulation allows quantitative analysis of indices such as facial asymmetry (e.g. the left-right cheek width difference) and skin laxity, providing data support for a personalized surgical plan;
2. Preoperative risk prediction: by pre-modeling the effects of different surgical parameters, potential problems (such as compatibility with surrounding tissue and the risk of neurovascular compression after prosthesis implantation) can be found in advance, shortening intraoperative adjustment time and reducing surgical risk by about 25%;
3. Efficient use of medical resources: the automated image processing and modeling flow (less than 10 minutes from image input to three-dimensional model) greatly improves efficiency compared with traditional manual measurement (1-2 hours), and is suitable for large-scale pre-operative cosmetic evaluation.
In summary, the cosmetic surgery auxiliary analysis method and system based on three-dimensional surface shape digital measurement provided by the invention realize the leap from experience-driven to data-driven practice in the cosmetic surgery field, through whole-process technical innovation in high-precision image processing, detail-enhanced modeling, physical illumination rendering and safe interactive simulation.
Drawings
Fig. 1 is a schematic flow chart of an embodiment of the cosmetic/plastic assisted analysis method based on three-dimensional surface shape digital measurement.
Detailed Description
In order to better understand the above technical solutions, the following detailed description will be given with reference to the accompanying drawings and specific embodiments.
As shown in fig. 1, a first embodiment of the present invention proposes a cosmetic/plastic assisted analysis method based on three-dimensional surface shape digital measurement, comprising the steps of:
step S100, acquiring a medical image data set, performing noise processing by adopting a Gaussian filter algorithm, judging the medical image data set as noise points if the difference between the pixel gray value and the neighborhood mean value exceeds a preset threshold value, and performing smoothing processing to acquire a first image data set.
Medical image datasets refer to sets of digitized images acquired through various medical imaging techniques for recording internal or external structures and physiological functions of the human body. These datasets contain a large amount of anatomical, pathological or functional information, which is the core fundamental data for medical diagnosis, treatment planning, medical research and medical technology development.
The Gaussian filter algorithm (Gaussian Filter Algorithm) is a linear smoothing filter algorithm based on a Gaussian function, and belongs to a classical technology in the field of signal processing and image processing. The core idea is to convolve the image or signal with a Gaussian Kernel (Gaussian Kernel) to reduce noise, smooth details by weighted averaging the neighborhood pixel values, while preserving the edge and structural information of the image as much as possible.
The pixel gray value (Pixel Grayscale Value) is a numerical value for representing the brightness of a single pixel in an image, and is a basic attribute of a gray-scale image (an image containing no color information).
The neighborhood mean (Neighborhood Mean) is a local statistic commonly used in image processing and computer vision to describe the pixel gray value averaging characteristics of a pixel point in an image and its surrounding neighboring pixels. The core idea is to extract local features of the image or suppress noise by analyzing the gray level distribution of the local area of the pixel.
Noise points (noise pixels) refer to abnormal pixel points in an image whose gray values or texture features are obviously inconsistent with those of surrounding pixels, generally introduced by factors such as imaging equipment defects, transmission interference and environmental interference. Noise points interfere with the visual effect of the image and with subsequent processing (e.g., image analysis, object detection), and therefore require suppression by filtering, denoising and similar algorithms.
The smoothing process (Smoothing Processing) is a technique for smoothing the entire image or a partial region by suppressing image noise and weakening local gray scale fluctuation between pixels in the image processing. The core goal is to reduce high frequency details (e.g., noise, fine texture) in the image while preserving as much low frequency information as possible (e.g., object contours, large scale structures). The smoothing process is widely applied to scenes such as image denoising, preprocessing, denoising before feature extraction and the like.
Step S200, constructing an energy function according to the first image data set, and filling and repairing a defect area in the image by superposition calculation of the single-point energy term and the adjacent-point interaction energy term, to obtain a second image data set.
The Energy Function (Energy Function) is defined and applied differently in different fields, but the core idea is to map the state or attribute of the system into a numerical value (Energy value) through a mathematical Function, so as to describe the stability, cost, similarity or optimization objective of the system. The energy function is often used as an objective function of an optimization problem to solve for the optimal state of the system by minimizing or maximizing the energy value.
The image defect area is filled and repaired by superposition calculation of the single-point energy term and the adjacent-point interaction energy term; this is an image inpainting method based on energy function optimization. Its essence is that the filling problem of defect areas in an image (such as scratches, noise points and missing pixels) is converted into an energy minimization problem: the local characteristics and neighborhood dependency relations of the pixels are characterized by defining two types of energy terms, and finally the optimal filling value is solved through an optimization algorithm, so that the repaired image is visually coherent and natural.
In computer vision, image processing, and energy optimization models (e.g., markov random field, conditional random field), a single point energy term (Unary ENERGY TERM) is the fundamental component of the energy function to describe the degree or cost of matching of individual pixel/point self-properties to the target state. Single point energy terms characterize the "energy" of a single point independent of the surrounding environment (i.e., independent cost or preference), one of the core elements in constructing a global energy function.
In computer vision, image processing, and energy optimization models (e.g., markov random field, conditional random field), adjacent point interaction energy terms (PAIRWISE ENERGY TERM) are key components of the energy function to describe the contribution of label relationships between adjacent pixels/points to the overall energy. The adjacent point interaction energy term characterizes the dependency relationship or constraint condition between the adjacent points, and the adjacent point interaction energy term and the single point interaction energy term cooperate to ensure that the model balances between local rationality (single point term) and global consistency (interaction term).
Step S300, extracting point cloud data from the second image data set, constructing a gradient field, and if the point cloud density is lower than a preset threshold value, executing local densification processing to generate a three-dimensional face model.
Point cloud data is a data set consisting of discrete points in three-dimensional space, each point typically containing coordinates (x, y, z) and attributes (e.g., color, intensity, etc.), and is widely used in the fields of laser radar, photogrammetry, three-dimensional scanning, etc.
The gradient field is a vector field used for describing the change rate (namely the steepness and the direction) of the surface of the point cloud in a local area, each point corresponds to a gradient vector, the size of the gradient vector represents the local change amplitude, and the direction points to the direction in which the function grows most rapidly (generally perpendicular to the normal vector of the curved surface).
The local densification processing based on the point cloud density threshold is a point cloud data preprocessing strategy, aimed at improving the spatial uniformity and detail integrity of the point cloud by automatically identifying low-density regions and supplementing sampling points. The core logic is to calculate the density value of a local area of the point cloud, compare it with a preset threshold value, and perform a densification operation on areas whose density is lower than the threshold, so as to meet the data-density requirements of subsequent processing (such as three-dimensional reconstruction and surface modeling).
In the field of point cloud processing, the point cloud density (Point Cloud Density) is used for describing the distribution density of points in point cloud data, and is an important index for measuring the quality, geometric structure characteristics or spatial sampling characteristics of the point cloud.
The three-dimensional facial model is a three-dimensional data structure which is constructed by a digitizing technology and can accurately describe the geometric shape, texture characteristics and dynamic expression of the human face. The three-dimensional face model integrates information such as space coordinates, surface details, topological relations and the like of the face into a visual and interactive model in a mathematical or computer recognizable form, and is widely applied to the fields such as computer graphics, virtual reality, biomedicine, security recognition, film and television special effects and the like.
Step S400, acquiring grid vertex coordinates and normal vector information of the three-dimensional face model, and fusing an ambient light component, a diffuse reflection component and a specular reflection component through an illumination intensity calculation formula to generate a visual image.
In a three-dimensional face model, mesh vertex coordinates and normal vector information are core data describing the model geometry and surface characteristics, which together determine the shape, lighting effect, and rendering realism of the model.
The normal vector is a unit vector perpendicular to a certain point of the model surface, and is used for describing the orientation and curvature of the surface, and is divided into a grid vertex normal vector and a plane normal vector.
The visual image is generated by fusing the ambient light component, the diffuse reflection component and the specular reflection component through the illumination intensity calculation formula, and is one of the core steps of realism rendering (Photorealistic Rendering) in computer graphics. The essence of the method is that based on a Lighting Model, different types of Lighting components (ambient light, diffuse reflection light and specular reflection light) in a scene are subjected to mathematical modeling, and final Lighting intensity of each point on the surface of an object is obtained through superposition calculation, so that a visual image with stereoscopic impression and reality is generated.
Ambient Light (Ambient Light) is Light that is uniformly distributed after multiple scattering of simulated Light in the environment, and is independent of the direction of a Light source and the surface orientation of an object, and can be regarded as basic illumination of a scene.
Diffuse reflected Light (Diffuse Light) is a phenomenon that simulates uniform scattering of Light rays in all directions when the Light rays are irradiated to a rough surface, and the intensity thereof depends on the angle between the direction of a Light source and the normal vector of the surface of an object.
Specular Light (Specular Light) is a directional reflection (e.g., metallic, glass high-Light effects) that simulates Light striking a smooth surface, with the intensity depending on the angle of the observer's viewing angle with respect to the direction of the reflected Light.
The visual image refers to a technical product which converts data, information or abstract concepts into visual forms visible to human eyes through a graphical means so as to intuitively and efficiently transfer knowledge, express rules or present scenes. The method converts complex data or contents (such as a three-dimensional model, a simulation result, statistical information and the like) which are difficult to directly understand into a visual symbology which is easy to perceive and read through visual elements such as colors, shapes, textures, spatial relations and the like.
Step S500, establishing a parameter mapping matrix to impose range constraints on the transformation parameters input by a user, automatically correcting the rotation angle if it exceeds a preset range, and calculating the updated grid vertex positions through a coordinate transformation formula.
The parameter mapping matrix is a mechanism for mapping original parameters input by a user to target parameter ranges through mathematical transformation (such as matrix operation in linear algebra), and the core goal is to carry out range constraint on the parameters so as to ensure that input values conform to valid intervals (such as physical feasible regions, geometric rationality or business rules) preset by a system.
The parameter mapping matrix constructs a bridge between the user input space and the legal space of the system through mathematical transformation, and has the core value of retaining the directionality and continuity of the user intention while restricting the parameter range. The mechanism is widely applied to the fields of computer graphics, man-machine interaction, physical simulation and the like, not only ensures the stability of the system, but also improves the naturalness and predictability of user operation.
Transformation parameters refer to quantization parameters used to describe object or space geometric transformations (e.g., position, orientation, size, shape changes) in the fields of computer graphics, robotics, mathematical modeling, physical simulation, etc.
The automatic correction mechanism of the rotation angle is a core means for ensuring that the angle value is legal through mathematical transformation or control logic, and the core aim is to balance constraint effectiveness and user intention order preservation. What kind of correction method is selected is determined according to scene requirements (such as whether angle jump is allowed or not and whether periodicity is available or not), and finally optimization of system stability and interaction experience is achieved. When the rotation angle input by a user or calculated by the system exceeds a preset legal range, the angle value is automatically adjusted through mathematical transformation or rules so as to fall into an effective interval, thereby ensuring the stability, geometric rationality or interaction safety of the system.
The calculation of updated mesh vertex positions by means of a coordinate transformation formula is a core operation of geometric transformation (Geometric Transformation) in computer graphics, robotics and geometric modeling. The essence is that the original grid vertex coordinates are mapped to new coordinate positions through a mathematical formula so as to realize geometric transformation such as translation, rotation, scaling, miscut, projection and the like of the object, thereby changing the spatial position, direction, size or shape of the object.
Step S600, updating the visualized image in real time according to the transformation parameters and the material attribute parameters, and outputting a facial feature analysis result.
Updating the visual image in real time is to dynamically update the visual effect of the three-dimensional face model through the graphic rendering engine according to the input transformation parameters (such as translation, rotation and scaling of the face model) and the material attribute parameters (such as skin color, texture and glossiness). Wherein the transformation parameters are parameters describing the spatial pose and shape change of the face model. The texture property parameters are parameters describing the optical properties and texture of the face surface.
Facial feature analysis is to analyze real-time rendered images or three-dimensional models, extract equivalent results of facial geometric features, expression states and material matching degree, and is used for driving interaction, evaluation or decision.
Further, in the cosmetic surgery auxiliary analysis method based on three-dimensional facial digital measurement provided in this embodiment, step S100 includes:
Step S110, a medical image data set is obtained, pixel gray value analysis is adopted, pixel gray values of each pixel are extracted from the medical image data set, and a first gray difference set is obtained by calculating the difference between the pixel gray values and the neighborhood pixel gray average value.
For example, in medical image processing, acquiring a medical image dataset typically involves extracting CT or MRI images from a hospital PACS (Picture Archiving and Communication System) or from a public dataset such as LIDC-IDRI (The Lung Image Database Consortium).
Assuming a set of chest CT images is acquired, 512x512 pixels in size, with a range of pixel gray values between 0 and 255. The pixel gray value of each pixel reflects the tissue density, with the lung region typically exhibiting a lower pixel gray value and the bone region having a higher pixel gray value. This data acquisition ensures the reliability and consistency of subsequent analysis, contributing to accurate diagnosis.
Specifically, a 3x3 neighborhood window may be employed to extract the pixel gray value of each pixel and calculate its difference from the neighborhood pixel mean. For example, a pixel has a gray value of 150, and 8 neighboring pixels have gray values of 145, 148, 152, 147, 149, 146, 151, 150, respectively, and a mean value of 148.5, and a difference value of 1.5. This process generates a first gray difference set that reflects local features of gray variations in the image, helping to identify outliers.
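The neighborhood-difference computation of step S110 can be sketched as follows. This is a minimal illustration (not the claimed implementation); the 3x3 window and the example values follow the text above, and the neighborhood mean excludes the center pixel itself.

```python
import numpy as np

def neighborhood_diff(img, r=1):
    """Difference between each pixel's gray value and its neighborhood mean
    (a sketch of step S110; boundary pixels use the available neighbors)."""
    img = img.astype(float)
    h, w = img.shape
    diff = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            block = img[y0:y1, x0:x1]
            # neighborhood mean excludes the center pixel itself
            mean = (block.sum() - img[y, x]) / (block.size - 1)
            diff[y, x] = img[y, x] - mean
    return diff
```

With the example above (center 150, neighbors 145, 148, 152, 147, 149, 146, 151, 150), the neighborhood mean is 148.5 and the difference is 1.5.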
Step S120, performing noise detection according to the first gray level difference set: if the difference between the gray value of a pixel and the neighborhood mean exceeds a preset threshold value, the pixel is determined to be a noise point, obtaining a noise point position set.
In one possible implementation, noise detection is accomplished by setting a threshold. Assuming the preset threshold is 10, if the difference between the gray value of a pixel and the neighborhood mean exceeds 10, the pixel is judged to be a noise point. For example, a pixel with a gray value of 200 and a neighborhood mean of 150 has a difference of 50, far exceeding the threshold, and is labeled as noise. All noise points constitute the noise point position set. This method can effectively distinguish normal tissue from noise interference and improve image quality.
Step S130, smoothing the noise points in the noise point position set by adopting a Gaussian filter algorithm, adjusting the pixel gray value of each noise point by Gaussian filtering, to obtain a second image set.
For example, a gaussian filter algorithm is applied to the noise point location set, optionally with a gaussian kernel with a standard deviation of 1.5 and a window size of 5x5. Assuming that the pixel gray value of a noise point is 200, the pixel gray value is adjusted to be a neighborhood weighted average value after Gaussian filtering, such as 180. The smoothing process reduces the interference of noise on the image, reserves edge information, generates a second image set, and improves the visual definition of the image and the accuracy of subsequent analysis.
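Steps S120-S130 can be combined into a short sketch: only pixels flagged as noise are replaced by a Gaussian-weighted neighborhood average. The standard deviation 1.5 and 5x5 window follow the example above; the border handling (reflect padding) is an assumption of this sketch.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.5):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def smooth_noise_points(img, noise_mask, sigma=1.5, size=5):
    """Replace only the flagged noise pixels with a Gaussian-weighted
    neighborhood average (steps S120-S130; borders use reflect padding)."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode='reflect')
    out = img.astype(float).copy()
    for y, x in zip(*np.nonzero(noise_mask)):
        win = padded[y:y + size, x:x + size]  # window centered on (y, x)
        out[y, x] = (win * k).sum()
    return out
```

Because the Gaussian kernel averages over the neighborhood, an isolated outlier of 200 inside uniform 150 tissue is pulled back toward 150, while all unflagged pixels are left untouched.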
Step S140, generating a data set from the second image set: the processed data of the second image set are stored to generate the first image data set.
Specifically, the second image set of processed data is stored for generating the first image data set. For example, the processed CT image is stored in DICOM format, containing metadata such as patient ID and scan parameters. The saved data set may be used for deep learning model training or clinical diagnosis. The method ensures data consistency and is convenient for subsequent analysis and model development.
In one possible implementation, the choice of parameters for gaussian filtering has a significant impact on the result. The smaller standard deviation can better keep details, is suitable for fine structure analysis, and the larger standard deviation is suitable for removing obvious noise.
Preferably, the parameters may be dynamically adjusted according to the type of image, ensuring an optimal smoothing effect. The flexibility improves the adaptability of the algorithm and meets the requirements of different clinical scenes.
For example, the above processing procedure can significantly improve the accuracy of nodule detection in lung CT images. After noise is reduced, the edge of the nodule is clearer, and the misdiagnosis rate is reduced. Meanwhile, the generated first image data set provides high-quality input for AI auxiliary diagnosis, and is beneficial to improving diagnosis efficiency and reliability. The method ensures the robustness and practicality of medical image processing through multi-step collaborative optimization.
Further, in the cosmetic surgery auxiliary analysis method based on three-dimensional facial digital measurement provided in this embodiment, step S200 includes:
Step S210, obtaining pixel gray values and texture features of pixel points from a first image data set, partitioning an image by adopting a region segmentation algorithm, and determining a region containing defects by calculating the average value of the texture features and the distribution of the pixel gray values of each region to obtain a defect region set.
In the medical image processing field, data may be extracted from a chest CT image when pixel gray values and texture features of pixel points are acquired from a first image dataset. Assuming an image size of 512x512 pixels, the pixel gray value range is 0-255. The texture features can reflect the gray level change rule of the pixel point neighborhood through the local binary pattern or gray level co-occurrence matrix calculation. Illustratively, for a certain lung region pixel, the pixel gray value is 120, the neighborhood forms a specific texture mode, and the characteristic value is obtained after quantization and is used for subsequent segmentation.
The region segmentation algorithm may employ a graph-based approach to divide the image into regions such as lung, pleural and skeletal regions. Specifically, each region calculates a texture feature mean and a pixel gray value distribution. For example, the average value of the texture features of the lung region is 0.8, the average value of the gray scale is 100, the average value of the texture of a certain region is 1.2, the gray scale distribution deviates from the normal range, and the region which is judged to contain the defect forms a defect region set. This process ensures accurate identification of defective areas.
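A block-wise version of this region screening can be sketched as follows. This is a toy stand-in for the segmentation of step S210: the local standard deviation is used here as a crude substitute for the LBP/GLCM texture feature, and the thresholds (`tex_hi`, `gray_rng`) are purely illustrative assumptions.

```python
import numpy as np

def defect_blocks(img, block=8, tex_hi=1.2, gray_rng=(60, 140)):
    """Flag blocks whose texture statistic or gray mean leaves the expected
    range (a simplified sketch of the defect-region screening in S210)."""
    h, w = img.shape
    flagged = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            blk = img[y:y + block, x:x + block].astype(float)
            tex = blk.std()          # crude stand-in for a texture feature
            gray = blk.mean()
            if tex > tex_hi or not (gray_rng[0] <= gray <= gray_rng[1]):
                flagged.append((y, x))   # top-left corner of defect block
    return flagged
```

The flagged block coordinates together form the defect region set used by the later steps.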
Step S220, extracting the boundary contour of the defect area by adopting a boundary detection algorithm aiming at the defect area set, and determining the boundary contour set by calculating the pixel gray value gradient of the boundary pixel points.
In one embodiment, a boundary detection algorithm, such as the Canny algorithm, is applied to the set of defect regions to extract the boundary contours of the defect regions. Illustratively, the gray value of the pixel in a certain defect area is suddenly changed from 100 to 180, and after the gradient of the gray value of the pixel is calculated, the pixel with a higher gradient value is marked as a boundary point to form a boundary contour set.
It should be noted that the gradient calculation in combination with the texture feature can improve the accuracy of boundary detection. For example, a higher texture feature value at a boundary point indicates that it may be a lesion edge, rather than a noise disturbance. The boundary contour set provides accurate region localization for subsequent processing.
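The gradient part of step S220 can be illustrated with a minimal sketch: pixels whose gray-value gradient magnitude exceeds a threshold are marked as boundary points. This simplifies the full Canny pipeline (no non-maximum suppression or hysteresis); the threshold value is an assumption.

```python
import numpy as np

def boundary_points(img, grad_thresh=30.0):
    """Mark pixels whose gray-value gradient magnitude exceeds a threshold
    (a simplified stand-in for the Canny-based boundary step S220)."""
    gy, gx = np.gradient(img.astype(float))  # per-axis central differences
    mag = np.hypot(gx, gy)                   # gradient magnitude
    return mag > grad_thresh
```

For the example in the text, a jump from 100 to 180 between neighboring pixels produces a gradient magnitude of about 40, which clears the threshold and marks the pixel as a boundary point.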
Step S230, constructing an energy function according to the boundary contour set and the texture features to obtain an energy distribution set.
Constructing an energy function based on the boundary contour set and the texture features may optimize the feature description of the defect region by minimizing the energy function. The energy function of a defect area is assumed to combine the gray gradient and the texture feature to generate an energy distribution set, and the abnormal degree of pixels in the area is reflected. Specifically, a higher pixel energy value indicates that it deviates from normal tissue characteristics. It should be noted that the energy distribution set provides a quantitative basis for subsequent repair.
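The general shape of such an energy function can be sketched with a Potts-style model, one concrete (and assumed) instance of the unary-plus-pairwise structure described in steps S200/S230: the unary table holds each pixel's cost for each label, and the pairwise term penalizes label changes between 4-connected neighbors.

```python
import numpy as np

def total_energy(labels, unary, lam=1.0):
    """E(L) = sum of U(p, L_p) + lam * sum of [L_p != L_q] over 4-connected
    neighbor pairs (a minimal Potts-model sketch, not the patented function).
    `unary` has shape (num_labels, h, w)."""
    h, w = labels.shape
    e = sum(unary[labels[y, x], y, x] for y in range(h) for x in range(w))
    # pairwise term: count label changes between horizontal/vertical neighbors
    e += lam * np.count_nonzero(labels[:, 1:] != labels[:, :-1])
    e += lam * np.count_nonzero(labels[1:, :] != labels[:-1, :])
    return e
```

Repair then amounts to searching for the labeling (filling values) that minimizes this energy, so that each pixel fits its local evidence while remaining consistent with its neighbors.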
Step S240, smoothing the energy distribution set by adopting a Gaussian filter algorithm, and performing filling repair by adjusting the pixel gray values of the pixel points in the defect area in combination with texture features, to generate a second image data set.
In one embodiment, the energy distribution set is smoothed using a gaussian filter algorithm, with a standard deviation of 1.2 and a window size of 5x5. For example, the gray value of the original pixel of a certain pixel is 160, the gray value of the original pixel is 150 after being smoothed, and filling and repairing are carried out by combining the texture features, so that a second image data set is generated.
Preferably, the pixel gray value can be dynamically adjusted according to the texture characteristics during repair, so that the natural connection between the repair area and surrounding tissues is ensured. For example, the pixel gray values of the repaired lung area are uniformly distributed, and the texture characteristics are consistent with normal tissues. The method improves the quality of the image data set and provides reliable data support for subsequent diagnosis.
In one embodiment, the second image dataset may be stored in DICOM format, including patient ID and scan parameters, for clinical use. Illustratively, the restored CT images exhibit higher regional consistency in pulmonary nodule detection, facilitating subsequent analysis. It should be noted that, the introduction of texture features makes the repair process more targeted, reducing the risk of erroneous repair. Such multi-step collaborative processing ensures the integrity and practicality of the image data.
Further, in the cosmetic surgery auxiliary analysis method based on three-dimensional facial digital measurement provided in this embodiment, step S300 includes:
Step S310, acquiring pixel gray values and depth information of pixel points in the second image data set, and generating point cloud data containing three-dimensional space coordinates by adopting a stereo vision algorithm, to obtain a point cloud data set.
In the medical image processing field, the pixel gray values and depth information of pixel points in the second image data set can be extracted from skull CT images. Assuming the image size is 512x512 pixels with a pixel gray value range of 0-255, the depth information is generated through multi-slice CT scanning and reflects the position of each pixel point in three-dimensional space. A stereo vision algorithm can convert this information into point cloud data, generating a point cloud dataset comprising three-dimensional spatial coordinates.
Specifically, each pixel is assigned an x, y, z coordinate and a pixel gray value, for example, the pixel of a certain skull region has a coordinate of (100,120,50) and the pixel gray value has a value of 150. The point cloud dataset provides a basis for subsequent three-dimensional modeling.
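The conversion from a gray image plus per-pixel depth to an attributed point cloud can be sketched as follows. The N x 4 layout (x, y, z, gray) and the voxel `spacing` parameter are assumptions of this illustration.

```python
import numpy as np

def depth_to_points(gray, depth, spacing=(1.0, 1.0, 1.0)):
    """Stack pixel grid coordinates with per-pixel depth into an N x 4 array
    (x, y, z, gray), a simplified sketch of step S310."""
    h, w = gray.shape
    ys, xs = np.mgrid[0:h, 0:w]          # pixel grid coordinates
    return np.column_stack([
        xs.ravel() * spacing[0],
        ys.ravel() * spacing[1],
        depth.ravel() * spacing[2],
        gray.ravel().astype(float),      # gray value kept as point attribute
    ])
```

Each row then corresponds to one pixel carrying its three-dimensional coordinates and gray attribute, e.g., a skull pixel at (100, 120, 50) with gray value 150.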
Step S320, calculating the number of points in a unit volume for the point cloud data set, and if the point cloud density is lower than a preset threshold value, locally densifying the low-density area by adopting an interpolation algorithm, to obtain a densified point cloud data set.
In one embodiment, the number of points per unit volume is calculated for the point cloud dataset, assuming a unit volume of 1 cubic millimeter and a preset density threshold of 100 points/cubic millimeter. If the point cloud density of a certain area is 80 points/cubic millimeter, lower than the threshold, an interpolation algorithm is adopted for local densification.
For example, based on nearest-neighbor interpolation, new points are inserted in the low-density region, and their pixel gray values are generated from a weighted average of neighboring points, e.g., the gray value of an inserted point is about 145. After densification, the point cloud density increases to 110 points/cubic millimeter, forming the densified point cloud data set. This densification ensures the uniformity of the point cloud data.
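The density check and interpolation of step S320 can be reduced to a toy sketch: when the overall density falls below the threshold, midpoints of consecutive point pairs are inserted, averaging the gray attribute as well. Midpoint insertion is an assumed, simplest form of the neighbor-weighted interpolation described above.

```python
import numpy as np

def densify_low_density(points, threshold, volume):
    """If the density (points per unit volume) is below `threshold`, insert
    the midpoint of every consecutive point pair, averaging coordinates and
    the gray attribute (a toy version of step S320's interpolation).
    `points` is an N x 4 array of (x, y, z, gray)."""
    density = len(points) / volume
    if density >= threshold:
        return points                     # dense enough, leave unchanged
    mids = (points[:-1] + points[1:]) / 2.0   # midpoint interpolation
    return np.vstack([points, mids])
```

With neighboring gray values 140 and 150, the inserted midpoint carries gray value 145, matching the example in the text.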
Step S330, calculating the pixel gray value gradient of each point according to the densified point cloud data set, and constructing a gradient field describing surface variation by adopting a gradient descent algorithm, to obtain a gradient field data set.
For example, computing the pixel gray value gradient of each point based on the densified point cloud data set reflects the surface gray variation. Assuming the pixel gray value at a certain point jumps from 140 to 180, the gradient value is high, indicating a possible boundary of the skull surface. A gradient field is constructed by adopting a gradient descent algorithm, generating a gradient field data set that describes the surface variation pattern. In particular, the gradient field marks the junction of the skull and soft tissue, providing accurate surface information for subsequent modeling.
Step S340, generating a triangular mesh by using a mesh generation algorithm from the gradient field data set and the densified point cloud data set, and constructing a three-dimensional face model by combining the boundary contour and the normal vectors, to obtain a three-dimensional face model data set.
In one embodiment, a triangular mesh is generated from the gradient field dataset and the densified point cloud dataset using a mesh generation algorithm. For example, based on the Delaunay triangulation algorithm, the point cloud data are connected into a triangular mesh, and the three-dimensional face model is constructed by combining boundary contours and normal vectors.
Assuming that the normal vector of a certain skull region points outwards, the mesh forms a smooth skull surface after generation. The three-dimensional facial model dataset may be saved in STL format, containing mesh vertex coordinates and patch information, facilitating clinical surgical planning or 3D printing. For example, the generated skull model can accurately position the bone structure in surgical navigation, and operation errors are reduced.
The combination of point cloud densification and gradient field construction ensures that the model surface is smoother and the boundaries are clearer. For example, the densified point cloud data reduce voids, and the gradient field enhances the appearance of surface detail. This multi-step collaborative processing improves the precision and reliability of the three-dimensional model, and provides high-quality data support for medical image analysis.
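For points arranged on a regular grid, the mesh generation of step S340 can be sketched without a full Delaunay implementation: each grid cell is split into two triangles. This structured triangulation is an assumed stand-in for the Delaunay algorithm named above and produces the same kind of vertex-index triangle list.

```python
import numpy as np

def grid_triangulate(h, w):
    """Split each cell of an h x w vertex grid into two triangles, returning
    an (n_tris, 3) array of vertex indices (a stand-in for Delaunay in S340)."""
    tris = []
    for y in range(h - 1):
        for x in range(w - 1):
            i = y * w + x                     # top-left vertex of the cell
            tris.append((i, i + 1, i + w))    # upper-left triangle
            tris.append((i + 1, i + w + 1, i + w))  # lower-right triangle
    return np.array(tris)
```

An h x w grid yields 2(h-1)(w-1) triangles; the resulting index list, together with vertex coordinates, is what an STL-style mesh stores.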
Preferably, in the cosmetic surgery auxiliary analysis method based on three-dimensional facial digital measurement of this embodiment, step S400 includes:
Step S410, a grid vertex data set of the three-dimensional face model is obtained, and three-dimensional coordinates and normal vectors of each grid vertex are calculated by vector operation, so that a grid vertex attribute data set is obtained.
For example, in the medical image processing field, based on a mesh vertex data set of a three-dimensional face model, mesh vertex coordinates and normal vectors may be obtained through vector operations, forming a mesh vertex attribute data set. Grid vertex coordinates describe the position of each point of the model surface in three dimensions, e.g., the coordinates of a grid vertex of a skull model are (150,200,80) mm, and normal vectors reflect the surface orientation, e.g., (0.7,0.2,0.6), for subsequent illumination calculations. The mesh vertex attribute dataset provides the underlying geometric information for the visualization.
Step S420, calculating the ambient light component of each grid vertex by adopting an illumination model according to the grid vertex attribute data set and the preset light source position, and combining the reflection coefficient of the surface material to obtain an ambient light intensity data set.
In one possible implementation, the ambient light component may be calculated using an ambient light model based on the mesh vertex attribute dataset and the preset light source position. Ambient light simulates uniformly scattered light; assuming the light source position is (300,300,500) mm, the ambient light intensity is 0.3 and the reflection coefficient of the surface material is 0.4, the ambient light component of a grid vertex is obtained by simple multiplication: I_a = 0.4 × 0.3 = 0.12. This ensures that the model retains a basic brightness even in the absence of a directional light source.
Step S430, if the included angle between the normal vector in the grid vertex attribute data set and the light source direction is smaller than a preset threshold, calculating a diffuse reflection component by using the diffuse reflection formula I_d = k_d (N·L), wherein I_d represents the diffuse reflection intensity, k_d the diffuse reflection coefficient, N the grid vertex normal vector, and L the light source direction vector, so as to obtain a diffuse reflection intensity data set.
For example, the angle between the normal vector and the light source direction is checked first; if it is smaller than the preset threshold, such as 30 degrees, the diffuse reflection component is calculated by the diffuse reflection formula. Diffuse reflection simulates the scattering of light on a rough surface. Assuming the normal vector of a grid vertex is (0.5,0.5,0.7), the light source direction vector is (0.6,0.6,0.5) and the diffuse reflection coefficient is 0.6, the diffuse reflection intensity follows from the vector dot product: N·L = 0.5×0.6 + 0.5×0.6 + 0.7×0.5 = 0.95, so I_d = 0.6 × 0.95 = 0.57. This method highlights the brightness variation of the model surface and enhances the stereoscopic impression.
Step S440, calculating a specular reflection component by using the specular reflection formula I_s = k_s (R·V)^n from the diffuse reflection intensity dataset and the viewing angle direction, and fusing the ambient light intensity dataset and the diffuse reflection intensity dataset to generate a visualized image dataset, wherein I_s represents the specular reflection intensity, k_s the specular reflection coefficient, R the reflection vector, V the viewing angle vector, and n the highlight exponent.
In one possible implementation, the calculation of the specular reflection component is based on the viewing angle direction and the shininess exponent. Specular reflection simulates the mirror-like highlights of a smooth surface, such as the smooth areas of a skull model. Assuming a viewing angle vector of (0.3, 0.4, 0.8), a specular reflection coefficient of 0.8, and a shininess exponent of 32, the specular reflection intensity can be obtained from the dot product of the reflection vector and the viewing angle vector raised to that exponent. This effectively represents the gloss characteristics of the model surface.
For example, fusing the ambient light intensity dataset, the diffuse reflection intensity dataset, and the specular reflection intensity dataset generates the visualized image dataset. The fusion process weights the different illumination components, for example 30% ambient light, 50% diffuse reflection, and 20% specular reflection, to produce the color value of each final image pixel. This lets the skull model present realistic light-and-shadow effects in surgical navigation, making it easier for doctors to observe bone details.
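A compact sketch of the full fusion, combining the ambient, diffuse, and specular terms with the 30/50/20 weights above. The reflection-vector construction R = 2(N·L)N − L, the vector normalization, and gating the specular term on a front-facing light are standard Phong-model conventions assumed here, not spelled out in the source:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_intensity(normal, light_dir, view_dir,
                    k_a=0.4, i_a=0.3,       # ambient coefficient and intensity
                    k_d=0.6,                # diffuse coefficient
                    k_s=0.8, shininess=32,  # specular coefficient and exponent
                    weights=(0.3, 0.5, 0.2)):
    """Weighted fusion of ambient, diffuse and specular illumination terms."""
    N, L, V = normalize(normal), normalize(light_dir), normalize(view_dir)
    ambient = k_a * i_a
    n_dot_l = max(dot(N, L), 0.0)
    diffuse = k_d * n_dot_l
    specular = 0.0
    if n_dot_l > 0.0:  # no specular highlight when the light is behind the surface
        # Reflection of the light direction about the normal: R = 2(N.L)N - L
        R = tuple(2.0 * n_dot_l * nc - lc for nc, lc in zip(N, L))
        specular = k_s * max(dot(R, V), 0.0) ** shininess
    w_a, w_d, w_s = weights
    return w_a * ambient + w_d * diffuse + w_s * specular

# Example vectors quoted in the surrounding text.
I = phong_intensity((0.5, 0.5, 0.7), (0.6, 0.6, 0.5), (0.3, 0.4, 0.8))
```

For a color image the same fusion is applied per channel, with the coefficients replaced by per-channel reflectances.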
In one possible implementation, the visualized image dataset may be further optimized, for example by adjusting the light source position to simulate the operating-room lighting environment, or by changing the shininess exponent to highlight specific areas. These extensions enrich the visual effect and meet different clinical requirements. It should be noted that fusing multiple illumination components significantly improves the realism of the model and provides a reliable visual basis for subsequent analysis.
Further, the cosmetic shaping auxiliary analysis method based on the three-dimensional surface shape digital measurement provided in the embodiment, step S500 includes:
Step S510, a transformation parameter data set input by a user is obtained, a parameter mapping matrix is constructed through matrix operation, and an initial transformation matrix is obtained, wherein the transformation parameter data set comprises a rotation angle, a translation amount and a scaling.
In one possible implementation, acquiring the transformation parameter dataset is a central element of three-dimensional face model processing. Transformation parameters typically include the rotation angle, translation amount, and scaling factor, which together define the geometric transformation of the model in three-dimensional space. For example, the user may input a rotation angle of 45 degrees, a translation of 50 millimeters along the X axis, and a scaling factor of 1.2. These parameters are entered through a user interface or configuration file to ensure that the model can adjust its position and morphology according to specific needs.
It should be noted that the rationality of the transformation parameters directly affects the accuracy of the subsequent matrix operations, so parameter values must be kept within reasonable ranges at input time; the rotation angle, for example, is usually between -180 and 180 degrees. Constructing the parameter mapping matrix translates the input transformation parameters into an operable mathematical representation: the rotation angle is represented by a rotation matrix, the translation amount by a translation vector, and the scaling factor by a scaling matrix, and these are finally combined into a unified initial transformation matrix. For example, given a rotation angle of 30 degrees, a translation of 100 millimeters along the Y axis, and a scaling factor of 1.5, the system first computes the rotation matrix, translation vector, and scaling matrix separately, then synthesizes the initial transformation matrix by matrix multiplication. This ensures the order and consistency of the transformations and provides a reliable basis for the subsequent mesh vertex coordinate calculations.
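The composition step can be sketched as follows; representing each parameter as a 4×4 homogeneous matrix and multiplying in T·R·S order (with Z chosen as the rotation axis) are illustrative assumptions, since the source fixes neither the axis nor the order:

```python
import math

def rotation_z(deg):
    """4x4 homogeneous rotation about the Z axis."""
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def scaling(k):
    return [[k, 0, 0, 0], [0, k, 0, 0], [0, 0, k, 0], [0, 0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Example from the text: 30-degree rotation, 100 mm translation along Y,
# scaling factor 1.5, synthesized into one initial transformation matrix.
M = matmul(translation(0, 100, 0), matmul(rotation_z(30), scaling(1.5)))
```

Because matrix multiplication is not commutative, fixing this multiplication order is what gives the "order and consistency" the text refers to.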
And step S520, judging whether the rotation angle in the initial transformation matrix exceeds a preset angle threshold, and if so, correcting the rotation angle by adopting a linear interpolation method to obtain a corrected transformation matrix.
In one possible implementation, determining whether the rotation angle in the initial transformation matrix exceeds a preset threshold is a critical step. For example, with the angle threshold set to 60 degrees, an input rotation angle of 75 degrees exceeds it. The rotation angle is then corrected by linear interpolation, for example pulling 75 degrees back within the 60-degree limit, and a corrected transformation matrix is generated. This correction avoids model deformation or distortion caused by an excessive rotation angle.
For example, in medical image processing, excessive rotation may cause unnatural distortions in the facial model, affecting the physician's judgment of the surgical field. The modified transformation matrix can maintain the geometric integrity of the model.
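The source states only that the out-of-range angle is corrected "by linear interpolation"; the sketch below assumes one plausible rule, linearly interpolating the angle back toward the ±60-degree limit, with alpha = 1 reducing to a hard clamp:

```python
def correct_angle(angle, limit=60.0, alpha=1.0):
    """Pull an out-of-range rotation angle back toward the +/-limit.

    alpha = 1.0 is a hard clamp; 0 < alpha < 1 linearly interpolates
    between the input angle and the limit (a softer correction).
    """
    if abs(angle) <= limit:
        return angle
    bound = limit if angle > 0 else -limit
    return angle + alpha * (bound - angle)

print(correct_angle(75.0))             # 75 exceeds the 60-degree limit -> 60.0
print(correct_angle(75.0, alpha=0.5))  # halfway back toward the limit -> 67.5
```

In-range angles pass through unchanged, so the correction only touches inputs that would risk the distortions described above.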
Step S530, calculating the mesh vertex data set of the three-dimensional face model by using a coordinate transformation formula T (v) =m·v, to obtain an updated mesh vertex data set, where T (v) represents the transformed mesh vertex coordinates, M represents the modified transformation matrix, and v represents the original mesh vertex coordinates.
Calculating the mesh vertex dataset of the three-dimensional face model with the coordinate transformation formula is the core of generating the updated mesh vertex dataset. For example, original mesh vertex coordinates of (100, 150, 200) millimeters may, after the corrected transformation matrix is applied, become new coordinates of (120, 180, 240) millimeters. This process is implemented as a matrix-vector multiplication, which is computationally efficient and yields accurate results.
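A minimal sketch of T(v) = M·v in homogeneous coordinates; the uniform 1.2× scaling matrix below is an assumption chosen so that the example reproduces the quoted coordinates:

```python
def apply_transform(M, v):
    """T(v) = M . v for a 4x4 homogeneous matrix and a 3D vertex."""
    h = (v[0], v[1], v[2], 1.0)
    out = [sum(M[i][j] * h[j] for j in range(4)) for i in range(4)]
    return tuple(out[i] / out[3] for i in range(3))

# Uniform scaling by 1.2 maps (100, 150, 200) mm to (120, 180, 240) mm,
# matching the example in the text.
M = [[1.2, 0, 0, 0], [0, 1.2, 0, 0], [0, 0, 1.2, 0], [0, 0, 0, 1]]
new_v = tuple(round(c, 6) for c in apply_transform(M, (100, 150, 200)))
print(new_v)  # -> (120.0, 180.0, 240.0)
```

The same routine applies unchanged to any corrected transformation matrix produced in step S520, since rotation, translation, and scaling all share the homogeneous form.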
Step S540, generating grids of the three-dimensional face model by combining the updated grid vertex data set with preset rendering parameters to obtain a visualized grid vertex position data set.
The updated mesh vertex dataset reflects the new position and morphology of the model in space, providing accurate geometric information for subsequent rendering. In one possible implementation, the generation of the visualized mesh vertex position dataset in combination with preset rendering parameters is the last step of implementing three-dimensional face model visualization. Rendering parameters may include material properties, illumination direction, or color mapping. For example, setting the material to be translucent, the illumination direction to be (200,200,300) mm, the color to be warm, the system will generate a visual grid from the updated grid vertex dataset. This approach can generate a clear model surface, facilitating the viewing of details of the facial structure.
Preferably, specific areas of the model, such as the bridge of the nose or cheekbones, can be highlighted by adjusting rendering parameters, such as increasing the illumination intensity or changing the color mapping, to meet different analysis requirements. For example, in an extension, interactive visualization may be achieved by dynamically adjusting the transformation parameters.
For example, the user may enter a new rotation angle or scale in real-time, and the system updates the transformation matrix and recalculates the mesh vertex dataset in real-time, generating a new visualization effect. This interactivity is particularly useful in medical teaching where a teacher can demonstrate the morphological changes of a face model at different angles by adjusting parameters to help students understand anatomy.
It should be noted that dynamic adjustment also supports real-time feedback, so that the user can quickly optimize the model presentation. In one possible implementation, the diversity of transformation parameters provides flexible application scenarios for the model. For example, in surgical planning, a physician may simulate the effect of placing facial bone at different positions by adjusting the translation amount, or zoom in on a particular area to view details. These operations rely on efficient computation of the transformation matrix to ensure that the model remains geometrically consistent under complex transformations, thereby supporting accurate analysis.
Further, the method for assisting analysis of beauty and shaping based on three-dimensional surface shape digital measurement provided in the present embodiment, step S600 includes:
Step S610, acquiring a transformation parameter dataset and a material attribute dataset input by a user, and constructing an initial transformation matrix and an initial material mapping matrix through matrix operations.
For example, in three-dimensional face model processing, acquiring the transformation parameter dataset and material attribute dataset input by the user is the basis for constructing the visualization model. The transformation parameter dataset may include the rotation angle, translation amount, and scaling factor; for example, the rotation angle is set to 30 degrees, the translation to 80 millimeters along the Z axis, and the scaling factor to 1.3. The material attribute dataset may include material color, reflectivity, and transparency; for example, the material color is set to a natural skin tone, the reflectivity to 0.6, and the transparency to 0.2. These parameters are entered through the user interface to ensure that the model can adjust its morphology and appearance according to specific needs.
It should be noted that the diversity of the input parameters provides flexibility for the subsequent matrix operations and texture mapping. In one possible implementation, the initial transformation matrix and the initial material mapping matrix are constructed through matrix operations. The initial transformation matrix converts the rotation, translation, and scaling parameters into mathematical representations: the 30-degree rotation angle becomes a rotation matrix, the 80-millimeter translation a translation vector, and the 1.3 scaling factor a scaling matrix, combined by matrix multiplication into a unified transformation matrix. The initial material mapping matrix generates an initial material distribution from attributes such as material color and reflectivity, for example mapping the skin tone uniformly onto the model surface. This matrix construction ensures the mathematical consistency of the parameters.
And step S620, if the parameters in the initial transformation matrix exceed the preset range, correcting the transformation parameters by adopting a linear interpolation method to obtain a corrected transformation matrix.
For example, if the rotation angle in the initial transformation matrix exceeds a preset range, say a threshold of 45 degrees with an input of 50 degrees, the angle is corrected back to within 45 degrees by linear interpolation.
Step S630, calculating grid vertices of the three-dimensional face model by using a coordinate transformation formula T (v) =m·v through the modified transformation matrix, to obtain an updated grid vertex data set, where T (v) represents transformed grid vertex coordinates, M represents the modified transformation matrix, and v represents original grid vertex coordinates.
The modified transformation matrix acts on the grid vertices through a coordinate transformation formula, for example, the original grid vertex coordinates are (100,150,200) mm, and new coordinates of (120,170,220) mm can be obtained after modification. This way of correction preserves the geometric stability of the model.
Step S640, updating the initial texture mapping matrix by adopting a texture mapping algorithm according to the texture attribute data set to obtain an updated texture mapping matrix.
Preferably, the texture mapping algorithm updates the texture mapping matrix based on the texture attribute dataset, for example, adjusting reflectivity to highlight regions of the face, resulting in a more realistic skin effect.
Step S650, generating real-time updated visual image data through the updated mesh vertex data set and the updated texture mapping matrix.
In one possible implementation, the updated mesh vertex dataset is combined with the texture mapping matrix to generate real-time updated visual image data. For example, based on the corrected mesh vertex coordinates and the new texture map, the system generates a face model with natural skin tone and light shadow effects. It should be noted that the real-time update supports dynamic adjustment, for example, the model presents the zooming effect immediately after the user changes the scaling.
Step S660, extracting facial feature points from the visualized image data updated in real time by adopting a facial feature extraction algorithm to obtain a facial feature point set.
Preferably, the facial feature extraction algorithm extracts key points from the visualized image data, for example nose tip coordinates of (130, 160, 210) millimeters and mouth corner coordinates of (110, 140, 200) millimeters.
Step S670, calculating the relative position relation between the facial feature points by adopting a geometric analysis method according to the facial feature point set to obtain a facial feature analysis result.
These facial feature points are processed by geometric analysis to obtain the facial feature analysis result; for example, the distance from the nose tip to the mouth corner is 30 millimeters and the angle is 15 degrees. The facial feature analysis results may be used to evaluate facial symmetry or structural features. In an extension, the system supports real-time adjustment of material attributes by the user, such as changing transparency to highlight skeletal structure, or adjusting the illumination angle to emphasize facial contours. Preferably, this interactivity lets the user quickly modify parameters through the interface while the system updates the visualization on the fly, making it easy to observe model changes under different parameters.
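The distance part of this example checks out exactly. Here is a sketch of the Euclidean computation on the quoted feature points (the 15-degree figure depends on an unstated reference axis, so only the distance is reproduced):

```python
import math

def distance(p, q):
    """Euclidean distance between two 3D facial feature points (mm)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

nose_tip = (130.0, 160.0, 210.0)
mouth_corner = (110.0, 140.0, 200.0)
d = distance(nose_tip, mouth_corner)
print(d)  # -> 30.0, matching the 30 mm quoted in the text
```

The same routine, applied pairwise over the full feature point set, yields the relative-position relations used for symmetry assessment.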
It will be appreciated that the facial feature analysis results may be further used in medical diagnostics, such as determining abnormalities in facial structures by facial feature point distance, providing a reference for surgical planning. It should be noted that the flexibility and instantaneity of the method significantly improve the practicality of the model.
The invention further relates to a cosmetic shaping auxiliary analysis system based on three-dimensional facial digital measurement, used to implement the above cosmetic shaping auxiliary analysis method. The system comprises a first acquisition module, a second acquisition module, a first generation module, a second generation module, a calculation module, and an output module. The first acquisition module is used for acquiring a medical image dataset and performing noise processing with a Gaussian filter algorithm: if the difference between a pixel gray value and the neighborhood mean exceeds a preset threshold, the pixel is judged to be a noise point and smoothed, to obtain a first image dataset. The second acquisition module is used for constructing an energy function from the first image dataset and filling and repairing defective areas in the image through superposition of single-point energy terms and adjacent-point interaction energy terms, to obtain a second image dataset. The first generation module is used for extracting point cloud data from the second image dataset and constructing a gradient field, performing local encryption processing if the point cloud density is below a preset threshold, to generate a three-dimensional face model. The second generation module is used for obtaining the mesh vertex coordinates and normal vector information of the three-dimensional face model and fusing the ambient light component, diffuse reflection component, and specular reflection component through an illumination intensity calculation formula to generate a visualized image. The calculation module is used for establishing a parameter mapping matrix to constrain the range of user-input transformation parameters, automatically correcting the transformation parameters if the rotation angle exceeds the preset range, and computing updated mesh vertex positions through a coordinate transformation formula. The output module is used for updating the mesh vertex positions in real time according to the transformation parameters and material attribute parameters, and outputting the facial feature analysis results.
Further, in the cosmetic shaping auxiliary analysis system based on three-dimensional facial digital measurement provided in this embodiment, the first acquisition module comprises a first acquisition unit, a second acquisition unit, a third acquisition unit, and a first generation unit. The first acquisition unit is used for acquiring a medical image dataset and performing pixel gray value analysis: the gray value of each pixel is extracted from the medical image dataset, and a first gray difference set is obtained by calculating the difference between each pixel gray value and the gray mean of its neighboring pixels. The second acquisition unit is used for performing noise detection based on the first gray difference set: if the difference between a pixel gray value and the neighborhood mean exceeds a preset threshold, the pixel is judged to be a noise point, yielding a noise point position set. The third acquisition unit is used for smoothing the noise point position set with a Gaussian filter algorithm, adjusting the gray values of the noise points to obtain a second image set. The first generation unit is used for generating the first image dataset by saving the processed data of the second image set.
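A minimal sketch of the pipeline these units describe: neighborhood-mean thresholding to flag noise points, then Gaussian smoothing applied only at those points. The 3×3 integer kernel and the threshold of 40 gray levels are illustrative assumptions:

```python
# Discrete 3x3 Gaussian kernel (sigma ~ 1), weights summing to 16.
KERNEL = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]
KSUM = 16

def neighborhood_mean(img, r, c):
    """Mean gray value of the 8-neighborhood, excluding the pixel itself."""
    vals = [img[r + dr][c + dc]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)]
    return sum(vals) / len(vals)

def denoise(img, threshold=40):
    """Flag pixels deviating from the neighborhood mean by more than
    `threshold`, then smooth only those pixels with the Gaussian kernel
    (always reading from the original image)."""
    out = [row[:] for row in img]
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            if abs(img[r][c] - neighborhood_mean(img, r, c)) > threshold:
                out[r][c] = sum(KERNEL[dr + 1][dc + 1] * img[r + dr][c + dc]
                                for dr in (-1, 0, 1)
                                for dc in (-1, 0, 1)) // KSUM
    return out

# A single salt-noise spike (255) inside a flat region of gray value 100:
image = [[100] * 5 for _ in range(5)]
image[2][2] = 255
clean = denoise(image)
print(clean[2][2])  # spike is pulled from 255 down toward the background
```

Only flagged pixels are rewritten, which matches the selective smoothing described above: regions that pass the threshold test keep their original gray values.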
Preferably, the cosmetic shaping auxiliary analysis system based on three-dimensional facial digital measurement provided in this embodiment further comprises a fourth acquisition unit, a fifth acquisition unit, a sixth acquisition unit, and a second generation unit. The fourth acquisition unit is used for acquiring a medical image dataset and performing pixel gray value analysis: the gray value of each pixel is extracted from the medical image dataset, and a first gray difference set is obtained by calculating the difference between each pixel gray value and the gray mean of its neighboring pixels. The fifth acquisition unit is used for performing noise detection based on the first gray difference set: if the difference between a pixel gray value and the neighborhood mean exceeds a preset threshold, the pixel is judged to be a noise point, yielding a noise point position set. The sixth acquisition unit is used for smoothing the noise point position set with a Gaussian filter algorithm, adjusting the gray values of the noise points to obtain a second image set. The second generation unit is used for generating the first image dataset by saving the processed data of the second image set.
Compared with the prior art, the cosmetic shaping auxiliary analysis method and system based on three-dimensional surface shape digital measurement provided by the embodiment have the following beneficial effects:
1. Improved precision of image preprocessing:
(1) Robust noise suppression: through the Gaussian filter algorithm and dynamic threshold judgment (smoothing when the difference between a pixel gray value and the neighborhood mean exceeds a preset threshold), interference such as salt-and-pepper noise and Gaussian noise in medical images can be effectively removed, making facial tissue boundaries clearer;
compared with traditional median filtering, Gaussian filtering suppresses noise while retaining edge details, and is particularly suitable for preserving fine structures such as facial skin texture and pores, avoiding feature distortion caused by over-smoothing;
(2) Intelligent defect repair: energy-function-based defect filling (superposition of single-point energy terms and adjacent-point interaction energy terms) can automatically repair local defects in the image such as light spots, scratches, acne pits, and scars, so that the filled area joins seamlessly with the pixel gray values and texture features of the surrounding tissue;
application value: three-dimensional modeling errors caused by defects in the original image are avoided; for example, repairing a scar at the nasal ala allows the subsequent rhinoplasty simulation to better fit the real facial structure.
2. Detail enhancement of three-dimensional modeling:
(1) Adaptive optimization of point cloud data: through gradient field analysis and local encryption processing (automatic densification when the point cloud density falls below a threshold), the point cloud density of key facial features (such as the eye corners, lip lines, and nose bridge) can be enhanced, solving the point cloud sparsity of traditional laser scanning in low-curvature areas (such as the cheeks);
data comparison: in regions of large curvature change such as the nose tip, the point cloud density can be increased from 50 points/cm² under the traditional method to 200 points/cm², reducing the model surface error to within 0.1 mm;
(2) Realistic physical illumination simulation: an illumination model combining ambient light, diffuse reflection, and specular reflection components (such as the Phong illumination model) can faithfully restore the optical characteristics of facial skin (such as the oily sheen of the forehead and the matte texture of the cheeks), avoiding the plastic look of traditional three-dimensional models;
clinical value: doctors can observe the three-dimensionality of the face through light and shadow changes, for example judging whether the light-shadow transition after malar (apple-cheek) filling looks natural, reducing the deviation between postoperative and expected effects.
3. Safety and interaction efficiency of parameter control:
(1) Transformation parameter constraint mechanism: the parameter mapping matrix limits the ranges of rotation angles (such as the zygomatic arch inward-push angle and the mandibular angle rotation amplitude) and translation distances (such as the nasal bridge augmentation length), automatically corrects out-of-range values (for example, limiting the mandibular angle rotation to at most 15 degrees to avoid the risk of nerve injury), and blocks unreasonable operations at the algorithm level;
risk control: safety thresholds are preset in combination with an anatomical database; for example, in hump-nose surgery, a nose tip upward rotation angle exceeding 30 degrees may expose the nostrils;
(2) Immersive real-time interaction: mesh vertex positions are computed in real time from coordinate transformation formulas (rotation matrix and translation vector); when the user adjusts a parameter (such as sliding a bar to change the chin length), the three-dimensional model updates at a frame rate of 60 fps, achieving a what-you-see-is-what-you-get simulation effect;
doctor-patient communication optimization: patients can intuitively observe the facial changes of different shaping schemes (such as comparing the heights of two rhinoplasty prostheses), and doctors can quickly verify the aesthetic proportions of a design (such as the "three courts and five eyes" standard) through real-time rendering.
4. Comprehensive benefits for clinical application:
(1) Surgical plan accuracy: combining the high-precision three-dimensional model (error <0.3 mm) with illumination simulation enables quantitative analysis of indicators such as facial asymmetry (e.g., the left-right cheek width difference) and skin laxity, providing data support for personalized surgical plans;
(2) Preoperative risk prediction: by pre-modeling the effects of different surgical parameters, potential problems (such as prosthesis-tissue compatibility and neurovascular compression risk after implantation) can be discovered in advance, shortening intraoperative adjustment time and reducing surgical risk by about 25%;
(3) Efficient use of medical resources: the automated image processing and modeling pipeline (less than 10 minutes from image input to three-dimensional model) is far more efficient than traditional manual measurement (1-2 hours), and is suitable for large-scale pre-procedure cosmetic evaluation.
In summary, the cosmetic shaping auxiliary analysis method and system based on three-dimensional facial digital measurement provided in this embodiment achieve, through end-to-end technical innovation in high-precision image processing, detail-enhanced modeling, physically based illumination rendering, and safe interactive simulation, the leap from experience-driven to data-driven practice in the field of cosmetic surgery.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention. It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (10)

1.基于三维面形数字化测量的美容整形辅助分析方法,其特征在于,包括以下步骤:1. A cosmetic surgery auxiliary analysis method based on three-dimensional facial digital measurement, characterized by comprising the following steps:获取医学影像数据集并采用高斯滤波算法进行噪声处理,若像素灰度值与邻域均值的差异超过预设阈值则判定为噪声点并进行平滑处理,获得第一影像数据集;Obtain a medical image dataset and use a Gaussian filter algorithm to perform noise processing. If the difference between the grayscale value of a pixel and the neighborhood mean exceeds a preset threshold, it is determined to be a noise point and smoothed to obtain a first image dataset;根据所述第一影像数据集构建能量函数,通过单点能量项与相邻点交互能量项的叠加计算对影像中的缺陷区域进行填补修复,获得第二影像数据集;constructing an energy function based on the first image data set, and filling and repairing defective areas in the image by superimposing single-point energy terms and adjacent-point interaction energy terms to obtain a second image data set;从所述第二影像数据集提取点云数据并构建梯度场,若点云密度低于预设阈值则执行局部加密处理,生成三维面部模型;extracting point cloud data from the second image dataset and constructing a gradient field, and performing local encryption processing if the point cloud density is lower than a preset threshold to generate a three-dimensional facial model;获取所述三维面部模型的网格顶点坐标与法向量信息,通过光照强度计算公式融合环境光分量、漫反射分量与镜面反射分量生成可视化图像;Obtaining mesh vertex coordinates and normal vector information of the three-dimensional facial model, and fusing ambient light components, diffuse reflection components, and specular reflection components using a light intensity calculation formula to generate a visual image;建立参数映射矩阵对用户输入的变换参数进行范围约束,若旋转角度超出预设范围则自动修正,通过坐标变换公式计算更新后的网格顶点位置;A parameter mapping matrix is established to constrain the range of the transformation parameters input by the user. 
If the rotation angle exceeds the preset range, it is automatically corrected and the updated mesh vertex position is calculated using the coordinate transformation formula;根据所述变换参数与材质属性参数实时更新所述可视化图像,输出面部特征分析结果。The visual image is updated in real time according to the transformation parameters and the material attribute parameters, and the facial feature analysis result is output.2.如权利要求1所述的基于三维面形数字化测量的美容整形辅助分析方法,其特征在于,所述获取医学影像数据集并采用高斯滤波算法进行噪声处理,若像素灰度值与邻域均值的差异超过预设阈值则判定为噪声点并进行平滑处理,获得第一影像数据集的步骤包括:2. The cosmetic surgery auxiliary analysis method based on 3D facial shape digital measurement according to claim 1, wherein the step of acquiring a medical image dataset and performing noise removal using a Gaussian filtering algorithm, wherein if the difference between a pixel's grayscale value and a neighborhood mean exceeds a preset threshold, the pixel is determined to be a noise point and smoothed, and the step of obtaining the first image dataset comprises:获取医学影像数据集并采用像素灰度值分析,从所述医学影像数据集中提取每个像素的像素灰度值,通过计算像素灰度值与邻域像素灰度均值的差值,得到第一灰度差值集;Obtaining a medical image dataset and performing pixel grayscale value analysis to extract a pixel grayscale value of each pixel from the medical image dataset, and obtaining a first grayscale difference value set by calculating a difference between the pixel grayscale value and a grayscale mean of neighboring pixels;根据所述第一灰度差值集进行噪声检测,若所述像素灰度值与邻域均值的差值超过预设阈值,则判定像素为噪声点,得到噪声点位置集;Performing noise detection based on the first grayscale difference set, if the difference between the pixel grayscale value and the neighborhood mean exceeds a preset threshold, determining that the pixel is a noise point, and obtaining a noise point position set;针对所述噪声点位置集采用高斯滤波算法进行平滑处理,通过对噪声点应用高斯滤波算法调整像素灰度值,得到第二影像集;A Gaussian filter algorithm is used to smooth the noise point position set, and the pixel grayscale values are adjusted by applying the Gaussian filter algorithm to the noise points to obtain a second image set;从所述第二影像集进行数据集生成,通过保存所述第二影像集的处理后数据,生成第一影像数据集。A 
data set is generated from the second image set, and a first image data set is generated by saving processed data of the second image set.3.如权利要求1所述的基于三维面形数字化测量的美容整形辅助分析方法,其特征在于,根据所述第一影像数据集构建能量函数,通过单点能量项与相邻点交互能量项的叠加计算对影像中的缺陷区域进行填补修复,获得第二影像数据集的步骤包括:3. The cosmetic surgery auxiliary analysis method based on 3D facial digital measurement according to claim 1, wherein the step of constructing an energy function based on the first image dataset and filling and repairing defective areas in the image by superimposing single-point energy terms and adjacent point interaction energy terms to obtain the second image dataset comprises:从所述第一影像数据集获取像素点的像素灰度值和纹理特征,采用区域分割算法对影像进行分区,通过计算每个区域的纹理特征均值与像素灰度值分布,确定包含缺陷的区域,得到缺陷区域集;Obtaining pixel grayscale values and texture features of pixel points from the first image data set, partitioning the image using a region segmentation algorithm, and determining regions containing defects by calculating the mean value of texture features and pixel grayscale value distribution of each region to obtain a defect region set;针对所述缺陷区域集,采用边界检测算法提取缺陷区域的边界轮廓,通过计算边界像素点的像素灰度值梯度,确定边界轮廓集;For the defect area set, a boundary detection algorithm is used to extract the boundary contours of the defect area, and a boundary contour set is determined by calculating the pixel grayscale value gradient of the boundary pixel points;根据所述边界轮廓集和纹理特征,构建能量函数,得到能量分布集;constructing an energy function based on the boundary contour set and texture features to obtain an energy distribution set;采用高斯滤波算法对所述能量分布集进行平滑处理,通过调整缺陷区域内像素点的像素灰度值,结合纹理特征进行填补修复,生成第二影像数据集。The energy distribution set is smoothed by using a Gaussian filtering algorithm, and the pixel grayscale values of the pixels in the defect area are adjusted and filled in and repaired in combination with texture features to generate a second image data set.4.如权利要求1所述的基于三维面形数字化测量的美容整形辅助分析方法,其特征在于,从所述第二影像数据集提取点云数据并构建梯度场,若点云密度低于预设阈值则执行局部加密处理,生成三维面部模型的步骤包括:4. 
4. The cosmetic surgery auxiliary analysis method based on three-dimensional facial shape digital measurement according to claim 1, wherein the step of extracting point cloud data from the second image dataset, constructing a gradient field, performing local densification if the point cloud density is below a preset threshold, and generating a three-dimensional facial model comprises:
obtaining the grayscale values and depth information of pixels in the second image dataset, and generating point cloud data containing three-dimensional spatial coordinates using a stereoscopic microscopy algorithm to obtain a point cloud dataset;
calculating the number of points per unit volume of the point cloud dataset, and, if the point cloud density is below the preset threshold, locally densifying the low-density regions using an interpolation algorithm to obtain a densified point cloud dataset;
calculating the grayscale gradient of each point in the densified point cloud dataset, and constructing a gradient field describing surface variation using a gradient descent algorithm to obtain a gradient field dataset; and
generating a triangular mesh from the gradient field dataset and the densified point cloud dataset using a mesh generation algorithm, and constructing a three-dimensional facial model by combining boundary contours and normal vectors to obtain a three-dimensional facial model dataset.
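A minimal sketch of the density check and local refinement described in claim 4, reading the claimed "local encryption processing" (局部加密) as local point-cloud densification rather than cryptographic encryption. The voxel-counting density estimate, the midpoint interpolation, and all names are illustrative assumptions:

```python
import numpy as np

def densify(points, voxel_size=1.0, min_points=4):
    """Count points per unit volume via a voxel grid; for each point in a
    voxel holding fewer than `min_points` points (a low-density region),
    insert the midpoint between it and its nearest neighbor."""
    idx = np.floor(points / voxel_size).astype(int)           # voxel index per point
    keys, counts = np.unique(idx, axis=0, return_counts=True)
    count_of = {tuple(k): c for k, c in zip(keys, counts)}
    new_pts = []
    for p, key in zip(points, map(tuple, idx)):
        if count_of[key] < min_points:                        # below density threshold
            d = np.linalg.norm(points - p, axis=1)
            d[d == 0] = np.inf                                # exclude the point itself
            nearest = points[np.argmin(d)]
            new_pts.append((p + nearest) / 2.0)               # midpoint interpolation
    if new_pts:
        return np.vstack([points, np.array(new_pts)])
    return points
```

An isolated point far from a dense cluster gets one interpolated midpoint added toward the cluster, while the cluster itself is left as-is.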
5. The cosmetic surgery auxiliary analysis method based on three-dimensional facial shape digital measurement according to claim 1, wherein the step of obtaining the mesh vertex coordinates and normal vector information of the three-dimensional facial model and generating a visualization image by fusing an ambient light component, a diffuse reflection component, and a specular reflection component through an illumination intensity calculation formula comprises:
obtaining the mesh vertex dataset of the three-dimensional facial model, and calculating the three-dimensional coordinates and normal vector of each mesh vertex using vector operations to obtain a mesh vertex attribute dataset;
calculating the ambient light component of each mesh vertex using an illumination model based on the mesh vertex attribute dataset and a preset light source position, and obtaining an ambient light intensity dataset in combination with the reflection coefficient of the surface material;
if the angle between a normal vector in the mesh vertex attribute dataset and the light source direction is less than a preset threshold, calculating the diffuse reflection component using the diffuse reflection formula I_d = k_d(N·L), where I_d denotes the diffuse reflection intensity, k_d the diffuse reflection coefficient, N the mesh vertex normal vector, and L the light source direction vector, to obtain a diffuse reflection intensity dataset; and
calculating the specular reflection component using the specular reflection formula I_s = k_s(R·V)^n based on the diffuse reflection intensity dataset and the viewing direction, and fusing the ambient light intensity dataset and the diffuse reflection intensity dataset to generate a visualization image dataset, where I_s denotes the specular reflection intensity, k_s the specular reflection coefficient, R the reflection vector, V the viewing direction vector, and n the specular exponent.
6. The cosmetic surgery auxiliary analysis method based on three-dimensional facial shape digital measurement according to claim 1, wherein the step of establishing a parameter mapping matrix to constrain the range of the user-input transformation parameters, automatically correcting the rotation angle if it exceeds the preset range, and calculating the updated mesh vertex positions using a coordinate transformation formula comprises:
obtaining a transformation parameter dataset input by the user and constructing a parameter mapping matrix through matrix operations to obtain an initial transformation matrix, the transformation parameter dataset comprising a rotation angle, a translation amount, and a scaling ratio;
determining whether the rotation angle in the initial transformation matrix exceeds a preset angle threshold, and, if so, correcting the rotation angle using linear interpolation to obtain a corrected transformation matrix;
calculating the mesh vertex dataset of the three-dimensional facial model using the coordinate transformation formula T(v) = M·v to obtain an updated mesh vertex dataset, where T(v) denotes the transformed mesh vertex coordinates, M the corrected transformation matrix, and v the original mesh vertex coordinates; and
generating the mesh of the three-dimensional facial model from the updated mesh vertex dataset in combination with preset rendering parameters to obtain a visualized mesh vertex position dataset.
7. The cosmetic surgery auxiliary analysis method based on three-dimensional facial shape digital measurement according to claim 1, wherein the step of updating the visualization image in real time according to the transformation parameters and the material attribute parameters and outputting a facial feature analysis result comprises:
obtaining the transformation parameter dataset and the material attribute dataset input by the user, and constructing an initial transformation matrix and an initial material mapping matrix through matrix operations;
if a parameter in the initial transformation matrix exceeds the preset range, correcting the transformation parameters using linear interpolation to obtain a corrected transformation matrix;
calculating the mesh vertices of the three-dimensional facial model with the corrected transformation matrix using the coordinate transformation formula T(v) = M·v to obtain an updated mesh vertex dataset, where T(v) denotes the transformed mesh vertex coordinates, M the corrected transformation matrix, and v the original mesh vertex coordinates;
updating the initial material mapping matrix using a texture mapping algorithm based on the material attribute dataset to obtain an updated material mapping matrix;
generating real-time updated visualization image data from the updated mesh vertex dataset and the updated material mapping matrix;
extracting facial feature points from the real-time updated visualization image data using a facial feature extraction algorithm to obtain a facial feature point set; and
calculating the relative positional relationships between the facial feature points using a geometric analysis method based on the facial feature point set to obtain the facial feature analysis result.
8. A cosmetic surgery auxiliary analysis system based on three-dimensional facial shape digital measurement, for implementing the cosmetic surgery auxiliary analysis method based on three-dimensional facial shape digital measurement according to any one of claims 1 to 7, comprising:
a first acquisition module, configured to acquire a medical image dataset and perform noise processing using a Gaussian filtering algorithm, determine a pixel to be a noise point and smooth it if the difference between its grayscale value and the neighborhood mean exceeds a preset threshold, and obtain a first image dataset;
a second acquisition module, configured to construct an energy function based on the first image dataset, fill and repair defective regions in the image through superposed calculation of single-point energy terms and adjacent-point interaction energy terms, and obtain a second image dataset;
a first generation module, configured to extract point cloud data from the second image dataset, construct a gradient field, perform local densification if the point cloud density is below a preset threshold, and generate a three-dimensional facial model;
a second generation module, configured to obtain the mesh vertex coordinates and normal vector information of the three-dimensional facial model and generate a visualization image by fusing an ambient light component, a diffuse reflection component, and a specular reflection component through an illumination intensity calculation formula;
a calculation module, configured to establish a parameter mapping matrix to constrain the range of the user-input transformation parameters, automatically correct the rotation angle if it exceeds the preset range, and calculate the updated mesh vertex positions using a coordinate transformation formula; and
an output module, configured to update the visualization image in real time according to the transformation parameters and the material attribute parameters, and output the facial feature analysis result.
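The formulas in claim 5, I_d = k_d(N·L) and I_s = k_s(R·V)^n, match the classic Phong reflection model. A per-vertex sketch under that reading follows; the coefficient values, the reflection-vector construction R = 2(N·L)N − L, and the function names are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def normalize(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def vertex_intensity(normal, light_dir, view_dir,
                     k_a=0.1, k_d=0.6, k_s=0.3, shininess=16, ambient=1.0):
    """Fuse ambient, diffuse (I_d = k_d (N.L)) and specular
    (I_s = k_s (R.V)^n) components for one mesh vertex (Phong sketch)."""
    N, L, V = normalize(normal), normalize(light_dir), normalize(view_dir)
    i_ambient = k_a * ambient
    ndotl = max(N.dot(L), 0.0)                 # clamp back-facing light
    i_diffuse = k_d * ndotl
    R = 2.0 * ndotl * N - L                    # reflection of L about N
    i_specular = k_s * max(R.dot(V), 0.0) ** shininess if ndotl > 0 else 0.0
    return i_ambient + i_diffuse + i_specular
```

With light, normal, and viewer all aligned the three terms sum to k_a + k_d + k_s; a back-facing light leaves only the ambient term.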
9. The cosmetic surgery auxiliary analysis system based on three-dimensional facial shape digital measurement according to claim 8, wherein the first acquisition module comprises:
a first acquisition unit, configured to acquire a medical image dataset, perform pixel grayscale value analysis, extract the grayscale value of each pixel from the medical image dataset, and obtain a first grayscale difference set by calculating the difference between each pixel's grayscale value and the grayscale mean of its neighboring pixels;
a second acquisition unit, configured to perform noise detection based on the first grayscale difference set, determine a pixel to be a noise point if the difference between its grayscale value and the neighborhood mean exceeds a preset threshold, and obtain a noise point position set;
a third acquisition unit, configured to smooth the noise point position set using a Gaussian filtering algorithm, adjust the grayscale values of the noise points by applying the Gaussian filter, and obtain a second image set; and
a first generation unit, configured to generate a dataset from the second image set by saving its processed data, thereby generating the first image dataset.
10. The cosmetic surgery auxiliary analysis system based on three-dimensional facial shape digital measurement according to claim 8, wherein the second acquisition module comprises:
a fourth acquisition unit, configured to acquire a medical image dataset, perform pixel grayscale value analysis, extract the grayscale value of each pixel from the medical image dataset, and obtain a first grayscale difference set by calculating the difference between each pixel's grayscale value and the grayscale mean of its neighboring pixels;
a fifth acquisition unit, configured to perform noise detection based on the first grayscale difference set, determine a pixel to be a noise point if the difference between its grayscale value and the neighborhood mean exceeds a preset threshold, and obtain a noise point position set;
a sixth acquisition unit, configured to smooth the noise point position set using a Gaussian filtering algorithm, adjust the grayscale values of the noise points by applying the Gaussian filter, and obtain a second image set; and
a second generation unit, configured to generate a dataset from the second image set by saving its processed data, thereby generating the first image dataset.
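The constrained transformation of claims 6 and 7 (range-check the rotation angle, then apply T(v) = M·v to the mesh vertices) can be sketched with a 2-D homogeneous transform. The 45-degree limit is a hypothetical preset, and simple clamping stands in for the claimed linear-interpolation correction:

```python
import math
import numpy as np

MAX_ANGLE = math.radians(45.0)   # hypothetical preset rotation-angle limit

def build_transform(angle, translation, scale):
    """Compose a 2-D homogeneous transform M from rotation angle,
    translation, and scaling ratio, correcting an out-of-range angle
    first (clamping used here in place of the claimed interpolation)."""
    angle = max(-MAX_ANGLE, min(MAX_ANGLE, angle))   # automatic correction
    c, s = math.cos(angle), math.sin(angle)
    tx, ty = translation
    return np.array([[scale * c, -scale * s, tx],
                     [scale * s,  scale * c, ty],
                     [0.0,        0.0,       1.0]])

def transform_vertices(M, verts):
    """Apply T(v) = M . v to an (n, 2) array of mesh vertex coordinates."""
    homo = np.hstack([verts, np.ones((len(verts), 1))])
    return (homo @ M.T)[:, :2]
```

For example, a pure scale-by-2 plus translation (2, 3) maps the vertex (1, 1) to (4, 5), and a requested 90-degree rotation is corrected down to the 45-degree limit before the matrix is built.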
CN202511190033.1A | 2025-08-25 | 2025-08-25 | Auxiliary analysis method and system for cosmetic surgery based on three-dimensional facial digital measurement | Pending | CN120672973A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202511190033.1A | 2025-08-25 | 2025-08-25 | Auxiliary analysis method and system for cosmetic surgery based on three-dimensional facial digital measurement

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202511190033.1A | 2025-08-25 | 2025-08-25 | Auxiliary analysis method and system for cosmetic surgery based on three-dimensional facial digital measurement

Publications (1)

Publication Number | Publication Date
CN120672973A | 2025-09-19

Family

ID=97051073

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202511190033.1A | Pending | CN120672973A (en) | 2025-08-25 | 2025-08-25 | Auxiliary analysis method and system for cosmetic surgery based on three-dimensional facial digital measurement

Country Status (1)

Country | Link
CN (1) | CN120672973A (en)

Similar Documents

Publication | Publication Date | Title
CN109584349B (en) | Method and apparatus for rendering material properties
US9514533B2 (en) | Method for determining bone resection on a deformed bone surface from few parameters
Deng et al. | A novel skull registration based on global and local deformations for craniofacial reconstruction
Deng et al. | A regional method for craniofacial reconstruction based on coordinate adjustments and a new fusion strategy
CN103345774B (en) | A modeling method of three-dimensional multi-scale vector quantization
CN118864736B (en) | Method and device for molding oral prosthesis model
Qian et al. | An automatic tooth reconstruction method based on multimodal data
Shetty et al. | BOSS: Bones, organs and skin shape model
CN120411404A (en) | A 3D oral and maxillofacial model reconstruction system based on multimodal data fusion
JP5954846B2 (en) | Shape data generation program, shape data generation method, and shape data generation apparatus
CN119445004B (en) | Liver image three-dimensional reconstruction system and method based on artificial intelligence technology
CN114931435B (en) | Three-dimensional model processing method and device and electronic equipment
Ropinski et al. | Internal labels as shape cues for medical illustration
Preim et al. | Visualization, visual analytics and virtual reality in medicine: State-of-the-art techniques and applications
Guven et al. | X2V: 3D organ volume reconstruction from a planar X-ray image with neural implicit methods
CN115222887A (en) | A design method for face-based craniofacial skeletal surgery planning
Lee et al. | Computer-aided prototype system for nose surgery
CN117934689B (en) | Multi-tissue segmentation and three-dimensional rendering method for fracture CT image
Drakopoulos et al. | Tetrahedral image-to-mesh conversion software for anatomic modeling of arteriovenous malformations
US20230360214A1 (en) | Technique for optimizing rendering parameters of overlays of medical images
CN120672973A (en) | Auxiliary analysis method and system for cosmetic surgery based on three-dimensional facial digital measurement
Tapp et al. | Generation of patient-specific, ligamentoskeletal, finite element meshes for scoliosis correction planning
Drakopoulos et al. | Image-to-mesh conversion method for multi-tissue medical image computing simulations
Oya et al. | 2D/3D deformable registration for endoscopic camera images using self-supervised offline learning of intraoperative pneumothorax deformation
Dotremont | From medical images to 3D model: processing and segmentation

Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
