Disclosure of Invention
The invention provides a cosmetic and plastic auxiliary analysis method and a cosmetic and plastic auxiliary analysis system based on three-dimensional surface shape digital measurement, which aim to solve at least one defect in the prior art.
One aspect of the invention relates to a cosmetic and plastic auxiliary analysis method based on three-dimensional surface shape digital measurement, comprising the following steps:
acquiring a medical image data set and performing noise processing by adopting a Gaussian filtering algorithm, judging a pixel as a noise point if the difference between its gray value and the neighborhood mean value exceeds a preset threshold value, and performing smoothing processing to obtain a first image data set;
constructing an energy function according to the first image data set, and filling and repairing a defect area in the image through superposition calculation of a single-point energy term and an adjacent-point interaction energy term to obtain a second image data set;
extracting point cloud data from the second image data set and constructing a gradient field, executing local densification processing if the point cloud density is lower than a preset threshold value, and generating a three-dimensional face model;
acquiring grid vertex coordinates and normal vector information of a three-dimensional face model, and generating a visual image by fusing an ambient light component, a diffuse reflection component and a specular reflection component through an illumination intensity calculation formula;
establishing a parameter mapping matrix to carry out range constraint on transformation parameters input by a user, automatically correcting the rotation angle if it exceeds a preset range, and calculating the updated grid vertex positions through a coordinate transformation formula;
and updating the visualized image in real time according to the transformation parameters and the material attribute parameters, and outputting a facial feature analysis result.
Further, the step of acquiring a medical image data set and performing noise processing by adopting a Gaussian filtering algorithm, judging a pixel as a noise point if the difference between its gray value and the neighborhood mean value exceeds a preset threshold value, and performing smoothing processing to obtain a first image data set comprises the following steps:
Acquiring a medical image data set, extracting a pixel gray value of each pixel from the medical image data set by adopting pixel gray value analysis, and obtaining a first gray difference value set by calculating a difference value between the pixel gray value and a neighborhood pixel gray average value;
noise detection is carried out according to the first gray level difference value set, if the difference value between the gray level value of the pixel and the neighborhood mean value exceeds a preset threshold value, the pixel is judged to be a noise point, and a noise point position set is obtained;
smoothing the noise point position set by adopting a Gaussian filter algorithm, adjusting the pixel gray value of each noise point by applying the Gaussian filter, to obtain a second image set;
and generating a first image data set by saving the processed data of the second image set.
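The threshold-based noise detection and Gaussian smoothing described in the steps above can be sketched as follows. This is a minimal NumPy illustration, not the patent's implementation: the 3x3 neighbourhood, the threshold of 30 grey levels, sigma, and all function names are our assumptions.

```python
import numpy as np

def detect_and_smooth_noise(image, threshold=30.0, sigma=1.0):
    """Flag pixels whose grey value deviates from the 3x3 neighbourhood
    mean by more than `threshold`, then replace only the flagged pixels
    with a Gaussian-weighted average of their neighbourhood."""
    img = image.astype(float)
    padded = np.pad(img, 1, mode='edge')
    h, w = img.shape
    # Sum of the 3x3 neighbourhood, centre pixel excluded from the mean
    neigh_sum = sum(padded[dy:dy + h, dx:dx + w]
                    for dy in range(3) for dx in range(3)) - img
    neigh_mean = neigh_sum / 8.0
    noise_mask = np.abs(img - neigh_mean) > threshold  # noise point set

    # Separable 3x3 Gaussian kernel for the smoothing step
    g = np.exp(-np.arange(-1, 2) ** 2 / (2 * sigma ** 2))
    kernel = np.outer(g, g)
    kernel /= kernel.sum()
    smoothed = sum(kernel[dy, dx] * padded[dy:dy + h, dx:dx + w]
                   for dy in range(3) for dx in range(3))
    out = img.copy()
    out[noise_mask] = smoothed[noise_mask]  # only noise points are adjusted
    return out, noise_mask

# A flat grey image (value 100) with a single salt-noise pixel at (2, 2)
demo = np.full((5, 5), 100.0)
demo[2, 2] = 255.0
cleaned, mask = detect_and_smooth_noise(demo)
```

The outlier is pulled back toward its neighbourhood while every other pixel, whose difference from the neighbourhood mean stays under the threshold, is left untouched.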
Further, the step of constructing an energy function according to the first image data set, and filling and repairing a defect area in the image through superposition calculation of the single-point energy term and the adjacent-point interaction energy term to obtain the second image data set comprises the following steps:
Acquiring pixel gray values and texture features of pixel points from a first image data set, partitioning an image by adopting a region segmentation algorithm, and determining a region containing defects by calculating the average value of the texture features and the distribution of the pixel gray values of each region to obtain a defect region set;
extracting boundary contours of the defect areas by adopting a boundary detection algorithm aiming at the defect area sets, and determining the boundary contour sets by calculating pixel gray value gradients of boundary pixel points;
Constructing an energy function according to the boundary contour set and the texture characteristics to obtain an energy distribution set;
and smoothing the energy distribution set by adopting a Gaussian filter algorithm, and performing filling and repair by adjusting the pixel gray values of pixel points in the defect area in combination with texture features, to generate a second image data set.
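An energy-minimisation fill of the kind outlined above can be sketched in a few lines. This is an illustrative choice, not the patent's exact formulation: we use a quadratic interaction energy between 4-neighbours and relax it Gauss-Seidel style, and since a defect pixel has no reliable observation, only the adjacent-point interaction term drives its value.

```python
import numpy as np

def repair_defects(image, defect_mask, iters=200):
    """Fill defect pixels by minimising a quadratic energy
    sum over adjacent pairs (x_i - x_j)^2 (interaction term), with known
    pixels held fixed by their observed values (single-point term).
    Each sweep sets a defect pixel to the mean of its 4-neighbours,
    which is the exact minimiser of the local quadratic energy."""
    out = image.astype(float).copy()
    out[defect_mask] = out[~defect_mask].mean()  # crude initialisation
    ys, xs = np.where(defect_mask)
    h, w = out.shape
    for _ in range(iters):
        for y, x in zip(ys, xs):
            neigh = [out[ny, nx] for ny, nx in
                     ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                     if 0 <= ny < h and 0 <= nx < w]
            out[y, x] = sum(neigh) / len(neigh)
    return out

# Horizontal grey ramp with one missing pixel; the repair should
# interpolate the ramp smoothly across the hole.
ramp = np.tile(np.arange(7, dtype=float) * 10, (7, 1))
hole = np.zeros_like(ramp, dtype=bool)
hole[3, 3] = True
damaged = ramp.copy()
damaged[3, 3] = 0.0
fixed = repair_defects(damaged, hole)
```

On this ramp the filled pixel converges to the value the surrounding gradient implies, so the repaired region joins the neighbourhood without a visible seam.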
Further, the step of extracting point cloud data from the second image data set and constructing a gradient field, executing local densification processing if the point cloud density is lower than a preset threshold value, and generating the three-dimensional face model comprises:
acquiring pixel gray values and depth information of pixel points in the second image data set, and generating point cloud data containing three-dimensional space coordinates by adopting a stereoscopic microscope algorithm to obtain a point cloud data set;
calculating the number of points per unit volume for the point cloud data set, and if the point cloud density is lower than a preset threshold value, locally densifying the low-density area by adopting an interpolation algorithm to obtain a densified point cloud data set;
calculating the pixel gray value gradient of each point in the densified point cloud data set, and constructing a gradient field describing surface change by adopting a gradient descent algorithm to obtain a gradient field data set;
and generating triangular grids from the gradient field data set and the densified point cloud data set by using a grid generation algorithm, and constructing a three-dimensional face model by combining boundary contours and normal vectors to obtain a three-dimensional face model data set.
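The density check and interpolation-based densification step can be sketched as below. The grid-cell density estimate, the threshold of 5 points per cell, and the midpoint-to-nearest-neighbour interpolation are all illustrative assumptions standing in for whatever density measure and interpolation algorithm the method actually uses.

```python
import numpy as np

def densify_point_cloud(points, min_density=5.0, cell=1.0):
    """Count points per unit-volume grid cell; wherever the count falls
    below `min_density`, insert the midpoint between each point in the
    sparse cell and its nearest neighbour (simple interpolation)."""
    cells = np.floor(points / cell).astype(int)
    keys, counts = np.unique(cells, axis=0, return_counts=True)
    sparse = {tuple(k) for k, c in zip(keys, counts) if c < min_density}
    new_pts = []
    for i, p in enumerate(points):
        if tuple(np.floor(p / cell).astype(int)) in sparse:
            d = np.linalg.norm(points - p, axis=1)
            d[i] = np.inf                      # exclude the point itself
            nn = points[np.argmin(d)]          # nearest neighbour
            new_pts.append((p + nn) / 2.0)     # interpolated midpoint
    if new_pts:
        return np.vstack([points, new_pts])
    return points

rng = np.random.default_rng(0)
dense = rng.uniform(0.0, 0.9, size=(8, 3))   # 8 points in one unit cell
sparse_pts = np.array([[5.1, 5.1, 5.1],      # only 2 points in a far cell
                       [5.4, 5.2, 5.3]])
cloud = np.vstack([dense, sparse_pts])
denser = densify_point_cloud(cloud)
```

Only the sparse cell gains points; the dense cluster already satisfies the threshold and is left as-is.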
Further, the step of obtaining mesh vertex coordinates and normal vector information of the three-dimensional face model, and fusing an ambient light component, a diffuse reflection component and a specular reflection component through an illumination intensity calculation formula to generate a visualized image includes:
obtaining a grid vertex data set of the three-dimensional face model, and calculating three-dimensional coordinates and normal vectors of each grid vertex by vector operation to obtain a grid vertex attribute data set;
According to the grid vertex attribute data set and the preset light source position, calculating the ambient light component of each grid vertex by adopting an illumination model, and combining the reflection coefficient of the surface material to obtain an ambient light intensity data set;
if the included angle between the normal vector in the grid vertex attribute data set and the light source direction is smaller than a preset threshold value, calculating a diffuse reflection component by adopting the diffuse reflection formula I_d = k_d (N·L), wherein I_d represents diffuse reflection intensity, k_d represents the diffuse reflection coefficient, N represents the grid vertex normal vector, and L represents the light source direction vector, so as to obtain a diffuse reflection intensity data set;
calculating specular reflection components from the diffuse reflection intensity data set and the viewing direction by using the specular reflection formula I_s = k_s (R·V)^n, and fusing the ambient light intensity data set and the diffuse reflection intensity data set to generate a visualized image data set, wherein I_s represents specular reflection intensity, k_s represents the specular reflection coefficient, R represents the reflection vector, V represents the viewing direction vector, and n represents the specular highlight exponent.
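The diffuse formula I_d = k_d (N·L) and specular formula I_s = k_s (R·V)^n quoted above, combined with an ambient term, amount to classic Phong shading. The sketch below evaluates that sum per vertex; the coefficient values and light intensity are illustrative defaults, not taken from the source.

```python
import numpy as np

def phong_intensity(normal, light_dir, view_dir,
                    k_a=0.1, k_d=0.7, k_s=0.4, shininess=32, i_light=1.0):
    """Per-vertex Phong shading: ambient + k_d*(N.L) + k_s*(R.V)^n,
    with negative dot products clamped to zero (surface facing away)."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    ambient = k_a * i_light
    diffuse = k_d * max(np.dot(n, l), 0.0) * i_light
    r = 2.0 * np.dot(n, l) * n - l          # reflection vector R
    specular = k_s * max(np.dot(r, v), 0.0) ** shininess * i_light
    return ambient + diffuse + specular

# Light and viewer both along the surface normal: full diffuse + specular
head_on = phong_intensity(np.array([0.0, 0.0, 1.0]),
                          np.array([0.0, 0.0, 1.0]),
                          np.array([0.0, 0.0, 1.0]))
# Light behind the surface: only the ambient component survives
back_lit = phong_intensity(np.array([0.0, 0.0, 1.0]),
                           np.array([0.0, 0.0, -1.0]),
                           np.array([0.0, 0.0, 1.0]))
```

With the defaults above, the head-on case yields 0.1 + 0.7 + 0.4 = 1.2 and the back-lit case only the ambient 0.1, which is the behaviour the three fused components are meant to produce.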
Further, the step of establishing a parameter mapping matrix to perform range constraint on transformation parameters input by a user, automatically correcting the rotation angle if it exceeds a preset range, and calculating the updated grid vertex position through a coordinate transformation formula comprises the following steps:
acquiring a transformation parameter data set input by a user, and constructing a parameter mapping matrix through matrix operation to obtain an initial transformation matrix, wherein the transformation parameter data set comprises a rotation angle, a translation amount and a scaling;
Judging whether the rotation angle in the initial transformation matrix exceeds a preset angle threshold value, and if so, correcting the rotation angle by adopting a linear interpolation method to obtain a corrected transformation matrix;
calculating the grid vertex data set of the three-dimensional face model by adopting the coordinate transformation formula T(v) = M·v to obtain an updated grid vertex data set, wherein T(v) represents the transformed grid vertex coordinates, M represents the corrected transformation matrix, and v represents the original grid vertex coordinates;
And generating grids of the three-dimensional face model by combining the updated grid vertex data set with preset rendering parameters to obtain a visualized grid vertex position data set.
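The steps above (constrain the rotation angle, build the matrix, apply T(v) = M·v) can be sketched in 2D homogeneous coordinates. The 15-degree limit echoes the mandibular-angle example later in this document; the clamp-instead-of-interpolate correction and the T·R·S composition order are our illustrative assumptions.

```python
import numpy as np

def build_transform(angle_deg, translate, scale, max_angle=15.0):
    """Clamp the requested rotation to +/- max_angle (range constraint),
    then assemble a 2D homogeneous transform M = T @ R @ S."""
    angle = max(-max_angle, min(max_angle, angle_deg))  # automatic correction
    a = np.radians(angle)
    rot = np.array([[np.cos(a), -np.sin(a), 0.0],
                    [np.sin(a),  np.cos(a), 0.0],
                    [0.0,        0.0,       1.0]])
    trans = np.array([[1.0, 0.0, translate[0]],
                      [0.0, 1.0, translate[1]],
                      [0.0, 0.0, 1.0]])
    scl = np.diag([scale, scale, 1.0])
    return trans @ rot @ scl, angle

def transform_vertices(vertices, m):
    """Apply T(v) = M . v to each vertex in homogeneous coordinates."""
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])
    return (m @ homo.T).T[:, :2]

# A 40-degree request is clamped to the 15-degree safety limit
m, used_angle = build_transform(40.0, translate=(1.0, 0.0), scale=1.0)
moved = transform_vertices(np.array([[0.0, 0.0], [1.0, 0.0]]), m)
```

The out-of-range request never reaches the mesh: vertices are rotated by the corrected 15 degrees and then translated, exactly as the constrained matrix dictates.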
Further, the step of updating the visualized image in real time according to the transformation parameters and the material attribute parameters and outputting the facial feature analysis result comprises the following steps:
acquiring a transformation parameter data set and a material attribute data set input by a user, and constructing an initial transformation matrix and an initial material mapping matrix through matrix operation;
if the parameters in the initial transformation matrix exceed the preset range, correcting the transformation parameters by adopting a linear interpolation method to obtain a corrected transformation matrix;
calculating the grid vertexes of the three-dimensional face model from the corrected transformation matrix by using the coordinate transformation formula T(v) = M·v to obtain an updated grid vertex data set, wherein T(v) represents the transformed grid vertex coordinates, M represents the corrected transformation matrix, and v represents the original grid vertex coordinates;
updating the initial material mapping matrix by adopting a material mapping algorithm according to the material attribute data set to obtain an updated material mapping matrix;
generating real-time updated visual image data through the updated grid vertex data set and the updated material mapping matrix;
Extracting facial feature points from the visual image data updated in real time by adopting a facial feature extraction algorithm to obtain a facial feature point set;
and calculating the relative position relation between the facial feature points by adopting a geometric analysis method according to the facial feature point set to obtain a facial feature analysis result.
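A toy version of the final geometric-analysis step might look like the following. The landmark names and the two chosen measurements (inter-eye distance and a left/right mouth symmetry ratio about the eye midline) are illustrative assumptions; the patent does not specify which relative-position measures are computed.

```python
import numpy as np

def analyse_features(landmarks):
    """Compute simple relative-position measures from 2D facial
    landmark points: inter-eye distance and a left/right symmetry
    ratio of the mouth corners about the facial midline."""
    eye_dist = np.linalg.norm(landmarks['eye_r'] - landmarks['eye_l'])
    midline_x = (landmarks['eye_l'][0] + landmarks['eye_r'][0]) / 2.0
    left = abs(landmarks['mouth_l'][0] - midline_x)
    right = abs(landmarks['mouth_r'][0] - midline_x)
    symmetry = min(left, right) / max(left, right)  # 1.0 = symmetric
    return {'eye_distance': eye_dist, 'mouth_symmetry': symmetry}

# Hypothetical landmark coordinates (mm) with a slightly offset mouth
pts = {'eye_l': np.array([-30.0, 40.0]), 'eye_r': np.array([30.0, 40.0]),
       'mouth_l': np.array([-20.0, -30.0]), 'mouth_r': np.array([22.0, -30.0])}
report = analyse_features(pts)
```

A symmetry ratio below 1.0, as here, quantifies the kind of left/right asymmetry the analysis result is meant to surface.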
Another aspect of the present invention relates to a cosmetic shaping auxiliary analysis system based on three-dimensional surface shape digital measurement, for implementing the cosmetic shaping auxiliary analysis method based on three-dimensional surface shape digital measurement, comprising:
The first acquisition module is used for acquiring a medical image data set and carrying out noise processing by adopting a Gaussian filter algorithm, judging a pixel as a noise point if the difference between its gray value and the neighborhood mean value exceeds a preset threshold value, and carrying out smoothing processing to obtain the first image data set;
The second acquisition module is used for constructing an energy function according to the first image data set, and filling and repairing a defect area in the image through superposition calculation of a single-point energy term and an adjacent-point interaction energy term, to obtain a second image data set;
The first generation module is used for extracting point cloud data from the second image data set and constructing a gradient field, and if the point cloud density is lower than a preset threshold value, performing local densification processing to generate a three-dimensional face model;
The second generation module is used for acquiring grid vertex coordinates and normal vector information of the three-dimensional face model, and generating a visual image by fusing an ambient light component, a diffuse reflection component and a specular reflection component through an illumination intensity calculation formula;
the calculation module is used for establishing a parameter mapping matrix to carry out range constraint on transformation parameters input by a user, automatically correcting the transformation parameters if the rotation angle exceeds a preset range, and calculating the updated grid vertex position through a coordinate transformation formula;
and the output module is used for updating the visual image in real time according to the transformation parameters and the material attribute parameters and outputting a facial feature analysis result.
Further, the first acquisition module includes:
The first acquisition unit is used for acquiring a medical image data set, extracting the pixel gray value of each pixel from the medical image data set by adopting pixel gray value analysis, and obtaining a first gray difference value set by calculating the difference value between the pixel gray value and the neighborhood pixel gray average value;
the second acquisition unit is used for carrying out noise detection according to the first gray level difference value set, and if the difference value between the gray level value of the pixel and the neighborhood mean value exceeds a preset threshold value, the pixel is judged to be a noise point, and a noise point position set is obtained;
The third acquisition unit is used for smoothing the noise point position set by adopting a Gaussian filter algorithm, and adjusting the pixel gray value by applying the Gaussian filter algorithm to the noise point to obtain a second image set;
and the first generation unit is used for generating a first image data set by saving the processed data of the second image set.
Further, the second acquisition module includes:
A fourth obtaining unit, configured to obtain a medical image dataset and obtain a first gray difference set by using pixel gray value analysis to extract a pixel gray value of each pixel from the medical image dataset and calculating a difference between the pixel gray value and a neighborhood pixel gray average value;
a fifth obtaining unit, configured to perform noise detection according to the first gray level difference set, and if the difference between the gray level value of the pixel and the neighborhood mean exceeds a preset threshold, determine that the pixel is a noise point, so as to obtain a noise point position set;
A sixth obtaining unit, configured to perform smoothing processing on the noise point position set by using a gaussian filtering algorithm, and adjust a pixel gray value by applying the gaussian filtering algorithm to the noise point, so as to obtain a second image set;
and a second generation unit for generating a first image data set by saving the processed data of the second image set.
The beneficial effects obtained by the invention are as follows:
The invention provides a cosmetic and plastic auxiliary analysis method and system based on three-dimensional surface shape digital measurement. The method performs noise processing on a medical image data set through Gaussian filtering, constructs an energy function to repair defect areas, extracts point cloud data and constructs a gradient field to generate a three-dimensional face model, acquires the grid vertex coordinates and normal vector information of the model, and fuses multiple illumination components to generate a visualized image. The invention also establishes a parameter mapping matrix to constrain the transformation parameters input by the user, updates the grid vertex positions through coordinate transformation, updates the visualized image in real time according to the transformation parameters and material properties, and outputs a facial feature analysis result. The method realizes construction and visualization from medical image data to a three-dimensional facial model, supports interactive operation by the user, can be widely applied in fields such as medical image analysis and facial modeling, and improves the precision and efficiency of medical image processing and three-dimensional visualization. The method and system have the following beneficial effects:
1. Improved precision of image preprocessing:
1. Robust noise suppression: through the Gaussian filter algorithm and dynamic threshold judgment (smoothing when the difference between a pixel gray value and the neighborhood mean value exceeds a preset threshold), interference such as salt-and-pepper noise and Gaussian noise in medical images can be effectively removed, making facial tissue boundaries clearer;
compared with traditional median filtering, Gaussian filtering suppresses noise while preserving edge details, and is particularly suitable for retaining fine structures such as facial skin texture and pores, avoiding feature distortion caused by over-smoothing;
2. Intelligent defect repair: energy-function-based defect filling (superposition calculation of single-point energy terms and adjacent-point interaction energy terms) automatically repairs local defects in the image such as light spots, scratches, acne pits and scars, so that the filled area joins seamlessly with the pixel gray values and texture features of the surrounding tissue;
this has application value in avoiding three-dimensional modeling errors caused by defects in the original image; for example, repairing a scar at the nasal wing allows the subsequent rhinoplasty simulation to fit the real facial structure more closely.
2. Detail enhancement of three-dimensional modeling:
1. Adaptive optimization of point cloud data: gradient field analysis and local densification processing (automatic densification when the point cloud density falls below a threshold) enhance the point cloud density at key facial features (such as the eye corners, lip lines and nose bridge), and solve the point cloud sparseness of traditional laser scanning in low-curvature areas (such as the cheeks);
data comparison: in regions of large curvature change such as the nose tip, the point cloud density can be increased from the traditional method's 50 points/cm² to 200 points/cm², reducing the model surface error to within 0.1 mm;
2. Realistic physical illumination simulation: combining the ambient light, diffuse reflection and specular reflection components in an illumination model (such as the Phong illumination model) truly restores the optical characteristics of facial skin (such as the oily sheen of the forehead and the matte texture of the cheeks), avoiding the plastic look of traditional three-dimensional models;
clinical value: doctors can observe the three-dimensionality of the face through light and shadow changes, for example judging whether the light-and-shadow transition after malar ("apple muscle") filling is natural, reducing the deviation between the postoperative effect and the expected effect.
3. Safety and interaction efficiency of parameter control:
1. Constraint mechanism for transformation parameters: the parameter mapping matrix limits the range of rotation angles (such as the zygomatic arch inward-push angle and the mandibular angle rotation amplitude) and translation distances (such as the nasal bridge augmentation length), and automatically corrects out-of-range values (for example, limiting the mandibular angle rotation to ≤ 15° to avoid nerve injury risk), blocking unreasonable operations at the algorithm level;
risk control: safety thresholds are preset in combination with an anatomical database; for example, in hump nose surgery, if the upward rotation angle of the nose tip exceeds 30°, the nostrils may become exposed;
2. Immersive real-time interaction: grid vertex positions are calculated in real time from the coordinate transformation formula (rotation matrix and translation vector); when the user adjusts parameters (such as a slider changing the chin length), the three-dimensional model is updated at a frame rate of 60 fps, achieving a "what you see is what you get" simulation effect;
doctor-patient communication optimization: the patient can intuitively observe the facial changes of different shaping schemes (such as comparing the heights of two rhinoplasty prostheses), and the doctor can quickly verify the aesthetic proportions of a design scheme (such as the "three courts, five eyes" facial proportion standard) through real-time rendering.
4. Comprehensive benefits of clinical application:
1. Accuracy of the surgical plan: the combination of a high-precision three-dimensional model (error < 0.3 mm) and illumination simulation allows quantitative analysis of indexes such as facial asymmetry (e.g. the width difference between the left and right cheeks) and skin laxity, providing data support for a personalized surgical plan;
2. Preoperative risk prediction: by modeling the effects of different surgical parameters in advance, potential problems (such as prosthesis compatibility with surrounding tissue and the risk of neurovascular compression after implantation) can be discovered early, shortening intraoperative adjustment time and reducing surgical risk by about 25%;
3. Efficient use of medical resources: the automated image processing and modeling pipeline (less than 10 minutes from image input to three-dimensional model) is far more efficient than traditional manual measurement (1-2 hours), and is suitable for large-scale preoperative cosmetic and plastic evaluation.
In summary, the cosmetic and plastic auxiliary analysis method and system based on three-dimensional surface shape digital measurement provided by the invention achieve, through whole-process technical innovation in high-precision image processing, detail-enhanced modeling, physically based illumination rendering and safe interactive simulation, the leap from experience-driven to data-driven practice in the cosmetic and plastic surgery field.
Detailed Description
In order to better understand the above technical solutions, the following detailed description will be given with reference to the accompanying drawings and specific embodiments.
As shown in fig. 1, a first embodiment of the present invention proposes a cosmetic and plastic auxiliary analysis method based on three-dimensional surface shape digital measurement, comprising the following steps:
Step S100: acquiring a medical image data set and performing noise processing by adopting a Gaussian filter algorithm, judging a pixel as a noise point if the difference between its gray value and the neighborhood mean value exceeds a preset threshold value, and performing smoothing processing to obtain a first image data set.
Medical image datasets refer to sets of digitized images acquired through various medical imaging techniques for recording internal or external structures and physiological functions of the human body. These datasets contain a large amount of anatomical, pathological or functional information, which is the core fundamental data for medical diagnosis, treatment planning, medical research and medical technology development.
The Gaussian filter algorithm (Gaussian Filter Algorithm) is a linear smoothing filter algorithm based on a Gaussian function, and belongs to a classical technology in the field of signal processing and image processing. The core idea is to convolve the image or signal with a Gaussian Kernel (Gaussian Kernel) to reduce noise, smooth details by weighted averaging the neighborhood pixel values, while preserving the edge and structural information of the image as much as possible.
The pixel gray value (Pixel Grayscale Value) is a numerical value for representing the brightness of a single pixel in an image, and is a basic attribute of a gray-scale image (an image containing no color information).
The neighborhood mean (Neighborhood Mean) is a local statistic commonly used in image processing and computer vision to describe the pixel gray value averaging characteristics of a pixel point in an image and its surrounding neighboring pixels. The core idea is to extract local features of the image or suppress noise by analyzing the gray level distribution of the local area of the pixel.
Noise points (Noise Pixels) refer to abnormal pixel points in an image, which are obviously inconsistent with the pixel gray values or texture features of surrounding pixels, and are generally introduced by factors such as imaging equipment defects, transmission interference and environmental interference. Noise points interfere with the visual effect of the image and subsequent processing (e.g., image analysis, object detection), and therefore require suppression by filtering, denoising and similar algorithms.
The smoothing process (Smoothing Processing) is a technique for smoothing the entire image or a partial region by suppressing image noise and weakening local gray scale fluctuation between pixels in the image processing. The core goal is to reduce high frequency details (e.g., noise, fine texture) in the image while preserving as much low frequency information as possible (e.g., object contours, large scale structures). The smoothing process is widely applied to scenes such as image denoising, preprocessing, denoising before feature extraction and the like.
Step S200: constructing an energy function according to the first image data set, and filling and repairing defect areas in the image by superposition calculation of the single-point energy term and the adjacent-point interaction energy term, to obtain a second image data set.
The Energy Function (Energy Function) is defined and applied differently in different fields, but the core idea is to map the state or attribute of the system into a numerical value (Energy value) through a mathematical Function, so as to describe the stability, cost, similarity or optimization objective of the system. The energy function is often used as an objective function of an optimization problem to solve for the optimal state of the system by minimizing or maximizing the energy value.
The image defect area is filled and repaired by superposition calculation of the single-point energy term and the adjacent-point interaction energy term, an image repairing method based on energy function optimization. Its core is to convert the filling problem of defect areas (such as scratches, noise points and missing pixels) in an image into an energy minimization problem: the local characteristics and neighborhood dependencies of the pixels are modeled by defining two types of energy terms, and the optimal filling values are finally solved by an optimization algorithm, so that the repaired image is visually coherent and natural.
In computer vision, image processing, and energy optimization models (e.g., Markov random field, conditional random field), the single point energy term (Unary Energy Term) is the fundamental component of the energy function, describing the degree or cost of matching of an individual pixel/point's own properties to the target state. The single point energy term characterizes the "energy" of a single point independent of its surrounding environment (i.e., its independent cost or preference), and is one of the core elements in constructing a global energy function.
In computer vision, image processing, and energy optimization models (e.g., Markov random field, conditional random field), the adjacent point interaction energy term (Pairwise Energy Term) is a key component of the energy function, describing the contribution of label relationships between adjacent pixels/points to the overall energy. The adjacent point interaction energy term characterizes the dependency or constraint between adjacent points; together with the single point energy term, it lets the model balance local rationality (single-point term) against global consistency (interaction term).
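The two kinds of energy term described above are conventionally combined into one objective. This is an illustrative Markov-random-field form; the weight λ and neighbourhood set N are our notation, not symbols taken from the source:

```latex
E(x) \;=\; \sum_{i} E_{1}(x_i) \;+\; \lambda \sum_{(i,j) \in \mathcal{N}} E_{2}(x_i, x_j)
```

Here E_1 is the single-point (unary) term scoring pixel i on its own, E_2 is the adjacent-point interaction (pairwise) term scoring each neighbouring pair (i, j), and λ trades off local fidelity against neighbourhood consistency; minimising E(x) yields the filled image.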
Step S300: extracting point cloud data from the second image data set and constructing a gradient field, executing local densification processing if the point cloud density is lower than a preset threshold value, and generating a three-dimensional face model.
Point cloud data is a data set consisting of discrete points in three-dimensional space, each point typically containing coordinates (x, y, z) and attributes (e.g., color, intensity, etc.), and is widely used in the fields of laser radar, photogrammetry, three-dimensional scanning, etc.
The gradient field is a vector field used for describing the change rate (namely the steepness and the direction) of the surface of the point cloud in a local area, each point corresponds to a gradient vector, the size of the gradient vector represents the local change amplitude, and the direction points to the direction in which the function grows most rapidly (generally perpendicular to the normal vector of the curved surface).
Local densification processing based on a point cloud density threshold is a point cloud data preprocessing strategy aiming to improve the spatial uniformity and detail integrity of the point cloud by automatically identifying low-density regions and supplementing sampling points. Its core logic is to calculate the density value of each local area of the point cloud, compare it with a preset threshold value, and perform a densification operation on areas whose density is below the threshold, so as to meet the data density requirements of subsequent processing (such as three-dimensional reconstruction and surface modeling).
In the field of point cloud processing, the point cloud density (Point Cloud Density) is used for describing the distribution density of points in point cloud data, and is an important index for measuring the quality, geometric structure characteristics or spatial sampling characteristics of the point cloud.
The three-dimensional facial model is a three-dimensional data structure which is constructed by a digitizing technology and can accurately describe the geometric shape, texture characteristics and dynamic expression of the human face. The three-dimensional face model integrates information such as space coordinates, surface details, topological relations and the like of the face into a visual and interactive model in a mathematical or computer recognizable form, and is widely applied to the fields such as computer graphics, virtual reality, biomedicine, security recognition, film and television special effects and the like.
Step S400, acquiring grid vertex coordinates and normal vector information of the three-dimensional face model, and fusing an ambient light component, a diffuse reflection component and a specular reflection component through an illumination intensity calculation formula to generate a visualized image.
In a three-dimensional face model, mesh vertex coordinates and normal vector information are core data describing the model geometry and surface characteristics, which together determine the shape, lighting effect, and rendering realism of the model.
The normal vector is a unit vector perpendicular to the model surface at a given point; it describes the orientation and curvature of the surface, and is divided into grid vertex normal vectors and face normal vectors.
Generating the visualized image by fusing the ambient light component, the diffuse reflection component and the specular reflection component through an illumination intensity calculation formula is one of the core steps of photorealistic rendering (Photorealistic Rendering) in computer graphics. Its essence is to mathematically model the different types of lighting components in a scene (ambient light, diffuse light and specular light) based on a lighting model, and to obtain the final illumination intensity of each point on the object surface through superposition, thereby generating a visualized image with stereoscopic impression and realism.
Ambient light (Ambient Light) simulates light that is uniformly distributed after multiple scatterings in the environment; it is independent of the light source direction and the surface orientation of the object, and can be regarded as the base illumination of the scene.
Diffuse light (Diffuse Light) simulates the uniform scattering of light in all directions when it strikes a rough surface; its intensity depends on the angle between the light source direction and the surface normal vector of the object.
Specular light (Specular Light) simulates the directional reflection of light striking a smooth surface (e.g., metallic or glass highlight effects); its intensity depends on the angle between the observer's viewing direction and the direction of the reflected light.
The visual image refers to a technical product which converts data, information or abstract concepts into visual forms visible to human eyes through a graphical means so as to intuitively and efficiently transfer knowledge, express rules or present scenes. The method converts complex data or contents (such as a three-dimensional model, a simulation result, statistical information and the like) which are difficult to directly understand into a visual symbology which is easy to perceive and read through visual elements such as colors, shapes, textures, spatial relations and the like.
Step S500, establishing a parameter mapping matrix to apply range constraints to transformation parameters input by the user, automatically correcting the rotation angle if it exceeds a preset range, and calculating updated grid vertex positions through a coordinate transformation formula.
The parameter mapping matrix is a mechanism for mapping original parameters input by a user to target parameter ranges through mathematical transformation (such as matrix operation in linear algebra), and the core goal is to carry out range constraint on the parameters so as to ensure that input values conform to valid intervals (such as physical feasible regions, geometric rationality or business rules) preset by a system.
The parameter mapping matrix constructs a bridge between the user input space and the legal space of the system through mathematical transformation, and has the core value of retaining the directionality and continuity of the user intention while restricting the parameter range. The mechanism is widely applied to the fields of computer graphics, man-machine interaction, physical simulation and the like, not only ensures the stability of the system, but also improves the naturalness and predictability of user operation.
Transformation parameters refer to quantization parameters used to describe object or space geometric transformations (e.g., position, orientation, size, shape changes) in the fields of computer graphics, robotics, mathematical modeling, physical simulation, etc.
The automatic correction mechanism for the rotation angle is a core means of ensuring that the angle value is legal through mathematical transformation or control logic; its central aim is to balance the effectiveness of the constraint against preserving the ordering of the user's intent. Which correction method is selected depends on scene requirements (such as whether angle jumps are allowed and whether the quantity is periodic), ultimately optimizing both system stability and interaction experience. When a rotation angle input by the user or computed by the system exceeds the preset legal range, the angle value is automatically adjusted through mathematical transformation or rules so that it falls within the valid interval, thereby ensuring system stability, geometric rationality and interaction safety.
The calculation of updated mesh vertex positions by means of a coordinate transformation formula is a core operation of geometric transformation (Geometric Transformation) in computer graphics, robotics and geometric modeling. The essence is that the original grid vertex coordinates are mapped to new coordinate positions through a mathematical formula so as to realize geometric transformation such as translation, rotation, scaling, miscut, projection and the like of the object, thereby changing the spatial position, direction, size or shape of the object.
Step S600, updating the visualized image in real time according to the transformation parameters and the material attribute parameters, and outputting a facial feature analysis result.
Updating the visualized image in real time means dynamically updating the visual effect of the three-dimensional face model through the graphics rendering engine according to the input transformation parameters (such as translation, rotation and scaling of the face model) and material attribute parameters (such as skin color, texture and glossiness). The transformation parameters describe the spatial pose and shape changes of the face model, while the material attribute parameters describe the optical properties and texture of the face surface.
Facial feature analysis analyzes the real-time rendered images or the three-dimensional model to extract quantitative results for facial geometric features, expression state and material matching degree, which are used to drive interaction, evaluation or decision-making.
Further, in the cosmetic and plastic auxiliary analysis method based on three-dimensional surface shape digital measurement provided in this embodiment, step S100 includes:
Step S110, a medical image data set is obtained, pixel gray value analysis is adopted, pixel gray values of each pixel are extracted from the medical image data set, and a first gray difference set is obtained by calculating the difference between the pixel gray values and the neighborhood pixel gray average value.
For example, in medical image processing, acquiring a medical image dataset typically involves extracting CT or MRI images from a hospital PACS (Picture Archiving and Communication System) or from a public dataset such as LIDC-IDRI (Lung Image Database Consortium).
Assuming a set of chest CT images is acquired, 512x512 pixels in size, with a range of pixel gray values between 0 and 255. The pixel gray value of each pixel reflects the tissue density, with the lung region typically exhibiting a lower pixel gray value and the bone region having a higher pixel gray value. This data acquisition ensures the reliability and consistency of subsequent analysis, contributing to accurate diagnosis.
Specifically, a 3x3 neighborhood window may be employed to extract the pixel gray value of each pixel and calculate its difference from the neighborhood pixel mean. For example, a pixel has a gray value of 150, and 8 neighboring pixels have gray values of 145, 148, 152, 147, 149, 146, 151, 150, respectively, and a mean value of 148.5, and a difference value of 1.5. This process generates a first gray difference set that reflects local features of gray variations in the image, helping to identify outliers.
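The neighborhood-difference computation of Step S110 can be sketched as follows. This is a minimal Python/NumPy illustration assuming a 3x3 window; the function name `gray_difference_set` is ours, not from the source:

```python
import numpy as np

def gray_difference_set(img):
    """For each interior pixel, compute |gray value - mean of its 8 neighbors|.

    img: 2D array of pixel gray values.
    Returns an array of the same shape (border pixels left at 0).
    """
    img = img.astype(float)
    diff = np.zeros_like(img)
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = img[y - 1:y + 2, x - 1:x + 2]
            neigh_mean = (window.sum() - img[y, x]) / 8.0  # mean of the 8 neighbors
            diff[y, x] = abs(img[y, x] - neigh_mean)
    return diff

# The worked example from the text: center pixel 150, neighbor mean 148.5
patch = np.array([[145, 148, 152],
                  [147, 150, 149],
                  [146, 151, 150]], dtype=float)
d = gray_difference_set(patch)
print(d[1, 1])  # 1.5
```

The resulting array is the "first gray difference set" fed into the noise-detection step.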
Step S120, performing noise detection according to the first gray difference set; if the difference between a pixel's gray value and its neighborhood mean exceeds a preset threshold, the pixel is determined to be a noise point, obtaining a noise point position set.
In one possible implementation, noise detection is accomplished by setting a threshold. Assuming the preset threshold is 10, if the difference between a pixel's gray value and its neighborhood mean exceeds 10, the pixel is judged to be a noise point. For example, a pixel with a gray value of 200 and a neighborhood mean of 150 has a difference of 50, far exceeding the threshold, and is labeled as noise. All noise points constitute the noise point position set. This method can effectively distinguish normal tissue from noise interference and improve image quality.
Step S130, smoothing the pixels in the noise point position set by adopting a Gaussian filtering algorithm, adjusting the pixel gray value of each noise point, to obtain a second image set.
For example, a gaussian filter algorithm is applied to the noise point location set, optionally with a gaussian kernel with a standard deviation of 1.5 and a window size of 5x5. Assuming that the pixel gray value of a noise point is 200, the pixel gray value is adjusted to be a neighborhood weighted average value after Gaussian filtering, such as 180. The smoothing process reduces the interference of noise on the image, reserves edge information, generates a second image set, and improves the visual definition of the image and the accuracy of subsequent analysis.
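The selective smoothing described above might look like the following sketch, assuming a hand-rolled 5x5 Gaussian kernel with standard deviation 1.5 and the filter applied only at flagged noise positions (function names and the edge-padding choice are our assumptions):

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.5):
    """Normalized 2D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def smooth_noise_points(img, noise_mask, size=5, sigma=1.5):
    """Replace only flagged noise pixels with the Gaussian-weighted average
    of their neighborhood; all other pixels are kept as-is."""
    img = img.astype(float)
    out = img.copy()
    k = gaussian_kernel(size, sigma)
    r = size // 2
    pad = np.pad(img, r, mode="edge")
    for y, x in zip(*np.nonzero(noise_mask)):
        window = pad[y:y + size, x:x + size]   # window centered on (y, x)
        out[y, x] = (window * k).sum()
    return out

img = np.full((7, 7), 150.0)
img[3, 3] = 200.0                      # isolated bright noise point
mask = np.abs(img - 150) > 10          # threshold detection from Step S120
smoothed = smooth_noise_points(img, mask)
print(smoothed[3, 3])                  # pulled back toward the neighborhood level
```

Because only masked pixels are rewritten, edge information elsewhere in the image is preserved, matching the behavior described in the text.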
Step S140, storing the processed data of the second image set as a data set, generating the first image data set.
Specifically, the second image set of processed data is stored for generating the first image data set. For example, the processed CT image is stored in DICOM format, containing metadata such as patient ID and scan parameters. The saved data set may be used for deep learning model training or clinical diagnosis. The method ensures data consistency and is convenient for subsequent analysis and model development.
In one possible implementation, the choice of parameters for Gaussian filtering has a significant impact on the result. A smaller standard deviation preserves detail better and suits fine-structure analysis, while a larger standard deviation suits the removal of obvious noise.
Preferably, the parameters may be dynamically adjusted according to the type of image, ensuring an optimal smoothing effect. The flexibility improves the adaptability of the algorithm and meets the requirements of different clinical scenes.
For example, the above processing procedure can significantly improve the accuracy of nodule detection in lung CT images. After noise reduction, nodule edges are clearer and the misdiagnosis rate is reduced. Meanwhile, the generated first image data set provides high-quality input for AI-assisted diagnosis, helping to improve diagnostic efficiency and reliability. Through multi-step collaborative optimization, the method ensures the robustness and practicality of medical image processing.
Further, in the cosmetic and plastic auxiliary analysis method based on three-dimensional surface shape digital measurement provided in this embodiment, step S200 includes:
Step S210, obtaining pixel gray values and texture features of pixel points from a first image data set, partitioning an image by adopting a region segmentation algorithm, and determining a region containing defects by calculating the average value of the texture features and the distribution of the pixel gray values of each region to obtain a defect region set.
In the medical image processing field, data may be extracted from a chest CT image when pixel gray values and texture features of pixel points are acquired from a first image dataset. Assuming an image size of 512x512 pixels, the pixel gray value range is 0-255. The texture features can reflect the gray level change rule of the pixel point neighborhood through the local binary pattern or gray level co-occurrence matrix calculation. Illustratively, for a certain lung region pixel, the pixel gray value is 120, the neighborhood forms a specific texture mode, and the characteristic value is obtained after quantization and is used for subsequent segmentation.
The region segmentation algorithm may employ a graph-based approach to divide the image into regions such as lung, pleural and skeletal regions. Specifically, each region calculates a texture feature mean and a pixel gray value distribution. For example, the average value of the texture features of the lung region is 0.8, the average value of the gray scale is 100, the average value of the texture of a certain region is 1.2, the gray scale distribution deviates from the normal range, and the region which is judged to contain the defect forms a defect region set. This process ensures accurate identification of defective areas.
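A much-simplified stand-in for the region statistics above can be sketched as follows: the image is partitioned into fixed blocks (instead of a graph-based segmentation) and a block whose mean gray value falls outside an expected range is flagged as a candidate defect region. The function name, block size and gray range are illustrative assumptions:

```python
import numpy as np

def flag_defect_blocks(img, block=8, gray_lo=80, gray_hi=130):
    """Partition the image into square blocks and flag those whose mean gray
    value falls outside [gray_lo, gray_hi] as candidate defect regions."""
    h, w = img.shape
    defects = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            region = img[y:y + block, x:x + block]
            if not (gray_lo <= region.mean() <= gray_hi):
                defects.append((y, x))      # top-left corner of the flagged block
    return defects

img = np.full((32, 32), 100.0)   # "normal" tissue, mean gray 100
img[8:16, 8:16] = 200.0          # one block deviates strongly from the norm
print(flag_defect_blocks(img))   # [(8, 8)]
```

A production implementation would add a texture statistic (e.g., an LBP or co-occurrence feature, as the text describes) to the per-region test rather than relying on gray mean alone.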
Step S220, extracting the boundary contour of the defect area by adopting a boundary detection algorithm aiming at the defect area set, and determining the boundary contour set by calculating the pixel gray value gradient of the boundary pixel points.
In one embodiment, a boundary detection algorithm, such as the Canny algorithm, is applied to the set of defect regions to extract the boundary contours of the defect regions. Illustratively, the gray value of the pixel in a certain defect area is suddenly changed from 100 to 180, and after the gradient of the gray value of the pixel is calculated, the pixel with a higher gradient value is marked as a boundary point to form a boundary contour set.
It should be noted that the gradient calculation in combination with the texture feature can improve the accuracy of boundary detection. For example, a higher texture feature value at a boundary point indicates that it may be a lesion edge, rather than a noise disturbance. The boundary contour set provides accurate region localization for subsequent processing.
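The gradient part of the boundary extraction can be illustrated with a minimal sketch. Instead of the full Canny pipeline named in the text, this version simply thresholds the central-difference gradient magnitude; the threshold value is an assumption:

```python
import numpy as np

def boundary_points(img, grad_thresh=30.0):
    """Mark pixels whose gray-value gradient magnitude exceeds a threshold
    as boundary points (a simplified stand-in for the Canny step)."""
    img = img.astype(float)
    gy, gx = np.gradient(img)          # central-difference gradients per axis
    mag = np.hypot(gx, gy)             # gradient magnitude
    return mag > grad_thresh

img = np.full((10, 10), 100.0)
img[:, 5:] = 180.0                     # step edge 100 -> 180, like the text's example
pts = boundary_points(img)
print(pts[:, 4:6].all())               # columns adjacent to the jump are boundary
```

Full Canny adds Gaussian pre-smoothing, non-maximum suppression and hysteresis thresholding on top of this gradient test.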
Step S230, constructing an energy function according to the boundary contour set and the texture features to obtain an energy distribution set.
Constructing an energy function based on the boundary contour set and the texture features may optimize the feature description of the defect region by minimizing the energy function. The energy function of a defect area is assumed to combine the gray gradient and the texture feature to generate an energy distribution set, and the abnormal degree of pixels in the area is reflected. Specifically, a higher pixel energy value indicates that it deviates from normal tissue characteristics. It should be noted that the energy distribution set provides a quantitative basis for subsequent repair.
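One plausible single-point energy of this kind combines a gray-gradient term with a texture-deviation term. The weights `alpha`/`beta` and the specific texture statistic are assumptions, since the source does not give the exact functional form:

```python
import numpy as np

def energy_map(img, texture, alpha=1.0, beta=1.0):
    """Single-point energy: weighted sum of gray-gradient magnitude and the
    deviation of a per-pixel texture feature from its image-wide mean.

    `texture` is any per-pixel texture feature map (e.g., an LBP response).
    """
    gy, gx = np.gradient(img.astype(float))
    grad_term = np.hypot(gx, gy)
    texture_term = np.abs(texture - texture.mean())
    return alpha * grad_term + beta * texture_term

img = np.full((8, 8), 100.0)
img[4, 4] = 180.0                        # pixel deviating from normal tissue
texture = np.zeros((8, 8)); texture[4, 4] = 1.0
E = energy_map(img, texture)
print(E[4, 4] > E[0, 0])                 # deviating pixel carries higher energy
```

As the text notes, higher energy values mark pixels that deviate from normal tissue characteristics, giving the repair step a quantitative target.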
Step S240, smoothing the energy distribution set by adopting a Gaussian filtering algorithm, and performing filling and repair by adjusting the pixel gray values of pixel points in the defect area in combination with texture features, to generate a second image data set.
In one embodiment, the energy distribution set is smoothed using a gaussian filter algorithm, with a standard deviation of 1.2 and a window size of 5x5. For example, the gray value of the original pixel of a certain pixel is 160, the gray value of the original pixel is 150 after being smoothed, and filling and repairing are carried out by combining the texture features, so that a second image data set is generated.
Preferably, the pixel gray value can be dynamically adjusted according to the texture characteristics during repair, so that the natural connection between the repair area and surrounding tissues is ensured. For example, the pixel gray values of the repaired lung area are uniformly distributed, and the texture characteristics are consistent with normal tissues. The method improves the quality of the image data set and provides reliable data support for subsequent diagnosis.
In one embodiment, the second image dataset may be stored in DICOM format, including patient ID and scan parameters, for clinical use. Illustratively, the restored CT images exhibit higher regional consistency in pulmonary nodule detection, facilitating subsequent analysis. It should be noted that, the introduction of texture features makes the repair process more targeted, reducing the risk of erroneous repair. Such multi-step collaborative processing ensures the integrity and practicality of the image data.
Further, in the cosmetic and plastic auxiliary analysis method based on three-dimensional surface shape digital measurement provided in this embodiment, step S300 includes:
Step S310, acquiring pixel gray values and depth information of pixel points in the second image data set, and generating point cloud data containing three-dimensional spatial coordinates by adopting a stereo vision algorithm, to obtain a point cloud data set.
In the medical image processing field, when acquiring the pixel gray values and depth information of pixel points in the second image data set, data may be extracted from a skull CT image. Assuming the image size is 512x512 pixels and the pixel gray value range is 0-255, the depth information is generated through multi-layer CT scanning and reflects the position of each pixel point in three-dimensional space. The stereo vision algorithm converts this information into point cloud data, generating a point cloud data set comprising three-dimensional spatial coordinates.
Specifically, each pixel is assigned an x, y, z coordinate and a pixel gray value, for example, the pixel of a certain skull region has a coordinate of (100,120,50) and the pixel gray value has a value of 150. The point cloud dataset provides a basis for subsequent three-dimensional modeling.
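The lifting of pixels into 3D points can be sketched as follows, assuming depth is already available per pixel (the stereo-matching step itself is omitted); the function name and `spacing` parameter are illustrative:

```python
import numpy as np

def image_to_point_cloud(gray, depth, spacing=1.0):
    """Lift each pixel (x, y) with per-pixel depth z into a 3D point
    (x*spacing, y*spacing, z), keeping the gray value as a per-point attribute."""
    h, w = gray.shape
    ys, xs = np.mgrid[0:h, 0:w]                # row and column index grids
    pts = np.stack([xs.ravel() * spacing,
                    ys.ravel() * spacing,
                    depth.ravel()], axis=1)
    return pts, gray.ravel()

gray = np.full((4, 4), 150.0)
depth = np.full((4, 4), 50.0)
pts, vals = image_to_point_cloud(gray, depth)
print(pts.shape)   # (16, 3); first point sits at (0, 0, 50) with gray value 150
```

Each output row is an (x, y, z) coordinate, matching the text's example of a skull pixel at (100, 120, 50) with gray value 150.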
Step S320, calculating the number of points per unit volume for the point cloud data set; if the point cloud density is lower than a preset threshold, locally encrypting (densifying) the low-density area by adopting an interpolation algorithm, to obtain an encrypted point cloud data set.
In one embodiment, the number of points per unit volume is calculated for the point cloud data set, assuming a unit volume of 1 cubic millimeter and a preset density threshold of 100 points/cubic millimeter. If the point cloud density of a certain area is 80 points/cubic millimeter, below the threshold, an interpolation algorithm is adopted for local encryption.
For example, based on nearest neighbor interpolation, new points are inserted in the low density region, and the pixel gray value is generated according to a weighted average of neighboring points, e.g., the pixel gray value of the inserted point is about 145. The encrypted point cloud density is increased to 110 points/cubic millimeter, and an encrypted point cloud data set is formed. This encryption ensures the uniformity of the point cloud data.
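A crude sketch of the density check plus interpolation step: points are binned into unit cubes, and in any under-populated cube new points are inserted as midpoints of existing points. This midpoint scheme is our simplification of the nearest-neighbor interpolation described in the text:

```python
import numpy as np

def densify(points, threshold=100, cell=1.0):
    """Count points per unit cube; where the count falls below `threshold`,
    insert midpoints between consecutive points in that cell."""
    keys = np.floor(points / cell).astype(int)
    cells = {}
    for p, k in zip(points, map(tuple, keys)):   # group points by cell index
        cells.setdefault(k, []).append(p)
    new_points = [points]
    for k, pts in cells.items():
        if 2 <= len(pts) < threshold:
            pts = np.asarray(pts)
            mids = (pts[:-1] + pts[1:]) / 2.0    # midpoints of consecutive points
            new_points.append(mids)
    return np.vstack(new_points)

rng = np.random.default_rng(0)
cloud = rng.random((80, 3))              # 80 points in one 1 mm^3 cell: "low density"
dense = densify(cloud, threshold=100)
print(len(cloud), "->", len(dense))      # 80 -> 159
```

Interpolated coordinates (and, in a fuller version, gray values) are weighted averages of existing neighbors, raising the density of the cell above the preset threshold.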
Step S330, calculating the pixel gray value gradient of each point according to the encrypted point cloud data set, and constructing a gradient field describing the surface variation by adopting a gradient descent algorithm, to obtain a gradient field data set.
For example, computing a pixel gray value gradient for each point based on the encrypted point cloud data set may reflect the surface gray variation. Assuming that the pixel gray value at a certain point is suddenly changed from 140 to 180, the gradient value is higher, indicating that the boundary of the skull surface is possible. And constructing a gradient field by adopting a gradient descent algorithm, generating a gradient field data set, and describing a surface change rule. In particular, the gradient field marks the junction of the skull and soft tissue, providing accurate surface information for subsequent modeling.
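The per-point gradient can be estimated as follows. Note this sketch fits the gradient by least squares over nearby points rather than using the "gradient descent algorithm" named in the text, whose exact formulation is not specified; the function name and `k` are assumptions:

```python
import numpy as np

def point_gradients(points, values, k=4):
    """Approximate the gradient of a scalar field (here, gray values) sampled
    on a point cloud, via a least-squares fit over the k nearest neighbors."""
    grads = np.zeros_like(points)
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        idx = np.argsort(d)[1:k + 1]           # k nearest neighbors (skip self)
        A = points[idx] - p                    # relative neighbor positions
        b = values[idx] - values[i]            # value differences
        g, *_ = np.linalg.lstsq(A, b, rcond=None)
        grads[i] = g
    return grads

# gray value grows linearly along x -> the gradient should point along +x
pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                [0, 0, 1], [1, 1, 0], [1, 0, 1]], float)
vals = 10.0 * pts[:, 0]
g = point_gradients(pts, vals)
print(np.round(g[0], 3))   # approximately [10, 0, 0]
```

Large gradient magnitudes mark boundaries such as the skull surface, exactly as the text's 140-to-180 jump example suggests.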
Step S340, generating a triangular mesh by using a mesh generation algorithm from the gradient field data set and the encrypted point cloud data set, and constructing a three-dimensional face model by combining boundary contours and normal vectors, to obtain a three-dimensional face model data set.
In one embodiment, a triangle mesh is generated using a mesh generation algorithm from a gradient field dataset and an encrypted point cloud dataset. For example, based on the Delaunay triangulation algorithm, point cloud data is connected into a triangular mesh, and a three-dimensional face model is constructed by combining boundary contours and normal vectors.
Assuming that the normal vector of a certain skull region points outwards, the mesh forms a smooth skull surface after generation. The three-dimensional facial model dataset may be saved in STL format, containing mesh vertex coordinates and patch information, facilitating clinical surgical planning or 3D printing. For example, the generated skull model can accurately position the bone structure in surgical navigation, and operation errors are reduced.
The combination of point cloud encryption and gradient field construction ensures that the model surface is smoother and the boundary is clearer. For example, the encrypted point cloud data reduces voids, and the gradient field enhances the appearance of surface detail. The multi-step collaborative processing improves the precision and reliability of the three-dimensional model, and provides high-quality data support for medical image analysis.
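For depth-image-derived clouds that are organized row by row, a simpler alternative to the Delaunay triangulation named above is a regular-grid triangulation, splitting each grid cell into two triangles. This sketch (our simplification, not the source's algorithm) shows the connectivity:

```python
import numpy as np

def grid_triangles(rows, cols):
    """Triangulate a rows x cols vertex grid: each cell becomes two triangles.
    Vertices are indexed row-major, so vertex (r, c) has index r*cols + c."""
    tris = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            v = r * cols + c                           # top-left vertex of the cell
            tris.append((v, v + 1, v + cols))          # upper-left triangle
            tris.append((v + 1, v + cols + 1, v + cols))  # lower-right triangle
    return np.array(tris)

tris = grid_triangles(3, 3)       # a 3x3 vertex grid has 2x2 cells -> 8 triangles
print(len(tris), tris[0])         # 8 triangles; the first is [0, 1, 3]
```

True Delaunay triangulation is preferable for unorganized clouds because it maximizes the minimum triangle angle; the grid variant works only when the row/column structure of the scan is preserved.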
Preferably, in the cosmetic and plastic auxiliary analysis method based on three-dimensional surface shape digital measurement of this embodiment, step S400 includes:
Step S410, a grid vertex data set of the three-dimensional face model is obtained, and three-dimensional coordinates and normal vectors of each grid vertex are calculated by vector operation, so that a grid vertex attribute data set is obtained.
For example, in the medical image processing field, based on a mesh vertex data set of a three-dimensional face model, mesh vertex coordinates and normal vectors may be obtained through vector operations, forming a mesh vertex attribute data set. Grid vertex coordinates describe the position of each point of the model surface in three dimensions, e.g., the coordinates of a grid vertex of a skull model are (150,200,80) mm, and normal vectors reflect the surface orientation, e.g., (0.7,0.2,0.6), for subsequent illumination calculations. The mesh vertex attribute dataset provides the underlying geometric information for the visualization.
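The vector operation behind per-vertex normals is typically: compute each triangle's face normal as a cross product of two edges, accumulate it at the triangle's three vertices, and normalize. A minimal sketch (function name assumed):

```python
import numpy as np

def vertex_normals(vertices, triangles):
    """Per-vertex unit normals: accumulate each incident triangle's face
    normal (cross product of two edge vectors) and normalize the sums."""
    normals = np.zeros_like(vertices)
    for a, b, c in triangles:
        fn = np.cross(vertices[b] - vertices[a], vertices[c] - vertices[a])
        normals[a] += fn
        normals[b] += fn
        normals[c] += fn
    lens = np.linalg.norm(normals, axis=1, keepdims=True)
    lens[lens == 0] = 1.0           # guard isolated vertices against division by zero
    return normals / lens

# one triangle lying in the z=0 plane -> all vertex normals point along +z
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
tris = [(0, 1, 2)]
n = vertex_normals(verts, tris)
print(n[0])   # [0. 0. 1.]
```

These unit normals are exactly the per-vertex orientation data that the illumination calculations in Steps S420-S440 consume.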
Step S420, calculating the ambient light component of each grid vertex by adopting an illumination model according to the grid vertex attribute data set and the preset light source position, and combining the reflection coefficient of the surface material to obtain an ambient light intensity data set.
In one possible implementation, the ambient light component may be calculated using an ambient light model based on the mesh vertex attribute data set and the preset light source position. Ambient light simulates uniformly scattered light; assuming the light source position is (300, 300, 500) mm, the ambient light intensity is 0.3 and the reflection coefficient of the surface material is 0.4, the ambient light component of a given grid vertex can be obtained by simple multiplication. This ensures that the model retains a basic brightness in the absence of directional light sources.
Step S430, if the included angle between the normal vector in the grid vertex attribute data set and the light source direction is smaller than a preset threshold, calculating a diffuse reflection component by using the diffuse reflection formula I_d = k_d (N·L), wherein I_d represents the diffuse reflection intensity, k_d the diffuse reflection coefficient, N the grid vertex normal vector, and L the light source direction vector, to obtain a diffuse reflection intensity data set.
For example, for the judgment of the angle between the normal vector and the direction of the light source, if the angle is smaller than the preset threshold value, such as 30 degrees, the diffuse reflection component is calculated by using a diffuse reflection formula. Diffuse reflection simulates the scattering effect of light on a rough surface. Assuming that the normal vector of a certain grid vertex is (0.5,0.5,0.7), the direction vector of a light source is (0.6,0.6,0.5), and the diffuse reflection coefficient is 0.6, the diffuse reflection intensity can be obtained through vector dot product. The method can highlight the brightness change of the surface of the model and enhance the stereoscopic impression.
Step S440, calculating a specular reflection component by using the specular reflection formula I_s = k_s (R·V)^n from the diffuse reflection intensity data set and the viewing direction, and fusing it with the ambient light intensity data set and the diffuse reflection intensity data set to generate a visualized image data set, wherein I_s represents the specular reflection intensity, k_s the specular reflection coefficient, R the reflection vector, V the viewing vector, and n the shininess (highlight) exponent.
In one possible implementation, the calculation of the specular reflection component is based on the viewing direction and the shininess exponent. Specular reflection simulates the mirror-like highlights of a smooth surface, such as the smooth areas of a skull model. Assuming the viewing vector is (0.3, 0.4, 0.8), the specular reflection coefficient is 0.8 and the shininess exponent is 32, the specular reflection intensity can be obtained from the dot product of the reflection vector and the viewing vector raised to that exponent. This effectively represents the gloss characteristics of the model surface.
For example, fusing the ambient light intensity dataset, the diffuse reflectance intensity dataset, and the specular reflectance intensity dataset may generate a visual image dataset. The fusion process takes into account the weighted effects of the different illumination components, such as 30% ambient light, 50% diffuse reflection, and 20% specular reflection, to generate the color values of the final image pixels. The method enables the skull model to present a vivid light and shadow effect in operation navigation, and facilitates doctors to observe bone details.
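The three components and their weighted fusion can be combined into one Phong-style intensity function. The sketch below uses the coefficient values and the 30%/50%/20% fusion weights given in the text; the ambient coefficient `k_a` and the function names are our assumptions:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong_intensity(N, L, V, k_a=0.4, i_a=0.3, k_d=0.6, k_s=0.8, n=32):
    """Phong-style intensity: ambient k_a*i_a, diffuse k_d*(N.L),
    specular k_s*(R.V)^n, fused with the text's 30/50/20 weights."""
    N, L, V = map(normalize, (N, L, V))
    ambient = k_a * i_a
    diffuse = k_d * max(np.dot(N, L), 0.0)
    R = normalize(2.0 * np.dot(N, L) * N - L)      # reflection of L about N
    specular = k_s * max(np.dot(R, V), 0.0) ** n
    return 0.3 * ambient + 0.5 * diffuse + 0.2 * specular

# the example vectors used in Steps S430-S440
I = phong_intensity(N=np.array([0.5, 0.5, 0.7]),
                    L=np.array([0.6, 0.6, 0.5]),
                    V=np.array([0.3, 0.4, 0.8]))
print(round(I, 3))
```

Evaluating this per vertex (or per fragment) and mapping the intensity to pixel color yields the visualized image data set described in Step S440.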
In one possible implementation, the visualized image data set may be further optimized, for example by adjusting the light source position to simulate the operating-room lighting environment, or by changing the shininess exponent to highlight specific areas. Such extensions enrich the visual effect and meet different clinical requirements. It should be noted that fusing the various illumination components can significantly improve the realism of the model and provide a reliable visual basis for subsequent analysis.
Further, in the cosmetic and plastic auxiliary analysis method based on three-dimensional surface shape digital measurement provided in this embodiment, step S500 includes:
Step S510, a transformation parameter data set input by a user is obtained, a parameter mapping matrix is constructed through matrix operation, and an initial transformation matrix is obtained, wherein the transformation parameter data set comprises a rotation angle, a translation amount and a scaling.
In one possible implementation, acquiring the transformation parameter data set is a central element of three-dimensional face model processing. Transformation parameters typically include the rotation angle, translation amount and scaling, which together define the geometric transformation of the model in three-dimensional space. For example, the user may input a rotation angle of 45 degrees, a translation of 50 millimeters along the X-axis and a scaling factor of 1.2. These parameters are entered through a user interface or configuration file, ensuring that the model can adjust its position and morphology according to specific needs.
It should be noted that the rationality of the transformation parameters directly affects the accuracy of subsequent matrix operations, so parameter values must be kept within a reasonable range when input; for example, the rotation angle is usually between -180 and 180 degrees.
The process of constructing the parameter mapping matrix translates the input transformation parameters into an operable mathematical representation. The rotation angle may be represented by a rotation matrix, the translation amount by a translation vector, and the scaling by a scaling matrix; these are finally combined into a unified initial transformation matrix. For example, assuming a rotation angle of 30 degrees, a translation of 100 millimeters along the Y-axis and a scaling of 1.5, the system first calculates the rotation matrix, translation vector and scaling matrix respectively, then synthesizes the initial transformation matrix by matrix multiplication. This approach ensures the order and consistency of the transformations and provides a reliable basis for subsequent mesh vertex coordinate calculations.
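The matrix synthesis can be sketched as follows, using the 30-degree / 100 mm / 1.5x example values. The source does not fix the rotation axis or the composition order, so this sketch assumes rotation about Z and the common scale-rotate-translate convention:

```python
import numpy as np

def make_transform(angle_deg, translation, scale):
    """Compose a 4x4 homogeneous transform: scale, then rotate about Z,
    then translate (one common convention; the text does not fix an order)."""
    a = np.radians(angle_deg)
    R = np.array([[np.cos(a), -np.sin(a), 0],
                  [np.sin(a),  np.cos(a), 0],
                  [0,          0,         1]])
    M = np.eye(4)
    M[:3, :3] = R * scale          # combined rotation and uniform scaling
    M[:3, 3] = translation         # translation vector in the last column
    return M

# 30 degrees rotation, 100 mm along Y, scaling 1.5 (the text's example)
M = make_transform(30, [0, 100, 0], 1.5)
p = M @ np.array([100, 0, 0, 1])   # transform a point in homogeneous coordinates
print(np.round(p[:3], 1))          # approximately [129.9, 175.0, 0.0]
```

Because all three elementary transforms live in one 4x4 matrix, a single matrix-vector product per vertex applies the whole chain, which is what makes the later Step S530 efficient.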
Step S520, judging whether the rotation angle in the initial transformation matrix exceeds a preset angle threshold, and if so, correcting the rotation angle by adopting a linear interpolation method, to obtain a corrected transformation matrix.
In one possible implementation, determining whether the rotation angle in the initial transformation matrix exceeds the preset threshold is a critical step. For example, with the angle threshold set to 60 degrees, an input rotation angle of 75 degrees exceeds the threshold. The rotation angle is then corrected by linear interpolation, for example pulling 75 degrees back to within the 60-degree limit, and a corrected transformation matrix is generated. Such correction avoids model deformation or distortion caused by an excessive rotation angle.
For example, in medical image processing, excessive rotation may cause unnatural distortions in the facial model, affecting the physician's judgment of the surgical field. The modified transformation matrix can maintain the geometric integrity of the model.
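A minimal correction sketch: the text describes a "linear interpolation" correction without giving the exact curve, so this version simply clamps the angle to the nearest boundary, which is the simplest continuous correction that preserves the direction of the user's input:

```python
def correct_angle(angle, limit=60.0):
    """Clamp the rotation angle into [-limit, +limit] degrees.

    The exact interpolation curve is not specified in the source; clamping
    is one plausible realization that keeps in-range inputs untouched.
    """
    return max(-limit, min(limit, angle))

print(correct_angle(45.0))   # 45.0 (already legal, unchanged)
print(correct_angle(75.0))   # 60.0 (pulled back to the preset limit)
print(correct_angle(-80.0))  # -60.0
```

A softer variant could compress, rather than cut off, the overshoot region, trading strict limit adherence for smoother feedback as the user drags past the boundary.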
Step S530, calculating the mesh vertex data set of the three-dimensional face model by using a coordinate transformation formula T (v) =m·v, to obtain an updated mesh vertex data set, where T (v) represents the transformed mesh vertex coordinates, M represents the modified transformation matrix, and v represents the original mesh vertex coordinates.
Calculating the mesh vertex data set of the three-dimensional face model with the coordinate transformation formula is the core of generating the updated mesh vertex data set. For example, if the original mesh vertex coordinates are (100, 150, 200) millimeters, applying the modified transformation matrix may yield new coordinates of (120, 180, 240) millimeters. This process is implemented by multiplying the matrix with the vector, and is computationally efficient and accurate.
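Applying T(v) = M·v to a whole vertex set can be sketched as follows; a pure 1.2x uniform scale is used as the (illustrative) corrected matrix, which reproduces the (100, 150, 200) to (120, 180, 240) example:

```python
import numpy as np

def transform_vertices(vertices, M):
    """Apply T(v) = M . v to every mesh vertex, using homogeneous coordinates
    so rotation, scaling and translation are all covered by one 4x4 matrix."""
    n = len(vertices)
    homo = np.hstack([vertices, np.ones((n, 1))])   # (n, 4) homogeneous coords
    out = (M @ homo.T).T                            # batched matrix-vector products
    return out[:, :3]

# a pure 1.2x uniform scale as the corrected matrix M (illustrative values)
M = np.diag([1.2, 1.2, 1.2, 1.0])
verts = np.array([[100.0, 150.0, 200.0]])
moved = transform_vertices(verts, M)
print(moved[0])   # [120. 180. 240.]
```

Because the operation is a single batched matrix product, it scales well to the full vertex set of a face mesh, supporting the real-time updates described in Step S600.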
Step S540, generating grids of the three-dimensional face model by combining the updated grid vertex data set with preset rendering parameters to obtain a visualized grid vertex position data set.
The updated mesh vertex dataset reflects the new position and morphology of the model in space, providing accurate geometric information for subsequent rendering. In one possible implementation, generating the visualized mesh vertex position dataset in combination with preset rendering parameters is the last step in realizing three-dimensional face model visualization. Rendering parameters may include material properties, illumination direction, or color mapping. For example, with the material set to translucent, the light source placed at (200, 200, 300) millimeters, and a warm color tone, the system generates a visualized grid from the updated mesh vertex dataset. This approach produces a clear model surface and facilitates viewing details of the facial structure.
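The fusion of ambient, diffuse and specular components described in the method is conventionally done with a Phong-style intensity formula (the Phong model is named later in this document as one such choice). The sketch below uses illustrative coefficient values (ka, kd, ks, shininess), which the text does not specify.

```python
import numpy as np

def phong_intensity(normal, light_dir, view_dir,
                    ka=0.2, kd=0.6, ks=0.3, shininess=16.0) -> float:
    """Fuse ambient, diffuse and specular components, Phong style:
    I = ka + kd * max(N.L, 0) + ks * max(R.V, 0)^shininess.
    All coefficient values here are illustrative assumptions."""
    n = np.asarray(normal, float);  n = n / np.linalg.norm(n)
    l = np.asarray(light_dir, float);  l = l / np.linalg.norm(l)
    v = np.asarray(view_dir, float);  v = v / np.linalg.norm(v)
    diffuse = max(float(n @ l), 0.0)
    r = 2.0 * (n @ l) * n - l                 # light direction reflected about the normal
    specular = max(float(r @ v), 0.0) ** shininess if diffuse > 0.0 else 0.0
    return ka + kd * diffuse + ks * specular
```

For a surface lit and viewed head-on the three components sum to ka + kd + ks, while grazing light leaves only the ambient term, which is what produces the light-and-shadow gradation over the face model.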
Preferably, specific areas of the model, such as the bridge of the nose or cheekbones, can be highlighted by adjusting rendering parameters, such as increasing the illumination intensity or changing the color mapping, to meet different analysis requirements. For example, in an extension, interactive visualization may be achieved by dynamically adjusting the transformation parameters.
For example, the user may enter a new rotation angle or scale in real-time, and the system updates the transformation matrix and recalculates the mesh vertex dataset in real-time, generating a new visualization effect. This interactivity is particularly useful in medical teaching where a teacher can demonstrate the morphological changes of a face model at different angles by adjusting parameters to help students understand anatomy.
It should be noted that dynamic adjustment can also support real-time feedback, so that the user can quickly optimize the model presentation. In one possible implementation, the diversity of transformation parameters provides flexible application scenarios for the model. For example, in surgical planning, a physician may simulate the effect of a facial implant positioned at different locations by adjusting the translation amount, or zoom in on a particular area to view details. These operations rely on efficient computation of the transformation matrix to ensure that the model remains geometrically consistent under complex transformations, thereby supporting accurate analysis.
Further, in the cosmetic and plastic auxiliary analysis method based on three-dimensional surface shape digital measurement provided in this embodiment, step S600 includes:
Step S610, acquiring a transformation parameter data set and a material attribute data set input by a user, and constructing an initial transformation matrix and an initial material mapping matrix through matrix operations.
For example, in three-dimensional face model processing, acquiring a transformation parameter dataset and a texture property dataset input by a user is the basis for constructing a visualization model. The transformation parameter data set may include a rotation angle, a translation amount, and a scaling, for example, the rotation angle is set to 30 degrees, the translation is 80 millimeters along the Z-axis, and the scaling is 1.3. The texture property data set may include texture color, reflectivity, and transparency, for example, the texture color is set to natural skin tone, the reflectivity is 0.6, and the transparency is 0.2. These parameters are entered through the user interface to ensure that the model is able to adjust morphology and appearance according to specific needs.
It should be noted that the diversity of the input parameters provides flexibility for subsequent matrix operations and texture mapping. In one possible implementation, the initial transformation matrix and the initial texture mapping matrix are constructed through matrix operations. The initial transformation matrix converts the rotation, translation and scaling parameters into mathematical representations: the 30-degree rotation angle becomes a rotation matrix, the 80-millimeter translation a translation vector, and the 1.3 scaling a scaling matrix, which are combined by matrix multiplication into a unified transformation matrix. The initial texture mapping matrix generates an initial texture distribution from attributes such as texture color and reflectivity, for example mapping the skin color uniformly onto the model surface. This matrix construction ensures the mathematical consistency of the parameters.
And step S620, if the parameters in the initial transformation matrix exceed the preset range, correcting the transformation parameters by adopting a linear interpolation method to obtain a corrected transformation matrix.
For example, if a parameter in the initial transformation matrix exceeds the preset range, say the rotation angle threshold is set to 45 degrees and the input is 50 degrees, the angle is corrected to within 45 degrees by the linear interpolation method.
Step S630, calculating the grid vertices of the three-dimensional face model with the modified transformation matrix by using the coordinate transformation formula T(v) = M·v to obtain an updated grid vertex data set, where T(v) represents the transformed grid vertex coordinates, M represents the modified transformation matrix, and v represents the original grid vertex coordinates.
The modified transformation matrix acts on the grid vertices through a coordinate transformation formula, for example, the original grid vertex coordinates are (100,150,200) mm, and new coordinates of (120,170,220) mm can be obtained after modification. This way of correction preserves the geometric stability of the model.
Step S640, updating the initial texture mapping matrix by adopting a texture mapping algorithm according to the texture attribute data set to obtain an updated texture mapping matrix.
Preferably, the texture mapping algorithm updates the texture mapping matrix based on the texture attribute dataset, for example, adjusting reflectivity to highlight regions of the face, resulting in a more realistic skin effect.
Step S650, generating real-time updated visual image data through the updated mesh vertex data set and the updated texture mapping matrix.
In one possible implementation, the updated mesh vertex dataset is combined with the texture mapping matrix to generate real-time updated visual image data. For example, based on the corrected mesh vertex coordinates and the new texture map, the system generates a face model with natural skin tone and light shadow effects. It should be noted that the real-time update supports dynamic adjustment, for example, the model presents the zooming effect immediately after the user changes the scaling.
Step S660, extracting facial feature points from the visualized image data updated in real time by adopting a facial feature extraction algorithm to obtain a facial feature point set.
Preferably, the facial feature extraction algorithm extracts key points from the visualized image data, for example nose tip coordinates of (130, 160, 210) millimeters and mouth corner coordinates of (110, 140, 200) millimeters.
Step S670, calculating the relative position relation between the facial feature points by adopting a geometric analysis method according to the facial feature point set to obtain a facial feature analysis result.
The relative positions of these facial feature points are calculated by a geometric analysis method to obtain the facial feature analysis result, for example a distance of 30 millimeters and an angle of 15 degrees from the nose tip to the mouth corner. The facial feature analysis results may be used to evaluate facial symmetry or structural features. In an extension, the system supports real-time adjustment of material properties by the user, such as changing the transparency to highlight skeletal structures, or adjusting the illumination angle to emphasize facial contours. Preferably, this interactivity allows the user to quickly modify parameters through the interface while the system updates the visualization on the fly, facilitating observation of model changes under different parameters.
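The geometric analysis of step S670 can be sketched as a distance-and-angle computation between feature points. The horizontal (XY) reference plane used for the angle below is an assumption, as the text reports an angle without naming its reference.

```python
import numpy as np

def feature_relation(p1, p2):
    """Distance between two facial feature points, plus the elevation angle
    of the connecting segment above the horizontal (XY) plane -- the
    reference plane is an assumption."""
    d = np.asarray(p2, float) - np.asarray(p1, float)
    distance = float(np.linalg.norm(d))
    elevation = float(np.degrees(np.arcsin(abs(d[2]) / distance)))
    return distance, elevation

# Nose tip and mouth corner coordinates quoted in the text (millimetres)
dist, angle = feature_relation((130, 160, 210), (110, 140, 200))
print(round(dist, 1))  # 30.0
```

The quoted coordinates do reproduce the 30 mm distance; the quoted 15-degree angle depends on the (unspecified) reference, so it is not asserted here.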
It will be appreciated that the facial feature analysis results may be further used in medical diagnostics, such as determining abnormalities in facial structures by facial feature point distance, providing a reference for surgical planning. It should be noted that the flexibility and instantaneity of the method significantly improve the practicality of the model.
The invention also relates to a cosmetic and plastic auxiliary analysis system based on three-dimensional surface shape digital measurement, which is used for realizing the above cosmetic and plastic auxiliary analysis method and comprises a first acquisition module, a second acquisition module, a first generation module, a second generation module, a calculation module and an output module. The first acquisition module is used for acquiring a medical image data set and performing noise processing with a Gaussian filtering algorithm, judging a pixel as a noise point if the difference between its gray value and the neighborhood mean value exceeds a preset threshold value, and smoothing the noise point to obtain a first image data set; the second acquisition module is used for constructing an energy function according to the first image data set and filling and repairing defect areas in the image through superposition calculation of a single-point energy item and an adjacent-point interaction energy item to obtain a second image data set; the first generation module is used for extracting point cloud data from the second image data set, constructing a gradient field, and executing local encryption processing if the point cloud density is lower than a preset threshold value, so as to generate a three-dimensional face model; the second generation module is used for acquiring grid vertex coordinates and normal vector information of the three-dimensional face model and generating a visualized image by fusing an ambient light component, a diffuse reflection component and a specular reflection component through an illumination intensity calculation formula; the calculation module is used for establishing a parameter mapping matrix to constrain the range of transformation parameters input by a user, automatically correcting the rotation angle if it exceeds a preset range, and calculating the updated grid vertex position through a coordinate transformation formula; and the output module is used for updating the visualized image in real time according to the transformation parameters and material attribute parameters and outputting a facial feature analysis result.
Further, in the cosmetic and plastic auxiliary analysis system based on three-dimensional surface shape digital measurement provided by this embodiment, the first acquisition module comprises a first acquisition unit, a second acquisition unit, a third acquisition unit and a first generation unit. The first acquisition unit is used for acquiring the medical image data set and performing pixel gray value analysis, extracting the gray value of each pixel from the medical image data set, and obtaining a first gray difference value set by calculating the difference between each pixel gray value and the neighborhood pixel gray mean value; the second acquisition unit is used for performing noise detection according to the first gray difference value set, and judging a pixel as a noise point to obtain a noise point position set if the difference between its gray value and the neighborhood mean value exceeds a preset threshold value; the third acquisition unit is used for smoothing the noise point position set with a Gaussian filtering algorithm, adjusting the pixel gray values by applying the Gaussian filtering algorithm to the noise points to obtain a second image set; and the first generation unit is used for generating the first image data set from the second image set by storing the processed data of the second image set.
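The noise-detection and smoothing pipeline carried out by these units can be sketched as follows. The 3x3 neighbourhood and the Gaussian kernel size are assumptions, since the text fixes neither.

```python
import numpy as np

def denoise(image: np.ndarray, threshold: float = 30.0) -> np.ndarray:
    """Flag a pixel as a noise point when its grey value differs from the
    3x3 neighbourhood mean by more than the threshold, then smooth only the
    flagged pixels with a 3x3 Gaussian kernel (sizes are assumptions)."""
    kernel = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 16.0
    padded = np.pad(image.astype(float), 1, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (3, 3))
    mean = windows.mean(axis=(2, 3))             # neighbourhood mean per pixel
    gauss = (windows * kernel).sum(axis=(2, 3))  # Gaussian response per pixel
    noisy = np.abs(image - mean) > threshold     # the noise point position set
    return np.where(noisy, gauss, image.astype(float))
```

Smoothing only the flagged pixels is what lets the scheme suppress impulse noise while leaving un-flagged edge and texture pixels untouched.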
Preferably, the cosmetic and plastic auxiliary analysis system based on three-dimensional surface shape digital measurement provided by this embodiment further comprises a fourth acquisition unit, a fifth acquisition unit, a sixth acquisition unit and a second generation unit. The fourth acquisition unit is used for acquiring the medical image data set and performing pixel gray value analysis, extracting the gray value of each pixel from the medical image data set, and obtaining a first gray difference value set by calculating the difference between each pixel gray value and the neighborhood pixel gray mean value; the fifth acquisition unit is used for performing noise detection according to the first gray difference value set, and judging a pixel as a noise point to obtain a noise point position set if the difference between its gray value and the neighborhood mean value exceeds a preset threshold value; the sixth acquisition unit is used for smoothing the noise point position set with the Gaussian filtering algorithm, adjusting the pixel gray values by applying the Gaussian filtering algorithm to the noise points to obtain a second image set; and the second generation unit is used for generating the first image data set from the second image set by storing the processed data of the second image set.
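The energy-function defect filling summarized in the system description (superposition of a single-point energy item and an adjacent-point interaction energy item) can be sketched as an iterative fill. Reducing the minimization to Jacobi averaging over the 4-neighbourhood is one standard choice; the patent does not specify the energy weights.

```python
import numpy as np

def inpaint(image: np.ndarray, mask: np.ndarray, iters: int = 200) -> np.ndarray:
    """Fill defect pixels (mask == True) by iteratively minimising an energy
    of the form  sum_i (u_i - f_i)^2  over known pixels (single-point term)
    plus  sum_<i,j> (u_i - u_j)^2  over neighbours (interaction term).
    With known pixels held fixed this reduces to Jacobi averaging over the
    4-neighbourhood, i.e. a harmonic fill; the patent's exact weights are
    not specified."""
    u = image.astype(float).copy()
    for _ in range(iters):
        padded = np.pad(u, 1, mode="edge")
        neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                 padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        u[mask] = neigh[mask]        # update only the defect region
    return u
```

Because the defect values converge to the average of their neighbours, the filled area joins the surrounding gray values without a visible seam, which is the behaviour the text attributes to the energy-function repair.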
Compared with the prior art, the cosmetic shaping auxiliary analysis method and system based on three-dimensional surface shape digital measurement provided by the embodiment have the following beneficial effects:
1. Improved precision of image preprocessing:
1. Robust noise suppression: through the Gaussian filtering algorithm and dynamic threshold judgment (smoothing when the difference between a pixel gray value and the neighborhood mean value exceeds a preset threshold), interference such as salt-and-pepper noise and Gaussian noise in medical images can be effectively removed, making facial tissue boundaries clearer;
Compared with traditional median filtering, Gaussian filtering suppresses noise while retaining edge details, is particularly suitable for preserving fine structures such as facial skin texture and pores, and avoids the feature distortion caused by excessive smoothing;
2. Intelligent defect repair: energy-function-based defect filling (superposition calculation of the single-point energy item and the adjacent-point interaction energy item) can automatically repair local defects in the image such as light spots, scratches, facial acne pits and scars, so that the filled area connects seamlessly with the pixel gray values and texture features of the surrounding tissue;
Application value: three-dimensional modeling errors caused by defects in the original image are avoided; for example, repairing a scar at the nasal wing allows the subsequent nose-shaping simulation to better fit the real facial structure.
2. Detail enhancement of three-dimensional modeling:
1. Self-adaptive optimization of point cloud data: through gradient field analysis and local encryption processing (automatic densification when the point cloud density falls below a threshold), the point cloud density at key facial features (such as the eye corners, lip lines and nose bridge) can be enhanced, solving the point cloud sparseness of traditional laser scanning in low-curvature areas (such as the cheeks);
Data comparison: in regions with large curvature change such as the nose tip, the point cloud density can be increased from the traditional method's 50 points/cm² to 200 points/cm², reducing the model surface error to within 0.1 mm;
2. Realistic physical illumination simulation: an illumination model combining ambient light, diffuse reflection and specular reflection components (such as the Phong illumination model) can faithfully restore the optical characteristics of facial skin (such as the oily sheen of the forehead and the matte texture of the cheeks), avoiding the plastic look of traditional three-dimensional models;
Clinical value: doctors can observe the three-dimensional form of the face through changes in light and shadow, for example judging whether the light-shadow transition after apple cheek filling is natural, reducing the deviation between the postoperative effect and the expected effect.
3. Safety and interaction efficiency of parameter control:
1. Constraint mechanism for transformation parameters: the parameter mapping matrix limits the range of rotation angles (such as the zygomatic arch inward-push angle and the mandibular angle rotation amplitude), translation distances (such as the nose bridge augmentation length) and the like, automatically correcting out-of-range values (for example, limiting the mandibular angle rotation to no more than 15 degrees to avoid the risk of nerve injury) and preventing unreasonable operations at the algorithm level;
Risk control: safety thresholds are preset in combination with an anatomical database; for example, in hump nose surgery, an upward rotation of the nose tip exceeding 30 degrees may expose the nostrils;
2. Immersive real-time interaction: grid vertex positions are calculated in real time based on the coordinate transformation formula (rotation matrix and translation vector), and when the user adjusts a parameter (such as sliding a bar to change the chin length), the three-dimensional model updates at a frame rate of 60 fps, achieving a "what you see is what you get" simulation effect;
Doctor-patient communication optimization: the patient can intuitively observe the facial changes of different shaping schemes (such as comparing the heights of two rhinoplasty prostheses), and the doctor can quickly verify the aesthetic proportions of a design scheme (such as the "three courts and five eyes" standard) through real-time rendering.
4. Comprehensive benefits of clinical application:
1. Accuracy of the surgical scheme: the combination of a high-precision three-dimensional model (error < 0.3 mm) and illumination simulation enables quantitative analysis of indexes such as the degree of facial asymmetry (for example, the left-right cheek width difference) and skin laxity, providing data support for a personalized surgical scheme;
2. Preoperative risk prediction: by simulating the effects of different surgical parameters in advance, potential problems (such as compatibility with surrounding tissues and the risk of neurovascular compression after prosthesis implantation) can be discovered early, shortening intraoperative adjustment time and reducing surgical risk by about 25%;
3. Efficient use of medical resources: the automated image processing and modeling workflow (less than 10 minutes from image input to a generated three-dimensional model) greatly improves efficiency compared with traditional manual measurement (1 to 2 hours), and is suitable for large-scale preoperative cosmetic and plastic evaluation.
In summary, the cosmetic and plastic auxiliary analysis method and system based on three-dimensional surface shape digital measurement provided by this embodiment realize, through whole-process technical innovation in high-precision image processing, detail-enhanced modeling, physical illumination rendering and safe interactive simulation, the leap from experience-driven to data-driven practice in the cosmetic and plastic field.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention. It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.