CN118866251A - Plastic surgery plan generation method, device and storage medium based on medical images - Google Patents


Info

Publication number: CN118866251A
Other versions: CN118866251B (granted)
Authority: CN (China)
Application number: CN202410906413.XA
Prior art keywords: data, surgical, dimensional, facial, plastic surgery
Legal status: Granted; Active
Inventor: 钟泉
Current Assignee: Meizhou Peoples Hospital
Original Assignee: Meizhou Peoples Hospital
Application filed by Meizhou Peoples Hospital
Other languages: Chinese (zh)


Abstract

The application relates to the technical field of data processing, and discloses a plastic surgery plan generation method, device and storage medium based on medical images. The method comprises the following steps: preprocessing and adaptively segmenting multimodal medical image data to obtain a three-dimensional image data set; performing multi-scale feature fusion reconstruction on the three-dimensional image data set to obtain a three-dimensional anatomical model; performing interactive target definition on the three-dimensional anatomical model to obtain a quantified surgical target parameter set; performing multi-objective optimization on the quantified surgical target parameter set to obtain an optimized surgical plan; based on the optimized surgical plan, executing a physics-engine-driven virtual surgery simulation on the three-dimensional anatomical model to obtain a postoperative appearance prediction model; and performing dynamic registration and real-time feedback analysis on the postoperative appearance prediction model to obtain a surgical navigation data stream, from which a target plastic surgery plan is generated. The application improves the efficiency and accuracy of generating a plastic surgery plan based on medical images.

Description

Translated from Chinese
Plastic surgery plan generation method, device and storage medium based on medical images

Technical Field

The present application relates to the field of data processing, and in particular to a method, device and storage medium for generating a plastic surgery plan based on medical images.

Background Art

Surgical planning in plastic surgery is shifting from a traditional, experience-driven process to a digital, intelligent one. Advanced medical institutions have begun to use technologies such as 3D imaging, computer-aided design (CAD) and virtual reality (VR) to assist surgical planning. By building a digital 3D model of the patient, these methods give surgeons a more intuitive anatomical visualization and surgical simulation platform. At the same time, some research institutions are exploring artificial intelligence in surgical planning, such as deep learning for facial feature recognition and aesthetic assessment, and machine learning algorithms for optimizing surgical plans.

However, existing technologies still face many challenges and limitations. First, multimodal medical images are fused and exploited inefficiently, making it difficult to fully extract and integrate the complementary information provided by different imaging modalities (such as CT, MRI and 3D scanning). Second, existing surgical goal definition methods are often oversimplified and struggle to accurately quantify and express complex aesthetic and functional requirements. Third, most systems lack an effective multi-objective optimization mechanism, making it difficult to balance aesthetics, safety, recovery speed and other dimensions. In addition, existing virtual surgery simulation technologies usually rely on simplified physical models that cannot accurately capture the nonlinear deformation behavior of soft tissue, resulting in low accuracy and efficiency when generating plastic surgery plans from medical imaging data.

Summary of the Invention

The present application provides a method, device and storage medium for generating a plastic surgery plan based on medical images, which improve the efficiency and accuracy of plastic surgery plan generation based on medical images.

In a first aspect, the present application provides a method for generating a plastic surgery plan based on medical images, the method comprising: preprocessing and adaptively segmenting the acquired multimodal medical image data to obtain a three-dimensional image data set; performing multi-scale feature fusion reconstruction on the three-dimensional image data set to obtain a three-dimensional anatomical model, wherein the three-dimensional anatomical model includes a high-resolution geometric structure, multi-level tissue representation information and biomechanical data; performing interactive target definition based on facial features on the three-dimensional anatomical model to obtain a quantified surgical target parameter set; performing multi-objective optimization on the quantified surgical target parameter set to obtain an optimized surgical plan; based on the optimized surgical plan, performing a physics-engine-driven virtual surgery simulation on the three-dimensional anatomical model to obtain a postoperative appearance prediction model; and performing dynamic registration and real-time feedback analysis on the postoperative appearance prediction model to obtain a surgical navigation data stream, and generating a target plastic surgery plan from the surgical navigation data stream.

In combination with the first aspect, in a first implementation of the first aspect of the present application, preprocessing and adaptively segmenting the acquired multimodal medical image data to obtain a three-dimensional image data set includes: performing adaptive histogram equalization on the multimodal medical image data to obtain enhanced medical image data; performing non-local means filtering on the enhanced medical image data to obtain denoised medical image data; performing spatial registration based on mutual information maximization on the denoised medical image data to obtain medical image data in a unified coordinate system; inputting the medical image data in the unified coordinate system into a U-Net deep learning network for segmentation to obtain a preliminary segmented image; applying a morphological closing operation to the preliminary segmented image to obtain an optimized segmented image; extracting boundary information from the optimized segmented image with an edge detection algorithm to obtain anatomical structure boundary information; performing surface reconstruction from the anatomical structure boundary information with an isosurface extraction algorithm to obtain a preliminary three-dimensional voxel mesh; performing mesh simplification and subdivision on the preliminary three-dimensional voxel mesh to obtain a first candidate three-dimensional voxel mesh; performing texture mapping based on the multimodal medical image data on the first candidate three-dimensional voxel mesh to obtain a second candidate three-dimensional voxel mesh with surface features; and performing biomechanical parameter fusion on the second candidate three-dimensional voxel mesh to obtain the three-dimensional image data set.
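As a concrete illustration of the enhancement step, the sketch below implements plain (global) histogram equalization in pure Python; the adaptive variant named in the patent applies the same intensity remapping per local region. The function name and list-of-lists image representation are illustrative assumptions, not from the patent:

```python
def equalize_histogram(image, levels=256):
    """Global histogram equalization for a grayscale image.

    `image` is a list of rows of integer intensities in [0, levels).
    Returns a new image of the same shape with a stretched histogram.
    """
    # Build the intensity histogram.
    hist = [0] * levels
    for row in image:
        for v in row:
            hist[v] += 1
    total = sum(hist)
    # Cumulative distribution function (CDF) of the intensities.
    cdf, running = [0] * levels, 0
    for i, count in enumerate(hist):
        running += count
        cdf[i] = running
    cdf_min = next(c for c in cdf if c > 0)
    # Remap each intensity so the CDF becomes (approximately) linear.
    lut = [round((c - cdf_min) / max(total - cdf_min, 1) * (levels - 1)) for c in cdf]
    return [[lut[v] for v in row] for row in image]
```

A low-contrast patch such as `[[100, 100], [101, 102]]` is spread across the full `[0, 255]` range, which is the contrast gain the patent relies on before denoising.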

In combination with the first aspect, in a second implementation of the first aspect of the present application, performing multi-scale feature fusion reconstruction on the three-dimensional image data set to obtain a three-dimensional anatomical model, wherein the three-dimensional anatomical model includes a high-resolution geometric structure, multi-level tissue representation information and biomechanical data, includes: performing a wavelet transform on the three-dimensional image data set to obtain a multi-scale feature map; performing deep feature extraction on the multi-scale feature map with a convolutional neural network to obtain a deep feature representation; performing multi-level feature fusion on the deep feature representation with a feature pyramid network to obtain a multi-resolution feature map; performing spatial context enhancement on the multi-resolution feature map with a self-attention mechanism to obtain enhanced spatial context information; performing a three-dimensional deconvolution operation on the enhanced spatial context information to obtain reconstructed voxel data; performing surface reconstruction on the reconstructed voxel data with the marching cubes algorithm to obtain a three-dimensional surface mesh; applying Laplacian smoothing to the three-dimensional surface mesh to obtain the high-resolution geometric structure; performing surface feature mapping on the high-resolution geometric structure with a texture mapping algorithm to obtain surface data containing multi-level tissue representation information; performing mechanical property calculations on the surface data containing multi-level tissue representation information with a finite element analysis algorithm to obtain a data structure containing biomechanical data; and performing data compression and index construction on the data structure containing biomechanical data to obtain the three-dimensional anatomical model.
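The Laplacian smoothing step applied to the three-dimensional surface mesh can be sketched as follows: each vertex is moved a fraction `lam` of the way toward the centroid of its neighbors. The names and the adjacency-list mesh representation are illustrative, not from the patent:

```python
def laplacian_smooth(vertices, neighbors, iterations=1, lam=0.5):
    """Simple Laplacian mesh smoothing.

    vertices: list of (x, y, z) tuples.
    neighbors: neighbors[i] is the list of vertex indices adjacent to i.
    Each iteration pulls every vertex toward the centroid of its
    neighbors by factor lam (0 = no change, 1 = full centroid).
    """
    verts = [list(v) for v in vertices]
    for _ in range(iterations):
        new = []
        for i, v in enumerate(verts):
            if not neighbors[i]:
                new.append(v[:])  # isolated vertex: leave in place
                continue
            cx = [sum(verts[j][k] for j in neighbors[i]) / len(neighbors[i])
                  for k in range(3)]
            new.append([v[k] + lam * (cx[k] - v[k]) for k in range(3)])
        verts = new
    return [tuple(v) for v in verts]
```

Repeated iterations flatten high-frequency noise on the reconstructed surface while preserving the overall shape, which is why it is a standard post-pass after marching cubes.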

In combination with the first aspect, in a third implementation of the first aspect of the present application, performing interactive target definition based on facial features on the three-dimensional anatomical model to obtain a quantified surgical target parameter set includes: performing key-point detection on the three-dimensional anatomical model to obtain a facial feature point set; performing dimensionality reduction on the facial feature point set with a principal component analysis algorithm to obtain facial morphology principal components; performing statistical analysis on the facial morphology principal components to obtain a facial aesthetic scoring index; performing feature extraction on the three-dimensional anatomical model with a convolutional neural network to obtain facial deep feature representation information; performing cluster analysis on the facial deep feature representation information to obtain a facial region segmentation image; performing topological relationship analysis on the facial region segmentation image with a graph convolutional network to obtain a facial structure graph representation; performing graph matching on the facial structure graph representation based on the facial aesthetic scoring index to obtain deviation values from the expected aesthetic standard; parameterizing and encoding the deviation values with a variational autoencoder to obtain an aesthetic parameter space; interactively adjusting the aesthetic parameter space to obtain a personalized aesthetic target; and shape-encoding the personalized aesthetic target with preset Fourier descriptors to obtain the quantified surgical target parameter set.
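The final shape-encoding step with Fourier descriptors can be illustrated on a 2-D contour: the boundary points are treated as complex numbers, and their discrete Fourier coefficients, normalized by the first harmonic, form a compact shape code that is invariant to position and scale. This is a minimal sketch with illustrative names, not the patent's implementation:

```python
import cmath

def fourier_descriptors(contour, n_coeffs=4):
    """Encode a closed 2-D contour as normalized Fourier descriptors.

    contour: list of (x, y) boundary points, in order around the shape.
    Returns the magnitudes of harmonics 2..n_coeffs, divided by the
    first harmonic, so position (DC term) and scale are factored out.
    """
    pts = [complex(x, y) for x, y in contour]
    n = len(pts)
    coeffs = []
    for k in range(n_coeffs + 1):
        # k-th DFT coefficient of the complex boundary signal.
        c = sum(p * cmath.exp(-2j * cmath.pi * k * t / n)
                for t, p in enumerate(pts)) / n
        coeffs.append(c)
    scale = abs(coeffs[1]) or 1.0  # first harmonic carries overall size
    return [abs(c) / scale for c in coeffs[2:]]
```

For a sampled circle all descriptors beyond the first harmonic vanish; any non-zero higher-order magnitude measures departure from that reference shape, which is what makes the encoding usable as a quantified target.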

In combination with the first aspect, in a fourth implementation of the first aspect of the present application, performing multi-objective optimization on the quantified surgical target parameter set to obtain an optimized surgical plan includes: decomposing the quantified surgical target parameter set to obtain a sub-target set; assigning weights to the sub-target set with the analytic hierarchy process to obtain a weighted sub-target set; defining constraints for the weighted sub-target set to obtain a constraint data set; generating initial solutions for the constraint data set with a genetic algorithm to obtain a candidate solution set; performing fitness evaluation on the candidate solution set to obtain quality score data corresponding to the candidate solution set; performing Pareto optimization analysis on the quality score data with a non-dominated sorting genetic algorithm to obtain a non-inferior solution set; performing cluster analysis on the non-inferior solution set to obtain a representative solution set; performing multi-criteria decision-making on the representative solution set with a fuzzy comprehensive evaluation method to obtain an optimal solution; decomposing the optimal solution into surgical steps to obtain a preliminary surgical plan; and refining the preliminary surgical plan with a Monte Carlo tree search algorithm to obtain the optimized surgical plan.
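The Pareto analysis at the heart of non-dominated sorting can be sketched directly: with all objectives cast as minimizations, the non-inferior set is exactly the candidates that no other candidate dominates. A minimal sketch with illustrative names and objective tuples:

```python
def pareto_front(solutions):
    """Extract the non-dominated (Pareto) set from candidate solutions.

    solutions: list of objective tuples, every objective to be minimized
    (e.g. (aesthetic_error, risk_score, recovery_days)).
    Solution a dominates b if a is no worse than b in every objective
    and strictly better in at least one.
    """
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]
```

NSGA-II-style algorithms repeat this front extraction layer by layer; the first front is the non-inferior solution set that the later clustering and fuzzy evaluation steps operate on.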

In combination with the first aspect, in a fifth implementation of the first aspect of the present application, performing a physics-engine-driven virtual surgery simulation on the three-dimensional anatomical model based on the optimized surgical plan to obtain a postoperative appearance prediction model includes: performing mesh simplification on the three-dimensional anatomical model to obtain simplified three-dimensional mesh data; discretizing the simplified three-dimensional mesh data to obtain computational mesh data; assigning material properties to the computational mesh data to obtain computational data; performing dynamic parameterization on the computational data with a mass-spring system algorithm to obtain a deformable body parameter set; setting boundary conditions for the deformable body parameter set to obtain deformable body data under constraints; performing time discretization on the deformable body data under constraints to obtain dynamic simulation data; performing time-series decomposition on the dynamic simulation data to obtain deformation process time-series data; performing contour alignment on the deformation process time-series data with a shape matching algorithm to obtain a standardized deformation data set; performing statistical analysis on the standardized deformation data set to obtain deformation feature distribution data; and performing appearance synthesis on the deformation feature distribution data with a generative adversarial network to obtain postoperative appearance prediction data.
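The mass-spring dynamics used for soft-tissue deformation can be illustrated with a one-dimensional semi-implicit Euler step: spring forces follow Hooke's law and a damping term bleeds off energy. This is a deliberately simplified sketch with illustrative names, not the patent's simulator:

```python
def step_mass_spring(positions, velocities, springs, dt=0.01, mass=1.0, damping=0.1):
    """One semi-implicit Euler step of a 1-D mass-spring system.

    positions, velocities: lists of floats, one per point mass.
    springs: list of (i, j, rest_length, stiffness) tuples.
    Returns the updated (positions, velocities).
    """
    forces = [0.0] * len(positions)
    for i, j, rest, k in springs:
        d = positions[j] - positions[i]
        # Hooke's law: force proportional to extension beyond rest length,
        # directed along the spring.
        f = k * (abs(d) - rest) * (1 if d > 0 else -1)
        forces[i] += f
        forces[j] -= f
    # Update velocity first, then advance position with the new velocity.
    new_v = [v + (f / mass - damping * v) * dt for v, f in zip(velocities, forces)]
    new_x = [x + v * dt for x, v in zip(positions, new_v)]
    return new_x, new_v
```

Iterating this step relaxes the masses toward the spring rest lengths; a full simulator extends the same update to 3-D vectors, many springs per tissue layer, and boundary constraints.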

In combination with the first aspect, in a sixth implementation of the first aspect of the present application, performing dynamic registration and real-time feedback analysis on the postoperative appearance prediction model to obtain a surgical navigation data stream and generating a target plastic surgery plan from the surgical navigation data stream includes: extracting feature points from the postoperative appearance prediction model to obtain a set of key anatomical landmark points; performing real-time position matching on the key anatomical landmark point set with an iterative closest point (ICP) algorithm to obtain a dynamic registration matrix; performing error analysis on the dynamic registration matrix to obtain registration accuracy evaluation data; performing noise suppression on the registration accuracy evaluation data with a Kalman filter algorithm to obtain smoothed registration parameters; performing time-series analysis on the smoothed registration parameters to obtain registration trend data; classifying the registration trend data with a support vector machine to obtain a surgical process status identifier; performing decision tree analysis on the surgical process status identifier to obtain a real-time feedback instruction set; optimizing the real-time feedback instruction set with a reinforcement learning algorithm to obtain an adaptive navigation strategy; streaming the adaptive navigation strategy to obtain a surgical navigation data stream; and performing comprehensive analysis on the surgical navigation data stream with a deep neural network to obtain the target plastic surgery plan.
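The Kalman-filter noise suppression applied to the registration parameters can be illustrated with the scalar case: each new measurement is blended into the running estimate with a gain set by the relative process and measurement noise. The function name and noise values are illustrative assumptions:

```python
def kalman_smooth(measurements, q=1e-4, r=0.04):
    """Scalar Kalman filter for smoothing a noisy registration parameter.

    measurements: noisy scalar readings over time.
    q: process-noise variance (how fast the true value may drift).
    r: measurement-noise variance (how noisy each reading is).
    Returns the filtered estimates, one per measurement.
    """
    x, p = measurements[0], 1.0   # initial state estimate and covariance
    estimates = []
    for z in measurements:
        p = p + q                 # predict: covariance grows by process noise
        k = p / (p + r)           # Kalman gain: trust in the new measurement
        x = x + k * (z - x)       # update estimate toward the measurement
        p = (1 - k) * p           # covariance shrinks after the update
        estimates.append(x)
    return estimates
```

With `r` much larger than `q`, the filter heavily smooths jitter in the registration parameter while still tracking slow drift, which is the behavior wanted during intraoperative navigation.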

In a second aspect, the present application provides a device for generating a plastic surgery plan based on medical images, the device comprising:

a segmentation module, configured to preprocess and adaptively segment the acquired multimodal medical image data to obtain a three-dimensional image data set;

a reconstruction module, configured to perform multi-scale feature fusion reconstruction on the three-dimensional image data set to obtain a three-dimensional anatomical model, wherein the three-dimensional anatomical model includes a high-resolution geometric structure, multi-level tissue representation information and biomechanical data;

a definition module, configured to perform interactive target definition based on facial features on the three-dimensional anatomical model to obtain a quantified surgical target parameter set;

a processing module, configured to perform multi-objective optimization on the quantified surgical target parameter set to obtain an optimized surgical plan;

a simulation module, configured to perform a physics-engine-driven virtual surgery simulation on the three-dimensional anatomical model based on the optimized surgical plan to obtain a postoperative appearance prediction model;

an analysis module, configured to perform dynamic registration and real-time feedback analysis on the postoperative appearance prediction model to obtain a surgical navigation data stream, and to generate a target plastic surgery plan from the surgical navigation data stream.

A third aspect of the present application provides a computer-readable storage medium storing instructions which, when run on a computer, cause the computer to execute the above method for generating a plastic surgery plan based on medical images.

In the technical solution provided by the present application, preprocessing and adaptive segmentation of the multimodal medical image data, combined with adaptive histogram equalization, non-local means filtering and spatial registration based on mutual information maximization, significantly improve the quality of the original medical images and the efficiency of information extraction, laying a solid foundation for subsequent processing. Second, multi-scale feature fusion reconstruction, including the wavelet transform, convolutional-neural-network deep feature extraction, feature pyramid networks and a self-attention mechanism, achieves high-precision reconstruction of the three-dimensional anatomical model, which retains a high-resolution geometric structure and also contains multi-level tissue representation information and biomechanical data, providing comprehensive and accurate anatomical information for surgical planning. In the target definition stage, an interactive target definition method based on facial features is introduced; precise quantification of surgical targets is achieved through principal component analysis, convolutional-neural-network feature extraction and graph-convolutional-network topology analysis, greatly improving the personalization and accuracy of surgical planning. In the optimization stage, multi-objective optimization techniques, including the analytic hierarchy process, a genetic algorithm, a non-dominated sorting genetic algorithm and fuzzy comprehensive evaluation, effectively balance goals such as aesthetic effect, safety and recovery speed and generate the optimal surgical plan. In the virtual surgery simulation, physics-engine-driven simulation, combined with the mass-spring system algorithm and a shape matching algorithm, achieves highly realistic soft-tissue deformation, greatly improving the accuracy of postoperative outcome prediction. Finally, for intraoperative navigation and real-time feedback analysis, the combined application of the iterative closest point algorithm, Kalman filtering, support-vector-machine classification and reinforcement learning achieves high-precision dynamic registration and real-time feedback, providing accurate navigation support for the surgical process. This series of technical innovations and integrations not only significantly improves the accuracy and reliability of plastic surgery planning, but also realizes intelligent support for the entire workflow from preoperative planning to intraoperative navigation. By comprehensively using multimodal medical imaging, deep learning, computer vision and artificial intelligence, the method effectively overcomes the insufficient information utilization, imprecise target definition, incomplete optimization and unrealistic simulation of traditional surgical planning, providing doctors with more comprehensive and accurate decision support.

BRIEF DESCRIPTION OF THE DRAWINGS

In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention; a person of ordinary skill in the art can obtain other drawings from these drawings without creative effort.

FIG. 1 is a schematic diagram of an embodiment of the method for generating a plastic surgery plan based on medical images in an embodiment of the present application;

FIG. 2 is a schematic diagram of an embodiment of the device for generating a plastic surgery plan based on medical images in an embodiment of the present application.

DETAILED DESCRIPTION

The embodiments of the present application provide a method, device and storage medium for generating a plastic surgery plan based on medical images. The terms "first", "second", "third", "fourth", etc. (if any) in the specification, claims and drawings of the present application are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments described here can be implemented in an order other than that illustrated or described here. In addition, the terms "including" or "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device comprising a series of steps or units is not necessarily limited to those steps or units clearly listed, but may include other steps or units not clearly listed or inherent to the process, method, product or device.

For ease of understanding, the specific process of the embodiment of the present application is described below. Referring to FIG. 1, an embodiment of the method for generating a plastic surgery plan based on medical images in the embodiments of the present application includes:

Step S101: preprocess and adaptively segment the acquired multimodal medical image data to obtain a three-dimensional image data set;

Step S102: perform multi-scale feature fusion reconstruction on the three-dimensional image data set to obtain a three-dimensional anatomical model, wherein the three-dimensional anatomical model includes a high-resolution geometric structure, multi-level tissue representation information and biomechanical data;

Step S103: perform interactive target definition based on facial features on the three-dimensional anatomical model to obtain a quantified surgical target parameter set;

Step S104: perform multi-objective optimization on the quantified surgical target parameter set to obtain an optimized surgical plan;

Step S105: based on the optimized surgical plan, perform a physics-engine-driven virtual surgery simulation on the three-dimensional anatomical model to obtain a postoperative appearance prediction model;

Step S106: perform dynamic registration and real-time feedback analysis on the postoperative appearance prediction model to obtain a surgical navigation data stream, and generate a target plastic surgery plan from the surgical navigation data stream.
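The six steps above can be sketched as a simple staged pipeline. All function and stage names below are illustrative placeholders, not from the patent; each stage is stubbed out to show only the order of the data flow:

```python
def generate_surgery_plan(multimodal_images):
    """Skeleton of the six-stage pipeline (S101-S106).

    Each placeholder stage just tags the data dict with the step it
    has passed through, so the composition order is visible.
    """
    def stage(name):
        def run(data):
            return dict(data, steps=data.get("steps", []) + [name])
        return run

    pipeline = [
        stage("segment"),         # S101: preprocess + adaptive segmentation
        stage("reconstruct"),     # S102: multi-scale feature fusion
        stage("define_targets"),  # S103: interactive target definition
        stage("optimize"),        # S104: multi-objective optimization
        stage("simulate"),        # S105: physics-engine virtual surgery
        stage("navigate"),        # S106: dynamic registration + feedback
    ]
    data = {"images": multimodal_images}
    for run in pipeline:
        data = run(data)
    return data
```

Each later section of the patent refines one of these stages; the important structural point is that every stage consumes exactly the artifact produced by the previous one.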

It is understood that the execution subject of the present application may be the device for generating a plastic surgery plan based on medical images, or a terminal or a server, which is not limited here. The embodiments of the present application are described with a server as the execution subject.

Specifically, multimodal medical imaging data such as CT, MRI and 3D surface scans are acquired. Taking facial plastic surgery as an example, a high-resolution CT scan of the patient captures the bone structure, MRI images show the soft-tissue distribution, and a 3D surface scan captures skin surface detail. These multimodal data are then preprocessed and adaptively segmented: preprocessing includes image enhancement and noise removal, while adaptive segmentation uses deep learning algorithms such as U-Net to automatically identify and segment different anatomical structures. In facial plastic surgery, for example, structures at different levels such as facial bones, muscles, fat tissue and skin can be segmented automatically. The output of this step is an accurate three-dimensional image data set containing detailed anatomical information of the patient's face.

接下来,对这个三维图像数据集进行多尺度特征融合重建,这一过程利用了小波变换、卷积神经网络和特征金字塔网络等技术。通过这些技术,能够捕捉从大尺度结构(如整体面部轮廓)到微小细节(如皱纹和毛孔)的全方位信息。例如,在鼻整形手术规划中,可以精确重建鼻梁的曲线、鼻尖的形状以及鼻翼的细微结构。重建的结果是一个高度精确的三维解剖模型,它不仅包含高分辨率的几何结构,还包括多层次的组织表示信息(如皮肤、脂肪、肌肉和骨骼的分层信息)以及生物力学数据(如组织的弹性和硬度)。Next, this 3D image dataset is reconstructed by multi-scale feature fusion, a process that utilizes technologies such as wavelet transform, convolutional neural network, and feature pyramid network. Through these technologies, a full range of information from large-scale structures (such as overall facial contours) to tiny details (such as wrinkles and pores) can be captured. For example, in rhinoplasty planning, the curve of the bridge of the nose, the shape of the tip of the nose, and the subtle structure of the nose wing can be accurately reconstructed. The result of the reconstruction is a highly accurate 3D anatomical model that contains not only high-resolution geometric structures, but also multi-level tissue representation information (such as layered information of skin, fat, muscle, and bone) and biomechanical data (such as tissue elasticity and hardness).

有了这个详细的三维解剖模型,医生和患者可以进行基于面部特征的交互式目标定义。该过程利用了主成分分析、深度特征提取和图卷积网络等技术,允许医生和患者共同定义和调整手术目标。例如,在下颌整形手术中,可以通过交互式界面精确调整下颌角的角度、下巴的突出度等参数,而系统会实时展示这些调整对整体面部美学的影响。这个过程的输出是一个量化的手术目标参数集,它精确定义了期望达到的面部特征。With this detailed 3D anatomical model, the surgeon and the patient can perform interactive goal definition based on facial features. Techniques such as principal component analysis, deep feature extraction, and graph convolutional networks are used to allow the surgeon and the patient to jointly define and adjust surgical goals. For example, in mandibular plastic surgery, the angle of the mandibular angle, the protrusion of the chin, and other parameters can be precisely adjusted through an interactive interface, while the system displays the impact of these adjustments on the overall facial aesthetics in real time. The output of this process is a quantified set of surgical target parameters that accurately defines the desired facial features.

随后,对这个量化手术目标参数集进行多目标优化处理。使用了层次分析法、遗传算法和非支配排序遗传算法等技术,以平衡美学效果、手术安全性、恢复时间等多个目标。例如,在全面部年轻化手术规划中,会权衡皮肤紧致度的改善、面部轮廓的优化以及手术创伤最小化等多个目标,生成一个最优的手术计划。Subsequently, this set of quantitative surgical target parameters was subjected to multi-objective optimization. Techniques such as the analytic hierarchy process, genetic algorithm, and non-dominated sorting genetic algorithm were used to balance multiple objectives such as aesthetic effect, surgical safety, and recovery time. For example, in the planning of a full facial rejuvenation surgery, multiple objectives such as improving skin firmness, optimizing facial contours, and minimizing surgical trauma are weighed to generate an optimal surgical plan.

基于这个优化的手术计划,接下来对三维解剖模型执行基于物理引擎驱动的虚拟手术模拟。该模拟利用了质点-弹簧算法和有限元分析等技术,模拟手术过程中的组织变形和力学响应。例如,在面部填充手术的模拟中,可以精确预测填充物注入后周围软组织的变形情况,以及随时间推移的沉降效果。这个模拟的结果是一个术后外观预测模型,它为医生和患者提供了高度逼真的手术效果预览。最后,对术后外观预测模型进行动态配准及实时反馈分析。Based on this optimized surgical plan, a physics-driven virtual surgery simulation is then performed on the 3D anatomical model. Techniques such as the mass-spring algorithm and finite element analysis are used to simulate tissue deformation and mechanical response during surgery. For example, in the simulation of facial filler surgery, the deformation of the surrounding soft tissue after the filler is injected, as well as the sedimentation effect over time, can be accurately predicted. The result of this simulation is a postoperative appearance prediction model that provides a highly realistic preview of the surgical effect for the surgeon and the patient. Finally, the postoperative appearance prediction model is dynamically registered and analyzed for real-time feedback.
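上文提到的质点-弹簧算法可用一个一维弹簧链的迭代松弛过程示意(Python 示例;刚度系数、迭代次数与函数名均为示例性假设,真实软组织模拟为三维网格并包含阻尼、重力等项,并非专利的完整实现):The mass-spring algorithm mentioned above can be sketched as iterative relaxation of a one-dimensional spring chain (an illustrative Python sketch; the stiffness, iteration count, and function names are assumptions, and a real soft-tissue simulation uses a 3D mesh with damping and gravity terms, not the patent's full implementation):

```python
import numpy as np

def relax_spring_chain(pos, fixed, rest_len, k=0.5, iters=200):
    # 反复遍历相邻质点对,把每段弹簧向其自然长度 rest_len 松弛
    pos = np.asarray(pos, float).copy()
    for _ in range(iters):
        for i in range(len(pos) - 1):
            d = pos[i + 1] - pos[i]
            corr = k * (d - rest_len) / 2  # 每端各承担一半修正量
            if not fixed[i]:
                pos[i] += corr
            if not fixed[i + 1]:
                pos[i + 1] -= corr
    return pos
```

例如两端固定在 0 与 2、自然长度为 1 时,中间质点会从任意初始位置松弛到平衡位置 1.0。For example, with the two endpoints fixed at 0 and 2 and rest length 1, the middle mass relaxes to its equilibrium position 1.0.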

本申请实施例中,通过对多模态医学影像数据进行预处理和自适应分割,结合自适应直方图均衡化、非局部均值滤波和基于互信息最大化的空间配准等技术,显著提高了原始医学影像的质量和信息提取效率,为后续处理奠定了坚实基础。其次,采用多尺度特征融合重建技术,包括小波变换、卷积神经网络深度特征提取、特征金字塔网络和自注意力机制等,实现了高精度的三维解剖模型重建,不仅保留了高分辨率几何结构,还包含了多层次组织表示信息和生物力学数据,为手术规划提供了全面、精确的解剖信息。在目标定义阶段,引入了基于面部特征的交互式目标定义方法,通过主成分分析、卷积神经网络特征提取、图卷积网络拓扑分析等技术,实现了手术目标的精确量化,大大提高了手术规划的个性化程度和精准性。在优化阶段,采用多目标优化处理技术,包括层次分析法、遗传算法、非支配排序遗传算法和模糊综合评判法等,有效平衡了美学效果、安全性、恢复速度等多个目标,生成了最优的手术计划。虚拟手术模拟环节中,基于物理引擎驱动的仿真技术,结合质点-弹簧系统算法和形状匹配算法,实现了高度逼真的软组织变形模拟,大大提高了术后效果预测的准确性。最后,在术中导航和实时反馈分析方面,通过迭代最近点算法、卡尔曼滤波、支持向量机分类和强化学习等技术的综合应用,实现了高精度的动态配准和实时反馈,为手术过程提供了精确的导航支持。这一系列技术创新和集成不仅显著提高了整形外科手术规划的精度和可靠性,还实现了从术前规划到术中导航的全流程智能化支持。通过综合利用多模态医学影像、深度学习、计算机视觉和人工智能技术,本方法有效克服了传统手术规划方法中信息利用不充分、目标定义不精确、优化不全面、模拟不逼真等问题,为医生提供了更全面、更准确的决策支持。In the embodiment of the present application, by preprocessing and adaptive segmentation of multimodal medical image data, combined with adaptive histogram equalization, non-local mean filtering and spatial registration based on mutual information maximization and other technologies, the quality and information extraction efficiency of the original medical image are significantly improved, laying a solid foundation for subsequent processing. Secondly, multi-scale feature fusion reconstruction technology, including wavelet transform, deep feature extraction of convolutional neural network, feature pyramid network and self-attention mechanism, is adopted to achieve high-precision three-dimensional anatomical model reconstruction, which not only retains high-resolution geometric structure, but also contains multi-level tissue representation information and biomechanical data, providing comprehensive and accurate anatomical information for surgical planning. 
In the target definition stage, an interactive target definition method based on facial features is introduced, and the precise quantification of surgical targets is achieved through principal component analysis, convolutional neural network feature extraction, graph convolution network topology analysis and other technologies, which greatly improves the personalization and accuracy of surgical planning. In the optimization stage, multi-objective optimization processing technology, including the analytic hierarchy process, genetic algorithm, non-dominated sorting genetic algorithm and fuzzy comprehensive evaluation method, is adopted to effectively balance multiple goals such as aesthetic effect, safety and recovery speed, and generate the optimal surgical plan. In the virtual surgery simulation, the simulation technology driven by the physical engine, combined with the mass-spring system algorithm and shape matching algorithm, achieved highly realistic soft tissue deformation simulation, greatly improving the accuracy of postoperative effect prediction. Finally, in terms of intraoperative navigation and real-time feedback analysis, through the comprehensive application of the iterative closest point algorithm, Kalman filtering, support vector machine classification and reinforcement learning, high-precision dynamic registration and real-time feedback were achieved, providing accurate navigation support for the surgical process. This series of technological innovations and integrations not only significantly improved the accuracy and reliability of plastic surgery planning, but also realized intelligent support for the entire process from preoperative planning to intraoperative navigation. 
By comprehensively utilizing multimodal medical imaging, deep learning, computer vision and artificial intelligence technologies, this method effectively overcomes the problems of insufficient information utilization, inaccurate target definition, incomplete optimization, and unrealistic simulation in traditional surgical planning methods, providing doctors with more comprehensive and accurate decision support.

在一具体实施例中,执行步骤S101的过程可以具体包括如下步骤:In a specific embodiment, the process of executing step S101 may specifically include the following steps:

(1)对多模态医学影像数据进行自适应直方图均衡化处理,得到增强医学影像数据;(1) Performing adaptive histogram equalization processing on multimodal medical image data to obtain enhanced medical image data;

(2)对增强医学影像数据进行非局部均值滤波处理,得到降噪医学影像数据;(2) performing non-local mean filtering on the enhanced medical image data to obtain denoised medical image data;

(3)对降噪医学影像数据进行基于互信息最大化的空间配准,得到统一坐标系下的医学影像数据;(3) Perform spatial registration of the denoised medical image data based on maximizing mutual information to obtain medical image data in a unified coordinate system;

(4)将统一坐标系下的医学影像数据输入U-Net深度学习网络进行分割,得到初步分割图像;(4) Input the medical image data in the unified coordinate system into the U-Net deep learning network for segmentation to obtain a preliminary segmented image;

(5)对初步分割图像进行形态学闭运算处理,得到优化分割图像;(5) Perform morphological closing operation on the preliminary segmented image to obtain an optimized segmented image;

(6)通过边缘检测算法对优化分割图像进行边界信息提取,得到解剖结构边界信息;(6) Extracting boundary information from the optimized segmented image using an edge detection algorithm to obtain boundary information of the anatomical structure;

(7)通过等值面提取算法根据解剖结构边界信息进行曲面重建,得到初步三维像素网格;(7) Using an isosurface extraction algorithm to perform surface reconstruction based on the anatomical structure boundary information, a preliminary three-dimensional pixel grid is obtained;

(8)对初步三维像素网格进行网格简化及细分操作,得到第一候选三维像素网格;(8) performing mesh simplification and subdivision operations on the preliminary three-dimensional pixel grid to obtain a first candidate three-dimensional pixel grid;

(9)对候选三维像素网格进行基于多模态医学影像数据的纹理映射,得到具有表面特征的第二候选三维像素网格;(9) performing texture mapping on the candidate three-dimensional pixel grid based on the multimodal medical imaging data to obtain a second candidate three-dimensional pixel grid having surface features;

(10)对第二候选三维像素网格进行生物力学参数融合,得到三维图像数据集。(10) Perform biomechanical parameter fusion on the second candidate three-dimensional pixel grid to obtain a three-dimensional image data set.

具体地,对多模态医学影像数据进行自适应直方图均衡化处理。这一步骤旨在增强图像对比度,使得图像细节更加清晰。例如,在面部整形手术计划中,对CT扫描图像应用自适应直方图均衡化可以突出显示面部骨骼结构的细微差异,而对MRI图像的处理则能够增强软组织的对比度。这一处理得到的增强医学影像数据为后续分析提供了更加清晰的视觉信息。对增强后的医学影像数据进行非局部均值滤波处理,以降低图像噪声。非局部均值滤波算法通过在图像中寻找相似区域并进行加权平均来去除噪声,同时保留图像的细节和边缘信息。在面部整形手术计划中,这一步骤可以有效去除CT或MRI图像中的随机噪声,同时保留如鼻梁轮廓、眼眶结构等关键解剖特征的细节。降噪后的医学影像数据随后进行基于互信息最大化的空间配准。这一步骤旨在将不同模态(如CT、MRI、3D扫描)的图像对齐到同一坐标系下。Specifically, adaptive histogram equalization is performed on multimodal medical image data. This step aims to enhance image contrast and make image details clearer. For example, in facial plastic surgery planning, applying adaptive histogram equalization to CT scan images can highlight subtle differences in facial bone structure, while processing MRI images can enhance the contrast of soft tissue. The enhanced medical image data obtained by this processing provides clearer visual information for subsequent analysis. The enhanced medical image data is processed by non-local mean filtering to reduce image noise. The non-local mean filtering algorithm removes noise by finding similar regions in the image and performing weighted averaging while retaining image details and edge information. In facial plastic surgery planning, this step can effectively remove random noise in CT or MRI images while retaining details of key anatomical features such as nose bridge contour and orbital structure. The denoised medical image data then undergoes spatial registration based on maximizing mutual information. This step aims to align images of different modalities (such as CT, MRI, 3D scans) to the same coordinate system. 
For example, in maxillofacial plastic surgery planning, this step can accurately align CT images showing bone structures with MRI images showing soft tissue distribution, allowing doctors to fully evaluate the patient's anatomical structure in the same spatial reference system. The registered medical image data is input into the U-Net deep learning network for segmentation. U-Net is a convolutional neural network architecture specifically used for medical image segmentation, which can identify and segment different anatomical structures with high accuracy. In facial plastic surgery planning, U-Net can automatically segment different tissue layers such as skin, fat, muscle, and bone on the face, providing detailed anatomical information for subsequent surgical planning. The preliminary segmented image output by U-Net is processed by morphological closing operations to fill small holes and smooth the boundaries. This step can repair small defects that may occur during the segmentation process, such as filling pores or breaks caused by fine wrinkles in facial skin segmentation.
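上述直方图均衡化步骤可用如下简化示意说明(Python 示例;此处为全局均衡化,专利所述为自适应/分块变体,函数名为示例性假设):The histogram equalization step above can be illustrated with the following simplified sketch (a global variant in Python; the patent describes an adaptive/tiled variant, and the function name is an illustrative assumption):

```python
import numpy as np

def histogram_equalize(img, n_bins=256):
    # 用灰度累计分布函数(CDF)重映射像素值,拉伸对比度
    hist, edges = np.histogram(img.ravel(), bins=n_bins, range=(0, n_bins))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # 归一化到 [0, 1]
    return np.interp(img.ravel(), edges[:-1], cdf * (n_bins - 1)).reshape(img.shape)
```

对已经均匀分布的灰度图,该映射近似恒等;对灰度集中的低对比度图像,则把像素值拉开到整个动态范围。For an already uniformly distributed image the mapping is close to identity; for a low-contrast image it stretches intensities over the full dynamic range.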

随后,通过边缘检测算法(如Canny边缘检测器)对优化后的分割图像进行边界信息提取,得到清晰的解剖结构边界信息。这一步骤可以精确描绘面部轮廓、鼻梁线条、唇形等关键特征的边界。利用等值面提取算法(如Marching Cubes算法)根据解剖结构边界信息进行曲面重建,生成初步的三维像素网格。这一步骤将二维图像中的解剖结构信息转化为三维表面模型,例如,可以重建出立体的鼻形、下颌轮廓等。对初步三维像素网格进行网格简化和细分操作,以优化网格质量和计算效率。网格简化可以减少不必要的细节,而细分则可以在关键区域增加精度。例如,在鼻整形手术计划中,可以对鼻尖等复杂区域进行网格细分,而对相对平坦的鼻梁区域进行适当简化。接下来,对优化后的三维像素网格进行基于多模态医学影像数据的纹理映射。这一步骤将原始医学图像中的表面特征信息映射到三维模型上,使得模型不仅具有准确的几何形状,还包含真实的表面细节。例如,可以将皮肤纹理、色素沉着等信息精确映射到面部三维模型上。Subsequently, the boundary information of the optimized segmented image is extracted by edge detection algorithms (such as the Canny edge detector) to obtain clear anatomical boundary information. This step can accurately depict the boundaries of key features such as facial contours, nose bridge lines, and lip shapes. The surface is reconstructed based on the anatomical boundary information using an isosurface extraction algorithm (such as the Marching Cubes algorithm) to generate a preliminary three-dimensional pixel grid. This step converts the anatomical structure information in the two-dimensional image into a three-dimensional surface model. For example, a three-dimensional nose shape, mandibular contour, etc. can be reconstructed. The preliminary three-dimensional pixel grid undergoes mesh simplification and subdivision to optimize mesh quality and computational efficiency. Mesh simplification can reduce unnecessary details, while subdivision can increase accuracy in key areas. For example, in rhinoplasty surgery planning, complex areas such as the nose tip can be subdivided, while the relatively flat nose bridge area can be appropriately simplified. Next, the optimized three-dimensional pixel grid is texture mapped based on multimodal medical imaging data. This step maps the surface feature information in the original medical image to the three-dimensional model, so that the model not only has accurate geometric shapes but also contains real surface details. For example, information such as skin texture and pigmentation can be accurately mapped onto a 3D facial model.
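上述边界信息提取可用一个基于梯度幅值的简化示意说明(Python 示例;完整的 Canny 检测器还包含高斯平滑、非极大值抑制与双阈值,函数名为示例性假设):The boundary extraction above can be illustrated with a simplified gradient-magnitude sketch (a Python example; a full Canny detector additionally performs Gaussian smoothing, non-maximum suppression, and double thresholding, and the function name is an illustrative assumption):

```python
import numpy as np

def extract_boundary(mask):
    # 对二值分割图做中心差分梯度,梯度幅值非零处即为边界带
    gy, gx = np.gradient(mask.astype(float))
    return np.hypot(gx, gy) > 0

mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1          # 一个方形"解剖结构"的二值分割
edges = extract_boundary(mask)
```

结构内部与远离结构的背景梯度为零,只有轮廓附近的窄带被标记为边界。The interior of the structure and the far background have zero gradient; only a thin band along the contour is flagged as boundary.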

最后,对具有表面特征的三维像素网格进行生物力学参数融合。这一步骤将组织的弹性、硬度等生物力学特性整合到模型中。例如,在面部填充手术计划中,可以为不同区域的软组织赋予相应的弹性参数,以便在后续的虚拟手术模拟中准确预测填充物注入后的组织变形。Finally, the biomechanical parameters are fused to the three-dimensional pixel grid with surface features. This step integrates the biomechanical properties of the tissue, such as elasticity and hardness, into the model. For example, in the facial filler surgery plan, the corresponding elastic parameters can be assigned to the soft tissues in different areas to accurately predict the tissue deformation after the filler is injected in the subsequent virtual surgery simulation.

在一具体实施例中,执行步骤S102的过程可以具体包括如下步骤:In a specific embodiment, the process of executing step S102 may specifically include the following steps:

(1)对三维图像数据集进行小波变换,得到多尺度特征图;(1) Perform wavelet transform on the three-dimensional image data set to obtain a multi-scale feature map;

(2)通过卷积神经网络对多尺度特征图进行深度特征提取,得到深层特征表示;(2) Deep feature extraction is performed on multi-scale feature maps through convolutional neural networks to obtain deep feature representation;

(3)通过特征金字塔网络对深层特征表示进行多层级特征融合,得到多分辨率特征图;(3) Multi-level feature fusion is performed on the deep feature representation through the feature pyramid network to obtain a multi-resolution feature map;

(4)通过自注意力机制对多分辨率特征图进行空间上下文增强,得到增强空间上下文信息;(4) The spatial context of the multi-resolution feature map is enhanced through the self-attention mechanism to obtain enhanced spatial context information;

(5)对增强空间上下文信息进行三维反卷积操作,得到重建体素数据;(5) performing a three-dimensional deconvolution operation on the enhanced spatial context information to obtain reconstructed voxel data;

(6)通过马奇立方体算法对重建体素数据进行表面重建,得到三维表面网格;(6) reconstructing the surface of the reconstructed voxel data using the Marching Cubes algorithm to obtain a three-dimensional surface mesh;

(7)对三维表面网格进行拉普拉斯平滑处理,得到高分辨率几何结构;(7) Perform Laplace smoothing on the three-dimensional surface mesh to obtain a high-resolution geometric structure;

(8)通过纹理映射算法对高分辨率几何结构进行表面特征映射,得到包含多层次组织表示信息的表面数据;(8) Mapping the surface features of high-resolution geometric structures through texture mapping algorithms to obtain surface data containing multi-level tissue representation information;

(9)通过有限元分析算法对包含多层次组织表示信息的表面数据进行力学属性计算,得到包含生物力学数据的数据结构;(9) calculating mechanical properties of surface data containing multi-level tissue representation information by a finite element analysis algorithm to obtain a data structure containing biomechanical data;

(10)对包含生物力学数据的数据结构进行数据压缩和索引构建,得到三维解剖模型。(10) Data compression and index construction are performed on the data structure containing the biomechanical data to obtain a three-dimensional anatomical model.

具体地,对三维图像数据集进行小波变换是关键的第一步。小波变换能够将图像分解为不同尺度的特征,从而生成多尺度特征图。例如,在面部整形手术计划中,小波变换可以同时捕捉面部的大尺度轮廓(如下颌线)和微小细节(如皱纹纹理),为后续分析提供全面的信息基础。Specifically, wavelet transform of 3D image datasets is a critical first step. Wavelet transform can decompose images into features of different scales, thereby generating multi-scale feature maps. For example, in facial plastic surgery planning, wavelet transform can simultaneously capture large-scale contours of the face (such as the jawline) and tiny details (such as wrinkle texture), providing a comprehensive information basis for subsequent analysis.
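这里的多尺度分解可用一个单层二维 Haar 小波变换示意(Python 示例;实际系统对三维数据做多层分解,函数名为示例性假设):The multi-scale decomposition here can be sketched with a single-level 2D Haar wavelet transform (a Python example; the actual system applies a multi-level decomposition to 3D data, and the function name is an illustrative assumption):

```python
import numpy as np

def haar2d(img):
    # 单层二维 Haar 分解:先按行、再按列做平均/差分,得到 LL/LH/HL/HH 四个子带
    lo_r = (img[0::2, :] + img[1::2, :]) / 2   # 行方向低频(平均)
    hi_r = (img[0::2, :] - img[1::2, :]) / 2   # 行方向高频(差分)
    split = lambda x: ((x[:, 0::2] + x[:, 1::2]) / 2,
                       (x[:, 0::2] - x[:, 1::2]) / 2)
    LL, LH = split(lo_r)
    HL, HH = split(hi_r)
    return LL, LH, HL, HH
```

LL 子带对应大尺度结构(如整体轮廓),LH/HL/HH 子带对应不同方向的细节(如皱纹纹理)。The LL subband corresponds to large-scale structure (such as overall contours), while LH/HL/HH capture directional details (such as wrinkle texture).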

通过卷积神经网络对这些多尺度特征图进行深度特征提取。卷积神经网络的多层结构能够逐层学习和提取越来越抽象的特征。在鼻整形手术计划中,浅层网络可能识别简单的边缘和纹理,而深层网络则能捕捉复杂的结构特征,如鼻梁的弧度或鼻翼的形状。这一步骤的输出是一组深层特征表示,它包含了丰富的解剖结构信息。通过特征金字塔网络对这些深层特征表示进行多层级特征融合。特征金字塔网络能够整合不同尺度的特征,生成多分辨率特征图。在全面部年轻化手术计划中,这一步骤可以有效融合面部大区域(如额头、脸颊)的整体特征和局部细节(如眼角纹),为后续的精确规划奠定基础。Deep feature extraction is performed on these multi-scale feature maps through convolutional neural networks. The multi-layer structure of convolutional neural networks can learn and extract increasingly abstract features layer by layer. In rhinoplasty surgery planning, shallow networks may recognize simple edges and textures, while deep networks can capture complex structural features, such as the curvature of the bridge of the nose or the shape of the nose wing. The output of this step is a set of deep feature representations, which contains rich anatomical information. Multi-level feature fusion is performed on these deep feature representations through feature pyramid networks. Feature pyramid networks can integrate features of different scales to generate multi-resolution feature maps. In the planning of comprehensive facial rejuvenation surgery, this step can effectively integrate the overall features of large facial areas (such as forehead and cheeks) and local details (such as eye wrinkles), laying the foundation for subsequent precise planning.

为了进一步增强特征的表达能力,对多分辨率特征图应用自注意力机制进行空间上下文增强。自注意力机制能够捕捉特征之间的长距离依赖关系,从而得到增强的空间上下文信息。例如,在下颌整形手术计划中,自注意力机制可以帮助理解下颌角与整体面部轮廓的关系,确保局部修改与整体美学协调。对增强的空间上下文信息进行三维反卷积操作,这一步骤旨在将抽象的特征信息重新映射到三维空间中,得到重建的体素数据。在面部填充手术计划中,这一步可以精确重建面部各个区域的体积信息,为后续的填充量规划提供依据。通过马奇立方体算法对重建的体素数据进行表面重建,生成三维表面网格。马奇立方体算法能够有效地从体素数据中提取等值面,形成连续的表面网格。例如,在颧骨整形手术计划中,这一步骤可以精确重建颧骨的三维表面形态。In order to further enhance the expressive power of features, the self-attention mechanism is applied to the multi-resolution feature map for spatial context enhancement. The self-attention mechanism can capture the long-range dependencies between features, thereby obtaining enhanced spatial context information. For example, in the planning of mandibular plastic surgery, the self-attention mechanism can help understand the relationship between the mandibular angle and the overall facial contour, ensuring that local modifications are coordinated with the overall aesthetics. The enhanced spatial context information is subjected to a three-dimensional deconvolution operation. This step aims to remap the abstract feature information into three-dimensional space to obtain reconstructed voxel data. In the planning of facial filling surgery, this step can accurately reconstruct the volume information of each area of the face, providing a basis for the subsequent filling volume planning. The reconstructed voxel data is surface reconstructed by the Marching Cubes algorithm to generate a three-dimensional surface mesh. The Marching Cubes algorithm can effectively extract isosurfaces from voxel data to form a continuous surface mesh. For example, in the planning of zygomatic plastic surgery, this step can accurately reconstruct the three-dimensional surface morphology of the zygomatic bone.
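上述自注意力机制的核心计算可用如下不含学习权重的单头示意说明(Python 示例;真实网络中查询/键/值由可学习投影产生,此处直接用特征本身,属示例性假设):The core computation of the self-attention mechanism above can be sketched as a single head without learned weights (a Python example; in a real network the query/key/value come from learned projections, while here the features themselves are used, an illustrative assumption):

```python
import numpy as np

def self_attention(X):
    # X: (空间位置数, 特征维);计算 softmax(X Xᵀ / √d) X
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # 数值稳定
    A = np.exp(scores)
    A /= A.sum(axis=-1, keepdims=True)             # 每行注意力权重和为 1
    return A @ X
```

每个输出位置都是所有位置特征的加权平均,权重由特征相似度决定,因而能建模长距离依赖。Each output position is a similarity-weighted average over all positions, which is what lets the mechanism model long-range dependencies.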

对生成的三维表面网格进行拉普拉斯平滑处理,以获得高分辨率几何结构。拉普拉斯平滑能够去除网格表面的噪声和小波动,同时保留重要的几何特征。在唇部整形手术计划中,这一步骤可以平滑唇部轮廓,同时保留唇峰等关键特征。通过纹理映射算法对高分辨率几何结构进行表面特征映射,得到包含多层次组织表示信息的表面数据。这一步骤将原始医学图像中的纹理信息精确映射到三维模型上。例如,在面部皮肤年轻化手术计划中,可以将皮肤纹理、色素分布等信息映射到面部模型上,为后续的治疗方案设计提供直观参考。通过有限元分析算法对包含多层次组织表示信息的表面数据进行力学属性计算,得到包含生物力学数据的数据结构。有限元分析能够模拟不同组织在外力作用下的变形行为。在乳房整形手术计划中,这一步骤可以预测植入物对周围软组织的影响,帮助选择最合适的植入物大小和位置。对包含生物力学数据的数据结构进行数据压缩和索引构建,得到最终的三维解剖模型。数据压缩可以减少存储空间并提高数据传输效率,而索引构建则有助于快速检索和操作模型中的特定部分。例如,在复杂的颌面部重建手术计划中,这一步骤可以确保在交互式规划过程中能够快速访问和修改模型的任意部分,提高规划效率。The generated 3D surface mesh is Laplace smoothed to obtain high-resolution geometry. Laplace smoothing can remove noise and small fluctuations on the mesh surface while retaining important geometric features. In lip plastic surgery planning, this step can smooth the lip contour while retaining key features such as the lip peak. The surface features of the high-resolution geometry are mapped by the texture mapping algorithm to obtain surface data containing multi-level tissue representation information. This step accurately maps the texture information in the original medical image to the 3D model. For example, in facial skin rejuvenation surgery planning, information such as skin texture and pigment distribution can be mapped to the facial model to provide an intuitive reference for subsequent treatment plan design. The mechanical properties of the surface data containing multi-level tissue representation information are calculated by the finite element analysis algorithm to obtain a data structure containing biomechanical data. Finite element analysis can simulate the deformation behavior of different tissues under external forces. In breast plastic surgery planning, this step can predict the impact of implants on surrounding soft tissues and help choose the most appropriate implant size and location. The data structure containing biomechanical data is compressed and indexed to obtain the final 3D anatomical model. 
Data compression can reduce storage space and improve data transmission efficiency, while index construction helps to quickly retrieve and operate specific parts of the model. For example, in complex maxillofacial reconstruction surgery planning, this step can ensure that any part of the model can be quickly accessed and modified during the interactive planning process, improving planning efficiency.
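上文的拉普拉斯平滑可用如下均匀权重的简化示意说明(Python 示例;真实网格平滑通常带保特征约束或余切权重,函数名与参数为示例性假设):The Laplacian smoothing above can be sketched with uniform weights (a Python example; real mesh smoothing usually adds feature-preserving constraints or cotangent weights, and the names and parameters are illustrative assumptions):

```python
import numpy as np

def laplacian_smooth(points, neighbors, lam=0.5, iters=10):
    # 每轮把各顶点向其邻居坐标均值移动 lam 比例(同步更新)
    P = np.asarray(points, float)
    for _ in range(iters):
        P = np.array([p + lam * (P[nbr].mean(axis=0) - p) if nbr else p
                      for p, nbr in zip(P, neighbors)])
    return P
```

对一条锯齿状折线,迭代平滑会逐步抹平小幅波动,使顶点趋向邻域均值。Applied to a zig-zag polyline, iterated smoothing progressively flattens small oscillations, pulling each vertex toward its neighborhood mean.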

在一具体实施例中,执行步骤S103的过程可以具体包括如下步骤:In a specific embodiment, the process of executing step S103 may specifically include the following steps:

(1)对三维解剖模型进行关键点检测,得到面部特征点集;(1) Detect key points on the 3D anatomical model to obtain a set of facial feature points;

(2)通过主成分分析算法对面部特征点集进行降维处理,得到面部形态主成分;(2) Using principal component analysis algorithm to reduce the dimension of facial feature point set and obtain the principal components of facial morphology;

(3)对面部形态主成分进行统计分析,得到面部美学评分指标;(3) Statistical analysis of the principal components of facial morphology was performed to obtain facial aesthetic scoring indicators;

(4)通过卷积神经网络对三维解剖模型进行特征提取,得到面部深度特征表示信息;(4) Extract features from the three-dimensional anatomical model through a convolutional neural network to obtain facial deep feature representation information;

(5)对面部深度特征表示信息进行聚类分析,得到面部区域分割图像;(5) Perform cluster analysis on facial depth feature representation information to obtain a facial region segmentation image;

(6)通过图卷积网络对面部区域分割图像进行拓扑关系分析,得到面部结构图表示;(6) Perform topological relationship analysis on the facial region segmentation image through a graph convolutional network to obtain a facial structure graph representation;

(7)基于面部美学评分指标,对面部结构图表示进行图谱匹配,得到预期美学标准偏差值;(7) Based on the facial aesthetic scoring index, the facial structure map representation is matched to obtain the expected aesthetic standard deviation value;

(8)通过变分自编码器对预期美学标准偏差值进行参数化编码,得到美学参数空间;(8) parameterizing the expected aesthetic standard deviation value through a variational autoencoder to obtain an aesthetic parameter space;

(9)对美学参数空间进行交互式调节,得到个性化美学目标;(9) Interactively adjust the aesthetic parameter space to obtain personalized aesthetic goals;

(10)通过预置的傅里叶描述子对个性化美学目标进行形状编码,得到量化手术目标参数集。(10) The shape of the personalized aesthetic target is encoded through a preset Fourier descriptor to obtain a quantitative surgical target parameter set.

具体地,对三维解剖模型进行关键点检测,以获取面部特征点集。这些特征点通常包括眼角、鼻尖、唇角等解剖学标志点。例如,在鼻整形手术计划中,精确定位鼻根、鼻尖、鼻翼等关键点对于评估鼻部形态至关重要。随后,通过主成分分析算法对面部特征点集进行降维处理,得到面部形态主成分。主成分分析能够提取出最能代表面部变化的主要特征。在面部轮廓整形计划中,这些主成分可能包括下颌角度、脸颊丰满度等关键因素。对面部形态主成分进行统计分析,得到面部美学评分指标。这一步骤将主成分与美学标准相结合,建立客观的评分系统。例如,在全面部年轻化手术计划中,这些指标可能包括面部对称性、黄金比例符合度等量化指标。Specifically, key point detection is performed on the three-dimensional anatomical model to obtain a set of facial feature points. These feature points usually include anatomical landmarks such as the corners of the eyes, the tip of the nose, and the corners of the lips. For example, in a rhinoplasty surgery plan, accurately locating key points such as the root of the nose, the tip of the nose, and the wing of the nose is crucial for evaluating the morphology of the nose. Subsequently, the facial feature point set is subjected to dimensionality reduction processing using a principal component analysis algorithm to obtain the principal components of facial morphology. Principal component analysis can extract the main features that best represent facial changes. In a facial contouring plan, these principal components may include key factors such as the mandibular angle and cheek fullness. The principal components of facial morphology are statistically analyzed to obtain facial aesthetic scoring indicators. This step combines the principal components with aesthetic standards to establish an objective scoring system. For example, in a comprehensive facial rejuvenation surgery plan, these indicators may include quantitative indicators such as facial symmetry and golden ratio compliance.
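特征点集的主成分分析降维可用如下基于 SVD 的示意说明(Python 示例;假设各样本的特征点已配准并展平成行向量,函数名为示例性假设):The PCA dimensionality reduction of the landmark set can be sketched via SVD (a Python example; it assumes each sample's landmarks are already registered and flattened into a row vector, and the function name is an illustrative assumption):

```python
import numpy as np

def landmark_pca(X, n_components=1):
    # X: (样本数, 展平特征点坐标数);中心化后用 SVD 求主成分方向
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    explained_var = S ** 2 / max(len(X) - 1, 1)    # 各主成分解释的方差
    return mean, Vt[:n_components], explained_var
```

返回的均值形状对应"平均脸",主成分方向则对应最主要的形态变化模式(如下颌角度、脸颊丰满度)。The returned mean corresponds to the "average face," and the principal directions correspond to the dominant modes of shape variation (such as jaw angle or cheek fullness).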

通过卷积神经网络对三维解剖模型进行特征提取,得到面部深度特征表示信息。卷积神经网络能够自动学习并提取复杂的面部特征。在眼部整形手术计划中,这些深度特征可能包括眼睑形状、眼角倾斜度等细微特征。对面部深度特征表示信息进行聚类分析,得到面部区域分割图像。聚类分析能够将相似的特征归类,从而划分出不同的面部区域。在颧骨整形手术计划中,这一步骤可以精确划分出颧骨区域与周围软组织的边界。通过图卷积网络对面部区域分割图像进行拓扑关系分析,得到面部结构图表示。图卷积网络能够捕捉不同面部区域之间的空间关系。在下颌角整形手术计划中,这一步骤可以分析下颌角与邻近面部结构的相互影响。The feature extraction of the three-dimensional anatomical model is performed through a convolutional neural network to obtain the facial deep feature representation information. The convolutional neural network can automatically learn and extract complex facial features. In the eye plastic surgery plan, these deep features may include subtle features such as eyelid shape and eye angle inclination. The facial deep feature representation information is clustered to obtain a facial region segmentation image. Cluster analysis can classify similar features to divide different facial regions. In the zygomatic plastic surgery plan, this step can accurately divide the boundary between the zygomatic region and the surrounding soft tissue. The topological relationship analysis of the facial region segmentation image is performed through a graph convolutional network to obtain a facial structure graph representation. The graph convolutional network can capture the spatial relationship between different facial regions. In the mandibular angle plastic surgery plan, this step can analyze the mutual influence of the mandibular angle and adjacent facial structures.
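上述对深度特征的聚类分析可用一个极简的 k-means 示意说明(纯 Python 示例;为可复现起见以前 k 个点作确定性初始中心,属示例性假设,实际系统的聚类算法未必如此初始化):The cluster analysis of deep features above can be sketched with a minimal k-means (pure Python; for reproducibility the first k points serve as deterministic initial centers, an illustrative assumption that the actual system need not share):

```python
def kmeans(points, k=2, iters=20):
    # 交替执行两步:按最近中心指派标签;按成员均值更新中心
    centers = [list(p) for p in points[:k]]
    for _ in range(iters):
        labels = [min(range(k),
                      key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
                  for p in points]
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centers[j] = [sum(col) / len(members) for col in zip(*members)]
    return labels, centers
```

对特征空间中相距较远的两簇(对应不同面部区域的特征),迭代很快收敛到正确划分。For two well-separated clusters in feature space (features of different facial regions), the iteration quickly converges to the correct partition.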

基于之前得到的面部美学评分指标,对面部结构图表示进行图谱匹配,得到预期美学标准偏差值。这一步骤将患者的面部结构与美学标准进行比对,量化出需要改进的方面。例如,在唇部整形手术计划中,可以计算出当前唇形与理想唇形的偏差程度。通过变分自编码器对预期美学标准偏差值进行参数化编码,得到美学参数空间。变分自编码器能够将复杂的美学特征压缩到一个连续的参数空间中。在面部填充手术计划中,这些参数可能代表不同区域的填充量和形状变化。对美学参数空间进行交互式调节,得到个性化美学目标。这一步骤允许医生和患者在参数空间中进行探索和调整,以达成共识的手术目标。例如,在隆鼻手术计划中,可以通过调整参数来预览不同程度的隆鼻效果。最后,通过预置的傅里叶描述子对个性化美学目标进行形状编码,得到量化手术目标参数集。傅里叶描述子能够用有限的参数精确描述复杂的形状变化。在下巴整形手术计划中,这些参数可能描述下巴轮廓的曲率变化和突出程度。Based on the facial aesthetic scoring index obtained previously, the facial structure graph representation is matched to obtain the expected aesthetic standard deviation value. This step compares the patient's facial structure with the aesthetic standard and quantifies the aspects that need to be improved. For example, in the lip plastic surgery plan, the degree of deviation between the current lip shape and the ideal lip shape can be calculated. The expected aesthetic standard deviation value is parameterized and encoded by the variational autoencoder to obtain the aesthetic parameter space. The variational autoencoder can compress complex aesthetic features into a continuous parameter space. In the facial filling surgery plan, these parameters may represent the filling amount and shape changes in different areas. The aesthetic parameter space is interactively adjusted to obtain personalized aesthetic goals. This step allows doctors and patients to explore and adjust in the parameter space to reach a consensus on the surgical goal. For example, in the rhinoplasty surgery plan, the rhinoplasty effects of different degrees can be previewed by adjusting the parameters. Finally, the personalized aesthetic goal is shape-encoded by the preset Fourier descriptor to obtain a quantitative surgical target parameter set. The Fourier descriptor can accurately describe complex shape changes with limited parameters. In chin surgery planning, these parameters may describe changes in the curvature and projection of the chin contour.
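傅里叶描述子的形状编码可用如下示意说明(Python 示例;把闭合轮廓点视为复数序列,FFT 后仅保留低频系数,函数名与系数个数为示例性假设):The Fourier-descriptor shape encoding can be sketched as follows (a Python example; the closed contour points are treated as a complex sequence and only the low-frequency FFT coefficients are kept; the function name and coefficient count are illustrative assumptions):

```python
import numpy as np

def fourier_descriptor(contour, k=8):
    # contour: (N, 2) 闭合轮廓点;返回 k 个最低频系数与由其重建的轮廓
    z = contour[:, 0] + 1j * contour[:, 1]
    F = np.fft.fft(z)
    freqs = np.fft.fftfreq(len(z)) * len(z)            # 0, 1, ..., -1
    order = np.argsort(np.abs(freqs), kind="stable")   # 低频在前
    keep = np.zeros_like(F)
    keep[order[:k]] = F[order[:k]]
    return F[order[:k]], np.fft.ifft(keep)
```

简单形状(如圆)的能量集中在极少数低频系数上,因此少量参数即可精确描述轮廓;系数个数 k 控制编码保留的形状细节程度。For simple shapes (such as a circle) the energy concentrates in very few low-frequency coefficients, so a handful of parameters describes the contour exactly; k controls how much shape detail the encoding retains.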

通过这一系列步骤,整形外科手术计划从客观的三维解剖模型出发,经过多层次的特征提取、分析和优化,最终得到一个高度个性化且量化的手术目标参数集。这个参数集不仅考虑了患者的个体特征,还融入了美学标准和医学专业知识,为后续的手术模拟和计划制定提供了精确的数学基础。例如,在一个复杂的面部重塑手术计划中,这个参数集可能包含了面部多个区域的形状变化参数、软组织填充量、骨骼调整角度等多维度信息,使得整个手术计划更加精确、可预测和个性化。Through this series of steps, the plastic surgery plan starts from the objective three-dimensional anatomical model, and after multi-level feature extraction, analysis and optimization, it finally obtains a highly personalized and quantified surgical target parameter set. This parameter set not only takes into account the individual characteristics of the patient, but also incorporates aesthetic standards and medical expertise, providing a precise mathematical basis for subsequent surgical simulation and planning. For example, in a complex facial reshaping surgery plan, this parameter set may include multi-dimensional information such as shape change parameters of multiple facial areas, soft tissue filling amount, bone adjustment angle, etc., making the entire surgical plan more accurate, predictable and personalized.

在一具体实施例中,执行步骤S104的过程可以具体包括如下步骤:In a specific embodiment, the process of executing step S104 may specifically include the following steps:

(1)对量化手术目标参数集进行目标分解,得到子目标集合;(1) Decomposing the quantitative surgical target parameter set to obtain a sub-target set;

(2)通过层次分析法对子目标集合进行权重分配,得到加权子目标集;(2) Assign weights to the sub-goal set through the analytic hierarchy process to obtain a weighted sub-goal set;

(3)对加权子目标集进行约束条件定义,得到约束条件数据集;(3) Define constraints on the weighted sub-goal set to obtain a constraint data set;

(4)通过遗传算法对约束条件数据集进行初始解生成,得到候选解集;(4) Generate an initial solution for the constraint condition data set through a genetic algorithm to obtain a candidate solution set;

(5)对候选解集进行适应度评估,得到候选解集对应的质量评分数据;(5) Perform fitness evaluation on the candidate solution set to obtain quality score data corresponding to the candidate solution set;

(6)通过非支配排序遗传算法对质量评分数据进行Pareto优化分析,得到非劣解集;(6) Perform Pareto optimization analysis on the quality score data using a non-dominated sorting genetic algorithm to obtain a non-inferior solution set;

(7)对非劣解集进行聚类分析,得到代表解集;(7) Perform cluster analysis on the non-inferior solution set to obtain the representative solution set;

(8)通过模糊综合评判法对代表解集进行多准则决策,得到最优解;(8) Perform multi-criteria decision-making on the representative solution set through fuzzy comprehensive evaluation method to obtain the optimal solution;

(9)对最优解进行手术步骤分解,得到初步手术计划;(9) Decomposing the optimal solution into surgical steps to obtain a preliminary surgical plan;

(10)通过蒙特卡洛树搜索算法对初步手术计划进行细化,得到优化手术计划。(10) The preliminary surgical plan is refined through the Monte Carlo tree search algorithm to obtain the optimized surgical plan.

具体地,对量化手术目标参数集进行目标分解,得到子目标集合。例如,在一个综合性面部整形手术计划中,可将整体目标分解为鼻部形态改善、颧骨轮廓调整、下颌角修饰等具体子目标。这种分解使得复杂的整形目标更易于分析和优化。通过层次分析法对子目标集合进行权重分配,得到加权子目标集。层次分析法考虑了各子目标之间的相对重要性。在上述面部整形例子中,可能会根据患者需求和医学考虑,给予鼻部形态改善较高的权重,而对颧骨轮廓调整赋予中等权重。对加权子目标集进行约束条件定义,得到约束条件数据集。这些约束条件包括医学安全限制、解剖学可行性等。例如,在鼻部整形中,约束条件可能包括鼻梁高度的最大允许增加值,以确保呼吸功能不受影响。Specifically, the quantitative surgical target parameter set is decomposed to obtain a sub-target set. For example, in a comprehensive facial plastic surgery plan, the overall goal can be decomposed into specific sub-targets such as nose morphology improvement, zygomatic bone contour adjustment, and mandibular angle modification. This decomposition makes complex plastic surgery goals easier to analyze and optimize. The sub-target set is weighted by the analytic hierarchy process to obtain a weighted sub-target set. The analytic hierarchy process takes into account the relative importance of each sub-target. In the above facial plastic surgery example, a higher weight may be given to nose morphology improvement, while a medium weight is given to zygomatic bone contour adjustment based on patient needs and medical considerations. Constraints are defined for the weighted sub-target set to obtain a constraint data set. These constraints include medical safety restrictions, anatomical feasibility, etc. For example, in rhinoplasty, constraints may include the maximum allowable increase in nose bridge height to ensure that respiratory function is not affected.
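层次分析法的权重计算可以用主特征向量法示意如下(Python示例,比较矩阵取假设数值,实际权重应由医生与患者的判断确定):The weight calculation of the analytic hierarchy process can be illustrated with the principal-eigenvector method as follows (a Python sketch; the comparison-matrix values are hypothetical, and real weights should come from the judgments of doctor and patient):

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three sub-goals
# (nose morphology, zygomatic contour, mandibular angle):
# A[i, j] = how many times more important goal i is than goal j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Principal-eigenvector method: the weights are the eigenvector of the
# largest eigenvalue, normalized to sum to one.
eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()

n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)  # consistency index
CR = CI / 0.58                        # random index RI = 0.58 for n = 3
print(w, CR)                          # CR < 0.1 means judgments are consistent
```

该示例中鼻部形态子目标获得最大权重,且一致性比率远小于0.1,说明假设的两两比较判断自洽。In this example the nose-morphology sub-goal receives the largest weight, and the consistency ratio is well below 0.1, indicating the hypothetical pairwise judgments are self-consistent.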

通过遗传算法对约束条件数据集进行初始解生成,得到候选解集。遗传算法模拟生物进化过程,生成满足约束条件的多个可能手术方案。在面部整形计划中,每个候选解代表一种特定的手术参数组合,如鼻尖突出程度、颧骨削减量等。对候选解集进行适应度评估,得到候选解集对应的质量评分数据。评估标准可能包括美学效果、手术风险、恢复时间等多个维度。例如,一个候选解可能在美学效果上得分很高,但在手术风险评估中得分较低。通过非支配排序遗传算法对质量评分数据进行Pareto优化分析,得到非劣解集。Pareto优化考虑多个目标之间的权衡,找出在任一目标上改进都会导致其他目标恶化的解集。在面部整形计划中,这可能产生一系列方案,每个方案在美学效果和手术风险之间取得不同的平衡。Genetic algorithms are used to generate initial solutions for the constraint condition data set to obtain a candidate solution set. Genetic algorithms simulate the biological evolution process and generate multiple possible surgical plans that meet the constraints. In facial plastic surgery planning, each candidate solution represents a specific combination of surgical parameters, such as the degree of nose tip protrusion, the amount of cheekbone reduction, etc. The candidate solution set is evaluated for fitness and the quality score data corresponding to the candidate solution set is obtained. The evaluation criteria may include multiple dimensions such as aesthetic effect, surgical risk, and recovery time. For example, a candidate solution may score high in aesthetic effect but low in surgical risk assessment. The quality score data is subjected to Pareto optimization analysis by a non-dominated sorting genetic algorithm to obtain a non-inferior solution set. Pareto optimization considers the trade-offs between multiple objectives and finds a solution set in which improvement in any one objective will lead to the deterioration of other objectives. In facial plastic surgery planning, this may produce a series of solutions, each of which strikes a different balance between aesthetic effect and surgical risk.
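非支配排序的核心——Pareto非劣解筛选——可以用如下Python示例说明(候选方案的得分为假设数值):The core of non-dominated sorting, namely filtering out the Pareto non-inferior solutions, can be illustrated with the following Python sketch (the candidate scores are hypothetical):

```python
import numpy as np

def pareto_front(scores):
    """Return indices of non-dominated solutions.

    scores: (n, m) array where every objective is to be MAXIMIZED
    (e.g. columns = aesthetic score, safety score). Solution i is
    dominated if some j is >= on all objectives and > on at least one.
    """
    n = scores.shape[0]
    keep = []
    for i in range(n):
        dominated = False
        for j in range(n):
            if j != i and np.all(scores[j] >= scores[i]) and np.any(scores[j] > scores[i]):
                dominated = True
                break
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical candidate plans scored on (aesthetic, safety):
plans = np.array([
    [0.9, 0.4],   # aggressive: great looks, higher risk
    [0.5, 0.9],   # conservative: safer, modest change
    [0.7, 0.7],   # compromise
    [0.4, 0.4],   # dominated by the compromise plan
])
print(pareto_front(plans))  # -> [0, 1, 2]
```

前三个方案互不支配,各自代表美学效果与安全性之间的一种权衡;第四个方案在所有目标上都不优于折中方案,因而被剔除。The first three plans do not dominate each other and each represents one trade-off between aesthetics and safety; the fourth plan is no better than the compromise plan on any objective and is therefore discarded.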

对非劣解集进行聚类分析,得到代表解集。聚类分析将相似的解归为一组,从每组中选取代表性解,以减少需要进一步评估的方案数量。在面部整形例子中,可能会得到几个典型方案,如保守方案、激进方案和折中方案。通过模糊综合评判法对代表解集进行多准则决策,得到最优解。模糊综合评判法能够处理决策过程中的不确定性和主观性。在整形手术计划中,这一步骤综合考虑医生经验、患者偏好等因素,从代表解集中选出最佳方案。对最优解进行手术步骤分解,得到初步手术计划。这一步将抽象的参数转化为具体的手术步骤序列。例如,在面部整形中,可能包括“首先进行鼻骨切割,提升鼻梁高度2mm”等详细步骤。最后,通过蒙特卡洛树搜索算法对初步手术计划进行细化,得到优化手术计划。蒙特卡洛树搜索通过模拟大量可能的手术过程,优化每个步骤的具体参数和执行顺序。在面部整形计划中,这可能涉及调整每个切口的精确位置和深度,或优化填充物注射的顺序和剂量。基于医学影像的整形外科计划生成方法将复杂的手术目标转化为一个经过优化、可执行的手术计划。这个过程不仅考虑了多个目标之间的平衡,还融入了医学约束和专业判断,最终产生一个既满足美学需求,又确保医学安全性的个性化手术方案。例如,在一个全面部年轻化手术计划中,最终的优化手术计划可能包含精确到毫米的面部填充剂注射位置和剂量,精确的皮肤拉伸参数,以及详细的手术步骤时序,所有这些都经过了多轮优化,以在美学效果、安全性和恢复速度之间取得最佳平衡。Cluster analysis is performed on the non-inferior solution set to obtain a representative solution set. Cluster analysis groups similar solutions into a group, and selects representative solutions from each group to reduce the number of solutions that need to be further evaluated. In the facial plastic surgery example, several typical solutions may be obtained, such as conservative solutions, radical solutions, and compromise solutions. The fuzzy comprehensive evaluation method is used to make multi-criteria decisions on the representative solution set to obtain the optimal solution. The fuzzy comprehensive evaluation method can handle the uncertainty and subjectivity in the decision-making process. In plastic surgery planning, this step comprehensively considers factors such as doctor experience and patient preferences to select the best solution from the representative solution set. The optimal solution is decomposed into surgical steps to obtain a preliminary surgical plan. This step converts abstract parameters into a specific sequence of surgical steps. For example, in facial plastic surgery, it may include detailed steps such as "first perform nasal bone cutting and raise the height of the nose bridge by 2mm". Finally, the preliminary surgical plan is refined by the Monte Carlo tree search algorithm to obtain the optimized surgical plan. 
Monte Carlo tree search optimizes the specific parameters and execution order of each step by simulating a large number of possible surgical processes. In facial plastic surgery planning, this may involve adjusting the precise location and depth of each incision, or optimizing the order and dosage of filler injections. Medical imaging-based plastic surgery plan generation methods transform complex surgical goals into an optimized, executable surgical plan. This process not only considers the balance between multiple goals, but also incorporates medical constraints and professional judgment, ultimately producing a personalized surgical plan that meets both aesthetic needs and ensures medical safety. For example, in a comprehensive facial rejuvenation surgery plan, the final optimized surgical plan may contain facial filler injection locations and dosages accurate to the millimeter, precise skin stretching parameters, and detailed surgical step timing, all of which have been optimized multiple times to achieve the best balance between aesthetic effects, safety, and recovery speed.
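模糊综合评判的加权平均算子可以示意如下(Python示例,权重与隶属度矩阵均为假设数值):The weighted-average operator of fuzzy comprehensive evaluation can be sketched as follows (a Python example; both the weights and the membership matrix are hypothetical):

```python
import numpy as np

# Criteria weights (aesthetic effect, surgical risk, recovery time);
# hypothetical values, in practice derived from AHP or expert input.
w = np.array([0.5, 0.3, 0.2])

# Membership matrix R for one representative plan: each row is a criterion,
# each column the degree of membership in {excellent, good, fair, poor}.
R = np.array([
    [0.6, 0.3, 0.1, 0.0],   # aesthetic effect
    [0.2, 0.5, 0.2, 0.1],   # surgical risk
    [0.3, 0.4, 0.2, 0.1],   # recovery time
])

# Weighted-average fuzzy operator: B = w . R, normalized to a distribution.
B = w @ R
B = B / B.sum()
grade = ["excellent", "good", "fair", "poor"][int(np.argmax(B))]
print(B, grade)
```

对每个代表解分别计算评判向量B,按最大隶属度或期望评分比较即可完成多准则决策。Computing the evaluation vector B for each representative solution and comparing them by maximum membership (or expected score) completes the multi-criteria decision.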

在一具体实施例中,执行步骤S105的过程可以具体包括如下步骤:In a specific embodiment, the process of executing step S105 may specifically include the following steps:

(1)对三维解剖模型进行网格简化,得到简化三维网格数据;(1) Simplifying the mesh of the three-dimensional anatomical model to obtain simplified three-dimensional mesh data;

(2)对简化三维网格数据进行离散化处理,得到计算网格数据;(2) discretizing the simplified three-dimensional grid data to obtain computational grid data;

(3)对计算网格数据进行材料属性分配,得到计算数据;(3) Assigning material properties to the computational grid data to obtain computational data;

(4)通过质点-弹簧系统算法对计算数据进行动力学参数化,得到可变形体参数集;(4) Dynamically parameterize the calculated data through the mass-spring system algorithm to obtain the deformable body parameter set;

(5)对可变形体参数集进行边界条件设置,得到约束条件下的可变形体数据;(5) setting boundary conditions for the deformable body parameter set to obtain deformable body data under constraint conditions;

(6)对约束条件下的可变形体数据进行时间离散化,得到动态仿真数据;(6) Time discretization of the deformable body data under constraint conditions to obtain dynamic simulation data;

(7)对动态仿真数据进行时序数据分解,得到变形过程时序数据;(7) Decomposing the dynamic simulation data into time series data to obtain the time series data of the deformation process;

(8)通过形状匹配算法对变形过程时序数据进行轮廓对齐,得到标准化变形数据集;(8) Using shape matching algorithm to align the contours of the deformation process time series data to obtain a standardized deformation data set;

(9)对标准化变形数据集进行统计分析,得到变形特征分布数据;(9) Performing statistical analysis on the standardized deformation data set to obtain deformation feature distribution data;

(10)通过生成对抗网络对变形特征分布数据进行外观合成,得到术后外观预测数据。(10) The appearance of the deformation feature distribution data is synthesized through a generative adversarial network to obtain the postoperative appearance prediction data.

具体地,对三维解剖模型进行网格简化,得到简化三维网格数据。这一步骤通过减少网格中的顶点和面数量,在保持模型主要特征的同时降低计算复杂度。例如,在面部整形手术计划中,可以保留面部关键特征点(如鼻尖、唇角)的精细网格,而对相对平坦区域进行适度简化。Specifically, the mesh of the 3D anatomical model is simplified to obtain simplified 3D mesh data. This step reduces the computational complexity while maintaining the main features of the model by reducing the number of vertices and faces in the mesh. For example, in facial plastic surgery planning, the fine mesh of key facial feature points (such as the tip of the nose and the corner of the lip) can be retained, while relatively flat areas can be moderately simplified.

对简化三维网格数据进行离散化处理,得到计算网格数据。离散化过程将连续的几何模型转换为离散的计算单元,为后续的数值分析做准备。在面部填充手术计划中,这一步骤可能将面部软组织划分为数千个四面体单元,每个单元代表一小块组织。对计算网格数据进行材料属性分配,得到计算数据。这一步骤为网格中的每个单元赋予相应的物理属性,如弹性模量、密度等。在鼻整形手术计划中,可能会为鼻尖软骨、鼻梁骨骼和周围软组织分配不同的材料属性,以准确模拟它们在手术中的行为。The simplified 3D mesh data is discretized to obtain computational mesh data. The discretization process converts the continuous geometric model into discrete computational units in preparation for subsequent numerical analysis. In facial filler surgery planning, this step may divide the facial soft tissue into thousands of tetrahedral units, each unit representing a small piece of tissue. Material properties are assigned to the computational mesh data to obtain computational data. This step assigns corresponding physical properties, such as elastic modulus, density, etc., to each unit in the mesh. In rhinoplasty surgery planning, different material properties may be assigned to the tip cartilage, bridge of the nose bone, and surrounding soft tissue to accurately simulate their behavior during surgery.

通过质点-弹簧系统算法对计算数据进行动力学参数化,得到可变形体参数集。质点-弹簧系统将组织模型化为由质点和弹簧组成的网络,能够模拟组织的弹性变形。在面部拉皮手术计划中,这一步骤可以设定面部皮肤和皮下组织的弹性参数,以预测皮肤拉伸后的形态变化。对可变形体参数集进行边界条件设置,得到约束条件下的可变形体数据。边界条件定义了模型在模拟过程中的固定点和外力作用点。例如,在下颌角整形手术计划中,可能会设置颌骨与头骨连接处为固定点,而在下颌角区域施加切除力。对约束条件下的可变形体数据进行时间离散化,得到动态仿真数据。时间离散化将连续的变形过程分解为一系列离散的时间步,便于逐步计算组织变形。在面部填充手术计划中,这可能涉及模拟填充物注入后软组织随时间变化的过程。The calculated data is dynamically parameterized by the mass-spring system algorithm to obtain a deformable body parameter set. The mass-spring system models the tissue as a network of mass points and springs, which can simulate the elastic deformation of the tissue. In the facelift surgery planning, this step can set the elastic parameters of the facial skin and subcutaneous tissue to predict the morphological changes of the skin after stretching. The deformable body parameter set is subjected to boundary conditions to obtain the deformable body data under constraint conditions. The boundary conditions define the fixed points and external force application points of the model during the simulation process. For example, in the mandibular angle plastic surgery planning, the connection between the jaw and the skull may be set as a fixed point, and the resection force is applied to the mandibular angle area. The deformable body data under constraint conditions is time discretized to obtain dynamic simulation data. Time discretization decomposes the continuous deformation process into a series of discrete time steps, which facilitates the step-by-step calculation of tissue deformation. In the facial filler surgery planning, this may involve simulating the process of soft tissue changes over time after filler injection.
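质点-弹簧系统的一个显式欧拉时间步可以示意如下(Python示例,弹性参数为假设数值;真实软组织仿真还需要更复杂的本构模型与更稳定的积分器):One explicit-Euler time step of the mass-spring system can be sketched as follows (a Python example with hypothetical elastic parameters; a real soft-tissue simulation would also need richer constitutive models and stabler integrators):

```python
import numpy as np

def mass_spring_step(pos, vel, edges, rest_len, k, mass, damping, dt, fixed):
    """One explicit-Euler time step of a mass-spring soft-tissue sketch.

    pos, vel: (n, 3) positions/velocities of mass points.
    edges:    (i, j) spring pairs with matching rest lengths rest_len.
    fixed:    boolean mask of anchored points (e.g. bone attachments).
    """
    force = np.zeros_like(pos)
    for (i, j), L0 in zip(edges, rest_len):
        d = pos[j] - pos[i]
        L = np.linalg.norm(d)
        if L > 1e-12:
            f = k * (L - L0) * d / L    # Hooke's law along the spring axis
            force[i] += f
            force[j] -= f
    force -= damping * vel              # simple viscous damping
    vel = vel + dt * force / mass
    vel[fixed] = 0.0                    # anchored points never move
    pos = pos + dt * vel
    return pos, vel

# A single spring: point 0 anchored, point 1 stretched to twice rest length
pos = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
vel = np.zeros_like(pos)
fixed = np.array([True, False])
for _ in range(200):
    pos, vel = mass_spring_step(pos, vel, [(0, 1)], [1.0],
                                k=10.0, mass=1.0, damping=2.0, dt=0.01, fixed=fixed)
# the free point relaxes toward the rest length of 1.0
```

固定点对应上文的边界条件,循环的每一轮对应一个离散时间步;对数千个四面体单元的网格,同样的更新按单元批量执行。The fixed mask corresponds to the boundary conditions above, and each loop iteration is one discrete time step; for a mesh of thousands of tetrahedral units the same update is applied in batch over the elements.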

对动态仿真数据进行时序数据分解,得到变形过程时序数据。这一步骤将整个变形过程分解为一系列关键时间点的状态。在隆鼻手术计划中,可能包括初始状态、植入物放置后的即时状态、术后肿胀高峰期、以及最终稳定状态等多个时间点的数据。通过形状匹配算法对变形过程时序数据进行轮廓对齐,得到标准化变形数据集。形状匹配确保不同时间点的数据在空间上对应一致,便于后续分析。在面部轮廓整形手术计划中,这一步骤可以确保面部轮廓在整个变形过程中的对应关系,从而准确追踪每个部位的变化。对标准化变形数据集进行统计分析,得到变形特征分布数据。统计分析可以揭示变形过程中的规律和特征。例如,在全面部年轻化手术计划中,可以分析不同区域软组织厚度变化的分布情况,了解填充效果的均匀性和持久性。通过生成对抗网络对变形特征分布数据进行外观合成,得到术后外观预测数据。生成对抗网络能够基于统计特征生成逼真的视觉效果。在面部整形手术计划中,这一步骤可以生成患者术后不同时期的面部外观图像,包括皮肤质地、色泽变化等细节。基于医学影像的整形外科计划生成方法实现了从静态三维模型到动态变形预测的转变,为手术计划提供了全面的模拟和预测支持。例如,在一个复杂的面部重塑手术计划中,这个过程可以精确预测多个手术步骤(如颧骨降低、下颌角缩小、面部填充等)的组合效果,不仅包括最终的外观变化,还包括术后不同时期的恢复过程。这种详细的模拟和预测能力极大地提高了手术计划的准确性和可靠性,使医生能够更好地评估手术风险,优化手术策略,并为患者提供更精确的术后效果预期。The dynamic simulation data is decomposed into time series data to obtain the deformation process time series data. This step decomposes the entire deformation process into a series of states at key time points. In the rhinoplasty surgery plan, it may include data at multiple time points such as the initial state, the immediate state after implant placement, the peak of postoperative swelling, and the final stable state. The time series data of the deformation process are aligned by shape matching algorithm to obtain a standardized deformation data set. Shape matching ensures that the data at different time points correspond to each other in space, which is convenient for subsequent analysis. In the facial contour plastic surgery plan, this step can ensure the correspondence of the facial contour throughout the deformation process, so as to accurately track the changes in each part. The standardized deformation data set is statistically analyzed to obtain the deformation feature distribution data. Statistical analysis can reveal the laws and characteristics of the deformation process. For example, in the full facial rejuvenation surgery plan, the distribution of soft tissue thickness changes in different regions can be analyzed to understand the uniformity and durability of the filling effect. 
The appearance of the deformation feature distribution data is synthesized by the generative adversarial network to obtain the postoperative appearance prediction data. The generative adversarial network can generate realistic visual effects based on statistical features. In facial plastic surgery planning, this step can generate images of the patient's facial appearance at different times after surgery, including details such as skin texture and color changes. The plastic surgery plan generation method based on medical imaging realizes the transition from static three-dimensional models to dynamic deformation prediction, providing comprehensive simulation and prediction support for surgical planning. For example, in a complex facial reshaping surgery plan, this process can accurately predict the combined effects of multiple surgical steps (such as zygomatic bone reduction, mandibular angle reduction, facial filling, etc.), including not only the final appearance changes, but also the recovery process at different times after surgery. This detailed simulation and prediction capability greatly improves the accuracy and reliability of surgical planning, enabling doctors to better assess surgical risks, optimize surgical strategies, and provide patients with more accurate expectations of postoperative effects.

在一具体实施例中,执行步骤S106的过程可以具体包括如下步骤:In a specific embodiment, the process of executing step S106 may specifically include the following steps:

(1)对术后外观预测模型进行特征点提取,得到关键解剖标记点集;(1) Extract feature points from the postoperative appearance prediction model to obtain a set of key anatomical landmark points;

(2)通过迭代最近点算法对关键解剖标记点集进行实时位置匹配,得到动态配准矩阵;(2) Using the iterative closest point algorithm, the key anatomical landmark points are matched in real time to obtain a dynamic registration matrix;

(3)对动态配准矩阵进行误差分析,得到配准精度评估数据;(3) Perform error analysis on the dynamic registration matrix to obtain registration accuracy evaluation data;

(4)通过卡尔曼滤波算法对配准精度评估数据进行噪声抑制,得到平滑化的配准参数;(4) Suppressing the noise of the registration accuracy evaluation data by using the Kalman filter algorithm to obtain smoothed registration parameters;

(5)对平滑化的配准参数进行时间序列分析,得到配准趋势数据;(5) Perform time series analysis on the smoothed registration parameters to obtain registration trend data;

(6)通过支持向量机对配准趋势数据进行分类,得到手术进程状态标识;(6) Classifying the registration trend data through support vector machine to obtain the surgical process status identification;

(7)对手术进程状态标识进行决策树分析,得到实时反馈指令集;(7) Perform decision tree analysis on the surgical process status identifier to obtain a real-time feedback instruction set;

(8)通过强化学习算法对实时反馈指令集进行优化,得到自适应导航策略;(8) Optimizing the real-time feedback instruction set through reinforcement learning algorithm to obtain an adaptive navigation strategy;

(9)对自适应导航策略进行数据流化处理,得到手术导航数据流;(9) performing data stream processing on the adaptive navigation strategy to obtain a surgical navigation data stream;

(10)通过深度神经网络对手术导航数据流进行综合分析,得到目标整形外科计划。(10) The surgical navigation data stream is comprehensively analyzed through a deep neural network to obtain the target plastic surgery plan.

具体地,对术后外观预测模型进行特征点提取,得到关键解剖标记点集。这些标记点可能包括鼻尖、唇峰、眼角等面部关键位置。例如,在鼻整形手术计划中,可能会提取鼻根、鼻尖、鼻翼等20个特征点,用于追踪鼻部形态的变化。通过迭代最近点算法对关键解剖标记点集进行实时位置匹配,得到动态配准矩阵。迭代最近点算法通过不断迭代找到两组点集之间的最佳匹配,从而实现模型与实际情况的对齐。在面部轮廓整形手术计划中,这一步骤可以追踪面部轮廓点在手术过程中的移动,生成一个4x4的变换矩阵,描述每个时间点模型需要的旋转和平移。对动态配准矩阵进行误差分析,得到配准精度评估数据。误差分析计算预测位置与实际位置之间的偏差,量化配准的准确性。例如,在下颌角整形手术计划中,可能会计算下颌角位置的平均误差和最大误差,以评估手术进程的精确度。Specifically, feature points are extracted from the postoperative appearance prediction model to obtain a set of key anatomical marker points. These marker points may include key facial positions such as the nose tip, lip peak, and eye corners. For example, in a rhinoplasty surgery plan, 20 feature points such as the root of the nose, nose tip, and nose wing may be extracted to track changes in the nose morphology. The key anatomical marker point set is matched in real time by the iterative closest point algorithm to obtain a dynamic registration matrix. The iterative closest point algorithm finds the best match between two sets of points through continuous iterations, thereby aligning the model with the actual situation. In the facial contour plastic surgery plan, this step can track the movement of facial contour points during the operation and generate a 4x4 transformation matrix to describe the rotation and translation required by the model at each time point. Error analysis is performed on the dynamic registration matrix to obtain registration accuracy evaluation data. Error analysis calculates the deviation between the predicted position and the actual position to quantify the accuracy of the registration. For example, in the mandibular angle plastic surgery plan, the average error and maximum error of the mandibular angle position may be calculated to evaluate the accuracy of the surgical process.
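迭代最近点算法每次迭代的核心是在已知对应关系下求解最优刚体变换,常用SVD(Kabsch)解法,示意如下(Python示例,标记点数据为随机生成的假设数据):The core of each iteration of the iterative closest point algorithm is solving the optimal rigid transform under known correspondences, commonly via the SVD (Kabsch) solution, sketched below (a Python example with randomly generated hypothetical landmark data):

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t with R @ P_i + t ~ Q_i.

    P, Q: (n, 3) corresponding landmark sets (e.g. model vs. measured
    anatomical markers). This SVD (Kabsch) solve is the step that ICP
    repeats after re-estimating point correspondences.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # avoid reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Sanity check: recover a known rotation about z plus a translation
rng = np.random.default_rng(0)
P = rng.normal(size=(20, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true
R, t = best_rigid_transform(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

将R与t嵌入4x4齐次矩阵即得上文所述的动态配准矩阵;配准残差则可直接用于上述误差分析。Embedding R and t into a 4x4 homogeneous matrix yields the dynamic registration matrix described above, and the registration residuals feed directly into the error analysis.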

通过卡尔曼滤波算法对配准精度评估数据进行噪声抑制,得到平滑化的配准参数。卡尔曼滤波能够有效去除测量噪声,提供更稳定的估计。在面部填充手术计划中,这一步骤可以平滑填充物注入过程中由于组织变形造成的跟踪波动,提供更加连续和可靠的位置信息。对平滑化的配准参数进行时间序列分析,得到配准趋势数据。时间序列分析可以揭示手术进程中的变化模式和趋势。在全面部年轻化手术计划中,这可能涉及分析面部各区域随时间的变化趋势,如皮肤拉伸程度、填充物分布变化等。通过支持向量机对配准趋势数据进行分类,得到手术进程状态标识。支持向量机可以根据配准趋势数据将手术进程分类为不同的状态。例如,在鼻整形手术计划中,可能将手术过程分为初始切割、骨骼重塑、软组织调整等几个关键阶段,每个阶段都有其特定的配准趋势特征。对手术进程状态标识进行决策树分析,得到实时反馈指令集。决策树分析基于当前状态和预定目标,生成下一步操作的指导。在颧骨整形手术计划中,决策树可能根据当前颧骨位置和目标形态,生成如“向内侧移动2mm”或“向上旋转3度”等具体指令。The registration accuracy assessment data is noise-reduced by the Kalman filter algorithm to obtain smoothed registration parameters. Kalman filtering can effectively remove measurement noise and provide more stable estimates. In facial filler surgery planning, this step can smooth tracking fluctuations caused by tissue deformation during filler injection and provide more continuous and reliable position information. The smoothed registration parameters are analyzed in time series to obtain registration trend data. Time series analysis can reveal the changing patterns and trends in the surgical process. In a comprehensive facial rejuvenation surgery plan, this may involve analyzing the changing trends of various facial regions over time, such as the degree of skin stretching, filler distribution changes, etc. The registration trend data is classified by support vector machine to obtain the surgical process status identification. The support vector machine can classify the surgical process into different states based on the registration trend data. For example, in a rhinoplasty surgery plan, the surgical process may be divided into several key stages such as initial cutting, bone reshaping, and soft tissue adjustment, each of which has its own specific registration trend characteristics. The surgical process status identification is analyzed by decision tree analysis to obtain a real-time feedback instruction set. 
The decision tree analysis generates guidance for the next operation based on the current state and the predetermined goal. In the planning of zygomatic bone surgery, the decision tree may generate specific instructions such as "move 2 mm inward" or "rotate 3 degrees upward" based on the current zygomatic bone position and target morphology.
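标量卡尔曼滤波对单个配准参数的平滑过程可以示意如下(Python示例,信号与噪声参数均为假设数值):The smoothing of a single registration parameter by a scalar Kalman filter can be sketched as follows (a Python example; both the signal and the noise parameters are hypothetical):

```python
import random

def kalman_smooth(measurements, q=1e-4, r=0.04, x0=0.0, p0=1.0):
    """Scalar Kalman filter for one noisy registration parameter.

    measurements: noisy readings over time (e.g. one translation
    component of the registration matrix).
    q: process-noise variance, r: measurement-noise variance.
    """
    x, p = x0, p0
    out = []
    for z in measurements:
        p = p + q                # predict (random-walk state model)
        g = p / (p + r)          # Kalman gain
        x = x + g * (z - x)      # correct with the measurement residual
        p = (1.0 - g) * p
        out.append(x)
    return out

# Noisy constant signal: true value 2.0 mm, measurement noise sigma = 0.2 mm
random.seed(1)
noisy = [2.0 + random.gauss(0.0, 0.2) for _ in range(200)]
smooth = kalman_smooth(noisy, x0=noisy[0])
# the filtered track hugs 2.0 far more tightly than the raw readings
```

过程噪声方差q与测量噪声方差r的比值决定了平滑程度:q越小,滤波输出越平滑,但对真实变化的响应也越慢。The ratio of process-noise variance q to measurement-noise variance r sets the degree of smoothing: a smaller q gives a smoother output but a slower response to genuine changes.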

通过强化学习算法对实时反馈指令集进行优化,得到自适应导航策略。强化学习通过模拟大量手术场景,学习最优的决策策略。在复杂的面部重塑手术计划中,强化学习可以优化多个手术步骤的顺序和参数,以达到最佳的整体效果。The real-time feedback instruction set is optimized through reinforcement learning algorithms to obtain adaptive navigation strategies. Reinforcement learning learns the optimal decision-making strategy by simulating a large number of surgical scenarios. In complex facial reshaping surgery plans, reinforcement learning can optimize the sequence and parameters of multiple surgical steps to achieve the best overall effect.

对自适应导航策略进行数据流化处理,得到手术导航数据流。数据流化将离散的策略转换为连续的数据流,便于实时处理和传输。在实际的手术计划模拟中,这可能表现为每秒生成60帧的导航数据,包含工具位置、力反馈、下一步建议等信息。The adaptive navigation strategy is processed into a data stream to obtain a surgical navigation data stream. Data streaming converts discrete strategies into a continuous data stream for real-time processing and transmission. In an actual surgical planning simulation, this may be manifested as generating 60 frames of navigation data per second, including information such as tool position, force feedback, and next step suggestions.

最后,通过深度神经网络对手术导航数据流进行综合分析,得到目标整形外科计划。深度神经网络能够从大量数据中学习复杂的模式和关系,生成最终的手术计划。在一个全面的面部整形手术计划中,深度神经网络可能综合考虑患者的面部结构、期望效果、手术风险等多个因素,生成一个详细的手术步骤列表,包括每个步骤的精确参数、预期效果和潜在风险。基于医学影像的整形外科计划生成方法实现了从静态预测到动态导航的转变,为整形外科手术提供了全面的规划和导航支持。例如,在一个复杂的面部重建手术计划中,这个过程不仅能够精确预测手术的最终效果,还能在模拟手术过程中提供实时的导航和调整建议,使得手术计划更加精确、安全和个性化。这种动态、自适应的计划生成方法极大地提高了整形外科手术的精确度和成功率,为医生提供了强大的决策支持工具。Finally, the surgical navigation data stream is comprehensively analyzed by a deep neural network to obtain the target plastic surgery plan. Deep neural networks can learn complex patterns and relationships from a large amount of data to generate the final surgical plan. In a comprehensive facial plastic surgery plan, a deep neural network may comprehensively consider multiple factors such as the patient's facial structure, expected results, and surgical risks to generate a detailed list of surgical steps, including the precise parameters, expected results, and potential risks of each step. The plastic surgery plan generation method based on medical images has achieved a transition from static prediction to dynamic navigation, providing comprehensive planning and navigation support for plastic surgery. For example, in a complex facial reconstruction surgery plan, this process can not only accurately predict the final effect of the surgery, but also provide real-time navigation and adjustment suggestions during the simulated operation, making the surgical plan more accurate, safe, and personalized. This dynamic and adaptive plan generation method greatly improves the accuracy and success rate of plastic surgery and provides doctors with a powerful decision support tool.

上面对本申请实施例中基于医学影像的整形外科计划生成方法进行了描述,下面对本申请实施例中基于医学影像的整形外科计划生成装置进行描述,请参阅图2,本申请实施例中基于医学影像的整形外科计划生成装置一个实施例包括:The above describes the method for generating a plastic surgery plan based on medical images in the embodiment of the present application. The following describes the device for generating a plastic surgery plan based on medical images in the embodiment of the present application. Please refer to FIG. 2. An embodiment of the device for generating a plastic surgery plan based on medical images in the embodiment of the present application includes:

分割模块201,用于对获取的多模态医学影像数据进行预处理及自适应分割,得到三维图像数据集;The segmentation module 201 is used to pre-process and adaptively segment the acquired multi-modal medical image data to obtain a three-dimensional image data set;

重建模块202,用于对所述三维图像数据集进行多尺度特征融合重建,得到三维解剖模型,其中,所述三维解剖模型包括高分辨率几何结构、多层次组织表示信息以及生物力学数据;A reconstruction module 202 is used to perform multi-scale feature fusion reconstruction on the three-dimensional image data set to obtain a three-dimensional anatomical model, wherein the three-dimensional anatomical model includes high-resolution geometric structure, multi-level tissue representation information and biomechanical data;

定义模块203,用于对所述三维解剖模型进行基于面部特征的交互式目标定义,得到量化手术目标参数集;A definition module 203, used for performing interactive target definition based on facial features on the three-dimensional anatomical model to obtain a quantitative surgical target parameter set;

处理模块204,用于对所述量化手术目标参数集进行多目标优化处理,得到优化手术计划;A processing module 204 is used to perform multi-objective optimization processing on the quantitative surgical target parameter set to obtain an optimized surgical plan;

模拟模块205,用于基于所述优化手术计划,对所述三维解剖模型执行基于物理引擎驱动的虚拟手术模拟,得到术后外观预测模型;A simulation module 205 is used to perform a virtual surgery simulation driven by a physical engine on the three-dimensional anatomical model based on the optimized surgical plan to obtain a postoperative appearance prediction model;

分析模块206,用于对所述术后外观预测模型进行动态配准及实时反馈分析,得到手术导航数据流,并根据所述手术导航数据流生成目标整形外科计划。The analysis module 206 is used to perform dynamic registration and real-time feedback analysis on the postoperative appearance prediction model to obtain a surgical navigation data stream, and generate a target plastic surgery plan based on the surgical navigation data stream.

通过上述各个组成部分的协同合作,通过对多模态医学影像数据进行预处理和自适应分割,结合自适应直方图均衡化、非局部均值滤波和基于互信息最大化的空间配准等技术,显著提高了原始医学影像的质量和信息提取效率,为后续处理奠定了坚实基础。其次,采用多尺度特征融合重建技术,包括小波变换、卷积神经网络深度特征提取、特征金字塔网络和自注意力机制等,实现了高精度的三维解剖模型重建,不仅保留了高分辨率几何结构,还包含了多层次组织表示信息和生物力学数据,为手术规划提供了全面、精确的解剖信息。在目标定义阶段,引入了基于面部特征的交互式目标定义方法,通过主成分分析、卷积神经网络特征提取、图卷积网络拓扑分析等技术,实现了手术目标的精确量化,大大提高了手术规划的个性化程度和精准性。在优化阶段,采用多目标优化处理技术,包括层次分析法、遗传算法、非支配排序遗传算法和模糊综合评判法等,有效平衡了美学效果、安全性、恢复速度等多个目标,生成了最优的手术计划。虚拟手术模拟环节中,基于物理引擎驱动的仿真技术,结合质点-弹簧系统算法和形状匹配算法,实现了高度逼真的软组织变形模拟,大大提高了术后效果预测的准确性。最后,在术中导航和实时反馈分析方面,通过迭代最近点算法、卡尔曼滤波、支持向量机分类和强化学习等技术的综合应用,实现了高精度的动态配准和实时反馈,为手术过程提供了精确的导航支持。这一系列技术创新和集成不仅显著提高了整形外科手术规划的精度和可靠性,还实现了从术前规划到术中导航的全流程智能化支持。通过综合利用多模态医学影像、深度学习、计算机视觉和人工智能技术,本方法有效克服了传统手术规划方法中信息利用不充分、目标定义不精确、优化不全面、模拟不逼真等问题,为医生提供了更全面、更准确的决策支持。Through the collaborative cooperation of the above components, the quality and information extraction efficiency of the original medical images are significantly improved by preprocessing and adaptive segmentation of multimodal medical image data, combined with adaptive histogram equalization, non-local mean filtering and spatial registration based on mutual information maximization, laying a solid foundation for subsequent processing. Secondly, multi-scale feature fusion reconstruction technology, including wavelet transform, deep feature extraction of convolutional neural network, feature pyramid network and self-attention mechanism, is used to achieve high-precision reconstruction of three-dimensional anatomical models, which not only retains high-resolution geometric structure, but also contains multi-level tissue representation information and biomechanical data, providing comprehensive and accurate anatomical information for surgical planning. In the target definition stage, an interactive target definition method based on facial features is introduced. 
Through principal component analysis, convolutional neural network feature extraction, graph convolutional network topology analysis and other technologies, the precise quantification of surgical targets is achieved, which greatly improves the personalization and accuracy of surgical planning. In the optimization stage, multi-objective optimization processing technology, including hierarchical analysis, genetic algorithm, non-dominated sorting genetic algorithm and fuzzy comprehensive evaluation method, is used to effectively balance multiple goals such as aesthetic effect, safety and recovery speed, and generate the optimal surgical plan. In the virtual surgery simulation, the simulation technology driven by the physical engine, combined with the mass-spring system algorithm and shape matching algorithm, achieved highly realistic soft tissue deformation simulation, greatly improving the accuracy of postoperative effect prediction. Finally, in terms of intraoperative navigation and real-time feedback analysis, through the comprehensive application of iterative nearest point algorithm, Kalman filtering, support vector machine classification and reinforcement learning, high-precision dynamic registration and real-time feedback were achieved, providing accurate navigation support for the surgical process. This series of technological innovations and integrations not only significantly improved the accuracy and reliability of plastic surgery planning, but also realized intelligent support for the entire process from preoperative planning to intraoperative navigation. 
By comprehensively utilizing multimodal medical imaging, deep learning, computer vision and artificial intelligence technologies, this method effectively overcomes the problems of insufficient information utilization, inaccurate target definition, incomplete optimization, and unrealistic simulation in traditional surgical planning methods, providing doctors with more comprehensive and accurate decision support.

本申请还提供一种计算机可读存储介质,该计算机可读存储介质可以为非易失性计算机可读存储介质,该计算机可读存储介质也可以为易失性计算机可读存储介质,所述计算机可读存储介质中存储有指令,当所述指令在计算机上运行时,使得计算机执行所述基于医学影像的整形外科计划生成方法的步骤。The present application also provides a computer-readable storage medium, which may be a non-volatile computer-readable storage medium or a volatile computer-readable storage medium. The computer-readable storage medium stores instructions, and when the instructions are executed on a computer, the computer executes the steps of the method for generating a plastic surgery plan based on medical images.

所属领域的技术人员可以清楚地了解到,为描述的方便及简洁,上述描述的系统、装置及单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。Those skilled in the art can clearly understand that, for the convenience and brevity of description, the specific working processes of the system, device and units described above can refer to the corresponding processes in the aforementioned method embodiments and will not be repeated here.

所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application is essentially or the part that contributes to the prior art or all or part of the technical solution can be embodied in the form of a software product, and the computer software product is stored in a storage medium, including a number of instructions to enable a computer device (which can be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method described in each embodiment of the present application. The aforementioned storage medium includes: U disk, mobile hard disk, read-only memory (ROM), random access memory (RAM), disk or optical disk and other media that can store program codes.

The embodiments above are intended only to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements for some of the technical features therein; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (9)

Translated from Chinese
1. A method for generating a plastic surgery plan based on medical images, characterized in that the method comprises:
preprocessing and adaptively segmenting acquired multimodal medical image data to obtain a three-dimensional image data set;
performing multi-scale feature fusion reconstruction on the three-dimensional image data set to obtain a three-dimensional anatomical model, wherein the three-dimensional anatomical model comprises a high-resolution geometric structure, multi-level tissue representation information, and biomechanical data;
performing interactive target definition based on facial features on the three-dimensional anatomical model to obtain a quantitative surgical target parameter set;
performing multi-objective optimization processing on the quantitative surgical target parameter set to obtain an optimized surgical plan;
performing, based on the optimized surgical plan, a virtual surgery simulation driven by a physics engine on the three-dimensional anatomical model to obtain a postoperative appearance prediction model; and
performing dynamic registration and real-time feedback analysis on the postoperative appearance prediction model to obtain a surgical navigation data stream, and generating a target plastic surgery plan according to the surgical navigation data stream.

2.
The method for generating a plastic surgery plan based on medical images according to claim 1, wherein preprocessing and adaptively segmenting the acquired multimodal medical image data to obtain a three-dimensional image data set comprises:
performing adaptive histogram equalization on the multimodal medical image data to obtain enhanced medical image data;
performing non-local means filtering on the enhanced medical image data to obtain denoised medical image data;
performing spatial registration based on mutual-information maximization on the denoised medical image data to obtain medical image data in a unified coordinate system;
inputting the medical image data in the unified coordinate system into a U-Net deep learning network for segmentation to obtain a preliminary segmented image;
performing a morphological closing operation on the preliminary segmented image to obtain an optimized segmented image;
extracting boundary information from the optimized segmented image using an edge detection algorithm to obtain anatomical structure boundary information;
performing surface reconstruction according to the anatomical structure boundary information using an isosurface extraction algorithm to obtain a preliminary three-dimensional voxel mesh;
performing mesh simplification and subdivision on the preliminary three-dimensional voxel mesh to obtain a first candidate three-dimensional voxel mesh;
performing texture mapping based on the multimodal medical image data on the first candidate three-dimensional voxel mesh to obtain a second candidate three-dimensional voxel mesh having surface
features; and
performing biomechanical parameter fusion on the second candidate three-dimensional voxel mesh to obtain the three-dimensional image data set.

3. The method for generating a plastic surgery plan based on medical images according to claim 1, wherein performing multi-scale feature fusion reconstruction on the three-dimensional image data set to obtain a three-dimensional anatomical model, the three-dimensional anatomical model comprising a high-resolution geometric structure, multi-level tissue representation information, and biomechanical data, comprises:
performing a wavelet transform on the three-dimensional image data set to obtain a multi-scale feature map;
performing deep feature extraction on the multi-scale feature map through a convolutional neural network to obtain a deep feature representation;
performing multi-level feature fusion on the deep feature representation through a feature pyramid network to obtain a multi-resolution feature map;
performing spatial context enhancement on the multi-resolution feature map through a self-attention mechanism to obtain enhanced spatial context information;
performing a three-dimensional deconvolution operation on the enhanced spatial context information to obtain reconstructed voxel data;
performing surface reconstruction on the reconstructed voxel data using the marching cubes algorithm to obtain a three-dimensional surface mesh;
performing Laplacian smoothing on the three-dimensional surface mesh to obtain the high-resolution geometric structure;
performing surface feature mapping
on the high-resolution geometric structure by a texture mapping algorithm to obtain surface data containing multi-level tissue representation information;
computing mechanical properties of the surface data containing multi-level tissue representation information through a finite element analysis algorithm to obtain a data structure containing biomechanical data; and
performing data compression and index construction on the data structure containing biomechanical data to obtain the three-dimensional anatomical model.

4. The method for generating a plastic surgery plan based on medical images according to claim 1, wherein performing interactive target definition based on facial features on the three-dimensional anatomical model to obtain a quantitative surgical target parameter set comprises:
performing key point detection on the three-dimensional anatomical model to obtain a facial feature point set;
performing dimensionality reduction on the facial feature point set through a principal component analysis algorithm to obtain principal components of facial morphology;
performing statistical analysis on the principal components of facial morphology to obtain a facial aesthetics scoring index;
performing feature extraction on the three-dimensional anatomical model through a convolutional neural network to obtain facial depth feature representation information;
performing cluster analysis on the facial depth feature representation information to obtain a facial region segmentation image;
performing topological relationship analysis on the facial region segmentation image through a graph convolutional network to obtain a
facial structure graph representation;
performing atlas matching on the facial structure graph representation based on the facial aesthetics scoring index to obtain an expected aesthetic standard deviation value;
parametrically encoding the expected aesthetic standard deviation value through a variational autoencoder to obtain an aesthetic parameter space;
interactively adjusting the aesthetic parameter space to obtain a personalized aesthetic target; and
shape-encoding the personalized aesthetic target through preset Fourier descriptors to obtain the quantitative surgical target parameter set.

5. The method for generating a plastic surgery plan based on medical images according to claim 1, wherein performing multi-objective optimization processing on the quantitative surgical target parameter set to obtain an optimized surgical plan comprises:
decomposing the quantitative surgical target parameter set into a set of sub-targets;
assigning weights to the set of sub-targets through the analytic hierarchy process to obtain a weighted sub-target set;
defining constraint conditions for the weighted sub-target set to obtain a constraint condition data set;
generating initial solutions for the constraint condition data set through a genetic algorithm to obtain a candidate solution set;
performing fitness evaluation on the candidate solution set to obtain quality score data corresponding to the candidate solution set;
performing Pareto optimization analysis on the quality score data through a non-dominated sorting genetic algorithm to obtain a non-inferior solution
set;
performing cluster analysis on the non-inferior solution set to obtain a representative solution set;
performing multi-criteria decision making on the representative solution set through the fuzzy comprehensive evaluation method to obtain an optimal solution;
decomposing the optimal solution into surgical steps to obtain a preliminary surgical plan; and
refining the preliminary surgical plan through a Monte Carlo tree search algorithm to obtain the optimized surgical plan.

6. The method for generating a plastic surgery plan based on medical images according to claim 1, wherein performing, based on the optimized surgical plan, a virtual surgery simulation driven by a physics engine on the three-dimensional anatomical model to obtain a postoperative appearance prediction model comprises:
performing mesh simplification on the three-dimensional anatomical model to obtain simplified three-dimensional mesh data;
discretizing the simplified three-dimensional mesh data to obtain computational mesh data;
assigning material properties to the computational mesh data to obtain computational data;
performing dynamic parameterization on the computational data through a mass-spring system algorithm to obtain a deformable body parameter set;
setting boundary conditions for the deformable body parameter set to obtain deformable body data under constraint conditions;
performing time discretization on the deformable body data under constraint conditions to obtain dynamic simulation data;
performing time-series decomposition on the dynamic simulation data to obtain deformation process time series
data;
performing contour alignment on the deformation process time series data through a shape matching algorithm to obtain a standardized deformation data set;
performing statistical analysis on the standardized deformation data set to obtain deformation feature distribution data; and
synthesizing appearance from the deformation feature distribution data through a generative adversarial network to obtain postoperative appearance prediction data.

7. The method for generating a plastic surgery plan based on medical images according to claim 1, wherein performing dynamic registration and real-time feedback analysis on the postoperative appearance prediction model to obtain a surgical navigation data stream, and generating a target plastic surgery plan according to the surgical navigation data stream, comprises:
extracting feature points from the postoperative appearance prediction model to obtain a set of key anatomical landmark points;
performing real-time position matching on the set of key anatomical landmark points through the iterative closest point algorithm to obtain a dynamic registration matrix;
performing error analysis on the dynamic registration matrix to obtain registration accuracy evaluation data;
suppressing noise in the registration accuracy evaluation data through a Kalman filtering algorithm to obtain smoothed registration parameters;
performing time-series analysis on the smoothed registration parameters to obtain registration trend data;
classifying the registration trend data through a support vector machine to obtain a surgical process status
identifier;
performing decision tree analysis on the surgical process status identifier to obtain a real-time feedback instruction set;
optimizing the real-time feedback instruction set through a reinforcement learning algorithm to obtain an adaptive navigation strategy;
streaming the adaptive navigation strategy to obtain a surgical navigation data stream; and
comprehensively analyzing the surgical navigation data stream through a deep neural network to obtain the target plastic surgery plan.

8. A device for generating a plastic surgery plan based on medical images, characterized in that the device comprises:
a segmentation module, configured to preprocess and adaptively segment acquired multimodal medical image data to obtain a three-dimensional image data set;
a reconstruction module, configured to perform multi-scale feature fusion reconstruction on the three-dimensional image data set to obtain a three-dimensional anatomical model, wherein the three-dimensional anatomical model comprises a high-resolution geometric structure, multi-level tissue representation information, and biomechanical data;
a definition module, configured to perform interactive target definition based on facial features on the three-dimensional anatomical model to obtain a quantitative surgical target parameter set;
a processing module, configured to perform multi-objective optimization processing on the quantitative surgical target parameter set to obtain an optimized surgical plan;
a simulation module,
configured to perform, based on the optimized surgical plan, a virtual surgery simulation driven by a physics engine on the three-dimensional anatomical model to obtain a postoperative appearance prediction model; and
an analysis module, configured to perform dynamic registration and real-time feedback analysis on the postoperative appearance prediction model to obtain a surgical navigation data stream, and to generate a target plastic surgery plan according to the surgical navigation data stream.

9. A computer-readable storage medium having instructions stored thereon, characterized in that, when the instructions are executed by a processor, the method for generating a plastic surgery plan based on medical images according to any one of claims 1 to 7 is implemented.
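As a purely illustrative sketch (not part of the claimed method), the mutual-information criterion that the spatial-registration step of claim 2 maximizes can be estimated from a joint intensity histogram. The function name and the bin count are assumptions for this example:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Estimate mutual information between two images from their joint
    histogram: MI = sum_xy p(x,y) * log(p(x,y) / (p(x) * p(y)))."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()              # joint probability estimate
    px = pxy.sum(axis=1, keepdims=True)    # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)    # marginal of img_b
    nz = pxy > 0                           # skip empty bins (log 0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

A registration loop would evaluate this score over candidate transforms and keep the one with the highest value; well-aligned images score higher than misaligned ones.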
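The Laplacian smoothing step of claim 3 moves each mesh vertex toward the average of its neighbors. A minimal uniform-weight sketch (function name and parameters are assumptions, not the patent's implementation):

```python
import numpy as np

def laplacian_smooth(verts, faces, iterations=10, lam=0.5):
    """Uniform Laplacian smoothing of a triangle mesh: each vertex is
    displaced by lam times the vector to its neighborhood centroid."""
    n = len(verts)
    nbrs = [set() for _ in range(n)]       # adjacency from triangle edges
    for a, b, c in faces:
        nbrs[a].update((b, c))
        nbrs[b].update((a, c))
        nbrs[c].update((a, b))
    v = np.asarray(verts, dtype=float).copy()
    for _ in range(iterations):
        new = v.copy()
        for i, ns in enumerate(nbrs):
            if ns:
                new[i] = v[i] + lam * (v[list(ns)].mean(axis=0) - v[i])
        v = new
    return v
```

Repeated application flattens high-frequency surface noise at the cost of some shrinkage, which is why production pipelines often prefer Taubin or curvature-weighted variants.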
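The dimensionality-reduction step of claim 4 applies principal component analysis to facial landmark sets. A self-contained SVD-based sketch (the array layout and helper name are assumptions for illustration):

```python
import numpy as np

def landmark_pca(landmarks, n_components=5):
    """PCA of flattened landmark sets.

    landmarks: (n_subjects, n_points, 3) array.
    Returns (mean, components, scores) so that each subject is
    approximated by mean + scores[i] @ components."""
    X = landmarks.reshape(len(landmarks), -1)
    mean = X.mean(axis=0)
    Xc = X - mean
    # Rows of Vt are principal axes, ordered by explained variance
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    comps = Vt[:n_components]
    scores = Xc @ comps.T
    return mean, comps, scores
```

The low-dimensional scores are the "principal components of facial morphology" that downstream aesthetic scoring would operate on.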
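The Pareto analysis of claim 5 rests on non-dominated sorting: a candidate belongs to the non-inferior set if no other candidate is at least as good on every objective and strictly better on one. A minimal first-front extraction (all objectives minimized; this is only the core filter of NSGA-II, not the full algorithm):

```python
import numpy as np

def pareto_front(costs):
    """Return indices of non-dominated rows of a (n, m) cost matrix."""
    costs = np.asarray(costs, dtype=float)
    keep = np.ones(len(costs), dtype=bool)
    for i in range(len(costs)):
        # i is dominated if some j is <= on all objectives and < on one
        dominated_by = (np.all(costs <= costs[i], axis=1)
                        & np.any(costs < costs[i], axis=1))
        if dominated_by.any():
            keep[i] = False
    return np.flatnonzero(keep)
```

The surviving indices form the non-inferior solution set from which clustering and fuzzy multi-criteria decision making would then pick representative and optimal plans.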
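The mass-spring parameterization of claim 6 treats soft tissue as point masses connected by Hookean springs. One symplectic-Euler time step, as an illustrative sketch (stiffness, damping, and step size are arbitrary example values):

```python
import numpy as np

def mass_spring_step(pos, vel, edges, rest_len,
                     k=50.0, mass=1.0, damping=0.98, dt=0.01):
    """Advance a mass-spring system one step: accumulate Hooke forces
    along each edge, update velocities (with damping), then positions."""
    force = np.zeros_like(pos)
    for (i, j), L0 in zip(edges, rest_len):
        d = pos[j] - pos[i]
        L = np.linalg.norm(d)
        if L > 1e-12:
            f = k * (L - L0) * d / L   # spring force along the edge
            force[i] += f
            force[j] -= f
    vel = damping * (vel + dt * force / mass)
    pos = pos + dt * vel
    return pos, vel
```

Iterating this step relaxes a deformed mesh toward its rest configuration, which is the behavior the claimed virtual-surgery simulation exploits to predict tissue settling.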
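The noise-suppression step of claim 7 runs a Kalman filter over the stream of registration parameters. A scalar constant-state sketch (variances and names are illustrative assumptions; a real tracker would filter the full transform state):

```python
import numpy as np

def kalman_smooth(measurements, process_var=1e-4, meas_var=1e-2):
    """Scalar Kalman filter assuming a nearly constant true value:
    predict (inflate variance), then correct toward each measurement."""
    x, p = measurements[0], 1.0
    out = []
    for z in measurements:
        p = p + process_var            # predict step
        gain = p / (p + meas_var)      # Kalman gain
        x = x + gain * (z - x)         # correct step
        p = (1.0 - gain) * p
        out.append(x)
    return np.array(out)
```

The filtered sequence varies far less than the raw measurements, giving the stable trend data the claim feeds into subsequent time-series analysis and classification.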
CN202410906413.XA | Priority date: 2024-07-08 | Filing date: 2024-07-08 | Medical image-based orthopedic plan generation method, device and storage medium | Status: Active | Granted publication: CN118866251B (en)

Priority Applications (1)

Application Number: CN202410906413.XA | Priority Date: 2024-07-08 | Filing Date: 2024-07-08 | Title: Medical image-based orthopedic plan generation method, device and storage medium

Applications Claiming Priority (1)

Application Number: CN202410906413.XA | Priority Date: 2024-07-08 | Filing Date: 2024-07-08 | Title: Medical image-based orthopedic plan generation method, device and storage medium

Publications (2)

Publication Number | Publication Date
CN118866251A | 2024-10-29
CN118866251B | 2025-01-07

Family

Family ID: 93178747

Family Applications (1)

Application Number: CN202410906413.XA | Status: Active | Granted publication: CN118866251B (en) | Priority Date: 2024-07-08 | Filing Date: 2024-07-08 | Title: Medical image-based orthopedic plan generation method, device and storage medium

Country Status (1)

Country: CN | Publication: CN118866251B (en)

Cited By (7)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN119279774A (en) * | 2024-12-13 | 2025-01-10 | 山东大学齐鲁医院 | A multimodal medical image 3D reconstruction and visualization system for surgical planning
CN119381006A (en) * | 2024-12-30 | 2025-01-28 | 中国人民解放军总医院第一医学中心 | An AI simulation method and system for facial cosmetic surgery based on the Internet
CN119655879A (en) * | 2025-02-07 | 2025-03-21 | 宁夏医科大学总医院 | Thyroid nodule surgery planning and navigation system based on artificial intelligence
CN119679513A (en) * | 2024-12-18 | 2025-03-25 | 四川大学 | Digestive tract catheterization robot navigation method and system based on multimodal information fusion
CN119811686A (en) * | 2025-01-24 | 2025-04-11 | 云南师范大学 | Surgical minimally invasive surgery data processing method and system based on AR technology
CN119964823A (en) * | 2025-04-09 | 2025-05-09 | 贵州利美康外科医院股份有限公司 | A plastic surgery effect simulation display method and system based on 3D simulation technology
CN120180892A (en) * | 2025-03-04 | 2025-06-20 | 中南大学湘雅三医院 | An intelligent simulation method and system for facial micro-plastic surgery

Citations (3)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN102551892A (en) * | 2012-01-17 | 2012-07-11 | 王旭东 | Positioning method for craniomaxillofacial surgery
CN104392492A (en) * | 2014-11-24 | 2015-03-04 | 中南大学 | Computer interaction type method for segmenting single tooth crown from three-dimensional jaw model
CN116645503A (en) * | 2023-04-25 | 2023-08-25 | 上海交通大学医学院附属瑞金医院 | Three-dimensional medical image segmentation method, device, equipment and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN102551892A (en) * | 2012-01-17 | 2012-07-11 | 王旭东 | Positioning method for craniomaxillofacial surgery
CN104392492A (en) * | 2014-11-24 | 2015-03-04 | 中南大学 | Computer interaction type method for segmenting single tooth crown from three-dimensional jaw model
CN116645503A (en) * | 2023-04-25 | 2023-08-25 | 上海交通大学医学院附属瑞金医院 | Three-dimensional medical image segmentation method, device, equipment and storage medium

Cited By (8)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN119279774A (en) * | 2024-12-13 | 2025-01-10 | 山东大学齐鲁医院 | A multimodal medical image 3D reconstruction and visualization system for surgical planning
CN119679513A (en) * | 2024-12-18 | 2025-03-25 | 四川大学 | Digestive tract catheterization robot navigation method and system based on multimodal information fusion
CN119381006A (en) * | 2024-12-30 | 2025-01-28 | 中国人民解放军总医院第一医学中心 | An AI simulation method and system for facial cosmetic surgery based on the Internet
CN119381006B (en) * | 2024-12-30 | 2025-05-13 | 中国人民解放军总医院第一医学中心 | An AI simulation method and system for facial cosmetic surgery based on the Internet
CN119811686A (en) * | 2025-01-24 | 2025-04-11 | 云南师范大学 | Surgical minimally invasive surgery data processing method and system based on AR technology
CN119655879A (en) * | 2025-02-07 | 2025-03-21 | 宁夏医科大学总医院 | Thyroid nodule surgery planning and navigation system based on artificial intelligence
CN120180892A (en) * | 2025-03-04 | 2025-06-20 | 中南大学湘雅三医院 | An intelligent simulation method and system for facial micro-plastic surgery
CN119964823A (en) * | 2025-04-09 | 2025-05-09 | 贵州利美康外科医院股份有限公司 | A plastic surgery effect simulation display method and system based on 3D simulation technology

Also Published As

Publication number | Publication date
CN118866251B (en) | 2025-01-07

Similar Documents

Publication | Title
CN118866251B (en) | Medical image-based orthopedic plan generation method, device and storage medium
CN109074500B (en) | System and method for segmenting medical images of the same patient
JP2023505036A | Method, system and computer readable storage medium for creating a three-dimensional dental restoration from a two-dimensional sketch
AU2014237177B2 (en) | Systems and methods for planning hair transplantation
CN112037200B (en) | A method for automatic recognition and model reconstruction of anatomical features in medical images
CN109903396A (en) | A kind of tooth three-dimensional model automatic division method based on surface parameterization
JP6786497B2 (en) | Systems and methods for adding surface details to digital crown models formed using statistical techniques
CN118279302B (en) | Three-dimensional reconstruction detection method and system for brain tumor images
Cuadros Linares et al. | Mandible and skull segmentation in cone beam computed tomography using super-voxels and graph clustering
JP7555337B2 (en) | Digital character blending and generation system and method
CN119339006B (en) | Orthopedic 3D printing model construction method and device based on intelligent AI
CN119360049B (en) | A model training method and device for extracting human skeleton features
CN113052864A (en) | Method for predicting body appearance after plastic surgery based on machine learning
CN117934689B (en) | Multi-tissue segmentation and three-dimensional rendering method for fracture CT image
Cheng et al. | Facial morphology prediction after complete denture restoration based on principal component analysis
CN111105502A (en) | Biological rib nose and lower jaw simulation plastic technology based on artificial bone repair material
CN118490364B (en) | Full-orthopedics platform operation robot and navigation method thereof
CN120180892A (en) | An intelligent simulation method and system for facial micro-plastic surgery
CN119152039B (en) | Titanium network intelligent positioning method, device, equipment and storage medium
Chowdary et al. | Modeling and Prototyping of Anatomical Structures
CN120495241A (en) | An intelligent identification system for osteoporosis areas
CN120449554A (en) | Anti-aging shaping technology for three-dimensional face
CN118116054A (en) | Face aging accurate simulation method based on face recognition and physical simulation
KR20240176600A (en) | Plastic surgery guide device and operation method therefor
CN120672973A (en) | Auxiliary analysis method and system for cosmetic surgery based on three-dimensional facial digital measurement

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
