
Endoscopic image processing method and related equipment

Info

Publication number: CN119579683A
Authority: CN (China)
Prior art keywords: target, data, lesion, eye movement, target object
Legal status: Pending (assumed; Google has not performed a legal analysis)
Application number: CN202411626322.7A
Other languages: Chinese (zh)
Inventors: 乔元风, 罗特, 朱江烽, 曾凡, 孙德佳
Current Assignee: Henan Xuanwei Digital Medical Technology Co ltd (listed assignee may be inaccurate)
Original Assignee: Henan Xuanwei Digital Medical Technology Co ltd
Application filed by Henan Xuanwei Digital Medical Technology Co ltd
Priority to CN202411626322.7A
Publication of CN119579683A

Abstract

The embodiment of the invention provides an endoscope image processing method and related equipment, belonging to the technical field of artificial intelligence. The method comprises: obtaining an auxiliary screening result of a target object in endoscopy, and determining lesion coordinate data corresponding to the target object according to the auxiliary screening result; obtaining initial eye movement data fed back by a target doctor in response to the auxiliary screening result, wherein the initial eye movement data represents the eyeball movement information of the target doctor while observing the auxiliary screening result; determining a lesion movement heat map corresponding to the target object according to the lesion coordinate data; determining an eye movement heat map of the target doctor according to the initial eye movement data; and determining the target doctor's target attention to the auxiliary screening result according to the lesion movement heat map and the eye movement heat map. By analyzing the doctor's eye movement data, the method evaluates the doctor's attention to a potential lesion area, thereby reducing missed diagnosis or misdiagnosis and improving the accuracy and reliability of endoscopy.

Description

Endoscopic image processing method and related equipment
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to an endoscope image processing method and related equipment.
Background
Endoscopy is a common medical diagnostic technique in which a doctor inserts an endoscope into a lumen or organ of the human body to observe and examine the patient's lesions. In actual clinical practice, the doctor identifies lesion regions from the data acquired during the endoscopy.
However, during endoscopy, doctors are easily affected by subjective factors, visual fatigue, and the small size and concealment of lesions, leading to missed diagnosis or misdiagnosis and reducing the accuracy and reliability of the examination. A new technical solution is therefore needed to solve these problems.
Disclosure of Invention
The embodiment of the invention mainly aims to provide an endoscope image processing method and related equipment that evaluate a doctor's attention to a potential lesion area by analyzing the doctor's eye movement data, so as to provide timely feedback that assists in improving the accuracy of diagnosis and treatment decisions, reduces missed diagnosis or misdiagnosis, and improves the accuracy and reliability of endoscopy.
In a first aspect, an embodiment of the present invention provides an endoscopic image processing method, including:
Acquiring an auxiliary screening result of a target object in endoscopy, and determining lesion coordinate data corresponding to the target object according to the auxiliary screening result, wherein the target object is the object of the endoscopy, the auxiliary screening result is a potential lesion area of the target object, and the lesion coordinate data is used for indicating position information corresponding to the potential lesion area of the target object;
acquiring initial eye movement data fed back by a target doctor in response to the auxiliary screening result, wherein the initial eye movement data is used for representing eyeball movement information of the target doctor in the process of observing the auxiliary screening result;
determining a lesion movement heat map corresponding to the target object according to the lesion coordinate data;
determining an eye movement heat map corresponding to the target doctor according to the initial eye movement data;
and determining the target attention degree of the target doctor to the auxiliary screening result according to the lesion movement heat map and the eye movement heat map.
In a second aspect, an embodiment of the present invention further provides an endoscopic image processing apparatus, the apparatus including:
a screening unit, configured to acquire an auxiliary screening result of a target object in an endoscopy, and determine lesion coordinate data corresponding to the target object according to the auxiliary screening result, wherein the target object is the object of the endoscopy;
The eye movement unit is configured to acquire initial eye movement data fed back by a target doctor in response to the auxiliary screening result, wherein the initial eye movement data is used for representing eyeball movement information in the process of observing the auxiliary screening result by the target doctor;
a first heat map unit, configured to determine a lesion movement heat map corresponding to the target object according to the lesion coordinate data;
a second heat map unit, configured to determine an eye movement heat map corresponding to the target doctor according to the initial eye movement data;
and an attention degree unit, configured to determine the target attention degree of the target doctor to the auxiliary screening result according to the lesion movement heat map and the eye movement heat map.
In a third aspect, embodiments of the present invention further provide an electronic device comprising a processor, a memory, a computer program stored on the memory and executable by the processor, and a data bus for enabling connection and communication between the processor and the memory, wherein the computer program, when executed by the processor, implements the steps of any of the endoscopic image processing methods provided in the present specification.
The embodiment of the invention provides an endoscope image processing method and related equipment. The method comprises: obtaining an auxiliary screening result of a target object in endoscopy, and determining lesion coordinate data corresponding to the target object according to the auxiliary screening result, wherein the target object refers to the object of the endoscopy, the auxiliary screening result refers to a potential lesion area contained in the target object, and the lesion coordinate data indicates the position information corresponding to the potential lesion area; obtaining initial eye movement data fed back by a target doctor in response to the auxiliary screening result, wherein the initial eye movement data represents the eyeball movement information of the target doctor while observing the auxiliary screening result; determining a lesion movement heat map corresponding to the target object according to the lesion coordinate data; determining an eye movement heat map corresponding to the target doctor according to the initial eye movement data; and determining the target doctor's target attention to the auxiliary screening result according to the lesion movement heat map and the eye movement heat map. In this method, the lesion movement heat map is determined from the auxiliary screening result and the lesion coordinate data, the eye movement heat map is determined from the initial eye movement data fed back by the target doctor, and the target attention is determined from the two heat maps. By analyzing the target doctor's initial eye movement data, the observation behavior of the target doctor with respect to the auxiliary screening result is evaluated and it is judged whether the identified lesion area was ignored; the target doctor's target attention to the auxiliary screening result can thus be evaluated, objective feedback and advice can be provided for the target doctor, more accurate diagnosis and treatment decisions can be made, and diagnosis efficiency and accuracy are further improved.
According to the embodiment of the invention, the attention degree of the doctor to the potential lesion area is evaluated by analyzing the eye movement data of the doctor, so that feedback is provided for the doctor in time to assist in improving the accuracy of diagnosis and treatment decisions, the condition of missed diagnosis or misdiagnosis is reduced, and the accuracy and reliability of endoscopy are improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of an endoscopic image processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of auxiliary screening results obtained under different target types provided by an embodiment of the present invention;
FIG. 3 is a schematic diagram of the real endoscope detection effect under different target types according to the embodiment of the present invention;
FIG. 4 is a schematic view of a glance path before and after preprocessing initial eye movement data according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a lesion movement heat map and an eye movement heat map provided by an embodiment of the present invention;
Fig. 6 is a schematic diagram of real-time display of a glance path corresponding to a target doctor and a lesion motion path corresponding to a target object according to an embodiment of the present invention;
FIG. 7 is a schematic view of a scene for implementing the endoscopic image processing method according to the present embodiment;
Fig. 8 is a schematic block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings; the described embodiments are some, but not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort shall fall within the scope of the invention.
The flow diagrams depicted in the figures are merely illustrative; they need not include all of the elements and operations/steps, nor follow the order described. For example, some operations/steps may be further divided, combined, or partially combined, so the actual execution order may change according to the actual situation.
It is to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
The embodiment of the invention provides an endoscope image processing method and related equipment. The endoscope image processing method can be applied to electronic equipment, and the electronic equipment can be tablet personal computers, notebook computers, desktop computers, personal digital assistants, wearable equipment and other electronic equipment. The electronic device may be a server or a server cluster. The electronic device may also be an auxiliary peripheral module that mates with the medical testing device, or a hardware or software module that is deployed in the medical testing device. The endoscopic image processing method may also be implemented by a chip.
Some embodiments of the invention are described in detail below with reference to the accompanying drawings. The following embodiments and features of the embodiments may be combined with each other without conflict.
Referring to fig. 1, fig. 1 is a flowchart of an endoscopic image processing method according to an embodiment of the present invention.
As shown in fig. 1, the endoscopic image processing method includes steps S101 to S105.
Step S101, an auxiliary screening result of a target object in the endoscopy is obtained, and lesion coordinate data corresponding to the target object is determined according to the auxiliary screening result, wherein the target object refers to the object of the endoscopy, the auxiliary screening result refers to a potential lesion area contained in the target object, and the lesion coordinate data is used for indicating position information corresponding to the potential lesion area.
Illustratively, the target object is the object of the endoscopy, which may be any of the stomach, intestine, bladder, nose and throat, or abdominal cavity, or a partial region of such organs or tissues. The endoscope is used to examine the target object to obtain corresponding pictures, image frames, or video data, and auxiliary screening (including but not limited to image processing, pattern recognition, or other related techniques) is performed on the obtained image or video data to determine possible lesions or abnormalities, thereby obtaining the auxiliary screening result. The auxiliary screening result refers to a potential lesion area detected in the data acquired by the endoscopy of the target object, and comprises at least one of the potential lesion area, the abnormality type corresponding to the potential lesion area, and the abnormality probability.
For example, lesion coordinate data corresponding to the target object is obtained from the auxiliary screening result, where the lesion coordinate data is used to indicate position information corresponding to the potential lesion area, that is, the lesion coordinate data represents abnormal position information marked in data acquired by endoscopy on the target object.
In some embodiments, obtaining the auxiliary screening result of the target object in the endoscopy comprises: determining a target type corresponding to the target object through a target detection model; determining a target lesion detection model according to the target type; obtaining endoscope detection data corresponding to the target object; and identifying the endoscope detection data according to the target lesion detection model to obtain the auxiliary screening result.
In the embodiment of the application, the target type is used for representing an endoscope detection scene where the target object is located. Further alternatively, different endoscopy scenarios employ different target lesion detection models. That is, different target lesion detection models may be trained using image data in different endoscopy scenes. Specific training patterns are described in the examples below.
Illustratively, before performing the endoscopy, the target type of the target object, e.g., the intestine or the stomach, is first determined from the location of the endoscopy (i.e., the endoscopy scene). This helps select a target lesion detection model corresponding to the target type. After the target type is determined, the target lesion detection model corresponding to the target type is selected from the database.
For example, endoscopy data of a target object is collected, where the endoscopy data may be a video including a portion to be detected, or may be a picture including the portion to be detected. And identifying the endoscopic detection data by using the selected target lesion detection model to obtain an auxiliary screening result.
For example, a trained target lesion detection model DRAM (Disease Recognise AI Model) of the corresponding target type is loaded. The endoscope detection data is a video stream of color images; after the video stream is preprocessed, the preprocessing result is input into the DRAM. When a suspicious lesion appears in the video stream, the model outputs the box positioning coordinates bbox [x, y, w, h] of the suspicious lesion in the image, where x and y are the coordinates of the upper-left corner of the predicted box and w and h are the width and height of the predicted box, together with the lesion probability and lesion type corresponding to the box positioning coordinates. The box positioning coordinates, lesion probability, and lesion type are determined as the auxiliary screening result. As shown in fig. 2, the endoscope detection data is identified by target lesion detection models under different target types to obtain auxiliary screening results, where the box positioning coordinates in the auxiliary screening results are converted into bounding boxes marked in the figure, as shown by the green boxes in fig. 2.
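As a non-limiting illustration, the overlay step might look like the following Python sketch; the detection output format follows the bbox description above, while the function name, the dict keys lesion_type/lesion_prob, and the use of OpenCV are assumptions rather than part of the disclosed method.

```python
import cv2

def draw_screening_result(frame, detections):
    """Overlay auxiliary screening results on an endoscope video frame.

    `detections` is assumed to be a list of dicts holding the bbox format
    described above (bbox = [x, y, w, h]) plus lesion probability and type.
    """
    for det in detections:
        x, y, w, h = det["bbox"]                    # top-left corner, width, height
        # Convert [x, y, w, h] into the two corners of the bounding box.
        top_left, bottom_right = (x, y), (x + w, y + h)
        cv2.rectangle(frame, top_left, bottom_right, (0, 255, 0), 2)  # green box
        label = f'{det["lesion_type"]}: {det["lesion_prob"]:.2f}'
        cv2.putText(frame, label, (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    return frame
```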
Specifically, by determining the target type of the target object and selecting an appropriate lesion detection model according to different target types, the accuracy and precision of lesion detection can be improved, and the diagnosis accuracy can be improved.
In some embodiments, determining the target type corresponding to the target object comprises: obtaining an initial endoscope image corresponding to the target object; performing data preprocessing on the initial endoscope image to obtain a target endoscope image; classifying the target endoscope image with a target detection model to obtain a target classification result corresponding to the target endoscope image, wherein the target classification result comprises the endoscope detection object type corresponding to the target endoscope image and probability information corresponding to that type, and the endoscope detection object types comprise gastroscope, enteroscope, cystoscope, rhinolaryngoscope, and laparoscope; determining the maximum value of the probability information; and determining the target type according to the endoscope detection object type corresponding to the maximum value.
Illustratively, an initial endoscopic image obtained by endoscopy of a target object is acquired and data preprocessing is performed on the initial endoscopic image to obtain the target endoscopic image, including but not limited to denoising, contrast enhancement, and the like.
Illustratively, the target endoscope image is classified using the target detection model, which may be a deep learning model such as a Convolutional Neural Network (CNN), thereby obtaining the target classification result.
For example, using the classification model EfficientNet-b2 as the base model, a 5-class dataset corresponding to the endoscope detection object types is prepared (class 0: gastroscope pictures, class 1: enteroscope pictures, class 2: cystoscope pictures, class 3: rhinolaryngoscope pictures, class 4: laparoscope pictures), with a training picture size of 224×224 and a data volume of 10000 (8000 for training, 2000 for validation). Training with the parameters workers=8, epochs=300, batch_size=128, lr=0.001, momentum=0.9, and decay=4e-05 yields the target detection model. As shown in fig. 3, real endoscopy effect maps are provided for gastroscope, enteroscope, cystoscope, rhinolaryngoscope, and laparoscope.
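A minimal sketch of such a training setup, assuming a PyTorch/timm implementation of EfficientNet-b2 (the patent does not name a framework, and FakeData merely stands in for the real 5-class endoscope picture set):

```python
import timm
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Stand-in dataset; in practice this would be the 5-class endoscope picture
# set described above (10000 images: 8000 train, 2000 validation).
train_dataset = datasets.FakeData(size=8000, image_size=(3, 224, 224),
                                  num_classes=5, transform=transforms.ToTensor())

# Hyperparameters as stated in the text: workers=8, epochs=300,
# batch_size=128, lr=0.001, momentum=0.9, decay=4e-05.
model = timm.create_model("efficientnet_b2", pretrained=True, num_classes=5)
optimizer = torch.optim.SGD(model.parameters(), lr=0.001,
                            momentum=0.9, weight_decay=4e-05)
train_loader = DataLoader(train_dataset, batch_size=128,
                          shuffle=True, num_workers=8)
criterion = torch.nn.CrossEntropyLoss()

for epoch in range(300):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)  # cross-entropy over 5 classes
        loss.backward()
        optimizer.step()
```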
Illustratively, the target classification result should include probability information corresponding to the type of the endoscopy object. The types of endoscopy objects may be gastroscope, enteroscope, cystoscope, rhinolaryngoscope, and laparoscope.
For example, in the target classification result, the gastroscope probability is p1, the enteroscope probability is p2, the cystoscope probability is p3, the rhinolaryngoscope probability is p4, and the laparoscope probability is p5.
Illustratively, a maximum value corresponding to the probability information in the target classification result is obtained, and the type of the endoscope detection object corresponding to the maximum value is determined as the target type corresponding to the target object.
For example, the target detection model is a trained endoscopy classification model ECAM (Endoscopy Classification AI Model), which automatically distinguishes between gastroscope, enteroscope, cystoscope, rhinolaryngoscope, and laparoscope images. The target endoscope image is input into the ECAM to obtain a target classification result of the form [predProbs, predLabels], where predProbs is the probability information corresponding to the endoscope detection object type and predLabels is the endoscope detection object type.
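A possible shape of the inference step, assuming the model returns raw logits for the five classes; the function name and tensor layout are illustrative assumptions:

```python
import torch

@torch.no_grad()
def classify_endoscope_image(model, image_tensor):
    """Return (predProbs, predLabels) for one preprocessed target endoscope image.

    Class indices follow the dataset above: 0 gastroscope, 1 enteroscope,
    2 cystoscope, 3 rhinolaryngoscope, 4 laparoscope.
    """
    model.eval()
    logits = model(image_tensor.unsqueeze(0))             # add a batch dimension
    pred_probs = torch.softmax(logits, dim=1).squeeze(0)  # [p1, p2, p3, p4, p5]
    pred_label = int(pred_probs.argmax())                 # class with the maximum probability
    return pred_probs.tolist(), pred_label
```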
Specifically, by performing data preprocessing on the initial endoscopic image, noise and interference in subsequent processing can be reduced, and classification accuracy can be improved. The target endoscopic image is classified by using the target detection model, so that the type of an endoscopic detection object can be accurately determined, and good support is provided for obtaining an accurate auxiliary screening result later.
In some embodiments, the auxiliary screening result comprises position information corresponding to a predicted lesion, and determining the lesion coordinate data corresponding to the target object according to the auxiliary screening result comprises: determining a lesion horizontal coordinate corresponding to the target object according to the maximum and minimum horizontal coordinates of the position information; determining a lesion vertical coordinate corresponding to the target object according to the maximum and minimum vertical coordinates of the position information; and determining the lesion coordinate data corresponding to the target object according to the lesion horizontal coordinate and the lesion vertical coordinate.
For example, a maximum horizontal coordinate and a minimum horizontal coordinate corresponding to the lesion position are found according to the position information, and then a horizontal average value between the maximum horizontal coordinate and the minimum horizontal coordinate is calculated, and then the horizontal average value is determined as the lesion horizontal coordinate corresponding to the lesion coordinate data.
And similarly, finding out the maximum vertical coordinate and the minimum vertical coordinate corresponding to the lesion position according to the position information, further calculating a vertical average value between the maximum vertical coordinate and the minimum vertical coordinate, and further determining the vertical average value as the lesion vertical coordinate corresponding to the lesion coordinate data.
Illustratively, the lesion horizontal coordinates and the lesion vertical coordinates are combined into a pair of coordinate data, so that the coordinate data is determined as lesion coordinate data corresponding to the target object.
Specifically, by determining the lesion horizontal coordinate and the lesion vertical coordinate according to the position information, the lesion site of the target object can be positioned more accurately, which is helpful for the doctor to diagnose and treat accurately. After the lesion horizontal coordinate and the lesion vertical coordinate are combined into the lesion coordinate data, the subsequent attention degree can be more conveniently calculated.
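A small sketch of this coordinate reduction, assuming the bbox [x, y, w, h] format described earlier (the function name is hypothetical):

```python
def lesion_coordinate(bbox):
    """Reduce a predicted box to lesion coordinate data, as described above.

    bbox = [x, y, w, h]: the minimum/maximum horizontal coordinates are x and
    x + w, and the minimum/maximum vertical coordinates are y and y + h; the
    lesion coordinate is the mean of each pair, i.e. the box centre.
    """
    x, y, w, h = bbox
    lesion_x = (x + (x + w)) / 2.0   # horizontal average of min and max
    lesion_y = (y + (y + h)) / 2.0   # vertical average of min and max
    return (lesion_x, lesion_y)

# Example: a 40x30 box at (100, 50) yields the centre (120.0, 65.0).
print(lesion_coordinate([100, 50, 40, 30]))
```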
Step S102, initial eye movement data fed back by a target doctor in response to the auxiliary screening result is obtained, wherein the initial eye movement data are used for representing eyeball movement information of the target doctor in the process of observing the auxiliary screening result.
Illustratively, the initial eye movement data of the target doctor in response to the auxiliary screening result is acquired by an eye movement tracking device or software. In the embodiment of the application, eye movement data refers to data recording the movement of the eyes during observation; the initial eye movement data thus represents the eye movement information while the target doctor observes the auxiliary screening result. The data may include the position of the eyes at various points in time, fixation duration, eye movement speed, and so on. In practical applications, eye movement data is often used to study human visual behavior, cognitive processes, and user experience.
Step S103, determining a lesion movement heat map corresponding to the target object according to the lesion coordinate data.
Illustratively, using the acquired lesion coordinate data, the lesion movement heat map may be determined by statistical methods or image processing techniques.
For example, the lesion coordinate data is mapped into an image space, and the lesion movement heat map is generated from the density or another index of the mapped data.
In some embodiments, determining the lesion movement heat map corresponding to the target object according to the lesion coordinate data comprises: calculating first kernel function values between the lesion coordinate data, wherein the first kernel function values characterize the degree of similarity between the lesion coordinate data; determining first estimated values corresponding to the lesion coordinate data according to the first kernel function values, wherein the first estimated values characterize the density distribution of the lesion coordinate data in the target object; and determining the lesion movement heat map corresponding to the target object according to the first estimated values and the lesion coordinate data.
Illustratively, first kernel function values between the lesion coordinate data are calculated using an appropriate kernel function; the first kernel function value characterizes the degree of similarity between the lesion coordinate data. For example, if the kernel function is a Gaussian kernel, the first kernel function value may be calculated as Kw(qi, pj) = exp(-||qi - pj||^2 / (2σ^2)), where qi represents the ith lesion coordinate data, pj represents the jth lesion coordinate data, Kw represents the first kernel function value, and σ represents the bandwidth.
Illustratively, the first kernel function value is subjected to density estimation calculation by using a kernel density estimation algorithm to obtain a first estimated value, and the first estimated value is used for estimating the density of lesion coordinate data.
By combining the first estimated values and the lesion coordinate data, the lesion movement heat map corresponding to the target object can be determined: each first estimated value is mapped to the position given by the corresponding lesion coordinate data and rendered as a color that varies with the estimated value, thereby obtaining the lesion movement heat map corresponding to the target object.
Specifically, calculating the first kernel function values between the lesion coordinate data makes it possible to evaluate the degree of similarity between them, helping the target doctor better understand the relevance and characteristics of the lesion data. Determining the first estimated values corresponding to the lesion coordinate data allows the density of the lesion coordinate data to be estimated more accurately, providing the doctor with more comprehensive lesion distribution information. Finally, generating the lesion movement heat map visually displays the distribution and density changes of the lesions, providing the doctor with a visual reference and supporting the subsequent evaluation of the target doctor's attention to the auxiliary screening result.
In some embodiments, calculating the first kernel function value between the lesion coordinate data comprises: obtaining a first data point from the lesion coordinate data and eliminating the first data point from the lesion coordinate data to obtain first remaining data; respectively calculating first distance information between the first data point and each second data point in the first remaining data; determining a first weight corresponding to the first data point and the second data point according to the first distance information; and adjusting the first kernel function value between the first data point and the second data point according to the first weight.
Illustratively, one data point is arbitrarily selected from the lesion coordinate data as the first data point, and this first data point is removed from the lesion coordinate data to obtain the first remaining data. Then, for the first data point and each second data point in the first remaining data, the first distance information between them is calculated, using a suitable distance metric such as the Euclidean distance or Manhattan distance.
Illustratively, the first weight between the first data point and each second data point is determined from the calculated first distance information. The first weight may be calculated from the reciprocal of the first distance information or another suitable weight function; for example, wij = 1 / (1 + dij), the reciprocal of one plus the first distance information dij.
Illustratively, the weight-adjusted first kernel function value may be calculated as Kw(qi, pj) = wij · exp(-||qi - pj||^2 / (2σ^2)), where qi represents the ith lesion coordinate data (the first data point), pj represents the jth lesion coordinate data (the second data point), Kw represents the first kernel function value, σ represents the bandwidth, and wij represents the first weight.
Illustratively, the above steps are performed on each of the lesion coordinate data, thereby obtaining a first kernel function value corresponding to each of the lesion coordinate data.
Specifically, adjusting the first kernel function value with weights derived from the distance between the first and second data points better reflects their degree of similarity, enabling a more comprehensive understanding of the characteristics and relevance of the lesion coordinate data and providing the target doctor with a more accurate basis for analysis and diagnosis.
In some embodiments, determining the first estimated value corresponding to the lesion coordinate data according to the first kernel function value comprises: summing the first kernel function values between the first data point and each second data point in the first remaining data to obtain a first summation result; summing the first weights between the first data point and each second data point in the first remaining data to obtain a second summation result; and determining the first estimated value corresponding to the lesion coordinate data according to the first summation result and the second summation result.
Illustratively, a first kernel function value of the first data point corresponding to each second data point in the first remaining data is obtained, and then all the first kernel function values are summed to obtain a first summation result.
For example, the first weight of the first data point with respect to each second data point in the first remaining data is obtained, and all the first weights are summed to obtain the second summation result. The first summation result is then divided by the second summation result, i.e., the first estimated value is Σj Kw(qi, pj) / Σj wij, and this quotient is determined as the first estimated value corresponding to the first data point.
Illustratively, the steps are performed on each first data point in the lesion coordinate data, and a first estimated value corresponding to each first data point in the lesion coordinate data is obtained.
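Putting the steps above together, a leave-one-out computation of the weighted kernel density estimates could be sketched as follows; the vectorized NumPy form and the 2-D coordinate assumption are illustrative, not mandated by the text:

```python
import numpy as np

def weighted_kde_estimates(points, sigma):
    """Leave-one-out weighted kernel density estimates, as described above.

    points: (N, 2) array of coordinate data (lesion coordinates here).
    For each point qi, every other point pj gets the weight wij = 1 / (1 + dij)
    (dij: Euclidean distance) and the kernel value
    Kw = wij * exp(-||qi - pj||^2 / (2 * sigma^2));
    the estimate for qi is sum(Kw) / sum(wij).
    """
    points = np.asarray(points, dtype=float)
    estimates = np.zeros(len(points))
    for i, qi in enumerate(points):
        rest = np.delete(points, i, axis=0)         # eliminate the first data point
        d = np.linalg.norm(rest - qi, axis=1)       # first distance information
        w = 1.0 / (1.0 + d)                         # first weights
        k = w * np.exp(-d**2 / (2.0 * sigma**2))    # adjusted first kernel values
        estimates[i] = k.sum() / w.sum()            # first estimated value
    return estimates
```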
Specifically, a first estimated value of lesion coordinate data is determined according to a summation result of the first kernel function value and the first weight, so that distribution and characteristics of the lesion data are better understood. This facilitates further analysis and decision making, and improves awareness and diagnostic accuracy of the condition.
Step S104, determining an eye movement heat map corresponding to the target doctor according to the initial eye movement data.
Illustratively, using the acquired initial eye movement data, an eye movement heat map may be determined by statistical methods or image processing techniques.
For example, the initial eye movement data is mapped to an image space, and an eye movement heat map is generated according to the density or other index of the data mapped to the image space.
In some embodiments, determining the eye movement heat map corresponding to the target doctor according to the initial eye movement data comprises: performing data preprocessing on the initial eye movement data using Kalman filtering to obtain target eye movement data; calculating second kernel function values between the target eye movement data, wherein the second kernel function values characterize the degree of similarity between the target eye movement data; determining second estimated values corresponding to the target eye movement data according to the second kernel function values, wherein the second estimated values characterize the density distribution of the target eye movement data; and determining the eye movement heat map corresponding to the target doctor according to the second estimated values and the target eye movement data.
Illustratively, when sampling eye movement data, the glance path displayed on the screen in real time shows that when the target doctor blinks, the recorded eye movement data can deviate toward the edge of the screen and cause large data jumps; the initial eye movement data therefore needs to be preprocessed to eliminate such noise as far as possible. Kalman filtering is a recursive filter for state estimation in linear dynamic systems: it estimates the state from the observed data and a system model, and effectively handles noise and uncertainty. The glance paths before and after preprocessing the initial eye movement data are shown in fig. 4, where blue is the initial eye movement data and red is the target eye movement data.
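A compact sketch of such a preprocessing step, assuming a constant-velocity Kalman filter over 2-D gaze coordinates; the noise scales q and r are assumed tuning values:

```python
import numpy as np

def kalman_smooth_gaze(raw_xy, dt=1.0, q=1e-3, r=5.0):
    """Smooth raw gaze samples with a constant-velocity Kalman filter.

    raw_xy: (N, 2) array of raw gaze coordinates (initial eye movement data).
    Blink-induced jumps toward the screen edge receive little weight because
    of the comparatively large measurement noise r. State: [x, y, vx, vy].
    """
    raw = np.asarray(raw_xy, dtype=float)
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                  [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)  # state transition
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)  # observe position only
    Q, R = q * np.eye(4), r * np.eye(2)
    x = np.array([raw[0, 0], raw[0, 1], 0.0, 0.0])
    P = np.eye(4)
    smoothed = []
    for z in raw:
        x, P = F @ x, F @ P @ F.T + Q                 # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
        x = x + K @ (z - H @ x)                       # update with measurement
        P = (np.eye(4) - K @ H) @ P
        smoothed.append(x[:2].copy())
    return np.array(smoothed)                         # target eye movement data
```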
Illustratively, second kernel function values between the target eye movement data are calculated using an appropriate kernel function; the second kernel function value characterizes the degree of similarity between the target eye movement data. For example, if the kernel function is a Gaussian kernel, the second kernel function value may be calculated as Tw(ai, aj) = exp(-||ai - aj||^2 / (2σ^2)), where ai represents the ith target eye movement data, aj represents the jth target eye movement data, Tw represents the second kernel function value, and σ represents the bandwidth.
Illustratively, the second kernel function value is subjected to density estimation calculation by using a kernel density estimation algorithm to obtain a second estimated value, and the second estimated value is used for estimating the density of the target eye movement data.
By combining the second estimated values and the target eye movement data, the eye movement heat map corresponding to the target doctor can be determined: each second estimated value is mapped to the position given by the corresponding target eye movement data and rendered as a color that varies with the estimated value, thereby obtaining the eye movement heat map corresponding to the target doctor.
Specifically, performing data preprocessing on the initial eye movement data using Kalman filtering removes noise and bad signals from the data, improving its accuracy and stability. Calculating the second kernel function values between the target eye movement data makes it possible to evaluate the degree of similarity between them, helping to analyze the characteristics and relevance of the eye movement data. Determining the second estimated values corresponding to the target eye movement data helps estimate the density distribution of the eye movement data, providing more comprehensive information about eye movement behavior and facilitating analysis of eye movement patterns and anomalies. Generating the eye movement heat map from the second estimated values and the target eye movement data displays the concentrated areas and activity patterns of the eye movement data, providing the target doctor with an intuitive analysis of eye movement behavior and supporting the subsequent attention calculation.
In some embodiments, calculating the second kernel function value between the target eye movement data comprises: obtaining a third data point from the target eye movement data and eliminating the third data point from the target eye movement data to obtain second remaining data; respectively calculating second distance information between the third data point and each fourth data point in the second remaining data; determining a second weight corresponding to the third data point and the fourth data point according to the second distance information; and adjusting the second kernel function value between the third data point and the fourth data point according to the second weight.
Illustratively, one data point is arbitrarily selected from the target eye movement data as the third data point, and this third data point is removed from the target eye movement data to obtain the second remaining data. Then, for the third data point and each fourth data point in the second remaining data, the second distance information between them is calculated, using a suitable distance metric such as the Euclidean distance or Manhattan distance.
Illustratively, the second weight between the third data point and each fourth data point is determined from the calculated second distance information. The second weight may be calculated from the reciprocal of the second distance information or another suitable weight function; for example, hij = 1 / (1 + dij), the reciprocal of one plus the second distance information dij.
Illustratively, the weight-adjusted second kernel function value may be calculated as Tw(ai, aj) = hij · exp(-||ai - aj||^2 / (2σ^2)), where ai represents the ith target eye movement data (the third data point), aj represents the jth target eye movement data (the fourth data point), Tw represents the second kernel function value, σ represents the bandwidth, and hij represents the second weight.
Illustratively, the above steps are performed on each of the target eye movement data, thereby obtaining a second kernel function value corresponding to each of the target eye movement data.
Specifically, adjusting the second kernel function value with weights derived from the distance between the third and fourth data points better reflects their degree of similarity, enabling a more comprehensive understanding of the characteristics and relevance of the target eye movement data and providing a more accurate basis for judging the target doctor's attention to the auxiliary screening result.
In some embodiments, determining the second estimated value corresponding to the target eye movement data according to the second kernel function value comprises: summing the second kernel function values between the third data point and each fourth data point in the second remaining data to obtain a third summation result; summing the second weights between the third data point and each fourth data point in the second remaining data to obtain a fourth summation result; and determining the second estimated value corresponding to the target eye movement data according to the third summation result and the fourth summation result.
Illustratively, a second kernel function value of the third data point corresponding to each fourth data point in the second remaining data is obtained, and then all the second kernel function values are summed to obtain a third summation result.
For example, the second weight of the third data point with respect to each fourth data point in the second remaining data is obtained, and all the second weights are summed to obtain the fourth summation result. The third summation result is then divided by the fourth summation result, and this quotient is determined as the second estimated value corresponding to the third data point.
Illustratively, the above steps are performed on each third data point in the target eye movement data, and a second estimated value corresponding to each third data point in the target eye movement data is obtained.
Specifically, a second estimated value of the target eye movement data is determined according to a summation result of the second kernel function value and the second weight, so that distribution and characteristics of the target eye movement data are better understood. This facilitates further analysis and decision making, providing more accurate analysis and basis for judging the attention of the targeted physician to the secondary screening results.
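Since this computation mirrors the lesion-side estimate of step S103, the weighted_kde_estimates sketch given earlier could in principle be reused unchanged on the target eye movement data (an illustrative assumption, with an assumed bandwidth value):

```python
# target_eye_data: (N, 2) array of Kalman-filtered gaze coordinates, e.g.
# the output of kalman_smooth_gaze above; sigma is an assumed bandwidth.
second_estimates = weighted_kde_estimates(target_eye_data, sigma=25.0)
```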
Step S105, determining the target attention degree of the target doctor to the auxiliary screening result according to the lesion movement heat map and the eye movement heat map.
Illustratively, the obtained lesion movement heat map and eye movement heat map are shown in fig. 5; the target attention degree of the target doctor to the auxiliary screening result is then determined by calculating the degree of association between the lesion movement heat map and the eye movement heat map.
In some embodiments, determining the target attention of the target doctor to the auxiliary screening result according to the lesion movement heat map and the eye movement heat map comprises: converting the lesion movement heat map into a first vector and converting the eye movement heat map into a second vector; calculating a correlation value between the first vector and the second vector using the Spearman correlation coefficient; and determining the target attention of the target doctor to the auxiliary screening result according to the correlation value.
For example, converting the lesion movement heat map into the first vector maps the data points of the heat map into a vector according to a preset rule. Likewise, converting the eye movement heat map into the second vector ensures that the eye movement data points of the heat map are correctly mapped into the second vector.
Illustratively, a correlation value between the first vector and the second vector is calculated using the Spearman correlation coefficient, which measures the monotonic correlation between the two vectors and is used here to evaluate the correlation between the lesion movement heat map and the eye movement heat map.
Illustratively, the target doctor's target attention to the auxiliary screening result is determined from the calculated correlation value. The higher the correlation value, the stronger the correlation between the lesion movement heat map and the eye movement heat map, and the higher the doctor's attention to the screening result is likely to be.
For example, after the lesion movement heat map and the eye movement heat map are acquired, the lesion movement heat map is converted into a one-dimensional first vector and the eye movement heat map into a one-dimensional second vector for correlation analysis, and the Spearman correlation coefficient between them is calculated to obtain the correlation between the two heat maps. Calculating the Spearman correlation coefficient quantifies the degree of nonlinear monotonic relation between the lesion movement heat map and the eye movement heat map, from which it can be judged whether the target doctor ignored the identified auxiliary screening result. If the correlation value is close to 1, there is a strong monotonic positive correlation between the two, i.e., the high-density regions of the eye movement heat map largely overlap the high-density regions of the lesion movement heat map, indicating that the target doctor did not ignore the identified auxiliary screening result. Conversely, if the correlation value is close to -1, there is a strong monotonic negative correlation, i.e., the high-density regions of the eye movement heat map overlap the low-density regions of the lesion movement heat map, indicating that the target doctor ignored the identified auxiliary screening result. As shown in fig. 5, a correlation value of 0.9812 is obtained by calculating the Spearman correlation coefficient of the lesion movement heat map and the eye movement heat map, so the target doctor's target attention to the corresponding auxiliary screening result is high.
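One way to compute this correlation, assuming both heat maps are same-shaped 2-D arrays and using SciPy's spearmanr (an implementation choice, not part of the disclosure):

```python
import numpy as np
from scipy.stats import spearmanr

def attention_correlation(lesion_heatmap, eye_heatmap):
    """Spearman correlation between the two heat maps, as described above.

    Both inputs are assumed to be 2-D arrays of the same shape; they are
    flattened into one-dimensional vectors before computing the coefficient.
    """
    first_vector = np.asarray(lesion_heatmap).ravel()
    second_vector = np.asarray(eye_heatmap).ravel()
    rho, _p_value = spearmanr(first_vector, second_vector)
    return rho   # close to 1: lesion not ignored; close to -1: lesion ignored
```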
Specifically, the correlation between the lesion movement heat map and the eye movement heat map is quantified, and the target doctor's attention to the auxiliary screening result is determined from it. This helps better determine the observation behavior and focus of attention of the target doctor, thereby improving the usefulness and application value of the auxiliary screening results.
In some embodiments, the method further comprises: comparing the target attention degree with a preset value to obtain a comparison result; determining an attention feedback result of the target doctor on the auxiliary screening result according to the comparison result; and sending the attention feedback result to a target terminal corresponding to the target doctor, so that the target terminal receives the attention feedback result and displays it to the target doctor.
For example, comparing the target doctor's target attention to the auxiliary screening result with a preset value determines whether the doctor's attention to the auxiliary screening result meets expectations, and the attention feedback result is determined from the comparison result. If the target attention meets expectations, the attention feedback result may be a positive confirmation or encouragement; if it does not, the feedback may remind or advise the doctor to refocus on a particular area or content.
The determined attention feedback result is sent to the target terminal corresponding to the target doctor; the feedback information may be sent via email, a mobile application, or another communication channel. After receiving the attention feedback result, the target terminal displays it to the target doctor, ensuring that the feedback is presented clearly and intuitively so that the doctor understands his or her attention level and makes corresponding adjustments.
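A minimal sketch of the comparison step; the threshold value and the two feedback labels (taken from the scene description later in this text) are illustrative assumptions:

```python
def attention_feedback(target_attention, preset_value=0.5):
    """Map the target attention degree to an attention feedback result.

    preset_value is an assumed threshold; the patent leaves its value open.
    """
    if target_attention >= preset_value:
        return "line-of-sight captured lesion"   # positive confirmation
    return "line-of-sight ignored lesion"        # remind the doctor to refocus
```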
Specifically, the target attention of the target doctor to the auxiliary screening result is timely fed back to the target doctor, so that the target doctor can be helped to better understand own observation behaviors, the target doctor is promoted to concentrate on a key area, and the possibility of missed diagnosis and misdiagnosis is reduced. Thereby improving diagnosis efficiency and accuracy and finally improving medical service quality.
In some embodiments, the method further comprises displaying the auxiliary screening result in real time in the endoscopic detection data corresponding to the target object.
The auxiliary screening result is transmitted to the endoscope detection device or the related display platform in real time by using a real-time data transmission technology, and the auxiliary screening result is integrated with the endoscope detection data of the target object, so that the auxiliary screening result data and the endoscope detection data are integrated and displayed on the endoscope detection device or the related display platform. This may be to display the auxiliary screening result superimposed on the endoscopic detection data, or to display both the endoscopic detection data and the auxiliary screening result on the same interface.
Illustratively, the auxiliary screening result can be updated in real time and synchronously displayed with the endoscopic detection data by a timing updating mode, so that the accuracy and instantaneity of the result data are maintained.
Specifically, the auxiliary screening result is displayed in real time in the endoscope detection data corresponding to the target object, so that more comprehensive and timely information can be provided for doctors, and accurate diagnosis and treatment decisions can be made by the doctors.
In some embodiments, the method further comprises determining a glance path corresponding to the target doctor according to the initial eye movement data, and displaying the glance path in real time in the endoscope detection data.
Illustratively, an eye movement data analysis algorithm or software is used to identify and extract a glance path corresponding to the target doctor in the initial eye movement data. And integrating the analyzed glance path into the display of the endoscopic detection data in real time.
For example, as shown in fig. 6, the glance path corresponding to the target doctor (the red path in fig. 6) and the lesion motion path determined from the lesion coordinate data (the blue path in fig. 6) are displayed in real time. This assists the target doctor, makes it possible to judge whether the auxiliary screening result has been ignored, and reduces misdiagnosis caused by missed diagnosis.
Illustratively, the glance path is ensured to be updated in real time and displayed synchronously with the endoscope detection data by timing update or real-time data transmission, so as to ensure that the target doctor can acquire the latest glance information in time.
Specifically, the glance path of the target doctor is determined according to the initial eye movement data, and the glance path is displayed in the endoscope detection data, so that visual information which is more visual and comprehensive can be provided for the target doctor, and the target doctor can be more accurately assisted to analyze and diagnose the illness state more accurately.
Further optionally, the real-time display effect of the lesion motion path can be dynamically adjusted according to the coincidence degree between the glance path of the target doctor and the lesion motion path. For example, when the coincidence degree between the glance path and the lesion motion path is lower than a set first threshold, the real-time display brightness of the lesion motion path is increased to a first brightness level and its line type is set to a corresponding first prompt form, such as thickening the line width by 15 percent, thereby guiding the doctor to pay attention to the auxiliary screening result and assisting in improving diagnosis efficiency and accuracy. Conversely, when the doctor moves his or her line of sight and the coincidence degree rises above the set first threshold, the real-time display brightness of the lesion motion path is reduced to a second brightness level (e.g., similar to the brightness of the glance path) and its line width is adjusted to be similar to that of the glance path or restored to the original width, so as to avoid blocking the doctor's field of view. A sketch of this style adjustment follows.
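The sketch below assumes a normalized coincidence degree in [0, 1]; the threshold and brightness levels are assumed values, while the 15 percent width increase follows the example above:

```python
def adjust_lesion_path_style(coincidence, first_threshold=0.6,
                             first_brightness=1.0, second_brightness=0.4):
    """Pick the real-time display style of the lesion motion path.

    first_threshold and the brightness levels are assumed values, as the
    text leaves them open; the 15% width increase comes from the example.
    """
    if coincidence < first_threshold:
        # Glance path diverges from the lesion path: raise brightness to the
        # first level and thicken the line width by 15 percent.
        return {"brightness": first_brightness, "width_scale": 1.15}
    # Coincidence recovered: drop to the second brightness level (similar to
    # the glance path) and restore the original line width.
    return {"brightness": second_brightness, "width_scale": 1.0}
```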
Referring to fig. 7, fig. 7 is a schematic view of a scene for implementing the endoscopic image processing method according to the present embodiment. As shown in fig. 7, an initial endoscope image corresponding to the target object is obtained and preprocessed to obtain a target endoscope image; the target endoscope image is then classified by the target detection model to obtain a target classification result, from which the target type corresponding to the initial endoscope image is determined. A target lesion detection model is determined according to the target type, and the endoscope detection data is identified with the target lesion detection model to obtain the auxiliary screening result. The auxiliary screening result is displayed in real time in the endoscope detection data corresponding to the target object, lesion coordinate data corresponding to the target object is determined from the auxiliary screening result, and the lesion movement heat map corresponding to the target object is determined from the lesion coordinate data. Initial eye movement data fed back by the target doctor in response to the auxiliary screening result is obtained and preprocessed with Kalman filtering to obtain target eye movement data; the glance path corresponding to the target doctor is determined from the target eye movement data and displayed in real time in the endoscope detection data, and the eye movement heat map corresponding to the target doctor is determined. The correlation value between the lesion movement heat map and the eye movement heat map is then calculated with the Spearman correlation coefficient, and the target doctor's target attention to the auxiliary screening result is determined from the correlation value. Finally, the target attention is compared with a preset value to determine whether the attention feedback result is "line-of-sight ignored lesion" or "line-of-sight captured lesion", and the feedback result is displayed. Thus, by analyzing the target doctor's initial eye movement data, the observation behavior of the target doctor with respect to the auxiliary screening result is evaluated and it is judged whether the identified lesion area was ignored; the target doctor's target attention to the auxiliary screening result can then be evaluated, objective feedback and advice can be provided, and the target doctor is helped to make more accurate diagnosis and treatment decisions, improving diagnosis efficiency and accuracy.
An embodiment of the present invention provides an endoscopic image processing apparatus including at least the following units:
A screening unit configured to acquire an auxiliary screening result of a target object in an endoscopy and to determine lesion coordinate data corresponding to the target object according to the auxiliary screening result, wherein the target object is the object of the endoscopy;
An eye movement unit configured to acquire initial eye movement data fed back by a target doctor in response to the auxiliary screening result, wherein the initial eye movement data is used for representing eyeball movement information of the target doctor during observation of the auxiliary screening result;
A first hotspot unit configured to determine a lesion movement hotspot map corresponding to the target object according to the lesion coordinate data;
A second hotspot unit configured to determine an eye movement hotspot map corresponding to the target doctor according to the initial eye movement data;
And an attention unit configured to determine the target doctor's target attention to the auxiliary screening result according to the lesion movement hotspot map and the eye movement hotspot map.
For the specific steps implemented by each unit, reference is made to the related description in the foregoing embodiments, which is not repeated here.
Referring to fig. 8, fig. 8 is a schematic block diagram of an electronic device according to an embodiment of the present invention.
As shown in FIG. 8, the electronic device 300 includes a processor 301 and a memory 302, the processor 301 and the memory 302 being connected by a bus 303, such as an I2C (Inter-Integrated Circuit) bus.
In particular, the processor 301 is used to provide computing and control capabilities to support the operation of the entire electronic device. The processor 301 may be a Central Processing Unit (CPU), or it may be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
Specifically, the memory 302 may be a flash chip, a Read-Only Memory (ROM), a magnetic disk, an optical disk, a U-disk, a removable hard disk, or the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 8 is merely a block diagram of the portion of the structure related to the embodiment of the present invention and does not constitute a limitation on the electronic device to which the embodiment of the present invention is applied; a specific electronic device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
The processor is used for running a computer program stored in the memory and realizing any one of the endoscope image processing methods provided by the embodiment of the invention when the computer program is executed.
In an embodiment, the processor is configured to run a computer program stored in a memory and to implement the following steps when executing the computer program:
Acquiring an auxiliary screening result of a target object in endoscopy, and determining lesion coordinate data corresponding to the target object according to the auxiliary screening result, wherein the target object is the object of the endoscopy, the auxiliary screening result is a potential lesion area contained in the target object, and the lesion coordinate data is used for indicating position information corresponding to the potential lesion area;
acquiring initial eye movement data fed back by a target doctor in response to the auxiliary screening result, wherein the initial eye movement data is used for representing eyeball movement information of the target doctor in the process of observing the auxiliary screening result;
determining a lesion movement hotspot map corresponding to the target object according to the lesion coordinate data;
determining an eye movement hotspot map corresponding to the target doctor according to the initial eye movement data;
And determining the target doctor's target attention to the auxiliary screening result according to the lesion movement hotspot map and the eye movement hotspot map.
In some embodiments, the processor 301 performs, during the acquisition of the auxiliary screening results in the endoscopy of the target object:
Determining a target type corresponding to the target object through a target detection model, and determining a target lesion detection model according to the target type, wherein the target type is used for representing an endoscope detection scene where the target object is located;
obtaining endoscope detection data corresponding to the target object;
and identifying the endoscope detection data according to the target lesion detection model to obtain the auxiliary screening result.
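A possible realization of the per-scene model selection is a simple registry keyed by the target type; the model identifiers below are hypothetical placeholders, as the embodiment only states that different endoscope detection scenes use different target lesion detection models.

```python
# Sketch of selecting a target lesion detection model by target type;
# the registry entries are hypothetical names, not models named in
# this document.
LESION_MODELS = {
    "gastroscope": "gastric_lesion_detector",
    "enteroscope": "intestinal_lesion_detector",
    "cystoscope": "bladder_lesion_detector",
    "naso-laryngoscope": "nasopharyngeal_lesion_detector",
    "laparoscope": "abdominal_lesion_detector",
}

def select_lesion_model(target_type: str) -> str:
    """Return the lesion detection model registered for the detected scene."""
    return LESION_MODELS[target_type]
```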
In some embodiments, the processor 301 performs, in the determining, by the target detection model, a target type corresponding to the target object:
Obtaining an initial endoscope image corresponding to the target object, and performing data preprocessing on the initial endoscope image to obtain a target endoscope image;
Classifying the target endoscopic image by using the target detection model to obtain a target classification result corresponding to the target endoscopic image, wherein the target classification result comprises an endoscope detection object type corresponding to the target endoscopic image and probability information corresponding to the endoscope detection object type, and the endoscope detection object type comprises a gastroscope, an enteroscope, a cystoscope, a naso-laryngoscope and a laparoscope;
and determining a maximum value corresponding to the probability information, and determining the target type according to the endoscope detection object type corresponding to the maximum value.
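The maximum-probability selection can be sketched as follows; the list of scene types follows the object types listed above, while the probability vector format is an assumption about the classifier output.

```python
SCENE_TYPES = ["gastroscope", "enteroscope", "cystoscope",
               "naso-laryngoscope", "laparoscope"]

def pick_target_type(probabilities: list) -> str:
    """Return the endoscope detection object type with the maximum probability.

    probabilities[i] is the classifier's probability for SCENE_TYPES[i].
    """
    idx = max(range(len(SCENE_TYPES)), key=lambda i: probabilities[i])
    return SCENE_TYPES[idx]

# e.g. pick_target_type([0.91, 0.04, 0.02, 0.02, 0.01]) -> "gastroscope"
```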
In some embodiments, the auxiliary screening result includes position information corresponding to the predicted lesion, and the processor 301 performs, in the determining the lesion coordinate data corresponding to the target object according to the auxiliary screening result:
Determining a lesion horizontal coordinate corresponding to the target object according to the maximum horizontal coordinate and the minimum horizontal coordinate corresponding to the position information;
Determining a lesion vertical coordinate corresponding to the target object according to the maximum vertical coordinate and the minimum vertical coordinate corresponding to the position information;
and determining the lesion coordinate data corresponding to the target object according to the lesion horizontal coordinate and the lesion vertical coordinate.
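One natural reading of these steps is that the lesion coordinate is the midpoint of the bounding-box extremes; the sketch below encodes that assumption.

```python
# Sketch of deriving lesion coordinate data from the position information
# of a predicted lesion; taking the midpoint of the max/min coordinates
# is an assumption consistent with the steps above.
def lesion_center(x_min: float, x_max: float,
                  y_min: float, y_max: float) -> tuple:
    lesion_x = (x_min + x_max) / 2.0  # lesion horizontal coordinate
    lesion_y = (y_min + y_max) / 2.0  # lesion vertical coordinate
    return lesion_x, lesion_y
```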
In some embodiments, the processor 301 performs, in the determining, according to the lesion coordinate data, a lesion movement hotspot map corresponding to the target object:
calculating a first kernel function value between the lesion coordinate data, wherein the first kernel function value is used for representing the similarity degree between the lesion coordinate data;
Determining a first estimated value corresponding to the lesion coordinate data according to the first kernel function value, wherein the first estimated value is used for representing the density distribution condition of the lesion coordinate data in the target object;
and determining the lesion movement hotspot map corresponding to the target object according to the first estimated value and the lesion coordinate data.
In some embodiments, the processor 301 performs, in the calculating the first kernel function value between the lesion coordinate data:
obtaining a first data point from the lesion coordinate data, and removing the first data point from the lesion coordinate data to obtain first residual data;
Respectively calculating first distance information between the first data point and each second data point in the first residual data, and determining a corresponding first weight between the first data point and the second data point according to the first distance information;
the first kernel function value between the first data point and the second data point is adjusted according to the first weight.
In some embodiments, the processor 301 performs, in the determining the first estimated value corresponding to the lesion coordinate data according to the first kernel function value:
Summing the first kernel function value between the first data point and each second data point in the first residual data to obtain a first summation result;
summing the first weights between the first data point and each second data point in the first residual data to obtain a second summation result;
and determining the first estimated value corresponding to the lesion coordinate data according to the first summation result and the second summation result.
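Taken together, these steps describe a leave-one-out, distance-weighted kernel density estimate. The sketch below follows that reading; the Gaussian kernel, the inverse-distance weight, and the bandwidth are assumed concrete choices, since the embodiment does not fix the kernel or weight functions.

```python
# Sketch of the weighted density estimate per data point; kernel form,
# weight form, and bandwidth are illustrative assumptions.
import numpy as np

def weighted_kde(points: np.ndarray, bandwidth: float = 25.0) -> np.ndarray:
    """points: (n, 2) coordinates with n >= 2; returns one estimate per point."""
    n = len(points)
    estimates = np.empty(n)
    for i in range(n):                                  # first data point
        rest = np.delete(points, i, axis=0)             # first residual data
        d = np.linalg.norm(rest - points[i], axis=1)    # first distance information
        w = 1.0 / (1.0 + d)                             # first weights (assumed form)
        k = np.exp(-0.5 * (d / bandwidth) ** 2)         # first kernel function values
        estimates[i] = np.sum(w * k) / np.sum(w)        # first / second summation
    return estimates
```

Rendering the resulting estimates at their coordinates then yields the lesion movement hotspot map.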
In some embodiments, the processor 301 performs, in the determining, according to the initial eye movement data, an eye movement hotspot map corresponding to the target doctor:
performing data preprocessing on the initial eye movement data by using Kalman filtering to obtain target eye movement data;
calculating a second kernel function value between the target eye movement data, wherein the second kernel function value is used for representing the similarity degree between the target eye movement data;
Determining a second estimated value corresponding to the target eye movement data according to the second kernel function value, wherein the second estimated value is used for representing the density distribution condition of the target eye movement data;
And determining the eye movement hotspot map corresponding to the target doctor according to the second estimated value and the target eye movement data.
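A scalar Kalman filter applied per axis is one standard way to carry out this preprocessing; the noise parameters below are illustrative assumptions. The subsequent density steps then mirror the weighted_kde sketch above, applied to the filtered gaze coordinates.

```python
# Sketch of Kalman-filter smoothing for one axis of the raw gaze samples;
# process and measurement variances are illustrative assumptions.
import numpy as np

def kalman_smooth(samples: np.ndarray,
                  process_var: float = 1e-2,
                  measurement_var: float = 4.0) -> np.ndarray:
    """samples: (n,) raw gaze coordinates on one axis; returns filtered values."""
    x, p = float(samples[0]), 1.0            # initial state and covariance
    out = np.empty(len(samples))
    for i, z in enumerate(samples):
        p += process_var                     # predict (static motion model)
        gain = p / (p + measurement_var)     # Kalman gain
        x += gain * (z - x)                  # correct with the new measurement
        p *= 1.0 - gain                      # update covariance
        out[i] = x
    return out
```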
In some embodiments, the processor 301 performs, in the calculating the second kernel function value between the target eye movement data:
obtaining a third data point from the target eye movement data, and removing the third data point from the target eye movement data to obtain second residual data;
respectively calculating second distance information between the third data point and each fourth data point in the second residual data, and determining a second weight corresponding to the third data point and the fourth data point according to the second distance information;
The second kernel function value between the third data point and the fourth data point is adjusted according to the second weight.
In some embodiments, the processor 301 performs, in the determining the second estimated value corresponding to the target eye movement data according to the second kernel function value:
Summing the second kernel function value between the third data point and each fourth data point in the second residual data to obtain a third summation result;
Summing the second weights between the third data point and each fourth data point in the second residual data to obtain a fourth summation result;
And determining the second estimated value corresponding to the target eye movement data according to the third summation result and the fourth summation result.
In some embodiments, the processor 301 performs, in the determining the target attention of the target doctor to the auxiliary screening result according to the lesion movement hotspot map and the eye movement hotspot map:
converting the lesion movement hotspot map into a first vector and converting the eye movement hotspot map into a second vector;
Calculating a correlation value between the first vector and the second vector by using the Spearman correlation coefficient;
And determining the target doctor's target attention to the auxiliary screening result according to the correlation value.
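The correlation step can be sketched with a standard Spearman implementation; flattening both maps into vectors and using scipy is one concrete choice, not one mandated by the embodiment.

```python
# Sketch of the target attention computation from the two hotspot maps.
import numpy as np
from scipy.stats import spearmanr

def target_attention(lesion_heat: np.ndarray, eye_heat: np.ndarray) -> float:
    v1 = lesion_heat.ravel()    # first vector
    v2 = eye_heat.ravel()       # second vector
    rho, _ = spearmanr(v1, v2)  # Spearman correlation value in [-1, 1]
    return float(rho)
```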
In some implementations, the processor 301 further performs:
Comparing the target attention with a preset value to obtain a comparison result;
And determining an attention feedback result of the target doctor on the auxiliary screening result according to the comparison result, and sending the attention feedback result to a target terminal corresponding to the target doctor, so that the target terminal displays the attention feedback result to the target doctor after receiving it.
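The comparison itself reduces to a threshold test; the 0.5 preset and the two feedback labels below are assumptions that follow the "line of sight ignores lesion" / "line of sight captures lesion" wording used in this document.

```python
# Sketch of deriving the attention feedback result; the preset value
# and label strings are illustrative assumptions.
def attention_feedback(target_attention: float, preset: float = 0.5) -> str:
    if target_attention >= preset:
        return "line of sight captures lesion"
    return "line of sight ignores lesion"
```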
In some implementations, the processor 301 further performs:
constructing corresponding prompt information based on the auxiliary screening result;
And overlapping the prompt information corresponding to the auxiliary screening result into an endoscope detection image corresponding to the target object for real-time display.
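One way to realize this real-time superimposition is an OpenCV overlay of the predicted box and a text label; the coordinates, label, and colors below are illustrative assumptions.

```python
# Sketch of superimposing prompt information on an endoscope detection
# image; box, label, and colors are illustrative assumptions.
import numpy as np
import cv2

def overlay_prompt(frame: np.ndarray, box: tuple,
                   label: str = "potential lesion") -> np.ndarray:
    x_min, y_min, x_max, y_max = box
    cv2.rectangle(frame, (x_min, y_min), (x_max, y_max), (0, 0, 255), 2)
    cv2.putText(frame, label, (x_min, max(y_min - 8, 12)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)
    return frame
```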
In some implementations, the processor 301 further performs:
determining a glance path corresponding to the target doctor according to the initial eye movement data;
and displaying the scanning path in the endoscope detection data in real time.
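Real-time display of the glance path can likewise be sketched as a polyline overlay of the smoothed gaze samples; the color and thickness are illustrative assumptions.

```python
# Sketch of drawing the glance path on an endoscope frame.
import numpy as np
import cv2

def draw_glance_path(frame: np.ndarray, gaze_points: np.ndarray) -> np.ndarray:
    """gaze_points: (n, 2) pixel coordinates of the smoothed gaze samples."""
    pts = gaze_points.reshape(-1, 1, 2).astype(np.int32)
    cv2.polylines(frame, [pts], isClosed=False, color=(0, 255, 0), thickness=2)
    return frame
```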
It should be noted that, for convenience and brevity of description, specific working processes of the above-described electronic device may refer to corresponding processes in the foregoing embodiment of the endoscope image processing method, which are not described herein again.
Embodiments of the present invention also provide a storage medium for computer-readable storage, the storage medium storing one or more programs executable by one or more processors to implement the steps of any of the endoscopic image processing methods provided in the embodiments of the present invention.
The storage medium may be an internal storage unit of the electronic device of the foregoing embodiment, for example, a hard disk or a memory of the electronic device. The storage medium may also be an external storage device of the electronic device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash memory card (Flash Card) provided on the electronic device.
Those of ordinary skill in the art will appreciate that all or some of the steps, systems, functional modules/units in the apparatus, and methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware embodiment, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components, for example, one physical component may have a plurality of functions, or one function or step may be cooperatively performed by several physical components. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as known to those skilled in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
It should be understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations. It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments. While the invention has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and substitutions may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the protection scope of the invention is subject to the protection scope of the claims.

Claims (10)

1. An endoscopic image processing method, characterized in that the method comprises:
acquiring an auxiliary screening result of a target object in an endoscopy, and determining lesion coordinate data corresponding to the target object according to the auxiliary screening result; wherein the target object is the object of the endoscopy, the auxiliary screening result is a potential lesion area contained in the target object, and the lesion coordinate data is used for indicating position information corresponding to the potential lesion area;
acquiring initial eye movement data fed back by a target doctor in response to the auxiliary screening result; wherein the initial eye movement data is used for representing eyeball movement information of the target doctor during observation of the auxiliary screening result;
determining a lesion movement hotspot map corresponding to the target object according to the lesion coordinate data;
determining an eye movement hotspot map corresponding to the target doctor according to the initial eye movement data;
and determining the target doctor's target attention to the auxiliary screening result according to the lesion movement hotspot map and the eye movement hotspot map.

2. The method according to claim 1, characterized in that the acquiring the auxiliary screening result of the target object in the endoscopy comprises:
determining a target type corresponding to the target object through a target detection model, and determining a target lesion detection model according to the target type; wherein the target type is used for representing the endoscope detection scene in which the target object is located, and different endoscope detection scenes use different target lesion detection models;
obtaining endoscope detection data corresponding to the target object;
and identifying the endoscope detection data according to the target lesion detection model to obtain the auxiliary screening result.

3. The method according to claim 2, characterized in that the determining the target type corresponding to the target object through the target detection model comprises:
obtaining an initial endoscopic image corresponding to the target object, and performing data preprocessing on the initial endoscopic image to obtain a target endoscopic image;
classifying the target endoscopic image by using the target detection model to obtain a target classification result corresponding to the target endoscopic image, wherein the target classification result comprises an endoscope detection object type corresponding to the target endoscopic image and probability information corresponding to the endoscope detection object type, and the endoscope detection object type comprises a gastroscope, an enteroscope, a cystoscope, a naso-laryngoscope and a laparoscope;
and determining a maximum value corresponding to the probability information, and determining the target type according to the endoscope detection object type corresponding to the maximum value.

4. The method according to claim 1, characterized in that the auxiliary screening result comprises position information corresponding to a predicted lesion, and the determining the lesion coordinate data corresponding to the target object according to the auxiliary screening result comprises:
determining a lesion horizontal coordinate corresponding to the target object according to the maximum horizontal coordinate and the minimum horizontal coordinate corresponding to the position information;
determining a lesion vertical coordinate corresponding to the target object according to the maximum vertical coordinate and the minimum vertical coordinate corresponding to the position information;
and determining the lesion coordinate data corresponding to the target object according to the lesion horizontal coordinate and the lesion vertical coordinate.

5. The method according to claim 1, characterized in that the determining the lesion movement hotspot map corresponding to the target object according to the lesion coordinate data comprises:
calculating first kernel function values between the lesion coordinate data, the first kernel function values being used for characterizing the degree of similarity between the lesion coordinate data;
determining a first estimated value corresponding to the lesion coordinate data according to the first kernel function values, the first estimated value being used for characterizing the density distribution of the lesion coordinate data in the target object;
and determining the lesion movement hotspot map corresponding to the target object according to the first estimated value and the lesion coordinate data.

6. The method according to claim 5, characterized in that the calculating the first kernel function values between the lesion coordinate data comprises:
obtaining a first data point from the lesion coordinate data, and removing the first data point from the lesion coordinate data to obtain first residual data;
separately calculating first distance information between the first data point and each second data point in the first residual data, and determining a corresponding first weight between the first data point and the second data point according to the first distance information;
and adjusting the first kernel function value between the first data point and the second data point according to the first weight.

7. The method according to claim 6, characterized in that the determining the first estimated value corresponding to the lesion coordinate data according to the first kernel function values comprises:
summing the first kernel function values between the first data point and each second data point in the first residual data to obtain a first summation result;
summing the first weights between the first data point and each second data point in the first residual data to obtain a second summation result;
and determining the first estimated value corresponding to the lesion coordinate data according to the first summation result and the second summation result.

8. The method according to claim 1, characterized in that the determining the eye movement hotspot map corresponding to the target doctor according to the initial eye movement data comprises:
performing data preprocessing on the initial eye movement data by using Kalman filtering to obtain target eye movement data;
calculating second kernel function values between the target eye movement data, the second kernel function values being used for characterizing the degree of similarity between the target eye movement data;
determining a second estimated value corresponding to the target eye movement data according to the second kernel function values, the second estimated value being used for characterizing the density distribution of the target eye movement data;
and determining the eye movement hotspot map corresponding to the target doctor according to the second estimated value and the target eye movement data.

9. The method according to claim 8, characterized in that the calculating the second kernel function values between the target eye movement data comprises:
obtaining a third data point from the target eye movement data, and removing the third data point from the target eye movement data to obtain second residual data;
separately calculating second distance information between the third data point and each fourth data point in the second residual data, and determining a corresponding second weight between the third data point and the fourth data point according to the second distance information;
and adjusting the second kernel function value between the third data point and the fourth data point according to the second weight.

10. An endoscopic image processing apparatus, characterized in that the apparatus comprises:
a screening unit configured to acquire an auxiliary screening result of a target object in an endoscopy and determine lesion coordinate data corresponding to the target object according to the auxiliary screening result; wherein the target object is the object of the endoscopy, the auxiliary screening result is a potential lesion area contained in the target object, and the lesion coordinate data is used for indicating position information corresponding to the potential lesion area;
an eye movement unit configured to acquire initial eye movement data fed back by a target doctor in response to the auxiliary screening result; wherein the initial eye movement data is used for representing eyeball movement information of the target doctor during observation of the auxiliary screening result;
a first hotspot unit configured to determine a lesion movement hotspot map corresponding to the target object according to the lesion coordinate data;
a second hotspot unit configured to determine an eye movement hotspot map corresponding to the target doctor according to the initial eye movement data;
and an attention unit configured to determine the target doctor's target attention to the auxiliary screening result according to the lesion movement hotspot map and the eye movement hotspot map.