CN115462903B - Human body internal and external sensor cooperative positioning system based on magnetic navigation - Google Patents

Human body internal and external sensor cooperative positioning system based on magnetic navigation
Download PDF

Info

Publication number
CN115462903B
CN115462903B (application CN202211417447.XA)
Authority
CN
China
Prior art keywords
endoscope
image
optical flow
magnetic
human body
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211417447.XA
Other languages
Chinese (zh)
Other versions
CN115462903A
Inventor
章世平
程帆
朱捷
吴梦麟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kaben Shenzhen Medical Equipment Co ltd
Original Assignee
Kaben Shenzhen Medical Equipment Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kaben Shenzhen Medical Equipment Co ltd
Priority to CN202211417447.XA
Publication of CN115462903A
Application granted
Publication of CN115462903B
Status: Active
Anticipated expiration

Links

Images

Classifications

Landscapes

Abstract

A human body internal and external sensor cooperative positioning system based on magnetic navigation comprises an endoscope unit for acquiring an endoscope video stream and a magnetic signal of the space it occupies; a surgical tool unit for acquiring a magnetic signal of the space it occupies; and a data processing unit for receiving the endoscope video stream sent by the endoscope unit and processing it to obtain a 3D point cloud data set. 3D-3D point cloud registration between the 3D point cloud data set and the 3D image coordinate system data corresponding to the endoscope video stream then realizes organ image positioning. The invention combines magnetic navigation technology with traditional radiomics to realize nonlinear registration between the endoscope and images of other modalities in 3D space, greatly improving the in-vivo tracking accuracy of the endoscope lens.

Description

Human body internal and external sensor cooperative positioning system based on magnetic navigation
Technical Field
The invention relates to the field of interventional medicine, in particular to human body dynamic organ positioning, and specifically relates to a human body internal and external sensor cooperative positioning system based on magnetic navigation.
Background
With the development of interventional medicine, accurate positioning of human organs benefits from advances in new technologies such as artificial intelligence, optical navigation, and magnetic navigation. Better controlling and tracking the displacement of dynamic organs inside the human body, using X-ray equipment as little as possible to reduce radiation injury, and keeping accurate positioning technology and equipment affordable are common goals of related research and development institutions.
As the cost of electronic endoscopes has fallen, endoscopy has given rise to disposable flexible-scope products which, together with rigid scopes, fiberscopes, and the like, provide strong support for examining the various natural cavities and ducts of the human body. Once the endoscope lens enters the body, its field of view (FOV) is limited; monitoring the endoscope's route inside the body to help the operator avoid obstacles and navigate curves toward the target has long been a direction of academic and engineering research.
Disclosure of Invention
The invention aims to provide a human body internal and external sensor cooperative positioning system based on magnetic navigation, addressing the problem of organ positioning in interventional procedures.
The technical scheme of the invention is as follows:
a magnetic navigation based co-location system for external and internal sensors of a human body, the system comprising:
the endoscope unit comprises an endoscope and a first magnetic sensor arranged at the front end of the endoscope, wherein the endoscope is used for acquiring an endoscope video stream, and the first magnetic sensor is used for acquiring a magnetic signal of a space where the first magnetic sensor is arranged;
the surgical tool unit comprises a surgical tool and a second magnetic sensor, and the second magnetic sensor is used for acquiring a magnetic signal of a space where the second magnetic sensor is located;
the data processing unit comprises a magnetic signal processing module and an image processing module, wherein:
the magnetic signal processing module is used for receiving magnetic signals of the first magnetic sensor and the second magnetic sensor, and obtaining coordinate position and direction information of the corresponding sensors in a three-dimensional space after processing;
the image processing module is used for receiving an endoscope video stream sent by the endoscope unit, and processing the video stream to obtain a 3D point cloud data set;
and 3D-3D point cloud registration is carried out on the 3D point cloud data set and the 3D image coordinate system data corresponding to the endoscope video stream, so as to realize organ image positioning.
Further, the first magnetic sensor and the second magnetic sensor are electromagnetic sensors or permanent-magnet sensors.
Further, the data processing unit specifically executes the following steps:
the magnetic signal processing module receives the coordinates of the body-entry position and of the organ position to be positioned, acquired by the first magnetic sensor on the endoscope, with the body-entry coordinates (x, y, z) serving as the reference position;
the image processing module receives the endoscope video stream sent by the endoscope unit, which comprises video data from the body-entry position to the organ to be positioned; it processes the images in the video stream, extracts the optical flow feature information of ducts and tissues, and forms the endoscope 3D point cloud data set from the spatial coordinates of the optical flow field feature points.
Further, the step of acquiring the 3D point cloud data set is:
inputting each frame image of the endoscope video stream into a SuperPoint network model for feature extraction to obtain the optical flow field features of consecutive endoscope images;
processing the optical flow field feature points to generate the corresponding optical flow descriptors;
inputting the optical flow field feature points and the optical flow descriptors as optical flow feature signals into an online neural network to obtain the endoscope image depth (Depth);
taking the body-entry position (x, y, z) as the reference position, acquiring the spatial coordinates (opx, opy, opz) of all optical flow field feature points from the optical flow field feature points, the optical flow descriptors, and the endoscope image depth in the optical flow feature information, forming the endoscope 3D point cloud data set; opx and opy are the in-plane coordinates of an optical flow field feature point, and opz = z + Depth.
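The coordinate assembly in the step above can be sketched as follows. This is an illustrative reconstruction, not the patented implementation; the function name and data layout are assumptions.

```python
# Illustrative sketch (not the patented implementation): assembling the
# endoscope 3D point cloud from optical flow feature points and per-point
# depth, with the body-entry position as reference and opz = z + depth.

def build_point_cloud(entry_pos, features):
    """entry_pos: (x, y, z) body-entry reference position.
    features: list of (opx, opy, depth) tuples, where (opx, opy) are the
    in-plane coordinates of an optical flow feature point and depth is the
    endoscope image depth estimated by the online neural network."""
    x, y, z = entry_pos
    cloud = []
    for opx, opy, depth in features:
        # In-plane coordinates are kept as-is; the axial coordinate is the
        # reference z plus the estimated depth, as described in the text.
        cloud.append((opx, opy, z + depth))
    return cloud

entry = (10.0, 20.0, 5.0)
feats = [(1.0, 2.0, 3.5), (4.0, 5.0, 7.25)]
print(build_point_cloud(entry, feats))
```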
Further, the obtaining step of the optical flow descriptor OFD is:
the average distance of all optical flow characteristic points corresponding to adjacent frames between continuous frames of the endoscope image is calculated by sampling the following formula
Figure 886384DEST_PATH_IMAGE001
And average angle>
Figure 346184DEST_PATH_IMAGE002
As an optical flow descriptor OFD;
Figure 934161DEST_PATH_IMAGE003
Figure 996663DEST_PATH_IMAGE004
wherein: k represents the number of adjacent frame groups, k = [1,2,3, …, n ], n represents the number of adjacent frame groups; dx and dy are respectively the distance difference of the optical flow characteristic points corresponding to the previous frame image and the next frame image in the current adjacent frame in the x direction and the y direction, m is the number of all the corresponding optical flow characteristic points in the current adjacent frame image, and i represents the number of the optical flow characteristic points in the current adjacent frame image.
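A minimal sketch of the OFD computation for one adjacent-frame pair, assuming the descriptor is the mean displacement distance and mean displacement angle described above (the function name is a hypothetical illustration):

```python
import math

# Hedged sketch of the optical flow descriptor (OFD) for one adjacent-frame
# pair: the average displacement distance and average displacement angle over
# the m matched optical flow feature points.

def optical_flow_descriptor(displacements):
    """displacements: list of (dx, dy) displacement differences of the matched
    feature points of one adjacent-frame pair.
    Returns (average distance, average angle in radians)."""
    m = len(displacements)
    avg_dist = sum(math.hypot(dx, dy) for dx, dy in displacements) / m
    avg_angle = sum(math.atan2(dy, dx) for dx, dy in displacements) / m
    return avg_dist, avg_angle

d, a = optical_flow_descriptor([(3.0, 4.0), (6.0, 8.0)])
print(d, a)  # d = 7.5; both displacements share the same direction
```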
Further, the SuperPoint network model includes:
a basic feature point extractor, which extracts basic feature points from labelled common geometric shapes;
and a self-supervised feature point labelling module, which labels interest points using the basic feature point extractor on single frames of the endoscope video stream and obtains the optical flow feature points of the corresponding frame images.
Further, the 3D image coordinate system data are acquired by:
selecting, from a CT or MRI sequence image, the images spanning the body-entry position to the arrival position corresponding to the endoscope data;
and extracting the patient position information from the DICOM (Digital Imaging and Communications in Medicine) metadata of the CT or MRI sequence images, then performing three-dimensional reconstruction and rendering to obtain the 3D image coordinate system data.
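The per-slice position step can be illustrated as follows. The slice metadata are given here as plain dictionaries standing in for the DICOM tags (ImagePositionPatient, PixelSpacing), and the axis-aligned orientation is an assumption for simplicity; a real system would read these tags from the CT/MRI series.

```python
# Illustrative sketch: using DICOM-style per-slice position metadata to map
# image pixels into patient-space coordinates for 3D reconstruction.
# The dicts below are stand-ins for real DICOM headers (an assumption).

def voxel_to_patient(slice_meta, row, col):
    """Map a (row, col) pixel of one axial slice to patient coordinates,
    assuming an axis-aligned slice orientation."""
    ox, oy, oz = slice_meta["ImagePositionPatient"]
    row_spacing, col_spacing = slice_meta["PixelSpacing"]
    return (ox + col * col_spacing, oy + row * row_spacing, oz)

slices = [
    {"ImagePositionPatient": (0.0, 0.0, 10.0), "PixelSpacing": (0.5, 0.5)},
    {"ImagePositionPatient": (0.0, 0.0, 5.0), "PixelSpacing": (0.5, 0.5)},
]
# Sort slices along the patient z axis before stacking into a volume.
slices.sort(key=lambda s: s["ImagePositionPatient"][2])
print(voxel_to_patient(slices[0], row=2, col=4))  # (2.0, 1.0, 5.0)
```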
Further, the CT or MRI sequence image is acquired by using a computed tomography apparatus or a magnetic resonance apparatus.
Further, the registration step also comprises elastic registration based on a 3D deformation field:
when the endoscope tip moves with the organ, tip-to-tip detection is performed between the second magnetic sensor at the tip of the surgical tool and the first magnetic sensor at the tip of the endoscope;
the spatial movement range of the organ is obtained from the displacement of the magnetic sensors as an offset and mapped onto the organ in the 3D image coordinate system data;
or the image change at the position of the organ to be positioned, observed through the endoscope, is mapped onto the organ in the 3D image coordinate system data to realize elastic registration, yielding a 3D organ that changes accurately in real time.
Further, the second magnetic sensor is arranged at the tip of the surgical tool;
in the data processing unit:
tool joint positioning is completed by aiming the second magnetic sensor at the first magnetic sensor at the tip of the endoscope inside the body, forming a tip-to-tip state; the in-vivo position of the endoscope tip is obtained from the 3D image coordinate system data during organ image positioning.
The invention has the beneficial effects that:
the invention combines the magnetic navigation technology and the application of the traditional image omics to realize the nonlinear registration of the endoscope and other modal images in the 3D space and greatly improve the tracking precision of the endoscope lens in the body.
The invention combines the cooperative data of the in-vivo and external magnetic sensors, not only enhances the tracking of the dynamic organs in the human body, but also is beneficial to the medical targeted operation from the outside of the human body.
The invention uses the self-supervision super point network to process and calculate the optical flow field to obtain the depth information of the endoscope video flow, and acts on all optical flow characteristics to form an endoscope optical flow point cloud data set, and carries out deformation field-based registration with the human body 3D image obtained by CT/MRI reconstruction, thereby effectively improving the registration precision.
Additional features and advantages of the invention will be set forth in the detailed description which follows.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent by describing in more detail exemplary embodiments thereof with reference to the attached drawings, in which like reference numerals generally represent like parts throughout.
Fig. 1 shows a system block diagram of the present invention.
Fig. 2 shows a schematic diagram of a human body coordinate system obtained by registering a 3D image coordinate system with a 3D point cloud dataset obtained by positioning a magnetic sensor in an embodiment.
FIG. 3 shows a schematic view of the tool co-location in an embodiment.
FIG. 4 is a schematic diagram of the SuperPoint network model in the embodiment.
FIG. 5 is a flowchart of obtaining the endoscope image depth based on joint training of the SuperPoint network model and the online neural network in the embodiment.
Fig. 6 shows a schematic diagram of feature point rotation and movement during elastic registration in the embodiment.
FIG. 7 is a schematic diagram showing the registration effect of a 3D-3D point cloud when the present invention is applied.
Detailed Description
Preferred embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited to the embodiments set forth herein.
The first embodiment is as follows:
a system for co-locating external and internal sensors of a human body based on magnetic navigation, as shown in fig. 1, is a block diagram of the system of the present invention, and the system includes:
the endoscope unit comprises an endoscope and a first magnetic sensor arranged at the front end of the endoscope, wherein the endoscope is used for acquiring an endoscope video stream, and the first magnetic sensor is used for acquiring a magnetic signal of a space where the first magnetic sensor is arranged;
the surgical tool unit (puncture needle and the like) comprises a surgical tool and a second magnetic sensor, wherein the second magnetic sensor is used for acquiring a magnetic signal of a space where the second magnetic sensor is located;
the data processing unit comprises a magnetic signal processing module and an image processing module, wherein:
the magnetic signal processing module is used for receiving magnetic signals of the first magnetic sensor and the second magnetic sensor, and obtaining coordinate position and direction information of the corresponding sensors in a three-dimensional space after processing;
the image processing module is used for receiving an endoscope video stream sent by the endoscope unit, and processing the video stream to obtain a 3D point cloud data set;
and 3D-3D point cloud registration is carried out on the 3D point cloud data set and the 3D image coordinate system data corresponding to the endoscope video stream, so that organ image positioning is realized.
The data processing unit also comprises a system control module responsible for system start-up, shutdown, and configuration, data communication among the modules, user interaction, data storage, system logs, and the like.
In this embodiment, the 3D point cloud data set of the continuous endoscope video and the 3D image reconstruction data of another modality are nonlinearly registered in space by combining magnetic navigation technology with image processing technology; the registration uses a deformation-based ICP method, or a conventional or artificial-intelligence-based registration method. This greatly improves the in-vivo tracking accuracy of the endoscope lens.
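As a rough illustration of the iterate-match-align idea behind ICP (the patent uses a deformation-based ICP or an AI-based method; the sketch below is a heavily simplified, translation-only version under assumed point-list inputs, not the patented algorithm):

```python
import math

# Minimal translation-only ICP sketch: repeatedly match each source point to
# its nearest target point, then shift the whole source cloud by the mean
# residual. This is an assumption-laden simplification for illustration only.

def icp_translation(source, target, iters=20):
    src = [list(p) for p in source]
    for _ in range(iters):
        # Brute-force nearest-neighbour correspondences.
        pairs = [(p, min(target, key=lambda q: math.dist(p, q))) for p in src]
        # The mean residual is the optimal translation for fixed matches.
        shift = [sum(q[d] - p[d] for p, q in pairs) / len(pairs) for d in range(3)]
        src = [[p[d] + shift[d] for d in range(3)] for p in src]
    return src

target = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
source = [(0.2, 0.2, 0.2), (1.2, 0.2, 0.2), (0.2, 1.2, 0.2)]
aligned = icp_translation(source, target)
print(aligned)  # converges onto the target points
```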
Example two:
in the invention, the first magnetic sensor and the second magnetic sensor adopt electromagnetic sensors or permanent magnetic sensors, the electromagnetic sensors move in a magnetic field generated by an electromagnetic generator to obtain current signals by using a coil cutting magnetic field effect, and the space coordinates of the electromagnetic sensors are obtained by calculation after AD conversion; the permanent magnet sensor is in a permanent magnet mode, and data acquisition and matrix calculation are carried out on the magnetic flux of the permanent magnet through a magnetic induction chip matrix to obtain the position of the permanent magnet sensor in the space;
the second magnetic sensor is arranged at the front end of the surgical tool;
the data processing unit executes tool joint positioning, as shown in fig. 3, by aligning with a first magnetic sensor at the tip of the endoscope in vivo, a state of the needle tip to the needle tip is formed, and tool joint positioning is completed; the position of the tip of the endoscope in vivo is obtained by 3D image coordinate coefficient data when the organ image is positioned.
In this embodiment, the endoscope module integrated with the magnetic sensor includes an optical lens and an electronic lens, and may be a hard lens system or a soft lens system, where the soft lens may be a reusable soft lens or a disposable soft lens. The magnetic sensor is mounted at the distal end of the endoscope lens, and if the endoscope is a soft lens, the distal end can be bent by operating the handle, typically up to about 270 °.
The extracorporeal surgical tool matched with the magnetic sensor is a surgical tool with an integrated magnetic sensor; taking a puncture needle as an example, a guide-wire-type magnetic sensor can be installed at the needle tip, or a larger magnetic sensor can be installed at the tail of the needle.
specifically, the magnetic sensor calibration can adopt a spatial point cloud technology, and can also adopt a marking method to manually or automatically perform registration; connect the sensor to the surgical instruments on, the most advanced position (x + l, y + w, z + h) of surgical instruments just can be fixed a position to the coordinate (x, y, z) of sensor plus the length and width height (l, w, h) of surgical instruments for 3D image coordinate coefficient can accurate show the accurate position of surgical instruments most advanced inside and outside the human body according to, guide the operator to carry out the operation, has improved the location precision greatly.
When the endoscope reaches the target organ or tissue, its tip is pointed toward the outside of the body surface as far as possible; with a flexible scope this is convenient, since the tip can be bent directly. The operator then aligns the tip of the extracorporeal surgical tool carrying the magnetic sensor with the tip of the endoscope in the body according to the endoscope tip's position in the 3D image, forming a tip-to-tip state and completing the positioning.
Example three:
in the present invention, the data processing unit specifically executes the following steps:
the magnetic signal processing module: and receiving the coordinates of the human body entering position and the position of the organ to be positioned, which are acquired by a first magnetic sensor on the endoscope, wherein the coordinates (x, y, z) of the human body entering position are used as reference positions.
The image processing module executes 3D point cloud data set acquisition:
receiving the endoscope video stream sent by the endoscope unit, wherein the video stream comprises video data from the body-entry position to the organ to be positioned; processing the images in the endoscope video stream, extracting the optical flow feature information of ducts and tissues, and forming the endoscope 3D point cloud data set from the spatial coordinates of the optical flow field feature points;
the image processing module executes the 3D image coordinate coefficient data acquisition:
selecting an image from a human body entry position to an arrival position corresponding to endoscope data in a CT or MRI sequence image; and extracting human body position information in medical digital imaging and communication information DICOM in CT or MRI sequence images, and performing three-dimensional reconstruction rendering to obtain 3D image coordinate system data.
The 3D point cloud data set and the 3D image coordinate system data are registered to realize organ image positioning; fig. 2 and fig. 7 show the schematic and effect diagrams obtained by acquiring 3D image coordinate system data from a CT or MRI sequence and registering them with the 3D point cloud data set obtained by magnetic sensor positioning; the 3D-3D point cloud registration uses a deformation-based ICP method, or a conventional or artificial-intelligence-based registration method.
Wherein: the 3D point cloud data set acquisition steps are as follows:
S1, inputting each frame image of the endoscope video stream into a SuperPoint network model for feature extraction to obtain the optical flow field features of consecutive endoscope images;
The SuperPoint network model is shown in fig. 4 and includes:
a basic feature point extractor, which extracts basic feature points from labelled common geometric shapes;
and a self-supervised feature point labelling module, which labels interest points using the basic feature point extractor on single frames of the endoscope video stream and obtains the optical flow feature points of the corresponding frame images.
S2, processing the optical flow field feature points to generate the corresponding optical flow descriptor, specifically as follows:
the average distance of all optical flow characteristic points corresponding to adjacent frames between continuous frames of the endoscope image is calculated by sampling the following formula
Figure 166745DEST_PATH_IMAGE001
And average angle>
Figure 672812DEST_PATH_IMAGE002
As an optical flow descriptor OFD;
Figure 544822DEST_PATH_IMAGE003
Figure 896169DEST_PATH_IMAGE004
wherein: k represents the number of adjacent frame groups, k = [1,2,3, …, n ], n represents the number of adjacent frame groups; dx and dy are respectively the distance difference of the optical flow characteristic points corresponding to the previous frame image and the next frame image in the current adjacent frame in the x direction and the y direction, m is the number of all the corresponding optical flow characteristic points in the current adjacent frame image, and i represents the number of the optical flow characteristic points in the current adjacent frame image.
S3, inputting the optical flow field feature points and the optical flow descriptor as optical flow feature signals into an online neural network to obtain the endoscope image depth (Depth); fig. 5 is a flowchart of obtaining the endoscope image depth based on joint training of the SuperPoint network model and the online neural network;
S4, taking the body-entry position (x, y, z) as the reference position, acquiring the spatial coordinates (opx, opy, opz) of all optical flow field feature points from the optical flow field feature points, the optical flow descriptor, and the endoscope image depth in the optical flow feature information, forming the endoscope 3D point cloud data set; opx and opy are the in-plane coordinates of an optical flow field feature point, and opz = z + Depth.
In this embodiment, the image processing module may reconstruct a CT/MRI image in 3D to obtain 3D image coordinate system data, and complete the operations of point cloud registration, image segmentation, image registration, image fusion, and the like by combining the sensor 3D coordinate position and direction information calculated by the magnetic signal processing module.
In the process of acquiring the 3D point cloud data set, the image processing module processes images of ducts, tissues, and so on as the endoscope advances; based on the optical flow feature information of the ducts and tissues, rigid registration of the endoscope 3D point cloud with the in-vivo 3D image can be realized, completing the precise registration of the endoscope video stream and the 3D image reconstructed from CT/MRI.
Meanwhile, the depth information of the endoscope video stream is estimated after the optical flow field is processed by a self-supervised SuperPoint network; applied to all optical flow features, it forms an endoscope optical flow point cloud data set, which undergoes deformation-field-based non-rigid registration with the human body 3D image reconstructed from CT/MRI.
Example four:
the registration step further includes elastic registration based on the 3D deformation field, as shown in fig. 6, which is a schematic diagram of feature point rotation and movement during elastic registration;
when the endoscope tip moves with the organ, tip-to-tip detection is performed between the second magnetic sensor at the tip of the surgical tool and the first magnetic sensor at the tip of the endoscope;
the spatial movement range of the organ is obtained from the displacement of the magnetic sensors as an offset and mapped onto the organ in the 3D image coordinate system data;
or the image change at the position of the organ to be positioned, observed through the endoscope, is mapped onto the organ in the 3D image coordinate system data to realize elastic registration, yielding a 3D organ that changes accurately in real time.
In this embodiment, organ movement is caused by human respiration, internal and external compression, organ peristalsis, and so on. If the endoscope can be kept stationary, the movement range of the dynamic organ in space is estimated from the corresponding optical flow feature points in the endoscopic video stream; if the endoscope tip moves with the organ, the spatial movement range of the dynamic organ is determined from the tip-to-tip displacement; or the endoscope image change and the tip-to-tip displacement are combined to comprehensively estimate the spatial movement range of the dynamic organ, i.e., the positive and negative movement ranges in the x, y, and z directions. These real-time change parameters of the human organ are mapped as offsets onto the organ in the 3D image, yielding a 3D organ that changes accurately in real time, achieving accurate detection of dynamic organs inside the human body and providing better targeting guidance.
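The offset-mapping step can be sketched as follows; this is a minimal illustration under the assumption that the organ model is a simple point list and the change parameters are per-axis offsets:

```python
# Hedged sketch of the offset mapping step: the measured real-time change
# parameters of the organ (per-axis offsets) are applied to every point of
# the organ model in the 3D image coordinate system, yielding the updated
# real-time organ position.

def apply_organ_offset(organ_points, offset):
    """organ_points: iterable of (x, y, z); offset: (dx, dy, dz) measured
    from sensor displacement and/or endoscope image change."""
    dx, dy, dz = offset
    return [(x + dx, y + dy, z + dz) for x, y, z in organ_points]

organ = [(10.0, 10.0, 10.0), (12.0, 10.0, 11.0)]
print(apply_organ_offset(organ, (0.5, -0.25, 1.0)))
```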
The system application method of the invention is as follows:
S1: In the endoscope unit, the first magnetic sensor is installed at the tip of the endoscope so that, after the endoscope enters the human body through a natural orifice (such as the oral cavity, intestinal tract, or urethra), it can be tracked in real time by the external magnetic signal processing module. The second magnetic sensor can be mounted at the tip or the tail of the extracorporeal surgical tool; the tool must have sufficient rigidity at its tail, otherwise bending of the tool would cause unacceptable positioning errors. The extracorporeal surgical tool with the integrated magnetic sensor can be tracked in real time by the magnetic signal processing module within a certain magnetic field range (such as a spherical or cylindrical space of 500 to 600 mm).
S2: CT or MR data of the human body are imported; the image processing module reconstructs the 3D image coordinate system data of the human body from the CT/MR sequence data, and the 3D image coordinate system is registered with the human body coordinate system through the magnetic sensor coordinate system. Specifically:
the front end of the front part of the endoscope is deeply inserted into a human body and reaches an organ to be positioned through channels such as intestinal tracts, urethra and the like of the human body, an endoscope video stream is continuously obtained in the process, an image processing module carries out image processing on pipelines, tissues and the like in the advancing process of the endoscope, and based on light stream characteristic information of the pipelines and the tissues, the registration of the endoscope 3D point cloud and the coordinate coefficient data of the 3D image in the human body can be realized, rigid registration can be carried out, and elastic registration based on a 3D deformation field can be carried out, so that the precise registration of the endoscope video stream and the 3D image reconstructed by CT \ MRI is completed.
S3: When the endoscope reaches the target organ or tissue, its tip is pointed toward the outside of the body surface as far as possible; with a flexible scope this is convenient, since the tip can be bent directly. The operator then aligns the tip of the extracorporeal surgical tool carrying the magnetic sensor with the tip of the endoscope in the body according to the endoscope tip's position in the 3D image, forming a tip-to-tip state. Organ movement can be caused by respiration, internal and external compression, organ peristalsis, and so on: if the endoscope can be kept still, the movement range of the dynamic organ in space is estimated from the corresponding optical flow feature points in the endoscope video stream; if the endoscope tip moves with the organ, the spatial movement range is determined from the tip-to-tip displacement; or the two can be combined to comprehensively estimate the spatial movement range of the dynamic organ.
S4: The real-time change parameters of the human organ are mapped as offsets onto the organ in the 3D image, yielding a 3D organ that changes accurately in real time, achieving accurate detection of dynamic organs inside the human body and providing better targeting guidance.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments.

Claims (9)

1. A human body internal and external sensor cooperative positioning system based on magnetic navigation, characterized in that the system comprises:
the endoscope unit comprises an endoscope and a first magnetic sensor arranged at the front end of the endoscope, wherein the endoscope is used for acquiring an endoscope video stream, and the first magnetic sensor is used for acquiring a magnetic signal of a space where the first magnetic sensor is arranged;
the surgical tool unit comprises a surgical tool and a second magnetic sensor, and the second magnetic sensor is used for acquiring a magnetic signal of a space where the second magnetic sensor is located;
the data processing unit comprises a magnetic signal processing module and an image processing module, wherein:
the magnetic signal processing module is used for receiving the magnetic signals of the first magnetic sensor and the second magnetic sensor, and obtaining the coordinate position and the direction information of the corresponding sensors in a three-dimensional space after processing;
the image processing module is used for receiving an endoscope video stream sent by the endoscope unit, and processing the video stream to obtain a 3D point cloud data set;
3D-3D point cloud registration is carried out on the 3D point cloud data set and the 3D image coordinate system data corresponding to the endoscope video stream, so that organ image positioning is realized;
the data processing unit specifically executes the following steps:
the magnetic signal processing module receives the coordinates of the human body entering position and of the organ position to be positioned, acquired by the first magnetic sensor on the endoscope, wherein the coordinates (x, y, z) of the human body entering position are used as the reference position;
the image processing module receives an endoscope video stream sent by an endoscope unit, wherein the endoscope video stream comprises video data of a position where a human body enters to reach an organ to be positioned; processing images in the endoscope video stream, extracting optical flow characteristic information of pipelines and tissues, and forming an endoscope 3D point cloud data set according to the space coordinates of the optical flow field characteristic points.
2. The magnetic navigation-based co-location system for internal and external sensors of a human body according to claim 1, wherein: the first magnetic sensor and the second magnetic sensor are electromagnetic sensors or permanent magnetic sensors.
3. The magnetic navigation-based co-location system for internal and external sensors of a human body according to claim 1, wherein: the 3D point cloud data set acquisition steps are as follows:
inputting each frame image in the endoscope video stream into a superpoint network model for feature extraction to obtain the optical flow field feature points of consecutive endoscope images;
processing the optical flow field feature points to generate the corresponding optical flow descriptors;
inputting the optical flow field feature points and the optical flow descriptors as optical flow feature signals into an online neural network to obtain the endoscope image Depth;
taking the human body entering position (x, y, z) as the reference position, acquiring the spatial coordinates (opx, opy, opz) of all optical flow field feature points according to the optical flow field feature points, the optical flow descriptors and the endoscope image Depth in the optical flow feature information, and forming the endoscope 3D point cloud data set; where opx and opy are the in-plane coordinates of the optical flow field feature points, and opz = z + Depth.
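As an illustrative aid (not part of the claims), the point-cloud assembly of claim 3 can be sketched as follows; treating (opx, opy) as already expressed relative to the entry frame is an assumption of this sketch:

```python
import numpy as np

def build_point_cloud(feat_xy, depth, entry_xyz):
    """Assemble an endoscope 3D point cloud from optical-flow features.

    feat_xy:   (N, 2) in-plane coordinates (opx, opy) of the optical flow
               field feature points.
    depth:     (N,) per-feature endoscope image Depth from the network.
    entry_xyz: (x, y, z) body-entry reference position from the first
               magnetic sensor.
    Returns an (N, 3) point cloud with opz = z + Depth, as in the claims.
    """
    x, y, z = entry_xyz
    opz = z + np.asarray(depth, dtype=float)
    return np.column_stack([feat_xy[:, 0], feat_xy[:, 1], opz])
```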
4. The magnetic navigation-based co-location system for internal and external sensors of a human body according to claim 3, wherein: the acquisition step of the optical flow descriptor OFD comprises the following steps:
the average distance and the average angle of all corresponding optical flow feature points between consecutive adjacent frames of the endoscope image are calculated by the following formulas and taken together as the optical flow descriptor OFD:

d̄_k = (1/m) · Σ_{i=1}^{m} √(dx_i² + dy_i²)

θ̄_k = (1/m) · Σ_{i=1}^{m} arctan(dy_i / dx_i)

wherein: k denotes the index of the adjacent-frame pair, k = [1, 2, 3, …, n], and n denotes the number of adjacent-frame pairs;
dx_i and dy_i are respectively the distance differences in the x direction and the y direction of the i-th pair of corresponding optical flow feature points between the previous frame image and the next frame image of the current adjacent pair, m is the number of all corresponding optical flow feature points in the current adjacent frame images, and i indexes the optical flow feature points in the current adjacent frame images.
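As an illustrative sketch (not part of the claims), the OFD of claim 4 for one adjacent-frame pair can be computed as below; using arctan2 instead of arctan for quadrant-correct angles is a choice of this sketch:

```python
import numpy as np

def optical_flow_descriptor(dx, dy):
    """Compute the optical flow descriptor (OFD) for one adjacent-frame pair.

    dx, dy: (m,) arrays, the per-feature displacement components in x and y
            between the previous and next frame of the pair.
    Returns (mean_distance, mean_angle): averaged displacement magnitude and
    averaged flow direction in radians.
    """
    dx = np.asarray(dx, dtype=float)
    dy = np.asarray(dy, dtype=float)
    mean_dist = np.mean(np.hypot(dx, dy))      # mean of sqrt(dx^2 + dy^2)
    mean_angle = np.mean(np.arctan2(dy, dx))   # quadrant-correct mean angle
    return mean_dist, mean_angle
```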
5. The magnetic navigation-based co-location system for internal and external sensors of a human body according to claim 3, wherein: the superpoint network model includes:
a basic feature point extractor, which extracts basic feature points from labeled common geometric figures;
and a self-supervised feature point labeling module, which labels interest points based on the basic feature point extractor and single-frame images in the endoscope video stream, so as to obtain the optical flow feature points of the corresponding frame images.
6. A magnetic navigation-based co-location system for internal and external sensors of a human body according to claim 1, wherein: the method for acquiring the 3D image coordinate system data comprises the following steps:
selecting, from the CT or MRI sequence images, the images covering the human body entry position to the arrival position corresponding to the endoscope data;
and extracting human body position information from the Digital Imaging and Communications in Medicine (DICOM) information of the CT or MRI sequence images, and performing three-dimensional reconstruction and rendering to obtain the 3D image coordinate system data.
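Extracting patient-space position information from DICOM headers reduces to the standard voxel-to-patient mapping. A simplified sketch is given below; the uniform slice spacing along the row-by-column normal is an assumption (a real series should use each slice's own ImagePositionPatient):

```python
import numpy as np

def voxel_to_patient(crs, position, orientation, spacing):
    """Map voxel indices to DICOM patient coordinates.

    crs:         (N, 3) or (3,) voxel indices as (column, row, slice).
    position:    ImagePositionPatient of the first slice, (3,) in mm.
    orientation: ImageOrientationPatient, row and column direction cosines
                 concatenated, (6,).
    spacing:     (row_spacing, col_spacing, slice_spacing) in mm.
    Returns (N, 3) patient-space coordinates.
    """
    row_dir = np.asarray(orientation[:3], dtype=float)   # direction of increasing column
    col_dir = np.asarray(orientation[3:], dtype=float)   # direction of increasing row
    normal = np.cross(row_dir, col_dir)                  # assumed slice direction
    crs = np.atleast_2d(crs).astype(float)
    return (np.asarray(position, dtype=float)
            + crs[:, [0]] * spacing[1] * row_dir
            + crs[:, [1]] * spacing[0] * col_dir
            + crs[:, [2]] * spacing[2] * normal)
```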
7. The magnetic navigation-based co-location system for internal and external sensors of a human body according to claim 6, wherein: the CT or MRI sequence image is obtained by adopting a computer tomography scanning device or a nuclear magnetic resonance device.
8. The magnetic navigation-based co-location system for internal and external sensors of a human body according to claim 1, wherein: the registration further comprises an elastic registration based on the 3D deformation field;
when the front end of the endoscope moves along with the organ, a tip-to-tip state is formed between the second magnetic sensor at the front end of the surgical tool and the first magnetic sensor at the front end of the endoscope for detection;
obtaining the spatial movement range of the organ as an offset through the displacement of the magnetic sensor, and mapping the spatial movement range of the organ to the organ in the 3D image coordinate system data;
or mapping, through the endoscope, the image change at the position of the organ to be positioned onto the organ in the 3D image coordinate system data to realize elastic registration, so as to obtain the 3D organ that changes accurately in real time.
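Mapping a measured organ offset onto the organ in the 3D image can be sketched as a local warp. This is illustrative only; the Gaussian falloff is an assumed stand-in for a full 3D deformation field:

```python
import numpy as np

def apply_offset_field(organ_pts, center, offset, sigma=30.0):
    """Warp organ points by a sensor-measured displacement.

    organ_pts: (N, 3) organ points in 3D-image coordinates.
    center:    (3,) point where the displacement was measured, e.g. the
               tracked endoscope tip.
    offset:    (3,) measured displacement of that point.
    sigma:     assumed Gaussian falloff in mm; points far from the
               measurement point are moved less.
    """
    d = np.linalg.norm(organ_pts - np.asarray(center, float), axis=1)
    w = np.exp(-(d ** 2) / (2.0 * sigma ** 2))[:, None]
    return organ_pts + w * np.asarray(offset, dtype=float)
```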
9. The magnetic navigation-based co-location system for internal and external sensors of a human body according to claim 1, wherein: the second magnetic sensor is arranged at the front end of the surgical tool;
in the data processing unit:
a tip-to-tip state is formed by aligning the second magnetic sensor with the first magnetic sensor at the front end of the endoscope in the body, completing cooperative tool positioning; the position of the endoscope tip in the body is obtained from the 3D image coordinate system data when the organ image is positioned.
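The tip-to-tip guidance of claim 9 amounts to computing a vector between the two magnetic sensor positions reported in the tracker's frame. A minimal illustrative sketch:

```python
import numpy as np

def tip_to_tip(endo_tip, tool_tip):
    """Guidance between the in-body endoscope tip (first magnetic sensor)
    and the extracorporeal tool tip (second magnetic sensor).

    Both positions are in the magnetic tracker's coordinate frame.
    Returns (distance, unit direction from tool tip toward endoscope tip);
    a zero vector is returned for the direction if the tips coincide.
    """
    v = np.asarray(endo_tip, dtype=float) - np.asarray(tool_tip, dtype=float)
    dist = float(np.linalg.norm(v))
    return dist, (v / dist if dist > 0 else v)
```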
CN202211417447.XA | 2022-11-14 | 2022-11-14 | Human body internal and external sensor cooperative positioning system based on magnetic navigation | Active | CN115462903B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202211417447.XA | 2022-11-14 | 2022-11-14 | Human body internal and external sensor cooperative positioning system based on magnetic navigation


Publications (2)

Publication Number | Publication Date
CN115462903A (en) | 2022-12-13
CN115462903B (en) | 2023-04-07

Family

ID=84338255


Country Status (1)

Country | Link
CN (1) | CN115462903B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN118000642B (en) * | 2024-04-10 | 2024-06-14 | Kaben (Shenzhen) Medical Equipment Co., Ltd. | Method and device for determining bending form of snake bone of endoscope
CN119867935B (en) * | 2025-03-19 | 2025-09-23 | Sixth Medical Center of Chinese PLA General Hospital | Multimodal electromagnetic navigation-assisted transnasal endoscopic anatomical measurement device and method for fresh cadaver heads

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
EP4025921A4 (en) * | 2019-09-03 | 2023-09-06 | Auris Health, Inc. | Detection and compensation of electromagnetic distortion
EP4271305A1 (en) * | 2021-01-04 | 2023-11-08 | Intuitive Surgical Operations, Inc. | Systems for image-based registration and associated methods
CN113506334B (en) * | 2021-06-07 | 2023-12-15 | Liu Xingyu | Multi-mode medical image fusion method and system based on deep learning
CN114010314B (en) * | 2021-10-27 | 2023-07-07 | Beihang University | An augmented reality navigation method and system for endoscopic retrograde cholangiopancreatography
CN114119549B (en) * | 2021-11-26 | 2023-08-29 | Kaben (Shenzhen) Medical Equipment Co., Ltd. | Multi-mode medical image three-dimensional point cloud registration optimization method



Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
