CN119454096B - A method and device for autonomously scanning thyroid lesions using an ultrasonic robot - Google Patents

A method and device for autonomously scanning thyroid lesions using an ultrasonic robot

Info

Publication number
CN119454096B
CN119454096B (application CN202411538708.2A; published as CN119454096A)
Authority
CN
China
Prior art keywords
probe
obstacle avoidance
offset
force sensor
sensor data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202411538708.2A
Other languages
Chinese (zh)
Other versions
CN119454096A (en)
Inventor
闫琳
王能
张少华
覃小娟
韩冬
张武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Cobot Technology Co ltd
Original Assignee
Wuhan Cobot Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Cobot Technology Co ltd
Priority to CN202411538708.2A
Publication of CN119454096A
Publication of CN119454096B
Application granted
Legal status: Active
Anticipated expiration

Abstract

Translated from Chinese


The present invention discloses a method and device for autonomously scanning thyroid lesions with an ultrasonic robot, the method comprising: controlling a probe to rotate in a target direction; real-time acquisition of ultrasonic images and force sensor data, and real-time recording and updating of a total rotation angle; performing image segmentation on the ultrasonic image to obtain a lesion location; judging whether the probe collides with a human body based on the force sensor data; if so, calculating a translational obstacle avoidance offset and judging whether the translational obstacle avoidance offset is equal to 0; if so, calculating a rotational obstacle avoidance offset and controlling the probe rotation according to the rotational obstacle avoidance offset until the probe is no longer in a collision state with the human body, and re-executing the probe rotation process; if not, controlling the probe movement according to the translational obstacle avoidance offset and re-executing the above-mentioned process of acquiring ultrasonic images and performing collision judgment; otherwise, re-executing the probe rotation process until the total rotation angle reaches a preset angle threshold. This method can complete thyroid lesion scanning more accurately and efficiently.

Description

Method and device for autonomously scanning thyroid lesions by ultrasonic robot
Technical Field
The invention relates to a method and a device for autonomously scanning thyroid lesions by an ultrasonic robot.
Background
The use of ultrasound robots is becoming more and more common in the field of ultrasound examination, especially for the diagnosis of thyroid disorders. These robots can perform lesion scanning tasks automatically or semi-automatically, remarkably improving diagnostic efficiency and accuracy. However, the operation of conventional ultrasound probes encounters certain challenges when faced with thyroid lesions located below the collarbone. Due to anatomical limitations, when a transition from a transverse section to a longitudinal section is required to obtain a more comprehensive view of the lesion, the probe may collide with the collarbone, which not only prevents the smooth progress of the examination but may also affect the accuracy of the diagnostic results.
To solve this problem, the prior art proposes solutions, such as using an external camera to acquire the shape of the neck contour in real time, and by accurately identifying the positions of the obstacles such as the collarbone and the chin, adjusting the angle and the position of the probe, so as to avoid collision and ensure acquisition of the required longitudinal section image.
Disclosure of Invention
In order to enable the ultrasonic robot to complete thyroid lesion scanning more accurately and efficiently, the embodiments of the invention provide a method and a device for autonomously scanning thyroid lesions by the ultrasonic robot.
In a first aspect, an embodiment of the present invention provides a method for autonomously scanning thyroid lesions by an ultrasonic robot, which may include:
The probe is controlled to rotate around the Z-axis of the tool coordinate system at the initial position by a preset rotation angle in the target direction;
Collecting ultrasonic images and force sensor data in real time, and recording and updating the total rotation angle in real time;
image segmentation is carried out on the ultrasonic image to obtain focus positions;
judging whether the probe collides with a human body according to the force sensor data:
if yes, calculating a translation obstacle avoidance offset according to the force sensor data and the focus position, and judging whether the translation obstacle avoidance offset is equal to 0;
If so, calculating to obtain a rotation obstacle avoidance offset according to the force sensor data, and controlling the probe to rotate around the X axis of the tool coordinate system according to the rotation obstacle avoidance offset until the probe is no longer in a collision state with a human body, and re-executing the rotation acquisition process of the probe;
If not, controlling the probe to move along the Y axis of the tool coordinate system according to the translation obstacle avoidance offset, and re-executing the process of acquiring the ultrasonic image and performing collision judgment;
otherwise, the rotating and collecting process of the probe is re-executed until the total rotating angle reaches a preset angle threshold value, and the probe is determined to complete autonomous scanning of thyroid lesions.
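The branch structure of the steps above can be sketched in Python. All names here (next_action, run_scan, the event tuples standing in for sensor readings) are hypothetical illustrations of the decision flow, not the patent's implementation:

```python
def next_action(collided: bool, trans_offset: float) -> str:
    # Collision branch of the method: no collision -> keep rotating about
    # the tool Z axis; collision with zero translational offset -> rotate
    # about the tool X axis to avoid; otherwise translate along the tool Y
    # axis by the translational obstacle avoidance offset.
    if not collided:
        return "rotate_z"
    return "rotate_x" if trans_offset == 0.0 else "translate_y"


def run_scan(events, step_angle=0.03, angle_threshold=1.57):
    # Drive a simulated scan: `events` yields (collided, trans_offset)
    # tuples per control cycle, a stand-in for real force-sensor readings.
    # The total rotation angle only accumulates on successful Z rotations.
    total = 0.0
    log = []
    for collided, offset in events:
        action = next_action(collided, offset)
        log.append(action)
        if action == "rotate_z":
            total += step_angle
        if total >= angle_threshold:
            break
    return log, total
```

Feeding it a collision followed by a zero-offset collision reproduces the translate-then-rotate avoidance order described above.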
In one or some optional implementations of the embodiments of the present application, the controlling the probe to rotate around the tool coordinate system Z axis to the target direction by a preset rotation angle at the initial position includes:
acquiring an ultrasonic image acquired by the probe at an initial position, and acquiring a focus position in the ultrasonic image;
acquiring the current probe position and the current probe posture of the probe;
calculating a first tool coordinate system position offset according to the focus position, and calculating a first next probe position by combining the current probe position and the current probe posture;
Calculating according to the preset rotation angle to obtain a rotation matrix, and combining the current probe posture to obtain a first next probe posture;
And controlling the probe to rotate around the Z-axis target direction of the tool coordinate system according to the first next probe position and the first next probe posture.
In one or some optional implementations of the embodiments of the present application, the calculating the translational obstacle avoidance offset according to the force sensor data and the focal position includes:
determining the contact direction of the probe and the human body according to the force sensor data to obtain an obstacle direction coefficient;
calculating a translational obstacle avoidance deviation pixel value according to the obstacle direction coefficient and the focus position;
Judging whether the translation obstacle avoidance deviation pixel value is larger than a minimum deviation pixel threshold value or not;
if yes, calculating to obtain the translation obstacle avoidance offset according to the translation obstacle avoidance offset pixel value and the force sensor data;
if not, setting the translation obstacle avoidance offset to 0.
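The thresholding logic above can be sketched as follows. The exact formulas for the pixel deviation and the metric conversion are given later in the description; the linear forms, the ±1 direction coefficient, and the threshold value used here are assumptions for illustration:

```python
def translational_offset_pixels(direction_coeff, lesion_x, image_width):
    # Pixel deviation of the lesion from the image center, signed by the
    # obstacle direction coefficient (assumed to be +1 or -1 depending on
    # which side of the probe contacts the obstacle).
    return direction_coeff * (lesion_x - image_width / 2.0)


def translational_offset(pixel_dev, pixel_phys_dist, min_pixels=10):
    # Convert the pixel deviation to a metric offset only when it exceeds
    # the minimum deviation pixel threshold; otherwise return 0, which
    # triggers the rotational obstacle avoidance branch instead.
    if abs(pixel_dev) > min_pixels:
        return pixel_dev * pixel_phys_dist
    return 0.0
```

A deviation of 50 pixels at 0.00005 m/pixel yields a 2.5 mm translation; a 5-pixel deviation falls below the threshold and returns 0.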
In one or some optional implementations of the embodiments of the present application, the calculating a rotational obstacle avoidance offset according to the force sensor data, and controlling the probe to rotate around the tool coordinate system X axis according to the rotational obstacle avoidance offset until the probe is no longer in a collision state with a human body, includes:
determining the contact direction of the probe and the human body according to the force sensor data to obtain an obstacle direction coefficient;
calculating a second tool coordinate system position offset according to the obstacle direction coefficient, and calculating a second next probe position by combining the obtained current probe position and the current probe posture of the probe;
Calculating a rotation angle around the X axis according to the obstacle direction coefficient;
calculating a rotation obstacle avoidance offset according to the rotation angle around the X axis, and calculating a second next probe posture by combining the current probe posture;
Controlling the probe to rotate around the X axis according to the position of the second next probe and the posture of the second next probe, and collecting new force sensor data in real time;
if the probe is still collided with the human body according to the new force sensor data, the process of calculating the position and the posture of the second next probe and rotating the second next probe is re-executed until the probe is no longer in a collision state with the human body.
In one or some optional implementations of the embodiments of the present application, before the probe is controlled to rotate around the Z-axis of the tool coordinate system in the target direction by a preset rotation angle at the initial position, the method further includes:
controlling the probe to acquire an ultrasonic image, wherein the ultrasonic image comprises thyroid lesions;
image segmentation is carried out on the ultrasonic image to obtain focus positions;
Calculating to obtain the pixel distance from the focus position to the center of the ultrasonic image according to the focus position and the width value of the ultrasonic image;
calculating to obtain initial adjustment offset according to the pixel distance;
and controlling the probe to move to the initial position according to the initial adjustment offset.
In one or some optional implementations of the embodiments of the present application, the image segmentation of the ultrasound image to obtain a lesion location includes:
Image segmentation is carried out on the ultrasonic image to obtain focus contours;
And carrying out ellipse fitting on the focus outline to obtain the focus ellipse circle center position as the focus position.
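The segmentation-then-fit step can be illustrated with a minimal sketch. A full pipeline would fit an ellipse to the extracted contour (e.g. with an OpenCV-style fitEllipse) and take its center; for a solid elliptical mask the centroid coincides with that center, so this stand-in computes the centroid of the binary segmentation mask:

```python
def lesion_center(mask):
    # `mask` is a list of rows of 0/1 values from image segmentation.
    # Returns the (x, y) centroid in pixel coordinates, used here as a
    # proxy for the fitted-ellipse center position M = (mx, my).
    xs, ys, n = 0.0, 0.0, 0
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if v:
                xs += x
                ys += y
                n += 1
    if n == 0:
        return None  # no lesion pixels segmented
    return (xs / n, ys / n)
```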
In a second aspect, an embodiment of the present invention provides an apparatus for autonomously scanning thyroid lesions by an ultrasonic robot, which may include:
the first rotating module is used for controlling the probe to rotate around the Z axis of the tool coordinate system in the target direction by a preset rotation angle at the initial position;
the first segmentation module is used for carrying out image segmentation on the ultrasonic image to obtain focus positions;
The first judging module is used for judging whether the probe collides with the human body according to the force sensor data, if so, the translation obstacle avoidance module is executed, and if not, the second rotating module is executed;
the translation obstacle avoidance module is used for calculating a translation obstacle avoidance offset according to the force sensor data and the focus position when the probe collides with a human body, and judging whether the translation obstacle avoidance offset is equal to 0;
The rotating obstacle avoidance module is used for calculating the rotating obstacle avoidance offset according to the force sensor data when the translational obstacle avoidance offset is equal to 0, controlling the probe to rotate around the X axis of the tool coordinate system according to the rotating obstacle avoidance offset until the probe is no longer in a collision state with a human body, and re-executing the rotating acquisition process of the probe;
The first moving module is used for controlling the probe to move along the Y axis of the tool coordinate system according to the translation obstacle avoidance offset when the translation obstacle avoidance offset is not equal to 0, and re-executing the process of acquiring the ultrasonic image and performing collision judgment;
and the second rotating module is used for re-executing the rotating and collecting process of the probe until the total rotating angle reaches a preset angle threshold value when the probe and the human body are not collided, and determining that the probe completes autonomous scanning of thyroid lesions.
In a third aspect, embodiments of the present invention provide a computer-readable storage medium having stored thereon a computer program/instruction which, when executed by a processor, implements a method of autonomous scanning of thyroid lesions by an ultrasound robot as described above.
In a fourth aspect, embodiments of the present invention provide a computer program product comprising a computer program/instruction which, when executed by a processor, implements a method of autonomous scanning of thyroid lesions by an ultrasound robot as described above.
In a fifth aspect, an embodiment of the present invention provides a computer device, including a memory, a processor, and a computer program stored on the memory, where the processor implements a method for autonomous scanning of thyroid lesions by an ultrasound robot as described above when the processor executes the computer program.
The technical scheme provided by the embodiment of the invention has the beneficial effects that at least:
The embodiment of the invention provides a method for autonomously scanning thyroid lesions by an ultrasonic robot. The probe is controlled to rotate around the Z axis of the tool coordinate system at the initial position by a preset angle while ultrasound images and force sensor data are collected in real time and the total rotation angle is recorded. The ultrasound images are segmented to determine the lesion position, and the force sensor data are used to judge whether the probe collides with the human body. If a collision is detected, a translational or rotational obstacle avoidance offset is calculated and the probe position or angle is adjusted accordingly until the collision state is relieved; if not, rotation and acquisition continue until the total rotation angle reaches a preset threshold, completing the autonomous scan of the thyroid lesion. Because the probe is adjusted flexibly according to the actual lesion position and the patient's anatomy, it can safely complete the transition from transverse to longitudinal sectioning. By monitoring the force sensor data in real time and dynamically adjusting the probe's position or angle, collisions between the probe and the human body are effectively avoided, which remarkably improves the safety and reliability of the examination, adapts well to individual differences between patients, and ensures patient safety and comfort. The method therefore improves the efficiency and comfort of ultrasound examination while ensuring diagnostic accuracy, which is significant for improving the quality of thyroid disease diagnosis.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description and drawings.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
fig. 1 is a schematic flow chart of a method for autonomous scanning of thyroid lesions by an ultrasonic robot according to an embodiment of the present invention;
fig. 2 is a schematic view of an ultrasonic robot according to an embodiment of the present invention;
Fig. 3 shows the probe states during lesion scanning provided in an embodiment of the present invention;
Fig. 4 is a schematic view of rotation by a preset rotation angle around the Z-axis target direction of the tool coordinate system according to an embodiment of the present invention;
FIG. 5 illustrates two collision scenarios provided by embodiments of the present invention;
FIG. 6 illustrates two rotational obstacle avoidance schemes provided by embodiments of the present invention;
fig. 7 is a schematic structural diagram of a device for autonomous scanning of thyroid lesions by an ultrasonic robot according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The inventor found that the prior art acquires the shape of the neck contour in real time with an external camera and adjusts the angle and position of the probe by identifying the positions of obstacles such as the collarbone and the chin, so as to avoid collision and ensure acquisition of the required longitudinal section image. This approach is time-consuming, depends on an external camera, is costly, and is easily disturbed by differences between human bodies or by ambient light, so its reliability is poor. On this basis, the inventor further developed the present invention to provide a method and a device for autonomously scanning thyroid lesions by an ultrasonic robot.
Example 1
In a first embodiment of the present invention, a method for autonomous scanning of thyroid lesions by an ultrasonic robot is provided, and referring to fig. 1, the method may include the following steps S101 to S108:
S101, controlling the probe to rotate around the Z-axis of the tool coordinate system at the initial position by a preset rotation angle in the target direction.
S102, acquiring ultrasonic images and force sensor data in real time, and recording and updating the total rotation angle in real time.
And S103, performing image segmentation on the ultrasonic image to obtain focus positions.
S104, judging whether the probe collides with the human body according to the force sensor data, if so, executing the step S105, and if not, executing the step S108.
S105, calculating a translation obstacle avoidance offset according to the force sensor data and the focus position, judging whether the translation obstacle avoidance offset is equal to 0, if so, executing the step S106, and if not, executing the step S107.
And S106, calculating a rotation obstacle avoidance offset according to the force sensor data, controlling the probe to rotate around the X axis of the tool coordinate system according to the rotation obstacle avoidance offset until the probe is no longer in a collision state with a human body, and re-executing the probe rotation acquisition process in the steps S101-S104.
And S107, controlling the probe to move along the Y axis of the tool coordinate system according to the translation obstacle avoidance offset, and re-executing the process of acquiring the ultrasonic image and performing collision judgment in the steps S102-S104.
S108, re-executing the process of probe rotation acquisition in the steps S101-S107 until the total rotation angle reaches a preset angle threshold value, and determining that the probe completes autonomous scanning of thyroid lesions.
The embodiment of the invention provides a method for autonomously scanning thyroid lesions by an ultrasonic robot. The probe is controlled to rotate around the Z axis of the tool coordinate system at the initial position by a preset angle while ultrasound images and force sensor data are collected in real time and the total rotation angle is recorded. The ultrasound images are segmented to determine the lesion position, and the force sensor data are used to judge whether the probe collides with the human body. If a collision is detected, a translational or rotational obstacle avoidance offset is calculated and the probe position or angle is adjusted accordingly until the collision state is relieved; if not, rotation and acquisition continue until the total rotation angle reaches a preset threshold, completing the autonomous scan of the thyroid lesion. Because the probe is adjusted flexibly according to the actual lesion position and the patient's anatomy, it can safely complete the transition from transverse to longitudinal sectioning. By monitoring the force sensor data in real time and dynamically adjusting the probe's position or angle, collisions between the probe and the human body are effectively avoided, which remarkably improves the safety and reliability of the examination, adapts well to individual differences between patients, and ensures patient safety and comfort. The method therefore improves the efficiency and comfort of ultrasound examination while ensuring diagnostic accuracy, which is significant for improving the quality of thyroid disease diagnosis.
To help those skilled in the art understand the solution, exemplary diagrams of the ultrasonic robot and the probe states during lesion scanning are given below. The ultrasonic robot is shown in Fig. 2; two coordinate systems appear in the figure, with the world coordinate system at the root of the robot arm and the tool coordinate system at the end of the arm. The probe states during lesion scanning are shown in Fig. 3. The left side of the figure is defined as the initial scanning state, in which the ultrasound probe is in the transverse (horizontal) state; the tool coordinate system at the probe tip is shown, with the X axis pointing along the probe's vertical direction, the Y axis along the probe's horizontal direction, and the Z axis along the probe's depth direction. The right side of the figure is defined as the end-of-scan state, in which the ultrasound probe is in the longitudinal (vertical) state and the tool coordinate system at the probe tip has been rotated 90 degrees in the horizontal plane, completing the lesion scan. The scanning shown in the figures is the scanning described herein.
In the embodiment of the application, when autonomous scanning of a thyroid lesion begins, the probe of the ultrasonic robot first scans the neck of the human body in the transverse mode, moving along the Y axis of the tool coordinate system; because the probe is in the transverse state, it cannot collide with the collarbone or chin at this stage. Ultrasound images are acquired in real time during scanning. When a thyroid lesion appears in the ultrasound image, step S109 is executed: the probe is controlled to pull the lesion to the center of the ultrasound image, placing the probe at the initial position, after which the method of steps S101-S108 is executed. Step S109 specifically includes the following steps S1091 to S1095:
S1091, controlling the probe to acquire an ultrasonic image. Thyroid lesions are included in the ultrasound images.
S1092, image segmentation is carried out on the ultrasonic image, and focus positions are obtained.
Specifically, image segmentation may be performed on the ultrasound image to obtain the lesion contour, and ellipse fitting is then performed on the contour; the center of the fitted ellipse is taken as the lesion position M = (mx, my), where mx and my are the X-axis and Y-axis pixel coordinates in the ultrasound image coordinate system.
S1093, calculating to obtain the pixel distance from the focus position to the center of the ultrasonic image according to the focus position and the width value of the ultrasonic image.
Specifically, the pixel distance from the lesion position to the center of the ultrasound image may be calculated from the lesion position M = (mx, my) and the width value of the ultrasound image as

Δm1 = mx − W/2 (Equation 1)

where Δm1 is the pixel distance from the lesion position to the center of the ultrasound image, mx is the X-axis pixel coordinate of the lesion position in the ultrasound image coordinate system, and W is the width of the ultrasound image, which may be set, for example, to 800.
S1094, calculating to obtain initial adjustment offset according to the pixel distance.
Specifically, the initial adjustment offset may be calculated from the pixel distance using a PID control law:

pyoffset1 = kp·Δm1 + ki·ΣΔm1 + kd·(Δm1 − Δm1′) (Equation 2)

where pyoffset1 is the initial adjustment offset, Δm1 is the pixel distance from the lesion position to the center of the ultrasound image, Δm1′ is the pixel distance at the previous control cycle, ΣΔm1 is the accumulated pixel distance, and kp, ki and kd are the proportional, integral and derivative coefficients of a PID (Proportional-Integral-Derivative) controller, which may be set, for example, to 0.001, 0.0001 and 0.002 respectively.
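A minimal sketch of the PID computation, using the example gains from the description; the discrete form (summed integral, one-step difference derivative) is an assumption, since the patent gives only the coefficients:

```python
class PIDOffset:
    # Discrete PID producing the initial adjustment offset pyoffset1
    # from the pixel distance dm1; default gains follow the examples
    # in the description (kp=0.001, ki=0.0001, kd=0.002).
    def __init__(self, kp=0.001, ki=0.0001, kd=0.002):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0   # accumulated pixel distance
        self.prev = 0.0       # pixel distance at the previous cycle

    def step(self, dm1):
        self.integral += dm1
        deriv = dm1 - self.prev
        self.prev = dm1
        return self.kp * dm1 + self.ki * self.integral + self.kd * deriv
```

With a constant 100-pixel error the first cycle yields 0.001·100 + 0.0001·100 + 0.002·100 = 0.31, and the derivative term vanishes from the second cycle on.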
And S1095, controlling the probe to move to the initial position according to the initial adjustment offset.
Specifically, the current probe position Pcurrent = [Pcx, Pcy, Pcz]^T and the current probe pose Rcurrent of the probe may be acquired, where Pcx, Pcy and Pcz are the coordinates of the probe-tip tool coordinate system origin along the X, Y and Z directions of the world coordinate system.
According to the initial adjustment offset, the current probe position, and the current probe pose, the initial next probe position is calculated as

Pstart_next = Pcurrent + Rcurrent · [0, pyoffset1, 0]^T (Equation 3)

where Pstart_next is the initial next probe position, Pcurrent the current probe position, Rcurrent the current probe pose, and pyoffset1 the initial adjustment offset, applied along the Y axis of the tool coordinate system.
The probe does not need to be rotated in this step, so the initial next probe pose of the probe is equal to the current probe pose, i.e., Rstart_next=Rcurrent.
And controlling the probe to move on the Y axis according to the initial next probe position and the initial next probe posture so as to move the probe to the initial position.
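The position update of Equation 3 (and likewise Equation 5 later) rotates a tool-frame offset into the world frame before adding it to the current position. A minimal sketch with plain lists as 3-vectors and 3×3 matrices:

```python
def next_probe_position(p_current, r_current, y_offset):
    # P_next = P_current + R_current @ [0, y_offset, 0]^T:
    # the tool-frame Y offset is expressed in the world frame via the
    # current pose matrix, then added to the current world position.
    t = [0.0, y_offset, 0.0]
    return [p_current[i] + sum(r_current[i][j] * t[j] for j in range(3))
            for i in range(3)]
```

With an identity pose the probe simply slides along the world Y axis; with a rotated pose the same tool-frame offset moves the probe along the probe's own lateral direction.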
In the embodiment of the application, step S109 controls the ultrasound probe to pull the thyroid lesion to the image center. This improves lesion positioning accuracy and ensures the optimal observation position for the lesion throughout the scan, while reducing external interference caused by probe movement, for example avoiding the influence of image-edge distortion and other structures on lesion observation, thereby remarkably improving lesion detection accuracy and diagnostic quality. In addition, this process effectively prevents unnecessary collisions between the probe and the collarbone or chin during movement, ensuring the safety and fluency of the scanning process.
In the above step S101, the probe is controlled to rotate around the Z axis of the tool coordinate system in the target direction by a preset rotation angle at the initial position; referring to Fig. 4, the probe rotates clockwise around the Z axis.
Step S101 specifically includes the following steps S1011 to S1015:
s1011, acquiring an ultrasonic image acquired by the probe at the initial position, and obtaining the focus position in the ultrasonic image.
Specifically, the ultrasound image acquired by the probe at the initial position is obtained and segmented to obtain the lesion contour; ellipse fitting is then performed on the contour, and the center of the fitted ellipse is taken as the lesion position M = (mx, my), where mx and my are the X-axis and Y-axis pixel coordinates in the ultrasound image coordinate system.
S1012, acquiring the current probe position and the current probe posture of the probe.
Specifically, the current probe position Pcurrent = [Pcx, Pcy, Pcz]^T and the current probe pose Rcurrent of the probe may be acquired, where Pcx, Pcy and Pcz are the coordinates of the probe-tip tool coordinate system origin along the X, Y and Z directions of the world coordinate system.
S1013, calculating a first tool coordinate system position offset according to the focus position, and combining the current probe position and the current probe posture to calculate a first next probe position.
Specifically, the first tool coordinate system position offset may be calculated from the lesion position as

toolOffset1 = [0, (mx − W/2)·KPixelPhyDis, 0]^T (Equation 4)

where toolOffset1 is the first tool coordinate system position offset, mx is the X-axis pixel coordinate of the lesion position in the ultrasound image coordinate system, W is the width of the ultrasound image, and KPixelPhyDis is the physical pixel distance of the ultrasound image, i.e. the real-world distance represented by each pixel, which may be set, for example, to 0.00005.
According to the first tool coordinate system position offset, the current probe position, and the current probe pose, the first next probe position is calculated as

Pnext1 = Pcurrent + Rcurrent · toolOffset1 (Equation 5)

where Pnext1 is the first next probe position, Pcurrent the current probe position, Rcurrent the current probe pose, and toolOffset1 the first tool coordinate system position offset.
S1014, calculating to obtain a rotation matrix according to the preset rotation angle, and calculating to obtain a first next probe posture by combining the current probe posture.
Specifically, the rotation matrix may be calculated from the preset rotation angle as

Rz(θ) = [[cos θ, −sin θ, 0], [sin θ, cos θ, 0], [0, 0, 1]] (Equation 6)

where Rz(θ) is the rotation matrix about the Z axis and θ is the preset rotation angle, which may be set, for example, to 0.03 radians.
Multiplying the rotation matrix and the current probe pose, and calculating to obtain a first next probe pose based on the following formula 7:
Rnext1 = Rcurrent × Rz(θ)    Equation 7
Where Rnext1 represents the first next probe pose, Rz (θ) represents the rotation matrix, and Rcurrent represents the current probe pose.
And S1015, controlling the probe to rotate around the Z axis according to the first next probe position and the first next probe posture.
In the embodiment of the application, step S101 performs the rotational movement centered on the lesion, so that the lesion remains at the center of the image throughout the rotation. This maintains the clarity and continuity of the lesion, improves the accuracy of lesion feature identification, and enables comprehensive multi-angle scanning of the lesion, yielding richer lesion information and thus improving the accuracy and reliability of diagnosis.
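The position and pose updates of steps S1013-S1015 can be sketched as follows. Since only the symbols of Equation 4 are reproduced above, the offset form used here (re-centering the lesion along the tool X axis via KPixelPhyDis) is an assumption, and the function names are illustrative:

```python
import numpy as np

def rotation_z(theta):
    """Standard rotation matrix about the Z axis (Equation 6)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def next_probe_step(p_current, r_current, mx, theta=0.03,
                    k_pixel_phy_dis=0.00005, width=800):
    """One lesion-centered rotation step (S1013-S1015).

    Assumed form of Equation 4: shift along the tool X axis by the
    lesion's pixel offset from the image centerline, converted to a
    physical distance via k_pixel_phy_dis.
    """
    tool_offset1 = np.array([(mx - width / 2) * k_pixel_phy_dis, 0.0, 0.0])
    p_next1 = p_current + r_current @ tool_offset1   # Equation 5
    r_next1 = r_current @ rotation_z(theta)          # Equation 7
    return p_next1, r_next1
```

When the lesion is already centered (mx = W/2), the position is unchanged and only the pose rotates by θ about the Z axis.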
In the step S102, the ultrasonic image and the force sensor data are acquired in real time, and the updated total rotation angle is recorded in real time.
Specifically, after the above step S101 is completed, an ultrasound image is acquired together with force sensor data F = (fx, fy, fz, tx, ty, tz). The force sensor data consist of six components: the first three, fx, fy, and fz, are the forces on the probe along the X, Y, and Z axes of the tool coordinate system, and the last three, tx, ty, and tz, are the corresponding moments, i.e., rotational forces, about those axes.
At the same time, the total rotation angle θ_total needs to be updated in real time, i.e., θ_total = θ_total + θ.
In step S103, the ultrasound image is subjected to image segmentation to obtain the lesion position. Specifically, the following steps S1031 to S1032 may be included:
S1031, image segmentation is performed on the ultrasound image to obtain the lesion contour.
Specifically, image segmentation is performed on the ultrasound image by using a preset image segmentation network to obtain a focus contour.
Those skilled in the art can select an appropriate neural network for pre-training based on the detailed description of the prior art to obtain a preset image segmentation network. The training process may specifically include:
Firstly, collecting ultrasonic images of thyroid lesions, marking outlines of the lesions in the ultrasonic images respectively, and preprocessing to obtain a thyroid lesion outline data set.
And secondly, selecting a proper neural network model as an initial image segmentation network, such as a U-Net model, a SegNet model and the like.
Third, the thyroid lesion contour dataset is divided into a training set and a test set.
Fourth, define a loss function (e.g., cross entropy loss function, mean square error loss function, etc.), an optimization algorithm (e.g., adam, SGD, etc.), etc.
And fifthly, training an initial image segmentation network by using a training set of a thyroid focus contour data set to obtain a trained image segmentation model.
The training process of the image segmentation model is repeated until a preset condition is met, at which point training stops and the preset image segmentation network is obtained. The preset condition may be reaching a fixed number of iterations, the accuracy reaching a threshold, the accuracy not changing within a preset number of iterations, or the like. No particular limitation is imposed here.
S1032, carrying out ellipse fitting on the focus outline to obtain focus ellipse circle center position as focus position.
Specifically, ellipse fitting is performed on the lesion contour using the fitEllipse function in the open-source computer vision library OpenCV (Open Source Computer Vision Library) to obtain the center of the lesion ellipse as the lesion position M = (mx, my), where mx and my denote the X-axis and Y-axis pixel coordinates in the ultrasound image coordinate system, respectively.
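Where OpenCV is unavailable, the ellipse center can be recovered from the contour with a direct least-squares conic fit. This NumPy sketch is a stand-in for cv2.fitEllipse that extracts only the center M = (mx, my); the function name is illustrative:

```python
import numpy as np

def ellipse_center(contour):
    """Fit the conic A*x^2 + B*x*y + C*y^2 + D*x + E*y = 1 to the
    contour points (N x 2 array) by least squares, then solve for
    the center, where the conic gradient vanishes:
        2A*xc + B*yc + D = 0
        B*xc + 2C*yc + E = 0
    """
    x = contour[:, 0].astype(float)
    y = contour[:, 1].astype(float)
    design = np.column_stack([x * x, x * y, y * y, x, y])
    a, b, c, d, e = np.linalg.lstsq(design, np.ones_like(x), rcond=None)[0]
    grad = np.array([[2 * a, b], [b, 2 * c]])
    return np.linalg.solve(grad, np.array([-d, -e]))
```

This recovers the exact center for points lying on an ellipse that does not pass through the origin, which is the case for lesion contours in image pixel coordinates.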
In the step S104, whether the probe collides with the human body is judged according to the force sensor data, if yes, the step S105 is executed, and if not, the step S108 is executed.
Specifically, fy may be extracted from the force sensor data F = (fx, fy, fz, tx, ty, tz), and it is determined whether the absolute value of fy is greater than a preset collision threshold e. If yes, the probe has collided with the collarbone or chin of the human body, and step S105 is executed to perform the obstacle avoidance operation; if no, the probe has not collided with the collarbone or chin, and step S108 is executed to continue the probe rotation acquisition process. The preset collision threshold e may be set to 2.0 N, for example.
In the step S105, a translation obstacle avoidance offset is calculated according to the force sensor data and the focus position, and whether the translation obstacle avoidance offset is equal to 0 is judged, if yes, the step S106 is executed. If not, step S107 is performed. Specifically, the method comprises the following steps S1051-S1055:
S1051, determining the contact direction of the probe and the human body according to the data of the force sensor, and obtaining the obstacle direction coefficient.
Specifically, the force fy along the tool coordinate system Y axis may be extracted from the force sensor data F = (fx, fy, fz, tx, ty, tz). When fy ≥ e, the probe has collided with the collarbone and needs to translate toward the upper side of the human body to avoid the obstacle, so the obstacle direction coefficient obstacleDir is set to 1; when fy ≤ −e, the probe has collided with the chin and needs to translate toward the lower side of the human body to avoid the obstacle, so obstacleDir is set to −1.
To facilitate understanding of the scheme by those skilled in the art, the two collision situations described above are shown in fig. 5: the left side shows the probe colliding with the collarbone, requiring translation toward the upper side of the human body along the tool coordinate system Y axis, and the right side shows the probe colliding with the chin, requiring translation toward the lower side of the human body along the tool coordinate system Y axis.
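The S1051 thresholding can be written directly. The function name is illustrative, and the 0 return value for the no-contact case is an assumption added for completeness:

```python
def obstacle_direction(fy, e=2.0):
    """Obstacle direction coefficient from the tool-frame Y force (S1051).

    Returns +1 when fy >= e (collarbone contact, avoid toward the
    upper side), -1 when fy <= -e (chin contact, avoid toward the
    lower side), and 0 when |fy| < e (no collision detected).
    """
    if fy >= e:
        return 1
    if fy <= -e:
        return -1
    return 0
```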
S1052, calculating to obtain the translational obstacle avoidance deviation pixel value according to the obstacle direction coefficient and the focus position.
Specifically, according to the obstacle direction coefficient and the focus position, a translational obstacle avoidance deviation pixel value may be calculated based on the following formula 8:
Wherein Δm2 represents the translational obstacle avoidance deviation pixel value, mx represents the X-axis pixel coordinate of the lesion position in the ultrasound image coordinate system, W represents the width of the ultrasound image, and obstacleDir is the obstacle direction coefficient. W may be set to 800, for example.
S1053, judging whether the translation obstacle avoidance deviation pixel value is larger than the minimum deviation pixel threshold value, if yes, executing step S1054, and if not, executing step S1055.
Specifically, it may be determined whether the absolute value |Δm2| of the translational obstacle avoidance deviation pixel value is greater than the minimum deviation pixel threshold ε. If yes, the probe can translate to avoid the obstacle without losing the lesion, and step S1054 is executed to calculate the translational obstacle avoidance offset; if no, translating along the tool coordinate system Y axis would risk losing the lesion, so step S1055 is executed to set the translational obstacle avoidance offset to 0, after which the rotational obstacle avoidance operation of step S106 is performed. The minimum deviation pixel threshold ε may be set to 10, for example.
S1054, calculating the translation obstacle avoidance offset according to the translation obstacle avoidance offset pixel value and the force sensor data.
Specifically, according to the pixel value of the translational obstacle avoidance deviation and the force applied to the probe in the Y-axis direction of the force sensor data under the tool coordinate system, the translational obstacle avoidance offset is calculated based on the following formula 9:
pyoffset2 = km × Δm2 + kf × fy    Equation 9
Wherein pyoffset2 is the translational obstacle avoidance offset, Δm2 is the translational obstacle avoidance deviation pixel value, fy is the force on the probe along the Y axis of the tool coordinate system, km is a preset pixel offset coefficient, and kf is a preset force offset coefficient. km may be set to 0.00002, for example.
S1055, setting the translation obstacle avoidance offset to 0.
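Steps S1053-S1055 combine the pixel margin and the measured force into the offset of Equation 9. Δm2 is taken as an input here because the body of Equation 8 is not reproduced above, and the kf default value is an illustrative assumption (the text gives no example value for it):

```python
def translational_offset(delta_m2, fy, eps=10.0, km=0.00002, kf=0.001):
    """Translational obstacle avoidance offset (S1053-S1055).

    If the pixel margin |delta_m2| exceeds the minimum deviation
    threshold eps, apply Equation 9: pyoffset2 = km*delta_m2 + kf*fy.
    Otherwise translating would risk losing the lesion, so return 0
    and fall through to rotational avoidance (S106).
    """
    if abs(delta_m2) > eps:
        return km * delta_m2 + kf * fy   # Equation 9
    return 0.0
```

Adding the kf·fy term makes the avoidance step grow with the measured contact force, which is what lets the avoidance speed adapt to how hard the probe is pressing against the obstacle.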
In the step S106, a rotational obstacle avoidance offset is calculated according to the force sensor data, and the probe is controlled to rotate around the tool coordinate system X axis according to the rotational obstacle avoidance offset until the probe is no longer in a collision state with the human body, and the process of rotational acquisition of the probe in the steps S101-S104 is re-executed. Specifically, the method comprises the following steps S1061-S1066:
S1061, determining the contact direction of the probe and the human body according to the force sensor data, and obtaining the obstacle direction coefficient.
Specifically, the force fy along the tool coordinate system Y axis may be extracted from the force sensor data F = (fx, fy, fz, tx, ty, tz). When fy ≥ e, the probe has collided with the collarbone and must avoid the obstacle toward the upper side of the human body, so the obstacle direction coefficient obstacleDir is set to 1; when fy ≤ −e, the probe has collided with the chin and must avoid the obstacle toward the lower side of the human body, so obstacleDir is set to −1.
To facilitate understanding of the scheme by those skilled in the art, the rotational obstacle avoidance corresponding to the two collision situations is shown in fig. 6: the left side shows the probe colliding with the collarbone, requiring clockwise rotation about the tool coordinate system X axis to avoid the obstacle, and the right side shows the probe colliding with the chin, requiring counterclockwise rotation about the tool coordinate system X axis to avoid the obstacle.
S1062, calculating a second tool coordinate system position offset according to the obstacle direction coefficient, and calculating a second next probe position by combining the acquired current probe position and the current probe posture of the probe.
Specifically, the second tool coordinate system position offset amount may be calculated based on the following equation 10 according to the obstacle direction coefficient:
Where toolOffset2 denotes the second tool coordinate system position offset, obstacleDir is the obstacle direction coefficient, and Wtool is the ultrasonic probe tip length, which may be set to 0.04 m, for example.
According to the position offset of the second tool coordinate system, the current probe position and the current probe posture, a second next probe position is calculated based on the following formula 11:
Pnext2 = Pcurrent + Rcurrent × toolOffset2    Equation 11
Where Pnext2 denotes the second next probe position, Pcurrent denotes the current probe position, Rcurrent denotes the current probe pose, and toolOffset2 denotes the second tool coordinate system position offset.
And S1063, calculating the rotation angle around the X axis according to the obstacle direction coefficient.
Specifically, the rotation angle around the X axis may be calculated from the obstacle direction coefficient based on the following Equation 12:
α = obstacleDir × (α0 − kt × tx)    Equation 12
Wherein α represents the rotation angle around the X axis, obstacleDir represents the obstacle direction coefficient, α0 represents the preset initial rotation angle, kt represents the moment deviation coefficient, and tx represents the moment of the probe about the X axis of the tool coordinate system in the force sensor data. α0 may be set to 0.05 radians and kt to 0.1, for example.
S1064, calculating a rotation obstacle avoidance offset according to the rotation angle around the X axis, and calculating a second next probe posture by combining the current probe posture.
Specifically, the rotational obstacle avoidance offset may be calculated from the rotation angle around the X axis based on the following Equation 13:
Rx(α) = [[1, 0, 0], [0, cos α, −sin α], [0, sin α, cos α]]    Equation 13
Wherein Rx(α) represents the rotational obstacle avoidance offset and α represents the rotation angle about the X axis.
Multiplying the rotational obstacle avoidance offset by the current probe pose, and calculating a second next probe pose based on the following equation 14:
Rnext2 = Rcurrent × Rx(α)    Equation 14
Where Rnext2 represents the second next probe pose, Rx (α) represents the rotational obstacle avoidance offset, and Rcurrent represents the current probe pose.
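The rotational-avoidance pose update of S1063-S1064 follows the same pattern as the Z rotation. Rx(α) is the standard X-axis rotation matrix assumed for Equation 13, and the function names are illustrative:

```python
import numpy as np

def rotation_x(alpha):
    """Standard rotation matrix about the X axis (Equation 13)."""
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, c, -s],
                     [0.0, s,  c]])

def rotational_avoidance_pose(r_current, obstacle_dir, tx,
                              alpha0=0.05, kt=0.1):
    """Second next probe pose (S1063-S1064).

    Equation 12: alpha = obstacle_dir * (alpha0 - kt * tx)
    Equation 14: Rnext2 = Rcurrent @ Rx(alpha)
    """
    alpha = obstacle_dir * (alpha0 - kt * tx)   # Equation 12
    return r_current @ rotation_x(alpha)        # Equation 14
```

The kt·tx term shrinks (or grows) the step according to the measured X-axis moment, so the rotation backs off more aggressively the harder the probe is being twisted by the obstacle.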
And S1065, controlling the probe to rotate around the X axis according to the position and the posture of the second next probe, and collecting new force sensor data in real time.
And S1066, if the probe is still collided with the human body according to the new force sensor data, re-executing the process of calculating the second next probe position and the second next probe posture and rotating as described in the steps S1061-S1065 until the probe is no longer in a collision state with the human body, and re-executing the process of rotating and collecting the probe as described in the steps S101-S104.
In the embodiment of the application, when translational obstacle avoidance along the Y axis is insufficient to resolve the collision, the rotation about the X axis completed in step S106 performs the avoidance instead, ensuring that the probe safely clears the obstacle without losing the lesion. This secures safe multi-angle lesion scanning, improves the stability and clarity of the lesion ultrasound images, and enhances the flexibility and adaptability of the method.
In the step S107, the probe is controlled to move according to the translational obstacle avoidance offset, and the process of acquiring the ultrasonic image and performing collision judgment in the steps S102 to S104 is re-performed.
Specifically, according to the translational obstacle avoidance offset and the current probe position, a third next probe position is calculated based on the following formula 15:
Where Pnext3 represents the third next probe position, pyoffset2 represents the translational obstacle avoidance offset, and Pcurrent represents the current probe position.
The probe does not need to be rotated in this step, so the third next probe pose of the probe is equal to the current probe pose, i.e., Rnext3=Rcurrent.
And controlling the probe to move on the Y axis according to the position of the third next probe and the posture of the third next probe to finish translational obstacle avoidance, and then re-executing the process of acquiring the ultrasonic image and performing collision judgment in the steps S102-S104.
In the embodiment of the application, the translational obstacle avoidance completed in steps S105 and S107 ensures that when the ultrasonic probe collides with the collarbone or chin along the Y-axis direction, it can safely avoid the obstacle by translating along the Y axis without losing the lesion. This prevents the discomfort a collision would cause while maintaining continuous tracking of the lesion and preserving the stability and clarity of the lesion images. Meanwhile, the force along the Y axis is monitored in real time and the obstacle avoidance speed is dynamically adjusted according to that force, making the avoidance faster and more efficient, improving the robustness and adaptability of the system, and ensuring successful completion of the lesion scanning task.
In the step S108, the process of rotating and collecting the probe in the steps S101 to S107 is re-executed until the total rotation angle reaches the preset angle threshold value, and it is determined that the probe completes autonomous scanning of thyroid lesions.
Specifically, the process of probe rotation acquisition in steps S101-S107 may be executed in a loop until the total rotation angle reaches the preset angle threshold π/2, i.e., 90°, at which point the probe has turned from transverse to longitudinal scanning, and it is determined that the probe has completed autonomous scanning of the thyroid lesion.
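The overall S101-S108 flow reduces to the loop below. The five callbacks are placeholders for the robot and ultrasound interfaces (none of them are named in the text); the π/2 terminal angle and 0.03 rad step come from the description above:

```python
import math

def autonomous_scan(rotate_step, acquire, collided, translate, rotate_avoid,
                    theta=0.03, angle_threshold=math.pi / 2):
    """Scan-loop skeleton (S101-S108). Callback roles:
      rotate_step()   - one lesion-centered Z rotation (S101)
      acquire()       - grab image + force data (S102-S103)
      collided(f)     - collision test on force data (S104)
      translate(f)    - translational avoidance, returns the offset
                        actually applied (S105/S107)
      rotate_avoid(f) - rotate about X until clear (S106)
    """
    total_angle = 0.0
    rotate_step()                         # S101
    total_angle += theta
    while total_angle < angle_threshold:  # S108 termination check
        f = acquire()                     # S102-S103
        if collided(f):                   # S104
            if translate(f) == 0.0:       # S105: offset is zero
                rotate_avoid(f)           # S106, then rotate again
            else:
                continue                  # S107: re-acquire, re-check
        rotate_step()                     # back to S101
        total_angle += theta
    return total_angle
```

With θ = 0.03 rad, the loop performs 53 rotation steps before the accumulated angle first reaches π/2.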
Example two
Based on the same inventive concept, the embodiment of the invention also provides a device for autonomously scanning thyroid lesions by an ultrasonic robot, which comprises:
a first rotation module 101, configured to control the probe to rotate at the initial position about the Z axis of the tool coordinate system toward the target direction by a preset rotation angle;
The first acquisition module 102 is used for acquiring ultrasonic images and force sensor data in real time and recording and updating the total rotation angle in real time;
a first segmentation module 103, configured to perform image segmentation on the ultrasound image to obtain a focus position;
The first judging module 104 is configured to judge whether the probe collides with the human body according to the force sensor data, if yes, execute the translation obstacle avoidance module, and if not, execute the second rotation module;
The translation obstacle avoidance module 105 is configured to calculate a translation obstacle avoidance offset according to the force sensor data and the focus position when the probe collides with the human body, and determine whether the translation obstacle avoidance offset is equal to 0;
The rotating obstacle avoidance module 106 is configured to calculate a rotating obstacle avoidance offset according to the force sensor data when the translating obstacle avoidance offset is equal to 0, and control the probe to rotate around the tool coordinate system X axis according to the rotating obstacle avoidance offset until the probe is no longer in a collision state with a human body, and re-execute the process of rotating and collecting the probe;
The first moving module 107 is configured to control, when the translation obstacle avoidance offset is not equal to 0, the probe to move along the Y axis of the tool coordinate system according to the translation obstacle avoidance offset, and re-execute the process of acquiring the ultrasonic image and performing collision judgment;
And the second rotation module 108 is configured to re-execute the above-mentioned process of probe rotation acquisition when the probe does not collide with the human body, until the total rotation angle reaches a preset angle threshold, and determine that the probe completes autonomous scanning of thyroid lesions.
Example III
Based on the same inventive concept, an embodiment of the present invention also provides a computer-readable storage medium having stored thereon a computer program/instruction which, when executed by a processor, implements a method of autonomous scanning of thyroid lesions by an ultrasound robot as described in the above embodiment one.
Example IV
Based on the same inventive concept, embodiments of the present invention also provide a computer program product comprising a computer program/instruction which, when executed by a processor, implements a method of autonomous scanning of thyroid lesions by an ultrasound robot as described in the above embodiment one.
Example five
Based on the same inventive concept, the embodiment of the present invention further provides a computer device, including a memory, a processor and a computer program stored on the memory, where the processor implements the method for autonomously scanning thyroid lesions by the ultrasound robot as described in the first embodiment.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, magnetic disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (10)

Translated fromChinese
1.一种超声机器人自主扫查甲状腺病灶的方法,其特征在于,包括:1. A method for autonomously scanning thyroid lesions using an ultrasonic robot, comprising:控制探头在初始位置绕工具坐标系Z轴向目标方向旋转预设旋转角度;Control the probe to rotate around the Z axis of the tool coordinate system in the target direction at the initial position by a preset rotation angle;实时采集超声图像和力传感器数据,实时记录更新总旋转角度;Real-time acquisition of ultrasonic images and force sensor data, and real-time recording and updating of total rotation angle;对所述超声图像进行图像分割,得到病灶位置;performing image segmentation on the ultrasound image to obtain a lesion location;根据所述力传感器数据判断所述探头是否与人体发生碰撞;Determining whether the probe collides with a human body according to the force sensor data;若是,则根据所述力传感器数据和所述病灶位置计算得到平移避障偏移量,判断所述平移避障偏移量是否等于0;If yes, calculating a translational obstacle avoidance offset based on the force sensor data and the lesion position, and determining whether the translational obstacle avoidance offset is equal to 0;若是,则根据所述力传感器数据计算得到旋转避障偏移量,并根据所述旋转避障偏移量控制探头绕工具坐标系X轴旋转,直至所述探头不再与人体处于碰撞状态,重新执行上述探头旋转采集的过程;If so, a rotational obstacle avoidance offset is calculated based on the force sensor data, and the probe is controlled to rotate around the X-axis of the tool coordinate system according to the rotational obstacle avoidance offset until the probe is no longer in a collision state with the human body, and the above-mentioned probe rotation acquisition process is executed again;若否,则根据所述平移避障偏移量控制探头沿工具坐标系Y轴移动,重新执行上述采集超声图像并进行碰撞判断的过程;If not, the probe is controlled to move along the Y-axis of the tool coordinate system according to the translation obstacle avoidance offset, and the above-mentioned process of acquiring ultrasonic images and performing collision judgment is executed again;否则,重新执行上述探头旋转采集的过程,直至所述总旋转角度达到预设角度阈值,确定所述探头完成甲状腺病灶的自主扫查。Otherwise, the above-mentioned probe rotation acquisition process is executed again until the total rotation angle reaches the preset angle threshold, and it is determined that the probe has completed the autonomous scanning of the 
thyroid lesion.2.根据权利要求1所述的方法,其特征在于,所述控制所述探头在初始位置绕工具坐标系Z轴向目标方向旋转预设旋转角度,包括:2. The method according to claim 1, wherein controlling the probe to rotate at an initial position about a Z-axis of a tool coordinate system toward a target direction by a preset rotation angle comprises:获取所述探头在初始位置采集的超声图像,并得到该超声图像中的病灶位置;Acquiring an ultrasound image captured by the probe at an initial position, and obtaining a lesion location in the ultrasound image;获取所述探头的当前探头位置和当前探头姿态;Obtaining a current probe position and a current probe posture of the probe;根据所述病灶位置计算得到第一工具坐标系位置偏移量,结合所述当前探头位置和当前探头姿态,计算得到第一下一探头位置;A first tool coordinate system position offset is calculated based on the lesion position, and a first next probe position is calculated based on the current probe position and the current probe posture;根据所述预设旋转角度计算得到旋转矩阵,结合所述当前探头姿态,计算得到第一下一探头姿态;A rotation matrix is calculated based on the preset rotation angle, and a first next probe posture is calculated based on the current probe posture;根据所述第一下一探头位置和第一下一探头姿态,控制所述探头绕工具坐标系Z轴向目标方向进行旋转。According to the first next probe position and the first next probe posture, the probe is controlled to rotate around the Z axis of the tool coordinate system toward the target direction.3.根据权利要求1所述的方法,其特征在于,所述根据所述力传感器数据和所述病灶位置计算得到平移避障偏移量,包括:3. 
The method according to claim 1, wherein calculating the translational obstacle avoidance offset based on the force sensor data and the lesion location comprises:根据所述力传感器数据确定所述探头与人体接触的方向,得到障碍方向系数;determining the direction in which the probe contacts the human body according to the force sensor data, and obtaining an obstacle direction coefficient;根据所述障碍方向系数和所述病灶位置计算得到平移避障偏差像素值;Calculating a translation obstacle avoidance deviation pixel value according to the obstacle direction coefficient and the lesion position;判断所述平移避障偏差像素值是否大于最小偏差像素阈值;Determine whether the translation obstacle avoidance deviation pixel value is greater than a minimum deviation pixel threshold;若是,则根据所述平移避障偏差像素值和所述力传感器数据计算得到所述平移避障偏移量;If yes, the translation obstacle avoidance offset is calculated based on the translation obstacle avoidance deviation pixel value and the force sensor data;若否,则将所述平移避障偏移量设置为0。If not, the translation obstacle avoidance offset is set to 0.4.根据权利要求1所述的方法,其特征在于,所述根据所述力传感器数据计算得到旋转避障偏移量,并根据所述旋转避障偏移量控制探头绕工具坐标系X轴旋转,直至所述探头不再与人体处于碰撞状态,包括:4. 
The method according to claim 1, wherein calculating a rotational obstacle avoidance offset based on the force sensor data and controlling the probe to rotate around the X-axis of the tool coordinate system according to the rotational obstacle avoidance offset until the probe is no longer in a collision state with the human body comprises:根据所述力传感器数据确定所述探头与人体接触的方向,得到障碍方向系数;determining the direction in which the probe contacts the human body according to the force sensor data, and obtaining an obstacle direction coefficient;根据所述障碍方向系数计算得到第二工具坐标系位置偏移量,结合获取的所述探头的当前探头位置和当前探头姿态,计算得到第二下一探头位置;A second tool coordinate system position offset is calculated based on the obstacle direction coefficient, and a second next probe position is calculated based on the obtained current probe position and current probe posture of the probe;根据所述障碍反向系数计算绕X轴的旋转角度;Calculating the rotation angle around the X-axis according to the obstacle reverse coefficient;根据所述绕X轴的旋转角度计算得到旋转避障偏移量,结合所述当前探头姿态,计算得到第二下一探头姿态;The rotation obstacle avoidance offset is calculated according to the rotation angle around the X-axis, and the second next probe posture is calculated in combination with the current probe posture;根据所述第二下一探头位置和第二下一探头姿态控制所述探头绕X轴进行旋转,实时采集新的力传感器数据;Controlling the probe to rotate around the X-axis according to the second next probe position and the second next probe posture to collect new force sensor data in real time;若根据所述新的力传感器数据可以判断所述探头与人体仍发生了碰撞,则重新执行上述计算第二下一探头位置和第二下一探头姿态并旋转的过程,直至所述探头不再与人体处于碰撞状态。If it can be determined based on the new force sensor data that the probe still collides with the human body, the above process of calculating the second next probe position and the second next probe posture and rotating is performed again until the probe is no longer in a collision state with the human body.5.根据权利要求1所述的方法,其特征在于,在所述控制探头在初始位置绕工具坐标系Z轴向目标方向旋转预设旋转角度之前,还包括:5. 
The method according to claim 1, characterized in that before the probe is controlled to rotate at the initial position about the Z axis of the tool coordinate system toward the target direction by a preset rotation angle, the method further comprises:

controlling the probe to acquire an ultrasound image, the ultrasound image including a thyroid lesion;

performing image segmentation on the ultrasound image to obtain a lesion position;

calculating a pixel distance from the lesion position to the center of the ultrasound image according to the lesion position and the width value of the ultrasound image;

calculating an initial adjustment offset according to the pixel distance; and

controlling the probe to move to the initial position according to the initial adjustment offset.

6. The method according to claim 1, wherein performing image segmentation on the ultrasound image to obtain the lesion position comprises:

performing image segmentation on the ultrasound image to obtain a lesion contour; and

performing ellipse fitting on the lesion contour to obtain the center position of the fitted ellipse as the lesion position.

7. A device for autonomously scanning thyroid lesions with an ultrasonic robot, characterized by comprising:

a first rotation module, configured to control a probe to rotate at an initial position about the Z axis of the tool coordinate system toward a target direction by a preset rotation angle;

a first acquisition module, configured to acquire ultrasound images and force sensor data in real time, and to record and update a total rotation angle in real time;

a first segmentation module, configured to perform image segmentation on the ultrasound image to obtain a lesion position;

a first judgment module, configured to judge, according to the force sensor data, whether the probe collides with the human body: if so, execute a translation obstacle avoidance module; if not, execute a second rotation module;

the translation obstacle avoidance module, configured to, when the probe collides with the human body, calculate a translation obstacle avoidance offset according to the force sensor data and the lesion position, and judge whether the translation obstacle avoidance offset is equal to 0;

a rotation obstacle avoidance module, configured to, when the translation obstacle avoidance offset is equal to 0, calculate a rotation obstacle avoidance offset according to the force sensor data, control the probe to rotate about the X axis of the tool coordinate system according to the rotation obstacle avoidance offset until the probe is no longer in collision with the human body, and then re-execute the above probe rotation and acquisition process;

a first movement module, configured to, when the translation obstacle avoidance offset is not equal to 0, control the probe to move along the Y axis of the tool coordinate system according to the translation obstacle avoidance offset, and re-execute the above process of acquiring an ultrasound image and performing collision judgment; and

the second rotation module, configured to, when the probe does not collide with the human body, re-execute the above probe rotation and acquisition process until the total rotation angle reaches a preset angle threshold, whereupon the probe is determined to have completed the autonomous scan of the thyroid lesion.

8. A computer-readable storage medium on which a computer program/instruction is stored, characterized in that, when the computer program/instruction is executed by a processor, the method for autonomously scanning thyroid lesions with an ultrasonic robot according to any one of claims 1 to 6 is implemented.

9. A computer program product comprising a computer program/instruction, characterized in that, when the computer program/instruction is executed by a processor, the method for autonomously scanning thyroid lesions with an ultrasonic robot according to any one of claims 1 to 6 is implemented.

10. A computer device comprising a memory, a processor, and a computer program stored in the memory, characterized in that the processor executes the computer program to implement the method for autonomously scanning thyroid lesions with an ultrasonic robot according to any one of claims 1 to 6.
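The centering computation of claim 5 and the collision-handling branch of claims 1 and 7 can be sketched in a few lines. This is an illustrative reading of the claims, not the patented implementation: the function names, the millimetre-per-pixel scale factor, and the string action labels are all assumptions introduced here.

```python
def pixel_to_offset(lesion_x, image_width, mm_per_pixel):
    """Initial centering step of claim 5: the signed pixel distance from
    the lesion position to the image centre, converted to a physical
    offset (assumed linear scale). Zero means the lesion is centred."""
    return (lesion_x - image_width / 2.0) * mm_per_pixel


def avoidance_action(in_collision, translation_offset):
    """Branch logic of claims 1 and 7: keep rotating about the tool Z axis
    while no collision is detected; on collision, translate along the tool
    Y axis when a non-zero translation obstacle-avoidance offset exists,
    otherwise fall back to rotating about the tool X axis until contact
    with the body is released."""
    if not in_collision:
        return "rotate_z"     # continue scanning toward the preset angle threshold
    if translation_offset != 0.0:
        return "translate_y"  # translation obstacle avoidance
    return "rotate_x"         # rotation obstacle avoidance
```

In this reading, `pixel_to_offset` would run once before scanning begins to place the probe at the initial position, while `avoidance_action` would be evaluated on every acquisition cycle against fresh force-sensor data.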
CN202411538708.2A2024-10-312024-10-31 A method and device for autonomously scanning thyroid lesions using an ultrasonic robotActiveCN119454096B (en)

Priority Applications (1)

Application NumberPriority DateFiling DateTitle
CN202411538708.2ACN119454096B (en)2024-10-312024-10-31 A method and device for autonomously scanning thyroid lesions using an ultrasonic robot

Publications (2)

Publication NumberPublication Date
CN119454096A CN119454096A (en)2025-02-18
CN119454096Btrue CN119454096B (en)2025-09-23

Family

ID=94584930

Family Applications (1)

Application NumberTitlePriority DateFiling Date
CN202411538708.2AActiveCN119454096B (en)2024-10-312024-10-31 A method and device for autonomously scanning thyroid lesions using an ultrasonic robot

Country Status (1)

CountryLink
CN (1)CN119454096B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
CN112773508A (en)*2021-02-042021-05-11清华大学Robot operation positioning method and device
CN118319362A (en)*2024-05-072024-07-12武汉库柏特科技有限公司Thyroid gland transverse cutting, rotary and longitudinal cutting scanning method and device for ultrasonic robot

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
JP6873647B2 (en)*2016-09-302021-05-19キヤノンメディカルシステムズ株式会社 Ultrasonic diagnostic equipment and ultrasonic diagnostic support program
EP3919003B1 (en)*2019-01-292023-11-01Kunshan Imagene Medical Co., Ltd.Ultrasound scanning control method and system, ultrasound scanning device, and storage medium
CN115869013B (en)*2022-12-082024-07-12合肥合滨智能机器人有限公司Blood vessel positioning and navigation method for autonomous scanning of blood vessel ultrasound



Similar Documents

PublicationPublication DateTitle
CN112861598B (en) System and method for human body model estimation
Chatelain et al. Real-time needle detection and tracking using a visually servoed 3D ultrasound probe
EP3309749B1 (en)Registration of a magnetic tracking system with an imaging device
CN1883415B (en)Method for determining the position and orientation of an object, especially of a catheter, from two-dimensional x-ray images
CN112006777A (en) Robotic system and control method for nailing surgery based on surface tracking
CN108245122B (en)Magnetic guiding type capsule endoscope system and track planning method
Li et al. A framework for fast automatic robot ultrasound calibration
CN110742691A (en) A motion control method for flexible endoscope manipulation robot
Lu et al. A unified monocular camera-based and pattern-free hand-to-eye calibration algorithm for surgical robots with RCM constraints
CN119454096B (en) A method and device for autonomously scanning thyroid lesions using an ultrasonic robot
Huang et al. Robot-assisted deep venous thrombosis ultrasound examination using virtual fixture
CN111658144A (en)Control system and method of vascular robot based on autonomous control
CN115813554A (en)Full-automatic registration method of surgical robot and surgical robot
CN117462258A (en) Computer vision-based surgical assisted robot collision avoidance method, device, equipment and storage medium
CN113954082B (en)Control method, control equipment and auxiliary system suitable for puncture surgical mechanical arm
Yang et al. Robot-Assisted Automatic Ultrasound Calibration Without External Trackers
CN115662601A (en)B-ultrasonic automatic detection method and system based on vision and simulation reinforcement learning
CN115294315A (en)Device, method and system for detecting and picking up foreign matters in mechanical arm in pipeline
CN119564256B (en) Method and device for automatically marking C-mode on the portal vein of the liver for an ultrasound robot
JP7698306B2 (en) Searching device and program for acquiring echo images
CN119454095B (en) Metastasis control method and device for autonomous thyroid scan by ultrasonic robot
CN119523530B (en) A method and device for intelligently determining rib edges using an ultrasonic robot
CN118986415B (en)Registration method based on rib curvature change
Ayadi et al. An image-guided robot for needle insertion in small animal. Accurate needle positioning using visual servoing.
JP7696641B2 (en) Ultrasound image search device and program

Legal Events

DateCodeTitleDescription
PB01Publication
SE01Entry into force of request for substantive examination
GR01Patent grant
