Disclosure of Invention
To enable an ultrasonic robot to complete thyroid lesion scanning more accurately and efficiently, embodiments of the invention provide a method and an apparatus for autonomous scanning of thyroid lesions by an ultrasonic robot.
In a first aspect, an embodiment of the present invention provides a method for autonomously scanning thyroid lesions by an ultrasonic robot, which may include:
controlling the probe, at an initial position, to rotate about the Z axis of the tool coordinate system in a target direction by a preset rotation angle;
collecting ultrasonic images and force sensor data in real time, and recording and updating the total rotation angle in real time;
performing image segmentation on the ultrasonic image to obtain a lesion position;
judging whether the probe collides with the human body according to the force sensor data:
if so, calculating a translational obstacle avoidance offset according to the force sensor data and the lesion position, and judging whether the translational obstacle avoidance offset is equal to 0;
if it is equal to 0, calculating a rotational obstacle avoidance offset according to the force sensor data, controlling the probe to rotate about the X axis of the tool coordinate system according to the rotational obstacle avoidance offset until the probe is no longer in collision with the human body, and re-executing the rotation and acquisition process of the probe;
if it is not equal to 0, controlling the probe to move along the Y axis of the tool coordinate system according to the translational obstacle avoidance offset, and re-executing the process of acquiring the ultrasonic image and performing collision judgment;
otherwise, re-executing the rotation and acquisition process of the probe until the total rotation angle reaches a preset angle threshold, and determining that the probe has completed autonomous scanning of the thyroid lesion.
In one or some optional implementations of the embodiments of the present application, the controlling the probe, at the initial position, to rotate about the tool coordinate system Z axis in the target direction by a preset rotation angle includes:
acquiring an ultrasonic image acquired by the probe at the initial position, and obtaining the lesion position in the ultrasonic image;
acquiring the current probe position and the current probe posture of the probe;
calculating a first tool coordinate system position offset according to the lesion position, and calculating a first next probe position in combination with the current probe position and the current probe posture;
calculating a rotation matrix according to the preset rotation angle, and obtaining a first next probe posture in combination with the current probe posture;
and controlling the probe to rotate about the tool coordinate system Z axis in the target direction according to the first next probe position and the first next probe posture.
In one or some optional implementations of the embodiments of the present application, the calculating the translational obstacle avoidance offset according to the force sensor data and the lesion position includes:
determining the contact direction between the probe and the human body according to the force sensor data to obtain an obstacle direction coefficient;
calculating a translational obstacle avoidance deviation pixel value according to the obstacle direction coefficient and the lesion position;
judging whether the translational obstacle avoidance deviation pixel value is greater than a minimum deviation pixel threshold;
if so, calculating the translational obstacle avoidance offset according to the translational obstacle avoidance deviation pixel value and the force sensor data;
if not, setting the translational obstacle avoidance offset to 0.
In one or some optional implementations of the embodiments of the present application, the calculating a rotational obstacle avoidance offset according to the force sensor data, and controlling the probe to rotate about the tool coordinate system X axis according to the rotational obstacle avoidance offset until the probe is no longer in collision with the human body, includes:
determining the contact direction between the probe and the human body according to the force sensor data to obtain an obstacle direction coefficient;
calculating a second tool coordinate system position offset according to the obstacle direction coefficient, and calculating a second next probe position in combination with the acquired current probe position and current probe posture of the probe;
calculating a rotation angle about the X axis according to the obstacle direction coefficient;
calculating a rotational obstacle avoidance offset according to the rotation angle about the X axis, and calculating a second next probe posture in combination with the current probe posture;
controlling the probe to rotate about the X axis according to the second next probe position and the second next probe posture, and collecting new force sensor data in real time;
if it is determined from the new force sensor data that the probe still collides with the human body, re-executing the process of calculating the second next probe position and second next probe posture and rotating, until the probe is no longer in collision with the human body.
In one or some optional implementations of the embodiments of the present application, before the controlling the probe, at the initial position, to rotate about the tool coordinate system Z axis in the target direction by a preset rotation angle, the method further includes:
controlling the probe to acquire an ultrasonic image, wherein the ultrasonic image contains a thyroid lesion;
performing image segmentation on the ultrasonic image to obtain the lesion position;
calculating the pixel distance from the lesion position to the center of the ultrasonic image according to the lesion position and the width value of the ultrasonic image;
calculating an initial adjustment offset according to the pixel distance;
and controlling the probe to move to the initial position according to the initial adjustment offset.
In one or some optional implementations of the embodiments of the present application, the image segmentation of the ultrasound image to obtain a lesion position includes:
performing image segmentation on the ultrasonic image to obtain a lesion contour;
and performing ellipse fitting on the lesion contour to obtain the center of the lesion ellipse as the lesion position.
In a second aspect, an embodiment of the present invention provides an apparatus for autonomous scanning of thyroid lesions by an ultrasonic robot, which may include:
a first rotation module, configured to control the probe, at the initial position, to rotate about the Z axis of the tool coordinate system in the target direction by a preset rotation angle;
a first acquisition module, configured to acquire ultrasonic images and force sensor data in real time, and to record and update the total rotation angle in real time;
a first segmentation module, configured to perform image segmentation on the ultrasonic image to obtain the lesion position;
a first judgment module, configured to judge whether the probe collides with the human body according to the force sensor data; if so, the translational obstacle avoidance module is executed, and if not, the second rotation module is executed;
a translational obstacle avoidance module, configured to calculate a translational obstacle avoidance offset according to the force sensor data and the lesion position when the probe collides with the human body, and to judge whether the translational obstacle avoidance offset is equal to 0;
a rotational obstacle avoidance module, configured to calculate a rotational obstacle avoidance offset according to the force sensor data when the translational obstacle avoidance offset is equal to 0, to control the probe to rotate about the X axis of the tool coordinate system according to the rotational obstacle avoidance offset until the probe is no longer in collision with the human body, and to re-execute the rotation and acquisition process of the probe;
a first movement module, configured to control the probe to move along the Y axis of the tool coordinate system according to the translational obstacle avoidance offset when the translational obstacle avoidance offset is not equal to 0, and to re-execute the process of acquiring the ultrasonic image and performing collision judgment;
and a second rotation module, configured to re-execute the rotation and acquisition process of the probe when the probe does not collide with the human body, until the total rotation angle reaches a preset angle threshold, and to determine that the probe has completed autonomous scanning of the thyroid lesion.
In a third aspect, embodiments of the present invention provide a computer-readable storage medium having stored thereon a computer program/instruction which, when executed by a processor, implements a method of autonomous scanning of thyroid lesions by an ultrasound robot as described above.
In a fourth aspect, embodiments of the present invention provide a computer program product comprising a computer program/instruction which, when executed by a processor, implements a method of autonomous scanning of thyroid lesions by an ultrasound robot as described above.
In a fifth aspect, an embodiment of the present invention provides a computer device, including a memory, a processor, and a computer program stored on the memory, wherein the processor, when executing the computer program, implements the method for autonomous scanning of thyroid lesions by an ultrasound robot described above.
The technical solutions provided by the embodiments of the invention have at least the following beneficial effects:
Embodiments of the invention provide a method for autonomous scanning of thyroid lesions by an ultrasonic robot. The probe is controlled to rotate about the Z axis of the tool coordinate system at an initial position by a preset angle while ultrasonic images and force sensor data are collected in real time and the total rotation angle is recorded. The ultrasonic images are segmented to determine the lesion position, and the force sensor data are used to judge whether the probe collides with the human body. If a collision is detected, a translational or rotational obstacle avoidance offset is calculated and the probe position or angle is adjusted accordingly until the collision state is relieved; if no collision is detected, rotation and acquisition continue until the total rotation angle reaches a preset threshold, completing the autonomous scan of the thyroid lesion. Because the probe is adjusted flexibly according to the actual lesion position and variations in human anatomy, it can safely complete the transition from transverse to longitudinal scanning. At the same time, monitoring the force sensor data in real time and dynamically adjusting the probe position or angle effectively avoids collisions between the probe and the human body, significantly improving the safety and reliability of the examination; the method adapts well to individual differences between patients and safeguards patient safety and comfort. The method therefore improves not only the efficiency and comfort of ultrasonic examination but also the accuracy of diagnosis, which is of real significance for improving the quality of thyroid disease diagnosis.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description and drawings.
The technical solutions of the invention are further described in detail below through the drawings and embodiments.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The inventor has found that, in the prior art, the shape of the neck contour is acquired in real time by an external camera, and the angle and position of the probe are adjusted by identifying obstacles such as the collarbone and the chin, so as to avoid collisions and acquire the required longitudinal section image. This approach is time-consuming, depends on an external camera, is costly, and is easily disturbed by differences between human bodies or by ambient light, so its reliability is poor. On this basis, the inventor conducted further research and made the present invention, providing a method and an apparatus for autonomous scanning of thyroid lesions by an ultrasonic robot.
Example 1
A first embodiment of the present invention provides a method for autonomous scanning of thyroid lesions by an ultrasonic robot. Referring to fig. 1, the method may include the following steps S101 to S108:
S101, controlling the probe, at the initial position, to rotate about the Z axis of the tool coordinate system in the target direction by a preset rotation angle.
S102, acquiring ultrasonic images and force sensor data in real time, and recording and updating the total rotation angle in real time.
S103, performing image segmentation on the ultrasonic image to obtain the lesion position.
S104, judging whether the probe collides with the human body according to the force sensor data; if so, executing step S105, and if not, executing step S108.
S105, calculating a translational obstacle avoidance offset according to the force sensor data and the lesion position, and judging whether the translational obstacle avoidance offset is equal to 0; if so, executing step S106, and if not, executing step S107.
S106, calculating a rotational obstacle avoidance offset according to the force sensor data, controlling the probe to rotate about the X axis of the tool coordinate system according to the rotational obstacle avoidance offset until the probe is no longer in collision with the human body, and re-executing the probe rotation and acquisition process of steps S101-S104.
S107, controlling the probe to move along the Y axis of the tool coordinate system according to the translational obstacle avoidance offset, and re-executing the ultrasonic image acquisition and collision judgment process of steps S102-S104.
S108, re-executing the probe rotation and acquisition process of steps S101-S107 until the total rotation angle reaches a preset angle threshold, and determining that the probe has completed autonomous scanning of the thyroid lesion.
Embodiments of the invention provide a method for autonomous scanning of thyroid lesions by an ultrasonic robot. The probe is controlled to rotate about the Z axis of the tool coordinate system at an initial position by a preset angle while ultrasonic images and force sensor data are collected in real time and the total rotation angle is recorded. The ultrasonic images are segmented to determine the lesion position, and the force sensor data are used to judge whether the probe collides with the human body. If a collision is detected, a translational or rotational obstacle avoidance offset is calculated and the probe position or angle is adjusted accordingly until the collision state is relieved; if no collision is detected, rotation and acquisition continue until the total rotation angle reaches a preset threshold, completing the autonomous scan of the thyroid lesion. Because the probe is adjusted flexibly according to the actual lesion position and variations in human anatomy, it can safely complete the transition from transverse to longitudinal scanning. At the same time, monitoring the force sensor data in real time and dynamically adjusting the probe position or angle effectively avoids collisions between the probe and the human body, significantly improving the safety and reliability of the examination; the method adapts well to individual differences between patients and safeguards patient safety and comfort. The method therefore improves not only the efficiency and comfort of ultrasonic examination but also the accuracy of diagnosis, which is of real significance for improving the quality of thyroid disease diagnosis.
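For orientation only, the following minimal Python sketch outlines the control loop of steps S101-S108. The robot and perception interfaces (rotate_about_tool_z, get_force, and so on) are hypothetical placeholders and are passed in as parameters; they are not part of the embodiment.

```python
import numpy as np

THETA = 0.03            # preset rotation angle per cycle, radians (step S101)
ANGLE_LIMIT = np.pi / 2 # preset angle threshold: transverse to longitudinal (step S108)
E = 2.0                 # preset collision threshold e, newtons (step S104)

def autonomous_scan(robot, segment_lesion, translation_offset):
    """Skeleton of steps S101-S108; `robot`, `segment_lesion` and
    `translation_offset` are hypothetical interfaces supplied by the caller."""
    theta_total = 0.0
    while theta_total < ANGLE_LIMIT:
        robot.rotate_about_tool_z(THETA)                   # S101
        theta_total += THETA                               # S102: update total angle
        image, force = robot.get_image(), robot.get_force()
        lesion = segment_lesion(image)                     # S103: lesion position
        while abs(force[1]) > E:                           # S104: |fy| > e is a collision
            offset = translation_offset(force[1], lesion[0])   # S105
            if offset == 0.0:
                robot.rotate_about_tool_x_until_clear()    # S106: rotational avoidance
                break                                      # then resume S101
            robot.translate_along_tool_y(offset)           # S107: translational avoidance
            image, force = robot.get_image(), robot.get_force()
            lesion = segment_lesion(image)                 # re-run S102-S104
    # S108: total rotation angle reached the threshold; scan complete
```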
To facilitate understanding by those skilled in the art, exemplary diagrams of the ultrasonic robot and of the probe states during lesion scanning in the embodiments of the present application are given below. The ultrasonic robot is shown in fig. 2; two coordinate systems exist in the figure, with the world coordinate system at the root of the mechanical arm and the tool coordinate system at the end of the mechanical arm. The probe states during lesion scanning are shown in fig. 3. The left side of the figure is defined as the initial scanning state, in which the ultrasonic probe is in the transverse state (horizontal); the tool coordinate system at the probe end is as shown, with the X axis pointing in the vertical direction of the probe, the Y axis in the horizontal direction of the probe, and the Z axis in the depth direction of the probe. The right side of the figure is defined as the scanning end state, in which the ultrasonic probe is in the longitudinal state (vertical), and the tool coordinate system at the probe end has been rotated by 90 degrees in the horizontal plane as shown, completing the lesion scan. The scanning shown in the figures corresponds to the scanning described herein.
In the embodiments of the present application, when autonomous scanning of a thyroid lesion based on the ultrasonic robot begins, the probe of the ultrasonic robot first scans the neck of the human body in the transverse mode; because the probe is in the transverse state, it moves in the Y-axis direction of the tool coordinate system and cannot collide with the collarbone or the chin at this time. Ultrasonic images are acquired in real time during this scan. When a thyroid lesion appears in the ultrasonic image, step S109 is executed: the probe is controlled to pull the lesion to the center of the ultrasonic image, so that the probe is at the initial position; the method of steps S101-S108 for autonomous scanning of the thyroid lesion is then executed. Step S109 specifically includes the following steps S1091 to S1095:
S1091, controlling the probe to acquire an ultrasonic image. The ultrasonic image contains a thyroid lesion.
S1092, performing image segmentation on the ultrasonic image to obtain the lesion position.
Specifically, image segmentation may be performed on the ultrasonic image to obtain a lesion contour, and ellipse fitting is then performed on the lesion contour to obtain the center of the lesion ellipse as the lesion position M = (mx, my), where mx and my represent the X-axis and Y-axis pixel coordinates in the ultrasound image coordinate system, respectively.
S1093, calculating the pixel distance from the lesion position to the center of the ultrasonic image according to the lesion position and the width value of the ultrasonic image.
Specifically, according to the lesion position M = (mx, my) and the width value of the ultrasound image, the pixel distance from the lesion position to the center of the ultrasound image may be calculated based on the following Equation 1:
Δm1 = mx − W/2    (Equation 1)
where Δm1 represents the pixel distance from the lesion position to the center of the ultrasound image, mx represents the X-axis pixel coordinate of the lesion position in the ultrasound image coordinate system, and W represents the width value of the ultrasound image. W may illustratively be set to 800.
S1094, calculating an initial adjustment offset according to the pixel distance.
Specifically, the initial adjustment offset may be calculated from the pixel distance based on the following Equation 2, a discrete PID control law:
pyoffset1 = kp · Δm1 + ki · ΣΔm1 + kd · (Δm1 − Δm1′)    (Equation 2)
where pyoffset1 represents the initial adjustment offset, Δm1 represents the pixel distance from the lesion position to the center of the ultrasound image (Δm1′ being its value in the previous control cycle and ΣΔm1 its accumulated sum), and kp, ki and kd represent the proportional, integral and derivative coefficients of a PID controller (Proportional-Integral-Derivative controller), which may illustratively be set to 0.001, 0.0001 and 0.002, respectively.
S1095, controlling the probe to move to the initial position according to the initial adjustment offset.
Specifically, the current probe position Pcurrent = [Pcx, Pcy, Pcz]^T and the current probe posture Rcurrent of the probe may be acquired, where Pcx, Pcy, Pcz represent the coordinates of the probe-end tool coordinate system in the X, Y and Z directions of the world coordinate system.
According to the initial adjustment offset, the current probe position and the current probe posture, the initial next probe position is calculated based on the following Equation 3:
Pstart_next = Pcurrent + Rcurrent · [0, pyoffset1, 0]^T    (Equation 3)
where Pstart_next denotes the initial next probe position, Pcurrent denotes the current probe position, Rcurrent denotes the current probe posture, and pyoffset1 denotes the initial adjustment offset.
The probe does not need to be rotated in this step, so the initial next probe posture is equal to the current probe posture, i.e., Rstart_next = Rcurrent.
The probe is controlled to move along the Y axis according to the initial next probe position and the initial next probe posture, so that the probe moves to the initial position.
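As a concrete reference for steps S1091-S1095, the following Python sketch computes Δm1, the PID-based initial adjustment offset, and the initial next probe position. The discrete PID form of Equation 2 is an assumption, and the class and function names are illustrative.

```python
import numpy as np

W = 800                              # ultrasound image width, pixels
KP, KI, KD = 0.001, 0.0001, 0.002    # PID coefficients kp, ki, kd from the embodiment

class CenteringController:
    """Pulls the lesion to the image center (step S109).
    The discrete PID form below is an assumed realization of Equation 2."""
    def __init__(self):
        self.integral = 0.0
        self.prev = 0.0

    def offset(self, m_x):
        dm1 = m_x - W / 2                     # Equation 1: pixel distance to center
        self.integral += dm1
        derivative = dm1 - self.prev
        self.prev = dm1
        return KP * dm1 + KI * self.integral + KD * derivative   # Equation 2

def initial_next_position(p_current, r_current, py_offset1):
    """Equation 3: translate along the tool-frame Y axis; posture unchanged."""
    return p_current + r_current @ np.array([0.0, py_offset1, 0.0])

# usage sketch: identity posture, lesion at pixel column 520
ctrl = CenteringController()
p_next = initial_next_position(np.zeros(3), np.eye(3), ctrl.offset(520.0))
```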
In the embodiments of the present application, controlling the ultrasonic probe to pull the thyroid lesion to the image center in step S109 not only improves lesion positioning accuracy and ensures an optimal observation position for the lesion throughout the scan, but also reduces external interference caused by probe movement, for example distortion in the image edge region and the influence of other structures on lesion observation, thereby significantly improving lesion detection accuracy and diagnostic quality. In addition, this process effectively prevents unnecessary collisions of the probe with the collarbone or the chin during movement, ensuring the safety and smoothness of the scanning process.
In the above step S101, the probe is controlled, at the initial position, to rotate about the tool coordinate system Z axis in the target direction by a preset rotation angle; with reference to fig. 4, the probe is shown rotating clockwise about the Z axis.
Step S101 specifically includes the following steps S1011 to S1015:
S1011, acquiring an ultrasonic image acquired by the probe at the initial position, and obtaining the lesion position in the ultrasonic image.
Specifically, an ultrasonic image acquired by the probe at the initial position is obtained, image segmentation is performed on the ultrasonic image to obtain a lesion contour, and ellipse fitting is then performed on the lesion contour to obtain the center of the lesion ellipse as the lesion position M = (mx, my), where mx and my represent the X-axis and Y-axis pixel coordinates in the ultrasound image coordinate system, respectively.
S1012, acquiring the current probe position and the current probe posture of the probe.
Specifically, the current probe position Pcurrent = [Pcx, Pcy, Pcz]^T and the current probe posture Rcurrent may be acquired, where Pcx, Pcy, Pcz represent the coordinates of the probe-end tool coordinate system in the X, Y and Z directions of the world coordinate system.
S1013, calculating a first tool coordinate system position offset according to the lesion position, and calculating a first next probe position in combination with the current probe position and the current probe posture.
Specifically, the first tool coordinate system position offset may be calculated from the lesion position based on the following Equation 4:
toolOffset1 = [0, (mx − W/2) · KPixelPhyDis, 0]^T    (Equation 4)
where toolOffset1 denotes the first tool coordinate system position offset, mx denotes the X-axis pixel coordinate of the lesion position in the ultrasound image coordinate system, W denotes the width value of the ultrasound image (illustratively 800), and KPixelPhyDis is the ultrasound image pixel physical distance, i.e., the real-world distance represented by each pixel of the ultrasound image, which may illustratively be set to 0.00005.
According to the first tool coordinate system position offset, the current probe position and the current probe posture, the first next probe position is calculated based on the following Equation 5:
Pnext1 = Pcurrent + Rcurrent · toolOffset1    (Equation 5)
where Pnext1 denotes the first next probe position, Pcurrent denotes the current probe position, Rcurrent denotes the current probe posture, and toolOffset1 denotes the first tool coordinate system position offset.
S1014, calculating a rotation matrix according to the preset rotation angle, and calculating a first next probe posture in combination with the current probe posture.
Specifically, the rotation matrix may be calculated from the preset rotation angle based on the following Equation 6:
Rz(θ) =
[ cos θ   −sin θ   0 ]
[ sin θ    cos θ   0 ]
[ 0        0       1 ]    (Equation 6)
where Rz(θ) represents the rotation matrix and θ represents the preset rotation angle, which may illustratively be set to 0.03 radians.
The rotation matrix is multiplied with the current probe posture, and the first next probe posture is calculated based on the following Equation 7:
Rnext1 = Rcurrent · Rz(θ)    (Equation 7)
where Rnext1 represents the first next probe posture, Rz(θ) represents the rotation matrix, and Rcurrent represents the current probe posture.
S1015, controlling the probe to rotate about the Z axis in the target direction according to the first next probe position and the first next probe posture.
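The rotation of steps S1013-S1015 can be summarized in the short sketch below, following Equations 4-7 as given above; the function name and its default argument are illustrative.

```python
import numpy as np

K_PIXEL_PHY_DIS = 0.00005   # physical distance per pixel, meters
W = 800                     # ultrasound image width, pixels

def rotate_about_lesion(p_current, r_current, m_x, theta=0.03):
    """Steps S1013-S1015: recenter on the lesion, then rotate about tool Z."""
    tool_offset1 = np.array([0.0, (m_x - W / 2) * K_PIXEL_PHY_DIS, 0.0])  # Eq. 4
    p_next1 = p_current + r_current @ tool_offset1                        # Eq. 5
    c, s = np.cos(theta), np.sin(theta)
    rz = np.array([[c,  -s,  0.0],
                   [s,   c,  0.0],
                   [0.0, 0.0, 1.0]])                                      # Eq. 6
    r_next1 = r_current @ rz                                              # Eq. 7
    return p_next1, r_next1
```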
In the embodiments of the present application, step S101 performs the rotational movement centered on the lesion, so that the lesion remains at the center of the image throughout the rotation. This maintains the clarity and continuity of the lesion, improves the accuracy of lesion feature identification, and achieves multi-angle comprehensive scanning of the lesion, yielding richer lesion information and further improving the accuracy and reliability of diagnosis.
In the above step S102, the ultrasonic images and force sensor data are acquired in real time, and the total rotation angle is recorded and updated in real time.
Specifically, after the above step S101 is completed, an ultrasound image is acquired, and force sensor data F = (fx, fy, fz, tx, ty, tz) are acquired. The force sensor data consist of 6 components: the first three, fx, fy and fz, represent the forces applied to the probe in the X, Y and Z directions of the tool coordinate system, and the last three, tx, ty and tz, represent the moments, i.e., rotational forces, of the probe about the X, Y and Z directions of the tool coordinate system.
At the same time, the total rotation angle θtotal needs to be updated in real time, i.e., θtotal = θtotal + θ.
In step S103, the ultrasound image is subjected to image segmentation to obtain the lesion position. Specifically, this may include the following steps S1031 to S1032:
S1031, performing image segmentation on the ultrasonic image to obtain a lesion contour.
Specifically, image segmentation is performed on the ultrasound image using a preset image segmentation network to obtain the lesion contour.
Those skilled in the art may select an appropriate neural network and pre-train it according to the prior art to obtain the preset image segmentation network. The training process may specifically include:
First, collecting ultrasound images of thyroid lesions, annotating the lesion contour in each ultrasound image, and preprocessing the data to obtain a thyroid lesion contour dataset.
Second, selecting a suitable neural network model as the initial image segmentation network, such as a U-Net model or a SegNet model.
Third, dividing the thyroid lesion contour dataset into a training set and a test set.
Fourth, defining a loss function (e.g., cross-entropy loss or mean squared error loss), an optimization algorithm (e.g., Adam or SGD), and the like.
Fifth, training the initial image segmentation network with the training set of the thyroid lesion contour dataset to obtain a trained image segmentation model.
The training process is repeated until a preset condition is met, at which point training stops and the preset image segmentation network is obtained. The preset condition may be, for example, reaching a fixed number of iterations, the accuracy reaching a threshold, or the accuracy not changing within a preset number of iterations; no particular limitation is imposed here.
S1032, performing ellipse fitting on the lesion contour to obtain the center of the lesion ellipse as the lesion position.
Specifically, ellipse fitting is performed on the lesion contour using the fitEllipse function of the open-source computer vision library OpenCV (Open Source Computer Vision Library) to obtain the center of the lesion ellipse as the lesion position M = (mx, my), where mx and my represent the X-axis and Y-axis pixel coordinates in the ultrasound image coordinate system, respectively.
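A minimal OpenCV sketch of step S1032 follows. The binary `mask` input is assumed to come from the preset segmentation network, and selecting the largest contour is an illustrative choice, not stated in the embodiment.

```python
import cv2
import numpy as np

def lesion_position(mask):
    """Step S1032: ellipse-fit the lesion contour and return M = (mx, my)."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    contour = max(contours, key=cv2.contourArea)   # assume the largest blob is the lesion
    if len(contour) < 5:                           # cv2.fitEllipse needs >= 5 points
        return None
    (m_x, m_y), _axes, _angle = cv2.fitEllipse(contour)
    return m_x, m_y
```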
In the above step S104, whether the probe collides with the human body is judged according to the force sensor data; if so, step S105 is executed, and if not, step S108 is executed.
Specifically, fy may be extracted from the force sensor data F = (fx, fy, fz, tx, ty, tz), and it is judged whether the absolute value of fy is greater than a preset collision threshold e. If so, the probe has collided with the collarbone or chin of the human body, and step S105 is executed to perform the obstacle avoidance operation; if not, the probe has not collided with the collarbone or chin, and step S108 is executed to continue the probe rotation and acquisition process. The preset collision threshold e may illustratively be set to 2.0, in units of N.
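The collision test of step S104 reduces to a threshold on |fy|, as in this sketch; the 6-component force tuple layout follows step S102, and the function name is illustrative.

```python
E = 2.0   # preset collision threshold e, newtons

def in_collision(force):
    """Step S104: force = (fx, fy, fz, tx, ty, tz); collision when |fy| > e."""
    return abs(force[1]) > E
```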
In the above step S105, a translational obstacle avoidance offset is calculated according to the force sensor data and the lesion position, and it is judged whether the translational obstacle avoidance offset is equal to 0; if so, step S106 is executed, and if not, step S107 is executed. Specifically, this includes the following steps S1051-S1055:
S1051, determining the contact direction between the probe and the human body according to the force sensor data to obtain an obstacle direction coefficient.
Specifically, the force fy in the Y-axis direction of the tool coordinate system may be extracted from the force sensor data F = (fx, fy, fz, tx, ty, tz). When fy ≥ e, the probe has collided with the collarbone and needs to translate toward the upper part of the human body to avoid the obstacle, and the obstacle direction coefficient obstacleDir is set to 1; when fy ≤ −e, the probe has collided with the chin and needs to translate toward the lower part of the human body to avoid the obstacle, and the obstacle direction coefficient obstacleDir is set to −1.
To facilitate understanding by those skilled in the art, the two collision situations described above are shown in fig. 5: the left side shows the probe colliding with the collarbone, requiring translation along the tool coordinate system Y axis toward the upper part of the human body, and the right side shows the probe colliding with the chin, requiring translation along the tool coordinate system Y axis toward the lower part of the human body.
S1052, calculating a translational obstacle avoidance deviation pixel value according to the obstacle direction coefficient and the lesion position.
Specifically, according to the obstacle direction coefficient and the lesion position, the translational obstacle avoidance deviation pixel value may be calculated based on the following Equation 8:
Δm2 = (obstacleDir + 1) · W/2 − mx    (Equation 8)
where Δm2 represents the translational obstacle avoidance deviation pixel value, mx represents the X-axis pixel coordinate of the lesion position in the ultrasound image coordinate system, W represents the width value of the ultrasound image, and obstacleDir is the obstacle direction coefficient. W may illustratively be set to 800.
S1053, judging whether the translational obstacle avoidance deviation pixel value is greater than the minimum deviation pixel threshold; if so, executing step S1054, and if not, executing step S1055.
Specifically, it may be judged whether the absolute value |Δm2| of the translational obstacle avoidance deviation pixel value is greater than the minimum deviation pixel threshold ε. If so, the probe position permits translational obstacle avoidance without losing the lesion, and step S1054 is executed to calculate the translational obstacle avoidance offset; if not, obstacle avoidance along the Y-axis direction of the tool coordinate system might lose the lesion, so step S1055 is executed to set the translational obstacle avoidance offset to 0, after which the rotational obstacle avoidance operation of step S106 is performed. The minimum deviation pixel threshold may illustratively be set to 10.
S1054, calculating the translational obstacle avoidance offset according to the translational obstacle avoidance deviation pixel value and the force sensor data.
Specifically, according to the translational obstacle avoidance deviation pixel value and the force applied to the probe in the Y-axis direction of the tool coordinate system, the translational obstacle avoidance offset is calculated based on the following Equation 9:
pyoffset2 = km · Δm2 + kf · fy    (Equation 9)
where pyoffset2 is the translational obstacle avoidance offset, Δm2 is the translational obstacle avoidance deviation pixel value, fy is the force applied to the probe in the Y-axis direction of the tool coordinate system, km is a preset pixel offset coefficient, and kf is a preset force offset coefficient. km may illustratively be set to 0.00002.
S1055, setting the translational obstacle avoidance offset to 0.
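Steps S1051-S1055 combine into the sketch below, following Equations 8 and 9 as given above. The value of kf is an assumption, since the embodiment does not state one, and the function names are illustrative.

```python
W = 800          # ultrasound image width, pixels
EPSILON = 10     # minimum deviation pixel threshold
K_M = 0.00002    # preset pixel offset coefficient km
K_F = 0.0001     # preset force offset coefficient kf (assumed value)
E = 2.0          # preset collision threshold e, newtons

def obstacle_direction(fy):
    """Step S1051: +1 for collarbone contact (fy >= e), -1 for chin (fy <= -e)."""
    return 1 if fy >= E else -1

def translation_offset(fy, m_x):
    """Steps S1052-S1055: translational avoidance offset, or 0 when translating
    further would lose the lesion (which triggers rotational avoidance, S106)."""
    dm2 = (obstacle_direction(fy) + 1) * W / 2 - m_x   # Equation 8
    if abs(dm2) <= EPSILON:                            # S1053 / S1055
        return 0.0
    return K_M * dm2 + K_F * fy                        # Equation 9 (S1054)
```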
In the above step S106, a rotational obstacle avoidance offset is calculated according to the force sensor data, and the probe is controlled to rotate about the X axis of the tool coordinate system according to the rotational obstacle avoidance offset until the probe is no longer in collision with the human body, after which the probe rotation and acquisition process of steps S101-S104 is re-executed. Specifically, this includes the following steps S1061-S1066:
S1061, determining the contact direction between the probe and the human body according to the force sensor data to obtain the obstacle direction coefficient.
Specifically, the force fy in the Y-axis direction of the tool coordinate system may be extracted from the force sensor data F = (fx, fy, fz, tx, ty, tz). When fy ≥ e, the probe has collided with the collarbone, and the obstacle direction coefficient obstacleDir is set to 1; when fy ≤ −e, the probe has collided with the chin, and the obstacle direction coefficient obstacleDir is set to −1.
To facilitate understanding by those skilled in the art, the rotational obstacle avoidance corresponding to the two collision situations is shown in fig. 6: the left side shows the probe colliding with the collarbone, requiring clockwise rotation about the tool coordinate system X axis to avoid the obstacle, and the right side shows the probe colliding with the chin, requiring counterclockwise rotation about the tool coordinate system X axis to avoid the obstacle.
S1062, calculating a second tool coordinate system position offset according to the obstacle direction coefficient, and calculating a second next probe position in combination with the acquired current probe position and current probe posture of the probe.
Specifically, the second tool coordinate system position offset may be calculated from the obstacle direction coefficient based on the following Equation 10:
toolOffset2 = [0, obstacleDir · Wtool, 0]^T    (Equation 10)
where toolOffset2 denotes the second tool coordinate system position offset, obstacleDir is the obstacle direction coefficient, and Wtool is the length of the ultrasonic probe tip, which may illustratively be set to 0.04 m.
According to the second tool coordinate system position offset, the current probe position and the current probe posture, the second next probe position is calculated based on the following Equation 11:
Pnext2 = Pcurrent + Rcurrent · toolOffset2    (Equation 11)
where Pnext2 denotes the second next probe position, Pcurrent denotes the current probe position, Rcurrent denotes the current probe posture, and toolOffset2 denotes the second tool coordinate system position offset.
S1063, calculating the rotation angle about the X axis according to the obstacle direction coefficient.
Specifically, the rotation angle about the X axis may be calculated from the obstacle direction coefficient based on the following Equation 12:
α = obstacleDir × (α0 − kt · tx)    (Equation 12)
where α represents the rotation angle about the X axis, obstacleDir represents the obstacle direction coefficient, α0 represents a preset initial rotation angle, kt represents a moment deviation coefficient, and tx represents the moment of the probe about the X-axis direction of the tool coordinate system in the force sensor data. α0 may illustratively be set to 0.05 radians, and kt may illustratively be set to 0.1.
S1064, calculating a rotational obstacle avoidance offset according to the rotation angle about the X axis, and calculating a second next probe posture in combination with the current probe posture.
Specifically, the rotational obstacle avoidance offset may be calculated from the rotation angle about the X axis based on the following Equation 13:
Rx(α) =
[ 1   0        0       ]
[ 0   cos α   −sin α   ]
[ 0   sin α    cos α   ]    (Equation 13)
where Rx(α) represents the rotational obstacle avoidance offset and α represents the rotation angle about the X axis.
The rotational obstacle avoidance offset is multiplied with the current probe posture, and the second next probe posture is calculated based on the following Equation 14:
Rnext2 = Rcurrent · Rx(α)    (Equation 14)
where Rnext2 represents the second next probe posture, Rx(α) represents the rotational obstacle avoidance offset, and Rcurrent represents the current probe posture.
S1065, controlling the probe to rotate about the X axis according to the second next probe position and the second next probe posture, and collecting new force sensor data in real time.
S1066, if it is determined from the new force sensor data that the probe still collides with the human body, re-executing the process of calculating the second next probe position and second next probe posture and rotating, as described in steps S1061-S1065, until the probe is no longer in collision with the human body, and then re-executing the probe rotation and acquisition process of steps S101-S104.
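One iteration of steps S1061-S1065 is sketched below, following Equations 10-14 as given above and the parameter values from the embodiment; the function name and argument layout are illustrative.

```python
import numpy as np

W_TOOL = 0.04   # ultrasonic probe tip length, meters
ALPHA0 = 0.05   # preset initial rotation angle, radians
K_T = 0.1       # moment deviation coefficient kt
E = 2.0         # preset collision threshold e, newtons

def rotation_avoid_step(p_current, r_current, fy, tx):
    """Steps S1061-S1065: one rotational avoidance step about the tool X axis."""
    d = 1 if fy >= E else -1                                            # S1061
    p_next2 = p_current + r_current @ np.array([0.0, d * W_TOOL, 0.0])  # Eq. 10-11
    alpha = d * (ALPHA0 - K_T * tx)                                     # Eq. 12
    c, s = np.cos(alpha), np.sin(alpha)
    rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, c,  -s],
                   [0.0, s,   c]])                                      # Eq. 13
    return p_next2, r_current @ rx                                      # Eq. 14
```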
In the embodiments of the present application, when translational obstacle avoidance along the Y axis is insufficient to resolve the collision, the rotational obstacle avoidance operation of step S106 avoids the obstacle by rotating about the X axis, ensuring that the probe safely avoids the obstacle without losing the lesion. This allows safe lesion scanning from multiple angles, improves the stability and clarity of the lesion ultrasound image, and enhances the flexibility and adaptability of the method.
In the above step S107, the probe is controlled to move according to the translational obstacle avoidance offset, and the ultrasonic image acquisition and collision judgment process of steps S102-S104 is re-executed.
Specifically, according to the translational obstacle avoidance offset and the current probe position, the third next probe position is calculated based on the following Equation 15:
Pnext3 = Pcurrent + Rcurrent · [0, pyoffset2, 0]^T    (Equation 15)
where Pnext3 represents the third next probe position, pyoffset2 represents the translational obstacle avoidance offset, and Pcurrent represents the current probe position.
The probe does not need to be rotated in this step, so the third next probe posture is equal to the current probe posture, i.e., Rnext3 = Rcurrent.
The probe is controlled to move along the Y axis according to the third next probe position and the third next probe posture to complete translational obstacle avoidance, after which the ultrasonic image acquisition and collision judgment process of steps S102-S104 is re-executed.
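For completeness, Equation 15 and the unchanged posture of step S107 reduce to the sketch below; mapping the tool-frame Y offset into world coordinates through Rcurrent is an assumption consistent with Equations 3 and 5.

```python
import numpy as np

def translate_avoid(p_current, r_current, py_offset2):
    """Step S107 / Equation 15: move along tool Y; posture stays Rcurrent."""
    p_next3 = p_current + r_current @ np.array([0.0, py_offset2, 0.0])
    return p_next3, r_current
```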
In the embodiments of the present application, the translational obstacle avoidance operation of steps S105 and S107 ensures that, when the ultrasonic probe encounters a collision with the collarbone or the chin along the Y-axis direction, it can safely avoid the obstacle by translating along the Y axis without losing the lesion. This avoids the discomfort caused by the collision while maintaining continuous tracking of the lesion and preserving the stability and clarity of the lesion image. At the same time, the force in the Y-axis direction is monitored in real time and the avoidance speed is adjusted dynamically according to the force, making the avoidance process faster and more efficient, improving the robustness and adaptability of the system, and ensuring the successful completion of the lesion scanning task.
In the above step S108, the probe rotation and acquisition process of steps S101-S107 is re-executed until the total rotation angle reaches the preset angle threshold, and it is determined that the probe has completed autonomous scanning of the thyroid lesion.
Specifically, the probe rotation and acquisition process of steps S101-S107 may be executed in a loop until the total rotation angle reaches the preset angle threshold π/2, i.e., 90°, at which point the probe has turned from the transverse state to the longitudinal state and is determined to have completed autonomous scanning of the thyroid lesion.
Example 2
Based on the same inventive concept, an embodiment of the invention further provides an apparatus for autonomous scanning of thyroid lesions by an ultrasonic robot, which includes:
a first rotation module 101, configured to control the probe, at the initial position, to rotate about the Z axis of the tool coordinate system in the target direction by a preset rotation angle;
a first acquisition module 102, configured to acquire ultrasonic images and force sensor data in real time, and to record and update the total rotation angle in real time;
a first segmentation module 103, configured to perform image segmentation on the ultrasonic image to obtain the lesion position;
a first judgment module 104, configured to judge whether the probe collides with the human body according to the force sensor data; if so, the translational obstacle avoidance module is executed, and if not, the second rotation module is executed;
a translational obstacle avoidance module 105, configured to calculate a translational obstacle avoidance offset according to the force sensor data and the lesion position when the probe collides with the human body, and to judge whether the translational obstacle avoidance offset is equal to 0;
a rotational obstacle avoidance module 106, configured to calculate a rotational obstacle avoidance offset according to the force sensor data when the translational obstacle avoidance offset is equal to 0, to control the probe to rotate about the X axis of the tool coordinate system according to the rotational obstacle avoidance offset until the probe is no longer in collision with the human body, and to re-execute the probe rotation and acquisition process;
a first movement module 107, configured to control the probe to move along the Y axis of the tool coordinate system according to the translational obstacle avoidance offset when the translational obstacle avoidance offset is not equal to 0, and to re-execute the ultrasonic image acquisition and collision judgment process;
and a second rotation module 108, configured to re-execute the above probe rotation and acquisition process when the probe does not collide with the human body, until the total rotation angle reaches the preset angle threshold, and to determine that the probe has completed autonomous scanning of the thyroid lesion.
Example 3
Based on the same inventive concept, an embodiment of the present invention further provides a computer-readable storage medium having stored thereon a computer program/instruction which, when executed by a processor, implements the method for autonomous scanning of thyroid lesions by an ultrasonic robot described in Example 1 above.
Example 4
Based on the same inventive concept, an embodiment of the present invention further provides a computer program product comprising a computer program/instruction which, when executed by a processor, implements the method for autonomous scanning of thyroid lesions by an ultrasonic robot described in Example 1 above.
Example 5
Based on the same inventive concept, an embodiment of the present invention further provides a computer device, including a memory, a processor, and a computer program stored on the memory, wherein the processor, when executing the computer program, implements the method for autonomous scanning of thyroid lesions by an ultrasonic robot described in Example 1 above.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, magnetic disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.