CN118252529A - Ultrasonic scanning method, device and system, electronic equipment and storage medium - Google Patents

Ultrasonic scanning method, device and system, electronic equipment and storage medium

Info

Publication number
CN118252529A
CN118252529A (application CN202211692689.XA)
Authority
CN
China
Prior art keywords
image
key point
probe
ultrasonic probe
coordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211692689.XA
Other languages
Chinese (zh)
Inventor
王长成
周国义
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Opening Of Biomedical Technology Wuhan Co ltd
Original Assignee
Opening Of Biomedical Technology Wuhan Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Opening Of Biomedical Technology Wuhan Co ltd
Priority to CN202211692689.XA
Priority to PCT/CN2023/142144
Publication of CN118252529A
Legal status: Pending

Abstract

An embodiment of the invention provides an ultrasound scanning method and apparatus, an electronic device, and a storage medium. The method comprises the following steps: acquiring at least one image to be detected, wherein a first part of the at least one image to be detected contains an ultrasound probe and a second part contains a target examination region of a subject to be examined; performing target detection based on the first part of the images to determine initial probe coordinates of the ultrasound probe; performing pose estimation based on the second part of the images to determine keypoint coordinates of at least one key point within the target examination region; planning a motion trajectory along which the ultrasound probe scans the at least one key point, based on the initial probe coordinates and the keypoint coordinates; and controlling the ultrasound probe to move in sequence to the position of each key point along the planned trajectory, so as to scan the body part to which each key point belongs. The scheme requires no complex operation by the user and can, to a certain extent, free the user's hands.

Description

Ultrasonic scanning method, device and system, electronic equipment and storage medium
Technical Field
The present invention relates to the field of ultrasound imaging technology, and in particular, to an ultrasound scanning method, an ultrasound scanning apparatus, an electronic device, an ultrasound scanning system, and a storage medium.
Background
Ultrasound scanning is a simple and effective means of medical examination and has in recent years been widely applied in medical diagnosis. Currently, ultrasound scanning is performed mainly by a doctor holding the ultrasound probe by hand. This is repetitive and tedious work for the physician, and prolonged scanning may cause conditions such as arthritis, affecting the physician's health and working efficiency. In addition, because scanning technique varies from doctor to doctor, it is difficult to guarantee that the ultrasound images obtained by such scanning are standard.
Therefore, a new ultrasound scanning method is needed to solve the above-mentioned problems.
Disclosure of Invention
In order to at least partially solve the problems in the prior art, an ultrasound scanning method, an ultrasound scanning apparatus, an electronic device, an ultrasound scanning system, and a storage medium are provided.
According to one aspect of the present invention, there is provided an ultrasound scanning method comprising: acquiring at least one image to be detected, wherein a first part of the at least one image to be detected contains an ultrasound probe and a second part contains a target examination region of a subject to be examined, the first part and the second part each being at least some of the at least one image to be detected; performing target detection based on the first part of the images to determine initial probe coordinates of the ultrasound probe, the initial probe coordinates being two-dimensional or three-dimensional coordinates; performing pose estimation based on the second part of the images to determine keypoint coordinates of at least one key point within the target examination region, the keypoint coordinates being two-dimensional or three-dimensional coordinates; planning a motion trajectory along which the ultrasound probe scans the at least one key point, based on the initial probe coordinates of the ultrasound probe and the keypoint coordinates of the at least one key point; and controlling the ultrasound probe to move in sequence to the position of each key point along the planned trajectory, so as to scan the body part to which each key point belongs.
Illustratively, in the process of controlling the ultrasound probe to move in sequence to the position of each key point along the planned trajectory so as to scan the part to which each key point belongs, the method further comprises: when the ultrasound probe reaches the position of any key point, performing a corresponding scanning feedback operation based on the part to which that key point belongs, the scanning feedback operation comprising one or more of the following: displaying a diagnostic measurement interface containing at least one first measurement item corresponding to the part to which the key point belongs; measuring at least one second measurement item corresponding to the part to which the key point belongs; in response to a user-input measurement instruction for at least one third measurement item corresponding to the part to which the key point belongs, measuring the at least one third measurement item; and displaying a standard human model and highlighting on it the body region corresponding to the part to which the key point belongs.
Illustratively, the at least one second measurement item comprises identification of a standard section (standard scan plane) and measurement on the standard section, both performed based on the ultrasound image obtained by scanning the part to which the key point belongs. This comprises: performing standard-section identification on the ultrasound image in real time to judge whether the ultrasound image belongs to one of at least one preset type of standard section; where the ultrasound image belongs to a standard section of a particular preset type, performing image segmentation on the ultrasound image to segment out at least one target structure, the at least one target structure being a structure associated with that type of standard section; and, based on the segmentation result of the at least one target structure, measuring at least one sub-measurement item, the at least one sub-measurement item being related to the at least one target structure.
Illustratively, measuring the at least one third measurement item in response to a user-input measurement instruction comprises: in response to a selection operation performed by the user on one or more first measurement items shown on the diagnostic measurement interface, measuring the at least one third measurement item, wherein the at least one third measurement item is the selected one or more first measurement items and the measurement instruction is the selection operation.
Illustratively, in the process of controlling the ultrasound probe to move in sequence to the position of each key point along the planned trajectory, the method further comprises: acquiring resistance information detected by a force sensor arranged at the end of the ultrasound probe that contacts the subject; and, when the ultrasound probe reaches the position of any key point, adjusting the motion trajectory of the ultrasound probe according to the corresponding resistance information.
Illustratively, adjusting the motion trajectory of the ultrasound probe according to the corresponding resistance information comprises: when the resistance information exceeds a preset resistance threshold, controlling the ultrasound probe to move in the direction opposite to its current direction of motion, thereby adjusting its motion trajectory.
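The threshold-and-reverse feedback rule above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation; the threshold value and function names are assumptions.

```python
import numpy as np

RESISTANCE_THRESHOLD = 5.0  # newtons; illustrative value only

def adjust_motion(current_direction: np.ndarray, resistance: float) -> np.ndarray:
    """Return the adjusted motion direction for the probe.

    If the measured resistance exceeds the preset threshold, reverse the
    current motion direction (back the probe off); otherwise keep it.
    """
    if resistance > RESISTANCE_THRESHOLD:
        return -current_direction
    return current_direction

# Example: a probe pressing too hard toward the patient is backed off.
direction = np.array([0.0, 0.0, -1.0])   # moving toward the patient
adjusted = adjust_motion(direction, resistance=7.2)
```

In a real system the reversal would be issued as a velocity command to the robotic arm; here it is reduced to flipping the direction vector.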
Illustratively, one of the initial probe coordinates of the ultrasound probe and the keypoint coordinates of the at least one key point is expressed in the image coordinate system and the other in the world coordinate system, and planning the motion trajectory based on them comprises: performing coordinate conversion on the initial probe coordinates and/or the keypoint coordinates based on the conversion relation between the image coordinate system and the world coordinate system, so as to unify them in either the world coordinate system or the image coordinate system; where they are unified in the world coordinate system, determining the distance between the ultrasound probe and the at least one key point in the world coordinate system from their three-dimensional world coordinates; where they are unified in the image coordinate system, determining the distance between the ultrasound probe and the at least one key point in the image coordinate system from their two-dimensional image coordinates, and then determining the corresponding distance in the world coordinate system from that distance and the conversion relation; and planning the motion trajectory based on the distances between the ultrasound probe and the at least one key point in the world coordinate system.
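The patent does not fix a particular image-to-world conversion; a common choice is the pinhole camera model, where a pixel with known depth is back-projected through the camera intrinsics and then moved into world coordinates by the camera's extrinsic pose. The sketch below assumes that model; `K`, `R`, and `t` are illustrative calibration parameters.

```python
import numpy as np

def pixel_to_world(u: float, v: float, depth: float,
                   K: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Back-project pixel (u, v) with known depth into world coordinates.

    K is the 3x3 camera intrinsic matrix; R, t are the camera-to-world
    rotation and translation. This is the standard pinhole relation,
    offered here as one way to realize the 'conversion relation' in the text.
    """
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # normalized ray
    cam = depth * ray                               # point in camera frame
    return R @ cam + t                              # point in world frame

def distance(p, q) -> float:
    """Euclidean distance between two 3-D points, used for trajectory planning."""
    return float(np.linalg.norm(np.asarray(p, dtype=float) - np.asarray(q, dtype=float)))
```

Once probe and key points share one frame, the per-keypoint distances feed directly into the trajectory planner.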
Illustratively, the first part of the images to be detected comprises at least one two-dimensional image, and performing target detection based on the first part of the images to determine the initial probe coordinates of the ultrasound probe comprises: inputting any of the at least one two-dimensional image into a target detection model to obtain probe position information of the ultrasound probe; and determining the initial probe coordinates, which are two-dimensional coordinates, based on the probe position information.
Illustratively, the probe position information comprises a target detection box indicating the position of the ultrasound probe, and determining the initial probe coordinates of the ultrasound probe based on the probe position information comprises: taking the coordinates of the centroid of the target detection box as the initial probe coordinates.
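For an axis-aligned rectangular detection box, the centroid is simply the midpoint of its two corners; a minimal sketch, assuming the common `(x1, y1, x2, y2)` box convention:

```python
def box_centroid(box):
    """Centroid of an axis-aligned detection box (x1, y1, x2, y2).

    Taking the center of the probe's detection box as the initial
    probe coordinate reduces, for a rectangle, to averaging corners.
    """
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
```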
Illustratively, the second part of the images to be detected comprises at least one three-dimensional depth image, and performing pose estimation based on the second part of the images to determine the keypoint coordinates of at least one key point within the target examination region comprises: inputting any of the at least one three-dimensional depth image into a three-dimensional pose estimation model to obtain the keypoint coordinates, which are three-dimensional coordinates, of the at least one key point.
Illustratively, in the process of controlling the ultrasound probe to move in sequence to the position of each key point along the planned trajectory, the method further comprises: acquiring mask information of the subject and acquiring images in real time, the real-time images being images of the ultrasound probe captured while it moves; performing target detection based on the real-time images to determine real-time probe coordinates of the ultrasound probe, the real-time probe coordinates being two-dimensional or three-dimensional coordinates; determining, based on the real-time probe coordinates and the mask information, whether the ultrasound probe lies on the subject; and outputting corresponding prompt information when the ultrasound probe does not lie on the subject and/or when it does.
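The on-subject check reduces to testing whether the probe's image coordinate falls inside the subject's mask. A minimal sketch, assuming the mask is a boolean image array (True where the subject is):

```python
import numpy as np

def probe_on_subject(probe_xy, mask: np.ndarray) -> bool:
    """Check whether the probe's 2-D image coordinate lies on the subject.

    probe_xy is an (x, y) pixel coordinate from real-time detection;
    mask is a boolean H x W array, True where the subject is. Coordinates
    outside the image count as 'not on the subject'.
    """
    x, y = int(round(probe_xy[0])), int(round(probe_xy[1]))
    h, w = mask.shape
    if not (0 <= x < w and 0 <= y < h):
        return False          # probe left the camera's field of view
    return bool(mask[y, x])   # note: row index is y, column index is x
```

The boolean result would then gate the prompt message described above.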
According to another aspect of the present invention, there is also provided an ultrasound scanning apparatus comprising an acquisition module, a detection module, an estimation module, a planning module, and a control module. The acquisition module is configured to acquire at least one image to be detected, wherein a first part of the at least one image to be detected contains an ultrasound probe and a second part contains a target examination region of a subject, the first part and the second part each being at least some of the at least one image to be detected. The detection module is configured to perform target detection based on the first part of the images to determine initial probe coordinates of the ultrasound probe, the initial probe coordinates being two-dimensional or three-dimensional coordinates. The estimation module is configured to perform pose estimation based on the second part of the images to determine keypoint coordinates of at least one key point within the target examination region, the keypoint coordinates being two-dimensional or three-dimensional coordinates. The planning module is configured to plan a motion trajectory along which the ultrasound probe scans the at least one key point, based on the initial probe coordinates and the keypoint coordinates. The control module is configured to control the ultrasound probe to move in sequence to the position of each key point along the planned trajectory, so as to scan the part to which each key point belongs.
According to still another aspect of the present invention, there is also provided an electronic device including a processor and a memory, the memory storing a computer program, the processor executing the computer program to implement the above-mentioned ultrasound scanning method.
The electronic device is illustratively an ultrasonic diagnostic device or an ultrasonic workstation.
According to yet another aspect of the present invention, there is also provided an ultrasound scanning system comprising: a robotic arm, with an ultrasound probe mounted at its distal end; at least one image acquisition device for acquiring at least one image to be detected; and an electronic device whose processor is connected to the at least one image acquisition device and the robotic arm and is configured to execute the above ultrasound scanning method based on the at least one image to be detected, wherein the processor controls the robotic arm to drive the ultrasound probe by sending control instructions to the robotic arm.
Illustratively, the at least one image acquisition device comprises a depth camera and/or a color camera, and the at least one image to be measured comprises at least one three-dimensional depth image acquired by the depth camera and/or at least one two-dimensional image acquired by the color camera.
According to still another aspect of the present invention, there is also provided a storage medium storing a computer program/instruction which, when executed by a processor, implements the above-described ultrasound scanning method.
According to the ultrasound scanning method, apparatus, electronic device, system, and storage medium above, the initial probe coordinates of the ultrasound probe and the keypoint coordinates of at least one key point can be determined automatically from the images to be detected, a motion trajectory can be planned from these coordinates, and the ultrasound probe can be controlled to scan along that trajectory. Because the scheme detects both the probe and the key points, positioning accuracy is high and the planned trajectory is more precise. In addition, driving the ultrasound probe along a planned trajectory improves how standard the resulting ultrasound images are. Finally, the scheme requires no complex operation by the user and can, to a certain extent, free the user's hands.
The foregoing is only an overview of the technical solutions of the present invention. So that the technical means of the invention may be more clearly understood and implemented, and so that its above and other objects, features, and advantages may be more readily apparent, specific embodiments of the invention are set forth below.
Drawings
The above and other objects, features, and advantages of the present invention will become more apparent from the following detailed description of embodiments of the present invention, taken in conjunction with the accompanying drawings. The accompanying drawings are included to provide a further understanding of embodiments of the invention; they constitute a part of this specification, serve to explain the invention together with its embodiments, and do not limit the invention. In the drawings, like reference numerals generally denote like parts or steps.
FIG. 1 shows a schematic flow chart of an ultrasound scanning method according to one embodiment of the invention;
FIG. 2 shows a schematic diagram of highlighting a human body region according to an embodiment of the present invention;
FIG. 3 shows a schematic block diagram of an ultrasound scanning apparatus according to one embodiment of the invention; and
Fig. 4 shows a schematic block diagram of an electronic device according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, exemplary embodiments according to the present invention will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some embodiments of the present invention and not all embodiments of the present invention, and it should be understood that the present invention is not limited by the example embodiments described herein. All other embodiments, which can be derived by a person skilled in the art without any inventive effort, based on the embodiments described in the present invention shall fall within the scope of protection of the present invention.
In order to at least partially solve the above technical problems, an embodiment of the present invention provides an ultrasound scanning method. Fig. 1 shows a schematic diagram of an ultrasound scanning method 100 according to one embodiment of the invention. As shown in fig. 1, the ultrasonic scanning method 100 may include the following steps S110, S120, S130, S140, and S150.
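Viewed end to end, the five steps can be sketched as a simple pipeline. All component names below (`detector`, `pose_estimator`, `planner`, `robot`) are illustrative placeholders standing in for the target-detection model, pose-estimation model, trajectory planner, and robotic-arm controller described in the embodiments, not APIs defined by the patent:

```python
def ultrasound_scan(images, detector, pose_estimator, planner, robot):
    """Skeleton of steps S110-S150; every component is a placeholder."""
    probe_images, body_images = images          # S110: acquire images
    probe_xy = detector(probe_images)           # S120: locate the probe
    keypoints = pose_estimator(body_images)     # S130: locate key points
    trajectory = planner(probe_xy, keypoints)   # S140: plan the trajectory
    for target in trajectory:                   # S150: visit and scan
        robot.move_to(target)
        robot.scan()
```

With stub components, the skeleton visits each planned key point in order, moving and then scanning at each one.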
In step S110, at least one image to be detected is acquired, wherein a first part of the at least one image to be detected contains an ultrasound probe and a second part contains a target examination region of the subject to be examined, the first part and the second part each being at least some of the at least one image to be detected.
The object to be measured may be any object including, but not limited to, a human or animal, etc. The target examination region of the object to be examined may be at least a partial region of the object to be examined, such as the whole body of the object to be examined, or one or more parts of the object to be examined, such as a uterine part or the like.
For example, an ultrasound scanning system may include an ultrasound probe and at least one image acquisition device. The at least one image acquisition device may comprise a depth camera and/or a color camera. At least one image to be measured including a target examination region of an object to be measured and an ultrasound probe may be acquired using at least one image acquisition device. In any of the images to be tested, the ultrasound probe and the target examination region may exist at the same time, or only one of them may exist. That is, the first portion of the image under test and the second portion of the image under test described herein may all be the same, partially the same, or all different.
In one embodiment, a color camera may be used alone to acquire the images to be detected, which are then two-dimensional images; there may be one or more of them. By way of example and not limitation, for the first part of the images, each image may contain the ultrasound probe, and the position of the ultrasound probe may be identified from each image; for the second part of the images, each image may contain the target examination region, and the positions of the key points within it may be identified from each image. In another embodiment, a depth camera may be used alone to acquire the images to be detected, which are then three-dimensional depth images; again there may be one or more of them, and, as above, the probe position may be identified from the first part of the images and the keypoint positions from the second part. In yet another embodiment, a color camera and a depth camera may each acquire images, yielding a plurality of images to be detected. By way of example and not limitation, the one or more images acquired by the color camera may contain at least the ultrasound probe, and the one or more images acquired by the depth camera may contain at least the target examination region.
In this case, the position (e.g., two-dimensional coordinates) of the ultrasound probe may be determined based on a first portion of the image to be measured acquired by the color camera, and the position (e.g., three-dimensional coordinates) of at least one key point in the target examination region may be determined based on a second portion of the image to be measured acquired by the depth camera.
In step S120, target detection is performed based on the first portion of the image to be detected to determine an initial probe coordinate of the ultrasound probe, where the initial probe coordinate is a two-dimensional coordinate or a three-dimensional coordinate.
For example, a target detection model may be used to perform target detection on the first part of the images to be detected. Where all of the at least one image to be detected is a two-dimensional image or a three-dimensional depth image, target detection may be performed directly on all of the images (i.e., all of them constitute the first part), or, alternatively, on only some of them (i.e., that subset constitutes the first part). The first part of the images may be those remaining after filtering out abnormal images, such as those with glare or blur, from the at least one image to be detected. The target detection model may be implemented with any suitable neural network model, such as any suitable existing or future target detection network. For example, it may include one or more of the You Only Look Once (YOLO) series, the Region-based Convolutional Neural Network (R-CNN) series, RetinaNet, and similar networks. Of course, these are merely examples; the model may also be implemented with any suitable existing or future image segmentation network, for example one or more of Fully Convolutional Networks (FCN), U-Net, the DeepLab series, V-Net, and similar networks.
It will be appreciated that an image segmentation network model can also identify the location of the target object (e.g., the ultrasound probe described above). A target detection network model yields a target detection box indicating the position of the ultrasound probe; the box may be any suitable shape and is preferably rectangular. An image segmentation network model instead yields a mask or envelope indicating the position of the ultrasound probe.
Based on the target detection result, initial probe coordinates of the ultrasound probe may be determined. For the image to be detected acquired by the depth camera, the three-dimensional coordinates of the ultrasonic probe can be obtained based on the corresponding target detection result. For the image to be detected acquired by the color camera, the two-dimensional coordinates of the ultrasonic probe can be obtained based on the corresponding target detection result.
The target detection model employed in step S120 may be obtained by training on a first training data set. The first training data set may include a plurality of first sample images and first annotation information (ground truth) in one-to-one correspondence with them. Each first sample image may contain an ultrasound probe, and the first annotation information may include an annotated target detection box indicating the position of the ultrasound probe. The first sample images are each input into an initial target detection model to obtain corresponding predicted target detection results, each including a predicted target detection box indicating the position of the ultrasound probe. The initial target detection model has the same network structure as the model used in step S120, but its parameters may differ; the model of step S120 is obtained after the parameters of the initial model have been trained. The predicted detection results and the first annotation information of the first sample images are substituted into a first preset loss function to compute a first loss value. Based on the first loss value, the parameters of the initial target detection model are then optimized using back-propagation and gradient descent algorithms. Parameter optimization is iterated until the target detection model converges. Once training is complete, the resulting target detection model can be used for subsequent target detection; this stage may be called the model's inference stage.
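The training procedure described above (predict, compute a loss against the annotations, back-propagate, update parameters, repeat until convergence) has a familiar structure. As a toy sketch of that loop, the snippet below fits a linear "model" by gradient descent; the real procedure would use a detection network and automatic differentiation, so everything here is a stand-in:

```python
import numpy as np

def train_step(params, x, target, lr=0.1):
    """One optimization step on a toy linear model y = params @ x.

    Mirrors the loop in the text: run the model, compute a squared-error
    loss against the annotation, and step along the negative gradient.
    """
    pred = params @ x
    loss = np.sum((pred - target) ** 2)       # loss against the annotation
    grad = 2.0 * np.outer(pred - target, x)   # analytic gradient of the loss
    return params - lr * grad, loss

params = np.zeros((1, 2))                     # initial (untrained) parameters
x, target = np.array([1.0, 2.0]), np.array([3.0])
for _ in range(100):                          # iterate until converged
    params, loss = train_step(params, x, target)
```

After convergence the fitted parameters reproduce the target, just as the trained detection model is expected to reproduce the annotated boxes.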
In step S130, pose estimation is performed based on the second part of the images to be detected to determine the keypoint coordinates of at least one key point within the target examination region, the keypoint coordinates being two-dimensional or three-dimensional coordinates.
For example, for the second part of the images to be detected obtained above, a pose estimation model may be used to perform pose estimation and so determine the position of at least one key point. Where all of the at least one image to be detected is a two-dimensional image or a three-dimensional depth image, pose estimation may be performed directly on all of the images (i.e., all of them constitute the second part), or, alternatively, on only some of them (i.e., that subset constitutes the second part). The second part of the images may be those remaining after filtering out abnormal images, such as those with glare or blur, from the at least one image to be detected. The pose estimation model may be implemented with any suitable neural network model, such as any suitable existing or future pose estimation network, for example one or more of the stacked hourglass network (Hourglass), C2F-Vol, and similar networks. From the model's pose estimation result, two-dimensional or three-dimensional coordinates of at least one key point within the target examination region may be obtained. Where the subject is a human, the at least one key point may include human-body key points such as the nose, eyes, ears, and knees. The number of key points may be set as needed, and the invention is not limited in this respect; for example, it may be 12, 20, or 32.
For the image to be detected acquired by the depth camera, the three-dimensional coordinates of the key points can be obtained. For the image to be measured acquired by the color camera, two-dimensional coordinates of the key points can be obtained.
The pose estimation model described above may be obtained by training on a second training dataset. The second training dataset may include a plurality of second sample images and second annotation information corresponding one-to-one to the second sample images. The second annotation information may include the annotated two-dimensional or three-dimensional coordinates of each human body key point in the corresponding second sample image. The second sample images are respectively input into the initial pose estimation model to obtain corresponding predicted pose estimation results. In the case where the sample images are two-dimensional images, the predicted pose estimation results are the predicted two-dimensional coordinates of the key points; it will be appreciated that the pose estimation model in this case is a model for processing two-dimensional images, so the images input at the inference stage are also two-dimensional images. Similarly, in the case where the sample images are three-dimensional depth images, the predicted pose estimation results are the predicted three-dimensional coordinates of the key points; the pose estimation model in this case is a model for processing three-dimensional depth images, so the images input at the inference stage are also three-dimensional depth images. The predicted pose estimation results and the second annotation information of the second sample images are then substituted into a second preset loss function to perform loss calculation and obtain a second loss value. Parameters of the initial pose estimation model may then be optimized using back-propagation and gradient descent based on the second loss value. The optimization may be performed iteratively until the pose estimation model reaches a converged state.
After training is finished, the obtained pose estimation model can be used for subsequent pose estimation.
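The predict-compare-update loop described above can be illustrated with a deliberately simplified stand-in: a linear "pose regressor" trained by gradient descent on a mean-squared-error loss. The linear model, the MSE loss, and all values below are assumptions for illustration only; the actual second preset loss function and network architecture (e.g., Hourglass, C2F-Vol) are deep networks, but they are trained by the same iterate-until-convergence scheme.

```python
import numpy as np

# Toy stand-in for the training procedure: a linear map W plays the role of
# the pose estimation model, mapping image features to keypoint coordinates.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 8))        # 32 "second sample images" as feature vectors
W_true = rng.normal(size=(8, 6))    # hidden ground-truth mapping (3 keypoints x (x, y))
Y = X @ W_true                      # second annotation info: labeled coordinates

W = np.zeros((8, 6))                # initial pose estimation model parameters
lr = 0.05
for step in range(500):
    pred = X @ W                              # predicted pose estimation result
    grad = 2 * X.T @ (pred - Y) / len(X)      # gradient of the MSE loss w.r.t. W
    W -= lr * grad                            # gradient descent update

final_loss = np.mean((X @ W - Y) ** 2)        # second loss value after training
```

After enough iterations the loss settles near zero, i.e., the model has reached the converged state mentioned above.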
Step S140, planning a motion trajectory of the ultrasonic probe for scanning at least one key point based on the initial probe coordinates of the ultrasonic probe and the key point coordinates of the at least one key point.
For example, the distance of the ultrasonic probe from each key point in the world coordinate system may be determined according to the initial probe coordinates of the ultrasonic probe and the key point coordinates corresponding one-to-one to the at least one key point. For example, for key points A, B and C, the distances from the ultrasonic probe, ordered from near to far, are key point B, key point C and key point A. Therefore, the motion track (i.e., path) of the mechanical arm carrying the ultrasonic probe can be planned from the initial position of the ultrasonic probe to the position of key point B, then to the position of key point C, and then to the position of key point A.
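The near-to-far ordering described above can be sketched as follows. This is an illustrative simplification: the function name and data layout are invented for this sketch, and a real planner for a mechanical arm would also account for kinematics and obstacle avoidance.

```python
import math

def plan_trajectory(probe_xyz, keypoints):
    """Order key points by straight-line distance from the probe's initial
    position, nearest first, as in the B -> C -> A example above.

    probe_xyz: (x, y, z) of the probe in the world coordinate system.
    keypoints: dict mapping key point name -> (x, y, z) world coordinates.
    Returns the visiting order as a list of key point names.
    """
    return sorted(keypoints, key=lambda name: math.dist(probe_xyz, keypoints[name]))

# B is nearest, then C, then A, so the planned order is B -> C -> A.
order = plan_trajectory((0.0, 0.0, 0.0),
                        {"A": (5.0, 0.0, 0.0),
                         "B": (1.0, 0.0, 0.0),
                         "C": (0.0, 3.0, 0.0)})
```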
And step S150, controlling the ultrasonic probe to sequentially move to the position of at least one key point according to the planned motion track so as to scan the position of the at least one key point.
For example, based on the planning result of the motion trajectory, the ultrasonic probe may be controlled to sequentially move to the positions of the 3 key points B, C, A according to the planned motion trajectory, so as to scan the positions of the 3 key points.
According to the ultrasonic scanning method provided by the embodiment of the invention, the initial probe coordinates of the ultrasonic probe and the key point coordinates of at least one key point can be automatically determined based on the image to be detected, the motion track of the ultrasonic probe can be planned based on these coordinates, and the ultrasonic probe can be controlled to scan according to the motion track. Because the scheme detects both the probe and the key points, positioning precision is high and the planned motion track is more accurate. In addition, controlling the ultrasonic probe to perform ultrasonic scanning along the planned motion track can improve the standardization of the ultrasound images obtained by scanning. Moreover, the scheme requires no complex operation from the user and can free the user's hands to a certain extent.
For example, in the process of controlling the ultrasonic probe to sequentially move to the position of the at least one key point according to the planned motion track so as to scan the part to which the at least one key point belongs, the method may further include: when the ultrasonic probe moves to the position of any key point, executing a corresponding scanning feedback operation based on the part to which that key point belongs, wherein the scanning feedback operation includes one or more of the following: displaying a diagnostic measurement interface, which may include at least one first measurement item corresponding to the part to which the key point belongs; measuring at least one second measurement item corresponding to the part to which the key point belongs; measuring at least one third measurement item in response to a user-input measurement instruction for the at least one third measurement item corresponding to the part to which the key point belongs; and displaying a standard mannequin and highlighting on it the body region corresponding to the part to which the key point belongs.
In one embodiment, when the ultrasonic probe moves to the position of any one of the keypoints, a corresponding feedback operation may be performed based on the location to which the keypoint belongs. The location to which any key point belongs may be one of a variety of predetermined locations. The preset locations may include, but are not limited to, the following: thyroid site, heart site, uterus site, kidney site, etc. The scan feedback operation may include one or more of any of the following.
A first operation: and displaying a diagnosis measurement interface. The means for performing the ultrasound scanning method 100 may comprise a display device or the means for performing the ultrasound scanning method 100 may be communicatively connected with the display device. On the display device, a diagnostic measurement interface may be displayed. The diagnostic measurement interface may include at least one first measurement item corresponding to a location to which the current keypoint belongs. For example, when the location to which the key point belongs is a thyroid location, the at least one first measurement item may include one or more of: measuring a length of the lesion based on the scanned ultrasound image; measuring an area of the lesion based on the scanned ultrasound image; measuring the length of the thyroid based on the scanned ultrasound image; the area of the thyroid gland, etc. is measured based on the scanned ultrasound image.
For another example, when the location to which the key point belongs is a uterine location, the at least one first measurement item may include: and carrying out identification of the standard section and measurement of the standard section based on the scanned ultrasonic image. Wherein, the identification of the standard tangent plane and the measurement of the standard tangent plane can comprise: carrying out standard section identification on the ultrasonic image to judge whether the ultrasonic image belongs to one of at least one preset type of standard section; under the condition that the ultrasonic image belongs to a standard section of a specific preset type, image segmentation is carried out on the ultrasonic image so as to segment at least one target structure from the ultrasonic image, wherein the at least one target structure belongs to a structure related to the standard section of the specific preset type; based on the segmentation result of the at least one target structure, at least one sub-measurement item is measured, the at least one sub-measurement item being a sub-measurement item related to the at least one target structure. The at least one predetermined type of standard cut surface may include, but is not limited to, one or more of the following: standard head-hip length section, horizontal thalamus section, umbilical cord blood flow section, upper abdomen section, long femur shaft section, placenta section, amniotic fluid section, cervical canal sagittal section, etc. The at least one target structure may include, but is not limited to, one or more of the following: amniotic fluid, placenta, thalamus, four-chamber heart, etc. The at least one sub-measurement item may include, but is not limited to, one or more of the following: head-hip length, double-top diameter length, placenta thickness, maximum amniotic fluid depth, etc.
At least one first measurement item may be displayed on the diagnostic measurement interface, from which the user may select a particular first measurement item to be measured. After the user selects a particular first measurement item, the means for performing the ultrasound scanning method 100 may automatically perform the corresponding operation based on the user's selection. In the prior art, whether in a traditional manual measurement mode or an intelligent measurement mode, a doctor scanning different parts of a patient must manually switch to the diagnostic measurement interface corresponding to each part, which increases the user's workload to a certain extent. With the present scheme, the diagnostic measurement interface of the corresponding part is switched to automatically according to the scanning position of the ultrasonic probe, which can effectively reduce the user's operation flow and thereby improve working efficiency.
Second operation: measuring at least one second measurement item corresponding to the part to which the current key point belongs. The apparatus for performing the ultrasonic scanning method 100 may store in advance the second measurement items in one-to-one correspondence with the respective parts. When the ultrasonic probe moves to any key point, the second measurement item corresponding to the part to which that key point belongs can be executed automatically. For example, when the ultrasound probe moves to a key point contained in the thyroid site, the length or area of a lesion, etc., may be automatically measured based on the scanned ultrasound image. For another example, when the ultrasonic probe moves to a key point contained in the uterine part, the identification and measurement of the standard section may be performed automatically. With this scheme, measurement of the part where the key point is located can be started automatically, further improving working efficiency.
Third operation: and responding to a measurement instruction of at least one third measurement item corresponding to the part to which the current key point belongs, which is input by a user, and measuring the at least one third measurement item. The means for performing the ultrasound scanning method 100 may comprise an input device or the means for performing the ultrasound scanning method 100 may be communicatively connected with the input device. The input device may include, but is not limited to, one or more of a mouse, keyboard, touch screen, microphone, and the like. The user can input a measurement instruction of the third measurement item through the input device. In one example, the input device is the same touch screen as the display device, and the user may input a measurement instruction related to any first measurement item displayed in the diagnostic measurement interface by, for example, clicking a button control related to the measurement item on the touch screen, where the third measurement item is the first measurement item. Based on the third measurement item selected by the user, measurement may be started. By the scheme, the user is allowed to determine the measurement items to be measured by himself, the degree of autonomy is high, and the personalized requirements of the user can be better met.
Fourth operation: the standard mannequin is displayed and the body region corresponding to the location to which the current key point belongs is highlighted on the standard mannequin. FIG. 2 shows a schematic diagram of highlighting a human body region according to an embodiment of the present invention. As shown in fig. 2, the highlighted portion is a human body area corresponding to a thyroid part of a human body on the human body model. The standard manikin may be pre-stored in the storage means. The storage device may be included in or communicatively connected with the means for performing the ultrasound scanning method 100. On a standard mannequin, the body region where the keypoints are located may be highlighted based on the location of the current keypoints. The human body region may be, for example, a region surrounded by edges where at least two keypoints are located, respectively, or a region surrounded by other edges where at least two keypoints are located, respectively, wherein the at least two keypoints include a keypoint to which the ultrasound probe is currently moved. For example, in the highlighted thyroid region shown in fig. 2, there are upper and lower edges that may each contain one keypoint, one of which is the one to which the ultrasound probe is currently moving, and the other is the one closest to the keypoint. Thus, a human body region can be determined by the key point currently moved to and the key point nearest thereto. In addition, referring to fig. 2, it can be seen that the thyroid region further includes left and right edges, which may optionally not include keypoints thereon, which may be formed by a section of the contour of the human body itself. Through the scheme, the currently scanned human body area can be highlighted, so that a user can conveniently check the scanning progress, and the user experience can be effectively improved.
In the above at least one first measurement item and at least one second measurement item, at least a part of the first measurement item and at least a part of the second measurement item may be the same, or all of the first measurement items may be different from all of the second measurement items. The at least one second measurement item and the at least one third measurement item may be similar to each other and will not be described in detail.
According to the technical scheme, based on various scanning feedback operations, the measurement requirements of different users in different application scenes can be met.
Illustratively, the scan feedback operation may further include a fifth operation: and displaying the currently scanned ultrasonic image, namely the ultrasonic image containing the part to which the current key point belongs. This facilitates the user to view the results of the ultrasound image. By way of example and not limitation, the fifth operation may be performed while any one or more of the first operation, the second operation, the third operation, the fourth operation are performed. By way of example and not limitation, the diagnostic measurement interface may be displayed in a first region of the display interface of the display device and the currently scanned ultrasound image may be displayed in a second region.
Illustratively, the at least one second measurement item may include: the identifying of the standard tangent plane and the measuring of the standard tangent plane based on the ultrasonic image scanned for the part of the key point, and the identifying of the standard tangent plane and the measuring of the standard tangent plane based on the ultrasonic image scanned for the part of the key point may include: carrying out standard section identification on the basis of the ultrasonic image in real time so as to judge whether the ultrasonic image belongs to one of at least one preset type of standard section; under the condition that the ultrasonic image belongs to a standard section of a specific preset type, image segmentation is carried out on the ultrasonic image so as to segment at least one target structure from the ultrasonic image, wherein the at least one target structure belongs to a structure related to the standard section of the specific preset type; based on the segmentation result of the at least one target structure, at least one sub-measurement item is measured, the at least one sub-measurement item being a sub-measurement item related to the at least one target structure.
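The identify-segment-measure pipeline for standard sections might be organized as below. All names here (`classify_plane`, `segment_structures`, the measurement table) are hypothetical stand-ins invented for this sketch; in a real system each would be backed by a trained classification or segmentation model rather than the dictionary stubs used here.

```python
# Hypothetical sketch of the identify -> segment -> measure pipeline.
STANDARD_PLANES = {"thalamus_plane", "femur_plane", "placenta_plane"}

def classify_plane(ultrasound_image):
    # Stand-in classifier: real code would run a CNN and return the
    # standard-plane type, or None when no preset type matches.
    return ultrasound_image.get("plane")

def segment_structures(ultrasound_image, plane_type):
    # Stand-in segmentation: returns masks only for structures relevant
    # to the recognized standard-plane type.
    return ultrasound_image.get("structures", {})

MEASUREMENTS = {  # sub-measurement items keyed by segmented target structure
    "thalamus": lambda mask: {"biparietal_diameter": float(mask["width"])},
    "amniotic_fluid": lambda mask: {"max_depth": float(mask["depth"])},
}

def identify_and_measure(ultrasound_image):
    plane = classify_plane(ultrasound_image)
    if plane not in STANDARD_PLANES:
        return None  # image does not belong to any preset standard section
    results = {}
    for name, mask in segment_structures(ultrasound_image, plane).items():
        if name in MEASUREMENTS:  # measure only items tied to segmented structures
            results.update(MEASUREMENTS[name](mask))
    return results

res = identify_and_measure({"plane": "thalamus_plane",
                            "structures": {"thalamus": {"width": 48.0}}})
```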
Those skilled in the art can understand the implementation manner and the beneficial effects of the technical solution by reading the foregoing embodiments, and for brevity, a detailed description is omitted herein.
Illustratively, in response to a measurement instruction, input by a user, of at least one third measurement item corresponding to the part to which the key point belongs, measuring the at least one third measurement item may include: measuring the at least one third measurement item in response to a selection operation instruction input by the user for one or more first measurement items contained on the diagnostic measurement interface, wherein the at least one third measurement item is the one or more first measurement items, and the measurement instruction is the selection operation instruction.
As described above, the user can directly click and select a certain first measurement item on the diagnostic measurement interface to measure it, and that first measurement item is then the third measurement item. Of course, alternatively, the input device may be a device other than the display device, such as a mouse or a keyboard, and the user may input a selection operation instruction for any first measurement item through the mouse and/or the keyboard, so that the selected measurement item is measured based on the currently scanned ultrasound image (i.e., the ultrasound image containing the part to which the current key point belongs).
Illustratively, in the process of controlling the ultrasonic probe to sequentially move, according to the planned motion track, to the position of each key point of the at least one key point so as to scan that position, the method may further include: acquiring resistance information detected by a force sensor, wherein the force sensor is arranged at the end of the ultrasonic probe that contacts the object to be detected; and, when the ultrasonic probe moves to the position of any key point, adjusting the motion track of the ultrasonic probe according to the corresponding resistance information.
In one embodiment, the end of the ultrasonic probe that contacts the object to be measured may be provided with a force sensor. The force sensor may be implemented using any type of force sensor. In the process of controlling the ultrasonic probe to sequentially move to the position of at least one key point according to the planned motion track, the resistance information detected by the force sensor can be acquired in real time. The resistance information may represent the magnitude of the resistance. For example, for the previously planned motion track of the ultrasonic probe, corresponding resistance information can be obtained when the ultrasonic probe moves to key point B, key point C, and key point A, respectively. Taking key point B as an example, after the resistance information corresponding to the current key point is obtained, the moving direction and/or speed of the ultrasonic probe can be adjusted according to the resistance information, and the motion track re-planned accordingly.
According to the technical scheme, based on the force sensor, the motion track can be timely adjusted according to the acquired resistance information, and the track adjustment scheme has high intelligent degree.
Illustratively, when the ultrasonic probe moves to the position where any key point is located, adjusting the motion trail of the ultrasonic probe according to the corresponding resistance information may include: when the resistance information is larger than a preset resistance threshold value, controlling the ultrasonic probe to move along the direction opposite to the current movement direction so as to adjust the movement track of the ultrasonic probe.
In one embodiment, the resistance threshold may be preset by the user. The resistance threshold may be any value greater than 0; for example, the resistance threshold may be 3 newtons (N). When the resistance information is greater than the preset resistance threshold of 3 N, the ultrasonic probe can be controlled to move along the direction opposite to the current movement direction. For example, if the ultrasonic probe is currently moving forward along the Z-axis of the world coordinate system and the acquired resistance information exceeds 3 N, the probe can be controlled to stop moving forward and instead move in the negative Z-axis direction of the world coordinate system. In one example, the ultrasound probe may be controlled to move in the direction opposite to the current direction of movement until the acquired resistance information falls below the preset resistance threshold.
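The threshold rule above amounts to a small piece of control logic, sketched here. The 3 N default comes from the example in the text; representing the motion direction as a vector, and the function itself, are assumptions made for illustration, and a real controller would additionally re-plan the trajectory.

```python
RESISTANCE_THRESHOLD_N = 3.0  # example threshold from the text (3 newtons)

def adjust_motion(direction, resistance_n, threshold_n=RESISTANCE_THRESHOLD_N):
    """Reverse the probe's motion when measured resistance exceeds the
    threshold; otherwise keep the current direction.

    direction:    (dx, dy, dz) of the current motion in world coordinates.
    resistance_n: force sensor reading in newtons.
    """
    if resistance_n > threshold_n:
        return tuple(-c for c in direction)  # move opposite to current motion
    return direction

# Probe moving forward along Z meets 4.2 N of resistance -> reverse along Z.
reversed_dir = adjust_motion((0.0, 0.0, 1.0), 4.2)
# Resistance below threshold -> keep moving as planned.
unchanged_dir = adjust_motion((0.0, 0.0, 1.0), 1.0)
```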
According to the technical scheme, the ultrasonic probe can be controlled to move along the direction opposite to the current movement direction when the resistance is overlarge, so that the ultrasonic probe can be effectively prevented from exerting overlarge pressure on an object to be tested, and discomfort of the object to be tested is caused.
Illustratively, where one of the initial probe coordinates of the ultrasonic probe and the key point coordinates of the at least one key point is located in the image coordinate system and the other is located in the world coordinate system, planning a motion trajectory of the ultrasonic probe to scan the at least one key point based on the initial probe coordinates and the key point coordinates may include: performing coordinate conversion on the initial probe coordinates of the ultrasonic probe and/or the key point coordinates of the at least one key point based on the conversion relation between the image coordinate system and the world coordinate system, so as to unify the initial probe coordinates and the key point coordinates under the world coordinate system or the image coordinate system; in the case of unifying to the world coordinate system, determining the distance between the ultrasonic probe and the at least one key point in the world coordinate system based on the three-dimensional coordinates of the ultrasonic probe and of the at least one key point in the world coordinate system; in the case of unifying to the image coordinate system, determining the distance between the ultrasonic probe and the at least one key point in the image coordinate system based on their two-dimensional coordinates in the image coordinate system, and then determining their distance in the world coordinate system according to the conversion relation and the distance in the image coordinate system; and planning the motion trajectory based on the distance between the ultrasonic probe and the at least one key point in the world coordinate system.
In one embodiment, one of the initial probe coordinates of the ultrasound probe and the key point coordinates of the at least one key point is located in the image coordinate system and the other in the world coordinate system. Once the image acquisition device is determined, its intrinsic and extrinsic parameters are determined, i.e., the conversion relationship between the image coordinate system of the images acquired by the device and the world coordinate system is known. This conversion relationship may be stored in advance. By way of example and not limitation, the image coordinate system may be established by taking the top-left vertex of the image to be measured as the origin o, the side passing through the origin o and parallel to the bottom side of the image as the x-axis (i.e., the width direction of the image), and the side passing through the origin o and perpendicular to the x-axis as the y-axis (i.e., the height direction of the image). Based on the position of the ultrasonic probe in the image to be measured, the image coordinates of any point in the region to which that position belongs can be selected as the initial probe coordinates; for example, the image coordinates of the center point of the region may be selected. Based on the conversion relation between the image coordinate system and the world coordinate system, coordinate conversion is performed on the initial probe coordinates of the ultrasonic probe and/or the key point coordinates of the at least one key point, so that both can be unified under the world coordinate system or the image coordinate system.
For the case of unification to world coordinate system, based on the coordinates of the ultrasound probe in world coordinate system and the coordinates of the at least one key point in world coordinate system, the distances of the ultrasound probe and the at least one key point in world coordinate system may be determined. For the case of unification to the image coordinate system, based on the two-dimensional coordinates of the ultrasonic probe in the image coordinate system and the two-dimensional coordinates of the at least one key point in the image coordinate system, the distances of the ultrasonic probe and the at least one key point in the image coordinate system can be determined, and the distances of the ultrasonic probe and the at least one key point in the world coordinate system can be determined according to the conversion relation and the distances of the ultrasonic probe and the at least one key point in the image coordinate system. According to the distance between the ultrasonic probe and at least one key point in the world coordinate system, the motion track of the ultrasonic probe can be planned.
Under the condition that the initial probe coordinates and the key point coordinates are two-dimensional coordinates, the two coordinates can be converted into three-dimensional coordinates based on the conversion relation between the coordinate systems, then the distance between the ultrasonic probe and the key point under the world coordinate system is determined, and then the trajectory planning is performed based on the distance. In addition, the coordinate difference between the two-dimensional coordinates of the ultrasonic probe and the two-dimensional coordinates of the key points can be calculated first, then the two-dimensional coordinate difference is converted into the three-dimensional coordinate difference based on the conversion relation between the coordinate systems, the distance between the ultrasonic probe and the key points under the world coordinate system is determined, and then the track planning is performed based on the distance.
Under the condition that the initial probe coordinates and the key point coordinates are three-dimensional coordinates, the difference between the initial probe coordinates and the key point coordinates can be directly calculated to determine the distance between the ultrasonic probe and the key point under the world coordinate system, and track planning is carried out based on the distance.
According to the technical scheme, based on the conversion relation between the image coordinate system and the world coordinate system, the initial probe coordinates of the ultrasonic probe and/or the key point coordinates of at least one key point are subjected to coordinate conversion so as to plan the motion trail of the ultrasonic probe. According to the scheme, complex algorithms and operations are not needed, and the efficiency of motion trail planning can be improved.
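As a concrete illustration of such an image-to-world conversion, the following sketch back-projects a pixel with known depth through a standard pinhole camera model. The intrinsic matrix `K` and extrinsics `(R, t)` below are made-up calibration values, not ones given in the text; they stand in for the pre-stored conversion relation between the image coordinate system and the world coordinate system.

```python
import numpy as np

# Assumed (illustrative) calibration: intrinsics K, extrinsics (R, t).
K = np.array([[600.0,   0.0, 320.0],   # fx,  0, cx
              [  0.0, 600.0, 240.0],   #  0, fy, cy
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # camera axes aligned with world axes here
t = np.array([0.0, 0.0, 0.0])          # camera placed at the world origin here

def pixel_to_world(u, v, depth):
    """Back-project pixel (u, v) with a known depth into world coordinates."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # normalized camera ray
    cam_xyz = depth * ray                           # point in the camera frame
    return R @ cam_xyz + t                          # point in the world frame

# The principal point (320, 240) at depth 1.5 m lies on the optical axis.
p_center = pixel_to_world(320.0, 240.0, 1.5)
p_offset = pixel_to_world(920.0, 240.0, 2.0)
```

With both the probe and the key points expressed in world coordinates this way, their Euclidean distances can be computed directly for trajectory planning.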
Illustratively, the first part of the image to be measured may comprise at least one two-dimensional image, and performing target detection based on the first part of the image to determine the initial probe coordinates of the ultrasonic probe may include: inputting any two-dimensional image of the at least one two-dimensional image into a target detection model to obtain probe position information of the ultrasonic probe; and determining the initial probe coordinates of the ultrasonic probe based on the probe position information, the initial probe coordinates being two-dimensional coordinates.
In one embodiment, the first part of the image to be measured may include at least one two-dimensional image. The two-dimensional image may be an image acquired with a color camera. Inputting any such two-dimensional image into the target detection model yields the probe position information of the ultrasonic probe. The probe position information can be represented by a target detection frame of any shape, or by the envelope of the ultrasonic probe. Based on the probe position information, the initial probe coordinates of the ultrasonic probe may be determined; for example, the coordinates of the center point of the target detection frame may be selected as the initial probe coordinates. The initial probe coordinates may be two-dimensional coordinates in the image coordinate system. The manner in which the image coordinate system is established has been described in detail in the foregoing embodiments and is not repeated here for brevity.
According to this technical scheme, a two-dimensional image of the ultrasonic probe is relatively easy to obtain, so the scheme of determining the two-dimensional coordinates of the ultrasonic probe from a two-dimensional image is simple to realize.
Illustratively, the probe position information includes a target detection frame for indicating the position of the ultrasonic probe, and determining the initial probe coordinates of the ultrasonic probe based on the probe position information may include: determining the coordinates of the centroid of the ultrasonic probe, based on the target detection frame, as the initial probe coordinates.
In one embodiment, the probe position information may include a target detection frame indicating the position of the ultrasonic probe. The form of the target detection frame is described above and not repeated here. For example, the center point, any corner point, or any other suitable point of the target detection frame may be determined as the centroid of the ultrasound probe. The coordinates corresponding to the centroid may be used as the initial probe coordinates of the ultrasound probe.
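Taking the center of the detection frame as the centroid, as in the example above, reduces to a two-line computation. The corner-coordinate `(x_min, y_min, x_max, y_max)` box format is an assumption for this sketch; detection models also commonly emit center-plus-size boxes.

```python
def box_centroid(x_min, y_min, x_max, y_max):
    """Center of the target detection frame, used as the probe's initial
    two-dimensional coordinates in the image coordinate system (origin at
    the top-left corner, x to the right, y downward)."""
    return ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)

# An 80x60-pixel detection frame around the probe.
cx, cy = box_centroid(100, 80, 180, 140)
```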
According to the technical scheme, the coordinate of the mass center of the ultrasonic probe is determined to be 5 as the initial probe coordinate based on the target detection frame, so that the determined initial probe coordinate is more accurate, and the accuracy can be further ensured
Planning the motion trail.
Illustratively, the second portion of the image to be measured may comprise at least one three-dimensional depth image, wherein performing pose estimation based on the second portion of the image to be measured to determine the key point coordinates of at least one key point in the target examination region may include: inputting any one of the at least one three-dimensional depth image into a three-dimensional pose estimation model to obtain the key point coordinates of at least one key point, the key point coordinates being three-dimensional coordinates.
The implementation of the pose estimation model is described above and is not repeated here. When it processes three-dimensional depth images, the pose estimation model may be referred to as a three-dimensional pose estimation model. In one embodiment, the second portion of the image to be measured may include at least one three-dimensional depth image. The three-dimensional depth image may be an image acquired with a depth camera. Inputting any three-dimensional depth image into the pose estimation model yields the key point coordinates of at least one key point. The key point coordinates are three-dimensional coordinates in the world coordinate system.
According to this technical solution, three-dimensional depth images of human-body key points are currently easy to obtain, and determining the three-dimensional coordinates of the key points directly from the three-dimensional depth image improves planning efficiency when planning the motion trajectory of the ultrasonic probe in the world coordinate system.
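As a sketch of how a depth image supplies the third coordinate, the pose-estimation network can be treated as a black box that finds a key point pixel; the remaining step is the standard pinhole back-projection of that pixel plus its depth value into a 3-D point. The intrinsics fx, fy, cx, cy below are illustrative values, not from the patent.

```python
# Back-project a detected key point pixel (u, v) with its depth value
# into camera-frame 3-D coordinates using the pinhole camera model.
# The pose-estimation model that finds (u, v) is assumed to exist and
# is not shown; the intrinsics are illustrative.

def backproject(u, v, depth, fx, fy, cx, cy):
    """Return the camera-frame point (X, Y, Z) for pixel (u, v)
    at the given depth (output shares the depth unit)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A key point detected at the optical center of a 640x480 depth image,
# 0.8 m from the camera, with illustrative intrinsics.
kp_3d = backproject(320, 240, 0.8, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(kp_3d)  # (0.0, 0.0, 0.8)
```

A further rigid transform (from the calibrated conversion relation between coordinate systems) would map this camera-frame point into the world coordinate system used for trajectory planning.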
For example, in the process of controlling the ultrasonic probe to sequentially move, according to the planned motion track, to the position where the at least one key point is located so as to scan the part to which the at least one key point belongs, the method may further include: acquiring mask information of the object to be detected and acquiring an image in real time, wherein the image acquired in real time is an image acquired for the ultrasonic probe while the ultrasonic probe is moving; performing target detection based on the real-time acquired image to determine real-time probe coordinates of the ultrasonic probe, wherein the real-time probe coordinates are two-dimensional coordinates or three-dimensional coordinates; determining, based on the real-time probe coordinates and the mask information, whether the ultrasonic probe falls on the object to be detected; and outputting corresponding prompt information in the case that the ultrasonic probe does not fall on the object to be detected and/or in the case that the ultrasonic probe falls on the object to be detected.
In the process of controlling the ultrasonic probe to sequentially move to the position of at least one key point according to the planned motion trail so as to scan the position of the at least one key point, the real-time probe coordinates of the ultrasonic probe at each current moment and the mask information at each current moment can be acquired in real time.
The mask information may be used to indicate where the object to be measured is located. Illustratively, the real-time acquired image may be a two-dimensional image and the mask information may be a mask image. The size of the mask image is consistent with that of the real-time acquired image, and the pixel value of each pixel in the mask image indicates whether the pixel at the same position in the real-time acquired image belongs to the object to be measured. For example, a pixel value equal to a first value indicates that the corresponding pixel in the real-time acquired image belongs to the object to be measured, while a pixel value equal to a second value indicates that the corresponding pixel does not. One of the first value and the second value may be 1 and the other may be 0. Of course, the mask information may also be the three-dimensional coordinates of all points corresponding to the object to be measured (i.e., a three-dimensional point cloud of the object to be measured).
Alternatively, in one embodiment, the depth camera may be provided with a detection function for a human mask, which may directly output mask information. The means for performing the ultrasound scanning method 100 may obtain the mask information it outputs directly from the depth camera. In another embodiment, the mask information may be obtained by identifying the mask of the object to be measured from any image to be measured containing the object to be measured. For example, the image to be measured may be a two-dimensional image, and the mask identifying the object to be measured may be optionally implemented by the above-described image segmentation model. In the case where the depth camera outputs mask information and the mask information is a mask image, the resolution of the mask image coincides with the resolution of the three-dimensional depth image output by the depth camera.
Under the condition that the mask information and the real-time probe coordinates are respectively located in different coordinate systems (one is an image coordinate system and the other is a world coordinate system), the mask information and the real-time probe coordinates can be converted into the same coordinate system through the conversion relation of the coordinate systems, and then whether the ultrasonic probe falls on an object to be detected is judged. In the case where the ultrasonic probe does not fall onto the object to be measured, the first prompt information may be optionally output. In the case where the ultrasonic probe falls onto the object to be measured, the second prompt information may be optionally output.
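Once the real-time probe coordinates and the mask are in the same coordinate system, the on-object check reduces to a pixel lookup. The sketch below follows the mask convention described above (1 for object pixels, 0 for background); the concrete values and pure-Python mask representation are illustrative.

```python
# Sketch of the on-object check: given probe coordinates already
# converted into the image coordinate system of the mask, decide
# whether the probe falls on the object to be measured.

def probe_on_object(probe_xy, mask):
    """Return True if the probe pixel lies inside the object mask."""
    x, y = int(round(probe_xy[0])), int(round(probe_xy[1]))
    if 0 <= y < len(mask) and 0 <= x < len(mask[0]):
        return mask[y][x] == 1
    return False  # probe projects outside the image entirely

mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
print(probe_on_object((1, 2), mask))  # True  -> second prompt ("scanning")
print(probe_on_object((0, 0), mask))  # False -> first prompt ("moving")
```

The two boolean outcomes correspond to the first and second prompt information described above.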
Outputting any prompt information can be realized through an output device. The output device may be included in or communicatively coupled with the device for performing the ultrasound scanning method 100. The output device may include, but is not limited to, one or more of a display device, a speaker, a light emitting device, a communication device. The communication device may be any wired and/or wireless communication device. The display device can output prompt information in the forms of video, images, characters and the like. The prompt information can be output in an audio form through a speaker. The prompt information can be output in the form of an optical signal through the light-emitting device. The prompt information may be output to any associated device, such as a personal computer, server, mobile terminal, etc., via the communication means. In one example, the first hint information may be a text message such as "probe is moving". The second hint information may be a text message such as "the probe is scanning".
Through the scheme, whether the ultrasonic probe falls onto the object to be detected can be monitored in real time, and corresponding prompt information is output, so that a user can conveniently know the current scanning progress in time.
For example, in the process of controlling the ultrasonic probe to sequentially move, according to the planned motion track, to the position where the at least one key point is located so as to scan the part to which the at least one key point belongs, the method may further include: acquiring mask information of the object to be detected in real time, the mask information being a mask image; acquiring an image in real time, the real-time acquired image being an image acquired for the ultrasonic probe while the ultrasonic probe is moving; performing target detection based on the real-time acquired image to determine real-time probe coordinates of the ultrasonic probe, wherein the real-time probe coordinates are two-dimensional coordinates or three-dimensional coordinates; synthesizing identification information of the at least one key point and the ultrasonic probe on the mask image based on the key point coordinates of the at least one key point and the real-time probe coordinates of the ultrasonic probe; and outputting the synthesized mask image.
The mask information and the real-time probe coordinates may be obtained by referring to the above embodiments, and will not be described in detail. The object under test is generally stationary during the inspection process, so that the coordinates of the keypoints of the at least one keypoint may follow the coordinates of the keypoints (which may be referred to as initial keypoint coordinates) determined in step S130 described above. Of course, alternatively, the real-time keypoint coordinates of the at least one keypoint may also be determined from the real-time ultrasound image in a similar manner as step S130. According to the key point coordinates (which may be the initial key point coordinates or the real-time key point coordinates) of at least one key point and the real-time probe coordinates of the ultrasonic probe, the identification information of the corresponding key point and the ultrasonic probe may be synthesized on the mask image. The identification information may take any form of identification. For example, the respective corresponding positions of the key points and the ultrasound probe may be represented by dots or rectangular boxes. After the synthesized identification information on the mask image is obtained, the synthesized mask image may be displayed on a display device.
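The synthesis step above can be sketched as stamping small markers onto the mask at the key point and probe positions. The marker values (2 for key points, 3 for the probe), the square marker shape, and the pure-Python mask representation are all illustrative choices; the patent allows any form of identification, such as dots or rectangular boxes.

```python
# Sketch of synthesizing identification information on the mask image:
# stamp a small square marker at each key point coordinate and at the
# real-time probe coordinates. Marker values are illustrative.

def draw_marker(mask, center_xy, value, half=1):
    """Stamp a (2*half+1)-pixel square marker onto the mask in place,
    clipped to the image bounds."""
    cx, cy = int(center_xy[0]), int(center_xy[1])
    for y in range(max(0, cy - half), min(len(mask), cy + half + 1)):
        for x in range(max(0, cx - half), min(len(mask[0]), cx + half + 1)):
            mask[y][x] = value

mask = [[0] * 8 for _ in range(8)]
draw_marker(mask, (2, 2), value=2)  # a key point
draw_marker(mask, (5, 5), value=3)  # the real-time probe position
print(mask[2][2], mask[5][5])  # 2 3
```

The synthesized mask would then be sent to the display device so the user can see both positions at a glance.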
According to the technical scheme, the synthesized mask image is obtained and output, so that the positions of the ultrasonic probe and the key point can be conveniently checked by a user.
According to yet another aspect of the present invention, there is also provided an ultrasonic scanning apparatus. Fig. 3 shows a schematic block diagram of an ultrasonic scanning apparatus 300 according to one embodiment of the invention. As shown in Fig. 3, the apparatus 300 may include: an acquisition module 310, a detection module 320, an estimation module 330, a planning module 340, and a control module 350.
The obtaining module 310 may be configured to obtain at least one image to be measured, where a first part of the at least one image to be measured includes the ultrasound probe, and a second part of the at least one image to be measured includes the target inspection area of the object to be measured, where the first part of the at least one image to be measured is at least a part of the at least one image to be measured, and the second part of the at least one image to be measured is at least a part of the at least one image to be measured.
The detection module 320 may be configured to perform target detection based on the first portion of the image to be detected, so as to determine an initial probe coordinate of the ultrasound probe, where the initial probe coordinate is a two-dimensional coordinate or a three-dimensional coordinate.
The estimation module 330 may be configured to perform pose estimation based on the second portion of the image to be detected, so as to determine a key point coordinate of at least one key point in the target inspection area, where the key point coordinate is a two-dimensional coordinate or a three-dimensional coordinate.
The planning module 340 may be configured to plan a motion trajectory of the ultrasound probe for scanning at least one keypoint based on an initial probe coordinate of the ultrasound probe and a keypoint coordinate of the at least one keypoint.
The control module 350 may be configured to control the ultrasonic probe to sequentially move to a position where at least one key point is located according to the planned motion trajectory, so as to scan a location where the at least one key point belongs.
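As an illustrative sketch of what the planning module 340 might compute, the key points can be ordered greedily so that the probe always visits the nearest unvisited key point next, starting from the initial probe coordinates. Greedy nearest-neighbor ordering is purely an assumption for illustration; this excerpt does not commit the invention to any specific planning algorithm.

```python
import math

# Hypothetical trajectory planner: visit key points in greedy
# nearest-neighbor order starting from the initial probe coordinates.
# This ordering strategy is an illustrative assumption.

def plan_trajectory(start, keypoints):
    """Return the key points in greedy nearest-neighbor visiting order."""
    remaining = list(keypoints)
    order, current = [], start
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        order.append(nxt)
        current = nxt
    return order

probe = (0.0, 0.0, 0.0)
kps = [(5.0, 0.0, 0.0), (1.0, 0.0, 0.0), (3.0, 0.0, 0.0)]
print(plan_trajectory(probe, kps))
# [(1.0, 0.0, 0.0), (3.0, 0.0, 0.0), (5.0, 0.0, 0.0)]
```

The control module 350 would then move the probe through the returned positions in order.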
According to still another aspect of the present invention, there is also provided an electronic apparatus. Fig. 4 shows a schematic block diagram of an electronic device according to an embodiment of the invention. As shown, the electronic device 400 includes a processor 410 and a memory 420, wherein the memory 420 has stored therein computer program instructions that, when executed by the processor 410, are configured to perform the ultrasound scanning method 100 described above.
The electronic device 400 may be an ultrasonic diagnostic device or an ultrasonic workstation, for example.
The ultrasonic diagnostic device may include an ultrasonic probe, a first processor, a first memory, a first display, and the like. In the case where the electronic device 400 is an ultrasonic diagnostic device, the processor 410 may be a first processor and the memory 420 may be a first memory. The ultrasound probe may be used to transmit ultrasound waves to a target examination region (e.g., a uterine portion of a human body) and to receive ultrasound echoes returned from the target examination region, thereby obtaining ultrasound echo signals. The ultrasound probe transmits the ultrasound echo signal to the first processor. The first processor may process the ultrasound echo signals to obtain an ultrasound image of the target examination region. The ultrasound image obtained by the first processor may be stored in a first memory. These ultrasound images may optionally be displayed on a first display for viewing by a user. Further, the first processor may be used to perform the ultrasound scanning method 100 described herein, and may optionally store some of the processing information generated during processing in the first memory. The processing information may include intermediate data and/or final results, such as mask information described herein, detection results of the object detection model, pose estimation results of the pose estimation model, first measurement items, and so forth. Optionally, the ultrasonic diagnostic apparatus may further include a first input device, such as one or more of a mouse, a keyboard, a touch screen, and the like, for a user to input information, instructions, and the like.
An ultrasound workstation may also be referred to as an ultrasound imaging workstation. The ultrasonic workstation is equipment integrating the functional modules of patient registration, image acquisition, diagnosis and editing, report printing, image post-processing, medical record inquiry, statistical analysis and the like. The ultrasound workstation may be communicatively coupled to the ultrasound diagnostic apparatus, such as by any wired or wireless communication means. The ultrasound diagnostic apparatus may transmit the acquired ultrasound echo signals and/or image data of the ultrasound image, etc. to the ultrasound workstation.
The ultrasound workstation may include a second processor, a second memory, a second display, and the like. In the case where electronic device 400 is an ultrasound workstation, processor 410 may be the second processor and memory 420 may be the second memory. The second processor may receive the ultrasound echo signals and/or image data of the ultrasound image from the ultrasound diagnostic device through a communication interface, and may obtain the ultrasound image based on the ultrasound echo signals or directly from the image data of the ultrasound image. The second processor may store the obtained ultrasound image in the second memory. These ultrasound images may optionally be displayed on the second display for viewing by a user. Further, the second processor may be used to perform the ultrasound scanning method 100 described herein, and may optionally store some of the processing information generated during processing in the second memory. Examples of processing information may be found in the description above. Optionally, the ultrasound workstation may further include a second input device, such as one or more of a mouse, a keyboard, a touch screen, and the like, for a user to input information, instructions, and the like.
In addition, the ultrasound workstation may also include other functional modules such as a printing device. The processing, storage, playback, printing, statistics, searching and other functions of the ultrasonic image can be completed through the second processor, the second memory and various functional modules of the ultrasonic workstation.
According to yet another aspect of the present invention, there is also provided an ultrasound scanning system including: the tail end of the mechanical arm is provided with an ultrasonic probe; at least one image acquisition device for acquiring at least one image to be detected; the electronic device 400 is connected to at least one image acquisition device and a mechanical arm, and the processor 410 in the electronic device is configured to execute the ultrasonic scanning method 100 based on at least one image to be detected, where the processor 410 sends a control instruction to the mechanical arm to control the mechanical arm to drive the ultrasonic probe to move.
In one example, the at least one image acquisition device may be a motion-sensing (Kinect) camera that includes both a color camera and a depth camera. The at least one image acquisition device may be fixed directly above the object to be measured. The ultrasonic probe can be fixed at the end of the mechanical arm, and the mechanical arm can drive the ultrasonic probe to move. The processor 410 is connected to the image acquisition device and the mechanical arm, respectively. Before scanning, the intrinsic and extrinsic parameters of each image acquisition device may first be calibrated to obtain the conversion relations (represented by conversion matrices) between the coordinate systems, and the conversion relations may be stored in the memory 420. The processor 410 may receive the at least one image to be detected acquired by the at least one image acquisition device, determine the key point coordinates of each key point and the initial probe coordinates of the ultrasonic probe based on the images to be detected, plan a trajectory based on these coordinates, send corresponding control instructions to the mechanical arm according to the planned motion track, and control the mechanical arm to move, thereby driving the ultrasonic probe to sequentially move to the positions where the at least one key point is located according to the planned motion track.
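The stored conversion relation can be sketched as a 4x4 homogeneous transformation matrix T that maps a point from the camera coordinate system into the mechanical-arm (world) coordinate system. The matrix below is an illustrative calibration result (a pure translation), not a value from the patent.

```python
# Apply a 4x4 homogeneous coordinate-system conversion matrix T to a
# 3-D point: world_point = T * camera_point. T is an illustrative
# calibration result, here a pure translation for simplicity.

def transform_point(T, p):
    """Map a 3-D point through a 4x4 homogeneous transform."""
    x, y, z = p
    v = (x, y, z, 1.0)
    return tuple(sum(T[r][c] * v[c] for c in range(4)) for r in range(3))

T_cam_to_world = [
    [1.0, 0.0, 0.0, 0.10],  # illustrative: camera origin offset 10 cm
    [0.0, 1.0, 0.0, 0.00],  # along x and 50 cm along z from the
    [0.0, 0.0, 1.0, 0.50],  # world (arm base) origin
    [0.0, 0.0, 0.0, 1.00],
]
print(transform_point(T_cam_to_world, (0.0, 0.0, 0.8)))  # (0.1, 0.0, 1.3)
```

A real calibration would also include a rotation block in the upper-left 3x3 of T; the pure translation here just keeps the arithmetic easy to follow.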
For example, the at least one image acquisition device may comprise a depth camera and/or a color camera, and the at least one image to be measured may comprise at least one three-dimensional depth image acquired by the depth camera and/or at least one two-dimensional image acquired by the color camera.
According to yet another aspect of the present invention, there is also provided a storage medium having stored thereon program instructions for performing the above-described ultrasound scanning method 100 when executed. The storage medium may include, for example, a storage component of a tablet computer, a hard disk of a personal computer, an erasable programmable read-only memory (EPROM), a portable read-only memory (CD-ROM), a USB memory, or any combination of the foregoing storage media. The computer-readable storage medium may be any combination of one or more computer-readable storage media.
Those skilled in the art will understand the specific implementation schemes of the above-mentioned ultrasonic scanning apparatus, electronic device, ultrasonic scanning system and storage medium by reading the above description about the ultrasonic scanning method, and for brevity, the description is omitted here.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the above illustrative embodiments are merely illustrative and are not intended to limit the scope of the present invention thereto. Various changes and modifications may be made therein by one of ordinary skill in the art without departing from the scope and spirit of the invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, e.g., the division of the elements is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple elements or components may be combined or integrated into another device, or some features may be omitted or not performed.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in order to streamline the invention and aid in understanding one or more of the various inventive aspects, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof in the description of exemplary embodiments of the invention. However, the method of the present invention should not be construed as reflecting the following intent: i.e., the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be combined in any combination, except combinations where the features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
Various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some of the modules in an ultrasound scanning apparatus according to embodiments of the present invention may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present invention can also be implemented as an apparatus program (e.g., a computer program and a computer program product) for performing a portion or all of the methods described herein. Such a program embodying the present invention may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not denote any order; these words may be interpreted as names.
The foregoing description is merely illustrative of specific embodiments of the present invention and the scope of the present invention is not limited thereto, and any person skilled in the art can easily think about variations or substitutions within the scope of the present invention. The protection scope of the invention is subject to the protection scope of the claims.

Claims (17)

CN202211692689.XA (filed 2022-12-28): Ultrasonic scanning method, device and system, electronic equipment and storage medium | status: Pending | publication: CN118252529A (en)

Priority Applications (2)

- CN202211692689.XA (priority/filing date 2022-12-28): Ultrasonic scanning method, device and system, electronic equipment and storage medium
- PCT/CN2023/142144 (filed 2023-12-26): Ultrasonic scanning method, apparatus and system, and electronic device and storage medium (WO2024140749A1)

Applications Claiming Priority (1)

- CN202211692689.XA (priority/filing date 2022-12-28): Ultrasonic scanning method, device and system, electronic equipment and storage medium

Publications (1)

- CN118252529A (published 2024-06-28)

Family ID: 91601070

Family Applications (1)

- CN202211692689.XA (priority/filing date 2022-12-28): Ultrasonic scanning method, device and system, electronic equipment and storage medium

Country Status (2)

- CN: CN118252529A
- WO: WO2024140749A1 (2024-07-04)

Families Citing this family (1)

* Cited by examiner, † Cited by third party

- CN119214691A* (priority 2024-12-05, published 2024-12-31), 上海冰座晶依科技有限公司: Control method, control device and electronic equipment for intracardiac ultrasonic catheter

Family Cites Families (10)

- CN104856720B* (priority 2015-05-07, published 2017-08-08): A kind of robot assisted ultrasonic scanning system based on RGB-D sensors
- US2020/0069285A1* (priority 2018-08-31, published 2020-03-05), General Electric Company: System and method for ultrasound navigation
- CN109480906A* (priority 2018-12-28, published 2019-03-19): Ultrasonic transducer navigation system and supersonic imaging apparatus
- US2020/0305837A1* (priority 2019-03-27, published 2020-10-01), General Electric Company: System and method for guided ultrasound imaging
- US11607200B2* (priority 2019-08-13, published 2023-03-21), GE Precision Healthcare LLC: Methods and system for camera-aided ultrasound scan setup and control
- CN110477956A* (priority 2019-09-27, published 2019-11-22): A kind of intelligent checking method of the robotic diagnostic system based on ultrasound image guidance
- CN112215843B* (priority 2019-12-31, published 2021-06-11): Ultrasonic intelligent imaging navigation method and device, ultrasonic equipment and storage medium
- CN111657997A* (priority 2020-06-23, published 2020-09-15): Ultrasonic auxiliary guiding method, device and storage medium
- CN111938700B* (priority 2020-08-21, published 2021-11-09): Ultrasonic probe guiding system and method based on real-time matching of human anatomy structure
- CN115089212A* (priority 2022-05-08, published 2022-09-23): A three-dimensional vision-guided robotic arm automatic neck ultrasound scanning method and system

Also Published As

- WO2024140749A1 (published 2024-07-04)


Legal Events

- PB01: Publication
- SE01: Entry into force of request for substantive examination
