
An ultrasonic simulation training method, device, storage medium and ultrasonic equipment

Info

Publication number
CN112331049A
CN112331049A (application CN202011220774.7A; granted publication CN112331049B)
Authority
CN
China
Prior art keywords
ultrasonic
ultrasonic probe
training
probe
ultrasound
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011220774.7A
Other languages
Chinese (zh)
Other versions
CN112331049B (English)
Inventor
莫若理
甘从贵
赵明昌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chison Medical Technologies Co., Ltd.
Original Assignee
Chison Medical Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chison Medical Technologies Co., Ltd.
Priority to CN202110696904.2A (published as CN113470495A)
Priority to CN202011220774.7A (granted as CN112331049B)
Publication of CN112331049A
Application granted
Publication of CN112331049B
Legal status: Active
Anticipated expiration

Abstract

The invention discloses an ultrasonic simulation training method and device, a storage medium, and an ultrasonic apparatus. The method comprises the following steps: scanning a detection object with an ultrasonic probe; determining the scanned part according to external input information or the spatial position information of the ultrasonic probe relative to the detection object; determining and displaying an ultrasonic image of the scanned part according to the scanned part and a training model; and, based on the type of the ultrasonic probe, generating a moving path of the probe according to the training model and guiding the probe to perform a moving scan along that path. By acquiring the spatial information between the probe and the scanned part of the detection object, the pre-trained training model can generate and display the ultrasonic image corresponding to the scanned part, so the user can intuitively see how the manipulation of the probe relates to the resulting ultrasonic image and can train more conveniently to acquire high-quality ultrasonic images.

Description

Ultrasonic simulation training method and device, storage medium and ultrasonic equipment
Technical Field
The invention relates to the technical field of medical imaging, in particular to an ultrasonic simulation training method, an ultrasonic simulation training device, a storage medium and ultrasonic equipment.
Background
Ultrasonic diagnosis applies ultrasonic detection technology to the human body: by measuring specific parameters, it characterizes physiological and tissue structures, detects disease, and provides diagnostic prompts. Ultrasonic diagnosis is highly operator-dependent, that is, operators must rely on professional ultrasound technique and knowledge of ultrasound images to acquire accurate examination results. Thorough training in ultrasound operation is therefore the basis for the clinical application of ultrasonic diagnostic techniques.
Current ultrasound training courses consist of two parts, classroom theory and clinical teaching. On the one hand, the gap between theoretical explanation and actual operation is often large, so trainees cannot intuitively grasp the key points of operating technique; on the other hand, clinical teaching is limited by the availability of patients and operating environments, cannot be conducted at scale, and does not allow trainees to perform ultrasound on patients directly, so the ultrasonic presentation of typical diseases is difficult to observe in existing clinical teaching. These shortcomings make medical ultrasound training far from ideal, and trainees consequently fail to master clinical ultrasound skills well.
Disclosure of Invention
In view of this, embodiments of the present invention provide an ultrasound simulation training method, an ultrasound simulation training device, a storage medium, and an ultrasound apparatus, so as to solve the technical problem that trainees cannot adequately master clinical ultrasound skills under existing ultrasound training.
The technical scheme provided by the invention is as follows:
the first aspect of the embodiments of the present invention provides an ultrasound simulation training method, including: scanning a detection object by adopting an ultrasonic probe, wherein the ultrasonic probe comprises a virtual ultrasonic probe or a real ultrasonic probe; determining a scanning part scanned by the ultrasonic probe according to external input information or the spatial position information of the ultrasonic probe relative to the detection object; determining an ultrasonic image of a scanned part according to the scanned part of the ultrasonic probe and a training model, displaying the ultrasonic image, and updating the training model according to an actual training condition; and generating a moving path of the ultrasonic probe according to the training model based on the type of the ultrasonic probe, and guiding the ultrasonic probe to perform moving scanning based on the moving path.
Optionally, the training model is pre-trained as follows: acquiring an ultrasonic image of a detection object and the relative spatial position information of the corresponding ultrasonic probe with respect to the detection object, and inputting the ultrasonic image into a first convolutional neural network for feature extraction to obtain feature image data; querying a three-dimensional data model according to the feature image data and the relative spatial position information, and judging whether existing feature image data are present at the corresponding position in the three-dimensional data model; when existing feature image data are present at the corresponding position, inputting the existing feature image data and the new feature image data into a second convolutional neural network for fusion to obtain fused feature image data; and updating the three-dimensional data model according to the fused feature image data to obtain the training model.
Optionally, the ultrasound simulation training method further includes: and when no existing characteristic image data exists at the corresponding position in the three-dimensional data model, updating the three-dimensional data model according to the characteristic image data and the relative spatial position information of the corresponding ultrasonic probe relative to the detection object to obtain the training model.
Optionally, the ultrasound simulation training method further includes: acquiring image data of the detection object by CT scanning or MRI scanning; matching the image data with the feature images in the training model according to a matching model, and judging whether any part of the detection object was missed during scanning; and issuing a missed-scan prompt when a missed scan exists.
Optionally, based on the type of the ultrasound probe, generating a moving path of the ultrasound probe according to the training model, and guiding the ultrasound probe to perform a mobile scanning based on the moving path, including: when the ultrasonic probe is a virtual ultrasonic probe, determining the spatial position of a target tangent plane according to the target tangent plane and a training model; and generating a moving path of the ultrasonic probe according to the current position information of the ultrasonic probe and the space position of the target tangent plane, and guiding the ultrasonic probe to perform moving scanning based on the moving path.
Optionally, based on the type of the ultrasound probe, a moving path of the ultrasound probe is generated according to the training model, and the ultrasound probe is guided to perform a mobile scanning based on the moving path, further including: when the ultrasonic probe is a real ultrasonic probe, inputting a current ultrasonic image of a scanned part scanned by the ultrasonic probe into a first convolution neural network for processing to obtain a current ultrasonic characteristic image; inputting the current ultrasonic characteristic image into a third convolutional neural network for simplification to obtain a current ultrasonic simplified characteristic image; determining the existing ultrasonic image at the corresponding spatial position in the training model according to the spatial position information of the ultrasonic probe relative to the scanned part and the training model; inputting the existing ultrasonic image into a third convolutional neural network for simplification to obtain an existing ultrasonic simplified image; fully connecting and processing the current ultrasonic simplified feature image and the existing ultrasonic simplified image to obtain a position difference value; and generating a moving path of the ultrasonic probe according to the position difference and the target tangent plane, and guiding the ultrasonic probe to perform moving scanning based on the moving path.
Optionally, the ultrasound simulation training method further includes: obtaining a standard tangent plane corresponding to the scanned part according to the training model at least based on the ultrasonic image of the scanned part of the detection object, and performing quality evaluation on the ultrasonic image based on the standard tangent plane; and/or evaluating an actual movement path of the ultrasound probe based on generating the movement path of the ultrasound probe.
A second aspect of the embodiments of the present invention provides an ultrasound simulation training apparatus, including: a scanning module for scanning a detection object with an ultrasonic probe, the ultrasonic probe comprising a virtual ultrasonic probe or a real ultrasonic probe; a part determining module for determining the scanned part scanned by the ultrasonic probe according to external input information or the spatial position information of the ultrasonic probe relative to the detection object; an image determining module for determining an ultrasonic image of the scanned part according to the scanned part of the ultrasonic probe and the training model, displaying the ultrasonic image, and updating the training model according to the actual training situation; and a path determining module for generating a moving path of the ultrasonic probe according to the training model based on the type of the ultrasonic probe, and guiding the ultrasonic probe to perform a moving scan based on the moving path.
A third aspect of embodiments of the present invention provides a computer-readable storage medium storing computer instructions for causing a computer to perform the method for ultrasound simulation training according to any one of the first aspect and the first aspect of embodiments of the present invention.
A fourth aspect of the embodiments of the present invention provides an ultrasound apparatus, including: a memory and a processor, the memory and the processor being communicatively coupled, the memory storing computer instructions, and the processor executing the computer instructions to perform the method of ultrasound simulation training according to any of the first aspect and the first aspect of the embodiments of the present invention.
The technical scheme provided by the invention has the following effects:
according to the ultrasonic simulation training method, the ultrasonic simulation training device, the storage medium and the ultrasonic equipment provided by the embodiment of the invention, the spatial information between the probe and the scanned part of the detection object is acquired, so that the ultrasonic image corresponding to the scanned part can be generated and displayed by the pre-trained training model, a user can intuitively know the association between the operation of the probe and the ultrasonic image, and the user can more conveniently train to acquire a high-quality ultrasonic image. Meanwhile, the ultrasonic simulation training method can be used for training by adopting a real probe and can also be used for training by adopting a virtual probe, so that different training requirements are met. In addition, the training model can be updated in real time according to the actual training situation, and the use experience of the user is further improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart of an ultrasound simulation training method according to an embodiment of the present invention;
FIG. 2 is a flow chart of an ultrasound simulation training method according to another embodiment of the present invention;
FIG. 3 is a flow chart of generating a movement path according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a first convolutional neural network according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a third convolutional neural network according to an embodiment of the present invention;
FIG. 6 is a guidance diagram for generating a movement path on a display according to an embodiment of the invention;
FIG. 7 is a flow chart of an ultrasound simulation training method according to another embodiment of the present invention;
FIG. 8 is a schematic diagram of a second convolutional neural network according to an embodiment of the present invention;
FIG. 9 is a flow chart of an ultrasound simulation training method according to another embodiment of the present invention;
FIG. 10 is a block diagram of an ultrasound simulation training apparatus according to an embodiment of the present invention;
FIG. 11 is a schematic structural diagram of a computer-readable storage medium provided in accordance with an embodiment of the present invention;
fig. 12 is a schematic structural diagram of an ultrasound apparatus provided according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention provides an ultrasonic simulation training method, which comprises the following steps as shown in figure 1:
S100: scanning a detection object with an ultrasonic probe, wherein the ultrasonic probe comprises a virtual ultrasonic probe or a real ultrasonic probe. Specifically, when the detection object is a prosthesis, it can be scanned with either a virtual or a real ultrasonic probe. When the detection object is a human body, a real animal, a part of a real animal (such as a particular tissue or organ), a phantom of an organ or tissue, or a joint phantom of multiple tissues or organs, a real ultrasonic probe may be employed. For example, a phantom with the physical characteristics of a pregnant woman can be used for gynecological and obstetric ultrasound, and a phantom of a normal adult male can be used for ultrasound of superficial organs; in these cases a real ultrasonic probe can be used, or a virtual ultrasonic probe if only training simulation is performed. Meanwhile, different parts can be matched with different types of ultrasonic probes, such as a linear array probe, a convex array probe, a phased array probe, or an area array probe.
S200: and determining the scanning part scanned by the ultrasonic probe according to external input information or the spatial position information of the ultrasonic probe relative to the detection object. Specifically, spatial position information of the ultrasound probe relative to the detection object may be acquired first, and the scanned part scanned by the ultrasound probe may be determined according to the spatial position information.
In an embodiment, the position and/or angle of the ultrasound probe relative to the test object may be identified using one or more sensors; employing multiple sensors can improve the accuracy of the calculation or allow more position and angle information to be measured. The sensors may be moving or stationary, and the sensor types include one or any combination of visual, position, pressure, infrared, speed, acceleration, and magnetic sensors. A sensor can be arranged anywhere on the phantom as the application requires, for example inside the phantom; or separate from the phantom, on a component connected to it; or attached to the phantom through a remote connection.
In an embodiment, a camera may be disposed outside the ultrasound probe for acquiring relative spatial position information of the ultrasound probe with respect to the object to be detected, and the camera may be a three-dimensional camera. The three-dimensional camera acquires the spatial position information of the ultrasonic probe and the spatial position information of the object to be detected, so that the relative spatial position information of the ultrasonic probe relative to the object to be detected is obtained.
In one embodiment, an inertial measurement unit (IMU) is provided in the ultrasound probe and can acquire its real-time spatial position, for example real-time coordinates along the X, Y, and Z axes. Combined with the spatial position of the object to be detected acquired by the camera, the relative spatial position of the ultrasonic probe with respect to the object can then be obtained. Alternatively, the relative spatial position of the ultrasonic probe with respect to the scanned part can be determined by combining a magnetic sensor with the camera.
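As a concrete illustration of this step, the following minimal sketch composes hypothetical IMU and camera readings into a probe pose expressed in the detection object's coordinate frame. The function name, argument layout, and use of 3 × 3 rotation matrices are assumptions for illustration, not details taken from the patent.

```python
import numpy as np

def relative_probe_pose(probe_pos, probe_rot, object_pos, object_rot):
    """Express the probe pose in the detection object's coordinate frame.

    probe_pos / object_pos: 3-vectors in a shared world frame (e.g. fused
    from an IMU and a 3D camera); probe_rot / object_rot: 3x3 rotation
    matrices giving each body's orientation in that frame.
    """
    rel_pos = object_rot.T @ (probe_pos - object_pos)  # position in the object frame
    rel_rot = object_rot.T @ probe_rot                 # orientation in the object frame
    return rel_pos, rel_rot
```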
In an embodiment, at least one infrared emitter can be arranged at each of the four corners of the ultrasonic probe housing to emit infrared light in all directions, while infrared sensors arranged on and around the phantom receive the emitted light. The relative spatial position of the ultrasonic probe with respect to the object to be detected can then be derived from the received infrared light.
In an embodiment, a flexible touch screen or flexible touch layer may be disposed on the phantom, with a pressure sensor on it, so as to identify the position of the ultrasound probe relative to the touch screen or touch layer and the pressure exerted on the detection object, thereby determining the relative spatial position between the ultrasound probe and the detection object.
S300: and determining an ultrasonic image of the scanned part according to the scanned part of the ultrasonic probe and the training model, displaying the ultrasonic image, and updating the training model according to the actual training condition.
In an embodiment, the training model may be a pre-trained deep learning network model. To train it, a real ultrasonic probe can be used to scan the detection object along a preset direction, obtaining an ultrasonic image of each section of the detection object. The tissue scanned by the probe may be the heart, kidney, liver, blood vessels, gallbladder, uterus, breast, fetus, thyroid, and so on. While each ultrasonic image is acquired, the relative spatial position of the probe with respect to the detection object is determined at the same time; this position can be obtained through a magnetic field generator or magnetic locator, or through a camera.
Meanwhile, in the training process, the ultrasonic image of each section and the corresponding relative spatial position information need to be acquired, and the acquired related information is input into the deep learning network for training, so that the required training model can be obtained.
The training model can also be updated according to the actual training situation. For example, for vascular puncture and inner-diameter measurement, ultrasonic data of various vessel types, collected from people of different ages and genders, can be acquired to build a vascular training model, which facilitates vascular training.
In one embodiment, the ultrasound image comprises at least one of a pure ultrasound image, an ultrasound video, an organ model, measurement information, diagnosis information, organ information, and attributes of the object to be detected. The attribute information of the object to be detected may be attribute information of a real subject or of a phantom used in medical simulation, for example sex (female or male), age group (elderly or child), height, or weight. After the ultrasound image is acquired, it may also be displayed. The display may show one of a two-dimensional ultrasound image, a three-dimensional ultrasound image, a two-dimensional ultrasound video, a three-dimensional ultrasound video, and an organ model; for a more intuitive display, the position of the probe relative to the organ model, the position of the ultrasound image relative to the organ model, and the time-sequence information of the ultrasound image within the ultrasound video can also be shown.
S400: and based on the type of the ultrasonic probe, generating a moving path of the ultrasonic probe according to the training model, and guiding the ultrasonic probe to perform moving scanning based on the moving path.
In one embodiment, according to the training model, guiding the ultrasound probe to perform a mobile scan based on a movement path includes:
when the ultrasonic probe is a virtual ultrasonic probe, determining the spatial position of a target section according to the target section and the training model; and generating a moving path of the ultrasonic probe according to the current position information of the ultrasonic probe and the spatial position of the target section, and guiding the ultrasonic probe to perform a moving scan based on the moving path. Specifically, the target section may be a standard section recommended intelligently, or a target section of a site input by medical staff. After the target section is determined, its spatial position can be determined from the training model, and the moving path can then be determined from the current position of the ultrasonic probe and the spatial position of the section.
In an embodiment, as shown in fig. 2 and 3, based on the type of the ultrasound probe, a moving path of the ultrasound probe is generated according to the training model, and the ultrasound probe is guided to perform a moving scan based on the moving path, further including:
S401: when the ultrasonic probe is a real ultrasonic probe, inputting the current ultrasonic image of the scanned part into a first convolutional neural network for processing to obtain a current ultrasonic feature image. In an embodiment, as shown in fig. 4, the input ultrasound image first passes through two-layer convolution + pooling modules of the first convolutional neural network, where the convolution kernel size is 3 × 3 with stride 1, the number of convolution kernels increases in multiples of 32, and the pooling layers use 2 × 2 kernels with stride 2. The result is then fed, through two further convolution layers (3 × 3 kernels, stride 1), into bilinear interpolation and convolution modules; the number of bilinear interpolation and convolution modules and of two-layer convolution + pooling modules can be increased or decreased according to the training and test results. The two-layer convolutions connect the bilinear interpolation and convolution modules with the convolution + pooling modules and serve to enhance feature extraction. Each bilinear interpolation and convolution module outputs a feature-enhanced image, and a ReLU activation follows each convolution to mitigate the vanishing-gradient problem. A convolution layer with 1 × 1 kernels follows each pooling layer; its purpose is to fuse the extracted features and add nonlinearity, increasing the network's fitting capacity, and its output is added to the corresponding upsampled features as the input of the next upsampling stage, improving the network's discriminative ability. In the final bilinear interpolation and convolution module, the output channels are convolved so that the extracted feature image data have the same size as the input ultrasonic image.
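To make the encoder-decoder structure just described concrete, here is a minimal PyTorch sketch with a single convolution + pooling stage and a single bilinear interpolation + convolution stage. The class names, the use of max pooling, and the single-stage depth are assumptions; the patent allows the module count to vary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoConvs(nn.Module):
    """Two 3x3 convolutions (stride 1), each followed by ReLU."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride=1, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)

class FirstCNN(nn.Module):
    """Feature extractor: conv+pool going down, bilinear interpolation +
    convolution going up, with the 1x1-convolved pooled features added as
    input to the upsampled stage. Output has the same spatial size as the
    input (assumed to have even height and width)."""
    def __init__(self, in_ch=1, base=32):
        super().__init__()
        self.enc1 = TwoConvs(in_ch, base)          # 32 channels
        self.pool = nn.MaxPool2d(2, stride=2)      # 2x2 pooling, stride 2
        self.enc2 = TwoConvs(base, base * 2)       # 64 channels
        self.skip = nn.Conv2d(base, base * 2, 1)   # 1x1 conv: fuse features, add nonlinearity
        self.dec = TwoConvs(base * 2, base * 2)
        self.out = nn.Conv2d(base * 2, in_ch, 1)   # back to an input-sized feature image

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        up = F.interpolate(e2, scale_factor=2, mode="bilinear", align_corners=False)
        return self.out(self.dec(up + self.skip(e1)))  # skip added before the upsampled stage
```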
S402: inputting the current ultrasonic feature image into a third convolutional neural network for simplification to obtain a current simplified ultrasonic feature image. Specifically, the third convolutional neural network processes the ultrasonic feature image and simplifies the feature distribution of the input. As shown in fig. 5, the network applies three 3 × 3 convolution kernels with 'SAME' padding to the input feature image, pruning redundant features from the feature image data.
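A sketch of this simplification network follows; the ReLU activations between the three convolutions are an assumption, since the patent specifies only three 3 × 3 'SAME' convolutions.

```python
import torch.nn as nn

class ThirdCNN(nn.Module):
    """Simplifies a feature image with three 3x3 'SAME'-padded convolutions,
    pruning redundant features while preserving spatial size."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)
```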
S403: determining the existing ultrasonic image at the corresponding spatial position in the training model according to the spatial position information of the ultrasonic probe relative to the scanned part and the training model; specifically, the existing ultrasound image at the corresponding spatial position in the training model can be determined by querying in the training model according to the spatial position information of the ultrasound probe relative to the scanned part.
S404: inputting the existing ultrasonic image into a third convolution neural network for simplification to obtain an existing ultrasonic simplified image; specifically, the existing ultrasound image can also be simplified by the third convolutional neural network, so as to obtain the existing ultrasound simplified image.
S405: fully connecting and processing the current simplified ultrasonic feature image and the existing simplified ultrasonic image to obtain a position difference. Specifically, the two simplified images are passed through fully connected layers, which regress the difference M between the spatial position of the ultrasonic probe on the scanned part of the detection object and the corresponding spatial position in the training model; that is, M is the difference between the probe's pose on the detection object (x1, y1, z1, ax1, ay1, az1) and its pose in the training model (x2, y2, z2, ax2, ay2, az2).
S406: generating a moving path of the ultrasonic probe according to the position difference and the target section, and guiding the ultrasonic probe to perform a moving scan based on the moving path. Specifically, after the target section is determined, its spatial position (x3, y3, z3, ax3, ay3, az3) in the training model can be looked up, and the moving path (ΔX, ΔY, ΔZ, ΔAX, ΔAY, ΔAZ) of the ultrasonic probe can be computed from the difference M and that target pose: the probe first steps by M and then by (x3 − x2, y3 − y2, z3 − z2, ax3 − ax2, ay3 − ay2, az3 − az2).
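The arithmetic of S405-S406 reduces to vector subtraction over six-dimensional poses (x, y, z, ax, ay, az). A minimal sketch, with hypothetical function and argument names:

```python
import numpy as np

def movement_path(current_pose, model_pose, target_pose):
    """Moving path (dX, dY, dZ, dAX, dAY, dAZ) of the ultrasound probe.

    current_pose: probe pose on the real object        (x1, y1, z1, ax1, ay1, az1)
    model_pose:   matched pose in the training model   (x2, ..., az2)
    target_pose:  pose of the target section in model  (x3, ..., az3)
    """
    current = np.asarray(current_pose, dtype=float)
    model = np.asarray(model_pose, dtype=float)
    target = np.asarray(target_pose, dtype=float)
    m = model - current     # difference M between the real pose and the model pose
    step = target - model   # remaining step inside the model
    return m + step         # step by M, then by (x3-x2, ..., az3-az2); equals target - current
```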
In an embodiment, when the target position is the standard section of the currently scanned part, it can be recommended intelligently according to the part the ultrasonic probe is currently scanning, and the acquired ultrasound image can be matched against that standard section using the training model to generate the path along which the probe should move to reach it. When the target position is a standard section of the scanned part input by medical staff, the moving path can likewise be generated from the currently acquired ultrasound image and the target position.
In one embodiment, when the target position is input by medical staff, it can be entered through an interactive device such as a keyboard, mouse, voice sensor, light sensor, or touch screen; alternatively, the medical staff may select a site from the displayed sites, or speak the target by voice, for example "scan the biparietal diameter of the fetus". Optionally, after a scanned part of the detection object has been scanned with the ultrasonic probe, the m ultrasound image sections stored for that part are displayed, m being a positive integer; the medical staff selects the ultrasound image section of the required target organ or tissue from the m sections, and the selected section is taken as the target position. In practice, the medical staff may also input the target position by voice after scanning, for example saying "scan the cross section of the blood vessel" when scanning a blood vessel.
In an embodiment, when the target position is recommended intelligently, the position that medical staff most frequently scan for the determined part can be identified from big data and taken as the target position. In practice there may be at least two candidate target positions; the first candidate along the probe's direction of movement can then be chosen according to the probe's moving path. For example, when scanning the kidney, medical staff usually operate at five positions A, B, C, D, and E; if the probe is currently between A and B and moving toward B, position B can be taken as the target position. Alternatively, the candidate closest to the current scanning position can be chosen, as sketched below.
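The selection rule in this paragraph, prefer the first candidate ahead of the probe's movement and otherwise take the nearest, can be sketched as follows; the function name and the dot-product direction test are assumptions.

```python
import numpy as np

def pick_target_position(current, previous, candidates):
    """Choose a target among candidate scan positions (e.g. A..E for the kidney).

    current/previous: recent probe positions (3-vectors) defining the
    movement direction; candidates: list of candidate positions."""
    current = np.asarray(current, dtype=float)
    direction = current - np.asarray(previous, dtype=float)
    cands = [np.asarray(c, dtype=float) for c in candidates]
    ahead = [c for c in cands if np.dot(c - current, direction) > 0]
    pool = ahead or cands                                         # fall back to all candidates
    return min(pool, key=lambda c: np.linalg.norm(c - current))   # nearest one wins
```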
After the target position is determined, the moving path of the ultrasonic probe can be generated. Wherein the movement path comprises a movement in position and/or angle. For example, the moving path is 30 degrees of clockwise deflection or 30 degrees of counterclockwise deflection of the ultrasonic probe; a translation of 1cm to the left or 1cm to the right, etc.
In some embodiments, guiding the ultrasound probe based on the moving path includes guiding it through visual, auditory, or force feedback. Visual guidance may use one or more of image guidance, video guidance, symbol guidance, text guidance, light guidance, and projection guidance. Auditory guidance may use voice: for example, when the user's current positioning is correct and the probe will reach the target position, or while the training model is guiding the user toward it, the system can prompt the user in various ways, such as emitting a beep. The user may also be guided by tactile means, such as one or more of haptic guidance, vibration guidance, and traction guidance.
The ultrasonic simulation method provided by the embodiment of the invention can guide the user to find the standard tangent plane through different visual, auditory and tactile modes, so that the user can obtain training in the guided process, the guide mode can be selected according to the actual application condition, the experience of the user is further improved, and the training effect of the user is also improved.
In one embodiment, to improve the efficiency of virtual training, the moving path, the target position, and the ultrasound probe may also be displayed in real time. As shown in fig. 6, the scanning guide area 1000 displayed on the display includes at least a first guide area 1600 and a second guide area 1700. The first guide area 1600 displays at least the position and angle information of the current ultrasound probe, the position and angle information of the probe corresponding to the standard section, and operation prompt information. The operation prompt information includes at least the translation distance and the rotation angle, and may also include the pressure with which the ultrasonic probe is pressed down. The second guide area includes the object to be detected 1100, the target scanned object 1500 highlighted on the object to be detected 1100, the current ultrasound probe 1200, the moving path 1400, and the target virtual probe 1300; the highlighting may cover the entire target scanned object 1500 or only its outline. The current ultrasound probe 1200 moves according to its real-time position, and the target virtual probe 1300 marks the position the probe must reach to obtain the standard section.
According to the ultrasonic simulation training method provided by the embodiment of the invention, the spatial information between the probe and the scanned part of the detection object is acquired, so that the ultrasonic image corresponding to the scanned part can be generated and displayed by the pre-trained training model, a user can intuitively know the association between the operation of the probe and the ultrasonic image, and the user can more conveniently train to acquire a high-quality ultrasonic image. Meanwhile, the ultrasonic simulation training method can be used for training by adopting a real probe and can also be used for training by adopting a virtual probe, so that different training requirements are met. In addition, the training model can be updated in real time according to the actual training situation, and the use experience of the user is further improved.
In an embodiment, the training model is obtained by pre-training according to the following method, as shown in fig. 7, which specifically includes the following steps:
S101: acquiring an ultrasonic image of a detection object and the relative spatial position information of the corresponding ultrasonic probe with respect to the detection object, and inputting the ultrasonic image into a first convolutional neural network for feature extraction to obtain feature image data. Specifically, before training, a real ultrasound probe may be used to scan the detection object along a preset direction, acquiring ultrasonic images of the detection object together with the corresponding relative spatial position information. The first convolutional neural network may be the same one employed in S401.
S102: querying the three-dimensional data model according to the feature image data and the relative spatial position information of the corresponding ultrasonic probe with respect to the detection object, and judging whether existing feature image data are present at the corresponding position in the three-dimensional data model. The three-dimensional data model may be a data model built from existing ultrasonic images and the corresponding relative spatial position information of the probe; however, its data sources may be few, or the data may be inaccurate, so that it cannot by itself meet the requirements of the training model. Therefore, after the feature image data are obtained, a lookup can be performed in the three-dimensional data model to judge whether existing feature image data are present at the corresponding position.
S103: and when existing characteristic image data exist at corresponding positions in the three-dimensional data model, inputting the existing characteristic image data and the characteristic image data into a second convolution neural network for fusion to obtain fused characteristic image data.
Specifically, as shown in fig. 8, the second convolutional neural network has two input loops: a current feature image data loop (upper loop), fed with the feature image data processed by the first convolutional neural network, and an existing feature image data loop (lower loop), fed with the existing feature image data obtained by querying the three-dimensional data model at the probe's corresponding spatial position.
As shown in fig. 8, the second convolutional neural network copies and fuses the two paths of feature image data after the first convolution, forming a fused data-processing loop as the middle layer. The three loops are processed in the same way, that is, with the same structure as the first convolutional neural network. They differ in their inputs: the current-feature loop processes the current feature image data output by the first convolutional neural network, the existing-feature loop processes the existing feature image data from the three-dimensional data model, and the middle loop fuses the two; finally, the model assembles the fused image from the extracted features using bilinear interpolation and convolution layers. This multi-loop form strengthens feature extraction: multi-scale features are fused and injected into the middle loop at each resolution, ultimately forming a comprehensive multi-scale fused feature image.
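A minimal PyTorch sketch of this three-loop fusion, with one shared scale; the loop depth, channel count, and pooling choice are assumptions, and the loops reuse the two-layer 3 × 3 convolution structure of the first network:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def two_convs(ch):
    # Two-layer 3x3 convolution block, as in the first network's loops
    return nn.Sequential(
        nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class SecondCNN(nn.Module):
    """Current-feature loop, existing-feature loop, and a middle loop formed
    by fusing the two after the first convolution; multi-scale features are
    re-injected into the middle loop, and a bilinear interpolation +
    convolution stage emits the fused feature image."""
    def __init__(self, ch=32):
        super().__init__()
        self.in_cur = nn.Conv2d(1, ch, 3, padding=1)  # first convolution, current loop
        self.in_old = nn.Conv2d(1, ch, 3, padding=1)  # first convolution, existing loop
        self.cur = two_convs(ch)
        self.old = two_convs(ch)
        self.mid = two_convs(ch)
        self.pool = nn.MaxPool2d(2, stride=2)
        self.out = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, cur_img, old_img):
        c = F.relu(self.in_cur(cur_img))
        o = F.relu(self.in_old(old_img))
        m = c + o                                      # fuse after the first convolution
        c2, o2 = self.cur(self.pool(c)), self.old(self.pool(o))
        m2 = self.mid(self.pool(m)) + c2 + o2          # inject both loops into the middle loop
        up = F.interpolate(m2, scale_factor=2, mode="bilinear", align_corners=False)
        return self.out(up)                            # fused feature image, input-sized
```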
S104: and updating the three-dimensional data model according to the fusion characteristic image data to obtain a training model. Specifically, the obtained fusion feature image data may replace the existing feature image data at the original position in the three-dimensional data model, so as to update the three-dimensional data model.
S105: when no existing feature image data are present at the corresponding position in the three-dimensional data model, updating the three-dimensional data model according to the feature image data and the corresponding relative spatial position information of the ultrasonic probe with respect to the detection object to obtain the training model. That is, when no existing data are present at the corresponding position, the obtained feature image and the corresponding position information are stored into the three-dimensional data model to obtain the training model.
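Steps S102-S105 together amount to an insert-or-fuse update of the three-dimensional data model. A sketch, assuming the model is indexed by a quantized probe pose (the keying scheme is an assumption):

```python
import numpy as np

def update_model(model, pose, feat, fuse, resolution=1.0):
    """Insert or fuse one feature image into the 3D data model.

    model: dict mapping a quantized probe pose to feature image data
    pose:  relative pose of the probe for this acquisition (6-vector)
    feat:  feature image data extracted by the first network
    fuse:  callable (existing, new) -> fused data, e.g. the second network
    """
    key = tuple(np.round(np.asarray(pose, dtype=float) / resolution).astype(int))
    if key in model:
        model[key] = fuse(model[key], feat)  # existing data present: fuse (S103-S104)
    else:
        model[key] = feat                    # no existing data: store directly (S105)
    return model
```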
According to the ultrasonic simulation training method provided by the embodiment of the invention, on the basis of the existing three-dimensional data model, the three-dimensional data model is updated through the first convolutional neural network and the second convolutional neural network to obtain the required training model, so that the accuracy of the training model can be improved, and meanwhile, the data source of the training model is expanded, so that the training model can meet the requirements of various types of training, and the use experience of a user is improved.
In one embodiment, since the training model is trained on data scanned by the ultrasonic probe, some regions may have been missed, so the completeness of the training model can be checked with a matching model. As shown in fig. 9, this can be realized by the following steps:
S201: acquiring image data of the detection object by CT scanning or MRI scanning. Specifically, a three-dimensional contour model may be constructed by acquiring image data of the entire detection object, together with the corresponding position information, using Computed Tomography (CT) or Magnetic Resonance Imaging (MRI).
S202: matching the image data with the feature images in the training model according to the matching model, and judging whether any part of the detection object was missed. Specifically, the matching model can be trained as follows: input the feature images into a first three-dimensional CNN to obtain a three-dimensional ultrasound model; input the image data into a second three-dimensional CNN to obtain a three-dimensional contour model; input both models into a three-dimensional GCN to obtain a transformation matrix; and apply the transformation matrix to the three-dimensional ultrasound model to obtain the matching model. During actual matching, the image data and the feature images in the training model are input into the matching model, and any missed scan can be detected.
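Once the two volumes are registered by the transformation matrix, the missed-scan check itself can be as simple as a coverage test. A naive sketch, assuming registered boolean occupancy volumes; the threshold and the voxel representation are assumptions, not the patent's specified method.

```python
import numpy as np

def missed_scan(contour_voxels, scanned_voxels, threshold=0.95):
    """Return (missed?, coverage) given two registered boolean 3D arrays:
    contour_voxels from CT/MRI, scanned_voxels from the ultrasound model."""
    total = int(contour_voxels.sum())
    covered = int(np.logical_and(contour_voxels, scanned_voxels).sum())
    coverage = covered / total if total else 1.0
    return coverage < threshold, coverage
```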
S203: and when the scanning omission exists, sending out a scanning omission prompt. Wherein, the missing scanning prompt is one or more of a voice prompt, a vibration prompt or an indicator light.
According to the ultrasonic simulation training method provided by the embodiment of the invention, path planning is performed separately for each type of ultrasonic probe: whether a virtual or a real probe is used, the user can determine a moving path with the method above and be guided to perform a moving scan along it, meeting the needs of different users.
In one embodiment, the ultrasound simulation training method further comprises: obtaining a standard tangent plane corresponding to the scanned part according to the training model at least based on the ultrasonic image of the scanned part of the detected object, and performing quality evaluation on the ultrasonic image based on the standard tangent plane; and/or evaluating an actual movement path of the ultrasound probe based on the generated movement path of the ultrasound probe.
Specifically, when training with a real ultrasonic probe, the current ultrasonic image of the scanned part can be acquired and compared with the standard-section image of that part stored in the training model, so that the quality of the ultrasonic image can be evaluated and the user's actual operation corrected. Meanwhile, the user's actual moving path can be evaluated against the generated moving path, assessing the user's operating ability.
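The patent does not fix a comparison metric for this quality assessment; as one illustrative choice, normalized cross-correlation between the user's image and the standard-section image could serve as a score:

```python
import numpy as np

def quality_score(current, standard):
    """Normalized cross-correlation between two equal-shape 2D float arrays;
    1.0 means identical up to brightness/contrast. The metric is an
    assumption, not the patent's specified method."""
    c = (current - current.mean()) / (current.std() + 1e-8)
    s = (standard - standard.mean()) / (standard.std() + 1e-8)
    return float((c * s).mean())
```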
In one embodiment, the training model may also be updated based on the quality assessment value and/or the assessment of the probe's actual moving path. In this way new models are continually generated, the difficulty of the training simulation is raised, or the user's technique (such as probe position, angle, and force) is corrected. For example, when an assessment shows the user's ability has improved, the user can be given progressively harder training tasks, such as moving from scanning an arm vessel to scanning the carotid artery, or from scanning the vessels of a lean subject to those of an obese one.
In some embodiments, a new three-dimensional data model may be generated from human-computer interaction information. For example, a user first scans a part or tissue of the object to be detected, such as a blood vessel on an arm, with the ultrasonic probe, obtains an ultrasound image of the vessel, and performs a measurement on that image. If the current ultrasound image or the vessel measurement does not meet clinical requirements, the user moves the probe to generate a new ultrasound image. From the user's measurement and movement operations, the training model generates a related new three-dimensional data model, which is used to raise the training difficulty, correct erroneous operations, and adjust the user's technique so as to improve the training effect.
The ultrasonic simulation training method provided by the embodiment of the invention can evaluate the capability of the user by acquiring the ultrasonic image of the current training of the user, so that the training content is adjusted according to the actual condition of the user, and the method is more targeted and improves the training effect.
An embodiment of the present invention further discloses an ultrasound simulation training apparatus, as shown in fig. 10, the apparatus includes:
the scanning module 10 is configured to scan a detection object by using an ultrasonic probe, where the ultrasonic probe includes a virtual ultrasonic probe or a real ultrasonic probe; for details, refer to the related description of step S100 in the above method embodiment.
The part determining module 20 is configured to determine a scanning part scanned by the ultrasonic probe according to external input information or spatial position information of the ultrasonic probe relative to the detection object; for details, refer to the related description of step S200 in the above method embodiment.
The image determining module 30 is used for determining an ultrasonic image of the scanned part according to the scanned part of the ultrasonic probe and the training model, displaying the ultrasonic image, and updating the training model according to the actual training situation; for details, refer to the related description of step S300 in the above method embodiment.
And the path determining module 40 is used for generating a moving path of the ultrasonic probe according to the training model based on the type of the ultrasonic probe and guiding the ultrasonic probe to perform moving scanning based on the moving path. For details, refer to the related description of step S400 in the above method embodiment.
According to the ultrasonic simulation training device provided by the embodiment of the invention, the spatial information between the probe and the scanned part of the detection object is acquired, so that the ultrasonic image corresponding to the scanned part can be generated and displayed by the pre-trained training model, a user can intuitively know the association between the operation of the probe and the ultrasonic image, and the user can more conveniently train to acquire a high-quality ultrasonic image. Meanwhile, the ultrasonic simulation training device can be used for training by adopting a real probe and can also be used for training by adopting a virtual probe, so that different training requirements are met. In addition, the training model can be updated in real time according to the actual training situation, and the use experience of the user is further improved.
The function of the ultrasound simulation training device provided by the embodiment of the invention is described in detail with reference to the ultrasound simulation training method in the above embodiment.
An embodiment of the present invention further provides a storage medium, as shown in fig. 11, on which a computer program 601 is stored; when executed by a processor, its instructions implement the steps of the ultrasound simulation training method of the foregoing embodiments. The storage medium also stores audio and video stream data, feature frame data, interaction request signaling, encrypted data, preset data sizes, and the like. The storage medium may be a magnetic disk, an optical disc, a Read-Only Memory (ROM), a Random Access Memory (RAM), a Flash Memory, a Hard Disk Drive (HDD), a Solid-State Drive (SSD), or the like; the storage medium may also comprise a combination of such memories.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed, can include the processes of the method embodiments described above. The storage medium may be a magnetic disk, an optical disc, a Read-Only Memory (ROM), a Random Access Memory (RAM), a Flash Memory, a Hard Disk Drive (HDD), a Solid-State Drive (SSD), or the like; the storage medium may also comprise a combination of such memories.
An ultrasound apparatus is further provided in an embodiment of the present invention. As shown in fig. 12, the ultrasound apparatus may include a processor 51 and a memory 52, which may be connected by a bus or in another manner; fig. 12 illustrates connection by a bus.
The processor 51 may be a Central Processing Unit (CPU). The processor 51 may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or a combination thereof.
The memory 52, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the embodiments of the present invention. The processor 51 implements the ultrasound simulation training method of the above method embodiment by executing the non-transitory software programs, instructions, and modules stored in the memory 52, thereby performing the various functional applications and data processing of the processor.
The memory 52 may include a program storage area and a data storage area: the program storage area may store an operating system and the application programs required by at least one function, and the data storage area may store data created by the processor 51, and the like. Further, the memory 52 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 52 may optionally include memory located remotely from the processor 51, connected to the processor 51 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 52 and, when executed by the processor 51, perform the ultrasound simulation training method of the embodiments shown in figs. 1-9.
The details of the ultrasonic device can be understood by referring to the corresponding descriptions and effects in the embodiments shown in fig. 1 to 9, and are not described herein again.
Although the embodiments of the present invention have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope defined by the appended claims.

Claims (10)

(Translated from Chinese)

1. An ultrasonic simulation training method, characterized by comprising:
scanning a detection object with an ultrasonic probe, the ultrasonic probe comprising a virtual ultrasonic probe or a real ultrasonic probe;
determining a scanned site of the ultrasonic probe according to external input information or spatial position information of the ultrasonic probe relative to the detection object;
determining an ultrasonic image of the scanned site according to the scanned site of the ultrasonic probe and a training model, and displaying the ultrasonic image, the training model being updated according to actual training conditions;
based on the type of the ultrasonic probe, generating a movement path of the ultrasonic probe according to the training model, and guiding the ultrasonic probe to perform a moving scan based on the movement path.

2. The ultrasonic simulation training method according to claim 1, characterized in that the training model is obtained by pre-training as follows:
acquiring an ultrasonic image of the detection object and relative spatial position information of the corresponding ultrasonic probe with respect to the detection object, and inputting the ultrasonic image into a first convolutional neural network for feature extraction to obtain feature image data;
querying a three-dimensional data model according to the feature image data and the relative spatial position information of the corresponding ultrasonic probe with respect to the detection object, and determining whether existing feature image data is present at the corresponding position in the three-dimensional data model;
when existing feature image data is present at the corresponding position in the three-dimensional data model, inputting the existing feature image data and the feature image data into a second convolutional neural network for fusion to obtain fused feature image data;
updating the three-dimensional data model according to the fused feature image data to obtain the training model.

3. The ultrasonic simulation training method according to claim 2, characterized by further comprising:
when no existing feature image data is present at the corresponding position in the three-dimensional data model, updating the three-dimensional data model according to the feature image data and the relative spatial position information of the corresponding ultrasonic probe with respect to the detection object, to obtain the training model.

4. The ultrasonic simulation training method according to claim 2, characterized by further comprising:
acquiring image data of the detection object from a CT scan or an MRI scan;
matching the image data against the feature image data in the training model according to a matching model, to determine whether any part of the detection object has been missed during scanning;
issuing a missed-scan prompt when a missed scan exists.

5. The ultrasonic simulation training method according to claim 1, characterized in that generating the movement path of the ultrasonic probe according to the training model based on the type of the ultrasonic probe, and guiding the ultrasonic probe to perform a moving scan based on the movement path, comprises:
when the ultrasonic probe is a virtual ultrasonic probe, determining the spatial position of a target slice according to the target slice and the training model;
generating the movement path of the ultrasonic probe according to the current position information of the ultrasonic probe and the spatial position of the target slice, and guiding the ultrasonic probe to perform a moving scan based on the movement path.

6. The ultrasonic simulation training method according to claim 1, characterized in that generating the movement path of the ultrasonic probe according to the training model based on the type of the ultrasonic probe, and guiding the ultrasonic probe to perform a moving scan based on the movement path, further comprises:
when the ultrasonic probe is a real ultrasonic probe, inputting the current ultrasonic image of the scanned site into the first convolutional neural network to obtain a current ultrasonic feature image;
inputting the current ultrasonic feature image into a third convolutional neural network for simplification to obtain a current simplified ultrasonic feature image;
determining, according to the spatial position information of the ultrasonic probe relative to the scanned site and the training model, the existing ultrasonic image at the corresponding spatial position in the training model;
inputting the existing ultrasonic image into the third convolutional neural network for simplification to obtain a simplified existing ultrasonic image;
computing a position difference from the current simplified ultrasonic feature image and the simplified existing ultrasonic image by fully-connected processing;
generating the movement path of the ultrasonic probe according to the position difference and the target slice, and guiding the ultrasonic probe to perform a moving scan based on the movement path.

7. The ultrasonic simulation training method according to claim 1, characterized by further comprising:
obtaining, at least on the basis of the ultrasonic image of the scanned site of the detection object, the standard slice corresponding to the scanned site according to the training model, and performing a quality assessment of the ultrasonic image based on the standard slice; and/or evaluating the actual movement path of the ultrasonic probe against the generated movement path.

8. An ultrasonic simulation training apparatus, characterized by comprising:
a scanning module configured to scan a detection object with an ultrasonic probe, the ultrasonic probe comprising a virtual ultrasonic probe or a real ultrasonic probe;
a site determination module configured to determine the scanned site of the ultrasonic probe according to external input information or spatial position information of the ultrasonic probe relative to the detection object;
an image determination module configured to determine an ultrasonic image of the scanned site according to the scanned site of the ultrasonic probe and a training model, and to display the ultrasonic image, the training model being updated according to actual training conditions;
a path determination module configured to generate, based on the type of the ultrasonic probe, a movement path of the ultrasonic probe according to the training model, and to guide the ultrasonic probe to perform a moving scan based on the movement path.

9. A computer-readable storage medium, characterized in that the computer-readable storage medium stores computer instructions for causing a computer to execute the ultrasonic simulation training method according to any one of claims 1 to 7.

10. An ultrasonic device, characterized by comprising a memory and a processor communicatively connected to each other, the memory storing computer instructions, and the processor executing the computer instructions so as to perform the ultrasonic simulation training method according to any one of claims 1 to 7.

(Illustrative, non-limiting sketches of the steps recited in claims 2, 4, 5, 6 and 7 follow.)
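A minimal sketch of the pretraining loop recited in claim 2, for illustration only. PyTorch is assumed, and every concrete choice below is an assumption rather than the patented implementation: the two network architectures, the frame size, and the plain dictionary keyed by a quantised probe pose that stands in for the three-dimensional data model.

import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    # Stands in for the "first convolutional neural network" (feature extraction).
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())

    def forward(self, x):
        return self.net(x)

class FusionNet(nn.Module):
    # Stands in for the "second convolutional neural network": fuses existing
    # feature image data with newly extracted feature image data.
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(64, 32, 3, padding=1)

    def forward(self, old, new):
        return self.net(torch.cat([old, new], dim=1))

def update_training_model(volume, pose, frame, extractor, fusion):
    # volume: dict mapping a quantised probe pose to stored feature image data;
    # it plays the role of the three-dimensional data model.
    key = tuple(round(p, 1) for p in pose)           # quantise the relative pose
    features = extractor(frame)                      # feature extraction
    if key in volume:                                # existing data at this position?
        features = fusion(volume[key], features)    # claim 2: fuse old and new
    volume[key] = features.detach()                  # claim 3: otherwise store as-is
    return volume

# One update step with a synthetic B-mode frame:
volume = update_training_model({}, (10.0, 5.2, 0.0),
                               torch.randn(1, 1, 64, 64),
                               FeatureExtractor(), FusionNet())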
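The missed-scan check of claim 4 can be sketched the same way, with the caveat that the claimed "matching model" is not reproduced here: the CT or MRI image data is assumed to have been reduced to the same quantised pose keys, and coverage is compared set against set.

def missed_scan_prompt(ct_poses, volume):
    # ct_poses: poses the CT/MRI image data says should be covered;
    # volume: the pose-keyed training model from the sketch above.
    missed = [pose for pose in ct_poses if pose not in volume]
    if missed:
        print(f"Missed-scan prompt: {len(missed)} position(s) were not scanned")
    return missed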
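For the virtual probe of claim 5, once the spatial position of the target slice has been looked up in the training model, path generation reduces to geometry. Linear interpolation between poses is an illustrative choice; the patent does not prescribe one.

import numpy as np

def movement_path(current_pose, target_pose, steps=20):
    # Intermediate poses from the probe's current position to the target slice.
    current = np.asarray(current_pose, dtype=float)
    target = np.asarray(target_pose, dtype=float)
    return [current + (target - current) * t for t in np.linspace(0.0, 1.0, steps)]

Each returned pose can then be rendered as on-screen guidance for the trainee.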
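For the real probe, claim 6 simplifies the current and the stored feature images with a third convolutional neural network and derives a position difference by fully-connected processing. A sketch under the same assumptions: the 32-channel inputs match the FeatureExtractor above, and the 6-DOF output and all layer sizes are invented for illustration.

import torch
import torch.nn as nn

class Simplifier(nn.Module):
    # Stands in for the "third convolutional neural network" (simplification).
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(32, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())   # -> 8 * 4 * 4 = 128 values

    def forward(self, x):
        return self.net(x)

class PositionDifference(nn.Module):
    # Fully-connected processing of the two simplified feature images.
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(2 * 128, 6)              # 6-DOF probe offset

    def forward(self, current, existing):
        return self.fc(torch.cat([current, existing], dim=1))

simplify, difference = Simplifier(), PositionDifference()
offset = difference(simplify(torch.randn(1, 32, 64, 64)),   # current feature image
                    simplify(torch.randn(1, 32, 64, 64)))   # stored feature image
# offset would feed the same path generation as in the claim-5 sketch.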
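Claim 7's quality assessment compares the trainee's image with the standard slice held in the training model. The patent does not fix a metric; normalised cross-correlation over NumPy arrays is used below purely as a placeholder.

import numpy as np

def slice_quality(image, standard):
    # Normalised cross-correlation; 1.0 means identical up to brightness/contrast.
    a = (image - image.mean()) / (image.std() + 1e-8)
    b = (standard - standard.mean()) / (standard.std() + 1e-8)
    return float((a * b).mean())

score = slice_quality(np.random.rand(64, 64), np.random.rand(64, 64))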
Application CN202011220774.7A (priority date 2020-11-04, filing date 2020-11-04): An ultrasonic simulation training method, device, storage medium and ultrasonic equipment. Status: Active. Granted as CN112331049B (en).

Priority Applications (2)

Application Number | Priority Date | Filing Date | Title
CN202110696904.2A | 2020-11-04 | 2020-11-04 | CN113470495A (en): Ultrasonic simulation training method and device, storage medium and ultrasonic equipment
CN202011220774.7A | 2020-11-04 | 2020-11-04 | CN112331049B (en): An ultrasonic simulation training method, device, storage medium and ultrasonic equipment

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202011220774.7A | 2020-11-04 | 2020-11-04 | CN112331049B (en): An ultrasonic simulation training method, device, storage medium and ultrasonic equipment

Related Child Applications (1)

Application Number | Relation | Priority Date | Filing Date | Title
CN202110696904.2A | Division | 2020-11-04 | 2020-11-04 | CN113470495A (en): Ultrasonic simulation training method and device, storage medium and ultrasonic equipment

Publications (2)

Publication Number | Publication Date
CN112331049A | 2021-02-05
CN112331049B | 2021-07-02

Family

ID: 74316112

Family Applications (2)

Application Number | Priority Date | Filing Date | Title
CN202011220774.7A (Active) | 2020-11-04 | 2020-11-04 | CN112331049B (en): An ultrasonic simulation training method, device, storage medium and ultrasonic equipment
CN202110696904.2A (Pending) | 2020-11-04 | 2020-11-04 | CN113470495A (en): Ultrasonic simulation training method and device, storage medium and ultrasonic equipment

Country Status (1)

Country | Link
CN (2) | CN112331049B (en)

Families Citing this family (3)

Publication | Priority Date | Publication Date | Assignee | Title
CN113951922B (en) * | 2021-10-26 | 2024-12-31 | 深圳迈瑞动物医疗科技股份有限公司 | Ultrasonic imaging device and scanning prompt method thereof
CN113951923B (en) * | 2021-10-26 | 2025-01-21 | 深圳迈瑞动物医疗科技股份有限公司 | Veterinary ultrasonic imaging device, ultrasonic imaging device and scanning method thereof
CN114098818B (en) * | 2021-11-22 | 2024-03-26 | 邵靓 | Analog imaging method of ultrasonic original image data
(* cited by examiner, † cited by third party)

Patent Citations (9)

Publication | Priority Date | Publication Date | Assignee | Title
CN102016957A (en) * | 2008-02-25 | 2011-04-13 | 发明医药有限公司 | Medical training method and apparatus
US20160328998A1 (en) * | 2008-03-17 | 2016-11-10 | Worcester Polytechnic Institute | Virtual interactive system for ultrasound training
CN104303075A (en) * | 2012-04-01 | 2015-01-21 | 艾里尔大学研究与开发有限公司 | Apparatus for training users of ultrasound imaging apparatus
CN107578662A (en) * | 2017-09-01 | 2018-01-12 | 北京大学第一医院 | A virtual obstetric ultrasound training method and system
CN109447940A (en) * | 2018-08-28 | 2019-03-08 | 天津医科大学肿瘤医院 | Convolutional neural networks training method, ultrasound image recognition positioning method and system
CN110967730A (en) * | 2019-12-09 | 2020-04-07 | 深圳开立生物医疗科技股份有限公司 | Ultrasonic image processing method, system, equipment and computer storage medium
CN110960262A (en) * | 2019-12-31 | 2020-04-07 | 上海杏脉信息科技有限公司 | Ultrasonic scanning system, method and medium
CN111657997A (en) * | 2020-06-23 | 2020-09-15 | 无锡祥生医疗科技股份有限公司 | Ultrasonic auxiliary guiding method, device and storage medium
CN111860636A (en) * | 2020-07-16 | 2020-10-30 | 无锡祥生医疗科技股份有限公司 | Measurement information prompt method and ultrasound training method
(* cited by examiner, † cited by third party)

Cited By (6)

Publication | Priority Date | Publication Date | Assignee | Title
CN112807025A (en) * | 2021-02-08 | 2021-05-18 | 威朋(苏州)医疗器械有限公司 | Ultrasonic scanning guiding method, device, system, computer equipment and storage medium
CN113274051A (en) * | 2021-04-30 | 2021-08-20 | 中国医学科学院北京协和医院 | Ultrasonic auxiliary scanning method and device, electronic equipment and storage medium
CN113274051B (en) * | 2021-04-30 | 2023-02-21 | 中国医学科学院北京协和医院 | Ultrasonic auxiliary scanning method and device, electronic equipment and storage medium
CN113257100A (en) * | 2021-05-27 | 2021-08-13 | 郭山鹰 | Remote ultrasonic teaching system
CN115886877A (en) * | 2021-09-29 | 2023-04-04 | 深圳迈瑞生物医疗电子股份有限公司 | Guiding method for ultrasonic scanning and ultrasonic imaging system
CN116262050A (en) * | 2021-12-14 | 2023-06-16 | 无锡触典科技有限公司 | A method for finding standard ultrasonic slices and a calculation method for measuring bladder volume
(* cited by examiner, † cited by third party)

Also Published As

Publication number | Publication date
CN113470495A | 2021-10-01
CN112331049B | 2021-07-02

Similar Documents

Publication | Publication Date | Title
CN112331049B (en) | 2021-07-02 | An ultrasonic simulation training method, device, storage medium and ultrasonic equipment
JP6238651B2 (en) | | Ultrasonic diagnostic apparatus and image processing method
CN111758137A (en) | | Method and apparatus for telemedicine
CN108352132A (en) | | Ultrasonic simulation method
US20150056591A1 | | Device for training users of an ultrasound imaging device
CN113194837B (en) | | System and method for frame indexing and image review
JP6580013B2 (en) | | Image processing apparatus and method
JP2020137974A (en) | | Ultrasonic probe navigation system and navigation display device therefor
JP2021166578A (en) | | Ultrasound diagnosis device and ultrasound diagnosis system
JP2018079070A (en) | | Ultrasonic diagnosis apparatus and scanning support program
JP5390149B2 (en) | | Ultrasonic diagnostic apparatus, ultrasonic diagnostic support program, and image processing apparatus
CN111568469A (en) | | Method and apparatus for displaying ultrasound image and computer program product
JP2025105891A (en) | | Ultrasound Qualification System
CN107578662A (en) | | A virtual obstetric ultrasound training method and system
JP7183451B2 (en) | | Systems, devices, and methods for assisting in neck ultrasound
CN118845164A (en) | | Portable puncture device based on ultrasound guidance
CN113870636B (en) | | Ultrasonic simulation training method, ultrasonic device and storage medium
CN111419272B (en) | | Operation panel, doctor end controlling means and master-slave ultrasonic detection system
CN116631252A (en) | | Physical examination simulation system and method based on mixed reality technology
CN114631841A (en) | | Ultrasonic scanning feedback device
KR102364490B1 (en) | | Untrasound dianognosis apparatus, method and computer-readable storage medium
US12138117B2 (en) | | One-dimensional position indicator
KR20230169067A (en) | | Method for measuring medical indicator and ultrasound diagnosis apparatus for the same
JP2010284543A (en) | | Ultrasonic diagnostic apparatus

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
