Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present specification, the drawings that are required to be used in the description of the embodiments will be briefly described below. It is apparent that the drawings in the following description are only some examples or embodiments of the present specification, and it is possible for those of ordinary skill in the art to apply the present specification to other similar situations according to the drawings without inventive effort. Unless otherwise apparent from the context of the language or otherwise specified, like reference numerals in the figures refer to like structures or operations.
It will be appreciated that "system," "apparatus," "unit" and/or "module" as used herein is one method for distinguishing between different components, elements, parts, portions or assemblies at different levels. However, if other words can achieve the same purpose, the words can be replaced by other expressions.
As used in this specification and the claims, the terms "a," "an," and/or "the" do not refer specifically to the singular and may include the plural, unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; they do not constitute an exclusive list, as a method or apparatus may also include other steps or elements.
A flowchart is used in this specification to describe the operations performed by the system according to embodiments of the present specification. It should be appreciated that the preceding or following operations are not necessarily performed in order precisely. Rather, the steps may be processed in reverse order or simultaneously. Also, other operations may be added to or removed from these processes.
Fig. 1 is an application scenario diagram of a motion state determination system according to some embodiments of the present description. As shown in fig. 1, an application scenario 100 of the motion state determination system may include a magnetic resonance device 110, a processing device 120, a network 130, a terminal device 140, and a storage device 150. In some embodiments, the components in the application scenario 100 may be interconnected by a network 130. In some embodiments, the connection between the partial components in the application scenario 100 may be direct.
The magnetic resonance apparatus 110 may be used for magnetic resonance scanning of a scan subject. In some embodiments, the scan object may be biological or non-biological. For example, the scan object may include a patient, an artificial object, and the like. In some embodiments, the scan object may include a particular portion of the body, e.g., the head, neck, chest, etc., or any combination thereof. In some embodiments, the scan object may include a particular organ, e.g., liver, kidney, pancreas, bladder, uterus, rectum, etc., or any combination thereof. In some embodiments, the scan object may include a region of interest (ROI), e.g., a tumor, a nodule, etc.
In some embodiments, when determining the motion state of a scan object, the scan object may be scanned according to a first imaging field of view and a second imaging field of view. In a preprocessing stage, a data correspondence between the first imaging field of view and the second imaging field of view is obtained for each motion state of the scan object in the first scan stage. In an application stage, the motion state of the scan object in a second scan stage may then be determined in real time according to the data correspondence. For example, the motion state of the scan object in a similar motion cycle may be determined, where a similar motion cycle is a motion cycle, outside the first scan stage, whose motion pattern is similar to that of the first scan stage. As a non-limiting example, motion-related data of the scan object during the second scan stage may be acquired by various means (e.g., a sensor, an imaging device, etc.) and combined with the data correspondence to determine the motion state of the scan object during the second scan stage (and optionally during other motion cycles similar to those of the scanning process).
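The two-stage flow described above can be sketched in simplified form. The following is an illustrative sketch only, not the claimed implementation; the function and variable names (`preprocess`, `apply_stage`, etc.) are hypothetical, and the data correspondence is reduced to a plain lookup from motion state to K-space data group.

```python
def preprocess(first_scan_images, second_scan_kspace):
    """Preprocessing stage (hypothetical sketch): build the data
    correspondence, i.e. a mapping from each motion state observed in
    the first scan stage to its K-space data group.

    first_scan_images:  {state_label: list of 3-D images}    (first FOV)
    second_scan_kspace: {state_label: list of K-space data}  (second FOV)
    """
    return {state: second_scan_kspace[state] for state in first_scan_images}


def apply_stage(third_scan_kspace, correspondence, distance):
    """Application stage (hypothetical sketch): return the motion state
    whose stored K-space group lies closest to the newly acquired data."""
    return min(
        correspondence,
        key=lambda state: min(distance(third_scan_kspace, ref)
                              for ref in correspondence[state]),
    )
```

In practice the distance function and the K-space representation would depend on the acquisition scheme; a squared-difference metric would be one simple choice for illustration.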
In some embodiments, the magnetic resonance apparatus 110 may acquire a large amount of raw data (e.g., K-space data, etc.) by performing a magnetic resonance scan of a scan subject. In some embodiments, the magnetic resonance apparatus 110 may acquire image data by performing a magnetic resonance scan of a scan subject. In some embodiments, the magnetic resonance apparatus 110 may perform a three-dimensional magnetic resonance scan of a scan subject to acquire three-dimensional scan data (e.g., a three-dimensional scan image, etc.). In some embodiments, the magnetic resonance apparatus 110 may perform a four-dimensional magnetic resonance scan of the scan subject to acquire four-dimensional scan data (e.g., a three-dimensional scan image containing time-dimension information, etc.).
In some embodiments, the magnetic resonance apparatus 110 may perform magnetic resonance scanning of the scan subject in accordance with a scanning sequence, wherein the scanning sequence may include, but is not limited to, a free induction decay sequence (FID), a spin echo sequence (SE), a gradient echo sequence (GRE), a hybrid sequence (HS), and the like. In some embodiments, different scan sequences may be employed for scanning different scan objects. For example, an Ax SE T1 scan sequence may be used for a conventional head scan, and a Cor SE T1 scan sequence may be used for a pituitary scan.
In some embodiments, the magnetic resonance device 110 may include a 1.5T magnetic resonance device, a 3.0T magnetic resonance device, a 5.0T magnetic resonance device, a 7.0T magnetic resonance device, or the like.
The above description of the magnetic resonance apparatus 110 is for illustrative purposes only and is not intended to limit the scope of the present description.
In some embodiments, the magnetic resonance apparatus 110 may transmit the scan data to the processing apparatus 120 and/or other components of the application scenario 100. In some embodiments, the magnetic resonance apparatus 110 may receive relevant data or instructions from the processing apparatus 120 and/or other components of the application scenario 100 to perform a magnetic resonance scan.
The processing device 120 may process data and/or information acquired from the magnetic resonance device 110, the terminal device 140, and/or the storage device 150. For example, the processing device 120 may acquire first scan data and second scan data of the scan object in the first scan stage, and acquire data correspondence under the first imaging field of view and the second imaging field of view of the scan object in each motion state in the first scan stage based on the first scan data and the second scan data. For another example, the processing device 120 may acquire third scan data of the scan object during the second scan phase and determine a target motion state of the scan object when acquiring the third scan data based on the third scan data and the data correspondence.
In some embodiments, the processing device 120 may be a single server or a group of servers. The server group may be centralized or distributed. In some embodiments, the processing device 120 may be local or remote. For example, the processing device 120 may access information and/or data from the magnetic resonance device 110, the terminal device 140, and/or the storage device 150 via the network 130. As another example, the processing device 120 may be directly connected to the magnetic resonance device 110, the terminal device 140, and/or the storage device 150 to access information and/or data. In some embodiments, the processing device 120 may be disposed in the magnetic resonance device 110. In some embodiments, the processing device 120 may be implemented on a cloud platform. For example, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, and the like, or any combination thereof.
Network 130 may include any suitable network capable of facilitating the exchange of information and/or data. In some embodiments, information and/or data may be exchanged between one or more components of application scenario 100 via network 130. For example, the processing device 120 may receive scan data of the magnetic resonance device 110 over the network 130. The network 130 may include a Local Area Network (LAN), a Wide Area Network (WAN), a wired network, a wireless network, etc., or any combination thereof.
The terminal device 140 may communicate and/or connect with the magnetic resonance device 110, the processing device 120, and/or the storage device 150. For example, the terminal device 140 may send one or more control instructions to the magnetic resonance device 110 via the network 130 to control the magnetic resonance device 110 to perform magnetic resonance scanning of the scan subject as instructed. In some embodiments, the terminal device 140 may include one or any combination of mobile device 140-1, tablet 140-2, notebook 140-3, desktop 140-4, and other input and/or output enabled devices. In some embodiments, terminal device 140 may include input devices, output devices, and the like. The input device may include a keyboard, touch screen, mouse, voice device, etc., or any combination thereof. The output device may include a display, speakers, printer, etc., or any combination thereof. In some embodiments, the terminal device 140 may be part of the processing device 120. In some embodiments, the terminal device 140 may be integrated with the processing device 120 as a console for the magnetic resonance device 110.
Storage device 150 may store data, instructions, and/or any other information. In some embodiments, the storage device 150 may store data acquired from the magnetic resonance device 110, the processing device 120, and/or the terminal device 140. For example, the storage device 150 may store raw data acquired from the magnetic resonance device 110. In some embodiments, the storage device 150 may store data and/or instructions that the processing device 120 uses to perform or use to accomplish the exemplary methods described herein. For example, the storage device 150 may store instructions for controlling the magnetic resonance device 110 to scan.
In some embodiments, the storage device 150 may include mass storage, removable storage, volatile read-write memory, read-only memory (ROM), and the like, or any combination thereof. In some embodiments, storage device 150 may be implemented on a cloud platform. In some embodiments, the storage device 150 may be part of the processing device 120.
It should be noted that the above description is provided for illustrative purposes only and is not intended to limit the scope of the present description. Many variations and modifications will be apparent to those of ordinary skill in the art, given the benefit of this disclosure. The features, structures, methods, and other features of the exemplary embodiments described herein may be combined in various ways to obtain additional and/or alternative exemplary embodiments. For example, the magnetic resonance apparatus 110, the processing apparatus 120 and the terminal apparatus 140 may share one storage apparatus 150, or may have respective storage apparatuses. However, such changes and modifications do not depart from the scope of the present specification.
FIG. 2 is an exemplary block diagram of a motion state determination system according to some embodiments of the present description. In some embodiments, as shown in fig. 2, the motion state determination system 200 may include an acquisition module 210, an analysis module 220, and a determination module 230. In some embodiments, the motion state determination system 200 may be implemented by the processing device 120.
In some embodiments, the acquisition module 210 may be configured to acquire first scan data of the scan object during a first scan phase, the first scan data corresponding to a first imaging field of view, and acquire second scan data of the scan object during the first scan phase, the second scan data corresponding to a second imaging field of view. For more description of acquiring the first scan data, the second scan data, see fig. 3 and its associated description.
In some embodiments, the first scan data includes at least one set of three-dimensional scan images of the scan object in at least one motion state within the first scan phase, and the acquiring module 210 may be further configured to acquire first raw data of the scan object in the first scan phase, the first raw data corresponding to the first imaging field of view, and perform three-dimensional state segmentation based on the first raw data to obtain at least one set of three-dimensional scan images, each set of at least one set of three-dimensional scan images corresponding to a particular motion state of the scan object. For more description of acquiring at least one set of three-dimensional scanned images, see fig. 3, 4 and their associated description.
In some embodiments, the second scan data includes at least one set of K-space data sets of the scan object in at least one motion state within the first scan phase, and accordingly, the acquisition module 210 may be further configured to acquire second raw data of the scan object within the first scan phase, the second raw data corresponding to the second imaging field of view, and divide the second raw data based on the K-space data variation characteristics to obtain at least one set of K-space data sets, each set of at least one set of K-space data sets corresponding to one set of three-dimensional scan images of the at least one set of three-dimensional scan images. For more description of acquiring at least one set of K-space data, see fig. 3 and its associated description.
In some embodiments, the analysis module 220 may be configured to obtain, based on the first scan data and the second scan data, a data correspondence between the first imaging field of view and the second imaging field of view of the scan object in each motion state during the first scan phase. In some embodiments, the analysis module 220 may be further configured to determine data correspondence of the scan object in each motion state within the first scan stage by the matching model based on the at least one set of three-dimensional scan images and the at least one set of K-space data sets. For more explanation on the acquired data correspondence, see fig. 3 and its associated description.
In some embodiments, the determining module 230 may be configured to determine a motion state of the scan object within the second scan stage based on the data correspondence. For more explanation on determining the motion state of the scanned object in the second scanning phase, see fig. 3 and its related description.
In some embodiments, the determining module 230 may be further configured to acquire third scan data of the scan object during the second scan phase, the third scan data corresponding to the second imaging field of view, and determine a target motion state of the scan object when acquiring the third scan data based on the third scan data and the data correspondence. For a further description of the third scan data, determining the target motion state of the scan object at the time of acquisition of the third scan data, see fig. 3 and its associated description.
In some embodiments, the determining module 230 may be further configured to determine, based on the third scan data, a set of K-space data sets in the second scan data that match the third scan data, and determine, based on the data correspondence, a target motion state corresponding to the third scan data. For more description of determining the state of motion of the target see fig. 3 and its associated description.
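As one way to make the matching step above concrete, the sketch below (a hypothetical illustration, not the claimed implementation) picks the K-space group whose average magnitude profile has the highest correlation with the newly acquired third scan data.

```python
import numpy as np

def match_motion_state(third_data, kspace_groups):
    """Return the motion state label whose K-space group best matches
    the newly acquired data (highest Pearson correlation of magnitude
    profiles). `kspace_groups` maps state labels to lists of K-space
    samples collected during the first scan stage."""
    probe = np.abs(np.asarray(third_data, dtype=complex)).ravel()
    best_label, best_r = None, -np.inf
    for label, group in kspace_groups.items():
        # Average the group's magnitude profiles into one reference.
        ref = np.abs(np.asarray(group, dtype=complex)).mean(axis=0).ravel()
        r = np.corrcoef(probe, ref)[0, 1]
        if r > best_r:
            best_label, best_r = label, r
    return best_label
```

Other similarity measures (e.g., Euclidean distance on selected K-space lines) could serve the same purpose; correlation is used here only for illustration.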
It should be noted that the above description of the motion state determination system 200 and its modules is for convenience of description only and is not intended to limit the present description to the scope of the illustrated embodiments. It will be appreciated by those skilled in the art that, given the principles of the system, various modules may be combined arbitrarily or a subsystem may be constructed in connection with other modules without departing from such principles. In some embodiments, the acquisition module 210, the analysis module 220, and the determination module 230 may be different modules in a system, or may be one module to implement the functions of two or more modules described above. For example, each module may share one memory module, or each module may have a respective memory module. Such variations are within the scope of the present description.
Fig. 3 is a flow chart of an exemplary motion state determination method shown in accordance with some embodiments of the present description. In some embodiments, the motion state determination method may be performed by the processing device 120 or the motion state determination system 200. For example, the flow 300 may be stored in a storage device (e.g., the storage device 150) in the form of a program or instructions that, when executed by the processing device 120 or the motion state determination system 200, may implement the flow 300. The operational schematic of flow 300 presented below is illustrative. In some embodiments, the process may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed herein. In addition, the order in which the operations of flow 300 are illustrated in FIG. 3 and described below is not limiting.
Step 301, acquiring first scan data of a scan object in a first scan phase, the first scan data corresponding to a first imaging field of view. In some embodiments, step 301 may be performed by the acquisition module 210.
In some embodiments, the scan subject is affected by physiological activity. For example, the motion state (also called the physiological phase) of the scan subject may change with respiration and heartbeat. When the scan object is a lung, it contracts and relaxes under the influence of physiological activity. When the scan object is the heart, it is displaced under the influence of physiological activity. For more description of the scan object, see fig. 1 and its associated description.
In some embodiments, the first scan stage may be a scan stage for acquiring data. The data acquired in the first scan stage may be used to determine data correspondence. See below for more description.
In some embodiments, the first scanning phase may include one or more periods of motion.
A movement cycle is the period of time over which the displacement, velocity, acceleration, etc. of the scan object return completely or substantially to their values at a given reference time. Respiration, heartbeat, etc. of the scan subject can cause organ displacement and organ dilation. For example, if the scan object is the heart, the movement cycle may be the heartbeat cycle. For another example, if the scan object is a lung, the movement cycle may be the period during which the lung contracts and relaxes, i.e., a respiratory cycle. As another example, if the scan object is the stomach and intestine, the movement cycle may be the gastrointestinal motility cycle, or the like.
In some embodiments, the first scan data may be scan data obtained after four-dimensional magnetic resonance scanning of the scan subject. The first scan data may be four-dimensional scan data. For example, the first scan data may be a three-dimensional scan image sequence containing time dimension information.
In some embodiments, the first scan data corresponds to a first imaging field of view. Accordingly, the first scan data may be scan data obtained after performing four-dimensional magnetic resonance scan on the scan object in the first imaging field of view. The first imaging field of view is the field of view size at the time of acquiring the first scan data.
In some embodiments, the first imaging field of view may be the field of view size of a four-dimensional magnetic resonance scan of the scan subject. In some embodiments, the first imaging field of view may be greater than or equal to the range of the scan object; e.g., the first imaging field of view may extend a certain distance beyond the four sides or the periphery of the scan object. For another example, the first imaging field of view may be set at the edge of the region in which the scan object is located.
In some embodiments, the first imaging field of view may be set according to actual requirements. For example, the first imaging field of view may be set to the maximum range in which a certain scan object can be judged according to the diagnosis needs of the doctor.
In some embodiments, the first imaging field of view is limited by hardware limitations of the magnetic resonance apparatus.
In some embodiments, the acquisition module 210 may acquire raw data (e.g., K-space data) of the scan object during a first scan phase by performing a four-dimensional magnetic resonance scan of the scan object at a first imaging field of view by a magnetic resonance apparatus (e.g., magnetic resonance apparatus 110), and obtain first scan data by reconstructing the raw data. Exemplary reconstruction methods include, but are not limited to, partial Fourier reconstruction methods, parallel reconstruction methods, compressed sensing reconstruction methods, deep learning-based reconstruction methods, and the like.
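As a minimal illustration of the reconstruction step, the sketch below performs a plain inverse-FFT reconstruction of fully sampled Cartesian K-space; it is not any of the specific methods listed above, and the helper names are illustrative.

```python
import numpy as np

def reconstruct(kspace):
    """Reconstruct a magnitude image from fully sampled Cartesian
    K-space by an inverse 2-D FFT (with the fftshift conventions
    commonly used for centered K-space data)."""
    return np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace))))

def to_kspace(image):
    """Forward model (image -> centered K-space), used for checking the
    round trip: reconstruct(to_kspace(img)) should return img."""
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(image)))
```

Partial-Fourier, parallel, compressed-sensing, and deep-learning reconstructions replace this step with more elaborate inverse problems, but share the same underlying K-space-to-image mapping.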
In some embodiments, the first scan data includes a set of three-dimensional scan images of the scan object in a plurality of motion states over one or more motion periods encompassed by the first scan phase. The motion state may be represented by a position (i.e., the position of the scan object at a certain time in the motion cycle relative to its position at the initial time of the cycle), a morphology (e.g., a volume), or the like. In some embodiments, positions or morphologies falling within different ranges may be considered different motion states.
For example, if the scan object is a heart, the motion states within the heartbeat cycle may be divided based on a change in heart volume or a change in the position of a point on the heart's edge. For example only, the motion states during the heartbeat cycle may be divided into a mild beat state, a moderate beat state, a severe beat state, and the like. In the mild beat state, the heart volume change or edge position change is smaller than a first threshold; in the moderate beat state, it is greater than or equal to the first threshold and smaller than a second threshold; in the severe beat state, it is greater than or equal to the second threshold and smaller than a third threshold. The first, second, and third thresholds may be system settings, system defaults, manual settings, etc.
For another example, if the scanned object is a lung, the motion state of the lung in the contracted and expanded motion periods may be divided based on changes in the size of the lung volume or changes in the position somewhere along the edge of the lung. For example only, the movement state of the lung when contracting and expanding may be divided into a first contracted state, a second contracted state, etc., and/or a first expanded state, a second expanded state, etc. The dividing manner of each motion state may refer to the above manner of dividing the motion state in the heartbeat cycle, which is not described herein.
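The threshold-based division described in the two examples above might look like the following sketch; the function name, the state labels, and the handling of changes at or above the third threshold are illustrative assumptions rather than part of the described method.

```python
def beat_state(change, t1, t2, t3):
    """Classify a heart volume change (or edge position change) into
    the beat states described above, given thresholds t1 < t2 < t3.
    Values at or above t3 fall outside the three named states and are
    labeled "undefined" here (an assumption, not from the source)."""
    if change < t1:
        return "mild"
    if change < t2:
        return "moderate"
    if change < t3:
        return "severe"
    return "undefined"
```

Lung contraction/expansion states would use the same binning with thresholds on lung volume or edge position change.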
In some embodiments, the first scan data may include one or more sets of three-dimensional scan images. The scan object includes one or more motion states within a motion cycle, and each set of three-dimensional scan images may correspond to a particular one of the one or more motion states. A set of three-dimensional scan images may include one or more consecutive three-dimensional scan images. The position or morphology of the scan object changes only slightly within a set of three-dimensional scan images corresponding to a particular motion state, and changes significantly across sets corresponding to different motion states.
Referring to fig. 4, in some embodiments, the acquiring module 210 may obtain at least one set of three-dimensional scan images by acquiring first raw data of the scan object in a first scan stage, and performing three-dimensional state segmentation based on the first raw data.
The first raw data refers to four-dimensional image data obtained by performing a four-dimensional magnetic resonance scan on the scan object. Four-dimensional image data refers to image data including time-dimension information. In some embodiments, the first raw data may be obtained by reconstructing raw data (e.g., a K-space dataset) obtained by the four-dimensional magnetic resonance scan of the scan object; see the previous description of reconstruction. In this embodiment, the first scan data includes at least one three-dimensional scan image of the scan object in each motion state in the one or more motion periods included in the first scan phase; accordingly, the first scan data may be obtained by further processing the first raw data, i.e., by three-dimensional state segmentation.
In some embodiments, the first raw data corresponds to a first imaging field of view. Accordingly, the first raw data may be four-dimensional image data obtained after performing four-dimensional magnetic resonance scanning on the scanning object in the first imaging field of view.
In some embodiments, the acquisition module 210 may segment the first raw data into at least one set of three-dimensional scanned images by performing a three-dimensional state segmentation on the first raw data. In some embodiments, the acquisition module 210 may perform three-dimensional state segmentation on the first raw data by way of image matching. For example, the acquiring module 210 may preset reference three-dimensional scan images (may include one or more sheets) corresponding to different motion states, and match the three-dimensional scan images in the first raw data with the reference three-dimensional scan images corresponding to different motion states, so as to divide the first raw data into at least one group of three-dimensional scan images corresponding to different motion states.
In some embodiments, the acquisition module 210 may divide the first raw data into one or more sets of three-dimensional scan images by image comparison. For example, the acquisition module 210 may slide a window over the time dimension to extract multiple time slices, compare the similarity of the corresponding three-dimensional scan images, and start a new set wherever the image similarity falls below a similarity threshold, so that each set corresponds to a particular motion state and the first raw data is segmented into one or more sets of three-dimensional scan images.
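The similarity-based segmentation above can be sketched as follows; this is a hypothetical helper, and the similarity metric is left as a parameter since the description does not fix one.

```python
def segment_by_similarity(images, similarity, threshold):
    """Split a time-ordered image sequence into groups: a new group
    starts whenever similarity to the previous frame drops below the
    threshold; each resulting group then corresponds to one motion
    state of the scan object."""
    groups = [[images[0]]]
    for prev, cur in zip(images, images[1:]):
        if similarity(prev, cur) < threshold:
            groups.append([cur])     # similarity dropped: new motion state
        else:
            groups[-1].append(cur)   # still the same motion state
    return groups
```

Any pairwise image similarity (e.g., normalized cross-correlation of the volumes) could be plugged in for `similarity`.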
Step 302, second scan data of the scanned object during the first scan phase is acquired, the second scan data corresponding to a second imaging field of view. In some embodiments, step 302 may be performed by the acquisition module 210.
The second scan data is scan data obtained after performing a fast magnetic resonance scan on the scan object. In some embodiments, the second scan data may be scan data containing time dimension information. For example, the second scan data may be a K-space dataset containing time dimension information.
In some embodiments, the second scan data corresponds to a second imaging field of view. Accordingly, the second scan data may be scan data obtained after performing a fast magnetic resonance scan of the scan subject in the second imaging field of view.
In some embodiments, the phase encoding direction may be adjusted when performing a fast magnetic resonance scan. For example, the phase encoding direction may be set on an anatomical short axis of the human body.
In some embodiments, the acquiring the first scan data and the acquiring the second scan data may or may not be performed simultaneously.
The second imaging field of view refers to the field of view size of the fast magnetic resonance scan of the scan subject. In some embodiments, the second imaging field of view may be smaller than the range of the scan object. In some embodiments, the second imaging field of view may be smaller than the first imaging field of view. By setting the second imaging field of view smaller than the first imaging field of view, the acquired signals of the second scan data have higher signal intensity in the central low-frequency and peripheral high-frequency parts of K-space than those of the first scan data, and thus better reflect the motion state of the scan object. In some embodiments, because the second imaging field of view is smaller than the range of the scan object, image folding (wrap-around) may occur.
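The folding effect can be demonstrated in one dimension: sampling K-space at twice the spacing (equivalent to halving the field of view) makes the reconstructed profile the sum of the two halves of the original, i.e., the object wraps around. This is a generic Fourier property, shown here as a sketch rather than code from the described system.

```python
import numpy as np

def fold_half_fov(profile):
    """Reconstruct a 1-D profile from K-space sampled with half the
    field of view (doubled K-space sample spacing); the result is the
    original profile folded onto itself."""
    full_k = np.fft.fft(profile)
    half_fov_k = full_k[::2]        # doubled spacing = halved FOV
    return np.fft.ifft(half_fov_k).real
```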
The second imaging field of view may be determined in a variety of ways. In some embodiments, the second imaging field of view may be preset by a system (e.g., the motion state determination system 200) or manually. For example, the size of the second imaging field of view may be preset based on a priori knowledge or historical data.
In some embodiments, the second imaging field of view may be determined based on the first imaging field of view. For example, the system may perform an operation such as an equal scale reduction on the field of view size of the first imaging field of view to obtain the second imaging field of view.
In some embodiments, the second imaging field of view may be determined based on the organ type of the scanned subject. For example, the system may determine the actual second imaging field of view applied to the scanned object based on the actual organ type of the scanned object by presetting the correspondence of different organ types to different second imaging fields of view.
In some embodiments, the second imaging field of view may be determined based on the first scan data and a preset condition.
The preset condition may take various forms. In some embodiments, the preset condition may include setting the field of view size of the second imaging field of view such that the second imaging field of view matches the motion type of the scan object. The motion type of the scan object may refer to the motion direction of the scan object, including, for example, up-down movement, left-right movement, and the like. In some embodiments, the motion type of the scan object may be determined based on the first scan data. For example, the motion type, i.e., the motion direction, of the scan object may be determined based on a plurality of adjacent three-dimensional scan images.
In some embodiments, satisfying the motion type of the scanned object means making one side of the second imaging field of view the same as or similar in length to the side of the scanned object perpendicular to the motion direction. Referring to fig. 5, where R is the scanned object, a is the side of the second imaging field of view perpendicular to the motion direction of the scanned object R, and b is the side of the scanned object R perpendicular to its motion direction, satisfying the motion type of the scanned object means that the length of side a of the second imaging field of view is the same as or similar to the length of side b of the scanned object R.
In some embodiments, the preset condition may include setting the field of view size of the second imaging field of view such that the image derived from the first scan data does not include image folds while the image derived from the second scan data does. Referring to fig. 5, image folds are present in the second scan data corresponding to the second imaging field of view.
In some embodiments, the preset condition may include setting the field of view size of the second imaging field of view such that the time resolution of the second scan data within the first scan phase satisfies a preset time resolution condition and/or the image folds occur in the direction in which the motion of the scanned object has the greatest effect on imaging.
In some embodiments, the preset time resolution condition may be that the time resolution of the second scan data in the first scan phase is greater than or equal to a time resolution threshold. The time resolution threshold may be a system default value, an empirical value, an artificial preset value, or any combination thereof, and may be set according to actual requirements, which is not limited in this specification.
In some embodiments, the direction in which the motion of the scanned object affects imaging most may be the same as the direction of motion of the scanned object. In some embodiments, the direction in which the motion of the scanned object affects imaging most may be a phase encoding direction. In some embodiments, the direction in which the motion of the scanned object affects imaging most may be the frequency encoding direction.
In some embodiments, the system may adjust the field of view size of the first imaging field of view multiple times until the image derived from the first scan data does not include image folds while the image derived from the second scan data does, and take the adjusted field of view size as the second imaging field of view.
In some embodiments, the system may adjust the field of view size of the first imaging field of view multiple times until the time resolution of the second scan data during the first scan phase satisfies the preset time resolution condition and/or the image folds occur in the direction in which the motion of the scanned object has the greatest effect on imaging, and take the adjusted field of view size as the second imaging field of view.
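The repeated adjustment described above can be sketched as a loop that shrinks a candidate field of view step by step until a caller-supplied check of the preset condition passes. The shrink factor, lower bound, and `acceptable` predicate are illustrative assumptions, not parameters fixed by the embodiments:

```python
def choose_second_fov(first_fov, acceptable, shrink=0.9, min_fraction=0.3):
    """Shrink the first imaging field of view repeatedly until the
    candidate satisfies the preset condition, then return it as the
    second imaging field of view.

    `acceptable(fov)` stands in for the checks described above, e.g.
    "the first-FOV image has no folds while the second-FOV image does",
    or "the time resolution of the second scan data meets its threshold".
    """
    fov = first_fov
    while fov > first_fov * min_fraction:
        fov *= shrink          # adjust the field of view size once more
        if acceptable(fov):
            return fov
    raise ValueError("no field-of-view size satisfied the preset condition")
```

A real implementation would evaluate the condition on actual scan data rather than on the scalar size alone.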
In some embodiments, the preset condition may alternatively or additionally include setting a field of view size of the second imaging field of view such that the spatial resolution of the second scan data meets the preset spatial resolution condition. In some embodiments, the predetermined spatial resolution condition may be that the spatial resolution of the second scan data within the first scan phase is less than a spatial resolution threshold. The spatial resolution threshold may be a system default value, an empirical value, an artificial preset value, or any combination thereof, and may be set according to actual requirements, which is not limited in this specification.
The preset conditions can also be in other feasible forms, and can be specifically set according to actual requirements.
In some embodiments, the system may adjust the field of view size of the first imaging field of view a plurality of times such that the spatial resolution of the second scan data satisfies a preset spatial resolution condition, and take the adjusted field of view size as the second imaging field of view.
In some embodiments, the acquisition module 210 may acquire second scan data of the scan subject during the first scan phase by performing one or more fast magnetic resonance scans of the scan subject at the second imaging field of view via a magnetic resonance device (e.g., the magnetic resonance device 110). The second scan data is acquired without reconstruction.
In some embodiments, the second scan data includes one or more sets of K-space datasets for at least one motion state of the scanned object during one or more motion cycles encompassed by the first scan phase. The scanned object has at least one motion state within a motion cycle, and each set of K-space datasets corresponds to a particular one of the at least one motion state. A set of K-space datasets may include one or more consecutive K-space datasets. The position or morphology of the scanned object changes little within the set of K-space datasets corresponding to a particular motion state, and changes greatly or obviously across the sets of K-space datasets corresponding to different motion states.
Since each set of K-space data corresponds to a motion state, each three-dimensional scan image corresponds to a motion state, and accordingly, each set of K-space data in the at least one set of K-space data corresponds to one set of three-dimensional scan images in the at least one set of three-dimensional scan images.
In some embodiments, each set of K-space data in the second scan data corresponds to the same length of acquisition time. In some embodiments, the length of acquisition time corresponding to each set of K-space data in the second scan data may be different. In some embodiments, each set of three-dimensional scan images in the first scan data corresponds to the same length of acquisition time. In some embodiments, the corresponding acquisition time lengths of each set of three-dimensional scan images in the first scan data may be different. In some embodiments, each set of three-dimensional scan images in the first scan data corresponds to the same length of acquisition time as each set of K-space data sets in the second scan data. In some embodiments, the length of acquisition time corresponding to each set of three-dimensional scan images in the first scan data may be different from the length of acquisition time corresponding to each set of K-space data in the second scan data.
In some embodiments, the number of groups of K-space data sets in the second scan data is the same as the number of groups of three-dimensional scan image groups in the first scan data. Referring to fig. 6, in some embodiments, the acquisition module 210 may obtain at least one set of K-space data sets by acquiring second raw data of the scan object during one or more motion periods contained in the first scan phase, and dividing the second raw data based on the K-space data variation characteristics.
The second raw data is raw data obtained after the rapid magnetic resonance scanning is performed on the scanned object. In some embodiments, the second raw data corresponds to a second imaging field of view. Accordingly, the second raw data may be raw data obtained after a fast magnetic resonance scan of the scan subject in the second imaging field of view.
In some embodiments, the acquisition module 210 may divide the second raw data into at least one set of K-space data sets based on the K-space data variation characteristics.
In some embodiments, the K-space data variation characteristic may include a high-low frequency signal distribution difference.
Due to the existence of image folds, there are differences in the high- and low-frequency signal distribution of the K-space dataset. Referring to fig. 7, image folds of different degrees are shown on the left side, together with the differences between the K-space dataset corresponding to each image fold and a reference K-space dataset, the reference K-space dataset being the K-space dataset obtained in the absence of image folds. As shown in fig. 7, when the degree of folding is severe, the enhancement of the central low-frequency and peripheral high-frequency components of the K-space dataset relative to the reference K-space dataset is large, whereas when the degree of folding is small, the enhancement is small. That is, as the degree of folding increases, the signal intensity of the central low-frequency and peripheral high-frequency portions of the K-space dataset increases significantly.
Accordingly, the acquisition module 210 may divide the second raw data into one or more sets of K-space datasets by distinguishing the high- and low-frequency signal distribution differences in the second raw data. Within a set of K-space datasets corresponding to a particular motion state, the signal intensity of the central low-frequency and peripheral high-frequency portions of the K-space data acquired at different moments changes little or not obviously. Across the sets of K-space datasets corresponding to different motion states, the signal intensity of the central low-frequency and peripheral high-frequency portions changes greatly or obviously.
For example, K-space datasets in the second raw data whose low-frequency signal-intensity enhancement amounts (e.g., the average enhancement amount of all low-frequency signals) fall within the same range may be divided into the same set of K-space datasets. For another example, K-space datasets in the second raw data whose high-frequency signal-intensity enhancement amounts (e.g., the average enhancement amount of all high-frequency signals) fall within the same range may be divided into the same set of K-space datasets. For another example, K-space datasets in the second raw data whose low-frequency signal-intensity enhancement amounts fall within the same range and whose high-frequency signal-intensity enhancement amounts fall within the same range may be divided into the same set of K-space datasets.
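The range-based grouping in the examples above can be sketched as binning a per-dataset enhancement value. The fixed-width bins and the single scalar feature per dataset are simplifying assumptions; the embodiments do not prescribe a particular binning rule:

```python
def group_by_enhancement(enhancements, bin_width):
    """Group K-space dataset indices whose enhancement amounts fall in
    the same range (here: the same fixed-width bin).

    `enhancements` holds one scalar per K-space dataset, e.g. the
    average low-frequency signal-intensity enhancement relative to a
    reference dataset.
    """
    groups = {}
    for idx, value in enumerate(enhancements):
        bin_id = int(value // bin_width)       # datasets in one bin share a range
        groups.setdefault(bin_id, []).append(idx)
    return list(groups.values())
```

The combined low- and high-frequency criterion in the last example would bin on a pair of such features instead of one.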
Step 303, based on the first scan data and the second scan data, acquiring a data correspondence between the first imaging field of view and the second imaging field of view of the scan object in at least one motion state in the first scan stage. In some embodiments, step 303 may be performed by analysis module 220.
The data correspondence may reflect a correspondence of one or more data in the first scan data acquired in the first imaging field of view with one or more data in the second scan data acquired in the second imaging field of view. For example, the data correspondence may reflect a correspondence between each of the first scan data and the associated data of the second scan data. For example, in the case where at least one set of three-dimensional scan images is included in the first scan data and at least one set of K-space data sets is included in the second scan data, referring to fig. 8, the data correspondence may include a correspondence of at least one set of three-dimensional scan images and at least one set of K-space data sets. For another example, the first scan data may include at least one set of three-dimensional or two-dimensional scan images, the second scan data may include at least one set of three-dimensional or two-dimensional scan images having a smaller imaging field of view than the first scan data, and the data correspondence may include a correspondence between the two sets of three-dimensional or two-dimensional scan images having different imaging fields of view. For another example, the first scan data may include at least one set of two-dimensional scan images, the second scan data may include at least one set of K-space data sets, and the data correspondence may include a correspondence between the at least one set of two-dimensional scan images and the at least one set of K-space data sets.
The data correspondence may be obtained in a plurality of ways. In some embodiments, the analysis module 220 may determine a motion state corresponding to each set of three-dimensional scan images and a motion state corresponding to each set of K-space data sets, and determine a correspondence between the three-dimensional scan images and the K-space data sets according to the motion states, so as to obtain a data correspondence between a first imaging field of view and a second imaging field of view of the scan object in at least one motion state in the first scan stage. For example, where each of the at least one set of three-dimensional scan images corresponds to a motion state of the scan object during the first scan phase, and each of the at least one set of K-space data sets corresponds to a motion state of the scan object during the first scan phase, the analysis module 220 may correlate the three-dimensional scan images and the K-space data sets having the same motion state to determine the data correspondence. For another example, the analysis module 220 may tag at least one three-dimensional scan image and at least one set of K-space datasets based on the motion state, respectively, and associate the three-dimensional scan image and the K-space dataset with the same tag by analyzing the tags of the different three-dimensional scan images and the K-space datasets to determine the data correspondence. For example only, when the scan object is a lung, the tag types may include "primary inhalation state (e.g., indicating less inhalation)", "secondary inhalation state (e.g., indicating more inhalation)", "tertiary inhalation state (e.g., indicating that inhalation reaches a limit)", and the like.
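The tag-based association described above can be sketched as joining the two collections on their motion-state tags. The group identifiers and tag strings below are hypothetical examples in the spirit of the "inhalation state" tags mentioned above:

```python
def associate_by_tag(image_tags, kspace_tags):
    """Pair three-dimensional scan image groups with K-space dataset
    groups that carry the same motion-state tag.

    Both arguments map a group identifier to its motion-state tag
    (e.g. "primary inhalation state"); the returned dict is the data
    correspondence, keyed by K-space group identifier.
    """
    by_tag = {tag: img for img, tag in image_tags.items()}
    correspondence = {}
    for ksp, tag in kspace_tags.items():
        if tag in by_tag:                # only tags present on both sides pair up
            correspondence[ksp] = by_tag[tag]
    return correspondence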
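The tag-based association described above can be sketched as joining the two collections on their motion-state tags. The group identifiers and tag strings below are hypothetical examples in the spirit of the "inhalation state" tags mentioned above:

```python
def associate_by_tag(image_tags, kspace_tags):
    """Pair three-dimensional scan image groups with K-space dataset
    groups that carry the same motion-state tag.

    Both arguments map a group identifier to its motion-state tag
    (e.g. "primary inhalation state"); the returned dict is the data
    correspondence, keyed by K-space group identifier.
    """
    by_tag = {tag: img for img, tag in image_tags.items()}
    correspondence = {}
    for ksp, tag in kspace_tags.items():
        if tag in by_tag:                # only tags present on both sides pair up
            correspondence[ksp] = by_tag[tag]
    return correspondence
```

This assumes each motion-state tag appears on at most one image group; groups with no counterpart are simply left out of the correspondence.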
In this embodiment, the motion state corresponding to the K-space dataset may be obtained by performing image reconstruction on the K-space dataset and then performing image registration to obtain a correspondence between the K-space dataset and the motion state. The image registration process comprises image transformation, feature extraction, similarity measure, search optimization and the like.
In some embodiments, the analysis module 220 may determine, based on the at least one set of three-dimensional scan images and the at least one set of K-space data sets, a data correspondence of the K-space data sets with the set of three-dimensional scan images for at least one state of motion of the scan object during the first scan phase by the matching model.
The matching model may be a machine learning model. In some embodiments, the matching model may include any one or combination of various possible models, including a recurrent neural network (Recurrent Neural Network, RNN) model, a deep neural network (Deep Neural Network, DNN) model, a convolutional neural network (Convolutional Neural Network, CNN) model, and so on.
In some embodiments, the input of the matching model may include at least one three-dimensional scan image and at least one set of K-space data sets, and the output of the matching model may include data correspondence of the scan object in at least one motion state within the first scan stage.
In some embodiments, the matching model may be an existing image registration model. Exemplary image registration models include, but are not limited to, a full convolution network based unsupervised registration model (FCNet), a generation countermeasure network based unsupervised registration model, and the like.
In some embodiments, the matching model may be trained from a large number of labeled training samples. In some embodiments, the training samples include at least one set of sample three-dimensional scan images and at least one set of sample K-space data sets, and the labels include actual data correspondence corresponding to the training samples. In some embodiments, training samples may be obtained based on historical data, and tags may be determined by the system or by human labeling. For example, the system or person can determine the sample three-dimensional scanning image and the sample K-space dataset with the corresponding relation in an image matching or image comparison mode (see above for details), so as to obtain the actual data corresponding relation between at least one sample three-dimensional scanning image and at least one group of sample K-space datasets.
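The label construction from historical data described above might be sketched as pairing image groups and K-space dataset groups by their confirmed motion states, marking matching pairs as positive samples. The data layout is a hypothetical simplification of what a real training pipeline would use:

```python
def build_training_samples(history):
    """Assemble labeled training samples from historical scan data.

    Each history entry is assumed to hold a group identifier together
    with its manually confirmed motion state; pairs sharing a motion
    state become positive samples (label 1), all other pairs negative
    samples (label 0).
    """
    samples = []
    for img_group, img_state in history["images"]:
        for ksp_group, ksp_state in history["kspace"]:
            label = 1 if img_state == ksp_state else 0
            samples.append(((img_group, ksp_group), label))
    return samples
```

The matching model itself (e.g. a CNN or registration network, as listed above) would then be trained on such pairs with a standard supervised objective.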
In some embodiments of the present disclosure, by determining the data correspondence by using a machine learning model, a rule may be found from a large amount of data by using the self-learning capability of machine learning, and the association relationship between the three-dimensional scan image and the K-space dataset may be obtained, thereby improving the accuracy and efficiency of determining the data correspondence.
Step 304, determining the motion state of the scanning object in the second scanning stage based on the data correspondence. In some embodiments, step 304 may be performed by determination module 230.
In some embodiments, the second scan phase may be a real-time scan phase. The second scanning phase is different from the first scanning phase.
In some embodiments, the second scan phase may be a complete imaging process (e.g., where the motion state of the scan object is determined while the image is being taken), or may be a non-complete imaging process or a non-imaging process for acquiring specific data (e.g., K-space data) (e.g., where only the motion state of the scan object needs to be determined without taking a photograph).
In some embodiments, the determination module 230 may acquire third scan data of the scan object during the second scan phase. The third scanning process for acquiring the third scan data may not overlap in time with the first scanning process for acquiring the first scan data or the second scanning process for acquiring the second scan data, or it may partially overlap in time with either of them. The third scan data may correspond to the second imaging field of view. Alternatively, the third scan data may correspond to a third imaging field of view that is different from the second imaging field of view; e.g., the third imaging field of view may be smaller than the second imaging field of view, or larger than the second imaging field of view but smaller than the first imaging field of view, and so on. In some embodiments, the determination module 230 may determine a target motion state of the scan object at the time the third scan data is acquired, based on the third scan data and the data correspondence.
The third scan data may refer to scan data obtained after performing a fast magnetic resonance scan on the scan subject. In some embodiments, the third scan data may be acquired during the application phase. For more description of the application phase, see fig. 1 and its associated description.
In some embodiments, the third scan data may be scan data containing time dimension information. For example, the third scan data may be a K-space dataset containing time dimension information. In some embodiments, the third scan data may be scan data that does not contain time dimension information, i.e. scan data obtained at a certain fast magnetic resonance scan. For example, the third scan data may be a K-space dataset that does not contain time dimension information.
In some embodiments, the third scan data corresponds to a second imaging field of view. Accordingly, the third scan data may be scan data obtained after performing a fast magnetic resonance scan of the scan subject in the second imaging field of view.
In some embodiments, the determination module 230 may acquire third scan data of the scan subject by performing a fast magnetic resonance scan of the scan subject at the second imaging field of view via a magnetic resonance device (e.g., the magnetic resonance device 110). The third scan data is acquired without reconstruction. The third scan data is acquired in real time.
In some embodiments, the determination module 230 may determine a set of K-space data sets in the second scan data that match the third scan data. The determination module 230 may determine a set of three-dimensional image data in the first scan data corresponding to the matched K-space dataset in the data correspondence. The determination module 230 may determine a motion state corresponding to the third scan data based on the determined set of three-dimensional image data. For example, the determination module 230 may determine a motion state of the scan object represented by the three-dimensional scan image in the determined set of three-dimensional image data as a target motion state of the scan object when the third scan data is acquired. The target motion state corresponding to the third scan data may be a corresponding position, a shape, etc. of the scan object when the third scan data is acquired.
In some embodiments, the determining module 230 may determine a set of K-space data sets in the second scan data matching the third scan data based on the third scan data, and determine the target motion state corresponding to the third scan data based on the K-space data sets matching the third scan data based on the data correspondence. The target motion state is a motion state corresponding to a three-dimensional scanning image corresponding to the K space data set.
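The matching-and-lookup step above can be sketched as a nearest-neighbor search over one scalar feature per K-space dataset group, followed by a query of the data correspondence. The scalar feature and tolerance are illustrative assumptions; a real implementation would compare full K-space datasets:

```python
def locate_motion_state(realtime_feature, group_features, correspondence,
                        tolerance):
    """Find the K-space dataset group whose feature (e.g. central
    low-frequency signal intensity) is nearest to that of the real-time
    dataset, then return the associated motion state / 3D image group
    from the data correspondence.

    Returns None when no group lies within `tolerance`, signalling that
    the subject has shifted too far and a new pre-scan is needed.
    """
    best_group, best_dist = None, None
    for group, feature in group_features.items():
        dist = abs(feature - realtime_feature)
        if best_dist is None or dist < best_dist:
            best_group, best_dist = group, dist
    if best_dist is None or best_dist > tolerance:
        return None
    return correspondence[best_group]
```

Because the lookup works directly on K-space features, no image reconstruction of the real-time data is needed, consistent with the description above.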
In some embodiments, the determination module 230 may determine a three-dimensional imaging result of the scan object based on the third scan data. For example, the determination module 230 may determine a set of K-space datasets in the second scan data that match the third scan data. The determination module 230 may determine a set of three-dimensional image data in the first scan data corresponding to the matched K-space dataset in the data correspondence. The determination module 230 may determine a three-dimensional imaging result corresponding to the third scan data for the scan object based on the determined set of three-dimensional image data.
It should be noted that, although the above embodiments have been described by taking as examples that the first scan data (and the first raw data) are four-dimensional scan data and that the second scan data (and the second raw data) and the third scan data are scan data obtained by fast magnetic resonance scanning, the above description is for illustrative purposes only and is not intended to limit the scope of the present disclosure. The first scan data, the second scan data, and the third scan data may be other types of scan data; for example, the first scan data may be a two-dimensional scan image sequence containing time dimension information, and the second scan data and the third scan data may be scan data obtained by an ordinary magnetic resonance scan.
The possible beneficial effects of the embodiments include: (1) through a pre-scanning stage and an application stage, the method can capture in real time the motion state of organs greatly affected by tissue motion such as respiratory motion and heart beating, improving the motion tracking precision of the target organ tissue and meeting high real-time requirements; (2) by establishing the correspondence between K-space datasets and three-dimensional scan image data from the scanning results of the pre-scanning stage, the motion state of the moving tissue can be obtained in the application stage without reconstructing the K-space dataset, effectively reducing the time cost of image reconstruction and the computing power consumed by signal processing and image reconstruction; (3) by adopting small-field-of-view imaging, the method can effectively reduce the number of phase encoding steps in the imaging process, greatly shortening the imaging time consumed by phase encoding, and makes full use of the severity of the image fold artifacts caused by insufficient phase encoding to locate the current motion state of the moving tissue; (4) since the image folds in the K-space dataset can reflect the motion state of the moving tissue, the current motion state can be obtained directly in the application stage from the real-time K-space dataset through the pre-established correspondence, without image reconstruction, so as to finally realize high-speed real-time three-dimensional magnetic resonance imaging.
The motion state determining method described in some embodiments of the present specification is exemplarily described below with reference to the embodiment of fig. 9.
First, the motion state determining system may acquire, through a preprocessing stage, a data correspondence between a first imaging field of view and a second imaging field of view in each motion state in a motion cycle. The preprocessing stage may include steps S11 to S13 described below.
Step S11, performing a four-dimensional magnetic resonance scan of the scanned object in a large scan field of view to obtain first raw data, and acquiring, based on the first raw data, at least one set of three-dimensional scan images corresponding to different motion states within a motion cycle.
Step S12, setting a suitable small field of view range (the sizes of the large and small field of view ranges being relative to each other), performing a fast magnetic resonance scan of a specific plane of the scanned object in the small scan field of view to obtain second raw data, and acquiring, based on the second raw data, at least one set of K-space datasets corresponding to the specific plane in different motion states within a motion cycle.
The step S11 and the step S12 may be performed simultaneously or may not be performed simultaneously.
Step S13, determining, by using a matching model, the correspondence between the K-space datasets and the three-dimensional scan image sets under different motion states, based on the at least one set of three-dimensional scan images corresponding to different motion states and the at least one set of K-space datasets corresponding to the specific plane in different motion states, thereby obtaining the data correspondence.
The above is a preprocessing stage, and after the preprocessing stage is completed, the motion state determining system can enter an application stage. The application phase includes the following steps S14 to S15.
Step S14, performing real-time fast magnetic resonance scanning on the scanned object in the small scan field to obtain a real-time K-space data set (i.e. the third scan data).
Step S15, locating the current motion state based on the real-time K-space dataset and the data correspondence acquired in the preprocessing stage. In this process, if a K-space dataset matching the real-time K-space dataset exists in the data correspondence, the current motion state can be determined according to the three-dimensional scan image corresponding to the matched K-space dataset and then output. If no K-space dataset matching the real-time K-space dataset exists in the data correspondence, it is determined that the patient has shifted considerably, and the scan needs to be performed again.
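The application stage of steps S14 and S15, including the rescan fallback, can be sketched as a small acquisition loop. The `acquire` and `match` callables and the retry limit are hypothetical stand-ins for the scanner interface and the matching logic:

```python
def application_stage(acquire, correspondence, match, max_retries=3):
    """Application-stage loop: acquire a real-time K-space dataset,
    look it up in the data correspondence, and rescan when no match is
    found (interpreted here as a large patient offset)."""
    for _ in range(max_retries):
        realtime = acquire()                       # step S14: real-time scan
        state = match(realtime, correspondence)    # step S15: locate motion state
        if state is not None:
            return state
    raise RuntimeError("no matching K-space dataset; patient offset too large")
```

In practice the fallback would trigger a new pre-scan rather than simply retrying, as the paragraph above describes.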
One or more embodiments of the present specification provide a motion state determination method including the following steps S21 to S24.
Step S21, determining a data corresponding relation between a first imaging view field and a second imaging view field in at least one motion state, wherein the first imaging view field corresponds to a three-dimensional scanning image, the second imaging view field corresponds to a K space data set, and the data corresponding relation comprises a corresponding relation between the three-dimensional scanning image and the K space data set.
It should be noted that, step S21 may be implemented based on a database, where the database includes relevant scan data obtained by historical scan. See above for further explanation of the correspondence of the acquired data.
Step S22, performing a small-field-of-view magnetic resonance scan of the scanned object in real time to acquire a real-time K-space dataset.
In this step, the processor may control the imaging device to perform a fast magnetic resonance scan of the scan subject over a small scan field of view. See above for more details.
Step S23, determining the physiological phase of the scanned object based on the real-time K-space dataset and the data correspondence. See above for more details.
Step S24, determining a radiotherapy plan of the scanned object based on the physiological phase of the scanned object.
In some embodiments, the processor may determine the radiotherapy plan of the scanned subject in a variety of ways based on the physiological phase of the scanned subject. In some embodiments, the processor may determine the radiotherapy plan by querying a preset schedule based on the physiological phase of the scanned subject. In some embodiments, the preset schedule may include correspondences among a plurality of organ types, a plurality of physiological phases, and a plurality of radiotherapy plans. In some embodiments, the preset schedule may be predetermined based on a priori knowledge or historical data. In some embodiments, the processor may match the physiological phase of the scanned subject against similar historical physiological phases of the same organ type in historical radiotherapy data, and determine the corresponding historical radiotherapy plan as the current radiotherapy plan. The radiotherapy plan may also be determined in other feasible ways, which is not limited herein.
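The preset-schedule query described above might be sketched as a keyed lookup from (organ type, physiological phase) to a plan. The organ types, phase tags, and plan strings below are purely illustrative, not clinical recommendations:

```python
# Hypothetical preset schedule; every entry is illustrative only.
PLAN_TABLE = {
    ("lung", "secondary inhalation state"): "plan A",
    ("lung", "primary inhalation state"): "plan B",
}

def radiotherapy_plan(organ_type, physiological_phase, table=PLAN_TABLE):
    """Query the preset schedule mapping (organ type, physiological
    phase) to a radiotherapy plan; raise when no entry exists."""
    try:
        return table[(organ_type, physiological_phase)]
    except KeyError:
        raise LookupError("no preset plan for this organ/phase") from None
```

The history-matching variant described above would instead search historical records for the nearest physiological phase of the same organ type.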
According to some embodiments of the present disclosure, the physiological phase of the scanned object is analyzed, so that a radiotherapy plan can be determined in a targeted manner, and the accuracy of radiotherapy is improved.
One or more embodiments of the present disclosure provide a motion state determining apparatus, including a processor configured to perform a motion state determining method according to any one of the above embodiments.
One or more embodiments of the present disclosure provide a computer-readable storage medium storing computer instructions that, when read by a computer, perform a method for determining a motion state according to any one of the embodiments described above.
While the basic concepts have been described above, it will be apparent to those skilled in the art that the foregoing detailed disclosure is by way of example only and is not intended to be limiting. Although not explicitly described herein, various modifications, improvements, and adaptations to the present disclosure may occur to those skilled in the art. Such modifications, improvements, and adaptations are suggested in this specification, and are therefore intended to be within the spirit and scope of the exemplary embodiments of this specification.
Meanwhile, this specification uses specific words to describe the embodiments of the specification. Reference to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic is included in at least one embodiment of the present specification. Thus, it should be emphasized and appreciated that two or more references to "an embodiment," "one embodiment," or "an alternative embodiment" in various positions in this specification are not necessarily referring to the same embodiment. Furthermore, certain features, structures, or characteristics of one or more embodiments of the present specification may be combined as suitable.
Furthermore, the order in which the elements and sequences are processed, the use of numbers or letters, or the use of other designations in this specification is not intended to limit the order of the processes and methods of the specification unless explicitly recited in the claims. While certain presently useful inventive embodiments have been discussed in the foregoing disclosure, by way of various examples, it is to be understood that such details are merely illustrative and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements included within the spirit and scope of the embodiments of the present disclosure. For example, while the system components described above may be implemented by hardware devices, they may also be implemented solely by software solutions, such as installing the described system on an existing server or mobile device.
Likewise, it should be noted that in order to simplify the presentation disclosed in this specification and thereby aid in understanding one or more inventive embodiments, various features are sometimes grouped together in a single embodiment, figure, or description thereof. This method of disclosure does not imply that the subject matter of the present specification requires more features than are set forth in the claims. Indeed, claimed subject matter may lie in less than all of the features of a single embodiment disclosed above.
In some embodiments, numbers describing quantities of components and attributes are used. It should be understood that such numbers used in the description of the embodiments are modified in some examples by the modifiers "about," "approximately," or "substantially." Unless otherwise indicated, "about," "approximately," or "substantially" indicates that the number allows for a variation of ±20%. Accordingly, in some embodiments, the numerical parameters set forth in the specification and claims are approximations that may vary depending upon the desired properties sought to be obtained by the individual embodiments. In some embodiments, the numerical parameters should take into account the specified significant digits and employ a general method of digit retention. Although the numerical ranges and parameters used to confirm the breadth of the ranges in some embodiments of this specification are approximations, in particular embodiments such numerical values are set as precisely as practicable.
Each patent, patent application publication, and other material, such as articles, books, specifications, publications, documents, etc., referred to in this specification is incorporated herein by reference in its entirety. Excluded are any prosecution history documents that are inconsistent with or in conflict with the content of this specification, as well as any documents, currently or later attached to this specification, that limit the broadest scope of the claims of this specification. It is noted that, if the description, definition, and/or use of a term in material attached to this specification is inconsistent with or conflicts with what is described in this specification, the description, definition, and/or use of the term in this specification controls.
Finally, it should be understood that the embodiments described in this specification are merely illustrative of the principles of the embodiments of this specification. Other variations are possible within the scope of this description. Thus, by way of example, and not limitation, alternative configurations of embodiments of the present specification may be considered as consistent with the teachings of the present specification. Accordingly, the embodiments of the present specification are not limited to only the embodiments explicitly described and depicted in the present specification.