CN119867713B - Non-contact respiration monitoring method, system, terminal equipment and medium - Google Patents


Info

Publication number
CN119867713B
CN119867713B
Authority
CN
China
Prior art keywords
speed
image
signals
respiratory
image block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202510379645.9A
Other languages
Chinese (zh)
Other versions
CN119867713A (en)
Inventor
王文锦
曾咏燊
徐永
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Aipaypal Intelligent Technology Co ltd
Southern University of Science and Technology
Original Assignee
Shenzhen Aipaypal Intelligent Technology Co ltd
Southern University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Aipaypal Intelligent Technology Co ltd and Southern University of Science and Technology
Priority to CN202510379645.9A
Publication of CN119867713A
Application granted
Publication of CN119867713B
Status: Active
Anticipated expiration

Abstract

The invention discloses a non-contact respiration monitoring method, system, terminal equipment and medium. The method comprises the steps of: obtaining a video of a user's chest and abdomen area and dividing each frame of the video into a plurality of corresponding image blocks; calculating velocity components for the image blocks to obtain velocity signals of respiratory motion in different directions; performing singular value decomposition and signal fusion on the velocity signals to obtain a respiratory signal; and calculating the heartbeat interval of the respiratory signal to obtain the user's respiratory rate. By introducing singular value decomposition into non-contact respiration monitoring, and decomposing and fusing the optical-flow velocities in two directions across all image blocks of the video image, the invention generates a more stable and robust respiratory signal, so that high signal-extraction precision is maintained even when there is an angle deviation between the camera and the measured object.

Description

Non-contact respiration monitoring method, system, terminal equipment and medium
Technical Field
The invention relates to the technical field of non-contact respiration monitoring, and in particular to a non-contact respiration monitoring method, system, terminal device and medium for images captured at multiple rotation angles.
Background
Respiratory rate is an important physiological parameter for assessing cardiopulmonary function, and has important significance for many clinical applications, especially in the fields of intensive care, surgical anesthesia, neonatal care, and the like. Traditional respiratory monitoring methods, such as electrical impedance plethysmography, airflow sensors, and capnography, while providing highly accurate respiratory data, often require direct contact with the patient or use of invasive devices, which can cause discomfort to the patient as well as interfere with the patient's natural breathing pattern.
Camera-based respiration monitoring is gaining attention in the medical field as a completely non-invasive, undisturbed, non-contact technique. The technology captures the micro-motion of the chest and the abdomen of a patient through a high-resolution camera, and extracts respiratory signals by using an image processing algorithm. The advantage of this method is that it is contactless, making it particularly suitable for patients with skin sensitivity, suffering from infectious diseases or requiring long-term monitoring, such as neonates, burn patients and immunocompromised persons. In addition, the breath monitoring based on the camera can realize remote monitoring and continuous monitoring, and has important significance for home care and telemedicine.
Camera-based respiratory monitoring has already seen some development; the main implementation is the PixFlow optical flow algorithm. By analyzing the pixel displacement between adjacent frames, the horizontal and vertical motion of the chest and abdomen are calculated separately, and the direction with the higher signal-to-noise ratio is selected as the respiratory signal. Alternatively, the horizontal and vertical components can be fused via the included angle of the orthogonal respiration velocity vector to obtain the angular velocity of respiratory motion. However, this algorithm is angle-dependent: it can acquire accurate respiratory signals only when the patient is at a specific angle, usually directly facing the camera. If the angle between the camera and the patient is not ideal, the chest and abdomen movements in the image may not be accurately captured, resulting in inaccurate or unstable breathing signals output by the algorithm.
It is therefore necessary to propose a technique that enables accurate respiratory signal monitoring at arbitrary angles.
Disclosure of Invention
The invention aims to overcome the shortcomings of the prior art by providing a non-contact respiration monitoring method, system, terminal equipment and medium, and in particular to solve the problem that an accurate respiratory signal can be obtained only when the patient is at a specific angle, usually directly facing the camera. In the prior art, if the angle between the camera and the patient is not ideal, the chest and abdomen movements in the image may not be accurately captured, resulting in inaccurate or unstable breathing signals output by the algorithm.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
In a first aspect, the present invention provides a method of non-contact respiration monitoring, the method comprising:
Acquiring a video of a chest and abdomen area of a user, and dividing each frame of image in the video to obtain a plurality of image blocks corresponding to each frame of image;
Calculating the velocity component of each image block to obtain velocity signals of the respiratory motion of each image block in different directions;
performing singular value decomposition on the speed signals in different directions for each image block to obtain corresponding speed characteristics, and performing signal fusion on the speed signals in different directions based on the speed characteristics to obtain fusion respiration signals of each image block;
fusing the fusion respiratory signals of all the image blocks to obtain an overall respiratory signal;
and accumulating the overall respiratory signal and calculating the heartbeat interval to obtain the respiratory rate of the user.
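The last two claimed steps can be sketched numerically as follows. This is a minimal illustration, not the patented implementation: the per-block fused respiratory signals are assumed precomputed, and zero-crossing spacing of the accumulated displacement stands in for the patent's heartbeat-interval calculation.

```python
import numpy as np

def respiration_rate(fused_blocks, fps):
    """Average the per-block fused respiratory signals (step 4), accumulate
    the velocity into a displacement waveform, and derive the breathing rate
    from the intervals between cycles (step 5). Zero-crossing spacing is an
    illustrative stand-in for the patent's interval calculation."""
    overall = np.mean(fused_blocks, axis=0)        # step 4: average fusion
    disp = np.cumsum(overall) / fps                # step 5a: accumulate velocity
    d = disp - disp.mean()
    sign = np.signbit(d)
    crossings = np.flatnonzero(sign[1:] != sign[:-1])  # two crossings per breath
    if len(crossings) < 2:
        return 0.0
    period_s = 2.0 * np.mean(np.diff(crossings)) / fps
    return 60.0 / period_s                         # breaths per minute
```

For a synthetic 0.25 Hz chest motion sampled at 30 fps, this returns roughly 15 breaths per minute.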
In one implementation manner, the obtaining a video of a chest and abdomen area of a user, and dividing each frame of image in the video to obtain a plurality of image blocks corresponding to each frame of image includes:
shooting the chest and abdomen area of the user at any shooting angle to obtain a video of the chest and abdomen area of the user;
And based on the shooting angle, obtaining an image segmentation parameter, and based on the image segmentation parameter, segmenting each frame of image in the video to obtain a plurality of image blocks corresponding to each frame of image.
In one implementation, the calculating the velocity component of each image block to obtain velocity signals of respiratory motion of each image block in different directions includes:
Calculating, for each of the image blocks, a velocity component of the image block in a horizontal direction and a vertical direction based on a difference of the image block in different frame images;
and obtaining the speed signals of the image blocks in the horizontal direction and the vertical direction based on the speed components of the image blocks in each frame of image.
In one implementation manner, the calculating the velocity component of each image block to obtain velocity signals of respiratory motion of each image block in different directions further includes:
and detrending the speed signals in the horizontal direction and the vertical direction to eliminate low-frequency noise and baseline drift.
In one implementation manner, the singular value decomposition is performed on the velocity signals in different directions for each image block to obtain corresponding velocity features, including:
obtaining a speed matrix of the image block in the horizontal direction and the vertical direction according to the speed signal;
And carrying out singular value decomposition on the speed matrix to obtain the eigenvectors and eigenvalues corresponding to the image blocks.
In one implementation manner, the performing signal fusion on the speed signals in different directions based on the speed characteristics to obtain a fused respiration signal of each image block includes:
And selecting the characteristic vector with the largest characteristic value, and carrying out signal fusion on the speed signals in the horizontal direction and the vertical direction based on the selected characteristic vector to obtain a fusion respiratory signal of the image block.
In one implementation, the fusing the fused respiratory signals of all the image blocks to obtain an overall respiratory signal includes:
Selecting a target local respiratory signal according to the average heartbeat interval characteristic of the fusion respiratory signal of the image block;
and carrying out average fusion on the selected target local respiratory signals to obtain a one-dimensional overall respiratory signal.
In a second aspect, embodiments of the present invention also provide a non-contact respiration monitoring system, the system comprising:
The image block acquisition module is used for acquiring a video of a chest and abdomen area of a user, and dividing each frame of image in the video to obtain a plurality of image blocks corresponding to each frame of image;
The speed signal acquisition module is used for carrying out speed component calculation on each image block to obtain speed signals of the respiratory motion of each image block in different directions;
The fusion respiratory signal acquisition module is used for carrying out singular value decomposition on the speed signals in different directions for each image block to obtain corresponding speed characteristics, and carrying out signal fusion on the speed signals in different directions based on the speed characteristics to obtain fusion respiratory signals of each image block;
The overall respiratory signal acquisition module is used for fusing the fused respiratory signals of all the image blocks to obtain an overall respiratory signal;
and the respiratory rate acquisition module is used for accumulating the overall respiratory signal and calculating the heartbeat interval to obtain the respiratory rate of the user.
In a third aspect, an embodiment of the present invention further provides a terminal device, where the terminal device includes a memory, a processor, and a non-contact respiration monitoring program stored in the memory and capable of running on the processor, where the processor implements the steps of the non-contact respiration monitoring method according to any one of the above schemes when executing the non-contact respiration monitoring program.
In a fourth aspect, an embodiment of the present invention further provides a computer readable storage medium, where a non-contact respiration monitoring program is stored on the computer readable storage medium, where the non-contact respiration monitoring program, when executed by a processor, implements the steps of the non-contact respiration monitoring method according to any one of the above schemes.
The beneficial effects are as follows: the invention discloses a non-contact respiration monitoring method, system, terminal equipment and medium. First, a video of the user's chest and abdomen area is acquired and each frame is divided into image blocks. Then, velocity components are calculated for the image blocks to obtain velocity signals of respiratory motion in different directions. Next, singular value decomposition and signal fusion are performed on the velocity signals to obtain a respiratory signal. Finally, the heartbeat interval of the respiratory signal is calculated to obtain the user's respiratory rate. By introducing singular value decomposition into non-contact respiration monitoring and decomposing and fusing the optical-flow velocities in two directions across all image blocks of the video image, the invention generates a more stable and robust respiratory signal, maintaining high signal-extraction precision even when there is an angle deviation between the camera and the measured object.
Drawings
Fig. 1 is a flowchart of a specific implementation of a non-contact respiration monitoring method according to an embodiment of the present invention.
Fig. 2 is a flowchart of non-contact respiration monitoring signal extraction according to an embodiment of the present invention.
Fig. 3 is a functional block diagram of a non-contact respiratory monitoring device provided by an embodiment of the present invention.
Fig. 4 is a schematic block diagram of an internal structure of a terminal device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and effects of the present invention clearer and more specific, the present invention will be described in further detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The flow diagrams depicted in the figures are merely illustrative; not all of the elements, operations, or steps are necessarily included, nor must they be performed in the order described. For example, some operations or steps may be further divided, combined, or partially combined, so the actual order of execution may change according to the actual situation.
It is to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that, in order to clearly describe the technical solutions of the embodiments of the present invention, in the embodiments of the present invention, the words "first", "second", etc. are used to distinguish identical items or similar items having substantially the same function and effect. For example, the first control information and the second control information are merely for distinguishing different control information, and the order of the different control information is not limited.
It will be appreciated by those of skill in the art that the words "first," "second," and the like do not limit the amount and order of execution, and that the words "first," "second," and the like do not necessarily differ.
It should also be understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
Respiratory rate is a key physiological parameter for assessing cardiopulmonary function, and is of great significance in clinical fields such as intensive care, surgical anesthesia and neonatal care. Traditional respiration monitoring methods, such as electrical impedance plethysmography, airflow sensors and capnography, can provide high-precision respiration data, but require direct contact with the patient or the use of invasive equipment, which easily causes discomfort and disturbs the natural breathing pattern. Camera-based respiration monitoring is receiving attention in the medical field as a non-invasive, undisturbed, non-contact technique. The method captures the small movements of the patient's chest and abdomen with a high-resolution camera and extracts respiratory signals by means of an image processing algorithm. Because of its non-contact nature, it is particularly suitable for patients with sensitive skin, infectious diseases, or a need for long-term monitoring, and it enables remote and continuous monitoring, which is of great importance for home care and telemedicine. At present, this technology is mainly realized through the PixFlow optical flow algorithm: the pixel displacement of adjacent frames is analyzed to calculate the amount of motion, the direction with the higher signal-to-noise ratio is selected as the respiratory signal, or the two directions are fused via their included angle to obtain the angular velocity of respiratory motion. However, the algorithm is angle-dependent and can accurately acquire the respiratory signal only when the patient directly faces the camera; if the angle between the camera and the patient is poor, chest and abdomen movements are difficult to capture accurately, and the respiratory signal output by the algorithm becomes inaccurate or unstable.
To solve the above problems, the applicant introduces Singular Value Decomposition (SVD). Specifically, the method first fuses the optical-flow velocities in the horizontal and vertical directions. During fusion, the optical-flow velocity matrix is decomposed into multiple independent components, which effectively reduces the influence of angle changes on the optical-flow velocity. Because the variation of the optical-flow velocity at different angles is complex, decomposing it into independent components allows it to be analyzed and processed more accurately. By fusing these independent components, a more stable and robust respiratory signal can be generated: even if there is an angle deviation between the camera and the measured object, the method maintains high signal-extraction precision. This improvement enhances the reliability and practicality of non-contact respiratory monitoring, especially for situations requiring long-term, multi-angle monitoring, such as home care, telemedicine, and special medical environments such as neonatal care and monitoring of infectious patients.
The non-contact respiration monitoring method provided in this embodiment, as shown in fig. 1, specifically includes the following steps:
Step S100, acquiring a video of a chest and abdomen area of a user, and dividing each frame of image in the video to obtain a plurality of image blocks corresponding to each frame of image.
In this embodiment, as a non-contact respiration detection method, the video image data of the user must be analyzed, so the first step is to collect that data. Specifically, when the user breathes, the lungs contract and expand and the chest volume changes under the contraction and relaxation of the diaphragm and the external intercostal muscles; in other words, respiration ultimately manifests as the rise and fall of the chest and abdomen. Therefore, a video of the user's chest and abdomen area is acquired to capture this rise and fall. Preferably, continuous video is captured at a fixed viewing angle and a fixed distance. Further, after the complete chest and abdomen video is obtained, with the camera angle fixed, each frame in the video is segmented as shown in fig. 2, obtaining a plurality of image blocks per frame. Typically, each frame is divided into M×N image blocks, each serving as an independent signal detection area for subsequent extraction of the respiratory signal. In this way, M×N image-block video frame sequences are obtained, one per grid cell, in which each frame is one image block. Combining the image blocks of the same frame across all sequences reconstructs the complete picture of the original video at that frame.
In one implementation manner, a video of a chest and abdomen area of a user is obtained, each frame of image in the video is segmented, and a plurality of image blocks corresponding to each frame of image are obtained, specifically comprising the following steps:
Step S110, shooting a chest and abdomen area of a user at any shooting angle to obtain a video of the chest and abdomen area of the user;
Step S120, based on the shooting angle, obtaining an image segmentation parameter, and based on the image segmentation parameter, segmenting each frame of image in the video to obtain a plurality of image blocks corresponding to each frame of image.
In this embodiment, when capturing video of the user's chest and abdomen area, a fixed shooting angle and distance are required, so that the spatial pose of the capturing device relative to the chest and abdomen area remains fixed during shooting and the image size stays consistent; this ensures that the size and spatial orientation of the segmented image blocks are consistent across video frames and can be used to calculate how each image block changes over consecutive frames. Although the shooting angle must be fixed, its choice is not restricted, provided the rise and fall of the user's chest and abdomen can be captured. Along with the angle, the relative distance between the capturing device and the user must also be fixed, generally between 50 cm and 100 cm; preferably, it is set to 50 cm in this embodiment. Further, once the shooting angle and distance are fixed, the relevant image segmentation parameters can be derived from them. These parameters include at least the image size and proportion, and the position and angle of the user's chest and abdomen in the image. Specifically, the proportional relationship between the actual size of an object and its imaged size can be determined by combining the shooting distance with the focal length or other camera parameters; with a fixed shooting distance, different focal lengths make the chest and abdomen appear at different sizes in the image. The image size parameter specifies the width and height of the image in pixels and is used to determine the size and number of segmented image blocks.
In other words, when dividing each video frame of the user's chest and abdomen into M×N image blocks, the pixel range of each image block is calculated from the above size information, yielding the block-count parameters M and N.
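The grid segmentation described above can be sketched as follows. This is an illustrative NumPy implementation under the stated assumptions (equal-size blocks; edge pixels beyond an exact multiple of the block size are simply cropped), not the patented code.

```python
import numpy as np

def split_into_blocks(frame, M, N):
    """Divide one H x W frame into an M x N grid of equal-size image blocks
    (pixels beyond an exact multiple of the block size are cropped)."""
    H, W = frame.shape[:2]
    bh, bw = H // M, W // N
    return [frame[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            for i in range(M) for j in range(N)]

def block_sequences(frames, M, N):
    """Regroup per-frame blocks into M*N image-block video frame sequences:
    one (T, bh, bw) array per grid cell."""
    per_frame = [split_into_blocks(f, M, N) for f in frames]
    return [np.stack([blocks[k] for blocks in per_frame]) for k in range(M * N)]
```

Stacking the same grid cell across all T frames yields the per-block video frame sequences used in the later steps.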
And step 200, calculating the velocity component of each image block to obtain velocity signals of the respiratory motion of each image block in different directions.
In this embodiment, after dividing each frame of the video into M×N image blocks, M×N image-block video frame sequences are obtained. For each sequence, the velocity component of each image block can be calculated from the difference between that block and the corresponding blocks in the preceding and following frames of the same sequence. The velocity signal of a sequence is obtained by combining the velocity components of each of its frames. Further, combining the velocity signals of all image-block sequences yields the velocity signals of the respiratory motion.
In one implementation manner, the calculating the velocity component of each image block to obtain velocity signals of respiratory motion of each image block in different directions specifically includes the following steps:
Step S210, calculating the velocity components of the image blocks in the horizontal direction and the vertical direction based on the differences of the image blocks in different frame images for each image block;
Step S220, based on the velocity component of the image block in each frame of image, obtaining velocity signals of the image block in the horizontal direction and the vertical direction.
In this embodiment, after dividing each frame of the video into M×N image blocks, M×N image-block video frame sequences are obtained. For each sequence, as shown in fig. 2, the velocity component of each image block can be calculated with the PixFlow algorithm from the difference between that block and the corresponding blocks in the preceding and following frames. Specifically, with the shooting angle and distance fixed and the ambient light unchanged, the brightness of a moving pixel does not change as it moves between frames. Because the rise and fall of the chest and abdomen during respiration is small, the brightness change of the pixels in each image block is used, via the optical-flow constraint equation, to solve for the average motion speed of the block's pixels in the horizontal and vertical directions, giving the block's horizontal and vertical velocity components. The horizontal and vertical velocity signals of a sequence are then obtained by combining the horizontal and vertical velocity components of each of its image blocks.
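The PixFlow internals are not given in this document, but the optical-flow constraint step it relies on can be sketched with a standard least-squares solution of Ix·u + Iy·v + It = 0 over all pixels of a block. The following is an illustrative sketch under the brightness-constancy assumption stated above, not the patented algorithm:

```python
import numpy as np

def block_velocity(prev_block, next_block):
    """Least-squares solution of the optical-flow constraint
    Ix*u + Iy*v + It = 0 over all pixels of one image block, giving the
    block's average horizontal (u) and vertical (v) velocity in px/frame."""
    prev = prev_block.astype(float)
    Iy, Ix = np.gradient(prev)            # spatial gradients (rows, cols)
    It = next_block.astype(float) - prev  # temporal gradient between frames
    A = np.column_stack([Ix.ravel(), Iy.ravel()])
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```

Applied to every block of every consecutive frame pair, this yields the per-block horizontal and vertical velocity signals over time.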
In one implementation manner, the calculating the velocity component of each image block to obtain velocity signals of respiratory motion of each image block in different directions specifically further includes the following steps:
And step S230, carrying out trend removal on the speed signals in the horizontal direction and the vertical direction, and eliminating low-frequency noise and baseline drift.
In this embodiment, while acquiring the horizontal and vertical velocity signals of respiratory motion, noise is mixed into the velocity signals by environmental factors such as low-frequency vibration near the monitoring device and low-frequency disturbance of the power system. Low-frequency noise causes unwanted fluctuations in the velocity signal that mask the true respiratory signal characteristics. In addition, slight body movement, posture adjustment, or sensor instability produces baseline drift, that is, a slow change of the signal's direct-current component over time, which alters the overall trend of the velocity signal. Therefore, as shown in fig. 2, the horizontal and vertical velocity signals are detrended in a preprocessing step to eliminate low-frequency noise and baseline drift, ensuring signal accuracy. Detrending methods include high-pass filtering, polynomial fitting removal, and empirical mode decomposition. Preferably, in this embodiment the velocity signal is preprocessed by high-pass filtering: first, the filter coefficients are determined from the chosen filter type and cut-off frequency; then, the horizontal and vertical velocity signals are each passed through the high-pass filter. In this embodiment, the filter design and signal filtering are implemented with a digital filter design tool.
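Assuming the high-pass-filtering variant described above, a zero-phase Butterworth detrending step might look like the following. The 0.1 Hz cut-off and 4th order are assumed values chosen below normal respiratory frequencies; the patent does not specify them.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def detrend_highpass(signal, fs, cutoff=0.1, order=4):
    """Zero-phase Butterworth high-pass filtering of a velocity signal
    sampled at fs Hz; removes baseline drift and low-frequency noise
    below `cutoff` Hz (0.1 Hz is an assumed value, below normal
    respiratory frequencies)."""
    sos = butter(order, cutoff, btype="highpass", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)
```

Second-order sections with forward-backward filtering (sosfiltfilt) keep the filter numerically stable at such a low normalized cut-off and introduce no phase shift into the respiratory waveform.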
Step S300, performing singular value decomposition on the speed signals in different directions for each image block to obtain corresponding speed characteristics, and performing signal fusion on the speed signals in different directions based on the speed characteristics to obtain fusion respiration signals of each image block.
In this embodiment, after obtaining velocity signals of respiratory motion in different directions, singular value decomposition is further required to be performed on the velocity signals to obtain corresponding velocity features, and fusion is performed on velocity signals of different directions of an image block based on the velocity features to obtain a fused respiratory signal of the image block.
In one implementation manner, for each image block, singular value decomposition is performed on the velocity signals in different directions to obtain corresponding velocity characteristics, and the method specifically includes the following steps:
Step S310, obtaining a speed matrix of the image block in the horizontal direction and the vertical direction according to the speed signal;
Step S320, performing singular value decomposition on the velocity matrix to obtain the eigenvectors and eigenvalues corresponding to the image block.
In this embodiment, after the velocity component calculation is completed, a time-varying velocity signal in the horizontal direction and the vertical direction is obtained for each image block. These velocity signals are time series, with one velocity value per time point. The one-dimensional velocity signals are converted into matrix form for the subsequent singular value decomposition. Specifically, the velocity signal of each image block covers T time points, and the horizontal and vertical velocity signals are stacked into a velocity matrix of T rows containing the T velocity values of the horizontal velocity signal and the T velocity values of the vertical velocity signal.
After obtaining the velocity matrix of the image block in the horizontal and vertical directions, as shown in fig. 2, singular value decomposition is performed on it to obtain a left singular matrix, a diagonal matrix of singular values, and a right singular matrix. From these three matrices, the eigenvalues and eigenvectors corresponding to the horizontal and vertical velocity matrix can be obtained.
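The decomposition described above can be sketched with NumPy as follows (an illustrative sketch, not the patented code). For a T×2 velocity matrix V, the right singular vectors are exactly the eigenvectors of VᵀV, and the squared singular values are the corresponding eigenvalues:

```python
import numpy as np

def velocity_svd(vx, vy):
    """Stack the horizontal/vertical velocity signals into a T x 2 matrix
    and decompose it. The right singular vectors are the eigenvectors of
    V^T V, and the squared singular values are the eigenvalues."""
    V = np.column_stack([vx, vy])                     # T x 2 velocity matrix
    U, s, Vt = np.linalg.svd(V, full_matrices=False)  # V = U @ diag(s) @ Vt
    eigvecs = Vt.T    # columns are eigenvectors, ordered by eigenvalue
    eigvals = s ** 2  # eigenvalues = signal energy in each direction
    return eigvecs, eigvals
```

When the two velocity channels are strongly correlated, as during clean respiratory motion, nearly all the energy concentrates in the first eigenvalue.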
In one implementation manner, the signal fusion is performed on the speed signals in different directions based on the speed characteristics to obtain a fused respiratory signal of each image block, and the method specifically includes the following steps:
Step S330, selecting the eigenvector with the largest eigenvalue, and fusing the velocity signals in the horizontal and vertical directions based on the selected eigenvector to obtain the fused respiratory signal of the image block.
In this embodiment, after singular value decomposition is performed on the velocity matrices in the horizontal and vertical directions, a series of eigenvectors and their corresponding eigenvalues (the squares of the singular values) are obtained. The magnitude of an eigenvalue reflects how important the mode represented by its eigenvector is in the original velocity signal: the larger the eigenvalue, the more signal energy the eigenvector carries, and the better it represents the dominant characteristics of the respiratory motion. The eigenvalues of all eigenvectors are therefore traversed and compared to determine the eigenvector with the largest eigenvalue. After this eigenvector is selected, as shown in fig. 2, the velocity signals in the horizontal and vertical directions are fused. Preferably, in the present embodiment, the horizontal and vertical velocity signals of the image block are fused by weighted summation: according to the selected eigenvector, the horizontal and vertical velocity signals are assigned the weights given by the corresponding eigenvector elements and then summed. In other words, the horizontal and vertical velocity signals are fused according to the importance indicated by the eigenvector, yielding a fused respiratory signal of the image block that comprehensively reflects the respiratory motion.
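A minimal sketch of this weighted fusion (assuming NumPy; the helper name `fuse_block_velocities` and the sign convention are illustrative, not from the embodiment):

```python
import numpy as np

def fuse_block_velocities(vx: np.ndarray, vy: np.ndarray) -> np.ndarray:
    """Fuse one block's horizontal/vertical velocity signals.

    The eigenvector belonging to the largest eigenvalue (equivalently,
    the first right singular vector of the T x 2 velocity matrix)
    supplies the weights for the weighted summation of the two signals.
    """
    V = np.column_stack([vx, vy])               # T x 2 velocity matrix
    _, _, Wt = np.linalg.svd(V, full_matrices=False)
    w = Wt[0]                                   # dominant eigenvector
    if w[1] < 0:                                # a singular vector's sign
        w = -w                                  # is arbitrary; fix it
    return V @ w                                # weighted sum per frame

# Example: respiration projected mostly onto the vertical axis.
t = np.arange(300) / 30.0
resp = np.sin(2 * np.pi * 0.25 * t)
fused = fuse_block_velocities(0.3 * resp, 0.9 * resp)
```

Because the weights follow the dominant direction of motion, the fused signal stays close to the underlying respiration waveform regardless of how the motion is split between the two axes.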
Step S400, fusing the fused respiratory signals of all the image blocks to obtain an overall respiratory signal.
In this embodiment, after the original video image is segmented, a plurality of image block video frame sequences are obtained, and a fused respiratory signal can be calculated for each image block. Although the fused respiratory signal of each image block contains information about the respiratory motion, each signal may still carry fluctuations and noise due to factors such as the position and size of the image block. As shown in fig. 2, the fused respiratory signals of all the image blocks are therefore averaged, which smooths the signal, reduces the influence of noise, and integrates the respiratory information of the image blocks into a single signal representing the whole respiratory motion. Specifically, the fused respiratory signals of the m×n image blocks are averaged to obtain the overall respiratory signal: for a video image that is cut into a sequence of m×n image block video frames and contains T time points, at any time point t of the T time points, the respiratory signal values of the m×n image blocks at time point t are averaged to obtain the overall respiratory signal value at time point t. Combining the overall respiratory signal values of all T time points yields the overall respiratory signal.
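The per-time-point averaging described above can be sketched as follows (synthetic data; the grid size M×N, T, and the noise level are illustrative assumptions):

```python
import numpy as np

# Illustrative shapes: an M x N grid of image blocks over T time points.
M, N, T = 4, 4, 300
rng = np.random.default_rng(1)
t = np.arange(T) / 30.0
resp = np.sin(2 * np.pi * 0.25 * t)     # shared respiration component

# fused[i, j] holds the fused respiratory signal of block (i, j):
# the shared respiration component plus block-specific noise.
fused = resp + 0.2 * rng.standard_normal((M, N, T))

# At each time point, average the M*N block values; the block-specific
# noise averages out while the shared respiration component remains.
overall = fused.reshape(M * N, T).mean(axis=0)
```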
In one implementation manner, the fusing respiratory signals of all the image blocks to obtain an overall respiratory signal specifically includes the following steps:
Step S410, selecting a target local respiratory signal according to the average heartbeat interval characteristic of the fusion respiratory signal of the image block;
Step S420, carrying out average fusion on the selected target local respiratory signals to obtain one-dimensional integral respiratory signals.
In this embodiment, the fused respiratory signal of an image block may contain noise in any given video frame, so the image blocks need to be screened before their fused respiratory signals are averaged. Specifically, since the respiratory signal exhibits a periodic variation similar to the fluctuation of a heartbeat, an amplitude threshold is preset, and a feature point analogous to a heartbeat is considered detected whenever the signal value exceeds this threshold. The time intervals between adjacent feature points are then calculated and averaged to obtain the average heartbeat interval (mIBI, mean inter-beat interval). The normal range of the average heartbeat interval is determined according to the age of the user: for an adult, a normal respiration rate of about 12-20 breaths per minute corresponds to an average interval of about 3-5 seconds, while the range for an infant has to be determined specifically according to its age. Based on the determined range, as shown in fig. 2, the average heartbeat interval of the fused respiratory signal of every image block is calculated, and the fused respiratory signals of the image blocks whose average heartbeat interval lies in the normal range are selected as high-quality target local respiratory signals. This reduces the influence of noise interference and of non-respiratory body motion, as well as signal anomalies caused by unreasonable image block segmentation. Finally, the selected target local respiratory signals are averaged to obtain a one-dimensional overall respiratory signal.
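The mIBI-based screening can be sketched as below (the simple above-threshold local-maximum detector and the helper name `mean_interval_s` are illustrative; the adult 3-5 s range is the one given in the text):

```python
import numpy as np

def mean_interval_s(signal: np.ndarray, fs: float, thr: float = 0.0) -> float:
    """Mean interval (seconds) between above-threshold local maxima."""
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i] > thr
             and signal[i] >= signal[i - 1]
             and signal[i] > signal[i + 1]]
    if len(peaks) < 2:
        return float("inf")                  # too few feature points
    return float(np.mean(np.diff(peaks))) / fs

fs = 30.0
t = np.arange(600) / fs                      # 20 s at 30 fps
blocks = {
    "good": np.sin(2 * np.pi * 0.25 * t),    # ~4 s interval -> kept
    "noisy": np.sin(2 * np.pi * 3.0 * t),    # ~0.33 s interval -> rejected
}

lo, hi = 3.0, 5.0   # adult range from the text (12-20 breaths/min)
selected = {name: sig for name, sig in blocks.items()
            if lo <= mean_interval_s(sig, fs) <= hi}
```

Only the block whose mIBI falls in the normal range survives as a target local respiratory signal; the others are excluded from the averaging.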
Step S500, performing cumulative processing on the overall respiratory signal and calculating the heartbeat interval to obtain the respiration rate of the user.
In this embodiment, after the overall respiratory signal is obtained, it is subjected to cumulative processing, which smooths the signal and reduces the influence of noise, yielding the final overall respiratory motion signal. A feature point of each respiratory cycle in this signal is then determined; the peak or trough of the respiratory signal is generally selected as the feature point. Preferably, the feature points are detected using a thresholding method or a derivative method. After the feature points of each respiratory cycle have been determined, the time intervals between adjacent feature points are calculated to obtain the breathing intervals. The respiration rate of the user is then calculated by the average-interval method or a piecewise calculation method, as shown in fig. 2.
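The rate computation can be sketched as follows (a running-mean stands in for the cumulative processing, and an above-mean thresholding peak detector for the feature-point detection; both are illustrative choices consistent with, but not mandated by, the embodiment):

```python
import numpy as np

def respiration_rate_bpm(signal: np.ndarray, fs: float) -> float:
    """Respiration rate (breaths/min) via the average-interval method."""
    # Running-mean smoothing stands in for the cumulative processing.
    w = int(fs // 3)
    smooth = np.convolve(signal, np.ones(w) / w, mode="same")

    # Thresholding peak detector: above-mean local maxima are the
    # feature points of the respiratory cycles.
    thr = smooth.mean()
    peaks = [i for i in range(1, len(smooth) - 1)
             if smooth[i] > thr
             and smooth[i] >= smooth[i - 1]
             and smooth[i] > smooth[i + 1]]

    intervals = np.diff(peaks) / fs           # breathing intervals (s)
    return 60.0 / float(np.mean(intervals))   # breaths per minute

fs = 30.0
t = np.arange(20 * int(fs)) / fs              # 20 s of signal at 30 fps
overall = np.sin(2 * np.pi * 0.25 * t)        # 0.25 Hz = 15 breaths/min
rate = respiration_rate_bpm(overall, fs)
```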
In summary, under the technical scheme of this embodiment, introducing singular value decomposition into non-contact respiration monitoring solves the loss of respiratory signal extraction accuracy caused by the angular dependence of traditional optical flow algorithms. Specifically, the optical flow velocities in the two directions of all image blocks of the video image are decomposed and fused to ultimately generate a more stable and robust respiration signal. Even when there is an angular deviation between the camera and the measured subject, the technical scheme maintains high signal extraction precision, improving the reliability and practicability of non-contact respiration monitoring.
As shown in fig. 3, an embodiment of the present invention provides a non-contact respiration monitoring system, which includes an image block acquisition module 10, a speed signal acquisition module 20, a fused respiration signal acquisition module 30 of the image block, an overall respiration signal acquisition module 40, and a respiration rate acquisition module 50.
The image block acquisition module 10 is used for acquiring a video of the chest and abdomen area of a user and segmenting each frame of image in the video to obtain a plurality of image blocks corresponding to each frame of image. The speed signal acquisition module 20 is used for performing velocity component calculation on each image block to obtain velocity signals of the respiratory motion of each image block in different directions. The fused respiratory signal acquisition module 30 of the image block is used for performing, for each image block, singular value decomposition on the velocity signals in different directions to obtain corresponding velocity features, and fusing the velocity signals in different directions based on the velocity features to obtain a fused respiratory signal of each image block. The overall respiratory signal acquisition module 40 is used for fusing the fused respiratory signals of all the image blocks to obtain an overall respiratory signal. The respiration rate acquisition module 50 is used for performing cumulative processing on the overall respiratory signal and calculating the heartbeat interval to obtain the respiration rate of the user.
In one implementation, the image block acquisition module includes:
the chest and abdomen video shooting unit is used for shooting the chest and abdomen area of the user at any shooting angle to obtain a video of the chest and abdomen area of the user;
The image segmentation unit is used for obtaining image segmentation parameters based on the shooting angles, and segmenting each frame of image in the video based on the image segmentation parameters to obtain a plurality of image blocks corresponding to each frame of image.
In one implementation, the speed signal acquisition module includes:
a speed component acquisition unit configured to calculate, for each of the image blocks, a speed component of the image block in a horizontal direction and a vertical direction based on a difference of the image block in different frame images;
And the speed signal acquisition unit is used for acquiring the speed signal of the image block in the horizontal direction and the vertical direction based on the speed component of the image block in each frame of image.
In one implementation, the speed signal acquisition module further includes:
And the speed signal preprocessing unit is used for removing trend of the speed signals in the horizontal direction and the vertical direction and eliminating low-frequency noise and baseline drift.
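A minimal sketch of such detrending (assuming a least-squares linear fit to remove baseline drift; a band-pass filter would be an equally valid way to also suppress low-frequency noise):

```python
import numpy as np

def detrend(v: np.ndarray) -> np.ndarray:
    """Remove baseline drift by subtracting the least-squares line fit."""
    k = np.arange(v.size)
    slope, intercept = np.polyfit(k, v, 1)
    return v - (slope * k + intercept)

t = np.arange(300) / 30.0                 # 10 s at 30 fps
drift = 0.02 * t                          # slow baseline drift
vx = np.sin(2 * np.pi * 0.25 * t) + drift
vx_clean = detrend(vx)                    # respiration component remains
```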
In one implementation, the fused respiratory signal acquisition module of the image block includes:
a speed matrix obtaining unit, configured to obtain a speed matrix in a horizontal direction and a vertical direction of the image block according to the speed signal;
And the singular value decomposition unit is used for carrying out singular value decomposition on the velocity matrix to obtain the eigenvectors and eigenvalues corresponding to the image blocks.
And the fused respiratory signal acquisition unit of the image block is used for selecting the eigenvector with the largest eigenvalue, and fusing the velocity signals in the horizontal and vertical directions based on the selected eigenvector to obtain the fused respiratory signal of the image block.
In one implementation, the integral respiratory signal acquisition module includes:
The target local respiratory signal selecting unit is used for selecting a target local respiratory signal according to the average heartbeat interval characteristic of the fusion respiratory signal of the image block;
and the whole respiratory signal acquisition unit is used for carrying out average fusion on the selected target local respiratory signals to obtain one-dimensional whole respiratory signals.
Based on the above embodiment, the present invention also provides a terminal device, and a functional block diagram thereof may be shown in fig. 4. The terminal equipment comprises a processor, a memory, a network interface, a display screen and a temperature sensor which are connected through a system bus. Wherein the processor of the terminal device is adapted to provide computing and control capabilities. The memory of the terminal device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the terminal device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a non-contact respiration monitoring method. The display screen of the terminal equipment can be a liquid crystal display screen or an electronic ink display screen, and the temperature sensor of the terminal equipment is preset in the terminal equipment and is used for detecting the running temperature of the internal equipment.
It will be appreciated by those skilled in the art that the functional block diagram shown in fig. 4 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the terminal device to which the present inventive arrangements are applied, and that a particular terminal device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, a terminal device is provided that includes a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs comprising instructions for:
Acquiring a video of a chest and abdomen area of a user, and dividing each frame of image in the video to obtain a plurality of image blocks corresponding to each frame of image;
calculating the velocity component of the image block to obtain velocity signals of respiratory motion in different directions;
Singular value decomposition and signal fusion are carried out on the speed signal, and a respiratory signal is obtained;
And calculating the heartbeat interval of the breathing signal to obtain the breathing rate of the user.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in embodiments provided herein may include non-volatile and/or volatile memory. The nonvolatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
In summary, the present invention provides a non-contact respiration monitoring method, system, terminal device and medium. Compared with the prior art, the method first acquires a video of the chest and abdomen area of a user and segments each frame of image in the video to obtain a plurality of image blocks corresponding to each frame. Velocity component calculation is then performed on the image blocks to obtain velocity signals of the respiratory motion in different directions, after which singular value decomposition and signal fusion are applied to the velocity signals to obtain a respiration signal. Finally, the heartbeat interval of the respiration signal is calculated to obtain the respiration rate of the user. By introducing singular value decomposition into non-contact respiration monitoring and decomposing and fusing the optical flow velocities in the two directions of all image blocks of the video image, the invention generates a more stable and robust respiration signal and maintains high signal extraction precision even when there is an angular deviation between the camera and the measured subject.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The foregoing examples illustrate only a few embodiments of the application and are described in relative detail, but they should not be construed as limiting the scope of the application. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the spirit of the application, all of which fall within the scope of protection of the application. Accordingly, the scope of the application should be determined by the appended claims.

Claims (8)

CN202510379645.9A · Filed 2025-03-28 · Non-contact respiration monitoring method, system, terminal equipment and medium · Active · CN119867713B (en)

Publications (2)

CN119867713A (en) — 2025-04-25
CN119867713B (en) — 2025-06-06
