Disclosure of Invention
The invention aims to provide a safety monitoring video optimized transmission method and a video monitoring terminal. Two adjacent frames of images are extracted, preprocessed, and differenced; characteristics such as motion intensity and change density are extracted and input into a deep learning model to analyze dynamic changes, so that the picture scene is divided into a fast dynamic change scene and a normal dynamic change scene. For a normal scene, the initial compression algorithm is used; for a fast dynamic scene, the video coding bit rate is dynamically adjusted to ensure that details and fast moving targets are captured. The method balances video quality against compression efficiency, ensures that high-quality images are available at key moments, facilitates case investigation and evidence collection, and improves the performance and practicability of the monitoring system, thereby solving the problems in the background art.
In order to achieve the above purpose, the invention provides a method for optimizing transmission of a safety monitoring video, which comprises the following steps:
extracting a current frame and a previous frame from a continuous monitoring video frame sequence, further extracting two adjacent frame images from a video stream, and providing basic data for subsequent dynamic change analysis;
Preprocessing the extracted two adjacent frames of images, calculating the difference of the two adjacent frames of images by using an image difference technology, generating a difference image by comparing pixel levels, and highlighting the change area between frames;
Extracting dynamic change characteristic information of the differential image, analyzing the dynamic change characteristic information, inputting the dynamic change characteristic information obtained by analysis into a trained deep learning model, and analyzing the dynamic change condition of a picture scene;
Dividing the dynamic change condition of the picture scene into a fast dynamic change scene and a normal dynamic change scene according to the analysis result of the deep learning model;
And for a normal dynamic change scene, adopting an initial compression algorithm to continue video stream compression, and for a fast dynamic change scene, dynamically adjusting the bit rate of video coding through the compression algorithm to ensure that a fast moving target and complex details in a video image are captured.
Preferably, the difference between two adjacent frames of images is calculated by using an image difference technology, and a difference image is generated by comparing pixel levels, and the specific steps are as follows:
Converting two adjacent frames of images into gray images;
The images are adjusted by using a feature point matching method through an image registration technology, so that the images are compared under the same coordinate system, and the adjacent two frames of images are ensured to be aligned in space;
carrying out smoothing treatment on the gray level image to remove noise and tiny fluctuation;
calculating the difference of the two adjacent frames of gray images pixel by pixel, and calculating, for each pixel position, the difference of the pixel values corresponding to the current frame and the previous frame to generate a difference image, wherein the calculation formula of the difference image is as follows: D_t(x, y) = |I_t(x, y) − I_{t−1}(x, y)|, wherein D_t(x, y) represents the difference value at pixel position (x, y) in the differential image, specifically representing the difference in brightness of the two adjacent frames of images at that pixel position, I_t and I_{t−1} represent the images at time t and time t−1 respectively, (x, y) are the pixel coordinates, and I_t(x, y) and I_{t−1}(x, y) are the pixel values of the two frames of images at pixel position (x, y);
binarizing the differential image, the specific process being: comparing the differential value with a preset differential reference threshold, marking pixels whose differential value is greater than or equal to the differential reference threshold as white, and marking pixels whose differential value is smaller than the differential reference threshold as black.
Preferably, the extracted dynamic change characteristic information of the differential image comprises motion intensity and change density, wherein the motion intensity refers to the overall intensity of motion in the image, and the change density refers to the proportion of changed pixels to the total number of pixels in the image. The motion intensity and change density of the differential image are analyzed to generate a motion intensity index and a change density index; the two indexes are input into a trained deep learning model to generate a dynamic change quantization coefficient, and the dynamic change condition of the picture scene is analyzed through the dynamic change quantization coefficient.
Preferably, the dynamic change quantized coefficient generated after the differential image is analyzed is compared with a preset reference threshold value of the dynamic change quantized coefficient, and the dynamic change condition of the picture scene is divided, wherein the dividing process is as follows:
if the dynamic change quantization coefficient is larger than or equal to the dynamic change quantization coefficient reference threshold, dividing the dynamic change condition of the picture scene into fast dynamic change scenes;
If the dynamic change quantized coefficient is smaller than the dynamic change quantized coefficient reference threshold, dividing the dynamic change condition of the picture scene into normal dynamic change scenes.
Preferably, for fast dynamic scenes, the specific steps for dynamically adjusting the bit rate of video coding by compression algorithm are as follows:
comparing the calculated dynamic change quantization coefficient with a preset dynamic change quantization coefficient reference threshold and calculating a dynamic change coefficient deviation value, the calculation expression being: ΔQ = Q − Q_ref, wherein ΔQ represents the dynamic change coefficient deviation value, Q represents the dynamic change quantization coefficient, Q_ref represents the dynamic change quantization coefficient reference threshold, and Q ≥ Q_ref;
According to the dynamic change coefficient deviation value ΔQ, dynamically adjusting the video coding bit rate: setting the initial video coding bit rate as B_0, the adjusted actual bit rate as B_actual, and the adjustment coefficient as K, calculated by the following formulas: B_actual = B_0 · K and K = 1 + α · tanh(β · ΔQ), wherein α is the maximum variation ratio of the adjustment coefficient, β is the variation sensitivity coefficient, and tanh is the hyperbolic tangent function used to smooth the variation of the adjustment coefficient;
by means of the dynamically adjusted actual bit rate B_actual, the video encoding process is optimized to ensure that fast moving targets and complex details in the video image are captured.
Preferably, after analyzing the motion intensity of the differential image, the step of generating the motion intensity index is as follows:
calculating the motion intensity for each pixel in the differential image, and introducing a weight function w(x, y) to enhance the motion intensity of certain areas, the weighted motion intensity being calculated by the following expression: M_w(x, y) = w(x, y) · D_t(x, y), wherein M_w(x, y) represents the weighted motion intensity and D_t(x, y) is the differential value at pixel position (x, y);
Summing the motion intensities over the whole differential image to obtain the overall motion intensity, the calculation expression being: M_total = Σ_{x=1}^{W} Σ_{y=1}^{H} M_w(x, y), wherein W and H represent the width and height of the image respectively, and M_total represents the overall motion intensity;
normalizing the overall motion intensity M_total for comparison between different images, the normalized expression being: M_norm = M_total / (W × H), wherein M_norm represents the normalized motion intensity;
taking the motion intensity change of consecutive multiple frames into consideration, introducing a time-weighted average to calculate the motion intensity index, the calculation expression being: S_t = λ · M_norm + (1 − λ) · S_{t−1}, wherein S_t represents the motion intensity index, λ represents a time weighting coefficient ranging from 0 to 1 that controls the weight ratio between the motion intensity of the current frame and that of the previous frame, and S_{t−1} is the motion intensity index of the previous frame.
Preferably, the step of generating the change density index after analyzing the change density in the differential image is as follows:
applying a threshold T to the differential image to determine changed pixels, a changed pixel being defined as: C(x, y) = 1 if D_t(x, y) ≥ T, and C(x, y) = 0 otherwise;
Counting the total number of changed pixels in the differential image: N_c = Σ_{x=1}^{W} Σ_{y=1}^{H} C(x, y);
The change density is calculated, the calculation expression being: ρ = N_c / N_total, wherein N_c is the total number of changed pixels, N_total is the total number of pixels in the image, and ρ represents the change density;
calculating a change density index based on the change density, the calculation expression being: E_ρ = 1 / (1 + e^(−k·(ρ − ρ_d))), wherein E_ρ indicates the change density index, ρ_d indicates the desired change density threshold, and k is a sensitivity parameter controlling the steepness of the response.
A video monitoring terminal comprises a data extraction module, an image difference module, a feature extraction and analysis module, a scene classification module and a compression strategy adjustment module;
The data extraction module extracts a current frame and a previous frame from a continuous monitoring video frame sequence, further extracts two adjacent frame images from a video stream, and provides basic data for subsequent dynamic change analysis;
The image difference module is used for preprocessing the extracted two adjacent frames of images, calculating the difference of the two adjacent frames of images by using an image difference technology, generating a difference image through pixel-level comparison, and highlighting the change area between frames;
The feature extraction and analysis module extracts dynamic change feature information of the differential image, analyzes the dynamic change feature information, inputs the dynamic change feature information obtained by analysis into a trained deep learning model, and analyzes the dynamic change condition of the picture scene;
the scene classification module is used for dividing the dynamic change condition of the picture scene into a fast dynamic change scene and a normal dynamic change scene according to the analysis result of the deep learning model;
And the compression strategy adjustment module is used for continuing to compress the video stream with the initial compression algorithm for a normal dynamic change scene, and, for a fast dynamic change scene, dynamically adjusting the bit rate of video coding through the compression algorithm so as to ensure that fast moving targets and complex details in the video image are captured.
In the technical scheme, the invention has the technical effects and advantages that:
By extracting two adjacent frames of images and performing preprocessing and differential calculation, the invention extracts dynamic change characteristic information such as motion intensity and change density, inputs the characteristic information into a deep learning model, and accurately analyzes the dynamic change condition of the picture scene. According to the analysis result of the deep learning model, the picture scene is divided into a fast dynamic change scene and a normal dynamic change scene: in a normal dynamic change scene, the initial compression algorithm is used to continue video stream compression, while in a fast dynamic change scene, the bit rate of video coding is dynamically adjusted to ensure that fast moving targets and complex details are captured. This balances video quality against compression efficiency, ensures high-quality image detail from the monitoring video at key moments, guarantees the reliability and effectiveness of safety monitoring, facilitates case investigation and evidence collection, and ultimately improves the overall performance and practicability of the monitoring system.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the example embodiments may be embodied in many different forms and should not be construed as limited to the examples set forth herein, but rather, the example embodiments are provided so that this disclosure will be more thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art.
The invention provides a safety monitoring video optimized transmission method as shown in fig. 1, which comprises the following steps:
extracting a current frame and a previous frame from a continuous monitoring video frame sequence, further extracting two adjacent frame images from a video stream, and providing basic data for subsequent dynamic change analysis;
Two images, the current frame and the previous frame, are selected from the continuous monitoring video frame sequence for subsequent dynamic change analysis. The two frames are adjacent, can reflect changes in the video over a short time, and provide basic data for analyzing changes in the video content.
The main function of extracting two adjacent frames of images is to compare and analyze the changes between video frames. By comparing adjacent frames, it is possible to identify which parts of the video have changed, such as the trajectory, speed, direction, etc. of the moving object. This is important for analysis of monitoring scenarios, as it can help identify and track dynamic events, such as intrusions, moving objects, etc., thereby improving the efficiency and accuracy of the monitoring system.
When extracting two adjacent frames of images, the following points need to be noted:
Inter-frame spacing: ensure that the time interval between the two extracted frames is short enough to accurately capture dynamic changes over a short period. If the interval is too long, subtle but important changes may be missed.
Image quality: the extracted frame images must be kept at high quality to ensure the accuracy of subsequent analysis. Noise, blurring, and similar problems in the image can affect the detection and analysis of dynamic changes.
Synchronization: ensure that the time stamps and ordering of the images are accurate when extracting frames, to avoid analysis errors caused by an out-of-sync frame sequence.
Preprocessing the extracted two adjacent frames of images, calculating the difference of the two adjacent frames of images by using an image difference technology, generating a difference image by comparing pixel levels, and highlighting the change area between frames;
preprocessing the extracted two adjacent frames of images generally comprises the steps of graying processing, noise filtering, edge detection and the like. The preprocessing functions to reduce redundant information and noise in the image, highlight the main features of the image, and provide clearer and accurate data input for subsequent dynamic change analysis, thereby improving the effectiveness and reliability of analysis.
The difference between two adjacent frames of images is calculated by using an image difference technology, and a difference image is generated by comparing pixel levels, and the specific steps are as follows:
Converting two adjacent frames of images into gray images;
The graying process is to convert the color image into a gray image, and only the brightness information is kept and the color information is removed. This may simplify subsequent processing steps while reducing computational effort and making the differencing operation more direct and efficient. The graying process may be implemented by a weighted average method or directly taking a single color channel.
The images are adjusted by using a feature point matching method through an image registration technology, so that the images are compared under the same coordinate system, and the adjacent two frames of images are ensured to be aligned in space;
if the camera slightly moves or shakes in the shooting process, the error can be corrected by image alignment, and the accuracy of a difference result is ensured.
Carrying out smoothing treatment on the gray level image to remove noise and tiny fluctuation;
the smoothing process may be implemented by a convolution operation, smoothing the image with a Gaussian kernel to reduce the influence of high-frequency noise. Smoothing makes the subsequent differential image smoother and more stable, reducing the chance that noise is misjudged as a change area.
Calculating the difference of the two adjacent frames of gray images pixel by pixel, and calculating, for each pixel position, the difference of the pixel values corresponding to the current frame and the previous frame to generate a difference image, wherein the calculation formula of the difference image is as follows: D_t(x, y) = |I_t(x, y) − I_{t−1}(x, y)|, wherein D_t(x, y) represents the difference value at pixel position (x, y) in the differential image, specifically representing the difference in brightness of the two adjacent frames at that pixel position, I_t and I_{t−1} represent the image at time t (i.e. the current frame) and at time t−1 (i.e. the previous frame) respectively, (x, y) are the pixel coordinates, and I_t(x, y) and I_{t−1}(x, y) are the pixel values of the two frames at pixel position (x, y);
The differential image is binarized, the specific process being: comparing the differential value with a preset differential reference threshold, marking pixels whose differential value is greater than or equal to the differential reference threshold as white (change area), and marking pixels whose differential value is smaller than the differential reference threshold as black (static area);
The binarization processing can clearly separate the change area and the static area, so that the subsequent dynamic analysis is more visual and effective. The proper threshold value can be selected to be adjusted according to the actual scene, so that important change information can be accurately captured.
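The graying, smoothing, differencing, and binarization steps above can be sketched as follows. This is a minimal illustration using only NumPy: the 3×3 mean filter stands in for Gaussian smoothing, and the default threshold of 25 is a hypothetical value, not one prescribed by the invention.

```python
import numpy as np

def difference_image(frame_t, frame_prev, threshold=25):
    """Build a binarized difference image from two adjacent BGR/RGB frames."""
    def to_gray(img):
        # weighted-average graying (BT.601 luma weights)
        return img @ np.array([0.299, 0.587, 0.114])

    def smooth(gray):
        # simple 3x3 mean filter to suppress noise and tiny fluctuations
        padded = np.pad(gray, 1, mode="edge")
        h, w = gray.shape
        return sum(padded[dy:dy + h, dx:dx + w]
                   for dy in range(3) for dx in range(3)) / 9.0

    g_t = smooth(to_gray(frame_t))
    g_prev = smooth(to_gray(frame_prev))
    diff = np.abs(g_t - g_prev)              # D_t(x, y) = |I_t - I_{t-1}|
    # binarize: white (255) marks the change area, black (0) the static area
    return np.where(diff >= threshold, 255, 0)
```

In practice the threshold would be tuned to the scene, as the surrounding text notes; a production system would also register the frames before differencing if the camera can move.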
Extracting dynamic change characteristic information of the differential image, analyzing the dynamic change characteristic information, inputting the dynamic change characteristic information obtained by analysis into a trained deep learning model, and analyzing the dynamic change condition of a picture scene;
The extracted dynamic change characteristic information of the differential image comprises motion intensity and change density, wherein the motion intensity refers to the overall intensity of motion in the image, and the change density refers to the proportion of changed pixels to the total number of pixels in the image. The motion intensity and change density of the differential image are analyzed to generate a motion intensity index and a change density index; the two indexes are input into a trained deep learning model to generate a dynamic change quantization coefficient, and the dynamic change condition of the picture scene is analyzed through the dynamic change quantization coefficient.
In the differential image, the overall motion intensity in the image is larger, which indicates that the dynamic change of the picture scene is larger, because the motion intensity measures the average value of the brightness difference of each pixel in two adjacent frames of images, and reflects the degree of the pixel change in the whole image. A larger intensity of motion means that more pixels have changed significantly, typically due to the presence of significant motion or dynamic events in the scene. In particular, when a large number of objects are moving in a scene or the moving speed of the objects is fast, the pixel values change greatly, resulting in an increase in the intensity of motion. For example, in a surveillance video, if a group of people runs fast or a plurality of vehicles run at high speed, the motion intensity in the differential image may be significantly increased, reflecting that the dynamic change in the picture is very significant. This not only indicates a large amplitude of motion, but also indicates that the range of motion may be wide, covering a large image area. Therefore, by analyzing the motion intensity in the differential image, the dynamic change condition of the picture scene can be effectively identified and evaluated. The large overall motion intensity in the image generally indicates a large dynamic change in the scene of the picture, as this reflects the presence of significant and extensive motion in the scene, which is critical for monitoring and analyzing dynamic events.
After the motion intensity of the differential image is analyzed, the step of generating a motion intensity index is as follows:
calculating the motion intensity for each pixel in the differential image, and introducing a weight function w(x, y) to enhance the motion intensity of certain areas, the weighted motion intensity being calculated by the following expression: M_w(x, y) = w(x, y) · D_t(x, y), wherein M_w(x, y) represents the weighted motion intensity and D_t(x, y) is the differential value at pixel position (x, y);
Here, w(x, y) is a weight function that can be customized according to the application scene. For example, different weights may be assigned according to the importance of different regions in the image.
Summing the motion intensities over the whole differential image to obtain the overall motion intensity, the calculation expression being: M_total = Σ_{x=1}^{W} Σ_{y=1}^{H} M_w(x, y), wherein W and H represent the width and height of the image respectively, and M_total represents the overall motion intensity;
normalizing the overall motion intensity M_total for comparison between different images, the normalized expression being: M_norm = M_total / (W × H), wherein M_norm represents the normalized motion intensity;
taking the motion intensity change of consecutive multiple frames into consideration, introducing a time-weighted average to calculate the motion intensity index, the calculation expression being: S_t = λ · M_norm + (1 − λ) · S_{t−1}, wherein S_t represents the motion intensity index, λ represents a time weighting coefficient ranging from 0 to 1 that controls the weight ratio between the motion intensity of the current frame and that of the previous frame, and S_{t−1} is the motion intensity index of the previous frame.
As can be seen from the motion intensity index, the larger its value after analyzing the motion intensity of the differential image, the larger the dynamic change of the picture scene: more pixel values change between adjacent frames, typically reflecting significant motion such as fast moving objects or severe scene changes. Conversely, the smaller the motion intensity index, the smaller the dynamic change of the picture scene: the change between the two adjacent frames is small, indicating that the scene is static or changes only slightly. The motion intensity index thus effectively helps the monitoring system identify and distinguish different types of dynamic scenes.
In the differential image, a larger density of change in the image indicates a larger dynamic change of the picture scene. The variation density refers to the ratio of pixels that vary in the differential image to the total number of pixels. A high change density means that between two adjacent frames of images there are a large number of pixels that change, typically due to the presence of more or larger moving objects in the scene. For example, in a busy street monitor, the movement of pedestrians, vehicles, etc. can result in a high change density, as the movement of these objects can affect the position and color change of a large number of pixels. In addition, fast moving objects in the scene, such as running people, flying vehicles, or suddenly appearing objects (e.g., opening and closing of a door or dropping of an object) can also cause a significant increase in the variation density. These changes reflect the frequency and intensity of dynamic events occurring in the scene, so the dynamic change condition of the scene can be judged by analyzing the change density. When the change density is large, there are many movements or changes in the scene. In summary, the change density is an important index for evaluating dynamic changes of a scene of a picture, and a high change density means that there is a significant dynamic change in the scene, which puts higher challenges and demands on monitoring and video compression.
After analyzing the variation density in the differential image, the step of generating the variation density index is as follows:
applying a threshold T to the differential image to determine changed pixels, a changed pixel being defined as: C(x, y) = 1 if D_t(x, y) ≥ T, and C(x, y) = 0 otherwise;
It should be noted that T is a preset threshold; the differential image D_t(x, y) is compared against the threshold T to distinguish significant changes from background noise or minor fluctuations.
Counting the total number of changed pixels in the differential image: N_c = Σ_{x=1}^{W} Σ_{y=1}^{H} C(x, y);
This step counts all pixels marked 1, i.e. pixels satisfying the threshold condition.
The change density is calculated, the calculation expression being: ρ = N_c / N_total, wherein N_c is the total number of changed pixels, N_total is the total number of pixels in the image, and ρ represents the change density;
calculating a change density index based on the change density, the calculation expression being: E_ρ = 1 / (1 + e^(−k·(ρ − ρ_d))), wherein E_ρ indicates the change density index, ρ_d indicates the desired change density threshold, and k is a sensitivity parameter controlling the steepness of the response;
The desired change density threshold ρ_d is a predetermined parameter that defines the midpoint around which the change density index responds most strongly: when the change density (the proportion of changed pixels to the total number of pixels in the image) approaches this threshold, the response of the change density index is most pronounced. This helps the model adapt to specific application scenes, making it more sensitive to change densities within a particular range and accurately reflecting the dynamic changes between video frames.
The larger the change density index value generated after analyzing the change density of the differential image, the larger the dynamic change of the picture scene. This is because the change density index is calculated from the proportion of changed pixels in the differential image: a higher change density means more pixels change significantly between adjacent frames, indicating more or larger moving objects in the scene. Such significant changes typically represent important dynamic events in the monitored scene. Conversely, a smaller change density index indicates a smaller dynamic change: fewer pixels change between adjacent frames, and the scene is essentially static or changes only slightly.
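The change-density steps can be sketched as follows. The sigmoid form of the index, centered on the desired change density threshold, is an assumption consistent with the description above (the original formula was not legible), and the default values of T, rho_d, and k are illustrative.

```python
import math
import numpy as np

def change_density_index(diff_img, T=25, rho_d=0.1, k=20.0):
    """Change density rho and its sigmoid-shaped index E_rho (sketch)."""
    changed = diff_img >= T              # C(x, y): True where D_t(x, y) >= T
    n_c = int(changed.sum())             # N_c: total number of changed pixels
    rho = n_c / diff_img.size            # change density = N_c / N_total
    # sigmoid response centered on the desired change density threshold rho_d
    return 1.0 / (1.0 + math.exp(-k * (rho - rho_d)))
```

At rho = rho_d the index is exactly 0.5, and it saturates toward 0 or 1 as the density moves away from the threshold, matching the "midpoint" behavior described above.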
The machine learning model is not particularly limited here: any machine learning model that can comprehensively analyze the motion intensity index and the change density index to generate the dynamic change quantization coefficient can implement the invention. In order to realize the technical scheme of the invention, a specific implementation is provided;
The dynamic change quantization coefficient Q is generated by the following calculation formula: Q = ε_1 · S_t + ε_2 · E_ρ, wherein ε_1 and ε_2 are respectively the preset proportionality coefficients of the motion intensity index S_t and the change density index E_ρ, and ε_1 and ε_2 are both greater than 0.
From the dynamic change quantization coefficient it can be seen that the larger the motion intensity index generated by analyzing the motion intensity of the differential image and the larger the change density index generated by analyzing its change density, the larger the dynamic change quantization coefficient generated for the differential image, indicating a larger dynamic change of the picture scene; conversely, a smaller coefficient indicates a smaller dynamic change.
Dividing the dynamic change condition of the picture scene into a fast dynamic change scene and a normal dynamic change scene according to the analysis result of the deep learning model;
Comparing and analyzing the dynamic change quantization coefficient generated after the differential image is analyzed with a preset reference threshold value of the dynamic change quantization coefficient, and dividing the dynamic change condition of the picture scene, wherein the dividing process is as follows:
if the dynamic change quantization coefficient is larger than or equal to the dynamic change quantization coefficient reference threshold, dividing the dynamic change condition of the picture scene into fast dynamic change scenes;
If the dynamic change quantized coefficient is smaller than the dynamic change quantized coefficient reference threshold, dividing the dynamic change condition of the picture scene into normal dynamic change scenes.
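The combination of the two indexes into the quantization coefficient and the division into scene types above can be sketched as follows; the proportionality coefficients and the reference threshold are hypothetical defaults, not values prescribed by the invention.

```python
def dynamic_change_coefficient(s_t, e_rho, eps1=0.6, eps2=0.4):
    """Q = eps1 * S_t + eps2 * E_rho; eps1, eps2 > 0 are preset weights."""
    return eps1 * s_t + eps2 * e_rho

def classify_scene(q, q_ref=0.5):
    """Fast dynamic change scene if Q reaches the reference threshold."""
    return "fast" if q >= q_ref else "normal"
```

A trained model could replace the linear combination, as the text notes; the threshold comparison that performs the division stays the same either way.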
For a normal dynamic change scene, adopting an initial compression algorithm to continue video stream compression, and for a fast dynamic change scene, dynamically adjusting the bit rate of video coding through the compression algorithm to ensure capturing of a fast moving target and complex details in a video image;
In the monitoring video processing, if the picture scene is determined to be a normal dynamic change scene (i.e. a scene with little or slow variation) by analyzing the differential image formed from the two adjacent frames, the video stream can continue to be compressed using the originally set compression algorithm. In particular, this means there is no need to adjust or optimize the compression algorithm: since the amplitude of change in a normal dynamic scene is small, the original compression algorithm is already sufficient to cope with these changes and can compress the video data effectively while ensuring video quality.
For a fast dynamic scene, the specific steps of dynamically adjusting the bit rate of video coding by a compression algorithm are as follows:
comparing the calculated dynamic change quantization coefficient with a preset dynamic change quantization coefficient reference threshold and calculating a dynamic change coefficient deviation value, the calculation expression being: ΔQ = Q − Q_ref, wherein ΔQ represents the dynamic change coefficient deviation value, Q represents the dynamic change quantization coefficient, Q_ref represents the dynamic change quantization coefficient reference threshold, and Q ≥ Q_ref;
According to the dynamic change coefficient deviation value ΔQ, dynamically adjusting the video coding bit rate: setting the initial video coding bit rate as B_0, the adjusted actual bit rate as B_actual, and the adjustment coefficient as K, calculated by the following formulas: B_actual = B_0 · K and K = 1 + α · tanh(β · ΔQ), wherein α is the maximum variation ratio of the adjustment coefficient, β is the variation sensitivity coefficient, and tanh is the hyperbolic tangent function used to smooth the variation of the adjustment coefficient;
the maximum variation ratio of the adjustment coefficient, α, determines the maximum amplitude of the bit rate adjustment. Specifically, α controls the maximum proportion by which the video coding bit rate can be increased or decreased. When the dynamic change quantization coefficient of a scene is significantly above the reference threshold, the bit rate needs to be increased substantially to capture fast moving targets and complex details, and vice versa. If α is set too high, the bit rate adjustment may be too drastic, possibly wasting network bandwidth and storage resources; if set too low, the increase or decrease in video quality may be insufficient to cope with the dynamic changes of the scene. α therefore requires careful tuning for the specific application scenario and system performance, to find the best balance between video quality and resource usage.
The variation sensitivity coefficient, β, controls how sensitively the bit rate adjustment responds to the dynamic change quantized coefficient deviation; that is, it determines how quickly the bit rate changes in response to the deviation. A higher β value makes the system very sensitive to small deviations, producing fast bit rate changes, which may be necessary in highly dynamic and complex scenes. However, too high a β may cause frequent fluctuations in bit rate, which is detrimental to bandwidth stability and smooth video playback. Conversely, a lower β value makes the system respond more gently to deviations, which suits relatively static or slowly changing scenes but may fail to adequately capture details during rapid dynamic changes. The setting of β therefore needs to comprehensively consider scene dynamic characteristics, system response speed, bandwidth stability, and other such factors.
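The effect of β on responsiveness can be seen by evaluating the adjustment coefficient for the same deviation under two sensitivities. This is a hedged numerical sketch; the function name `adjustment_coefficient` and the sample values are illustrative assumptions, not values from the specification:

```python
import math

def adjustment_coefficient(delta_q, alpha, beta):
    # K = 1 + alpha * tanh(beta * delta_q), as in the adjustment formula above
    return 1.0 + alpha * math.tanh(beta * delta_q)

# The same deviation produces a stronger reaction under a higher beta.
k_low_beta = adjustment_coefficient(5.0, 0.5, 0.05)   # gentle response
k_high_beta = adjustment_coefficient(5.0, 0.5, 0.5)   # aggressive response
```

Both coefficients remain inside [1 − α, 1 + α]; only the speed at which they approach the bound differs.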
The video encoding process is optimized with the dynamically adjusted actual bit rate B, ensuring that fast-moving objects and complex details in the video image are captured.
According to the invention, two adjacent frames of images are extracted, preprocessed, and differentially calculated, and dynamic change characteristic information such as motion intensity and change density is extracted. This characteristic information is input into a deep learning model, which accurately analyzes the dynamic change condition of the picture scene; according to the analysis result, the picture scene is divided into a fast dynamic change scene and a normal dynamic change scene. In a normal dynamic change scene, the initial compression algorithm continues to compress the video stream; in a fast dynamic change scene, the video coding bit rate is dynamically adjusted to ensure the capture of fast-moving targets and complex details. The method thereby balances video quality against compression efficiency, ensures high-quality image detail in the monitoring video at key moments, guarantees the reliability and effectiveness of safety monitoring, facilitates case investigation and evidence collection, and ultimately improves the overall performance and practicability of the monitoring system.
The invention further provides a video monitoring terminal, as shown in Fig. 2, which comprises a data extraction module, an image difference module, a feature extraction and analysis module, a scene classification module, and a compression strategy adjustment module;
The data extraction module extracts a current frame and a previous frame from a continuous monitoring video frame sequence, further extracts two adjacent frame images from a video stream, and provides basic data for subsequent dynamic change analysis;
The image difference module is used for preprocessing the extracted two adjacent frames of images, calculating the difference of the two adjacent frames of images by using an image difference technology, generating a difference image through pixel-level comparison, and highlighting the change area between frames;
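The pixel-level comparison performed by the image difference module can be sketched with NumPy. This is a minimal illustration assuming RGB uint8 frames and a luma-weighted grayscale preprocessing step; the function names and the threshold value are assumptions for demonstration only:

```python
import numpy as np

def to_gray(frame):
    # frame: H x W x 3 uint8 array; luma-weighted grayscale as float32
    return frame.astype(np.float32) @ np.array([0.299, 0.587, 0.114], dtype=np.float32)

def difference_image(prev_frame, curr_frame, threshold=25.0):
    # Pixel-level absolute difference of the two preprocessed (grayscale) frames,
    # plus a binary mask highlighting the changed regions between frames.
    diff = np.abs(to_gray(curr_frame) - to_gray(prev_frame))
    mask = diff > threshold
    return diff, mask
```

A production system would typically also denoise (e.g. blur) the frames before differencing so that sensor noise does not register as change.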
The feature extraction and analysis module extracts dynamic change feature information of the differential image, analyzes the dynamic change feature information, inputs the dynamic change feature information obtained by analysis into a trained deep learning model, and analyzes the dynamic change condition of the picture scene;
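The two named features, motion intensity and change density, admit a simple reading from the difference image and its change mask. This is one plausible sketch of those definitions, not the specification's exact formulas; the function name `dynamic_features` is an assumption:

```python
import numpy as np

def dynamic_features(diff, mask):
    # Motion intensity: mean magnitude of change over the changed pixels.
    # Change density: fraction of pixels flagged as changed between frames.
    motion_intensity = float(diff[mask].mean()) if mask.any() else 0.0
    change_density = float(mask.mean())
    return motion_intensity, change_density
```

The resulting feature pair is what would be fed to the trained deep learning model for scene analysis.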
the scene classification module is used for dividing the dynamic change condition of the picture scene into a fast dynamic change scene and a normal dynamic change scene according to the analysis result of the deep learning model;
the compression strategy adjustment module is used for continuing to compress the video stream with the initial compression algorithm in a normal dynamic change scene, and for dynamically adjusting the video coding bit rate in a fast dynamic change scene, so as to ensure capture of fast-moving targets and complex details in the video image;
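The handoff between the scene classification module and the compression strategy adjustment module reduces to a threshold decision. The sketch below assumes the classification is made by comparing the quantized coefficient to the reference threshold, as the deviation-value step suggests; the function name and string labels are illustrative:

```python
def choose_strategy(quantized_coefficient, reference_threshold):
    # Fast dynamic change scene -> dynamic bit-rate adjustment;
    # normal dynamic change scene -> keep the initial compression algorithm.
    if quantized_coefficient > reference_threshold:
        return "fast_dynamic"
    return "normal"
```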
An embodiment of the invention provides a security monitoring video optimization transmission method, which is implemented by the above video monitoring terminal; the specific method and flow of the video monitoring terminal are detailed in the embodiment of the security monitoring video optimization transmission method and are not repeated here.
The above formulas are dimensionless formulas operating on numerical values; they were fitted by software simulation over a large amount of collected data so as to reflect the latest real situation, and the preset parameters in the formulas are set by those skilled in the art according to the actual situation.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
While certain exemplary embodiments of the present invention have been described above by way of illustration only, it will be apparent to those of ordinary skill in the art that modifications may be made to the described embodiments in various different ways without departing from the spirit and scope of the invention. Accordingly, the drawings and description are to be regarded as illustrative in nature and not as restrictive of the scope of the invention, which is defined by the appended claims.