Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the following detailed description of specific embodiments thereof is given with reference to the accompanying drawings. It is to be understood that the specific embodiments described herein are merely illustrative of the application and not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the matters related to the present application are shown in the accompanying drawings. Before discussing exemplary embodiments in more detail, it should be mentioned that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart depicts operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently, or at the same time. Furthermore, the order of the operations may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figures. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Technical solutions in the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application. It is apparent that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application fall within the scope of protection of the present application.
The terms "first," "second," and the like in the description and in the claims are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged, as appropriate, such that embodiments of the present application may be implemented in sequences other than those illustrated or described herein, and that the objects identified by "first," "second," etc. generally denote a type and do not limit the number of objects; for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The automatic monitoring method, device, equipment and storage medium based on the behavior tag provided by the embodiments of the application are described in detail below by means of specific embodiments and their application scenarios with reference to the accompanying drawings.
Example 1
Fig. 1 is a flowchart of an automatic monitoring method based on behavior tags according to an embodiment of the present application. As shown in Fig. 1, the method specifically comprises the following steps:
S101, acquiring a video image sequence through a security camera.
First, the usage scenario of the present solution may be a video surveillance scenario. Specifically, when surveillance video is viewed, the safety behavior of an object under monitoring can be displayed, and if the object's behavior is identified as unsafe, focused tracking is performed on it.
Based on the above usage scenario, it can be understood that the execution subject of the present application may be an intelligent terminal, or may be software or a system deployed in the intelligent terminal, which is not limited herein.
The security camera is a camera commonly used for security monitoring. It is typically connected to a video monitoring system for monitoring activity in an area, and is generally classified as either fixed or mobile. A fixed camera is typically mounted in a fixed location, such as a wall or ceiling, and captures a fixed area. A mobile camera can be moved to different positions and can capture the surrounding area. Security cameras generally have high image quality and can capture clear images. They may also be equipped with night vision, allowing clear images to be captured in dark environments. When installing a security camera, a position that is sufficiently high, offers wide coverage, and is not easily damaged should be selected.
A video image sequence is a set of consecutive images, typically representing a continuous dynamic scene. The sequence may be captured by a camera or other video acquisition device, or may be computer generated. In this step, the video image sequence is acquired by the security camera. A video image sequence typically consists of a series of image frames, each frame being one image. Each frame is independent, but arranged together in a fixed order the frames form a continuous animation. The sequence may be played by a video player or other software, or stored on a computer or hard drive for later viewing. For example, if a video has a frame rate of 30 frames per second and a duration of 2 seconds, it contains 60 images; numbering these images from 1 to 60 yields a video image sequence.
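As an illustration of this step, the following is a minimal sketch of acquiring an ordered frame sequence from a camera, assuming OpenCV is available; the function name, device index and frame count are illustrative assumptions, not part of the original disclosure.

```python
import cv2

def acquire_sequence(device_index=0, num_frames=60):
    cap = cv2.VideoCapture(device_index)  # security camera or other video source
    frames = []
    while len(frames) < num_frames:
        ok, frame = cap.read()            # read one image frame
        if not ok:
            break
        frames.append(frame)              # frames are kept in capture order
    cap.release()
    return frames                         # the ordered list is the video image sequence
```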
S102, determining an image with a behavior security tag based on a pre-connected behavior security tag library, and determining a monitoring target for triggering behavior security.
A behavior security tag library is a specific tag system for tagging and managing security-related information to ensure safe behavior at a workplace or other location. These tags often carry conspicuous warning information, such as "danger", "warning" or "attention". They may be used to alert people to hazards present in the surrounding environment and may provide information about how to safely operate equipment or use items. Behavior security tag libraries are an important component of many security management systems and can help ensure personnel safety and prevent accidents. In image recognition, a tag library is typically a database for storing tags used to mark images. These tags are descriptive words for objects, scenes, people or other content present in the image.
The data in the tag library may be stored in different data types, such as strings, text, enumerations, etc. The specific data type depends on the database type of the tag library and the data type of the stored tags. For example, tags for an image may include "dog", "natural landscape", "city street", etc. The data of the tag library may be stored in a variety of ways, for example using a relational database, storing the images and tags in different tables and associating them using foreign keys. Alternatively, the images and tags may be stored in the same document using a non-relational database such as MongoDB. The data of the tag library is typically labeled manually, i.e., the tags of the images are assigned by human annotators. This approach is often time-consuming and labor-intensive, so many artificial intelligence techniques aim to automatically generate tag library data to reduce the amount of manual labeling effort.
In this step, the behavior security tag library may preset corresponding tags and provide certain storage and rechecking capabilities, rechecking whether behaviors accord with the tags, which improves the accuracy of tag calibration. The behavior security tag library is connected to the intelligent terminal in advance, and may include, but is not limited to, behavior tags such as overhead suspension of heavy objects, trailing behavior of personnel, and quarreling behavior.
In this step, each image in the video image sequence is examined to identify whether it carries a behavior security tag, i.e., whether the behavior in the image corresponds to a tag in the behavior security tag library.
In particular, identifying the actions of a person in a surveillance video typically requires the use of video analysis techniques. Common methods include: the video frame difference method, which detects motion by comparing differences between adjacent frames; the optical flow method, which detects motion by tracking the pixel positions of the moving target object; the motion estimation method, which estimates the motion of the target object by using the motion information of the camera; and deep-learning-based methods, which use a deep learning model to identify the motion of the target object. To use these methods to identify the actions of a person in a surveillance video, the video needs to be pre-processed, e.g., denoised and deblurred, and then analyzed using an appropriate method. If the security tag of the image is in the library, the object in the image is selected and determined as a monitoring target.
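The following is a minimal sketch of the frame difference method mentioned above, assuming OpenCV; the threshold and minimum region area are illustrative assumptions rather than prescribed values.

```python
import cv2

def detect_motion(prev_frame, curr_frame, thresh=25, min_area=500):
    g1 = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(g1, g2)                      # per-pixel difference between adjacent frames
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # keep only regions large enough to plausibly be a moving person or object
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```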
S103, determining an image capturing sequence of the security camera according to the position information of the monitoring target in the video image so as to automatically generate track information of the monitoring target.
In this step, the position information refers to the position of the object in the video during its movement. The motion of the object is estimated by tracking the pixel positions of the moving target object. During the calculation, the direction and speed of movement of each pixel between two frames need to be computed. This can be achieved by calculating the optical flow for each pixel. Optical flow refers to the direction and speed of motion of a pixel, as reflected in the change of its gray value between frames.
In video analysis, feature extraction from images is often required, for example using the SIFT (Scale-Invariant Feature Transform) algorithm to extract SIFT features. The SIFT algorithm is an image processing algorithm for extracting key points and corresponding descriptors from an image. It extracts local feature points that have good invariance and can adapt to rotation, scaling, illumination change and the like. The SIFT algorithm includes two steps: feature extraction and feature description. In the feature extraction step, the SIFT algorithm first builds a scale space on the image and then extracts key points from it. In the feature description step, it constructs a descriptor using the pixel information around each key point. The RANSAC (Random Sample Consensus) algorithm or another algorithm may then be used to extract the valid optical flow information. RANSAC is an algorithm for estimating model parameters and is commonly used in scenarios such as model fitting and data fitting. Its basic idea is to randomly choose samples from the dataset, fit a model using these samples, and calculate the fitting accuracy. If the fitting accuracy is high, the model is considered reasonable; otherwise, samples are selected again for fitting. This process is repeated until a reasonable model is found. Finally, according to the calculated optical flow information, the motion of the object can be estimated, i.e., the track information of the monitoring target is generated.
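The following is a minimal sketch of one trajectory-estimation step that tracks feature points with sparse optical flow and filters them with RANSAC, assuming OpenCV; using findHomography as the RANSAC step is one possible choice, an assumption rather than the prescribed method.

```python
import cv2

def track_step(prev_gray, curr_gray, points):
    # points: N x 1 x 2 float32 array, e.g. from
    # cv2.goodFeaturesToTrack(prev_gray, 200, 0.01, 10)
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                 points, None)
    ok = status.ravel() == 1
    good_old, good_new = points[ok], nxt[ok]
    if len(good_new) >= 4:
        # RANSAC keeps flow vectors consistent with the dominant motion model
        _h, inliers = cv2.findHomography(good_old, good_new, cv2.RANSAC, 3.0)
        if inliers is not None:
            good_new = good_new[inliers.ravel() == 1]
    centroid = good_new.reshape(-1, 2).mean(axis=0)  # one point of the trajectory
    return good_new, centroid
```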
In this step, optionally, after determining the image capturing sequence of the security camera, the following is further performed:
determining an associated camera that belongs to the same bandwidth as the security cameras in the image capturing sequence;
and generating a parameter adjustment instruction for the associated camera according to the time at which the monitoring target appears at the security camera in the image capturing sequence, so as to inversely adjust the video resolution and frame rate of the associated camera.
In this step, bandwidth refers to the rate of data transmission and is commonly used to measure the transmission capacity of a network. The larger the bandwidth, the more data the network can transmit and the faster the transmission speed. The unit of bandwidth is typically bits per second (bps) or megabits per second (Mbps); for example, a bandwidth of 1 Mbps means that 1 megabit of data can be transmitted per second. Security cameras sharing the same bandwidth are set as associated cameras, and the video resolution and frame rate of the associated cameras are inversely adjusted. The inverse adjustment may be to adjust the original video resolution and frame rate to lower values. For example, if security camera 1 and security camera 2 are associated cameras, then after security camera 1 acquires the monitoring target, its video resolution and frame rate are adjusted to higher image quality, and the video image quality of security camera 2 is adjusted to low image quality.
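The following is a minimal sketch of generating such instructions, assuming a simple in-memory mapping from a camera to its same-bandwidth associated cameras; the instruction format and the concrete resolution and frame-rate values are illustrative assumptions, not part of the original disclosure.

```python
from dataclasses import dataclass

@dataclass
class AdjustInstruction:
    camera_id: str
    width: int
    height: int
    fps: int

def build_instructions(target_camera_id, associated_cameras):
    # raise quality on the camera that captured the monitoring target ...
    instructions = [AdjustInstruction(target_camera_id, 1920, 1080, 30)]
    # ... and inversely adjust its same-bandwidth associated cameras downward
    for cam_id in associated_cameras.get(target_camera_id, []):
        instructions.append(AdjustInstruction(cam_id, 854, 480, 15))
    return instructions
```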
In the embodiment of the application, the image quality of the security camera that captures the monitoring target is enhanced, and the image quality of the associated cameras is inversely adjusted, which reduces file size and improves the video transmission rate.
S104, generating a parameter adjustment instruction for the security cameras in the image capturing sequence according to the occurrence time of the monitoring target, so as to adjust the video resolution and frame rate of the security cameras.

In this step, if the security camera recognizes the monitoring target, the occurrence time of the monitoring target is recorded and a parameter adjustment instruction is generated. A parameter adjustment instruction is an instruction for adjusting the parameters of a computer program; these parameters can affect the manner in which the program operates and its performance. The instruction typically contains one or more parameters specifying the name and new value of each parameter to be adjusted. In this step, the parameter adjustment instruction may include parameters for adjusting the video resolution and frame rate.

Video resolution refers to the number of pixels of a video image, typically expressed as width and height in pixels, e.g., 1920x1080, i.e., a video image 1920 pixels wide and 1080 pixels high. The higher the video resolution, the sharper the video picture. In existing video technology, common video resolutions include 1080p (1920x1080), 720p (1280x720), 480p (854x480), and 360p (640x360). The video resolution may be adjusted when the video is captured, edited, or played. For example, when capturing video, a high-resolution camera may be used to obtain a clearer picture; when editing video, the picture quality can be changed by changing the video resolution.

The frame rate refers to the number of images displayed per second in the video. The higher the frame rate, the smoother the video and its dynamic pictures. However, the higher the frame rate, the larger the video file, which may incur more overhead in storage and transmission. Common frame rates include 24 frames/second, 30 frames/second, and 60 frames/second. The frame rate may be adjusted when the video is captured, edited, or played. For example, when capturing video, a camera with a higher frame rate may be used to obtain a smoother picture; when editing video, the frame rate can be changed to alter the picture fluency; the video player may also adjust the frame rate of the video during playback.

In the embodiment of the application, a video image sequence is acquired through a security camera; an image with a behavior security tag is determined based on a pre-connected behavior security tag library, and a monitoring target triggering behavior security is determined; the image capturing sequence of the security camera is determined according to the position information of the monitoring target in the video image, so as to automatically generate track information of the monitoring target; and a parameter adjustment instruction is generated for the security cameras in the image capturing sequence according to the occurrence time of the monitoring target, so as to adjust the video resolution and frame rate of the security cameras. By analyzing the video image sequence, the video acquisition parameters of the monitoring equipment can be automatically adjusted, the track of the monitored object can be generated, and parameter reduction processing can be carried out on monitoring equipment irrelevant to the track, so that the monitoring system is adaptively adjusted.
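As an illustration of step S104, the following is a minimal sketch of applying such a parameter adjustment instruction to a capture device, assuming OpenCV; whether a given camera honors these property settings is hardware-dependent, and the example values are illustrative assumptions.

```python
import cv2

def apply_instruction(cap: cv2.VideoCapture, width, height, fps):
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)    # e.g. 1920
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)  # e.g. 1080
    cap.set(cv2.CAP_PROP_FPS, fps)              # e.g. 30 or 60
```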
Example 2
Fig. 2 is a flowchart of an automatic monitoring method based on behavior tags according to a second embodiment of the present application. This scheme further improves on the first embodiment, specifically as follows: determining an image with a behavior security tag based on a pre-connected behavior security tag library, and determining a monitoring target that triggers behavior security, includes: performing behavior recognition on the target image in the video image sequence based on the pre-connected behavior security tag library; if a target image matching an abnormal behavior in the behavior security tag library exists, marking the security tag; and distinctively marking the security tag according to the type of the monitoring target triggering the behavior security. As shown in Fig. 2, the method specifically comprises the following steps:
S201, acquiring a video image sequence through a security camera.
S202, determining an image with a behavior security tag based on a pre-connected behavior security tag library, and determining a monitoring target triggering behavior security.
S203, performing behavior recognition on the target image in the video image sequence based on a pre-connected behavior security tag library.
In this step, behavior recognition is performed on the target image using the behavior security tag library. Specifically, machine learning algorithms can learn the features that identify different behaviors by training on a large number of tagged images, and the trained model can then be used to identify behavior in new images. Machine learning can be divided into two approaches: supervised learning and unsupervised learning.
In supervised learning, the computer trains on a labeled dataset to learn rules or a model for predicting labels. Specifically, a suitable supervised learning algorithm needs to be selected, such as logistic regression, support vector machines, or decision trees; the appropriate algorithm should be chosen according to the characteristics of the problem and of the data. Next, training data needs to be prepared, and then the parameters of the algorithm are adjusted so that the model fits the training data as well as possible. This process of adjusting the parameters is called training.
During training, the performance of the model is typically evaluated using cross-validation to find the optimal parameter settings. Evaluating the performance of the model: after training is completed, test data is used to evaluate the performance of the model, which helps show how the model behaves on real-world data. Using the model: if the model's performance is good enough, it can be used to predict the labels of unknown data. For example, to classify the iris dataset with a logistic regression algorithm, logistic regression is selected from among the supervised learning algorithms; a logistic regression model is trained on the training data, its parameter settings are gauged with cross-validation, and its performance is then evaluated on the test data. If the performance is good enough, the model can be used to predict the class of unknown data.
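The following is a minimal sketch of the iris example above, assuming scikit-learn is available; the split and solver settings are illustrative assumptions.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
# cross-validation on the training data to gauge the parameter settings
print("CV accuracy:", cross_val_score(model, X_train, y_train, cv=5).mean())

model.fit(X_train, y_train)
print("Test accuracy:", model.score(X_test, y_test))  # evaluate on test data
```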
In unsupervised learning, the computer does not need a labeled dataset, but instead learns a model by analyzing relationships within the data. Specifically, an appropriate unsupervised learning algorithm needs to be selected, such as a clustering algorithm or principal component analysis; again, the appropriate algorithm should be chosen according to the characteristics of the problem and of the data. Next, unsupervised learning data, i.e., unlabeled data, needs to be prepared; it may be any form of data, such as images, text, or sound. The selected algorithm then learns from the unlabeled data to find rules and patterns within it. Finally, the performance of the model needs to be evaluated. For example, to cluster a dataset, a clustering algorithm is selected from among the unsupervised learning algorithms, an unlabeled dataset is prepared, the dataset is clustered, and the quality of the clustering result is evaluated by comparing different clustering schemes and selecting the optimal one.
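The following is a minimal sketch of that clustering example, assuming scikit-learn; the synthetic data, the candidate cluster counts, and the silhouette score as the quality measure are illustrative assumptions.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)  # unlabeled data

# fit several clustering schemes and keep the one with the best silhouette
best = max(
    (KMeans(n_clusters=k, n_init=10, random_state=0).fit(X) for k in (2, 3, 4)),
    key=lambda m: silhouette_score(X, m.labels_),
)
print("chosen number of clusters:", best.n_clusters)
```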
S204, if a target image matching an abnormal behavior tag in the behavior security tag library is identified, marking the security tag;
S205, distinctively marking the security tags according to the type of the monitoring target triggering the behavior security.
In this step, the image is compared with the target image of the abnormal behavior in the behavior security tag library; if the behaviors in the two images match, the identified image and the target image in the behavior security tag library are determined to be of the same type and are marked with the same tag.
The security tags are distinctively marked according to the type of monitoring target triggering the behavior security. Specifically, one image may trigger one or more security behaviors, and the abnormal behaviors identified for different monitoring target types are marked separately. For example, if both quarreling behavior and trailing behavior appear in one image, the two target types are marked respectively.
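The following is a minimal sketch of distinctively marking identified behaviors by target type, assuming OpenCV; the tag names and box colors are illustrative assumptions.

```python
import cv2

TYPE_STYLE = {
    "trailing": (0, 0, 255),      # red box for trailing behavior
    "quarreling": (0, 255, 255),  # yellow box for quarreling behavior
}

def mark_targets(image, detections):
    # detections: list of (behavior_type, (x, y, w, h)) tuples
    for behavior, (x, y, w, h) in detections:
        color = TYPE_STYLE.get(behavior, (255, 255, 255))
        cv2.rectangle(image, (x, y), (x + w, y + h), color, 2)
        cv2.putText(image, behavior, (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 1)
    return image
```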
S206, determining an image capturing sequence of the security camera according to the position information of the monitoring target in the video image so as to automatically generate track information of the monitoring target.
S207, generating parameter adjusting instructions for the security cameras in the image capturing sequence according to the occurrence time of the monitoring target so as to adjust the video resolution and the frame rate of the security cameras.
In this embodiment of the present application, determining an image with a behavior security tag based on a pre-connected behavior security tag library, and determining a monitoring target that triggers behavior security, includes: performing behavior recognition on the target image in the video image sequence based on the pre-connected behavior security tag library; if a target image matching an abnormal behavior in the behavior security tag library exists, marking the security tag; and distinctively marking the security tag according to the type of the monitoring target triggering the behavior security. Through the security marking of the monitoring target, staff can quickly determine the behavior of persons under monitoring.
Example 3
Fig. 3 is a flowchart of an automatic monitoring method based on behavior tags according to a third embodiment of the present application. This scheme further improves on the previous embodiment, specifically as follows: performing behavior recognition on the target image in the video image sequence based on a pre-connected behavior security tag library includes: determining at least two candidate abnormal behavior recognition models based on the pre-connected behavior security tag library; determining the identification model of the target abnormal behavior suspected to be associated with the target image according to the information content in the target image; and accurately identifying the information content in the target image according to the identification model of the target abnormal behavior to obtain an abnormal behavior identification result. As shown in Fig. 3, the method specifically comprises the following steps:
S2031, determining at least two candidate abnormal behavior recognition models based on a pre-connected behavior security tag library.
A candidate abnormal behavior recognition model may be a machine learning model for recognizing abnormal behavior. The basic principle of an abnormal behavior recognition model is to learn from a large amount of normal behavior data and then judge, based on that data, whether a newly observed behavior is normal; if not, it is considered abnormal behavior. For example, consider designing a mall security monitoring system. First, the number of cameras to use must be determined according to the size and needs of the mall. The cameras then need to be installed to cover the whole mall according to its layout and features. Next, corresponding hardware, including computers, servers and network devices, needs to be purchased in order to install and configure the software system.
A software system is typically made up of several modules, each responsible for a particular task. For example, the camera module manages the operation of the cameras, the image storage module stores images, and the abnormal behavior recognition module recognizes abnormal behaviors. After the software system is installed, a large amount of video data is collected for training the abnormal behavior recognition model. The model is trained using various machine learning algorithms, such as support vector machines, decision trees, or neural networks. The trained model can then monitor crowd behavior in the mall in real time: when abnormal behavior is found, the system raises an alarm to alert the staff, and it also records images of the abnormal behavior for further analysis. In this step, at least two candidate abnormal behavior recognition models may be trained to make the recognized image behavior more accurate. Specifically, the abnormal behavior recognition models may be classified according to the number of people: for example, a single-person abnormal behavior recognition model may include tags such as overhead suspension of heavy objects, while a multi-person abnormal behavior recognition model may include tags such as trailing and quarreling behavior.
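The following is a minimal sketch of maintaining two candidate abnormal behavior recognition models split by crowd size, as described above; the use of scikit-learn SVC classifiers and the exact tag names are illustrative assumptions, not the prescribed models.

```python
from sklearn.svm import SVC

CANDIDATE_MODELS = {
    "single_person": {"model": SVC(), "tags": ["overhead suspended heavy object"]},
    "multi_person": {"model": SVC(), "tags": ["trailing", "quarreling"]},
}
# Each model would be fitted on labeled feature vectors for its own tag set,
# e.g. CANDIDATE_MODELS["multi_person"]["model"].fit(X_multi, y_multi)
```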
S2032, determining target abnormal behaviors associated with the target image according to the target image, and determining an identification model applicable to the target abnormal behaviors.
In this step, the abnormal behavior suspected to be associated with the target image is determined according to the information content in the target image, and the identification model applicable to that abnormal behavior is determined. For example, if only one person is identified in the image, the single-person abnormal behavior recognition model is suspected to apply; if multiple people appear in the scene, the multi-person abnormal behavior recognition model is suspected to apply.
In this step, "determining, according to the target image, a target abnormal behavior associated with the target image" specifically includes the following:
acquiring information content in the target image;
and determining the identification model of the target abnormal behavior suspected to be associated with the target image according to whether the information content contains persons and, if so, their number; whether suspended articles exist and, if so, their areas; and whether sharp articles exist and, if so, their positions.
In this step, the information content in the target image may be information about persons or objects. In particular, the information content may include the number of persons, i.e., whether there is one person or several; whether articles are suspended and over what area, to judge whether irregular operations such as overhead suspension of heavy objects are taking place; and whether sharp objects exist and whether their positions pose a hazard. The information content is used to determine the identification model of the associated target behavior. Specifically, the recognition models may be classified by person or object, by the number of persons, or the like.
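The following is a minimal sketch of selecting the suspected identification model from the information content, reusing the single-/multi-person split above; the content fields are illustrative assumptions.

```python
def select_model(content, candidates):
    # content: e.g. {"num_people": 2, "suspended_area": 0.0, "sharp_positions": []}
    if content.get("num_people", 0) > 1:
        return candidates["multi_person"]   # trailing, quarreling, etc.
    return candidates["single_person"]      # overhead suspended heavy objects, etc.
```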
In the embodiment of the application, determining the corresponding recognition model from the information content allows the abnormal behavior in the image to be recognized more accurately.
S2033, accurately identifying the information content in the target image according to the determined identification model to obtain an abnormal behavior identification result.
In this step, after the identification model of the associated abnormal behavior is determined, the target image is input into the identification model and compared against the images in the security feature library; if the behaviors in the two images match, the identified image and the target image in the behavior security tag library are determined to be of the same type and are marked with the same tag.
In this step, "identifying the information content in the target image to obtain the abnormal behavior identification result" specifically includes the following:
extracting the feature information in the target image;
inputting the feature information into the recognition model of the target abnormal behavior, and obtaining the judgment result output by the recognition model of the target abnormal behavior;
and determining an abnormal behavior identification result of the target image according to the judgment result.
Feature information refers to information used to describe the image content, and may include objects, edges, colors, textures, shapes, etc. appearing in the image. To help a machine learning model identify images, feature extraction algorithms are typically used to extract the feature information of the images. These algorithms extract information such as edges, colors, textures and shapes from the images and convert it into feature vectors; by inputting the feature vectors into the recognition model, a judgment result on the abnormal behavior of the target image is obtained. According to the judgment result, it is determined whether the abnormal behavior result of the target image is correct; specifically, an operator can manually check whether the output result is correct.
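The following is a minimal sketch of the feature-vector pipeline described above, assuming OpenCV for SIFT feature extraction and a pre-trained scikit-learn style classifier; averaging the descriptors into a single vector is a crude pooling choice made here for illustration, not the prescribed method.

```python
import cv2

def classify_behavior(image, model):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    _keypoints, descriptors = sift.detectAndCompute(gray, None)
    if descriptors is None:
        return None  # no feature information could be extracted
    feature_vector = descriptors.mean(axis=0).reshape(1, -1)  # pool to one vector
    return model.predict(feature_vector)[0]  # the model's judgment result
```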
In the embodiment of the application, the feature information in the image is extracted and input into the designated recognition model to identify whether abnormal behavior exists in the image, so that the abnormal behavior in the image can be recognized more accurately.
In this embodiment of the present application, performing behavior recognition on a target image in the video image sequence based on a pre-connected behavior security tag library includes: determining at least two candidate abnormal behavior recognition models based on the pre-connected behavior security tag library; determining the identification model of the target abnormal behavior suspected to be associated with the target image according to the information content in the target image; and accurately identifying the information content in the target image according to the identification model of the target abnormal behavior to obtain an abnormal behavior identification result. Determining the associated recognition model from the information content allows the abnormal behavior in the image to be recognized more accurately, improving the accuracy of image recognition.
Example 4
Fig. 4 is a schematic structural diagram of an automatic monitoring device based on behavior tags according to a fourth embodiment of the present application. As shown in Fig. 4, the device specifically includes the following modules:
an image sequence acquisition module 401, configured to acquire a video image sequence through a security camera;
a security tag image confirmation module 402, configured to determine an image with a behavior security tag based on a pre-connected behavior security tag library, and to determine a monitoring target triggering behavior security;
a track information generating module 403, configured to determine an image capturing sequence of the security camera according to the position information of the monitoring target in the video image, so as to automatically generate track information of the monitoring target;
and a first parameter adjustment instruction generating module 404, configured to generate a parameter adjustment instruction for a security camera in the image capturing sequence according to the occurrence time of the monitoring target, so as to adjust the video resolution and frame rate of the security camera.
Further, the device also comprises:
an associated camera determining module, configured to determine associated cameras that belong to the same bandwidth as the security cameras in the image capturing sequence;
and a second parameter adjustment instruction generation module, configured to generate a parameter adjustment instruction for the associated cameras according to the time at which the monitoring target appears at the security cameras in the image capturing sequence, so as to inversely adjust the video resolution and frame rate of the associated cameras.
Further, the security tag image confirmation module includes:
a target image behavior recognition unit, configured to perform behavior recognition on the target images in the video image sequence based on a pre-connected behavior security tag library;
and a security tag marking unit, configured to, if a target image matching an abnormal behavior tag in the behavior security tag library is identified, distinctively mark the behavior security tags according to the type of the monitoring target triggering the behavior security.
Further, the target image behavior recognition unit specifically includes:
a behavior recognition model determining subunit, configured to determine at least two candidate abnormal behavior recognition models based on a pre-connected behavior security tag library;
the abnormal behavior recognition model determining subunit is used for determining target abnormal behaviors associated with the target image according to the target image and determining a recognition model applicable to the target abnormal behaviors;
and the abnormal behavior recognition result obtaining subunit is used for recognizing the information content in the target image according to the determined recognition model of the target abnormal behavior to obtain an abnormal behavior recognition result.
Further, the abnormal behavior recognition result obtaining subunit is specifically configured for:
extracting the feature information in the target image;
inputting the feature information into the recognition model of the target abnormal behavior, and obtaining the judgment result output by the recognition model;
and determining an abnormal behavior identification result of the target image according to the judgment result.
Further, the abnormal behavior recognition model determining subunit is specifically configured to:
acquiring information content in the target image;
and determining the identification model of the target abnormal behavior associated with the target image according to whether the information content contains persons and, if so, their number; whether suspended articles exist and, if so, their areas; and whether sharp articles exist and, if so, their positions.
In the embodiment of the application, the image sequence acquisition module is used for acquiring a video image sequence through the security camera; the security tag image confirmation module is used for determining an image with a behavior security tag based on a pre-connected behavior security tag library and determining a monitoring target triggering behavior security; the track information generation module is used for determining the image capturing sequence of the security camera according to the position information of the monitoring target in the video image, so as to automatically generate track information of the monitoring target; and the first parameter adjustment instruction generation module is used for generating parameter adjustment instructions for the security cameras in the image capturing sequence according to the occurrence time of the monitoring target, so as to adjust the video resolution and frame rate of the security cameras. By analyzing the video image sequence, the video acquisition parameters of the monitoring equipment can be automatically adjusted, the track of the monitored object can be generated, and parameter reduction processing can be carried out on monitoring equipment irrelevant to the track, so that the monitoring system is adaptively adjusted.
The automatic monitoring device based on the behavior tag in the embodiment of the application can be a device, and also can be a component, an integrated circuit or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a cell phone, tablet computer, notebook computer, palm computer, vehicle-mounted electronic device, wearable device, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook or personal digital assistant (personal digital assistant, PDA), etc., and the non-mobile electronic device may be a server, network attached storage (Network Attached Storage, NAS), personal computer (personal computer, PC), television (TV), teller machine or self-service machine, etc., and the embodiments of the present application are not limited in particular.
The automatic monitoring device based on the behavior tag in the embodiment of the present application may be a device with an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The automatic monitoring device based on the behavior tag provided in the embodiment of the present application can implement each process implemented by the method embodiments of Fig. 1 to Fig. 3; to avoid repetition, a detailed description is omitted here.
Example 5
As shown in Fig. 5, the fifth embodiment of the present application further provides an electronic device 500, which includes a processor 501, a memory 502, and a program or an instruction stored in the memory 502 and capable of running on the processor 501, where the program or the instruction, when executed by the processor 501, implements each process of the foregoing embodiments of the automatic monitoring method based on behavior tags and can achieve the same technical effects; to avoid repetition, details are not repeated here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device described above.
Example 6
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the processes of the foregoing embodiment of the automatic monitoring method based on behavior tags are implemented, and the same technical effects can be achieved, so that repetition is avoided, and no further description is given here.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium such as a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
Example 7
The embodiment of the application further provides a chip, the chip including a processor and a communication interface coupled with the processor, where the processor is configured to run a program or an instruction to implement each process of the above embodiments of the automatic monitoring method based on behavior tags and achieve the same technical effects; to avoid repetition, no further description is provided here.
It should be understood that the chip referred to in the embodiments of the present application may also be referred to as a system-level chip, a chip system, or a system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in the reverse order, depending on the functions involved; e.g., the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general hardware platform, or of course by means of hardware, although in many cases the former is preferred. Based on such understanding, the technical solutions of the present application, or the part thereof contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (such as ROM/RAM, magnetic disk, or optical disk), comprising several instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. Many forms may be devised by those of ordinary skill in the art in light of the present application without departing from its spirit and the scope of the claims, and these also fall within the protection of the present application.
The foregoing description is only of the preferred embodiments of the present application and the technical principles employed. The present application is not limited to the specific embodiments described herein, but is capable of numerous obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the present application. Therefore, while the present application has been described in connection with the above embodiments, the present application is not limited to the above embodiments, but may include many other equivalent embodiments without departing from the spirit of the present application, and the scope of the present application is determined by the scope of the claims.