Disclosure of Invention
The embodiment of the invention aims to provide a smart community data acquisition method based on Bluetooth Mesh, so as to solve the problems set forth in the background art.
The embodiment of the invention is realized in such a way that a smart community data acquisition method based on Bluetooth Mesh comprises the following steps:
acquiring monitoring video data acquired by a camera;
generating dynamic video data and a static picture set according to the monitoring video data, wherein the dynamic video data represent video segments in which the pixel variation between images within the acquisition time nodes of the monitoring video data is not less than a set threshold; the static picture set contains a plurality of static pictures, each static picture corresponds to a static video segment, and the static video segments are the monitoring video data with the dynamic video data removed;
and the dynamic video data and the static picture set are transmitted to the server through the Bluetooth Mesh gateway, so that the server restores and stores the monitoring video data according to the dynamic video data and the static picture set.
Preferably, the step of generating the dynamic video data and the static image set according to the monitoring video data specifically includes:
reading the monitoring video data frame by frame, and numbering each frame of image according to time nodes;
comparing the current image with an image to be judged according to the sequence of the serial numbers, wherein the image to be judged is a first image behind the current image;
when the pixel variation of the current image and the image to be judged is smaller than a set threshold, judging the image to be judged as a static image, and when the pixel variation of the current image and the image to be judged is not smaller than the set threshold, judging the image to be judged as a dynamic image;
and integrating continuous dynamic images into dynamic video data according to the time nodes, integrating continuous static images into a static video segment according to the time nodes, and selecting a static picture from the static video segment to obtain a static picture set.
Preferably, the step of transmitting the dynamic video data and the still picture set to the server specifically includes:
reading the real-time occupancy rate of a data transmission channel between the Bluetooth Mesh terminal and the Bluetooth Mesh gateway;
distributing the dynamic video data and the static picture set to each data transmission channel according to the real-time occupancy rate;
and transmitting the dynamic video data and the static image set to a server by using a preset data transmission channel.
Preferably, the step of restoring the monitoring video data according to the dynamic video data and the static image set specifically includes:
restoring a static video segment according to the static picture set;
acquiring the starting time and the ending time of the dynamic video data and the static video segment according to the time node;
and splicing the dynamic video data and the static video segment according to the starting time and the ending time to obtain the original monitoring video data.
Preferably, after the step of transmitting the dynamic video data and the still picture set to the server through the bluetooth Mesh gateway, the method further includes: and acquiring a successful receiving signal sent by the Bluetooth Mesh gateway.
Preferably, if a successful receiving signal sent by the bluetooth Mesh gateway is not obtained, the sending is regarded as failed, and the dynamic video data and the static picture set are sent to the bluetooth Mesh gateway again.
Preferably, the step of allocating the dynamic video data and the static picture set to each data transmission channel according to the real-time occupancy rate specifically includes:
acquiring the real-time occupancy rate of each data transmission channel;
calculating a difference value between a preset threshold occupancy rate and a real-time occupancy rate;
and distributing the dynamic video data and the static picture sets corresponding to the memory occupation amount to the corresponding data transmission channels according to the difference values.
Another objective of an embodiment of the present invention is to provide a bluetooth Mesh-based smart community data acquisition system, including:
the video acquisition module is used for acquiring monitoring video data acquired by the camera;
the video processing module is used for generating dynamic video data and a static picture set according to the monitoring video data, wherein the dynamic video data represent video segments in which the pixel variation between images within the acquisition time nodes of the monitoring video data is not less than a set threshold; the static picture set contains a plurality of static pictures, each static picture corresponds to a static video segment, and the static video segments are the monitoring video data with the dynamic video data removed; and
and the data sending module is used for transmitting the dynamic video data and the static picture set to the server through the Bluetooth Mesh gateway, so that the server restores the monitoring video data according to the dynamic video data and the static picture set and stores the data.
Preferably, the video processing module includes:
the numbering unit is used for reading the monitoring video data frame by frame and numbering each frame of image according to the time node;
the comparison unit is used for comparing the current image with an image to be judged according to the sequence of numbers, wherein the image to be judged is a first image behind the current image;
the judging unit is used for judging the image to be judged as a static image when the pixel variation of the current image and the image to be judged is smaller than a set threshold value, and judging the image to be judged as a dynamic image when the pixel variation of the current image and the image to be judged is not smaller than the set threshold value; and
and the recombination unit integrates continuous dynamic images into dynamic video data according to the time node, integrates continuous static images into a static video segment according to the time node, and selects a static picture from the static video segment to obtain a static picture set.
According to the smart community data acquisition method based on Bluetooth Mesh, the acquired monitoring video data are processed and converted into dynamic video data and a static picture set, and each static video segment is represented by a static picture in the static picture set. This reduces the transmission bandwidth occupied by video segments of repeated pictures during transmission, fundamentally resolves the problem of channel blockage, and improves the efficiency and stability of monitoring video data transmission.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms unless otherwise specified. These terms are only used to distinguish one element from another. For example, a first xx script may be referred to as a second xx script, and similarly, a second xx script may be referred to as a first xx script, without departing from the scope of the present application.
Fig. 1 is a diagram of a network implementation environment of a smart community data acquisition method based on Bluetooth Mesh according to an embodiment of the present invention. As shown in Fig. 1, the network implementation environment includes a camera, a Bluetooth Mesh terminal, a Bluetooth Mesh gateway, and a server.
The server may be an independent physical server or terminal, a server cluster formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud databases, cloud storage, and a CDN.
The Bluetooth Mesh gateway can be used for interconnection of both wide area networks and local area networks. It may be one of a signaling gateway, a relay gateway, an access gateway, or a protocol gateway.
The Bluetooth Mesh terminal refers to a low-cost, short-range wireless communication device and may therefore be any device having a Bluetooth function.
The camera, also called a computer camera, computer eye, or electronic eye, is a video input device widely applied in video conferencing, telemedicine, real-time monitoring, and the like. The camera may be a digital camera or an analog camera.
In the network implementation environment diagram, each camera is connected with a bluetooth Mesh terminal, the bluetooth Mesh terminals are connected with each other through bluetooth, and meanwhile, the bluetooth Mesh terminals are connected with a bluetooth Mesh gateway which is connected with a server.
Example 1:
Fig. 2 is a flowchart of a smart community data acquisition method based on Bluetooth Mesh according to an embodiment of the present invention. The method is applied to the Bluetooth Mesh terminal.
the smart community data acquisition method based on the Bluetooth Mesh is detailed as follows:
and S100, acquiring monitoring video data acquired by the camera.
In this step, the cameras are used for shooting. Within the community, a plurality of cameras may be provided and evenly distributed, and the shooting angle of each camera is fixed, so the background in the monitoring video data each camera acquires does not change. After a camera has acquired monitoring video data, it transmits the data directly to the Bluetooth Mesh terminal connected to it.
S200, generating dynamic video data and a static picture set according to the monitoring video data.
In this step, after receiving the monitoring video data, the Bluetooth Mesh terminal intercepts the dynamic video segments therein. As for the static video segments recorded in the monitoring video data, their pictures remain unchanged throughout, so transmitting them to the server in full would generate a large amount of repeated data transmission. The Bluetooth Mesh terminal therefore extracts each static video segment from the monitoring video data and intercepts one static picture from each static video segment to represent it; the length and time nodes of the static video segment are recorded in the static picture. After this processing, the dynamic video data and the static picture set are obtained and transmitted to the Bluetooth Mesh gateway.
S300, transmitting the dynamic video data and the static picture set to a server through the Bluetooth Mesh gateway, so that the server restores and stores the monitoring video data according to the dynamic video data and the static picture set.
In this step, the Bluetooth Mesh gateway receives the dynamic video data and the static picture set and forwards them to the server. The dynamic video data and the static picture set cannot be used directly, so the server restores the monitoring video data from them and stores it.
In this process, after the step of transmitting the dynamic video data and the static picture set to the server through the Bluetooth Mesh gateway, the terminal listens for a receiving-success signal sent by the Bluetooth Mesh gateway. If the signal is received, the data have been transmitted successfully; if not, the transmission has failed, and the Bluetooth Mesh terminal retransmits the data.
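The acknowledgement-and-retry behaviour described above can be sketched as follows. This illustrative Python sketch is not part of the claimed embodiment; send_to_gateway and wait_for_ack are hypothetical callbacks standing in for the Bluetooth Mesh terminal's transmit and acknowledgement-listening primitives.

```python
def transmit_with_retry(payload, send_to_gateway, wait_for_ack, max_retries=3):
    """Resend the payload until the gateway reports successful reception."""
    for _ in range(max_retries):
        send_to_gateway(payload)
        if wait_for_ack():
            return True   # receiving-success signal obtained
    return False          # every attempt failed

# Stub example: the gateway acknowledges only on the second attempt.
acks = iter([False, True])
result = transmit_with_retry(b"segment", lambda p: None, lambda: next(acks))
print(result)  # True
```

A bounded retry count is assumed here for illustration; the embodiment itself only specifies that failed transmissions are resent.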
According to the smart community data acquisition method based on Bluetooth Mesh, the acquired monitoring video data are processed and converted into dynamic video data and a static picture set, and each static video segment is represented by a static picture in the static picture set. This reduces the transmission bandwidth occupied by video segments of repeated pictures during transmission, fundamentally resolves the problem of channel blockage, and improves the efficiency and stability of monitoring video data transmission.
Example 2:
fig. 3 is a flowchart of a process for generating dynamic video data and a still image set according to surveillance video data according to an embodiment of the present invention;
the flow chart depicted in figure 3 includes the following steps,
s201, reading the monitoring video data frame by frame, and numbering each frame of image according to time nodes.
In this step, the monitoring video data are first parsed frame by frame into individual images, and each frame image is then numbered according to its time node, so that each frame image records the corresponding time node.
S202, comparing the current image with an image to be judged according to the numbering sequence, wherein the image to be judged is the first image behind the current image.
In this step, both the current image and the image to be judged must be processed before the comparison. The current image and the image to be judged are first read simultaneously and pixelized to obtain two pixelized images; gray-scale processing is then performed on the two pixelized images, and the two gray-scale-processed images are compared pixel by pixel to calculate the pixel variation between them.
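The comparison process in S202 can be illustrated with the following sketch. The function names (to_grayscale, pixel_variation) and the use of ITU-R BT.601 gray-scale weights are illustrative assumptions, not details specified by the embodiment; frames are modeled as 2D lists of RGB tuples.

```python
def to_grayscale(frame):
    """Convert an RGB frame to grayscale using the ITU-R BT.601 weights."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in frame]

def pixel_variation(current, candidate):
    """Mean absolute grayscale difference between two equally sized frames."""
    ga, gb = to_grayscale(current), to_grayscale(candidate)
    total = sum(abs(pa - pb)
                for ra, rb in zip(ga, gb)
                for pa, pb in zip(ra, rb))
    return total / (len(ga) * len(ga[0]))

# A frame is judged "static" when the variation stays below the set threshold.
THRESHOLD = 10.0
frame_a = [[(100, 100, 100)] * 4] * 4
frame_b = [[(102, 102, 102)] * 4] * 4   # nearly identical picture
print(pixel_variation(frame_a, frame_b) < THRESHOLD)  # True -> static image
```

Mean absolute gray-scale difference is one simple way to realize the "pixel variation" quantity; other per-pixel metrics would fit the described scheme equally well.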
S203, judging whether the pixel variation of the current image and the image to be judged is smaller than a set threshold value, if so, jumping to S204, and if not, jumping to S205.
S204, judging that the image to be judged is a static image, and jumping to S206.
S205, judging the image to be judged as the dynamic image, and jumping to S206.
In this step, a determination is made according to the magnitude relationship between the pixel variation and the set threshold. When the pixel variation is smaller than the set threshold, there is no significant difference between the current image and the image to be judged, so it can be considered that the picture in the video acquired by the camera has not changed; the image to be judged is determined to be a static image and is stored. When the pixel variation is not smaller than the set threshold, there is a large difference between the current image and the image to be judged, so the image is determined to be a dynamic image and is stored. After each frame image recorded in the monitoring video data has been traversed, the process jumps to S206.
S206, integrating the dynamic image into dynamic video data, integrating the static image into a static video segment, and representing the corresponding static video segment by using the static image to obtain a static image set.
In this step, after the frame-by-frame judgment, each frame image in the monitoring video data has been classified as a dynamic image or a static image. All the images are then sorted according to their time nodes. Continuous dynamic images are integrated into the dynamic video data, so the dynamic video data actually record a plurality of discontinuous dynamic video segments. Continuous static images are integrated into static video segments, and a static picture is extracted from each static video segment to represent it; the total duration, start time, and end time of the static video segment are recorded in the static picture. Finally, all the static pictures are integrated to obtain the static picture set.
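The integration step S206 can be sketched as follows. This is an illustrative reconstruction under stated assumptions: build_segments is a hypothetical name, frames are represented only by their indices, and a fixed frame rate converts indices to time nodes.

```python
from itertools import groupby

def build_segments(labeled_frames, fps=25):
    """labeled_frames: list of (frame_index, label) pairs sorted by time,
    where label is "dynamic" or "static". Consecutive frames with the same
    label form one segment; each static segment is reduced to one still
    picture carrying the segment's start time, end time, and duration."""
    dynamic_segments, still_pictures = [], []
    for label, run in groupby(labeled_frames, key=lambda f: f[1]):
        frames = [idx for idx, _ in run]
        start, end = frames[0] / fps, (frames[-1] + 1) / fps
        if label == "dynamic":
            dynamic_segments.append({"start": start, "end": end,
                                     "frames": frames})
        else:
            # one representative still picture stands in for the whole run
            still_pictures.append({"frame": frames[0], "start": start,
                                   "end": end, "duration": end - start})
    return dynamic_segments, still_pictures

# 2 s of static picture followed by 1 s of motion, at 25 fps:
labels = ([(i, "static") for i in range(50)]
          + [(i, "dynamic") for i in range(50, 75)])
dyn, stills = build_segments(labels)
print(len(dyn), len(stills))  # 1 1
```

Recording the start time, end time, and duration with each still picture is what later allows the server to restore and splice the segments (S304 to S306).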
Example 3:
fig. 4 is a flowchart of a process of delivering motion video data and a still picture set to a server according to an embodiment of the present invention;
the flow chart depicted in figure 4 includes the following steps,
and S301, reading the real-time occupancy rate of a data transmission channel between the Bluetooth Mesh gateway and the Bluetooth Mesh gateway.
In this step, the real-time occupancy rate of each data transmission channel between the Bluetooth Mesh terminals and the Bluetooth Mesh gateway is read directly. The real-time occupancy rate is the ratio of the bandwidth currently occupied by a data transmission channel to its total bandwidth, and thus represents the utilization rate of the channel.
S302, distributing the dynamic video data and the static picture set to each data transmission channel according to the real-time occupancy rate.
In this step, the dynamic video data and the static picture set are segmented and packaged to obtain a plurality of data packets, and the data packets are distributed to each data transmission channel according to the real-time occupancy rate.
S303, transmitting the dynamic video data and the static picture set to a server by using the preset data transmission channels.
In this step, the dynamic video data and the static picture set stored in the data packets are transmitted to the server using these data transmission channels.
Example 4:
fig. 5 is a flowchart of a process of restoring monitoring video data according to dynamic video data and a static image set according to an embodiment of the present invention;
the flow chart depicted in figure 5 includes the following steps,
and S304, restoring the still video segment according to the still picture set.
In this step, since the static pictures are not videos but pictures with time information recorded in them, they must be restored first. During restoration, each static picture and its recorded time information are read; the time information includes the duration, start time, and end time of the static video segment corresponding to the static picture. Each static video segment is then restored according to the frame rate of the video in the monitoring video data.
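The restoration described above can be sketched as follows. restore_static_segment is a hypothetical name; the sketch assumes the still picture is simply duplicated for the recorded duration at the video's frame rate, which is one straightforward way to realize the step.

```python
def restore_static_segment(still_picture, fps=25):
    """Expand a still picture back into a run of identical frames using the
    recorded duration and the frame rate of the original video."""
    frame_count = round(still_picture["duration"] * fps)
    return [still_picture["image"]] * frame_count

still = {"image": "IMG_0", "start": 0.0, "end": 2.0, "duration": 2.0}
frames = restore_static_segment(still)
print(len(frames))  # 50 frames for a 2-second segment at 25 fps
```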
S305, acquiring the start time and the end time of the dynamic video data and the static video segment according to the time node.
In this step, the start and end times of the dynamic video data and the still video segment are obtained for subsequent splicing.
S306, splicing the dynamic video data and the static video segments according to the start time and the end time to obtain the original monitoring video data.
In this step, the dynamic video data and the static video segments are spliced in sequence according to the read start time and end time, so as to obtain the original monitoring video data.
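Steps S305 and S306 amount to ordering the segments by their recorded times and concatenating them. The following sketch is illustrative; splice_segments is a hypothetical name, and each segment (dynamic, or static after restoration) is assumed to carry its start time and frame list.

```python
def splice_segments(segments):
    """Order segments by start time and concatenate their frames to
    reconstruct the original monitoring video frame sequence."""
    ordered = sorted(segments, key=lambda s: s["start"])
    video = []
    for seg in ordered:
        video.extend(seg["frames"])
    return video

# A restored static segment (0-2 s) and a dynamic segment (2-3 s), given
# out of order, are reassembled into one continuous sequence:
parts = [{"start": 2.0, "frames": ["D"] * 25},
         {"start": 0.0, "frames": ["S"] * 50}]
restored = splice_segments(parts)
print(len(restored), restored[0], restored[-1])  # 75 S D
```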
Example 5:
FIG. 6 is a flowchart of a process for allocating dynamic video data and static picture sets to respective data transmission channels according to real-time occupancy, according to an embodiment of the present invention;
the flow chart depicted in figure 6 includes the following steps,
and S3021, acquiring the real-time occupancy rate of each data transmission channel.
In this step, the current usage of each data transmission channel is analyzed before allocation. If a data transmission channel is transmitting abnormally or is already at capacity, allocating a transmission task to it would reduce data transmission efficiency, so the real-time occupancy rate of each data transmission channel is read before allocation.
S3022, calculating a difference value between a preset threshold occupancy rate and the real-time occupancy rate.
In this step, since the total transmission capacity of each data transmission channel is fixed, the preset threshold occupancy rate is set to 80% to ensure efficient transmission. The real-time occupancy rate is the current actual transmission occupancy rate, and the difference value is obtained by subtracting the real-time occupancy rate from the threshold occupancy rate.
S3023, distributing dynamic video data and static picture sets of corresponding memory occupation amounts to the corresponding data transmission channels according to the difference values.
In this step, dynamic video data and static pictures of a corresponding memory occupation amount are allocated to each data transmission channel according to the size of its difference value, so that the occupancy rate of the data transmission channel stays at about 80%. This ensures the utilization rate of the data transmission channel while reserving a certain bandwidth to prevent channel blockage.
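The allocation in S3021 to S3023 can be sketched as follows. The sketch assumes (illustratively, not per the embodiment) that the data volume is split across channels in proportion to each channel's headroom, i.e. the difference between the 80% threshold occupancy and its real-time occupancy; allocate_packets is a hypothetical name.

```python
THRESHOLD_OCCUPANCY = 0.80   # preset threshold occupancy rate from the text

def allocate_packets(total_bytes, realtime_occupancy):
    """Split total_bytes across channels proportionally to each channel's
    headroom (threshold occupancy minus real-time occupancy). Channels at or
    above the threshold receive nothing."""
    headroom = [max(THRESHOLD_OCCUPANCY - occ, 0.0)
                for occ in realtime_occupancy]
    total_headroom = sum(headroom)
    if total_headroom == 0:
        return [0] * len(realtime_occupancy)   # every channel is saturated
    return [round(total_bytes * h / total_headroom) for h in headroom]

# Three channels at 20%, 60% and 90% occupancy: the busiest channel gets
# nothing, and the others share in proportion to their headroom.
shares = allocate_packets(9000, [0.20, 0.60, 0.90])
print(shares)  # [6750, 2250, 0]
```

Proportional headroom allocation keeps each channel's occupancy near, but below, the 80% threshold, matching the stated goal of high utilization with reserved bandwidth.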
Example 6:
fig. 7 is an architecture diagram of a bluetooth Mesh-based smart community data acquisition system according to an embodiment of the present invention.
The smart community data acquisition system based on Bluetooth Mesh includes:
the video acquisition module 100 is configured to acquire monitoring video data acquired by a camera;
the video processing module 200 is configured to generate dynamic video data and a static picture set according to the monitoring video data, where the dynamic video data represent video segments in which the pixel variation between images within the acquisition time nodes of the monitoring video data is not less than a set threshold; the static picture set contains a plurality of static pictures, each static picture corresponds to a static video segment, and the static video segments are the monitoring video data with the dynamic video data removed;
the video processing module comprises:
a numbering unit 201, configured to read the monitoring video data frame by frame, and number each frame of image according to a time node;
a comparing unit 202, configured to compare, according to a numbering sequence, a current image with an image to be determined, where the image to be determined is a first image located behind the current image;
a determining unit 203, configured to determine that the image to be determined is a static image when the pixel variation of the current image and the image to be determined is smaller than a set threshold, and determine that the image to be determined is a dynamic image when the pixel variation of the current image and the image to be determined is not smaller than the set threshold; and
the reconstructing unit 204 integrates continuous dynamic images into dynamic video data according to the time node, integrates continuous static images into a static video segment according to the time node, and selects a static picture from the static video segment to obtain a static picture set.
And the data sending module 300 is configured to transmit the dynamic video data and the static picture set to the server through the Bluetooth Mesh gateway, so that the server restores the monitoring video data according to the dynamic video data and the static picture set and stores the data.
It should be understood that, although the steps in the flowcharts of the embodiments of the present invention are shown in sequence as indicated by the arrows, they are not necessarily performed in that sequence. Unless explicitly stated otherwise herein, the steps are not strictly limited in order and may be performed in other orders. Moreover, at least some of the steps in the various embodiments may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and these sub-steps or stages are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the program is executed. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.