CN109166261A - Image processing method, device, equipment and storage medium based on image recognition - Google Patents

Image processing method, device, equipment and storage medium based on image recognition
Download PDF

Info

Publication number
CN109166261A
CN109166261A
Authority
CN
China
Prior art keywords
target
image
target area
frame image
reference image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811186563.9A
Other languages
Chinese (zh)
Other versions
CN109166261B (en)
Inventor
王义文
王健宗
肖京
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd
Priority to CN201811186563.9A
Priority to PCT/CN2018/123882 (WO2020073505A1)
Publication of CN109166261A
Application granted
Publication of CN109166261B
Legal status: Active (current)
Anticipated expiration

Abstract

The embodiment of the invention discloses an image processing method, apparatus, device, and storage medium based on image recognition. The method includes: shooting a target area through a shooting device to obtain target video data of the target area; screening a target frame image from the target video data according to a preset screening rule and obtaining a reference image of the target area; comparing the characteristic information of the target frame image with the characteristic information of the reference image to obtain the matching degree between the target frame image and the reference image; and, when the matching degree of the target frame image and the reference image is smaller than a preset threshold, determining that a target object has intruded into the target area. Objects intruding into the monitored area are thereby identified automatically, which improves the efficiency of image recognition.

Description

Image processing method, device and equipment based on image recognition and storage medium
Technical Field
The present invention relates to the field of medical technology, and in particular, to an image processing method, apparatus, device, and storage medium based on image recognition.
Background
Image recognition is a method of processing, analyzing, and understanding images in order to identify the objects they contain. It is widely applied in fields such as security video monitoring, image retrieval, autonomous driving, and quality inspection, and brings great convenience to users' lives and work. In the field of security video monitoring, a worker mainly watches video data shot of a monitored area and compares every two frames of images in the video data to judge whether an abnormal object (such as a stranger) has intruded into the monitored area. In practice, this image recognition method consumes a great deal of time and occupies substantial labor resources, so image recognition efficiency is low.
Disclosure of Invention
The embodiment of the invention provides an image processing method, apparatus, device, and storage medium based on image recognition, which can automatically recognize objects intruding into a monitored area and improve the efficiency of image recognition.
In a first aspect, an embodiment of the present invention provides an image processing method based on image recognition, where the method includes:
shooting a target area through a shooting device to obtain target video data of the target area;
screening a target frame image from the target video data according to a preset screening rule, and acquiring a reference image of the target area, wherein the reference image is an image shot when no target object intrudes into the target area;
comparing the characteristic information of the target frame image with the characteristic information of the reference image to obtain the matching degree between the target frame image and the reference image;
and when the matching degree of the target frame image and the reference image is smaller than a preset threshold value, determining that the target object intrudes into the target area.
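The four steps above can be sketched end to end. The following is a minimal illustration, not the patented implementation: `matching_degree` stands in for the feature comparison with a simple grayscale-histogram intersection, and all function names and the threshold value are hypothetical.

```python
import numpy as np

def gray_hist(img, bins=32):
    """Normalized grayscale histogram, a toy stand-in for the patent's
    'characteristic information' (HOG, SIFT, or a color histogram)."""
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    return h / h.sum()

def matching_degree(target_frame, reference_image):
    """Histogram intersection in [0, 1]; 1.0 means identical distributions."""
    return float(np.minimum(gray_hist(target_frame),
                            gray_hist(reference_image)).sum())

def detect_intrusion(target_frames, reference_image, threshold=0.9):
    """Return indices of target frames whose matching degree with the
    reference image falls below the preset threshold."""
    return [i for i, f in enumerate(target_frames)
            if matching_degree(f, reference_image) < threshold]
```

A frame identical to the reference scores 1.0 and is not flagged; a frame that differs strongly scores near 0 and is reported as an intrusion.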
In a second aspect, an embodiment of the present invention provides an image processing apparatus based on image recognition, including:
and the shooting module is used for shooting the target area through the shooting device so as to obtain the target video data of the target area.
And the screening module is used for screening a target frame image from the target video data according to a preset screening rule and acquiring a reference image of the target area, wherein the reference image is an image shot when no target object intrudes into the target area.
And the comparison module is used for comparing the characteristic information of the target frame image with the characteristic information of the reference image so as to obtain the matching degree between the target frame image and the reference image.
And the determining module is used for determining that the target object intrudes into the target area when the matching degree of the target frame image and the reference image is smaller than a preset threshold value.
In a third aspect, an embodiment of the present invention provides a monitoring device, including a processor adapted to implement one or more instructions; and a computer storage medium storing one or more instructions adapted to be loaded by a processor and to perform the steps of:
shooting a target area through a shooting device to obtain target video data of the target area;
screening a target frame image from the target video data according to a preset screening rule, and acquiring a reference image of the target area, wherein the reference image is an image shot when no target object intrudes into the target area;
comparing the characteristic information of the target frame image with the characteristic information of the reference image to obtain the matching degree between the target frame image and the reference image;
and when the matching degree of the target frame image and the reference image is smaller than a preset threshold value, determining that the target object intrudes into the target area.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, where one or more instructions are stored, and the one or more instructions are adapted to be loaded by a processor and execute the following steps:
shooting a target area through a shooting device to obtain target video data of the target area;
screening a target frame image from the target video data according to a preset screening rule, and acquiring a reference image of the target area, wherein the reference image is an image shot when no target object intrudes into the target area;
comparing the characteristic information of the target frame image with the characteristic information of the reference image to obtain the matching degree between the target frame image and the reference image;
and when the matching degree of the target frame image and the reference image is smaller than a preset threshold value, determining that the target object intrudes into the target area.
In the embodiment of the invention, whether a target object intrudes into the target area can be automatically identified by recognizing the target frame image in the video data of the target area. This saves labor resources, improves image recognition efficiency, meets users' demands for automated and intelligent video monitoring, and effectively safeguards their security. In addition, because the video data of the target area is screened, only some of its frame images need to be recognized rather than all of them, which further improves image recognition efficiency.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic flowchart of an image processing method based on image recognition according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of an image processing method based on image recognition according to another embodiment of the present invention;
fig. 3 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a monitoring device according to another embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
An example of the invention may be performed by a monitoring device that includes a front-end portion, a transmission portion, and a back-end portion. The front-end portion mainly comprises a camera, a sensor, a lens, a pan-tilt head, a protective cover, a bracket, a decoder, and the like, and is mainly used for shooting videos, audio, or images. The transmission portion uses cables and wires, routed overhead, underground, or along walls, to transmit video, audio, or control signals. The back-end portion mainly comprises a picture divider, a monitor, control devices, video storage devices, and the like, and is mainly used for processing videos or images.
The embodiment of the invention can be applied to security video monitoring scenes, analyzing the video data of a scene to judge whether a target object has broken into it. Security video monitoring scenes include video monitoring of residential communities, military areas, or shopping-mall warehouses, among others, and target objects include strangers, animals, and the like. Specifically, in order to monitor the monitored area, the shooting device of the monitoring device may shoot the monitored area to obtain its video data. A target frame image is screened from the video data of the monitored area according to a preset screening rule, and a reference image of the monitored area is acquired, the reference image being an image in which no target object has broken into the monitored area. The characteristic information of the target frame image is compared with that of the reference image to obtain the matching degree between them; when this matching degree is smaller than a preset threshold, indicating a large difference between the target frame image and the reference image, it is determined that a target object has broken into the monitored area.
Whether a target object intrudes into the target area can be automatically identified by recognizing the target frame image in the video data of the target area. This saves labor resources, improves image recognition efficiency, meets users' demands for automated and intelligent video monitoring, and effectively safeguards their security. In addition, because the video data of the target area is screened, only some of its frame images need to be recognized rather than all of them, which further improves image recognition efficiency.
Fig. 1 is a schematic flow chart of an image processing method based on image recognition according to an embodiment of the present invention, which can be executed by the above-mentioned monitoring device. In this embodiment, the image processing method based on image recognition includes the following steps.
S101, shooting a target area through a shooting device to obtain target video data of the target area.
In the embodiment of the invention, for a target area into which target objects such as strangers or animals are forbidden to intrude, the target area can be shot by the shooting device, in the interest of users' personal and property safety, to obtain target video data of the target area. The shooting device can be a panoramic or hemispherical camera, and the target area can be the entrance of a residential community, the area of a garage or shopping-mall warehouse, a military zone, and so on.
In one embodiment, when a sensor in the monitoring device detects that a person or animal has intruded into the monitored area, the shooting device of the monitoring device is triggered to shoot the monitored area and obtain its video data. For example, the sensor emits an infrared signal and receives its reflection, and the time interval between emission and reception is calculated; when the interval is below a preset time threshold, it is determined that a person or animal has intruded into the target area, and the shooting device of the monitoring device is triggered to shoot the monitored area to obtain its video data.
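The sensor-triggered branch reduces to a round-trip-time check. The function name and the default threshold below are illustrative assumptions, not values given in the patent.

```python
def infrared_intrusion(emit_time_s, receive_time_s, preset_interval_s=0.02):
    """A nearby person or animal reflects the infrared signal quickly,
    so a round-trip interval below the preset threshold indicates an
    intrusion and would trigger the shooting device."""
    return (receive_time_s - emit_time_s) < preset_interval_s
```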
In another embodiment, to reduce the image-processing load on the monitoring device, the monitoring device may monitor the target area only within certain time periods. Specifically, a shooting time period is set for the shooting device; when the current time falls within this period, the shooting device of the monitoring device is triggered to shoot the monitored area and obtain its video data. The shooting time period may be set according to the historical pattern of intrusions into the target area, for example as a period in which the historical intrusion frequency exceeds a preset frequency, such as 6:00 to 12:00 at night.
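A shooting-period gate can be written as a simple time-window test. The 18:00–23:59 default below is a hypothetical reading of the "6:00 to 12:00 at night" example; the midnight-crossing branch is an added convenience, not claimed by the patent.

```python
from datetime import time

def in_shooting_period(now, start=time(18, 0), end=time(23, 59)):
    """True when `now` lies inside the configured shooting period, so
    the shooting device should be recording."""
    if start <= end:
        return start <= now <= end
    # Window crossing midnight, e.g. 18:00-06:00.
    return now >= start or now <= end
```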
S102, screening a target frame image from the target video data according to a preset screening rule, and acquiring a reference image of a target area, wherein the reference image is an image shot when no target object intrudes into the target area.
In the embodiment of the invention, in order to improve the efficiency of image recognition, the monitoring device may screen a target frame image from target video data according to a preset screening rule, and acquire a reference image, where the reference image may be acquired from a target video or acquired according to historical video data of a target area, and the reference image is an image shot when no target object intrudes into the target area.
S103, comparing the characteristic information of the target frame image with the characteristic information of the reference image to obtain the matching degree of the target frame image and the reference image.
In the embodiment of the present invention, the monitoring device may treat the target frame image and the reference image each as a whole and obtain their characteristic information, which may be at least one of a Histogram of Oriented Gradients (HOG), Scale-Invariant Feature Transform (SIFT) features, or a color histogram; the characteristic information of the target frame image is then compared with that of the reference image to obtain the matching degree between them. Alternatively, the target frame image and the reference image may each be divided into multiple sub-images, the characteristic information of each sub-image obtained separately, and the matching degree between the two images determined from the per-sub-image characteristic information. The greater the matching degree, the greater the similarity between the target frame image and the reference image, that is, the smaller the difference between them; conversely, the smaller the matching degree, the smaller the similarity and the greater the difference.
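Of the characteristic-information options listed, the color histogram is the simplest to sketch. The code below is one hedged interpretation of the whole-image comparison: per-channel histograms are concatenated into a feature vector and compared by histogram intersection (HOG or SIFT features would be drop-in alternatives); function names are hypothetical.

```python
import numpy as np

def color_histogram(img, bins=16):
    """Concatenated per-channel histograms, normalized so the whole
    feature vector sums to 1."""
    feats = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
             for c in range(img.shape[2])]
    v = np.concatenate(feats).astype(float)
    return v / v.sum()

def whole_image_match(target_frame, reference_image):
    """Whole-image matching degree: histogram intersection of the two
    feature vectors, in [0, 1]."""
    return float(np.minimum(color_histogram(target_frame),
                            color_histogram(reference_image)).sum())
```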
In an embodiment, to improve the efficiency of obtaining the matching degree between the target frame image and the reference image, part of the characteristic information of the target frame image may be compared with the corresponding part for the reference image. Specifically, the characteristic information of both images is sampled at a preset sampling frequency, and the characteristic information at each sampling point of the target frame image is compared with that at the corresponding sampling point of the reference image to obtain the matching degree between the two images.
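The sampled comparison can be sketched as striding over aligned feature vectors. This is an assumed reading of the "preset sampling frequency": every `step`-th feature value is compared, trading some accuracy for speed; names and the intersection measure are illustrative.

```python
import numpy as np

def sampled_match(target_features, reference_features, step=4):
    """Compare only every `step`-th value of two aligned, non-negative
    feature vectors; result is a matching degree in [0, 1]."""
    a = np.asarray(target_features, dtype=float)[::step]
    b = np.asarray(reference_features, dtype=float)[::step]
    return float(np.minimum(a, b).sum() / a.sum())
```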
In another embodiment, in order to improve the accuracy of obtaining the matching degree between the target frame image and the reference image, all the feature information of the target frame image may be compared with the corresponding feature information of the reference image to obtain the matching degree between the target frame image and the reference image.
It should be noted that, to improve the accuracy and flexibility of obtaining the matching degree between the target frame image and the reference image, the monitoring device may dynamically select a comparison policy for the characteristic information according to the stability of the target area; the policies are full comparison and partial comparison. Specifically, when the detected stability of the target area is greater than or equal to a preset stability value, the target area itself changes little, for example the background (inherent aspects of the target area such as illumination and weather) changes slowly and the probability of intrusion is small; partial comparison may then be selected, that is, part of the characteristic information of the target frame image is compared with part of that of the reference image, which speeds up obtaining the matching degree. When the detected stability of the target area is smaller than the preset stability value, the target area changes greatly, for example the background changes quickly and the probability of intrusion is high; full comparison may then be selected, that is, all of the characteristic information of the target frame image is compared with all of that of the reference image to obtain the matching degree. The comparison policy can also be selected manually by the user according to personal requirements.
And S104, when the matching degree of the target frame image and the reference image is smaller than a preset threshold value, determining that a target object intrudes into the target area.
In the embodiment of the invention, when the matching degree between the target frame image and the reference image is greater than or equal to the preset threshold, the difference between the two images is small and no target object has intruded into the target area; when the matching degree is smaller than the preset threshold, the difference is large, and it is determined that a target object has intruded into the target area. To identify the target object accurately, the preset threshold may be set according to the difference between the background of the target area and the target object: when a feature of the target object (e.g., its color) is close to the background content of the target area (e.g., the background color), the preset threshold is set to a smaller value, and when the difference between them is large, the preset threshold is set to a larger value.
In one example, to ensure the personal and property safety of users, when the matching degree between the target frame image and the reference image is smaller than the preset threshold, the monitoring device may output prompt information indicating that a target object has intruded into the target area. The prompt may take the form of voice output, a flashing warning light, vibration of the monitoring device, or the like.
In one embodiment, to notify a manager promptly so that an intrusion into the target area can be handled in time, when the matching degree of the target frame image and the reference image is smaller than the preset threshold, the manager's contact information is acquired and the target frame image is sent through it to the device bound to that contact information. The contact information includes the manager's telephone number or instant messaging account, such as a WeChat or QQ account.
In the embodiment of the invention, whether a target object intrudes into the target area can be automatically identified by recognizing the target frame image in the video data of the target area. This saves labor resources, improves image recognition efficiency, meets users' demands for automated and intelligent video monitoring, and effectively safeguards their security. In addition, because the video data of the target area is screened, only some of its frame images need to be recognized rather than all of them, which further improves image recognition efficiency.
Fig. 2 is a schematic flow chart of another image processing method based on image recognition according to an embodiment of the present invention, where the method according to the embodiment of the present invention can be executed by the above-mentioned monitoring device. In this embodiment, the image processing method based on image recognition includes the following steps.
S201, shooting a target area through a shooting device to obtain target video data of the target area.
In one embodiment, temperature information of a target area is acquired through a sensor, and when the temperature information of the target area indicates that a temperature value of the target area is greater than a preset temperature value, a step of shooting the target area through a shooting device to obtain target video data of the target area is executed; or receiving a shooting instruction aiming at the target area, and executing the step of shooting the target area by the shooting device to obtain target video data of the target area.
To reduce the image-processing load on the monitoring device, the monitoring device can trigger the shooting device according to parameters of the target area. Specifically, temperature information of the target area is obtained through a sensor; when it indicates that the temperature value of the target area is greater than a preset temperature value, an object with body heat, which may be a person or an animal, has entered the target area. In case the intruder is a stranger or an animal, the shooting device of the monitoring device is triggered to shoot the monitored area and obtain its video data.
Alternatively, the user may trigger the shooting device directly. Specifically, upon receiving a shooting instruction sent by the user, the shooting device of the monitoring device is triggered to shoot the monitored area and obtain its video data; the user may send the shooting instruction through touch (such as pressing a key, sliding, or clicking) or by voice.
S202, screening a target frame image from the target video data according to a preset screening rule, and acquiring a reference image of a target area, wherein the reference image is an image shot when no target object intrudes into the target area.
In one example, the preset filtering rule includes a filtering rule according to a scene change parameter, and the step S202 includes the following steps S11 to S12.
And S11, acquiring a scene change parameter of the target area according to the historical video data of the target area, wherein the scene change parameter is used for indicating the stability of the target area.
S12, acquiring a reference image of the target area according to the scene change parameters of the target area, and screening the target frame image from the target video according to the scene change parameters of the target area.
In steps S11 to S12, the monitoring device may obtain the reference image and the target frame image according to the scene change parameter of the target area, and specifically, the monitoring device may obtain historical data of the target area within a preset time period, obtain the scene change parameter of the target area according to the historical data, obtain the reference image according to the scene change parameter of the target area, and screen the target frame image from the target video data according to the scene change parameter.
Within the same time period of the day, the scene change parameters of the target area tend to be highly similar, so the monitoring device can obtain the current time, acquire the historical video data of the target area corresponding to that time, and determine the scene change parameters of the target area accordingly. For example, when the current time is 6:00 at night, the monitoring device may acquire the historical video data of the target area for the period from 6:00 to 12:00 at night and derive the scene change parameters of the target area from it.
In one embodiment, step S12 includes: when the scene change parameters indicate that the stability of the target area is greater than or equal to a preset stability value, acquiring multiple frames without target objects from the historical video data of the target area, averaging the pixel information of these frames to obtain the reference image of the target area, and selecting images from the target video at a first preset time interval, each selected frame serving as a target frame image.
When the scene change parameter indicates that the stability of the target area is greater than or equal to the preset stability value, the target area changes little: the background changes slowly, the probability of intrusion is low, and any target object that does intrude moves more slowly than the preset speed. The reference image can therefore be obtained from historical video data: the monitoring device obtains multiple frames of the target area's historical video data in which no target object appears, averages their pixel information to obtain the reference image of the target area, and then selects images from the target video at the first preset time interval, each selected frame serving as a target frame image. The first preset time interval can be set according to the historical pattern of intrusions into the target area; for example, if intrusions are likely late at night, the interval may be set to a small value, while during working hours (e.g., 9:00 to 17:00), when the probability of intrusion is small, it may be set to a large value.
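The stable-scene branch of S12 can be sketched directly: average the object-free historical frames into a reference image, then subsample the target video at a fixed interval. Function names and the frame-rate handling are illustrative assumptions.

```python
import numpy as np

def build_reference_image(frames_without_target):
    """Average the pixel values of historical frames in which no
    target object appears, yielding the reference image."""
    stack = np.stack([np.asarray(f, dtype=float) for f in frames_without_target])
    return stack.mean(axis=0)

def select_target_frames(video_frames, fps, interval_s):
    """Pick one frame per 'first preset time interval' from the
    target video."""
    step = max(1, int(round(fps * interval_s)))
    return video_frames[::step]
```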
In another embodiment, step S12 includes: when the scene change parameter indicates that the stability of the target area is smaller than a preset stability value, selecting images from the target video according to a second preset time interval, and selecting two frames of images each time; and taking a first frame image of the two frame images as a reference image of the target area, and taking a second frame image of the two frame images as the target frame image, wherein the shooting time of the first frame image is earlier than that of the second frame image.
When the scene change parameter indicates that the stability of the target area is smaller than the preset stability value, the target area changes greatly: the background changes quickly, the probability of intrusion is high, and a target object in the area moves faster than the preset speed, so historical video data cannot reflect the current characteristics of the target area. The reference image is therefore obtained from the currently shot target video: the monitoring device selects images from the target video at a second preset time interval, two frames at a time, taking the first of the two frames as the reference image of the target area and the second as the target frame image, the first frame having been shot earlier than the second.
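The unstable-scene branch can be sketched as pairwise frame selection: every "second preset time interval", take two consecutive frames, the earlier as reference and the later as target. The function name and interval handling are hypothetical.

```python
def select_frame_pairs(video_frames, fps, interval_s):
    """Every `interval_s` seconds take two consecutive frames; in each
    pair the first frame serves as the reference image and the second
    as the target frame image."""
    step = max(1, int(round(fps * interval_s)))
    return [(video_frames[i], video_frames[i + 1])
            for i in range(0, len(video_frames) - 1, step)]
```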
In one embodiment, the scene change parameter includes at least one of a background change rate of the target area, a probability of intrusion of the target object, and a moving speed of the target object in the target area, and the scene change parameter indicating that the stability of the target area is greater than or equal to a preset stability value may refer to: the background change rate of the target area is less than or equal to a preset change rate, and/or the probability of intrusion of the target object is less than or equal to a preset probability value, and/or the moving speed of the target object in the target area is less than or equal to a preset speed value; the scene change parameter indicating that the stability of the target region is smaller than the preset stability value may be: the background change rate of the target area is greater than a preset change rate, and/or the probability of intrusion of the target object is greater than a preset probability value, and/or the moving speed of the target object in the target area is greater than a preset speed value.
S203, comparing the characteristic information of the target frame image with the characteristic information of the reference image to obtain the matching degree of the target frame image and the reference image.
In one example, the target frame image and the reference image are divided into a plurality of sub-images according to a preset division rule, feature information of each sub-image in the target frame image and feature information of each sub-image in the reference image are obtained, the feature information of each sub-image in the target frame image is compared with the feature information of the corresponding sub-image in the reference image to obtain a matching degree between each sub-image in the target frame image and the corresponding sub-image in the reference image, and the determined matching degrees are subjected to weighted summation to obtain the matching degree between the target frame image and the reference image.
To improve the accuracy of the matching degree between the target frame image and the reference image, the monitoring device may divide the target frame image and the reference image into a plurality of sub-images according to a preset division rule, where the preset division rule includes a horizontal division rule and/or a vertical division rule and/or an oblique division rule. The device then obtains the feature information of each sub-image in the target frame image and of each sub-image in the reference image, and compares the feature information of each sub-image in the target frame image with that of the corresponding sub-image in the reference image to obtain the matching degree between each pair of corresponding sub-images. A weight is set for each sub-image in the target frame image, reflecting the degree to which that sub-image influences the overall matching degree between the target frame image and the reference image, and the determined matching degrees are weighted and summed according to the weight of each sub-image to obtain the matching degree between the target frame image and the reference image.
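The weighted summation described above can be sketched as follows. The function and parameter names are assumptions; the weights are normalized here so the result stays on the same scale as the per-sub-image matching degrees.

```python
def weighted_matching_degree(sub_matches, weights):
    """Combine per-sub-image matching degrees into an overall matching degree
    by weighted summation; each weight reflects how strongly its sub-image
    influences the overall result (e.g. entrance regions weigh more than
    fence regions)."""
    total_w = sum(weights)
    return sum(m * w for m, w in zip(sub_matches, weights)) / total_w
```

For example, two sub-images matching at 0.8 and 0.4 with weights 3 and 1 yield an overall matching degree of 0.7, so the heavily weighted sub-image dominates.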
For each sub-image in the target frame image, the weight may be set according to the probability of the target object appearing in the region indicated by that sub-image. For example, if the region indicated by a sub-image is the fence region of a residential community, the probability of the target object appearing there is small and the sub-image may be given a smaller weight; if the region indicated by a sub-image is the entrance region of the community, the probability of the target object appearing there is larger and the sub-image may be given a larger weight.
Alternatively, a weight is set for each sub-image of the target frame image by a logistic regression classifier, which sets the weight according to the change characteristics (i.e., the stability) of the region where the sub-image is located. Specifically, a smaller weight is set for a sub-image with fixed change characteristics (i.e., the stability of the region indicated by the sub-image is greater than or equal to the preset stability), and a larger weight is set for a sub-image without fixed change characteristics (i.e., the stability of the region indicated by the sub-image is less than the preset stability). For example, if the region a sub-image refers to is the area where a traffic signal light at an intersection is located, the traffic light has fixed change characteristics and the sub-image may be given a smaller weight; if the region a sub-image refers to is the area where the zebra crossing of the intersection is located, the pedestrian flow there usually has no fixed characteristics, and the sub-image may be given a larger weight.
In one embodiment, to improve both the accuracy and the efficiency of obtaining the matching degree between the target frame image and the reference image, the monitoring device may divide the target frame image and the reference image into a plurality of sub-images according to a preset division rule (a horizontal division rule and/or a vertical division rule and/or an oblique division rule), obtain the feature information of each sub-image in the target frame image and in the reference image, and compare the feature information of each sub-image in the target frame image with that of the corresponding sub-image in the reference image to obtain the matching degree between each pair of corresponding sub-images. The device then counts the number of sub-images whose matching degree is smaller than a preset value, and determines the matching degree between the target frame image and the reference image according to this count. For example, when the number of such sub-images is greater than or equal to a preset number threshold, it is determined that the matching degree between the target frame image and the reference image is smaller than the preset threshold; when the number is smaller than the preset number threshold, it is determined that the matching degree between the target frame image and the reference image is greater than or equal to the preset threshold.
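One consistent reading of this counting rule can be sketched as follows, under the assumption that a large number of mismatched sub-images means the target frame as a whole does not match the reference image; all names are illustrative.

```python
def frame_matches_reference(sub_matches, match_threshold, count_threshold):
    """Decide whether the target frame matches the reference image by counting
    sub-images whose matching degree falls below `match_threshold`; too many
    mismatched sub-images means the frame does not match (possible intrusion)."""
    mismatched = sum(1 for m in sub_matches if m < match_threshold)
    return mismatched < count_threshold
```

This avoids computing a weighted overall score: the decision is made from the count alone, which is why the embodiment also claims an efficiency gain.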
And S204, when the matching degree of the target frame image and the reference image is smaller than a preset threshold value, determining that a target object intrudes into the target area.
S205, acquiring a training image matched with the target frame image from a database.
S206, acquiring object information in the training images from a database, wherein the database comprises a plurality of training images and the object information in each training image.
And S207, taking the object information in the training image as the object information of the target object of the target frame image, and outputting the object information of the target object.
In steps S204 to S207, when it is determined that a target object has intruded into the target area, the monitoring device may obtain object information of the target object (i.e., a label of the target object), so that the user can take corresponding measures in time and the harm the target object may cause to the user is reduced. Specifically, a training image matching the target frame image is obtained from the database, for example, a training image identical to the target frame image or one whose similarity with the target frame image is greater than a preset similarity value; the object information in that training image is obtained from the database, taken as the object information of the target object in the target frame image, and output. When the target object is a person, the object information of the target object includes the identity information of the target object and/or record information of the target object, and so on, where the identity information includes name, place of origin, age, etc.; when the target object is an animal, the object information includes name, category, and so on.
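The database lookup in steps S205 to S207 can be sketched as follows. The patent specifies neither the database layout nor the similarity measure, so `database` is assumed to be a list of (training_image, object_info) pairs and `similarity` a caller-supplied function returning a value in [0, 1].

```python
def label_target_object(target_frame, database, sim_threshold, similarity):
    """Find the training image most similar to the target frame (at or above
    `sim_threshold`) and return its stored object information, i.e. the
    label of the intruding target object; return None if nothing matches."""
    best_info, best_sim = None, sim_threshold
    for train_img, info in database:
        s = similarity(target_frame, train_img)
        if s >= best_sim:  # keep the best candidate seen so far
            best_info, best_sim = info, s
    return best_info
```

An identical-image lookup, as mentioned in the text, corresponds to a similarity function that returns 1.0 only for equal images.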
In the embodiment of the invention, whether a target object intrudes into the target area can be automatically identified by identifying the target frame image in the video data of the target area, so that labor resources are saved, the image identification efficiency is improved, the automatic and intelligent requirements of a user on video monitoring are met, and the safety of the user can be effectively ensured; in addition, by screening the video data of the target area, only part of frame images in the video data need to be identified, and all images in the video data do not need to be identified, so that the image identification efficiency is further improved.
Fig. 3 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention, where the apparatus according to an embodiment of the present invention may be disposed in the above-mentioned monitoring device. In this embodiment, the apparatus includes:
a shooting module 301, configured to shoot a target area through a shooting device to obtain target video data of the target area.
The screening module 302 is configured to screen a target frame image from the target video data according to a preset screening rule, and acquire a reference image of the target area, where the reference image is an image captured when no target object intrudes into the target area.
A comparing module 303, configured to compare the feature information of the target frame image with the feature information of the reference image, so as to obtain a matching degree between the target frame image and the reference image.
A determining module 304, configured to determine that the target object intrudes into the target area when the matching degree between the target frame image and the reference image is smaller than a preset threshold.
Optionally, the shooting module 301 is specifically configured to acquire temperature information of the target area through a sensor; when the temperature information of the target area indicates that the temperature value of the target area is greater than a preset temperature value, the step of shooting the target area through a shooting device to obtain target video data of the target area is executed; or receiving a shooting instruction aiming at the target area, and executing the step of shooting the target area by the shooting device to obtain target video data of the target area.
Optionally, the preset screening rule includes a screening rule according to a scene change parameter; a screening module 302, configured to obtain a scene change parameter of the target area according to historical video data of the target area, where the scene change parameter is used to indicate a stability of the target area; and acquiring a reference image of the target area according to the scene change parameters of the target area, and screening the target frame image from the target video according to the scene change parameters of the target area.
Optionally, the screening module 302 is specifically configured to, when the scene change parameter indicates that the stability of the target region is greater than or equal to a preset stability value, obtain a multi-frame image in which the target object does not exist in the historical video data of the target region; carrying out averaging processing on the pixel information of the multi-frame image to obtain a reference image of the target area; and selecting images from the target video according to a first preset time interval, and taking the selected images as the target frame images one frame at a time.
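The averaging step performed by the screening module can be sketched as follows; a minimal pure-Python illustration in which frames are 2-D lists of grayscale pixel values of equal shape (the function name and data representation are assumptions, not the patent's implementation).

```python
def build_reference_image(frames):
    """For a stable scene, average the pixel information of multiple frames
    (each free of the target object) into a single reference image, which
    suppresses transient noise in any individual frame."""
    h, w = len(frames[0]), len(frames[0][0])
    n = len(frames)
    return [[sum(f[y][x] for f in frames) / n for x in range(w)]
            for y in range(h)]
```

In practice the same per-pixel mean would be computed over full-resolution frames (e.g. with a single vectorized array operation).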
Optionally, the screening module 302 is specifically configured to select an image from the target video according to a second preset time interval when the scene change parameter indicates that the stability of the target region is smaller than a preset stability value, and select two frames of images each time; and taking a first frame image of the two frame images as a reference image of the target area, and taking a second frame image of the two frame images as the target frame image, wherein the shooting time of the first frame image is earlier than that of the second frame image.
Optionally, the comparison module 303 is specifically configured to divide the target frame image and the reference image into a plurality of sub-images according to a preset division rule; acquiring the characteristic information of each sub-image in the target frame image and the characteristic information of each sub-image in the reference image; comparing the characteristic information of each sub-image in the target frame image with the characteristic information of the corresponding sub-image in the reference image to obtain the matching degree between each sub-image in the target frame image and the corresponding sub-image in the reference image; and carrying out weighted summation on the determined matching degree to obtain the matching degree between the target frame image and the reference image.
Optionally, the obtaining module 305 is configured to obtain a training image matched with the target frame image from a database, and obtain object information in the training image from the database, where the database includes multiple training images and object information in each training image.
Optionally, the output module 306 is configured to use the object information in the training image as the object information of the target object in the target frame image; and outputting the object information of the target object.
In the embodiment of the invention, whether a target object intrudes into the target area can be automatically identified by identifying the target frame image in the video data of the target area, so that labor resources are saved, the image identification efficiency is improved, the automatic and intelligent requirements of a user on video monitoring are met, and the safety of the user can be effectively ensured; in addition, by screening the video data of the target area, only part of frame images in the video data need to be identified, and all images in the video data do not need to be identified, so that the image identification efficiency is further improved.
Referring to fig. 4, a schematic structural diagram of a monitoring device according to an embodiment of the present invention is shown, where the monitoring device according to the embodiment of the present invention includes: one or more processors 401; one or more input devices 402, one or more output devices 403, and memory 404. The processor 401, the input device 402, the output device 403, and the memory 404 are connected by a bus 405.
The Processor 401 may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The input device 402 may include a touch pad, a fingerprint sensor (for collecting fingerprint information of a user and direction information of a fingerprint), a temperature sensor (for acquiring temperature information of a target area), a photographing device (for acquiring video data of a target area), a microphone, etc., the output device 403 may include a display (LCD, etc.), a speaker, etc., and the output device 403 may output object information of a target object.
The memory 404 may include a read-only memory and a random access memory, and provides instructions and data to the processor 401. A portion of the memory 404 may also include a non-volatile random access memory, the memory 404 for storing a computer program comprising program instructions, the processor 401 for executing the program instructions stored by the memory 404 for performing an image processing method based on image recognition, i.e. for performing the following operations:
shooting a target area through a shooting device to obtain target video data of the target area;
screening a target frame image from the target video data according to a preset screening rule, and acquiring a reference image of the target area, wherein the reference image is an image shot when no target object intrudes into the target area;
comparing the characteristic information of the target frame image with the characteristic information of the reference image to obtain the matching degree between the target frame image and the reference image;
and when the matching degree of the target frame image and the reference image is smaller than a preset threshold value, determining that the target object intrudes into the target area.
Optionally, the processor 401 is configured to execute the program instructions stored in the memory 404, and is configured to perform the following operations:
acquiring temperature information of the target area through a sensor; when the temperature information of the target area indicates that the temperature value of the target area is greater than a preset temperature value, the step of shooting the target area through a shooting device to obtain target video data of the target area is executed; or,
and receiving a shooting instruction aiming at the target area, and executing the step of shooting the target area through a shooting device to obtain target video data of the target area.
Optionally, the processor 401 is configured to execute the program instructions stored in the memory 404, and is configured to perform the following operations:
acquiring scene change parameters of the target area according to historical video data of the target area, wherein the scene change parameters are used for indicating the stability of the target area;
and acquiring a reference image of the target area according to the scene change parameters of the target area, and screening the target frame image from the target video according to the scene change parameters of the target area.
Optionally, the processor 401 is configured to execute the program instructions stored in the memory 404, and is configured to perform the following operations:
when the scene change parameter indicates that the stability of the target area is greater than or equal to a preset stability value, acquiring a multi-frame image without the target object in the historical video data of the target area;
carrying out averaging processing on the pixel information of the multi-frame image to obtain a reference image of the target area;
and selecting images from the target video according to a first preset time interval, and taking the selected images as the target frame images one frame at a time.
Optionally, the processor 401 is configured to execute the program instructions stored in the memory 404, and is configured to perform the following operations:
when the scene change parameter indicates that the stability of the target area is smaller than a preset stability value, selecting images from the target video according to a second preset time interval, and selecting two frames of images each time;
and taking a first frame image of the two frame images as a reference image of the target area, and taking a second frame image of the two frame images as the target frame image, wherein the shooting time of the first frame image is earlier than that of the second frame image.
Optionally, the processor 401 is configured to execute the program instructions stored in the memory 404, and is configured to perform the following operations:
dividing the target frame image and the reference image into a plurality of sub-images according to a preset division rule;
acquiring the characteristic information of each sub-image in the target frame image and the characteristic information of each sub-image in the reference image;
comparing the characteristic information of each sub-image in the target frame image with the characteristic information of the corresponding sub-image in the reference image to obtain the matching degree between each sub-image in the target frame image and the corresponding sub-image in the reference image;
and carrying out weighted summation on the determined matching degree to obtain the matching degree between the target frame image and the reference image.
Optionally, the processor 401 is configured to execute the program instructions stored in the memory 404, and is configured to perform the following operations:
acquiring a training image matched with the target frame image from a database;
acquiring object information in the training images from the database, wherein the database comprises a plurality of training images and object information in each training image;
taking the object information in the training image as the object information of the target object in the target frame image;
and outputting the object information of the target object.
In the embodiment of the invention, whether a target object intrudes into the target area can be automatically identified by identifying the target frame image in the video data of the target area, so that labor resources are saved, the image identification efficiency is improved, the automatic and intelligent requirements of a user on video monitoring are met, and the safety of the user can be effectively ensured; in addition, by screening the video data of the target area, only part of frame images in the video data need to be identified, and all images in the video data do not need to be identified, so that the image identification efficiency is further improved.
The processor 401, the input device 402, and the output device 403 described in this embodiment of the present invention may execute the implementation manners described in the first embodiment and the second embodiment of the image processing method based on image recognition provided in this embodiment of the present invention, and may also execute the implementation manner of the monitoring device described in this embodiment of the present invention, which is not described herein again.
A computer-readable storage medium is further provided in the embodiments of the present invention, and the computer-readable storage medium stores a computer program, where the computer program includes program instructions, and the program instructions, when executed by a processor, implement the image processing method based on image recognition shown in the embodiments of fig. 1 and fig. 2 of the present invention.
The computer-readable storage medium may be an internal storage unit of the monitoring device according to any of the foregoing embodiments, for example, a hard disk or a memory of the monitoring device. The computer-readable storage medium may also be an external storage device of the monitoring device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the monitoring device. Further, the computer-readable storage medium may include both an internal storage unit and an external storage device of the monitoring device. The computer-readable storage medium is used to store the computer program and other programs and data required by the monitoring device, and may also be used to temporarily store data that has been output or is to be output.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of the two; to clearly illustrate the interchangeability of hardware and software, the components and steps of the examples have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementations should not be considered beyond the scope of the present invention. It is also clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the device and units described above may refer to the corresponding processes in the foregoing method embodiments and are not described here again.
In the several embodiments provided in the present application, it should be understood that the disclosed control device and method may be implemented in other ways. For example, the above-described apparatus embodiments are illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Application CN201811186563.9A, filed 2018-10-11: Image processing method, device and equipment based on image recognition and storage medium. Status: Active; granted as CN109166261B.




Citations (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN101853504A (en) * 2010-05-07 2010-10-06 厦门大学 Image Quality Evaluation Method Based on Visual Features and Structural Similarity
EP2378486A1 (en) * 2006-02-07 2011-10-19 QUALCOMM Incorporated Multi-mode region-of-interest video object segmentation
CN103455812A (en) * 2012-06-01 2013-12-18 株式会社理光 Target recognition system and target recognition method
CN103871186A (en) * 2012-12-17 2014-06-18 博立码杰通讯(深圳)有限公司 Security and protection monitoring system and corresponding warning triggering method
CN105844671A (en) * 2016-04-12 2016-08-10 河北大学 Rapid background subtraction method under changing illumination conditions
CN106525968A (en) * 2016-10-19 2017-03-22 中国人民解放军空军勤务学院 Damage probability imaging and positioning method based on subareas

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN102404875B (en) * 2011-11-01 2014-08-27 北京航空航天大学 Distributed type intelligent wireless image sensor network node equipment
CN104581054A (en) * 2014-12-22 2015-04-29 深圳供电局有限公司 Power transmission line inspection method and system based on video
EP3270351B1 (en) * 2015-03-11 2023-03-01 FUJI Corporation Component type automatic distinguishing method and component type automatic distinguishing system
CN105512633A (en) * 2015-12-11 2016-04-20 谭焕玲 Power system dangerous object identification method and apparatus
CN106297130A (en) * 2016-08-22 2017-01-04 国家电网公司 Transmission line of electricity video analysis early warning system
CN106454282A (en) * 2016-12-09 2017-02-22 南京创维信息技术研究院有限公司 Security and protection monitoring method, apparatus and system
CN108540762A (en) * 2017-03-01 2018-09-14 武汉摩菲智能科技有限公司 Artificial intelligence security protection camera system and safety protection method


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Zhu Mingzao: "Detection, Recognition and Tracking of Moving Vehicles", China Master's Theses Full-text Database, Engineering Science and Technology II, 15 May 2006 (2006-05-15) *
Yang Qiang et al.: "A Selective Background Updating Method for Vehicle Detection", Journal of Hefei University of Technology (Natural Science), 30 April 2011 (2011-04-30) *
Yang Lei et al.: "Network Video Surveillance Technology", 30 June 2017, pages 165-166 *
Xie Jianbin et al.: "Visual Perception and Intelligent Video Surveillance", 31 March 2012, page 165 *
Huang Xiaofei et al.: "Application of Very Early Warning Technology for Urban Disasters", 30 September 2018, pages 135-136 *

Cited By (35)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN109889695A (en) * 2019-02-27 2019-06-14 努比亚技术有限公司 A kind of image-region determines method, terminal and computer readable storage medium
CN111263955A (en) * 2019-02-28 2020-06-09 深圳市大疆创新科技有限公司 Method and device for determining movement track of target object
CN110617873A (en) * 2019-04-26 2019-12-27 深圳市豪视智能科技有限公司 Method for detecting vibration of cable and related product
CN110223366A (en) * 2019-04-28 2019-09-10 深圳传音控股股份有限公司 Image processing method, picture processing unit and readable storage medium storing program for executing
CN110225299A (en) * 2019-05-06 2019-09-10 平安科技(深圳)有限公司 Video monitoring method, device, computer equipment and storage medium
CN110225299B (en) * 2019-05-06 2022-03-04 平安科技(深圳)有限公司 Video monitoring method and device, computer equipment and storage medium
CN112347808A (en) * 2019-08-07 2021-02-09 中国电信股份有限公司 Method, device and system for identifying characteristic behavior of target object
CN112422601B (en) * 2019-08-23 2022-06-10 阿里巴巴集团控股有限公司 Data processing method and device and electronic equipment
CN112422601A (en) * 2019-08-23 2021-02-26 阿里巴巴集团控股有限公司 Data processing method and device and electronic equipment
WO2021043092A1 (en) * 2019-09-02 2021-03-11 平安科技(深圳)有限公司 Image semantic matching method and device, terminal and computer readable storage medium
CN111027376A (en) * 2019-10-28 2020-04-17 中国科学院上海微系统与信息技术研究所 Method and device for determining event map, electronic equipment and storage medium
CN111191498A (en) * 2019-11-07 2020-05-22 腾讯科技(深圳)有限公司 Behavior recognition method and related product
CN110991550B (en) * 2019-12-13 2023-10-17 歌尔科技有限公司 Video monitoring method and device, electronic equipment and storage medium
CN110991550A (en) * 2019-12-13 2020-04-10 歌尔科技有限公司 Video monitoring method and device, electronic equipment and storage medium
CN111047458A (en) * 2019-12-17 2020-04-21 江苏恒宝智能系统技术有限公司 Farmland monitoring method
CN111160240B (en) * 2019-12-27 2024-05-24 腾讯科技(深圳)有限公司 Image object recognition processing method and device, intelligent device and storage medium
CN111160240A (en) * 2019-12-27 2020-05-15 腾讯科技(深圳)有限公司 Image object recognition processing method, device, intelligent device, and storage medium
CN111523608B (en) * 2020-04-30 2023-04-18 上海顺久电子科技有限公司 Image processing method and device
CN111523608A (en) * 2020-04-30 2020-08-11 上海顺久电子科技有限公司 Image processing method and device
CN114095731A (en) * 2020-07-31 2022-02-25 上海际链网络科技有限公司 Image transmission, target recognition method and device, storage medium, terminal, server
CN112189496A (en) * 2020-09-15 2021-01-08 津市市毛里湖镇自强柑橘农民专业合作社 Crop planting method
CN112967467A (en) * 2021-02-24 2021-06-15 九江学院 Cultural relic anti-theft method, system, mobile terminal and storage medium
CN113177481B (en) * 2021-04-29 2023-09-29 北京百度网讯科技有限公司 Target detection method, target detection device, electronic equipment and storage medium
CN113177481A (en) * 2021-04-29 2021-07-27 北京百度网讯科技有限公司 Target detection method and device, electronic equipment and storage medium
CN113076935B (en) * 2021-04-29 2024-06-11 平安国际智慧城市科技股份有限公司 Supervision method, device, server and medium based on image recognition
CN113076935A (en) * 2021-04-29 2021-07-06 平安国际智慧城市科技股份有限公司 Supervision method based on image recognition, related equipment and storage medium
CN113239224A (en) * 2021-05-14 2021-08-10 百度在线网络技术(北京)有限公司 Abnormal document identification method, device, equipment and storage medium
CN114155508B (en) * 2021-12-08 2024-04-05 北京百度网讯科技有限公司 Road change detection method, device, equipment and storage medium
CN114155508A (en) * 2021-12-08 2022-03-08 北京百度网讯科技有限公司 A road change detection method, device, device and storage medium
CN115512268A (en) * 2022-09-28 2022-12-23 北京爱笔科技有限公司 Method, device, equipment and storage medium for determining scene state
CN116543330A (en) * 2023-04-13 2023-08-04 北京京东乾石科技有限公司 Crop information storage method, device, electronic device and computer readable medium
CN116503815B (en) * 2023-06-21 2024-01-30 宝德计算机系统股份有限公司 Big data-based computer vision processing system
CN116503815A (en) * 2023-06-21 2023-07-28 宝德计算机系统股份有限公司 Big data-based computer vision processing system
CN117936087A (en) * 2023-12-25 2024-04-26 甘肃省畜牧兽医研究所 Intelligent monitoring method and system for bovine nodular skin disease
CN119479127A (en) * 2025-01-09 2025-02-18 山东力拓智能科技有限公司 A method for identifying and warning abnormal personnel in smart buildings

Also Published As

Publication number | Publication date
CN109166261B (en) | 2022-06-07
WO2020073505A1 (en) | 2020-04-16

Similar Documents

Publication | Publication Date | Title
CN109166261B (en) Image processing method, device and equipment based on image recognition and storage medium
KR101825045B1 (en) Alarm method and device
CN109040709B (en) Video monitoring method and device, monitoring server and video monitoring system
CN109766779B (en) Loitering person identification method and related product
US20190295393A1 (en) Image capturing apparatus with variable event detecting condition
WO2018223955A1 (en) Target monitoring method, target monitoring device, camera and computer readable medium
US10657783B2 (en) Video surveillance method based on object detection and system thereof
CN106682620A (en) Human face image acquisition method and device
US10373015B2 (en) System and method of detecting moving objects
US9615063B2 (en) Method and apparatus for visual monitoring
CN109815839B (en) Loitering person identification method under micro-service architecture and related product
CN111401239B (en) Video analysis method, device, system, equipment and storage medium
JP2006011728A (en) Suspicious person countermeasure system and suspicious person detection device
KR102127276B1 (en) The System and Method for Panoramic Video Surveillance with Multiple High-Resolution Video Cameras
KR102297575B1 (en) Intelligent video surveillance system and method
CN111354024A (en) Behavior prediction method for key target, AI server and storage medium
CN111444758A (en) Pedestrian re-identification method and device based on spatio-temporal information
CN110569770A (en) Human body intrusion behavior recognition method and device, storage medium and electronic equipment
CN109960969B (en) Method, device and system for generating moving route
CN110267011B (en) Image processing method, image processing apparatus, server, and storage medium
CN110705469A (en) Face matching method and device and server
CN109800664B (en) Method and device for determining passersby track
CN110505438B (en) Queuing data acquisition method and camera
CN113947103A (en) High-altitude parabolic model updating method, high-altitude parabolic detection system and storage medium
CN108875477B (en) Exposure control method, device and system and storage medium

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
