CN113469201A - Image acquisition equipment offset detection method, image matching method, system and equipment - Google Patents

Image acquisition equipment offset detection method, image matching method, system and equipment

Info

Publication number
CN113469201A
Authority
CN
China
Prior art keywords
image
feature points
image frame
matching
matched
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010243946.6A
Other languages
Chinese (zh)
Inventor
何墨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN202010243946.6A
Publication of CN113469201A
Legal status: Pending

Abstract


The invention discloses an image acquisition device offset detection method, an image matching method, a system, and a device. The offset detection method works on image frames obtained by the image acquisition device and sets a detection area on them; within the detection area, a preset number of feature points is generated for a reference image frame and for the current image frame; after the generated feature points are matched, the number of matched feature points and the line distances between the matched points are determined, and finally it is judged whether the device has shifted. Because the preset number of feature points is generated only inside the detection area, rather than across both the foreground and the background of the image, inaccurate feature point matching caused by the movement of objects in other image areas (as opposed to offset of the device itself) is eliminated, and whether the image acquisition device has shifted is judged comprehensively from the number of matched feature points and the line distances between them.


Description

Image acquisition equipment offset detection method, image matching method, system and equipment
Technical Field
The invention relates to the field of image processing, and in particular to an image acquisition device offset detection method, an image matching method, and a corresponding system and device.
Background
Image matching, which checks whether two images share the same visual content, is a basic technique used in applications such as object recognition, motion tracking, and computer vision. A rotated image, or images taken from different perspectives, may contain the same content, scene, or object as the original image despite the variation between the two, so various image matching techniques can be used to match them efficiently. Furthermore, an image can be divided into a foreground and a background according to its subject: in a portrait, for example, the foreground is the person and everything else is the background.
Image matching technology is also applied in the field of intelligent transportation. The image acquisition equipment (pan-tilt-zoom cameras) of an expressway or an intersection often shifts due to human control or natural causes, so that event detection that depends on manually marked features produces errors and even false alarms. For example, if the lane lines and sidewalks shot by a pan-tilt camera at an intersection are taken as manually marked features for detecting whether vehicles or pedestrians violate traffic rules, a shift of the pan-tilt camera frequently causes detection errors and false alarms.
In a real scene shot by an existing pan-tilt camera, most objects in the picture (vehicles travelling at high speed) belong to the foreground rather than the background. When matching such images, the existing method detects feature points over all areas of the image and then matches the images based on the detected points; because the foreground contains many features that change greatly, the result is easily misjudged as a camera shift even when the pan-tilt camera has not actually shifted.
Disclosure of Invention
In view of the technical drawbacks and disadvantages of the prior art, embodiments of the present invention provide an image capturing device offset detection method, an image matching method, a system and a device that overcome the above problems or at least partially solve the above problems.
As an aspect of an embodiment of the present invention, a method for detecting an offset of an image capturing apparatus is provided, which may include:
determining an image frame acquired by an image acquisition device;
setting a detection area for the image frame; selecting a reference image frame and a current image frame from the image frames;
respectively generating a preset number of feature points for the reference image frame and the current image frame in the detection area;
carrying out feature point matching on the feature points of the reference image frame and the feature points of the current image frame, and determining the number of the matched feature points and the connecting line distance between the matched feature points;
and judging whether the image acquisition equipment deviates or not according to the number of the matched characteristic points and the connecting line distance between the matched characteristic points.
As a second aspect of the embodiments of the present invention, an image matching method may include:
setting corresponding detection areas on the first image and the second image;
respectively generating a preset number of feature points for the first image and the second image in the detection area;
carrying out feature point matching on the feature points of the first image and the feature points of the second image;
and determining the matching relation between the first image and the second image according to the matching result of the feature point matching.
As a third aspect of the embodiments of the present invention, another image capturing device offset detection method is provided, which may include:
reading a reference image frame and a current image frame from a video stream photographed by an image pickup device;
setting corresponding detection areas in background areas of the reference image frame and the current image frame;
respectively generating a preset number of feature points for the reference image frame and the current image frame in the detection area;
carrying out feature point matching on the feature points of the reference image frame and the feature points of the current image frame;
and judging whether the image acquisition equipment deviates or not according to the matching result of the feature point matching.
As a fourth aspect of the embodiments of the present invention, an offset detection apparatus for an image capturing device is provided, which may include:
a determining module for determining an image frame acquired by an image acquisition device;
the setting module is used for setting a detection area for the image frame;
the selecting module is used for selecting a reference image frame and a current image frame from the image frames;
a generating module, configured to generate a preset number of feature points for the reference image frame and the current image frame in the detection area, respectively;
the matching module is used for matching the characteristic points of the reference image frame with the characteristic points of the current image frame to determine the number of the matched characteristic points and the connection line distance between the matched characteristic points;
and the judging module is used for judging whether the image acquisition equipment deviates or not according to the number of the matched characteristic points and the connecting line distance between the matched characteristic points.
As a fifth aspect of the embodiments of the present invention, there is provided an image matching apparatus, which may include:
the detection area setting module is used for setting corresponding detection areas on the first image and the second image;
a feature point generation module, configured to generate a preset number of feature points for the first image and the second image in the detection area respectively;
the characteristic point matching module is used for matching the characteristic points of the first image with the characteristic points of the second image;
and the matching relation determining module is used for determining the matching relation between the first image and the second image according to the matching result of the feature point matching.
As a sixth aspect of the embodiments of the present invention, there is provided another image capturing device shift detection apparatus, including:
the image frame reading module is used for reading a reference image frame and a current image frame from a video stream shot by the image acquisition equipment;
the region setting module is used for setting corresponding detection regions in the background regions of the reference image frame and the current image frame;
a feature point generation module, configured to generate a preset number of feature points for the reference image frame and the current image frame in the detection area respectively;
the characteristic point matching module is used for matching the characteristic points of the reference image frame with the characteristic points of the current image frame;
and the judging module is used for judging whether the image acquisition equipment deviates or not according to the matching result of the characteristic point matching.
As a seventh aspect of the embodiments of the present invention, there is provided an image processing system including: an image acquisition device and a server;
the image acquisition equipment is used for acquiring image data in real time and sending the image data to the server;
the server comprises the device.
As an eighth aspect of the embodiments of the present invention, it relates to a computer-readable storage medium on which a computer program is stored, which when executed by a processor, implements the above-described method.
As a ninth aspect of the embodiments of the present invention, it relates to a computer apparatus comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the above method when executing the program.
The embodiment of the invention at least realizes the following technical effects:
the embodiment of the invention reads image frames from the video stream shot by the image acquisition equipment and sets a detection area for the image frames; it then generates a preset number of feature points for a reference image frame and a current image frame in the detection area; finally, it judges whether the image acquisition equipment has shifted according to the number of matched feature points and the line distances between them. Because the preset number of feature points is generated only for the detection area, generating feature points across both the foreground and background of the image is avoided, which eliminates inaccurate feature point matching caused by the movement of objects in other image areas (as opposed to offset of the device). Judging the offset comprehensively from the number of matched feature points and the line distances between them greatly reduces the need for manual judgment.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
fig. 1 is a flowchart of an image capturing device offset detection method according to embodiment 1 of the present invention;
fig. 2A is a schematic diagram of an image frame read by a video stream according to embodiment 1 of the present invention;
FIG. 2B is a schematic diagram of the detection area only including the background image set in FIG. 2A;
fig. 3 is a flowchart of a specific offset detection of an image capturing device according to embodiment 1 of the present invention;
FIG. 4A is a schematic diagram of feature points generated in the detection region 1 and the detection region 2 in FIG. 2B;
FIG. 4B is a schematic diagram of feature points generated in the detection region 1 and the detection region 2 in FIG. 2B after the image capturing device is shifted;
fig. 5 is a schematic structural diagram of an offset detection apparatus of an image capturing device according to embodiment 1 of the present invention;
fig. 6 is a schematic structural diagram of another offset detection apparatus for image capturing devices according to embodiment 1 of the present invention;
fig. 7 is a flowchart of an image matching method according to embodiment 2 of the present invention;
fig. 8 is a flowchart of an image capturing device offset detection method according to embodiment 2 of the present invention;
fig. 9 is a schematic structural diagram of an image matching apparatus provided in embodiment 2 of the present invention;
fig. 10 is a schematic structural diagram of an offset detection apparatus of an image capturing device according to embodiment 2 of the present invention;
fig. 11 is a schematic structural diagram of an image processing system according to embodiment 3 of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
Example 1
An embodiment 1 of the present invention provides an offset detection method for an image capture device, which is shown in fig. 1 and includes the following steps:
step S11, determining image frames acquired by the image acquisition device.
Step S12 sets a detection area for the image frame.
Step S13, selecting a reference image frame and a current image frame from the image frames.
Step S14, generating a preset number of feature points for the reference image frame and the current image frame in the detection area, respectively.
And step S15, performing feature point matching on the feature points of the reference image frame and the feature points of the current image frame, and determining the number of matched feature points and the connecting line distance between the matched feature points.
And step S16, judging whether the image acquisition equipment is deviated or not according to the number of the matched characteristic points and the connecting line distance between the matched characteristic points.
The embodiment of the invention generates a preset number of feature points for the detection area, avoiding the generation of feature points in both the foreground and background of the image; this eliminates inaccurate feature point matching caused by the movement of objects themselves in other image areas (as opposed to offset of the image acquisition equipment), and whether the equipment has shifted is judged comprehensively from the number of matched feature points and the line distances between them.
The above steps of the embodiments of the present invention are described in detail as follows:
the embodiment of the invention can be used for detecting and judging common image acquisition equipment (a pan-tilt camera, a camera and the like) in the field of intelligent transportation, and the pan-tilt camera or the camera often deviates due to human factors (such as artificial control rotation, artificial damage and the like) or natural reasons (such as wind blowing, vibration and the like), namely the original shooting direction deflects or translates, so that the characteristic time detection marked during the original shooting generates errors and even false alarms. In the prior art, manual on-site detection is poor in efficiency and low in precision. Certainly, other application scenes for the application of the image acquisition device in the embodiment of the present invention may also be other multiple application scenes for video monitoring and video analysis, for example, video monitoring of transportation hubs such as road traffic monitoring, subways, light rails, high-speed rail stations, bus stops, dockside ports, and logistics stations, and various application scenes for large-scale activities, concerts, industrial production, agricultural cultivation, and safety monitoring, and the analyzed image content may also include other objects such as road traffic motor vehicles, logistics vehicles, ships, aircrafts, and electric vehicles, in addition to support vehicles, staff, and high-speed vehicles of parking parks.
In step S11, the image frame obtained by the image capturing device is determined, and the embodiment of the present invention may be applied to a scene in which the shooting angle of the pan-tilt image capturing device or the camera is detected in real time, and the image frame is extracted from the video stream obtained in real time by using the existing video reading technology and video frame processing technology. It should be noted that before this step is performed, parameter information of the image capturing device, such as spatial coordinates of the image capturing device, a shooting angle of the image capturing device, a size of an image captured by the image capturing device, a resolution of the image captured by the image capturing device, and the like, needs to be read. The video stream address or the local address of the video can be deduced from the parameter information of the image capturing device, and then an image frame is read for the acquired video stream, as shown in fig. 2A, the image frame is read from the video stream on the road captured by the image capturing device, wherein the building, the lawn, the isolation belt and the road surface are background images and are generally still; vehicles, pedestrians, etc. are foreground images and are kept moving most of the time.
In step S12, a detection area is set for the image frame. In the prior art, the whole image is used when matching, so the movement of foreground objects easily makes the matching inaccurate and causes it to be misjudged as an unsuccessful match, leading to matching errors. Setting a dedicated detection area improves the matching accuracy.
Preferably, in this step, a detection area including only the background image may be set for the image frame, and the inventors of the present application have found that: the detection area is set on the unmoved background image, and the image matching is carried out by only using the detection area with the same position and angle (namely, the detection area is not changed) each time, so that the misjudgment caused by the movement of the foreground object can be avoided.
In this step, the detection region refers to a region for detecting a feature point in an image. The feature point refers to a point where the image gray value changes drastically or a point where the curvature is large on the image edge (i.e., the intersection of two edges). The image feature points can reflect the essential features of the image, can identify the target object in the image, and can complete the matching of the image through the matching of the feature points.
In the embodiment of the present invention, the detection area may be adjusted according to the actual situation of the background region: an object that does not move for a long time can be recognized as background through image recognition technology, and a detection area containing only the background image is then set based on the size, range, and so on of the region where that background lies. The number of detection areas may be one or several, and a detection area is generally a polygonal region. Fig. 2B shows four detection areas set for the image in fig. 2A. After the detection areas are set, their coordinates must be recorded so that the same areas can be applied to every frame in the video stream; keeping the detection areas of all inspected images to the same standard significantly improves the success rate of image matching.
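Since the detection areas are recorded as polygon coordinates, candidate feature points can in practice be restricted to them with a point-in-polygon test. A minimal pure-Python sketch (the ray-casting test and all sample coordinates are illustrative assumptions, not taken from the patent):

```python
# Sketch: restrict candidate feature points to polygonal detection regions.
# Ray casting: count how many polygon edges a horizontal ray crosses.

def point_in_polygon(x, y, polygon):
    """Return True if (x, y) lies inside polygon (a list of (px, py) vertices)."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does the horizontal ray from (x, y) cross edge (x1, y1)-(x2, y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def filter_points_to_regions(points, regions):
    """Keep only points that fall inside at least one detection region."""
    return [p for p in points
            if any(point_in_polygon(p[0], p[1], r) for r in regions)]

# Two rectangular detection regions (e.g. a lawn and an isolation belt).
regions = [[(0, 0), (10, 0), (10, 10), (0, 10)],
           [(20, 20), (30, 20), (30, 30), (20, 30)]]
points = [(5, 5), (15, 15), (25, 25)]
print(filter_points_to_regions(points, regions))  # [(5, 5), (25, 25)]
```

Recording the polygons once and reusing them for every frame mirrors the requirement above that all detected images share the same detection-area standard.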
The above step S13 selects the reference image frame and the current image frame from the image frames. The embodiment of the invention can be applied to a scene for detecting the shooting angle of image acquisition equipment (a pan-tilt camera or a camera) in real time, and the image frames of the video stream acquired in real time are extracted by the existing video reading technology and video frame processing technology. Then, a reference image frame and a current image frame are selected from the image frames as matched images.
The reference image frame may be an image shot earlier in the video stream by the image capturing device, or an image shot in advance by the device that contains only the background. The current image frame serves as the frame to be compared. It should be noted that the current image frame in the embodiment of the present invention is not a single fixed image but one that is continually updated: after each execution of the judgment steps below, the current image frame is refreshed, which ensures real-time detection and real-time judgment.
It should be noted that, in the embodiment of the present invention, steps S12 and S13 need not be executed in a fixed order: step S12 may be executed before step S13, step S13 before step S12, or the two may be executed simultaneously, which is not limited by the embodiment of the present invention.
The above step S14 is to generate a preset number of feature points for the reference image frame and the current image frame, respectively, in the detection area.
In the embodiment of the invention, the same feature point detection method is used to generate a preset number of feature points for the reference image frame and the current image frame in the detection area. Generating the same preset number of feature points for both frames guarantees identical matching standards, which significantly improves the precision of judging whether the image acquisition equipment has shifted.
The feature point detection method used in the embodiment of the present invention includes, but is not limited to, the following: the ORB algorithm, the SIFT (Scale-Invariant Feature Transform) algorithm, the SURF algorithm, the Harris corner detection algorithm, and the RANSAC algorithm.
The ORB (Oriented FAST and Rotated BRIEF) algorithm is an algorithm for fast feature point extraction and description, divided into feature point extraction and feature point description. Its feature extraction is developed from the FAST (Features from Accelerated Segment Test) algorithm, and its feature point description is improved from the BRIEF (Binary Robust Independent Elementary Features) descriptor. ORB combines the FAST feature point detection method with BRIEF descriptors and improves and optimizes both; its greatest characteristic is computational speed. FAST first detects feature points and gives them an orientation, and a rotation-invariant BRIEF descriptor is then computed; the descriptor's binary-string representation not only saves storage space but also greatly shortens matching time. Since FAST and BRIEF are both very fast to compute, ORB holds a clear performance advantage over conventional feature point detection methods such as SIFT.
The Scale-Invariant Feature Transform (SIFT) algorithm is a local feature descriptor whose defining property is scale invariance; it can detect key points in an image regardless of the image's size and rotation. SIFT features are based on locally distinctive points of interest on the object.
The SURF (Speeded-Up Robust Features) algorithm is an improvement on SIFT whose main characteristic is speed. For a grayscale image, the value of any point (x, y) in the integral image is the sum of the gray values of all pixels in the rectangle spanned from the top-left corner of the image to that point. The integral image is the key to reducing the computation of the SURF algorithm, and the performance improvement from SIFT to SURF is largely attributable to its use.
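The integral image just described can be sketched directly: one pass builds the table, after which the gray-value sum over any axis-aligned rectangle costs four lookups regardless of its size. A minimal pure-Python illustration (the padding convention and function names are assumptions):

```python
# Integral image (summed-area table), the structure behind SURF's speed.

def integral_image(img):
    """ii[r][c] = sum of img[0..r-1][0..c-1]; padded with a zero row/column."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for r in range(h):
        row_sum = 0
        for c in range(w):
            row_sum += img[r][c]
            ii[r + 1][c + 1] = ii[r][c + 1] + row_sum
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0..r1][c0..c1] in O(1) via four corner lookups."""
    return ii[r1 + 1][c1 + 1] - ii[r0][c1 + 1] - ii[r1 + 1][c0] + ii[r0][c0]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
print(box_sum(ii, 0, 0, 2, 2))  # 45 (the whole image)
print(box_sum(ii, 1, 1, 2, 2))  # 28 (5 + 6 + 8 + 9)
```

SURF evaluates many box filters of different sizes per pixel, so replacing each box sum with four lookups is what makes the algorithm fast.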
The Harris corner detection principle uses a moving window to compute the gray-level change in the image; the key steps are converting the image to grayscale, computing difference images, Gaussian smoothing, computing local extrema, and confirming corners.
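The moving-window principle can be illustrated with a simplified Harris response, where central-difference gradients stand in for the difference images and a plain 3x3 box window replaces Gaussian smoothing (both simplifications, like the sample image, are assumptions made for brevity):

```python
# Simplified Harris corner response: R = det(M) - k * trace(M)^2,
# where M accumulates gradient products over a window. Large positive R
# indicates a corner; near-zero R indicates a flat region.

def harris_response(img, k=0.04):
    h, w = len(img), len(img[0])
    ix = [[0.0] * w for _ in range(h)]
    iy = [[0.0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            ix[r][c] = (img[r][c + 1] - img[r][c - 1]) / 2.0  # horizontal gradient
            iy[r][c] = (img[r + 1][c] - img[r - 1][c]) / 2.0  # vertical gradient
    resp = [[0.0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            sxx = syy = sxy = 0.0
            for dr in (-1, 0, 1):          # 3x3 box window (stand-in for Gaussian)
                for dc in (-1, 0, 1):
                    gx, gy = ix[r + dr][c + dc], iy[r + dr][c + dc]
                    sxx += gx * gx
                    syy += gy * gy
                    sxy += gx * gy
            resp[r][c] = sxx * syy - sxy * sxy - k * (sxx + syy) ** 2
    return resp

# Bright square whose lower-right corner sits at (4, 4).
img = [[255 if r <= 4 and c <= 4 else 0 for c in range(10)] for r in range(10)]
resp = harris_response(img)
print(resp[4][4] > 0)  # True: strong response at the corner; flat areas score 0.0
```

The final "confirm corners" step would keep local extrema of this response above a threshold.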
The RANSAC (Random Sample Consensus) algorithm computes the parameters of a mathematical model from a set of sample data containing abnormal data, so as to obtain valid sample data. Its basic assumption is that the samples contain correct data (inliers, data that the model can describe) as well as abnormal data (outliers, data far from the normal range that cannot fit the model), i.e., the data set contains noise. These outliers may come from erroneous measurements, erroneous assumptions, erroneous calculations, and so on. RANSAC also assumes that, given a correct set of data, there is a way to compute model parameters that fit those data.
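A minimal RANSAC sketch for line fitting shows the inlier/outlier mechanism described above: repeatedly fit a line to a random pair of points and keep the model with the most inliers. The data, tolerance, and iteration count are illustrative, and the sampling is seeded for reproducibility:

```python
# Minimal RANSAC for fitting y = a*x + b to data with gross outliers.
import random

def ransac_line(points, iters=200, tol=0.5, seed=0):
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue                        # vertical pair: skip this sample
        a = (y2 - y1) / (x2 - x1)           # slope of the candidate line
        b = y1 - a * x1                     # intercept
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# 8 inliers on y = 2x + 1, plus 3 gross outliers.
pts = [(x, 2 * x + 1) for x in range(8)] + [(1, 30), (5, -20), (7, 99)]
model, inliers = ransac_line(pts)
print(model, len(inliers))  # (2.0, 1.0) 8
```

The same consensus idea is what makes RANSAC usable for filtering mismatched feature points: pairings inconsistent with the dominant geometric model are discarded as outliers.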
In an alternative embodiment, when there are a plurality of detection areas, generating a preset number of feature points for the reference image frame and the current image frame in the detection area respectively includes: generating a preset number of feature points for the reference image frame and the current image frame in each of the plurality of detection areas. For example, as shown in fig. 2B, 4 detection areas are set, and a preset number of feature points is generated in each of them. In the embodiment of the present invention, the number of feature points may be set according to the area of the detection region, or the same total number of feature points may be generated at random in the detection areas for the reference image frame and the current image frame respectively.
In the step S15, feature points of the reference image frame and feature points of the current image frame are matched to determine the number of matched feature points and the link distance between the matched feature points.
In the embodiment of the present invention, when the feature points of the reference image frame are matched with those of the current image frame, an existing matching method may be used, such as the BF (Brute Force) algorithm or the FLANN matching algorithm.
The BF (Brute Force) algorithm works by exhaustive comparison. In string matching, for example, the first character of the target string S is compared with the first character of the pattern string T; if they are equal, the second characters of S and T are compared in turn; if not, the second character of S is compared with the first character of T, and so on until the final matching result is obtained. Applied to feature matching, every descriptor of one image is likewise compared against every descriptor of the other.
The FLANN (Fast Library for Approximate Nearest Neighbors) matching algorithm is a collection of algorithms for nearest-neighbor search on large data sets and high-dimensional features. By finding correspondences with an approximate k-nearest-neighbor search, the FLANN method is much faster than the BF (Brute Force) method.
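Brute-force matching of binary descriptors (such as those ORB produces) reduces to nearest-neighbor search under the Hamming distance; a cross-check keeps only pairs that are mutual nearest neighbors. A pure-Python sketch with toy 8-bit descriptors (real ORB descriptors are 256-bit, and a FLANN-style approximate search would replace the exhaustive inner loop on large sets):

```python
# Cross-checked brute-force matching of binary descriptors by Hamming distance.

def hamming(a, b):
    """Number of differing bits between two equal-length bit patterns (ints)."""
    return bin(a ^ b).count("1")

def nearest(desc, candidates):
    """Index of the candidate with the smallest Hamming distance to desc."""
    return min(range(len(candidates)), key=lambda j: hamming(desc, candidates[j]))

def bf_match(descs1, descs2):
    """List of (i, j) index pairs where each descriptor is the other's nearest."""
    matches = []
    for i, d in enumerate(descs1):
        j = nearest(d, descs2)
        if nearest(descs2[j], descs1) == i:   # cross-check rejects one-sided pairs
            matches.append((i, j))
    return matches

# Toy 8-bit descriptors: descs2 is descs1 with one bit flipped, reordered.
descs1 = [0b10110100, 0b01001011, 0b11110000]
descs2 = [0b11110001, 0b10110101, 0b01001010]
print(bf_match(descs1, descs2))  # [(0, 1), (1, 2), (2, 0)]
```

The cross-check already removes some of the spurious pairings discussed below, but similar-looking regions can still produce confident mismatches, which is why an additional filtering step follows.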
In the embodiment of the invention, the matching algorithm pairs up approximately equivalent feature points (corner points) in the detection areas, so that the feature points in the two pictures can be paired. However, in fig. 2B, for example, detection area 2 and detection area 3 are both vegetation areas whose growth looks extremely similar; the feature points generated there by the ORB detection algorithm are correspondingly similar, and a brute-force matching algorithm is very likely to "successfully" match feature points of detection area 2 in one image with feature points of detection area 3 in the other. The matching result may therefore contain mismatched feature points, and in the embodiment of the present invention the pairing relationships of mismatched points need to be filtered out and deleted.
In an optional embodiment, after the feature points of the reference image frame are matched with the feature points of the current image frame, the method further includes: filtering the mismatched feature points out of the matching result by using a filtering algorithm. For example, in the embodiment of the present invention, a GMS filtering algorithm is used to filter the matching result, so as to filter out the mismatched feature points.
The GMS (Grid-based Motion Statistics) filtering algorithm is a filtering method that eliminates mismatches by using motion smoothness as a statistic over matched local grid regions. Compared with the traditional RANSAC (Random Sample Consensus) algorithm, it runs faster and consumes fewer computing resources when removing mismatched feature points.
In the step S16, it is determined whether the image capturing device has shifted according to the number of matched feature points and the connection line distance between the matched feature points.
Specifically, when the number of matched feature points is smaller than a first preset value and the average value of the connection line distances between the matched feature points is larger than a second preset value, it is determined that the image acquisition device has shifted; otherwise, it is determined that the image acquisition device has not shifted.
For example, in the embodiment of the present invention, the preset number of feature points generated in step S14 is N, the number of matched feature points determined after the feature point matching in step S15 is n (n ≤ N), and the connection lines between the matched feature points form the set {L1, L2, … , Ln} (each Li is the vector from the coordinates of a matched feature point in the reference image frame to the coordinates of its counterpart in the current image frame).
In an embodiment of the present invention, the first predetermined value is a threshold on the ratio of n to N, and the second predetermined value is a threshold on the average value of the connection line distances. For example, the average displacement of the matching-point connection-line vector set is calculated as r = (|L1| + |L2| + … + |Ln|)/n; when n/N is smaller than a certain ratio (e.g. n/N < 0.3) and r is larger than a certain value (e.g. r > (w + h)/70, where w and h are the width and height of the image respectively and 70 is an empirical divisor applied to the sum of the width and height), it is determined that the current image frame has shifted relative to the reference image frame.
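The decision rule in this paragraph can be sketched as a small Python function. The name `is_shifted` is an assumption, and the ratio 0.3 and divisor 70 are simply the example values quoted above, treated here as tunable parameters rather than fixed constants:

```python
import math

def is_shifted(vectors, preset_n, width, height,
               ratio_thresh=0.3, divisor=70):
    """Decide whether the current frame has shifted from the reference.

    vectors  : list of (dx, dy) connection-line vectors {L1, ..., Ln}
               between matched feature points.
    preset_n : the preset number N of feature points generated per frame.

    A shift is declared when few points matched (n/N below ratio_thresh)
    AND the average displacement r exceeds (width + height) / divisor.
    """
    n = len(vectors)
    if n == 0:
        return True  # nothing matched at all: treat as shifted
    r = sum(math.hypot(dx, dy) for dx, dy in vectors) / n
    return (n / preset_n) < ratio_thresh and r > (width + height) / divisor
```

With a 640×480 image the displacement threshold is (640 + 480)/70 = 16 pixels, so ten surviving matches out of a hundred, each displaced by 100 pixels, would be judged as a shift, while ninety near-zero-displacement matches would not.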
After the above step S16 is completed, the embodiment of the present invention further updates the reference image frame when it is determined, over a plurality of consecutive current image frames, that the image capturing device has not shifted. Due to the influence of factors such as time and environment, the gray values of the images captured by the image capturing device may change with the surrounding environment; for example, the brightness of images captured in the morning and at midday may differ significantly. Therefore, when a plurality of consecutive current image frames all indicate that the image capturing device has not shifted, the reference image frame is updated. For example, in the embodiment of the present invention, when m consecutive frames (for example, m = 5) between the reference image frame and the current image frame are all judged as not shifted, the reference image frame is replaced by the current image frame.
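The gradual-update mechanism just described can be sketched as follows. This is a minimal illustration under stated assumptions: the class name `ReferenceUpdater` and the counter logic are not the patent's implementation, only one plausible reading of "update after m consecutive no-shift frames":

```python
class ReferenceUpdater:
    """Replace the reference frame after m consecutive no-shift frames.

    This lets the reference adapt to slow changes such as lighting,
    while any detected shift resets the counter so a drifting camera
    never silently becomes its own reference.
    """
    def __init__(self, reference, m=5):
        self.reference = reference
        self.m = m
        self._streak = 0  # consecutive frames judged "not shifted"

    def observe(self, frame, shifted):
        if shifted:
            self._streak = 0            # shift detected: keep old reference
        else:
            self._streak += 1
            if self._streak >= self.m:
                self.reference = frame  # m quiet frames: adopt current frame
                self._streak = 0
        return self.reference
```

The reset-on-shift design choice matters: without it, a slowly creeping camera could accumulate quiet frames between alarms and eventually re-baseline itself to a shifted view.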
The embodiment of the invention generates a preset number of feature points only for a detection area containing a background image, which avoids generating feature points in both the foreground and the background of the image and solves the problem of inaccurate feature point matching caused by the movement of objects in other image areas (such as the foreground area) rather than by offset of the image acquisition device. It then comprehensively judges whether the image acquisition device has shifted on the basis of the number of matched feature points and the connection line distance between them, greatly reducing the manual judgment required.
In a more specific embodiment, referring to fig. 3, offset detection is performed on a PTZ pan-tilt camera on a highway: the video stream address of the pan-tilt camera, or the local address of any video, is acquired, and camera offset detection is performed on the video read from the camera. First, initialization parameters are set, for example: a detection area containing only a background image, the number of feature points to generate in the detection area, the address of the video stream, and so on. Specifically, one background area enclosed by a polygon is taken as the detection area, or a plurality of background areas are taken as detection areas, and the vertices of each polygon are stored for background-area selection in every image frame. Then an image frame is selected: a preset image frame (generally the first frame of the video stream), serving as the reference image frame for judging camera offset, is matched against each new image frame (current image frame). The latest image frame (current image frame) and the reference image frame are read, and a preset number of feature points are generated for each of them in the detection area by the ORB feature point detection algorithm. BF matching is then performed and GMS mismatch elimination is applied, yielding the number of matched feature points and the connection-line vector set {L1, L2, … , Ln} between the matched feature points. Finally, the obtained number of matched feature points and the connection-line vector set are used to judge whether the current image frame has shifted relative to the reference image frame, and corresponding prompt information is sent.
When objective factors such as illumination changes need to be accommodated, a gradual-update mechanism can be used: when no alarm has been raised for m consecutive frames (for example, m = 5) compared between the historical image frame (reference image frame) and the current image frame, the historical image frame is updated with the current image frame (without any update, the historical image frame remains the preset image frame), so that the detected camera offset better matches the actual situation.
Referring to fig. 4A and 4B, a preset number of feature points are generated in detection regions 1 and 2 of fig. 2B by the ORB feature point detection algorithm; the black crosses in the figures represent the generated feature points. If the image shown in fig. 4B has not shifted, the generated feature points are almost identical to those in fig. 4A, the ratio n/N of the number of matched feature points to the preset number (compared against the first predetermined value) is close to 1, and the average displacement of the matching-point connection-line vector set (compared against the second predetermined value) is close to 0. After the image shown in fig. 4B has shifted, however, the number of matched feature points decreases significantly and the connection line distance between matched feature points is no longer 0; the larger the shift distance, the smaller the number of matched feature points.
Based on the same inventive concept, an embodiment of the present invention further provides an offset detection apparatus for an image capturing device, and as shown in fig. 5, the apparatus includes: the determining module 11, the setting module 12, the selecting module 13, the generating module 14, the matching module 15 and the judging module 16, with the following working principles:
The determining module 11 determines image frames acquired by the image capturing device.
The setting module 12 sets a detection area including only a background image for the image frame.
The selecting module 13 selects a reference image frame and a current image frame from the image frames.
The generating module 14 generates a preset number of feature points for the reference image frame and the current image frame in the detection area, respectively. Optionally, when there are a plurality of detection areas, the generating module 14 generates a preset number of feature points for the reference image frame and the current image frame in each of the plurality of detection areas.
The matching module 15 performs feature point matching on the feature points of the reference image frame and the feature points of the current image frame, and determines the number of matched feature points and the connection line distance between the matched feature points. Optionally, the matching module 15 further filters the mismatched feature points out of the matching result by using a filtering algorithm.
The judging module 16 judges whether the image capturing device has shifted according to the number of matched feature points and the connection line distance between the matched feature points. Specifically, when the number of matched feature points is less than a first predetermined value and the average value of the connection line distances between the matched feature points is greater than a second predetermined value, the judging module 16 determines that the image capturing device has shifted; otherwise, the judging module 16 determines that the image capturing device has not shifted.
In an alternative embodiment, referring to fig. 6, the apparatus may further include an updating module 17; when it is determined, based on a plurality of consecutive current image frames, that the image capturing device has not shifted, the updating module 17 updates the reference image frame.
For specific description, beneficial effects and relevant examples of the device according to the embodiment of the present invention, reference is made to the above method, and details are not repeated herein.
Embodiment 2
An embodiment of the present invention provides an image matching method, and as shown in fig. 7, the method may include the following steps:
step S21 sets corresponding detection areas on the first image and the second image.
The first image and the second image may be images taken from different image capturing devices, respectively.
Step S22, generating a preset number of feature points for the first image and the second image in the detection area, respectively.
Step S23, feature point matching is performed between the feature points of the first image and the feature points of the second image.
And step S24, determining the matching relation between the first image and the second image according to the matching result of the feature point matching.
According to the image matching method provided by the embodiment of the invention, a detection area is set on the images to be matched, so that feature points are not generated over the entire image; only a preset number of feature points are generated for the detection area, and image matching is then performed on the feature points generated in the detection area. In this way, segmentation and minimum-unit matching of the images can be completed, the image matching efficiency can be improved, and interference from other factors can be eliminated (for example, the foreground image is such a factor when the background is matched, and the background image is such a factor when the foreground is matched), thereby improving the accuracy of image matching.
The above steps of the embodiments of the present invention are described in detail as follows:
the step S21 sets the corresponding detection areas on the first image and the second image, and reference may be made to the description of step S12 in embodiment 1. It should be noted that in embodiment 1 the detection area contains only the background image, because it is used to detect device offset; the detection area in this embodiment is not limited to the background area of embodiment 1 and is instead determined according to the area where the target object is located. Specifically, the detection region contains only the foreground images of the first image and the second image, or only the background images of the first image and the second image.
For example, in object recognition and motion tracking, the target may be a known person, a vehicle, or the like. Monitoring pictures are captured by image capturing devices (pan-tilt cameras or other cameras) at different traffic intersections, and image recognition is performed on the persons and vehicles in the pictures. Since a person image belongs to the foreground, the detection area in this case contains only the foreground images of the first image and the second image, and different areas within the foreground image can be divided. The image feature points are then compared, area by area, with the detection areas of suspected persons in the reference image (such as the first image); this narrows the range over which feature points are arranged and filters out a large number of erroneous feature points, making the matching more accurate.
The above step S22 is to generate a preset number of feature points for the first image and the second image, respectively, in the detection area. This step is described with reference to step S14 in embodiment 1. In step S23, feature points of the first image are matched with feature points of the second image. And step S24 is to determine the matching relationship between the first image and the second image according to the matching result of the feature point matching. Reference is made to the description of step S15 and step S16, respectively, of embodiment 1.
It should be noted that, in step S15 of embodiment 1, the number of matched feature points and the connection line distance between the matched feature points are determined after feature point matching, and step S16 judges whether the image capturing device has shifted according to these values. After the feature point matching in step S23 of this embodiment, the number of matched feature points and the connection line distance between the matched feature points may likewise be determined; correspondingly, determining the matching relationship between the first image and the second image according to the matching result of the feature point matching includes: when the number of matched feature points is smaller than a first preset value and the average value of the connection line distances between the matched feature points is larger than a second preset value, judging that the first image does not match the second image; otherwise, judging that the first image matches the second image.
Of course, the embodiment of the present invention is not limited to determining the number of matched feature points and the connection line distance between them after the feature point matching in step S23; step S24 may determine the matching relationship of the first image and the second image according to any matching result produced by step S23.
According to the embodiment of the invention, corresponding detection areas are set on the first image and the second image, and the image areas are divided according to the requirements of different applications such as image recognition, motion tracking and image matching; a preset number of feature points are then generated for the images in the corresponding detection areas, so that feature point matching can be performed and the matching relationship between the images determined. Because the detection areas in which the feature points are generated are the target detection areas, interference from feature points in non-target detection areas is avoided during matching, the computational load is reduced, and the detection efficiency and working efficiency are improved.
Based on the same inventive concept, an embodiment of the present invention further provides an image matching apparatus, and as shown in fig. 8, the apparatus may include: the detection region setting module 21, the feature point generating module 22, the feature point matching module 23 and the matching relationship determining module 24, with the following working principles:
The detection region setting module 21 sets the corresponding detection regions on the first image and the second image. Specifically, the detection region contains only the foreground images of the first image and the second image, or only the background images of the first image and the second image. The feature point generating module 22 generates a preset number of feature points for the first image and the second image in the detection area, respectively. The feature point matching module 23 performs feature point matching on the feature points of the first image and the feature points of the second image. The matching relationship determining module 24 determines the matching relationship between the first image and the second image according to the matching result of the feature point matching.
Specifically, the feature point matching module 23 performs feature point matching on the feature points of the first image and the feature points of the second image, and determines the number of matched feature points and the connection line distance between the matched feature points. The matching relationship determining module 24 then determines the matching relationship between the first image and the second image according to the matching result: when the number of matched feature points is smaller than a first predetermined value and the average value of the connection line distances between the matched feature points is larger than a second predetermined value, the matching relationship determining module 24 determines that the first image has shifted relative to the second image; otherwise, it determines that the first image has not shifted relative to the second image.
For the technical effects and the related examples of the device of this embodiment, reference may be made to the related contents in the above method, and further description is omitted here.
In a specific embodiment, referring to fig. 9, when the image matching method in the embodiment of the present invention is applied to the field of intelligent transportation and used for detecting the offset of an image capturing device, the corresponding image capturing device offset detecting method includes the following steps:
step S31, reading a reference image frame and a current image frame from a video stream shot by an image acquisition device;
step S32, setting a corresponding detection area in the background areas of the reference image frame and the current image frame;
step S33, generating a preset number of feature points for the reference image frame and the current image frame in the detection area, respectively;
step S34, matching the characteristic points of the reference image frame with the characteristic points of the current image frame;
and step S35, judging whether the image acquisition equipment deviates according to the matching result of the feature point matching.
For the technical effects and examples of the above steps in the embodiment of the present invention, reference may be made to the relevant contents of the image matching methods in the above embodiments 1 and 2, which are not described herein again.
Based on the same inventive concept, this specific embodiment also provides an image capturing device offset detection apparatus, and as shown in fig. 10, the apparatus includes: the image frame reading module 31, the region setting module 32, the feature point generating module 33, the feature point matching module 34 and the judging module 35, with the following working principles:
the imageframe reading module 31 reads a reference image frame and a current image frame from a video stream captured by an image capturing apparatus; theregion setting module 32 sets a corresponding detection region in the background regions of the reference image frame and the current image frame; the featurepoint generating module 33 generates a preset number of feature points for the reference image frame and the current image frame in the detection area respectively; the featurepoint matching module 34 performs feature point matching on the feature points of the reference image frame and the feature points of the current image frame; the judgingmodule 35 judges whether the image capturing device is shifted according to the matching result of the feature point matching.
Embodiment 3
An embodiment of the present invention provides an image processing system, and as shown in fig. 11, the system includes: an image acquisition device 1 and a server 2. The image acquisition equipment 1 is used for acquiring image data in real time and sending the image data to the server 2; the server 2 includes the image capturing device offset detection apparatus, and is configured to receive image data sent by the image capturing device and match images included in the image data.
It should be noted that, when the method in embodiment 1 needs to be implemented, the image processing system provided in the embodiment of the present invention may be referred to as an image capturing device offset detection system, and a device included in the server is the image capturing device offset detection device in embodiment 1.
When the image matching method in embodiment 2 needs to be implemented, the system may be referred to as an image matching system, the image capturing device of the system may be any image capturing device provided for the image in embodiment 2, such as a camera, a mobile phone, a camera, a vehicle-mounted terminal, and the like, and the device included in the server is the image matching device in embodiment 2.
For the technical effects and the related examples of the system described in this embodiment, reference may be made to the related contents in the foregoing embodiments 1 and 2, and details are not described here again.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (16)

1. An image acquisition device offset detection method, comprising:
determining an image frame acquired by an image acquisition device;
setting a detection area for the image frame; selecting a reference image frame and a current image frame from the image frames;
respectively generating a preset number of feature points for the reference image frame and the current image frame in the detection area;
carrying out feature point matching on the feature points of the reference image frame and the feature points of the current image frame, and determining the number of the matched feature points and the connecting line distance between the matched feature points;
and judging whether the image acquisition equipment deviates or not according to the number of the matched characteristic points and the connecting line distance between the matched characteristic points.
2. The method of claim 1, the setting a detection region for the image frame comprising: a detection area including only a background image is set for the image frame.
3. The method of claim 1, wherein the determining whether the image capturing device is shifted according to the number of the matched feature points and the connection line distance between the matched feature points comprises:
when the number of the matched characteristic points is smaller than a first preset value and the average value of the connecting line distances between the matched characteristic points is larger than a second preset value, judging that the image acquisition equipment has offset; otherwise, judging that the image acquisition equipment does not deviate.
4. The method of claim 1, further comprising, after feature point matching feature points of the reference image frame with feature points of the current image frame:
and filtering out the mismatched characteristic points from the matching result by using a filtering algorithm.
5. The method according to claim 1, wherein the detection area is plural, and the generating a preset number of feature points for the reference image frame and the current image frame in the detection area respectively comprises:
generating a preset number of feature points for the reference image frame and the current image frame, respectively, in each detection area of the plurality of detection areas.
6. The method according to any one of claims 1 to 5, further comprising, after determining whether the image capturing device is shifted according to the number of matched feature points and the connection distance between the matched feature points:
updating the reference image frame when it is determined that the image capturing device is not shifted based on a plurality of current image frames continuing.
7. An image matching method, comprising:
setting corresponding detection areas on the first image and the second image;
respectively generating a preset number of feature points for the first image and the second image in the detection area;
carrying out feature point matching on the feature points of the first image and the feature points of the second image;
and determining the matching relation between the first image and the second image according to the matching result of the feature point matching.
8. The method of claim 7, the detection region containing only foreground images of the first and second images or only background images of the first and second images.
9. The method of claim 7, the feature point matching feature points of the first image with feature points of the second image, comprising: performing feature point matching on the feature points of the first image and the feature points of the second image, and determining the number of the matched feature points and the connection line distance between the matched feature points;
the determining the matching relationship between the first image and the second image according to the matching result of the feature point matching includes: when the number of the matched feature points is smaller than a first preset value and the average value of the connection line distances between the matched feature points is larger than a second preset value, judging that the first image does not match the second image; otherwise, judging that the first image matches the second image.
10. An image acquisition device offset detection method, comprising:
reading a reference image frame and a current image frame from a video stream photographed by an image pickup device;
setting corresponding detection areas in background areas of the reference image frame and the current image frame;
respectively generating a preset number of feature points for the reference image frame and the current image frame in the detection area;
carrying out feature point matching on the feature points of the reference image frame and the feature points of the current image frame;
and judging whether the image acquisition equipment deviates or not according to the matching result of the feature point matching.
11. An image capturing apparatus shift detection apparatus comprising:
a determining module for determining an image frame acquired by an image acquisition device;
the setting module is used for setting a detection area for the image frame;
the selecting module is used for selecting a reference image frame and a current image frame from the image frames;
a generating module, configured to generate a preset number of feature points for the reference image frame and the current image frame in the detection area, respectively;
the matching module is used for matching the characteristic points of the reference image frame with the characteristic points of the current image frame to determine the number of the matched characteristic points and the connection line distance between the matched characteristic points;
and the judging module is used for judging whether the image acquisition equipment deviates or not according to the number of the matched characteristic points and the connecting line distance between the matched characteristic points.
12. An image matching apparatus comprising:
the detection area setting module is used for setting corresponding detection areas on the first image and the second image;
a feature point generation module, configured to generate a preset number of feature points for the first image and the second image in the detection area respectively;
the feature point matching module is used for matching the feature points of the first image with the feature points of the second image;
and the matching relation determining module is used for determining the matching relation between the first image and the second image according to the matching result of the feature point matching.
13. An image acquisition equipment offset detection apparatus, comprising:
the image frame reading module is used for reading a reference image frame and a current image frame from a video stream shot by the image acquisition equipment;
the region setting module is used for setting corresponding detection regions in the background regions of the reference image frame and the current image frame;
a feature point generation module, configured to generate a preset number of feature points for the reference image frame and the current image frame in the detection area respectively;
the feature point matching module is used for matching the feature points of the reference image frame with the feature points of the current image frame;
and the judging module is used for judging whether the image acquisition equipment deviates or not according to the matching result of the feature point matching.
14. An image processing system comprising: an image acquisition device and a server;
the image acquisition equipment is used for acquiring image data in real time and sending the image data to the server;
the server comprises the image acquisition equipment offset detection apparatus according to claim 11, the image matching apparatus according to claim 12, or the image acquisition equipment offset detection apparatus according to claim 13.
15. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 10.
16. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of any one of claims 1 to 10 when executing the program.
CN202010243946.6A | Priority date: 2020-03-31 | Filing date: 2020-03-31 | Image acquisition equipment offset detection method, image matching method, system and equipment | Status: Pending | Publication: CN113469201A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010243946.6A (CN113469201A, en) | 2020-03-31 | 2020-03-31 | Image acquisition equipment offset detection method, image matching method, system and equipment

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202010243946.6A (CN113469201A, en) | 2020-03-31 | 2020-03-31 | Image acquisition equipment offset detection method, image matching method, system and equipment

Publications (1)

Publication Number | Publication Date
CN113469201A (en) | 2021-10-01

Family

ID=77866155

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202010243946.6A | CN113469201A (en), Pending | 2020-03-31 | 2020-03-31

Country Status (1)

Country | Link
CN (1) | CN113469201A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN102118561A (en)* | 2010-05-27 | 2011-07-06 | 周渝斌 | Camera movement detection system in monitoring system and method
CN102609957A (en)* | 2012-01-16 | 2012-07-25 | 上海智觉光电科技有限公司 | Method and system for detecting picture offset of camera device
CN103279765A (en)* | 2013-06-14 | 2013-09-04 | 重庆大学 | Steel wire rope surface damage detection method based on image matching
CN103366374A (en)* | 2013-07-12 | 2013-10-23 | 重庆大学 | Fire fighting access obstacle detection method based on image matching
CN105678730A (en)* | 2014-11-17 | 2016-06-15 | 西安三茗科技有限责任公司 | Camera movement self-detecting method on the basis of image identification
CN104657997A (en)* | 2015-02-28 | 2015-05-27 | 北京格灵深瞳信息技术有限公司 | A lens shift detection method and device
CN109145684A (en)* | 2017-06-19 | 2019-01-04 | 西南科技大学 | Head state monitoring method based on region most matching characteristic point
CN109523570A (en)* | 2017-09-20 | 2019-03-26 | 杭州海康威视数字技术股份有限公司 | Beginning parameter transform model method and device
CN109059895A (en)* | 2018-03-28 | 2018-12-21 | 南京航空航天大学 | A kind of multi-modal indoor ranging and localization method based on mobile phone camera and sensor

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party

Title
中国人工智能学会编 (Chinese Association for Artificial Intelligence, ed.): 《中国人工智能进展 2007》 (Progress of Artificial Intelligence in China 2007), Beijing University of Posts and Telecommunications Press (北京邮电大学出版社), 31 December 2007, pages 1123-1127 *

Cited By (17)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN114040094B (en)* | 2021-10-25 | 2023-10-31 | 青岛海信网络科技股份有限公司 | Preset position adjusting method and device based on cradle head camera
CN114040094A (en)* | 2021-10-25 | 2022-02-11 | 青岛海信网络科技股份有限公司 | Method and equipment for adjusting preset position based on pan-tilt camera
CN113794875A (en)* | 2021-11-15 | 2021-12-14 | 浪潮软件股份有限公司 | A method and device for intelligent inspection of video offset on site of major projects
CN114240919A (en)* | 2021-12-23 | 2022-03-25 | 恒为科技(上海)股份有限公司 | Board device detection method, device, equipment and storage medium
CN114463395A (en)* | 2021-12-31 | 2022-05-10 | 济南信通达电气科技有限公司 | A kind of monitoring equipment offset detection method, equipment and medium
CN114419120A (en)* | 2022-01-26 | 2022-04-29 | Oppo广东移动通信有限公司 | Image processing method and device, computer readable storage medium and electronic device
CN114519753A (en)* | 2022-02-14 | 2022-05-20 | 上海闻泰信息技术有限公司 | Image generation method, system, electronic device, storage medium and product
CN114363585A (en)* | 2022-03-21 | 2022-04-15 | 南通阳鸿石化储运有限公司 | Intelligent video safety monitoring method and system based on gridding control
CN116205889A (en)* | 2023-03-06 | 2023-06-02 | 合肥联宝信息技术有限公司 | Offset detection method, device, electronic equipment and storage medium
CN116651778A (en)* | 2023-04-28 | 2023-08-29 | 上海悦峻网络信息技术有限公司 | Method and equipment for jet sorting of object lane change
CN117036490A (en)* | 2023-10-10 | 2023-11-10 | 长沙能川信息科技有限公司 | Method, device, computer equipment and medium for detecting preset bit offset of camera
CN117036490B (en)* | 2023-10-10 | 2024-01-19 | 长沙能川信息科技有限公司 | Method, device, computer equipment and medium for detecting preset bit offset of camera
CN118135477A (en)* | 2024-01-11 | 2024-06-04 | 湖南工学院 | Safety detection method and system for personnel under bridge crane based on monocular camera
CN118864596A (en)* | 2024-07-10 | 2024-10-29 | 北京数原数字化城市研究中心 | A method, device and related equipment for generating identification information
CN118735994A (en)* | 2024-09-03 | 2024-10-01 | 天翼视联科技有限公司 | Camera displacement recognition method and computer equipment
CN118735994B (en)* | 2024-09-03 | 2024-12-24 | 天翼视联科技有限公司 | Camera displacement recognition method and computer equipment
CN119850987A (en)* | 2025-03-20 | 2025-04-18 | 浙江易时科技股份有限公司 | Image equipment displacement abnormity early warning system and method based on feature point detection

Similar Documents

Publication | Title
CN113469201A (en) | Image acquisition equipment offset detection method, image matching method, system and equipment
CN111724439B (en) | Visual positioning method and device under dynamic scene
CN112686812A (en) | Bank card inclination correction detection method and device, readable storage medium and terminal
CN112819895B (en) | Camera calibration method and device
CN116311218B (en) | Noise plant point cloud semantic segmentation method and system based on self-attention feature fusion
CN110610150B (en) | Tracking method, device, computing equipment and medium of target moving object
CN108280450A (en) | A kind of express highway pavement detection method based on lane line
CN116279592A (en) | Method for dividing travelable area of unmanned logistics vehicle
CN115019241B (en) | Pedestrian identification and tracking method and device, readable storage medium and equipment
CN112381132A (en) | Target object tracking method and system based on fusion of multiple cameras
CN106530407A (en) | Three-dimensional panoramic splicing method, device and system for virtual reality
CN113989761B (en) | Object tracking method, device, electronic device and storage medium
CN112215205A (en) | Target identification method and device, computer equipment and storage medium
CN120065187B (en) | Fusion calibration system and method of lidar and camera based on ROS
Revaud et al. | Robust automatic monocular vehicle speed estimation for traffic surveillance
CN113628251B (en) | Smart hotel terminal monitoring method
CN113033445B (en) | Cross-over recognition method based on aerial power channel image data
Kaimkhani et al. | UAV with vision to recognise vehicle number plates
HK40063917A (en) | Offset detection method for image acquisition equipment, image matching method and system and equipment
CN118823726A (en) | Method, device and vehicle for constructing multi-task dataset for automatic parking
CN117928540A (en) | Positioning method and positioning device for robot, and storage medium
CN114004742B (en) | Image reconstruction method, training method, detection method, device and storage medium
CN120014052A (en) | An abnormal condition detection method and system for a smart safety help cloud platform
Planitz et al. | Intrinsic correspondence using statistical signature-based matching for 3d surfaces
CN120707893A (en) | Water area change early warning method and device and electronic equipment

Legal Events

Code | Title | Description
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 40063917; Country of ref document: HK
RJ01 | Rejection of invention patent application after publication | Application publication date: 2021-10-01
