CN103077533B - A method for locating moving targets based on frog-eye visual characteristics - Google Patents

A method for locating moving targets based on frog-eye visual characteristics

Info

Publication number
CN103077533B
Authority
CN
China
Prior art keywords
moving
camera
images
image
tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201210574497.9A
Other languages
Chinese (zh)
Other versions
CN103077533A (en)
Inventor
陈宗海
郭明玮
赵宇宙
张陈斌
项俊平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology of China USTC
Priority to CN201210574497.9A
Publication of CN103077533A
Application granted
Publication of CN103077533B
Legal status: Active
Anticipated expiration

Abstract

Translated from Chinese

The invention discloses a method for locating a moving target based on frog-eye visual characteristics. The method includes: adopting a motion-region detection algorithm based on the inter-frame difference method, which simulates the frog-eye visual system's sensitivity to moving targets, to extract motion regions from an image; specifically, applying the inter-frame difference method to adjacent frames of the image sequence captured by the second camera used for relay tracking, to obtain images of several motion regions containing one or more moving targets; and performing histogram matching between those motion-region images and the tracking-box image of the moving target selected in the first camera, the most similar region among them being the location of the selected moving target in the second camera's view. With the disclosed method, moving targets can be extracted accurately from complex scenes, so that they are localized quickly and accurately.

Description

Translated from Chinese
A method for locating moving targets based on the visual characteristics of frog eyes

Technical Field

The present invention relates to the field of pattern recognition, and in particular to a method for locating moving targets based on the visual characteristics of frog eyes.

Background Art

At present, the construction of smart and safe cities creates an ever-growing demand for intelligent video surveillance systems, together with ever more functional requirements, and relay tracking has become one of their major functions. Because a single camera's monitoring range is limited, continuously monitoring the same target across a large scene area, so as to obtain clearer and more detailed images of the target, requires several cameras to cooperate; relay tracking is the function born of this need. Almost all relay-tracking methods involve matching moving targets, and the common matching methods fall into three categories according to the matching features chosen: (1) matching directly on the raw pixel values of the image; (2) matching on physical shape features (points, lines) such as edges and corners, which markedly reduces the number of pixels involved in the correlation computation and adapts better; (3) algorithms built on higher-level features, such as constrained tree search. For relay tracking in real, complex scenes, however, these common target-matching methods cannot be applied directly with good results. The fundamental reason is that real scenes are complicated and full of interference, so applying the common matching algorithms directly faces heavy computation, abundant interference, and an inability to localize the target accurately.

Summary of the Invention

The object of the present invention is to provide a method for locating moving targets based on the visual characteristics of frog eyes, which can accurately extract moving targets from complex scenes and thereby localize them quickly and accurately.

A method for locating a moving target based on frog-eye visual characteristics comprises:

adopting a motion-region detection algorithm based on the inter-frame difference method, which simulates the frog-eye visual system's sensitivity to moving targets, to extract motion regions from the image; specifically: applying the inter-frame difference method to adjacent frames of the image sequence captured by the second camera used for relay tracking, to obtain images of several motion regions containing one or more moving targets;

performing histogram matching between those motion-region images and the tracking-box image of the moving target selected in the first camera, and finding the most similar region among the motion-region images; that region is the location of the selected moving target in the second camera's view.

As can be seen from the technical solution provided above, filtering the static areas out of complex scene images on the basis of frog-eye visual characteristics allows the motion region of a moving target to be extracted rather accurately, which reduces the computation required for target matching and increases localization accuracy.

Brief Description of the Drawings

To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in their description are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can derive other drawings from them without creative effort.

Fig. 1 is a flowchart of a method for locating a moving target based on frog-eye visual characteristics provided by Embodiment 1 of the present invention;

Fig. 2 is a flowchart of another method for locating a moving target based on frog-eye visual characteristics provided by Embodiment 2 of the present invention.

Detailed Description

The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the present invention.

A monitored scene area contains multiple cameras, which cooperate to monitor the area. A first camera (for example, a bullet camera) monitors the scene area as a whole; when a moving target of interest to the user appears in the first camera, a second camera (for example, a dome camera) must be called to relay-track that target.

In a fairly complex environment (for example, several moving targets may be present in the second camera's view), the second camera must first match the moving target before it can accurately relay-track the target of interest from the first camera.

Embodiment 1

Fig. 1 is a flowchart of a method for locating a moving target based on frog-eye visual characteristics provided by Embodiment 1 of the present invention, which mainly comprises the following steps:

Step 101: filter out the static areas in the second camera's view based on frog-eye visual characteristics, and extract motion-region images.

The frog-eye visual system is peculiarly sensitive to moving targets: a frog cannot see (or at least does not attend to) the static details of the surrounding world. For moving targets in complex scenes, this property can be exploited to filter out static areas.

Accordingly, a motion-region detection algorithm based on the inter-frame difference method is adopted to simulate the frog eye's motion-sensitive visual characteristic, filtering out the static areas of the second camera's scene and extracting the motion regions of moving targets accurately. Specifically: the inter-frame difference method performs a difference operation on adjacent frames of the image sequence captured by the second camera, obtaining images of several motion regions containing one or more moving targets.

The inter-frame difference method works mainly by differencing adjacent frames of the image sequence, comparing the gray values of corresponding pixels in the adjacent frames, and then applying a threshold to extract the images of the motion regions containing one or more moving targets.
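A minimal sketch of this thresholded differencing, with frames held as nested lists of gray values; the threshold of 30 is an illustrative assumption, not a value from the patent:

```python
def frame_diff(f_prev, f_cur, thresh=30):
    """Two-frame difference: mark pixels whose gray value changed by more
    than `thresh` between adjacent frames."""
    return [[1 if abs(a - b) > thresh else 0 for a, b in zip(ra, rb)]
            for ra, rb in zip(f_prev, f_cur)]

# Only the pixel that changed substantially between the frames is marked.
print(frame_diff([[0, 0], [0, 0]], [[0, 120], [0, 10]]))  # [[0, 1], [0, 0]]
```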

Step 102: locate the moving target.

Since the first camera's monitoring range is limited, relay tracking of a moving target requires the second camera. Step 101 yields the motion regions in the second camera's scene (several motion regions that may contain one or more moving targets); these regions can then be matched against the tracking-box image of a moving target selected in the first camera to obtain that target's position in the second camera. Specifically: the user selects a moving-target tracking-box image (the motion region containing one moving target) from the first camera's scene; this image is histogram-matched against the images of the motion regions in the second camera, and the most similar region among them is the position of the intended moving target in the second camera's view.

Once the region of the moving target in the second camera is obtained, it can serve as the region of the second camera's initial tracking box. An active tracking algorithm built around the mean-shift (MeanShift) method then controls the second camera's motion so that the moving target always stays in the central area of the second camera's scene and the tracking box remains within a predetermined size range.

By filtering the static areas out of complex scene images with a motion-region detection algorithm based on frog-eye visual characteristics, this embodiment extracts the motion regions of moving targets rather accurately, reducing the computation of target matching and increasing its accuracy.

Embodiment 2

To ease understanding, the present invention is further described below with reference to Fig. 2. As shown in Fig. 2, the method mainly comprises the following steps:

Step 201: the first camera detects and tracks moving targets in the monitored scene area.

The background difference method can be used for moving-target detection. When the first camera detects a moving target in the monitored scene area, it judges whether a predetermined tracking condition is met; if so, the method proceeds to step 202. The predetermined tracking condition comprises: judging whether the tracking box with which the first camera monitors the moving target lies at the edge of the first camera's monitored scene area. Specifically: if the vertical or horizontal distance between the tracking box and the border of the first camera's frame is smaller than a predetermined value (for example, 3 pixels), the moving target is judged to be at the edge of the first camera's monitored scene area.
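The edge test can be sketched as follows; the `(x, y, w, h)` box representation, the frame size, and the default 3-pixel margin are illustrative assumptions rather than part of the patent text:

```python
def near_frame_edge(box, frame_w, frame_h, margin=3):
    """Return True when a tracking box (x, y, w, h) lies within `margin`
    pixels of the camera frame border, horizontally or vertically."""
    x, y, w, h = box
    return (x <= margin or y <= margin
            or frame_w - (x + w) <= margin
            or frame_h - (y + h) <= margin)

# A box whose right side is 2 px from the border of a 640x480 frame
# triggers the handoff; a box in the middle of the frame does not.
print(near_frame_edge((630, 200, 8, 20), 640, 480))   # True
print(near_frame_edge((300, 200, 40, 40), 640, 480))  # False
```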

Step 202: call an idle second camera to relay-track the moving target.

To facilitate relay tracking, several preset positions must be configured for each second camera in the scene area. For example, P preset positions are set at the vertices of the first camera's monitored scene area and at the centers of its top, bottom, left, and right edges.

When the predetermined tracking condition is met, an idle second camera is called and moved to the preset position nearest the moving target for relay tracking.

Step 203: filter out the static areas in the second camera's view with a motion-region detection algorithm based on frog-eye visual characteristics, and extract the images of the motion regions.

After the second camera reaches its preset position, it must stay steady there for a period of time (for example, 500 milliseconds) to capture the images used for matching the moving target with the first camera. This period can be set according to the actual scene, but it must be long enough for the target-matching computation to finish and short enough that the moving target does not leave the second camera's current monitoring scene.

Since the frog-eye visual system is sensitive to moving targets, this embodiment adopts a motion-region detection algorithm based on the inter-frame difference method to simulate the frog eye's visual characteristics, filtering the static targets out of the second camera's scene and extracting the motion regions of moving targets accurately.

Specifically: the three-frame difference variant of the inter-frame difference method performs difference operations on the two pairs of adjacent frames among three consecutive frames, yielding two gray-difference images:

D_{k-1,k}(x, y) = | f_{k-1}(x, y) - f_k(x, y) |;

D_{k,k+1}(x, y) = | f_{k+1}(x, y) - f_k(x, y) |;

where f_{k-1}(x, y), f_k(x, y), and f_{k+1}(x, y) are three consecutive frames, and D_{k-1,k}(x, y) and D_{k,k+1}(x, y) are the gray-difference images obtained by differencing the adjacent pairs of frames.

D_{k-1,k}(x, y) and D_{k,k+1}(x, y) are binarized with a threshold, giving the corresponding binary images B_{k-1,k}(x, y) and B_{k,k+1}(x, y).

The binary images B_{k-1,k}(x, y) and B_{k,k+1}(x, y) are then ANDed, giving the three-frame-difference binary image of the motion regions containing one or more moving targets:

D_k(x, y) = 1, if B_{k-1,k}(x, y) ∩ B_{k,k+1}(x, y) = 1;  D_k(x, y) = 0, otherwise.
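A minimal sketch of this three-frame difference, with frames held as nested lists of gray values; the threshold of 30 is an illustrative assumption:

```python
def three_frame_diff(f_prev, f_cur, f_next, thresh=30):
    """Three-frame difference: binarize |f_{k-1}-f_k| and |f_{k+1}-f_k|
    with a threshold, then AND them so only pixels that differ from both
    neighboring frames survive."""
    h, w = len(f_cur), len(f_cur[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            b1 = abs(f_prev[y][x] - f_cur[y][x]) > thresh   # B_{k-1,k}
            b2 = abs(f_next[y][x] - f_cur[y][x]) > thresh   # B_{k,k+1}
            out[y][x] = 1 if (b1 and b2) else 0             # D_k = B1 AND B2
    return out

# A bright blob present only in the middle frame is kept; static pixels are not.
f0 = [[0, 0], [0, 0]]
f1 = [[0, 200], [0, 0]]
f2 = [[0, 0], [0, 0]]
print(three_frame_diff(f0, f1, f2))  # [[0, 1], [0, 0]]
```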

The algorithm above yields the images of the motion regions in the second camera that contain one or more moving targets. A motion region in such an image, however, may be split into several pieces; therefore, to improve the accuracy of the subsequent target-matching algorithm, the one or more moving targets in those motion regions must be marked.

The main steps are as follows. First, continuous motion regions are extracted from the motion-region images with a morphological closing operation and resolution reduction: (1) the images of the motion regions containing one or more moving targets undergo morphological dilation, giving the dilated image D_n; (2) the resolution of D_n is reduced to give the image R_n; specifically, D_n is divided into Z×Z blocks, and if more than half of the pixels in a block have value 255, all pixels of that block are set to 255, otherwise to 0; (3) the image R_n undergoes morphological erosion, giving an image containing continuous motion regions.
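Of these sub-steps, the resolution-reduction rule in (2) is the least standard, so a sketch of just that majority-vote downsampling follows; the dilation and erosion of steps (1) and (3) are omitted, and the 0/255 binary convention follows the text:

```python
def downsample_majority(img, z):
    """Split a 0/255 binary image into z-by-z blocks; a block becomes 255
    only when more than half of its pixels are 255 (step (2) above)."""
    h, w = len(img), len(img[0])
    out = []
    for by in range(0, h, z):
        row = []
        for bx in range(0, w, z):
            block = [img[y][x]
                     for y in range(by, min(by + z, h))
                     for x in range(bx, min(bx + z, w))]
            row.append(255 if sum(p == 255 for p in block) > len(block) / 2 else 0)
        out.append(row)
    return out

img = [[255, 255,   0,   0],
       [255,   0,   0,   0],
       [  0,   0,   0, 255],
       [  0,   0, 255, 255]]
# Blocks with 3 of 4 foreground pixels survive; the others are cleared.
print(downsample_majority(img, 2))  # [[255, 0], [0, 255]]
```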

Second, connected regions are labeled. After the continuous motion regions are obtained, the connected regions still need to be labeled.

This embodiment uses a method that scans the binary image; a run of foreground pixels in any row of the binary image is represented by a line data structure.

If the runs Line_i and Line_o in two adjacent rows are 8-neighborhood connected, both of the following relations must hold simultaneously:

Line_i.m_1ColumnTail + 1 ≥ Line_o.m_1ColumnHead;

Line_o.m_1ColumnTail + 1 ≥ Line_i.m_1ColumnHead;

By scanning row by row, linking all connected runs into a linked list, and giving them a unified numeric label, the information of each connected region is obtained. From this information the centroid, area, and perimeter of each moving target can easily be computed for target classification or feature description.
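The run-based labeling can be sketched as follows. The `head`/`tail` fields stand in for the `m_1ColumnHead`/`m_1ColumnTail` members named above, union-find replaces the linked-list bookkeeping of the text, and counting the resulting regions is shown for brevity:

```python
def label_runs(binary):
    """Group the row runs of a 0/1 binary image into connected regions.
    Runs (row, head, tail) in adjacent rows are 8-connected when
    a.tail + 1 >= b.head and b.tail + 1 >= a.head; returns region count."""
    runs = []                                  # (row, head, tail)
    for r, line in enumerate(binary):
        c = 0
        while c < len(line):
            if line[c]:
                head = c
                while c < len(line) and line[c]:
                    c += 1
                runs.append((r, head, c - 1))
            else:
                c += 1

    parent = list(range(len(runs)))            # union-find forest
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]      # path compression
            i = parent[i]
        return i

    for i, (ri, hi, ti) in enumerate(runs):
        for j, (rj, hj, tj) in enumerate(runs[:i]):
            if ri - rj == 1 and ti + 1 >= hj and tj + 1 >= hi:
                parent[find(i)] = find(j)      # merge 8-connected runs
    return len({find(i) for i in range(len(runs))})

# Two-row blob, a one-column blob, and an isolated pixel: three regions.
print(label_runs([[1, 1, 0, 0, 1],
                  [0, 1, 0, 0, 1],
                  [0, 0, 0, 0, 0],
                  [1, 0, 0, 0, 0]]))  # 3
```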

Finally, the boxes are located.

The minimum bounding rectangle of each independent connected region is extracted and labeled, so that the moving targets and their corresponding motion regions can be distinguished.
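The box extraction can be sketched as follows, assuming a labeled region is available as a list of pixel coordinates; the `(x, y, width, height)` return convention is an assumption:

```python
def bounding_box(points):
    """Minimum axis-aligned bounding rectangle of a connected region,
    given its pixel coordinates as (x, y) pairs."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs) - min(xs) + 1, max(ys) - min(ys) + 1)

# Three pixels spanning x = 2..4 and y = 3..5 give a 3x3 box at (2, 3).
print(bounding_box([(2, 3), (4, 3), (3, 5)]))  # (2, 3, 3, 3)
```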

Step 204: locate the moving target.

The image obtained in step 203 is matched against the tracking-box image of a moving target selected in the first camera.

This embodiment uses a histogram matching method. First, the histograms W1 and W2 of the image obtained in step 203 and of the tracking-box image selected in the first camera are computed;

then W1 and W2 are converted into images with a specified probability density function, and the most similar region is found. Specifically:

for each pixel of the histograms W1 and W2 with pixel value r_k, that value is mapped to its corresponding gray level s_k; the gray level s_k is then mapped to the final gray level z_k.

The gray levels s_k and z_k can be computed as follows:

Let r and z be the gray levels of the image before and after processing, and let p_r(r) and p_z(z) be the corresponding continuous probability density functions; p_r(r) is estimated from the image before processing, and p_z(z) is the specified probability density function that the processed image is desired to have. Further, let s be a random variable with:

s = T(r) = ∫_0^r p_r(w) dw;    (1)

where w is the variable of integration. The discrete form is:

s_k = T(r_k) = Σ_{j=0}^{k} p_r(r_j) = Σ_{j=0}^{k} n_j / n,  k = 0, 1, 2, …, L-1    (2)

where n is the total number of pixels in the image, n_j is the number of pixels with gray level r_j, and L is the number of discrete gray levels.

Then define a random variable z with:

G(z) = ∫_0^z p_z(t) dt = s    (3)

where t is the variable of integration. The discrete form is:

v_k = G(z_k) = Σ_{i=0}^{k} p_z(z_i) = s_k,  k = 0, 1, 2, …, L-1    (4)

From equations (1) and (3), G(z) = s = T(r), so z must satisfy the condition:

z = G^{-1}(s) = G^{-1}[T(r)]    (5)

The transformation function T(r) is obtained from equation (1), and p_r(r) is estimated from the image before processing. The discrete expression is:

z_k = G^{-1}(s_k) = G^{-1}[T(r_k)],  k = 0, 1, 2, …, L-1    (6)

That is: first, equation (2) is used to precompute the mapped gray level s_k for each gray level r_k; then equation (4) is used to obtain the transformation function G from the p_z(z) with the specified density function; finally, equation (6) is used to precompute z_k for each value of s_k.
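This per-level precomputation can be sketched as follows; representing the histograms as plain lists of per-level pixel counts, and resolving G^{-1} as "smallest level whose cumulative sum reaches s_k", are illustrative assumptions:

```python
def histogram_specify(src_hist, ref_hist):
    """Histogram specification per equations (2), (4), (6): build the
    source CDF T (eq. 2) and the reference CDF G (eq. 4), then map each
    source gray level r_k to z_k = G^{-1}(s_k) (eq. 6)."""
    L = len(src_hist)
    n_src, n_ref = sum(src_hist), sum(ref_hist)
    T, G = [], []
    acc = 0.0
    for c in src_hist:               # s_k = sum_{j<=k} n_j / n
        acc += c / n_src
        T.append(acc)
    acc = 0.0
    for c in ref_hist:               # G(z_k) = sum_{t<=k} p_z(z_t)
        acc += c / n_ref
        G.append(acc)
    mapping = []
    for s in T:                      # z_k: smallest level with G(z) >= s_k
        z = 0
        while z < L - 1 and G[z] < s - 1e-9:
            z += 1
        mapping.append(z)
    return mapping

# Source mass piled at levels 0 and 3, mapped toward a uniform reference.
print(histogram_specify([4, 0, 0, 4], [2, 2, 2, 2]))  # [1, 1, 1, 3]
```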

After the above steps have been applied to the histograms W1 and W2, matching of the moving target is complete; that is, the moving target selected by the user in the first camera is located in the second camera.

Step 205: relay tracking.

The region located in the second camera serves as the region of the initial tracking box for relay-tracking the moving target. For example, an active tracking algorithm built around the mean-shift (MeanShift) method can control the second camera's motion so that the moving target always stays in the central area of the second camera's scene and the tracking box remains within a predetermined size range.
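The centroid-seeking core of a mean-shift iteration can be sketched as follows; the weight map (for example, similarity or back-projection scores per pixel) and the square window are assumptions, and the pan-tilt control of the real camera is omitted:

```python
def mean_shift(weights, cx, cy, half, iters=10):
    """Plain mean-shift loop: repeatedly move the window center to the
    weighted centroid of the scores inside a (2*half+1)-sized window."""
    h, w = len(weights), len(weights[0])
    for _ in range(iters):
        sx = sy = sw = 0.0
        for y in range(max(0, cy - half), min(h, cy + half + 1)):
            for x in range(max(0, cx - half), min(w, cx + half + 1)):
                sw += weights[y][x]
                sx += x * weights[y][x]
                sy += y * weights[y][x]
        if sw == 0:
            break                      # no mass inside the window
        nx, ny = round(sx / sw), round(sy / sw)
        if (nx, ny) == (cx, cy):
            break                      # converged
        cx, cy = nx, ny
    return cx, cy

# The window drifts from (1, 1) onto the single mass at (3, 3).
weights = [[0] * 5 for _ in range(5)]
weights[3][3] = 5
print(mean_shift(weights, 1, 1, half=2))  # (3, 3)
```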

By filtering the static areas out of complex scene images, this embodiment extracts the motion regions of moving targets rather accurately, reducing the computation of target matching and increasing its accuracy.

From the description of the embodiments above, those skilled in the art can clearly understand that the above embodiments can be implemented in software, or in software plus the necessary general-purpose hardware platform. On this understanding, the technical solutions of the above embodiments can be embodied as a software product, which can be stored in a non-volatile storage medium (a CD-ROM, USB flash drive, removable hard disk, and so on) and includes instructions that cause a computer device (a personal computer, server, network device, and so on) to execute the methods described in the embodiments of the present invention.

What is described above is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited to it. Any change or substitution readily conceivable by a person familiar with the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. The protection scope of the present invention shall therefore be determined by the protection scope of the claims.

Claims (7)

Translated from Chinese
1.一种基于蛙眼视觉特性定位运动目标的方法,其特征在于,包括:1. A method for locating a moving target based on frog eye visual characteristics, characterized in that, comprising:采用基于帧间差法的运动区域检测算法,模拟蛙眼视觉系统对运动目标敏感的特性来提取图像中的运动区域,具体的:利用帧间差分法对用于进行接力跟踪的第二摄像机采集到的序列图像中的相邻帧做差分运算,获得包含一个或多个运动目标的若干运动区域的图像;Using the motion area detection algorithm based on the frame difference method, the moving area in the image is extracted by simulating the sensitivity of the frog eye visual system to the moving target. Specifically: use the frame difference method to collect data from the second camera used for relay tracking Adjacent frames in the received sequence images are differentially calculated to obtain images of several moving regions containing one or more moving objects;将所述包含一个或多个运动目标的若干运动区域的图像与第一摄像机中选定的运动目标跟踪框图像进行直方图匹配,从该若干运动区域图像中寻找最为相似的区域,该区域则为选定的运动目标在第二摄像机中的位置区域;performing histogram matching on the images of several moving regions containing one or more moving objects and the selected moving object tracking frame image in the first camera, and finding the most similar region from the several moving region images, and then is the position area of the selected moving target in the second camera;所述将所述包含一个或多个运动目标的若干运动区域的图像与第一摄像机中选定的运动目标跟踪框图像进行直方图匹配,从所述若干运动区域图像中寻找最为相似的区域的步骤包括:分别计算所述包含一个或多个运动目标的若干运动区域的图像与第一摄像机的选定运动目标跟踪框图像的直方图W1与W2;将上述直方图W1与W2转换为具有规定概率密度函数的图像,并找出最为相似的区域;performing histogram matching on the images of several moving regions containing one or more moving objects and the moving object tracking frame image selected in the first camera, and finding the most similar region from the several moving region images The steps include: respectively calculating the histograms W1 and W 2 of the images of several moving regions containing one or more moving objects and the image of the selected moving object tracking frame of the first camera; combining the above histograms W 1andW2 Convert to an image with a prescribed probability density function and find the most similar 
regions;其中,所述将上述直方图W1与W2转换为具有规定概率密度函数的图像的步骤包括:Wherein, the step of converting the above- mentioned histograms W1 and W2 into images witha prescribed probability density function includes:对于直方图W1与W2的每个像素,若像素值为rk,将该值映射到其对应的灰度级sk;再映射灰度级sk到最终的灰度级zkFor each pixel of the histograms W1 and W2 , if the pixel value is rk , map this value to its corresponding gray level sk ; then map the gray level sk to the final gray level zk ;具体的:对每一灰度级rk预计算映射灰度级skSpecifically: for each gray level rk precompute the mapped gray level sk :sthe skk==TT((rrkk))==ΣΣjj==00kkpprr((rrjj))==ΣΣjj==00jjnnojjnno,,kk==00,,11,,22......,,LL--11;;其中,pr(rj)为对应的连续概率密度函数,n为图像中像素的数量和,nj为灰度级rj的像素数量,L为离散灰度级的数量;Among them, pr (rj ) is the corresponding continuous probability density function, n is the sum of the number of pixels in the image, nj is the number of pixels of gray level rj , and L is the number of discrete gray levels;利用具有预定规律密度函数的Pz(z)得到变换函数G:The transformation function G is obtained by using Pz (z) with a predetermined regular density function:G(zk)=Σt=0kpz(zt)=sk,k=0,1,2,...,L-1,其中,zk为最终的灰度级;G ( z k ) = Σ t = 0 k p z ( z t ) = the s k , k = 0 , 1 , 2 , ... , L - 1 , Among them, zk is the final gray level;对每一个sk值预计算zk:zk=G-1[T(r)],k=0,1,2...,L-1;Precalculate zk for each sk value: zk =G-1 [T(r)],k=0,1,2...,L-1;所述从该运动区域中寻找最为相似的区域之后还包括:The search for the most similar region from the motion region also includes:将该区域作为第二摄像机的初始跟踪框所在区域;Use this area as the area where the initial tracking frame of the second camera is located;利用均值飘移MeanShift算法控制所述第二摄像机运动,使该运动目标始终位于第二摄像机场景的中央区域,且跟踪框的大小保持在预定的范围之内。The movement of the second camera is controlled by using the MeanShift algorithm, so that the moving target is always located in the central area of the scene of the second camera, and the size of the tracking frame is kept within a predetermined range.2.根据权利要求1所述的方法,其特征在于,所述利用帧间差分法对用于进行接力跟踪的第二摄像机采集到的序列图像中的相邻帧做差分运算,获得包含一个或多个运动目标的若干运动区域的图像的步骤包括:2. 
The method according to claim 1, characterized in that, using the inter-frame difference method to perform a difference operation on adjacent frames in the sequence images collected by the second camera for relay tracking, to obtain one or more The steps of the images of several moving regions of a plurality of moving targets include:利用三帧差分法对连续的三帧图像中相邻的两帧图像分别进行差分运算,获得两个灰度差图像:Use the three-frame difference method to perform difference operations on two adjacent frames of images in three consecutive frames to obtain two grayscale difference images:Dk-1,k(x,y)=|fk-1(x,y)-fk(x,y)|;Dk-1,k (x,y)=|fk-1 (x,y)-fk (x,y)|;Dk,k+1(x,y)=|fk+1(x,y)-fk(x,y)|;Dk,k+1 (x,y)=|fk+1 (x,y)-fk (x,y)|;其中,fk-1(x,y),fk(x,y)和fk+1(x,y)为连续的三帧图像;Dk-1,k(x,y)与Dk,k+1(x,y)为相邻两帧图像进行差分运算后得到的灰度差图像;Among them, fk-1 (x, y), fk (x, y) and fk+1 (x, y) are three consecutive frames of images; Dk-1, k (x, y) and Dk , k+1 (x, y) is the grayscale difference image obtained after differential operation of two adjacent frames of images;利用阈值对Dk-1,k(x,y)与Dk,k+1(x,y)进行二值化,获得对应的二值化图像Bk-1,k(x,y)和Bk,k+1(x,y);Use the threshold to binarize Dk-1,k (x,y) and Dk,k+1 (x,y) to obtain the corresponding binarized image Bk-1,k (x,y) and Bk,k+1 (x,y);将二值化图像Bk-1,k(x,y)和Bk,k+1(x,y)进行相与运算,获得包含一个或多个运动目标的若干运动区域的三帧差二值图像Perform a phase-AND operation on the binarized image Bk-1,k (x,y) and Bk,k+1 (x,y) to obtain the three-frame difference two of several moving areas containing one or more moving objects value imageDD.sthe skk((xx,,ythe y))==11,,BBkk--11,,kk((xx,,ythe y))∩∩BBkk,,kk++11((xx,,ythe y))==1100,,BBkk--11,,kk((xx,,ythe y))∩∩BBkk,,kk++11((xx,,ythe y))≠≠11..3.根据权利要求2所述的方法,其特征在于,该方法还包括:标记所述若干运动区域中的一个或多个运动目标;3. 
3. The method according to claim 2, further comprising: marking the one or more moving targets in the several moving regions;

specifically: extracting continuous moving regions from the images of the several moving regions containing one or more moving targets, by means of a morphological closing operation and resolution reduction;

scanning the image of the extracted continuous moving regions line by line with a binary image containing S straight lines, linking all connected lines into a linked list and giving them uniform numeric labels, to obtain the information of the connected regions, where S is a positive integer;

extracting the minimum bounding rectangle of each independent connected region and labeling it.

4. The method according to claim 3, wherein the step of extracting continuous moving regions from the images of the several moving regions containing one or more moving targets, by means of a morphological closing operation and resolution reduction, includes:

applying morphological dilation to the images of the several moving regions containing one or more moving targets, to obtain the dilated image D_n;

reducing the resolution of D_n to obtain the image R_n, specifically: dividing D_n into Z×Z sub-blocks; if more than half of the pixels in a sub-block have the value 255, setting all pixels of that sub-block to 255, and otherwise to 0;

applying morphological erosion to the image R_n, to obtain an image containing the continuous moving regions.
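The Z×Z block-majority step of claim 4 (the resolution-reduction stage between dilation and erosion) can be sketched as below; `reduce_resolution` is a hypothetical name, and the sketch assumes the image dimensions are divisible by Z.

```python
def reduce_resolution(binary_img, Z):
    """Block-wise majority vote over Z x Z sub-blocks.

    If more than half of a sub-block's pixels are 255, the whole
    sub-block is set to 255; otherwise it is set to 0.
    """
    rows, cols = len(binary_img), len(binary_img[0])
    out = [[0] * cols for _ in range(rows)]
    for bx in range(0, rows, Z):
        for by in range(0, cols, Z):
            # count foreground (255) pixels in this Z x Z sub-block
            ones = sum(1 for x in range(bx, bx + Z)
                         for y in range(by, by + Z)
                         if binary_img[x][y] == 255)
            value = 255 if ones > Z * Z // 2 else 0
            for x in range(bx, bx + Z):
                for y in range(by, by + Z):
                    out[x][y] = value
    return out
```

The vote fills small holes and drops isolated noise pixels, so the subsequent erosion operates on solid, contiguous blobs.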
5. The method according to claim 1, wherein, before the difference operation is performed by the inter-frame difference method on adjacent frames of the sequence of images captured by the second camera used for relay tracking, the method further includes:

detecting and tracking the moving target with the first camera; when the first camera has detected a moving target, judging whether a predetermined tracking condition is met, and if so, calling an idle second camera to carry out relay tracking of the moving target.

6. The method according to claim 5, wherein judging whether the predetermined tracking condition is met includes:

judging whether the position of the tracking frame used by the first camera to monitor the moving target lies at the edge of the scene area monitored by the first camera; specifically: if the vertical or horizontal distance between the tracking frame and the border of the first camera's image is smaller than a predetermined value, the moving target is judged to be at the edge of the scene area monitored by the first camera.

7. The method according to claim 5 or 6, wherein calling an idle second camera to carry out relay tracking of the moving target includes:

providing, at the vertices of the scene area monitored by the first camera and at the centers of its top, bottom, left and right edges, P preset positions for relay tracking by the second camera; when the predetermined tracking condition is met, calling an idle second camera and moving it to the preset position closest to the moving target for relay tracking.
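The hand-off logic of claims 6 and 7 (edge test, then nearest-preset selection) might look like the sketch below; all names and the coordinate convention (top-left image origin, tracking frame given as x, y, width, height) are assumptions for illustration, not details fixed by the patent.

```python
def at_scene_edge(frame_x, frame_y, frame_w, frame_h, img_w, img_h, margin):
    """Claim 6's trigger: the tracking frame is deemed to be at the edge
    of the first camera's scene when its vertical or horizontal distance
    to the image border drops below `margin` (the predetermined value)."""
    return (frame_x < margin or frame_y < margin
            or img_w - (frame_x + frame_w) < margin
            or img_h - (frame_y + frame_h) < margin)

def nearest_preset(target, presets):
    """Claim 7's selection: move the idle second camera to the preset
    position closest to the moving target (squared Euclidean distance)."""
    tx, ty = target
    return min(presets, key=lambda p: (p[0] - tx) ** 2 + (p[1] - ty) ** 2)
```

In claim 7 the presets would sit at the four corners and the four edge midpoints of the first camera's monitored area; the second function simply picks whichever of those P positions is nearest to the target when the edge condition fires.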
CN201210574497.9A | 2012-12-26 | 2012-12-26 | A kind of based on frogeye visual characteristic setting movement order calibration method | Active | CN103077533B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201210574497.9A | 2012-12-26 | 2012-12-26 | A kind of based on frogeye visual characteristic setting movement order calibration method

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201210574497.9A | 2012-12-26 | 2012-12-26 | A kind of based on frogeye visual characteristic setting movement order calibration method

Publications (2)

Publication Number | Publication Date
CN103077533A | 2013-05-01
CN103077533B | 2016-03-02

Family

ID=48154052

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201210574497.9A (Active, granted as CN103077533B) | A kind of based on frogeye visual characteristic setting movement order calibration method | 2012-12-26 | 2012-12-26

Country Status (1)

Country | Link
CN | CN103077533B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN105791687A (en)* | 2016-03-04 | 2016-07-20 | 苏州卓视蓝电子科技有限公司 | Frogeye bionic detection method and frogeye bionic camera
CN107844734B (en)* | 2016-09-19 | 2020-07-07 | 杭州海康威视数字技术股份有限公司 | Monitoring target determination method and device and video monitoring method and device
CN107133969B (en)* | 2017-05-02 | 2018-03-06 | 中国人民解放军火箭军工程大学 | A kind of mobile platform moving target detecting method based on background back projection
CN109767454B (en)* | 2018-12-18 | 2022-05-10 | 西北工业大学 | Unmanned aerial vehicle aerial video moving target detection method based on time-space-frequency significance
CN111885301A (en)* | 2020-06-29 | 2020-11-03 | 浙江大华技术股份有限公司 | Gun and ball linkage tracking method and device, computer equipment and storage medium
CN119649454B (en)* | 2024-11-27 | 2025-05-13 | 湖南大学 | Bullfrog biological behavior recognition optimization system based on deep learning

Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN101572803A (en)* | 2009-06-18 | 2009-11-04 | 中国科学技术大学 | Customizable automatic tracking system based on video monitoring
CN101883261A (en)* | 2010-05-26 | 2010-11-10 | 中国科学院自动化研究所 | Method and system for abnormal target detection and relay tracking in large-scale monitoring scenarios
CN102289822A (en)* | 2011-09-09 | 2011-12-21 | 南京大学 | Method for tracking moving target collaboratively by multiple cameras
CN102509088A (en)* | 2011-11-28 | 2012-06-20 | Tcl集团股份有限公司 | Hand motion detecting method, hand motion detecting device and human-computer interaction system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US7058204B2 (en)* | 2000-10-03 | 2006-06-06 | Gesturetek, Inc. | Multiple camera control system

Also Published As

Publication number | Publication date
CN103077533A (en) | 2013-05-01

Similar Documents

Publication | Title
JP6723247B2 (en) | Target acquisition method and apparatus
CN103824070B (en) | A kind of rapid pedestrian detection method based on computer vision
CN103077539B (en) | Motion target tracking method under a kind of complex background and obstruction conditions
CN112232349A (en) | Model training method, image segmentation method and device
CN104978567B (en) | Vehicle checking method based on scene classification
CN103077533B (en) | A kind of based on frogeye visual characteristic setting movement order calibration method
CN102867349B (en) | People counting method based on elliptical ring template matching
US20130243343A1 (en) | Method and device for people group detection
CN108647649A (en) | The detection method of abnormal behaviour in a kind of video
CN104660994B (en) | Maritime affairs dedicated video camera and maritime affairs intelligent control method
CN103093198B (en) | A kind of crowd density monitoring method and device
WO2023273010A1 (en) | High-rise littering detection method, apparatus, and device, and computer storage medium
CN101141633A (en) | A Moving Object Detection and Tracking Method in Complex Scenes
CN109711256B (en) | Low-altitude complex background unmanned aerial vehicle target detection method
CN104835147A (en) | Method for detecting crowded people flow in real time based on three-dimensional depth map data
CN102930248A (en) | Crowd abnormal behavior detection method based on machine learning
CN107330922A (en) | Video moving object detection method of taking photo by plane based on movable information and provincial characteristics
CN114332781A (en) | Intelligent license plate recognition method and system based on deep learning
CN100382600C (en) | Moving Object Detection Method in Dynamic Scene
CN106530310A (en) | Pedestrian counting method and device based on human head top recognition
CN113177439B (en) | Pedestrian crossing road guardrail detection method
CN104036250A (en) | Video pedestrian detecting and tracking method
CN105678213A (en) | Dual-mode masked man event automatic detection method based on video characteristic statistics
CN118864537B (en) | A method, device and equipment for tracking moving targets in video surveillance
CN105469054A (en) | Model construction method of normal behaviors and detection method of abnormal behaviors

Legal Events

Code | Title
C06 | Publication
PB01 | Publication
C10 | Entry into substantive examination
SE01 | Entry into force of request for substantive examination
C14 | Grant of patent or utility model
GR01 | Patent grant
