CN101950352B - Target detection method capable of removing illumination influence and device thereof - Google Patents

Target detection method capable of removing illumination influence and device thereof

Info

Publication number
CN101950352B
Authority
CN
China
Prior art keywords
gradient
pixel
background image
current frame
foreground
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN2010101951491A
Other languages
Chinese (zh)
Other versions
CN101950352A (en)
Inventor
杨学超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netposa Technologies Ltd
Original Assignee
Beijing Zanb Science & Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zanb Science & Technology Co Ltd
Priority to CN2010101951491A (granted as CN101950352B, en)
Publication of CN101950352A
Application granted
Publication of CN101950352B
Legal status: Active (current)
Anticipated expiration


Abstract

The invention provides a target detection method capable of removing illumination influence, comprising the following steps: a background image is created; the gradients of the current frame image and the background image are calculated and output, each gradient comprising a horizontal gradient and a vertical gradient; the directions and amplitudes of the gradients of the current frame image and the background image are compared, and a foreground outline is extracted and output according to the comparison result; and the extracted foreground outline is filled to obtain a foreground block, and noise is filtered out to output the target. The invention also provides a target detection device capable of removing illumination influence. Compared with the prior art, the target detection method and device of the invention effectively solve the problem that detected targets are inaccurate and unreliable because of illumination influence in target detection.

Description

Object detection method and device for removing illumination effects
Technical field
The present invention relates to an object detection method and device, and in particular to an object detection method and device for removing illumination effects. It belongs to the fields of image processing and video surveillance.
Background art
Moving object detection is the foundation of intelligent video surveillance. Its results directly affect the false-alarm and missed-alarm rates of later event detection (such as intrusion, abandoned objects, stolen objects, and vehicles driving in the wrong direction), and it has therefore received wide attention. In practice, however, illumination changes occur frequently and greatly reduce the accuracy and reliability of moving object detection. Research into object detection methods that remove illumination effects is therefore needed.
Current object detection methods for removing illumination effects fall mainly into two categories. The first is pixel-based. In general, an illumination change alters only pixel brightness while color changes little, and pixel-based methods analyze pixel values in the HSI space on this principle to identify illumination-change pixels. However, many real environments do not satisfy this assumption; in most outdoor scenes neither the background nor the target carries much color information, so these algorithms perform poorly in practice. The second category is region-based. If the scene keeps a certain contrast before and after the illumination change, the change does not alter the texture and edge features of the image; region-based methods exploit this by matching the edges of a foreground region against the background, and a foreground region whose edges match the background is regarded as a false foreground caused by the illumination change. However, the assumption that the scene keeps a certain contrast before and after the illumination change does not hold at night, where these algorithms fail. Moreover, if a real target enters the illumination-change region at the same time, the matching also fails.
Chinese patent application publication No. CN 101393603A discloses a method for identifying and detecting tunnel fire flames. That application uses a gamma transformation to remove unwanted illumination, but the method is computationally complex and its reliability is low.
In summary, there is an urgent need for a simple and effective object detection method that removes illumination effects.
Summary of the invention
The objective of the invention is to solve the problem that detected targets are inaccurate and unreliable because of illumination effects in target detection. To achieve this objective, the invention provides an object detection method for removing illumination effects, said method comprising the following steps:
Step 101: establish a background image;
Step 102: calculate and output the gradient of the current frame image and the gradient of said background image, said gradient comprising a horizontal gradient and a vertical gradient;
Step 103: compare the direction and amplitude of the gradient of said current frame image with those of the gradient of said background image, and extract and output a foreground contour accordingly; and
Step 104: fill the extracted foreground contour to obtain a foreground blob, and filter out noise to output the target.
Preferably, in step 101, a statistical method is used to establish the background image. The statistical method performs a statistical analysis on each pixel (x, y) of the images collected over a period of time t (that is, the gray value of pixel (x, y) is simply counted and accumulated), and selects the stable gray value of pixel (x, y) over that period (the gray value of pixel (x, y) that occurs most often is taken as the stable gray value) as the gray value of the corresponding pixel (x, y) in the background image. By collecting the stable gray value of every pixel of the images acquired over period t, the background image is obtained.
Preferably, in step 102, a gradient operator is used to calculate the horizontal and vertical gradients of said current frame image and the horizontal and vertical gradients of said background image. The gradient operator is the Robert operator or the Sobel operator; both are gradient algorithms in common image processing and are not described further here.
Preferably, step 103 comprises the following steps:
Step 1031: calculate the gradient magnitude A1 and gradient direction θ1 of each pixel in said current frame image and the gradient magnitude A2 and gradient direction θ2 of each pixel in said background image from the horizontal and vertical gradients of said current frame image and of said background image output by step 102;
Step 1032: if the gradient magnitude A1 of pixel (x, y) in said current frame image and the gradient magnitude A2 of the same pixel (x, y) in said background image are both >= a first threshold T1, proceed to step 1033; if A1 and A2 are both <= a second threshold T2, regard pixel (x, y) as a noise point; otherwise calculate |A1 - A2|, and if |A1 - A2| >= a third threshold T3, regard pixel (x, y) as belonging to the foreground; wherein the first threshold T1 ∈ [8, 12], the second threshold T2 ∈ [3, 5], and the third threshold T3 ∈ [4, 6];
Step 1033: calculate the absolute difference |θ1 - θ2| between the gradient direction θ1 of pixel (x, y) in said current frame image and the gradient direction θ2 of the same pixel (x, y) in said background image; if |θ1 - θ2| >= a fourth threshold T4, regard pixel (x, y) as belonging to the foreground; wherein the fourth threshold T4 ∈ [18°, 22°]; and
Step 1034: extract all pixels belonging to the foreground, thereby obtaining the foreground contour.
Preferably, step 104 comprises the following steps:
Step 1041: fill the foreground contour output by step 103 to obtain a foreground blob;
Step 1042: calculate the difference image between the current frame image and the background image, and apply threshold segmentation to the difference image to obtain the changed foreground in the difference image; and
Step 1043: apply an AND operation to said foreground blob and said changed foreground, taking the pixels that belong to both said foreground blob and said changed foreground as target points, so as to obtain and output the target.
In addition, the present invention also provides an object detection device for removing illumination effects, said device comprising: a background establishing unit for establishing a background image with a statistical method; a gradient calculation unit for calculating and outputting the gradient of the current frame image and the gradient of the background image, said gradient comprising a horizontal gradient and a vertical gradient; a foreground contour extraction unit for comparing the direction and amplitude of the gradient of said current frame image with those of the gradient of said background image and extracting and outputting a foreground contour accordingly; and a target acquisition unit for filling the extracted foreground contour to obtain a foreground blob and filtering out noise to output the target.
Compared with the prior art, the object detection method and device of the present invention can accurately detect targets free of illumination effects, effectively solving the problem that detected targets are inaccurate and unreliable because of illumination effects in target detection.
Description of drawings
Fig. 1 is a flow chart of the object detection method for removing illumination effects according to the present invention;
Fig. 2 is a flow chart of step 103 of the object detection method according to the present invention;
Fig. 3 is a flow chart of step 104 of the object detection method according to the present invention;
Fig. 4 is a block diagram of the object detection device for removing illumination effects according to the present invention.
Embodiment
To enable the examiner to further understand the structure, features and other objects of the present invention, the preferred embodiments are described in detail below with reference to the accompanying drawings. The illustrated preferred embodiments serve only to explain the technical solution of the present invention and do not limit it.
The object detection method for removing illumination effects provided by the present invention is mainly used to solve the problem that target detection in a surveillance scene is inaccurate and unreliable because of illumination effects.
As shown in Fig. 1, Fig. 1 is a flow chart of the object detection method for removing illumination effects according to the present invention. As can be seen from Fig. 1, the method comprises the following steps:
Step 101: establish a background image;
Step 102: calculate and output the gradient of the current frame image and the gradient of said background image, said gradient comprising a horizontal gradient and a vertical gradient;
Step 103: compare the direction and amplitude of the gradient of said current frame image with those of the gradient of said background image, and extract and output a foreground contour accordingly; and
Step 104: fill the extracted foreground contour to obtain a foreground blob, and filter out noise to output the target.
The background image established in step 101 may be the starting frame image or a specific still image. However, to ensure the stability and accuracy of the background image, step 101 preferably establishes the background image with a statistical method. The statistical method is implemented as follows: a statistical analysis is performed on each pixel (x, y) of the images collected over a period of time t (that is, the gray value of pixel (x, y) is simply counted and accumulated), and the stable gray value of pixel (x, y) over that period (the gray value of pixel (x, y) that occurs most often is taken as the stable gray value) is selected as the gray value of the corresponding pixel (x, y) in the background image. By collecting the stable gray value of every pixel of the images acquired over period t, the background image is obtained.
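As an illustration only, the following Python sketch builds such a background image from a stack of 8-bit grayscale frames by keeping, for every pixel, the gray value that occurs most often over the collection period t. The function name and the use of NumPy are assumptions made for this sketch, not part of the patent.

```python
import numpy as np

def build_background(frames):
    """Per-pixel mode (most frequent gray value) over a stack of
    8-bit grayscale frames with shape (t, height, width)."""
    frames = np.asarray(frames, dtype=np.uint8)
    t, height, width = frames.shape
    background = np.empty((height, width), dtype=np.uint8)
    for y in range(height):
        for x in range(width):
            # Count how often each gray value 0..255 appears at (x, y)
            counts = np.bincount(frames[:, y, x], minlength=256)
            background[y, x] = np.argmax(counts)
    return background
```

A straightforward alternative would be a running per-pixel histogram updated frame by frame, which avoids holding all t frames in memory.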
In step 102, a gradient operator can be used to calculate the horizontal and vertical gradients of the current frame image and the horizontal and vertical gradients of the background image. The gradient operator is preferably the Robert operator or the Sobel operator. For example, a 3 × 3 Robert operator can be used to calculate the horizontal and vertical gradients of the current frame image and the horizontal and vertical gradients of the background image.
Computing the horizontal and vertical gradients of an image with the 3 × 3 Robert operator means computing, for each pixel of the image, the horizontal and vertical differences corresponding to the operator's horizontal and vertical templates. For example, the 3 × 3 horizontal template can be chosen as:

-1  0  1
-2  0  2
-1  0  1

and the vertical template as:

-1 -2 -1
 0  0  0
 1  2  1

(rows of the templates correspond to y-1, y, y+1 and columns to x-1, x, x+1). The horizontal gradient S_H(x, y) and vertical gradient S_V(x, y) of pixel (x, y) are then:

S_H(x, y) = (f(x+1, y-1) + 2f(x+1, y) + f(x+1, y+1)) - (f(x-1, y-1) + 2f(x-1, y) + f(x-1, y+1))
S_V(x, y) = (f(x-1, y+1) + 2f(x, y+1) + f(x+1, y+1)) - (f(x-1, y-1) + 2f(x, y-1) + f(x+1, y-1))

where f(x, y) denotes the gray value of pixel (x, y).
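The two directional differences above can be evaluated with simple array slicing; the following Python sketch is only an illustration of these formulas (border pixels are left at zero), and the function name is a convenience invented here, not taken from the patent.

```python
import numpy as np

def directional_gradients(image):
    """Horizontal gradient S_H and vertical gradient S_V of a grayscale
    image, following the 3x3 templates above. Arrays are indexed [y, x];
    border pixels are left at zero."""
    f = np.asarray(image, dtype=np.float64)
    s_h = np.zeros_like(f)
    s_v = np.zeros_like(f)
    # S_H: weighted column x+1 minus weighted column x-1
    s_h[1:-1, 1:-1] = (f[:-2, 2:] + 2 * f[1:-1, 2:] + f[2:, 2:]) \
                    - (f[:-2, :-2] + 2 * f[1:-1, :-2] + f[2:, :-2])
    # S_V: weighted row y+1 minus weighted row y-1
    s_v[1:-1, 1:-1] = (f[2:, :-2] + 2 * f[2:, 1:-1] + f[2:, 2:]) \
                    - (f[:-2, :-2] + 2 * f[:-2, 1:-1] + f[:-2, 2:])
    return s_h, s_v
```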
As shown in Fig. 2, Fig. 2 is a flow chart of step 103 of the object detection method according to the present invention. As can be seen from Fig. 2, step 103 may comprise the following steps:
Step 1031: calculate the gradient magnitude A1 and gradient direction θ1 of each pixel in the current frame image and the gradient magnitude A2 and gradient direction θ2 of each pixel in the background image from the horizontal and vertical gradients of the current frame image and of the background image output by step 102.
The gradient magnitude A(x, y) and gradient direction θ(x, y) of a pixel (x, y) in an image are computed as follows:

A(x, y) = sqrt(S_H²(x, y) + S_V²(x, y))
θ(x, y) = arctan(S_V(x, y) / S_H(x, y))
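Given S_H and S_V from the previous sketch, the two maps can be computed element-wise. In the sketch below, np.arctan2 replaces the plain arctangent only to avoid division by zero where S_H is zero; this is an implementation convenience of the sketch, not part of the patent's formula.

```python
import numpy as np

def magnitude_and_direction(s_h, s_v):
    """Gradient magnitude A(x, y) and direction theta(x, y) in degrees."""
    amplitude = np.hypot(s_h, s_v)                 # sqrt(S_H^2 + S_V^2)
    direction = np.degrees(np.arctan2(s_v, s_h))   # arctan(S_V / S_H), quadrant-aware
    return amplitude, direction
```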
Step 1032: if the gradient magnitude A1 of pixel (x, y) in the current frame image and the gradient magnitude A2 of the same pixel (x, y) in the background image are both greater than or equal to the first threshold T1, proceed to step 1033; if A1 and A2 are both smaller than or equal to the second threshold T2, regard pixel (x, y) as a noise point; otherwise calculate |A1 - A2|, and if |A1 - A2| is greater than or equal to the third threshold T3, regard pixel (x, y) as belonging to the foreground. Preferably, the first threshold T1 ∈ [8, 12], the second threshold T2 ∈ [3, 5], and the third threshold T3 ∈ [4, 6].
Step 1033: calculate the absolute difference |θ1 - θ2| between the gradient direction θ1 of pixel (x, y) in the current frame image and the gradient direction θ2 of the same pixel (x, y) in the background image; if |θ1 - θ2| is greater than or equal to the fourth threshold T4, regard pixel (x, y) as belonging to the foreground. Preferably, the fourth threshold T4 ∈ [18°, 22°].
Step 1034: extract all pixels belonging to the foreground, thereby obtaining the foreground contour.
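Steps 1032 to 1034 amount to the per-pixel decision rule sketched below. The default threshold values are picked from the middle of the preferred ranges above; the function itself is an illustrative sketch, not the patent's reference implementation.

```python
import numpy as np

def foreground_contour(a1, th1, a2, th2, t1=10, t2=4, t3=5, t4=20.0):
    """Boolean mask of foreground-contour pixels from the current-frame
    gradient maps (a1, th1) and background gradient maps (a2, th2)."""
    both_strong = (a1 >= t1) & (a2 >= t1)   # step 1032: go on to the direction test
    both_weak = (a1 <= t2) & (a2 <= t2)     # step 1032: noise points, discarded
    # Step 1033: where both gradients are strong, compare directions
    by_direction = both_strong & (np.abs(th1 - th2) >= t4)
    # Remaining pixels: compare magnitudes
    remaining = ~both_strong & ~both_weak
    by_magnitude = remaining & (np.abs(a1 - a2) >= t3)
    return by_direction | by_magnitude      # step 1034: contour pixels
```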
As shown in Fig. 3, Fig. 3 is a flow chart of step 104 of the object detection method according to the present invention. As can be seen from Fig. 3, step 104 may comprise the following steps:
Step 1041: fill the foreground contour output by step 103 to obtain a foreground blob. There are many contour-filling methods. The horizontal scanning method, for example, may proceed as follows: taking the bounding rectangle of each foreground contour as the object, start scanning from the first row of the rectangle, from left to right; find the first contour point of the row (the leftmost contour point) and the last contour point (the rightmost contour point), and set all pixels between these two points as foreground points; continue until the row is finished, then scan the next row, and so on until the last row. The blob formed by all the scanned points and foreground points is the foreground blob.
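A minimal sketch of this horizontal scanning fill is given below. It assumes the bounding rectangle of a single foreground contour has already been cut out (for example from a connected-component pass, which is not shown), and the function name is a convenience invented here.

```python
import numpy as np

def fill_rows(contour_mask):
    """Fill each row of a boolean contour mask between its leftmost and
    rightmost contour pixel (the horizontal scanning method)."""
    filled = contour_mask.copy()
    for y in range(contour_mask.shape[0]):
        cols = np.flatnonzero(contour_mask[y])
        if cols.size >= 2:
            filled[y, cols[0]:cols[-1] + 1] = True
    return filled
```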
Step 1042: calculate the difference image between the current frame image and the background image, and apply threshold segmentation to the difference image to obtain the changed foreground in the difference image.
Thresholding is a method of segmenting the pixels of an image according to a threshold. There are many ways to choose the threshold, including one-dimensional and two-dimensional thresholds. Taking a simple one-dimensional fixed threshold as an example: if the gray value of a point in the difference image is greater than the preset threshold, it is marked '1' to denote a foreground point; otherwise it is marked '0' to denote a background point. This yields a binary foreground image.
Step 1043: apply an AND operation to said foreground blob and said changed foreground, taking the pixels that belong to both said foreground blob and said changed foreground as target points, so as to obtain and output the target.
The AND operation is a common computer operation. Specifically, if a pixel in the image belongs to both the foreground blob and the changed foreground, that pixel is regarded as a target point and is obtained and output.
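Steps 1042 and 1043 reduce to a fixed-threshold segmentation of the difference image followed by a logical AND with the filled blob. In the sketch below, the difference threshold value is a placeholder chosen for illustration; the patent does not specify it.

```python
import numpy as np

def detect_target(current, background, blob_mask, diff_threshold=25):
    """Threshold the frame/background difference image (step 1042) and
    AND it with the filled foreground blob (step 1043)."""
    diff = np.abs(current.astype(np.int16) - background.astype(np.int16))
    changed = diff > diff_threshold   # '1' = changed foreground, '0' = background
    return blob_mask & changed        # target points
```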
As shown in Fig. 4, Fig. 4 is a block diagram of the object detection device for removing illumination effects according to the present invention. As can be seen from Fig. 4, the device comprises:
a background establishing unit 1 for establishing a background image;
a gradient calculation unit 2 for calculating and outputting the gradients of the current frame image and of the background image, said gradient comprising a horizontal gradient and a vertical gradient;
a foreground contour extraction unit 3 for comparing the direction and amplitude of the gradient of said current frame image with those of the gradient of said background image and extracting and outputting a foreground contour accordingly; and
a target acquisition unit 4 for filling the extracted foreground contour to obtain a foreground blob and filtering out noise to output the target.
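For orientation only, the sketch below chains the four units together using the helper functions sketched earlier in this description (they are assumed to be in scope); the class name and structure are illustrative and not part of the claimed device.

```python
class IlluminationRobustDetector:
    """Background establishing, gradient calculation, foreground contour
    extraction and target acquisition units chained into one pipeline."""

    def __init__(self, frames):
        self.background = build_background(frames)                    # unit 1
        bg_sh, bg_sv = directional_gradients(self.background)         # unit 2
        self.bg_a, self.bg_th = magnitude_and_direction(bg_sh, bg_sv)

    def detect(self, frame):
        s_h, s_v = directional_gradients(frame)                       # unit 2
        a, th = magnitude_and_direction(s_h, s_v)
        contour = foreground_contour(a, th, self.bg_a, self.bg_th)    # unit 3
        blob = fill_rows(contour)                                     # unit 4
        return detect_target(frame, self.background, blob)            # unit 4
```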
Compared with the prior art, the object detection method and device of the present invention effectively solve the problem that detected targets are inaccurate and unreliable because of illumination effects in target detection.
It should be stated that the above summary of the invention and the embodiments are intended to demonstrate the practical application of the technical solution provided by the present invention and should not be construed as limiting the scope of protection of the present invention. Those skilled in the art may make various modifications, equivalent substitutions or improvements within the spirit and principles of the present invention. The scope of protection of the present invention is defined by the appended claims.

Claims (5)

1. An object detection method for removing illumination effects, characterized in that said object detection method comprises the following steps:
Step 101: establish a background image with a statistical method;
Step 102: calculate and output the gradient of the current frame image and the gradient of said background image, said gradient comprising a horizontal gradient and a vertical gradient;
Step 103, comprising steps 1031, 1032, 1033 and 1034:
Step 1031: calculate the gradient magnitude A1 and gradient direction θ1 of each pixel in said current frame image and the gradient magnitude A2 and gradient direction θ2 of each pixel in said background image from the horizontal and vertical gradients of said current frame image and of said background image output by step 102;
Step 1032: if the gradient magnitude A1 of pixel (x, y) in said current frame image and the gradient magnitude A2 of the same pixel (x, y) in said background image are both >= a first threshold T1, proceed to step 1033; if A1 and A2 are both <= a second threshold T2, regard pixel (x, y) as a noise point; otherwise calculate |A1 - A2|, and if |A1 - A2| >= a third threshold T3, regard pixel (x, y) as belonging to the foreground;
Step 1033: calculate the absolute difference |θ1 - θ2| between the gradient direction θ1 of pixel (x, y) in said current frame image and the gradient direction θ2 of the same pixel (x, y) in said background image; if |θ1 - θ2| >= a fourth threshold T4, regard pixel (x, y) as belonging to the foreground;
Step 1034: extract all pixels belonging to the foreground, thereby obtaining a foreground contour; and
Step 104: fill the extracted foreground contour to obtain a foreground blob, and filter out noise to output the target.
2. The object detection method according to claim 1, characterized in that, in step 102, a gradient operator is used to calculate the horizontal and vertical gradients of said current frame image and the horizontal and vertical gradients of said background image.
3. The object detection method according to claim 1, characterized in that the first threshold T1 ∈ [8, 12], the second threshold T2 ∈ [3, 5], the third threshold T3 ∈ [4, 6], and the fourth threshold T4 ∈ [18°, 22°].
4. The object detection method according to claim 1, characterized in that step 104 comprises the following steps:
Step 1041: fill the foreground contour output by step 103 to obtain a foreground blob;
Step 1042: calculate the difference image between the current frame image and the background image, and apply threshold segmentation to the difference image to obtain the changed foreground in the difference image; and
Step 1043: apply an AND operation to said foreground blob and said changed foreground, taking the pixels that belong to both said foreground blob and said changed foreground as target points, so as to obtain and output the target.
5. An object detection device for removing illumination effects, characterized in that said device comprises:
a background establishing unit for establishing a background image with a statistical method;
a gradient calculation unit for calculating and outputting the gradient of the current frame image and the gradient of the background image, said gradient comprising a horizontal gradient and a vertical gradient;
a foreground contour extraction unit for calculating the gradient magnitude A1 and gradient direction θ1 of each pixel in said current frame image and the gradient magnitude A2 and gradient direction θ2 of each pixel in said background image from the horizontal and vertical gradients of said current frame image and of said background image output by the gradient calculation unit; if the gradient magnitude A1 of pixel (x, y) in said current frame image and the gradient magnitude A2 of the same pixel (x, y) in said background image are both >= a first threshold T1, calculating the absolute difference |θ1 - θ2| between the gradient direction θ1 of pixel (x, y) in said current frame image and the gradient direction θ2 of the same pixel (x, y) in said background image, and if |θ1 - θ2| >= a fourth threshold T4, regarding pixel (x, y) as belonging to the foreground; if A1 and A2 are both <= a second threshold T2, regarding pixel (x, y) as a noise point; otherwise calculating |A1 - A2|, and if |A1 - A2| >= a third threshold T3, regarding pixel (x, y) as belonging to the foreground; and extracting all pixels belonging to the foreground, thereby obtaining a foreground contour; and
a target acquisition unit for filling the extracted foreground contour to obtain a foreground blob and filtering out noise to output the target.
CN2010101951491A | Priority date 2010-05-31 | Filing date 2010-05-31 | Target detection method capable of removing illumination influence and device thereof | Active | CN101950352B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN2010101951491A (CN101950352B, en) | 2010-05-31 | 2010-05-31 | Target detection method capable of removing illumination influence and device thereof

Publications (2)

Publication Number | Publication Date
CN101950352A (en) | 2011-01-19
CN101950352B (en) | 2012-08-22

Family

ID=43453847

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN2010101951491A (Active, granted as CN101950352B, en) | Target detection method capable of removing illumination influence and device thereof | 2010-05-31 | 2010-05-31

Country Status (1)

Country | Link
CN (1) | CN101950352B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN103971382A (en)* | 2014-05-21 | 2014-08-06 | State Grid Corporation of China (国家电网公司) | Target detection method avoiding light influences

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN102360513B (en)* | 2011-09-30 | 2013-02-06 | Beihang University (北京航空航天大学) | Object Illumination Migration Method Based on Gradient Operation
CN102509345B (en)* | 2011-09-30 | 2014-06-25 | Beihang University (北京航空航天大学) | Portrait art shadow effect generating method based on artist knowledge
CN103700102A (en)* | 2013-12-20 | 2014-04-02 | University of Electronic Science and Technology of China (电子科技大学) | Rock core target extracting method based on CT (Computed Tomography) images
US9519975B2 (en)* | 2014-01-08 | 2016-12-13 | Hong Kong Applied Science And Technology Research Institute Co. Ltd. | Method of detecting edge under non-uniform lighting background
US10373316B2 | 2017-04-20 | 2019-08-06 | Ford Global Technologies, LLC | Images background subtraction for dynamic lighting scenarios

Citations (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN101425179A (en)* | 2008-11-18 | 2009-05-06 | Tsinghua University (清华大学) | Face image relighting method and device
CN101556739A (en)* | 2009-05-14 | 2009-10-14 | Zhejiang University (浙江大学) | Vehicle detecting algorithm based on intrinsic image decomposition
CN101621615A (en)* | 2009-07-24 | 2010-01-06 | Nanjing University of Posts and Telecommunications (南京邮电大学) | Self-adaptive background modeling and moving target detecting method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Tao et al., "Object detection method with tracking compensation based on spatio-temporal background difference", Journal of Computer Applications, 2010, Vol. 30, No. 1, abstract, sections 2-3.*


Also Published As

Publication number | Publication date
CN101950352A (en) | 2011-01-19

Similar Documents

Publication | Publication Date | Title
CN104504388B (en)A kind of pavement crack identification and feature extraction algorithm and system
CN101950352B (en)Target detection method capable of removing illumination influence and device thereof
US10068343B2 (en)Method and apparatus for recognizing moving target
CN103366156B (en)Road structure detect and track
CN102982313B (en)The method of Smoke Detection
CN104537651B (en)Proportion detecting method and system for cracks in road surface image
CN105987684A (en)Monocular vision-based agricultural vehicle navigation line detection system and method
CN104899880B (en)A kind of public transit vehicle opening/closing door of vehicle state automatic testing method
CN100520362C (en)Method for detecting forest fire fog based on colorful CCD image analysis
CN104616275A (en)Defect detecting method and defect detecting device
CN104657735A (en)Lane line detection method and system, as well as lane departure early warning method and system
CN109087363B (en)HSV color space-based sewage discharge detection method
CN101739549B (en)Face detection method and system
CN102122390A (en)Method for detecting human body based on range image
KR20150112656A (en)Method to calibrate camera and apparatus therefor
CN109117702B (en)Target vehicle detection, tracking and counting method and system
CN110321855A (en)A kind of greasy weather detection prior-warning device
CN103473779B (en)The detection method of stripe interference and device in image
CN110443166A (en)A kind of licence plate recognition method of haze weather
Lin et al.A new prediction method for edge detection based on human visual feature
CN108009480A (en)A kind of image human body behavioral value method of feature based identification
CN101763635B (en)Method and device for judging region of background illumination variation in video image frame sequence
CN103714552B (en)Motion shadow removing method and device and intelligent video analysis system
US20190164005A1 (en)Method for extracting features of a thermal image
Angeline et al.Tracking and localisation of moving vehicle license plate via Signature Analysis

Legal Events

Date | Code | Title | Description
C06 | Publication
PB01 | Publication
C10 | Entry into substantive examination
SE01 | Entry into force of request for substantive examination
C14 | Grant of patent or utility model
GR01 | Patent grant
ASS | Succession or assignment of patent right

Owner name:NETPOSA TECHNOLOGIES, LTD.

Free format text: FORMER OWNER: BEIJING ZANB SCIENCE + TECHNOLOGY CO., LTD.

Effective date: 2015-07-16

C41 | Transfer of patent application or patent right or utility model
TR01 | Transfer of patent right

Effective date of registration: 2015-07-16

Address after:100102, Beijing, Chaoyang District, Tong Tung Street, No. 1, Wangjing SOHO tower, two, C, 26 floor

Patentee after:NETPOSA TECHNOLOGIES, Ltd.

Address before:100048 Beijing city Haidian District Road No. 9, building 4, 5 layers of international subject

Patentee before:Beijing ZANB Technology Co.,Ltd.

PP01 | Preservation of patent right

Effective date of registration: 2022-07-26

Granted publication date: 2012-08-22

