CN104408747A - Human motion detection method suitable for depth image - Google Patents

Human motion detection method suitable for depth image

Info

Publication number
CN104408747A
CN104408747A
Authority
CN
China
Prior art keywords
pixel
value
image
background
background model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410717382.XA
Other languages
Chinese (zh)
Other versions
CN104408747B (en)
Inventor
孟明
杨方波
鲁少娜
朱俊青
桂奇政
佘青山
罗志增
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yancheng Yonghao Kai Network Technology Co.,Ltd.
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University
Priority to CN201410717382.XA
Publication of CN104408747A
Application granted
Publication of CN104408747B
Status: Active
Anticipated expiration


Abstract

The invention provides a human motion detection method suitable for depth images. The method comprises the following steps: first, dividing the image into an upper layer and a lower layer, building the background model from different neighborhoods for each layer, and adding a reference model alongside the background model; second, adjusting the difference-threshold parameter of the lower-layer algorithm and comparing each pixel of subsequent video frames with the background model to classify it; third, updating the background model with different update schemes according to the pixel classification; finally, denoising falsely detected points. The method markedly improves the recognition and detection rates of the human body.

Description

Translated from Chinese

A Human Motion Detection Method Applicable to Depth Images

Technical Field

The invention belongs to the field of computer vision and relates to a human motion detection method for depth images.

Background Art

In human motion vision analysis, human motion detection is a key preprocessing step that directly affects the quality of subsequent tracking and recognition, so human motion detection algorithms have long been a research focus in this field.

3D sensors, typified by the Microsoft Kinect, can acquire depth images that present the three-dimensional information of objects, providing a new avenue for human motion detection and analysis. Compared with ordinary color images, depth images have clear advantages; for example, the shadow and illumination problems that plague color images have little effect on them.

The ViBe (visual background extractor) algorithm is one of the background subtraction methods. It is a pixel-level video background modeling algorithm that outperforms common algorithms such as the average background model and the Gaussian mixture model, with low computational cost and high processing efficiency. However, because depth images differ in character from ordinary color images, applying the method to depth images raises the following problems: 1) Moving targets near the ground are hard to detect; concretely, the feet connected to the ground are missing in the image. 2) If a moving person is present in the background image during modeling, ViBe initializes the person as background; after the person moves away, that region is judged as foreground, a phenomenon called a "ghost". Conversely, when the person moves back into the ghost region, the overlapping area cannot be detected as foreground, a phenomenon called a "dark shadow". 3) Inherent defects of the Kinect sensor cause stationary pixels to be misjudged as foreground, mainly due to the sensor's accuracy deviation: distant pixels vary, or are even lost, between adjacent frames, and object edges are unstable.

Summary of the Invention

To address the problems mentioned above, the present invention proposes a human motion detection method for depth images based on the classic ViBe algorithm.

To achieve the above object, the method of the invention mainly comprises the following steps:

Step (1): Build the background model;

Step (2): Classify pixels;

Step (3): Update the background model;

Step (4): Denoise falsely detected points.

The present invention has the following beneficial effects:

1. In background modeling, an adaptive image layering scheme with layer-specific neighborhood sampling modes is proposed, and a reference model MR(x) for removing the "ghost" phenomenon is added to the background model.

2. A foreground-point verification step is added to pixel classification; ghosts are eliminated by comparing the current pixel with the reference model.

3. A background model update strategy based on foreground points is added, which solves the "dark shadow" problem and keeps the background model accurate.

4. A threshold method is applied to the classification result to denoise falsely detected points.

5. The moving human body can be extracted completely from the depth image, laying the groundwork for subsequent research such as gait analysis; the method has broad application prospects in the field of human motion analysis.

Brief Description of the Drawings

Figure 1: Flow chart of the DViBe algorithm;

Figure 2a: 24-neighborhood sampling diagram;

Figure 2b: 14-neighborhood sampling diagram;

Figure 3: Effect of the difference threshold Rb on PCC.

Detailed Description of the Embodiments

The human motion detection method for depth images using the DViBe algorithm of the present invention is described below with reference to the accompanying drawings.

Figure 1 is the flow chart of the DViBe algorithm; its implementation mainly comprises the following steps:

(1) Divide the image into an upper and a lower layer and build the background model for each layer from a different neighborhood. A reference model MR(x) is added while the background model is built.

(2) Adjust the difference-threshold parameter Rb of the lower-layer algorithm.

(3) In subsequent video frames, compare each pixel with the background model to classify it.

(4) Update the background model with different update schemes according to the pixel classification.

(5) Denoise falsely detected points.

Each step is described in detail below.

Step 1: Background model establishment

(1) Adaptive image layering based on the depth image

The essence of image layering is to divide the image into two parts, a ground region and a non-ground region, in a way that accommodates changes in the Kinect sensor's motor angle and position. The image is layered using the property that ground-pixel depth values increase in the vertically upward direction, together with the maximum effective viewing distance D of the sensor. The specific procedure is as follows (a code sketch follows equation (1)):

1) Randomly select a column of the image and traverse its pixels vertically upward, starting from the bottom pixel.

2) Record the ordinate of the first pixel whose value lies within D±5 as y; if no such pixel is found after traversing the column, default y to the maximum ordinate.

3) Repeat steps 1 and 2 m times to obtain m ordinate values {y1, y2, …, ym−1, ym}.

4) Since occlusion by objects and sensor error can make y too large or too small and thus distort the division, sort the m ordinate values by size, take the middle k values, and average them to obtain the ordinate of the image's dividing line:

ȳ = (y1 + y2 + … + yk) / k  (1)
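The layering procedure lends itself to a short sketch. The following Python fragment is a minimal sketch, assuming a numpy depth frame with the origin at the top-left; the function name split_ordinate and the defaults m=20 and k=10 are illustrative, not values from the patent:

```python
import numpy as np

def split_ordinate(depth, D, m=20, k=10, tol=5):
    """Estimate the ordinate of the ground/non-ground dividing line.

    Sample m random columns, scan each from the bottom row upward for
    the first pixel whose depth lies within D +/- tol, then average the
    k middle ordinates (equation (1)).
    """
    h, w = depth.shape
    ys = []
    for _ in range(m):
        col = depth[:, np.random.randint(w)]
        y = h - 1  # default: maximum ordinate if no pixel matches
        for row in range(h - 1, -1, -1):  # traverse bottom-up
            if abs(int(col[row]) - D) <= tol:
                y = row
                break
        ys.append(y)
    ys.sort()  # discard extremes caused by occlusion or sensor error
    mid = ys[(m - k) // 2 : (m - k) // 2 + k]
    return round(sum(mid) / k)
```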

(2) Building the pixel background model and the reference model

The DViBe algorithm models from a single frame, so the initial background model is built from the first image. Let v(x) denote the value, in a given color space, of the pixel at location x in the image; each pixel x can then be modeled in the background model as a set M(x). M(x) contains n values vi, i = 1, …, n, of pixels randomly selected from the neighborhood of pixel x, called the background sample values, i.e.

M(x) = {v1, v2, v3, …, vn−1, vn}  (2)

MR(x)=v(x)  (3)MR (x) = v (x) (3)

where vi is the background sample value with index i, n is the number of samples, and v(x) is the pixel value of pixel x. For the upper-layer image the 24-neighborhood of Figure 2a is used; for the lower-layer image, the 14-neighborhood of Figure 2b.
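To make the two models concrete, here is a minimal initialization sketch, assuming a single-channel numpy depth frame; the exact 24- and 14-neighborhood shapes follow Figures 2a/2b, which are not reproduced here, so the offset sets below are only stand-ins:

```python
import numpy as np

def init_models(frame, split_y, n=20):
    """Build M(x) (n neighborhood samples per pixel) and MR(x) from the
    first frame; pixels above split_y use a 24-neighborhood, pixels
    below a 14-neighborhood (offset sets are illustrative)."""
    h, w = frame.shape
    M = np.empty((h, w, n), dtype=frame.dtype)
    off24 = [(dy, dx) for dy in range(-2, 3) for dx in range(-2, 3)
             if (dy, dx) != (0, 0)]            # 5x5 minus center: 24
    off14 = [(dy, dx) for dy in range(-2, 1) for dx in range(-2, 3)
             if (dy, dx) != (0, 0)]            # upper 3x5 minus center: 14
    for y in range(h):
        offs = off24 if y < split_y else off14
        for x in range(w):
            for i in range(n):
                dy, dx = offs[np.random.randint(len(offs))]
                yy = min(max(y + dy, 0), h - 1)  # clamp to the image
                xx = min(max(x + dx, 0), w - 1)
                M[y, x, i] = frame[yy, xx]
    MR = frame.copy()  # reference model: first-frame values, eq. (3)
    return M, MR
```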

Step 2: Pixel classification

(1) Basic classification principle

A pixel x in the current image is classified by comparing its value v(x) with the corresponding model M(x) in the background model. Denote the distance between v(x) and a sample value vi of M(x), in the given color space, as:

dis[v(x),vi]=||v(x)-vi||  (4)dis[v(x),vi ]=||v(x)-vi || (4)

Given a difference threshold R, count the number of samples with dis[v(x), vi] < R and denote it C. If C < Cmin, then x is a foreground point; otherwise it is a background point. Cmin is the pixel-classification matching parameter.
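A vectorized reading of this classification rule, as a sketch building on the models of the previous step; Cmin = 2 is the usual ViBe choice and not a value fixed by the patent:

```python
import numpy as np

def classify(frame, M, split_y, Rt=20, Rb=6, Cmin=2):
    """Return a boolean foreground mask: a pixel is foreground when
    fewer than Cmin of its n samples lie within the layer's threshold
    (Rt for the upper layer, Rb for the lower layer)."""
    h, w, n = M.shape
    R = np.full((h, w), Rt, dtype=np.int32)
    R[split_y:, :] = Rb               # lower layer: tighter threshold
    dist = np.abs(M.astype(np.int32) - frame[..., None].astype(np.int32))
    C = (dist < R[..., None]).sum(axis=2)  # matches per pixel
    return C < Cmin                   # True = foreground point
```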

(2) Parameter adjustment

After layering, the parameters used in pixel classification also need adjusting: different distance thresholds are applied to the upper and lower parts of the image. For upper-layer classification, the distance threshold Rt is kept at 20. Because the depth values of ground pixels in the lower-layer image are close to one another, effectively detecting moving targets at the ground requires, besides changing the sample-selection neighborhood mode, tuning the threshold Rb. The smaller Rb, the more easily moving targets are detected, but too small an Rb causes many background points to be misjudged as foreground. The effect of different values of Rb on the percentage of correct classification (PCC) is shown in Figure 3: the larger Rb, the higher the recognition rate, but beyond 6 the PCC levels off. Weighing the overall recognition rate against the recognition rate of the feet, Rb = 6 is chosen.

(3) Foreground point verification

If a moving person is present in the background image during modeling, the ViBe algorithm incorrectly initializes the person as background. The resulting problem is that after the person moves away, the pixel values of the original region change drastically, characteristically becoming larger. The ViBe algorithm judges these points as foreground and keeps them in that state. We call such a set of foreground points, which corresponds to no actual moving object, a "ghost". Exploiting the property that the pixel values become larger, the ghost phenomenon can be removed by comparing the current pixel with the corresponding value in the reference model MR(x). The verification rule is:

v(x, t0) − v(x, t) > 0,  v(x, t0) > 0  (5)

where v(x, t0) is the value of pixel x in MR(x) and v(x, t) is the value of the pixel at x in the current image. When the distance exceeds the Kinect sensor's detection range, pixels may be lost, i.e. take the value 0, which would cause some foreground points to be misjudged as background points. Adding the condition v(x, t0) > 0 prevents normal foreground points from being misjudged as background for this reason.
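One way to read rule (5) in code, as a hedged sketch: the patent does not spell out the branch taken when the reference value is lost, so the v_ref == 0 exemption below is an interpretation of the stated guard:

```python
def keep_foreground(v_ref, v_cur):
    """Verify a pixel already flagged as foreground.

    v_ref: value of the pixel in the reference model MR(x);
    v_cur: value of the same pixel in the current frame.
    A genuine foreground object is closer than its reference, so
    v_ref - v_cur > 0 keeps the flag; a ghost pixel (whose value grew
    after the person left) fails and is reclassified as background.
    Pixels whose reference was lost by the sensor (v_ref == 0) skip
    the test, per the guard in rule (5).
    """
    if v_ref == 0:
        return True
    return v_ref - v_cur > 0
```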

Step 3: Background model update

(1) Basic principle of the original algorithm's model update

The purpose of updating the background model is to keep it accurate as time passes. When a pixel x is classified as a background point, the update process of its background model M(x) is triggered. Random subsampling first decides whether to update M(x); for a model selected for update, one sample value is chosen at random from M(x) and replaced by the current pixel value v(x), so that the lifetimes of the sample values in the background model decay exponentially and monotonically. To preserve spatial consistency, the update process also randomly updates, by the same method, the background models of the neighboring pixels of x.
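For reference, the conservative ViBe update just described looks roughly like this; a sketch assuming the model layout above, where phi = 16 is the subsampling factor commonly used with ViBe rather than a value stated in this document:

```python
import numpy as np

def vibe_update(M, frame, bg_mask, phi=16):
    """Update the models of background-classified pixels.

    With probability 1/phi a pixel replaces one random sample of its
    own model with its current value, and with the same probability it
    writes its value into a random sample of a random 8-neighbor's
    model (spatial propagation).
    """
    h, w, n = M.shape
    for y, x in zip(*np.nonzero(bg_mask)):
        if np.random.randint(phi) == 0:       # random subsampling
            M[y, x, np.random.randint(n)] = frame[y, x]
        if np.random.randint(phi) == 0:       # propagate to a neighbor
            dy, dx = np.random.randint(-1, 2, size=2)
            yy = min(max(y + dy, 0), h - 1)
            xx = min(max(x + dx, 0), w - 1)
            M[yy, xx, np.random.randint(n)] = frame[y, x]
    return M
```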

The dark-shadow phenomenon arises because when a person moves back into a ghost region, the pixel values at the overlap are close to the sample values in the background model, so the algorithm detects the overlapping area as background. The original algorithm's background-point-based update strategy can also resolve dark shadows and ghosts in subsequent frames, but the process is relatively slow.

(2) Foreground-point-based background model update and reference model update

The foreground-point-based background model update strategy is as follows (a code sketch follows the list):

1) For every pixel in the image, count the number of consecutive frames F in which it has been judged foreground. When the pixel is detected as a background point, restart the count.

2) Set a frame-count threshold Fmin; when F > Fmin, the algorithm is considered to have misjudged a background point as a foreground point.

3) Change the point to a background point, restart the count F, update the reference model MR(x), and build a new background model, selecting the modeling samples with the adaptive layered neighborhood mode described above.
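A compact sketch of this counter-based reset; the class name, the Fmin default, and the rebuild_model callback standing in for the Step 1 re-sampling are all illustrative:

```python
import numpy as np

class ForegroundReset:
    """Track per-pixel foreground streaks and reset stuck pixels."""

    def __init__(self, shape, Fmin=50):
        self.F = np.zeros(shape, dtype=np.int32)  # consecutive FG count
        self.Fmin = Fmin

    def apply(self, fg_mask, frame, M, MR, rebuild_model):
        self.F[fg_mask] += 1        # extend foreground streaks
        self.F[~fg_mask] = 0        # a background detection restarts F
        stuck = self.F > self.Fmin  # misjudged background points
        fg_mask[stuck] = False      # re-label them as background
        MR[stuck] = frame[stuck]    # refresh the reference model
        rebuild_model(M, frame, stuck)  # re-sample M(x) as in Step 1
        self.F[stuck] = 0
        return fg_mask
```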

Step 4: Denoising of falsely detected points

The Kinect sensor is characterized by error that grows with distance, and targets closer than a certain distance cannot be detected at all. Noise mostly appears at large depth values and at object edges, so a threshold method can remove a large number of falsely detected points. When the depth value of a pixel detected as foreground falls outside the set range, the point is considered background, i.e.

v(x) = 0, if v(x) > T or v(x) < t;  v(x) = 255, otherwise  (6)

where T and t are depth values, 255 denotes foreground, and 0 denotes background. T and t can be determined from the depth values of the sensor's effective viewing range; if the region occupied by the human body is known, T and t can be set from that region, which removes even more falsely detected points.
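Rule (6) reduces to a few array operations; a sketch, with T and t chosen per scene as described above:

```python
import numpy as np

def denoise(fg_mask, depth, T, t):
    """Suppress foreground detections whose depth lies outside [t, T]
    and return the final binary map (255 = foreground, 0 = background)."""
    keep = fg_mask & (depth >= t) & (depth <= T)
    return np.where(keep, 255, 0).astype(np.uint8)
```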

Experiments show that the improved ViBe algorithm is feasible for human motion detection in depth images; both the recognition rate and the detection rate of the human body are markedly improved.

Claims (3)

CN201410717382.XA | 2014-12-01 | 2014-12-01 | Human motion detection method suitable for depth image | Active | CN104408747B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201410717382.XA | 2014-12-01 | 2014-12-01 | Human motion detection method suitable for depth image

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201410717382.XA | 2014-12-01 | 2014-12-01 | Human motion detection method suitable for depth image

Publications (2)

Publication Number | Publication Date
CN104408747A | 2015-03-11
CN104408747B | 2017-02-22

Family

ID=52646375

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201410717382.XA | Active | CN104408747B (en)

Country Status (1)

Country | Link
CN (1) | CN104408747B (en)


Also Published As

Publication Number | Publication Date
CN104408747B | 2017-02-22


Legal Events

Date | Code | Title | Description
C06 | Publication
PB01 | Publication
C10 | Entry into substantive examination
SE01 | Entry into force of request for substantive examination
C14 | Grant of patent or utility model
GR01 | Patent grant
EE01 | Entry into force of recordation of patent licensing contract

Application publication date:20150311

Assignee:FUZHOU FUGUANG WATER SCIENCE & TECHNOLOGY CO.,LTD.

Assignor:HANGZHOU DIANZI University

Contract record no.:2019330000071

Denomination of invention:Human motion detection method suitable for depth image

Granted publication date:20170222

License type:Common License

Record date:20190718

TR01 | Transfer of patent right
Effective date of registration:20201211

Address after:Room 1004-5, building 8, 3333 Guangyi Road, Daqiao Town, Nanhu District, Jiaxing City, Zhejiang Province

Patentee after:Jiaxing Xunfu New Material Technology Co.,Ltd.

Address before:Room 3003-1, building 1, Gaode land center, Jianggan District, Hangzhou City, Zhejiang Province

Patentee before:Zhejiang Zhiduo Network Technology Co.,Ltd.

Effective date of registration:20201211

Address after:Room 3003-1, building 1, Gaode land center, Jianggan District, Hangzhou City, Zhejiang Province

Patentee after:Zhejiang Zhiduo Network Technology Co.,Ltd.

Address before:310018 No. 2 street, Xiasha Higher Education Zone, Hangzhou, Zhejiang

Patentee before:HANGZHOU DIANZI University

TR01 | Transfer of patent right

Effective date of registration:20201221

Address after:224002 Building 5, No. 55, Taishan South Road, Yancheng Economic and Technological Development Zone, Jiangsu Province

Patentee after:Jiangsu Suya Heavy Industry Technology Co.,Ltd.

Address before:Room 1004-5, building 8, 3333 Guangyi Road, Daqiao Town, Nanhu District, Jiaxing City, Zhejiang Province

Patentee before:Jiaxing Xunfu New Material Technology Co.,Ltd.

TR01 | Transfer of patent right

Effective date of registration:20210226

Address after:224002 room 1209, business building, comprehensive bonded zone, No. 18, South hope Avenue, Yancheng Economic and Technological Development Zone, Jiangsu Province

Patentee after:Jiangsu Yanzong Industry Investment Development Co.,Ltd.

Address before:224002 Building 5, No. 55, Taishan South Road, Yancheng Economic and Technological Development Zone, Jiangsu Province

Patentee before:Jiangsu Suya Heavy Industry Technology Co.,Ltd.

EC01 | Cancellation of recordation of patent licensing contract

Assignee:FUZHOU FUGUANG WATER SCIENCE & TECHNOLOGY Co.,Ltd.

Assignor:HANGZHOU DIANZI University

Contract record no.:2019330000071

Date of cancellation:20210517

TR01 | Transfer of patent right

Effective date of registration:20250806

Address after:224001 Jiangsu Province, Yancheng City Economic and Technological Development Zone, Bufeng Town, Bufeng Road No. 188, 2nd floor of the Comprehensive Business Building, Room 2208

Patentee after:Yancheng Yonghao Kai Network Technology Co.,Ltd.

Country or region after:China

Address before:224002 room 1209, business building, comprehensive bonded zone, No. 18, South hope Avenue, Yancheng Economic and Technological Development Zone, Jiangsu Province

Patentee before:Jiangsu Yanzong Industry Investment Development Co.,Ltd.

Country or region before:China

