US8224023B2 - Method of tracking an object in a video stream - Google Patents


Info

Publication number
US8224023B2
Authority
US
United States
Prior art keywords
locations
frame
grid
sampling
detector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related, expires
Application number
US11/490,156
Other versions
US20070097112A1 (en)
Inventor
Darryl Greig
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hewlett Packard Development Co LP
Original Assignee
Hewlett Packard Development Co LP
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett Packard Development Co LP
Assigned to HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P. (assignment of assignors interest; see document for details). Assignors: HEWLETT-PACKARD LIMITED
Publication of US20070097112A1
Priority to US12/417,244 (US20090245580A1)
Application granted
Publication of US8224023B2
Current legal status: Expired - Fee Related
Adjusted expiration

Abstract

A method of tracking an object such as a face in a video stream comprises running an object detector at a plurality of locations on a first frame, defining a coarse grid. This is repeated for second and subsequent frames, with the grid slightly offset each time so that, ultimately, all of the points on a fine grid are covered, but in several passes. When an object such as a face is located on one frame, positional and/or scale information is propagated to the next frame to assist in the tracking of that object in the next frame.

Description

RELATED APPLICATIONS
The present application is based on, and claims priority from, British Application Number 0522225.2, filed Oct. 31, 2005, the disclosure of which is hereby incorporated by reference herein in its entirety.
FIELD
The present invention relates to a method of tracking and/or detecting an object in a video stream, and particularly although not exclusively to a method capable of operating on a real-time video stream in conjunction with a sub-real-time object detector.
In recent years, algorithms designed to detect faces or other objects within a video stream have become much more efficient, to the extent that some are now capable of operating in real-time or near real-time when run on a powerful platform such as a PC. However, there is now an increasing demand for face and object detection to be provided on low powered platforms such as hand-held organisers, mobile telephones, still digital cameras, and digital camcorders. These platforms are typically not sufficiently high powered to allow real-time operation using some of the better and more robust face/object detectors. There is accordingly a need to speed up object detection/tracking.
There have of course been many advances in recent years designed to speed up object detection. Most of these operate on a frame-by-frame basis; that is, speedup is achieved by designing a faster frame-wise detector. In some cases, detectors are specifically designed for operation on a video stream, in which case some amount of historical information may be propagated to the current frame of interest. This may be done for reasons of speedup, robustness, or sometimes both.
Some examples of object detection and tracking algorithms designed specifically for a video stream are described below. Note that each of these methods presumes the existence of a face/object detector that can operate on a single frame. The detector is generally assumed to give accurate results that may be enhanced or verified by historical data.
US Patent US20040186816 A1.—This is an example of a combined detection/classification algorithm, utilized for mouth tracking in this case. The inventors use a face detector initially to locate the face and mouth, then track the mouth using a linear Kalman filter, with the mouth location and state verified by a mouth detector in each frame. If the mouth is lost in any frame, the face detector is re-run and the mouth location re-initialized.
Keith Anderson & Peter McOwan. “Robust real-time face tracker for cluttered environments”, Computer Vision and Image Understanding 95 (2004), pp 184-200.—The authors describe a face detection and tracking system that uses a number of different methods to determine a probability map for face locations in an initial frame of the video sequence. This probability map is then updated frame by frame using the same detection methods, so that in any given frame the recent history is included in the probability map. This has the effect of making the system more robust.
R. Choudhury Verma, C. Schmid, K. Mikolajczyk, "Face Detection and Tracking in a Video by Propagating Detection Probabilities", IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 25, No. 10, pp 1215-1228, 2003.—The authors describe a face detection and tracking system similar to the previous two mentioned. Faces are detected in each frame, and a probability map of face locations in the video stream is updated using the CONDENSATION algorithm. This algorithm is described in Isard & Blake, "Condensation—Conditional Density Propagation for Visual Tracking", Int. J. Computer Vision, Vol. 29, 1998, pp 5-28.
BRIEF DESCRIPTION OF DRAWINGS
FIGS. 1 to 3 illustrate repeated passes across an image within which one or more faces are to be detected;
FIG. 4 schematically illustrates apparatus according to an embodiment of the invention; and
FIG. 5 is a flow diagram schematically illustrating a method.
DESCRIPTION OF EMBODIMENTS
According to a first aspect of the present invention there is provided a method of tracking an object in a video stream comprising:
    • (a) running an object detector at a plurality of sampling locations, the locations defining a first grid spaced across a first frame, and recording a hit at each location where an object of interest is found; and
    • (b) running the object detector at a further plurality of sampling locations defining a second grid spaced across a second frame, the second grid being offset from the first grid, and running the detector in addition at one or more further locations on the second frame derived from the or each location on the first frame at which a hit was recorded.
The invention further extends to a computer program for operating such a method, and to a computer readable medium bearing such a computer program.
According to a second aspect of the invention there is provided an apparatus for tracking an object in a video stream comprising a plurality of video frames, the apparatus including an object detector comprising a programmed computer for:
(a) running an object detector at a plurality of sampling locations, the locations defining a first grid spaced across a first frame, and recording a hit at each location where an object of interest is found; and
(b) running the object detector at a further plurality of sampling locations defining a second grid spaced across a second frame, the second grid being offset from the first grid, and running the detector in addition at one or more further locations on the second frame derived from the or each location on the first frame at which a hit was recorded.
A particular feature of the present invention is that it may be used in conjunction with a variety of standard and well-understood face or object detection algorithms, including algorithms that operate sub-real-time.
Staggered sampling grids may easily be integrated into many existing detection and tracking systems, allowing a significant additional speed-up in detection/tracking for a very small computational overhead. Since the preferred method applies only to the run-time operation, there is no need to retrain existing detectors, and well-understood conventional object/face detectors may continue to be used.
It has been found that in some applications the use of a staggered grid may actually outperform the conventional fine-grid (one pass) approach, both for false negatives and for false positives. This is believed to be because the use of a local search, in some embodiments, allows attention to be directed at locations which do not occur even on a fine grid, thereby reducing the false negative rate. In addition, a coarse sampling grid is likely to locate fewer false positives, which are typically fairly brittle (that is, they occur only in specific locations), and those that are found are unlikely to be successfully propagated.
The invention may be carried into practice in a number of ways, and one specific embodiment will now be described, by way of example, with reference to the accompanying Figures.
In the present embodiment we wish to attempt real-time or near real-time face detection/tracking on a video stream, but using a face detector/tracker which operates only in sub-real-time.
Any convenient face or object detection/tracking algorithm may be used, including the following: Verma, Schmid & Mikolajczyk, "Face Detection and Tracking in a Video by Propagating Detection Probabilities", IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 25, No. 10, October 2003, p 1215; Anderson & McOwan, "Robust real-time face tracker for cluttered environments", Computer Vision and Image Understanding, 95 (2004), 184-200; and Isard & Blake (op. cit.).
FIG. 1 shows a typical frame from a video stream including an image of a face 12. It will be assumed for the purposes of discussion that the chosen face detector, if run at all locations on the frame, will locate the face within a rectangular region 14 shown in dotted lines. It will be further assumed that the face detector operates in a rectangular region to the right of and below a nominal starting location 16. In other words, when the face detector is run at the location 16, it will carry out a search within the dotted region 14 to attempt to find a face.
In a practical embodiment, the face detector may actually operate at a plurality of different scales and may attempt to find a face at a variety of different sizes/resolutions to the right of and below the nominal starting position 16. Thus, the dotted rectangle 14, within which the face 12 is located, may be of a variety of differing sizes depending upon the details of the image being analysed and the details of the face detector. For the purpose of simplicity, however, the following description will assume that we are interested in detecting faces or other objects at a single resolution only. It will of course be understood that the method generalises trivially to operate at multiple resolutions.
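By way of illustration, a minimal sketch of such a multi-scale sweep around a single-scale detector might look as follows. The routine detect_object_at_scale, the stub body and the particular scale factors are assumptions introduced here, not taken from the patent.

#include <vector>

/* Hypothetical single-scale detector: returns true if an object with the given
   window scale is found to the right of and below location (i, j). A real
   system would call its actual face/object detector here. */
static bool detect_object_at_scale(int i, int j, double scale) {
    (void)i; (void)j; (void)scale;
    return false;   /* placeholder stub so the sketch compiles */
}

/* Sweep an illustrative scale pyramid at a single sampling location and report
   the first scale at which the detector fires. */
static bool detect_object_multiscale(int i, int j, double* found_scale) {
    const std::vector<double> scales = {1.0, 1.25, 1.5625, 1.953125};
    for (double s : scales) {
        if (detect_object_at_scale(i, j, s)) {
            *found_scale = s;
            return true;
        }
    }
    return false;
}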
If the face detector were to be capable of operating sufficiently rapidly, we could simply define a fine grid across the image, and run the face detector at every point on the grid, frame by frame. However, robust and reliable face detectors are computationally intensive, and it may not be possible for the detector to keep up with the incoming video stream if the detector is called at each and every point on a fine grid.
In the present embodiment, the detector is called not at each point of a fine grid but at each point of a larger 2×2 grid, as shown by the cells annotated with the numeral 1 in FIG. 1. For the purposes of this disclosure, it should be appreciated that the terms point and cell may be used interchangeably. The base unit for this coarser grid is shown schematically by the darker cell 10.
Once the first frame has been analysed, as shown in FIG. 1, a second pass is undertaken, as shown in FIG. 2, this pass being based on a grid which is offset from the first pass downwards and to the right by one cell of the finer grid. As shown in FIG. 3, a third pass is then undertaken based on a further grid which is offset by one cell to the left of the grid of FIG. 2. Finally, the system carries out a fourth pass (not shown), based upon a grid which is spaced diagonally upwards and to the right of the grid shown in FIG. 3. Accordingly, the entirety of the original finer grid has been covered, but in four sequential offset passes rather than in a single pass.
At any pass, if a face is located, the location and scale/size of the face is propagated to the next frame in order to assist detection and/or tracking of the face in that new frame.
In the example shown, the first pass of FIG. 1 misses the location 16 at which the face 12 may be found, and the face is therefore not located in that pass. In the second pass of FIG. 2, the face detector is triggered to operate at a location 18, corresponding to the location 16 of FIG. 1, and the face is therefore located. Details of the location 18 and a representation 19 of the size/resolution of the face are recorded for use in the next pass.
In the third pass, shown in FIG. 3, the face detector is triggered to operate at all of the locations indicated by the numeral 3. One of these locations 22 almost, but not quite, finds the face 12. In addition to the normal grid, however, on this pass the face detector is also triggered to run at the location 20, corresponding to the position 18 at which the face was located in the previous pass. Since in this example the face has not moved between frames, it is again automatically located in FIG. 3 by virtue of the information passed from the identification made in FIG. 2. Without that additional information, the face would not have been found in the FIG. 3 scan.
The propagation of information from one frame to a subsequent frame may take a variety of forms, including any or all of the following:
    • 1. The position only is propagated, with the object being redetected anew in the next frame. The assumption here is that, if the frame rate is sufficiently high, the object is unlikely to have moved very much between frames.
    • 2. The position is propagated, and a local search is made in a neighbourhood of that position in the next frame to attempt to pick up the new position of the object. The search may be conducted either by running the full object detector at a plurality of locations around the forward-propagated location, or alternatively a faster, simpler algorithm may be used to undertake this pre-search, with the full object detection algorithm being used only at the most promising locations within the search area (a minimal sketch of such a local search is given after this list).
    • 3. Some form of object tracking may be used to predict the location and/or scale of the object in the next frame, based upon measured changes of object location and/or scale between frames. This may be achieved by means of any suitable motion prediction algorithm (perhaps using motion vectors), for example a Kalman filter and/or the CONDENSATION algorithm of Isard & Blake (op. cit.).
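As an illustration of the second of these options, a minimal sketch of a local search around a forward-propagated position might look as follows. The routine detect_object stands in for the full detector of the listing given later; the POINT structure, the neighbourhood radius and the stub body are assumptions introduced only for this sketch.

/* Hypothetical sketch of a local search around a forward-propagated location. */
struct POINT { int i, j; };

/* Stub standing in for the full object detector; a real system would run its
   actual detector on the current frame here. */
static bool detect_object(int i, int j) { (void)i; (void)j; return false; }

/* Search a (2*radius+1) x (2*radius+1) neighbourhood, sampled at the fine-grid
   resolution "step", around the propagated hit "prev". Returns true and writes
   the refreshed position if the object is re-acquired. */
static bool local_search(POINT prev, int step, int radius,
                         int img_height, int img_width, POINT* refreshed) {
    for (int di = -radius; di <= radius; ++di) {
        for (int dj = -radius; dj <= radius; ++dj) {
            const int i = prev.i + di * step;
            const int j = prev.j + dj * step;
            if (i < 0 || j < 0 || i >= img_height || j >= img_width)
                continue;                      /* stay inside the frame */
            if (detect_object(i, j)) {
                refreshed->i = i;
                refreshed->j = j;
                return true;
            }
        }
    }
    return false;                              /* not re-acquired locally */
}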
Preferably, the method is applied to consecutive sequential frames within the video stream, but given a sufficiently high frame rate the algorithm will still operate even if some frames (for example every other frame) are dropped.
It will of course be understood that the method may equally well be applied using coarse grids having a size other than 2×2, based upon the size of the fine grid which ultimately has to be covered.
If the desired sampling resolution (the cell size of the fine grid) is given by the variable “step” then a staggered algorithm based on a sampling resolution of twice that size may be generated as follows:
int i,j,idx,istart,jstart;
int nlocations=0;
POINT locations[MAX_LOCATIONS];     /* hit locations recorded across the passes */
for ( idx=0; idx<4; ++idx ) {       /* one offset pass per index of the 2x2 stagger */
  istart = jstart = 0;              /* idx 0: no offset */
  if ( idx == 1 ) {
    istart = step;                  /* idx 1: offset by one fine-grid cell in row and column */
    jstart = step;
  } else if ( idx == 2 ) {
    jstart = step;                  /* idx 2: column offset only */
  } else if ( idx == 3 ) {
    istart = step;                  /* idx 3: row offset only */
  }
  for ( i=istart; i < img_height; i += 2*step ) {   /* coarse grid at twice the fine resolution */
    for ( j=jstart; j < img_width; j += 2*step ) {
      if ( detect_object(i,j) == true ) {
        locations[nlocations] = POINT(i,j);         /* record a hit at this sampling location */
        nlocations++;
      }
    }
  }
}
This uses a procedure called “detect_object” operating on a particular image location (i, j), the inner two loops representing a coarser sampling grid that is staggered by the index in the outer loop, so that all of the locations in the original finer sampling grid are covered. It may be noted that apart from a small overhead this algorithm requires almost no greater computational effort than the effort required to scan the finer grid in a single pass.
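As noted above, coarse grids other than 2×2 may also be used. A hedged sketch of how the listing might be generalised to an n×n stagger, by decomposing the pass index into a row offset and a column offset, is given below; the helper name staggered_pass, the POINT structure, the std::vector container and the detector stub are assumptions introduced for the sketch, not part of the patent listing.

#include <vector>

struct POINT { int i, j; };

/* Stub standing in for the detect_object procedure of the listing above. */
static bool detect_object(int i, int j) { (void)i; (void)j; return false; }

/* One pass of an n x n staggered scan: pass_idx in [0, n*n) selects the offset,
   and the coarse grid is sampled every n*step pixels, so that n*n successive
   passes together cover every point of a fine grid of resolution "step". */
static std::vector<POINT> staggered_pass(int pass_idx, int n, int step,
                                         int img_height, int img_width) {
    std::vector<POINT> hits;
    const int istart = (pass_idx / n) * step;   /* row offset for this pass */
    const int jstart = (pass_idx % n) * step;   /* column offset for this pass */
    for (int i = istart; i < img_height; i += n * step) {
        for (int j = jstart; j < img_width; j += n * step) {
            if (detect_object(i, j)) {
                hits.push_back({i, j});
            }
        }
    }
    return hits;
}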
The method is shown, schematically, in FIG. 5. At step 50, the object detector is first operated at a plurality of sampling locations, and a hit is recorded at each location where an object of interest is found. At step 52, the detector is then operated again at a further plurality of sampling locations defining a second grid spaced across a second frame. As shown at step 54, the detector is then operated in addition at one or more further locations on the second frame derived from the or each location on the first frame at which a hit was recorded. The order of the steps 52, 54 may be reversed, and it would also be possible for both of the steps to be undertaken simultaneously, assuming suitable parallel processing facilities were available.
On completion of the steps 52, 54, these two steps may be repeated (again, in either order) for a sequence of subsequent frames, with each respective sampling grid being offset from the grid used on the preceding frame. That is illustrated schematically in FIG. 5 by the arrow 56. The method completes at the end of the sequence of video frames, as shown by the arrow 58.
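Drawing these steps together, a minimal sketch of how steps 50 to 58 might be driven over a frame sequence is given below: each frame is scanned with the next offset of the 2×2 staggered grid, and the locations at which hits were recorded on the previous frame are probed as well, corresponding to the position-only propagation option described earlier. The helper track_sequence, the POINT structure and the detector stub are assumptions introduced for this sketch; a real implementation would pass the current frame's pixel data to its detector.

#include <vector>

struct POINT { int i, j; };

/* Stub standing in for the full detector; a real system would run its face or
   object detector on the current frame at location (i, j). */
static bool detect_object(int i, int j) { (void)i; (void)j; return false; }

/* Process num_frames frames of size img_height x img_width with a fine-grid
   resolution of "step": one offset pass of the 2x2 staggered grid per frame
   (step 52), plus the hit locations carried over from the previous frame
   (step 54). Returns the hits recorded on the final frame. */
static std::vector<POINT> track_sequence(int num_frames, int step,
                                         int img_height, int img_width) {
    std::vector<POINT> previous_hits;           /* hits propagated between frames */
    for (int frame = 0; frame < num_frames; ++frame) {
        std::vector<POINT> hits;
        const int idx = frame % 4;              /* which of the four offsets to use */
        const int istart = (idx / 2) * step;    /* row offset of this frame's grid */
        const int jstart = (idx % 2) * step;    /* column offset of this frame's grid */

        /* Step 52: scan this frame's offset coarse grid. */
        for (int i = istart; i < img_height; i += 2 * step)
            for (int j = jstart; j < img_width; j += 2 * step)
                if (detect_object(i, j))
                    hits.push_back({i, j});

        /* Step 54: also probe the locations that scored hits on the previous frame. */
        for (const POINT& p : previous_hits)
            if (detect_object(p.i, p.j))
                hits.push_back({p.i, p.j});

        previous_hits = hits;                   /* propagate to the next frame (arrow 56) */
    }
    return previous_hits;
}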
In a practical implementation, the invention may be embodied within some hardware or apparatus, such as a still or video camera 40, shown schematically in FIG. 4. Within the camera 40 is a microprocessor chip or programmed digital computer 42, which is programmed to carry out the method as previously described. The computer, when operating in accordance with the stored program, embodies an object detector 44. It will be understood of course that instead of using a programmed digital computer, the object detector 44 could comprise a purpose-designed hard-coded or hard-wired system.

Claims (19)

1. A method of tracking an object in a video stream comprising a plurality of video frames, the method comprising:
(a) running an object detector, on a computer, at a plurality of sampling locations, the locations defining a first grid spaced across a first video frame, and recording a hit, to electronic storage, at each location where an object of interest is found; and
(b) running the object detector, on the computer, at a further plurality of sampling locations defining a second grid spaced across a succeeding second video frame, the second grid being offset from the first grid, wherein larger points, which have a size of at least 2×2 points of a fine grained grid, cover the entire first grid and the entire second grid and wherein each of the larger points has one of the sampling locations of the first grid and one of the further sampling locations of the second grid and wherein each of the further sampling locations is offset with respect to a corresponding sampling location of the first grid for a respective larger point and simultaneously running the detector employing parallel processing, on the computer, in addition at one or more further locations on the second frame derived from each location on the first frame at which a hit was recorded.
10. Apparatus for tracking an object in a video stream comprising a plurality of video frames, the apparatus including an object detector comprising:
a computer that is programmed for
(a) running an object detector at a plurality of sampling locations, the locations defining a first grid spaced across a first video frame, and recording a hit, to electronic storage, at each location where an object of interest is found; and
(b) running the object detector at a further plurality of sampling locations defining a second grid spaced across a succeeding second video frame, the second grid being offset from the first grid, wherein larger points, which have a size of at least 2×2 points of a fine grained grid, cover the entire first grid and the entire second grid and wherein each of the larger points has one of the sampling locations of the first grid and one of the further sampling locations of the second grid and wherein each of the further sampling locations is offset with respect to a corresponding sampling location of the first grid for a respective larger point and simultaneously running the detector employing parallel processing in addition at one or more further locations on the second frame derived from each location on the first frame at which a hit was recorded.
US11/490,156 | 2005-10-31 | 2006-07-21 | Method of tracking an object in a video stream | Expired - Fee Related | US8224023B2 (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
US12/417,244 (US20090245580A1) | 2006-07-21 | 2009-04-02 | Modifying parameters of an object detector based on detection information

Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
GB0522225A (GB2431787B) | 2005-10-31 | 2005-10-31 | A method of tracking an object in a video stream
GB0522225.2 | 2005-10-31

Related Child Applications (1)

Application Number | Priority Date | Filing Date | Title
US12/417,244 (Continuation-In-Part, US20090245580A1) | 2006-07-21 | 2009-04-02 | Modifying parameters of an object detector based on detection information

Publications (2)

Publication Number | Publication Date
US20070097112A1 (en) | 2007-05-03
US8224023B2 (en) | 2012-07-17

Family

ID=35516086

Family Applications (1)

Application Number | Priority Date | Filing Date | Title
US11/490,156 (Expired - Fee Related, US8224023B2) | 2005-10-31 | 2006-07-21 | Method of tracking an object in a video stream

Country Status (2)

Country | Link
US (1) | US8224023B2 (en)
GB (1) | GB2431787B (en)


Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
GB2431787B (en) * | 2005-10-31 | 2009-07-01 | Hewlett Packard Development Co | A method of tracking an object in a video stream
JP4668220B2 (en) * | 2007-02-20 | 2011-04-13 | Sony Corporation | Image processing apparatus, image processing method, and program
US8139817B2 (en) * | 2007-04-27 | 2012-03-20 | Telewatch Inc. | Face image log creation
CN102054278B (en) * | 2011-01-05 | 2012-06-13 | Southwest Jiaotong University | Object tracking method based on grid contraction
CN102306304B (en) * | 2011-03-25 | 2017-02-08 | Shanghai Xingchen Electronic Technology Co., Ltd. | Face occluder identification method and device
US8306267B1 (en) * | 2011-05-09 | 2012-11-06 | Google Inc. | Object tracking
EP2528019A1 (en) * | 2011-05-26 | 2012-11-28 | Axis AB | Apparatus and method for detecting objects in moving images
US9767378B2 (en) * | 2015-08-31 | 2017-09-19 | Sony Corporation | Method and system to adaptively track objects
US10275669B2 (en) * | 2015-09-09 | 2019-04-30 | Lightmetrics Technologies Pvt. Ltd. | System and method for detecting objects in an automotive environment
CN109031262B (en) * | 2018-06-05 | 2023-05-05 | Lu Zhong | Positioning vehicle searching system and method thereof
CN110459026A (en) * | 2019-06-04 | 2019-11-15 | Evergrande Smart Technology Co., Ltd. | Specific person tracking and positioning method, platform, server and storage medium

Patent Citations (50)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US4692801A (en) * | 1985-05-20 | 1987-09-08 | Nippon Hoso Kyokai | Bandwidth compressed transmission system
US4873573A (en) * | 1986-03-19 | 1989-10-10 | British Broadcasting Corporation | Video signal processing for bandwidth reduction
US4853968A (en) * | 1987-09-21 | 1989-08-01 | Kulicke & Soffa Industries, Inc. | Pattern recognition apparatus and method
US5247353A (en) * | 1989-11-08 | 1993-09-21 | Samsung Co., Ltd. | Motion detection system for high definition television receiver
US5150207A (en) * | 1990-02-20 | 1992-09-22 | Sony Corporation | Video signal transmitting system
US6278803B1 (en) * | 1990-04-26 | 2001-08-21 | Canon Kabushiki Kaisha | Interpolation apparatus for offset sampling signals
US5617482A (en) * | 1990-08-15 | 1997-04-01 | Televerket | Method of motion compensation and elastic deformation in picture sequences
US5359426A (en) * | 1991-12-16 | 1994-10-25 | Sony Corporation | Reproducing a bandwidth expanded chroma signal with reduced noise and reduced flicker
US5611000A (en) * | 1994-02-22 | 1997-03-11 | Digital Equipment Corporation | Spline-based image registration
US5892849A (en) * | 1995-07-10 | 1999-04-06 | Hyundai Electronics Industries Co., Ltd. | Compaction/motion estimation method using a grid moving method for minimizing image information of an object
US5910909A (en) * | 1995-08-28 | 1999-06-08 | C-Cube Microsystems, Inc. | Non-linear digital filters for interlaced video signals and method thereof
US5691769A (en) * | 1995-09-07 | 1997-11-25 | Daewoo Electronics Co, Ltd. | Apparatus for encoding a contour of an object
US6064749A (en) * | 1996-08-02 | 2000-05-16 | Hirota; Gentaro | Hybrid tracking for augmented reality using both camera motion detection and landmark tracking
US6009437A (en) | 1997-03-25 | 1999-12-28 | Nec Research Institute, Inc. | Linear fitting with missing data: applications to structure-from-motion and to characterizing intensity images
US5917953A (en) * | 1997-07-07 | 1999-06-29 | The Morgan Crucible Company Plc | Geometry implicit sampler for polynomial surfaces over freeform two-dimensional domains
US6289050B1 (en) * | 1997-08-07 | 2001-09-11 | Matsushita Electric Industrial Co., Ltd. | Device and method for motion vector detection
US6215890B1 (en) * | 1997-09-26 | 2001-04-10 | Matsushita Electric Industrial Co., Ltd. | Hand gesture recognizing device
EP0955605A1 (en) | 1998-05-07 | 1999-11-10 | ESEC Management SA | Object position estimation method using digital image processing
US6782132B1 (en) * | 1998-08-12 | 2004-08-24 | Pixonics, Inc. | Video coding and reconstruction apparatus and methods
US6456340B1 (en) * | 1998-08-12 | 2002-09-24 | Pixonics, Llc | Apparatus and method for performing image transforms in a digital display system
US6546117B1 (en) * | 1999-06-10 | 2003-04-08 | University Of Washington | Video object segmentation using active contour modelling with global relaxation
US6480615B1 (en) * | 1999-06-15 | 2002-11-12 | University Of Washington | Motion estimation within a sequence of data frames using optical flow with adaptive gradients
US6671321B1 (en) * | 1999-08-31 | 2003-12-30 | Matsushita Electric Industrial Co., Ltd. | Motion vector detection device and motion vector detection method
US20050117804A1 (en) * | 1999-10-22 | 2005-06-02 | Takashi Ida | Method of extracting contour of image, method of extracting object from image, and video transmission system using the same method
US20030198385A1 (en) * | 2000-03-10 | 2003-10-23 | Tanner Cameron W. | Method apparatus for image analysis
US7133453B2 (en) * | 2000-05-30 | 2006-11-07 | Matsushita Electric Industrial Co., Ltd. | Motion vector detection apparatus for performing checker-pattern subsampling with respect to pixel arrays
US20020114525A1 (en) * | 2001-02-21 | 2002-08-22 | International Business Machines Corporation | Business method for selectable semantic codec pairs for very low data-rate video transmission
US20020159749A1 (en) * | 2001-03-15 | 2002-10-31 | Koninklijke Philips Electronics N.V. | Method and apparatus for motion estimation in image-sequences with efficient content-based smoothness constraint
US20020141615A1 (en) * | 2001-03-30 | 2002-10-03 | Mcveigh Jeffrey S. | Mechanism for tracking colored objects in a video sequence
US6760465B2 (en) * | 2001-03-30 | 2004-07-06 | Intel Corporation | Mechanism for tracking colored objects in a video sequence
US20020167537A1 (en) | 2001-05-11 | 2002-11-14 | Miroslav Trajkovic | Motion-based tracking with pan-tilt-zoom camera
US20030053661A1 (en) * | 2001-08-01 | 2003-03-20 | Canon Kabushiki Kaisha | Video feature tracking with loss-of-track detection
US20030067988A1 (en) * | 2001-09-05 | 2003-04-10 | Intel Corporation | Fast half-pixel motion estimation using steepest descent
US20030227552A1 (en) * | 2002-05-22 | 2003-12-11 | Olympus Optical Co., Ltd. | Imaging apparatus
US7817717B2 (en) * | 2002-06-18 | 2010-10-19 | Qualcomm Incorporated | Motion estimation techniques for video encoding
US7522748B2 (en) * | 2002-08-15 | 2009-04-21 | Sony Corporation | Method and apparatus for processing image data and semiconductor storage device
US20040186816A1 (en) * | 2003-03-17 | 2004-09-23 | Lienhart Rainer W. | Detector tree of boosted classifiers for real-time object detection and tracking
US20050089768A1 (en) * | 2003-08-28 | 2005-04-28 | Satoshi Tanaka | Method of creating predictive model, method of managing process steps, method of manufacturing semiconductor device, method of manufacturing photo mask, and computer program product
US7473495B2 (en) * | 2003-08-28 | 2009-01-06 | Kabushiki Kaisha Toshiba | Method of creating predictive model, method of managing process steps, method of manufacturing semiconductor device, method of manufacturing photo mask, and computer program product
US20050069037A1 (en) * | 2003-09-30 | 2005-03-31 | Hong Jiang | Rectangular-shape motion search
US20050089196A1 (en) * | 2003-10-24 | 2005-04-28 | Wei-Hsin Gu | Method for detecting sub-pixel motion for optical navigation device
US7433497B2 (en) * | 2004-01-23 | 2008-10-07 | Hewlett-Packard Development Company, L.P. | Stabilizing a sequence of image frames
US20070279494A1 (en) * | 2004-04-16 | 2007-12-06 | Aman James A | Automatic Event Videoing, Tracking And Content Generation
US20050286637A1 (en) * | 2004-06-25 | 2005-12-29 | Matsushita Electric Industrial Co., Ltd. | Motion vector detecting apparatus and method for detecting motion vector
US7885329B2 (en) * | 2004-06-25 | 2011-02-08 | Panasonic Corporation | Motion vector detecting apparatus and method for detecting motion vector
US7440587B1 (en) * | 2004-11-24 | 2008-10-21 | Adobe Systems Incorporated | Method and apparatus for calibrating sampling operations for an object detection process
US7616780B2 (en) * | 2004-11-24 | 2009-11-10 | Adobe Systems, Incorporated | Method and apparatus for calibrating sampling operations for an object detection process
US20060140446A1 (en) * | 2004-12-27 | 2006-06-29 | Trw Automotive U.S. Llc | Method and apparatus for determining the position of a vehicle seat
US20070047834A1 (en) * | 2005-08-31 | 2007-03-01 | International Business Machines Corporation | Method and apparatus for visual background subtraction with one or more preprocessing modules
US20070097112A1 (en) * | 2005-10-31 | 2007-05-03 | Hewlett-Packard Development Company, L.P. | Method of tracking an object in a video stream

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20130342438A1 (en) * | 2012-06-26 | 2013-12-26 | Shahzad Malik | Method and apparatus for measuring audience size for a digital sign
US8766914B2 (en) * | 2012-06-26 | 2014-07-01 | Intel Corporation | Method and apparatus for measuring audience size for a digital sign
US20140240553A1 (en) * | 2013-02-28 | 2014-08-28 | Nokia Corporation | Method and apparatus for automatically rendering dolly zoom effect
US9025051B2 (en) * | 2013-02-28 | 2015-05-05 | Nokia Technologies Oy | Method and apparatus for automatically rendering dolly zoom effect
US20190050694A1 (en) * | 2017-08-10 | 2019-02-14 | Fujitsu Limited | Control method, non-transitory computer-readable storage medium for storing control program, and control apparatus
US10672131B2 (en) | 2017-08-10 | 2020-06-02 | Fujitsu Limited | Control method, non-transitory computer-readable storage medium, and control apparatus
US10803364B2 (en) * | 2017-08-10 | 2020-10-13 | Fujitsu Limited | Control method, non-transitory computer-readable storage medium for storing control program, and control apparatus

Also Published As

Publication number | Publication date
GB2431787A (en) | 2007-05-02
GB0522225D0 (en) | 2005-12-07
GB2431787B (en) | 2009-07-01
US20070097112A1 (en) | 2007-05-03


Legal Events

Code | Title | Description
AS | Assignment | Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., TEXAS; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT-PACKARD LIMITED;REEL/FRAME:018053/0513; Effective date: 20060717
ZAAA | Notice of allowance and fees due | Free format text: ORIGINAL CODE: NOA
ZAAB | Notice of allowance mailed | Free format text: ORIGINAL CODE: MN/=.
STCF | Information on status: patent grant | Free format text: PATENTED CASE
CC | Certificate of correction
REMI | Maintenance fee reminder mailed
FPAY | Fee payment | Year of fee payment: 4
SULP | Surcharge for late payment
MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY; Year of fee payment: 8
FEPP | Fee payment procedure | Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
LAPS | Lapse for failure to pay maintenance fees | Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
STCH | Information on status: patent discontinuation | Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362
FP | Lapsed due to failure to pay maintenance fee | Effective date: 20240717

