US20190370979A1 - Feature point identification in sparse optical flow based tracking in a computer vision system - Google Patents

Feature point identification in sparse optical flow based tracking in a computer vision system

Info

Publication number
US20190370979A1
US20190370979A1 (also published as US 2019/0370979 A1; application US16/532,658)
Authority
US
United States
Prior art keywords
feature points
image
frame
tracked
subset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US16/532,658
Other versions
US11915431B2 (en)
Inventor
Deepak Kumar Poddar
Anshu Jain
Desappan Kumar
Pramod Kumar Swami
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to US16/532,658 (patent US11915431B2 (en))
Publication of US20190370979A1 (en)
Priority to US18/414,772 (patent US20240153105A1 (en))
Application granted
Publication of US11915431B2 (en)
Status: Active
Adjusted expiration


Abstract

A method for sparse optical flow based tracking in a computer vision system is provided. The method includes detecting feature points in a frame captured by a monocular camera in the computer vision system to generate a plurality of detected feature points, and generating a binary image indicating the locations of the detected feature points with a bit value of one, where all other locations have a bit value of zero. A second binary image indicates neighborhoods of currently tracked points: locations within the neighborhoods have a bit value of zero and all other locations have a bit value of one. A binary AND of the two binary images produces a third binary image in which locations with a bit value of one indicate new feature points detected in the frame.
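The mask-and-AND scheme described in the abstract can be sketched in a few lines of NumPy. The function name, coordinate convention, and `radius` parameter below are illustrative assumptions, not taken from the patent; `radius=1` corresponds to the 3×3 neighborhood and `radius=2` to the 5×5 neighborhood recited in claim 13.

```python
import numpy as np

def new_feature_points(detected, tracked, shape, radius=1):
    """Identify detected feature points that are not near any tracked point.

    detected, tracked: iterables of (row, col) coordinates.
    shape: (height, width) of the frame.
    radius: half-width of the neighborhood around each tracked point
            (radius=1 gives a 3x3 neighborhood, radius=2 gives 5x5).
    """
    h, w = shape
    # Binary image 1: ones at the detected feature point locations,
    # zeros everywhere else.
    det_img = np.zeros((h, w), dtype=np.uint8)
    for r, c in detected:
        det_img[r, c] = 1
    # Binary image 2: zeros in the neighborhood of each tracked point,
    # ones everywhere else.
    mask_img = np.ones((h, w), dtype=np.uint8)
    for r, c in tracked:
        mask_img[max(0, r - radius):r + radius + 1,
                 max(0, c - radius):c + radius + 1] = 0
    # Binary AND: surviving ones are the newly detected feature points.
    new_img = det_img & mask_img
    return list(zip(*np.nonzero(new_img)))
```

Under this sketch, a detected point that falls inside the neighborhood of a tracked point is suppressed, while detections in unoccupied regions survive the AND and can be appended to the tracked set.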


Claims (20)

What is claimed is:
1. A method comprising:
receiving a frame of an image;
detecting a set of feature points within the frame, wherein the set of feature points includes a set of tracked feature points;
generating a first image that indicates locations of the set of feature points within the frame;
generating a second image that indicates neighborhoods of the set of tracked feature points; and
generating a third image based on the first image and the second image that indicates locations of a subset of the set of feature points that is different from the set of tracked feature points.
2. The method of claim 1, wherein the generating of the third image is such that the third image indicates a location of a first feature of the subset based on the first feature being outside the neighborhoods indicated by the second image.
3. The method of claim 1, wherein the frame is a first frame, and the set of tracked feature points is associated with a second frame that is prior to the first frame.
4. The method of claim 1 further comprising adding the subset of the set of feature points to the set of tracked feature points.
5. The method of claim 1, wherein each of the neighborhoods indicated by the second image is larger in area than a respective feature of the set of feature points.
6. The method of claim 1 further comprising determining a set of coordinates for each feature point of the subset of the set of feature points based on the third image.
7. The method of claim 1, wherein:
the generating of the first image is such that the locations of the set of feature points are indicated with a binary one and a remainder of the first image is indicated with a binary zero;
the generating of the second image is such that the neighborhoods of the set of tracked feature points are indicated with a binary zero and a remainder of the second image is indicated with a binary one; and
the generating of the third image includes applying an AND function to the first image and the second image.
8. A computer vision system comprising:
a camera configured to capture a frame;
a detection circuit coupled to the camera configured to detect feature points in the frame, wherein the feature points include a set of tracked feature points; and
a feature point identification circuit coupled to the detection circuit and configured to:
receive the detected feature points from the detection circuit;
receive the set of tracked feature points;
generate a first image that indicates locations of the detected feature points;
generate a second image that indicates neighborhoods of the set of tracked feature points; and
generate a third image based on the first image and the second image that indicates locations of a subset of the detected feature points that is outside the neighborhoods of the set of tracked feature points.
9. The computer vision system of claim 8, wherein the feature point identification circuit is further configured to add the subset of the detected feature points that is outside the neighborhoods to the set of tracked feature points.
10. The computer vision system of claim 9 further comprising a sparse optical flow circuit coupled to the feature point identification circuit to receive the set of tracked feature points that includes the added subset of the detected feature points.
11. The computer vision system of claim 8, wherein the feature point identification circuit is further configured to determine coordinates of the subset of detected feature points.
12. The computer vision system of claim 8, wherein each of the neighborhoods indicated by the second image is larger in area than a corresponding point of the set of tracked feature points.
13. The computer vision system of claim 12, wherein each of the neighborhoods has an area selected from a group consisting of: a 3×3 pixel area and a 5×5 pixel area.
14. The computer vision system of claim 8, wherein:
the first image indicates the locations of the detected feature points with a binary one and indicates a remainder of the first image with a binary zero;
the second image indicates the neighborhoods of the set of tracked feature points with a binary zero and indicates a remainder of the second image with a binary one; and
the feature point identification circuit is configured to generate the third image by performing an AND function on the first image and the second image.
15. A non-transitory computer readable medium storing software instructions that, when executed by one or more processors, cause the one or more processors to:
detect a set of feature points within a frame;
generate a first image that indicates locations of the set of feature points within the frame;
generate a second image that indicates neighborhoods of a first subset of the set of feature points; and
generate a third image based on the first image and the second image that indicates locations of a second subset of the set of feature points that is different from the first subset.
16. The non-transitory computer readable medium of claim 15, wherein the instructions that generate the third image are configured such that the second subset of the set of feature points is outside the neighborhoods of the first subset in the second image.
17. The non-transitory computer readable medium of claim 15 comprising further instructions that cause the one or more processors to determine a set of coordinates for each feature point of the second subset.
18. The non-transitory computer readable medium of claim 15, wherein the first subset of the set of feature points is a set of tracked feature points.
19. The non-transitory computer readable medium of claim 18 comprising further instructions that cause the one or more processors to add the second subset to the set of tracked feature points.
20. The non-transitory computer readable medium of claim 18, wherein the frame is a first frame, and the set of tracked feature points is associated with a second frame that is prior to the first frame.
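Claims 6, 11, and 17 recite determining a set of coordinates for the new feature points from the third binary image. A minimal sketch of that step, under the assumption that the third image is a dense 0/1 array, is a row-by-row scan for set bits; the function name is hypothetical and not from the patent.

```python
import numpy as np

def coordinates_from_binary_image(binary_img):
    """Scan a binary image row by row and return the (row, col)
    coordinates of every set bit.

    This linear scan is an illustrative sketch; an embedded
    implementation might instead pack each row into machine words and
    use count-trailing-zeros to jump between set bits.
    """
    coords = []
    for r, row in enumerate(binary_img):
        for c in np.flatnonzero(row):  # column indices of nonzero bits
            coords.append((r, int(c)))
    return coords
```

The recovered coordinates can then be appended to the tracked set, as in claims 4, 9, and 19.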
US16/532,658 | 2015-12-30 | 2019-08-06 | Feature point identification in sparse optical flow based tracking in a computer vision system | Active, adjusted expiration 2037-02-18 | US11915431B2 (en)

Priority Applications (2)

Application Number | Priority Date | Filing Date | Title
US16/532,658 (US11915431B2 (en)) | 2015-12-30 | 2019-08-06 | Feature point identification in sparse optical flow based tracking in a computer vision system
US18/414,772 (US20240153105A1 (en)) | 2015-12-30 | 2024-01-17 | Feature point identification in sparse optical flow based tracking in a computer vision system

Applications Claiming Priority (4)

Application Number | Priority Date | Filing Date | Title
IN7079/CHE/2015 | 2015-12-30 | |
IN7079CH2015 | 2015-12-30 | |
US15/266,149 (US10460453B2 (en)) | 2015-12-30 | 2016-09-15 | Feature point identification in sparse optical flow based tracking in a computer vision system
US16/532,658 (US11915431B2 (en)) | 2015-12-30 | 2019-08-06 | Feature point identification in sparse optical flow based tracking in a computer vision system

Related Parent Applications (1)

Application Number | Title | Priority Date | Filing Date
US15/266,149 | Continuation (US10460453B2 (en)) | 2015-12-30 | 2016-09-15

Related Child Applications (1)

Application Number | Title | Priority Date | Filing Date
US18/414,772 | Continuation (US20240153105A1 (en)) | 2015-12-30 | 2024-01-17

Publications (2)

Publication Number | Publication Date
US20190370979A1 (en) | 2019-12-05
US11915431B2 (en) | 2024-02-27

Family

ID=59227159

Family Applications (3)

Application NumberTitlePriority DateFiling Date
US15/266,149Active2037-01-10US10460453B2 (en)2015-12-302016-09-15Feature point identification in sparse optical flow based tracking in a computer vision system
US16/532,658Active2037-02-18US11915431B2 (en)2015-12-302019-08-06Feature point identification in sparse optical flow based tracking in a computer vision system
US18/414,772PendingUS20240153105A1 (en)2015-12-302024-01-17Feature point identification in sparse optical flow based tracking in a computer vision system

Family Applications Before (1)

Application NumberTitlePriority DateFiling Date
US15/266,149Active2037-01-10US10460453B2 (en)2015-12-302016-09-15Feature point identification in sparse optical flow based tracking in a computer vision system

Family Applications After (1)

Application NumberTitlePriority DateFiling Date
US18/414,772PendingUS20240153105A1 (en)2015-12-302024-01-17Feature point identification in sparse optical flow based tracking in a computer vision system

Country Status (1)

Country | Link
US (3) | US10460453B2 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US10241520B2 (en)* | 2016-12-22 | 2019-03-26 | TCL Research America Inc. | System and method for vision-based flight self-stabilization by deep gated recurrent Q-networks
US10140719B2 (en)* | 2016-12-22 | 2018-11-27 | TCL Research America Inc. | System and method for enhancing target tracking via detector and tracker fusion for unmanned aerial vehicles
CN111583295B (en)* | 2020-04-28 | 2022-08-12 | Tsinghua University | A real-time dense optical flow computation method based on image block binarized hash representation
CN112377332B (en)* | 2020-10-19 | 2022-01-04 | Beijing Institute of Astronautical Systems Engineering | Rocket engine polarity testing method and system based on computer vision

Citations (15)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US5953055A (en)* | 1996-08-08 | 1999-09-14 | NCR Corporation | System and method for detecting and analyzing a queue
US6037988A (en)* | 1996-03-22 | 2000-03-14 | Microsoft Corp | Method for generating sprites for object-based coding systems using masks and rounding average
US6192156B1 (en)* | 1998-04-03 | 2001-02-20 | Synapix, Inc. | Feature tracking using a dense feature array
US20020051057A1 (en)* | 2000-10-26 | 2002-05-02 | Kunio Yata | Following device
JP2002150295A (en)* | 2000-11-09 | 2002-05-24 | Oki Electric Ind Co Ltd | Object detection method and object detection device
US20030081836A1 (en)* | 2001-10-31 | 2003-05-01 | Infowrap, Inc. | Automatic object extraction
US20040096119A1 (en)* | 2002-09-12 | 2004-05-20 | Inoe Technologies, LLC | Efficient method for creating a viewpoint from plurality of images
US20060008118A1 (en)* | 2004-07-02 | 2006-01-12 | Mitsubishi Denki Kabushiki Kaisha | Image processing apparatus and image monitoring system
US7136518B2 (en)* | 2003-04-18 | 2006-11-14 | Medispectra, Inc. | Methods and apparatus for displaying diagnostic data
US7227893B1 (en)* | 2002-08-22 | 2007-06-05 | Xlabs Holdings, LLC | Application-specific object-based segmentation and recognition system
CN101266689A (en)* | 2008-04-23 | 2008-09-17 | Beijing Vimicro Corporation | A mobile target detection method and device
US20090136158A1 (en)* | 2007-11-22 | 2009-05-28 | Semiconductor Energy Laboratory Co., Ltd. | Image processing method, image display system, and computer program
US20090225183A1 (en)* | 2008-03-05 | 2009-09-10 | Semiconductor Energy Laboratory Co., Ltd. | Image processing method, image processing system, and computer program
US20120129605A1 (en)* | 2010-11-19 | 2012-05-24 | Total Immersion | Method and device for detecting and tracking non-rigid objects in movement, in real time, in a video stream, enabling a user to interact with a computer system
US20150355309A1 (en)* | 2014-06-05 | 2015-12-10 | University of Dayton | Target tracking implementing concentric ringlets associated with target features

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20030165259A1 (en)* | 2002-02-15 | 2003-09-04 | James S. Balent | Signal analysis using image processing techniques
US8121414B2 (en)* | 2007-06-13 | 2012-02-21 | Sharp Kabushiki Kaisha | Image processing method, image processing apparatus, and image forming apparatus
JP2011008687A (en)* | 2009-06-29 | 2011-01-13 | Sharp Corp | Image processor
US9092692B2 (en)* | 2012-09-13 | 2015-07-28 | Los Alamos National Security, LLC | Object detection approach using generative sparse, hierarchical networks with top-down and lateral connections for combining texture/color detection and shape/contour detection
TW201421423A (en)* | 2012-11-26 | 2014-06-01 | Pixart Imaging Inc. | Image sensor and operating method thereof
US8948454B2 (en)* | 2013-01-02 | 2015-02-03 | International Business Machines Corporation | Boosting object detection performance in videos
AU2014333927A1 (en)* | 2013-10-07 | 2016-03-03 | Ventana Medical Systems, Inc. | Systems and methods for comprehensive multi-assay tissue analysis
US9251416B2 (en)* | 2013-11-19 | 2016-02-02 | Xerox Corporation | Time scale adaptive motion detection
WO2015086537A1 (en)* | 2013-12-10 | 2015-06-18 | Thomson Licensing | Method for building a set of color correspondences from a set of feature correspondences in a set of corresponding images
US10037456B2 (en)* | 2015-09-04 | 2018-07-31 | The Friedland Group, Inc. | Automated methods and systems for identifying and assigning attributes to human-face-containing subimages of input images
CN108470354B (en)* | 2018-03-23 | 2021-04-27 | Yunnan University | Video target tracking method and device and implementation device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Calonder, M.; Lepetit, V.; Strecha, C.; Fua, P. BRIEF: Binary robust independent elementary features. In Proceedings of the European Conference on Computer Vision (ECCV 2010), Heraklion, Crete, Greece, 5–11 September 2010; Volume 6314, pp. 778–792*

Also Published As

Publication number | Publication date
US20240153105A1 (en) | 2024-05-09
US20170193669A1 (en) | 2017-07-06
US10460453B2 (en) | 2019-10-29
US11915431B2 (en) | 2024-02-27

Similar Documents

Publication | Title
US20240153105A1 (en) | Feature point identification in sparse optical flow based tracking in a computer vision system
JP7222582B2 (en) | Method for structure-from-motion (SfM) processing in computer vision systems
US8682109B2 (en) | Method and system of reconstructing super-resolution image
US10482626B2 (en) | Around view monitoring systems for vehicle and calibration methods for calibrating image capture devices of an around view monitoring system using the same
WO2019020103A1 (en) | Target recognition method and apparatus, storage medium and electronic device
EP3709266A1 (en) | Human-tracking methods, apparatuses, systems, and storage media
CN207947851U (en) | Vehicle system, imaging system and device for error detection of vehicle system
Zhao et al. | Real-time traffic sign detection using SURF features on FPGA
US11074716B2 (en) | Image processing for object detection
US11688078B2 (en) | Video object detection
CN110363211B (en) | Detection network model and target detection method
CN110443245B (en) | A method, device and equipment for locating license plate area in unrestricted scene
US20230386055A1 (en) | Image feature matching method, computer device, and storage medium
Van der Wal et al. | FPGA acceleration for feature based processing applications
US20190362224A1 (en) | Artificial neural network apparatus and operating method for the same
Zhai et al. | Standard definition ANPR system on FPGA and an approach to extend it to HD
CN103841340A (en) | Image sensor and operating method thereof
US20090245586A1 (en) | Image processing apparatus and method for real-time motion detection
CN110298302B (en) | Human body target detection method and related equipment
Zhao et al. | An efficient real-time FPGA implementation for object detection
CN112241660A (en) | A vision-based anti-theft monitoring method and device
CN113052763B (en) | Fusion image generation method and device, computer equipment and storage medium
CN112669351B (en) | A fast movement detection method and device
US12211226B2 (en) | Method for calculating intersection over union between target region and designated region in an image and electronic device using the same
US11706546B2 (en) | Image sensor with integrated single object class detection deep neural network (DNN)

Legal Events

Date | Code | Title | Description

FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED
STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED
STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED
STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED
STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED
STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED
STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED
STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS
STPP | Information on status: patent application and granting procedure in general | Free format text: AWAITING TC RESP., ISSUE FEE NOT PAID
STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS
STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED
STCF | Information on status: patent grant | Free format text: PATENTED CASE

