ORB (Oriented FAST and Rotated BRIEF) is a fusion of the FAST keypoint detector and the BRIEF descriptor, with additions to improve performance. FAST (Features from Accelerated Segment Test) detects features in the provided image, and ORB also applies it over an image pyramid to produce multiscale features. FAST does not compute orientations or descriptors for the features it finds, and this is where BRIEF comes into play.

ORB uses BRIEF descriptors, but BRIEF performs poorly under rotation. So what ORB does is rotate BRIEF according to the orientation of each keypoint: using the orientation of the patch, it computes a rotation matrix and rotates BRIEF to obtain a rotated (steered) version. ORB is an efficient alternative to the SIFT and SURF algorithms for feature extraction, in terms of computation cost, matching performance, and, mainly, patents: SIFT and SURF are patented and you are supposed to pay for their use, but ORB is not patented.

In this tutorial, we are going to learn how to find the features in one image and match them with the features of another image.

Algorithm

1. Read the query image and the train image, and convert both to grayscale.
2. Initialize the ORB detector, then detect the keypoints and compute the descriptors for both images.
3. Match the descriptors of the two images with a brute-force matcher.
4. Sort the matches by distance and draw the best ones onto a combined output image.

Below is the implementation.

Input image:  

Python3

import cv2

# Read the query image and the train image. The query image
# is what you need to find inside the train image. Save both
# in the same directory as this script.
query_img = cv2.imread('query.jpg')
train_img = cv2.imread('train.jpg')

# Convert both images to grayscale
query_img_bw = cv2.cvtColor(query_img, cv2.COLOR_BGR2GRAY)
train_img_bw = cv2.cvtColor(train_img, cv2.COLOR_BGR2GRAY)

# Initialize the ORB detector algorithm
orb = cv2.ORB_create()

# Detect the keypoints and compute the descriptors
# for the query image and the train image
queryKeypoints, queryDescriptors = orb.detectAndCompute(query_img_bw, None)
trainKeypoints, trainDescriptors = orb.detectAndCompute(train_img_bw, None)

# Match the descriptors. ORB descriptors are binary strings,
# so the Hamming norm is the appropriate distance measure;
# crossCheck keeps only mutually-best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(queryDescriptors, trainDescriptors)

# Sort the matches by distance so matches[:20] really are
# the 20 strongest ones
matches = sorted(matches, key=lambda m: m.distance)

# drawMatches() draws both images side by side with lines
# connecting the matched keypoints; keep the 20 best matches
final_img = cv2.drawMatches(query_img, queryKeypoints,
                            train_img, trainKeypoints,
                            matches[:20], None)
final_img = cv2.resize(final_img, (1000, 650))

# Show the final image
cv2.imshow("Matches", final_img)
cv2.waitKey(3000)
cv2.destroyAllWindows()

Output: 

[Output image: the query and train images shown side by side, with lines drawn between the 20 best matching keypoints]
