Computer Vision PS3 Solved


Setup

Note that we will be using a new conda environment for this project! If you run into module import errors, try running pip install -e . again, and if that still doesn’t work, you may have to create a fresh environment.

  1. Install Miniconda. It doesn’t matter whether you use Python 2 or 3 because we will create our own environment that uses 3 anyway.

  2. Open the terminal:

     a. On Windows: open the installed Conda prompt to run the command.
     b. On MacOS: open a terminal window to run the command.
     c. On Linux: open a terminal window to run the command.

  3. Navigate to the folder where you have the project.

Fourth, finish get_features() by implementing a SIFT-like feature. Accuracy should increase to 70% on the Notre Dame pair, 40% on Mount Rushmore, and 15% on Episcopal Gaudi if you only evaluate your 100 most confident matches (these are just estimates). These accuracies still aren’t great, because the human-selected keypoints from cheat_interest_points() might not match particularly well according to your feature.
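
To make the descriptor structure concrete, here is a minimal sketch of a SIFT-like 128-D feature: a 16x16 window split into a 4x4 grid of cells, with an 8-bin, magnitude-weighted orientation histogram per cell. The function name, windowing convention, and normalization below are illustrative assumptions, not the starter-code interface:

```python
import numpy as np

def sift_like_feature(magnitudes, orientations, x, y, feature_width=16):
    """Sketch of a SIFT-like descriptor at integer keypoint (x, y).

    Assumes `magnitudes` and `orientations` (radians in [-pi, pi]) were
    precomputed for the whole image, and that the keypoint lies at least
    feature_width/2 pixels from the border (cf. remove_border_vals()).
    """
    half = feature_width // 2
    # NumPy images index as [row, col] = [y, x].
    mag = magnitudes[y - half:y + half, x - half:x + half]
    ori = orientations[y - half:y + half, x - half:x + half]

    edges = np.linspace(-np.pi, np.pi, 9)  # 9 edges -> 8 orientation bins
    cell = feature_width // 4
    hists = []
    for i in range(4):
        for j in range(4):
            m = mag[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            o = ori[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            hist, _ = np.histogram(o, bins=edges, weights=m)
            hists.append(hist)

    feat = np.concatenate(hists)                 # 4 * 4 * 8 = 128 dimensions
    feat = feat / (np.linalg.norm(feat) + 1e-8)  # unit-normalize
    return np.sqrt(feat)  # raising to a power < 1 often adds a few percent
```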

Fifth, stop using cheat_interest_points() and implement get_interest_points(). Harris corners aren’t as good as ground-truth points which we know correspond, so accuracy may drop. On the other hand, you can get hundreds or even a few thousand interest points, so you have more opportunities to find confident matches. If you only evaluate the most confident 100 matches (see the num_pts_to_evaluate parameter) on the Notre Dame pair, you should be able to achieve >80% accuracy. You will likely need to do extra credit to get high accuracy on Mount Rushmore and Episcopal Gaudi.
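
For get_interest_points(), a minimal sketch of the classic Harris response R = det(M) - alpha * trace(M)^2 is below. For brevity it smooths with scipy.ndimage.gaussian_filter, which is fine for testing but not for your final code: there, the filtering would go through your own my_filter2D() with a kernel from get_gaussian_kernel(), per the forbidden-function rules later in this handout. The default alpha and sigma are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter  # testing only; see note above

def harris_response(image, alpha=0.06, sigma=1.0):
    """Harris corner response map for a 2D grayscale float image."""
    # np.gradient returns derivatives along (rows, cols), i.e. (Iy, Ix).
    Iy, Ix = np.gradient(image)

    # Entries of the second-moment matrix M, smoothed over a Gaussian window.
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)

    # R is large only where both eigenvalues of M are large (a corner).
    det_M = Sxx * Syy - Sxy ** 2
    trace_M = Sxx + Syy
    return det_M - alpha * trace_M ** 2
```

Thresholding and non-maximum suppression over this response map (your non_max_suppression()) then yield the final interest points.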

Tips, Tricks, and Common Problems

Make sure you’re not swapping x and y coordinates at some point. If your interest points aren’t showing up where you expect, or if you’re getting out-of-bounds errors, you might be swapping x and y coordinates. Remember, images expressed as NumPy arrays are accessed as image[y, x].
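
A quick illustration of the convention:

```python
import numpy as np

img = np.zeros((480, 640))  # 480 rows (height) x 640 columns (width)
x, y = 600, 100             # x indexes columns, y indexes rows
img[y, x] = 1.0             # correct: row (y) first, column (x) second
# img[x, y] would raise IndexError: index 600 is out of bounds for axis 0
```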

Make sure your features aren’t somehow degenerate. You can visualize your features with plt.imshow(image1_features), although you may need to normalize them first. If the features are mostly zero or mostly identical, you may have made a mistake.
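
For example, a simple min-max rescale before display might look like this (random data stands in for your actual image1_features array):

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in for your descriptors; in the notebook this would be image1_features.
image1_features = np.random.rand(200, 128)

# Rescale to [0, 1] so imshow renders the full dynamic range.
f = image1_features
f = (f - f.min()) / (f.max() - f.min() + 1e-8)

plt.imshow(f, aspect='auto', cmap='gray')  # one row per keypoint
plt.xlabel('descriptor dimension')
plt.ylabel('keypoint index')
plt.show()
```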

Potentially useful NumPy, OpenCV, and SciPy functions:

np.arctan2(), np.sort(), np.reshape(), np.newaxis, np.argsort(), np.gradient(), np.histogram(), np.hypot(), np.fliplr(), np.flipud(), cv2.getGaussianKernel()
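
A few of these in action, as a sketch (the array sizes are arbitrary placeholders):

```python
import numpy as np
import cv2

img = np.random.rand(32, 32).astype(np.float32)  # stand-in grayscale patch

# np.gradient returns derivatives along (rows, cols), i.e. (Iy, Ix).
Iy, Ix = np.gradient(img)
magnitudes = np.hypot(Ix, Iy)      # sqrt(Ix**2 + Iy**2), elementwise
orientations = np.arctan2(Iy, Ix)  # radians in [-pi, pi]

# Magnitude-weighted 8-bin orientation histogram.
hist, edges = np.histogram(orientations, bins=8,
                           range=(-np.pi, np.pi), weights=magnitudes)

# cv2.getGaussianKernel(ksize, sigma) returns a (ksize, 1) column vector;
# its outer product is the 2D Gaussian window used in second_moments().
g1d = cv2.getGaussianKernel(7, 1.5)
g2d = g1d @ g1d.T  # (7, 7), sums to ~1.0
```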

Forbidden functions (you may use these for testing, but not in your final code):

cv2.SURF(), cv2.BFMatcher(), cv2.BFMatcher().match(), cv2.FlannBasedMatcher(), knnMatch(), cv2.BFMatcher().knnMatch(), cv2.HOGDescriptor(), cv2.cornerHarris(), cv2.FastFeatureDetector(), cv2.ORB(), skimage.feature, skimage.feature.hog(), skimage.feature.daisy, skimage.feature.corner_harris(), skimage.feature.corner_shi_tomasi(), skimage.feature.match_descriptors(), skimage.feature.ORB(), scipy.signal.convolve(), cv2.filter2D(), cv2.Sobel()

We haven’t enumerated all possible forbidden functions here, but using anyone else’s code that performs filtering, interest point detection, feature computation, or feature matching for you is forbidden.
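
Since the built-in matchers above are off-limits, here is a from-scratch sketch of pairwise descriptor distances plus nearest-neighbor distance-ratio matching. The signatures mirror the deliverables listed under Submission Instructions below, but the exact starter-code interface and confidence convention are assumptions:

```python
import numpy as np

def compute_feature_distances(feats1, feats2):
    """Pairwise Euclidean distances: feats1 (n1, d), feats2 (n2, d) -> (n1, n2)."""
    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b, computed without Python loops.
    sq1 = np.sum(feats1 ** 2, axis=1)[:, np.newaxis]
    sq2 = np.sum(feats2 ** 2, axis=1)[np.newaxis, :]
    d2 = sq1 + sq2 - 2.0 * feats1 @ feats2.T
    return np.sqrt(np.maximum(d2, 0.0))  # clamp tiny negatives from rounding

def match_features(feats1, feats2):
    """Ratio-test matching; assumes feats2 has at least two descriptors."""
    dists = compute_feature_distances(feats1, feats2)
    order = np.argsort(dists, axis=1)   # neighbors sorted per row
    rows = np.arange(dists.shape[0])
    nn1 = dists[rows, order[:, 0]]      # distance to nearest neighbor
    nn2 = dists[rows, order[:, 1]]      # distance to second nearest
    ratios = nn1 / (nn2 + 1e-8)         # low ratio = distinctive match
    confidences = 1.0 - ratios          # one possible confidence convention
    matches = np.column_stack([rows, order[:, 0]])
    keep = np.argsort(-confidences)     # most confident first
    return matches[keep], confidences[keep]
```

Sorting by confidence is what makes the "most confident 100 matches" evaluation above meaningful.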

Rubric

Code: The score for each part is provided below. Please refer to the submission results on Gradescope for a detailed breakdown.

Part 1: Interest point detection     30
Part 2: Local feature description    35
Part 3: Feature matching             10
Part 4: Report                       25
Extra credit: Bells & Whistles       20
Total                                100 (+20)

Submission Instructions and Deliverables

Unit Tests: Check that you pass all local unit tests by entering the proj3_unit_tests directory and running the command pytest ./. This command will run all the unit tests once more; add a screenshot of the output to the report. Ensure that the conda environment proj3 is being used.

Code zip: The following code deliverables will be uploaded as a zip file on Gradescope.

  1. proj3_code/student_harris.py

     a. get_interest_points()
     b. my_filter2D()
     c. get_gradients()
     d. get_gaussian_kernel()
     e. second_moments()
     f. corner_response()
     g. non_max_suppression()
     h. remove_border_vals()

  2. proj3_code/student_sift.py

     a. get_magnitudes_and_orientations()
     b. get_feat_vec()
     c. get_features()

  3. proj3_code/student_feature_matching.py

     a. compute_feature_distances()
     b. match_features()

  4. proj3_code/proj3.ipynb

  5. proj3_code/utils.py

Do not create this zip manually! You are supposed to use the command python zip_submission.py --gt_username <username> for this.

Report: The final thing to upload is the PDF export of the report on Gradescope. The report is worth 25 points. Please refer to the pptx template, where we have detailed the points associated with each question. To summarize, the deliverables are as follows:

Submit the code as a zipfile on Gradescope at PS3 – Code.

Submit the report as a PDF on Gradescope at PS3 – Report. There is no submission to be done on Canvas. Good luck!

This iteration of the assignment is developed by Viraj Prabhu and Judy Hoffman. This assignment was originally developed by James Hays, Samarth Brahmbhatt, and John Lambert, and updated by Judy Hoffman, Mitch Donley, and Vijay Upadhya.

