Motion Field Solution


Objectives:

  • Learn optical flow and its limitation.

  • Learn feature detection and feature tracking.

  • Learn to track feature points across multiple frames.

  • Learn to compute the fundamental matrix from images taken by one camera from multiple views.

  • Learn to compute the essential matrix from images taken by one camera from multiple views.

Instructions:

  • You will lose 10 points if any of the following requirements is not met:

o Generate a PDF file that includes your discussion for Task 1 and Task 2 and four sets of images for Task 3.

o Submit your PDF file, two videos (Tasks 1 and 2), and source code file(s) in one zip file without a folder or directory.

o Use your first name and last name (e.g., justinsmith.zip) as the file name.

  • Log in to myBYU and submit your work through the BYU Learning Suite online submission.

  • Download the Optical Flow image sequence (17 frames) from BYU Learning Suite for Tasks 1 and 2.

  • Download the ParallelCube, ParallelReal, TurnCube, and TurnReal image sequences (18 frames) from BYU Learning Suite for Task 3.

  • Convert images to 8-bit single channel using cvtColor() with the CV_RGB2GRAY flag before processing.

  • Undistortion of the images before processing is not necessary.

  • Use the goodFeaturesToTrack() function to detect the initial features to track; a minimal sketch of these preprocessing steps follows this list.
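
For reference, here is a minimal sketch of the preprocessing steps above using the OpenCV Python bindings. Filenames and parameter values are illustrative, not required; note that cv2.imread() loads images in BGR order, so the modern equivalent of the CV_RGB2GRAY conversion is cv2.COLOR_BGR2GRAY.

    import cv2

    def load_gray(path):
        # Load a frame and convert it to an 8-bit single-channel image.
        img = cv2.imread(path)                        # 8-bit, 3-channel (BGR)
        return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Detect ~500 corners to track; qualityLevel and minDistance are
    # illustrative choices, not required values.
    gray = load_gray("frame01.png")                   # hypothetical filename
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=500,
                                      qualityLevel=0.01, minDistance=10)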

Task 1: Optical Flow (25 points)

  • Use the pyramidal Lucas-Kanade (LK) method to obtain the motion field of ~500 features (corners) for each frame pair; a sketch for one frame pair follows this list.

  • Choose your own pyramid level and other parameters.

  • Obtain three motion field sequences that are calculated between Frames n and n+1 (n = 1, 2, … 16), Frames n and n+2 (skip one frame for n = 1, 2, … 15), and Frames n and n+3 (skip two frames for n = 1, 2, … 14).

  • Show each motion vector by drawing a green dot at the original location of each feature and a red line to where it moves in the next frame (5-point deduction if motion vectors are not shown properly).

  • Include your observation and what you learn from these three different scenarios in your PDF file.

  • Combine these three motion field sequences into one video and submit the video.

  • Submit your code.
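
A minimal sketch of one frame pair under these requirements, assuming cv2.calcOpticalFlowPyrLK() from the OpenCV Python bindings; the window size and pyramid level below are placeholder choices, not required values.

    import cv2

    def lk_motion_field(prev_gray, next_gray):
        # Detect ~500 corners in the earlier frame.
        p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                     qualityLevel=0.01, minDistance=10)
        # Pyramidal LK; winSize and maxLevel are your own choices.
        p1, status, err = cv2.calcOpticalFlowPyrLK(
            prev_gray, next_gray, p0, None, winSize=(21, 21), maxLevel=3)
        # Green dot at each feature's original location, red line to where
        # it moves in the next frame.
        vis = cv2.cvtColor(prev_gray, cv2.COLOR_GRAY2BGR)
        for p, q, ok in zip(p0.reshape(-1, 2), p1.reshape(-1, 2),
                            status.ravel()):
            if not ok:
                continue                              # feature was lost
            x0, y0 = int(p[0]), int(p[1])
            x1, y1 = int(q[0]), int(q[1])
            cv2.line(vis, (x0, y0), (x1, y1), (0, 0, 255), 1)  # red line
            cv2.circle(vis, (x0, y0), 2, (0, 255, 0), -1)      # green dot
        return vis

Frames returned by a function like this can then be written to a cv2.VideoWriter to assemble the required video.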

Task 2: Feature Matching (25 points)

  • Use a template matching method (SSD or NCC) to obtain the motion field of ~500 features (corners); a single-feature sketch follows this list.

  • Choose your own template size and search window size.

  • Obtain three motion field sequences that are calculated between Frames n and n+1 (n = 1, 2, … 16), Frames n and n+2 (skip one frame for n = 1, 2, … 15), and Frames n and n+3 (skip two frames for n = 1, 2, … 14).

  • Show each motion vector by drawing a green dot at the original location of each feature and a red line to where it moves in the next frame (5-point deduction if motion vectors are not shown properly).

  • Include your observation and what you learn from these three different scenarios in your PDF file.

  • Compare your results from Optical Flow (Task 1) and Feature Matching (Task 2) and include your discussion in your PDF file.

  • Combine these three motion field sequences into one video and submit the video.

  • Submit your code.
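
A minimal sketch of matching a single feature with OpenCV's matchTemplate(), assuming NCC scoring via cv2.TM_CCORR_NORMED (for SSD, use cv2.TM_SQDIFF and take the minimum location instead). The template and search-window sizes are placeholder choices, and match_feature() is a hypothetical helper name.

    import cv2

    def match_feature(prev_gray, next_gray, x, y, tsize=11, search=25):
        # Match the (tsize x tsize) patch around (x, y) within a
        # (search x search) window of the next frame.
        r, s = tsize // 2, search // 2
        h, w = prev_gray.shape
        if not (s <= x < w - s and s <= y < h - s):
            return None                               # window leaves the image
        template = prev_gray[y - r:y + r + 1, x - r:x + r + 1]
        window = next_gray[y - s:y + s + 1, x - s:x + s + 1]
        # NCC scores; for SSD use cv2.TM_SQDIFF and take the minimum below.
        scores = cv2.matchTemplate(window, template, cv2.TM_CCORR_NORMED)
        _, _, _, best = cv2.minMaxLoc(scores)         # top-left of best match
        # Convert the match location back to full-image coordinates.
        return (x - s + best[0] + r, y - s + best[1] + r)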

Task 3: Multi-Frame Feature Tracking (50 points)

Finding corresponding feature points is an important task for 3D vision applications. For a calibrated stereo vision system, image pairs can be rectified so that corresponding feature pairs lie along horizontal lines. Even without rectification, the epipolar geometry (constraint) can be used to help find corresponding feature pairs. For structure from motion (SFM) applications, the camera motion (R and T) between any two frames is unknown, so corresponding feature points must be tracked across multiple frames. The camera motion and the 3D object structure can then be computed from these matching feature points.
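
For reference, the epipolar constraint can be stated concisely: corresponding points x and x' (in homogeneous pixel coordinates) in two views satisfy

    x'^T F x = 0

where F is the fundamental matrix. With the (single) camera's intrinsic matrix K known, the essential matrix follows as E = K^T F K.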

  • Use only 6 frames (e.g., Frames 10 to 15) of the four image sequences for this assignment.

  • Write a program to

o Detect ~300 good features from the first frame (Frame 10) using goodFeaturesToTrack() (obtain subpixel accuracy if desired).

o Modify the feature tracking function you developed for Task 2 (Feature Matching) to detect and track the matching point features across the six frames.

o Use the matching feature points between two consecutive frames and the findFundamentalMat() function (with the CV_FM_RANSAC flag) to detect outliers. findFundamentalMat() returns a status vector that marks which feature points are bad (incorrect motion vectors). A sketch of this outlier-rejection step follows the Task 3 list.

o Remove these outliers from the list of the matching feature points.

o Use the remaining feature points (outliers removed) found in the current frame to find matching features in the next frame.

o Some matching feature points in the previous frames will disappear or become outliers in the subsequent frames.

o Save the matching feature points (with outliers removed) from all six frames in a file for the next few questions. Only feature points that are matched (or found) in all six frames should be saved.

  • Submit the first frame with the matching feature points circled and a red line showing where each point moves to by the last frame.

  • Submit the last frame with the matching feature points circled.

  • Include one set of the above two images per image sequence in your PDF file.

  • Submit your code.
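
A minimal sketch of the outlier-rejection step, assuming the matched points are passed as Nx2 float32 NumPy arrays and that the modern Python binding's cv2.FM_RANSAC flag corresponds to CV_FM_RANSAC; the RANSAC threshold and confidence values are placeholder choices.

    import cv2

    def reject_outliers(pts_a, pts_b):
        # Fit a fundamental matrix with RANSAC; the returned status vector
        # marks which matches are inliers (1) and which are outliers (0).
        F, status = cv2.findFundamentalMat(pts_a, pts_b, cv2.FM_RANSAC,
                                           3.0, 0.99)
        return status.ravel().astype(bool), F

    def filter_tracks(tracks, inliers):
        # Apply the same inlier mask to every stored frame's point list so
        # that only points surviving every frame pair remain.
        return [pts[inliers] for pts in tracks]

Running reject_outliers() after each consecutive frame pair and filtering every accumulated point list with filter_tracks() leaves exactly the points that are matched in all six frames, which is what the saved file should contain.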
