ECE Structure from Motion Solution


Objectives:

  • Learn to estimate camera motion.

  • Learn to reconstruct object structure.

Instructions:

  • Use OpenCV to complete this exam.

  • Use the same four image sequences, data, and results you got from your previous assignment.

  • Download the camera intrinsic and distortion parameters (SFM Camera Parameters.txt) from BYU Learning Suite.

  • You will lose 20 points if any of the following requirements is not met:

o Include your result, images, and discussion for all three tasks in one PDF file.

o Submit your PDF file and source code file(s) in one zip file without any folder or directory structure.

o Use your first name and last name (e.g., justinsmith.zip) as the file name.

  • Log in to myBYU and submit your work through the BYU Learning Suite online submission.

Task 1: Unknown Intrinsic and Extrinsic Parameters (40 points)

Assuming the four image sequences were taken with an unknown camera (intrinsic parameters are not available) and camera motion is unknown (extrinsic parameters are not available), the fundamental matrix F can still be estimated using the 8-point algorithm. Using the estimated F, images can be rectified and disparities can be calculated.

  • Use the matching feature points between the first frame and the last frame obtained from your previous assignment to estimate the fundamental matrix F.

  • Use the stereoRectifyUncalibrated() function to compute the rectification homography matrices (H1 and H2) for the first and last frames. H1 and H2 can then be used to rectify the images. See Slide #25 of Lecture – Calibration & Rectification for details.

  • Remember to convert them to R1 and R2 (R1 = M1^-1 H1 M1 and R2 = M2^-1 H2 M2) before rectification.

  • You need to make a guess on the intrinsic and distortion parameters (Do NOT use the camera parameter file downloaded online).

  • The image center (320, 240) is a reasonable estimate for the optical center. In this case, of course M1 = M2.

  • Submit one rectified image pair for each image sequence (four sequences total). Draw a few horizontal lines to show the rectification result and describe what you observe. Include all your answers and images in your PDF file.

  • Submit your code.

Task 2: Known Intrinsic and Unknown Extrinsic Parameters (40 points)

With a calibrated camera (intrinsic parameters are available) but unknown camera motion, the object structure can be reconstructed up to an unknown scale factor.

  • Use the fundamental matrix F from Task 1.

  • Now that we know the intrinsic parameters, the data points should be undistorted first using the undistortPoints() function.

  • Remember NOT to pass R1 (R2) or P1 (P2) to the undistortPoints() function; the output points must then be converted back to the image frame in pixels.

  • Calculate the essential matrix E from F.

  • Since we do not know the camera motion, E should be normalized to a consistent scale so the unknown scale factor does not cause confusion (see lecture slides).

  • Extract R and T between the first view and the last view from E. Remember E = T̂R, where T̂ is a 3 × 3 skew-symmetric matrix formed from T.

  • There will be four sets of R and T solutions. Determine which one is correct.

  • Examine the R and T matrices you selected and explain the estimated camera motion (up to a scale factor).

  • Submit your explanation and the R, T, E, and F matrices of all 4 sequences in your PDF file.

  • Submit your code.

Task 3: Known Intrinsic and Extrinsic Parameters (30 points)

Assume that we were able to use a laser range finder to measure the distance from the camera to the 3D scene. Our measurements show that the distance from the camera (at Frame 10) to the closest point on the cube (see the ParallelCube and TurnedCube image sequences) is 20 inches. Assume all points on the same vertical corner line are at the same distance from the camera (the cube is sitting upright).

  • Adjust the scale factor until the closest points are at roughly the same distance of 20 inches.

  • Use the same scale factor for the ParallelCube and ParallelReal sequences and the TurnedCube and TurnedReal sequences.

  • Explain how you think the camera moves (camera motion with actual measurements) in both cases (parallel and turned).

  • Select four points in the first frame of each sequence, circle them, and calculate their true 3D coordinates.

  • Submit your explanation, image, and data in your PDF file.

  • Submit your code.
