Computer Vision Unit 4 Notes

Self-Calibration in Computational Photography

1. Introduction
In computational photography and computer vision, self-calibration is a technique used to
estimate intrinsic parameters of a camera—such as focal length, skew, principal point, and
distortion—without requiring a physical calibration object like a checkerboard. Instead, it
relies on analyzing multiple uncalibrated images of a static 3D scene, typically during
Structure-from-Motion (SfM) or multi-view reconstruction.

2. What are Camera Intrinsic Parameters?


Before understanding self-calibration, it is important to know what intrinsic parameters are:

• Focal Length (fx, fy): Controls zoom/magnification.
• Principal Point (cx, cy): The optical center of the image.
• Skew Coefficient (s): Accounts for the non-perpendicularity of image axes.
• Radial and Tangential Distortion Coefficients: Compensate for lens distortion.

These parameters are represented in a matrix called the Intrinsic Matrix (K):

K = \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
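As a quick illustration, K can be assembled directly from these parameters. The focal lengths, principal point, and image size below are made-up example values, not calibrated data:

```python
import numpy as np

# A minimal sketch: assembling the intrinsic matrix K from example values.
fx, fy = 800.0, 800.0   # focal lengths in pixels (illustrative)
cx, cy = 320.0, 240.0   # principal point, center of a 640x480 image
s = 0.0                 # skew; approximately zero for most modern cameras

K = np.array([
    [fx, s,  cx],
    [0., fy, cy],
    [0., 0., 1.],
])
print(K)
```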

3. What is Self-Calibration?
Self-calibration is the process of estimating the camera's intrinsic parameters using only image
correspondences across multiple views, without any known geometry or calibration target.

It assumes:

• The scene is static and rigid.
• The camera undergoes sufficiently varied motion.
• Multiple views are available (typically ≥3).

This process extracts camera intrinsics from epipolar geometry, particularly the Fundamental
Matrix (F) or the Essential Matrix (E) derived from matched image features.
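The sketch below illustrates this step with OpenCV: it estimates F from point correspondences and then, assuming intrinsics K are known (here a made-up guess standing in for the self-calibration result), forms the Essential Matrix as E = KᵀFK. The correspondences are synthesized from two hypothetical cameras so the geometry is consistent; in practice they would come from feature matching:

```python
import numpy as np
import cv2

# Assumed intrinsics and a hypothetical relative pose (illustrative only).
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
R = cv2.Rodrigues(np.array([0.0, 0.3, 0.0]))[0]   # small rotation about y
t = np.array([[1.0], [0.0], [0.0]])               # sideways baseline

P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])

# Synthesize 50 matched image points by projecting random 3D points.
X = np.random.randn(3, 50) + np.array([[0.], [0.], [6.]])  # points in front
x1 = P1 @ np.vstack([X, np.ones(50)]); x1 = (x1[:2] / x1[2]).T
x2 = P2 @ np.vstack([X, np.ones(50)]); x2 = (x2[:2] / x2[2]).T

# Estimate F with RANSAC over the matched coordinates.
F, mask = cv2.findFundamentalMat(x1.astype(np.float32),
                                 x2.astype(np.float32), cv2.FM_RANSAC)

# With known (or self-calibrated) intrinsics: E = K^T F K.
E = K.T @ F @ K
```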
4. Mathematical Background
4.1 Projection Model

A 3D point X in homogeneous coordinates projects to a 2D point x using the projection matrix P:

x = P X, \quad P = K [R \mid t]
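A small worked example of this projection model, with an assumed K, an identity rotation, and a translation chosen purely for illustration:

```python
import numpy as np

# Sketch of the pinhole projection x = P X with P = K [R | t].
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
R = np.eye(3)                      # camera aligned with world axes
t = np.array([[0.], [0.], [5.]])   # world origin 5 units in front

P = K @ np.hstack([R, t])          # 3x4 projection matrix

X = np.array([0.1, -0.2, 1.0, 1.0])  # 3D point in homogeneous coordinates
x_h = P @ X                          # homogeneous image point
x = x_h[:2] / x_h[2]                 # divide out the scale -> pixels
print(x)                             # ~ [333.3, 213.3]
```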
6. Real-Time Example: Structure from Motion (SfM)
In SfM, multiple images from different viewpoints are used to reconstruct 3D scenes. Here:

• Feature points (e.g., SIFT, ORB) are matched across images.
• The fundamental matrix F is estimated.
• From F, camera intrinsics are estimated via self-calibration.
• Then 3D points and camera poses are refined via Bundle Adjustment.

This integration enables full 3D reconstruction with only images as input—ideal for robotics, AR/VR,
and drones.
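The following condensed sketch strings these steps together with OpenCV. The image filenames and the intrinsic matrix K are placeholder assumptions; in a true self-calibration pipeline K would be recovered from constraints on F rather than guessed:

```python
import numpy as np
import cv2

# Placeholder input images (assumed to exist on disk).
img1 = cv2.imread("img1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("img2.jpg", cv2.IMREAD_GRAYSCALE)

# 1. Detect and match ORB features across the two views.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# 2. Estimate the fundamental matrix F with RANSAC.
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)

# 3. Stand-in for self-calibration: an assumed K (in practice, recovered
#    from Kruppa-style constraints on F).
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
E = K.T @ F @ K

# 4. Recover relative pose and triangulate the inlier matches.
inl1 = pts1[mask.ravel() == 1]
inl2 = pts2[mask.ravel() == 1]
_, R, t, _ = cv2.recoverPose(E, inl1, inl2, K)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
X_h = cv2.triangulatePoints(P1, P2, inl1.T, inl2.T)
X = (X_h[:3] / X_h[3]).T   # Nx3 point cloud; refine via bundle adjustment
```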

7. Visualization: Workflow Diagram

Uncalibrated Images
        ↓
Feature Matching
        ↓
Fundamental Matrix (F)
        ↓
Kruppa Constraints / Self-calibration
        ↓
Intrinsic Matrix (K)
        ↓
3D Scene Reconstruction + Camera Poses

8. Benefits of Self-Calibration
✅ No special equipment required
✅ Enables 3D reconstructions in uncontrolled environments
✅ Works on existing datasets (e.g., historical photos)

9. Limitations
• Requires sufficient camera motion (degenerate motion leads to failure)
• Noise-sensitive
• Doesn’t estimate lens distortion well
• Numerically unstable if poorly conditioned

10. Applications in Computational Photography

• Panorama stitching with unknown lens parameters
• 3D modeling from consumer photos
• Autonomous drone navigation
• Augmented Reality (AR) without external calibration

11. Summary
Self-calibration is a powerful tool in computational photography that enables intrinsic parameter
estimation directly from images. While not as precise as traditional methods, it is indispensable in
scenarios where no calibration pattern is available.

Projective Reconstruction in Computational Photography

1. Introduction
Projective Reconstruction is the process of recovering 3D structure and camera motion
from multiple 2D images without requiring knowledge of the camera's intrinsic or
extrinsic parameters. It provides a reconstruction that is accurate up to a projective
transformation, meaning the 3D geometry is recovered but not with real-world metric
information (like exact angles or distances).

This is a key step in Structure-from-Motion (SfM) and Multiple View Geometry,
especially when no calibration is available.
2. What Does "Projective" Mean?
In geometry, a projective transformation (homography) preserves straight lines but not
angles or distances. So, in projective reconstruction, we only recover the shape and relative
positions—not the scale or metric properties.
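A small numerical check of this property, using an arbitrary illustrative homography H: collinear points stay collinear after the mapping, but the spacing between them changes:

```python
import numpy as np

# An arbitrary illustrative homography (projective transformation).
H = np.array([[1.0,  0.2,  5.0],
              [0.1,  0.9, -3.0],
              [0.05, 0.1,  1.0]])

# Three evenly spaced collinear points (on the line y = 2x), as columns
# in homogeneous coordinates.
pts = np.array([[0., 0., 1.], [1., 2., 1.], [2., 4., 1.]]).T

mapped = H @ pts
mapped /= mapped[2]               # normalize each column by its scale

# Collinearity: the determinant of the homogeneous point matrix is ~0.
print(np.linalg.det(mapped.T))    # ~0 -> the images are still collinear

# Distances: equal spacing before the mapping, unequal after.
d01 = np.linalg.norm(mapped[:2, 1] - mapped[:2, 0])
d12 = np.linalg.norm(mapped[:2, 2] - mapped[:2, 1])
print(d01, d12)                   # different -> ratios are not preserved
```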
Solve for the 3D points using linear triangulation (the DLT method).
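A minimal sketch of DLT triangulation for two views: each projection contributes two linear constraints on the unknown 3D point, and the least-squares solution of the stacked 4x4 system is the right singular vector with the smallest singular value:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates.
    Returns the homogeneous 3D point, normalized so X[3] == 1.
    """
    # Each view i gives two rows: u_i * P_i[2] - P_i[0], v_i * P_i[2] - P_i[1].
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null-space direction of A
    return X / X[3]
```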

Step 5: Generalize to Multiple Views

Add more cameras and apply Bundle Adjustment to refine all 3D points and camera matrices.
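A trimmed sketch of what such a refinement can look like, using scipy.optimize.least_squares on the reprojection residuals. For brevity it optimizes only camera translations and 3D points, holding rotations and intrinsics fixed; real bundle adjustment refines those too, usually with sparse solvers:

```python
import numpy as np
from scipy.optimize import least_squares

def reproject(K, R, t, X):
    """Project a 3D point into pixels with fixed K and R."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

def residuals(params, K, R_list, obs, n_cams, n_pts):
    """Stacked reprojection errors over all observations.

    obs is a list of (camera index, point index, observed (u, v)) tuples.
    """
    t_all = params[:n_cams * 3].reshape(n_cams, 3)
    X_all = params[n_cams * 3:].reshape(n_pts, 3)
    res = []
    for cam_i, pt_i, uv in obs:
        res.extend(reproject(K, R_list[cam_i], t_all[cam_i], X_all[pt_i]) - uv)
    return np.asarray(res)

# params0 stacks the initial camera translations and 3D points; K, R_list,
# and obs would all come from the earlier reconstruction stages.
# result = least_squares(residuals, params0,
#                        args=(K, R_list, obs, n_cams, n_pts))
```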
6. Diagram: Projective Reconstruction Overview

Image 1          Image 2
   ↓                ↓
+---------------------------------+
|  Feature Matching and F Matrix  |
+---------------------------------+
                ↓
    Estimate P1 and P2 from F
                ↓
     Triangulate 3D Points
                ↓
  Projective 3D Point Cloud (X)

7. Applications
• 3D Modeling from uncalibrated photo collections (e.g., tourist photos).
• Archaeology and cultural heritage digitization.
• Augmented Reality where metric accuracy is not critical.
• Movie production (3D scenes from set photos).
• Robotics / Drones for basic scene understanding.

8. Limitations
• Cannot recover real-world scale or metric shape.
• Sensitive to feature correspondence errors.
• Requires at least 2 images (3+ for robustness).
• Needs good camera motion (not planar or degenerate).
9. Comparison with Other Reconstructions
Type of Reconstruction   What it Recovers                          Requirement
Projective               Shape up to projective transformation     Uncalibrated images
Affine                   Parallelism and ratios preserved          Affine calibration or assumption
Metric / Euclidean       Real-world distances and angles           Full calibration (intrinsics + extrinsics)

10. Summary
Projective reconstruction is a foundational technique in computational photography that enables 3D
scene recovery from images without camera calibration. While not metrically accurate, it allows for
visually consistent modeling and forms the basis for more refined reconstructions.
