Unit 03 To 15th Questions

The document discusses various concepts related to image thresholding and segmentation. It provides definitions and explanations of different thresholding techniques like Otsu's thresholding, bimodal histogram analysis, variable thresholding, edge-based thresholding, and fuzzy thresholding. It also discusses morphological operations, connected component analysis, and their applications in image segmentation.


1. Which color space is commonly used for color-based image processing tasks?

a) RGB

b) HSV

c) Grayscale

d) CMYK

2. What advantage does HSV have over RGB for color-based image analysis?

a) Separation of color information from brightness

b) More compact representation of colors

c) Better handling of noise in images

d) Higher dynamic range

3. Grayscale images are primarily used for:

a) Color-based segmentation

b) Texture analysis

c) Illumination invariance d) Edge detection

4. The Bhattacharyya distance measures the similarity between:

a) RGB values of pixels b) Hue values of pixels

c) Probability distributions

d) Image gradients

5. Bhattacharyya distance is commonly used for comparing:

a) Histograms

b) Grayscale images c) Binary images d) Color maps

6. What does the Bhattacharyya coefficient measure?

a) Overlap between two probability distributions

b) Color saturation in an image

c) Entropy of an image d) Shape similarity between objects

7. Which distance metric is used to compare histograms based on the Bhattacharyya coefficient?

a) Euclidean distance b) Cosine similarity

c) Bhattacharyya distance
d) Mahalanobis distance

8. The Bhattacharyya distance ranges between:


a) 0 and 1
b) 0 and infinity
c) -1 and 1 d) -infinity and infinity
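
As a worked illustration of questions 4-8, here is a minimal NumPy sketch (not part of the original question bank) of the Bhattacharyya coefficient and the derived distance in [0, 1]; for normalized histograms this matches the Hellinger-style measure OpenCV exposes as HISTCMP_BHATTACHARYYA. The toy histograms are made-up values.

```python
import numpy as np

def bhattacharyya(h1, h2):
    """Bhattacharyya coefficient (overlap) and distance for two histograms."""
    p = h1 / h1.sum()                    # normalize to probability distributions
    q = h2 / h2.sum()
    bc = np.sum(np.sqrt(p * q))          # coefficient: 1 = identical, 0 = no overlap
    dist = np.sqrt(max(0.0, 1.0 - bc))   # distance in [0, 1]
    return bc, dist

bc, dist = bhattacharyya(np.array([10., 20., 30., 40.]),
                         np.array([12., 18., 33., 37.]))
print(bc, dist)   # coefficient near 1 and distance near 0 for similar histograms
```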

9. Bhattacharyya distance is commonly used in which field?


a) Natural language processing
b) Computer vision
c) Signal processing d) Robotics

10. Bhattacharyya distance is used in image retrieval systems to:


a) Compute image gradients b) Measure image contrast
c) Rank and retrieve visually similar images
d) Segment foreground and background regions

11. Which measure is commonly used for comparing the similarity between histograms or
distributions?
a) Jaccard index
b) Bhattacharyya distance
c) Pearson correlation coefficient d) Kullback-Leibler divergence

12. Which color space is commonly used for texture analysis in grayscale images?
a) RGB b) HSV c) YUV

d) Grayscale

13. The Bhattacharyya distance can be used as a similarity measure in:


a) Object recognition
b) Image compression c) Motion tracking d) Data clustering

14. What does the Bhattacharyya distance value of 0 indicate?


a) Perfect match between histograms
b) Maximum dissimilarity between histograms
c) Random distribution of pixel intensities
d) No information about the histogram similarity

15. When comparing histograms, which metric is more robust to changes in lighting conditions?
a) Euclidean distance b) Cosine similarity
c) Bhattacharyya distance
d) Manhattan distance

16. Histogram comparison is commonly used in which of the following applications?


a) Image denoising b) Image enhancement
c) Image retrieval
d) Image segmentation
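
A minimal sketch of histogram-based retrieval with OpenCV (questions 15-16), comparing hue histograms with cv2.compareHist; the file names and bin count are placeholder assumptions, not taken from the original document.

```python
import cv2

def hue_hist(path):
    """50-bin hue histogram of an image, normalized for comparison."""
    hsv = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [50], [0, 180])
    return cv2.normalize(hist, hist).flatten()

# Lower scores mean more similar images under the Bhattacharyya method
score = cv2.compareHist(hue_hist("query.jpg"),      # hypothetical query image
                        hue_hist("candidate.jpg"),  # hypothetical database image
                        cv2.HISTCMP_BHATTACHARYYA)
```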

17. What is the range of pixel values in an 8-bit grayscale image?


a) 0 to 255
b) 0 to 1 c) -1 to 1
d) -infinity to infinity
18. Which color space is most suitable for color segmentation tasks?
a) RGB
b) HSV
c) YUV d) CMYK

19. Histogram equalization is used to:


a) Enhance image contrast
b) Convert color images to grayscale c) Smooth image edges d) Reduce image noise

20. Which distance metric is used for comparing histograms based on the Earth Mover's Distance
(EMD)?
a) Manhattan distance
b) Euclidean distance c) Bhattacharyya distance d) Kullback-Leibler divergence

21. Histogram comparison techniques are commonly applied to:


a) Binary images
b) Color images
c) Grayscale images d) Vector graphics

22. Which technique is best suited for comparing the similarity between distributions in image
recognition?
a) Histogram matching
b) Bhattacharyya distance
c) Correlation coefficient d) Template matching

23. Histogram comparison is a technique used for:


a) Measuring image resolution b) Calculating image entropy c) Evaluating image quality
d) Assessing image similarity

24. The Bhattacharyya distance is sensitive to changes in:


a) Image size b) Image rotation c) Histogram bin size
d) Image brightness

25. Histogram comparison can be used to assess the similarity between images in which application?
a) Object tracking
b) Image inpainting c) Image compression d) Image stitching

26. Which color space is used to calculate the color histograms in the Bhattacharyya distance
calculation?
a) RGB
b) HSV
c) YUV d) Grayscale

27. Which histogram comparison technique is more robust to changes in image intensity levels?
a) Bhattacharyya distance b) Euclidean distance
c) Correlation coefficient
d) Earth Mover's Distance (EMD)

28. Histogram comparison can be used as a feature for:


a) Image classification
b) Image resizing c) Image cropping d) Image rotation

29. Which distance metric is used to compare histograms based on the intersection of two histograms?
a) Cosine similarity
b) Earth Mover's Distance (EMD)
c) Bhattacharyya distance d) Euclidean distance
30. Histogram comparison techniques are used to analyze the similarity between images based on
their:
a) Spatial resolution
b) Pixel intensity values
c) Geometric transformations d) File sizes

=================================================================================

1) Threshold detection is a technique used in computer vision for:

a) Image enhancement b) Image segmentation c) Object recognition d) Image filtering

.Answer: b) Image segmentation

2) Which thresholding method assumes that the image histogram has two distinct peaks?

a) Otsu's thresholding b) Bimodal histogram analysis c) Variable thresholding d) Edge-based thresholding

Answer: b) Bimodal histogram analysis

3) Otsu's thresholding is based on: a) Entropy of the image b) Gradient magnitude

c) Bimodal histogram analysis d) Edge detection

Answer: c) Bimodal histogram analysis

4) Which thresholding method dynamically adjusts the threshold based on local image statistics?

a) Otsu's thresholding b) Variable thresholding c) Edge-based thresholding d) Fuzzy thresholding

Answer: b) Variable thresholding
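
For questions 3 and 4, a short OpenCV sketch contrasting Otsu's global threshold with variable (adaptive) thresholding; the input file name, block size, and constant C are illustrative assumptions.

```python
import cv2

img = cv2.imread("document.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input

# Otsu: one global threshold chosen automatically from the (ideally bimodal) histogram
_, global_bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Variable (adaptive) thresholding: a per-pixel threshold from local neighbourhood statistics
local_bw = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, 31, 5)
```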

5) Mathematical morphology is used for: a) Image compression b) Image restoration

c) Image enhancement d) Image analysis

Answer: d) Image analysis

6) The basic operations in mathematical morphology include:

a) Erosion and dilation b) Blurring and sharpening c) Edge detection and gradient calculation d) Thresholding and binarization
Answer: a) Erosion and dilation

7) The opening operation in mathematical morphology is defined as:

a) Erosion followed by dilation b) Dilation followed by erosion c) Difference between erosion and
dilation d) Union of erosion and dilation

Answer: a) Erosion followed by dilation

8) Which of the following is an advanced morphological function in OpenCV?

a) Opening b) Closing c) Hit-or-miss transform d) Thresholding

Answer: c) Hit-or-miss transform
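
The basic and advanced morphological operations from questions 6-8 map directly onto OpenCV calls; a brief sketch, assuming an 8-bit binary input image with a hypothetical file name. In the hit-or-miss kernel, 1 marks required foreground, -1 required background, and 0 is don't-care.

```python
import cv2
import numpy as np

binary = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)   # hypothetical binary image
kernel = np.ones((3, 3), np.uint8)

eroded  = cv2.erode(binary, kernel)
dilated = cv2.dilate(binary, kernel)
opened  = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)    # erosion followed by dilation
closed  = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)   # dilation followed by erosion

# Hit-or-miss: keeps pixels whose neighbourhood matches this exact foreground/background pattern
hm_kernel = np.array([[0,  1, 0],
                      [1, -1, 1],
                      [0,  1, 0]], dtype="int")
hitmiss = cv2.morphologyEx(binary, cv2.MORPH_HITMISS, hm_kernel)
```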

9) Connectedness paradox refers to the situation where:

a) Two connected objects are incorrectly identified as separate b) Two separate objects are
incorrectly identified as connected c) The background is misclassified as foreground d) None of the
above

Answer: b) Two separate objects are incorrectly identified as connected

10) Which of the following techniques can be used to solve connectedness paradoxes?

a) Morphological reconstruction b) Watershed segmentation c) Region growing d) All of the above

Answer: d) All of the above

===========================================================

9) Variable thresholding is a technique that:

a) Adapts the threshold based on local image characteristics b) Uses multiple thresholds for different
regions of an image

c) Adjusts the threshold based on user input d) None of the above

Answer: a) Adapts the threshold based on local image characteristics

=======================================================

10) Edge-based thresholding is based on:

a) Gradient magnitude of the image b) Local image statistics c) Fuzzy logic principles d) Histogram
analysis

Answer: a) Gradient magnitude of the image


11) Fuzzy thresholding differs from traditional thresholding methods in that it:

a) Allows for gradual separation between foreground and background b) Relies on image gradients
for thresholding

c) Applies a fixed threshold to the image d) None of the above

Answer: a) Allows for gradual separation between foreground and background

12) Which library can be used for fuzzy logic operations in Python?

a) NumPy b) OpenCV c) scikit-learn d) scikit-fuzzy

Answer: d) scikit-fuzzy

13) Which function is used to define the fuzzy membership in scikit-fuzzy?

a) fuzz.membership.gaussmf() b) cv2.threshold() c) fuzz.interp_membership() d) cv2.Sobel()

Answer: a) fuzz.membership.gaussmf()

14) Which function is used to calculate the fuzzy membership values for each pixel in scikit-fuzzy?

a) fuzz.membership.gaussmf() b) cv2.threshold() c) fuzz.interp_membership() d) cv2.Sobel()

Answer: c) fuzz.interp_membership()
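
A minimal scikit-fuzzy sketch tying questions 12-14 together: a Gaussian membership function is defined over the intensity range and evaluated for pixel values. The mean/sigma and the 0.5 cut are illustrative assumptions, and a random array stands in for a real grayscale image.

```python
import numpy as np
import skfuzzy as fuzz

gray = np.random.randint(0, 256, (64, 64))        # stand-in for a grayscale image
levels = np.arange(256)

# Gaussian membership function for a "bright / foreground" fuzzy set (mean, sigma assumed)
fg_mf = fuzz.membership.gaussmf(levels, 170, 30)

# Membership degree of a single pixel's intensity, interpolated on the defined function
mu = fuzz.interp_membership(levels, fg_mf, float(gray[0, 0]))

# For a whole image the membership function can be used as a lookup table,
# and a crisp mask recovered by cutting the fuzzy membership at 0.5
membership_map = fg_mf[gray]
crisp_mask = membership_map > 0.5
```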

15) Which of the following is true about fuzzy thresholding?

a) It always produces binary images b) It requires manual adjustment of thresholds

c) It uses fuzzy logic principles for thresholding d) It is not suitable for image segmentation

Answer: c) It uses fuzzy logic principles for thresholding

16) Which of the following is not an advanced morphological function in OpenCV?

a) Opening b) Closing c) Hit-or-miss transform d) Binarization

Answer: d) Binarization

17) Gradient-based thresholding is based on:


a) Image histograms b) Local image statistics c) Gradient magnitude of the image d) Fuzzy logic
principles

Answer: c) Gradient magnitude of the image

18) What is the main advantage of variable thresholding over traditional thresholding methods?

a) More precise separation between foreground and background b) Faster computation time

c) Simplicity of implementation d) Ability to handle only binary images

Answer: a) More precise separation between foreground and background

19) Which method is used to address the connectedness paradox in image segmentation?

a) Morphological reconstruction b) Fuzzy thresholding c) Edge detection d) Watershed segmentation

Answer: a) Morphological reconstruction

20) Which thresholding method adapts the threshold based on the local image characteristics?

a) Variable thresholding b) Otsu's thresholding c) Edge-based thresholding d) Fuzzy thresholding

Answer: a) Variable thresholding

21) Which morphological operation is used to remove small noise regions in an image?

a) Opening b) Closing c) Erosion d) Dilation

Answer: a) Opening

22) Which morphological operation is used to fill small holes in an image?

a) Opening b) Closing c) Erosion d) Dilation

Answer: b) Closing

23) Which morphological operation can be used to extract only the boundary of objects in an image?
a) Opening b) Closing c) Erosion d) Dilation

Answer: c) Erosion

24) Which morphological operation can be used to expand the boundaries of objects in an image?

a) Opening b) Closing c) Erosion d) Dilation

Answer: d) Dilation

25) Which method is used to solve connectedness paradoxes by segmenting objects based on
intensity and spatial information?

a) Watershed segmentation b) Region growing c) Mathematical morphology d) Hit-or-miss transform

Answer: a) Watershed segmentation

26) Which thresholding method utilizes edge information to determine the threshold?

a) Otsu's thresholding b) Edge-based thresholding c) Variable thresholding d) Fuzzy thresholding

Answer: b) Edge-based thresholding

27) Fuzzy thresholding allows for: a) Only binary segmentation results b) Gradual separation
between foreground and background

c) Manual adjustment of thresholds d) Noisy segmentation outputs

Answer: b) Gradual separation between foreground and background

==============================================================================

1) Which library is commonly used for computer vision tasks in Python?

a. TensorFlow b. OpenCV c. Scikit-learn d. Keras

Answer: b) OpenCV
2) What does OpenCV stand for?

a. Open Computer Vision b. Open Source Computer Vision c. Optical Character Verification d.
Operational Computer Visualizer

Answer: b) Open Source Computer Vision

3) Which of the following is not a common computer vision task?

a. Object detection b. Image classification c. Sentiment analysis d. Image segmentation

Answer: c) Sentiment analysis

4) What is the purpose of camera calibration in computer vision?

a. To adjust camera exposure settings b. To estimate the focal length of the camera

c. To remove lens distortion effects d. To improve image resolution

Answer: c) To remove lens distortion effects

5) Which interpolation method is commonly used in image resizing?

a. Nearest Neighbor b. Bilinear c. Bi-cubic d. All of the above

Answer: d) All of the above

6) Which technique is used to detect and extract edges in an image?

a. Gaussian blur b. Thresholding c. Canny edge detection d. Hough transform

Answer: c) Canny edge detection

7) Which of the following is an image segmentation algorithm?

a. K-Means b. Support Vector Machines (SVM) c. Random Forest d. Naive Bayes

Answer: a) K-Means

8) Which deep learning architecture won the ImageNet Large-Scale Visual Recognition Challenge
(ILSVRC) in 2012?
a. VGGNet b. ResNet c. AlexNet d. InceptionNet

Answer: c) AlexNet

9) Which technique can be used to extract text from images?

a. Optical Character Recognition (OCR) b. Object detection c. Image classification d. Image super-
resolution

Answer: a) Optical Character Recognition (OCR)

10) Which method is used to estimate the camera's pose (position and orientation) in 3D space?

a. Perspective transformation b. Homography estimation c. Epipolar geometry d. PnP (Perspective-n-Point) algorithm

Answer: d) PnP (Perspective-n-Point) algorithm
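
A brief sketch of pose estimation with cv2.solvePnP, as referenced in question 10; the 3D model points, 2D projections, and intrinsic matrix are made-up values for illustration only.

```python
import cv2
import numpy as np

# 3D points of a known square object and their detected 2D projections (made-up values)
object_pts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=np.float64)
image_pts  = np.array([[320, 240], [420, 245], [415, 340], [318, 335]], dtype=np.float64)

K = np.array([[800.0, 0.0, 320.0],      # assumed intrinsic camera matrix
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                      # assume no lens distortion

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
# rvec (Rodrigues rotation) and tvec describe the object pose in the camera frame
```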

11) Which technique is used to match and track objects across multiple frames in a video?

a. Template matching b. Optical flow c. SIFT (Scale-Invariant Feature Transform) d. Harris corner
detection

Answer: b) Optical flow

12) Which algorithm is commonly used for face detection in images and videos?

a. Haar cascades b. SURF (Speeded-Up Robust Features) c. LBP (Local Binary Patterns) d. Histogram
of Oriented Gradients (HOG)

Answer: a) Haar cascades
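
A minimal Haar-cascade face-detection sketch for question 12, assuming the opencv-python package (which ships the cascade XML files under cv2.data.haarcascades) and a hypothetical input image.

```python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("group_photo.jpg")                 # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Each detection is an (x, y, w, h) rectangle around a face
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
```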

13) Which technique is used to generate panoramic images by stitching multiple images together?

a. Perspective transformation b. Homography estimation c. Image warping d. Image blending

Answer: b) Homography estimation

14) Which method can be used to estimate the depth or 3D structure of a scene from stereo images?

a. Structure from Motion (SfM) b. Bundle Adjustment c. Homography estimation d. Epipolar geometry

Answer: d) Epipolar geometry

15) Which technique can be used to remove noise from an image while preserving the important
details?

a. Gaussian blur b. Median filtering c. Bilateral filtering d. Sobel edge detection

Answer: c) Bilateral filtering

16) Which technique is used to extract dominant colors from an image?

a. K-Means clustering b. Mean-Shift clustering c. Gaussian Mixture Models (GMM) d. Agglomerative clustering

Answer: a) K-Means clustering

17) Which technique is used to detect and recognize human faces in real-time?

a. Viola-Jones algorithm b. R-CNN (Region-based Convolutional Neural Network) c. YOLO (You Only
Look Once) d. Faster R-CNN

Answer: c) YOLO (You Only Look Once)

18) Which technique can be used to generate a depth map from a single RGB image?

a. Structure from Motion (SfM) b. Kinect sensor c. Monocular depth estimation d. Lidar scanning

Answer: c) Monocular depth estimation

19) Which method can be used to detect and track moving objects in a video?

a. Background subtraction b. Optical flow c. Mean-Shift algorithm d. Kalman filter


Answer: a) Background subtraction

20) Which technique is used to detect and recognize handwritten digits in an image?

a. Convolutional Neural Networks (CNN) b. Random Forest c. Principal Component Analysis (PCA) d.
Support Vector Machines (SVM)

Answer: a) Convolutional Neural Networks (CNN)

21) Which technique can be used to detect and recognize specific objects or regions in an image?

a. Image classification b. Semantic segmentation c. Object detection d. Instance segmentation

Answer: c) Object detection

22) Which technique is used to estimate the motion of objects between consecutive frames in a
video?

a. Optical flow b. Lucas-Kanade method c. Dense optical flow d. Block matching

Answer: a) Optical flow

23) Which technique can be used to generate synthetic images by learning the underlying data
distribution?

a. Variational Autoencoders (VAE) b. Generative Adversarial Networks (GAN)

c. Restricted Boltzmann Machines (RBM) d. Autoencoders

Answer: b) Generative Adversarial Networks (GAN)

24) Which technique is used to enhance the visibility of faint details in an image?

a. Histogram equalization b. Contrast stretching c. Adaptive histogram equalization d. Gamma correction

Answer: c) Adaptive histogram equalization

25) Which method is used to detect and track the movement of the human eye in a video?
a. Pupil detection b. Eye corner detection c. Optical flow d. Template matching

Answer: a) Pupil detection

26) Which technique is used to remove the background from an image and extract the foreground
object?

a. GrabCut algorithm b. Watershed segmentation c. Mean-Shift algorithm d. Region-growing segmentation

Answer: a) GrabCut algorithm

27) Which technique can be used to detect and track keypoints or interest points in an image?

a. SIFT (Scale-Invariant Feature Transform) b. SURF (Speeded-Up Robust Features)

c. ORB (Oriented FAST and Rotated BRIEF) d. All of the above

Answer: d) All of the above
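
A short keypoint detection and matching sketch for question 27, using ORB (one of the listed detectors, and freely available in standard OpenCV builds); the image paths and feature count are assumptions.

```python
import cv2

img1 = cv2.imread("scene_a.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical image pair
img2 = cv2.imread("scene_b.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance suits ORB's binary descriptors; cross-checking removes weak matches
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
```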

28) Which technique can be used to detect and recognize different facial expressions in an image?

a. Haar cascades b. Active Appearance Models (AAM) c. Facial Action Coding System (FACS) d. Local
Binary Patterns (LBP)

Answer: c) Facial Action Coding System (FACS)

29) Which technique can be used to estimate the pose (position and orientation) of a 3D object in an
image?

a. Perspective-n-Point (PnP) algorithm b. Random sample consensus (RANSAC)

c. Iterative Closest Point (ICP) algorithm d. Epipolar geometry

Answer: a) Perspective-n-Point (PnP) algorithm

30) Which technique can be used to generate a depth map from a pair of stereo images?

a. Stereo matching b. Triangulation c. Disparity mapping d. All of the above


Answer: d) All of the above

1) Which operator is commonly used for edge detection and emphasizes both vertical and horizontal
edges?

a) Sobel operator b) Canny edge detector c) Laplacian of Gaussian (LoG) d) Roberts operator

Answer: a) Sobel operator

2) The Canny edge detection algorithm performs which of the following operations?

a) Noise reduction, gradient computation, non-maximum suppression, hysteresis thresholding

b) Noise reduction, thresholding, edge linking, Hough transform

c) Gradient computation, non-maximum suppression, thresholding, edge linking

d) Gradient computation, thresholding, edge linking, Hough transform
Answer: a) Noise reduction, gradient computation, non-maximum suppression, hysteresis
thresholding
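
The pipeline listed in the correct answer can be reproduced in a few lines of OpenCV; cv2.Canny performs gradient computation, non-maximum suppression, and hysteresis thresholding internally, so only the initial noise reduction is explicit here. The file name and threshold values are illustrative assumptions.

```python
import cv2

gray = cv2.imread("building.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical input

blurred = cv2.GaussianBlur(gray, (5, 5), 1.4)   # step 1: noise reduction
edges = cv2.Canny(blurred, 50, 150)             # steps 2-4 happen inside cv2.Canny;
                                                # 50/150 are the hysteresis thresholds
```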

2) Which performance metric is used to evaluate the quality of edge detection?

a) Precision b) Recall c) F1 score d) All of the above

Answer: d) All of the above

3) The Haar classifier is commonly used for which computer vision task?

a) Object detection b) Image segmentation c) Image classification d) Image denoising

Answer: a) Object detection

4) The training process of a Haar classifier involves:

a) Collecting positive and negative samples, training weak classifiers, and combining them using
AdaBoost b) Calculating the integral image and applying gradient-based optimization

c) Selecting the most discriminative features and adjusting weights d) Training a neural network
using backpropagation

Answer: a) Collecting positive and negative samples, training weak classifiers, and combining them using AdaBoost

5) Which technique is used to separate an object from its background in contour segmentation?

a) Edge detection b) Noise reduction c) Thresholding d) Region growing

Answer: d) Region growing

6) Which corner detection algorithm uses the second-moment matrix to compute corner responses?

a) Harris corner detection b) FAST corner detection c) Moravec corner detection d) Shi-Tomasi
corner detection

Answer: a) Harris corner detection


7) The FAST corner detector is known for its:

a) Speed b) Accuracy c) Robustness to noise d) Ability to detect corners of any shape

Answer: a) Speed

8) Which dataset is commonly used for object detection evaluation?

a) ImageNet b) COCO c) CIFAR-10 d) MNIST

Answer: b) COCO

9) The YOLO algorithm is widely used for: a) Face recognition b) Semantic segmentation c) Object
detection d) Image denoising

Answer: c) Object detection

10) Which performance metric measures the overlap between the detected and ground truth
regions? a) Precision b) Recall c) Intersection over Union (IoU) d) F1 score

Answer: c) Intersection over Union (IoU)
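
A small helper showing how Intersection over Union (question 10) is typically computed for axis-aligned boxes in (x1, y1, x2, y2) form; this is a generic sketch, not tied to any particular detection library.

```python
def iou(box_a, box_b):
    """Intersection over Union for axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))   # 25 / 175 ≈ 0.14
```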

11) Which video dataset contains human action labels for a wide range of activities? a) UCF101 b)
YouTube-8M c) HMDB51 d) Kinetics

Answer: a) UCF101

12) Which method is commonly used to evaluate the performance of object detection algorithms? a)
Precision-Recall curve b) ROC curve c) Mean Average Precision (mAP) d) F1 score

Answer: c) Mean Average Precision (mAP)

13) Which technique is used to evaluate the quality of contour segmentation?

a) Jaccard Index b) Euclidean Distance c) Fowlkes-Mallows Index d) Intersection over Union (IoU)
Answer: d) Intersection over Union (IoU)

14) Which dataset is commonly used for face recognition research? a) LFW (Labeled Faces in the
Wild) b) ImageNet c) COCO d) PASCAL VOC

Answer: a) LFW (Labeled Faces in the Wild)

15) Which technique is used to measure the similarity between two contours? a) Hausdorff distance
b) Mahalanobis distance c) Cosine similarity d) Euclidean distance

Answer: a) Hausdorff distance

16) Which algorithm is commonly used for image segmentation? a) K-means clustering b) Haar
classifier c) Hough transform d) Random Forests

Answer: a) K-means clustering

16) The term "False Positive" in object detection refers to: a) A true object not being detected b) An
object being incorrectly classified as the wrong class

c) An object being detected when it is not present d) An object being segmented incorrectly

Answer: c) An object being detected when it is not present

17) Which algorithm is commonly used for text detection in images?

a) Canny edge detector b) Harris corner detector c) EAST (Efficient and Accurate Scene Text)
algorithm d) Sobel operator

Answer: c) EAST (Efficient and Accurate Scene Text) algorithm

18) Which dataset contains images with pixel-level semantic segmentation annotations? a)
ImageNet b) COCO c) Cityscapes d) SUN Dataset

Answer: c) Cityscapes
19) Which technique is used for image denoising? a) Median filtering b) Laplacian of Gaussian (LoG)
c) Sobel operator d) Canny edge detection

Answer: a) Median filtering

==================================================================================

20) Which metric is used to measure the performance of image classification algorithms?

a) Mean Average Precision (mAP) b) Precision c) Accuracy d) F1 score

Answer: c) Accuracy

21) The "Knowledge Cutoff" refers to: a) The date until which the AI model has been trained b) The
number of training iterations performed during model training

c) The cutoff value used for thresholding in edge detection d) The amount of memory allocated for
the AI model

Answer: a) The date until which the AI model has been trained

22) Which algorithm is commonly used for line detection in images? a) Canny edge detector b)
Hough transform c) Harris corner detector d) Sobel operator

Answer: b) Hough transform

23) Which technique is used for image resizing while maintaining the aspect ratio? a) Nearest
Neighbor interpolation b) Bilinear interpolation

c) Lanczos interpolation d) Aspect ratio cannot be maintained during image resizing

Answer: c) Lanczos interpolation

24) Which technique is used to handle occlusion in object detection? a) Non-Maximum Suppression
(NMS) b) Region of Interest (ROI) pooling

c) Feature Pyramid Network (FPN) d) Sliding window approach

Answer: a) Non-Maximum Suppression (NMS)


==================================================================================

25) The ImageNet dataset consists of images labeled into how many different categories? a) 1,000 b)
10,000 c) 100,000 d) 1,000,000

Answer: a) 1,000

26) Which technique is used to evaluate the performance of semantic segmentation algorithms? a)
Intersection over Union (IoU) b) Mean Squared Error (MSE) c) Precision d) F1 score

Answer: a) Intersection over Union (IoU)

27) Which technique is commonly used for image registration in computer vision? a) Harris corner
detection b) Optical Flow c) SIFT (Scale-Invariant Feature Transform) d) Harris-Laplace detector

Answer: c) SIFT (Scale-Invariant Feature Transform)

28) Which technique is used to generate image descriptors for feature matching? a) Histogram of
Oriented Gradients (HOG) b) Local Binary Patterns (LBP)

c) Scale-Invariant Feature Transform (SIFT) d) Speeded-Up Robust Features (SURF)

Answer: c) Scale-Invariant Feature Transform (SIFT)

Unit === 08

1. What is the purpose of camera calibration in computer vision?

a) Estimating depth information

b) Identifying object shapes

c) Estimating camera parameters

d) Tracking object motion

Answer: c) Estimating camera parameters

2. Which camera model assumes that light rays pass through a small aperture (pinhole)?

a) Perspective camera model


b) Orthographic camera model

c) Spherical camera model

d) Pinhole camera model

Answer: d) Pinhole camera model

3. What does radiance represent in computer vision?

a) Amount of light energy emitted by a surface

b) Amount of light energy incident on a surface

c) Amount of light energy reflected by a surface

d) Amount of light energy transmitted through a surface

Answer: c) Amount of light energy reflected by a surface

4. Which projection preserves parallel lines and lacks depth cues?

a) Orthographic projection b) Perspective projection

c) Spherical projection

d) Omnidirectional projection

Answer: a) Orthographic projection

5. What is the key characteristic of perspective projection?

a) Foreshortening of objects based on their distance

b) Preservation of parallel lines

c) Lack of depth cues

d) Convergence of parallel lines

Answer: a) Foreshortening of objects based on their distance

6. Which camera model is often used for cameras with a wide field of view?

a) Perspective camera model

b) Omnidirectional camera model


c) Orthographic camera model

d) Spherical camera model

Answer: b) Omnidirectional camera model

7. Which OpenCV function is used for finding corners in a chessboard calibration pattern?

a) cv2.findContours()

b) cv2.findChessboardCorners()

c) cv2.findEdges()

d) cv2.findFeatures()

Answer: b) cv2.findChessboardCorners()

8. What does the camera matrix represent in camera calibration?

a) Intrinsic camera parameters

b) Extrinsic camera parameters

c) Lens distortion coefficients

d) Principal point coordinates

Answer: a) Intrinsic camera parameters

9. What is the purpose of the distortion coefficients in camera calibration?

a) Estimating camera position and orientation

b) Correcting lens distortion in images

c) Determining the field of view

d) Calculating the focal length

Answer: b) Correcting lens distortion in images

10. Which camera model is commonly used in computer graphics and architectural plans?

a) Perspective camera model

b) Orthographic camera model


c) Spherical camera model

d) Omnidirectional camera model

Answer: b) Orthographic camera model

11. What is the unit of radiance?

a) Watts per square meter (W/m²)

b) Watts per square meter per steradian (W/m²·sr)

c) Watts per square meter per square meter (W/m²·m²)

d) Watts per square meter per square meter per wavelength (W/m²·m²·nm)

Answer: b) Watts per square meter per steradian (W/m²·sr)

12. Which projection involves the convergence of parallel lines?

a) Perspective projection

b) Orthographic projection

c) Spherical projection

d) Omnidirectional projection

Answer: a) Perspective projection

13. Which camera model is commonly used for fisheye or catadioptric cameras?

a) Perspective camera model

b) Orthographic camera model

c) Spherical camera model

d) Omnidirectional camera model

Answer: c) Spherical camera model

14. What is the purpose of irradiance in computer vision?

a) Estimating depth information


b) Identifying object shapes

c) Estimating the intensity of incident light

d) Tracking object motion

Answer: c) Estimating the intensity of incident light

15. Which projection preserves the relative size of objects at different depths?

a) Perspective projection

b) Orthographic projection

c) Spherical projection

d) Omnidirectional projection

Answer: b) Orthographic projection

16. What does the camera matrix contain in camera calibration?

a) Intrinsic and extrinsic camera parameters

b) Lens distortion coefficients

c) Principal point coordinates d) Focal length

Answer: a) Intrinsic and extrinsic camera parameters

17. Which camera model is commonly used for 3D reconstruction and augmented reality?

a) Perspective camera model

b) Orthographic camera model

c) Spherical camera model

d) Omnidirectional camera model

Answer: a) Perspective camera model

18. Which OpenCV function is used to calibrate a camera?

a) cv2.detectCorners()

b) cv2.calibrateCamera()
c) cv2.findContours()

d) cv2.calibrate()

Answer: b) cv2.calibrateCamera()
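
Questions 7, 8, 9, and 18 fit together in one short calibration sketch: chessboard corners are detected in several views and passed to cv2.calibrateCamera, which returns the intrinsic matrix, distortion coefficients, and per-view extrinsics. The pattern size and image names are assumptions.

```python
import cv2
import numpy as np

pattern = (9, 6)                                  # inner-corner count of the chessboard (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in ["calib_01.png", "calib_02.png"]:    # hypothetical calibration views
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Returns reprojection error, camera matrix (intrinsics), distortion coefficients,
# and the rotation/translation (extrinsics) of each calibration view
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

undistorted = cv2.undistort(cv2.imread("calib_01.png"), K, dist)   # lens distortion correction
```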

19. What is the purpose of irradiance in computer vision?

a) Estimating depth information

b) Identifying object shapes

c) Estimating the intensity of incident light

d) Tracking object motion

Answer: c) Estimating the intensity of incident light

20. Which projection preserves the relative size of objects at different depths?

a) Perspective projection

b) Orthographic projection

c) Spherical projection

d) Omnidirectional projection

Answer: b) Orthographic projection

21. What does the camera matrix contain in camera calibration?

a) Intrinsic and extrinsic camera parameters

b) Lens distortion coefficients

c) Principal point coordinates

d) Focal length

Answer: a) Intrinsic and extrinsic camera parameters

22. Which camera model assumes parallel projection rays and lacks depth cues?

a) Perspective camera model


b) Orthographic camera model

c) Spherical camera model

d) Omnidirectional camera model

Answer: b) Orthographic camera model

23. What is the purpose of lens distortion correction in camera calibration?

a) Estimating the focal length of the camera

b) Improving the accuracy of 3D reconstruction

c) Correcting image distortions caused by the camera lens

d) Calculating the camera's field of view

Answer: c) Correcting image distortions caused by the camera lens

24. Which projection model is used for capturing a 360-degree view of the scene?

a) Perspective projection

b) Orthographic projection c) Spherical projection

d) Omnidirectional projection

Answer: c) Spherical projection

25. What is the unit of irradiance?

a) Watts per square meter (W/m²)

b) Watts per square meter per steradian (W/m²·sr)

c) Watts per square meter per square meter (W/m²·m²)

d) Watts per square meter per square meter per wavelength (W/m²·m²·nm)

Answer: a) Watts per square meter (W/m²)

26. What is the purpose of camera calibration?

a) Determining the camera's intrinsic parameters

b) Correcting lens distortion in images


c) Estimating the camera's extrinsic parameters

d) All of the above

Answer: d) All of the above

27. Which camera model is commonly used in architectural drawings?

a) Perspective camera model

b) Orthographic camera model

c) Spherical camera model

d) Omnidirectional camera model

Answer: b) Orthographic camera model

28. What does radiance measure?

a) Amount of light energy emitted by a surface

b) Amount of light energy incident on a surface

c) Amount of light energy reflected by a surface

d) Amount of light energy transmitted through a surface

Answer: c) Amount of light energy reflected by a surface

29. Which projection model is commonly used in architectural drawings?

a) Perspective projection

b) Orthographic projection

c) Spherical projection

d) Omnidirectional projection

Answer: b) Orthographic projection

30. What does the camera matrix contain in camera calibration?

a) Intrinsic and extrinsic camera parameters


b) Lens distortion coefficients

c) Principal point coordinates

d) Focal length

Answer: a) Intrinsic and extrinsic camera parameters

Unit === 09

1. Which field deals with the extraction and analysis of useful information from digital images or
videos?

a) Computer graphics

b) Image processing

c) Computer vision

d) Pattern recognition

Answer: c) Computer vision

2. Which technique is used to reduce noise, enhance details, and normalize image intensities?

a) Image segmentation

b) Image filtering

c) Edge detection

d) Feature extraction

Answer: b) Image filtering

3. Which technique is used to detect abrupt changes in pixel intensity and highlight object
boundaries?

a) Image segmentation

b) Image filtering

c) Edge detection

d) Feature extraction
Answer: c) Edge detection

4. Which technique is used to group pixels with similar characteristics together based on intensity or
color?

a) Image segmentation
b) Image filtering
c) Edge detection
d) Feature extraction

Answer: a) Image segmentation

5. Which technique is used to identify specific patterns or features in an image?

a) Image segmentation

b) Image filtering

c) Edge detection

d) Feature extraction

Answer: d) Feature extraction

6. Which technique is commonly used for object detection, using pre-trained models and feature
extraction?

a) Haar cascades

b) HOG features

c) Template matching

d) Active contours

Answer: a) Haar cascades

7. Which technique is commonly used for image classification, using convolutional neural networks
(CNNs)?

a) Haar cascades

b) HOG features

c) Template matching
d) CNNs

Answer: d) CNNs

8. Which technique is used to estimate the depth or distance of objects in a scene based on focus
variations in images?

a) Stereo vision

b) Structure from motion

c) Depth from focus

d) Photometric stereo

Answer: c) Depth from focus

9. Which technique estimates the surface normals and 3D structure of objects based on lighting
variations in images?

a) Stereo vision

b) Structure from motion

c) Depth from focus

d) Photometric stereo

Answer: d) Photometric stereo

10. What is the main purpose of camera calibration in computer vision?

a) To estimate the depth of objects in a scene

b) To remove noise and enhance image details

c) To compute the camera parameters for accurate measurements

d) To classify objects based on their appearance

Answer: c) To compute the camera parameters for accurate measurements

11. Which technique is used to extract distinctive features and establish correspondences between
images?

a) Camera calibration

b) Feature extraction

c) Feature matching

d) Image segmentation
Answer: c) Feature matching

12. Which technique estimates the 3D structure of a scene by analyzing the correspondences
between multiple images?

a) Stereo vision

b) Structure from motion

c) Depth from focus

d) Photometric stereo

Answer: b) Structure from motion

13. Which technique involves computing disparities and using triangulation to estimate the 3D
positions of scene points?

a) Stereo vision

b) Structure from motion

c) Depth from focus

d) Photometric stereo

Answer: a) Stereo vision
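
A minimal stereo sketch for question 13, assuming an already rectified image pair with hypothetical file names: block matching produces a disparity map, from which depth follows by triangulation as Z = f·B / d (focal length f in pixels, baseline B).

```python
import cv2

left  = cv2.imread("left.png",  cv2.IMREAD_GRAYSCALE)   # rectified stereo pair (assumed)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype("float32") / 16.0   # fixed-point -> pixels

# With focal length f (pixels) and baseline B (metres), depth is Z = f * B / disparity
```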

14. Which technique is used to generate a dense 3D model by using information from multiple
viewpoints?

a) Stereo vision

b) Structure from motion

c) Multi-View Stereo

d) Depth from focus

Answer: c) Multi-View Stereo

15. Which technique is used to estimate the 3D structure and camera motion simultaneously from a
sequence of images?

a) Stereo vision

b) Structure from motion

c) Multi-View Stereo

d) Depth from focus


Answer: b) Structure from motion

16. Which technique is used to estimate the 3D structure and camera motion simultaneously from a
sequence of images?

a) Stereo vision

b) Structure from motion

c) Multi-View Stereo

d) Depth from focus

Answer: b) Structure from motion

17. Which technique estimates the 3D shape of an object based on the variations in brightness or
shading in its 2D image?

a) Shape from shading

b) Shape from motion

c) Shape from texture

d) Shape from focus

Answer: a) Shape from shading

18. Which technique estimates the 3D shape of an object based on the variations in texture or
appearance in its 2D image?

a) Shape from shading

b) Shape from motion

c) Shape from texture

d) Shape from focus

Answer: c) Shape from texture

19. Which technique estimates the 3D shape of an object based on the variations in focus across
multiple images?

a) Shape from shading

b) Shape from motion

c) Shape from texture

d) Shape from focus


Answer: d) Shape from focus

20. Which technique is used to estimate the depth of objects in a scene based on the analysis of
images captured from multiple viewpoints?

a) Stereo vision

b) Structure from motion

c) Multi-View Stereo

d) Depth from focus

Answer: a) Stereo vision

21. Which technique is commonly used for tracking objects or analyzing object motion in videos?

a) Image segmentation

b) Object detection

c) Object tracking

d) Feature extraction

Answer: c) Object tracking

22. Which technique is used for recognizing and verifying individuals based on their unique facial
features?

a) Image segmentation

b) Object detection

c) Facial recognition

d) Feature extraction

Answer: c) Facial recognition

23. Which technique estimates the 3D structure of a scene by fusing information from multiple
viewpoints?

a) Image segmentation

b) Object tracking

c) Semantic segmentation

d) Multi-View Stereo

Answer: d) Multi-View Stereo


24. Which technique is used to estimate the 3D structure of a scene and the camera motion
simultaneously?

a) Image segmentation

b) Object tracking

c) Structure from motion

d) Semantic segmentation

Answer: c) Structure from motion

25. Which technique is used to assign depth values to each pixel in an image based on the disparity
between stereo image pairs?

a) Depth from focus

b) Depth from motion

c) Depth from shading

d) Depth from stereo

Answer: d) Depth from stereo

26. Which technique is used to estimate the 3D structure of a scene by analyzing variations in focus
across multiple images?

a) Depth from focus

b) Depth from motion

c) Depth from shading

d) Depth from stereo

Answer: a) Depth from focus

27. Which technique is used to estimate the 3D structure of a scene by analyzing variations in

brightness or shading in its 2D image?

a) Depth from focus

b) Depth from motion

c) Depth from shading

d) Depth from stereo

Answer: c) Depth from shading


28. Which technique estimates the depth or distance of objects in a scene based on focus variations
in images?

a) Stereo vision

b) Structure from motion

c) Depth from focus

d) Photometric stereo

Answer: c) Depth from focus

29. Which technique estimates the surface normals and 3D structure of objects based on lighting
variations in images?

a) Stereo vision

b) Structure from motion

c) Depth from focus

d) Photometric stereo

Answer: d) Photometric stereo

30. What is the main purpose of camera calibration in computer vision?

a) To estimate the depth of objects in a scene

b) To remove noise and enhance image details

c) To compute the camera parameters for accurate measurements

d) To classify objects based on their appearance

Answer: c) To compute the camera parameters for accurate measurements

==================================================================================

Unit === 10

1. Which of the following is a fundamental step in image processing?

a) Image acquisition

b) Image enhancement

c) Image segmentation

d) Image compression
Answer : a) Image acquisition

2. Which method is used to reduce noise and smoothen an image?

a) Edge detection

b) Thresholding

c) Blurring

d) Histogram equalization

Answer : c) Blurring

3. The Fourier transform is used in image processing for:

a) Image denoising

b) Image segmentation

c) Frequency domain analysis

d) Edge detection

Answer : c) Frequency domain analysis
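
A short NumPy sketch of frequency-domain analysis (question 3): the 2-D FFT, the shifted log-magnitude spectrum usually displayed, and the inverse transform. A random array stands in for a real image.

```python
import numpy as np

img = np.random.rand(256, 256)                    # stand-in for a grayscale image

F = np.fft.fft2(img)                              # 2-D discrete Fourier transform
F_shifted = np.fft.fftshift(F)                    # zero frequency moved to the centre
magnitude = 20 * np.log(np.abs(F_shifted) + 1)    # log-magnitude spectrum for display

recovered = np.fft.ifft2(F).real                  # inverse transform returns the image
```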

4. Which technique aims to preserve edges while reducing noise in an image?

a) Gaussian filtering

b) Median filtering

c) Sobel operator

d) Canny edge detection

5. What is the purpose of image segmentation?

a) Enhancing image details

b) Removing image noise

c) Dividing an image into meaningful regions

d) Adjusting image brightness and contrast

6. Which technique is used for extracting relevant information from an image?

a) Image restoration

b) Feature extraction
c) Image compression

d) Image registration

7. Which compression method is lossless and commonly used for text-based images?

a) JPEG

b) PNG

c) GIF

d) TIFF

8. Regularization theory in image processing is used to address:

a) Noise reduction

b) Image enhancement

c) Ill-posedness and instability

d) Image segmentation

9. Which regularization technique encourages sparse solutions?

a) Tikhonov regularization

b) Total Variation (TV) regularization

c) L1-Regularization (Lasso)

d) Bayesian approaches

10. Which method exploits redundancy and self-similarity in natural images?

a) Tikhonov regularization

b) Non-local regularization

c) Total Variation (TV) regularization

d) L1-Regularization (Lasso)

11. The Canny edge detection algorithm involves which of the following steps?

a) Gaussian smoothing, gradient computation, non-maximum suppression

b) Histogram equalization, thresholding, morphological operations

c) Fourier transform, image dilation, contour extraction


d) Sobel operator, image restoration, feature extraction

12. Which noise model simulates sudden disturbances or errors in an image?

a) Gaussian noise

b) Salt and pepper noise

c) Poisson noise

d) Speckle noise

13. Which noise model represents the random variation observed in low-light conditions?

a) Gaussian noise

b) Salt and pepper noise

c) Poisson noise

d) Speckle noise

14. Which noise model is best suited for simulating imaging systems' noise?

a) Gaussian noise

b) Salt and pepper noise

c) Poisson noise

d) Speckle noise

15. Which method can be used to remove salt and pepper noise from an image?

a) Median filtering

b) Gaussian filtering

c) Sobel operator

d) Canny edge detection
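
The standard choice for question 15 is median filtering, since the median of a neighbourhood simply discards isolated black or white outliers; a two-line OpenCV sketch with a hypothetical input file:

```python
import cv2

img = cv2.imread("noisy.png", cv2.IMREAD_GRAYSCALE)   # hypothetical noisy input
clean = cv2.medianBlur(img, 3)   # 3x3 median: isolated salt-and-pepper pixels are discarded
```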

16. Which algorithm is used for adaptive filtering and noise reduction?

a) Non-local means denoising

b) Total Variation (TV) regularization

c) L1-Regularization (Lasso)

d) Bayesian regularization
17. Which method is used to enhance image details and adjust local contrast?

a) Histogram equalization

b) Fourier transform

c) Morphological operations

d) Non-maximum suppression

18. Which technique is used to divide an image into regions based on color or intensity similarities?

a) Edge detection

b) Thresholding

c) Segmentation

d) Feature extraction

19. Which technique is used to extract relevant features such as edges, corners, or textures from an
image?

a) Image restoration

b) Image compression

c) Feature extraction

d) Image registration

20. Which technique can be used to compress an image by reducing redundancy and eliminating
irrelevant details?

a) Image enhancement

b) Image registration

c) Image compression

d) Image segmentation

21. Which type of noise is common in medical imaging due to low radiation levels?

a) Gaussian noise

b) Salt and pepper noise

c) Poisson noise
d) Speckle noise

22. Which method is used to reduce noise and preserve edges in an image simultaneously?

a) Median filtering

b) Gaussian filtering

c) Laplacian operator

d) Canny edge detection

23. Which regularization technique promotes sparsity and piecewise constant solutions?

a) Tikhonov regularization

b) Total Variation (TV) regularization

c) L1-Regularization (Lasso)

d) Bayesian approaches

24. The Canny edge detection algorithm is based on:

a) Sobel operators

b) Fourier transform

c) Laplacian operator

d) Median filtering

25. Which technique is used to identify and connect weak edges to strong edges to form complete
edges?

a) Non-maximum suppression

b) Thresholding

c) Region-growing

d) Contour extraction

26. Which technique is used to encourage similar patches or structures to have similar pixel values?

a) Gaussian smoothing

b) Histogram equalization

c) Non-local regularization
d) Fourier transform

27. Which regularization technique balances data fidelity and regularization constraints using a
regularization parameter?

a) Tikhonov regularization

b) Total Variation (TV) regularization

c) L1-Regularization (Lasso)

d) Bayesian approaches

28. Which technique is used to analyze the frequency content of an image?

a) Histogram equalization

b) Fourier transform

c) Morphological operations

d) Non-maximum suppression

29. Which compression method is lossy and commonly used for natural images?

a) JPEG

b) PNG

c) GIF

d) TIFF

30. Which technique is used to enhance image contrast by stretching the pixel values to cover the
full dynamic range?

a) Histogram equalization

b) Gaussian filtering

c) Sobel operator

d) Canny edge detection

==================================================================================
Unit No 11

1. Which of the following is NOT a common image filtering technique?

a) Gaussian filter
b) Median filter

c) Laplacian filter

d) Fourier filter

2. Image segmentation is the process of:

a) Detecting keypoints in an image

b) Dividing an image into meaningful regions

c) Enhancing the contrast of an image

d) Converting a color image to grayscale

3. Which feature extraction technique is commonly used for matching keypoints between images?

a) Harris Corner Detection

b) Histogram Equalization

c) Sobel Edge Detection

d) Gaussian Blurring

4. Image registration is used to:

a) Match features in two images

b) Align two or more images together

c) Enhance the quality of an image

d) Convert a color image to grayscale
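
A sketch of feature-based registration and alignment (questions 3 and 4): ORB keypoints are matched between the two images, a homography is estimated robustly with RANSAC, and the moving image is warped onto the reference. The file names and parameter values are assumptions.

```python
import cv2
import numpy as np

ref    = cv2.imread("reference.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical image pair
moving = cv2.imread("moving.jpg",    cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(ref, None)
kp2, des2 = orb.detectAndCompute(moving, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)   # points in moving
dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)   # points in reference

H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # robust transform estimate
aligned = cv2.warpPerspective(moving, H, (ref.shape[1], ref.shape[0]))
```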

5. Image classification is the task of:

a) Detecting objects in an image

b) Segmenting an image into regions

c) Assigning a label to an entire image

d) Extracting features from an image

6. What type of neural networks are commonly used for image classification tasks?

a) Recurrent Neural Networks (RNNs)

b) Convolutional Neural Networks (CNNs)


c) Generative Adversarial Networks (GANs)

d) Multilayer Perceptrons (MLPs)

7. Object detection aims to:

a) Identify and label objects within an image

b) Find similarities between two images

c) Align images together for stereo vision

d) Enhance the resolution of an image

8. Optical Character Recognition (OCR) is used for:

a) Detecting objects in an image

b) Segmenting an image into regions

c) Identifying and converting text from images

d) Extracting features from an image

9. Image denoising techniques are used to:

a) Enhance the sharpness of an image

b) Restore lost details in an image

c) Reduce noise or artifacts in an image

d) Convert a color image to grayscale

10. Image morphology operations include:

a) Histogram equalization and thresholding

b) Erosion and dilation

c) Fourier transform and convolution

d) Edge detection and feature extraction

11. Which type of compression is used to reduce image file size with minimal loss of quality?

a) Lossless compression

b) Lossy compression

c) JPEG compression
d) PNG compression

12. Which of the following is a popular deep learning framework for computer vision tasks?

a) OpenCV

b) TensorFlow

c) NumPy

d) Scikit-learn

13. Camera calibration is essential for:

a) Capturing high-resolution images

b) Estimating camera poses in 3D space

c) Applying image filtering techniques

d) Enhancing the contrast of an image

14. Stereo vision uses binocular disparity to:

a) Enhance the depth of an image

b) Align images for panorama stitching

c) Estimate the depth of 3D objects in a scene

d) Reduce noise in an image

15. Optical flow techniques are used for:

a) Detecting keypoints in an image

b) Estimating camera calibration parameters

c) Tracking the movement of pixels between frames

d) Removing distortions in an image

16. Structure from Motion (SfM) is used for:

a) Object detection in images

b) 3D reconstruction of a scene from multiple images

c) Image segmentation
d) Image classification

17. Optical computation is a technique that uses:

a) Light rays for image processing tasks

b) Human vision for object detection

c) Stereoscopic vision for depth estimation

d) Image filtering for denoising

18. Which of the following is NOT an application of computer vision?

a) Autonomous vehicles

b) Medical image analysis

c) Natural language processing

d) Augmented reality

19. Camera distortion in stereo vision can affect:

a) The accuracy of disparity estimation

b) The resolution of the captured images

c) The color balance of the images

d) The orientation of the camera

20. In image registration, what type of transformation is applied to align two images?

a) Rigid transformation

b) Non-linear transformation

c) Scaling transformation

d) Shearing transformation

21. Depth estimation in stereo vision is derived from:

a) Image filtering techniques

b) Optical flow analysis

c) Binocular disparity information


d) Feature extraction from the images

22. Which type of compression is used for storing medical images with precise data preservation?

a) Lossless compression

b) Lossy compression

c) JPEG compression

d) PNG compression

23. In image restoration, what is the goal?

a) Enhancing the quality of the image

b) Reducing the file size of the image

c) Removing noise or artifacts from the image

d) Converting the image to grayscale

24. Image segmentation is used for:

a) Identifying objects in an image

b) Estimating the depth of the scene

c) Removing distortions from the image

d) Converting a color image to grayscale

25. Which feature extraction method is commonly used in image registration?

a) Harris Corner Detection

b) Histogram Equalization

c) Sobel Edge Detection

d) Gaussian Blurring

26. Optical Character Recognition (OCR) is used to:

a) Enhance the quality of an image

b) Restore lost details in an image

c) Identify and convert text from images


d) Segment an image into regions

27. What type of neural networks are commonly used for image classification tasks?

a) Recurrent Neural Networks (RNNs)

b) Convolutional Neural Networks (CNNs)

c) Generative Adversarial Networks (GANs)

d) Multilayer Perceptrons (MLPs)

28. Which type of image filtering technique is useful for reducing noise in an image?

a) Gaussian filter

b) Median filter

c) Sobel filter

d) Laplacian filter

29. Stereo matching algorithms are used to find:

a) Similar images in a dataset

b) Corresponding points in stereo image pairs

c) The best image feature descriptor

d) The best camera calibration parameters

30. Image morphology operations include:

a) Erosion and dilation

b) Histogram equalization and thresholding

c) Fourier transform and convolution

d) Edge detection and feature extraction

==================================================================================

Unit-12 Sample MCQ:

1. **Question:** What is the primary objective of contour-based shape representation?

- A. Capturing the essential features of an object's boundary.


- B. Assigning labels to pixels inside an object.

- C. Describing the texture of an object.

- D. Representing the interior of an object with a binary mask.

- ✅ Correct Answer: A.

2. **Question:** Which of the following is not a common shape representation technique?

- A. Skeletonization.

- B. Shape Descriptors.

- C. Contour Hierarchies.

- D. Curvature Histograms.

- ✅ Correct Answer: D.

3. **Question:** In contour-based shape representation, what is a common way to represent the contour's shape?

- A. Point Coordinates.

- B. Binary Mask.

- C. Chain Codes.

- D. Curvature Information.

- ✅ Correct Answer: C.

4. **Question:** The Douglas-Peucker algorithm is used for:

- A. Region-based segmentation.

- B. Contour approximation.

- C. Skeletonization.

- D. Shape matching.

- ✅ Correct Answer: B.
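
OpenCV's cv2.approxPolyDP implements the Douglas-Peucker contour approximation named in question 4; a short sketch, assuming a hypothetical binary mask and the OpenCV 4 return signature of findContours.

```python
import cv2

mask = cv2.imread("shape_mask.png", cv2.IMREAD_GRAYSCALE)   # hypothetical binary mask
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contour = max(contours, key=cv2.contourArea)

# Douglas-Peucker: epsilon is the maximum allowed deviation from the original contour
epsilon = 0.01 * cv2.arcLength(contour, True)
approx = cv2.approxPolyDP(contour, epsilon, True)
print(len(contour), "contour points reduced to", len(approx))
```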

5. **Question:** What does the internal energy in deformable curves encourage?


- A. Smoothness and regularization of the curve.

- B. Movement towards object boundaries.

- C. Minimization of the energy functional.

- D. Expansion of the curve.

- ✅ Correct Answer: A.

6. **Question:** Deformable surfaces (active contours) work in which dimension?

- A. 1D.

- B. 2D.

- C. 3D.

- D. Both 2D and 3D.

- ✅ Correct Answer: D.

7. **Question:** Active contours are used for:

- A. Shape modeling in computer graphics.

- B. Image segmentation.

- C. Object tracking in videos.

- D. All of the above.

- ✅ Correct Answer: D.

8. **Question:** What does the external energy term in active contours depend on?

- A. The snake's rigidity.

- B. Image characteristics like edges or gradients.

- C. The number of iterations.

- D. The initial contour position.

- ✅ Correct Answer: B.

9. **Question:** Which library provides an efficient implementation of Active Contours in Python?

- A. OpenCV.
- B. scikit-image.

- C. NumPy.

- D. TensorFlow.

- ✅ Correct Answer: B.
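
A minimal active-contour (snake) sketch with scikit-image, in the spirit of questions 5-9; the sample image, initial circle, and energy weights are illustrative values, and snake coordinates are given as (row, column) as in recent scikit-image versions.

```python
import numpy as np
from skimage import data, filters, segmentation

img = data.coins()                                    # sample image bundled with scikit-image

# Initial circular snake placed near one object (centre and radius are illustrative)
theta = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([80 + 40 * np.sin(theta),      # rows
                        120 + 40 * np.cos(theta)])    # columns

snake = segmentation.active_contour(
    filters.gaussian(img, sigma=2),                   # external energy from the smoothed image
    init, alpha=0.015, beta=10, gamma=0.001)          # internal energy / step-size weights
```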

10. **Question:** Mean-Shift segmentation is based on:

- A. Edge detection.

- B. Clustering.

- C. Level set methods.

- D. Graph-based optimization.

- ✅ Correct Answer: B.

11. **Question:** Watershed segmentation can be used for:

- A. Edge detection.

- B. Object recognition.

- C. Image smoothing.

- D. Image segmentation.

- ✅ Correct Answer: D.

12. **Question:** Which shape representation method encodes the relative positions of
neighboring contour points?

- A. Point Coordinates.

- B. Binary Mask.

- C. Chain Codes.

- D. Curvature Information.

- ✅ Correct Answer: C.

13. **Question:** Which technique groups pixels into compact, perceptually meaningful regions
while preserving object boundaries?

- A. Mean-Shift Segmentation.

- B. Region Growing.

- C. Superpixel Segmentation.
- D. Watershed Segmentation.

- ✅ Correct Answer: C.

14. **Question:** Which type of segmentation algorithm treats pixel intensities as a topographic
relief?

- A. Mean-Shift Segmentation.

- B. Region Growing.

- C. Superpixel Segmentation.

- D. Watershed Segmentation.

- ✅ Correct Answer: D.

15. **Question:** Which region-based representation method utilizes graph cuts?

- A. Mean-Shift Segmentation.

- B. Region Growing.

- C. Superpixel Segmentation.

- D. Watershed Segmentation.

- ✅ Correct Answer: D.

16. **Question:** Which representation method captures the overall shape of objects using a
sequence of connected boundary points?

- A. Contour-based Representation.

- B. Region-based Representation.

- C. Skeletonization.

- D. Shape Descriptors.

- ✅ Correct Answer: A.

17. **Question:** Which shape representation method includes numerical descriptors such as area,
perimeter, and moments?

- A. Contour-based Representation.

- B. Region-based Representation.

- C. Skeletonization.
- D. Shape Descriptors.

- ✅ Correct Answer: D.

18. **Question:** In contour-based shape representation, which technique reduces the number of
points while preserving essential shape characteristics?

- A. Contour Extraction.

- B. Contour Approximation.

- C. Contour Representation.

- D. Contour Hierarchies.

- ✅ Correct Answer: B.

19. **Question:** Which region-based representation technique involves morphological operations to remove noise and fill gaps?

- A. Binary Mask.

- B. Chain Codes.

- C. Region Growing.

- D. Watershed Segmentation.

- ✅ Correct Answer: D.

20. **Question:** Deformable curves (active contours) are not suitable for:

- A. Image segmentation.

- B. Object tracking.

- C. Image filtering.

- D. Shape modeling.

- ✅ Correct Answer: C.

21. **Question:** The external energy in deformable curves is based on which image
characteristics?

- A. Color distribution.

- B. Texture analysis.

- C. Object boundaries or gradients.

- D. Area under the curve.


- ✅ Correct Answer: C.

22. **Question:** The Mean-Shift segmentation algorithm is commonly used for:

- A. Image morphing.

- B. Image smoothing.

- C. Object tracking.

- D. Image sharpening.

- ✅ Correct Answer: C.

23. **Question:** Deformable surfaces are also known as:

- A. Superpixels.

- B. Skeletonization.

- C. Active Contours.

- D. Chain Codes.

- ✅ Correct Answer: C.

24. **Question:** The internal energy in deformable surfaces promotes:

- A. Smoothness and regularization of the surface.

- B. Movement towards object boundaries.

- C. Minimization of the energy functional.

- D. Expansion of the surface.

- ✅ Correct Answer: A.

25. **Question:** Which library provides built-in implementations of region-based segmentation algorithms in Python?

- A. OpenCV.

- B. NumPy.

- C. scikit-image.

- D. TensorFlow.

- ✅ Correct Answer: C.
26. **Question:** What does the Watershed segmentation algorithm use to separate regions in an
image?

- A. Graph Cuts.

- B. Clustering.

- C. Watershed lines.

- D. Gradient information.

- ✅ Correct Answer: D.

27. **Question:** Deformable curves (active contours) can be initialized:

- A. Anywhere in the image.

- B. Close to the object boundary of interest.

- C. In the center of the image.

- D. At random locations.

- ✅ Correct Answer: B.

28. **Question:** What is the primary objective of region-based representation?

- A. Capturing the essential features of an object's boundary.

- B. Assigning labels to pixels inside an object.

- C. Describing the texture of an object.

- D. Representing the interior of an object with a binary mask.

- ✅ Correct Answer: B.

29. **Question:** Which method combines graph cuts with deformable surfaces for efficient shape
modeling?

- A. Mean-Shift Segmentation.

- B. Watershed Segmentation.

- C. Level Set Methods.

- D. Superpixel Segmentation.

- ✅ Correct Answer: C.
30. **Question:** Which region-based representation method involves grouping pixels with similar
color or intensity values into regions?

- A. Binary Mask.

- B. Chain Codes.

- C. Region Growing.

- D. Watershed Segmentation.

- ✅ Correct Answer: C.

Unit-13 Sample MCQ

1. **Question:** Level Set Representations are primarily used for:

- A. Image compression.

- B. Object detection.

- C. Shape modeling and segmentation.

- D. Image enhancement.

- ✅ Correct Answer: C.

2. **Question:** Fourier Descriptors are used to:

- A. Analyze the texture of an image.

- B. Encode the shape of an object using frequency components.

- C. Enhance image resolution.

- D. Perform image denoising.

- ✅ Correct Answer: B.
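
To make Question 2 concrete, the sketch below computes simple Fourier descriptors from the complex boundary signature of a shape. Dropping the DC term and dividing by the first harmonic are the usual (assumed) normalizations for translation and scale invariance; taking magnitudes would additionally give rotation invariance.

```python
import numpy as np

def fourier_descriptors(boundary, n_keep=16):
    """Low-frequency Fourier descriptors of a closed (N, 2) boundary."""
    z = boundary[:, 0] + 1j * boundary[:, 1]   # complex boundary signature
    coeffs = np.fft.fft(z)
    coeffs[0] = 0.0                            # drop DC term -> translation invariance
    coeffs = coeffs / np.abs(coeffs[1])        # scale by first harmonic -> scale invariance
    return coeffs[1:n_keep + 1]

# Toy example: a unit circle sampled at 128 points has all of its shape energy
# in the first harmonic, so the descriptor magnitudes come out as ~[1, 0, 0, ...].
t = np.linspace(0, 2 * np.pi, 128, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
print(np.round(np.abs(fourier_descriptors(circle, 8)), 3))
```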

3. **Question:** Medial Representations provide a simplified view of an object's shape by representing:

- A. The object's texture.

- B. The object's boundary.


- C. The object's centerlines.

- D. The object's color distribution.

- ✅ Correct Answer: C.

4. **Question:** Multi-Resolution Analysis involves:

- A. Analyzing images at a single resolution only.

- B. Analyzing images at different scales or resolutions.

- C. Increasing the image size.

- D. Enhancing image contrast.

- ✅ Correct Answer: B.

5. **Question:** Level Set methods evolve a higher-dimensional function over time to capture:

- A. Color information in images.

- B. Object's boundary deformation.

- C. Texture patterns in images.

- D. Spatial frequency components.

- ✅ Correct Answer: B.

6. **Question:** Which of the following is a characteristic of Fourier Descriptors?

- A. They analyze image textures.

- B. They are invariant to rotation and translation.

- C. They directly represent image pixels.

- D. They are primarily used for image denoising.

- ✅ Correct Answer: B.

7. **Question:** Medial Representations are useful for:

- A. Enhancing image contrast.

- B. Capturing fine texture details.

- C. Simplifying shape analysis and comparison.

- D. Generating realistic image textures.

- ✅ Correct Answer: C.
8. **Question:** In Multi-Resolution Analysis, an image pyramid consists of:

- A. Images of varying colors.

- B. Images with different aspect ratios.

- C. Images at different scales or resolutions.

- D. Images with random noise patterns.

- ✅ Correct Answer: C.
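
For Question 8, an image pyramid is simply the same image at progressively coarser resolutions; scikit-image builds one directly (the number of layers below is an arbitrary choice).

```python
from skimage import data
from skimage.transform import pyramid_gaussian

img = data.camera()  # 512 x 512 sample image

# Each level is smoothed and downsampled by a factor of 2.
pyramid = list(pyramid_gaussian(img, max_layer=3, downscale=2))
print([level.shape for level in pyramid])
# [(512, 512), (256, 256), (128, 128), (64, 64)]
```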

9. **Question:** Level Set methods are commonly used for:

- A. Image compression.

- B. Image smoothing.

- C. Image registration.

- D. Object segmentation.

- ✅ Correct Answer: D.

10. **Question:** Fourier Descriptors are sensitive to:

- A. Scaling and rotation of objects.

- B. Changes in image intensity.

- C. Object boundary deformation.

- D. Image noise levels.

- ✅ Correct Answer: A.

11. **Question:** Medial Representations help in analyzing an object's:

- A. Color distribution.

- B. Fine texture details.

- C. Centerlines and main features.

- D. Fourier coefficients.

- ✅ Correct Answer: C.

12. **Question:** Wavelet Transforms are commonly used in:

- A. Fourier Descriptors.
- B. Skeletonization.

- C. Image compression and denoising.

- D. Medial Representations.

- ✅ Correct Answer: C.
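
Question 12's wavelet connection can be sketched with PyWavelets (assuming the `pywt` package is available): a single-level 2-D DWT splits the image into an approximation band plus three detail bands, which is the building block of multi-resolution compression and denoising. The Haar wavelet is just one possible choice.

```python
import pywt
from skimage import data

img = data.camera().astype(float)

# One level of the 2-D discrete wavelet transform: approximation (cA) plus
# horizontal (cH), vertical (cV) and diagonal (cD) detail coefficients.
cA, (cH, cV, cD) = pywt.dwt2(img, 'haar')
print(cA.shape, cH.shape)  # each band is half the original size along each axis
```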

13. **Question:** In Fourier Descriptors, the first coefficient (DC component) represents:

- A. The object's high-frequency details.

- B. The object's rotation.

- C. The object's translation.

- D. The object's shape center.

- ✅ Correct Answer: D.

14. **Question:** Medial Representations are particularly useful for:

- A. Capturing detailed texture patterns.

- B. Simplifying shape representation and analysis.

- C. Enhancing image resolution.

- D. Eliminating image noise.

- ✅ Correct Answer: B.

15. **Question:** Multi-Resolution Analysis helps in:

- A. Increasing image noise.

- B. Analyzing images at a single scale.

- C. Identifying object boundaries.

- D. Extracting features at different levels of detail.

- ✅ Correct Answer: D.

16. **Question:** Level Set Representations are advantageous for:

- A. Handling complex shape deformations.

- B. Enhancing image textures.

- C. Performing image rotation.

- D. Removing image artifacts.


- ✅ Correct Answer: A.

17. **Question:** Fourier Descriptors are based on the idea of representing an object's shape using:

- A. Pixel intensities.

- B. Histograms.

- C. Frequency components.

- D. Chain codes.

- ✅ Correct Answer: C.

18. **Question:** Medial Representations help in reducing:

- A. Image resolution.

- B. Image contrast.

- C. Shape complexity.

- D. Color variations.

- ✅ Correct Answer: C.

19. **Question:** Which technique captures both global and local features of an image?

- A. Level Set Representations.

- B. Fourier Descriptors.

- C. Medial Representations.

- D. Multi-Resolution Analysis.

- ✅ Correct Answer: D.

20. **Question:** Level Set methods are particularly useful for:

- A. Image sharpening.

- B. Texture classification.

- C. Handling topological changes in object shapes.

- D. Reducing image noise.

- ✅ Correct Answer: C.
21. **Question:** Fourier Descriptors are used for shape analysis because they are:

- A. Sensitive to object color.

- B. Sensitive to object position.

- C. Invariant to rotation and translation.

- D. Invariant to object boundary deformation.

- ✅ Correct Answer: C.

22. **Question:** Medial Representations are especially helpful for:

- A. Detecting object corners.

- B. Eliminating object boundaries.

- C. Simplifying shape matching.

- D. Enhancing image textures.

- ✅ Correct Answer: C.

23. **Question:** Which technique involves wavelet transforms to analyze images at different
scales?

- A. Level Set Representations.

- B. Fourier Descriptors.

- C. Medial Representations.

- D. Multi-Resolution Analysis.

- ✅ Correct Answer: D.

24. **Question:** Level Set methods are robust to changes in:

- A. Object shape.

- B. Object color.

- C. Image contrast.

- D. Image noise.

- ✅ Correct Answer: A.

25. **Question:** Fourier Descriptors are commonly used for:


- A. Capturing object centerlines.

- B. Removing image artifacts.

- C. Object recognition and matching.

- D. Enhancing image resolution.

- ✅ Correct Answer: C.

26. **Question:** Medial Representations help in simplifying shape analysis by:

- A. Capturing fine texture details.

- B. Eliminating object boundaries.

- C. Focusing on key features and main characteristics.

- D. Enhancing image contrast.

- ✅ Correct Answer: C.

27. **Question:** Multi-Resolution Analysis is useful for:

- A. Analyzing images at a single resolution.

- B. Capturing only global features of an image.

- C. Handling changes in object shape.

- D. Analyzing images at different scales to extract information.

- ✅ Correct Answer: D.

28. **Question:** The primary objective of Level Set Representations is to:

- A. Enhance image textures.

- B. Detect object boundaries.

- C. Simplify object shapes.

- D. Capture evolving object boundaries over time.

- ✅ Correct Answer: D.

29. **Question:** Fourier Descriptors can be affected by changes in:

- A. Object rotation.

- B. Image color.

- C. Object translation.
- D. Image noise.

- ✅ Correct Answer: A.

30. **Question:** Medial Representations are a powerful tool for:

- A. Image compression.

- B. Handling image noise.

- C. Simplifying shape analysis and recognition.

- D. Enhancing image resolution.

- ✅ Correct Answer: C.

==================================================================================

Unit-14 Sample MCQ

1. **Question:** What is the primary goal of object detection?

- A) Identifying the type of object in an image.

- B) Identifying and localizing objects within an image.

- C) Recognizing human faces.

- D) Analyzing crowd density.

✅**Answer:** B) Identifying and localizing objects within an image.

2. **Question:** What is the main difference between object detection and object recognition?

- A) Object detection involves identifying and recognizing objects.

- B) Object recognition involves identifying objects within an image.

- C) Object detection localizes objects, while object recognition only identifies them.

- D) Object recognition is more complex than object detection.

✅**Answer:** C) Object detection localizes objects, while object recognition only identifies them.

3. **Question:** Which technique captures the geometric structure and spatial layout of a face in
three-dimensional space?
- A) Appearance models.

- B) Active Appearance Models (AAMs).

- C) 3D Shape Models.

- D) Eigenfaces.

✅**Answer:** C) 3D Shape Models.

4. **Question:** What is the purpose of eigenfaces in face recognition?

- A) Capturing the geometric structure of faces.

- B) Representing facial appearance variations.

- C) Localizing facial features.

- D) Detecting facial expressions.

✅**Answer:** B) Representing facial appearance variations.

5. **Question:** Which application involves identifying individuals by matching their unique facial
features?

- A) Object recognition.

- B) Face detection.

- C) Facial expression analysis.

- D) Face recognition.

✅**Answer:** D) Face recognition.

6. **Question:** What is an example of using a 3D shape model in computer vision?

- A) Identifying objects in an image.

- B) Enhancing color accuracy.

- C) Virtual reality facial animation.

- D) Converting images to grayscale.

✅**Answer:** C) Virtual reality facial animation.


7. **Question:** Which type of model combines both a statistical shape model and a texture model
for facial analysis?

- A) Appearance model.

- B) 3D Shape model.

- C) Active Appearance Model (AAM).

- D) Eigenface model.

✅**Answer:** C) Active Appearance Model (AAM).

8. **Question:** What is the main focus of surveillance systems?

- A) Enhancing user experiences.

- B) Capturing artistic photographs.

- C) Monitoring and analyzing environments for security.

- D) Enhancing gaming experiences.

✅**Answer:** C) Monitoring and analyzing environments for security.

9. **Question:** Which computer vision task is useful for analyzing crowd density, flow, and
congestion in public areas?

- A) Facial recognition.

- B) Object detection.

- C) Intrusion detection.

- D) Crowd management.

✅**Answer:** D) Crowd management.

10. **Question:** Which technology identifies individuals based on their unique facial features?

- A) Object recognition.

- B) Object detection.

- C) Image segmentation.

- D) Face recognition.
✅**Answer:** D) Face recognition.

11. **Question:** What does YOLO stand for in the context of object detection?

- A) You Only Look Once.

- B) You Only Learn Once.

- C) Your Object Localization Order.

- D) Your Only Learning Opportunity.

✅**Answer:** A) You Only Look Once.

12. **Question:** Which model is used for 3D face reconstruction and analysis?

- A) Eigenfaces.

- B) Active Appearance Models (AAMs).

- C) 3D Morphable Models (3DMMs).

- D) YOLOv3.

✅**Answer:** C) 3D Morphable Models (3DMMs).

13. **Question:** What is the primary purpose of eigenfaces in computer vision?

- A) Object recognition.

- B) Facial expression analysis.

- C) 3D face reconstruction.

- D) Dimensionality reduction and feature extraction.

✅**Answer:** D) Dimensionality reduction and feature extraction.
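
To ground Question 13, eigenfaces are simply the principal components of a matrix of flattened, aligned face images. The sketch below uses random data as a stand-in for a real face dataset, so only the mechanics (not the output) are meaningful.

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in data: 200 hypothetical 64x64 face images, flattened to rows.
rng = np.random.default_rng(0)
faces = rng.random((200, 64 * 64))

# Eigenfaces are the principal components of the mean-centred face matrix.
pca = PCA(n_components=50, whiten=True)
codes = pca.fit_transform(faces)                 # 50-D code per face
eigenfaces = pca.components_.reshape((50, 64, 64))
print(codes.shape, eigenfaces.shape)             # (200, 50) (50, 64, 64)
```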

14. **Question:** In surveillance, what is the purpose of anomaly detection?

- A) Identifying familiar objects.

- B) Detecting unusual behavior or events.

- C) Capturing artistic images.


- D) Enhancing video quality.

✅ **Answer:** B) Detecting unusual behavior or events.

15. **Question:** Which method combines both appearance and 3D shape information for more
accurate face analysis?

- A) Using facial recognition only.

- B) Ignoring 3D shape information.

- C) Focusing only on appearance models.

- D) Combining appearance and 3D shape models.

✅ **Answer:** D) Combining appearance and 3D shape models.

16. **Question:** What is the primary application of a 3D shape model in virtual reality?

- A) Facial recognition.

- B) Animation of virtual objects.

- C) Object detection.

- D) Enhancing image resolution.

✅**Answer:** B) Animation of virtual objects.

17. **Question:** Which model captures the most significant variations in a set of face images and
is used for dimensionality reduction?

- A) Active Appearance Model (AAM).

- B) 3D Morphable Model (3DMM).

- C) Eigenfaces.

- D) YOLOv3.

✅**Answer:** C) Eigenfaces.

18. **Question:** Which method is used for tracking crowd density and flow in public areas?

- A) Object detection.
- B) Facial recognition.

- C) Eigenfaces.

- D) Crowd analysis algorithms.

✅**Answer:** D) Crowd analysis algorithms.

19. **Question:** What is the primary advantage of combining appearance and 3D shape models in
computer vision?

- A) It simplifies the analysis process.

- B) It eliminates the need for object detection.

- C) It improves accuracy and robustness.

- D) It reduces computational complexity.

✅**Answer:** C) It improves accuracy and robustness.

20. **Question:** What is the primary purpose of an Active Appearance Model (AAM)?

- A) Object recognition.

- B) Capturing appearance variations.

- C) Tracking crowd density.

- D) Analyzing depth information.

✅**Answer:** B) Capturing appearance variations.

21. **Question:** Which technique is used for identifying individuals based on their facial features?

- A) Image segmentation.

- B) Object recognition.

- C) Facial recognition.

- D) Crowd management.

✅**Answer:** C) Facial recognition.


22. **Question:** What type of model is commonly used for analyzing crowd behavior in public
areas?

- A) 3D Morphable Model (3DMM).

- B) YOLOv3.

- C) Active Appearance Model (AAM).

- D) Crowd analysis algorithms.

✅**Answer:** D) Crowd analysis algorithms.

23. **Question:** In surveillance, which method is commonly used for identifying unauthorized
access?

- A) Object recognition.

- B) Object detection.

- C) Facial recognition.

- D) License plate recognition.

✅**Answer:** B) Object detection.

24. **Question:** What is the primary goal of crowd management algorithms?

- A) Identifying individual faces.

- B) Enhancing image quality.

- C) Analyzing crowd behavior.

- D) Tracking vehicle movement.

✅**Answer:** C) Analyzing crowd behavior.

25. **Question:** Which model captures both appearance and 3D shape information for facial
analysis?

- A) Active Appearance Model (AAM).

- B) Eigenfaces.
- C) Object detection model.

- D) Crowd analysis model.

✅**Answer:** A) Active Appearance Model (AAM).

26. **Question:** Which model is used to represent faces as linear combinations of shape and
texture basis vectors?

- A) Active Appearance Model (AAM).

- B) 3D Morphable Model (3DMM).

- C) Eigenfaces.

- D) YOLOv3.

✅**Answer:** B) 3D Morphable Model (3DMM).

27. **Question:** What is the main application of object detection in surveillance?

- A) Monitoring crowd behavior.

- B) Identifying unusual events.

- C) Identifying individual faces.

- D) Enhancing image resolution.

✅**Answer:** B) Identifying unusual events.

28. **Question:** Which approach involves identifying and verifying individuals based on their
unique facial features?

- A) Object detection.

- B) Object recognition.

- C) Facial recognition.

- D) Image segmentation.

✅**Answer:** C) Facial recognition.


29. **Question:** Which model is used for capturing the most significant variations in a set of face
images?

- A) 3D Morphable Model (3DMM).

- B) YOLOv3.

- C) Eigenfaces.

- D) Active Appearance Model (AAM).

✅**Answer:** C) Eigenfaces.

30. **Question:** What is the primary purpose of combining appearance and 3D shape information
in facial analysis?

- A) It simplifies the analysis process.

- B) It eliminates the need for object detection.

- C) It improves accuracy and robustness.

- D) It reduces computational complexity.

✅**Answer:** C) It improves accuracy and robustness.

==================================================================================

Unit-15 Sample MCQ

**Computer Vision and Object Tracking:**

1. What is the primary goal of object tracking in computer vision?

a) Detect objects in an image

b) Identify the class of objects

c) Continuously follow and predict the state of objects

d) Estimate the depth of objects

✅**Answer: c**

2. Which technique is used to associate an object's location across multiple frames?

a) Object detection
b) Object recognition

c) Object tracking

d) Object segmentation

✅**Answer: c**

3. Which technique is commonly used to predict an object's future position based on its past
motion?

a) Graph cut

b) Kalman filter

c) K-Means clustering

d) Principal Component Analysis (PCA)

✅**Answer: b**
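
For Question 3, a constant-velocity Kalman filter is the textbook predictor of an object's next position. The OpenCV sketch below uses illustrative noise covariances and fabricated measurements.

```python
import cv2
import numpy as np

# State = [x, y, vx, vy], measurement = [x, y]; constant-velocity motion model.
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], dtype=np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
kf.errorCovPost = np.eye(4, dtype=np.float32)

# Feed slightly noisy detections of an object moving diagonally one pixel/frame.
for t in range(10):
    predicted = kf.predict()                       # predicted state before the measurement
    z = np.array([[t + 0.3], [t - 0.2]], dtype=np.float32)
    kf.correct(z)                                  # fuse the measurement into the state
print(predicted[:2].ravel())                       # predicted (x, y) at the last step
```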

4. What challenge does occlusion present in object tracking?

a) High computational complexity

b) Difficulty in camera calibration

c) Difficulty in object detection

d) Objects being partially or fully blocked from view

**✅Answer: d**

5. Which of the following tracking methods is particularly robust to occlusion?

a) Template matching

b) Feature extraction

c) Deep learning-based tracking

d) Optical flow

**✅Answer: c**
6. Which technique can be used to combine information from multiple camera views?

a) Stereo vision

b) Image segmentation

c) Depth map estimation

d) Multi-camera fusion

**✅Answer: d**

**Multi-Camera Fusion and Applications:**

7. What is the purpose of camera calibration in multi-camera setups?

a) Enhance image resolution

b) Align camera viewpoints for accurate fusion

c) Increase camera field of view

d) Improve camera stability

**✅Answer: b**

8. In stereo vision, what information does the disparity map provide?

a) Color information of objects

b) Depth information of objects

c) Object detection scores

d) Object tracking trajectories

**✅Answer: b**
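
Question 8's disparity map can be computed with OpenCV's block matcher. The sketch below fabricates a rectified pair by horizontally shifting a textured image, so the recovered disparity should sit near the known shift; the window size and disparity range are illustrative.

```python
import cv2
import numpy as np

# Synthetic rectified pair: the right view is the left view shifted 8 px,
# so block matching should recover a roughly constant disparity of ~8.
rng = np.random.default_rng(0)
left = cv2.GaussianBlur((rng.random((240, 320)) * 255).astype(np.uint8), (5, 5), 0)
shift = 8
right = np.roll(left, -shift, axis=1)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed point -> pixels
print(np.median(disparity[disparity > 0]))

# Depth then follows from Z = f * B / d, with focal length f (in pixels) and
# baseline B known from calibration.
```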

9. Which algorithm helps establish correspondences between points in different camera views?

a) Depth-first search

b) Graph cut

c) Feature matching

d) Image segmentation
**✅Answer: c**

10. What is the primary advantage of combining views from multiple cameras in object tracking?

a) Reducing image resolution

b) Increasing computational complexity

c) Improving tracking accuracy and robustness

d) Enhancing color accuracy

**✅Answer: c**

11. Which computer vision application can benefit from multi-camera fusion to estimate object flow
patterns in crowded areas?

a) Autonomous vehicle navigation

b) Facial recognition

c) Gesture recognition

d) Pedestrian tracking in public spaces

**✅Answer: d**

12. What is the primary advantage of using multiple camera views for augmented reality
applications?

a) Decreasing system latency

b) Increasing battery consumption

c) Improving object occlusion handling

d) Enhancing color saturation

**✅Answer: c**

**Chamfer Matching and Segmentation:**

13. What is Chamfer matching used for in computer vision?


a) Object tracking

b) Depth map estimation

c) Image classification

d) Shape matching and recognition

**✅Answer: d**

14. Which technique measures the similarity between two sets of points by computing the sum of
distances between corresponding points?

a) Chamfer matching

b) Template matching

c) Image convolution

d) Histogram equalization

**✅Answer: a**

15. What is the main advantage of using Chamfer matching for shape recognition?

a) It requires extensive training data

b) It is insensitive to shape variations and occlusions

c) It is computationally efficient

d) It only works with binary images

**✅Answer: b**

16. Chamfer matching is primarily used for which type of image analysis?

a) Edge detection

b) Object segmentation

c) Image filtering

d) Color correction

**✅Answer: b**
17. Which technique is commonly used to calculate the Chamfer distance between points?

a) Euclidean distance

b) Hamming distance

c) Correlation coefficient

d) Histogram intersection

**✅Answer: a**
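
To make Questions 14 and 17 concrete, chamfer matching is usually implemented with a distance transform: the scene's edge map is turned into a map of distances to the nearest edge, and a template placement is scored by averaging those distances at its own edge points. Everything below (the shapes, their positions, the L2 metric) is an illustrative construction.

```python
import cv2
import numpy as np

# Hypothetical binary edge maps: a rectangle outline in the scene and a
# same-sized rectangle outline as the template.
scene = np.zeros((200, 200), dtype=np.uint8)
cv2.rectangle(scene, (60, 60), (140, 140), 255, 1)
template = np.zeros((81, 81), dtype=np.uint8)
cv2.rectangle(template, (0, 0), (80, 80), 255, 1)

# Distance from every pixel to the nearest scene edge (Euclidean metric).
dist = cv2.distanceTransform(255 - scene, cv2.DIST_L2, 3)

ys, xs = np.nonzero(template)   # template edge coordinates

def chamfer_score(top, left):
    """Mean distance from template edge points to the nearest scene edge."""
    return dist[ys + top, xs + left].mean()

# Lower is better: the placement aligned with the scene rectangle scores ~0.
print(chamfer_score(60, 60), chamfer_score(10, 10))
```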

**Object Tracking and Occlusion:**

18. What is the main challenge when tracking objects undergoing occlusion?

a) Loss of camera synchronization

b) False positive detections

c) Data association and re-identification

d) Inaccurate camera calibration

**✅Answer: c**

19. Which tracking technique is suitable for handling occlusion by maintaining multiple hypotheses
about an object's state?

a) Kalman filter

b) Particle filter

c) Template matching

d) Histogram backprojection

**✅Answer: b**

20. How does occlusion affect the performance of traditional tracking algorithms?

a) Occlusion improves tracking accuracy

b) Occlusion has no impact on tracking performance


c) Occlusion can lead to track loss or identity switches

d) Occlusion only affects object detection

**✅Answer: c**

21. What is the main advantage of using deep learning-based tracking algorithms for occluded
objects?

a) They are computationally faster

b) They are more memory-efficient

c) They can learn complex appearance changes caused by occlusion

d) They are less sensitive to lighting conditions

**✅Answer: c**

22. How does multi-camera tracking help address occlusion challenges?

a) By increasing the likelihood of occlusion occurrences

b) By reducing the accuracy of object tracking

c) By providing multiple viewpoints to infer object positions

d) By causing track switches during occlusion

**✅Answer: c**

**Combining Views and Multi-Camera Fusion:**

23. What is the primary goal of combining views from multiple cameras?

a) Enhancing image resolution

b) Reducing computational complexity

c) Improving tracking accuracy and robustness

d) Increasing camera field of view

**✅Answer: c**
24. Which technique can be used to align camera views in a multi-camera setup?

a) Image cropping

b) Camera calibration

c) Image compression

d) Histogram equalization

**✅Answer: b**

25. What information does a disparity map provide in stereo vision?

a) Object detection scores

b) Object depth information

c) Color information of objects

d) Object tracking trajectories

**✅Answer: b**

26. Which technique is used to fuse tracking information from different camera views to refine
tracking accuracy?

a) Histogram equalization

b) Graph cut

c) Bundle adjustment

d) Principal Component Analysis (PCA)

**✅Answer: c**

27. What is the primary advantage of using multi-camera fusion in augmented reality applications?

a) Reducing system latency

b) Enhancing color saturation

c) Improving object occlusion handling

d) Increasing battery consumption


**✅Answer: c**

28. How can multi-camera fusion help in surveillance scenarios?

a) By reducing the need for object detection

b) By increasing computational complexity

c) By providing multiple viewpoints for accurate tracking

d) By decreasing the importance of calibration

**✅Answer: c**

29. Which computer vision application can benefit from multi-camera fusion to estimate crowd
density in public spaces?

a) Autonomous vehicle navigation

b) Facial recognition

c) Gesture recognition

d) Crowd management and analysis

**✅Answer: d**

30. In multi-camera setups, what is the purpose of undistorting and rectifying images from each
camera?

a) To enhance image resolution

b) To increase computational complexity

c) To align camera viewpoints for accurate fusion

d) To improve color accuracy

**✅Answer: c**
