Lab 4 CSCI444/944
Camera Sensor
Introduction
In this laboratory exercise, we explore the application of camera sensors in
robotic vision systems. The camera sensor is an essential component in
robotic vision systems. The camera sensor serves as a essential component in
robotics, enabling machines to perceive their surroundings and make informed
decisions based on visual input. This lab focuses on the practical aspects of
configuring and utilizing a camera sensor to perform tasks that require visual
data interpretation, such as color detection and object tracking.
Objectives
The primary objectives of Lab 4 are:
1. Understanding Camera Sensor Output: Students will learn how the camera
reports visual data using the RGB and HSV color models.
2. Blob Detection: Utilizing the find_blobs() function, students will detect
and analyze specific color blobs within an image. This involves setting
appropriate HSV thresholds to filter the image for desired color ranges
and analyzing the properties of detected blobs.
3. Navigation and Object Tracking: By applying the camera sensor data,
students will develop algorithms to enable a robot to follow or interact
with detected objects based on their size and position within the camera's
field of view.
RGB: a color model that encodes each pixel as red, green, and blue intensities.
HSV: a color model that encodes each pixel as hue, saturation, and value (brightness).
hsv_thresholds = (Min Hue, Max Hue, Min Saturation, Max Saturation, Min Value, Max Value)
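For example, a plausible threshold tuple targeting greens might look like this (the values are illustrative, on a 0-360 hue scale and 0-100 saturation/value scales; tune them with a color picker as described below):

hsv_thresholds = (100, 140, 30, 100, 30, 100)  # hue 100-140, saturation 30-100, value 30-100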
Selecting pixels_threshold
You can adjust pixels_threshold depending on the size of the blobs you are
interested in: decrease it to pick up smaller features, or increase it to ignore
small patches of color and image noise.
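For instance, a hypothetical call might look like the following (here camera is assumed to be a camera-sensor object; check the exact find_blobs() signature against the GEARS wiki listed in the References):

blobs = camera.find_blobs(hsv_thresholds, pixels_threshold=50)  # ignore blobs smaller than 50 pixels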
Setting hsv_thresholds
To select appropriate hsv_thresholds, you need to know which colors you want
to detect. An HSV color picker can help you determine these values. Adjust:
• Hue: Depending on the color spectrum you are interested in (e.g., reds
might be around 0 to 30 degrees).
• Saturation: Increase the minimum saturation to avoid dull colors if
necessary.
• Value: Adjust depending on whether you need to detect colors in low or
high light conditions.
Hue ranges for common colors in HSV color models:
• Red: 0-15 and 345-360 degrees
• Orange: 16-45 degrees
• Yellow: 46-75 degrees
• Green: 76-165 degrees
• Cyan: 166-195 degrees
• Blue: 196-255 degrees
• Purple: 256-285 degrees
• Magenta: 286-344 degrees
Note: These ranges can vary slightly depending on the source and context in
which you're using them, but they offer a good starting point for identifying
colors by their hue values.
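If it helps to have these ranges available in code, the table above translates directly into a Python dictionary (the split red range is stored as two intervals):

HUE_RANGES = {
    'red': [(0, 15), (345, 360)],
    'orange': [(16, 45)],
    'yellow': [(46, 75)],
    'green': [(76, 165)],
    'cyan': [(166, 195)],
    'blue': [(196, 255)],
    'purple': [(256, 285)],
    'magenta': [(286, 344)],
}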
Why Use HSV for Parameters?
HSV is particularly suited for color-based blob detection for several reasons:
1. Color Distinction: Unlike RGB, HSV separates color information (hue)
from lighting (value/lightness). This makes it easier to detect colors in
different lighting conditions without being misled by shadows, brightness
variations, or highlights.
2. Intuitive Tuning: Setting thresholds in HSV is more intuitive for human
perception of colors. Hue corresponds to the color type, saturation
indicates the richness or purity of the color, and value describes
brightness. This separation allows for more precise control over which
colors to detect.
3. Robustness: HSV is generally more robust against lighting changes,
which is crucial in real-world applications where lighting cannot be
controlled. By filtering based on hue and saturation, you can often ignore
changes in lighting that would otherwise affect the RGB values
significantly.
4. Efficiency: Since HSV allows for effective segmentation of colors even in
varied lighting, it can simplify the processing steps following color
filtering, making the algorithm more efficient and reliable.
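You can observe point 1 directly with Python's standard colorsys module, which converts RGB (floats in 0-1) to HSV: halving a green's brightness changes its value but leaves its hue untouched, which is exactly why hue-based thresholds survive lighting changes.

import colorsys

bright_green = colorsys.rgb_to_hsv(0.0, 0.8, 0.0)
dark_green = colorsys.rgb_to_hsv(0.0, 0.4, 0.0)

# colorsys returns hue in 0-1; multiply by 360 for degrees.
print(bright_green)  # (0.333..., 1.0, 0.8) -> hue 120 degrees, value 0.8
print(dark_green)    # (0.333..., 1.0, 0.4) -> same hue, lower value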
The return value of find_blobs():
The return value is a list of matches. Each match is a list containing
[pixel_count, centroid_x_position, centroid_y_position, bounding_box_x,
bounding_box_y, bounding_box_width, bounding_box_height].
The list of matches is always sorted by pixel_count, so that the largest blob
(by pixel count) is always at the start of the list.
1. pixel_count: The total count of pixels that make up the blob. It is
useful for understanding the size of the blob in terms of pixel coverage,
which can be an indicator of the blob's actual size or its distance from the
camera if other conditions are known.
2. centroid_x_position: The x-coordinate of the centroid of the blob. The
centroid is the average position of all the pixels in the blob along the
x-axis, representing the blob's central point horizontally.
3. centroid_y_position: The y-coordinate of the centroid of the blob. Similar
to the x centroid, this is the average position of all the pixels in the blob
along the y-axis, representing the central point vertically.
4. bounding_box_x: The x-coordinate of the top-left corner of the smallest
rectangle that can entirely contain the blob. This is part of the bounding
box, and it indicates where the blob starts horizontally.
5. bounding_box_y: The y-coordinate of the top-left corner of the bounding
box. This tells you where the blob starts vertically.
6. bounding_box_width: The horizontal dimension of the bounding box.
This is how wide the blob is, which helps in visualizing or processing the
blob based on its horizontal spread.
7. bounding_box_height: The vertical dimension of the bounding box. This
measures how tall the blob is, which, like width, helps in understanding
the size and shape of the blob.
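Because each match is a plain list, its fields can be unpacked positionally, for example:

# blobs[0] is the largest match, per the sorting rule above
pixel_count, centroid_x, centroid_y, bb_x, bb_y, bb_w, bb_h = blobs[0]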
With models like the LEGO EV3, the robot's movement is controlled through a
steering drive (steering_drive.on(steering, speed) in this lab's code, as
sketched below), which takes a steering value and a speed. The steering value
dictates the curvature of the robot's path:
• 0 means the robot will move straight ahead.
• Positive values make the robot steer to the right.
• Negative values make the robot steer to the left.
• The further the value is from zero, the sharper the turn.
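As a minimal sketch using the ev3dev2-style MoveSteering API that GEARS emulates (the output ports below are assumptions; match them to your robot's configuration):

from ev3dev2.motor import MoveSteering, OUTPUT_B, OUTPUT_C

steering_drive = MoveSteering(OUTPUT_B, OUTPUT_C)  # assumed ports

steering_drive.on(steering=0, speed=30)    # straight ahead
steering_drive.on(steering=50, speed=30)   # curve to the right
steering_drive.on(steering=-100, speed=30) # very tight left turn
steering_drive.off()                       # stop both motors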
Why Use clamp_value with steering_drive.on?
1. Maintain Valid Input Range:
o The steering method expects a steering value between -100 and 100,
corresponding to turning circles from full left to full right, with -100
and 100 representing in-place spins or very tight turns. The clamp_value
function ensures that the values passed remain within this range,
preventing the method from throwing errors or behaving unpredictably.
2. Avoid Execution Errors:
o Providing a steering value outside the -100 to 100 range raises a
ValueError: the method strictly requires values within this range to
operate correctly. Clamping values avoids such runtime exceptions, which
would otherwise halt program execution.
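The lab template provides clamp_value, but if you need to write it yourself, a minimal sketch is:

def clamp_value(value, min_value=-100, max_value=100):
    # Constrain value to the inclusive range [min_value, max_value].
    return max(min_value, min(value, max_value))

For example, clamp_value(250) returns 100 and clamp_value(-180) returns -100, so any turn rate passed on to the drive method stays legal.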
In the tracking code, the camera sensor is activated to begin the image capture
process. The sensor captures a snapshot of the environment, translating the
visual stimuli into a digital image format. The captured image is immediately
stored in the robot's memory, making it accessible for subsequent processing
tasks such as image analysis.
The capture_image() function serves as an essential precursor to the blob
detection process. The robot relies on the most recent image to analyze and
detect color blobs that match predefined criteria (color thresholds). Without
capturing a new image at each cycle of the loop, the robot would be working
with outdated visual data, which could lead to ineffective or incorrect blob
tracking.
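A minimal sketch of this capture-then-detect cycle (whether capture_image() and find_blobs() are free functions or methods on a camera object depends on your template; see the GEARS wiki in the References):

while True:
    camera.capture_image()  # refresh the stored snapshot each cycle
    blobs = camera.find_blobs(hsv_thresholds, pixels_threshold)
    # ... analyze blobs and steer the robot (see below) ...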
Setting the hue range from 118 to 123 is quite narrow, targeting a very specific
shade of green. This tight range helps avoid false positives from colors that are
close but not exactly the desired green.
The saturation range from 10 to 80 allows for detection of greens that are not overly pale
or muted (which would have lower saturation) and not extremely vivid or
artificial-looking (which would have very high saturation). This range helps
ensure the green detected is neither too washed out nor unnaturally bright,
making it suitable for a variety of natural and artificial green objects.
Setting the value (brightness) range from 10 to 90 ensures that both very dark and very
bright versions of the color are excluded. This helps in detecting greens that are
clearly visible under normal lighting conditions, avoiding issues with shadows
or highlights distorting the perceived color.
By setting pixels_threshold to 10 (a minimum blob size of 10 pixels), the
function avoids recognizing small, irrelevant artifacts as objects of
interest. This is particularly
useful in environments where there might be small patches of green that are not
relevant to the robot's task.
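Putting these choices together, the parameters discussed above correspond to the following values (a reconstruction from the description; confirm the tuple name and call form against your lab template):

GREEN_THRESHOLDS = (118, 123, 10, 80, 10, 90)  # hue 118-123, saturation 10-80, value 10-90
blobs = camera.find_blobs(GREEN_THRESHOLDS, pixels_threshold=10)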
Condition Check: if len(blobs) > 0:
This line checks if any blobs have been detected in the image. The len(blobs)
function returns the number of blobs found. If this number is greater than zero,
it means that there are objects in the image that match the specified color and
size criteria.
Get the Largest Blob:
largest_blob = blobs[0]
This line picks the first blob from the list returned by find_blobs(), where blobs
are sorted by their apparent size in the camera view, not necessarily their actual
size in reality. The blob that appears largest on the screen is listed first, which
often indicates that it is closest to the camera.
Extract Centroid Coordinates:
centroid_x = largest_blob[1]
centroid_y = largest_blob[2]
The centroid of a blob is the geometric center of the object, calculated as the
average position of all the pixels in the blob. Here, largest_blob[1] and
largest_blob[2] retrieve the x and y coordinates of the centroid. Knowing the
centroid's coordinates is crucial for determining how the robot should move
relative to the object: if the goal is to approach, follow, or interact with
the blob, these coordinates guide the steering decisions.
Retrieve Blob Height:
blob_height = largest_blob[6]
This is bounding_box_height, the vertical dimension of the blob in pixels. The height of the blob
helps determine how far the robot is from the object. If the robot is getting too
close to the object, it slows down. This way, the robot can avoid bumping into
the object by adjusting its speed based on how close it is.
These lines calculate the horizontal (deviation_x) and vertical (deviation_y)
deviations of the detected blob's centroid from the predefined target position:
the center of the camera's field of view. The target positions (TARGET_X and
TARGET_Y) are set to direct the robot to focus on the center of the image,
making it easier to follow or interact with objects directly in front of it.
How It Works: The centroid_x and centroid_y are the coordinates of the blob's
centroid, representing the center point of the blob as detected by the camera. By
subtracting the target center coordinates from these, the result is a measure of
how far and in which direction (left/right and up/down) the blob is from the
center of the view. A positive deviation indicates the blob is to the right (for x)
or below (for y) the center, and a negative deviation indicates it is to the left or
above.
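In code, assuming TARGET_X and TARGET_Y hold the image-center coordinates as described above:

deviation_x = centroid_x - TARGET_X  # positive: blob is right of center
deviation_y = centroid_y - TARGET_Y  # positive: blob is below center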
This line adjusts the robot's base speed inversely with the blob's height, which
serves as a proxy for distance. As the blob appears larger (suggesting it is
closer), the robot reduces its speed.
How It Works:
The base speed starts at a constant value (30 in this case) and decreases as the
blob's height increases. The factor 0.01 scales the blob's height to a suitable
value for speed adjustment, ensuring that the robot slows down as it approaches
the object to avoid collisions.
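One plausible reading of this rule as code (the constants 30 and 0.01 come from the description above; the exact formula in the lab template may differ):

base_speed = 30 - 0.01 * blob_height  # larger blob (closer object) -> lower speed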
This line applies the calculated turn rate and base speed to the robot's steering
drive system, controlling the robot’s movement.
How It Works:
• Clamping the Turn Rate: Before applying the turn rate, it is passed
through the clamp_value function to ensure that the value remains within
the acceptable range for the robot's motor system (-100 to 100). This
prevents errors and potential mechanical strain or erratic behavior.
• Executing Movement: The steering_drive.on method activates the
robot's motors to move according to the specified turn rate and base
speed. This combined control allows the robot to simultaneously adjust
its direction and speed based on the visual analysis of the blob's position
and size.
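Putting all of the pieces together, here is a minimal end-to-end sketch of the tracking loop described in this section. It assumes the helper names used in this handout (capture_image, find_blobs, clamp_value, steering_drive) plus illustrative constants (the ports, TARGET_X, and the turn-rate gain); the exact API and tuning values in your lab template may differ.

from ev3dev2.motor import MoveSteering, OUTPUT_B, OUTPUT_C

# 'camera' is assumed to be a camera-sensor object created per the
# lab template / GEARS wiki; its construction is omitted here.
steering_drive = MoveSteering(OUTPUT_B, OUTPUT_C)  # assumed ports

GREEN_THRESHOLDS = (118, 123, 10, 80, 10, 90)
PIXELS_THRESHOLD = 10
TARGET_X = 160   # assumed image-center x for a 320-pixel-wide image
TURN_GAIN = 0.5  # illustrative proportional gain

def clamp_value(value, min_value=-100, max_value=100):
    return max(min_value, min(value, max_value))

while True:
    camera.capture_image()  # refresh the snapshot each cycle
    blobs = camera.find_blobs(GREEN_THRESHOLDS, PIXELS_THRESHOLD)
    if len(blobs) > 0:
        largest_blob = blobs[0]        # list is sorted by pixel_count
        centroid_x = largest_blob[1]
        blob_height = largest_blob[6]  # bounding_box_height
        deviation_x = centroid_x - TARGET_X
        turn_rate = clamp_value(TURN_GAIN * deviation_x)
        base_speed = 30 - 0.01 * blob_height  # slow down when closer
        steering_drive.on(turn_rate, base_speed)
    else:
        steering_drive.off()           # no blob detected: stop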
Exercise 2: Recognizing a sphere object using the camera sensor
References
https://github.com/QuirkyCort/gears/wiki/Camera-Sensor
https://www.w3schools.com/css/css_colors_hsl.asp
https://github.com/QuirkyCort/gears/wiki/Sensors-and-Actuators