ECE280F24 Lab5
Laboratory 5:
Image Processing 1
Contents
1 Introduction
2 Objectives
3 Background
3.1 Black & White Images
3.2 Grayscale
3.3 Color
3.4 1D Convolution
3.5 2D Discrete Convolution
3.5.1 2D Convolution Graphical Example (complete with caveats)
3.5.2 Basic Blurring
3.5.3 A Better Convolution Option
3.5.4 Basic Edge Detection
3.5.5 2D Convolution in Color
4 Pre-Laboratory Assignment
5 Instructions
5.1 Exercise 1: Color Blending
5.2 Exercise 2: Random Colors
5.3 Exercise 3: 1D Moving Average
5.4 Exercise 4: Derivative Approximations
5.5 Exercise 5: Boxes and Rectangles and Voids (Oh My!)
5.6 Exercise 6: Same Song, Different Verse, These Images Aren't Quite As Big As At First
5.7 Exercise 7: Fun with Convolution
5.8 Exercise 8: Putting it All Together
6 Lab Report
6.1 Discussion (25 points)
6.2 Graphics (67 points)
6.3 Code (8 points)
7 Revision History
1 Introduction
This lab is centered on the topic of Digital Image Processing. In this lab, you will extend
your programming abilities in MATLAB and your knowledge of convolution to explore simple
digital image processing techniques. This handout and supporting web pages will teach you
about the fundamentals of images and image formats as well as specific MATLAB commands
used to load, manipulate, and save images. They will also teach you how to extend one-dimensional convolution to two dimensions. There is a Pundit page that accompanies this document and contains the example code and images. See https://pundit.pratt.duke.edu/wiki/ECE_280/Imaging_Lab_1
2 Objectives
The objectives of this project are to:
• Become familiar with different types of digital images (Black/White, Grayscale, and
Color) as well as the conversions between them,
• Become familiar with using MATLAB’s Image Processing Toolbox and the toolbox’s
built-in functions,
• Load, create, manipulate, and save images, and
• Understand and use MATLAB’s built-in 2-D convolution function to spatially filter
images and perform edge detection.
3 Background
The formal, encyclopedic definition of a digital image reads as follows:
“A digital image is a representation of a real image as a set of numbers that
can be stored and handled by a digital computer. In order to translate the image
into numbers, it is divided into small areas called pixels (picture elements). For
each pixel, the imaging device records a number, or a small set of numbers,
that describe some property of this pixel, such as its brightness (the intensity
of the light) or its color. The numbers are arranged in an array of rows and
columns that correspond to the vertical and horizontal positions of the pixels in
the image.”1
During this lab, we will look at three main types of image: black & white, grayscale, and
color. In each case, the image will be stored as one or more 2D arrays of values that map
to what each pixel looks like. For color images, there will be three 2D arrays storing all the
information.
3.1 Black & White Images
The first example program (available on the Pundit page) creates a 5×5 matrix of 0 and 1 values, then opens a figure and clears it.
Next, it uses MATLAB’s imagesc program to view the matrix as a scaled image, meaning
MATLAB will map the minimum value of the array to the first color and the highest value
of the array to the last color. MATLAB will then change the colormap to grayscale which
makes the first color black and the last color white. Finally, MATLAB adds a colorbar so
that you can relate the colors to numerical values.
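The scaled mapping that imagesc performs can be sketched in Python/NumPy (the 5×5 pattern below is made up for illustration and is not the lab's actual matrix):

```python
import numpy as np

# Hypothetical 5x5 black & white pattern (the real lab matrix spells "Hi"/"if")
bw = np.array([[0, 1, 0, 1, 0],
               [0, 1, 0, 1, 0],
               [0, 1, 1, 1, 0],
               [0, 1, 0, 1, 0],
               [0, 1, 0, 1, 0]])

# imagesc-style scaling: the minimum value maps to the first colormap entry
# (black with a gray colormap) and the maximum value maps to the last (white)
scaled = (bw - bw.min()) / (bw.max() - bw.min())

print(scaled.min(), scaled.max())  # 0.0 1.0
```

With only two distinct values this scaling is trivial, but the same rule is what makes imagesc useful for matrices whose values are nowhere near the 0 to 255 range image expects.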
Note that, by default for integers, image assumes values between 0 and 255, so using the image command here would make an image that is basically all black. For this image, depending on which color you focus on, you will either see "if" or "Hi" in the figure. With enough pixels, you can certainly create more intricate graphical representations - but if you add different shades of gray, you can do even more.

¹ "Digital Images," Computer Sciences, Encyclopedia.com (July 10, 2020). https://www.encyclopedia.com/computing/news-wires-white-papers-and-books/digital-images
3.2 Grayscale
A grayscale image differs from a black and white image in that each pixel is allowed to have
one of several values between pure black and pure white. Typically, grayscale images store
an 8-bit number for each pixel, allowing for 2^8 = 256 different shades of gray for each.
Running the grayscale example, you will see an image that goes from black to white in 256 steps from left to right. Grayscale images carry 8 times as much information per pixel as black and white images and typically take 8 times as much memory to store.
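The black-to-white ramp described above can be sketched in Python/NumPy (the 100-row height is an arbitrary choice for illustration):

```python
import numpy as np

# One row of all 256 gray levels, repeated down 100 rows:
# column 0 is pure black (0) and column 255 is pure white (255)
ramp = np.tile(np.arange(256, dtype=np.uint8), (100, 1))

print(ramp.shape)  # (100, 256)
```

Each pixel is a single 8-bit value, which is where the "8 times as much memory as black and white" figure comes from.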
Once again note the use of imagesc instead of image to automatically map the values in
the matrix to the 256 values in the given map. In this case, the minimum value of -1 gets
mapped to the first color (black) and the maximum value of +1 gets mapped to the last
color (white). Also, the image dimensions are equalized to make it look square regardless of
the shape of the figure window.
3.3 Color
A color image allows you to adjust not only the dark or light level of a pixel but also the hue
(“which color”) and saturation (“how intense does the color appear / is it washed out?”).
MATLAB stores color images by storing the red, green, and blue levels for each pixel. Each
of these values is stored in its own 8-bit number, meaning a color image takes up 3 times as much space as a grayscale image and 24 times as much space as a black and white image!
While there are other methods of representing colors of a pixel, we are going to stick with
the RGB model in this lab. Generally, MATLAB expects either an integer between 0 and
255 or a floating point number between 0 and 1 for each component.
The code produces a three layer matrix with the number of rows and columns determined
by the parameters rad and del. The rad value is used to generate three circles of equal
radius (2 × rad) centered on points evenly spaced around a circle of radius rad. The del
value provides some space between the circles and the edges of the image.
The first layer represents red, the second layer represents green, and the third represents blue. With this code, the red layer is 1 for all the pixels within a distance of 200 from the location (100, 0). The green layer is 1 for all pixels within a distance of 200 from the location (50, -86.6). The blue layer is 1 for all pixels within a distance of 200 from the location (-50, -86.6). These three layers describe three overlapping circles, and
when you make the plot, you can see eight different regions:
• (0,0,0) not in any of the circles (black)
• (1, 0, 0) for the red circle
• (0, 1, 0) for the green circle
• (0, 0, 1) for the blue circle
• (1, 1, 0) for the intersection of the red and green circle (yellow)
• (1, 0, 1) for the intersection of the red and blue circle (magenta)
• (0, 1, 1) for the intersection of the green and blue circle (cyan)
• (1, 1, 1) for the intersection of all three circles (white)
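The three-layer construction can be mirrored in Python/NumPy as a sketch; the circle centers and radius come from the text above, while the pixel grid size and the use of Euclidean distance are assumptions (the lab code derives its grid from rad and del):

```python
import numpy as np

rad = 100
# Circle centers from the text: red, green, blue
centers = [(100.0, 0.0), (50.0, -86.6), (-50.0, -86.6)]

# Assumed pixel grid with 1-unit spacing
coords = np.linspace(-350, 350, 701)
X, Y = np.meshgrid(coords, coords)

img = np.zeros(X.shape + (3,))
for k, (xc, yc) in enumerate(centers):
    # Layer k is 1 inside a circle of radius 2*rad centered on (xc, yc)
    img[..., k] = (np.hypot(X - xc, Y - yc) <= 2 * rad).astype(float)

# The origin lies inside all three circles, so that pixel is white (1, 1, 1)
print(img[350, 350, :])
```

Sampling any pixel of img gives one of the eight (r, g, b) triples listed above.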
Pre-lab Deliverable (1/6): Which lines of code in Example 4 would you change to most
directly change the intensity of each color?
1. Line 3
2. Lines 5-6
3. Lines 7-11
4. Lines 12-14
This example shows the extremes where the components are either fully on or fully off. Since
there are three dimensions (for the red, green, and blue level), it is difficult to portray the
full range of colors MATLAB can show. In fact, since there are just over 16 million different possible colors, it is impossible for most computer screens to display all the colors at once given that most screen resolutions top out at or below 8 megapixels.
For this code, the red component increases from left to right and the green component in-
creases from top to bottom. The very middle of the image will be gray since each component
is at 50%.
Pre-lab Deliverable (2/6): What would you change line 4 of the Example 5 code to
so that the red component is 0 along the line x = y and gets brighter as we move further
away?
1. palette(:,:,1) = abs(x+y)
2. palette(:,:,1) = abs(x-y)
3. palette(:,:,1) = 1
4. palette(:,:,1) = abs(x/y)
3.4 1D Convolution
You have already learned about the process of convolution for both discrete and continuous
signals. Digital images very much fall into the discrete category, but they are a bit more
complicated than what you have seen so far in that they are two dimensional, with the row
and column index providing the two independent directions. Furthermore, with images, we
are less concerned about the concept of a system response and more interested in how we
might use a weighted average of pixel values to produce a new image. 2D Convolution will
allow us to perform that task.
Let us first look at the issue from a 1D perspective. Imagine you have some discrete signal
x[n] and you want to find a signal that is based on the difference between the value of the
signal at n and the value of the signal at n − 1. You are therefore interested in creating a
signal y[n] where:
y[n] = x[n] − x[n − 1]
The impulse response of this signal is:
h[n] = δ[n] − δ[n − 1]
In MATLAB, you can perform this convolution with the conv command, where the argu-
ments will be the discrete values of your original signal x[n] and the discrete values of your
impulse response h[n].
EXAMPLE 6: 1D Convolution
Imagine that we have some set of x[n] values:
x[n] = [1, 2, 4, 8, 7, 5, 1]
and we want to find the differences between those values. We can define h from its first
non-zero value to its last non-zero value:
h[n] = [1, −1]
and then we can ask MATLAB to do the convolution:
x = [1, 2, 4, 8, 7, 5, 1]
h = [1, -1]
y = conv(x, h)
Among other things, notice that y is longer than either x or h! What happened here is the
following (using parenthetical arguments to map to MATLAB):
y(n) = ∑_{m=1}^{7} x(m) · h(n − m + 1)
which is the discrete version of convolution (with the +1 in the h argument to account for
MATLAB being 1-indexed and also assuming that x or h at undefined index values will be
considered 0). Here is a step-by-step examination:
• MATLAB flipped h to create [-1, 1]
• MATLAB aligned the far right of the flipped version of h with the far left of x, multi-
plied overlapping terms (x(1) · h(1), meaning the 1 from x and the 1 from the flipped
h) and stored the result in the first element of y; which is to say, if n = 1 to calculate
the first value of y,
m               0    1    2    3    4    5    6    7    8
x(m)            0    1    2    4    8    7    5    1    0
h(n − m + 1)   -1    1
x(m)·h(n−m+1)   0    1    0    0    0    0    0    0    0
(columns m = 0 and m = 8 are included, with x taken as 0 there)
and y(1) will be the sum of that last row, or 1.
• MATLAB slid the flipped version of h one space to the right, multiplied the overlapping
terms (so x(2) · h(1) and x(1) · h(2)) and stored the result in the second element of y;
if n = 2:
m               0    1    2    3    4    5    6    7    8
x(m)            0    1    2    4    8    7    5    1    0
h(n − m + 1)        -1    1
x(m)·h(n−m+1)   0   -1    2    0    0    0    0    0    0
and y(2) will be the sum of that last row, or 1.
• MATLAB slid the flipped version of h one more space to the right, multiplied the
overlapping terms (so x(3) · h(1) and x(2) · h(2)) and stored the result in the third
element of y; if n = 3:
m               0    1    2    3    4    5    6    7    8
x(m)            0    1    2    4    8    7    5    1    0
h(n − m + 1)             -1    1
x(m)·h(n−m+1)   0    0   -2    4    0    0    0    0    0
and y(3) will be the sum of that last row, or 2.
• MATLAB slid the flipped version of h one more space to the right, multiplied the
overlapping terms (so x(4) · h(1) and x(3) · h(2)) and stored the result in the fourth
element of y; if n = 4:
m               0    1    2    3    4    5    6    7    8
x(m)            0    1    2    4    8    7    5    1    0
h(n − m + 1)                  -1    1
x(m)·h(n−m+1)   0    0    0   -4    8    0    0    0    0
and y(4) will be the sum of that last row, or 4.
• MATLAB repeated this process until x(7) overlaps with h(2), which is to say, if n = 8:
m               0    1    2    3    4    5    6    7    8
x(m)            0    1    2    4    8    7    5    1    0
h(n − m + 1)                                      -1    1
x(m)·h(n−m+1)   0    0    0    0    0    0    0   -1    0
and y(8) will be the sum of that last row, or -1.
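The full eight-value result can be checked with NumPy's equivalent of MATLAB's conv (a sketch in Python):

```python
import numpy as np

x = np.array([1, 2, 4, 8, 7, 5, 1])
h = np.array([1, -1])

# Full convolution: the output length is len(x) + len(h) - 1 = 8
y = np.convolve(x, h)
print(y)  # y = [1, 1, 2, 4, -1, -2, -4, -1]
```

The first and last values come from partial overlaps with the assumed zeros outside x, matching the step-by-step tables above.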
Pre-lab Deliverable (3/6): Which of the images in Fig. 1 accurately visualizes the setup of
the convolution described above, after h has been flipped but before it has been aligned?
[Figure 1: four candidate setups, labeled (a) through (d).]
Assuming h has Nh terms, and further assuming that Nh is smaller than the number of terms in x (Nx), using the 'same' option means that MATLAB will not return results at the extreme edges of the convolution. The way MATLAB trims the convolution depends on how large h is, as a total of Nh − 1 terms need to be removed.
If Nh is even, Nh/2 terms are removed from the beginning and (Nh − 2)/2 are removed from the end. If Nh is odd, (Nh − 1)/2 terms are removed from both the beginning and the end.
This process tries to make y(1) the result of “centering” the flipped version of h on the first
value in x and similarly makes the last value y(Nx ) the result of centering the flipped version
of h on the last value of x. If h has an even number of values, it cannot be perfectly centered
so the flipped value is shifted one space to the right. This means that the convolution being
performed is:
y(n) = ∑_{m=1}^{Nx} x(m) · h(n − m + ⌈Nh/2⌉)
with n = [1, Nx ] and where the ⌈ ⌉ operator represents rounding up. Fortunately you do not
have to worry about that calculation - MATLAB will do that!
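The trimming rule can be demonstrated in Python/NumPy as a sketch. Note that NumPy's own mode='same' centers even-length kernels differently from MATLAB, so the trim described above is applied to the full result by hand:

```python
import numpy as np

x = np.array([1, 2, 4, 8, 7, 5, 1])
h = np.array([1, -1])

full = np.convolve(x, h)  # length Nx + Nh - 1 = 8
Nh = len(h)

# MATLAB-style 'same' trim per the rule above: for even Nh, remove Nh/2
# values from the start; this slice keeps the Nx central values either way
same = full[Nh // 2 : Nh // 2 + len(x)]
print(same)  # 7 values, same length as x
```

Here the leading 1 of the full result is dropped, so same begins with the first true difference x(2) − x(1).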
If you are performing 2D convolution with the 'same' option, y(1, 1) will be the result of
centering the flipped version of h on x(1, 1), multiplying the overlapping values, and adding
those products together. This is what the web page calls y[0, 0]. The gray square represents
the flipped and shifted h matrix. To get y(1, 2) you would move the flipped and shifted h
matrix one space to the right (what the web page would call y[1, 0] since it reverses rows
and columns from our perspective). If you look at the remaining examples, you get a new
entry in y by centering the flipped h on a new location in x.
Pre-lab Deliverable (4/6): Which of the following lines of code could be substituted for
line 2 in Example 8 to create a 2 × 50 blurring kernel?
1. h = ones(50, 2)/100;
2. h = ones(2, 50)/2^2;
3. h = ones(2, 50)/100;
Using the 'valid' option is different from using the 'same' option in that there is one fewer value. The seventh value with the 'same' option assumed there was an extra 0 on the end of x. That can be convenient for making comparative plots, but can also lead to misleading results when it comes to blurring and other operations with images. As a result, for image processing, we will stick with the 'valid' option for conv2. We may end up with a smaller image, but the
information presented will not have to make any assumptions about “missing” x values and
generally the h matrix is so small relative to the x image that not much is cropped. We
will, however, generally want to use square h matrices so that the same amount is cropped
vertically and horizontally. Soon you will see that sometimes we will want to combine two
or more convolved images, and those must all be the same size.
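What conv2(x, h, 'valid') computes can be sketched in Python/NumPy with an explicit loop (the 10×10 constant image and 3×3 averaging kernel are illustrative):

```python
import numpy as np

def conv2_valid(x, h):
    """2-D convolution keeping only positions where h fully overlaps x,
    like MATLAB's conv2(x, h, 'valid')."""
    hf = h[::-1, ::-1]  # flip the kernel in both directions
    hr, hc = hf.shape
    out = np.zeros((x.shape[0] - hr + 1, x.shape[1] - hc + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + hr, j:j + hc] * hf)
    return out

# A constant 10x10 image blurred by a 3x3 averaging kernel stays constant,
# but the 'valid' result shrinks to 8x8 - one row/column lost per side
x = 5.0 * np.ones((10, 10))
h = np.ones((3, 3)) / 9
y = conv2_valid(x, h)
print(y.shape)  # (8, 8)
```

Because no zero-padding is assumed, every output pixel is a genuine weighted average of real image pixels, which is exactly the property the text argues for.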
Pre-lab Deliverable (5/6): Which of the following options, based on Example 10, would
most effectively detect horizontal edges?
1. [2, 2; −2, −2]
2. [2, −4; 4, −2]
3. [2, 2; 2, 2]
4. [−2, 2; −2, 2]
Note the use of image instead of imagesc here. We do not want the individual components
to have their levels “stretched out” to the full range of the colorbar; instead, we want the
value between 0 and 255 to be represented by the gray or color level between 0 and 255. The
cmap matrix is changed each time to provide color codes for each of the 256 possible values
within each layer. For red, only the first component will be nonzero; for green the second;
for blue the third.
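A colormap of the kind described, where only one of the three columns is nonzero, might be built as follows (a Python/NumPy sketch of the red case; the green and blue maps just move the ramp to the second or third column):

```python
import numpy as np

# 256 colormap rows of (r, g, b); for the red layer only the first column ramps
levels = np.linspace(0, 1, 256)
cmap_red = np.column_stack([levels, np.zeros(256), np.zeros(256)])

# Row 0 is black (0, 0, 0) and row 255 is pure red (1, 0, 0)
print(cmap_red[0], cmap_red[255])
```

With image (not imagesc), a pixel value of k indexes row k of this map directly, so 0 through 255 render as 256 shades of the chosen color.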
Pre-lab Deliverable (6/6): How would you change line 5 of the code in Example 12
to make it into a Sobel operator that detects when a color changes as you go from top to
bottom? You may find the information at this Wikipedia page helpful: https://en.wikipedia.
org/wiki/Sobel_operator
1. h = [-1 0 1; -2 0 2; -1 0 1]
2. h = [1 2 1; 0 0 0; -1 -2 -1]
3. h = [-1 -2 -1; 0 0 0; 1 2 1]
4. h = [1 1 1; 0 0 0; -1 -1 -1]
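As a sketch of how the two Sobel forms from the Wikipedia page behave, here is a Python/NumPy example on a made-up 6×6 image with a single top-to-bottom brightness change:

```python
import numpy as np

def conv2_valid(x, h):
    """Minimal 2-D 'valid' convolution (flip kernel, slide, sum)."""
    hf = h[::-1, ::-1]
    hr, hc = hf.shape
    out = np.zeros((x.shape[0] - hr + 1, x.shape[1] - hc + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + hr, j:j + hc] * hf)
    return out

# Synthetic image: dark on top, bright on bottom (a horizontal edge)
img = np.zeros((6, 6))
img[3:, :] = 1.0

# Sobel kernels from the Wikipedia page
h_vert_change = np.array([[1., 2., 1.], [0., 0., 0.], [-1., -2., -1.]])
h_horz_change = np.array([[1., 0., -1.], [2., 0., -2.], [1., 0., -1.]])

gy = conv2_valid(img, h_vert_change)  # responds where brightness changes top-to-bottom
gx = conv2_valid(img, h_horz_change)  # no left-right change here, so all zeros

print(gy.max(), np.abs(gx).max())  # 4.0 0.0
```

The kernel whose rows differ detects changes between rows (top-to-bottom), while the kernel whose columns differ detects changes between columns (left-to-right).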
4 Pre-Laboratory Assignment
For your prelab, provide answers on Gradescope to each of the (6) Pre-lab Deliverables based
on the Background information provided above.
The questions are listed below for your convenience:
1. Which lines of code in Example 4 would you change to most directly change the
intensity of each color?
(a) Line 3
(b) Lines 5-6
(c) Lines 7-11
(d) Lines 12-14
2. What would you change line 4 of the Example 5 code to so that the red component is
0 along the line x = y and gets brighter as we move further away?
(a) palette(:,:,1) = abs(x+y)
(b) palette(:,:,1) = abs(x-y)
(c) palette(:,:,1) = 1
(d) palette(:,:,1) = abs(x/y)
3. Which of the images in Fig. 1 accurately visualizes the setup of the convolution de-
scribed in Example 6, after h has been flipped but before it has been aligned?
4. Which of the following lines of code could be substituted for line 2 in Example 8 to
create a 2 × 50 blurring kernel?
(a) h = ones(50, 2)/100;
(b) h = ones(2, 50)/2^2;
(c) h = ones(2, 50)/100;
5. Which of the following options, based on Example 10, would most effectively detect
horizontal edges?
(a) [2, 2; −2, −2]
(b) [2, −4; 4, −2]
(c) [2, 2; 2, 2]
(d) [−2, 2; −2, 2]
6. How would you change line 5 of the code in Example 12 to make it into a Sobel
operator that detects when a color changes as you go from top to bottom? You may
find the information at this Wikipedia page helpful: https://en.wikipedia.org/wiki/
Sobel_operator
(a) h = [-1 0 1; -2 0 2; -1 0 1]
(b) h = [1 2 1; 0 0 0; -1 -2 -1]
(c) h = [-1 -2 -1; 0 0 0; 1 2 1]
(d) h = [1 1 1; 0 0 0; -1 -1 -1]
5 Instructions
5.1 Exercise 1: Color Blending
1. Create a new MATLAB script and title it IP1_EX1.m. Copy the code currently specified
in Example 4 into the script.
2. Adjust the code that makes the Venn diagram of colors such that the amount of color
for each component is determined by the formula:
Component_k(x, y) = 1 / (1 + p · √((x − x_c,k)² + (y − y_c,k)²))
In this formula:
• (x, y) are the x and y values for a particular pixel.
• k is an index (1,2, or 3 for red, green, or blue, respectively).
• p determines how quickly a color fades as you move away from its center.
• (xc,k , yc,k ) is the location where a particular component’s intensity should be at
its maximum, i.e. the center of the circle in the Venn diagram.
(a) Set the values of xc,k and yc,k for each circle based on the following table:
k Component xc,k yc,k
1 Red rad 0
2 Green rad· cos(2π/3) rad· sin(2π/3)
3 Blue rad· cos(4π/3) rad· sin(4π/3)
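Evaluating the Component formula with the centers from the table can be sketched in Python/NumPy; the coordinate grid below is an assumption (the lab code builds its grid from rad and del):

```python
import numpy as np

rad, p = 100.0, 0.05
# Centers (x_c,k, y_c,k) from the table above, for k = 1 (red), 2 (green), 3 (blue)
centers = [(rad, 0.0),
           (rad * np.cos(2 * np.pi / 3), rad * np.sin(2 * np.pi / 3)),
           (rad * np.cos(4 * np.pi / 3), rad * np.sin(4 * np.pi / 3))]

# Assumed coordinate grid with 1-unit pixel spacing
coords = np.linspace(-300, 300, 601)
X, Y = np.meshgrid(coords, coords)

img = np.zeros(X.shape + (3,))
for k, (xc, yc) in enumerate(centers):
    dist = np.hypot(X - xc, Y - yc)
    img[..., k] = 1.0 / (1.0 + p * dist)  # the Component_k formula

# Each component peaks at exactly 1 at its own center and fades with distance
print(img[..., 0].max())  # 1.0
```

A larger p makes the denominator grow faster with distance, so each color fades out over a shorter range.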
Checkpoint (1/10): Show your modified code to your TA. Discuss how you antici-
pate changing p to affect the resulting Venn diagram.
3. Set p = 0.05 and export the resulting image as follows:
(a) Set the title of the image to be Venn Diagram p = 0.05 (NetID)
(b) Save the image as IP1_EX1_Plot1.png.
4. Repeat the same process with p = 0.005, changing the title accordingly and saving the
image as IP1_EX1_Plot2.png.
Deliverable (1/16): In your report, describe the image output by the code in this
exercise. How did changing p affect the image?
Deliverable (2/16): Include the following files in your report:
• IP1_EX1.m
• IP1_EX1_Plot1.png
• IP1_EX1_Plot2.png
Checkpoint (2/10): Run the code and show the output to your TA. Discuss with
your TA what the relationship is between xc and xd.
2. Create a vector h of length 2, where each entry in h is equal to 1/2. This will be used to calculate the 2-point moving average of xd.
• NOTE: In general, you can calculate the “N-point moving average” of a data set
by convolving the data set with an h of length N where all the entries in h are
equal to 1/N .
3. Calculate the 2-point moving average of xd by convolving h with xd and save this to
the variable y2.
• Refer to the code in Example 7 for reference on how to perform convolution while
maintaining dimensionality of your input.
4. Repeat steps 2 and 3 to calculate the 5-point moving average and save it to the variable
y5.
5. Create a figure that contains the following and save it as IP1_EX3_Plot1.png:
(a) The original plots of xc and xd
(b) A plot of the 2-point moving average with red circles connected by solid lines
(c) A plot of the 5-point moving average with green circles connected by solid lines
(d) The title Moving Averages (NetID)
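The moving-average construction can be sketched in Python/NumPy; the xd values below are a stand-in, since the lab's xd comes from the earlier steps of this exercise:

```python
import numpy as np

# Stand-in sampled data (the lab's xd is generated in earlier steps)
xd = np.array([0., 1., 4., 9., 16., 9., 4., 1., 0.])

# N-point moving average: convolve with an h of length N whose entries are 1/N
h2 = np.ones(2) / 2
h5 = np.ones(5) / 5
y2 = np.convolve(xd, h2, mode='valid')
y5 = np.convolve(xd, h5, mode='valid')

print(len(y2), len(y5))  # 8 5
```

Note the 5-point average can be centered symmetrically on a sample while the 2-point average cannot, which is worth keeping in mind for the symmetry question in the deliverable below.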
Deliverable (5/16): In your report, discuss the differences between the 2-point and
5-point moving averages. In particular, answer the following questions:
(a) Does one of the smoothed curves look more or less symmetrically-smoothed than
the other?
(b) Which one and why do you think that is?
Deliverable (6/16): Include the following files in your report:
• IP1_EX3.m
• IP1_EX3_Plot1.png
Checkpoint (3/10): Run the code and show your TA the two figures that were
produced. What is twopointdiff?
2. Perform a same-size convolution of a vector h = [1,0,-1]/2/deltatd with xd to get
an approximation for the derivative of humps.
3. Overlay a plot of the result of the convolution onto figure 2 using magenta circles.
4. Title the graph Derivative Approximation (11 points) (NetID) and save it as
IP1_EX4_Plot11.png
Discussion (2/3): How do you think adding more points to td will affect the rela-
tionship between twopointdiff and our derivative approximation?
5. Change the code where td is generated so that there are 51 points, rather than 11,
and rerun the code.
(a) This time, title your figure Derivative Approximation (51 points) (NetID)
and save it as IP1_EX4_Plot51.png.
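The derivative approximation via convolution can be sketched in Python/NumPy; a sine wave stands in for humps, and 'valid' is used here to sidestep the edge effects of a same-size convolution:

```python
import numpy as np

# Hypothetical time base and signal (the lab uses td with the humps function)
td = np.linspace(0, 1, 51)
dt = td[1] - td[0]
xd = np.sin(2 * np.pi * td)

# Convolving with [1, 0, -1]/(2*dt) yields the centered two-sided difference
# (x[n+1] - x[n-1]) / (2*dt), an estimate of the derivative at sample n
h = np.array([1.0, 0.0, -1.0]) / (2 * dt)
dapprox = np.convolve(xd, h, mode='valid')

true_deriv = 2 * np.pi * np.cos(2 * np.pi * td[1:-1])
print(np.max(np.abs(dapprox - true_deriv)))  # small for 51 points
```

Shrinking dt (more points over the same interval) shrinks the truncation error of the two-sided difference, which is the behavior the discussion question above is probing.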
Deliverable (7/16): In your lab report, discuss the differences between the derivative
approximations using 11 points and using 51 points. Clearly identify the calculations
you believe have unacceptable error and provide an explanation for why those errors
occur.
Deliverable (8/16): Include the following files in your report:
• IP1_EX4.m
• IP1_EX4_Plot11.png
• IP1_EX4_Plot51.png
Deliverable (9/16): In your lab report, discuss the differences between the three
different blurs used (10x10, 2x50, and 50x2), as well as any interesting “artifacts” you
see in the images.
Deliverable (10/16): Include the following files in your report:
• IP1_EX5.m
• IP1_EX5_Plot1.png
• IP1_EX5_Plot2.png
• IP1_EX5_Plot3.png
2. Based on the example in the table of different image kernels from the Wikipedia page,
generate a 5 × 5 Gaussian blur approximation kernel.
3. Convolve the image with your kernel and display the result with image.
Checkpoint (6/10): Show your TA the resulting, blurred image.
4. Title the image Coin Gaussian Blur (NetID) and save it as IP1_EX7_Plot1.png.
5. Reference the Wikipedia page at https://en.wikipedia.org/wiki/Prewitt_operator to
generate a 3 × 3 Prewitt operator that will detect edges as the gray level changes from
left to right.
6. Convolve the original image with this new kernel. Save the result to a variable called
CoinsEdgeX and display this using imagesc.
7. Title the image Coin Vertical Edges (NetID) and save it as IP1_EX7_Plot2.png.
8. Now, generate a 3 × 3 Prewitt operator that will detect edges as the gray level changes
from top to bottom.
9. Repeat steps 6 and 7 using your new kernel, saving the convolved result to CoinsEdgeY,
titling the image Coin Horizontal Edges (NetID), and saving it as
IP1_EX7_Plot3.png.
Checkpoint (7/10): Show your TA the two plots generated via convolution with
Prewitt operators. Explain the difference between the two of them.
Discussion (3/3): What about the Prewitt operator influences which direction it de-
tects edges in? How can you relate this to the derivative approximation you calculated
in Exercise 4?
10. Find the (element based) square root of the sum of the (element based) squares of
CoinsEdgeX and CoinsEdgeY and display the result with imagesc.
11. Title the resulting image Coin Edges (NetID) and save it as IP1_EX7_Plot4.png.
12. Finally, convolve the original image with the following kernel:
h = [-1 -1 -1; -1 8 -1; -1 -1 -1]
13. Title the image Alternate Coin Edges (NetID) and, displaying the result with
imagesc, save it as IP1_EX7_Plot5.png.
Checkpoint (8/10): Show your last two images to your TA and discuss the similar-
ities/differences between the two.
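Two of the kernels used in this exercise can be sketched in Python/NumPy; the 5×5 Gaussian approximation below is the binomial outer-product version from the Wikipedia kernels table (confirm against the page), and the edge-magnitude arrays are made-up stand-ins for the Prewitt results:

```python
import numpy as np

# 5x5 Gaussian-blur approximation: outer product of the binomial row
# [1, 4, 6, 4, 1], normalized so the kernel sums to 1
row = np.array([1., 4., 6., 4., 1.])
gauss5 = np.outer(row, row) / 256.0
print(gauss5.sum())  # 1.0

# Element-wise edge magnitude (step 10 above): sqrt of the sum of squares
# of the two directional results; these 2x2 arrays are hypothetical
gx = np.array([[3., 0.], [0., 4.]])
gy = np.array([[4., 0.], [0., 3.]])
edges = np.sqrt(gx**2 + gy**2)  # combines both edge directions
```

Because the Gaussian kernel sums to 1, blurring preserves the overall brightness of the image; the magnitude step combines the vertical- and horizontal-edge images into one direction-independent edge map.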
• Save: IP1_EX8_Plot3.png
(b) A 3×3 Sobel operator that will detect edges as colors change from top to bottom.
• Title: Sobel Horizontal Edges Test Card (NetID)
• Save: IP1_EX8_Plot4.png
10. Display the normalized square root of the sum of the squares of the results for the two
Sobel operators with image.
11. Title the image Sobel Edges Test Card (NetID) and save it as IP1_EX8_Plot5.png.
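The normalization in step 10 matters because image, unlike imagesc, does not rescale its input. A Python/NumPy sketch with hypothetical Sobel responses:

```python
import numpy as np

# Hypothetical Sobel responses; the lab's come from convolving the test card
sx = np.array([[0., 30.], [40., 0.]])
sy = np.array([[0., 40.], [30., 0.]])

mag = np.sqrt(sx**2 + sy**2)

# image does not auto-scale, so map the magnitudes onto 0..255 ourselves
norm = (255 * mag / mag.max()).astype(np.uint8)
print(norm.max(), norm.min())  # 255 0
```

Without this step, magnitudes above 255 would saturate and small magnitudes would all render as near-black.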
Deliverable (15/16): In your lab report, discuss:
• the differences between the different blurs, and
• what you see in the edge detection images
Deliverable (16/16): Include the following files in your lab report:
• IP1_EX8.m
• IP1_EX8_Plot0.png
• IP1_EX8_Plot1.png
• IP1_EX8_Plot2.png
• IP1_EX8_Plot3.png
• IP1_EX8_Plot4.png
• IP1_EX8_Plot5.png
6 Lab Report
This lab assignment is a little different from others in that there is no full lab report with
an introduction, discussion, conclusion, etc.
Instead, you should complete the 16 deliverables listed in the body of this document. There
is a LaTeX skeleton available on Canvas that has the infrastructure of the lab document
already done.
Overall, the deliverables cover the following components:
4. Exercise 4
• IP1_EX4_Plot11.png (2 points)
• IP1_EX4_Plot51.png (2 points)
5. Exercise 5
• IP1_EX5_Plot1.png (3 points)
• IP1_EX5_Plot2.png (3 points)
• IP1_EX5_Plot3.png (3 points)
6. Exercise 6
• IP1_EX6_Plot1.png (3 points)
• IP1_EX6_Plot2.png (3 points)
• IP1_EX6_Plot3.png (3 points)
7. Exercise 7
• IP1_EX7_Plot1.png (3 points)
• IP1_EX7_Plot2.png (3 points)
• IP1_EX7_Plot3.png (3 points)
• IP1_EX7_Plot4.png (3 points)
• IP1_EX7_Plot5.png (3 points)
8. Exercise 8
• IP1_EX8_Plot0.png (2 points)
• IP1_EX8_Plot1.png (4 points)
• IP1_EX8_Plot2.png (4 points)
• IP1_EX8_Plot3.png (4 points)
• IP1_EX8_Plot4.png (4 points)
• IP1_EX8_Plot5.png (4 points)
7 Revision History
• October 2024: Added a prelab; made multiple adjustments based on recommendations
of Jenny Green (Pratt ’25), Eduardo Bortolomiol (Pratt ’26), and Adam Davidson.
• Spring 2024: Converted to a standalone document
• Fall 2020: First published version