International Journal of Trend in Scientific Research and Development (IJTSRD)
Volume: 3 | Issue: 3 | Mar-Apr 2019    Available Online: www.ijtsrd.com    e-ISSN: 2456-6470

Design & Implementation of Digital Image Transformation Algorithms

Joe G. Saliby
Researcher, Lebanese Association for Computational Sciences, Beirut, Lebanon
How to cite this paper: Joe G. Saliby, "Design & Implementation of Digital Image Transformation Algorithms", Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-3, April 2019, pp. 623-631, URL: http://www.ijtsrd.com/papers/ijtsrd22918.pdf

Copyright © 2019 by author(s) and International Journal of Trend in Scientific Research and Development Journal. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0) (http://creativecommons.org/licenses/by/4.0)

ABSTRACT
In computer science, Digital Image Processing or DIP is the use of computer hardware and software to perform image processing and computations on digital images. Generally, digital image processing requires the use of complex algorithms, and hence, can be more sophisticated from a performance perspective at doing simple tasks. Many applications exist for digital image processing, one of which is Digital Image Transformation. Basically, Digital Image Transformation or DIT is an algorithmic and mathematical function that converts one set of digital objects into another set after performing some operations. Some techniques used in DIT are image filtering; brightness, contrast, hue, and saturation adjustment; blending and dilation; histogram equalization; discrete cosine transform; discrete Fourier transform; and edge detection, among others. This paper proposes a set of digital image transformation algorithms that deal with converting digital images from one domain to another. The algorithms to be implemented are grayscale transformation, contrast and brightness adjustment, hue and saturation adjustment, histogram equalization, blurring and sharpening adjustment, blending and fading transformation, erosion and dilation transformation, and finally edge detection and extraction. As future work, some of the proposed algorithms are to be investigated with parallel processing, paving the way to make their execution time faster and more scalable.

KEYWORDS: Algorithms, Digital Image Processing, Digital Image Transformation
I. GRAYSCALE TRANSFORMATION
Grayscale is a range of shades of gray without apparent color. The darkest possible shade is black, which is the total absence of transmitted or reflected light. The lightest possible shade is white, the total transmission or reflection of light at all visible wavelengths. Intermediate shades of gray are represented by equal brightness levels of the three primary colors (red, green and blue) for transmitted light, or equal amounts of the three primary pigments (cyan, magenta and yellow) for reflected light [1].

In photography and computing, a grayscale digital image is an image in which the value of each pixel is a single sample, that is, it carries only intensity information. Images of this sort, also known as black-and-white, are composed exclusively of shades of gray, varying from black at the weakest intensity to white at the strongest. Grayscale images are distinct from one-bit bi-tonal black-and-white images, which in the context of computer imaging are images with only two colors, black and white; grayscale images have many shades of gray in between. Grayscale images are also called monochromatic, denoting the presence of only one color.

A. Implementation
Image img = pictureBox1.Image;
Bitmap bitmap = new Bitmap(img);

// Cycling over all the pixels in the image
for (int i = 0; i < bitmap.Size.Width; i++)
{
    for (int j = 0; j < bitmap.Size.Height; j++)
    {
        Color color = bitmap.GetPixel(i, j); // Retrieves the color of a particular pixel
        int R = color.R; // since the image is 8-bit grayscale, R = G = B

        if (intensityTrackbar.Value == 2) // 4-bit grayscale: 16 levels
        {
            int upperBound = 271; // 271 - 16 = 255
            int lowerBound = 0;
            for (int k = 1; k <= 16; k++)
            {
                upperBound = upperBound - 16;
                lowerBound = upperBound - 16;
                if (R <= upperBound && R > lowerBound)
                {
                    R = upperBound;
                }
            }
        }
        else if (intensityTrackbar.Value == 1) // 2-bit grayscale: 4 levels
        {
            int upperBound = 319; // 319 - 64 = 255
            int lowerBound = 0;
            for (int k = 1; k <= 4; k++)
            {
                upperBound = upperBound - 64;
                lowerBound = upperBound - 64;
                if (R <= upperBound && R > lowerBound)
                {
                    R = upperBound;
                }
            }
        }
        else if (intensityTrackbar.Value == 0) // 1-bit grayscale: black and white
        {
            if (R <= 255 && R > 127)
                R = 255;
            else R = 0;
        }

        bitmap.SetPixel(i, j, Color.FromArgb(R, R, R));
    }
}

Figures 1, 2, 3, and 4 depict an original image in 8-bit, 4-bit, 2-bit, and 1-bit grayscale mode respectively.

Figure 1: 8-bit Grayscale = 256 Levels
Figure 2: 4-bit Grayscale = 16 Levels
Figure 3: 2-bit Grayscale = 4 Levels
Figure 4: 1-bit Grayscale = 2 Levels (Black & White)
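The cascaded range checks above can also be expressed arithmetically. The following is a minimal sketch, not part of the original listing; Quantize is a hypothetical helper, and it assumes the number of desired levels is a power of two between 2 and 256 and that the source pixel is already grayscale (R = G = B):

// Minimal quantization sketch (assumption: levels is a power of two between 2 and 256).
static int Quantize(int intensity, int levels)
{
    int step = 256 / levels;            // width of each intensity band
    int band = intensity / step;        // index of the band the pixel falls into
    int value = (band + 1) * step - 1;  // upper bound of that band, e.g. 255 for the top band
    return value > 255 ? 255 : value;
}

For example, Quantize(R, 16) reproduces the 16-level (4-bit) mapping of the trackbar value 2 branch above.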

II. CONTRAST ADJUSTMENT
Contrast is created by the difference in luminance, the amount of reflected light, between two adjacent surfaces. The Weber definition of contrast is:

Contrast = (Lmax - Lmin) / Lmax

where Lmax is the luminance of the lighter surface and Lmin is the luminance of the darker surface.

When the darker surface is black and reflects no light, the ratio is 1. Contrast is usually expressed as a percentage value; the ratio is multiplied by 100, so the maximum contrast is 100% [2]. The symbols of visual acuity charts are close to the maximum contrast. If the lowest contrast perceived is 5%, contrast sensitivity is 100/5 = 20. If the lowest contrast perceived by a person is 0.6%, contrast sensitivity is 100/0.6 = 170.

A. Implementation
public static Bitmap AdjustContrast(Bitmap Image, float Value)
{
    Value = (100.0f + Value) / 100.0f;
    Value *= Value;

    Bitmap NewBitmap = (Bitmap)Image.Clone();
    // Lock the bitmap data for direct per-byte access
    BitmapData data = NewBitmap.LockBits(
        new Rectangle(0, 0, NewBitmap.Width, NewBitmap.Height),
        ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);

    int Height = NewBitmap.Height;
    int Width = NewBitmap.Width;

    unsafe
    {
        for (int y = 0; y < Height; ++y)
        {
            byte* row = (byte*)data.Scan0 + (y * data.Stride);
            int columnOffset = 0;
            for (int x = 0; x < Width; ++x)
            {
                byte B = row[columnOffset];
                byte G = row[columnOffset + 1];
                byte R = row[columnOffset + 2];

                float Red = R / 255.0f;
                float Green = G / 255.0f;
                float Blue = B / 255.0f;

                // Shift each channel around the mid-gray point and scale by the contrast factor
                Red = (((Red - 0.5f) * Value) + 0.5f) * 255.0f;
                Green = (((Green - 0.5f) * Value) + 0.5f) * 255.0f;
                Blue = (((Blue - 0.5f) * Value) + 0.5f) * 255.0f;

                int iR = (int)Red;
                iR = iR > 255 ? 255 : iR;
                iR = iR < 0 ? 0 : iR;
                int iG = (int)Green;
                iG = iG > 255 ? 255 : iG;
                iG = iG < 0 ? 0 : iG;
                int iB = (int)Blue;
                iB = iB > 255 ? 255 : iB;
                iB = iB < 0 ? 0 : iB;

                row[columnOffset] = (byte)iB;
                row[columnOffset + 1] = (byte)iG;
                row[columnOffset + 2] = (byte)iR;

                columnOffset += 4; // 4 bytes per pixel (32bpp ARGB)
            }
        }
    }

    NewBitmap.UnlockBits(data);
    return NewBitmap;
}
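A minimal usage sketch of the method above (not shown in the paper), assuming the same PictureBox controls used throughout:

// Hypothetical usage: raise contrast by 40% and display the result.
Bitmap source = new Bitmap(pictureBox1.Image);
Bitmap adjusted = AdjustContrast(source, 40.0f); // positive values raise contrast, negative values lower it
pictureBox2.Image = adjusted;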

III. BRIGHTNESS ADJUSTMENT
Brightness is an attribute of visual perception in which a source appears to be radiating or reflecting light. In other words, brightness is the perception elicited by the luminance of a visual target [3]. It is a subjective attribute/property of an object being observed.

A. Implementation
Image img = pictureBox1.Image;
Bitmap bitmap = new Bitmap(img);

// Cycling over all the pixels in the image
for (int i = 0; i < bitmap.Size.Width; i++)
{
    for (int j = 0; j < bitmap.Size.Height; j++)
    {
        Color color = bitmap.GetPixel(i, j); // Retrieves the color of a particular pixel

        int R = color.R; // since the image is 8-bit grayscale --> R = G = B
        int G = color.G;
        int B = color.B;

        // intensity_level is the brightness offset chosen by the user (positive to brighten, negative to darken)
        R += intensity_level;
        G += intensity_level;
        B += intensity_level;

        // Clamp each channel to the valid 0-255 range before rebuilding the pixel
        R = R > 255 ? 255 : (R < 0 ? 0 : R);
        G = G > 255 ? 255 : (G < 0 ? 0 : G);
        B = B > 255 ? 255 : (B < 0 ? 0 : B);

        bitmap.SetPixel(i, j, Color.FromArgb(R, G, B)); // Updating the bitmap with the new modified pixel
    }
}

IV. HUE & SATURATION ADJUSTMENT
HSL stands for hue, saturation, and lightness, and is often also called HLS. HSV stands for hue, saturation, and value, and is also often called HSB. A third model, common in computer vision applications, is HSI, for hue, saturation, and intensity. However, while typically consistent, these definitions are not standardized, and any of these abbreviations might be used for any of these three or several other related cylindrical models [4].

HSL and HSV are the two most common cylindrical-coordinate representations of points in an RGB color model. The two representations rearrange the geometry of RGB in an attempt to be more intuitive and perceptually relevant than the Cartesian (cube) representation. Developed in the 1970s for computer graphics applications, HSL and HSV are used today in color pickers, in image editing software, and less commonly in image analysis and computer vision. Figure 5 depicts the HSL and HSV color spectrum.

Figure 5: HSL & HSV
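For reference, the hue, saturation, and lightness of an RGB pixel can be computed with the standard cylindrical-model formulas. The sketch below is illustrative only and is not taken from the paper's listing; ComputeHsl is a hypothetical helper name:

// Standard RGB -> HSL conversion sketch. h is in degrees [0, 360); s and l are in [0, 1].
static void ComputeHsl(Color c, out double h, out double s, out double l)
{
    double r = c.R / 255.0, g = c.G / 255.0, b = c.B / 255.0;
    double max = Math.Max(r, Math.Max(g, b));
    double min = Math.Min(r, Math.Min(g, b));
    double delta = max - min;

    l = (max + min) / 2.0;                                   // lightness is the mid-range of the channels
    s = delta == 0 ? 0 : delta / (1 - Math.Abs(2 * l - 1));  // saturation relative to lightness

    if (delta == 0) h = 0;                                   // achromatic pixel: hue is undefined, use 0
    else if (max == r) h = 60 * (((g - b) / delta) % 6);
    else if (max == g) h = 60 * (((b - r) / delta) + 2);
    else h = 60 * (((r - g) / delta) + 4);
    if (h < 0) h += 360;
}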

A. Implementation
public static Color[] GetColorDiagram(List<ControlPoint> points)
{
    Color[] colors = new Color[256];
    points.Sort(new PointsComparer());

    for (int i = 0; i < 256; i++)
    {
        ControlPoint leftColor = new ControlPoint(0, GetNearestLeftColor(points[0].Color));
        ControlPoint rightColor = new ControlPoint(255, GetNearestRigthColor(points[points.Count - 1].Color));

        if (i < points[0].Level)
        {
            rightColor = points[0];
        }
        if (i > points[points.Count - 1].Level)
        {
            leftColor = points[points.Count - 1];
        }
        else
        {
            for (int j = 0; j < points.Count - 1; j++)
            {
                if ((points[j].Level <= i) & (points[j + 1].Level > i))
                {
                    leftColor = points[j];
                    rightColor = points[j + 1];
                }
            }
        }

        if ((rightColor.Level - leftColor.Level) != 0)
        {
            double koef = (double)(i - leftColor.Level) / (double)(rightColor.Level - leftColor.Level);
            int r = leftColor.Color.R + (int)(koef * (rightColor.Color.R - leftColor.Color.R));
            int g = leftColor.Color.G + (int)(koef * (rightColor.Color.G - leftColor.Color.G));
            int b = leftColor.Color.B + (int)(koef * (rightColor.Color.B - leftColor.Color.B));
            colors[i] = Color.FromArgb(r, g, b);
        }
        else
        {
            colors[i] = leftColor.Color;
        }
    }
    return colors;
}

V. HISTOGRAM
In image processing and photography, a color histogram is a representation of the distribution of colors in an image. For digital images, a color histogram represents the number of pixels that have colors in each of a fixed list of color ranges that span the image's color space, the set of all possible colors [5].

The color histogram can be built for any kind of color space, although the term is more often used for three-dimensional spaces like RGB or HSV. For monochromatic images, the term intensity histogram may be used instead. For multi-spectral images, where each pixel is represented by an arbitrary number of measurements (for example, beyond the three measurements in RGB), the color histogram is N-dimensional, with N being the number of measurements taken. Each measurement has its own wavelength range of the light spectrum, some of which may be outside the visible spectrum.

A. Histogram Equalization Algorithm
1. Iterate over all the pixels and count the number of pixels that have a particular intensity.
2. Store the results in a table and calculate the probability of each intensity as: number of pixels at a particular intensity level / total number of pixels in the image.
3. Perform equalization using T(rk) = (L - 1) * Sum[i=0..k] Pr(i) = sk.
4. Store the new results in a table and update the image by substituting the old intensity values with the new equalized ones.
5. Calculate the histogram distribution of the newly generated image.

B. Implementation
distribution = new double[256, 3];
// distribution[, 0] = number of pixels
// distribution[, 1] = probability
// distribution[, 2] = new intensity level after equalization

Bitmap bitmap = new Bitmap(pictureBox1.Image);

for (int i = 0; i < bitmap.Height; i++)
{
    for (int j = 0; j < bitmap.Width; j++)
    {
        int intensity = bitmap.GetPixel(j, i).R;
        distribution[intensity, 0]++;
    }
}

// Working with the ListView
listView1.Items.Clear();

int total = 0;
double totalProbability = 0.0;

for (int i = 0; i < distribution.GetLength(0); i++)
{
    distribution[i, 1] = distribution[i, 0] / 87040.0; // 87040 = total number of pixels in the sample image

    total = total + Convert.ToInt32(distribution[i, 0]);
    totalProbability = totalProbability + distribution[i, 1];

    // Updating the listview just for illustration purposes
    // i represents the INTENSITY level
    ListViewItem item = new ListViewItem(new string[] { "" + i, "" + distribution[i, 0], distribution[i, 1].ToString("0.000000"), "" });
    listView1.Items.Add(item);
}

listView1.Items.Add(""); // EMPTY row
listView1.Items.Add(new ListViewItem(new string[] { "Totals:", "" + total, "" + totalProbability, "" }));
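A minimal sketch of steps 3 and 4 of the algorithm, reusing the distribution array and bitmap from the listing above (the variable names and loop structure here are illustrative, not taken from the paper):

// Step 3: compute the equalized intensity sk = (L - 1) * cumulative probability, with L = 256.
double cumulative = 0.0;
for (int k = 0; k < 256; k++)
{
    cumulative += distribution[k, 1];                     // running sum of Pr(i) for i = 0..k
    distribution[k, 2] = Math.Round(255.0 * cumulative);  // new intensity level after equalization
}

// Step 4: rewrite the image using the new intensity levels.
for (int i = 0; i < bitmap.Height; i++)
{
    for (int j = 0; j < bitmap.Width; j++)
    {
        int oldIntensity = bitmap.GetPixel(j, i).R;
        int newIntensity = (int)distribution[oldIntensity, 2];
        bitmap.SetPixel(j, i, Color.FromArgb(newIntensity, newIntensity, newIntensity));
    }
}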
Figure 6 shows the intensity distribution computed prior to histogram equalization, and Figure 7 shows the result of performing histogram equalization.

Figure 6: Calculating the Intensity Distribution
Figure 7: Performing Histogram Equalization

VI. BLURRING & SHARPENING ADJUSTMENT
A Gaussian blur (also known as Gaussian smoothing) is the result of blurring an image by a Gaussian function. It is a widely used effect in graphics software, typically to reduce image noise and reduce detail. The visual effect of this blurring technique is a smooth blur resembling that of viewing the image through a translucent screen, distinctly different from the bokeh effect produced by an out-of-focus lens or the shadow of an object under usual illumination. Gaussian smoothing is also used as a pre-processing stage in computer vision algorithms in order to enhance image structures at different scales [7].

Mathematically, applying a Gaussian blur to an image is the same as convolving the image with a Gaussian function. This is also known as a two-dimensional Weierstrass transform. By contrast, convolving by a circle would more accurately reproduce the bokeh effect. Since the Fourier transform of a Gaussian is another Gaussian, applying a Gaussian blur has the effect of reducing the image's high-frequency components; a Gaussian blur is thus a low pass filter.

A. Implementation
double[] filter = new double[]{
    Convert.ToDouble(textBox1.Text), Convert.ToDouble(textBox2.Text), Convert.ToDouble(textBox3.Text),
    Convert.ToDouble(textBox4.Text), Convert.ToDouble(textBox5.Text), Convert.ToDouble(textBox6.Text),
    Convert.ToDouble(textBox7.Text), Convert.ToDouble(textBox8.Text), Convert.ToDouble(textBox9.Text) };

Bitmap bitmap = (Bitmap)pictureBox1.Image;
Bitmap bitmap2 = new Bitmap(bitmap.Width, bitmap.Height);

// Skip the outermost rows and columns so that every 3x3 neighborhood lies inside the image
for (int y = 1; y < bitmap.Height - 1; y++)
{
    for (int x = 1; x < bitmap.Width - 1; x++)
    {
        int Rcenter = bitmap.GetPixel(x, y).R;

        int R0 = bitmap.GetPixel(x - 1, y + 1).R;
        int R1 = bitmap.GetPixel(x, y + 1).R;
        int R2 = bitmap.GetPixel(x + 1, y + 1).R;
        int R3 = bitmap.GetPixel(x - 1, y).R;
        int R5 = bitmap.GetPixel(x + 1, y).R;
        int R6 = bitmap.GetPixel(x - 1, y - 1).R;
        int R7 = bitmap.GetPixel(x, y - 1).R;
        int R8 = bitmap.GetPixel(x + 1, y - 1).R;

        int sum = Convert.ToInt32(((R0 * filter[0]) + (R1 * filter[1]) +
            (R2 * filter[2]) + (R3 * filter[3]) + (Rcenter * filter[4]) +
            (R5 * filter[5]) + (R6 * filter[6]) + (R7 * filter[7]) +
            (R8 * filter[8])) / 9);

        if (sum < 0)
            sum = 0;
        if (sum > 255)
            sum = 255;

        bitmap2.SetPixel(x, y, Color.FromArgb(sum, sum, sum));
    }
}
pictureBox2.Image = bitmap2;
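The nine filter weights are read from the text boxes, so the same convolution loop produces either a blurring or a sharpening effect depending on the kernel that is entered. Two commonly used 3x3 kernels are shown below as illustrative values; they are not prescribed by the paper, and because the listing above divides the weighted sum by 9 rather than by the kernel sum, the weights would need to be rescaled accordingly:

// Approximate Gaussian blur kernel (weights sum to 16; pre-scale each weight by 9/16 for the listing above):
// 1 2 1
// 2 4 2
// 1 2 1
double[] gaussian = { 1, 2, 1, 2, 4, 2, 1, 2, 1 };

// Sharpening kernel (weights sum to 1; pre-scale each weight by 9 for the listing above):
//  0 -1  0
// -1  5 -1
//  0 -1  0
double[] sharpen = { 0, -1, 0, -1, 5, -1, 0, -1, 0 };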

Figure 8 demonstrates the blurring effect, while Figure 9 demonstrates the sharpening effect on a particular image.

Figure 8: Applying Blurring Effect
Figure 9: Applying Sharpening Effect

VII. BLENDING & FADING TRANSFORMATION
Blending in graphics is about forming a blend of two input images of the same size. The value of each pixel in the output image is a linear combination of the corresponding pixel values in the input images [8]. Below is the algorithm for blending two input images together:
1. Iterate over all the pixels of both source images, namely image 1 and image 2.
2. On each iteration, read the intensity value of a particular pixel in image 1 and add it to the intensity value of the corresponding pixel in image 2, as in pixel(i, image3) = pixel(i, image1) + pixel(i, image2).
3. If the obtained value is larger than 255, normalize it to 255; if the obtained value is less than 0 (in case of subtraction), normalize it to 0.
4. Store the resulting value in a third image.
5. Upon scanning all the pixels of both images 1 and 2, a new image 3 is obtained; it is the result of image 1 + image 2.

A. Implementation
Button button = (Button)sender;
Bitmap bitmap1 = new Bitmap(pictureBox1.Image);
Bitmap bitmap2 = new Bitmap(pictureBox2.Image);
Bitmap bitmap3 = new Bitmap(300, 300); // assumes both source images are 300 x 300

for (int y = 0; y < bitmap1.Height; y++)
{
    for (int x = 0; x < bitmap1.Width; x++)
    {
        Color color1 = bitmap1.GetPixel(x, y);
        Color color2 = bitmap2.GetPixel(x, y);

        int R1 = color1.R;
        int R2 = color2.R;
        int R3 = 0;

        if (button.Text == "+")
            R3 = R1 + R2;
        else if (button.Text == "-")
            R3 = R1 - R2;
        else if (button.Text == "*")
            R3 = R1 * R2;
        else if (button.Text == "/")
        {
            if (R2 != 0)
                R3 = R1 / R2;
        }

        if (R3 > 255)
            R3 = 255;
        else if (R3 < 0)
            R3 = 0;

        bitmap3.SetPixel(x, y, Color.FromArgb(R3, R3, R3));
    }
}

pictureBox3.Image = bitmap3;
pictureBox3.Refresh();
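The section title also covers fading. A weighted (alpha) blend, which fades between the two inputs, is a small variation of the listing above; the sketch below is illustrative and reuses bitmap1, bitmap2, and bitmap3 from that listing (alpha is a hypothetical parameter between 0 and 1):

// Fading sketch: output = alpha * image1 + (1 - alpha) * image2.
// alpha = 1 shows only image 1, alpha = 0 shows only image 2, values in between fade.
double alpha = 0.5;
for (int y = 0; y < bitmap1.Height; y++)
{
    for (int x = 0; x < bitmap1.Width; x++)
    {
        int R1 = bitmap1.GetPixel(x, y).R;
        int R2 = bitmap2.GetPixel(x, y).R;
        int R3 = (int)(alpha * R1 + (1 - alpha) * R2); // always stays within 0-255
        bitmap3.SetPixel(x, y, Color.FromArgb(R3, R3, R3));
    }
}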

Figure 10 demonstrates the blending of two images, while Figure 11 demonstrates the shading effect applied by a constant value to an image.

Figure 10: Blending of Two Images
Figure 11: Shading by a Constant

VIII. EROSION & DILATION TRANSFORMATION
Erosion is about performing a special processing on a binary image. We successively place the center pixel of the structuring element on each foreground pixel (value 1). If any of the neighborhood pixels are background pixels (value 0), the foreground pixel is switched to background. On the other hand, to perform dilation, we successively place the center pixel of the structuring element on each background pixel [9]. If any of the neighborhood pixels are foreground pixels (value 1), the background pixel is switched to foreground.

AND operation (Intersection)
- Scan both images A and B simultaneously pixel-by-pixel:
  if pixelA is black AND pixelB is black, then set ResultPixel to black;
  else set ResultPixel to white.

OR operation (Union)
- Scan both images A and B simultaneously pixel-by-pixel:
  if pixelA is black OR pixelB is black, then set ResultPixel to black;
  else set ResultPixel to white.

NOT operation (Complement)
- Scan image A pixel-by-pixel:
  if pixelA is black, then set ResultPixel to white;
  else set ResultPixel to black.

XOR operation
- Scan both images A and B simultaneously pixel-by-pixel:
  if pixelA is black AND pixelB is black, then set ResultPixel to white;
  else if pixelA is black OR pixelB is black, then set ResultPixel to black;
  else set ResultPixel to white.

Difference operation (A - B)
- Scan both images A and B simultaneously pixel-by-pixel.
- Apply the complement of B.
- Then apply the AND operation on A and the complement of B: A AND (NOT B).

Boundary Extraction
- Apply erosion on A with structuring element B.
- Subtract the result from A (use logical set differencing): A - (A erosion B) [10].

Connected Components
- Connected component labeling works by scanning an image pixel-by-pixel (from top to bottom and left to right) in order to identify connected pixel regions, i.e., regions of adjacent pixels which share the same set of intensity values V.
- When a point p is encountered (p denotes the pixel to be labeled at any stage in the scanning process for which V = {1}), it examines the four neighbors of p which have already been encountered in the scan. Based on this information, the labeling of p occurs as follows: if all four neighbors are 0, assign a new label to p; else, if only one neighbor has V = {1}, assign its label to p; else, if one or more of the neighbors have V = {1}, assign one of the labels to p and make a note of the equivalences.
- After completing the scan, the equivalent label pairs are sorted into equivalence classes and a unique label is assigned to each class. As a final step, a second scan is made through the image, during which each label is replaced by the label assigned to its equivalence class [11]. A code sketch of this two-pass procedure is given below.
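A minimal sketch of the two-pass labeling just described, simplified to 4-connectivity (only the west and north neighbors are examined) and using a small equivalence table; LabelComponents, parent, and the foreground test are illustrative choices, not taken from the paper:

// Two-pass connected component labeling sketch for a binary bitmap
// (dark pixels are treated as foreground here, to match the AND listing below).
static int[,] LabelComponents(Bitmap image)
{
    int[,] labels = new int[image.Height, image.Width];
    int[] parent = new int[image.Width * image.Height / 2 + 2];
    int nextLabel = 1;

    // Find the representative label of an equivalence class (with path compression)
    Func<int, int> find = null;
    find = l => parent[l] == l ? l : (parent[l] = find(parent[l]));

    // First pass: assign provisional labels and record equivalences
    for (int y = 0; y < image.Height; y++)
    {
        for (int x = 0; x < image.Width; x++)
        {
            if (image.GetPixel(x, y).R >= 128) continue;   // background pixel, leave label 0

            int left = x > 0 ? labels[y, x - 1] : 0;       // west neighbor
            int up = y > 0 ? labels[y - 1, x] : 0;         // north neighbor

            if (left == 0 && up == 0)
            {
                parent[nextLabel] = nextLabel;             // new label starts its own class
                labels[y, x] = nextLabel++;
            }
            else if (left != 0 && up != 0)
            {
                labels[y, x] = Math.Min(left, up);
                parent[find(Math.Max(left, up))] = find(Math.Min(left, up)); // note the equivalence
            }
            else
            {
                labels[y, x] = left != 0 ? left : up;      // copy the single labeled neighbor
            }
        }
    }

    // Second pass: replace each provisional label by its equivalence-class representative
    for (int y = 0; y < image.Height; y++)
        for (int x = 0; x < image.Width; x++)
            if (labels[y, x] != 0)
                labels[y, x] = find(labels[y, x]);

    return labels;
}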
A. Implementation
AND Operation

Bitmap A = (Bitmap)pictureBox1.Image;
Bitmap B = (Bitmap)pictureBox2.Image;
Bitmap C = new Bitmap(319, 240); // assumes the input images are 319 x 240

for (int i = 0; i < A.Height; i++)

{
    for (int j = 0; j < A.Width; j++)
    {
        int colorA = A.GetPixel(j, i).R;
        int colorB = B.GetPixel(j, i).R;
        if (colorA < 200 && colorB < 200) // both pixels are considered black (foreground)
            C.SetPixel(j, i, Color.FromArgb(0, 0, 0));
        else
            C.SetPixel(j, i, Color.FromArgb(255, 255, 255));
    }
}
pictureBox3.Image = C;
pictureBox3.Refresh();

Dilation

// data and data2 are assumed to be BitmapData obtained by LockBits on the source (bmpimg)
// and destination (tempbmp) bitmaps in 24bpp RGB format; sElement is the 3x3 structuring element.
byte* ptr = (byte*)data.Scan0;
byte* tptr = (byte*)data2.Scan0;

ptr += data.Stride + 3;
tptr += data.Stride + 3;

int remain = data.Stride - data.Width * 3;

for (int i = 1; i < data.Height - 1; i++)
{
    for (int j = 1; j < data.Width - 1; j++)
    {
        if (ptr[0] == 255) // white pixel in the source: stamp the structuring element into the destination
        {
            byte* temp = tptr - data.Stride - 3;

            for (int k = 0; k < 3; k++)
            {
                for (int l = 0; l < 3; l++)
                {
                    temp[data.Stride * k + l * 3] =
                    temp[data.Stride * k + l * 3 + 1] =
                    temp[data.Stride * k + l * 3 + 2] =
                    (byte)(sElement[k, l] * 255);
                }
            }
        }

        ptr += 3;
        tptr += 3;
    }

    ptr += remain + 6;
    tptr += remain + 6;
}

bmpimg.UnlockBits(data);
tempbmp.UnlockBits(data2);

bmpimg = (Bitmap)tempbmp.Clone();

pictureBox2.Image = bmpimg;
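The listing above shows the dilation pass; erosion, as described at the beginning of this section, is the dual operation. The following GetPixel/SetPixel sketch of erosion with a full 3x3 structuring element is illustrative only and assumes, as in the dilation listing, that white (255) is foreground:

// Erosion sketch: a foreground (white) pixel survives only if all of its 3x3 neighbors are foreground.
Bitmap src = new Bitmap(pictureBox1.Image);
Bitmap dst = new Bitmap(src.Width, src.Height);

for (int y = 1; y < src.Height - 1; y++)
{
    for (int x = 1; x < src.Width - 1; x++)
    {
        bool keep = true;
        for (int dy = -1; dy <= 1 && keep; dy++)
            for (int dx = -1; dx <= 1 && keep; dx++)
                if (src.GetPixel(x + dx, y + dy).R != 255)
                    keep = false; // at least one neighbor is background, so erode this pixel

        dst.SetPixel(x, y, keep ? Color.FromArgb(255, 255, 255) : Color.FromArgb(0, 0, 0));
    }
}
pictureBox2.Image = dst;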
Figure 12 demonstrates the erosion and dilation effects when applied to a black and white image. Likewise, Figure 13 shows the difference logical operation over a particular image. Finally, Figure 14 shows the boundary extraction effect.

Figure 12: Erosion Morphology
Figure 13: A - B (Difference)
Figure 14: Boundary Extraction

IX. CONCLUSIONS & FUTURE WORK
This paper proposed the design and implementation of a set of digital image transformation algorithms that deal with converting digital images from one domain to another. The algorithms implemented were grayscale transformation, contrast and brightness adjustment, hue and saturation adjustment, histogram equalization, blurring and sharpening adjustment, blending and fading transformation, erosion and dilation transformation, and finally edge detection and extraction. The proposed algorithms were implemented using C#.NET and .NET Framework 3.5.
As future work, the proposed algorithms are to be reprogrammed to fit in a multiprocessing environment with the purpose of speeding up their execution and processing time.

Acknowledgment
This research was funded by the Lebanese Association for Computational Sciences (LACSC), Beirut, Lebanon, under the "Parallel Programming Algorithms Research Project - PPARP2019".

References
[1] Rafael C. Gonzalez, Richard E. Woods, "Digital Image Processing", 3rd Edition, Prentice Hall, 2007.
[2] Maria Petrou, Costas Petrou, "Image Processing: The Fundamentals", 2nd Edition, Wiley, 2010.
[3] Mike Reed, "Graphic arts, digital imaging and technology education", THE Journal, vol. 21, no. 5, p. 69, 2002.
[4] S. Naik and C. Murthy, "Hue-preserving color image enhancement without gamut problem", IEEE Transactions on Image Processing, vol. 12, no. 12, pp. 1591-1598, 2003.
[5] Kenneth Castleman, "Digital Image Processing", Prentice Hall, 1995.
[6] N. Bassiou, C. Kotropoulos, "Color image histogram equalization by absolute discounting back-off", Computer Vision and Image Understanding, vol. 107, no. 1-2, pp. 108-122, 2007.
[7] John C. Russ, "The Image Processing Handbook", 6th Edition, CRC Press, 2011.
[8] Ronny Richardson, "Digital imaging: The wave of the future", THE Journal, vol. 31, no. 3, 2003.
[9] Qi-Yu Liang, et al., "Observation of three-photon bound states in a quantum nonlinear medium", Science, vol. 359, no. 6377, pp. 783-786, 2018.
[10] Pietro Perona, Jitendra Malik, "Scale-space and edge detection using anisotropic diffusion", Proceedings of the IEEE Computer Society Workshop on Computer Vision, pp. 16-22, 1987.
[11] Guillermo Sapiro, "Geometric Partial Differential Equations and Image Analysis", Cambridge University Press, 2001.
