Report

ABSTRACT
In order to extract useful data for diagnosis and therapy planning, medical image segmentation is essential. This report presents a unique method for medical image segmentation utilizing the Gaussian Mixture Model (GMM). The GMM is a probabilistic model that can capture intricate patterns and variations found in medical images. Medical image segmentation is used to extract regions of interest (ROI) from medical imagery, which can help doctors study the anatomy of the segmented regions to find tumors and other abnormalities. The method uses the variational level set approach to perform image segmentation, which tries to find the best boundaries between different parts of an image by minimizing a functional that measures how good the segmentation is. This functional has two terms: the first deals with the edges of the image, and the second deals with the regions of the image and tries to make each region have similar properties such as color or texture. To fit the model to the observed pixel intensities in the medical images, the GMM parameters are estimated using the Expectation-Maximization (EM) approach. The approach is built on the Gaussian Mixture Model and also includes a new region-based term that uses the GMMs of different regions to measure how different they are from each other. The distance between the two GMMs is calculated and maximized during the segmentation process, which makes the regions more distinct from one another. The method is evaluated on dermoscopy images, and the results show that it can segment these images more efficiently.
CHAPTER 1
INTRODUCTION
1.1 GAUSSIAN DISTRIBUTION
In a normal or Gaussian distribution, the mean, median, and mode are the same, and the
distribution can be described by two values: the mean and the standard deviation. A smaller
standard deviation results in a narrow bell curve, while a larger standard deviation leads to
a wide curve.
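As a small illustration of this property (not part of the proposed method), the short Python sketch below evaluates the Gaussian density for two hypothetical standard deviations; the narrow curve peaks higher than the wide one, and all values used are arbitrary.

import numpy as np

def gaussian_pdf(x, mean, std):
    # Univariate Gaussian density N(x; mean, std^2)
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

x = np.linspace(-5, 5, 201)
narrow = gaussian_pdf(x, mean=0.0, std=0.5)  # small standard deviation: tall, narrow bell curve
wide = gaussian_pdf(x, mean=0.0, std=2.0)    # large standard deviation: flat, wide bell curve
print(narrow.max(), wide.max())              # approx. 0.798 vs 0.199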
1.2 MACHINE LEARNING
Modelling the distribution of data in machine learning often involves the use of several
Gaussian distributions, each with its own parameters such as mean, covariance, and weight,
which combine to describe the data. By learning these parameters during training, the model
can represent complex, multi-modal data. Gaussian mixtures are widely used in speech
recognition, image processing, and clustering, where they serve as flexible density models
for such tasks.
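To make the idea of a mixture concrete, the following sketch (with made-up weights, means, and standard deviations) evaluates a two-component, one-dimensional mixture density as a weighted sum of Gaussian components; it only illustrates the formula and is not the report's model.

import numpy as np
from scipy.stats import norm

# Hypothetical two-component mixture; all parameter values are illustrative
weights = [0.6, 0.4]   # mixing weights, must sum to 1
means = [0.0, 3.0]
stds = [1.0, 0.5]

def mixture_density(x):
    # p(x) = sum_k weight_k * N(x; mean_k, std_k^2)
    return sum(w * norm.pdf(x, loc=m, scale=s) for w, m, s in zip(weights, means, stds))

print(mixture_density(0.0), mixture_density(3.0))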
Problems with this method include its sensitivity to initialization, noise, and intensity
variations in medical images, as well as choosing the right number of Gaussian components.
These issues must be handled carefully to obtain meaningful and clinically relevant
segmentation results; one common way to choose the number of components is sketched below.
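The question of how many components to use is often answered by comparing models with an information criterion such as the BIC. This is a general technique, not specific to this report; the sketch below applies scikit-learn's GaussianMixture to synthetic one-dimensional data standing in for pixel intensities.

import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic intensities drawn from two groups (illustrative stand-in for image pixels)
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.2, 0.05, 500),
                       rng.normal(0.7, 0.10, 500)]).reshape(-1, 1)

# Fit mixtures with 1 to 5 components and keep the count with the lowest BIC
bics = {}
for k in range(1, 6):
    gm = GaussianMixture(n_components=k, random_state=0).fit(data)
    bics[k] = gm.bic(data)
best_k = min(bics, key=bics.get)
print('BIC per component count:', bics)
print('Chosen number of components:', best_k)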
EXPECTATION-MAXIMIZATION ALGORITHM:
In the framework of models such as the Gaussian Mixture Model (GMM), the parameters used
for medical image segmentation are estimated with the Expectation-Maximization (EM)
algorithm, an iterative method. Alternating between an expectation (E) step, which computes
how likely each pixel is to belong to each component, and a maximization (M) step, which
re-estimates the component parameters, helps the algorithm understand and adapt to
complicated patterns in images, like those found in medical scans, and thereby improves the
segmentation results.
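Since the implementation in Chapter 6 lists only the M-step of the GMM, the following is a minimal sketch of what the corresponding E-step could look like. The names datas, mus, covs, and priors are chosen to match the later code, but this is an assumed illustration, not the report's exact implementation.

import numpy as np
from scipy.stats import multivariate_normal

def e_step(datas, mus, covs, priors):
    # beliefs[n, k] = posterior probability that data point n came from component k
    n_points, n_comp = datas.shape[0], len(priors)
    weighted = np.zeros((n_points, n_comp))
    for k in range(n_comp):
        weighted[:, k] = priors[k] * multivariate_normal.pdf(datas, mean=mus[k], cov=covs[k])
    total = np.sum(weighted, axis=1, keepdims=True)   # marginal density of each point
    beliefs = weighted / total                        # normalize per data point
    log_likelihood = np.sum(np.log(total))            # overall data log-likelihood
    return beliefs, log_likelihood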
1.3 OBJECTIVES:
• To segment medical images into distinct regions for detailed analysis.
• To extract regions of interest (ROI) from medical images.
CHAPTER 2
LITERATURE REVIEW
2.2 Drawbacks:
CHAPTER 3
PROPOSED SYSTEM
• The proposed novel image segmentation method makes use of both edge-based features and
region-based features, unlike DRLSE or the Chan-Vese (C-V) model, which each rely on only
one of the two.
• Curve Initialization: Consider an image I consisting of an object region (Ro) and a
background region (Rb); the method begins with a contour or curve placed around the object
of interest.
• Contour Evolution: Modify the contour in such a way that it maximizes the separation
between object and background, using a metric called the Bhattacharyya distance, which
measures how similar the intensity distributions of the two regions are (a small sketch of
this distance for single Gaussians is given after this list).
• Minimize the objective function using differential calculus to find the optimal
configuration of the contour that provides the best segmentation result, and include the
novel term that maximizes the distance between object and background within the level set
framework, yielding a hybrid approach to image segmentation that considers both edge and
region information.
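For intuition, the sketch below computes the closed-form Bhattacharyya distance between two single multivariate Gaussians; the report's term is defined between the GMMs of the object and background regions, which is more involved, and the statistics used here are purely illustrative.

import numpy as np

def bhattacharyya_gaussians(mu1, cov1, mu2, cov2):
    # Closed-form Bhattacharyya distance between N(mu1, cov1) and N(mu2, cov2)
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

# Illustrative object (Ro) and background (Rb) color statistics
mu_o, cov_o = np.array([0.8, 0.3, 0.2]), 0.01 * np.eye(3)
mu_b, cov_b = np.array([0.2, 0.2, 0.6]), 0.02 * np.eye(3)
print(bhattacharyya_gaussians(mu_o, cov_o, mu_b, cov_b))  # larger value = better separation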
3.1 METHODOLOGY:
3.3 IMPLEMENTATION MODULES:
• Extract relevant features from the image, such as pixel intensities or texture features.
Applying GMM:
Segmentation:
• Assign each pixel in the image to a particular cluster or class using the trained GMM.
• Using the learned Gaussian distributions, this step effectively divides the image into
various regions; a brief end-to-end sketch of these modules follows.
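A compact way to see these modules end to end is the following sketch, which uses scikit-learn's GaussianMixture as a stand-in for the custom EM implementation given in Chapter 6; the image path and the number of components are placeholders.

import numpy as np
from PIL import Image
from sklearn.mixture import GaussianMixture

# Placeholder path; any RGB image can be used for this sketch
image = np.asarray(Image.open('images/sample.jpg'), dtype=np.float64)
height, width, channels = image.shape

# Feature extraction: one row of channel intensities per pixel, standardized
pixels = image.reshape(-1, channels)
pixels = (pixels - pixels.mean(axis=0)) / pixels.std(axis=0)

# Applying GMM and segmentation: fit the mixture, then hard-assign each pixel
gmm = GaussianMixture(n_components=3, covariance_type='full').fit(pixels)
labels = gmm.predict(pixels)
segmented = labels.reshape(height, width)   # one region label per pixel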
Evaluation:
CHAPTER 4
DESIGN
Class Diagram:
Activity Diagram:
Sequence Diagram:
CHAPTER 5
IMPLEMENTATION
5.1 ALGORITHM:
Terminologies used:
5.2 SOFTWARE AND HARDWARE REQUIREMENTS:
HARDWARE REQUIREMENTS:
SOFTWARE REQUIREMENTS:
IDE :
Dataset :
TECHNOLOGY DESCRIPTION:
JUPYTER:
The Jupyter Notebook is an open-source web application that provides an interactive
computational environment for creating Jupyter notebook documents. The term "notebook" can
refer to the application, the Jupyter Python web server, or the Jupyter document format,
depending on context. According to the official Jupyter website, Project Jupyter exists to
develop open-source software, open standards, and services for interactive computing across
dozens of programming languages. Jupyter Book is an open-source project for building books
and documents from computational material. It allows the user to develop the content in a
combination of Markdown, an extended form of Markdown called MyST, mathematical notation,
and executable code.
PYTHON:
Python resembles everyday English in many respects, yet it is less verbose than many other
languages. Python is easy to learn: you can build a useful tool in Python even if you have
never taken a computer science class. Because it is a high-level language, you do not need to
manage lower-level aspects of programming such as memory management. Python can be used to
write scripts, scrape websites, and build data sets. It is widely used in the research
community for scientific computing, and libraries exist to support this. Python can also
communicate over the internet and understands how to send and receive web requests. Python
is described as "loosely typed": this category of programming languages does not require you
to declare the type of value a function returns, or the type of a variable before you create
it. Python is interactive in the sense that you can sit at a Python prompt and write your
programs directly. It is a great language for beginners because it allows you to create a
wide range of programs.
CHAPTER 6
EVALUATION DETAILS
# M-step: re-estimate the GMM parameters from the responsibilities (beliefs)
# Method of the GMM class (class definition and E-step are not reproduced in this listing)
def update(self, datas, beliefs):
    new_mus, new_covs, new_priors = [], [], []
    soft_counts = np.sum(beliefs, axis=0)
    for i in range(self.ncomp):
        # Responsibility-weighted mean of the data for component i
        new_mu = np.sum(np.expand_dims(beliefs[:, i], -1) * datas, axis=0)
        new_mu /= soft_counts[i]
        new_mus.append(new_mu)
        # Responsibility-weighted covariance of the data for component i
        data_shifted = np.subtract(datas, np.expand_dims(new_mu, 0))
        new_cov = np.matmul(np.transpose(np.multiply(np.expand_dims(beliefs[:, i], -1),
                                                     data_shifted)), data_shifted)
        new_cov /= soft_counts[i]
        new_covs.append(new_cov)
        # Mixing weight (prior) of component i
        new_priors.append(soft_counts[i] / np.sum(soft_counts))
    self.mus = np.asarray(new_mus)
    self.covs = np.asarray(new_covs)
    self.priors = np.asarray(new_priors)
if __name__ == '__main__':
    # Load image
    image_name = input('Input the image name: ')
    image_path = 'images/{}.jpg'.format(image_name)
    image = load_image(image_path)
    image_height, image_width, image_channels = image.shape
    # Normalization: flatten to one row per pixel and standardize each channel
    image_pixels = np.reshape(image, (-1, image_channels))
    _mean = np.mean(image_pixels, axis=0, keepdims=True)
    _std = np.std(image_pixels, axis=0, keepdims=True)
    image_pixels = (image_pixels - _mean) / _std
    # Input number of classes
    ncomp = int(input('Input number of classes: '))
    # Apply K-Means to find the initial means, priors, and covariance matrices for the GMM
    # (requires: from sklearn.cluster import KMeans)
    kmeans = KMeans(n_clusters=ncomp)
    labels = kmeans.fit_predict(image_pixels)
    initial_mus = kmeans.cluster_centers_
    initial_priors, initial_covs = [], []
    for i in range(ncomp):
        # Pixels assigned to cluster i, arranged as (channels x number of pixels)
        datas = np.array([image_pixels[j, :] for j in range(len(labels)) if labels[j] == i]).T
        initial_covs.append(np.cov(datas))
        initial_priors.append(datas.shape[1] / float(len(labels)))
    # Initialize a GMM with the K-Means estimates
    gmm = GMM(ncomp, initial_mus, initial_covs, initial_priors)
    # EM Algorithm: iterate E-step and M-step until the log-likelihood stops improving
    prev_log_likelihood = None
    for i in range(1000):
        beliefs, log_likelihood = gmm.inference(image_pixels)   # E-step
        gmm.update(image_pixels, beliefs)                       # M-step
        print('Iteration {}: Log Likelihood = {}'.format(i + 1, log_likelihood))
        if prev_log_likelihood is not None and abs(log_likelihood - prev_log_likelihood) < 1e-10:
            break
        prev_log_likelihood = log_likelihood
    # Show Result (requires: import matplotlib.pyplot as plt)
    beliefs, log_likelihood = gmm.inference(image_pixels)
    map_beliefs = np.reshape(beliefs, (image_height, image_width, ncomp))
    segmented_map = np.zeros((image_height, image_width, 3))
    for i in range(image_height):
        for j in range(image_width):
            # Hard assignment: the component with the highest responsibility colors the pixel
            hard_belief = np.argmax(map_beliefs[i, j, :])
            segmented_map[i, j, :] = np.asarray(COLORS[hard_belief]) / 255.0
    plt.imshow(segmented_map)
    plt.show()
import numpy as np
from PIL import Image

# Colors used to paint each segmented class in the output image
COLORS = [
    (255, 0, 0),    # red
    (0, 255, 0),    # green
    (0, 0, 255),    # blue
    (255, 255, 0),  # yellow
    (255, 0, 255),  # magenta
]

def load_image(infilename):
    # Read an image file into an integer NumPy array of shape (height, width, channels)
    img = Image.open(infilename)
    img.load()
    data = np.asarray(img, dtype="int32")
    return data
CHAPTER 7
FUTURE WORK
The Gaussian Mixture Model (GMM), as an unsupervised learning technique, has the potential
to play a larger role in medical image segmentation. Ongoing research and development are
needed to improve its accuracy and robustness, for example through hybrid methods that draw
on the advantages of both GMMs and deep learning to achieve more precise and efficient
segmentation, particularly when working with intricate and sizable medical imaging data.
Further work is needed on guaranteeing stable performance on a variety of image types,
including CT, MRI, and other modalities, and on more efficient models and algorithms, which
would facilitate speedier analysis and clinical decision-making. On-device processing
capabilities for portable and point-of-care devices are also being developed.
Unsupervised learning using GMMs can be applied to rare diseases, where labeled data is
hard to come by, to look for patterns and anomalies that might point to these kinds of
conditions in medical imaging. This could help segmentation models based on GMMs become
more broadly applicable.
CHAPTER 8
CONCLUSION
In conclusion, there are many promising opportunities for advancement in the field of medical
image segmentation with unsupervised learning. The potential uses of GMM-based segmentation
in the medical industry are numerous, and researchers will concentrate on enhancing the
accuracy and precision of GMM models, investigating novel algorithms, and combining
cutting-edge optimization strategies. The integration of GMMs with deep learning approaches
is likely to be an important area of research. One important feature that can increase the
utility of GMM-based segmentation in a variety of clinical scenarios is its ability to adapt
to a wide range of medical imaging modalities and conditions, including rare diseases. It is
projected that on-device applications and real-time processing capabilities will become
indispensable for speedier analysis and decision-making. Given the lack of labelled data,
transfer learning techniques may be essential in helping to train models. With better
explainability and interpretability, these models should be easier for clinicians to use and
more reliable, which will facilitate their integration into clinical workflows. Global
cooperation and data exchange programs will probably hasten the creation of globally
applicable standards. To guarantee fair and equitable outcomes for diverse patient
populations, it is also anticipated that ethical considerations and bias mitigation
strategies will receive more attention.
CHAPTER 9
REFERENCES
• "… Images for Medical Image Segmentation." IEEE Access, vol. 8, Institute of Electrical
and Electronics Engineers (IEEE), 2020, https://doi.org/10.1109/access.2020.2967676.
• Raza, Khalid, and Nripendra Kumar Singh. "A Tour of Unsupervised Deep Learning for Medical
Image Analysis." Current Medical Imaging Reviews, vol. 17, no. 9,
https://doi.org/10.2174/1573405617666210127154257.
• Farnoush, R., and B. Zarpak. "Image Segmentation Using Gaussian Mixture Models."
www.researchgate.net/profile/Behnam_Zarpak/publication/250791017_Image_Seg
mentation_using_Gaussian_Mixture_Models/links/5580f85708ae47061e5f3f48.pdf