Chapter 4. Development Process
4.1. REQUIREMENT ANALYSIS
A requirement is a feature of a system, or a description of something the system must be
capable of doing, in order to fulfil the system's purpose. Requirement analysis provides the
appropriate mechanism for understanding what the customer wants, analysing the needs,
assessing feasibility, negotiating a reasonable solution, specifying the solution unambiguously,
validating the specification, and managing the requirements as they are translated into an
operational system.
4.1.1. PYTHON:
Python is a dynamic, high-level, free, open-source, interpreted programming
language. It supports object-oriented programming as well as procedural
programming. In Python, we do not need to declare the type of a variable because it is a
dynamically typed language.
For example, in x = 10, the variable x can later hold a value of any type, such as a string or an integer.
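The following is a small sketch of this behaviour (the variable name and values are illustrative, not taken from the project code), showing that the same Python name can be re-bound to values of different types:

x = 10            # x currently holds an int
print(type(x))    # <class 'int'>
x = "hello"       # the same name can now hold a str
print(type(x))    # <class 'str'>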
Python is an interpreted, object-oriented programming language, similar to Perl, that has
gained popularity because of its clear syntax and readability. Python is considered relatively
easy to learn and portable, meaning its statements can be interpreted on a number of operating
systems, including UNIX-based systems, Mac OS, MS-DOS, OS/2, and various versions of
Microsoft Windows. Python was created by Guido van Rossum, a former resident of the
Netherlands, whose favourite comedy group at the time was Monty Python's Flying Circus.
The source code is freely available and open for modification and reuse, and Python has a
significant number of users.
Features in Python
There are many features in Python, some of which are listed below:
Easy to Code
Free and Open Source
Object-Oriented Language
GUI Programming Support
High-Level Language
Extensible Feature
Portable Language
Integrated Language
Interpreted Language
4.2. ANACONDA
The big difference between conda and the pip package manager is in how package
dependencies are managed, which is a significant challenge for Python data science and the
reason conda exists.
When pip installs a package, it automatically installs any dependent Python packages
without checking if these conflict with previously installed packages. It will install a package
and any of its dependencies regardless of the state of the existing installation. Because of this,
a user with a working installation of, for example, Google TensorFlow, can find that it stops
working after using pip to install a different package that requires a different version of the
dependent NumPy library than the one used by TensorFlow. In some cases, the package may
appear to work but produce subtly different results.
The default installation of Anaconda2 includes Python 2.7 and Anaconda3 includes
Python 3.7. However, it is possible to create new environments that include any version of
Python packaged with conda.
The following applications are available by default in Anaconda:
JupyterLab
Jupyter Notebook
QtConsole
Spyder
Glue
Orange
RStudio
Visual Studio Code
The Notebook interface was added to IPython in the 0.12 release [14] (December 2011) and
renamed to Jupyter Notebook in 2015 (IPython 4.0 / Jupyter 1.0). Jupyter Notebook is similar
to the notebook interfaces of other programs such as Maple, Mathematica, and SageMath, a
computational interface style that originated with Mathematica in the 1980s. According
to The Atlantic, interest in Jupyter overtook the popularity of the Mathematica notebook
interface in early 2018.
HARDWARE REQUIREMENTS:
[Figure: System architecture. Dataset Collection -> Pre-Processing -> Segmentation -> Classification -> training and testing with the Dense Layer of a CNN in Deep Learning -> Prediction of Brain Tumor]
4.4.1. USE CASE DIAGRAM
[Use case diagram: the admin actor interacts with the Image Processing, Pre-Processing, Segmentation, and Classification use cases]
Our proposed system uses the Dense Layer of a Convolutional Neural Network (CNN)
algorithm, a Deep Learning concept, to train the dataset.
In the Dense Layer, each layer obtains additional inputs from all preceding layers and
passes on its own feature-maps to all subsequent layers.
The Dense Layer therefore uses features of all complexity levels and tends to give smoother
decision boundaries.
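The dense connection pattern described above can be illustrated with a minimal sketch, assuming TensorFlow/Keras; the input shape (a 128x128 grayscale slice), the filter counts, the number of layers, and the two output classes (tumor / no tumor) are illustrative assumptions rather than the project's actual configuration.

from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(128, 128, 1))           # e.g. a grayscale MRI slice (assumed size)
x = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)

features = [x]
for _ in range(3):                                   # three densely connected conv layers
    concat = layers.Concatenate()(features) if len(features) > 1 else features[0]
    new_features = layers.Conv2D(16, 3, padding="same", activation="relu")(concat)
    features.append(new_features)                    # feature-maps are passed on to all later layers

x = layers.GlobalAveragePooling2D()(layers.Concatenate()(features))
outputs = layers.Dense(2, activation="softmax")(x)   # tumor / no-tumor prediction
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])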
4.5.1. ADVANTAGES
In general, learning algorithms benefit from standardization of the data set. If some
outliers are present in the set, robust scalers or transformers are more appropriate. The
behaviour of the different scalers, transformers, and normalizers on a dataset containing
marginal outliers is highlighted in the scikit-learn example "Compare the effect of different
scalers on data with outliers".
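As a small sketch of this point (assuming scikit-learn and NumPy are installed; the array values are made up for illustration), the difference between standard and robust scaling on data containing an outlier can be seen as follows:

import numpy as np
from sklearn.preprocessing import StandardScaler, RobustScaler

X = np.array([[1.0], [2.0], [3.0], [100.0]])      # 100.0 acts as an outlier

print(StandardScaler().fit_transform(X).ravel())  # mean and std are pulled by the outlier
print(RobustScaler().fit_transform(X).ravel())    # median/IQR-based scaling is less affected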
Normalization
Normalization is the process of scaling individual samples to have unit norm. This
process can be useful if you plan to use a quadratic form such as the dot-product or any other
kernel to quantify the similarity of any pair of samples.
This assumption is the base of the Vector Space Model often used in text
classification and clustering contexts.
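A minimal sketch of this process, assuming scikit-learn (the sample matrix is made up for illustration):

import numpy as np
from sklearn.preprocessing import normalize

X = np.array([[4.0, 3.0], [1.0, 2.0]])
X_normalized = normalize(X, norm="l2")   # each row (sample) is scaled to unit Euclidean norm
print(X_normalized)                      # dot products of rows now behave like cosine similarities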
Module 3: Segmentation
Image segmentation is the process of dividing an image into non-overlapping,
meaningful regions. The main objective of image segmentation is to divide an image into
many sections for further analysis, so that we can extract only the necessary segment of
information. We use various image segmentation algorithms to split and group a certain set of
pixels together from the image. By doing so, we are actually assigning labels to pixels, and
pixels with the same label fall under a category in which they have something in common.
Using these labels, we can specify boundaries, draw lines, and separate the most
important objects in an image from the rest of the less important ones. In the example below,
from the main image on the left, we try to extract the major components, e.g. chair, table,
etc., and hence all the chairs are coloured uniformly. In the next tab, we have detected
instances, which refer to individual objects, and hence all the chairs have different
colours.
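A simple sketch of one common segmentation approach (threshold-based), assuming OpenCV (cv2) is installed; the file name "brain_mri.png" is a placeholder rather than an actual project file:

import cv2

img = cv2.imread("brain_mri.png", cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(img, (5, 5), 0)                    # reduce noise before thresholding
_, mask = cv2.threshold(blurred, 0, 255,
                        cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # Otsu picks the threshold automatically
num_labels, labels = cv2.connectedComponents(mask)            # label each non-overlapping region
print("regions found:", num_labels - 1)                       # label 0 is the background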
Module 4: Classification
Image classification identifies and portrays, as a unique grey level (or colour), the
features occurring in an image in terms of the object or type of land cover these features
actually represent on the ground. Image classification is perhaps the most important part of
digital image analysis.
K-Nearest Neighbours
Neighbours-based classification is a type of lazy learning, as it does not attempt to
construct a general internal model but simply stores instances of the training data.
Classification is computed from a simple majority vote of the k nearest neighbours of each
point.
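A minimal sketch of this idea, assuming scikit-learn; the feature vectors and labels below are tiny made-up stand-ins for features extracted from the MRI images:

from sklearn.neighbors import KNeighborsClassifier

X_train = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]   # illustrative feature vectors
y_train = ["no_tumor", "no_tumor", "tumor", "tumor"]

knn = KNeighborsClassifier(n_neighbors=3)   # classify by majority vote of the 3 nearest samples
knn.fit(X_train, y_train)                   # lazy learner: it simply stores the training data
print(knn.predict([[0.85, 0.75]]))          # -> ['tumor']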