Report Machine Learning
Machine Learning is the science of getting computers to learn without being explicitly programmed. It is closely related to computational statistics, which focuses on making predictions using computers. In its application across business problems, machine learning is also referred to as predictive analysis. Machine Learning focuses on the development of computer programs that can access data and use it to learn for themselves. The process of learning begins with observations or data, such as examples, direct experience, or instruction, in order to look for patterns in data. The computers learn automatically, without human intervention or assistance, and adjust actions accordingly.
The name machine learning was coined in 1959 by Arthur Samuel. Tom M. Mitchell provided a widely quoted, more formal definition: a computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E. In this definition, the question "Can machines think?" is replaced with the question "Can machines do what we (as thinking entities) can do?"
The types of machine learning algorithms differ in their approach, the type of data they input and output, and the type of task or problem that they are intended to solve. Broadly, Machine Learning can be categorized into:
1. Supervised Learning
2. Unsupervised Learning
3. Reinforcement Learning
Each of these approaches requires time and resources to train properly.
Supervised Learning
Supervised Learning is a type of learning in which we are given a data set and we already know what the correct output should look like, having the idea that there is a relationship between the input and the output. Basically, it is the task of learning a function that maps an input to an output based on example input-output pairs.
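The idea of learning a mapping from example input-output pairs can be sketched in plain Python. This is a toy illustration with invented data, not an algorithm from the report: it "learns" a decision threshold halfway between the two class means.

```python
# Minimal illustration of supervised learning: from labeled example
# input-output pairs, learn a rule that maps new inputs to outputs.
# (Toy data and a toy nearest-class-mean rule, for illustration only.)

def fit_threshold(examples):
    """examples: list of (value, label) pairs with labels 0 and 1.
    Learns a decision threshold halfway between the class means."""
    zeros = [x for x, y in examples if y == 0]
    ones = [x for x, y in examples if y == 1]
    mean0 = sum(zeros) / len(zeros)
    mean1 = sum(ones) / len(ones)
    threshold = (mean0 + mean1) / 2

    # Predict the class whose mean lies on the same side as the input.
    def predict(x):
        return 1 if (x > threshold) == (mean1 > mean0) else 0
    return predict

# Labeled training set: small inputs are class 0, large inputs class 1.
train = [(1.0, 0), (1.5, 0), (2.0, 0), (8.0, 1), (9.0, 1), (10.0, 1)]
predict = fit_threshold(train)
print(predict(1.2), predict(9.5))  # -> 0 1
```

The learned `predict` function is the "mapping from input to output" that supervised learning produces from labeled examples.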
Unsupervised Learning
Unsupervised learning looks for relationships among the variables in a data set. With unsupervised learning there is no feedback based on the prediction results. Basically, it is a type of self-organized learning that helps in finding previously unknown patterns in a data set without preexisting labels.
Reinforcement Learning
Reinforcement learning is a learning method that interacts with its environment by producing actions and discovers errors or rewards. Trial-and-error search and delayed reward are the most relevant characteristics of reinforcement learning. This method allows machines and software agents to automatically determine the ideal behaviour within a specific context in order to maximize performance.
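The trial-and-error loop described above can be sketched with a toy multi-armed bandit agent. The actions and rewards below are invented, and rewards are fixed per action to keep the example deterministic:

```python
# Toy reinforcement-learning sketch: an agent tries actions, observes
# rewards, and learns action-value estimates by trial and error.
REWARDS = {0: 0.2, 1: 0.5, 2: 0.9}   # environment, hidden from the agent

q = {a: 0.0 for a in REWARDS}        # estimated value of each action
counts = {a: 0 for a in REWARDS}

for step in range(30):
    # Explore each action once, then exploit the best current estimate.
    untried = [a for a in REWARDS if counts[a] == 0]
    action = untried[0] if untried else max(q, key=q.get)
    reward = REWARDS[action]          # feedback from the environment
    counts[action] += 1
    # Incremental average update of the action-value estimate.
    q[action] += (reward - q[action]) / counts[action]

best = max(q, key=q.get)
print(best, round(q[best], 2))  # -> 2 0.9
```

After a few trials the agent's value estimates point to the most rewarding action, without ever being told the correct answer directly.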
Semi-Supervised Learning
Semi-supervised learning falls somewhere in between supervised and unsupervised learning, since it uses both labeled and unlabeled data for training, typically a small amount of labeled data and a large amount of unlabeled data. The systems that use this method are able to considerably improve learning accuracy. Usually, semi-supervised learning is chosen when the acquired labeled data requires skilled and relevant resources in order to train it / learn from it. Otherwise, acquiring unlabeled data generally doesn't require additional resources.
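One way to sketch this mixture of labeled and unlabeled data is scikit-learn's LabelPropagation, where unlabeled points are marked with -1 (the data values below are toy numbers chosen for illustration):

```python
# Semi-supervised sketch: a few labeled points plus many unlabeled
# points (marked -1); labels propagate to the unlabeled neighbours.
import numpy as np
from sklearn.semi_supervised import LabelPropagation

# Two well-separated 1-D clusters.
X = np.array([[0.0], [0.2], [0.4], [0.6], [5.0], [5.2], [5.4], [5.6]])
# Only one labeled example per cluster; -1 marks unlabeled data.
y = np.array([0, -1, -1, -1, 1, -1, -1, -1])

model = LabelPropagation().fit(X, y)
print(model.transduction_)  # labels inferred for every point
```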
Literature Survey
Theory
A core objective of a learner is to generalize from its experience. The computational analysis of machine learning algorithms and their performance is a branch of theoretical computer science; probabilistic bounds on performance are quite common. The bias-variance decomposition is one way to quantify generalization error.
For the best performance in the context of generalization, the complexity of the hypothesis should match the complexity of the function underlying the data. If the hypothesis is less complex than the function, then the model has underfit the data. If the complexity of the model is increased in response, then the training error decreases. But if the hypothesis is too complex, then the model is subject to overfitting and generalization will be poorer.
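The trade-off described above can be demonstrated with polynomial fits of increasing degree: the training error of a least-squares fit never increases as the hypothesis becomes more complex (the data below are synthetic, generated for illustration).

```python
# Sketch of the complexity/underfitting trade-off: raising polynomial
# degree never increases the training error of a least-squares fit.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.1, size=x.size)

def train_mse(degree):
    coeffs = np.polyfit(x, y, degree)          # least-squares polynomial fit
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

errors = [train_mse(d) for d in (1, 3, 9)]
print([round(e, 4) for e in errors])  # training error shrinks with degree
```

A degree-1 line underfits the sine curve badly; a degree-9 polynomial drives training error near zero but would generalize worse on fresh samples.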
In addition to performance bounds, learning theorists study the time complexity and feasibility of learning. There are two kinds of time complexity results: positive results show that a certain class of functions can be learned in polynomial time, while negative results show that certain classes cannot.
Many mainstream machine learning technologies are black-box approaches, making us concerned about their potential risks. To tackle this challenge, we may want to make machine learning more explainable and controllable. As another example, the computational complexity of machine learning algorithms is usually very high, and we may want to invent lightweight algorithms or implementations. Furthermore, in many domains such as physics, chemistry, biology, and social sciences, people usually seek elegantly simple models.
Machine learning takes much more time. You have to gather and prepare data, then train the algorithm. There are many more uncertainties. That is why, while in traditional website or application development an experienced team can estimate the time quite precisely, a machine learning project used, for example, to provide product recommendations can take much less or much more time than expected. Why? Because even the best machine learning engineers don't know how the deep learning networks will behave when analyzing different sets of data. It also means that the machine learning engineers and data scientists cannot guarantee how long the training process will take.
Applications of Machine Learning include:
Web Search Engine: One of the reasons why search engines like Google and Bing work so well is because the system has learnt how to rank pages through a complex learning algorithm.
Photo Tagging Applications: Be it Facebook or any other photo tagging application, the ability to tag friends makes it even more happening. It is all possible because of a face recognition algorithm that runs behind the application.
Spam Detector: Our mail agent like Gmail or Hotmail does a lot of hard work for us in classifying the mails and moving the spam mails to the spam folder. This is again achieved by a spam classifier running in the back end of the mail application.
Future Scope
Future of Machine Learning is as vast as the limits of the human mind. We can always keep learning, and teaching the computers how to learn. And at the same time, wondering how some of the most complex machine learning algorithms have been running in the back of our own minds so effortlessly all the time. There is a bright future for machine learning. Companies like Google, Quora, and Facebook hire people with machine learning skills. There is intense research in machine learning at the top universities in the world. The global machine-learning-as-a-service market is rising expeditiously, mainly due to the Internet revolution. The process of connecting the world virtually has generated a vast amount of data which is boosting the adoption of machine learning solutions. Considering all these applications and dramatic improvements that ML has brought us, it doesn't take a genius to realize that in the coming future we will definitely see more advanced applications of ML, applications that will stretch the capabilities of machine learning to an unimaginable level.
Organization of Training Workshop
Company Profile
DreamUny Education was created with a mission to create skilled software engineers for our country and the world. It aims to bridge the gap between the quality of skills demanded by industry and the quality of skills imparted by conventional institutes, with assessments, learning paths and courses authored by industry experts.
Objectives
Main objectives of the training were to learn:
Python Programming
ML Libraries: Scikit-learn, NumPy, Matplotlib, Pandas, Theano, TensorFlow
ML Algorithms
Machine Learning Programming and Use Cases.
Methodologies
There were several facilitation techniques used by the trainer which included question and answer,
brainstorming, group discussions, case study discussions and practical implementation of some of the topics
by trainees on flip charts and paper sheets. The multitude of training methodologies was utilized in order to
make sure all the participants grasped the concepts and practiced what they learned, because what is only heard from the trainers can be forgotten, but what the trainees do by themselves they will never forget. After the post-tests were administered and the final course evaluation forms were filled in by the participants, the trainer expressed his closing remarks and reiterated the importance of the training for the trainees in their daily activities and their readiness for applying the learnt concepts in their assigned tasks. Certificates of completion were distributed among the participants at the end.
2.3 MACHINE LEARNING:
Machine learning focuses on the development of algorithms that can receive input data and use statistical analysis to predict an output while updating outputs as new data becomes available.[3] Neural networks are used for more complex processing tasks than supervised learning, such as finding patterns and trends that might not be apparent to a human. More data gives a machine learning algorithm or program more "experience," which can, in turn, be used to make better decisions or predictions.
Predictions are made by looking at past weather patterns and events; this data is then used to determine what's most likely to occur in a particular scenario. The more data you have in your data set, the greater the accuracy of a given forecast. The same basic concept is also true for algorithms that are used to make decisions or recommendations.
Machine Learning allows for instantaneous adaptation, without the need for human intervention. An excellent example of this can be found in security and anti-virus software. These systems use data science to identify new threats and trends; then, the AI updates its defences, protecting against each threat. Data Science has eliminated the gap between the time when a new threat is identified and the time when a response is issued. This near-immediate response is critical in a niche where bots, viruses, worms, hackers and other cyber threats can impact thousands or even millions of people in minutes.
4. Automation
Automation is one of the key benefits of data science and artificial intelligence. The automated nature of Data Science means it can save time and money, as developers and analysts are freed up to perform high-level tasks that a computer cannot. On the flip side, you have a computer running the show, and that's something that is certain to make any developer squirm with discomfort. For now, technology is best kept under human review: in order to develop an algorithm, you might program the Data Science technology and then have its output verified by a human. This workaround adds a human gatekeeper to the equation, thereby eliminating the potential for problems that can arise when a computer is in charge of every decision in practice.
Various Python libraries used in the project
2.4 NumPy
NumPy is the fundamental package for scientific computing with Python. It contains among other things:
o a powerful N-dimensional array object
o sophisticated (broadcasting) functions
Besides its obvious scientific uses, NumPy can also be used as an efficient multi-dimensional container of generic data. NumPy is licensed under the BSD license, enabling reuse with few restrictions. The core functionality of NumPy is its "ndarray", for n-dimensional array, data structure. These arrays are strided views on memory. In contrast to Python's built-in list data structure (which, despite the name, is a dynamic array), these arrays are homogeneously typed: all elements of a single array must be of the same type. NumPy has built-in support for memory-mapped arrays.
6. linspace(start, stop, num, endpoint, ...) - Return evenly spaced numbers over a specified interval.

NumPy provides many more functions which are used to perform specified operations on the given input values.
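The features above (the homogeneously typed ndarray, `linspace`, and broadcasting) can be sketched in a few lines; all values are illustrative:

```python
# Short NumPy sketch: ndarray creation, linspace, and broadcasting.
import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6]])   # homogeneously typed 2-D ndarray
print(a.shape)                          # -> (2, 3)

# linspace(start, stop, num): evenly spaced numbers over an interval.
x = np.linspace(0.0, 1.0, 5)
print(x)                                # [0.   0.25 0.5  0.75 1.  ]

# Broadcasting: the 1-D row is stretched across both rows of `a`.
row = np.array([10, 20, 30])
print(a + row)
```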
2.5 Pandas
Pandas is an open-source, BSD-licensed Python library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language, including support for multidimensional data.
Prior to Pandas, Python was majorly used for data munging and preparation. It had very little contribution towards data analysis. Pandas solved this problem. Using Pandas, we can accomplish the typical steps in the processing and analysis of data, regardless of the origin of the data. Pandas can be used in Python scripts, the Python and IPython shells, the Jupyter notebook, and web application servers.
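A minimal sketch of this workflow, with a toy DataFrame invented for illustration: build the data, filter it, and aggregate it.

```python
# Pandas sketch: construct a DataFrame, filter rows, group and sum.
import pandas as pd

df = pd.DataFrame({
    "city": ["Delhi", "Mumbai", "Delhi", "Chennai"],
    "sales": [250, 300, 150, 200],
})

# Filtering: keep only the rows for one city.
delhi = df[df["city"] == "Delhi"]
print(delhi["sales"].sum())        # -> 400

# Grouping and aggregation, a typical analysis step.
totals = df.groupby("city")["sales"].sum()
print(totals.to_dict())
```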
2.6 Matplotlib
Matplotlib is a Python 2D plotting library. Matplotlib tries to make easy things easy and hard things possible. You can generate plots, histograms, bar charts, scatter plots, and more with just a few lines of code. For simple plotting the pyplot module provides a MATLAB-like interface, particularly when combined with IPython. For the power user, you have full control of line styles, font properties, axes properties, and more.
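A small pyplot sketch of the interface described above; the Agg backend is selected as an assumption so the plot renders without a display:

```python
# Minimal pyplot sketch: one styled line plot saved to a file.
import matplotlib
matplotlib.use("Agg")              # headless backend (assumed: no GUI)
import matplotlib.pyplot as plt

x = [0, 1, 2, 3, 4]
y = [v ** 2 for v in x]

fig, ax = plt.subplots()
line, = ax.plot(x, y, linestyle="--", marker="o", label="y = x^2")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.legend()
fig.savefig("squares.png")         # writes the figure to a PNG file
```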
2.7 Scikit-learn
Scikit-learn was initially developed by David Cournapeau. Its name stems from the notion that it is a "SciKit" (SciPy Toolkit), a separately-developed and distributed third-party extension to SciPy. Scikit-learn is largely written in Python, with some core algorithms written in Cython for performance; for instance, logistic regression and linear support vector machines are implemented via a wrapper around LIBLINEAR. [10]
It provides you with many options for each model, but also chooses sensible defaults.
Technology Implemented
Python - The New Generation Language
Python is a widely used general-purpose, high-level programming language. It was initially designed by Guido van Rossum in 1991 and developed by the Python Software Foundation. It was mainly developed for an emphasis on code readability, and its syntax allows programmers to express concepts in fewer lines of code. It supports multiple programming paradigms, including procedural, object-oriented, and functional programming. Python is often described as a "batteries included" language due to its comprehensive standard library.
Features
Interpreted
In Python there are no separate compilation and execution steps like C/C++. It directly runs the program from the source code. Internally, Python converts the source code into an intermediate form called bytecode, which is then translated into the native language of the specific computer to run it.
Platform Independent
Python programs can be developed and executed on multiple operating system platforms. Object-oriented programming and structured programming are fully supported, and many of its features support functional programming and aspect-oriented programming.
Simple
Python is a very simple language. It is very easy to learn, as it is closer to the English language. In Python, the large standard library provides support for many common tasks: documentation generation, unit testing, threading, databases, web browsers, CGI, email, XML, HTML, WAV files, cryptography, GUI and many more.
Free and Open Source
Firstly, Python is freely available. Secondly, it is open-source. This means that its source code is available to the public. We can download it, change it, use it, and distribute it. This is called FLOSS (Free/Libre and Open Source Software). As the Python community, we're all headed toward one goal: an ever-bettering Python.
Why Python Is a Perfect Language for Machine Learning?
1. A Great Choice of Libraries -
A great choice of libraries is one of the main reasons Python is the most popular programming language used for AI. A library is a module or a group of modules published by different sources which include a pre-written piece of code that allows users to reach some functionality or perform different actions. Python libraries provide base-level items so developers don't have to code them from the very beginning every time. ML requires continuous data processing, and Python's libraries let us access, handle and transform data. These are some of the most widespread libraries you can use:
o Pandas for high-level data structures and analysis. It allows merging and filtering of data, as well as gathering it from other external sources like Excel, for instance.
o Keras for deep learning. It allows fast calculations and prototyping, as it uses the GPU in addition to the CPU.
o TensorFlow for working with deep learning by setting up, training, and utilizing artificial neural networks with massive datasets.
o Matplotlib for creating 2D plots, histograms, charts, and other forms of visualization.
o NLTK for working with computational linguistics and natural language processing.
o Caffe for deep learning that allows switching between the CPU and the GPU and processing images at high speed.
2. A Low Entry Barrier -
Working in the ML and AI industry means dealing with a bunch of data that needs to be processed in the most convenient and effective way. The low entry barrier allows more data scientists to quickly pick up Python and start using it for AI development without wasting too much effort on learning the language. In addition to this, there's a lot of documentation available, and Python's community is always ready to help.
3. Flexibility -
Python for machine learning is a great choice, as this language is very flexible: programmers can combine Python and other languages to reach their goal.
4. Good Visualization Options -
For AI developers, it's important to highlight that in artificial intelligence, deep learning, and machine learning, it's vital to be able to represent data in a human-readable format. Libraries like Matplotlib allow data scientists to build charts, histograms, and plots for better data comprehension, effective presentation, and visualization. Different application programming interfaces also simplify the visualization process and make it easier to create clear reports.
5. CommunitySupport
It's always very helpful when there's strong community support built around a programming language. Python is an open-source language, which means that there's a bunch of resources open for programmers, starting from beginners and ending with pros. A lot of Python documentation is available online as well as in Python communities and forums.
6. Growing Popularity -
As a result of the advantages discussed above, Python is becoming more and more popular among data scientists.

Data Preprocessing
Before feeding data to an ML algorithm, we must preprocess it. We must apply some transformations on it. With data preprocessing, we convert raw data into a clean data set.
1. Rescaling Data-
For data with attributes of varying scales, we can rescale attributes to possess the same scale. We rescale attributes into the range 0 to 1 and call it normalization. We use the MinMaxScaler class from scikit-learn.
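A minimal sketch of this rescaling, with toy feature values invented for illustration:

```python
# Rescaling sketch: MinMaxScaler maps each feature into the range 0 to 1.
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X = np.array([[1.0], [3.0], [5.0]])
scaled = MinMaxScaler(feature_range=(0, 1)).fit_transform(X)
print(scaled.ravel())  # -> [0.  0.5 1. ]
```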
2. Standardizing Data -
With standardizing, we can take attributes with a Gaussian distribution and different means and standard
deviations and transform them into a standard Gaussian distribution with a mean of 0 and a standard
deviation of 1.
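The same idea sketched with scikit-learn's StandardScaler on toy values:

```python
# Standardizing sketch: shift each feature to mean 0 and unit variance.
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[2.0], [4.0], [6.0]])
standardized = StandardScaler().fit_transform(X)
print(standardized.mean(), standardized.std())  # -> approximately 0.0 1.0
```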
3. Normalizing Data -
In this task, we rescale each observation to a length of 1 (a unit norm). For this, we use the Normalizer class.
4. Binarizing Data -
Using a binary threshold, it is possible to transform our data by marking the values above it 1 and those below it 0. For this, we use the Binarizer class.
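Binarizing sketched with toy values and a threshold of 1.0:

```python
# Binarizing sketch: values above the threshold become 1, the rest 0.
import numpy as np
from sklearn.preprocessing import Binarizer

X = np.array([[0.2, 1.5, 0.9, 3.0]])
binary = Binarizer(threshold=1.0).fit_transform(X)
print(binary)  # -> [[0. 1. 0. 1.]]
```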
5. Mean Removal-
We can remove the mean from each feature to center it on zero.
6. One Hot Encoding -
When we need to encode categorical features as numbers, we can perform One Hot Encoding. For k distinct values, we can transform the feature into a k-dimensional vector with a single value of 1 and 0 as the rest of the values.
7. Label Encoding -
Some labels can be words or numbers. Usually, training data is labelled with words to make it readable. Label encoding converts word labels into numbers to let algorithms work on them.
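Label encoding sketched with toy word labels (note that scikit-learn's LabelEncoder assigns codes in alphabetical order of the classes):

```python
# Label-encoding sketch: word labels become integer codes.
from sklearn.preprocessing import LabelEncoder

labels = ["spam", "ham", "spam", "eggs"]
encoder = LabelEncoder()
codes = encoder.fit_transform(labels)
print(list(encoder.classes_), list(codes))  # classes sorted alphabetically
```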
Machine Learning Algorithms
There are many types of Machine Learning Algorithms specific to different use cases. As we work with datasets, a machine learning algorithm works in two stages. We usually split the data around 20%-80% between the testing and training stages. Under supervised learning, we split a dataset into training data and test data.
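The 20%-80% split mentioned above can be sketched with scikit-learn's `train_test_split` (toy dataset; `random_state` fixed for reproducibility):

```python
# Splitting sketch: 80% of samples for training, 20% held out for testing.
from sklearn.model_selection import train_test_split

X = list(range(10))            # 10 toy samples
y = [v % 2 for v in X]         # toy labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
print(len(X_train), len(X_test))  # -> 8 2
```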
1. Linear Regression -
Linear regression studies the relationship between input features and predicts an outcome. Depending on whether it runs on a single variable or on many features, we can call it simple linear regression or multiple linear regression.
This is one of the most popular Python ML algorithms and often under-appreciated. It assigns optimal weights to variables to create a line ax+b to predict the output. We often use linear regression to estimate real values like the number of calls and the costs of houses based on continuous variables. The regression line is the best line that fits Y = a*X + b to denote the relationship between the independent and dependent variables.
[Figure: scatter plot of data points with the fitted regression line Y = a*X + b.]
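Fitting the line Y = a*X + b can be sketched with scikit-learn; the toy data below are generated from y = 2x + 1, so the fit recovers a ≈ 2 and b ≈ 1:

```python
# Linear-regression sketch: recover the slope and intercept of a line.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = 2 * X.ravel() + 1          # exact line, no noise

model = LinearRegression().fit(X, y)
print(round(model.coef_[0], 2), round(model.intercept_, 2))  # -> 2.0 1.0
```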
2. Logistic Regression -
Logistic regression is a supervised classification algorithm. It is unique among Machine Learning algorithms in Python in that it finds its use in estimating discrete values like 0/1, yes/no, and true/false. This is based on a given set of independent variables, and it predicts the probability of an event by fitting data to a logit function, giving an output between 0 and 1. Although it says 'regression', this is actually a classification algorithm. Logistic regression fits data into a logit function and is also called logit regression.
[Figure: the sigmoid curve sig(t) = 1 / (1 + e^(-t)), which squashes any real input into the range 0 to 1.]
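A short sketch with scikit-learn's LogisticRegression on toy 1-D data: the model outputs probabilities between 0 and 1 which are thresholded into the discrete classes 0/1.

```python
# Logistic-regression sketch: probabilities in (0, 1), classes 0/1.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.5], [1.0], [1.5], [4.0], [4.5], [5.0]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = LogisticRegression().fit(X, y)
proba = clf.predict_proba([[0.0]])[0, 1]   # probability of class 1
print(clf.predict([[0.0], [6.0]]), round(proba, 3))
```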
3. Decision Tree -
A decision tree falls under supervised Machine Learning Algorithms in Python and comes of use for both categorical and continuous dependent variables. Here, we split a population into two or more homogeneous sets. At each node a feature is tested, and whether a sample follows the left child branch or the right depends on the result; usually, more important features are closer to the root. Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees.
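A classification tree sketched on invented fruit-like data, where the leaves hold class labels as described above:

```python
# Decision-tree sketch: learn splits from toy labeled data, then predict.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Features: [weight_g, smooth(1)/bumpy(0)] - invented for illustration.
X = np.array([[150, 1], [170, 1], [140, 0], [130, 0]])
y = np.array(["apple", "apple", "orange", "orange"])

tree = DecisionTreeClassifier(random_state=0).fit(X, y)
print(tree.predict([[160, 1], [135, 0]]))  # -> ['apple' 'orange']
```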
4. Support Vector Machine (SVM) -
SVM is a supervised classification algorithm that plots a line that divides different categories of your data. In this ML algorithm, we calculate the vector to optimize the line. This is to ensure that the closest point in each group lies farthest from the other group. While you will almost always find this to be a linear vector, it can be other than that. An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. In addition to performing linear classification, SVMs can efficiently perform a non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces.
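A linear SVM sketched on two invented, well-separated groups of points:

```python
# SVM sketch: a linear SVC finds the widest-margin boundary between
# two toy groups of points.
import numpy as np
from sklearn.svm import SVC

X = np.array([[0, 0], [1, 1], [1, 0], [8, 8], [9, 9], [8, 9]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([[0.5, 0.5], [8.5, 8.5]]))  # -> [0 1]
```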
When data is unlabelled, supervised learning is not possible, and an unsupervised learning approach is required, which attempts to find natural clustering of the data into groups, and then map new data to these formed groups.
5. Naive Bayes Algorithm -
Naive Bayes is a classification method which is based on Bayes' theorem. This assumes independence between predictors. A Naive Bayes classifier will assume that a feature in a class is unrelated to any other. Consider a fruit. This is an apple if it is round, red, and 2.5 inches in diameter. A Naive Bayes classifier will say these characteristics independently contribute to the probability of the fruit being an apple. This is even if features depend on each other. For very large data sets, it is easy to build a Naive Bayesian model. Not only is this model very simple, it performs better than many highly sophisticated classification methods. Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features/predictors) in a learning problem. Maximum-likelihood training can be done by evaluating a closed-form expression, which takes linear time, rather than by expensive iterative approximation.
P(c|x) = P(x|c) * P(c) / P(x)
where P(c|x) is the posterior probability of class c given predictor x, P(x|c) is the likelihood, P(c) is the class prior probability, and P(x) is the predictor prior probability.
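The fruit example above can be sketched with scikit-learn's GaussianNB, where each feature contributes independently to the class probability (the feature values are invented for illustration):

```python
# Naive Bayes sketch: GaussianNB on toy fruit-like features.
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Features: [diameter_in, roundness] - invented for illustration.
X = np.array([[2.5, 0.9], [2.6, 0.95], [4.0, 0.3], [4.2, 0.25]])
y = np.array(["apple", "apple", "banana", "banana"])

nb = GaussianNB().fit(X, y)
print(nb.predict([[2.55, 0.92]]))  # -> ['apple']
```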
6. K-Nearest Neighbours (kNN) -
This is a Python Machine Learning algorithm for classification and regression, mostly for classification. This is a supervised learning algorithm that uses a distance function, usually Euclidean, to compare points. It classifies each point to the group containing the points closest to it, and classifies new cases using a majority vote of k of its neighbors. The class it assigns to a case is the one most common among its k nearest neighbors. In k-NN classification, the function is only approximated locally and all computation is deferred until classification.
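The majority vote over the k nearest neighbours can be sketched on toy points with k = 3:

```python
# k-NN sketch: a new point takes the majority class of its 3 neighbours.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X = np.array([[0], [1], [2], [10], [11], [12]])
y = np.array([0, 0, 0, 1, 1, 1])

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(knn.predict([[1.5], [10.5]]))  # -> [0 1]
```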
7. K-Means Algorithm -
k-Means is an unsupervised algorithm that solves the problem of clustering. It classifies data using a number of clusters. The data points inside a class are homogeneous and heterogeneous to peer groups. k-means clustering is a method of vector quantization, originally from signal processing, that is popular for cluster analysis in data mining. k-means clustering aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean, serving as a prototype of the cluster. The problem is computationally difficult (NP-hard). k-means originates from signal processing, and still finds use in this domain. In cluster analysis, the k-means algorithm can be used to partition the input data set into k partitions (clusters). k-means clustering has also been used as a feature learning step, and it is often used as a preprocessing step for other algorithms, for example to find a starting configuration.
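Partitioning observations around cluster means can be sketched on toy 1-D data with k = 2 (`random_state` fixed for reproducibility):

```python
# k-means sketch: partition unlabeled points into k=2 clusters,
# each represented by its mean.
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[1.0], [1.2], [0.8], [9.0], [9.2], [8.8]])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

centers = sorted(km.cluster_centers_.ravel())
print(centers)  # -> approximately [1.0, 9.0]
```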
8. Random Forest -
A random forest is an ensemble of decision trees. In order to classify every new object based on its attributes, trees vote for a class: each tree provides a classification, and the classification with the most votes wins in the forest. Random forests or random decision forests are an ensemble learning method for classification, regression and other tasks that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes (classification) or mean prediction (regression) of the individual trees.
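The voting ensemble can be sketched with scikit-learn's RandomForestClassifier on toy data (`random_state` fixed for reproducibility):

```python
# Random-forest sketch: many decision trees vote; the majority wins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]])
y = np.array([0, 0, 0, 1, 1, 1])

forest = RandomForestClassifier(n_estimators=25, random_state=0).fit(X, y)
print(forest.predict([[0.5, 0.5], [5.5, 5.5]]))  # mode of the trees' votes
```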
[Figure: random forest - each tree classifies the dataset X, and majority voting over the trees' predictions produces the final class.]