Final Project File
ON
“ARTIFICIAL INTELLIGENCE”
A report submitted in partial fulfillment of the requirements for the award of the degree of
Bachelor of Engineering in Computer Science & Engineering.
Submitted by:
ROLL NO : 85/19
ACKNOWLEDGEMENT
Mahant Bachittar Singh
College of Engineering & Technology, Jammu
A constituent of Sant Manjit Singh Trust
(Approved by AICTE, Govt. of J&K and Affiliated to University of Jammu)
DECLARATION
HANSIKA KOUR
85/19
191303026
Babliana, Jeevan Nagar Road, P.O.Miran Sahib, Jammu, J&K (UT) India
#Ph- 0191-2262896 #Fax-0191-2262896, email – [email protected], www.mbscet.edu.in
CONTENTS
Page No
Certificate i
Acknowledgement ii
Declaration iii
List of Figures vi
1.5.11 AI IN AGRICULTURE 8
1.5.12 AI IN E-COMMERCE 8
1.5.13 AI IN EDUCATION 9
2.1.2 JAVA 10
2.1.3 PROLOG 10
2.1.4 LISP 11
2.1.5 R 11
2.1.6 JULIA 11
2.1.7 C++ 12
3.5 ROBOTICS 23
4.1.1 NAIVE BAYES 28
4.1.2 DECISION TREE 28
4.1.3 RANDOM FOREST 29
4.1.4 SUPPORT VECTOR MACHINES 29
4.1.5 K NEAREST NEIGHBORS 29
4.2 REGRESSION ALGORITHMS 30
4.2.1 LINEAR REGRESSION 30
4.2.2 LOGISTIC REGRESSION 31
4.3 CLUSTERING ALGORITHMS 32
4.3.1 K-MEANS CLUSTERING 32
CHAPTER 5 FUTURE, CONCERNS AND SCOPE OF AI 32
5.1 FUTURE OF ARTIFICIAL INTELLIGENCE 33
5.2 FUTURE IMPACT OF AI IN DIFFERENT SECTORS 33
5.2.1 HEALTHCARE 33
5.2.2 CYBER SECURITY 33
5.2.3 TRANSPORTATION 33
5.2.4 E-COMMERCE 34
5.2.5 EMPLOYMENT 34
5.3 MYTHS/CONCERNS ABOUT ADVANCED ARTIFICIAL INTELLIGENCE 34
6.1 INTRODUCTION 38
6.2 ROADMAP OF THE PROJECT 38
6.3 PYTHON LIBRARIES USED IN THE PROJECT 39
6.4 MACHINE LEARNING ALGORITHMS USED IN MODEL 39
6.6 STEPS INVOLVED IN THE PROJECT 43
6.7 COMPLETE CODE FOR THE PROJECT 46
6.8 OUTPUT SCREENSHOTS 47
6.9 PROJECT CONCLUSION 48
Conclusion
Reference
List of Figures
Fig 4.8 Logistic Regression 30
Fig 4.9 K Means Clustering 31
Fig 5.1 Future of AI 32
Fig 5.2 AI Careers 36
Fig 6.1 Output Screenshot 1 43
Fig 6.2 Output Screenshot 2 43
Fig 6.3 Output Screenshot 3 44
Fig 6.4 Output Screenshot 4 44
Fig 6.5 Output Screenshot 5 45
Fig 6.6 Output Screenshot 6 45
CHAPTER 1
INTRODUCTION TO ARTIFICIAL INTELLIGENCE
According to the father of Artificial Intelligence, John McCarthy, it is “The science and
engineering of making intelligent machines, especially intelligent computer programs”. He
said, ‘Every aspect of learning or any other feature of intelligence can in principle be so
precisely described that a machine can be made to simulate it. An attempt will be made to
find how to make machines use language, form abstractions, and concepts, solve kinds of
problems now reserved for humans, and improve themselves.’
Some popular accounts use the term "artificial intelligence" to describe machines that mimic
"cognitive" functions that humans associate with the human mind, such as "learning" and
"problem solving", however, this definition is rejected by major AI researchers. Artificial
intelligence, in its simplest sense, refers to the ability of a computer to perform tasks that are
similar to that of human learning and decision making. The term can also refer to the study,
science, and engineering of such intelligent machines, systems, and programs. AI is accomplished by studying how the human brain thinks and how humans learn, decide, and work while trying to solve a problem, and then using the outcomes of this study as a basis for developing intelligent software and systems.
An AI technique is a way to organize and use knowledge efficiently, such that:
o It should be perceivable by the people who provide it.
o It should be easily modifiable to correct errors.
o It should be useful in many situations even though it may be incomplete or inaccurate.
The idea of a ‘machine that thinks’ dates back as far as the Mayan civilization. In the modern era, there have been some important events since the advent of electronic computers that have played a crucial role in the evolution of AI:
Since then, robots are constantly being developed and trained to perform complex
tasks in various industries.
A boom in AI (1980–1987): The first AI winter (1974–1980) was over, and
governments started seeing the potential of how useful AI systems could be for the
economy and defense forces. Expert systems and software were programmed to
simulate the decision-making ability of the human brain in machines. AI algorithms like backpropagation, which use neural networks to understand a problem and find the best possible solution, were used.
The AI Winter (1987–1993): By the end of the year 1988, IBM successfully
translated a set of bilingual sentences from English to French. More advancements
were going on in the field of AI and Machine Learning, and by 1989, Yann LeCun
successfully applied the backpropagation algorithm to recognize handwritten ZIP
codes. It took three days for the system to produce the results, but this was still considered fast given the hardware limitations at that time.
The emergence of intelligent agents (1993–2011): In the year 1997, IBM developed
a chess-playing computer named ‘Deep Blue’ that outperformed the world chess
champion, Garry Kasparov, in a chess match, twice. In 2002, artificial intelligence entered the home for the first time in the form of the robotic vacuum cleaner ’Roomba.’ By the year 2006, MNCs such as Facebook, Google, and Microsoft started
using AI algorithms and Data Analytics to understand customer behavior and improve
their recommendation systems.
Deep Learning, Big Data, and Artificial General Intelligence (2011–
Present): With computing systems becoming more and more powerful, it is now
possible to process large amounts of data and train our machines to make better
decisions. Supercomputers take advantage of AI algorithms and neural networks to solve some of the most complex problems of the modern world. Recently, Neuralink, a company owned by Elon Musk, successfully demonstrated a brain-machine interface in which a monkey played a Pong video game using only its mind.
Artificial Intelligence is built on the foundations of several disciplines:
o Mathematics
o Biology
o Psychology
o Sociology
o Computer Science
o Neurons Study
o Statistics
Fig:1.1: Foundation of AI
Fig:1.2: Types of AI
Artificial Intelligence has various applications in today's society. It is becoming essential in today's world because it can solve complex problems in an efficient way across multiple industries, such as healthcare, entertainment, finance, education, etc. AI is making our daily life more comfortable and faster.
Following are some sectors which have the application of Artificial Intelligence:
Fig:1.3: Applications of AI
1.5.1 AI in Astronomy
Artificial Intelligence can be very useful for solving complex problems about the universe. AI technology can help us understand the universe, such as how it works, its origin, etc.
1.5.2 AI in Healthcare
In the last five to ten years, AI has become more advantageous for the healthcare industry and is going to have a significant impact on it. Healthcare industries are applying AI to make better and faster diagnoses than humans. AI can help doctors with diagnoses and can indicate when patients are worsening so that medical help can reach the patient before hospitalization.
1.5.3 AI in Gaming
AI can be used for gaming purposes. AI machines can play strategic games like chess, where the machine needs to think of a large number of possible moves.
1.5.4 AI in Finance
AI and finance industries are the best matches for each other. The finance industry is
implementing automation, chatbots, adaptive intelligence, algorithmic trading, and machine learning into financial processes.
1.5.9 AI in Robotics:
Artificial Intelligence has a remarkable role in robotics. Usually, general robots are programmed to perform some repetitive tasks, but with the help of AI, we can create intelligent robots that can perform tasks from their own experience without being pre-programmed.
Humanoid robots are the best examples of AI in robotics; recently, the intelligent humanoid robots Erica and Sophia have been developed, which can talk and behave like humans.
1.5.10 AI in Entertainment
We are currently using some AI based applications in our daily life with some entertainment
services such as Netflix or Amazon. With the help of ML/AI algorithms, these services show
the recommendations for programs or shows.
1.5.11 AI in Agriculture
Agriculture is an area that requires various resources, labor, money, and time for the best result. Nowadays, agriculture is becoming digital, and AI is emerging in this field. Agriculture is applying AI for agricultural robotics, soil and crop monitoring, and predictive analysis. AI in agriculture can be very helpful for farmers.
1.5.12 AI in E-commerce
AI is providing a competitive edge to the e-commerce industry, and it is becoming more
demanding in the e-commerce business. AI is helping shoppers to discover associated
products with recommended size, color, or even brand.
1.5.13 AI in Education
AI can automate grading so that the tutor can have more time to teach. An AI chatbot can communicate with students as a teaching assistant. In the future, AI can work as a personal virtual tutor for students, easily accessible at any time and any place.
CHAPTER 2
TOOLS FOR ARTIFICIAL INTELLIGENCE
Following are some of the most popular AI programming languages for building
Advanced AI solutions:
o Python
o R
o Lisp
o Java
o C++
o Julia
o Prolog
2.1.1 Python
Python is one of the most powerful and easiest programming languages that anyone can start to learn. Python was initially released in 1991. Most developers and programmers choose Python as their favorite programming language for developing Artificial Intelligence solutions. Python is popular worldwide among developers and experts because it offers more career opportunities than any other programming language.
Python also comes with a default set of standard libraries and provides good community support to its users. Further, Python is a platform-independent language and also
provides an extensive framework for Deep Learning, Machine Learning, and Artificial
Intelligence. Python is also a portable language as it is used on various platforms such
as Linux, Windows, Mac OS, and UNIX.
2.1.2 Java
Java is also one of the most widely used programming languages among developers and programmers for developing machine learning solutions and for enterprise development. Similar to Python, Java is a platform-independent language, as it can easily be implemented on various platforms.
2.1.3 Prolog
Prolog is one of the oldest programming languages used for Artificial Intelligence solutions.
Prolog stands for "Programming in Logic", which was developed by French scientist Alain
Colmerauer in 1970.
For AI programming in Prolog, developers need to define the rules, the facts, and the end goal. After defining these three, Prolog tries to discover the connections between them. Programming in AI using Prolog is different and has several advantages and disadvantages.
2.1.4 Lisp
Lisp has been around for a very long time and has been widely used for scientific research in the fields of natural language processing, theorem proving, and solving artificial intelligence problems.
Lisp was originally created as a practical mathematical notation for programs but eventually
became a top choice of developers in the field of AI. Although Lisp programming language is
the second oldest language after Fortran, it is still being used because of its crucial features.
The inventor of LISP programming was John McCarthy, who coined the term Artificial
Intelligence.
LISP is one of the most efficient programming languages for solving specific problems.
Currently, it is mainly used for machine learning and inductive logic problems. It has also
influenced the creation of other programming languages for AI; some notable examples are R and Julia.
However, despite being so flexible, it has various deficiencies, such as a lack of well-known libraries and a not-so-human-friendly syntax. For these reasons, many programmers do not prefer it.
2.1.5 R
R is one of the great languages for statistical processing in programming. R is a free, open-source programming language intended for data analysis. It may not be the perfect language for AI, but it provides great performance while dealing with large numbers. Some inbuilt features, such as built-in functional programming, object-oriented nature, and vectorized computation, make it a worthwhile programming language for AI.
2.1.6 Julia
Julia is one of the newer languages on the list and was created with a focus on performance computing in scientific and technical fields. It is a comparatively new language, mainly suited for numerical analysis and computational science, and it includes several features that can be very helpful in AI programming.
2.1.7 C++
C++ has been around for a long time, but it is still a top and popular programming language among developers. It provides better handling of AI models during development.
Although C++ may not be the first choice of developers for AI programming, most of the libraries and packages available for machine learning and deep learning are written in C++. It is one of the fastest languages, it can be used for statistical techniques, and it can be paired with ML algorithms for fast execution. It is a user-friendly and simple language.
Python is one of the most popular and beginner-friendly programming languages around the globe. Python can be used for AI for the following reasons:
Python has prebuilt libraries like NumPy for scientific computation, SciPy for advanced computing, and PyBrain for machine learning, making it one of the best languages for AI.
Python developers around the world provide comprehensive support and assistance via forums and tutorials, making the coder's job easier than in any other popular language.
Python is platform independent and is hence one of the most flexible and popular choices for use across different platforms and technologies with the least tweaks in basic coding.
Python is the most flexible of them all, with options to choose between an OOP approach and scripting. You can also use the IDE itself to check most code, which is a boon for developers struggling with different algorithms.
If you don’t already have a copy of Python installed on your computer, you will need to open
up your Internet browser and go to the Python download page
(http://www.python.org/download/).
Double-click the downloaded executable file, and the following window will open. Select Customize installation and proceed. Click on the 'Add Python to PATH' check box; it will set the Python path automatically.
Run the Python installer once downloaded (in this example, we have downloaded Python 3.9.4). Make sure you select the 'Install launcher for all users' and 'Add Python to PATH' checkboxes. Then select 'Install Now', the recommended installation option.
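Once installation completes, you can quickly confirm that the interpreter is available. The following is a minimal check (the exact version string printed will depend on the release you installed):
# Run this from a Python prompt, or save it as check.py and execute it
import sys
print(sys.version)        # e.g. "3.9.4 (tags/v3.9.4, ...)" for the release above
print(sys.executable)     # full path of the Python interpreter being used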
A Python library is a collection of related modules. It contains bundles of code that can be
used repeatedly in different programs. It makes Python programming simpler and more convenient for the programmer. Python libraries play a very vital role in fields such as Machine Learning, Data Science, Data Visualization, etc.
Some of the most common Python libraries used in AI are given below:
TensorFlow: TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries, and community resources that lets researchers push the state of the art in ML and lets developers easily build and deploy ML-powered applications. Currently, the most famous deep learning library in the world is Google's TensorFlow. Google uses machine learning in all of its products to improve its search engine, translation, image captioning, and recommendations.
Keras: Keras is an API designed for human beings, not machines. Keras follows best practices for reducing cognitive load: it offers consistent and simple APIs, it minimizes the number of user actions required for common use cases, and it provides clear and actionable error messages. It also has extensive documentation and developer guides.
Scikit-learn: Scikit-learn (sklearn) is the most useful and robust library for machine learning in Python. It provides a selection of efficient tools for machine learning and statistical modeling, including classification, regression, clustering, and dimensionality reduction, via a consistent interface in Python. This library, which is largely written in Python, is built upon NumPy, SciPy, and Matplotlib.
NumPy: NumPy is a general-purpose array-processing package. It provides a high-performance multidimensional array object and tools for working with these arrays. It is the fundamental package for scientific computing with Python. Besides its obvious scientific uses, NumPy can also be used as an efficient multi-dimensional container of generic data.
Pandas: Pandas is an open-source library made mainly for working with relational or labelled data both easily and intuitively. It provides various data structures and operations for manipulating numerical data and time series. This library is built on top of the NumPy library. Pandas is fast and offers high performance and productivity for users.
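As a small illustration of how these libraries work together, the following sketch (not part of the original report; the data here is made up) builds a tiny labelled dataset with NumPy and Pandas and fits a scikit-learn model to it:
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# A tiny made-up dataset: y is exactly 2*x + 1
data = pd.DataFrame({'x': np.arange(10), 'y': 2 * np.arange(10) + 1})

# Fit a linear model with scikit-learn and inspect the learned parameters
model = LinearRegression().fit(data[['x']], data['y'])
print(model.coef_, model.intercept_)   # approximately [2.] and 1.0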
CHAPTER 3
SUBSETS OF ARTIFICIAL INTELLIGENCE
Artificial Intelligence works with large amounts of data that are first combined with the fast,
iterative processing and smart algorithms that allow the system to learn from the patterns
within the data. This way, the system would be able to deliver accurate or close to accurate
outputs. AI is a vast subject, and its scope is very wide; it involves many advanced and complex processes, and it is a field of study that includes many theories, methods, and technologies. The major subfields under AI are:
o Machine Learning
o Deep Learning
o Expert System
o Robotics
o Machine Vision
o Speech Recognition
Fig:3.1: Subset of AI
A Machine Learning system learns from historical data, builds the prediction models, and
whenever it receives new data, predicts the output for it. The accuracy of predicted output
depends upon the amount of data, as a larger amount of data helps to build a better model which predicts the output more accurately. In machine learning, the dataset is divided into two parts:
Training dataset
Test dataset
While training the model, the data is usually split in the ratio of 80:20, i.e. 80% as training data and the rest as testing data. For the training data, we feed both the input and the output for 80% of the data. The model learns from the training data only. We use different machine learning algorithms to build our model. Once the model is ready, it can be tested. At the time of testing, the input is fed from the remaining 20% of the data, which the model has never seen before; the model will predict some value, and we will compare it with the actual output to calculate the accuracy.
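As an example of the 80:20 split described above, scikit-learn's train_test_split can divide a dataset randomly. The following is a small sketch using the Iris data bundled with scikit-learn (illustrative only):
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# 80% of the samples are used for training, the remaining 20% are held back for testing
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

# Train on the 80% split, then measure accuracy on the unseen 20%
model = KNeighborsClassifier().fit(X_train, y_train)
print(accuracy_score(y_test, model.predict(X_test)))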
o Supervised Learning:
Supervised learning is when the model is getting trained on a labelled dataset, and on that basis, it
predicts the output. A labelled dataset is one that has both input and output parameters. The system
creates a model using labeled data to understand the datasets and learn about each data, once the
training and processing are done then we test the model by providing a sample data to check whether
it is predicting the exact output or not. The goal of supervised learning is to map input data to the output data. Supervised learning is based on supervision, and it is the same as when a student learns things under the supervision of a teacher (a small example contrasting supervised and unsupervised learning appears after this list).
Mainly, supervised learning problems can be divided into the following two kinds of problems:
Classification − Classification models are the type of supervised learning technique used to draw conclusions from observed values in categorical form, i.e., the output has defined labels (discrete values) such as "male" or "female", etc.
Regression − A problem is called a regression problem when the output is a real value, such as "distance", "kilograms", etc.
o Reinforcement learning:
Reinforcement learning is a type of learning in which an AI agent is trained by giving it some commands, and for each action, the agent gets a reward as feedback. Using this feedback, the agent improves its performance. Reward feedback can be positive or negative, which means that for each good action the agent receives a positive reward, while for a wrong action it gets a negative reward.
Reinforcement learning is of two types:
1. Positive Reinforcement learning
2. Negative Reinforcement learning
o Unsupervised learning:
Unsupervised learning is a type of machine learning in which models are trained using an unlabeled dataset and are allowed to act on that data without any supervision. The goal of unsupervised learning is to find the underlying structure of the dataset, group the data according to similarities, and represent the dataset in a compressed format. In unsupervised learning, the agent needs to learn from patterns without corresponding output values.
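A small sketch contrasting the two settings (illustrative only, using scikit-learn's bundled Iris data): a supervised classifier is given the labels during training, while an unsupervised clustering algorithm has to group the same samples without them.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised: the labels y are used during training
clf = LogisticRegression(max_iter=200).fit(X, y)
print(clf.predict(X[:5]))            # predicted class labels for the first 5 samples

# Unsupervised: only the inputs X are given; the algorithm discovers 3 groups on its own
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(km.labels_[:5])                # cluster index assigned to the first 5 samples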
Natural Language Processing (NLP) is required when you want an intelligent system like a robot to perform as per your instructions, when you want to hear a decision from a dialogue-based clinical expert system, and so on. The field of NLP involves making computers perform useful tasks with the natural languages humans use. The input and output of an NLP system can be speech or written text.
o Deep learning is implemented through a neural network architecture and is hence also called a deep neural network.
o Deep learning is the primary technology behind self-driving cars, speech recognition,
image recognition, automatic machine translation, etc.
o The main challenge for deep learning is that it requires lots of data with lots of
computational power.
One example of an expert system is the spelling suggestion shown while typing in the Google search box. Following are some characteristics of expert systems:
o High performance
o Reliable
o Highly responsive
o Understandable
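As a toy illustration of the "facts plus rules" idea behind expert systems (a made-up sketch, not how any production expert system is implemented):
# Facts describe the current situation; rules map conditions on the facts to advice
facts = {'fever': True, 'cough': True, 'rash': False}

rules = [
    (lambda f: f['fever'] and f['cough'], 'Possible flu: recommend rest and a doctor visit'),
    (lambda f: f['rash'],                 'Possible allergy: recommend an antihistamine'),
]

# A very simple inference step: fire every rule whose condition holds
for condition, advice in rules:
    if condition(facts):
        print(advice)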
3.5 ROBOTICS
Robotics is a branch of artificial intelligence and engineering concerned with the design and manufacture of robots. Robots are programmed machines that can perform a series of actions automatically or semi-automatically.
AI can be applied to robots to create intelligent robots that can perform tasks using their own intelligence. AI algorithms are necessary to allow a robot to perform more complex tasks. Nowadays, AI and machine learning are being applied to robots to manufacture intelligent robots that can also interact socially like humans. One of the best examples of AI in robotics is the Sophia robot.
Fig:3.10 : Robotics
Computer vision is a field of artificial intelligence (AI) that enables computers and systems to
derive meaningful information from digital images, videos and other visual inputs — and
take actions or make recommendations based on that information. If AI enables computers to
think, computer vision enables them to see, observe and understand.
Computer vision works much the same as human vision, except humans have a head start.
Human sight has the advantage of lifetimes of context to train how to tell objects apart, how
far away they are, whether they are moving and whether there is something wrong in an
image. Here are a few examples of established computer vision tasks:
Object detection can use image classification to identify a certain class of object and then detect and tabulate its appearances in an image or video. Examples include detecting damage on an assembly line or identifying machinery that requires maintenance.
There is some speech recognition software which has a limited vocabulary of words and phrases. This software requires unambiguous spoken language to understand and perform a specific task. Today, there are various software products and devices which contain speech recognition technology, such as Cortana, Google Assistant, Apple Siri, etc.
We need to train our speech recognition system to understand our language. Earlier, these systems were designed only to convert speech to text, but now there are various devices which can directly convert speech into commands. Speech recognition systems can be used in the following areas:
o Industrial applications
Based on the speaker, speech recognition systems are of two types:
1. Speaker Dependent
2. Speaker Independent
Neural networks are created from at least three layers of neurons: the input layer, the hidden layer(s), and the output layer. The hidden layer, or layers, in between consist of many neurons, with connections between the layers. As the neural network "learns" the data, the weights, or strengths, of the connections between these neurons are "fine-tuned," allowing the network to come up with accurate predictions.
When a neural network has many layers, it’s called a deep neural network, and the process of
training and using deep neural networks is called deep learning.
1) Neuron: Just like a neuron forms the basic element of our brain, a neuron forms the basic
structure of a neural network. In case of a neural network, a neuron receives an input,
processes it and generates an output which is either sent to other neurons for further
processing or it is the final output.
2) Weights – When an input enters the neuron, it is multiplied by a weight. For example, if a neuron has two inputs, then each input will have an associated weight assigned to it. Deep neural networks generally refer to particularly complex neural networks. These have more layers (as many as 1,000) and, typically, more neurons per layer. With more layers and more neurons, networks can handle increasingly complex tasks, but that means they take longer to train.
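As a toy numerical illustration of a single neuron combining weighted inputs (the numbers below are made up):
import numpy as np

inputs = np.array([0.5, 0.8])       # two inputs to the neuron
weights = np.array([0.4, -0.6])     # one weight per input
bias = 0.1

# Weighted sum of the inputs plus the bias, then a sigmoid activation
z = np.dot(inputs, weights) + bias
output = 1.0 / (1.0 + np.exp(-z))
print(output)                        # the neuron's output, a value between 0 and 1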
CHAPTER 4
ALGORITHMS FOR ARTIFICIAL INTELLIGENCE
Artificial Intelligence has grown to have a significant impact on the world. With large amounts of data being generated by different applications and sources, machine learning systems can learn from this data and perform intelligent tasks.
Artificial Intelligence is the field of computer science that deals with imparting decision-making and thinking abilities to machines. Artificial Intelligence is thus a blend of computer science, data analytics, and pure mathematics.
All of these algorithms perform the same task of predicting outputs for previously unseen inputs; however, data is the key driver when it comes to picking the right algorithm. Different Artificial Intelligence algorithms are suited to different categories of problems.
In a Decision tree, there are two nodes, which are the Decision Node and Leaf Node.
Decision nodes are used to make any decision and have multiple branches, whereas Leaf
nodes are the output of those decisions and do not contain any further branches. The
decisions or tests are performed on the basis of the features of the given dataset. A decision tree is a graphical representation of all the possible solutions to a problem or decision based on the given conditions.
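A minimal sketch of training a decision tree with scikit-learn on the bundled Iris data and printing its decision and leaf nodes as nested rules (illustrative only, not the project's final code):
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)

# Each "if" line is a decision node testing one feature; each "class:" line is a leaf node
print(export_text(tree, feature_names=iris.feature_names))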
Fig:4.5: SVM
Fig:4.6: KNN
CHAPTER 5
FUTURE, CONCERNS AND SCOPE OF AI
Being such a revolutionary technology, AI is also surrounded by many controversies about its future and its impact on human beings. It may be dangerous, but it is also a great opportunity. AI will be deployed to enhance both defensive and offensive cyber operations. Additionally, new means
of cyber-attack will be invented to take advantage of particular vulnerabilities of AI
technology.
Fig:5.1: Future of AI
5.2.1 Healthcare:
AI will play a vital role in the healthcare sector in diagnosing diseases more quickly and more accurately. New drug discovery will be faster and more cost-effective with the help of AI. It will also enhance patient engagement in their care and make appointment scheduling and bill paying easier, with fewer errors. However, apart from these beneficial uses, one great challenge of AI in healthcare is ensuring its adoption in daily clinical practice.
5.2.3 Transportation:
Fully autonomous vehicles have not yet been developed in the transportation sector, but researchers are making progress in this field. AI and machine learning are being applied in the
cockpit to help reduce workload, handle pilot stress and fatigue, and improve on-time
performance. There are several challenges to the adoption of AI in transportation, especially
in areas of public transportation. There's a great risk of over-dependence on automatic and
autonomous systems.
5.2.4 E-commerce:
Artificial Intelligence will play a vital role in the e-commerce sector shortly. It will positively
impact each aspect of the e-commerce sector, ranging from user experience to marketing and
distribution of products. We can expect e-commerce with automated warehouses and inventory management, shopper personalization, and the use of chatbots in the future.
5.2.5 Employment:
Nowadays, finding employment has become easier for job seekers and simpler for employers due to the use of Artificial Intelligence. AI is already used in the job search market, with strict rules and algorithms that automatically reject a candidate's resume if it does not fulfil the requirements of the company. It is hoped that, in the future, most of the employment process will be driven by AI-enabled applications, ranging from marking written interviews to conducting telephonic rounds. Apart from the above sectors, AI has a great future in manufacturing, finance and banking, entertainment, etc.
With the development of AI, a revolution has come in industries of every sector, and people
fear losing jobs with the increased development of AI. But in reality, AI has come up with
more jobs and opportunities for people in every sector. Every machine needs a human being to operate it. Although AI has taken over some roles, it ends up producing more jobs for people.
As discussed above, AI can be divided into three types: Weak AI, which can perform specific tasks such as weather prediction; General AI, which is capable of performing tasks as a human can; and Super AI, which is capable of performing any task better than a human.
At present, we are using weak AI, which performs a particular task and improves its own performance. On the other hand, General AI and Super AI have not yet been developed, and research on them is ongoing. They would be capable of doing different tasks with intelligence similar to that of humans. However, the development of such AI is far away, and it may take years or centuries to create such AI applications. Moreover, whether the efficiency of such AI will be better than that of humans is not predictable at the current stage.
People also have a misconception that AI does not need any human intervention. But the fact is that AI is not yet developed enough to make its own decisions. A machine learning engineer or specialist is required to pre-process the data, prepare the models, prepare a training dataset, identify the bias and variance and eliminate them, etc. Every AI model is still dependent on humans. However, once the model is prepared, it improves its performance on its own from experience.
Most researchers agree that a super AI would not show human emotions such as love, hate, or kindness. Moreover, we should not expect an AI to become intentionally generous or spiteful. Further, if we talk about AI being risky, there are mainly two scenarios, which are:
Autonomous weapons are artificial intelligence systems that are programmed to kill. In the
hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an
AI arms race could inadvertently lead to an AI war resulting in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to "turn off," so humans could plausibly lose control of such a situation. This risk is present
even with narrow AI but grows as levels of AI intelligence and autonomy increase.
5.5 AI CAREERS
Freshers should analyze their competencies and skills and choose a better AI role with the
potential for upward mobility. The future scope of Artificial Intelligence continues to grow
due to new job roles and advancements in the AI field. The various roles in an AI career are
as follows:
Fig:5.2: AI Careers
o AI researcher
o AI Algorithms Expert
o Robotics specialist
o Surgical AI technician
CHAPTER 6
PROJECT
Machine learning is about learning to predict something or extracting knowledge from data. ML is a part of artificial intelligence. ML algorithms build a model based on sample data, known as training data, and based upon this training data the algorithm can make predictions on new data.
In this project, I have experimented with a real-world dataset to explore how machine learning algorithms can be used to find patterns in data.
The dataset used was that of an Iris flower. Iris flower classification is a very popular
machine learning project. The iris dataset contains three classes of flowers, Versicolor, Setosa, and Virginica, and each sample has 4 features: ‘Sepal length’, ‘Sepal width’, ‘Petal length’, and ‘Petal width’. The aim of iris flower classification is to predict the species of a flower based on these features.
6.1 INTRODUCTION
The dataset for this project originates from the UCI Machine Learning Repository. The Iris
flower data set or Fisher's Iris data set is a multivariate data set introduced by the British
statistician and biologist Ronald Fisher in his 1936 paper The use of multiple measurements
in taxonomic problems as an example of linear discriminant analysis.
The data set consists of 50 samples from each of three species of Iris (Iris setosa, Iris
virginica and Iris versicolor).
Four features were measured from each sample (in centimeters):
o Length of the sepals
o Width of the sepals
o Length of the petals
o Width of the petals
1. Define Problem.
2. Prepare Data.
3. Evaluate Algorithms.
4. Improve Results.
5. Present Results.
The program first loads the dataset from a CSV file, then visualizes and summarizes the data. The dataset is then divided randomly into training and testing samples in an 80:20 ratio using the train_test_split() function available in the sklearn module.
Five different algorithms are applied and models of the data are created. Predictions are made on the testing sample space.
The accuracy score is then calculated by comparing the predictions with the correct labels of the testing dataset.
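The code fragments below assume that the required libraries have already been imported and the dataset loaded and split. A minimal setup sketch along those lines (the CSV file name and column names here are assumptions; the exact code in the original report may differ):
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, classification_report

# Load the Iris dataset (file name and column names are assumed here)
names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'class']
dataFrame = pd.read_csv('iris.csv', names=names)

# Separate the features from the class label and split 80:20 into train and test sets
x = dataFrame.drop('class', axis=1)
y = dataFrame['class']
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.20, random_state=1)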
#dimension of dataset
print(dataFrame.shape)
#Data Visualization
#histograms
dataFrame['sepal-length'].hist()
plt.show()
dataFrame['sepal-width'].hist()
plt.show()
dataFrame['petal-length'].hist()
plt.show()
dataFrame['petal-width'].hist()
plt.show()
#Scatter matrix
pd.plotting.scatter_matrix(dataFrame)
plt.show()
#heatmap
print("Checking the correlation : ")
corr = dataFrame.corr()
fig, ax = plt.subplots(figsize =(5,4))
sns.heatmap(corr, annot=True, ax=ax)
plt.show()
#Building Models
models = []
models.append(('LR', LogisticRegression(solver='liblinear', multi_class='ovr')))
models.append(('KNN', KNeighborsClassifier()))
models.append(('CART', DecisionTreeClassifier()))
models.append(('NB', GaussianNB()))
models.append(('SVM', SVC(gamma='auto')))
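# NOTE: the loop that produced the per-model accuracy estimates discussed below was not
# included in the fragments above; the following is one plausible sketch, assuming the
# x_train and y_train variables from the earlier split.
from sklearn.model_selection import StratifiedKFold, cross_val_score

for name, candidate in models:
    kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
    cv_results = cross_val_score(candidate, x_train, y_train, cv=kfold, scoring='accuracy')
    print(f'{name}: {cv_results.mean():.3f} ({cv_results.std():.3f})')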
#Make Predictions
model = SVC(gamma='auto')
model.fit(x_train, y_train)
predictions = model.predict(x_test)
#evaluate predictions
print(f'Test Accuracy: {accuracy_score(y_test, predictions)}')
print(f'Classification Report: \n {classification_report(y_test, predictions)}')
So, we had 5 models and accuracy estimations for each. We need to compare the models to
each other and select the most accurate. In this case, we can see that it looks like Support
Vector Machines (SVM) has the largest estimated accuracy score at about 0.98 or 98%.
Hence Support Vector Machine is used as the final model.
The model successfully classifies the flower among three species (Setosa, Versicolor, or
Virginica) from sepals' and petals' length and width measurements.
CONCLUSION
Artificial intelligence (AI) is on the rise as it is integrated into every aspect of our daily lives: computers, video games, and even kitchen appliances. As humans, we have allowed AI to enter our daily lives as it completes the simplest of tasks for us, though these tasks are not always completed to the best possible standard. As humans, we are able to complete a task to the prime of our capacity through the combination of our experiences, emotions, and logic. On the other hand, artificial intelligence formulates a conclusion through a series of mathematical equations, numerous lines of code, and a series of zeros and ones in order to mimic our human capabilities of decision making.
In conclusion, artificial intelligence will become ever more valuable to humans. It will become a part of our daily lives. Some worry about the development of this new technology, in which a robot can learn and develop skills on its own. Artificial intelligence will surpass humans on an IQ level and become better than humans at many skills or areas of knowledge. This leaves some people in an identity crisis: what makes humans so unique, and what is their purpose, if artificial intelligence can simply replace them by taking on all of their traits and habits? Artificial intelligences are designed to learn on their own and to resemble the human brain in its physical and mental properties. One thing is for sure: artificial intelligence will continue to develop because of humans. Humans will continue to make new discoveries and explore new things. Artificial intelligence will never be able to accomplish that on its own; however, it may assist humans by providing theories. The future is unknown, and maybe artificial intelligence and humans will be able to work together on many different topics.
REFERENCE
https://www.tutorialspoint.com/artificial_intelligence_overview
http://www.techsparks.co.in/artificial-intelligence
https://www.upgrad.com/blog/types-of-artificial-intelligence-algorithms
https://www.javatpoint.com/subsets-of-ai
https://existek.com/blog/ai-programming-languages
https://www.javatpoint.com/future-of-artificial-intelligence-and-concerns
https://en.wikipedia.org/wiki/Artificial_intelligence/Applications
https://www.analyticsvidhya.com/common-machine-learning-algorithms
https://www.upgrad.com/blog/top-python-libraries-for-machine-learning
https://computerscienceai.wordpress.com/conclusion