Study Material
There are various technical definitions of Artificial Intelligence, but most of them are complex and confusing. Here, we will explain the concept in simple words for better understanding.
Humans are considered the most intelligent species on Earth because they can solve problems and analyze large amounts of data using skills such as analytical thinking, logical reasoning, statistical knowledge, and mathematical or computational intelligence.
Keeping this combination of skills in mind, artificial intelligence is developed for machines and robots so that they can solve complex problems in a manner similar to humans.
Artificial intelligence is applicable in almost every field, including medicine, automobiles, everyday lifestyle applications, electronics, communications, and computer networking systems.
In the context of computer networks, AI can technically be defined as computer devices and networking systems that can understand raw data accurately, gather useful information from that data, and then use those findings to solve the given problem with a flexible approach and easily adaptable solutions.
Elements Of Intelligence
#1) Reasoning: It is the procedure that provides the basic criteria and guidelines for making judgments, predictions, and decisions in any problem.
Reasoning can be of two types. Generalized reasoning is based on generally observed incidents and statements, so the conclusion can sometimes be false. Logical reasoning is based on facts, figures, and specific, recorded, and observed incidents, so the conclusion is correct and logical.
#2) Learning: It is the act of acquiring knowledge and developing skills from various sources such as books, real-life incidents, experiences, and being taught by experts. Learning enhances a person's knowledge in fields they were previously unaware of.
The ability to learn is displayed not only by humans but also by some animals, and artificially intelligent systems possess this skill as well.
Audio speech learning happens when a teacher delivers a lecture and the students hear it, memorize it, and then use it to gain knowledge.
Linear learning is based on memorizing the sequence of events that a person has encountered and learning from them.
Observational learning means learning by observing the behavior and facial expressions of other people or creatures such as animals. For example, a small child learns to speak by mimicking its parents.
Perceptual learning is based on identifying and classifying visuals and objects and memorizing them.
Relational learning is based on learning from past incidents and mistakes and making efforts to improve upon them.
Spatial learning means learning from visuals such as images, videos, colors, maps, and movies, which helps people recreate a mental image of them whenever needed for future reference.
#3) Problem Solving: It is the process of identifying the cause of a problem and finding a possible way to solve it. This is done by analyzing the problem, making decisions, and evaluating more than one candidate solution to reach the final, best-suited solution.
The goal here is to find the best solution among the available ones so as to achieve the best results in minimal time.
#4) Perception: It is the process of acquiring, selecting, organizing, and drawing inferences from useful data in the raw input.
In humans, perception is derived from experiences, sense organs, and the situational conditions of the environment. In artificial intelligence, perception is acquired by artificial sensor mechanisms that interpret the data in a logical manner.
#5) Linguistic Intelligence: It is a person's capacity to use, understand, read, and write verbal language in different languages. It is the basic component of communication between two or more individuals and is also necessary for analytical and logical understanding.
#1) We have explained above the components of human intelligence, on the basis of which humans perform different types of complex tasks and solve various distinctive problems in diverse situations.
#2) Humans develop machines with intelligence similar to their own, and these machines produce results for complex problems that come very close to human results.
#3) Humans distinguish data by visual and audio patterns, past situations, and circumstantial events, whereas artificially intelligent machines recognize a problem and handle the issue based on predefined rules and historical data.
#4) Humans memorize past data and recall it as they learned and stored it in the brain, whereas machines find past data by using search algorithms.
#5) With their intelligence, humans can even recognize distorted images and shapes and missing patterns in voice, data, and images. Machines do not have this innate ability; they use machine learning and deep learning processes, which in turn involve various algorithms, to obtain the desired results.
#6) Humans rely on instinct, vision, experience, surrounding circumstances and information, the available visual and raw data, and what they have been taught by teachers or elders to analyze and solve any problem and arrive at effective, meaningful results.
On the other hand, artificially intelligent machines deploy various algorithms, predefined steps, historical data, and machine learning at every level to arrive at useful results.
#7) Though the process followed by machines is complex and involves many procedures, they still give the best results when analyzing large sources of complex data and when distinctive tasks from different fields need to be performed precisely, accurately, and within a given time frame at the same instant.
In such cases, the error rate of machines is far lower than that of humans.
Machine learning focuses on developing algorithms that can scrutinize data and make predictions from it. A major use of this is in the healthcare industry, where it is applied to disease diagnosis, medical scan interpretation, and so on.
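As a minimal, illustrative sketch (not from the original text), the Python snippet below assumes the scikit-learn library and its bundled breast-cancer dataset, and trains a model that scrutinizes labeled medical data and predicts a diagnosis for unseen records:

# A minimal sketch of an algorithm that learns from data and makes predictions,
# here for disease diagnosis (dataset and model choice are illustrative assumptions).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Load a labeled medical dataset (features measured from scans and tests).
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Fit the algorithm on the training data.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Make predictions on unseen patient records and check how often they are correct.
predictions = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, predictions))

Any comparable dataset and classifier would serve the same purpose; the point is simply that the algorithm learns from data and then predicts.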
The pattern recognition process includes several steps. These are explained as follows:
(i) Data acquisition and sensing: This includes the collection of raw data such as physical variables and the measurement of frequency, bandwidth, resolution, etc. The data is of two types: training data and learning data.
Training data comes with no labeling of the dataset, and the system applies clustering to categorize it. Learning data has a well-labeled dataset, so it can be used directly with the classifier.
(ii) Pre-processing of input data: This includes filtering out unwanted data such as noise from the input source, which is done through signal processing. At this stage, pre-existing patterns in the input data are also filtered out for further reference.
(iii) Feature extraction: Various algorithms, such as pattern matching algorithms, are carried out to find the required matching pattern in terms of features.
(iv) Classification: Based on the output of the algorithms and the various models learned for obtaining the matching pattern, a class is assigned to the pattern.
(v) Post-processing: Here the final output is presented, and it is verified that the result achieved is as close as possible to what is needed.
[Figure: a pattern recognition system in which a feature extractor feeds a classifier]
As shown in the figure above, the feature extractor derives features from the raw input data, such as audio, image, video, or sonic data.
The classifier then receives the feature value x as input and allocates it to one of the categories class 1, class 2, ..., class C. Based on the class of the data, further recognition and analysis of the pattern are done.
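To make the pipeline concrete, here is a small hedged sketch in Python that assumes scikit-learn and its built-in handwritten-digit images; the named steps loosely mirror stages (i) to (v) above:

# A minimal sketch of the pattern recognition pipeline described above,
# using scikit-learn (an assumption; any equivalent toolkit would do).
from sklearn.datasets import load_digits
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC

# (i) Data acquisition: raw 8x8 pixel images of handwritten digits.
X, y = load_digits(return_X_y=True)

pipeline = Pipeline([
    ("preprocess", StandardScaler()),    # (ii) pre-processing of the input
    ("features", PCA(n_components=20)),  # (iii) feature extraction
    ("classify", SVC()),                 # (iv) classification into class 1..C
])
pipeline.fit(X, y)

# (v) Post-processing: inspect the predicted class for a new sample.
print(pipeline.predict(X[:1]))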
The machine runs various programs and algorithms to map the raw sequence of input data to an output. By deploying algorithms such as neuroevolution, and other approaches such as gradient descent on a neural topology, the output y is finally obtained from the unknown function f(x), assuming that x and y are correlated.
Interestingly, the job of the neural network here is to find the correct function f.
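As a toy illustration of this idea (an assumption for teaching purposes, not the article's own code), the NumPy sketch below uses plain gradient descent to recover an unknown linear mapping y = f(x) from sampled data:

# A toy sketch of learning an unknown mapping y = f(x) with gradient descent,
# assuming f can be approximated by a simple linear model.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(100, 1))
y = 3.0 * x + 0.5           # the "unknown" function the model must discover

w, b = 0.0, 0.0             # initial guess for the model parameters
lr = 0.1                    # learning rate

for _ in range(500):
    y_hat = w * x + b                   # prediction from current parameters
    error = y_hat - y
    grad_w = 2 * np.mean(error * x)     # gradient of the mean squared error
    grad_b = 2 * np.mean(error)
    w -= lr * grad_w                    # gradient descent update
    b -= lr * grad_b

print(w, b)   # approaches 3.0 and 0.5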
Deep learning observes all possible human characteristics and behavioral databases and performs supervised learning on them.
These characteristics are then used to prepare artificial neural networks through deep learning.
Predictive Analysis: After collecting and learning from huge datasets, similar kinds of datasets are clustered using the available model sets, for example by comparing similar sets of speech, images, or documents.
Once the classification and clustering of the datasets are done, we can approach the prediction of future events on the basis of present cases by establishing the correlation between the two. Remember that the predictive decision and approach are not time-bound.
The only point to keep in mind while making a prediction is that the output should make sense and be logical.
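A minimal sketch of the clustering step, assuming scikit-learn's KMeans and a made-up two-group dataset, could look like this; a new event is then "predicted" by assigning it to the cluster it correlates with:

# A minimal sketch of clustering similar observations and assigning a new
# event to the nearest cluster (data and model choice are assumptions).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic "datasets" with two obvious groups of similar observations.
data = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(50, 2)),
    rng.normal(loc=5.0, scale=0.5, size=(50, 2)),
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)

# A new event is predicted to belong to whichever cluster it correlates with.
new_event = np.array([[4.8, 5.1]])
print("predicted cluster:", kmeans.predict(new_event)[0])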
Through repeated attempts and self-analysis, machines arrive at solutions to problems in this way. An example of deep learning is speech recognition in phones, which allows smartphones to understand the different accents of speakers and convert their speech into meaningful text.
A stack of perceptrons joined together forms an artificial neural network in a machine. Before giving a desirable output, the neural network gains knowledge by processing various training examples.
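For illustration, the following NumPy sketch shows a single perceptron, the kind of unit that is stacked to form such networks, gaining knowledge from repeated training examples of the logical OR function (the data and learning rate are assumptions chosen for simplicity):

# A minimal sketch of one perceptron learning the OR function from training examples.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])           # target: logical OR

w = np.zeros(2)
b = 0.0
lr = 0.1

for _ in range(20):                   # process the training examples repeatedly
    for xi, target in zip(X, y):
        prediction = int(xi @ w + b > 0)
        update = lr * (target - prediction)   # classic perceptron learning rule
        w += update * xi
        b += update

print([int(xi @ w + b > 0) for xi in X])   # matches [0, 1, 1, 1]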
With the use of different learning models, this process of analyzing data also provides solutions to many associated queries that were previously unanswered.
Deep learning, in association with neural networks, can unfold multiple layers of hidden data, including the output layer of complex problems, and is an aid to subfields like speech recognition, natural language processing, and computer vision.
Earlier kinds of neural networks were composed of one input layer, one output layer, and at most one hidden layer, or only a single layer of perceptrons.
Deep neural networks are composed of more than one hidden layer between the input and output layers; therefore a deep learning process is required to unfold these hidden layers of data.
In the deep learning of neural networks, each layer is trained on a unique set of attributes based on the output features of the previous layers. The deeper you go into the network, the more complex the attributes the nodes can recognize, as they recombine the outputs of all the previous layers to produce a clearer final output.
This whole process is called a feature hierarchy, also known as the hierarchy of complex and abstract datasets. It enhances the capability of deep neural networks to handle very large, high-dimensional data units with billions of parameters that pass through linear and non-linear functions.
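A very small sketch of this structure, using plain NumPy with random (untrained) weights purely to show how each layer consumes the output features of the previous one, might look like this:

# A minimal sketch of a deep feedforward pass: each hidden layer operates on the
# output features of the previous layer (weights are random, just to show the structure).
import numpy as np

def relu(z):
    return np.maximum(0, z)

rng = np.random.default_rng(0)
layer_sizes = [16, 32, 32, 32, 4]    # input, three hidden layers, output

activation = rng.normal(size=(1, layer_sizes[0]))
for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
    W = rng.normal(size=(n_in, n_out))
    b = np.zeros(n_out)
    activation = relu(activation @ W + b)   # each layer builds on the previous output

print(activation.shape)   # (1, 4): the final, more abstract representation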
The main issue that machine intelligence struggles to solve is handling and managing the unlabeled and unstructured data that is spread across all fields and countries of the world. Neural nets now have the capability to handle the latency and complex features of these data subsets.
Deep learning, in association with artificial neural networks, can classify and characterize unnamed raw data in the form of pictures, text, audio, etc. into an organized relational database with proper labeling.
For example, deep learning will take thousands of raw images as input and then classify them based on their basic features and characteristics: all animals such as dogs on one side, non-living things such as furniture in one corner, and all the photos of your family on a third side, thus completing the overall grouping, which is also known as a smart photo album.
As another example, consider text data as input, where we have thousands of e-mails. Here, deep learning will cluster the e-mails into different categories such as primary, social, promotional, and spam as per their content.
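As a hedged sketch of such content-based categorization, the snippet below assumes scikit-learn's text tools and a tiny made-up e-mail corpus; a real system would of course train on far more data:

# A minimal sketch of categorizing e-mails by their content
# (the example corpus and labels below are made up for illustration).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Meeting moved to 3 pm, see agenda attached",      # primary
    "Your friend tagged you in a photo",                # social
    "Flat 50% off on all shoes this weekend only",      # promotional
    "You have won a lottery, send your bank details",   # spam
]
labels = ["primary", "social", "promotional", "spam"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["Huge discount on laptops, offer ends today"]))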
Feedforward Neural Networks: The goal of using neural networks is to achieve the final result with minimal error and a high level of accuracy.
This procedure involves many steps, and each level includes prediction, error management, and weight updates, which are slight increments to the coefficients as the network moves slowly toward the desirable features.
At the starting point, a neural network does not know which weights and data subsets will let it convert the input into the most suitable predictions. It therefore considers various subsets of data and weights as models, makes predictions sequentially to achieve the best result, and learns from its mistakes every time.
For example, we can compare neural networks to small children: when they are born, they know nothing about the world around them, but as they grow up, they learn from their life experiences and mistakes to become better and more intelligent humans.
In the network, the input dataset is mapped through the coefficients to obtain multiple predictions.
Each prediction is then compared with the ground truth, which is taken from real-time scenarios, facts, and experience, to find the error rate. Adjustments are made to deal with the error and to account for the contribution of the weights to it.
These three functions are the core building blocks of neural networks: scoring the input, evaluating the loss, and deploying an update to the model.
Thus it is a feedback loop that rewards the coefficients that support correct predictions and discards the coefficients that lead to errors.
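The loop below is a minimal NumPy sketch of these three building blocks, scoring the input, evaluating the loss against the ground truth, and updating the coefficients; the data and learning rate are illustrative assumptions:

# A minimal sketch of the feedback loop: predict, evaluate the loss, update the weights.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.7])
y = X @ true_w                        # the "ground facts" the network must match

w = np.zeros(3)                       # the network starts knowing nothing
lr = 0.05

for step in range(300):
    y_hat = X @ w                     # 1) score the input (make predictions)
    loss = np.mean((y_hat - y) ** 2)  # 2) evaluate the loss against the ground truth
    grad = 2 * X.T @ (y_hat - y) / len(y)
    w -= lr * grad                    # 3) update the model's coefficients
    if step % 100 == 0:
        print("step", step, "loss", round(float(loss), 4))

print(w)   # converges toward [1.5, -2.0, 0.7]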
Handwriting recognition, face and digital signature recognition, and missing pattern identification are some real-time examples of neural networks.
By practicing this, the machine acquires the ability to understand human language and image impressions. Thus cognitive thinking, along with artificial intelligence, can create a product that has human-like actions as well as data handling capabilities.
Cognitive computing is capable of making accurate decisions for complex problems. It is therefore applied in areas that need to improve solutions at optimum cost, which is achieved by analyzing natural language and through evidence-based learning.
The concept behind introducing this component is to make the interaction between machines and human language seamless, so that computers become capable of delivering logical responses to human speech or queries.
Natural language processing focuses on both the verbal and written forms of human language, that is, both active and passive modes of using algorithms.
Natural Language Understanding (NLU) processes and decodes the sentences and words that humans speak or write and translates them into a form that machines can understand, while Natural Language Generation (NLG) works in the opposite direction, producing human-readable language from the machine's data.
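As a deliberately simplified sketch of the NLU side, the snippet below reduces "understanding" to keyword-based intent matching in plain Python; real NLU systems are far more sophisticated, so treat this only as an illustration of turning text into a machine-usable form:

# A very small sketch of the NLU idea: converting a written query into a
# machine-usable form (keyword-based intent matching, an illustrative simplification).
def understand(query: str) -> str:
    words = query.lower().split()                 # break the text into tokens
    if "weather" in words:
        return "intent: get_weather"
    if "translate" in words:
        return "intent: translate_text"
    return "intent: unknown"

print(understand("What is the weather today?"))   # intent: get_weather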
Graphical User Interface (GUI) based applications of machines are a common example of natural language processing.
The various translators that convert one language into another are examples of natural language processing systems. Google's voice assistant and voice search features are also examples of this.
Computer vision incorporates the skills of deep learning and pattern recognition to extract the content of images from any given data, including image or video files within PDF documents, Word documents, PPT documents, Excel files, graphs, pictures, and so on.
Suppose we have a complex image of a bundle of things; merely looking at the image and memorizing it is not easy for everyone. Computer vision can apply a series of transformations to the image to extract bit- and byte-level details about it, such as the sharp edges of objects and any unusual designs or colors used.
This is done by using various algorithms that apply mathematical expressions and statistics. Robots make use of computer vision technology to see the world and act in real-time situations.
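One such transformation can be sketched in a few lines of NumPy: a Sobel-style convolution (an assumed, classical example) that highlights the sharp edges in a tiny grayscale image:

# A minimal sketch of an edge-finding transformation: a Sobel-style convolution
# applied to a toy grayscale image (NumPy only).
import numpy as np

# A toy 6x6 "image": dark on the left half, bright on the right half.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])   # responds strongly to vertical edges

# Slide the kernel over the image (valid positions only) and record the responses.
h, w = image.shape
edges = np.zeros((h - 2, w - 2))
for i in range(h - 2):
    for j in range(w - 2):
        edges[i, j] = np.sum(image[i:i+3, j:j+3] * sobel_x)

print(edges)   # large values mark the sharp boundary between dark and bright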
This component is used very widely in the healthcare industry to analyze the health condition of a patient using MRI scans, X-rays, etc. It is also used in the automobile industry for computer-controlled vehicles and drones.
Conclusion
In this tutorial, we first explained the various elements of intelligence and their significance in applying intelligence to real-life situations to get the desired results.
Then, we explored in detail the various sub-fields of artificial intelligence and their significance for machine intelligence and the real world, with the help of mathematical expressions, real-time applications, and various examples.
We also learned in detail about machine learning, pattern recognition, and the neural network concepts of artificial intelligence, which play a vital role in all AI applications.