UNIT - 1

What is AI?
1. Artificial intelligence (AI) is a wide-ranging branch of computer science concerned with
building smart machines capable of performing tasks that typically require human
intelligence.
2. No fixed definition
3. Developing systems with the intellectual processes characteristic of humans, such as the
ability to reason, discover meaning, generalize, or learn from past experience.
4. Study of how to build or program computers to enable them to do the sort of things that the
mind can do
5. Simulation of human intelligence in machines that are programmed to think like humans
and mimic their actions
6. Systems which think like humans – machines with minds – the effort is to make computers
think and perform tasks such as learning, problem solving, planning, reasoning, etc.

Summary of the History of AI (1940s onwards)


Early Foundations (1940s-1950s)
• Norbert Wiener: Pioneered cybernetics, foundational work on feedback and control that underpins automation.
• Warren McCulloch and Walter Pitts: Investigated how biological neurons could serve as a
model for computing, exploring the concept of neural networks.
• Donald Hebb: Demonstrated that neural networks could learn and adapt.
• John Von Neumann: A key figure in developing computing technology, laying groundwork for
AI.
• Alan Turing: Introduced the Turing Test to assess machine intelligence and, along with Von
Neumann, explored programmable machines.
• John McCarthy: Coined the term “Artificial Intelligence” and organized the Dartmouth
Conference, which was pivotal in shaping AI research.
Early AI Developments (1950s-1970s)
• Dartmouth Conference: A landmark event aiming to develop machines that could learn and
solve problems like humans.
• Marvin Minsky and Dean Edmonds: Created SNARC, an early neural network.
• Chatbot Eliza: The first chatbot, designed to simulate human conversation.
• Wabot 1 (1972): The first intelligent robot with basic communicative and interactive
capabilities.
• AI in Media: AI-themed movies began appearing in 1968.
AI Winters (1974-1980, 1987-1993)
• First AI Winter (1974-1980): Marked by reduced funding and progress due to
underwhelming results and growing skepticism, especially after a highly critical UK report on
AI research (the 1973 Lighthill Report).
• Second AI Winter (1987-1993): Triggered by failures in expert systems and specialized AI
hardware, leading to further reduced funding and a failed conference aimed at revitalizing AI research.
Renewed Interest and Advances (1990s-2010s)
• Intelligent Agents (1990): Emergence of systems that could act based on environmental
analysis, signaling the end of the Second AI Winter.
• IBM Deep Blue (1997): An expert system that defeated a world-class chess player,
demonstrating significant AI capabilities.
• AI Boom (2010): Driven by vast data availability and improved computer efficiency, leading
to renewed growth and innovation in AI.

Human Intelligence v/s Machine Intelligence


• The human learning process varies from person to person, and once a way of learning is set in a
person's mind it is difficult to change. In Machine Learning (ML), by contrast, it is easy to change
the learning method by selecting a different algorithm.

• Humans acquire knowledge through experience, either directly or shared by others.
Machines acquire knowledge through experience shared in the form of past data. The terms
Knowledge, Skill, and Memory are used to define intelligence.

• Humans without knowledge of a particular subject can still apply their intelligence to solve
problems in new domains. Machines can solve new problems only if their intelligence has been
updated by retraining on data acquired from the changed scenario. This is a fundamental
difference between human intelligence and machine intelligence.

• Both humans and machines make mistakes when applying their intelligence to solve problems.
Machine intelligence is limited to the areas in which the machine has been trained, whereas
human intelligence is independent of the domain of training. An intelligent human being can
solve problems in unforeseen domains; a machine cannot.

• Machine learning refers to the process of teaching a computer system how to make accurate
predictions from the data it is fed. The predictions can include whether a fruit in a photo is a
banana or an apple, whether a person crossing the road in front of a moving car can be spotted,
whether the word "book" in a sentence means a paperback or a hotel reservation, whether an
email is spam, or identifying speech accurately enough to create captions for a YouTube video.

• The vision of making machines that can think and act like humans has evolved from movie
fiction to real-world fact. We have long attempted to instil intelligence in machines to ease our
work. There are bots, humanoids, robots, and digital humans that either outplay humans or
coordinate with us in many ways. These AI-driven applications execute faster and with greater
operational ability and accuracy than humans, and are especially valuable for tedious and
monotonous jobs.
• Human intelligence, on the other hand, relates to adaptive learning and experience; it does not
always depend on pre-fed data the way AI does. Human memory, its computing power, and the
human body as an entity may seem insignificant compared to a machine's hardware and
software infrastructure, but the depth and layering of the human brain are far more complex
and sophisticated, and machines cannot match them, at least not in the near future.

Learning in machines
1. Supervised learning: The machine is trained on labelled data, i.e., data that contains the
answers within it. From these labelled examples it learns to identify patterns, and based on
this training it tries to predict the outcome for unseen data. All the information about how to
go about the task is provided, and the machine works from that (see the sketch after this list).
2. Unsupervised learning: No labelled data is provided and no instructions are given. The field
has progressed from supervised to unsupervised learning. The machine tries to identify
patterns on its own, finding similarities and differences and classifying the data based on past
experience. It can perform more complex tasks than supervised learning.
3. Reinforcement learning: Often used in robots. It functions with an agent and a reward
mechanism: the machine knows the goal, decides how to cross the hurdles in between, and
rewards itself for each right step, learning from a series of actions.
4. Semi-supervised learning (SSL): The AI learns from a small amount of labelled data and then
makes judgements on the unlabelled data, finding patterns, relationships, and structures. SSL
also helps reduce human bias: in fully supervised learning the data has been labelled by a
human, so results risk being skewed by improper labelling. With SSL, including a lot of
unlabelled data in the training process often improves the precision of the end result while
reducing time and cost.
5. Machine Learning (ML): Automated learning with little or no human intervention. It involves
programming computers so that they learn from available inputs. The main purpose of
machine learning is to explore and construct algorithms that can learn from previous data and
make predictions on new input data.
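As a rough illustration of the supervised/unsupervised distinction above, here is a minimal sketch in Python. It assumes scikit-learn is available; the tiny "fruit" data set and its two features are invented purely for illustration.

```python
# Minimal sketch contrasting supervised and unsupervised learning.
# Assumes scikit-learn is installed; the toy "fruit" data is invented for illustration.
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

# Supervised: labelled data (features plus known answers) trains a classifier.
features = [[150, 0], [170, 0], [120, 1], [130, 1]]   # [weight_g, is_elongated]
labels   = ["apple", "apple", "banana", "banana"]      # the "answers" inside the data
clf = DecisionTreeClassifier().fit(features, labels)
print(clf.predict([[160, 0]]))                         # predict an unseen example

# Unsupervised: no labels; the algorithm groups points by similarity on its own.
points = [[150, 0], [170, 0], [120, 1], [130, 1]]
groups = KMeans(n_clusters=2, n_init=10).fit_predict(points)
print(groups)                                          # cluster ids, not named classes
```

The supervised classifier is told the answers and predicts a named class for new data, while the clustering step only groups similar points and returns anonymous cluster ids.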

Machine learning progresses into deep learning, where it takes up unsupervised learning.

Deep learning is a subset of Machine Learning


• Deep learning is a subset of machine learning, a branch of artificial intelligence that
configures computers to perform tasks through experience. Contrary to classic, rule-based AI
systems, machine learning algorithms develop their behavior by processing annotated
examples, a process called “training.” For instance, to create a fraud-detection program, you
train a machine-learning algorithm with a list of bank transactions and their eventual
outcome (legitimate or fraudulent).
• While classic machine-learning algorithms solved many problems that rule-based programs
struggled with, they are poor at dealing with soft data such as images, video, sound files, and
unstructured text. For instance, creating a breast-cancer-prediction model using classic
machine-learning approaches would require the efforts of dozens of experts.
• Deep-learning algorithms solve the same problem using deep neural networks, a type of
software architecture inspired by the human brain (though artificial neurons work differently
from biological neurons). Neural networks are layers upon layers of variables that adjust
themselves to the properties of the data they are trained on and become capable of doing
tasks such as classifying images and converting speech to text.
• Deep Learning is merely a subset of machine learning. The primary ways in which they differ
are in how each algorithm learns and how much data each type of algorithm uses. Deep
learning automates much of the feature-extraction piece of the process, eliminating some of
the manual human intervention required. It also enables the use of large data sets.
• Classical, or "non-deep", machine learning is more dependent on human intervention to
learn. Human experts determine the hierarchy of features to understand the differences
between data inputs, usually requiring more structured data to learn. For example, let's say
that I were to show you a series of images of different types of fast food, “pizza,” “burger,” or
“taco.” The human expert on these images would determine the characteristics which
distinguish each picture as the specific fast food type. For example, the bread of each food
type might be a distinguishing feature across each picture.
• By observing patterns in the data, a deep learning model can cluster inputs appropriately.
Taking the same example from earlier, we could group pictures of pizzas, burgers, and tacos
into their respective categories based on the similarities or differences identified in the
images. With that said, a deep learning model would require more data points to improve its
accuracy, whereas a machine learning model relies on less data given the underlying data
structure.
• Machine learning is at the intersection of computer science and statistics through which
computers receive the ability to learn without being explicitly programmed. There are two
broad categories of machine learning problems: supervised and unsupervised learning.
• Deep learning describes algorithms that analyze data with a logic structure similar to how a
human would draw conclusions. Note that this can happen through both supervised and
unsupervised learning. To achieve this, deep learning applications use a layered structure of
algorithms called an artificial neural network (ANN). The design of such an ANN is inspired by
the biological neural network of the human brain.
• An ANN ordinarily includes a large number of processing units working in parallel and
arranged in tiers. The first tier receives the raw input data, analogous to the optic nerve in
human visual processing. Each successive tier receives the output from the tier preceding it,
rather than the raw input, in the same way that neurons further from the optic nerve receive
signals from those closer to it. The last tier produces the output of the system.
• The neurons are typically organized into multiple layers, especially in deep learning. Neurons
of one layer connect only to neurons of the immediately preceding and immediately
following layers. The layer that receives external data is the input layer. The layer that
produces the ultimate result is the output layer.
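To make the layered picture above concrete, the following is a minimal sketch of a feedforward network in NumPy: an input layer, one hidden layer, and an output layer, where each layer receives only the output of the layer before it. The layer sizes and random weights are arbitrary placeholders, not a trained model.

```python
# Minimal sketch of a feedforward neural network (input -> hidden -> output).
# Weights are random placeholders; a real model would learn them during training.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

x = rng.random(4)              # input layer: 4 raw input features

W1 = rng.random((4, 3))        # weights connecting input layer to hidden layer
h  = relu(x @ W1)              # hidden layer sees only the previous layer's output

W2 = rng.random((3, 2))        # weights connecting hidden layer to output layer
y  = h @ W2                    # output layer produces the final result

print(y)
```

In a real deep-learning system the weight matrices W1 and W2 would be adjusted during training so that the output layer produces useful results (for example, class scores for an image).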

Reinforcement Learning:
• Reinforcement learning is goal-oriented learning. That is, rather than trying to classify or
cluster data, you define what you want to achieve, which metrics you want to maximize or
minimize, and RL agents learn how to do that. It is not mutually exclusive with deep learning,
but rather a framework in which neural networks can be used to learn the relationship
between actions and their rewards.
• Main points in Reinforcement learning –
Input: The input should be an initial state from which the model will start.
Output: There are many possible outputs, as there are a variety of solutions to a particular problem.
Training: The training is based upon the input; the model returns a state and the user decides
whether to reward or punish the model based on its output. The model continues to learn, and
the best solution is decided based on the maximum reward.
Example: The goal of the robot is to get the reward (the diamond) while avoiding the hurdles (the
fire). The robot learns by trying all the possible paths and then choosing the path which gives it the
reward with the fewest hurdles. Each right step gives the robot a reward and each wrong step
subtracts from its reward. The total reward is calculated when it reaches the final reward, the
diamond.
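The robot-and-diamond example above can be sketched as a tiny reinforcement-learning problem. The code below is a minimal tabular Q-learning illustration on a small grid; the grid layout, reward values, and learning parameters are all invented for illustration and are not part of the notes.

```python
# Minimal tabular Q-learning sketch of the robot/diamond/fire example.
# The 2x3 grid, rewards, and learning parameters are invented for illustration.
import random

ROWS, COLS = 2, 3
START, DIAMOND, FIRE = (0, 0), (0, 2), (0, 1)       # fire is a hurdle to avoid
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]         # up, down, left, right
STATES = [(r, c) for r in range(ROWS) for c in range(COLS)]
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2                # learning rate, discount, exploration

def step(state, action):
    # Move within the grid; reward the diamond, punish the fire, small cost per step.
    r = min(max(state[0] + action[0], 0), ROWS - 1)
    c = min(max(state[1] + action[1], 0), COLS - 1)
    nxt = (r, c)
    reward = 10 if nxt == DIAMOND else (-8 if nxt == FIRE else -1)
    return nxt, reward

for _ in range(1000):                                # training episodes
    s = START
    while s != DIAMOND:
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda x: Q[(s, x)])
        s2, r = step(s, a)
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# Greedy path after training: the robot should detour through row 1 to avoid the fire.
s, path = START, [START]
while s != DIAMOND and len(path) < 10:
    s, _ = step(s, max(ACTIONS, key=lambda x: Q[(s, x)]))
    path.append(s)
print(path)
```

After enough episodes the stored Q-values reflect the rewards and punishments received for each step, so the greedy path printed at the end should reach the diamond while detouring around the fire.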
Types of Reinforcement: There are two types of Reinforcement:
1. Positive – Positive reinforcement occurs when an event, brought about by a particular
behaviour, increases the strength and frequency of that behaviour. In other words, it has a
positive effect on the behaviour.
2. Negative – Negative reinforcement is the strengthening of a behaviour because a negative
condition is stopped or avoided.
Practical applications of Reinforcement Learning: RL can be used in robotics for industrial
automation, and in machine learning and data processing.
In some ways it is similar to supervised learning in that developers must give algorithms
clearly specified goals and define rewards and punishments. This means the level of explicit
programming required is greater than in unsupervised learning. But once these parameters
are set, the algorithm operates on its own, making it much more self-directed than
supervised learning algorithms.
The main challenge in reinforcement learning lies in preparing the simulation environment,
which is highly dependent on the task to be performed. Scaling and tweaking the neural
network controlling the agent is another challenge: there is no way to communicate with
the network other than through the system of rewards and penalties.
Turing test
➢ Alan Turing – computer scientist
➢ Studied whether machines can think
➢ His paper, Computing Machinery and Intelligence, laid the foundation for the study of
whether machines can think.
Imitation game:
➢ Humans are kept in separate rooms connected by a screen and keyboard, with one acting
as judge. The judge has to identify the sex of the other two participants, while the other
two have to convince the judge that they are of the opposite sex.
➢ Alan Turing adapted this game to AI by substituting a machine for one of the players.
➢ 3 players – one interrogator, one machine, and one human.
➢ The interrogator needs to identify which is the machine and which is the human. If he fails
to distinguish them, we are dealing with a thinking computer.
➢ Hence, it is a test to identify whether one is a person or a computer, and whether
computers can think like humans. Turing essentially established that under certain
specific, laid-down conditions, a machine can mimic human behaviour.

The Turing test is not always successful but is widely accepted. Alan Turing stated that the test might
have to be run a number of times.
The chatbots ELIZA and PARRY are said to have passed the test.
Criticisms:
• John Searle: Chinese room argument – machines merely manipulate symbols without genuine
understanding, so not all human behaviour can be mimicked by computers. Ex: when a human is
asked to multiply two big numbers, they take time and the answer may not always be right, but
computers are quick and give correct answers.
• Deliberate mistakes: Humans are prone to making mistakes, but a properly programmed
computer will not make mistakes, which can give it away.
• MCQs: Machines find it hard to fool the interrogator when asked multiple-choice questions.
• Humans always have an intention to win, but machines have no intentions, aims, or feelings.
The test is not 100% successful.
UNIT 2
