AI Unit I

Artificial Intelligence (AI) is a field of computer science focused on creating machines that can perform tasks typically requiring human intelligence, such as problem-solving and decision-making. The history of AI spans several decades, marked by significant milestones including the development of early AI programs, the introduction of expert systems, and advancements in machine learning and natural language processing. AI has diverse applications across various fields, including finance, robotics, and healthcare, and continues to evolve with innovations in deep learning and computer vision.


Introduction to Artificial Intelligence

Artificial Intelligence is a branch of Computer Science that pursues creating computers or machines as intelligent as
human beings. It is the science and engineering of making intelligent machines, especially intelligent computer
programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to
confine itself to methods that are biologically observable.

Definition: Artificial Intelligence is the study of how to make computers do things which, at the moment, people do
better. According to the father of Artificial Intelligence, John McCarthy, it is "the science and engineering of making
intelligent machines, especially intelligent computer programs". Artificial Intelligence is a way of making a computer, a
computer-controlled robot, or a piece of software think intelligently, in a manner similar to how intelligent humans think.

AI is accomplished by studying how the human brain thinks and how humans learn, decide, and work while trying to
solve a problem, and then using the outcomes of this study as a basis for developing intelligent software and systems. It
has gained prominence recently due, in part, to big data: the increase in the speed, size, and variety of data businesses are
now collecting.

AI can perform tasks such as identifying patterns in data more efficiently than humans, enabling businesses
to gain more insight from their data. From a business perspective, AI is a set of very powerful tools, and methodologies
for using those tools to solve business problems. From a programming perspective, AI includes the study of symbolic
programming, problem solving, and search.
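As an illustration of the "search" component, here is a minimal breadth-first search over a small state graph. The graph and goal below are invented for the example; they are not from the course material.

```python
from collections import deque

def bfs(graph, start, goal):
    """Return a shortest path from start to goal using breadth-first search."""
    frontier = deque([[start]])   # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None  # goal unreachable

# A toy state space (hypothetical): nodes are states, edges are actions.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(bfs(graph, "A", "E"))  # ['A', 'B', 'D', 'E']
```

The same skeleton underlies many AI problem-solving methods: swap the queue for a stack to get depth-first search, or for a priority queue to get best-first search.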

Foundations and History of Artificial Intelligence

Artificial Intelligence is neither a new term nor a new technology for researchers. It is much older than
you might imagine.

Maturation of Artificial Intelligence (1943-1952)

o Year 1943: The first work now recognized as AI was done by Warren McCulloch and Walter Pitts in
1943. They proposed a model of artificial neurons.

o Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection strength between
neurons. His rule is now called Hebbian learning.

o Year 1950: Alan Turing, an English mathematician and a pioneer of machine learning, published
"Computing Machinery and Intelligence" in 1950, in which he proposed a test of a machine's ability to
exhibit intelligent behavior equivalent to human intelligence, now called the Turing test.
The birth of Artificial Intelligence (1952-1956)

o Year 1955: Allen Newell and Herbert A. Simon created the "first artificial intelligence program", which was
named the "Logic Theorist". This program proved 38 of 52 mathematics theorems and found new, more
elegant proofs for some of them.

o Year 1956: The term "Artificial Intelligence" was first adopted by the American computer scientist John McCarthy at
the Dartmouth Conference, where AI was coined as an academic field for the first time. Around that time, high-level
computer languages such as FORTRAN, LISP, and COBOL were invented, and enthusiasm for AI was very high.

The golden years-Early enthusiasm (1956-1974)

o Year 1966: Researchers emphasized developing algorithms that could solve mathematical problems.
Joseph Weizenbaum created the first chatbot, named ELIZA, in 1966.

o Year 1972: The first intelligent humanoid robot, named WABOT-1, was built in Japan.

The first AI winter (1974-1980)

o The duration between 1974 and 1980 was the first AI winter. An AI winter refers to a period in which
computer scientists faced a severe shortage of government funding for AI research.
o During AI winters, public interest in artificial intelligence decreased.

A boom of AI (1980-1987)

o Year 1980: After the AI winter, AI came back with "expert systems": programs that emulate the
decision-making ability of a human expert.
o In 1980, the first national conference of the American Association of Artificial Intelligence was held at
Stanford University.

The second AI winter (1987-1993)

o The duration between 1987 and 1993 was the second AI winter.
o Investors and governments again stopped funding AI research due to high costs and disappointing
results. Although expert systems such as XCON had initially been very cost effective, they proved
expensive to maintain.

The emergence of intelligent agents (1993-2011)

o Year 1997: In 1997, IBM's Deep Blue beat world chess champion Garry Kasparov, becoming the first
computer to defeat a reigning world chess champion.

o Year 2002: For the first time, AI entered the home in the form of Roomba, a robotic vacuum cleaner.

o Year 2006: By the year 2006, AI had entered the business world. Companies like Facebook, Twitter, and Netflix
started using AI.

Deep learning, big data and artificial general intelligence (2011-present)

o Year 2011: In 2011, IBM's Watson won Jeopardy!, a quiz show in which it had to answer complex
questions as well as riddles. Watson proved that it could understand natural language and solve tricky
questions quickly.

o Year 2012: Google launched an Android app feature, "Google Now", which could provide information
to the user as a prediction.

o Year 2014: In 2014, the chatbot "Eugene Goostman" won a competition based on the famous "Turing test."

o Year 2018: IBM's "Project Debater" debated complex topics with two master debaters and
performed extremely well.

o Google demonstrated an AI program, "Duplex", a virtual assistant that booked a hairdresser
appointment over the phone; the person on the other end did not notice she was talking to a machine.

Applications of Artificial Intelligence

AI has applications in all fields of human study, such as finance and economics, environmental engineering, chemistry,
computer science, and so on.

Some of the applications of AI are listed below:

o Perception
■ Machine vision
■ Speech understanding
■ Touch (tactile or haptic) sensation
o Robotics
o Natural Language Processing
■ Natural Language Understanding
■ Speech Understanding
■ Language Generation
■ Machine Translation
o Planning
o Expert Systems
o Machine Learning
o Theorem Proving
o Symbolic Mathematics
o Game Playing

Intelligent Agents and Structure of Intelligent Agents

o An (intelligent) agent perceives its environment via sensors and acts rationally upon that environment with its
effectors. Hence, an agent receives percepts one at a time and maps this percept sequence to actions.
o Another definition: an agent is a computer software system whose main characteristics are situatedness,
autonomy, adaptivity, and sociability.

Agent Characteristics

o Situatedness
The agent receives some form of sensory input from its environment, and it performs some action that changes
its environment in some way. Examples of environments: the physical world and the Internet.
o Autonomy
The agent can act without direct intervention by humans or other agents and has control over its own
actions and internal state.
o Adaptivity
The agent is capable of (1) reacting flexibly to changes in its environment; (2) taking goal-directed initiative (i.e.,
being pro-active) when appropriate; and (3) learning from its own experience, its environment, and interactions
with others.
o Sociability
The agent is capable of interacting in a peer-to-peer manner with other agents or humans.

Examples of Agents

Agent Type | Percepts | Actions | Goals | Environment
Bin-Picking Robot | Images | Grasp objects; sort into bins | Parts in correct bins | Conveyor belt
Medical Diagnosis | Patient symptoms, tests | Tests and treatments | Healthy patient | Patient & hospital
Excite's Jango product finder | Web pages | Navigate web, gather relevant products | Find best price for a product | Internet
Webcrawler Softbot | Web pages | Follow links, pattern matching | Collect info on a subject | Internet
Financial forecasting software | Financial data | Gather data on companies | Pick stocks to buy & sell | Stock market, company reports

How to Evaluate an Agent's Behavior/Performance?

o Rationality => We need a performance measure to say how well a task has been achieved. An ideal rational agent
should, for each possible percept sequence, do whatever actions will maximize its performance measure, based
on (1) the percept sequence and (2) its built-in and acquired knowledge. This includes information gathering,
not "rational ignorance."
o Types of objective performance measures: false alarm rate, false dismissal rate, time taken, resources required,
effect on the environment, etc.
o Examples: benchmarks and test sets, the Turing test (there is no homunculus!)
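For instance, the false alarm and false dismissal rates mentioned above can be computed directly from an agent's raw detection counts. The counts in this sketch are made up for illustration.

```python
def false_alarm_rate(false_positives, true_negatives):
    """Fraction of truly negative cases the agent wrongly flagged."""
    return false_positives / (false_positives + true_negatives)

def false_dismissal_rate(false_negatives, true_positives):
    """Fraction of truly positive cases the agent wrongly missed."""
    return false_negatives / (false_negatives + true_positives)

# Hypothetical results: 5 false alarms among 100 negatives,
# 2 misses among 50 positives.
print(false_alarm_rate(5, 95))      # 0.05
print(false_dismissal_rate(2, 48))  # 0.04
```

A performance measure like this lets us compare two agents on the same task objectively, instead of judging their behavior by inspection.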

Approaches to Agent Design

1. Simple Reflex Agent


o Table lookup of percept-action pairs defining all possible condition-action rules necessary to interact in
an environment
o Problems
 Too big to generate and to store (chess has about 10^120 states, for example)
 No knowledge of non-perceptual parts of the current state
 Not adaptive to changes in the environment; requires entire table to be updated if changes
occur
 Looping: Can't make actions conditional
2. Reflex Agent with Internal State
o Encode "internal state" of the world to remember the past as contained in earlier percepts
o Needed because sensors do not usually give the entire state of the world at each input, so perception of
the environment is captured over time. "State" used to encode different "world states" that generate
the same immediate percept.
o Requires ability to represent change in the world; one possibility is to represent just the latest state, but
then can't reason about hypothetical courses of action
o Example: Rodney Brooks's Subsumption Architecture
Main idea: build complex, intelligent robots by decomposing behaviors into a hierarchy of skills, each
completely defining a complete percept-action cycle for one very specific task. For example, avoiding
contact, wandering, exploring, recognizing doorways, etc. Each behavior is modeled by a finite-state
machine with a few states (though each state may correspond to a complex function or module).
Behaviors are loosely-coupled, asynchronous interactions.
3. Goal-Based Agent
o Choose actions so as to achieve a (given or computed) goal, i.e., a description of a desirable situation
o Keeping track of the current state is often not enough; we need to add goals to decide which situations
are good
o Deliberative instead of reactive
o May have to consider long sequences of possible actions before deciding whether the goal is achieved;
this involves consideration of the future: "what will happen if I do...?"
4. Utility-Based Agent
o When there are multiple possible alternatives, how to decide which one is best?
o A goal specifies a crude distinction between a happy and unhappy state, but often need a more general
performance measure that describes "degree of happiness"
o Utility function U: State --> Reals
indicating a measure of success or happiness when at a given state
o Allows decisions comparing choice between conflicting goals, and choice between likelihood of success
and importance of goal (if achievement is uncertain)
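The contrast between these designs can be sketched in a toy vacuum-cleaner world. The condition-action rules and the utility values below are invented for illustration; they are not part of the course material.

```python
# Simple reflex agent: a pure table lookup from percept to action.
REFLEX_RULES = {
    ("A", "dirty"): "suck",
    ("A", "clean"): "move_right",
    ("B", "dirty"): "suck",
    ("B", "clean"): "move_left",
}

def reflex_agent(percept):
    """Act only on the current percept, with no memory or goals."""
    return REFLEX_RULES[percept]

# Utility-based agent: score every candidate action and pick the best.
def utility_agent(state, actions, utility):
    return max(actions, key=lambda a: utility(state, a))

def utility(state, action):
    """Hypothetical 'degree of happiness': cleaning beats moving."""
    location, status = state
    if action == "suck" and status == "dirty":
        return 10
    if action.startswith("move"):
        return 1
    return 0

print(reflex_agent(("A", "dirty")))                                    # suck
print(utility_agent(("A", "dirty"), ["suck", "move_right"], utility))  # suck
```

The reflex agent breaks as soon as a percept is missing from its table, while the utility-based agent degrades gracefully: it can rank any set of actions, including ones with conflicting goals.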

Computer vision

Computer vision (CV) is the automatic extraction, analysis, and interpretation of images or videos. CV
converts photos and videos into numerical arrays, enabling ML algorithms to draw inferences, make
predictions, and even generate new images based on user-defined inputs.

Example of an image being converted to an array:
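A minimal sketch of that conversion, using plain Python lists to stand in for the pixel array (real CV pipelines typically use libraries such as NumPy or OpenCV; the pixel values here are made up):

```python
# A 2x2 grayscale "image": each pixel is an intensity from 0 (black) to 255 (white).
image = [
    [0,   51],
    [255, 102],
]

# Normalize intensities to the 0.0-1.0 range most ML models expect.
normalized = [[pixel / 255 for pixel in row] for row in image]

# Flatten into a single feature vector an ML algorithm can consume.
features = [pixel for row in normalized for pixel in row]
print(features)  # [0.0, 0.2, 1.0, 0.4]
```

A color photo works the same way, except each pixel carries three values (red, green, blue), so a 1920x1080 image becomes an array of about 6.2 million numbers.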

Potential uses for CV have been studied for decades, but CV has only recently become possible at
scale thanks to three innovations:

o More efficient algorithms: Deep learning, and convolutional neural networks specifically, significantly
reduce the memory footprint and computational runtime of CV tasks.

o Better computing resources: GPU improvements, distributed architectures (e.g., Spark), and the availability
of inexpensive cloud computing resources have made it cheaper than ever to run memory-hungry CV
algorithms.

o Availability of images to train on: The proliferation of social media platforms, community forums,
and digital/mobile cameras has drastically increased the number of publicly available images
that can be used to train CV algorithms.

These three innovations have opened the floodgates for new CV use cases, including self-driving cars
and automated retail (e.g., Amazon Go). As cameras, LIDAR (light detection and ranging), and other
spatial sensors become less expensive, we'll soon find ways to alleviate many of our most inefficient
processes using CV.
Natural Language Processing

Natural language processing (NLP) is the automatic extraction, analysis, and generation of human
language. NLP algorithms parse sentences in various ways (e.g., splitting by word, splitting by letter,
reading both left-to-right and right-to-left, etc.) to automatically draw inferences about the writer’s
meaning and intent. NLP’s various use cases include:

o Named entity recognition and coreference resolution

o Part-of-speech tagging

o Reading comprehension & question answering

o Machine translation

o Text summarization & topic modeling

o Spellcheck & autocomplete
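The "splitting by word" and "splitting by letter" parsing strategies mentioned above amount to simple tokenization, which can be sketched as:

```python
import re

def word_tokenize(text):
    """Split text into lowercase word tokens, dropping punctuation."""
    return re.findall(r"[a-z0-9']+", text.lower())

def char_tokenize(text):
    """Split text into individual characters, ignoring whitespace."""
    return [ch for ch in text if not ch.isspace()]

sentence = "NLP parses sentences!"
print(word_tokenize(sentence))  # ['nlp', 'parses', 'sentences']
print(char_tokenize("Hi AI"))   # ['H', 'i', 'A', 'I']
```

Production NLP systems use more sophisticated schemes (e.g., subword tokenization), but every pipeline starts by turning raw text into discrete units like these.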

Like CV, NLP has come a long way over the past decade thanks to innovations in deep learning that have
made it faster and easier to train ML models on human language. In the past, engineers would spend
hours examining, filtering, and transforming text to avoid computational bottlenecks. Today, out-of-the-
box solutions like fast.ai's NLP library can crush reading comprehension accuracy records without the
need for time-intensive preprocessing.

Siri and Alexa are great examples of NLP in action: by listening for “wake words”, these tools allow you to
play music, search the Web, create to-do lists, and control popular smart-home products — all while your
smartphone stays in your pocket. These virtual assistants will continue to improve over time as they gather
data from existing users, unlocking new use cases and integrating with the modern enterprise.
