AI Unit I
Artificial Intelligence is a branch of Computer Science that pursues creating computers or machines as intelligent as
human beings. It is the science and engineering of making intelligent machines, especially intelligent computer
programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to
confine itself to methods that are biologically observable.
Definition: Artificial Intelligence is the study of how to make computers do things which, at the moment, people do
better. According to the father of Artificial Intelligence, John McCarthy, it is “The science and engineering of making
intelligent machines, especially intelligent computer programs”. Artificial Intelligence is a way of making a computer, a
computer-controlled robot, or a piece of software think intelligently, in a manner similar to how intelligent humans think.
AI is accomplished by studying how the human brain thinks and how humans learn, decide, and work while trying to
solve a problem, and then using the outcomes of this study as a basis for developing intelligent software and systems. It
has gained prominence recently due, in part, to big data: the increase in the speed, size, and variety of data businesses
now collect.
AI can perform tasks such as identifying patterns in data more efficiently than humans, enabling businesses
to gain more insight from their data. From a business perspective, AI is a set of very powerful tools, along with methodologies
for using those tools to solve business problems. From a programming perspective, AI includes the study of symbolic
programming, problem solving, and search.
History of Artificial Intelligence
Artificial Intelligence is neither a new term nor a new technology for researchers; the field is much older than you
might imagine.
o Year 1943: The first work which is now recognized as AI was done by Warren McCulloch and Walter Pitts in
1943. They proposed a model of artificial neurons.
o Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection strength between
neurons. His rule is now called Hebbian learning.
o Year 1950: Alan Turing, an English mathematician, pioneered machine learning in 1950. Turing published
"Computing Machinery and Intelligence", in which he proposed a test that checks a machine's ability to exhibit
intelligent behavior equivalent to human intelligence, now called the Turing test.
The birth of Artificial Intelligence (1952-1956)
o Year 1955: Allen Newell and Herbert A. Simon created the "first artificial intelligence program", which was
named the "Logic Theorist". This program proved 38 of 52 mathematics theorems and found new and more
elegant proofs for some of them.
o Year 1956: The term "Artificial Intelligence" was first adopted by the American computer scientist John McCarthy at
the Dartmouth Conference. For the first time, AI was coined as an academic field. At that time, high-level computer
languages such as FORTRAN, LISP, and COBOL were invented, and enthusiasm for AI was very high.
o Year 1966: Researchers emphasized developing algorithms that could solve mathematical problems.
Joseph Weizenbaum created the first chatbot in 1966, which was named ELIZA.
o Year 1972: The first intelligent humanoid robot was built in Japan; it was named WABOT-1.
The first AI winter (1974-1980)
o The duration between the years 1974 and 1980 was the first AI winter. An AI winter refers to a period in which
computer scientists dealt with a severe shortage of government funding for AI research.
o During AI winters, public interest in artificial intelligence decreased.
A boom of AI (1980-1987)
o Year 1980: After the AI winter, AI came back with "Expert Systems". Expert systems were programmed
to emulate the decision-making ability of a human expert.
o In the year 1980, the first national conference of the American Association of Artificial Intelligence was held
at Stanford University.
The second AI winter (1987-1993)
o The duration between the years 1987 and 1993 was the second AI winter.
o Investors and the government again stopped funding AI research due to its high cost and inefficient results.
Expert systems such as XCON proved very expensive to maintain.
o Year 1997: In the year 1997, IBM's Deep Blue beat the world chess champion Garry Kasparov and became the first
computer to beat a world chess champion.
o Year 2002: For the first time, AI entered the home in the form of Roomba, a robotic vacuum cleaner.
o Year 2006: AI entered the business world by 2006. Companies like Facebook, Twitter, and Netflix
started using AI.
o Year 2011: In the year 2011, IBM's Watson won Jeopardy!, a quiz show in which it had to solve complex
questions as well as riddles. Watson proved that it could understand natural language and solve tricky
questions quickly.
o Year 2012: Google launched an Android app feature, "Google Now", which was able to provide information
to the user as a prediction.
o Year 2014: In the year 2014, the chatbot "Eugene Goostman" won a competition in the famous "Turing test."
o Year 2018: "Project Debater" from IBM debated complex topics with two master debaters and
performed extremely well.
o Year 2018: Google demonstrated an AI program, "Duplex", a virtual assistant that booked a
hairdresser appointment over the phone; the person on the other end did not notice that she was talking to a machine.
AI has applications in all fields of human study, such as finance and economics, environmental engineering, chemistry,
computer science, and so on. Its major subfields include:
Perception
■ Machine vision
■ Speech understanding
Robotics
Natural Language Processing
■ Speech Understanding
■ Language Generation
■ Machine Translation
Planning
Expert Systems
Machine Learning
Theorem Proving
Symbolic Mathematics
Game Playing
Intelligent Agents
An (intelligent) agent perceives its environment via sensors and acts rationally upon that environment with its
effectors. Hence, an agent receives percepts one at a time and maps this percept sequence to actions.
Another definition: An agent is a computer software system whose main characteristics are situatedness,
autonomy, adaptivity, and sociability.
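This percept-to-action mapping can be sketched in a few lines of Python. The sketch below is purely illustrative; the names Agent, step, and decide are assumptions of ours, not a standard API.

from abc import ABC, abstractmethod

class Agent(ABC):
    """Maps the percept sequence seen so far to an action (illustrative sketch)."""

    def __init__(self):
        self.percepts = []  # everything the sensors have reported so far

    @abstractmethod
    def decide(self, percepts):
        """Choose an action based on the full percept sequence."""

    def step(self, percept):
        """Sensors deliver one percept; effectors receive one action back."""
        self.percepts.append(percept)
        return self.decide(self.percepts)

class EchoAgent(Agent):
    """A trivial concrete agent that acts on the most recent percept."""
    def decide(self, percepts):
        return f"act-on:{percepts[-1]}"

agent = EchoAgent()
print(agent.step("obstacle ahead"))  # -> act-on:obstacle ahead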
Agent Characteristics
Situatedness
The agent receives some form of sensory input from its environment, and it performs some action that changes
its environment in some way. Examples of environments: the physical world and the Internet.
Autonomy
The agent can act without direct intervention by humans or other agents and that it has control over its own
actions and internal state.
Adaptivity
The agent is capable of (1) reacting flexibly to changes in its environment; (2) taking goal-directed initiative (i.e.,
is pro-active), when appropriate; and (3) learning from its own experience, its environment, and interactions
with others.
Sociability
The agent is capable of interacting in a peer-to-peer manner with other agents or humans.
Examples of Agents
Humans (sensors: eyes, ears; effectors: hands, legs), robots (sensors: cameras, range finders; effectors: motors,
grippers), and software agents (sensors: received messages, file contents; effectors: displayed output, sent
messages) are all agents in this sense.
Rationality => We need a performance measure to say how well a task has been achieved. An ideal rational agent
should, for each possible percept sequence, do whatever actions will maximize its performance measure, based
on (1) the percept sequence and (2) its built-in and acquired knowledge. Rationality therefore includes information
gathering, not "rational ignorance."
Types of objective performance measures: false alarm rate, false dismissal rate, time taken, resources required,
effect on the environment, etc.
Examples: benchmarks and test sets, the Turing test (there is no homunculus!)
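As a concrete illustration, consider the textbook two-square vacuum world with the performance measure "one point per clean square per time step". This is a minimal sketch under stated assumptions; the environment, the reflex rule, and the scoring constant are illustrative choices, not taken from these notes.

def run_vacuum_agent(steps=20):
    """Score a simple reflex vacuum agent: +1 per clean square per time step."""
    dirty = {"A": True, "B": True}  # both squares start dirty
    location = "A"
    score = 0
    for _ in range(steps):
        dirt_percept = dirty[location]   # what the dirt sensor reports
        if dirt_percept:                 # reflex rule: suck if the square is dirty...
            dirty[location] = False
        else:                            # ...otherwise move to the other square
            location = "B" if location == "A" else "A"
        score += sum(not d for d in dirty.values())  # the performance measure
    return score

print(run_vacuum_agent())  # prints 38 here; a higher score means a more rational agent

A different performance measure (for example, one that also penalizes energy spent per move) would rank agents differently, which is exactly why the measure must be chosen to reflect the task.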
Computer vision
Computer vision (CV) is the automatic extraction, analysis, and interpretation of images or videos. CV
converts photos and videos into numerical arrays, enabling ML algorithms to draw inferences, make
predictions, and even generate new images based on user-defined inputs.
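To make "numerical arrays" concrete, here is a minimal NumPy sketch; the image is synthetic random data standing in for a real photo, and the grayscale weights are the standard Rec. 601 luminance values.

import numpy as np

# A synthetic 4x4 RGB "photo": height x width x 3 color channels, values 0-255.
image = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)

# Collapse the 3 channels to one grayscale value per pixel using luminance weights;
# an ML model consumes arrays exactly like these as its input features.
grayscale = image @ np.array([0.299, 0.587, 0.114])

print(image.shape, grayscale.shape)  # (4, 4, 3) -> (4, 4)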
Potential uses for CV have been studied for decades, but CV has only recently become possible at
scale thanks to three innovations:
Better computing resources: GPU improvements, distributed architectures (e.g., Spark), and the availability
of inexpensive cloud computing resources have made it cheaper than ever to run memory-hungry CV
algorithms.
Availability of images to train on: The proliferation of social media platforms, community forums,
and digital / mobile cameras has drastically increased the number of publicly available images
that can be used to train CV algorithms.
Breakthroughs in deep learning: Advances in neural network architectures, particularly convolutional
neural networks (CNNs), have dramatically improved the accuracy of image recognition tasks.
These three innovations have opened the floodgates for new CV use cases, including self-driving cars
and automated retailers (e.g., Amazon Go). As cameras, LIDAR (light detection and ranging), and other
spatial sensors become less expensive, we’ll soon find ways to alleviate many of our most inefficient
processes using CV.
Natural Language Processing
Natural language processing (NLP) is the automatic extraction, analysis, and generation of human
language. NLP algorithms parse sentences in various ways (e.g., splitting by word, splitting by letter,
reading both left-to-right and right-to-left, etc.) to automatically draw inferences about the writer's
meaning and intent; a short sketch of these splitting strategies follows the list below. NLP's various use cases include:
Part-of-speech tagging
Machine translation
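Here is a short sketch of the splitting strategies described above, using only the Python standard library; the sample sentence is invented, and production systems use trained tokenizers, but the parsing idea is the same.

import re

sentence = "AI converts language into units a model can reason about."

# Word-level split: one common way to parse a sentence.
words = re.findall(r"[A-Za-z']+", sentence.lower())

# Letter-level split: a finer granularity some models read at.
letters = list(sentence.replace(" ", ""))

# A right-to-left view, for models that also read in reverse.
reversed_words = words[::-1]

print(words[:5])           # ['ai', 'converts', 'language', 'into', 'units']
print(letters[:5])         # ['A', 'I', 'c', 'o', 'n']
print(reversed_words[:3])  # ['about', 'reason', 'can']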
Like CV, NLP has come a long way over the past decade thanks to innovations in deep learning that have
made it faster and easier to train ML models on human language. In the past, engineers would spend
hours examining, filtering, and transforming text to avoid computational bottlenecks. Today, out-of-the-
box solutions like fast.ai's NLP library can crush reading comprehension accuracy records without the need
for time-intensive preprocessing.
Siri and Alexa are great examples of NLP in action: by listening for “wake words”, these tools allow you to
play music, search the Web, create to-do lists, and control popular smart-home products — all while your
smartphone stays in your pocket. These virtual assistants will continue to improve over time as they gather
data from existing users, unlocking new use cases and integrating with the modern enterprise.