ARTIFICIAL INTELLIGENCE
Artificial intelligence has been used in a wide variety of applications, including engineering,
technology, the military, opinion mining, and sentiment analysis. In more advanced domains,
it is used for natural language processing and aerospace applications.
AI is everywhere in today's world, and people are gradually becoming accustomed to its
presence. It is utilised in systems that recognise both voices and faces, and it can provide
shopping recommendations that are tailored to your own purchasing preferences. It also
makes it much easier to detect spam and prevent fraudulent use of credit cards. Among the
most cutting-edge technologies currently on the market are virtual assistants such as Apple's
Siri, Amazon's Alexa, Microsoft's Cortana, and Google's own Google Assistant.
Artificial Intelligence (AI) is intelligence incorporated into machines; in other words, AI is
the ability of a machine to display human-like capabilities such as reasoning, learning,
planning, and creativity.
There are four distinct objectives that might be pursued in the field of artificial intelligence.
These objectives are as follows:
• The creation of systems that think in the same way as people do.
• The creation of systems that are capable of logical thought.
• The creation of machines that can mimic human behaviour.
• The creation of systems that behave in a logical manner.
In a general sense, AI can be categorised into the following levels:
• Software level: this consists of things like search engines, virtual assistants, speech and
facial recognition systems, image analysis tools, and similar applications.
• Hardware level (embodied or embedded AI): robots, autonomous vehicles, drones, the
Internet of Things, and other such technologies fall under this category.
BRIEF HISTORY - ARTIFICIAL INTELLIGENCE
Important research that laid the groundwork for AI:
In 1931, Gödel laid the foundations of theoretical computer science: he published the
first universal formal language and showed that mathematics itself is either flawed or
allows for unprovable but true statements.
In 1936, Turing reformulated Gödel's result and Church's extension thereof.
In 1956, John McCarthy coined the term "Artificial Intelligence" as the topic of the
Dartmouth Conference, the first conference devoted to the subject.
In 1957, the General Problem Solver (GPS) was demonstrated by Newell, Shaw &
Simon.
In 1958, John McCarthy (MIT) invented the Lisp language.
In 1959, Arthur Samuel (IBM) wrote the first game-playing program, for checkers, to
achieve sufficient skill to challenge a world champion.
In 1963, Ivan Sutherland's MIT dissertation on Sketchpad introduced the idea of
interactive graphics into computing.
In 1966, Ross Quillian (PhD dissertation, Carnegie Inst. of Technology; now CMU)
demonstrated semantic nets.
In 1967, the Dendral program (Edward Feigenbaum, Joshua Lederberg, Bruce Buchanan,
Georgia Sutherland at Stanford) was demonstrated; it interpreted mass spectra of
organic chemical compounds and was the first successful knowledge-based program for
scientific reasoning.
In 1967, Doug Engelbart invented the mouse at SRI.
In 1969, Marvin Minsky & Seymour Papert published Perceptrons, demonstrating the
limits of simple neural nets.
In 1972, Prolog was developed by Alain Colmerauer.
In the mid-1980s, neural networks became widely used with the backpropagation
algorithm (first described by Werbos in 1974).
In the 1990s, major advances were made in all areas of AI, with significant
demonstrations in machine learning, intelligent tutoring, case-based reasoning,
multi-agent planning, scheduling, uncertain reasoning, data mining, natural language
understanding and translation, vision, virtual reality, games, and other topics.
In 1997, Deep Blue beat the World Chess Champion, Garry Kasparov.
In 2002, iRobot, founded by researchers at the MIT Artificial Intelligence Lab,
introduced Roomba, a vacuum cleaning robot. By 2006, two million had been sold.
FOUR APPROACHES IN ARTIFICIAL INTELLIGENCE
1. Acting humanly i.e. the Turing Test approach: The Turing Test, proposed by Alan
Turing (Turing, 1950), was designed to provide a
satisfactory operational definition of intelligence. Turing defined intelligent behaviour as the
ability to achieve human-level performance in all cognitive tasks. The test he proposed is that
the computer should be interrogated by a human via a teletype, and passes the test if the
interrogator cannot tell if there is a computer or a human at the other end.
The Turing Test is a method of inquiry in artificial intelligence (AI) for determining whether
or not a computer is capable of thinking like a human being. Turing proposed that a computer
can be said to possess artificial intelligence if it can mimic human responses under specific
conditions. The original Turing Test requires three terminals, each of which is physically
separated from the other two. One terminal is operated by a computer, while the other two are
operated by humans.
During the test, one of the humans functions as the questioner, while the second human and
the computer function as respondents. The questioner interrogates the respondents within a
specific subject area, using a specified format and context. After a pre-set length of time or
number of questions, the questioner is then asked to decide which respondent was human and
which was a computer.
The test is repeated many times. If the questioner makes the correct determination in half of
the test runs or less, the computer is considered to have artificial intelligence because the
questioner regards it as "just as human" as the human respondent.
The basic idea of the Turing Test is simple: a human judge engages in a text-based
conversation with both a human and a machine, and then decides which of the two they
believe to be a human. If the judge is unable to distinguish between the human and the
machine based on the conversation, then the machine is said to have passed the Turing
Test.
While the Turing Test has been used as a measure of machine intelligence for over six
decades, it is not without its critics. Some argue that the test is too focused on language and
does not take into account other important aspects of intelligence, such as perception,
problem-solving, and decision-making. Despite its limitations, the Turing Test remains an
important reference point in the field of artificial intelligence and continues to inspire new
research and development in this area.
Imagine a game of three players: two humans and one computer, in which an interrogator
(a human) is isolated from the other two players. The interrogator's job is to figure out
which one is human and which one is a computer by asking questions of both of them.
To make things harder, the computer tries to make the interrogator guess wrongly; in
other words, the computer tries to be as indistinguishable from a human as possible.
In the “standard interpretation” of the Turing Test, player C, the interrogator, is
given the task of trying to determine which player – A or B – is a computer and which is a
human. The interrogator is limited to using the responses to written questions to make the
determination.
The conversation between the interrogator and the computer would go like this:
C (Interrogator): Are you a computer?
A (Computer): No
C: Multiply one large number by another: 158745887 * 56755647
A: (After a long pause, an incorrect answer!)
C: Add 5478012 and 4563145
A: (Pauses for about 20 seconds and then gives an answer) 10041157
If the interrogator is unable to distinguish the answers provided by the human and the
computer, then the computer passes the test and the machine (computer) is considered as
intelligent as a human. In other words, a computer would be considered intelligent if its
conversation couldn't easily be distinguished from a human's. The whole conversation
would be limited to a text-only channel such as a computer keyboard and screen.
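To make this setup concrete, here is a minimal sketch in Python of how such a text-only
session could be staged. Everything in it is invented for illustration: the machine()
respondent imitates human weaknesses by pausing on arithmetic and answering slightly
wrong, just as in the dialogue above; a real imitation-game program would of course be
far more sophisticated.

    import re
    import random
    import time

    def machine(question):
        # Toy "machine" respondent: denies being a computer, and imitates
        # human weaknesses by pausing on arithmetic and erring deliberately.
        if "computer" in question.lower():
            return "No"
        nums = [int(n) for n in re.findall(r"\d+", question)]
        if len(nums) == 2:
            time.sleep(1)                                         # human-like pause
            return str(nums[0] * nums[1] + random.randint(1, 99)) # wrong on purpose
        return "Hmm, let me think..."

    def human(question):
        # Stand-in for the human respondent.
        if "computer" in question.lower():
            return "No"
        return "That is far too hard to do in my head."

    def interrogate(respondents, questions):
        # Text-only channel: print each respondent's answer so a human
        # judge could try to tell the two apart.
        for q in questions:
            print("C:", q)
            for label, respond in respondents.items():
                print(f"  {label}: {respond(q)}")

    interrogate({"A": machine, "B": human},
                ["Are you a computer?", "Multiply 158745887 * 56755647"])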
The Turing Test has been criticized over the years, in particular because historically, the
nature of the questioning had to be limited in order for a computer to exhibit human-like
intelligence. For many years, a computer might score high only if the questioner formulated
the queries so that they had "Yes" or "No" answers or pertained to a narrow field of knowledge.
When questions were open-ended and required conversational answers, it was less likely that
the computer program could successfully fool the questioner.
Advantages of the Turing Test in Artificial Intelligence:
(a) Evaluating machine intelligence: The Turing Test provides a simple and well-known
method for evaluating the intelligence of a machine.
(b) Setting a benchmark: The Turing Test sets a benchmark for artificial intelligence
research and provides a goal for researchers to strive towards.
(c) Inspiring research: The Turing Test has inspired numerous studies and experiments
aimed at developing machines that can pass the test, which has driven progress in the
field of artificial intelligence.
(d) Simple to administer: The Turing Test is relatively simple to administer and can be
carried out with just a computer and a human judge.
Disadvantages of the Turing Test in Artificial Intelligence:
(a) Limited scope: The Turing Test is limited in scope, focusing primarily on language-
based conversations and not taking into account other important aspects of
intelligence, such as perception, problem-solving, and decision-making.
(b) Human bias: The results of the Turing Test can be influenced by the biases and
preferences of the human judge, making it difficult to obtain objective and reliable
results.
(c) Not representative of real-world AI: The Turing Test may not be representative of the
kind of intelligence that machines need to demonstrate in real-world applications.
2. Thinking humanly i.e. the cognitive modelling approach: From this point of view, the
artificial intelligence model is based on human cognition, the core of the human mind.
This is done through three approaches, which are as follows:
• Introspection, which means to look at our own thoughts and use those thoughts to build a
model.
• Psychological Experiments, which means running tests on people and looking at how they
act.
• Brain imaging, which means to use an MRI to study how the brain works in different
situations and then copy that through code.
3. Thinking rationally i.e. the laws of thought approach: This approach uses the laws of
thought to think logically. The laws of thought are a long list of logical statements that
tell our minds how to work, and this method is based on those laws. By implementing
them as artificial intelligence algorithms, these laws can be written down and made to
operate. The biggest problem with this approach, however, is that solving a problem by
following formal laws is very different from solving a problem in the real world.
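As an illustration of what implementing a law of thought as an algorithm could look like
(not taken from the text), here is a minimal forward-chaining sketch that applies modus
ponens: from "if P then Q" and "P", conclude "Q". The facts and rules are invented for
the example.

    # A minimal "laws of thought" sketch: facts plus if-then rules
    # produce new facts by repeated application of modus ponens.

    facts = {"socrates_is_a_man"}                          # known statements
    rules = [("socrates_is_a_man", "socrates_is_mortal")]  # (premise, conclusion)

    changed = True
    while changed:                                         # keep applying rules
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)                      # modus ponens step
                changed = True

    print(facts)   # {'socrates_is_a_man', 'socrates_is_mortal'}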
4. Acting rationally i.e. the rational agent approach: In every situation, a rational agent
tries to achieve the best possible outcome; that is, it tries to make the best decision it
can, given the circumstances. This makes the rational agent approach much more flexible
and open to change. The laws of thought approach, on the other hand, requires that a
thing act in a way that is logically correct. But there are situations in which there is
no logically right thing to do and more than one way to solve the problem, each with
different results and trade-offs; at that point, the rational agent approach works well.
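The following is a minimal sketch of the rational-agent idea under invented assumptions:
a toy set of actions, each with made-up outcome probabilities and utilities. The agent
simply picks the action with the highest expected utility, which is how it can weigh
alternatives that have different results and trade-offs.

    # Each action maps to a list of (probability, utility) outcome pairs;
    # the numbers are invented purely for illustration.
    actions = {
        "take_highway":   [(0.7, 10), (0.3, -5)],   # fast, but may jam
        "take_backroads": [(1.0, 6)],               # slower but certain
    }

    def expected_utility(outcomes):
        return sum(p * u for p, u in outcomes)

    def rational_agent(actions):
        # Choose the action whose expected utility is highest.
        return max(actions, key=lambda a: expected_utility(actions[a]))

    print(rational_agent(actions))   # 'take_backroads' (EU 6.0 beats 5.5)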
e) Manufacturing: AI-based solutions are expected to help the manufacturing industry the
most. This will make possible the "Factory of the Future" by allowing flexible and adaptable
technical systems to automate processes and machinery that can respond to new or
unexpected situations by making smart decisions. Impact areas include engineering (AI for
R&D), supply chain management (predicting demand), production (AI can cut costs and
increase efficiency), maintenance (predictive maintenance and better use of assets), quality
assurance (e.g., vision systems with machine learning algorithms to find flaws and
differences in product features), and in-plant logistics and warehousing.
f) Energy: In the energy sector, possible use cases include modelling and forecasting the
energy system to make it less unpredictable, and making the balancing and use of power
more efficient. In renewable energy systems, AI can help store energy through smart meters
and intelligent grids, and it can make photovoltaic energy more reliable and less expensive.
AI could also be used for predictive maintenance of grid infrastructure, just as it is in
manufacturing.
g) Smart Cities: Integrating AI into newly built smart cities and infrastructure could also
help meet the needs of a population that is moving to cities quickly and improve the quality
of life for those people. Some possible use cases include controlling traffic to reduce traffic
jams and managing crowds better to improve security.
h) Education and Skilling: Quality and access problems in the education sector might be
fixed by AI. Possible uses include adding to and improving the learning experience through
personalized learning, automating and speeding up administrative tasks, and predicting when
a student needs help to keep them from dropping out or to suggest vocational training.
i) Financial industry: The financial industry also uses AI. For example, it helps the fraud
department of a bank find and flag suspicious banking and finance activities, such as
unusual debit card use and large account deposits. AI is also used to make trading easier
and more efficient, by making it simpler to determine how many securities are being bought
and sold and at what price.
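As a hedged illustration of the fraud-flagging idea, the sketch below marks a transaction
as suspicious when it deviates strongly from an account's usual spending pattern. The
transaction history and the threshold are invented; real banking systems use far richer
features and models.

    from statistics import mean, stdev

    # Toy transaction history for one account; the amounts are
    # invented for illustration only.
    history = [42.0, 15.5, 60.0, 38.0, 25.0, 47.5, 30.0, 55.0]

    def is_suspicious(amount, history, z_threshold=3.0):
        # Flag an amount more than z_threshold standard deviations from
        # the account's historical mean -- a crude anomaly test.
        mu, sigma = mean(history), stdev(history)
        return abs(amount - mu) > z_threshold * sigma

    for amount in [52.0, 950.0]:
        print(amount, "suspicious:", is_suspicious(amount, history))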
Top Used Applications of Artificial Intelligence
• Plagiarism tools and checkers
• Facial recognition
• AI autopilots on commercial planes
• Ride-sharing applications (E.g.: Uber, Lyft)
• E-mail spam filters, voice-to-text features, and search suggestions
• Google's AI-based predictions (E.g.: Google Maps)
• Fraud protection and prevention
• Smart personal assistants (E.g.: Siri, Alexa)
There are various ways to use artificial intelligence. The technology can be used in
different industries and sectors; its adoption has been affected by technical and
regulatory challenges, but the biggest factor has been how it will affect business.
COMPARISON - ARTIFICIAL INTELLIGENCE, MACHINE
LEARNING & DEEP LEARNING
Artificial intelligence is a big field that includes a lot of different ways of doing things, from
top-down (knowledge representation) to bottom-up (machine learning). In recent years,
people have often talked about three related ideas: artificial intelligence (AI), machine
learning (ML), and deep learning (DL). AI is the most general term, machine learning is
a part of AI, and deep learning is a type of machine learning.
Machine Learning (ML): Machine learning is a branch of artificial intelligence (AI). It captures
one of the most important ideas in AI: learning through experience rather than through being
explicitly taught. One of the most recent advances in AI, this way of learning is made
possible by applying machine learning to very large data sets. Machine learning algorithms find
patterns and learn how to make predictions and recommendations by using data and experiences
instead of explicit programming instructions. The algorithms also change as they get new data
and learn from their experiences. This makes them more effective over time.
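As a minimal sketch of learning from data and experience instead of explicit programming
instructions, the following fits a one-variable linear model by gradient descent. The data
points are invented; the point is that the parameters w and b are learned from examples
rather than hand-coded.

    # Minimal machine learning sketch: fit y ≈ w*x + b by gradient descent.
    data = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 8.8)]  # (x, y) pairs

    w, b = 0.0, 0.0          # start with a model that knows nothing
    lr = 0.01                # learning rate

    for _ in range(5000):    # repeatedly nudge w, b to reduce squared error
        grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
        grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
        w -= lr * grad_w
        b -= lr * grad_b

    print(f"learned model: y = {w:.2f}*x + {b:.2f}")   # roughly y = 2x + 1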
Deep Learning (DL): Deep learning is a subfield of machine learning that focuses on algorithms
called artificial neural networks (ANNs), which are modelled on how the brain is built and how
it works. Deep learning can handle a wider range of data sources, needs less pre-processing of
data, and often gives more accurate results than traditional machine learning methods.
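To make the idea of an artificial neural network concrete, here is a minimal sketch,
assuming only that NumPy is available and using XOR as a toy task: a two-layer network
trained by backpropagation. The layer sizes, learning rate, and iteration count are
arbitrary choices for illustration.

    import numpy as np

    # Tiny neural network: 2 inputs -> 4 hidden units -> 1 output,
    # trained by backpropagation to learn XOR.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(10000):
        # forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # backward pass: gradients of the squared error
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

    print(out.round(2).ravel())   # should approach [0, 1, 1, 0]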