
Artificial Intelligence
Ebook · 1,218 pages · 13 hours · Artificial Intelligence Books Series



Language: English
Publisher: Poorav Publications
Release date: Nov 13, 2024
ISBN: 9789369918621

    Book preview

    Artificial Intelligence - Manish Soni

    Preface

    Welcome to the world of Artificial Intelligence. This book is designed to provide you with a comprehensive introduction to the exciting field of Artificial Intelligence. Whether you are a student, a professional, or simply someone curious about the latest advancements in AI, this book aims to be your go-to resource. Artificial Intelligence has become an integral part of our daily lives, impacting industries such as healthcare, finance, transportation, and entertainment. As AI technologies continue to evolve, the demand for individuals with expertise in AI is on the rise. Whether you are pursuing a degree in computer science, aiming to enhance your career prospects, or simply fascinated by the endless possibilities of AI, this book is here to guide you on your journey.

    Key Features of This Book:

    1. In-Depth Coverage: We provide a comprehensive overview of various AI concepts, techniques, and technologies, ranging from machine learning and deep learning to natural language processing and computer vision.

    2. Exercises: To help you grasp the concepts and reinforce your learning, we offer a wide range of exercises at the end of each chapter. These exercises include multiple-choice questions, coding assignments, and practical projects to ensure that you gain hands-on experience.

    3. Previous Year Solved Papers: For students preparing for examinations, we have included solved papers from prestigious institutions such as IGNOU and RTU. These solved papers serve as valuable resources to understand the type of questions asked in exams and help you practice effectively.

    4. Unsolved Papers: In addition to solved papers, we provide unsolved papers for you to practice and assess your progress. These papers are designed to challenge your understanding of AI concepts and prepare you for various assessments.

    5. Online Resources: To complement your learning, we offer a range of online resources. You can access chapter-wise test papers to evaluate your knowledge, and we provide videos that cover each chapter in detail. Additionally, our collection of multiple-choice quizzes and true-false questions will help you test your knowledge and track your improvement.

    How to Use This Book:

    This book is structured to cater to both beginners and those with some prior knowledge of AI. If you are new to the field, start from the beginning and work your way through each chapter. If you have some experience with AI, feel free to jump to specific chapters that interest you the most. We encourage you to actively engage with the material by attempting the exercises, working on projects, and using the online resources provided. AI is a hands-on field, and practical experience is invaluable.

    Join the AI Revolution:

    Artificial Intelligence is an ever-evolving field with endless possibilities. Whether your goal is to excel academically, advance in your career, or simply satisfy your curiosity, we hope this book serves as a valuable tool on your AI journey.

    As the field of AI continues to progress, staying updated is crucial. We recommend exploring additional online resources, forums, and communities dedicated to AI. Keep in mind that AI is a collaborative field, and learning from others and sharing your knowledge can be immensely rewarding. So, let's embark on this exciting journey into the world of Artificial Intelligence together! Remember, the possibilities are limitless, and the future is AI-driven. Enjoy the learning process, and may this book be your trusted companion on your path to mastering AI.

    Table of Contents

    Preface

    CHAPTER 1: Introduction

    CHAPTER 2: Intelligent Agents

    CHAPTER 3: Solving Problems by Searching

    CHAPTER 4: Beyond Classical Search

    CHAPTER 5: Adversarial Search

    CHAPTER 6: Constraint Satisfaction Problems

    CHAPTER 7: Logical Agents

    CHAPTER 8: First-Order Logic

    CHAPTER 9: Inference in First-Order Logic

    CHAPTER 10: Classical Planning

    CHAPTER 11: Planning and Acting in the Real World

    CHAPTER 12: Knowledge Representation

    CHAPTER 13: Quantifying Uncertainty

    CHAPTER 14: Probabilistic Reasoning

    CHAPTER 15: Probabilistic Reasoning over Time

    CHAPTER 16: Making Simple Decisions

    CHAPTER 17: Making Complex Decisions

    CHAPTER 18: Learning from Examples

    CHAPTER 19: Knowledge in Learning

    CHAPTER 20: Learning Probabilistic Models

    CHAPTER 21: Reinforcement Learning

    CHAPTER 22: Natural Language Processing

    CHAPTER 23: Natural Language for Communication

    CHAPTER 24: Perception

    CHAPTER 25: Robotics

    CHAPTER 26: Philosophical Foundations

    CHAPTER 27: AI: The Present and Future

    CHAPTER 28: Neural Networks

    CHAPTER 29: Additional Topics

    CHAPTER 1: Introduction

    1.0 Introduction

    In today's world, technology is advancing rapidly, and we come into contact with new technologies every day.

    One of the booming technologies of computer science is Artificial Intelligence, which is poised to create a new revolution by making intelligent machines. Artificial Intelligence is now all around us, working across a variety of subfields, ranging from the general to the specific, such as self-driving cars, playing chess, proving theorems, composing music, painting, and more.

    1.0.1 ARTIFICIAL INTELLIGENCE DEFINITION: BASICS OF AI

    1.0.1.1 Understanding AI

    Broadly speaking, Artificially Intelligent systems can perform tasks commonly associated with human cognitive functions — such as interpreting speech, playing games and identifying patterns. They typically learn how to do so by processing massive amounts of data, looking for patterns to model in their own decision-making. In many cases, humans will supervise an AI’s learning process, reinforcing good decisions and discouraging bad ones. But some AI systems are designed to learn without supervision — for instance, by playing a video game over and over until they eventually figure out the rules and how to win.
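    As a toy illustration of that trial-and-error idea (my sketch, not the book's), the following Python agent discovers a hidden winning move purely from a win/lose signal, with no supervision telling it the rule:

```python
import random

# Toy illustration: an agent that learns which of three moves wins a simple
# game purely by trial and error -- the "play until you figure out the rules
# and how to win" idea described above. The winning move is hidden from the
# agent; it only ever sees a reward of 1 (win) or 0 (lose).

random.seed(0)

WINNING_MOVE = 2          # hidden rule the agent must discover
value = [0.0, 0.0, 0.0]   # the agent's estimated value of each move
counts = [0, 0, 0]

for episode in range(1000):
    # explore at random early on (and occasionally later), otherwise
    # exploit whichever move currently looks best
    if episode < 30 or random.random() < 0.1:
        move = random.randrange(3)
    else:
        move = value.index(max(value))
    reward = 1.0 if move == WINNING_MOVE else 0.0
    counts[move] += 1
    value[move] += (reward - value[move]) / counts[move]  # running average

print("Learned best move:", value.index(max(value)))
```

    After enough episodes, the estimate for the winning move dominates and the agent reliably picks it, despite never having been told the rule.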

    1.0.1.2 Strong AI Vs. Weak AI

    Intelligence is tricky to define, which is why AI experts typically distinguish between strong AI and weak AI.

    Strong AI - Strong AI, also known as Artificial General Intelligence, is a machine that can solve problems it’s never been trained to work on — much like a human can. This is the kind of AI we see in movies, like the robots from Westworld or the character Data from Star Trek: The Next Generation. This type of AI doesn’t actually exist yet. The creation of a machine with human-level intelligence that can be applied to any task is the Holy Grail for many AI researchers, but the quest for artificial general intelligence has been fraught with difficulty. And some believe strong AI research should be limited, due to the potential risks of creating a powerful AI without appropriate guardrails. In contrast to weak AI, strong AI represents a machine with a full set of cognitive abilities — and an equally wide array of use cases — but time hasn't eased the difficulty of achieving such a feat.

    Weak AI - Weak AI, sometimes referred to as narrow AI or specialized AI, operates within a limited context and is a simulation of human intelligence applied to a narrowly defined problem (like driving a car, transcribing human speech or curating content on a website). Weak AI is often focused on performing a single task extremely well. While these machines may seem intelligent, they operate under far more constraints and limitations than even the most basic human intelligence. Weak AI examples include:

    Siri

    Alexa and other smart assistants

    Self-driving cars

    Google search

    Conversational bots

    Email spam filters

    Netflix’s recommendations

    1.1 What is AI?

    Artificial Intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition and machine vision. When you hear the term artificial intelligence (AI), you might think of self-driving cars, robots, ChatGPT or other AI chatbots, and artificially created images. But it's also important to look behind the outputs of AI and understand how the technology works and its impacts for current and future generations. AI is a concept that has been around formally since the 1950s, when it was defined as a machine's ability to perform a task that would previously have required human intelligence. This is quite a broad definition, and one that has been modified over decades of research and technological advancements. Artificial Intelligence is a wide-ranging branch of computer science concerned with building smart machines capable of performing tasks that typically require human intelligence. While AI is an interdisciplinary science with multiple approaches, advancements in machine learning and deep learning in particular are creating a paradigm shift in virtually every sector of the tech industry.

    Artificial Intelligence allows machines to model, or even improve upon, the capabilities of the human mind. And from the development of self-driving cars to the proliferation of generative AI tools like ChatGPT and Google’s Bard, AI is increasingly becoming part of everyday life — and an area companies across every industry are investing in.

    1.1.1 Machine Learning Vs. Deep Learning

    Although the terms machine learning and deep learning come up frequently in conversations about AI, they should not be used interchangeably. Deep learning is a form of machine learning, and machine learning is a subfield of artificial intelligence.

    Machine Learning - A machine learning algorithm is fed data by a computer and uses statistical techniques to help it learn how to get progressively better at a task, without necessarily having been specifically programmed for that task. Instead, ML algorithms use historical data as input to predict new output values. To that end, ML consists of both supervised learning (where the expected output for the input is known, thanks to labeled data sets) and unsupervised learning (where the expected outputs are unknown, due to the use of unlabeled data sets).
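    As a minimal illustration (not from the book) of supervised learning on labeled historical data, this Python sketch fits a straight line to (input, output) pairs by closed-form least squares, then predicts an output for a new, unseen input:

```python
# A minimal sketch of supervised learning: labeled historical data (inputs
# xs with known outputs ys) is used to fit a simple model, which can then
# predict output values for inputs it has never seen. The data here is
# made up for illustration (roughly y = 2x).

xs = [1.0, 2.0, 3.0, 4.0, 5.0]   # inputs
ys = [2.1, 3.9, 6.2, 8.0, 9.8]   # labeled outputs

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# closed-form least-squares fit of y = slope * x + intercept
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

def predict(x):
    return slope * x + intercept

print(predict(6.0))  # extrapolate to a new, unlabeled input
```

    An unsupervised method, by contrast, would receive only the xs (no ys) and have to find structure, such as clusters, on its own.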

    Deep Learning - Deep learning is a type of machine learning that runs inputs through a biologically inspired neural network architecture. The neural networks contain a number of hidden layers through which the data is processed, allowing the machine to go deep in its learning, making connections and weighting input for the best results.
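    As an illustrative sketch (not from the book), here is a tiny one-hidden-layer network in pure Python. The weights are hand-picked to compute XOR, whereas real deep learning learns such weights from data; the point is only to show inputs flowing through weighted layers and nonlinear activations:

```python
import math

# A tiny feedforward network with one hidden layer, wired by hand to
# compute XOR. Each neuron weighs its inputs, adds a bias, and applies a
# sigmoid nonlinearity -- the same building block deep networks stack in
# many hidden layers.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)

def forward(x1, x2):
    h1 = neuron([x1, x2], [20, 20], -10)    # hidden neuron, acts like OR
    h2 = neuron([x1, x2], [-20, -20], 30)   # hidden neuron, acts like NAND
    return neuron([h1, h2], [20, 20], -30)  # output neuron, acts like AND

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(forward(a, b)))  # rounds to the XOR truth table
```

    Training replaces the hand-picked numbers with weights found by gradient descent, but the forward pass, layer after layer of weighted sums and activations, looks just like this.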

    1.1.2 TYPES OF ARTIFICIAL INTELLIGENCE

    The Four Types of AI

    AI can be divided into four categories, based on the type and complexity of the tasks a system is able to perform. They are:

    1. Reactive machines

    2. Limited memory

    3. Theory of mind

    4. Self awareness

    1. Reactive Machines - A Reactive Machine follows the most basic of AI principles and, as its name implies, is capable of only using its intelligence to perceive and react to the world in front of it. A reactive machine cannot store a memory and, as a result, cannot rely on past experiences to inform decision making in real time. Perceiving the world directly means that reactive machines are designed to complete only a limited number of specialized duties. Intentionally narrowing a reactive machine’s worldview has its benefits, however: This type of AI will be more trustworthy and reliable, and it will react the same way to the same stimuli every time.

    Reactive Machine Examples - Deep Blue was designed by IBM in the 1990s as a chess-playing supercomputer and defeated international grandmaster Garry Kasparov in a game. Deep Blue was only capable of identifying the pieces on a chess board and knowing how each moves based on the rules of chess, acknowledging each piece’s present position and determining what the most logical move would be at that moment. The computer was not pursuing future potential moves by its opponent or trying to put its own pieces in a better position. Every turn was viewed as its own reality, separate from any other movement that was made beforehand. Google’s AlphaGo is also incapable of evaluating future moves but relies on its own neural network to evaluate developments of the present game, giving it an edge over Deep Blue in a more complex game. AlphaGo also bested world-class competitors, defeating champion Go player Lee Sedol in 2016.

    2. Limited Memory - Limited memory AI has the ability to store previous data and predictions when gathering information and weighing potential decisions — essentially looking into the past for clues on what may come next. Limited memory AI is more complex and presents greater possibilities than reactive machines. Limited memory AI is created when a team continuously trains a model on how to analyze and utilize new data, or when an AI environment is built so models can be automatically trained and renewed. When utilizing limited memory AI in ML, six steps must be followed:

    Establish training data

    Create the machine learning model

    Ensure the model can make predictions

    Ensure the model can receive human or environmental feedback

    Store human and environmental feedback as data

    Reiterate the steps above as a cycle
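    The six steps above can be sketched as a toy loop (illustrative only, with a trivial averaging "model" standing in for a real ML model): the system keeps improving because feedback on its predictions is stored and folded back into the training data on the next cycle.

```python
# A toy limited-memory training cycle. The "model" here is just the
# average of everything seen so far -- a stand-in for a real ML model.

def train(data):
    return sum(data) / len(data)

data = [10.0, 12.0, 11.0]          # 1. establish training data
for cycle in range(3):
    model = train(data)            # 2. create/retrain the model
    prediction = model             # 3. the model makes a prediction
    feedback = 14.0                # 4. human/environmental feedback arrives
    data.append(feedback)          # 5. store the feedback as new data
                                   # 6. reiterate the steps as a cycle

print(train(data))  # the model has drifted toward the feedback
```

    Each pass through the loop leaves the stored data, and therefore the retrained model, a little different from the last, which is exactly what distinguishes limited memory AI from a purely reactive machine.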

    3. Theory of Mind - Theory of mind is just that — theoretical. We have not yet achieved the technological and scientific capabilities necessary to reach this next level of AI. The concept is based on the psychological premise of understanding that other living things have thoughts and emotions that affect the behavior of one’s self. In terms of AI machines, this would mean that AI could comprehend how humans, animals and other machines feel and make decisions through self-reflection and determination, and then utilize that information to make decisions of their own. Essentially, machines would have to be able to grasp and process the concept of mind, the fluctuations of emotions in decision-making and a litany of other psychological concepts in real time, creating a two-way relationship between people and AI.

    4. Self Awareness - Once theory of mind can be established, sometime well into the future of AI, the final step will be for AI to become self-aware. This kind of AI possesses human-level consciousness and understands its own existence in the world, as well as the presence and emotional state of others. It would be able to understand what others may need based on not just what they communicate to them, but how they communicate it. Self-awareness in AI relies both on human researchers understanding the premise of consciousness and then learning how to replicate that, so it can be built into machines.

    1.1.3 Artificial Intelligence Examples

    Artificial intelligence technology takes many forms, from chatbots to navigation apps and wearable fitness trackers. The below examples illustrate the breadth of potential AI applications.

    ChatGPT - ChatGPT is an artificial intelligence chatbot capable of producing written content in a range of formats, from essays to code and answers to simple questions. Launched in November 2022 by OpenAI, ChatGPT is powered by a large language model that allows it to closely emulate human writing.

    Google Maps - Google Maps uses location data from smartphones, as well as user-reported data on things like construction and car accidents, to monitor the ebb and flow of traffic and assess what will be the fastest route.

    Smart Assistants - Personal assistants like Siri, Alexa and Cortana use natural language processing, or NLP, to receive instructions from users to set reminders, search for online information and control the lights in people’s homes. In many cases, these assistants are designed to learn a user’s preferences and improve their experience over time with better suggestions and more tailored responses.

    Snapchat Filters - Snapchat filters use ML algorithms to distinguish between an image’s subject and the background, track facial movements and adjust the image on the screen based on what the user is doing.

    Self-Driving Cars - Self-driving cars are a recognizable example of deep learning, since they use deep neural networks to detect objects around them, determine their distance from other cars, identify traffic signals and much more.

    Wearables - The wearable sensors and devices used in the healthcare industry also apply deep learning to assess the health condition of the patient, including their blood sugar levels, blood pressure and heart rate. They can also derive patterns from a patient’s prior medical data and use that to anticipate any future health conditions.

    MuZero - MuZero, a computer program created by DeepMind, is a promising frontrunner in the quest to achieve true artificial general intelligence. It has managed to master games it has not even been taught to play, including chess and an entire suite of Atari games, through brute force, playing them millions of times.

    1.2 The Foundations of Artificial Intelligence

    The Foundations of Artificial Intelligence is a research area within Georgia Tech’s School of Computer Science (SCS) that focuses on developing algorithms that leverage data and statistical tools to solve complex human tasks, on exploring novel applications of such tools, and on better understanding the apparent success of AI in practice. Instead of focusing on specific applications (e.g., computer vision, NLP or robotics), the Foundations of Artificial Intelligence area focuses on general principles and novel approaches that can be applied across a wide spectrum of applications. We are particularly interested in topics such as machine learning theory, scalable and distributed training, heterogeneity-aware inference, and robust, dynamically adaptive algorithms that help navigate multi-dimensional tradeoff spaces spanned by ML accuracy, model size, latency, and the spatio-temporal cost efficiency of both training and inference. The Foundations of Artificial Intelligence area at SCS has made significant contributions in:

    Online learning

    Reinforcement learning

    Systems support for distributed ML frameworks

    Resource management for distributed ML frameworks

    Continual learning

    Learning theory

    Federated learning

    Auto ML

    Explainable ML

    Systems support for heterogeneity-aware ML inference

    Neural Architecture Search (NAS)

    Neuro-inspired AI

    Our major sources of funding are the National Science Foundation (NSF) and the Defense Advanced Research Projects Agency (DARPA). Additionally, we participate in interdisciplinary research that brings together machine learning, neuroscience, biology, mathematics and statistics, and theoretical computer science. We welcome the involvement of graduate and undergraduate students in our research projects and the broader intellectual community.

    1.2.1 Artificial Intelligence Benefits

    AI has many uses — from boosting vaccine development to automating detection of potential fraud. AI companies raised $66.8 billion in funding in 2022, according to CB Insights research, more than double the amount raised in 2020. Because of its fast-paced adoption, AI is making waves in a variety of industries.

    Safer Banking - Business Insider Intelligence’s 2022 report on AI in banking found more than half of financial services companies already use AI solutions for risk management and revenue generation. The application of AI in banking could lead to upwards of $400 billion in savings.

    Better Medicine - As for medicine, a 2021 World Health Organization report noted that while integrating AI into the healthcare field comes with challenges, the technology holds great promise, as it could lead to benefits like more informed health policy and improvements in the accuracy of diagnosing patients.

    Innovative Media - AI has also made its mark on entertainment. The global market for AI in media and entertainment is estimated to reach $99.48 billion by 2030, growing from a value of $10.87 billion in 2021, according to Grand View Research. That expansion includes AI uses, like recognizing plagiarism and developing high-definition graphics.

    1.2.2 Challenges and Limitations of AI

    While AI is certainly viewed as an important and quickly evolving asset, this emerging field comes with its share of downsides. The Pew Research Center surveyed 10,260 Americans in 2021 on their attitudes toward AI. The results found 45 percent of respondents are equally excited and concerned, and 37 percent are more concerned than excited. Additionally, more than 40 percent of respondents said they considered driverless cars to be bad for society. Yet the idea of using AI to identify the spread of false information on social media was more well received, with close to 40 percent of those surveyed labeling it a good idea. AI is a boon for improving productivity and efficiency, while at the same time reducing the potential for human error. But there are also some disadvantages, like development costs and the possibility for automated machines to replace human jobs. It’s worth noting, however, that the artificial intelligence industry stands to create jobs, too — some of which have not even been invented yet.

    1.2.3 Future of Artificial Intelligence

    When one considers the computational costs and the technical data infrastructure running behind artificial intelligence, actually executing on AI is a complex and costly business. Fortunately, there have been massive advancements in computing technology, as indicated by Moore’s Law, which states that the number of transistors on a microchip doubles about every two years while the cost of computers is halved. Although many experts believe that Moore’s Law will likely come to an end sometime in the 2020s, this has had a major impact on modern AI techniques — without it, deep learning would be out of the question, financially speaking. Recent research found that AI innovation has actually outperformed Moore’s Law, doubling every six months or so, as opposed to two years. By that logic, the advancements artificial intelligence made across a variety of industries have been major over the last several years. And the potential for an even greater impact over the next several decades seems all but inevitable.
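    As a back-of-the-envelope illustration of those doubling rates (my arithmetic, not the book's), compare growth over the same six-year window under Moore's Law (one doubling every two years) versus the cited AI rate (one doubling every six months):

```python
# Compound doubling over six years at the two rates mentioned above.

years = 6
moore_doublings = years * 12 / 24   # one doubling per 24 months
ai_doublings = years * 12 / 6       # one doubling per 6 months

print(2 ** moore_doublings)   # 8.0    -> 8x growth over six years
print(2 ** ai_doublings)      # 4096.0 -> 4096x over the same period
```

    The gap between 8x and 4096x over the same period is what makes the "outperformed Moore's Law" claim so striking.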

    1.3 The History of Artificial Intelligence

    The history of artificial intelligence dates back to antiquity, with philosophers mulling over the idea that artificial beings, mechanical men, and other automatons had existed or could exist in some fashion. Thanks to early thinkers, artificial intelligence became increasingly more tangible throughout the 1700s and beyond. Philosophers contemplated how human thinking could be artificially mechanized and manipulated by intelligent non-human machines. The thought processes that fueled interest in AI originated when classical philosophers, mathematicians, and logicians considered the mechanical manipulation of symbols, eventually leading to the invention of the programmable digital computer, the Atanasoff-Berry Computer (ABC), in the 1940s. This specific invention inspired scientists to move forward with the idea of creating an electronic brain, or an artificially intelligent being.

    Nearly a decade passed before icons in AI aided in the understanding of the field we have today. Alan Turing, a mathematician among other things, proposed a test that measured a machine’s ability to replicate human actions to a degree that was indistinguishable from human behavior. Later that decade, the field of AI research was founded during a summer conference at Dartmouth College in the mid-1950s, where John McCarthy, computer and cognitive scientist, coined the term artificial intelligence. From the 1950s forward, many scientists, programmers, logicians, and theorists aided in solidifying the modern understanding of artificial intelligence as a whole. With each new decade came innovations and findings that changed people’s fundamental knowledge of the field and showed how historical advancements have catapulted AI from an unattainable fantasy to a tangible reality for current and future generations.

    Intelligent robots and artificial beings first appeared in ancient Greek myths, and Aristotle’s development of the syllogism and its use of deductive reasoning was a key moment in humanity’s quest to understand its own intelligence. While the roots are long and deep, the history of AI as we think of it today spans less than a century. The following is a quick look at some of the most important events in AI.

    1.3.1 1940s

    (1942) Isaac Asimov publishes the Three Laws of Robotics, an idea commonly found in science fiction media about how artificial intelligence should not bring harm to humans.

    (1943) Warren McCulloch and Walter Pitts publish the paper A Logical Calculus of Ideas Immanent in Nervous Activity, which proposes the first mathematical model for building a neural network.

    (1949) In his book The Organization of Behavior: A Neuropsychological Theory, Donald Hebb proposes the theory that neural pathways are created from experiences and that connections between neurons become stronger the more frequently they’re used. Hebbian learning continues to be an important model in AI.

    1.3.2 1950s

    (1950) Alan Turing publishes the paper Computing Machinery and Intelligence, proposing what is now known as the Turing Test, a method for determining if a machine is intelligent.

    (1950) Harvard undergraduates Marvin Minsky and Dean Edmonds build SNARC, the first neural network computer.

    (1950) Claude Shannon publishes the paper Programming a Computer for Playing Chess.

    (1952) Arthur Samuel develops a self-learning program to play checkers.

    (1954) The Georgetown-IBM machine translation experiment automatically translates 60 carefully selected Russian sentences into English.

    (1956) The phrase artificial intelligence is coined at the Dartmouth Summer Research Project on Artificial Intelligence. Led by John McCarthy, the conference is widely considered to be the birthplace of AI.

    (1956) Allen Newell and Herbert Simon demonstrate Logic Theorist (LT), the first reasoning program.

    (1958) John McCarthy develops the AI programming language Lisp and publishes Programs with Common Sense, a paper proposing the hypothetical Advice Taker, a complete AI system with the ability to learn from experience as effectively as humans.

    (1959) Allen Newell, Herbert Simon and J.C. Shaw develop the General Problem Solver (GPS), a program designed to imitate human problem-solving.

    (1959) Herbert Gelernter develops the Geometry Theorem Prover program.

    (1959) Arthur Samuel coins the term machine learning while at IBM.

    (1959) John McCarthy and Marvin Minsky found the MIT Artificial Intelligence Project.

    1.3.3 1960s

    (1963) John McCarthy starts the AI Lab at Stanford.

    (1966) The Automatic Language Processing Advisory Committee (ALPAC) report by the U.S. government details the lack of progress in machine translation research, a major Cold War initiative with the promise of automatic and instantaneous translation of Russian. The ALPAC report leads to the cancellation of all government-funded MT projects.

    (1969) The first successful expert systems, DENDRAL and MYCIN, are created at Stanford.

    1.3.4 1970s

    (1972) The logic programming language PROLOG is created.

    (1973) The Lighthill Report, detailing the disappointments in AI research, is released by the British government and leads to severe cuts in funding for AI projects.

    (1974-1980) Frustration with the progress of AI development leads to major DARPA cutbacks in academic grants. Combined with the earlier ALPAC report and the Lighthill Report, AI funding dries up and research stalls. This period is known as the First AI Winter.

    1.3.5 1980s

    (1980) Digital Equipment Corporation develops R1 (also known as XCON), the first successful commercial expert system. Designed to configure orders for new computer systems, R1 kicks off an investment boom in expert systems that will last for much of the decade, effectively ending the First AI Winter.

    (1982) Japan’s Ministry of International Trade and Industry launches the ambitious Fifth Generation Computer Systems project. The goal of FGCS is to develop supercomputer-like performance and a platform for AI development.

    (1983) In response to Japan’s FGCS, the U.S. government launches the Strategic Computing Initiative to provide DARPA-funded research in advanced computing and AI.

    (1985) Companies are spending more than a billion dollars a year on expert systems, and an entire industry known as the Lisp machine market springs up to support them. Companies like Symbolics and Lisp Machines Inc. build specialized computers to run the AI programming language Lisp.

    (1987-1993) As computing technology improves, cheaper alternatives emerge and the Lisp machine market collapses in 1987, ushering in the Second AI Winter. During this period, expert systems prove too expensive to maintain and update, eventually falling out of favor.

    1.3.6 1990s

    (1991) U.S. forces deploy DART, an automated logistics planning and scheduling tool, during the Gulf War.

    (1992) Japan terminates the FGCS project, citing failure to meet the ambitious goals outlined a decade earlier.

    (1993) DARPA ends the Strategic Computing Initiative after spending nearly $1 billion and falling far short of expectations.

    (1997) IBM’s Deep Blue beats world chess champion Garry Kasparov.

    1.3.7 2000s

    (2005) STANLEY, a self-driving car, wins the DARPA Grand Challenge.

    (2005) The U.S. military begins investing in autonomous robots like Boston Dynamics’ Big Dog and iRobot’s PackBot.

    (2008) Google makes breakthroughs in speech recognition and introduces the feature in its iPhone app.

    1.3.8 2010s

    (2011) IBM’s Watson handily defeats the competition on Jeopardy!.

    (2011) Apple releases Siri, an AI-powered virtual assistant, through its iOS operating system.

    (2012) Andrew Ng, founder of the Google Brain Deep Learning project, feeds 10 million YouTube videos as a training set to a neural network using deep learning algorithms. The neural network learns to recognize a cat without being told what a cat is, ushering in the breakthrough era for neural networks and deep learning funding.

    (2014) Google makes the first self-driving car to pass a state driving test.

    (2014) Amazon’s Alexa, a virtual home smart device, is released.

    (2016) Google DeepMind’s AlphaGo defeats world champion Go player Lee Sedol. The complexity of the ancient Chinese game was seen as a major hurdle to clear in AI.

    (2016) The first robot citizen, a humanoid robot named Sophia, is created by Hanson Robotics and is capable of facial recognition, verbal communication and facial expression.

    (2018) Google releases natural language processing engine BERT, reducing barriers in translation and understanding by ML applications.

    (2018) Waymo launches its Waymo One service, allowing users throughout the Phoenix metropolitan area to request a pick-up from one of the company’s self-driving vehicles.

    1.3.9 2020s - (2020) Baidu releases its LinearFold AI algorithm to scientific and medical teams working to develop a vaccine during the early stages of the SARS-CoV-2 pandemic. The algorithm is able to predict the secondary structure of the virus’s RNA sequence in just 27 seconds, 120 times faster than other methods. (2020) OpenAI releases natural language processing model GPT-3, which is able to produce text modeled after the way people speak and write. (2021) OpenAI builds on GPT-3 to develop DALL-E, which is able to create images from text prompts. (2022) The National Institute of Standards and Technology releases the first draft of its AI Risk Management Framework, voluntary U.S. guidance to better manage risks to individuals, organizations, and society associated with artificial intelligence. (2022) DeepMind unveils Gato, an AI system trained to perform hundreds of tasks, including playing Atari, captioning images and using a robotic arm to stack blocks. (2022) OpenAI launches ChatGPT, a chatbot powered by a large language model that gains more than 100 million users in just a few months. (2023) Microsoft launches an AI-powered version of Bing, its search engine, built on the same technology that powers ChatGPT. (2023) Google announces Bard, a competing conversational AI. (2023) OpenAI launches GPT-4, its most sophisticated language model yet.

    1.4 The State of the Art

    THE STATE OF THE ART OF AI: ADVANCEMENTS, CHALLENGES AND FUTURE PROSPECTS

    Artificial Intelligence (AI) has been a buzzword for the past few years, and it is not surprising why. AI has the potential to transform various aspects of our lives, from healthcare to transportation to education. The advancements in AI have been remarkable, and it is not an exaggeration to say that AI is at the forefront of technological innovation. In this section, we will take a closer look at the state of the art of AI, including recent advancements, challenges, and future prospects.

    1.4.1 Recent Advancements in AI

    AI has made significant strides in recent years, thanks to the development of advanced machine learning algorithms, deep learning techniques, and neural networks. Some of the recent advancements in AI include:

    Natural Language Processing (NLP) - NLP is a subfield of AI that deals with the interaction between computers and human language. Recent advancements in NLP have led to the development of chatbots, virtual assistants, and voice recognition software, which have made communication between humans and machines more natural and seamless. An especially interesting application of AI has been proposed by a research group from the University of Bologna, which used a Deep Learning system for the first time to try to decipher a 3,500-year-old language: Cypro-Minoan. This language, widespread on the island of Cyprus in the Late Bronze Age, has defied translators for more than a century, owing to the scarcity of texts (just over two hundred) and the absence of bilingual works. To this day, in fact, scholars have not reached agreement on the number of characters in the language's script. The researchers therefore used unsupervised learning techniques, in which the model develops hypotheses and conclusions without prior knowledge of the language and signs to be analyzed. This gave rise to a purpose-built model, called Sign2Vecd, trained to analyze and catalog not only the individual signs of Cypro-Minoan but also entire sequences of signs. The results produced a vector representation of each sign that can be viewed in three dimensions, offering scholars the ability to identify errors in the transcription of the signs.

    Computer Vision & Robotics - Computer vision is a field of AI that deals with the ability of computers to interpret and understand visual information from the world around us. Recent advancements in computer vision have led to the development of facial recognition technology, object detection, and self-driving cars, among others. Robotics is the field of AI that deals with the design, construction, and operation of robots. Recent advancements in robotics have led to the development of intelligent robots that can perform a wide range of tasks, from manufacturing to healthcare to space exploration. Regarding the use of AI for facial restoration, a new tool, called GFP-GAN (Generative Facial Prior-Generative Adversarial Network), has been developed to fix old portraits damaged by time, so that nothing is lost over the years. The result is generated quickly by merging two AI models capable of filling the image with realistic details. As you can imagine, the result generated by the AI cannot be perfect and could involve a slight change of identity, but bringing old photos back to life can prove valuable; it could even help elderly and memory-impaired people recall memories they hold dear.

    1.4.2 Challenges facing AI

    Despite the remarkable advancements in AI, there are still significant challenges that need to be addressed. Some of the most pressing challenges facing AI include:

    Data Bias - Data bias is a significant challenge in AI, as it can lead to algorithms that perpetuate discrimination and inequality. This is particularly problematic in areas such as facial recognition technology, where biased data can lead to inaccurate identification and unjust consequences.

    Ethical Concerns & Human-AI Collaboration - AI raises several ethical concerns, such as privacy, autonomy, and accountability. For instance, the use of AI in decision-making processes, such as hiring or loan approvals, can lead to unfair outcomes if the algorithms are biased or flawed. As AI becomes more prevalent in various industries, it is essential to ensure that humans can collaborate with AI systems effectively. This requires developing user-friendly interfaces and designing AI systems that can explain their decisions and actions to humans.

    1.4.3 Future prospects of AI

    The future of AI is exciting, with many possibilities for innovation and development. Some of the future prospects of AI include:

    Healthcare & Education - AI has the potential to revolutionize healthcare by enabling personalized treatment plans, improving diagnosis accuracy, and accelerating drug discovery. But there’s more: it can also improve the education system by providing personalized learning experiences, automating administrative tasks, and facilitating remote learning. At Maker Faire Rome, several projects have demonstrated over the years how AI is an increasingly effective tool for predicting serious diseases such as diabetes mellitus. In 2021, a Machine Learning project developed by Giacomo Bornino, Marco Chierici, Venet Osmani, Antonio Colangelo, and Giuseppe Jurman showed that AI technologies can predict five comorbidities of diabetes mellitus, one of the leading causes of death worldwide, from 17,000 blood tests and 23 variables, none of which were previously used as diagnostic criteria. The results are promising for facilitating early diagnosis of the complications of the disease.

    Environment - AI can contribute to environmental sustainability by optimizing resource management, predicting natural disasters, and reducing carbon emissions.

    1.5 Summary

    Philosophers (going back to 400 B.C.) made AI conceivable by considering the ideas that the mind is in some ways like a machine, that it operates on knowledge encoded in some internal language, and that thought can be used to choose what actions to take.

    Mathematicians provided the tools to manipulate statements of logical certainty as well as uncertain, probabilistic statements. They also set the groundwork for understanding computation and reasoning about algorithms.

    Economists formalized the problem of making decisions that maximize the expected outcome to the decision maker.

    Neuroscientists discovered some facts about how the brain works and the ways in which it is similar to and different from computers.

    Psychologists adopted the idea that humans and animals can be considered information processing machines. Linguists showed that language use fits into this model.

    Computer engineers provided the ever-more-powerful machines that make AI applications possible.

    Control theory deals with designing devices that act optimally on the basis of feedback from the environment. Initially, the mathematical tools of control theory were quite different from those of AI, but the fields are coming closer together.

    The history of AI has had cycles of success, misplaced optimism, and resulting cutbacks in enthusiasm and funding. There have also been cycles of introducing new creative approaches and systematically refining the best ones.

    AI has advanced more rapidly in the past decade because of greater use of the scientific method in experimenting with and comparing approaches.

    Recent progress in understanding the theoretical basis for intelligence has gone hand in hand with improvements in the capabilities of real systems. The subfields of AI have become more integrated, and AI has found common ground with other disciplines.

    Previous Years’ Solved Questions & Answers

    Q1. Explain Artificial Intelligence in brief. [R.T.U. 2012]

    Answer: Artificial Intelligence (AI) refers to the development of computer systems or machines that can perform tasks that typically require human intelligence. These tasks include learning, reasoning, problem-solving, perception, understanding natural language, and even interacting with the environment. AI aims to create systems that can mimic, simulate, or replicate human intelligence to automate processes, make informed decisions, and adapt to changing situations. It encompasses various subfields, such as machine learning, natural language processing, computer vision, robotics, and expert systems. AI has applications in diverse areas, including healthcare, finance, transportation, and entertainment, contributing to advancements in technology and transforming the way we live and work.

    Q 2. What is the goal of Artificial Intelligence? [R.T.U. 2012]

    Answer: The goal of Artificial Intelligence (AI) is to develop intelligent machines or systems that can perform tasks that typically require human intelligence. This encompasses a broad range of objectives and capabilities, including:

    Problem Solving: AI aims to create systems that can analyze complex problems, explore possible solutions, and make informed decisions.

    Learning: AI systems should have the ability to learn from data and experiences, adapting and improving their performance over time without explicit programming.

    Reasoning: AI seeks to enable machines to engage in logical reasoning, drawing conclusions from available information and making inferences.

    Perception: AI aims to replicate human-like perception, allowing machines to interpret and understand the world through vision, speech, and other sensory inputs.

    Natural Language Processing (NLP): AI strives to enable machines to understand, interpret, and generate human language, facilitating communication between humans and machines.

    Interaction: AI systems should be capable of interacting with the environment and responding to changes or new information.

    Autonomy: AI seeks to create systems that can operate autonomously, making decisions and performing tasks without continuous human intervention.

    Adaptability: AI systems should be adaptable, able to handle diverse and dynamic environments, and adjust their behavior accordingly.

    Creativity: Some forms of AI aim to exhibit creative behaviors, such as generating art, music, or innovative solutions to problems.

    Enhancing Human Abilities: Ultimately, AI's overarching goal is to augment and enhance human capabilities, making processes more efficient, providing valuable insights, and addressing complex challenges.

    Q 3. What are the major categories of AI? Explain them briefly. Why is AI a matter of research? [R.T.U. 2012]

    Answer: Artificial Intelligence (AI) can be broadly categorized into two main types: Narrow AI (or Weak AI) and General AI (or Strong AI).

    Narrow AI (Weak AI):

    Definition: Narrow AI refers to AI systems designed and trained for a specific task or a narrow set of tasks. These systems excel at performing well-defined functions but lack the ability to generalize across different domains.

    Examples: Virtual personal assistants (like Siri or Alexa), image and speech recognition systems, recommendation algorithms, and autonomous vehicles.

    Characteristics: Narrow AI is focused and specialized, providing solutions to specific problems without possessing a broader understanding or consciousness.

    General AI (Strong AI):

    Definition: General AI refers to the theoretical concept of AI that possesses human-like cognitive abilities, allowing it to understand, learn, and apply knowledge across diverse tasks. General AI would have the capacity for reasoning, problem-solving, and adapting to new environments in a manner similar to humans.

    Characteristics: General AI would exhibit a high level of adaptability, generalization, and autonomous decision-making. It would be capable of performing any intellectual task that a human being can do.

    Challenges: Achieving General AI is an ambitious and complex goal. It involves addressing fundamental challenges related to understanding human cognition, consciousness, and creating machines that can autonomously learn and reason across a wide range of domains.

    Why AI is a Matter of Research:

    Advancements in Technology: Continuous advancements in computing power, data availability, and algorithmic techniques have fueled rapid progress in AI research.

    Real-World Applications: AI has the potential to revolutionize various industries, from healthcare and finance to transportation and education. Researchers aim to develop practical applications that can solve complex problems and enhance efficiency in real-world scenarios.

    Ethical Considerations: As AI systems become more sophisticated, researchers are actively exploring ethical considerations, such as fairness, transparency, accountability, and the impact of AI on society. Ethical AI research is crucial to ensure responsible and unbiased AI development.

    Autonomous Systems: The development of autonomous systems, including self-driving cars and unmanned aerial vehicles, requires extensive research to ensure safety, reliability, and ethical decision-making.

    Cognitive Computing: Understanding and replicating human cognitive abilities is a key area of AI research. This involves studying natural language processing, image recognition, and reasoning to create more human-like AI systems.

    Interdisciplinary Nature: AI research involves collaboration across various disciplines, including computer science, neuroscience, philosophy, psychology, and ethics. This interdisciplinary approach is essential for addressing the multifaceted challenges of AI development.

    Continuous Evolution: AI is a dynamic field with evolving technologies and methodologies. Ongoing research is necessary to keep pace with the latest developments, explore new possibilities, and address emerging challenges.

    Q 4. What is AI and what are AI techniques? Briefly explain how AI techniques can be represented. List out some of the task domains of AI. [R.T.U. B. TECH. 2019, 2015]

    Answer: AI (Artificial Intelligence):

    Definition: Artificial Intelligence refers to the development of computer systems or machines that can perform tasks that typically require human intelligence. These tasks include learning, reasoning, problem-solving, perception, understanding natural language, and interacting with the environment.

    AI Techniques:

    Machine Learning: Algorithms that enable computers to learn from data and improve their performance over time without being explicitly programmed.

    Natural Language Processing (NLP): Techniques that enable machines to understand, interpret, and generate human language.

    Computer Vision: Algorithms that allow machines to interpret and make decisions based on visual data, such as images or videos.

    Expert Systems: Systems that emulate the decision-making abilities of a human expert in a specific domain.

    Robotics: The design and programming of robots to perform tasks in the physical world.

    Speech Recognition: Technology that converts spoken language into written text.
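As a small, self-contained illustration of the first technique listed above (machine learning), the following sketch fits a single model parameter from data rather than programming the answer explicitly. The data points, learning rate, and iteration count are illustrative assumptions, not drawn from any real dataset:

```python
# Fit the model y = w * x by minimizing squared error with gradient descent.
# The "learning" is the repeated adjustment of w driven by the data alone.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # roughly follows y = 2x

w = 0.0                      # initial guess for the parameter
for _ in range(200):         # training iterations
    # average gradient of the squared-error loss with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad         # step against the gradient

print(round(w, 2))  # prints 2.04, near the underlying slope of 2
```

The same improve-from-data loop, scaled up to millions of parameters and examples, is what the neural networks and deep learning models mentioned throughout this chapter perform.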

    Representation of AI Techniques:

    AI techniques can be represented in various ways, depending on the specific domain and application. Here are a few common representations:

    Algorithms and Models: Representations of mathematical algorithms or models used for tasks like classification, regression, clustering, and more.

    Neural Networks: Diagrams illustrating the architecture of artificial neural networks, commonly used in deep learning.

    Flowcharts: Visual representations of the logical flow and decision-making processes in AI systems.

    Code Snippets: Actual code implementations of AI algorithms, showcasing the programming logic.

    Graphs: Graphical representations of relationships and dependencies between different components in AI systems.

    Task Domains of AI:

    Image and Speech Recognition: Identifying and interpreting images or recognizing spoken language.

    Natural Language Understanding: Comprehending and responding to human language in a meaningful way.

    Machine Translation: Translating text or speech from one language to another.

    Recommendation Systems: Suggesting products, services, or content based on user preferences.

    Autonomous Vehicles: Developing systems that enable vehicles to navigate and make decisions autonomously.

    Healthcare Diagnosis: Analyzing medical data to assist in disease diagnosis and treatment planning.

    Game Playing: Creating AI agents that can play games and learn strategies.

    Robotics and Automation: Programming robots to perform tasks in various industries.

    Fraud Detection: Identifying fraudulent activities in financial transactions.

    Virtual Assistants: Building systems that can understand and respond to user queries, like virtual personal assistants.

    Q 5. What is artificial intelligence? Explain A* and AO* algorithm in detail [R.T.U. 2016, 2017, 2018]

    Answer: Artificial Intelligence (AI) is a branch of computer science that aims to create machines or systems capable of performing tasks that typically require human intelligence. These tasks include learning, reasoning, problem-solving, understanding natural language, perception, and interaction with the environment. AI can be broadly categorized into Narrow AI (task-specific) and General AI (human-like intelligence).

    A* Algorithm:

    The A* (A-star) algorithm is a popular pathfinding and graph traversal algorithm used in artificial intelligence and computer science. It efficiently finds the shortest path from a start node to a goal node on a weighted graph. A* combines elements of Dijkstra's algorithm and a heuristic to achieve both completeness and optimality.

    Algorithm Steps:

    Initialization:

    Initialize an open list with the starting node and a closed list as empty.

    Assign a cost of zero to the starting node.

    Calculate the heuristic cost from the starting node to the goal node.

    Expansion Loop:

    While the open list is not empty:

    Select the node with the lowest total cost (cost + heuristic) from the open list.

    Move the selected node to the closed list.

    If the selected node is the goal node, the algorithm terminates.

    Neighbor Evaluation:

    For each neighbor of the selected node:

    Calculate the cost to reach the neighbor from the current node.

    If the neighbor is not in the open list, add it and calculate its total cost.

    If the neighbor is in the open list, update its cost if the new cost is lower.

    Termination:

    The algorithm terminates when the goal node is reached or when the open list is empty.

    Heuristic Function: The effectiveness of the A* algorithm relies on an admissible heuristic function (h(n)) that estimates the cost from the current node to the goal. The heuristic should never overestimate the true cost, ensuring the optimality of the solution.

    Properties of A*:

    A* is both complete (guaranteed to find a solution if one exists) and optimal (finds the shortest path).

    The efficiency of A* heavily depends on the quality of the heuristic function.
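The steps above can be sketched in Python. The graph, edge costs, and heuristic values below are illustrative assumptions, chosen so that the heuristic never overestimates the true cost (i.e., it is admissible):

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search over a weighted graph.
    graph: dict mapping node -> list of (neighbor, edge_cost)
    h: dict mapping node -> heuristic estimate of cost to the goal
    Returns (path, cost), or (None, inf) if no path exists."""
    # Open list holds (f = g + h, g, node, path); the closed set
    # records nodes already expanded with their best cost.
    open_list = [(h[start], 0, start, [start])]
    closed = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)  # lowest total cost first
        if node == goal:
            return path, g
        if node in closed:
            continue
        closed.add(node)
        for neighbor, cost in graph[node]:
            if neighbor not in closed:
                g2 = g + cost
                heapq.heappush(open_list, (g2 + h[neighbor], g2,
                                           neighbor, path + [neighbor]))
    return None, float('inf')

# Toy graph: two routes from S to G; the cheaper one goes S -> A -> B -> G.
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 5)],
         'B': [('G', 1)], 'G': []}
h = {'S': 4, 'A': 3, 'B': 1, 'G': 0}  # admissible estimates
path, cost = a_star(graph, h, 'S', 'G')
print(path, cost)  # ['S', 'A', 'B', 'G'] 4
```

Because the heuristic here never exceeds the true remaining cost, the returned path is guaranteed to be the shortest one, matching the optimality property stated above.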

    AO* Algorithm:

    AO* is a best-first search algorithm for AND-OR graphs, used in problem reduction: a problem is decomposed into subproblems, where an OR node offers alternative ways to solve a problem and an AND node groups subproblems that must all be solved together.

    Algorithm Steps:

    Initialization:

    Place the start node in the graph and compute its heuristic estimate h(n).

    Expansion:

    Follow the current best partial solution graph from the start node and expand one of its unexpanded (non-terminal) nodes, generating its successors and their heuristic estimates.

    Cost Revision:

    Propagate revised cost estimates bottom-up through the affected ancestors: at an OR node, take the minimum over the alternatives (edge cost plus successor cost); at an AND node, sum the costs of all required successors (plus edge costs). Mark a node SOLVED when all successors required by its best connector are solved.

    Termination:

    Repeat expansion and cost revision until the start node is marked SOLVED (a solution graph has been found) or its cost estimate indicates that no solution exists.

    Comparison with A*: A* searches ordinary (OR) state-space graphs and, with an admissible heuristic, always finds an optimal path. AO* searches AND-OR graphs, so it can represent problem decomposition; because it commits to the current best partial solution graph, it does not maintain a single open list of paths the way A* does and does not guarantee optimality in general.

    Applications: AO* is particularly useful for problems that decompose naturally into subproblems, such as theorem proving, symbolic integration, and puzzles expressed as problem reduction.
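As a small illustration of the AND/OR cost rule that underlies AO*-style problem-reduction search, here is a minimal sketch. The toy tree, leaf costs, and helper name are assumptions; a full AO* implementation would interleave node expansion with this bottom-up cost revision rather than evaluate the whole tree at once:

```python
def solution_cost(node, tree, leaf_cost, edge_cost=1):
    """Minimum solution cost of `node` in a toy AND-OR tree.
    tree: dict mapping an internal node to its child groups; each group is a
    tuple of children that must ALL be solved (AND), and the groups
    themselves are alternatives (OR).  Leaves carry a terminal cost."""
    if node not in tree:               # leaf: return its terminal cost
        return leaf_cost[node]
    # OR: pick the cheapest alternative; AND: sum the group's children.
    return min(sum(edge_cost + solution_cost(c, tree, leaf_cost)
                   for c in group)
               for group in tree[node])

# Solve P either via A alone (OR branch), or via both B and C (AND branch).
tree = {'P': [('A',), ('B', 'C')]}
leaf_cost = {'A': 6, 'B': 2, 'C': 1}
print(solution_cost('P', tree, leaf_cost))  # 5: the B-and-C branch wins
```

The A-only branch costs 1 + 6 = 7, while solving both B and C costs (1 + 2) + (1 + 1) = 5, so the AND branch is preferred even though it requires two subproblems.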

    Q 6. Mention three areas in which computers are better than human beings. [IGNOU MCA TERM-END JUNE 2010]

    Answer: Here are three areas in which computers are typically considered superior to human beings in the context of AI:

    Processing Speed and Capacity: Computers are exceptionally fast at processing vast amounts of data and performing complex calculations in a fraction of the time it would take a human. This advantage is particularly evident in tasks such as data analysis, simulations, and large-scale computations. AI algorithms, running on high-performance computing systems, can analyze massive datasets and execute complex mathematical operations with speed and efficiency beyond human capacity.

    Memory and Recall: Computers possess the ability to store and recall immense amounts of information with perfect accuracy. AI systems, powered by robust memory storage and retrieval mechanisms, excel at tasks requiring extensive memory and recall capabilities. In applications like data retrieval, pattern recognition, and knowledge-based systems, computers can quickly access and recall vast databases, providing consistent and accurate results without succumbing to fatigue or forgetfulness.

    Repetitive and Monotonous Tasks: Computers are well-suited for handling repetitive and monotonous tasks without experiencing a decline in performance or attention. In AI, this advantage is particularly valuable for tasks such as data entry, sorting, and routine decision-making processes. AI algorithms can tirelessly execute repetitive tasks with precision, freeing up human resources to focus on more complex and creative aspects of problem-solving.

    Q 7. Explain briefly the following definition of Artificial Intelligence (A.I) given by Elaine Rich by explaining the underlined technical terms involved in the definition:

    Artificial Intelligence is the study of techniques for solving exponentially hard problems in polynomial time exploiting knowledge about the problem domain. [IGNOU MCA TERM-END JUNE 2010]

    Answer: Elaine Rich's definition of Artificial Intelligence (AI) is as follows:

    Artificial Intelligence is the study of techniques for solving exponentially hard problems in polynomial time exploiting knowledge about the problem domain.

    Let's break down the underlined technical terms involved in this definition:

    Exponentially Hard Problems:

    In computational complexity theory, an exponentially hard problem is one whose complexity grows exponentially with the size of the input. As the input increases, the time required to solve the problem increases at an exponential rate. Exponential growth is often associated with problems that become impractical or impossible to solve with traditional algorithms as the input size grows.

    Polynomial Time:

    Polynomial time refers to the efficiency of an algorithm in terms of the size of its input. An algorithm is said to run in polynomial time if the time it takes to solve a problem is proportional to a polynomial function of the size of the input. Polynomial time algorithms are generally considered more manageable and efficient than exponential time algorithms. Problems for which polynomial time algorithms exist are considered computationally feasible.
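To make the contrast between the two growth rates concrete, here is a tiny sketch; the sample input sizes are arbitrary:

```python
# Compare a polynomial cost model (n^2) with an exponential one (2^n).
# Even at modest input sizes, the exponential term dominates completely.
for n in (10, 20, 30):
    print(f"n={n:>2}  n^2={n**2:>5}  2^n={2**n:>12}")
```

At n = 30 the polynomial cost is only 900 steps, while the exponential cost already exceeds a billion; this gap is exactly what makes exploiting domain knowledge, as Rich's definition emphasizes, so valuable.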

    Exploiting Knowledge about the Problem Domain:

    This phrase emphasizes the importance of incorporating domain-specific knowledge into AI techniques. It suggests that AI systems leverage information about the specific problem they are designed to solve. This knowledge can include rules, patterns, heuristics, or insights about the characteristics of the problem domain. By exploiting such knowledge, AI systems aim to enhance their problem-solving capabilities and efficiency.

    Previous Years’ Unsolved Questions

    Q 1. When is a heuristic function said to be admissible? [Anna University Nov. 2019] [RGPV June 2020]

    Q 2. Explain briefly the following definition of Artificial Intelligence (A.I) given by Elaine Rich by explaining the underlined technical terms involved in the definition:

    Artificial Intelligence is the study of techniques for solving exponentially hard problems in polynomial time exploiting knowledge about the problem domain. [IGNOU June 2010]

    Q 3. Enumerate five characteristics of the programming language LISP. [IGNOU June 2010]

    Q 4. Define a function in LISP language that reads three numbers and returns the sum of the squares of these numbers.

    [IGNOU June 2010]

    Q 5. Explain the sequence of steps in processing the query ?- prefix([c, d], [c, d, e]). [IGNOU June 2015]

    Q 6. Evaluate the following LISP expressions : [IGNOU June 2010] [RGPV June 2020]

    (i) '(+ 9 3)

    (ii) (expt 2 5)

    (iii) (evenp (+ 9 K))

    (iv) (or 'Cat nil ())

    (v) (equal '(two one) '(one two))

    Q 7. Write a recursive function in LISP that finds the factorial of n for a natural number n.

    [IGNOU June 2012, 2022, December 2013, 2014]

    Q 8. Write a LISP function to evaluate factorial using recursion. [IGNOU June 2012, 2015, Dec. 2021]

    Q 9. What is a Property List in LISP? How is it implemented? Explain with suitable examples. [IGNOU June 2012]

    Q 10. Write a user-defined LISP function that performs as reverse does. [IGNOU June 2013]

    Q 11. Briefly discuss Data types and structures in Prolog. [IGNOU December 2013, June 2014]

    Q 12. Develop the knowledge base in PROLOG, to identify the following relations : [MU MAY 2021]

    (i) BROTHER

    (ii) GRANDFATHER

    Q 13. Write a LISP program to find the maximum of 3 numbers. [IGNOU June 2016, 2017, Dec 2021]

    Q 14. Write a program in LISP to find the factorial of a number, entered by the user. Give comments in the program to explain your logic. [IGNOU June 2019, Dec. 2021]

    Q 15. Write the A* algorithm. How is the A* algorithm different from AO*? Out of the two algorithms, which one is better and why? [IGNOU Dec. 2011, 2016, 2021, June 2017, 2018]

    Q 16. What is Artificial Intelligence? Discuss the branches of Artificial Intelligence. [VTU July 2022]

    Q 17. Explain any two AI techniques for solving the tic-tac-toe problem. [VTU July 2022]

    1.6 Exercises

    Q.1. MCQ’s.

    1) Arthur Samuel develops a self-learning program to play checkers.

    (a) 1952

    (b) 1976

    (c) 1990

    (d) 2000

    2) The logic programming language PROLOG is created.

    (a) 2020

    (b) 2022

    (c) 1972

    (d) 1971

    3) Which of the following is not an Artificial Intelligence benefit?

    (a) Google Maps

    (b) Safer Banking

    (c) Better Medicine

    (d) Innovative Media

    4) Which one of these is an example of Artificial Intelligence?

    (a) Safer Banking

    (b) ChatGPT

    (c) Limited memory

    (d) Innovative Media

    5) Which step must not be followed when utilizing limited memory in AI?

    (a) Steps

    (b) Ensure the model can make predictions

    (c) Self awareness

    (d) Reiterate the steps above as a cycle

    6) Which one is not a category of AI?

    (a) Reactive machines

    (b) Limited memory

    (c) Theory of mind

    (d) Establish training data

    7) Which one is the type of machine learning that runs inputs through a biologically inspired neural network architecture?

    (a) Deep learning

    (b) Machine learning

    (c) Both

    (d) None

    8) What does Limited Memory in AI refer to?

    (a) Systems with no memory capabilities

    (b) Systems with short-term memory for recent information

    (c) Systems with perfect memory

    (d) Systems with infinite memory

    9) What is the primary characteristic of Self-Aware AI systems?

    (a) Perfect memory

    (b) Sentient consciousness

    (c) Subjective awareness of experiences

    (d) Limited memory capacity

    10) Which of the following is not one of the types of Artificial Intelligence?

    (a) Reactive Machines

    (b) Limited Memory

    (c) Sentient Machines

    (d) Self-Awareness

    Q.2. Fill in the blanks.

    1) AI can contribute to_____________ sustainability by optimizing resource management.

    2) _____________is a field of AI that deals with the ability of computers to interpret and understand visual information from the world around us.

    3) The global market for AI in media and entertainment is estimated to reach ___________ billion by 2030.

    4) ____________ is an artificial intelligence chatbot capable of producing written content in a range of formats, from essays to code and answers to simple questions.

    5) A ____________ algorithm is fed data by a computer and uses statistical techniques to help it learn how to get progressively better at a task.

    6) AI encompasses the ability to learn, ________, and self-correct.

    7) Machine Learning is a subset of AI that specifically uses ________ networks for learning.

    8) In traditional Machine Learning, feature engineering is often a crucial step where relevant ________ are manually selected.

    9) Deep Learning models can automatically learn hierarchical representations of features from ________ data.

    10) Reactive Machines operate based on predefined rules and patterns, responding to specific inputs with predetermined ________.

    Q.3. True or False.

    1. In 2008 Waymo launches its Waymo One service, allowing users throughout the Phoenix metropolitan area to request a pick-up from one of the company’s self-driving vehicles.

    2. In 2020 OpenAI releases natural language processing model GPT-3, which is able to produce text modeled after the way people speak and write.

    3. In 2000 Google makes breakthroughs in speech recognition and introduces the feature in its iPhone app.

    4. In 1991 U.S. forces deploy DART, an automated logistics planning and scheduling tool, during the Gulf War.

    5. In 1971 Japan’s Ministry of International Trade and Industry launches the ambitious Fifth Generation Computer Systems project.

    6. In 1973 The Lighthill Report, detailing the disappointments in AI research, is released by the British government and leads to severe cuts in funding for AI projects.

    7. Self-aware AI systems possess consciousness or a sense of their own existence.

    8. Intelligent agents lack the ability to adapt and learn from their experiences.

    9. General AI aims to create machines with human-like cognitive abilities, capable of understanding, learning, and applying knowledge across diverse tasks.

    10. Narrow AI refers to systems designed and trained for a broad set of tasks, demonstrating adaptability and generalization across different domains.

    Q.4. Answer the following.

    1. Define AI.

    2. Differentiate between strong and weak AI with examples.

3. What is AI?

    4. What is meant by Deep Learning?

    5. What are the types of AI? Explain with examples.

    6. Explain some examples of AI.

    7. State the areas where Foundations of Artificial Intelligence at SCS has made significant contributions.

    8. In which types of industries is AI making waves?

    9. What are the challenges and limitations of AI?

    10. Write a short note on the future of AI.

    11. Write briefly about the most important events in AI.

    12. What are the recent advancements in AI?

    13. Write about the future aspects of AI.

    Answers:

Q.1. 1) (a) 2) (c) 3) (a) 4) (b) 5) (c) 6) (b) 7) (a) 8) (b) 9) (c) 10) (c)

    Q.2. 1) environmental 2) Computer vision 3) $99.48 4) ChatGPT 5) machine learning 6) Reason 7) Neural 8) Features 9) Raw or Unstructured 10) Outputs, Memory

    Q.3. 1) False 2) True 3) False 4) True 5) False 6) True 7) True 8) False 9) True 10) False

    Online Resources

    MCQ Video

    Link: https://youtu.be/Q0guEyCfHT4


    True/False Video

    Link: https://youtu.be/3o-r-zS9kjI


    Online Exam Paper

    Link: https://forms.office.com/r/y0Wn8KVtAG


    CHAPTER 2: Intelligent Agents

Intelligent agents are entities that use sensors to perceive their environment, make decisions, and act on that information using actuators. An intelligent agent could be a robot, a machine, or even a human or an animal. The sensor that takes in the environment could be a camera, a rain sensor, or a nose. The intelligent agent processes this information and decides how to act on it. The action could be saving a video, turning off a sprinkler system, or looking for the nearest pizza shop. The actuator is the part that carries out the action. Intelligent agents make decisions in real time, and their rate of success versus error varies. The purpose of an intelligent agent is to respond to the environment around it.

    Intelligent agents are found throughout many aspects of life. Refrigerators sense temperature in order to determine when additional cooling is needed. Smoke detectors sense the presence of smoke before sounding an alarm. A dog that barks as a stranger approaches the door also counts as an intelligent agent. Complex intelligent agents include the SpaceX Dragon spacecraft and self-driving cars. These intelligent agents take information from many different sensors, make real-time decisions, and adjust actions based on the information.
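    The perceive-decide-act cycle described above can be sketched in a few lines of code. The minimal thermostat agent below models the refrigerator example: a temperature sensor provides the percept, a simple decision rule chooses an action, and the cooling unit plays the role of the actuator. The `ThermostatAgent` class, its method names, and the 4 °C target are illustrative assumptions for this sketch, not details from the text.

    ```python
    class ThermostatAgent:
        """A minimal reactive agent: senses temperature, acts via a cooling actuator."""

        def __init__(self, target_temp=4.0):
            self.target_temp = target_temp  # desired interior temperature in degrees C
            self.cooling_on = False         # current state of the actuator

        def perceive_and_act(self, sensed_temp):
            # Decision rule: switch the cooling actuator on when the sensed
            # temperature exceeds the target, and off otherwise.
            self.cooling_on = sensed_temp > self.target_temp
            return "cooling" if self.cooling_on else "idle"


    agent = ThermostatAgent(target_temp=4.0)
    print(agent.perceive_and_act(7.5))  # warm interior -> "cooling"
    print(agent.perceive_and_act(3.2))  # cold enough   -> "idle"
    ```

    Even this toy agent shows the essential structure: each call corresponds to one sense-act step, and more capable agents differ mainly in how much state and learning sits between the percept and the action.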

    2.1 Agents and Environments

    2.1.1 What are Agent and Environment?

An agent is anything that can perceive its environment through sensors and act upon that environment through effectors. A human agent has sensory organs such as eyes, ears, nose, tongue, and skin serving as sensors, and other organs such as hands, legs, and mouth serving as effectors. A robotic agent has cameras and infrared range finders for sensors, and various motors and actuators for effectors. A software agent has encoded bit strings as its percepts and actions; it receives keystrokes, file contents, and
