Introduction To Artificial Intelligence
Definition of AI :-
Artificial Intelligence (AI) is the branch of computer science concerned with building systems that can perform tasks normally requiring human intelligence, such as reasoning, learning, perception, and decision-making.
4 APPROACHES of AI :-
• Acting humanly (the Turing Test approach)
• Thinking humanly (the cognitive modelling approach)
• Thinking rationally (the "laws of thought" approach)
• Acting rationally (the rational agent approach)
History of AI :-
1. Early Foundations (1940s-1950s)
• Alan Turing published "Computing Machinery and Intelligence" (1950),
proposing the Turing Test as a way to judge whether a machine can
exhibit intelligent behaviour.
2. Birth of AI (1956)
• The term "Artificial Intelligence" was coined at the Dartmouth
Conference, organized by John McCarthy, Marvin Minsky,
Nathaniel Rochester, and Claude Shannon. This marked the
official founding of AI as a field of study.
3. Early Enthusiasm and Challenges (1950s-1970s)
• 1950s-1960s: AI research focused on problem-solving and
symbolic methods. Early successes included programs that could
solve algebra problems and play chess.
• 1970s: The initial excitement waned due to limitations in
computational power and the inability to solve complex
problems, leading to the first "AI winter," a period of reduced
funding and interest.
4. Expert Systems (1980s)
• The 1980s saw the rise of expert systems, which used rule-based algorithms
to mimic human expertise in specific domains (e.g., medical diagnosis).
Companies like Xerox and IBM invested heavily in AI, leading to renewed
interest.
5. Second AI Winter (Late 1980s-1990s)
• As expert systems proved costly to maintain and limited in scope,
another AI winter occurred, causing reduced funding and interest
in the field.
6. Resurgence and Machine Learning (1990s-2000s)
• 1997: IBM's Deep Blue defeated world chess champion Garry
Kasparov, demonstrating the potential of AI.
• The late 1990s and 2000s saw a shift toward machine learning,
particularly statistical methods and algorithms that enabled AI to
learn from data.
7. Deep Learning and Big Data (2010s)
• The development of deep learning, powered by advances in
neural networks and increased computational power, led to
breakthroughs in image recognition, natural language processing,
and speech recognition.
• 2012: AlexNet, a deep convolutional neural network, won the ImageNet
competition by a wide margin, marking a turning point for AI
applications in computer vision.
8. Current Developments (2020s-Present)
• AI is now pervasive in various sectors, including healthcare,
finance, autonomous vehicles, and more. Technologies like
ChatGPT and other large language models have pushed the
boundaries of natural language understanding.
• Ongoing research focuses on ethics, bias, explainability, and the
societal impacts of AI.
Machine Learning :-
is the field in which machines learn on their own without being explicitly
programmed. It is an application of AI that provides systems the ability to
automatically learn and improve from experience. Here, rather than writing
the program by hand, we generate it from examples of its inputs and the
corresponding outputs.
• Although the terms artificial intelligence (AI) and machine learning are
frequently used interchangeably, machine learning is a subset of the
larger category of AI.
• Artificial intelligence signifies computers' general ability to mimic
human thought while carrying out tasks in real-world environments.
• Machine learning refers to the technologies and algorithms that allow
systems to recognize patterns, make decisions, and improve themselves
through experience and data.
Types of Machine Learning :-
• Supervised Learning
• Unsupervised Learning
• Reinforcement Learning
• Deep Learning
• Deep Reinforcement Learning
Supervised Learning :
is a type of machine learning where the model is trained on labeled data. In this
process, the algorithm learns from a dataset that contains both the input data and
the corresponding correct output (labels). The goal is for the model to learn a
mapping from inputs to outputs, so it can predict the correct output when given
new, unseen data.
Supervised learning can be broadly divided into two main types, based on the
nature of the target variable (output):
1. Classification
The target variable is a discrete category: the model learns to assign each
input to one of a fixed set of classes (e.g., labelling emails as spam or
not spam).
2. Regression
The target variable is a continuous value: the model learns to predict a
number (e.g., estimating a house price from its size and location).
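As a concrete illustration of the supervised setup, here is a minimal
classification sketch in Python; scikit-learn is our assumed library choice,
not something named in the notes.

# Minimal supervised learning (classification) sketch using scikit-learn.
# Assumes scikit-learn is installed: pip install scikit-learn
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Labeled data: inputs X and the corresponding correct outputs y.
X, y = load_iris(return_X_y=True)

# Hold out part of the data to evaluate on examples the model has not seen.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Fit the model: it learns a mapping from inputs to output labels.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Predict labels for new, unseen data and measure accuracy.
y_pred = model.predict(X_test)
print("Test accuracy:", accuracy_score(y_test, y_pred))

The same pattern applies to regression, with a numeric target in place of
class labels.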
Unsupervised Learning :
is a type of machine learning where the model is trained on data that does not
have labeled outputs. The goal is for the model to identify patterns, structures,
or relationships within the data without any explicit guidance. Unlike supervised
learning, the model isn't provided with the correct answers and must learn from
the input data alone.
Unsupervised learning can be divided into several types based on the goals and the
techniques used to analyze the data. Here are the main types:
1. Clustering
Clustering groups data points so that items in the same group (cluster) are
more similar to one another than to items in other groups, without using any
labels (e.g., segmenting customers by purchasing behaviour), as sketched below.
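A rough clustering sketch in Python using k-means from scikit-learn (an
assumed library choice) on synthetic 2-D points:

# Minimal clustering sketch with k-means (scikit-learn assumed installed).
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled data: two loose groups of 2-D points, with no target labels.
rng = np.random.default_rng(0)
group_a = rng.normal(loc=(0, 0), scale=0.5, size=(50, 2))
group_b = rng.normal(loc=(5, 5), scale=0.5, size=(50, 2))
X = np.vstack([group_a, group_b])

# Ask k-means to find 2 clusters purely from the structure of the data.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

print("Cluster centers:\n", kmeans.cluster_centers_)
print("First five assignments:", labels[:5])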
3. Anomaly Detection
Anomaly detection involves identifying rare items, events, or
observations that differ significantly from the majority of the data.
These outliers may indicate critical incidents, such as fraud or
equipment failures.
▪ Example: Detecting fraudulent transactions in banking by
finding unusual spending patterns.
▪ Common Algorithms:
• Isolation Forest
• One-Class SVM
• LOF (Local Outlier Factor)
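A minimal anomaly-detection sketch with the first algorithm listed above,
Isolation Forest (scikit-learn assumed); the "spending" data is synthetic
and purely illustrative:

# Anomaly detection sketch with Isolation Forest (scikit-learn assumed).
import numpy as np
from sklearn.ensemble import IsolationForest

# Mostly "normal" spending amounts plus a few extreme values (made-up data).
rng = np.random.default_rng(1)
normal_spend = rng.normal(loc=50, scale=10, size=(200, 1))
outliers = np.array([[500.0], [750.0], [-100.0]])
X = np.vstack([normal_spend, outliers])

# contamination is a guess at the fraction of anomalies in the data.
detector = IsolationForest(contamination=0.02, random_state=1)
predictions = detector.fit_predict(X)  # -1 = anomaly, 1 = normal

print("Flagged as anomalous:", X[predictions == -1].ravel())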
Reinforcement learning (RL) :
is a type of machine learning in which an agent learns to make decisions by
interacting with an environment and receiving rewards or penalties for its
actions. Unlike supervised learning, reinforcement learning does not rely on
labeled input-output pairs. Instead, it learns by trial and error, exploring
different strategies to discover the most effective one.
Example:
A reinforcement learning agent could learn to play chess by interacting with the
chess environment. After each move, the agent receives feedback (e.g., gaining an
advantage or losing a piece), and through repeated games, it learns strategies that
increase the chances of winning.
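As a hedged sketch of this trial-and-error loop, the snippet below runs
tabular Q-learning on an invented 5-cell corridor world rather than chess;
the rewards, learning rate, and discount factor are arbitrary illustrative
choices:

# Tabular Q-learning sketch on a toy 5-cell corridor environment.
import numpy as np

n_states, n_actions = 5, 2          # states 0..4; actions: 0 = left, 1 = right
q_table = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration
rng = np.random.default_rng(0)

for episode in range(500):
    state = 0                        # start at the left end
    while state != n_states - 1:     # episode ends at the rightmost cell
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))
        else:
            action = int(np.argmax(q_table[state]))
        next_state = max(0, state - 1) if action == 0 else state + 1
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        q_table[state, action] += alpha * (
            reward + gamma * np.max(q_table[next_state]) - q_table[state, action]
        )
        state = next_state

# After training, the best action in every non-terminal state should be "right".
print("Best action per state:", np.argmax(q_table, axis=1))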
Deep Learning :
is a subset of machine learning based on artificial neural networks with many
layers. Deep learning is especially effective for tasks like image recognition,
speech processing, natural language understanding, and more. It enables models
to perform complex tasks without the need for manual feature extraction, as the
model learns the relevant features directly from raw data.
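As a minimal sketch of a model learning features directly from raw data, the
NumPy-only network below learns the XOR function; the layer sizes, learning
rate, and iteration count are arbitrary illustrative choices:

# Tiny two-layer neural network learning XOR from raw data (NumPy only).
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # raw inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass: the hidden layer learns useful features on its own.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of squared error via the chain rule.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print("Predictions:", out.round(3).ravel())  # should approach [0, 1, 1, 0]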
Deep learning has several types based on the architecture of neural networks used and the
specific problems they are designed to solve. Here are the most common types: