Unit 2 - AIML
What is Uncertainty?
Uncertainty refers to a lack of complete knowledge about an
outcome or situation.
Non-monotonic reasoning:
Before understanding non-monotonic reasoning, let us first see
what is meant by monotonic reasoning.
In monotonic reasoning, once a conclusion is drawn, it remains
the same even if we add other information to the existing
information in our knowledge base. In monotonic reasoning,
adding knowledge does not decrease the set of propositions that
can be derived.
In monotonic reasoning, we derive valid conclusions from the
available facts only, and those conclusions are not affected by
new facts.
Monotonic reasoning is not suitable for real-time systems,
because in real time facts change, so we cannot use monotonic
reasoning there.
Monotonic reasoning is used in conventional reasoning systems,
and logic-based systems are monotonic.
Theorem proving is an example of monotonic reasoning.
Example: "All humans are mortal; Socrates is a human; therefore
Socrates is mortal." No new fact can retract this conclusion.
In contrast, in non-monotonic reasoning a conclusion may be
withdrawn when new information arrives: from "birds can fly" we
conclude that Tweety flies, but on learning that Tweety is a
penguin, we retract that conclusion.
Probabilistic reasoning:
• Probabilistic reasoning in AI involves using probability
theory to make decisions and draw conclusions based on
uncertain or incomplete information.
• It's a way for AI systems to handle uncertainty and make
educated guesses rather than giving definitive answers.
For Example:
Let's say you have an AI weather app that uses probabilistic
reasoning.
When you check the app, it doesn't just give you a single weather
forecast (e.g., "It will rain today"). Instead, it provides a
probability-based forecast like this:
• "There's a 70% chance of rain today."
• "There's a 30% chance of sunshine."
Features of Probabilistic Reasoning
Assigning Probabilities
AI assigns probabilities to different possible outcomes or events.
These probabilities indicate how likely each outcome is.
Quantifying Uncertainty
• Instead of making binary (yes/no) decisions, AI
acknowledges and quantifies uncertainty by expressing
probabilities.
• For instance, it might say, "There's an 80% chance it's true,"
indicating the level of confidence in an outcome.
Bayesian Inference
• Bayesian probability theory is a common framework used in
probabilistic reasoning. It involves updating probabilities as
new evidence becomes available.
• For example, if a medical test is 90% accurate and yields a
positive result, Bayesian reasoning allows for adjusting the
probability of having a disease based on this new
information.
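The update in the medical-test example above can be computed directly with Bayes' rule. The sketch below is illustrative: the 1% prevalence is an assumed prior, and the "90% accurate" test is assumed to mean 90% sensitivity and 90% specificity.

```python
def bayes_posterior(prior, sensitivity, specificity):
    """Posterior probability of disease given a positive test result."""
    # P(positive) = P(pos | disease) * P(disease)
    #             + P(pos | no disease) * P(no disease)
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    # Bayes' rule: P(disease | positive) = P(pos | disease) * P(disease) / P(positive)
    return sensitivity * prior / p_positive

# Assumed numbers: 1% prevalence, 90% sensitivity, 90% specificity.
posterior = bayes_posterior(prior=0.01, sensitivity=0.90, specificity=0.90)
print(round(posterior, 3))  # 0.083: a positive test raises 1% to about 8.3%
```

Note how unintuitive the result is: even with a "90% accurate" test, a positive result only raises the probability of disease to about 8%, because the disease is rare to begin with. This is exactly the kind of updating that probabilistic reasoning formalizes.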
Decision Making
• AI systems use these probabilities to make decisions that
aim to maximize expected outcomes or utility.
• For example, in autonomous vehicles, probabilistic
reasoning helps determine how fast to drive based on the
likelihood of encountering obstacles ahead.
Risk Assessment
• Probabilistic reasoning is valuable for assessing and
managing risks.
• It can be applied in financial modelling to estimate the level
of risk associated with different investment options.
• This allows decision-makers to make more informed choices
in situations involving uncertainty.
Bayes Theorem in AI
• Bayes' Theorem, named after the 18th-century
mathematician Thomas Bayes, stands as a foundational
principle in probability theory.
• Bayes' Theorem, also known as Bayes' Rule or Bayes'
Law, is a fundamental concept in AI and probability theory.
• It's used to update probabilities based on new evidence,
making it a crucial tool for reasoning under uncertainty.
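Stated formally, with H a hypothesis and E the observed evidence, the theorem reads:

```latex
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
```

Here P(H) is the prior probability of the hypothesis, P(E | H) is the likelihood of the evidence given the hypothesis, P(E) is the overall probability of the evidence, and P(H | E) is the updated (posterior) probability of the hypothesis after seeing the evidence.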
In AI, Bayes' Theorem is often applied in various areas,
including:
• Machine Learning: In machine learning, it's used for
Bayesian inference and probabilistic modelling. For
instance, it's employed in Bayesian networks, which are
graphical models that represent probabilistic relationships
among variables.
• Natural Language Processing: Bayes' Theorem can be
used in text classification tasks, such as spam detection,
sentiment analysis, and language modelling.
• Medical Diagnosis: Bayes' Theorem helps doctors update
the probability of a patient having a disease based on the
results of medical tests and the patient's symptoms.
• Autonomous Systems: In autonomous systems like self-
driving cars, Bayes' Theorem is used for sensor fusion and
decision-making under uncertainty.
• Recommendation Systems: It can be applied in
recommendation engines to improve the accuracy of
personalized recommendations by updating user preferences
based on their interactions and feedback.
Fuzzy logic:
Fuzzy logic is widely used in robotics to handle uncertainty,
imprecision, and partial truths, which are common in real-world
environments.
Unlike traditional binary logic, where variables are either true or
false, fuzzy logic allows for a range of values between 0 and 1,
enabling robots to make decisions based on degrees of truth.
Control Systems:
Fuzzy control systems use linguistic if-then rules (e.g., "IF the
obstacle is near THEN slow down") to map imprecise sensor
readings to control actions, without requiring an exact
mathematical model of the robot or its environment.
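As a minimal sketch of fuzzy control, the toy controller below maps an obstacle distance to a robot speed using two fuzzy rules. The membership ranges and the two speed values (0.2 m/s and 1.0 m/s) are assumptions chosen for illustration, not values from any real system.

```python
def triangular(x, a, b, c):
    """Triangular membership function: rises from a to a peak at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzy_speed(distance_m):
    """Two fuzzy rules:
    IF distance is NEAR THEN speed is SLOW (0.2 m/s)
    IF distance is FAR  THEN speed is FAST (1.0 m/s)
    """
    # Degrees of membership (assumed ranges): fully "near" at 0 m, fully "far" at 5 m
    near = triangular(distance_m, -1, 0, 4)
    far = triangular(distance_m, 1, 5, 11)
    total = near + far
    if total == 0:
        return 1.0  # no rule fires: default to full speed
    # Weighted-average (centroid-style) defuzzification
    return (near * 0.2 + far * 1.0) / total

print(round(fuzzy_speed(0.5), 2))  # 0.2 -> slow near an obstacle
print(round(fuzzy_speed(5.0), 2))  # 1.0 -> fast when the path is clear
```

At intermediate distances both rules fire partially, so the output blends smoothly between slow and fast; this graded behaviour is exactly what binary (true/false) logic cannot provide.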
Concept of learning
Learning automation:
1. Definition and Objective
• Definition: Learning automation in AI refers to the use of
automated tools and techniques to streamline the processes
involved in developing, training, optimizing, and deploying
machine learning (ML) models.
• Objective: Reduce the need for manual intervention in ML
workflows, enabling scalable, efficient, and faster AI
development.
2. Key Areas of Learning Automation
• AutoML (Automated Machine Learning):
o Automates the end-to-end process of applying ML to
problems.
o Includes tasks like data preprocessing, feature
engineering, model selection, hyperparameter tuning,
and model deployment.
o Tools: Google AutoML, H2O.ai, TPOT.
• Reinforcement Learning Automation:
o Focuses on automating the learning process where an
agent interacts with an environment to optimize
decision-making.
o Applications: Robotics, autonomous systems, AI in
gaming (e.g., AlphaGo).
• Neural Architecture Search (NAS):
o Automates the design of neural network architectures to
find the most effective model structure.
o Techniques: Reinforcement learning, evolutionary
algorithms, gradient-based methods.
o Tools: Google’s AutoML, ENAS.
• Hyperparameter Optimization:
o Automates the process of finding the best
hyperparameters for ML models.
o Techniques: Grid Search, Random Search, Bayesian
Optimization.
o Tools: Optuna, Ray Tune.
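Grid Search, the simplest of these techniques, just evaluates every combination of candidate hyperparameter values and keeps the best. The sketch below uses a hypothetical stand-in objective (a real workflow would train a model and measure validation error instead):

```python
import itertools

def validation_error(lr, reg):
    """Stand-in for training a model and measuring validation error.
    (Hypothetical objective: minimized at lr=0.1, reg=0.01.)"""
    return (lr - 0.1) ** 2 + (reg - 0.01) ** 2

# The grid: every combination of candidate values is tried.
grid = {
    "lr": [0.001, 0.01, 0.1, 1.0],
    "reg": [0.0, 0.01, 0.1],
}

best_params, best_err = None, float("inf")
for lr, reg in itertools.product(grid["lr"], grid["reg"]):
    err = validation_error(lr, reg)
    if err < best_err:
        best_params, best_err = {"lr": lr, "reg": reg}, err

print(best_params)  # {'lr': 0.1, 'reg': 0.01}
```

Random Search samples the grid randomly instead of exhaustively, and Bayesian Optimization (as in Optuna) uses past evaluations to choose the next promising point, which matters because each evaluation can mean training a full model.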
• MLOps (Machine Learning Operations):
o Involves automating the deployment, monitoring, and
maintenance of ML models in production environments.
o Tools: Kubeflow, MLflow.
3. Advantages of Learning Automation
• Scalability: Allows AI efforts to scale across multiple
domains with minimal manual input.
• Efficiency: Speeds up the ML lifecycle, from model
development to deployment.
• Accessibility: Makes advanced AI techniques more
accessible to non-experts by simplifying complex tasks.
• Performance: Enhances model performance by
automatically discovering optimal configurations that might
be missed manually.
4. Challenges and Considerations
• Complexity: While automation simplifies many tasks,
setting up automated systems can be complex and requires
expertise.
• Interpretability: Automated models can sometimes be less
interpretable, making it harder to understand why a model
performs well.
• Resource Intensive: Automation, especially in NAS and
reinforcement learning, can be computationally expensive.
5. Importance in Modern AI
• Innovation: Automation in learning is driving innovation by
allowing more time to focus on problem-solving rather than
technical minutiae.
• Business Impact: Enables businesses to deploy AI solutions
faster and more reliably, giving them a competitive edge.
Genetic algorithm:
History of GAs
•As early as 1962, John Holland's work on adaptive systems laid
the foundation for later developments.
•In 1975, Holland, together with his students and colleagues,
published the book Adaptation in Natural and Artificial Systems.
What is GA
A genetic algorithm (or GA) is a search technique used in
computing to find true or approximate solutions to optimization
and search problems.
• GAs are categorized as global search heuristics.
• GAs are a particular class of evolutionary algorithms that
use techniques inspired by evolutionary biology such as
inheritance, mutation, selection, and crossover (also called
recombination).
• The evolution usually starts from a population of randomly
generated individuals and happens in generations.
• In each generation, the fitness of every individual in the
population is evaluated, multiple individuals are selected
from the current population (based on their fitness), and
modified to form a new population.
• The new population is used in the next iteration of the
algorithm.
• The algorithm terminates when either a maximum number
of generations has been produced, or a satisfactory fitness
level has been reached for the population.
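The loop described above (evaluate fitness, select, recombine, mutate, repeat) can be sketched in a few lines. This toy GA maximizes the number of 1-bits in a bit string (the classic "OneMax" problem); the population size, mutation rate, and termination limits are arbitrary illustrative choices.

```python
import random

random.seed(0)

TARGET_LEN = 20     # length of each individual (bit string)
POP_SIZE = 30
GENERATIONS = 60
MUTATION_RATE = 0.02

def fitness(individual):
    # OneMax: count of 1-bits; the maximum possible fitness is TARGET_LEN
    return sum(individual)

def select(population):
    # Tournament selection: pick the fitter of two random individuals
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    # Single-point crossover (recombination)
    point = random.randint(1, TARGET_LEN - 1)
    return p1[:point] + p2[point:]

def mutate(individual):
    # Flip each bit with a small probability
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in individual]

# Evolution starts from a population of randomly generated individuals
population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    if fitness(max(population, key=fitness)) == TARGET_LEN:
        break  # satisfactory fitness level reached
    # Selection, crossover, and mutation form the next generation
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

print(fitness(max(population, key=fitness)))
```

Each generation, fitter individuals are more likely to be selected as parents, so good traits accumulate; mutation keeps introducing variation so the search does not stall.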
Vocabulary
• Individual - any possible solution
• Population - group of all individuals
• Fitness - target function that we are optimizing (each
individual has a fitness)
• Trait - possible aspect (feature) of an individual
• Genome - collection of all chromosomes (traits) for an
individual
Activation Functions
Non-linearity is provided by activation functions such as the
Rectified Linear Unit (ReLU), sigmoid, hyperbolic tangent
(tanh), Leaky ReLU, etc. ReLU is the most common activation
function because of its simplicity and effectiveness. These
functions allow the network to capture complex relationships in
the data. The key point is non-linearity.
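The four activation functions named above are simple one-line formulas; the sketch below implements each and shows how they treat a negative and a positive input differently:

```python
import math

def relu(x):
    # ReLU: passes positive values through, zeroes out negatives
    return max(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Leaky ReLU: small slope alpha for negative inputs instead of zero
    return x if x > 0 else alpha * x

def sigmoid(x):
    # Sigmoid: squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # Hyperbolic tangent: squashes into (-1, 1), zero-centred
    return math.tanh(x)

for f in (relu, leaky_relu, sigmoid, tanh):
    print(f.__name__, round(f(-2.0), 4), round(f(2.0), 4))
```

Because each function bends or saturates its input non-linearly, stacking layers of them lets a network represent functions that no purely linear model could.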
Applications
Artificial intelligence has been revolutionized by neural
networks, which have enabled remarkable advances in fields such
as computer vision, natural language processing, robotics,
finance, speech recognition, recommendation systems, and
healthcare. Their architecture, training process, and flexibility
make them incredibly powerful tools for complex problems.
Neural networks are continuously improving, making them
central to the future of AI and enabling us to tackle new
challenges in the ever-changing tech world.
Assignment No: 2
1. What is fuzzy logic?
2. Explain learning automation.
3. Define non-monotonic reasoning.
4. Explain neural networks.
5. Explain probabilistic reasoning in AI.
Assignment no 3:
1. Describe Hill Climbing.
2. Explain Best First Search.
3. Describe Ant Colony Optimization.
4. Define Heuristic Search.
5. Describe uninformed and informed search.