
Arti Final Revision 2025

The document covers various chapters on logic, knowledge-based agents, problem-solving agents, search strategies, and natural language processing (NLP) in artificial intelligence. It discusses different types of logic, the structure and functioning of knowledge-based agents, problem-solving steps, search algorithms, and the phases and techniques of NLP. Additionally, it highlights the applications of NLP and machine learning as a branch of AI.


CHAPTER 4

Logic | What exists | Knowledge state
Propositional logic | Facts | True/False
First-Order (predicate) | Facts/Objects/Relations | True/False/Unknown
Temporal | Facts/Objects/Relations/Time | True/False/Unknown
Probability Theory | Facts | Degree of belief 0~1
Fuzzy | Degree of truth | Degree of belief 0~1

Propositional logic: assumes that the world contains facts.


It has very limited expressive power: one proposition cannot represent a
group of objects, so it cannot make general statements.
First-order logic provides this flexibility by using ∀ and ∃.
Problems of PL:
1- No variables.
2- Can't directly express properties of individuals or relations between individuals.
3- Inference in PL is easy, but in FOL it is more complex.
First-Order Logic models the world in terms of:
Objects: things with individual identities. EX: people, houses, numbers.
Variable: represents an object. EX: x, y, z.
Constant: represents a specific individual in the world. EX: Ahmad, 3, Green.
*Predicate: gives a relation between objects/variables. EX: brother of, bigger than,
Likes(Ali, chocolate). (the answer is T/F)
- Unary: Male(x), Female(y). - Binary: Brother(x, y).
*Function: a relation where there is only one "value" for any given "input"
EX: MotherOf(Yasser), OldestSonOf(Walid, Ayman). (the answer is a value)
Properties: describe specific aspects of objects, used to distinguish between objects.
Connectives

negation(¬): not
conjunction(∧): and
disjunction(∨): or
implication(→/⇒): if...then
equivalence(⟺): if and only if (both directions)
De Morgan: ¬(a ∧ b) ⇒ ¬a ∨ ¬b, and ¬(a ∨ b) ⇒ ¬a ∧ ¬b

Quantifiers

(∀): for all (∃): there exists
¬∀x P ⇒ ∃x ¬P, and ¬∃x P ⇒ ∀x ¬P

Complex sentences are made from atomic sentences using connectives.


Term: any constants, variables, or functions. EX: Hamad, Japan.
Ground term: doesn’t contain any variables. EX: succ(1, 2).
Free variable: a variable that isn't bound by a quantifier (∀/∃).
well-formed formula (wff): is a sentence containing no “free” variables.
∀ Quantifier: A statement of the form ∀x some-formula is True, if the statement some-
formula is True for every choice of x. (everything has to be True for it to be True)
Syntax: ∀<variables> <sentence>.
Common mistake: using ∧ as the main connective with ∀ instead of ⇒.
∃ Quantifier: A statement of the form ∃x some-formula is True if the statement some-
formula is True for some choice of x. (some have to be True for it to be True)
Syntax: ∃<variables> <sentence>.
Common mistake: using ⇒ as the main connective with ∃ instead of ∧.
*Note: If the quantifiers are the same, we can switch their places and keep the same
meaning, but if they're different they're not swappable, because they won't carry the
same meaning:
∀X ∀Y = ∀Y ∀X, and ∃X ∃Y = ∃Y ∃X, BUT ∀X ∃Y ≠ ∃Y ∀X.
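The semantics of ∀ and ∃ over a finite domain can be sketched with Python's all() and any() (the domain and predicates below are made-up examples):

```python
# ∀ maps to all(), ∃ maps to any() over a finite domain (toy example).
domain = [1, 2, 3, 4]
p = lambda x: x > 0   # True for every element of the domain
q = lambda x: x > 3   # True for some elements only

print(all(p(x) for x in domain))  # ∀x p(x): True
print(any(q(x) for x in domain))  # ∃x q(x): True
print(all(q(x) for x in domain))  # ∀x q(x): False
# Negation rule from above: ¬∀x q(x) ⇔ ∃x ¬q(x)
print((not all(q(x) for x in domain)) == any(not q(x) for x in domain))  # True
```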
FOL to CNF

EX: ∀x (student(x) ⇒ ∃y take(x,y))

1- Assign clauses a and b (a before "⇒", b after "⇒")
2.1- PNF: rewrite the implication as "¬a ∨ b": ∀x (¬student(x) ∨ ∃y take(x,y))
2.2- PNF: move all quantifiers to the front (don't change their order):
∀x ∃y (¬student(x) ∨ take(x,y))
3- SNF: replace ∃y with f(x), a function of every ∀ variable that appears BEFORE the ∃:
∀x (¬student(x) ∨ take(x,f(x)))
*If there's no ∀ before it, replace it with a constant (any unused symbol)
4.1- CNF: break down any parentheses and distribute any ¬
4.2- CNF: remove all ∀: (¬student(x) ∨ take(x,f(x)))
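The propositional core of these steps (rewriting ⇒ as ¬a ∨ b and pushing ¬ inward with De Morgan) can be sketched in Python. The tuple representation and function names here are my own, and Skolemization is left out:

```python
# Minimal sketch (hypothetical representation): formulas as nested tuples,
# e.g. ('imp', a, b). Atoms are plain strings such as 'student(x)'.

def eliminate_implication(f):
    """Rewrite a ⇒ b as ¬a ∨ b, recursively."""
    if isinstance(f, str):
        return f
    op, *args = f
    args = [eliminate_implication(a) for a in args]
    if op == 'imp':
        a, b = args
        return ('or', ('not', a), b)
    return (op, *args)

def push_negation(f):
    """Move ¬ inward: ¬(a ∧ b) → ¬a ∨ ¬b, ¬(a ∨ b) → ¬a ∧ ¬b, ¬¬a → a."""
    if isinstance(f, str):
        return f
    op, *args = f
    if op == 'not':
        (inner,) = args
        if isinstance(inner, tuple):
            iop, *iargs = inner
            if iop == 'not':
                return push_negation(iargs[0])
            if iop == 'and':
                return ('or', *[push_negation(('not', a)) for a in iargs])
            if iop == 'or':
                return ('and', *[push_negation(('not', a)) for a in iargs])
        return ('not', inner)
    return (op, *[push_negation(a) for a in args])

# student(x) ⇒ take(x, f(x)) becomes ¬student(x) ∨ take(x, f(x))
step1 = eliminate_implication(('imp', 'student(x)', 'take(x,f(x))'))
print(step1)  # ('or', ('not', 'student(x)'), 'take(x,f(x))')
```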

END OF CHAPTER 4
CHAPTER 5
Knowledge-based agent: an agent that has an explicit representation of knowledge that
can be reasoned with.
It includes a KB and an inference system. KB: a set of facts about the world.
Knowledge level: the most abstract level; the knowledge an agent has.
Logical level: the level where knowledge is now represented in sentences.
Implementation level: the Physical representation of the sentences from the previous
step.

EX (Wumpus world): a 4×4 grid of 16 rooms; each room is hidden until the agent is inside
it. The world contains a monster (the Wumpus), pits, gold/glitter, breeze, and stench.
using PEAS:
- Performance: agent gets the gold (win); agent falls in a pit or is eaten by the monster (lose).
- Environment: rooms, stench, breeze, locations of the gold, monster, pits, and player.
- Actuators: move forward, turn right, turn left, move backward.
- Sensors: breeze, stench, glitter, bump on wall.

Partially observable: knows ONLY the local perceptions.


Deterministic: the outcome of each action is fully specified.
Sequential: there’s a sequence of actions performed.
Static: monster and pits are immobile (can’t be moved).
Discrete: a discrete environment; there's a finite (not infinite) number of states and
actions.
Single-agent: there’s only one agent which is the knowledge-based agent, others are
considered environment.

Logic: formal languages used to represent information in a way that allows drawing
conclusions.
Logic Syntax: defines the rules for creating valid sentences.
Logic Semantics: defines the "meaning"; defines the truth of a sentence.
Entailment (╞): something that logically follows from, or is implied by, something else.
EX: A ╞ B means that whenever A is true, B must also be true.

Models: an assignment of (T/F) values to each proposition symbol, so with n symbols
there are 2^n possible models.


Valid: all values (models) are True. (∀ True)
Satisfiable: at least one value (model) is True. (∃ True)
Unsatisfiable: all values (models) are False. (∀ False)
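Validity, satisfiability, and the 2^n model count can be sketched by brute-force enumeration (the function names and example sentences below are my own):

```python
# Minimal model-checking sketch: enumerate all 2**n assignments to the symbols.
from itertools import product

def models(sentence, symbols):
    """Yield the truth value of `sentence` under every assignment to `symbols`."""
    for values in product([True, False], repeat=len(symbols)):
        yield sentence(dict(zip(symbols, values)))

def valid(sentence, symbols):        # True in ALL models  (∀ True)
    return all(models(sentence, symbols))

def satisfiable(sentence, symbols):  # True in SOME model  (∃ True)
    return any(models(sentence, symbols))

# A ⇒ B written as (not A) or B: satisfiable but not valid
implies = lambda m: (not m["A"]) or m["B"]
print(valid(implies, ["A", "B"]))        # False (A=True, B=False is a countermodel)
print(satisfiable(implies, ["A", "B"]))  # True
```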

END OF CHAPTER 5
CHAPTER 6
Problem-solving agent: a type of intelligent agent designed to address and solve complex
problems/tasks in its environment; such agents are a fundamental concept in AI and are
used in various applications.
It's a goal-based agent, deciding the sequence of its actions based on its goal/desirable
states.

Steps of Problem-solving:
1- Goal Formulation:
- Formulate a goal that requires actions
- The goal must be satisfiable
- Actions lead from the initial state → the goal state
- Actions should be abstract (not detailed)
2- Problem Formulation:
- one of the core steps of PS: decide what actions must be taken to achieve the goal
- it consists of: 1- components 2- action 3- transition 4- goal test 5- path cost
1- components: [A,B,C,D,E,G,S]
2- action: road/path taken
3- transition: describing the change in states
4- goal test: destination is reached
5- path cost: total journey cost

Search algorithm:
- There are many paths to achieve a goal; when they're all combined together, the result is called a Tree
- An agent can examine different possibilities to choose the best.
- The process of looking for the best sequence is called Search
- The best sequence is called the Solution
- A search algorithm takes a problem as input and returns a sequence of actions as output.
- Final phase is called execution phase.
- The full process of agent solving-problem is expressed as (Formulate, search, execute):
1. formulate a goal and a problem
2. search for a sequence of actions to solve the problem
3. Execute the actions one at a time

In a Search tree:
root: Initial state
actions: Branches
states: Nodes

Parent node: a predecessor (before) of any node


Child nodes: a descendant (after) of any node
Leaf node: a node with no children
Search strategy: how to choose which state to expand next
A repeated state: same state is encountered multiple times during the search process

Binary: has two subtrees (named left and right)


Ordered: left < parent, right > parent
- ordered so searching, adding and deleting can be done efficiently.

EX: Construct the Binary Search Tree for the following series:
11, 6, 8, 19, 4, 10, 5, 17, 43, 49, 31. Let's consider 11 as the root node.

Starting from the root: left = smaller, right = bigger. With each number, start again
from the root and navigate your way down; repeat until all numbers are placed in the tree.

EX: how to reach 17: start at the root 11, go right to 19 (17 > 11), then left to 17 (17 < 19).
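The insertion and search rules above can be sketched as (a minimal BST, not tied to any particular figure):

```python
# Minimal binary search tree: insert each number starting from the root
# (smaller goes left, bigger goes right), then trace the path down to 17.
class Node:
    def __init__(self, value):
        self.value, self.left, self.right = value, None, None

def insert(root, value):
    if root is None:
        return Node(value)
    if value < root.value:
        root.left = insert(root.left, value)
    else:
        root.right = insert(root.right, value)
    return root

def path_to(root, value):
    """Return the values visited from the root down to `value`, or None."""
    path = []
    while root is not None:
        path.append(root.value)
        if value == root.value:
            return path
        root = root.left if value < root.value else root.right
    return None

root = None
for n in [11, 6, 8, 19, 4, 10, 5, 17, 43, 49, 31]:
    root = insert(root, n)
print(path_to(root, 17))  # [11, 19, 17]
```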


Measure the performance of a search strategy by:
Completeness: is the strategy guaranteed to find a solution when there is one?
Optimality: does it find the highest-quality solution when there are different solutions?
Time complexity: how long does it take to find a solution?
Space complexity: how much memory is needed to perform the search?

For effectiveness: consider the total cost = path cost of the solution found + search cost
search cost = the time needed to find the solution
Tradeoff: a long search time for the best solution with the least cost,
or a shorter search time for a solution with a somewhat larger path cost

END OF CHAPTER 6
CHAPTER 7

Search is the fundamental technique of AI.


Possible answers, decisions or courses of action are structured into an abstract space,
which we then search in.
Types of Search:
1- Blind (uninformed):
A search that has no information about its domain. It can only distinguish a non-goal
state from a goal state.
Blind search strategies:
Breadth-first search (BFS)
Depth-first search
Bidirectional search
Iterative deepening search
Uniform-cost search

BFS:
Each state may be a potential candidate for the solution
In BFS the frontier is implemented as a FIFO (First-In, First-Out) queue
Algorithm:
1- start on level 1 (root)
2- moving to the next level starting from left moving your way to the right
3- repeating step 2 until all nodes are visited

EX: What is the expansion order of nodes using Breadth-First Search (BFS)?
(a tree with 4 levels, expanded level by level, left to right)

1→2→7→8→3→6→9→12→4→5→10→11
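The algorithm above can be sketched with a FIFO frontier; the graph below is one tree consistent with the expansion order in the example (the exact parent links are my assumption):

```python
# BFS with the frontier as a FIFO (First-In, First-Out) queue.
from collections import deque

def bfs_order(graph, start):
    """Return the nodes in the order BFS expands them."""
    frontier = deque([start])          # FIFO queue
    visited = [start]
    while frontier:
        node = frontier.popleft()      # oldest node out first
        for child in graph.get(node, []):
            if child not in visited:
                visited.append(child)
                frontier.append(child)
    return visited

# A tree consistent with the example's expansion order (assumed structure).
graph = {1: [2, 7, 8], 2: [3, 6], 8: [9, 12], 3: [4, 5], 9: [10, 11]}
print(bfs_order(graph, 1))  # [1, 2, 7, 8, 3, 6, 9, 12, 4, 5, 10, 11]
```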
Applications of BFS
- Find the shortest path
- Form peer-to-peer network connections
- Find neighboring locations in the GPS navigation system
- Broadcast packets in a network
- Find all the nodes within one connected component in an otherwise disconnected graph

2- informed (heuristic search/ directed search):


A search that knows whether one non-goal state is more promising than another. We guess
what lies ahead and use that information to decide where to look next.
When more information than the initial state, the operators, and the goal test is available,
the size of the search space can usually be constrained.
Informed search algorithms require details such as distance to reach the goal, steps to
reach the goal, cost of the paths which make this algorithm more efficient.

Heuristic: a function that provides an estimate of solution cost.


It can be considered as a rule for deciding which choice might be best, but there is no
general theory for finding heuristics, because every problem is different.
Choice of heuristics depends on knowledge of the problem space.

Informed search strategies:


Best-first search (BFS)
A* search
Heuristics
Local search algorithms
Best-first search:
Uses an evaluation function h(n) to decide which adjacent node is most promising and then
explores it; it uses the concepts of a priority queue and heuristic search.
Algorithm:
1- Initialization: the root node starts in the 'ordered open list', with an empty 'closed list'
2- Move the root to the 'closed list', then add the children of the root, ordered by h value,
to the 'ordered open list', and select the one with the lowest h (for the lowest cost)
3- Move the selected node to the 'closed list', and add the new children of your current
node to the 'ordered open list', without deleting the previous entries
4- Repeat steps 2 & 3 until the destination is reached

EX: Find the path from S to G using the Best-First Search algorithm.

STEP | Ordered Open List | Closed List
Initialization | [S] (root/start point) | []
1 | [A, D] → D has the smallest h, leaving [A] | input: [S], output: [S, D]
2 | [A, B, E] → E has the smallest h, leaving [A, B] | input: [S, D], output: [S, D, E]
3 | [A, B, G] → G has the smallest h | input: [S, D, E], output: [S, D, E, G]
Path: S⇒D⇒E⇒G
Cost: 2+4+3=9
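The open/closed-list procedure can be sketched with a priority queue ordered by h(n); the graph, h values, and step costs below are made up so that the result reproduces the S⇒D⇒E⇒G example:

```python
# Greedy best-first search: always expand the frontier node with the lowest h.
import heapq

def best_first(graph, h, start, goal):
    frontier = [(h[start], start, [start])]   # priority queue ordered by h
    closed = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)                       # move expanded node to closed list
        for child, cost in graph.get(node, []):
            if child not in closed:
                heapq.heappush(frontier, (h[child], child, path + [child]))
    return None

# Made-up graph and heuristic values (assumptions, not the figure's real numbers).
graph = {"S": [("A", 3), ("D", 2)], "D": [("B", 1), ("E", 4)],
         "E": [("B", 6), ("G", 3)]}
h = {"S": 10, "A": 9, "D": 5, "B": 7, "E": 3, "G": 0}
print(best_first(graph, h, "S", "G"))  # ['S', 'D', 'E', 'G']
```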

END OF CHAPTER 7
CHAPTER 8

Natural language processing (NLP): the process of computer analysis of input provided in
a human language, and conversion of this input into a useful form of representation.
It is concerned with getting computers to perform useful and interesting tasks with
human languages, and with helping machines understand human language.

Input/Output of NLP system:


1- written text:
To process it we need:
- lexical, syntactic, semantic knowledge about the language
- discourse information, real world knowledge
2- speech
To process it we need:
- lexical, syntactic, semantic knowledge about the language
- discourse information, real world knowledge
- the challenges of speech recognition and speech synthesis.

Natural Language Processing (NLP) levels:


1 Phonological level sound of language (speech)
2 Morphological level word structure
3 Lexical level meaning of individual words
4 Syntactic level Grammar & Sentence Structure
5 Semantics level Meaning of Sentences, and phrases
6 Pragmatics level Context & Intended Meaning
7 Discourse level how immediately preceding sentences affect the interpretation
Phases of NLP:
1- Lexical analysis/morphological processing: Breaks text into words, phrases, or
meaningful units (tokens)
2- Syntactic Analysis/parsing: Analyzes the grammatical structure of a sentence
3- Semantic Analysis: assigns meanings to words and sentences; this stage uses word
sense disambiguation (WSD) to determine meaning based on context.
Many words have several meanings, so it uses a process of elimination to find the correct
meaning
4- Discourse Integration(Context Understanding): Connects meaning across multiple
sentences.
5- Pragmatic Analysis (Intent & Implication Recognition): overall communicative and
social context and its effect on interpretation
*Note: this stage Helps in chatbots, sentiment analysis, and sarcasm detection.

How to perform NLP (techniques):


1- segmentation: break the entire document down into sentences (detecting boundaries)
Challenge: not all periods (.) indicate sentence boundaries EX: “Dr.”
2- tokenization: break down the sentence into its constituent words and store them,
each word is called a token
3- Stemming: reduces a word to its stem by cutting off affixes (prefix/suffix).
skipping ⇒ skip (stem)
4- Lemmatization: obtaining the root/base form (lemma) of a word.
Am/Is/Are ⇒ Be (lemma)
5- Stop word analysis: make learning process faster by getting rid of non-essential words
Ex: “was, in, is, and, the” are called stop words and can be removed.
6- Dependency parsing: analyzes the grammatical relationships between the words in a sentence
7- Part-of-speech tagging: Identifies each word’s grammatical category
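Techniques 2, 3, and 5 above can be sketched in a few lines (the stop-word list and the suffix rules are toy assumptions; real systems use libraries such as NLTK):

```python
# Toy sketch of tokenization, stop-word removal, and a crude suffix-stripping stemmer.
STOP_WORDS = {"was", "in", "is", "and", "the", "a"}

def tokenize(sentence):
    """Break a sentence into lowercase word tokens, stripping punctuation."""
    return [w.strip(".,!?").lower() for w in sentence.split()]

def remove_stop_words(tokens):
    """Drop non-essential words to make later processing faster."""
    return [t for t in tokens if t not in STOP_WORDS]

def stem(token):
    """Naive stemmer: strip a few common suffixes (a toy rule set)."""
    for suffix in ("ping", "ing", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

tokens = tokenize("The cat was skipping in the garden.")
print(remove_stop_words(tokens))  # ['cat', 'skipping', 'garden']
print(stem("skipping"))           # skip
```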

Named Entity Recognition (NER): classifies words into subcategories; its goal is to
extract information and find any keywords in a sentence
Applications of NLP: (Highlighted easiest to memorize/has examples)
- Text summarization
- Question answering
- Grammar Correction
- Translation Tools: Google Translate, Amazon Translate
- Chatbots: ChatGPT, DeepSeek
- Information retrieval & Web Search: Google, Yahoo, Bing
- Computer conversations
- Understanding text
- Access to information
- Extracting the information contained in the text
- Understanding speech
- Find and replace
- Correction of spelling mistakes
- Interlingual translation
- Reading printed text and correcting reading errors
- Development of writing aids
- Foreign language reading aids
- Voice interaction with computer

END OF CHAPTER 8
CHAPTER 9

Machine Learning (ML): a branch of artificial intelligence (AI) and computer science that
focuses on using data and algorithms to enable AI to imitate the way humans learn,
gradually improving its accuracy.
It provides the ability to automatically learn from data and past experiences to identify
patterns and make predictions with minimal human intervention.
ML applications are fed with new data, and they can independently learn, grow, develop,
and adapt.

Types of ML:
1- Supervised Machine Learning
“labelled” training data, and on basis of that data, machines predict the output
Steps:
1- Determine the type of dataset
2- Collect labelled data
3- Split the dataset into training dataset & test dataset EX: train: 70%, test: 30%
4- Identify the suitable algorithm for the model
5- Execute the algorithm on the training dataset
6- Evaluate the accuracy of the model by providing the test set
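Step 3 above (the 70%/30% split) can be sketched as (the data and the split function are made-up examples):

```python
# A quick 70/30 train/test split of labelled data.
import random

def split_dataset(data, train_fraction=0.7, seed=42):
    data = data[:]                      # copy so the original list is untouched
    random.Random(seed).shuffle(data)   # shuffle before splitting
    cut = int(len(data) * train_fraction)
    return data[:cut], data[cut:]

labelled = [(x, x % 2) for x in range(10)]   # made-up (feature, label) pairs
train, test = split_dataset(labelled)
print(len(train), len(test))  # 7 3
```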

Types of SL algorithms:
Classification: output variable is categorical. EX: “yes or no”, “true or false”, “male, female”
Regression: input and output variables have a linear relationship.
Popular regression algorithms: Simple Linear, Multivariate, Decision Tree, Lasso

Main goal of classification is to predict the class label (Yes/ No).


Main goal of regression is to predict a value or a number.
Evaluation metrics:
Different types of EM are for different types of ML algorithm (classification, regression,
ranking..)
Some metrics can be useful for more than one type of algorithm. EX: Precision-Recall
popular classification: Accuracy, Confusion matrix and AUC
They're methods for determining an algorithm's performance and behavior
Helpful for defining the model in a way that offers the best-performing algorithm

Accuracy: (correct predictions ÷ total predictions) × 100

Confusion matrix: a more detailed breakdown of predictions per class, EX:

         | positive | negative
positive |    70    |    10
negative |     5    |    15
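Accuracy can be checked against the matrix above, assuming rows are actual classes and columns are predicted classes (that orientation is my assumption):

```python
# 70 true positives, 10 false negatives, 5 false positives, 15 true negatives.
tp, fn, fp, tn = 70, 10, 5, 15
accuracy = (tp + tn) * 100 / (tp + fn + fp + tn)   # correct / total × 100
print(accuracy)  # 85.0
```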

Holdout set: The available data set is divided into two disjoint subsets,
the training set (for learning a model)
the test set (holdout set) (for testing the model)
Important: the sets must be separated.

Cross-validation:
1- Split the dataset into N equal-size disjoint subsets (folds)
2- Use one fold for testing and the remaining folds for learning
3- Repeat this process until every fold has been used as the test set once
4- Average the evaluation metrics (accuracy, precision, etc.)
**This method is for small data sets
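The four steps above can be sketched as (the data and the scoring function are toy stand-ins for a real model):

```python
# N-fold cross-validation: each fold is the test set once; scores are averaged.
def cross_validate(data, n_folds, train_and_score):
    fold_size = len(data) // n_folds
    scores = []
    for i in range(n_folds):
        test = data[i * fold_size:(i + 1) * fold_size]          # one fold for testing
        train = data[:i * fold_size] + data[(i + 1) * fold_size:]  # the rest for learning
        scores.append(train_and_score(train, test))
    return sum(scores) / n_folds                                 # average the metric

# Dummy scorer just to show the mechanics: "accuracy" = fraction of evens in the test fold.
data = list(range(12))
score = cross_validate(data, 4, lambda train, test: sum(x % 2 == 0 for x in test) / len(test))
print(round(score, 3))
```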
K-Nearest Neighbors (KNN): an algorithm that stores all available cases and classifies new
cases based on a similarity measure. EX: a distance function
The distance function is crucial (important), but depends on the application.
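A minimal KNN sketch with Euclidean distance and majority voting (the stored cases below are made up):

```python
# Classify a new point by majority vote among the k nearest stored cases.
import math
from collections import Counter

def knn_classify(cases, point, k=3):
    """cases: list of ((x, y), label) pairs; distance measure is Euclidean."""
    by_distance = sorted(cases, key=lambda c: math.dist(c[0], point))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

cases = [((1, 1), "A"), ((1, 2), "A"), ((2, 1), "A"),
         ((8, 8), "B"), ((8, 9), "B"), ((9, 8), "B")]
print(knn_classify(cases, (2, 2)))  # A
print(knn_classify(cases, (8, 7)))  # B
```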


2- Unsupervised Machine Learning (chapter 10)

END OF CHAPTER 9
CHAPTER 10
Supervised learning: discovers patterns from labelled data.
Unsupervised learning: the data have no target features.
There is no supervision (no labels/responses), only inputs
The purpose of unsupervised learning is to find natural partitions in the training set
General strategies: Clustering and Hierarchical Clustering

Collecting and labeling a large set of sample patterns can be very expensive, and
training for supervised learning needs a lot of computation time.
Unsupervised learning is good at finding patterns and relationships in data without being told what to look for
This can help you learn new things about your data.

Clustering: the classification of objects into different groups, so that the data in each
subset share some common trait, often according to some defined distance measure.

Goal: group data points that are close (or similar) to each other
Input: N unlabeled examples
Output: Group the examples into K “homogeneous” partitions

Clustering applications:
Anomaly detection: Identify unusual patterns, enabling the detection of fraud, intrusion,
or system failures
Image analysis: can group images based on content
In marketing: segment customers according to their similarities to do targeted marketing.
Unsupervised Learning Methods:
- k-means
- The EM Algorithm
- Gaussian mixture model
- Principal Component Analysis (PCA)

K-means algorithm:
clusters n objects based on traits/features into k partitions, where k < n. EX: k=3 gives 3 groups
Each cluster has a cluster center, called the centroid.
*k is specified by the user, and it's the most popular clustering algorithm

K-means Steps:
1- Define K
2- Randomly set k points (seeds) as centroids (cluster centers)
3- Assign each data point to the closest centroid
4- Recalculate the centroid of each cluster
5- Reassign data points to the closest recalculated centroid
6- Repeat steps 3 to 5 until the stopping criteria are met
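The steps above can be sketched for k=2 on 1-D points (toy data; real implementations use libraries such as scikit-learn):

```python
# Minimal k-means on 1-D points with k=2 and hand-picked seed centroids.
def k_means(points, centroids, iterations=10):
    for _ in range(iterations):
        # Step 3: assign each point to the closest centroid
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Steps 4-5: recompute each centroid as the mean of its cluster
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

centroids, clusters = k_means([1, 2, 3, 10, 11, 12], centroids=[1, 10])
print(centroids)  # [2.0, 11.0]
print(clusters)   # [[1, 2, 3], [10, 11, 12]]
```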

Distance measure: determine how the similarity of two elements is calculated and it will
influence the shape of the clusters
It includes:
- Euclidean distance
- Manhattan distance
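Both measures can be sketched for 2-D points:

```python
# Euclidean: straight-line distance. Manhattan: sum of axis-wise differences.
import math

def euclidean(a, b):
    return math.sqrt((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2)

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

print(euclidean((0, 0), (3, 4)))  # 5.0
print(manhattan((0, 0), (3, 4)))  # 7
```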
Strengths of k-means:
Simple: easy to understand and to implement
Efficient: low time complexity
Flexibility: can easily adjust to changes.

Weaknesses:
The user needs to specify k.
The algorithm is sensitive to outliers
Outliers could be errors
Outliers: data points that are very far away from other data points.

END OF CHAPTER 10
