Arti Final Revision 2025
Quantifiers
Negation duality: ¬∀x P(x) ≡ ∃x ¬P(x), and ¬∃x P(x) ≡ ∀x ¬P(x)
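Written out in full (a standard equivalence; P stands for any predicate), the two negation rules are:

```latex
% Quantifier negation (De Morgan duality for quantifiers):
\neg \forall x\, P(x) \;\equiv\; \exists x\, \neg P(x)
\qquad
\neg \exists x\, P(x) \;\equiv\; \forall x\, \neg P(x)
% Instance: "not everyone passed" is the same as "someone did not pass".
```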
END OF CHAPTER 4
CHAPTER 5
Knowledge-based agent: an agent that has an explicit representation of knowledge that
can be reasoned with.
It includes a KB and an inference system. KB: a set of facts about the world.
Knowledge level: the most abstract level; the knowledge an agent has.
Logical level: the level where knowledge is now represented in sentences.
Implementation level: the Physical representation of the sentences from the previous
step.
EX (Wumpus world): a 4x4 grid of 16 rooms; each room is hidden until you’re inside. It
includes a monster (the Wumpus), pits, gold/glitter, breeze, and stench.
using PEAS:
- Performance: agent gets gold (win), agent falls in a pit or is eaten by the monster (lose).
- Environment: rooms, stench, breeze, location of gold and monster and pits and player.
- Actuators: move forward, turn right, turn left, move backward.
- Sensors: breeze, stench, glitter, bump on wall.
Logic: formal languages used to represent information in a way that allows drawing
conclusions.
Logic Syntax: defines the rules, to create valid sentences
Logic Semantics: defines the “meaning”; define the truth of a sentence.
Entailment (⊨): a sentence logically follows from, or is implied by, another sentence.
EX: A ⊨ B means that whenever A is true, B must also be true.
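Entailment can be checked mechanically by enumerating every truth assignment (model checking); a minimal sketch, where the propositional symbols P and Q are illustrative:

```python
from itertools import product

def entails(premise, conclusion, symbols):
    """Check premise |= conclusion by enumerating every truth assignment:
    entailment holds iff the conclusion is true in every model of the premise."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if premise(model) and not conclusion(model):
            return False  # a model where the premise holds but the conclusion fails
    return True

# (P and Q) |= P holds, but P |= (P and Q) does not:
print(entails(lambda m: m["P"] and m["Q"], lambda m: m["P"], ["P", "Q"]))  # True
print(entails(lambda m: m["P"], lambda m: m["P"] and m["Q"], ["P", "Q"]))  # False
```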
END OF CHAPTER 5
CHAPTER 6
Problem-solving agent: a type of intelligent agent designed to address and solve complex
problems/tasks in its environment. Problem-solving agents are a fundamental concept in
AI and are used in various applications.
It’s a goal-based agent, deciding the sequence of its actions based on its goal/desirable
states.
Steps of Problem-solving:
1- Goal Formulation:
- Steps are formulating a goal that requires actions
- Goal must be satisfiable
- Actions cause (from initial state → goal state)
- Actions should be abstract (not detailed)
2- Problem Formulation:
- one of the core steps of PS: decide what actions must be taken to achieve the goal
- it consists of: 1- states 2- actions 3- transition model 4- goal test 5- path cost
1- states: [A, B, C, D, E, G, S]
2- actions: road/path taken
3- transition model: describes the change in state each action causes
4- goal test: checks whether the destination is reached
5- path cost: total journey cost
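The five components can be sketched as a small route-finding problem; the road map and edge costs below are made up for illustration:

```python
# Hypothetical road map over the states [A..S], with illustrative edge costs.
GRAPH = {
    "S": {"A": 2, "B": 3},
    "A": {"C": 4},
    "B": {"D": 1, "E": 5},
    "C": {"G": 3},
    "D": {"G": 6},
    "E": {},
    "G": {},
}

def actions(state):             # 2- actions available in a state
    return list(GRAPH[state])

def transition(state, action):  # 3- transition model: acting moves us to a neighbor
    return action

def goal_test(state):           # 4- goal test: destination reached?
    return state == "G"

def path_cost(path):            # 5- path cost: total journey cost
    return sum(GRAPH[a][b] for a, b in zip(path, path[1:]))

print(path_cost(["S", "A", "C", "G"]))  # 2 + 4 + 3 = 9
```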
Search algorithm:
- There are many paths to achieve a goal; combined together they form a Tree
- An agent can examine different possibilities to choose the best.
- The process of looking for the best sequence is called Search
- The best sequence is called Solution
- Search algorithm takes problem as input and returns a sequence of actions as output.
- Final phase is called execution phase.
- The full process of agent solving-problem is expressed as (Formulate, search, execute):
1. formulate a goal and a problem
2. search for a sequence of actions to solve the problem
3. Execute the actions one at a time
In a Search tree:
root: Initial state
actions: Branches
states: Nodes
EX: Construct the Binary Search tree for the following series:
11, 6, 8, 19, 4, 10, 5, 17, 43, 49, 31. Let's consider 11 as the root node
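The tree for this series can be built with standard BST insertion (smaller keys go left, larger go right); an in-order traversal then returns the keys in sorted order. A minimal sketch:

```python
def insert(node, key):
    """Standard BST insertion: smaller keys go left, larger keys go right."""
    if node is None:
        return {"key": key, "left": None, "right": None}
    side = "left" if key < node["key"] else "right"
    node[side] = insert(node[side], key)
    return node

def inorder(node):
    """Left subtree, then node, then right subtree -> keys come out sorted."""
    if node is None:
        return []
    return inorder(node["left"]) + [node["key"]] + inorder(node["right"])

root = None
for key in [11, 6, 8, 19, 4, 10, 5, 17, 43, 49, 31]:  # 11 becomes the root
    root = insert(root, key)

print(root["left"]["key"], root["right"]["key"])  # 6 19
print(inorder(root))  # [4, 5, 6, 8, 10, 11, 17, 19, 31, 43, 49]
```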
For effectiveness: consider the total cost = path cost of the solution found + search cost
search cost = time needed to find the solution
Tradeoff: a long search time for the best solution with the least cost,
or a shorter search time for a solution with a slightly larger path cost
END OF CHAPTER 6
CHAPTER 7
BFS:
Each state may be a potential candidate for the solution
In BFS the frontier is implemented as a FIFO (First-In, First-Out) queue
Algorithm:
1- start at level 1 (the root)
2- move to the next level, visiting nodes from left to right
3- repeat step 2 until all nodes are visited
EX: What is the expansion order of nodes using Breadth-First Search (BFS)?
(Tree figure with 4 levels, root node 1.)
Answer: 1→2→7→8→3→6→9→12→4→5→10→11
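The FIFO-queue idea can be sketched in Python. The original tree figure is not preserved, so the child lists below are a reconstruction chosen to reproduce the expansion order from the example:

```python
from collections import deque

# Reconstructed tree (an assumption): root 1, with levels matching the
# BFS expansion order given in the example above.
TREE = {
    1: [2, 7, 8],
    2: [3, 6],
    7: [9],
    8: [12],
    3: [4, 5],
    9: [10, 11],
    6: [], 12: [], 4: [], 5: [], 10: [], 11: [],
}

def bfs(tree, root):
    """Breadth-first search: the frontier is a FIFO queue, so nodes are
    expanded level by level, left to right."""
    order, frontier = [], deque([root])
    while frontier:
        node = frontier.popleft()    # FIFO: oldest node first
        order.append(node)
        frontier.extend(tree[node])  # enqueue children left to right
    return order

print(bfs(TREE, 1))  # [1, 2, 7, 8, 3, 6, 9, 12, 4, 5, 10, 11]
```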
Applications of BFS
- Find the shortest path
- Form peer-to-peer network connections
- Find neighboring locations in the GPS navigation system
- Broadcast packets in a network
- Find all the nodes within one connected component in an otherwise disconnected graph
END OF CHAPTER 7
CHAPTER 8
Natural language processing (NLP): the process of computer analysis of input provided in
a human language, and conversion of this input into a useful form of representation.
It is concerned with getting computers to perform useful and interesting tasks with
human languages, and with helping machines understand human language.
Named Entity Recognition (NER): classifies words into subcategories; its goal is to
extract information and find keywords in a sentence
Applications of NLP: (Highlighted easiest to memorize/has examples)
- Text summarization
- Question answering
- Grammar Correction
- Translation Tools: Google Translate, Amazon Translate
- Chatbots: ChatGPT, DeepSeek
- Information retrieval & Web Search: Google, Yahoo, Bing
- Computer conversations
- Understanding text
- Access to information
- Extracting the information contained in the text
- Understanding speech
- Find and replace
- Correction of spelling mistakes
- Interlingual translation
- Reading printed text and correcting reading errors
- Development of writing aids
- Foreign language reading aids
- Voice interaction with computer
END OF CHAPTER 8
CHAPTER 9
Machine Learning (ML): a branch of artificial intelligence (AI) and computer science that
focuses on using data and algorithms to enable AI to imitate the way humans learn,
gradually improving its accuracy.
It provides the ability to automatically learn from data and past experiences to identify
patterns and make predictions with minimal human intervention.
ML applications are fed with new data, and they can independently learn, grow, develop,
and adapt.
Types of ML:
1- Supervised Machine Learning
“labelled” training data, and on basis of that data, machines predict the output
Steps:
1- Determine type of dataset
2- Collect labelled data
3- Split the dataset into training dataset & test dataset EX: train: 70%, test: 30%
4- Identify the suitable algorithm for the model
5- Execute the algorithm on the training dataset
6- Evaluate the accuracy of the model by providing the test set
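Step 3 (a 70/30 holdout split) can be sketched in plain Python; the toy labelled dataset here is made up for illustration:

```python
import random

def train_test_split(data, test_ratio=0.3, seed=42):
    """Shuffle the dataset, then hold out the last test_ratio fraction:
    the two subsets are disjoint, as the holdout method requires."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_ratio))
    return shuffled[:cut], shuffled[cut:]

# Toy labelled dataset: (feature, label) pairs.
data = [(x, "yes" if x > 5 else "no") for x in range(10)]
train, test = train_test_split(data)
print(len(train), len(test))  # 7 3
```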
Types of SL algorithms:
Classification: output variable is categorical. EX: “yes or no”, “true or false”, “male, female”
Regression: output variable is a continuous value; linear regression assumes a linear
relationship between input and output.
Popular regression algorithms: Simple Linear, Multivariate, Decision Tree, Lasso
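A minimal sketch of simple linear regression (ordinary least squares on one feature); the data points are illustrative:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b on a single feature."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

# Points lying exactly on y = 2x + 1, so the fit recovers slope 2, intercept 1.
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)  # 2.0 1.0
```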
Holdout set: The available data set is divided into two disjoint subsets,
the training set (for learning a model)
the test set (holdout set) (for testing the model)
Important: the sets must be separated.
END OF CHAPTER 9
CHAPTER 10
Supervised learning: discovers patterns from labelled data.
Unsupervised learning: the data have no target features.
There is no supervision (no labels/responses), only inputs.
The purpose of unsupervised learning is to find natural partitions in the training set.
General strategies: Clustering and Hierarchical Clustering
Collecting and labeling a large set of sample patterns can be very expensive, and
training for supervised learning needs a lot of computation time.
Unsupervised learning is good at finding patterns and relationships in data without
being told what to look for, which can help you learn new things about your data.
Clustering: classification of objects into different groups, so that the data in each subset
share some common trait, often according to some defined distance measure.
Goal: group data points that are close (or similar) to each other
Input: N unlabeled examples
Output: Group the examples into K “homogeneous” partitions
Clustering applications:
Anomaly detection: Identify unusual patterns, enabling the detection of fraud, intrusion,
or system failures
Image analysis: can group images based on content
In marketing: segment customers according to their similarities to do targeted marketing.
Unsupervised Learning Methods:
- k-means
- The EM Algorithm
- Gaussian mixture model
- Principal Component Analysis (PCA)
K-means algorithm:
clusters n objects based on traits/features into k partitions, where k < n. EX: k=3 gives 3 groups
Each cluster has a cluster center, called a centroid.
*k is specified by the user, and it’s the most popular clustering algorithm
K-means Steps:
1- Define K
2- Randomly pick k points (seeds) as initial centroids (cluster centers)
3- Assign each data point to the closest centroid
4- Recompute the centroid of each cluster
5- Reassign data points to the closest recomputed centroid
6- Repeat steps 3 to 5 until the centroids stop changing (convergence)
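The steps above can be sketched in plain Python; the two obvious blobs of points are made up for illustration, and k=2 should recover their centers:

```python
import random

def dist(p, q):
    """Squared Euclidean distance between two points (tuples)."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def mean(pts):
    """Centroid of a non-empty list of points."""
    return tuple(sum(xs) / len(pts) for xs in zip(*pts))

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: pick k random seeds, then alternate the assignment
    and centroid-update steps for a fixed number of iterations."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)               # step 2: random seeds
    for _ in range(iters):                          # steps 3-5 repeated
        clusters = [[] for _ in range(k)]
        for p in points:                            # step 3: closest centroid
            nearest = min(range(k), key=lambda i: dist(p, centroids[i]))
            clusters[nearest].append(p)
        centroids = [mean(c) if c else centroids[i] # step 4: recompute centers
                     for i, c in enumerate(clusters)]
    return centroids, clusters

pts = [(1, 1), (1, 2), (2, 1), (9, 9), (9, 8), (8, 9)]
centers, clusters = kmeans(pts, 2)
print(sorted(centers))  # one center near (1.3, 1.3), one near (8.7, 8.7)
```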
Distance measure: determine how the similarity of two elements is calculated and it will
influence the shape of the clusters
It includes:
- Euclidean distance
- Manhattan distance
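The two measures differ on the same pair of points; a minimal sketch:

```python
def euclidean(p, q):
    """Straight-line distance: square root of summed squared differences."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def manhattan(p, q):
    """City-block distance: summed absolute differences."""
    return sum(abs(a - b) for a, b in zip(p, q))

# Same pair of points, two different notions of "close":
print(euclidean((0, 0), (3, 4)))  # 5.0
print(manhattan((0, 0), (3, 4)))  # 7
```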
Strengths of k-means:
Simple: easy to understand and to implement
Efficient: time complexity is roughly O(t·k·n) (t iterations, k clusters, n points)
Flexibility: can easily adjust to changes.
Weaknesses
User needs to specify k.
The algorithm is sensitive to outliers
Outliers could be errors
Outliers: data points that are very far away from other data points.
END OF CHAPTER 10