Artificial Intelligence Assignment Two
Mohamed Abdihakim
VU-BCS-2209-1180-DAY
Akandwanaho Alvin
VU-BCS-2407-1907-DAY
Musiimenta Naume
VU-BCS-2209-0868-DAY
VU-BCS-1909-0074
ETWALU Emmanuel
VU-BCS-2209-0831-EVE
i. Hill Climbing: A heuristic search algorithm that repeatedly moves from the current state to a
neighboring state with a better objective-function value, stopping when no neighbor improves on the current state (a local optimum).
Scenario: Optimizing traffic flow in a city. Hill climbing could be used to find the best combination of
traffic light timings to minimize congestion. By iteratively adjusting the timings and evaluating the
resulting traffic flow, the algorithm can converge towards a local optimum.
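The iterate-and-improve loop described above can be sketched in Python. The one-dimensional objective below is a toy stand-in for a real traffic-flow measure (a real system would evaluate timings against a traffic simulator), so the numbers are purely illustrative:

```python
def hill_climb(start, objective, neighbors, max_steps=1000):
    """Repeatedly move to the best neighbor until no neighbor improves."""
    current = start
    for _ in range(max_steps):
        best = max(neighbors(current), key=objective)
        if objective(best) <= objective(current):
            return current  # local optimum: no neighbor is better
        current = best
    return current

# Toy stand-in for a traffic-light timing: "congestion" is lowest at x = 7.
objective = lambda x: -(x - 7) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(hill_climb(0, objective, neighbors))  # climbs 0 -> 1 -> ... -> 7
```

Because the algorithm only ever moves uphill, it converges quickly but can get trapped on whichever local peak is nearest the starting point.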
ii. Simulated Annealing: A probabilistic metaheuristic inspired by the annealing process in metallurgy. It
allows for occasional "uphill" moves to escape local optima, with the probability of such moves gradually
decreasing over time.
Scenario: Solving the traveling salesperson problem (TSP), where the goal is to find the shortest possible
route that visits all cities exactly once. Simulated annealing can be used to explore different routes and
potentially find a near-optimal solution by allowing for occasional "uphill" moves to avoid getting stuck
in local minima.
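A minimal sketch of simulated annealing on a four-city TSP. The square city layout, the fixed seed, the cooling schedule, and the segment-reversal (2-opt style) neighbor move are all illustrative choices, not part of any standard benchmark:

```python
import math
import random

def tour_length(tour, dist):
    """Total length of a closed tour."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def simulated_annealing(tour, dist, temp=10.0, cooling=0.995, steps=5000):
    random.seed(0)  # fixed seed so the sketch is reproducible
    tour = tour[:]
    best = tour[:]
    for _ in range(steps):
        i, j = sorted(random.sample(range(len(tour)), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # reverse a segment
        delta = tour_length(cand, dist) - tour_length(tour, dist)
        # Always accept improvements; accept worse tours ("uphill" moves)
        # with probability exp(-delta / temp), which shrinks as temp cools.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            tour = cand
            if tour_length(tour, dist) < tour_length(best, dist):
                best = tour[:]
        temp *= cooling
    return best

# Four cities at the corners of a unit square; start from a bad order
# that crosses both diagonals.
coords = {"A": (0, 0), "B": (0, 1), "C": (1, 1), "D": (1, 0)}
dist = {a: {b: math.dist(coords[a], coords[b]) for b in coords} for a in coords}
best = simulated_annealing(["A", "C", "B", "D"], dist)
print(best, round(tour_length(best, dist), 3))
```

The early high temperature lets the search accept the occasional worse tour and escape local minima; as the temperature decays the behavior approaches pure hill climbing.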
iii. Genetic Algorithms: A metaheuristic inspired by the process of natural selection. It involves
representing solutions as individuals in a population, evaluating their fitness, and selecting the fittest
individuals to reproduce and create new offspring through crossover and mutation.
Scenario: Optimizing the design of a building for energy efficiency. Genetic algorithms can be used to
explore different combinations of materials, insulation, and architectural features to find the most
energy-efficient design. By representing different designs as individuals and evaluating their energy
consumption, the algorithm can evolve towards a near-optimal solution.
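The selection, crossover, and mutation cycle can be sketched as below. The bit-counting fitness (the classic "OneMax" toy problem) stands in for a real energy-efficiency score, which in practice would come from a building-energy simulator; population size, mutation rate, and seed are arbitrary illustrative values:

```python
import random

def genetic_algorithm(fitness, length=20, pop_size=30, generations=100, mut_rate=0.05):
    random.seed(1)  # fixed seed so the sketch is reproducible
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # tournament selection: the fitter of two random individuals
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            cut = random.randrange(1, length)            # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (random.random() < mut_rate)  # flip bits occasionally
                     for bit in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = genetic_algorithm(fitness=sum)  # OneMax: fitness = number of 1-bits
print(sum(best))
```

Each bit here could be read as a yes/no design choice (for example, "use double glazing"); a real encoding would mix categorical and numeric genes.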
1.b. How I would improve Usability with Responsible and Explainable AI
Responsible AI:
Fairness: Ensure that the algorithms do not perpetuate biases or discrimination. This can be achieved by
carefully selecting training data and using techniques like fairness constraints or counterfactual fairness.
Transparency: Make the decision-making process of the algorithms understandable. This can be done
through techniques like feature importance analysis or visualization of the decision boundaries.
Accountability: Establish clear accountability mechanisms for the use of the algorithms, including
guidelines for human oversight and intervention.
Explainable AI:
Interpretability: Make the algorithms' reasoning understandable to humans. This can be achieved
through techniques like rule extraction or feature importance analysis.
Explainability: Provide explanations for the algorithms' decisions in a way that is understandable to
humans. This can be done through techniques like case-based reasoning or counterfactual explanations.
2.a. Adversarial Search: A type of search algorithm used in game-playing and decision-making problems
where two or more opposing agents are involved. The goal is to find the optimal strategy for one agent
while considering the potential actions of the opponent.
Scenarios:
Chess: Adversarial search algorithms like Minimax and Alpha-Beta pruning are used to determine the
best move for a chess player by considering the possible moves of the opponent.
Negotiations: Adversarial search can be used to model negotiations between two parties, where each
party tries to maximize its own gain while considering the potential actions of the other party.
2.b.
Minimax Algorithm:
Purpose: To find the optimal move for a maximizing player in a two-player zero-sum game.
How it works:
Construct a game tree where each node represents a possible game state.
Evaluate the leaf nodes with a utility or evaluation function.
If it's the maximizing player's turn, take the maximum value of the node's children.
If it's the minimizing player's turn, take the minimum value of the node's children.
Propagate these values back up the tree to determine the optimal move for the maximizing player.
Example:
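A minimal Minimax over a tiny hand-built game tree. The nested-list tree and its payoff values are invented for illustration; leaves are payoffs for the maximizing player:

```python
def minimax(state, depth, maximizing, children, evaluate):
    """Return the Minimax value of `state` for the maximizing player."""
    kids = children(state)
    if depth == 0 or not kids:
        return evaluate(state)
    values = [minimax(c, depth - 1, not maximizing, children, evaluate)
              for c in kids]
    return max(values) if maximizing else min(values)

# Tiny game tree as nested lists; leaves are payoffs for the maximizer.
tree = [[3, 5], [2, 9]]
children = lambda s: s if isinstance(s, list) else []
evaluate = lambda s: s
print(minimax(tree, 2, True, children, evaluate))  # max(min(3, 5), min(2, 9)) = 3
```

The maximizer prefers the left branch: the minimizer would hold it to 3 there, versus only 2 on the right.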
Expectiminimax Algorithm:
Purpose: To find the optimal move for a maximizing player in a game with chance nodes (for example,
dice rolls).
How it works:
Max and min nodes are handled exactly as in Minimax.
For chance nodes, calculate the expected value by averaging the values of their children, weighted by
their probabilities.
Example:
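A minimal sketch of Expectiminimax. The tuple-based tree encoding and the two dice-like gambles with their probabilities are invented for illustration:

```python
def expectiminimax(node):
    kind, payload = node
    if kind == "leaf":
        return payload
    if kind == "chance":
        # Expected value: children weighted by their probabilities.
        return sum(p * expectiminimax(child) for p, child in payload)
    values = [expectiminimax(child) for child in payload]
    return max(values) if kind == "max" else min(values)

# A maximizing choice between two dice-like gambles.
tree = ("max", [
    ("chance", [(0.5, ("leaf", 4)), (0.5, ("leaf", 0))]),   # expected value 2.0
    ("chance", [(0.9, ("leaf", 1)), (0.1, ("leaf", 10))]),  # expected value 1.9
])
print(expectiminimax(tree))  # 2.0
```

The maximizer picks the first gamble: despite the second offering a payoff of 10, its expected value is lower.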
Alpha-Beta Pruning:
Purpose: To reduce the number of nodes explored in the Minimax search tree by eliminating branches
that cannot possibly influence the final decision.
How it works:
Alpha: The best value that the maximizing player can achieve along the path to the node.
Beta: The best value that the minimizing player can achieve along the path to the node.
If alpha >= beta at a node, the node and its subtree can be pruned, as there is no need to explore
further.
Example:
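A minimal sketch of Minimax with alpha-beta pruning over a tiny invented tree of nested lists (leaves are payoffs for the maximizer). Once the right branch reveals the value 2, the cutoff fires and the leaf 9 is never examined:

```python
def alphabeta(node, depth, alpha, beta, maximizing):
    if isinstance(node, (int, float)):  # leaf payoff
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the minimizer will never allow this branch
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break  # alpha cutoff: the maximizer already has something better
    return value

tree = [[3, 5], [2, 9]]
print(alphabeta(tree, 2, float("-inf"), float("inf"), True))  # 3
```

The result is identical to plain Minimax; pruning only skips work, never changes the answer.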
Adversarial machine learning is a field of study that focuses on understanding and defending against
attacks that aim to deceive or manipulate machine learning models. These attacks can take various
forms, including:
Data poisoning: Introducing malicious data into the training set to influence the model's behavior.
Model poisoning: Modifying the model's parameters or architecture to compromise its performance.
Evasion attacks: Crafting inputs that the model misclassifies, even though a human would still
recognize their true class.
Adversarial machine learning and adversarial search are closely related concepts. In adversarial search,
the goal is to find the optimal strategy for one agent while considering the potential actions of the
opponent. In adversarial machine learning, the goal is to find ways to attack a machine learning model
while considering the model's defense mechanisms.
Key similarities: Both frame the task as a contest between opposing agents, and both require
anticipating an adversary's best response when choosing an action.
Key differences:
Adversarial search typically focuses on discrete decision-making problems (for example, games), while
adversarial machine learning can involve continuous spaces (for example, images, text).
Adversarial machine learning often involves more complex models and data sets than adversarial search.
Adversarial machine learning may require specialized techniques to handle the continuous nature of the
problem and the complexity of the models involved.