CCV (AAD, DBMS, CD, AI)

The document covers various concepts in algorithms and data structures, including asymptotic notation, AVL trees, Red-Black trees, B-trees, disjoint sets, graph theory, and dynamic programming. It explains key algorithms such as Dijkstra's, Kruskal's, and Merge Sort, along with their time complexities. Additionally, it discusses the principles of Divide and Conquer, Backtracking, and Greedy approaches, as well as DBMS concepts like data models and user types.


AAD MODULE 1

1.Big oh notation gives


Worst case or upper bound
2.asymptotically loose upper bound is given by
Little oh
3. Properties of asymptotic notation
Reflexivity
Symmetry
Transpose symmetry
Transitivity
4.Little omega gives what bound
Loose Lower bound
5. How many asymptotic notations are present and what are they?
5
BIG OH
LITTLE OH
BIG THETA
BIG OMEGA
LITTLE OMEGA
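
For quick reference, the formal definitions behind the three two-sided bounds can be summarised as follows (a minimal summary; c denotes a positive constant and n0 a threshold input size):

f(n) = O(g(n))  iff there exist c > 0 and n0 such that 0 <= f(n) <= c*g(n) for all n >= n0   (upper bound)
f(n) = Ω(g(n))  iff there exist c > 0 and n0 such that 0 <= c*g(n) <= f(n) for all n >= n0   (lower bound)
f(n) = Θ(g(n))  iff f(n) = O(g(n)) and f(n) = Ω(g(n))                                        (tight bound)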
6. What is space complexity
Amount of memory space taken by a program to run to its completion
7. What is time complexity
It is the amount of computer time an algorithm needs to run to completion; it measures how much time the algorithm requires to execute completely as a function of the input.
8. Recursion tree method uses which algorithm
Divide and conquer (the recursion tree method is used to solve the recurrences that arise from divide-and-conquer algorithms)
9. What are the characteristics of algorithm
Input, output, definiteness, effectiveness, finiteness
10. What is an algorithm
A finite set of instructions that specifies a sequence of operations to be carried out in order to solve a specific problem or class of problems is called an algorithm.
MODULE 2

1. What is an AVL tree?


An AVL tree is a self-balancing binary search tree where the heights of the two child
subtrees of any node differ by at most one.

2. Explain the balancing factor in AVL trees.


The balancing factor in AVL trees is the difference between the heights of the left and right
subtrees of a node. It is either -1, 0, or 1.

3. What are the rotation operations in AVL trees?


The rotation operations in AVL trees are single rotation (left or right) and double rotation
(left-right or right-left) used to maintain balance during insertion and deletion.
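
As an illustration, here is a minimal Python sketch of a single left rotation (the right rotation is mirror-symmetric). The Node class and the height bookkeeping are simplifying assumptions, not a complete AVL implementation:

class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None
        self.height = 1

def height(node):
    return node.height if node else 0

def left_rotate(x):
    # x's right child y becomes the new root of this subtree
    y = x.right
    x.right = y.left      # y's left subtree is re-attached as x's right subtree
    y.left = x
    # heights are recomputed bottom-up after the rotation
    x.height = 1 + max(height(x.left), height(x.right))
    y.height = 1 + max(height(y.left), height(y.right))
    return y

After an insertion, the tree is rebalanced by applying such rotations at the lowest unbalanced ancestor of the inserted node.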

4. What is the worst-case time complexity for search, insert, and delete operations in AVL
trees?
The worst-case time complexity for search, insert, and delete operations in AVL trees is
O(log n), where n is the number of nodes in the tree.

5. What is a Red-Black tree?


A Red-Black tree is a self-balancing binary search tree where each node contains an extra
bit for representing the colour (red or black) and follows certain properties to maintain
balance.

6. Explain the properties of a Red-Black tree.


Properties of a Red-Black tree include:
Every node is either red or black.
The root and leaves (NIL nodes) are black.
Red nodes cannot have red children.
Every path from a node to its descendant leaves contains the same number of black nodes.

7. How are Red-Black trees balanced after insertion?


Red-Black trees are balanced after insertion using colour flips, rotations, and adjustments to
maintain the properties of Red-Black trees.

8. What is the worst-case time complexity for search, insert, and delete operations in Red-
Black trees?
The worst-case time complexity for search, insert, and delete operations in Red-Black trees
is O(log n), where n is the number of nodes in the tree.

9. What is a B-tree?
A B-tree is a self-balancing tree data structure that maintains sorted data and allows for
efficient search, insertion, and deletion operations.

10. Explain the structure and properties of B-trees.


B-trees have a variable but often large number of children per node, typically used in disk-
based data structures. They ensure that the tree remains balanced by adjusting the tree
structure when necessary.

11. How are B-trees different from binary search trees?


B-trees are different from binary search trees in that they have multiple keys and child
pointers per node, which allows them to remain balanced and efficient for disk-based data
storage.
12. What are the advantages of using B-trees in database systems?
B-trees provide efficient search, insert, and delete operations, especially in disk-based
storage systems, due to their ability to minimise disk I/O operations and maintain balance.

13. What is a disjoint set?


A disjoint set is a data structure that keeps track of a set of elements partitioned into a
number of disjoint (non-overlapping) subsets.

14. Explain the operations of MakeSet, Union, and Find in disjoint sets.
MakeSet creates a new set containing a single element.
Union merges two sets into one set.
Find determines which set a particular element belongs to.

15. What is a disjoint set forest?


A disjoint set forest is a collection of disjoint sets represented using a forest (collection of
trees), where each tree represents a set.

16. How do you implement disjoint set forests efficiently?


Disjoint set forests can be implemented efficiently using path compression and union by
rank/heuristic to ensure optimal performance.
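
A minimal Python sketch of a disjoint-set forest using both heuristics (the class and method names are illustrative assumptions):

class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))   # MakeSet: each element starts in its own set
        self.rank = [0] * n

    def find(self, x):
        # path compression: point x directly at the root of its set
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        # union by rank: attach the shorter tree under the taller one
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1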

17. What is a spanning tree in graph theory?


A spanning tree of a graph is a subgraph that is a tree and connects all the vertices together
without any cycles.

18. Explain the properties of a spanning tree.


It connects all vertices in the graph.
It is acyclic (no cycles).
It is a minimum spanning tree if the sum of its edge weights is minimised.

19. How do you find a minimum spanning tree in a graph?


Minimum spanning trees can be found using algorithms like Prim's algorithm or Kruskal's
algorithm.

20. Compare Prim's and Kruskal's algorithms for finding minimum spanning trees.
Prim's algorithm starts from an arbitrary vertex and grows the tree by adding the shortest
edge that connects the tree to a non-tree vertex.
Kruskal's algorithm starts with a forest of single-node trees and repeatedly merges the
smallest edge until all vertices are connected.

21. Define a graph and its components.


A graph is a collection of nodes (vertices) and edges that connect pairs of nodes. A connected component is a maximal subgraph in which every pair of vertices is connected by a path.

22. What are the different representations of a graph?


Graphs can be represented using adjacency matrix, adjacency list, or edge list
representations.

23. Explain the depth-first search (DFS) algorithm for traversing graphs.
DFS explores as far as possible along each branch before backtracking, using a stack or
recursion to keep track of visited vertices.
24. Explain the breadth-first search (BFS) algorithm for traversing graphs.
BFS explores all the neighbour nodes at the present depth prior to moving on to the nodes
at the next depth level.

25. What is the time complexity of BFS and DFS?


The time complexity of both BFS and DFS is O(V + E), where V is the number of vertices
and E is the number of edges in the graph.
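
A minimal Python sketch of both traversals over an adjacency-list graph (the dict-of-lists graph format is an assumption):

from collections import deque

def bfs(graph, start):
    visited, order = {start}, []
    queue = deque([start])            # FIFO queue drives level-by-level exploration
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in graph[v]:
            if w not in visited:
                visited.add(w)
                queue.append(w)
    return order

def dfs(graph, v, visited=None):
    if visited is None:
        visited = set()
    visited.add(v)                    # recursion (an implicit stack) drives depth-first exploration
    for w in graph[v]:
        if w not in visited:
            dfs(graph, w, visited)
    return visited

Each vertex is enqueued or visited once and each edge is examined a constant number of times, which is where the O(V + E) bound comes from.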

26. Explain the time complexity of topological sorting.


The time complexity of topological sorting is O(V + E), where V is the number of vertices
and E is the number of edges in the graph.

27. Discuss the time complexity of weighted union heuristics in disjoint set forests.
With the weighted-union (union by size/rank) heuristic, each Union or Find operation takes O(log n) time; combined with path compression, the amortised cost per operation becomes nearly constant, O(α(n)).

28. Analyse the time complexity of the RB tree deletion algorithm.


The time complexity of RB tree deletion is O(log n), where n is the number of nodes in the
tree.

29. Explain RB tree deletion algorithm.


RB tree deletion algorithm involves several cases and rotations to maintain the properties
of the Red-Black tree after deletion.

30. What are the advantages of AVL trees over Red-Black trees?
AVL trees guarantee faster lookups in comparison to Red-Black trees but may require
more rotations during insertion and deletion operations.
MODULE 3
1. What is the main principle behind the Divide and Conquer strategy?
Answer: The main principle is to break down a problem into smaller sub-problems,
solve each sub-problem independently, and then combine the solutions to form the
solution to the original problem.

2. How does Merge Sort algorithm work?


Answer: Merge Sort divides the array into two halves recursively, sorts each half, and
then merges them back together in sorted order.
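
A minimal Python sketch of this divide-and-combine process:

def merge_sort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])        # divide: sort each half recursively
    right = merge_sort(a[mid:])
    # combine: merge the two sorted halves
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged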

3. What is the key concept behind Strassen's Matrix Multiplication?


Answer: Strassen's Matrix Multiplication reduces the number of scalar multiplications
by dividing matrices into submatrices and applying a set of mathematical operations
to compute the product.

4. What is the time complexity of Strassen's Matrix Multiplication algorithm?


Answer: The time complexity is O(n^2.81), where n is the size of the matrices.

5. Define the Fractional Knapsack Problem.


Answer: The Fractional Knapsack Problem involves selecting items, or fractions of items, so as to maximize the total value placed in a knapsack without exceeding its weight capacity.
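
A minimal Python sketch of the greedy solution, which takes items in decreasing order of value per unit weight (the function and parameter names are illustrative):

def fractional_knapsack(items, capacity):
    # items: list of (value, weight) pairs; fractions of an item may be taken
    total = 0.0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if capacity <= 0:
            break
        take = min(weight, capacity)           # take the whole item, or the fraction that fits
        total += value * (take / weight)
        capacity -= take
    return total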

6. How does Kruskal's Algorithm find the minimum cost spanning tree?
Answer: Kruskal's Algorithm adds edges to the spanning tree in ascending order of
weight, ensuring that no cycles are formed.

7. What is the time complexity of Kruskal's Algorithm?


Answer: The time complexity is O(E log V), where E is the number of edges and V is
the number of vertices.
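
A minimal Python sketch of Kruskal's algorithm (edges are assumed to be given as (weight, u, v) tuples over vertices 0..n-1); sorting the edges dominates, giving the O(E log V) bound:

def kruskal(n, edges):
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]      # path halving keeps the trees shallow
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):              # consider edges in ascending order of weight
        ru, rv = find(u), find(v)
        if ru != rv:                           # skip edges that would form a cycle
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total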

8. Describe the key steps of Dijkstra's Algorithm.


Answer: Dijkstra's Algorithm starts from a source node and repeatedly selects the
node with the smallest distance, updates the distances to its neighbours, and marks the
selected node as visited until all nodes are visited.
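
A minimal Python sketch of these steps using a binary heap as the priority queue (the dict-of-adjacency-lists graph format is an assumption):

import heapq

def dijkstra(graph, source):
    # graph: dict mapping a vertex to a list of (neighbour, weight) pairs
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue                           # stale queue entry, already improved
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist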

9. How does Dijkstra's Algorithm differ from Bellman-Ford Algorithm?


Answer: Dijkstra's Algorithm is used for finding the shortest paths from a single
source node to all other nodes in a graph with non-negative edge weights, while
Bellman-Ford Algorithm can handle graphs with negative edge weights.
10. Provide an example of a problem that can be solved using dynamic programming.
Answer: The Fibonacci sequence can be calculated using dynamic programming by
storing previously computed values to avoid redundant calculations.
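
A minimal Python sketch of this memoised (top-down dynamic programming) approach:

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # each value is computed once and cached, so the running time is O(n)
    return n if n < 2 else fib(n - 1) + fib(n - 2)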

11. Compare and contrast the time complexity of Dijkstra's Algorithm and Bellman-Ford
Algorithm.
Dijkstra's Algorithm has a time complexity of O((V + E) log V) or O(V^2), while
Bellman-Ford Algorithm has a time complexity of O(VE), where V is the number of
vertices and E is the number of edges.

12. Discuss the space complexity considerations for Merge Sort and Quick Sort
algorithms.
Merge Sort has a space complexity of O(n) due to the temporary array used for merging, while Quick Sort has an auxiliary space complexity of O(log n) on average (O(n) in the worst case) due to the recursive function calls used for partitioning.

MODULE 4

1) What is Dynamic Programming?

Dynamic programming simplifies complex problems by breaking them into smaller subproblems and storing their solutions to avoid redundant computation.

2) What are the advantages and disadvantages of dynamic programming?

Advantages:
Easy to understand and implement
Solves the problem only when it is required
Easy to debug
Disadvantage:
Occupies more memory that degrades the overall performance

3) What are the characteristics of Dynamic Approach?

Overlapping subproblems store computed solutions for reuse, enabling combined solutions for complex problems. Optimal substructure uses optimal solutions of subproblems to find the overall optimal solution.

4) What is matrix chain multiplication?

An optimization problem to find the most efficient way to multiply a given sequence
of matrices.
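
A minimal Python sketch of the standard O(n^3) dynamic-programming solution (dims[i-1] x dims[i] is assumed to be the shape of matrix i):

def matrix_chain_cost(dims):
    n = len(dims) - 1                          # number of matrices in the chain
    # cost[i][j] = minimum scalar multiplications needed for matrices i..j
    cost = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):             # length of the sub-chain
        for i in range(1, n - length + 2):
            j = i + length - 1
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                for k in range(i, j)           # k is the split point
            )
    return cost[1][n]

For example, matrix_chain_cost([10, 20, 5, 15]) returns 1750, corresponding to the parenthesisation (A1 A2) A3.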
5) What is Backtracking?

Backtracking uses depth-first search to solve decision problems by exploring the state
space tree until a solution is found.

6) What do you mean by Divide and Conquer?

It is a top-down approach which involves three steps:
Divide the problem into a number of subproblems.
Conquer the subproblems by solving them recursively.
Combine the solutions to the subproblems into the solution for the original problem.

7) What are the applications of backtracking?

N-queens problem
Sum of subset problem
Graph coloring
Hamiltonian cycle

8) What is Branch and Bound?

Branch and Bound is an optimization method that systematically explores the search
space, pruning branches unlikely to lead to optimal solutions, often using bounding
techniques to guide efficiently.

9) What are the applications of branch and bound?

Knapsack problem
Travelling salesman problem

10) What is Greedy Approach?

It prioritizes the best current option without considering future consequences, useful
for optimization involving sequential decision-making.

11) Define N-Queens problem?

The N-Queens problem entails positioning N queens on an N×N chessboard so that no two queens threaten each other, ensuring they don't share the same row, column, or diagonal.
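
A minimal Python backtracking sketch for the problem (the set-based column and diagonal bookkeeping is one common implementation choice):

def solve_n_queens(n):
    solutions = []
    cols, diag1, diag2 = set(), set(), set()
    placement = []                       # placement[r] = column of the queen in row r

    def backtrack(row):
        if row == n:
            solutions.append(placement[:])
            return
        for col in range(n):
            # prune positions attacked by an already-placed queen
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue
            cols.add(col)
            diag1.add(row - col)
            diag2.add(row + col)
            placement.append(col)
            backtrack(row + 1)
            placement.pop()              # undo the move and try the next column
            cols.remove(col)
            diag1.remove(row - col)
            diag2.remove(row + col)

    backtrack(0)
    return solutions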

12) Define state space tree?


It is a tree representing all the possible states, i.e., solution or non-solution of the
problem from the root of an initial state to the leaf as a terminal state.

13) What is a live node in backtracking?

A node that can still be expanded further (whose children have not all been generated yet) is called a live node.

14) What is an E-node in backtracking?

The live node whose children are currently being generated is called the E-node (the node being expanded).

15) What is a dead node in backtracking?

The node which cannot be further generated and also does not provide a feasible
solution.

MODULE 5

1.Problems that can be solved in polynomial time are known as

Ans: tractable
2. Problems whose solutions can be verified in polynomial time are known as

Ans: NP
3. Problems that cannot be solved by any algorithm are called

Ans: undecidable problems

4. The Euler’s circuit problem can be solved in

Ans: O(N²)

5.To which class does the Euler’s circuit problem belong

Ans: P class

6. Halting problem is an example for

Ans: undecidable problem

7.To which of the following class does a CNF-satisfiability problem belong

Ans: NP complete
8.What is vertex coloring of a graph

Ans: A condition where any two vertices having a common edge should not have same
color.

9.How many edges will a tree consisting of N nodes have

Ans: N – 1

10.Minimum number of unique colors required for vertex coloring of a graph is called?

Ans: chromatic number

11. An example of NP complete problem

Ans: Calculating the chromatic number of a graph

12.Clique Decision problem is

Ans: NP-Complete

13.The problem 3-SAT and 2-SAT are


Ans:NP Complete and P

14.Bin packing problem is the application of _______

Ans: Knapsack

15. What is the worst-case time complexity of the quick sort algorithm

Ans: O(N²)
16. Distinguish between Tractable and Intractable Problems

Ans: Tractable Problem: a problem that is solvable by a polynomial-time algorithm. The upper bound is polynomial.

Intractable Problem: a problem that cannot be solved by a polynomial-time algorithm. The lower bound is exponential.

DBMS CCV QUESTIONS


MODULE 1
1. Explain different dbms users.

• Sophisticated users: They interact with the system without writing programs.
• Naive users: They are unsophisticated users who interact with the
system by invoking one of the application programs that have been
written previously
• Application programmers: They are computer professionals who write
application programs
• Specialized users: They are sophisticated users who write specialized
database applications that do not fit into the traditional data-processing
Framework.

2. Explain structured data

Structured data: The information stored in relational databases is known as structured data because it is represented in a strict format.
3. Different data models

High-level or conceptual data models
Low-level or physical data models
Representational or implementation data models
4. Define schema
The overall design of the database system.
5. Explain different types of attributes

a. Simple and composite attributes
A composite attribute can be divided into smaller parts (e.g., Name), whereas a simple attribute cannot be divided further (e.g., Age).
b.Single-valued and multi-valued attribute
Single-valued attribute have a single value for a particular entity(Eg:Age)
and multi-valued attribute can have set of values for a particular entity(Eg:
Languages known)
c.Derived and stored attributes
Derived attribute can be derived from other attributes(Eg:Age) and stored
attributes are attributes from which the values of other attributes are
derived

6. Explain the different types of DBMS architecture.


There are three types of DBMS architecture:
● Single tier architecture - The data is readily available on the client machine.
● Two tier architecture - The DBMS software is present on the client machine and the
database is present on the server machine.
● Three tier architecture - A layer is present between the client and server machine and
there is no direct communication between them. A client DBMS application on the
client machine interacts with a server DBMS application on the server machine.
7. What is the main difference between DELETE and TRUNCATE command?
DELETE command is used to remove the rows in a table based on the condition set by the
WHERE clause. The rows deleted can be rolled back.
TRUNCATE command is used to remove all the rows in a table without any conditions. The
rows cannot be rolled back.
8. Define a relationship in DBMS and its various types.
A relationship is an association or link between two or more data entities. There are three
main types of relationships in DBMS:
● One to one - A single record in one table is related to a single record in another table
and vice versa.
● One to many / Many to one - A single record of one table is related to many records
of other tables and vice versa.
● Many to many - Multiple records of one table are related to multiple records of
another table.
9. What is E-R model in the DBMS?
E-R model is known as an Entity-Relationship model in the DBMS which is based on the
concept of the Entities and the relationship that exists among these entities.
10. What is a field in a table?
A field is an entity used for storing a particular type of data within a table like numbers,
characters, dates, etc.
11. Explain TCL commands. What are the different TCL commands in SQL?
Ans. TCL refers to Transaction Control Language. These commands are used to manage the
changes made by DML statements. These are used to process a group of SQL statements
comprising a logical unit. The three TCL commands are-
● COMMIT – writes the changes permanently to the database.
● SAVEPOINT – savepoints are breakpoints that divide a transaction into smaller logical units which can be rolled back individually.
● ROLLBACK – restores the database to its state as of the last COMMIT.
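
As an illustration, a minimal Python sqlite3 sketch showing COMMIT and ROLLBACK around a group of statements treated as one logical unit (the table and column names are made up for the example):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO account VALUES (1, 100), (2, 50)")
conn.commit()                      # COMMIT: make the inserts permanent

try:
    conn.execute("UPDATE account SET balance = balance - 30 WHERE id = 1")
    conn.execute("UPDATE account SET balance = balance + 30 WHERE id = 2")
    conn.commit()                  # both updates succeed or fail together
except sqlite3.Error:
    conn.rollback()                # ROLLBACK: undo the partial transfer on failure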
12. What is a Unique constraint?
A unique constraint is used to ensure that a field or column will have only a unique value (no
duplication).
13. What is a Not NULL constraint?
A Not NULL constraint is used for ensuring that the value in the field cannot be NULL.
14. What is a Check constraint?
A check constraint is used to limit the value entered in a field.
15. What is degree of a Relation?
It is the number of attributes in its relation schema.

Module 2

1. What does an RDBMS consist of?

A: It consists of a collection of tables, i.e., the data is organised in tabular format. The columns of a relation are known as fields (attributes) and the rows are known as records (tuples). Constraints in a relation are enforced through keys.

2. The ability to query data, as well as insert, delete, and alter tuples, is offered by
A: DML (Data Manipulation Language), which performs changes to the values in a relation.

3. What is a set of one or more attributes taken collectively to uniquely identify a record?
A: A super key; it is a set of one or more attributes that, taken collectively, uniquely identifies a record in the relation.

4. Which command is used to remove a relation from an SQL?


A: The drop table deletes the whole structure of the relation.

5. Student(ID, name, dept name, tot_cred)


In this query which attributes form the primary key?
A: ID

6. Which operation contains all pairs of tuples from the two relations, regardless of whether
their attribute values match.
A: Cartesian product is the multiplication of all the values in the attributes

7. What is a pictorial depiction of a database schema that shows the database's relations,
attributes, and primary keys and foreign keys?
A: Schema diagram

8. How is an attribute in relation, a foreign key ?


A: An attribute in a relation is a foreign key if it refers to the primary key of another relation.

9. What is a weak entity set


A: An entity set that does not have sufficient attributes to form a primary key

10. Weak entity set is represented as


A: Double rectangle

11. What term is used to refer to a specific record in your music database; for instance;
information stored about a specific album?
A: Instance

12. An entity in A is associated with at most one entity in B. An entity in B, however, can be
associated with any number (zero or more) of entities in A. This is called as
A: Many-to-one
Here, many entities in one set can be related to at most one entity in the other set.

13. What are data constraints used for


A: Improve the quality of data entered for a specific property

14. Which key uniquely identifies the element in the relation?


A: Primary key
Primary key checks for not null and uniqueness constraint

15. CREATE TABLE employee (name VARCHAR, id INTEGER)


What type of statement is this
A: DDL
Data Definition Language (DDL) is the language that performs all the operations involved in defining the structure of relations.

MODULE 3
1. What are the differences between Logical files and physical files
2. Which are primary file organization type
3. Name all the Hashing Techniques
4. What is the difference between indexing and hashing
5. What is B-tree and B+ tree and what are their equations to find tree pointers
6. Name all the SQL DML Queries
7. Which keyword is used for basic retrieval of data in SQL
8. What is the technique used to prevent ambiguity in attribute names
9. What is the function of DISTINCT keyword
10. What are the SET operations used in SQL
11. Name the keyword used for substring matching
12. How to sort a table in SQL
13. Explain EXISTS and UNIQUE functions in SQL
14. Name and comment on the Aggregate functions used in SQL
15. Name different types of JOIN

MODULE 4
1)What are the disadvantages of not normalizing a database

Any relational database without normalization may lead to problems like large tables,
difficulty maintaining the database as it involves searching many records, poor disk space
utilization, and inconsistencies.

2) What are the different types of normal forms in a DBMS?

1NF
2NF
3NF
BCNF
4NF
5NF

3) What are the advantages of normalization?

Database normalization is the process of organizing data to minimize data redundancy and
dependency, resulting in a more efficient and well-structured database schema.

4)what is insert anomaly and delete anomaly and update anomaly

Insertion anomaly: occurs when a new row cannot be added to a table, or its insertion causes an inconsistency, because unrelated data is missing.

Update anomaly: if a piece of data is stored in many rows, a change must be applied to every row; if any row is missed, the database is left with inconsistent values.

Deletion anomaly: occurs when deleting a record unintentionally removes other attributes (facts) that should have been kept.

5)What is database normalization, and why is it important in database design?

Normalization is the process of organizing data in a database. It includes creating tables and
establishing relationships between those tables according to rules designed both to protect the
data and to make the database more flexible by eliminating redundancy and inconsistent
dependency.

6)What is the First Normal Form (1NF) in database normalization?


1NF requires that each column in a table holds only atomic (indivisible) values and that all
values in a column are of the same data type.

7)What is the Second Normal Form (2NF), and why is it an improvement over 1NF?

2NF builds upon 1NF by adding a requirement that all non-key attributes (columns) are fully
functionally dependent on the entire primary key. It eliminates partial dependencies.

8)What is the Third Normal Form (3NF), and how does it further refine the database schema?

The third normal form (3NF) is a means of organizing database tables by removing the
transitive dependencies from a relational system

9)Explain partial dependency and provide an example.

Partial dependency occurs in 2NF when a non-key attribute is functionally dependent on only
part of the primary key. For example, if a table’s primary key is (OrderID, ProductID), and a
non-key attribute depends only on OrderID, it exhibits partial dependency.

10)Explain transitive dependency and provide an example.

Transitive dependency occurs when a non-key attribute depends on another non-key attribute,
which, in turn, depends on the primary key. For example, if a table has a primary key
(StudentID) and a non-key attribute (ProfessorName), where ProfessorName depends on a
non-key attribute (CourseName), it exhibits transitive dependency

11)What do you mean by functional dependency?

In relational database theory, a functional dependency is a constraint between two sets of attributes in a relation of a database.

12)What is a lossless-join decomposition, and why is it important in normalization?

Lossless decomposition is achieved when the original table can be reconstructed through the
use of natural joins on the smaller tables. This technique helps in improving data
organization, minimizing redundancy, and ensuring data consistency in a relational database.

13) Explain the concept of closure in functional dependencies and how it is used in normalization.

The Closure Of Functional Dependency means the complete set of all possible attributes that
can be functionally derived from given functional dependency using the inference rules
known as Armstrong’s Rules.
If “F” is a functional dependency then closure of functional dependency can be denoted using
“{F}+”.

14)what is BCNF
BCNF stands for Boyce-Codd Normal Form and is an advanced form of 3NF. It is also
referred to as 3.5NF for the same reason. A table to be in its BCNF normal form should
satisfy the following conditions:
The table should be in its 3NF i.e. satisfy all the conditions of 3NF.
For every functional dependency of any attribute A on B
(A->B), A should be the super key of the table. It simply implies that A can’t be a non-prime
attribute if B is a prime attribute.

15)What is meant by redundancy and why do we want to avoid it when creating a database?

Data redundancy occurs when the same piece of data exists in multiple places, whereas data
inconsistency is when the same data exists in different formats in multiple tables.
Unfortunately, data redundancy can cause data inconsistency, which can provide a company
with unreliable and/or meaningless information

MODULE 5
1)List out any three salient features of a database system?
Ans: a) Low repetition and redundancy
b) Maintenance of large databases
c)Enhanced Security
d)Multi-User Environment Support

2)Discuss the four ACID properties

Ans: a) Atomicity
b) consistency
c) Isolation
d) Durability

3)What is 2PL protocol

Ans: It is a method of concurrency control in DBMS that ensures serializability by applying locks to transaction data, which blocks other transactions from accessing the same data simultaneously.

4)What is a schedule and its importance

Ans: A schedule is a series of operations from one transaction to another. It is used to preserve the order of the operations within each individual transaction.

5) What are the conditions for conflict equivalence?

Ans: a) They contain the same set of transactions.
b) Each pair of conflicting operations is ordered in the same way.

6)What is checkpointing?

Ans:The methodology used for removing all previous transaction logs and storing them in
permanent storage. It is used for the recovery if there is an unexpected shutdown in the
database
7)What is a key-value database?

Ans: A key-value database uses a simple key-value method to store data. These databases contain a simple string (the key) that is always unique and an arbitrarily large data field (the value).

8)Explain NoSQL database

Ans: They are non tabular databases. They provide flexible schemas and scale easily with
large amounts of data and high user loads

9)Under what conditions should you use an index?

Ans:
An index can be used when you want to enforce uniqueness in a database. You can also use it
to facilitate sorting and perform fast retrieval. A column that is frequently used can be a good
candidate for an index.

10)What is a database transaction? Explain the concept of atomicity.

Ans:database transaction is a logical unit of work that consists of one or more database
operations. Atomicity ensures that either all operations within a transaction are executed
successfully, or none of them are. It ensures the transaction's "all or nothing" property.

11)How does a shared lock differ from an exclusive lock during a transaction in a database?

Ans:shared lock allows multiple transactions to read data concurrently, while an exclusive
lock ensures only one transaction can perform write operations to maintain data consistency.

12)When does a checkpoint occur in DBMS?

Ans:A checkpoint occurs as a snapshot of the current state of the database management
system. It limits the amount of work needed during system restart after a subsequent crash.
Checkpoints are used for database recovery following a system crash. They are particularly
employed in log-based recovery solutions to restart the system without having to redo all
transactions from the beginning.

13)What is Live Lock?

Ans: Livelock is a situation in which an exclusive lock request is repeatedly denied because many overlapping shared locks keep interfering with each other. The processes keep changing their status, preventing them from completing the task.

14)What is a Deadlock?

Ans: A deadlock is a situation that occurs in an OS when a process enters a waiting state because another waiting process is holding the resource it demands. It is a common problem in multiprocessing, where several processes share a specific type of mutually exclusive resource known as a soft lock.
15)What is Starvation?

Ans:Starvation is a situation where all the low-priority processes get blocked. In any system,
requests for high and low-priority resources keep happening dynamically. Therefore, some
policy is required to decide who gets support and when.

COMPILER DESIGN

MODULE 1
1. State the role of lexical analyzer.
Lexical analysis is the first phase of the compiler, where the lexical analyser operates as an interface between the source code and the rest of the phases of the compiler. It reads the input characters of the source program, groups them into lexemes, and produces a token for each lexeme. The tokens are sent to the parser for syntax analysis.

2. Distinguish between front end and back end of a compiler.


The front-end compiler is responsible for the initial stages of the compilation process.
It handles tasks such as lexical analysis, syntax analysis, and semantic analysis.
Lexical analysis involves breaking the source code into tokens, syntax analysis checks
the syntax of the code for correctness, and semantic analysis ensures that the code
follows the rules of the programming language. The back-end compiler, on the other
hand, takes the intermediate code generated by the front-end and translates it into the
target machine code. This involves tasks such as optimization and code generation.

3. Specify the analysis and synthesis parts of compilation.


The analysis phase reads the source program, splits it into tokens, constructs an intermediate representation of the source program, and checks and reports the syntax and semantic errors of the source program. It also collects information about the source program and prepares the symbol table, which is used throughout the compilation process. The synthesis phase takes the output of the analysis phase (the intermediate representation and the symbol table) and produces the target machine-level code.

4. Define the terms token, lexemes and patterns.


A token is a sequence of characters that can be treated as a single logical entity. A lexeme is an actual character sequence forming a specific instance of a token, such as num. A pattern is a rule that describes the set of strings associated with a token.

5. Explain the different phases of a compiler.


● Lexical Analysis: The first phase of a compiler is lexical analysis, also known
as scanning. This phase reads the source code and breaks it into a stream of
tokens, which are the basic units of the programming language. The tokens are
then passed on to the next phase for further processing.
● Syntax Analysis: The second phase of a compiler is syntax analysis, also
known as parsing. This phase takes the stream of tokens generated by the
lexical analysis phase and checks whether they conform to the grammar of the
programming language. The output of this phase is usually an Abstract Syntax
Tree (AST).
● Semantic Analysis: The third phase of a compiler is semantic analysis. This
phase checks whether the code is semantically correct, i.e., whether it
conforms to the language’s type system and other semantic rules. In this stage,
the compiler checks the meaning of the source code to ensure that it makes
sense. The compiler performs type checking, which ensures that variables are
used correctly and that operations are performed on compatible data types.
The compiler also checks for other semantic errors, such as undeclared
variables and incorrect function calls.
● Intermediate Code Generation: The fourth phase of a compiler is intermediate
code generation. This phase generates an intermediate representation of the
source code that can be easily translated into machine code.
● Optimization: The fifth phase of a compiler is optimization. This phase applies
various optimization techniques to the intermediate code to improve the
performance of the generated machine code.
● Code Generation: The final phase of a compiler is code generation. This phase
takes the optimized intermediate code and generates the actual machine code
that can be executed by the target hardware.

6. List and explain any three compiler construction tools.


● Parser Generator – It produces syntax analyzers (parsers) from the input that is
based on a grammatical description of programming language or on a context-
free grammar. It is useful as the syntax analysis phase is highly complex and
consumes more manual and compilation time.
● Scanner Generator – It generates lexical analyzers from the input that consists
of regular expression description based on tokens of a language. It generates a
finite automaton to recognize the regular expression.
● Data-flow analysis engines – It is used in code optimization. Data flow
analysis is a key part of the code optimization that gathers the information,
that is the values that flow from one part of a program to another.

7. Demonstrate bootstrapping.
Bootstrapping is a process in which a simple language is used to translate a more complicated program, which in turn can handle an even more complicated program, and so on. Writing a compiler for a high-level language from scratch is a complicated and time-consuming process, so a simple language is used to generate the target code in stages.

8. Define cross compilers.


A compiler is a tool used to translate a high-level programming language into a low-level programming language. A simple compiler works on one system only; when we need a compiler that can compile code for a different platform, a cross compiler is used. A cross compiler runs on one platform but generates executable code for another platform.

9. Scanning of source code in compilers can be speeded up using input buffering.


Explain.
Input buffering is an important concept in compiler design that refers to the way in
which the compiler reads input from the source code. In many cases, the compiler
reads input one character at a time, which can be a slow and inefficient process. Input
buffering is a technique that allows the compiler to read input in larger chunks, which
can improve performance and reduce overhead. One of the main advantages of input
buffering is that it can reduce the number of system calls required to read input from
the source code. Since each system call carries some overhead, reducing the number
of calls can improve performance.

10. Explain how the regular expressions and finite state automata are used for the
specification and recognition of tokens?
A finite automaton is a state machine that takes a string of symbols as input and changes its state accordingly; it acts as a recogniser for regular expressions. Token patterns are specified using regular expressions, which the lexical analyser converts into a finite automaton. When an input string is fed into the automaton, it changes state for each symbol; if the string is processed successfully and the automaton reaches a final state, the string is accepted, i.e., it is a valid token of the language in hand.
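
As an illustration, a minimal Python sketch of a regular-expression-based scanner (the token names and patterns are assumptions for the example, not a real language specification):

import re

# each token name is paired with the regular expression describing its pattern
TOKEN_SPEC = [
    ("NUM",  r"\d+"),
    ("ID",   r"[A-Za-z_]\w*"),
    ("OP",   r"[+\-*/=]"),
    ("SKIP", r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(source):
    # finditer drives the underlying automaton over the input, yielding one lexeme per match
    for match in MASTER.finditer(source):
        if match.lastgroup != "SKIP":
            yield (match.lastgroup, match.group())

# e.g. list(tokenize("count = count + 1"))
# -> [('ID', 'count'), ('OP', '='), ('ID', 'count'), ('OP', '+'), ('NUM', '1')]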

Module 2

1. What is left recursive grammar?


A grammar is said to be left-recursive if it has a non-terminal A such that there is a derivation A -> A alpha, for some string alpha.

2. What are the steps in removing left recursion?

For each left-recursive non-terminal A with productions A -> A alpha | beta, replace them with A -> beta A' and A' -> alpha A' | epsilon, where A' is a new non-terminal.

3. What is Recursive Descent parsing?

Recursive descent parsing is a top-down parsing technique in which each non-terminal of the grammar is implemented as a (possibly recursive) procedure, and the parse tree is built starting from the start symbol.

4. What are context free grammars?

1. Terminals are the basic symbols from which strings are formed. The term "token name" is a synonym for "terminal", and frequently we will use the word "token" for terminal when it is clear that we are talking about just the token name.

2. Nonterminals are syntactic variables that denote sets of strings. The nonterminals define sets of strings that help define the language generated by the grammar. They also impose a hierarchical structure on the language that is useful for both syntax analysis and translation.

3. In a grammar, one nonterminal is distinguished as the start symbol, and the set of strings it denotes is the language generated by the grammar. Conventionally, the productions for the start symbol are listed first.

4. The productions of a grammar specify the manner in which the terminals and nonterminals can be combined to form strings. Each production consists of:

a. A nonterminal called the head or left side of the production; this production defines some of the strings denoted by the head.

b. The symbol →. Sometimes ::= has been used in place of the arrow.

c. A body or right side consisting of zero or more terminals and nonterminals.

5. Leftmost derivation

A leftmost derivation is obtained by applying a production to the leftmost variable in each step.

6. Rightmost derivation

A rightmost derivation is obtained by applying a production to the rightmost variable in each step.

7. What are the steps in removing left factoring?

For productions of the form A -> alpha beta1 | alpha beta2 that share a common prefix alpha, rewrite them as A -> alpha A' and A' -> beta1 | beta2, where A' is a new non-terminal; repeat until no two alternatives of a non-terminal share a common prefix.

8. What is top down parsing and bottom up parsing?

There are mainly two parsing approaches:

> Top Down Parsing

> Bottom Up Parsing

In top down parsing, the parse tree is constructed from the top (root) to the bottom (leaves).

In bottom up parsing, the parse tree is constructed from the bottom (leaves) to the top (root).

MODULE 3
Bottom-Up Parsing: Handle Pruning, Shift Reduce parsing, Operator precedence parsing (concept only), LR parsing - constructing SLR, LALR, and canonical LR parsing tables.

1.What are the types of bottom-up parsers?

Ans: (i) LR parsers - LR(0), SLR(1), LALR(1), CLR(1); (ii) Operator precedence parser

2. Which are the two data structures used during shift reduce parsing?

Ans: Input buffer, stack.

3. Which are the basic operations of shift reduce parser?

Ans:Shift,Reduce,Accept,Error.
4.What is operator precedence parser?

Ans: A grammar that is used to define mathematical operators is called an operator grammar. A bottom-up parser that interprets an operator grammar is called an operator precedence parser.

5.When is a grammar said to be an operator precedence grammar?

Ans: (i) No RHS of any production has ε (epsilon). (ii) No two non-terminals are adjacent.

6.How do you identify handles in operator precedence parsing?

Ans: Scan the string from the left until seeing •>, then scan the string backward from right to left until seeing <•. Everything between the two relations <• and •> forms the handle.

7.Why is handle pruning technique used?

Ans:To obtain canonical reduction sequence.

8.Differentiate SLR parser,LALR parser,CLR parser.

Ans:(i)SLR and LALR parser are easier and cheaper to implement when compared to CLR parser.
(ii)LALR and SLR are small and have same size, CLR is the largest.(iii)SLR requires less time and
complexity compared to both LALR and CLR.

9.What are the requirements for construction of LR(1)?

Ans:Augmented Grammar,Closure function,goto function.

10. Assume that the SLR parser for a grammar G has n1 states and the LALR parser for G has n2
states. The relationship between n1 and n2 is :

Ans:n1 is necessarily equal to n2.

11.Arrange LR parsers in terms of their power.

Ans:CLR>LALR>SLR>LR(0)

12.Which is the most powerful parser and why?

Ans: CLR parser, because CLR uses full lookahead information and has a much larger number of states compared to the rest of the parsers.

MODULE 4
1.What are the two ways to represent semantic rules associated with grammar symbols?

Ans: Syntax-Directed Definitions (SDD), Syntax-Directed Translation Schemes (SDT)

2. How can we classify attributes based on the way the attributes get their values?

Ans: i)Synthesized attributes - attributes get values from the attribute values of their child nodes.

ii)Inherited attributes - attributes inherit their values from their parent or sibling nodes.
3. What is an annotated parse tree?

Ans: The parse tree containing the values of attributes at each node for given input string is called
annotated or decorated parse tree.

4. What are the types of SDT?

Ans: i)S –attributed definition ii) L –attributed definition

5. Which form of SDT uses both synthesized and inherited attributes with restriction that inherited
attributes can inherit values from parent and left siblings only?

Ans: L-attributed SDT

6. If an SDT uses only synthesized attributes, it is called as?

Ans: S-attributed SDT

7.What is an SDT?

Ans: It is a CFG with semantic actions embedded with the production body.

8. Which parser is used for the bottom-up evaluation of S-attributed definition?

Ans: LR parser

9. What are the source language issues associated with run-time environments?

Ans: i)Procedure ii)Activation tree iii)Control stack iv)The scope of declaration v)Binding of
names

10. Whenever a procedure is executed, its activation record is stored on the stack, also known as?

Ans: Control stack

11. What are the different storage allocation strategies?

Ans: i)Static allocation - lays out storage for all data objects at compile time

ii)Stack allocation - manages the run-time storage as a stack.

iii)Heap allocation - allocates and deallocates storage as needed at run time from a data area

known as heap.

12. What are the types of intermediate code representations?

Ans: i)Syntax tree ii)Directed acyclic graph iii)Postfix notation iv)Three-address code

13.The condensed form of parse tree is called?

Ans: Syntax tree

14.What is the general form of three address code?

Ans: a = b op c (a, b, and c are operands; op is an operator)

15.What are the representations of three-address statements?

Ans: i)Quadruples ii)Triples iii)Indirect triples
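
As a small worked example (the variable and temporary names are illustrative), the statement a = b * c + d can be translated into three-address code and its quadruple representation as follows:

t1 = b * c
t2 = t1 + d
a  = t2

Quadruples (op, arg1, arg2, result):
(*, b, c, t1)
(+, t1, d, t2)
(=, t2, _, a)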


MODULE 5
1. What are the two types of optimizations?
Machine dependent and independent.
2. What are the two phases of optimization?
Local and global.
3. What is a basic block?
Basic block is a sequence of consecutive 3 address statements which may be
entered only at the beginning and when entered statements are executed in
sequence without halting or branching.
4. What is Function preserving transformations
a. Common subexpression elimination
b. Copy propagation
c. Dead code elimination
d. Constant folding
5.What is Common Sub Expression Elimination (CSE)
* An occurrence of an expression E is called a common sub-expression if E was previously
computed, and the values of variables in E have not changed since the previous
computation.
6.What is the idea behind the copy-propagation transformation?
A: The idea is to use the value of variable g for variable f whenever possible after the copy
statement f := g.

7: What is dead code?


A: Dead code refers to statements in a program that computes values that are never used.

8: When is a variable considered live in a program?


A: A variable is considered live at a point in a program if its value can be used subsequently;
otherwise, it is dead at that point.

9: Why might dead code appear in a program?


A: Dead code may appear as a result of previous transformations, even though the
programmer is unlikely to introduce it intentionally.
10What is constant folding?
A: Constant folding is the process of evaluating expressions at compile time if all operands
are constants, replacing the original evaluation in the program.

11: How does constant folding improve runtime performance and code size?
A: It improves runtime performance by avoiding evaluation at runtime and reduces code size
by replacing expressions with their constant values.

12: What is loop optimization?


A: Loop optimization involves improving the running time of a program by decreasing the
number of instructions in an inner loop, even if it means increasing the code outside the loop.
13: What is reduction in strength in the context of loop optimization?
A: Reduction in strength refers to replacing an expensive operation within a loop by a cheaper one, such as replacing a multiplication with an addition, to improve performance.
15: What is loop jamming?
A: Loop jamming is a loop optimization technique where multiple loops performing similar
operations are merged or combined into a single loop to improve efficiency.
16: What are some of the issues that arise during the code generation phase?
A: Some issues include input to the code generator, target program format, memory
management, instruction selection, register allocation, and evaluation order.

17: What is the input to the code generator?


A: The input to the code generator consists of the intermediate representation of the source
program produced by the front end, along with information in the symbol table.

18: What are the different forms that the target program output can take?
A: The target program output can be in the form of absolute machine language, relocatable
machine language, or assembly language.

19: How does the nature of the instruction set architecture of the target machine affect code
generation?
A: The instruction set architecture of the target machine impacts the difficulty of constructing
a good code generator. Factors such as the number of registers, addressing modes, and
instruction complexity influence code generation strategies.

20: What are the two subproblems related to register usage?


A: The two subproblems related to register usage are register allocation and register
assignment. Register allocation involves selecting the set of variables that will reside in
registers at each point in the program, while register assignment involves picking the specific
register for each variable.

ARTIFICIAL INTELLIGENCE
MODULE 1
1.What do you mean by Artificial Intelligence?

Artificial intelligence is a wide-ranging branch of computer science concerned


with building smart machines capable of performing tasks that typically require
human intelligence.Artificial intelligence allows machines to replicate the
capabilities of the human mind.

2.What are the different levels of AI?

● Narrow AI: Artificial intelligence is said to be narrow when the machine can perform a specific task better than a human. Current AI research is at this level.
● General AI: Artificial intelligence reaches the general state when it can perform any intellectual task with the same accuracy level as a human would.
● Strong AI: Artificial intelligence is strong when it can beat humans in many tasks.

3.What are the Advantages of Artificial Intelligence?

● High Accuracy with less errors


● High-Speed
● High reliability
● Useful for risky areas
● Digital Assistant
● Useful as a public utility

4.What are the disadvantages of AI?

● High Cost
● Can't think out of the box
● No feelings and emotions
● Increase dependency on machines
● No Original Creativity

5. Give some Applications of AI


● Gaming
● Natural Language Processing
● Expert Systems
● Vision systems
● Speech recognition
● Handwriting recognition
● Google Search Engine

6.What are the approaches of AI?

● Acting humanly: The Turing Test approach
● Thinking humanly: The cognitive modeling approach
● Thinking rationally: The “laws of thought” approach
● Acting rationally: The rational agent approach

7. PEAS Representation

PEAS System is used to categorize similar agents together. The PEAS system
delivers the performance measure with respect to the environment, actuators,
and sensors of the respective agent.
It is made up of four words:
P: Performance measure
E: Environment
A: Actuators
S: Sensors

8.What do you mean by agents?

An agent is anything that can be viewed as perceiving its environment through


sensors and acting upon that environment through actuators
9.What do you mean by Intelligent agents?

An intelligent agent is an autonomous entity which act upon an environment


using sensors and actuators for achieving goals. An intelligent agent may learn
from the environment to achieve their goals.

10. What do you mean by Rational agent ?

A rational agent is said to do the right thing. AI is about creating rational agents, drawing on game theory and decision theory, for various real-world scenarios. A rational agent is an agent which has clear preferences, models uncertainty, and acts in a way that maximizes its performance measure over all possible actions.

11.Structure of AI agent

An AI agent is realised by an agent program which implements the agent function. The structure of an intelligent agent is a combination of architecture and agent program. It can be viewed as:

Agent = Architecture + Agent program

● Architecture: the machinery that an AI agent executes on.
● Agent function: used to map a percept to an action.
● Agent program: an implementation of the agent function.

12.How can we represent the working of the components of the agent program?
● Atomic : In an atomic representation each state of the world is
indivisible—it has no internal structure.
● Factored :A factored representation splits up each state into a fixed set
of variables or attributes, each of which can have a value.
● Structured :In a structured representation, objects and their various and
varying relationships can be described explicitly.

13. Different types of task environment

● Fully Observable vs Partially Observable


● Deterministic vs Stochastic
● Single-agent vs Multi-agent
● Static vs Dynamic
● Discrete vs Continuous
● Episodic vs Sequential
● Known vs Unknown

14. What are the rules for an AI agent?

Rule 1: An AI agent must have the ability to perceive the environment.


Rule 2: The observation must be used to make decisions.
Rule 3: Decision should result in an action.
Rule 4: The action taken by an AI agent must be a rational action.

15. Types of AI agent


● Simple reflex agent
● Model based reflex agent
● Goal based agents
● Utility based agents
● Learning agent

16. Which are the 4 conceptual components of a learning agent?

● Learning Element
● Performance Element
● Critic
● Problem Generator

Performance element: The performance element is what we have previously considered to be the entire agent: it takes in percepts and decides on actions.
Learning element: The learning element uses feedback from the critic on how the agent is doing and determines how the performance element should be modified to do better in the future.
Critic: The critic tells the learning element how well the agent is doing with respect to a fixed performance standard. The critic is necessary because the percepts themselves provide no indication of the agent's success.
Problem generator: It is responsible for suggesting actions that will lead to new and informative experiences.

MODULE 2
1.What are the Properties of an Environment?

The properties of the environment are

*Observable: agent always knows the current state.


* Discrete: at any given state there are only finitely many actions to choose
from.
* Known: agent knows which states are reached by each action.
* Deterministic: each action has exactly one outcome.

2.What are the different ways of evaluating an algorithm's performance?


*Completeness: A search algorithm is said to be complete if it guarantees to
return a
solution if at least any solution exists for any random input.
*Optimality: If a solution found for an algorithm is guaranteed to be the best
solution (lowest path cost) among all other solutions, then such a solution for is
said to be an optimal solution.
*Time Complexity: Time complexity is a measure of time for an algorithm to
complete its task.
*Space Complexity: It is the maximum storage space required at any point
during the
search, as the complexity of the problem.

3.What are the different types of search algorithm?

Based on the search problems we can classify the search algorithms into
uninformed (Blind search) search and informed search (Heuristic search)
algorithms.

4.What is a Blind search/uninformed search?

The uninformed search does not contain any domain knowledge such as closeness or the location of the goal. It operates in a brute-force way, as it only includes information about how to traverse the tree and how to identify leaf and goal nodes. It examines each node of the tree until it achieves the goal node.

5.What are the different types of uninformed search?

*Breadth-first search
*Uniform cost search
*Depth-first search
*Iterative deepening depth-first search
*Bidirectional Search

6.What is a Breadth-first search?

Breadth-first search is the most common search strategy for traversing a tree or graph. This algorithm searches breadthwise in a tree or graph, so it is called breadth-first search. The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level before moving to the nodes of the next level.

7.BFS uses which data structure?

FIFO queue

8.What are the advantages and disadvantages of BFS?

Advantages:
* BFS will provide a solution if any solution exists.
* If there is more than one solution for a given problem, then BFS will provide the minimal solution, i.e., the one requiring the least number of steps.
Disadvantages:
* It requires lots of memory, since each level of the tree must be saved in memory in order to expand the next level.
* BFS needs lots of time if the solution is far away from the root node.
9.What is the time complexity and space complexity of BFS?

Time complexity:O(b^d)
Space complexity:O(b^d)

10.What is a Uniform cost search?

Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph. This algorithm comes into play when a different cost is available for each edge. The primary goal of uniform-cost search is to find a path to the goal node which has the lowest cumulative cost.

11.What is Depth first serach(DFS)?

Depth-first search is a recursive algorithm for traversing a tree or graph data structure. It is called depth-first search because it starts from the root node and follows each path to its greatest depth node before moving to the next path.

12.What is the data structure used in DFS?

Stack

13.What is a Depth limited search algorithm?

A depth-limited search algorithm is similar to depth-first search with a predetermined depth limit. Depth-limited search can solve the drawback of the infinite path in depth-first search. In this algorithm, the node at the depth limit is treated as if it has no successor nodes.

14.What are the two conditions of failure of depth limited search?

Depth-limited search can be terminated with two Conditions of failure:

* Standard failure value: It indicates that the problem does not have any solution.
* Cutoff failure value: It indicates that there is no solution for the problem within the given depth limit.

15.What is Iterative deepening algorithm?

The iterative deepening algorithm is a combination of the DFS and BFS algorithms.
This search algorithm finds the best depth limit by gradually increasing the limit until a goal is found. It performs depth-first search up to a certain "depth limit", and keeps increasing the depth limit after each iteration until the goal node is found.
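
A minimal recursive sketch of depth-limited search and the iterative-deepening wrapper around it, assuming the same adjacency-list graph format as the earlier examples; the 'cutoff' marker is an assumption used to distinguish "limit reached" from "no solution".

def depth_limited_search(graph, node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return 'cutoff'               # depth limit reached, result still unknown
    cutoff_occurred = False
    for neighbour in graph.get(node, []):
        result = depth_limited_search(graph, neighbour, goal, limit - 1)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return [node] + result
    return 'cutoff' if cutoff_occurred else None

def iterative_deepening_search(graph, start, goal, max_depth=50):
    for limit in range(max_depth + 1):        # gradually increase the depth limit
        result = depth_limited_search(graph, start, goal, limit)
        if result != 'cutoff':
            return result                     # either a path or None (standard failure)
    return None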

16.What is an informed search algorithm?

An informed search algorithm uses domain knowledge such as how far we are from the goal, the path cost, how to reach the goal node, etc. This knowledge helps agents explore less of the search space and find the goal node more efficiently. Informed search algorithms use the idea of a heuristic, so they are also called heuristic search.

17.What is a Greedy algorithm?


The greedy / best-first search algorithm always selects the path which appears best at that moment. It is a combination of depth-first search and breadth-first search algorithms, and it uses a heuristic function to guide the search.

18.What is the heuristic function of Greedy algorithm?

f(n) = h(n), where h(n) is the estimated cost from node n to the goal (the heuristic value).

19.What is the data structure used for implementing greedy algorithm?

Priority queue
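
A minimal greedy best-first search sketch using Python's heapq as the priority queue ordered by h(n) alone; the graph and heuristic dictionaries are hypothetical example inputs supplied by the caller.

import heapq

def greedy_best_first(graph, h, start, goal):
    # h: dict of heuristic estimates h(n) from each node to the goal (assumed given)
    frontier = [(h[start], start, [start])]      # priority queue ordered by h(n)
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)  # pick the node that looks closest to the goal
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                heapq.heappush(frontier, (h[neighbour], neighbour, path + [neighbour]))
    return None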

20.What is an A* algorithm?

The A* algorithm is a widely used search algorithm in artificial intelligence for finding the shortest path between nodes in a graph. It evaluates nodes by combining the cost to reach the node from the start node and the estimated cost to reach the goal node through that node, using a heuristic function.

21.What is the heuristic function of the A* algorithm?

f(n) = g(n) + h(n), where g(n) is the cost to reach node n from the start node and h(n) is the estimated cost from n to the goal.

MODULE 3
Qn 1) What is adversarial Search?

Adversarial search is a type of search algorithm used in game theory and artificial intelligence to find the best possible move for a player in a competitive, two-player game. It involves predicting the moves of both players and selecting the best move for the current player while considering the possible responses of the opponent. Classic examples include games like chess, checkers, and tic-tac-toe.

Qn 2) What is the difference between perfect and imperfect information?

Perfect information: A game with perfect information is one in which agents can look at the complete board. Agents have all the information about the game, and they can see each other's moves as well. Examples are Chess, Checkers, Go, etc.
Imperfect information: If in a game agents do not have all the information about the game and are not aware of what's going on, such games are called games of imperfect information, for example Battleship, blind tic-tac-toe, Bridge, etc.

Qn 3) What is a deterministic game? Give an example.

Deterministic games are those games which follow a strict pattern and set of
rules for the games, and there is no randomness
associated with them. Examples are chess, Checkers, Go, tic-tac-toe, etc.

Qn 4) what do you mean by a non deterministic game? Give an example.

Non-deterministic games are those games which have various unpredictable events and a factor of chance or luck. This factor of chance or luck is introduced by either dice or cards. These are random, and each action response is not fixed. Such games are also called stochastic games. Examples: Backgammon, Monopoly, Poker, etc.

Qn 5) What is the Min- Max Algorithm in AI?

The mini-max algorithm is a recursive or backtracking algorithm which is used in decision-making and game theory. It provides an optimal move for the player assuming that the opponent is also playing optimally.
o The mini-max algorithm uses recursion to search through the game tree.
o The min-max algorithm is mostly used for game playing in AI, such as Chess, Checkers, tic-tac-toe, Go, and various other two-player games. The algorithm computes the minimax decision for the current state.
o In this algorithm two players play the game; one is called MAX and the other is called MIN.
o Both players compete, so that the opponent gets the minimum benefit while they themselves get the maximum benefit.
o Both players of the game are opponents of each other: MAX will select the maximized value and MIN will select the minimized value.
o The minimax algorithm performs a depth-first search for the exploration of the complete game tree.
o The minimax algorithm proceeds all the way down to the terminal nodes of the tree, then backs the values up the tree as the recursion unwinds.
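
A minimal minimax sketch over an explicit game tree, assuming (purely for illustration) that the tree is given as a nested dict whose leaves are numeric utility values; it mirrors the MAX/MIN alternation described above.

def minimax(node, maximizing):
    # node: either a numeric utility (terminal) or a dict of {move: child subtree}
    if not isinstance(node, dict):
        return node                                    # terminal node: return its utility
    values = [minimax(child, not maximizing) for child in node.values()]
    return max(values) if maximizing else min(values)  # MAX picks the largest, MIN the smallest

game_tree = {'L': {'LL': 3, 'LR': 5}, 'R': {'RL': 2, 'RR': 9}}
print(minimax(game_tree, True))   # 3: MAX assumes MIN will reply optimally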

Qn 6) List the properties of Min Max algorithm.

o Complete – The min-max algorithm is complete. It will definitely find a solution (if one exists) in a finite search tree.
o Optimal – The min-max algorithm is optimal if both opponents are playing optimally.
o Time complexity – As it performs DFS on the game tree, the time complexity of the min-max algorithm is O(b^m), where b is the branching factor of the game tree and m is the maximum depth of the tree.
o Space complexity – The space complexity of the mini-max algorithm is similar to that of DFS, which is O(bm).

Qn 7) Give a limitation of Min Max algorithm.

The main drawback of the minimax algorithm is that it gets really slow for complex games such as Chess, Go, etc. These games have a huge branching factor, and the player has lots of choices to decide among.

Qn 8) What is alpha beta pruning?

o Alpha-beta pruning is a modified version of the minimax algorithm. It is an optimization technique for the minimax algorithm.
o As we have seen in the minimax search algorithm, the number of game states it has to examine is exponential in the depth of the tree. We cannot eliminate the exponent, but we can cut it roughly in half. There is a technique by which, without checking each node of the game tree, we can still compute the correct minimax decision; this technique is called pruning. It involves two threshold parameters, alpha and beta, for future expansion, so it is called alpha-beta pruning. It is also called the Alpha-Beta algorithm.
o Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only the tree leaves but entire sub-trees.

Qn 9) Give the 2 parameters of alpha beta pruning algorithm.


The two parameters can be defined as:
1. Alpha: The best (highest-value) choice we have found so far at any point along the path of the Maximizer. The initial value of alpha is -∞.
2. Beta: The best (lowest-value) choice we have found so far at any point along the path of the Minimizer. The initial value of beta is +∞.
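
A minimal alpha-beta sketch in the same hypothetical nested-dict game-tree format as the minimax example above, showing how alpha and beta start at -∞ and +∞ and how branches are cut off.

import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    if not isinstance(node, dict):
        return node                      # terminal node: return its utility
    if maximizing:
        value = -math.inf
        for child in node.values():
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)    # best choice found so far for MAX
            if alpha >= beta:
                break                    # beta cut-off: MIN will never allow this branch
        return value
    else:
        value = math.inf
        for child in node.values():
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)      # best choice found so far for MIN
            if alpha >= beta:
                break                    # alpha cut-off
        return value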

Qn 10) What is constraint satisfaction?

Constraint satisfaction is a technique where a problem is solved when its values satisfy certain constraints or rules of the problem. This type of technique leads to a deeper understanding of the problem structure as well as its complexity.
Constraint satisfaction depends on three components, namely:
• X: a set of variables.
• D: a set of domains in which the variables reside. There is a specific domain for each variable.
• C: a set of constraints which are followed by the set of variables.

Qn 11) Give 3 ways of assigning values to variables.

• Consistent or Legal Assignment: An assignment which does not violate any constraint or rule is called a consistent or legal assignment.
• Complete Assignment: An assignment where every variable is assigned a value and the solution to the CSP remains consistent. Such an assignment is known as a complete assignment.
• Partial Assignment: An assignment which assigns values to only some of the variables. Such assignments are called partial assignments.
Qn 12) What are the 2 domains in a CSP?

Discrete Domain: A domain whose values are distinct, countable states; it may be infinite, for example a variable that can take any integer value or any of infinitely many possible start times.
• Finite Domain: A domain with a finite number of possible values for a specific variable; this is the most common case in CSPs. When the values form a continuous range (such as real numbers), the domain is instead called a continuous domain.

Qn 13) What are the constraint types in CSP?

Unary Constraints: The simplest type of constraint; it restricts the value of a single variable.
Binary Constraints: A constraint which relates two variables. For example, the value of x2 must lie between the values of x1 and x3.
Global Constraints: A constraint which involves an arbitrary number of variables.

Qn 14) Define constraint propagation.

Constraint propagation is a special type of inference which helps in reducing the number of legal values for the variables. The idea behind constraint propagation is local consistency.

Qn 15) Give the different types of local consistencies.

• Node Consistency: A single variable is node consistent if all the values in the variable's domain satisfy the unary constraints on that variable.
• Arc Consistency: A variable is arc consistent if every value in its domain satisfies the binary constraints on the variable.
• Path Consistency: When the evaluation of a set of two variables with respect to a third variable can be extended over another variable while satisfying all the binary constraints. It is similar to arc consistency.
• k-consistency: This type of consistency is used to define stronger forms of propagation; here we examine the k-consistency of the variables.

Qn 16) What is backtracking search?

It is a depth-first search that chooses values for one variable at a time and
backtracks when a variable has no legal values left to assign. It repeatedly
chooses an unassigned variable, and then tries all values in the domain of that
variable in turn, trying to find a solution. If an inconsistency is detected, then
BACKTRACK returns failure, causing the previous call to try another value.
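
A minimal backtracking search sketch for a CSP, assuming the caller supplies the variables, their domains, and a constraint-checking function consistent(var, value, assignment); the map-colouring data at the bottom is a made-up usage example.

def backtracking_search(variables, domains, consistent, assignment=None):
    # variables: list of variable names; domains: dict var -> list of candidate values
    # consistent(var, value, assignment): True if giving var this value violates no constraint
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment                                  # complete, consistent assignment
    var = next(v for v in variables if v not in assignment)  # pick an unassigned variable
    for value in domains[var]:
        if consistent(var, value, assignment):
            assignment[var] = value
            result = backtracking_search(variables, domains, consistent, assignment)
            if result is not None:
                return result
            del assignment[var]                            # backtrack: undo and try the next value
    return None                                            # no legal value left -> failure

# Hypothetical usage: colour a 3-node map so that neighbouring regions differ.
neighbours = {'A': ['B', 'C'], 'B': ['A', 'C'], 'C': ['A', 'B']}
ok = lambda var, val, asg: all(asg.get(n) != val for n in neighbours[var])
print(backtracking_search(['A', 'B', 'C'], {v: ['red', 'green', 'blue'] for v in 'ABC'}, ok))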

Qn 17) Define MRV

MRV is the idea of choosing the variable with the fewest "legal" values. Also known as the "most constrained variable" or "fail-first" heuristic, it picks the variable that is most likely to cause a failure soon, thereby pruning the search tree.
If some variable X has no legal values left, the MRV heuristic will select X and failure will be detected immediately, avoiding pointless searches through other variables.

Qn 18) Define Degree heuristic

The degree heuristic attempts to reduce the branching factor on future choices by selecting the variable that is involved in the largest number of constraints on other unassigned variables.

Qn 19) what is a least Constraining value?


Once a variable has been selected, the algorithm must decide on the order in which to examine its values. The least-constraining-value heuristic prefers the value that rules out the fewest choices for the neighbouring variables in the constraint graph, i.e. it tries to leave the maximum flexibility for subsequent variable assignments.

Qn 20) What is forward Checking ?

Forward checking is one of the simplest forms of inference.
Whenever a variable X is assigned, the forward-checking process establishes arc consistency for it: for each unassigned variable Y that is connected to X by a constraint, it deletes from Y's domain any value that is inconsistent with the value chosen for X. There is no reason to do forward checking if we have already done arc consistency as a preprocessing step.
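
A minimal sketch of the forward-checking step, assuming domains is a dict of variable -> set of remaining values, neighbours maps each variable to the variables it shares a binary constraint with, and conflicts(x, vx, y, vy) is a hypothetical predicate that reports whether a pair of values violates the constraint.

def forward_check(x, value, domains, neighbours, conflicts, assignment):
    # After assigning value to x, prune inconsistent values from each unassigned neighbour y.
    for y in neighbours[x]:
        if y in assignment:
            continue
        domains[y] = {vy for vy in domains[y] if not conflicts(x, value, y, vy)}
        if not domains[y]:
            return False        # a neighbour's domain became empty -> this assignment fails
    return True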

MODULE 4
1.What is PEAS?
Performance measure
Environment
Actuators
Sensors
2.What are two generic function used in KB?
TELL – Add new sentence to the KB
ASK – Query knowledge base
3.What is Entailment?
Entailment means that a sentence follows logically from another. We write α |= β to mean that the sentence α entails the sentence β.
α |= β if and only if, in every model in which α is true, β is also true.
Equivalently, α |= β if and only if M(α) ⊆ M(β).
4.What is Inference?
Inference is a procedure that allows new sentences to be derived from a
knowledge base.
5.What are the types of propositional logic?
Atomic proposition
A single proposition symbol; each symbol is a proposition.
Example of an atomic proposition: 2+2=4 is a true proposition.
Compound proposition
Constructed from atomic propositions using parentheses and logical connectives.
6.What is Modus Ponens?
p
p → q
∴ q
7.What is Modus Tollens?
¬q
p → q
∴ ¬p
8.What is DNF?
DNF: Disjunctive Normal Form – an OR of ANDs (an OR of terms).
e.g. (p∧¬q) ∨ (¬p∧¬r)
9.What is CNF?
CNF: Conjunctive Normal Form – an AND of ORs (an AND of clauses).
Every sentence of propositional logic is logically equivalent to a conjunction of clauses.
e.g. (p∨¬q) ∧ (¬p∨¬r)
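
A short illustration, assuming the sympy library is available, of converting the same formula to DNF and CNF with sympy.logic.boolalg.

from sympy import symbols
from sympy.logic.boolalg import to_cnf, to_dnf

p, q, r = symbols('p q r')
expr = (p & ~q) | (~p & ~r)          # already in DNF: an OR of ANDs
print(to_dnf(expr))                  # (p & ~q) | (~p & ~r)
print(to_cnf(expr))                  # an equivalent AND of ORs (clauses)
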
10.What are the demerits of propositional logic?
Propositional logic can only represent the facts, which are either true or false.
PL is not sufficient to represent the complex sentences or natural language
statements.
The propositional logic has very limited expressive power.
11.What is FOL?
First-order logic is also known as Predicate logic or First-order predicate logic.
It is an extension to propositional logic
Examples:
> Some birds can fly
> All men are mortal
> At least one student has registered for a course
12.What is Unification?
The process of finding a substitution for predicate parameters is called
unification.
Unification is a process of making two different logical atomic expressions
identical by finding a substitution.
Unification depends on the substitution process.
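
A compact unification sketch for atomic expressions, under the assumption (made only for this example) that terms are Python tuples like ('Knows', 'John', '?x') and that strings starting with '?' are variables; the occurs check is omitted for brevity.

def unify(x, y, subst=None):
    if subst is None:
        subst = {}
    if x == y:
        return subst                             # identical terms: nothing to do
    if isinstance(x, str) and x.startswith('?'):
        return unify_var(x, y, subst)
    if isinstance(y, str) and y.startswith('?'):
        return unify_var(y, x, subst)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):                 # unify argument by argument
            subst = unify(xi, yi, subst)
            if subst is False:
                return False
        return subst
    return False                                 # mismatch: unification fails

def unify_var(var, value, subst):
    if var in subst:
        return unify(subst[var], value, subst)
    subst = dict(subst)
    subst[var] = value                           # record the substitution {var/value}
    return subst

print(unify(('Knows', 'John', '?x'), ('Knows', '?y', 'Jane')))  # {'?y': 'John', '?x': 'Jane'}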

13.What is Skolemization?
Skolemization is used to remove existential quantifiers by introducing Skolem functions or Skolem constants.
14.What is Quantifiers and its types?
Quantifiers express properties of entire collections of objects. First-order logic
contains two standard quantifiers, called
Universal and Existential.
15.What are the modes in which an inference engine can proceed?
Forward chaining
Backward chaining

MODULE 5
1.What is Machine Learning

Machine Learning is a subset of artificial intelligence that is mainly concerned with the development of algorithms which allow a computer to learn from data and past experiences on its own.

2.What are the classifications of Machine Learning?

● Supervised Learning
● Unsupervised learning
● Reinforcement learning

3.Differentiate supervised learning and unsupervised learning ?

In supervised learning the machine is given examples along with the labels or targets for each example. The labels in the data help the algorithm to correlate the features with the desired output.
Unsupervised learning is a learning method in which a machine learns without any supervision. The training is provided to the machine with a set of data that has not been labeled, classified, or categorized, and the algorithm needs to act on that data without any supervision.

4.What is Reinforcement Learning ?

Reinforcement learning is a feedback-based learning method, in which a learning agent gets a reward for each right action and a penalty for each wrong action. The agent learns automatically from this feedback and improves its performance. In reinforcement learning, the agent interacts with the environment and explores it. The goal of the agent is to get the most reward points, and in doing so it improves its performance.
5.What are the two main categories of unsupervised learning algorithm?

● Clustering
● Association
6.What is the difference between classification and regression?

Regression algorithms are used to predict the continuous values such as price, salary,
age, etc. and Classification algorithms are used to predict/Classify the discrete values
such as Male or Female, True or False, Spam or Not Spam, etc.

7.Define a decision tree?

It is a tree-structured classifier, where internal nodes represent the features of a dataset, branches represent the decision rules, and each leaf node represents the outcome.

8.What are the various decision tree terminologies?

• Root Node: Root node is from where the decision tree starts. It represents the entire
dataset, which further gets divided into two or more homogeneous sets.
• Leaf Node: Leaf nodes are the final output node, and the tree cannot be segregated
further after getting a leaf node.
• Splitting: Splitting is the process of dividing the decision node/root node into sub-nodes according to the given conditions.
• Branch/Subtree: A subtree formed by splitting the tree.
• Pruning: Pruning is the process of removing the unwanted branches from the tree.
• Parent/Child node: The root node of the tree is called the parent node, and other
nodes are called the child nodes.

9.What are the different attribute selection measures used in decision tree?
● Information gain:It is the measurement of changes in entropy after the
segmentation of a dataset based on an attribute. It calculates how much
information a feature provides us about a class.

● Gini index: It is a measure of impurity or purity used while creating a decision tree in the CART (Classification and Regression Tree) algorithm. An attribute with a low Gini index should be preferred over one with a high Gini index.

● Gain ratio
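
A small sketch, assuming Python and a list of class labels, of how entropy (the quantity behind information gain) and the Gini index can be computed for a split candidate.

from collections import Counter
from math import log2

def entropy(labels):
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def gini(labels):
    counts = Counter(labels)
    total = len(labels)
    return 1 - sum((c / total) ** 2 for c in counts.values())

def information_gain(parent, children):
    # children: the subsets of labels produced by splitting on an attribute
    total = len(parent)
    weighted = sum(len(ch) / total * entropy(ch) for ch in children)
    return entropy(parent) - weighted

labels = ['yes', 'yes', 'no', 'no', 'yes']
print(entropy(labels), gini(labels))
print(information_gain(labels, [['yes', 'yes', 'yes'], ['no', 'no']]))  # pure split -> gain equals parent entropy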

10.How does the Decision Tree algorithm Work?

In a decision tree, for predicting the class of the given dataset, the algorithm starts
from the root node of the tree. This algorithm compares the values of the root attribute
with the record (real dataset) attribute and, based on the comparison, follows the
branch and jumps to the next node.
For the next node, the algorithm again compares the attribute value with the other sub-
nodes and moves further. It continues the process until it reaches the leaf node of the
tree.
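
A minimal illustration, assuming scikit-learn is installed, of fitting and querying a decision tree classifier; the tiny two-feature dataset is made up purely for the example.

from sklearn.tree import DecisionTreeClassifier

# Hypothetical toy dataset: two numeric features per sample, binary class labels.
X = [[25, 40000], [35, 60000], [45, 80000], [20, 20000], [50, 90000], [30, 30000]]
y = [0, 1, 1, 0, 1, 0]

clf = DecisionTreeClassifier(criterion='gini', max_depth=3, random_state=0)
clf.fit(X, y)                       # the tree repeatedly splits on the attribute chosen by the criterion
print(clf.predict([[40, 70000]]))   # prediction follows the branches from the root down to a leaf
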

11.What are the advantages of decision tree ?

● It is simple to understand, as it follows the same process which a human follows while making a decision in real life.
● It can be very useful for solving decision-related problems. It helps to think about all the possible outcomes for a problem.
● There is less requirement for data cleaning compared to other algorithms.

12.What are the disadvantages of decision tree?

● The decision tree contains lots of layers, which makes it complex.
● It may have an overfitting issue, which can be resolved using the Random Forest algorithm.
● For more class labels, the computational complexity of the decision tree may increase.

13.What is pruning ?

Pruning is a process of deleting the unnecessary nodes from a tree in order to get the
optimal decision tree. It is a technique that decreases the size of the learning tree
without reducing accuracy.

14.What do you mean by overfitting in machine learning?

Overfitting occurs when the model fits the training data more closely than required and tries to capture each and every data point fed to it. Hence it starts capturing noise and inaccurate data from the dataset, which degrades the performance of the model.
15.How to detect Overfitting?

In the train-test split of the dataset, we can divide our dataset into random test and
training datasets. We train the model with a training dataset which is about 80% of the
total dataset. After training the model, we test it with the test dataset, which is 20 % of
the total dataset. Now, if the model performs well with the training dataset but not
with the test dataset, then it is likely to have an overfitting issue.
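
A short sketch of the 80/20 train-test split described above, assuming scikit-learn; the feature matrix and labels are made-up data, and comparing the two scores is what flags overfitting.

from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# X, y: any feature matrix and label vector (hypothetical toy data here)
X = [[i, i * 2] for i in range(100)]
y = [0 if i < 50 else 1 for i in range(100)]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = DecisionTreeClassifier().fit(X_train, y_train)
train_score = model.score(X_train, y_train)
test_score = model.score(X_test, y_test)
# A much higher training score than test score is the usual sign of overfitting.
print(train_score, test_score)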

16.What are the ways to prevent overfitting in machine learning?

● Early Stopping
● Train with more data
● Feature Selection
● Cross-Validation
● Data Augmentation
● Regularization

17.What is Hypothesis in Machine Learning (ML)?


The hypothesis is one of the commonly used concepts of statistics in Machine
Learning. It is specifically used in Supervised Machine learning, where an ML model
learns a function that best maps the input to corresponding outputs with the help of an
available dataset.

18.Differentiate simple linear regression and multiple linear regression?

Simple Linear Regression:
If a single independent variable is used to predict the value of a numerical dependent variable, then such a Linear Regression algorithm is called Simple Linear Regression.
Multiple Linear Regression:
If more than one independent variable is used to predict the value of a numerical dependent variable, then such a Linear Regression algorithm is called Multiple Linear Regression.
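
A minimal sketch, assuming scikit-learn, showing the same LinearRegression estimator used for simple (one feature) and multiple (several features) linear regression; the numbers are made up.

from sklearn.linear_model import LinearRegression

# Simple linear regression: one independent variable (e.g. years of experience -> salary in thousands).
X_simple = [[1], [2], [3], [4], [5]]
y = [30, 35, 40, 45, 50]
print(LinearRegression().fit(X_simple, y).predict([[6]]))      # about 55

# Multiple linear regression: more than one independent variable per sample.
X_multi = [[1, 3], [2, 1], [3, 4], [4, 2], [5, 5]]
print(LinearRegression().fit(X_multi, y).predict([[6, 3]]))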

If the stack is empty and a POP operation is performed, it is not possible to delete an item. This situation is called Stack Underflow.
