
Unit-I: Introduction to Algorithms, Analysis &
Designing Techniques
Subject Code:- AL-402
Subject:- Analysis & Design of Algorithms
By:- Adarsh Raushan
Asst. Professor
CSE Dept., LNCT, Bhopal
Algorithm:- Definition
► An algorithm is a step-by-step procedure or set
of rules designed to solve a specific problem or
perform a particular task. It is a finite sequence
of well-defined instructions that can be executed
mechanically or electronically by a computing
device. Algorithms can range from simple to
complex and are used extensively in computer
science, mathematics, engineering, and various
other fields to automate processes, make
decisions, analyze data, and more. The efficiency
and effectiveness of an algorithm often depend
on its design and implementation, as well as the
problem it aims to solve.
Algorithm:- Definition
► An algorithm is the best way to represent the solution of a particular problem in a very simple and efficient way. If we have an algorithm for a specific problem, then we can implement it in any programming language, meaning that the algorithm is independent of any programming language.
Need of Algorithm
► To understand the basic idea of the problem.
► To find an approach to solve the problem.
► To improve the efficiency of existing techniques.
► To understand the basic principles of designing the
algorithms.
► To compare the performance of the algorithm with respect to
other techniques.
► It is the best method of description without describing the
implementation detail.
► The Algorithm gives a clear description of requirements and
goal of the problem to the designer.
Need of Algorithm
► A good design can produce a good solution.
► To understand the flow of the problem.
► To measure the behavior (or performance) of the methods
in all cases (best cases, worst cases, average cases)
► With the help of an algorithm, we can also identify the resources (memory, input-output cycles) required by the algorithm.
► With the help of algorithms, we convert an art into a science.
► To understand the principle of designing.
► We can measure and analyze the complexity (time and
space) of the problems concerning input size without
implementing and running it; it will reduce the cost of
design.
Example :- Write an algorithm to add two numbers

► Step 1 − START
► Step 2 − declare three integers a, b & c
► Step 3 − define values of a & b
► Step 4 − add values of a & b
► Step 5 − store output of step 4 to c
► Step 6 − print c
► Step 7 − STOP

Alternative Algorithm
► Step 1 − START ADD
► Step 2 − get values of a & b
► Step 3 − c ← a + b
► Step 4 − display c
► Step 5 − STOP
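
Either version maps directly to a few lines of C; a minimal sketch (the sample values are illustrative, not part of the original steps):

#include <stdio.h>

int main(void)
{
    int a = 5, b = 7;    /* Steps 2-3: declare a, b, c and define a & b */
    int c = a + b;       /* Steps 4-5: c <- a + b */
    printf("%d\n", c);   /* Step 6: print c */
    return 0;
}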
Example :- Find the largest number among three
numbers
► Step 1: Start
► Step 2: Declare variables a,b and c.
► Step 3: Read variables a,b and c.
► Step 4: If a > b
        If a > c
            Display a is the largest number.
        Else
            Display c is the largest number.
    Else
        If b > c
            Display b is the largest number.
        Else
            Display c is the largest number.
► Step 5: Stop
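
A minimal C sketch of these steps (illustrative; reads the three values from standard input):

#include <stdio.h>

int main(void)
{
    int a, b, c;
    scanf("%d %d %d", &a, &b, &c);   /* Steps 2-3: declare and read a, b, c */
    if (a > b) {                     /* Step 4: nested comparisons */
        if (a > c)
            printf("a is the largest number.\n");
        else
            printf("c is the largest number.\n");
    } else {
        if (b > c)
            printf("b is the largest number.\n");
        else
            printf("c is the largest number.\n");
    }
    return 0;
}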
We design an algorithm to get a solution to a given problem; a problem can be solved in more than one way, as the two addition algorithms above show.
Problem Development Steps
► Problem Definition
► Development of a model
► Specification of Algorithm
► Designing of Algorithm
► Checking the correctness of Algorithm
► Analysis of Algorithm
► Implementation of Algorithm
► Program Testing
► Documentation
ANALYSIS OF ALGORITHM
► Definition:- Algorithm analysis is the process of evaluating the performance of an algorithm, usually in terms of time and space complexity. There are several ways to analyze the performance of an algorithm, including asymptotic analysis, which analyzes the behavior of an algorithm as the size of the input grows indefinitely.
OR
► Algorithm analysis is an important part of computational complexity theory, which provides theoretical estimates of the resources an algorithm requires to solve a specific computational problem. Analysis of an algorithm is the determination of the amount of time and space resources required to execute it.
ASYMPTOTIC NOTATIONS
► Asymptotic Notations are mathematical tools that allow you to
analyze an algorithm’s running time by identifying its behavior
as its input size grows.
This is also referred to as an algorithm’s growth rate:-
► You can't compare two algorithms head to head directly.
► You compare space and time complexity using asymptotic
analysis.
► It compares two algorithms based on changes in their
performance as the input size is increased or decreased.
TYPES OF ASYMPTOTIC NOTATIONS
1. Big-O Notation (O-notation)

2. Omega Notation (Ω-notation)

3. Theta Notation (Θ-notation)

4. Little-o Notation (o-notation)

5. Little-ω Notation (ω-notation)


Big-O Notation (O-notation){WORST CASE}
Big-O Notation (O-notation):- Big-O notation represents the upper
bound of the running time of an algorithm. Thus, it gives the
worst-case complexity of an algorithm.
1. It is the most widely used notation for asymptotic analysis.
2. It specifies the upper bound of a function.
3. It gives the maximum time required by an algorithm, i.e., the worst-case time complexity.
4. It returns the highest possible output (Big O) for a given input.
5. It is defined as the condition that allows an algorithm to complete statement execution in the longest amount of time possible.
6. O(g(n)) = { f(n): there exist positive constants c and n0 such that
0 ≤ f(n) ≤ cg(n) for all n ≥ n0 }
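A small worked example of the definition: take f(n) = 3n + 2. Choosing c = 4 and n0 = 2, we have 0 ≤ 3n + 2 ≤ 4n for all n ≥ 2, so f(n) = O(n).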
Big-O Notation (O-notation)
Omega Notation (Ω-notation){Best CASE}
Omega notation represents the lower bound of the running time of
an algorithm. Thus, it provides the best case complexity of an
algorithm.
1. The execution time serves as a lower bound on the algorithm's complexity.
2. It is defined as the condition that allows an algorithm to complete
statement execution in the shortest amount of time.
3. Ω(g(n)) = { f(n): there exist positive constants c and n0 such that
0 ≤ cg(n) ≤ f(n) for all n ≥ n0 }
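Continuing the worked example: for f(n) = 3n + 2, choosing c = 3 and n0 = 1 gives 0 ≤ 3n ≤ 3n + 2 for all n ≥ 1, so f(n) = Ω(n).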
Omega Notation (Ω-notation)
Theta Notation (Θ-notation){Average CASE}
Theta notation encloses the function from above and below. Since it
represents the upper and the lower bound of the running time of an
algorithm, it is used for analyzing the average-case complexity of an
algorithm.
1. The execution time serves as both a lower bound and an upper bound on the algorithm's complexity.
2. Θ(g(n)) = { f(n): there exist positive constants c1, c2 and n0 such
that 0 ≤ c1g(n) ≤ f(n) ≤ c2g(n) for all n ≥ n0 }
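Combining the two bounds above: for f(n) = 3n + 2, with c1 = 3, c2 = 4 and n0 = 2 we have 3n ≤ 3n + 2 ≤ 4n for all n ≥ 2, so f(n) = Θ(n).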
Theta Notation (Θ-notation){Average CASE}
Little Oh (o-notation){Strict Upper Bound}
Little-o gives a strict (non-tight) upper bound: o(g(n)) = { f(n): for every positive constant c there exists an n0 such that 0 ≤ f(n) < cg(n) for all n ≥ n0 }.
Little Omega (ω-notation){Strict Lower Bound}
Little-omega gives a strict (non-tight) lower bound: ω(g(n)) = { f(n): for every positive constant c there exists an n0 such that 0 ≤ cg(n) < f(n) for all n ≥ n0 }.
COMPLEXITY
Complexity in algorithms refers to the amount of resources (such as
time or memory) required to solve a problem or perform a task. The
most common measure of complexity is time complexity, which
refers to the amount of time an algorithm takes to produce a result
as a function of the size of the input.
COMPLEXITY ANALYSIS
Complexity analysis is defined as a technique to characterize the
time taken by an algorithm with respect to input size (independent
from the machine, language and compiler). It is used for evaluating
the variations of execution time on different algorithms.
What is the need for Complexity Analysis?
1. Complexity Analysis determines the amount of time and space
resources required to execute it.
2. It is used for comparing different algorithms on different input
sizes.
3. Complexity helps to determine the difficulty of a problem.
4. The difficulty of a problem is often measured by how much time and space (memory) it takes to solve.
COMPLEXITIES
Time Complexity:- The time complexity of an algorithm is defined as the amount of time taken by an algorithm to run as a function of the length of the input. Note that the time to run is a function of the length of the input and not the actual execution time of the machine on which the algorithm is running.

Space Complexity:- The amount of memory required by the algorithm to solve a given problem is called the space complexity of the algorithm. Problem-solving using a computer requires memory to hold temporary data or the final result while the program is in execution.

Auxiliary Space:- The temporary space needed for the use of an algorithm is referred to as auxiliary space, like temporary arrays, pointers, etc. It is preferable to use auxiliary space when comparing things like sorting algorithms.
CALCULATION OF TIME COMPLEXITY
To estimate the time complexity, we need to consider the cost of each
fundamental instruction and the number of times the instruction is
executed.
If we have statements with basic operations like comparisons, return statements, assignments, and reading a variable, we can assume each takes constant time, O(1).
Statement 1: int a = 5;
Statement 2: if (a == 5) return true;
Statement 3: int x = (4 > 5) ? 1 : 0;
Statement 4: bool flag = true;
total time = time(statement1) + time(statement2) + ... time
(statementN)
Assuming that n is the size of the input, let’s use T(n) to represent
the overall time and t to represent the amount of time that a
statement or collection of statements takes to execute.

Overall, T(n)= O(1), which means constant complexity.
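
When a statement sits inside a loop, its cost is multiplied by the number of iterations. A small C sketch (function name is illustrative):

int sumArray(const int arr[], int n)
{
    int sum = 0;                     /* O(1) */
    for (int i = 0; i < n; i++)      /* loop runs n times */
        sum += arr[i];               /* O(1) per iteration */
    return sum;                      /* O(1) */
}

Here T(n) = O(1) + n * O(1) = O(n), i.e., linear complexity.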


CALCULATION OF SPACE COMPLEXITY
The space complexity of an algorithm is the total space taken by the algorithm with respect to the input size. Space complexity includes both auxiliary space and the space used by the input.
Space complexity is a parallel concept to time complexity. If we need to create an array of size n, this will require O(n) space. If we create a two-dimensional array of size n*n, this will require O(n²) space.
int pairSum(int x, int y)       /* O(1) time, O(1) space */
{
    return x + y;
}

int addSequence(int n)          /* makes n calls to pairSum */
{
    int sum = 0;
    for (int i = 0; i < n; i++)
    {
        sum += pairSum(i, i + 1);
    }
    return sum;
}
There will be roughly O(n) calls to pairSum. However, those calls do
not exist simultaneously on the call stack, so you only need O(1)
space.
SOME OTHER COMPLEXITIES THAT EXISTS IN
PROGRAM
1. Constant Complexity:-
If the function or method of the program takes negligible (constant) execution time, then that is considered constant complexity, O(1).

2. Logarithmic Complexity:
It imposes a complexity of O(log N). It executes on the order of log N steps to perform operations on N elements; the logarithm is usually taken base 2.

3. Linear Complexity:-
It imposes a complexity of O(N). It takes a number of steps equal to the total number of elements to perform an operation on N elements.
SOME OTHER COMPLEXITIES THAT EXISTS IN
PROGRAM
4. Quadratic Complexity: -
It imposes a complexity of O(n²). For an input of size N, it performs on the order of N² operations on the N elements to solve a given problem.
5. Factorial Complexity: -
It imposes a complexity of O(n!). For an input of size N, it executes on the order of N! steps on the N elements to solve a given problem.
6. Exponential Complexity: -
It imposes a complexity such as O(2^N), O(N!), O(N^k), .... For N elements, the count of operations executed grows exponentially with the input data size.
BASIC TECHNIQUES FOR DESIGNING EFFICIENT ALGORITHM

Algorithm designing techniques:
1. Divide & Conquer
2. Greedy Method
3. Dynamic Programming
4. Backtracking
5. Branch and Bound
BASIC TECHNIQUES FOR DESIGNING EFFICIENT
ALGORITHM
1. Divide and Conquer Approach: It is a top-down approach. The
algorithms which follow the divide & conquer techniques involve three
steps:
-Divide the original problem into a set of sub-problems.
-Solve every sub-problem individually, recursively.
-Combine the solution of the sub-problems (top level) into a solution of
the whole original problem.

2. Greedy Technique:- A greedy algorithm is an approach for solving a problem by selecting the best option available at the moment. It doesn't worry whether the current best result will bring the overall optimal result.
-The algorithm never reverses an earlier decision, even if the choice is wrong. It works in a top-down approach.
-This algorithm may not produce the best result for all problems, because it always goes for the local best choice to produce the global best result.
BASIC TECHNIQUES FOR DESIGNING EFFICIENT
ALGORITHM
3.Backtracking:- Backtracking algorithms are like problem-solving
strategies that help explore different options to find the best solution.
They work by trying out different paths and if one doesn’t work, they
backtrack and try another until they find the right one. It’s like solving a
puzzle by testing different pieces until they fit together perfectly.
4. Dynamic Programming:- Dynamic Programming is a method used
in mathematics and computer science to solve complex problems by
breaking them down into simpler sub-problems. By solving each
sub-problem only once and storing the results, it avoids redundant
computations, leading to more efficient solutions for a wide range of
problems.
5. The Branch and Bound Algorithm:- The Branch and Bound
Algorithm is a method used in combinatorial optimization problems to
systematically search for the best solution. It works by dividing the
problem into smaller sub-problems, or branches, and then eliminating
certain branches based on bounds on the optimal solution. This process
continues until the best solution is found or all branches have been
explored. Branch and Bound is commonly used in problems like the
traveling salesman and job scheduling.
TYPES OF ALGORITHMS

ITERATIVE ALGORITHM:- In iterative algorithms, a solution is obtained by repeating a set of instructions in a loop until a specific condition is met. These algorithms typically use constructs like loops to perform repetitive computations.
Advantages
1. Efficiency:- Iterative algorithms often have better performance in
terms of both time & space complexity compared to their recursive
counterparts especially for problems that do not naturally lend
themselves to recursion.
2. Control:- Iterative algorithms often offer more control over the flow of execution, making them suitable where explicit control is required.
3. Tailoring:- Iterative algorithms can sometimes be easier to optimize and tailor to specific requirements, as they avoid the overhead of function calls and stack management associated with recursion.
Examples:-
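
A small C sketch of an iterative algorithm (an illustrative factorial, not from the original slide):

/* iterative factorial: a simple loop, no call-stack growth */
unsigned long factorial_iterative(unsigned int n)
{
    unsigned long result = 1;
    for (unsigned int i = 2; i <= n; i++)
        result *= i;
    return result;               /* factorial_iterative(5) == 120 */
}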
TYPES OF ALGORITHMS

RECURSIVE ALGORITHM:- In recursive algorithms, a problem is solved by breaking it down into smaller sub-problems of the same type, often leading to a more elegant and concise solution. Each sub-problem is solved recursively until a base case is reached, upon which the recursion unwinds and produces the final result.
Advantages
1. Clarity:- Recursive algorithms can offer a more intuitive and natural
way to express certain problems, particularly those that exhibit a
recursive structure.
2. Reliability:- Recursive solutions can sometimes be more readable
and easier to understand, as they closely reflect the mathematical or
logical structures of problems.
3. Divisibility:- Recursive algorithms are well suited for problems that
can be naturally divided into smaller similar sub-problems such as
tree traversal and sorting algorithms .
Examples:-
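
The same illustrative factorial written recursively, for comparison with the iterative sketch above:

/* recursive factorial: each call shrinks the problem toward the base case */
unsigned long factorial_recursive(unsigned int n)
{
    if (n <= 1)                  /* base case */
        return 1;
    return n * factorial_recursive(n - 1);   /* recursive case */
}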
Recurrence Relation

Definition:- A recurrence relation is an equation or inequality that defines a sequence recursively. In mathematics and computer science, recurrence relations are often used to describe sequences where the value of an element at position n is defined in terms of one or more previous elements.
The general form of a recurrence relation is:
a_n = f(a_{n-1}, a_{n-2}, ..., a_{n-k})
where a_n is the nth term of the sequence and f is some function that depends on the previous k terms of the sequence.
Example:- The Fibonacci sequence is defined by the recurrence relation
F(n) = F(n-1) + F(n-2)
with base cases F(0) = 0 and F(1) = 1.
This recurrence relation defines each term of the Fibonacci sequence as the sum of the two previous terms.
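
The recurrence translates directly into C; a naive sketch (exponential time, shown only to mirror the relation):

int fib(int n)
{
    if (n <= 1)                  /* base cases: F(0) = 0, F(1) = 1 */
        return n;
    return fib(n - 1) + fib(n - 2);   /* F(n) = F(n-1) + F(n-2) */
}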
Recurrence Relation

Recurrence relations are commonly used in algorithm analysis to describe the time complexity of recursive algorithms. Solving a recurrence relation involves finding a closed-form expression for the sequence, which often helps in understanding the behavior of algorithms and making performance predictions.
There are several methods for solving recurrence relations, each suited to different types of recurrences. Here are some common methods:-
1. Substitution Method
2. Master Theorem Method
3. Recursive (Recurrence) Tree Method
4. Iteration Method
SUBSTITUTION METHOD

Definition:- The substitution method is a technique used to solve recurrence relations by making an educated guess about the form of the solution and then proving its correctness using mathematical induction. Here's a general outline of the substitution method:
1. Guess the Form: Based on the structure of the recurrence relation,
guess a form for the solution. This guess is typically based on
observing patterns in the recurrence relation or drawing analogies
with known functions.
2. Prove by Induction: Once you have a guess for the form of the
solution, use mathematical induction to prove that the guess is
correct. This involves two steps:
• Base Case: Show that the guess is correct for the initial values of the
recurrence relation (usually for the smallest values of n).
• Inductive Step: Assume that the guess is correct for all values up to
n−1, and then use this assumption to show that it is also correct for n.
SUBSTITUTION METHOD

3. Solve for Constants: If the guess is correct, solve for any constants
in the solution using the initial values of the recurrence relation.

4. Verify the Solution: Finally, verify that the solution satisfies the
original recurrence relation for all values of n.
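
A standard worked example (not from the original slides): solve T(n) = 2T(n/2) + n.
Guess the form: T(n) = O(n log n), i.e., T(n) <= c*n*log n for some constant c > 0 (logs base 2).
Inductive step: assume T(n/2) <= c*(n/2)*log(n/2). Then
T(n) = 2T(n/2) + n <= c*n*log(n/2) + n = c*n*log n - c*n + n <= c*n*log n for any c >= 1,
so the guess holds and T(n) = O(n log n).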
MASTER THEOREM METHOD

The Master Theorem applies to recurrences of the form T(n) = aT(n/b) + f(n), where a >= 1 and b > 1. Writing d = log_b(a):
1. If f(n) = O(n^(d-ε)) for some ε > 0, then T(n) = Θ(n^d).
2. If f(n) = Θ(n^d), then T(n) = Θ(n^d log n).
3. If f(n) = Ω(n^(d+ε)) for some ε > 0, and a·f(n/b) <= c·f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).

This theorem allows for quick analysis of the time complexity of many
recursive algorithms without solving the recurrence relation explicitly.
However, it's important to note that the Master Theorem can only be
applied to recurrence relations that fit its specific form. If a recurrence
doesn't match this form, alternative methods, such as substitution or
recursion tree methods, may be required for analysis.
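
For example (a standard application): merge sort satisfies T(n) = 2T(n/2) + Θ(n), so a = 2, b = 2, d = log_2(2) = 1 and f(n) = Θ(n^1); case 2 applies and T(n) = Θ(n log n). Binary search satisfies T(n) = T(n/2) + Θ(1), so a = 1, b = 2, d = 0; case 2 applies again and T(n) = Θ(log n).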
RECURRENCE TREE METHOD

Definition:- The recurrence tree method is a visual approach used to solve recurrence relations, particularly those arising from divide and conquer algorithms. It involves building a tree to represent the recursive calls made by the algorithm and then analyzing the structure of the tree to determine the overall time complexity.
Here is a general outline of the recurrence tree method:-
1. Build the Recurrence Tree:- Start by representing the initial problem as the root of the tree. Then, for each recursive call made by the algorithm, create a sub-tree rooted at the corresponding node. Repeat this process for each level of recursion until you reach the base case(s), where the problem size is small enough to solve directly.
2. Analyze the tree:- Once the tree is constructed, analyze its structure to determine the number of nodes at each level and the work done at each level. This involves identifying patterns and relationships between the sizes of the sub-problems and the work done to solve them.
RECURRENCE TREE METHOD

3. Compute the time complexity:- Finally, use the information obtained from the tree analysis to compute the overall time complexity of the algorithm. This may involve summing up the work done at each level of the tree and then simplifying the expression to obtain a closed-form solution.

The recurrence tree method is particularly useful for understanding how recursive algorithms break down a problem into sub-problems and for visualizing the overall time complexity. It can be especially helpful when the recurrence relation doesn't fit neatly into one of the cases of the Master Theorem, or when you want a more intuitive understanding of how the algorithm behaves as the problem size grows.
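
A standard worked example: T(n) = 2T(n/2) + cn. The root does cn work; level 1 has 2 nodes each doing c(n/2) work, again cn in total; in general, level i has 2^i nodes each doing c(n/2^i) work, so every level costs cn. There are about log_2(n) + 1 levels down to the base case, so the total work is cn(log_2(n) + 1) = Θ(n log n).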
ITERATION METHOD

Definition:- The iteration method is a technique used to solve recurrence relations by iterating the relation to find a pattern or formula for the terms. You repeatedly substitute previous terms into the relation until you see a pattern emerge, which can then be used to derive a general solution.
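
A standard worked example: T(n) = T(n-1) + n with T(1) = 1. Iterating the relation:
T(n) = T(n-1) + n = T(n-2) + (n-1) + n = ... = T(1) + 2 + 3 + ... + n = n(n+1)/2,
so T(n) = O(n²).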
Binary Search Algorithm

Definition:- Binary search is a widely used algorithm for searching for a specific element in a sorted array or list. It works by repeatedly dividing the search interval in half. Here's a simple explanation of how it works:
Initialize: Begin with the entire array.
Midpoint: Calculate the midpoint of the array.
Comparison: Compare the target value with the value at the midpoint:
• If the target value matches the midpoint value, the search is
successful.
• If the target value is less than the midpoint value, continue the
search on the left half of the array.
• If the target value is greater than the midpoint value, continue the
search on the right half of the array.
Repeat: Repeat the midpoint and comparison steps until the target value is found or the search interval is empty.
Binary search is efficient because it eliminates half of the remaining
elements in the search space with each iteration, leading to a time
complexity of O(log n), where n is the number of elements in the array.
Binary Search Algorithm

Advantages of Binary Search Algorithm
1. Since it follows the technique of eliminating half of the array elements, it is more efficient than linear search for large data.
2. Better time complexity and thus less running time.
3. An improvement over linear search, as it breaks the array down in half rather than sequentially traversing through the array elements.

Limitations of Binary Search Algorithm
1. The binary search algorithm can only be implemented over a sorted array.
2. Small unsorted arrays would take considerable time in sorting and then searching for the desired element, so binary search is not preferred in such cases.
3. It has poor locality of reference compared to the linear search algorithm when it comes to in-memory searching over short intervals.
Binary Search Algorithm
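
A minimal iterative C sketch of the procedure described above (illustrative; the function name and signature are assumptions, not the original slide's code):

/* returns the index of target in the sorted array arr[0..n-1], or -1 */
int binary_search(const int arr[], int n, int target)
{
    int low = 0, high = n - 1;               /* initialize: whole array */
    while (low <= high) {                    /* repeat until interval empty */
        int mid = low + (high - low) / 2;    /* midpoint (overflow-safe) */
        if (arr[mid] == target)
            return mid;                      /* search successful */
        else if (target < arr[mid])
            high = mid - 1;                  /* continue on the left half */
        else
            low = mid + 1;                   /* continue on the right half */
    }
    return -1;                               /* target not present */
}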
Heap Sort Algorithm

Definition:- A heap is a specialized tree-based data structure that satisfies the heap property. In the context of heap sort, we typically use a binary heap, which is a complete binary tree where every parent node has a value less than or equal to its children (for a min heap).

Heap sort is a sorting algorithm that uses a binary heap data structure to sort elements. It has an average- and worst-case time complexity of O(n log n), making it efficient for sorting large datasets.

Max Heap:- In a max heap, the value of each parent node is greater than or equal to the values of its children. The maximum element in the heap is located at the root.
Min Heap:- In a min heap, the value of each parent node is less than or equal to the values of its children. The minimum element is located at the root.
Heap Sort Algorithm


1. Heap Construction:- The first step of heap sort involves constructing a binary heap from the input array. This heap is built in such a way that it satisfies the heap property.
2. Heapify:- After constructing the heap, we repeatedly remove the root
element ( the maximum element in a max heap or the minimum
element in a min heap) and swap it with the last element of the heap.
This operation ensures that the maximum (or minimum) element
“sinks down” to its correct position. We then reduce the size of the
heap by one and heapify the root to maintain the heap property. We
repeat this process until the heap is empty.
3. Sorting:- As we extract elements from the heap , we place them at
the end of the array. After all elements have been extracted , the
array is sorted in ascending order (for max heap) or descending order
(for min heap).
Heap Sort Algorithm
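
A compact C sketch of the three steps above using a max heap (function names are illustrative):

/* sift arr[i] down so the subtree rooted at i satisfies the max-heap property */
void heapify(int arr[], int n, int i)
{
    int largest = i;
    int left = 2 * i + 1, right = 2 * i + 2;
    if (left < n && arr[left] > arr[largest])
        largest = left;
    if (right < n && arr[right] > arr[largest])
        largest = right;
    if (largest != i) {
        int tmp = arr[i]; arr[i] = arr[largest]; arr[largest] = tmp;
        heapify(arr, n, largest);            /* keep sifting down */
    }
}

void heap_sort(int arr[], int n)
{
    /* 1. heap construction: build a max heap bottom-up */
    for (int i = n / 2 - 1; i >= 0; i--)
        heapify(arr, n, i);
    /* 2-3. swap the max (root) with the last element, shrink, re-heapify */
    for (int i = n - 1; i > 0; i--) {
        int tmp = arr[0]; arr[0] = arr[i]; arr[i] = tmp;
        heapify(arr, i, 0);
    }
}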
Quick Sort Algorithm

Definition:- Quick Sort is a comparison-based sorting algorithm that divides an array into two sub-arrays, recursively sorts each sub-array, and then combines them to produce a sorted array. It follows the divide & conquer strategy.
1. An array is divided into sub-arrays by selecting a pivot element
(element selected from the array).
2. While dividing the array, the pivot element should be positioned in
such a way that elements less than pivot are kept on the left side and
elements greater than pivot are on the right side of the pivot.
3. The left and right sub-arrays are also divided using the same
approach. This process continues until each sub-array contains a
single element.
4. At this point, elements are already sorted. Finally, elements are
combined to form a sorted array.
Quick Sort Algorithm
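
A minimal C sketch of quick sort using the last element as the pivot (Lomuto partition scheme; names are illustrative):

/* place the pivot in its final position; smaller elements end up left of it */
int partition(int arr[], int low, int high)
{
    int pivot = arr[high];
    int i = low - 1;
    for (int j = low; j < high; j++) {
        if (arr[j] < pivot) {
            i++;
            int tmp = arr[i]; arr[i] = arr[j]; arr[j] = tmp;
        }
    }
    int tmp = arr[i + 1]; arr[i + 1] = arr[high]; arr[high] = tmp;
    return i + 1;                            /* pivot's final index */
}

void quick_sort(int arr[], int low, int high)
{
    if (low < high) {
        int p = partition(arr, low, high);
        quick_sort(arr, low, p - 1);         /* sort left sub-array */
        quick_sort(arr, p + 1, high);        /* sort right sub-array */
    }
}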
Merge Sort Algorithm

Definition:- Merge sort is a popular sorting algorithm that follows the divide-and-conquer strategy. It breaks the array down into smaller sub-arrays, sorts them individually, and then merges them back together in sorted order. This process continues recursively until the entire array is sorted. It is known for its stable performance and guarantees a worst-case time complexity of O(n log n).
Algorithm:-
MERGE_SORT(arr, beg, end)
if beg < end
Set mid = (beg + end)/2
MERGE_SORT(arr, beg, mid)
MERGE_SORT(arr, mid+1, end)
MERGE(arr, beg, mid, end)
End of if
END MERGE_SORT
Merge Sort Algorithm
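
A C sketch of the algorithm above, including the MERGE step (uses a temporary buffer; names are illustrative):

#include <stdlib.h>

/* merge the sorted halves arr[beg..mid] and arr[mid+1..end] */
void merge(int arr[], int beg, int mid, int end)
{
    int n = end - beg + 1;
    int *tmp = malloc(n * sizeof(int));
    int i = beg, j = mid + 1, k = 0;
    while (i <= mid && j <= end)             /* take the smaller head */
        tmp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
    while (i <= mid) tmp[k++] = arr[i++];    /* leftover left half */
    while (j <= end) tmp[k++] = arr[j++];    /* leftover right half */
    for (k = 0; k < n; k++)
        arr[beg + k] = tmp[k];
    free(tmp);
}

void merge_sort(int arr[], int beg, int end)
{
    if (beg < end) {
        int mid = (beg + end) / 2;
        merge_sort(arr, beg, mid);
        merge_sort(arr, mid + 1, end);
        merge(arr, beg, mid, end);
    }
}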
STRASSEN’s Matrix Multiplication

7 MULTIPLICATIONS OF STRASSEN'S (matrix multiplication is not commutative, so the order of the factors matters)
M1 = P = (A11 + A22)(B11 + B22)
M2 = Q = (A21 + A22)B11
M3 = R = A11(B12 - B22)
M4 = S = A22(B21 - B11)
M5 = T = (A11 + A12)B22
M6 = U = (A21 - A11)(B11 + B12)
M7 = V = (A12 - A22)(B21 + B22)

Using these 7 multiplications (one for each Mk), the values of the C matrix are found as:
C11 = P + S - T + V
C12 = R + T
C21 = Q + S
C22 = P + R - Q + U
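
A standard observation (not on the original slide): naive block multiplication needs 8 half-size multiplications, giving T(n) = 8T(n/2) + Θ(n²) = Θ(n³), while Strassen's 7 give T(n) = 7T(n/2) + Θ(n²) = Θ(n^(log_2 7)) ≈ Θ(n^2.81) by the Master Theorem.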
TIME SPACE TRADEOFF

Definition:- It is a fundamental concept in algorithm design and analysis. It refers to the relationship between the time complexity (execution time) and the space complexity (memory usage) of an algorithm. In many cases, you can improve the efficiency of an algorithm in terms of time at the cost of increased space usage, or vice-versa.
OR
A tradeoff is a situation where one thing increases and another thing
decreases. It is a way to solve a problem in:
• Either in less time and by using more space, or
• In very little space by spending a long amount of time.

Types of Space-Time Trade-off

1. Compressed or Uncompressed data
2. Re-rendering or Stored images
3. Smaller code or Loop unrolling
4. Lookup tables or Recalculation
TIME SPACE TRADEOFF

1. Compressed or Uncompressed data:- A space-time trade-off can be applied to the problem of data storage. If data is stored uncompressed, it takes more space but less time. If the data is stored compressed, it takes less space but more time to run the decompression algorithm. There are many instances where it is possible to work directly with compressed data, as in the case of compressed bitmap indices, where it is faster to work with compression than without.

2. Re-rendering or Stored images:- In this case, storing only the source and re-rendering it as an image each time takes less space but more time, while storing the rendered image in a cache takes more space in memory but is faster than re-rendering.
TIME SPACE TRADEOFF

3. Smaller code or Loop Unrolling:- Smaller code occupies less space in memory, but it requires more computation time for jumping back to the beginning of the loop at the end of each iteration. Loop unrolling can optimize execution speed at the cost of increased binary size: it occupies more space in memory but requires less computation time.
4. Lookup tables or Recalculation:- An implementation can include an entire lookup table, which reduces computing time but increases the amount of memory needed; or it can recalculate, i.e., compute table entries as needed, increasing computing time but reducing memory requirements (see the sketch after this list).
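
A tiny C sketch of trade-off 4, lookup table versus recalculation (illustrative names; the table trades memory for speed):

/* precomputed lookup table: more memory, less computing time */
static unsigned long fact_table[13];

void build_table(void)
{
    fact_table[0] = 1;
    for (int i = 1; i <= 12; i++)    /* 12! is the largest fitting 32 bits */
        fact_table[i] = fact_table[i - 1] * (unsigned long)i;
}

unsigned long factorial_lookup(int n)   /* O(1) per query */
{
    return fact_table[n];
}

unsigned long factorial_recalc(int n)   /* O(n) per query, no table memory */
{
    unsigned long r = 1;
    for (int i = 2; i <= n; i++)
        r *= (unsigned long)i;
    return r;
}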
CODE Tuning Techniques

Definition:- Code tuning, also known as code optimization, is the process of modifying code to improve its efficiency, performance, or other characteristics. The goal of code tuning is to make the code run faster, use fewer resources (such as memory or CPU), or be more maintainable, without changing its functionality.
Code tuning can involve various techniques, including:-
1. Algorithmic Optimization: Replacing an algorithm with a more
efficient one. For example, using a hash table instead of a linear
search for faster lookups.

2. Data Structure Optimization: Using a data structure that better suits the operations performed on the data. For example, using a linked list for frequent insertions and deletions.

3. Compiler Optimization: Leveraging compiler optimizations to generate more efficient machine code. This can include inlining functions, loop unrolling, and optimizing memory access patterns.
CODE Tuning Techniques

4. Parallelization: Splitting tasks into smaller parts that can be executed concurrently to take advantage of multi-core processors.
5. Memory Optimization: Reducing memory usage by optimizing data structures, reusing memory, or minimizing memory leaks.
6. I/O Optimization: Reducing I/O operations or optimizing I/O access patterns to improve performance.
7. Code Refactoring: Restructuring code to make it more readable, maintainable, and efficient. This can involve simplifying complex code, removing duplication, or improving naming conventions.
8. Profile-Guided Optimization (PGO): Using profiling tools to identify performance bottlenecks and then optimizing the code based on the profiling data.
