Kumaraguru: Class Notes

The document provides an overview of data structures and algorithms. It discusses primitive and non-primitive data types. Primitive types include basic types like integers and floats, while non-primitive types are derived types like arrays, stacks and queues. The document also covers analyzing algorithm efficiency, noting that time complexity indicates how fast an algorithm runs while space complexity refers to memory usage. Input size is a key factor in analyzing efficiency, as algorithms generally take longer to complete on larger inputs.


KUMARAGURU

COLLEGE OF TECHNOLOGY

CLASS NOTES
Material No: I-1
Name of Teacher / Designation : N.Jayakanthan
Subject Code / Subject : P15CAT103-Data structures
Course / Branch : MCA
Topics : Introduction

Duration : ______1_____Hours.

Resources required to handle the class : Visuals, audio, question paper copies, printouts, etc.

_______________________________________________________________

STEP 1: INTRODUCTION
(1) LEARNING OBJECTIVES
The students will be introduced to the basics of data structures.
After the session, the student should be able to:
 Explain the fundamentals of an algorithm

STEP 2: ACQUISITION
An algorithm is a sequence of unambiguous instructions for solving
a problem in a finite amount of time.

The instances of a problem are drawn from a problem domain.

The algorithm must solve the problem for every instance, whatever that instance may be (provided it comes from the particular domain), and it must do so within the time bound.
Another important property of an algorithm is that it must terminate within a finite number of
steps.

Introduction
A data structure is a way of organizing data so that it can be used efficiently.
This section explains the basic terms related to data structures.

Data Type
A data type is a way to classify various kinds of data, such as integer, string, etc. It determines the values that can be stored in data of the corresponding type and the operations that can be performed on it. Data types are of two
kinds −

 Built-in Data Type


 Derived Data Type

Built-in Data Type

Data types for which a language has built-in support are known as built-in data types. For example, most languages provide the following built-in data types.

 Integers
 Boolean (true, false)
 Floating (Decimal numbers)
 Character and Strings

Derived Data Type

Data types that are implementation independent, because they can be implemented in one way or another, are known as derived data types. These data types are normally built by combining primary or built-in data types with associated operations on them. For example −

 List
 Array
 Stack
 Queue

Basic Operations
The data in a data structure are processed by certain operations. The particular data structure chosen largely depends on the frequency of the operations that need to be performed on it.

 Traversing
 Searching
 Insertion
 Deletion
 Sorting
 Merging

Abstract data types:

A data structure is a container which uses contiguous or node-based structures (or both) to store objects (in member variables or instance variables) and is associated with functions (member functions or methods) which allow manipulation of the stored objects. A data structure may be implemented directly in any programming language; however, we will see that there are numerous different data structures which can store the same objects. Each data structure has advantages and disadvantages; for example, both arrays and singly linked lists may be used to store data in an order defined by the user. To demonstrate the differences:
 Assuming we fill an array from the first position, an array allows the user to easily
add or remove an object at the end of the array (assuming that all the entries have
not yet been used),
 A singly linked list allows the user to easily add or remove an object at the start of
the list and a singly linked list with a pointer to the last node (the tail) allows the
user to easily add an object at the end of the list.
There are significant differences between these two structures as well:
 Arrays allow arbitrary access to the nth entry of the array, but
 A linked list requires the program to step through n − 1 entries before accessing
the nth entry.
Other differences include:
 An array does not require new memory until it is full (in which case, usually all
the entries must be copied over); but

 A singly linked list requires new memory with each new node.
There are many other differences where an operation which can be done easily in linked
lists requires significant effort with an array or vice versa.
Modifications may be made to each of these structures to reduce the complications
required: an array size can be doubled, a singly linked list can be woven into an array to
reduce the required number of memory allocations; however, there are some problems
and restrictions which cannot be optimized away. An optimization in one area (increasing
the speed or reduction in memory required by either one function or the data structure as
a whole) may result in a detrimental effect elsewhere (a decrease in speed or increase in
memory by another function or the data structure itself). Thus, rather than speaking about
specific data structures, we need to step back and define models for specific data
structures of interest to computer and software engineers.
An abstract data type or ADT is a mathematical model of a data structure. It describes a container which holds a finite number of objects, where the objects may be associated through a given binary relationship. The operations which may be performed on the container may be basic (e.g., insert, remove, etc.) or may be based on the relationship (e.g., given an object, possibly already in the container, find the next largest object). We will find that we cannot optimize all operations simultaneously, and therefore we will have to give requirements for which operations must be optimal in both time and memory. Thus, to describe an abstract data type we must specify both the operations it supports and which of those operations must be efficient.

STEP 3: PRACTICE/TESTING
1. Define Data structure
A data structure is a specialized format for organizing and storing data. General data
structure types include the array, the file, the record, the table, the tree, and so on. Any
data structure is designed to organize data to suit a specific purpose so that it can be
accessed and worked with in appropriate ways.

2. What is an abstract data type?

An abstract data type (ADT) is a mathematical model for data types, where a data type is defined by its behavior (semantics) from the point of view of a user of the data, specifically in terms of its possible values, the possible operations on data of this type, and the behavior of these operations.
KUMARAGURU
COLLEGE OF TECHNOLOGY

CLASS NOTES
Material No: 1-2
Name of Teacher / Designation : N.Jayakanthan
Subject Code / Subject : P15CAT103-Data structures
Course / Branch : MCA
Topics : Primitive and non-primitive data structures,
analysis of algorithms and notations.

Duration : ______1_____Hours.

Resources required to handle the class : Black board & OHP projector

_______________________________________________________________

STEP 1: INTRODUCTION
(2) LEARNING OBJECTIVES
The students will understand primitive and non-primitive data structures, the
analysis of algorithms, and notations.

STEP 2: ACQUISITION
Primitive Data Type

A primitive data type is one that fits the base architecture of the underlying computer, such as int, float, and pointer; all of their variations, such as char, short, long, unsigned, and double, are also primitive data types.

Primitive data hold only single values; they have no special capabilities.

Examples of primitive data types are byte, short, int, long, float, double, char, etc.

Integers, reals, logical data, character data, pointers, and references are primitive data structures. Data structures that are normally operated upon directly by machine-level instructions are known as primitive data structures.

Non-Primitive Data Type

A non-primitive data type is anything else, such as an array, structure, or class.

Data types that are derived from primary data types are known as non-primitive data types.
Non-primitive data types are used to store groups of values.

Examples of non-primitive data types:

Array, structure, union, linked list, stack, queue, etc.

Analyzing an Algorithm

We usually want our algorithms to possess several qualities. After correctness, by far the most important is efficiency. In fact, there are two kinds of algorithm efficiency: time efficiency, indicating how fast the algorithm runs, and space efficiency, indicating how much extra memory it uses.

Fundamentals of analysis of algorithms efficiency

The Analysis Framework


Time efficiency, also called time complexity, indicates how fast an algorithm in question
runs. Space efficiency, also called space complexity, refers to the amount of memory
units required by the algorithm in addition to the space needed for its input and output.

Measuring an Input’s Size


Let’s start with the obvious observation that almost all algorithms run longer on larger
inputs. For example, it takes longer to sort larger arrays, multiply larger matrices, and so
on. Therefore, it is logical to investigate an algorithm’s efficiency as a function of some
parameter n indicating the algorithm’s input size.

There are situations, of course, where the choice of a parameter indicating an input size
does matter. One such example is computing the product of two n × n matrices.

Units for Measuring Running Time

Let c_op be the execution time of an algorithm's basic operation on a particular computer, and let C(n) be the number of times this operation is executed for inputs of size n. Then we can estimate the running time T(n) of a program implementing this algorithm on that computer by the formula

T(n) ≈ c_op · C(n).

The count C(n) does not contain any information about operations that are not basic, and, in fact, the count itself is often computed only approximately.

Orders of Growth
A difference in running times on small inputs is not what really distinguishes efficient algorithms from inefficient ones.
For example, when computing the greatest common divisor of two small numbers, it is not immediately clear how much more efficient Euclid's algorithm is compared to the other two algorithms.

Also note that although specific values of such a count depend, of course, on the logarithm's base, the formula

log_a n = log_a b · log_b n

makes it possible to switch from one base to another, changing the count only by a constant multiple.

Worst-Case, Best-Case, and Average-Case Efficiencies


The worst-case efficiency of an algorithm is its efficiency for the worst-case input of size
n, which is an input (or inputs) of size n for which the algorithm runs the longest among
all possible inputs of that size. The way to determine the worst-case efficiency of an
algorithm is, in principle, quite straightforward: analyze the algorithm to see what kind of
inputs yield the largest value of the basic operation’s count C(n) among all possible
inputs of size n and then compute this worst-case value Cworst(n).

The best-case efficiency of an algorithm is its efficiency for the best-case input of size n, which is an input (or inputs) of size n for which the algorithm runs the fastest among all possible inputs of that size. Accordingly, we can analyze the best-case efficiency as follows. First, we determine the kind of inputs for which the count C(n) will be the smallest among all possible inputs of size n. (Note that the best case does not mean the smallest input; it means the input of size n for which the algorithm runs the fastest.)

Neither the worst-case nor the best-case analysis yields the necessary information about an algorithm's behavior on a "typical" or "random" input. This is the information that the average-case efficiency seeks to provide. To analyze the algorithm's average-case efficiency, we must make some assumptions about possible inputs of size n.
Asymptotic Notation
Suppose we are considering two algorithms, A and B, for solving a given problem. Furthermore, let us say that we have done a careful analysis of the running times of each of the algorithms and determined them to be T_A(n) and T_B(n), respectively, where n is a measure of the problem size. Then it should be a fairly simple matter to compare the two functions T_A(n) and T_B(n) to determine which algorithm is the best!

But is it really that simple? What exactly does it mean for one function, say T_A(n), to be better than another function, T_B(n)? One possibility arises if we know the problem size a priori. For example, suppose the problem size is n_0 and T_A(n_0) < T_B(n_0). Then clearly algorithm A is better than algorithm B for problem size n_0.

In the general case, we have no a priori knowledge of the problem size. However, if it can be shown, say, that T_A(n) ≤ T_B(n) for all n ≥ 0, then algorithm A is better than algorithm B regardless of the problem size.

Unfortunately, we usually don't know the problem size beforehand, nor is it true that one of the functions is less than or equal to the other over the entire range of problem sizes. In this case, we consider the asymptotic behavior of the two functions for very large problem sizes.

An Asymptotic Lower Bound-Omega

The big oh notation introduced in the preceding section is an asymptotic upper bound. In this section, we introduce a similar notation for characterizing the asymptotic behavior of a function, but in this case it is a lower bound.

Definition (Omega). Consider a function f(n) which is non-negative for all integers n ≥ 0. We say that "f(n) is omega g(n)," which we write f(n) = Ω(g(n)), if there exists an integer n_0 and a constant c > 0 such that for all integers n ≥ n_0, f(n) ≥ c·g(n).

The definition of omega is almost identical to that of big oh. The only difference is in the comparison: for big oh it is f(n) ≤ c·g(n); for omega, it is f(n) ≥ c·g(n). All of the same conventions and caveats apply to omega as they do to big oh.

More Notation-Theta and Little Oh

This section presents two less commonly used forms of asymptotic notation. They are:
A notation, Θ(·), to describe a function which is both O(g(n)) and Ω(g(n)) for the same g(n).
A notation, o(·), to describe a function which is O(g(n)) but not Ω(g(n)) for the same g(n).

Definition (Theta). Consider a function f(n) which is non-negative for all integers n ≥ 0. We say that "f(n) is theta g(n)," which we write f(n) = Θ(g(n)), if and only if f(n) is O(g(n)) and f(n) is Ω(g(n)).

Recall that we showed earlier that a polynomial in n of degree m is O(n^m), and also that such a polynomial is Ω(n^m). Therefore, according to the definition, we may write that it is Θ(n^m).

Definition (Little Oh). Consider a function f(n) which is non-negative for all integers n ≥ 0. We say that "f(n) is little oh g(n)," which we write f(n) = o(g(n)), if and only if f(n) is O(g(n)) but f(n) is not Ω(g(n)).

Little oh notation represents a kind of loose asymptotic bound in the sense that if we are given that f(n) = o(g(n)), then we know that g(n) is an asymptotic upper bound, since f(n) = O(g(n)), but g(n) is not an asymptotic lower bound, since f(n) = o(g(n)) implies that f(n) is not Ω(g(n)).

For example, consider the function f(n) = n + 1. Clearly, f(n) = O(n²). Clearly too, f(n) is not Ω(n²), since no matter what c we choose, for large enough n, c·n² > n + 1. Thus, we may write n + 1 = o(n²).

Asymptotic Analysis of Algorithms

A detailed model of the computer involves a number of different timing parameters. Keeping track of all the details is messy and tiresome, so we simplify the model by measuring time in clock cycles and by assuming that each of the parameters is equal to one cycle. Nevertheless, keeping track of and carefully counting all the cycles is still a tedious task.

In this chapter we introduce the notion of asymptotic bounds, principally big oh, and examine the properties of such bounds. As it turns out, the rules for computing and manipulating big oh expressions greatly simplify the analysis of the running time of a program when all we are interested in is its asymptotic behavior.

For example, consider the analysis of the running time of an algorithm which evaluates a polynomial using Horner's rule.

Mathematical analysis of recursive and nonrecursive algorithms

Mathematical Analysis - Some Examples


The algorithms for the problems are categorized into recursive and nonrecursive
algorithms.

Nonrecursive Algorithms
Example 1.1: Finding the largest element in an array.

Algorithm 1.2: MaxArray(A[1 · · · n])

begin
maxval ← A[1];
for i ← 2 to n do
    if A[i] > maxval then maxval ← A[i];
end for
return maxval;
end

Steps in mathematical analysis of nonrecursive algorithms:

 Decide on a parameter n indicating input size.

 Identify the algorithm's basic operation.

 Check whether the number of times the basic operation is executed depends only on n; if it also depends on some additional property of the input, the worst-case, average-case, and best-case efficiencies have to be investigated separately.

 Set up a summation for C(n), reflecting the algorithm's loop structure, expressing the number of times the algorithm's basic operation is executed.

 Simplify the summation using standard formulas and rules of sum manipulation, and find a closed-form formula for the count or, at the very least, establish its order of growth.
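Applying these steps to Algorithm 1.2 (MaxArray): the input size is n, the basic operation is the comparison A[i] > maxval, and its count does not depend on which input of size n is given, so one summation suffices:

```latex
C(n) \;=\; \sum_{i=2}^{n} 1 \;=\; n - 1 \;\in\; \Theta(n)
```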

Analysis
Note that when the frequently executed instruction (the comparison) is not inside the loop's body but instead determines whether the loop's body is to be executed, the comparison is executed exactly one more time than the loop's body is repeated, so the choice of basic operation is not that important. The important case is when the loop's variable takes only a few values between its lower and upper bounds; then we have to use an alternate way of computing the number of times the body of the loop is executed. For an algorithm in which the value of n is about halved on each iteration (for instance, one that counts the binary digits of n), the answer should be log n: the value of n is compared log n + 1 times, and therefore the complexity of such an algorithm is Θ(log n). Another important factor to consider is the size of the input. Even though only one positive integer n is given as input, the complexity varies with n, and hence the size of the input is taken to be n. (For MaxArray, by contrast, the comparison executes n − 1 times, so its complexity is Θ(n).)

STEP 3: PRACTICE/TESTING
1. What is a primitive data type?
Primitive data types are the basic data types that are available in most programming languages. They are used to represent single values; an integer is one example.

2. Give some examples of non-primitive data types.

Arrays, linked lists and stacks.

3. What is best-case efficiency?

The best-case efficiency of an algorithm is its efficiency for the input of size n on which the algorithm runs fastest among all possible inputs of that size.
