
III and IV Semester

Computer Science Stream B.Tech


Handbook
Sl No. Contents Page No

1 Engineering Mathematics III and IV 1-14

2 Computer organization and architecture 15-18

3 Data structure and application 19-22

4 Digital System Design 23-27

5 Object oriented programming 28-41

6 Principles of Data Communication 42-47

7 Computer Network Protocol 48-67

8 Database System 68-81

9 Design and Analysis of Algorithm 82-83

10 Embedded System 84-88

11 Formal Language and automata 89-97

12 Operating Systems 98- 103

13 8086 Microprocessor system 104-118


ENGINEERING MATHEMATICS III AND IV SEMESTER
DISCRETE MATHEMATICS
LATTICE THEORY
Cartesian product: The Cartesian product of two sets 𝐴 and 𝐵 denoted 𝐴 × 𝐵 is the set of all ordered pairs
of the form (𝑎, 𝑏) where 𝑎 ∈ 𝐴 and 𝑏 ∈ 𝐵.

Binary relation: A binary relation from 𝐴 to 𝐵 is a subset of 𝐴 × 𝐵.

Reflexive relation: Let 𝑅 be a binary relation on 𝐴. 𝑅 is said to be reflexive relation if (𝑎, 𝑎) is in R for
every 𝑎 ∈ 𝐴.

Symmetric relation: A binary relation 𝑅 on a set A is said to be a symmetric relation if (𝑎, 𝑏) in R implies
that (𝑏, 𝑎) is also in R.

Antisymmetric relation: Let R be a binary relation on A. R is said to be an antisymmetric relation if (𝑎, 𝑏)


in R implies that (𝑏, 𝑎) is not in R unless 𝑎 = 𝑏.

Transitive relation: Let R be a binary relation on A. R is said to be a transitive relation if (𝑎, 𝑐) is in R


whenever both (𝑎, 𝑏) and (𝑏, 𝑐) are in R.

Equivalence relation: A binary relation is said to be an equivalence relation if it is reflexive, symmetric and
transitive.

Partial ordering relation: A binary relation is said to be a partial ordering relation if it is reflexive,
antisymmetric and transitive.

Partially ordered set (poset): Set A together with a partial ordering relation R on A is called a partially
ordered set and is denoted by (𝐴, ≤).

Chain: Let (𝐴, ≤) be a partially ordered set. A subset of A is called a chain if every two elements in the
subset are related.

Antichain: Let (𝐴, ≤) be a partially ordered set. A subset of A is called an antichain if no two elements in
the subset are related.

Totally ordered set: A partially ordered set (𝐴, ≤) is called a totally ordered set if A is a chain and the
binary relation is called a total ordering relation.

Maximal element: Let (𝐴, ≤) be a partially ordered set. An element 𝑎 in A is called a maximal element if
for no 𝑏 in A, 𝑎 ≠ 𝑏, 𝑎 ≤ 𝑏.

Minimal element: Let (𝐴, ≤) be a partially ordered set. An element 𝑎 in A is called a minimal element if
for no 𝑏 in A, 𝑎 ≠ 𝑏, 𝑏 ≤ 𝑎.

Upper bound: Let (𝐴, ≤) be a partially ordered set. An element 𝑐 is said to be an upper bound of 𝑎 and 𝑏
if 𝑎 ≤ 𝑐 and 𝑏 ≤ 𝑐. An element 𝑐 is said to be least upper bound of 𝑎 and 𝑏 if 𝑐 is an upper bound of a and
𝑏, and if there is no other upper bound 𝑑 of 𝑎 and 𝑏 such that 𝑑 ≤ 𝑐.

Universal upper bound: An element 𝑎 in a lattice (𝐴, ≤) is called a universal upper bound if for every
element 𝑏 in 𝐴, 𝑏 ≤ 𝑎. It is unique if it exists and is denoted by 1.

Lower bound: Let (𝐴, ≤) be a partially ordered set. An element c is said to be a lower bound of a and b if
𝑐 ≤ 𝑎 and 𝑐 ≤ 𝑏. An element c is said to be greatest lowerbound of a and b if c is a lower bound of a and
b, and if there is no other lower bound d of a and b such that 𝑐 ≤ 𝑑.

Universal lower bound: An element 𝑎 in a lattice (𝐴, ≤) is called a universal lower bound if for every
element 𝑏 in 𝐴, 𝑎 ≤ 𝑏. It is unique if it exists and is denoted by 0.

Lattice: A partially ordered set is said to be a lattice if every two elements in the set have a unique least
upper bound and a unique greatest lower bound.
For any 𝑎 and 𝑏 in the lattice (𝐴, ≤) , a ≤ a ⋁ b and 𝑎 ⋀ 𝑏 ≤ 𝑎
For any 𝑎, 𝑏, 𝑐, 𝑑 in a lattice (𝐴, ≤), if 𝑎 ≤ 𝑏 and 𝑐 ≤ 𝑑 then 𝑎 ⋁ 𝑐 ≤ 𝑏 ⋁ 𝑑 and 𝑎 ⋀ 𝑐 ≤ 𝑏 ⋀ 𝑑

Commutative property: For any 𝑎 and 𝑏 in a lattice (𝐴, ≤)


𝑎 ⋁ 𝑏 = 𝑏 ⋁ 𝑎 and 𝑎 ⋀ 𝑏 = 𝑏 ⋀ 𝑎

Associative property: For any a, b and c in a lattice (𝐴, ≤)


a⋁(𝑏 ⋁ 𝑐) = (𝑎 ⋁ 𝑏) ⋁ 𝑐 and 𝑎 ⋀(𝑏 ⋀ 𝑐) = (𝑎 ⋀ 𝑏) ⋀ 𝑐

Idempotent property: For every 𝑎 in a lattice (𝐴, ≤) 𝑎 ⋁ 𝑎 = 𝑎 and 𝑎 ⋀ 𝑎 = 𝑎.

Absorption Property: For any 𝑎 and 𝑏 in a lattice (𝐴, ≤), 𝑎 ⋁(𝑎 ⋀ 𝑏) = 𝑎 and 𝑎 ⋀(𝑎 ⋁ 𝑏) = 𝑎

Cover: Let 𝑎 and 𝑏 be two elements in a lattice. Then 𝑎 is said to cover 𝑏 if 𝑏 < 𝑎 and there is no element
𝑐 such that 𝑏 < 𝑐 < 𝑎.

Atom: An element is called as an atom if it covers the universal lower bound.

Distributive lattice: A lattice (𝐴,∨,∧) is said to be distributive if for all 𝑎, 𝑏, 𝑐 ∈ 𝐴,


𝑎 ∨ (𝑏 ∧ 𝑐) = (𝑎 ∨ 𝑏) ∧ (𝑎 ∨ 𝑐)
𝑎 ∧ (𝑏 ∨ 𝑐) = (𝑎 ∧ 𝑏) ∨ (𝑎 ∧ 𝑐).

Complement of an element: The complement of an element 𝑎 of a lattice (𝐴,∨,∧) with 0 and 1 is an


element 𝑏 ∈ 𝐴 such that 𝑎 ∨ 𝑏 = 1 and 𝑎 ∧ 𝑏 = 0.

Complemented lattice: A lattice in which every element has a complement is called a complemented
lattice.

Boolean lattice: A distributive, complemented lattice is called a Boolean lattice. In such a lattice, every
element 𝑎 has a unique complement 𝑎̅, and complementation ( ̅ ) is a unary operation on the lattice.

Boolean algebra: The algebraic structure (𝐴,∨,∧, ̅) formed by a Boolean lattice is called a Boolean
algebra.
A Boolean expression over ({0,1},∨,∧) is said to be in disjunctive normal form if it is a join of minterms.
A Boolean expression over ({0,1},∨,∧) is said to be in conjunctive normal form if it is a meet of maxterms.
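For illustration: over ({0,1},∨,∧) with variables x1 and x2, the expression x1 ∨ x2 written in disjunctive normal form is (x1 ∧ x2) ∨ (x1 ∧ x̄2) ∨ (x̄1 ∧ x2), a join of minterms; in conjunctive normal form it is the single maxterm x1 ∨ x2.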

COMBINATORICS
Addition Principle. If there are 𝑚 ways of doing 𝐴 and 𝑛 ways of doing 𝐵, with no way of doing both
simultaneously, then the number of ways of doing 𝐴 or 𝐵 is 𝑚 + 𝑛.

Multiplication Principle. If there are 𝑚 ways of doing 𝐴 and 𝑛 ways of doing 𝐵 independently, then there
are 𝑚𝑛 ways of doing 𝐴 and 𝐵 (or 𝐴 followed by 𝐵).

Permutations and Combinations


The number of permutations of n distinct objects is n! = n(n − 1)(n − 2) × ⋯ × 3 × 2 × 1.
The number of ways of selecting and arranging r distinct objects from a collection of n distinct objects is
nPr = n!/(n − r)!.
The number of ways of selecting r distinct objects from a collection of n distinct objects is
nCr = C(n, r) = n!/(r!(n − r)!) = n(n − 1)⋯(n − r + 1)/r!.
The number of ways of selecting any number of distinct objects from a collection of n distinct objects is 2^n.
The number of permutations of n objects where n1 of them are alike of the first kind, n2 of them are alike
of the second kind, …, nk of them are alike of the kth kind is n!/(n1! n2! ⋯ nk!).
The number of permutations of r objects selected from n types of objects with unlimited repetition of each
type is n^r.
The number of selections of r objects from n types of objects with unlimited repetition of each type is
C(n + r − 1, r) = C(n + r − 1, n − 1).
Basic identities
1. n! = n(n − 1)!
2. C(n, r) = C(n, n − r)
3. C(n, r) = C(n − 1, r) + C(n − 1, r − 1) for n > r > 0
4. C(n, 0) + C(n, 1) + ⋯ + C(n, n) = 2^n
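Worked example (for illustration): from 5 distinct books, the number of ways to arrange 3 on a shelf is 5P3 = 5!/2! = 60, while the number of ways merely to choose 3 of them is 5C3 = 5!/(3!·2!) = 10; choosing 3 scoops from 4 ice-cream flavours with repetition allowed can be done in C(4 + 3 − 1, 3) = C(6, 3) = 20 ways.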

Inclusion-Exclusion Principle
Let 𝑎1 , 𝑎2 , … , 𝑎𝑛 be 𝑛 properties. In a collection of 𝑁 objects, let 𝑁(𝑎𝑖 ) denote the number of objects with
property 𝑎𝑖 , let 𝑁(𝑎𝑖 𝑎𝑗 ) denote the number of objects with both properties 𝑁(𝑎𝑖 𝑎𝑗 ), etc. Then the number
of objects in the collection that do not have any of the properties 𝑎1 , 𝑎2 , … , 𝑎𝑛 is
𝑁(𝑎
̅̅̅1 ̅̅̅ 𝑎𝑛 = 𝑁 − ∑ 𝑁(𝑎𝑖 ) + ∑ 𝑁(𝑎𝑖 𝑎𝑗 ) + ⋯ + (−1)𝑘
𝑎2 ⋯ ̅̅̅) ∑ 𝑁(𝑎𝑖1 𝑎𝑖2 ⋯ 𝑎𝑖𝑘 ) + ⋯
𝑖 𝑖<𝑗 𝑖1 <𝑖2 <⋯<𝑖𝑘
+ (−1)𝑛 𝑁(𝑎1 𝑎2 ⋯ 𝑎𝑛 ).
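Worked example (for illustration): among the N = 100 integers 1 to 100, let a1 = "divisible by 2" and a2 = "divisible by 3". Then N(a1) = 50, N(a2) = 33 and N(a1a2) = 16, so the number of integers divisible by neither is N(ā1ā2) = 100 − 50 − 33 + 16 = 33.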

Ordering of Permutations
Index sequence for the kth permutation of n distinct marks in lexicographical order: c(n−1) c(n−2) ⋯ c1, where
k − 1 = c(n−1)·(n − 1)! + c(n−2)·(n − 2)! + ⋯ + c1·1!
is the factorial base representation of k − 1.
Fike's sequence for the kth permutation of n distinct marks: d1 d2 ⋯ d(n−1), where di = i − ci, and
k − 1 = c1·(n!/2!) + c2·(n!/3!) + ⋯ + c(n−1)·(n!/n!).

Generating Functions
The ordinary generating function for the number of selections of r distinct objects out of n distinct objects is
(1 + x)^n = ∑(r=0 to n) C(n, r) x^r.
The ordinary generating function for the number of selections of r objects from n types of objects with
unlimited repetition is (1 − x)^(−n) = ∑(r=0 to ∞) C(n + r − 1, r) x^r.
The exponential generating function for the number of permutations of n objects is e^x = ∑(n=0 to ∞) x^n/n!.

Partitions and Compositions


The number of compositions of n into k positive parts is C(n − 1, k − 1).
The number of compositions of n into any number of positive parts is 2^(n−1).
The ordinary generating function for the number of unrestricted partitions of n is
(1 − x)^(−1) (1 − x²)^(−1) (1 − x³)^(−1) ⋯.

GRAPH THEORY
A graph 𝐺 consists of a finite nonempty set 𝑉 = 𝑉(𝐺) whose elements are called ‘vertices’ of 𝐺 and a set
𝐸 = 𝐸(𝐺) of unordered pairs of distinct vertices of 𝑉(𝐺) whose elements are called the ‘edges’ of 𝐺. A
graph with 𝑝 vertices and 𝑞 edges is called a (𝑝, 𝑞) graph.
The first theorem in graph theory, due to Euler, is popularly known as the 'handshaking lemma'. It states that
"the sum of degrees of all the vertices in a graph is twice the number of edges".
There are several types of graphs namely: complete graph, regular graph, cycle graph, path graph, tree,
bipartite graph etc.
Some of the preliminary terminologies to be noted are:
Distance: The distance 𝑑(𝑢, 𝑣) between the two vertices 𝑢 and 𝑣 in 𝐺 is the length of a shortest path joining
them if any, otherwise 𝑑(𝑢, 𝑣) = ∞. In a connected graph, distance is a metric. That is, for all the vertices
𝑢, 𝑣, 𝑤
i. 𝑑(𝑢, 𝑣) ≥ 0 with 𝑑(𝑢, 𝑣) = 0 if and only if 𝑢 = 𝑣
ii. 𝑑(𝑢, 𝑣) = 𝑑(𝑣, 𝑢)
iii. 𝑑(𝑢, 𝑣) + 𝑑(𝑣, 𝑤) ≥ 𝑑(𝑢, 𝑤)
Geodesic: A shortest 𝑢-𝑣 path.
Girth: Girth 𝑔(𝐺) of a graph 𝐺 is the length of the shortest cycle (if any) in 𝐺.
Circumference: Circumference 𝑐(𝐺) of a graph 𝐺 is the length of the longest cycle (if any) in 𝐺.
Eccentricity: The eccentricity 𝑒(𝑣) of a vertex in a connected graph 𝐺 is the distance from 𝑣 to the vertex
farthest from 𝑣 in 𝐺. That is, 𝑒(𝑣) = max{𝑑(𝑣, 𝑢) : 𝑢 ∈ 𝑉(𝐺)}.
Radius: The radius 𝑟(𝐺) or rad(𝐺) is the minimum eccentricity of the vertices, i.e. rad(𝐺) = min{𝑒(𝑣) : 𝑣 ∈ 𝑉(𝐺)}.
Diameter: The diameter diam(𝐺) is the maximum eccentricity of the vertices, in other words the length of
any longest geodesic, i.e. diam(𝐺) = max{𝑒(𝑣) : 𝑣 ∈ 𝑉(𝐺)}.
Central vertex: A vertex 𝑣 is a central vertex if 𝑒(𝑣) = rad(𝐺). And the set of all central vertices is called
‘center’ of the graph.
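Worked example (for illustration): in the path graph with vertices u1–u2–u3–u4, d(u1, u4) = 3, e(u1) = e(u4) = 3 and e(u2) = e(u3) = 2, so rad(G) = 2, diam(G) = 3 and the center is {u2, u3}; since the graph contains no cycle, girth and circumference are undefined.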

GROUP THEORY
Let 𝐺 be a non-empty set and ∗: 𝐺 × 𝐺 → 𝐺 a binary operation on 𝐺. Then
1. Associativity axiom: (𝑎 ∗ 𝑏) ∗ 𝑐 = 𝑎 ∗ (𝑏 ∗ 𝑐), for all 𝑎, 𝑏, 𝑐 ∈ 𝐺.
2. Identity axiom: There exists an element 𝑒 ∈ 𝐺 such that 𝑎 ∗ 𝑒 = 𝑒 ∗ 𝑎 = 𝑎, for all 𝑎 ∈ 𝐺.
3. Inverse axiom: For 𝑎 ∈ 𝐺, there corresponds an element 𝑏 ∈ 𝐺 such that 𝑎 ∗ 𝑏 = 𝑏 ∗ 𝑎 = 𝑒.
4. Commutativity or Abelian axiom: 𝑎 ∗ 𝑏 = 𝑏 ∗ 𝑎, for all 𝑎, 𝑏 ∈ 𝐺.

In the above, if (𝐺,∗) satisfies 1 then (𝐺,∗) is a semigroup.
If (𝐺,∗) satisfies 1 and 2 then (𝐺,∗) is a monoid.
If (𝐺,∗) satisfies 1, 2, and 3 then (𝐺,∗) is a group.
If (𝐺,∗) satisfies 1, 2, 3, and 4 then (𝐺,∗) is a commutative or Abelian group.

Definitions
Let (𝐺,⋅) be a group.
1. A non-empty subset 𝐻 ⊆ 𝐺 is a subgroup of 𝐺 if (𝐻,⋅) itself is a group. Then we write 𝐻 ≤ 𝐺.
2. If 𝐻 ≤ 𝐺, and 𝑎 ∈ 𝐺, then 𝐻𝑎 = {ℎ𝑎 ∣ ℎ ∈ 𝐻}. Then 𝐻𝑎 is a right coset of 𝐻 in 𝐺. Similarly, 𝑎𝐻 =
{𝑎ℎ ∣ ℎ ∈ 𝐻} is a left coset of 𝐻 in 𝐺.
3. The number of elements in 𝐺 is the order of the group 𝐺, denoted 𝑜(𝐺) or |𝐺|.
4. Let 𝑎 ∈ 𝐺. The order of the element 𝑎 is the least positive integer 𝑚 such that 𝑎𝑚 = 𝑒, denoted 𝑜(𝑎)
or |𝑎|.
5. Let 𝑎 ∈ G. Then ⟨𝑎⟩ = {𝑎𝑖 ∣ 𝑖 = 0, ±1, ±2, … } is the cyclic subgroup of 𝐺 generated by 𝑎.
6. A subgroup 𝑁 of 𝐺 is a normal subgroup of 𝐺 if for every 𝑔 ∈ 𝐺 and every 𝑛 ∈ 𝑁, 𝑔𝑛𝑔−1 ∈ 𝑁.
7. The set 𝑍(𝐺) = {𝑧 ∈ 𝐺 ∣ 𝑥𝑧 = 𝑧𝑥, ∀𝑥 ∈ 𝐺} is the center of 𝐺.
8. Let 𝑎 ∈ 𝐺. Then 𝑁(𝑎) = {𝑥 ∈ 𝐺 ∣ 𝑎𝑥 = 𝑥𝑎} is the normaliser of 𝑎.
9. Let (𝐻,∘) also be a group. Then a group homomorphism from 𝐺 to 𝐻 is a function 𝑓: 𝐺 → 𝐻 such that
for all 𝑥, 𝑦 ∈ 𝐺, 𝑓(𝑥𝑦) = 𝑓(𝑥) ∘ 𝑓(𝑦).
10. Let 𝑓: 𝐺 → 𝐻 be a group homomorphism. Then the image of 𝑓 is im 𝑓 = { 𝑓(𝑥) ∣ 𝑥 ∈ 𝐺 } ≤ 𝐻 and the
kernel of 𝑓 is ker 𝑓 = {𝑥 ∈ 𝐺 ∣ 𝑓(𝑥) = 𝑒𝐻 } ≤ 𝐺 where 𝑒𝐻 is the identity element of 𝐻.

Examples of Groups
1. (ℤ, +) – Group of integers under addition
2. (ℚ, +) – Group of rational numbers under addition
3. (ℝ, +) – Group of real numbers under addition
4. (ℂ, +) – Group of complex numbers under addition
5. ℚ× – Group of non-zero rational numbers under multiplication
6. ℝ× – Group of non-zero real numbers under multiplication
7. ℂ× – Group of non-zero complex numbers under multiplication
8. ℤ𝑛 = {0, 1, 2, … , 𝑛 − 1} – Group of integers modulo 𝑛 under addition modulo 𝑛
9. {1, ω, ω², … , ω^(n−1)}, where ω = e^(2iπ/n) – Group of complex 𝑛th roots of unity under multiplication
10. 𝑆𝑛 – Group of all permutations of {1, 2, … , 𝑛} under composition of permutations
11. GL𝑛 (ℝ) – Group of 𝑛 × 𝑛 invertible real matrices

Basic Results
Let (𝐺,⋅) be any group.
1. Uniqueness of identity: G has a unique identity element.
2. Uniqueness of inverses: Every element x ∈ G has a unique inverse x⁻¹ ∈ G, and (x⁻¹)⁻¹ = x.
3. Shoe-sock property: ∀x, y ∈ G, (xy)⁻¹ = y⁻¹x⁻¹.
4. Cancellation laws: Let x, y ∈ G. If ∃a ∈ G such that ax = ay, then x = y. If ∃b ∈ G such that xb = yb,
then x = y.
5. If G is finite of order n, then ∀x ∈ G, xⁿ = e.
6. If f: G → H is a homomorphism, then ker f is a normal subgroup of G.
7. Z(G) is a normal subgroup of G.
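Worked example (for illustration): in (ℤ6, +) the element 2 has order 3 since 2 + 2 + 2 ≡ 0 (mod 6); H = {0, 3} is a subgroup with cosets H = {0, 3}, 1 + H = {1, 4} and 2 + H = {2, 5}. Because the group is Abelian, every subgroup (in particular H) is normal and Z(ℤ6) = ℤ6.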

PROPOSITIONAL CALCULUS
Implications
I1: 𝑃 ∧ 𝑄 ⇒ 𝑃 (Simplification) I8: ¬(𝑃 → 𝑄) ⇒ ¬𝑄
I2: 𝑃 ∧ 𝑄 ⇒ 𝑄(Simplification) I9: 𝑃, 𝑄 ⇒ 𝑃 ∧ 𝑄
I3: 𝑃 ⇒ 𝑃 ∨ 𝑄(Addition) I10: ¬𝑃, 𝑃 ∨ 𝑄 ⇒ 𝑄 (Disjunctive syllogism)
I4: 𝑄 ⇒ 𝑃 ∨ 𝑄(Addition) I11: 𝑃, 𝑃 → 𝑄 ⇒ 𝑄 (Modus ponens)
I5: ¬𝑃 ⇒ 𝑃 → 𝑄 I12: ¬𝑄, 𝑃 → 𝑄 ⇒ ¬𝑃 (Modus tollens)
I6: 𝑄 ⇒ 𝑃 → 𝑄 I13: 𝑃 → 𝑄, 𝑄 → 𝑅 ⇒ 𝑃 → 𝑅 (Hypothetical syllogism)
I7: ¬(𝑃 → 𝑄) ⇒ 𝑃 I14: 𝑃 ∨ 𝑄, 𝑃 → 𝑅, 𝑄 → 𝑅 ⇒ 𝑅 (Dilemma)
Equivalences
E1: ¬¬𝑃 ⇔ 𝑃 E12: 𝑅 ∨ (𝑃 ∧ ¬𝑃) ⇔ 𝑅
E2: 𝑃 ∧ 𝑄 ⇔ 𝑄 ∧ 𝑃 E13: 𝑅 ∧ (𝑃 ∨ ¬𝑃) ⇔ 𝑅
E3: 𝑃 ∨ 𝑄 ⇔ 𝑄 ∨ 𝑃 E14: 𝑅 ∨ (𝑃 ∨ ¬𝑃) ⇔ 𝐓
E4: (𝑃 ∧ 𝑄) ∧ 𝑅 ⇔ 𝑃 ∧ (𝑄 ∧ 𝑅) E15: 𝑅 ∧ (𝑃 ∧ ¬𝑃) ⇔ 𝐅
E5: (𝑃 ∨ 𝑄) ∨ 𝑅 ⇔ 𝑃 ∨ (𝑄 ∨ 𝑅) E16: 𝑃 → 𝑄 ⇔ ¬𝑃 ∨ 𝑄
E6: 𝑃 ∧ (𝑄 ∨ 𝑅) ⇔ (𝑃 ∧ 𝑄) ∨ (𝑃 ∧ 𝑅) E17: ¬(𝑃 → 𝑄) ⇔ 𝑃 ∧ ¬𝑄
E7: 𝑃 ∨ (𝑄 ∧ 𝑅) ⇔ (𝑃 ∨ 𝑄) ∧ (𝑃 ∨ 𝑅) E18: 𝑃 → 𝑄 ⇔ ¬𝑄 → ¬𝑃
E8: ¬(𝑃 ∧ 𝑄) ⇔ ¬𝑃 ∨ ¬𝑄 E19: 𝑃 → (𝑄 → 𝑅) ⇔ (𝑃 ∧ 𝑄) → 𝑅
E9: ¬(𝑃 ∨ 𝑄) ⇔ ¬𝑃 ∧ ¬𝑄 E20: ¬(𝑃 ⇄ 𝑄) ⇔ 𝑃 ⇄ ¬𝑄
E10: 𝑃 ∨ 𝑃 ⇔ 𝑃 E21: 𝑃 ⇄ 𝑄 ⇔ (𝑃 → 𝑄) ∧ (𝑄 → 𝑃)
E11: 𝑃 ∧ 𝑃 ⇔ 𝑃 E22: 𝑃 ⇄ 𝑄 ⇔ (𝑃 ∧ 𝑄) ∨ (¬𝑃 ∧ ¬𝑄)

PROBABILITY
Addition rule: If A and B are two events of an experiment having sample space S, then
P(A ∪ B) = P(A) + P(B) − P(A ∩ B).
The conditional probability of an event B, given that the event A has already taken place, is
P(B | A) = P(A ∩ B)/P(A), P(A) ≠ 0.
Bayes' Theorem:
Let B1, B2, …, Bk be a partition of S with P(Bi) > 0, i = 1, 2, …, k, and let A be any event of S. Then
P(Bi | A) = P(A | Bi) P(Bi) / ∑(i=1 to k) P(A | Bi) P(Bi).
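Worked example (for illustration): suppose two boxes are equally likely to be chosen, P(B1) = P(B2) = 1/2, and a defective item is drawn with probability P(A | B1) = 0.1 from box B1 and P(A | B2) = 0.3 from box B2. If a defective item is observed, P(B1 | A) = (0.1 × 0.5)/(0.1 × 0.5 + 0.3 × 0.5) = 0.05/0.20 = 0.25.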

The multiplicative rule of probability: 𝑃(𝐴 ∩ 𝐵) = 𝑃(𝐴)𝑃(𝐵|𝐴) if 𝑃(𝐴) ≠ 0, and 𝑃(𝐴 ∩ 𝐵) = 𝑃(𝐵)𝑃(𝐴|𝐵) if 𝑃(𝐵) ≠ 0.
If 𝑃(𝐴 ∩ 𝐵) = 𝑃(𝐴)𝑃(𝐵), then A and B are independent.
Random Variable: Let S be the sample space of a random experiment. Suppose with each element s of S
a unique real number X(s) is associated according to some rule; then X is called a random variable. There
are two types of random variables: i) discrete and ii) continuous.
Discrete Random Variable: A random variable X is said to be discrete if the number of possible values of
X is finite or countably infinite. Its probability distribution is given by a probability mass function (PMF):
let X be a discrete random variable whose range space R_X consists of at most countably many values
x1, x2, …. The probability mass function is defined as
p(xi) = Pr{X = xi}, satisfying the conditions i) p(xi) ≥ 0 for all i and ii) ∑_i p(xi) = 1.

Continuous Random Variable: A random variable X is said to be continuous if it can take all possible
values between certain limits; here the range space of X is infinite. The probability distribution of such a
random variable is given by its probability density function (PDF): the pdf of X is a function f(x) satisfying
i) f(x) ≥ 0
ii) ∫(−∞ to ∞) f(x) dx = 1
iii) Pr{a ≤ X ≤ b} = ∫(a to b) f(x) dx for any a, b such that −∞ < a < b < ∞.
Note: 1. If X is a continuous random variable with pdf f(x), then
P(a < X < b) = P(a ≤ X < b) = P(a < X ≤ b) = P(a ≤ X ≤ b) = ∫(a to b) f(x) dx.
2. P(X = a) = 0 if X is a continuous random variable.

Cumulative distribution function: Let X be a random variable (discrete or continuous). We define F to be
the cumulative distribution function of the random variable X, given by F(x) = Pr{X ≤ x}.
Case i) If X is a discrete random variable then F(t) = Pr{X ≤ t} = ∑(xi ≤ t) p(xi).
Case ii) If X is a continuous random variable then F(x) = Pr{X ≤ x} = ∫(−∞ to x) f(t) dt.

Two dimensional random variable: Let E be an experiment and S be a sample space associated with E.
Let X=X(s) and Y=Y(s) be two functions each assigning a real number to each outcome s of S. We call (X,
Y) to be two dimensional random variable.

Discrete 2D: If the possible values of (X, Y) are finite or countably infinite then (X, Y) is called discrete,
and its distribution is given by P(xi, yj) satisfying the following conditions:
i) P(xi, yj) ≥ 0 and
ii) ∑_j ∑_i P(xi, yj) = 1.
The function P(xi, yj) so defined is called the joint probability distribution function (Jpdf).

Continuous 2D: If (X, Y) is a continuous random variable assuming all values in some region R of the
Euclidean plane, then the Joint probability density function 𝑓(𝑥, 𝑦) is a function satisfying the following
conditions
i) 𝑓(𝑥, 𝑦) ≥ 0 for all (x, y)𝜖𝑅
ii) ∬ 𝑓(𝑥, 𝑦)𝑑𝑥 𝑑𝑦 = 1 over the region R.

Marginal Probability distribution: The marginal probability distribution is defined as follows.
Case i) For discrete (X, Y): p(xi) = P{X = xi} = ∑_j P(xi, yj) is the marginal probability distribution of X;
similarly q(yj) = P{Y = yj} = ∑_i P(xi, yj) is the marginal probability distribution of Y.
Case ii) For continuous (X, Y): the marginal probability function of X is g(x) = ∫(−∞ to ∞) f(x, y) dy and
the marginal probability function of Y is h(y) = ∫(−∞ to ∞) f(x, y) dx.

To calculate the conditional probability:
Case i) Discrete: the probability of xi given yj is P(xi | yj) = P(xi, yj)/q(yj), q(yj) > 0, and the probability of
yj given xi is P(yj | xi) = P(xi, yj)/p(xi), p(xi) > 0.
Case ii) Continuous: the pdf of X for given Y = y is f(x | y) = f(x, y)/h(y), h(y) > 0, and the pdf of Y for
given X = x is f(y | x) = f(x, y)/g(x), g(x) > 0.

Independent Random variable: If X and Y are independent random variable then two dimensional
random variable in case of discrete is defined as 𝑃(𝑥𝑖 , 𝑦𝑗 ) = 𝑝(𝑥𝑖 ). 𝑞(𝑦𝑗 ) for all the values of i and j. In
case of Continuous it is defined as 𝑓(𝑥, 𝑦) = 𝑔(𝑥). ℎ(𝑦).

Mathematical Expectation: If X is a discrete random variable with pmf p(x), then the expectation of X is
given by 𝐸(𝑋) = ∑𝑥 𝑥𝑝(𝑥), provided the series is absolutely convergent.
If X is continuous with pdf f(x), then the expectation of X is given by 𝐸(𝑋) = ∫ 𝑥𝑓(𝑥)𝑑𝑥, provided
∫ |𝑥|𝑓(𝑥)𝑑𝑥 < ∞.
Variance of X is given by V(X) = E(X − E(X))² = E(X²) − (E(X))².
Chebyshev's inequality: Let X be a random variable with mean μ and variance σ². Then for any positive
real number k (k > 0)
P(|X − μ| ≥ k) ≤ σ²/k²  (upper bound)
P(|X − μ| < k) ≥ 1 − σ²/k²  (lower bound)
Note: some other forms
1. P(|X − μ| ≥ kσ) ≤ 1/k² and P(|X − μ| < kσ) ≥ 1 − 1/k²
2. P(|X − c| ≥ ε) ≤ E(X − c)²/ε² and P(|X − c| < ε) ≥ 1 − E(X − c)²/ε²
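Worked example (for illustration): if X has mean μ = 50 and variance σ² = 25, then P(|X − 50| ≥ 10) ≤ 25/100 = 0.25, so at least 75% of the probability lies within 10 units of the mean, whatever the distribution of X.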
DISTRIBUTIONS:
Binomial distribution, X ~ B(n, p): P(X = k) = nCk p^k (1 − p)^(n−k), k = 0, 1, 2, …, n;  E(X) = np;  V(X) = np(1 − p).
Poisson distribution, X ~ P(λ): P(X = k) = e^(−λ) λ^k / k!, k = 0, 1, 2, …, λ > 0;  E(X) = λ = np;  V(X) = λ = np.
Uniform distribution, X ~ U(a, b): f(x) = 1/(b − a) for a ≤ x ≤ b, and 0 otherwise;  E(X) = (a + b)/2;  V(X) = (b − a)²/12.
Normal distribution, X ~ N(μ, σ²): f(x) = (1/(σ√(2π))) e^(−(x − μ)²/(2σ²)), −∞ < x < ∞, −∞ < μ < ∞, σ > 0;  E(X) = μ;  V(X) = σ².
Exponential distribution, X ~ E(λ): f(x) = λe^(−λx) for x ≥ 0, and 0 otherwise;  E(X) = 1/λ;  V(X) = 1/λ².
Gamma distribution, X ~ G(r, λ): f(x) = λ^r x^(r−1) e^(−λx)/Γ(r) for x > 0 (λ, r > 0), and 0 elsewhere;  E(X) = r/λ;  V(X) = r/λ².
Chi-square distribution, X ~ χ²(n): f(x) = x^(n/2 − 1) e^(−x/2)/(Γ(n/2) 2^(n/2)) for x > 0, and 0 elsewhere;  E(X) = n;  V(X) = 2n.
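Worked example (for illustration): if X ~ B(10, 0.5) then P(X = 3) = 10C3 (0.5)³(0.5)⁷ = 120/1024 ≈ 0.117, with E(X) = 5 and V(X) = 2.5; if X ~ P(2) then P(X = 0) = e⁻² ≈ 0.135, with mean and variance both equal to 2.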

Uniform distribution on a two dimensional set: If R is a set in the two-dimensional plane, and R has a
finite area, then we may consider the density function equal to the reciprocal of the area of R inside R,
and equal to 0 otherwise:
f(x, y) = 1/area(R) if (x, y) ∈ R, and f(x, y) = 0 otherwise.
Covariance: Cov(X, Y) = E(XY) − E(X)E(Y)
Correlation coefficient: ρ_xy = Cov(X, Y)/√(V(X)V(Y)) = (E(XY) − E(X)E(Y))/√(V(X)V(Y))
Properties:
1. E(c) = c, where c is a constant.
2. V(c) = 0, where c is a constant.
3. If E(XY) = 0 then X and Y are orthogonal.
4. V(AX + B) = A²V(X), where AX + B is a linear function of X.
5. If ρ = 0 then X and Y are uncorrelated.
6. V(AX + BY) = A²V(X) + B²V(Y) + 2AB Cov(X, Y)

FUNCTIONS OF ONE DIMENSIONAL RANDOM VARIABLES


Let S be a sample space associated with a random experiment E; it is known that a random variable X on
S is a real valued function, i.e., X: S → R, so that with each element s ∈ S a real number is associated.
Let X be a random variable defined on S and let y = H(x) be a real valued function of x. Then Y = H(X) is
a random variable on S, i.e., with each element s ∈ S the real number y = H(X(s)) is associated. Here Y is
called a function of the random variable X.
Notations:
1. 𝑅𝑋 – the set of all possible values of the function 𝑋, called the range space of the random variable
𝑋.
2. 𝑅𝑌 – the set of all possible values of the function 𝑌 = 𝐻(𝑋), called the range space of the random
variable 𝑌.
Equivalent Events: Let C be an event associated with the range space 𝑅𝑌 . Let 𝐵 ⊂ 𝑅𝑋 defined by 𝐵 = {𝑥 ∈
𝑅𝑋 ; 𝐻(𝑥) ∈ 𝐶}, then 𝐵 and 𝐶 are called equivalent events.
Distribution function of functions of random variables:
Case 1: Let 𝑋 be a discrete random variable with p.m.f. 𝑝(𝑥𝑖 ) = 𝑃(𝑋 = 𝑥𝑖 ) for 𝑖 = 1,2,3, … Let 𝑌 = 𝐻(𝑋)
then 𝑌 is also a discrete random variable. If 𝑌 = 𝐻(𝑋) is a one to one function then the probability
distribution of 𝑌 is as follows:
For the possible values of 𝑦𝑖 = 𝐻(𝑥𝑖 ) for 𝑖 = 1,2,3, …. The p.m.f. of 𝑌 = 𝐻(𝑋) is 𝑞(𝑦𝑖 ) = 𝑃(𝑌 = 𝑦𝑖 ) =
𝑃(𝑋 = 𝑥𝑖 ) = 𝑝(𝑥𝑖 ) for 𝑖 = 1,2,3, ….
Case 2: Let X be a discrete random variable with p.m.f. p(xi) = P(X = xi) for i = 1, 2, 3, … and let Y = H(X);
then Y is also a discrete random variable. Suppose that to one value Y = yi there correspond several
values of X, say xi1, xi2, …, xij, …; then the p.m.f. of Y = H(X) is
𝑞(𝑦𝑖 ) = 𝑃(𝑌 = 𝑦𝑖 ) = 𝑝(𝑥𝑖 1 ) + 𝑝(𝑥𝑖2 ) + ⋯ + 𝑝 (𝑥𝑖𝑗 ) + ⋯

Case 3: Let X be a continuous random variable with p.d.f. f(x) and let Y = H(X) be a discrete random
variable. Then if the set {Y = yi} is equivalent to an event B ⊆ R_X, the p.m.f. of Y is
q(yi) = P(Y = yi) = ∫_B f(x) dx
Case 4: Let X be a continuous random variable with p.d.f. f(x) and let Y = H(X) be a continuous random
variable. Then the p.d.f. of Y, say g, is obtained by the following procedure:
Step 1: Obtain the c.d.f. of Y, G(y) = P(Y ≤ y), by finding the event A ⊆ R_X which is equivalent to the event {Y ≤ y}.
Step 2: Differentiate G(y) with respect to y to get g(y).
Step 3: Determine those values of y in R_Y for which g(y) > 0.
Theorem: Let X be a continuous random variable with p.d.f. f(x) where f(x) > 0 for a < x < b. Suppose
that Y = H(X) is a strictly monotonic function on [a, b]. Then the p.d.f. of the random variable Y = H(X)
is given by
g(y) = f(x) |dx/dy|
If Y = H(X) is strictly increasing then g(y) > 0 for H(a) < y < H(b).
If Y = H(X) is strictly decreasing then g(y) > 0 for H(b) < y < H(a).
Theorem: Let X be a continuous random variable with p.d.f. f(x). Let Y = X²; then the p.d.f. of Y is
g(y) = (1/(2√y)) [f(√y) + f(−√y)]
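Worked example (for illustration): if X ~ U(0, 1), so that f(x) = 1 for 0 < x < 1, and Y = X², then g(y) = (1/(2√y))[f(√y) + f(−√y)] = 1/(2√y) for 0 < y < 1, since f(−√y) = 0 outside (0, 1).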

FUNCTIONS OF TWO DIMENSIONAL RANDOM VARIABLES


Let (𝑋, 𝑌) be a two dimensional continuous random variable. Let 𝑍 = 𝐻(𝑋, 𝑌) be a continuous function of
X and Y then 𝑍 = 𝐻(𝑋, 𝑌) is a continuous one dimensional random variable.
To find the p.d.f. of 𝑍, we introduce another suitable random variable say,
𝑊 = 𝐺(𝑋, 𝑌) and obtain the joint p.d.f. of the two dimensional random variable (𝑍, 𝑊), say 𝑘(𝑧, 𝑤). From
this distribution, the p.d.f. of 𝑍 can be obtained by integrating 𝑘 with respect to 𝑤.
Theorem: Suppose (X, Y) is a two dimensional continuous random variable with joint p.d.f. f(x, y)
defined on a region R of the xy-plane. Let Z = H1(X, Y) and W = H2(X, Y). Suppose that H1 and H2
satisfy the following conditions:
(i) z = H1(x, y) and w = H2(x, y) may be uniquely solved for x, y in terms of z and w, say x = G1(z, w)
and y = G2(z, w).
(ii) The partial derivatives ∂x/∂z, ∂x/∂w, ∂y/∂z, ∂y/∂w exist and are continuous.
Then the joint p.d.f. of (Z, W), say k(z, w), is given by
k(z, w) = f[G1(z, w), G2(z, w)] |J(z, w)|
where J(z, w) = det [ ∂x/∂z  ∂x/∂w ; ∂y/∂z  ∂y/∂w ] is called the Jacobian of the transformation
(x, y) ↦ (z, w). Also, k(z, w) > 0 for those values of (z, w) corresponding to the values of (x, y) for which
f(x, y) > 0.

MOMENT GENERATING FUNCTION (M.G.F.) OF ONE DIMENSIONAL RANDOM


VARIABLES
Let X be any one dimensional random variable. If the mathematical expectation E(e^(tX)) exists, it is
called the moment generating function (m.g.f.) of X, i.e., M_X(t) = E(e^(tX)).
In particular, if X is discrete then M_X(t) = ∑(i=1 to ∞) e^(t·xi) P(X = xi).
If X is continuous then M_X(t) = ∫(−∞ to ∞) e^(tx) f(x) dx.

Properties of m.g.f.: Let 𝑋 be any one dimensional random variable and 𝑀𝑋 (𝑡) be the m.g.f. of 𝑋 then
1. M_X^(n)(0) = E(X^n), where M_X^(n)(0) is the nth derivative of M_X(t) at t = 0,
i.e., M_X′(0) = E(X) and M_X′′(0) = E(X²).
2. V(X) = M_X′′(0) − (M_X′(0))²
3. Let 𝑋 be any one dimensional random variable and 𝑀𝑋 (𝑡) be the m.g.f. of 𝑋. Let 𝑌 = 𝛼𝑋 + 𝛽.
Then the m.g.f. of 𝑌 is 𝑀𝑌 (𝑡) = 𝑒 𝛽𝑡 𝑀𝑋 (𝛼𝑡).
4. Suppose that 𝑋 and 𝑌 are independent random variables. Let 𝑍 = 𝑋 + 𝑌. Let 𝑀𝑋 (𝑡), 𝑀𝑌 (𝑡) and
𝑀𝑍 (𝑡) be the m.g.f.’s of the random variables 𝑋, 𝑌 and 𝑍 respectively. Then 𝑀𝑍 (𝑡) =
𝑀𝑋 (𝑡)𝑀𝑌 (𝑡)
5. Let 𝑋1 , 𝑋2 , … , 𝑋𝑛 be 𝑛 independent random variables which follows a normal distribution 𝑁(𝜇𝑖 , 𝜎𝑖2 )
for 𝑖 = 1,2,3, . . , 𝑛. Let 𝑍 = 𝑋1 + 𝑋2 + ⋯ + 𝑋𝑛 then 𝑍 → 𝑁(𝜇1 + 𝜇2 + ⋯ + 𝜇𝑛 , 𝜎12 + 𝜎22 + ⋯ +
𝜎𝑛2 ).
6. Let 𝑋1 , 𝑋2 , … , 𝑋𝑛 be 𝑛 independent random variables which follows a Poisson distribution with
parameter 𝛼𝑖 for 𝑖 = 1,2, . . , 𝑛. Let 𝑍 = 𝑋1 + 𝑋2 + ⋯ + 𝑋𝑛 then 𝑍 has a Poisson distribution with
parameter 𝛼 = 𝛼1 + 𝛼2 + ⋯ + 𝛼𝑛 .
7. Let 𝑋1 , 𝑋2 , … , 𝑋𝑘 be 𝑘 independent random variables which follows a Chi-square distribution with
degrees of freedom 𝑛𝑖 for 𝑖 = 1,2,3, . . , 𝑘. Let 𝑍 = 𝑋1 + 𝑋2 + ⋯ + 𝑋𝑘 then 𝑍 has a Chi-square
distribution with degrees of freedom 𝑛 = 𝑛1 + 𝑛2 + ⋯ + 𝑛𝑘 .
8. Let 𝑋1 , 𝑋2 , … , 𝑋𝑘 be 𝑘 independent random variables, each having distribution 𝑁(0,1). Then 𝑆 =
𝑋12 + 𝑋22 + ⋯ + 𝑋𝑘2 has a Chi-square distribution with degrees of freedom 𝑘.
9. Let 𝑋1 , 𝑋2 , … , 𝑋𝑟 be 𝑟 independent random variables, each having exponential distribution with the
same parameter 𝛼. Let 𝑍 = 𝑋1 + 𝑋2 + ⋯ + 𝑋𝑟 then 𝑍 has a Gamma distribution with parameters
𝛼 and 𝑟.
10. Let 𝑋1 , 𝑋2 , … , 𝑋𝑛 , … be a sequence of random variable with c.d.f.’s 𝐹1 , 𝐹2 , … , 𝐹𝑛 , … and m.g.f.’s
𝑀1 , 𝑀2 , … , 𝑀𝑛 , … Suppose that lim 𝑀𝑛 (𝑡) = 𝑀(𝑡), where 𝑀(0) = 1. Then 𝑀(𝑡) is the m.g.f. of
𝑛→∞
the random variable 𝑋 whose c.d.f is 𝐹 = lim 𝐹𝑛 (𝑡).
𝑛→∞

MGF of some standard distributions:


1. Binomial distribution: M_X(t) = (pe^t + q)^n
2. Poisson distribution: M_X(t) = e^(α(e^t − 1))
3. Normal distribution: M_X(t) = e^(tμ + σ²t²/2)
4. Exponential distribution: M_X(t) = α/(α − t)
5. Gamma distribution: M_X(t) = α^r/(α − t)^r
6. Chi-square distribution: M_X(t) = (1 − 2t)^(−n/2)
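Worked example (for illustration): for the exponential distribution, M_X(t) = α/(α − t), so M_X′(t) = α/(α − t)² and M_X′′(t) = 2α/(α − t)³; hence E(X) = M_X′(0) = 1/α and V(X) = M_X′′(0) − (M_X′(0))² = 2/α² − 1/α² = 1/α², agreeing with the table of distributions.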

SAMPLING
In a statistical investigation, the characteristics of a large group of individuals (called the population) are
studied. Sampling is the study of the relationship between a population and samples drawn from it.
The population mean and the population variance are denoted by μ and σ² respectively.
Sample mean and sample variance: Let X be the random variable which denotes the population, with
mean μ and variance σ². Let (X1, X2, …, Xn) be a random sample of size n from X. Then
Sample mean: X̄ = (X1 + X2 + ⋯ + Xn)/n = (∑ Xi)/n
Sample variance: s² = ∑ (Xi − X̄)²/n
• If X ~ N(μ, σ²) then X̄ and s² are independent random variables.
• Let X be a random variable with E(X) = μ and V(X) = σ², and let (X1, X2, …, Xn) be a random sample
of size n from X. Then E(X̄) = μ and V(X̄) = σ²/n.
• If X ~ N(μ, σ²) then X̄ ~ N(μ, σ²/n) and ns²/σ² ~ χ²(n − 1).

Central Limit Theorem: Let X1, X2, …, Xn be n independent random variables all of which have the
same distribution. Let μ = E(Xi) and σ² = V(Xi) be the common expectation and variance, and let
S = ∑(i=1 to n) Xi, so that E(S) = nμ and V(S) = nσ². Then for large values of n the random variable
Tn = (S − E(S))/√V(S) has approximately the distribution N(0, 1).

Testing of Hypothesis:
The central limit theorem is used for testing of hypothesis. The purpose of hypothesis testing is to determine
whether there is enough statistical evidence in favor of a certain belief, or hypothesis, about a parameter.
For hypothesis tests, the law of probability is assumed to be known, so the sampling distribution is
perfectly known, and we take a sample to define a decision criterion which helps us to accept or reject
the hypothesis.
Using the mean, we test a hypothesis H0. This is referred to as the null hypothesis; it is an assumption
made about the probability distribution of X. The alternative hypothesis is denoted by H1.
The error of the first kind is called a Type I error and the error of the second kind is called a Type II error.
A Type I error occurs when a true null hypothesis is rejected (a false positive in terms of the null
hypothesis), and a Type II error occurs when a false null hypothesis is not rejected (a false negative in
terms of the null hypothesis). Thus the acceptance of H1 when H0 is true is a Type I error. The probability
of committing a Type I error is called the level of significance and is denoted by α. Also, the failure to
reject H0 when H1 is true is called a Type II error.
The probability of committing a Type II error is denoted by β. The probability 1 − β is called the power of
a test; it is the probability of taking the correct action of rejecting the null hypothesis when it is false. By
increasing n, we can improve the power of a test. For the same α and the same n, the power of test is also
used to choose between different tests; a more powerful test is one that yields the correct action with greater
frequency.
A statistical hypothesis test may return a value called the p-value. The p-value is the smallest significance
level at which H0 can be rejected. This is a quantity that we can use to interpret or
quantify the result of the test and either reject or fail to reject the null hypothesis. This is done by comparing
the p-value to a threshold value chosen beforehand called the significance level α.
If p-value > α: Fail to reject the null hypothesis (not significant result).
If p-value <= α: Reject the null hypothesis (significant result).
Some tests do not return a p-value. Instead, they might return a list of critical values and their associated
significance levels, as well as a test statistic. The results are interpreted in a similar way. Instead of

comparing a single p-value to a pre-specified significance level, the test statistic is compared to the critical
value at a chosen significance level.
If test statistic < critical value: Fail to reject the null hypothesis.
If test statistic >= critical value: Reject the null hypothesis.
Moving the critical value provides a trade-off between α and β. A reduction in β is always possible by
increasing the size of the critical region, but this increases α. Likewise, reducing α is possible by decreasing
the critical region. Note that α and β are related in such a way that decreasing one generally increases the
other. This problem is solved with the help of sample size. Both α and β can be reduced simultaneously by
increasing the sample size.
Consider H0: θ = θ0 vs H1: θ > θ0. This is a one-tailed test with the critical region in the right-tail of the test
statistic X. Another one-tailed test could have the form, H0: θ = θ0 vs H1: θ < θ0, in which the critical region
is in the left-tail. In a two-tailed test we have : H0: θ = θ0 vs H1: θ not equal to θ0.
The first type of test is the most basic: testing the mean of a distribution in which we already know the
population variance. Let μ and σ be the mean and standard deviation of X. We take from the population a
sample of size n large enough, with sample mean x̄. If x̄ is between μ − zα/2(σ/√n) and μ + zα/2(σ/√n) then
H0 is accepted; however, if x̄ is outside of those values, the null hypothesis is rejected (two-tailed test).
Our test statistic is
z = (x̄ − μ) / (σ/√n)
where n is the number of observations made when collecting the data for the study, and μ is the true mean
when we assume the null hypothesis is true. So to test a hypothesis with given significance level α, we
calculate the critical value of z (or critical values, if the test is two-tailed) and then check to see whether or
not the value of the test statistic in is in our critical region. This is called a z-test. We are most often
concerned with tests involving either α = .05 or α = .01. When we construct our critical region, we need to
decide whether or not our hypotheses in question are one-tailed or two-tailed. If one-tailed, we reject the
null hypothesis if z ≥ zα (if the hypothesis is right-handed) or if z ≤ −zα (if the hypothesis is left-handed). If
two-tailed, we reject the null hypothesis if |z| ≥ zα/2. If we do not know the population variance, and if n is
large (n ≥ 30) it suffices for most distributions commonly encountered to replace the unknown population
variance with the modified definition of sample variance.
Critical Region: To construct a critical region of size α, we first examine our alternative hypothesis. If our
hypothesis is one-tailed, our critical region is either z ≥ zα (if the hypothesis is right-handed) or z ≤ −zα (if
the hypothesis is left-handed). If our hypothesis is two-tailed, then our critical region is |z| ≥ zα/2.
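Worked example (for illustration): to test H0: μ = 50 against H1: μ ≠ 50 with σ = 6 known, a sample of n = 36 observations gives x̄ = 52; then z = (52 − 50)/(6/√36) = 2. Since |z| = 2 ≥ z0.025 = 1.96, H0 is rejected at the α = 0.05 level (two-tailed test).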
Student t distribution:
If we have a sample of size n from a normal distribution with mean μ and unknown variance, we study t =
(x̄ − μ)/(s/√n) and compare this to the Student t-distribution with (n − 1) degrees of freedom. If n ≥ 30 then
by the Central Limit Theorem we may instead compare it to the standard normal case.
Thus when we have a small sample size (n < 30) taken from a normal distribution of unknown variance,
we use the t-test with (n – 1) degrees of freedom.
Critical Region: To construct a critical region of size α, we first examine our alternative hypothesis. If our
hypothesis is one-tailed, our critical region is either t ≥ tα, (n−1) (if the hypothesis is right-handed) or t ≤
−tα, (n−1) (if the hypothesis is left-handed). If our hypothesis is two-tailed, then our critical region is |t| ≥
tα/2, (n−1).
Chi-Square Test:
The chi-square test gives a p-value. The p-value tells us whether the test results are significant or not. The
chi-square statistic is given by
χ² = ∑(j=1 to n) (Oj − Ej)²/Ej
where Oj = each Observed (actual) value, Ej= each Expected value.

Calculating the chi-square statistic and comparing it against a critical value from the chi-square distribution
allows us to assess whether the observed cell counts in a table are significantly different from the expected
cell counts.
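Worked example (for illustration): a coin tossed 100 times shows 60 heads and 40 tails against expected counts of 50 and 50, so χ² = (60 − 50)²/50 + (40 − 50)²/50 = 2 + 2 = 4. With 1 degree of freedom the critical value at α = 0.05 is 3.841; since 4 ≥ 3.841, the hypothesis of a fair coin is rejected.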
Estimation:
Estimator (or estimate) θ̂ for the unknown parameter θ associated with the distribution of a random
variable X is called an unbiased estimator (or unbiased estimate) if E(θ̂) = θ for all θ.
An unbiased estimate of the variance σ² of a random variable based on a sample x1, …, xn is
σ̂² = (1/(n − 1)) ∑(i=1 to n) (xi − x̄)².
Maximum likelihood estimate (M.L.E.): Based on a random sample x1, …, xn, the M.L.E. θ̂ of θ is that
value of θ which maximizes L(x1, …, xn; θ) = f(x1; θ) f(x2; θ) ⋯ f(xn; θ), where f(x; θ) is the p.d.f. (or
p.m.f.) of X.
Confidence intervals:
Once the sample is observed and the sample mean computed to equal x̄, the interval
[x̄ − zα/2(σ/√n), x̄ + zα/2(σ/√n)]
is a known interval for the unknown mean μ.
For example, x̄ ± 1.96(σ/√n) is a 95% confidence interval for μ. The number 100(1 − α)% (or 1 − α) is the
confidence coefficient.
Let a and b be constants selected for the random sample x1, …, xn. A 100(1 − α)% confidence interval for
the variance σ², based on the sample variance S² = (1/(n − 1)) ∑(i=1 to n) (xi − x̄)², is
[(n − 1)S²/b, (n − 1)S²/a], provided P(a ≤ (n − 1)S²/σ² ≤ b) = 1 − α;
taking square roots gives the corresponding interval [√((n − 1)/b)·S, √((n − 1)/a)·S] for σ.
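Worked example (for illustration): a sample of n = 25 observations with x̄ = 50 from a population with σ = 10 gives the 95% confidence interval 50 ± 1.96(10/√25) = 50 ± 3.92, i.e. [46.08, 53.92], for μ.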

COMPUTER ORGANIZATION AND ARCHITECTURE

Binary, signed-integer representations

Format of typical instructions


RISC Style CISC Style
Move destination, source Move destination, source
Operation destination, source1, source2 Operation destination, source
Clear destination Clear destination
Branch condition, target Branch condition, target
Load destination, source
Store source, destination

Addressing modes
Name Assembler syntax Addressing function
Immediate #Value Operand= Value
Register Ri EA= Ri
Absolute LOC EA=LOC
Register Indirect (Ri) EA= [Ri]
Index X(Ri) EA=[Ri]+X
Base with index (Ri, Rj) EA=[Ri] + [Rj]
Base index plus constant X(Ri, Rj) EA=[Ri] + [Rj] +X
Auto increment (Ri)+ EA= [Ri]; Ri is incremented after the access
Auto decrement -(Ri) Ri is decremented first; EA= [Ri]
Relative addressing X(PC) EA=[PC]+X
EA=Effective Address
Value= A signed number
X=Index Value

Commonly used flags
N (negative), Z (zero), V (overflow) and C (carry)

IEEE standard single precision and double precision floating-point formats


Single precision

Double precision

Booth Multiplier Recoding Table

Characteristics of the components used in the Processing section

Block Diagram Truth Table

Storage Register

Adder/Subtractor Tristate Buffer

Block Diagram Truth Table

Mod 16 Counter
10 Steps for Hardwired Control
1. Define the task to be performed
2. Propose a trial processing section
3. Provide a Register Transfer Description algorithm based on the processing section outlined
4. Validate the algorithm using trial data
5. Describe the basic characteristics of the hardware elements to be used in the processing section
6. Complete the design of the processing section by establishing necessary control points
7. Propose the block diagram of the controller
8. Specify the state diagram of the controller
9. Specify the characteristics of the hardware elements to be used in the controller
10. Complete the controller design and draw a logic diagram of the final circuit
All microinstructions have 2 fields:
-Control field
-Next address field

Memory Basic Concepts


The maximum size of the memory that can be used in any computer is determined by the addressing
scheme.
Eg:
Address Memory Locations
32 Bit 2³² = 4G (Giga)

Hit Rate and Miss Penalty
The average access time experienced by the processor in a system with a single cache is
tave = hC + (1 – h)M
h → Cache hit ratio
C→ Cache access time
M→ Miss Penalty to transfer information from main memory to cache.
The average access time experienced by the processor in a system with two levels of caches is
tave = h1C1 + (1 – h1)h2C2 + (1 – h1)(1 – h2)M
h1 → Cache L1 hit ratio


h2 → Cache L2 hit ratio
C1→L1 access time
C2→ Miss Penalty to transfer information from L2 to L1
M→ Miss Penalty to transfer information from main memory to L2.
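Worked example (for illustration): with a single cache having hit ratio h = 0.95, cache access time C = 1 cycle and miss penalty M = 20 cycles, tave = 0.95 × 1 + 0.05 × 20 = 1.95 cycles; improving the hit ratio to 0.99 reduces tave to 0.99 × 1 + 0.01 × 20 = 1.19 cycles.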

Vector (SIMD) Processing instructions


VectorAdd.S Vi, Vj, Vk
VectorLoad.S Vi, X(Rj)
VectorStore.S Vi, X(Rj)

Where Vi, Vj and Vk are vector registers.


In these instructions, X(Rj) causes L vector elements (L = the vector length) to be handled, beginning at memory location X + [Rj].

DATA STRUCTURES
typedef
typedef <existing_name> <alias_name>;
alias_name var_name;

Enumeration(enum)
enum tagname {value1, value2,…… , valueN};
enum tagname var_name;

Structure
struct [structure_tag] {
member definition;
member definition;
...
member definition;
} [structure variables];

Union
union [union_tag] {
member definition;
member definition;
...
member definition;
} [union variables];

Pointers
declaration Datatype *pointerVar;
function returning pointer Datatype *func ( );
generic pointer type void *
null pointer constant NULL
pointer dereferencing *pointerVar
address of variable &pointerVar
structure member through pointer (*pointerVar).member or pointerVar->member
address of an array element address = pointer + (offset * element_size);

Dynamic Storage Allocation


memory allocation void* malloc (size_t size);
contiguous memory allocation void* calloc (size_t count, size_t size);
reallocation of memory void* realloc (void* ptr, size_t newSize);
releasing memory void free (void* ptr);

Pointers(new and delete):


Syntax: datatype *pointername; pointername=new datatype;
For array: datatype *pointername; pointername=new datatype[size];
Syntax: delete pointername;
For array: delete[] pointername;

Stacks
ADT Stack is
Objects: a finite ordered list with zero or more elements.

Functions:
for all stack ∈ Stack, item ∈ element, maxStackSize ∈ positive integer

Stack createS(maxStackSize)::=
create an empty stack whose maximum size is maxStackSize

Boolean IsFull(stack, maxStackSize)::=


if (number of elements in stack == maxStackSize)
return TRUE
else return FALSE

stack push (stack,item)::=


if (isFull(stack)) stackFull
else insert item into top of stack and return

Boolean IsEmpty(stack)::=
if(stack==createS(maxStackSize))
return TRUE
else return FALSE

element pop(stack)::=
if (IsEmpty(stack)) return
else remove and return the element at the top of the stack.

Queues
ADT Queue is
Objects: a finite ordered list with zero or more elements.
Functions:
for all queue ∈ Queue, item ∈ element, maxQueueSize ∈ positive integer

Queue createQ(maxQueueSize)::=
create an empty queue whose maximum size is maxQueueSize

Boolean IsFull(queue, maxQueueSize)::=


if (number of elements in queue == maxQueueSize)
return TRUE
else return FALSE

Queue Add (queue,item)::=


if (isFull(queue)) queueFull
else insert item into rear of queue and return queue

Boolean IsEmpty(queue)::=
if(queue==CreateQ(maxQueueSize))
return TRUE
else return FALSE

element DeleteQ(queue)::=
if (IsEmpty(queue)) return
else remove and return the element at front of queue

Sparse Matrix
ADT SparseMatrix is
Objects: a set of triples <row,column,value>, where row and column are integers and form a unique
combination, and value comes from the set item.
Functions:
for all a, b ∈ SparseMatrix, x ∈ item, i, j, maxCol, maxRow ∈ index

SparseMatrix Create(maxRow, maxCol)::=


return a sparse matrix that can hold upto max items = maxRow X maxCol

Sparsematrix Transpose(a) ::=


return the matrix produced by interchanging the row and column value of every triple.

SparseMatrix Add(a,b)::=
if the dimension of a,b are same return the matrix produced by adding the corresponding items.
else return error

SparseMatirx Multiply(a,b)::=
if number of columns in a equals number of rows in b return the matrix D produced by the formula
d[i][j]=Σ(a[i][k] . b[k][j]) where d(i,j) is the i,jth element.
else return error

Linked Lists
Linked List Operations
Suppose head is pointer pointing to the first node in the linked list, some of the basic operations on linked
list are:
InsertEnd(head, item) ::=
Inserts an element to the end of the list and returns the head pointer.

InsertBeg(head, item) ::=


Inserts an element before the head node of the list and returns the new head pointer.

InsertBefore(head, pos) ::=


Inserts an element before the given position in the list and returns the head pointer.

InsertAfter(head, pos) ::=


Inserts an element after the given position in the list and returns the head pointer.
remove(node) ::=
if the list is empty
return error
else remove the specific node from the list.

Binary Tree
Abstract data type Binary_Tree
Structure Binary_Tree(abbreviated BinTree) is

Objects: a finite set of nodes either empty or consisting of a root node, left Binary_Tree, and right
Binary_Tree.

Functions:
for all bt, bt1, bt2 ∈ BinTree, item ∈ element
BinTree Create() ::= creates an empty binary tree.
Boolean IsEmpty(bt) ::= if (bt == empty binary tree) return TRUE
else return FALSE
BinTree MakeBT(bt1, item, bt2) ::= return a binary tree whose left subtree is bt1,
whose right subtree is bt2, and whose root node
contains the data item.
BinTree Lchild(bt) ::= if (IsEmpty(bt)) return error
else return the left subtree of bt.
element Data(bt) ::= if (IsEmpty(bt)) return error
else return the data in the root node of bt.
BinTree Rchild(bt) ::= if (IsEmpty(bt)) return error
else return the right subtree of bt.

Searching and Sorting

Algorithm       In place  Stable  Best       Average    Worst      Remarks
selection sort  ✔                 ½ n²       ½ n²       ½ n²       n exchanges; quadratic in best case
insertion sort  ✔         ✔       n          ¼ n²       ½ n²       use for small or partially-sorted arrays
bubble sort     ✔         ✔       n          ½ n²       ½ n²       rarely useful; use insertion sort instead
shellsort       ✔                 n log₃ n   unknown    c n^(3/2)  tight code; subquadratic
mergesort                 ✔       ½ n lg n   n lg n     n lg n     n log n guarantee; stable
quicksort       ✔                 n lg n     2 n ln n   ½ n²       n log n probabilistic guarantee; fastest in practice
heapsort        ✔                 n†         2 n lg n   2 n lg n   n log n guarantee; in place
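As an illustration of the insertion sort entry in the table above, here is a minimal sketch in Java (the class name and array contents are illustrative):

class InsertionSortDemo {
    // Sorts arr in ascending order; efficient for small or partially-sorted arrays.
    static void insertionSort(int arr[]) {
        for (int i = 1; i < arr.length; i++) {
            int key = arr[i];
            int j = i - 1;
            // Shift elements larger than key one position to the right.
            while (j >= 0 && arr[j] > key) {
                arr[j + 1] = arr[j];
                j--;
            }
            arr[j + 1] = key;
        }
    }
    public static void main(String args[]) {
        int arr[] = {5, 2, 9, 1, 7};
        insertionSort(arr);
        System.out.println(java.util.Arrays.toString(arr)); // prints [1, 2, 5, 7, 9]
    }
}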

DIGITAL SYSTEM DESIGN

Basic Gates

Boolean Algebra
x · y = y · x                  Commutative
x + y = y + x

x · (y · z) = (x · y) · z      Associative
x + (y + z) = (x + y) + z

x · (y + z) = x · y + x · z    Distributive
x + y · z = (x + y) · (x + z)

x + x · y = x                  Absorption
x · (x + y) = x

x · y + x · y′ = x             Combining
(x + y) · (x + y′) = x

(x · y)′ = x′ + y′             DeMorgan’s theorem
(x + y)′ = x′ · y′
(here x′ denotes the complement of x)

The General Form of a Module

module module_name [(port_name {, port_name})];


[parameter declarations]
[input declarations]
[output declarations]
[inout declarations]
[wire or tri declarations]
[reg or integer declarations]
[function or task declarations]
[assign continuous assignments]
[initial block]
[always blocks]
[gate instantiations]
[module instantiations]
endmodule

Designing subcircuits in Verilog

module_name [#(parameter overrides)] instance_name (


.port_name ( [expression] ) {, .port_name ( [expression] )} );

Parameters
parameter n = 4;

Always Block

always @(sensitivity_list)
[begin]
[procedural assignment statements]
[if-else statements]
[case statements]
[while, repeat, and for loops]
[task and function calls]
[end]

For Loop

for (initial_index; terminal_index; increment)


begin
statement;
end

The Conditional Operator

conditional_expression ? true_expression : false_expression

The if-else Statement


if (expression1)
begin
statement;
end
else if (expression2)
begin
statement;
end
else
begin
statement;
end

The case Statement

case (expression)
alternative1: begin
statement;
end
alternative2: begin
statement;
end
[default: begin
statement;
end]
endcase

Functions and Tasks


function [range | integer] function_name;
[input declarations]
[parameter, reg, integer declarations]
begin
statement;
end
endfunction

A task is declared by the keyword task and it comprises a block of statements that ends with the keyword
endtask.

IC Details

7483 : 4 bit binary full adder 7485:4 bit Magnitude comparator

74138 : 3 to 8 active low output decoder 74151 : 8 input multiplexer

74153 : DUAL 4 input multiplexer 74157 : QUAD 2 input multiplexer

7490 : Asynchronous Decade counter 7493 : Asynchronous MOD 16 UP counter

74193 : Synchronous UP/DOWN counter

OBJECT ORIENTED PROGRAMMING

INTRODUCTION TO JAVA

Simple Program
/*
This is a simple Java program.
Call this file Example.java.
*/
class Example
{ // A Java program begins with a call to main().
public static void main(String args[])
{
System.out.println("Java drives the Web.");
}
}

Data Types in Java:


There are two categories of data types available in Java:

Primitive Data Types:


byte (8 bits), short(16 bits), int (32 bits), long(64 bits), float(32 bits), double(64 bits), boolean, char (16
bits)
Reference/Object Data Types: Reference variables are created using defined constructors of the classes.
They are used to access objects. These variables are declared to be of a specific type that cannot be changed.
Examples: Employee, Puppy etc. Class objects, and various type of array variables come under reference
data type. Default value of any reference variable is null. A reference variable can be used to refer to any
object of the declared type or any compatible type.

Java Literals

Java Literals: A literal is a source code representation of a fixed value. They are represented directly in
the code without any computation. Literals can be assigned to any primitive type variable. For example:
byte a = 68; char a = 'A';
String literals in Java are specified like they are in most other languages by enclosing a sequence of
characters between a pair of double quotes. Examples of string literals are: "Hello World", "two\nlines",
"\"This is in quotes\""
The Java language supports a few special escape sequences for String and char literals as well. They are:
Notation Character represented
\n Newline (0x0a)
\r Carriage return (0x0d)
\f Form feed (0x0c)
\b Backspace (0x08)
\s Space (0x20)
\t tab
\" Double quote
\' Single quote
\\ backslash
\ddd Octal character (ddd)
\uxxxx Hexadecimal UNICODE character (xxxx)

Arrays:
To declare a one-dimensional array, you can use this general form:
type array-name[ ] = new type[size];
int arr[ ]; // Declare an integer array
arr =new int[100 ]; // Allocate 100 elements of memory

int arr [ ] = new int [ 100];


//Declare and allocate an integer array in one statement.

The general form for initializing a one-dimensional array is shown here:


type array-name[ ] = { val1, val2, val3, ... , valN };

int arr [ ] = {1, 2, 3, 4}; // Initializing 1D array


int arr2D[][] = new int[10][20]; // declare and allocate a 2D array.

The general form of array initialization for a two-dimensional array is shown here:
type-specifier array_name[ ] [ ] =
{
{ val, val, val, ..., val },
{ val, val, val, ..., val },
...
{ val, val, val, ..., val }
};

Java Operators:

Arithmetic Operators
+, -, *, / (addition, subtraction, multiplication, division)
%, ++, -- ( modulus, increment, decrement)

Relational Operators
==, !=, >, < (equal, not equal, greater, lesser)
>=, <= (greater or equal, lesser or equal)

Logical Operators
&, |, !, ^, ||, && (AND, OR, NOT, XOR, short-circuit OR, short-circuit AND)
Bitwise Operators
&, |, ~, ^ (AND, OR, NOT, XOR)

>>, >>>, << (shift right, shift right zero fill, shift left)

Conditional operator ( ? )
The ? is called a ternary operator because it requires three operands. It takes the general form
Exp1 ? Exp2 : Exp3;

absval = val < 0 ? -val : val; // get absolute value of val

Branch Structures:
The syntax of if, if...else, if...else if...else, nested if...else, switch-case, for, while, do-while, break, and
continue statements is similar to that of C++.

Defining a class:
A class is created by using the keyword class. A simplified general form of a class definition is shown here:
class classname
{ // declare instance variables
type var1;
type var2;
// ...
type varN;
// declare methods
type method1(parameters) { /* body of method */ }
type method2(parameters) { /* body of method */ }
// ...
type methodN(parameters) { /* body of method */ }
}

Java Access Modifiers:


Java provides a number of access modifiers to set access levels for classes, variables, methods and
constructors. The four access levels are:
Visible to the package: the default. No modifiers are needed.
Visible to the class only: private.
Visible to the world: public.
Visible to the package and all subclasses: protected.

INHERITANCE
Syntax:
access_modifier class subclass-name extends existing-class-name
{ . . // Changes and additions. . }
Example:
public class Flying_Bird extends Bird
{ . . // Changes and additions. . }

INTERFACE
Syntax:
access_modifier interface interface-name
{ . . // Abstract methods
. . // Interface Constants }

A class can both extend one other class and implement one or more interfaces.

PACKAGES
Syntax:
Following is the syntax for package creation:
package package_name;
Example:
package Mypack;

You can create a hierarchy of packages by separating them with a period.


package pkg1[.pkg2 [.pkg3] ]
Example:
package com.course.in;

Importing packages:
Syntax:
Following is the syntax for importing packages:
import pkg1[.pkg2].(classname|*);
Example:
import java.util.Date;
import java.io.*;

ArrayList

Constructor Description
ArrayList() To build an empty array list.
ArrayList(Collection c) To build an array list that is initialized with the elements of the
collection c.
ArrayList(int capacity) To build an array list that has the specified initial capacity.

Method Description
void add(int index, E To insert the specified element at the specified position in a list.
element)
boolean addAll(Collection c) To append all of the elements in the specified collection to the end of
this list.

void clear() To remove all of the elements from this list
boolean add(Object e) To append the specified element at the end of a list
boolean addAll(int index, To append all the elements in the specified collection, starting at the
Collection c) specified position of the list.
Object clone() It is used to return a shallow copy of an ArrayList
int indexOf(Object o) To return the index in this list of the first occurrence of the specified
element, or -1 if the List does not contain this element
int lastIndexOf(Object o) To return the index in this list of the last occurrence of the specified
element.
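A minimal sketch showing a few of the ArrayList methods above in use (the class name and list contents are illustrative):

import java.util.ArrayList;

class ArrayListDemo {
    public static void main(String args[]) {
        ArrayList<String> list = new ArrayList<String>(); // empty array list
        list.add("red");                 // append at the end
        list.add("green");
        list.add(1, "blue");             // insert at index 1
        System.out.println(list);                   // [red, blue, green]
        System.out.println(list.indexOf("green"));  // 2
        list.clear();                    // remove all elements
        System.out.println(list.isEmpty());         // true
    }
}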

Strings

Method Description
char charAt(int where) Returns the character at the specified position
String replace(char original, Replaces the original character with the specified replacement and
char replacement) returns the string
boolean equals(Object str) Compares two objects if they are same or not and returns true or false.
boolean Compares two strings if they are same or not and returns true or false.
equalsIgnoreCase(String str)
String concat(String str ) Concatenates str with the invoking String and returns it.
String trim( ) Returns the invoking String after removing leading and trailing spaces.
boolean startsWith(String Checks if the invoking string starts with str and returns true if found
str) otherwise returns false
boolean endsWith(String Checks if the invoking string ends with str and returns true if found
str): otherwise returns false
int compareTo(String str) Compares two strings and based on the comparison returns zero or less
than zero or greater than zero.
int indexOf(int ch) Returns the index of the first occurrence of ch in the invoking String.
Returns -1 if not found.
int lastIndexOf(int cha) Returns the index of the last occurrence of ch in the invoking String.
Returns -1 if not found.
int indexOf(String st) Returns the index of the first occurrence of st in the invoking String.
Returns -1 if not found.
int lastIndexOf(String st Returns the index of the last occurrence of st in the invoking String.
): Returns -1 if not found.
int indexOf(int ch, int fromIndex)
int indexOf(String st, int fromIndex) Here, fromIndex specifies the index at which the search begins. For indexOf( ), the search runs from fromIndex to the end of the string.
int lastIndexOf(int ch, int fromIndex)
int lastIndexOf(String st, int tillIndex) For lastIndexOf( ), the search runs from zero to tillIndex.

32
String substring(int startIndex)
String substring(int startIndex, int endIndex) Here, startIndex specifies the beginning index, and endIndex specifies the stopping point. The string returned contains all the characters from the beginning index, up to, but not including, the ending index. If endIndex is not specified, then the entire string from startIndex is returned.

String Buffer

Constructor Description
StringBuffer() An object of StringBuffer created with default size of 16
StringBuffer(int size) An object of StringBuffer created with specified size
StringBuffer(String s) An object of StringBuffer created with specified string

Method Description
int capacity() Returns the number of characters it can hold

int length() Returns the actual number of characters it is holding


char charAt(int pos) Returns with the character found at the position in the given
StringBuffer
void setCharAt(int i, char ch) Sets the character in the given StringBuffer with ch at the
specified position i
StringBuffer append(String s) Concatenates string s to the invoking StringBuffer and returns the
StringBuffer
StringBuffer insert(int i, String st) Inserts string st to the invoking StringBuffer at the position i and
returns the StringBuffer
String[] split(String regex) The string is split as many times on the regex appearing in the
invoked string and is returned as a string array.

String[] split(String regex, int limit) The string is split as many times as specified by the limit on the
regex appearing in the invoked string and is returned as a string
array.

boolean regionMatches(int toffset, String other, int ooffset, int len)
boolean regionMatches(boolean ignoreCase, int toffset, String other, int ooffset, int len) Tests if two string regions are equal. Using this method we can compare a substring of the invoking String with a substring of the specified String.
Parameters:
ignoreCase – if true, ignore case when comparing characters.
toffset – the starting offset of the subregion in this string.
other – the string argument.
ooffset – the starting offset of the subregion in the string argument.
len – the number of characters to compare.

33
Exception Handling

Java Exception Keywords:


Keyword Description
try used to specify a block where we should place exception code.
catch used to handle the exception.
finally used to execute a block of code that runs whether or not an exception occurs (typically cleanup code).
throw used to raise an exception explicitly at runtime
throws used to declare the exception that might raise during program execution
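A minimal sketch showing the five keywords together (the method and message used are assumptions for illustration):

public class ExceptionDemo {
    // 'throws' declares the exception this method might raise
    static void check(int age) throws IllegalArgumentException {
        if (age < 18) {
            throw new IllegalArgumentException("not eligible");  // 'throw' raises it
        }
    }

    public static void main(String[] args) {
        try {                                    // block that may raise an exception
            check(15);
        } catch (IllegalArgumentException e) {   // handles the exception
            System.out.println("Caught: " + e.getMessage());
        } finally {                              // always executed
            System.out.println("finally block executed");
        }
    }
}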

Multithreading

Method Description
public void run(): Used to perform action for a thread.
public void start(): Starts the execution of the thread.JVM calls the run() method on
the thread.
public static void sleep(long milliseconds) Causes the currently executing thread to sleep (temporarily cease execution) for the specified number of milliseconds.
public void join() Waits for the thread to die.
public void join(long milliseconds) throws InterruptedException Waits at most the specified number of milliseconds for the thread to die.
public final int getPriority() Returns the priority of the thread.
public final void setPriority(int priority) Changes the priority of the thread.
public String getName() Returns the name of the thread.

public void setName(String name) Changes the name of the thread.


public static Thread currentThread() Returns the reference of currently executing thread.
public int getId(): Returns the id of the thread.
public Thread.State getState(): Returns the state of the thread.
public boolean isAlive(): Tests if the thread is alive.
public void yield(): Causes the currently executing thread object to temporarily pause
and allow other threads to execute.
public void suspend(): Used to suspend the thread (deprecated).
public void resume(): Used to resume the suspended thread (deprecated).
public void stop(): Used to stop the thread (deprecated).
public void interrupt(): Interrupts the thread.
public boolean isInterrupted(): Tests if the thread has been interrupted.
public static boolean interrupted(): Tests if the current thread has been interrupted.

34
public final void wait() throws InterruptedException
public final void wait(long timeout) throws InterruptedException Causes the current thread to release the lock and wait until either another thread invokes the notify() method or the notifyAll() method for this object, or a specified amount of time has elapsed.
public final void notify() Wakes up a single thread that is waiting on this object's monitor.
public final void notifyAll() Wakes up all threads that are waiting on this object's monitor.
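A small illustrative sketch (thread name and loop count are assumed) using run(), start(), setName(), sleep() and join():

public class ThreadDemo extends Thread {
    @Override
    public void run() {                            // work performed by the thread
        for (int i = 1; i <= 3; i++) {
            System.out.println(getName() + ": " + i);
            try {
                Thread.sleep(100);                 // pause for 100 milliseconds
            } catch (InterruptedException e) {
                return;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadDemo t = new ThreadDemo();
        t.setName("worker");
        t.start();    // JVM calls run() on the new thread
        t.join();     // main thread waits for 'worker' to die
        System.out.println("worker finished");
    }
}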

GENERICS

Generics are parameterized types that enable us to create classes, interfaces, and methods in which the type of
data on which they operate is specified as a parameter.
Generic methods: Generic methods are methods that introduce their own type parameters. The syntax for
a generic method includes a list of type parameters, inside angle brackets, which appears before the
method's return type. For static generic methods, the type parameter section must appear before the
method's return type.

Example:
public class GenericMethodTest {
    // generic method printArray
    public static <E> void printArray(E[] inputArray) {
        // Display array elements
        for (E element : inputArray) {
            System.out.printf("%s ", element);
        }
        System.out.println();
    }
}

Generic class:

A generic class declaration looks like a non-generic class declaration, except that the class name is followed
by a type parameter section.
Example:

public class Box<T> {


private T t;
public void add(T t) {
this.t = t;
}
public T get() {
return t;
}
public static void main(String[] args) {
Box<Integer> integerBox = new Box<Integer>();
Box<String> stringBox = new Box<String>();
integerBox.add(new Integer(10));
….
}

35
IO Streams

Method Description
String getName() Returns the name of the file
String getParent() Returns the name of the parent directory
String getPath() Returns the relative path
String getAbsolutePath( Returns the absolute path
boolean exists() Returns true if the file exists, false if it does not
boolean canWrite() Returns true if the file is writable
boolean canRead() Returns true if the file is readable
boolean isDirectory() Returns true if the file is a directory
boolean isFile() Returns true if called on a file and false if called on a directory.
boolean isAbsolute() Returns true if the file has an absolute path and false if its path is
relative
long length() Returns the size of the file
boolean renameTo(File newName) Rename the file to newName
boolean mkdir( ) Create a directory for which path exists
boolean mkdirs( ) Create a directory for which no path exists. It creates a directory
and all the parents of the directory
boolean delete() Deletes the disk file represented by the path of the invoking File
object.
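A short sketch of a few of the File methods above; the file name used is only an assumption for illustration:

import java.io.File;

public class FileDemo {
    public static void main(String[] args) {
        File f = new File("notes.txt");
        System.out.println("Name: " + f.getName());
        System.out.println("Absolute path: " + f.getAbsolutePath());
        System.out.println("Exists: " + f.exists());
        if (f.exists()) {
            System.out.println("Readable: " + f.canRead());
            System.out.println("Size in bytes: " + f.length());
            System.out.println("Is directory: " + f.isDirectory());
        }
    }
}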

Directories:

Method Description

String [] list() Returns the list of files in the directory


File[ ] listFiles( ) return the file list as an array of File objects

File listFiles(FilenameFilter FFObj) Returns those files that satisfy the specified FilenameFilter

Methods defined by InputStream class:

Method Description
int available( ) Returns the number of bytes of input currently available for
reading.
void mark(int numBytes) Places a mark at the current point in the input stream that will
remain valid until numBytes bytes are read.
boolean markSupported( ) Returns true if mark( )/reset( ) are supported by the invoking
stream.

36
int read( ) Returns an integer representation of the next available byte of
input. –1 is returned when the end of the file is encountered.
int read(byte buffer[ ]) Attempts to read up to buffer.length bytes into buffer and returns
the actual number of bytes that were successfully read. –1 is
returned when the end of the file is encountered.
int read(byte buffer[ ], int offset, int numBytes) Attempts to read up to numBytes bytes into buffer starting at buffer[offset], returning the number of bytes successfully read. –1 is returned when the end of the file is encountered.
void reset( ) Resets the input pointer to the previously set mark.

long skip(long numBytes) Ignores (that is, skips) numBytes bytes of input, returning the
number of bytes actually ignored.

Methods defined by OutputStream class :

Method Description
void flush( ) Finalizes the output state so that any buffers are cleared. That is,
it flushes the output buffers
void write(int b) Writes a single byte to an output stream. Note that the parameter
is an int, which allows you to call write( ) with expressions
without having to cast them back to byte.
void write(byte buffer[ ]) Writes a complete array of bytes to an output stream
void write(byte buffer[ ], int offset, Writes a subrange of numBytes bytes from the array buffer,
int numBytes beginning at buffer[offset].

Methods defined by the Reader class :

Method Description
void mark(int numChars) Places a mark at the current point in the input stream that will
remain valid until numChars characters are read.
boolean markSupported( ) Returns true if mark( )/reset( ) are supported on this stream
int read( ) Returns an integer representation of the next available character
from the invoking input stream. –1 is returned when the end of
the file is encountered.
int read(char buffer[ ]) Attempts to read up to buffer.length characters into buffer and
returns the actual number of characters that were successfully
read. –1 is returned when the end of the file is encountered.
abstract int read(char buffer[ ], int Attempts to read up to numChars characters into buffer starting
offset, int numChars at buffer[offset], returning the number of characters successfully
read. –1 is returned when the end of the file is encountered.
boolean ready( ) Returns true if the next input request will not wait. Otherwise, it
returns false.
void reset( ) Resets the input pointer to the previously set mark.

37
long skip(long numChars) Skips over numChars characters of input, returning the number of
characters actually skipped.

Methods defined by the Writer class :

Method Description

Writer append(char ch) Appends ch to the end of the invoking outputstream. Returns a reference
to the invoking stream.
Writer append(CharSequence chars) Appends chars to the end of the invoking output stream. Returns a
reference to the invoking stream.
Writer append(CharSequence chars, int begin, int end) Appends the subrange of chars specified by begin and end–1 to the end of the invoking output stream. Returns a reference to the invoking stream.
abstract void close( ) Closes the output stream. Further write attempts will generate an
IOException.
abstract void flush( ) Finalizes the output state so that any buffers are cleared. That is, it
flushes the output buffers.
void write(int ch) Writes a single character to the invoking output stream. Note that the
parameter is an int, which allows you to call write with expressions
without having to cast them back to char.
void write(char buffer[ ]) Writes a complete array of characters to the invoking output stream.
abstract void write(char buffer[ ], Writes a subrange of numChars characters from the array buffer,
int offset, int numChars) beginning at buffer[offset] to the invoking output stream.
void write(String str) Writes str to the invoking output stream.
void write(String str, int offset, Writes a subrange of numChars characters from the string str, beginning
int numChars) at the specified offset.

Commonly used methods defined by ObjectOutputStream class :

Method Description
void flush( ) Finalizes the output state so that any buffers are cleared. That is,
it flushes the output buffers.
void write(byte buffer[ ]) Writes an array of bytes to the invoking stream.
void write(byte buffer[ ], int offset, Writes a subrange of numBytes bytes from the array buffer,
int numBytes) beginning at buffer[offset].
void write(int b) Writes a single byte to the invoking stream. The byte written is
the low-order byte of b.
void writeBoolean(boolean b) Writes a boolean to the invoking stream.
void writeByte(int b) Writes a byte to the invoking stream. The byte written is the
low-order byte of b.
void writeBytes(String str) Writes the bytes representing str to the invoking stream.
void writeChar(int c) Writes a char to the invoking stream.
void writeChars(String str) Writes the characters in str to the invoking stream.
void writeDouble(double d) Writes a double to the invoking stream.
void writeFloat(float f ) Writes a float to the invoking stream.

38
void writeInt(int i) Writes an int to the invoking stream.
void writeLong(long l) Writes a long to the invoking stream.
final void writeObject(Object obj) Writes obj to the invoking stream.
void writeShort(int i) Writes a short to the invoking stream.

Commonly used methods defined by ObjectInputStream class :

Method Description
int available( ) Returns the number of bytes that are now available in the input
buffer.
int read( ) Returns an integer representation of the next available byte of
input. –1 is returned when the end of the file is encountered.
int read(byte buffer[ ], int offset, Attempts to read up to numBytes bytes into buffer starting at
int numBytes) buffer[offset], returning the number of bytes successfully read. –
1 is returned when the end of the file is encountered.
boolean readBoolean( ) Reads and returns a boolean from the invoking stream.
byte readByte( ) Reads and returns a byte from the invoking stream.
char readChar( ) Reads and returns a char from the invoking stream.
double readDouble( ) Reads and returns a double from the invoking stream.
float readFloat( ) Reads and returns a float from the invoking stream.
int readInt( ) Reads and returns an int from the invoking stream.
long readLong( ) Reads and returns a long from the invoking stream.
final Object readObject( ) Reads and returns an object from the invoking stream.
short readShort( ) Reads and returns a short from the invoking stream.

Some of the other methods of RandomAccessFile :

Method Description

long length() Returns the length of the file in bytes.

void setLength(long len) throws IOException Sets the length of the invoking file to that specified by len.
int skipBytes(int n) Adds n to the file pointer. Returns the actual number of bytes skipped. If n is negative, no bytes are skipped.

long getFilePointer() throws Returns the current offset, in bytes, from the beginning of the
IOException file to where the next read or write occurs.

void seek(long newPos) throws Sets the current position of the file pointer within the file.
IOException newPos specifies the new position (in bytes), of the file pointer
from the beginning of the file

39
JFrame Methods:

Method Description
void setSize(int width, int height) Sets the dimensions of the JFrame window.
setVisible(boolean b) Container is made visible or not visible by setting value b
setLayout(LayoutManager lm) Sets the layout manager to the container.
add(component c) Add the component c to the container.
setDefaultCloseOperation(int op); Sets the operation the user clicks the close Box of a JFrame
window. Operation op takes the following constants.
JFrame.EXIT_ON_CLOSE
JFrame.DO_NOTHING_ON_CLOSE
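A minimal Swing sketch (window title and button label are assumed) using the JFrame methods listed above:

import java.awt.FlowLayout;
import javax.swing.JButton;
import javax.swing.JFrame;

public class FrameDemo {
    public static void main(String[] args) {
        JFrame frame = new JFrame("Demo");
        frame.setSize(300, 200);                       // width x height in pixels
        frame.setLayout(new FlowLayout());             // set the layout manager
        frame.add(new JButton("Click"));               // add a component
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);                        // show the window
    }
}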

The methods of ActionEvent object.


Method Description
String getActionCommand( ) Returns command name for the invoking ActionEvent object.
int getModifiers( ) Returns a value that indicates which modifier keys (ALT, CTRL,
META, and/or SHIFT) were pressed when the event was
generated.
long getWhen( ) Returns the time at which the event took place.

JavaFX

The JavaFX Packages


The JavaFX elements are contained in packages that begin with the javafx prefix. The most frequently used are:
javafx.application, javafx.stage, javafx.scene, and javafx.scene.layout.
The Application class of the package javafx.application is the entry point of a JavaFX application. To
create a JavaFX application, you need to inherit this class and implement its abstract method start(). In this
method, you write the entire code for the JavaFX graphics.

In the main method, you have to launch the application using the launch() method. This method internally
calls the start() method of the Application class as shown in the following program.
public class JavafxSample extends Application {
@Override
public void start(Stage primaryStage) throws Exception {
/*
Code for JavaFX application.
(Stage, scene, scene graph)
*/
}
public static void main(String args[]){
launch(args);
}
}

40
JavaFX contains rich set of controls which are listed below:
JavaFX Button Control, JavaFX CheckBox Control, JavaFX ChoiceBox Control, JavaFX ComboBox
Control, JavaFX Hyperlink Control, JavaFX Label Control,JavaFX ListView Control, JavaFX MenuBar
Control, JavaFX MenuButton Control, JavaFX PasswordField Control, JavaFX RadioButton Control,
JavaFX ScrollBar Control, JavaFX ScrollPane Control, JavaFX Slider Control, JavaFX TableView
Control, JavaFX TabPane Control, JavaFX TextArea Control, JavaFX TextField Control, JavaFX
ToggleButton Control, JavaFX TreeView Control etc.

CERT JAVA CODING STANDARD


The CERT Oracle Secure Coding Standard for Java provides rules for secure coding in the Java
programming language. The goal of these rules is to eliminate insecure coding practices that can lead to
exploitable vulnerabilities.
Rules and Recommendations:
Rules are requirements that must be followed; violating them can lead to exploitable vulnerabilities. Rules
are the requirements for conformance with the standard.
Example for the Rule:
1.Do not ignore values returned by methods
public class Replace {
    public static void main(String[] args) {
        String original = "insecure";
        original.replace('i', '9');   // return value ignored
        System.out.println(original);
    }
}
This noncompliant code example ignores the return value of the String.replace() method, failing to update
the original string. The following compliant solution correctly updates the String reference original with
the return value from the String.replace() method.
public class Replace {
    public static void main(String[] args) {
        String original = "insecure";
        original = original.replace('i', '9');
        System.out.println(original);
    }
}
Recommendations describe good practices or useful advice; they do not establish conformance requirements.
Example for the Recommendations:
2. Do not declare more than one variable per declaration
int i, j = 1;
This noncompliant code might lead a programmer or reviewer to mistakenly believe that both i and j are
initialized to 1. In the following compliant solution, it is readily apparent that both i and j are initialized to
1:
int i = 1; // Purpose of i...
int j = 1; // Purpose of j...

41
PRINCIPLES OF DATA COMMUNICATION

Thermal Noise : N = kTB


Where k = Boltzmann's constant = 1.38 × 10⁻²³ J/K
T = temperature in kelvin
B = bandwidth in hertz
N = noise power in watts

Channel Capacity

Nyquist Bandwidth: C = 2Blog2M

Here B is the bandwidth of the channel and M is the number of signal levels used to represent the data.

Shannon Capacity: C = B log2 (1+SNR)

Here B is the bandwidth of the channel in Hertz, SNR is the signal to noise ratio and C is the channel
capacity in bits per second.
Signal-to-noise ratio is often expressed in decibels: SNR_dB = 10 log10 (SNR).
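As a worked example (values assumed for illustration): for a channel with B = 3000 Hz and SNR_dB = 30 dB, SNR = 10^(30/10) = 1000, so the Shannon capacity is C = 3000 × log2(1001) ≈ 29,900 bps. Using the Nyquist formula on the same channel with M = 4 signal levels, C = 2 × 3000 × log2 4 = 12,000 bps.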

Line Coding
B8ZS:
In this line coding technique, Sequence of eight 0’s replaced by 000 + - 0 - +, if the previous pulse was
positive. Sequence of eight 0’s replaced by 000 - + 0 + -, if the previous pulse was negative.

HDB3:
This line coding technique replaces a sequence of 4 zeros by a code as per the rule given in below table.

Modulation techniques

MFSK:
Transmitted signal is given by
s_i(t) = A cos(2π f_i t), 1 ≤ i ≤ M
where
f_i = f_c + (2i − 1 − M) f_d
f_c = the carrier frequency
f_d = the difference frequency
M = number of different signal elements = 2^L
L = number of bits per signal element

42
Period of signal element
T_s = L·T_b, where T_s is the signal element period and T_b is the bit period
Minimum frequency separation
1/T_s = 2 f_d, hence 1/(L·T_b) = 2 f_d and 1/T_b = 2 L f_d (the bit rate)
MFSK signal bandwidth
W_d = M (2 f_d) = 2 M f_d

Performance
The transmission bandwidth

Here r depends on modulation and filtering process where 0<= r <=1, R is bit rate, M is number of different
signal element and L is number of bits.

Line of Sight transmission

Free space loss is


Pt/Pr = (4πd)²/λ² = (4πfd)²/c²
Where
Pt = signal power at the transmitting antenna
Pr= signal power at the receiving antenna
λ = carrier wavelength(in meters)
d=propagation distance between antennas (in meters)
c=speed of light (3 x 108 m/s)

This can be recast as


LdB = 10 log(Pt/Pr) = 20 log(4πd/λ) = −20 log(λ) + 20 log(d) + 21.98 dB
    = 20 log(4πfd/c) = 20 log(f) + 20 log(d) − 147.56 dB

For other antennas, we must take into account the gain of the antenna, which yields the following free space
loss equation:
Pt/Pr = (4π)²(d)²/(Gt Gr λ²) = (λd)²/(At Ar) = (cd)²/(f² At Ar)
Where
Gt = gain of the transmitting antenna
Gr = gain of the receiving antenna
At= effective area of transmitting antenna
Ar= effective area of receiving antenna

We can recast loss equation as


LdB=20 log(λ) + 20 log(d)-10 log(AtAr)
=-20 log(f) +20 log(d)-10 log(AtAr) +169.54 dB
The effective area of an ideal isotropic antenna is λ²/4π, with a power gain of 1; the effective area of a
parabolic antenna with a face area of A is 0.56A, with a power gain of 7A/λ².
d=3.57 √h

43
where d is the distance between an antenna and the horizon in kilometres and h is the antenna height in
meters. The effective, or radio, line of sight to the horizon is
d=3.57 √Kh

Where K is an adjustment factor to account for the refraction. A good rule of thumb id K=4/3. Thus, the
maximum distance between two antennas for LOS propagation is 3.57(√Kh1+√Kh2), where h1 and h2 are
the heights of the two antennas.
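As a worked example (antenna heights assumed): for h1 = 100 m and h2 = 10 m with K = 4/3, the maximum LOS distance is 3.57(√(4/3 × 100) + √(4/3 × 10)) ≈ 3.57(11.55 + 3.65) ≈ 54.3 km.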

Error Detection and Correction

Single-bit error correction


To correct an error, the receiver reverses the value of the altered bit. To do so, it must know which bit is in
error.
Number of redundancy bits needed
Let data bits = m
Redundancy bits = r
Total message sent = m+r
The value of r must satisfy the following relation:
2^r ≥ m + r + 1
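For example, with m = 7 data bits, r = 4 redundancy bits are required, since 2^4 = 16 ≥ 7 + 4 + 1 = 12 while 2^3 = 8 < 7 + 3 + 1 = 11; the transmitted codeword is therefore 11 bits long.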

Stop-and-Wait Flow Control


Maximum possible utilization of the link, U = 1/(1 + 2a), where a = Propagation Time / Transmission Time

Sliding Window Flow Control


Bit Length of a link,
B = R x (d/V),
where
B = length of the link in bits; this is the number of bits present on the link at an instance in
time when a stream of bits fully occupies the link
R = data rate of the link, in bps
d = length, or distance, of the link in meters
V = velocity of propagation, in m/s

For a k-bit field the range of sequence numbers is 0 through 2^k − 1 and frames are numbered modulo 2^k.
Maximum window size = 2^k − 1.

If the transmission time is normalized to one, the propagation delay can be expressed as a= B/L, where L
is the number of bits in the frame (length of the frame in bits).
Throughput (in bps) depends on ‘W’ and ‘a’, where W is the window size.
Utilization, U = 1, if W ≥ 2a + 1
U = W/(2a + 1), if W < 2a + 1
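For example (values assumed): if a = 4, a window of W = 7 gives U = 7/(2 × 4 + 1) = 7/9 ≈ 0.78, whereas any window with W ≥ 9 gives U = 1.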

HDLC

Frame format

44
Commands and Responses

45
Multiplexing

Statistical TDM Frame Formats

Random Access Protocols


Back-off time, TB = (R × Frame Transmission Time) or (R × Propagation Time), where R is a random
number between 0 and 2^K − 1, and K is the number of retransmission attempts.
Maximum number of retransmission attempts, Kmax = 15, usually.

Pure ALOHA:
Vulnerable Time = 2 x Frame Transmission Time
Throughput, S = G × e^(−2G), where G = the average number of frames generated by the system during one
frame transmission time.
Maximum Throughput, Smax = 0.184, when G = 0.5

Slotted ALOHA:
Vulnerable Time = Frame Transmission Time
Throughput, S = G × e^(−G)
Maximum Throughput, Smax = 0.368, when G = 1

CSMA
Vulnerable Time = Propagation Time

46
IEEE LAN Standards

LLC PDU Structure

Standard Ethernet: 802.3 MAC Frame

Slot time = round-trip time + time required to send the jam sequence
Relationship between max length of network and slot time
Max Length = Propagation speed * slot time / 2 [Theoretical]

Bridges
Source Routing Bridges Frame Format

FDDI
Frame Format

47
COMPUTER NETWORK PROTOCOL
Switching
Design of Cross bar Switch
To connect n inputs to m outputs in a grid using a crossbar switch requires n × m crosspoints.

Design of Multistage Switch


For a three stage switch with N input and N output, total number of crosspoints is
2kN + k(N/n)²
To design a three-stage switch, these steps should be followed:
1. Divide the N input lines into groups, each of n lines. For each group, use one crossbar of size n × k,
where k is the number of crossbars in the middle stage. In other words, the first stage has N/n
crossbars of n × k crosspoints.
2. Use k crossbars, each of size (N/n) × (N/n) in the middle stage.
3. Use N/n crossbars, each of size k × n at the third stage.
Design of Banyan Switch
For n inputs and n outputs, it has log2 n stages with n/2 microswitches at each stage.

Clos Criteria
n = (N/2)^(1/2) and k ≥ 2n − 1
Total number of crosspoints ≥ 4N[(2N)^(1/2) − 1]
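As a worked example (N assumed): for N = 200, n = √(200/2) = 10 and k ≥ 2(10) − 1 = 19, giving 2kN + k(N/n)² = 2(19)(200) + 19(20)² = 15,200 crosspoints; this equals 4N[(2N)^(1/2) − 1] = 800 × 19 = 15,200 and is far fewer than the 200 × 200 = 40,000 crosspoints of a single crossbar.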

IPv4 Addressing

Finding the address class using continuous checking

subnet mask: nsub = n + log2 s
supernet mask: nsuper = n − log2 c

Extracting Block Information in classless addressing


1. The number of addresses in the block, N = 2^(32 − n)
2. First address = (any address) AND (network mask)
3. Last address = (any address) OR [NOT (network mask)]
4. Subnet mask, nsub = n + log2 (N/Nsub)

48
IPv6 Addressing

Prefixes for IPv6 Addresses

IPv4 Datagram

IP Datagram

49
Service Type

Protocols field value

Flags field
D : Do not fragment if value is 1.
M : If value is 1 then the fragment is not the last fragment.

Options

No operation option and End-of-option option

Record-route option

50
Strict-source-route option

Loose-source-route option

Timestamp option

Flag value is 0: each router adds only its timestamp in the provided field.
Flag value is 1: each router must add its outgoing IP address and the timestamp.
Flag value is 3: the IP addresses are prespecified by the source; a router inserts its timestamp only if its own address matches the next prespecified address.
Unicast Routing Protocols

Routing Information Protocol (RIP) v1 Message Format:

51
Routing Information Protocol (RIP) v2 Message Format:

RIPv2 Authentication:

OSPF

Types of OSPF Packets

OSPF Common Header (Version 2)

Authentication Type: 0 for None and 1 for Password.

52
Link State Update Packet

LSA General Header

E Flag (1 bit): 1 means Stub Area


T Flag (1 bit): 1 means router can handle multiple TOS
Link State Type: Router link (1), Network link (2), Summary link to network (3), Summary link to AS
boundary router (4), and External link (5).

Router Link LSA

Link ID, Link Type and Link Data:

Network Link Advertisement Format

53
Summary Link to Network LSA

Summary Link to AS Boundary Router LSA

External Link LSA

Hello Packet

E Flag (1 bit): 1 means Stub Area


T Flag (1 bit): 1 means router can handle multiple TOS

54
Database Description Packet

E Flag: 1 if the advertising router is an autonomous boundary router.


B Flag: 1 if the advertising router is an area border router.
I Flag: 1 if the message is the first message.
M Flag: 1 if this is not the last message.
M/S Flag: Master =1, Slave = 0

Link State Request Packet

Link State Acknowledgement Packet

BGP

BGP Packet Header

55
Open Message

Version = 4

Update Message

Keepalive Message

Notification Message

56
Error Codes

ARP

ARP packet

Hardware Type :16 bits ; Protocol Type :16 bits ; Hardware length :8 bits ; Protocol length : 8 bits; Operation
: 16 bits

Encapsulation of ARP packet

57
ICMP

ICMP Messages

General Message format of ICMP message

Destination Unreachable message format

Code 0. The network is unreachable, possibly due to hardware failure.


Code 1. The host is unreachable. This can also be due to hardware failure.
Code 2. The protocol is unreachable.
Code 3. The port is unreachable.
Code 4. Fragmentation is required, but the DF (do not fragment) field of the datagram has been set.
Code 5. Source routing cannot be accomplished.
Code 6. The destination network is unknown.
Code 7. The destination host is unknown.
Code 8. The source host is isolated.
Code 9. Communication with the destination network is administratively prohibited.
Code 10. Communication with the destination host is administratively prohibited.
Code 11. The network is unreachable for the specified type of service.
Code 12. The host is unreachable for the specified type of service.
Code 13. The host is unreachable because the administrator has put a filter on it.
Code 14. The host is unreachable because the host precedence is violated.
Code 15. The host is unreachable because its precedence was cut off.

58
Source Quench format

Time Exceeded message format

Parameter-problem message format

Code 0. There is an error or ambiguity in one of the header fields


Code 1. The required part of an option is missing.

Redirection message format

Code 0. Redirection for a network-specific route.


Code 1. Redirection for a host-specific route.
Code 2. Redirection for a network-specific route based on a specified type of service.
Code 3. Redirection for a host-specific route based on a specified type of service.

Query Messages

Echo request – reply messages

Timestamp request reply messages

59
Calculations :
Sending time = receive timestamp – original timestamp
Receiving time = returned time – transmit timestamp
Round-trip time = sending time + receiving time
Time difference = receive timestamp – (original timestamp field + one way time duration)

UDP

User Datagram format

Pseudoheader for checksum calculation

TCP

Well Known Ports

60
Segment Structure

RTT

Smoothed RTT, RTTS:


After the first measurement: RTTS = RTTM, where RTTM means Measured RTT
After each subsequent measurement: RTTS = (1 − α) × RTTS + α × RTTM
Generally, α = 1/8

RTT Deviation, RTTD:


After the first measurement: RTTD = RTTM/2
After each subsequent measurement: RTTD = (1 − β) × RTTD + β × |RTTS − RTTM|
Generally, β = 1/4

Retransmission Time-out (RTO):


After any measurement: RTO = RTTS + 4 × RTTD
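As a worked example (values assumed): suppose RTTS = 100 ms, RTTD = 10 ms, and a new measurement gives RTTM = 120 ms. Then RTTS = (7/8)(100) + (1/8)(120) = 102.5 ms, RTTD = (3/4)(10) + (1/4)|100 − 120| = 12.5 ms (taking the previous RTTS in the deviation term), and RTO = 102.5 + 4 × 12.5 = 152.5 ms.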

Options
1. End-of-option: All zeros
2. No-operation option: Last bit is 1.
3. Maximum-segment-size option

4. Window-scale-factor option

New window size = window size defined in header × 2^(window scale factor)
5. Timestamp option

61
6. SACK

DNS
Generic domains

62
DHCP

DHCP Packet Format

Flag Format

Option Format

63
Tables for List of Options

Options with tag 53

64
DHCP Client Transition Diagram

TELNET

NVT control characters

65
SNMP data type

Object identifier

66
SNMP PDU

67
DATABASE SYSTEM

ORACLE SQL COMMANDS

SELECT Statement
SELECT [DISTINCT] {*, column [alias],...}
FROM table
[WHERE condition(s)]
[ORDER BY {column, exp, alias} [ASC|DESC]]

Cartesian Product
SELECT table1.*, table2.*,[...]
FROM table1,table2[,...]

Natural join
SELECT table1.*,table2.*
FROM table1 NATURAL JOIN table2

Inner join
SELECT table1.*,table2.*
FROM table1 INNER JOIN table2
ON table1.column = table2.column

Join Types
inner join
left outer join
right outer join
full outer join

Join Conditions
natural
on <predicate>
using (A1, A2, …)

Aggregation Selecting
SELECT [column,] group_function(column)
FROM table
[WHERE condition]
[GROUP BY group_by_expression]
[HAVING group_condition]
[ORDER BY column] ;

Group function
AVG([DISTINCT|ALL]n)
COUNT(*|[DISTINCT|ALL]expr)
MAX([DISTINCT|ALL]expr)
MIN([DISTINCT|ALL]expr)
SUM([DISTINCT|ALL]n)

68
Subquery
SELECT select_list
FROM table
WHERE expr operator(SELECT select_list FROM table);

single-row comparison operators


= > >= < <= <>
multiple-row comparison operators
IN ANY ALL

Multiple-column Subqueries
SELECT column, column, ...
FROM table
WHERE (column, column, ...) IN
(SELECT column, column, ...
FROM table WHERE condition) ;

View Definition
create view v as <query expression>;

Manipulating Data
INSERT Statement(one row)
INSERT INTO table [ (column [,column...])]
VALUES (value [,value...]) ;

INSERT Statement with Subquery


INSERT INTO table [ column(, column) ]
subquery ;

UPDATE Statement
UPDATE table
SET column = value [, column = value,...]
[WHERE condition] ;

Updating with Multiple-column Subquery


UPDATE table
SET (column, column,...) =
(SELECT column, column,...
FROM table
WHERE condition)
WHERE condition ;

Deleting Rows with DELETE Statement


DELETE [FROM] table
[WHERE condition] ;

Deleting Rows Based on Another Table


DELETE FROM table
WHERE column = (SELECT column
FROM table
WHERE condition) ;

69
Transaction Control Statements
COMMIT ;
SAVEPOINT name ;
ROLLBACK [TO SAVEPOINT name] ;

CREATE TABLE Statement


CREATE TABLE [schema.]table
(column datatype [DEFAULT expr] [,...]) ;

Datatype
VARCHAR2(size) CHAR(size) NUMBER(p,s) DATE
LONG CLOB RAW LONG RAW
BLOB BFILE

ALTER TABLE Statement (Add columns)


ALTER TABLE table
ADD (column datatype [DEFAULT expr]
[, column datatype]...) ;

Changing a column’s type, size and default of a Table


ALTER TABLE table
MODIFY (column datatype [DEFAULT expr]
[, column datatype]...) ;

Dropping a Table
DROP TABLE table ;

Changing the Name of an Object


RENAME old_name TO new_name ;

Truncating a Table
TRUNCATE TABLE table ;

Defining Constraints
CREATE TABLE [schema.]table
(column datatype [DEFAULT expr][NOT NULL]
[column_constraint],...
[table_constraint][,...]) ;

Column constraint level


column [CONSTRAINT constraint_name] constraint_type,

Constraint_type
PRIMARY KEY
REFERENCES table(column)
UNIQUE
CHECK (condition)

Table constraint level(except NOT NULL)


column,...,[CONSTRAINT constraint_name]
constraint_type (column,...),

70
NOT NULL Constraint (Only Column Level)
CONSTRAINT table[_column...]_nn NOT NULL ...

UNIQUE Key Constraint


CONSTRAINT table[_column..]_uk UNIQUE (column[,...])

PRIMARY Key Constraint


CONSTRAINT table[_column..]_pk PRIMARY KEY (column[,...])

FOREIGN Key Constraint


CONSTRAINT table[_column..]_fk
FOREIGN KEY (column[,...])
REFERENCES table (column[,...])[ON DELETE CASCADE]

CHECK constraint
CONSTRAINT table[_column..]_ck CHECK (condition)

Adding a Constraint(except NOT NULL)


ALTER TABLE table
ADD [CONSTRAINT constraint_name ] type (column) ;

Adding a NOT NULL constraint


ALTER TABLE table
MODIFY (column datatype [DEFAULT expr]
[CONSTRAINT constraint_name_nn] NOT NULL) ;

Dropping a Constraint
ALTER TABLE table
DROP CONSTRAINT constraint_name ;
ALTER TABLE table
DROP PRIMARY KEY | UNIQUE (column) |
CONSTRAINT constraint_name [CASCADE] ;

THE ENTITY-RELATIONSHIP MODEL

Entity-relationship (E-R) data model is a widely used data model for database design. It provides a
convenient graphical representation to view data, relationships, and constraints.

The E-R model is intended primarily for the database-design process. It was developed to facilitate database
design by allowing the specification of an enterprise schema. Such a schema represents the overall logical
structure of the database. This overall structure can be expressed graphically by an E-R diagram.

An entity is an object that exists in the real world and is distinguishable from other objects. We express the
distinction by associating with each entity a set of attributes that describes the object.

A relationship is an association among several entities. A relationship set is a collection of relationships


of the same type, and an entity set is a collection of entities of the same type.

71
The terms superkey, candidate key, and primary key apply to entity and relationship sets as they do for
relation schemas. Identifying the primary key of a relationship set requires some care, since it is composed
of attributes from one or more of the related entity sets.

Mapping cardinalities express the number of entities to which another entity can be associated via a
relationship set.

An entity set that does not have sufficient attributes to form a primary key is termed a weak entity set. An
entity set that has a primary key is termed a strong entity set.

The various features of the E-R model offer the database designer numerous choices in how to best represent
the enterprise being modeled. Concepts and objects may, in certain cases, be represented by entities,
relationships, or attributes. Aspects of the overall structure of the enterprise may be best described by using
weak entity sets, generalization, specialization, or aggregation. Often, the designer must weigh the merits
of a simple, compact model versus those of a more precise, but more complex, one.

Specialization and generalization define a containment relationship between a higher-level entity set and
one or more lower-level entity sets. Specialization is the result of taking a subset of a higher-level entity set
to form a lower-level entity set. Generalization is the result of taking the union of two or more disjoint
(lower-level) entity sets to produce a higher-level entity set.
The attributes of higher-level entity sets are inherited by lower-level entity sets.

Aggregation is an abstraction in which relationship sets (along with their associated entity sets) are treated
as higher-level entity sets, and can participate in relationships.

72
Symbols used in E-R Notations :

73
Alternative E-R Notations:

RELATIONAL ALGEBRA

Fundamental Operations:
1. Select (σ)
Notation: σ_p(r) where p is the required condition/predicate on the relation r

2. Project (Π)
Notation: Π_{A1, A2, …, Ak}(r) where A1, A2, …, Ak are attributes of relation r

3. Union (∪)
Notation: r ∪ s where r and s are the two relations

4. Set difference (–)
Notation: r – s where r and s are the two relations

5. Cartesian product (×)
Notation: r × s where r and s are the two relations

6. Rename (ρ)
Notation: ρ_x(E) where the expression E is returned under the name x

74
Additional Operations:
1. Set Intersection (∩)
Notation: r ∩ s where r and s are two relations
2. Natural Join (⋈)
Notation: r ⋈ s where r and s are two relations
3. Assignment (←)
4. Outer Join
a. Left Outer Join (⟕)
b. Right Outer Join (⟖)
c. Full Outer Join (⟗)
5. Division (÷)
Notation: r ÷ s where r and s are two relations

Extended Operations:
1. Generalized Projection Π_{F1, F2, …, Fn}(E)
Where Fi and so on are arithmetic expressions involving constants and attributes in the schema of E
2. Aggregate Functions G1, G2, …, Gn 𝒢 F1(A1), F2(A2), …, Fn(An) (E)
Where Gi and so on are the groups on which the aggregate functions Fi and so on are applied for a
given attribute Ai of the relational expression E

NORMALIZATION

1NF
A relational schema R is in first normal form(1NF) if the domains of all attributes of R are atomic

Functional Dependency
Let R be a relation schema and let α ⊆ R and β ⊆ R.
The functional dependency α → β holds on R if and only if for any legal relation r(R), whenever any two
tuples t1 and t2 of r agree on the attributes α, they also agree on the attributes β. That is,
t1[α] = t2[α] ⇒ t1[β] = t2[β]

Closure of functional Dependency, F+


The set of all functional dependencies that can be inferred given the set F. F+ contains all of the functional
dependencies in F.

BCNF
A relation schema R is in BCNF with respect to a set F of functional dependencies if for all functional
dependencies in F+ of the form α → β, where α ⊆ R and β ⊆ R, at least one of the following holds:
α → β is trivial (i.e., β ⊆ α)
α is a super key for R

3NF
A relation schema R is in third normal form (3NF) if for all α → β in F+ at least one of the following holds:
α → β is trivial (i.e., β ⊆ α)
α is a super key for R
Each attribute A in β – α is contained in a candidate key for R.
(NOTE: each attribute may be in a different candidate key)

75
Armstrong’s Axioms:
if β ⊆ α, then α → β (reflexivity)
if α → β, then γα → γβ (augmentation)
if α → β, and β → γ, then α → γ (transitivity)

Additional rules:
If α → β holds and α → γ holds, then α → βγ holds (union)
If α → βγ holds, then α → β holds and α → γ holds (decomposition)
If α → β holds and γβ → δ holds, then αγ → δ holds (pseudo transitivity)

Extraneous Attribute
Consider a set F of functional dependencies and the functional dependency α → β in F.
Attribute A is extraneous in α if A ∈ α
and F logically implies (F – {α → β}) ∪ {(α – A) → β}.
Attribute A is extraneous in β if A ∈ β and the set of functional dependencies
(F – {α → β}) ∪ {α → (β – A)} logically implies F.

Canonical cover, Fc
Canonical Cover, Fc for F is a set of dependencies Fc such that
F logically implies all dependencies in Fc, and
Fc logically implies all dependencies in F, and
No functional dependency in Fc contains an extraneous attribute, and
Each left side of functional dependency in Fc is unique.

Lossless Join Decomposition


A decomposition of R into R1 and R2 is lossless join if at least one of the following dependencies is in F+:
R1 ∩ R2 → R1
R1 ∩ R2 → R2
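For example (schema assumed): let R = (A, B, C) with F = {A → B}, decomposed into R1 = (A, B) and R2 = (A, C). Here R1 ∩ R2 = {A}, and A → B implies A → AB, i.e., R1 ∩ R2 → R1, so the decomposition is lossless; it is also dependency preserving because A → B lies entirely within R1.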

Restriction F to Ri
Let F be a set of functional dependencies on a schema R, and let R1, R2, …...Rn be a decomposition of R.
The restriction of F to Ri is the set Fi of all functional dependencies in F+ that include only attributes of Ri
.
Dependency Preserving
Let F be a set of functional dependencies on a schema R, and let R1, R2, …...Rn be a decomposition of R.
The restriction of F to R1, R2, …...Rn is the set F1, F2 …. Fn . The decomposition is dependency
preserving, if (F1 ∪ F2 ∪ … ∪ Fn)+ = F+

Multivalued Dependency (MVD)
Let R be a relation schema and let α ⊆ R and β ⊆ R. The multivalued dependency α →→ β holds on R if,
in any legal relation r(R), for all pairs of tuples t1 and t2 in r such that t1[α] = t2[α], there exist tuples t3 and
t4 in r such that:
t1[α] = t2[α] = t3[α] = t4[α]
t3[β] = t1[β]
t3[R – β] = t2[R – β]
t4[β] = t2[β]
t4[R – β] = t1[R – β]

76
MVD Rules
If α → β, then α →→ β (every functional dependency is also a multivalued dependency)

Closure of Multivalued Dependency, D+


The closure D+ of D is the set of all functional and multivalued dependencies logically implied by D.

Fourth Normal Form, 4NF


A relation schema R is in 4NF with respect to a set D of functional and multivalued dependencies
if for all multivalued dependencies in D+ of the form α →→ β, where α ⊆ R and β ⊆ R, at least one of
the following hold:
α →→ β is trivial (i.e., β ⊆ α or α ∪ β = R)
α is a superkey for schema R

Restriction of D to Ri
The restriction of D to Ri is the set Di consisting of
All functional dependencies in D+ that include only attributes of Ri
All multivalued dependencies of the form
α →→ (β ∩ Ri)
where α ⊆ Ri and α →→ β is in D+

PROCEDURAL LANGUAGE

Procedures :

A procedure is a module performing one or more actions; it does not need to return any values. The syntax
for creating a procedure is as follows:

CREATE OR [REPLACE] PROCEDURE name


[(parameter[, parameter, ...])]
AS
[local declarations]
BEGIN
executable statements
[EXCEPTION
exception handlers]
END [name];

A procedure may have 0 to many parameters.


Every procedure has two parts:
1. The header portion, which comes before AS (sometimes you will see IS—they are
interchangeable), keyword (this contains the procedure name and the parameter list),
2. The body, which is everything after the AS keyword.

The word REPLACE is optional. When the word REPLACE is not used in the header of the procedure, in
order to change the code in the procedure, it must be dropped first and then re-created.
There are three types of parameter modes: IN, OUT, and IN OUT. Modes specify whether the parameter
passed is read in or a receptacle for what comes out. IN passes value into the procedure, OUT passes
back from the procedure and INOUT does both.

77
In order to execute a procedure in SQLPlus use the following syntax:
EXECUTE Procedure_name (list of parameters);

Functions:

The syntax for creating a function is as follows:

CREATE [OR REPLACE] FUNCTION function_name (parameter list)


RETURN datatype
IS BEGIN
<body>
RETURN (return_value);
END;
/
The significant difference between a procedure and a function is that a function is a PL/SQL block
that returns a single value. A function is invoked as part of an expression, for example from within an anonymous block.

PL/SQL anonymous block:

DECLARE
[Local Variable declaration section]
BEGIN
Execution Section
[EXCEPTION
Exception Section ]
END ;

PL/SQL Control Structures:

IF-THEN-ELSIF Statement

IF condition1 THEN
sequence_of_statements1
ELSIF condition2 THEN
sequence_of_statements2
ELSE
sequence_of_statements3
END IF;

CASE Statement

CASE selector variable/Expression


WHEN expression1 THEN sequence_of_statements1;
WHEN expression2 THEN sequence_of_statements2;
...
WHEN expressionN THEN sequence_of_statementsN;
[ELSE sequence_of_statementsN+1;]
END CASE;

78
LOOP

LOOP
sequence_of_statements
EXIT WHEN boolean_expression;
-- Instead of EXIT WHEN, use of EXIT statement forces a loop to complete
unconditionally.
END LOOP;

WHILE-LOOP

WHILE condition LOOP


sequence_of_statements
END LOOP;
FOR-LOOP
FOR counter IN [REVERSE] lower_bound..higher_bound LOOP
sequence_of_statements
END LOOP;

Using the %TYPE Attribute:


The %TYPE attribute provides the datatype of a variable or database column.
DECLARE
my_empno employees.employee_id%TYPE;
BEGIN
my_empno := NULL;
END;

Using the %ROWTYPE Attribute:


The %ROWTYPE attribute provides a record type that represents a row in a table (or view).
DECLARE
-- %ROWTYPE can include all the columns in a table.
EMP_REC employees%ROWTYPE;
BEGIN
-- EMP_REC can hold a row from the EMPLOYEES table.
SELECT * INTO EMP_REC FROM employees WHERE Emp_Id = 10 ;
END;

User-defined Exceptions:

DECLARE
my_exception EXCEPTION;

Example:

DECLARE
ex_invalid_id EXCEPTION;
BEGIN
IF <cond> THEN
RAISE ex_invalid_id;

79
ELSE
…..
END IF;

EXCEPTION
WHEN ex_invalid_id THEN
dbms_output.put_line('ID must be greater than zero!');
END;

Triggers:
A database trigger is a stored PL/SQL program unit associated with a specific database table. ORACLE
executes (fires) a database trigger automatically when a given SQL operation (like INSERT,
UPDATE or DELETE) affects the table. Unlike a procedure, or a function, which must be invoked
explicitly, database triggers are invoked implicitly.
A database trigger has three parts: A triggering event, an optional trigger constraint, and a trigger
action. When an event occurs, a database trigger is fired, and a predefined PL/SQL block will perform
the necessary action.

Syntax:
CREATE [OR REPLACE] TRIGGER trigger_name
{BEFORE|AFTER} triggering_event ON table_name
[FOR EACH ROW]
DECLARE

Declaration statements
BEGIN
Executable statements
EXCEPTION
Exception-handling statements
END;

The trigger_name references the name of the trigger. BEFORE or AFTER specify when the trigger is
fired (before or after the triggering event). The triggering_event references a DML statement issued
against the table (e.g., INSERT,DELETE, UPDATE). The table_name is the name of the table
associated with the trigger. A trigger may be a ROW or STATEMENT type. If the statement FOR
EACH ROW is present in the CREATE TRIGGER clause of a trigger, the trigger is a row trigger. The
clause, FOR EACH ROW, specifies a trigger is a row trigger and fires once for each modified row. A
statement trigger, however, is fired only once for the triggering statement, regardless of the number of
rows affected by the triggering statement

Enabling, Disabling, Dropping Triggers:


SQL>ALTER TRIGGER trigger_name DISABLE; SQL>ALTER
TABLE table_name DISABLE ALL TRIGGERS;
SQL>ALTER TABLE table_name ENABLE trigger_name; SQL>
ALTER TABLE table_name ENABLE ALL TRIGGERS;
SQL> DROP TRIGGER trigger_name

80
Cursor:
DECLARE
CURSOR <cursor_name> IS <SELECT statement> ;
<cursor_variable declaration> ;
BEGIN
OPEN <cursor_name>;
FETCH <cursor_name> INTO <cursor_variable>;
// Process the records
CLOSE <cursor_name>;
END;

Packages in PL/SQL:
Packages are schema objects that groups logically related PL/SQL types, variables, and subprograms.
A package will have two mandatory parts:
1. Package specification:
Example
CREATE PACKAGE cust_sal AS
PROCEDURE find_sal(c_id customers.id%type);
END cust_sal; /
2. Package body or definition:
Example
CREATE OR REPLACE PACKAGE BODY cust_sal AS
PROCEDURE find_sal(c_id customers.id%TYPE) IS
c_sal customers.salary%TYPE;
BEGIN
SELECT salary INTO c_sal
FROM customers
WHERE id = c_id;
dbms_output.put_line('Salary: '|| c_sal);
END find_sal;
END cust_sal;
/

Usage of Package Elements


package_name.element_name;

81
DESIGN AND ANALYSIS OF ALGORITHMS
Properties of Logarithms
1. log_a 1 = 0
2. log_a a = 1
3. log_a x^y = y log_a x
4. log_a (xy) = log_a x + log_a y
5. log_a (x/y) = log_a x − log_a y
6. a^(log_b x) = x^(log_b a)
7. log_a x = (log_b x)/(log_b a) = log_a b · log_b x

Combinatorics
1. Number of permutations of an n-element set: P(n) = n!
2. Number of k-combinations of an n-element set: C(n, k) = n!/(k!(n − k)!)
3. Number of subsets of an n-element set: 2^n

Important Summation Formula


1. Σ_{i=l}^{u} 1 = 1 + 1 + ⋯ + 1 = u − l + 1 (l, u are integer limits, l ≤ u); Σ_{i=1}^{n} 1 = n
2. Σ_{i=1}^{n} i = 1 + 2 + 3 + ⋯ + n = n(n+1)/2 ≈ (1/2)n²
3. Σ_{i=1}^{n} i² = 1² + 2² + ⋯ + n² = n(n+1)(2n+1)/6 ≈ (1/3)n³
4. Σ_{i=1}^{n} i^k = 1^k + 2^k + ⋯ + n^k ≈ n^(k+1)/(k+1)
5. Σ_{i=0}^{n} a^i = 1 + a + ⋯ + a^n = (a^(n+1) − 1)/(a − 1) (a not equal to 1); Σ_{i=0}^{n} 2^i = 2^(n+1) − 1
6. Σ_{i=1}^{n} i·2^i = 1·2 + 2·2² + ⋯ + n·2^n = (n − 1)·2^(n+1) + 2
7. Σ_{i=1}^{n} 1/i = 1 + 1/2 + ⋯ + 1/n ≈ ln n + c, where c ≈ 0.5772... (Euler's constant); this is the harmonic number
8. Σ_{i=0}^{∞} 1/2^i = 1 + 1/2 + 1/4 + ⋯ = 2 (geometric series)
9. Σ_{i=1}^{n} lg i ≈ n lg n

Sum Manipulation Rules


1. Σ_{i=l}^{u} c·a_i = c · Σ_{i=l}^{u} a_i

2. Σ_{i=l}^{u} (a_i ± b_i) = Σ_{i=l}^{u} a_i ± Σ_{i=l}^{u} b_i

3. Σ_{i=l}^{u} a_i = Σ_{i=l}^{m} a_i + Σ_{i=m+1}^{u} a_i, where l ≤ m < u

4. Σ_{i=l}^{u} (a_i − a_{i−1}) = a_u − a_{l−1}

Approximation of a Sum by a Definite Integral


1. ∫_{l−1}^{u} f(x) dx ≤ Σ_{i=l}^{u} f(i) ≤ ∫_{l}^{u+1} f(x) dx for a non-decreasing f(x)
2. ∫_{l}^{u+1} f(x) dx ≤ Σ_{i=l}^{u} f(i) ≤ ∫_{l−1}^{u} f(x) dx for a non-increasing f(x)

Miscellaneous
1. n! ≈ √(2πn) · (n/e)^n as n → ∞ (Stirling's formula)

82
2. Modular arithmetic (n, m are integers, p is a positive integer)
(n + m) mod p = ((n mod p) + (m mod p)) mod p
(n × m) mod p = ((n mod p) × (m mod p)) mod p

Asymptotic Inequality
Constant C < log n < √n < n < n log n < n² < n³ < . . . < 2^n < n! < n^n

Arithmetic Progression
Sum of first n terms of a arithmetic series: a, a+d, a + 2d, ….. , a + (n-1)d is n/2 [2a + (n-1)d]

Geometric Progression
Sum of first n terms of a geometric series a, ar, ar², …, ar^(n−1) is
a(r^n – 1) / (r – 1) if r > 1
a(1 – r^n) / (1 – r) if r < 1

Strassen’s Matrix Multiplication


C = A*B then
D = A1 (B2 – B4)
E = A4 (B3 – B1)
F = (A3 + A4) B1
G = (A1 + A2) B4
H = (A3 – A1) (B1 + B2)
I = (A2 – A4) (B3 + B4)
J = (A1 + A4) (B1 + B4)
C1 = E + I + J – G
C2 = D + G
C3 = E + F
C4 = D + H + J – F

Master’s Theorem

If T(n) = a·T(n/b) + g(n) for n > 1, and T(1) for n = 1, where n = b^k, then
T(n) = n^(log_b a) [ T(1) + f(n) ], where h(n) = g(n)/n^(log_b a) and f(n) = Σ_{j=1}^{k} h(b^j)
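As a worked example: for T(n) = 2T(n/2) + n with T(1) = 1 and n = 2^k, we have a = b = 2, so n^(log_b a) = n, h(n) = n/n = 1, f(n) = Σ_{j=1}^{k} 1 = k = log2 n, and therefore T(n) = n(1 + log2 n) ∈ Θ(n log n).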

Space Requirement

Type Name 32–bit Size 64–bit Size


char 1 byte 1 byte
short 2 bytes 2 bytes
int 4 bytes 4 bytes
long 4 bytes 8 bytes
long long 8 bytes 8 bytes
float 4 bytes 4 bytes
double 8 bytes 8 bytes
long double 16 bytes 16 bytes
near pointer 2 bytes 2 bytes
far pointer 4 bytes 4 bytes

83
EMBEDDED SYSTEM

CPSR Structure

M[4:0] Mode bits V Overflow


T Thumb state C Carry
F FIQ (Fast Interrupt Request) Z Zero
I Interrupt N Negative

ARM Instruction format

[label] mnemonic [operands] [;comment]


Brackets indicate that the field is optional and not all lines have them. Brackets should not be typed in.
Regarding the above format, the following points should be noted:
1. The label field allows the program to refer to a line of code by name. The label field cannot exceed a
certain number of characters. Check your assembler for the rule.
2. The Assembly language mnemonic (instruction) and operand(s) fields together perform the real work of
the program and accomplish the tasks for which the program was written. In Keil uVision, the mnemonic
must be typed at least one tab space from the left margin.

Conditional Execution Mnemonics

84
SRAM Bit-Addressable Memory Region
The bit-band SRAM addresses 0x20000000 to 0x200FFFFF (1M bytes) is given alias addresses of
0x22000000 to 0x23FFFFFF.

GPIO extension connectors:

CNA

CNB

Short JP4(1, 2) to use CNB pin 8.

85
CNC

Short JP4(2, 3) to use CNC pin 9.

CND

Seven Segment Display Interface

U11 U10 U9
Hex codes for displaying hex digits U8
from 0 to F

0 0x3F 4 0x66 8 0x7F C 0x39


1 0x06 5 0x6D 9 0x6F d 0x5E
2 0x5B 6 0x7D A 0x77 E 0x79
3 0x4F 7 0x07 b 0x7C F 0x71

86
Control and Display commands for LCD
S. No. Hex Code Command to LCD Instruction Register
1 01 Clear Display
2 02 Return Home
3 04 Decrement Cursor (Shift cursor to left)
4 06 Increment Cursor (Shift Cursor to right)
5 05 Shift Display right
6 07 Shift Display left
7 08 Display Off, Cursor Off
8 0A Display Off, Cursor On
9 0C Display On, Cursor Off
10 0F Display On, Cursor Blinking
11 10 Shift cursor position to left
12 14 Shift cursor position to right
13 18 Shift the entire display to the left
14 1C Shift the entire display to the right
15 80 Force the cursor to the beginning of 1st line
16 C0 Force the cursor to the beginning of 2nd line
17 33 and 32 To initialize the lcd to 4-bit mode
18 28 Function set (Data length = 4-bit; Number of display lines=2;
Display format=5*7 Dot matrix)

ADC
Va = (Vref / 2^12) × Dval
Where Va is the analog input, Vref is the reference voltage and Dval is the digital output.
Power control bit for ADC in PCONP register is 12. Its value on reset is zero.
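For example (values assumed for a 12-bit ADC): with Vref = 3.3 V and a conversion result Dval = 1024, Va = (3.3/4096) × 1024 = 0.825 V.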

DAC
VAOUT = (VALUE * (VREFP-VREFN)/1024) + VREFN
If VREFN = 0,
VAOUT = VALUE * VREF/1024
where VALUE is the 10-bit digital value which is to be converted into its analog counterpart and V REF is
the input reference voltage.

PWM and Timer


TPCLK = 1/PCLKHz, where PCLK is the peripheral clock.
TRES = (PR+1)/PCLKHz, where TRES is the timer resolution and PR is the Prescale Register value.
PR = (PCLKHz * TRES) – 1
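For example (clock value assumed): with PCLK = 25 MHz and a desired timer resolution TRES = 1 µs, PR = (25,000,000 × 1 × 10⁻⁶) − 1 = 24.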

Stepper motor
Steps per full rotation=Number of rotor teeth * Number of stators

UART
UART0/2/3 baud rate can be calculated as (n = 0/2/3):

87
Where DivAddVal & MulVal are part of “Fractional Rate Divider” or “Baud-Prescaler” which is used in
Baud-Rate generation.

Fractional Divider setting look-up table

88
FORMAL LANGUAGES AND AUTOMATA THEORY

Languages, Grammars and Automata


Alphabet: Finite nonempty set Σ of symbols, called the alphabet.
Examples: ∑ = {a, b}, ∑ = {0, 1}, and ASCII Characters
Strings: Finite sequence of symbols from the alphabet.
Example: If the alphabet is ∑ = {a, b}, then aab, & bbaba are strings on ∑.
We use lowercase letters a, b, c,… for elements of Σ & u, v, w… for string names
String Operations:
Concatenation: The concatenation of two strings w and v is the string obtained by appending the symbols
of v to the right end of w.
Example: If w = a1a2…an and v = b1b2…bm, then concatenation of w and v, given by
wv = a1a2…anb1b2…bm
Reverse: The reverse of a string is obtained by writing the symbols in reverse order
Example: The reverse of w is given by
wR = an….a2a1
String Length: The length of a string w, denoted by |w|, is the number of symbols in the string
Example: The length string w is |w| = n
Empty string: A string with no symbols; it is denoted by λ. The length of the empty string is |λ| = 0
Substring: Any string of consecutive symbols in some string w is said to be a substring of w.
Prefix and suffix of a String: If w = uv, then the substrings v and u are said to be a prefix and a suffix of
w, respectively.
wn operation: If w is a string, then wn stands for the string obtained by repeating w n times. As a special
case, we define w0 = λ and w1 = w.
Star-closure (*) operation (zero or more): If ∑ is alphabet. Then ∑* denote the set of strings obtained by
concatenating zero or more symbols from ∑.
Example: If ∑ = {a, b}, then ∑* = {λ, a, b, aa, ab, ba, aaa, aab, …}
Positive-closure (+) operation (one or more): The set of all possible strings from alphabet ∑ except . It is
given by ∑+ = ∑* − {λ}.

Languages
A formal language is a set of strings over a finite alphabet.
In other words a language is any subset of ∑*.
Example: If ∑ = {a, b}, then ∑* = {λ, a, b, aa, ab, ba, aaa, aab, …}. The set {a, aa, aab} is a language on
∑. The set L = {a^n b^n : n ≥ 0} is also a language on ∑.
Sentence: A string in a language L will be called a sentence of L.
Since languages are sets, the union, intersection, and difference of two languages are simultaneously
defined.
Complement: The complement of a language is defined with respect to ∑*; that is, the complement of L is
L̄ = Σ* − L
Reverse: The reverse of a language is the set of all strings reversals, that is,
LR = { wR : w ∈ L}
Concatenation: The concatenation of two languages L1 and L2 is the set of all strings obtained by
concatenating any element of L1 with any element of L2. It is given by
L1L2 = {xy: x ∈ L1, y ∈ L2 }
Ln operation: We define Ln as L concatenated with itself n times, with special cases
L0 = {} and L1 = L for every language L
Star closure and Positive closure: We define the star-closure of a language as

L* = L0 U L1 U L2…
and the positive closure as
L+ = L1 U L2…

Grammars
Definition: Mathematically, a grammar G is defined as a quadruple:
G = (V, T, S, P )
Where V is a finite set of objects called variables
T is a finite set of objects called terminal symbols
S ∈ V is a special symbol called the Start symbol
P is a finite set of productions or "production rules"
Sets V and T are nonempty and disjoint
All the production rules have the form:
x → y
where x is an element of (V ∪ T)+ and y is in (V ∪ T)*

Definition: Let G = (V, T, S, P) be a grammar. Then the set
L(G) = {w ∈ T* : S ⇒* w}
is the language generated by G.

Automata
An automaton is an abstract model of a digital computer
An automaton has (Refer Figure 1 below)
– Input file
– Control unit (with finite states)
– Temporary storage
– Output

Figure 1

Finite Automata
Finite automata (FA) consists of a finite set of states and a set of transitions from one state to
another state that occur on input symbols from an alphabet
Two types of Finite Automata:
Deterministic Finite automata (DFA): Has unique transition from one state to another state
Nondeterministic Finite Automata (NFA): May have several possible transitions from one state to
another state and may have λ-transitions

Deterministic Finite automata (DFA):
A DFA is mathematically defined by a quintuple M = (Q, ∑, 𝛿, q0, F)
where, Q is a finite set of states
∑ is a finite set of input symbols called the input alphabet
δ: Q × ∑ → Q is the transition function
q0 ∈ Q is the initial state or start state
F ⊆ Q is a set of final states or accepting states

Definition: The language accepted by a DFA M = (Q, ∑, 𝛿, q0, F) is the set of all strings on Σ
accepted by M. That is
LM   w  * :  * q0 , w  F 
Definition: A language L is called regular if and only if there exists some DFA M such that L =
L(M).
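To make the acceptance condition concrete, here is a minimal C sketch of a DFA simulator; the particular automaton (accepting all strings over {a, b} that contain the substring ab) is an assumed example, not one taken from the text.

#include <stdio.h>
#include <string.h>

/* States: 0 = start, 1 = last symbol was 'a', 2 = "ab" seen (final, trap) */
static int delta(int state, char c)
{
    switch (state) {
    case 0:  return (c == 'a') ? 1 : 0;
    case 1:  return (c == 'b') ? 2 : 1;
    default: return 2;                    /* stay in the final state */
    }
}

/* Returns 1 if delta*(q0, w) is in F = {2}, i.e. w is in L(M) */
static int accepts(const char *w)
{
    int q = 0;                            /* q0 */
    for (size_t i = 0; i < strlen(w); i++)
        q = delta(q, w[i]);
    return q == 2;
}

int main(void)
{
    printf("%d %d %d\n", accepts("aab"), accepts("bba"), accepts(""));   /* prints 1 0 0 */
    return 0;
}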

Nondeterministic Finite Automata (NFA):


Definition: A NFA is mathematically defined by the quintuple M = (Q, ∑, 𝛿, q0, F), where Q, ∑,
q0, and F are defined as before (see DFA definition), but the transition function is defined by
δ: Q × (∑ ∪ {λ}) → 2^Q
Definition: The language L accepted by an NFA M = (Q, ∑, 𝛿, q0, F) is defined as
L(M) = {w ∈ ∑* : δ*(q0, w) ∩ F ≠ ∅}
Equivalence of NFA and DFA
Theorem: Let L be the language accepted by a NFA MN = (QN, ∑, 𝛿 N, q0, FN ). Then there exists
a DFA MD = (QD, ∑, 𝛿 D, q0, FD ) such that L = L(MD)

Regular languages and regular grammars


Regular Expressions: The regular language (RL) can be easily described by simple expressions called
regular expressions (RE)
Each RE r denotes a RL L(r)
Note: We use + to denote union,
We use . for concatenation and
We use * for star closure
Examples:
1. Ф is a RE corresponding to the language L = { }.
2. λ is a RE corresponding to the language L = {λ}.
3. For each symbol a ∈ ∑, a is a regular expression.

Definition: Let Σ be a given alphabet. Then


1. Ф, λ, and a ∈ Σ are all REs. These are called primitive REs.
2. If r1 and r2 are REs, so are r1 + r2, r1.r2, r1*, and (r1).
3. A string is a RE if and only if it can be derived from primitive REs by a finite number of applications
of the rules in (2).

Languages associated with regular expressions


Definition: If r1 and r2 are REs corresponding to the RLs L(r1) and L(r2), respectively, then
1. (r1 + r2) is a RE corresponds to the RL L(r1+r2) = L(r1)UL(r2)
2. (r1r2) is a RE corresponds to the RL L(r1.r2) = L(r1)L(r2)
3. (r1*) is RE corresponds to the RL L(r1*) = (L(r1))*

Theorem: Let r be a regular expression. Then there exists some NFA that accepts L(r). Consequently, L(r)
is a RL.

Generalized Transition Graph


A generalized transition graph (GTG) is a transition graph (TG) whose edges are labeled with regular
expressions.
Any transition graph can be reduced to the two-state transition graph shown in Figure 2, and the regular
expression for the same is given by: r1* r2 (r3 + r4 r1* r2)*

Figure 2
Theorem: Let L be a regular language. Then there exists a regular expression r such that L = L(r)

Regular Grammars
Regular grammar is another way of representing the regular language.
Definition: A grammar G = (V, T, S, P) is said to be linear grammar if all the productions contain at most
one variable on the right side of the production, without restriction on the position of the variable.

Definition: A grammar G = (V, T, S, P) is said to be right-linear if all the productions are of the form
A → xB or A → x, where A, B ∈ V and x ∈ T*.

Definition: A grammar G = (V, T, S, P) is said to be left-linear if all the productions are of the form
A → Bx or A → x, where A, B ∈ V and x ∈ T*.

Definition: A regular grammar is one that is either right-linear or left-linear.

Theorem: Let G = (V, T, S, P) be a right-linear grammar. Then L(G) is a regular language.

Theorem: A language L is regular if and only if there exists a right-linear grammar (or left-linear grammar)
G = (V, T, S, P) such that L = L(G).

Theorem: A language is regular if and only if there exists a regular grammar G such that L = L(G)

Properties of regular languages


If L1 and L2 are regular languages, then so are
1. Union: L1 U L2
2. Intersection: L1 ∩ L2
3. Concatenation: L1L2
4. Complement: L̄1
5. Star: L1*
6. Difference: L1- L2
7. Reversal: L1R

Homomorphism
Homomorphism is a substitution in which a single letter is replaced with a string.

Suppose Σ and Γ are alphabets, then a function h: Σ → Γ* is called a homomorphism. If w = a1a2….an then
h(w) = h(a1)h(a2)….h(an).
If L is a language on Σ, then its homomorphic image is defined as h(L) = {h(w) : w ∈ L}.

Theorem: The family of regular languages is closed under homomorphism.

Right Quotient
Let L1 and L2 be languages on the same alphabet, then the right quotient of L1 and L2 is defined as
L1/L2 = {x : xy ∈ L1 for some y ∈ L2}

Theorem: The family of regular languages is closed under right quotient with a regular language.

Pumping Lemma: Given an infinite regular language L, there exists some positive integer m such
that any string w ∈ L with length |w| ≥ m can be decomposed as w = xyz, with |xy| ≤ m and |y|
≥ 1, such that wi = xy^i z ∈ L for all i = 0, 1, 2, … This is known as the pumping lemma for regular
languages.
Pumping lemma can be used to prove that certain languages are not regular.
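As a standard illustration, consider L = {a^n b^n : n ≥ 0}. Assume L is regular and choose w = a^m b^m. Since |xy| ≤ m, the substring y consists only of a's, with |y| ≥ 1; pumping down (i = 0) gives xz = a^(m−|y|) b^m, which has fewer a's than b's and is not in L. This contradiction shows that L is not regular.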

Context-free languages


Definition: A grammar G = (V, T, S, P) is said to be context-free if all productions in P have the
form
A → x, where A ∈ V and x ∈ (V ∪ T)*.
A language L is said to be context-free if and only if there is a context-free grammar G such that
L= L(G).

Sentential Form: A string of terminals and variables that occurs in a derivation from the start symbol


Leftmost and Rightmost derivations

Definition: A derivation is said to be leftmost if in each step the leftmost variable in the sentential
form is replaced. If in each step the rightmost variable is replaced, then the derivation is called
rightmost derivation.

Derivation Trees
A derivation tree is an ordered tree in which the nodes are labeled with the left sides of the
production and in which the children of a node represent its corresponding right side.
Simple grammar or s-grammar

Definition: A CFG G = (V, T, S, P) is said to be a simple grammar or s-grammar if all its


productions are of the form
A → ax,
where A ∈ V, a ∈ T, x ∈ V*, and any pair (A, a) occurs at most once in P.

Ambiguity in Grammars and Languages


Definition: A context-free grammar G is said to be ambiguous if there exists some w ∈ L(G) that
has two or more leftmost derivations or rightmost derivations.
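A standard example is the grammar with productions E → E + E | E * E | a. The sentence a + a * a has two distinct leftmost derivations, one beginning with E ⇒ E + E and the other with E ⇒ E * E, so this grammar is ambiguous.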

Simplification of context-free grammars
Useless Productions, -Productions and Unit Productions

Definition: Let G = (V, T, S, P) be a CFG. A variable A ∈ V is said to be useful if and only if there is at
least one w ∈ L(G) such that
S ⇒* xAy ⇒* w with x, y in (V ∪ T)*

In other words, a variable is useful if and only if it occurs in at least one derivation. A variable that is not
useful is called useless. A production is useless if it involves any useless variable.

Definition: Any production of a CFG of the form


A → λ
is called a λ-production.

Definition: Any production of a context-free grammar of the form


A → B
where A, B ∈ V, is called a unit-production.

Chomsky Normal Form


Definition: A context free grammar is in Chomsky normal form, if all productions are of the form
A→BC or A→ a
where A, B, C are in V (non-terminals) and a is in T (terminals)

Greibach Normal Form


Definition: A context free grammar is said to be in Greibach normal form if all productions have the form
A→ ax
where a ∈ T and x ∈ V*.

Nondeterministic Pushdown Automata


A schematic representation of a pushdown automaton is given in Figure 3 below: It has input file, control
unit and a temporary storage stack.
Each move of the control unit reads a symbol from the input file while at the same time changing the
contents of the stack through the usual stack operations.
Each move of the control unit is determined by the current input symbol and the symbol currently on top of the stack.
The result of the move is a new state of the control unit and a change in the top of the stack.

Figure 3

Definition: A nondeterministic pushdown acceptor (NPDA) is mathematically defined by the septuple
M = (Q, Σ, Γ, δ, q0, z, F),
where
Q is a finite set of internal states of the control unit,
Σ is the input alphabet,
Γ is a finite set of symbols called the stack alphabet,
δ : Q × (Σ ∪{λ}) × Γ → set of finite subsets of Q × Γ* is the transition function,
q0 ∈ Q is the initial state of the control unit,
z ∈ Γ is the stack start symbol,
F ⊆ Q is the set of final states.

Definition: Let M = (Q, Σ, Γ, δ, q0, z, F) be a nondeterministic pushdown automaton. The language


accepted by M is the set
L(M) = {w ∈ Σ* : (q0, w, z) ⊢*M (p, λ, u), p ∈ F, u ∈ Γ*}
In words, the language accepted by M is the set of all strings that can put M into a final state at the end of
the input string. The final stack content u is irrelevant to this definition of acceptance.

Theorem: For any context-free language L, there exists an NPDA M such that L = L(M).

Deterministic Pushdown Automata (dpda) and Deterministic Context-free Languages


Definition: A pushdown automaton M = (Q, Σ, Γ, δ, q0, z, F) is said to be deterministic if for every q ∈ Q,
a ∈ Σ ∪{λ} and b ∈ Γ,
1. δ (q, a, b) contains at most one element,
2. if δ (q, λ, b) is not empty, then δ (q, c, b) must be empty for every c ∈ Σ.
The first of these conditions simply requires that for any given input symbol and any stack top, at most one
move can be made. The second condition is that when a λ-move is possible for some configuration, no
input-consuming alternative is available.

Definition: A language is said to be a deterministic CFL if and only if there exists a DPDA M such that L
= L(M)

Properties of Context-free Languages


Pumping Lemma: Let L be an infinite context-free language. Then there exists some positive integer m
such that any w ∈ L with |w| ≥ m can be decomposed as w = uvxyz, with |vxy| ≤ m and |vy| ≥ 1, such
that uv^i xy^i z ∈ L for all i = 0, 1, 2, … This is known as the pumping lemma for context-free languages.
The pumping lemma can be used to prove that certain languages are not context-free.
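As a standard illustration, the language L = {a^n b^n c^n : n ≥ 0} is not context-free: for w = a^m b^m c^m, the substring vxy has length at most m and therefore cannot contain all three of a, b and c, so pumping changes the counts of at most two of the three symbols and produces a string outside L.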

Closure Properties and Decision Algorithms for Context-free Languages


Theorem: The family of context-free languages is closed under union, concatenation, and star-closure.

Theorem: The family of context-free languages is not closed under intersection and complementation.

Theorem: Let L1 be a context-free language and L2 be a regular language. Then L1 ⋂ L2 is context-free.

TURING MACHINES

The Standard Turing Machine


A Turing machine is an automaton whose temporary storage is a tape. This tape is divided into cells, each
of which is capable of holding one symbol. Associated with the tape is a read-write head that can travel
right or left on the tape and that can read and write a single symbol on each move.

Turing machine's storage is actually quite simple. It can be visualized as a single, one-dimensional array of
cells, each of which can hold a single symbol. This array extends indefinitely in both directions and is
therefore capable of holding an unlimited amount of information. The information can be read and changed
in any order.
A diagram giving an intuitive visualization of a Turing machine is shown in Figure 4.

Figure 4

Definition: A Turing machine M is mathematically defined by


M = (Q, Σ, Γ, δ, q0, □, F),
Where
Q is the set of internal states,
Σ is the input alphabet
Γ is the finite set of symbols called the tape alphabet,
δ is the transition function,
□ ∈ Γ is a special symbol called the blank,
q0 ∈ Q is the initial state,
F ⊆ Q is the set of final states.

The transition function δ is defined as
δ: Q × Γ → Q × Γ × {L, R}
Figure 5 below shows the situation before and after the move

Figure 5: The situation (a) before the move and (b) after the move.

Definition: Let M = (Q, Σ, Γ, δ, q0, □, F) be a Turing machine. Then the language accepted by M is
L(M) = {w ∈ Σ+ : q0w ⊢*M x1 qf x2 for some qf ∈ F, x1, x2 ∈ Γ*}
Definition: A function f with domain D is said to be Turing-computable or just computable if there exists
some Turing machine M = (Q, Σ, Γ, δ, q0, □, F) such that
q0w ⊢*M qf f(w), for some final state qf ∈ F,
for all w ∈ D.

Nondeterministic Turing Machines (ntm)


Definition: A nondeterministic Turing machine is a Turing machine where the transition function δ is defined
by
δ: Q × Γ → 2^(Q × Γ × {L, R})
Linear Bounded Automata (lba)


Definition: An LBA is a nondeterministic TM M = (Q, Σ, Γ, δ, q0, □, F) subject to the restriction
that Σ must contain two special symbols, the left-end marker [ and the right-end marker ], such that
δ(qi, [) = (qj, [, R) and δ(qi, ]) = (qj, ], L)

Recursive and recursively enumerable languages


Definition: A language L is said to be recursively enumerable if there exists a Turing machine that accepts
it.
Definition: A language is recursive if some Turing machine accepts it and halts on any input string.

Theorem: There exists a recursively enumerable language whose complement is not recursively
enumerable.
Theorem: There exists a recursively enumerable language that is not recursive.

Unrestricted Grammars
Definition: A grammar G = (V, T, S, P) is called unrestricted if all the productions are of the form
u→v,
where u is in (V ∪ T)+ and v is in (V ∪ T)*.

Theorem: Any language generated by an unrestricted grammar is recursively enumerable.

Theorem: For every recursively enumerable language L, there exists an unrestricted grammar G, such that
L =L (G).

Context-sensitive Grammars and languages


Definition: A grammar G = (V, T, S, P) is said to be context-sensitive if all productions are of the form
x →y,
where x, y ∈ (V ∪ T)+ and |x| ≤ |y|.

Context-sensitive Languages and Linear Bounded Automata


Definition: A language L is said to be context-sensitive if there exists a context-sensitive grammar G, such
that L = L (G) or L = L (G) ∪{λ}.

Theorem: For every context-sensitive language L not including λ, there exists some linear bounded
automaton M such that L = L (M).

Theorem: If a language L is accepted by some linear bounded automaton M, then there exists a context-
sensitive grammar that generates L.

Relation between Recursive and Context-sensitive Languages


Theorem: Every context-sensitive language L is recursive.

Theorem: There exists a recursive language that is not context-sensitive.

OPERATING SYSTEMS

Operating system dual mode operation:

Examples of Windows and Unix System Calls

                              WINDOWS                             UNIX
Process Control               CreateProcess()                     fork()
                              ExitProcess()                       exit()
                              WaitForSingleObject()               wait()
File Manipulation             CreateFile()                        open()
                              ReadFile()                          read()
                              WriteFile()                         write()
                              CloseHandle()                       close()
Device Manipulation           SetConsoleMode()                    ioctl()
                              ReadConsole()                       read()
                              WriteConsole()                      write()
Information Maintenance       GetCurrentProcessID()               getpid()
                              SetTimer()                          alarm()
                              Sleep()                             sleep()
Communication                 CreatePipe()                        pipe()
                              CreateFileMapping()                 shmget()
                              MapViewOfFile()                     mmap()
Protection                    SetFileSecurity()                   chmod()
                              InitializeSecurityDescriptor()      umask()
                              SetSecurityDescriptorGroup()        chown()

PROCESS MANAGEMENT

Multithreaded programming

Multithreaded server architecture

Process Scheduling
Turnaround time = Completion Time - Arrival Time
Waiting Time = Turnaround Time - Burst Time
Determining Length of Next CPU Burst
τn+1 = α·tn + (1 − α)·τn
tn = actual length of the nth CPU burst
τn+1 = predicted value of the next CPU burst
α is a constant with 0 ≤ α ≤ 1
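A minimal C sketch of this exponential averaging; the value α = 0.5, the initial estimate τ0 = 10, and the observed burst lengths are illustrative assumptions.

#include <stdio.h>

int main(void)
{
    double alpha   = 0.5;                      /* assumed weighting factor */
    double tau     = 10.0;                     /* initial prediction tau_0 */
    double burst[] = {6.0, 4.0, 6.0, 4.0};     /* observed CPU bursts t_n  */

    for (int n = 0; n < 4; n++) {
        /* tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n */
        tau = alpha * burst[n] + (1.0 - alpha) * tau;
        printf("after burst %d: predicted next burst = %.2f\n", n + 1, tau);
    }
    return 0;
}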

Synchronization

Semaphore
Counting semaphore – integer value can range over an unrestricted domain
Binary semaphore – integer value can range only between 0 and 1
Semaphore S – integer variable
Two standard operations modify S: wait() and signal()
Originally called P() and V()
Syntax:
wait(S) {
    while (S <= 0)
        ;          // busy wait
    S--;
}

signal(S) {
    S++;
}

Monitors
Syntax of a monitor
monitor monitor-name
{
    // shared variable declarations

    procedure P1 (…) { …. }
    …
    procedure Pn (…) { …… }

    initialization code (…) { … }
}

Deadlocks

Data Structures for the Banker’s Algorithm


Let n = number of processes, and m = number of resource types.
Available: Vector of length m. If available [j] = k, there are k instances of resource type Rj available
Max: n x m matrix. If Max [i,j] = k, then process Pi may request at most k instances of resource type Rj
Allocation: n x m matrix. If Allocation[i,j] = k then Pi is currently allocated k instances of Rj
Need: n x m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj to complete its task
Need [i,j] = Max[i,j] – Allocation [i,j]
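A minimal C sketch of these data structures together with the safety check performed by the Banker's algorithm; the 3-process, 2-resource-type instance is an assumed example.

#include <stdio.h>
#include <stdbool.h>

#define N 3   /* number of processes      */
#define M 2   /* number of resource types */

int main(void)
{
    int available[M]     = {2, 1};
    int max[N][M]        = {{3, 2}, {4, 2}, {2, 2}};
    int allocation[N][M] = {{1, 0}, {2, 1}, {1, 1}};
    int need[N][M];

    /* Need[i][j] = Max[i][j] - Allocation[i][j] */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < M; j++)
            need[i][j] = max[i][j] - allocation[i][j];

    /* Safety algorithm: repeatedly find a process whose Need <= Work */
    int  work[M];
    bool finish[N] = {false};
    for (int j = 0; j < M; j++)
        work[j] = available[j];

    bool progress = true;
    while (progress) {
        progress = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool fits = true;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { fits = false; break; }
            if (fits) {                       /* Pi can finish; reclaim its allocation */
                for (int j = 0; j < M; j++)
                    work[j] += allocation[i][j];
                finish[i] = true;
                progress  = true;
                printf("P%d can finish\n", i);
            }
        }
    }

    bool safe = true;
    for (int i = 0; i < N; i++)
        if (!finish[i]) safe = false;
    printf("The state is %s\n", safe ? "SAFE" : "UNSAFE");   /* SAFE: order P1, P2, P0 */
    return 0;
}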

Memory Management

Paging hardware

SEGMENTATION HARDWARE

Virtual Memory

Performance of Demand Paging:


Let p be the probability of a page fault (0 ≤ p ≤ 1).
The effective access time is then
effective access time = (1 − p) × memory access time + p × page fault time.
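For instance, assuming a memory access time of 200 ns and a page-fault service time of 8 ms, a page-fault rate of p = 0.001 gives effective access time = 0.999 × 200 ns + 0.001 × 8,000,000 ns ≈ 8,200 ns, roughly a 40-fold slowdown relative to the fault-free case.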

Frame Allocation Algorithms
a. Equal allocation
b. Proportional allocation
Let the size of the virtual memory for process pi be si, and define S = ∑ si. Then, if the total number of
available frames is m, we allocate ai frames to process pi, where ai is approximately
ai = (si / S) × m.
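For example, with m = 62 free frames and two processes of sizes s1 = 10 pages and s2 = 127 pages (so S = 137), proportional allocation assigns a1 = (10/137) × 62 ≈ 4 frames and a2 = (127/137) × 62 ≈ 57 frames.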

Working-set model
The working-set size WSSi is computed for each process in the system, and D = ∑ WSSi, where D is the
total demand for frames.

Storage management

Common file types

Simulation of sequential access on a direct-access file

Single level directory structure

Two-level directory structure

Tree-structured directory structure

Real Time Operating System


A real-time operating system helps real-time applications/tasks meet their deadlines by using a task
scheduler.

Real-time Task Scheduling: Real-time task scheduling essentially refers to determining the order in which
the various tasks are to be taken up for execution by the operating system.

Types of real time tasks: Periodic, Sporadic and Aperiodic


Notation
The 4-tuple Ti = (ϕi, pi, ei, Di) refers to a periodic task Ti with phase ϕi, period pi, execution time ei, and
relative deadline Di
– Default phase of Ti is ϕi = 0, default relative deadline is the period Di = pi
– Omit elements of the tuple that have default values

Examples:
i) T1 = (1, 10, 3, 6) means ϕ1 = 1, p1 = 10, e1 = 3, D1 = 6
ii) Consider the task Ti with period = 5 and execution time = 3. Phase is not given so, assume the release
time of the first job as zero. So the job of this task is first released at t = 0 then it executes for 3s and then
next job is released at t = 5 which executes for 3s and then next job is released at t = 10. So jobs are released
at t = 5k where k = 0, 1, . . ., n

Real time task scheduling algorithms have been divided into: Clock driven, Event driven, Hybrid
Theorem 1: The major cycle of a set of tasks ST = {T1, T2, …, Tn} is LCM({p1, p2, …, pn}) even when
the tasks have arbitrary phasing, where p1, p2, …, pn are the periods of T1, T2, …, Tn.

Theorem 2: The minimum separation of the task arrival from the corresponding frame start time (min(Δt)),
considering all instances of a task Ti, is equal to gcd(F, pi).

Rate-monotonic scheduling
Necessary condition:
∑(i=1 to n) ei/pi = ∑(i=1 to n) Ui ≤ 1

where e i is the worst case execution time and pi is the period of the task Ti, n is the number of tasks to be
scheduled, and Ui is the CPU utilization due to the task Ti

Sufficient Condition: The sufficient condition, as given by Liu and Layland, for a set of n real-time periodic
tasks to be schedulable under RMA is
∑(i=1 to n) Ui ≤ n(2^(1/n) − 1)
where Ui is the utilization due to task Ti. If a set of tasks satisfies the sufficient condition, then it is
guaranteed that the set of tasks would be RMA schedulable.

Earliest Deadline First

Priority-driven Approach - the earlier the deadline, the higher the priority
Theorem [EDF bound]: A set of n periodic tasks, each of whose relative deadline equals its period, can
be feasibly scheduled by EDF if and only if
∑(i=1 to n) ei/pi ≤ 1
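A minimal C sketch that applies both utilization tests (the RMA sufficient bound and the EDF bound) to an assumed task set; the execution times and periods are illustrative values only.

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Assumed periodic task set: execution times e[i] and periods p[i] */
    double e[] = {20.0, 30.0, 60.0};
    double p[] = {100.0, 150.0, 200.0};
    int    n   = 3;

    double u = 0.0;
    for (int i = 0; i < n; i++)
        u += e[i] / p[i];                                   /* total utilization */

    double rma_bound = n * (pow(2.0, 1.0 / n) - 1.0);       /* Liu-Layland bound */

    printf("Total utilization U = %.3f\n", u);              /* 0.700             */
    printf("RMA bound n(2^(1/n) - 1) = %.3f\n", rma_bound); /* about 0.780       */
    printf("RMA schedulable (sufficient test): %s\n",
           u <= rma_bound ? "yes" : "test inconclusive");
    printf("EDF schedulable (Di = pi): %s\n", u <= 1.0 ? "yes" : "no");
    return 0;
}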

8086 MICROPROCESSOR SYSTEMS
8086 Microprocessor Pin Diagram

8086 is a 16-bit microprocessor available in a 40-pin Dual In-line Package (DIP).

8086 INSTRUCTION SET

Data Transfer Instructions

MOV Destination, Source


The MOV instruction copies a word or byte of data from a specified source to a specified destination.

XCHG Destination, Source


This instruction exchanges the content of a register with the content of another register or with the content
of memory location(s).

LEA Register, Source


This instruction determines the offset of the variable or memory location named as the source and puts this
offset in the 16-bit register.

LDS Register, Memory address of the first word


This instruction loads new values into the specified register and into the DS register from four successive
memory locations.

LES Register, Memory address of the first word


This instruction loads new values into the specified register and into the ES register from four successive
memory locations.

Arithmetic Instructions

ADD Destination, Source


This instruction adds a number from some source to a number in some destination and puts the result in the
specified destination.

ADC Destination, Source


This instruction adds a number from some source to a number in some destination along with carry flag
and puts the result in the specified destination.

SUB Destination, Source


This instruction subtracts the number in some source from the number in some destination and puts the
result in the destination.

SBB Destination, Source


This instruction subtracts the number in some source and the carry flag from the number in some destination
and puts the result in the destination.

MUL Source
This instruction multiplies an unsigned byte in some source with an unsigned byte in AL register or an
unsigned word in some source with an unsigned word in AX register. When a byte is multiplied by the
content of AL, the product is put in AX. When a word is multiplied by the content of AX, the result is put
in DX and AX registers.

IMUL Source
This instruction multiplies a signed byte from source with a signed byte in AL or a signed word from some
source with a signed word in AX. When a byte from source is multiplied with content of AL, the signed
product will be put in AX. When a word from source is multiplied by AX, the signed product is put in DX
and AX.

DIV Source
This instruction is used to divide an unsigned word by a byte or to divide an unsigned double word by a
word. When a word is divided by a byte, the word must be in the AX register. After the division, AL will
contain the 8-bit quotient, and AH will contain the 8-bit remainder. When a double word is divided by a
word, the most significant word of the double word must be in DX, and the least significant word of the
double word must be in AX. After the division, AX will contain the 16-bit quotient, and DX will contain
the 16-bit remainder.

IDIV Source
This instruction is used to divide a signed word by a signed byte, or to divide a signed double word by a
signed word. When dividing a signed word by a signed byte, the word must be in the AX register. The
divisor can be in an 8-bit register or a memory location. After the division, AL will contain the signed
quotient, and AH will contain the signed remainder. When a double word is divided by a word, the most
significant word of the double word must be in DX, and the least significant word of the double word must
be in AX. After the division, AX will contain the 16-bit signed quotient, and DX will contain the 16-bit
signed remainder.

INC Destination
The INC instruction adds 1 to the destination word or byte.

DEC Destination
This instruction subtracts 1 from the destination word or byte.

DAA
This instruction is used to adjust the sum of two packed BCD bytes available in the AL register into packed
BCD.

DAS
This instruction is used to adjust the difference of two packed BCD bytes available in the AL register into
packed BCD.

CBW
This instruction converts a byte in AL into word in AX by copying the sign bit of AL to all the bits in AH.

CWD
This instruction converts a word in AX into double word in DX: AX by copying the sign bit of AX to all
the bits in DX.

AAA
This instruction allows us to add the ASCII codes for two decimal digits without first masking off the '3' in the
higher-order nibble of each. The unpacked BCD sum is available in AL and the carry flag is set in case of any adjustment.

AAS
This instruction allows us to subtract the ASCII codes for two decimal digits without first masking off the '3'
in the higher-order nibble of each. The unpacked BCD difference is available in AL and the carry flag is set in
case of any adjustment.

AAM
This instruction is used to adjust the product in AX in to unpacked BCD in AX.

AAD
AAD converts two unpacked BCD digits in AH and AL in to the equivalent binary number in AX.

Logical Instructions

AND Destination, Source


This instruction performs the bitwise logical AND operation of source and destination operands and stores
the result in the destination operand.

OR Destination, Source
This instruction performs the bitwise logical OR operation of source and destination operands and stores
the result in the destination operand.

XOR Destination, Source


This instruction performs the bitwise logical XOR operation of source and destination operands and stores
the result in the destination operand.

NOT Destination
This instruction performs the 1’s complement of a byte or word in the specified destination.

NEG Destination
This instruction performs the 2’s complement of a byte or word in the specified destination.

CMP Destination, Source


This instruction compares the destination with the source operand by subtracting the source from the
destination. The result is not stored anywhere; only the flags are affected accordingly.

TEST Destination, Source


This instruction performs the bitwise logical AND operation of the source operand with the destination
operand. The result is not stored anywhere; only the flags are affected accordingly.

Rotate and Shift Instructions

ROL Destination, Count


This instruction rotates destination operand Count number of bit positions to the left.

ROR Destination, Count


This instruction rotates destination operand Count number of bit positions to the right.

RCL Destination, Count


This instruction rotates destination operand through the carry flag, Count number of bit positions to the
left.

RCR Destination, Count


This instruction rotates destination operand through the carry flag, Count number of bit positions to the
right.

SAL Destination, Count


SHL Destination, Count
This instruction logically shifts the destination operand, Count number of bit positions to the left.

SHR Destination, Count


This instruction logically shifts the destination operand, Count number of bit positions to the right.

SAR Destination, Count


This instruction arithmetically shifts the destination operand, Count number of bit positions to the right.

Program Execution Transfer Instructions

JMP label
This instruction will fetch and execute the instruction from the address of the label rather than from the next
address after the JMP instruction.

JC label
This instruction will fetch and execute the instruction from the address of the label if Carry Flag is equal to
1.

JNC label
This instruction will fetch and execute the instruction from the address of the label if Carry Flag is equal
to 0.

JZ label
JE label
This instruction will fetch and execute the instruction from the address of the label if Zero Flag is equal to
1.

JNZ label
JNE label
This instruction will fetch and execute the instruction from the address of the label if Zero Flag is equal to
0.

JO label
This instruction will fetch and execute the instruction from the address of the label if Overflow Flag is equal
to 1.

JNO label
This instruction will fetch and execute the instruction from the address of the label if Overflow Flag is equal
to 0.

JP label
JPE label
This instruction will fetch and execute the instruction from the address of the label if Parity Flag is equal
to 1.

JNP label
JPO label
This instruction will fetch and execute the instruction from the address of the label if Parity Flag is equal
to 0.

JS label
This instruction will fetch and execute the instruction from the address of the label if Sign Flag is equal to
1.

JNS label
This instruction will fetch and execute the instruction from the address of the label if Sign Flag is equal to
0.

JA label
JNBE label
This instruction will fetch and execute the instruction from the address of the label if Carry Flag is equal to
0 and Zero Flag is equal to 0.

JAE label
JNB label
This instruction will fetch and execute the instruction from the address of the label if Carry Flag is equal to
0 or Zero Flag is equal to 1.

JNA label
JBE label
This instruction will fetch and execute the instruction from the address of the label if Carry Flag is equal to
1 or Zero Flag is equal to 1.

JNAE label
JB label
This instruction will fetch and execute the instruction from the address of the label if Carry Flag is equal to
1 and Zero Flag is equal to 0.

JG label
JNLE label
This instruction will fetch and execute the instruction from the address of the label if Sign Flag is equal to
Overflow Flag, and Zero Flag is equal to 0.

JGE label
JNL label
This instruction will fetch and execute the instruction from the address of the label if Sign Flag is equal to
Overflow Flag, or Zero Flag is equal to 1.

JNG label
JLE label
This instruction will fetch and execute the instruction from the address of the label if Sign Flag is not equal
to Overflow Flag, or Zero Flag is equal to 1.

JNGE label
JL label
This instruction will fetch and execute the instruction from the address of the label if Sign Flag is not equal
to Overflow Flag, and Zero Flag is equal to 0.

JCXZ label
This instruction will fetch and execute the instruction from the address of the label, if the CX register
contains all 0’s.

LOOP label
This instruction will automatically decrement CX by 1. If CX is not 0, execution will jump to a destination
specified by a label in the instruction. If CX = 0 after the auto decrement, execution will simply go on to
the next instruction after LOOP.

LOOPE label
LOOPZ label
This instruction will automatically decrement CX by 1. If CX is not 0 and Zero Flag is equal to 1, execution
will jump to a destination specified by a label in the instruction.

LOOPNE label
LOOPNZ label
This instruction will automatically decrement CX by 1. If CX is not 0 and Zero Flag is equal to 0, execution
will jump to a destination specified by a label in the instruction.

CALL procedure_name
The CALL instruction is used to transfer execution to a subprogram or a procedure.

RET
The RET instruction will return execution from a procedure to the next instruction after the CALL
instruction which was used to call the procedure.

String Manipulation Instructions
MOVSB
This instruction copies a byte from location in the data segment pointed by SI register to a location in the
extra segment pointed by DI register. After the byte is moved, SI and DI are automatically incremented or
decremented by 1 to point to the next source element and the next destination element based on the settings
of DF.

MOVSW
This instruction copies a word from location in the data segment pointed by SI register to a location in the
extra segment pointed by DI register. After the word is moved, SI and DI are automatically incremented or
decremented by 2 to point to the next source element and the next destination element based on the settings
of DF.

LODSB
This instruction copies a byte from the data segment pointed to by SI to AL. After the byte is moved, SI is
automatically incremented or decremented by 1 based on the settings of DF.

LODSW
This instruction copies a word from the data segment pointed to by SI to AX. After the word is moved, SI
is automatically incremented or decremented by 2 based on the settings of DF.

STOSB
This instruction copies a byte from the AL to a location in the extra segment pointed to by DI. After the
byte is moved, DI is automatically incremented or decremented by 1 based on the settings of DF.

STOSW
This instruction copies a word from the AX to a location in the extra segment pointed to by DI. After the
word is moved, DI is automatically incremented or decremented by 2 based on the settings of DF.

CMPSB
This instruction compares a byte in the data segment pointed by SI register with a byte in the extra segment
pointed by DI register. After the comparison, SI and DI are automatically incremented or decremented by
1 to point to the next source element and the next destination element based on the settings of DF.

CMPSW
This instruction compares a word in the data segment pointed by SI register with a word in the extra segment
pointed by DI register. After the comparison, SI and DI are automatically incremented or decremented by
2 to point to the next source element and the next destination element based on the settings of DF.

SCASB
This instruction compares AL register with a byte in the extra segment pointed by DI register. After the
comparison, DI is automatically incremented or decremented by 1 based on the settings of DF.

SCASW
This instruction compares AX register with a word in the extra segment pointed by DI register. After the
comparison, DI is automatically incremented or decremented by 2 based on the settings of DF.

REP
REP is a prefix, which is written before the string instruction. It will cause the CX register to be decremented
and the string instruction to be repeated until CX = 0.

REPE/REPZ
REPE and REPZ are two mnemonics for the same prefix. They will cause the string instruction to be
repeated as long as the compared bytes or words are equal (ZF = 1) and CX is not yet counted down to zero.

REPNE/REPNZ
REPNE and REPNZ are two mnemonics for the same prefix. They will cause the string instruction to be
repeated as long as the compared bytes or words are not equal (ZF = 0) and CX is not yet counted down to
zero.

Flag Manipulation Instructions

STC
This instruction sets the carry flag to 1.

CLC
This instruction resets the carry flag to 0.

CMC
This instruction complements the carry flag.

STD
This instruction sets the direction flag to 1.

CLD
This instruction resets the direction flag to 0.

STI
This instruction sets the Interrupt flag to 1.

CLI
This instruction resets the Interrupt flag to 0.

LAHF
The LAHF instruction copies the lower byte of the 8086 flag register to AH register.

SAHF
The SAHF instruction replaces the lower byte of the 8086 flag register with a byte from the AH register.

Stack Related Instructions

PUSH Source
The PUSH instruction decrements the stack pointer by 2 and copies a word from a specified source to the
location in the stack segment to which the stack pointer points.

POP Destination
The POP instruction copies a word from the stack location pointed to by the stack pointer to a destination
specified in the instruction and increments the stack pointer by 2.

PUSHF
The PUSHF instruction decrements the stack pointer by 2 and copies the flag register to the location in the stack
segment to which the stack pointer points.

POPF
The POPF instruction copies a word from the stack location pointed to by the stack pointer to the flag register
and increments the stack pointer by 2.

Input-Output Instructions
IN AL, Port_8
IN AX, Port_8
IN AL, DX
IN AX, DX
The IN instruction copies data from a port to the AL or AX register. Port_8 is the 8 bit fixed port address
of the port. For variable port addressing, DX holds the 16-bit address of the port.

OUT Port_8, AL
OUT Port_8, AX
OUT DX, AL
OUT DX, AX
The OUT instruction copies a byte from AL or a word from AX to the specified port. Port_8 is the 8 bit
fixed port address of the port. For variable port addressing, DX holds the 16-bit address of the port.

Miscellaneous Instructions

HLT
The HLT instruction causes the 8086 to stop fetching and executing instructions.

NOP
This instruction simply uses up three clock cycles and increments the instruction pointer to point to the next
instruction.

ESC
This instruction is used to pass instructions to a coprocessor, such as the 8087 Math coprocessor, which
shares the address and data bus with 8086.

INT type
This instruction causes the execution of interrupt handler specified by the type.

INTO
If the overflow flag is set, this instruction causes the 8086 to do an indirect far call to a procedure you write
to handle the overflow condition.

IRET
The IRET instruction is used at the end of the interrupt service procedure to return execution to the
interrupted program.

LOCK
The LOCK prefix allows a microprocessor to make sure that another processor does not take control of the
system bus while it is in the middle of a critical instruction, which uses the system bus.

WAIT
When this instruction is executed, the 8086 enters an idle condition in which it is doing no processing.

XLAT
XLATB
The instruction copies the byte from the address pointed to by (BX + AL) in the data segment into AL.

8086 DOS INTERRUPTS – TYPE 21H

Function 01- Character input with echo

Action: Reads a character from the standard input device and echoes it to the standard output device.
If no character is ready it waits until one is available.
On entry: AH = 01h
Returns: AL = 8-bit data input

Function 02 - Character output


Action: Outputs a character to the standard output device.
On entry: AH = 02h
DL = 8 bit data (usually ASCII character)

Function 06- Direct console I/O


Action: Reads a character from the standard input device or returns zero if no character available. Also
can write a character to the current standard output device.
On entry: AH = 06h
DL = function requested: 0h to FEh = output
(DL = character to be output)
FFh = Input request
Returns: If output - nothing
If input - data ready: zero flag clear, AL = 8-bit data
If data not ready: zero flag set

Function 08- Character input with no echo


Action: Reads a character from the standard input device without copying it to the display.
If no character is ready it waits until one is available.
On entry: AH = 08h
Returns: AL = 8 bit data input

Function 09- Output character string


Action: Writes a string terminated with $ character to the display.
On entry: AH = 09h
DS:DX = segment: offset of string

Function 0Ah - Buffered input


Action: Reads a string from the current input device up to and including an ASCII carriage return(0Dh),
placing the received data in a user-defined buffer
On entry: AH = 0Ah
DS:DX = segment:offset of string buffer. The first byte of the buffer specifies the maximum number of
characters it can hold. The second byte of the buffer is set by DOS to the number of characters actually
read, excluding the terminating RETURN. The input string is stored from the third byte onwards.

8255 Programmable Peripheral Interface

8254 Programmable Interval Timer

Control Word Register

Counter Latch Command

Read-back Command

8259 Programmable Interrupt Controller

ICW1

ICW2

ICW3

ICW4

OCW1

OCW2

OCW3

