Essentials of Computer Science
Overview
 Programming Paradigm
 Choice of Programming Methodology
 Efficient Programming
 Program Execution
 Hardware Components Involved
 Software Components Involved
 Brief Knowledge on Software Components
 Brief Knowledge on Hardware Components
 Design of Hardware Component
 Design of Software Component
 Virtualization
 Server Virtualization
 Software Virtualization
 Design of Operating System
Concept – Program
Input → Processing → Output
Language
Human Language
 Commonly used to express feelings and to communicate messages
 Can be oral or gestural communication
Computer Language
 Computer languages are the languages by which a user commands a
 computer to work on an algorithm for the desired input and expected
 output
Program
Series of organized instructions that directs a computer to
perform tasks
Programming language
Set of words, symbols and codes that enables humans to
communicate effectively with a computer
 Natural languages have grammar and vocabulary
 Programming languages have syntax, keywords and constants
Why so many?
Continuous evolution
Optimization
New requirements & approaches
Levels of programming
Low level language
 Low-level languages provide little or no abstraction from the
 hardware
 Represented in binary form, i.e. 0s and 1s
 They are machine instructions.
 Classification of low-level languages: machine-level
 language & assembly-level language.
Machine language
 First-generation programming language
 Machine language is machine dependent; e.g. it varies between IBM
 and Apple architectures
 Operation code – such as addition and subtraction
 Operands – to identify the data to be processed
 The only language that can be directly understood by the computer
 Very efficient code, but difficult to write
Assembly language
 Second-generation language
 Comparatively easier
 Symbolic operation codes replaced binary codes
 Operation code – such as addition and subtraction
 Operands – to identify the data to be processed
 Code is translated to object code using an ASSEMBLER
Comparison

Parameters      | Machine Level Language            | Assembly Level Language
Hierarchy Level | lowest level in the hierarchy;    | above the machine level language
                | zero abstraction from the         | in the hierarchy; less abstraction
                | hardware                          | from the hardware
Learning Curve  | hard for humans to understand     | easy to learn and maintain
Form            | written in binary, i.e. 0s and 1s | simple English-like mnemonics,
                |                                   | easy to understand
Generation      | first-generation programming      | second-generation programming
                | language                          | language
Translator /    | executed directly, so no          | needs an assembler to convert
Assembler       | translator is required            | assembly language to machine code
High-Level Language
 Allows us to write programs that are independent of the type of computer.
 Lets the programmer focus attention on the logic.
 Needs a compiler to translate a high-level language into a low-level language.
 Easy to learn & maintain.
 Portable, i.e. machine-independent.

Comparison

Parameters                | Low-Level Language            | High-Level Language
Level of Understanding    | machine friendly, i.e. easily | user friendly, as it is written
                          | understood by computers       | in simple English
Time of Execution         | executes at a faster pace     | takes more time to execute
Portability               | not portable                  | portable
Memory Efficiency         | memory efficient              | less memory efficient
Debugging and Maintenance | not easy                      | easy
HLL vs LLL

Low Level Language (LLL)            | High Level Language (HLL)
Direct memory management            | Interpreted
More memory efficient               | Less memory efficient
Strictly hardware dependent         | Mostly independent of hardware
Faster than high level language     | Poorer performance
Complex in nature                   | Easy to maintain
Statements correspond to clock      | Code is concise
cycles                              |
Needs an assembler                  | Needs a compiler or interpreter
Machine code and assembly language  | Python, Java, C++ etc.
High level programming languages
Procedural language
 This programming paradigm, derived from structured programming, specifies a
 series of well-structured procedures and steps to compose a program.
 It provides a set of commands by segregating the program into variables, functions,
 statements & conditional operators.
 Various programming editors or IDEs, such as Adobe Dreamweaver, Eclipse and
 Microsoft Visual Studio, help users develop programming code in one or more
 programming languages.
 BASIC, C, Java, PASCAL and FORTRAN are examples of procedural programming
 languages.
Procedural Language (Sample)
#include <stdio.h>

int main()
{
    int n1, n2, i, gcd;

    printf("Enter two integers: ");
    scanf("%d %d", &n1, &n2);

    for (i = 1; i <= n1 && i <= n2; ++i)
    {
        // Checks if i is a factor of both integers
        if (n1 % i == 0 && n2 % i == 0)
            gcd = i;
    }

    printf("G.C.D of %d and %d is %d", n1, n2, gcd);

    return 0;
}
Functional Programming Language
 Declarative programming paradigm constructed by applying and composing
 functions.
 Emphasizes expressions and declarations rather than the execution of statements.
 The foundation of functional programming is lambda calculus, which uses
 conditional expressions and recursion to perform calculations.
 Avoids statement-style iteration (loop statements) and conditionals such as if-else,
 favoring recursion and conditional expressions instead.
 Some of the most prominent functional programming languages are Haskell, SML,
 Scala, F#, ML, Scheme, and more. A minimal illustration follows.
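As a minimal illustration (an assumed example, not part of the original deck), the Java method below computes a sum in functional style: recursion and a conditional expression replace loops and mutable state.

public class FunctionalSum {
    // Sum of 1..n expressed functionally: no loop, no variable is reassigned.
    static int sum(int n) {
        return (n == 0) ? 0 : n + sum(n - 1);
    }

    public static void main(String[] args) {
        System.out.println(sum(10)); // prints 55
    }
}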
Functional Programming Language
(Sample)
#include<iostream.h> void main()
#include<conio.h> {
int x=400, y=600;
void swap(int x, int y) clrscr();
{ swap(x, y); // arguments
passed to the function
int temp;
cout<<"Value of x"<<x;
temp=x;
cout<<"Value of y"<<y;
x=y;
getch();
y=temp;
}
}
Object-oriented programming Language
 Based on “objects”, i.e. it contains data in the form of fields and code in the
 form of procedures.
 OOP offers many features like abstraction, encapsulation, polymorphism,
 inheritance, classes, and objects.
 Encapsulation is the main principle, as it ensures secure code.
 Emphasizes code reusability: inheritance and polymorphism
 allow extending current implementations without changing much of the code
 (see the sketch after the sample below).
 Most multi-paradigm languages are OOP languages, such as Java, C++, C#, Python,
 JavaScript, and more.
Object-oriented programming
Language (Sample)
public class Main {
    // Static method
    static void myStaticMethod() {
        System.out.println("Static methods can be called without creating objects");
        // myPublicMethod(); // This would produce a compile error
    }

    // Public method
    public void myPublicMethod() {
        System.out.println("Public methods must be called by creating objects");
    }

    // Main method
    public static void main(String[] args) {
        myStaticMethod();          // Call the static method

        Main myObj = new Main();   // Create an object of Main
        myObj.myPublicMethod();    // Call the public method on the object
    }
}
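To make the reuse claims above concrete, here is a hedged sketch (the class names are invented for illustration): inheritance lets Circle and Square reuse and extend Shape, and polymorphism picks the right area() at run time without changing the calling code.

class Shape {
    double area() { return 0; }  // default behaviour, inherited by subclasses
}

class Circle extends Shape {
    double r;
    Circle(double r) { this.r = r; }
    @Override double area() { return Math.PI * r * r; }  // extends Shape without editing it
}

class Square extends Shape {
    double s;
    Square(double s) { this.s = s; }
    @Override double area() { return s * s; }
}

public class ShapeDemo {
    public static void main(String[] args) {
        Shape[] shapes = { new Circle(1), new Square(2) };
        for (Shape sh : shapes)
            System.out.println(sh.area());  // polymorphic dispatch: 3.14159..., then 4.0
    }
}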
Scripting Programming Languages
 Programming languages that are not compiled but rather interpreted.
 The instructions are written for a run-time environment.
 Mainly used in web applications, system administration, games and multimedia.
 Used to create plugins and extensions for existing applications.
 Server-side scripting languages: JavaScript, PHP, and Perl.
 Client-side scripting languages: JavaScript, AJAX, jQuery
 System administration: Shell, Perl, Python
 Linux interface: BASH
 Web development: Ruby
Scripting programming Language
(Sample)
$(document).ready(function() {
    $('#loadTable').click(function() {
        $.get('example/tableData.xml', function(data) {
            $('div#tableWrapper').append('<table id="ajaxTable"></table>');
            $(data).children('tabledata').children('row').each(function() {
                var thisRow = this;
                $('table#ajaxTable').append('<tr></tr>');
                $(thisRow).children('column').each(function() {
                    var thisColumn = $(this).text();
                    $('table#ajaxTable').children('tbody').children('tr').last()
                        .append('<td style="border: 1px solid black">' + thisColumn + '</td>');
                });
            });
        });
    });
});
Logic Programming
 A programming paradigm largely based on formal logic.
 The program does not tell the machine how to do something but places
 constraints on what it must consider doing.
 PROLOG, ASP (Answer Set Programming), and Datalog are major logic
 programming languages; rules are written in the form of clauses.
Machine (Assembly) Language
[Diagram: the abstraction–implementation hierarchy, from human thought down to physics]
Human Thought (abstract design)
  → High-Level Language & Operating System — translated by a Compiler
  → Virtual Machine — translated by a VM Translator
  → Assembly Language — translated by an Assembler
  → Machine Language — realized by the Computer Architecture
  → Hardware Platform — built from Gate Logic
  → Chips & Logic Gates — grounded in Electrical Engineering and Physics
Each level exposes an abstract interface to the level above it.
Machine language
Abstraction – implementation duality:
 Machine language (= instruction set) can be viewed as a programmer-
 oriented abstraction of the hardware platform
 The hardware platform can be viewed as a physical means for realizing
 the machine language abstraction
Another duality:
 Binary version: 0001 0001 0010 0011 (machine code)
 Symbolic version: ADD R1, R2, R3 (assembly)
Machine language
Another duality:
 Binary version
 Symbolic version
[Diagram: a combinational ALU operating on Memory state]
 Machine language = an agreed-upon formalism for manipulating
 a memory using a processor and a set of registers
 Same spirit but different syntax across different hardware platforms.
Typical machine language commands (3 types)
 ALU operations
 Memory access operations
   (addressing mode: how to specify operands)
   Immediate addressing: LDA R1, 67  // R1 = 67
   Direct addressing:    LD  R1, 67  // R1 = M[67]
   Indirect addressing:  LDI R1, R2  // R1 = M[R2]
 Flow control operations
Typical machine language commands (a small sample)

// In what follows R1, R2, R3 are registers, PC is the program counter,
// and addr is some value.
ADD   R1,R2,R3   // R1 ← R2 + R3
ADDI  R1,R2,addr // R1 ← R2 + addr
AND   R1,R1,R2   // R1 ← R1 and R2 (bit-wise)
JMP   addr       // PC ← addr
JEQ   R1,R2,addr // IF R1 == R2 THEN PC ← addr ELSE PC++
LOAD  R1, addr   // R1 ← RAM[addr]
STORE R1, addr   // RAM[addr] ← R1
NOP              // Do nothing
// Etc. – some 50-300 command variants
Basic computer
A 16-bit machine consisting of the following elements:
[Diagram: Computer (with reset input) connected to a Screen and a Keyboard]
Basic computer
A 16-bit machine consisting of the following elements:
[Diagram: Instruction Memory (ROM32K) feeds instructions to the CPU; the CPU
exchanges inM/outM, writeM and addressM with the Data Memory, outputs pc,
and takes a reset input]
Both memory chips are 16-bit wide and have 15-bit address space.
Basic computer (CPU)
A 16-bit machine consisting of the following elements:
[Diagram: CPU internals — the instruction is decoded into control bits (C); an
A/M Mux selects between the A register and inM; the ALU combines D with A/M
and produces outM; control bits drive writeM, addressM, the A and D registers,
and the PC (with reset)]
Basic computer
A 16-bit machine consisting of the following elements:
Data memory: RAM – an addressable sequence of registers
Instruction memory: ROM – an addressable sequence of registers
Registers: D, A, M, where M stands for RAM[A]
Processing: ALU, capable of computing various functions
Program counter: PC, holding an address
Control: The ROM is loaded with a sequence of 16-bit instructions, one per
memory location, beginning at address 0. Fetch-execute cycle: later
Instruction set: Two instructions: A-instruction, C-instruction.
The A-instruction
@value   // A ← value
Where value is either a number or a symbol referring to some number.
Why an A-instruction? (Its uses are shown on the next slide.)
Example: @21
Effect:
 Sets the A register to 21
 RAM[21] becomes the selected RAM register M
The A-instruction
@value   // A ← value
Used for:
 Entering a constant value (A = value)
   @17      // A = 17
   D = A    // D = 17
 Selecting a RAM location (register = RAM[A])
   @17      // A = 17
   D = M    // D = RAM[17]
   M = -1   // RAM[17] = -1
 Selecting a ROM location (PC = A)
   @17      // A = 17
   0;JMP    // fetch the instruction
            // stored in ROM[17]
The C-instruction
dest = comp ; jump     (both dest and jump are optional)
First, we compute something.
Next, optionally, we can store the result, or use it to jump somewhere to
continue the program execution.
comp: 0, 1, -1, D, A, !D, !A, -D, -A, D+1, A+1, D-1, A-1, D+A, D-A, A-D, D&A, D|A,
      M, !M, -M, M+1, M-1, D+M, D-M, M-D, D&M, D|M
dest: null, A, D, M, MD, AM, AD, AMD
jump: null, JGT, JEQ, JLT, JGE, JNE, JLE, JMP
      (the jump compares comp to zero; if the condition holds, jump to ROM[A])
The C-instruction
dest = comp ; jump
 Computes the value of comp
 Stores the result in dest
 If the jump condition (comparing comp to zero) is true, goto the instruction at
 ROM[A].
The C-instruction
dest = comp ; jump
comp: 0, 1, -1, D, A, !D, !A, -D, -A, D+1, A+1, D-1, A-1, D+A, D-A, A-D, D&A, D|A,
      M, !M, -M, M+1, M-1, D+M, D-M, M-D, D&M, D|M
dest: null, A, D, M, MD, AM, AD, AMD
jump: null, JGT, JEQ, JLT, JGE, JNE, JLE, JMP

Example: set the D register to -1
   D = -1
Example: set RAM[300] to the value of the D register minus 1
   @300
   M = D-1
Example: if ((D-1) == 0) goto ROM[56]
   @56
   D-1; JEQ
Programming reference card
A-command: @value              // set A to value
C-command: dest = comp ; jump  // dest = and ; jump are optional
Where:
comp =
0 , 1 , -1 , D , A , !D , !A , -D , -A , D+1 , A+1 , D-1, A-1 , D+A , D-A , A-D , D&A , D|A,
M , !M , -M , M+1, M-1 , D+M, D-M, M-D, D&M, D|M
dest = M, D, A, MD, AM, AD, AMD, or null
jump = JGT , JEQ , JGE , JLT , JNE , JLE , JMP, or null
In the command dest = comp; jump, the jump materializes if (comp <jump> 0)
is true. For example, in D=D+1;JLT, we jump if D+1 < 0.
The machine language
Two ways to express the same semantics:
 Binary code (machine language)
 Symbolic language (assembly)

symbolic         binary
@17              0000 0000 0001 0001
D+1; JLE         1110 0111 1100 0110

(The symbolic form is translated to binary; the binary form is executed by
the hardware.)
The A-instruction
symbolic: @value      binary: 0 followed by value as a 15-bit binary number
 value is a non-negative decimal number ≤ 2¹⁵ − 1, or
 a symbol referring to such a constant

Example:
@21      0000 0000 0001 0101
The C-instruction

symbolic: dest = comp ; jump
binary:   1 1 1 a c1 c2 c3 c4 c5 c6 d1 d2 d3 j1 j2 j3
          leftmost bit = opcode (1 for a C-instruction), the next two bits
          are not used, followed by the comp bits (a, c1–c6), the dest bits
          (d1 d2 d3, selecting A, D, M), and the jump bits (j1 j2 j3)
The C-instruction
Hack assembly/machine language
Source code (example) Target code
// Computes 1+...+RAM[0] 0000000000010000
// And stored the sum in RAM[1] 1110111111001000
@i 0000000000010001
M=1 // i = 1 1110101010001000
@sum 0000000000010000
M=0 // sum = 0 1111110000010000
(LOOP) 0000000000000000
@i // if i>RAM[0] goto WRITE 1111010011010000
D=M 0000000000010010
@R0 1110001100000001
D=D-M 0000000000010000
@WRITE 1111110000010000
D;JGT assemble 0000000000010001
@i // sum += i 1111000010001000
D=M 0000000000010000
@sum assembler 1111110111001000
M=D+M 0000000000000100
@i // i++ or CPU emulator 1110101010000111
M=M+1 0000000000010001
@LOOP // goto LOOP 1111110000010000
0;JMP 0000000000000001
(WRITE) 1110001100001000
@sum 0000000000010110
D=M 1110101010000111
@R1
M=D // RAM[1] = the sum
(END)
@END We will focus on writing the assembly code.
0;JMP
Working with registers and memory
D: data register
A: address/data register
M: the currently selected memory cell, M=RAM[A]
Programming exercises
Exercise: Implement the following tasks
using Hack commands:
1. Set D to A-1
2. Set both A and D to A + 1
3. Set D to 19
4. D++
5. D=RAM[17]
6. Set RAM[5034] to D - 1
7. Set RAM[53] to 171
8. Add 1 to RAM[7],
   and store the result in D.
Programming exercises
Exercise: Implement the following tasks using Hack commands:
1. Set D to A-1:            D=A-1
2. Set both A and D to A+1: AD=A+1
3. Set D to 19:             @19
                            D=A
4. D++:                     D=D+1
5. D=RAM[17]:               @17
                            D=M
6. Set RAM[5034] to D-1:    @5034
                            M=D-1
7. Set RAM[53] to 171:      @171
                            D=A
                            @53
                            M=D
8. Add 1 to RAM[7],         @7
   and store the result     D=M+1
   in D.
A simple program: add two numbers (demo)
PROGRAM DEVELOPMENT
C++ program
// Your First C++ Program

#include <iostream>

int main() {

std::cout << "Hello World!";

return 0;

}
Java Program
/* This is a simple Java program.
   FileName : "HelloWorld.java". */
class HelloWorld
{
    // Your program begins with a call to main().
    // Prints "Hello, World" to the terminal window.
    public static void main(String args[])
    {
        System.out.println("Hello, World");
    }
}
Development cycle (PDLC)
Define
◦ Scope, objective
Analyze
◦ Input – process - output
Assumptions, deliverables, exceptions
Algorithm development
Coding and Documentation
Testing and Debugging
Maintenance
Algorithm
An algorithm is a sequence of unambiguous instructions for solving a
problem, i.e., for obtaining a required output for any legitimate input
in a finite amount of time
Algorithmic Analysis
 Space complexity
   ◦ How much space is required
 Time complexity
   ◦ How much time does it take to run the algorithm

General Analysis
 Running time depends on the machine and platform;
 however, the analysis should be independent of machine and
 platform.
 Exact values cannot be arrived at.
 Further, running time depends on
   ◦ the amount of input
   ◦ the sequence of input (see the sketch below)
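A small sketch (assumed example, not from the original deck) of why the sequence of input matters: linear search over the same n elements may finish after one comparison or after n, depending on where the target happens to sit.

public class SearchTiming {
    // Returns the number of comparisons rather than the index,
    // to make the dependence on input order visible.
    static int comparisons(int[] a, int target) {
        int count = 0;
        for (int x : a) {
            count++;
            if (x == target) break;
        }
        return count;
    }

    public static void main(String[] args) {
        int[] a = {5, 3, 9, 1, 7};
        System.out.println(comparisons(a, 5)); // 1 comparison  (best case)
        System.out.println(comparisons(a, 7)); // 5 comparisons (worst case)
    }
}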
What does “size of the input” mean?
If we are searching an array, the “size” of the input could be the size of the array

If we are merging two arrays, the “size” could be the sum of the two array sizes

If we are computing the nth Fibonacci number, or the nth factorial, the “size” is
n

We choose the “size” to be the parameter that most influences the actual
time/space required

It is usually obvious what this parameter is

Sometimes we need two or more parameters

General Analysis
 Complexity is measured as a function of the size of the input
 Time is measured as T(n) and space as S(n) for n inputs
 Exponential rates of growth: 2ⁿ, nⁿ, n!
 Polynomial growth: n², n³; logarithmic growth: log(n)
 For large amounts of data, an algorithm with linear growth gives
 better execution time
Time Analysis
 Only active or characteristic operations are to be considered
 What a “characteristic operation” is depends on the particular problem
 If searching, it might be comparing two values
 If sorting an array, it might be:
   comparing two values
   swapping the contents of two array locations
   both of the above
 Sometimes we just look at how many times the innermost loop is
 executed
Time Analysis
 Bookkeeping operations shall be ignored.
 Count the number of active operations
 Ignore the constants and express the count in terms of n,
 keeping only the highest-order term.
Is it possible to find exact values?
It is sometimes possible, in assembly language, to compute exact time and
space requirements

 We know exactly how many bytes and how many cycles each machine
instruction takes

 For a problem with a known sequence of steps (factorial, Fibonacci), we can


determine how many instructions of each type are required

However, often the exact sequence of steps cannot be known in advance

 The steps required to sort an array depend on the actual numbers in the array
(which we do not know in advance)
High level Languages
In a higher-level language (such as Java), we do not know how long each operation takes

Which is faster, x < 10 or x <= 9 ?

We don’t know exactly what the compiler does with this

The compiler probably optimizes the test anyway (replacing the slower version with
the faster one)

In a higher-level language we cannot do an exact analysis

Our timing analyses will use major oversimplifications

Nevertheless, we can get some very useful results


Constant time
Constant time means there is some constant k such that this operation always takes
k nanoseconds

A Java statement takes constant time if:

It does not include a loop

It does not include calling a method whose time is unknown or is not a constant

If a statement involves a choice (if or switch) among operations, each of which takes
constant time, we consider the statement to take constant time

This is consistent with worst-case analysis (see the sketch below)
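A minimal sketch of the rule (an assumed example; the method name is invented): clamp contains no loop and no call of unknown cost, and its if statement chooses between two constant-time branches, so the whole method runs in constant time under worst-case analysis.

public class ConstantTime {
    static int clamp(int x) {
        if (x > 10) {          // constant-time branch
            return 10;
        } else {               // constant-time branch
            return x;
        }
    }

    public static void main(String[] args) {
        System.out.println(clamp(42)); // 10
        System.out.println(clamp(7));  // 7
    }
}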
Linear time
We may not be able to predict to the nanosecond how long a Java program will
take, but we do know some things about timing:

for (i = 0, j = 1; i < n; i++) { j = j * i; }

◦ This loop takes time k*n + c, for some constants k and c

k : How long it takes to go through the loop once


(the time for j = j * i, plus loop overhead)

n : The number of times through the loop (we can use this as the “size” of the
problem)

c : The time it takes to initialize the loop

◦ The total time k*n + c is linear in n


Constant time is (usually) better than linear time
Suppose we have two algorithms to solve a task:

◦ Algorithm A takes 5000 time units

◦ Algorithm B takes 100*n time units

Which is better?

◦ Clearly, algorithm B is better if our problem size is small, that is, if n < 50

◦ Algorithm A is better for larger problems, with n > 50

◦ So B is better on small problems that are quick anyway

◦ But A is better for large problems, where it matters more

We usually care most about very large problems - But not always!
The array subset problem
Suppose you have two sets, represented as unsorted arrays:
int[] sub = { 7, 1, 3, 2, 5 };
int[] superset = { 8, 4, 7, 1, 2, 3, 9 };

and you want to test whether every element of the first set (sub) also occurs in the second set
(superset):
System.out.println(subset(sub, superset));

(The answer in this case should be false, because sub contains the integer 5, and superset
doesn’t)

We are going to write method subset and compute its time complexity (how fast it is)

Let’s start with a helper function, member, to test whether one number is in an array
member
static boolean member(int x, int[] a) {
int n = a.length;
for (int i = 0; i < n; i++) {
if (x == a[i]) return true;
}
return false;
}
If x is not in a, the loop executes n times, where n = a.length
◦ This is the worst case
If x is in a, the loop executes n/2 times on average
Either way, linear time is required: k*n+c
subset
static boolean subset(int[] sub, int[] superset) {
    int m = sub.length;
    for (int i = 0; i < m; i++)
        if (!member(sub[i], superset)) return false;
    return true;
}
The loop (and the call to member) will execute:
◦ m = sub.length times, if sub is a subset of superset
◦ This is the worst case, and therefore the one we are most interested in
◦ Fewer than sub.length times otherwise (but we don’t know how few)
◦ We would need to figure this out in order to compute average time complexity
The worst case is a linear number of times through the loop
But the loop body doesn’t take constant time, since it calls member, which takes linear
time
Analysis of array subset algorithm
We’ve seen that the loop in subset executes m = sub.length times (in the
worst case)
Also, the loop in subset calls member, which executes in time linear in n =
superset.length
Hence, the execution time of the array subset method is m*n, along with
assorted constants
◦ We go through the loop in subset m times, calling member each time
◦ We go through the loop in member n times
◦ If m and n are similar, this is roughly quadratic
What about the constants?
Forget the constants!

An added constant, f(n)+c, becomes less and less important as n gets larger

A constant multiplier, k*f(n), does not get less important, but...

◦ Improving k gives a linear speedup (cutting k in half cuts the time required in half)

◦ Improving k is usually accomplished by careful code optimization, not by better


algorithms

◦ We aren’t that concerned with only linear speedups!


Simplifying the formulae
Throwing out the constants is one of two things we do in analysis of
algorithms
◦ By throwing out constants, we simplify 12n² + 35 to just n²
Our timing formula is a polynomial, and may have terms of various orders
(constant, linear, quadratic, cubic, etc.)
◦ We usually discard all but the highest-order term
◦ We simplify n² + 3n + 5 to just n²
Big O notation
When we have a polynomial that describes the time requirements of an
algorithm, we simplify it by:
◦ Throwing out all but the highest-order term
◦ Throwing out all the constants
If an algorithm takes 12n³ + 4n² + 8n + 35 time, we simplify this formula to just n³
We say the algorithm requires O(n³) time
◦ We call this Big O notation
◦ (More accurately, it’s Big Θ, but we’ll talk about that later)
Big O for subset algorithm
Recall that, if n is the size of the set, and m is the size of the (possible)
subset:
◦ We go through the loop in subset m times, calling member each
time
◦ We go through the loop in member n times

Hence, the actual running time should be k*(m*n) + c, for some


constants k and c

We say that subset takes O(m*n) time


Can we justify Big O notation?
Big O notation is a huge simplification; can we justify it?
◦ It only makes sense for large problem sizes
◦ For sufficiently large problem sizes, the highest-order term swamps all the rest!
Consider R = x² + 3x + 5 as x varies:

x         x²        3x      5   R
0         0         0       5   5
10        100       30      5   135
100       10000     300     5   10,305
1000      1000000   3000    5   1,003,005
10,000    10⁸       3×10⁴   5   100,030,005
100,000   10¹⁰      3×10⁵   5   10,000,300,005
[Plot: y = x² + 3x + 5, for x = 1..10]

[Plot: y = x² + 3x + 5, for x = 1..20]
Common time complexities
BETTER   O(1)        constant time
         O(log n)    log time
         O(n)        linear time
         O(n log n)  log linear time
         O(n²)       quadratic time
         O(n³)       cubic time
WORSE    O(2ⁿ)       exponential time
Algorithm
Repeat for I = 1 to N
    Repeat for J = 1 to N
        SUM ← 0
        Repeat for K = 1 to N
            SUM ← SUM + A[I, K] * B[K, J]
        End Repeat K
        C[I,J] ← SUM
    End Repeat J
End Repeat I
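The same algorithm as runnable Java (a sketch mirroring the pseudocode above; the names follow A, B, C and SUM). Three nested loops over n give the cubic O(n³) growth discussed in the following slides.

public class MatrixMultiply {
    // C = A * B for n x n matrices, as in the Repeat-for pseudocode.
    static int[][] multiply(int[][] a, int[][] b, int n) {
        int[][] c = new int[n][n];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                int sum = 0;                      // SUM <- 0
                for (int k = 0; k < n; k++) {
                    sum += a[i][k] * b[k][j];     // SUM <- SUM + A[I,K] * B[K,J]
                }
                c[i][j] = sum;                    // C[I,J] <- SUM
            }
        }
        return c;
    }

    public static void main(String[] args) {
        int[][] a = {{1, 2}, {3, 4}};
        int[][] b = {{5, 6}, {7, 8}};
        int[][] c = multiply(a, b, 2);
        System.out.println(c[0][0] + " " + c[0][1]); // 19 22
        System.out.println(c[1][0] + " " + c[1][1]); // 43 50
    }
}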
Order Notation
The Big-O notation:
◦ the running time of an algorithm as a function of the size of its
input
◦ worst case estimate
◦ asymptotic behavior
O(n²) means that the running time of the algorithm on an
input of size n is bounded by a quadratic function of n
Big-Oh Notation Definition
Given functions f(n) and g(n), we say that f(n) is O(g(n)) if there are positive
constants c and n₀ such that

f(n) ≤ c·g(n) for n ≥ n₀

Example: 2n + 10 is O(n)
◦ 2n + 10 ≤ c·n
◦ (c − 2)·n ≥ 10
◦ n ≥ 10/(c − 2)
◦ Pick c = 3 and n₀ = 10
More Big-Oh Examples
 7n − 2
   7n − 2 is O(n)
   need c > 0 and n₀ ≥ 1 such that 7n − 2 ≤ c·n for n ≥ n₀
   this is true for c = 7 and n₀ = 1
 3n³ + 20n² + 5
   3n³ + 20n² + 5 is O(n³)
   need c > 0 and n₀ ≥ 1 such that 3n³ + 20n² + 5 ≤ c·n³ for n ≥ n₀
   this is true for c = 4 and n₀ = 21
 3 log n + 5
   3 log n + 5 is O(log n)
   need c > 0 and n₀ ≥ 1 such that 3 log n + 5 ≤ c·log n for n ≥ n₀
   this is true for c = 8 and n₀ = 2
Big-Oh and Growth Rate
The big-Oh notation gives an upper bound on the growth rate of a function
The statement “f(n) is O(g(n))” means that the growth rate of f(n) is no more
than the growth rate of g(n)
We can use the big-Oh notation to rank functions according to their growth
rate

                  f(n) is O(g(n))   g(n) is O(f(n))
g(n) grows more   Yes               No
f(n) grows more   No                Yes
Same growth       Yes               Yes
Big-Oh Rules
If f(n) is a polynomial of degree d, i.e.
f(n) = a₀ + a₁n + a₂n² + … + a_d·nᵈ,
then f(n) is O(nᵈ):
1. Drop lower-order terms
2. Drop constant factors
Use the smallest possible class of functions
◦ Say “2n is O(n)” instead of “2n is O(n²)”
Use the simplest expression of the class
◦ Say “3n + 5 is O(n)” instead of “3n + 5 is O(3n)”
Asymptotic Algorithm Analysis
The asymptotic analysis of an algorithm determines the running time in big-Oh notation
To perform the asymptotic analysis
◦ Find the worst-case number of primitive operations executed as a function
of the input size
◦ Express this function with big-Oh notation
Example:
◦ We determine that algorithm arrayMax executes at most 7n − 1 primitive
operations
◦ We say that algorithm arrayMax “runs in O(n) time”
Since constant factors and lower-order terms are eventually dropped anyhow, we can
disregard them when counting primitive operations
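The deck does not show arrayMax itself; a plausible sketch is below. In the worst case each of the n − 1 loop iterations performs a fixed number of primitive operations (compare, index, assign, increment), which is how a bound like 7n − 1 arises and why arrayMax runs in O(n) time.

public class ArrayMaxDemo {
    static int arrayMax(int[] a) {
        int max = a[0];                       // index + assign
        for (int i = 1; i < a.length; i++) {  // per iteration: compare, increment, ...
            if (a[i] > max) max = a[i];       // ... index, compare, maybe assign
        }
        return max;
    }

    public static void main(String[] args) {
        System.out.println(arrayMax(new int[]{3, 1, 4, 1, 5, 9, 2})); // 9
    }
}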
Important Functions Growth Rates

n     log(n)   n     n log(n)   n²      n³         2ⁿ
8     3        8     24         64      512        256
16    4        16    64         256     4096       65536
32    5        32    160        1024    32768      4.3×10⁹
64    6        64    384        4096    262144     1.8×10¹⁹
128   7        128   896        16384   2097152    3.4×10³⁸
256   8        256   2048       65536   16777216   1.2×10⁷⁷
Types of Analysis
Best Case Analysis

Average Case Analysis

Worst Case Analysis


Types of Analysis
Best case running time is usually useless

Average case time is very useful but often difficult to determine

We focus on the worst case running time

◦ Easier to analyze

◦ Crucial to applications such as games, finance and robotics


Space Complexity
Space complexity = The amount of memory required by an algorithm to run to
completion

◦ [Core dumps = the most often encountered cause is “dangling


pointers”]

Some algorithms may be more efficient if the data is completely loaded into memory

◦ Need to look also at system limitations


◦ E.g. Classify 2GB of text in various categories [politics, tourism, sport,
natural disasters, etc.] – can I afford to load the entire collection?
Space Complexity
Fixed part: The size required to store certain data/variables, that
is independent of the size of the problem:
◦ - e.g. name of the data collection

◦ same size for classifying 2GB or 1MB of texts

Variable part: Space needed by variables, whose size is dependent


on the size of the problem:

◦ - e.g. actual text

◦ - load 2GB of text VS. load 1MB of text


Space Complexity
 Space complexity is the amount of memory used by the algorithm
 (including the input values to the algorithm) to execute and produce
 the result.
 Sometimes Auxiliary Space is confused with Space Complexity. But
 Auxiliary Space is the extra space or the temporary space used by
 the algorithm during its execution.
 Space Complexity = Auxiliary Space + Input space
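A hedged sketch of the distinction (an assumed example): both methods take the same input space for the array, but sumInPlace uses O(1) auxiliary space while sumViaCopy allocates O(n) extra, even though both compute the same result.

public class AuxSpace {
    // O(1) auxiliary space: only a few scalar locals besides the input.
    static int sumInPlace(int[] a) {
        int s = 0;
        for (int x : a) s += x;
        return s;
    }

    // O(n) auxiliary space: builds a temporary copy before summing.
    static int sumViaCopy(int[] a) {
        int[] copy = new int[a.length];   // extra, temporary O(n) storage
        for (int i = 0; i < a.length; i++) copy[i] = a[i];
        int s = 0;
        for (int x : copy) s += x;
        return s;
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4};
        System.out.println(sumInPlace(a)); // 10
        System.out.println(sumViaCopy(a)); // 10
    }
}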
Memory During Execution
 Instruction Space: the amount of memory used to save the compiled
 version of instructions.
 Environmental Stack: sometimes an algorithm (function) may be called
 inside another algorithm (function). In such a situation, the current
 variables are pushed onto the system stack, where they wait for further
 execution, and then the call to the inner algorithm (function) is made.

Memory During Execution
 For example, if a function A() calls function B() inside it, then all the
 variables of the function A() will get stored on the system stack temporarily,
 while the function B() is called and executed inside the function A().
 Data Space: the amount of space used by the variables and constants.
 While calculating the space complexity of an algorithm, we usually
 consider only the Data Space and neglect the Instruction
 Space and the Environmental Stack.
Space Complexity
S(P) = c + S(instance characteristics)
◦ c = constant
Example:

float summation(const float (&a)[10], int n)
{
    float s = 0;
    int i;
    for (i = 0; i < n; i++) {
        s += a[i];
    }
    return s;
}

Space? one for n, one for a [passed by reference!], one
for i → constant space!
Example
Constant: int square(int a) { return a*a; }
Linear:   int sum(int A[], int n) {
              int sum = 0, i;
              for (i = 0; i < n; i++) sum = sum + A[i];
              return sum;
          }
Relatives of Big-Oh
big-Omega
◦ f(n) is Ω(g(n)) if there is a constant c > 0 and an integer
constant n₀ ≥ 1 such that f(n) ≥ c·g(n) for n ≥ n₀
big-Theta
◦ f(n) is Θ(g(n)) if there are constants c′ > 0 and c″ > 0 and an
integer constant n₀ ≥ 1 such that c′·g(n) ≤ f(n) ≤ c″·g(n) for
n ≥ n₀
Examples
O(n²): You go and ask the first person in the class if he has the pen. Then you
ask this person about the other 99 people in the classroom, and so on for each
person.
O(n): Going and asking each student individually is O(n).
O(log n): Now I divide the class into two groups, then ask: “Is it on the left
side, or the right side of the classroom?” Then I take that group, divide
it into two, and ask again, and so on. Repeat the process till you are left with
one student who has your pen. This is what we mean by O(log n). The sketch
below makes the same idea concrete.
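The halve-the-class strategy is exactly binary search. A minimal sketch over a sorted array (assumed example): each iteration discards half of the remaining range, so at most about log₂(n) + 1 steps are needed.

public class BinarySearchDemo {
    static int binarySearch(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;               // split the remaining range in half
            if (sorted[mid] == target) return mid;
            if (sorted[mid] < target) lo = mid + 1;  // keep the right half
            else hi = mid - 1;                       // keep the left half
        }
        return -1; // not found
    }

    public static void main(String[] args) {
        int[] a = {1, 3, 5, 7, 9, 11};
        System.out.println(binarySearch(a, 7)); // 3
        System.out.println(binarySearch(a, 4)); // -1
    }
}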
Problem
int i, j, k = 0;
for (i = n / 2; i <= n; i++) {
    for (j = 2; j <= n; j = j * 2) {
        k = k + n / 2;
    }
}
Answer: O(n log n) — the outer loop runs about n/2 times and the inner loop
about log₂ n times per outer iteration.
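To check the answer empirically (a hedged sketch; the variable names follow the problem), count the inner-loop executions: for n = 2²⁰ the outer loop runs n/2 + 1 = 524,289 times and the inner loop 20 times per pass, so the printed count is 10,485,780 ≈ (n/2)·log₂ n, matching O(n log n).

public class CountOps {
    public static void main(String[] args) {
        int n = 1 << 20;                  // 1,048,576
        long k = 0, iterations = 0;
        for (int i = n / 2; i <= n; i++) {
            for (int j = 2; j <= n; j = j * 2) {
                k = k + n / 2;
                iterations++;
            }
        }
        System.out.println(iterations);   // 10485780 = 524289 * 20
    }
}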
Problem
int a = 0, i = N;
while (i > 0) {
    a += i;
    i /= 2;
}
Answer: O(log N), since i is halved on every iteration.
PROGRAM
EXECUTION
program
 Plain text
 Parsing
 Text to command
 Command to low-level language
C COMPILER
Java compiler
Java byte code
 Viewing a compiled .class file in a text
 editor shows opcodes (e.g. CA, 4C, etc.);
 each opcode has a corresponding mnemonic
 (e.g., aload_0). The opcodes are not readable,
 but we can use javap to see the
 mnemonic form of a .class file.

Java byte code
 "javap -c" prints out disassembled code for each method in the
 class.
 Disassembled code means the instructions that comprise the Java
 bytecodes.
 javap -classpath . -c HelloWorld
javap (JAVA CLASS FILE DISASSEMBLER)
 javap -classpath . -verbose HelloWorld
preprocessor
 Step before the compiler
 Processes the preprocessor commands in the source code
 E.g. #define MAX_ROWS 10
compiler
 A computer program that reads source code and converts it into
 assembly code or executable code
 High-level language to object code
assembler
 An assembler creates object code by translating assembly language
 instructions into opcodes
interpreter
 A translator that translates the source code into machine code
 one line at a time,
 whereas a compiler translates the entire program at once
PYTHON INTERPRETER
linker
Linker uses the object files created by the compiler

Links the predefined library objects to create the executable code


loader
 Loads the executable code into main memory
FUNDAMENTALS OF
OPERATING SYSTEMS
Operating system
 The Operating System acts as a communication bridge (interface)
 between the user and the computer hardware
 Resource manager
 Allocates memory and CPU for running programs
Operating system
Central Processing Unit (CPU)

Bus

Main Memory (RAM)

Secondary Storage Media

I / O Devices
Central processing unit
Central Processing Unit

The “brain” of the computer

Controls all other computer functions

In PCs (personal computers) also called the microprocessor or


simply processor.
bus
Computer components are connected by a bus.

A bus is a group of parallel wires that carry control signals and data
between components.
Main memory
Main memory holds information such as computer programs, numeric
data, or documents created by a word processor.

Main memory is made up of capacitors.

If a capacitor is charged, then its state is said to be 1, or ON.

We could also say the bit is set.

If a capacitor does not have a charge, then its state is said to be 0, or OFF.

We could also say that the bit is reset or cleared.


Main memory
Memory is divided into cells, where each cell contains 8 bits (a 1 or a

0). Eight bits is called a byte.

Each of these cells is uniquely numbered.

The number associated with a cell is known as its address.

Main memory is volatile storage. That is, if power is lost, the


information in main memory is lost.
Main memory
Operations such as

◦ get the information held at a particular address in memory,


known as a READ,

◦ or store information at a particular address in memory, known as


a WRITE.

Writing to a memory location alters its contents.

Reading from a memory location does not alter its contents.


Main Memory (con’t)
• All addresses in memory can be accessed in the same amount of
time.
• We do not have to start at address 0 and read everything until we
get to the address we really want (sequential access).
• We can go directly to the address we want and access the data
(direct or random access).
• That is why we call main memory RAM (Random Access Memory).
Secondary Storage Media
• Disks -- floppy, hard, removable (random access)
• Tapes (sequential access)
• CDs (random access)
• DVDs (random access)
Secondary storage media store files that contain
• computer programs
• data
• other types of information
Called persistent (permanent) storage because it is non-volatile.
I/O (Input/Output) Devices
Information input and output is handled by I/O (input/output) devices.
More generally, these devices are known as peripheral devices.
Examples:
◦ monitor
◦ keyboard
◦ mouse
◦ disk drive (floppy, hard, removable)
◦ CD or DVD drive
◦ printer
◦ scanner

Computer level hierarchy
