Principles of Programming Languages Notes

The document covers the evolution of programming languages from low-level to high-level languages, highlighting significant milestones such as the introduction of object-oriented and functional programming. It also discusses key concepts in programming language design, including syntax, semantics, type checking, and memory management, along with various data types and structures. Additionally, it addresses the importance of scope, lifetime, and garbage collection in programming, providing a comprehensive overview of foundational programming principles.


PRINCIPLES OF PROGRAMMING LANGUAGES

UNIT-1
Evolution of Programming Languages

The evolution of programming languages reflects the growing need for more powerful, flexible,
and efficient ways to communicate instructions to computers. Over time, programming
languages have progressed from low-level languages that interact closely with hardware, to high-
level languages that abstract away many of the complexities of machine operations.

1. Early Programming Languages:

 Machine Language (First Generation): The earliest form of programming, consisting of binary code, was difficult to write and understand. It directly interacted with hardware and was tedious to develop.
 Assembly Language (Second Generation): A symbolic representation of machine code,
assembly allowed developers to use mnemonics instead of binary code, making
programming somewhat easier, but still low-level.
 High-Level Languages (Third Generation): The first high-level languages, such as
Fortran (1957), Lisp (1958), and COBOL (1959), abstracted away machine details,
allowing programmers to focus on solving problems rather than managing hardware.

2. Modern Programming Languages:

 Fourth-Generation Languages (4GL): These languages were developed to reduce the complexity of programming even further. They are often domain-specific and designed to make data manipulation and database management easier.
 Fifth-Generation Languages (5GL): Based on logic and constraints (e.g., Prolog), 5GLs
aim to automate problem-solving rather than simply following a series of commands.

3. Object-Oriented Programming (OOP):

 Popularized in the 1980s by languages like Smalltalk and C++, and later by Java (1995), OOP is based on the concept of objects and classes. It facilitates better modularity and reuse of code.

4. Functional Programming:

 Functional languages like Haskell and Lisp emphasize the use of functions as the primary
means of computation, supporting higher-order functions and immutability.

5. Scripting and Web Development Languages:


 Languages such as Python, JavaScript, and Ruby arose to handle specific tasks like web
development, automation, and data analysis, becoming popular for their simplicity and
ease of use.

Describing Syntax

Syntax refers to the set of rules that defines the structure of statements in a programming
language. It dictates how programs should be written to ensure that they can be correctly parsed
and executed.

1. Formal Grammar:

A formal grammar defines the syntax of a programming language using rules that describe the
structure of its statements and expressions.

 Context-Free Grammar (CFG): A formal grammar where the left-hand side of every
production rule is a single non-terminal symbol. It is used to describe the syntax of
programming languages.

Example:

o A typical production rule: S → aSb | ε (S is a non-terminal, a and b are terminal symbols, and ε denotes the empty string).

Context-Free Grammar is particularly powerful because it can be used to describe the syntax of most programming languages.

2. Syntax vs. Semantics:

 Syntax refers to the structure or form of the expressions, statements, and program units,
without considering meaning.
 Semantics refers to the meaning of these expressions and statements, describing how the
syntax is executed by the computer.

Context-Free Grammars (CFG)

Context-Free Grammars are used to define the syntax rules of a programming language, typically
represented as a set of production rules. Each rule expresses how a non-terminal symbol can be
replaced by a string of terminal and/or non-terminal symbols.

Example of Context-Free Grammar:


Expression → Term | Expression + Term | Expression - Term
Term → Factor | Term * Factor | Term / Factor
Factor → (Expression) | number

This grammar defines the structure of simple arithmetic expressions where +, -, *, /, and
parentheses can be used.

Attribute Grammars

Attribute Grammars extend Context-Free Grammars (CFGs) by adding attributes to the symbols
in the grammar rules. Attributes can carry additional information, such as type information,
computed values, or other properties of the language constructs.

 Synthesized Attributes: These attributes are computed from the children of a non-
terminal symbol.
 Inherited Attributes: These attributes are passed down from the parent or siblings of a
non-terminal symbol.

For example:

 An attribute grammar might be used to assign a type to an expression in a programming language. If an expression consists of an addition operation between two numbers, the synthesized attribute would be the type of the result.
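As a concrete sketch (in Python, with illustrative names), a synthesized type attribute can be computed bottom-up over an expression tree. The widening rule used here (int + float yields float) is an assumption chosen for illustration, not a rule from any particular language:

```python
def typeof(node):
    """Synthesized attribute: a node's type is computed from its children."""
    if isinstance(node, int):
        return "int"
    if isinstance(node, float):
        return "float"
    op, left, right = node            # e.g. ("+", 1, 2.5)
    lt, rt = typeof(left), typeof(right)
    # Illustrative widening rule: int + float -> float
    return "float" if "float" in (lt, rt) else "int"

print(typeof(("+", 1, 2)))    # int
print(typeof(("+", 1, 2.5)))  # float
```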

Describing Semantics

Semantics defines the meaning of the syntax and provides the logic that governs the execution of
statements and expressions in a programming language. The main approaches to semantics are:

1. Operational Semantics:

 Defines the meaning of a language construct by describing how a computation proceeds step-by-step.
 It specifies how each construct should be executed on a machine.

2. Denotational Semantics:

 Describes the meaning of constructs in terms of mathematical objects or functions, rather than focusing on the step-by-step execution.

3. Axiomatic Semantics:
 Defines the meaning of language constructs using logical formulas and proofs. It
provides a formal method for reasoning about program correctness.

Lexical Analysis

Lexical analysis is the first phase of a compiler that breaks the source code into a series of
tokens. A token is a sequence of characters that forms a syntactically valid unit in the language
(e.g., keywords, operators, identifiers).

Process of Lexical Analysis:

1. Input: Raw source code.
2. Output: A stream of tokens (e.g., keywords, variables, constants).
3. Tools: Lexers or scanners are used to perform lexical analysis.
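The token-stream idea can be sketched in a few lines of Python using regular expressions; the token names and categories below are illustrative, not taken from any particular compiler:

```python
import re

# Each pair is (token kind, regular expression). Order matters: earlier
# patterns win when two could match at the same position.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=()]"),
    ("SKIP",   r"\s+"),            # whitespace is recognized but discarded
]

def tokenize(code):
    pattern = "|".join(f"(?P<{name}>{rx})" for name, rx in TOKEN_SPEC)
    for m in re.finditer(pattern, code):
        if m.lastgroup != "SKIP":
            yield (m.lastgroup, m.group())

print(list(tokenize("x = 42 + y")))
# [('IDENT', 'x'), ('OP', '='), ('NUMBER', '42'), ('OP', '+'), ('IDENT', 'y')]
```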

Parsing

Parsing is the process of analyzing a sequence of tokens and constructing a parse tree or abstract
syntax tree (AST) based on the grammar of the language.

 Parse Tree: A tree representation of the syntactic structure of the source code.
 Abstract Syntax Tree (AST): A simplified version of the parse tree that represents the
structure of the program without extraneous details (like parentheses).

Types of Parsing:

 Recursive-Descent Parsing: A top-down parsing technique where each non-terminal is associated with a function that recursively calls functions for its right-hand side. It is simple but can struggle with certain grammars, such as left-recursive ones.

Example of Recursive Descent:

For a grammar rule like A → B C, the recursive descent parser will first parse B, then C.

 Bottom-Up Parsing: A parsing strategy that begins with the input symbols (tokens) and
gradually builds up to the start symbol of the grammar. It works by reducing the tokens
into non-terminals. Shift-Reduce Parsing is a common bottom-up parsing technique
used in most modern parsers.
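A minimal recursive-descent parser for the arithmetic grammar from the Context-Free Grammars section can be sketched in Python. The grammar is first rewritten without left recursion (which recursive descent cannot handle directly), and this sketch evaluates the expression as it parses rather than building a tree:

```python
# Expression -> Term (('+'|'-') Term)*
# Term       -> Factor (('*'|'/') Factor)*
# Factor     -> '(' Expression ')' | number

def parse(tokens):
    pos = 0
    def peek():
        return tokens[pos] if pos < len(tokens) else None
    def eat(tok):
        nonlocal pos
        assert peek() == tok, f"expected {tok}"
        pos += 1
    def expression():
        value = term()
        while peek() in ("+", "-"):
            op = peek(); eat(op)
            value = value + term() if op == "+" else value - term()
        return value
    def term():
        value = factor()
        while peek() in ("*", "/"):
            op = peek(); eat(op)
            value = value * factor() if op == "*" else value / factor()
        return value
    def factor():
        if peek() == "(":
            eat("("); value = expression(); eat(")")
            return value
        value = int(peek()); eat(peek())
        return value
    return expression()

print(parse(["2", "+", "3", "*", "4"]))  # 14 (precedence handled by the grammar)
```

Note how operator precedence falls out of the grammar itself: Term sits below Expression, so * and / bind tighter than + and -.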

Recursion-Tree Method
The recursion-tree method is a technique used to solve recurrence relations, commonly found
in algorithm analysis, especially when analyzing the time complexity of recursive algorithms.

1. Recursion Tree: A tree structure where each node represents a recursive call to the
algorithm. The edges represent the cost at each level.
2. The total cost of the algorithm is the sum of the costs at each level of the tree.

Example:

For a recurrence like T(n) = 2T(n/2) + O(n), the recursion tree would have two subproblems
of size n/2 at each level, and at each level, the total work is linear, O(n).
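This can be checked numerically with a short Python sketch: evaluating the recurrence directly and comparing it against n·log2(n), the level-by-level sum the recursion tree suggests (for powers of two with T(1) = 1, the exact value is n·log2(n) + n):

```python
import math

def T(n):
    """Directly evaluate T(n) = 2*T(n/2) + n with T(1) = 1."""
    if n <= 1:
        return 1
    return 2 * T(n // 2) + n

for n in (8, 64, 1024):
    print(n, T(n), n * math.log2(n))  # T(n) tracks n*log2(n) up to a linear term
```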

Conclusion

 Syntax refers to the structural aspects of a programming language, while semantics describes the meaning behind that structure.
 Context-Free Grammars are used to describe the syntax of programming languages,
and Attribute Grammars extend this by adding additional properties to the syntax.
 Lexical analysis and parsing are essential parts of compilers and interpreters, turning
source code into a structure that can be executed.
 Recursive-descent and bottom-up parsing are the two primary techniques for parsing,
each with its advantages and challenges.
 Understanding the evolution of programming languages and the theories behind them,
such as syntax and semantics, is crucial for designing efficient and reliable software.

UNIT -2
Names, Variables, and Binding

Names:

 In programming, names are used to represent variables, functions, classes, or any other
identifier in the program. Names provide a way to refer to memory locations where
values can be stored or retrieved.
 Naming conventions vary by language and context but generally follow rules like
starting with a letter or underscore, followed by letters, digits, or underscores.

Variables:

 Variables are symbolic names used to store data that can be manipulated during the
execution of a program. A variable’s value can change as the program executes.
 Each variable has a specific data type that dictates what kind of data it can store (e.g.,
integers, strings, etc.).
Binding:

 Binding refers to the association between a name (e.g., a variable name) and an entity
(e.g., a memory location or a value). It occurs when a variable is created or assigned a
value.
 Static Binding: Happens at compile time (e.g., type declarations).
 Dynamic Binding: Happens at runtime (e.g., method calls in polymorphic objects).
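Dynamic binding of a method call can be sketched in Python, where the method that runs is chosen at runtime from the object's actual class (the class and method names below are illustrative):

```python
class Animal:
    def speak(self):
        return "..."

class Dog(Animal):
    def speak(self):
        return "woof"

def describe(a):
    # The name 'speak' is bound to a concrete method only at the call,
    # based on the runtime class of a.
    return a.speak()

print(describe(Dog()))     # woof
print(describe(Animal()))  # ...
```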

Type Checking

 Type checking is the process of verifying that the operations in a program are applied to
the correct data types.
o Static Type Checking: Ensures type correctness at compile time (e.g., statically typed languages like Java or C#).
o Dynamic Type Checking: Ensures type correctness at runtime (e.g., languages
like Python or JavaScript).

Advantages of type checking:

 Reduces errors by enforcing correct usage of data types.
 Improves code maintainability by making the type of data clear to developers.

Scope and Scope Rules

Scope:

 Scope refers to the region of a program where a variable, function, or other identifier is
accessible.
o Local Scope: A variable is accessible only within the block or function where it is
declared.
o Global Scope: A variable is accessible throughout the entire program.
o Block Scope: A variable is accessible only within the block of code (e.g., inside a
loop or a conditional statement).

Scope Rules:

 Lexical Scope (Static Scope): The scope is determined by the structure of the program
and the location where a variable is defined (e.g., in languages like JavaScript, Python).
 Dynamic Scope: The scope is determined by the calling context or stack of the program
at runtime (less common today, but used in languages like older versions of Lisp).
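Lexical scope can be demonstrated with a small Python sketch: the inner function resolves a name in the scope where it was defined, not in the scope of its caller (under dynamic scope, the caller's binding would win instead):

```python
x = "global"

def outer():
    x = "outer"
    def inner():
        return x        # resolved lexically: the x defined in outer()
    return inner

f = outer()
print(f())  # outer  (not "global", and not anything from the call site)
```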
Lifetime and Garbage Collection

Lifetime:

 Lifetime refers to the duration for which a variable exists in memory. It is usually tied to
its scope.
o Automatic Lifetime: Variables in local scope (e.g., function variables) are
automatically created when a function is called and destroyed when the function
exits.
o Static Lifetime: Variables with a fixed lifetime throughout the program execution
(e.g., global variables).
o Dynamic Lifetime: Variables allocated dynamically (e.g., using malloc or new)
remain in memory until explicitly deallocated.

Garbage Collection:

 Garbage collection is an automatic memory management feature that reclaims memory occupied by objects that are no longer in use by the program.
o Reference Counting: Keeps track of the number of references to an object and
frees memory when the reference count drops to zero.
o Mark and Sweep: Marks objects that are reachable and sweeps the rest, freeing
their memory.
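Reference counting can be observed directly in CPython (this behavior is specific to the CPython implementation; sys.getrefcount reports an object's current count, and the call itself temporarily adds one reference):

```python
import sys

data = [1, 2, 3]
before = sys.getrefcount(data)
alias = data                                 # a second reference to the list
print(sys.getrefcount(data) - before)        # 1: the count went up
del alias                                    # drop the extra reference
print(sys.getrefcount(data) - before)        # 0: the count is back down
```

When the last reference disappears, CPython frees the object immediately; a separate cycle collector handles reference cycles that pure counting cannot reclaim.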

Primitive Data Types

Primitive data types are the basic building blocks of data manipulation in a program. They
typically include:

 Integer: Represents whole numbers (e.g., int in C/C++ and Java; Python's int, which supports arbitrary precision).
 Floating-Point Numbers: Represents real numbers with fractional parts (e.g., float,
double in most languages).
 Character: Represents a single character (e.g., char in C, string of length 1 in Python).
 Boolean: Represents true or false values (e.g., bool in C++, boolean in Java).
 Void: Represents an absence of a value, often used for function return types.

Strings

 Strings are sequences of characters, typically used to represent text.
o Immutable Strings: In languages like Java and Python, strings are immutable,
meaning their values cannot be changed after they are created.
o Mutable Strings: Some languages (like C++) allow strings to be modified in
place.

String Operations include:

 Concatenation: Joining two or more strings together.
 Substring: Extracting a part of a string.
 Length: Finding the length of a string.
 Searching: Finding a substring within a string.
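The four operations above can be sketched with Python's built-in string type:

```python
s = "hello" + " " + "world"   # concatenation
sub = s[0:5]                  # substring, via slicing
n = len(s)                    # length
i = s.find("world")           # searching: index of the match, or -1 if absent

print(s, sub, n, i)           # hello world hello 11 6
```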

Array Types

 Arrays are collections of elements, all of the same type, stored in contiguous memory
locations. They can be accessed using indices.
o One-dimensional arrays: A list of elements (e.g., int[] arr = {1, 2, 3};).
o Multi-dimensional arrays: Arrays with more than one dimension (e.g., 2D
arrays, 3D arrays).

Array operations:

 Indexing: Accessing an element using an index.
 Traversal: Iterating through each element.
 Sorting: Arranging elements in a specified order.
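The three operations above, sketched with a Python list standing in for an array:

```python
arr = [3, 1, 2]
first = arr[0]            # indexing: access by position
total = 0
for x in arr:             # traversal: visit each element
    total += x
ordered = sorted(arr)     # sorting: returns a new, sorted list

print(first, total, ordered)  # 3 6 [1, 2, 3]
```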

Associative Arrays

 Associative arrays (also known as dictionaries or hash maps) are data structures where
each element is accessed via a key rather than an index.
o Example: A dictionary in Python: my_dict = {"key1": 10, "key2": 20}.

Operations:

 Insertion: Adding a new key-value pair.
 Deletion: Removing a key-value pair.
 Lookup: Accessing a value based on the key.
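These operations, applied to the dictionary from the example above:

```python
my_dict = {"key1": 10, "key2": 20}
my_dict["key3"] = 30          # insertion: add a new key-value pair
del my_dict["key1"]           # deletion: remove a pair by key
value = my_dict["key2"]       # lookup: access a value by its key

print(my_dict, value)         # {'key2': 20, 'key3': 30} 20
```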

Record Types
 Records (also called structures) are composite data types that group together different
data types under a single name. Each element within a record is called a field or
member.
o Example: A student record may include fields like name, age, and grade.
o In C/C++, records are represented using struct:

struct Student {
    char name[50];
    int age;
    float grade;
};

Union Types

 Union types allow different data types to share the same memory location. A union can
store only one of its members at a time, but the memory space is shared between all
members.
o Example in C:

union Data {
    int int_val;
    float float_val;
    char char_val;
};
o Use cases: Unions are typically used when you need to store different types of
data in the same location but never need to store more than one type at a time.

Pointers and References

Pointers:

 A pointer is a variable that stores the memory address of another variable. Pointers are
used to directly access and manipulate memory.
o Pointer Arithmetic: You can perform arithmetic on pointers, such as
incrementing or decrementing them to point to different memory locations.
o Dereferencing: Accessing the value stored at the memory address a pointer
points to.

Example in C:

int a = 10;
int *p = &a; // p stores the address of a
printf("%d", *p); // Dereferencing p to get the value of a

References:
 A reference is an alias for another variable. Unlike pointers, references cannot be null
and do not require dereferencing.
o References are safer than pointers because they guarantee that a valid object is
always referred to.
o In languages like C++, references are used to pass arguments to functions by
reference rather than by value.

Example in C++:

int a = 10;
int &ref = a; // ref is a reference to a
ref = 20; // a is now 20 because ref refers to a

Summary

 Variables, binding, and types provide structure and organization to data in a program,
defining how values are stored, accessed, and manipulated.
 Scope governs the accessibility of variables, while lifetime defines how long a variable
exists in memory.
 Garbage collection ensures that unused memory is reclaimed, promoting memory
efficiency.
 Data types, including primitive types, arrays, records, unions, and
pointers/references, provide diverse ways to handle and organize data. Each data type
has its unique features and use cases, making it crucial for developers to understand their
differences and how to utilize them properly.

Arithmetic Expressions

 Arithmetic expressions involve mathematical operations such as addition, subtraction, multiplication, division, and modulus.
 Operators used in arithmetic expressions include:
o Addition (+)
o Subtraction (-)
o Multiplication (*)
o Division (/)
o Modulus (%)
 Example:

int a = 5, b = 3;
int result = a + b; // result is 8
 Operator Precedence: The order in which operations are performed in an expression.
For example, multiplication has higher precedence than addition.
o Parentheses () can be used to explicitly specify the order of evaluation.
o Operators like *, /, and % have higher precedence than + and -.
Overloaded Operators

 Operator overloading allows you to define custom behavior for operators (like +, -, *)
when applied to user-defined types such as classes or structures.
 Example in C++: Overloading the + operator for a complex number class.

class Complex {
public:
    int real, imag;
    Complex operator+(const Complex& obj) {
        Complex temp;
        temp.real = real + obj.real;
        temp.imag = imag + obj.imag;
        return temp;
    }
};
 Important Points:
o Not all operators can be overloaded (e.g., ::, .).
o The behavior of the operator should be intuitively clear (e.g., + should represent
addition).

Type Conversions

 Type conversions (also known as type casting) involve changing a variable’s type from
one to another. These conversions can be either implicit or explicit.
o Implicit Conversion: Automatically done by the compiler when a smaller data
type is converted into a larger one (e.g., int to float).
 Example: int a = 5; float b = a; (implicit conversion from int to
float).
o Explicit Conversion: Requires the programmer to specify the conversion, usually
through casting.
 Example: float a = 5.5; int b = (int) a; (explicit conversion
using casting).
 Example:

double a = 3.14;
int b = static_cast<int>(a); // b is 3
 Common Type Conversions:
o Widening: Converting a smaller type to a larger type (e.g., int to double).
o Narrowing: Converting a larger type to a smaller type (e.g., double to int), may
cause data loss.

Relational and Boolean Expressions

 Relational Expressions are used to compare two values or variables. They evaluate to
either true or false.
o Operators used:
 Equal to (==)
 Not equal to (!=)
 Greater than (>)
 Less than (<)
 Greater than or equal to (>=)
 Less than or equal to (<=)

Example:

int a = 5, b = 10;
bool result = a < b; // result is true

 Boolean Expressions use logical operators to combine or negate conditions.
o Logical operators:
 AND (&&)
 OR (||)
 NOT (!)

Example:

bool x = true, y = false;
bool result = x && y; // result is false

Assignment Statements

 Assignment statements assign a value to a variable using the assignment operator (=).
o Example: int x = 5; assigns the value 5 to the variable x.
 Mixed-mode assignments occur when the type of the variable on the left-hand side differs from the type of the value being assigned to it.
o Example: float a = 5.5; int b = a; (assigning a floating-point value to an integer variable; the fractional part is truncated, so b becomes 5).

Control Structures

Control structures allow the programmer to dictate the flow of execution within the program.
They include:

Selection (Conditional Statements):

 Selection allows different sections of code to be executed based on a condition.
o if-else statement: Executes one block of code if the condition is true, and another if the condition is false.

if (x > 5) {
    // Code to execute if x is greater than 5
} else {
    // Code to execute if x is not greater than 5
}
o switch-case statement: Executes one out of many blocks of code based on the value of a variable.

switch (x) {
    case 1:
        // Code for case 1
        break;
    case 2:
        // Code for case 2
        break;
    default:
        // Code for default case
        break;
}

Iteration (Loops):

 Iteration allows a block of code to be repeated multiple times based on a condition.
o for loop: A loop that runs a specific number of times.

for (int i = 0; i < 10; i++) {
    // Code to execute 10 times
}

o while loop: A loop that continues as long as a condition is true.

while (x < 10) {
    // Code to execute as long as x is less than 10
}

o do-while loop: Similar to a while loop, but the condition is checked after executing the block of code.

do {
    // Code to execute
} while (x < 10);

Branching:

 Branching in programming allows for the control of flow to different parts of the
program based on conditions.
o Can be achieved using if-else, switch, or ternary operators (i.e., condition ?
true_value : false_value).

Guarded Statements:

 Guarded statements are conditional statements used to control the flow of execution
with explicit conditions.
o In certain programming languages, guard clauses are used at the beginning of
functions to handle exceptional or edge cases.
o Example in pseudo-code:

if (input is invalid) {
    return error;
}
// Proceed with normal execution

Summary

 Arithmetic expressions are used to perform mathematical operations and are evaluated
based on operator precedence.
 Overloaded operators allow custom behavior for standard operators in user-defined
types.
 Type conversions allow conversion between different data types, with implicit or
explicit casting.
 Relational and boolean expressions are used for comparison and logical operations.
 Assignment statements assign values to variables, and mixed-mode assignments
involve type conversion.
 Control structures like selection, iteration, branching, and guarded statements help
control the flow of the program based on conditions.

UNIT -3
Subprograms:

Subprograms (also known as functions, procedures, or methods) are blocks of code that
perform a specific task and can be invoked (called) from different parts of a program. The use of
subprograms helps in code reusability, organization, and readability.

Design Issues for Subprograms:

When designing subprograms, several issues need to be addressed, including:

1. Parameter Passing: Determining how information will be passed to and from the
subprogram.
2. Local Referencing: Deciding how variables and other resources within the subprogram
will interact with those outside of it.
3. Overloaded Methods: Allowing the same method name to perform different tasks based
on the types or numbers of parameters.
4. Generic Methods: Designing methods that work with a variety of data types (generics).
5. Side Effects: Deciding whether the subprogram should alter the state of the program,
such as modifying global variables or input/output operations.

Local Referencing:
 Local variables are variables that are declared and used inside a subprogram (function or
method). They are not accessible outside the subprogram.
 Global variables, on the other hand, are accessible throughout the entire program.

Key Points:

 Local variables are typically stack-allocated and are automatically destroyed when the
subprogram exits.
 Scope of local variables is limited to the subprogram.
 Lifetime of a local variable is tied to the duration of the subprogram execution.

Example:

void exampleFunction() {
int localVar = 5; // This is a local variable
// localVar can be used here
}
// localVar is not accessible outside exampleFunction()

 Global variables: Should be used cautiously as they can be accessed and modified from anywhere in the program, leading to unintended side effects.

int globalVar = 10; // Global variable

void exampleFunction() {
    globalVar = 20; // Modifying global variable
}

Parameter Passing:

When a subprogram is called, parameters (inputs) are passed to it. The design of parameter
passing affects how the function behaves and interacts with the calling code.

There are three main ways to pass parameters:

1. Pass by Value:
o The actual value of the argument is passed to the function.
o Any changes made to the parameter within the function do not affect the original
value.

Example:

void increment(int x) {
x = x + 1; // This change is local to the function
}

int main() {
int num = 5;
increment(num); // num remains 5 after the function call
}

2. Pass by Reference:
o The address of the argument is passed, allowing the function to modify the
original value.
o Changes made to the parameter will affect the original argument.

Example:

void increment(int &x) {
    x = x + 1; // This change affects the original argument
}

int main() {
int num = 5;
increment(num); // num becomes 6 after the function call
}

3. Pass by Pointer (C/C++ Specific):
o Similar to pass-by-reference, but explicitly uses a pointer to access the argument’s memory address.
o Useful in languages like C where pass-by-reference is not available.

Example:

void increment(int *x) {
    *x = *x + 1; // Modify the value pointed by x
}

int main() {
int num = 5;
increment(&num); // num becomes 6 after the function call
}

Overloaded Methods:

 Method overloading refers to the ability to define multiple methods with the same name
but different parameter types or numbers of parameters.
 The compiler determines which method to call based on the arguments passed.

Key Points:

 Overloading can be done by varying the number of parameters or the types of parameters.
 Overloaded methods should perform conceptually similar tasks to avoid confusion.

Example (Overloading by Number of Arguments):

void print(int num) {
    cout << "Integer: " << num << endl;
}

void print(double num) {
    cout << "Double: " << num << endl;
}

int main() {
print(5); // Calls print(int)
print(3.14); // Calls print(double)
}

Example (Overloading by Type of Arguments):

void print(int num) {
    cout << "Integer: " << num << endl;
}

void print(string text) {
    cout << "String: " << text << endl;
}

int main() {
print(10); // Calls print(int)
print("Hello"); // Calls print(string)
}

Generic Methods:

 Generic methods are methods that can operate on objects of any type.
 These methods are defined using generics (or templates in C++), and they allow code to
be more reusable.
 The advantage of using generic methods is that you can write a single method that works
with different data types.

Example in C++ (using Templates):

template <typename T>
T add(T a, T b) {
    return a + b;
}

int main() {
int intResult = add(5, 10); // Adds two integers
double doubleResult = add(3.5, 2.5); // Adds two doubles
}

 Template parameters (e.g., T in the example) act as placeholders for actual data types
that will be specified when the function is called.
Design Issues for Functions:

1. Function Length: Ideally, functions should be small and perform a single task. Large
functions should be split into smaller ones for better readability and maintainability.
2. Function Name: The function name should clearly describe what the function does.
Good naming conventions improve code readability.
3. Function Parameters: The number of parameters should be minimal. If a function
requires many parameters, it might indicate that the function is doing too much and could
be split.
4. Return Type: Functions should return meaningful data. If a function has no meaningful
data to return, it can return void.
5. Side Effects: Side effects (such as modifying global variables or doing I/O operations)
should be minimized. Ideally, functions should have predictable and transparent
behavior.
6. Recursion: Functions that call themselves (recursion) can be powerful but should be used
carefully to avoid excessive memory use and stack overflow.

Summary:

 Subprograms are blocks of code designed for specific tasks, improving code reuse,
organization, and readability.
 Parameter passing can be done by value, reference, or pointer, each with different
implications for how data is handled.
 Overloaded methods allow the same function name to be used with different types or
numbers of parameters.
 Generic methods can handle different data types, enhancing the flexibility and
reusability of code.
 Proper function design involves considering factors like function length, naming,
parameters, return types, and side effects to make the code clear, maintainable, and
efficient.

Semantics of Call and Return:

The semantics of a function or subprogram call and return refer to the rules that define how the
program's execution flow proceeds when a subprogram is called, how parameters are passed,
how control is transferred to the subprogram, and how it returns to the calling program.
Understanding these semantics is crucial for implementing and managing subprograms.

1. Call Semantics:
o When a subprogram is called, the program saves the current execution context
(including the location of the next instruction, the calling function's state, and
local variables) onto the stack or another control structure.
oParameters are passed to the subprogram (by value, reference, or pointer).
oThe program transfers control to the subprogram, which begins execution.
2. Return Semantics:
o When the subprogram completes its execution, the control returns to the calling
function.
o The program restores the saved context from the call, including the return
address (where to continue execution after the subprogram).
o The subprogram may return a value (if it is not a void function) or may just end
its execution without returning anything.
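The save/transfer/restore cycle can be illustrated with a toy Python model (this is a sketch of the idea, not how a real runtime is implemented): a plain list plays the role of the run-time stack, with one "frame" pushed per call and popped on return.

```python
stack = []   # toy model of the run-time stack

def call(func, *args):
    stack.append({"func": func.__name__, "args": args})  # save context
    result = func(*args)                                 # transfer control
    stack.pop()                                          # restore on return
    return result                                        # hand back the value

def add(a, b):
    return a + b

result = call(add, 2, 3)
print(result, stack)  # 5 []  -- every frame pushed has been popped
```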

Implementing Simple Subprograms:

A simple subprogram can be implemented using the following steps:

1. Function Definition: Define the subprogram with a name, return type (if any), and
parameters.
2. Function Call: Call the subprogram from another part of the program, passing necessary
arguments.
3. Return (optional): Optionally return a value from the subprogram to the caller.

Example (C++):

#include <iostream>
using namespace std;

// Simple subprogram that adds two numbers
int add(int a, int b) {
    return a + b; // Return the sum of a and b
}

int main() {
int result = add(5, 10); // Call the 'add' function
cout << "Result: " << result << endl; // Output: Result: 15
}

Here, the add subprogram is called in main, and the return value is used.

Stack and Dynamic Local Variables:

Local variables inside a subprogram are typically stored on the stack. This means:

1. Stack Allocation: When a subprogram is called, a new stack frame is created to hold the
local variables, parameters, and return address.
2. Dynamic Local Variables: These are variables created dynamically (e.g., via new in C++
or malloc in C), often in the heap rather than the stack. Dynamic memory allocation
allows for variables to exist beyond the lifetime of the function call.

Stack-based Local Variables:

 Local variables in a subprogram are automatically destroyed when the subprogram exits.
They have automatic storage duration.

Example:

void example() {
    int localVar = 5; // Local variable stored on the stack
    // localVar is valid only within the 'example' function
}

When example is called, localVar is created, and when the function finishes execution,
localVar is destroyed automatically.

Dynamic Local Variables:

 These variables are created using dynamic memory allocation (e.g., new in C++), and
their memory is managed manually by the programmer.
 The lifetime of these variables extends beyond the function's scope, unlike stack
variables, and must be explicitly deallocated using delete or free.

Example (C++):

void example() {
    int* ptr = new int(10); // Dynamically allocated memory
    // Use ptr inside the function
    delete ptr; // Free the dynamically allocated memory
}

Nested Subprograms:

A nested subprogram refers to a subprogram (or function) defined inside another subprogram.
Some programming languages support this, while others do not.

 Scope of variables in nested subprograms can be tricky. For example, a variable in the
outer function can be accessed by the inner function if the inner function is defined inside
the outer one.

Example (Python supports nested functions):

def outerFunction():
    x = 10  # Local variable in the outer function

    def innerFunction():
        print(x)  # Inner function can access 'x' from the outer function

    innerFunction()  # Calling the nested function

outerFunction()  # Output: 10

Here, innerFunction can access x, which is local to outerFunction.

Blocks:

In some languages, you can define blocks of code (typically enclosed in curly braces {}) that
create a scope for variables.

 Block-level scoping refers to defining a set of instructions that can be executed together,
and the variables declared within that block are local to it.

Example:

void example() {
    int x = 5;
    { // Start of a new block
        int y = 10;
        cout << x << " " << y << endl; // x and y are accessible here
    } // End of block, y is destroyed here
    // y is not accessible outside this block
}

In this case, y is only accessible inside the inner block, and its scope ends when the block
finishes.

Dynamic Scoping:

Dynamic scoping refers to how the scope of variables is determined by the calling context rather
than the textual or lexical structure of the program. This is in contrast to lexical scoping, where
the scope is determined by the physical structure of the program (i.e., where the function is
defined in the code).

 In dynamic scoping, a variable’s value is looked up in the most recent call on the stack.
If a variable is not found in the current function, the search continues to the calling
functions.
Dynamic scoping can lead to unpredictable results since the variable bindings change depending
on the call stack at runtime. This is less common in modern programming languages, as lexical
scoping (where the scope is determined by the location in the source code) is more predictable.

Example (in a dynamically scoped language like old versions of Lisp):

(define x 10) ; Global variable

(define (foo)
  (print x)) ; Prints the value of x from the environment at runtime

(define (bar)
  (define x 20) ; x is redefined locally
  (foo)) ; Will print 20, because dynamic scoping looks up the most recent value of x

(bar) ; Output: 20

Here, foo uses x from the dynamic call environment of bar, even though x was not defined in
foo itself.

Summary:

 Semantics of Call and Return describe how a program manages control transfer,
parameter passing, and returning values during function calls.
 Stack and Dynamic Local Variables: Stack variables are temporary and tied to the
subprogram’s execution, while dynamic variables persist beyond the subprogram’s
lifetime and require explicit memory management.
 Nested Subprograms allow subprograms to be defined within other subprograms, with
variable access governed by scoping rules.
 Blocks define local scopes that are valid within the block, and variables within them are
destroyed once the block ends.
 Dynamic Scoping assigns variable values based on the call stack, leading to runtime
variable lookups, as opposed to lexical scoping, which is determined by the code
structure.

Understanding these concepts is crucial for efficiently managing memory, variable scope, and
subprogram execution in programming languages.

UNIT – 4
Object-Orientation (OO) – Design Issues for OOP Languages

Object-Oriented Programming (OOP) is a programming paradigm that organizes software design
around objects rather than functions and logic. An object is an instance of a class, which
contains both data (attributes) and methods (functions) that operate on the data. Below are some
design issues and aspects to consider when implementing OOP languages:

1. Encapsulation: This refers to bundling the data (attributes) and methods that operate on
the data into a single unit or class. It allows access to the data only through specific
methods, which helps protect data integrity by preventing direct access to the internal
state.
2. Inheritance: Inheritance allows new classes to derive from existing ones, inheriting the
attributes and methods of the parent class. This facilitates code reuse and supports
hierarchical class structures.
3. Polymorphism: Polymorphism enables objects to be treated as instances of their parent
class, allowing methods to be used interchangeably. It is essential for dynamic method
binding, where the method called is determined at runtime.
4. Abstraction: Abstraction involves hiding complex implementation details and exposing
only essential features. It allows programmers to focus on high-level functionality while
delegating low-level details.
5. Design Patterns: Common solutions to recurring problems (e.g., Singleton, Factory,
Observer, Strategy) are essential to improve code maintainability and flexibility.
6. Class Design: Decisions on how to structure classes, their relationships, and access
control (public, private, protected) are critical for effective object-oriented design.
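The first four principles above can be sketched in Python; the Animal/Dog/Cat class names are illustrative, not from the text:

```python
class Animal:
    def __init__(self, name):
        self._name = name            # encapsulation: state kept behind methods

    def get_name(self):
        return self._name            # access only through this accessor

    def speak(self):                 # abstraction: subclasses supply the details
        raise NotImplementedError

class Dog(Animal):                   # inheritance: Dog reuses Animal's code
    def speak(self):
        return f"{self.get_name()} says Woof"

class Cat(Animal):
    def speak(self):
        return f"{self.get_name()} says Meow"

# polymorphism: the same speak() call behaves differently per object
for pet in (Dog("Rex"), Cat("Tom")):
    print(pet.speak())
```

Running the loop prints "Rex says Woof" and then "Tom says Meow", because the method actually invoked is chosen by the object's class at runtime.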

Implementation of Object-Oriented Constructs

When implementing OOP constructs in a programming language, key issues arise related to the
translation of OOP concepts into actual executable code.

1. Object Representation: In memory, objects are typically represented as instances of
classes, with memory allocated for the object’s attributes and methods.
2. Dynamic Binding: This is the process of determining the method or function to be
invoked at runtime. Polymorphic method calls are resolved dynamically, which can
increase flexibility but may incur runtime overhead.
3. Message Passing: In object-oriented languages, objects interact with one another via
message passing, which typically involves calling methods on objects. This is central to
object communication.
4. Object Creation and Destruction: The creation of objects (via constructors) and their
destruction (via destructors or garbage collection) must be handled efficiently to ensure
resource management and proper object lifecycle.
5. Memory Management: The language must have strategies to manage object allocation
and deallocation to avoid memory leaks or dangling references, particularly in languages
without garbage collection.
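Object representation, dynamic binding, and message passing can be modelled together in a toy interpreter: each "class" holds a method table, each "object" holds its attributes plus a class pointer, and methods are looked up at call time. This is only a simplified sketch of the idea (real languages use compiled vtables); all names here are made up:

```python
def make_class(methods, parent=None):
    # A class is just a method table plus an optional parent class
    return {"methods": methods, "parent": parent}

def lookup(cls, name):
    """Walk the class chain at call time -- this is dynamic binding."""
    while cls is not None:
        if name in cls["methods"]:
            return cls["methods"][name]
        cls = cls["parent"]
    raise AttributeError(name)

def send(obj, name, *args):
    """Message passing: find the method, then call it on the receiver."""
    return lookup(obj["class"], name)(obj, *args)

Shape = make_class({"area": lambda self: 0})
Square = make_class({"area": lambda self: self["side"] ** 2}, parent=Shape)

sq = {"class": Square, "side": 4}   # object = attributes + class pointer
print(send(sq, "area"))             # → 16 (Square's method overrides Shape's)
```

Because `lookup` runs at every call, replacing `sq`'s class pointer changes which `area` is invoked — the runtime overhead the text mentions for polymorphic calls.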

Concurrency in Object-Oriented Languages


Concurrency refers to the ability of a system to run multiple tasks or processes simultaneously.
This is important for multi-threaded or distributed systems, and it must be handled carefully in
OOP languages.

1. Concurrency Design Issues:
o Ensuring that multiple threads can safely access shared data.
o Handling synchronization to avoid race conditions and deadlocks.
o Managing resources efficiently to prevent resource contention.
2. Semaphores: A semaphore is a synchronization primitive used to control access to
shared resources in a concurrent system. It is typically used to signal whether a resource
is available or not.
o Binary Semaphores: Can only take values 0 or 1, functioning like a lock.
o Counting Semaphores: Used to control access to a limited number of resources.

Example (Pseudocode):

semaphore s = 1; // Initializing semaphore to 1 (resource available)

P(s) { // Wait operation
    while (s == 0)
        ; // Wait until the resource is free (a real implementation blocks the thread)
    s--; // Decrement the semaphore and access the shared resource
}

V(s) { // Signal operation
    s++; // Increment the semaphore (release the resource)
}
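The same P/V pattern is available directly in many runtimes; a sketch using Python's threading.Semaphore (the shared list and worker names are illustrative):

```python
import threading

sem = threading.Semaphore(1)   # binary semaphore: one resource available
shared = []                    # shared resource protected by the semaphore

def worker(i):
    sem.acquire()              # P operation: wait until the resource is free
    try:
        shared.append(i)       # critical section: exclusive access
    finally:
        sem.release()          # V operation: release the resource

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(shared))          # → [0, 1, 2, 3]
```

acquire() and release() correspond to P and V; the `try/finally` guarantees the semaphore is released even if the critical section raises.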

3. Monitors: A monitor is a higher-level synchronization construct, typically used in
object-oriented programming. It combines the shared data and the operations that can
access the data into a single module, which helps manage concurrency and ensures
mutual exclusion.
o Only one thread can execute a monitor method at a time.
o Monitors often include condition variables to manage waiting and signaling
between threads.
4. Message Passing: Message passing is another form of synchronization in which threads
or processes communicate by sending messages to each other. This is commonly used in
distributed systems or in systems where threads do not share memory directly.

Example (Message Passing):

o Thread A sends a message to Thread B.
o Thread B processes the message and sends a response back to Thread A.

Message passing can be synchronous or asynchronous depending on whether the sending
thread waits for a response.
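The Thread A / Thread B exchange described above can be sketched with Python's thread-safe queues (the queue and function names are made up for illustration):

```python
import threading
import queue

requests = queue.Queue()    # messages from A to B
responses = queue.Queue()   # replies from B to A

def thread_b():
    msg = requests.get()                  # Thread B receives the message
    responses.put(f"processed {msg}")     # ...processes it, and replies

worker = threading.Thread(target=thread_b)
worker.start()

requests.put("hello")       # Thread A sends a message
reply = responses.get()     # synchronous: A blocks here waiting for the reply
worker.join()
print(reply)                # → processed hello
```

Because `responses.get()` blocks, this models the synchronous case; Thread A could instead poll or continue working to get asynchronous behaviour.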
Threads:

Threads are the smallest unit of execution within a process. They allow a program to perform
multiple operations concurrently within a single process.

1. Thread Creation: Threads can be created within a process to run tasks concurrently.
2. Thread Synchronization: Since multiple threads may access shared resources,
synchronization techniques (such as semaphores and monitors) are necessary to prevent
race conditions.
3. Multithreading: Many modern languages and operating systems support
multithreading, where multiple threads run simultaneously to perform parallel
computations.
4. Thread Safety: To avoid issues like race conditions, threads must be carefully
synchronized using locks, semaphores, or other concurrency mechanisms.
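Thread creation, synchronization, and thread safety can be seen together in a small Python sketch: several threads increment a shared counter, and a lock prevents the race condition (variable names are illustrative):

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:            # synchronization: one thread in this section at a time
            counter += 1      # counter += 1 is not atomic without the lock

# Thread creation: four threads run increment() concurrently
threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                  # wait for all threads to finish
print(counter)                # → 40000 (without the lock, often less)
```

Removing the `with lock:` line makes the final count nondeterministic, which is exactly the race condition the synchronization techniques above are meant to prevent.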

Statement Level Concurrency:

This involves parallelizing independent statements in the program to run them simultaneously.
This can significantly improve performance, especially on multi-core processors.

 Compiler Optimization: Compilers may attempt to parallelize independent statements to
take advantage of multi-core processors.
 Synchronization: Statements that modify shared data must be synchronized to avoid
conflicts.
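Independent statements with no shared data can be handed to a pool of workers explicitly; a sketch using Python's concurrent.futures (the `square` function is illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x   # independent computation: no shared state to synchronize

# The three calls below have no data dependencies on one another,
# so the executor is free to run them on separate worker threads.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(square, [2, 3, 4]))

print(results)     # → [4, 9, 16]
```

`pool.map` preserves the input order of the results even though the individual calls may complete in any order.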

Exception Handling:

Exception handling is a mechanism for dealing with runtime errors or exceptional conditions in
a program. It allows a program to recover gracefully from unexpected situations like division by
zero, file not found, or invalid input.

1. Try-Catch Blocks:
o Try: Contains the code that may throw an exception.
o Catch: Catches the exception and defines the response or handling for the
exception.

Example:

try {
    int result = divide(10, 0); // Division by zero
} catch (const std::exception& e) {
    std::cout << "Error: " << e.what() << std::endl; // Handling the exception
}

2. Throw: Exceptions can be raised using the throw keyword, allowing the program to
generate an error condition:

throw std::runtime_error("Division by zero");

3. Custom Exceptions: Developers can define custom exceptions to handle application-
specific errors.
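The try/throw/custom-exception pattern looks much the same in Python, where `raise`/`except` play the roles of `throw`/`catch`; the exception class name here is illustrative:

```python
class DivisionByZeroError(Exception):
    """Application-specific (custom) exception."""

def divide(a, b):
    if b == 0:
        raise DivisionByZeroError("Division by zero")   # the "throw"
    return a / b

try:                                   # try: code that may raise an exception
    result = divide(10, 0)
except DivisionByZeroError as e:       # catch: handle the error gracefully
    result = None
    print(f"Error: {e}")               # → Error: Division by zero
```

Defining the custom class lets callers catch this specific failure while letting unrelated exceptions propagate.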

Event Handling:

Event handling is a programming paradigm where the flow of execution is determined by events
such as user actions (clicking a button, pressing a key) or system-generated events (timer
expiration, external signals).

1. Event-Driven Programming: In GUI applications or systems that interact with users,
event-driven programming is used to handle events.
2. Event Handlers: Functions or methods are bound to specific events. When an event
occurs, the corresponding handler is executed.

Example (in GUI-based systems):

button.onClick = handleButtonClick;

void handleButtonClick() {
// Perform some action when the button is clicked
}

The program waits for the user interaction (event) and executes the handler when the
event occurs.
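The bind-then-dispatch pattern above can be sketched with a minimal event dispatcher in Python; all names (`on`, `emit`, the event string) are made up for illustration:

```python
handlers = {}   # event name -> list of bound handler functions

def on(event, handler):
    """Bind a handler to an event (like button.onClick = ...)."""
    handlers.setdefault(event, []).append(handler)

def emit(event, *args):
    """Fire an event: run every handler bound to it."""
    for handler in handlers.get(event, []):
        handler(*args)

clicks = []
on("button_click", lambda: clicks.append("clicked"))   # register the handler

emit("button_click")   # simulate the user clicking the button
print(clicks)          # → ['clicked']
```

In a real GUI toolkit the `emit` calls come from an event loop that waits for user input; the program's structure is otherwise the same.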

Summary:

 Object-Oriented Programming (OOP) focuses on designing programs using objects
that combine data and methods, emphasizing encapsulation, inheritance, polymorphism,
and abstraction.
 Concurrency involves handling multiple threads or processes that run in parallel, using
constructs like semaphores, monitors, and message passing to ensure safe access to
shared resources.
 Exception Handling provides a way to manage runtime errors, allowing programs to
recover from unexpected conditions.
 Event Handling is used in GUI-based or interactive systems to manage user or system-
generated events.

The combination of these techniques helps create efficient, robust, and scalable applications.

UNIT – 5
Introduction to Lambda Calculus

Lambda calculus is a formal system in mathematical logic and computer science for expressing
computation based on function abstraction and application. It is the foundation of functional
programming languages, where functions are treated as first-class citizens.

Key Concepts:

 Lambda Expression: A function is defined using the lambda symbol λ. For example,
λx.x+1 is a function that takes an argument x and returns x + 1.
 Function Application: Applying a function to an argument. For example, (λx.x+1) 5
applies the function λx.x+1 to the argument 5, resulting in 6.
 Abstraction: The process of defining a function. For example, λx.x+2 abstracts the
operation x + 2.
 Reduction: The process of simplifying expressions. For instance, ((λx.x+2) 3) reduces
to 3 + 2, which is 5.

In lambda calculus, everything is a function, including numbers, arithmetic, and logical
operations.
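Python's lambdas mirror these ideas directly, so the examples above can be run as code; the Church-encoded booleans at the end illustrate the "everything is a function" point:

```python
succ = lambda x: x + 1           # λx.x+1
print(succ(5))                   # → 6  (function application)

print((lambda x: x + 2)(3))      # → 5  (reduction of ((λx.x+2) 3))

# Even booleans can be encoded purely as functions (Church encoding):
TRUE = lambda a: lambda b: a     # λa.λb.a  -- selects its first argument
FALSE = lambda a: lambda b: b    # λa.λb.b  -- selects its second argument
print(TRUE("yes")("no"))         # → yes
```

Here `TRUE` and `FALSE` behave as a conditional with no `if` statement at all, which is how lambda calculus builds logic from abstraction and application alone.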

Fundamentals of Functional Programming Languages

Functional programming (FP) is a paradigm where computation is treated as the evaluation of
mathematical functions and avoids changing state and mutable data. Key features of functional
programming languages include:

1. First-Class Functions: Functions can be passed as arguments to other functions, returned
as values, and assigned to variables.
o Example: In the functional language Scheme, a function can be passed as an
argument:

(define (apply-fn f x) (f x))
(apply-fn (lambda (x) (+ x 2)) 3) ; result is 5
2. Immutability: Data is immutable by default, meaning that once a variable is set, its value
cannot be changed.
3. Pure Functions: Functions that have no side effects and return the same output for the
same input.
4. Recursion: Since loops are not used in functional programming, recursion is often used
as the primary control structure to repeat operations.
o Example: Factorial function in a functional style:

(define (factorial n) (if (= n 0) 1 (* n (factorial (- n 1)))))
5. Higher-Order Functions: Functions that take other functions as arguments or return
them.
o Example: Map is a higher-order function that applies a function to each element
of a list:

(define (map f lst)
  (if (null? lst) '() (cons (f (car lst)) (map f (cdr lst)))))
(map (lambda (x) (* x 2)) '(1 2 3)) ; result is (2 4 6)
6. Lazy Evaluation: Expressions are only evaluated when needed, which allows for the
construction of infinite data structures and deferred computation.

Programming with Scheme

Scheme is a minimalist, functional programming language that is a dialect of Lisp. It is widely
used in education for teaching computer science concepts. The key features of Scheme are:

1. Simple Syntax: Scheme’s syntax is very simple, consisting primarily of lists enclosed in
parentheses. For example:
o Function definition:

(define (square x) (* x x))
2. First-Class Functions: Functions can be treated as values. They can be passed around,
returned from other functions, and assigned to variables.
o Example:

(define f (lambda (x) (+ x 2)))
(f 3) ; result is 5
3. Recursion: Scheme encourages recursive programming. Iteration is done via recursion,
as Scheme does not have traditional for or while loops.
o Example:

(define (sum n)
  (if (= n 0)
      0
      (+ n (sum (- n 1)))))
(sum 5) ; result is 15
4. Macros: Scheme allows powerful macros to define new syntactic constructs, providing
flexibility in designing the language.
o Example of a simple macro:

(define-syntax-rule (when condition expr)
  (if condition expr))
5. Lazy Evaluation: Scheme supports lazy evaluation through constructs like delay and
force.
o Example of lazy evaluation:

(define x (delay (+ 2 2))) ; Defers evaluation
(force x) ; Evaluates the expression
6. Garbage Collection: Scheme handles memory management automatically, and
developers do not need to manage memory manually.

Programming with ML
ML (Meta Language) is a functional programming language known for its strong type system
and emphasis on immutability. It is widely used for teaching programming and implementing
compilers. Some important aspects of ML include:

1. Type System: ML uses a statically-typed system with type inference. Types are inferred
automatically, so explicit type declarations are often unnecessary.
o Example:

let square x = x * x

In this case, ML infers that x is an integer based on the * operation.

2. Pattern Matching: ML has powerful pattern matching for simplifying complex
conditional logic. It is used extensively in defining recursive functions.
o Example:

let rec factorial n =
  match n with
  | 0 -> 1
  | n -> n * factorial (n - 1)
3. Immutability: In ML, values are immutable by default. To create mutable variables,
references are used.
o Example of using references:

let x = ref 0
let () = x := !x + 1
4. Higher-Order Functions: Like Scheme, ML supports first-class functions. Functions
can be passed as arguments, returned as values, and stored in data structures.
o Example of a higher-order function:

let apply f x = f x
let square x = x * x
apply square 5 (* result is 25 *)
5. Modules and Functors: ML has a module system for organizing code into reusable
components. Functors are functions that take modules as arguments and return new
modules.
6. Garbage Collection: ML includes automatic garbage collection, freeing the programmer
from manual memory management.

Summary

 Lambda Calculus serves as the foundation for functional programming, with its
emphasis on functions as first-class citizens and simple operations like abstraction and
application.
 Functional Programming focuses on immutability, recursion, higher-order functions,
and the avoidance of side effects, offering powerful tools for concise and elegant code.
 Scheme is a minimalistic functional language that emphasizes simplicity and recursion,
commonly used in education.
 ML is a statically-typed functional language with strong support for pattern matching,
higher-order functions, and type inference.
Both Scheme and ML provide rich environments for functional programming, demonstrating
core concepts such as recursion, higher-order functions, and type systems. These languages
emphasize clarity, elegance, and mathematical rigor in programming.

Introduction to Logic and Logic Programming

Logic Programming is a programming paradigm based on formal logic. In logic programming,
you declare facts and rules that define the problem, and then the system uses logical inference to
find solutions. The most well-known logic programming language is Prolog (Programming in
Logic), which is used extensively in artificial intelligence, natural language processing, and
theorem proving.

Key Concepts in Logic Programming:

1. Facts: Basic assertions about the world that are considered true. For example:

parent(john, mary).
parent(mary, susan).

This means John is a parent of Mary, and Mary is a parent of Susan.

2. Rules: These are logical statements that define relationships between facts. They have the
form:

X is a grandparent of Y if X is a parent of Z and Z is a parent of Y.

In Prolog, this is represented as:

grandparent(X, Y) :- parent(X, Z), parent(Z, Y).

This rule defines that X is a grandparent of Y if X is a parent of Z and Z is a parent of Y.

3. Queries: These are questions asked to the logic system to find solutions based on the
facts and rules provided. For example:

?- grandparent(john, susan).

The system will try to derive grandparent(john, susan) using the available facts and
rules.

4. Unification: The process by which the logic system matches patterns and finds variable
bindings to satisfy a query. Unification allows Prolog to answer questions like "who is a
grandparent of Susan?"

5. Backtracking: Prolog uses backtracking to search through possible solutions
systematically. If a rule doesn't work or leads to a dead-end, Prolog will backtrack to try
other possibilities.

Programming with Prolog


Prolog is a high-level programming language specifically designed for logic programming. It
allows programmers to define relations and infer new facts from existing ones. Prolog is
particularly useful in fields like artificial intelligence, expert systems, and natural language
processing.

Basic Syntax and Constructs in Prolog

1. Facts: Facts are statements about the world that are always true.
o Example:

likes(alice, pizza).
likes(bob, ice_cream).

2. Rules: Rules define relationships between facts.
o Example:

loves(X, Y) :- likes(X, pizza), likes(Y, pizza).

This rule says "X loves Y if both X and Y like pizza."

3. Queries: Queries are used to ask questions about the data.
o Example:

?- loves(alice, bob).

Prolog will check whether loves(alice, bob) follows from the facts and rules (here
alice and bob are atoms rather than variables, since they begin with lowercase letters).

4. Variables: In Prolog, variables begin with an uppercase letter (e.g., X, Y), and they can
represent unknown values that Prolog will try to infer.
5. Lists: Lists are an important data structure in Prolog. Lists are enclosed in square
brackets and can contain other lists.
o Example:

member(X, [X|_]).
member(X, [_|T]) :- member(X, T).

This rule defines the member/2 predicate, which checks if an element is a member
of a list.

6. Recursion: Like functional programming, recursion is a common technique in Prolog for
processing lists and other data structures.
o Example:

sum([], 0).
sum([Head|Tail], Result) :- sum(Tail, TailResult), Result is Head + TailResult.

Example Program in Prolog: Family Relationships

% Facts
parent(john, mary).
parent(mary, susan).
parent(john, mike).
% Rule
grandparent(X, Y) :- parent(X, Z), parent(Z, Y).

% Query
?- grandparent(john, susan).

Output: true.

Prolog uses the facts and the rule to answer the query. It finds that John is a grandparent of Susan
because John is a parent of Mary, and Mary is a parent of Susan.

Advantages of Prolog

 Declarative Nature: You specify what should be done, not how it should be done.
Prolog handles the "how" (i.e., searching for solutions) internally.
 Inference Mechanism: Prolog’s backtracking and unification mechanisms make it a
powerful tool for solving problems that involve complex relationships, such as puzzles,
reasoning, and expert systems.

Multi-Paradigm Languages

A multi-paradigm programming language is one that supports more than one programming
paradigm, such as imperative, object-oriented, functional, and logic programming. Multi-
paradigm languages provide flexibility by allowing developers to choose the most suitable
paradigm for different aspects of the problem.

Some common multi-paradigm languages include:

1. Python: Supports object-oriented, functional, and imperative programming.
o Example of functional programming in Python:

def square(x):
    return x * x

o Example of object-oriented programming:

class Animal:
    def __init__(self, name):
        self.name = name
    def speak(self):
        print(f"{self.name} makes a sound")
2. Scala: Combines object-oriented programming with functional programming.
o Example of functional programming in Scala:

val square = (x: Int) => x * x

o Example of object-oriented programming:

class Animal(name: String) {
  def speak() = println(s"$name makes a sound")
}
3. JavaScript: Supports event-driven, imperative, and functional programming paradigms.
o Example of functional programming in JavaScript:

const square = x => x * x;

o Example of object-oriented programming:

class Animal {
  constructor(name) {
    this.name = name;
  }
  speak() {
    console.log(`${this.name} makes a sound`);
  }
}

Benefits of Multi-Paradigm Languages

1. Flexibility: Developers can choose the most appropriate paradigm for a given task,
increasing productivity.
2. Code Reusability: Multi-paradigm languages allow combining reusable components
from different paradigms (e.g., functions and objects).
3. Rich Libraries: They typically offer rich libraries and frameworks that cater to different
programming paradigms.

Challenges

1. Complexity: Multi-paradigm languages can be more difficult to learn and master, as they
require understanding multiple paradigms and how to use them effectively.
2. Inconsistent Syntax: The syntax and concepts across paradigms may sometimes clash or
lead to inconsistent code.

Summary

 Logic Programming is a paradigm where programs are written in terms of facts and
rules, and computation is performed through logical inference. Prolog is the most well-
known logic programming language.
 Prolog is particularly suited for tasks that involve complex relationships and reasoning,
such as AI and expert systems.
 Multi-paradigm languages combine features from multiple programming paradigms
(e.g., object-oriented, functional, logic), providing flexibility and allowing developers to
choose the most appropriate approach for different tasks. Examples include Python,
Scala, and JavaScript.

You might also like