PPL Unit-03

UNIT-03

Fundamentals of Subprograms
Subprograms are reusable blocks of code designed to perform a specific task. The
fundamentals of subprograms involve understanding their definition, structure, and
functionality within a programming language.
Here’s a breakdown of the key fundamentals of subprograms:

1. Definition of a Subprogram
A subprogram is a named, independent block of code that performs a specific task. It is
encapsulated as a unit to be invoked (called) by other parts of a program. Subprograms
promote reusability and modularity, enabling more organized and maintainable code.
It can be a function (returns a value) or a procedure (may not return a value).
Examples:
• Functions in C (int add(int a, int b)) and Python (def add(a, b))
• Procedures in Pascal (procedure add(a: integer; b: integer))
• Methods in Java, C++ (public int add(int a, int b))

2. Components of a Subprogram
A subprogram typically has the following components:

• Name: The identifier used to call the subprogram (e.g., add())
• Parameters: Input values passed to the subprogram
• Body: The implementation (code) of the subprogram
• Return Type: (For functions) The data type of the value returned
• Invocation: The process of calling the subprogram
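The components above can be seen together in a short Python sketch (the `add` function here is illustrative):

```python
def add(a, b):       # Name: "add"; Parameters: a and b
    result = a + b   # Body: the implementation
    return result    # Return value (Python infers its type)

total = add(2, 3)    # Invocation: calling the subprogram by name
```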

3. Characteristics of Subprograms
1. Single Entry Point
o A subprogram always starts execution from a specific entry point.
2. Local Variables
o Subprograms can have local variables that are created when the subprogram is
called and destroyed when it finishes.
3. Independent Execution
o Subprograms execute independently of the main program and can be reused
in different parts of a program.
4. Return Control
o When a subprogram completes execution, control returns to the point where
it was called.

4. Types of Subprograms
There are two primary types of subprograms:

• Function: A subprogram that returns a value. Example: int add(int a, int b)
• Procedure: A subprogram that performs an action but does not return a value. Example: void display()

5. Advantages of Using Subprograms


1. Modularity:
• Helps break a large program into smaller, manageable parts.
• Promotes a divide-and-conquer approach in software development.
2. Code Reusability:
• Subprograms can be reused in multiple places, reducing code duplication.
3. Readability:
• Simplifies complex programs by abstracting repetitive tasks, making the main
program more concise.
4. Ease of Maintenance:
• Modifications can be made in one place (the subprogram) instead of multiple
locations in the program.
5. Debugging and Testing:
• Errors are easier to isolate and fix when the program is divided into subprograms.
• Individual subprograms can be tested independently.
6. Supports Recursion:
• Allows a subprogram to call itself, facilitating solutions to problems like factorial
calculation, Fibonacci series, etc.

6. Lifecycle of a Subprogram Call


1. Declaration:
• The subprogram must be defined before it can be called.
• Example:
• int square(int x); // Declaration in C
2. Call:
• The subprogram is invoked using its name and necessary arguments.
• Example:
• int result = square(5); // Call
3. Execution:
• The control transfers to the subprogram, and its statements are executed.
4. Return:
• The subprogram returns control to the caller, optionally sending back a value.
5. Termination:
• Once execution is complete, the subprogram terminates, freeing any resources it
used.

7. Subprogram Invocation
There are two ways a subprogram can be invoked:
1. Direct Invocation
o The subprogram is called directly by its name.
o Example: int result = add(5, 3);
2. Recursive Invocation
o A subprogram can call itself (recursion).
o Example:
int factorial(int n) {
    if (n == 0) return 1;
    else return n * factorial(n - 1);
}
Design Issues of Subprograms (Simplified)
When designing subprograms, several key decisions must be made about how they
work and how they interact with the rest of the program. Here are the major design issues
explained:

1. Parameter Passing
How do we send data to a subprogram?
• Defines how data is shared between the caller and the subprogram.
• Issue: Affects performance, safety, and flexibility. Inefficient methods can slow
execution, and unsafe methods (like pass-by-reference) can lead to side effects.
• By Value: The subprogram gets a copy of the data (changes don’t affect the original).
• By Reference: The subprogram gets the actual variable (changes affect the original).
Example:
void change(int x) { x = 10; } // By Value
void change(int &x) { x = 10; } // By Reference

2. Return Values
How does the subprogram return a result to the caller?
• Functions return a value using the return statement.
• Procedures may not return a value.
• Allows a subprogram to send results back to the caller.
• Issue: Affects flexibility and functionality. Single return values are limiting, while
supporting multiple returns or complex types adds complexity.
Example:
int add(int a, int b) { return a + b; } // Function with return
void display() { printf("Hello!"); } // Procedure without return

3. Local Variables
How does the subprogram handle its own variables?
• Subprograms have local variables that are created when the subprogram is called
and destroyed when it ends.
• Issue: Affects memory usage, flexibility, and functionality. Static variables retain
values, while dynamic variables enable recursion.
Example:
void calculate() {
    int num = 5; // Local variable
    System.out.println(num * num); // Output: 25
}

4. Recursion
Can a subprogram call itself?
• Some languages support recursion, where a subprogram can solve a problem by
calling itself.
• Issue: Impacts memory and performance. Deep recursion may cause stack overflow,
and iterative solutions may be more efficient.
Example:
def factorial(n):
    if n == 0:
        return 1
    return n * factorial(n - 1)

5. Overloading
Can multiple subprograms have the same name but different parameters?
• Some languages allow overloading, where the same subprogram name is used with
different parameter types or numbers.
Example:
int add(int a, int b) { return a + b; }
float add(float a, float b) { return a + b; }

6. Generic Subprograms
Can a subprogram work with different data types?
• Generic subprograms allow a single subprogram to handle multiple data types.
Example:
template <typename T>
T add(T a, T b) { return a + b; }

7. Type Checking for Parameters


• Ensures that actual and formal parameters match in type.
• Issue: Prevents runtime errors and ensures reliability.
• Examples: Java's strict type-checking.
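Statically typed languages such as Java reject a mismatched call at compile time; in a dynamically typed language like Python the mismatch surfaces only when the call runs. A small illustrative sketch:

```python
def square(x: int) -> int:   # annotations document the intended types
    return x * x

print(square(4))   # a matching actual parameter works fine

try:
    square("4")    # a mismatched type fails only at runtime in Python
except TypeError:
    print("type mismatch caught at runtime")
```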

8. Nested Subprogram Definitions


• Allows subprograms to be defined within other subprograms.
• Issue: Improves modularity and scope control by limiting access to certain helper
functions.
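Python allows nested definitions; in this illustrative sketch, `helper` is visible only inside `process`, which is exactly the scope-control benefit described above:

```python
def process(data):
    def helper(x):          # helper exists only inside process
        return x * 2
    return [helper(item) for item in data]

# helper cannot be called from here; only process can reach it
```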

9. Exception Handling
• Manages errors during subprogram execution.
• Issue: Affects robustness and clarity. Without built-in handling, errors must be
manually managed, increasing complexity.
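In languages with built-in exception handling, a subprogram can catch its own errors instead of forcing every caller to check for them. A minimal Python sketch (the `safe_divide` name is illustrative):

```python
def safe_divide(a, b):
    try:
        return a / b
    except ZeroDivisionError:
        return None   # the error is handled inside the subprogram
```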

Scope and Lifetime


Scope defines where a variable can be accessed, and Lifetime defines how long it exists in
memory.

1. Scope – Where a Variable is Accessible


There are two types of scope:

Static Scope (Lexical Scope)


• The variable's scope is determined at compile-time based on the code structure.
• Most modern languages (e.g., C, Java, Python) use static scope.
Example:
int x = 10; // Global variable (static scope)
void myFunction() {
    int y = 5; // Local variable (static scope)
    printf("%d", y);
}

Dynamic Scope
• The variable's scope is determined at runtime based on the calling sequence.
• Rarely used in modern languages (used in older languages like LISP).
Example (Concept):
(defun myFunction ()
  (print x)) ; Uses 'x' from the calling environment

2. Lifetime – How Long a Variable Exists


• Static Lifetime: The variable exists throughout the entire program (e.g., global
variables).
• Dynamic Lifetime: The variable is created when a subprogram starts and destroyed
when it ends (e.g., local variables).
Example:
void myFunction() {
    int x = 5; // Dynamic lifetime
}

Local Referencing Environments


A local referencing environment refers to the set of variables and their bindings that are
accessible within a specific subprogram or block of code. It defines how variables are
looked up when used within a local scope.

Key Concepts:
1. Local Environment:
The variables declared within a subprogram (like a function or block) are part of its
local referencing environment.
2. Binding:
A binding connects a variable to its value in a specific environment. The variable’s
value is looked up based on the current environment.

Example of Local Referencing Environment:


def example():
    x = 10  # x is part of the local environment of 'example'
    print(x)  # the variable x is accessed in the local environment
In this example, x is only accessible within the example() function’s local environment.

Dynamic vs Static Referencing Environments


• Static (Lexical) Environment: The variable binding is determined by where the code
is written (e.g., the function’s scope).
• Dynamic Environment: The variable binding depends on the call stack (e.g., when
calling functions during execution).
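Python is lexically scoped, so a function's free variables are bound where the function is written, not where it is called. A sketch:

```python
x = "global"

def outer():
    x = "outer"
    def inner():
        return x      # bound lexically to outer's x
    return inner

f = outer()
f()  # finds outer's x even though the call happens at top level;
     # under dynamic scope, it would find the caller's x instead
```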

Parameter Passing Methods


When passing parameters to subprograms (functions or procedures), there are different
methods to decide how the data is transferred. Here are the 5 main parameter passing
methods:

1. Pass-by-Value
• What happens: A copy of the parameter value is passed. The original value is not
changed.
• Use case: When you don’t want the original data to be modified.
Example:
void addTen(int num) {
    num = num + 10; // num is changed locally
}

int main() {
    int x = 5;
    addTen(x); // x remains 5 because it's passed by value
    printf("%d", x); // Output: 5
}

2. Pass-by-Reference
• What happens: The actual variable is passed, so changes to the parameter affect the
original variable.
• Use case: When you want the subprogram to modify the original data.
Example:
void addTen(int &num) {
    num = num + 10; // num is modified directly
}

int main() {
    int x = 5;
    addTen(x); // x is now 15 because it's passed by reference
    printf("%d", x); // Output: 15
}

3. Pass-by-Result
• What happens: The parameter is passed uninitialized to the subprogram; the subprogram
only writes to it, and its final value is copied back to the caller's variable when the
subprogram ends (out mode, as in Ada's out parameters).
• Use case: When you want to compute a value inside the subprogram and hand it back
through a parameter.
Example (C++ has no true pass-by-result; a reference parameter is used here to
approximate the copy-back effect):
void setResult(int &num) {
    num = 10; // num is only written, never read
}

int main() {
    int x;
    setResult(x); // the result is copied into x
    printf("%d", x); // Output: 10
}
4. Pass-by-Name
• What happens: The actual expression is passed (not the value). The expression is
evaluated each time it’s used in the subprogram.
• Use case: Used in some languages like Algol (not common today).
Example:
procedure multiply(a, b) {
    print(a * b);
}

multiply(x + 2, x + 3); // a and b are re-evaluated each time they are used
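Pass-by-name can be emulated in Python by wrapping each argument in a zero-argument lambda (a thunk) that is re-evaluated on every use; this sketch mirrors the Algol-style example above:

```python
def multiply(a, b):
    return a() * b()   # each thunk is re-evaluated when it is used

x = 4
result = multiply(lambda: x + 2, lambda: x + 3)  # (4 + 2) * (4 + 3)
```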

5. Pass-by-Value-Result
• What happens: A copy of the argument is passed in (as in pass-by-value); when the
subprogram ends, the final value of that copy is copied back to the original variable.
• Use case: The subprogram works on a local copy, so the caller's variable is not
affected until the subprogram returns.
Example (conceptual; C itself is strictly pass-by-value, so the copy-back shown here
happens only in languages with value-result semantics, such as Ada's in out parameters):
void addTen(int num) {
    num = num + 10; // changes the local copy
}

int main() {
    int x = 5;
    addTen(x); // under value-result semantics, num's final value
               // is copied back into x when addTen returns
    printf("%d", x); // Output: 15 (under pass-by-value-result)
}
Parameters that are Subprograms
Subprograms (functions or procedures) can be passed as parameters to other subprograms.
This enables higher flexibility by allowing dynamic execution based on the passed
subprogram.
• Function as a Parameter: A function can be passed to another function, enabling it
to be called with varying behaviour.
• Procedure as a Parameter: A procedure can be passed to another procedure to
execute custom actions.
This technique is commonly used in callback mechanisms and higher-order functions for
dynamic behaviour in programs.
Benefits of Passing Subprograms as Parameters:

• Flexibility: Allows you to write more general and reusable code, where you can pass
different operations or behaviors to functions.
• Dynamic behavior: Enables dynamic execution of code (e.g., passing different sorting
or filtering algorithms to a function).
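A minimal Python illustration of a subprogram passed as a parameter (the names are illustrative):

```python
def apply_twice(func, value):
    return func(func(value))   # the passed-in subprogram is called twice

def increment(n):
    return n + 1

result = apply_twice(increment, 5)   # increment(increment(5))
```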
Issues:
Two primary challenges arise when passing subprograms as parameters:
• Type Checking: When you pass a subprogram as a parameter, the compiler or
interpreter needs to verify that the subprogram being passed matches the expected
type (in terms of input and output).
o C/C++: Functions themselves cannot be passed, but pointers to functions,
including their protocol, can be passed and type-checked.
o Fortran 95+: Provides mechanisms for specifying and checking parameter
types of passed subprograms.
• Referencing Environment: Determines the variables accessible within the passed
subprogram during execution.
• Solution:
o Shallow Binding: The subprogram’s environment is determined by the
environment at the point where the subprogram is called. This is typically
seen in dynamically scoped languages.
o Deep Binding: The subprogram’s environment is determined by the
environment at the point where the subprogram is defined. This is often used
in lexically scoped languages.
o Ad Hoc Binding: The subprogram’s environment is determined by the
environment at the point where the subprogram is passed as a parameter,
based on the specific context.

Calling Subprograms Indirectly


Indirectly calling a subprogram means invoking a subprogram without directly naming it in
the code. Instead of calling a function or procedure by its name, you use a reference,
pointer, or other mechanism to execute it. This can provide flexibility, allowing subprograms
to be selected and executed dynamically at runtime. There are several methods of calling
subprograms indirectly, depending on the programming language. Here are some common
techniques:
1. Function Pointers (in languages like C and C++):
• In C and C++, you can use function pointers to store the address of a function and
then call it indirectly.
How it works: A function pointer holds the memory address of a function, and you can
use this pointer to invoke the function at a later time.
2. First-Class Functions (in languages like Python, JavaScript, and Ruby):
• In languages like Python or JavaScript, functions are first-class citizens, meaning you
can treat functions as objects. You can store them in variables, pass them as
parameters, and invoke them dynamically.
How it works: A function can be assigned to a variable or passed as an argument, and
then that variable is used to call the function indirectly.
3. Callbacks:
• A callback is a function passed into another function as an argument, which is then
called inside that function. This is a common technique for indirect function calls,
especially in asynchronous or event-driven programming.
How it works: The "caller" function accepts another function (the callback) as a
parameter and calls it at the appropriate time.
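A small Python sketch of the callback pattern (the fetch here is a stand-in for a real I/O operation):

```python
def fetch_data(on_done):
    data = [1, 2, 3]   # stand-in for data obtained from I/O
    on_done(data)      # the caller-supplied function is invoked indirectly

results = []
fetch_data(lambda data: results.append(sum(data)))
```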
4. Using Objects or Classes (in Object-Oriented Languages):
• In object-oriented languages (like Java, C++, or Python), you can call methods on
objects indirectly using references or pointers.
How it works: An object reference can point to a method, and that method can be
invoked indirectly.
5. Reflection/Introspection (in some languages like Java or Python):
• Some languages (like Java and Python) allow reflection or introspection, which
enables you to get information about classes and methods at runtime. You can use
this feature to call methods indirectly by their names.
How it works: You can dynamically find and invoke methods based on their names as
strings, even if the methods aren’t directly referenced in the code.
This allows flexibility, as the specific subprogram to be called can be decided at runtime.
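In Python, `getattr` looks a method up by its name given as a string, so the subprogram to run can be chosen at runtime (the class and method names here are illustrative):

```python
class Greeter:
    def hello(self):
        return "hi"

g = Greeter()
method_name = "hello"             # could come from config or user input
method = getattr(g, method_name)  # find the method by name at runtime
greeting = method()               # indirect invocation
```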

Polymorphism in Subprograms:
Polymorphism refers to the ability to use a single interface or function with
different types of data.
Types of Polymorphism:
1. Ad Hoc Polymorphism (Overloaded Subprograms):
• Overloaded subprograms provide ad hoc polymorphism.
2. Parametric Polymorphism (Generic Subprograms):
• Parametric polymorphism is achieved when a subprogram is generic and can work
with any type of data.

Overloaded Subprograms
Overloaded subprograms are subprograms (functions or procedures) that share the same
name but differ in parameters—either by number or type. This allows you to use the same
name for different tasks, making the code more readable and concise.

Key Points:
• Overloading is based on the parameter list, not the return type.
• It allows a subprogram to perform similar actions on different types or numbers of
inputs.
• Same name, different number of parameters
A subprogram can have the same name but different numbers of parameters.
• Same name, different parameter types
A subprogram can have the same name but different types of parameters (e.g., one
accepts int and another accepts float).
Languages Supporting Overloaded Subprograms: C++, Java, C#, Ada
Example of Overloading:
class OverloadExample {
    void display(int a) {
        System.out.println("Integer: " + a);
    }
    void display(double a) {
        System.out.println("Double: " + a);
    }
    void display(int a, int b) {
        System.out.println("Two Integers: " + a + " and " + b);
    }
    public static void main(String[] args) {
        OverloadExample obj = new OverloadExample();
        obj.display(5);
        obj.display(3.14);
        obj.display(10, 20);
    }
}

Advantages of Overloading:
1. Improved Readability: You can use the same name for similar functions that operate on
different data types, improving code clarity.
2. Code Reusability: Overloading allows you to reuse the same function name for different
tasks based on the arguments provided.
3. Flexibility: It allows the programmer to write flexible and concise code without creating
numerous differently named functions.
Challenges:
1. Complexity: Overloading can make it difficult to understand which function is being
called, especially when the parameter types or numbers are very similar.
2. Ambiguity: In some cases, if the parameters are not clearly distinguishable, it can cause
ambiguity, leading to errors.

Generic Subprograms
Generic subprograms are subprograms (functions or procedures) that can work with
different data types without needing to rewrite them for each type. Instead of specifying a
particular data type, you define the subprogram to work with any data type.

How It Works:
• A generic subprogram uses placeholders for data types (often called type
parameters) and allows type flexibility.
• These subprograms can be used with different data types like integers, floats, or
even custom types.
• Generic subprograms enable parametric polymorphism, allowing the subprogram to
operate on data of any type, and the type is determined when the subprogram is
called.

Example:
A generic subprogram can be written using a placeholder for the type.
In C++ (using templates): In C++, you can define a generic function using templates. A
template is a way to write a function that can work with any data type.
template <typename T>
T add(T a, T b) {
    return a + b;
}

int main() {
    int result1 = add(5, 3);        // T is deduced as int
    double result2 = add(2.5, 3.5); // T is deduced as double
}

In Python (using dynamic typing):


def add(a, b):
    return a + b

print(add(5, 3))      # Works with integers
print(add(2.5, 3.5))  # Works with floats

Java, Ada, and C# use generics to define generic subprograms.


Benefits:

• Generic subprograms allow you to write reusable and flexible code, reducing
redundancy by working with multiple data types in a single subprogram.
• Reusability: You can use the same subprogram with different types, avoiding the
need to write multiple versions of the same function.
• Type Safety: Generic subprograms allow for type checking at compile time, ensuring
that type-related errors are caught early.
• Flexibility: Generic subprograms provide the flexibility to work with various types
without the need for casting or converting data types.

Design Issues for Functions


When designing functions, there are a few important decisions to make to ensure that the
function is efficient, readable, and reusable. Here are the main design issues to consider:

1. Function Name
• The function name should clearly describe what the function does.
• Choose a name that’s meaningful and easy to understand.
Example: calculateSum() is better than func().

2. Parameter Handling
• How many parameters does the function need?
• Should it use default values for some parameters?
• Types of parameters (e.g., integers, strings) should be chosen based on the function’s
task.
3. Return Type
• Decide whether the function will return a value (e.g., a number or string) or nothing
(void in some languages).
• The return type should match the purpose of the function.
Example:
• int addNumbers() returns an integer.
• C: Functions can return any type except arrays and functions. Arrays can be handled
using pointers.

4. Function Length
• Functions should be short and focused on doing one thing well. Long functions can
become hard to understand and maintain.
• Break down large tasks into smaller, simpler functions.

5. Error Handling
• Decide how the function will handle errors or unexpected inputs. Should it return an
error code, raise an exception, or handle it internally?

6. Number of Returned Values

• Most Languages (e.g., C, C++): Allow a single value to be returned. Multiple values
can be simulated using structures, classes, or references.
• Ruby: Allows a method to return multiple values. If a method has more than one
expression in its return statement, the values are returned as an array.
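Python, like Ruby, lets a subprogram hand back several values at once, packed into a tuple; a sketch:

```python
def min_max(values):
    return min(values), max(values)   # two values returned as a tuple

lo, hi = min_max([3, 1, 4])           # unpacked by the caller
```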

User-Defined Overloaded Operators


User-defined overloaded operators allow you to redefine how operators (like +, -, *, etc.)
behave for custom data types (such as classes or structures). This lets you use operators in a
more natural way with objects of your own types.
How It Works:
• Normally, operators work on built-in types (like int, float).
• Overloading these operators allows you to use them with user-defined types (like
objects of a class).
• You define custom behaviour for the operator, telling it how to interact with your
objects.
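In Python, defining `__add__` on a class overloads the `+` operator for its objects; a minimal sketch with an illustrative Vector class:

```python
class Vector:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):   # custom behaviour for the + operator
        return Vector(self.x + other.x, self.y + other.y)

v = Vector(1, 2) + Vector(3, 4)  # + now works on Vector objects
```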

Closures in Programming Languages

• A closure is a subprogram combined with its referencing environment, allowing the
subprogram to be executed in a context where it can access variables from its
defining scope, even if that scope is no longer active.
• Variables from the defining scope (referencing environment) must remain accessible
even after the scope that created them has ended. This requires those variables to have
unlimited extent (lifetimes that can span the entire program).
Example:
function makeAdder(x) {
    return function(y) {
        return x + y;
    };
}

// Using the closure
var add10 = makeAdder(10);
var add5 = makeAdder(5);
console.log("Add 10 to 20: " + add10(20)); // Output: 30
console.log("Add 5 to 20: " + add5(20)); // Output: 25
Explanation:
• The anonymous function inside makeAdder forms a closure.
• It retains access to the variable x from the scope where it was defined, even after
makeAdder has returned.
• Each call to makeAdder creates a new instance of the closure with a distinct value
of x.

Coroutines
Coroutines are special types of subprograms (functions or procedures) that can pause
execution and resume later, allowing for non-blocking behavior. Unlike traditional
subprograms, coroutines can be suspended in the middle of their execution and later
resumed, making them ideal for tasks like asynchronous programming or concurrent
operations.
Key Features of Coroutines
1. Multiple entry points:

• Unlike traditional functions or subprograms that start execution at the top and
terminate at the end, coroutines can resume execution from the point where they
were previously paused.
2. Self-controlled execution flow:

• Coroutines manage their own control flow rather than relying on a strict caller-callee
relationship.
3. Resumable Execution:

• A coroutine can pause its execution at a specific point and later resume from where it
left off.
• This contrasts with regular functions, which always restart execution from the
beginning when called.
4. Resume Mechanism:

• The first call to a coroutine starts its execution from the beginning.
• Subsequent calls (known as resumes) continue from the point just after the last
executed statement.
5. Quasi-Concurrent Execution:
• Coroutines enable a form of cooperative multitasking. Multiple coroutines can share
control and pass execution back and forth between themselves without overlapping
their execution.
o This differs from threads, which can execute concurrently and in parallel.
6. Looping Execution:
• Coroutines can repeatedly resume each other, potentially forever, allowing them to
serve as ongoing processes or workflows.

How Coroutines Work:


• On the first call, it begins execution at its starting point.
• When it encounters a pause or yield, it saves its current state (local variables and the
point of execution) and transfers control back to the caller.
• On subsequent resume calls, it restores its state and continues execution from the
saved point.
• This is useful for tasks that need to wait (like fetching data from the internet) without
blocking other tasks from executing.

Explanation of the Control-Flow Diagram (a master program resuming Coroutines A and B):


• Coroutine A and Coroutine B are two independent subprograms (or functions) that
can execute in parallel or interleave their execution.
• The diagram shows how execution can be resumed from either coroutine using the
term "resume."
1. "resume from master":
o The master controls when the coroutines should resume.
o The master can resume either A or B, which is indicated by the arrows
pointing to A and B.
o Initially, the master resumes A (indicated by the first "resume from master").
2. "resume B" and "resume A":
o These actions represent when execution of one coroutine (A or B) is resumed
after being paused. The master decides the order in which the coroutines
should continue.
o The arrows between A and B show how control can transfer between them.
Coroutines can be paused and resumed multiple times, allowing for
interleaved execution.
• After A completes part of its work, it may yield (pause), allowing the master to switch
to B.
• The master can then resume B.
• The coroutines can alternate (A, then B, then A again), which shows how coroutines
can "interleave" their execution, rather than running to completion before returning
control.
Example:
def fibonacci():
    a, b = 0, 1
    while True:
        yield a  # Pause and return the current value
        a, b = b, a + b  # Update for the next number
When fibonacci() is called and resumed, it produces the next Fibonacci number without
starting over.
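Driving such a generator with next() makes the pause/resume behaviour visible; a self-contained sketch:

```python
def fibonacci():
    a, b = 0, 1
    while True:
        yield a           # pause here and hand back the current value
        a, b = b, a + b   # resumes from this line on the next call

fib = fibonacci()
first_five = [next(fib) for _ in range(5)]  # each next() resumes the coroutine
```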
Advantages of Coroutines
1. Simplified Asynchronous Programming: Coroutines are widely used in asynchronous
programming (e.g., in Python async/await) to manage tasks that might involve waiting, such
as file I/O or network requests, without blocking the entire program.
2. Efficient Execution: Since coroutines share a single thread of execution and pause
themselves when idle, they are more lightweight than threads and avoid the overhead of
context switching.
3. Better Control Flow: Coroutines allow developers to write code in a linear, readable
fashion while still managing complex workflows.
4. Cooperative Multitasking: They can manage tasks cooperatively, ensuring that no single
coroutine dominates execution time.

General Semantics of Calls and Returns


In programming, calls and returns refer to how subprograms (like functions or methods) are
invoked and how they finish their execution. The subprogram call and return operations are
together called subprogram linkage. Here's a simple explanation:

Call:
• When a subprogram (like a function) is called, it means the program asks that
subprogram to start and perform its task.
• The call is like a request to the subprogram to do something, passing any necessary
data (parameters).
• The program suspends its current activity and transfers control to the subprogram to
do its work.
Subprogram Call Actions:
1. Save Execution Status:
o Save the current state of the calling program, including register values, CPU
status bits, and the environment pointer (EP).
2. Parameter Passing:
o The process must handle how parameters (inputs/outputs) are passed to the
subprogram.
3. Allocate Local Variables:
o If the subprogram has local variables that are not fixed (static), memory must
be allocated for them during the call.
4. Pass the Return Address:
o Ensure the called subprogram knows where to return control after execution.
5. Transfer Control:
o The program jumps to the starting point of the subprogram and ensures it can
return to the correct spot in the caller after the subprogram finishes.
6. Handle Nonlocal Variables (if nested subprograms are supported):
o Provide a mechanism to access variables in outer scopes.
Example:
• Calling a function add(2, 3) asks the program to start the add function and add 2 and
3.

Return:
• Once the subprogram has finished its task, it returns to the point where it was
called, bringing back the result (if any).
• The return is like the subprogram saying, "I'm done," and the program continues
from where it left off.
Subprogram Return Actions:
1. Move Output Parameters:
o If parameters are passed using out mode or inout mode (where values can be
changed), the updated values are copied back to the original variables.
2. Store Function Result:
o If the subprogram is a function, move the return value to a location accessible
to the caller.
3. Deallocate Local Variables:
o Free the memory used for the local variables of the subprogram.
4. Restore Caller Status:
o The previously saved state of the caller is restored, so it can continue
execution properly.
5. Return Control:
o The program jumps back to the location in the caller where the subprogram
was initially called.
Example:
• After adding 2 and 3, the function returns the result 5 back to the main program,
which then uses it.

Implementing simple subprograms


- What Are Simple Subprograms?
Key Features:
o No recursion: Subprograms cannot call themselves directly or indirectly.
o Fixed memory layout: Local variables and activation records do not grow
or shrink during execution.
o Single instance: Only one version of the subprogram is active at any time.

- Semantics of a Subprogram Call


1. Save the Caller’s Execution Status:
o Preserve the current state of registers, control flags, and environment pointers.
o This ensures the program can resume where it left off after the subprogram
finishes.
2. Compute and Pass Parameters:
o For pass-by-value, a copy of the parameter is created.
o For pass-by-reference, the memory address of the parameter is passed.
o For pass-by-value-result, a copy is made, and changes are written back after the
subprogram ends.
3. Pass the Return Address:
o Store the memory address of the instruction in the caller where execution will
resume.
4. Transfer Control:
o Execution jumps to the entry point of the subprogram.
- Semantics of a Subprogram Return
1. Update Parameters (if applicable):
o For pass-by-value-result, updated values are copied back to the original variables.
o For out-mode, results are assigned to the caller's variables.
2. Return Function Value (if applicable):
o Store the result of the function in a pre-allocated space accessible to the caller.
3. Restore Caller’s State:
o Restore all saved registers, flags, and environment pointers.
4. Return Control:
o Jump to the stored return address in the caller.
-Storage Requirements:
To execute a subprogram, memory is required for:
1. Status Information:
o Includes CPU registers, stack pointers, and program counters.
2. Parameters:
o Memory for input and output parameters passed during the call.
3. Return Address:
o Stores where to return after subprogram execution.
4. Return Value (for functions):
o A designated memory location for storing the result of the function.
5. Temporaries:
o Intermediate values generated during the execution of the subprogram.
-Activation Record
The activation record is a layout of data needed for a subprogram to execute.
o Contents:
o Parameters.
o Local variables.
o Return address.
o Temporaries.
o Fixed Size:
o In simple subprograms, the activation record size is known at compile time,
allowing static allocation.
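As an informal sketch, the contents of a fixed-size activation record could be modeled as a C struct (the field names and sizes are illustrative, not the layout of any real compiler):

```c
/* Illustrative model of a fixed-size activation record for a simple
 * (non-recursive) subprogram. A compiler lays this out in static memory
 * or on the stack; the member names here are hypothetical. */
struct activation_record {
    void *return_address;     /* where to resume in the caller          */
    int   parameters[2];      /* incoming parameter values              */
    int   local_variables[2]; /* storage for the subprogram's locals    */
    int   return_value;       /* result slot, if it is a function       */
};
```

Because every field has a fixed size, the whole record's size is known at compile time, which is what allows static allocation for simple subprograms.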

-Advantages:
o Simple memory management.
o No overhead for dynamic memory allocation or stack operations.
o Efficient execution.
-Limitations:
o No support for recursion or re-entrant subprograms.

Implementing Subprograms with Stack-Dynamic Local Variables:


In most programming languages, stack-dynamic local variables refer to variables that are
allocated on the stack during the execution of a function or subprogram and deallocated
when the subprogram finishes execution. This is a common approach in languages like C,
C++, and Python for managing local variables.
Here’s how stack-dynamic local variables work in subprograms:
1. Stack-based Allocation:
o When a subprogram (like a function or procedure) is called, the local variables of that
subprogram are created on the stack (dynamic memory allocation). These variables
are destroyed once the subprogram exits.
2. Stack Frame (Activation Record):
o When the subprogram is called, an activation record (AR) is pushed onto the stack.
This AR contains the local variables for the subprogram, return address, parameters,
and other relevant data.
o Once the subprogram completes, the AR is popped from the stack, and the local
variables go out of scope and are destroyed.
Example
#include <stdio.h>

void subprogram(int x) {
    int y = 10;        // Local variables dynamically allocated on the stack
    int z = x + y;
    printf("Sum of %d and %d is: %d\n", x, y, z);
}

int main() {
    int a = 5;
    subprogram(a);     // Calling the subprogram
    return 0;
}
1. Stack Before subprogram Call (At main)
Initially, when the main() function is executing, there are local variables (a) for main(). The
stack looks like this:
|----------------------|
| main's AR | (Contains `a` and return address)
|----------------------|
2. Stack After Calling subprogram(a)
When subprogram(a) is called, a new activation record (AR) is pushed onto the stack. The AR
for subprogram() contains:
• The argument x (received from a in main).
• The local variables y and z.
|----------------------|
| main's AR | (Contains `a` and return address)
|----------------------|
|----------------------|
| subprogram's AR | (Contains `x`, `y`, `z`, and return address)
|----------------------|
3. Function Execution
o Local Variables:
o Inside subprogram(), the local variables y and z are dynamically allocated.
o y is initialized to 10.
o z is computed as x + y, which in this case is 5 + 10 = 15.
o printf is called to print the result.
4. Stack After Returning from subprogram()
Once subprogram() finishes execution, its AR is popped from the stack, and its local variables
(y and z) are destroyed. The stack now only contains the AR for main():
|----------------------|
| main's AR | (Contains `a` and return address)
|----------------------|
5. Memory Deallocation
Since the variables y and z were stack-dynamic, they were automatically deallocated when
subprogram() finished. The memory that was allocated for them is released, and there’s no
need for explicit memory management (like free() in some languages).

Nested Subprograms (Functions)


In many programming languages, a nested subprogram is a subprogram (function or
procedure) defined inside another subprogram. Nested subprograms have access to the
local variables of the outer function, but the outer function does not have access to the local
variables of the inner function.
In some programming languages, like Fortran 95+, Ada, Python, JavaScript, Ruby, and Lua,
subprograms (functions or procedures) can be nested.
Key Points about Nested Subprograms:
1. Subprogram Definition Within Another: In nested subprograms, a function or procedure
is defined inside another function. The inner function is local to the outer function, meaning
it cannot be directly accessed outside the scope of the function that contains it.
2. Access to Outer Variables: Inner subprograms can access variables that are defined in the
enclosing (outer) subprogram. This can lead to more flexible designs, but also introduces
challenges in variable scope resolution and access to non-local variables.
3. Stack-Dynamic Local Variables: Nested subprograms typically use stack-dynamic local
variables. This means that local variables are allocated on the stack at runtime when the
subprogram is called and deallocated when the subprogram exits.
4. Scope and Lifetime:
o The scope of a variable in a nested subprogram depends on the position of its
declaration within the program.
o The lifetime of a variable is confined to the activation of the subprogram, meaning
that the variable is created when the subprogram is called and destroyed when it
finishes executing.
5. Static Scoping and Nested Subprograms:
o In languages with static scoping (or lexical scoping), the visibility of variables is
determined by the program’s source code structure. The outer subprograms’
variables are accessible to inner subprograms.
o Access to non-local variables (i.e., those defined in outer subprograms) is resolved by
following a static chain (a series of links between activation records of subprograms).
6. Activation Records:
o In a nested subprogram scenario, an activation record is pushed onto the stack when
a subprogram is called, and the static link in the record helps the inner subprogram
find variables from outer subprograms.
7. Static Links:
o To locate a variable in a nested subprogram, static links (or static chains) are used.
Static links point to the activation record of the most recent static parent (the
enclosing subprogram).
o Static Chain: A chain of static links connecting subprograms through their activation
records. This helps in resolving non-local variable references by following the chain
backward.

Example:
#include <stdio.h>

/* Note: standard C does not allow nested functions; this example relies
   on the GCC nested-function language extension. */
int outer(int a) {
    int inner(int x) { return x * x; }   // Nested subprogram
    return inner(a) + 10;                // Call nested subprogram
}

int main() {
    printf("%d\n", outer(5));            // Call outer function
    return 0;
}
Explanation:
• The outer function contains a nested function inner that returns the square of its
input.
• The outer function calls inner and adds 10 to its result.
• The main function calls outer(5) and prints the result.

Blocks
A block is a section of code enclosed in curly braces {} that defines a local scope for variables. It
allows variables to be used only inside the block and ensures they don't interfere with other
variables in the program.

Example in C:

{
    int temp;
    temp = list[upper];
    list[upper] = list[lower];
    list[lower] = temp;
}

o Here, temp is only accessible inside the block and doesn't affect other variables
named temp elsewhere in the program.

2. Variable Lifetime:
o Variables declared inside a block exist only while the block is being executed.

o Once control exits the block, the variables are no longer available.

3. How Blocks Are Implemented:


o Static-Chain Process: Blocks are treated as simple subprograms that are always
called from the same place in the program.

o Efficient Memory Use: Instead of creating and destroying records every time a block
is entered, the memory for block variables can be allocated upfront and reused once
the block is exited.

4. Memory Management:
o When a block ends, its variables are removed, and their memory can be reused by
other blocks. This helps manage memory efficiently.

Advantages of Blocks:

1. Scope Control: Variables inside a block can't be accessed outside it, reducing
conflicts.

2. Efficient Memory Usage: Memory for block variables is reused when the block ends.

3. No Interference: You can use the same variable names in different blocks without
interference.

Blocks are useful for keeping variables local to a specific part of the code, reducing interference, and
managing memory efficiently.

Implementing Dynamic Scoping


Dynamic scoping is a method used in some programming languages where a variable's value is
determined by the most recent active subprogram call in the execution chain.

There are two main ways to implement dynamic scoping: deep access and shallow access.

1. Deep Access
Deep access involves searching through the activation records (the stack frames) of all active
subprograms, starting from the most recently activated one, to find the nonlocal variable.

• How it works:

o Each subprogram has an activation record in the runtime stack.

o When a variable is referenced, the system looks for the variable starting from the
most recent subprogram call and works backwards through the stack.

o If the variable is not found in the current subprogram, the system looks in the
previous subprogram's activation record, and so on, until the variable is found or the
stack is exhausted.

• Example:

// C-like skeleton (illustrative only: assume a language with dynamic
// scoping, since C itself is statically scoped)
void sub3() {
    int x, z;
    x = u + v;   // u and v are nonlocal variables
}
void sub2() {
    int w, x;
}
void sub1() {
    int v, w;
}
void main() {
    int v, u;
}

If main calls sub1, sub1 calls sub2, and sub2 calls sub3, then when sub3 references the
nonlocal variables u and v, the activation records are searched from the most recent one
backwards: v is found in sub1's record (the most recent activation that declares a v), and u
is found in main's record.

• Disadvantages:

o Performance: Searching the stack can be slow, especially if the call chain is deep.

o Complexity: The length of the chain to be searched is not known at compile time,
which makes it difficult to optimize.

2. Shallow Access
Shallow access is a different approach where variable references are resolved by looking at the most
recent variable instance for a specific name. This method is faster for accessing variables but can be
more costly in terms of maintaining the structure for managing the variables.

• How it works:

o Instead of searching through the stack, a separate stack or central table is used for
each variable name.

o When a new variable is declared, it is placed at the top of the stack or table for that
specific name.

o Every time a subprogram is called, the new variable instance for that name is
pushed onto the stack or added to the table.

o When a variable is referenced, the system always uses the most recent version of
the variable (the one at the top of the stack or table).

• Two methods for shallow access:

1. Separate Stack for Each Variable: Each variable name has its own stack, and the most recent
variable is accessed by looking at the top of the stack.

2. Central Table: A single table holds all variable names with an "active" bit to indicate if the
variable is currently in use. The table points to the most recent value of the variable.

• Disadvantages:

o Subprogram Linkage Overhead: Maintaining the stacks or tables for each variable
can be costly when subprograms are called and returned.

Data Abstraction
Data abstraction is the concept of hiding the complex implementation details of data and
showing only the essential features to the user. It allows you to interact with data without
needing to know how it is stored or implemented internally.
In simpler terms:
• Data abstraction means focusing on what a data structure or object does, not on
how it does it.
• You only see what is necessary, and the complexity is hidden from you.
• abstraction helps manage complexity by allowing programmers to focus on essential
details while ignoring the rest.
There are two main types of abstraction in programming languages: process abstraction and data
abstraction.
• Process Abstraction:

o This allows us to define a process without worrying about how it works in detail. For
example, when you use a subprogram to sort a list of numbers, you don't need to
know how the sorting happens — you just call the sorting subprogram (e.g.,
sortInt(list, listLen)). The program focuses on the process (sorting the list), and the
details of the algorithm are hidden.
o The essential details are the name of the array, its length, and the fact that the
array will be sorted. The algorithm used to sort the array is not important for the user.

• Data Abstraction:

o Data abstraction involves defining the structure of data and the operations that can
be performed on it, without revealing the internal details. The operations (like
sorting in the earlier example) are also abstractions that help users interact with
data without needing to understand how the data is stored or manipulated.

Example:
Imagine you are driving a car:
• You don’t need to know how the engine works or how the brakes are designed to
drive the car.
• You just interact with the steering wheel, pedals, and gear shift (the essential
interface).
• The complexity of how the car functions internally is abstracted away.

In Programming:
• Objects and classes in object-oriented programming (OOP) use data abstraction. For
example, when you use a class like Car, you might only interact with methods like
start(), stop(), or accelerate().
• You don’t need to know how the car's engine or transmission system works
internally—only the functions it offers to interact with the car.
Why It’s Important:
• Simplifies the use of data and objects.
• Helps focus on what matters without getting lost in details.
• Makes code easier to maintain and extend.

Parameterized Abstract Data Types (ADT)


A Parameterized Abstract Data Type (ADT) is a data structure that allows you to define a
generic type for data and operations. Instead of specifying a fixed type, a parameterized
ADT allows the type of data (e.g., integer, string, custom objects) to be passed as a
parameter, making the ADT more flexible and reusable.
In simpler words:
• A parameterized ADT is a way to create data types that are not bound to a specific
type of data. Instead, the type is specified when the ADT is created or used, making
the data structure generic.
Advantages:
• Flexibility: You can define a single ADT that works with any data type, avoiding the
need to write separate implementations for each type.
• Reusability: You can reuse the same ADT for different types without modifying the
underlying code.
• Type Safety: The type passed as a parameter is enforced, preventing errors during
runtime.

Encapsulation Constructs
Encapsulation is one of the core concepts in object-oriented programming (OOP) that
involves bundling the data (variables) and methods (functions) that operate on the data into
a single unit called a class or object. It also restricts direct access to some of an object's
components, which helps to protect the object's integrity by preventing unintended
interference and misuse.
In simpler words:
• Encapsulation hides the internal state of an object and only exposes a controlled
interface to interact with it.
Why Encapsulation is Important:
• Data Protection: It hides the internal state and ensures that only valid changes can
be made.
• Code Maintainability: Changes in the internal structure don’t affect the external
interface.
• Flexibility: The implementation can change without affecting the code that uses the
class.
• Modularity: Encapsulation allows you to bundle the data and operations into a
coherent unit, improving organization.
Naming Encapsulation
In the context of encapsulation, naming refers to how you define and organize the names of
classes, objects, methods, attributes, and other members to make the encapsulation clear
and understandable.
The naming convention is crucial for encapsulation because it ensures that the internal state
of an object is well-defined and protected while making the object’s interface intuitive and
easy to use.
Key Aspects of Naming in Encapsulation:
1. Class Names:
o Naming convention: Class names should be descriptive and follow PascalCase
(each word starts with an uppercase letter).
o Example: Car, BankAccount, StudentProfile.
2. Attribute (Member Variable) Names:
o Naming convention: Attribute names should be camelCase (first word
lowercase, subsequent words capitalized). These should be meaningful and
reflect the data they store.
o Example: accountBalance, customerName, isAccountActive.
3. Method (Function) Names:
o Naming convention: Method names should also use camelCase. The method
name should indicate what action is performed (usually a verb).
o For getter and setter methods, it's common to use get and set prefixes.
o Example: getBalance(), setName(), calculateInterest().
Introduction to Data Abstraction
Data abstraction started in 1960 with COBOL, which introduced the record data structure. Other
languages, such as C, have similar features like structs. An abstract data type (ADT) is a data
structure that not only defines data but also includes subprograms (or operations) that manipulate
that data. It hides unnecessary details and allows programmers to interact with the data type using
only the defined operations.

An abstract data type consists of:

1. Data Representation: The structure that stores the data.

2. Operations: Subprograms or functions that interact with the data.


-Floating-Point as an Abstract Data Type

Even built-in types like floating-point numbers are abstract data types. For instance, floating-point
numbers in most languages allow you to store and perform operations like addition, subtraction,
multiplication, etc., but hide how the numbers are actually represented in memory.

Before the IEEE 754 standard for floating-point representation, different computer architectures
used various formats. However, programs could still be portable because the implementation details
were hidden.

-User-Defined Abstract Data Types

A user-defined abstract data type should have the following characteristics:

1. Hidden Representation: The actual data structure is hidden, and users can only interact with
it through the defined operations.

2. Type Definition and Operations: The type and its operations are packaged together, forming
an interface. This means other parts of the program can use the type without knowing how
it is implemented.

- Example: Stack ADT

A stack is a common abstract data type that stores elements and allows access to only the top
element. Its abstract operations could include create, destroy, empty, push, pop, and top.
For example, the code for using the stack might look like this:
create(stk1);
push(stk1, color1);
push(stk1, color2);
temp = top(stk1);
In this code:

• stk1 is an instance of the stack ADT.

• color1 and color2 are pushed onto the stack.

• The top element is retrieved without being removed.

Benefits of Information Hiding:

• Increased Reliability: Users cannot directly access or manipulate the data, preventing
accidental errors.
• Reduced Complexity: By restricting access to the data, the programmer only needs to focus
on a smaller, defined part of the program.

• Avoiding Name Conflicts: Information hiding reduces the chances of naming issues because
variable names are scoped within the abstract data type.

Design Issues for Abstract Data Types (ADTs):


1. Interface Container: ADTs require a syntactic unit that defines the type and its operations.
This unit must make the type's name visible, while hiding the internal representation. It
allows clients to use the type but prevents direct access to its implementation.

2. Built-in Operations: ADTs should have minimal general operations, with most operations
provided within the type's definition. Common operations like assignment and equality
comparisons might be needed, but not all ADTs will require them.

3. Common Operations: Many ADTs require specific operations, such as:


o Iterators for accessing elements.
o Accessors to retrieve hidden data.
o Constructors for initializing new objects.
o Destructors for cleaning up resources.

4. Encapsulation: Some languages, like C++, Java, and C#, directly support ADTs, while others
(like Ada) offer a more generalized encapsulation, allowing more flexible definitions.

5. Parameterized ADTs: Some languages support parameterized ADTs, where a data structure
can be designed to store elements of any type, making the ADT more versatile.

6. Access Controls: The language must define how to restrict access to the internal details of
an ADT, ensuring that only specified operations can modify the data.

7. Separation of Specification and Implementation: The design must decide whether the
ADT's specification is separate from its implementation or if that is left to the developer.

Language Examples for Abstract Data Types (ADTs)


1. Ada:

• Encapsulation is achieved with packages:

o Specification package: Declares the interface (e.g., type names, operations).

o Body package: Contains the implementation of the operations.

• Information Hiding:

o Public and private parts in the specification package.


o The type name is public; representation details are private.

• Types:

o Private types: Have built-in operations (e.g., assignment, comparison).

o Limited private types: No built-in operations.

2. C++:

• Encapsulation is done with classes:

o Private: Hidden members.
o Public: Accessible interface for the clients.
o Protected: For inheritance.

• Constructors: Initialize data members and may allocate storage.

• Destructors: Clean up memory, especially for heap storage.

• Information Hiding:

o Private members are hidden, and public methods expose the interface.

o Friend functions allow external access to private members.

3. Java:

• Encapsulation through classes:

o Private and public modifiers for access control.

o All objects are heap-allocated and accessed via references.

• Access Control:

o No friend functions; access control is enforced via access modifiers.

o Package scope allows access within the same package.

• Example: Java code for stack class with methods for push, pop, etc.

4. C#:

• Based on C++ and Java but adds:

o Internal and protected internal access modifiers.

o Structs: Lightweight classes without inheritance support.

• Memory Management:
o Garbage collection is used, reducing the need for destructors.

• Properties: C# provides getters and setters as properties, making data access smoother.
Parameterized Abstract Data Types (ADT)
• Definition: Parameterized ADTs allow the design of ADTs that can store elements of any
type, making them flexible and reusable. These are also referred to as generic classes in
many programming languages.

Supported Languages:

• C++, Ada, Java 5.0, and C# 2005 provide support for parameterized ADTs.

Parameterized ADTs in Ada:

• Ada supports generic packages, which allow parameterization of the stack’s element type
and size. This flexibility enables the creation of more general ADTs.

Parameterized ADTs in C++:

• In C++, parameterization is achieved through parameterized constructors and template
classes.
o The constructor allows the stack's size to be parameterized, while the
template mechanism enables parameterization of the element type.

Parameterized ADTs in Java 5.0:

• Java provides generics, allowing parameterization of classes where the parameters must be
types (classes). This approach is primarily used in collections like LinkedList and ArrayList,
enabling type safety and eliminating the need for casting.

Parameterized ADTs in C# 2005:

• Similar to Java, C# supports generic types for parameterized ADTs. These types are
commonly used in collections and can be accessed through indexing.

Encapsulation Constructs
Encapsulation is a way to group related data and functions into a single unit to make large programs
more manageable and avoid unnecessary recompilation. It helps in organizing code logically and
improves reusability.

Problems in Large Programs:

• Intellectual Manageability: When programs grow, organizing them as one large collection of
functions or types becomes difficult.

• Recompilation: In large programs, recompiling the entire program after every change can be
costly. The solution is to organize the program into smaller, independent units
(encapsulations), which can be compiled individually.

Encapsulation in C:

• In C, encapsulation is achieved by organizing related functions and data in separate files
(implementation and header files).
• Header files contain declarations, and implementation files contain the actual definitions.
The client includes the header to use these functions and data.

• However, C’s approach has some risks, such as potential mismatches between the header
and implementation files, which could cause errors that the linker won't catch.

Encapsulation in C++:

• C++ has two types of encapsulation:

1. Non-templated Classes: The header file contains function declarations, and the
definitions are in a separate file.

2. Template Classes: These include full definitions in the header file because of how
C++ handles templates and separate compilation.

• Friend Functions: C++ allows functions (like vector and matrix multiplication) to access
private members of multiple classes, even if they are not part of those classes, through
"friend" declarations.

Encapsulation in Ada:

• Ada Packages provide a powerful form of encapsulation, where you can define both data
types and operations in a single package. This makes it easier to handle cases like the vector
and matrix example in C++.

Encapsulation in C# (Assemblies):

• C# uses Assemblies as the primary encapsulation construct. An assembly is a file that
contains code (in Common Intermediate Language), metadata about classes, and references
to other assemblies.

• Assemblies can be either private (for a specific application) or public (usable by any
application).

• Internal Access Modifier: In C#, the internal modifier allows class members to be visible to
other classes within the same assembly.

Naming Encapsulations
In large software systems, naming encapsulations are crucial for organizing code and avoiding
conflicts between names used by different developers or libraries. These encapsulations create
logical units that allow independent parts of the program to work together without accidentally
using the same names for variables, methods, or classes.

Why Naming Encapsulations Are Needed:

• Multiple Developers: In large programs, many developers work on different parts of the
system, often in different locations. They need to be able to use their own names for
variables and functions without conflicting with others.
• Libraries: Modern software heavily relies on libraries. As developers create new libraries or
add new names to existing ones, they must avoid conflicts with names already used by other
libraries or the client’s application.

Challenges of Naming Conflicts:

• Without a mechanism to manage names, developers may unintentionally use the same
names, causing errors in the software system. This is particularly difficult because a library
author doesn’t know what names are used in other parts of the program.

Purpose of Naming Encapsulations:

• Scope Management: Naming encapsulations define a name scope to avoid conflicts. Each
part of the program (such as a library) can have its own naming space, keeping its names
separate from others.

• Logical Organization: A naming encapsulation can span multiple collections of code, even if
they are stored in different places. It ensures that names within this scope do not collide
with those in other parts of the program.

Naming Encapsulations in Different Languages:

• C++: Uses namespaces to group related names together and avoid conflicts.

• Java: Java has packages, which help organize classes and avoid naming issues.

• Ada: Ada uses packages to encapsulate names and keep them separate from other parts of
the program.

• Ruby: Ruby uses modules to achieve a similar effect, organizing code into namespaces.
