
UNIT-3

1. Fundamentals of Subprograms
Definition:

A subprogram is a sequence of program instructions designed to perform a specific task. It is encapsulated as a unit to be invoked (called) by other parts of a program. Subprograms promote reusability and modularity, enabling more organized and maintainable code.

Characteristics of Subprograms:

1. Single Entry Point:

o A subprogram can only be entered at its defined starting point, ensuring a controlled and predictable flow of execution.

2. Caller is Suspended:

o When a subprogram is called, the execution of the calling program (the caller) is paused until the subprogram completes its execution.

o During this time, the subprogram executes independently, handling its own operations.

3. Control Returns to Caller:

o After the subprogram finishes execution, control returns to the point where it was called in the caller.

o If a return value is expected, the subprogram sends it back to the caller.

4. Reusability:

o Subprograms can be invoked multiple times, potentially with different inputs, saving effort and time compared to rewriting similar code.

Types of Subprograms:

1. Procedures:

o Subprograms that perform an operation but do not return a value.

o Examples:

▪ In C:

void printMessage() {
    printf("Hello, World!\n");
}

▪ In Python:

def greet():
    print("Hello, World!")

2. Functions:

o Subprograms that return a value after execution.

o Examples:

▪ In C:

int add(int a, int b) {
    return a + b;
}

▪ In Python:

def add(a, b):
    return a + b

Advantages of Using Subprograms:

1. Modularity:

o Helps break a large program into smaller, manageable parts.

o Promotes a divide-and-conquer approach in software development.

2. Code Reusability:

o Subprograms can be reused in multiple places, reducing code duplication.

3. Readability:

o Simplifies complex programs by abstracting repetitive tasks, making the main program more concise.

4. Ease of Maintenance:

o Modifications can be made in one place (the subprogram) instead of multiple locations in the program.

5. Debugging and Testing:

o Errors are easier to isolate and fix when the program is divided into subprograms.

o Individual subprograms can be tested independently.

6. Supports Recursion:

o Allows a subprogram to call itself, facilitating solutions to problems like factorial calculation, the Fibonacci series, etc.

Lifecycle of a Subprogram Call:

1. Declaration:

o The subprogram must be defined or declared before it can be called.

o Example:

int square(int x); // Declaration in C

2. Call:

o The subprogram is invoked using its name and the necessary arguments.

o Example:

int result = square(5); // Call

3. Execution:

o Control transfers to the subprogram, and its statements are executed.

4. Return:

o The subprogram returns control to the caller, optionally sending back a value.

5. Termination:

o Once execution is complete, the subprogram terminates, freeing any resources it used.

Examples in Programming Languages:

#include <stdio.h>

void greet() {
    printf("Hello, World!\n");
}

int add(int a, int b) {
    return a + b;
}

int main() {
    greet();              // Call the procedure
    int sum = add(5, 10); // Call the function
    printf("Sum: %d\n", sum);
    return 0;
}
Design Issues for Subprograms

Subprograms are critical building blocks of programs, and their design directly affects usability, efficiency, reliability, and maintainability. Each of the following points outlines a feature, explains why it is a design issue, and how it impacts subprogram behavior.

1. Parameter Passing

o Defines how data is shared between the caller and the subprogram.

o Issue: Affects performance, safety, and flexibility. Inefficient methods can slow execution, and unsafe methods (like pass-by-reference) can lead to side effects.

o Methods:

▪ Pass by Value: Copies the argument.

▪ Pass by Reference: Passes the address of the argument.

▪ Pass by Value-Result: Combines value copying and referencing.

▪ Pass by Name: Replaces the formal parameter with the actual parameter expression.

2. Return Values

o Allows a subprogram to send results back to the caller.

o Issue: Affects flexibility and functionality. Single return values are limiting, while supporting multiple returns or complex types adds complexity.

o Examples: returning a single value, as in C:

int add(int a, int b) {
    return a + b;
}

Supporting multiple return values using tuples in Python:

def divide(a, b):
    return a // b, a % b

3. Local Variable Lifetime

o Determines whether variables retain their values between subprogram calls.

o Issue: Affects memory usage, flexibility, and functionality. Static variables retain values, while dynamic variables enable recursion.

o Examples:

Static variables in C:

void counter() {
    static int count = 0; // Retains value across calls
    count++;
    printf("%d\n", count);
}

Dynamic variables enabling recursion in Python:

def factorial(n):
    if n == 0:
        return 1
    return n * factorial(n - 1)

Nested scopes allow controlled access to variables.

4. Exception Handling

o Manages errors during subprogram execution.

o Issue: Affects robustness and clarity. Without built-in handling, errors must be managed manually, increasing complexity.

5. Recursion

o Allows a subprogram to call itself.

o Issue: Impacts memory and performance. Deep recursion may cause stack overflow, and iterative solutions may be more efficient.

o Examples: Recursive solutions (e.g., Fibonacci calculation in Python).

6. Subprogram Overloading

o Allows multiple subprograms with the same name but different parameters.

o Issue: Can improve readability and reusability, but every call must be unambiguously resolvable.

o Examples: Function overloading (e.g., C++ int add(int, int) and double add(double, double)).

7. Generic Subprograms

o Allows subprograms to work with different data types without rewriting.

o Issue: Improves reusability, but too much flexibility may cause unsafe behavior.

o Examples: C++ templates (template <typename T> T add(T a, T b)).

8. Type Checking for Parameters

o Ensures that actual and formal parameters match in type.

o Issue: Prevents runtime errors and ensures reliability.

o Examples: Java's strict type checking.

9. Nested Subprogram Definitions

o Allows subprograms to be defined within other subprograms.

o Issue: Improves modularity and scope control by limiting access to certain helper functions.

Local Referencing Environment:


• The referencing environment refers to the set of all variables (names) that a
particular statement or piece of code can access or use at a given point.

• For example, when you're writing code, the variables that are defined in the
current function or block of code, as well as those that are available from other
surrounding parts of the program, make up the referencing environment for that
code.
Static Scoping:

• In static scoping, a variable's visibility (or which part of the program can use it)
is determined by where it is written in the code.

• For example, if you define a variable inside a function, that variable is visible only
inside that function and in any nested functions, but not outside.

• The referencing environment for a statement in static scoping includes:

1. The local variables (those defined in the current block or function).

2. All variables that are visible from outer (enclosing) functions or blocks
where the statement is located.
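These rules can be seen directly in Python, which is statically scoped. The sketch below (function and variable names are illustrative) shows a nested function reading a variable from its enclosing function, while the same name is invisible outside:

```python
# Static (lexical) scoping: visibility follows where code is written.
def outer():
    msg = "defined in outer"
    def inner():
        # msg is visible here because inner() is nested inside outer()
        return msg
    return inner()

print(outer())  # defined in outer
# print(msg)    # NameError: msg is not visible outside outer()
```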

Dynamic Scoping:

• In dynamic scoping, the visibility of a variable is determined at runtime, based on the calling sequence of functions.

• A variable is visible if it is either local to the current function or defined in any function that is currently active (i.e., being executed).

• The referencing environment in dynamic scoping includes:

1. Local variables.

2. All visible variables from currently active subprograms (functions or procedures that are in the middle of execution).
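Python is not dynamically scoped, but the lookup rule can be simulated with an explicit stack of environments belonging to the currently active calls (a sketch; all names are illustrative):

```python
# Simulating dynamic scoping: name lookup searches the environments of
# currently active calls, most recent first.
env_stack = [{"x": "global"}]

def lookup(name):
    for frame in reversed(env_stack):
        if name in frame:
            return frame[name]
    raise NameError(name)

def show_x():
    return lookup("x")  # resolves to whichever caller is active

def caller():
    env_stack.append({"x": "caller's x"})  # caller becomes active
    try:
        return show_x()
    finally:
        env_stack.pop()                    # caller is no longer active

print(show_x())  # global
print(caller())  # caller's x  (same code, different result at runtime)
```

The same call to show_x() yields different values depending on who is active, which is exactly the dynamic-scoping behavior described above.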

Active Subprogram:

• A subprogram (like a function or procedure) is considered active if it is currently being executed but has not yet finished. This matters in dynamic scoping, because it affects which variables are visible.

Local variables, which affect the referencing environment, can be stack-dynamic or static:

➢ Stack-Dynamic Local Variables:

• Stack-dynamic means that the memory for local variables is allocated when the function is called and deallocated (freed) when the function finishes.

• Advantages of stack-dynamic variables:

o They support recursion.

o Their storage can be shared among subprograms, making memory management more efficient.

• Disadvantages of stack-dynamic variables:

o There is run-time overhead, because variables must be allocated and deallocated each time a function is called and finishes.

o They may require indirect addressing (extra steps to access the variable), which can make accessing data slower.

o Functions cannot be history-sensitive, meaning they cannot "remember" past executions unless extra mechanisms are used.

➢ Static Local Variables:

• Static variables are allocated and initialized once, and they stay in memory for the entire program execution.

• Advantages of static variables:

o They are more efficient because there is no need to allocate or deallocate memory every time the function is called.

o There is no run-time allocation overhead, so calls are faster compared to stack-dynamic variables.

o There is no need for indirect addressing, making them quicker to access.

• Disadvantage: a single fixed set of static locals cannot support recursion, and their storage cannot be shared with other subprograms.
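Python has no C-style static locals, but a history-sensitive subprogram can be sketched with a closure whose count survives between calls (names are illustrative):

```python
# A history-sensitive counter: count persists between calls, playing
# the role of a C static local variable.
def make_counter():
    count = 0
    def counter():
        nonlocal count
        count += 1
        return count
    return counter

c = make_counter()
print(c(), c(), c())  # 1 2 3
```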

Parameter Passing Methods:

1. Pass-by-Value:

• How it works: When you pass a parameter by value, the value of the variable is
copied to the called subprogram (function). The called function works with the
copy, and any changes made to the parameter inside the function do not
affect the original variable.

• Example: If you pass a number x = 5 to a function, the function only gets a copy
of x, so even if the function changes the value, the original x remains unchanged.

def add_one(x):
    x = x + 1  # Only changes the copy, not the original variable
    return x

num = 5
result = add_one(num)  # num remains 5 after the function call

2. Pass-by-Result:

• How it works: The called function doesn’t receive the original variable
directly. Instead, the function gets a temporary space to store the result. After
the function finishes executing, the result (the final value of the parameter) is
copied back to the original variable.

• Example: If you pass x = 5, the function makes changes to a copy of x, and when
the function finishes, the updated value is copied back into the original x.

def multiply_by_two(x):
    x = x * 2  # Changes the copy of x
    return x

num = 5
num = multiply_by_two(num)  # num is updated to 10

(Python has no true pass-by-result; the example simulates the copy-back step by explicitly assigning the returned value.)

3. Pass-by-Value-Result:

• How it works: This is a combination of pass-by-value and pass-by-result. The function receives a copy of the original parameter (like pass-by-value), so changes made to the parameter inside the function do not affect the original variable while it runs. Once the function completes, the final value of the parameter (the modified copy) is copied back to the original variable (like pass-by-result).

o First, the parameter is passed by value (a copy is given).

o Then, after the function finishes, the result is copied back to the original variable.

• Example (simulated in Python by assigning the returned value):

def increment_and_return(x):
    x = x + 1  # Only changes the copy of x
    return x

num = 5
num = increment_and_return(num)  # num becomes 6 after the function call

4. Pass-by-Reference:

• How it works: In pass-by-reference, the called subprogram gets the actual memory location (reference) of the parameter, not just a copy. This means that any changes made to the parameter inside the function directly affect the original variable.

o Instead of a copy, the function works directly with the original value.

• Example: Python passes object references, but rebinding a parameter (e.g., x += 5 on an integer) does not affect the caller. Mutating a shared mutable object does, which illustrates the reference semantics:

def add_five(values):
    values[0] += 5  # Mutating the shared list changes the caller's data

nums = [5]
add_five(nums)  # nums is now [10]

5. Pass-by-Name:

• How it works: This is a more unconventional method of passing parameters. When you pass a parameter by name, the actual expression representing the parameter is passed to the function, not its value or reference. The function then evaluates the expression each time the parameter is used.

o It is a form of delayed evaluation of the parameter.

• Example: In languages like Algol, this means the textual expression is substituted into the function and evaluated when used. Python has no pass-by-name, but it can be simulated with a parameterless lambda (a thunk):

def add_something(expr):
    return expr() + 5  # The expression is evaluated when used, not at the call

x = 2
result = add_something(lambda: x + 2)  # x + 2 is evaluated inside the function

Parameter Passing Methods in Different Programming Languages:
1. Fortran:

• Before Fortran 77: Pass-by-reference.

• Fortran 77 and later: Scalar variables are usually passed by value-result.

• Semantics Model: The inout (input/output) semantics model is used for parameter passing.
2. C:

• Pass-by-value by default.

• Pass-by-reference is achieved by using pointers as parameters.

3. C++:

• Pass-by-reference is implemented using a special reference type (e.g., int&).

4. Java:

• All parameters are passed by value.

• For objects, the value passed is a reference to the object, so objects are effectively passed by reference.

5. Ada:

• Three modes: in, out, and in out.

o in (default): Parameter is read-only.

o out: Parameter can be assigned but not referenced.

o in out: Parameter can be both assigned and referenced.

6. C#:

• Pass-by-value is default.

• Pass-by-reference is done by using the ref keyword in both the formal parameter
and actual parameter.

7. PHP:

• Similar to C#, parameters are passed by value by default.

• Pass-by-reference is done using the & operator.

8. Perl:

• All parameters are implicitly passed in a predefined array named @_.

Type checking parameters:


Type checking refers to the process of verifying that the data types of the arguments
(or parameters) passed to a function match the expected data types declared for the
parameters in the function definition. This ensures that operations within the function
are performed on compatible data types, which helps prevent errors and ensures the
program behaves as expected.
Why is Type Checking Important?

• Reliability: Type checking helps catch errors early in the development process
by ensuring that data is being used correctly, making the program more reliable.

• Safety: Without type checking, mismatched types (e.g., passing a string where a
number is expected) could lead to unexpected behavior or crashes.

• Debugging: By verifying that parameters match their expected types, developers can reduce the number of bugs related to incorrect use of function parameters.

Type Checking in Different Languages :

1. FORTRAN 77 and Original C:

o No type checking: Parameters are not type-checked, which can lead to errors.

2. Pascal, FORTRAN 90, Java, and Ada:

o Always required: Type checking is mandatory for all function parameters.

3. ANSI C and C++:

o Choice by the user: Type checking is optional, but prototypes help specify expected types.

4. Perl, JavaScript, PHP:

o No type checking: These dynamically typed languages don't enforce type checks, which can lead to runtime errors.
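In a dynamically typed language like Python, parameter type checking can be added manually; annotations document the intent but are not enforced at runtime (a sketch, function name illustrative):

```python
# Manual parameter type checking in Python: annotations alone are not
# enforced, so the function validates its own arguments.
def add(a: int, b: int) -> int:
    if not isinstance(a, int) or not isinstance(b, int):
        raise TypeError("add() expects two ints")
    return a + b

print(add(2, 3))  # 5
# add("2", 3)     # raises TypeError instead of misbehaving later
```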

Multidimensional Arrays as Parameters

If a multidimensional array is passed to a subprogram and the subprogram is separately compiled, the compiler needs to know the declared size of that array to build the storage-mapping function.

Multidimensional Arrays as Parameters: C and C++

• Programmer is required to include the declared sizes of all but the first subscript
in the actual parameter
• Disallows writing flexible subprograms
• Solution: pass a pointer to the array and the sizes of the dimensions as other
parameters; the user must include the storage mapping function in terms of the
size parameters
Multidimensional Arrays as Parameters: Pascal and Ada

Pascal

– Not a problem; the declared size is part of the array's type

Ada

– Constrained arrays - like Pascal

– Unconstrained arrays - declared size is part of the object declaration

Multidimensional Arrays as Parameters: Fortran

Formal parameters that are arrays have a declaration after the header:

– For single-dimension arrays, the subscript is irrelevant

– For multi-dimensional arrays, the subscripts allow the storage-mapping function to be built

Multidimensional Arrays as Parameters: Java and C#

• Similar to Ada
• Arrays are objects; they are all single-dimensioned, but the elements can be arrays
• Each array inherits a named constant (length in Java, Length in C#) that is set to the length of the array when the array object is created

Design Considerations for Parameter Passing:


When designing how parameters are passed to functions or subprograms, there are two
main factors to consider:

1. Efficiency:

• Efficiency refers to how quickly and resource-effectively data is passed to a function.

• Pass-by-reference is generally more efficient than pass-by-value, especially for large structures or arrays, because instead of copying all the data, you only pass a reference (or address) to the data.

• Pass-by-value, on the other hand, involves copying the entire data, which may be less efficient for large data structures.

2. One-way or Two-way Data Transfer:

• One-way data transfer occurs when a parameter is passed only for reading (i.e., the function doesn't modify the value).

o Pass-by-value is typically used here to ensure the function doesn't alter the original data.

• Two-way data transfer happens when data is both read and modified by the function.

o Pass-by-reference is suitable here because it allows the function to modify the actual value of the argument passed.

Parameters as subprograms:
In some programming languages, it’s useful to pass subprograms (functions or
procedures) as parameters to other subprograms. This allows for more flexible and
dynamic behavior, such as using a function as an argument in another function, or
passing custom operations to be executed within a given context.

This concept is also known as higher-order functions or function pointers (in languages like C/C++) and is common in languages that support first-class functions (like JavaScript, Python, and Lisp).

Benefits of Passing Subprograms as Parameters:

• Flexibility: Allows you to write more general and reusable code, where you can
pass different operations or behaviors to functions.

• Dynamic behavior: Enables dynamic execution of code (e.g., passing different sorting or filtering algorithms to a function).

Issues with Passing Subprograms as Parameters:

When subprograms are passed as parameters, there are several key issues that need to
be addressed:

1. Are parameter types checked?

• Type checking: When you pass a subprogram as a parameter, the compiler or interpreter needs to verify that the subprogram being passed matches the expected type (in terms of inputs and outputs).

o Problem: If the parameter types of the passed subprogram and those expected by the receiving function do not match, errors can result.

o Solutions by language:

o C and C++: Functions cannot be passed as parameters, but pointers to functions can be, and their parameters can be type checked.

o FORTRAN 95 performs type checking.

o Later versions of Pascal and Ada do not allow subprogram parameters; a similar alternative is provided via Ada's generic facility.

2. What is the correct referencing environment for a subprogram sent as a parameter?

• Referencing environment: Each subprogram has a referencing environment, which consists of the variables that are visible to the subprogram.

o Problem: When a subprogram is passed as a parameter, the environment (or scope) it references needs to be clearly defined; otherwise it is unclear which variables the passed subprogram can access.

o Solutions:

o Shallow Binding: The subprogram's environment is the environment at the point where the subprogram is called. This is typically seen in dynamically scoped languages.

o Deep Binding: The subprogram's environment is the environment at the point where the subprogram is defined. This is often used in lexically (statically) scoped languages.

o Ad Hoc Binding: The subprogram's environment is the environment at the point where the subprogram is passed as a parameter.
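Python, being lexically scoped, gives deep binding to functions passed as parameters: the passed function sees the environment where it was defined, not where it is called (a sketch, names illustrative):

```python
# Deep binding: a passed function carries the environment of its definition.
def make_reporter():
    label = "defined-env"
    def report():
        return label          # bound where report() is *defined*
    return report

def run(f):
    label = "call-env"        # ignored: f does not see run()'s locals
    return f()

print(run(make_reporter()))   # defined-env
```

Under shallow binding, the same call would instead see run()'s local label.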

Calling Subprograms Indirectly:


Calling subprograms indirectly means invoking a subprogram without directly naming it
in the code. Instead of calling a function or procedure by its name, you use a reference,
pointer, or other mechanism to execute it. This can provide flexibility, allowing
subprograms to be selected and executed dynamically at runtime.

There are several methods of calling subprograms indirectly, depending on the programming language. Here are some common techniques:
1. Function Pointers (in languages like C and C++):

In C and C++, you can use function pointers to store the address of a function and then
call it indirectly.

• How it works: A function pointer holds the memory address of a function, and
you can use this pointer to invoke the function at a later time.

2. First-Class Functions (in languages like Python, JavaScript, and Ruby):

In languages like Python or JavaScript, functions are first-class citizens, meaning you
can treat functions as objects. You can store them in variables, pass them as
parameters, and invoke them dynamically.

• How it works: A function can be assigned to a variable or passed as an argument, and then that variable is used to call the function indirectly.

3. Callbacks:

A callback is a function passed into another function as an argument, which is then called inside that function. This is a common technique for indirect function calls, especially in asynchronous or event-driven programming.

• How it works: The "caller" function accepts another function (the callback) as a
parameter and calls it at the appropriate time.
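A minimal callback sketch in Python (names illustrative): the receiving function accepts another function and invokes it indirectly for each item:

```python
# The callback is supplied by the caller and invoked indirectly.
def process(items, callback):
    results = []
    for item in items:
        results.append(callback(item))  # indirect call through the parameter
    return results

print(process([1, 2, 3], lambda x: x * 10))  # [10, 20, 30]
```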

4. Using Objects or Classes (in Object-Oriented Languages):

In object-oriented languages (like Java, C++, or Python), you can call methods on
objects indirectly using references or pointers.

• How it works: An object reference can point to a method, and that method can
be invoked indirectly.

5. Reflection/Introspection (in some languages like Java or Python):

Some languages (like Java and Python) allow reflection or introspection, which
enables you to get information about classes and methods at runtime. You can use this
feature to call methods indirectly by their names.

• How it works: You can dynamically find and invoke methods based on their
names as strings, even if the methods aren’t directly referenced in the code.
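In Python, getattr performs exactly this runtime lookup: a method is found by its name as a string and then called indirectly (a sketch; the class and method names are illustrative):

```python
# Reflection: find and call a method by name at runtime.
class Greeter:
    def hello(self):
        return "hi"

g = Greeter()
method_name = "hello"             # could come from config or user input
method = getattr(g, method_name)  # look the method up by its string name
print(method())                   # hi
```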

Polymorphism in Subprograms:
• Polymorphism refers to the ability to use a single interface or function with
different types of data.

Types of Polymorphism:

1. Ad Hoc Polymorphism (overloaded subprograms):

o Overloaded subprograms provide ad hoc polymorphism.

2. Parametric Polymorphism (Generic Subprograms):

o Parametric polymorphism is achieved when a subprogram is generic and can work with any type of data.

Overloaded Subprograms:
An overloaded subprogram is one that has the same name as another subprogram in
the same referencing environment. This allows multiple versions of the same
subprogram to coexist, each providing different functionality based on the arguments
passed to them.

Key Concepts:

• Same Name, Different Protocols: Multiple subprograms can share the same name within the same referencing environment, but they must differ in either:

o The number of parameters (arity).

o The types of parameters (signature).

o Or a combination of both.

• Unique Protocol for Each Version: Each version of an overloaded subprogram has a unique signature, meaning the system can distinguish between the versions by looking at the types and number of arguments passed to them.

Languages Supporting Overloaded Subprograms:

1. C++:

o C++ supports function overloading, allowing multiple functions with the same name but different parameter types or counts.

o The compiler selects the correct function version based on the arguments passed at the call.

2. Java:

o Java also supports method overloading, where multiple methods with the
same name can exist, but the number or type of their parameters must
differ.
3. C#:

o C# allows method overloading where the same method name can have
different signatures (based on the number or types of arguments).

4. Ada:

o In Ada, the return type can be used to disambiguate overloaded subprograms. This means you can have functions with the same parameters but different return types.

o Ada allows users to define multiple versions of a subprogram with the same name.
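Python does not overload on signatures, but functools.singledispatch offers comparable ad hoc polymorphism by selecting an implementation from the type of the first argument (a sketch; the function name is illustrative):

```python
from functools import singledispatch

@singledispatch
def describe(value):
    return "unknown"          # fallback for unregistered types

@describe.register
def _(value: int):
    return "an int"

@describe.register
def _(value: str):
    return "a string"

print(describe(42))    # an int
print(describe("hi"))  # a string
print(describe(1.5))   # unknown
```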

Advantages of Overloading:

1. Improved Readability: You can use the same name for similar functions that
operate on different data types, improving code clarity.

2. Code Reusability: Overloading allows you to reuse the same function name for
different tasks based on the arguments provided.

3. Flexibility: It allows the programmer to write flexible and concise code without
creating numerous differently named functions.

Challenges:

1. Complexity: Overloading can make it difficult to understand which function is being called, especially when the parameter types or counts are very similar.

2. Ambiguity: In some cases, if the parameters are not clearly distinguishable, a call can be ambiguous, leading to errors.

Generic Subprograms:
A generic subprogram (also called a generic function or template function) is a
subprogram that can accept parameters of different types on different calls. This
allows the same subprogram to be used with multiple types without needing to define
multiple versions of the subprogram for each type.

• Generic subprograms enable parametric polymorphism, allowing the subprogram to operate on data of any type; the type is determined when the subprogram is called.

• Example (C++ template):

template <typename Type>
Type max(Type first, Type second) {
    return first > second ? first : second;
}

Generic Subprograms in Different Languages:

C++ (Templates):

In C++, you can define a generic function using templates. A template is a way to write
a function that can work with any data type.

Java (Generics):

In Java, generics are used to define generic subprograms (methods or classes) that
work with any data type.

Ada (Generics):

In Ada, generic subprograms (generic procedures and functions) allow you to define a
subprogram that can work with any data type.

C# (Generics):

In C#, generics are used to create methods that can accept different data types.
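In Python, the same idea can be expressed with typing.TypeVar: one definition serves many types, mirroring the C++ max template (a sketch; static checkers, not the runtime, enforce the types):

```python
from typing import TypeVar

T = TypeVar("T")  # a type parameter, bound at each call

def max_of(first: T, second: T) -> T:
    return first if first > second else second

print(max_of(3, 7))      # 7
print(max_of("a", "b"))  # b
```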

Advantages of Generic Subprograms:

1. Reusability: You can use the same subprogram with different types, avoiding the
need to write multiple versions of the same function.

2. Type Safety: Generic subprograms allow for type checking at compile time,
ensuring that type-related errors are caught early.

3. Flexibility: Generic subprograms provide the flexibility to work with various types
without the need for casting or converting data types.

Design Issues for Functions


Two major design issues for functions include side effects and return types.

1. Are Side Effects Allowed?

Side effects refer to any changes a function makes to variables or data outside its own
scope, such as modifying global variables, changing the state of input parameters, or
interacting with external systems (e.g., writing to a file, printing to the console).

Side Effects and Parameters:


• To reduce side effects, it is considered best practice to design functions such that parameters are in-mode (input-only). This means that the function should not modify the values of its input parameters; instead, it should only read them.

o Ada is a language that enforces this idea. In Ada, parameters are categorized by their mode:

▪ In-mode parameters: Read-only; passed to the function and cannot be modified.

▪ Out-mode parameters: The function is responsible for assigning values to these parameters; the calling code cannot pass an initial value.

▪ In-out-mode parameters: The function can both read and modify these parameters.
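In Python terms, the in-mode discipline amounts to never mutating arguments and returning fresh results instead (a sketch, names illustrative):

```python
# In-mode style: inputs are read-only; results come back as return values.
def scaled(values, factor):
    return [v * factor for v in values]  # builds a new list, no side effect

data = [1, 2, 3]
result = scaled(data, 10)
print(result)  # [10, 20, 30]
print(data)    # [1, 2, 3] -- the input is untouched
```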

2. What Types of Return Values Are Allowed?

The return type of a function determines what kind of value the function will return to
the caller. This can vary significantly across programming languages, and the design of
the function depends on what types are allowed or restricted.

Types of Return Values in Different Languages:

1. C:

o A function can return any primitive type (like int, float, char, etc.) or pointers to data structures, but arrays and functions cannot be directly returned.

o However, you can return a pointer to an array or a pointer to a function.

Example:

int* getArray() {
    static int arr[] = {1, 2, 3};
    return arr; // Returns a pointer to the static array
}

2. C++:

o C++ allows functions to return any data type, including user-defined types (e.g., classes, structs).

o Like C, functions can return pointers and references. In addition to the types available in C, C++ offers more flexibility, especially with object-oriented programming, where functions can return objects of custom classes.

3. Ada:

o In Ada, the return type of a function can be any type, including user-
defined types (like records, arrays, etc.).

4. Java:

o In Java, methods (functions are called methods in Java) can return any
type, but Java does not support functions (in the sense of C-style
functions). Instead, Java uses methods that belong to classes.

Coroutines:
A coroutine is a type of subprogram (a function or procedure) that has:

1. Multiple entry points: Unlike traditional functions or subprograms that start execution at the top and terminate at the end, coroutines can resume execution from the point where they were previously paused.

2. Self-controlled execution flow: Coroutines manage their own control flow rather than relying on a strict caller-callee relationship.

This is why coroutines are often referred to as implementing symmetric control, where
the caller and the called coroutine are treated as equals.

Key Features of Coroutines

1. Resumable Execution:

o A coroutine can pause its execution at a specific point and later resume
from where it left off.

o This contrasts with regular functions, which always restart execution from
the beginning when called.

2. Resume Mechanism:

o The first call to a coroutine starts its execution from the beginning.

o Subsequent calls (known as resumes) continue from the point just after
the last executed statement.

3. Quasi-Concurrent Execution:
o Coroutines enable a form of cooperative multitasking. Multiple
coroutines can share control and pass execution back and forth between
themselves without overlapping their execution.

o This differs from threads, which can execute concurrently and in parallel.

4. Looping Execution:

o Coroutines can repeatedly resume each other, potentially forever, allowing them to serve as ongoing processes or workflows.

How Coroutines Work

When a coroutine is executed:

1. On the first call, it begins execution at its starting point.

2. When it encounters a pause or yield, it saves its current state (local variables
and the point of execution) and transfers control back to the caller.

3. On subsequent resume calls, it restores its state and continues execution from
the saved point.

Example:

Imagine a coroutine that generates Fibonacci numbers:


def fibonacci():

a, b = 0, 1

while True:

yield a # Pause and return current value

a, b = b, a + b # Update for the next number

When fibonacci() is called and resumed, it produces the next Fibonacci number without
starting over.
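The generator above can be driven with Python's built-in next(), which performs the resume step (the definition is repeated here so the sketch is self-contained):

```python
def fibonacci():
    a, b = 0, 1
    while True:
        yield a            # pause and hand the current value back
        a, b = b, a + b    # runs only when the caller resumes us

gen = fibonacci()
# Each next() resumes execution just after the yield, so the
# state (a, b) is preserved between calls.
first_six = [next(gen) for _ in range(6)]
print(first_six)  # [0, 1, 1, 2, 3, 5]
```

Calling fibonacci() again would create a fresh coroutine instance that starts from the beginning; only resuming the same generator object continues from the saved point.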

Advantages of Coroutines

1. Simplified Asynchronous Programming:

o Coroutines are widely used in asynchronous programming (e.g., Python's async/await) to manage tasks that might involve waiting, such as file I/O or network requests, without blocking the entire program.

2. Efficient Execution:

o Since coroutines share a single thread of execution and pause themselves when idle, they are more lightweight than threads and avoid the overhead of context switching.

3. Better Control Flow:

o Coroutines allow developers to write code in a linear, readable fashion while still managing complex workflows.

4. Cooperative Multitasking:

o They can manage tasks cooperatively, ensuring that no single coroutine dominates execution time.
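The cooperative sharing of control described above can be sketched with plain Python generators. The round-robin scheduler and the task names below are illustrative, not part of any standard library:

```python
trace = []  # records the interleaved execution order

def task(name, steps):
    # Each task is a generator: yield is the cooperative pause point.
    for i in range(steps):
        trace.append((name, i))   # do one unit of work
        yield                     # voluntarily give up control

def run_round_robin(tasks):
    ready = list(tasks)
    while ready:
        current = ready.pop(0)
        try:
            next(current)          # resume the task until its next yield
            ready.append(current)  # still alive: requeue it
        except StopIteration:
            pass                   # task finished; drop it

run_round_robin([task("A", 2), task("B", 3)])
print(trace)
```

Because each task pauses itself, no task can dominate; control alternates between A and B until each finishes.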

General Semantics of Calls and Returns


The subprogram call and return process is collectively known as subprogram
linkage. Its implementation depends on the specific rules of the programming
language. Here's a simplified explanation of the key actions involved in a subprogram
call and return:

Actions During a Subprogram Call

1. Parameter Passing:
o The process must handle how parameters (inputs/outputs) are passed to
the subprogram (e.g., by value or reference).

2. Allocating Storage for Local Variables:

o If the subprogram has temporary variables (local variables) that are not
fixed (static), memory must be allocated for them during the call.

3. Saving the Caller’s State:

o The program must save everything needed to resume execution after the
subprogram finishes. This includes:

▪ Current register values.

▪ CPU status bits.

▪ The Environment Pointer (EP), which is used to access parameters and local variables during subprogram execution.

4. Transferring Control:

o The program jumps to the starting point of the subprogram and ensures it
can return to the correct spot in the caller after the subprogram finishes.

5. Handling Nonlocal Variables:

o If the language supports nested subprograms, the process must provide access to variables from enclosing scopes that the called subprogram can use.

Actions During a Subprogram Return

1. Updating Parameters:

o If parameters are passed using out mode or inout mode (where values
can be changed), the updated values are copied back to the original
variables.

2. Deallocating Local Storage:

o Memory allocated for local variables is freed.

3. Restoring the Caller’s State:

o The previously saved state of the caller is restored, so it can continue execution properly.

4. Returning Control:
o The program jumps back to the location in the caller where the
subprogram was initially called.
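As a rough illustration (not a real compiler or runtime), the call and return actions above can be modeled with an explicit stack of activation-record dictionaries. All names, the record layout, and the return-address value here are hypothetical:

```python
call_stack = []  # the runtime stack of activation records

def call(sub_name, params, return_address):
    # Save caller state, pass parameters, allocate local storage,
    # then transfer control (modeled by pushing a record).
    ar = {"sub": sub_name, "params": params,
          "locals": {}, "return_address": return_address}
    call_stack.append(ar)
    return ar

def ret():
    ar = call_stack.pop()        # deallocate local storage
    return ar["return_address"]  # restore caller state, jump back

# A hypothetical call to a subprogram "area" from address 0x42:
ar = call("area", {"w": 3, "h": 4}, return_address=0x42)
ar["locals"]["result"] = ar["params"]["w"] * ar["params"]["h"]
assert ret() == 0x42  # control returns to the caller's saved address
```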

Implementing simple subprograms


-What Are Simple Subprograms?

• Key Features:

o No recursion: Subprograms cannot call themselves directly or indirectly.

o Fixed memory layout: Local variables and activation records do not grow
or shrink during execution.

o Single instance: Only one version of the subprogram is active at any time.

- Semantics of a Subprogram Call

1. Save the Caller’s Execution Status:

o Preserve the current state of registers, control flags, and environment pointers.

o This ensures the program can resume where it left off after the
subprogram finishes.

2. Compute and Pass Parameters:

o For pass-by-value, a copy of the parameter is created.

o For pass-by-reference, the memory address of the parameter is passed.

o For pass-by-value-result, a copy is made, and changes are written back after the subprogram ends.

3. Pass the Return Address:

o Store the memory address of the instruction in the caller where execution
will resume.

4. Transfer Control:

o Execution jumps to the entry point of the subprogram.

- Semantics of a Subprogram Return

1. Update Parameters (if applicable):

o For pass-by-value-result, updated values are copied back to the original variables.
o For out-mode, results are assigned to the caller's variables.

2. Return Function Value (if applicable):

o Store the result of the function in a pre-allocated space accessible to the caller.

3. Restore Caller’s State:

o Restore all saved registers, flags, and environment pointers.

4. Return Control:

o Jump to the stored return address in the caller.
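Pass-by-value-result (copy-in, copy-out) can be simulated in Python. Note that Python itself passes object references, so the copy-in and copy-back steps are modeled explicitly in this sketch; the function name is illustrative:

```python
import copy

def double_all(values):
    local = copy.copy(values)   # copy-in: the subprogram works on a copy
    for i in range(len(local)):
        local[i] *= 2           # changes affect only the local copy
    return local                # copy-out is delivered at return

caller_list = [1, 2, 3]
# The caller performs the copy-back into its own variable:
caller_list[:] = double_all(caller_list)
print(caller_list)  # [2, 4, 6]
```

Had double_all mutated its argument directly, the caller's list would change during the call (reference semantics); with copy-in/copy-out, the caller's variable is updated only at the return, which is the defining property of value-result mode.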

-Storage Requirements:

To execute a subprogram, memory is required for:

1. Status Information:

o Includes CPU registers, stack pointers, and program counters.

2. Parameters:

o Memory for input and output parameters passed during the call.

3. Return Address:

o Stores where to return after subprogram execution.

4. Return Value (for functions):

o A designated memory location for storing the result of the function.

5. Temporaries:

o Intermediate values generated during the execution of the subprogram.

-Activation Record

The activation record is a layout of data needed for a subprogram to execute.

• Contents:

o Parameters.

o Local variables.

o Return address.

o Temporaries.

• Fixed Size:
o In simple subprograms, the activation record size is known at compile
time, allowing static allocation.

-Advantages:

o Simple memory management.

o No overhead for dynamic memory allocation or stack operations.

o Efficient execution.

-Limitations:

o No support for recursion or re-entrant subprograms.

Implementing Subprograms with Stack-Dynamic Local Variables

In most programming languages, stack-dynamic local variables refer to variables that
are allocated on the stack during the execution of a function or subprogram and
deallocated when the subprogram finishes execution. This is a common approach in
languages like C, C++, and Python for managing local variables.

Here’s how stack-dynamic local variables work in subprograms:

1. Stack-based Allocation:

o When a subprogram (like a function or procedure) is called, the local variables of that subprogram are created on the stack (dynamic memory allocation). These variables are destroyed once the subprogram exits.

2. Stack Frame (Activation Record):


o When the subprogram is called, an activation record (AR) is pushed onto
the stack. This AR contains the local variables for the subprogram,
return address, parameters, and other relevant data.

o Once the subprogram completes, the AR is popped from the stack, and
the local variables go out of scope and are destroyed.

Key Steps in Implementing Stack-Dynamic Local Variables

Let’s walk through how to implement a function with stack-dynamic local variables.

Step-by-Step Example in C

Consider a simple program with a subprogram (function) that uses local variables
dynamically allocated on the stack.

#include <stdio.h>

void subprogram(int x) {
    // Local variables dynamically allocated on the stack
    int y = 10;
    int z = x + y;
    printf("Sum of %d and %d is: %d\n", x, y, z);
}

int main() {
    int a = 5;
    subprogram(a); // Calling the subprogram
    return 0;
}

1. Stack Before subprogram Call (At main)

Initially, when the main() function is executing, there are local variables (a) for main().
The stack looks like this:

|----------------------|

| main's AR | (Contains `a` and return address)

|----------------------|

2. Stack After Calling subprogram(a)


When subprogram(a) is called, a new activation record (AR) is pushed onto the stack.
The AR for subprogram() contains:

• The argument x (received from a in main).

• The local variables y and z.

|----------------------|

| main's AR | (Contains `a` and return address)

|----------------------|

|----------------------|

| subprogram's AR | (Contains `x`, `y`, `z`, and return address)

|----------------------|

3. Function Execution

• Local Variables:

o Inside subprogram(), the local variables y and z are dynamically allocated.

o y is initialized to 10.

o z is computed as x + y, which in this case is 5 + 10 = 15.

o printf is called to print the result.

4. Stack After Returning from subprogram()

Once subprogram() finishes execution, its AR is popped from the stack, and its local
variables (y and z) are destroyed. The stack now only contains the AR for main():

|----------------------|

| main's AR | (Contains `a` and return address)

|----------------------|

5. Memory Deallocation

Since the variables y and z were stack-dynamic, they were automatically deallocated
when subprogram() finished. The memory that was allocated for them is released, and
there’s no need for explicit memory management (like free() in some languages).
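Because each call allocates a fresh set of stack-dynamic locals, recursion works naturally: every activation owns its own copies of the local variables. A small Python sketch (the function and variable names are illustrative):

```python
def depth(n, seen):
    # A new `label` exists in every activation (stack-dynamic local),
    # so recursive calls cannot overwrite each other's copy.
    label = f"level-{n}"
    seen.append(label)
    if n > 0:
        depth(n - 1, seen)      # pushes a new activation record
    seen.append(label)          # still this activation's own value

trace = []
depth(2, trace)
print(trace)  # ['level-2', 'level-1', 'level-0', 'level-0', 'level-1', 'level-2']
```

The symmetric pattern in the output shows each activation's `label` surviving untouched across the nested calls, which is exactly what the statically allocated records of simple subprograms cannot provide.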

Nested Subprograms and Static Scoping


In some programming languages, like Fortran 95+, Ada, Python, JavaScript, Ruby, and
Lua, subprograms (functions or procedures) can be nested, meaning one subprogram is
defined inside another. These languages typically use stack-dynamic local variables,
and subprograms can be nested within each other.

Key Points about Nested Subprograms:

1. Subprogram Definition Within Another: In nested subprograms, a function or procedure is defined inside another function. The inner function is local to the outer function, meaning it cannot be directly accessed outside the scope of the function that contains it.

2. Access to Outer Variables: Inner subprograms can access variables that are
defined in the enclosing (outer) subprogram. This can lead to more flexible
designs, but also introduces challenges in variable scope resolution and access
to non-local variables.

3. Stack-Dynamic Local Variables: Nested subprograms typically use stack-dynamic local variables. This means that local variables are allocated on the stack at runtime when the subprogram is called and deallocated when the subprogram exits.

4. Scope and Lifetime:

o The scope of a variable in a nested subprogram depends on the position of its declaration within the program.

o The lifetime of a variable is confined to the activation of the subprogram, meaning that the variable is created when the subprogram is called and destroyed when it finishes executing.

5. Static Scoping and Nested Subprograms:

o In languages with static scoping (or lexical scoping), the visibility of variables is determined by the program’s source code structure. The outer subprograms’ variables are accessible to inner subprograms.

o Access to non-local variables (i.e., those defined in outer subprograms) is resolved by following a static chain (a series of links between activation records of subprograms).

6. Example in JavaScript:

Consider the following JavaScript code with nested subprograms:

function outer() {
    let outerVar = 10;

    function inner() {
        let innerVar = 20;
        console.log(outerVar); // Accessing outer function's variable
    }

    inner();
}

outer(); // Calling the outer function

In this example:

o The inner() function is defined inside the outer() function.

o inner() has access to outerVar because of static scoping, allowing it to reference variables from the outer function.

7. Activation Records:

o In a nested subprogram scenario, an activation record is pushed onto the stack when a subprogram is called, and the static link in the record helps the inner subprogram find variables from outer subprograms.

8. Static Links:

o To locate a variable in a nested subprogram, static links (or static chains) are used. Static links point to the activation record of the most recent static parent (the enclosing subprogram).

o Static Chain: A chain of static links connecting subprograms through their activation records. This helps in resolving non-local variable references by following the chain backward.
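The static-chain lookup described above can be sketched in Python, with dictionaries standing in for activation records; the record layout and variable names are illustrative:

```python
def lookup(name, ar):
    # Follow static links from the current activation record outward
    # until the name is found or the chain is exhausted.
    while ar is not None:
        if name in ar["locals"]:
            return ar["locals"][name]
        ar = ar["static_link"]   # climb to the enclosing subprogram's AR
    raise NameError(name)

# Three nested activations: main encloses outer, which encloses inner.
main_ar  = {"locals": {"u": 1}, "static_link": None}
outer_ar = {"locals": {"v": 2}, "static_link": main_ar}
inner_ar = {"locals": {"w": 3}, "static_link": outer_ar}

print(lookup("u", inner_ar))  # found two links up the static chain: 1
```

Note that the chain follows lexical nesting (who encloses whom), not call order; that distinction is exactly what separates static from dynamic scoping in the next section.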

Blocks
A block is a section of code enclosed in curly braces {} that defines a local scope for
variables. It allows variables to be used only inside the block and ensures they don't
interfere with other variables in the program.

Example in C:
{
    int temp;
    temp = list[upper];
    list[upper] = list[lower];
    list[lower] = temp;
}

o Here, temp is only accessible inside the block and doesn't affect other
variables named temp elsewhere in the program.

2. Variable Lifetime:

o Variables declared inside a block exist only while the block is being
executed.

o Once control exits the block, the variables are no longer available.

3. How Blocks Are Implemented:

o Static-Chain Process: Blocks are treated as simple subprograms that are always called from the same place in the program.

o Efficient Memory Use: Instead of creating and destroying records every time a block is entered, the memory for block variables can be allocated upfront and reused once the block is exited.

4. Memory Management:

o When a block ends, its variables are removed, and their memory can be
reused by other blocks. This helps manage memory efficiently.

Advantages of Blocks:

1. Scope Control: Variables inside a block can't be accessed outside it, reducing
conflicts.

2. Efficient Memory Usage: Memory for block variables is reused when the block
ends.

3. No Interference: You can use the same variable names in different blocks
without interference.

Blocks are useful for keeping variables local to a specific part of the code, reducing
interference, and managing memory efficiently.

Implementing Dynamic Scoping


Dynamic scoping is a method used in some programming languages where a variable's
value is determined by the most recent active subprogram call in the execution chain.
There are two main ways to implement dynamic scoping: deep access and shallow
access.

1. Deep Access

Deep access involves searching through the activation records (the stack frames) of all
active subprograms, starting from the most recently activated one, to find the nonlocal
variable.

• How it works:

o Each subprogram has an activation record in the runtime stack.

o When a variable is referenced, the system looks for the variable starting
from the most recent subprogram call and works backwards through the
stack.

o If the variable is not found in the current subprogram, the system looks in
the previous subprogram's activation record, and so on, until the variable
is found or the stack is exhausted.

• Example:

void sub3() {
    int x, z;
    x = u + v; // u and v are nonlocal variables
}

void sub2() {
    int w, x;
}

void sub1() {
    int v, w;
}

void main() {
    int v, u;
}

If the call chain is main → sub1 → sub2 → sub3, then when sub3() evaluates x = u + v, the search proceeds backwards through the activation records in call order (sub3, sub2, sub1, main): v is found in sub1's record and u in main's record.
• Disadvantages:

o Performance: Searching the stack can be slow, especially if the call chain is deep.

o Complexity: The length of the chain to be searched is not known at compile time, which makes it difficult to optimize.
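A minimal Python sketch of deep access, mirroring the example above; the activation-record layout, the variable values, and the assumed call chain main → sub1 → sub2 → sub3 are illustrative:

```python
# The runtime stack after main calls sub1, sub1 calls sub2,
# and sub2 calls sub3 (most recent activation last).
dynamic_stack = [
    {"sub": "main", "locals": {"v": 10, "u": 20}},
    {"sub": "sub1", "locals": {"v": 30, "w": 40}},
    {"sub": "sub2", "locals": {"w": 50, "x": 60}},
    {"sub": "sub3", "locals": {"x": 70, "z": 80}},
]

def deep_lookup(name):
    # Deep access: search activation records in reverse call order.
    for ar in reversed(dynamic_stack):
        if name in ar["locals"]:
            return ar["locals"][name]
    raise NameError(name)

# Inside sub3, u + v resolves to main's u and sub1's v:
print(deep_lookup("u") + deep_lookup("v"))  # 20 + 30 = 50
```

Under static scoping the same references would be resolved by lexical nesting instead of call order, which is why the two disciplines can give different answers for the same program text.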

2. Shallow Access

Shallow access is a different approach where variable references are resolved by looking at the most recent variable instance for a specific name. This method is faster for accessing variables but can be more costly in terms of maintaining the structure for managing the variables.

• How it works:

o Instead of searching through the stack, a separate stack or central table is used for each variable name.

o When a new variable is declared, it is placed at the top of the stack or table for that specific name.

o Every time a subprogram is called, the new variable instance for that
name is pushed onto the stack or added to the table.

o When a variable is referenced, the system always uses the most recent
version of the variable (the one at the top of the stack or table).

• Two methods for shallow access:

1. Separate Stack for Each Variable: Each variable name has its own stack, and
the most recent variable is accessed by looking at the top of the stack.

2. Central Table: A single table holds all variable names with an "active" bit to
indicate if the variable is currently in use. The table points to the most recent value of
the variable.

• Disadvantages:

o Subprogram Linkage Overhead: Maintaining the stacks or tables for each variable can be costly when subprograms are called and returned.
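Shallow access with one stack per variable name can be sketched as follows (an illustrative model, not a real interpreter; the enter/leave helper names are invented for this sketch):

```python
from collections import defaultdict

name_stacks = defaultdict(list)  # one stack per variable name

def enter(locals_):
    # Subprogram call: push this activation's instances of each name.
    for name, value in locals_.items():
        name_stacks[name].append(value)

def leave(locals_):
    # Subprogram return: pop this activation's instances.
    for name in locals_:
        name_stacks[name].pop()

def lookup(name):
    return name_stacks[name][-1]   # always the most recent instance

enter({"v": 10, "u": 20})          # main is activated
enter({"v": 30, "w": 40})          # main calls sub1
print(lookup("v"), lookup("u"))    # 30 20  (sub1's v shadows main's)
leave({"v": 30, "w": 40})          # sub1 returns
print(lookup("v"))                 # 10     (main's v visible again)
```

Lookup is now a constant-time top-of-stack read, but every call and return must push and pop all of the subprogram's locals, which is the linkage overhead noted above.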

The Concept of Abstraction


Abstraction is the process of simplifying complex systems by focusing on the essential
characteristics while ignoring unnecessary details. In programming, abstraction helps
manage complexity by allowing programmers to focus on essential details while
ignoring the rest.

There are two main types of abstraction in programming languages: process abstraction and data abstraction.

1. Process Abstraction:

o This allows us to define a process without worrying about how it works in detail. For example, when you use a subprogram to sort a list of numbers, you don't need to know how the sorting happens; you just call the sorting subprogram (e.g., sortInt(list, listLen)). The program focuses on the process (sorting the list), and the details of the algorithm are hidden.

o The essential details are the name of the array, its length, and the fact that
the array will be sorted. The algorithm used to sort the array is not
important for the user.

2. Data Abstraction:

o Data abstraction involves defining the structure of data and the operations that can be performed on it, without revealing the internal details. The operations (like sorting in the earlier example) are also abstractions that help users interact with data without needing to understand how the data is stored or manipulated.

Introduction to Data Abstraction


Data abstraction started in 1960 with COBOL, which introduced the record data
structure. Other languages, such as C, have similar features like structs. An abstract
data type (ADT) is a data structure that not only defines data but also includes
subprograms (or operations) that manipulate that data. It hides unnecessary details
and allows programmers to interact with the data type using only the defined
operations.

An abstract data type consists of:

1. Data Representation: The structure that stores the data.

2. Operations: Subprograms or functions that interact with the data.

-Floating-Point as an Abstract Data Type

Even built-in types like floating-point numbers are abstract data types. For instance,
floating-point numbers in most languages allow you to store and perform operations like
addition, subtraction, multiplication, etc., but hide how the numbers are actually
represented in memory.

Before the IEEE 754 standard for floating-point representation, different computer
architectures used various formats. However, programs could still be portable because
the implementation details were hidden.

-User-Defined Abstract Data Types

A user-defined abstract data type should have the following characteristics:

1. Hidden Representation: The actual data structure is hidden, and users can only
interact with it through the defined operations.

2. Type Definition and Operations: The type and its operations are packaged
together, forming an interface. This means other parts of the program can use the
type without knowing how it is implemented.

- Example: Stack ADT

A stack is a common abstract data type that stores elements and allows access to only the top element. Its abstract operations typically include:

• create(stack): create an instance of the stack
• destroy(stack): deallocate the stack's storage
• empty(stack): test whether the stack contains no elements
• push(stack, element): put the given element on top of the stack
• pop(stack): remove the top element
• top(stack): return a copy of the top element without removing it

For example, the code for using the stack might look like this:

create(stk1);

push(stk1, color1);

push(stk1, color2);

temp = top(stk1);

In this code:

• stk1 is an instance of the stack ADT.

• color1 and color2 are pushed onto the stack.

• The top element is retrieved without being removed.
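A minimal Python sketch of the stack ADT (Python hides the representation only by convention, via the leading underscore, rather than by enforcement):

```python
class Stack:
    """A stack ADT: clients use only the defined operations."""

    def __init__(self):
        self._items = []          # hidden data representation

    def push(self, value):
        self._items.append(value)

    def pop(self):
        return self._items.pop()

    def top(self):
        return self._items[-1]    # retrieve without removing

    def empty(self):
        return len(self._items) == 0

stk1 = Stack()
stk1.push("red")                  # color1
stk1.push("blue")                 # color2
print(stk1.top())                 # blue
```

A client that only calls push, pop, top, and empty never depends on the list representation, so `_items` could later be replaced (say, by a linked structure) without changing client code, which is the point of information hiding.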

Benefits of Information Hiding:

• Increased Reliability: Users cannot directly access or manipulate the data, preventing accidental errors.

• Reduced Complexity: By restricting access to the data, the programmer only needs to focus on a smaller, defined part of the program.

• Avoiding Name Conflicts: Information hiding reduces the chances of naming issues because variable names are scoped within the abstract data type.
Design Issues for Abstract Data Types (ADTs):
1. Interface Container: ADTs require a syntactic unit that defines the type and its
operations. This unit must make the type's name visible, while hiding the internal
representation. It allows clients to use the type but prevents direct access to its
implementation.

2. Built-in Operations: ADTs should have minimal general operations, with most
operations provided within the type's definition. Common operations like
assignment and equality comparisons might be needed, but not all ADTs will
require them.

3. Common Operations: Many ADTs require specific operations, such as:

o Iterators for accessing elements.

o Accessors to retrieve hidden data.

o Constructors for initializing new objects.

o Destructors for cleaning up resources.

4. Encapsulation: Some languages, like C++, Java, and C#, directly support ADTs,
while others (like Ada) offer a more generalized encapsulation, allowing more
flexible definitions.

5. Parameterized ADTs: Some languages support parameterized ADTs, where a data structure can be designed to store elements of any type, making the ADT more versatile.

6. Access Controls: The language must define how to restrict access to the
internal details of an ADT, ensuring that only specified operations can modify the
data.

7. Separation of Specification and Implementation: The design must decide whether the ADT's specification is separate from its implementation or if that is left to the developer.

Language Examples for Abstract Data Types (ADTs)


1. Ada:

• Encapsulation is achieved with packages:


o Specification package: Declares the interface (e.g., type names,
operations).

o Body package: Contains the implementation of the operations.

• Information Hiding:

o Public and private parts in the specification package.

o The type name is public; representation details are private.

• Types:

o Private types: Have built-in operations (e.g., assignment, comparison).

o Limited private types: No built-in operations.

2. C++:

• Encapsulation is done with classes:

o Private: Hidden members.

o Public: Accessible interface for the clients.

o Protected: For inheritance.

• Constructors: Initialize data members and may allocate storage.

• Destructors: Clean up memory, especially for heap storage.

• Information Hiding:

o Private members are hidden, and public methods expose the interface.

o Friend functions allow external access to private members.

3. Java:

• Encapsulation through classes:

o Private and public modifiers for access control.

o All objects are heap-allocated and accessed via references.

• Access Control:

o No friend functions; access control is enforced via access modifiers.

o Package scope allows access within the same package.

• Example: Java code for stack class with methods for push, pop, etc.

4. C#:
• Based on C++ and Java but adds:

o Internal and protected internal access modifiers.

o Structs: Lightweight classes without inheritance support.

• Memory Management:

o Garbage collection is used, reducing the need for destructors.

• Properties: C# provides getters and setters as properties, making data access smoother.

Parameterized Abstract Data Types (ADT)


• Definition: Parameterized ADTs allow the design of ADTs that can store elements
of any type, making them flexible and reusable. These are also referred to as
generic classes in many programming languages.

Supported Languages:

• C++, Ada, Java 5.0, and C# 2005 provide support for parameterized ADTs.

Parameterized ADTs in Ada:

• Ada supports generic packages, which allow parameterization of the stack’s element type and size. This flexibility enables the creation of more general ADTs.

Parameterized ADTs in C++:

• In C++, parameterization is achieved through parameterized constructors and template classes.

o The constructor allows the stack's size to be parameterized, while the template mechanism enables parameterization of the element type.

Parameterized ADTs in Java 5.0:

• Java provides generics, allowing parameterization of classes where the parameters must be types (classes). This approach is primarily used in collections like LinkedList and ArrayList, enabling type safety and eliminating the need for casting.

Parameterized ADTs in C# 2005:

• Similar to Java, C# supports generic types for parameterized ADTs. These types
are commonly used in collections and can be accessed through indexing.
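In the same spirit, Python's typing module supports parameterized (generic) classes. A sketch of a generic stack, analogous to the Java/C# generics and C++ templates above (requires Python 3.9+ for the list[T] annotation; class and variable names are illustrative):

```python
from typing import Generic, TypeVar

T = TypeVar("T")  # the type parameter, like <T> in Java or C#

class GenericStack(Generic[T]):
    """A stack parameterized on its element type."""

    def __init__(self) -> None:
        self._items: list[T] = []

    def push(self, value: T) -> None:
        self._items.append(value)

    def pop(self) -> T:
        return self._items.pop()

# The client fixes the element type when declaring the instance:
int_stack: GenericStack[int] = GenericStack()
int_stack.push(42)
print(int_stack.pop())  # 42
```

As in Java, the type parameter is checked by static tools (e.g., a type checker would flag int_stack.push("text")); at runtime Python does not enforce it, which distinguishes this from C++ templates that generate type-specific code.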
Encapsulation Constructs
Encapsulation is a way to group related data and functions into a single unit to make
large programs more manageable and avoid unnecessary recompilation. It helps in
organizing code logically and improves reusability.

Problems in Large Programs:

• Intellectual Manageability: When programs grow, organizing them as one large collection of functions or types becomes difficult.

• Recompilation: In large programs, recompiling the entire program after every change can be costly. The solution is to organize the program into smaller, independent units (encapsulations), which can be compiled individually.

Encapsulation in C:

• In C, encapsulation is achieved by organizing related functions and data in separate files (implementation and header files).

• Header files contain declarations, and implementation files contain the actual
definitions. The client includes the header to use these functions and data.

• However, C’s approach has some risks, such as potential mismatches between
the header and implementation files, which could cause errors that the linker
won't catch.

Encapsulation in C++:

• C++ has two types of encapsulation:

1. Non-templated Classes: The header file contains function declarations, and the definitions are in a separate file.

2. Template Classes: These include full definitions in the header file because of how C++ handles templates and separate compilation.

• Friend Functions: C++ allows functions (like vector and matrix multiplication) to
access private members of multiple classes, even if they are not part of those
classes, through "friend" declarations.

Encapsulation in Ada:

• Ada Packages provide a powerful form of encapsulation, where you can define
both data types and operations in a single package. This makes it easier to
handle cases like the vector and matrix example in C++.

Encapsulation in C# (Assemblies):
• C# uses Assemblies as the primary encapsulation construct. An assembly is a
file that contains code (in Common Intermediate Language), metadata about
classes, and references to other assemblies.

• Assemblies can be either private (for a specific application) or public (usable by any application).

• Internal Access Modifier: In C#, the internal modifier allows class members to
be visible to other classes within the same assembly.

Naming Encapsulations
In large software systems, naming encapsulations are crucial for organizing code and
avoiding conflicts between names used by different developers or libraries. These
encapsulations create logical units that allow independent parts of the program to
work together without accidentally using the same names for variables, methods, or
classes.

Why Naming Encapsulations Are Needed:

• Multiple Developers: In large programs, many developers work on different parts of the system, often in different locations. They need to be able to use their own names for variables and functions without conflicting with others.

• Libraries: Modern software heavily relies on libraries. As developers create new libraries or add new names to existing ones, they must avoid conflicts with names already used by other libraries or the client’s application.

Challenges of Naming Conflicts:

• Without a mechanism to manage names, developers may unintentionally use the same names, causing errors in the software system. This is particularly difficult because a library author doesn’t know what names are used in other parts of the program.

Purpose of Naming Encapsulations:

• Scope Management: Naming encapsulations define a name scope to avoid conflicts. Each part of the program (such as a library) can have its own naming space, keeping its names separate from others.

• Logical Organization: A naming encapsulation can span multiple collections of code, even if they are stored in different places. It ensures that names within this scope do not collide with those in other parts of the program.

Naming Encapsulations in Different Languages:


• C++: Uses namespaces to group related names together and avoid conflicts.

• Java: Java has packages, which help organize classes and avoid naming issues.

• Ada: Ada uses packages to encapsulate names and keep them separate from
other parts of the program.

• Ruby: Ruby uses modules to achieve a similar effect, organizing code into
namespaces.
