APP Unit 1 SRM - 17.7.23

The document provides an overview of programming languages, their elements, and various paradigms including imperative, procedural, object-oriented, functional, and concurrent programming. It discusses the Böhm-Jacopini theorem, which states that any computation can be expressed using three basic control structures: sequence, selection, and iteration. Additionally, it highlights the importance of programming language theory in understanding the design and implementation of programming languages.


SRM Institute of Science and Technology, School of Computing

Advanced Programming Practice – 21CSC203P
Unit-I (15 Sessions)

• Programming Languages – Elements of Programming Languages – Programming
Language Theory – Böhm-Jacopini structured program theorem – Multiple
Programming Paradigms – Programming Paradigm hierarchy – Imperative Paradigm:
Procedural, Object-Oriented and Parallel processing – Declarative programming
paradigm: Logic, Functional and Database processing – Machine Codes – Procedural
and Object-Oriented Programming – Suitability of multiple paradigms in the
programming language – Subroutine, method call overhead and dynamic memory
allocation for message and object storage – Dynamically dispatched message calls
and direct procedure call overheads – Object Serialization – Parallel Computing
• Textbook: Shalom, Elad. A Review of Programming Paradigms Throughout the
History: With a Suggestion Toward a Future Approach, Kindle Edition
Programming Languages

A programming language is a formal language with a set of rules and syntax that
allows humans to communicate instructions to a computer. It serves as a means of
writing programs or algorithms that can be executed by a computer or other
computing devices.

Programming languages enable developers to create software applications,
websites, mobile apps, and various other computer programs. They provide a way
to express algorithms and logical instructions in a format that can be understood
and executed by a computer system.
Programming languages can be classified into different types based on various
criteria, such as their level of abstraction, programming paradigm, and domain of
application. Some common types of programming languages include:

High-level languages: These languages are designed to be closer to human
language and are easier to read and write. Examples include Python, Java, and
Ruby.

Low-level languages: These languages provide more direct control over hardware
resources and are closer to machine code. Examples include assembly languages
and machine languages.
Procedural languages: These languages organize a program as a series of
procedures or functions that perform specific tasks. Examples include C, Pascal,
and Fortran.

Object-oriented languages: These languages focus on organizing data and behavior
into objects, promoting modularity and code reuse. Examples include Java, C++,
and Python.

Functional languages: These languages treat computation as the evaluation of
mathematical functions and emphasize immutability and the absence of side
effects. Examples include Haskell, Lisp, and Scala.
Scripting languages: These languages are often used for automation and quick
prototyping. They usually have simpler syntax and dynamic typing. Examples
include JavaScript, Perl, and PHP.

Domain-specific languages (DSLs): These languages are designed for specific
domains or industries, such as SQL for database queries or HTML/CSS for web
development.

Each programming language has its own strengths, weaknesses, and areas of
application. Developers choose a programming language based on factors such as
the requirements of the project, performance needs, availability of libraries and
frameworks, and personal preference or familiarity.
Elements of Programming languages

Programming languages consist of various elements that define their structure and
functionality. These elements include:

Syntax: Syntax refers to the rules and structure that define how statements and
expressions are written in a programming language. It specifies the correct order
and format of keywords, symbols, and operators to create valid programs.

Variables: Variables are used to store and manipulate data within a program. They
have a name, a data type that defines the kind of data they can hold (e.g., integer,
string, boolean), and a value.

Data Types: Data types define the kind of data that can be stored in variables or
manipulated by a program. Common data types include integers, floating-point
numbers, characters, strings, booleans, arrays, and objects.

Operators: Operators perform various operations on data, such as arithmetic
calculations (+, -, *, /), logical operations (AND, OR, NOT), and comparison (>, <,
==). They allow manipulation and evaluation of data within expressions and
statements.

Control Structures: Control structures dictate the flow and execution of a program.
They include conditional statements (if-else, switch) for making decisions based on
certain conditions and loop structures (for, while) for repeating a block of code.

Functions and Procedures: Functions and procedures encapsulate a set of
instructions that perform a specific task. They can accept input parameters,
process data, and return results. Functions can be reusable and modular, allowing
for code organization and reusability.
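Several of the elements above — variables, data types, operators, control structures, and a function — can be seen together in a short Python sketch (the classify function is an invented example, not from the text):

```python
def classify(numbers):
    """Return counts of even and odd values in `numbers`."""
    evens = 0          # variable holding an integer data type
    odds = 0
    for n in numbers:  # control structure: iteration
        if n % 2 == 0: # control structure: selection, using the % operator
            evens += 1
        else:
            odds += 1
    return evens, odds # the function returns a result to its caller

print(classify([1, 2, 3, 4, 5]))  # prints (2, 3)
```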

Input and Output: Programming languages provide mechanisms to interact with
the user and external devices. Input operations enable the program to accept data
from the user or other sources, while output operations display or store results or
information.
Libraries and Modules: Programming languages often have libraries or modules
that provide pre-written code and functionality to simplify common tasks. These
libraries extend the capabilities of the language and allow developers to leverage
existing code.

Comments: Comments are non-executable lines of text used to explain or
document code. They help improve code readability and provide information to
other developers or maintainers.

Error Handling: Programming languages offer mechanisms to handle and manage
errors or exceptions that occur during program execution. Error handling allows
developers to anticipate and respond to unexpected situations.

These elements, along with additional language-specific features and constructs,
collectively define the structure, behavior, and capabilities of a programming
language.

Programming Language Theory

Programming Language Theory is a field of computer science that studies the
design, analysis, and implementation of programming languages. It focuses on
understanding the principles, concepts, and foundations that underlie
programming languages and their use.
Programming Language Theory covers a broad range of topics, including:

Syntax and Semantics: This area deals with the formal representation and
interpretation of programming language constructs. It involves defining the syntax
(grammar) of a language and specifying the meaning (semantics) of its constructs.

Type Systems: Type systems define and enforce the rules for assigning types to
expressions and variables in a programming language. They ensure type safety and
help catch errors at compile-time.
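A small Python sketch of what type checking catches (the add function is a hypothetical example): Python's dynamic type system only detects the mismatch when the faulty call actually runs, whereas a statically typed language would reject the same code at compile time.

```python
def add(a: int, b: int) -> int:   # annotations document the intended types
    return a + b

print(add(2, 3))        # prints 5: the argument types match

try:
    add(2, "3")          # the type error surfaces only when this line runs
except TypeError:
    print("caught a TypeError at runtime")
```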

Programming Language Design and Implementation: This aspect involves the
process of creating new programming languages or extending existing ones. It
explores language features, constructs, and paradigms, and how they can be
efficiently implemented.

Programming Language Semantics: Semantics concerns the meaning and behavior
of programs. It involves defining mathematical models or operational semantics to
formally describe program execution and behavior.

Programming Language Analysis: This area focuses on static and dynamic analysis
of programs, including type checking, program verification, optimization
techniques, and program understanding.
Formal Methods: Formal methods involve using mathematical techniques to
analyze and prove properties of programs and programming languages. They aim to
ensure correctness, safety, and reliability of software systems.

Language Paradigms: Programming Language Theory explores different
programming paradigms, such as procedural, object-oriented, functional, logic, and
concurrent programming. It investigates the principles, strengths, and limitations
of each paradigm.

Language Implementation Techniques: This aspect covers compiler design,
interpretation, code generation, runtime systems, and virtual machines. It
investigates efficient strategies for executing programs written in various
programming languages.

Language Expressiveness: Language expressiveness refers to the power and
flexibility of a programming language in expressing different computations,
algorithms, and abstractions. It explores the trade-offs between expressiveness
and other factors such as performance and readability.

Programming Language Theory provides the foundation for understanding and
reasoning about programming languages. It helps in the development of new
languages, designing better programming constructs, improving software quality,
and building efficient and reliable software systems.
Böhm-Jacopini theorem

The Böhm-Jacopini theorem, published by Corrado Böhm and Giuseppe Jacopini in
their 1966 paper, is a fundamental result in programming language theory. It
states that any computation can be performed using only three basic control
structures: sequence, selection (if-then-else), and iteration (while or for
loops). This means that any program, regardless of its complexity, can be
expressed using these three control structures alone.

The theorem is significant because it establishes that more complex
control structures, such as goto statements or multiple exit points, are not
necessary to express any algorithm. By limiting the control structures to
sequence, selection, and iteration, the theorem promotes structured
programming, which emphasizes readable and modular code.

To understand the Böhm-Jacopini theorem, let's look at the three basic
control structures it allows:

Sequence: This control structure allows a series of statements to be
executed in a specific order, one after another. For example:
Statement 1;
Statement 2;
Statement 3;
Selection (if-then-else): This control structure enables a program to
make decisions based on certain conditions. It executes one set of
statements if a condition is true and another set of statements if the
condition is false. For example:

if (condition) {
Statement 1;
} else {
Statement 2;
}
Iteration (while or for loops): This control structure allows a set of
statements to be repeated until a certain condition is satisfied. It executes
the statements repeatedly as long as the condition holds true. For
example:

while (condition) {
Statement;
}
The Böhm-Jacopini theorem states that any program can be
structured using these three control structures alone. This means that
complex programs with loops, conditionals, and multiple branches can be
rewritten using only sequence, selection, and iteration constructs. The
theorem assures that these basic structures are sufficient to express any
algorithm or computation, promoting clarity and simplicity in program
design.
While the Böhm-Jacopini theorem advocates for the use of structured
programming principles, it is important to note that modern
programming languages often provide additional control structures and
abstractions to enhance code readability and maintainability. These
higher-level constructs build upon the foundations established by the
theorem but allow for more expressive and efficient programming.
The Böhm-Jacopini theorem, also called the structured program theorem, states
that any computable function can be worked out by combining subprograms in only
three ways:
• Executing one subprogram, then another subprogram (sequence).
• Executing one of two subprograms according to the value of a Boolean
expression (selection).
• Repeatedly executing a subprogram as long as a Boolean expression is true
(iteration).
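As a concrete illustration, Euclid's GCD algorithm (a hypothetical example, not from the text) can be written in Python using nothing but these three ways of combining subprograms:

```python
def gcd(a, b):
    """Greatest common divisor via only the three structured constructs."""
    while b != 0:            # iteration
        if a < b:            # selection
            a, b = b, a      # sequence: one assignment after another
        else:
            a, b = b, a % b
    return a

print(gcd(48, 18))  # prints 6
```

No goto or early exit is needed; the loop condition alone controls the flow, exactly as the theorem promises.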
Multiple programming paradigms
Multiple programming paradigms, also known as multi-paradigm programming, refers to
the ability of a programming language to support and integrate multiple programming
styles or paradigms within a single language. A programming paradigm is a way of
thinking and structuring programs based on certain principles and concepts.

Traditionally, programming languages have been associated with a specific paradigm, such
as procedural, object-oriented, or functional. However, with the advancement of
programming language design, many modern languages have incorporated elements and
features from multiple paradigms, providing developers with more flexibility and
expressive power.
Here are some common programming paradigms that can be supported by multi-paradigm programming languages:

Procedural Programming: This paradigm focuses on the step-by-step execution of a
sequence of instructions or procedures. It emphasizes the use of procedures or functions
to organize and structure code.

Object-Oriented Programming (OOP): OOP is based on the concept of objects that
encapsulate data and behavior. It promotes modularity, reusability, and data abstraction.
Languages like C++, Java, and Python support OOP.
Functional Programming: This paradigm treats computation as the evaluation of
mathematical functions. It emphasizes immutability, pure functions, and higher-order
functions. Languages like Haskell, Lisp, and Scala support functional programming.
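A short functional-style sketch in Python (the pipeline below is an invented example): data flows through pure functions and higher-order functions instead of being mutated in place.

```python
from functools import reduce

square = lambda x: x * x        # pure: same input always gives same output
is_odd = lambda x: x % 2 == 1

data = [1, 2, 3, 4, 5]
result = reduce(lambda acc, x: acc + x,              # fold the values together
                map(square, filter(is_odd, data)),   # transform, don't mutate
                0)
print(result)  # 1 + 9 + 25 = 35; `data` itself is untouched
```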

Declarative Programming: Declarative programming focuses on describing the desired
result rather than specifying the detailed steps to achieve it. Examples include SQL for
database queries and HTML/CSS for web development.

Logic Programming: Logic programming involves defining relationships and rules and
letting the program reason about queries and logical inferences. Prolog is a popular logic
programming language.
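Prolog expresses facts and rules directly; as a rough illustration only, the same kind of inference can be sketched in Python with a hand-rolled fact base (all names below are invented):

```python
# Toy sketch of logic-programming-style inference in Python.
# In Prolog this would be: grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
facts = {("parent", "tom", "bob"), ("parent", "bob", "ann")}

def grandparent(x, z):
    """X is a grandparent of Z if X is a parent of some Y who is a parent of Z."""
    return any(("parent", x, y) in facts and ("parent", y, z) in facts
               for (_, _, y) in facts)

print(grandparent("tom", "ann"))  # prints True
```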

Concurrent Programming: Concurrent programming deals with handling multiple tasks or
processes that execute concurrently or in parallel. Languages like Go and Erlang provide
built-in support for concurrency.

By supporting multiple paradigms, programming languages can address different problem
domains and allow developers to choose the most appropriate style for a given task. This
flexibility enables the combination of different programming techniques within a single
program, leading to more expressive and maintainable code. It also promotes code reuse
and interoperability between different paradigms, as developers can leverage the
strengths of each paradigm to solve specific challenges.
Programming Paradigm hierarchy
The concept of a programming paradigm hierarchy refers to the organization and
relationship between different programming paradigms based on their characteristics and
capabilities. It provides a way to understand how various paradigms relate to each other
and how they build upon or differ from one another in terms of abstraction, data handling,
control flow, and programming concepts.

While there is no universally accepted hierarchy, here is a general representation of the
programming paradigm hierarchy:
Imperative Programming: Imperative programming is considered the foundational
paradigm and encompasses procedural programming. It focuses on specifying a sequence
of instructions that the computer must execute to achieve a desired outcome. It involves
mutable state and explicit control flow. Procedural programming, such as C, Pascal, and
Fortran, falls within this category.

Structured Programming: Structured programming builds upon imperative programming
and emphasizes the use of structured control flow constructs like loops and conditionals. It
aims to improve code readability and maintainability by using procedures, functions, and
modules for organizing and structuring code. Languages like C, Pascal, and Python support
structured programming.
Object-Oriented Programming (OOP): Object-oriented programming introduces the
concept of objects that encapsulate data and behavior. It focuses on data abstraction,
encapsulation, inheritance, and polymorphism. OOP allows for modular and reusable code
and supports concepts like classes, objects, and inheritance. Languages like Java, C++, and
Python support OOP.

Functional Programming: Functional programming treats computation as the evaluation of
mathematical functions. It emphasizes immutability, pure functions, higher-order
functions, and declarative programming. Functional programming avoids mutable state
and emphasizes data transformations. Languages like Haskell, Lisp, and Scala support
functional programming.
Logic Programming: Logic programming focuses on defining relationships and rules using
logical formulas. It uses logical inference to query and reason about these relationships.
Prolog is a popular logic programming language.

Concurrent Programming: Concurrent programming deals with handling multiple tasks or
processes that execute concurrently or in parallel. It addresses synchronization,
communication, and coordination among concurrent processes. Languages like Go, Erlang,
and Java (with concurrency libraries) provide support for concurrent programming.

The hierarchy represents a general progression from simpler paradigms to more complex
and expressive ones. Each subsequent paradigm builds upon and extends the capabilities
of the previous paradigms, offering different abstractions and concepts for solving
programming problems.

It's important to note that languages and frameworks often support multiple paradigms,
allowing developers to combine elements from different paradigms within a single
program. This flexibility allows developers to choose the most suitable approach for a
given problem and leverage the strengths of different paradigms to write more efficient
and maintainable code.

Imperative Paradigm: Procedural, Object-Oriented and Parallel processing


The imperative paradigm is a programming paradigm that focuses on specifying a
sequence of instructions or statements that the computer must execute to achieve a
desired outcome. It involves describing the steps or procedures to be followed in order to
solve a problem. The imperative paradigm is characterized by mutable state and explicit
control flow.

Procedural Programming:
Procedural programming is a specific form of the imperative paradigm that organizes code
into procedures or subroutines. It emphasizes the use of procedures or functions, which
are named blocks of code that can be called and executed from different parts of the
program. Procedural programming promotes code modularity, reusability, and structured
control flow using constructs like loops and conditionals. Languages like C, Pascal, and
Fortran are examples of procedural programming languages.
Object-Oriented Programming (OOP):
Object-oriented programming (OOP) extends the imperative paradigm by introducing the
concept of objects. In OOP, objects are entities that encapsulate data (attributes) and
behavior (methods or functions). OOP emphasizes concepts such as data abstraction,
encapsulation, inheritance, and polymorphism. It allows for modular and reusable code
through the use of classes, which define the blueprint for creating objects. Languages like
Java, C++, and Python are examples of languages that support object-oriented
programming.
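A minimal Python sketch of the OOP concepts just described (the Shape hierarchy is an invented example): encapsulated state, inheritance from a common base class, and polymorphic dispatch of the same method name.

```python
class Shape:
    def area(self):                 # interface shared by all shapes
        raise NotImplementedError

class Rectangle(Shape):             # inheritance from Shape
    def __init__(self, w, h):
        self._w, self._h = w, h     # encapsulated attributes
    def area(self):
        return self._w * self._h

class Circle(Shape):
    def __init__(self, r):
        self._r = r
    def area(self):
        return 3.14159 * self._r ** 2

shapes = [Rectangle(3, 4), Circle(1)]
# Polymorphism: the same call resolves to a different method per object.
print([round(s.area(), 2) for s in shapes])  # prints [12, 3.14]
```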

Parallel Processing:
Parallel processing is a concept that refers to the simultaneous execution of multiple tasks
or processes. It involves dividing a problem into smaller subproblems that can be executed
concurrently on multiple processors or cores. The goal of parallel processing is to improve
performance and efficiency by exploiting the available computational resources. Parallel
processing can be achieved through various techniques, such as multi-threading,
multiprocessing, and distributed computing. Languages like Go, Erlang, and Java (with
concurrency libraries) provide support for parallel processing.
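As a sketch of the divide-and-combine pattern described above, Python's standard concurrent.futures module can split a computation across workers (note: in CPython, threads overlap I/O-bound work but the GIL limits CPU parallelism; a ProcessPoolExecutor would be the usual choice for CPU-bound subproblems):

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    return sum(chunk)

# Divide the problem into subproblems that can run concurrently.
chunks = [range(0, 500), range(500, 1000)]
with ThreadPoolExecutor(max_workers=2) as pool:
    total = sum(pool.map(partial_sum, chunks))  # combine partial results

print(total)  # prints 499500, same as sum(range(1000))
```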

In summary, the imperative paradigm focuses on specifying a sequence of instructions to
be executed by the computer. Procedural programming organizes code into procedures or
subroutines, while object-oriented programming introduces the concept of objects for data
encapsulation and modular code. Parallel processing allows for the simultaneous
execution of multiple tasks or processes to improve performance. These concepts and
paradigms provide different approaches and techniques for structuring and solving
problems within the imperative programming paradigm

Declarative programming paradigm: Logic, Functional and Database processing

Declarative programming is a programming paradigm that focuses on describing
what needs to be achieved rather than how to achieve it. It emphasizes the use of
declarative statements or expressions that specify the desired result or outcome, leaving
the details of how the computation is carried out to the underlying system or interpreter.

Declarative programming consists of several sub-paradigms, including logic programming,
functional programming, and database processing:

Logic Programming:
Logic programming is a declarative programming paradigm that is based on formal logic. It
involves defining relationships, rules, and constraints using logical formulas. The
programmer specifies a set of logical rules, and the program uses logical inference to query
and reason about these rules. The most well-known logic programming language is Prolog,
which provides mechanisms for defining relations and conducting logical queries.

Functional Programming:
Functional programming is another declarative programming paradigm that treats
computation as the evaluation of mathematical functions. It emphasizes the use of pure
functions, which have no side effects and always produce the same output for the same
input. Functional programming promotes immutability, higher-order functions, and the
composition of functions to achieve desired results. Languages like Haskell, Lisp, and Scala
support functional programming.

Database Processing:
Database processing is a specific application of declarative programming that deals with
manipulating and querying databases. SQL (Structured Query Language) is a common
language used in database processing, which allows programmers to declaratively specify
operations like querying, inserting, updating, and deleting data from databases. In SQL, the
programmer describes the desired results and lets the database management system
(DBMS) handle the optimization and execution details.
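As an illustration using Python's built-in sqlite3 module (the table and data are invented), the SELECT statement declares which rows are wanted and the engine decides how to fetch them:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT, marks INTEGER)")
conn.executemany("INSERT INTO students VALUES (?, ?)",
                 [("asha", 82), ("ravi", 67), ("mira", 91)])

# Declarative: we state the condition and ordering, not the lookup steps.
rows = conn.execute(
    "SELECT name FROM students WHERE marks > 80 ORDER BY name").fetchall()
print(rows)  # prints [('asha',), ('mira',)]
conn.close()
```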

In all of these declarative programming sub-paradigms, the focus is on expressing the
desired outcome or relationship rather than specifying a step-by-step procedure. The
systems or interpreters responsible for executing declarative programs handle the details
of how to achieve the desired result efficiently.

Declarative programming allows for concise and expressive code, code reuse, and a higher
level of abstraction, making programs more maintainable and easier to reason about.
However, it may have performance implications, as the underlying system must determine
the most efficient way to execute the declarative statements or queries.
Machine Codes – Procedural and Object Oriented Programming
Machine code, also known as machine language, is the lowest level of programming
language that can be directly executed by a computer's processor. It consists of binary
instructions that represent specific operations and data manipulations understood by the
computer's hardware. Machine code instructions are specific to a particular computer
architecture or processor.

Procedural Programming with Machine Code:


In procedural programming, machine code instructions are used to write programs that
follow a procedural structure. Procedural programming focuses on breaking down a
problem into a series of procedures or functions, which are then executed sequentially.
Each procedure or function contains a set of machine code instructions that perform a
specific task.

In procedural programming with machine code, the programmer directly writes or
manipulates the machine code instructions to implement the desired functionality. The
programmer needs to understand the low-level details of the computer's architecture,
instruction set, and memory layout to write efficient and correct code.

Object-Oriented Programming with Machine Code:


Object-oriented programming (OOP) is a higher-level programming paradigm that
emphasizes objects as the fundamental building blocks of programs. OOP provides
concepts such as classes, objects, encapsulation, inheritance, and polymorphism. While
machine code is not inherently object-oriented, it can still be used to implement object-
oriented programming principles at a lower level.

In an object-oriented programming approach with machine code, the programmer can
design and implement their own object-oriented system using machine code instructions.
This involves designing memory layouts, defining structures for objects, implementing
inheritance and polymorphism mechanisms manually, and managing method dispatching.

However, implementing object-oriented programming directly with machine code can be
complex and error-prone, as it requires handling memory management, vtables, and other
low-level details manually. This approach is rarely used in practice due to the availability
of high-level programming languages and compilers that abstract away these low-level
details.

In summary, machine code can be used in both procedural and object-oriented
programming, but it requires a deep understanding of the computer's architecture and
instruction set. While procedural programming with machine code focuses on sequential
execution of procedures, object-oriented programming with machine code would involve
designing and implementing object-oriented concepts manually. However, modern
programming languages and compilers provide higher-level abstractions that make
procedural and object-oriented programming more accessible and efficient.

Suitability of Multiple paradigms in the programming language

The suitability of multiple paradigms in a programming language refers to the
extent to which the language supports and integrates different programming paradigms
effectively. It assesses how well a programming language accommodates the principles
and concepts of various paradigms and allows developers to seamlessly use multiple
paradigms within a single codebase. The suitability of multiple paradigms can have several
advantages:

Flexibility and Expressiveness: Supporting multiple paradigms provides developers with a
wider range of tools and techniques to solve problems. Different paradigms excel in
different problem domains, and having multiple paradigms at their disposal allows
developers to choose the most suitable approach for a given task. This flexibility and
expressiveness enable developers to write concise and efficient code.

Code Reusability and Interoperability: Multiple paradigms often have their own libraries,
frameworks, and ecosystems. By supporting multiple paradigms, a programming language
allows developers to leverage existing code and libraries from different paradigms. This
promotes code reusability and interoperability, enabling developers to integrate different
components and systems seamlessly.

Problem-Specific Solutions: Some paradigms are better suited for specific problem
domains. For example, functional programming is well-suited for mathematical
calculations and data transformations, while object-oriented programming is effective for
modeling complex systems. By supporting multiple paradigms, a programming language
enables developers to use the most appropriate paradigm for a given problem, leading to
more efficient and maintainable solutions.

Learning and Transition: Supporting multiple paradigms in a programming language
benefits developers by allowing them to learn and practice different programming styles.
It broadens their skill set and enhances their understanding of different programming
concepts. Additionally, having multiple paradigms within a language makes it easier for
developers to transition between projects or teams that use different paradigms.

Language Evolution and Innovation: The ability to incorporate multiple paradigms in a
programming language facilitates language evolution and innovation. By adopting
concepts and ideas from different paradigms, a language can evolve to meet the changing
needs of the developer community and support emerging trends in software development.

However, it's important to note that incorporating multiple paradigms in a programming
language can also introduce complexity. Developers need to carefully consider the trade-
offs and design decisions associated with supporting multiple paradigms. Striking a
balance between providing flexibility and maintaining language consistency can be a
challenge.

Overall, the suitability of multiple paradigms in a programming language provides
developers with flexibility, expressiveness, code reusability, and the ability to choose the
most appropriate approach for solving different problems. It empowers developers to
write efficient and maintainable code and encourages innovation and growth within the
programming community.

Subroutine:
A subroutine is a named sequence of instructions within a program that performs a
specific task. It is also known as a function or procedure. Subroutines help in organizing
code, promoting code reusability, and improving code readability. When a subroutine is
called, the program jumps to the subroutine's location, executes its instructions, and
returns to the point of the program from where it was called.

Method Call Overhead:


Method call overhead refers to the additional time and resources required to invoke a
method or function in an object-oriented programming language. When a method is called,
there is a certain amount of overhead involved in setting up the call, passing arguments,
and returning results. This overhead includes tasks such as pushing arguments onto the
stack, saving registers, and managing the call stack. While the overhead is typically small
and negligible for most applications, it becomes more significant in high-performance
scenarios or when calling methods frequently in tight loops.
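One rough way to observe this overhead is to time an inline expression against an equivalent function call with Python's standard timeit module (the add function is an invented example; absolute timings vary by machine, so no expected numbers are shown):

```python
import timeit

def add(a, b):
    return a + b

# The call version pays for frame setup and argument passing each iteration.
inline = timeit.timeit("x = 1 + 2", number=100_000)
called = timeit.timeit("x = add(1, 2)", globals=globals(), number=100_000)
print(f"inline: {inline:.4f}s  call: {called:.4f}s")
```

On CPython the called version is typically (though not guaranteed to be) slower, which is exactly the per-call overhead described above.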

Dynamic Memory Allocation for Message and Object Storage:


In object-oriented programming, objects are instances of classes that encapsulate data and
behavior. Dynamic memory allocation is often used to allocate memory for objects during
runtime. When an object is created, memory is dynamically allocated from the heap to
store the object's data. This dynamic memory allocation allows objects to have a flexible
lifetime and enables the creation and destruction of objects as needed.
Message passing is a mechanism used in object-oriented programming to invoke methods
or communicate between objects. When a message is sent to an object, the object's method
is invoked to handle the message. Depending on the programming language and
implementation, the message might contain information such as the name of the method
and the arguments to be passed.

Dynamic memory allocation for message and object storage involves the allocation and
deallocation of memory during runtime, as objects are created, used, and destroyed. This
flexibility in memory allocation allows for dynamic object creation, polymorphism, and
memory management.
However, dynamic memory allocation comes with additional overhead compared to static
memory allocation. There is a cost associated with allocating and deallocating memory,
and improper memory management can lead to memory leaks or fragmentation. Efficient
memory allocation strategies and techniques, such as pooling, garbage collection, or smart
pointers, are often employed to optimize memory usage and minimize overhead in
dynamic memory allocation scenarios.

Overall, subroutines, method call overhead, and dynamic memory allocation for message
and object storage are important concepts in programming that help organize code, enable
code reuse, and provide flexibility in memory management. Understanding these concepts
is crucial for writing efficient and maintainable code in procedural and object-oriented
programming paradigms
Dynamically dispatched message calls and direct procedure call overheads

Dynamically Dispatched Message Calls:


In object-oriented programming, dynamically dispatched message calls refer to the
mechanism of invoking methods or functions on objects at runtime based on the actual
type of the object. When a message is sent to an object, the runtime system determines the
appropriate method to be called based on the object's dynamic type or class hierarchy.

Dynamically dispatched message calls involve a level of indirection and typically incur
some overhead compared to direct procedure calls. The overhead is due to the need for
runtime lookup and method resolution to determine the correct method implementation
to be invoked. This lookup process involves traversing the object's class hierarchy and
finding the appropriate method implementation based on the dynamic type of the object.

The overhead associated with dynamically dispatched message calls can vary depending
on factors such as the programming language, the complexity of the class hierarchy, and
the efficiency of the runtime system. However, modern object-oriented programming
languages and runtime systems employ various optimizations, such as caching method
tables or using virtual function tables (vtables), to reduce the overhead of dynamic
dispatch.
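A minimal Python sketch of dynamic dispatch (the class names are illustrative): the method implementation actually executed is chosen at runtime from the object's dynamic type, not from the variable that holds it.

```python
class Shape:
    def area(self):
        raise NotImplementedError

class Square(Shape):
    def __init__(self, side):
        self.side = side
    def area(self):              # chosen at runtime for Square objects
        return self.side ** 2

class Circle(Shape):
    def __init__(self, r):
        self.r = r
    def area(self):              # chosen at runtime for Circle objects
        return 3.14159 * self.r ** 2

# The same message, area(), is dispatched to a different method
# implementation depending on each object's dynamic type.
shapes = [Square(2), Circle(1)]
areas = [s.area() for s in shapes]
print(areas)                     # -> [4, 3.14159]
```

The lookup that selects `Square.area` versus `Circle.area` for each element of the list is the runtime method resolution whose cost the section describes.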

Direct Procedure Call Overheads:


Direct procedure calls refer to the direct invocation of procedures or functions without
involving any dynamic dispatch mechanism. In direct procedure calls, the address of the
function is known at compile time, allowing the program to directly jump to the memory
location of the function and execute its instructions.

Direct procedure calls typically have lower overhead compared to dynamically dispatched
message calls. The direct nature of the call avoids the need for runtime method resolution
and lookup, reducing the indirection and associated overhead. Direct procedure calls have
a more straightforward and efficient execution path since the target procedure's address is
known in advance.

However, it's important to note that the overhead of direct procedure calls can still exist
due to factors such as argument passing, stack manipulation, and context switching. The
specific overhead may vary depending on the programming language, the calling
convention used, and the underlying hardware architecture.

In general, dynamically dispatched message calls introduce a level of indirection and


overhead due to the runtime lookup and method resolution required. On the other hand,
direct procedure calls have lower overhead as they directly invoke functions without the
need for runtime lookup. The choice between dynamically dispatched message calls and
direct procedure calls depends on the specific requirements of the application, the level of
polymorphism needed, and the performance considerations

Object Serialization
Object serialization refers to the process of converting an object's state into a
format that can be stored, transmitted, or reconstructed later. It involves transforming the
object and its associated data into a sequence of bytes, which can be written to a file, sent
over a network, or stored in a database. The reverse process, where the serialized data is
used to reconstruct the object, is called deserialization.

Object serialization is primarily used for two purposes:

Persistence: Object serialization allows objects to be stored persistently, meaning they can
be saved to a file or database and retrieved later. This enables applications to preserve the
state of objects across multiple program executions or to transfer objects between
different systems.

Communication: Serialized objects can be sent over a network or transferred between


different processes or systems. This is particularly useful in distributed systems or client-
server architectures where objects need to be exchanged between different components or
across different platforms.

During object serialization, the object's state, which includes its instance variables, is
transformed into a serialized form. This process may involve encoding the object's data,
along with information about its class structure and metadata. The serialized data is
typically represented as a sequence of bytes or a structured format like XML or JSON.
Some programming languages and frameworks provide built-in support for object
serialization, offering libraries and APIs that handle the serialization and deserialization
process automatically. These libraries often provide mechanisms to control serialization,
such as excluding certain fields, customizing serialization behavior, or implementing
custom serialization logic.

However, not all objects are serializable by default. Certain object attributes, such as open
file handles, network connections, or transient data, may not be suitable for serialization.
In such cases, specific measures need to be taken to handle or exclude these attributes
during serialization.
Object serialization is a powerful mechanism that facilitates data storage, communication,
and distributed computing. It allows objects to be easily persisted or transmitted across
different systems, preserving their state and enabling seamless integration between
heterogeneous environments
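In Python, the standard-library pickle module performs exactly this serialize/deserialize round trip (a minimal sketch; the Point class is illustrative):

```python
import pickle

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(3, 4)

# Serialization: the object's state is converted to a byte sequence
# that could be written to a file or sent over a network.
data = pickle.dumps(p)
print(type(data))            # -> <class 'bytes'>

# Deserialization: the byte sequence is used to reconstruct
# an equivalent object.
q = pickle.loads(data)
print(q.x, q.y)              # -> 3 4
```

For exchange with non-Python systems, a structured text format such as JSON (via the `json` module) is typically used instead of pickle's Python-specific byte format.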

parallel Computing

Parallel computing refers to the use of multiple processors or computing


resources to solve a computational problem or perform a task simultaneously. It involves
breaking down a problem into smaller parts that can be solved concurrently or in parallel,
thus achieving faster execution and increased computational power.
Parallel computing can be applied to various types of problems, ranging from
computationally intensive scientific simulations and data analysis to web servers handling
multiple requests simultaneously. It is particularly beneficial for tasks that can be divided
into independent subtasks that can be executed concurrently.

There are different models and approaches to parallel computing:

Task Parallelism: In task parallelism, the problem is divided into multiple independent
tasks or subtasks that can be executed concurrently. Each task is assigned to a separate
processing unit or thread, allowing multiple tasks to be processed simultaneously. Task
parallelism is well-suited for irregular or dynamic problems where the execution time of
each task may vary.

Data Parallelism: Data parallelism involves dividing the data into smaller chunks and
processing them simultaneously on different processing units. Each unit operates on its
portion of the data, typically applying the same computation or algorithm to each chunk.
Data parallelism is commonly used in scientific simulations, image processing, and
numerical computations.

Message Passing: Message passing involves dividing the problem into smaller tasks that
communicate and exchange data by sending messages to each other. Each task operates
independently and exchanges information with other tasks as needed. This approach is
commonly used in distributed systems and parallel computing frameworks such as MPI
(Message Passing Interface).

Shared Memory: Shared memory parallelism involves multiple processors or threads


accessing and modifying a shared memory space. This model allows parallel tasks to
communicate and synchronize by reading and writing to shared memory locations.
Programming models such as OpenMP and Pthreads utilize shared memory parallelism.
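The data-parallel idea can be sketched with Python's standard-library concurrent.futures module (an illustration only; for CPU-bound work a ProcessPoolExecutor would normally replace the thread pool, since threads in CPython share one interpreter lock):

```python
from concurrent.futures import ThreadPoolExecutor

def square(n):
    # Each call is an independent subtask; the pool runs them concurrently.
    return n * n

data = [1, 2, 3, 4, 5]

# map() divides the data among the workers: the same computation is
# applied to each element, and results come back in input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, data))

print(results)               # -> [1, 4, 9, 16, 25]
```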

Parallel computing offers several benefits, including:

Increased speed: By dividing the problem into smaller parts and executing them
simultaneously, parallel computing can significantly reduce the overall execution time and
achieve faster results.

Enhanced scalability: Parallel computing allows for the efficient utilization of multiple
processing units or resources, enabling systems to scale and handle larger workloads.

Improved performance: Parallel computing enables the execution of complex


computations and simulations that would otherwise be infeasible or take an impractical
amount of time with sequential processing.

However, parallel computing also introduces challenges such as load balancing, data
synchronization, and communication overhead. Proper design and optimization
techniques are essential to ensure efficient and effective parallel execution.

Overall, parallel computing is a powerful approach for achieving high-performance


computing and tackling complex problems by harnessing the capabilities of multiple
processing units or resources. It plays a crucial role in various domains, including scientific
research, data analysis, artificial intelligence, and large-scale computing systems

2. Overview

Structured programming was defined as a method used to minimize complexity that


uses:
1. Top-down analysis for problem solving :-

Top-down analysis includes solving the problem and providing


instructions for every step. When developing a solution is complicated,
the right approach is to divide a large problem into several smaller
problems and tasks.
2. Modularization for program structure and organization :

Modular programming is a method of organizing the instructions of a


program. Large programs are divided into smaller sections called
modules, subroutines, or subprograms. Each subroutine is in charge of a
specific job.
3. Structured code for the individual modules:
Structured coding relates to division of modules into set of
instructions organized within control structures. A control structure
defines the order in which a set of instructions are executed. The
statements within a specific control structure are executed:
• sequentially – denotes in which order the controls are
executed. They are executed one after the other in the
exact order they are listed in the code.
• conditionally – allows choosing which set of controls is
executed. Based on the needed condition, one set of commands
is chosen while others aren’t executed.
• repetitively – allows the same controls to be performed over
and over again. Repetition stops when it meets certain terms
or performs a defined number of iterations.
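The three execution orders above can be shown together in a few lines of Python (an illustrative sketch):

```python
# Sequence: statements run one after the other, in listed order.
total = 0
count = 0

# Repetition: the loop body is performed over and over,
# once for each item in the list.
for n in [4, 7, 10, 3]:
    # Selection: one set of statements is chosen based on a condition.
    if n % 2 == 0:
        total += n       # executed only for even numbers
    count += 1

print(total, count)          # -> 14 4
```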
3. Components
•3.1 Structograms – graphical representation of structured programming (flowcharts).
•3.2 Subroutine – a sequence of program instructions packaged as a unit so that together they perform a specific task.
•3.3 Block – a section of code grouped together, consisting of one or more declarations and statements (e.g. { … }).
•3.4 Indentation – another typical characteristic of structured programming is the indent style applied to a block to display program structure. In most programming languages indentation is not a requirement, but when used, code is easier to read and follow.
•3.5 Control structure – sequence, selection and iteration.
Component
•Structograms or Nassi–Shneiderman -graphical representation of structured
programming.
•Structograms can be compared to flowcharts.
•Nassi–Shneiderman diagrams have no representation for a goto statement.

• Structograms use the following diagrams:
1.process blocks - Process blocks represent the simplest actions and don’t
require analysis. Actions are performed block by block.
2. branching blocks
Branching blocks are of two types – True/False or Yes/No block and
multiple branching block.
3. Testing loops
Testing loops allow the program to repeat one or many processes
until a condition is fulfilled. There are two types of testing loops –
test first and test last blocks – and the order in which the steps are
performed is what makes them different.
Advantages of structured programming are:

•Programs are more easily and more quickly written.


•Programs have greater reliability.
•Programs require less time to debug and test.
•Programs are easier to maintain.
Control structure – sequence, selection ,iteration and
recursion. (example for Control structure)

Recursion:
Recursion"; a statement is executed by repeatedly calling itself until
termination conditions are met. While similar in practice to iterative loops,
recursive loops may be more computationally efficient, and are
implemented differently as a cascading stack.
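A classic sketch of the difference: factorial computed recursively (the function calls itself until the termination condition n <= 1 is met, building a cascading stack of calls) and iteratively, for comparison:

```python
def factorial_recursive(n):
    # Termination condition: stop the cascading calls at 1.
    if n <= 1:
        return 1
    return n * factorial_recursive(n - 1)

def factorial_iterative(n):
    # The equivalent iterative loop produces the same result
    # without growing the call stack.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(factorial_recursive(5), factorial_iterative(5))   # -> 120 120
```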
Graphical representation of the three basic patterns — sequence, selection, and repetition
Control Structure - DECISION MAKING (PYTHON )
•Decision making statements in programming languages decides the direction of
flow of program execution. Decision making statements available in python are:
if statement :
It is used to decide whether a certain statement or block of statements will be
executed or not i.e if a certain condition is true then a block of statement is executed
otherwise not.

Syntax:
if condition:
# Statements
to execute if
# condition is
true
Example :
i = 10
if (i > 15):
    print("10 is less than 15")
print("I am Not in if")
• if..else statements:
We can use the else statement with if statement to execute a block of code
when the condition is false.
Syntax:
if (condition):
    # Executes this block if
    # condition is true
else:
    # Executes this block if
    # condition is false
Example :
i = 20
if (i < 15):
    print("i is smaller than 15")
    print("i'm in if Block")
else:
    print("i is greater than 15")
    print("i'm in else Block")
print("i'm not in if and not in else Block")
•nested if statements
Python allows us to nest if statements within if statements. i.e, we can place
an if statement inside another if statement.
Syntax:
if (condition1):
    # Executes when condition1 is true
    if (condition2):
        # Executes when condition2 is true
    # inner if block ends here
# outer if block ends here
Example : Nested if else
i = 10
if (i == 10):
    # First if statement
    if (i < 15):
        print("i is smaller than 15")
    # Nested if statement
    # Will only be executed if the statement above it is true
    if (i < 12):
        print("i is smaller than 12 too")
    else:
        print("i is greater than 15")
if-elif-else ladder
•Here, a user can decide among multiple options. The if statements are
executed from the top down. As soon as one of the conditions controlling
the if is true, the statement associated with that if is executed, and the
rest of the ladder is bypassed. If none of the conditions is true, the
final else statement is executed.
Syntax:-
if (condition):
    statement
elif (condition):
    statement
else:
    statement
example
i = 20
if (i == 10):
    print("i is 10")
elif (i == 15):
    print("i is 15")
elif (i == 20):
    print("i is 20")
else:
    print("i is not present")
CONDITION: SIMPLE IF
Syntax:
if test expression:
    statement(s)

CONDITION: IF...ELSE
Syntax:
if test expression:
    Body of if
else:
    Body of else

CONDITION: IF...ELIF...ELSE
Syntax:
if test expression:
    Body of if
elif test expression:
    Body of elif
else:
    Body of else

CONDITION: NESTED IF
Syntax:
if test expression:
    if test expression:
        Body of if
    else:
        Body of else
else:
    Body of else
Conditional Expression

A conditional expression evaluates one of two expressions based on a
condition. It is written using if and else combined with expressions.

Syntax:
expression if Boolean-expression else expression

Example: Biggest of two numbers
num1 = 23
num2 = 15
big = num1 if num1 > num2 else num2
print("the biggest number is", big)

Even or odd
num = 7
print("num is even" if num % 2 == 0 else "num is odd")
Iteration – Loops
•Python has two primitive loop commands:
•while loops
•for loops
The while Loop

•With the while loop we can execute a set of statements as long as a
condition is true.

Example
•Print i as long as i is less than 6:
i = 1
while i < 6:
    print(i)
    i += 1

Note: remember to increment i, or else the loop will continue forever.


The break Statement

With the break statement we can stop the loop even if the while condition is true:

Example
Exit the loop when i is 3:

i = 1
while i < 6:
    print(i)
    if i == 3:
        break
    i += 1
The continue Statement

With the continue statement we can stop the current iteration, and
continue with the next:

Example
Continue to the next iteration if i is 3:
i = 0
while i < 6:
    i += 1
    if i == 3:
        continue
    print(i)
The else Statement

•With the else statement we can run a block of code once when the
condition no longer is true:
•Example
•Print a message once the condition is false:
i = 1
while i < 6:
    print(i)
    i += 1
else:
    print("i is no longer less than 6")
For Loops

•A for loop is used for iterating over a sequence (that is either a list, a
tuple, a dictionary, a set, or a string).
•This is less like the for keyword in other programming languages, and
works more like an iterator method as found in other object-orientated
programming languages.
•With the for loop we can execute a set of statements, once for each item in
a list, tuple, set etc.
Example
Print each fruit in a fruit list:
fruits = ["apple", "banana", "cherry"]
for x in fruits:
    print(x)

The for loop does not require an indexing variable to be set beforehand.
Looping Through a String

•Even strings are iterable objects; they contain a sequence of characters:


Example
•Loop through the letters in the word "banana":
for x in "banana":
    print(x)
The break Statement

With the break statement we can stop the loop before it has looped
through all the items: Example
Exit the loop when x is "banana":

fruits = ["apple", "banana", "cherry"]
for x in fruits:
    print(x)
    if x == "banana":
        break
• Example
Exit the loop when x is "banana", but this time the break comes
before the print:
fruits = ["apple", "banana", "cherry"]
for x in fruits:
    if x == "banana":
        break
    print(x)
The continue Statement
With the continue statement we can stop the current iteration of the loop, and
continue with the next:
Example
Do not print banana:

fruits = ["apple", "banana", "cherry"]
for x in fruits:
    if x == "banana":
        continue
    print(x)
The range() Function

•To loop through a set of code a specified number of times, we can use
the range() function.
•The range() function returns a sequence of numbers, starting from
0 by default, and increments by 1 (by default), and ends at a
specified number.
• Example
•Using the range() function:
for x in range(6):
    print(x)
•Note that range(6) is not the values of 0 to 6, but the values 0 to 5.
•The range() function defaults to 0 as a starting value, however it is
possible to specify the starting value by adding a parameter: range(2, 6),
which means values from 2 to 6 (but not including 6):
• Example
Using the start parameter:
for x in range(2, 6):
    print(x)
•The range() function defaults to increment the sequence by 1,
however it is possible to specify the increment value by adding a third
parameter: range(2, 30, 3):
• Example
• Increment the sequence with 3 (default is 1):
for x in range(2, 30, 3):
    print(x)
Else in For Loop

•The else keyword in a for loop specifies a block of code to be executed


when the loop is finished:
• Example
•Print all numbers from 0 to 5, and print a message when the loop has ended:
for x in range(6):
    print(x)
else:
    print("Finally finished!")
Nested Loops
A nested loop is a loop inside a loop.
•The "inner loop" will be executed one time for each iteration of the "outer loop":
• Example
•Print each adjective for every fruit:
• adj = ["red", "big", "tasty"]
fruits = ["apple", "banana", "cherry"]

for x in adj:
    for y in fruits:
        print(x, y)
Reference : https://www.w3schools.com/python/python_for_loops.asp
Note:
What is meant by structured language?
• C is called a structured programming language because to solve a large problem, C
programming language divides the problem into smaller modules called functions
or procedures each of which handles a particular responsibility. The program which
solves the entire problem is a collection of such functions

Examples of structured programming languages are C, C++, C#, Java, Perl,
Ruby, PHP, ALGOL, Pascal, PL/I and Ada.

What is unstructured programming language?


• An unstructured program is a procedural program – the statements are executed in
sequence as written. But this type of programming uses the goto statement. A goto
statement allows control to be passed to any other place in the program. ... This
means that it is often difficult to understand the logic of such a program.

Examples of unstructured programming languages are JOSS, FOCAL, MUMPS,
TELCOMP and COBOL.
Procedural Programming Paradigm

Session 6 – 10 covers the following Topics:-


• Procedural Programming Paradigm
• Routines, Subroutines, functions
• Using Functions in Python
• logical view, control flow of procedural programming in various aspects
• Other languages: Bliss, ChucK, Matlab
• Demo: creating routines and subroutines using functions in Python
• Lab 2: Procedural Programming
TextBook: Shalom, Elad. A Review of Programming Paradigms
Throughout the History: With a Suggestion Toward a Future Approach,
Kindle Edition
Procedure Oriented Programming(POP):-

• High level languages such as COBOL, FORTRAN and C, is commonly known


as procedure oriented programming(POP). In the procedure oriented
programming, program is divided into sub programs or modules and then
assembled to form a complete program. These modules are called
functions.
• The problem is viewed as a sequence of things to be done.
• The primary focus is on functions.
• Procedure-oriented programming basically consists of writing a list of
instructions for the computer to follow and organizing these instructions
into groups known as functions.
•In a multi-function program, many important data items are placed as
global so that they may be accessed by all functions. Each function may
have its own local data. If a function made any changes to global data,
these changes will reflect in other functions. Global data are more unsafe
to an accidental change by a function. In a large program it is very difficult
to identify what data is used by which function.

•This approach does not model real world problems. This is because
functions are action-oriented and do not really correspond to the
elements of the problem.
Typical structure of procedure-oriented program
Relationship of data and functions in procedural programming
Characteristics of Procedure-Oriented Programming

•Emphasis is on doing things.


•Large programs are divided into smaller programs known as functions.
•Most of the functions share global data.
•Data move openly around the system from function to function.
•Functions transform data from one form to another.
•Employs top-down approach in program design.
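These characteristics can be seen in a small procedural sketch (the names are illustrative): global data shared by several functions, with each function transforming that data in turn.

```python
# Global data, shared by all functions — a hallmark of the
# procedure-oriented style, and also its main safety weakness:
# any function may change it.
scores = [70, 85, 90]

def add_score(s):
    scores.append(s)          # a function modifying the global data

def average():
    return sum(scores) / len(scores)

# Top-down flow: the "main" code calls one function after another.
add_score(95)
print(average())              # -> 85.0
```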
Logical view and control flow of POP (routine, subroutine and function)
• Procedural programming is a programming paradigm, derived from
structured programming, based on the concept of the procedure call.
Procedures, also known as routines, subroutines, or functions, simply
contain a series of computational steps to be carried out. Any given
procedure might be called at any point during a program's execution,
including by other procedures or itself.

• procedural languages generally use reserved words that act on blocks,


such as if, while, and for, to implement control flow, whereas non-
structured imperative languages use goto statements and branch tables
for the same purpose.

Note:
Subroutine:-
• Subroutines; callable units such as procedures, functions, methods, or
subprograms are used to allow a sequence to be referred to by a single
statement.
Function in python

• A function is a block of organized, reusable code that is used to perform a


single, related action. Functions provide better modularity for your
application and a high degree of code reusing.
• There are 2 types
of function
Built-in function
User defined function -User can create their own functions.
Defining a Function

• Function blocks begin with the keyword def followed by the


function name and parentheses ( ( ) ).
• Any input parameters or arguments should be placed within
these parentheses. You can also define parameters inside these
parentheses.
• The first statement of a function can be an optional statement - the
documentation string of the function or docstring.
• The code block within every function starts with a colon (:) and is
indented.
• The statement return [expression] exits a function, optionally
passing back an expression to the caller. A return statement
with no arguments is the same as return None.
Syntax:
def functionname( parameters ):
    "function_docstring"
    function_suite
    return [expression]
Example:
#function definition
def my_function():
print("Hello from a function")

# To call a function, use the function name followed by parenthesis:


my_function()
Function Arguments

You can call a function by using the following types of formal arguments −
•Required arguments
•Keyword arguments
•Default arguments
•Variable-length arguments
•Required arguments are the arguments passed to a function in correct
positional order. Here, the number of arguments in the function call should
match exactly with the function definition.

To call the function printme(), you definitely need to pass one argument,
otherwise it gives a syntax error as follows −
# Function definition is here
def printme( str ):
    "This prints a passed string into this function"
    print(str)
    return

# Now you can call printme function
printme()
When the above code is executed, it produces the following result −

Traceback (most recent call last):
  File "test.py", line 11, in <module>
    printme()
TypeError: printme() missing 1 required positional argument: 'str'
Keyword arguments
Keyword arguments are related to the function calls. When you use keyword
arguments in a function call, the caller identifies the arguments by the
parameter name.
This allows you to skip arguments or place them out of order because the Python
interpreter is able to use the keywords provided to match the values with
parameters. You can also make keyword calls to the printme() function in the
following way −

# Function definition is here
def printme( str ):
    "This prints a passed string into this function"
    print(str)
    return

# Now you can call printme function
printme( str = "My string" )

When the above code is executed, it produces the following result −

My string

Note that the order of parameters does not matter


Default arguments
•A default argument is an argument that assumes a default value if a value is not
provided in the function call for that argument. The following example gives an idea on
default arguments; it prints the default age if it is not passed −
# Function definition is here
def printinfo( name, age = 35 ):
    "This prints a passed info into this function"
    print("Name: ", name)
    print("Age ", age)
    return

# Now you can call printinfo function
printinfo( age=50, name="miki" )
printinfo( name="miki" )
When the above code is executed, it produces the following result −
Name:  miki
Age  50
Name:  miki
Age  35
Variable-length arguments

• You may need to process a function for more arguments than you
specified while defining the function. These arguments are called
variable-length arguments and are not named in the function definition,
unlike required and default arguments.
• Syntax for a function with non-keyword variable arguments is this −
def functionname([formal_args,] *var_args_tuple ):
    "function_docstring"
    function_suite
    return [expression]
An asterisk (*) is placed before the variable name that holds the values of
all nonkeyword variable arguments. This tuple remains empty if no
additional arguments are specified during the function call. Following is a
simple example −
# Function definition is here
def printinfo( arg1, *vartuple ):
    "This prints a variable passed arguments"
    print("Output is: ")
    print(arg1)
    for var in vartuple:
        print(var)
    return

# Now you can call printinfo function
printinfo( 10 )
printinfo( 70, 60, 50 )

When the above code is executed, it produces the following result −
Output is:
10
Output is:
70
60
50
The Anonymous Functions
•These functions are called anonymous because they are not declared in the
standard manner by using the def keyword. You can use the lambda keyword
to create small anonymous functions.
•Lambda forms can take any number of arguments but return just one value
in the form of an expression. They cannot contain commands or multiple
expressions.
•An anonymous function cannot be a direct call to print because lambda requires an
expression
•Lambda functions have their own local namespace and cannot access variables
other than those in their parameter list and those in the global namespace.
•Although it appears that lambda's are a one-line version of a function, they are
not equivalent to inline statements in C or C++, whose purpose is by passing
function stack allocation during invocation for performance reasons.
Syntax
• The syntax of lambda functions contains only a single statement, which is as
follows −
lambda [arg1 [,arg2,.. .argn]]:expression
•Following is an example to show how the lambda form of function works −
# Function definition is here
sum = lambda arg1, arg2: arg1 + arg2

# Now you can call sum as a function
print("Value of total : ", sum( 10, 20 ))
print("Value of total : ", sum( 20, 20 ))
When the above code is executed, it produces the following result −
Value of total :  30
Value of total :  40

The return Statement
The statement return [expression] exits a function, optionally passing back
an expression to the caller. A return statement with no arguments is the
same as return None.
All the above examples are not returning any value. You can return a value
from a function as follows −
# Function definition is here
def sum( arg1, arg2 ):
    # Add both the parameters and return them.
    total = arg1 + arg2
    print("Inside the function : ", total)
    return total

# Now you can call sum function
total = sum( 10, 20 )
print("Outside the function : ", total)
When the above code is executed, it produces the following result −
Inside the function :  30
Outside the function :  30
• Scope of Variables
•All variables in a program may not be accessible at all locations in that program.
This depends on where you have declared a variable.
•The scope of a variable determines the portion of the program where you can
access a particular identifier. There are two basic scopes of variables in Python

• Global variables
• Local variables
• Global vs. Local variables
•Variables that are defined inside a function body have a local scope, and those
defined outside have a global scope.
•This means that local variables can be accessed only inside the function in which
they are declared, whereas global variables can be accessed throughout the
program body by all functions. When you call a function, the variables declared
inside it are brought into scope. Following is a simple example −
total = 0  # This is a global variable.

# Function definition is here
def sum(arg1, arg2):
    # Add both the parameters and return them
    total = arg1 + arg2  # Here total is a local variable.
    print("Inside the function local total :", total)
    return total

# Now you can call sum function
sum(10, 20)
print("Outside the function global total :", total)
When the above code is executed, it produces the following result −
Inside the function local total : 30
Outside the function global total : 0
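Building on the scope example above, the following sketch (an addition beyond the slides) shows how the global keyword lets a function rebind the global name instead of creating a local one:

```python
total = 0  # global variable

def add_to_total(amount):
    global total            # rebind the global 'total' instead of creating a local
    total = total + amount
    return total

add_to_total(10)
add_to_total(20)
print("Global total is now:", total)  # prints: Global total is now: 30
```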
3. Object-Oriented Programming
Session 11-15 covers the following Topics:-
•Object Oriented Programming Paradigm
•Class, Objects, Instances, Methods
•Encapsulation, Data Abstraction
•Polymorphism, Inheritance
•Constructor, Destructor
•Example Languages: BETA, Cecil, Lava.
•Demo: OOP in Python
• Lab 3: Object Oriented Programming
TextBook: Shalom, Elad. A Review of Programming Paradigms
Throughout the History: With a Suggestion Toward a Future Approach
Object-Oriented Programming
•OOP treats data as a critical element in program development and does
not allow it to flow freely around the system.
•It ties data more closely to the functions that operate on it, and protects it
from accidental modification by outside functions.
•OOP allows decomposition of a problem into a number of entities called
objects, and then builds data and functions around these objects.
•The data of an object can be accessed only by the functions associated with that
object.
•Functions of one object can access the functions of other objects.
Organization of data and functions in OOP
[Figure: Objects A, B and C, each containing its own Data and Functions;
the objects communicate with one another through their functions.]
Characteristics of Object-Oriented Programming
•Emphasis is on data rather than procedure.
•Programs are divided into objects.
•Data structures are designed such that they characterize the objects.
•Functions that operate on the data of an object are tied together in the data
structure.
•Data is hidden and cannot be accessed by external functions.
•Objects may communicate with each other through functions.
•New data and functions can be added easily whenever necessary.
•Follows bottom-up approach in program design.
Object-Oriented Programming
• Definition:
It is an approach that provides a way of modularizing programs by
creating partitioned memory areas for both data and functions that can be
used as templates for creating copies of such modules on demand. Thus
an object is considered to be a partitioned area of computer memory that
stores data and a set of operations that can access that data.
Basic Concepts of Object-Oriented Programming
•Objects
•Classes
•Data Abstraction and Encapsulation
•Inheritance
•Polymorphism
•Dynamic Binding
•Message Passing
Basic Concepts of OOP continue …

• Objects
Objects are the basic run-time entities in an object-oriented system. They
may represent a person, a place, a bank account, etc. Objects take up
space in memory and have an associated address, like a structure in C.

When a program is executed, the objects interact by sending messages to one
another.
Basic Concepts of OOP continue …

• Objects

[Figure: two example objects shown side by side.
Object : CUSTOMER − DATA: AC No., Name of AC Holder, Address
Object : ACCOUNT − DATA: AC No., AC Balance, Type of Account
FUNCTIONS of the objects: Account Balance, Deposit, Withdrawal, AC Balance Display]
Basic Concepts of OOP continue …

• Classes
Classes are user-defined data types.

The entire set of data and code of an object can be made a user-defined
data type with the help of a class. Objects are variables of the type class.
Once a class has been defined, we can create any number of objects
belonging to that class. Each object is associated with the data of the type
class with which it is created.

A class is a collection of objects of similar type.
Basic Concepts of OOP continue …

• Classes
If fruit has been defined as a class, then the statement

fruit mango;

will create an object mango belonging to the class fruit.
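The fruit mango; declaration above can be sketched in Python, the demo language for this unit (the class and attribute names are illustrative):

```python
class Fruit:
    def __init__(self, name):
        self.name = name    # data tied to each object

mango = Fruit("mango")      # mango is an object (instance) of class Fruit
apple = Fruit("apple")      # any number of objects can be created from one class
print(mango.name, apple.name)  # prints: mango apple
```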
Basic Concepts of OOP continue …

• Data Abstraction and Encapsulation
o The wrapping up of data and functions into a single unit is known as
encapsulation.
o The data is not accessible to the outside world, and only those functions
which are wrapped in the class can access it.
o These functions provide the interface between the object’s data and the
program. This insulation of the data from direct access by the program is
called data hiding or information hiding.
Basic Concepts of OOP continue …

• Data Abstraction and Encapsulation

The attributes wrapped in the classes are called data members, and the
functions that operate on these data are called methods or member
functions.
Since the classes use the concept of data abstraction, they are known as
Abstract Data Types (ADTs).
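A minimal Python sketch of encapsulation and data hiding (the class and method names are illustrative; note that Python hides data through name mangling and convention rather than strict access control):

```python
class Account:
    def __init__(self, balance):
        self.__balance = balance     # double underscore: name-mangled, hidden from outside

    def deposit(self, amount):       # method: the interface to the hidden data
        self.__balance += amount

    def get_balance(self):
        return self.__balance

acc = Account(100)
acc.deposit(50)
print(acc.get_balance())   # prints: 150
# Direct access such as acc.__balance raises AttributeError: the data is insulated.
```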
Basic Concepts of OOP continue …

• Inheritance
o Inheritance is the process by which objects of one class acquire the
properties of objects of another class.
o It supports the concept of hierarchical classification.
o Each derived class shares common characteristics with the class
from which it is derived.
Property Inheritance

[Figure: inheritance hierarchy.
Bird (Attributes: Feathers, Lay eggs)
   Flying Bird → Robin, Swallow
   Non-flying Bird → Penguin, Kiwi
Each subclass has its own attributes in addition to those inherited from Bird.]
Basic Concepts of OOP continue …

• Inheritance
o Inheritance provides the idea of reusability.
o We can add additional features to an existing class without modifying it.
(By deriving a new class from the existing one. The new class will have the
combined features of both classes.)
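The bird hierarchy above can be sketched in Python (class names are illustrative): the derived class reuses the base class without modifying it and adds its own feature.

```python
class Bird:
    def has_feathers(self):
        return True          # common characteristic inherited by all birds

class FlyingBird(Bird):      # derived class: combines Bird's features with its own
    def can_fly(self):
        return True

robin = FlyingBird()
print(robin.has_feathers(), robin.can_fly())  # prints: True True
```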
Basic Concepts of OOP continue …

• Polymorphism - the ability to take more than one form
o An operation may exhibit different behaviours in different instances.
o The behaviour depends upon the types of data used in the operation.
o add(3, 5) gives 8
o add("hello", "-world") gives "hello-world"
Basic Concepts of OOP continue …

• Polymorphism - the ability to take more than one form
o The process of making an operator exhibit different behaviours in
different instances is known as operator overloading.
o << : Insertion operator
o << : Left-shift bit-wise operator
o Using a single function name to perform different types of tasks is
known as function overloading.
o add(3, 5) gives 8
o add("hello", "-world") gives "hello-world"
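In Python the add example behaves polymorphically without any extra declaration, because the + operator is already overloaded per operand type (a sketch, not from the slides):

```python
def add(a, b):
    return a + b    # '+' behaves differently depending on the operand types

print(add(3, 5))              # prints: 8            (integer addition)
print(add("hello", "-world")) # prints: hello-world  (string concatenation)
print(add([1, 2], [3]))       # prints: [1, 2, 3]    (list concatenation)
```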
Basic Concepts of OOP continue …

• Dynamic Binding
Binding refers to the linking of a procedure call to the code to be executed
in response to the call.

Dynamic binding (late binding) means that the code associated with a
given procedure call is not known until the time of the call at run-time.
It is associated with polymorphism and inheritance.
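A small Python sketch of dynamic binding (class names are illustrative): the method that actually runs is selected at call time from the object's run-time class, not from the code that makes the call.

```python
class Shape:
    def area(self):
        return 0                 # default behaviour

class Square(Shape):
    def __init__(self, side):
        self.side = side

    def area(self):              # overrides Shape.area
        return self.side * self.side

def report(shape):
    # Which area() runs is not known here; it is bound at run-time
    # based on the actual class of the object passed in.
    return shape.area()

print(report(Shape()), report(Square(4)))  # prints: 0 16
```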
Basic Concepts of OOP continue …

• Message Passing
o An OOP system consists of a set of objects that communicate with each other.
o OOP involves the following steps:
o Creating classes that define objects and their behaviour.
o Creating objects from class definitions.
o Establishing communication among objects.
o Objects communicate with one another by sending and receiving
information.
Basic Concepts of OOP continue …

• Message Passing
o A message for an object is a request for execution of a procedure.
o The receiving object will invoke the corresponding function and generate results.
o Message passing involves specifying:
o The name of the object.
o The name of the function.
o The information to be sent.
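The three parts of a message map directly onto a Python method call (a sketch; the class and method names are illustrative):

```python
class Printer:
    def show(self, text):
        return "Printing: " + text

p = Printer()
# Message passing: object name . function name ( information to be sent )
result = p.show("hello")
print(result)  # prints: Printing: hello
```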
Benefits of OOP

• Inheritance – eliminates redundant code and extends the use of existing
classes.
• We can build programs from standard working modules; there is no need
to start from scratch.
• Data hiding helps the programmer build secure programs that cannot be
invaded by code in other parts of the program.
Benefits of OOP continue …

• Multiple instances of an object can co-exist without any interference.
• It is easy to partition the work in a project based on objects.
• Object-oriented systems can be easily upgraded from small to large
systems.
• Message passing techniques for communication between objects make
the interface descriptions with external systems much simpler.
• Software complexity can be easily managed.
5. Declarative paradigm
Unit-2 (15 Sessions)
Session 6-10 covers the following Topics:-
• Definition Declarative Paradigm
• Sets of Declarative Statements
• Object Attribute and Binding behavior
• Creating Event without describing flow
• Other languages: Prolog, Z3, LINQ, SQL
• Demo: Declarative Programming in Python
• Lab 5: Declarative Programming
• TextBook: Shalom, Elad. A Review of Programming Paradigms
Throughout the History: With a Suggestion Toward a Future
Approach, Kindle Edition
1. Declarative paradigm

•Declarative programming is a programming paradigm that expresses the
logic of a computation without describing its control flow.
•Logic, functional and domain-specific languages belong to the
declarative paradigm.
Examples would be HTML, XML, CSS, SQL, Prolog, Haskell, F# and Lisp.
•Declarative code focuses on building the logic of software without
actually describing its flow. You are saying what without adding how. For
example, with HTML you use <img src="./image.jpg" /> to tell the browser
to display an image, and you don’t care how it does that.
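The what-versus-how contrast can be sketched in Python (the unit's demo language): a list comprehension states what is wanted, while the loop version spells out the control flow.

```python
numbers = [1, 2, 3, 4, 5, 6]

# Imperative style: describe HOW, step by step, with explicit control flow.
evens_imperative = []
for n in numbers:
    if n % 2 == 0:
        evens_imperative.append(n)

# Declarative style: describe WHAT is wanted, not the loop mechanics.
evens_declarative = [n for n in numbers if n % 2 == 0]

print(evens_imperative, evens_declarative)  # prints: [2, 4, 6] [2, 4, 6]
```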
1.1 History

•The two main subparadigms of declarative programming are functional
programming and logic programming.
Functional and logic programming languages are characterized by a
declarative programming style.
In logic programming languages, programs consist of logical statements,
and the program executes by searching for proofs of the statements.