UNIT3 Notes OOSD
Characteristics of OOD
Objects are abstractions of real-world or system entities and manage themselves.
Objects are independent and encapsulate state and representation information.
System functionality is expressed in terms of object services.
Shared data areas are eliminated.
Objects communicate by message passing.
Objects may be distributed and may execute sequentially or in parallel.
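The characteristics above can be sketched in a few lines of Python. This is an illustrative example (the `Account` class and its names are invented for the sketch): the object encapsulates its own state, exposes it only through services, and is driven by message passing (method calls).

```python
# Each object encapsulates its state; no shared data area exists.
class Account:
    def __init__(self, balance=0):
        self._balance = balance      # state is held inside the object

    def deposit(self, amount):       # a service offered by the object
        self._balance += amount

    def balance(self):               # state is reached only via services
        return self._balance

acct = Account()
acct.deposit(50)                     # "message passing": invoking a service
print(acct.balance())                # 50
```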
Advantages of OOD
Easier maintenance. Objects may be understood as stand-alone entities.
Objects are potentially reusable components.
For some systems, there may be an obvious mapping from real world entities to system objects.
Object Design
The object design phase determines the full definitions of the classes and associations used in the
implementation, as well as the interfaces and algorithms of the methods used to implement operations. The
object design phase adds internal objects for implementation and optimizes data structures and algorithms.
Steps of Design:
During object design, the designer must perform the following steps:
1. Combining the three models (object, dynamic and functional model) to obtain operations on
classes.
2. Design algorithms to implement operations.
3. Optimize access paths to data.
4. Implement control for external interactions
5. Adjust class structure to increase inheritance.
6. Design associations.
7. Determine object representation.
8. Package classes and associations into modules.
• After analysis, we have the object, dynamic and functional models, but the object model is the
main framework around which the design is constructed.
• The object model from analysis may not show operations. The designer must convert the
actions and activities of the dynamic model and the processes of the functional model into
operations attached to classes in the object model. Each state diagram describes the life
history of an object. A transition is a change of state of the object and maps into an
operation on the object.
• We can associate an operation with each event received by an object. In the state diagram,
the action performed by a transition depends on both the event and the state of the
object. Therefore, the algorithm implementing an operation depends on the state of the
object. If the same event can be received by more than one state of an object, then the
code implementing the algorithm must contain a case statement dependent on the state. An
event sent by an object may represent an operation on another object.
• Events often occur in pairs, with the first event triggering an action and the second event
returning the result or indicating the completion of the action. In this case, the event pair
can be mapped into an operation that performs the action and returns control, provided
that the events are on a single thread. An action or activity initiated by a transition in a
state diagram may expand into an entire DFD in the functional model. The network of
processes within the DFD represents the body of an operation.
• The flows in the diagram are intermediate values in the operation. The designer converts the
graphic structure of the diagram into a linear sequence of steps in the
algorithm. The processes in the DFD represent sub-operations. Some of them, but not
necessarily all, may be operations on the original target object or on other objects.
* If a process extracts a value from an input flow, then the input flow is the target.
* If a process has an input flow and an output flow of the same type, the input-output flow is the target.
* If a process constructs an output value from several input flows, then the operation is a class
operation on the output class.
* If a process has an input from or an output to a data store or actor, the data store or actor is the target.
What is modeling?
A model is an abstraction of something for the purpose of understanding it before building it, because the real
systems that we want to study are generally very complex. In order to understand a real system, we have
to simplify it. So a model is an abstraction that hides the non-essential characteristics of a system
and highlights those characteristics which are pertinent to understanding it.
Most modeling techniques used for analysis and design involve graphic languages. These graphic
languages are made up of sets of symbols. So, the symbols are used according to certain rules of
methodology for communicating the complex relationships of information more clearly than descriptive
text.
Modeling is used frequently, during many of the phases of the software life cycle such as analysis, design
and implementation. Modeling like any other object-oriented development, is an iterative process.
Why do we model?
Before constructing anything, a designer first builds a model. The main reasons for constructing models
include:
• To test a physical entity before actually building it.
• To set the stage for communication between customers and developers.
• For visualization i.e. for finding alternative representations.
• For reduction of complexity in order to understand it.
The object modeling technique (OMT) is a methodology of object-oriented analysis, design and implementation
that focuses on creating a model of objects from the real world and then using this model to develop object-
oriented software. OMT was developed by James Rumbaugh. Nowadays,
OMT is one of the most popular object-oriented development techniques. It is primarily used by system and
software developers to support full life cycle development while targeting object-oriented implementations.
OMT has proven itself easy to understand, to draw and to use. It is very successful in many application
domains: telecommunications, transportation, compilers etc. The object modeling technique is used
in many real-world problems. The object-oriented paradigm using OMT spans the entire development
cycle, so there is no need to transform one type of model into another.
Phases of OMT
The OMT methodology covers the full software development life cycle. The methodology has the
following phases.
1. Analysis - Analysis is the first phase of OMT methodology. The aim of analysis phase is
to build a model of the real world situation to show its important properties and domain.
This phase is concerned with preparation of precise and correct modelling of the real world.
The analysis phase starts with defining a problem statement which includes a set of goals.
This problem statement is then expanded into three models: an object model, a dynamic
model and a functional model. The object model shows the static data structure or skeleton
of the real-world system and divides the whole application into objects. In other words,
this model represents the artifacts of the system. The dynamic model represents the
interaction between the artifacts designed above, represented as events, states and transitions.
The functional model represents the methods of the system from the data flow perspective.
The analysis phase generates object model diagrams, state diagrams, event flow diagrams
and data flow diagrams.
2. System design - The system design phase comes after the analysis phase. System design
phase determines the overall system architecture using subsystems, concurrent tasks and
data storage. During system design, the high level structure of the system is designed.
The decisions made during system design are:
o The system is organized into sub-systems which are then allocated to processes
and tasks, taking into account concurrency and collaboration.
o Persistent data storage is established along with a strategy to manage shared or
global information.
o Boundary situations are checked to help guide trade off priorities.
3. Object design - The object design phase comes after the system design phase is over. Here
the implementation plan is developed. Object design is concerned with fully classifying the
existing and remaining classes, associations, attributes and operations necessary for
implementing a solution to the problem. In object design:
o Operations and data structures are fully defined along with any internal objects
needed for implementation.
o Class level associations are determined.
o Issues of inheritance, aggregation, association and default values are checked.
OMT models the system using three complementary models:
• Object Model
• Dynamic Model
• Functional Model
1. Object Model :The object model visualizes the elements in a software application in terms of
objects.
Object
An object is a real-world element in an object-oriented environment that may have a physical or a
conceptual existence. Each object has an identity, a state and a behaviour.
Objects can be modelled according to the needs of the application. An object may have a physical existence,
like a customer, a car, etc.; or an intangible conceptual existence, like a project, a process, etc.
Class
A class represents a collection of objects having the same characteristic properties and exhibiting common
behaviour. It gives the blueprint or description of the objects that can be created from it. Creation of an
object as a member of a class is called instantiation. Thus, an object is an instance of a class. A class has:
• A set of attributes for the objects that are to be instantiated from the class. Generally,
different objects of a class have some difference in the values of the attributes. Attributes
are often referred to as class data.
• A set of operations that portray the behaviour of the objects of the class. Operations are
also referred to as functions or methods.
Example
Let us consider a simple class, Circle, that represents the geometrical figure circle in a two– dimensional
space. The attributes of this class can be identified as follows −
x–coord, to denote x–coordinate of the center
y–coord, to denote y–coordinate of the center
a, to denote the radius of the circle
During instantiation, values are assigned for at least some of the attributes. If we create an object my_circle,
we can assign values like x-coord : 2, y-coord : 3, and a : 4 to depict its state. Now, if the operation scale()
is performed on my_circle with a scaling factor of 2, the value of the variable a will become 8. This
operation brings a change in the state of my_circle, i.e., the object has exhibited certain behavior.
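The Circle example above can be sketched directly in Python. This follows the attribute names used in the text (`x_coord`, `y_coord` and the radius `a`); the method body of `scale()` is an assumption, since the notes only describe its effect.

```python
# Sketch of the Circle class described above.
class Circle:
    def __init__(self, x_coord, y_coord, a):
        self.x_coord = x_coord   # x-coordinate of the center
        self.y_coord = y_coord   # y-coordinate of the center
        self.a = a               # radius of the circle

    def scale(self, factor):
        # the operation changes the object's state (its radius)
        self.a *= factor

my_circle = Circle(2, 3, 4)      # instantiation: values assigned to attributes
my_circle.scale(2)               # the object exhibits behaviour
print(my_circle.a)               # 8
```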
2. Dynamic Model:
The dynamic model describes those aspects of a system concerned with time and the sequencing
of operations: events that mark changes, sequences of events, states that define the context
for events, and the organization of events and states. The dynamic model captures control,
that aspect of a system that describes the sequences of operations that occur, without regard
for what the operations do, what they operate on or how they are implemented.
It is represented by state diagrams.
The dynamic model shows the time-dependent behaviour of the system and the objects in it.
Begin analysis by looking for events: externally visible stimuli and responses.
We need to perform the following steps while constructing a dynamic model:
3. Functional Model:
The functional model describes those aspects of a system that are concerned with the
transformation of values: functions, mappings, constraints and functional dependencies. The
functional model captures what a system does, without regard for how or when it is done.
The functional model shows how values are computed, without regard for sequencing, decisions
or object structure.
This model shows which value depend on which other values and the functions that relate
them.
The following steps are performed in constructing a functional model:
Rumbaugh et al. have defined DFD as, “A data flow diagram is a graph which shows the flow of data values
from their sources in objects through processes that transform them to their destinations on other objects.”
The four main parts of a DFD are:
• Processes,
• Data Flows,
• Actors, and
• Data Stores.
The other parts of a DFD are:
• Constraints, and
• Control Flows.
Features of a DFD
Processes
Processes are the computational activities that transform data values. A whole system can be visualized as
a high-level process. A process may be further divided into smaller components.The lowest-level process
may be a simple function.
Representation in DFD − A process is represented as an ellipse with its name written inside it and contains
a fixed number of input and output data values.
Example − The following figure shows a process Compute_HCF_LCM that accepts two integers as inputs
and outputs their HCF (highest common factor) and LCM (least common multiple).
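The Compute_HCF_LCM process can be sketched as a plain function: two integer inputs, two outputs, and no side effects, which is exactly what a DFD process models. The function name is a rendering of the process name from the figure.

```python
# The Compute_HCF_LCM process: transforms two input data values
# (Integer_a, Integer_b) into two output data values (HCF, LCM).
def compute_hcf_lcm(integer_a, integer_b):
    x, y = integer_a, integer_b
    while y:                         # Euclid's algorithm for the HCF
        x, y = y, x % y
    hcf = x
    lcm = integer_a * integer_b // hcf
    return hcf, lcm

print(compute_hcf_lcm(12, 18))       # (6, 36)
```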
Data Flows
Data flow represents the flow of data between two processes. It could be between an actor and a process, or
between a data store and a process. A data flow denotes the value of a data item at some point of the
computation. This value is not changed by the data flow.
Representation in DFD − A data flow is represented by a directed arc or an arrow, labelled with the name
of the data item that it carries.
In the above figure, Integer_a and Integer_b represent the input data flows to the process, while
L.C.M. and H.C.F. are the output data flows.
• The output value is sent to several places as shown in the following figure. Here, the output
arrows are unlabelled as they denote the same value.
• The data flow contains an aggregate value, and each of the components is sent to different
places as shown in the following figure. Here, each of the forked components is labelled.
Actors
Actors are the active objects that interact with the system by either producing data and inputting them to
the system, or consuming data produced by the system. In other words, actors serve as the sources and
the sinks of data.
Example − The following figure shows the actors, namely, Customer and Sales_Clerk in a counter
sales system.
Data Stores
Data stores are the passive objects that act as a repository of data. Unlike actors, they cannot perform any
operations. They are used to store data and retrieve the stored data. They represent a data structure, a disk
file, or a table in a database.
Representation in DFD − A data store is represented by two parallel lines containing the name of the data
store. Each data store is connected to at least one process. Input arrows contain information to modify the
contents of the data store, while output arrows contain information retrieved from the data store. When a
part of the information is to be retrieved, the output arrow is labelled. An unlabelled arrow denotes full data
retrieval. A two-way arrow implies both retrieval and update.
Example − The following figure shows a data store, Sales_Record, that stores the details of all sales. Input
to the data store comprises details of sales such as item, billing amount, date, etc. To find the average
sales, the process retrieves the sales records and computes the average.
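The Sales_Record example can be sketched with the data store as a passive list of records and the computation living entirely in a process function; the field names here are illustrative, not taken from the figure.

```python
# The data store: passive, performs no operations of its own.
sales_record = [
    {"item": "pen",  "billing_amount": 20.0, "date": "2024-01-05"},
    {"item": "book", "billing_amount": 80.0, "date": "2024-01-09"},
]

# The process: retrieves records (full retrieval) and computes the average.
def average_sales(store):
    return sum(r["billing_amount"] for r in store) / len(store)

print(average_sales(sales_record))   # 50.0
```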
Constraints
Constraints specify the conditions or restrictions that need to be satisfied over time. They allow adding new
rules or modifying existing ones. Constraints can appear in all the three models of object-oriented analysis.
• In Object Modelling, the constraints define the relationship between objects. They may also
define the relationship between the different values that an object may take at different
times.
• In Dynamic Modelling, the constraints define the relationship between the states and events
of different objects.
• In Functional Modelling, the constraints define the restrictions on the transformations and
computations.
Example − The following figure shows a portion of DFD for computing the salary of employees of a
company that has decided to give incentives to all employees of the sales department and increment the
salary of all employees of the HR department. It can be seen that the constraint
{Dept:Sales} causes incentive to be calculated only if the department is sales and the constraint
{Dept:HR} causes increment to be computed only if the department is HR.
Control Flows
A process may be associated with a certain Boolean value and is evaluated only if the value is true, though
it is not a direct input to the process. These Boolean values are called the control flows.
Representation in DFD − Control flows are represented by a dotted arc from the process producing the
Boolean value to the process controlled by them.
Example − The following figure represents a DFD for arithmetic division. The Divisor is tested for non-
zero. If it is not zero, the control flow OK has a value True and subsequently the Divide process computes
the Quotient and the Remainder.
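The division example can be sketched with the control flow OK as a Boolean guard: the Divide process is evaluated only when the test on the Divisor succeeds. Returning `None` for the blocked case is a choice made for the sketch.

```python
# The control flow OK: a Boolean that gates the Divide process,
# without itself being a data input to the division.
def divide(dividend, divisor):
    ok = divisor != 0                    # the non-zero test process
    if not ok:
        return None                      # Divide is simply not evaluated
    quotient, remainder = divmod(dividend, divisor)
    return quotient, remainder

print(divide(17, 5))                     # (3, 2)
print(divide(17, 0))                     # None
```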
In order to develop the DFD model of a system, a hierarchy of DFDs is constructed. The top-level DFD
comprises a single process and the actors interacting with it.
At each successive lower level, further details are gradually included. A process is decomposed into sub-
processes, the data flows among the sub-processes are identified, the control flows are
determined, and the data stores are defined. While decomposing a process, the data flow into or out of the
process should match the data flow at the next level of DFD.
Example − Let us consider a software system, Wholesaler Software, that automates the transactions of a
wholesale shop. The shop sells in bulk and has a clientele comprising merchants and retail shop owners.
Each customer is asked to register with his/her particulars and is given a unique customer code, C_Code.
Once a sale is done, the shop registers its details and sends the goods for dispatch. Each year, the shop
distributes Christmas gifts to its customers, which comprise a silver coin or a gold coin depending upon
the total sales and the decision of the proprietor.
The functional model for the Wholesale Software is given below. The figure below shows the top-level
DFD. It shows the software as a single process and the actors that interact with it.
• Customers
• Salesperson
• Proprietor
In the next level DFD, as shown in the following figure, the major processes of the system are identified,
the data stores are defined and the interaction of the processes with the actors, and the data stores are
established.
• Register Customers
• Process Sales
• Ascertain Gifts
• Customer Details
• Sales Details
• Gift Details
The following figure shows the details of the process Register Customer. There are three processes in it,
Verify Details, Generate C_Code, and Update Customer Details. When the details of the customer are
entered, they are verified. If the data is correct, C_Code is generated and the data store Customer Details is
updated.
The following figure shows the expansion of the process Ascertain Gifts. It has two processes in it, Find Total Sales and
Decide Type of Gift Coin. The Find Total Sales process computes the yearly total sales corresponding to each customer and
records the data. Taking this record and the decision of the proprietor as inputs, the gift coins are allotted through the Decide
Type of Gift Coin process.
The Object Model, the Dynamic Model, and the Functional Model are complementary to each other for a complete Object-
Oriented Analysis.
• Object modelling develops the static structure of the software system in terms of objects. Thus it shows
the “doers” of a system.
• Dynamic Modelling develops the temporal behavior of the objects in response to external events. It
shows the sequences of operations performed on the objects.
• Functional model gives an overview of what the system should do.
The four main parts of a Functional Model in terms of object model are −
• Process − Processes imply the methods of the objects that need to be implemented.
• Actors − Actors are the objects in the object model.
• Data Stores − These are either objects in the object model or attributes of objects.
• Data Flows − Data flows to or from actors represent operations on or by objects. Data flows to or
from data stores represent queries or updates.
Functional Model and Dynamic Model
The dynamic model states when the operations are performed, while the functional model states how they are performed
and which arguments are needed. As actors are active objects, the dynamic model has to specify when they act. The data stores
are passive objects and they only respond to updates and queries; therefore the dynamic model need not specify when they
act.
The dynamic model shows the status of the objects and the operations performed on the occurrences of events and the
subsequent changes in states. The state of the object as a result of the changes is shown in the object model.
UNIT-03
TOPIC: DESIGNING
ALGORITHMS
UNIT-03/LECTURE-02
Designing algorithms
Each operation specified in the functional model must be formulated as an algorithm.
The analysis specification tells what the operation does from the viewpoint of its clients,
but the algorithm shows how it is done. An algorithm may be subdivided into calls on
simpler operations, and so on recursively, until the lowest-level operations are simple
enough to implement directly without refinement. The algorithm designer must decide on
the following:
i) Choosing algorithms
Many operations are simple enough that the specification in the functional model already
constitutes a satisfactory algorithm because the description of what is done also shows
how it is done. Many operations simply traverse paths in the object link network or retrieve
or change attributes or links.
Non-trivial algorithms are needed for two reasons:
• To implement functions for which no procedural specification exists.
• To optimize functions for which a simple but inefficient algorithm serves as a
definition.
Some functions are specified as declarative constraints without any procedural definition.
In such cases, you must use your knowledge of the situation to invent an algorithm. The
essence of most geometry problems is the discovery of appropriate algorithms and the
proof that they are correct. Most functions have simple mathematical or procedural
definitions. Often the simple definition is also the best algorithm for computing the
function, or else is so close to any other algorithm that any loss in efficiency is
worth the gain in clarity. In other cases, the simple definition of an operation would be
hopelessly inefficient and must be implemented with a more efficient algorithm.
For example, let us consider the algorithm for a search operation. A search can be done in
two ways: binary search (which performs about log n comparisons on sorted data) and linear search
(which performs n/2 comparisons on average). Suppose our search algorithm is implemented
using linear search, which needs more comparisons. It would be better to implement the search
with a more efficient algorithm like binary search.
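Counting comparisons makes the contrast concrete. The sketch below instruments both searches over a sorted list of 1024 items: linear search for the value 700 costs 701 comparisons, while binary search needs at most 10.

```python
# Linear search: scans from the front, counting comparisons.
def linear_search(data, target):
    comparisons = 0
    for i, value in enumerate(data):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

# Binary search: requires sorted data, halves the range each step.
def binary_search(data, target):
    lo, hi, comparisons = 0, len(data) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if data[mid] == target:
            return mid, comparisons
        if data[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, comparisons

data = list(range(1024))
print(linear_search(data, 700)[1])   # 701
print(binary_search(data, 700)[1])   # 10
```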
Considerations in choosing among alternative algorithms include:
a) Computational Complexity:
It is essential to think about complexity, i.e. how the execution time (or memory) grows with the
number of input values.
For example: for a bubble sort algorithm, time ∝ n²; for most other sorting algorithms, time ∝ n log n.
b) Ease of implementation and understandability:
It is worth giving up some performance on non-critical operations if they can be implemented
quickly with a simple algorithm.
c) Flexibility:
Most programs will be extended sooner or later. A highly optimized algorithm often sacrifices
readability and ease of change. One possibility is to provide two Implementations of critical
applications, a simple but inefficient algorithm that can be implemented, quickly and used to
validate the system, and a complicated but efficient algorithm whose correct implementation can
be checked against the simple one.
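The two-implementation idea can be sketched as follows: a simple selection sort, whose correctness is easy to see, validates a stand-in for the optimized implementation (here Python's built-in `sorted`, used purely as a placeholder for the "complicated but efficient" algorithm).

```python
import random

# Simple but inefficient reference implementation (selection sort, O(n^2)).
def simple_sort(items):
    items = list(items)
    for i in range(len(items)):
        j = min(range(i, len(items)), key=items.__getitem__)
        items[i], items[j] = items[j], items[i]
    return items

# Stand-in for the optimized implementation to be validated.
def efficient_sort(items):
    return sorted(items)

# Cross-check the efficient version against the simple one during testing.
data = [random.randint(0, 99) for _ in range(50)]
assert efficient_sort(data) == simple_sort(data)
```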
d) Fine-Tuning the Object Model:
We have to consider whether there would be any alternatives if the object model were structured
differently.
Complex operations may be decomposed into lower-level operations on simpler objects. These lower-level
operations must be defined during object design because most of them are not externally visible. During
the design phase, you may have to add new classes that were not mentioned directly in the client’s
description of the problem.
These low-level classes are the implementation elements out of which the application classes are
built.
The basic design model uses the analysis model as the framework for implementation.
• The analysis model captures the logical information about the system, while the
design model must add details to support efficient information access.
• The inefficient but semantically correct analysis model can be optimized to make
the implementation more efficient, but an optimized system is more obscure and less likely
to be reusable in another context. The designer must strike an appropriate balance
between efficiency and clarity.
• During design optimization, the designer must add redundant associations for efficient
access. During analysis, it is undesirable to have redundancy in the association network because
redundant associations do not add any information.
During design, however we evaluate the structure of the object model for an implementation. For
that, we have to answer the following questions:
* Is there a specific arrangement of the network that would optimize critical aspects of the completed
system?
* Should the network be restructured by adding new associations?
* Can existing associations be omitted?
The associations that were useful during analysis may not form the most efficient network when
the access patterns and relative frequencies of different kinds of access are considered.
In cases where the number of hits from a query is low because only a fraction of objects satisfy the
test, we can build an index to improve access to objects that must be frequently retrieved.
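A derived index can be sketched as a dictionary keyed on the tested attribute: it is redundant with the object list, but turns the frequent query into a lookup instead of a full scan. The `Employee` class and department names are invented for the sketch.

```python
# Objects to be queried frequently by department.
class Employee:
    def __init__(self, name, dept):
        self.name, self.dept = name, dept

employees = [Employee("Asha", "Sales"), Employee("Ravi", "HR"),
             Employee("Meena", "Sales")]

# The index: redundant data, built purely for efficient access.
by_dept = {}
for e in employees:
    by_dept.setdefault(e.dept, []).append(e)

# A query with few hits no longer scans every object.
print([e.name for e in by_dept["Sales"]])   # ['Asha', 'Meena']
```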
After adjusting the structure of the object model to optimize frequent traversal, the next thing to
optimize is the algorithm itself. Algorithms and data structures are directly related to each other, but
we find that usually the data structure should be considered first. One key to algorithm optimization
is to eliminate dead paths as early as possible. Sometimes the execution order of a loop must
be inverted.
Data that is redundant because it can be derived from other data can be “cached”, or stored in its
computed form, to avoid the overhead of recomputing it. The class that contains the cached data
must be updated if any of the objects that it depends on are changed.
Derived attributes must be updated when base values change. There are three ways to recognize when
an update is needed:
Explicit update: Each attribute is defined in terms of one or more fundamental base objects.
The designer determines which derived attributes are affected by each change to a
fundamental attribute and inserts code into the update operation on the base object to explicitly
update the derived attributes that depend on it.
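Explicit update can be sketched like this: the update operation on the base attribute carries inserted code that refreshes the derived attribute depending on it. The `Rectangle` class is an invented example.

```python
class Rectangle:
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.area = width * height          # derived (cached) attribute

    def set_width(self, width):
        self.width = width
        # code inserted by the designer to explicitly update the
        # derived attribute that depends on the base attribute
        self.area = self.width * self.height

r = Rectangle(3, 4)
r.set_width(5)
print(r.area)                               # 20
```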
Periodic Recomputation: Base values are updated in bunches. Recompute all derived attributes
periodically, rather than after each base value changes. Recomputation of all derived attributes can
be more efficient than incremental update because some derived attributes may depend on several
base attributes and might be updated more than once by the incremental approach. Periodic
recomputation is simpler than explicit update and less prone to bugs. On the other hand, if the data
set changes incrementally, a few objects at a time, periodic recomputation is not practical because
too many derived attributes must be recomputed when only a few are affected.
Active values: An active value is a value that has dependent values. Each dependent value registers
itself with the active value, which contains a set of dependent values and update operations. An
operation to update the base value triggers updates to all dependent values, but the calling code need
not explicitly invoke the updates. This provides modularity.
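An active value can be sketched in the style of the observer pattern: dependents register update operations with the base value, and setting the base value triggers them without the caller's involvement. The `ActiveValue` class and the tax example are invented for the sketch.

```python
class ActiveValue:
    def __init__(self, value):
        self._value = value
        self._dependents = []            # registered update operations

    def register(self, update_fn):
        self._dependents.append(update_fn)

    def set(self, value):
        self._value = value
        for update in self._dependents:  # triggered automatically;
            update(value)                # the caller never invokes these

    def get(self):
        return self._value

price = ActiveValue(100)
derived = []
# a dependent value: the price with 20% tax, recomputed on every update
price.register(lambda v: derived.append(v * 1.2))
price.set(200)
print(derived)                           # [240.0]
```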
Structured Analysis and Structured Design (SA/SD)
In software engineering, structured analysis (SA) and structured design (SD) are methods for analyzing
business requirements and developing specifications for converting practices into computer programs,
hardware configurations, and related manual procedures.
Structured analysis and design techniques are fundamental tools of systems analysis. Structured analysis
consists of interpreting the system concept (or real world situations) into data and control terminology
represented by data flow diagrams. The flow of data and control from bubble to the data store to bubble
can be difficult to track and the number of bubbles can increase.
Structured Analysis views a system from the perspective of the data flowing through it. The function of the
system is described by processes that transform the data flows. Structured analysis takes advantage of
information hiding through successive decomposition (or top down) analysis. This allows attention to be
focused on pertinent details and avoids confusion from looking at irrelevant details. As the level of detail
increases, the breadth of information is reduced. The result of structured analysis is a set of related graphical
diagrams, process descriptions, and data definitions. They describe the transformations that need to take
place and the data required to meet a system's functional requirements .
Various tools and techniques are used for system development. They are −
Data Flow Diagram (DFD)
DFD is a technique developed by Larry Constantine to express the requirements of a system in graphical form.
• It shows the flow of data between various functions of system and specifies how the
current system is implemented.
• It is an initial stage of design phase that functionally divides the requirement
specifications down to the lowest level of detail.
• Its graphical nature makes it a good communication tool between user and analyst or
analyst and system designer.
• It gives an overview of what data a system processes, what transformations are
performed, what data are stored, what results are produced and where they flow.
DFD is easy to understand and quite effective when the required design is not clear and the user wants a
notational language for communication. However, it requires a large number of iterations for obtaining
the most accurate and complete solution.
The following table shows the symbols used in designing a DFD and their significance –
DFDs are of two types: Physical DFD and Logical DFD. The following points differentiate a
physical DFD from a logical DFD.
• Physical DFD − It provides low-level details of hardware, software, files, and people. It depicts
how the current system operates and how a system will be implemented.
• Logical DFD − It explains the events of systems and the data required by each event. It shows
how the business operates, not how the system can be implemented.
Context Diagram
A context diagram helps in understanding the entire system by one DFD which gives the overview of a
system. It starts with mentioning major processes with little detail and then goes on to give more details
of the processes with the top-down approach.
Data Dictionary
A data dictionary is a structured repository of data elements in the system. It stores the descriptions of all
DFD data elements, that is, details and definitions of data flows, data stores, data stored in data stores, and
the processes.
A data dictionary improves the communication between the analyst and the user. It plays an important role
in building a database. Most DBMSs have a data dictionary as a standard feature. For example, refer the
following table −
Decision Trees
Decision trees are a method for defining complex relationships by describing decisions and avoiding the
problems in communication. A decision tree is a diagram that shows alternative actions and conditions
within a horizontal tree framework. Thus, it depicts which conditions to consider first, second, and so on.
Decision trees depict the relationship of each condition and their permissible actions. A square node
indicates an action and a circle indicates a condition. It forces analysts to consider the sequence of decisions
and identifies the actual decision that must be made.
The major limitation of a decision tree is that it lacks information in its format to describe what other
combinations of conditions you can take for testing. It is a single representation of the relationships between
conditions and actions.
Pseudocode does not conform to any programming language; it expresses logic in plain English.
• It may specify the physical programming logic without actual coding, during and after the physical design.
• It is used in conjunction with structured programming.
• It can replace the flowcharts of a program.
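As a hypothetical illustration of the style, a simple discount rule might be written in pseudocode as:

```
READ order-total
IF order-total >= 100 THEN
    SET discount TO 5 PERCENT OF order-total
ELSE
    SET discount TO 0
ENDIF
PRINT discount
```

Note that the logic is complete and unambiguous even though no programming-language syntax is used.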
The Structured Analysis/Structured Design (SASD) approach is the traditional approach of software
development based upon the waterfall model. The phases of development of a system using SASD are −
• Feasibility Study
• Requirement Analysis and Specification
• System Design
• Implementation
• Post-implementation Review
Now, we will look at the relative advantages and disadvantages of the structured analysis approach and the object-oriented analysis approach.
Advantages of Object-Oriented Analysis
• Focuses on data rather than on procedures, as in structured analysis.
• Systems can be upgraded from small to large with greater ease than systems built with structured analysis.
Disadvantages
• Functionality is restricted within objects. This may pose a problem for systems which are
intrinsically procedural or computational in nature.
Advantages of Structured Analysis
• As it follows a top-down approach, in contrast to the bottom-up approach of object-oriented analysis, it can be more easily comprehended than OOA.
• Its specifications are written in simple English, and hence can be more easily analyzed by non-technical personnel.
Disadvantages of Structured Analysis
• In traditional structured analysis models, one phase must be completed before moving to the next. This poses a problem in design, particularly if errors crop up or requirements change.
• It does not support reusability of code, so the time and cost of development are inherently high.
Jackson System Development (JSD)
JSD steps
When it was originally presented by Jackson in 1982 the method consisted of six
steps:
1. Entity/action step
2. Initial model step
3. Interactive function step
4. Information function step
5. System timing step
6. System implementation step
Later, some steps were combined to create a method with only three steps:
1. Modelling stage (analysis): with the entity/action step and entity structures step.
2. Network stage (design): with the initial model step, function step, and system
timing step.
3. Implementation stage (realisation): the implementation step.
Modeling stage
In the modeling stage the designer creates a collection of entity structure diagrams
and identifies the entities in the system, the actions they perform, the time-ordering of
the actions in the life of the entities, and the attributes of the actions and entities.
Entity structure diagrams use the diagramming notation of Jackson Structured
Programming structure diagrams. The purpose of these diagrams is to create a full
description of the aspects of the system and the organisation. Developers have to
decide which things are important and which are not. Good communication between
developers and users of the new system is very important. This stage is the
combination of the former entity/action step and the entity structures step.
Network stage
In the network stage a model of the system as a whole is developed and represented
as a system specification diagram (SSD) (also known as a network diagram).
Network diagrams show processes (rectangles) and how they communicate with each
other, either via state vector connections (diamonds) or via datastream connections
(circles). In this stage, the functionality of the system is defined. Each entity becomes
a process or program in the network diagram. External programs are later added to the
network diagrams. The purpose of these programs is to process input, calculate output
and to keep the entity processes up-to-date. The whole system is described with these network diagrams, which are completed with descriptions of the data and the connections between the processes and programs.
The initial model step specifies a simulation of the real world. The function step adds to this simulation the further executable operations and processes needed to produce the output of the system. The system timing step provides synchronisation among processes and introduces timing constraints.
This stage is the combination of the former ‘Initial model’ step, the ‘function’ step
and the ‘system timing’ step.
Implementation stage
In the implementation stage the abstract network model of the solution is converted
into a physical system, represented as a system implementation diagram (SID). The
SID shows the system as a scheduler process that calls modules that implement the
processes.
Datastreams are represented as calls to inverted processes. Database symbols
represent collections of entity state vectors, and there are special symbols for file
buffers (which must be implemented when processes are scheduled to run at different
time intervals).
The central concern of the implementation step is optimisation of the system. It is necessary to reduce the number of processes, because it is impossible to provide each process contained in the specification with its own virtual processor. By means of transformation, processes are combined in order to limit their number to the number of available processors.
• Avoid it.
• Flatten the class hierarchy.
• Break out separate objects.
5. Implement method resolution : Method resolution is one of the main features of an object-oriented language that is lacking in a non-object-oriented language. Method resolution can be implemented in the following ways:
• Avoid it.
• Resolve methods at compile time.
• Resolve methods at run time.
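In an object-oriented language such as Java, both forms of resolution are built in: an overloaded call is resolved at compile time from the argument types, while an overridden method is resolved at run time from the receiving object's actual class. A minimal sketch (the class names are illustrative):

```java
public class MethodResolution {
    static class Printer {
        String describe() { return "printer"; }            // may be overridden: run-time resolution
        static int twice(int n) { return 2 * n; }          // overload chosen at compile time (int)
        static double twice(double x) { return 2.0 * x; }  // overload chosen at compile time (double)
    }

    static class LaserPrinter extends Printer {
        @Override
        String describe() { return "laser printer"; }      // replaces the base-class method
    }

    public static String resolve() {
        Printer p = new LaserPrinter();  // static type Printer, dynamic type LaserPrinter
        return p.describe();             // run-time resolution selects LaserPrinter's method
    }

    public static void main(String[] args) {
        System.out.println(resolve());           // prints "laser printer"
        System.out.println(Printer.twice(3));    // int overload: prints 6
        System.out.println(Printer.twice(3.0));  // double overload: prints 6.0
    }
}
```

In a non-object-oriented language, run-time resolution would have to be simulated by hand, for example with a table of function pointers, which is why avoiding it or resolving at compile time are listed as options.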
6. Implement associations : Associations in a non-object-oriented language can be implemented by:
• Mapping each association into pointer (reference) attributes buried in the related objects.
• Implementing an association as a distinct association object.
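A common mapping, following OMT-style object design, implements an association as reference attributes in the participating classes; in a non-object-oriented language these would be plain pointers or foreign keys. The Department/Employee names below are illustrative only:

```java
import java.util.ArrayList;
import java.util.List;

// One-to-many association Department <-> Employee, implemented with a
// reference on the "many" side and a list of references on the "one" side.
public class AssociationDemo {
    static class Department {
        final String name;
        final List<Employee> members = new ArrayList<>();  // forward traversal
        Department(String name) { this.name = name; }
    }

    static class Employee {
        final String name;
        Department dept;                                   // backward traversal
        Employee(String name) { this.name = name; }
    }

    // Update both directions of the association in one place to keep them consistent.
    static void hire(Department d, Employee e) {
        e.dept = d;
        d.members.add(e);
    }

    public static void main(String[] args) {
        Department eng = new Department("Engineering");
        Employee ada = new Employee("Ada");
        hire(eng, ada);
        System.out.println(ada.dept.name + " has " + eng.members.size() + " member(s)");
    }
}
```

Centralising the update in `hire` matters because a two-way association stored as two separate references can silently fall out of sync.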
Definition of OOP:
Object-oriented programming is a programming methodology that associates data structures with a set of operators that act upon them.
Depending on the object features supported, the languages are classified into two
categories:
• Object-Based Programming Languages
• Object-Oriented Programming Languages
The topology of object-oriented programming is shown in the figure below. The modules represent the physical building blocks of these languages; a module is a collection of classes and objects.
(Fig. Object-Oriented Programming: modules made up of communicating objects)
Program and data are the two basic elements of any programming language. Data plays an important role and can exist without a program, but a program has no relevance without data. Conventional high-level languages stress the algorithms used to solve a problem, and complex procedures have been simplified by structured programming. There are two paradigms that govern how a program is constructed. The first is the process-oriented model. This approach characterizes a program as a series of linear steps; the process-oriented model can be thought of as code acting on data. Procedural languages are also called function-oriented languages (C, for example). The second approach is object-oriented programming. It organizes a program around its data and a set of well-defined interfaces to that data. An object-oriented program can be characterized as data controlling access to code.
(Fig. Procedural Programming: functions such as Function 1 to Function 4 operating on shared data)
OOP uses objects and not algorithms as its fundamental building blocks. Each
object is an instance of some class. Classes allow the mechanism of data abstraction
for creating new data types. Inheritance allows building of new classes from the
existing class.
Unlike traditional languages OO languages allow localization of data and code and
restrict other objects from referring to its local region. OOP is centered on the
concepts of objects, encapsulation, abstract data types, inheritance, polymorphism,
and message-based communication. An OO language views the data and its
associated set of functions as an object and treats this combination as a single entity.
Thus, an object is visualized as a combination of data and functions, which
manipulate them. During the execution of a program, the objects interact with each
other by sending messages and receiving responses.
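Message passing can be sketched as one object invoking a method on another and using the returned response. The Account example below is made up for illustration:

```java
// An "ATM" (here, main) sends the message "withdraw" to an Account object
// and receives a response: the new balance.
public class MessageDemo {
    static class Account {
        private int balance;                 // data manipulated only by the object's own methods
        Account(int opening) { balance = opening; }

        int withdraw(int amount) {           // a message the object understands
            if (amount <= balance) balance -= amount;
            return balance;                  // the response sent back to the caller
        }
    }

    public static void main(String[] args) {
        Account acct = new Account(100);
        int response = acct.withdraw(30);    // message send: receiver.message(arguments)
        System.out.println(response);        // prints 70
    }
}
```

The caller never touches `balance` directly; it only sends messages and interprets the responses, which is the interaction style the paragraph above describes.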
The wrapping up of data and methods into a single unit (called class) is known as
encapsulation. Data encapsulation is the most striking feature of a class. The data
is not accessible to the outside world and only those methods, which are wrapped in
the class, can access it. These methods provide the interface between the object’s
data and the program. This insulation of the data from direct access by the program
is called data hiding. Encapsulation makes it possible for objects to be treated like
“black boxes” each performing a specific task without any concern for internal
implementation.
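Encapsulation and data hiding can be shown in a few lines. In this sketch (the Counter class is invented for illustration) the state is declared private, so outside code can reach it only through the class's interface:

```java
// The data (count) is wrapped together with the methods that act on it.
// Code outside Counter cannot read or write count directly.
class Counter {
    private int count = 0;          // hidden state: inaccessible outside the class

    void increment() { count++; }   // part of the public interface
    int value() { return count; }   // controlled read access
}

public class CounterDemo {
    public static void main(String[] args) {
        Counter c = new Counter();
        c.increment();
        c.increment();
        System.out.println(c.value());  // prints 2
        // c.count = 99;  // would not compile here: count is private to Counter
    }
}
```

To client code, `Counter` behaves as a black box: the interface (`increment`, `value`) is visible, while the representation stays hidden and can be changed without affecting callers.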
Inheritance is the process by which objects of one class acquire the properties of objects of another class. Inheritance supports the concept of hierarchical classification. For example, a robin belongs to the class of flying birds, which in turn is part of the class of birds. As illustrated in the figure, the principle behind this sort of division is that each derived class shares common characteristics with the class from which it is derived.
In OOP, the concept of inheritance provides the idea of reusability. This means that
we can add additional features to an existing class without modifying it. This is
possible by deriving a new class from the existing one. The new class will have the
combined features of both the classes. Thus the real appeal and power of the
inheritance mechanism is that it allows the programmer to reuse a class that is
almost, but not exactly, what he wants, and to tailor the class in such a way that it
does not introduce any undesirable side effects into the rest of the classes. In Java, this is done by deriving a new class from an existing one with the extends keyword.
(Fig. Class hierarchy: Bird, with subclasses Flying Bird and Nonflying Bird)
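Following the Bird hierarchy of the figure, a short sketch of reuse through inheritance (the attribute and method names are illustrative): the derived class gains the base class's features without the base class being modified.

```java
// Base class: common characteristics shared by all birds.
class Bird {
    String name;
    Bird(String name) { this.name = name; }
    String eat() { return name + " eats"; }   // inherited unchanged by every subclass
}

// Derived class: reuses Bird and adds its own feature.
class FlyingBird extends Bird {
    FlyingBird(String name) { super(name); }
    String fly() { return name + " flies"; }  // new feature; Bird itself is untouched
}

public class BirdDemo {
    public static void main(String[] args) {
        FlyingBird robin = new FlyingBird("robin");
        System.out.println(robin.eat());  // inherited behaviour: prints "robin eats"
        System.out.println(robin.fly());  // added behaviour: prints "robin flies"
    }
}
```

This is the reusability the text describes: FlyingBird has the combined features of both classes, and Bird required no modification.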
(Fig. Polymorphism: a Shape base class whose Draw() operation is redefined by each subclass)
Polymorphism plays an important role in allowing objects having different internal
structures to share the same external interface. This means that a general class of
operations may be accessed in the same manner even though specific actions
associated with each operation may differ. Polymorphism is extensively used in
implementing inheritance.
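Matching the Shape figure, a minimal sketch of one external interface shared by objects with different internal behaviour (the subclass names are illustrative):

```java
// One external interface (draw) shared by objects with different internals.
abstract class Shape {
    abstract String draw();
}

class Circle extends Shape {
    @Override String draw() { return "drawing a circle"; }
}

class Square extends Shape {
    @Override String draw() { return "drawing a square"; }
}

public class ShapeDemo {
    public static void main(String[] args) {
        Shape[] shapes = { new Circle(), new Square() };
        for (Shape s : shapes) {
            System.out.println(s.draw());  // same call, shape-specific action
        }
    }
}
```

The loop accesses every shape in the same manner, yet the specific action differs per object, which is exactly the behaviour the paragraph above describes.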
Advantages of OOP
• Through inheritance we can eliminate redundant code and extend the use of
existing classes.
• We can build programs from the standard working modules that communicate
with one another rather than having to start writing the code from scratch. This
leads to saving of development time and higher productivity.
• The principle of data hiding helps the programmer build secure programs that cannot be invaded by code in other parts of the program.
• The data-centered design approach enables us to capture more details of a model in an implementable form.
• Object-oriented systems can be easily upgraded from small to large systems.