Software Engineering Lab Manual 2011-12


Yeshwantrao Chavan College of Engineering

Wanadongri Hingna Road, Nagpur-441 110

Department of Computer Technology

LAB MANUAL ON

SOFTWARE ENGINEERING
SIXTH SEMESTER 2011-2012



SIXTH SEMESTER Term II (2010-2011) SOFTWARE ENGINEERING MANUAL

INDEX

1. Introduction to Software Engineering fundamentals
2. Overview of UML and Introduction to Rational Rose Interface
3. To identify use cases and draw Use Case diagram for the given case study.
4. To create use case documents for the given case study.
5. To study Software Requirement Specification template.
6. Detailed analysis and design of Mini Project (SRS).
7. To study E-R Diagram and Data Flow Diagrams.
8. To draw Use Case Diagram and E-R diagram for Mini project.
9. To study and draw UML Class diagrams.
10. To study and draw Activity diagrams.

BEYOND SYLLABUS:
1. To study Manual/Automated testing.
2. To study Microsoft Project Plan.

PRACTICAL NO: 1 AIM: Introduction to Software Engineering fundamentals.


THEORY:
1. Software:
A textbook description of software might take the following form: Software is (1) instructions (computer programs) that when executed provide desired function and performance, (2) data structures that enable the programs to adequately manipulate information, and (3) documents that describe the operation and use of the programs. There is no question that other, more complete definitions could be offered. But we need more than a formal definition.

2. Software Characteristics:
To gain an understanding of software (and ultimately an understanding of software engineering), it is important to examine the characteristics of software that make it different from other things that human beings build. When hardware is built, the human creative process (analysis, design, construction, testing) is ultimately translated into a physical form. If we build a new computer, our initial sketches, formal design drawings, and breadboarded prototype evolve into a physical product (chips, circuit boards, power supplies, etc.). Software is a logical rather than a physical system element. Therefore, software has characteristics that are considerably different from those of hardware:

I. Software is developed or engineered; it is not manufactured in the classical sense.

Although some similarities exist between software development and hardware manufacture, the two activities are fundamentally different. In both activities, high quality is achieved through good design, but the manufacturing phase for hardware can introduce quality problems that are nonexistent (or easily corrected) for software. Both activities are dependent on people, but the relationship between people applied and work accomplished is entirely different. Both activities require the construction of a "product" but the approaches are different. Software costs are concentrated in engineering. This means that software projects cannot be managed as if they were manufacturing projects.

II. Software doesn't "wear out."


Figure 1 depicts failure rate as a function of time for hardware. The relationship, often called the "bathtub curve," indicates that hardware exhibits relatively high failure rates early in its life (these failures are often attributable to design or manufacturing defects); defects are corrected and the failure rate drops to a steady-state level (ideally, quite low) for some period of time. As time passes, however, the failure rate rises again as hardware components suffer from the cumulative effects of dust, vibration, abuse, temperature extremes, and many other environmental maladies. Stated simply, the hardware begins to wear out.

Software is not susceptible to the environmental maladies that cause hardware to wear out. In theory, therefore, the failure rate curve for software should take the form of the idealized curve shown in Figure 2. Undiscovered defects will cause high failure rates early in the life of a program. However, these are corrected (ideally, without introducing other errors) and the curve flattens as shown. The idealized curve is a gross oversimplification of actual failure models for software. However, the implication is clear: software doesn't wear out. But it does deteriorate!

This seeming contradiction can best be explained by considering the actual curve shown in Figure 2. During its life, software will undergo change (maintenance). As changes are made, it is likely that some new defects will be introduced, causing the failure rate curve to spike as shown in Figure 2. Before the curve can return to the original steady-state failure rate, another change is requested, causing the curve to spike again. Slowly, the minimum failure rate level begins to rise: the software is deteriorating due to change.

Another aspect of wear illustrates the difference between hardware and software. When a hardware component wears out, it is replaced by a spare part. There are no software spare parts. Every software failure indicates an error in design or in the process through which design was translated into machine-executable code. Therefore, software maintenance involves considerably more complexity than hardware maintenance.

FIGURE 1. Failure curve for hardware

FIGURE 2. Idealized and actual failure curves for Software
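The shape of the two curves can be sketched numerically. The snippet below is purely illustrative: every constant is an arbitrary, made-up value chosen to reproduce the shape of the curves described above, not real failure data. Each "change" adds a transient spike plus a small permanent rise, so the minimum failure rate creeps upward.

```python
# Illustrative sketch of the idealized vs. actual software failure curves.
# All constants are arbitrary assumptions, chosen only for the curve shapes.
import math

def idealized_rate(t):
    """Early defects are corrected; the rate decays to a low steady state."""
    return 0.05 + 0.95 * math.exp(-t / 5.0)

def actual_rate(t, changes=(20, 40, 60)):
    """Each change (maintenance) adds a transient spike and a small
    permanent rise, so the minimum failure rate slowly climbs."""
    rate = idealized_rate(t)
    for c in changes:
        if t >= c:
            rate += 0.02                      # deterioration left behind by the change
            rate += 0.5 * math.exp(-(t - c))  # transient spike just after the change
    return rate

# Before any change the two curves coincide; after several changes the
# actual curve sits above the idealized one: the software has deteriorated.
print(idealized_rate(75), actual_rate(75))
```

Running this shows the actual rate ending above the idealized steady state, which is exactly the "deterioration due to change" argued in the text.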

III. Although the industry is moving toward component-based assembly, most software continues to be custom built.

Consider the manner in which the control hardware for a computer-based product is designed and built. The design engineer draws a simple schematic of the digital circuitry, does some fundamental analysis to assure that proper function will be achieved, and then goes to the shelf where catalogs of digital components exist. Each integrated circuit (called an IC or a chip) has a part number, a defined and validated function, a well-defined interface, and a standard set of integration guidelines. After each component is selected, it can be ordered off the shelf.

As an engineering discipline evolves, a collection of standard design components is created. Standard screws and off-the-shelf integrated circuits are only two of thousands of standard components that are used by mechanical and electrical engineers as they design new systems. The reusable components have been created so that the engineer can concentrate on the truly innovative elements of a design, that is, the parts of the design that represent something new. In the hardware world, component reuse is a natural part of the engineering process. In the software world, it is something that has only begun to be achieved on a broad scale.

A software component should be designed and implemented so that it can be reused in many different programs. In the 1960s, we built scientific subroutine libraries that were reusable in a broad array of engineering and scientific applications. These subroutine libraries reused well-defined algorithms in an effective manner but had a limited domain of application. Today, we have extended our view of reuse to encompass not only algorithms but also data structures. Modern reusable components encapsulate both data and the processing applied to the data, enabling the software engineer to create new applications from reusable parts. For example, today's graphical user interfaces are built using reusable components that enable the creation of graphics windows, pull-down menus, and a wide variety of interaction mechanisms. The data structures and processing detail required to build the interface are contained within a library of reusable components for interface construction. Nevertheless, most software continues to be custom built.
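As a small illustration of "encapsulating both data and the processing applied to the data," the hypothetical component below (the class and names are invented for this manual, not from any library) can be reused unchanged by two different applications:

```python
# Hypothetical reusable component: it encapsulates a data structure and
# the processing applied to it, so clients never touch its internals.
class RunningStats:
    """Maintains a collection of values and the statistics computed over it."""

    def __init__(self):
        self._values = []          # encapsulated data structure

    def add(self, x):
        self._values.append(x)     # encapsulated processing

    def mean(self):
        return sum(self._values) / len(self._values)

# Application 1: an engineering analysis reuses the component...
stress = RunningStats()
for reading in [10.0, 12.0, 14.0]:
    stress.add(reading)
print(stress.mean())  # 12.0

# Application 2: ...and a business report reuses the very same component.
sales = RunningStats()
for amount in [100, 200]:
    sales.add(amount)
print(sales.mean())  # 150.0
```

Neither application knows how the component stores its values; that is the essence of building new applications from reusable parts.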

IV. Software Applications:

Software may be applied in any situation for which a prespecified set of procedural steps (i.e., an algorithm) has been defined (notable exceptions to this rule are expert system software and neural network software). Information content and determinacy are important factors in determining the nature of a software application. Content refers to the meaning and form of incoming and outgoing information. For example, many business applications use highly structured input data (a database) and produce formatted reports. Software that controls an automated machine (e.g., a numerical control) accepts discrete data items with limited structure and produces individual machine commands in rapid succession. Information determinacy refers to the predictability of the order and timing of information. An engineering analysis program accepts data that have a predefined order, executes the analysis algorithm(s) without interruption, and produces resultant data in report or graphical format. Such applications are determinate. A multiuser operating system, on the other hand, accepts inputs that have varied content and arbitrary timing, executes algorithms that can be interrupted by external conditions, and produces output that varies as a function of environment and time. Applications with these characteristics are indeterminate.

It is somewhat difficult to develop meaningful generic categories for software applications. As software complexity grows, neat compartmentalization disappears. The following software areas indicate the breadth of potential applications:

1) System software. System software is a collection of programs written to service other programs. Some system software (e.g., compilers, editors, and file management utilities) process complex, but determinate, information structures. Other systems applications (e.g., operating system components, drivers, telecommunications processors) process largely indeterminate data. In either case, the system software area is characterized by heavy interaction with computer hardware; heavy usage by multiple users; concurrent operation that requires scheduling, resource sharing, and sophisticated process management; complex data structures; and multiple external interfaces.

2) Real-time software. Software that monitors/analyzes/controls real-world events as they occur is called real time. Elements of real-time software include a data gathering component that collects and formats information from an external environment, an analysis component that transforms information as required by the application, a control/output component that responds to the external environment, and a monitoring component that coordinates all other components so that real-time response (typically ranging from 1 millisecond to 1 second) can be maintained.

3) Business software. Business information processing is the largest single software application area. Discrete "systems" (e.g., payroll, accounts receivable/payable, inventory) have evolved into management information system (MIS) software that accesses one or more large databases containing business information. Applications in this area restructure existing data in a way that facilitates business operations or management decision making. In addition to conventional data processing applications, business software applications also encompass interactive computing (e.g., point-of-sale transaction processing).

4) Engineering and scientific software. Engineering and scientific software have been characterized by "number crunching" algorithms. Applications range from astronomy to volcanology, from automotive stress analysis to space shuttle orbital dynamics, and from molecular biology to automated manufacturing. However, modern applications within the engineering/scientific area are moving away from conventional numerical algorithms. Computer-aided design, system simulation, and other interactive applications have begun to take on real-time and even system software characteristics.

5) Embedded software. Intelligent products have become commonplace in nearly every consumer and industrial market. Embedded software resides in read-only memory and is used to control products and systems for the consumer and industrial markets. Embedded software can perform very limited and esoteric functions (e.g., keypad control for a microwave oven) or provide significant function and control capability (e.g., digital functions in an automobile such as fuel control, dashboard displays, and braking systems).

6) Personal computer software. The personal computer software market has burgeoned over the past two decades. Word processing, spreadsheets, computer graphics, multimedia, entertainment, database management, personal and business financial applications, and external network and database access are only a few of hundreds of applications.

7) Web-based software. The Web pages retrieved by a browser are software that incorporates executable instructions (e.g., CGI, HTML, Perl, or Java) and data (e.g., hypertext and a variety of visual and audio formats). In essence, the network becomes a massive computer providing an almost unlimited software resource that can be accessed by anyone with a modem.

8) Artificial intelligence software. Artificial intelligence (AI) software makes use of nonnumeric algorithms to solve complex problems that are not amenable to computation or straightforward analysis. Expert systems, also called knowledge-based systems, pattern recognition (image and voice), artificial neural networks, theorem proving, and game playing are representative of applications within this category.

How do we define software engineering? An early definition was proposed by Fritz Bauer: "[Software engineering is] the establishment and use of sound engineering principles in order to obtain economically software that is reliable and works efficiently on real machines." The IEEE [IEE93] has developed a more comprehensive definition: "Software Engineering: (1) the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is, the application of engineering to software. (2) The study of approaches as in (1)."

V. SOFTWARE ENGINEERING: A LAYERED TECHNOLOGY:


Software engineering is a layered technology. Any engineering approach (including software engineering) must rest on an organizational commitment to quality. Total quality management and similar philosophies foster a continuous process improvement culture, and this culture ultimately leads to the development of increasingly more mature approaches to software engineering. The bedrock that supports software engineering is a quality focus.

The foundation for software engineering is the process layer. The software engineering process is the glue that holds the technology layers together and enables rational and timely development of computer software. Process defines a framework for a set of key process areas (KPAs) [PAU93] that must be established for effective delivery of software engineering technology. The key process areas form the basis for management control of software projects and establish the context in which technical methods are applied, work products (models, documents, data, reports, forms, etc.) are produced, milestones are established, quality is ensured, and change is properly managed.

Software engineering methods provide the technical how-to's for building software. Methods encompass a broad array of tasks that include requirements analysis, design, program construction, testing, and support. Software engineering methods rely on a set of basic principles that govern each area of the technology and include modeling activities and other descriptive techniques.

Software engineering tools provide automated or semi-automated support for the process and the methods. When tools are integrated so that information created by one tool can be used by another, a system for the support of software development, called computer-aided software engineering (CASE), is established.
CASE combines software, hardware, and a software engineering database (a repository containing important information about analysis, design, program construction, and testing) to create a software engineering environment analogous to CAD/CAE (computer-aided design/engineering) for hardware.

Software engineering layers

PRACTICAL NO: 2 AIM: Overview of UML and Introduction to RATIONAL ROSE Interface.
THEORY:

What Is Visual Modeling?


If you were building a new addition to your house, you probably wouldn't start by just buying a bunch of wood and nailing it together until it looks about right. Similarly, you'd be more than a little concerned if the contractor doing the job decided to "wing it" and work without plans. You'd want some blueprints to follow so you can plan and structure the addition before you start working. Odds are, the addition will last longer this way. You wouldn't want the whole thing to come crashing down with the slightest rain.

Models do the same thing for us in the software world. They are the blueprints for systems. A blueprint helps you plan an addition before you build it; a model helps you plan a system before you build it. It can help you be sure the design is sound, the requirements have been met, and the system can withstand even a hurricane of requirement changes.

As you gather requirements for your system, you'll take the business needs of the users and map them into requirements that your team can use and understand. Eventually, you'll want to take these requirements and generate code from them. By formally mapping the requirements to the code, you can ensure that the requirements are actually met by the code, and that the code can easily be traced back to the requirements. This process is called modeling. The result of the modeling process is the ability to trace the business needs to the requirements to the model to the code, and back again, without getting lost along the way.

Visual modeling is the process of taking the information from the model and displaying it graphically using a standard set of graphical elements. A standard is vital to realizing one of the benefits of visual modeling: communication. Communication between users, developers, analysts, testers, managers, and anyone else involved with a project is the primary purpose of visual modeling. You could accomplish this communication using nonvisual (textual) information, but on the whole, humans are visual creatures. We seem to be able to understand complexity better when it is displayed to us visually as opposed to written textually.

By producing visual models of a system, we can show how the system works on several levels. We can model the interactions between the users and a system. We can model the interactions of objects within a system. We can even model the interactions between systems, if we so desire.

After creating these models, we can show them to all interested parties, and those parties can glean the information they find valuable from the model. For example, users can visualize the interactions they will make with the system from looking at a model. Analysts can visualize the interactions between objects from the models. Developers can visualize the objects that need to be developed and what each one needs to accomplish. Testers can visualize the interactions between objects and prepare test cases based on these interactions. Project managers can see the whole system and how the parts interact. And chief information Officers can look at highlevel models and see how systems in their organization interact with one another. All in all, visual models provide a powerful tool for showing the proposed system to all of the interested parties.

Systems of Graphical Notation


One important consideration in visual modeling is what graphical notation to use to represent various aspects of a system. Many people have proposed notations for visual modeling. Some of the popular notations that have strong support are Booch, Object Modeling Technique (OMT), and UML. Rational Rose supports these three notations; however, UML is a standard that has been adopted by the majority of the industry as well as standards governing bodies such as ANSI and the Object Management Group (OMG).

Booch Notation: The Booch method is named for its inventor, Grady Booch, at Rational Software Corporation. He has written several books discussing the needs and benefits of visual modeling, and has developed a notation of graphical symbols to represent various aspects of a model. For example, objects in this notation are represented by clouds, illustrating the fact that objects can be almost anything. Booch's notation also includes various arrows to represent the types of relationships between objects. Figure 1.1 is a sampling of the objects and relationships represented in the Booch notation.

Figure 1.1: Examples of symbols in the Booch notation

Object Modeling Technique (OMT): The OMT notation comes from Dr. James Rumbaugh, who has written several books about systems analysis and design. In an aptly titled book, Object-Oriented Modeling and Design (Prentice Hall, 1990), Rumbaugh discusses the importance of modeling systems in real-world components called objects. OMT uses simpler graphics than Booch to illustrate systems. A sampling of the objects and relationships represented in the OMT notation follows in Figure 1.2.

Figure 1.2: Examples of symbols in the OMT notation

Unified Modeling Language (UML): UML notation comes from a collaborative effort of Grady Booch, Dr. James Rumbaugh, Ivar Jacobson, Rebecca Wirfs-Brock, Peter Yourdon, and many others. Jacobson is a scholar who has written about capturing system requirements in packages of transactions called use cases. We will discuss use cases in detail in Chapter 4. Jacobson also developed a method for system design called Object-Oriented Software Engineering (OOSE) that focused on analysis. Booch, Rumbaugh, and Jacobson, commonly referred to as the "three amigos," all work at Rational Software Corporation and focus on the standardization and refinement of UML. UML symbols closely match those of the Booch and OMT notations, and also include elements from other notations. Figure 1.3 shows a sample of UML notation.

Figure 1.3: Examples of symbols in UML notation

Each of the three amigos of UML began to incorporate ideas from the other methodologies. Official unification of the methodologies continued until late 1995, when version 0.8 of the Unified Method was introduced. The Unified Method was refined and changed to the Unified Modeling Language in 1996. UML 1.0 was ratified and given to the Object Management Group (OMG) in 1997, and many major software development companies began adopting it. In 1997, OMG released UML 1.1 as an industry standard. Over the past years, UML has evolved to incorporate new ideas such as web-based systems and data modeling. The latest release is UML 1.3, which was ratified in 2000. The specification for UML 1.3 can be found at the Object Management Group's website, http://www.omg.org/.

Types of UML Diagrams


Each UML diagram is designed to let developers and customers view a software system from a different perspective and in varying degrees of abstraction. UML diagrams commonly created in visual modeling tools include:

Use Case Diagram: displays the relationship among actors and use cases.

Class Diagram: models class structure and contents using design elements such as classes, packages and objects. It also displays relationships such as containment, inheritance, associations and others.

Interaction Diagrams:
Sequence Diagram: displays the time sequence of the objects participating in the interaction. This consists of a vertical dimension (time) and a horizontal dimension (different objects).
Collaboration Diagram: displays an interaction organized around the objects and their links to one another. Numbers are used to show the sequence of messages.

State Diagram: displays the sequences of states that an object of an interaction goes through during its life in response to received stimuli, together with its responses and actions.

Activity Diagram: displays a special state diagram where most of the states are action states and most of the transitions are triggered by completion of the actions in the source states. This diagram focuses on flows driven by internal processing.

Physical Diagrams:
Component Diagram: displays the high-level packaged structure of the code itself. Dependencies among components are shown, including source code components, binary code components, and executable components. Some components exist at compile time, some at link time, some at run time, and some at more than one time.
Deployment Diagram: displays the configuration of run-time processing elements and the software components, processes, and objects that live on them. Software component instances represent run-time manifestations of code units.
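To connect the diagram types to code, the hypothetical classes below (all names invented for illustration) show the kind of structure a class diagram records: inheritance, association, and containment.

```python
# Hypothetical classes illustrating what a class diagram captures.
class Person:
    def __init__(self, name):
        self.name = name

class Professor(Person):            # inheritance: generalization arrow in UML
    pass

class Course:
    def __init__(self, title, teacher):
        self.title = title
        self.teacher = teacher      # association: plain line between classes

class Department:
    def __init__(self):
        self.courses = []           # containment/aggregation: diamond-ended line

    def offer(self, course):
        self.courses.append(course)

dept = Department()
dept.offer(Course("Software Engineering", Professor("Dr. Rao")))
print(dept.courses[0].teacher.name)  # Dr. Rao
```

A class diagram of this code would show Professor specializing Person, Course associated with Person, and Department aggregating Courses; a sequence diagram would instead show the `offer` message flowing between the objects at run time.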

UML Diagram Classification: Static, Dynamic, and Implementation

A software system can be said to have two distinct characteristics: a structural, "static" part and a behavioral, "dynamic" part. In addition to these two characteristics, a software system possesses a third characteristic related to implementation. Before we categorize UML diagrams into each of these three characteristics, let us take a quick look at exactly what these characteristics are.

Static: The static characteristic of a system is essentially the structural aspect of the system. The static characteristics define what parts the system is made up of. Dynamic: The behavioral features of a system; for example, the ways a system behaves in response to certain events or actions are the dynamic characteristics of a system. Implementation: The implementation characteristic of a system is an entirely new feature that describes the different elements required for deploying a system.

The UML diagrams that fall under each of these categories are:

Static
o Use case diagram
o Class diagram
Dynamic
o State diagram
o Activity diagram
o Sequence diagram
o Collaboration diagram
Implementation
o Component diagram
o Deployment diagram

Finally, let us take a look at the 4+1 view of UML diagrams.

4+1 View of UML Diagrams

UML diagrams can be used at different stages in the life cycle of a system, and the 4+1 view offers a perspective for classifying and applying them. The 4+1 view is essentially how a system can be viewed from a software life cycle perspective. Each of these views represents how a system can be modeled. This will enable us to understand where exactly the UML diagrams fit in and their applicability. These different views are:

Design View: The design view of a system is the structural view of the system. This gives an idea of what a given system is made up of. Class diagrams and object diagrams form the design view of the system.
Process View: The dynamic behavior of a system can be seen using the process view. The different diagrams such as the state diagram, activity diagram, sequence diagram, and collaboration diagram are used in this view.
Component View: Next, you have the component view that shows the grouped modules of a given system modeled using the component diagram.
Deployment View: The deployment diagram of UML is used to identify the deployment modules for a given system. This is the deployment view of the system.
Use Case View: Finally, we have the use case view. Use case diagrams of UML are used to view a system from this perspective as a set of discrete activities or transactions.

UML diagrams for the case study:

1. Use case diagram
2. Sequence Diagram for the Add a Course Scenario
3. Main Class Diagram for the University Artifacts Package
4. Course Reporting Class Diagram in the University Artifacts Package
5. Activity diagram

Rational Rose:
Rational Rose is an object-oriented Unified Modeling Language (UML) software design tool intended for visual modeling and component construction of enterprise-level software applications. In much the same way a theatrical director blocks out a play, a software designer uses Rational Rose to visually create (model) the framework for an application by blocking out

classes with actors (stick figures), use case elements (ovals), objects (rectangles) and messages/relationships (arrows) in a sequence diagram using drag-and-drop symbols. Rational Rose documents the diagram as it is being constructed and then generates code in the designer's choice of C++, Visual Basic, Java, Oracle8, CORBA or Data Definition Language.

Rational Rose is commercial CASE-tool software. It supports two essential elements of modern software engineering: component-based development and controlled iterative development. Models created with Rose can be visualized with several UML diagrams. Rose also supports round-trip engineering with several languages.

Two popular features of Rational Rose are its ability to provide iterative development and round-trip engineering. Rational Rose allows designers to take advantage of iterative development (sometimes called evolutionary development) because the new application can be created in stages, with the output of one iteration becoming the input to the next. (This is in contrast to waterfall development, where the whole project is completed from start to finish before a user gets to try it out.) Then, as the developer begins to understand how the components interact and makes modifications in the design, Rational Rose can perform what is called "round-trip engineering" by going back and updating the rest of the model to ensure the code remains consistent.

Summary of Rational Rose

Positive factors:
- The tool itself was quite easy to install.
- The creation of the different diagrams can be learned quite fast.
- Code generation is simple.
- C++ Analyzer was also easy to use (though its functionality could be included in Rose itself).

Negative factors:
- At first the tool seems to be quite complex.
- Some minor bugs were found.
- A separate tool had to be used (and learned) to reverse-engineer files.
- The layout manager could have been a bit more effective.
- Generated code was a bit obfuscated.
ROSE is Rational Object Oriented Software Engineering. Rational Rose is a set of visual modeling tools for development of object oriented software. Rose uses the UML to provide graphical methods for non-programmers wanting to model business processes as well as programmers modeling application logic. Rational Rose includes tools for reverse engineering as well as forward engineering of classes and component architectures. You can gain valuable insights to your actual constructed architecture and pinpoint deviations from the original design. Rose offers a fast way for clients and new employees to become familiar with system internals. RATIONAL ROSE INTERFACE: Parts of the Screen:

The five primary pieces of the Rose interface are the browser, the documentation window, the toolbars, the diagram window, and the log. In this section, we'll look at each of these. Briefly, their purposes are:

Browser: used to quickly navigate through the model.
Documentation window: used to view or update documentation of model elements.
Toolbars: used for quick access to commonly used commands.
Diagram window: used to display and edit one or more UML diagrams.
Log: used to view errors and report the results of various commands.

PRACTICAL NO: 3 AIM: To identify use cases and draw Use Case diagram for the given case study.
THEORY:
Rational Rose is the world's leading visual modeling tool. Business analysts can use Rational Rose to model and visualize business processes and highlight opportunities to increase efficiency. Data analysts can model database designs in Rational Rose, improving their communication with developers. And when you model use cases in Rational Rose, you can be sure your solution is being built for the user. Rational Rose unifies business, systems and data analysts by enabling them to create and manage models in one tool with one modeling language.

Use Cases

A use case is a scenario that describes the use of a system by an actor to accomplish a specific goal. What does this mean? An actor is a user playing a role with respect to the system. Actors are generally people, although other computer systems may be actors. A scenario is a sequence of steps that describe the interactions between an actor and the system. The use case model consists of the collection of all actors and all use cases. Use cases help us:

capture the system's functional requirements from the users' perspective actively involve users in the requirements-gathering process provide the basis for identifying major classes and their relationships serve as the foundation for developing system test cases Means of capturing requirements Document interactions between user(s) and the system User (actor) is not part of the system itself But an actor can be another system

An individual use case represents a task to be done with support from the system (thus it is a coherent unit of functionality)

The rectangle represents the system boundary, the stick figures represent the actors (users of the system), and the ovals represent the individual use cases. This notation alone tells us very little about the actual functionality of the system: a use case is actually defined as text, including descriptions of all of the normal and exception behaviour expected. Use cases do not reveal the structure of the system (i.e. how the system does what it does); collectively, they define the boundaries of the system to be implemented and provide the basis for defining development iterations. They should be written in the language of the user.

Creating Use-case in Rational Rose: Use Case View: Capturing Some Requirements
Expand the Use Case View and double click on Main to open the use case diagram window to get:

Using the toolbar, insert the use cases and actors into the main use case diagram to get:

Next, use the Unidirectional Association button on the toolbar to establish the following associations between the actors and use cases.

1. Open Rational Rose: Start --> Programs --> Programming Tools --> Rational Suite Development Studio --> Rational Rose Enterprise Edition.
2. You will be presented with a Create Model screen; cancel that and you will see something like this.

You cancel the Create Model screen since you are not creating any particular type of model. However, once you have a model, you can open an existing model by selecting the Existing tab from the Create New Model screen.

3. Since you are only interested in creating use cases, create a new use case by right-clicking on Use Case View, selecting New and then Use Case from the list of options (make sure that you don't select the Use Case Diagram option). Additional information about the Use Case View package can be entered via the Open Specification option.

4. Once you have created a use case, enter "Enter System" as its name, then right-click on it and select Open Specification. Enter a brief description of the use case in the Documentation area of the specification. Click OK.

5. Once the specification is filled out, you can add any type of diagram to this particular use case: a Class Diagram, Use Case Diagram, Activity Diagram, State Diagram, etc. As an example, a Use Case diagram has been added here showing a user of the computerized scheduling system logging in by entering a username and password.

6. To add a Use Case diagram, right-click on the use case name "Enter System" and select New Use Case Diagram. A white diagram sheet with its associated toolbox will appear in the right frame. Name this use case diagram "Login". If a diagram sheet does not appear, double-click on the newly created Use Case diagram.

7. Optional step: if the symbol for drawing an actor does not appear on the toolbar (situated vertically in the middle of the screen), right-click on the toolbar and click on the Customize link. The list box on the left contains the actor entity; add it to the right-hand side and click Close. All other missing components can be added using the Customize menu in the same way.

8. The next step is to draw the diagram. Select an actor and a use case from the toolbox.

Relationships among Use Cases

As mentioned earlier, you can have multiple use cases, and you may want to specify relationships between them. There are three types of relationships:

Include: An include relationship is a stereotyped relationship that connects a base use case to an inclusion use case. It specifies how behaviour in the inclusion use case is used by the base use case.

Extend: An extend relationship is a stereotyped relationship that specifies how the functionality of one use case can be inserted into the functionality of another use case.

Refine: A refine relationship is a stereotyped relationship that connects two or more model elements at different semantic levels or development stages.

Relations:
Association Relationship: An association provides a pathway for communication. The communication can be between use cases, actors, classes or interfaces. Associations are the most general of all relationships and consequently the most semantically weak. If two objects are usually considered independently, the relationship is an association. By default, the association tool on the toolbox is uni-directional and is drawn on a diagram with a single arrow at one end of the association; the end with the arrow indicates who or what is receiving the communication.
Bi-directional association: If you prefer, you can also customize the toolbox to add the bi-directional tool to the use-case toolbox.

Course Registration System Use-Case Model Main Diagram

PRACTICAL NO: 4 AIM: To create use case documents for the given case study.
THEORY: Use cases for case study: (Samples)
1. Login
1.1 Brief Description
This use case describes how a user logs into the Course Registration System.

1.2 Flow of Events
1.2.1 Basic Flow
This use case starts when the actor wishes to log into the Course Registration System.
1. The system requests that the actor enter his/her name and password.
2. The actor enters his/her name and password.
3. The system validates the entered name and password and logs the actor into the system.

1.2.2 Alternative Flows
1.2.2.1 Invalid Name/Password
If, in the Basic Flow, the actor enters an invalid name and/or password, the system displays an error message. The actor can choose to either return to the beginning of the Basic Flow or cancel the login, at which point the use case ends.

1.3 Special Requirements
None.

1.4 Pre-Conditions
None.

1.5 Post-Conditions
If the use case was successful, the actor is now logged into the system. If not, the system state is unchanged.

1.6 Extension Points

None.
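The Login flow above can be sketched in code. This is an illustrative sketch only, not part of the case study: the credential store and the returned status strings are assumed names.

```python
# Hypothetical sketch of the Login use case: basic flow (1.2.1) validates
# the name/password; alternative flow (1.2.2.1) reports an error.

USERS = {"alice": "secret123"}  # stand-in for the system's credential store

def login(name: str, password: str) -> str:
    """Validate the entered name and password (basic flow)."""
    if USERS.get(name) == password:
        return "logged-in"                      # post-condition 1.5 holds
    return "error: invalid name/password"       # alternative flow 1.2.2.1

print(login("alice", "secret123"))  # basic flow succeeds
print(login("alice", "wrong"))      # alternative flow: error message
```

On the error path the actor would then choose to retry or cancel, which is left out of this sketch.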

2. Register for Courses


2.1 Brief Description
This use case allows a Student to register for course offerings in the current semester. The Student can also update or delete course selections if changes are made within the add/drop period at the beginning of the semester. The Course Catalog System provides a list of all the course offerings for the current semester.

2.2 Flow of Events
2.2.1 Basic Flow
This use case starts when a Student wishes to register for course offerings, or to change his/her existing course schedule.
1. The system requests that the Student specify the function he/she would like to perform (either Create a Schedule, Update a Schedule, or Delete a Schedule).
2. Once the Student provides the requested information, one of the subflows is executed. If the Student selected Create a Schedule, the Create a Schedule subflow is executed. If the Student selected Update a Schedule, the Update a Schedule subflow is executed. If the Student selected Delete a Schedule, the Delete a Schedule subflow is executed.

2.2.1.1 Create a Schedule
1. The system retrieves a list of available course offerings from the Course Catalog System and displays the list to the Student.
2. The Student selects 4 primary course offerings and 2 alternate course offerings from the list of available offerings.
3. Once the Student has made his/her selections, the system creates a schedule for the Student containing the selected course offerings.
4. The Submit Schedule subflow is executed.

2.2.1.2 Update a Schedule
1. The system retrieves and displays the Student's current schedule (i.e., the schedule for the current semester).
2. The system retrieves a list of available course offerings from the Course Catalog System and displays the list to the Student.
3. The Student may update the course selections on the current schedule by deleting and adding course offerings. The Student selects the course offerings to add from the list of available course offerings. The Student also selects any course offerings to delete from the existing schedule.
4. Once the Student has made his/her selections, the system updates the schedule for the Student using the selected course offerings.
5. The Submit Schedule subflow is executed.

2.2.1.3 Delete a Schedule
1. The system retrieves and displays the Student's current schedule (i.e., the schedule for the current semester).
2. The system prompts the Student to confirm the deletion of the schedule.
3. The Student confirms the deletion.

4. The system deletes the schedule. If the schedule contains course offerings marked as "enrolled in", the Student must be removed from those course offerings.

2.2.1.4 Submit Schedule
For each selected course offering on the schedule not already marked as "enrolled in", the system verifies that the Student has the necessary prerequisites, that the course offering is open, and that there are no schedule conflicts. The system then adds the Student to the selected course offering. The course offering is marked as "enrolled in" in the schedule. The schedule is saved in the system.

2.2.2 Alternative Flows
2.2.2.1 Save a Schedule
At any point, the Student may choose to save a schedule rather than submitting it. If this occurs, the Submit Schedule step is replaced with the following: the course offerings not marked as "enrolled in" are marked as "selected" in the schedule. The schedule is saved in the system.

2.2.2.2 Unfulfilled Prerequisites, Course Full, or Schedule Conflicts
If, in the Submit Schedule subflow, the system determines that the Student has not satisfied the necessary prerequisites, or that the selected course offering is full, or that there are schedule conflicts, an error message is displayed. The Student can either select a different course offering (and the use case continues), save the schedule as is (see the Save a Schedule subflow), or cancel the operation, at which point the Basic Flow is restarted at the beginning.

2.2.2.3 No Schedule Found
If, in the Update a Schedule or Delete a Schedule subflows, the system is unable to retrieve the Student's schedule, an error message is displayed. The Student acknowledges the error, and the Basic Flow is restarted at the beginning.

2.2.2.4 Course Catalog System Unavailable
If the system is unable to communicate with the Course Catalog System, the system displays an error message to the Student. The Student acknowledges the error message, and the use case terminates.

2.2.2.5 Course Registration Closed
When the use case starts, if it is determined that registration for the current semester has been closed, a message is displayed to the Student, and the use case terminates. Students cannot register for course offerings after registration for the current semester has been closed.

2.2.2.6 Delete Cancelled
If, in the Delete a Schedule subflow, the Student decides not to delete the schedule, the delete is cancelled, and the Basic Flow is restarted at the beginning.

2.3 Special Requirements

None.

2.4 Pre-Conditions
The Student must be logged onto the system before this use case begins.
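The three checks of the Submit Schedule subflow (prerequisites satisfied, offering open, no schedule conflicts) can be sketched as code. All the data-structure and function names here are illustrative assumptions, not part of the case study.

```python
# Hypothetical sketch of the Submit Schedule checks for one course offering.

def can_enroll(courses_taken, offering, schedule_times):
    """Return (ok, reason) for enrolling a Student in one course offering."""
    if not offering["prerequisites"] <= courses_taken:
        return False, "unfulfilled prerequisites"   # alternative flow 2.2.2.2
    if offering["enrolled"] >= offering["capacity"]:
        return False, "course full"                 # alternative flow 2.2.2.2
    if offering["time"] in schedule_times:
        return False, "schedule conflict"           # alternative flow 2.2.2.2
    return True, "enrolled in"                      # marked in the schedule

offering = {"prerequisites": {"CS101"}, "enrolled": 29, "capacity": 30,
            "time": "Mon 10:00"}
ok, reason = can_enroll({"CS101"}, offering, {"Tue 09:00"})
```

In the use case, a failed check displays an error message and lets the Student pick a different offering, save the schedule, or cancel.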

PRACTICAL NO: 5 AIM: To study Software Requirement Specification template.


THEORY:

Student Course Registration System


REQUIREMENTS REPORT

CENG XXX Fall 2001

October xx, 20XX

Ayşe Güzel, Mehmet Gül

Computer Engineering Department Middle East Technical University

Student Course Registration System Requirements Analysis Report


date

1. Introduction
1.1 Purpose of this Document
1.2 Scope of this Document
1.3 Overview
1.4 Business Context

2. General Description
2.1 Product Functions
2.2 Similar System(s) Information
2.3 User Characteristics
2.4 User Problem Statement
2.5 User Objectives
2.6 General Constraints

3. Functional Requirements
Descriptions of what the system should accomplish, not how.
3.1 Registration {Top-level functions are called: Capability}
Description of the top-level system function (Capability).
USE CASE DIAGRAM
3.1.1 Student Log-in {This is a System Function}
Corresponds to a USE CASE (one oval in the use case diagram). Draw a COLLABORATION diagram. A description paragraph, followed by numbered requirements: one sentence each!
3.1.1.1 Every student has a Username
3.1.1.2 Usernames are unique

IEEE suggests that every requirement come with (include only if appropriate):
1. Description
2. Criticality
3. Technical issues
4. Cost and schedule
5. Risks
6. Dependencies with other requirements
7. Others as appropriate

3.1.2 Student Authentication

4. Interface Requirements
4.1 User Interfaces
4.1.1 GUI {Screen Dumps}
4.1.2 Command Line Interface
4.1.3 Application Programming Interface
4.1.4 Diagnostic Interface
4.2 Hardware Interfaces
4.3 Communications Interfaces
4.4 Software Interfaces

5. Performance Requirements

6. Design Constraints
6.1 Standards Compliance

7. Other Non-Functional Attributes


1. Security
2. Binary Compatibility
3. Reliability
4. Maintainability
5. Portability
6. Extensibility
7. Reusability
8. Application Affinity/Compatibility
9. Resource Utilization
10. Serviceability
11. --- others ---

8.Preliminary Object-Oriented Analysis


8.1 Inheritance Relationships 8.2 Class Descriptions
IEEE suggests that each class be described as follows:
8.2.1 Student
8.2.1.1 Abstract or Concrete:
8.2.1.2 List of Superclasses:
8.2.1.3 List of Subclasses:
8.2.1.4 Purpose:
8.2.1.5 Collaborations: {other classes}
8.2.1.6 Attributes:
8.2.1.7 Operations:
8.2.1.8 Constraints: {on general state and behavior}

9. Operational Scenarios {A proposed specific use}
10. Preliminary Schedule
11. Preliminary Budget
12. Appendices
12.1 Definitions, Acronyms, Abbreviations
12.2 References
--- others ---
{NOTE: Titles have to be written, even if there is nothing to write under them}

PRACTICAL NO: 6 AIM: Detailed analysis and design of Mini Project (SRS).
THEORY:
Note: Students are expected to create SRS document for their Mini project

PRACTICAL NO: 7

AIM: To study E-R Diagram and Data Flow Diagrams.


THEORY:
Entity-Relationship Diagrams (ERD):
Data models are tools used in analysis to describe the data requirements and assumptions in the system from a top-down perspective. They also set the stage for the design of databases later on in the SDLC. There are three basic elements in ER models:

- Entities are the "things" about which we seek information.
- Attributes are the data we collect about the entities.
- Relationships provide the structure needed to draw information from multiple entities.

Generally, an ERD looks like this:

Developing an ERD
Developing an ERD requires an understanding of the system and its components.

Consider a hospital: Patients are treated in a single ward by the doctors assigned to them. Usually each patient will be assigned a single doctor, but in rare cases they will have two. Healthcare assistants also attend to the patients; a number of these are associated with each ward. Initially the system will be concerned solely with drug treatment. Each patient is required to take a variety of drugs a certain number of times per day and for varying lengths of time. The system must record details concerning patient treatment and staff payment. Some staff are paid part-time, and doctors and care assistants work varying amounts of overtime at varying rates (subject to grade). The system will also need to track what treatments are required for which patients and when, and it should be capable of calculating the cost of treatment per week for each patient (though it is currently unclear to what use this information will be put).

How do we start an ERD?


1. Define entities: these are usually nouns used in descriptions of the system, in the discussion of business rules, or in documentation; they can be identified in the narrative (see the highlighted items above).
2. Define relationships: these are usually verbs used in descriptions of the system or in discussion of the business rules (entity ______ entity); they can also be identified in the narrative.

3. Add attributes to the relations; these are determined by the queries, and may also suggest new entities (e.g. grade), or they may suggest the need for keys or identifiers. What questions can we ask?
   a. Which doctors work in which wards?
   b. How much will be spent in a ward in a given week?
   c. How much will a patient cost to treat?
   d. How much does a doctor cost per week?
   e. Which assistants can a patient expect to see?
   f. Which drugs are being used?
4. Add cardinality to the relations. A many-to-many relationship must be resolved into two one-to-many relationships with an additional entity; this sometimes involves the introduction of a link entity (which will be all foreign keys). Example: Patient-Drug.
5. This flexibility allows us to consider a variety of questions such as:
   a. Which beds are free?
   b. Which assistants work for Dr. X?
   c. What is the least expensive prescription?
   d. How many doctors are there in the hospital?
   e. Which patients are family-related?
6. Represent that information with symbols. Generally, E-R diagrams require the use of the following symbols:

Reading an ERD
It takes some practice reading an ERD, but ERDs can be used with clients to discuss business rules. They allow us to represent the information from above, as in the E-R diagram below:

An ERD brings out issues:
1. Many-to-many relationships.
2. Ambiguities.
3. Entities and their relationships.
4. What data needs to be stored.
5. The degree of a relationship.
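The many-to-many Patient-Drug relationship noted above can be sketched in code by introducing a link entity; the name Prescription and all attribute names below are illustrative choices, not part of the case study.

```python
# Hypothetical sketch: resolving Patient *--* Drug into two one-to-many
# relationships via a link entity ("Prescription"), which holds only
# foreign keys plus the relationship's own attributes.

patients = {1: "Smith", 2: "Jones"}
drugs = {"D1": "Aspirin", "D2": "Insulin"}

# Link entity: one record per (patient, drug) pair.
prescriptions = [
    {"patient_id": 1, "drug_id": "D1", "doses_per_day": 2, "days": 7},
    {"patient_id": 1, "drug_id": "D2", "doses_per_day": 1, "days": 30},
    {"patient_id": 2, "drug_id": "D1", "doses_per_day": 3, "days": 5},
]

# "Which drugs are being used?" becomes a traversal of the link entity.
drugs_in_use = sorted({p["drug_id"] for p in prescriptions})
```

Queries such as "which drugs does patient 1 take?" follow the same pattern: filter the link entity by one foreign key and look up the other.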

Now, think about a university in terms of an ERD. What entities, relationships and attributes might you consider? Look at this simplified view. There is also an example of a simplified view of an airline on that page.

Data Flow Diagrams (DFD)
DFDs provide a graphical notation for capturing the flow of data and operations involved in an information system. However, they lack precise semantics. A prototype to test whether specifications reflect the users' expectations cannot be derived directly from a DFD, since no machine execution is possible without precise semantics for the notation. The syntax, i.e. the way of composing bubbles, arrows, and boxes, is defined precisely, but the semantics of DFDs is not; therefore DFDs provide a semiformal notation for specifying systems.

Specification Styles
- Informal specifications are written in a natural language.
- Semi-formal specifications use a notation with precise syntax but imprecise semantics.
- Formal specifications are written using a notation that has precise syntax and semantics (meaning).
- Operational specifications describe the desired behaviour of the system.
- Descriptive specifications state desired properties of the system.

Operational specifications: 1. Data Flow Diagrams 2. Finite State Machines 3. Petri nets
Descriptive specifications: 1. Entity-Relationship Diagrams 2. Logic Specifications 3. Algebraic Specifications

External Entity

Process

Output

Importance of DFDs in a good software design
The main reason the DFD technique is so popular is probably that DFD is a very simple formalism: it is simple to understand and use. Starting with a set of high-level functions that a system performs, a DFD model hierarchically represents various sub-functions. In fact, any hierarchical model is simple to understand, because it starts with a very simple and abstract model of the system and slowly introduces different details through the different hierarchies. The data flow diagramming technique also follows a very simple set of intuitive concepts and rules. DFD is an elegant modeling technique that turns out to be useful not only for representing the results of structured analysis of a software problem, but also for several other applications, such as showing the flow of documents or items in an organization.

Data dictionary
A data dictionary lists all data items appearing in the DFD model of a system. The data items listed include all data flows and the contents of all data stores appearing on the DFDs in the DFD model of a system. A data dictionary lists the purpose of all data items and the definition of all composite data items in terms of their component data items. For example, a data dictionary entry may state that the data item grossPay consists of the components regularPay and overtimePay:

grossPay = regularPay + overtimePay

For the smallest units of data items, the data dictionary lists their name and type. Composite data items can be defined in terms of primitive data items using the following data definition operators:

+ : denotes composition of two data items, e.g. a+b represents data a and b.
[,,] : represents selection, i.e. any one of the data items listed in the brackets can occur. For example, [a,b] represents either a or b.
( ) : the contents inside the brackets represent optional data which may or may not appear, e.g. a+(b) represents either a or a+b.
{ } : represents iterative data definition, e.g. {name}5 represents five name data items; {name}* represents zero or more instances of name data.
= : represents equivalence, e.g. a = b+c means that a represents b and c.
/* */ : anything appearing within /* and */ is treated as a comment.
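The grossPay example above can be sketched as a small data dictionary in code, covering only the composition (+) operator; the structure and field names are illustrative assumptions.

```python
# Hypothetical sketch of a data dictionary: composite items are defined in
# terms of their components (the '+' composition operator), primitive items
# carry only a type.

data_dictionary = {
    "grossPay": {"type": "composite",
                 "components": ["regularPay", "overtimePay"]},
    "regularPay": {"type": "number"},
    "overtimePay": {"type": "number"},
}

def compute(name, values):
    """Evaluate an item: composites sum their components recursively."""
    entry = data_dictionary[name]
    if entry["type"] == "composite":
        return sum(compute(c, values) for c in entry["components"])
    return values[name]

gross = compute("grossPay", {"regularPay": 3000, "overtimePay": 450})
```

A real data dictionary would also record each item's purpose and support the selection, optional, and iteration operators listed above.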

PRACTICAL NO: 8 AIM: To draw Use Case Diagram and E-R diagram for Mini project.
THEORY: Note: Students are expected to draw Use Case Diagram and E-R Diagram for their Mini project

PRACTICAL NO: 9 AIM: To study and draw UML Class diagrams.


THEORY:
CLASS DIAGRAM:
Introduction: UML stands for Unified Modeling Language. It represents a unification of the concepts and notations presented by the three amigos in their respective books; the goal is for UML to become a common language for creating models of object-oriented computer software. In its current form, UML comprises two major components: a meta-model and a notation. In the future, some form of method or process may also be added to, or associated with, UML.

The Meta-model: UML is unique in that it has a standard data representation, called the meta-model. The meta-model is a description of UML in UML. It describes the objects, attributes, and relationships necessary to represent the concepts of UML within a software application. This provides CASE manufacturers with a standard and unambiguous way to represent UML models, which should allow easy transport of UML models between tools. It may also make it easier to write ancillary tools for browsing, summarizing, and modifying UML models. A deeper discussion of the meta-model is beyond the scope of this manual; interested readers can learn more about it by downloading the UML documents from the Rational web site.

The Notation: The UML notation is rich and full-bodied. It comprises two major subdivisions: a notation for modeling the static elements of a design, such as classes, attributes, and relationships, and a notation for modeling the dynamic elements of a design, such as objects, messages, and finite state machines. Here we present some aspects of the static modeling notation. Static models are presented in diagrams called class diagrams.

Class Diagrams: The purpose of a class diagram is to depict the classes within a model. In an object-oriented application, classes have attributes (member variables), operations (member functions) and relationships with other classes. A class icon is simply a rectangle divided into three compartments. The topmost compartment contains the name of the class. The middle compartment contains a list of attributes (member variables), and the bottom compartment contains a list of operations (member functions). In many diagrams, the bottom two compartments are omitted; even when they are present, they typically do not show every attribute and operation. The goal is to show only those attributes and operations that are useful for the particular diagram. Notice that each member variable is followed by a colon and by the type of the variable. If the type is redundant, or otherwise unnecessary, it can be omitted. Notice also that the return values follow the member functions in a similar fashion, and again these can be omitted. Finally, notice that the member function arguments are just types.

1. Composition Relationships:

The black diamond represents composition. It is placed on the Circle class because it is the Circle that is composed of a Point. The arrowhead on the other end of the relationship denotes that the relationship is navigable in only one direction; that is, Point does not know about Circle. Composition relationships are a strong form of containment or aggregation. Composition also indicates that the lifetime of Point is dependent upon Circle: if Circle is destroyed, Point will be destroyed with it.

2. Inheritance: The inheritance relationship in UML is depicted by a peculiar triangular arrowhead. This arrowhead, which looks rather like a slice of pizza, points to the base class. One or more lines proceed from the base of the arrowhead, connecting it to the derived classes.
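The composition and inheritance relationships above can be sketched in code. The Circle and Point names come from the diagram; the Shape base class and the method names are illustrative assumptions (the manual works in UML, not in any particular language).

```python
# Sketch of composition (Circle owns a Point) and inheritance
# (Circle IS-A Shape). Names other than Circle/Point are hypothetical.

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y   # Point knows nothing about Circle

class Shape:                    # base class; the UML arrowhead points here
    def area(self):
        raise NotImplementedError

class Circle(Shape):            # inheritance: derived class
    def __init__(self, x, y, radius):
        # Composition: Circle creates and owns its Point; the Point's
        # lifetime ends with the Circle's.
        self.center = Point(x, y)
        self.radius = radius

    def area(self):
        return 3.14159 * self.radius ** 2

c = Circle(0, 0, 2)
```

When the Circle object is garbage-collected, its Point goes with it, which mirrors the lifetime dependency that the black diamond expresses.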

Aggregation / Association: The weak form of aggregation is denoted with an open diamond. This relationship denotes that the aggregate class (the class with the white diamond touching it) is in some way the whole, and the other class in the relationship is somehow part of that whole.

Aggregation

The Window class contains many Shape instances. In UML, the ends of a relationship are referred to as its roles. Notice that the role at the Shape end of the aggregation is marked with a *; this indicates that the Window contains many Shape instances. Notice also that the role has been named: it is the name of the instance variable within Window that holds all the Shapes. An association is nothing but a line drawn between the participating classes. In the figure above, the association has an arrowhead to denote that Frame does not know anything about Window. The difference between an aggregation and an association is one of implication: aggregation denotes whole/part relationships whereas associations do not. However, there is not likely to be much difference in the way that the two relationships are implemented; that is, it would be very difficult to look at the code and determine whether a particular relationship ought to be aggregation or association. For this reason, it is pretty safe to ignore the aggregation relationship altogether.
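The Window/Shape aggregation above might look like this in code. The role name itsShapes and the method names are illustrative assumptions.

```python
# Sketch of weak aggregation: Window holds many Shapes (the '*' role) but
# does not control their lifetimes - the Shapes are created elsewhere.

class Shape:
    pass

class Window:
    def __init__(self):
        self.itsShapes = []          # the named role at the Shape end

    def add(self, shape):
        self.itsShapes.append(shape) # merely references the shape

s1, s2 = Shape(), Shape()
w = Window()
w.add(s1)
w.add(s2)
# s1 and s2 can outlive w: deleting the Window does not destroy the Shapes,
# which is exactly what distinguishes aggregation from composition.
```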

Association

How to construct a Class diagram in Rational Rose SE:

Class Diagram for the case study:

PRACTICAL NO: 10 AIM: To study and draw Activity diagrams.


THEORY:
Activity Diagrams: An activity diagram is a way to model the workflow of a use case in graphical form. The diagram shows the steps in the workflow, the decision points in the workflow, who is responsible for completing each step, and the objects that are affected by the workflow. An activity is simply a step in the workflow: a task that a business worker performs. Within an activity, you can list the actions that occur for that activity. Actions are simply steps within the activity. The arrows connecting the activities are known as transitions. A transition lets you know which activity is performed once the current activity has completed. We can place guard conditions on the transitions to show when the transition occurs. Guard conditions are placed in square brackets. In this example, the activity "create rejection letter" is only performed if the guard condition [missing documentation] is true. The horizontal bars are called synchronizations; they let you know that two or more activities occur simultaneously. In Rose, you can use an activity diagram to model the workflow through a particular business use case. The main elements on an activity diagram are:

- Swimlanes, which show who is responsible for performing the tasks on the diagram.
- Activities, which are steps in the workflow.
- Actions, which are steps within an activity. Actions may occur when entering the activity, exiting the activity, while inside the activity, or upon a specific event.
- Business objects, which are entities affected by the workflow.
- Transitions, which show how the workflow moves from one activity to another.
- Decision points, which show where a decision needs to be made during the workflow.
- Synchronizations, which show when two or more steps in the workflow occur simultaneously.

- The start state, which shows where the workflow begins.
- The end state, which shows where the workflow ends.
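A decision point with guard conditions, like the "create rejection letter" example above, can be sketched as code. The second branch's label is an assumption, since the manual names only the [missing documentation] guard.

```python
# Sketch of a decision point: each outgoing transition fires only when its
# guard condition (the bracketed text on the diagram) is true.

def process_application(missing_documentation: bool) -> str:
    if missing_documentation:        # guard: [missing documentation]
        return "create rejection letter"
    return "approve application"     # assumed label for the other branch

print(process_application(True))   # the guarded transition fires
```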

Activity diagrams for the case study:

PRACTICAL NO: 11 AIM: To study Manual/Automated testing.


THEORY:
Manual testing is the oldest and most rigorous type of software testing. It requires a tester to perform test operations on the software under test without the help of test automation. Manual testing is a laborious activity that requires the tester to possess a certain set of qualities: to be patient, observant, speculative, creative, innovative, open-minded, resourceful and skillful.

Manual testing is the process of manually testing software for defects. It requires a tester to play the role of an end user and use most or all features of the application to ensure correct behaviour. To ensure completeness of testing, the tester often follows a written test plan that leads them through a set of important test cases. A typical process is:

1. Choose a high-level test plan, in which a general methodology is chosen and resources such as people, computers, and software licenses are identified and acquired.
2. Write detailed test cases, identifying clear and concise steps to be taken by the tester, with expected outcomes.
3. Assign the test cases to testers, who manually follow the steps and record the results.
4. Author a test report detailing the findings of the testers. The report is used by managers to determine whether the software can be released and, if not, by engineers to identify and correct the problems.

A rigorous test-case-based approach is traditional for large software engineering projects that follow a waterfall model. However, at least one recent study did not show a dramatic difference in defect detection efficiency between exploratory testing and test-case-based testing.
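One of the detailed test cases written in step 2 above could be recorded as a simple data structure; the field names are illustrative, not a prescribed format.

```python
# Sketch of a written manual test case: clear steps with expected outcomes,
# plus a field for the result the tester records in step 3.

test_case = {
    "id": "TC-01",
    "title": "Login with valid credentials",
    "steps": [
        ("Open the login screen", "Login form is displayed"),
        ("Enter a valid name and password", "Fields accept the input"),
        ("Press Login", "User is logged into the system"),
    ],
    "result": None,               # to be filled in by the tester
}

test_case["result"] = "pass"      # recorded after manual execution
```

The test report of step 4 is then an aggregation of many such records.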

Comparison to Automated Testing


Test automation is the technique of testing software using software rather than people: a test program is written that exercises the application and identifies its defects. These test programs may be written from scratch, or built on a generic test automation framework purchased from a third-party vendor. Test automation can take over the sometimes menial and time-consuming task of following the steps of a use case and reporting the results, and it may reduce or eliminate the cost of actual testing. A computer can follow a rote sequence of steps more quickly than a person, and it can run the tests overnight and present the results in the morning. However, the labour saved in actual testing must be spent instead on authoring the test program. Depending on the type of application to be tested and the automation tools chosen, this may require more labour than a manual approach. In addition, some testing tools produce a very large amount of data, potentially creating a time-consuming task of interpreting the results.

From a cost-benefit perspective, test automation becomes more cost-effective when the same tests can be reused many times over, such as in regression testing and test-driven development, and when the results can be interpreted quickly. If future reuse of the test software is unlikely, a manual approach is preferred.

From the perspective of practicality, software that does not have a graphical user interface tends to be tested by automatic methods. Things such as device drivers and software libraries must be tested using test programs. In addition, testing with large numbers of users (performance testing and load testing) is typically simulated in software rather than performed in practice. Conversely, graphical user interfaces whose layout changes frequently are very difficult to test automatically. There are test frameworks that can be used for regression testing of user interfaces: they record sequences of keystrokes and mouse gestures, then play them back and check that the user interface responds in the same way every time. Unfortunately, these recordings may stop working when a button is moved or relabelled in a subsequent release. An automatic regression test may also be fooled if the program output varies significantly (e.g. the display includes the current system time). In such cases, manual testing may be more effective.

Advantage of manual testing:
There is no complete substitute for manual testing, which remains crucial for the thorough testing of software applications. Although ways of automating this process have been available for over 20 years, automation is often not appropriate or convenient; it can only be justified where repeatable, consistent tests can be run over a stable environment. When this is not the case (for example, during the early stages of the test cycle, or when the application is complicated and the business risk is large), testing teams almost always revert to manual testing.
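The load-testing idea mentioned above (simulating many users in software rather than recruiting real people) can be sketched in a few lines. This is an illustrative toy, not a real load-testing tool; `fake_request` and the user count are placeholders standing in for a real networked operation.

```python
import threading
import time

def simulate_virtual_users(action, num_users=50):
    """Run `action` concurrently from many threads, one per virtual user,
    collecting each user's observed latency."""
    latencies = []
    lock = threading.Lock()

    def user():
        start = time.perf_counter()
        action()
        elapsed = time.perf_counter() - start
        with lock:                     # guard the shared results list
            latencies.append(elapsed)

    threads = [threading.Thread(target=user) for _ in range(num_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return latencies

def fake_request():
    """Placeholder for a real network call to the system under test."""
    time.sleep(0.01)

results = simulate_virtual_users(fake_request, num_users=50)
print(len(results))  # one latency sample per virtual user
```

A real tool would replace `fake_request` with actual HTTP or protocol traffic and report aggregate statistics (throughput, percentile latencies) rather than raw samples.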

Automated testing:
Test automation is the use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions. Commonly, test automation involves automating an existing manual process that already follows a formalized testing procedure.

Overview:
Although manual tests may find many defects in a software application, manual testing is a laborious and time-consuming process, and it may not be effective in finding certain classes of defects. Test automation is the process of writing a computer program to do testing that would otherwise need to be done manually. Once tests have been automated, they can be run quickly and repeatedly. This is often the most cost-effective method for software products with a long maintenance life, because even minor patches over the lifetime of the application can break features that were working at an earlier point in time. There are two general approaches to test automation:

Code-driven testing: The public (usually) interfaces to classes, modules, or libraries are tested with a variety of input arguments to validate that the results returned are correct.

Graphical user interface testing: A testing framework generates user interface events such as keystrokes and mouse clicks and observes the resulting changes in the user interface, to validate that the observable behaviour of the program is correct.

Test automation tools can be expensive, so automation is usually employed in combination with manual testing. It can be made cost-effective in the longer term, especially when used repeatedly in regression testing. One way to generate test cases automatically is model-based testing, which uses a model of the system for test case generation; research continues into a variety of alternative methodologies. What to automate, when to automate, or even whether automation is really needed are crucial decisions which the testing (or development) team must make. Selecting the correct features of the product for automation largely determines the success of the effort; automating unstable features or features that are undergoing change should be avoided.

Code-driven testing:
A growing trend in software development is the use of testing frameworks such as the xUnit family (for example, JUnit and NUnit) that allow unit tests to be executed to determine whether various sections of the code behave as expected under various circumstances. Test cases describe the tests that need to be run on the program to verify that it runs as expected. Code-driven test automation is a key feature of agile software development, where it is known as test-driven development (TDD). Unit tests are written to define the functionality before the code is written; only when all tests pass is the code considered complete.
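The xUnit style described above can be illustrated with Python's built-in unittest module. The `apply_discount` function and its test cases are invented for this sketch; in TDD the test class would be written first and the function then made to pass it.

```python
import unittest

def apply_discount(price, percent):
    """Function under test: reduce a price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class DiscountTests(unittest.TestCase):
    """xUnit-style tests: each method exercises one behaviour."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_returns_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)

if __name__ == "__main__":
    # exit=False lets the script continue after the test run
    unittest.main(argv=["discount_tests"], exit=False)
```

Run constantly during development, such a suite catches a regression the moment a change breaks one of the defined behaviours.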
Proponents argue that TDD produces software that is both more reliable and less costly than code tested by manual exploration. It is considered more reliable because code coverage is better, and because the tests run constantly during development rather than once at the end of a waterfall development cycle: the developer discovers defects immediately upon making a change, when they are least expensive to fix. Finally, code refactoring is safer; transforming the code into a simpler form with less duplication but equivalent behaviour is much less likely to introduce new defects.

Graphical User Interface (GUI) testing:
Many test automation tools provide record-and-playback features that let users interactively record user actions and replay them any number of times, comparing actual results to those expected. The advantage of this approach is that it requires little or no software development, and it can be applied to any application that has a graphical user interface. However, reliance on these features poses major reliability and maintainability problems: relabelling a button or moving it to another part of the window may require the test to be re-recorded, and record-and-playback often captures irrelevant activities or records some activities incorrectly. A variation on this type of tool targets web sites; here, the "interface" is the web page. This also requires little or no software development, but such a framework uses entirely different techniques, because it reads HTML instead of observing window events.

Another variation is scriptless test automation, which does not use record and playback but instead builds a model of the application under test and lets the tester create test cases simply by editing test parameters and conditions. This requires no scripting skills, yet has all the power and flexibility of a scripted approach. Test-case maintenance is easy: there is no code to maintain, and as the application under test changes the software objects can simply be re-learned or added. It can be applied to any GUI-based software application.

What to test?
Testing tools can help automate tasks such as product installation, test data creation, GUI interaction, problem detection (consider parsing or polling agents equipped with oracles), and defect logging, without necessarily automating tests in an end-to-end fashion. The following popular requirements should be kept in mind when considering test automation:

Platform and OS independence.
Data-driven capability (input data, output data, metadata).
Customizable reporting (database access, Crystal Reports).
Easy debugging and logging.
Version-control friendliness: minimum or zero binary files.
Extensible and customizable (open APIs to integrate with other tools).
Common driver (for example, in the Java development ecosystem, Ant or Maven and the popular IDEs); this enables tests to integrate with the developers' workflows.
Support for unattended test runs, for integration with build processes and batch runs; continuous integration servers require this.
Email notifications (automated notification on failure or threshold levels), whether from the test runner or the tooling that executes it.
Support for a distributed execution environment (distributed test bed).
Distributed application support (distributed SUT).
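The record-and-playback approach described earlier can be reduced to a toy sketch. The "application" below is just a dictionary mapping UI events to responses, standing in for a real window system; a commercial tool would capture and replay real keystrokes and clicks instead.

```python
def toy_app(event):
    """Hypothetical application under test: responds to named UI events."""
    responses = {
        "click:save": "Document saved",
        "click:open": "Open dialog shown",
        "key:ctrl+z": "Undo performed",
    }
    return responses.get(event, "No response")

def record(events):
    """Recording phase: capture each event with the response it produced."""
    return [(event, toy_app(event)) for event in events]

def playback(recording):
    """Playback phase: re-send each event and flag any response that changed."""
    failures = []
    for event, expected in recording:
        actual = toy_app(event)
        if actual != expected:
            failures.append((event, expected, actual))
    return failures

session = record(["click:open", "key:ctrl+z", "click:save"])
print(playback(session))  # empty list while the app's behaviour is unchanged
```

The fragility discussed above shows up naturally in this model: if a later release renames the `"click:save"` event, playback reports a failure even though the application may still work correctly.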

Framework approach in automation:
A framework is an integrated system that sets the rules of automation for a specific product. It integrates the function libraries, test data sources, object details, and various reusable modules. These components act as small building blocks that are assembled in a regular fashion to represent a business process; the framework thus provides the basis of test automation and simplifies the automation effort. Frameworks are categorized by the automation component they leverage:

Data-driven testing
Modularity-driven testing
Keyword-driven testing
Hybrid testing
Model-based testing

Why Automated Testing?
Every software development group tests its products, yet delivered software always has defects. Test engineers strive to catch them before the product is released, but defects always creep in, and they often reappear even with the best manual testing processes. Automated software testing is the best way to increase the effectiveness, efficiency, and coverage of your software testing.

Manual software testing is performed by a human sitting in front of a computer, carefully going through application screens, trying various usage and input combinations, comparing the results to the expected behaviour, and recording the observations. Manual tests are repeated often during development cycles, for source code changes and for other situations such as multiple operating environments and hardware configurations. An automated software testing tool is able to play back pre-recorded and predefined actions, compare the results to the expected behaviour, and report the success or failure of these tests to a test engineer. Once automated tests are created, they can easily be repeated, and they can be extended to perform tasks impossible with manual testing. Because of this, savvy managers have found that automated software testing is an essential component of successful development projects.

Automated software testing has long been considered critical for big software development organizations, but is often thought to be too expensive or difficult for smaller companies to implement. AutomatedQA's TestComplete is affordable enough for single-developer shops, yet powerful enough that its customer list includes some of the largest and most respected companies in the world: Corel, Intel, Adobe, Autodesk, Intuit, McDonald's, Motorola, Symantec, and Sony all use TestComplete. What makes automated software testing so important to these successful companies?

Automated Software Testing Saves Time and Money
Software tests have to be repeated often during development cycles to ensure quality: every time source code is modified, the tests should be repeated, and each release of the software may need to be tested on all supported operating systems and hardware configurations. Manually repeating these tests is costly and time-consuming. Once created, automated tests can be run over and over again at no additional cost, and they are much faster than manual tests. Automated software testing can reduce the time to run repetitive tests from days to hours, a time saving that translates directly into cost savings.

Automated Software Testing Improves Accuracy
Even the most conscientious tester will make mistakes during monotonous manual testing. Automated tests perform the same steps precisely every time they are executed and never forget to record detailed results.

Automated Software Testing Increases Test Coverage
Automated software testing can increase the depth and scope of tests to help improve software quality. Lengthy tests that are often avoided during manual testing can be run unattended, even on multiple computers with different configurations. Automated software testing can look inside an application and see memory contents, data tables, file contents, and internal program states to determine whether the product is behaving as expected. Automated tests can easily execute thousands of different complex test cases during every run, providing coverage that is impossible with manual tests. Testers freed from repetitive manual tests have more time to create new automated tests and deal with complex features.

Automated Software Testing Does What Manual Testing Cannot
Even the largest software departments cannot perform a controlled web application test with thousands of users. Automated testing can simulate tens, hundreds, or thousands of virtual users interacting with network or web software and applications.

Automated Software Testing Helps Developers and Testers
Shared automated tests can be used by developers to catch problems quickly before sending code to QA. Tests can run automatically whenever source code changes are checked in and notify the team or the developer if they fail. Features like these save developers time and increase their confidence.

Automated Software Testing Improves Team Morale
This is hard to measure, but we've experienced it first hand: automated software testing can improve team morale. Automating repetitive tasks gives your team time to spend on more challenging and rewarding projects. Team members improve their skill sets and confidence and, in turn, pass those gains on to their organization.

TestComplete is a powerful and affordable automated software testing tool.

PRACTICAL NO: 12 AIM: To study Microsoft Project Plan.


THEORY:

Introduction:
Whether you call it a project plan or a project timeline, it is absolutely imperative that you develop and maintain a document that clearly outlines the project milestones and major activities required to implement your project. This document needs to include the date each milestone or major activity is to be completed, and the owner of each. Your project plan also needs to be created at the beginning of the project, and a baseline version approved by the team as soon as possible.

Although you will probably not know all of the major activities required to implement your project at the beginning, it is important that you create a draft of the activities you think may need to be tracked in a formal document. Take some time and really think through what you know about the objective of your project. Look at historical data from similar projects. You can even have a few informal meetings with knowledgeable individuals who can serve as a sounding board to make sure you aren't completely off base. You'll be surprised how good a draft you can develop if you put in a little effort. With this draft you will be able to speak with subject matter experts (SMEs) and stakeholders to flesh out the project plan. If you don't make some effort to develop a rough draft, you may give a bad impression, which will make it harder to obtain the support of the people you need to implement the project.

After you have fleshed out your draft with your core team, and with other SMEs who may not be part of your team, you should give the document baseline status. Your timeline project plan should not undergo many edits, if any, after it achieves baseline status. You should document the actual date each project activity is completed; if the actual completion date ever differs from your baseline date, you will still have a record of the date it was supposed to be completed for historical purposes. It is also a good idea to note where things are deleted or added, and why. That way you aren't left digging through the crevices of your memory when someone asks why something you deleted isn't in the document. And trust me, someone will ask.

A few key items to include in your timeline are:

A unique ID that your team can reference when giving an update.
The name of the task.
When the task should start.
When the task should finish.
The actual date the task was completed.
Any tasks that need to happen before other tasks can begin.
The owner of the task.
The percent complete of each task.
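The timeline fields listed above map naturally onto a small record type. This is only an illustrative sketch (the `Task` class, field names, and sample dates are invented, not part of any tool), but it shows how tracking both a baseline finish and an actual finish makes slippage easy to query.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class Task:
    """One row of the project timeline, mirroring the fields listed above."""
    task_id: str                        # unique ID the team can reference
    name: str
    start: date                         # baseline start
    finish: date                        # baseline finish
    owner: str
    percent_complete: int = 0
    actual_finish: Optional[date] = None
    predecessors: List[str] = field(default_factory=list)

def slipped_tasks(tasks):
    """Tasks whose actual completion date differs from the baseline finish."""
    return [t for t in tasks if t.actual_finish and t.actual_finish != t.finish]

plan = [
    Task("T1", "Gather requirements", date(2024, 1, 1), date(2024, 1, 10),
         "Asha", 100, actual_finish=date(2024, 1, 12)),
    Task("T2", "Draft SRS", date(2024, 1, 11), date(2024, 1, 20),
         "Ravi", 60, predecessors=["T1"]),
]
print([t.task_id for t in slipped_tasks(plan)])  # → ['T1']
```

Because the baseline dates are never overwritten, the record preserves exactly the history the text recommends keeping.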

You or the Project Sponsor you represent may decide to track or maintain more than what has been outlined above in your project plan. This is absolutely fine. These are just the items I have found to be vital, and a good foundation to build upon.

It is completely possible to run a project without a project plan or timeline; it's just not very smart. So, do yourself and your project team a favour: document milestones and important tasks, keep up with the status, and you'll be that much closer to a well-managed project.

What is Microsoft Project?
Microsoft Office Project is a software application sold by Microsoft that provides project management tools to manage projects. The program allows users to:

Understand and control project schedules and finances.
Communicate and present project information.
Organize work and people to make sure that projects are completed on schedule.

Overview of Microsoft Project:
Microsoft Project allows the project manager to enter the tasks of a project (also known as the "work breakdown structure" or WBS) and assign workers (known as "resources") to those tasks, along with cost information. It also provides functionality for creating reports that communicate the status and progress of a project. Microsoft Project can help facilitate all processes in the project management life cycle, from developing the scope, modelling the project schedule, and tracking and communicating progress, to saving the knowledge gained from the closed project. Furthermore, with Microsoft Office Project Professional 2003, project management standards can be established and disseminated throughout the enterprise.

How to Create a Project Plan in Microsoft Project:
The basic steps of creating and working with a project plan in Microsoft Project (saved as MPP files) are as follows:

1. Create a new project file by clicking New in the File menu. Choose Blank Project.
2. Set a project start date by choosing Project | Project Information. Enter your start date in the Start date box.
3. Save the project file by clicking Save and specifying a project name.
4. Now the project manager can plan the steps of the project, called tasks. Go to View | Gantt Chart. In the Task Name field, enter each task, including details such as milestones.
5. Enter durations by clicking on the Duration field. The small letter "d" represents days; 2d represents two days. Designate a milestone by entering a 0d duration. Don't enter start and finish dates for the tasks; allow the Duration field to set these automatically.
6. Link tasks (for example, "this task must be done before that task") by selecting the related tasks and clicking the Link Tasks button on the toolbar.
7. Next, create a resource pool from which to allocate resources to the project. Click View | Resource Sheet. In the Resource Name field, enter all resources to be used in the project.
8. Now the project manager must assign resources to each task. A resource is typically a person, but it can also include non-human entities such as equipment or expenses. Go to View | Gantt Chart, select the task to which to assign a resource, choose the Assign Resources button, and in the dialog box choose the resource names and choose Assign.
9. Once resources are assigned, the task type must be set in order for Project to lay out the schedule. The task type may be duration, work, or units. The project manager can set a default task type in Tools | Options | Schedule. Individual tasks may also have their own type; select the task and click Task Information | Advanced.
10. Now the project manager is ready to set the project baseline. Once the project has been approved, this baseline can be fixed and used throughout the duration of the project to monitor how well the project is tracking against expectations. To view the baseline against actual duration, resources, or costs, choose View | Tracking Gantt. The lower bar in the chart is the baseline; the upper bar represents the actual project status.
11. The project manager must now keep the project plan up to date. Choose the task to be updated, then choose Tools | Tracking | Update tasks. In the dialog box, enter progress data, either as a percentage complete, actual dates, actual durations, or actual work.
12. To view project progress, choose Report | Visual Reports.
13. To close the project, create a final report by the means in step 12. If the project plan was useful to the management of the project, the project manager may wish to create a template plan for future use. Choose File | Save As | Template.
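The duration notation used in step 5 ("2d" for two days, "0d" for a milestone) can be mimicked with a toy parser. This is purely an illustration of the notation, not code from Microsoft Project; only the day unit "d" is handled here.

```python
def parse_duration(text):
    """Parse a Project-style duration string like '2d' into a number of days.

    A toy parser for the notation described in step 5; real Project
    durations also allow hours, weeks, and other units.
    """
    text = text.strip().lower()
    if not text.endswith("d"):
        raise ValueError(f"unsupported duration: {text!r}")
    return float(text[:-1])

def is_milestone(text):
    """A zero-day duration marks a milestone, as in step 5 above."""
    return parse_duration(text) == 0

print(parse_duration("2d"))  # → 2.0
print(is_milestone("0d"))    # → True
```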

List of faculty engaging SOFTWARE ENGINEERING Practicals.

1. Mrs. Gauri Dhopavkar 2. Mrs. Gauri Chaudhary 3. Ms. Amreen Khan 4. Ms. Payal Thakre 5. Ms. Sheetal Ugemuge
