Software Engineering Assignment: Unit 1 Overview
UNIT 1 OVERVIEW
SOFTWARE ENGINEERING:
The systematic application of scientific and technological knowledge, through the medium of sound engineering principles, to the production of computer programs and to the requirements definition, functional specification, design description, program implementation, and test methods that lead up to the code.
SOFTWARE:
Software is a program used to direct the operation of a computer by giving it instructions. It is a general term for the various kinds of programs used in the operation of computers and related devices; any instructions that can be stored electronically are software. Software is not hardware, but it is used with hardware: storage and display devices, for example, are hardware. Software is designed and built by software engineers and is used by virtually everyone in society.
CHARACTERISTICS OF SOFTWARE:
1. SOFTWARE IS DEVELOPED OR ENGINEERED; IT IS NOT MANUFACTURED IN THE CLASSIC SENSE: Software engineering and hardware manufacturing are fundamentally different activities.
Good design is the key to high quality in both activities, but the manufacturing phase of hardware can introduce quality problems that are nonexistent (or easily corrected) for software. Both activities depend on people, but the relationship between the people and the way the work is accomplished is entirely different. The ultimate goal of both is the construction of a product, but the approaches are different. Software costs are concentrated in engineering, which means that software projects cannot be managed as if they were manufacturing projects.
2. SOFTWARE DOESN'T "WEAR OUT": The figure depicts failure rate as a function of time for hardware; the relationship is often called the "bathtub curve".
The figure indicates that hardware exhibits relatively high failure rates early in its life. These early failures are caused by design or manufacturing defects; once the defects are corrected, the failure rate drops to a level (ideally, quite low) and remains there for some period of time. As time passes, the failure rate rises again because of external effects such as dust, vibration, abuse, extreme temperatures, and many other environmental factors: the hardware begins to wear out. Software is not affected by such environmental maladies, so in theory its failure-rate curve should take the idealized form: defects introduced during development cause high failure rates early in the life of a program, but these are corrected and the curve flattens. The idealized curve is, however, a gross oversimplification of actual failure models for software. Software does not wear out, but it does deteriorate, and this is best explained by the actual curve: software undergoes maintenance (change) during its lifetime.
When a change introduces a defect, the failure-rate curve spikes. Before the curve can return to its original steady-state level, another change is requested, causing the curve to spike again. Slowly, the minimum failure-rate level begins to rise: the software deteriorates because of change. A minimal sketch of this behaviour follows.
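The following Python sketch is purely illustrative and not from the text: it models the "actual" failure curve described above, in which every change produces a temporary spike in the failure rate and leaves the minimum level slightly higher than before. All rates, times, and parameter names are assumptions chosen only to make the shape of the curve visible.

```python
# Illustrative sketch only: the "actual" software failure curve, where each
# change spikes the failure rate and permanently raises the minimum level.
# All parameter values are arbitrary assumptions.

def software_failure_rate(time_steps, change_times, base_rate=0.02,
                          spike=0.10, residual=0.01, decay=0.6):
    """Return a list of failure rates, one per time step.

    base_rate -- idealized minimum failure rate after initial debugging
    spike     -- temporary jump in failure rate when a change is made
    residual  -- permanent rise in the baseline caused by each change
    decay     -- fraction of the spike that carries over to the next step
    """
    rates = []
    baseline = base_rate
    current_spike = 0.0
    for t in range(time_steps):
        if t in change_times:       # a change request arrives
            current_spike += spike  # the curve shoots up
            baseline += residual    # the minimum level rises: deterioration
        rates.append(baseline + current_spike)
        current_spike *= decay      # the spike settles back toward the baseline
    return rates

if __name__ == "__main__":
    for t, rate in enumerate(software_failure_rate(30, change_times={5, 12, 20})):
        print(f"t={t:2d}  failure rate={rate:.3f}")
```

Printing the values for a handful of change times shows the baseline creeping upward after each spike, which is exactly the deterioration the actual curve depicts.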
Another difference between software and hardware is that worn-out hardware components can be replaced with spare parts, but there are no spare parts for software. Every software failure indicates an error in the design or in the process through which the design was translated into machine-executable code. Hence maintaining software is considerably more complex than maintaining hardware.
3. Although the industry is moving toward component-based assembly, most software continues to be custom built: In hardware, many parts are created as standard components so that engineers can concentrate on the truly innovative elements of a design; reuse is a natural part of the engineering process. In the software world this practice is only beginning to take hold. Software should be designed so that it can be reused in many different programs. In the 1960s we built subroutine libraries that were reusable across a broad array of scientific and engineering applications, but they had a limited domain of application. The idea was later extended to reuse not only algorithms but also data structures. Today we reuse both the data and the processing applied to the data, enabling software engineers to create new applications from reusable parts; a small sketch of this idea follows.
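As an illustration of reusing both data and the processing applied to the data, the following Python sketch packages a simple data structure and its operations into one reusable component and then builds two unrelated "applications" from it. The class name and the example applications are hypothetical, invented only for this sketch.

```python
# Illustrative sketch with hypothetical names: a reusable component that
# bundles a data structure with the processing applied to it, reused by two
# different "applications".

class MeasurementSeries:
    """Reusable part: stores readings plus the operations applied to them."""

    def __init__(self):
        self.readings = []

    def add(self, value):
        self.readings.append(value)

    def mean(self):
        return sum(self.readings) / len(self.readings) if self.readings else 0.0

    def peak(self):
        return max(self.readings, default=0.0)

# Application 1: a weather report built from the reusable part.
temperatures = MeasurementSeries()
for reading in (21.5, 23.0, 19.8):
    temperatures.add(reading)
print("average temperature:", temperatures.mean())

# Application 2: a server monitor built from the same reusable part.
cpu_load = MeasurementSeries()
for reading in (0.35, 0.80, 0.55):
    cpu_load.add(reading)
print("peak CPU load:", cpu_load.peak())
```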
management, complex data structures, and multiple external interfaces. Eg: compilers, editors, operating systems, drivers, etc.
2. APPLICATION SOFTWARE: Also known simply as an application, it is computer software designed to help the user perform specific tasks. It is usually commercially produced to perform a specified useful task other than system maintenance functions (which are performed by utility programs). Eg: enterprise software, accounting software, database programs, etc.
3. ENGINEERING AND SCIENTIFIC SOFTWARE: It is characterized by number-crunching algorithms. Applications range from astronomy to volcanology and from molecular biology to automated manufacturing; there are also many interactive applications, such as system simulation and computer-aided design, that take on system software characteristics.
4. EMBEDDED SOFTWARE: Embedded software is used to control products and systems for the consumer and industrial markets. It resides in read-only memory (ROM) and is used in the intelligent products that have become popular in nearly every consumer and industrial market.
5. PROTOTYPE SOFTWARE: A prototype is a working model of a product or information system, usually built for demonstration purposes and generally used during the development phase. It is used to show the product design and to communicate with stakeholders, even though the prototype may not cover all of the business objectives; it helps in the development of the working model.
6. WEB APPLICATION SOFTWARE: Also called web software or WWW software, it is stored on a server and delivered over the web. The web pages retrieved by a browser are software that incorporates executable instructions. In essence, the network becomes a massive computer providing an almost unlimited software resource that can be accessed by anyone with a modem. Eg: HTML, Java applets, ASP, PHP, etc.
7. ARTIFICIAL INTELLIGENCE SOFTWARE: It makes use of nonnumerical algorithms to solve problems that cannot be solved by simple analysis. It is mainly used for voice and image recognition, artificial neural networks, and knowledge-based systems; game playing is one application within this category.
SOFTWARE MYTHS:
MYTH: A myth is a statement that is not true. Software myths propagate misinformation and confusion, and these misleading attitudes have caused serious problems for managers and technical people alike. Hence we need to change the old attitudes and habits that are still believed and followed. Software myths are of three types: management myths, customer myths, and practitioners' myths.
MANAGEMENT MYTHS: Managers are among the people bestowed with the greatest responsibility: maintaining budgets, keeping schedules, and improving quality. Under this pressure a software manager tends to believe certain myths, if that belief will lessen the pressure (even temporarily).
1. Myth: We already have a book that's full of standards and procedures for building software; won't that provide my people with everything they need to know? Reality: The book of standards may provide useful guidance, but in practice the standards may be outdated, rarely referred to, or even unknown to the practitioners. We cannot depend on standards books alone.
2. Myth: My people have state-of-the-art software development tools; after all, we buy them the newest computers. Reality: It takes much more than the latest computers to do high-quality software development. Software engineering (CASE) tools are more important than hardware for producing high-quality software, yet they are rarely used effectively.
3. Myth: If we get behind schedule, we can add more programmers and catch up (sometimes called the "Mongolian horde" concept). Reality: Adding people to a late project usually makes it later, because the new members must be trained and coordinated by the very people who are already behind schedule. Adding people helps the project schedule only when it is done in a planned, well-coordinated manner.
4. Myth: If I decide to outsource the software project to a third party, I can just relax and let that firm build it. Reality: People who cannot manage internal software development problems will struggle to manage or control the external development of software too.
CUSTOMER MYTHS: A customer can be anyone: the person working at the next desk, a technical group down the hall, the marketing/sales department, or an outside company that has contracted for the software. These myths arise from miscommunication between the customer and the practitioners or software managers; they lead to false expectations and ultimately to dissatisfaction with the developer.
1. Myth: A general statement of objectives is sufficient to begin writing programs; we can fill in the details later. Reality: Without constant communication between the customer and the developers it is impossible to build software that meets the customer's original needs. A detailed description of the design, interfaces, information domain, and so on has to be provided to achieve the desired output.
2. Myth: Project requirements continually change, but change can be easily accommodated because software is flexible. Reality: Every change has far-reaching and often unexpected consequences. Changes to software requirements must be managed very carefully to keep a software project on time and under budget.
PRACTITIONERS' MYTHS: These myths have been practised by software practitioners throughout more than 50 years of programming culture. In the early days programming was considered an art, and old ways and attitudes die hard.
1. Myth: Once we write the program and get it to work, our job is done. Reality: Practitioners tend to think that their job is done once they finish the piece of code, but it is not. After the code is finished it must be maintained until the software is taken out of service.
2. Myth: Until I get the program running I have no way of assessing its quality. Reality: One of the most effective quality assurance practices, the formal technical review, can be applied to any software work product and can serve as a quality filter very early in the product life cycle.
3. Myth: The only deliverable work product for a successful project is the working program. Reality: The working program is only one of many deliverables produced by a well-managed software project. Documentation is as important as any other phase of software development, because it provides a basis for software support after delivery.
4. Myth: Software engineering will make us create voluminous and unnecessary documentation and will invariably slow us down. Reality: Software engineering is not about creating documents; it is about creating quality. Doing things right the first time means less rework, and less rework leads to faster delivery times and shorter development cycles.
The quality focus is the base layer that supports software engineering. The foundation of software engineering is the process layer: it holds the technology layers together and defines a framework of key process areas that must be established for the effective delivery of software engineering technology. The common process framework encompasses software engineering work tasks, project milestones, work products, and quality assurance points. Software engineering methods provide the technical "how-to's" for building software. They include standards, formal or informal, and may also cover low-level conventions such as naming, variable use, and language-construct use, as well as design methodologies. Software engineering tools provide automated or semi-automated support for the process and the methods. When tools are integrated, information created by one tool can be used by another. Tools include editors, design aids, compilers, and Computer-Aided Software Engineering (CASE) environments.
entities. Regardless of the entity to be engineered, the following questions must be asked and answered: What is the problem to be solved? What characteristics of the entity are used to solve the problem? How will the entity (and the solution) be realized? How will the entity be constructed? What approach will be used to uncover errors that were made in the design and construction of the entity? How will the entity be supported over the long term, when corrections, adaptations, and enhancements are requested by its users? Software engineering work can be categorized into three generic phases:
Definition phase - focuses on what (information engineering, software project planning, requirements analysis).
Development phase - focuses on how (software design, code generation, software testing).
Support phase - focuses on change (corrective maintenance, adaptive maintenance, perfective maintenance, preventive maintenance). Four types of change are encountered during the support phase:
Correction: Even with the best quality assurance activities, it is likely that the customer will uncover defects in the software. Corrective maintenance changes the software to correct defects.
Adaptation: Over time, the original environment (e.g., CPU, operating system, business rules, external product characteristics) for which the software was developed is likely to change. Adaptive maintenance modifies the software to accommodate changes to its external environment.
Enhancement: As software is used, the customer/user will recognize additional functions that would provide benefit. Perfective maintenance extends the software beyond its original functional requirements.
Prevention: Computer software deteriorates due to change. Preventive maintenance, often called software re-engineering, must therefore be conducted to enable the software to continue serving the needs of its end users. In essence, preventive maintenance changes computer programs so that they can be more easily corrected, adapted, and enhanced. A small sketch of these four categories follows.
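The four categories above can be summarized in a short Python sketch. The category names come from the text; the helper function and its inputs are assumptions made only for illustration.

```python
# Illustrative sketch: routing a change request to one of the four maintenance
# categories named in the text. The helper and its inputs are hypothetical.

CORRECTIVE = "corrective"   # fix a defect uncovered by the customer
ADAPTIVE = "adaptive"       # follow a change in the external environment
PERFECTIVE = "perfective"   # add functions beyond the original requirements
PREVENTIVE = "preventive"   # re-engineer so future change is easier

def classify_change(is_defect, environment_changed, new_function_requested):
    """Map a change request to a maintenance category (greatly simplified)."""
    if is_defect:
        return CORRECTIVE
    if environment_changed:
        return ADAPTIVE
    if new_function_requested:
        return PERFECTIVE
    return PREVENTIVE

print(classify_change(True, False, False))    # -> corrective
print(classify_change(False, False, False))   # -> preventive
```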
UMBRELLA ACTIVITIES: The phases and related steps described in our generic view of software engineering are complemented by a number of umbrella activities. Typical activities in this category include:
Software project tracking and control, formal technical reviews, software quality assurance, software configuration management, document preparation and production, reusability management, measurement, and risk management.
THE SOFTWARE PROCESSES: Umbrella activities are applied throughout the software process. A common process framework is established by defining a small number of framework activities that are applicable to all software projects, regardless of their size or complexity.
Umbrella activities, such as software quality assurance, software configuration management, and measurement, overlay the process model.
1. SOFTWARE PROJECT TRACKING AND CONTROL: allows the software team to assess progress against the project plan and take the necessary action to maintain the schedule.
2. RISK MANAGEMENT: assesses the risks that may affect the outcome of the project.
3. SOFTWARE QUALITY ASSURANCE: defines the activities required to ensure software quality.
4. FORMAL TECHNICAL REVIEWS: assess work products to uncover and remove errors before they propagate to the next activity.
5. MEASUREMENT: defines and collects measures that help the team meet the customers' needs.
6. SOFTWARE CONFIGURATION MANAGEMENT: manages the effects of change throughout the software process.
7. REUSABILITY MANAGEMENT: defines criteria for work-product reuse.
8. WORK PRODUCT PREPARATION AND PRODUCTION: includes the activities required to create work products such as models, documents, forms, and so on.
PROCESS PATTERNS: A process pattern describes a set of activities, actions, work tasks, work products, and related behaviour followed in the software process life cycle.
TYPES: There are three types of process patterns.
TASK PROCESS PATTERN: Depicts the detailed steps needed to perform a specific task, such as the Technical Review or Reuse First process patterns.
STAGE PROCESS PATTERN: Depicts the steps, often performed iteratively, of a single project stage. A project stage is a higher-level form of process pattern, one that is often composed of several task process patterns.
PHASE PROCESS PATTERN: Depicts the interactions between the stage process patterns of a single project phase, such as the Initiate and Delivery phases.
PROCESS ASSESSMENT: CMMI: The Capability Maturity Model Integration (CMMI) is one of the leading assessment models. Independent assessors grade an organization on how well it follows its defined processes, not on the quality of those processes or of the software produced. Process assessment provides a framework for assessing software processes and aims to set out a clear model for process comparison. It is used to measure what a development organization or project team actually does during the software development process. Within a process-improvement context, process assessment characterizes the current practice within an organizational unit in terms of the capability of the selected processes. Analysing the results in the light of the organization's business needs identifies the strengths, weaknesses, and risks inherent in the processes. This, in turn, makes it possible to determine whether the processes are effective in achieving their goals and to identify significant causes of poor quality or overruns in time or cost, which provide the drivers for prioritizing process improvements. Process capability determination is concerned with analysing the proposed capability of selected processes against a target process-capability profile in order to identify the risks involved in undertaking a project using the selected processes. The proposed capability may be based on the results of relevant previous process assessments, or on an assessment carried out for the purpose of establishing the proposed capability.
PERSONAL SOFTWARE PROCESS (PSP): The Personal Software Process (PSP) is a structured set of process descriptions, measurements, and methods that can help engineers improve their personal performance. It shows engineers how to define processes and how to measure quality and productivity. It is a general-purpose approach that can be made part of any software process. There are seven versions of the PSP, each built on the previous one.
TEAM SOFTWARE PROCESS (TSP): The TSP guides engineering teams that are developing software-intensive products. It helps organizations establish a mature and disciplined engineering practice that produces secure, reliable software in less time and at lower cost. It has been applied in small and large organizations across a variety of domains, often showing results even on first use.
PSP AND TSP: In practice, PSP skills are used in a TSP team environment. TSP teams consist of PSP-trained developers who volunteer for areas of project responsibility, so the project is managed by the team itself. Using the personal data gathered with their PSP skills, the team makes the plans and estimates and controls quality. Using PSP process methods helps TSP teams meet their schedule commitments and produce high-quality software. The sketch below illustrates the kind of personal measures the PSP relies on.
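As a rough illustration of the kind of personal measurement the PSP is built on, the following Python sketch computes two common personal measures, productivity (LOC per hour) and defect density (defects per KLOC), from a hypothetical engineer's log. The record format and the numbers are assumptions; they are not prescribed by the PSP itself.

```python
# Illustrative sketch: two standard personal measures computed from a
# hypothetical engineer's log. The log format and numbers are assumptions.

def productivity(loc_written, hours_spent):
    """Lines of code produced per hour of development effort."""
    return loc_written / hours_spent

def defect_density(defects_found, loc_written):
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / (loc_written / 1000.0)

log = {"loc": 450, "hours": 12.5, "defects": 9}   # one program's data

print(f"productivity  : {productivity(log['loc'], log['hours']):.1f} LOC/hour")
print(f"defect density: {defect_density(log['defects'], log['loc']):.1f} defects/KLOC")
```

Data like this, gathered for every program an engineer writes, is what lets a TSP team make its own plans, estimates, and quality judgements.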
PRODUCT AND PROCESS: Models should be constructed for both the product and the process. If the process is weak, the end product will suffer. People derive as much satisfaction from the creative process as they do from the end product, and this duality of product and process is one important element in keeping creative people engaged as the transition from programming to software engineering is finalized. Product models help explain and evaluate the system and are used for technical decisions. Process models reveal fragmented activities, reduce cost, and expose duplication of effort; they are used for business decisions. SOFTWARE PROCESS MODELS: A process model for software engineering is chosen based on the nature of the project and application, the methods and tools to be used, and the controls and deliverables that are required. A variety of different process models for software engineering are discussed below. Each represents an attempt to bring order to an inherently chaotic activity. It is important to remember that each of the models has been characterized in a way that (ideally) assists in the control and coordination of a real software project; and yet, at their core, all of the models exhibit characteristics of the Chaos model.
PROCESS MODELS
THE LINEAR SEQUENTIAL MODEL: Also known as the waterfall model or the classic life cycle, the linear sequential model is an old-fashioned but reasonable approach when the requirements are well understood. SYSTEM/INFORMATION ENGINEERING AND MODELLING: Work begins by establishing requirements for all system elements and then allocating some subset of these requirements to software. This system view is essential when the software must interact with other elements such as hardware, people, and databases.
SOFTWARE REQUIREMENTS ANALYSIS: The requirements gathering process is intensified and focused specifically on software. DESIGN: The design process translates requirements into a representation of the software that can be assessed for quality before coding begins.
CODE GENERATION: The design must be translated into a machine-readable form; the code generation step performs this task. TESTING: The testing process focuses on the logical internals of the software, ensuring that all statements have been tested, and on the functional externals; that is, tests are conducted to uncover errors and to ensure that defined input will produce actual results that agree with the required results. SUPPORT: Software will undoubtedly undergo change after it is delivered to the customer (a possible exception is embedded software). Change will occur because errors have been encountered, because the software must be adapted to accommodate changes in its external environment (e.g., a new operating system or peripheral device), or because the customer requires functional or performance enhancements. CAUSES OF FAILURE OF THIS MODEL: 1. Real projects rarely follow the sequential flow the model proposes. 2. It is often difficult for the customer to state all requirements explicitly up front. 3. The customer must have patience: a minor change in the system may force much of the development to be reworked. The sequential phases are sketched below.
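The strictly sequential character of the model can be made concrete with a small Python sketch. The phase names come from the text; the driver function and the project name are hypothetical.

```python
# Illustrative sketch: the linear sequential model run as a strict pipeline.
# Phase names come from the text; the driver and project name are hypothetical.

PHASES = [
    "system/information engineering",
    "software requirements analysis",
    "design",
    "code generation",
    "testing",
    "support",
]

def run_waterfall(project_name):
    """Execute the phases strictly in order; no phase is revisited."""
    completed = []
    for phase in PHASES:
        # The output of each phase is the input to the next. A requirement
        # change discovered late (e.g., during testing) would force a return
        # to the top of the list, which is why the model handles change badly.
        completed.append(phase)
        print(f"{project_name}: {phase} complete")
    return completed

run_waterfall("payroll system")
```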
INCREMENTAL MODEL:
The incremental model combines elements of the linear sequential model with the iterative philosophy of prototyping. It has been explicitly designed to accommodate a product that evolves over time: the product is developed step by step, and each step adds to what has already been completed. ADVANTAGES: The system is developed and delivered in increments after an overall architecture has been established. Requirements and specifications for each increment may be developed separately. Users may experiment with delivered increments while others are being developed. The model is intended to combine some of the advantages of prototyping with a more manageable process and better system structure. Incremental development is especially useful when staffing is unavailable for a complete implementation by the business deadline; early increments can be implemented with fewer people.
RAPID APPLICATION DEVELOPMENT MODEL: RAD (rapid application development) is based on the idea that products can be developed faster and with higher quality through gathering requirements, prototyping and early, iterative user testing of designs, the reuse of software components, a rigidly paced schedule that defers design improvements to the next product version, and less formality in reviews and other team communication. LIMITATIONS OF RAD: More human resources are required to create the right number of teams. Time is crucial: if the commitments are not kept, RAD fails. A modular approach is not appropriate for high-performance systems that require rigorous tuning of interfaces. RAD is also not appropriate where technical risks are high, i.e., for applications that make heavy use of new technology or that require a high degree of interoperability with existing systems. PROTOTYPING MODEL:
The main aim of the prototyping model is to counter the limitations of the waterfall model. A prototype is developed based on the currently known requirements. By using this prototype the client can get an "actual feel" of the system, since interacting with the prototype enables the client to better understand the requirements of the desired system. Prototyping is an attractive idea for complicated and large systems for which there is no manual process or existing system to help determine the requirements. It may also be needed for novel systems where it is not clear whether the constraints can be met or whether the required algorithms can be developed.
SPIRAL MODEL:
Proposed by Barry Boehm in 1988, this model attempts to combine the strengths of the other models. It incorporates the elements of the prototype-driven approach along with the classic software life cycle, and it takes into account risk assessment, whose outcome determines whether the next phase of the design activity is taken up. Unlike the other models, which view designing as a linear process, this model views it as a spiral process, representing the iterative design cycles as an expanding spiral. Typically the inner cycles represent the early phases of requirements analysis, along with prototyping to refine the requirements definition, and the outer spirals progressively represent the classic software design life cycle. On every spiral there is a risk assessment phase to evaluate the design effort and the risk associated with that particular iteration, and at the end of each spiral there is a review phase so that the current spiral can be reviewed and the next phase planned.
Advantages: 1. It facilitates a high degree of risk analysis. 2. It is well suited to designing and managing large software projects. 3. Software is produced early in the software life cycle. Disadvantages: 1. Risk analysis requires considerable expertise. 2. It is a costly model to use. 3. It is not suitable for smaller projects. 4. There is a lack of explicit process guidance in determining objectives, constraints, and alternatives. 5. The model is relatively new and does not have as many users as the waterfall or prototyping models. A minimal sketch of one spiral cycle follows.
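The following Python sketch is a loose illustration of one way to read the spiral: each cycle plans, assesses risk, builds (a prototype or a conventional increment), and ends with a review before the next, wider cycle begins. The risk numbers, the threshold, and the task names are assumptions, not part of Boehm's model.

```python
# Illustrative sketch: one reading of the spiral, where every cycle plans,
# assesses risk, builds, and reviews. Risk numbers, the threshold, and the
# task names are assumptions.

def spiral(cycles, acceptable_risk=0.3):
    risk = 1.0                                   # early cycles carry the most risk
    for cycle in range(1, cycles + 1):
        print(f"cycle {cycle}: determine objectives, alternatives, constraints")
        risk *= 0.6                              # analysis and prototyping reduce risk
        print(f"cycle {cycle}: risk assessment -> estimated risk {risk:.2f}")
        if risk > acceptable_risk:
            print(f"cycle {cycle}: build a prototype to refine the requirements")
        else:
            print(f"cycle {cycle}: follow the classic life-cycle activities")
        print(f"cycle {cycle}: customer review and planning of the next spiral\n")

spiral(4)
```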
THE FORMAL METHODS MODEL: Formal methods allow us to create a specification that is more complete and consistent. Set theory and logic notation are used to create a clear statement of facts, and these specifications can then be used to prove correctness. Because they are created using mathematical notation, the specifications are far less ambiguous, and they offer a mechanism for eliminating many of the problems encountered in other software engineering paradigms. DISADVANTAGES: 1. The model is time consuming and expensive. 2. Extensive training is required. 3. It is difficult to use the specifications as a communication mechanism with technically unsophisticated customers.
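As a small example of the kind of mathematical specification formal methods rely on, the following pre/postcondition specification describes a hypothetical integer square-root routine using set-theoretic and logical notation. The routine and the exact notation are illustrative assumptions, not taken from the text.

```latex
% Illustrative example only: a pre/postcondition specification for a
% hypothetical integer square-root routine, written with logic notation.
\[
\begin{aligned}
&\textbf{function } \mathit{isqrt}(n : \mathbb{N}) : \mathbb{N}\\
&\text{pre: } \; n \ge 0\\
&\text{post: } \; \mathit{isqrt}(n)^2 \le n \;\wedge\; \bigl(\mathit{isqrt}(n) + 1\bigr)^2 > n
\end{aligned}
\]
```

A proof obligation can then be discharged to show that an implementation satisfies the postcondition for every input allowed by the precondition, which is how such specifications support correctness arguments.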
THE UNIFIED PROCESS: The Unified Process is not just any process; it is a framework that should be customized for specific organizations or projects. The name "Unified Process", as opposed to "Rational Unified Process", is generally used to describe the generic process, including those elements that are common to most refinements. The Unified Process divides the project into four phases:
Inception, Elaboration, Construction, and Transition. INCEPTION PHASE: It is the smallest phase of the project. If the Inception phase is long, this may indicate excessive up-front specification, which is contrary to the spirit of the Unified Process.
The following are typical goals for the Inception phase. Establish a justification or business case for the project. Establish the project scope and boundary conditions. Outline the use cases and key requirements that will drive the design tradeoffs. Outline one or more candidate architectures. Identify risks. Prepare a preliminary project schedule and cost estimate.
ELABORATION PHASE: During the Elaboration phase the project team is expected to capture a healthy majority of the system requirements. However, the primary goals of Elaboration are to address known risk factors and to establish and validate the system architecture.
Common processes undertaken in this phase include the creation of use case diagrams, conceptual diagrams (class diagrams with only basic notation) and package diagrams (architectural diagrams). CONSTRUCTION PHASE: Construction is the largest phase in the project. In this phase the remainder of the system is built on the foundation laid in Elaboration. System features are implemented in a series of short, timeboxed iterations. Each iteration results in an executable release of the software. It is customary to write full text use cases during the construction phase and each one becomes the start of a new iteration. Common UML (Unified Modelling Language) diagrams used during this phase include Activity, Sequence, Collaboration, State (Transition) and Interaction Overview diagrams.
TRANSITION PHASE: The final project phase is Transition. In this phase the system is deployed to the target users. Feedback received from an initial release (or initial releases) may result in further refinements to be incorporated over the course of several Transition phase iterations.