Master of Computer Application (MCA) - Semester 3 MC0071 - Software Engineering - 4 Credits

Software reliability is important for six key reasons: 1) Users now expect fast and reliable software performance. 2) Unreliable software risks losing customers and future business. 3) System failures from unreliable software can result in enormous costs, such as crashes for aircraft control systems. 4) Unreliable systems are difficult to improve, since issues are distributed throughout the code. 5) Inefficiency is predictable, but unreliability surprises users with unexpected errors. 6) Unreliable systems risk losing valuable information, which is expensive to collect and maintain. Describing software reliability is difficult, as a failure's impact depends on its nature, not just its frequency: a system could be deemed unreliable on the basis of a single critical service failure.

Uploaded by: H Manohar Rayker
Copyright: © Attribution Non-Commercial (BY-NC)

Master of Computer Application (MCA) - Semester 3 MC0071 - Software Engineering - 4 Credits (Book ID: B0808 & B0809)

1. What is the importance of Software Validation in testing?

Ans: Software Validation is also known as software quality control. Validation checks that the product design satisfies or fits the intended usage (high-level checking), i.e., that you built the right product. This is done through dynamic testing and other forms of review. According to the Capability Maturity Model (CMMI-SW v1.1):

Verification: The process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. [IEEE-STD-610]
Validation: The process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements. [IEEE-STD-610]

In other words, validation ensures that the product actually meets the user's needs and that the specifications were correct in the first place, while verification ensures that the product has been built according to the requirements and design specifications. Validation ensures that you built the right thing; verification ensures that you built it right. Validation confirms that the product, as provided, will fulfill its intended use.

From a testing perspective:
Fault: a wrong or missing function in the code.
Failure: the manifestation of a fault during execution.
Malfunction: the system does not meet its specified functionality.

Within the modeling and simulation community, the definitions of validation, verification and accreditation are similar:
Validation is the process of determining the degree to which a model, simulation, or federation of models and simulations, and their associated data, are accurate representations of the real world from the perspective of the intended use(s).
Accreditation is the formal certification that a model or simulation is acceptable to be used for a specific purpose.
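The verification/validation distinction can be sketched in code. The example below is invented for illustration (the discount function and its threshold are not from the text): verification checks the code against its written specification, while validation asks whether the specification itself matches what users actually wanted.

```python
# Hypothetical spec: "discount() shall apply 10% off orders strictly over 100".

def discount(total: float) -> float:
    """Apply a 10% discount to orders strictly over 100 (per the written spec)."""
    return total * 0.9 if total > 100 else total

# Verification: the product was built according to the specification.
assert discount(150) == 135.0   # over 100 -> discounted
assert discount(80) == 80       # not over 100 -> unchanged

# Validation: does the specification match the user's actual need?
# If customers expected the discount at exactly 100, the code passes
# verification but fails validation -- the spec was wrong, not the code.
assert discount(100) == 100     # spec-correct, yet possibly not what users meant
```

The last assertion is the interesting one: the code is verifiably correct against the spec, but a validation review with users might reveal that the spec captured the wrong boundary.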
Verification is the process of determining that a computer model, simulation, or federation of models and simulations, and their associated data, accurately represent the developer's conceptual description and specifications.

2. Explain the following concepts with respect to Software Reliability: A) Software Reliability Metrics B) Programming for Reliability

Ans: Software Reliability Metrics: Metrics which have been used for software reliability specification are shown in the figure below. The choice of which metric should be used depends on the type of system to which it applies and the requirements of the application domain. For some systems, it may be appropriate to use different reliability metrics for different sub-systems.

Fig.: Reliability metrics

In some cases, system users are most concerned about how often the system will fail, perhaps because there is a significant cost in restarting the system. In those cases, a metric based on the rate of failure occurrence (ROCOF) or the mean time to failure (MTTF) should be used. In other cases, it is essential that the system should always meet a request for service because there is some cost in failing to deliver the service; the number of failures in some time period is less important. In those cases, a metric based on the probability of failure on demand (POFOD) should be used. Finally, users or system operators may be mostly concerned that the system is available when a request for service is made, since they will incur some loss if the system is unavailable. Availability (AVAIL), which takes into account repair or restart time, is then the most appropriate metric.

There are three kinds of measurement which can be made when assessing the reliability of a system:
1) The number of system failures given a number of system inputs. This is used to measure the POFOD.
2) The time (or number of transactions) between system failures. This is used to measure ROCOF and MTTF.
3) The elapsed repair or restart time when a system failure occurs. Given that the system must be continuously available, this is used to measure AVAIL.

Time is a factor in all of these reliability metrics, so appropriate time units must be chosen if measurements are to be meaningful. Time units which may be used are calendar time, processor time, or some discrete unit such as number of transactions.
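The three kinds of measurement map directly onto the metrics. As a sketch (the log data below is invented; AVAIL here uses the common MTTF/(MTTF+MTTR) form, which the text does not spell out):

```python
# Measurement 1: failures over a number of service demands -> POFOD
demands = 10_000           # service requests made to the system
failures_on_demand = 2     # requests that failed
POFOD = failures_on_demand / demands            # probability of failure on demand

# Measurement 2: time between failures -> ROCOF and MTTF
failure_count = 3          # failures observed over the measurement period
total_hours = 600.0        # hours of operation in the period
ROCOF = failure_count / total_hours             # failures per hour of operation
MTTF = total_hours / failure_count              # mean time to failure (approx.)

# Measurement 3: elapsed repair/restart time -> AVAIL
MTTR = 0.5                 # mean elapsed repair or restart time, in hours
AVAIL = MTTF / (MTTF + MTTR)                    # fraction of time system is usable

print(POFOD, ROCOF, MTTF, round(AVAIL, 4))     # 0.0002 0.005 200.0 0.9975
```

Note the choice of time unit: here ROCOF and MTTF are in hours of operation, but transactions or calendar time would work equally well, as long as the unit is used consistently.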

Programming for Reliability: There is a general requirement for more reliable systems in all application domains. Customers expect their software to operate without failures and to be available when it is required. Improved programming techniques, better programming languages and better quality management have led to very significant improvements in reliability for most software. However, for some systems, such as those which control unattended machinery, these normal techniques may not be enough to achieve the level of reliability required. In these cases, special programming techniques may be necessary; some of these techniques are discussed in this chapter. Reliability in a software system can be achieved using three strategies:

Fault avoidance: This is the most important strategy, and it is applicable to all types of system. The design and implementation process should be organized with the objective of producing fault-free systems.
Fault tolerance: This strategy assumes that residual faults remain in the system. Facilities are provided in the software to allow operation to continue when these faults cause system failures.
Fault detection: Faults are detected before the software is put into operation. The software validation process uses static and dynamic methods to discover any faults which remain in a system after implementation.

3. Suggest six reasons why software reliability is important. Using an example, explain the difficulties of describing what software reliability means.

Ans: Reliability is the most important dynamic characteristic of almost all software systems. Unreliable software results in high costs for end-users, and developers of unreliable systems may acquire a bad reputation for quality and lose future business opportunities. The reliability of a software system is a measure of how well users think it provides the services that they require.
Reliability is usually defined as the probability of failure-free operation for a specified time, in a specified environment, for a specific purpose. Say it is claimed that software installed on an aircraft will be 99.99% reliable during an average flight of five hours. This means that a software failure of some kind will probably occur in one flight out of 10,000. A formal definition of reliability may not equate to the user's experience of the software. The difficulty in relating such a figure to the user's experience arises because it does not take the nature of the failure into account: a user does not consider all services to be of equal importance, and a system might be thought of as unreliable if it ever failed to provide some critical service. For example, say a system was used to control braking on an aircraft but failed to work under a single set of very rare conditions. If an aircraft crashed because of these failure conditions, pilots of similar aircraft would regard the software as unreliable. Software reliability is a function of the number of failures experienced by a particular user of that software. A software failure occurs when the software is executing: it is a situation in which the software does not deliver the service expected by the user. Software failures are not the same as software faults, although these terms are often used interchangeably.
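The aircraft figure above is just the complement of the reliability probability, which a quick calculation makes explicit:

```python
# 99.99% reliability per five-hour flight means a failure probability of
# 0.0001 per flight -- i.e. a failure in roughly one flight out of 10,000.

reliability_per_flight = 0.9999
p_failure = 1 - reliability_per_flight      # probability of some failure per flight
flights_per_failure = 1 / p_failure         # expected flights between failures

# p_failure is approximately 0.0001; flights_per_failure rounds to 10,000.
```

The example in the text then shows why this single number is misleading: it says nothing about whether the one failure in 10,000 flights is a cosmetic glitch or a braking failure.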
Six reasons why software reliability is important:

1) Computers are now cheap and fast: There is little need to maximize equipment usage. Paradoxically, however, faster equipment leads to increasing expectations on the part of the user, so efficiency considerations cannot be completely ignored.
2) Unreliable software is liable to be discarded by users: If a company attains a reputation for unreliability because of a single unreliable product, it is likely to affect future sales of all of that company's products.
3) System failure costs may be enormous: For some applications, such as a reactor control system or an aircraft navigation system, the cost of system failure is orders of magnitude greater than the cost of the control system.
4) Unreliable systems are difficult to improve: It is usually possible to tune an inefficient system, because most execution time is spent in small program sections. An unreliable system is more difficult to improve, as unreliability tends to be distributed throughout the system.
5) Inefficiency is predictable: Programs take a long time to execute, and users can adjust their work to take this into account. Unreliability, by contrast, usually surprises the user. Software that is unreliable can have hidden errors which can violate system and user data without warning and whose consequences are not immediately obvious. For example, a fault in a CAD program used to design aircraft might not be discovered until several plane crashes occur.
6) Unreliable systems may cause information loss: Information is very expensive to collect and maintain; it may sometimes be worth more than the computer system on which it is processed. A great deal of effort and money is spent duplicating valuable data to guard against data corruption caused by unreliable software.

4. What are the essential skills and traits necessary for effective project managers in successfully handling projects?
Ans: Project management can be defined as a set of principles, methods, tools, and techniques for planning, organizing, staffing, directing, and controlling project-related activities in order to achieve project objectives within time and under cost and performance constraints. The effectiveness of the project manager is critical to project success. The qualities that a project manager must possess include an understanding of negotiation techniques, communication and analytical skills, and requisite project knowledge. Control variables that are decisive in predicting the effectiveness of a project manager include the manager's competence as a communicator, skill as a negotiator, leadership excellence, and whether he or she is a good team worker and has interdisciplinary skills. Project managers are responsible for directing project resources and developing plans, and must be able to ensure that a project will be completed in a given period of time. They play the essential role of coordinating between and interfacing with customers and management. Project managers must be able to:
Optimize the likelihood of overall project success
Apply the experiences and concepts learned from recent projects to new projects
Manage the project's priorities

Table 3.2: Profile of Process Improvement Models

However, the model is not merely a program for how to develop software in a professional, engineering-based manner; it prescribes an evolutionary improvement path from an ad hoc, immature process to a mature, disciplined process (Oshana & Linger 1999). Walnau, Hissam, and Seacord (2002) observe that the ISO and CMM process standards established the context for improving the practice of software development by identifying roles and behaviors that define a software factory. The CMM identifies five levels of software development maturity in an organization:

At level 1, the organization's software development follows no formal development process.

The process maturity is said to be at level 2 if software management controls have been introduced and some software process is followed. A decisive feature of this level is that the organization's process is supposed to be such that it can repeat the level of performance that it achieved on similar successful past projects. This is related to a central purpose of the CMM: namely, to improve the predictability of the development process significantly. The major technical requirement at level 2 is incorporation of configuration management into the process. Configuration management (or change management, as it is sometimes called) refers to the processes used to keep track of the changes made to the development product (including all the intermediate deliverables) and the multifarious impacts of these changes. These impacts range from the recognition of development problems; identification of the need for changes; alteration of previous work; verification that agreed-upon modifications have corrected the problem and that corrections have not had a negative impact on other parts of the system; etc.
An organization is said to be at level 3 if the development process is standard and consistent. The project management practices of the organization are supposed to have been formally agreed on, defined, and codified at this stage of process maturity.

Organizations at level 4 are presumed to have put into place qualitative and quantitative measures of organizational process. These process metrics are intended to monitor development and to signal trouble, indicating where and how a development is going wrong when problems occur.

Organizations at maturity level 5 are assumed to have established mechanisms designed to ensure continuous process improvement and optimization. The metric feedbacks at this stage are not just applied to recognize and control problems with the current project, as they were in level-4 organizations; they are intended to identify possible root causes in the process that have allowed the problems to occur, and to guide the evolution of the process so as to prevent the recurrence of such problems in future projects, such as through the introduction of appropriate new technologies and tools.

The higher the CMM maturity level is, the more disciplined, stable, and well-defined the development process is expected to be, and the environment is assumed to make more use of automated tools and the experience gained from many past successes (Zhiying 2003). The staged character of the model lets organizations progress up the maturity ladder by setting process targets for the organization. Each advance reflects a further degree of stabilization of an organization's development process, with each level institutionaliz[ing] a different aspect of the process (Oshana & Linger 1999).

Each CMM level has associated key process areas (KPA) that correspond to activities that must be formalized to attain that level. For example, the KPAs at level 2 include configuration management, quality assurance, project planning and tracking, and effective management of subcontracted software. The KPAs at level 3 include intergroup communication, training, process definition, product engineering, and integrated software management. Quantitative process management and development quality define the required KPAs at level 4. Level 5 institutionalizes process and technology change management and optimizes defect prevention.

The CMM model is not without its critics. For example, Hamlet and Maybee (2001) object to its overemphasis on managerial supervision as opposed to technical focus. They observe that agreement on the relation between the goodness of a process and the goodness of the product is by no means universal, and they present an interesting critique of the CMM from the point of view of the so-called process-versus-product controversy. The issue is to what extent software engineers should focus their efforts on the design of the software product being developed, as opposed to the characteristics of the software process used to develop that product. The usual engineering approach has been to focus on the product, using relatively straightforward processes, such as the standard practice embodied in the Waterfall Model, adapted to help organize the work on developing the product. A key point of dispute is that no one has really demonstrated whether a good process leads to a good product. Indeed, good products have been developed with little process used, and poor products have been developed under the guidance of a lot of purportedly good processes. Furthermore, adopting complex managerial processes to oversee development may distract from the underlying objective of developing a superior product.
Hamlet and Maybee (2001) agree that, at the extremes of project size, there is no particular argument about the planning process to follow. They observe that for small-scale projects, the cost of a heavy process management structure far outweighs the benefits; however, for very large-scale projects that will develop multimillion-line systems with long lifetimes, significant project management is clearly a necessity. However, in the midrange of projects with a few hundred thousand lines of code, the trade-offs between the managed model of development and the technical model, in which the management hierarchy is kept to an absolute minimum, are less obvious; indeed, the technical model may possibly be the superior and more creative approach.

Bamberger (1997), one of the authors of the Capability Maturity Model, addresses what she believes are some misconceptions about the model. For example, she observes that the motivation for the second level, in which the organization must have a repeatable software process, arises as a direct response to the historical experience of developers when their software development is out of control (Bamberger 1997). Often this is for reasons having to do with configuration management, or mismanagement! Among the many symptoms of configuration mismanagement are: confusion over which version of a file is the current official one; inadvertent side effects when repairs by one developer obliterate the changes of another developer; inconsistencies among the efforts of different developers; etc.

6. Explain: "Time is closely correlated with money and cost, tools, and the characteristics of development methodologies." What do you make out of this statement?

Ans: The software engineering process depends on time as a critical asset as well as a constraint or restriction on the process. Time can be a hurdle for organizational goals, effective problem solving, and quality assurance.
Managed effectively, time can support the competitive advantage of an organization; but time is also a limitation, restricting or stressing quality and imposing an obstacle to efficient problem solving. Time is the major concern of various stakeholders in the software engineering process, from users, customers, and business managers to software developers and project managers. Time is closely correlated with money and cost, tools, and the characteristics of development methodologies like Rapid Application Development that aim primarily at reducing time and accelerating the software engineering process. These methodologies exhibit characteristics such as reusability, which emphasizes avoiding "reinventing the wheel", and object-oriented analysis, design, and implementation. Examples include assembly from reusable components and component-based development; business objects; distributed objects; object-oriented software engineering and object-oriented business process reengineering; use of the Unified Modeling Language (UML); and commercial off-the-shelf software. Other characteristics are automation (via CASE tools); prototyping; outsourcing; extreme programming; and parallel processing.

A redefined software engineering process must integrate the critical activities; major interdisciplinary resources (people, money, data, tools, and methodologies); organizational goals; and time in an ongoing round-trip approach to business-driven problem solving. This redefinition must address limitations identified in the literature related to business metrics, the process environment and external drivers, and process continuation, as fundamentals of process definition. A conceptual framework should emphasize the following characteristics for interdisciplinary software engineering:
It must address exploring resources, external drivers, and diversity in the process environment to optimize the development process.
It must overcome knowledge barriers in order to establish interdisciplinary skills in software-driven problem-solving processes.
It must recognize that organizational goals determine the desired business values, which in turn guide, test, and qualify the software engineering process.
The process activities are interrelated and not strictly sequential. Irrelevant activities that are not related to, or do not add value to, other activities should be excluded. The optimized software engineering process must be iterative in nature, with the degree of iteration ranging from internal feedback control to continual process improvement. The software engineering process is driven by time, which is a critical factor for goals; competition; stakeholder requirements; change; project management; money; evolution of tools; and problem-solving strategies and methodologies.

5. Which are the four phases of development according to the Rational Unified Process?

Ans: Rational Unified Process Model (RUP): The RUP constitutes a complete framework for software development. The elements of the RUP (not of the problem being modeled) are the workers who implement the development, each working on some cohesive set of development activities and responsible for creating specific development artifacts. A worker is like a role a member plays, and a worker can play many roles (wear many hats) during the development. For example, a designer is a worker, and the artifact the designer creates may be a class definition. An artifact supplied to a customer as part of the product is a deliverable. The artifacts are maintained in the Rational Rose tools, not as separate paper documents. A workflow is defined as a meaningful sequence of activities that produce some valuable result (Krutchen 2003). The development process has nine core workflows: business modeling; requirements; analysis and design; implementation; test; deployment; configuration and change management; project management; and environment. Other RUP elements, such as tool mentors, simplify training in the use of the Rational Rose system. These core workflows are spread out over the four phases of development:

The inception phase defines the vision of the actual user end-product and the scope of the project.
The elaboration phase plans activities and specifies the architecture.
The construction phase builds the product, modifying the vision and the plan as it proceeds.
The transition phase transitions the product to the user (delivery, training, support, maintenance).

In a typical two-year project, the inception and transition phases might take a total of five months, with a year required for the construction phase and the rest of the time for elaboration. It is important to remember that the development process is iterative, so the core workflows are repeatedly executed during each iterative visitation to a phase. Although particular workflows will predominate during a particular type of phase (such as the planning and requirements workflows during inception), they will also be executed during the other phases. For example, the implementation workflow will peak during construction, but it is also a workflow during elaboration and transition. The goals and activities for each phase will be examined in some detail.

The purpose of the inception phase is achieving concurrence among all stakeholders on the objectives for the project. This includes the project boundary and its acceptance criteria. Especially important is identifying the essential use cases of the system, which are defined as the primary scenarios of behavior that will drive the system's functionality. Based on the usual spiral model expectation, the developers must also identify a candidate or potential architecture as well as demonstrate its feasibility on the most important use cases. Finally, cost estimation, planning, and risk estimation must be done.
Artifacts produced during this phase include the vision statement for the product; the business case for development; a preliminary description of the basic use cases; business criteria for success, such as revenues expected from the product; the plan; and an overall risk assessment with risks rated by likelihood and impact. A throw-away prototype may be developed for demonstration purposes, but not for architectural purposes.

The following elaboration phase ensures that the architecture, requirements, and plans are stable enough, and the risks sufficiently mitigated, that one can reliably determine the costs and schedule for the project. The outcomes for this phase include an 80-percent-complete use case model, nonfunctional performance requirements, and an executable architectural prototype. The components of the architecture must be understood in sufficient detail to allow a decision to make, buy, or reuse components, and to estimate the schedule and costs with a reasonable degree of confidence. Krutchen observes that a robust architecture and an understandable plan are highly correlated, so one of the critical qualities of the architecture is its ease of construction. Prototyping entails integrating the selected architectural components and testing them against the primary use case scenarios.

The construction phase leads to a product that is ready to be deployed to the users. The transition phase deploys a usable subset of the system at an acceptable quality to the users, including beta testing of the product, possible parallel operation with a legacy system that is being replaced, and software staff and user training.
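The "typical two-year project" allocation described above can be expressed as data. The figures are the chapter's illustrative ones, not a prescription, and the text gives only the combined duration for inception and transition:

```python
# Phase durations for a typical two-year RUP project, per the text:
# inception + transition take five months combined, construction about a
# year, and elaboration the remaining time.

phase_months = {
    "inception + transition": 5,   # five months combined
    "construction": 12,            # about a year
    "elaboration": 7,              # the remainder of the two years
}

total = sum(phase_months.values())
assert total == 24  # a two-year project
```

Remember that these phases are sequential but the nine core workflows are not: each workflow recurs, with a different intensity, inside every phase.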
