Software Engineering Unit1 Print
Often, a customer defines a set of general objectives for software but does not identify detailed input, processing, or output requirements. In other cases, the developer may be unsure of the efficiency of an algorithm, the adaptability of an operating system, or the form that human/machine interaction should take. In these, and many other situations, a prototyping paradigm may offer the best approach.

The prototyping paradigm (Figure 2.5) begins with requirements gathering. Developer and customer meet and define the overall objectives for the software, identify whatever requirements are known, and outline areas where further definition is mandatory. A "quick design" then occurs. The quick design focuses on a representation of those aspects of the software that will be visible to the customer/user (e.g., input approaches and output formats). The quick design leads to the construction of a prototype. The prototype is evaluated by the customer/user and used to refine requirements for the software to be developed. Iteration occurs as the prototype is tuned to satisfy the needs of the customer, while at the same time enabling the developer to better understand what needs to be done.

Ideally, the prototype serves as a mechanism for identifying software requirements. If a working prototype is built, the developer attempts to use existing program fragments or applies tools (e.g., report generators, window managers) that enable working programs to be generated quickly.

But what do we do with the prototype when it has served the purpose just described? Brooks provides an answer:

In most projects, the first system built is barely usable. It may be too slow, too big, awkward in use or all three. There is no alternative but to start again, smarting but smarter, and build a redesigned version in which these problems are solved . . . When a new system concept or new technology is used, one has to build a system to throw away, for even the best planning is not so omniscient as to get it right the first time. The management question, therefore, is not whether to build a pilot system and throw it away. You will do that. The only question is whether to plan in advance to build a throwaway, or to promise to deliver the throwaway to customers . . .
The prototype can serve as "the first system." The one that Brooks recommends we throw away. But this may be an idealized view. It is true that both customers and developers like the prototyping paradigm: users get a feel for the actual system and developers get to build something immediately. Yet, prototyping can also be problematic for the following reasons:

1. The customer sees what appears to be a working version of the software, unaware that the prototype is held together with chewing gum and baling wire, unaware that in the rush to get it working no one has considered overall software quality or long-term maintainability. When informed that the product must be rebuilt so that high levels of quality can be maintained, the customer cries foul and demands that "a few fixes" be applied to make the prototype a working product. Too often, software development management relents.

2. The developer often makes implementation compromises in order to get a prototype working quickly. An inappropriate operating system or programming language may be used simply because it is available and known; an inefficient algorithm may be implemented simply to demonstrate capability. After a time, the developer may become familiar with these choices and forget all the reasons why they were inappropriate. The less-than-ideal choice has now become an integral part of the system.

Although problems can occur, prototyping can be an effective paradigm for software engineering. The key is to define the rules of the game at the beginning; that is, the customer and developer must both agree that the prototype is built to serve as a mechanism for defining requirements. It is then discarded (at least in part) and the actual software is engineered with an eye toward quality and maintainability.
Developers of some specification languages are in the process of creating interactive environments that (1) enable an analyst to interactively create language-based specifications of a system or software, (2) invoke automated tools that translate the language-based specifications into executable code, and (3) enable the customer to use the prototype executable code to refine formal requirements.
Feasibility Study

A feasibility study, also known as a feasibility analysis, is an analysis of the viability of an idea. It is a preliminary study undertaken to determine and document a project's viability, and its results are used in deciding whether or not to proceed with the project. This analytical tool, used during the project planning phase, shows how a business would operate under a set of assumptions, such as the technology used, the facilities and equipment, the capital needs, and other financial aspects. The study is the first point in the development process at which it becomes clear whether the project is a technically and economically feasible concept. Because the study requires a strong financial and technical background, outside consultants conduct most studies.

A feasibility study evaluates the project's potential for success; its perceived objectivity is therefore an important factor in the credibility that potential investors and lending institutions place on the study. It must be conducted with an objective, unbiased approach to provide information upon which decisions can be based.

Feasibility studies aim to objectively and rationally uncover the strengths and weaknesses of an existing business or proposed venture, the opportunities and threats present in the environment, the resources required to carry the project through, and ultimately its prospects for success. In its simplest terms, the two criteria for judging feasibility are the cost required and the value to be attained. A well-designed feasibility study should therefore provide a historical background of the business or project, a description of the product or service, accounting statements, details of operations and management, marketing research and policies, financial data, legal requirements, and tax obligations. Generally, feasibility studies precede technical development and project implementation.

Economic Feasibility

Economic evaluation is a vital part of investment appraisal, dealing with factors that can be quantified, measured, and compared in monetary terms. The results of an economic evaluation are considered together with other aspects when making the project investment decision, because a proper investment appraisal helps to ensure that the right project is undertaken in a manner that gives it the best chance of success. Project investments involve the expenditure of capital funds and other resources to generate future benefits, whether in the form of profits, cost savings, or social benefits. For an investment to be worthwhile, the future benefits should compare favorably with the prior expenditure of the resources needed to achieve them. The bottom line in many projects is economic feasibility. During the early phases of the project, economic feasibility analysis amounts to little more than judging whether the possible benefits of solving the problem are worthwhile. As soon as specific requirements and solutions have been identified, the analyst can weigh the costs and benefits of each alternative. This is called a cost-benefit analysis.
Cost/Benefit Analysis
To assess economic feasibility, management has to analyze the costs and benefits associated with the proposed project. The capital cost of a project affects its economic evaluation. Cost estimating is essentially an intuitive process that attempts to predict the final outcome of a future capital expenditure. Even though it seems impossible to arrive at exact figures for the costs and benefits of a particular project during this initial phase of the development process, one should spend adequate time estimating the costs and benefits of the project for comparison with other alternatives. When considering the cost of an IT/IS project, one would first think of the tangible costs that are easy to determine and estimate, such as hardware, software, and labor costs. However, in addition to these tangible costs, there are also intangible costs, such as loss of goodwill or operational inefficiency. One methodology for determining the costs of implementing and maintaining information technology is Total Cost of Ownership (TCO). TCO is a financial estimate designed to help consumers and enterprise managers assess direct and indirect costs; it is a holistic assessment of IT costs over time. The term holistic assessment implies an all-encompassing collection of the costs associated with IT investments, including capital investment, license fees, leasing costs, and service fees, as well as direct (budgeted) and indirect (unbudgeted) labor expenses. These are the financial impacts of deploying information technology over its whole life cycle, as follows:
- End-user computer hardware purchase costs
- Software license purchase costs
- Hardware and software deployment costs
- Hardware warranties and maintenance costs
- Software license tracking costs
- Operations infrastructure costs
- Cost of security breaches (in loss of reputation and recovery cost)
- Cost of electricity and cooling
- Network hardware and software costs
- Server hardware and software costs
- Insurance costs
- Testing costs
- Cost to upgrade or scale
- IT personnel costs
- "C"-level management time costs
- Backup and recovery process costs
- Costs associated with failure or outage
- Diminished-performance incidents (i.e., users having to wait)
- Technology training costs for users and IT staff
- Infrastructure (floor space) costs
- Audit costs
- Migration costs
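To make the TCO idea concrete, the following sketch sums budgeted (direct) and unbudgeted (indirect) costs over an assumed system life cycle. It is illustrative only; the cost categories, figures, and the five-year life cycle are hypothetical assumptions, not prescribed by any TCO standard.

# Illustrative TCO sketch: all category names and figures are hypothetical.
# Direct (budgeted) and indirect (unbudgeted) costs recur each year;
# one-time costs are incurred at acquisition.

LIFECYCLE_YEARS = 5  # assumed useful life of the system

one_time_costs = {
    "hardware purchase": 40_000,
    "software licenses": 25_000,
    "deployment and migration": 10_000,
}

yearly_direct_costs = {
    "maintenance and warranties": 6_000,
    "electricity and cooling": 3_000,
    "IT personnel": 30_000,
}

yearly_indirect_costs = {
    "downtime and outages": 4_000,
    "diminished performance (user wait time)": 2_500,
    "end-user training": 1_500,
}

def total_cost_of_ownership(years):
    """Sum one-time costs plus recurring direct and indirect costs over the life cycle."""
    recurring = sum(yearly_direct_costs.values()) + sum(yearly_indirect_costs.values())
    return sum(one_time_costs.values()) + recurring * years

print(f"Estimated {LIFECYCLE_YEARS}-year TCO: ${total_cost_of_ownership(LIFECYCLE_YEARS):,.0f}")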
On the other hand, IT/IS projects can provide many benefits, both tangible and intangible, to an organization. Tangible benefits, such as cost savings or increases in revenue, are easier to estimate, while intangible benefits are harder to quantify.
Technological Feasibility
Assessing technical feasibility means evaluating whether the new system will perform adequately and whether the organization has the ability to construct the proposed system. The technical assessment helps answer questions such as whether the organization has enough experience using the proposed technology. One example of a technical feasibility assessment can be seen in credit union management.
In developing the new system, one has to investigate and compare technology providers, determine the reliability and competitiveness of the system, and identify the limitations or constraints of the technology, as well as the risk of the proposed system, which depends on the size of the system, its complexity, and the group's experience with similar systems.
Project Size: Project size can be determined by the number of members on the project team, the project duration, the number of departments involved, or the programming effort required. The larger the project, the riskier it is; experience confirms that small projects are more likely to succeed than large projects.

Project Structure: A project whose requirements are highly structured and well defined carries lower risk than one whose requirements are subject to the judgment of an individual.

Familiarity with Technology or Application Area: The project will be less risky if the development team and the user group are familiar with the technology and the systems. It is therefore less risky if the development team uses standard development tools and hardware environments. Also, on the users' side, important factors are how much of the technology needed for the system already exists, how difficult the system will be to build, and whether the users are familiar with the systems development process; the more familiar they are, the more likely they are to understand the need for their involvement, and this involvement can lead to the success of the project.

However, one thing to keep in mind is that a high-risk project may still be conducted. Most companies aim for a reasonable combination of high-, medium-, and low-risk projects. Without high-risk projects, the organization could not make major breakthroughs in innovative uses of systems.
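As a rough illustration of how the three qualitative factors above might be combined, the sketch below assigns a candidate project a weighted risk score. The weights, the 1-to-5 scales, and the thresholds are hypothetical assumptions introduced here for illustration; a real assessment would be calibrated to the organization.

# Hypothetical weighted risk-scoring sketch for project size, structure, and familiarity.
# Scores run from 1 (low risk contribution) to 5 (high risk contribution); weights are assumed.

WEIGHTS = {"size": 0.4, "structure": 0.3, "familiarity": 0.3}

def project_risk(size_score, structure_score, familiarity_score):
    """Return a weighted risk score between 1 (low) and 5 (high)."""
    return (WEIGHTS["size"] * size_score
            + WEIGHTS["structure"] * structure_score
            + WEIGHTS["familiarity"] * familiarity_score)

# Example: large project, well-structured requirements, team new to the technology.
score = project_risk(size_score=5, structure_score=2, familiarity_score=4)
label = "high" if score >= 3.5 else "medium" if score >= 2.5 else "low"
print(f"Risk score {score:.1f} -> {label}-risk project")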
FINANCIAL ANALYSIS

For investors to engage in a new investment project, the project has to be financially viable. Invested capital must show the potential to generate an economic return to investors at least equal to that available from other similarly risky investments, i.e., the return on investment needs to be equal or higher. For example, an investor expects a manufacturing facility to generate sufficient cash flows from operations to pay for the construction of the facility and its ongoing operating expenses and, additionally, to offer an attractive rate of return. Estimates of the cost of operating and maintaining a manufacturing plant, as well as the expected income generated, are therefore essential in determining the financial feasibility of the facility.

The objective of financial analysis is to ascertain whether the proposed project will be financially viable, in the sense of being able to meet the burden of servicing debt, and whether it will satisfy the return expectations of those who provide the capital. While conducting a financial appraisal, certain aspects have to be looked into, such as:

- Investment outlay and cost of the project
- Means of financing
- Projected profitability
- Break-even point
- Cash flows of the project
- Investment worthiness judged in terms of various criteria of merit
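The criteria listed above are usually reduced to a handful of standard calculations. The sketch below is a minimal example with made-up cash flows and discount rate (not figures from the text); it computes net present value and a simple payback period for a candidate project.

# Minimal financial-appraisal sketch with hypothetical figures.
# NPV discounts future net cash flows; payback finds the first year in which
# cumulative net cash flow covers the initial investment.

initial_investment = 100_000                                       # assumed capital outlay
yearly_net_cash_flows = [30_000, 35_000, 40_000, 40_000, 40_000]   # assumed inflows minus costs
discount_rate = 0.10                                               # assumed cost of capital

def npv(rate, outlay, cash_flows):
    """Net present value: discounted year-end cash flows minus the initial outlay."""
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows)) - outlay

def payback_period(outlay, cash_flows):
    """First year in which cumulative cash flow recovers the outlay (None if never)."""
    cumulative = 0.0
    for year, cf in enumerate(cash_flows, start=1):
        cumulative += cf
        if cumulative >= outlay:
            return year
    return None

print(f"NPV: ${npv(discount_rate, initial_investment, yearly_net_cash_flows):,.0f} (worthwhile if positive)")
print(f"Payback period: {payback_period(initial_investment, yearly_net_cash_flows)} years")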
Brainstorming

In brainstorming, several people get together and focus on a particular issue. The idea is to ensure that discussion is not limited to "good" ideas or ideas that make immediate sense, but rather to work together to uncover as many ideas as possible. Brainstorming works best with a moderator, because the moderator can motivate the group and keep it focused. Brainstorming works best when there is a simple "trigger question" to be answered and everybody is given the chance to contribute whatever comes to their mind, initially on paper. A good seminal reference for this process, called the Nominal Group Technique, is the work of Delbecq et al. (1975). Trigger questions such as "What are the main tasks that you perform?", "What features would you like to see in software engineering tools?", or "What difficulties do you experience in your daily work?" can result in extensive lists of valuable ideas which can then be discussed in more detail, ranked, and analyzed.

Advantages. Brainstorming and focus groups are excellent data collection techniques to use when one is new to a domain and seeking ideas for further exploration. They are good at rapidly identifying what is important to the participant population. Two important side benefits of brainstorming and focus groups are that they can introduce the researchers and participants to each other and additionally give the participants more of a sense of being involved in the research process. Conducting research in field environments is often stressful to the research participants; they are more likely to be willing participants if they feel comfortable with the researchers and feel they are partners in research that focuses on issues that they consider to be important.

Disadvantages. Unless the moderator is very well trained, brainstorming and focus groups can become too unfocused. Although the nominal group technique helps people to express their ideas, people can still be shy in a group and not say what they really think. Just because a participant population raises particular issues, this does not mean the issues are really relevant to their daily work. It is often hard to schedule a brainstorming session or focus group with the busy schedules of software engineers.
Poorly worded questions can result in ambiguous responses that cannot be interpreted or analyzed. It is highly advisable to pilot test the questions or forms and then redesign them as you learn which questions unambiguously attack the pertinent issues. In order to generate good statistical results from interviews or a questionnaire, a sample must be chosen that is representative of the population of interest. This requirement is particularly difficult in software engineering because we lack good demographic information about the population of developers and maintainers. However, this drawback should not prevent us from using interviews and questionnaires to conduct field studies, if we do not intend to perform statistical tests on the data or when the problem or population is small and well-defined. Interviews and questionnaires are often conducted in the same series of studies, with the interviews providing additional information to the answers from the questionnaires.

Advantages. People are familiar with answering questions, either verbally or on paper, and as a result they tend to be comfortable and familiar with this data collection method. Participants also enjoy the opportunity to answer questions about their work.

Disadvantages. Interviews and questionnaires rely on respondents' self-reports of their behaviors or attitudes. This dependency can bias the results in a number of ways. People are not perfect recorders of events around them; in particular, they preferentially remember events that are meaningful to them. For instance, in one of our questionnaires, participants reported that reading documentation was a time-consuming aspect of their job, but in 40 hours of observation, we hardly saw anyone doing so. If the objective of interviews and questionnaires is to obtain statistics based on the answers to fixed questions, then issues of sampling arise. Most studies in software engineering have to use what is called convenience sampling, meaning that we involve whoever is available and volunteers. This will result in various types of bias, such as self-selection bias (those most interested in our work may have different characteristics from the population as a whole). Results must therefore always be reported with an acknowledgement of potential biases and other threats to validity, and should be used keeping those biases in mind. In most cases, slightly biased data is still much more useful than a complete lack of data.

Interviews

Face-to-face interviews involve at least one researcher talking, in person, to at least one respondent at a time. Normally, a fixed list of carefully worded questions forms the basis of the interview. Depending on the goal of the study, respondents may be encouraged to elaborate on areas and deviate slightly from the script. Telephone interviews are the middle ground between face-to-face interviews and questionnaires. You have the interactivity of an interview at the cost and convenience of a phone call. Telephone interviews are not as personal as face-to-face interviews, yet they still provide researchers with opportunities to clarify questions and further probe interesting responses. Although this technique is popular in opinion polling and market research, it is little used in empirical software engineering.

Advantages. Interviews are highly interactive. Researchers can clarify questions for respondents and probe unexpected responses. Interviewers can also build rapport with a respondent to improve the quality of responses.

Disadvantages. Interviews are time and cost inefficient. Contact with the respondent needs to be scheduled and at least one person, usually the researcher, needs to travel to the meeting (unless it is conducted by phone, but this lessens the rapport that can be achieved). If the data from interviews consists of audio or video tapes, it needs to be transcribed and/or coded; careful note-taking may, however, often be an adequate substitute for audio or video recording.
Questionnaires
Questionnaires are sets of questions administered in a written format. They are the most common field method because they can be administered quickly and easily. However, very careful attention needs to be paid to the wording of the questions, the layout of the forms, and the ordering of the questions in order to ensure valid results. Pfleeger and Kitchenham have published a six-part series on principles of survey research, starting with (Pfleeger and Kitchenham, 2001). This series gives detailed information about how to design and implement questionnaires. Punter et al. (2003) further provides information on conducting web-based surveys in software engineering research.

Advantages. Questionnaires are time and cost effective. Researchers do not need to schedule sessions with the software engineers to administer them. They can be filled out when a software engineer has time between tasks, for example, while waiting for information or during compilation. Paper form-based questionnaires can be transported to the respondent for little more than the cost of postage. Web-based questionnaires cost even less, since the paper forms are eliminated and the data are received in electronic form. Questionnaires can also easily collect data from a large number of respondents in geographically diverse locations.

Disadvantages. Since there is no interviewer, ambiguous and poorly worded questions are problematic. Even though it is relatively easy for software engineers to fill out questionnaires, they still must do so on their own and may not find the time. Thus, return rates can be relatively low, which adversely affects the representativeness of the sample. We have found a consistent response rate of 5% to software engineering surveys when people are contacted personally by email and asked to complete a web-based survey. If the objective of the questionnaire is to gather data for rigorous statistical analysis in order to refute a null hypothesis, then response rates much higher than this will be needed. However, if the objective is to understand trends, with reasonable confidence, then low response rates may well be fine. The homogeneity of the population and the sampling technique used also affect the extent to which one can generalize the results of surveys. In addition to the above, responses tend to be more terse than with interviews.

Observation (Shadowing)

In shadowing, the experimenter follows the participant around and records their activities. Shadowing can occur for an unlimited time period, as long as there is a willing participant. Closely related to shadowing, observation occurs when the experimenter observes software engineers engaged in their work, or in specific experiment-related tasks, such as meetings or programming. The difference between shadowing and observation is that the researcher shadows one software engineer at a time, but can observe many at one time.

Advantages. Shadowing and observation are easy to implement, give fast results, and require no special equipment.

Disadvantages. For shadowing, it is often hard to see what a software engineer is doing, especially when they are using keyboard shortcuts to issue commands and working quickly. However, for the general picture, e.g., knowing they are now debugging, shadowing does work well. Observers need to have a fairly good understanding of the environment to interpret the software engineer's behavior. This can sometimes be offset by predefining a set of categories or looked-for behaviors. Of course, again, this limits the type of data that will be collected.
Documentation Analysis

This technique focuses on the documentation generated by software engineers, including comments in the program code, as well as separate documents describing a software system. Data collected from these sources can also be used in reengineering efforts, such as subsystem identification. Other sources of documentation that can be analyzed include local newsgroups, group e-mail lists, memos, and documents that define the development process.

Advantages. Documents written about the system often contain conceptual information and present a glimpse of at least one person's understanding of the software system. They can also serve as an introduction to the software and the team. Comments in the program code tend to provide low-level information on algorithms and data. Using the source code as the source of data allows for an up-to-date portrayal of the software system.

Disadvantages. Studying the documentation can be time consuming and it requires some knowledge of the source. Written material and source comments may be inaccurate.
The data dictionary has been defined in the following manner [YOU89]: The data dictionary is an organized listing of all data elements that are pertinent to the system, with precise, rigorous definitions so that both user and system analyst will have a common understanding of inputs, outputs, components of stores and [even] intermediate calculations.

Today, the data dictionary is always implemented as part of a CASE "structured analysis and design tool." Although the format of dictionaries varies from tool to tool, most contain the following information:

- Name: the primary name of the data or control item, the data store, or an external entity.
- Alias: other names used for the first entry.
- Where-used/how-used: a listing of the processes that use the data or control item and how it is used (e.g., input to the process, output from the process, as a store, as an external entity).
- Content description: a notation for representing content.
- Supplementary information: other information about data types, preset values (if known), restrictions or limitations, and so forth.

Once a data object or control item name and its aliases are entered into the data dictionary, consistency in naming can be enforced. That is, if an analysis team member decides to name a newly derived data item xyz, but xyz is already in the dictionary, the CASE tool supporting the dictionary posts a warning to indicate duplicate names. This improves the consistency of the analysis model and helps to reduce errors.

Where-used/how-used information is recorded automatically from the flow models. When a dictionary entry is created, the CASE tool scans DFDs and CFDs to determine which processes use the data or control information and how it is used. Although this may appear unimportant, it is actually one of the most important benefits of the dictionary. During analysis there is an almost continuous stream of changes. For large projects, it is often quite difficult to determine the impact of a change. Many a software engineer has asked, "Where is this data object used? What else will have to change if we modify it? What will the overall impact of the change be?" Because the data dictionary can be treated as a database, the analyst can ask "where used/how used" questions and get answers to these queries.
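To make the "dictionary as a database" idea concrete, here is a minimal sketch assuming a simple in-memory model rather than any particular CASE tool: entries are keyed by name, duplicate names trigger a warning, and where-used/how-used queries support impact analysis. The class and field names are our own.

# Miniature data-dictionary sketch: entries keyed by name, with duplicate detection
# and a "where used/how used" query. Field names mirror the list above.

from dataclasses import dataclass, field

@dataclass
class DictionaryEntry:
    name: str
    aliases: list = field(default_factory=list)
    where_how_used: list = field(default_factory=list)   # e.g. [("dial phone", "input")]
    content_description: str = ""
    supplementary: str = ""

class DataDictionary:
    def __init__(self):
        self._entries = {}

    def add(self, entry):
        # Enforce naming consistency: warn when the name is already known (as a name or alias).
        known = set(self._entries) | {a for e in self._entries.values() for a in e.aliases}
        if entry.name in known:
            print(f"WARNING: duplicate name '{entry.name}' already in dictionary")
            return
        self._entries[entry.name] = entry

    def where_used(self, name):
        """Return (process, usage) pairs for impact analysis of a change."""
        entry = self._entries.get(name)
        return entry.where_how_used if entry else []

dd = DataDictionary()
dd.add(DictionaryEntry(
    name="telephone number",
    where_how_used=[("assess against set-up", "output"), ("dial phone", "input")],
    content_description="[local number | long distance number]",
))
print(dd.where_used("telephone number"))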
The notation used to develop a content description is noted in the following table:
Data Construct    Notation    Meaning
                  =           is composed of
Sequence          +           and
Selection         [ | ]       either-or
Repetition        { }n        n repetitions of
                  ( )         optional data
                  * ... *     delimits comments

The notation enables a software engineer to represent composite data in one of the three fundamental ways that it can be constructed:

1. As a sequence of data items.
2. As a selection from among a set of data items.
3. As a repeated grouping of data items.

Each data item entry that is represented as part of a sequence, selection, or repetition may itself be another composite data item that needs further refinement within the dictionary.

To illustrate the use of the data dictionary, we return to the level 2 DFD for the monitor system process for SafeHome, shown in Figure 12.22. Referring to the figure, the data item telephone number is specified as input. But what exactly is a telephone number? It could be a 7-digit local number, a 4-digit extension, or a 25-digit long distance carrier sequence. The data dictionary provides us with a precise definition of telephone number for the DFD in question. In addition, it indicates where and how this data item is used and any supplementary information that is relevant to it. The data dictionary entry begins as follows:

name: telephone number
aliases: none
where used/how used: assess against set-up (output); dial phone (input)
description:
telephone number = [local number | long distance number]
local number = prefix + access number
long distance number = 1 + area code + local number
area code = [800 | 888 | 561]
prefix = *a three digit number that never starts with 0 or 1*
access number = *any four number string*
The content description is expanded until all composite data items have been represented as elementary items (items that require no further expansion) or until all composite items are represented in terms that would be well-known and unambiguous to all readers. It is also important to note that a specification of elementary data often restricts a system. For example, the definition of area code indicates that only three area codes (two toll-free and one in South Florida) are valid for this system. The data dictionary defines information items unambiguously. Although we might assume that the telephone number represented by the DFD in Figure 12.22 could accommodate a 25-digit long distance carrier access number, the data dictionary content description tells us that such numbers are not part of the data that may be used. For large computer-based systems, the data dictionary grows rapidly in size and complexity. In fact, it is extremely difficult to maintain a dictionary manually. For this reason, CASE tools should be used.
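Because the content description is unambiguous, it can be turned directly into a validation rule. The sketch below is illustrative: the regular expression is simply a transcription of the dictionary entry above, and the function name is our own.

# Transcription of the data dictionary entry for "telephone number" into a checker.
# prefix        = a three digit number that never starts with 0 or 1
# access number = any four number string
# local number  = prefix + access number
# long distance = 1 + area code + local number, with area code in [800 | 888 | 561]

import re

LOCAL_NUMBER = r"[2-9]\d{2}\d{4}"                 # prefix + access number
LONG_DISTANCE = rf"1(800|888|561){LOCAL_NUMBER}"  # 1 + area code + local number
TELEPHONE_NUMBER = re.compile(rf"^(?:{LOCAL_NUMBER}|{LONG_DISTANCE})$")

def is_valid_telephone_number(digits):
    """Return True if the digit string matches the dictionary definition."""
    return bool(TELEPHONE_NUMBER.match(digits))

print(is_valid_telephone_number("5551234"))      # local number -> True
print(is_valid_telephone_number("18005551234"))  # long distance -> True
print(is_valid_telephone_number("19995551234"))  # area code not in [800 | 888 | 561] -> False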