Passport Management System Project Report Using Java and Oracle
This document is a project report submitted by students Shweta Dangi and Parul Soni for their BCA degree. The report outlines the development of a passport management system as their major project. It includes sections on the introduction and objectives of the project, hardware and software requirements, technologies used such as Java and Oracle, the methodology and development process, implementation details, testing results, and conclusions. The system was created to simplify and streamline the passport application and management process.
Dr. Hari Singh Gour Vishwavidyalaya, Sagar (M.P.)
A Project Report On PASSPORT MANAGEMENT SYSTEM
Submitted In Partial Fulfilment Of B.C.A (SEMESTER VI) MAJOR PROJECT
GUIDED BY: Miss. (LECTURER, DHSGVV, SAGAR (M.P.))
SUBMITTED BY: SHWETA DANGI, PARUL SONI
DR. HARI SINGH GOUR VISHWAVIDYALAYA, SAGAR (M.P.)
(SESSION 2011) DEPARTMENT OF COMPUTER SCIENCE
CERTIFICATE
This is to certify that Miss Shweta Dangi and Miss Parul Soni, students of B.C.A VI Semester, Dr. Hari Singh Gour Vishwavidyalaya, Sagar (M.P.), have completed their major project titled PASSPORT MANAGEMENT SYSTEM, as per the syllabus, and have submitted a satisfactory report. This project is a partial fulfilment towards the award of the degree of Bachelor of Computer Applications under Dr. Hari Singh Gour Vishwavidyalaya, Sagar (M.P.).
Date : Authorised Signatory
ACKNOWLEDGEMENT
It is always a pleasure to acknowledge the assistance of the several individuals who contributed to the accomplishment of our goal and the completion of the project PASSPORT MANAGEMENT SYSTEM. We owe a debt of gratitude to our project guide, Miss ???, whose devotion of valuable time from her busy schedule and whose coordination led us towards the completion of this project. She was extremely generous, and we give our sincere thanks to her for the constant support and guidance. Our heartiest thanks are extended to ???. Last but not the least, we are thankful to the whole computer science faculty, who directly or indirectly helped and advised us at every step of this project work.
Project Team: Shweta Dangi, Parul Soni
DECLARATION
We, MISS SHWETA DANGI and MISS PARUL SONI, hereby declare that the work presented in this project report entitled PASSPORT MANAGEMENT SYSTEM, submitted to the Department of Computer Science and Applications towards the partial fulfilment of B.C.A Semester VI, is an original and authentic record of our work, and it has not been submitted anywhere else. We further declare that if the statement cited above is found false, we will be liable to be disqualified at any stage.
( ) Name & Signature Enrollment No ( ) Name & Signature Enrollment No
TABLE OF CONTENTS
1. Certification
   1.1 Acknowledgement
   1.2 College Certificate
   1.3 Declaration
   1.4 Project Certificate
2. About Company: Aptech Ltd., Mumbai
   2.1 Company Profile
3. Introduction
   3.1 Project Definition
   3.2 Objective Definition
   3.3 Software And Hardware Requirement
4. Technology Introduction
   4.1 Strategies For Making Passport Management System
   4.2 Technology Description
       4.2.1 Java
       4.2.2 Advantages of Using Java
       4.2.3 JDBC
   4.3 Framework
   4.4 Features
   4.5 About DBMS Oracle
5. Methodology
   5.1 Software Development Life Cycle
   5.2 Software Process Model
   5.3 Entity Relation Diagram
   5.4 Data Flow Diagram
6. Implementation
   6.1 Coding
7. Testing Results
   7.1 Testing
   7.2 Software Testing Techniques Used
   7.3 Screen Layout
   7.4 Database
8. Conclusion & Future Scope
   8.1 Conclusion
9. Bibliography
Introduction
PROJECT INTRODUCTION
The ideal structure of online passport registration provides security to the passports to be registered, and lets applicants fill in all their details in an efficient and easy manner. A passport is a document issued by a national government that certifies, for the purpose of international travel, the identity and nationality of its holder. The elements of identity are name, date of birth, sex and place of birth; most often, nationality and citizenship are congruent. A passport does not of itself entitle the holder to entry into another country, to consular protection while abroad, or to any other privileges. It does, however, normally entitle the holder to return to the country that issued the passport. Rights to consular protection arise from international agreements, and the right to return arises from the laws of the issuing country. An individual can register for a passport irrespective of his/her age. The registration of a passport is a major step in the issuing of a passport. It is a process in which an individual has to provide exact details of his/her personal and residential information. Proper registration of a passport is vital, as all the details filled in by the individual are depicted on the passport that is issued.
SYNOPSIS
The main aim of the project is to design innovative software which deals with passport authority management. The motto of the project is to simplify the job of the administrative people and to render a user-friendly package. The system provides information regarding the passport application and its status (enquiry). The tedious jobs, such as verifying all the records of the applicant, confirming that all the personal details are furnished, submission of emigration check documents, passing of police enquiry, positive report from the previous applied section, etc., are done in the way most convenient to the administrator. Security is also provided in the most proficient way. All the intermediate stages, starting from receiving of the application form to revealing the passport number along with the dispatch of the passport, are dealt with.
SYSTEM DEFINITION
At first, the applicant is given the application form. The applicant returns the application duly filled up with all the details. After the application is returned, the administrator awards a file number to that applicant. This file number plays the major role in the entire administrative procedure. Using the file number, the administrator enters all the details such as personal information, address details, physical particulars and educational qualifications. If any one of these is found to be invalid, then that particular section is stopped from processing and the confirmation is withheld. The next section consists entirely of validating all the details, such as police records, validation of the amount paid, previous applications, and submission of all the documents including the photos. Only if all the reports are correctly approved by the concerned departments is the file sent to the passport writing section. Then the passport writing is done and the passport dispatched. The applicant can also learn the status of his or her application through the enquiry section.
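The report gives no code for this workflow; the following is a minimal sketch, in Java, of how the file-number-driven pipeline described above could be modeled. The class name, the stage names, and the enquire method are hypothetical, introduced only for illustration.

    // Hedged sketch of the application workflow described above.
    // PassportApplication and its Stage names are hypothetical.
    public class PassportApplication {

        // The intermediate stages, from receiving the form to dispatch.
        enum Stage {
            RECEIVED,            // application returned, duly filled
            DETAILS_ENTERED,     // personal, address, physical, educational details
            VERIFICATION,        // police records, payment, documents, photos
            PASSPORT_WRITING,    // approved files go to the writing section
            DISPATCHED           // passport written and dispatched
        }

        private final int fileNumber;   // awarded by the administrator
        private Stage stage = Stage.RECEIVED;

        public PassportApplication(int fileNumber) {
            this.fileNumber = fileNumber;
        }

        // Advance strictly in order; if a section finds invalid details,
        // the caller simply stops advancing at the current stage.
        public void advance() {
            Stage[] stages = Stage.values();
            if (stage.ordinal() < stages.length - 1) {
                stage = stages[stage.ordinal() + 1];
            }
        }

        // The enquiry section: applicants query status by file number.
        public String enquire() {
            return "File " + fileNumber + ": " + stage;
        }
    }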
HARDWARE REQUIREMENTS
1. Processor: Intel Pentium III 800 MHz or above
2. Hard Disc: A minimum of 10 MB of hard disc space
3. RAM: 64 MB or above
4. Keyboard: Standard
5. Mouse: Standard
6. Graphics: 14-inch or above colour monitor
SOFTWARE REQUIREMENTS (FOR DEVELOPER)
a. Windows XP or later
b. Java Development Kit 1.6.0
c. Notepad or any other text editor
Passport Seva Vision: "To deliver passport services to citizens in a timely, transparent, more accessible, reliable manner and in a comfortable environment through streamlined processes and committed, trained and motivated workforce."
In recent years, the Government of India has taken many initiatives to usher in an era of e-Governance to improve the delivery of public services. The National e-Governance Plan (NeGP) includes many high-impact e-Governance projects that have been identified as Mission Mode Projects (MMPs). One such project focuses on reforming passport services in India. The Ministry of External Affairs (MEA) is responsible for issuance of passports to Indian citizens through a network of 37 passport offices across the country and 180 Indian embassies and consulates abroad. A passport is an essential travel document for those who are travelling abroad for education, tourism, pilgrimage, medical attendance, business purposes and family visits. During the last few years, the growing economy and spreading globalization have led to an increased demand for passports and related services. Passport demand is estimated to be growing by around 10% annually. This increased demand is coming from both large cities and smaller towns, creating a need for wider reach and availability. To augment and improve the delivery of passport services to Indian citizens, the MEA launched the Passport Seva Project (PSP) in May 2010. The project has been implemented in Public Private Partnership (PPP) mode with Tata Consultancy Services, selected through a public competitive procurement process. Under this program, the sovereign and fiduciary functions, like verification, granting and issuing of passports, have been retained by the MEA. The ownership and strategic control of the core assets, including data/information, is with the MEA. Passport Seva enables simple, efficient and transparent processes for delivery of passport and related services. Apart from creating a countrywide networked environment for Government staff, it integrates with the State Police for physical verification of applicants' credentials and with India Post for delivery of passports.
Transforming Passport Services for Indian Citizens
The Passport Seva Project is transforming passport and related services in India to provide a best-in-class experience to Indian citizens. PSP is enabling the MEA to deliver passport services in a reliable, convenient and transparent manner, within defined service levels. Key aspects of the service transformation achieved by PSP are as follows:
1. Anywhere, Anytime Access: Citizens can submit their passport applications and seek an appointment, on payment of passport fees, online through the PSP portal (www.passportindia.gov.in) at their convenience. The portal provides comprehensive and up-to-date information on all passport-related services. Citizens visit the nearest PSK with a prior appointment date/time, thus avoiding long queues and inconvenience.
2. Increased Network: As extended arms of the 37 Passport Offices, 77 Passport Seva Kendras (PSKs) have been made operational across the country and 16 Passport Seva Laghu Kendras (PSLKs) are being established as part of Passport Seva.
3. Improved Amenities: The PSK provides a world-class ambience. Amenities in every PSK include helpful guides, information kiosks, photocopying, food and beverage facilities, a public phone booth, baby care, newspapers and journals, and television in a comfortable air-conditioned waiting lounge. The Electronic Queue Management System ensures the first-in-first-out principle in application processing.
4. State-of-the-Art Technology Infrastructure: Passport Seva is supported by state-of-the-art technology infrastructure which enables end-to-end passport services to be delivered with enhanced security comparable to the best in the world. The photograph and biometrics of applicants are captured when they visit the PSK. Their applications and supporting documents are digitized and stored in the system for further processing.
5. Integration with Police and India Post: The PSP network connects with the State Police across all states and union territories. The applicant's data is sent electronically for police verification. PSP also provides an interface to India Post for tracking delivery of passports to citizens.
6. Call Centre & Helpdesk: A multi-lingual national call centre operating in 17 Indian languages enables citizens to obtain passport-service-related information and receive updates about their passport applications round the clock, seven days a week. An e-mail-based helpdesk, besides a mobile-based application 'mPassport Seva', provides information on passport services.
Technology Introduction
JAVA
Java is a programming language originally developed by James Gosling at Sun Microsystems (now a subsidiary of Oracle Corporation) and released in 1995 as a core component of Sun Microsystems' Java platform. The language derives much of its syntax from C and C++ but has a simpler object model and fewer low-level facilities. Java applications are typically compiled to bytecode (class files) that can run on any Java Virtual Machine (JVM) regardless of computer architecture. Java is a general-purpose, concurrent, class-based, object-oriented language that is specifically designed to have as few implementation dependencies as possible. It is intended to let application developers "write once, run anywhere". Java is currently one of the most popular programming languages in use, and is widely used from application software to web applications. The original and reference implementation Java compilers, virtual machines, and class libraries were developed by Sun from 1995. As of May 2007, in compliance with the specifications of the Java Community Process, Sun relicensed most of its Java technologies under the GNU General Public License. Others have also developed alternative implementations of these Sun technologies, such as the GNU Compiler for Java, GNU Classpath, and Dalvik.
ADVANTAGES OF JAVA
Java offers a number of advantages to developers.
1. Java is simple: Java was designed to be easy to use and is therefore easier to write, compile, debug, and learn than other programming languages. One reason Java is much simpler than C++ is that Java uses automatic memory allocation and garbage collection, whereas C++ requires the programmer to allocate memory and collect garbage.
2. Java is object-oriented: Java is object-oriented because programming in Java is centered on creating objects, manipulating objects, and making objects work together. This allows you to create modular programs and reusable code.
3. Java is platform-independent: One of the most significant advantages of Java is its ability to move easily from one computer system to another. The ability to run the same program on many different systems is crucial to World Wide Web software, and Java succeeds at this by being platform-independent at both the source and binary levels.
4. Java is distributed: Distributed computing involves several computers on a network working together. Java is designed to make distributed computing easy, with networking capability inherently integrated into it. Writing network programs in Java is like sending and receiving data to and from a file.
5. Java is interpreted: An interpreter is needed in order to run Java programs. The programs are compiled into Java Virtual Machine code called bytecode. The bytecode is machine-independent and is able to run on any machine that has a Java interpreter. With Java, the program need only be compiled once, and the bytecode generated by the Java compiler can run on any platform.
6. Java is secure: Java is one of the first programming languages to consider security as part of its design. The Java language, compiler, interpreter, and runtime environment were each developed with security in mind.
7. Java is robust: Robust means reliable, and no programming language can really assure reliability. Java puts a lot of emphasis on early checking for possible errors, as Java compilers are able to detect many problems that would first show up during execution time in other languages.
8. Java is multithreaded: Multithreading is the capability for a program to perform several tasks simultaneously. In Java, multithreaded programming has been smoothly integrated, while in other languages, operating-system-specific procedures have to be called in order to enable multithreading. Multithreading is a necessity in visual and network programming.
JDBC
Java Database Connectivity, commonly referred to as JDBC, is an API for the Java programming language that defines how a client may access a database. It provides methods for querying and updating data in a database. JDBC is oriented towards relational databases. A JDBC-to-ODBC bridge enables connections to any ODBC-accessible data source in the JVM host environment.
Functionality
JDBC allows multiple implementations to exist and be used by the same application. The API provides a mechanism for dynamically loading the correct Java packages and registering them with the JDBC Driver Manager. The Driver Manager is used as a connection factory for creating JDBC connections. JDBC connections support creating and executing statements. These may be update statements such as SQL's CREATE, INSERT, UPDATE and DELETE, or they may be query statements such as SELECT. Additionally, stored procedures may be invoked through a JDBC connection. JDBC represents statements using one of the following classes:
Statement: the statement is sent to the database server each and every time.
PreparedStatement: the statement is cached and the execution path is pre-determined on the database server, allowing it to be executed multiple times in an efficient manner.
CallableStatement: used for executing stored procedures on the database.
Update statements such as INSERT, UPDATE and DELETE return an update count that indicates how many rows were affected in the database. These statements do not return any other information. Query statements return a JDBC row result set. The row result set is used to walk over the result set. Individual columns in a row are retrieved either by name or by column number. There may be any number of rows in the result set. The row result set has metadata that describes the names of the columns and their types. There is an extension to the basic JDBC API in the javax.sql package.
Examples
The method Class.forName(String) is used to load the JDBC driver class. The line below causes the JDBC driver from some JDBC vendor to be loaded into the application. (Some JVMs also require the class to be instantiated with .newInstance().)

    Class.forName("com.somejdbcvendor.TheirJdbcDriver");

In JDBC 4.0, it is no longer necessary to explicitly load JDBC drivers using Class.forName(). When a Driver class is loaded, it creates an instance of itself and registers it with the DriverManager, by including a call to DriverManager.registerDriver(Driver driver) in the driver class's static initializer block. Now, when a connection is needed, one of the DriverManager.getConnection() methods is used to create a JDBC connection.
    Connection conn = DriverManager.getConnection(
            "jdbc:somejdbcvendor:other data needed by some jdbc vendor",
            "myLogin",
            "myPassword");
    try {
        /* you use the connection here */
    } finally {
        // It's important to close the connection when you are done with it
        try {
            conn.close();
        } catch (Throwable ignore) {
            /* Propagate the original exception instead of this one
               that you may want just logged */
        }
    }

JDBC drivers
JDBC drivers are client-side adapters (installed on the client machine, not on the server) that convert requests from Java programs to a protocol that the DBMS can understand.
Types
There are commercial and free drivers available for most relational database servers. These drivers fall into one of the following types:
Type 1: calls native code of the locally available ODBC driver.
Type 2: calls the database vendor's native library on the client side. This code then talks to the database over the network.
Type 3: the pure-Java driver that talks to server-side middleware, which then talks to the database.
Type 4: the pure-Java driver that uses the database's native protocol.
There is also a type called the internal JDBC driver, a driver embedded with the JRE in Java-enabled SQL databases. It is used for Java stored procedures. This does not belong to the above classification, although it would likely be either a type 2 or type 4 driver (depending on whether the database itself is implemented in Java or not). An example of this is the KPRB driver supplied with Oracle RDBMS. "jdbc:default:connection" is a relatively standard way of making such a connection (at least Oracle and Apache Derby support it). The distinction here is that the JDBC client is actually running as part of the database being accessed, so access can be made directly rather than through network protocols.
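To complement the connection example above, here is a hedged sketch of executing a parameterized query and walking the result set, using the PreparedStatement class described earlier. The URL and credentials are the same placeholders as above, and the applicants table with its file_no and name columns is hypothetical, not from the report; the code is written for JDK 1.6, the version the report targets.

    import java.sql.*;

    // Sketch only: parameterized query plus result-set walking.
    // The applicants table and connection details are assumptions.
    public class QueryExample {
        public static void main(String[] args) throws SQLException {
            Connection conn = DriverManager.getConnection(
                    "jdbc:somejdbcvendor:other data needed by some jdbc vendor",
                    "myLogin", "myPassword");
            try {
                // PreparedStatement binds parameters safely and lets the
                // server cache the execution path, as described above.
                PreparedStatement ps = conn.prepareStatement(
                        "SELECT file_no, name FROM applicants WHERE file_no = ?");
                ps.setInt(1, 1001);              // bind the placeholder
                ResultSet rs = ps.executeQuery();
                while (rs.next()) {              // walk the result set row by row
                    // Columns may be retrieved by name or by column number.
                    System.out.println(rs.getInt("file_no") + " " + rs.getString(2));
                }
                rs.close();
                ps.close();
            } finally {
                conn.close();                    // always release the connection
            }
        }
    }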
FRAMEWORK: TECHNOLOGY IN THE BANKING SECTOR
Over the years, as consumer demand increased and retailers geared up to meet this increase, technology evolved rapidly to support this growth. The hardware and software tools that have now become almost essential for retailing can be classified into two broad categories.
Customer Interfacing Systems:
Bar Coding and Scanners: Point-of-sale systems use scanners and bar coding to identify an item, use pre-stored data to calculate the cost and generate the total bill for a client. Tunnel scanning is a new concept where the consumer pushes the full shopping cart through an electronic gate to the point of sale. In a matter of seconds, the items in the cart are hit with laser beams and scanned. All that the consumer has to do is pay for the goods.
Payment: Payment through credit cards has become quite widespread, and this enables a fast and easy payment process. Electronic cheque conversion, a recent development in this area, processes a cheque electronically by transmitting transaction information to the retailer's and the consumer's banks. Rather than manually process a cheque, the retailer voids it and hands it back to the consumer along with a receipt, having digitally captured and stored the image of the cheque, which makes the process very fast.
Internet: The Internet is also rapidly evolving as a customer interface, removing the need for a consumer to physically visit the store.
Operation Support Systems:
ERP Systems: Various ERP vendors have developed retail-specific systems which help in integrating all the functions from warehousing to distribution, front- and back-office store systems, and merchandising. An integrated supply chain helps the retailer in maintaining his stocks, getting his supplies on time, preventing stock-outs and thus reducing his costs, while servicing the customer better.
CRM Systems: The rise of loyalty programs, mail order and the Internet has provided retailers with real access to consumer data. Data warehousing and mining technologies offer retailers the tools they need to make sense of their consumer data and apply it to business. This, along with the various available CRM (Customer Relationship Management) systems, allows retailers to study the purchase behaviour of consumers in detail and grow the value of individual consumers to their businesses.
Advanced Planning and Scheduling Systems: APS systems can provide improved control across the supply chain, all the way from raw material suppliers right through to the retail shelf. These APS packages complement existing (but often limited) ERP packages. They enable consolidation of activities such as long-term budgeting, monthly forecasting, weekly factory scheduling and daily distribution scheduling into one overall planning process using a single set of data.
The major reasons behind the development of new trends are:
Scalable and profitable retail models are well established for most of the categories
Rapid evolution of new-age young Indian consumers
Retail space is no more a constraint for growth
Partnering among brands, retailers, franchisees, investors and malls
India is on the radar of global retailers and suppliers
ABOUT DATABASE MANAGEMENT SYSTEMS: ORACLE
The Oracle Database (commonly referred to as Oracle RDBMS, or simply as Oracle) is an object-relational database management system (ORDBMS) produced and marketed by Oracle Corporation. Larry Ellison and his friends and former co-workers Bob Miner and Ed Oates started the consultancy Software Development Laboratories (SDL) in 1977. SDL developed the original version of the Oracle software. The name Oracle comes from the code name of a CIA-funded project Ellison had worked on while previously employed by Ampex.
Physical and logical structures
An Oracle database system, identified by an alphanumeric system identifier or SID, comprises at least one instance of the application, along with data storage. An instance, identified persistently by an instantiation number (or activation id: SYS.V_$DATABASE.ACTIVATION#), comprises a set of operating-system processes and memory structures that interact with the storage. Typical processes include PMON (the process monitor) and SMON (the system monitor). Users of Oracle databases refer to the server-side memory structure as the SGA (System Global Area). The SGA typically holds cache information such as data buffers, SQL commands, and user information. In addition to storage, the database consists of online redo logs (or logs), which hold transactional history. Processes can in turn archive the online redo logs into archive logs (offline redo logs), which provide the basis (if necessary) for data recovery and for some forms of data replication. If the Oracle database administrator has implemented Oracle RAC (Real Application Clusters), then multiple instances, usually on different servers, attach to a central storage array. This scenario offers advantages such as better performance, scalability and redundancy. However, support becomes more complex, and many sites do not use RAC. In version 10g, grid computing introduced shared resources where an instance can use (for example) CPU resources from another node (computer) in the grid. The Oracle DBMS can store and execute stored procedures and functions within itself. PL/SQL (Oracle Corporation's proprietary procedural extension to SQL), or the object-oriented language Java, can invoke such code objects and/or provide the programming structures for writing them.
Storage
The Oracle RDBMS stores data logically in the form of tablespaces and physically in the form of data files. Tablespaces can contain various types of memory segments, such as data segments, index segments, etc. Segments in turn comprise one or more extents. Extents comprise groups of contiguous data blocks. Data blocks form the basic units of data storage. There is also a partitioning feature available in newer versions of the database, which allows tables to be partitioned based on different sets of keys. Specific partitions can then be easily added or dropped to help manage large data sets. Oracle database management tracks its computer data storage with the help of information stored in the SYSTEM tablespace. The SYSTEM tablespace contains the data dictionary and often (by default) indexes and clusters. A data dictionary consists of a special collection of tables that contains information about all user objects in the database. Since version 8i, the Oracle RDBMS also supports "locally managed" tablespaces, which can store space management information in bitmaps in their own headers rather than in the SYSTEM tablespace (as happens with the default "dictionary-managed" tablespaces).
Version 10g and later introduced the SYSAUX tablespace, which contains some of the tables formerly in the SYSTEM tablespace.
Disk files
Disk files primarily consist of the following types:
Data and index files
Redo log files, consisting of all changes to the database, used to recover from an instance failure
Undo files, used for recovery, rollbacks, and read consistency
Archive log files
Temporary files
Control files, containing information on data files
At the physical level, data files comprise one or more data blocks, where the block size can vary between data files. Data files can occupy pre-allocated space in the file system of a computer server, utilize raw disk directly, or exist within ASM logical volumes.
Control files
The following parameters govern the size of the control files:
maxlogfiles
maxlogmembers
maxloghistory
maxinstances
control_file_record_keep_time
Database Schema
Oracle database conventions refer to defined groups of object ownership (generally associated with a "username") as schemas. Most Oracle database installations traditionally came with a default schema called SCOTT. After the installation process has set up the sample tables, the user can log into the database with the username scott and the password tiger. The name of the SCOTT schema originated with Bruce Scott, one of the first employees at Oracle (then Software Development Laboratories), who had a cat named Tiger. Oracle Corporation has de-emphasized the use of the SCOTT schema, as it uses few of the features of the more recent releases of Oracle. Most recent examples supplied by Oracle Corporation reference the default HR or OE schemas. Other default schemas include:
SYS (essential core database structures and utilities)
SYSTEM (additional core database structures and utilities, and a privileged account)
OUTLN (used to store metadata for stored outlines, for stable query-optimizer execution plans)
BI, IX, HR, OE, PM, and SH (expanded sample schemas
containing more data and structures than the older SCOTT schema).
System Global Area
Each Oracle instance uses a System Global Area (SGA), a shared-memory area, to store its data and control information. Each Oracle instance allocates itself an SGA when it starts and de-allocates it at shutdown time. The information in the SGA consists of the following elements, each of which has a fixed size, established at instance startup:
The redo log buffer: this stores redo entries, a log of changes made to the database. The instance writes redo log buffers to the redo log as quickly and efficiently as possible. The redo log aids in instance recovery in the event of a system failure.
The shared pool: this area of the SGA stores shared-memory structures such as shared SQL areas in the library cache and internal information in the data dictionary. An insufficient amount of memory allocated to the shared pool can cause performance degradation.
Library cache
The library cache stores shared SQL, caching the parse tree and the execution plan for every unique SQL statement. If multiple applications issue the same SQL statement, each application can access the shared SQL area. This reduces the amount of memory needed and reduces the processing time used for parsing and execution planning.
Data dictionary cache
The data dictionary comprises a set of tables and views that map the structure of the database. Oracle databases store information here about the logical and physical structure of the database. The data dictionary contains information such as:
user information, such as user privileges
integrity constraints defined for tables in the database
names and datatypes of all columns in database tables
information on space allocated and used for schema objects
The Oracle instance frequently accesses the data dictionary in order to parse SQL statements. The operation of Oracle depends on ready access to the data dictionary: performance bottlenecks in the data dictionary affect all Oracle users. Because of this, database administrators should make sure that the data dictionary cache has sufficient capacity to cache this data. Without enough memory for the data dictionary cache, users see severe performance degradation. Allocating sufficient memory to the shared pool, where the data dictionary cache resides, precludes these particular performance problems.
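As a small illustration of the data dictionary described above, the following hedged sketch queries Oracle's USER_TABLES dictionary view over JDBC. The driver registration, thin-driver URL, and the scott/tiger account mirror the report's own Login code and are assumptions about the local Oracle XE setup, not part of the report.

    import java.sql.*;

    // Sketch only: list the current user's tables via the data dictionary.
    // Requires the Oracle JDBC driver on the classpath; URL and the
    // scott/tiger credentials follow the report's Login example.
    public class DataDictionaryExample {
        public static void main(String[] args) throws SQLException {
            DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
            Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@localhost:1521:xe", "scott", "tiger");
            try {
                Statement s = conn.createStatement();
                // USER_TABLES is a data dictionary view describing the
                // logical structure of the schema's objects.
                ResultSet rs = s.executeQuery(
                        "SELECT table_name FROM user_tables ORDER BY table_name");
                while (rs.next()) {
                    System.out.println(rs.getString("table_name"));
                }
                rs.close();
                s.close();
            } finally {
                conn.close();
            }
        }
    }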
Methodology
SYSTEMS DEVELOPMENT LIFE CYCLE
The Systems Development Life Cycle (SDLC), or Software Development Life Cycle in systems engineering, information systems and software engineering, is the process of creating or altering systems, and the models and methodologies that people use to develop these systems. The concept generally refers to computer or information systems. In software engineering the SDLC concept underpins many kinds of software development methodologies. These methodologies form the framework for planning and controlling the creation of an information system: the software development process.
OVERVIEW
The Systems Development Life Cycle (SDLC) is a process used by a systems analyst to develop an information system, including requirements, validation, training, and user (stakeholder) ownership. Any SDLC should result in a high-quality system that meets or exceeds customer expectations, reaches completion within time and cost estimates, works effectively and efficiently in the current and planned Information Technology infrastructure, and is inexpensive to maintain and cost-effective to enhance. Computer systems are complex and often (especially with the recent rise of Service-Oriented Architecture) link multiple traditional systems potentially supplied by different software vendors. To manage this level of complexity, a number of SDLC models or methodologies have been created, such as "waterfall", "spiral", "Agile", "rapid prototyping", "incremental", and "synchronize and stabilize". SDLC models can be described along a spectrum of agile to iterative to sequential. Agile methodologies, such as XP and Scrum, focus on lightweight processes which allow for rapid changes along the development cycle. Iterative methodologies, such as Rational Unified Process and Dynamic Systems Development Method, focus on limited project scope and expanding or improving products by multiple iterations. Sequential or big-design-up-front (BDUF) models, such as Waterfall, focus on complete and correct planning to guide large projects and risks to successful and predictable results. Other models, such as Anamorphic Development, tend to focus on a form of development that is guided by project scope and adaptive iterations of feature development. In project management a project can be defined both with a project life cycle (PLC) and an SDLC, during which slightly different activities occur. According to Taylor (2004), "the project life cycle encompasses all the activities of the project, while the systems development life cycle focuses on realizing the product requirements".
HISTORY
The Systems Life Cycle (SLC) is a type of methodology used to describe the process for building information systems, intended to develop information systems in a very deliberate, structured and methodical way, reiterating each stage of the life cycle. The systems development life cycle, according to Elliott & Strachan & Radford (2004), originated in the 1960s, to develop large-scale functional business systems in an age of large-scale business conglomerates. Information systems activities revolved around heavy data processing and number-crunching routines. Several systems development frameworks have been partly based on the SDLC, such as the Structured Systems Analysis and Design Method (SSADM) produced for the UK government Office of Government Commerce in the 1980s.
Ever since, according to Elliott (2004), "the traditional life cycle approaches to systems development have been increasingly replaced with alternative approaches and frameworks, which attempted to overcome some of the inherent deficiencies of the traditional SDLC".
SYSTEMS DEVELOPMENT PHASES
The System Development Life Cycle framework provides a sequence of activities for system designers and developers to follow. It consists of a set of steps or phases in which each phase of the SDLC uses the results of the previous one. An SDLC adheres to important phases that are essential for developers, such as planning, analysis, design, and implementation, which are explained in the section below. A number of SDLC models have been created: waterfall, fountain, spiral, build and fix, rapid prototyping, incremental, and synchronize and stabilize. The oldest of these, and the best known, is the waterfall model: a sequence of stages in which the output of each stage becomes the input for the next. These stages can be characterized and divided up in different ways, including the following:
Project planning, feasibility study: Establishes a high-level view of the intended project and determines its goals.
Systems analysis, requirements definition: Refines project goals into defined functions and operations of the intended application. Analyzes end-user information needs.
Systems design: Describes desired features and operations in detail, including screen layouts, business rules, process diagrams, pseudocode and other documentation.
Implementation: The real code is written here.
Integration and testing: Brings all the pieces together into a special testing environment, then checks for errors, bugs and interoperability.
Acceptance, installation, deployment: The final stage of initial development, where the software is put into production and runs actual business.
Maintenance: What happens during the rest of the software's life: changes, corrections, additions, moves to a different computing platform, and more. This, the least glamorous and perhaps most important step of all, goes on seemingly forever.
These stages of the Systems Development Life Cycle are divided into ten steps, from definition to creation and modification of IT work products.
The tenth phase occurs when the system is disposed of and the task performed is either eliminated or transferred to other systems. The tasks and work products for each phase are described in subsequent chapters.
Not every project will require that the phases be sequentially executed; however, the phases are interdependent. Depending upon the size and complexity of the project, phases may be combined or may overlap.
System analysis
The goal of system analysis is to determine where the problem is in an attempt to fix the system. This step involves breaking down the system into different pieces to analyze the situation: analyzing project goals, breaking down what needs to be created, and attempting to engage users so that definite requirements can be defined. Requirements analysis sometimes requires individuals/teams from the client as well as the service provider side to get detailed and accurate requirements; often there has to be a lot of communication to and fro to understand these requirements. Requirement gathering is the most crucial aspect, as many times communication gaps arise in this phase, and this leads to validation errors and bugs in the software program.
Design
In systems design, the design functions and operations are described in detail, including screen layouts, business rules, process diagrams and other documentation. The output of this stage will describe the new system as a collection of modules or subsystems. The design stage takes as its initial input the requirements identified in the approved requirements document. For each requirement, a set of one or more design elements will be produced as a result of interviews, workshops, and/or prototype efforts. Design elements describe the desired software features in detail, and generally include functional hierarchy diagrams, screen layout diagrams, tables of business rules, business process diagrams, pseudocode, and a complete entity-relationship diagram with a full data dictionary. These design elements are intended to describe the software in sufficient detail that skilled programmers may develop the software with minimal additional input.
Testing
The code is tested at various levels in software testing. Unit, system and user acceptance testing are often performed. This is a grey area, as many different opinions exist as to what the stages of testing are and how much, if any, iteration occurs. Iteration is not generally part of the waterfall model, but some usually occurs at this stage. In the testing phase, the whole system is tested piece by piece. The following are the types of testing: defect testing, path testing, data set testing, unit testing, system testing, integration testing, black box testing, white box testing, regression testing, automation testing, user acceptance testing, and performance testing.
Operations and maintenance
The deployment of the system includes changes and enhancements before the decommissioning or sunset of the system. Maintaining the system is an important aspect of the SDLC. As key personnel change positions in the organization, new changes will be implemented, which will require system updates.
SYSTEMS ANALYSIS AND DESIGN
Systems Analysis and Design (SAD) is the process of developing Information Systems (IS) that effectively use hardware, software, data, processes, and people to support the company's business objectives.
SYSTEMS DEVELOPMENT LIFE CYCLE: MANAGEMENT AND CONTROL
SDLC Phases Related to Management Controls. The Systems Development Life Cycle (SDLC) phases serve as a programmatic guide to project activity and provide a flexible but consistent way to conduct projects to a depth matching the scope of the project. Each of the SDLC phase objectives is described in this section with key deliverables, a description of recommended tasks, and a summary of related control objectives for effective management. It is critical for the project manager to establish and monitor control objectives during each SDLC phase while executing projects. Control objectives help to provide a clear statement of the desired result or purpose and should be used throughout the entire SDLC process. Control objectives can be grouped into major categories (domains), and relate to the SDLC phases as shown in the figure. To manage and control any SDLC initiative, each project will be required to establish some degree of a Work Breakdown Structure (WBS) to capture and schedule the work necessary to complete the project. The WBS and all programmatic material should be kept in the Project Description section of the project notebook. The WBS format is mostly left to the project manager to establish in a way that best describes the project work. There are some key areas that must be defined in the WBS as part of the SDLC policy; three such key areas, addressed in the WBS in a manner established by the project manager, are described below.
WORK BREAKDOWN STRUCTURE
Work breakdown structure organization
The upper section of the Work Breakdown Structure (WBS) should identify the major phases and milestones of the project in a summary fashion. In addition, the upper section should provide an overview of the full scope and timeline of the project, and will be part of the initial project description effort leading to project approval. The middle section of the WBS is based on the seven Systems Development Life Cycle (SDLC) phases as a guide for WBS task development. The WBS elements should consist of milestones and tasks, as opposed to activities, and have a definitive period (usually two weeks or more). Each task must have a measurable output (e.g. a document, decision, or analysis). A WBS task may rely on one or more activities (e.g. software engineering, systems engineering) and may require close coordination with other tasks, either internal or external to the project. Any part of the project needing support from contractors should have a Statement of Work (SOW) written to include the appropriate tasks from the SDLC phases. The development of a SOW does not occur during a specific phase of the SDLC but is developed to include the work from the SDLC process that may be conducted by external resources such as contractors.
Baselines in the SDLC
Baselines are an important part of the Systems Development Life Cycle (SDLC). These baselines are established after four of the five phases of the SDLC and are critical to the iterative nature of the model. Each baseline is considered a milestone in the SDLC.
Functional Baseline: established after the conceptual design phase.
Allocated Baseline: established after the preliminary design phase.
Product Baseline: established after the detail design and development phase.
Updated Product Baseline: established after the production construction phase.
Complementary to the SDLC
Complementary software development methods to the Systems Development Life Cycle (SDLC) are:
Software Prototyping
Joint Applications Design (JAD)
Rapid Application Development (RAD)
Extreme Programming (XP); an extension of earlier work in prototyping and RAD
Open Source Development
End-user Development
Object-Oriented Programming
An alternative to the SDLC is Rapid Application Development, which combines prototyping, Joint Application Development and implementation of CASE tools. The advantages of RAD are speed, reduced development cost, and active user involvement in the development process.
V-MODEL (SOFTWARE DEVELOPMENT)
The V-model represents a software development process (also applicable to hardware development) which may be considered an extension of the waterfall model. Instead of moving down in a linear way, the process steps are bent upwards after the coding phase, to form the typical V shape. The V-model demonstrates the relationships between each phase of the development life cycle and its associated phase of testing. The horizontal and vertical axes represent time or project completeness (left-to-right) and level of abstraction (coarsest-grain abstraction uppermost), respectively.
VERIFICATION PHASES
Requirements analysis
In the requirements analysis phase, the requirements of the proposed system are collected by analyzing the needs of the user(s). This phase is concerned with establishing what the ideal system has to perform; however, it does not determine how the software will be designed or built. Usually, the users are interviewed and a document called the user requirements document is generated. The user requirements document will typically describe the system's functional, interface, performance, data, security, etc. requirements as expected by the user. It is used by business analysts to communicate their understanding of the system to the users. The users carefully review this document, as it will serve as the guideline for the system designers in the system design phase. The user acceptance tests are also designed in this phase, in parallel with the requirements work. There are different methods for gathering requirements of both soft and hard methodologies, including interviews, questionnaires, document analysis, observation, throw-away prototypes, use cases, and static and dynamic views with users.
System Design
Systems design is the phase where system engineers analyze and understand the business of the proposed system by studying the user requirements document. They figure out possibilities and techniques by which the user requirements can be implemented. If any of the requirements are not feasible, the user is informed of the issue, a resolution is found, and the user requirements document is edited accordingly. The software specification document, which serves as a blueprint for the development phase, is generated. This document contains the general system organization, menu structures, data structures, etc. It may also hold example business scenarios, sample windows and reports for better understanding. Other technical documentation, like entity diagrams and the data dictionary, will also be produced in this phase. The documents for system testing are prepared in this phase.
Architecture Design
The phase of the design of computer architecture and software architecture can also be referred to as high-level design. The chosen architecture should realize all the requirements; the high-level design typically consists of the list of modules, brief functionality of each module, their interface relationships, dependencies, database tables, architecture diagrams, technology details, etc. The integration testing design is carried out in this phase.
Module Design
The module design phase can also be referred to as low-level design. The designed system is broken up into smaller units or modules, and each of them is explained so that the programmer can start coding directly.
The low-level design document, or program specifications, will contain the detailed functional logic of the module, in pseudocode:
database tables, with all elements, including their type and size
all interface details with complete API references
all dependency issues
error message listings
complete inputs and outputs for a module
The unit test design is developed in this stage.
VALIDATION PHASES
Unit Testing
In computer programming, unit testing is a method by which individual units of source code are tested to determine if they are fit for use. A unit is the smallest testable part of an application. In procedural programming, a unit may be an individual function or procedure. Unit tests are created by programmers or occasionally by white box testers. The purpose is to verify the internal logic of the code by testing every possible branch within the function, also known as test coverage. Static analysis tools are used to facilitate this process, where variations of input data are passed to the function to test every possible case of execution.
Integration Testing
In integration testing, the separate modules are tested together to expose faults in the interfaces and in the interaction between integrated components. Testing is usually black box, as the code is not directly checked for errors.
System Testing
System testing compares the system specifications against the actual system. After the integration test is completed, the next test level is the system test. System testing checks whether the integrated product meets the specified requirements. Why is this still necessary after the component and integration tests? The reasons are as follows:
1. In the lower test levels, the testing was done against technical specifications, i.e., from the technical perspective of the software producer. The system test, though, looks at the system from the perspective of the customer and the future user. The testers validate whether the requirements are completely and appropriately met. Example: the customer (who has ordered and paid for the system) and the user (who uses the system) can be different groups of people or organizations with their own specific interests and requirements of the system.
2. Many functions and system characteristics result from the interaction of all system components; consequently, they are only visible on the level of the entire system and can only be observed and tested there.
User Acceptance Testing
Acceptance testing is the phase of testing used to determine whether a system satisfies the requirements specified in the requirements analysis phase. The acceptance test design is derived from the requirements document. The acceptance test phase is used by the customer to determine whether to accept the system or not. The purpose of acceptance testing is: to determine whether a system satisfies its acceptance criteria; to enable the customer to determine whether to accept the system; to test the software in the "real world" by the intended audience; and to verify that the system works according to the original needs.
Procedures
1. Define the acceptance criteria: functionality requirements, performance requirements, interface quality requirements, and overall software quality requirements.
2. Develop an acceptance plan: project description, user responsibilities, acceptance description.
3. Execute the acceptance test plan.
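Unit testing as described above can be illustrated with a small example. This sketch assumes the JUnit 4 library, which the report does not prescribe, and exercises the hypothetical PassportApplication class sketched in the System Definition section; both are assumptions for illustration only.

    import org.junit.Test;
    import static org.junit.Assert.*;

    // Hedged sketch of a unit test (JUnit 4 assumed). The unit under
    // test is the hypothetical PassportApplication workflow class.
    public class PassportApplicationTest {

        @Test
        public void newApplicationStartsAtReceived() {
            PassportApplication app = new PassportApplication(1001);
            assertEquals("File 1001: RECEIVED", app.enquire());
        }

        @Test
        public void advanceStopsAtDispatched() {
            PassportApplication app = new PassportApplication(1001);
            // Exercise every transition branch (test coverage): extra
            // calls must not move the application past the final stage.
            for (int i = 0; i < 10; i++) {
                app.advance();
            }
            assertEquals("File 1001: DISPATCHED", app.enquire());
        }
    }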
Release Testing
Release testing is a phase that determines whether the software is suitable for the end user's organisation: How will compatibility with other systems be ensured? Is the performance of the software optimized?
ENTITY RELATION DIAGRAM
DATA FLOW DIAGRAM
SHARE TRANSACTION DIAGRAM
OBJECT CLASS DIAGRAM
Implementation
JUSTIFICATION OF STUDY
CODING
Frame Name: Login
Coding:

    import java.sql.*;
    import javax.swing.JOptionPane;

    public class Login extends javax.swing.JFrame {
        // initComponents() and the jTextField1 / jPasswordField1 / jButton*
        // fields are generated by the NetBeans GUI builder (not shown here).
        Statement s;
        Connection conn;
        ResultSet rs;
        public static String utype;

        public Login() {
            initComponents();
            try {
                // Register the Oracle driver and open a connection to XE.
                DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
                conn = DriverManager.getConnection("jdbc:oracle:oci8:@xe", "scott", "tiger");
                s = conn.createStatement();
            } catch (SQLException e) {
                JOptionPane.showMessageDialog(this, "Error: " + e, "Error",
                        JOptionPane.ERROR_MESSAGE);
            }
        }

        // Exit button: hide the login window.
        private void jButton2ActionPerformed(java.awt.event.ActionEvent evt) {
            this.setVisible(false);
        }

        // Clear button: reset both input fields.
        private void jButton3ActionPerformed(java.awt.event.ActionEvent evt) {
            jTextField1.setText("");
            jPasswordField1.setText("");
        }

        // Login button: check the credentials against the add_users table.
        private void jButton1ActionPerformed(java.awt.event.ActionEvent evt) {
            try {
                String uname = jTextField1.getText();
                String pass = new String(jPasswordField1.getPassword());
                rs = s.executeQuery("select count(*) from add_users where username='"
                        + uname + "' and password='" + pass + "'");
                rs.next();
                int cnt = rs.getInt(1);
                if (cnt == 0) {
                    JOptionPane.showMessageDialog(this,
                            "Invalid Username/Password\nPlease Re-enter Username & Password",
                            "Error", JOptionPane.ERROR_MESSAGE);
                } else {
                    rs = s.executeQuery("select type from add_users where username='"
                            + uname + "' and password='" + pass + "'");
                    rs.next();
                    utype = rs.getString(1);   // remember the user type for the MDI frame
                    NewMDIApplication mdi = new NewMDIApplication();
                    mdi.setVisible(true);
                    this.setVisible(false);
                }
            } catch (SQLException e) {
                JOptionPane.showMessageDialog(this, "Error: " + e, "Error",
                        JOptionPane.ERROR_MESSAGE);
            }
        }

        public static void main(String args[]) {
            java.awt.EventQueue.invokeLater(new Runnable() {
                public void run() {
                    new Login().setVisible(true);
                }
            });
        }
    }
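The string-concatenated queries above are vulnerable to SQL injection. The following is a hedged sketch, not part of the report, of the same credential check rewritten with the PreparedStatement class described in the JDBC section; the add_users table and its columns are taken from the report's own query strings.

    import java.sql.*;

    // Sketch only: parameterized version of the login check above,
    // binding user input instead of concatenating it into the SQL.
    public class LoginCheck {
        // Returns the user type, or null when the credentials are invalid.
        public static String checkLogin(Connection conn, String uname, String pass)
                throws SQLException {
            PreparedStatement ps = conn.prepareStatement(
                    "select type from add_users where username = ? and password = ?");
            try {
                ps.setString(1, uname);   // bound, never concatenated
                ps.setString(2, pass);
                ResultSet rs = ps.executeQuery();
                String type = rs.next() ? rs.getString(1) : null;
                rs.close();
                return type;
            } finally {
                ps.close();
            }
        }
    }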
TESTING METHODS: THE BOX APPROACH
Software testing methods are traditionally divided into white-box and black-box testing. These two approaches are used to describe the point of view that a test engineer takes when designing test cases.
White box testing
White box testing is when the tester has access to the internal data structures and algorithms, including the code that implements them. The following types of white box testing exist:
API testing (application programming interface): testing of the application using public and private APIs
Code coverage: creating tests to satisfy some criteria of code coverage (e.g., the test designer can create tests to cause all statements in the program to be executed at least once)
Fault injection methods: improving the coverage of a test by introducing faults to test code paths
Mutation testing methods
Static testing: white box testing includes all static testing
Test coverage
White box testing methods can also be used to evaluate the completeness of a test suite that was created with black box testing methods. This allows the software team to examine parts of a system that are rarely tested and ensures that the most important function points have been tested. Two common forms of code coverage are function coverage, which reports on functions executed, and statement coverage, which reports on the number of lines executed to complete the test. Both return a code coverage metric, measured as a percentage.
Black box testing
Black box testing treats the software as a "black box", without any knowledge of internal implementation. Black box testing methods include: equivalence partitioning, boundary value analysis, all-pairs testing, fuzz testing, model-based testing, exploratory testing and specification-based testing.
Specification-based testing
Specification-based testing aims to test the functionality of software according to the applicable requirements. Thus, the tester inputs data into, and only sees the output from, the test object. This level of testing usually requires thorough test cases to be provided to the tester, who then can simply verify that for a given input, the output value (or behavior) either "is" or "is not" the same as the expected value specified in the test case. Specification-based testing is necessary, but it is insufficient to guard against certain risks.
Advantages and disadvantages: The black box tester has no "bonds" with the code, and a tester's perception is very simple: code must have bugs. Using the principle "ask and you shall receive", black box testers find bugs where programmers do not. On the other hand, black box testing has been said to be "like a walk in a dark labyrinth without a flashlight", because the tester doesn't know how the software being tested was actually constructed. As a result, there are situations when (1) a tester writes many test cases to check something that could have been tested by only one test case, and/or (2) some parts of the back-end are not tested at all. Therefore, black box testing has the advantage of "an unaffiliated opinion" on the one hand, and the disadvantage of "blind exploring" on the other.
Grey box testing
Grey box testing (American spelling: gray box testing) involves having knowledge of internal data structures and algorithms for purposes of designing the test cases, but testing at the user, or black-box, level.
Grey box testing
Grey box testing (American spelling: gray box testing) involves having knowledge of internal data structures and algorithms for the purpose of designing the test cases, but testing at the user, or black-box, level. Manipulating input data and formatting output do not qualify as grey box testing, because the input and output are clearly outside of the "black box" that we are calling the system under test. This distinction is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for testing. However, modifying a data repository does qualify as grey box testing, as the user would not normally be able to change the data outside of the system under test. Grey box testing may also include reverse engineering to determine, for instance, boundary values or error messages.

Tests are frequently grouped by where they are added in the software development process, or by the level of specificity of the test.

UNIT TESTING
Unit testing refers to tests that verify the functionality of a specific section of code, usually at the function level. In an object-oriented environment, this is usually at the class level, and the minimal unit tests include the constructors and destructors. These types of tests are usually written by developers as they work on code (white-box style), to ensure that a specific function works as expected. One function might have multiple tests, to catch corner cases or other branches in the code. Unit testing alone cannot verify the functionality of a piece of software; rather, it is used to ensure that the building blocks of the software work independently of each other. Unit testing is also called component testing.
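A developer-written unit test might look like the following sketch (JUnit 4 is used here as an example xUnit framework; the fee rule and the passportFee method are hypothetical):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Sketch: a unit test for a single function, written white-box style by the developer.
public class FeeCalculatorTest {

    // Hypothetical unit under test: tatkal applications pay double the base fee.
    static int passportFee(int baseFee, boolean tatkal) {
        return tatkal ? baseFee * 2 : baseFee;
    }

    @Test
    public void normalModeChargesBaseFee() {
        assertEquals(1500, passportFee(1500, false));
    }

    @Test
    public void tatkalModeChargesDouble() {
        assertEquals(3000, passportFee(1500, true));   // corner case: the other branch
    }
}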
INTEGRATION TESTING
Integration testing is any type of software testing that seeks to verify the interfaces between components against a software design. Software components may be integrated in an iterative way or all together ("big bang"). Normally the former is considered a better practice, since it allows interface issues to be localised and fixed more quickly. Integration testing works to expose defects in the interfaces and interaction between integrated components (modules). Progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a system.

SYSTEM TESTING
System testing tests a completely integrated system to verify that it meets its requirements.

System integration testing
System integration testing verifies that a system is integrated with any external or third-party systems defined in the system requirements.

REGRESSION TESTING
Regression testing focuses on finding defects after a major code change has occurred. Specifically, it seeks to uncover software regressions, or old bugs that have come back. Such regressions occur whenever software functionality that was previously working correctly stops working as intended. Typically, regressions occur as an unintended consequence of program changes, when the newly developed part of the software collides with the previously existing code. Common methods of regression testing include re-running previously run tests and checking whether previously fixed faults have re-emerged. The depth of testing depends on the phase in the release process and the risk of the added features. It can range from complete, for changes added late in the release or deemed to be risky, to very shallow, consisting of positive tests on each feature, if the changes are early in the release or deemed to be of low risk.

ACCEPTANCE TESTING
Acceptance testing can mean one of two things:
1. A smoke test used as an acceptance test prior to introducing a new build to the main testing process, i.e. before integration or regression testing.
2. Acceptance testing performed by the customer, often in their lab environment on their own hardware, known as user acceptance testing (UAT). Acceptance testing may be performed as part of the hand-off process between any two phases of development.

ALPHA TESTING
Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing.

BETA TESTING
Beta testing comes after alpha testing and can be considered a form of external user acceptance testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to increase the feedback field to a maximal number of future users.

NON-FUNCTIONAL TESTING
Special methods exist to test non-functional aspects of software. In contrast to functional testing, which establishes the correct operation of the software (correct in that it matches the expected behavior defined in the design requirements), non-functional testing verifies that the software functions properly even when it receives invalid or unexpected inputs. Software fault injection, in the form of fuzzing, is an example of non-functional testing. Non-functional testing, especially for software, is designed to establish whether the device under test can tolerate invalid or unexpected inputs, thereby establishing the robustness of input validation routines as well as error-handling routines. Various commercial non-functional testing tools are linked from the software fault injection page; there are also numerous open-source and free software tools available that perform non-functional testing.
SOFTWARE PERFORMANCE TESTING AND LOAD TESTING
Performance testing is executed to determine how fast a system or sub-system performs under a particular workload. It can also serve to validate and verify other quality attributes of the system, such as scalability, reliability and resource usage. Load testing is primarily concerned with verifying that a system can continue to operate under a specific load, whether that be large quantities of data or a large number of users. This is generally referred to as software scalability. When load testing is carried out over an extended period as a non-functional activity, it is often referred to as endurance testing. Volume testing is a way to test functionality. Stress testing is a way to test reliability. Load testing is a way to test performance. There is little agreement on what the specific goals of load testing are; the terms load testing, performance testing, reliability testing and volume testing are often used interchangeably.

Stability testing
Stability testing checks whether the software can continuously function well in or above an acceptable period. This activity of non-functional software testing is often referred to as load (or endurance) testing.

Usability testing
Usability testing is needed to check whether the user interface is easy to use and understand. It is concerned mainly with the use of the application.

Security testing
Security testing is essential for software that processes confidential data, to prevent system intrusion by hackers.
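Returning to load testing: a minimal sketch of driving many concurrent requests and timing them is shown below (doRequest is a hypothetical stand-in for a real operation, such as a database query):

import java.util.concurrent.*;

// Sketch: submit many concurrent requests and measure the total elapsed time.
public class SimpleLoadTest {
    public static void main(String[] args) throws InterruptedException {
        int users = 50, requestsPerUser = 20;
        ExecutorService pool = Executors.newFixedThreadPool(users);
        long start = System.nanoTime();
        for (int i = 0; i < users * requestsPerUser; i++) {
            pool.submit(() -> doRequest());   // stand-in for a real operation
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(users * requestsPerUser + " requests in " + elapsedMs + " ms");
    }

    static void doRequest() {
        // Simulated work; a real load test would call the system under test here.
        try { Thread.sleep(10); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}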
TESTING MODEL

Iterative and incremental development
Iterative and incremental development is at the heart of a cyclic software development process developed in response to the weaknesses of the waterfall model. It starts with initial planning and ends with deployment, with cyclic interactions in between.
[Figure: an iterative development model]

Iterative and incremental development are essential parts of the Rational Unified Process, Extreme Programming and, more generally, the various agile software development frameworks. The approach follows a process similar to the plan-do-check-act cycle of business process improvement.

The basic idea
A common mistake is to consider "iterative" and "incremental" as synonyms, which they are not. In software/systems development, however, they typically go hand in hand. The basic idea is to develop a system through repeated cycles (iterative) and in smaller portions at a time (incremental), allowing software developers to take advantage of what was learned during development of earlier parts or versions of the system. Learning comes from both the development and the use of the system. Where possible, key steps in the process start with a simple implementation of a subset of the software requirements and iteratively enhance the evolving versions until the full system is implemented. At each iteration, design modifications are made and new functional capabilities are added.

The procedure itself consists of the initialization step, the iteration step, and the project control list. The initialization step creates a base version of the system. The goal of this initial implementation is to create a product to which the user can react. It should offer a sampling of the key aspects of the problem and provide a solution that is simple enough to understand and implement easily. To guide the iteration process, a project control list is created that contains a record of all tasks that need to be performed. It includes such items as new features to be implemented and areas of redesign of the existing solution. The control list is constantly revised as a result of the analysis phase.

Each iteration involves the redesign and implementation of a task from the project control list, and the analysis of the current version of the system. The goal of the design and implementation of any iteration is to be simple, straightforward and modular, supporting redesign at that stage or as a task added to the project control list. The level of design detail is not dictated by the iterative approach. In a lightweight iterative project the code may represent the major source of documentation of the system; in a critical iterative project, however, a formal Software Design Document may be used. The analysis of an iteration is based upon user feedback and the program analysis facilities available. It involves analysis of the structure, modularity, usability, reliability, efficiency and achievement of goals. The project control list is modified in light of the analysis results.
[Figure: iterative development]

Phases
Incremental development slices the system functionality into increments (portions). In each increment, a slice of functionality is delivered through cross-discipline work, from the requirements to the deployment. The unified process groups increments/iterations into phases: inception, elaboration, construction, and transition.
Inception identifies the project scope, risks and requirements (functional and non-functional) at a high level, but in enough detail that work can be estimated.
Elaboration delivers a working architecture that mitigates the top risks and fulfils the non-functional requirements.
Construction incrementally fills in the architecture with production-ready code produced from analysis, design, implementation and testing of the functional requirements.
Transition delivers the system into the production operating environment.
Each of the phases may be divided into one or more iterations, which are usually time-boxed rather than feature-boxed. Architects and analysts work one iteration ahead of developers and testers to keep their work-product backlog full.
[Figure: the unmodified "waterfall model"; progress flows from the top to the bottom, like a waterfall]

Contrast with waterfall development
Waterfall development completes the project-wide work-products of each discipline in one step before moving on to the next discipline in the next step. Business value is delivered all at once, and only at the very end of the project. Backtracking is possible in an iterative approach.

Implementation guidelines
Guidelines that drive the implementation and analysis include:
Any difficulty in designing, coding or testing a modification should signal the need for redesign or re-coding.
Modifications should fit easily into isolated and easy-to-find modules. If they do not, some redesign is possibly needed.
Modifications to tables should be especially easy to make. If any table modification is not quickly and easily done, redesign is indicated.
Modifications should become easier to make as the iterations progress. If they are not, there is a basic problem such as a design flaw or a proliferation of patches.
Patches should normally be allowed to exist for only one or two iterations. Patches may be necessary to avoid redesigning during an implementation phase.
The existing implementation should be analysed frequently to determine how well it measures up to project goals.
Program analysis facilities should be used whenever available to aid in the analysis of partial implementations.
User reaction should be solicited and analysed for indications of deficiencies in the current implementation.
TEST-DRIVEN DEVELOPMENT PROCESS

Test-driven development (TDD) is a software development process that relies on the repetition of a very short development cycle: first the developer writes a failing automated test case that defines a desired improvement or new function, then produces code to pass that test, and finally refactors the new code to acceptable standards. Kent Beck, who is credited with having developed or 'rediscovered' the technique, stated in 2003 that TDD encourages simple designs and inspires confidence. Test-driven development is related to the test-first programming concepts of extreme programming, begun in 1999, but has more recently created more general interest in its own right. Programmers also apply the concept to improving and debugging legacy code developed with older techniques.

Requirements
Test-driven development requires developers to create automated unit tests that define code requirements immediately before writing the code itself. The tests contain assertions that are either true or false. Passing the tests confirms correct behavior as developers evolve and refactor the code. Developers often use testing frameworks, such as xUnit, to create and automatically run sets of test cases.

TEST-DRIVEN DEVELOPMENT CYCLE
[Figure: a graphical representation of the development cycle, using a basic flowchart]

The following sequence is based on the book Test-Driven Development by Example.

Add a test
In test-driven development, each new feature begins with writing a test. This test must inevitably fail because it is written before the feature has been implemented. (If it does not fail, then either the proposed new feature already exists or the test is defective.) To write a test, the developer must clearly understand the feature's specification and requirements. The developer can accomplish this through use cases and user stories that cover the requirements and exception conditions. This could also imply a variant, or modification, of an existing test. This is a differentiating feature of test-driven development versus writing unit tests after the code is written: it makes the developer focus on the requirements before writing the code, a subtle but important difference.

Run all tests and see if the new one fails
This validates that the test harness is working correctly and that the new test does not mistakenly pass without requiring any new code. This step also tests the test itself, in the negative: it rules out the possibility that the new test will always pass, and therefore be worthless. The new test should also fail for the expected reason. This increases confidence (although it does not entirely guarantee) that it is testing the right thing, and will pass only in intended cases.

Write some code
The next step is to write some code that will cause the test to pass. The new code written at this stage will not be perfect and may, for example, pass the test in an inelegant way. That is acceptable because later steps will improve and hone it. It is important that the code written is only designed to pass the test; no further (and therefore untested) functionality should be predicted and 'allowed for' at any stage.

Run the automated tests and see them succeed
If all test cases now pass, the programmer can be confident that the code meets all the tested requirements. This is a good point from which to begin the final step of the cycle.

Refactor code
Now the code can be cleaned up as necessary. By re-running the test cases, the developer can be confident that refactoring is not damaging any existing functionality. The concept of removing duplication is an important aspect of any software design. In this case, it also applies to removing any duplication between the test code and the production code, for example magic numbers or strings that were repeated in both in order to make the test pass in step 3.

Repeat
Starting with another new test, the cycle is then repeated to push forward the functionality. The size of the steps should always be small, with as few as 1 to 10 edits between each test run. If new code does not rapidly satisfy a new test, or other tests fail unexpectedly, the programmer should undo or revert in preference to excessive debugging. Continuous integration helps by providing revertible checkpoints. When using external libraries, it is important not to make increments that are so small as to be effectively testing merely the library itself, unless there is some reason to believe that the library is buggy or is not sufficiently feature-complete to serve all the needs of the main program being written.
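One complete red/green pass might look like the following sketch (the Passport class and its validity rule are hypothetical, and JUnit 4 stands in for the xUnit framework mentioned above):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Step 1 (red): the tests are written first; they fail (indeed, do not even
// compile) because validityYears() does not exist yet.
public class ValidityTest {

    @Test
    public void minorPassportsAreValidForFiveYears() {
        assertEquals(5, Passport.validityYears(12));
    }

    @Test
    public void adultPassportsAreValidForTenYears() {
        assertEquals(10, Passport.validityYears(30));
    }
}

// Step 3 (green): the simplest code that makes both tests pass.
class Passport {
    static int validityYears(int ageInYears) {
        return ageInYears < 18 ? 5 : 10;
    }
}

Once the tests pass, the refactor step would remove any duplication before the next test is added.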
Development style
There are various aspects to using test-driven development, for example the principles of "keep it simple, stupid" (KISS) and "you ain't gonna need it" (YAGNI). By focusing on writing only the code necessary to pass tests, designs can be cleaner and clearer than is often achieved by other methods. In Test-Driven Development by Example, Kent Beck also suggests the principle "fake it till you make it". To achieve some advanced design concept (such as a design pattern), tests are written that will generate that design. The code may remain simpler than the target pattern, but still pass all required tests. This can be unsettling at first, but it allows the developer to focus only on what is important.

Write the tests first. The tests should be written before the functionality that is being tested. This has been claimed to have two benefits. It helps ensure that the application is written for testability, as the developers must consider how to test the application from the outset rather than worrying about it later. It also ensures that tests for every feature will be written. When writing feature-first code, there is a tendency for developers and development organisations to push on to the next feature, neglecting testing entirely. The first test might not even compile at first, because all of the classes and methods it requires may not yet exist. Nevertheless, that first test functions as an executable specification.

First fail the test cases. The idea is to ensure that the test really works and can catch an error. Once this is shown, the underlying functionality can be implemented. This has been coined the "test-driven development mantra", known as red/green/refactor, where red means fail and green means pass. Test-driven development constantly repeats the steps of adding test cases that fail, passing them, and refactoring. Receiving the expected test results at each stage reinforces the programmer's mental model of the code, boosts confidence and increases productivity.

Advanced practices of test-driven development can lead to acceptance test-driven development (ATDD), where the criteria specified by the customer are automated into acceptance tests, which then drive the traditional unit test-driven development (UTDD) process. This process ensures the customer has an automated mechanism to decide whether the software meets their requirements. With ATDD, the development team has a specific target to satisfy, the acceptance tests, which keeps them continuously focused on what the customer really wants from each user story.

Benefits
A 2005 study found that using TDD meant writing more tests and, in turn, programmers who wrote more tests tended to be more productive. Hypotheses relating to code quality, and a more direct correlation between TDD and productivity, were inconclusive. Programmers using pure TDD on new ("greenfield") projects report that they only rarely feel the need to invoke a debugger. Used in conjunction with a version control system, when tests fail unexpectedly, reverting the code to the last version that passed all tests may often be more productive than debugging. Test-driven development offers more than just simple validation of correctness; it can also drive the design of a program. By focusing on the test cases first, one must imagine how the functionality will be used by clients (in the first case, the test cases). So the programmer is concerned with the interface before the implementation. This benefit is complementary to design by contract, as it approaches code through test cases rather than through mathematical assertions or preconceptions.
Test-driven development offers the ability to take small steps when required. It allows a programmer to focus on the task at hand, as the first goal is to make the test pass. Exceptional cases and error handling are not considered initially, and tests to create these extraneous circumstances are implemented separately. Test-driven development ensures in this way that all written code is covered by at least one test. This gives the programming team, and subsequent users, a greater level of confidence in the code.

While it is true that more code is required with TDD than without TDD, because of the unit test code, total code implementation time is typically shorter. Large numbers of tests help to limit the number of defects in the code. The early and frequent nature of the testing helps to catch defects early in the development cycle, preventing them from becoming endemic and expensive problems. Eliminating defects early in the process usually avoids lengthy and tedious debugging later in the project.

TDD can lead to more modularized, flexible, and extensible code. This effect often comes about because the methodology requires that the developers think of the software in terms of small units that can be written and tested independently and integrated together later. This leads to smaller, more focused classes, looser coupling, and cleaner interfaces. The use of the mock object design pattern also contributes to the overall modularization of the code, because this pattern requires that the code be written so that modules can be switched easily between mock versions for unit testing and "real" versions for deployment.

Because no more code is written than necessary to pass a failing test case, automated tests tend to cover every code path. For example, in order for a TDD developer to add an else branch to an existing if statement, the developer would first have to write a failing test case that motivates the branch. As a result, the automated tests resulting from TDD tend to be very thorough: they will detect any unexpected change in the code's behaviour. This detects problems that can arise when a change later in the development cycle unexpectedly alters other functionality.

Vulnerabilities
Test-driven development is difficult to use in situations where full functional tests are required to determine success or failure. Examples are user interfaces, programs that work with databases, and some programs that depend on specific network configurations. TDD encourages developers to put the minimum amount of code into such modules and to maximize the logic that is in testable library code, using fakes and mocks to represent the outside world.

Management support is essential. Without the entire organization believing that test-driven development is going to improve the product, management may feel that time spent writing tests is wasted.

Unit tests created in a test-driven development environment are typically created by the developer who will also write the code that is being tested. The tests may therefore share the same blind spots as the code: if, for example, a developer does not realize that certain input parameters must be checked, most likely neither the test nor the code will verify those parameters. If the developer misinterprets the requirements specification for the module being developed, both the tests and the code will be wrong.
The high number of passing unit tests may bring a false sense of security, resulting in fewer additional software testing activities, such as integration testing and compliance testing.

The tests themselves become part of the maintenance overhead of a project. Badly written tests, for example ones that include hard-coded error strings or are themselves prone to failure, are expensive to maintain. This is especially the case with fragile tests. There is a risk that tests that regularly generate false failures will be ignored, so that when a real failure occurs it may not be detected. It is possible to write tests for low and easy maintenance, for example by the reuse of error strings, and this should be a goal during the code refactoring phase described above.

The level of coverage and testing detail achieved during repeated TDD cycles cannot easily be re-created at a later date. These original tests therefore become increasingly precious as time goes by. If a poor architecture, a poor design or a poor testing strategy leads to a late change that makes dozens of existing tests fail, it is important that they are individually fixed. Merely deleting, disabling or rashly altering them can lead to undetectable holes in the test coverage.

Code visibility
Test suite code clearly has to be able to access the code it is testing. On the other hand, normal design criteria such as information hiding, encapsulation and the separation of concerns should not be compromised. Therefore, unit test code for TDD is usually written within the same project or module as the code being tested. In object-oriented design this still does not provide access to private data and methods, so extra work may be necessary for unit tests. In Java and other languages, a developer can use reflection to access fields that are marked private. Alternatively, an inner class can be used to hold the unit tests, so that they have visibility of the enclosing class's members and attributes. In the .NET Framework and some other programming languages, partial classes may be used to expose private methods and data for the tests to access.

It is important that such testing hacks do not remain in the production code. In C and other languages, compiler directives such as #if DEBUG ... #endif can be placed around such additional classes, and indeed all other test-related code, to prevent them being compiled into the released code. This means that the released code is not exactly the same as that which was unit tested. The regular running of fewer but more comprehensive end-to-end integration tests on the final release build can then ensure (among other things) that no production code exists that subtly relies on aspects of the test harness.

There is some debate among practitioners of TDD, documented in their blogs and other writings, as to whether it is wise to test private and protected methods and data at all. Some argue that it should be sufficient to test any class through its public interface, as the private members are a mere implementation detail that may change, and should be allowed to do so without breaking large numbers of tests. Others say that crucial aspects of functionality may be implemented in private methods, and that testing them only indirectly via the public interface merely obscures the issue: unit testing is about testing the smallest unit of functionality possible.
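The reflection approach mentioned above might look like the following sketch (the Applicant class and its private status field are hypothetical):

import java.lang.reflect.Field;

// Sketch: a test reading a private field through reflection.
public class ReflectionAccessDemo {

    static class Applicant {
        private String status = "PENDING";   // not visible to normal callers
    }

    public static void main(String[] args) throws Exception {
        Applicant a = new Applicant();
        Field f = Applicant.class.getDeclaredField("status");
        f.setAccessible(true);   // bypass the private modifier for the test
        System.out.println("status = " + f.get(a));   // prints: status = PENDING
    }
}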
Fakes, mocks and integration tests
Unit tests are so named because they each test one unit of code. A complex module may have a thousand unit tests and a simple module may have only ten. The tests used for TDD should never cross process boundaries in a program, let alone network connections. Doing so introduces delays that make tests run slowly and discourage developers from running the whole suite. Introducing dependencies on external modules or data also turns unit tests into integration tests. If one module misbehaves in a chain of interrelated modules, it is not so immediately clear where to look for the cause of the failure.

When code under development relies on a database, a web service, or any other external process or service, enforcing a unit-testable separation is also an opportunity and a driving force to design more modular, more testable and more reusable code. Two steps are necessary (a Java sketch of both steps appears at the end of this section):
1. Whenever external access is going to be needed in the final design, an interface should be defined that describes the access that will be available. See the dependency inversion principle for a discussion of the benefits of doing this regardless of TDD.
2. The interface should be implemented in two ways, one of which really accesses the external process, and the other of which is a fake or mock.

Fake objects need do little more than add a message such as "Person object saved" to a trace log, against which a test assertion can be run to verify correct behaviour. Mock objects differ in that they themselves contain test assertions that can make the test fail, for example if the person's name and other data are not as expected. Fake and mock object methods that return data, ostensibly from a data store or user, can help the test process by always returning the same, realistic data that tests can rely upon. They can also be set into predefined fault modes so that error-handling routines can be developed and reliably tested. Fake services other than data stores may also be useful in TDD: a fake encryption service may not, in fact, encrypt the data passed; a fake random number service may always return 1. Fake or mock implementations are examples of dependency injection.

A corollary of such dependency injection is that the actual database or other external-access code is never tested by the TDD process itself. To avoid errors that may arise from this, other tests are needed that instantiate the test-driven code with the real implementations of the interfaces discussed above. These tests are quite separate from the TDD unit tests, and are really integration tests. There will be fewer of them, and they need to be run less often than the unit tests. They can nonetheless be implemented using the same testing framework, such as xUnit.

Integration tests that alter any persistent store or database should always be designed carefully with consideration of the initial and final state of the files or database, even if any test fails. This is often achieved using some combination of the following techniques:
The TearDown method, which is integral to many test frameworks.
try...catch...finally exception handling structures, where available.
Database transactions, where a transaction atomically includes perhaps a write, a read and a matching delete operation.
Taking a snapshot of the database before running any tests and rolling back to the snapshot after each test run. This may be automated using a framework such as Ant or NAnt, or a continuous integration system such as CruiseControl.
Initialising the database to a clean state before tests, rather than cleaning up after them.
This last approach may be preferable where cleaning up after a test would delete the final state of the database before a failure can be diagnosed in detail.
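The two steps described above might be realised as in the following sketch (the PoliceVerification interface, both implementations and the PassportIssuer class are hypothetical):

// Step 1: an interface describes the external access the code will need.
interface PoliceVerification {
    boolean isCleared(int applicantId);
}

// Step 2a: the real implementation would contact the police department's system.
class RealPoliceVerification implements PoliceVerification {
    public boolean isCleared(int applicantId) {
        throw new UnsupportedOperationException("calls the external service");
    }
}

// Step 2b: a fake for unit tests; it records the call and returns canned data.
class FakePoliceVerification implements PoliceVerification {
    int lastAskedId = -1;   // trace against which a test assertion can be run
    public boolean isCleared(int applicantId) {
        lastAskedId = applicantId;
        return true;        // predefined "always cleared" response
    }
}

// The code under test receives the dependency instead of creating it itself.
class PassportIssuer {
    private final PoliceVerification police;
    PassportIssuer(PoliceVerification police) { this.police = police; }

    String decide(int applicantId) {
        return police.isCleared(applicantId) ? "ISSUE" : "HOLD";
    }
}

A unit test constructs PassportIssuer with the fake and asserts on decide() and lastAskedId; a separate, less frequently run integration test uses the real implementation.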
[Screenshot: Check Documents screen. Enter the applicant id, click Search, and scroll to see the complete information; click each button one by one to see the documents.]

Reports:
1. Print Receipt
2. Total application form report
3. Passport departments list
4. Police department list
5. Employee information report
6. Passports information report
[Screenshots: for each report, enter the applicant id, click Show, and then click Print.]

Utilities:
1. Change Menu bar Color
2. Database Backup (click to take a database backup; the backup file is saved in the Data Backup folder)
3. Digital clock

Technical helpline:
[Screenshot: technical helpline window.]

Help menu:
User manual

Exit Confirmation:
[Screenshot: exit confirmation dialog; one button exits the application, the other returns to it.]
DATABASE TABLE STRUCTURES
Table Definition: 1
Table Name : applicant
Primary Key : a_id
Foreign Key : pass_id, e_id, p_id

Column Definition:
Column Name | Data Type | Width | Allow Null | Default | Description
a_id | NUMBER | 5 | N | - | Applicant id, system-generated auto number
aname | VARCHAR2 | 20 | N | - | Applicant first name
lname | VARCHAR2 | 20 | N | - | Applicant last name
sex | VARCHAR2 | 10 | N | - | Sex, male/female/other
dob | DATE | - | N | - | Date of birth
m_status | VARCHAR2 | 10 | N | - | Marital status
fname | VARCHAR2 | 30 | N | - | Father's name
mname | VARCHAR2 | 30 | N | - | Mother's name
sname | VARCHAR2 | 30 | Y | - | Spouse's name
nation | VARCHAR2 | 20 | N | - | Nationality
cat | VARCHAR2 | 10 | N | - | Category
emptype | VARCHAR2 | 20 | N | - | Employment type
imark1 | VARCHAR2 | 100 | Y | - | Identification mark 1
imark2 | VARCHAR2 | 100 | Y | - | Identification mark 2
police_case | VARCHAR2 | 100 | N | - | Any police case, details
pan_no | VARCHAR2 | 10 | Y | - | PAN number
voter_no | VARCHAR2 | 10 | N | - | Voter id number
adhar_no | VARCHAR2 | 12 | Y | - | Aadhaar card number
exam | VARCHAR2 | 20 | Y | - | Last passed exam
roll_no | VARCHAR2 | 20 | Y | - | Roll number
perc | VARCHAR2 | 6 | Y | - | Percentage
board | VARCHAR2 | 30 | Y | - | Board/university
yop | VARCHAR2 | 4 | Y | - | Year of passing
address | VARCHAR2 | 50 | N | - | Permanent address
city | VARCHAR2 | 20 | N | - | City
pincode | VARCHAR2 | 6 | N | - | Pincode
state | VARCHAR2 | 25 | N | - | State
country | VARCHAR2 | 25 | N | - | Country
mob_no | VARCHAR2 | 10 | N | - | Mobile number
email | VARCHAR2 | 35 | N | - | Email address
refname | VARCHAR2 | 30 | N | - | Reference name
refaddress | VARCHAR2 | 50 | N | - | Reference address
refmob | VARCHAR2 | 10 | N | - | Reference mobile number
pass_id | NUMBER | 2 | N | - | Passport id, exists in passport table
a_status | VARCHAR2 | 30 | N | - | Application status
a_date | DATE | - | N | SYSDATE | Application registration date
e_id | NUMBER | 3 | N | - | Employee id, exists in employee table
p_id | NUMBER | 4 | N | - | Police dept id, exists in police_dept table
Table Definition: 2
Table Name : a_document [Applicant Documents Table]
Primary Key : (none)
Foreign Key : a_id

Column Definition:
Column Name | Data Type | Width | Allow Null | Default | Description
a_id | NUMBER | 5 | N | - | Applicant id, exists in applicant table
b_cer | BLOB | - | N | - | Birth certificate
add_proof | BLOB | - | N | - | Address proof
voter_card | BLOB | - | N | - | Voter card
pan_card | BLOB | - | N | - | PAN card
sign | BLOB | - | N | - | Signature
photo | BLOB | - | N | - | Applicant photo
challan | BLOB | - | N | - | Bank challan copy
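Each document column is a BLOB, so a row is typically populated by streaming a file into the column. A minimal JDBC sketch follows (the thin-driver URL, file name and applicant id are assumptions):

import java.io.FileInputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Sketch: streaming an applicant's photo into the a_document photo BLOB.
public class DocumentUpload {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@localhost:1521:xe", "scott", "tiger");
             FileInputStream in = new FileInputStream("photo.jpg");
             PreparedStatement ps = conn.prepareStatement(
                     "UPDATE a_document SET photo = ? WHERE a_id = ?")) {
            ps.setBinaryStream(1, in);   // stream the image into the BLOB column
            ps.setInt(2, 10001);         // hypothetical applicant id
            ps.executeUpdate();
        }
    }
}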
Table Definition: 3
Table Name : passport
Primary Key : pass_id
Foreign Key : (none)

Column Definition:
Column Name | Data Type | Width | Allow Null | Default | Description
pass_id | NUMBER | 2 | N | - | Passport id, system-generated auto number
type | VARCHAR2 | 25 | N | - | Passport type
duration | VARCHAR2 | 2 | N | - | Duration in days
valid | VARCHAR2 | 2 | N | - | Validity in years
pages | VARCHAR2 | 2 | N | - | Number of pages
mode | VARCHAR2 | 20 | N | - | Mode, such as normal/tatkal
fees | NUMBER | 4,2 | N | - | Fees amount
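As a sketch, the passport definition above could be created from Java through JDBC as follows (the connection details are assumptions; column widths are transcribed exactly as documented):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Sketch: creating the passport table as documented above.
public class CreatePassportTable {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@localhost:1521:xe", "scott", "tiger");
             Statement st = conn.createStatement()) {
            st.executeUpdate(
                "CREATE TABLE passport ("
                + " pass_id  NUMBER(2) PRIMARY KEY,"    // system-generated auto number
                + " type     VARCHAR2(25) NOT NULL,"    // passport type
                + " duration VARCHAR2(2)  NOT NULL,"    // duration in days
                + " valid    VARCHAR2(2)  NOT NULL,"    // validity in years
                + " pages    VARCHAR2(2)  NOT NULL,"    // number of pages
                + " mode     VARCHAR2(20) NOT NULL,"    // normal/tatkal
                + " fees     NUMBER(4,2)  NOT NULL)");  // width as documented
            System.out.println("passport table created");
        }
    }
}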
Table Definition: 4
Table Name : police_dept
Primary Key : p_id
Foreign Key : (none)

Column Definition:
Column Name | Data Type | Width | Allow Null | Default | Description
p_id | NUMBER | 4 | N | - | Police dept. id, system-generated auto number
pname | VARCHAR2 | 30 | N | - | Police dept. name
address | VARCHAR2 | 50 | N | - | Address
city | VARCHAR2 | 20 | N | - | City
pincode | VARCHAR2 | 6 | N | - | Pincode
state | VARCHAR2 | 25 | N | - | State
country | VARCHAR2 | 25 | N | - | Country
ph_no | VARCHAR2 | 12 | N | - | Phone number
fax_no | VARCHAR2 | 12 | Y | - | Fax number
mob_no | VARCHAR2 | 10 | Y | - | Mobile number
email | VARCHAR2 | 35 | N | - | Email address
Table Definition: 5
Table Name : payment
Primary Key : (none)
Foreign Key : a_id, pass_id

Column Definition:
Column Name | Data Type | Width | Allow Null | Default | Description
a_id | NUMBER | 5 | N | - | Applicant id, exists in applicant table
pass_id | NUMBER | 2 | N | - | Passport id, exists in passport table
dop | DATE | - | N | - | Date of payment
ifsc | VARCHAR2 | 15 | N | - | IFSC code of branch
dd_no | VARCHAR2 | 10 | N | - | Challan number
bank_name | VARCHAR2 | 30 | N | - | Bank name
branch_name | VARCHAR2 | 30 | N | - | Branch name
amount | NUMBER | 4,2 | N | - | Amount
ver_status | VARCHAR2 | 5 | N | - | Payment details verification status
Table Definition: 6
Table Name : employee
Primary Key : e_id
Foreign Key : id

Column Definition:
Column Name | Data Type | Width | Allow Null | Default | Description
e_id | NUMBER | 3 | N | - | Employee id, auto-generated number
ename | VARCHAR2 | 30 | N | - | Employee name
sex | VARCHAR2 | 10 | N | - | Sex
design | VARCHAR2 | 20 | N | - | Designation
doj | DATE | - | N | - | Date of joining
dob | DATE | - | N | - | Date of birth
cat | VARCHAR2 | 10 | N | - | Category
edu | VARCHAR2 | 20 | N | - | Education qualification
address | VARCHAR2 | 50 | N | - | Employee address
city | VARCHAR2 | 20 | N | - | City
pincode | VARCHAR2 | 6 | N | - | Pincode
state | VARCHAR2 | 25 | N | - | State
mob_no | VARCHAR2 | 10 | N | - | Mobile number
email | VARCHAR2 | 35 | N | - | Login email address
password | VARCHAR2 | 10 | N | - | Login password
role | VARCHAR2 | 25 | N | - | Role in software, admin/operator
id | NUMBER | 2 | N | - | Department id, exists in passport_dept table
Table Definition: 7
Table Name : passport_dept
Primary Key : id
Foreign Key : (none)

Column Definition:
Column Name | Data Type | Width | Allow Null | Default | Description
id | NUMBER | 2 | N | - | Passport dept. id, auto-generated number
dname | VARCHAR2 | 25 | N | - | Department name
address | VARCHAR2 | 50 | N | - | Department address
city | VARCHAR2 | 20 | N | - | City
pincode | VARCHAR2 | 6 | N | - | Pincode
state | VARCHAR2 | 25 | N | - | State
country | VARCHAR2 | 25 | N | - | Country
ph_no | VARCHAR2 | 12 | N | - | Phone number
fax_no | VARCHAR2 | 12 | N | - | Fax number
mob_no | VARCHAR2 | 10 | N | - | Mobile number
email | VARCHAR2 | 35 | N | - | Email address
USER MANUAL
USER MANUAL

Frames Used: Login User, Add User, Edit User, Home Page, Add Product, Edit Products, All Products Details, Search Products, Add Customer, Edit Customer, All Customer Details, Search Customer, Purchase Items, Purchase Details, Add Supplier, Sales, Stock, About Us, Help.

Login: The user logs in through this frame. Two types of user can be created, administrator and restricted. An administrator can run every part of the application, whereas a restricted user can run only a few of them.
Add User: Through this frame a new user account can be created, so that the user can log in and run the application.
Edit User: This frame is used to edit an existing user account, i.e. if a user wants to change his password or any other information, it can be changed here.
Home Page (MDI Application): This is the main frame of the project, from which every part of the application can be run.
Add Product: This is the main frame for adding a new product. In this frame the product code, name, company, features, manufacturing date, etc. are required.
Edit Product: This frame is used to edit the information about a product.
All Product Details: This frame shows all details of all products, i.e. code, name, company, features, manufacturing date, etc.
Search Products: This frame is used to search for any product by entering the product name or product code in the search box. It then shows all information about that product.
Add Customer: This frame is used to add a new customer to the customer list.
Edit Customer: This frame is used to edit customer information.
All Customer Details: This frame shows all details of the customers that have been added, i.e. customer name, code, address, telephone, city, state, etc.
Search Customer: This frame is used to search for a customer by entering the customer name or customer code; it then shows the details of that customer.
Purchase Items: This frame is used for entering purchased items with their dealer price, MRP and some other details.
Purchase Details: This frame is used to enter the details of the products that have been purchased, after entering the purchase items.
Add Supplier: This frame is used to add the suppliers from whom products are purchased.
Sales: This frame is used for the sale of items or products; afterwards it also shows the bill.
Stock: This frame shows the stock in hand after purchases and sales.
About Us: This frame gives information about the software and the team members who made it.
Help: This frame lists all shortcuts used in the software and provides help for any query.

Tables Used: ADD_USERS, NEWCUSTOMER, PRODUCT_MASTER, PURCHASE_DETAIL, PURCHASE_MASTER, SALES_DETAILS, SALES_MASTER, STOCK, SUPPLIER.

Shortcuts Used:
Add User: Alt+N
Edit User: Alt+M
All Users: Alt+U
Add Product: Ctrl+P
Edit Products: Ctrl+E
All Products Details: Ctrl+A
All Products List: Ctrl+Shift+A
Search Products: Ctrl+F
Add Customer: Alt+C
Edit Customer: Alt+E
All Customer Details: Alt+A
All Customer List: Alt+Shift+A
Search Customer: Alt+F
Purchase Items: Ctrl+Shift+P
Add Supplier: Ctrl+Shift+S
Sales: Ctrl+Shift+N
Bill Details: Ctrl+Shift+B
Stock: Alt+Shift+S
About Us: Ctrl+V
SWOT ANALYSIS
STRENGTHS AND WEAKNESSES
Few people in the modern computing world would use a strict waterfall model for their Systems Development Life Cycle (SDLC), as many modern methodologies have superseded this thinking. Some will argue that the SDLC no longer applies to models like agile computing, but it is still a term widely used in technology circles. The SDLC practice has advantages in traditional models of software development that lend themselves to a structured environment. The disadvantage of using the SDLC methodology appears when there is a need for iterative development (e.g. web development or e-commerce), where stakeholders need to review the software being designed on a regular basis. Instead of viewing the SDLC from a strength-or-weakness perspective, it is far more important to take the best practices from the SDLC model and apply them to whatever may be most appropriate for the software being designed. A comparison of the strengths and weaknesses of the SDLC:
Strengths | Weaknesses
Control. | Increased development time.
Monitor large projects. | Increased development cost.
Detailed steps. | Systems must be defined up front.
Evaluate costs and completion targets. | Rigidity.
Documentation. | Hard to estimate costs; project overruns.
Well-defined user input. | User input is sometimes limited.
Ease of maintenance. |
Development and design standards. |
Tolerates changes in MIS staffing. |
CONCLUSION AND FUTURE SCOPE
CONCLUSION AND SCOPE OF THE PROJECT
The system has scope for improvement in the functions it performs. Future versions of the application can offer options such as issuing smart cards to regular users, providing networked access, online services and other facilities.

REFERENCES
BIBLIOGRAPHY
1. Herbert Schildt, Java 2: The Complete Reference, McGraw-Hill.
2. Herbert Schildt, Java Programming Cookbook, McGraw-Hill.
3. Deepak Alur, John Crupi, and Dan Malks, Core J2EE Patterns: Best Practices and Design Strategies (Second Edition), Prentice Hall, 2003.