PROJECT REPORT
ON
Taste Buds –
The Food Ordering System
Done by:
This is to certify that Miss Gurpreet Kaur and Miss Shefali Sharma, from
MCA 4th Semester, have developed the software project titled “Taste Buds –
The Food Ordering System” as partial fulfillment of the requirements for the award of the MCA degree.
EXTERNAL
ACKNOWLEDGMENT
I express my thanks and gratitude to Almighty God, my parents and other family
members and friends, without whose unconditional support I could not have made this career in XXXX.
I wish to place on record my deep sense of gratitude to my project guide, Mr. XXXXX, xxx
Software Solutions, Hyderabad, for his constant motivation and valuable help throughout the project
work. I express my gratitude to Mr. XXXX, Director of XXXXX Institute of Management & Computer
Sciences, for his valuable suggestions and advice throughout the XXX course. I also extend my
thanks to the other faculty members for their cooperation during my course.
Finally, I would like to thank my friends for their cooperation in completing this project.
XXXXXXX
ABSTRACT
The aim is to develop an efficient intranet food ordering application that can be
integrated with the hotel administration software of a typical five star hotel.
1. Title of the project : Taste Buds – The food ordering solution
2. Abstract:
The main objective of this project is to develop a system for a Five Star Hotel in which
its customers can order recipes online from anywhere. The system will help the users by
displaying the list of recipe items available in the restaurant along with the offers available
for those recipe items. The system will also display the images of the recipe items along
with the list of items. Access to the system in the restaurant will be given to the
Administrator with a user name and password.
3. Modules:
The current application is divided into the following modules, which are closely
integrated with each other.
Customers
Recipes
Order
Shopping Cart
4. Requirements:
Hardware requirements:
Content      Description
HDD          20 GB minimum, 40 GB recommended
RAM          1 GB minimum, 2 GB recommended
Software requirements:
Content        Description
OS             Windows XP with SP2 or Windows Vista
Database       MS SQL Server 2005
Technologies   ASP.NET with C#.NET
IDE            MS Visual Studio .NET 2008
Browser        Internet Explorer
ORGANIZATION PROFILE
CONTENTS
1. INTRODUCTION
INTRODUCTION TO PROJECT
PURPOSE OF THE PROJECT
PROBLEM IN EXISTING SYSTEM
SOLUTION OF THESE PROBLEMS
2. SYSTEM ANALYSIS
3. FEASIBILITY REPORT
5. SELECTED SOFTWARE
5.1. INTRODUCTION TO .NET FRAMEWORK
5.2. ASP.NET
5.3. Java Script
5.4. IIS 6.0
5.5. C#.NET
5.6. SQL SERVER
6. SYSTEM DESIGN
6.1. INTRODUCTION
6.2. NORMALIZATION
6.3. E-R DIAGRAM
6.4. DATA FLOW DIAGRAMS
6.5. DATA DICTIONARY
7. OUTPUT SCREENS
8. SYSTEM TESTING AND IMPLEMENTATION
8.1. INTRODUCTION
8.2. TESTING OBJECTIVES
8.3. UNIT TESTING
8.4. TEST PLAN
8.5. IMPLEMENTATION
9. SYSTEM SECURITY
9.1. INTRODUCTION
9.2. SECURITY IN SOFTWARE
10. CONCLUSION
12. BIBLIOGRAPHY
Chapter 1
1. INTRODUCTION
1.1. INTRODUCTION & OBJECTIVES
The main objective of this project is to develop a system for a Five Star Hotel in which
its customers can order recipes online from anywhere. The system will help the users by
displaying the list of recipe items available in the restaurant along with the offers available
for those recipe items. The system will also display the images of the recipe items along
with the list of items. Access to the system in the restaurant will be given to the
Administrator with a user name and password.
Items will be added to the cart, which can be reviewed and finalized at the time of
submitting the order.
The customer is given the option to pay the bill separately for the items which he/she
orders; with this option, the application can be easily integrated with any existing
hotel administration software.
1.3. PROBLEM IN EXISTING SYSTEM
• For ordering food, the user either has to go to the restaurant in person or place an order
over the telephone.
• Hence the existing process is time consuming.
• If an additional order has to be placed, it is an additional overhead again.
2.2. STUDY OF THE SYSTEM
GUIs
For the flexibility of users, the interface has been developed with a graphical concept in mind and is
accessed through a browser interface. The GUIs at the top level have been categorized as:
1. Administrative user interface
2. The operational or generic user interface
The administrative user interface concentrates on the consistent information that is practically part
of the organizational activities and which needs proper authentication for data collection. These
interfaces help the administrators with all the transactional states like data insertion, data deletion
and data updation, along with extensive data search capabilities.
The operational or generic user interface helps the users of the system in transactions through
the existing data and required services. The operational user interface also helps ordinary users
manage their own information in a customized manner, as per the flexibilities provided.
NUMBER OF MODULES
• Customers
• Recipes
• Order
• Shopping Cart
Customers:
• This module performs registration and maintenance of customer information.
• This information can be very useful when delivering the ordered items, avoiding any
confusion related to the delivery address.
Recipes:
• It contains
• Recipes
• Details about all recipes existing in Taste Buds
• Adding new recipes and deleting recipes.
Order:
• It contains
• Orders List
• Payments information
Payment type
Card
Cash
• Using this module, a new order can be added and an existing order can be deleted.
Shopping Cart:
• It contains
• Recipe types
• List of items and their cost
• Adding new items under recipes
Outputs from computer systems are required primarily to communicate the results of
processing to users. They are also used to provide a permanent copy of the results for later
consultation. The various types of outputs in general are:
Output Definition:
The outputs should be defined in terms of the following points:
Input Design:
Input design is a part of the overall system design. The main objectives during input design are
as given below:
To produce a cost-effective method of input.
To achieve the highest possible level of accuracy.
To ensure that the input is acceptable and understood by the user.
INPUT STAGES:
The main input stages can be listed as below:
Data recording
Data transcription
Data conversion
Data verification
Data control
Data transmission
Data validation
Data correction
INPUT TYPES:
It is necessary to determine the various types of inputs. Inputs can be categorized as follows:
External inputs, which are prime inputs for the system.
Internal inputs, which are user communications with the system.
Operational, which are the computer department’s communications to the system.
Interactive, which are inputs entered during a dialogue.
INPUT MEDIA:
At this stage a choice has to be made about the input media. To decide on the input media,
consideration has to be given to:
Type of input
Flexibility of format
Speed
Accuracy
Verification methods
Rejection rates
Ease of correction
Storage and handling requirements
Security
Easy to use
Portability
Keeping in view the above description of the input types and input media, it can be said that
most of the inputs are of the internal and interactive form. As the input data is keyed in directly
by the user, the keyboard can be considered the most suitable input device.
2.5. PROCESS MODELS USED WITH JUSTIFICATION
SDLC MODEL:
Waterfall Model
Software products are oriented towards customers like any other engineering products. A product
is either driven by the market or it drives the market. Customer Satisfaction was the main aim in the 1980's.
Customer Delight is today's logo and Customer Ecstasy is the new buzzword of the new millennium.
Products which are not customer oriented have no place in the market although they are designed
using the best technology. The front end of the product is as crucial as the internal technology of the
product.
A market study is necessary to identify a potential customer's need. This process is also called
market research. The already existing needs and the possible future needs are combined
together for study. A lot of assumptions are made during market study. Assumptions are very
important factors in the development or start of a product's development. The assumptions which are
not realistic can cause a nosedive in the entire venture. Although assumptions are conceptual, there
should be a move to develop tangible assumptions to move towards a successful product.
Once the Market study is done, the customer's need is given to the Research and
Development Department to develop a cost-effective system that could potentially solve customer's
needs better than the competitors. Once the system is developed and tested in a hypothetical
environment, the development team takes control of it. The development team adopts one of the
software development models to develop the proposed system and gives it to the customers.
The basic popular models used by many software development firms are as follows:
A) System Development Life Cycle (SDLC) Model
B) Prototyping Model
C) Rapid Application Development Model
D) Component Assembly Model
5) Testing
After the code generation phase, testing of the software program begins. Different testing methods
are available to detect the bugs that were introduced during the previous phases. A number of testing
tools and methods are already available for testing purposes.
6) Maintenance
Software will definitely go through change once it is delivered to the customer. There are a
large number of reasons for the change. Change could happen due to some unpredicted input values
into the system. In addition to this, the changes in the system directly have an effect on the software
operations. The software should be implemented to accommodate changes that could happen
during the post-development period.
DESIGN PRINCIPLES & METHODOLOGY:
Object Oriented Analysis and Design
When Object orientation is used in analysis as well as design, the boundary between OOA and
OOD is blurred. This is particularly true in methods that combine analysis and design. One reason for
this blurring is the similarity of basic constructs (i.e., objects and classes) that are used in OOA and
OOD. Though there is no agreement about what parts of the object-oriented development process
belong to analysis and what parts to design, there is some general agreement about the domains of
the two activities.
The fundamental difference between OOA and OOD is that the former models the problem
domain, leading to an understanding and specification of the problem, while the latter models the
solution to the problem. That is, analysis deals with the problem domain, while design deals with the
solution domain. However, in OOAD the analysis representation is subsumed in the solution domain
representation. That is, the solution domain representation, created by OOD, generally contains much
of the representation created by OOA. The separating line is a matter of perception, and different
people have different views on it. The lack of a clear separation between analysis and design can also
be considered one of the strong points of the object-oriented approach: the transition from analysis to
design is “seamless”. This is also the main reason for OOAD methods, in which analysis and design
are both performed together.
The main difference between OOA and OOD, due to the different domains of modeling, is in
the type of objects that come out of the analysis and design process.
Features of OOAD:
It uses objects as the building blocks of the application rather than functions.
All objects can be represented graphically including the relation between them.
All Key Participants in the system will be represented as actors and the actions done by
them will be represented as use cases.
A typical use case is nothing but a systematic flow of a series of events, which can be well
described using sequence diagrams, and each event can be described diagrammatically by
activity as well as state chart diagrams.
So the entire system can be well described using the OOAD model; hence this model is
chosen as the SDLC model.
INTRODUCTION
Purpose: The main purpose for preparing this document is to give a general insight into the analysis
and requirements of the existing system or situation and for determining the operating characteristics
of the system.
Scope: This document plays a vital role in the software development life cycle (SDLC) as it describes the
complete requirements of the system. It is meant for use by the developers and will be the basis during
the testing phase. Any changes made to the requirements in the future will have to go through a formal
change approval process.
Items will be added to the cart, which can be reviewed and finalized at the time of
submitting the order.
The requirement specification for any system can be broadly stated as given below:
The system should be able to interface with the existing system
The system should be accurate
The system should be better than the existing system
The existing system is completely dependent on the user to perform all the duties.
Chapter 5
5. SELECTED SOFTWARE
5.1 INTRODUCTION TO .NET FRAMEWORK
Microsoft .NET Framework was designed with several intentions:
Interoperability - Because interaction between new and older applications is commonly
required, the .NET Framework provides means to access functionality that is implemented in
programs that execute outside the .NET environment. Access to COM components is provided in the
System.Runtime.InteropServices and System.EnterpriseServices namespaces of the framework, and
access to other functionality is provided using the P/Invoke feature.
CLR
The Common Language Runtime (CLR) is the virtual machine component of Microsoft's .NET
initiative. It is Microsoft's implementation of the Common Language Infrastructure (CLI) standard,
which defines an execution environment for program code. The CLR runs a form of bytecode called
the Common Intermediate Language (CIL, previously known as MSIL -- Microsoft Intermediate
Language).
Developers using the CLR write code in a language such as C# or VB.Net. At compile-time, a
.NET compiler converts such code into CIL code. At runtime, the CLR's just-in-time compiler (JIT
compiler) converts the CIL code into code native to the operating system. Alternatively, the CIL code
can be compiled to native code in a separate step prior to runtime. This speeds up all later runs of the
software as the CIL-to-native compilation is no longer necessary.
Although some other implementations of the Common Language Infrastructure run on non-
Windows operating systems, the CLR runs on Microsoft Windows operating systems.
The virtual machine aspect of the CLR allows programmers to ignore many details of the specific
CPU that will execute the program. The CLR also provides other important services, including the
following:
Memory management
Thread management
Exception handling
Garbage collection
Security
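As a small illustration (not taken from the project code), the C# sketch below leans on two of these services: memory for the byte array is allocated on the managed heap and reclaimed later by the garbage collector without an explicit free, and the thrown exception is dispatched by the runtime's exception handling.

using System;

class ClrServicesDemo
{
    static void Main()
    {
        try
        {
            byte[] buffer = new byte[1024];      // allocated on the managed heap, never freed by hand
            Console.WriteLine(buffer.Length);
            throw new InvalidOperationException("demo failure");
        }
        catch (InvalidOperationException ex)     // structured exception handling provided by the CLR
        {
            Console.WriteLine(ex.Message);
        }
        // buffer is unreachable here; the garbage collector reclaims it at some later point
    }
}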
CLI
The Common Language Infrastructure (CLI) is an open specification (published under ECMA-
335 and ISO/IEC 23271) developed by Microsoft that describes the executable code and runtime
environment that form the core of a number of runtimes including the Microsoft .NET Framework,
Mono, and Portable.NET. The specification defines an environment that allows multiple high-level
languages to be used on different computer platforms without being rewritten for specific architectures.
The CLI is a specification, not an implementation, and is often confused with the Common
Language Runtime (CLR), which contains aspects outside the scope of the CLI specification.
Among other things, the CLI specification describes the following four aspects:
The Common Type System (CTS)
A set of types and operations that are shared by all CTS-compliant programming languages.
Metadata
Information about program structure is language-agnostic, so that it can be referenced
between languages and tools, making it easy to work with code written in a language you are not
using.
Common Language Specification (CLS)
A set of base rules to which any language targeting the CLI should conform in order to
interoperate with other CLS-compliant languages.
Class library
The Base Class Library, sometimes incorrectly referred to as the Framework Class Library
(FCL) (which is a superset including the Microsoft.* namespaces), is a library of classes available to all
languages using the .NET Framework. The BCL provides classes which encapsulate a number of
common functions such as file reading and writing, graphic rendering, database interaction, XML
document manipulation, and so forth. The BCL is much larger than other libraries, but has much more
functionality in one package.
Security
.NET has its own security mechanism, with two general features: Code Access Security (CAS),
and validation and verification. Code Access Security is based on evidence that is associated with a
specific assembly. Typically the evidence is the source of the assembly (whether it is installed on the
local machine, or has been downloaded from the intranet or Internet). Code Access Security uses
evidence to determine the permissions granted to the code. Other code can demand that calling code
is granted a specified permission. The demand causes the CLR to perform a call stack walk: every
assembly of each method in the call stack is checked for the required permission and if any assembly
is not granted the permission then a security exception is thrown.
When an assembly is loaded the CLR performs various tests. Two such tests are validation
and verification. During validation the CLR checks that the assembly contains valid metadata and CIL,
and it checks that the internal tables are correct. Verification is not so exact. The verification
mechanism checks to see if the code does anything that is 'unsafe'. The algorithm used is quite
conservative and hence sometimes code that is 'safe' is not verified. Unsafe code will only be
executed if the assembly has the 'skip verification' permission, which generally means code that is
installed on the local machine.
.NET Framework uses appdomains as a mechanism for isolating code running in a process.
Appdomains can be created and code loaded into or unloaded from them independent of other
appdomains. This helps increase the fault tolerance of the application, as faults or crashes in one
appdomain do not affect the rest of the application. Appdomains can also be configured independently
with different security privileges. This can help increase the security of the application by separating
potentially unsafe code. However, the developer has to split the application into sub-domains; it is not
done by the CLR.
Memory management
The .NET Framework CLR frees the developer from the burden of managing memory
(allocating and freeing up when done); instead it does the memory management itself. To this end, the
memory allocated to instantiations of .NET types (objects) is done contiguously from the managed
heap, a pool of memory managed by the CLR. As long as there exists a reference to an object, which
might be either a direct reference to an object or via a graph of objects, the object is considered to be
in use by the CLR. When there is no reference to an object, and it thus cannot be reached or used, it
becomes garbage. However, it still holds on to the memory allocated to it. The .NET Framework includes a
garbage collector which runs periodically, on a thread separate from the application's thread, and
enumerates all the unusable objects and reclaims the memory allocated to them.
The .NET Garbage Collector (GC) is a non-deterministic, compacting, mark-and-sweep
garbage collector. The GC runs only when a certain amount of memory has been used or there is
enough pressure for memory on the system. Since it is not guaranteed when the conditions to reclaim
memory are reached, the GC runs are non-deterministic. Each .NET application has a set of roots,
which are a set of pointers maintained by the CLR that point to objects on the managed heap
(managed objects). These include references to static objects and objects defined as local variables or
method parameters currently in scope, as well as objects referred to by CPU registers. When the GC
runs, it pauses the application, and for each object referred to in the roots, it recursively enumerates
all the objects reachable from the root objects and marks them as reachable. It uses .NET
metadata and reflection to discover the objects encapsulated by an object, and then recursively walks
them. It then enumerates all the objects on the heap (which were initially allocated contiguously) using
reflection and all the objects, not marked as reachable, are garbage. This is the mark phase. Since the
memory held by garbage is not of any consequence, it is considered free space. However, this leaves
chunks of free space between objects which were initially contiguous. The objects are then compacted
together, by using memcpy to copy them over to the free space to make them contiguous again. Any
reference to an object invalidated by moving the object is updated to reflect the new location by the
GC. The application is resumed after the garbage collection is over.
The GC used by .NET Framework is actually generational. Objects are assigned a generation;
newly created objects belong to Generation 0. The objects that survive a garbage collection are
tagged as Generation 1, and the Generation 1 objects that survive another collection are Generation 2
objects. The .NET Framework uses up to Generation 2 objects. Higher generation objects are garbage
collected less frequently than lower generation objects. This helps increase the efficiency of garbage
collection, as older objects tend to have a larger lifetime than newer objects. Thus, by removing older
(and thus more likely to survive a collection) objects from the scope of a collection run, fewer objects
need to be checked and compacted.
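The generational behaviour can be observed through the GC class. The sketch below is illustrative only; the printed generation numbers are typical rather than guaranteed, because collections are non-deterministic and forcing them with GC.Collect is for demonstration, not production code.

using System;

class GcGenerationDemo
{
    static void Main()
    {
        object o = new object();
        Console.WriteLine(GC.GetGeneration(o));   // typically 0: a newly created object

        GC.Collect();                             // force a collection (demonstration only)
        GC.WaitForPendingFinalizers();
        Console.WriteLine(GC.GetGeneration(o));   // typically 1: the object survived one collection

        GC.Collect();
        Console.WriteLine(GC.GetGeneration(o));   // typically 2: survived a further collection
    }
}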
5.2. ASP.NET
Server Application Development
Server-side applications in the managed world are implemented through runtime hosts.
Unmanaged applications host the common language runtime, which allows your custom managed
code to control the behavior of the server. This model provides you with all the features of the
common language runtime and class library while gaining the performance and scalability of the host
server.
The following illustration shows a basic network schema with managed code running in
different server environments. Servers such as IIS and SQL Server can perform standard operations
while your application logic executes through the managed code.
ASP.NET is the hosting environment that enables developers to use the .NET Framework to
target Web-based applications. However, ASP.NET is more than just a runtime host; it is a complete
architecture for developing Web sites and Internet-distributed objects using managed code. Both Web
Forms and XML Web services use IIS and ASP.NET as the publishing mechanism for applications,
and both have a collection of supporting classes in the .NET Framework.
XML Web services, an important evolution in Web-based technology, are distributed, server-
side application components similar to common Web sites. However, unlike Web-based applications,
XML Web services components have no UI and are not targeted for browsers such as Internet
Explorer and Netscape Navigator. Instead, XML Web services consist of reusable software
components designed to be consumed by other applications, such as traditional client applications,
Web-based applications, or even other XML Web services. As a result, XML Web services technology
is rapidly moving application development and deployment into the highly distributed environment of
the Internet.
If you have used earlier versions of ASP technology, you will immediately notice the
improvements that ASP.NET and Web Forms offer. For example, you can develop Web Forms
pages in any language that supports the .NET Framework. In addition, your code no longer needs to
share the same file with your HTTP text (although it can continue to do so if you prefer). Web Forms
pages execute in native machine language because, like any other managed application, they take full
advantage of the runtime. In contrast, unmanaged ASP pages are always scripted and interpreted.
ASP.NET pages are faster, more functional, and easier to develop than unmanaged ASP pages
because they interact with the runtime like any managed application.
The .NET Framework also provides a collection of classes and tools to aid in development and
consumption of XML Web services applications. XML Web services are built on standards such as
SOAP (a remote procedure-call protocol), XML (an extensible data format), and WSDL ( the Web
Services Description Language). The .NET Framework is built on these standards to promote
interoperability with non-Microsoft solutions.
For example, the Web Services Description Language tool included with the .NET Framework
SDK can query an XML Web service published on the Web, parse its WSDL description, and produce
C# or Visual Basic source code that your application can use to become a client of the XML Web
service. The source code can create classes derived from classes in the class library that handle all
the underlying communication using SOAP and XML parsing. Although you can use the class library
to consume XML Web services directly, the Web Services Description Language tool and the other
tools contained in the SDK facilitate your development efforts with the .NET Framework.
If you develop and publish your own XML Web service, the .NET Framework provides a set of
classes that conform to all the underlying communication standards, such as SOAP, WSDL, and XML.
Using those classes enables you to focus on the logic of your service, without concerning yourself with
the communications infrastructure required by distributed software development.
Finally, like Web Forms pages in the managed environment, your XML Web service will run with the
speed of native machine language using the scalable communication of IIS.
LANGUAGE SUPPORT
The Microsoft .NET Platform currently offers built-in support for three languages: C#, Visual
Basic, and JScript.
ASP.NET Web Forms pages are text files with an .aspx file name extension. They can be
deployed throughout an IIS virtual root directory tree. When a browser client requests .aspx resources,
the ASP.NET runtime parses and compiles the target file into a .NET Framework class. This class can
then be used to dynamically process incoming requests. (Note that the .aspx file is compiled only the
first time it is accessed; the compiled type instance is then reused across multiple requests).
An ASP.NET page can be created simply by taking an existing HTML file and changing its file
name extension to .aspx (no modification of code is required). For example, the following sample
demonstrates a simple HTML page that collects a user's name and category preference and then
performs a form postback to the originating page when a button is clicked:
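The original sample page is not reproduced here; the following is a minimal sketch of the kind of page described, where the file name Intro.aspx and the category values are hypothetical placeholders, not part of this project.

<!-- Intro.aspx: a plain HTML form saved with an .aspx extension. -->
<html>
<body>
    <form action="Intro.aspx" method="post">
        Name: <input name="Name" type="text" />
        Category:
        <select name="Category" size="1">
            <option>Starters</option>
            <option>Main course</option>
            <option>Desserts</option>
        </select>
        <input type="submit" value="Lookup" />
    </form>
</body>
</html>

Clicking the Lookup button posts the form back to the same Intro.aspx page, which is the postback behaviour described above.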
ASP.NET provides syntax compatibility with existing ASP pages. This includes support for <%
%> code render blocks that can be intermixed with HTML content within an .aspx file. These code
blocks execute in a top-down manner at page render time.
CODE-BEHIND WEB FORMS
ASP.NET supports two methods of authoring dynamic pages. The first is the method shown in
the preceding samples, where the page code is physically declared within the originating .aspx file. An
alternative approach--known as the code-behind method--enables the page code to be more cleanly
separated from the HTML content into an entirely separate file.
In addition to (or instead of) using <% %> code blocks to program dynamic content, ASP.NET
page developers can use ASP.NET server controls to program Web pages. Server controls are
declared within an .aspx file using custom tags or intrinsic HTML tags that contain a runat="server"
attribute value. Intrinsic HTML tags are handled by one of the controls in the
System.Web.UI.HtmlControls namespace. Any tag that doesn't explicitly map to one of the controls
is assigned the type of System.Web.UI.HtmlControls.HtmlGenericControl.
Server controls automatically maintain any client-entered values between round trips to the
server. This control state is not stored on the server (it is instead stored within an <input
type="hidden"> form field that is round-tripped between requests). Note also that no client-side script
is required.
In addition to supporting standard HTML input controls, ASP.NET enables developers to utilize
richer custom controls on their pages. For example, the following sample demonstrates how the
<asp:adrotator> control can be used to dynamically display rotating ads on a page.
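A minimal sketch of such a page is shown below; the advertisement file name Ads.xml is a hypothetical placeholder for an XML file that would have to exist on the site.

<%@ Page Language="C#" %>
<html>
<body>
    <form runat="server">
        <!-- Picks a different advertisement from Ads.xml on each request. -->
        <asp:AdRotator id="BannerAd" AdvertisementFile="Ads.xml" runat="server" />
    </form>
</body>
</html>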
1. ASP.NET Web Forms provide an easy and powerful way to build dynamic Web UI.
2. ASP.NET Web Forms pages can target any browser client (there are no script library or cookie
requirements).
3. ASP.NET Web Forms pages provide syntax compatibility with existing ASP pages.
4. ASP.NET server controls provide an easy way to encapsulate common functionality.
5. ASP.NET ships with 45 built-in server controls. Developers can also use controls built by third
parties.
6. ASP.NET server controls can automatically project both uplevel and downlevel HTML.
7. ASP.NET templates provide an easy way to customize the look and feel of list server controls.
8. ASP.NET validation controls provide an easy way to do declarative client or server data validation.
5.3. JAVA SCRIPT
JavaScript is a script-based programming language that was developed by Netscape
Communication Corporation. JavaScript was originally called Live Script and renamed as JavaScript
to indicate its relationship with Java. JavaScript supports the development of both client and server
components of Web-based applications. On the client side, it can be used to write programs that are
executed by a Web browser within the context of a Web page. On the server side, it can be used to
write Web server programs that can process information submitted by a Web browser and then update
the browser’s display accordingly.
Even though JavaScript supports both client and server Web programming, we prefer JavaScript for
client-side programming since most browsers support it. JavaScript is almost as easy to learn as HTML,
and JavaScript statements can be included in HTML documents by enclosing the statements between a
pair of scripting tags
<SCRIPT> ... </SCRIPT>
<SCRIPT LANGUAGE = “JavaScript”>
JavaScript statements
</SCRIPT>
Here are a few things we can do with JavaScript:
Validate the contents of a form and make calculations.
Add scrolling or changing messages to the Browser’s status line.
Animate images or rotate images that change when we move the mouse over them.
Detect the browser in use and display different content for different browsers.
Detect installed plug-ins and notify the user if a plug-in is required.
We can do much more with JavaScript, including creating entire applications.
JavaScript Vs Java
JavaScript and Java are entirely different languages. A few of the most glaring differences are:
Java applets are generally displayed in a box within the web document; JavaScript can affect
any part of the Web document itself.
While JavaScript is best suited to simple applications and adding interactive features to Web
pages; Java can be used for incredibly complex applications.
There are many other differences but the important thing to remember is that JavaScript and
Java are separate languages. They are both useful for different things; in fact they can be used
together to combine their advantages.
Advantages:
JavaScript can be used for server-side and client-side scripting.
It is more flexible than VBScript.
JavaScript is the default client-side scripting language since almost all browsers support it.
5.4. IIS 6.0
Before an IIS process receives a request to execute, some preliminary processing occurs that
is described in the following steps:
1. A request arrives at HTTP.sys.
2. HTTP.sys determines if the request is valid. If the request is not valid, it sends a code for an
invalid request back to the client.
3. If the request is valid, HTTP.sys checks to see if the request is for static content (HTML)
because static content can be served immediately.
4. If the request is for dynamic content, HTTP.sys checks to see if the response is located in its
kernel-mode cache.
5. If the response is in the cache, HTTP.sys returns the response immediately.
6. If the response is not cached, HTTP.sys determines the correct request queue, and places the
request in that queue.
7. If the queue has no worker processes assigned to it, HTTP.sys signals the WWW service to
start one.
8. The worker process pulls the request from the queue and processes the request, evaluating
the URL to determine the type of request (ASP, ISAPI, or CGI).
9. The worker process sends the response back to HTTP.sys.
10. HTTP.sys sends the response back to the client and logs the request, if configured to
do so.
The following figure illustrates the request-response process, showing the interaction between the
kernel and user modes.
Preliminary Request Processing on IIS 6.0 in IIS 5.0 Isolation Mode and Earlier Versions of IIS
The request processing of IIS 6.0 running in IIS 5.0 isolation mode is nearly identical to the request
processing in IIS 5.1, IIS 5.0, and IIS 4.0. Before an IIS process receives a request to execute, some
preliminary processing occurs that is described in the following steps:
1. A request arrives. If the requested application is running in-process, then Inetinfo.exe takes the
request. If not, then DLLHost.exe takes the request.
2. Inetinfo.exe or DLLHost.exe determines if the request is valid. If the request is not valid, it sends
a code for an invalid request back to the client.
3. If the request is valid, Inetinfo.exe or DLLHost.exe checks to see if the response is located in the
IIS cache.
4. If the response is in the cache, it is returned immediately.
5. If the response is not cached, Inetinfo.exe or DLLHost.exe processes the request, evaluating
the URL to determine if the request is for static content (HTML), or dynamic content (ASP,
ASP.NET or ISAPI).
The response is sent back to the client and the request is logged, if IIS is configured to do so.
5.5. C#.Net:
C# is an object-oriented programming language developed by Microsoft as part of the .NET
initiative and later approved as a standard by ECMA and ISO. Anders Hejlsberg leads development of
the C# language, which has a procedural, object-oriented syntax based on C++ and includes aspects of
several other programming languages (most notably Delphi and Java) with a particular emphasis on
simplification.
Design goals
Features
The following description is based on the C# language standard.
By design, C# is the programming language that most directly reflects the underlying Common
Language Infrastructure (CLI). Most of C#'s intrinsic types correspond to value-types implemented by
the CLI framework. However, the C# language specification does not state the code generation
requirements of the compiler: that is, it does not state that a C# compiler must target a Common
Language Runtime (CLR), or generate Common Intermediate Language (CIL), or generate any other
specific format. Theoretically, a C# compiler could generate machine code like traditional compilers of
C++ or FORTRAN; in practice, all existing C# implementations target CLI.
C# has a unified type system. This means that all types, including primitives such as integers,
are subclasses of the System.Object class. For example, every type inherits a ToString() method. For
performance reasons, primitive types (and value types in general) are internally allocated on the stack.
Boxing and unboxing allow one to translate primitive data to and from their object form. Effectively, this
makes the primitive types a subtype of the Object type. Primitive types can also define methods (e.g.,
42.ToString() calls the ToString() method on an integer), and in the programmer's perspective behave
like any other object.
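For illustration, the short C# fragment below (not part of the project code) shows a primitive value exposing an Object method and the boxing/unboxing conversions described above.

using System;

class UnifiedTypeSystemDemo
{
    static void Main()
    {
        int answer = 42;
        Console.WriteLine(answer.ToString());   // a primitive calling a method inherited from System.Object

        object boxed = answer;                  // boxing: the int is copied into an object on the heap
        int unboxed = (int)boxed;               // unboxing: the value is copied back into a value type
        Console.WriteLine(unboxed);
    }
}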
C# allows the programmer to create user-defined value types, using the struct keyword. From
the programmer's perspective, they can be seen as lightweight classes. Unlike regular classes, and like
the standard primitives, such value types are allocated on the stack rather than on the heap. They can
also be part of an object (either as a field or boxed), or stored in an array, without the memory
indirection that normally exists for class types. Structs also come with a number of limitations. Because
structs have no notion of a null value and can be used in arrays without initialization, they are implicitly
initialized to default values (normally by filling the struct memory space with zeroes, but the
programmer can specify explicit default values to override this). The programmer can define additional
constructors with one or more arguments. This also means that structs lack a virtual method table, and
because of that (and the fixed memory footprint), they cannot allow inheritance (but can implement
interfaces).
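The following sketch of a user-defined value type is illustrative only; the Point struct is a hypothetical example rather than a type used in this project.

using System;

// A value type: stored inline (on the stack or inside an array) and
// implicitly initialized to default values, as described above.
struct Point
{
    public int X;
    public int Y;

    public Point(int x, int y)    // an additional constructor with arguments
    {
        X = x;
        Y = y;
    }
}

class StructDemo
{
    static void Main()
    {
        Point[] points = new Point[3];    // elements start as (0, 0) with no per-element initialization
        Console.WriteLine(points[0].X);
        Point p = new Point(3, 4);
        Console.WriteLine(p.X + ", " + p.Y);
    }
}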
Preprocessor
C# features "preprocessor directives" (though it does not have an actual preprocessor) based
on the C preprocessor that allows programmers to define symbols but not macros. Conditionals such
as #if, #endif, and #else are also provided. Directives such as #region give hints to editors for code
folding.
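A small illustrative example of these directives (symbol definition, conditional compilation, and a folding region) is given below.

#define TRACE_MODE            // defines a symbol, not a macro

using System;

class PreprocessorDemo
{
    #region Helpers           // #region only gives editors a hint for code folding
    static void Log(string message)
    {
#if TRACE_MODE
        Console.WriteLine("TRACE: " + message);
#else
        // tracing is compiled out when TRACE_MODE is not defined
#endif
    }
    #endregion

    static void Main()
    {
        Log("application started");
    }
}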
C#'s documentation system is similar to Java's Javadoc, but based on XML. Two methods of
documentation are currently supported by the C# compiler.
Single-line comments, such as those commonly found in Visual Studio generated code, are
indicated on a line beginning with ///.
Syntax for documentation comments and their XML markup is defined in a non-normative annex
of the ECMA C# standard. The same standard also defines rules for processing of such comments, and
their transformation to a plain XML document with precise rules for mapping of CLI identifiers to their
related documentation elements. This allows any C# IDE or other development tool to find
documentation for any symbol in the code in a certain well-defined way.
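For illustration, the hypothetical Recipe class below shows the /// comment style and a few of the standard XML tags; it is a documentation sketch, not the project's actual class.

/// <summary>
/// Represents one item on the restaurant menu (hypothetical example).
/// </summary>
public class Recipe
{
    /// <summary>Gets or sets the display name of the recipe.</summary>
    public string Name { get; set; }

    /// <summary>Computes the cost of ordering several units of this recipe.</summary>
    /// <param name="quantity">The number of units ordered.</param>
    /// <param name="price">The unit price.</param>
    /// <returns>The total cost for the given quantity.</returns>
    public decimal TotalFor(int quantity, decimal price)
    {
        return quantity * price;
    }
}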
Code libraries
The C# specification details a minimum set of types and class libraries that the compiler expects to
have available and they define the basics required. In practice, C# is most often used with some
implementation of the Common Language Infrastructure (CLI), which is standardized as ECMA-335
Common Language Infrastructure (CLI).
ADO.NET OVERVIEW
ADO.NET is an evolution of the ADO data access model that directly addresses user
requirements for developing scalable applications. It was designed specifically for the web with
scalability, statelessness, and XML in mind.
ADO.NET uses some ADO objects, such as the Connection and Command objects, and also
introduces new objects. Key new ADO.NET objects include the DataSet, DataReader, and
DataAdapter.
The important distinction between this evolved stage of ADO.NET and previous data
architectures is that there exists an object -- the DataSet -- that is separate and distinct from any data
stores. Because of that, the DataSet functions as a standalone entity. You can think of the DataSet as
an always disconnected recordset that knows nothing about the source or destination of the data it
contains. Inside a DataSet, much like in a database, there are tables, columns, relationships,
constraints, views, and so forth.
A DataAdapter is the object that connects to the database to fill the DataSet. Then, it connects
back to the database to update the data there, based on operations performed while the DataSet held
the data. In the past, data processing has been primarily connection-based. Now, in an effort to make
multi-tiered apps more efficient, data processing is turning to a message-based approach that revolves
around chunks of information. At the center of this approach is the DataAdapter, which provides a
bridge to retrieve and save data between a DataSet and its source data store. It accomplishes this by
means of requests to the appropriate SQL commands made against the data store.
The XML-based DataSet object provides a consistent programming model that works with all
models of data storage: flat, relational, and hierarchical. It does this by having no 'knowledge' of the
source of its data, and by representing the data that it holds as collections and data types. No matter
what the source of the data within the DataSet is, it is manipulated through the same set of standard
APIs exposed through the DataSet and its subordinate objects.
While the DataSet has no knowledge of the source of its data, the managed provider has detailed and
specific information. The role of the managed provider is to connect, fill, and persist the DataSet to and
from data stores. The OLE DB and SQL Server .NET Data Providers (System.Data.OleDb and
System.Data.SqlClient) that are part of the .Net Framework provide four basic objects: the Command,
Connection, DataReader and DataAdapter. In the remaining sections of this document, we'll walk
through each part of the DataSet and the OLE DB/SQL Server .NET Data Providers explaining what
they are, and how to program against them.
The following sections will introduce you to some objects that have evolved, and some that are new.
These objects are:
Connections. For connection to and managing transactions against a database.
Commands. For issuing SQL commands against a database.
DataReaders. For reading a forward-only stream of data records from a SQL Server data source.
DataSets. For storing, Remoting and programming against flat data, XML data and relational data.
DataAdapters. For pushing data into a DataSet, and reconciling data against a database.
When dealing with connections to a database, there are two different options: SQL Server .NET
Data Provider (System.Data.SqlClient) and OLE DB .NET Data Provider (System.Data.OleDb). In these
samples we will use the SQL Server .NET Data Provider. These are written to talk directly to Microsoft
SQL Server. The OLE DB .NET Data Provider is used to talk to any OLE DB provider (as it uses OLE
DB underneath).
Connections:
Connections are used to 'talk to' databases, and are represented by provider-specific classes
such as SqlConnection. Commands travel over connections and resultsets are returned in the form of
streams which can be read by a DataReader object, or pushed into a DataSet object.
Commands:
Commands contain the information that is submitted to a database, and are represented by
provider-specific classes such as SqlCommand. A command can be a stored procedure call, an
UPDATE statement, or a statement that returns results. You can also use input and output parameters,
and return values as part of your command syntax. The example below shows how to issue an INSERT
statement against the Northwind database.
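The original example is not reproduced here, so the following is a minimal sketch of such an INSERT, assuming a local SQL Server instance carrying the Northwind sample database; the connection string is a placeholder to be adapted.

using System;
using System.Data.SqlClient;

class CommandDemo
{
    static void Main()
    {
        string connStr = "Data Source=.;Initial Catalog=Northwind;Integrated Security=True";
        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand(
            "INSERT INTO Shippers (CompanyName, Phone) VALUES (@name, @phone)", conn))
        {
            cmd.Parameters.AddWithValue("@name", "Taste Buds Couriers");   // input parameters
            cmd.Parameters.AddWithValue("@phone", "(000) 000-0000");
            conn.Open();
            int rows = cmd.ExecuteNonQuery();     // returns the number of rows inserted
            Console.WriteLine(rows);
        }
    }
}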
DataReaders:
The DataReader object is somewhat synonymous with a read-only/forward-only cursor over data. The
DataReader API supports flat as well as hierarchical data. A DataReader object is returned after
executing a command against a database. The format of the returned DataReader object is different
from a recordset. For example, you might use the DataReader to show the results of a search list in a
web page.
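A minimal sketch of reading with a SqlDataReader is shown below, using the same placeholder Northwind connection string as in the previous sketch.

using System;
using System.Data.SqlClient;

class DataReaderDemo
{
    static void Main()
    {
        string connStr = "Data Source=.;Initial Catalog=Northwind;Integrated Security=True";
        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT CompanyName FROM Customers ORDER BY CompanyName", conn))
        {
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())                        // forward-only iteration over the results
                {
                    Console.WriteLine(reader.GetString(0));  // read-only access to the current row
                }
            }
        }
    }
}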
DATAADAPTERS (OLEDB/SQL)
The DataAdapter object works as a bridge between the DataSet and the source data. Using
the provider-specific SqlDataAdapter (along with its associated SqlCommand and SqlConnection)
can increase overall performance when working with Microsoft SQL Server databases. For other OLE
DB-supported databases, you would use the OleDbDataAdapter object and its associated
OleDbCommand and OleDbConnection objects.
The DataAdapter object uses commands to update the data source after changes have been made
to the DataSet. Using the Fill method of the DataAdapter calls the SELECT command; using the
Update method calls the INSERT, UPDATE or DELETE command for each changed row. You can
explicitly set these commands in order to control the statements used at runtime to resolve changes,
including the use of stored procedures. For ad-hoc scenarios, a CommandBuilder object can generate
these at run-time based upon a select statement. However, this run-time generation requires an extra
round-trip to the server in order to gather required metadata, so explicitly providing the INSERT,
UPDATE, and DELETE commands at design time will result in better run-time performance.
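The Fill/Update round trip can be sketched as follows; the Northwind connection string is again a placeholder, and a CommandBuilder is used here only for brevity (supplying explicit commands performs better, as noted above).

using System.Data;
using System.Data.SqlClient;

class DataAdapterDemo
{
    static void Main()
    {
        string connStr = "Data Source=.;Initial Catalog=Northwind;Integrated Security=True";
        using (SqlConnection conn = new SqlConnection(connStr))
        {
            SqlDataAdapter adapter = new SqlDataAdapter(
                "SELECT ShipperID, CompanyName, Phone FROM Shippers", conn);

            // Generates INSERT/UPDATE/DELETE commands at run time from the SELECT above.
            SqlCommandBuilder builder = new SqlCommandBuilder(adapter);

            DataSet ds = new DataSet();
            adapter.Fill(ds, "Shippers");                         // runs the SELECT command

            ds.Tables["Shippers"].Rows[0]["Phone"] = "(503) 555-9831";
            adapter.Update(ds, "Shippers");                       // runs an UPDATE for each changed row
        }
    }
}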
13. ADO.NET is the next evolution of ADO for the .Net Framework.
14. ADO.NET was created with n-Tier, statelessness and XML in the forefront. Two new objects, the
DataSet and DataAdapter, are provided for these scenarios.
15. ADO.NET can be used to get data from a stream, or to store data in a cache for updates.
16. There is a lot more information about ADO.NET in the documentation.
17. Remember, you can execute a command directly against the database in order to do inserts,
updates, and deletes. You don't need to first put data into a DataSet in order to insert, update, or
delete it.
18. Also, you can use a DataSet to bind to the data, move through the data, and navigate data relationships, as sketched below.
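As a sketch of the last point (the Customers/Orders tables and the relation name are illustrative assumptions, not taken from this project):

using System;
using System.Data;
using System.Data.SqlClient;

class RelationDemo
{
    static void Main()
    {
        string connString = "Data Source=localhost;Initial Catalog=Northwind;Integrated Security=True";
        DataSet ds = new DataSet();

        using (SqlConnection conn = new SqlConnection(connString))
        {
            new SqlDataAdapter("SELECT CustomerID, CompanyName FROM Customers", conn).Fill(ds, "Customers");
            new SqlDataAdapter("SELECT OrderID, CustomerID FROM Orders", conn).Fill(ds, "Orders");
        }

        // Define a relation so child rows can be navigated from each parent row.
        ds.Relations.Add("CustOrders",
            ds.Tables["Customers"].Columns["CustomerID"],
            ds.Tables["Orders"].Columns["CustomerID"]);

        foreach (DataRow customer in ds.Tables["Customers"].Rows)
            Console.WriteLine("{0} has {1} order(s)",
                customer["CompanyName"], customer.GetChildRows("CustOrders").Length);
    }
}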
PRIMARY KEY
Every table in SQL Server has a field or a combination of fields that uniquely identifies each
record in the table. This unique identifier is called the Primary Key, or simply the Key. The primary key provides the means to distinguish one record from all others in a table. It allows the user and the
database system to identify, locate and refer to one particular record in the database.
RELATIONAL DATABASE
Sometimes all the information of interest to a business operation can be stored in one table; often, however, related data must be kept in separate tables. SQL Server makes it very easy to link the data in multiple tables. Matching an employee to the department in which they work is one example. This is what makes SQL Server a relational database management system, or RDBMS. It stores data in two or more tables and enables you to define relationships between the tables.
FOREIGN KEY
When a field in one table matches the primary key of another table, that field is referred to as a foreign key.
A foreign key is a field or a group of fields in one table whose values match those of the primary key of
another table.
REFERENTIAL INTEGRITY
Not only does SQL Server allow you to link multiple tables, it also maintains consistency
between them. Ensuring that the data among related tables is correctly matched is referred to as
maintaining referential integrity.
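As a hedged illustration only (the Customers and Orders tables, their columns and the TasteBuds database name below are assumptions, not the project's actual schema), a primary key and a matching foreign key can be declared in T-SQL and executed from C#; SQL Server then enforces referential integrity between the two tables:

using System.Data.SqlClient;

class SchemaDemo
{
    static void Main()
    {
        // Illustrative DDL: Orders.CustomerID is a foreign key referencing the
        // primary key of Customers, so SQL Server enforces referential integrity.
        string ddl = @"
            CREATE TABLE Customers (
                CustomerID INT PRIMARY KEY,
                CustomerName VARCHAR(100) NOT NULL);
            CREATE TABLE Orders (
                OrderID INT PRIMARY KEY,
                CustomerID INT NOT NULL
                    FOREIGN KEY REFERENCES Customers(CustomerID));";

        string connString = "Data Source=localhost;Initial Catalog=TasteBuds;Integrated Security=True";

        using (SqlConnection conn = new SqlConnection(connString))
        using (SqlCommand cmd = new SqlCommand(ddl, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}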
DATA ABSTRACTION
A major purpose of a database system is to provide users with an abstract view of the data.
This system hides certain details of how the data is stored and maintained. Data abstraction is divided
into three levels.
Physical level: This is the lowest level of abstraction at which one describes how the data are actually
stored.
Conceptual Level: At this level of database abstraction one describes what data are actually stored in the database and the entities and relationships among them.
View level: This is the highest level of abstraction at which one describes only part of the database.
ADVANTAGES OF RDBMS
SQL SERVER is a truly portable, distributed, and open DBMS that delivers unmatched performance,
continuous operation and support for every database.
SQL SERVER is a high performance, fault tolerant RDBMS which is specially designed for online transaction processing and for handling large database applications.
SQL SERVER with the transaction processing option offers features which contribute to a very high level of transaction processing throughput.
PORTABILITY
SQL SERVER is fully portable to more than 80 distinct hardware and operating system platforms, including UNIX, MSDOS, OS/2, Macintosh and dozens of proprietary platforms. This portability gives complete freedom to choose the database server platform that meets the system requirements.
OPEN SYSTEMS
SQL SERVER offers a leading implementation of industry-standard SQL. SQL Server's open architecture integrates SQL SERVER and non-SQL SERVER DBMSs with the industry's most comprehensive collection of tools, applications, and third party software products. SQL Server's open architecture provides transparent access to data from other relational databases and even non-relational databases.
UNMATCHED PERFORMANCE
The most advanced architecture in the industry allows the SQL SERVER DBMS to deliver
unmatched performance.
NO I/O BOTTLENECKS
SQL Server's fast commit, group commit and deferred write technologies dramatically reduce disk I/O bottlenecks. While some databases write whole data blocks to disk at commit time, SQL Server commits transactions with at most one sequential write to the log file on disk. On high throughput systems, one sequential write typically group commits multiple transactions. Data read by a transaction remains in shared memory so that other transactions may access that data without reading it again from disk. Since fast commits write all data necessary for recovery to the log file, modified blocks are written back to the database independently of the transaction commit, when written from memory to disk.
Chapter 6
6. SYSTEM DESIGN
6.1. INTRODUCTION
Software design sits at the technical kernel of the software engineering process and is applied
regardless of the development paradigm and area of application. Design is the first step in the
development phase for any engineered product or system. The designer’s goal is to produce a model
or representation of an entity that will later be built. Beginning once system requirements have been specified and analyzed, system design is the first of the three technical activities - design, code and test - that are required to build and verify software.
The importance of design can be stated with a single word, "Quality". Design is the place where quality is fostered in software development. Design provides us with representations of software that can be assessed for quality. Design is the only way that we can accurately translate a customer's view into a finished
software product or system. Software design serves as a foundation for all the software engineering
steps that follow. Without a strong design we risk building an unstable system – one that will be difficult
to test, one whose quality cannot be assessed until the last stage.
During design, progressive refinement of data structure, program structure, and procedural detail is developed, reviewed and documented. System design can be viewed from either a technical or a project management perspective. From the technical point of view, design comprises four activities - architectural design, data structure design, interface design and procedural design.
6.2. NORMALIZATION
It is a process of converting a relation to a standard form. The process is used to handle problems that can arise due to data redundancy, i.e. repetition of data in the database, to maintain data integrity, as well as to handle problems that can arise due to insertion, updation and deletion anomalies.
Decomposition is the process of splitting relations into multiple relations to eliminate anomalies and maintain data integrity. To do this we use normal forms, or rules for structuring relations.
Insertion anomaly: Inability to add data to the database due to absence of other data.
Normal Forms: These are the rules for structuring relations that eliminate anomalies.
FIRST NORMAL FORM:
A relation is said to be in first normal form if the values in the relation are atomic for every attribute in the relation. By this we mean simply that no attribute value can be a set of values or, as it is sometimes expressed, a repeating group.
SECOND NORMAL FORM:
A relation is said to be in second normal form if it is in first normal form and it satisfies any one of the following rules.
1) The primary key is not a composite primary key.
2) No non-key attributes are present.
3) Every non-key attribute is fully functionally dependent on the full set of the primary key.
Transitive Dependency: If two non key attributes depend on each other as well as on the primary key
then they are said to be transitively dependent.
The above normalization principles were applied to decompose the data into multiple tables, thereby allowing the data to be maintained in a consistent state.
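For instance (an illustrative example only, not the project's actual schema): a relation OrderItem(OrderID, RecipeID, RecipeName, Quantity) with the composite key (OrderID, RecipeID) violates second normal form, because RecipeName depends only on RecipeID. Decomposing it into OrderItem(OrderID, RecipeID, Quantity) and Recipe(RecipeID, RecipeName) removes the partial dependency and the associated update anomalies.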
6.3. E – R DIAGRAMS
The relations upon which the system is structured are expressed through a conceptual E-R diagram, which not only specifies the existential entities but also the standard relations through which the system exists and the cardinalities that are necessary for the system state to continue.
The Entity Relationship Diagram (ERD) depicts the relationship between the data objects. The ERD is the notation that is used to conduct the data modeling activity; the attributes of each data object noted in the ERD can be described using a data object description.
The primary components identified by the ERD are data objects, relationships, attributes and various types of indicators. The primary purpose of the ERD is to represent data objects and their relationships.
(E-R Diagram)
6.4. DATA FLOW DIAGRAMS
A data flow diagram is a graphical tool used to describe and analyze the movement of data through a system. These are the central tool and the basis from which the other components are developed. The transformation of data from input to output, through processes, may be described logically and independently of the physical components associated with the system. These are known as logical data flow diagrams. The physical data flow diagrams show the actual implementation and movement of data between people, departments and workstations. A full description of a system actually consists of a set of data flow diagrams. The data flow diagrams are developed using two familiar notations, Yourdon and Gane & Sarson. Each component in a DFD is labeled with a descriptive name. Processes are further identified with a number that will be used for identification purposes. The development of DFDs is done in several levels. Each process in lower level diagrams can be broken down into a more detailed DFD in the next level. The top-level diagram is often called the context diagram. It consists of a single process, which plays a vital role in studying the current system. The process in the context level diagram is exploded into other processes at the first level DFD.
The idea behind the explosion of a process into more processes is that the understanding at one level of detail is exploded into greater detail at the next level. This is done until no further explosion is necessary and an adequate amount of detail is described for the analyst to understand the process.
Larry Constantine first developed the DFD as a way of expressing system requirements in a graphical form; this led to modular design.
A DFD, also known as a "bubble chart", has the purpose of clarifying system requirements and identifying major transformations that will become programs in system design. So it is the starting point of the design down to the lowest level of detail. A DFD consists of a series of bubbles joined by data flows in the system.
DFD SYMBOLS:
In the DFD, there are four symbols
1. A square defines a source (originator) or destination of system data.
2. An arrow identifies data flow. It is the pipeline through which the information flows.
3. A circle or a bubble represents a process that transforms incoming data flows into outgoing data flows.
4. An open rectangle is a data store, data at rest or a temporary repository of data.
CONSTRUCTING A DFD:
Several rules of thumb are used in drawing DFDs:
1. Processes should be named and numbered for easy reference. Each name should be representative of the process.
2. The direction of flow is from top to bottom and from left to right. Data traditionally flow from the source to the destination, although they may flow back to the source. One way to indicate this is to draw a long flow line back to the source. An alternative way is to repeat the source symbol as a destination. Since it is used more than once in the DFD, it is marked with a short diagonal.
3. When a process is exploded into lower level details, they are numbered.
4. The names of data stores and destinations are written in capital letters. Process and data flow names have the first letter of each word capitalized.
A DFD typically shows the minimum contents of a data store. Each data store should contain all the data elements that flow in and out. Questionnaires should cover all the data elements that flow in and out; missing interfaces, redundancies and the like are then accounted for, often through interviews.
SALIENT FEATURES OF DFDs
1. The DFD shows the flow of data, not of control; loops and decisions are control considerations and do not appear on a DFD.
2. The DFD does not indicate the time factor involved in any process, i.e. whether the data flows take place daily, weekly, monthly or yearly.
3. The sequence of events is not brought out on the DFD.
TYPES OF DATA FLOW DIAGRAMS
1. Current Physical
2. Current Logical
3. New Logical
4. New Physical
CURRENT PHYSICAL:
In the Current Physical DFD, process labels include the names of people or their positions or the names of computer systems that might provide some of the overall system processing; the label includes an identification of the technology used to process the data. Similarly, data flows and data stores are often labeled with the names of the actual physical media on which data are stored, such as file folders, computer files, business forms or computer tapes.
CURRENT LOGICAL:
The physical aspects of the system are removed as much as possible so that the current system is reduced to its essence: the data and the processes that transform them, regardless of actual physical form.
NEW LOGICAL:
This is exactly like the current logical model if the user were completely happy with the functionality of the current system but had problems with how it was implemented. Typically, though, the new logical model will differ from the current logical model by having additional functions, obsolete functions removed, and inefficient flows reorganized.
NEW PHYSICAL:
The new physical represents only the physical implementation of the new system.
RULES GOVERNING THE DFD’S
PROCESS
1) No process can have only outputs.
2) No process can have only inputs. If an object has only inputs then it must be a sink.
3) A process has a verb phrase label.
DATA STORE
1) Data cannot move directly from one data store to another data store; a process must move the data.
2) Data cannot move directly from an outside source to a data store; a process which receives data from the source must place the data into the data store.
3) A data store has a noun phrase label.
SOURCE OR SINK
The origin and/or destination of data.
1) Data cannot move directly from a source to a sink; it must be moved by a process.
2) A source and/or sink has a noun phrase label.
DATA FLOW
1) A data flow has only one direction of flow between symbols. It may flow in both directions between a process and a data store to show a read before an update; the latter is usually indicated, however, by two separate arrows, since these happen at different times.
2) A join in a DFD means that exactly the same data comes from any of two or more different processes, data stores or sinks to a common location.
3) A data flow cannot go directly back to the same process it leaves. There must be at least one other process that handles the data flow, produces some other data flow, and returns the original data flow to the beginning process.
4) A data flow to a data store means update (delete or change).
5) A data flow from a data store means retrieve or use.
A data flow has a noun phrase label; more than one data flow noun phrase can appear on a single arrow as long as all of the flows on the same arrow move together as one package.
ACCEPTANCE TESTING
Acceptance Test is performed with realistic data of the client to demonstrate that the software is
working satisfactorily. Testing here is focused on external behavior of the system; the internal logic of
program is not emphasized. In this project ‘XXXXXXXXXXX’, I have collected some data and tested whether the project is working correctly or not.
Test cases should be selected so that the largest number of attributes of an equivalence class is
exercised at once. The testing phase is an important part of software development. It is the process of
finding errors and missing operations and also a complete verification to determine whether the
objectives are met and the user requirements are satisfied.
8.5. IMPLEMENTATION
Implementation is the stage where the theoretical design is turned into a working system.
It is the most crucial stage in achieving a successful new system and in giving the users confidence that the new system will work efficiently and effectively.
The system can be implemented only after thorough testing is done and if it is found to
work according to the specification.
It involves careful planning, investigation of the current system and its constraints on implementation, design of methods to achieve the changeover, and an evaluation of changeover methods, apart from planning. Two major tasks of preparing for implementation are education and training of the users and testing of the system.
The more complex the system being implemented, the more involved will be the systems
analysis and design effort required just for implementation.
The implementation phase comprises of several activities. The required hardware and
software acquisition is carried out. The system may require some software to be developed. For
this, programs are written and tested. The user then changes over to his new fully tested system
and the old system is discontinued.
Chapter 9
9. SYSTEM SECURITY
9.1. INTRODUCTION
The protection of computer based resources that include hardware, software, data, procedures and people against unauthorized use or natural disaster is known as System Security.
SYSTEM SECURITY refers to the technical innovations and procedures applied to the hardware and
operation systems to protect against deliberate or accidental damage from a defined threat.
DATA SECURITY is the protection of data from loss, disclosure, modification and destruction.
SYSTEM INTEGRITY refers to the proper functioning of hardware and programs, appropriate physical security and safety against external threats such as eavesdropping and wiretapping.
PRIVACY defines the rights of the user or organizations to determine what information they are willing
to share with or accept from others and how the organization can be protected against unwelcome,
unfair or excessive dissemination of information about it.
System security refers to various validations on data, in the form of checks and controls, to prevent the system from failing. It is always important to ensure that only valid data is entered and only valid operations are performed on the system. The system employs two types of checks and controls:
CLIENT SIDE VALIDATION
Various client side validations are used to ensure on the client side that only valid data is entered.
Client side validation saves server time and load to handle invalid data. Some checks imposed are:
VBScript is used to ensure that required fields are filled with suitable data only. Maximum lengths of the fields of the forms are appropriately defined.
Forms cannot be submitted without filling in the mandatory data, so that manual mistakes of submitting empty mandatory fields can be caught at the client side to save server time and load (a sketch of such a check follows this list).
Tab-indexes are set according to the need and taking into account the ease of user while working
with the system.
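The report names VBScript for these checks; purely as an alternative sketch, ASP.NET validation controls can generate equivalent client-side checks automatically. The control and handler names below (OrderForm, txtCustomerName, phValidators, btnSubmit_Click) are illustrative assumptions, not part of this project:

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

// Hypothetical code-behind sketch; control names are illustrative only.
public partial class OrderForm : Page
{
    protected TextBox txtCustomerName;      // normally declared in the .aspx markup
    protected PlaceHolder phValidators;     // normally declared in the .aspx markup

    protected void Page_Init(object sender, EventArgs e)
    {
        txtCustomerName.MaxLength = 50;     // cap the field length on the client

        // Mandatory-field check; EnableClientScript makes ASP.NET emit the client-side script.
        RequiredFieldValidator rfv = new RequiredFieldValidator();
        rfv.ControlToValidate = "txtCustomerName";
        rfv.ErrorMessage = "Customer name is mandatory.";
        rfv.EnableClientScript = true;
        phValidators.Controls.Add(rfv);
    }

    protected void btnSubmit_Click(object sender, EventArgs e)
    {
        if (!Page.IsValid) return;          // server-side double check
        // ... process the submitted form ...
    }
}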
BENEFITS:
The project is identified by the merits of the system offered to the user. The merits of this project are as
follows: -
It’s a web-enabled project.
This project allows the user to enter data through simple and interactive forms. This is very helpful for the client, who can enter the desired information with great simplicity.
The user is mainly concerned about the validity of the data he enters. There are checks at every stage of any new creation, data entry or updation, so that the user cannot enter invalid data which could create problems at a later date.
Sometimes the user finds, in later stages of using the project, that he needs to update some of the information that he entered earlier. There are options by which he can update the records. Moreover, there is a restriction that he cannot change the primary data field; this preserves the validity of the data.
The user is provided the option of monitoring the records he entered earlier. He can see the desired records with the variety of options provided to him.
From every part of the project the user is provided with links, through framing, so that he can go from one option of the project to another as required. This is simple and friendly as far as the user is concerned; that is, we can say that the project is user friendly, which is one of the primary concerns of any good project.
Data storage and retrieval will become faster and easier to maintain because data is stored in a
systematic manner and in a single database.
The decision making process would be greatly enhanced because of faster processing of information, since data collection from the information available on the computer takes much less time than in a manual system.
Allocation of sample results becomes much faster because the user can see the records of past years at a time.
Easier and faster data transfer through latest technology associated with the computer and
communication.
Through these features the system will increase efficiency, accuracy and transparency.
LIMITATIONS:
The size of the database increases day by day, increasing the load on the database backup and data maintenance activity.
Training for simple computer operations is necessary for the users working on the system.
Chapter 11
11. FUTURE IMPROVEMENT
Different online payment facilities will be provided via different payment gateways.
Extending the customer base by offering different deals/offers to customers in the future.
Expanding locations/outlets to provide better services.
In the future, employee information and payroll maintenance will also be included as part of this application.
Chapter 12
12. BIBLIOGRAPHY