
THRESHOLD BASED K-MEANS MONITORING OVER

MOVING OBJECTS

ABSTRACT

Given a dataset P, a k-means query returns k points in space (called centers), such that the average squared distance between each point in P and its nearest center is minimized. Since this problem is NP-hard, several approximate algorithms have been proposed and used in practice. In this paper, we study continuous k-means computation at a server that monitors a set of moving objects. Re-evaluating k-means every time there is an object update imposes a heavy burden on the server (for computing the centers from scratch) and the clients (for continuously sending location updates). We overcome these problems with a novel approach that significantly reduces the computation and communication costs, while guaranteeing that the quality of the solution, with respect to the re-evaluation approach, is bounded by a user-defined tolerance. The proposed method assigns each moving object a threshold (i.e., range) such that the object sends a location update only when it crosses the range boundary. First, we develop an efficient technique for maintaining the k-means. Then, we present mathematical formulae and algorithms for deriving the individual thresholds. Finally, we justify our performance claims with extensive experiments.
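Formally, with C = {c_1, ..., c_k} denoting the k centers, the objective minimized by a k-means query is the average squared distance between each point and its nearest center (standard k-means notation, not fixed by this document):

\min_{C=\{c_1,\dots,c_k\}} \; \frac{1}{|P|} \sum_{p \in P} \min_{1 \le j \le k} \lVert p - c_j \rVert^2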


1.INTRODUCTION

1.1 PROJECT OBJECTIVE

 To reduce the heavy burden on the server side of computing the centers from scratch.
 To reduce the heavy burden on the client side of continuously sending location updates.
 To reduce the overall computation and communication costs.

1.2 PROJECT OVERVIEW

The software entitled “THRESHOLD BASED K-MEANS MONITORING OVER NON STATIC OBJECTS” is a Windows application implemented to replace the existing system effectively. It addresses the objectives mentioned above effectively and efficiently.

The following modules are categorized based upon the proposed model

1. Login or Security Form
      i. Here the visiting user is filtered by two walls, namely Authentication and Authorization. In the Authentication part, the user is verified by username and password. After Authentication, the Authorization wall grants access rights to the user and checks whether the user has administrator rights or limited rights.

2. Server Settings
      a. In the server-side settings, the Server ID is maintained as a unique identifier (primary key), and the server name and the respective server configuration details are stored accordingly.
3. Client Settings
      a. In the client-side settings, the Client ID is maintained as a unique identifier like the Server ID, and the respective client name and configuration details are stored accordingly.
4. Threshold Settings
      a. Here the client interaction area is managed by the minutes settings, the threshold content, and the threshold range, covering events such as position changing, relieving, joining, replacing, and grouping.

5. GPS Device Settings
      a. Here the communication port is set according to the server-side facility. Via this communication port, the client and the server communicate over the additionally provided wireless technology, the GPS interface. GPS (Global Positioning System) is used to detect the client's area, acting as the wireless technology.
6. Interaction Area
      a. Here the interaction area serves as the main part of the client and server interactions.
7. Visual Area
      a. The client and server actions and positions are visualized live to the user.
8. Client Wise Running Graph
      a. The running graph of each client-wise flow is visualized to the user.
9. Server Wise Running Graph
      a. The running graph of each server-wise flow is visualized to the user.
10. Client and Server Action History
      a. The client- and server-wise action history for each object update is stored in the database and can be visualized by the user.


2.1. PRODUCT PERSPECTIVE

Product perspective describes how anything that is produced, whether as the result of generation, growth, etc., is viewed. In this TKM system, the following solution is produced to overcome the stated problems: a novel approach that significantly reduces the computation and communication costs, while guaranteeing that the quality of the solution, with respect to the re-evaluation approach, is bounded by a user-defined tolerance. The proposed method assigns each moving object a threshold (i.e., range) such that the object sends a location update only when it crosses the range boundary. First, an efficient technique for maintaining the k-means is developed. Then, mathematical formulae and algorithms for deriving the individual thresholds are presented. Finally, the performance claims are justified with extensive experiments.

SYSTEM ARCHITECTURE
2.2. PRODUCT FUNCTIONS

The Functions of the project includes

 K-Means Computations for Static data


 Threshold based K-means monitoring
 Threshold Assignments
 Mathematical formulation of thresholds
 Computation of thresholds
 Utilizing the object speed
 Dissemination of thresholds
2.3. USER CHARACTERISTICS

Generally, the server user takes the role of controlling each client user through threshold assignments, whereas the client users are forced to carry their threshold assignments with them, and those thresholds are handed over to the server user at the time of relieving or position changing. Whenever a new client user enters the process, the server releases a new threshold to the newly entered user.

2.4. CONSTRAINTS

 The server has to take charge of all client users by keeping their thresholds.
 A newly entered client user has to inform the server and get a threshold for further processing.
 When any client leaves its current location and moves to a new location, that client has to hand over its respective threshold to the server and get a new threshold for the new location.
 At the time of relieving from the current process, a client has to hand over its threshold, stating its position and status (such as relieving).

2.5. ASSUMPTIONS AND DEPENDENCIES:

 It is assumed that all client users are under the control of the server.
 Each and every client object update depends on the threshold control of the server.
3.PROGRAMMING ENVIRONMENT

3.1. User Interfaces

o The server logs in with authentication rights and full authorization rights.
o Each and every client logs in using a mobile device by sending a message to the server system, and the server in turn checks whether the user has an account in the server database. If the user data exists in the database, the server delivers the threshold and brings the client under server control.
3.2 Hardware Interfaces

There is no need for extra hardware interfaces beyond a standard PC.

3.3. Software Interfaces

 The .NET Framework 2.0 is used on the server side.
 A GSM modem interface is used for communicating with the client users.
 Client users need a mobile device to send message communications that update their location to the server system.

3.4. Functional Requirements for the Feature

3.4.1. Requirement 1 – Server Side Settings, Client Side Settings,


Threshold Settings

Deadline Factors : 5 Days to complete this Process


3.4.2. Requirement 2 – GPS Device Settings

Deadline Factors : 3 Days to complete this Process

3.4.3. Requirement 3 – Interaction Area, Visual Area

Deadline Factors : 10 Days to complete this Process

3.4.4. Requirement 4 – Client and Server Action History

Deadline Factors : 4 Days to complete this Process

3.4.5. Requirement 5 – Client Wise Running Graph, Server Wise Running Graph

Deadline Factors : 12 Days to complete this Process

3.3 Data Requirements:

A data model is a modeling work product that models a business


enterprise, application, framework, or component in terms of its data.

Client & Server Interaction

The computation of the initial center set M(0) is the same as in REF. After the server computes a center set, it sends every object pi a threshold ∆, such that pi needs to issue an update only if its current location deviates from the previously reported one by at least ∆.

Server-Side Update:

When the server receives such an update, it obtains the new k-means using HC*, an optimized version of hill climbing (HC), over the last recorded locations of the other objects (which are still within their assigned thresholds). Then, it computes the new threshold values. Because most thresholds remain the same, the server only needs to send messages to a fraction of the objects.
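A minimal VB.NET sketch of this interaction is given below; it assumes an in-memory server with 2-D Euclidean locations, and the class and method names are illustrative rather than taken from the TKM paper.

Imports System.Drawing
Imports System.Collections.Generic

' Illustrative sketch only: names and structure are assumptions, not the paper's algorithm.
Public Class MovingObject
    Public Id As Integer
    Public LastReported As PointF   ' location last sent to the server
    Public Threshold As Double      ' range (threshold) assigned by the server

    ' The client issues an update only when it crosses its range boundary.
    Public Function NeedsUpdate(ByVal current As PointF) As Boolean
        Dim dx As Double = current.X - LastReported.X
        Dim dy As Double = current.Y - LastReported.Y
        Return Math.Sqrt(dx * dx + dy * dy) >= Threshold
    End Function
End Class

Public Class TkmServer
    Private ReadOnly objects As New Dictionary(Of Integer, MovingObject)

    Public Sub Register(ByVal obj As MovingObject)
        objects(obj.Id) = obj
    End Sub

    ' Invoked when an object reports that it has left its threshold range.
    Public Sub OnLocationUpdate(ByVal id As Integer, ByVal location As PointF)
        objects(id).LastReported = location
        ' Here the server would re-run HC* over the last recorded locations of all
        ' objects and push new thresholds only to the objects whose values changed.
    End Sub
End Class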

4. Constraints:

System Component Specific Constraints:

4.1. Data and Content Constraints:


Client and Server information is maintained in the database.

4.2. Hardware Constraints:


Pentium Processor IV, 128MB RAM.

GSM MODEM COMMUNICATION PORT

4.3. Software Constraints:


VISUAL STUDIO DOT NET 2005,

SQL SERVER

CRYSTAL REPORTS

3.2 HARDWARE INTERFACES (MINIMUM CONFIGURATION)

• SYSTEM : PENTIUM IV 2.4 GHZ

• HARD DISK : 40 GB

• FLOPPY DRIVE : 1.44 MB


• MONITOR : 15 VGA COLOUR

• MOUSE : LOGITECH.

• RAM : 256 MB

• KEYBOARD : 110 KEYS ENHANCED.

3.3. SOFTWARE INTERFACES

• Operating system :- Windows XP Professional

• Front End :- Microsoft Visual Studio .Net 2005

• Coding Language :- VB.Net 2005

• Database :- SQL SERVER 2000

3.6 PLATFORM - WINDOWS XP

3.6.1 Built on the new Windows engine

Windows XP Professional is built on the proven code base of Windows NT and

Windows 2000, which features a 32-bit computing architecture and a fully protected

memory model. Windows XP Professional will provide a dependable computing

experience for all business users.


3.6.2 Enhanced device driver verifier

Building on the device driver verifier found in Windows 2000, Windows XP Professional provides even greater stress tests for device drivers. Device drivers that pass these tests will be the most robust drivers available, which ensures maximum system stability.

3.6.3 Dramatically reduced reboot scenarios

Windows XP Professional eliminates most of the scenarios that force end users to reboot in Windows NT 4.0 and Windows 95/98/Me. Also, many software installations will not require reboots. Users will experience higher levels of system uptime.

3.6.4 Improved code protection

Critical kernel data structures are read-only, so that drivers and applications

cannot corrupt them. All device driver code is read-only and page protected. Rogue

applications cannot adversely affect core operating system areas.

3.6.5 Side-by-side DLL support


Provides a mechanism for multiple versions of individual Windows components

to be installed and run "side by side". This helps to address the "DLL hell" problem by

allowing an application written and tested with one version of a system component to

continue to use that version even if an application that uses a newer version of the

same component is installed.

3.6.6 Windows File Protection

Windows File Protection protects core system files from being overwritten by application installations. If a file is overwritten, Windows File Protection will restore the correct version.

By safeguarding system files, Windows XP Professional mitigates many of the most common system failures encountered in earlier versions of Windows.

3.6.7 Windows Installer


A system service that helps users install, configure, track, upgrade, and remove software programs correctly, which helps to minimize user downtime and increase system stability.

3.6.8 Enhanced software restriction policies

Provides administrators with a policy-driven mechanism to identify software running in their environment and control its ability to execute. This facility can be used in virus and Trojan horse prevention and software lockdown, which contributes to improved system integrity, manageability, and, ultimately, a lower cost of ownership of the PC.

3.6.9 Preemptive multitasking architecture

The preemptive multitasking architecture is designed to allow multiple applications to run simultaneously, while ensuring great system response and stability. It runs the user's most demanding applications while still delivering impressive system response time.

3.6.10 Scalable memory and processor support


Scalable memory and processor support allows up to 4 gigabytes (GB) of RAM and up to two symmetric multiprocessors. Users who need the highest level of performance will be able to work with the latest hardware.

3.6.11 Encrypting File System (EFS) with multi-user support

EFS encrypts each file with a randomly generated key. The encryption and decryption processes are transparent to the user. In Windows XP Professional, EFS can provide multiple users access to an encrypted document.

3.6.12 IP Security (IPSec)

IP Security helps to protect data transmission across a network. IPSec is an important part of providing security for Virtual Private Networks (VPNs), which allow organizations to transmit data securely over the Internet. IT administrators will be able to build secure VPNs quickly and easily.

3.6.13 Kerberos support


Provides industry-standard and high-strength authentication with fast, single logon to Windows 2000-based enterprise resources. Kerberos is an Internet standard, which makes it especially effective for networks that include different operating systems such as UNIX.

Windows XP Professional will offer single logon for end users for resources and supported applications hosted on both Windows 2000 and the next-generation server platform, Windows Server 2003.

3.6.14 Smart card support

Smart card capabilities are integrated into the operating system, including support for smart card logon to terminal server sessions hosted on Windows Server 2003-based terminal servers (the next-generation server platform). Smart cards enhance software-only solutions such as client authentication, interactive logon, code signing, and secure e-mail.

3.6.15 Internet Explorer Add-on Manager


Easily manage and enforce a list of Internet Explorer add-ons that are either

permitted or disabled to enhance security. Helps reduce the potential for crashes.

3.6.16 Windows Firewall

Turned on by default, the built-in Windows Firewall helps increase computer

security from startup to shutdown. It reduces the risk of network and Internet-based

attacks.

3.6.17 Windows Security Center

Easily manage security resources with this single, unified view of key settings,

tools, and access to resources. Windows Security Center helps to change settings

easily and identifies security issues.

3.6.18 Attachment Manager


The Attachment Manager isolates potentially unsafe attachments during the opening process, which helps to provide protection from viruses spread through Outlook Express, Windows Messenger and Internet Explorer.

3.6.19 Data Execution Prevention

Data Execution Prevention helps to prevent certain types of malicious code from attacking and overwhelming a computer's memory, reducing the risk of buffer overruns.

3.6.20 Windows Firewall Exception List

The Windows Firewall Exception List helps administrators to manage application and static port exceptions by allowing only the ports needed by an application to be open.

3.6.21 Windows Firewall Application and Port Restrictions

Easily configure applications and ports to receive network traffic only with a

source address from any location, the local subnet only, or

from specific IP addresses. It helps reduce the potential for network-based attacks.
Easy to Use

3.6.22 Fresh visual design

While maintaining the core of Windows 2000, Windows XP Professional has a

fresh visual design. Common tasks have been consolidated and simplified, and new

visual cues have been added to help users navigate their computers more easily.

Administrators or end users can choose this updated user interface or the classic Windows 2000 interface with the click of a button. It allows the most common tasks to be exposed easily, helping users get the most out of Windows XP Professional.

3.6.23 Adaptive user environment

Adaptive user environment adapts to the way an individual user works. With a

redesigned start menu, the most frequently used applications are shown first. When

the user opens multiple files in the same application (such as multiple e-mail messages in the Microsoft Outlook messaging and collaboration client), the open windows will be consolidated under a single task bar button.


To remove some of the clutter from the notification area, items that are not

being used will be hidden. All of these features can be set using Group Policy.

A cleaner work environment allows the user to be more efficient. Users

can find the crucial data and applications they need quickly and easily. All of these

settings can be controlled using Group Policy, so IT administrators can decide what

features are most appropriate for their environments.

3.6.24 Work with rich media

Windows Media Player for Windows XP is the first player to combine all of the

common digital media activities into a single, easy-to-use player.

The player makes it easy to view rich media information. For example, virtual company meetings or "just-in-time" learning receive the best possible audio and video quality, because the player adapts to network conditions.

3.6.25 Context-sensitive task menus


When a file is selected in Windows Explorer, a dynamic menu appears. This menu lists tasks that are appropriate for the type of file selected. Common tasks that were hard to find in previous versions of Windows are exposed for easy access.

3.6.26 Integrated CD burning

Support for burning CDs on CD-R and CD-RW drives is integrated into

Windows Explorer. Archiving data onto CD is now as easy as saving to a floppy disk,

and does not require an expensive third-party solution.

3.6.27 Easily publish information to the Web


Files and folders can be easily published to any Web service that uses the

WebDAV protocol. Users will be able to publish important information to Web servers

on the company's intranet.

3.6.28 Dual view

A single computer desktop can be displayed on two monitors driven off of a

single display adapter. With a laptop computer, a user could run the internal LCD

display as well as an external monitor.

A variety of high-end display adapters will support this functionality for

desktops. Users will be able to maximize their productivity by working on multiple

screens, while removing the need for multiple CPUs.

3.6.29 Troubleshooters

Troubleshooter helps users and administrators configure, optimize, and

troubleshoot numerous Windows XP Professional functions. It enables users to be more self-sufficient, resulting in greater productivity, fewer help desk calls, and better customer service.


3.7 SOFTWARE TOOLS USED

3.7.1 Overview Of The .Net Framework

The .NET Framework is a managed, type-safe environment for application

development and execution. The framework manages all aspects of the execution of

the program: it allocates memory for the storage of data and instructions, grants or

denies the appropriate permissions to the application, initiates and manages

application execution, and manages the reallocation of memory for resources that are

no longer needed. The .NET Framework consists of two main components: the

common language runtime and the .NET Framework class library.

3.7.1.1 Common Language Runtime (CLR): -

CLR is described as the “execution engine” of .NET. It provides

the environment within which programs run. The most important features are:

 Conversion from a low-level assembler-style language called Intermediate

Language (IL), into code native to the platform being executed on

 Memory Management, notably including garbage collection.

 Checking and enforcing security restrictions on the running code.

 Loading and executing programs with version control and other such

features.
The common language runtime can be thought of as the environment

that manages code execution. It provides core services, such as code

compilation, memory allocation, thread management, and garbage collection.

Through the common type system (CTS), it enforces strict type safety, and it

ensures that code is executed in a safe environment by enforcing code access

security. The .NET Framework class library provides a collection of useful and

reusable types that are designed to integrate with the common language

runtime. The types provided by the .NET Framework are object-oriented and

fully extensible, and allow the user to seamlessly integrate the applications

with the .NET Framework.

3.7.1.2 Languages and the .NET Framework

The .NET Framework is designed for cross-language compatibility. Simply put, this means that .NET components can interact with each other no matter what language they were originally written in, whether Visual Basic .NET, Microsoft C++ or any other .NET language. The language interoperability extends to full object-oriented inheritance.

The level of cross-language compatibility is possible because of the

common language run time. When a .NET application is compiled, it is

converted from the language it was written in (Visual Basic .NET, any other

.NET compliant language) to Microsoft Intermediate Language (MSIL or IL). It is


a low-level language designed to be read and understood by the common

language run time. Because all .NET executables and DLLs exist as

intermediate language, they can freely interoperate.

The Common Language Specification defines the minimum standards

that .NET language compilers must conform to, and thus ensures that any

source code compiled by a .NET compiler can interoperate with the .NET

Framework.

The CTS ensures type compatibility between .NET components. Because .NET applications are converted to IL prior to deployment and execution, all primitive data types are represented as .NET types. Thus, a Visual Basic Integer is represented in IL code as a System.Int32. Because all languages use a common and interconvertible type system, it is possible to transfer data between components and avoid time-consuming conversions or hard-to-find errors.
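A small illustration of this mapping, assuming a console project (the names are arbitrary):

Module TypeMappingDemo
    Sub Main()
        Dim a As Integer = 42           ' VB.NET keyword type
        Dim b As System.Int32 = a       ' same underlying CTS type, no conversion needed
        Console.WriteLine(a.GetType() Is b.GetType())   ' prints True
    End Sub
End Module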

Visual Studio .NET ships with such languages as Visual Basic .NET, and

Visual C++ with managed extensions as well as the JScript scripting language.

The user can also write managed code for the .NET Framework in other

languages. Third party compilers exist for FORTRAN .NET, COBOL .NET,

Perl .Net, and a host of other languages. All of these languages share the same

cross-language compatibility and inheritability. Thus the user can write code
for the .NET Framework in the language of their choice, and it will be able to

interact with code written for the .NET Framework in any other language.

3.7.1.3 The Structure of a .NET Application

To understand how the common language runtime manages the execution of code, the user must examine the structure of a .NET application.

The primary unit of a .NET application is the assembly. An assembly is a self-

describing collection of code, resources, and metadata. The assembly manifest

contains information about what is contained within the assembly.

The assembly manifest provides

 Identity information, such as the name and version number of the assembly.

 A list of all types exposed by the assembly.

 A list of other assemblies required by the assembly.

 A list of code access security instructions for the assembly. It includes a list of

permissions required by the assembly and permissions to be denied the

assembly.

Each assembly has one and only one assembly manifest, and it contains

all the description information for the assembly. The assembly manifest can be

contained in its own separate file, or it can be contained within one of the

assembly's modules.

An assembly also contains one or more modules. A module contains the

code that makes up the application or library, and metadata that describes
that code. When the user compiles a project into an assembly, the code is

converted from high-level code to IL. Because all managed code is first

converted to IL code, applications written in different languages can easily

interact. For example, one developer might write an application in Visual Basic .NET while another writes a module in a different .NET language. Both will be converted to IL modules before being executed, thus avoiding any language incompatibility issues.

Each module also contains a number of types. Types are templates that

describe a set of data encapsulation and functionality. There are two kinds of

types: reference types (classes) and value types (structures). Each type is

described to the common language run time in the assembly manifest. A type

can contain fields, properties, and methods, each of which should be related to

a common functionality. For example, the user might have a class that

represents a bank account. It would contain fields, properties, and methods

related to the functions needed to implement a bank account. A field

represents storage of a particular type of data. The user might have a field that

stores the name of an account holder. Properties are similar to fields, but

usually provide some kind of validation when the data is set or retrieved.

When an attempt is made to change the value, the property could check

to see if the attempted change was greater than a predetermined limit, and if

so, could disallow the change. Methods represent behavior, such as actions

taken on data stored within the class or changes to the user interface.

Continuing with the bank account example, the user might have a Transfer
method that transfers a balance from a checking account to a savings account,

or an Alert method that warns the user when his balance has fallen below a

predetermined level.
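A brief VB.NET sketch of the bank-account type described above follows; the member names and the limit value are illustrative assumptions, not taken from the text.

' Illustrative only: member names and limits are assumptions.
Public Class BankAccount
    Private holderName As String          ' field: stores the account holder's name
    Private balanceValue As Decimal       ' field: stores the current balance
    Private Const ChangeLimit As Decimal = 1000D

    ' Property with validation when the data is set.
    Public Property Balance() As Decimal
        Get
            Return balanceValue
        End Get
        Set(ByVal value As Decimal)
            If Math.Abs(value - balanceValue) > ChangeLimit Then
                Throw New ArgumentException("Change exceeds the predetermined limit.")
            End If
            balanceValue = value
        End Set
    End Property

    ' Method representing behavior: move funds to another account.
    Public Sub Transfer(ByVal target As BankAccount, ByVal amount As Decimal)
        Balance -= amount
        target.Balance += amount
    End Sub

    ' Method that warns when the balance has fallen below a predetermined level.
    Public Function Alert(ByVal minimumLevel As Decimal) As Boolean
        Return balanceValue < minimumLevel
    End Function
End Class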

3.8 FRONT-END

 VB.NET

Visual Basic .NET is a major component of the Microsoft Visual Studio .NET suite. .NET is a framework in which Windows applications may be developed and run. To appreciate .NET, one must go back in time and follow the development of Windows and the advent of Windows programming. The .NET version of Visual Basic is a new, improved version with more features and additions. With these additions, VB qualifies as a full object-oriented language such as C++.

VB.NET is the successor of VB 6.0. Microsoft .NET is a new programming and operating framework introduced by Microsoft. All .NET-supported languages access a common .NET library to develop applications and share common tools to execute applications. Programming with Visual Basic using .NET is called VB.NET. VB.NET, the successor of VB 6.0, is an improved, stable and fully object-oriented language. VB 6.0 wasn't a true object-oriented language because there was no support for inheritance, overloading, and interfaces. VB.NET supports inheritance, overloading, and interfaces. Multithreading and exception handling were two major weak areas of VB 6.0. In VB.NET, the user can develop multithreaded applications as the user can do in C++ and C#, and it also supports structured exception handling.

Here is a list of VB.NET features:

 Object Oriented Programming language.

 Support of inheritance, overloading, interfaces, shared members and

constructors.

 Supports all CLS features such as accessing and working with .NET classes, interaction with other .NET languages, metadata support, common data types, and delegates.

 Multithreading support.

 Structured exception handling.
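A compact sketch showing three of these features, inheritance, overloading and structured exception handling, with illustrative type names:

' Illustrative names; demonstrates inheritance, overloading and Try/Catch.
Public Class Shape
    Public Overridable Function Area() As Double
        Return 0.0
    End Function
End Class

Public Class Square
    Inherits Shape                          ' inheritance

    Private side As Double

    Public Sub New(ByVal side As Double)
        Me.side = side
    End Sub

    Public Overrides Function Area() As Double
        Return side * side
    End Function

    ' Overloading: same name, different parameter lists.
    Public Function Scale(ByVal factor As Double) As Square
        Return New Square(side * factor)
    End Function

    Public Function Scale(ByVal factor As Integer) As Square
        Return New Square(side * CDbl(factor))
    End Function
End Class

Module FeatureDemo
    Sub Main()
        Try
            Dim s As Shape = New Square(3.0)
            Console.WriteLine(s.Area())     ' prints 9
        Catch ex As Exception               ' structured exception handling
            Console.WriteLine(ex.Message)
        End Try
    End Sub
End Module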

 ASP.NET

ASP.NET is the latest version of Microsoft’s Active Server Pages

technology (ASP). ASP+ is the other name for ASP.NET. ASP+ is just an early

name used by Microsoft when they developed ASP.Net.

ASP.NET provides a unified Web development model that includes the

services necessary for developers to build enterprise-class Web applications.

ASP.NET has been designed to work seamlessly with WYSIWYG (What You See Is What You Get) HTML editors and other programming tools, including Microsoft Visual Studio .NET. Not only does it make Web development easier, but it also provides all the benefits that these tools have to offer, including a GUI that developers can use to drop server controls onto a Web page, and fully integrated debugging support.

New features in ASP.Net are,

 Programmable controls

 Event-driven programming

 XML-based components

 Higher scalability

 Increased performance – Compiled code

 Easier configuration and deployment

 ADO.NET

ADO.NET is a set of classes that expose data access services to the .NET

programmer. ADO.NET provides a rich set of components for creating

distributed, data sharing applications. It is an integral part of the .NET

Framework, providing access to relational data, XML, and application data.

ADO.NET supports a variety of development needs, including the creation of


front-end database clients and middle-tier business objects used by

applications, tools, languages, or Internet browsers.

ADO.NET provides consistent access to data sources such as Microsoft

SQL Server, as well as data sources exposed through OLE DB and XML. Data-

sharing consumer applications can use ADO.NET to connect to these data

sources and retrieve, manipulate, and update data.

ADO.NET cleanly factors data access from data manipulation into

discrete components that can be used separately or in tandem. ADO.NET

includes .NET Framework data providers for connecting to a database,

executing commands, and retrieving results. Those results are either processed

directly, or placed in an ADO.NET DataSet object in order to be exposed to the

user in an ad-hoc manner, combined with data from multiple sources, or

remoted between tiers. The ADO.NET DataSet object can also be used

independently of a .NET Framework data provider to manage data local to the

application or sourced from XML.
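A minimal sketch of this pattern using the SQL Server data provider; the connection string is a placeholder, and the query reads the client_set table described later in this document:

Imports System.Data
Imports System.Data.SqlClient

Module AdoNetDemo
    Sub Main()
        ' Placeholder connection string.
        Dim connStr As String = "Data Source=.;Initial Catalog=TKM;Integrated Security=True"

        Using conn As New SqlConnection(connStr)
            conn.Open()

            ' Execute a command against the data source.
            Dim cmd As New SqlCommand("SELECT cid, cname FROM client_set", conn)

            ' Fill a disconnected DataSet that can be processed in an ad-hoc manner.
            Dim adapter As New SqlDataAdapter(cmd)
            Dim ds As New DataSet()
            adapter.Fill(ds, "client_set")

            For Each row As DataRow In ds.Tables("client_set").Rows
                Console.WriteLine("{0} - {1}", row("cid"), row("cname"))
            Next
        End Using
    End Sub
End Module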

3.9 BACK-END

 MS SQL Server 2000

Microsoft SQL Server 2000 extends the performance, reliability, quality, and

ease-of-use of Microsoft SQL Server version 7.0. Microsoft SQL Server 2000 includes

several new features that make it an excellent database platform for large-scale online

transactional processing (OLTP), data warehousing, and e-commerce applications.


The OLAP Services feature available in SQL Server version 7.0 is now called

SQL Server 2000 Analysis Services. The term OLAP Services has been replaced with

the term Analysis Services. Analysis Services also includes a new data mining

component.

The Repository component available in SQL Server version 7.0 is now called

Microsoft SQL Server 2000 Meta Data Services. References to the component now use

the term Meta Data Services. The term repository is used only in reference to the

repository engine within Meta Data Services.

4. SYSTEM DESIGN

4.1 FUNDAMENTAL DESIGN CONCEPTS

System design sits in the technical kernel of software engineering and is applied regardless of the software process model that is used. Beginning once the software requirements have been analyzed and specified, design is the first of the technical activities required for building and verifying the software. Each activity transforms information in a manner that ultimately results in validated computer software.


There are mainly three characteristics that serve as a guide for the evaluation of good design:

 The design must implement all of the explicit requirements contained in the analysis model, and it must accommodate all of the implicit requirements desired by the customer.

 The design must be readable, understandable guide for those who generate

code and for those who test and subsequently support the software.

 The design should provide a complete picture of software, addressing the data,

its functional and behavioral domains from the implementation perspective.

System design is thus a process of planning a new system to replace or complement the existing system. The design is based on the limitations of the existing system and the requirements specification gathered in the phase of system analysis.

Input design is the process of converting the user-oriented description of the

computer based business information into program-oriented specification. The goal of

designing input data is to make the automation as easy and free from errors as

possible.

Logical Design of the system is performed where its features are described,

procedures that meet the system requirements are formed and a detailed specification

of the new system is provided


Architectural Design of the system includes identification of software

components, decoupling and decomposing them into processing modules, conceptual

data structures and specifying relationship among the components.

Detailed Design is concerned with the methods involved in packaging of

processing modules and implementation of processing algorithms, data structure and

interconnection among modules and data structure.

External Design of software involves conceiving, planning and specifying the

externally observable characteristics of the software product. The external design

begins in the analysis phase and continues till the design phase.

As per the design phase, the following designs had to be implemented; each of these designs was processed separately, keeping in mind all the requirements, constraints and conditions. A step-by-step process was required to perform the design.

Process Design is the design of the process to be done; it is the designing that

leads to the coding. Here the conditions and the constraints given in the system are to

be considered. Accordingly the designing is to be done and processed.


The Output Design is the most important and direct source of information to

the user. The output design is an ongoing activity during study phase. The objectives

of the output design define the contents and format of all documents and reports in an

attractive and useful format.

4.2 DESIGN CONCEPTS

4.2.1 SYSTEM FLOW DIAGRAM

An overall representation of the system can be represented by using

system flow diagram. In a system flow diagram the source and the destination are

depicted by a rectangle. The arrow in a system flow diagram represents the flow of

data from one source to the other.

4.2.2 DATAFLOW DESIGNS

The data flow diagram (DFD) is a graphical tool used for expressing

system requirements in a graphical form. The DFD also known as the “bubble

chart” has the purpose of clarifying system requirements and identifying major

transformations that will become programs in system design. Thus DFD can be

stated as the starting point of the design phase that functionally decomposes

the requirements specifications down to the lowest level of detail. The DFD

consists of series of bubbles joined by lines. The bubbles represent data


transformations and the lines represent data flows in the system. A DFD describes what the data flows are rather than how they are processed, so it does not depend on hardware, software, data structure or file organization.

4.2.3 Rules Used For Constructing a DFD

Process should be named and numbered for easy reference. Each name

should be representative of the process. The direction of flow is from top to

bottom and from left to right. That is data flow should be from source to

destination. When a process is exploded into lower level details, they are

numbered. The name of the data stores, sources and destinations are written

in capital letters. Process and data flow names have the first letter of each word

capitalized. The DFD is particularly designed to aid communication. If it contains dozens of processes and data stores, it gets too unwieldy. The rule of thumb is to explode the DFD into a functional level. Beyond that, it is best to

take each function separately and expand it to show the explosion in a single

process. If a user wants to know what happens within a given process, then the

detailed explosion of that process may be shown.


4.2.4 DATABASE DESIGNS

Data Constraints

All businesses in the world run on business data being gathered, stored and analyzed. Business managers determine a set of rules that must be applied to the data being stored to ensure its integrity.

Types of Data Constraints

There are two types of data constraints that can be applied to data being inserted into a database table. One type of constraint is called an I/O constraint. The other type of constraint is called a business rule constraint.

I/O Constraints

The input /output data constraint is further divided into two distinctly different

constraints.

The Primary Key Constraint

Here the data constraint attached to a column ensures:

 That the data entered in the table column is unique across the entire

column.

 That none of the cells belonging to the table column are left empty.
The Foreign Key Constraint

A foreign key constraint establishes a relationship between records across a master and a detail table. The relationship ensures:

 Records cannot be inserted in a detail table if corresponding records in the

master table do not exist.

 Records of the master table cannot be deleted if corresponding records in the

detail table exist.

Business Rule Constraints

The Database allows the application of business rules to table columns.

Business managers determine business rules.

The Database allows programmers to define constraints at:

 Column Level

 Table Level

Column Level Constraints

If data constraints are defined along with the column definition when creating or altering a table structure, they are column level constraints.


Table Level Constraints

If data constraints are defined after defining all the table columns when

creating or altering a table structure, it is a table level constraint.

Null Value Concepts

A NULL value is different from a blank or zero. NULL values are treated

specially by the database. A NULL value can be inserted into the columns of any data

type.

Not Null Constraint Defined at the Column Level

When a column is defined as NOT NULL, that column becomes a mandatory column. It implies that a value must be entered into the column if the record is to be accepted for storage in the table.

The Primary Key Constraint

Primary Key Concepts

A primary key is one or more column(s) in a table used to uniquely identify each row in the table. A primary key column in a table has special attributes:

 It defines the column as a mandatory column, i.e. the column cannot be left blank. The NOT NULL attribute is active.

 The data held across the column MUST BE UNIQUE.
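As an illustration of these constraints (a hypothetical master/detail pair, not one of the project's actual tables), column-level and table-level constraints can be declared when the tables are created, for example from VB.NET:

Imports System.Data.SqlClient

Module ConstraintDemo
    ' Assumes an already-open SqlConnection is passed in.
    Sub CreateTables(ByVal conn As SqlConnection)
        ' Column-level constraints: PRIMARY KEY and NOT NULL.
        Dim masterDdl As String = _
            "CREATE TABLE dept (dept_id VARCHAR(10) PRIMARY KEY, dept_name VARCHAR(50) NOT NULL)"

        ' Table-level constraint: FOREIGN KEY relating the detail table to its master.
        Dim detailDdl As String = _
            "CREATE TABLE emp (emp_id VARCHAR(10) PRIMARY KEY, dept_id VARCHAR(10), " & _
            "CONSTRAINT fk_emp_dept FOREIGN KEY (dept_id) REFERENCES dept (dept_id))"

        Using cmd As New SqlCommand(masterDdl, conn)
            cmd.ExecuteNonQuery()
            cmd.CommandText = detailDdl
            cmd.ExecuteNonQuery()
        End Using
    End Sub
End Module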


4.2.5 ENTITY RELATIONSHIP DIAGRAM

The Entity-Relationship (ER) model is a conceptual data model that views the

real world as entities and relationships. A basic component of the model is the Entity-

Relationship diagram, which is used to visually represent data objects. The model has

been extended and today it is commonly used for database design. Features of ER

Model are:

 It maps well to relational model.

 It is simple and easy to understand with a minimum of training. Therefore, the database designer can use the model to communicate with the end user.

 In addition, the model can be used as a design plan by the database developer to implement a data model in specific database management software.

4.2.6 NORMALIZATION

In relational database design, the process of organizing data to minimize

redundancy is called normalization. Normalization usually involves dividing a database into two or more tables and defining relationships between the tables. The

objective is to isolate data so that additions, deletions and modifications of a field can

be made in just one table and then propagated through the rest of the database via

defined relationships.
There are three normal forms, each with increasing levels of

normalization:

1NF

First Normal Form (1NF): Every cell in the table must have only one value i.e., it

should not have multiple values.

2NF

Second Normal Form (2NF): All non-key attributes must be fully functional

dependent on the primary key and not just the part of the key.

3NF

Third Normal Form (3NF): The database must be in second normal form, and no non-prime attribute should be transitively dependent on the primary key.

Boyce Codd NF

It states that every determinant in the database must be a candidate key (a stricter form of 3NF).

4NF
It deals with multiple valued dependencies.

5NF

It deals with joined dependencies.

The database is generally normalized up to 3NF: every cell in a table has only one value (i.e. it does not have multiple values), all non-key attributes are fully functionally dependent on the primary key and not just part of the key, and no non-prime attribute is transitively dependent on the primary key.

4.2.7 SYSTEM DEVELOPMENT

In the development phase, the computer-related business information system is constructed from the specification prepared in the design phase. A principal activity of

the development phase is coding and testing the computer programs that make up the

computer program component of the overall system. Other important activities include

implementation planning, equipment acquisition and system testing. The development

phase concludes with a development phase report and user review.

The user entry screens are developed using VB. The user login is performed in the login entry form. Text boxes are provided for data entry, and buttons are provided to perform actions such as submit, cancel, etc.


5. SYSTEM TESTING & IMPLEMENTATION

After the successful study of requirement analysis the next step involved is the

Design and Development phase that practically helps to build the project.

The methods that are applied during the development phase

 Software Design

 Code Generation

 Software Testing

The project is developed using the Linear Sequential Model, also called the Classic Life Cycle or the Waterfall Model. This is a sequential approach to software development that begins at the system level and progresses through analysis, design, coding and testing.

 System / Information Engineering and Modeling

Because software is always part of a larger system, work begins by establishing

requirements for all system elements and then allocating some subset of these

requirements to software. System view is essential when software must interact with

other elements such as hardware people and database.

 Software requirements analysis

Requirements gathering is intensified and focused specially on software. To understand

the nature of the program to be built, the software engineer must understand the
information domain for the software, as well as required function, behavior,

performance and interface.

 Design of the project

The design process translates requirements into a representation of the

software that can be accessed for quality before coding begins. Like requirements, the

design is documented and becomes part of the software configuration.

 Code Generation

The design must be translated into a machine-readable form. The code

generation step performs this task. If design is performed in a detailed manner, code

generation can be accomplished mechanistically.

After completing the design phase, code was generated using Visual Basic

environment and the SQL Server 2000 was used to create the database. The server

and the application were connected through ADO.Net concepts.

The purpose of code is to facilitate the identification and retrieval of items of

information. Codes are built with mutually exclusive features. They are used to give operational instructions and other information. Codes also show interrelationships among different items. Codes are used for identifying, accessing, sorting and matching records. The code ensures that only one code value with a single meaning is correctly applied to a given entity or attribute, as described in various ways. Codes can also be

designed in a manner easily understood and applied by the user.

The coding standards used in the project are as follows:

1. All variable names are kept in such a way that they represent the flow/function they serve.
2. All functions are named such that the name represents the function being performed.

5.1 SYSTEM IMPLEMENTATION

A software application in general is implemented after navigating the complete

life cycle method of a project. Various life cycle processes such as requirement

analysis, design phase, verification, testing and finally followed by the implementation

phase results in successful project management. The software application, which is basically a Windows-based application, has been successfully implemented after passing

various life cycle processes mentioned above.

As the software is to be implemented in a high standard industrial sector,

various factors such as application environment, user management, security,

reliability and finally performance are taken as key factors throughout the design

phase. These factors are analyzed step by step and the positive as well as negative

outcomes are noted down before the final implementation.

Security and authentication is maintained in both user level as well as the

management level. The data is stored in SQL Server 2000 as the RDBMS, which is highly

reliable and simpler to use, the user level security is managed with the help of

password options and sessions, which finally ensures that all the transactions are

made securely.

The application’s validations are made, taken into account of the entry levels

available in various modules. Possible restrictions like number formatting, date

formatting and confirmations for both save and update options ensures the correct

data to be fed into the database. Thus all the aspects are charted out and the complete

project study is practically implemented successfully for the end users.


5.2 SYSTEM TESTING

Software testing is a critical element of software quality assurance and

represents the ultimate review of specification, design and code generation. Once the

source code has been generated, software must be tested to uncover as many errors as

possible before delivery to the customer. In order to find the highest possible number

of errors, tests must be conducted systematically and test cases must be designed

using disciplined techniques.

5.2.1 TYPES OF TESTING

 Whitebox Testing

White box testing, sometimes called glass box testing, is a test case design method that uses the control structures of the procedural design to derive test cases.

Using white box testing methods, the software engineer can derive test cases that guarantee that all independent paths within a module have been exercised at

least once, exercise all logical decisions on their true and false sides, execute all loops

at their boundaries and within their operational bounds, exercise internal data

structures to ensure their validity. “Logic errors and incorrect assumptions are

inversely proportional to the probability that a program path will be executed“.

The logical flow of a program is sometimes counterintuitive, meaning that unconscious assumptions about the flow of control and data may lead to design errors that are uncovered only once path testing commences.


“Typographical errors are random“

When a program is translated into programming language source code, it is

likely that some typing errors will occur. Many will be uncovered by syntax and typing

checking mechanisms, but others may go undetected until testing begins. It is as likely that a typo will exist on an obscure logical path as on a mainstream path.

 Black box Testing

Black box testing, also called behavioral testing, focuses on the functional

requirements of the software. That is, black box testing enables the software engineer

to derive sets of input conditions that will fully exercise all functional requirements for

a program. Black box testing attempts to find errors in the following categories:

1. Incorrect or missing functions

2. Interface errors

3. Errors in data structures or external data base access

4. Behavior or performance errors

5. Initialization and termination errors

By applying black box techniques, a set of test cases that satisfy the following criteria was created: test cases that reduce, by a count that is greater than one,

the number of additional test cases that must be designed to achieve reasonable

testing and test cases that tell something about the presence or absence of classes of

errors, rather than an error associated only with the specific test at hand.
Black-box testing is not an alternative to white-box testing techniques. Rather, it is a complementary approach that is likely to uncover a different class of errors than white-box methods.

 Validation Testing

Validation testing provides the final assurance that software meets all

functional, behavioral and performance requirements. Validation testing can be

defined in many ways, but a simple definition is that validations succeed when the

software functions in a manner that is expected by the user. The software, once validated, must be combined with the other system elements. System testing verifies that all elements combine properly and that overall system function and performance are achieved. After the integration of the modules, the validation test was carried out over the system. It was found that all the modules work well together and meet the overall system function and performance.

 Integration Testing

Integration testing is a systematic technique for constructing the program

structure while at the same time conducting test to uncover errors associated with

interfacing. The objective is to take unit - tested modules and build a program

structure that has been dictated by design. Careful test planning is required to

determine the extent and nature of system testing to be performed and to establish

criteria by which the result will be evaluated.

All the modules were integrated after the completion of unit testing. Since Top-Down Integration was followed, the modules were integrated by moving downward through the control hierarchy, beginning with the main module. Since the modules were unit-tested with no errors, the integration of those modules was found to be perfect and working fine. As a next step in integration, other modules were integrated with the former modules.

After the successful integration of the modules, the system was found to be

running with no uncovered errors, and also all the modules were working as per the

design of the system, without any deviation from the features of the proposed system

design.

 Acceptance Testing

Acceptance testing involves planning and execution of functional tests,

performance tests and stress tests in order to demonstrate that the implemented

system satisfies its requirements. When custom software is built for one customer, a

series of acceptance tests are conducted to enable the customer to validate all

requirements.

In fact, acceptance testing incorporates test cases developed during integration testing. Additional test cases are added to achieve the desired level of functional, performance and stress testing of the entire system, so that cumulative errors that might degrade the system over time are uncovered.
 Unit testing

Static analysis is used to investigate the structural properties of source code.

Dynamic test cases are used to investigate the behavior of source code by executing

the program on the test data. This testing was carried out during programming stage

itself.

After testing each and every field in the modules, the modules of the project were tested separately. Unit testing focuses verification efforts on the smallest unit of software design, the field. This is known as field testing.

5.3 TEST CONSIDERATIONS FOLLOWED IN THIS PROJECT:

The test that occurs as part of unit testing is given below.

 Interface

Tested to ensure the information properly flows in and out of the program unit

under test.
 Local Data Structures

The temporarily stored data in this module has been checked for integrity. It was seen that no loss of data or misinterpretation of data was taking place in this module.

 Boundary Conditions

The data to this module have a fixed length and are known to have a particular range of values. The module was tested with input data at the corresponding lower-bound and upper-bound values, and also with values within the range, and it was found that the module operates well at the boundary conditions.

 Independent Paths

The module was tested so that its independent paths were exercised, and it was found that the module operates correctly along each of these paths.
CONCLUSION

This paper proposes TKM, the first approach for continuous k-means

computation over moving objects. Compared to the simple solution of re-

evaluating k-means for every object update, TKM achieves considerable savings

by assigning each object a threshold, such that the object needs to inform the

server only when there is a threshold violation. We present mathematical

formulae and an efficient algorithm for threshold computation. In addition, we

develop an optimized hill climbing technique for reducing the CPU cost, and

discuss optimizations of TKM for the case that object speeds are known.

Finally, we design different threshold dissemination protocols depending on the

computational capabilities of the objects. In the future, we plan to extend the

proposed techniques to related problems. For instance, k-medoids are similar

to k-means, but the centers are restricted to points in the dataset. TKM could

be used to find the k-means set, and then replace each center with the closest
data point. It would be interesting to study performance guarantees (if any) in

this case, as well as devise adaptations of TKM for the problem. Finally,

another direction concerns distributed monitoring of k-means. In this scenario,

there exist multiple servers maintaining the locations of distinct sets of objects.

The goal is to continuously compute the k-means using the minimum amount

of communication between servers.

REFERENCES
 [AV06] Arthur, D., Vassilvitskii, S. How Slow is the k-Means Method.

SoCG, 2006.

 [B02] Brinkhoff, T. A Framework for Generating Network-Based

Moving Objects. GeoInformatica, 6(2): 153-180, 2002.

 [BDMO03] Babcock, B., Datar, M., Motwani, R., O’Callaghan, L.

Maintaining Variance and k-Means over Data Stream Windows.

PODS, 2003.

 [BF98] Bradley, P., Fayyad, U. Refining Initial Points for k-Means

Clustering. ICML, 1998.

 [DVCK99] Datta, A., Vandermeer, D., Celik, A., Kumar, V. Broadcast

Protocols to Support Efficient Retrieval from Databases by Mobile

Users. ACM TODS, 24(1): 1-79, 1999.

 [GMM+03] Guha, S., Meyerson, A., Mishra, N., Motwani, R., O'Callaghan,

L. Clustering Data Streams: Theory and Practice. IEEE TKDE,15(3):

515-528, 2003.

 [GL04] Gedik, B., Liu, L. MobiEyes: Distributed Processing of

Continuously Moving Queries on Moving Objects in a Mobile System.

EDBT, 2004.

 [HS05] Har-Peled, S., Sadri, B. How Fast is the k-Means Method. SODA,

2005.

 [HXL05] Hu, H., Xu, J., Lee, D. A Generic Framework for Monitoring

Continuous Spatial Queries over Moving Objects. SIGMOD, 2005.


 [IKI94] Inaba, M., Katoh, N., Imai, H. Applications of Weighted Voronoi

Diagrams and Randomization to Variance-based Clustering.

SoCG,1994.

 [JMF99] Jain, A., Murty, M., Flynn, P. Data Clustering: a Review. ACM

Computing Surveys, 31(3): 264-323, 1999.

 [JLO07] Jensen, C., Lin, D., Ooi, B. C. Continuous Clustering of

Moving Objects. IEEE TKDE, 19(9): 1161-1173, 2007.

 [JLOZ06] Jensen, C., Lin, D., Ooi, B. C., Zhang, R. Effective Density

Queries on Continuously Moving Objects. ICDE, 2006.

APPENDIX
DATA FLOW DIAGRAM

CONTEXT FLOW DIAGRAM

[Context flow diagram: the Admin/User sends requests to the "Threshold based k-means monitoring over non static objects" system and receives responses; the system updates and retrieves the Threshold DB.]
ZERO LEVEL DIAGRAM

[Zero level diagram: as in the context diagram, the Admin/User sends requests to the system and receives responses, while the system updates and retrieves the Threshold DB. The system maintains Server Settings (including server configuration), Client Settings (including client configuration) and Threshold Settings (including position changing, relieving, joining and replacing), which update and retrieve the Server_set, Client_set and Threshold_set data stores respectively.]

TABLE DESIGN

Table Name : client_set

Primary Key : cid

FIELD NAME     DATATYPE    DESCRIPTION
cid            Varchar     Client ID
Cname          Varchar     Client Name
Ccon           Varchar     Client configuration
Positioning    Numeric     Check status for positioning
Joining        Numeric     Check status for joining
Relieving      Numeric     Check status for relieving
Replacing      Numeric     Check status for replacing

Table Name : server_set

Primary Key : sid

FIELD NAME     DATATYPE    DESCRIPTION
sid            Varchar     Server ID
sname          Varchar     Server Name
scon           Varchar     Server configuration

ENTITY RELATIONSHIP DIAGRAM


INPUT SCREENS

LOGIN FORM
MASTER SCREEN
SERVER SETTINGS
CLIENT SETTINGS
THRESHOLD SETTINGS
