Threshold Based K-Means Monitoring Over Moving Objects
ABSTRACT
Given a dataset P, a k-means query returns k points in space (called centers), such that the average squared distance between each point in P and its nearest center is minimized. When the points of P are moving objects that report their positions to a central server, re-evaluating the query for every location update imposes a heavy burden on the server (for computing the centers from scratch) and on the objects (for transmitting frequent updates). We overcome these problems with a novel approach that significantly reduces the computation and communication costs. Specifically, each object is assigned a threshold (i.e., range) such that the object sends a location update only when it moves outside that range.
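As a rough illustration of this client-side rule, the following is a minimal VB.NET sketch; the module, field and function names and the Euclidean distance test are illustrative assumptions, not the exact dissemination protocol used by TKM.

Imports System

Module ThresholdClient
    ' Last position reported to the server and the threshold (range)
    ' assigned to this object by the server.
    Private lastReportedX, lastReportedY, threshold As Double

    ' Called whenever the object moves; returns True if an update must be sent.
    Function ShouldSendUpdate(ByVal x As Double, ByVal y As Double) As Boolean
        Dim dx As Double = x - lastReportedX
        Dim dy As Double = y - lastReportedY
        ' Report only when the object has moved out of its assigned range.
        Return Math.Sqrt(dx * dx + dy * dy) > threshold
    End Function

    Sub ReportLocation(ByVal x As Double, ByVal y As Double)
        If ShouldSendUpdate(x, y) Then
            ' Send (x, y) to the server here; then remember the reported position.
            lastReportedX = x
            lastReportedY = y
        End If
    End Sub
End Module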
The following modules are categorized based upon the proposed model:
2. Server Settings
3. Client Settings
4. Threshold Settings
…side facility. Via this communication port, the client and the server communicate using GSM technology.
6. Interaction Area
a. The interaction area serves as the main part for client and server interaction.
7. Visual Area
a. The running graph of each client-wise flow is visualized for the user.
b. The running graph of each server-wise flow is visualized for the user.
c. The client-wise and server-wise action history for each object update is also maintained.
SYSTEM ARCHITECTURE
2.2. PRODUCT FUNCTIONS
2.4. CONSTRAINTS
A beta version of the .NET Framework 2.0 is used on the server side.
A GSM modem interface is used for communicating with the client users.
Client users need a mobile device to send message communications that update their location to the server system.
4. Constraints:
• SQL SERVER
• CRYSTAL REPORTS
• HARD DISK : 40 GB
• MOUSE : LOGITECH
• RAM : 256 MB
Windows XP Professional is built on the proven code base of Windows 2000, which features a 32-bit computing architecture and a fully protected memory model.
Building on the device driver verifier found in Windows 2000, Windows XP
Professional will provide even greater stress tests for device drivers. Device drivers that
pass these tests will be the most robust drivers available, which will ensure maximum
system stability.
Dramatically reduced reboot scenarios eliminate most situations that forced end users to reboot in Windows NT 4.0 and Windows 95/98/Me, and many software installations will not require reboots. Users will experience higher levels of system uptime.
Critical kernel data structures are read-only, so that drivers and applications cannot corrupt them. All device driver code is read-only and page protected, so rogue applications cannot adversely affect core operating system areas. Side-by-side DLL support allows multiple versions of individual Windows components to be installed and run "side by side". This helps to address the "DLL hell" problem by allowing an application written and tested with one version of a system component to continue to use that version even after an application that uses a newer version of the same component is installed. Windows file protection protects core system files from being overwritten by application installations. The built-in installer service helps users install, configure and remove software programs correctly, which helps to minimize user downtime. Administrators can identify software running in their environment and control its ability to execute. This facility can be used for virus and Trojan horse prevention and for software lockdown, which contributes to improved system integrity and manageability and, ultimately, a lower cost of ownership of the PC.
The preemptive multitasking architecture is designed to allow multiple applications to run simultaneously, while ensuring great system response and stability; it runs even demanding applications with fast response time. Windows XP Professional supports up to two symmetric multiprocessors, for users who need the highest level of performance.
The Encrypting File System (EFS) encrypts each file with a randomly generated key; the encryption and decryption processes are transparent to the user. IP security support is an important part of providing security for Virtual Private Networks (VPNs), which allow organizations to transmit data securely over the Internet. Authentication is based on an Internet standard, which makes it especially effective for networks that include different operating systems.
Windows XP Professional will offer single logon for end users to resources and supported applications hosted on both Windows 2000 and the next-generation server platform.
Smart card capabilities are integrated into the operating system, including support for smart card logon to terminal server sessions hosted on Windows Server 2003-based terminal servers (the next-generation server platform). Smart card use can be permitted or disabled to enhance security. This also helps reduce the potential for crashes.
The built-in firewall provides security from startup to shutdown, reducing the risk of network and Internet-based attacks. Windows Security Center provides a single, unified view of key security settings, tools and access to resources, and makes it easy to manage security resources and change settings. Safer handling of the file-opening process helps to provide protection from viruses spread through e-mail attachments.
Data Execution Prevention helps to prevent certain types of malicious code from attacking and overwhelming a computer's memory, reducing the risk of buffer overruns.
The firewall supports application and static port exceptions, allowing only the ports needed by an application to be open. Applications and ports can easily be configured to receive network traffic only from specific IP addresses, which helps reduce the potential for network-based attacks.
Easy to Use
Windows XP Professional has a fresh visual design. Common tasks have been consolidated and simplified, and new visual cues have been added to help users navigate their computers more easily.
Administrators or end users can choose this updated user interface or the
classic Windows 2000 interface with the click of a button. It allows the most common
tasks to be exposed easily, helping users get the most out of Windows XP
Professional.
The adaptive user environment adapts to the way an individual user works. With a redesigned Start menu, the most frequently used applications are shown first. When the user opens multiple files in the same application (such as multiple e-mail messages in the Microsoft Outlook messaging and collaboration client), the open windows are grouped on the taskbar, and items that are not being used are hidden. All of these features can be set using Group Policy. Users can find the crucial data and applications they need quickly and easily, and all of these settings can be controlled using Group Policy, so IT administrators can decide what is available to their users.
Windows Media Player for Windows XP is the first player to combine all of the common digital media activities in a single, easy-to-use player. The player makes it easy to view rich media information; for example, virtual company meetings or "just-in-time" learning receive the best-possible audio and video quality. The task menu lists tasks that are appropriate for the type of file selected, and common tasks that were hard to find in previous versions of Windows are exposed for easy access.
Support for burning CDs on CD-R and CD-RW drives is integrated into Windows Explorer, so archiving data onto CD is now as easy as saving to a floppy disk. Web publishing uses the WebDAV protocol, and users will be able to publish important information directly to Web servers. Multiple monitors can be driven from a single display adapter; with a laptop computer, a user could run the internal LCD display and an external monitor at the same time.
3.6.29 Troubleshooters
Troubleshooters help users become more self-sufficient, resulting in greater productivity, fewer help desk calls, and lower support costs.
The .NET Framework provides an environment for application development and execution. The framework manages all aspects of the execution of a program: it allocates memory for the storage of data and instructions, grants or denies the appropriate permissions to the application, initiates and manages application execution, and manages the reallocation of memory for resources that are no longer needed. The .NET Framework consists of two main components: the common language runtime and the .NET Framework class library. The common language runtime provides the environment within which programs run; its most important features include loading and executing programs, with version control and other such features.
The common language runtime can be thought of as the environment that manages code at execution time. Through the common type system (CTS), it enforces strict type safety, and it ensures code security. The .NET Framework class library provides a collection of useful and reusable types that are designed to integrate with the common language runtime. The types provided by the .NET Framework are object-oriented and fully extensible, and allow the user to seamlessly integrate applications with the class library.
Simply put, it means that .NET components can interact with each other no matter what language they were originally written in, whether Visual Basic .NET, Microsoft C++ or any other .NET language, and this interoperability extends to cross-language inheritance. Before execution, code is converted from the language it was written in (Visual Basic .NET or any other .NET language) into Microsoft Intermediate Language (MSIL), which is executed by the common language runtime. Because all .NET executables and DLLs exist as MSIL, they can interoperate freely. The Common Language Specification defines the rules that .NET language compilers must conform to, and thus ensures that any source code compiled by a .NET compiler can interoperate with the .NET Framework. During execution, all primitive data types are represented as .NET types; thus a Visual Basic Integer and a C# int are both instances of System.Int32, which helps avoid subtle, hard-to-find errors.
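As a small illustration of the common type system, the sketch below (module and variable names are illustrative) shows that a Visual Basic Integer is simply the CTS type System.Int32:

Imports System

Module TypeDemo
    Sub Main()
        ' A VB.NET Integer is the same CTS type that C# exposes as "int".
        Dim i As Integer = 5
        Console.WriteLine(i.GetType().FullName)   ' prints "System.Int32"
    End Sub
End Module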
Visual Studio .NET ships with such languages as Visual Basic .NET, and
Visual C++ with managed extensions as well as the JScript scripting language.
The user can also write managed code for the .NET Framework in other
languages. Third party compilers exist for FORTRAN .NET, COBOL .NET,
Perl .NET, and a host of other languages. All of these languages share the same
cross-language compatibility and inheritability. Thus the user can write code
for the .NET Framework in the language of their choice, and it will be able to
interact with code written for the .NET Framework in any other language.
To understand how the runtime manages the execution of code, the user must examine the structure of a .NET application.
The assembly manifest includes:
• Identity information, such as the name and version number of the assembly.
• A list of code access security instructions for the assembly, including the permissions required by the assembly.
Each assembly has one and only one assembly manifest, and it contains
all the description information for the assembly. The assembly manifest can be
contained in its own separate file, or it can be contained within one of the
assembly's modules.
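For example, the identity information recorded in the manifest can be read back at run time through reflection; the following minimal sketch (module name is illustrative) prints the name and version of the executing assembly:

Imports System
Imports System.Reflection

Module ManifestDemo
    Sub Main()
        ' Read identity information (name and version) from the manifest
        ' of the currently executing assembly.
        Dim asm As Assembly = Assembly.GetExecutingAssembly()
        Dim name As AssemblyName = asm.GetName()
        Console.WriteLine("Assembly name: " & name.Name)
        Console.WriteLine("Version      : " & name.Version.ToString())
    End Sub
End Module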
Each module contains the code that makes up the application or library, and metadata that describes that code. When the user compiles a project into an assembly, the code is converted from high-level code to IL. Because all managed code is first converted to IL, the common language runtime can execute it regardless of the language it was written in.
Each module also contains a number of types. Types are templates that
describe a set of data encapsulation and functionality. There are two kinds of
types: reference types (classes) and value types (structures). Each type is
described to the common language run time in the assembly manifest. A type
can contain fields, properties, and methods, each of which should be related to
a common functionality. For example, the user might have a class that
represents storage of a particular type of data. The user might have a field that
stores the name of an account holder. Properties are similar to fields, but
usually provide some kind of validation when the data is set or retrieved.
When an attempt is made to change the value, the property could check
to see if the attempted change was greater than a predetermined limit, and if
so, could disallow the change. Methods represent behavior, such as actions
taken on data stored within the class or changes to the user interface.
Continuing with the bank account example, the user might have a Transfer
method that transfers a balance from a checking account to a savings account,
or an Alert method that warns the user when his balance has fallen below a
predetermined level.
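A minimal VB.NET sketch of such a class is shown below; the names (Account, HolderName, Transfer, Alert) and the limit values are illustrative only, not part of the project code:

Imports System

Public Class Account
    ' Field: stores the name of the account holder.
    Private _holderName As String
    Private _balance As Decimal
    Private Const TransferLimit As Decimal = 10000D
    Private Const AlertLevel As Decimal = 100D

    ' Property: validates the data when it is set.
    Public Property HolderName() As String
        Get
            Return _holderName
        End Get
        Set(ByVal value As String)
            If value Is Nothing OrElse value.Trim() = "" Then
                Throw New ArgumentException("Holder name cannot be empty.")
            End If
            _holderName = value
        End Set
    End Property

    Public ReadOnly Property Balance() As Decimal
        Get
            Return _balance
        End Get
    End Property

    ' Method: behavior acting on the data stored in the class.
    Public Sub Transfer(ByVal target As Account, ByVal amount As Decimal)
        ' Disallow changes greater than a predetermined limit.
        If amount > TransferLimit Then
            Throw New InvalidOperationException("Amount exceeds the transfer limit.")
        End If
        _balance -= amount
        target._balance += amount
        Alert()
    End Sub

    ' Method: warns the user when the balance falls below a predetermined level.
    Public Sub Alert()
        If _balance < AlertLevel Then
            Console.WriteLine("Warning: balance of " & _holderName & " is low.")
        End If
    End Sub
End Class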
3.8 FRONT-END
VB.NET
To appreciate VB.NET, one must go back in time and follow the development of Windows and the advent of Windows programming. The .NET version of Visual Basic is a new, improved version with more features and additions; with these additions, VB qualifies as a fully object-oriented language. Visual Basic for .NET is called VB.NET. VB.NET, the successor of VB 6.0, is an improved, stable, and fully object-oriented language. VB 6.0 was not a true object-oriented language, as it lacked complete support for inheritance and interfaces, and multithreading and exception handling were two major weak areas. In VB.NET, the user can develop multithreaded applications just as in C++ and C#, and structured exception handling is also supported. Other features include:
• Support for constructors.
• Support for CLS features such as accessing and working with .NET classes.
• Multithreading support (see the sketch below).
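The following short sketch (all names are illustrative) shows both claims in VB.NET: a worker thread is started with the Thread class, and a failure is handled with structured Try/Catch/Finally blocks:

Imports System
Imports System.Threading

Module ThreadDemo
    Sub Main()
        ' Structured exception handling (not available in VB 6.0).
        Try
            ' Multithreading: run DoWork on a separate thread.
            Dim worker As New Thread(AddressOf DoWork)
            worker.Start()
            worker.Join()   ' wait for the worker to finish
            Dim n As Integer = Integer.Parse("not a number")  ' raises FormatException
        Catch ex As FormatException
            Console.WriteLine("Caught: " & ex.Message)
        Finally
            Console.WriteLine("Cleanup always runs here.")
        End Try
    End Sub

    Sub DoWork()
        Console.WriteLine("Working on thread " & Thread.CurrentThread.ManagedThreadId)
    End Sub
End Module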
ASP.NET
ASP.NET is the next version of Microsoft's Active Server Pages technology (ASP); ASP+ was just an early name for ASP.NET. ASP.NET has been designed to work seamlessly with WYSIWYG (What You See Is What You Get) HTML editors and other programming tools, including Microsoft Visual Studio .NET. Not only does this make Web development easier, but it also provides all the benefits that these tools have to offer, including a GUI that developers can use to drop server controls onto a Web page, and fully integrated debugging support. ASP.NET offers:
• Programmable controls
• Event-driven programming (a minimal code-behind sketch follows this list)
• XML-based components
• Higher scalability
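As a hedged sketch of the event-driven model, the code-behind class below assumes an .aspx page that declares a TextBox (txtName), a Button (btnGreet) and a Label (lblGreeting) with these IDs; the handler runs on the server when the button is clicked:

Imports System
Imports System.Web.UI
Imports System.Web.UI.WebControls

' Code-behind for a hypothetical page; the controls are assumed to be
' declared on the .aspx page with the same IDs.
Public Class GreetingPage
    Inherits Page

    Protected txtName As TextBox
    Protected lblGreeting As Label
    Protected WithEvents btnGreet As Button

    ' Event-driven programming: this server-side handler runs when the
    ' button control on the Web page is clicked.
    Private Sub btnGreet_Click(ByVal sender As Object, ByVal e As EventArgs) _
            Handles btnGreet.Click
        lblGreeting.Text = "Hello, " & txtName.Text
    End Sub
End Class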
ADO.NET
ADO.NET is a set of classes that expose data access services to the .NET programmer. It provides consistent access to data sources such as Microsoft SQL Server, as well as data sources exposed through OLE DB and XML. Data-sharing consumer applications can use ADO.NET to connect to these data sources and to retrieve and update data by connecting to a database, executing commands, and retrieving results. Those results are either processed directly, or placed in an ADO.NET DataSet object so that they can be combined with data from multiple sources or remoted between tiers. The ADO.NET DataSet object can also be used independently of a data provider to manage data local to the application or sourced from XML.
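A minimal ADO.NET sketch is given below; the connection string, table name and column names are assumptions for illustration only:

Imports System
Imports System.Data
Imports System.Data.SqlClient

Module AdoDemo
    Sub Main()
        ' Hypothetical connection string and table for illustration only.
        Dim connStr As String = "Data Source=.;Initial Catalog=TrackDB;Integrated Security=SSPI"
        Using conn As New SqlConnection(connStr)
            conn.Open()

            ' Execute a command and read the results with a DataReader.
            Dim cmd As New SqlCommand("SELECT ObjectId, X, Y FROM Client_set", conn)
            Using reader As SqlDataReader = cmd.ExecuteReader()
                While reader.Read()
                    Console.WriteLine(reader("ObjectId").ToString() & ": " & _
                                      reader("X").ToString() & ", " & reader("Y").ToString())
                End While
            End Using

            ' Or fill a disconnected DataSet that can be remoted between tiers.
            Dim adapter As New SqlDataAdapter("SELECT * FROM Client_set", conn)
            Dim ds As New DataSet()
            adapter.Fill(ds, "Client_set")
            Console.WriteLine("Rows loaded: " & ds.Tables("Client_set").Rows.Count)
        End Using
    End Sub
End Module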
3.9 BACK-END
Microsoft SQL Server 2000 extends the performance, reliability and ease-of-use of Microsoft SQL Server version 7.0. Microsoft SQL Server 2000 includes several new features that make it an excellent database platform for large-scale online transaction processing (OLTP), data warehousing, and e-commerce applications. The OLAP Services component available in SQL Server version 7.0 is now called SQL Server 2000 Analysis Services; the term OLAP Services has been replaced with the term Analysis Services. Analysis Services also includes a new data mining component.
The Repository component available in SQL Server version 7.0 is now called Microsoft SQL Server 2000 Meta Data Services. References to the component now use the term Meta Data Services; the term repository is used only in reference to the repository engine within Meta Data Services.
4. SYSTEM DESIGN
System design sits at the technical kernel of software engineering and applied science, regardless of the software process model that is used. Beginning once the software requirements have been analyzed and specified, design is the first of the technical activities required for building and verifying the software. Each of these activities transforms information in a manner that ultimately results in validated computer software. The design must be a readable, understandable guide for those who generate code and for those who test and subsequently support the software.
The design should provide a complete picture of the software, addressing the data, functional, and behavioral domains. The proposed design is a complement of the existing system: it is based on the limitations of the existing system and on the requirements specification gathered in the phase of system analysis. The objective of designing input data is to make the automation as easy and as free from errors as possible.
Logical design of the system is performed: its features are described, procedures that meet the system requirements are formed, and a detailed specification of the system is produced. This activity begins in the analysis phase and continues into the design phase.
As per the design phase, the following designs had to be implemented; each of these designs was processed separately, keeping in mind all the requirements, constraints and conditions. A step-by-step process was required to perform the design.
Process design is the design of the process to be carried out; it is the design that leads to the coding. Here the conditions and the constraints given in the system are taken into account. Output design deals with the form in which results are presented to the user; it is an ongoing activity that starts during the study phase. The objectives of the output design are to define the contents and format of all documents and reports in an organized and useful form. The overall flow of the system is depicted using a system flow diagram: the source and the destination are depicted by rectangles, and the arrows represent the flow of data.
The data flow diagram (DFD) is a graphical tool used for expressing system requirements in a graphical form. The DFD, also known as the "bubble chart", has the purpose of clarifying system requirements and identifying major transformations that will become programs in system design. Thus the DFD can be stated as the starting point of the design phase that functionally decomposes the requirements specification down to the lowest level of detail. The DFD describes what the data flows are rather than how they are processed, so it does not depend on hardware, software, data structure or file organization.
Processes should be named and numbered for easy reference, and each name should be representative of the process. Data flow is drawn from top to bottom and from left to right, that is, from source to destination. When a process is exploded into lower-level details, the sub-processes are numbered. The names of data stores, sources and destinations are written in capital letters, while process and data flow names have the first letter of each word capitalized. When a DFD contains dozens of processes and data stores it gets too unwieldy; the rule of thumb is to explode the DFD only to a functional level. Beyond that, it is best to take each function separately and expand it to show the explosion as a single process. If a user wants to know what happens within a given process, that process alone can be exploded in a more detailed diagram.
Data Constraints
All businesses run on business data being gathered, stored and analyzed. Business managers determine a set of rules that must be applied to this data to ensure its integrity.
There are two types of data constraints that can be applied to data being inserted into a database table. One type of constraint is called an I/O constraint; the other type is called a business rule constraint.
I/O Constraints
The input/output data constraint is further divided into two distinctly different constraints: the primary key constraint and the foreign key constraint. The primary key constraint ensures:
• that the data entered in the table column is unique across the entire column;
• that none of the cells belonging to the table column are left empty.
The Foreign Key Constraint
The foreign key constraint enforces a relationship between two tables: values stored in the foreign key column must exist in the referenced column of the parent table.
Constraints can be attached at two levels:
Column Level: if data constraints are defined along with the column definition when creating the table, they are column-level constraints.
Table Level: if data constraints are defined after all the table columns have been defined when creating the table, they are table-level constraints.
A NULL value is different from a blank or zero. NULL values are treated specially by the database, and a NULL value can be inserted into columns of any data type. When a column is defined as NOT NULL, that column becomes a mandatory column: a value must be entered into the column if the record is to be accepted for storage in the table.
A primary key is one or more columns in a table used to uniquely identify each row in the table. A primary key column in a table has special attributes: it defines the column as a mandatory column, i.e. the column cannot be left empty, and the data entered must be unique.
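The sketch below illustrates the two attachment levels with an assumed pair of tables (Server_set and Client_set, echoing the data stores listed in the appendix): NOT NULL and PRIMARY KEY are written at column level, while the FOREIGN KEY is declared at table level. The DDL is issued through ADO.NET; all names are illustrative.

Imports System
Imports System.Data.SqlClient

Module ConstraintDemo
    Sub Main()
        ' Column-level constraints (NOT NULL, PRIMARY KEY) appear with the column;
        ' the table-level FOREIGN KEY constraint appears after all column definitions.
        Dim ddl As String = _
            "CREATE TABLE Server_set (" & _
            "  ServerId INT NOT NULL PRIMARY KEY," & _
            "  ServerName VARCHAR(50) NOT NULL); " & _
            "CREATE TABLE Client_set (" & _
            "  ClientId INT NOT NULL PRIMARY KEY," & _
            "  ServerId INT NOT NULL," & _
            "  CONSTRAINT FK_Client_Server FOREIGN KEY (ServerId)" & _
            "    REFERENCES Server_set(ServerId))"

        ' Hypothetical connection string for illustration only.
        Using conn As New SqlConnection("Data Source=.;Initial Catalog=TrackDB;Integrated Security=SSPI")
            conn.Open()
            Dim cmd As New SqlCommand(ddl, conn)
            cmd.ExecuteNonQuery()
        End Using
    End Sub
End Module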
The Entity-Relationship (ER) model is a conceptual data model that views the
real world as entities and relationships. A basic component of the model is the Entity-
Relationship diagram, which is used to visually represent data objects. The model has
been extended and today it is commonly used for database design. A key feature of the ER model is that it is simple and easy to understand, so the database designer can use it to communicate the design to the end user.
4.2.6 NORMALIZATION
Normalization is the process of organizing data to minimize redundancy; it usually involves dividing a database into two or more tables and defining relationships between the tables. The objective is to isolate data so that additions, deletions and modifications of a field can be made in just one table and then propagated through the rest of the database via the defined relationships.
The most commonly used normal forms, each with an increasing level of normalization, are:
1NF - First Normal Form: every cell in the table must have only one value, i.e. it must not hold multiple values.
2NF - Second Normal Form: all non-key attributes must be fully functionally dependent on the primary key and not just on part of the key.
3NF - Third Normal Form: the database must be in second normal form and no non-key attribute may depend on another non-key attribute.
Boyce-Codd NF: a stricter version of third normal form.
4NF - Fourth Normal Form: deals with multi-valued dependencies.
5NF - Fifth Normal Form.
A database is generally normalized up to 3NF: every cell in the table has only one value (it does not hold multiple values), all non-key attributes are fully functionally dependent on the primary key and not just on part of the key, and non-key attributes do not depend on other non-key attributes.
In the development phase, the system is constructed from the specification prepared in the design phase. A principal activity of the development phase is coding and testing the computer programs that make up the computer program component of the overall system. Other important activities include the creation of the database and the preparation of user documentation.
The user entry screens are developed using VB. The user login is performed in the login entry form; text boxes are provided for data entry and buttons are provided to submit or clear the entries. A simplified sketch of such a login check is shown below.
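The sketch uses hypothetical control, table and column names, and is intended only to illustrate the flow of a login entry form with text boxes and a button:

Imports System
Imports System.Data.SqlClient
Imports System.Windows.Forms

Public Class LoginForm
    Inherits Form

    Private WithEvents btnLogin As Button
    Private txtUser As TextBox
    Private txtPassword As TextBox

    Public Sub New()
        txtUser = New TextBox() : txtUser.Top = 10
        txtPassword = New TextBox() : txtPassword.Top = 40 : txtPassword.PasswordChar = "*"c
        btnLogin = New Button() : btnLogin.Top = 70 : btnLogin.Text = "Login"
        Controls.AddRange(New Control() {txtUser, txtPassword, btnLogin})
    End Sub

    Private Sub btnLogin_Click(ByVal sender As Object, ByVal e As EventArgs) Handles btnLogin.Click
        ' Hypothetical connection string and Login table.
        Using conn As New SqlConnection("Data Source=.;Initial Catalog=TrackDB;Integrated Security=SSPI")
            conn.Open()
            ' Parameterized query: entered values are never concatenated into the SQL text.
            Dim cmd As New SqlCommand("SELECT COUNT(*) FROM Login WHERE UserName = @u AND Password = @p", conn)
            cmd.Parameters.AddWithValue("@u", txtUser.Text)
            cmd.Parameters.AddWithValue("@p", txtPassword.Text)
            If CInt(cmd.ExecuteScalar()) > 0 Then
                MessageBox.Show("Login successful.")
            Else
                MessageBox.Show("Invalid user name or password.")
            End If
        End Using
    End Sub

    <STAThread()> Public Shared Sub Main()
        Application.Run(New LoginForm())
    End Sub
End Class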
After the successful completion of requirements analysis, the next step is the design and development phase, which practically helps to build the project.
Software Design
Code Generation
Software Testing
The Linear Sequential Model (also called the Classic Life Cycle or the Waterfall Model) suggests a systematic, sequential approach to software development that begins at the system level and progresses through analysis, design, coding and testing. Work begins by establishing requirements for all system elements and then allocating some subset of these requirements to software. This system view is essential when software must interact with other elements such as hardware, people and databases. To understand the nature of the program to be built, the software engineer must understand the information domain for the software, as well as the required function, behavior and performance. The design process translates these requirements into a representation of the software that can be assessed for quality before coding begins. Like requirements, the design is documented and becomes part of the software configuration.
Code Generation
The design must be translated into a machine-readable form; the code generation step performs this task. If design is performed in a detailed manner, code generation can be accomplished mechanistically. After completing the design phase, code was generated using the Visual Basic environment, and SQL Server 2000 was used to create the database. The server database stores the settings and location information. Codes are built with mutually exclusive features; they are used to give operational distinctions and other information, and they also show interrelationships among different items. Codes are used for identifying, accessing, sorting and matching records. The coding scheme ensures that only one code value, with a single meaning, is applied to a given entity or attribute, even when it can be described in various ways. Codes are also kept simple so that they are easily understood by the users.
1. All variable names are kept in such a way that they represent the flow/function they serve.
2. All functions are named such that they represent the function they perform.
The project was developed following the life cycle method: life cycle processes such as requirements analysis, design, verification and testing were carried out, finally followed by the implementation phase. The application, basically a web based application, has been successfully implemented after passing through these phases. Reliability and performance were taken as key factors throughout the design phase; these factors were analyzed step by step, and the positive as well as negative aspects were reviewed at the management level. The data is stored in SQL Server 2000 as the RDBMS, which is highly reliable and simple to use, and user-level security is managed with the help of password options and sessions, which finally ensures that all transactions are made securely.
The application's validations take the entry levels into account: formatting checks and confirmations for both save and update options ensure that correct data is fed into the database. Thus all the aspects were charted out and the complete system was implemented.
Software testing represents the ultimate review of specification, design and code generation. Once the source code has been generated, the software must be tested to uncover as many errors as possible before delivery to the customer. In order to find the highest possible number of errors, tests must be conducted systematically and test cases must be designed using disciplined techniques.
Whitebox Testing
White box testing, sometimes called glass box testing, is a test case design method that uses the control structure of the procedural design to derive test cases. Using white box testing methods, the software engineer can derive test cases that guarantee that all independent paths within a module have been exercised at least once, exercise all logical decisions on their true and false sides, execute all loops at their boundaries and within their operational bounds, and exercise internal data structures to ensure their validity. Logic errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed, and unconscious assumptions about the flow of control and data may lead to design errors that are uncovered only once path testing begins. When a program is translated into source code, it is likely that some typing errors will occur; many will be uncovered by syntax and type checking mechanisms, but others may go undetected until testing begins, and it is as likely that a typo will exist on an obscure logical path as on a mainstream path.
Blackbox Testing
Black box testing, also called behavioral testing, focuses on the functional requirements of the software. That is, black box testing enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program. Black box testing attempts to find errors in the following categories:
1. Incorrect or missing functions
2. Interface errors
By applying black box techniques, a set of test cases that satisfy the following criteria were created: test cases that reduce, by a count that is greater than one, the number of additional test cases that must be designed to achieve reasonable testing, and test cases that tell something about the presence or absence of classes of errors, rather than an error associated only with the specific test at hand.
Black box testing is not an alternative to white box testing techniques; rather, it is a complementary approach that is likely to uncover a different class of errors.
Validation Testing
Validation testing provides the final assurance that the software meets all functional, behavioral and performance requirements. Validation can be defined in many ways, but a simple definition is that validation succeeds when the software functions in a manner that is expected by the user. The software, once validated, must be combined with other system elements. System testing verifies that all elements combine properly and that overall system function and performance is achieved. After the integration of the modules, the validation test was carried out over the whole system; it was found that all the modules work well together and meet the requirements expected of the system.
Integration Testing
Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested modules and build the program structure that has been dictated by the design. Careful test planning is required to determine the extent and nature of the system testing to be performed and to establish criteria by which the results will be evaluated.
All the modules were integrated after the completion of unit testing. Top-down integration was followed: the modules were integrated by moving downward through the control hierarchy, beginning with the main module. Since the modules had been unit-tested with no errors, their integration was found to work correctly. As a next step, the remaining modules were integrated with the former modules.
After the successful integration of the modules, the system was found to be
running with no uncovered errors, and also all the modules were working as per the
design of the system, without any deviation from the features of the proposed system
design.
Acceptance Testing
Acceptance testing involves planning and execution of functional tests, performance tests and stress tests in order to demonstrate that the implemented system satisfies its requirements. When custom software is built for one customer, a series of acceptance tests are conducted to enable the customer to validate all requirements.
In fact, acceptance testing incorporates the test cases developed during integration testing and, conducted over time, can expose cumulative errors that might degrade the system. Additional test cases are added to achieve the desired level of functional, performance and stress testing of the entire system.
Unit testing
Dynamic test cases are used to investigate the behavior of the source code by executing the program on test data. This testing was carried out during the programming stage itself. After testing each and every field in the modules, the modules of the project were tested separately. Unit testing focuses verification efforts on the smallest unit of software design: the module.
Interface
The module interface was tested to ensure that information properly flows into and out of the program unit under test.
Local Data Structures
The data temporarily stored in this module has been checked for integrity. It was seen that no loss of data or misinterpretation of data was taking place in this module.
Boundary Conditions
The data passed to this module have fixed lengths and are known to lie within a particular range of values. The input data were tested with the corresponding lower-bound and upper-bound values, as well as with values inside the range, and it was found that the module operates correctly at the boundary conditions.
Independent Paths
The module was also tested along its independent paths, again with boundary values and values inside the range, and it was found that the module operates well under all of these conditions.
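A minimal, framework-free sketch of such a boundary-condition check is shown below; the routine under test (IsValidThreshold) and its valid range are hypothetical:

Imports System

Module BoundaryTests
    ' Hypothetical validation rule: a threshold must lie in the range (0, 100].
    Function IsValidThreshold(ByVal value As Double) As Boolean
        Return value > 0.0 AndAlso value <= 100.0
    End Function

    Sub Main()
        ' Lower bound, upper bound, and values just inside/outside the range.
        Check(IsValidThreshold(0.0) = False, "0 rejected")
        Check(IsValidThreshold(0.1) = True, "just above lower bound accepted")
        Check(IsValidThreshold(100.0) = True, "upper bound accepted")
        Check(IsValidThreshold(100.1) = False, "just above upper bound rejected")
    End Sub

    Sub Check(ByVal condition As Boolean, ByVal name As String)
        If condition Then
            Console.WriteLine("PASS: " & name)
        Else
            Console.WriteLine("FAIL: " & name)
        End If
    End Sub
End Module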
CONCLUSION
This paper proposes TKM, the first approach for continuous k-means monitoring over moving objects. Instead of evaluating k-means for every object update, TKM achieves considerable savings by assigning each object a threshold, such that the object needs to inform the server only when it crosses its threshold. We develop an optimized hill climbing technique for reducing the CPU cost, and discuss optimizations of TKM for the case that object speeds are known. A related problem is k-medoids, which is similar to k-means, but the centers are restricted to points in the dataset. TKM could be used to find the k-means set, and then replace each center with the closest data point. It would be interesting to study performance guarantees (if any) in this case, as well as devise adaptations of TKM for the problem. Finally, in a distributed setting there may exist multiple servers maintaining the locations of distinct sets of objects; the goal is then to continuously compute the k-means using the minimum amount of communication.
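For reference, the sketch below shows the kind of center computation the server performs; it is a generic Lloyd-style refinement pass written in VB.NET, not TKM's optimized hill-climbing algorithm, and all names are illustrative:

Imports System
Imports System.Collections.Generic

Module KMeansSketch
    ' One refinement pass: assign each point to its nearest center, then
    ' move every center to the mean of its assigned points. TKM avoids
    ' running this from scratch unless thresholds are violated.
    Sub RefineCenters(ByVal points As List(Of Double()), ByVal centers As Double()())
        Dim k As Integer = centers.Length
        Dim sumX(k - 1), sumY(k - 1) As Double
        Dim count(k - 1) As Integer

        For Each p As Double() In points
            Dim best As Integer = 0
            Dim bestDist As Double = Double.MaxValue
            For j As Integer = 0 To k - 1
                Dim dx As Double = p(0) - centers(j)(0)
                Dim dy As Double = p(1) - centers(j)(1)
                Dim d As Double = dx * dx + dy * dy
                If d < bestDist Then bestDist = d : best = j
            Next
            sumX(best) += p(0) : sumY(best) += p(1) : count(best) += 1
        Next

        For j As Integer = 0 To k - 1
            If count(j) > 0 Then
                centers(j) = New Double() {sumX(j) / count(j), sumY(j) / count(j)}
            End If
        Next
    End Sub
End Module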
APPENDIX
DATA FLOW DIAGRAM
ZERO LEVEL DIAGRAM
[Zero level DFD: the Admin/User issues requests to the process "Threshold based k-means monitoring over non static objects", which updates and retrieves records in the Threshold DB and returns responses.]
[Detailed DFD: the Client Settings, Threshold Settings and Server Settings processes each update and retrieve records from the Client_set, Threshold_set and Server_set data stores respectively.]
TABLE DESIGN
LOGIN FORM
MASTER SCREEN
SERVER SETTINGS
CLIENT SETTINGS
THRESHOLD SETTINGS