Cloud Computing Report
CHAPTER NO    TITLE

1.
2.    SYSTEM STUDY
      2.1  FEASIBILITY STUDY
      2.2  EXISTING SYSTEM
      2.3  PROPOSED SYSTEM
4.2   FEATURES OF SQL SERVER 2000
5.    SYSTEM DESIGN
      5.1  INPUT DESIGN
      5.2  OUTPUT DESIGN
      5.3  DATABASE DESIGN
      5.4  DATA FLOW DIAGRAM
      5.5  SYSTEM FLOW DIAGRAM
6.    SYSTEM TESTING AND MAINTENANCE
      6.1  UNIT TESTING
      6.2  INTEGRATION TESTING
      6.3  VALIDATION
8.    CONCLUSION
9.    BIBLIOGRAPHY
      APPENDIX
      SCREEN SHOT
      DATA TABLE STRUCTURE
      SAMPLE CODING
LIST OF FIGURES

FIGURE NO    NAME

      .NET FRAMEWORK
      INTEROPERABILITY
      WEB CONTROLS
Abstract:
Cloud computing is fundamentally altering expectations for how and when computing, storage and networking resources should be allocated, managed and consumed. End-users are increasingly sensitive to the latency of the services they consume. Service Developers want the Service Providers to ensure or provide the capability to dynamically allocate and manage resources in response to changing demand patterns in real time. Ultimately, Service Providers are under pressure to architect their infrastructure to enable real-time, end-to-end visibility and dynamic resource management with fine-grained control, to reduce total cost of ownership while also improving agility. The current approaches to enabling real-time, dynamic infrastructure are inadequate, expensive and not scalable to support consumer mass-market requirements. Over time, server-centric infrastructure management systems have evolved to become a complex tangle of layered systems designed to automate systems administration functions that are knowledge and labor intensive. This expensive and non-real-time paradigm is ill suited for a world where customers are demanding communication, collaboration and commerce at the speed of light. Thanks to hardware-assisted virtualization, and the resulting decoupling of infrastructure and application management, it is now possible to provide dynamic visibility and control of service management to meet the rapidly growing demand for cloud-based services.
Existing System:
The existing approaches to enabling real-time, dynamic infrastructure are inadequate, expensive and not scalable to support consumer mass-market requirements. Over time, server-centric infrastructure management systems have evolved to become a complex tangle of layered systems designed to automate systems administration functions that are knowledge and labor intensive.
1. Traditional server-centric operating systems were not designed to manage shared distributed resources.
2. Current hypervisors do not provide adequate separation between application management and physical resource management.
3. Human system administrators do not lend themselves to enabling real-time dynamism. Policy-based management is not really automation.
Proposed System:
In cloud computing we use a network-centric datacenter infrastructure management stack that borrows key concepts that have enabled dynamism, scalability, reliability and security in the telecom industry and applies them to the computing industry. Using this cloud computing concept, resources can be allocated, managed and consumed dynamically in real time.
The main components of this stack are:
- Server operating systems and virtualization
- Storage networks and virtualization
- Network virtualization
- Systems management infrastructure
- Application creation and packaging
2.1 Project Profile
Admin Module: To access this module it is necessary to enter the Admin id and password. The Admin module can do the following:
1. New account.
2. View account details.
3. Deposit.

User Module: To access this module it is necessary to enter the User id and password. The User module can do the following:
1. Transactions.
2. Profile information:
   a. Can edit the account details.
   b. Can change the password.
   c. Can edit the user information.
   d. Can view all the information about the particular logged-in customer.

An illustrative sketch of this id-and-password access check is given below.
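As a generic illustration of the id-and-password check described above (not the project's actual code), the following minimal C# sketch gates access to a module through a hypothetical LoginService class; a real implementation would validate the credentials against the SQL Server database rather than an in-memory value.

    using System;

    // Illustrative sketch only: the class and method names are assumptions.
    public class LoginService
    {
        // Returns true when the supplied id and password match a stored record.
        public bool Authenticate(string userId, string password)
        {
            // Placeholder check; replace with a parameterised database query.
            return userId == "admin" && password == "secret";
        }
    }

    public static class Program
    {
        public static void Main()
        {
            var login = new LoginService();
            Console.Write("User id: ");
            string id = Console.ReadLine();
            Console.Write("Password: ");
            string pwd = Console.ReadLine();

            if (login.Authenticate(id, pwd))
                Console.WriteLine("Access granted to the module.");
            else
                Console.WriteLine("Invalid id or password.");
        }
    }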
2. SYSTEM STUDY
The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out. This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.
ECONOMIC FEASIBILITY
This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, so the expenditures must be justified. The developed system is well within the budget, and this was achieved because most of the technologies used are freely available. Only the customized products had to be purchased.
TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources; otherwise, high demands would be placed on the client. The developed system must therefore have modest requirements, as only minimal or no changes are required for implementing this system.
SOCIAL FEASIBILITY
This aspect of the study checks the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate the user about the system and to make him familiar with it. His level of confidence must be raised so that he is also able to make some constructive criticism, which is welcomed, as he is the final user of the system.
System Requirement:
Software Requirement:
Operating System        :- Windows XP Professional
Development Environment :- Microsoft Visual Studio .NET 2008
Language                :- Visual C# .NET
Database                :- SQL Server 2005
Hardware Requirement:
System
4. LANGUAGE SPECIFICATION
Microsoft .NET is a set of Microsoft software technologies for rapidly building and integrating XML Web services, Microsoft Windows-based applications, and Web solutions. The .NET Framework is a language-neutral platform for writing programs that can easily and securely interoperate. There's no language barrier with .NET: there are numerous languages available to the developer, including Managed C++, C#, Visual Basic and JScript. The .NET Framework provides the foundation for components to interact seamlessly, whether locally or remotely on different platforms. It standardizes common data types and communications protocols so that components created in different languages can easily interoperate.
.NET is also the collective name given to various software components built upon the .NET platform. These will be both products (Visual Studio.NET and Windows.NET Server, for instance) and services (like Passport, .NET My Services, and so on).
The CLR is described as the execution engine of .NET. It provides the environment within which programs run. Its most important features are:
- Conversion from a low-level, assembler-style language, called Intermediate Language (IL), into code native to the platform being executed on.
- Memory management, notably including garbage collection.
- Checking and enforcing security restrictions on the running code.
- Loading and executing programs, with version control and other such features.

The following features of the .NET Framework are also worth description:
Managed Code
The code that targets .NET, and which contains certain extra information - metadata - to describe itself. Whilst both managed and unmanaged code can run in the runtime, only managed code contains the information that allows the CLR to guarantee, for instance, safe execution and interoperability.
Managed Data
With managed code comes managed data. The CLR provides memory allocation and deallocation facilities, and garbage collection. Some .NET languages use managed data by default, such as C#, Visual Basic.NET and JScript.NET, whereas others, namely C++, do not. Targeting the CLR can, depending on the language you're using, impose certain constraints on the features available. As with managed and unmanaged code, one can have both managed and unmanaged data in .NET applications - data that doesn't get garbage collected but instead is looked after by unmanaged code.
Common Type System
The CLR uses something called the Common Type System (CTS) to strictly enforce type-safety. This ensures that all classes are compatible with each other, by describing types in a common way. The CTS defines how types work within the runtime, which enables types in one language to interoperate with types in another language, including cross-language exception handling. As well as ensuring that types are only used in appropriate ways, the runtime also ensures that code doesn't attempt to access memory that hasn't been allocated to it.
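As a small illustration of the common type system (a generic sketch, not project code), the C# keyword int is simply an alias for the CTS type System.Int32, so a value declared either way has exactly the same runtime type:

    using System;

    class CtsDemo
    {
        static void Main()
        {
            int a = 10;            // C# alias
            System.Int32 b = 10;   // the underlying CTS type

            // Both variables have exactly the same runtime type.
            Console.WriteLine(a.GetType() == b.GetType());   // prints True
        }
    }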
The
CLR
provides
built-in
support
for
language
interoperability. To ensure that you can develop managed code that can be fully used by developers using any programming language, a set of language features and rules for using them called the Common Language Specification (CLS) has been defined. Components that follow these rules and expose only CLS features are considered CLS-compliant.
THE CLASS LIBRARY
.NET provides a single-rooted hierarchy of classes, containing over 7000 types. The root of the namespace is called System; this contains basic types like Byte, Double, Boolean and String, as well as Object. All objects derive from System.Object. As well as objects, there are value types. Value types can be allocated on the stack, which can provide useful flexibility. There are also efficient means of converting value types to object types if and when necessary.
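As an illustrative sketch of this value-type-to-object conversion (boxing and unboxing), consider the following generic C# fragment; it is not taken from the project's code:

    using System;

    class BoxingDemo
    {
        static void Main()
        {
            int count = 42;            // value type, allocated on the stack
            object boxed = count;      // boxing: the value is copied to the managed heap
            int unboxed = (int)boxed;  // unboxing: the value is copied back to a value type

            Console.WriteLine("{0} {1} {2}", count, boxed, unboxed);
        }
    }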
The set of classes is pretty comprehensive, providing collections, file, screen, and network I/O, threading, and so on, as well as XML and database connectivity. The class library is subdivided into a number of sets (or namespaces), each providing distinct areas of functionality, with dependencies between the namespaces kept to a minimum.
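To make the idea of namespaces concrete, the short sketch below touches a few of the class-library areas mentioned above (collections, file I/O and XML). It is a generic, illustrative example under assumed names, not part of the project's code:

    using System;
    using System.Collections.Generic;   // collections
    using System.IO;                    // file I/O
    using System.Xml;                   // XML support

    class ClassLibraryDemo
    {
        static void Main()
        {
            // Collections namespace: a strongly typed list.
            var names = new List<string> { "savings", "current" };

            // IO namespace: write the list to a text file.
            File.WriteAllLines("accounts.txt", names.ToArray());

            // Xml namespace: build a tiny XML document in memory.
            var doc = new XmlDocument();
            doc.LoadXml("<accounts><account type=\"savings\" /></accounts>");
            Console.WriteLine(doc.DocumentElement.Name);
        }
    }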
The multi-language capability of the .NET Framework and Visual Studio .NET enables developers to use their existing programming skills to build all types of applications and XML Web services. The .NET framework supports new versions of
Microsoft's old favorites Visual Basic and C++ (as VB.NET and Managed C++), but there are also a number of new additions to the family. Visual Basic .NET has been updated to include many new and improved language features that make it a powerful object-oriented programming language. These features include inheritance, interfaces, and overloading, among others. Visual Basic now also supports structured exception handling, custom attributes and multi-threading.
Visual Basic .NET is also CLS compliant, which means that any CLS-compliant language can use the classes, objects, and components you create in Visual Basic .NET. Managed Extensions for C++ and attributed
programming are just some of the enhancements made to the C++ language. Managed Extensions simplify the task of migrating existing C++ applications to the new .NET Framework. C# is Microsoft's new language. It's a C-style language that is essentially C++ for Rapid Application Development. Unlike other languages, its specification is just the grammar of the language. It has no standard library of its own, and instead has been designed with the intention of using the .NET libraries as its own.
Microsoft Visual J# .NET provides the easiest transition for Java-language developers into the world of XML Web services and dramatically improves the interoperability of Java-language programs.
ActiveState has created Visual Perl and Visual Python, which enable .NET-aware applications to be built in either Perl or Python. Both products can be integrated into the Visual Studio .NET environment. Visual Perl includes support for ActiveState's Perl Dev Kit.
C#.NET is also compliant with the CLS (Common Language Specification) and supports structured exception handling. The CLS is a set of rules and constructs that are supported by the CLR (Common Language Runtime). The CLR is the runtime environment provided by the .NET Framework; it manages the execution of the code and also makes the development process easier by providing services. C#.NET is a CLS-compliant language. Any objects, classes, or components that are created in C#.NET can be used in any other CLS-compliant language. In addition, we can use objects, classes, and components created in other CLS-compliant languages in C#.NET. The use of the CLS ensures complete interoperability among applications, regardless of the languages used to create them.
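As an illustrative sketch (not project code), an assembly can declare its intent to be CLS-compliant with the CLSCompliant attribute; the compiler then warns about public members that break the rules, for example unsigned types in a public signature. The class and member names below are assumptions for the example:

    using System;

    // Ask the compiler to check the public surface of this assembly against the CLS rules.
    [assembly: CLSCompliant(true)]

    public class AccountService
    {
        // CLS-compliant: int (System.Int32) is part of the CLS.
        public int GetBalance(int accountId)
        {
            return 0;   // placeholder value for the sketch
        }

        // This would trigger a CLS-compliance warning if it were made public,
        // because unsigned integers are not CLS-compliant.
        internal uint GetInternalCode()
        {
            return 0u;
        }
    }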
CONSTRUCTORS AND DESTRUCTORS
Constructors are used to initialize objects, whereas destructors are used to destroy them. In other words, destructors are used to release the resources allocated to the object. In C#.NET, the Finalize method is written as a destructor. The finalizer is used to complete the tasks that must be performed when an object is destroyed, and it is called automatically when the object is destroyed. In addition, Finalize can be called only from the class it belongs to or from derived classes.

GARBAGE COLLECTION
Garbage collection is another feature of C#.NET. The .NET Framework monitors allocated resources, such as objects and variables, and automatically releases memory for reuse by destroying objects that are no longer in use. In C#.NET, the garbage collector checks for objects that are not currently in use by applications. When the garbage collector comes across an object that is marked for garbage collection, it releases the memory occupied by the object.
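The following generic C# sketch (not project code) shows a constructor that acquires a notional resource and a destructor/finalizer that the garbage collector invokes when the object is reclaimed; the Connection class name and the explicit GC calls are illustrative assumptions used only to make the finalizer's output visible:

    using System;

    class Connection
    {
        private readonly string _name;

        // Constructor: initialises the object when it is created.
        public Connection(string name)
        {
            _name = name;
            Console.WriteLine("Opened " + _name);
        }

        // Destructor (finalizer): run by the garbage collector before the
        // memory of an unreachable Connection object is reclaimed.
        ~Connection()
        {
            Console.WriteLine("Released " + _name);
        }
    }

    class GcDemo
    {
        static void Main()
        {
            new Connection("db");          // object becomes unreachable immediately
            GC.Collect();                  // for the demo only: request a collection
            GC.WaitForPendingFinalizers(); // wait so the finalizer's output appears
        }
    }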
OVERLOADING
Overloading is another feature of C#. Overloading enables us to define multiple procedures with the same name, where each procedure has a different set of arguments. Besides using overloading for procedures, we can use it for constructors and properties in a class.

MULTITHREADING
C#.NET also supports multithreading. An application that supports multithreading can handle multiple tasks simultaneously; we can use multithreading to decrease the time taken by an application to respond to user interaction.

STRUCTURED EXCEPTION HANDLING
C#.NET supports structured exception handling, which enables us to detect and remove errors at runtime. In C#.NET, we use try-catch-finally statements to create exception handlers. Using try-catch-finally statements, we can create robust and effective exception handlers to improve the performance of our application.
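As a generic sketch (not project code), the fragment below overloads a Deposit method for two different argument lists and wraps a risky conversion in a try-catch-finally handler; the Account class and its members are assumptions made for the example:

    using System;

    class Account
    {
        private decimal _balance;

        // Overloaded method: same name, different argument lists.
        public void Deposit(decimal amount)
        {
            _balance += amount;
        }

        public void Deposit(decimal amount, string note)
        {
            _balance += amount;
            Console.WriteLine("Deposit noted: " + note);
        }

        public decimal Balance
        {
            get { return _balance; }
        }
    }

    class ExceptionDemo
    {
        static void Main()
        {
            var account = new Account();
            account.Deposit(100m);
            account.Deposit(50m, "salary");

            try
            {
                // Parsing invalid input raises a FormatException at runtime.
                decimal amount = decimal.Parse("not-a-number");
                account.Deposit(amount);
            }
            catch (FormatException ex)
            {
                Console.WriteLine("Invalid amount: " + ex.Message);
            }
            finally
            {
                Console.WriteLine("Current balance: " + account.Balance);
            }
        }
    }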
THE .NET FRAMEWORK
The .NET Framework is a new computing platform that simplifies application development in the highly distributed environment of the Internet.

OBJECTIVES OF .NET FRAMEWORK
1. To provide a consistent object-oriented programming environment, whether object code is stored and executed locally, executed locally but Internet-distributed, or executed remotely.
2. To provide a code-execution environment that minimizes software deployment and versioning conflicts and guarantees the safe execution of code.
3. To eliminate performance problems.
4.1 Entity Relationship Diagram
An Entity Relationship Diagram (ERD) expresses the overall logical structure of a database graphically. One of the most important aspects of system analysis is the analysis of the system data, but analyzing only the data gives a narrow view of the system. An entity is represented by a set of attributes.

Symbols:
Entity set: An entity set is a thing or object in an enterprise. Similar objects are grouped into an entity set.
ER DIAGRAM
[Figure: ER diagram showing the ADMIN and USER entities related to the HDFC entity through the CONNECT and HAS relationships.]
4.2 Data Flow Diagram
The diagram that depicts the flow of data through a system and the processes that manipulate that data is called a Data Flow Diagram (DFD). A data flow diagram is a process-modeling technique: a graphical technique that shows how data is transformed from input to output. The data flow diagram is also known as a bubble chart, and it may be used to represent a system or website at any level of abstraction.

Advantages of the Data Flow Diagram:
1. To provide an indication of how data are transformed as they move through the system.
2. To depict the functions that transform the data flow.

Symbols:
Process: transforms incoming data flow(s) into outgoing data flow(s).
[Figure: System flow diagram - Start, Admin/User login, New Account, View Details, Deposit, Result, Logout, Stop.]
UseCase Diagram:
[Figure: Use case diagram - the Admin actor is associated with Login, New Account, Deposit and View Details; the User actor is associated with Login, Transaction, View Profile and Change Password.]
Sequence Diagram
[Figure: Sequence diagram showing the Admin and User lifelines with the login, new account, view details, transaction, view profile and change password interactions.]
6. SYSTEM TESTING AND MAINTENANCE

Testing is vital to the success of the system. System testing makes a logical assumption that if all parts of the system are correct, the goal will be successfully achieved. System testing is the stage of implementation that is aimed at ensuring that the system works accurately and efficiently. In the testing process we test the actual system in an organization, gather errors from the new system and take initiatives to correct them. All the front-end and back-end connectivity is tested to be sure that the new system operates at full efficiency as stated.

The main objective of testing is to uncover errors in the system. For the uncovering process we have to give proper input data to the system, so we should be careful when supplying input data; giving correct inputs is important for efficient testing. Testing is done for each module. After testing all the modules, the modules are integrated and the final system is tested with test data specially designed to show that the system will operate successfully under all conditions. Thus system testing is a confirmation that all is correct and an opportunity to show the user that the system works. Inadequate testing or non-testing leads to errors that may appear a few months later. This will create two problems:

1. Time delay between the cause and the appearance of the problem.
2. The effect of system errors on files and records within the system.

The purpose of system testing is to consider all the likely variations to which the system will be subjected and to push it to its limits. The testing process focuses on the logical internals of the software, ensuring that all statements have been tested, and on the functional externals, that is, conducting tests to uncover errors and ensure that defined inputs will produce actual results that agree with the required results. Testing is done using two common steps: unit testing and integration testing. In this project, system testing is done as follows: procedure-level testing is done first; by giving improper inputs, the errors that occur are noted and eliminated.

Implementation is the final step in the system life cycle. Here we deploy the tested, error-free system into the real-life environment and make the necessary changes so that it runs in an online fashion. System maintenance is then carried out every month or year, based on company policies, and the system is checked for errors such as runtime errors and long-run errors, and for other maintenance tasks such as table verification and reports.

6.1. UNIT TESTING
Unit testing focuses verification efforts on the smallest unit of software design, the module; it is also known as module testing. The modules are tested separately. This testing is carried out during the programming stage itself. In this testing step, each module is found to be working satisfactorily as regards the expected output from the module. An illustrative unit-test sketch is given below.

6.2. INTEGRATION TESTING
Integration testing is a systematic technique for constructing tests to uncover errors associated with the interfaces. In the project, all the modules are combined and then the entire program is tested as a whole. In the integration-testing step, all the errors uncovered are corrected before the next testing steps.
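As a generic sketch of unit testing (not the project's actual test code), the fragment below tests a small deposit routine in isolation using the NUnit framework; the Account class, the test names, and the use of NUnit are illustrative assumptions.

    using NUnit.Framework;   // assumes the NUnit test framework is referenced

    // A small unit under test, assumed for the sake of the example.
    public class Account
    {
        public decimal Balance { get; private set; }

        public void Deposit(decimal amount)
        {
            if (amount <= 0)
                throw new System.ArgumentException("Amount must be positive");
            Balance += amount;
        }
    }

    [TestFixture]
    public class AccountTests
    {
        [Test]
        public void Deposit_IncreasesBalance()
        {
            var account = new Account();
            account.Deposit(100m);
            Assert.AreEqual(100m, account.Balance);
        }

        [Test]
        public void Deposit_RejectsNonPositiveAmounts()
        {
            var account = new Account();
            Assert.Throws<System.ArgumentException>(() => account.Deposit(0m));
        }
    }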
7. SYSTEM IMPLEMENTATION

Implementation is the stage of the project when the theoretical design is turned into a working system. Thus it can be considered to be the most critical stage in achieving a successful new system and in giving the user confidence that the new system will work and be effective.
The implementation stage involves careful planning, investigation of the existing system and its constraints on implementation, designing of methods to achieve changeover and evaluation of changeover methods. Implementation is the process of converting a new system design into operation. It is the phase that focuses on user training, site preparation and file conversion for installing a candidate system. The important factor that should be considered here is that the conversion should not disrupt the functioning of the organization.
GSM:
GSM (Global System for Mobile Communications: originally from Groupe Spécial Mobile) is the world's most popular standard for mobile telephony systems. The GSM Association estimates that 80% of the global mobile market uses the standard. GSM is used by over 1.5 billion people across more than 212 countries and territories. This ubiquity means that subscribers can use their phones throughout the world, enabled by international roaming arrangements between mobile network operators. GSM differs from its predecessor technologies in that both signaling and speech channels are digital, and thus GSM is considered a second generation (2G) mobile phone system. This also facilitates the widespread implementation of data communication applications into the system.
A GSM network is composed of several functional entities, whose functions and interfaces are specified. Figure 1 shows the layout of a generic GSM network. The GSM network can be divided into three broad parts. The Mobile Station is carried by the subscriber. The Base Station Subsystem controls the radio link with the Mobile Station. The Network Subsystem, the main part of which is the Mobile services Switching Center (MSC), performs the switching of calls between the mobile users, and between mobile and fixed network users. The MSC also handles the mobility management operations. Not shown is the Operations and Maintenance Center, which oversees the proper operation and setup of the network. The Mobile Station and the Base Station Subsystem communicate across the Um interface, also known as the air interface or radio link. The Base Station Subsystem communicates with the Mobile services Switching Center across the A interface.
Figure 1. General architecture of a GSM network

Mobile Station
The mobile station (MS) consists of the mobile equipment (the terminal) and a smart card called the Subscriber Identity Module (SIM). The SIM provides personal mobility, so that the user can have access to subscribed services irrespective of a specific terminal. By inserting the SIM card into another GSM terminal, the user is able to receive calls at that terminal, make calls from that terminal, and receive other subscribed services.
The mobile equipment is uniquely identified by the International Mobile Equipment Identity (IMEI). The SIM card contains the International Mobile Subscriber Identity (IMSI) used to identify the subscriber to the system, a secret key for authentication, and other information. The IMEI and the IMSI are independent, thereby allowing personal mobility. The SIM card may be protected against unauthorized use by a password or personal identity number.

Base Station Subsystem
The Base Station Subsystem is composed of two parts, the Base Transceiver Station (BTS) and the Base Station Controller (BSC). These communicate across the standardized Abis interface, allowing (as in the rest of the system) operation between components made by different suppliers. The Base Transceiver Station houses the radio transceivers that define a cell and handles the radio-link protocols with the Mobile Station. In a large urban area there will potentially be a large number of BTSs deployed, so the requirements for a BTS are ruggedness, reliability, portability, and minimum cost. The Base Station Controller manages the radio resources for one or more BTSs. It handles radio-channel setup, frequency hopping, and handovers, as described below. The BSC is the connection between the mobile station and the Mobile services Switching Center (MSC).

Network Subsystem
The central component of the Network Subsystem is the Mobile services Switching Center (MSC). It acts like a normal switching node of the PSTN or ISDN, and additionally provides all the functionality needed to handle a mobile subscriber, such as registration, authentication, location updating, handovers, and call routing to a roaming subscriber. These services are provided in conjunction with several functional entities, which together form the Network Subsystem. The MSC provides the connection to the fixed networks (such as the PSTN or ISDN). Signalling between functional entities in the Network Subsystem uses Signalling System Number 7 (SS7), which is used for trunk signalling in ISDN and widely used in current public networks. The Home Location Register (HLR) and Visitor Location Register (VLR), together with the MSC, provide the call-routing and roaming capabilities of GSM. The HLR contains all the administrative information of each subscriber registered in the corresponding GSM network, along with the current location of the mobile. The location of the mobile is typically in the form of the signalling address of the VLR associated with the mobile station. The actual routing procedure is described later. There is logically one HLR per GSM network, although it may be implemented as a distributed database.
The Visitor Location Register (VLR) contains selected administrative information from the HLR, necessary for call control and provision of the subscribed services, for each mobile currently located in the geographical area controlled by the VLR. Although each functional entity can be implemented as an independent unit, all manufacturers of switching equipment to date implement the VLR together with the MSC, so that the geographical area controlled by the MSC corresponds to that controlled by the VLR, thus simplifying the signalling required. Note that the MSC contains no information about particular mobile stations; this information is stored in the location registers. The other two registers are used for authentication and security purposes. The Equipment Identity Register (EIR) is a database that contains a list of all valid mobile equipment on the network, where each mobile station is identified by its International Mobile Equipment Identity (IMEI). An IMEI is marked as invalid if it has been reported stolen or is not type approved. The Authentication Center (AuC) is a protected database that stores a copy of the secret key stored in each subscriber's SIM card, which is used for authentication and encryption over the radio channel.
Bibliography
To complete this project we took the help of the following books and websites:

Books Referred
- ASP.NET Black Book
- Software Engineering - Roger Pressman
- Database Management System - Henry F. Korth
- Complete Reference of SQL - Tata McGraw-Hill
- Complete Reference of ASP.NET - Matthew MacDonald
Websites Referred
- www.w3schools.com
- www.google.com