
© 2022 IJRTI | Volume 7, Issue 6 | ISSN: 2456-3315

Global Server – A Review on Security Issues and Research Challenges

Dr. Mrs. Jasmine Samraj¹, Mrs. Menaka N²
¹Research Supervisor, PG and Research Department of Computer Science, Quaid-E-Millath Government College for Women (Autonomous), Chennai.
²Research Scholar (Ph.D.), PG and Research Department of Computer Science, Quaid-E-Millath Government College for Women (Autonomous), Chennai.

ABSTRACT: The World Wide Web (WWW) and its use are growing continuously. The significant increase in the number of Internet users, the accelerating switchover to online payments, the growth of Internet-enabled gadgets, and favorable demographics are essential elements driving the digital growth story. Server overload can occur under a variety of circumstances. When a large number of people try to access a website and its content, the server may become overburdened, since a server is provisioned to handle only a specific level of traffic. Some operations demand excessive bandwidth, while in other situations the system consumes an inordinate amount of RAM or runs out of processor capacity. Hard disk speed, RAM, and CPU speed are only a few of the factors that determine how much load a server can handle. Virtual memory, hard drive space, and bus speeds may also affect how the server handles load, but these factors are less commonly linked to server overload. This paper examines the issues that the Global Server faces.
Index Terms - World Wide Web, Overloading, Traffic, Server, Issues.

I. INTRODUCTION
Cloud Computing is an emerging platform in the area of information technology that faces a number of issues, including load balancing as well as confidentiality. Even organizations with substantial experience in dealing with server-side issues will eventually face an overloaded web server. For a company that is new to identifying and responding to server issues, however, an overload can be highly disruptive, costing visitors, money, and reputation. Internet applications are pervasive nowadays, and they have become vital for both individuals and businesses. This significance is reflected in the high demands placed on these applications' performance, availability, and dependability. For many websites, an overwhelmed server is a sneaky problem: regardless of the type of hosting, a few issues arise frequently and can be avoided from the start. To protect users' information, Internet applications provide sophisticated web content and security capabilities. Server capacity planning is the practice of optimizing current and future server performance by using knowledge of existing server capacity consumption. A subspace identification method can be used to build a big-data distribution model of website network user traffic, and a neural network identification method can then be used to conduct statistical analysis of the website traffic data [1]. A site's traffic is the number of visitors it receives. The links between website traffic and quality have not received as much attention as the linkages between Web hyperlink data and similar performance indicators.
Load balancing is among the most difficult problems that exist today. It is defined as a strategy for evenly distributing workload across all nodes in order to avoid situations where some nodes are substantially busy while others are underutilized or idle. The load balancing mechanism has a significant impact on the system's performance. In cloud computing, effective load balancing ensures that the cost of accessing the cloud provider's resources is reduced and that resources can be made readily available based on user demand. Users can then exploit the resources easily regardless of whether the load is high or low, and when the load is low, energy can be saved. A minimal sketch of one common balancing policy is given below.
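To make the idea concrete, here is a minimal sketch of a least-connections balancing policy; the node names and counters are hypothetical, and production balancers such as HAProxy or NGINX implement far richer policies.

```python
# Minimal sketch of a least-connections load balancer (illustrative only;
# node names and connection counts are hypothetical).

class LoadBalancer:
    def __init__(self, servers):
        # Track the number of active connections per server node.
        self.active = {server: 0 for server in servers}

    def pick_server(self):
        # Route the next request to the node with the fewest active connections.
        return min(self.active, key=self.active.get)

    def start_request(self):
        server = self.pick_server()
        self.active[server] += 1
        return server

    def finish_request(self, server):
        self.active[server] -= 1

lb = LoadBalancer(["node-a", "node-b", "node-c"])
server = lb.start_request()   # "node-a" on the first call
lb.finish_request(server)
```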
II. LITERATURE REVIEW
Workspace within a data centre is one of the top data centre challenges. Poor planning of data centre physical infrastructure can have substantial and long-term consequences, whether it is insufficient capacity, excessive unused space, or overheating due to hardware proximity. William V. Wollman and Harry Jegers [4] describe how Global Server Load Balancing (GSLB) improves application performance by allowing the client to choose a server depending on server performance, network speed, and the approximate network locations of the client and server. Vijender Kumar and Gowri Anand [5] explain that external memory subsystems benefit from SAS, giving businesses flexibility and cost reductions previously unavailable in traditional storage setups. The native dual-port functionality of each SAS drive, which provides an alternative path to every drive in the case of a controller failover, is an essential benefit of SAS-based storage subsystems. With increasing workloads, storage requirements will keep rising, and scaling up the hardware at regular intervals is the only way to increase the storage space of servers. Dedicated hosting services are not shared: organizations gain security because they have full authority over the hardware, and although such servers are costly, they can be expected to deliver high-end performance [6]. During overload conditions, processing times may rise to intolerable levels, and resource strain may lead the server to behave abnormally or even crash, resulting in denial of service. Such server behavior in e-commerce systems can result in significant cost overruns [7]. The GSLB selection idea is sketched below.
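As a rough illustration of the GSLB idea in [4], the sketch below scores each candidate data centre by measured latency and current load and directs the client to the best one; the metrics and weights are hypothetical assumptions, not the scheme from that paper.

```python
# Illustrative GSLB-style server selection: score each data centre by
# round-trip latency and load, then direct the client to the best one.
# All figures and weights below are hypothetical.

servers = [
    {"name": "us-east", "latency_ms": 40, "load": 0.70},
    {"name": "eu-west", "latency_ms": 90, "load": 0.30},
    {"name": "ap-south", "latency_ms": 180, "load": 0.10},
]

def score(server, latency_weight=1.0, load_weight=100.0):
    # Lower is better: penalize both network distance and server load.
    return latency_weight * server["latency_ms"] + load_weight * server["load"]

best = min(servers, key=score)
print(best["name"])  # "us-east" with these example numbers
```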
III. CHALLENGES FACED BY GLOBAL SERVER
• Data Security
• Capacity Planning
• Uptime and performance maintenance
• Slow website loading and navigation speed
• Site outage and downtime
DATA SECURITY
Data security issues surface constantly these days, and a variety of encryption algorithms have emerged in response. At the same time, the many security products deployed on the Internet generate a significant volume of log files every day, and figuring out how to extract useful knowledge from them has become a pressing concern. Cloud servers are complicated because security content must be handled confidentially while still adhering to a single comprehensive security strategy. An intruder only needs to attack the least secure level in order to gain access to another device. Security is of two types: software and hardware. Hardware security is a broad term that refers to a variety of processes and tactics used to keep outsiders out. Attacks such as deceiving the firewall, snatching credentials, or exploiting similar flaws take place through software, and software security can block hackers from reaching the organization. Data governance is required here to ensure that every piece of data is authorized and has an authorized owner who can read and edit it. Every piece of data should be delivered only after adequate certification and handshakes between sender and recipient, to ensure the data reaches the right location from the authorized sender. To ensure security, all data must be transmitted over a secure channel with encryption [8]. Every security flaw may result in the loss of millions of dollars in intellectual property, confidential data leaks, and sensitive data breaches. A data security management system that employs data mining and fusion technologies performs better. Figure 1 illustrates the data transmission flow where security has to be implemented.

Figure 1: Overview of Data Transmission through Servers
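As a minimal sketch of the "secure channel with encryption" requirement above [8], the example below uses symmetric encryption from the widely used Python cryptography package; key distribution and transport security (e.g., TLS) are assumed to be handled separately during the certification and handshake step.

```python
# Minimal sketch: encrypting a payload before transmission using
# symmetric (Fernet) encryption from the "cryptography" package.
# The shared key is assumed to be exchanged out of band between the
# authorized sender and recipient.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # shared secret between the two parties
cipher = Fernet(key)

plaintext = b"payload destined for the authorized recipient"
token = cipher.encrypt(plaintext)  # ciphertext, safe to send over the channel
assert cipher.decrypt(token) == plaintext
```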


CAPACITY PLANNING
Capacity planning is an effective method for ensuring high-quality service on the Internet. Modern cloud providers offer a wide range of IT services, the majority of which are hosted on specialized web hardware. Virtualization is a key technique for consolidating servers, resulting in higher server utilization. A great deal of information is available about a virtualized environment, but the sophistication of such an environment resists simple metrics. Adding more Direct Attached Storage (DAS) to a server increases management complexity, does not solve data security or availability difficulties, and is frequently restricted by simple capacity constraints. Data centre managers are prone to over-provisioning in order to prevent failure [9], which results in wasted power and energy. As the amount of data has grown, the capacity of a data centre has remained an open question for data centre management, and as the user population has grown, so has the complexity. Mobile application configurations are starting to move more of the data processing from the end-user device back into the data centre, and planning must cover millions of similar workloads rather than unique ones. Data centre managers can use a DCIM system to discover unused physical space, capacity, electricity, cooling, and other resources in a data centre. This makes it simple to increase capacity while lowering expenses, conserving energy, and avoiding downtime. A toy headroom estimate is sketched below.
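As a toy illustration of headroom planning, the sketch below estimates when a storage pool will be exhausted given its current utilization and a steady growth rate; all figures are hypothetical, and a real DCIM system tracks many more dimensions (space, power, cooling).

```python
import math

def months_until_full(used_tb, total_tb, monthly_growth_rate):
    """Estimate months until a storage pool fills, assuming steady
    compound growth. All figures here are hypothetical examples."""
    if used_tb >= total_tb:
        return 0.0
    # used * (1 + r)^m = total  =>  m = log(total / used) / log(1 + r)
    return math.log(total_tb / used_tb) / math.log(1 + monthly_growth_rate)

# Example: 60 TB used out of 100 TB, growing 5% per month.
print(round(months_until_full(60, 100, 0.05), 1))  # about 10.5 months
```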

Figure 2: Overview of Load Balancing and Management


UPTIME AND PERFORMANCE MAINTENANCE
In today's digital revolution, a strong, well-functioning website gives an organization a unique opportunity to connect with clients all over the world. To get the most out of a company's Web investment, it is vital to make sure that all aspects of the site are working effectively. As a technique for stability, scheduling, and evaluation, network monitoring is a vital and critical component of any digital strategy. Website services operate in a very volatile environment, and network outages as well as other problems are unavoidable realities. In the website hosting market, however, downtime is devastating, and it might even spell the end of a client's business. The accepted practice for web hosting is 99.9% uptime, which permits roughly ten minutes of downtime per week; exceeding that is catastrophic (see the calculation below). Hardware and software breakdowns, data leaks, and administrative faults are the major causes of unexpected, emergency downtime. Planned downtime, which is unavoidable, is best carried out at night or during off-peak hours, when the number of visitors to the website is usually at its lowest. Installing reliable software and updating it regularly can avoid glitches on the network, and using high-quality, proven hardware together with regular system integration assessments prevents degradation. Analytics-driven monitoring is closely related to self-healing data centres, which can take actions such as isolating a host or rerouting traffic before an alert is even raised.
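The downtime budget implied by an uptime target can be computed directly; as a quick check of the figure above, 99.9% uptime allows roughly ten minutes of downtime per week.

```python
def downtime_minutes(uptime_percent, period_hours):
    """Downtime budget permitted by an uptime SLA over a given period."""
    return (1 - uptime_percent / 100) * period_hours * 60

week_hours = 7 * 24
print(round(downtime_minutes(99.9, week_hours), 1))   # 10.1 minutes per week
print(round(downtime_minutes(99.9, 365 * 24), 1))     # 525.6 minutes per year
```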

Figure 3: Overview of Load Balancing and Management


SLOW NAVIGATION SPEED AND WEBSITE LOADING TIME
Consumers are unlikely to stay on a website that takes too long to load or has sluggish internal navigation. According to studies, the time a person will wait without losing attention ranges between 0.3 and 3 seconds; that means the data should be displayed to the consumer within three seconds. Five out of every six web users will abandon a site that takes longer than about four seconds to respond. To make things worse, slow website loading has a negative impact on Google rankings, leading to a gradual decline in traffic. Each HTTP request-response round trip adds delay every time a page switches to another location. The best hosting companies use a variety of tactics to keep webpages responsive, such as deploying alternative servers around the world: the closer the serving data centre is to the visitor, the more quickly the webpage loads. Unnecessary extra objects in the database, such as leftover records and other files from extensions or customizations, are referred to as "overhead." SQL queries can take more time than necessary if there is too much overhead, and in some instances it can even cause the HTTP server to time out while waiting for a database response. Graphics and videos are common examples of huge media files; compression reduces their size and improves loading speed, as the sketch below illustrates.
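A quick sketch of the compression point using Python's standard gzip module: repetitive content such as markup compresses dramatically, which is why servers typically gzip HTML, CSS, and JavaScript before sending them.

```python
import gzip

# Repetitive content (like HTML markup) compresses very well.
page = b"<div class='item'>example row</div>\n" * 2000
compressed = gzip.compress(page)

print(len(page), "bytes uncompressed")
print(len(compressed), "bytes gzipped")  # a small fraction of the original
```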

Figure 4: Flow of Speed and Quality of Connection from User’s End to Server
SITE OUTAGE AND DOWNTIME
Whenever a webpage is completely unreachable or unable to perform its principal function for its visitors, it is said to be unavailable, and the duration of this unavailability is referred to as the server outage. If a webpage is hosted on a shared central server, its host may stop or remove the page to safeguard co-hosted websites if there is a large spike in traffic. Such an unusually high level of web traffic might indicate an attacker or Trojan attempting to trigger a significant disruption. DDoS (Distributed Denial of Service) attacks are attempts to take down a website by sending a high volume of false traffic from a cluster of devices; as a result, if a site's security measures are not up to date, the site is extremely susceptible to attack. In the absence of a regular hardware service and support schedule, an unexpected breakdown can occur, resulting in downtime. The uptime of a hosting company cannot be guaranteed: server outages have occasionally been reported even on major social media networks, including Instagram, Facebook, WhatsApp, and Twitter. Because of the rapid rate of technological change, businesses choose to invest in incremental improvements rather than replacing complete servers, to save money. Replacing servers requires long periods of downtime; however, it is often necessary in order to keep pace with the modern web. A simple traffic-flood detection sketch is given below.
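As an illustrative first defence against the bogus-traffic floods described above, the sketch below implements a simple sliding-window rate limiter per client IP; the thresholds and window size are hypothetical, and production systems rely on dedicated DDoS mitigation services.

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Sliding-window limiter: flag clients that exceed max_requests
    within window_seconds. The thresholds here are hypothetical."""
    def __init__(self, max_requests=100, window_seconds=10):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(deque)

    def allow(self, client_ip):
        now = time.monotonic()
        q = self.hits[client_ip]
        # Discard timestamps that have fallen out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # candidate for blocking or closer inspection
        q.append(now)
        return True

limiter = RateLimiter(max_requests=5, window_seconds=1)
print([limiter.allow("203.0.113.7") for _ in range(7)])
# [True, True, True, True, True, False, False]
```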


Figure 5: Server Outage


IV. CONCLUSION
The Global Server faces a number of issues, which are described in this paper. Data centres with a large number of servers must be protected, and when using cloud technology, workload balancing must account for both overload and underload situations. Server downtime, capacity planning, and maintenance management are also described in this paper. To ensure data-flow security, an overview of data transfer between server and client is discussed. It is necessary to create a framework with strong policies to strengthen data security and to overcome all the issues described.

REFERENCES
1. Fettweis, G., Nagel, W., & Lehner, W. (2012, March). Pathways to servers of the future. In 2012 Design, Automation &
Test in Europe Conference & Exhibition (DATE) (pp. 1161-1166). IEEE.
2. Huang, L., Zhu, L., Zhou, X., & Liu, J. (2020, February). Research on Website Traffic Statistics System. In 2020 12th
International Conference on Measuring Technology and Mechatronics Automation (ICMTMA) (pp. 957-961). IEEE.
3. Nair, N. K., Navin, K. S., & Chandra, C. S. (2015, April). A survey on load balancing problem and implementation of
replicated agent based load balancing technique. In 2015 Global Conference on Communication Technologies
(GCCT) (pp. 897-901). IEEE.
4. Wollman, W. V., Jegers, H., Loftus, M., & Wan, C. (2003, October). Plug and play server load balancing and global
server load balancing for tactical networks. In IEEE Military Communications Conference, 2003. MILCOM 2003. (Vol.
2, pp. 933-937). IEEE.
5. Kumar, V., Anand, G., Kumar, S., Vasa, M., Wallace, D., & Mutnury, B. (2017, October). SAS 4.0 (22.5 Gbps)
challenges for server platforms. In 2017 IEEE 26th Conference on Electrical Performance of Electronic Packaging and
Systems (EPEPS) (pp. 1-3). IEEE.
6. Abidi, F., & Singh, V. (2013, December). Cloud servers vs. dedicated servers—A survey. In 2013 IEEE International
Conference in MOOC, Innovation and Technology in Education (MITE) (pp. 1-5). IEEE.
7. Guitart, J., Torres, J., & Ayguadé, E. (2010). A survey on performance management for internet
applications. Concurrency and Computation: Practice and Experience, 22(1), 68-106.
8. Kumar, N., & Karhana, A. (2019). Data security framework for data-centers. International Journal of Computer Sciences and Engineering, 7(1), 451-456.
9. Spellmann, A. C., & Gimarc, R. L. (2013, November). Capacity Planning: A Revolutionary Approach for Tomorrow's
Digital Infrastructure. In Int. CMG Conference.
