ABSTRACT: The World Wide Web (WWW) and its use are growing all the time. The sharp rise in the number of Internet users, the increasing switchover to online payments, the proliferation of Internet-enabled devices, and favorable demographics are essential elements driving the digital marketing growth story. Server overload can occur under a variety of circumstances. When a large number of people try to access a website and its content at once, the server may become overburdened, since a server is provisioned to handle only a specific level of traffic. Some operations demand an excessive amount of bandwidth, while in other situations the system consumes an inordinate amount of RAM or runs out of processing power. Hard disk speed, RAM, and CPU speed are only a few of the factors that determine how much load a server can handle. Virtual memory or hard drive space, as well as bus speeds, may also affect how the server handles load, but neither of these factors is commonly linked to server overload. This paper examines the issues that global servers face.
Index Terms - World Wide Web, Overloading, Traffic, Server, Issues.
I. INTRODUCTION
Cloud computing is an emerging platform in the field of information technology, and it faces a number of issues, including load balancing and confidentiality. Even organizations with substantial experience in dealing with server-side issues will eventually face an overloaded web server. For a company that is new to identifying and responding to server issues, however, an overload can be highly disruptive, costing visitors, money, and reputation. Internet applications are pervasive nowadays, and they have become vital for both individuals and businesses. This significance is reflected in the high demands placed on these applications' performance, availability, and dependability. For many websites, an overwhelmed server is an insidious problem: regardless of the type of hosting, a few issues arise frequently and can be avoided from the start. To protect users' information, Internet applications employ sophisticated web content and security capabilities. Server capacity planning is the practice of optimizing current and future server performance by drawing on knowledge of existing server capacity consumption. A subspace identification method can be used to create a big-data distribution model for website network user traffic, and a neural network identification method to conduct statistical analysis of website network traffic data [1]. A site's traffic is the number of visitors it receives; the links between website traffic and quality have received far less attention than the linkages between Web hyperlink data and similar performance indicators.
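To make the idea of capacity planning concrete, the sketch below estimates how long current capacity lasts under steady traffic growth. It is a minimal illustration only: the function name, the 5% monthly growth figure, and the 80% upgrade ceiling are assumptions for the example, not values drawn from the cited study [1].

```python
def months_until_saturation(current_util, monthly_growth, ceiling=0.8):
    """Estimate how many months until peak server utilization crosses a
    safety ceiling, assuming compound monthly traffic growth.

    current_util   -- current peak utilization as a fraction (e.g. 0.45)
    monthly_growth -- expected monthly traffic growth rate (e.g. 0.05 for 5%)
    ceiling        -- utilization level that should trigger a capacity upgrade
    """
    months, util = 0, current_util
    while util < ceiling:
        util *= 1 + monthly_growth
        months += 1
    return months

# Example: a server at 45% peak utilization with traffic growing 5% per
# month crosses the 80% upgrade threshold in about 12 months.
print(months_until_saturation(0.45, 0.05))  # -> 12
```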
Load balancing is among the most difficult of these problems. It is a strategy for distributing workload evenly across all nodes, avoiding situations where some nodes are heavily loaded while others are underutilized or idle. The load balancing mechanism has a significant impact on the system's performance. In cloud computing, effective load balancing ensures that the cost of accessing the cloud provider's resources is reduced and that resources can be made readily available on user demand. Users can then exploit resources efficiently whether the load is high or low, and when the load is low, energy can be saved.
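As a minimal sketch of the even-distribution idea, the Python example below routes each incoming request to the backend node with the fewest active connections, so no node sits heavily loaded while another is idle. The node names and the least-connections policy are illustrative choices, not a method prescribed by the text.

```python
class LeastConnectionsBalancer:
    """Toy load balancer: send each request to the node with the
    fewest active connections."""

    def __init__(self, nodes):
        # active connection count per backend node
        self.active = {node: 0 for node in nodes}

    def acquire(self):
        # pick the least-loaded node for the incoming request
        node = min(self.active, key=self.active.get)
        self.active[node] += 1
        return node

    def release(self, node):
        # the request finished; free capacity on that node
        self.active[node] -= 1

balancer = LeastConnectionsBalancer(["node-a", "node-b", "node-c"])
node = balancer.acquire()  # all nodes are equally loaded, so "node-a" is chosen
balancer.release(node)
```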
II. LITERATURE REVIEW
Workspace within a data centre is one of the top data centre challenges: poor planning of the data centre's physical infrastructure, whether insufficient capacity, excess space, or overheating due to hardware proximity, can have substantial and long-term consequences. William V. Wollman, Harry Jegers et al. [4] describe how Global Server Load Balancing (GSLB) improves application performance by allowing the client to choose a server depending on server performance, network speed, and the approximate network locations of the client and server. Vijender Kumar, Gowri Anand et al. [5] explain that external memory subsystems can benefit from SAS, giving businesses flexibility and cost reductions previously unavailable in traditional storage setups. The native dual-port functionality of each SAS drive, which provides an alternative path to every drive in the event of a controller failover, is an essential benefit of SAS-based storage subsystems. With increasing workloads, storage requirements will keep rising, and scaling up the hardware at regular intervals is the only way to increase the storage capacity of servers. Dedicated hosting services are not shared: organizations enjoy better security because they have full authority over the server, and such services can be expected to deliver high-end performance, although they are costly [6]. During overload conditions, processing times may increase to intolerable levels, and the resulting strain may lead the server to behave abnormally and perhaps even crash, resulting in denial of service. In e-commerce systems, such server behavior can result in significant financial losses [7].
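A rough sketch of the GSLB-style server selection described in [4] follows: each candidate server is scored on measured latency and reported load, and the client is routed to the best-scoring one. The field names, the 0.7/0.3 weighting, and the server list are hypothetical, chosen only to illustrate the idea.

```python
def pick_server(servers):
    """GSLB-style selection sketch: lower latency and lower load both
    improve a server's score; route the client to the best candidate."""
    def score(s):
        # the 0.7/0.3 weighting is an arbitrary choice for this sketch
        return 0.7 * s["latency_ms"] + 0.3 * s["load_pct"]
    return min(servers, key=score)

servers = [
    {"name": "us-east", "latency_ms": 40, "load_pct": 70},
    {"name": "eu-west", "latency_ms": 90, "load_pct": 20},
    {"name": "ap-south", "latency_ms": 180, "load_pct": 10},
]
print(pick_server(servers)["name"])  # -> "us-east" (score 49 vs. 69 and 129)
```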
III. CHALLENGES FACED BY GLOBAL SERVER
• Data Security
• Capacity Planning
• Uptime and performance maintenance
UPTIME AND PERFORMANCE MAINTENANCE
Extended downtime can be devastating, and it might even spell the end for a client's business. The accepted practice for web hosting is 99.9% uptime; more than about 10 minutes of downtime per week is considered catastrophic. Hardware and software breakdowns, data leaks, and administrative faults are the major causes of unexpected, emergency downtime. Planned downtime, which is unavoidable, is best carried out at night or during off-peak hours, when the number of visitors to the website is usually at its lowest. Installing reliable software and updating it on a regular basis helps avoid glitches on the network, and using high-quality, tried-and-tested hardware together with regular system integration assessments prevents degradation. Genuine analytics-driven monitoring is closely tied to self-healing data centres that can take action, such as unplugging a host or rerouting transmitted data, before an alert is raised.
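The 99.9% figure translates directly into a downtime budget, as the small calculation below shows; the helper function is illustrative.

```python
def allowed_downtime_minutes(uptime_pct, period_hours):
    """Downtime budget implied by an uptime percentage over a period."""
    return period_hours * 60 * (1 - uptime_pct / 100)

# 99.9% uptime over a 168-hour week allows roughly 10 minutes of
# downtime, matching the rule of thumb above.
print(round(allowed_downtime_minutes(99.9, 24 * 7), 1))  # -> 10.1
```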
Figure 4: Flow of Speed and Quality of Connection from User’s End to Server
SITE OUTAGE AND DOWNTIME
Whenever a webpage is fully unreachable or unable to perform its principal function for its visitors, it is said to be unavailable, and the duration of the outage is referred to as the server outage. If a webpage is hosted on a shared server, its host may suspend or remove the page to safeguard the other websites on that server when there is a large surge in traffic. Such an unusually high level of web traffic may indicate an attacker or Trojan attempting to trigger a significant disruption. DDoS (Distributed Denial of Service) attacks are attempts to take down a website by sending a high volume of false traffic from a cluster of devices; if the site's security measures are not up to date, the site is extremely susceptible to such an attack. In the absence of a regular hardware service and support schedule, an unexpected breakdown can occur, resulting in downtime, and no hosting company can guarantee perfect uptime. Server outages have occasionally been reported even on major social media networks, including Instagram, Facebook, WhatsApp and Twitter. Because of the rapid rate of technological change, businesses choose to invest in incremental improvements rather than replacing complete servers, to save money. Replacing servers requires long periods of downtime; however, it is often necessary to keep pace with the modern web.
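A common first line of defense against the abnormal traffic surges described above is per-client rate limiting. The token-bucket sketch below is a simplified illustration, with the rate and burst parameters chosen arbitrarily; a production deployment would sit at the network edge rather than in application code.

```python
import time
from collections import defaultdict

class TokenBucketLimiter:
    """Per-client token bucket: clients sending faster than `rate`
    requests per second are rejected once their burst allowance is
    spent, instead of being allowed to exhaust the server."""

    def __init__(self, rate, burst):
        self.rate = rate    # tokens refilled per second
        self.burst = burst  # maximum bucket size
        self.tokens = defaultdict(lambda: burst)
        self.stamp = defaultdict(time.monotonic)

    def allow(self, client_ip):
        now = time.monotonic()
        elapsed = now - self.stamp[client_ip]
        self.stamp[client_ip] = now
        # refill proportionally to elapsed time, capped at the burst size
        self.tokens[client_ip] = min(
            self.burst, self.tokens[client_ip] + elapsed * self.rate)
        if self.tokens[client_ip] >= 1:
            self.tokens[client_ip] -= 1
            return True
        return False

limiter = TokenBucketLimiter(rate=5, burst=10)
print(limiter.allow("203.0.113.7"))  # True until the bucket empties
```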
REFERENCES
1. Fettweis, G., Nagel, W., & Lehner, W. (2012, March). Pathways to servers of the future. In 2012 Design, Automation &
Test in Europe Conference & Exhibition (DATE) (pp. 1161-1166). IEEE.
2. Huang, L., Zhu, L., Zhou, X., & Liu, J. (2020, February). Research on Website Traffic Statistics System. In 2020 12th
International Conference on Measuring Technology and Mechatronics Automation (ICMTMA) (pp. 957-961). IEEE.
3. Nair, N. K., Navin, K. S., & Chandra, C. S. (2015, April). A survey on load balancing problem and implementation of
replicated agent based load balancing technique. In 2015 Global Conference on Communication Technologies
(GCCT) (pp. 897-901). IEEE.
4. Wollman, W. V., Jegers, H., Loftus, M., & Wan, C. (2003, October). Plug and play server load balancing and global
server load balancing for tactical networks. In IEEE Military Communications Conference, 2003. MILCOM 2003. (Vol.
2, pp. 933-937). IEEE.
5. Kumar, V., Anand, G., Kumar, S., Vasa, M., Wallace, D., & Mutnury, B. (2017, October). SAS 4.0 (22.5 Gbps)
challenges for server platforms. In 2017 IEEE 26th Conference on Electrical Performance of Electronic Packaging and
Systems (EPEPS) (pp. 1-3). IEEE.
6. Abidi, F., & Singh, V. (2013, December). Cloud servers vs. dedicated servers—A survey. In 2013 IEEE International
Conference in MOOC, Innovation and Technology in Education (MITE) (pp. 1-5). IEEE.
7. Guitart, J., Torres, J., & Ayguadé, E. (2010). A survey on performance management for internet
applications. Concurrency and Computation: Practice and Experience, 22(1), 68-106.
8. Kumar, N., & Karhana, A. (2019). Data security framework for data-centers. International Journal of Computer Sciences and Engineering, 7(1), 451-456.
9. Spellmann, A. C., & Gimarc, R. L. (2013, November). Capacity Planning: A Revolutionary Approach for Tomorrow's
Digital Infrastructure. In Int. CMG Conference.