Bi El
(22CS2D03T)
Submitted by
SANJANA R 1RV23LCS09
SHREYAS K G 1RV23LCS11
2024-2025
Contents
Introduction
Cloud computing
Telco cloud network
Problem statement
Network Function Virtualization
Software Defined Network
Virtual Machine
Hypervisor
Docker and Container
Advantages
Application
Conclusion
INTRODUCTION
Real Estate Property is not only a person's primary desire, but it also reflects a person's
wealth and prestige in today's society. Real estate investment typically appears to be lucrative
since property values do not drop in a choppy fashion. Changes in the value of the real estate
will have an impact on many home investors, bankers, policymakers, and others. Real estate
investing appears to be a tempting option for investors. As a result, anticipating the important
estate price is an essential economic indicator. According to the 2011 census, the Asian country
ranks second in the world in terms of the number of households, with a total of 24.67 crores.
However, previous recessions have demonstrated that real estate costs cannot be seen. The
expenses of significant estate property are linked to the state's economic situation. Regardless,
we don't have accurate standardized approaches to live the significant estate property values.
The publication's description is minimal error and the highest accuracy. The aforementioned
title of the paper is Hedonic models based on price data from Belfast infer that submarkets and
residential valuation this model is used to identify over a larger spatial scale and implications
for the evaluation process related to the selection of comparable evidence and the quality of
variables that the values may require. Understanding current developments in house prices and
homeownership are the subject of the study. In this article, they utilized a feedback mechanism
or social pandemic that fosters a perception of property as an essential market investment.
OBJECTIVE
The prediction task here is intended to be as instructional as possible, working through each stage of the machine learning process and trying to understand it well. Bangalore real estate prediction is treated as a "toy problem": a problem that is not of immediate scientific relevance but is useful for demonstration and practice. The objective is to forecast the price of a specific apartment from market data while accounting for the various "features" defined in the following sections.
METHODOLOGY
1. Collection of Dataset: The data were gathered from Bangalore home prices. The dataset includes many variables such as area type, availability, location, BHK, society, total square feet, bathrooms, and balconies. The complete dataset was downloaded from the Kaggle website.
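As a minimal sketch (assuming pandas is installed and the Kaggle file has been saved locally, here under the illustrative name Bengaluru_House_Data.csv), the dataset can be loaded as follows:

    import pandas as pd

    # Load the Bangalore home-price data downloaded from Kaggle
    df = pd.read_csv("Bengaluru_House_Data.csv")

    print(df.shape)               # expected: (13320, 9)
    print(df.columns.tolist())    # area_type, availability, location, size, ...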
2. Data Cleaning: Most of the time, the data we collect is noisy. It may have empty fields, incorrect data, and outliers. This kind of data can negatively affect the accuracy of the model's predictions, so it is essential to remove it. The first step is to check the dataset for missing fields; we dropped all rows having empty fields. The second step is to check for incorrect data. The location column in the dataset had multiple entries for the same location with different spellings, which must be corrected because the model would otherwise treat them as distinct locations, affecting its predictions. The dataset has 13,320 entries with 9 parameters: area type, availability, location, size, society, total square feet, bathrooms, and balcony. The area type column takes values such as super built-up area, plot area, built-up area, and carpet area; this information is not needed, so unnecessary columns such as area type, society, balcony, and availability were dropped. Some columns have null values, which are misleading; since their number is small compared to the total entries, those rows were dropped, finally giving a dataset with no null values.
Fig 2 Cleaned dataset without null values
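A sketch of this cleaning step (column names assumed to match the Kaggle file) could look like this:

    # Drop columns that are not needed for price prediction
    df = df.drop(columns=["area_type", "society", "balcony", "availability"])

    # Drop the small number of rows with missing values
    df = df.dropna()
    print(df.isnull().sum())   # every column should now show 0 missing values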
3. Feature Engineering: In the dataset, the size column expresses the same information either as "BHK" or as "Bedroom" (for example "2 BHK" and "4 Bedroom"), so we create a new numeric column, bhk, from it; later we can drop the original size column. Some houses report an unusually large number of rooms, such as 43, which is inappropriate, so we limit bhk to 20. In the total_sqft column some values are ranges such as 1133 - 1384 or 3067 - 8156, and some carry units, such as 34.46Sq. Meter. For values given as a range we take the average of the two endpoints; values with units attached are dropped. A new column, price per square feet, is needed to estimate the land price in a region. Some locations have as many as 533 data points while others have only 1; we mark the locations that have 10 or fewer data points as "other". In total, the location column has 1287 unique values.
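The sketch below (pandas assumed, column names as in the Kaggle file) shows one way these steps could be implemented:

    # Extract the numeric BHK value from strings such as "2 BHK" or "4 Bedroom"
    df["bhk"] = df["size"].apply(lambda s: int(s.split(" ")[0]))
    df = df[df["bhk"] <= 20]                     # discard implausible entries such as 43 rooms

    def to_sqft(value):
        """Average range values like "1133 - 1384"; drop unit-bearing values."""
        tokens = str(value).split("-")
        if len(tokens) == 2:
            return (float(tokens[0]) + float(tokens[1])) / 2
        try:
            return float(value)
        except ValueError:                       # e.g. "34.46Sq. Meter"
            return None

    df["total_sqft"] = df["total_sqft"].apply(to_sqft)
    df = df.dropna(subset=["total_sqft"])

    # Price is quoted in lakhs, so scale it to get price per square foot
    df["price_per_sqft"] = df["price"] * 100000 / df["total_sqft"]

    # Group rare locations (10 or fewer listings) under "other"
    df["location"] = df["location"].str.strip()
    counts = df["location"].value_counts()
    rare = counts[counts <= 10].index
    df["location"] = df["location"].apply(lambda loc: "other" if loc in rare else loc)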
4. Outlier Removal: Outliers are unusual values in the dataset; they can distort statistical analyses and violate their assumptions. Unfortunately, every analyst will confront outliers and be forced to decide what to do with them, and given the problems they can cause it is usually best to remove them. For total square feet we assume that 1 BHK corresponds to at least 300 square feet; rows that do not satisfy this condition are removed, after which 12,456 data entries remain. For price per square feet, the minimum value is only 267 per sqft, which is very unusual for a metro city like Bengaluru, and the maximum of 176,470 per sqft is equally unusual. We remove these kinds of extreme values using the mean and standard deviation. After removing them, the dataset has 10,242 rows.
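A sketch of these two filters (pandas assumed; computing the mean and standard deviation per location rather than globally is an assumption here):

    # Keep only rows with at least ~300 sqft per bedroom
    df = df[df["total_sqft"] / df["bhk"] >= 300]

    # Remove price_per_sqft values more than one standard deviation away from
    # the location-wise mean (this drops extremes such as 267 or 176470 per sqft)
    def remove_pps_outliers(frame):
        kept = []
        for _, group in frame.groupby("location"):
            mean = group["price_per_sqft"].mean()
            std = group["price_per_sqft"].std()
            kept.append(group[(group["price_per_sqft"] > mean - std) &
                              (group["price_per_sqft"] <= mean + std)])
        return pd.concat(kept, ignore_index=True)

    df = remove_pps_outliers(df)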
For BHK outliers, we can see that the prices of some 2 BHK and 3 BHK houses are very unusual: a 2 BHK and a 3 BHK house in the same location can differ greatly in price. We handle this by comparing the price per square feet of houses with different BHK counts in the same location. From the figure below we can see, for example, that at around 1700 total square feet some 2 BHK houses cost more than 3 BHK houses; we remove this abnormality. In other words, we remove properties where, for the same location and square footage, a 3-bedroom apartment is priced below a 2-bedroom apartment. For bathrooms, most houses in the dataset have 2, 4, or 5 bathrooms, but some have 13 or even 16; such entries would mislead the model, so we remove rows with more than 10 bathrooms. A 2 BHK house with 5 bathrooms would also be very unusual for Bengaluru, so we additionally assume that a 1 BHK house can have at most 3 bathrooms, and apply the same margin to larger houses. Finally, the dataset has a size column, but we already have the bhk column, and the price_per_sqft column was created only to remove outliers; since neither is needed any more, we drop both.
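The sketch below (pandas assumed; the exact statistical rule is an interpretation of the description above) removes the BHK and bathroom outliers and then drops the helper columns:

    # Remove n-BHK listings priced below the mean price_per_sqft of the
    # (n-1)-BHK listings in the same location
    def remove_bhk_outliers(frame):
        drop_index = []
        for _, loc_df in frame.groupby("location"):
            stats = {bhk: (grp["price_per_sqft"].mean(), len(grp))
                     for bhk, grp in loc_df.groupby("bhk")}
            for bhk, grp in loc_df.groupby("bhk"):
                smaller = stats.get(bhk - 1)
                if smaller and smaller[1] > 5:
                    drop_index.extend(
                        grp[grp["price_per_sqft"] < smaller[0]].index.tolist())
        return frame.drop(index=drop_index)

    df = remove_bhk_outliers(df)

    # Bathroom sanity checks: no more than 10 bathrooms, at most bhk + 2
    df = df[(df["bath"] <= 10) & (df["bath"] <= df["bhk"] + 2)]

    # size and price_per_sqft have served their purpose, so drop them
    df = df.drop(columns=["size", "price_per_sqft"])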
Fig 6 Statistical summary of price per square feet
Fig 7 Price vs. total square feet area in Rajajinagar (price in lakhs)
Fig 8 Price vs. total square feet area for different BHK values
Fig 9 Normal distribution of price per square feet
Cloud Computing
Cloud computing has its roots in grid computing and utility computing. As internet data-transfer speeds increased, it became clear that these computing capacities could be provisioned at a larger scale over the internet, and IT companies saw promise and opportunity in this internet-enabled compute utility model. Cloud computing has become a revolutionary concept in IT and telecom, reshaping business models, service offerings, and hardware/software provisioning, and thereby unleashing new revenue-generating services. It is now one of the most widely used terms in the industry among companies, developers, and end users. For mobile applications, mobile clouds are being developed, and to cater for security, private clouds are being developed. In the telecom sector, the cloud can be used for telecom application delivery, and we are witnessing a shift of telecom service providers and vendors to the cloud. We have researched and analyzed these trends, telecom sector concerns, cloud options for specific telecom products, and ways to deal with the migration problems.
Cloud computing is a general term for anything that involves delivering hosted services over the
internet. These services are divided into three main categories or types of cloud computing:
infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS).
After Cloud Computing
• Rapid expansion
• Multi-Tenancy
Lack of Scalability: As we have already seen, traditional networks have problems with scaling capacity in and out, including handling the uncontrolled and unpredictable traffic explosions that are usually seen during matches, events, etc.
Lack of Adaptation: In the telecom domain, a lot of inter-op testing is required between two nodes to make them operational on traditional hardware, and there is always a challenge with new technologies such as 5G and IoT. There are also issues when new functionality, such as a new link or a new interface, is required. Any new business model demands considerable adaptation, which usually gets delayed with traditional hardware.
Lack of Flexibility: Traffic cannot be balanced between two nodes. Typically, voice traffic peaks at around 19:00 hrs while data traffic peaks at around 23:00 hrs, yet in traditional networks we cannot shuffle resources between the circuit-switched network and the packet core network. In a cloud we can, which not only improves user experience but also saves CAPEX investment. Traditional hardware also requires a high amount of OPEX for AMC and for keeping spares available; this cost can be saved by migrating to a cloud-native network where the overall TCO is lower.
Lack of Speed: Traditional networks are slow to change and innovate. One feature may be available from one vendor but not from another; with cloud-native networks, we can easily switch vendors to avoid vendor lock-in and get the benefits. Traditional networks also suffer from:
• Low ARPU
• Low revenue
• Multi-vendor complexity
Building Blocks of NFV: The network functions virtualization infrastructure (NFVI) is the layer responsible for the hardware; it hosts all the storage, compute, and network hardware and abstracts it as virtual resources for consumption by virtual machines. Generic COTS hardware, servers, or blades are deployed in the NFVI layer, and to achieve the massive scale required by telecom network providers these blades or servers can be deployed in bulk. Virtualized network functions (VNFs) form the layer where the actual application network functions run as software; one single VNF can be deployed over multiple virtual machines. On the right-hand side of the architecture sits the NFV-MANO layer, the network functions virtualization management and orchestration framework that manages and controls the entire system.
Telecom applications such as vSMSC, vMSC, vHLR, vSGSN, or vGGSN are deployed as software modules in VNFs; the VNF layer hosts these telecom nodes as virtual applications. Resources can be allocated to these virtual nodes according to their requirements, for example a vHLR needs more storage while a vMSC requires more compute. Each individual virtual network node can have reserved compute, storage, and network that is used by that application only.
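As a purely hypothetical illustration of such per-VNF reservations (the node names come from the text above, but the numbers and field names are invented for the example):

    # Illustrative per-VNF resource reservations: a vHLR is storage-heavy,
    # a vMSC is compute-heavy (numbers are made up for the example)
    vnf_profiles = {
        "vHLR":  {"vcpu": 16, "ram_gb": 64,  "storage_gb": 2000, "vnics": 4},
        "vMSC":  {"vcpu": 64, "ram_gb": 128, "storage_gb": 200,  "vnics": 8},
        "vSMSC": {"vcpu": 8,  "ram_gb": 32,  "storage_gb": 100,  "vnics": 2},
    }

    def fits(profile, free):
        """Check whether the NFVI still has enough free resources for this VNF."""
        return all(profile[key] <= free.get(key, 0) for key in profile)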
Virtual Network Function (VNF): A VNF is a network function implemented as software that is decoupled from the underlying hardware. These virtualized network functions run inside virtual machines (VMs). One single VNF can be deployed over multiple virtual machines; for example, a vSMSC may run on 2 VMs, a vMSC on 1 VM, and a vHLR on 3 VMs.
The VNF Manager is responsible for FCAPS and management of the VNFs, such as setting up, monitoring, and logging all kinds of faults, configuring the network element, and collecting performance data. The VNF Manager manages the life cycle of VNFs, which includes setting up or creating, maintaining, and tearing down a VNF, and the overall FCAPS of the virtualization layer and the VNFs.
Element Management (EM): The FCAPS and O&M of applications such as vMSC, vSMSC, or vHLR is done by the EM, shown on top of the VNF. EM stands for Element Management; it performs FCAPS of the application, for example handling an MSC link down or MSC KPI degradation.
NFV ORCHESTRATOR (NFVO)
Orchestration is the topmost layer and is key to any kind of automation expected from SDN and NFV. It is the part of the NFV framework known as NFV MANO (Management and Orchestration), and is also called the NFVO or NFV orchestrator.
The NFVO also performs resource orchestration, ensuring that adequate compute, storage, and network resources are available to provide a network service. For this, the NFVO works with the VIM or directly with NFV infrastructure (NFVI) resources, depending on the requirements. It has the ability to coordinate, authorize, release, and engage NFVI resources independently of any specific VIM, and it provides governance of VNF instances sharing resources of the NFVI.
Network connectivity: For any new node or new service, several things are needed, such as IP allocation, bandwidth allocation, policy opening, and routing changes, to achieve end-to-end reachability and proceed with service testing. None of this is automated, and it takes a lot of time to prepare the design, perform changes on every router and switch, and push them through. In typical scenarios it may take a few days or weeks to finish the IP routing and enable end-to-end reachability of all the required links. SDN helps here by making this routing and switching network flexible and programmable.
Data plane: In a traditional network, multiple switches and routers, with links connected to each other via line cards, carry the traffic. This is where data is actually moved from one device to another; for example, traffic travels all the way from Router A to Router B to Router C to Router D.
Control plane: In a traditional network, the role of the control plane is to take routing decisions. Every router has its own brain to decide the best path for routing traffic; Router A decides to route traffic to Router B on the basis of its own local view. Since Router A does not know what is happening at subsequent routers such as C or D, this becomes a point of concern when there is a problem in the network, such as a link failure or congestion. Someone has to keep an end-to-end eye on the complete network and take holistic decisions.
Management plane: In a traditional network, this is used for the operation and maintenance of the network, for example to fetch reports, perform configurations, and receive alerts and alarms.
In an SDN-powered network, the controller, the brain of the network, is separated from the data or forwarding plane. SDN dramatically changes the way we design, manage, and run our networks.
Makes Networking & IP Routing flexible: SDN ultimately enable packets or traffic to reach its
destination, It does same with help of software & dynamic algorithms with full flexibility &
Agility. Instead of wasting many days in performing manual routing for enabling reachability,
SDN does this in much better way that requires far less time
Decoupling control & Data plane: In traditional network, Where Both Brain & Data forwarding
layer sits on same router, here we can see centralized controller whereas Data forwarding plane
still resides on Router. Here, centralized Controller decides traffic Routing, Data Plane only used
for forwarding Payload to destination. We call this as De-Coupling of control & Data plane
Offloads the brain to a centralized controller with a central view of resources: SDN provides a central view for more efficient resource allocation and running of network services, and it facilitates centralized monitoring of the entire network. The control plane takes decisions considering the end-to-end topology: while routing traffic from Router A, it does consider what is happening at Router D. If there is an outage, link congestion, or degradation at Router D, all the routers are told to route traffic via Router X.
Programmable network, centrally managed, agile for any need: A centralized control plane means the network control becomes directly programmable and the underlying infrastructure is abstracted for applications and network services. SDN makes networks programmable so that operators can support multiple applications such as dynamic provisioning of bandwidth, automatic scale-out and scale-in, and building protection paths. Upper-layer applications can flexibly drive the controller, for example by exposing the controller directly to the user on a web portal where the end user can provision or de-provision bandwidth himself. The true power of SDN is abstraction: the whole logic flow is so automated that network applications can make requests to the SDN controller, which in turn adjusts the network resources and changes the configuration. All of this happens within a few seconds.
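A minimal sketch of this idea, assuming the networkx library and reusing the Router A-D example from above: the controller holds the full topology, computes end-to-end paths, and re-routes via Router X when a link towards Router D degrades.

    import networkx as nx

    # Global topology as seen by the SDN controller
    topo = nx.Graph()
    topo.add_edge("A", "B", weight=1)
    topo.add_edge("B", "C", weight=1)
    topo.add_edge("C", "D", weight=1)
    topo.add_edge("A", "X", weight=2)
    topo.add_edge("X", "D", weight=2)

    print(nx.shortest_path(topo, "A", "D", weight="weight"))   # ['A', 'B', 'C', 'D']

    # The link C-D degrades: the controller raises its cost and re-routes via X
    topo["C"]["D"]["weight"] = 100
    print(nx.shortest_path(topo, "A", "D", weight="weight"))   # ['A', 'X', 'D']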
Virtual Machine
There are two ways to implement virtualization: the first is traditional hypervisor-based virtual machines, and the second, more advanced one, uses containers.
If you are a techie or work in the IT, telecom, or network domain, it becomes necessary to understand this virtualization technology, as everyone from large-scale mobile operators to giants like Google and Amazon is building services to support it. Here we cover the high-level concepts of hypervisor-based virtual machines and containers, which are easy to remember.
Containers are based on a more advanced technology and provide an effective alternative to virtual machines running on hypervisors. Here we cover another aspect of virtualization, going deeper into what a hypervisor-based virtual machine is and into the newer technologies in this field, namely Docker and containers.
VMs (virtual machines) and containers do the same thing, i.e. virtualization; their goals are the same. Both enable applications to use virtual resources such as virtual hardware, virtual compute, and virtual network in a much more scalable and flexible way. With virtualization there is no need for dedicated physical hardware, allowing more efficient use of computing resources in terms of power, space, infrastructure, and cost. The main difference between containers and VMs lies in their architectural approach.
Virtualization is the act of migrating physical systems into a virtual environment; for example, App1 running on the traditional network on the left-hand side gets virtualized. In other words, virtualization is the creation of a virtual resource by abstracting the actual hardware resources. Virtualized computing resources allow one server to be used for multiple apps by creating isolated environments for the individual virtualized applications. The resources are abstracted as virtual compute, virtual storage, and virtual network (abstraction here means a software emulation of that hardware). The apps interact only with the guest OS, which uses these virtual resources instead of the physical hardware.
With the help of virtualization, we can instantly access nearly limitless computing resources, which allows a faster and more flexible roll-out of business requirements. Virtualization also improves overall application performance thanks to better and faster scalability and better utilization and availability of resources.
HYPERVISOR
The hypervisor is the building block for hosting applications and virtual machines. Its role is to take resources from physical computers; these resources are then emulated, abstracted as software, and provided to the various VMs in the form of virtual compute, virtual storage, and virtual network. One VM cannot consume resources allocated to another VM: the hypervisor isolates and protects the VMs from each other, so if one VM exhausts all its resources it is not allowed to simply eat up another VM's resources.
The hypervisor runs on the host machine using the host operating system and physical resources. It hosts multiple VMs, also called guest machines. A guest machine or VM contains a guest operating system such as macOS, Windows, or Linux (note that the guest can differ from the host operating system on which the hypervisor runs), the application, and the supporting system binaries and libraries.
Light weight: Since there is no guest operating system in a container, the host operating system itself is used and shared by all containers, and only a few additional libraries are required for a container to work. This makes containers very lightweight and fairly small in size compared to virtual machines; we have tried to show this with the example of an elephant versus a mouse, where the hypervisor-based virtual machine is the elephant and the smaller container is the mouse.
Lightning fast: Another key benefit of containers is speed. Containers are at least 9x faster than virtual machines; they are created and destroyed in a few seconds, since no additional operating system needs to boot. This speed is helpful in situations where things need to happen very fast. In the example of the tortoise and the rabbit, the hypervisor is the slow tortoise and the container is the fast rabbit.
Virtualization mode: A VM virtualizes the hardware, while a container virtualizes the operating system instead of the hardware. This makes containers much more portable and efficient; since there is no OS-level dependency in a container, integration is much easier. Containers are an abstraction at the app layer that packages code and dependencies together.
Portability: With the help of Docker, containers are powerful and highly portable. Coders and developers can build and test an application in the lab, and the same code can then be ported anywhere, such as a public or private cloud, with no extra dependencies required to run the application in any environment. The container is a world in itself and ensures seamless migration of code to any platform; we also describe containers as "build once, run anywhere". To reiterate, containers are a complete package: they contain the application, its libraries, and any other dependencies as one package, and this package will run without issues on any infrastructure. This portability can never be achieved with a hypervisor because of the guest OS dependency.
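A minimal sketch using the Docker SDK for Python (the docker package), assuming Docker is installed locally; the image and command are only illustrative. The same image would run unchanged on a laptop, a private cloud, or a public cloud.

    import docker

    client = docker.from_env()

    # The image is the complete package: application, libraries, and dependencies.
    output = client.containers.run(
        "python:3.11-slim",                                    # illustrative image
        ["python", "-c", "print('hello from a container')"],
        remove=True,                                           # destroyed once it exits
    )
    print(output.decode())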
Modularity and scalability: Docker is used to scale capacities in and out on a modular basis; the entire process is highly automated.
ADVANTAGES
Telecommunications perspective
• Cloud Delivery Model
• Communication Services
• Network Services
Service Provider's perspective
• Reduction in cost
• Highly scalable and flexible infrastructure
• Efficient and flexible resource allocation and management
Now, let's assume there is a cricket, soccer, or football match tonight. As a consequence, more and more people start watching the match online and the traffic starts increasing rapidly. With more users coming online, the utilization of the GGSN shoots up; it keeps rising and gradually touches 95%. Users are still able to watch the match because some headroom is left for additional traffic. Once utilization touches 100%, traffic cannot scale any further and users start to see buffering: the red dots on the GGSN utilization chart mark where user experience suffers due to the capping of traffic by the GGSN. Users can no longer watch the match because of the extremely high utilization of the network.
With the help of NFV we create new capacity, and with the help of SDN we create the highways and pathways that connect users to this new node so that it can offload some of the traffic. This shows the power of a cloud network to handle traffic more effectively and flexibly to meet any dynamic user requirement.
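A hypothetical sketch of the scale-out decision described above; the threshold and the get_utilization / scale_out hooks are assumptions standing in for the operator's monitoring and NFVO interfaces, not a real API.

    import time

    SCALE_OUT_THRESHOLD = 90.0   # percent utilization that triggers new capacity

    def autoscale_loop(get_utilization, scale_out, poll_seconds=30):
        """Poll GGSN utilization and add capacity before it reaches 100%."""
        while True:
            if get_utilization() >= SCALE_OUT_THRESHOLD:
                scale_out()   # NFV spins up a new virtual GGSN instance;
                              # SDN then programs the paths towards it
            time.sleep(poll_seconds)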
CONCLUSION
• Cloudification or virtualization is the journey of migrating purpose-built hardware running in switch rooms to virtualized nodes running on generic hardware deployed in data centers.
• Telecom trends in cloud computing suggest that telcos have to become agile, in terms of business and technical cloud IT, in providing cloud-based services.
• Telcos' shift to cloud computing would generate new business models, processes, cost improvements, and service dynamism, and will enable end-to-end service management.
• In the future three transitions are needed: from switch rooms to data centers, from purpose-built hardware such as an HLR by Nokia to virtualized network functions, and from physical devices in a rack to virtual devices in a cloud network.