FM 8

Cloud Computing, Business Intelligence, Artificial Intelligence, Robotic Process Automation and Machine Learning
Cloud computing
Simply put, cloud computing is the delivery of a variety of services over
the Internet, or "the cloud." It involves storing and accessing data on remote servers
rather than on local hard drives and private data centres.
Before the advent of cloud computing, businesses had to acquire and operate their
own servers to meet their needs. This necessitated purchasing enough server
capacity to minimise the risk of downtime and disruption and to handle peak traffic
volumes. Consequently, large amounts of server space sat unused most of the time.
Today's cloud service providers enable businesses to reduce their
reliance on costly on-site servers, maintenance staff, and other IT resources.

Types of cloud computing


There are three deployment options for cloud computing: private cloud, public
cloud, and hybrid cloud.
Private cloud:
A private cloud provides a cloud environment that is exclusive to a single corporate
organisation, with physical components housed on-premises or in a vendor's
datacentre. This model offers a high level of control, because the private
cloud is accessible to only one enterprise.
In many instances, a business maintains a private cloud infrastructure on-premises
and provides cloud computing services to internal users over the intranet. In other
cases, the company engages with a third-party cloud service provider to host and
operate its servers off-site.
Public cloud:
The public cloud stores and manages access to data and applications over the
internet. The public cloud deployment model enables enterprises to scale with greater
ease; the option to pay for cloud services on an as-needed basis is a significant
benefit over local servers.
Hybrid cloud:
The hybrid cloud architecture enables businesses to store sensitive data on-premises
and access it through apps hosted in the public cloud. In order to comply with
privacy rules, an organisation may, for instance, keep sensitive user data in a private
cloud and execute resource-intensive computations in a public cloud.

Business Intelligence:
Business intelligence includes business analytics, data mining, data visualisation, data
tools and infrastructure, and best practices to assist businesses in making choices
that are more data-driven. Modern BI systems promote adaptable self-service
analysis, controlled data on dependable platforms, empowered business users, and
rapid insight delivery.

BI Methods:
Business intelligence is a broad term that encompasses the procedures and
methods of collecting, storing, and analysing data from business operations or
activities in order to optimise performance. All of these factors combine to provide
a full view of a business, enabling people to make better, proactive decisions.
In recent years, business intelligence has expanded to incorporate more procedures
and activities designed to enhance performance. These procedures consist of:
(i) Data mining: Large datasets may be mined for patterns using databases,
analytics, and machine learning (ML).
(ii) Reporting: The dissemination of data analysis to stakeholders in order for them
to form conclusions and make decisions.
(iii) Performance metrics and benchmarking: Comparing current performance data
to previous performance data in order to measure performance versus objectives,
generally utilising customised dashboards.
(iv) Descriptive analytics: Using basic data analysis to determine what has happened.
(v) Querying: BI extracts answers from data sets in response to data-specific queries.
(vi) Statistical analysis: Taking the results of descriptive analytics and using statistics
to explore the data further, such as how and why a pattern occurred.
(vii) Data Visualization: Data consumption is facilitated by transforming data
analysis into visual representations such as charts, graphs, and histograms.
(viii) Visual Analysis: Exploring data using visual storytelling to share findings in
real-time and maintain the flow of analysis.
(ix) Data Preparation: Compiling multiple data sources, identifying dimensions and
measures, and preparing the data for analysis.
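As a minimal illustration of descriptive analytics and querying from the list above, the following sketch uses only Python's standard library; the monthly sales figures are invented for illustration:

```python
import statistics

# Hypothetical monthly sales figures (invented for illustration)
monthly_sales = {
    "Jan": 120, "Feb": 135, "Mar": 128,
    "Apr": 160, "May": 155, "Jun": 170,
}

# Descriptive analytics: summarise what has happened
values = list(monthly_sales.values())
summary = {
    "total": sum(values),
    "mean": statistics.mean(values),
    "stdev": statistics.stdev(values),
}

# Querying: answer a data-specific question --
# "In which months did sales exceed the mean?"
above_average = [m for m, v in monthly_sales.items() if v > summary["mean"]]

print(summary)
print(above_average)
```

In a real BI system these steps would run against a governed data warehouse and feed a dashboard; the structure (summarise, then query) is the same.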

Artificial Intelligence (AI)


John McCarthy of Stanford University defined artificial intelligence as, “It is the
science and engineering of making intelligent machines, especially intelligent
computer programs. It is related to the similar task of using computers to
understand human intelligence, but AI does not have to confine itself to
methods that are biologically observable.”
However, decades prior to this description, Alan Turing’s landmark paper
“Computing Machinery and Intelligence” marked the genesis of the artificial
intelligence discourse. Turing, commonly referred to as the “father of computer
science," poses the question "Can machines think?" in this paper. From there, he
proposes the now famous "Turing Test," in which a human interrogator attempts to
distinguish between a machine's and a human's written responses.
Stuart Russell and Peter Norvig then published ‘Artificial Intelligence: A Modern
Approach’, which has since become one of the most influential AI textbooks. In it,
they discuss four alternative aims or definitions of artificial intelligence, which
distinguish computer systems based on reasoning and thinking vs. acting:
Human approach:
● Systems that think like humans
● Systems that act like humans
Ideal approach:
● Systems that think rationally
● Systems that act rationally
According to Lex Fridman's 2019 MIT lecture, we are at the peak of inflated
expectations and nearing the trough of disillusionment (in the terms of the Gartner hype cycle).

Types of Artificial Intelligence – Weak AI vs. Strong AI


Weak AI, also known as Narrow AI or Artificial Narrow Intelligence (ANI), is AI
that has been trained and honed to do particular tasks. Most of the AI that
surrounds us today is powered by weak AI. This form of artificial intelligence is
anything but feeble; it allows sophisticated applications such as Apple’s Siri,
Amazon’s Alexa, IBM Watson, and driverless cars, among others.
Strong AI comprises Artificial General Intelligence (AGI) and Artificial
Superintelligence (ASI). Artificial General Intelligence (AGI), sometimes known as
general AI, is a hypothetical form of artificial intelligence in
which a machine possesses human-level intellect, a self-aware consciousness,
and the ability to solve problems, learn, and plan for the future.
Superintelligence, also known as Artificial Super Intelligence (ASI), would transcend
the intelligence and capabilities of the human brain. Although strong AI
is still entirely theoretical and has no practical applications today, this does not stop AI
researchers from studying its development.
Deep Learning vs. Machine Learning


Given that deep learning and machine learning are frequently used
interchangeably, it is important to note the distinctions between the two. As stated
previously, both deep learning and machine learning are subfields of artificial
intelligence; however, deep learning is itself a subfield of machine learning.
Deep learning automates a significant portion of the feature extraction step,
reducing the need for manual human involvement and enabling the usage of
bigger data sets. Deep learning may be thought of as “scalable machine learning,”
as Lex Fridman stated in the aforementioned MIT lecture. Classical, or "non-deep,"
machine learning depends more on human intervention to learn: human
specialists construct the hierarchy of features needed to distinguish between
data inputs, which usually requires more structured data.

Robotic Process Automation:


With RPA, software users develop software robots or “bots” that are capable of
learning, simulating, and executing rules-based business processes.
RPA enables users to build bots by observing human digital actions.
Give your bots instructions, then let them complete the task.
Robotic Process Automation software bots can interact with any application or
system in the same way humans can, except that RPA bots can operate
continuously, around the clock, with high accuracy and dependability.
Robotic Process Automation bots possess a digital skill set that
exceeds that of humans.
Bots may copy-paste, scrape site data, do computations, access and transfer files,
analyse emails, log into programmes, connect to APIs, and extract unstructured data,
among other tasks. Due to the adaptability of bots to any interface or workflow,
there is no need to modify existing corporate systems, apps, or processes in order to
automate.
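A minimal sketch of the kind of rules-based business process an RPA bot executes, in pure Python; the invoice records and the approval-limit rule are invented for illustration:

```python
# Hypothetical invoice records, as a bot might scrape them from a form
invoices = [
    {"id": "INV-001", "amount": 450.00, "vendor": "Acme"},
    {"id": "INV-002", "amount": 12500.00, "vendor": "Globex"},
    {"id": "INV-003", "amount": 980.50, "vendor": "Initech"},
]

APPROVAL_LIMIT = 1000.00  # invented business rule


def route_invoice(invoice):
    """Apply a fixed business rule, as a rules-based bot would."""
    if invoice["amount"] <= APPROVAL_LIMIT:
        return "auto-approve"
    return "escalate to manager"


decisions = {inv["id"]: route_invoice(inv) for inv in invoices}
print(decisions)
```

A production RPA bot would gather these records from an application's interface or an API rather than a hard-coded list, but the rules-based routing logic is of the same shape.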
Benefits of RPA
(i) Higher productivity
(ii) Higher accuracy
(iii) Saving of cost
(iv) Integration across platforms
(v) Better customer experience
(vi) Harnessing AI
(vii) Scalability

Machine learning
Machine learning (ML) is a field of study devoted to understanding and
developing systems that "learn", that is, methods that use data to improve performance on a
set of tasks. Machine learning techniques are used in applications such as medicine,
email filtering, speech recognition, and computer vision, where it is difficult or
impractical to develop conventional algorithms to perform the required tasks.
Programs capable of machine learning can complete tasks without
being explicitly programmed to do so. This involves computers learning from available
data in order to carry out certain tasks. For simple tasks assigned to computers, it is feasible
to write algorithms that instruct the machine how to execute all the steps required
to solve the problem at hand; no learning is needed on the part of the computer.
For complex tasks, it can be difficult for a human to create the necessary
algorithms manually. In practice, it can be more effective to help the computer develop
its own algorithm rather than having human programmers specify every step.
The field of machine learning employs a variety of methods to teach computers to
accomplish tasks for which no fully satisfactory algorithm is available. Where a vast
number of potential answers exist, one approach is to label some of the correct
answers as valid. This data can then be used to train the computer's algorithm(s)
to determine correct answers.

Approaches to machine learning
On the basis of the type of "signal" or "feedback" available to the learning system,
machine learning approaches are traditionally divided into five broad categories:
(i) Supervised learning
Supervised learning algorithms construct a mathematical model of a data set that
includes both the inputs and expected outcomes. The data consists of a collection of
training examples and is known as training data. Each training example consists of
one or more inputs and the expected output, sometimes referred to as a supervisory
signal. Each training example in the mathematical model is represented by an array
or vector, sometimes known as a feature vector, and the training data is represented
by a matrix. Through iterative optimisation of an objective function, supervised learning
algorithms learn a function that can be used to predict the output associated
with new inputs. An optimal function allows the algorithm to determine the
correct output for inputs that were not part of the training data. An algorithm
that improves the accuracy of its outputs or predictions over time is said to have
"learned" to perform the task. Active learning, classification, and regression are
examples of supervised learning algorithms.
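A minimal sketch of supervised learning as described above — iteratively optimising an objective function (mean squared error) over labelled training examples, in pure Python. The tiny data set and the learning rate are invented for illustration:

```python
# Training data: inputs x with expected outputs y (the "supervisory signal").
# Invented toy data following y = 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]

# Model: y_hat = w * x + b. Objective: mean squared error.
w, b = 0.0, 0.0
lr = 0.02  # learning rate (chosen by hand for this toy problem)

for _ in range(5000):
    n = len(xs)
    # Gradients of the mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    w -= lr * grad_w
    b -= lr * grad_b

# Prediction for an input that was not in the training data
prediction = w * 5.0 + b
print(round(w, 3), round(b, 3), round(prediction, 3))
```

The loop recovers weights close to w = 2, b = 1, so the learned function also gives a sensible answer for the unseen input x = 5 — the generalisation property the paragraph above describes.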
(ii) Unsupervised learning
Unsupervised learning approaches utilise a dataset comprising just inputs to identify
data structure, such as grouping and clustering. The algorithms therefore learn from
test data that has not been labelled, classified, or categorised. Unsupervised learning
algorithms identify similarities in the data and respond based on the presence or
absence of such similarities in each new data set.
In statistics, density estimation, such as finding the probability density function, is
a central application of unsupervised learning, although unsupervised learning
encompasses other domains as well, such as summarising and explaining data
features.
Cluster analysis is the assignment of a set of observations into subsets (called clusters)
so that observations within the same cluster are similar according to one or more
predesignated criteria, while observations drawn from different clusters are dissimilar.
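A minimal sketch of cluster analysis in the sense above — a plain k-means loop in pure Python. The one-dimensional points and the choice of two clusters are invented for illustration:

```python
# Invented 1-D data with two visible groups
points = [1.0, 1.2, 0.8, 8.0, 8.4, 7.6]

# Start with two guessed cluster centres
centres = [0.0, 10.0]

for _ in range(10):  # a few iterations suffice on this toy data
    # Assignment step: attach each point to its nearest centre
    clusters = [[], []]
    for p in points:
        nearest = min(range(2), key=lambda i: abs(p - centres[i]))
        clusters[nearest].append(p)
    # Update step: move each centre to the mean of its cluster
    centres = [sum(c) / len(c) for c in clusters]

print(sorted(round(c, 2) for c in centres))
```

No labels are supplied anywhere: the algorithm discovers the two groups purely from similarity in the inputs, which is exactly the unsupervised setting.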
(iii) Semi-supervised learning
Semi-supervised learning is intermediate between unsupervised learning (without
labelled training data) and supervised learning (with completely labelled training
data). Many machine-learning researchers have discovered that when unlabelled
data is combined with a tiny quantity of labelled data, there is a significant gain in
learning accuracy.
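A minimal sketch of the semi-supervised idea — a few labelled points plus pseudo-labelling of a larger unlabelled pool via a simple nearest-centroid rule (one round of self-training). The data, labels, and the centroid rule are invented for illustration:

```python
# A small amount of labelled data (invented)
labelled = [(1.0, "low"), (1.4, "low"), (9.0, "high"), (8.6, "high")]
# A larger pool of unlabelled data
unlabelled = [0.8, 1.2, 8.2, 9.4, 1.1, 8.8]


def centroid(label):
    """Mean of the points currently carrying this label."""
    vals = [x for x, y in labelled if y == label]
    return sum(vals) / len(vals)


# Pseudo-label each unlabelled point with the nearest class centroid,
# then add it to the labelled pool (one round of self-training)
for x in unlabelled:
    label = min(("low", "high"), key=lambda c: abs(x - centroid(c)))
    labelled.append((x, label))

print(sum(1 for _, y in labelled if y == "low"))
```

The four hand-labelled examples are enough to bootstrap labels for the remaining six points, which is the gain in effective training data that the paragraph above refers to.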
(iv) Reinforcement learning
Reinforcement learning is a subfield of machine learning concerned with how
software agents ought to take actions in an environment so as to
maximise some notion of cumulative reward. Owing to its generality, the field
is studied in many other disciplines, such as game theory, control theory,
operations research, information theory, simulation-based optimisation, multi-agent
systems, swarm intelligence, statistics, and genetic algorithms.
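A minimal sketch of the reinforcement learning loop — tabular Q-learning on a five-state corridor where only reaching the rightmost state yields a reward. The environment, reward, and hyperparameters are all invented for illustration:

```python
import random

random.seed(0)

N_STATES = 5          # states 0..4; reaching state 4 gives reward 1
ACTIONS = [-1, +1]    # move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2  # invented hyperparameters

# Q-table: estimated cumulative reward for each (state, action)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(500):               # episodes
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0
        # Update toward reward plus discounted best future value
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])
        s = s2

# The learned policy should prefer moving right in every non-terminal state
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

The agent is never told the "right" answer for any state; it discovers the move-right policy purely from the delayed reward signal, which is what distinguishes this setting from supervised learning.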
(v) Dimensionality reduction
Dimensionality reduction is the process of acquiring a set of major variables in order
to reduce the number of random variables under consideration.
In other words, it is the process of reducing the size of the feature set, also
referred to as the "number of features." Most dimensionality reduction
techniques can be categorised as either feature elimination or feature extraction.
Principal component analysis (PCA) is a well-known technique for dimensionality
reduction. PCA involves projecting data with more dimensions (e.g., 3D) onto a
smaller space (e.g., 2D). This reduces the dimension of the data (2D instead of 3D)
while preserving as much of the original variance as possible.
Numerous dimensionality reduction strategies assume that high-dimensional data
sets reside along low-dimensional manifolds, leading to the fields of manifold
learning and manifold regularisation.
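A minimal sketch of PCA in the 2D-to-1D case, using the closed-form eigenvector of a 2x2 covariance matrix (pure Python; the data points are invented for illustration):

```python
import math

# Invented 2-D points lying roughly along the line y = x
data = [(1.0, 1.1), (2.0, 1.9), (3.0, 3.2), (4.0, 3.9), (5.0, 5.0)]

# Centre the data
n = len(data)
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n
centred = [(x - mx, y - my) for x, y in data]

# Entries of the 2x2 covariance matrix
cxx = sum(x * x for x, _ in centred) / n
cyy = sum(y * y for _, y in centred) / n
cxy = sum(x * y for x, y in centred) / n

# Leading eigenvector of [[cxx, cxy], [cxy, cyy]] (closed form for 2x2)
lam = (cxx + cyy + math.hypot(cxx - cyy, 2 * cxy)) / 2  # largest eigenvalue
vx, vy = cxy, lam - cxx              # an (unnormalised) eigenvector
norm = math.hypot(vx, vy)
vx, vy = vx / norm, vy / norm

# Project each 2-D point onto the principal component: 2D -> 1D
projected = [x * vx + y * vy for x, y in centred]
print([round(p, 2) for p in projected])
```

Each point is now described by a single coordinate along the direction of greatest variance; the variance of the projected values equals the largest eigenvalue, which is the sense in which PCA preserves as much variance as possible.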
Model-driven vs. Data-driven Decision-making
In artificial intelligence, there are two schools of thought: data-driven and
model-driven. The data-driven approach focuses on enhancing data quality and data
governance in order to improve the performance of a given problem
statement. In contrast, the model-driven approach attempts to increase performance
by developing new models and algorithmic refinements (or upgrades). Ideally,
these should go hand in hand, but in practice, model-driven techniques
have advanced far further than data-driven ones. In terms of data governance, data
management, data quality handling, and general awareness, there is still much room
for improvement.
Recent work on Covid-19 serves as an illustration from this perspective. While the
world was struggling with the pandemic, a number of AI-related projects emerged.
Whether recognising Covid-19 from a CT scan, X-ray, or other medical imaging,
estimating the course of the disease, or projecting the total number of
fatalities, artificial intelligence featured everywhere. On the one hand, this extensive effort
around the globe has increased our understanding of the illness and, in certain
places, assisted clinical personnel in their work with large populations.
However, only a small fraction of this vast body of work was judged suitable for any
actual deployment, such as in the healthcare industry. This lack of practicality is
primarily due to data quality problems. Numerous projects and
studies used duplicate images drawn from different sources. Even then, the training data
were notably lacking in external validation and demographic information. The
majority of these studies would fail a systematic review and fail to disclose their biases.
Consequently, the reported performance cannot be extrapolated to real-world
scenarios.
A crucial point to keep in mind in data science is that poor data will never
yield superior performance, regardless of how strong the model is. Real-world
applications require an understanding of systematic data collection,
management, and consumption for a data science project. Only then can society
reap the rewards of 'wonderful AI'.

Solved Case 1
Arjun joined a higher learning institution as an instructor. His responsibility is to
teach data analysis to students. He is particularly interested in teaching analytics
and model building. Arjun was preparing a teaching plan for the new upcoming batch.
What elements do you think he should incorporate into the plan?
Teaching note - outline for solution:
The instructor may first explain the utility of data analytics from the perspective of
business organizations. He may explain how data analytics can translate discoveries
into insights that eventually aid executives, managers, and operational personnel
in making more educated and prudent business choices.
He may further explain the four forms of data analytics:
(i) Descriptive analytics
(ii) Diagnostic analytics
(iii) Predictive analytics
(iv) Prescriptive analytics
The instructor should explain each of these terms along with their appropriateness
for use in real-life problem situations.
The advantages and disadvantages of each method should also be discussed
thoroughly.
