
Artificial Intelligence:

Artificial intelligence is a branch of computer science that aims to create intelligent machines. It has
become an essential part of the technology industry.

Research associated with artificial intelligence is highly technical and specialized. The core problems of
artificial intelligence include programming computers for certain traits such as:

Knowledge

Reasoning

Problem solving

Perception

Learning

Planning

Ability to manipulate and move objects

Artificial Intelligence is an approach to making a computer, a robot, or a product think the way intelligent humans think. AI is the study of how the human brain thinks, learns, decides, and works when it tries to solve problems. This study ultimately produces intelligent software systems.

Artificial Intelligence refers to the intelligence of machines. This is in contrast to the natural intelligence
of humans and animals. With Artificial Intelligence, machines perform functions such as learning,
planning, reasoning and problem-solving. Most noteworthy, Artificial Intelligence is the simulation of
human intelligence by machines. It is probably the fastest-growing development in the world of
technology and innovation. Furthermore, many experts believe AI could help solve major challenges and
crisis situations.
Types of Artificial Intelligence
Artificial Intelligence is commonly categorized into four types, a classification proposed by Arend Hintze. The categories are as follows:

Type 1: Reactive machines – These machines can react to situations. A famous example is Deep Blue, the IBM chess program that defeated the chess legend Garry Kasparov. Such machines lack memory and cannot use past experiences to inform future ones: a reactive machine analyses all the alternatives available in the current situation and chooses the best one, as the sketch below illustrates.
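
To make the idea concrete, here is a minimal Python sketch of a memoryless game player. The `game` interface (`legal_moves`, `apply`, `is_terminal`, `evaluate`) is hypothetical, and Deep Blue’s actual search and evaluation were vastly more sophisticated:

```python
# Minimal minimax sketch of a reactive, memoryless game player.
# The `game` object and its methods are a hypothetical interface.

def minimax(state, depth, maximizing, game):
    """Score a position by searching every alternative to a fixed depth."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)        # static evaluation of this position
    scores = (
        minimax(game.apply(state, move), depth - 1, not maximizing, game)
        for move in game.legal_moves(state)
    )
    return max(scores) if maximizing else min(scores)

def best_move(state, depth, game):
    """Pick the move with the best score; nothing is remembered afterwards."""
    return max(
        game.legal_moves(state),
        key=lambda move: minimax(game.apply(state, move), depth - 1, False, game),
    )
```

Because the search starts fresh from the current position every time, nothing carries over from one game to the next, which is exactly what makes such a machine reactive.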

Type 2: Limited memory – These AI systems are capable of using past experiences to inform future decisions. A good example is self-driving cars: their decision-making systems take actions, such as changing lanes, based on recent observations of the road, but there is no permanent storage of these observations.

Type 3: Theory of mind – This refers to understanding others: above all, understanding that others have their own beliefs, intentions, desires, and opinions. This type of AI does not exist yet.

Type 4: Self-awareness – This is the highest and most sophisticated level of Artificial Intelligence. Such systems have a sense of self; furthermore, they have awareness, consciousness, and emotions. Such technology does not yet exist, and it would certainly be a revolution.

How Does Artificial Intelligence Transform the Way We Live?


Some people believe AI is beneficial to various industries, professions, and the workforce as a whole, while others believe it is detrimental. The ability of AI to perform a significant amount of skilled work automatically has implications for the human resources needed to carry out the same skilled work. While many industries require multiple tasks, AI can automate the completion of single tasks, one at a time.

Let’s take a look at how AI can be applied in different fields.

Education

Health

Law

Transport

Smart Apps

Smart Homes
Latest Artificial Intelligence Technologies 
1. Natural language generation

Machines process and communicate differently from the human brain. Natural language generation is a trendy technology that converts structured data into natural language. The machines are programmed with algorithms that convert the data into a format desirable for the user. Natural language generation is a subset of artificial intelligence that helps content developers automate content and deliver it in the desired format. Content developers can use the automated content for promotion on social media and other media platforms to reach the targeted audience. Human intervention is significantly reduced as data is converted into the desired formats automatically. The data can also be visualized in the form of charts, graphs, etc.
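
As a minimal illustration of the idea, the Python sketch below (with made-up sales figures) turns structured records into readable sentences; commercial NLG platforms use far richer grammars and document planning:

```python
# Minimal template-based natural language generation sketch.
# The sales records below are invented for illustration.

sales = [
    {"region": "North", "revenue": 120_000, "growth": 0.08},
    {"region": "South", "revenue": 95_000, "growth": -0.03},
]

def describe(row):
    trend = "grew" if row["growth"] >= 0 else "declined"
    return (f"The {row['region']} region earned ${row['revenue']:,} "
            f"and {trend} by {abs(row['growth']):.0%}.")

for row in sales:
    print(describe(row))
# The North region earned $120,000 and grew by 8%.
# The South region earned $95,000 and declined by 3%.
```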

2. Speech recognition

Speech recognition is another important subset of artificial intelligence that converts human speech into a format that computers can use and understand. Speech recognition is a bridge between human and computer interactions. The technology recognizes and converts human speech in several languages; Apple’s Siri is a classic example.
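
A minimal sketch using the third-party SpeechRecognition package for Python shows the basic workflow; the file name is a placeholder, and production systems like Siri are far more capable:

```python
# Minimal speech-to-text sketch using the SpeechRecognition package
# (pip install SpeechRecognition). "clip.wav" is a placeholder file.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("clip.wav") as source:    # load a WAV recording
    audio = recognizer.record(source)       # read the whole file

try:
    # Send the audio to Google's free web recognizer and print the text.
    print(recognizer.recognize_google(audio, language="en-US"))
except sr.UnknownValueError:
    print("Speech was unintelligible.")
except sr.RequestError as err:
    print(f"Recognition service unavailable: {err}")
```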

3. Virtual agents

Virtual agents have become valuable tools for instructional designers. A virtual agent is a computer application that interacts with humans. Web and mobile applications provide virtual agents as customer service agents that interact with humans and answer their queries. Google Assistant helps organize meetings, and Alexa from Amazon helps make your shopping easy. A virtual assistant also acts as a language assistant, picking up cues from your choices and preferences. IBM Watson understands typical customer service queries asked in several different ways. Virtual agents are also offered as software-as-a-service.
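
Reduced to its simplest form, a virtual agent is a mapping from user queries to answers. The toy sketch below uses keyword matching over invented FAQ entries; production assistants rely on statistical language understanding instead:

```python
# Toy rule-based virtual agent: keyword matching over canned answers.
# The FAQ entries are invented for illustration.

FAQ = {
    "hours":    "We are open 9am-5pm, Monday to Friday.",
    "refund":   "Refunds are processed within 5 business days.",
    "shipping": "Standard shipping takes 3-7 days.",
}

def reply(message: str) -> str:
    text = message.lower()
    for keyword, answer in FAQ.items():
        if keyword in text:
            return answer
    return "Sorry, I didn't understand. A human agent will contact you."

print(reply("What are your hours?"))   # matched: opening-hours answer
print(reply("Where is my package?"))   # no keyword: falls back to a human
```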

4. Decision management
Modern organizations are implementing decision management systems that convert and interpret data into predictive models. Enterprise-level applications use decision management systems to receive up-to-date information, perform business data analysis, and aid organizational decision-making. Decision management helps in making quick decisions, avoiding risks, and automating processes. Decision management systems are widely implemented in the financial sector, the health care sector, trading, insurance, e-commerce, etc.
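
A minimal sketch of the pattern, with purely illustrative thresholds (real decision management systems combine many predictive models and business rules):

```python
# Minimal decision-management sketch: turning applicant data into an
# automated, rule-based decision. All thresholds are illustrative only.

def credit_decision(income: float, debt: float, missed_payments: int) -> str:
    debt_ratio = debt / income if income else float("inf")
    if missed_payments > 2 or debt_ratio > 0.6:
        return "decline"          # high risk: avoid automatically
    if debt_ratio > 0.35:
        return "manual review"    # borderline: escalate to a human
    return "approve"              # low risk: decide instantly

print(credit_decision(income=60_000, debt=12_000, missed_payments=0))  # approve
print(credit_decision(income=40_000, debt=30_000, missed_payments=1))  # decline
```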

5. Biometrics

Biometrics deals with the identification, measurement, and analysis of people’s physical and behavioural characteristics, such as fingerprints, facial features, voice, and gait. It enables more natural interaction between humans and machines and is widely used for identity verification and access control.

6. Machine learning

Machine learning is a division of artificial intelligence that empowers machines to make sense of data sets without being explicitly programmed. Machine learning techniques help businesses make informed decisions through data analytics performed with algorithms and statistical models. Enterprises are investing heavily in machine learning to reap the benefits of its application in diverse domains. Healthcare and the medical profession use machine learning techniques to analyze patient data for the prediction of diseases and effective treatment. The banking and financial sector uses machine learning for customer data analysis, to identify and suggest investment options to customers, and for risk and fraud prevention. Retailers use machine learning to predict changing customer preferences and consumer behavior by analyzing customer data.
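
A minimal scikit-learn sketch shows the core idea of learning a decision rule from labeled examples; synthetic data stands in for real records such as transactions:

```python
# Minimal supervised machine learning sketch with scikit-learn:
# fit a model to labeled examples instead of hand-coding the rules.
# Synthetic data stands in for real business records.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2%}")
```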

7. Robotic process automation

Robotic process automation is an application of artificial intelligence that configures a robot (a software application) to interpret, communicate, and analyze data. This discipline of artificial intelligence helps to automate, partially or fully, manual operations that are repetitive and rule-based.
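
A minimal sketch of the pattern (the file name and columns are hypothetical): a software robot applies one fixed rule to every incoming record, work a person would otherwise do by hand:

```python
# Minimal RPA-style sketch: apply one fixed rule to every row of an
# incoming file. "invoices.csv" and its columns are hypothetical.
import csv

def route_invoice(row: dict) -> str:
    # Rule: invoices of 10,000 or more need a manager's approval.
    return "approvals" if float(row["amount"]) >= 10_000 else "auto_pay"

with open("invoices.csv", newline="") as f:
    for row in csv.DictReader(f):
        print(row["invoice_id"], "->", route_invoice(row))
```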

8. Peer-to-peer network

A peer-to-peer network connects different systems and computers for data sharing without the data passing through a server. Peer-to-peer networks can also pool the computing resources of many machines to tackle complex problems. The technology underpins cryptocurrencies. Implementation is cost-effective because individual workstations are connected directly and no servers need to be installed.
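
A minimal sketch of the idea in Python, with a placeholder peer address: one machine serves data directly to another over a socket, and no central server sits in between:

```python
# Minimal peer-to-peer sketch: one peer sends data straight to another
# over a socket, with no central server. Address and port are placeholders.
# Run with the argument "serve" on one machine, no argument on the other.
import socket
import sys

PEER, PORT = "192.168.1.20", 9000   # placeholder address of the other peer

def serve(payload: bytes):
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()       # wait for a peer to connect
        with conn:
            conn.sendall(payload)    # send the data directly

def fetch() -> bytes:
    with socket.create_connection((PEER, PORT)) as conn:
        return conn.recv(65536)

if __name__ == "__main__":
    serve(b"shared data") if sys.argv[1:] == ["serve"] else print(fetch())
```
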
9. Deep learning platforms

Deep learning is another branch of artificial intelligence that functions based on artificial neural networks. This technique teaches computers and machines to learn by example, just the way humans do. The term “deep” refers to the hidden layers in the neural network: a traditional neural network has 2-3 hidden layers, while deep networks can have as many as 150. Deep learning models are trained on large data sets, often using graphics processing units (GPUs). The algorithms work in a hierarchy to automate predictive analytics. Deep learning has spread its wings into many domains: aerospace and defense use it to detect objects from satellites, manufacturers use it to improve worker safety by identifying risk incidents when a worker gets too close to a machine, and medical researchers use it to detect cancer cells.
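
To make the mechanics concrete, here is a minimal NumPy sketch of a tiny network learning XOR by gradient descent; real deep networks have many more layers and are trained on GPUs with frameworks such as PyTorch or TensorFlow:

```python
# Minimal deep-learning sketch: a tiny fully connected network trained
# on XOR with plain NumPy gradient descent.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR labels

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)        # hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)        # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)                         # forward pass
    p = sigmoid(h @ W2 + b2)
    grad_p = p - y                                   # cross-entropy gradient
    grad_h = (grad_p @ W2.T) * (1 - h**2)            # backprop through tanh
    W2 -= 0.1 * (h.T @ grad_p); b2 -= 0.1 * grad_p.sum(0)
    W1 -= 0.1 * (X.T @ grad_h); b1 -= 0.1 * grad_h.sum(0)

print(p.round(3).ravel())   # predictions approach [0, 1, 1, 0]
```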

10. AI-optimized hardware

Artificial intelligence software is in high demand in the business world. As attention to the software has increased, the need for hardware that supports it has also arisen. Conventional chips cannot support demanding artificial intelligence models, so a new generation of artificial intelligence chips is being developed for neural networks, deep learning, and computer vision. AI hardware includes CPUs that can handle scalable workloads, special-purpose silicon built for neural networks, neuromorphic chips, and more. Organizations such as Nvidia, Qualcomm, and AMD are creating chips that can perform complex AI calculations. Healthcare and automotive may be the industries that benefit most from these chips.

Risks of Artificial Intelligence

JOB AUTOMATION

Job automation is generally viewed as the most immediate concern. It’s no longer a matter of if AI will replace
certain types of jobs, but to what degree. In many industries — particularly but not exclusively those whose
workers perform predictable and repetitive tasks — disruption is well underway. According to a 2019 Brookings
Institution study, 36 million people work in jobs with “high exposure” to automation, meaning that before long at
least 70 percent of their tasks — ranging from retail sales and market analysis to hospitality and warehouse labor
— will be done using AI. An even newer Brookings report concludes that white collar jobs may actually be most at
risk. And per a 2018 report from McKinsey & Company, the African American workforce will be hardest hit.

“The reason we have a low unemployment rate, which doesn’t actually capture people that aren’t looking for
work, is largely that lower-wage service sector jobs have been pretty robustly created by this economy,” renowned
futurist Martin Ford told Built In. “I don’t think that’s going to continue.”

As AI robots become smarter and more dexterous, he added, the same tasks will require fewer humans. And while
it’s true that AI will create jobs, an unspecified number of which remain undefined, many will be inaccessible to
less educationally advanced members of the displaced workforce.

“If you’re flipping burgers at McDonald’s and more automation comes in, is one of these new jobs going to be a
good match for you?” Ford said. “Or is it likely that the new job requires lots of education or training or maybe
even intrinsic talents — really strong interpersonal skills or creativity — that you might not have? Because those
are the things that, at least so far, computers are not very good at.”


John C. Havens, author of Heartificial Intelligence: Embracing Humanity and Maximizing Machines, calls bull on the
theory that AI will create as many or more jobs than it replaces.

About four years ago, Havens said, he interviewed the head of a law firm about machine learning. The man wanted
to hire more people, but he was also obliged to achieve a certain level of returns for his shareholders. A $200,000
piece of software, he discovered, could take the place of ten people drawing salaries of $100,000 each. That meant
he’d save $800,000. The software would also increase productivity by 70 percent and eradicate roughly 95 percent
of errors. From a purely shareholder-centric, single bottom-line perspective, Havens said, “there is no legal reason
that he shouldn’t fire all the humans.” Would he feel bad about it? Of course. But that’s beside the point.

Even professions that require graduate degrees and additional post-college training aren’t immune to AI
displacement. In fact, technology strategist Chris Messina said, some of them may well be decimated. AI already is
having a significant impact on medicine. Law and accounting are next, Messina said, the former being poised for “a
massive shakeup.”

“Think about the complexity of contracts, and really diving in and understanding what it takes to create a perfect
deal structure,” he said. “It’s a lot of attorneys reading through a lot of information — hundreds or thousands of
pages of data and documents. It’s really easy to miss things. So AI that has the ability to comb through and
comprehensively deliver the best possible contract for the outcome you're trying to achieve is probably going to
replace a lot of corporate attorneys.”

Accountants should also prepare for a big shift, Messina warned. Once AI is able to quickly comb through reams of
data to make automatic decisions based on computational interpretations, human auditors may well be
unnecessary.

PRIVACY, SECURITY AND THE RISE OF 'DEEPFAKES' 


While job loss is currently the most pressing issue related to AI disruption, it’s merely one among many potential
risks. In a February 2018 paper titled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and
Mitigation,” 26 researchers from 14 institutions (academic, civil and industry) enumerated a host of other dangers
that could cause serious harm — or, at minimum, sow minor chaos — in less than five years.

“Malicious use of AI,” they wrote in their 100-page report, “could threaten digital security (e.g. through criminals
training machines to hack or socially engineer victims at human or superhuman levels of performance), physical
security (e.g. non-state actors weaponizing consumer drones), and political security (e.g. through privacy-
eliminating surveillance, profiling, and repression, or through automated and targeted disinformation campaigns).”

In addition to its more existential threat, Ford is focused on the way AI will adversely affect privacy and security. A
prime example, he said, is China’s “Orwellian” use of facial recognition technology in offices, schools and other
venues. But that’s just one country. “A whole ecosphere” of companies specialize in similar tech and sell it around
the world.

What we can so far only guess at is whether that tech will ever become normalized. As with the internet, where we
blithely sacrifice our digital data at the altar of convenience, will round-the-clock, AI-analyzed monitoring someday
seem like a fair trade-off for increased safety and security despite its nefarious exploitation by bad actors?

“Authoritarian regimes use or are going to use it,” Ford said. “The question is, How much does it invade Western
countries, democracies, and what constraints do we put on it?”


AI will also give rise to hyper-real-seeming social media “personalities” that are very difficult to differentiate from
real ones, Ford said. Deployed cheaply and at scale on Twitter, Facebook or Instagram, they could conceivably
influence an election. 

The same goes for so-called audio and video deepfakes, created by manipulating voices and likenesses. The latter
is already making waves. But the former, Ford thinks, will prove immensely troublesome. Using machine learning, a
subset of AI that’s involved in natural language processing, an audio clip of any given politician could be
manipulated to make it seem as if that person spouted racist or sexist views when in fact they uttered nothing of
the sort. If the clip’s quality is high enough to fool the general public and avoid detection, Ford added, it could
“completely derail a political campaign.” 

And all it takes is one success. 

From that point on, he noted, “no one knows what’s real and what’s not. So it really leads to a situation where you
literally cannot believe your own eyes and ears; you can't rely on what, historically, we’ve considered to be the
best possible evidence… That’s going to be a huge issue.”

Lawmakers, though frequently less than tech-savvy, are acutely aware and pressing for solutions.

AI BIAS AND WIDENING SOCIOECONOMIC INEQUALITY


Widening socioeconomic inequality sparked by AI-driven job loss is another cause for concern. Along with
education, work has long been a driver of social mobility. However, when it’s a certain kind of work — the
predictable, repetitive kind that’s prone to AI takeover — research has shown that those who find themselves out
in the cold are much less apt to get or seek retraining compared to those in higher-level positions who have more
money. (Then again, not everyone believes that.)

Various forms of AI bias are detrimental, too. Speaking recently to the New York Times, Princeton computer
science professor Olga Russakovsky said it goes well beyond gender and race. In addition to data and algorithmic
bias (the latter of which can “amplify” the former), AI is developed by humans and humans are inherently biased.

“A.I. researchers are primarily people who are male, who come from certain racial demographics, who grew up in
high socioeconomic areas, primarily people without disabilities,” Russakovsky said. “We’re a fairly homogeneous
population, so it’s a challenge to think broadly about world issues.”

In the same article, Google researcher Timnit Gebru said the root of bias is social rather than technological, and
called scientists like herself “some of the most dangerous people in the world, because we have this illusion of
objectivity.” The scientific field, she noted, “has to be situated in trying to understand the social dynamics of the
world, because most of the radical change happens at the social level.”

And technologists aren’t alone in sounding the alarm about AI’s potential socio-economic pitfalls. Along with
journalists and political figures, Pope Francis is also speaking up — and he’s not just whistling Sanctus. At a late-
September Vatican meeting titled, “The Common Good in the Digital Age,” Francis warned that AI has the ability to
“circulate tendentious opinions and false data that could poison public debates and even manipulate the opinions
of millions of people, to the point of endangering the very institutions that guarantee peaceful civil coexistence.”

“If mankind’s so-called technological progress were to become an enemy of the common good,” he added, “this
would lead to an unfortunate regression to a form of barbarism dictated by the law of the strongest.”

A big part of the problem, Messina said, is the private sector’s pursuit of profit above all else. Because “that’s what
they’re supposed to do,” he said. “And so they’re not thinking of, ‘What’s the best thing here? What’s going to
have the best possible outcome?’”

“The mentality is, ‘If we can do it, we should try it; let’s see what happens,’” he added. “‘And if we can make money
off it, we’ll do a whole bunch of it.’ But that’s not unique to technology. That’s been happening forever.”

AUTONOMOUS WEAPONS AND A POTENTIAL AI ARMS RACE


Not everyone agrees with Elon Musk that AI is more dangerous than nukes, including Ford. But what if AI decides
to launch nukes — or, say, biological weapons — sans human intervention? Or, what if an enemy manipulates data
to return AI-guided missiles whence they came? Both are possibilities. And both would be disastrous. The more
than 30,000 AI/robotics researchers and others who signed an open letter on the subject in 2015 certainly think
so. 

“The key question for humanity today is whether to start a global AI arms race or to prevent it from starting,” they
wrote. “If any major military power pushes ahead with AI weapon development, a global arms race is virtually
inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the
Kalashnikovs of tomorrow.

“Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous
and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on
the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing
to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing
nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military
AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for
humans, especially civilians, without creating new tools for killing people.”

(The U.S. Military’s proposed budget for 2020 is $718 billion. Of that amount, nearly $1 billion would support AI
and machine learning for things like logistics, intelligence analysis and, yes, weaponry.)

A story in Vox detailed a frightening scenario involving the development of a sophisticated AI system “with the goal
of, say, estimating some number with high confidence. The AI realizes it can achieve more confidence in its
calculation if it uses all the world’s computing hardware, and it realizes that releasing a biological superweapon to
wipe out humanity would allow it free use of all the hardware. Having exterminated humanity, it then calculates
the number with higher confidence.”

That’s jarring, sure. But rest easy. In 2012 the Obama Administration’s Department of Defense issued a directive
regarding “Autonomy in Weapon Systems” that included this line: “Autonomous and semi-autonomous weapon
systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment
over the use of force.”

And in early November of this year, a Pentagon group called the Defense Innovation Board published ethical
guidelines regarding the design and deployment of AI-enabled weapons. According to the Washington Post,
however, “the board’s recommendations are in no way legally binding. It now falls to the Pentagon to determine
how and whether to proceed with them.”

Well, that’s a relief. Or not.

STOCK MARKET INSTABILITY CAUSED BY ALGORITHMIC HIGH FREQUENCY TRADING 

Have you ever considered that algorithms could bring down our entire financial system? That’s right, Wall Street.
You might want to take notice. Algorithmic trading could be responsible for our next major financial crisis in the
markets. 

What is algorithmic trading? This type of trading occurs when a computer, unencumbered by the instincts or emotions that can cloud a human’s judgement, executes trades based on pre-programmed instructions. These computers can make extremely high-volume, high-frequency, and high-value trades that can lead to big losses and extreme market volatility. Algorithmic high-frequency trading (HFT) is proving to be a huge risk factor in our markets. HFT is essentially when a computer places thousands of trades at blistering speeds with the goal of selling a few seconds later for small profits. Thousands of these trades every second can add up to a hefty chunk of change, as the sketch below illustrates. The issue with HFT is that it doesn’t take into account how interconnected the markets are, or the fact that human emotion and logic still play a massive role in our markets.
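
Some back-of-the-envelope arithmetic, with purely illustrative numbers, shows how tiny per-trade profits add up at machine speed:

```python
# Illustrative only: how small per-trade profits compound at machine speed.
trades_per_second = 1_000           # hypothetical order rate
profit_per_trade = 0.01             # one cent of profit per trade
session_seconds = 6.5 * 3600        # a 6.5-hour trading session

daily_profit = trades_per_second * profit_per_trade * session_seconds
print(f"${daily_profit:,.0f} per day")   # $234,000 per day
```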

A sell-off of millions of shares in the airline market could potentially scare humans into selling off their shares in
the hotel industry, which in turn could snowball people into selling off their shares in other travel-related
companies, which could then affect logistics companies, food supply companies, etc.

Take the “Flash Crash” of May 2010 as an example. Towards the end of the trading day, the Dow Jones plunged 1,000 points (more than $1 trillion in value) before rebounding towards normal levels just 36 minutes later. What caused the crash? A London-based trader named Navinder Singh Sarao was blamed for triggering it, and HFT computers then exacerbated it. Sarao allegedly used a “spoofing” algorithm that placed an order for thousands of stock index futures contracts betting that the market would fall. Rather than going through with the bet, Sarao planned to cancel the order at the last second and buy the lower-priced stocks being sold off in reaction to his original bet. Other humans and HFT computers saw this $200 million bet and took it as a sign that the market was going to tank. In turn, HFT computers began one of the biggest stock sell-offs in history, causing a brief loss of more than $1 trillion globally.

Financial HFT algorithms aren’t always correct, either. We tend to view computers as infallible, but AI is still only as smart as the humans who programmed it. In 2012, Knight Capital Group experienced a glitch that put it on the verge of bankruptcy. Knight’s computers mistakenly streamed thousands of orders per second into the NYSE, causing mass chaos for the company. The HFT algorithms executed an astounding 4 million trades of 397 million shares in only 45 minutes. The volatility created by this computer error led to Knight losing $460 million overnight and having to be acquired by another firm. Errant algorithms obviously have massive implications for shareholders and the markets themselves, and nobody learned this lesson harder than Knight.

Mitigating the Risks of AI


Many believe the only way to prevent or at least temper the most malicious AI from wreaking havoc is some sort
of regulation. 

“I am not normally an advocate of regulation and oversight — I think one should generally err on the side of
minimizing those things — but this is a case where you have a very serious danger to the public,” Musk said at
SXSW.

“It needs to be a public body that has insight and then oversight to confirm that everyone is developing AI safely.
This is extremely important.”

Ford agrees — with a caveat. Regulation of AI implementation is fine, he said, but not of the research itself.

“You regulate the way AI is used,” he said, “but you don’t hold back progress in basic technology. I think that would
be wrong-headed and potentially dangerous.”
