
SUMMER INTERNSHIP PROJECT REPORT

ON
A Study of Artificial Intelligence: Comparison Between
AMD and NVIDIA

Submitted in partial fulfilment of the requirements of Bachelor of
Commerce (Hons.)

B.Com(H)-IV Semester (Morning)


Batch 2022-25

Submitted to:                                Submitted by:
Ms. Aastha Behl                              Anushka Mishra
Assistant Professor                          01714188822

JAGANNATH INTERNATIONAL MANAGEMENT SCHOOL

ACKNOWLEDGEMENT

I would like to gratefully acknowledge the contribution of all the people who took an
active part in and provided valuable support to me during the course of this project. To
begin with, I would like to offer my sincere thanks to Ms. Aastha Behl, my internal mentor.

Without her guidance, support and valuable suggestions during the project, it would not
have been accomplished.

I also sincerely thank my friends and family, who provided valuable suggestions, shared
their rich experience and helped me script the exact requisites.

ANUSHKA MISHRA

B.Com (Hons)

01714188822

DECLARATION

I, ANUSHKA MISHRA, hereby declare that the minor project report on the topic
“Artificial Intelligence: Comparison of AMD and NVIDIA” is an original piece of research
work done by me. I have specified, by means of references, the sources from which the
information has been taken. To the best of my knowledge, my minor project is not the
same as any that may have already been submitted for a degree at any university or board.

ANUSHKA MISHRA

B.COM (HONS) II SEMESTER

2022-2025

CONTENTS

S. No.  Particulars                   Page No.  Remarks
1.      Acknowledgement               2
2.      Declaration                   3
3.      List of Tables and Figures    5
4.      Executive Summary             6
5.      Introduction of the Study     7
6.      Objectives of the Study       16
7.      Review of Literature          18
8.      Company Profile               22
9.      Research Methodology          34
10.     Limitations of the Study      42
11.     Findings and Analysis         45
12.     SWOT Analysis                 61
13.     Conclusion                    67
14.     Bibliography                  70
15.     Appendices                    72
List of Tables and Figures
S. No. Figures Page No.
1. Figure 1 7
2. Figure 2 15
3. Figure 3 16
4. Figure 4 18
5. Figure 5 21
6. Figure 6 22
7. Figure 7 23
8. Figure 8 24
9. Figure 9 33
10. Figure 10 34
11. Figure 11 42
12. Figure 12 45
13. Table 1, figure 13 46
14. Table 2, figure 14 47
15. Table 3, figure 15 48
16. Table 4, figure 16 49
17. Table 5, figure 17 50
18. Table 6, figure 18 51
19. Table 7, figure 19 52
20. Table 8, figure 20 53
21. Table 9, figure 21 54
22. Table 10, figure 22 55
23. Figure 23 61
24. Table 11 66
25. Figure 24 67
26. Figure 25 70

EXECUTIVE SUMMARY

This project is based on an in-depth comparison between AMD and NVIDIA, two GPU
manufacturers making significant contributions to the field of artificial intelligence (AI).

The study also compares the financial positions of the two companies and concludes
which one performed better at the end of 2023 and the beginning of 2024.

The objective is to present an analysis of the impact of artificial intelligence (AI) in today's
world, focusing on its various applications across different sectors and the implications
of its widespread adoption.

The study also contains a brief SWOT analysis of both companies to identify future
opportunities and threats and plan accordingly.

CHAPTER – 1

INTRODUCTION
TO THE TOPIC

Fig. 1

WHAT IS ARTIFICIAL INTELLIGENCE
Artificial Intelligence (AI) is a science and a set of computational technologies that are
inspired by—but typically operate quite differently from—the ways people use their
nervous systems and bodies to sense, learn, reason, and act. While the rate of progress
in AI has been patchy and unpredictable, there have been significant advances since the
field's inception sixty years ago. Once a mostly academic area of study, twenty-first
century AI enables a constellation of mainstream technologies that are having a
substantial impact on everyday lives. Computer vision and AI planning, for example,
drive the video games that are now a bigger entertainment industry than Hollywood.
Deep learning, a form of machine learning based on layered representations of variables
referred to as neural networks, has made speech-understanding practical on our phones
and in our kitchens, and its algorithms can be applied widely to an array of applications
that rely on pattern recognition. Natural Language Processing (NLP) and knowledge
representation and reasoning have enabled a machine to beat the Jeopardy champion
and are bringing new power to Web searches.
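To make "layered representations" concrete, the minimal sketch below (an illustrative toy example written for this report, not drawn from any cited source) passes an input through two layers of weights and nonlinearities, which is the basic computation that deep networks repeat many times:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# A toy 2-layer network: 4 input features -> 8 hidden units -> 3 class scores.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

x = rng.normal(size=4)                  # one input example
h = relu(x @ W1 + b1)                   # first layered representation
scores = h @ W2 + b2                    # second layer maps it to class scores
probs = np.exp(scores) / np.exp(scores).sum()   # softmax over 3 classes
print(probs)
```

In a real system the weights are learned from data rather than drawn at random, and the networks are far deeper, but the layer-by-layer transformation of inputs into representations is the same.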

Artificial Intelligence (AI) holds immense significance across various domains,


revolutionizing industries and transforming the way we live, work, and interact with
technology. One of the primary areas where AI is making a profound impact is in
healthcare. AI-powered systems can analyze vast amounts of medical data to assist in
diagnosing diseases, identifying treatment options, and even predicting patient
outcomes. This has the potential to significantly improve patient care and outcomes while
reducing medical errors. Moreover, in the realm of finance, AI algorithms are
revolutionizing trading strategies, risk management, fraud detection, and customer
service. AI-driven chatbots provide personalized assistance to customers, enhancing
their experience and increasing operational efficiency for financial institutions.
Additionally, AI is driving innovation in transportation through the development of
autonomous vehicles, optimizing traffic flow, and improving safety on roads. In
manufacturing, AI-powered robots are streamlining production processes, increasing
efficiency, and lowering costs. Furthermore, AI is shaping the future of education,
personalizing learning experiences, and advancing research through data analysis and
predictive modelling. Overall, the significance of AI lies in its ability to drive efficiency,
innovation, and transformation across various sectors, ultimately shaping the future of
society.

EVOLUTION OF ARTIFICIAL INTELLIGENCE: CONCEPT TO REALITY


AI research has a rich history marked by several pioneering works and significant
advancements. Here's a brief overview of some key milestones and the progression of
AI research over time:

1. Alan Turing's Contributions (1936-1950s):


Alan Turing laid the theoretical foundation for computer science and artificial
intelligence with his seminal paper "On Computable Numbers, with an Application to
the Entscheidungsproblem" (1936). During World War II, Turing worked on breaking
the Enigma code, an effort that can be seen as early work in machine learning and
cryptography. Turing proposed the Turing Test in 1950 as a way to assess a
machine's intelligence by its ability to exhibit behavior indistinguishable from that of
a human.

2. Dartmouth Conference (1956):


The Dartmouth Conference, organized by John McCarthy, Marvin Minsky, Nathaniel
Rochester, and Claude Shannon, is considered the birth of AI as a field. The term
"artificial intelligence" was coined at this conference, and researchers set out
ambitious goals for creating intelligent machines.

3. Early AI Programs (1950s-1960s): In the late 1950s and early 1960s, researchers
developed early AI programs, including the Logic Theorist by Allen Newell and
Herbert A. Simon, and the General Problem Solver (GPS) by Newell and Simon.

These programs demonstrated the feasibility of using computers to solve problems
and reason symbolically.

4. Expert Systems (1970s-1980s): Expert systems, which encoded human expertise in


specific domains, became prominent in the 1970s and 1980s. MYCIN, developed by
Edward Shortliffe, was one of the first expert systems, designed to diagnose bacterial
infections and recommend treatments.

5. Machine Learning and Neural Networks (1980s-1990s): The development of machine


learning algorithms, such as backpropagation for training neural networks, gained
traction in the 1980s and 1990s. Notable contributions include the development of
the backpropagation algorithm by Rumelhart, Hinton, and Williams in the 1980s.

6. AI Winter and Resurgence (Late 1980s-1990s): Despite initial enthusiasm, AI faced


a period of scepticism and funding cuts known as the "AI Winter" in the late 1980s
and early 1990s. However, research continued, and interest in AI revived with the
emergence of new techniques and applications, such as data mining and natural
language processing.

7. Deep Learning Revolution (2000s-Present): The 2000s saw a resurgence of interest


in neural networks, particularly deep learning, fuelled by the availability of large
datasets and advances in computing power. Breakthroughs in deep learning, such
as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), led
to significant progress in areas like computer vision, speech recognition, and natural
language processing.

8. Current Trends and Challenges: Recent advancements in AI include reinforcement


learning, generative adversarial networks (GANs), and the application of AI in fields
like healthcare, autonomous vehicles, and robotics. Ethical and societal concerns,
including bias in AI systems, transparency, and the impact of automation on jobs,
remain significant challenges.

Throughout its history, AI research has been characterized by periods of excitement,
followed by periods of scepticism, but overall, it has continued to advance steadily, with
increasingly sophisticated algorithms and applications.

TYPES OF AI
From chatbots to super-robots, here are the types of AI to know and where the
technology is headed next.

I. Capability-Based types:
1) Narrow AI - Narrow AI, also known as Artificial Narrow Intelligence (ANI) or weak
AI, describes AI tools designed to carry out very specific actions or commands.
ANI technologies are built to serve and excel in one cognitive capability and
cannot independently learn skills beyond their design. They often utilize machine
learning and neural network algorithms to complete these specified tasks. For
instance, natural language processing is a type of narrow AI because it can
recognize and respond to voice commands but cannot perform other tasks
beyond that. Some examples of narrow AI include image recognition software,
self-driving cars and AI virtual assistants.

2) Artificial General Intelligence (AGI) - Artificial general intelligence (AGI), also


called general AI or strong AI, describes AI that can learn, think and perform a
wide range of actions similarly to humans. The goal of designing artificial general
intelligence is to be able to create machines that are capable of performing
multifunctional tasks and act as lifelike, equally intelligent assistants to humans in
everyday life. Though still a work in progress, the groundwork of artificial general
intelligence could be built from technologies such as supercomputers, quantum
hardware and generative AI models like ChatGPT.

3) Artificial Superintelligence - Artificial superintelligence (ASI), or super AI, is the
stuff of science fiction. It’s theorized that once AI has reached the general
intelligence level, it will soon learn at such a fast rate that its knowledge and
capabilities will surpass even those of humankind. ASI would act as
the backbone technology of completely self-aware AI and other individualistic
robots. Its concept is also what fuels the popular media trope of “AI takeovers.”
But at this point, it’s all speculation.

II. Functionality-Based types:


1) Reactive Machine AI - They can respond to immediate requests and tasks, but
they aren’t capable of storing memory, learning from past experiences or
improving their functionality through experiences. Additionally, reactive machines
can only respond to a limited combination of inputs. Reactive machines are the
most fundamental type of AI. In practice, reactive machines are useful for
performing basic autonomous functions, such as filtering spam from your email
inbox or recommending items based on your shopping history. But beyond that,
reactive AI can’t build upon previous knowledge or perform more complex tasks.

2) Limited Memory AI - Limited memory AI can store past data and use that data to
make predictions. This means it actively builds its own limited, short-term
knowledge base and performs tasks based on that knowledge. The core of limited
memory AI is deep learning, which imitates the function of neurons in the human
brain. This allows a machine to absorb data from experiences and “learn” from
them, helping it improve the accuracy of its actions over time. Today, the limited
memory model represents the majority of AI applications. It can be applied in a
broad range of scenarios, from smaller scale applications, such as chatbots, to
self-driving cars and other advanced use cases.

3) Theory of Mind AI - Theory of mind refers to the concept of AI that can perceive
and pick up on the emotions of others. The term is borrowed from psychology,

describing humans’ ability to read the emotions of others and predict future
actions based on that information. Theory of mind hasn’t been fully realized yet
and stands as the next substantial milestone in AI’s development. Theory of mind
could bring plenty of positive changes to the tech world, but it also poses its own
risks. Since emotional cues are so nuanced, it would take a long time for AI
machines to perfect reading them and could potentially make big errors while in
the learning stage. Some people also fear that once technologies are able to
respond to emotional signals as well as situational ones, the result could mean
automation of some jobs.

4) Self-Aware AI - Self-aware AI describes artificial intelligence that possesses self-


awareness. Referred to as the AI point of singularity, self-aware AI is the stage
beyond theory of mind and is one of the ultimate goals in AI development. It’s
thought that once self-aware AI is reached, AI machines will be beyond our
control, because they’ll not only be able to sense the feelings of others but will
have a sense of self as well.

OVERVIEW OF THE COMPANIES

History
Advanced Micro Devices: Founded in 1969, AMD initially focused on manufacturing
microprocessors, and later expanded into graphics processing units (GPUs) with the
acquisition of ATI Technologies in 2006.
NVIDIA: Founded in 1993 by Jensen Huang, Chris Malachowsky, and Curtis Priem,
NVIDIA initially focused on graphics processing units (GPUs) for gaming and
professional visualization. In recent years, NVIDIA has become a key player in AI
hardware, providing GPUs optimized for deep learning and other AI applications.

Products/Services
Advanced Micro Devices: Known for its CPUs, GPUs, and APUs (Accelerated Processing
Units). Ryzen CPUs and Radeon GPUs are prominent products. AMD also provides
semi-custom solutions for game consoles.
NVIDIA: GPUs for gaming, professional visualization, and AI applications. NVIDIA also
offers software tools and platforms for AI development, such as CUDA, cuDNN, and the
NVIDIA Deep Learning Institute.

Organization Structure
Advanced Micro Devices: Organized into two main segments: Computing and Graphics,
and Enterprise, Embedded, and Semi-Custom. This structure allows AMD to focus on
both consumer and enterprise markets.
NVIDIA: Organized into two main segments: Graphics and Compute & Networking.
Nvidia's structure reflects its focus on GPU technology across various applications.

Future Plans
Advanced Micro Devices: Continues to focus on innovation in CPU and GPU
technologies, including advancements in high-performance computing and AI. AMD's
acquisition of Xilinx signals its intent to expand into FPGA (Field-Programmable Gate
Array) technology.
NVIDIA: Continues to invest heavily in AI research and development, with a focus on
advancing GPU technology for deep learning, autonomous vehicles, robotics, and other
AI applications.

Competition
Advanced Micro Devices:
 Intel: main competitor in the CPU market.
 NVIDIA: main competitor in the GPU market.
 Other semiconductor companies such as Qualcomm, IBM, and ARM.
NVIDIA:
 AMD: main competitor in the GPU market.
 Intel: emerging competitor in the discrete GPU market.
 Other semiconductor companies focusing on AI, data centers, and automotive
technologies.

Fig. 2

CHAPTER – 2

OBJECTIVES OF THE STUDY

Fig. 3

OBJECTIVES

1) To know about Artificial Intelligence (AI)

2) To have an overview of the two competing GPU manufacturers: AMD and NVIDIA

3) To know about their histories and financial positions

4) To understand the diverse product portfolios of both companies

5) To carry out a SWOT analysis of both companies

6) To assess which company is likely to do better in 2024

7) To know about the impact of AI in the future

CHAPTER – 3

REVIEW OF LITERATURE

Fig. 4

I. Introduction
In the 21st century artificial intelligence (AI) has become an important area of research
in virtually all fields: engineering, science, education, medicine, business, accounting,
finance, marketing, economics, stock market and law, among others (Halal (2003),
Masnikosa (1998), Metaxiotis et al. (2003), Raynor (2000), Stefanuk and Zhozhikashvili
(2002), Tay and Ho (1992) and Wongpinunwatana et al. (2000)). The field of AI has grown
enormously to the extent that tracking proliferation of studies becomes a difficult task
(Ambite and Knoblock (2001), Balazinski et al. (2002), Cristani (1999) and Goyache
(2003)). Apart from the application of AI to the fields mentioned above, studies have been
segregated into many areas with each of these springing up as individual fields of
knowledge (Eiter et al. (2003), Finkelstein et al. (2003), Grunwald and Halpern (2003),
Guestrin et al. (2003), Lin (2003), Stone et al. (2003) and Wilkins et al. (2003)).

II. The Challenge of the AI Field


This work grew out of the challenges that AI poses in view of the rise and growing
nature of information technology worldwide that has characterised business- and non-
business organisational development (Barzilay et al. (2002), Baxter et al. (2001),
Darwiche and Marquis (2002), Gao and Culberson (2002), Tennenholtz (2002) and
Wiewwiora (2003)). The necessity for research in AI is motivated by two factors:
(i) to give new entrants into the AI field an understanding of the basic
structure of the AI literature (Brooks (2001), Gamberger and Lavrac (2002), Kim (1995),
Kim and Kim (1995), Patel-Schneider and Sebastiani (2003) and Zanuttini (2003)). As
such, the literature discussed here answers the common query, “why must I study AI?”
(ii) the upsurge of interest in AI, which has prompted huge
investments in AI facilities. Interested researchers from all disciplines wish to be aware
of the work of others in their field, and share the knowledge gleaned over the years
(Rosati (1999), Kaminka et al. (2002), Bod (2002), Acid and De Campos (2003), Walsh
and Wellman (2003), Kambhampati (2000) and Barber (2000)). By sharing AI knowledge,
new techniques and approaches can be developed so that a greater understanding of
the field can be gained. To these ends, this paper has also been written for researchers
in AI so they can continue in their efforts aimed at developing this area of concentration
through newly generated ideas. Consequently, they would be able to push forward the
frontier of knowledge in AI. In the section that follows this paper presents a brief
explanation of some important areas in Artificial Intelligence. This is to introduce
readers to the wide-ranging topics that AI encompasses.
These descriptions only account for a selected number of areas:

1. Reasoning – Research on reasoning has evolved from the following dimensions:


case-based, non-monotonic, model, qualitative, automated, spatial, temporal and
common sense.

2. Genetic Algorithm (GA) – This is a search algorithm based on the mechanics of
natural selection and natural genetics. It is an iterative procedure maintaining a
population of structures that are candidate solutions to specific domain challenges
(see the sketch after this list).

3. Expert system – An expert system is computer software that can solve a narrowly
defined set of problems using information and reasoning techniques normally
associated with a human expert. It could also be viewed as a computer system that
performs at or near the level of a human expert in a particular field of endeavour.

4. Natural Language Generation (NLG) – NLG systems are computer software
systems that produce texts in English and other human languages, often from
non-linguistic input data. These systems need substantial amounts of knowledge
that is difficult to acquire.

5. Knowledge Representation (KR) – Knowledge bases are used to model application
domains and to facilitate access to stored information. Research on KR originally
concentrated on formalisms that are typically tuned to deal with relatively small
knowledge bases but provide powerful reasoning services and are highly expressive.
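As a concrete illustration of the iterative, population-based search described in point 2 above, here is a minimal genetic algorithm sketch (an illustrative toy on the "OneMax" problem of evolving an all-ones bit-string; it is not drawn from any of the papers reviewed here):

```python
import random

# Toy genetic algorithm: evolve a bit-string toward all ones ("OneMax").
GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 40, 0.02

def fitness(ind):                       # candidate solution = list of bits
    return sum(ind)                     # more ones -> fitter

def mutate(ind):
    return [1 - b if random.random() < MUTATION_RATE else b for b in ind]

def crossover(a, b):                    # single-point crossover
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP_SIZE // 2]       # selection: keep the fitter half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    pop = parents + children

print("best fitness:", max(fitness(ind) for ind in pop), "out of", GENOME_LEN)
```

The loop mirrors the textbook cycle the literature describes: evaluate the population, select the fitter structures, and recombine and mutate them to form the next generation.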

Fig 5: relationship among diverse fields of AI

CHAPTER – 4

COMPANY PROFILE

Fig. 6

4.1 Advanced Micro Devices (AMD)

Established: May 1969

Headquarters: United States of America

Address: 2485, Augustine Dr., Santa Clara,

California

Website: www.amd.com Fig. 7

Telephone: +1 (800) 307-657

Founder: Jerry Sanders

CEO: Lisa Su

Revenue: $22.68 Billion

Industry: Semiconductors, Artificial intelligence, GPUs, Graphics cards,

Consumer electronics, Video games, Computer hardware

Stock price: 179.67 USD

Number of employees: 26,000

Type: Public

Products: CPUs, GPUs, chipsets, microprocessors, drivers, etc.

Key People: Lisa Su (chair & CEO), Victor Peng (president), John Edward
Caldwell (lead independent director), Mark Papermaster (CTO)

4.2 NVIDIA

Established: Apr 1993

Headquarters: United States of

America

Address: 2788, Saint Thomas

Expy, Santa Clara, CA 95051 Fig. 8

Website: www.nvidia.com

Telephone: +1 (408) 486-2000

Founder: Jensen Huang, Chris Malachowsky, Curtis Priem

CEO: Jensen Huang

Revenue: $60.9 Billion

Industry: Computer hardware, Computer software, Cloud computing,

Semiconductors, Artificial intelligence, GPUs, Graphics cards, Consumer

electronics, Video games

Stock price: 926.50 USD

Number of employees: 29,600

Type: Public

Products: Graphic Processing Units (GPUs)

Key people: Jensen Huang (CEO and President), Bill Dally (Chief

Scientist)
4.3 Advanced Micro Devices, Inc. (AMD)

4.3.1 OVERVIEW:

Advanced Micro Devices, Inc. (AMD) is a large American company based in Santa Clara,
California, that makes computer hardware components. It makes many different
computer parts, but it is most famous for its central processing units (CPUs) and graphics
processing units (GPUs). Another important product line is motherboard chipsets for its
CPUs. AMD started as a company that made products for Intel, another large
hardware company and a competitor of AMD. In 2006, AMD bought ATI Technologies for
$4.3 billion in cash and 58 million shares of AMD stock. In 2020, AMD announced that it
was buying Xilinx, a company that makes circuits that can be reconfigured using
computer code (FPGAs).

Ryzen is AMD's brand name for its CPUs for normal use. Ryzen CPUs have between
2 and 64 cores and can achieve speeds above 5 gigahertz (GHz). Radeon is its brand
name for other computer products like GPUs, including computer parts made by other
companies that AMD puts its brand on (OEM).

4.3.2 HISTORY:

1969 – AMD was founded in 1969 by Walter Jeremiah (“Jerry”) Sanders, a former
executive at Fairchild Semiconductor Corporation, and seven others.
1970 – The company released its first product and went public two years later. In the
mid-1970s the company began producing computer chips. Starting out as a second-
source manufacturer of computer chips, the company placed a great emphasis on quality
and grew steadily.
1982 – The company began supplying second-source chips for Intel Corporation, which
made the microprocessor used in IBM personal computers (PCs).
1986 – The agreement with Intel ended in 1986.
1991 – AMD released the Am386 microprocessor family, a reverse-engineered chip that
was compatible with Intel’s next-generation 32-bit 386 microprocessor.

1994 – There ensued a long legal battle that was finally decided in a 1994 U.S. Supreme
Court ruling in AMD’s favour. That same year, Compaq Computer Corporation contracted
with AMD to produce Intel-compatible chips for their computers.
1996 – AMD acquired a microprocessor company known as NexGen and began
branching out from the Intel-compatible chip market.
2000 – AMD introduced the Athlon processor, which was designed to run the Microsoft
Corporation’s Windows operating system. With the release of the Athlon processor, AMD
became the first company to produce a 1-GHz (gigahertz) microprocessor, which marked
AMD as a serious competitor in the chip market.
2003 – The company released the Opteron chip, another product that showcased the
company's ability to produce high-end chips.
2006 – AMD absorbed ATI Technologies, a manufacturer of video graphics cards for use
in PCs.
2008 – AMD announced plans to split the company in two—with one part designing
microprocessors and the other manufacturing them. This announcement followed news
that the Advanced Technology Investment Company and the Mubadala Development
Company would acquire a controlling interest in AMD, pending approval by shareholders
and the U.S. and German governments.
2009 – The European Commission fined rival Intel a record €1.06 billion (£948 million;
$1.45 billion) for engaging in anticompetitive practices that violated the European Union’s
antitrust laws. These practices allegedly involved financially compensating and providing
rebates to manufacturers and retailers who favoured its computer chips over those of
AMD, as well as paying manufacturers to cancel or postpone the launching of products
utilizing AMD’s chips.
2014 – The company was restructured into two parts: computing and graphics, which
made processors for personal computers, and enterprise, embedded, and semi-custom,
which made more-specialized processors.

4.3.3 PRODUCTS:

1. Ryzen Processors: Known for their high performance and efficiency, catering to
desktop, laptop, and server markets.
2. Radeon Graphics Cards: Offering competitive graphics performance for gaming and
professional applications.

3. EPYC Server Processors: Designed for data centers and enterprise computing,
offering high core counts and performance for server workloads.

4. Ryzen Threadripper CPUs: Targeted at high-end desktop users and content creators,
offering exceptional multi-threaded performance.

5. Ryzen Mobile Processors: Powering laptops and ultrabooks with a balance of


performance and power efficiency.

6. Radeon Instinct Accelerators: Aimed at AI, machine learning, and high-performance


computing workloads.

7. A-Series APUs: Combining CPU and GPU cores on a single chip, targeting
mainstream desktop and laptop markets.

8. Radeon Pro Graphics: Optimized for professional workflows such as content creation,
engineering, and scientific computing.

4.3.4 MARKET POSITION:

AMD has solidified its position as a key player in the semiconductor industry, particularly
in the CPU and GPU markets. With its Ryzen processors, AMD has successfully
challenged Intel's dominance, offering competitive performance and value across
various segments. In the GPU market, while Nvidia maintains a strong presence, AMD's
Radeon graphics cards have provided viable alternatives, especially in the mid-range
and budget segments. Additionally, AMD's efforts in the data center and AI markets with
products like EPYC server processors and Radeon Instinct accelerators have shown
promise, although competition remains fierce. Through strategic partnerships and
innovative product offerings, AMD continues to expand its market presence, although it
faces ongoing challenges from industry rivals.

Financial Position:

 4th quarter revenue – $6.2 billion, an increase of 6% year-over-year


 4th quarter gross margin was 47%, an increase of 4 points year-over-year
 4th quarter EPS – net profit of $0.41 per share, up from $0.18 in the previous quarter
 Full year revenue was $22.7 billion, a decrease of 4% year-over-year

4.4 NVIDIA Corporation

4.4.1 OVERVIEW:

Nvidia Corporation is an American multinational corporation based in Santa Clara,
California. It makes graphics processing technologies for computers and mobile
devices like smartphones. The company supplies electronic chips for motherboard
chipsets, smartphone graphics controllers, graphics processing units, and game consoles.
Nvidia product lines include GeForce, Quadro, and nForce (chipsets).

In 2023 it was said to be the world’s most valuable chipmaker. Demand for its artificial
intelligence (AI) chips more than doubled its income in 2023. Its stock market value
jumped to more than $1 trillion.

4.4.2 HISTORY:

NVIDIA Corporation, founded in 1993 by Jensen Huang, Chris Malachowsky, and Curtis
Priem, has a rich history marked by significant technological innovations and industry
milestones:

1. Founding Years (1993-1999):


 NVIDIA was established in 1993 with the goal of developing advanced graphics
processing units (GPUs) for gaming and professional applications.
 In 1999, NVIDIA launched its breakthrough product, the GeForce 256 GPU,
which introduced hardware-based transform and lighting capabilities,
revolutionizing 3D graphics rendering in gaming.

2. Early Success and Expansion (2000s):


 Throughout the early 2000s, NVIDIA solidified its position as a leading GPU
manufacturer, releasing a series of successful GeForce graphics cards.
 NVIDIA expanded its product offerings to include professional visualization
solutions with the Quadro line of GPUs, catering to industries such as film
production, design, and engineering.
 In 2006, NVIDIA entered the mobile computing market with the introduction of the
Tegra line of mobile processors, targeting smartphones, tablets, and other
portable devices.

3. Advancements in GPU Technology (2010s):


 NVIDIA continued to innovate in GPU technology, introducing new architectures
such as Fermi, Kepler, Maxwell, and Pascal, each offering significant
improvements in performance, power efficiency, and feature sets.
 The company expanded its presence in high-performance computing (HPC) and
data centers, with GPUs being used for parallel processing tasks like scientific
simulations, deep learning, and AI inference.

 In 2016, NVIDIA launched the GeForce GTX 10 series GPUs based on the Pascal
architecture, delivering substantial performance gains for gaming and VR
applications.

4. AI and Autonomous Vehicles (2010s-2020s):


 NVIDIA became a key player in artificial intelligence (AI) and machine learning
with the development of CUDA parallel computing platform and libraries, enabling
accelerated computing for AI workloads.
 The company's GPUs were widely adopted in AI research, training, and inference
tasks, driving advancements in areas like natural language processing, computer
vision, and autonomous driving.
 NVIDIA's DRIVE platform emerged as a leading solution for autonomous vehicle
development, providing hardware and software solutions for perception, mapping,
planning, and control.

5. Recent Developments (2020s):


 In 2020, NVIDIA announced its intention to acquire Arm Holdings from SoftBank
Group, a deal with significant implications for the semiconductor industry; it was
ultimately abandoned in early 2022 after regulatory opposition.
 The company continues to innovate with the launch of new GPU architectures like
Ampere, targeting gaming, data center, and AI applications, with a focus on
performance, efficiency, and AI acceleration.

Throughout its history, NVIDIA has remained at the forefront of GPU technology, driving
advancements in gaming, professional visualization, AI, and autonomous systems. Its
commitment to innovation and strategic partnerships has solidified its position as a
leading semiconductor company with a global impact.

4.4.3 PRODUCTS AND SOLUTIONS:

NVIDIA offers a diverse range of products and solutions across multiple industries,
leveraging its expertise in graphics processing units (GPUs), artificial intelligence (AI),
and high-performance computing (HPC):

1. Graphics Processing Units (GPUs):


 GeForce GTX/RTX GPUs: Designed for gaming enthusiasts, these GPUs deliver
high-performance graphics and real-time ray tracing capabilities for immersive
gaming experiences.
 Quadro GPUs: Tailored for professional workstations, Quadro GPUs provide
advanced visualization and compute capabilities for tasks such as 3D rendering,
CAD/CAM, and scientific simulations.
 Tesla GPUs: Optimized for high-performance computing and AI workloads, Tesla
GPUs accelerate scientific computing, data analytics, and deep learning tasks in
data centers and research institutions.

2. Data Center Solutions:


 NVIDIA DGX Systems: Integrated AI platforms that combine powerful GPUs with
purpose-built software for accelerated AI training and inference tasks in data
centers.
 NVIDIA AI Enterprise: AI software suite for deploying and managing AI workloads
in enterprise data centers, providing tools for training, inference, and data
analytics.
 NVIDIA Mellanox Networking: High-performance networking solutions for data
centers, including InfiniBand and Ethernet technologies for fast, low-latency
communication between servers and storage systems.

3. AI and Deep Learning Platforms:
 NVIDIA GPU Cloud (NGC): Cloud-based platform that provides GPU-optimized
containers, software frameworks, and pre-trained models for accelerating AI and
HPC workloads in the cloud.
 NVIDIA CUDA: Parallel computing platform and programming model that enables
developers to harness the power of NVIDIA GPUs for accelerating scientific
simulations, data analytics, and AI applications (see the sketch after this list).
 NVIDIA Jarvis: AI-powered conversational AI platform for building virtual
assistants, chatbots, and natural language processing applications with advanced
speech recognition and language understanding capabilities.
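As a minimal illustration of the CUDA programming model mentioned above (a sketch assuming a CUDA-capable GPU and the third-party numba package; it is not NVIDIA sample code), each of many lightweight GPU threads handles one element of a vector sum:

```python
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)                  # this thread's global index
    if i < out.size:                  # guard: more threads than elements
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks, threads_per_block](a, b, out)   # numba moves data to the GPU

assert np.allclose(out, a + b)
```

The same grid-of-threads model underlies the scientific-simulation, analytics, and AI workloads the CUDA platform accelerates.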

4. Autonomous Vehicle Solutions:


 NVIDIA DRIVE AGX: AI computing platform for autonomous vehicles, providing
hardware and software solutions for perception, localization, mapping, and
planning tasks in self-driving cars and trucks.
 NVIDIA DRIVE Constellation: Simulation platform for testing and validating
autonomous vehicle algorithms in virtual environments, enabling safe and
scalable development of autonomous driving systems.

5. Gaming Technologies:
 NVIDIA GeForce NOW: Cloud gaming service that allows users to stream PC
games to various devices, providing access to a vast library of games without the
need for high-end gaming hardware.
 NVIDIA Reflex: Technology designed to reduce system latency in competitive
gaming, providing faster response times for improved gameplay and competitive
advantage.

4.4.4 MARKET POSITION

Nvidia remains a dominant force in the semiconductor industry, particularly in the GPU
market. With its GeForce lineup, Nvidia has long been the preferred choice for high-
performance gaming and graphics processing, maintaining a strong foothold in both
consumer and professional markets. Additionally, Nvidia's Tesla GPUs have established
themselves as a go-to solution for data centers, AI, and high-performance computing
tasks, further solidifying Nvidia's position in these lucrative segments. Nvidia's CUDA
platform has become a standard for parallel computing, further enhancing its ecosystem
and market influence. Despite facing competition from rivals like AMD and Intel, Nvidia's
relentless innovation, strategic partnerships, and diversified product portfolio continue to
sustain its leading market position.

Financial position:

 4th quarter revenue – $22.1 billion, up 22% from Q3 and up 265% year-over-year
 4th quarter gross margin was 72.7%
 4th quarter EPS increased over the past week from 5.10 to 5.13 (up 0.59%)
 Full year revenue was $60.92 billion for the fiscal year ended January 2024

Fig. 9: Battle of the Graphics

CHAPTER – 5

RESEARCH METHODOLOGY

Fig. 10

This Research Methodology chapter describes research methods, approaches and
designs in detail, highlighting those used throughout the study and justifying the choices
made by describing the advantages and disadvantages of each approach and design,
considering their practical applicability to this research.

Research can be defined as “an activity that involves finding out, in a more or less
systematic way, things you did not know” (Walliman and Walliman, 2011, p.7).
“Methodology is the philosophical framework within which the research is conducted or
the foundation upon which the research is based” (Brown, 2006). O’Leary (2004, p.85)
describes methodology as the framework which is associated with a particular set of
paradigmatic assumptions that we will use to conduct our research. Allan and Randy
(2005) insist that a research methodology should meet certain criteria.

A research method is a systematic plan for conducting research. Sociologists draw on a


variety of both qualitative and quantitative research methods, including experiments,
survey research, participant observation, and secondary data. Quantitative methods aim
to classify features, count them, and create statistical models to test hypotheses and
explain observations. Qualitative methods aim for a complete, detailed description of
observations, including the context of events and circumstances.

This project is based on survey research, which is primary in nature.

A survey is a research method in which subjects respond to a series of statements or


questions in a questionnaire or an interview. Surveys target some population, which are
the people who are the focus of research. Because populations are usually quite large,
the researcher will target a sample, which is a part of a population that represents the
whole.

Once our sample is selected, we need a plan for asking questions and recording
answers. The most common types of surveys are questionnaires and interviews. A
questionnaire is a series of written statements or questions.

With an interview, the researcher personally asks subjects a series of questions and
gives participants the freedom to respond as they wish. Both questionnaires and
interviews can include open-ended questions (allowing the subjects to respond freely),
or close-ended questions (including a selection of fixed responses).

The current project report has been prepared using both primary as well as secondary
data sources.

Primary Research

Primary research is defined as a methodology used by researchers to collect data


directly, rather than depending on data collected from previously done research.

Technically, they “own” the data. Primary research is solely carried out to address a
certain problem, which requires in-depth analysis.

Businesses or organizations can themselves conduct primary research or can employ a


third party to conduct research on their behalf. One major advantage of primary research
is that it is "pinpointed": research is carried out around only a specific issue or problem,
and all the focus is directed to obtaining related solutions.

In this technology-driven world, meaningful data is more valuable than gold.


Organizations or businesses need highly validated data to make informed decisions.
This is the very reason why many companies are proactive in gathering their own data, so
that the authenticity of the data is maintained and they get first-hand data without any
alterations.

Here are some of the primary research methods organizations or businesses use to
collect data:

1. Interviews (telephonic or face-to-face): Conducting interviews is a qualitative


research method to collect data and has been a popular method for ages. These
interviews can be conducted in person (face-to-face) or over the telephone.
Interviews are an open-ended method that involves dialogue or interaction between the
interviewer (researcher) and the interviewee (respondent). Conducting a face-to-face
interview is said to generate a better response from respondents as it is a more
personal approach. However, the success of a face-to-face interview depends heavily
on the researcher's ability to ask questions and his/her experience in conducting
such interviews in the past. The types of questions that are used in this type of
research are mostly open-ended questions. These questions help to gain in-depth
insights into opinions and perceptions of respondents.

2. Online surveys: Once conducted with pen and paper, surveys have come a long way
since then. Today, most researchers use online surveys, sending them to respondents
to gather information. Online surveys are convenient and can be sent by email or
filled out online, and they can be accessed on handheld devices like smartphones
and tablets. Once a survey is deployed, respondents are given a stipulated amount
of time to answer the survey questions and send them back to the researcher. In
order to get maximum information from respondents, surveys should have a good
mix of open-ended and close-ended questions. A survey should not be lengthy;
otherwise, respondents lose interest and tend to leave it half done.

3. Focus groups: This popular research technique is used to collect data from a small
group of people, usually restricted to 6-10. A focus group brings together people who
are experts in the subject matter for which research is being conducted. A focus
group has a moderator who stimulates discussion among the members to gain
greater insights. Organizations and businesses can make use of this method
especially to identify a niche market and to learn about a specific group of consumers.

4. Observations: In this primary research method, there is no direct interaction between


the researcher and the person/consumer being observed. The researcher observes the
reactions of a subject and makes notes. Trained observers or cameras are used to
record reactions. Observations are noted in a predetermined situation. For example, if
a bakery brand wants to know how people react to its new biscuits, an observer notes
the first reactions of consumers and evaluates the collective data to draw inferences.

Advantages of Primary Research:

 Data collected is first hand and is accurate. In other words, there is no dilution of
data.
 Primary research focuses mainly on the problem at hand, which means entire attention
is directed to finding a probable solution to a pinpointed subject matter.
 Data collected can be controlled. Primary research gives a means to control how
data is collected and used.
 Primary research is a time-tested method; therefore, one can rely on the results that
are obtained from conducting this type of research.

Disadvantages of Primary Research:

 One of the major disadvantages of primary research is that it can be quite expensive to
conduct. One may be required to spend a huge sum of money depending on the
setup or primary research method used. Not all businesses or organizations may be
able to spend a considerable amount of money.
 This type of research can be time-consuming. Conducting interviews, sending and
receiving online surveys can be quite an exhaustive process and needs an investment
of time and patience for the process to work. Moreover, evaluating results and applying the
findings to improve product or service will need additional time.
 Sometimes just using one primary research method may not be enough. In such
cases, the use of more than one method is required, and this might increase both the
time required to conduct the research and the cost associated with it.

For gathering the primary data, the target population has been identified as the clients
dealing with the organization in different departments.

Sample size: 100 respondents

Sampling technique: Simple random sampling with judgement sampling

100 clients were taken into the sample based upon the judgement of the researcher, and
these clients were randomly selected on the basis of their interaction with the organization.

Data Collection: The primary research has been based on data collected from the
identified respondents using a self-structured questionnaire. The questionnaire contains
statements regarding the details of the respondents and their opinion on the impact of
the uses of AI in modern world.

Research Tool: MS-Excel

Research techniques: Frequency analysis and Graphical analysis
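Although the tabulation for this report was done in MS-Excel, the same frequency analysis can be expressed in a few lines of Python (a hypothetical sketch with made-up responses, not the study's actual data):

```python
from collections import Counter

# Hypothetical survey responses; the study's real data lived in an Excel sheet.
responses = ["Yes", "No", "Yes", "Maybe", "Yes", "No", "Yes", "Yes", "Maybe", "No"]

counts = Counter(responses)                        # frequency analysis
total = sum(counts.values())
for answer, freq in counts.most_common():
    print(f"{answer}: {freq} ({100 * freq / total:.0f}%)")   # percentage column
```

Each table in Chapter 7 is this computation applied to one question, with the resulting percentages then plotted for the graphical analysis.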

Secondary Research

Secondary research or desk research is a research method that involves using already
existing data. Existing data is summarized and collated to increase the overall
effectiveness of research. Secondary research includes research material published in
research reports and similar documents. These documents can be made available by
public libraries, websites, data obtained from already filled in surveys etc. Some
government and non-government agencies also store data that can be used for research
purposes and can be retrieved from them.

Secondary research is much more cost-effective than primary research, as it makes use
of already existing data, unlike primary research where data is collected first hand by
organizations or businesses, or they can employ a third party to collect data on their
behalf.

Secondary research is cost-effective, and that is one of the reasons it is a popular
choice among many businesses and organizations. Not every organization is able
to pay a huge sum of money to conduct research and gather data. This is also why
secondary research is termed "desk research": the data can be retrieved while
sitting behind a desk.

Following are popularly used secondary research methods and examples:

1. Data available on the internet: One of the most popular ways of collecting secondary
data is using the internet. Data is readily available on the internet and can be
downloaded at the click of a button. This data is practically free of cost, or one may
have to pay a negligible amount to download the already existing data. Websites
have a lot of information that businesses or organizations can use to suit their
research needs. However, organizations need to consider only authentic and trusted
websites when collecting information.

2. Govt and non-govt agencies: Data for secondary research can also be collected from
some government and nongovernment agencies. For example, US Government
Printing Office, US Census Bureau, and Small Business Development Centers have
valuable and relevant data that businesses or organizations can use. There is a
certain cost applicable to download or use data available with these agencies. Data
obtained from these agencies are authentic and trustworthy.

3. Public libraries: Public libraries are another good source to search for data for
secondary research. Public libraries have copies of important research that were
conducted earlier. They are a storehouse of important information and documents
from which information can be extracted. The services provided in these public
libraries vary from one library to another. More often, libraries have a huge collection
of government publications with market statistics, large collection of business
directories and newsletters.

4. Educational institutions: Importance of collecting data from educational institutions


for secondary research is often overlooked. However, more research is conducted in
colleges and universities than any other business sector. The data that is collected
by universities is mainly for primary research. However, businesses or organizations
can approach educational institutions and request for data from them.

5. Commercial information sources: Local newspapers, journals, magazines, radio and
TV stations are a great source to obtain data for secondary research. These
commercial information sources have firsthand information on economic
developments, political agenda, market research, demographic segmentation and
similar subjects. Businesses or organizations can request to obtain data that is most
relevant to their study. Businesses not only have the opportunity to identify their
prospective clients but can also know about the avenues to promote their products
or services through these sources as they have a wider reach.

This project has considered data available from all the above given sources of secondary
data.

CHAPTER – 6

LIMITATIONS OF THE STUDY

Fig. 11

Primary research, while invaluable for generating firsthand data and insights, also
comes with its own set of limitations:

1. Cost and Time: Conducting primary research can be expensive and time-consuming.
It involves various expenses such as participant recruitment, data collection tools,
incentives, and researcher time. Depending on the scope and scale of the research,
the costs and time required can be substantial.

2. Resource Intensive: Primary research often requires a significant allocation of


resources, including human resources for tasks such as survey administration, data
collection, and analysis. This can strain organizational budgets and manpower.

3. Sampling Bias: There's a risk of sampling bias, where the sample chosen for the
research may not be representative of the larger population. This can occur due to
various factors such as sampling methods, non-response bias, or self-selection bias.

4. Limited Scope: Primary research typically focuses on specific objectives or


questions, which may not provide a comprehensive understanding of the topic under
investigation. Researchers may miss out on valuable insights that could have been
obtained through broader or more diverse research methods.

5. Validity and Reliability: Ensuring the validity and reliability of primary research
findings can be challenging. Factors such as researcher bias, respondent bias,
measurement error, and sampling error can affect the accuracy and credibility of the
results.

6. Ethical Considerations: Researchers must adhere to ethical guidelines when


conducting primary research, particularly when involving human participants. This
includes obtaining informed consent, protecting participants' confidentiality, and
ensuring that the research does not cause harm or discomfort.

7. Subjectivity: Despite efforts to maintain objectivity, primary research can still be


influenced by the researcher's biases, perspectives, and interpretations. This
subjectivity may impact the way data is collected, analysed, and reported.

8. Generalizability: Findings from primary research may not always be generalizable to


broader populations or contexts. This limitation is particularly relevant in qualitative
research, where the emphasis is on understanding specific phenomena in depth
rather than making generalizable claims.

CHAPTER – 7

FINDINGS AND ANALYSIS

Fig. 12

ANALYSIS BASED ON QUESTIONNAIRE

Q1. Are you familiar with the concept of Artificial Intelligence?

Response   Frequency   Percentage
Yes        68          68%
No         20          20%
Maybe      12          12%
Table 1: familiarity with AI

In the above table, 68% of the respondents are familiar with Artificial Intelligence,
and the same has been represented graphically below:

Fig. 13: pie chart of responses (Yes 68%, No 20%, Maybe 12%)

Q2. In which areas do you think AI will have the biggest impact?

Industry         Frequency   Percentage
Healthcare       90          90%
Transportation   50          50%
Finance          40          40%
Education        60          60%
Entertainment    20          20%
Table 2: which area will have the biggest impact?

The analysis shows that, out of the total respondents, the majority of the people
or clients think that AI will have the biggest impact on the healthcare sector,
followed by education. The same has been represented below graphically.

Fig. 14: bar chart of responses by industry (Healthcare 90, Education 60, Transportation 50, Finance 40, Entertainment 20)

Q3. Which of the following AI technologies have you used?

Technology           Frequency   Percentage
Machine learning     60          60%
NLP                  50          50%
Computer vision      65          65%
Robotics             55          55%
Chatbots             70          70%
Virtual assistants   80          80%
Table 3: what technologies are being used?

It is evident from the table that the most popular AI technology being used is
virtual assistants like Siri (Apple), Alexa (Amazon), Google Assistant, Bixby, etc.,
for purposes like customer service, data entry, reducing costs, etc.

Fig. 15: bar chart of AI technologies used (Virtual assistants 80, Chatbots 70, Computer vision 65, Machine learning 60, Robotics 55, NLP 50)

Q4. Do you think AI can improve the learning experience for students?

Response   Frequency   Percentage
Yes        60          60%
No         12          12%
Maybe      28          28%
Table 4: if learning experience can be improved

The table given above shows that 60% of the respondents agree that AI can
improve the learning experience for students, since it can improve students'
overall performance and boost their motivation. The same has been shown
graphically below:

Fig. 16: pie chart of responses (Yes 60%, No 12%, Maybe 28%)

Q5. Which of the following areas can AI help with in benefitting the students? (select all
that apply)

Area                             Frequency   Percentage
Personalized Learning            95          95%
Automated Grading                75          75%
Tutoring Support                 80          80%
Career Guidance                  50          50%
Educational Content Generation   60          60%
Table 5: what areas can be benefitted?

It is clear from the table given above that Personalized Learning (95%) is the
most chosen area in which AI can benefit students, followed by Tutoring
Support (80%) and Automated Grading (75%).
Fig. 17: bar chart of responses by area (Personalized Learning 95, Tutoring Support 80, Automated Grading 75, Educational Content Generation 60, Career Guidance 50)

Q6. Would you consider pursuing a career in AI or related fields?

Response   Frequency   Percentage
Yes        62          62%
No         23          23%
Maybe      15          15%
Table 6: pursue career in AI or related fields?

The table given above indicates that 62% of the respondents would consider
pursuing a career in AI or related fields.

Fig. 18: pie chart of responses (Yes 62%, No 23%, Maybe 15%)

Q7. Do you think AI will replace human jobs in the future?

Response   Frequency   Percentage
Yes        70          70%
No         16          16%
Maybe      14          14%
Table 7: will AI replace human jobs in future?

The table above shows that, out of the total respondents, 70% agree that AI will
replace human jobs in the future, since it can perform repetitive tasks and tasks
that are currently difficult or impossible for humans to do.

Fig. 19: pie chart of responses (Yes 70%, No 16%, Maybe 14%)

Q8. How did you come to know about AMD and NVIDIA?

Response                Frequency   Percentage
Newspaper/magazine      20          20%
Digital/social media    50          50%
Friends/word of mouth   25          25%
Tele-calling            5           5%
Table 8: Awareness about the companies

The table given above has been interpreted to highlight the marketing elements which
have helped the most in creating awareness about the companies.

Fig. 20: pie chart of responses (Digital/social media 50%, Friends/word of mouth 25%, Newspaper/magazine 20%, Tele-calling 5%)

Q9. Are you following AMD and NVIDIA on social media platforms?

Response   Frequency   Percentage
Yes        60          60%
No         40          40%
Table 9: following on social media platforms

According to the responses, most respondents (60%) are following AMD and
NVIDIA on social media platforms, highlighting the popularity and importance of
this medium for brand awareness of graphics cards.

Fig. 21: pie chart of responses (Yes 60%, No 40%)

Q10. Would you recommend AMD and NVIDIA to friends and family?

Response   Frequency   Percentage
Yes        85          85%
No         15          15%
Table 10: recommendation to friends and family

Fig. 22: pie chart of responses (Yes 85%, No 15%)

FINDINGS BASED ON THE STUDY

7.1 ROLE OF AI IN VIDEO GAMES

Artificial intelligence (AI) has had a significant impact on the gaming industry in recent
years, with many games now incorporating AI to enhance gameplay and make it more
immersive for players.

One common use of AI in gaming is in the control of non-player characters (NPCs).


These characters can interact with players in a more realistic and dynamic way, adding
to the immersion of the game.

For example, NPC characters might have their own goals and motivations that they
pursue, or they might react differently to different player actions. This can make the game
feel more alive and believable, as players feel like they are interacting with real
characters rather than just programmed entities.

AI is also being used in game design to create more dynamic and interesting levels and
content. This can help developers create more diverse and engaging games with less
effort. For example, AI might be used to design game levels that are procedurally
generated, meaning that they are created on the fly as the player progresses through
the game. This can help keep the game fresh and interesting for players, as they are not
simply playing through the same levels over and over again.
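To make the procedural-generation idea concrete, here is a minimal sketch in Python
that carves a random walk through a grid to produce a new level layout for every seed.
The function and its parameters are hypothetical illustrations, not any engine's actual
API.

    import random

    # Toy procedural level generator: carve a random walk through a wall-filled grid.
    def generate_level(width=20, height=10, steps=80, seed=None):
        rng = random.Random(seed)
        grid = [["#"] * width for _ in range(height)]   # "#" = wall, "." = floor
        x, y = width // 2, height // 2                  # start carving from the centre
        for _ in range(steps):
            grid[y][x] = "."
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x = min(max(x + dx, 1), width - 2)          # stay inside the outer walls
            y = min(max(y + dy, 1), height - 2)
        return "\n".join("".join(row) for row in grid)

    print(generate_level(seed=42))  # a different layout for every seed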

7.2 AI IN GRAPHICS CARDS

Artificial intelligence (AI) has significantly impacted graphics cards and their functionality
in recent years, particularly in enhancing performance, efficiency, and user experience.
AI enters graphics cards primarily through technologies like NVIDIA's Tensor Cores and
AMD's AI-accelerated features: specialized hardware units designed to accelerate AI
workloads, including deep learning inference and training tasks. In gaming, AI-based
technologies like NVIDIA's DLSS (Deep Learning Super Sampling) use neural networks
to upscale lower-resolution images in real time, producing higher-quality visuals while
maintaining smooth frame rates.
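As a rough sketch of the idea behind neural upscaling (not NVIDIA's actual DLSS, which
is proprietary and far more sophisticated), a toy PyTorch model can enlarge a frame with
bilinear interpolation and then predict a learned residual correction:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyUpscaler(nn.Module):
        """Toy super-resolution network: upscale, then refine with a small CNN."""
        def __init__(self, scale=2):
            super().__init__()
            self.scale = scale
            self.refine = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3, 3, padding=1),
            )

        def forward(self, low_res):
            up = F.interpolate(low_res, scale_factor=self.scale,
                               mode="bilinear", align_corners=False)
            return up + self.refine(up)  # learned correction on top of the cheap upscale

    frame = torch.rand(1, 3, 540, 960)   # one 960x540 rendered frame (random stand-in)
    print(TinyUpscaler()(frame).shape)   # -> torch.Size([1, 3, 1080, 1920])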

Role of AI in NVIDIA and AMD:

The new NVIDIA RTX GPU and AMD CPU-powered AI workstations provide the power
and performance required for training smaller AI models and for local fine-tuning,
helping to offload AI development tasks from data center and cloud resources. The
devices let users select single- or multi-GPU configurations as required for their
workloads.

Smaller trained AI models also provide the opportunity to use workstations for local
inferencing. RTX GPU and AMD CPU-powered workstations can be configured to run
these smaller AI models for inference serving for small workgroups or departments.

With up to 48GB of memory in a single NVIDIA RTX GPU, these workstations offer a
cost-effective way to reduce compute load on data centers. And when professionals do
need to scale training and deployment from these workstations to data centers or the
cloud, the NVIDIA AI Enterprise software platform enables seamless portability of
workflows and toolchains.

RTX GPU and AMD CPU-powered workstations also enable cutting-edge visual
workflows. With accelerated computing power, the new workstations enable highly
interactive content creation, industrial digitalization, and advanced simulation and
design.

7.3 EMERGING TRENDS IN AI

1. Multimodal AI: It goes beyond traditional single-mode data processing to encompass
multiple input types, such as text, images and sound - a step toward mimicking the
human ability to process diverse sensory information.
2. Agentic AI: Unlike traditional AI systems, which mainly respond to user inputs and
follow predetermined programming, AI agents are designed to understand their
environment, set goals and act to achieve those objectives without direct human
intervention.
3. Open-Source AI: To be Open-Source, an AI system needs to make its components
available under licenses that individually grant the freedoms to study how the system
works and inspect its components, use the system for any purpose and without
having to ask for permission, modify the system to change its recommendations,
predictions or decisions to adapt to your needs, and share the system with or without
modifications, for any purpose.
4. Retrieval-augmented generation (RAG): It has emerged as a technique for reducing
hallucinations, with potentially profound implications for enterprise AI adoption. RAG
blends text generation with information retrieval to enhance the accuracy and
relevance of AI-generated content (see the sketch after this list).
5. Customized enterprise generative AI models: Rather than building a new model from
the ground up or relying on API calls to a public LLM, most organizations customize
generative AI by modifying existing models -- for example, tweaking their architecture
or fine-tuning them on a domain-specific data set -- which can be cheaper than either
alternative.
6. Shadow AI: Shadow AI typically arises when employees need quick solutions to a
problem or want to explore new technology faster than official channels allow. This
is especially common for easy-to-use AI chatbots, which employees can try out in
their web browsers with little difficulty -- without going through IT review and approval
processes.
7. Generative AI: It is a type of artificial intelligence technology that can produce various
types of content, including text, imagery, audio and synthetic data.
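To show what RAG (item 4 above) looks like in practice, here is a minimal, self-contained
Python sketch. The retrieve and generate functions are deliberately naive stand-ins
(keyword-overlap scoring and an echoing "model"); a real system would use a vector
database and an LLM API instead.

    # Minimal retrieval-augmented generation (RAG) pipeline sketch.
    def retrieve(query, documents, top_k=2):
        """Rank documents by naive keyword overlap with the query."""
        def score(doc):
            return len(set(query.lower().split()) & set(doc.lower().split()))
        return sorted(documents, key=score, reverse=True)[:top_k]

    def generate(prompt):
        """Stand-in for an LLM call; just echoes the grounded prompt."""
        return "[model answer grounded in]\n" + prompt

    documents = [
        "NVIDIA's data center revenue is driven by demand for AI GPUs.",
        "AMD's EPYC CPUs compete with Intel in the server market.",
        "GPUs accelerate deep learning training and inference.",
    ]

    query = "What drives NVIDIA data center revenue?"
    context = "\n".join(retrieve(query, documents))
    print(generate("Context:\n" + context + "\n\nQuestion: " + query))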

7.4 FUTURE PREDICTIONS FOR AI

Two of the hottest topics in AI today are agents and artificial general intelligence (AGI).

Agents are AI systems that can complete loosely defined tasks: say, planning and
booking your upcoming trip. AGI refers to an artificial intelligence system that meets or
exceeds human capabilities on every dimension.

When people envision the state of AI in 2030, agents and/or AGI are often front and
center.

Yet we predict that these two terms won't even be widely used by 2030, because they
will have ceased to be relevant as independent concepts.

By 2030, AI will be unfathomably more powerful than humans in ways that will transform
our world. It will also continue to lag human capabilities in other ways. If an artificial
intelligence can understand and explain every detail of human biology down to the
atomic level, who cares if it is “general” in the sense of matching human capabilities
across the board?

The concept of artificial general intelligence is not particularly coherent. As AI races
forward in the years ahead, the term will become increasingly unhelpful and irrelevant.

7.5 WILL AI REPLACE HUMANS IN THE FUTURE?

Tom Cruise's films Oblivion and Edge of Tomorrow depict machines acting and thinking
like humans. AI applications work faster, with greater operational efficiency and
accuracy, and with better decision-making than humans in some settings. In this sense,
artificial intelligence closely mimics human intelligence in understanding, reasoning,
and learning. However, these innovations have consequences: significant
advancements let AI outperform humans in specific tasks, which challenges the scope
of human intelligence. It also means that the future development of artificial intelligence
requires experts, creating various career opportunities.

Now the bigger question arises: will AI replace humans in the future?

No, AI will not replace human intelligence, as it is humans who develop AI
applications through programming and algorithms. Automation makes it easier to
replace manual labour, and today, in every sector, these AI technologies are making it
easier to complete complex tasks. There are several reasons supporting this:

1. Emotional intelligence: One distinctive quality that keeps people relevant is
emotional intelligence. While artificial intelligence (AI) aims to mirror human
intelligence, emotional intelligence is more difficult to duplicate than intellectual
intelligence, because empathy requires an in-depth awareness of the human
condition, including pain and suffering, which AI is incapable of feeling.
2. Limited creative process: Since AI can only operate on the information it is given,
it lacks the human ability to generate original ideas and methods of accomplishing
work. As a result, it is limited to predefined frameworks and unable to develop new
strategies, techniques, or behavioural patterns.
3. Humans make AI: Artificial intelligence is intelligence created by humans. Humans
provide the data and create the algorithms that AI machines and applications use to
operate, and individuals are the ones who use these devices. As AI applications
grow, so will the demand for human services: someone is still required to construct,
manage, and maintain the AI systems behind these machines.
4. Require a fact check: Artificially intelligent chatbots like ChatGPT and other AI
content-generating applications often make mistakes and need human editors to
double-check their facts, which remains a major issue.
5. Soft skills are not for AI: Soft skills are essential for humans; they include
collaboration, attention to detail, analytical and imaginative thinking, effective
communication, and interpersonal skills. Every industry needs these soft skills, so
you must acquire them if you want to thrive in your career. These are talents that
humans learn and are expected to have, whereas they remain very hard to
implement in AI frameworks.

CHAPTER – 8

SWOT ANALYSIS

Fig. 23

8.1 AMD (Advanced Micro Devices)
STRENGTHS:

1. Product Portfolio: AMD offers a diverse range of products including CPUs, GPUs,
and semi-custom chips for gaming consoles, which diversifies its revenue streams
and reduces dependency on any one product line.
2. Technological Innovation: AMD has made significant strides in CPU and GPU
technologies, especially with its Ryzen and Radeon product lines, offering
competitive performance and efficiency compared to its competitors.
3. Partnerships and Alliances: Collaborations with companies like Microsoft and Sony
for providing chips for their gaming consoles have strengthened AMD’s market
position.
4. Strong Financial Performance: AMD has shown consistent revenue growth and
improved profitability in recent years, indicating its ability to effectively compete in the
market.

WEAKNESSES:

1. Dependency on PC Market: AMD's revenue is significantly reliant on the PC market,
making it vulnerable to fluctuations in PC demand and market trends.
2. Limited Resources: Despite its growth, AMD has limited resources compared to its
primary competitors, which might constrain its ability to invest in research and
development or expand its market presence.
3. Brand Perception: While AMD’s products have improved significantly in recent years,
it still faces challenges in brand perception and market positioning compared to
established competitors like Intel and Nvidia.

OPPORTUNITIES:

1. Growth in Data Centers: With increasing demand for data centers and cloud
computing, there is an opportunity for AMD to expand its presence in the server CPU
market, where it competes with Intel.
2. Emerging Technologies: Advancements in technologies such as artificial intelligence,
machine learning, and autonomous vehicles present opportunities for AMD to
develop specialized chips and solutions tailored to these markets.
3. Strategic Partnerships: Forming strategic alliances with companies in emerging
markets or industries can help AMD expand its market reach and enhance its product
offerings.
4. Expansion in Emerging Markets: There is potential for AMD to further penetrate
emerging markets where demand for computing and graphics solutions is growing
rapidly.

THREATS:

1. Competition: Intense competition from established players like Intel and Nvidia, as
well as emerging competitors, could potentially erode AMD’s market share and
profitability.
2. Market Saturation: The PC market, which is a significant revenue source for AMD,
may become saturated or experience stagnant growth, limiting the company’s
potential for expansion.
3. Supply Chain Disruptions: Any disruptions in the supply chain, such as shortages of
key components or geopolitical tensions affecting manufacturing, could impact
AMD’s ability to meet demand and deliver products to customers.
4. Technological Shifts: Rapid technological advancements and shifts in consumer
preferences could render AMD’s existing products obsolete or less competitive,
requiring continuous innovation and adaptation.

8.2 NVIDIA
STRENGTHS:

1. Market Leadership: Nvidia is a market leader in the GPU segment, with its GeForce
products dominating the gaming market and its Tesla GPUs being widely used in
data centers and AI applications.
2. Technological Superiority: Nvidia’s GPUs are known for their performance, efficiency,
and advanced features like ray tracing and AI acceleration, giving the company a
competitive edge in various markets.
3. Diversification: Nvidia has successfully diversified its business beyond gaming into
areas such as data centers, automotive, and professional visualization, reducing its
reliance on any single market segment.
4. Strong Financial Performance: Nvidia has consistently delivered strong financial
results, with robust revenue growth and profitability, providing resources for
investments in R&D and strategic initiatives.

WEAKNESSES:

1. Dependency on GPU Market: While Nvidia has diversified its business, it remains
heavily reliant on the GPU market, exposing the company to risks associated with
fluctuations in demand or competitive pressures.
2. High R&D Costs: Developing cutting-edge GPU technologies requires substantial
investments in research and development, which could impact Nvidia’s profitability if
these investments do not yield expected results.
3. Regulatory Challenges: Nvidia’s business operations are subject to various
regulations and compliance requirements, which could pose challenges or
constraints on its activities in certain regions or markets.

OPPORTUNITIES:

1. Data Center Growth: Continued growth in demand for data center and AI-related
services presents opportunities for Nvidia to expand its data center GPU business
and develop specialized solutions for AI workloads.
2. Autonomous Vehicles: The adoption of autonomous vehicles and advanced driver-
assistance systems (ADAS) creates opportunities for Nvidia to provide GPU
solutions for automotive applications, leveraging its expertise in AI and computer
vision.
3. AI and Edge Computing: The proliferation of AI and edge computing technologies
offers opportunities for Nvidia to develop GPUs optimized for edge devices and IoT
applications, catering to the growing demand for AI inference at the network edge.
4. Gaming Market Expansion: Nvidia can further capitalize on the growing gaming
market by introducing new products and services targeted at different segments of
gamers, including casual gamers, esports enthusiasts, and VR users.

THREATS:

1. Competition: Nvidia faces intense competition from rivals like AMD, Intel, and
emerging players in various markets, which could impact its market share, pricing
power, and profitability.
2. Technological Disruption: Rapid advancements in technology or shifts in consumer
preferences could render Nvidia’s products obsolete or less competitive,
necessitating continuous innovation and adaptation.
3. Supply Chain Risks: Disruptions in the global supply chain, such as shortages of
critical components or geopolitical tensions affecting manufacturing, could impact
Nvidia’s ability to meet demand and deliver products to customers.
4. Regulatory Risks: Regulatory changes or legal challenges related to antitrust,
intellectual property, or data privacy could adversely affect Nvidia’s business
operations and financial performance, especially in highly regulated markets.

SWOT TABLE SUMMARISED

Strengths
AMD:
1. Strong reputation for innovation in the semiconductor industry
2. Diverse product portfolio spanning CPUs, GPUs, and semi-custom solutions
3. Strong partnerships with Microsoft, Sony and Google
NVIDIA:
1. Leader in the GPU market
2. Diversification beyond GPUs into data center solutions, autonomous vehicles and
edge computing
3. Strategic partnerships with Microsoft, Google and Amazon

Weaknesses
AMD:
1. Limited market presence compared to Intel (its biggest competitor)
2. Dependence on third-party foundries like TSMC for semiconductor manufacturing
NVIDIA:
1. Market fluctuation risks
2. Changes in consumer demand
3. Supply chain constraints related to GPU production

Opportunities
AMD:
1. Growing demand for high-performance computing solutions
2. Expansion into new markets like 5G infrastructure
3. Leveraging the increasing trend towards remote work and online services
NVIDIA:
1. Continued expansion of AI across the healthcare, finance and manufacturing
industries
2. Growth in cloud gaming services and VR applications

Threats
AMD:
1. Intense competition with Nvidia and Intel
2. Rapid technological advancement may render existing products obsolete
3. Global economic conditions, geopolitical risks and trade tensions
NVIDIA:
1. Potential disruption from alternative computing architectures like FPGAs and ASICs
2. Regulatory scrutiny and antitrust concerns

Table 11: SWOT Analysis

CHAPTER – 9

CONCLUSION

Fig. 24

This chapter will present the conclusions drawn from my study.

Comparative analysis of AMD and Nvidia:

Advanced Micro Devices (AMD) and Nvidia are two semiconductor giants battling for
supremacy in the high-growth markets of data center, artificial intelligence (AI), and
gaming. As we look ahead to 2024, both companies are well-positioned to benefit from
the exponential rise of AI applications, but their paths to growth will differ. This
comparative analysis will examine the financial metrics, competitive advantages, risks,
and analyst opinions for AMD and Nvidia.

Financial Data and Revenue Growth

Nvidia is projected to achieve explosive revenue growth in fiscal 2024 of 126% to $58.1
billion, driven by the massive adoption of its AI GPUs in data centers. Analysts expect
another strong year in fiscal 2025 with 91% growth to $116 billion as supply constraints
ease and Nvidia captures more of the expanding AI opportunity.

In contrast, AMD is expected to grow revenue at a more modest 10.3% in 2023 to $25
billion, rebounding from a 3.9% decline in 2022. However, analysts model an
acceleration to 21.9% growth in 2024 to $30.5 billion as AMD's latest EPYC server CPUs
gain share and its data center GPU business ramps up.

While Nvidia's growth is eye-popping, AMD starts from a comparable revenue base:
$23.6 billion in 2022 versus Nvidia's $26.9 billion. Still, Nvidia's growth trajectory
positions it to pull far ahead of AMD in total revenue by 2024.
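The compounding arithmetic behind these projections is simple to check. In the sketch
below, the revenue bases (in $ billions) are inferred from the cited endpoints, and the
figures themselves are analyst estimates rather than audited results.

    # One year of growth: base revenue times (1 + growth rate), in $ billions.
    def project(base, rate):
        return base * (1 + rate)

    print(f"Nvidia FY2024: ${project(25.7, 1.26):.1f}B")   # +126%  -> ~$58.1B
    print(f"AMD 2023:      ${project(22.7, 0.103):.1f}B")  # +10.3% -> ~$25.0B
    print(f"AMD 2024:      ${project(25.0, 0.219):.1f}B")  # +21.9% -> ~$30.5B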

Risks and Uncertainty

The AI opportunity for both Nvidia and AMD is immense but still nascent. Forecasting
growth trajectories in such a dynamic market is challenging. If enterprise adoption of AI

is slower than expected or if hyperscalers shift workloads to in-house chips, it would
impact growth estimates.

Geopolitical tensions around AI and potential export restrictions are another risk factor.
Nvidia also faces risks around its gaming business, which can be volatile depending on
crypto demand and chip shortages, while AMD has exposure to the cyclical PC market.

Which is better – AMD or NVIDIA?

In conclusion, while both AMD and Nvidia stand to benefit handsomely from the AI
megatrend, Nvidia appears to have the edge based on its current market
leadership, software advantages, and explosive near-term growth trajectory. AMD
should still be able to carve out a profitable position in AI inference and continue share
gains in server CPUs. However, catching up to Nvidia in AI training will be an uphill
battle. Overall, both AMD and Nvidia are at the center of one of the most transformative
technologies of our time, providing a long runway for growth.

Future of AI in the upcoming years

According to a survey by McKinsey, 63% of companies that adopted AI into their
operations in 2023 reported revenue increases.

Artificial intelligence in business operations is expected to double the efficiency
of the workforce and boost profitability by an average of 38% by 2035, according
to Accenture's prediction.

Gartner has revealed in a recent report that companies incorporating AI are projected to
have twice the market share and 10 times more efficiency than their competitors in 2024.

In 2024, as per a study by IBM, it is estimated that businesses will interact with their
customers more through AI-powered communication channels than through human-led
efforts.

CHAPTER – 10

BIBLIOGRAPHY

Fig. 25

• https://en.wikipedia.org/wiki/AMD
• https://en.wikipedia.org/wiki/Nvidia
• https://www.nasdaq.com/
• https://finance.yahoo.com/
• https://www.forbes.com/
• https://www.techtarget.com/
• https://pianalytix.com/

APPENDIX
(Attached Copy of Questionnaire)

QUESTIONNAIRE

Kindly fill in the details given in the questionnaire, as required by the intern for preparing
this project.

Name of the respondent:

a. Are you familiar with the concept of artificial intelligence?


() Yes
() No
() Maybe

b. In which areas do you think AI can have the biggest impact? (select all that apply)
()
()
()
()
()
()

c. Which of the following AI technologies have you used? (Select all that apply)
() Machine Learning
() Natural Language Processing (NLP)
() Computer Vision
() Robotics
() Chatbots
() Virtual Assistants

d. Do you think AI can improve the learning experience for students?


() Yes
() No
() Maybe

e. Which of the following areas do you think AI can benefit students? (Select all that apply)
() Personalized Learning
() Automated Grading
() Tutoring Support
() Career Guidance
() Educational Content Generation

f. Would you consider pursuing a career in AI or related fields?


() Yes
() No
() Maybe

g. Do you think AI will replace human jobs in the future?


() Yes
() No
() Maybe

h. How did you come to know about AMD and NVIDIA?


() Newspaper/Magazine
() Digital/Social Media
() Friends/Word of Mouth
() Tele Calling

i. Are you following AMD and NVIDIA on social media platforms?


() Yes
() No

j. Would you like to recommend AMD and NVIDIA to friends and family?
() Yes
() No

